Migration of TensorFlow model KServe REST test case UI -> API
Raghul-M committed Jan 17, 2025
1 parent 239f958 commit d5c39c3
Showing 1 changed file with 46 additions and 0 deletions.
@@ -43,6 +43,9 @@ ${KSERVE_RUNTIME_REST_NAME}= triton-kserve-runtime
${PYTORCH_MODEL_NAME}= resnet50
${INFERENCE_REST_INPUT_PYTORCH}= @tests/Resources/Files/triton/kserve-triton-resnet-rest-input.json
${EXPECTED_INFERENCE_REST_OUTPUT_FILE_PYTORCH}= tests/Resources/Files/triton/kserve-triton-resnet-rest-output.json
${TENSORFLOW_MODEL_NAME}= inceptiongraphdef
${INFERENCE_REST_INPUT_TENSORFLOW}= @tests/Resources/Files/triton/kserve-triton-tensorflow-rest-input.json
${EXPECTED_INFERENCE_REST_OUTPUT_FILE_TENSORFLOW}= tests/Resources/Files/triton/kserve-triton-tensorflow-rest-output.json

Check warning (Code scanning / Robocop): Line is too long (126/120)
${PATTERN}= https:\/\/([^\/:]+)
${PROTOBUFF_FILE}= tests/Resources/Files/triton/grpc_predict_v2.proto
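
For context: the REST input and expected-output files referenced by these variables follow the KServe v2 inference protocol. A minimal Python sketch of that payload shape (tensor names, shapes, and values here are illustrative placeholders, not the actual contents of the repo files):

# Minimal sketch of a KServe v2 REST inference request body.
# The tensor name/shape/datatype are placeholders; the real values live in
# kserve-triton-tensorflow-rest-input.json.
v2_request = {
    "inputs": [
        {
            "name": "input",            # placeholder tensor name
            "shape": [1, 299, 299, 3],  # placeholder shape
            "datatype": "FP32",
            "data": [0.0],              # flattened tensor values
        }
    ]
}

# Responses wrap the results in the same envelope:
v2_response = {
    "model_name": "inceptiongraphdef",
    "outputs": [
        {"name": "output", "shape": [1, 1001], "datatype": "FP32", "data": [0.0]}
    ],
}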

@@ -256,6 +259,49 @@ Test Onnx Model Grpc Inference Via API (Triton on Kserve) # robocop: off=too-long-test-case
... AND
... Run Keyword If "${KSERVE_MODE}"=="RawDeployment" Terminate Process triton-process kill=true


Test Tensorflow Model Rest Inference Via API (Triton on Kserve) # robocop: off=too-long-test-case

Check warning (Code scanning / Robocop): Invalid number of empty lines between test cases (2/1)

Check warning (Code scanning / Robocop): Test case 'Test Tensorflow Model Rest Inference Via API (Triton on Kserve)' has too many keywords inside (13/10)
[Documentation] Test the deployment of a TensorFlow model in KServe using Triton
[Tags] Tier2 RHOAIENG-16910 RunThisTest
Setup Test Variables model_name=${TENSORFLOW_MODEL_NAME} use_pvc=${FALSE} use_gpu=${FALSE}

Check warning (Code scanning / Robocop): Line is too long (123/120)
... kserve_mode=${KSERVE_MODE} model_path=triton/model_repository/
Log ${TENSORFLOW_MODEL_NAME}
Set Project And Runtime runtime=${KSERVE_RUNTIME_REST_NAME} protocol=${PROTOCOL} namespace=${test_namespace}
... download_in_pvc=${DOWNLOAD_IN_PVC} model_name=${TENSORFLOW_MODEL_NAME}
... storage_size=100Mi memory_request=100Mi
${requests}= Create Dictionary memory=1Gi

Check notice (Code scanning / Robocop): Create Dictionary can be replaced with VAR
Compile Inference Service YAML isvc_name=${TENSORFLOW_MODEL_NAME}
... sa_name=models-bucket-sa
... model_storage_uri=${storage_uri}
... model_format=tensorflow serving_runtime=${KSERVE_RUNTIME_REST_NAME}
... version="2"
... limits_dict=${limits} requests_dict=${requests} kserve_mode=${KSERVE_MODE}
Deploy Model Via CLI isvc_filepath=${INFERENCESERVICE_FILLED_FILEPATH}
... namespace=${test_namespace}
# File is no longer needed after applying
Remove File ${INFERENCESERVICE_FILLED_FILEPATH}
Wait For Pods To Be Ready label_selector=serving.kserve.io/inferenceservice=${TENSORFLOW_MODEL_NAME}
... namespace=${test_namespace}
${pod_name}= Get Pod Name namespace=${test_namespace}
... label_selector=serving.kserve.io/inferenceservice=${TENSORFLOW_MODEL_NAME}
${service_port}= Extract Service Port service_name=${TENSORFLOW_MODEL_NAME}-predictor protocol=TCP
... namespace=${test_namespace}
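# In RawDeployment mode the predictor is reached via a local port-forward rather than an external route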
IF "${KSERVE_MODE}"=="RawDeployment"
Start Port-forwarding namespace=${test_namespace} pod_name=${pod_name} local_port=${service_port}
... remote_port=${service_port} process_alias=triton-process
END
${EXPECTED_INFERENCE_REST_OUTPUT_TENSORFLOW}= Load Json File
... file_path=${EXPECTED_INFERENCE_REST_OUTPUT_FILE_TENSORFLOW} as_string=${TRUE}
Verify Model Inference With Retries model_name=${TENSORFLOW_MODEL_NAME} inference_input=${INFERENCE_REST_INPUT_TENSORFLOW}

Check warning (Code scanning / Robocop): Line is too long (131/120)
... expected_inference_output=${EXPECTED_INFERENCE_REST_OUTPUT_TENSORFLOW} project_title=${test_namespace}
... deployment_mode=Cli kserve_mode=${KSERVE_MODE} service_port=${service_port}
... end_point=/v2/models/${model_name}/infer retries=3
[Teardown] Run Keywords
... Clean Up Test Project test_ns=${test_namespace}
... isvc_names=${models_names} wait_prj_deletion=${FALSE} kserve_mode=${KSERVE_MODE}
... AND
... Run Keyword If "${KSERVE_MODE}"=="RawDeployment" Terminate Process triton-process kill=true
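
Outside the Robot suite, the request this test ultimately issues is a single KServe v2 infer call. A minimal Python sketch, assuming the predictor was port-forwarded to localhost as above (the host, port, and the exact-match comparison are simplifying assumptions; the suite's Verify Model Inference With Retries keyword handles retries and the real comparison):

import json
import requests

MODEL_NAME = "inceptiongraphdef"
BASE_URL = "http://localhost:8080"  # assumed local port-forward target

# Load the same request body the Robot test sends
with open("tests/Resources/Files/triton/kserve-triton-tensorflow-rest-input.json") as f:
    payload = json.load(f)

# POST to the v2 infer endpoint exercised by the test
resp = requests.post(f"{BASE_URL}/v2/models/{MODEL_NAME}/infer", json=payload, timeout=30)
resp.raise_for_status()

# Compare against the recorded expected output (exact match is a simplification)
with open("tests/Resources/Files/triton/kserve-triton-tensorflow-rest-output.json") as f:
    expected = json.load(f)

assert resp.json() == expected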

*** Keywords ***
Suite Setup
[Documentation] Suite setup keyword
