
400 Client Error on Phenom-Beta Inference API endpoint #10

Open
aluettringhaus opened this issue Apr 18, 2024 · 2 comments

Comments

@aluettringhaus

Description
I am encountering a 400 Bad Request error when attempting to use the API's inference endpoint. However, uploading and deleting assets are working correctly without any issues. This problem only arises during the inference operation.

Steps to Reproduce

  1. Upload an asset (asset uploads successfully).
  2. Attempt to run inference on the uploaded asset.
  3. Receive a 400 Bad Request error.

Expected Behavior
The inference operation should process the asset and return a successful response with the inference results.

Actual Behavior
The API returns a 400 Bad Request error, and no inference results are received.

Additional Information
Endpoint: https://api.nvcf.nvidia.com/v2/nvcf/exec/functions/7db32b36-ec04-43a6-a78f-1d8296accd8d/versions/3d73b252-008d-4469-b4c3-b25b9cbec654
Request Method: POST
Payload Used: { "asset_id": "f2af3e59-aea2-49b8-a567-98ab4e226410" }
Error Message: Bad Request
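For context, the failing request as described above can be reproduced with a minimal stdlib sketch. The `Authorization` header and its Bearer-token format are assumptions (consult the NVCF docs for the actual auth scheme); the endpoint URL and payload are copied verbatim from this report.

```python
import json
import urllib.request

NVCF_URL = (
    "https://api.nvcf.nvidia.com/v2/nvcf/exec/functions/"
    "7db32b36-ec04-43a6-a78f-1d8296accd8d/versions/"
    "3d73b252-008d-4469-b4c3-b25b9cbec654"
)

def build_inference_request(asset_id: str, api_key: str) -> urllib.request.Request:
    """Build (but do not send) the POST request for the inference endpoint."""
    payload = json.dumps({"asset_id": asset_id}).encode("utf-8")
    return urllib.request.Request(
        NVCF_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # assumed auth scheme
        },
        method="POST",
    )

req = build_inference_request("f2af3e59-aea2-49b8-a567-98ab4e226410", "<API_KEY>")
# urllib.request.urlopen(req) would send it; this currently fails with HTTP 400.
```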

Attempts to Resolve
Verified that the asset exists and is accessible (it can be reached via the delete endpoint).
Ensured that the request payload is structured correctly according to the documentation.
Checked API documentation for any recent changes (https://developer.nvidia.com/docs/bionemo-service/phenom-beta.html).

Request
Could you please investigate this issue? The request appears to be structured correctly, and other endpoints are functioning as expected. I would greatly appreciate any guidance or corrections to my request.

@kian-kd
Collaborator

kian-kd commented Apr 22, 2024

Hi @aluettringhaus , thanks for reaching out! Currently the public endpoint is very particular: it requires that the input TIFFs be rectangular, with a height and width that are each exactly divisible by 256. We did not want to include logic that clips the edges of the images behind the scenes, and instead leave that to the user. The error may not be surfacing correctly at the moment. So if you don't mind, please try re-uploading your images after trimming them, ensuring that the height and width of each input image are evenly divisible by 256.
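The trimming described above can be sketched as follows, assuming the image is loaded as a (channels, height, width) NumPy array; the function name is illustrative, not part of the service.

```python
import numpy as np

def trim_to_multiple(img: np.ndarray, block: int = 256) -> np.ndarray:
    """Crop trailing rows/columns so height and width are multiples of `block`.

    Assumes a (channels, height, width) array; adjust indexing for other layouts.
    """
    _, h, w = img.shape
    return img[:, : h - h % block, : w - w % block]

# Example: a 4-channel image whose dimensions are not multiples of 256
img = np.zeros((4, 1545, 2053), dtype=np.uint8)
trimmed = trim_to_multiple(img)
print(trimmed.shape)  # (4, 1536, 2048)
```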

@aluettringhaus
Author

Hi @kian-kd , thanks for getting back to me! I have double-checked the input images. They are multi-channel images with a shape of (4, 1536, 2048), so the height and width are exactly divisible by 256. Additionally, I have tried the inference endpoint with both LZW-compressed and uncompressed 8-bit TIFF images. Are there any other requirements for the TIFFs?
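A quick sanity check confirms the reported shape satisfies the divisibility constraint described in the previous comment (the helper name below is illustrative):

```python
def dims_ok(height: int, width: int, block: int = 256) -> bool:
    """Check that height and width are each evenly divisible by `block`."""
    return height % block == 0 and width % block == 0

print(dims_ok(1536, 2048))  # True: 1536 = 6 * 256, 2048 = 8 * 256
print(dims_ok(1536, 2050))  # False: 2050 is not a multiple of 256
```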
