I am trying vlm_ptq by following the README in the vlm_ptq folder. When I run `scripts/huggingface_example.sh --type llava --model llava-1.5-7b-hf --quant fp8 --tp 8`, the following error is reported:

hf_ptq.py: error: unrecognized arguments: --deployment=
I also tried hard-coding DEPLOYMENT="tensorrt_llm" in huggingface_example.sh, but the error persists:
hf_ptq.py: error: unrecognized arguments: --deployment=tensorrt-llm
Is this a bug in llm_ptq or in huggingface_example.sh?
I am using modelopt 0.17.0, installed with `pip install "nvidia-modelopt[all]" --extra-index-url https://pypi.nvidia.com`.
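For context, this error message is what Python's argparse emits when a script is passed a flag its parser never defined, which suggests a version mismatch between the shell script and hf_ptq.py. A minimal sketch reproducing the behavior (the parser and `--qformat` flag here are hypothetical stand-ins, not the real hf_ptq.py arguments):

```python
import argparse

# Hypothetical parser standing in for hf_ptq.py's real one: it defines
# some flags, but not --deployment.
parser = argparse.ArgumentParser(prog="hf_ptq.py")
parser.add_argument("--qformat")

try:
    # Passing an undefined flag makes argparse print
    # "hf_ptq.py: error: unrecognized arguments: --deployment=tensorrt-llm"
    # to stderr and exit with status 2 (raised as SystemExit).
    parser.parse_args(["--qformat=fp8", "--deployment=tensorrt-llm"])
except SystemExit:
    print("argparse rejected the unknown --deployment flag")
```

This is consistent with huggingface_example.sh passing `--deployment=...` to a version of hf_ptq.py whose argument parser no longer (or does not yet) declare that option.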