convert NVILA with 0.16.0 #2706

Open · 2 of 4 tasks
dzy130120 opened this issue Jan 20, 2025 · 1 comment
Labels: bug (Something isn't working), Investigating, LLM API/Workflow, triaged (Issue has been triaged by maintainers)
dzy130120 commented Jan 20, 2025

System Info

I cannot convert the NVILA checkpoint with TensorRT-LLM 0.16.0 following examples/multimodal/README.md.

Who can help?

@ncomly-nvidia

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • My own task or dataset (give details below)

Reproduction

  1. Build the 0.16.0 TensorRT-LLM docker image and run it with make -C docker release_run.
  2. Follow examples/multimodal/README.md.
  3. Change export MODEL_NAME="vila1.5-3b" to export MODEL_NAME="NVILA-8B-Video".
  4. Run the checkpoint conversion (the failure can also be isolated outside this script; see the sketch after this list):

     python ../llama/convert_checkpoint.py \
         --model_dir tmp/hf_models/${MODEL_NAME} \
         --output_dir tmp/trt_models/${MODEL_NAME}/fp16/1-gpu \
         --dtype float16
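
The failure can be reproduced without convert_checkpoint.py at all, since it comes from transformers' AutoConfig lookup. A minimal sketch (the tmp/hf_models path mirrors the README and is an assumption about your layout; it also assumes the checkpoint directory has a top-level config.json):

```python
# Sketch: check whether transformers recognizes the checkpoint's model_type.
# MODEL_DIR is an assumed path; adjust to wherever NVILA-8B-Video was downloaded.
import json
from pathlib import Path

from transformers import CONFIG_MAPPING, AutoConfig

MODEL_DIR = "tmp/hf_models/NVILA-8B-Video"

# config.json declares the architecture via its "model_type" field.
model_type = json.loads((Path(MODEL_DIR) / "config.json").read_text())["model_type"]
print("model_type:", model_type)                    # 'llava_llama' for NVILA
print("registered:", model_type in CONFIG_MAPPING)  # False on a stock install

# If the type is unregistered, this raises the same ValueError as in the report:
AutoConfig.from_pretrained(MODEL_DIR)
```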

Expected behavior

Total time of reading and converting: 1.589 s
Total time of saving checkpoint: 5.007 s
Total time of converting checkpoints: 00:00:06

Actual behavior

--dtype float16
[TensorRT-LLM] TensorRT-LLM version: 0.16.0
0.16.0
[01/20/2025-12:06:37] [TRT-LLM] [W] AutoConfig cannot load the huggingface config.
Traceback (most recent call last):
  File "/usr/local/lib/python3.12/dist-packages/transformers/models/auto/configuration_auto.py", line 1034, in from_pretrained
    config_class = CONFIG_MAPPING[config_dict["model_type"]]
                   ~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/transformers/models/auto/configuration_auto.py", line 736, in __getitem__
    raise KeyError(key)
KeyError: 'llava_llama'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/app/tensorrt_llm/examples/multimodal/../llama/convert_checkpoint.py", line 581, in <module>
    main()
  File "/app/tensorrt_llm/examples/multimodal/../llama/convert_checkpoint.py", line 573, in main
    convert_and_save_hf(args)
  File "/app/tensorrt_llm/examples/multimodal/../llama/convert_checkpoint.py", line 514, in convert_and_save_hf
    execute(args.workers, [convert_and_save_rank] * world_size, args)
  File "/app/tensorrt_llm/examples/multimodal/../llama/convert_checkpoint.py", line 521, in execute
    f(args, rank)
  File "/app/tensorrt_llm/examples/multimodal/../llama/convert_checkpoint.py", line 496, in convert_and_save_rank
    llama = LLaMAForCausalLM.from_hugging_face(
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/tensorrt_llm/models/llama/model.py", line 393, in from_hugging_face
    config = LLaMAConfig.from_hugging_face(hf_config_or_dir,
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/tensorrt_llm/models/llama/config.py", line 108, in from_hugging_face
    hf_config = transformers.AutoConfig.from_pretrained(
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/transformers/models/auto/configuration_auto.py", line 1036, in from_pretrained
    raise ValueError(
ValueError: The checkpoint you are trying to load has model type llava_llama but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
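
The root cause is that a stock transformers install has no config class registered for the llava_llama model type, so AutoConfig cannot resolve it. One possible direction (a sketch, not a verified fix) is to register the VILA model classes with transformers before invoking convert_checkpoint.py; the llava.model import path below assumes the VILA repository (https://github.com/NVlabs/VILA) is installed and exposes these classes, which may differ by version:

```python
# Sketch of a possible workaround: register the custom model type so that
# AutoConfig.from_pretrained can resolve 'llava_llama'. The import below is an
# assumption about the VILA package layout, not a documented TensorRT-LLM step.
from transformers import AutoConfig, AutoModel

from llava.model import LlavaLlamaConfig, LlavaLlamaModel  # hypothetical path

AutoConfig.register("llava_llama", LlavaLlamaConfig)
AutoModel.register(LlavaLlamaConfig, LlavaLlamaModel)
```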

Additional notes

None.

dzy130120 added the bug (Something isn't working) label on Jan 20, 2025
nv-guomingz (Collaborator) commented

Hi @dzy130120, thanks for reporting this issue; we'll take a look at it.

github-actions bot added the triaged (Issue has been triaged by maintainers) and Investigating labels on Jan 21, 2025