Tried converting a few models via https://huggingface.co/spaces/mlc-ai/MLC-Weight-Conversion and got the same or similar errors. This is broken! The run below should work as-is: it follows your repo and uses a model you support:
mistralai/Mistral-7B-Instruct-v0.3
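For reference, this was the standard `mlc_llm convert_weight` flow. A minimal sketch of an equivalent invocation follows (the quantization preset and output directory are assumptions, not copied from the actual run):

```python
# Hypothetical reconstruction of the conversion command. The model directory
# matches the logs below; the quantization preset and output path are assumptions.
import subprocess

subprocess.run(
    [
        "mlc_llm", "convert_weight",
        "dist/models/Mistral-7B-Instruct-v0.3",            # local copy of mistralai/Mistral-7B-Instruct-v0.3
        "--quantization", "q4f16_1",                       # assumed quantization preset
        "-o", "dist/Mistral-7B-Instruct-v0.3-q4f16_1-MLC", # assumed output directory
    ],
    check=True,
)
```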
Conversion Results:
[2025-01-17 12:06:05] INFO auto_config.py:116: Found model configuration: dist/models/Mistral-7B-Instruct-v0.3/config.json
[2025-01-17 12:06:07] INFO auto_device.py:79: Found device: cuda:0
[2025-01-17 12:06:08] INFO auto_device.py:88: Not found device: rocm:0
[2025-01-17 12:06:09] INFO auto_device.py:88: Not found device: metal:0
[2025-01-17 12:06:11] INFO auto_device.py:88: Not found device: vulkan:0
[2025-01-17 12:06:12] INFO auto_device.py:88: Not found device: opencl:0
[2025-01-17 12:06:12] INFO auto_device.py:35: Using device: cuda:0
[2025-01-17 12:06:12] INFO auto_weight.py:71: Finding weights in: dist/models/Mistral-7B-Instruct-v0.3
[2025-01-17 12:06:12] INFO auto_weight.py:137: Not found Huggingface PyTorch
[2025-01-17 12:06:12] INFO auto_weight.py:144: Found source weight format: huggingface-safetensor. Source configuration: dist/models/Mistral-7B-Instruct-v0.3/model.safetensors.index.json
[2025-01-17 12:06:12] INFO auto_weight.py:107: Using source weight configuration: dist/models/Mistral-7B-Instruct-v0.3/model.safetensors.index.json. Use --source to override.
[2025-01-17 12:06:12] INFO auto_weight.py:111: Using source weight format: huggingface-safetensor. Use --source-format to override.
[2025-01-17 12:06:12] INFO auto_config.py:154: Found model type: mistral. Use --model-type to override.
[2025-01-17 12:06:12] INFO mistral_model.py:59: context_window_size not found in config.json. Falling back to max_position_embeddings (32768)
[2025-01-17 12:06:12] INFO mistral_model.py:87: prefill_chunk_size defaults to 8192
0%|          | 0/195 [00:00<?, ?it/s]
[2025-01-17 12:06:14] INFO huggingface_loader.py:185: Loading HF parameters from: dist/models/Mistral-7B-Instruct-v0.3/model-00003-of-00003.safetensors
0%|          | 0/195 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/home/user/.pyenv/versions/3.11.11/bin/mlc_llm", line 8, in
sys.exit(main())
^^^^^^
File "/home/user/.pyenv/versions/3.11.11/lib/python3.11/site-packages/mlc_llm/main.py", line 37, in main
cli.main(sys.argv[2:])
File "/home/user/.pyenv/versions/3.11.11/lib/python3.11/site-packages/mlc_llm/cli/convert_weight.py", line 88, in main
convert_weight(
File "/home/user/.pyenv/versions/3.11.11/lib/python3.11/site-packages/mlc_llm/interface/convert_weight.py", line 181, in convert_weight
_convert_args(args)
File "/home/user/.pyenv/versions/3.11.11/lib/python3.11/site-packages/mlc_llm/interface/convert_weight.py", line 145, in _convert_args
tvmjs.dump_ndarray_cache(
File "/home/user/.pyenv/versions/3.11.11/lib/python3.11/site-packages/tvm/contrib/tvmjs.py", line 273, in dump_ndarray_cache
for k, origin_v in param_generator:
File "/home/user/.pyenv/versions/3.11.11/lib/python3.11/site-packages/mlc_llm/interface/convert_weight.py", line 129, in _param_generator
for name, param in loader.load(device=args.device, preshard_funcs=preshard_funcs):
File "/home/user/.pyenv/versions/3.11.11/lib/python3.11/site-packages/mlc_llm/loader/huggingface_loader.py", line 118, in load
param = self._load_mlc_param(mlc_name, device=device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.pyenv/versions/3.11.11/lib/python3.11/site-packages/mlc_llm/loader/huggingface_loader.py", line 150, in _load_mlc_param
self._load_file(path)
File "/home/user/.pyenv/versions/3.11.11/lib/python3.11/site-packages/mlc_llm/loader/huggingface_loader.py", line 189, in _load_file
for name, param in load_func(path):
File "/home/user/.pyenv/versions/3.11.11/lib/python3.11/site-packages/mlc_llm/loader/utils.py", line 59, in load_safetensor_shard
with safetensors.safe_open(path, framework="pt", device="cpu") as in_file:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.pyenv/versions/3.11.11/lib/python3.11/site-packages/torch/init.py", line 367, in
from torch._C import * # noqa: F403
^^^^^^^^^^^^^^^^^^^^^^
ImportError: /home/user/.pyenv/versions/3.11.11/lib/python3.11/site-packages/torch/lib/../../nvidia/cusparse/lib/libcusparse.so.12: undefined symbol: __nvJitLinkComplete_12_4, version libnvJitLink.so.12
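The failure is raised by `import torch` itself, before any MLC-LLM code runs, and the undefined symbol `__nvJitLinkComplete_12_4` is exported by libnvJitLink 12.4 and newer, so the usual suspect is a mismatch between the pip-installed `nvidia-cusparse-cu12` and `nvidia-nvjitlink-cu12` wheels. A minimal check, independent of mlc_llm (the package names below are the standard CUDA 12 wheel names and are an assumption about this environment):

```python
# Print the relevant wheel versions, then reproduce the bare failure.
# Assumes torch was installed from the CUDA 12 wheels, as the paths in the
# traceback (site-packages/nvidia/cusparse/...) suggest.
import importlib.metadata as md

for pkg in ("torch", "nvidia-cusparse-cu12", "nvidia-nvjitlink-cu12"):
    try:
        print(f"{pkg}: {md.version(pkg)}")
    except md.PackageNotFoundError:
        print(f"{pkg}: not installed")

# This import alone reproduces the same error seen in the mlc_llm run:
# ImportError: ... libcusparse.so.12: undefined symbol: __nvJitLinkComplete_12_4
import torch

print(torch.__version__, torch.version.cuda)
```

If the two nvidia-* wheels come from different CUDA minor versions (for example cuSPARSE from 12.4 but nvJitLink from 12.1), aligning them, or reinstalling torch so that all of its nvidia-* dependencies come from the same CUDA release, typically clears this kind of undefined-symbol error.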