I have tried to convert a Llama 2 model from .gguf to .bin:
```
~/llm_inferences/llama.cpp/models/meta$ ls
llama-2-7b.Q4_K_M.gguf

python3 export.py llama2_7b.bin --meta-llama /home/####/llm_inferences/llama.cpp/models
```
```
Traceback (most recent call last):
  File "/home/aadithya.bhat/llm_inferences/llama2.c/export.py", line 559, in <module>
    model = load_meta_model(args.meta_llama)
  File "/home/aadithya.bhat/llm_inferences/llama2.c/export.py", line 373, in load_meta_model
    with open(params_path) as f:
FileNotFoundError: [Errno 2] No such file or directory: '/home/aadithya.bhat/llm_inferences/llama.cpp/models/params.json'
```
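From the traceback, `load_meta_model` simply joins the `--meta-llama` directory with `params.json` and opens it, so that flag seems to expect Meta's original checkpoint layout (`params.json` plus `consolidated.*.pth` weight shards, if I read export.py correctly) rather than a folder of GGUF files. A quick sanity check, using the paths from my setup:

```
# the --meta-llama directory should hold Meta's original release files;
# both listings fail here because only the .gguf file is present
# (consolidated.*.pth is my assumption about what export.py reads)
ls /home/####/llm_inferences/llama.cpp/models/params.json
ls /home/####/llm_inferences/llama.cpp/models/consolidated.*.pth
```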
I downloaded this model from https://huggingface.co/TheBloke/Llama-2-7B-GGUF; it is the file whose name ends in Q4_K_M.gguf.
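Since GGUF is llama.cpp's quantized format, export.py presumably cannot read it directly, and the conversion would have to start from the original Meta weights or a Hugging Face-format checkpoint instead. A sketch, assuming the export.py in this repo exposes an `--hf` option and that the checkpoint path below is hypothetical:

```
# hypothetical path; download e.g. meta-llama/Llama-2-7b-hf first
# (the --hf flag is my assumption about this version of export.py)
python3 export.py llama2_7b.bin --hf /path/to/Llama-2-7b-hf
```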