Convert Bunny-v1.0-3B to GGUF #115
Comments
change
to

The config has already been modified. When I run the conversion I still get an error. How can I solve this?
Seems that it's because of recent updates of llama.cpp.
Sorry, I followed the instructions and used llama.cpp version b2636, but the conversion still failed. |
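For reference, pinning llama.cpp to that build tag before converting would look roughly like this (a sketch assuming a fresh clone; b2636 is the tag mentioned above, and the model path is an example, not the exact path used here):

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout b2636
pip install -r requirements.txt
python convert-hf-to-gguf.py /path/to/Bunny-v1_0-3B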
I tested the HF demo and found that the results of Bunny-v1.1-Llama-3-8B-V and Bunny-v1.0-3B are what I am looking for. However, I discovered that llama.cpp does not currently support S2-Wrapper, so I want to convert Bunny-v1.0-3B to GGUF for use on edge devices (I have tested Bunny-v1_0-4B.gguf and the results were not ideal).
To convert Bunny-v1_0-3B to GGUF, I followed the instructions on the GitHub page. However, when I execute the final step:
python ../../convert-hf-to-gguf.py Bunny-v1_0-3B
I encounter the error:
KeyError: "could not find any of: ['rms_norm_eps']"
along with several other fields reported missing from the config.
I think that the configs for Bunny-v1_0-3B and Bunny-v1_0-4B are different, which causes the error when loading the model.
Could you please provide the config.json for Bunny-v1_0-3B or a solution to this issue?
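Until an official config is available, one possible stopgap is to patch config.json by hand before running the converter. This is a minimal sketch, not a fix confirmed by the Bunny authors: the key name comes from the KeyError above, and the 1e-5 value is an assumption.

import json
from pathlib import Path

# Model directory passed to convert-hf-to-gguf.py in the command above.
config_path = Path("Bunny-v1_0-3B/config.json")
config = json.loads(config_path.read_text())

# The converter aborts with KeyError: "could not find any of: ['rms_norm_eps']".
# Supplying the key lets it proceed; 1e-5 is an assumed placeholder,
# NOT a value confirmed for Bunny-v1_0-3B.
config.setdefault("rms_norm_eps", 1e-5)

config_path.write_text(json.dumps(config, indent=2))
print("patched", config_path)

Even if the conversion then completes, the resulting GGUF may not behave correctly: as noted above, the missing keys suggest the converter is reading the config as a different architecture from the one Bunny-v1_0-3B actually uses.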