I was told I could use these bindings with the VICUNA model #9
Comments
So I installed VICUNA with this repo: https://github.com/mps256/autovicuna — it puts everything in C:/VICUNA. You can get the model from https://huggingface.co/eachadea/ggml-vicuna-13b-4bit/blob/main/ggml-vicuna-13b-4bit-rev1.bin; the "...rev1.bin" file has some fixes, but it isn't included in mps256/autovicuna. I guess you can get the 7B too, and it can run on the same llama files. Here are some screenshots running 13B VICUNA, and here 7B (it's faster and gives better answers than LLaMA/Alpaca, in my view). The output sometimes ends with:

[screenshot omitted]

Sample command:

[screenshot omitted]
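Since the sample command itself isn't shown above, here is a rough sketch of the idea: the quantized Vicuna file is just passed to a llama.cpp-style binary the same way a LLaMA file would be. The `-m`, `-p`, and `--temp` flags follow llama.cpp's conventions; the helper name, the binary name, and the paths are illustrative assumptions, not the thread's actual command.

```typescript
// Hypothetical sketch: build the argument list for a llama.cpp-style
// chat binary so it loads the quantized Vicuna weights instead of LLaMA.
// Flag names follow llama.cpp conventions; everything else is assumed.
function buildVicunaArgs(modelPath: string, prompt: string): string[] {
  return [
    "-m", modelPath,   // path to the quantized ggml model file
    "-p", prompt,      // initial prompt
    "--temp", "0.7",   // sampling temperature (example value)
  ];
}

const args = buildVicunaArgs(
  "C:/VICUNA/ggml-vicuna-13b-4bit-rev1.bin",
  "Hello, who are you?"
);
// These args could then be handed to e.g. child_process.spawn in Node.
```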
Refer to the source code — it's NOT supported yet (lines 15 to 33 in 429559f).
Yes, that's why I made one.
Nice 👍 Do you think it would be good for the community if your package were part of gpt4all-ts? Have you considered sending a PR to gpt4all-ts? Thanks.
Originally that's what I wanted, but gpt4all-ts's code would have become a mess if I stuffed Vicuna into it. What I pass to the constructor is the quantized file I have with me.
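To make the constructor idea concrete, here is a minimal sketch of what passing a quantized file to a binding's constructor could look like. The class name, option names, default thread count, and file path are all illustrative assumptions, not the package's real API.

```typescript
// Hypothetical binding sketch: the constructor just takes the path to a
// quantized ggml model file, so Vicuna weights can be dropped in the
// same way as LLaMA/Alpaca ones. Names here are illustrative only.
interface ModelOptions {
  modelPath: string; // quantized ggml file, e.g. a vicuna .bin
  threads?: number;  // CPU threads to use for inference
}

class VicunaBinding {
  readonly modelPath: string;
  readonly threads: number;

  constructor(opts: ModelOptions) {
    this.modelPath = opts.modelPath;
    this.threads = opts.threads ?? 4; // default thread count (assumption)
  }
}

const model = new VicunaBinding({
  modelPath: "C:/VICUNA/ggml-vicuna-7b-4bit.bin", // illustrative path
});
```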