Can't find custom model when running a query #436
Replies: 3 comments
-
Same deal
-
h2oGPT supports custom models just fine, as long as one knows the prompt_type associated with the model. I recommend using WizardLM uncensored 7B or 13B on CPU via TheBloke's quantized files (https://github.com/h2oai/h2ogpt#cpu) or on GPU directly (any HF model is supported, as long as prompt_type is correctly set).
-
I realized what the issue was: in the env config, the MODEL_TYPE for this specific model is not GPT4All.
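To make the fix above concrete: ggml-vic13b-q5_1.bin is a llama-family (Vicuna) ggml file, and in privateGPT-style setups the GPT4All model type invokes a GPT-J loader, which rejects llama weights with "bad magic". A hedged sketch of a corrected .env, reusing the paths from the question and assuming the project supports MODEL_TYPE=LlamaCpp (check your project's README for the exact accepted values):

```
PERSIST_DIRECTORY=db
MODEL_TYPE=LlamaCpp
MODEL_PATH=models/ggml-vic13b-q5_1.bin
EMBEDDINGS_MODEL_NAME=all-MiniLM-L6-v2
MODEL_N_CTX=100
```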
-
When running a query I get this error:
gptj_model_load: loading model from 'models/ggml-vic13b-q5_1.bin' - please wait ...
gptj_model_load: invalid model file 'models/ggml-vic13b-q5_1.bin' (bad magic)
GPT-J ERROR: failed to load model from models/ggml-vic13b-q5_1.bin
Ingesting data works and the db folder is created.
I downloaded a new model from Hugging Face, placed the raw model file in the models folder, and updated the .env file with that config.
Env config:
PERSIST_DIRECTORY=db
MODEL_TYPE=GPT4All
MODEL_PATH=models/ggml-vic13b-q5_1.bin
EMBEDDINGS_MODEL_NAME=all-MiniLM-L6-v2
MODEL_N_CTX=100
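A "bad magic" error means the first four bytes of the model file are not what the loader expects, typically because a llama-format ggml file was handed to the GPT-J loader (as here), or because the download is truncated or corrupt. A quick way to see what you actually have is to inspect the file header. This is an illustrative sketch, not part of any project above; the magic table covers common ggml-era formats and is not exhaustive:

```python
import struct

# Known magic numbers at the start of ggml-family model files,
# read as a little-endian uint32. Illustrative, not exhaustive.
KNOWN_MAGICS = {
    0x67676D6C: "ggml (unversioned)",
    0x67676D66: "ggmf (versioned)",
    0x67676A74: "ggjt (mmap-able)",
    0x46554747: "gguf (newer format)",
}

def identify_model_file(path):
    """Return a human-readable name for the file's magic, or None if unknown."""
    with open(path, "rb") as f:
        header = f.read(4)
    if len(header) < 4:
        return None  # file too short to even hold a magic number
    (magic,) = struct.unpack("<I", header)
    return KNOWN_MAGICS.get(magic)
```

If this prints None for your file, the download is likely corrupt; if it prints a format your configured MODEL_TYPE cannot read, the fix is to change MODEL_TYPE (or re-download a model in the expected format).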