privateGPT errors #725
advashishta started this conversation in General
Replies: 1 comment
-
What version of Python?
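For reference, a quick way to collect that information (a minimal sketch using only the standard library; run it in the same environment that launches privateGPT.py — the package names checked below are just the two that appear in the traceback):

```python
# Report the interpreter, OS, and relevant package versions for the bug report.
import sys
import platform
from importlib.metadata import version, PackageNotFoundError

print("Python:", sys.version.split()[0])
print("OS:", platform.platform())

# Guarded lookups, since either package may be missing in a given environment.
for pkg in ("langchain", "gpt4all"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed")
```

Pasting that output into the thread makes it much easier to tell whether the access violation is tied to a specific Python or gpt4all build.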
-
I have implemented the privateGPT project on a Windows Server 2012 machine. Everything runs fine until it reaches the "Enter a query:" prompt.
But when I enter a question, it responds with an error:
PS C:\wwwpython\liam> python privateGPT.py
Using embedded DuckDB with persistence: data will be stored in: db
Found model file at models/ggml-gpt4all-j-v1.3-groovy.bin
gptj_model_load: loading model from 'models/ggml-gpt4all-j-v1.3-groovy.bin' - please wait ...
gptj_model_load: n_vocab = 50400
gptj_model_load: n_ctx = 2048
gptj_model_load: n_embd = 4096
gptj_model_load: n_head = 16
gptj_model_load: n_layer = 28
gptj_model_load: n_rot = 64
gptj_model_load: f16 = 2
gptj_model_load: ggml ctx size = 5401.45 MB
gptj_model_load: kv self size = 896.00 MB
gptj_model_load: ................................... done
gptj_model_load: model size = 3609.38 MB / num tensors = 285
Enter a query: russian stock market status?
Traceback (most recent call last):
File "C:\wwwpython\liam\privateGPT.py", line 82, in <module>
main()
File "C:\wwwpython\liam\privateGPT.py", line 53, in main
File "C:\Python311\Lib\site-packages\langchain\chains\base.py", line 145, in call
raise e
File "C:\Python311\Lib\site-packages\langchain\chains\base.py", line 139, in call
self._call(inputs, run_manager=run_manager)
File "C:\Python311\Lib\site-packages\langchain\chains\retrieval_qa\base.py", line 120, in _call
answer = self.combine_documents_chain.run(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\langchain\chains\base.py", line 259, in run
return self(kwargs, callbacks=callbacks)[self.output_keys[0]]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\langchain\chains\base.py", line 145, in call
raise e
File "C:\Python311\Lib\site-packages\langchain\chains\base.py", line 139, in call
self._call(inputs, run_manager=run_manager)
File "C:\Python311\Lib\site-packages\langchain\chains\combine_documents\base.py", line 84, in _call
output, extra_return_dict = self.combine_docs(
^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\langchain\chains\combine_documents\stuff.py", line 87, in combine_docs
return self.llm_chain.predict(callbacks=callbacks, **inputs), {}
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\langchain\chains\llm.py", line 213, in predict
return self(kwargs, callbacks=callbacks)[self.output_key]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\langchain\chains\base.py", line 145, in call
raise e
File "C:\Python311\Lib\site-packages\langchain\chains\base.py", line 139, in call
self._call(inputs, run_manager=run_manager)
File "C:\Python311\Lib\site-packages\langchain\chains\llm.py", line 69, in _call
response = self.generate([inputs], run_manager=run_manager)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\langchain\chains\llm.py", line 79, in generate
return self.llm.generate_prompt(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\langchain\llms\base.py", line 138, in generate_prompt
return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\langchain\llms\base.py", line 201, in generate
raise e
File "C:\Python311\Lib\site-packages\langchain\llms\base.py", line 193, in generate
self._generate(
File "C:\Python311\Lib\site-packages\langchain\llms\base.py", line 488, in _generate
self._call(prompt, stop=stop, run_manager=run_manager, **kwargs)
File "C:\Python311\Lib\site-packages\langchain\llms\gpt4all.py", line 208, in _call
for token in self.client.generate(prompt, **params):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\gpt4all\gpt4all.py", line 178, in generate
return self.model.prompt_model(prompt, streaming=streaming, **generate_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\gpt4all\pyllmodel.py", line 232, in prompt_model
llmodel.llmodel_prompt(self.model,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: exception: access violation reading 0x000000E76BCC4000
Exception ignored in: <gpt4all.pyllmodel.DualStreamProcessor object at 0x000000E704B46290>
AttributeError: 'DualStreamProcessor' object has no attribute 'flush'
Please help.