Problem Description
```
2024-10-20 21:15:31.980 | ERROR | chatchat.server.utils:wrap_done:46 - ValueError: Caught exception: An output parsing error occurred. In order to pass this error back to the agent and have it try again, pass handle_parsing_errors=True to the AgentExecutor. This is the error: Could not parse LLM output: 我很高兴地看到pt 工程技术的兴趣!在本书中,我们将探讨可用于 ChatGPT 的各种 Prompt 工程技术,包括指令提示技术。
```

This error shows up after a few turns, after a complex question, or whenever chatting with an agent enabled. What do I need to do here? Do I need to customize the output parser? xxx (followed by more of the answer). The reply actually does answer the question, but the error above is still logged.
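For context, the error means the agent's output parser could not match the model's free-form reply against the action format it expects, and in LangChain `handle_parsing_errors=True` is a constructor argument of `AgentExecutor`, not a request body field. Conceptually, with that flag set, the executor catches the parse failure and feeds the error text back to the model for another attempt. A minimal standalone sketch of that retry loop (the names `parse_action` and `run_agent_step` are hypothetical illustrations, not Langchain-Chatchat internals):

```python
class OutputParsingError(Exception):
    """Raised when the model's reply does not match the expected format."""

def parse_action(text: str) -> str:
    # Expect a line like "Action: <tool name>"; anything else is a parse failure.
    for line in text.splitlines():
        if line.startswith("Action:"):
            return line.split(":", 1)[1].strip()
    raise OutputParsingError(f"Could not parse LLM output: {text!r}")

def run_agent_step(llm, prompt: str, handle_parsing_errors: bool = False,
                   max_retries: int = 2) -> str:
    """Mimic AgentExecutor: on a parse error, either raise (flag off) or
    append the error to the prompt and ask the model to retry (flag on)."""
    for _ in range(max_retries + 1):
        reply = llm(prompt)
        try:
            return parse_action(reply)
        except OutputParsingError as err:
            if not handle_parsing_errors:
                raise
            prompt = f"{prompt}\nObservation: {err}\nRespond in the required format."
    raise OutputParsingError("giving up after retries")

# Toy model: answers free-form first, then correctly formatted on the retry.
replies = iter(["很高兴看到你的兴趣!", "Action: search_local_knowledgebase"])
print(run_agent_step(lambda p: next(replies), "question", handle_parsing_errors=True))
# → search_local_knowledgebase
```

With `handle_parsing_errors=False` the first free-form reply would escalate as an exception, which is exactly the traceback shown above; the fix therefore has to happen where the server constructs its `AgentExecutor`, not in the client request.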
The demo code below is the code provided in the documentation:
```python
import requests

base_url = "http://127.0.0.1:7861/chat"
data = {
    "messages": [
        {"role": "user", "content": "如何提问以获得高质量答案"},
    ],
    "tool_choice": "search_local_knowledgebase",
    "handle_parsing_errors": True,
    "stream": True,
}

response = requests.post(f"{base_url}/chat/completions", json=data, stream=True)
for line in response.iter_content(None, decode_unicode=True):
    print(line)
```
The models currently in use are llama3.1 + mxbai-embed-large, served via Ollama.