OutputParserException: Failed to parse GradeDocuments from completion {"binary_score": {"title": "Binary Score", "description": "Documents are relevant to the question, 'yes' or 'no'", "type": "string"}}. Got: 1 validation error for GradeDocuments binary_score #22204
Replies: 3 comments
-
Here's how you can modify your code to use the `OutputFixingParser` so that parsing failures are automatically retried. Here is an example code snippet to illustrate this:

```python
from langchain_core.language_models import YourLanguageModel
from langchain_core.output_parsers import YourOriginalParser
from langchain.output_parsers.fix import OutputFixingParser
from langchain_core.prompts import YourPromptTemplate
from langchain_core.exceptions import OutputParserException

# Initialize your language model, original parser, and prompt
llm = YourLanguageModel()
original_parser = YourOriginalParser()
prompt = YourPromptTemplate()

# Create an OutputFixingParser instance
fixing_parser = OutputFixingParser.from_llm(
    llm=llm,
    parser=original_parser,
    prompt=prompt,
    max_retries=3,  # set the number of retries as needed
)

# Use the fixing parser to parse the raw completion
try:
    result = fixing_parser.parse(completion)
except OutputParserException as e:
    print(f"Failed to parse after retries: {e}")
```

This approach leverages the `OutputFixingParser`, which passes the original parser's error back to the LLM and asks it to correct the malformed output before giving up.
-
I am also getting the same error in a Jupyter notebook, at the following line: `for output in app.stream(inputs):`
-
Since Gemma 2 doesn't accept a system prompt, I changed `grade_prompt = ChatPromptTemplate.from_messages(...)`. Now it is working for me.
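The comment above doesn't show the modified template, but a common workaround for models that reject a `system` role is to fold the system instructions into the single user message. The sketch below is an assumption about that approach (not the commenter's exact code), shown with plain string formatting; with LangChain you would pass only a `("human", GRADER_TEMPLATE)` tuple to `ChatPromptTemplate.from_messages`.

```python
# Gemma 2 served via Ollama rejects a separate "system" role, so one
# workaround is to fold the grader instructions into the user message itself.
GRADER_TEMPLATE = (
    "You are a grader assessing relevance of a retrieved document to a user "
    "question. If the document contains keyword(s) or semantic meaning "
    "related to the user question, grade it as relevant. "
    "Give a binary score 'yes' or 'no'.\n\n"
    "Retrieved document:\n\n{document}\n\nUser question: {question}"
)

def build_grader_prompt(document: str, question: str) -> str:
    """Render the combined instructions + inputs as one user-role message."""
    return GRADER_TEMPLATE.format(document=document, question=question)
```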
-
Checked other resources
Commit to Help
Example Code
Description
While executing the above code, I am getting the below error:

```
OutputParserException: Failed to parse GradeDocuments from completion {"binary_score": {"title": "Binary Score", "description": "Documents are relevant to the question, 'yes' or 'no'", "type": "string"}}. Got: 1 validation error for GradeDocuments
binary_score
str type expected (type=type_error.str)
```
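For context, the Pydantic error at the end of that traceback means the model echoed back the JSON *schema* for `binary_score` (a dict) instead of a plain `'yes'`/`'no'` string. A minimal reproduction, assuming a `GradeDocuments` model shaped like the one in the question:

```python
from pydantic import BaseModel, ValidationError

class GradeDocuments(BaseModel):
    """Binary relevance grade, mirroring the model from the question."""
    binary_score: str  # "Documents are relevant to the question, 'yes' or 'no'"

# A well-formed completion validates fine.
ok = GradeDocuments(binary_score="yes")

# But if the LLM returns the field's JSON schema instead of a value,
# validation fails with a string-type error (worded "str type expected"
# under Pydantic v1, as in the traceback above).
try:
    GradeDocuments(binary_score={"title": "Binary Score", "type": "string"})
except ValidationError as e:
    print("validation failed:", e.errors()[0]["type"])
```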
Below is my retrieval grader chain:
```
ChatPromptTemplate(input_variables=['document', 'question'], messages=[SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], template="You are a grader assessing relevance of a retrieved document to a user question. \n \n It does not need to be a stringent test. The goal is to filter out erroneous retrievals. \n\n If the document contains keyword(s) or semantic meaning related to the user question, grade it as relevant. \n\n Give a binary score 'yes' or 'no' score to indicate whether the document is relevant to the question.")), HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['document', 'question'], template='Retrieved document: \n\n {document} \n\n User question: {question}'))])
| RunnableBinding(bound=OllamaFunctions(model='gemma:2b', temperature=0.0), kwargs={'functions': [<class 'main.GradeDocuments'>], 'format': 'json'})
| RunnableLambda(parse_response)
| PydanticOutputParser(pydantic_object=<class 'main.GradeDocuments'>)
```
System Info
System Information
Package Information
Packages not installed (Not Necessarily a Problem)
The following packages were not found: