-
Hey @OSS-GR! I'm here to help you with any bug or question you have. Let's get to the bottom of this together!

To stream the response as structured output, you can combine `with_structured_output` with `.stream()`. Here is an example of how to do it:

```python
from typing import Annotated, List, TypedDict

from langchain_community.chat_models import ChatBedrock

class CitationSubModel(TypedDict):
    number: int
    id: str

class FinalAnswerModel(TypedDict):
    answer: Annotated[str, ..., "The answer to the user question using the citations"]
    citations: Annotated[List[CitationSubModel], ..., "A list of the numbering and the id references for the citations used to answer"]

llm = ChatBedrock(
    model_id="your_model_id",
    aws_access_key_id="your_access_key",
    aws_secret_access_key="your_secret_key",
    region_name="your_region",
    streaming=True,  # Enable streaming
)

structured_llm = llm.with_structured_output(FinalAnswerModel)

# `context`, `state`, `citation_instruction`, and `coding_instructions`
# come from your own pipeline.
for chunk in structured_llm.stream({"context": context, "task": state["task"], "citation": citation_instruction, "coding_instructions": coding_instructions}):
    print(chunk)
```

In this example, the model streams progressively more complete versions of `FinalAnswerModel` as they are generated. Additionally, you can pass extra options to `with_structured_output`:

```python
structured_llm = llm.with_structured_output(
    FinalAnswerModel,
    method="function_calling",  # or "json_mode"
    include_raw=True,  # Include the raw response along with the parsed output
)
```

This configuration allows you to handle the structured output more flexibly and avoid potential recursion errors. For more detailed examples and alternative methods, you can refer to the LangChain documentation on structured output. This should help you stream the response in a structured output without encountering recursion errors [1][2][3][4][5].
-
Same here!