Error code: 400 - {'error': {'message': "An assistant message with 'tool_calls' must be followed by tool messages responding to each 'tool_call_id'. The following tool_call_ids did not have response messages: call_iY0nNb7BlKHtLUT3BRmK28r8", 'type': 'invalid_request_error', 'param': 'messages', 'code': None}} #23733
-
To resolve the error indicating that an assistant message with 'tool_calls' must be followed by tool messages responding to each 'tool_call_id', you need to ensure that every tool call in the assistant message is followed by a corresponding ToolMessage that responds to its 'tool_call_id'. Here's an example of how you can structure your messages to include the necessary tool messages (note that AIMessage expects tool calls in LangChain's name/args/id format):

from langchain_core.messages import AIMessage, ToolMessage

# Example assistant message with a tool call
assistant_message = AIMessage(
    content="",
    tool_calls=[
        {
            "name": "YourToolFunctionName",
            "args": {"arg1": "value1", "arg2": "value2"},
            "id": "call_iY0nNb7BlKHtLUT3BRmK28r8",
        }
    ],
)

# Corresponding tool message responding to the tool call
tool_message = ToolMessage(
    content='{"result": "Your tool result"}',
    name="YourToolFunctionName",
    tool_call_id="call_iY0nNb7BlKHtLUT3BRmK28r8",
)

# Ensure the tool message directly follows the assistant message
messages = [assistant_message, tool_message]

Additionally, you can use the @tool decorator to define your tools:

from langchain_core.tools import tool
@tool
def add(a: int, b: int) -> int:
    """Adds a and b."""
    return a + b

@tool
def multiply(a: int, b: int) -> int:
    """Multiplies a and b."""
    return a * b

tools = [add, multiply]
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)
llm_with_tools = llm.bind_tools(tools)

When invoking the model, ensure that tool calls are properly handled and that tool results are passed back to the model as ToolMessage objects:

from langchain_core.messages import HumanMessage, ToolMessage
query = "What is 3 * 12? Also, what is 11 + 49?"
messages = [HumanMessage(query)]
ai_msg = llm_with_tools.invoke(messages)
messages.append(ai_msg)

for tool_call in ai_msg.tool_calls:
    # Route each tool call to the matching tool and run it with the model's arguments
    selected_tool = {"add": add, "multiply": multiply}[tool_call["name"].lower()]
    tool_output = selected_tool.invoke(tool_call["args"])
    # Respond to this specific tool_call_id so the follow-up request is valid
    messages.append(ToolMessage(str(tool_output), tool_call_id=tool_call["id"]))

messages

This ensures that the tool messages are correctly appended and the model can process them accordingly [1][2][3][4][5].
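Once every tool call has a matching ToolMessage, the message list can be sent back to the model to get the final answer. A minimal sketch of that last step, reusing the variable names from the example above:

final_response = llm_with_tools.invoke(messages)
print(final_response.content)

If any tool_call_id is still missing its ToolMessage at this point, this second call is what triggers the 400 error shown in the question.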
-
I had this problem because my call_model() function was incorrectly dealing with tool call messages. My prompt template was turning messages of type tool into type human, which is obviously not correct. I should have been leveraging ("placeholder", "{messages}") to send all of my messages, since the placeholder preserves the type of each message. I should have just used the placeholder as intended, along the lines of the sketch below.
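(Roughly what the corrected setup looks like; the original snippet didn't survive, so the system prompt text and the call_model() shape here are assumptions, and llm_with_tools is the tools-bound chat model from the answer above.)

from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),  # placeholder system prompt
    ("placeholder", "{messages}"),  # keeps human/ai/tool message types intact
])

def call_model(state):
    # state["messages"] is assumed to be the running list of chat messages,
    # including AI tool-call messages and their ToolMessage responses
    chain = prompt | llm_with_tools
    response = chain.invoke({"messages": state["messages"]})
    return {"messages": [response]}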
Just thought I would post this in case it gives anyone guidance on solving the same error.
-
Description
Does anybody know what this error is?
Error code: 400 - {'error': {'message': "An assistant message with 'tool_calls' must be followed by tool messages responding to each 'tool_call_id'. The following tool_call_ids did not have response messages: call_iY0nNb7BlKHtLUT3BRmK28r8", 'type': 'invalid_request_error', 'param': 'messages', 'code': None}}
System Info
langchain==0.2.6
langchain-community==0.2.6
langchain-core==0.2.10
langchain-experimental==0.0.62
langchain-openai==0.1.13
langchain-qdrant==0.1.0
langchain-text-splitters==0.2.1