A ToolCall is not generated in the response from the llama 3.1 model served by LM Studio when using the LangChain framework and connecting through ChatOpenAI.
The same tool call works fine with Ollama for the same llama 3.1 model.
According to the LangChain team, the response from the model was malformed; the tool call works as expected with Ollama.
You can also refer to issue #26342 in langchain-ai.
Below is the code snippet:
```python
from typing import Annotated
import os

from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.messages import BaseMessage
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition
from typing_extensions import TypedDict

# Getting the env value
TAVILY_API_KEY = os.getenv("TAVILY_API_KEY")

# Point to the local server
llm = ChatOpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")


class State(TypedDict):
    messages: Annotated[list, add_messages]


graph_builder = StateGraph(State)

tool = TavilySearchResults(max_results=2, include_answer=True)
tools = [tool]
llm_with_tools = llm.bind_tools(tools)


def chatbot(state: State):
    print(state)  # debug: show the current graph state
    return {"messages": [llm_with_tools.invoke(state["messages"])]}


graph_builder.add_node("chatbot", chatbot)

tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)
graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)
# Any time a tool is called, we return to the chatbot to decide the next step
graph_builder.add_edge("tools", "chatbot")
graph_builder.set_entry_point("chatbot")
graph = graph_builder.compile()

while True:
    user_input = input("User: ")
    if user_input.lower() in ["quit", "exit", "q"]:
        print("Goodbye!")
        break
    for event in graph.stream({"messages": [("user", user_input)]}):
        for value in event.values():
            if isinstance(value["messages"][-1], BaseMessage):
                print("Assistant:", value["messages"][-1].content)
```
As far as I know, LM Studio does not yet support tool calls, but it will be fantastic when it does! There's a small chance I'm wrong and someone will correct me; I just couldn't find any reference to tool calling in the docs, and the `tools` parameter is not listed on the server's chat completions endpoint. Until then, you have to stick with Ollama and other alternatives for anything that relies on tool calls.
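Since the issue reports that the same tool call works through Ollama, swapping the backend is a small change. A rough sketch, assuming the `langchain-ollama` package is installed and a `llama3.1` model has been pulled locally (both assumptions on my part, not stated in the issue):

```python
# Sketch: same graph, but backed by Ollama instead of LM Studio.
# Assumes `pip install langchain-ollama` and `ollama pull llama3.1` have been run.
from langchain_ollama import ChatOllama

llm = ChatOllama(model="llama3.1")      # replaces the ChatOpenAI instance above
llm_with_tools = llm.bind_tools(tools)  # same Tavily tool binding as before
```

The rest of the LangGraph setup stays unchanged, since `bind_tools` presents the same interface regardless of backend.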
When using the tool calling function with GPTScript, LM Studio with the model meta-llama-3.1-8b-instruct returns this:

```
[ERROR]
Error rendering prompt with jinja template: Error: Cannot put tools in the first user message when there's no first user message!. Error Data: n/a, Additional Data: n/a
```
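To check whether that jinja template error is server-side rather than a LangChain or GPTScript problem, one can hit LM Studio's OpenAI-compatible endpoint directly with a `tools` payload. A diagnostic sketch, assuming the `openai` Python client and the same local server from the issue (the `add` tool is hypothetical, for illustration only):

```python
# Raw request to LM Studio's OpenAI-compatible endpoint with a tools payload.
# If the jinja template error above is server-side, it should reproduce here too.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
response = client.chat.completions.create(
    model="meta-llama-3.1-8b-instruct",
    messages=[{"role": "user", "content": "What is 2 + 2?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "add",  # hypothetical tool, for illustration only
            "description": "Add two numbers",
            "parameters": {
                "type": "object",
                "properties": {
                    "a": {"type": "number"},
                    "b": {"type": "number"},
                },
                "required": ["a", "b"],
            },
        },
    }],
)
print(response.choices[0].message.tool_calls)
```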