not able to track entire request end to end in langchain + langfuse #3392
-
To track the entire user journey in one trace using Langchain and Langfuse, you need to ensure that all stages of the request are grouped into a single trace. Here is how you can modify your code to achieve this:
Here is the updated code:

```python
# Imports assumed by the original snippet (not shown there): FastAPI, Pydantic,
# the Langchain runnable passthrough, and the Langfuse v2 Langchain callback.
from fastapi import FastAPI
from pydantic import BaseModel
from langchain_core.runnables import RunnablePassthrough
from langfuse.callback import CallbackHandler

app = FastAPI()


class Demo:
    def create_handler(self, user_identifier, trace_name):
        # One handler per incoming request; reusing it for every chain call
        # is what groups the stages together.
        langfuse_handler = CallbackHandler(
            session_id=user_identifier,
            metadata={"a": "b"},
            trace_name=trace_name,
        )
        return langfuse_handler

    def method1(self, question, llm, prompt, langfuse_handler):
        # Classification chain: simulates output like "sports", "politics", etc.
        retrieval_chain = {"question": RunnablePassthrough()} | prompt | llm
        output_data = retrieval_chain.invoke(
            # Pass the raw question string; the RunnablePassthrough mapping
            # wraps it into {"question": ...} for the prompt.
            question,
            config={"callbacks": [langfuse_handler]},
        )
        return output_data

    def method2(self, question, llm, prompt, langfuse_handler):
        # RAG chain for "sports" questions.
        retrieval_chain = {"question": RunnablePassthrough()} | prompt | llm
        output_data = retrieval_chain.invoke(
            question,
            config={"callbacks": [langfuse_handler]},
        )
        return output_data

    def method3(self, question, llm, prompt, langfuse_handler):
        # RAG chain for all other (non-sports) questions.
        retrieval_chain = {"question": RunnablePassthrough()} | prompt | llm
        output_data = retrieval_chain.invoke(
            question,
            config={"callbacks": [langfuse_handler]},
        )
        return output_data


class RequestInput(BaseModel):
    question: str
    user_identifier: str


demo_instance = Demo()

# Placeholders from the original snippet: replace with a real LLM instance
# and real prompt templates.
llm = "Llama"
prompt_classification = "classification_prompt"
prompt_rag_sports = "sport_prompt"
prompt_rag_other = "rag_prompt"


@app.post("/get_response/")
async def get_response(request: RequestInput):
    # Create a single handler for this request and pass it to every stage.
    langfuse_handler = demo_instance.create_handler(request.user_identifier, "end_to_end_trace")
    class_identification = demo_instance.method1(
        question=request.question,
        llm=llm,
        prompt=prompt_classification,
        langfuse_handler=langfuse_handler,
    )
    if class_identification == "sports":
        final_output = demo_instance.method2(
            question=request.question,
            llm=llm,
            prompt=prompt_rag_sports,
            langfuse_handler=langfuse_handler,
        )
    else:
        final_output = demo_instance.method3(
            question=request.question,
            llm=llm,
            prompt=prompt_rag_other,
            langfuse_handler=langfuse_handler,
        )
    return {"final_output": final_output}
```

By creating a single handler per request and passing it to every chain invocation, all stages of the user journey are recorded together rather than as separate traces.
-
Hi @jaiswalvineet, have a look at the Langfuse Python SDK.
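Assuming this refers to the Langfuse Python SDK's decorator-based tracing, here is a minimal sketch using the v2 decorator API (`@observe` plus `langfuse_context`); it reuses `demo_instance`, `llm`, and the prompt placeholders from the code above, and `answer_question` is an illustrative name:

```python
# Sketch only, assuming the Langfuse Python SDK v2 decorator API.
# @observe opens one trace per call, and get_current_langchain_handler()
# returns a callback handler whose Langchain runs nest inside that trace.
from langfuse.decorators import observe, langfuse_context


@observe()
def answer_question(question: str, user_identifier: str):
    # Attach session info and a name to the trace opened by @observe.
    langfuse_context.update_current_trace(
        name="end_to_end_trace",
        session_id=user_identifier,
    )
    handler = langfuse_context.get_current_langchain_handler()

    class_identification = demo_instance.method1(
        question=question, llm=llm,
        prompt=prompt_classification, langfuse_handler=handler,
    )
    if class_identification == "sports":
        return demo_instance.method2(
            question=question, llm=llm,
            prompt=prompt_rag_sports, langfuse_handler=handler,
        )
    return demo_instance.method3(
        question=question, llm=llm,
        prompt=prompt_rag_other, langfuse_handler=handler,
    )
```

The FastAPI route would then simply call `answer_question(request.question, request.user_identifier)` and return the result, so the classification step and the RAG step share one trace.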
-
I am using Langchain with Langfuse, and it works fine for tracking individual LLM calls, but my end goal is to track the end-to-end request in a single trace, and I have not been able to find a way to do that.
Here is some sample code.
Here, each request first goes to classification; suppose it is classified as "sports", then it goes to method2, otherwise to method3. I want to track the entire user journey in one trace. Ideally Langfuse should support this since I am using the same handler, but it creates two different rows (traces). If I only track at the last stage, it also misses the initial classification.
Can someone please let me know what I am missing?
PS: I am using LangGraph for the classification + RAG switching; for the sake of simplicity, I have not included it in this code.