v1.59.5 issue with Tracing on Langfuse v3.13.0 + prompt #7938
Comments
What's the client-side request? Can you share more details?
This works for me @qymab. Public trace I just sent on latest: https://us.cloud.langfuse.com/project/clvlhdfat0007vwb74m9lvfvi/traces/051c83e1-7949-40e4-b8e6-e5d0da99f76e?timestamp=2025-01-23T15%3A48%3A18.371Z
These are the Docker logs for the litellm container.
Does this issue go away when you downgrade to v1.57.1? Is this with self-hosted Langfuse, @qymab?
Btw, in litellm/utils.py I notice that dynamic_success_callbacks is None now. The langfuse integration that works on 1.59.3 but not on 1.59.5 is simply:
May not be related, but even in 1.59.3, the langfuse callback is no longer executed when using context_window_fallback_dict and the fallback is activated (e.g. the model's context window is too small). If you happen to be improving test coverage for success callbacks, include this too in the test harness :)
@motin - can you give me a way to repro?
Ok - I will try with this.
@motin - can you share a reproducible code snippet? This seems to be working for me. What Langfuse Python SDK version are you using?
@ishaan-jaff The Langfuse package is v2.57.12. Here is the shortest possible repro code I could get:
This code results in a working langfuse trace on 1.59.3 but not on 1.59.5.
And here is repro code for the other case, which also fails in 1.59.3. This results in a langfuse trace:
As does this:
But this doesn't, even though it completes fine, i.e. the fallback mechanism works, just not the langfuse trace:
Let me know if this ought to be reported as a separate issue, not related to the other one.
Testing with litellm 1.59.7, I'm able to repro this issue.
I am able to repro @motin's issue - it was only for sync calls, as it had to do with a missing implementation of the sync success event on the new CustomLogger version of the langfuse integration. Updating our testing for this as well. @qymab, your call works fine for me, tested via proxy + a clean Google Colab (screenshots below).
Unable to repro your issue @qymab - just ran your exact config, and I can see it logged to langfuse as well. Regarding your error: it looks like your issue occurred when writing the log to langfuse; perhaps there was an error on the receiving server.
* fix(base_utils.py): supported nested json schema passed in for anthropic calls
* refactor(base_utils.py): refactor ref parsing to prevent infinite loop
* test(test_openai_endpoints.py): refactor anthropic test to use bedrock
* fix(langfuse_prompt_management.py): add unit test for sync langfuse calls

Resolves #7938 (comment)
I opted to use the Docker container's hostname as the host instead of relying on the public URL.
@krrishdholakia Thanks! I did however notice that upgrading to 1.59.7 actually resolved my first issue, even without your fix included. Not sure if you were referring to fixing the second issue related to context_window_fallback_dict, but I didn't see the fallback covered in the new tests, so I reported it separately here: #8014 - if already resolved by your fix, please close it with a comment stating as such.
After upgrading from v1.57.1 to v1.59.5, prompt traces are no longer being recorded in Langfuse, despite using the same Langfuse version (v3.13.0).
Here are my LiteLLM settings (Docker-based):
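The exact settings were not preserved in this thread. A representative litellm proxy config of this shape (model names, placeholder keys, and the `langfuse-web:3000` container hostname are assumptions for illustration):

```yaml
model_list:
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY

litellm_settings:
  success_callback: ["langfuse"]

environment_variables:
  LANGFUSE_PUBLIC_KEY: "pk-lf-..."
  LANGFUSE_SECRET_KEY: "sk-lf-..."
  LANGFUSE_HOST: "http://langfuse-web:3000"
```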