Logging LLM provider request ids as gen_ai attributes #2236
@nirga for this, we'll need to first uncomment https://github.com/traceloop/openllmetry/blob/main/packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py#L53 and release a new version of `opentelemetry-semantic-conventions-ai`.
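For context, a minimal sketch of what re-enabling that line might look like; the class name `SpanAttributes` follows the package's convention, but the exact constant name and value on the linked line are assumptions here:

```python
# Hedged sketch of the commented-out constant in
# opentelemetry/semconv_ai/__init__.py. The real name/value live on the
# linked line and may differ; "gen_ai.response.id" mirrors the OTel GenAI
# semantic conventions for provider-assigned ids.
class SpanAttributes:
    # ... existing gen_ai.* attribute constants ...
    GEN_AI_RESPONSE_ID = "gen_ai.response.id"  # assumed constant name
```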
@dinmukhamedm can you create a PR with that? I'll then release the version on top of your PR.
I've started a few PRs, so here's the status:
@nirga do you think this work is sufficient to close the issue? WatsonX was a bit difficult to set up, and the instrumented library is on a deprecation path. People from IBM have actually contributed to the library, so they could implement this themselves if they want to.
@gyliu513 wdyt?
Discussed in #2174
Originally posted by dinmukhamedm October 18, 2024
It would be very useful for debugging purposes (e.g. when working with OpenAI support) to have a unique identifier attached to an LLM call span. Has OpenLLMetry considered adding something like a `gen_ai.request.id` attribute?

The biggest challenge I see with this is that these ids are not unified: they are formatted differently across providers, and even across endpoints of a single provider (e.g. completions vs. assistants). Generally there is a request-wide unique id, though, and where there is none the attribute can simply remain optional.
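To make the proposal concrete, here is a minimal sketch (not OpenLLMetry's actual instrumentation) of how an instrumentation could record such an id from an OpenAI response. The wrapper function and span name are made up for illustration; `response.id` is the request-wide id OpenAI returns on chat completions, and the attribute name follows the proposal above:

```python
from opentelemetry import trace
from openai import OpenAI

tracer = trace.get_tracer(__name__)
client = OpenAI()

def traced_chat_completion(**kwargs):
    """Hypothetical wrapper that records the provider request id on the span."""
    with tracer.start_as_current_span("openai.chat.completions") as span:
        response = client.chat.completions.create(**kwargs)
        # OpenAI exposes a request-wide unique id as `response.id`; other
        # providers format theirs differently, so the attribute stays
        # optional and is only set when an id is actually present.
        if getattr(response, "id", None):
            span.set_attribute("gen_ai.request.id", response.id)
        return response
```

Usage would be a drop-in replacement for a direct call, e.g. `traced_chat_completion(model="gpt-4o-mini", messages=[{"role": "user", "content": "hi"}])`.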