[bug] opentelemetry incompatibilities with LLM semantics #604
Comments
Hey @codefromthecrypt, thanks for this and good callout. This project actually pre-dates the …
Thanks @mikeldking for the feedback and hope to see you around the #otel-llm-semconv-wg slack or any SIG meetings.
Hi, @codefromthecrypt. I'm Dosu, and I'm helping the OpenInference team manage their backlog. I'm marking this issue as stale. Issue Summary:
Next Steps:
Thank you for your understanding and contribution!
Describe the bug
As a first-timer, I tried the OpenAI instrumentation and sent a trace to a local collector (using Ollama as the backend). Then I compared the output with the LLM semantic conventions defined by OTel. I noticed some incompatibilities, as well as some attributes not yet defined in the standard.
compatible:
- none

incompatible:
- gen_ai.prompt
- gen_ai.completion

not yet defined in the standard:
To Reproduce
You can use a program like this:
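The original script isn't preserved here; the following is a minimal sketch of that setup, assuming the OpenInference OpenAI instrumentation, an OTLP/HTTP collector on localhost:4318, and Ollama's OpenAI-compatible endpoint on localhost:11434. The model name and endpoint URLs are illustrative assumptions, not taken from the report.

```python
# Minimal sketch: instrument the OpenAI client with OpenInference and export
# spans over OTLP/HTTP to a local collector, using Ollama as the backend.
from openai import OpenAI
from openinference.instrumentation.openai import OpenAIInstrumentor
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Send spans to a collector listening on the default OTLP/HTTP port (assumed).
provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4318/v1/traces"))
)

# Instrument the OpenAI SDK so chat completions produce spans.
OpenAIInstrumentor().instrument(tracer_provider=provider)

# Point the OpenAI client at Ollama's OpenAI-compatible API (assumed endpoint/model).
client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")
response = client.chat.completions.create(
    model="qwen2.5:0.5b",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```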
Expected behavior
I would expect these semantics to extend, rather than clash with, the OTel LLM ones, as illustrated in the sketch below. Acknowledged that this is a moving target, since the OTel conventions change frequently.
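As a hypothetical illustration of "extend, not clash" (not what either project emits today): keep the standard gen_ai.* attributes from the OTel GenAI conventions as-is, and put library-specific data under a separate namespace so generic backends can still read the standard keys. The span name, model, and attribute values here are assumptions.

```python
# Hypothetical illustration only: standard gen_ai.* keys plus a vendor-prefixed extension.
from opentelemetry import trace

tracer = trace.get_tracer("example")
with tracer.start_as_current_span("chat qwen2.5:0.5b") as span:
    span.set_attribute("gen_ai.system", "openai")               # standard OTel GenAI attribute
    span.set_attribute("gen_ai.request.model", "qwen2.5:0.5b")  # standard OTel GenAI attribute
    span.set_attribute("openinference.span.kind", "LLM")        # extension under its own namespace
```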
Screenshots
Example collector log
Additional context
The OTel semantics are defined in "Semantic Conventions: LLM", and the folks making changes there are frequently on the Slack channel, in case you have any questions. I personally haven't yet made any changes to the LLM semantics.
https://github.com/open-telemetry/community?tab=readme-ov-file#specification-sigs