Logging LLM provider request ids as gen_ai attributes
#2174
Replies: 4 comments 6 replies
-
Also, I saw this comment in the semantic-conventions code from 6 months ago. It looks like this was planned but never executed?
-
Do you mean logging an ID returned by the provider, or our own random ID?
-
Closing in favor of #2236
-
We ended up adding custom attributes. Related: we wrote up how cost attribution flows through a delegation chain here: https://blog.kinthai.ai/agent-wallet-economic-models-autonomous-agents; the economic model informs what you need to log at the OTEL layer. Are you trying to solve cross-agent attribution, or mostly just provider-level deduplication with the request IDs?
-
It would be very useful for debugging purposes (e.g. with OpenAI support) to have a unique identifier on an LLM call span. Has OpenLLMetry considered adding something like a gen_ai.request.id attribute?
The biggest challenge I see with this is that these IDs are not unified: they are formatted differently across providers, and even across endpoints of a single provider (e.g. completions vs. assistants). Generally, though, there is a request-wide unique ID, and where there is none the attribute can simply remain optional.
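To illustrate the "not unified across providers" point, here is a minimal sketch of how such an attribute could be normalized before being set on the span. The helper name, the provider field names, and the example IDs are all illustrative assumptions, not OpenLLMetry's actual implementation:

```python
# Hypothetical sketch: normalize provider-specific response IDs into a single
# optional span attribute. Field names below are assumptions for illustration.

def request_id_attributes(provider: str, response: dict) -> dict:
    """Return {"gen_ai.request.id": <id>} if a request-wide ID exists, else {}."""
    # Each provider exposes its request ID under a different key/format.
    candidates = {
        "openai": response.get("id"),                        # e.g. "chatcmpl-..."
        "anthropic": response.get("id"),                     # e.g. "msg_..."
        "cohere": (response.get("meta") or {}).get("request_id"),
    }
    request_id = candidates.get(provider)
    # The attribute stays optional: no ID means no attribute.
    return {"gen_ai.request.id": request_id} if request_id else {}

# Usage: merge into the LLM call span's attributes, e.g. with OpenTelemetry:
#   span.set_attributes(request_id_attributes("openai", response_dict))
```

Keeping the normalization in one place like this would let each instrumentation contribute its own lookup without changing the attribute key.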