Changes from 5 commits
11 changes: 11 additions & 0 deletions semantic_conventions_ai/lib/opentelemetry/semantic_conventions.rb
@@ -30,6 +30,17 @@ module SpanAttributes

# Deprecated
TRACELOOP_CORRELATION_ID = "traceloop.correlation.id"

# Gen AI
Member
just replace the existing ones

Author
and remove references to llm semantics?

GEN_AI_REQUEST_MODEL = "gen_ai.request.model"
GEN_AI_RESPONSE_MODEL = "gen_ai.response.model"
GEN_AI_USAGE_OUTPUT_TOKENS = "gen_ai.usage.output_tokens"
GEN_AI_USAGE_INPUT_TOKENS = "gen_ai.usage.input_tokens"
GEN_AI_COMPLETIONS = "gen_ai.completion"
GEN_AI_PROMPTS = "gen_ai.prompt"
GEN_AI_SYSTEM = "gen_ai.system"
⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

OpenTelemetry semantic conventions gen_ai.system deprecated gen_ai.provider.name

💡 Result:

gen_ai.system was deprecated and renamed to gen_ai.provider.name in OpenTelemetry semantic-conventions v1.37.0 (GenAI “system-specific naming policy” breaking change). [1]

What to use instead:

  • Use gen_ai.provider.name to identify the GenAI telemetry “flavor” / provider discriminator (e.g., openai, azure.ai.openai, aws.bedrock, gcp.vertex_ai, etc.). [2]
  • The spec notes gen_ai.provider.name should be set based on the instrumentation’s best knowledge and may differ from the actual underlying model provider (proxies/gateways); gen_ai.request.model, gen_ai.response.model, and server.address can help disambiguate. [2]

Sources:
[1] OpenTelemetry semantic-conventions GitHub releases (v1.37.0): rename gen_ai.system → gen_ai.provider.name
[2] OpenTelemetry GenAI spans semantic conventions: definition + well-known values for gen_ai.provider.name


Repository: traceloop/openllmetry-ruby
Mark GEN_AI_SYSTEM = "gen_ai.system" as deprecated or remove it.

gen_ai.system was deprecated and renamed to gen_ai.provider.name in OpenTelemetry semantic-conventions v1.37.0. Line 42 already defines GEN_AI_PROVIDER = "gen_ai.provider.name" as the replacement. If keeping GEN_AI_SYSTEM for backward compatibility, add a # Deprecated comment consistent with the pattern at line 31–32 (TRACELOOP_CORRELATION_ID); otherwise, remove it.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@semantic_conventions_ai/lib/opentelemetry/semantic_conventions.rb` at line
41, The constant GEN_AI_SYSTEM = "gen_ai.system" is deprecated in favor of
GEN_AI_PROVIDER = "gen_ai.provider.name"; either remove GEN_AI_SYSTEM or mark it
as deprecated for backward compatibility by adding a comment like "# Deprecated:
use GEN_AI_PROVIDER (gen_ai.provider.name) as of OpenTelemetry v1.37.0"
following the same style used for TRACELOOP_CORRELATION_ID so reviewers can see
the replacement and rationale.
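If the constant is kept, the deprecation note could mirror the `# Deprecated` comment style already used for `TRACELOOP_CORRELATION_ID`. A minimal sketch — the module here is a standalone stand-in for illustration, not the real `OpenTelemetry::SemanticConventionsAi` namespace:

```ruby
# Sketch of the suggested deprecation pattern. Standalone module for
# illustration only; the real code lives under OpenTelemetry::SemanticConventionsAi.
module SpanAttributesSketch
  # Deprecated: use GEN_AI_PROVIDER ("gen_ai.provider.name") as of
  # OpenTelemetry semantic-conventions v1.37.0.
  GEN_AI_SYSTEM = "gen_ai.system"

  GEN_AI_PROVIDER = "gen_ai.provider.name"
end
```

Keeping both constants lets existing consumers of `GEN_AI_SYSTEM` upgrade without breakage while the comment points them at the replacement.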

GEN_AI_PROVIDER = "gen_ai.provider.name"
GEN_AI_CONVERSATION_ID = "gen_ai.conversation.id"
end

module LLMRequestTypeValues
82 changes: 62 additions & 20 deletions traceloop-sdk/lib/traceloop/sdk.rb
@@ -6,16 +6,21 @@ module Traceloop
module SDK
class Traceloop
def initialize
api_key = ENV["TRACELOOP_API_KEY"]
raise "TRACELOOP_API_KEY environment variable is required" if api_key.nil? || api_key.empty?

OpenTelemetry::SDK.configure do |c|
c.add_span_processor(
OpenTelemetry::SDK::Trace::Export::SimpleSpanProcessor.new(
OpenTelemetry::SDK::Trace::Export::BatchSpanProcessor.new(
OpenTelemetry::Exporter::OTLP::Exporter.new(
endpoint: "#{ENV.fetch("TRACELOOP_BASE_URL", "https://api.traceloop.com")}/v1/traces",
headers: { "Authorization" => "Bearer #{ENV.fetch("TRACELOOP_API_KEY")}" }
headers: {
"Authorization" => "#{ENV.fetch("TRACELOOP_AUTH_SCHEME", "Bearer")} #{ENV.fetch("TRACELOOP_API_KEY")}"
Member
what is that?

Author
authorization headers may vary in type. i.e. for our dynatrace exporter, they use Api-Token.
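To make the exchange concrete, here is a hedged sketch of how the `TRACELOOP_AUTH_SCHEME` override composes the header. The ENV variable names come from the diff; the values are made up:

```ruby
# Sketch only: shows how the scheme override changes the Authorization header.
ENV["TRACELOOP_AUTH_SCHEME"] = "Api-Token"  # e.g. a Dynatrace-style endpoint
ENV["TRACELOOP_API_KEY"] = "dt0c01.example"

header = "#{ENV.fetch("TRACELOOP_AUTH_SCHEME", "Bearer")} #{ENV.fetch("TRACELOOP_API_KEY")}"
puts header  # prints "Api-Token dt0c01.example"

# With TRACELOOP_AUTH_SCHEME unset, the fetch default preserves the old behavior:
ENV.delete("TRACELOOP_AUTH_SCHEME")
default_header = "#{ENV.fetch("TRACELOOP_AUTH_SCHEME", "Bearer")} #{ENV.fetch("TRACELOOP_API_KEY")}"
puts default_header  # prints "Bearer dt0c01.example"
```

Because the second `fetch` argument supplies the default, existing deployments that only set `TRACELOOP_API_KEY` keep sending `Bearer` tokens unchanged.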

}
)
)
)
puts "Traceloop exporting traces to #{ENV.fetch("TRACELOOP_BASE", "https://api.traceloop.com")}"
puts "Traceloop exporting traces to #{ENV.fetch("TRACELOOP_BASE_URL", "https://api.traceloop.com")}"
end

@tracer = OpenTelemetry.tracer_provider.tracer("Traceloop")
@@ -41,25 +46,30 @@ def log_messages(messages)
def log_prompt(system_prompt="", user_prompt)
unless system_prompt.empty?
@span.add_attributes({
"#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_PROMPTS}.0.role" => "system",
"#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_PROMPTS}.0.content" => system_prompt,
"#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_PROMPTS}.1.role" => "user",
"#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_PROMPTS}.1.content" => user_prompt
"#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_PROMPTS}.0.role" => "system",
"#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_PROMPTS}.0.content" => system_prompt,
"#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_PROMPTS}.1.role" => "user",
"#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_PROMPTS}.1.content" => user_prompt
})
else
@span.add_attributes({
"#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_PROMPTS}.0.role" => "user",
"#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_PROMPTS}.0.content" => user_prompt
"#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_PROMPTS}.0.role" => "user",
"#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_PROMPTS}.0.content" => user_prompt
})
end
end

def log_response(response)
if response.respond_to?(:body)
log_bedrock_response(response)
# Check for RubyLLM::Message objects
elsif response.is_a?(::RubyLLM::Message)
log_ruby_llm_message(response)
elsif response.is_a?(::RubyLLM::Tool::Halt)
log_ruby_llm_halt(response)
# This is Gemini specific, see -
# https://github.com/gbaptista/gemini-ai?tab=readme-ov-file#generate_content
elsif response.has_key?("candidates")
elsif response.respond_to?(:has_key?) && response.has_key?("candidates")
log_gemini_response(response)
else
log_openai_response(response)
@@ -73,10 +83,29 @@ def log_gemini_response(response)

@span.add_attributes({
"#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_COMPLETIONS}.0.role" => "assistant",
"#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_COMPLETIONS}.0.content" => response.dig("candidates", 0, "content", "parts", 0, "text")
"#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_COMPLETIONS}.0.content" => response.dig(
"candidates", 0, "content", "parts", 0, "text")
})
end

def log_ruby_llm_message(response)
@span.add_attributes({
OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_RESPONSE_MODEL => response.model_id,
OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_USAGE_OUTPUT_TOKENS => response.output_tokens || 0,
OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_USAGE_INPUT_TOKENS => response.input_tokens || 0,
"#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_COMPLETIONS}.0.role" => response.role.to_s,
"#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_COMPLETIONS}.0.content" => response.content
})
end
Comment on lines +115 to +123
⚠️ Potential issue | 🟡 Minor

response.model_id and response.content can be nil, silently dropping span attributes.

nil values are filtered out by the OTel Ruby SDK's add_attributes, so no crash occurs, but the attributes won't be recorded. Add nil guards to at least preserve an empty string for content.

🛡️ Proposed fix
 def log_ruby_llm_message(response)
   @span.add_attributes({
-    OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_RESPONSE_MODEL => response.model_id,
+    OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_RESPONSE_MODEL => response.model_id || @model,
     OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_USAGE_OUTPUT_TOKENS => response.output_tokens || 0,
     OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_USAGE_INPUT_TOKENS => response.input_tokens || 0,
     "#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_COMPLETIONS}.0.role" => response.role.to_s,
-    "#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_COMPLETIONS}.0.content" => response.content
+    "#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_COMPLETIONS}.0.content" => response.content.to_s
   })
 end
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@traceloop-sdk/lib/traceloop/sdk.rb` around lines 115 - 123, In
log_ruby_llm_message, guard against nil span attributes so they aren't silently
dropped by OpenTelemetry: ensure response.model_id and response.content are
converted to safe defaults before calling `@span.add_attributes` (e.g., use a
fallback like an empty string for content and a sensible fallback or stringified
value for model_id), and keep the existing keys (GEN_AI_RESPONSE_MODEL and the
completions content key) but pass non-nil values so the attributes are recorded.
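The nil-filtering behaviour is easy to reproduce without the OTel SDK. A stand-alone sketch, using a `Struct` as a hypothetical stand-in for `RubyLLM::Message` (the real class has more fields):

```ruby
# Stand-in for RubyLLM::Message, for illustration only.
FakeMessage = Struct.new(:model_id, :output_tokens, :input_tokens, :role, :content)

msg = FakeMessage.new(nil, nil, 7, :assistant, nil)

# Guarded values, as the proposed fix suggests: nil never reaches the span.
# The "unknown" model fallback is illustrative; the SDK would use @model.
attributes = {
  "gen_ai.response.model"       => msg.model_id || "unknown",
  "gen_ai.usage.output_tokens"  => msg.output_tokens || 0,
  "gen_ai.usage.input_tokens"   => msg.input_tokens || 0,
  "gen_ai.completion.0.role"    => msg.role.to_s,
  "gen_ai.completion.0.content" => msg.content.to_s  # "" instead of a dropped attribute
}

attributes.each_value { |v| raise "nil leaked into span attributes" if v.nil? }
```

With the guards in place every key survives `add_attributes`; without them, the nil-valued entries would be silently filtered and the span would simply lack those attributes.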


def log_ruby_llm_halt(response)
@span.add_attributes({
OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_RESPONSE_MODEL => @model,
"#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_COMPLETIONS}.0.role" => "tool",
"#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_COMPLETIONS}.0.content" => response.content
})
end

def log_bedrock_response(response)
body = JSON.parse(response.body.read())

@@ -109,25 +138,38 @@ def log_openai_response(response)
})
if response.has_key?("usage")
@span.add_attributes({
OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_USAGE_TOTAL_TOKENS => response.dig("usage", "total_tokens"),
OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_USAGE_COMPLETION_TOKENS => response.dig("usage", "completion_tokens"),
OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_USAGE_PROMPT_TOKENS => response.dig("usage", "prompt_tokens"),
OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_USAGE_TOTAL_TOKENS => response.dig("usage",
"total_tokens"),
OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_USAGE_COMPLETION_TOKENS => response.dig(
"usage", "completion_tokens"),
OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_USAGE_PROMPT_TOKENS => response.dig("usage",
"prompt_tokens"),
})
end
if response.has_key?("choices")
@span.add_attributes({
"#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_COMPLETIONS}.0.role" => response.dig("choices", 0, "message", "role"),
"#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_COMPLETIONS}.0.content" => response.dig("choices", 0, "message", "content")
"#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_COMPLETIONS}.0.role" => response.dig(
"choices", 0, "message", "role"),
"#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_COMPLETIONS}.0.content" => response.dig(
"choices", 0, "message", "content")
})
end
end
end

def llm_call(provider, model)
def llm_call(provider, model, conversation_id: nil)
@tracer.in_span("#{provider}.chat") do |span|
span.add_attributes({
OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_REQUEST_MODEL => model,
})
attributes = {
OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_REQUEST_MODEL => model,
OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_SYSTEM => provider,
OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_PROVIDER => provider,
}

if conversation_id
attributes[OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_CONVERSATION_ID] = conversation_id
end

span.add_attributes(attributes)
yield Tracer.new(span, provider, model)
end
end
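The conditional attribute assembly in `llm_call` can be sketched stand-alone. Literal attribute keys are used in place of the `SpanAttributes` constants for brevity:

```ruby
# Stand-alone sketch of llm_call's attribute assembly; literal keys stand in
# for the SpanAttributes constants.
def build_llm_call_attributes(provider, model, conversation_id: nil)
  attributes = {
    "gen_ai.request.model" => model,
    "gen_ai.system"        => provider,  # kept alongside the newer key
    "gen_ai.provider.name" => provider
  }
  # Only add the conversation id when the caller supplies one, so no
  # nil-valued attribute is ever sent.
  attributes["gen_ai.conversation.id"] = conversation_id if conversation_id
  attributes
end

with_id    = build_llm_call_attributes("openai", "gpt-4o", conversation_id: "thread-42")
without_id = build_llm_call_attributes("openai", "gpt-4o")
# with_id includes "gen_ai.conversation.id"; without_id omits the key entirely.
```

Making `conversation_id` an optional keyword keeps every existing `llm_call(provider, model)` call site source-compatible.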
4 changes: 2 additions & 2 deletions traceloop-sdk/traceloop-sdk.gemspec
@@ -17,8 +17,8 @@ Gem::Specification.new do |spec|

spec.add_dependency 'opentelemetry-semantic_conventions_ai', '~> 0.0.3'

spec.add_dependency 'opentelemetry-sdk', '~> 1.3.1'
spec.add_dependency 'opentelemetry-exporter-otlp', '~> 0.26.1'
spec.add_dependency 'opentelemetry-exporter-otlp', '~> 0.31.1'
spec.add_dependency 'opentelemetry-sdk', '~> 1.10.0'

if spec.respond_to?(:metadata)
spec.metadata['source_code_uri'] = 'https://github.com/traceloop/openllmetry-ruby/tree/main/traceloop-sdk'