
Enhance LangChain GenAI semconv tracing#4389

Open
nagkumar91 wants to merge 4 commits into open-telemetry:main from nagkumar91:feature/langchain-instrumentor-enhanced-tracing

Conversation

@nagkumar91
Contributor

@nagkumar91 nagkumar91 commented Apr 2, 2026

Description

This PR expands the LangChain instrumentor toward semconv-first GenAI coverage and folds in the review/test follow-up fixes needed to make the package consistent with contrib expectations.

The main changes are:

  • semconv-aligned model, agent, workflow, tool, and retriever tracing
  • richer message/tool/retriever event emission and improved LangGraph/W3C parent propagation
  • review follow-up fixes for callback/span handling plus VCR compatibility with current pytest-recording
  • final changelog, typecheck, and spellcheck cleanup for the package

Type of change

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

How Has This Been Tested?

  • python3 -m pytest instrumentation-genai/opentelemetry-instrumentation-langchain/tests
  • uv tool run ruff check src tests (from instrumentation-genai/opentelemetry-instrumentation-langchain)
  • tox -e typecheck
  • tox -e spellcheck

Does This PR Require a Core Repo Change?

  • Yes. - Link to PR:
  • No.

Checklist:

  • Followed the style guidelines of this project
  • Changelogs have been updated
  • Unit tests have been added
  • Documentation has been updated

Notes

  • Package changelog updated at instrumentation-genai/opentelemetry-instrumentation-langchain/CHANGELOG.md.
  • test_gemini is skipped when tests/cassettes/test_gemini.yaml is absent and no real Google API key is available for recording.
  • Bedrock cassette replay now ignores path drift caused by newer inference-profile/model request shapes while keeping the recorded response assertions intact.
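One way to tolerate that kind of path drift during cassette replay is a custom request matcher, since pytest-recording is built on vcrpy and vcrpy accepts user-defined matchers. The sketch below is illustrative only (the regex, function names, and the exact drifting path segment are assumptions, not the PR's actual code):

```python
import re


def _normalized_path(path):
    """Collapse the model/inference-profile segment so that cassettes
    recorded against older request shapes still match newer ones."""
    return re.sub(r"/model/[^/]+/", "/model/<any>/", path)


def loose_path_matcher(r1, r2):
    """vcrpy-style matcher: two requests match if their paths are equal
    after normalizing the drifting segment."""
    return _normalized_path(r1.path) == _normalized_path(r2.path)
```

A matcher like this would be registered via vcrpy's `register_matcher` and selected in `match_on`, leaving the recorded response assertions untouched.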

nagkumar91 and others added 3 commits April 1, 2026 20:46
Add on_llm_new_token callback method to track streaming chunk timing:
- Records time_to_first_chunk (TTFC) on first token arrival
- Records time_per_output_chunk (TPOC) between subsequent tokens
- Uses timeit.default_timer() consistent with TelemetryHandler timing
- Builds metric attributes matching InvocationMetricsRecorder pattern
- Cleans up streaming state in on_llm_end and on_llm_error

Creates two new histograms using semconv metric names:
- gen_ai.client.operation.time_to_first_chunk
- gen_ai.client.operation.time_per_output_chunk

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Major enhancement to the LangChain instrumentor bringing it to parity with
the GenAI semantic conventions for spans, metrics, and events.

New callback handlers:
- on_chain_start/end/error: invoke_agent and invoke_workflow spans
- on_tool_start/end/error: execute_tool spans with tool attributes
- on_retriever_start/end/error: execute_tool spans with retriever semantics
- on_llm_start: text_completion spans for non-chat models
- on_llm_new_token: streaming timing metrics (TTFC, TPOC)
- on_agent_action/finish: agent lifecycle support

New infrastructure:
- operation_mapping.py: safe callback-to-semconv classification
- semconv_attributes.py: per-operation attribute matrix
- span_manager.py: full span lifecycle with hierarchy tracking
- utils.py: provider detection, server address, W3C propagation
- message_formatting.py: structured message serialization with redaction
- content_recording.py: content capture policy integration
- event_emitter.py: semconv-aligned events for non-LLM spans

Key capabilities:
- Removed ChatOpenAI/ChatBedrock restriction, supports any LLM
- Full request/response attribute extraction per semconv
- Parent-child hierarchy with LangGraph support (filtering, goto, stacks)
- Token usage accumulation from LLM to agent spans
- W3C trace context propagation (incoming traceparent/tracestate)
- Cache token attributes (cache_read, cache_creation)
- Streaming metrics (time_to_first_chunk, time_per_output_chunk)
- 276 unit and e2e tests
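The parent-child hierarchy tracking mentioned above rests on LangChain's run IDs: every callback carries a `run_id` and optional `parent_run_id`, which a span manager can use to attach child spans to the right parent. A hypothetical sketch of that bookkeeping (names are illustrative; the real `span_manager.py` manages live OpenTelemetry spans, not strings):

```python
class SpanManager:
    """Toy run-id registry illustrating parent/child span tracking."""

    def __init__(self):
        self._spans = {}  # run_id -> (span_name, parent_run_id)

    def start(self, run_id, name, parent_run_id=None):
        # In the real instrumentor this would start an OTel span whose
        # parent context comes from the parent_run_id's span.
        self._spans[run_id] = (name, parent_run_id)

    def ancestry(self, run_id):
        """Span names from this run up to the root, child first."""
        chain = []
        while run_id in self._spans:
            name, parent = self._spans[run_id]
            chain.append(name)
            run_id = parent
        return chain
```

This is also the natural place to hook the LangGraph-specific behavior the commit lists (filtering, `goto` handling, stacks) and to accumulate token usage from LLM spans up to their agent ancestors.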

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Apply the follow-up LangChain review fixes, wire non-LLM event emission, correct experimental content capture behavior, clean ignored-run state, and repair the package's VCR-based integration tests under the current pytest-recording setup.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
@nagkumar91 nagkumar91 requested a review from a team as a code owner April 2, 2026 16:41
@tammy-baylis-swi tammy-baylis-swi moved this to Ready for review in Python PR digest Apr 2, 2026
@MikeGoldsmith
Member

MikeGoldsmith commented Apr 7, 2026

Thanks for the PR @nagkumar91. At over 7k lines of code, this is nearly impossible to approve with confidence.

Could you break this up into smaller chunks? For example, your PR description breaks the changes down into four distinct areas; I'd suggest using those as the focus, one per PR.

Member

@MikeGoldsmith MikeGoldsmith left a comment


Forgot to add a review yesterday. I'd really like to see this PR broken down into smaller parts.

@github-project-automation github-project-automation bot moved this from Ready for review to Reviewed PRs that need fixes in Python PR digest Apr 8, 2026