28 commits
799e80e
wip: first draft of openai responses instrumentation.
eternalcuriouslearner Apr 12, 2026
13d5d03
Merge remote-tracking branch 'upstream/main' into feat/openai-respons…
eternalcuriouslearner Apr 16, 2026
89c8434
wip: converted responses to use new handler factory methods.
eternalcuriouslearner Apr 16, 2026
b36205d
WIP: Adding test files and refined the missing parts.
eternalcuriouslearner Apr 19, 2026
0b241e3
WIP: Moving cache assertions to common utils file.
eternalcuriouslearner Apr 19, 2026
9dcb021
WIP: removing the async create method instrumentation.
eternalcuriouslearner Apr 19, 2026
881686b
WIP: removed the unnecessary cassette checks added context around res…
eternalcuriouslearner Apr 20, 2026
0c277ac
WIP: fixing the lint in files.
eternalcuriouslearner Apr 21, 2026
ecc3774
WIP: fixing the precommit stuff.
eternalcuriouslearner Apr 21, 2026
d4ff5af
WIP: added changelog.
eternalcuriouslearner Apr 21, 2026
c5abec7
Merge branch 'main' into feat/openai-responses-create-instrumentation…
eternalcuriouslearner Apr 21, 2026
0293f21
wip: fixing the wrap configuration.
eternalcuriouslearner Apr 22, 2026
9bb3221
Merge branch 'main' into feat/openai-responses-create-instrumentation…
eternalcuriouslearner Apr 23, 2026
57a7715
wip: remove pydantic based validation and convert the request to data…
eternalcuriouslearner Apr 24, 2026
8558a3e
wip: delete old pydantic job.
eternalcuriouslearner Apr 24, 2026
0634075
Merge branch 'main' into feat/openai-responses-create-instrumentation…
eternalcuriouslearner Apr 24, 2026
da22aa9
wip: removed the unwanted LLMInvocation.
eternalcuriouslearner Apr 24, 2026
2daf193
wip: cleaning up the functions.
eternalcuriouslearner Apr 24, 2026
d9374b3
wip: cleaning up the failing tests.
eternalcuriouslearner Apr 24, 2026
0520e0b
wip: fixing precommit.
eternalcuriouslearner Apr 24, 2026
ce8882d
wip: cleaning the precommit.
eternalcuriouslearner Apr 24, 2026
e74229a
polish: cleaning up tests and lint.
eternalcuriouslearner Apr 24, 2026
81969b9
polish: adding span assertions for missing tests.
eternalcuriouslearner Apr 24, 2026
db8a212
Merge branch 'main' into feat/openai-responses-create-instrumentation…
eternalcuriouslearner Apr 24, 2026
fa4eec3
Merge branch 'main' into feat/openai-responses-create-instrumentation…
lzchen Apr 29, 2026
019c695
Merge branch 'main' into feat/openai-responses-create-instrumentation…
eternalcuriouslearner Apr 30, 2026
20f361f
polish: adding typechecks and using attributes from genai util struct…
eternalcuriouslearner Apr 30, 2026
bba92d9
Merge branch 'feat/openai-responses-create-instrumentation-first-part…
eternalcuriouslearner Apr 30, 2026
19 changes: 0 additions & 19 deletions .github/workflows/test.yml
@@ -214,25 +214,6 @@ jobs:
- name: Run tests
run: tox -e py314-test-instrumentation-openai-v2-latest -- -ra

py313-test-instrumentation-openai-v2-pydantic1_ubuntu-latest:
Contributor

Purpose of this change? Shouldn't we test with pydantic2 if openai dropped support for 1?

Contributor Author

The pydantic tests were added earlier, when I was using pydantic to validate our response models. @herin049 suggested removing that, since it would tightly couple this instrumentation to the pydantic version shipped with the openai library. The change above removes the pydantic dependency accordingly.

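The dataclass approach described in the reply above can be sketched roughly as follows; the field names and the `extract_params` shape here are illustrative assumptions, not the PR's actual extractor:

```python
from dataclasses import dataclass, field
from typing import Any, Dict, Optional


# Hypothetical parameter container; the real one lives in
# response_extractors.py and carries more fields.
@dataclass
class ResponsesParams:
    model: Optional[str] = None
    input: Any = None
    stream: bool = False
    extra: Dict[str, Any] = field(default_factory=dict)


_KNOWN_FIELDS = {"model", "input", "stream"}


def extract_params(**kwargs) -> ResponsesParams:
    # Split known fields from everything else instead of validating
    # against pydantic models vendored by the openai library.
    known = {k: v for k, v in kwargs.items() if k in _KNOWN_FIELDS}
    extra = {k: v for k, v in kwargs.items() if k not in _KNOWN_FIELDS}
    return ResponsesParams(**known, extra=extra)
```

Plain dataclasses keep the instrumentation decoupled from whichever pydantic major version the openai package happens to pin.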
name: instrumentation-openai-v2-pydantic1 3.13 Ubuntu
runs-on: ubuntu-latest
timeout-minutes: 30
steps:
- name: Checkout repo @ SHA - ${{ github.sha }}
uses: actions/checkout@v4

- name: Set up Python 3.13
uses: actions/setup-python@v5
with:
python-version: "3.13"

- name: Install tox
run: pip install tox-uv

- name: Run tests
run: tox -e py313-test-instrumentation-openai-v2-pydantic1 -- -ra

pypy3-test-instrumentation-openai-v2-oldest_ubuntu-latest:
name: instrumentation-openai-v2-oldest pypy-3.10 Ubuntu
runs-on: ubuntu-latest
@@ -28,6 +28,8 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Add strongly typed Responses API extractors with validation and content
extraction improvements
([#4337](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/4337))
- Add instrumentation for OpenAI Responses API `create`
([#4474](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/4474))

## Version 2.3b0 (2025-12-24)

@@ -40,6 +40,7 @@
---
"""

from importlib import import_module
from typing import Collection

from wrapt import wrap_function_wrapper
@@ -70,6 +71,9 @@
chat_completions_create_v_old,
embeddings_create,
)
from .patch_responses import (
responses_create,
)


class OpenAIInstrumentor(BaseInstrumentor):
@@ -159,10 +163,33 @@ def _instrument(self, **kwargs):
),
)

responses_module = _get_responses_module()
# Responses instrumentation is intentionally limited to the latest
# experimental semconv path. Unlike chat completions, we do not carry
# a second legacy wrapper here; the current implementation is built on
# the inference handler lifecycle and would need a separate old-path
# implementation to support legacy semconv mode.
if responses_module is not None and latest_experimental_enabled:
wrap_function_wrapper(
"openai.resources.responses.responses",
"Responses.create",
responses_create(handler, content_mode),
)

def _uninstrument(self, **kwargs):
import openai # pylint: disable=import-outside-toplevel # noqa: PLC0415

unwrap(openai.resources.chat.completions.Completions, "create")
unwrap(openai.resources.chat.completions.AsyncCompletions, "create")
unwrap(openai.resources.embeddings.Embeddings, "create")
unwrap(openai.resources.embeddings.AsyncEmbeddings, "create")
responses_module = _get_responses_module()
if responses_module is not None:
unwrap(responses_module.Responses, "create")


def _get_responses_module():
try:
return import_module("openai.resources.responses.responses")
except ImportError:
return None
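The guarded import in `_get_responses_module` is a general pattern for optional API surfaces; a standalone sketch (not part of the diff):

```python
from importlib import import_module


def optional_module(name: str):
    # Return the module if importable, else None -- older openai
    # releases without the Responses resource simply skip wrapping.
    try:
        return import_module(name)
    except ImportError:
        return None
```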
@@ -0,0 +1,71 @@
# Copyright The OpenTelemetry Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import annotations

from opentelemetry.util.genai.handler import TelemetryHandler
from opentelemetry.util.genai.types import ContentCapturingMode, Error

from .response_extractors import (
apply_request_attributes,
extract_params,
get_inference_creation_kwargs,
set_invocation_response_attributes,
)
from .response_wrappers import ResponseStreamWrapper
from .utils import is_streaming


def responses_create(
handler: TelemetryHandler,
content_capturing_mode: ContentCapturingMode,
):
"""Wrap the `create` method of the `Responses` class to trace it."""

capture_content = content_capturing_mode != ContentCapturingMode.NO_CONTENT

def traced_method(wrapped, instance, args, kwargs):
params = extract_params(**kwargs)
invocation = handler.start_inference(
**get_inference_creation_kwargs(params, instance)
)
apply_request_attributes(invocation, params, capture_content)

try:
result = wrapped(*args, **kwargs)
parsed_result = _get_response_stream_result(result)

if is_streaming(kwargs):
return ResponseStreamWrapper(
parsed_result,
invocation,
capture_content,
)

set_invocation_response_attributes(
invocation, parsed_result, capture_content
)
invocation.stop()
return result
except Exception as error:
invocation.fail(Error(type=type(error), message=str(error)))
raise

return traced_method


def _get_response_stream_result(result):
if hasattr(result, "parse"):
return result.parse()
return result
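The `traced_method` above follows wrapt's `(wrapped, instance, args, kwargs)` wrapper convention, which `wrap_function_wrapper` expects. A minimal illustration of that calling shape, decoupled from the telemetry handler (the callback names are hypothetical):

```python
def make_wrapper(on_start, on_end):
    # Same signature convention wrap_function_wrapper expects:
    # wrapped is the original callable, instance its bound object.
    def traced_method(wrapped, instance, args, kwargs):
        on_start(kwargs)
        try:
            result = wrapped(*args, **kwargs)
            on_end(result)
            return result
        except Exception as error:
            # Mirrors the invocation.fail(...) branch in the real patch.
            on_end(error)
            raise

    return traced_method
```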