Describe the bug
The auto_instrumentation code built into the Python OTel layer produces several hundred lines of error messages on normal startup.

What did you expect to see?
No errors should be logged during a normal Lambda start, and during a normal start the OTel layer should not be responsible for hundreds of log lines of any level. Excessive or spurious logging makes it harder to find any real problems.

What did you see instead?
After the initial normal logs, many errors and tracebacks are logged for the various modules the auto-instrumentation fails to import.
| 1751571035883 | INIT_START Runtime Version: python:3.13.v48 Runtime Version ARN: arn:aws:lambda:us-east-2::runtime:f10b100a00b6c2fc7052100b08a13d8c6dc4176c7c01a522d2fb9c948bab31f0 |
| 1751571036143 | {"level":"info","ts":1751571036.1437285,"msg":"Launching OpenTelemetry Lambda extension","version":"v0.117.0"} |
| 1751571036154 | {"level":"info","ts":1751571036.1539807,"logger":"telemetryAPI.Listener","msg":"Listening for requests","address":"sandbox.localdomain:59159"} |
| 1751571036154 | {"level":"info","ts":1751571036.1549685,"logger":"telemetryAPI.Client","msg":"Subscribing","baseURL":"http://127.0.0.1:9001/2022-07-01/telemetry"} |
| 1751571036155 | TELEMETRY Name: collector State: Subscribed Types: [Platform] |
| 1751571036156 | {"level":"info","ts":1751571036.1561928,"logger":"telemetryAPI.Client","msg":"Subscription success","response":"\"OK\""} |
| 1751571036159 | {"level":"info","ts":1751571036.1598654,"logger":"NewCollector","msg":"Using default config URI","uri":"/opt/collector-config/config.yaml"} |
| 1751571036176 | {"level":"info","ts":1751571036.176668,"caller":"service@v0.117.0/service.go:164","msg":"Setting up own telemetry..."} |
| 1751571036192 | {"level":"warn","ts":1751571036.1919985,"caller":"service@v0.117.0/service.go:213","msg":"service::telemetry::metrics::address is being deprecated in favor of service::telemetry::metrics::readers"} |
| 1751571036192 | {"level":"info","ts":1751571036.192041,"caller":"telemetry/metrics.go:70","msg":"Serving metrics","address":"localhost:8888","metrics level":"Normal"} |
| 1751571036212 | {"level":"info","ts":1751571036.2065032,"caller":"builders/builders.go:26","msg":"Development component. May change in the future.","kind":"exporter","data_type":"metrics","name":"debug"} |
| 1751571036234 | {"level":"info","ts":1751571036.234155,"caller":"service@v0.117.0/service.go:230","msg":"Starting aws-otel-lambda...","Version":"v0.117.0","NumCPU":2} |
| 1751571036234 | {"level":"info","ts":1751571036.2342062,"caller":"extensions/extensions.go:39","msg":"Starting extensions..."} |
| 1751571036234 | {"level":"info","ts":1751571036.2343183,"caller":"otlpreceiver@v0.117.0/otlp.go:112","msg":"Starting GRPC server","kind":"receiver","name":"otlp","data_type":"metrics","endpoint":"localhost:4317"} |
| 1751571036234 | {"level":"info","ts":1751571036.2344594,"caller":"otlpreceiver@v0.117.0/otlp.go:169","msg":"Starting HTTP server","kind":"receiver","name":"otlp","data_type":"metrics","endpoint":"localhost:4318"} |
| 1751571036234 | {"level":"info","ts":1751571036.234538,"caller":"service@v0.117.0/service.go:253","msg":"Everything is ready. Begin running and processing data."} |
| 1751571036930 | Importing of aiohttp-client failed, skipping it |
| 1751571036930 | Traceback (most recent call last): |
| 1751571036930 | File "/opt/python/opentelemetry/instrumentation/auto_instrumentation/_load.py", line 74, in _load_instrumentors |
| 1751571036930 | distro.load_instrumentor( |
| 1751571036930 | ~~~~~~~~~~~~~~~~~~~~~~~~^ |
| 1751571036930 | entry_point, raise_exception_on_conflict=True |
| 1751571036930 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
| 1751571036930 | ) |
| 1751571036930 | ^ |
| 1751571036930 | File "/opt/python/opentelemetry/instrumentation/distro.py", line 61, in load_instrumentor |
| 1751571036930 | instrumentor: BaseInstrumentor = entry_point.load() |
| 1751571036930 | ~~~~~~~~~~~~~~~~^^ |
| 1751571036930 | File "/opt/python/importlib_metadata/__init__.py", line 189, in load |
| 1751571036930 | module = import_module(match.group('module')) |
| 1751571036930 | File "/var/lang/lib/python3.13/importlib/__init__.py", line 88, in import_module |
| 1751571036930 | return _bootstrap._gcd_import(name[level:], package, level) |
| 1751571036930 | ~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
| 1751571036930 | File "<frozen importlib._bootstrap>", line 1387, in _gcd_import |
| 1751571036930 | File "<frozen importlib._bootstrap>", line 1360, in _find_and_load |
| 1751571036930 | File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked |
| 1751571036930 | File "<frozen importlib._bootstrap>", line 935, in _load_unlocked |
| 1751571036930 | File "<frozen importlib._bootstrap_external>", line 1026, in exec_module |
| 1751571036930 | File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed |
| 1751571036930 | File "/opt/python/opentelemetry/instrumentation/aiohttp_client/__init__.py", line 95, in <module> |
Many more import errors follow; that's just the first.
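For illustration, the failure mode can be sketched as follows. This is a hypothetical simplification of the layer's instrumentor loading (the real logic lives in `opentelemetry/instrumentation/auto_instrumentation/_load.py`, shown in the traceback above); the function name and module list here are made up for the demo. The point is that each registered instrumentor is imported eagerly, and every target library not deployed with the function surfaces as a logged full traceback rather than a quiet skip:

```python
import importlib
import logging

logger = logging.getLogger("opentelemetry.instrumentation.auto_instrumentation._load")

def load_instrumentors(entry_point_modules):
    """Import each instrumentor module, logging (not raising) on failure."""
    failed = []
    for module_name in entry_point_modules:
        try:
            importlib.import_module(module_name)
        except ImportError:
            # This is the step that emits "Importing of <name> failed,
            # skipping it" plus a full traceback for every library that is
            # not deployed with the function, producing the noise above.
            logger.exception("Importing of %s failed, skipping it", module_name)
            failed.append(module_name)
    return failed

# aiohttp is typically not packaged with the function, so that instrumentor's
# import fails just like in the startup logs; a quiet debug-level skip would
# be enough here.
load_instrumentors(["some_library_not_in_the_deployment_package"])
```

One traceback per missing library, multiplied across the dozens of bundled instrumentations, accounts for the hundreds of lines observed.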
name: Bug report
about: Create a report to help us improve
title: Excessive number of errors logged during normal python lambda startup
labels: bug
assignees: ''

Steps to reproduce
Run a Python Lambda function with the OTel layer attached and AWS_LAMBDA_EXEC_WRAPPER set. I tested with version 1-32-0:2.

What version of collector/language SDK version did you use?
Version: arn:aws:lambda:us-east-2:901920570463:layer:aws-otel-python-amd64-ver-1-32-0:2

What language layer did you use?
Config: Python

Additional context
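As a possible mitigation, upstream OpenTelemetry Python honors the `OTEL_PYTHON_DISABLED_INSTRUMENTATIONS` environment variable (a comma-separated list of instrumentation entry-point names), so listing the instrumentations whose target libraries are not deployed should prevent those import attempts. A minimal sketch of how the skip check would behave, assuming the layer applies this variable the same way the upstream loader does (the `should_load` helper is invented here for illustration):

```python
import os

# Hypothetical Lambda environment configuration: disable instrumentations
# whose target libraries are not in the deployment package.
os.environ["OTEL_PYTHON_DISABLED_INSTRUMENTATIONS"] = "aiohttp-client,flask,django"

# Mirrors how the loader splits the comma-separated value.
disabled = {
    name.strip()
    for name in os.environ["OTEL_PYTHON_DISABLED_INSTRUMENTATIONS"].split(",")
}

def should_load(entry_point_name):
    # Disabled entries are never imported, so no traceback is logged for them.
    return entry_point_name not in disabled
```

This only works around the noise per function, though; the layer itself should degrade gracefully when a library is absent.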