Automatically instrument common libraries for seamless tracing.
Botanu leverages OpenTelemetry's auto-instrumentation ecosystem. When enabled, your HTTP clients, web frameworks, databases, and LLM providers are automatically traced without code changes.
```python
from botanu import enable

enable(
    service_name="my-service",
    auto_instrument=True,  # Default
)
```

Or with specific packages:
```python
enable(
    service_name="my-service",
    auto_instrument_packages=["requests", "fastapi", "openai_v2"],
)
```

| Library | Package | Notes |
|---|---|---|
| requests | opentelemetry-instrumentation-requests | Sync HTTP |
| httpx | opentelemetry-instrumentation-httpx | Sync/async HTTP |
| urllib3 | opentelemetry-instrumentation-urllib3 | Low-level HTTP |
| aiohttp | opentelemetry-instrumentation-aiohttp-client | Async HTTP |

| Framework | Package | Notes |
|---|---|---|
| FastAPI | opentelemetry-instrumentation-fastapi | ASGI framework |
| Flask | opentelemetry-instrumentation-flask | WSGI framework |
| Django | opentelemetry-instrumentation-django | Full-stack framework |
| Starlette | opentelemetry-instrumentation-starlette | ASGI toolkit |

| Database | Package | Notes |
|---|---|---|
| SQLAlchemy | opentelemetry-instrumentation-sqlalchemy | ORM/Core |
| psycopg2 | opentelemetry-instrumentation-psycopg2 | PostgreSQL |
| asyncpg | opentelemetry-instrumentation-asyncpg | Async PostgreSQL |
| pymongo | opentelemetry-instrumentation-pymongo | MongoDB |
| redis | opentelemetry-instrumentation-redis | Redis |

| System | Package | Notes |
|---|---|---|
| Celery | opentelemetry-instrumentation-celery | Task queue |
| kafka-python | opentelemetry-instrumentation-kafka-python | Kafka client |

| Provider | Package | Notes |
|---|---|---|
| OpenAI | opentelemetry-instrumentation-openai-v2 | ChatGPT, GPT-4 |
| Anthropic | opentelemetry-instrumentation-anthropic | Claude |
| Vertex AI | opentelemetry-instrumentation-vertexai | Google Vertex |
| Google GenAI | opentelemetry-instrumentation-google-genai | Gemini |
| LangChain | opentelemetry-instrumentation-langchain | LangChain |

| Library | Package | Notes |
|---|---|---|
| gRPC | opentelemetry-instrumentation-grpc | RPC framework |
| logging | opentelemetry-instrumentation-logging | Python logging |

Install the instrumentation packages you need:
```shell
# Full suite
pip install "botanu[instruments,genai]"

# Or individual packages
pip install opentelemetry-instrumentation-fastapi
pip install opentelemetry-instrumentation-openai-v2
```

- At startup, Botanu calls each instrumentor's `instrument()` method
- Instrumented libraries automatically create spans for operations
- `RunContextEnricher` adds `run_id` to every span via baggage
- All spans are linked to the current run, enabling cost attribution
```python
import requests
from openai import AsyncOpenAI
from botanu import enable, botanu_use_case

enable(service_name="my-service")
openai = AsyncOpenAI()

@botanu_use_case("Customer Support")
async def handle_ticket(ticket_id: str):
    # requests.get() automatically creates a span with run_id
    context = requests.get(f"https://api.example.com/tickets/{ticket_id}")

    # OpenAI call automatically creates a span with tokens, model, etc.
    response = await openai.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": context.text}]
    )
    return response
```

Auto-instrumented HTTP clients automatically propagate context:
```python
@botanu_use_case("Distributed Workflow")
async def orchestrate():
    # Baggage (run_id, use_case) is injected into request headers
    response = requests.get("https://service-b.example.com/process")
    # Service B extracts baggage and continues the trace
```

Headers injected:

```
traceparent: 00-{trace_id}-{span_id}-01
baggage: botanu.run_id=019abc12...,botanu.use_case=Distributed%20Workflow
```
```python
from opentelemetry.instrumentation.requests import RequestsInstrumentor

# Exclude health checks from tracing (comma-separated string of URL patterns)
RequestsInstrumentor().instrument(
    excluded_urls="health,metrics"
)
```

```python
def request_hook(span, request):
    span.set_attribute("http.request.custom_header", request.headers.get("X-Custom"))

def response_hook(span, request, response):
    span.set_attribute("http.response.custom_header", response.headers.get("X-Custom"))

RequestsInstrumentor().instrument(
    request_hook=request_hook,
    response_hook=response_hook,
)
```

Automatically captures:
- Model name and parameters
- Token usage (input, output, cached)
- Request/response IDs
- Streaming status
- Tool/function calls
```python
# Automatically traced
response = await openai.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello"}]
)
```

Span attributes:
```
gen_ai.operation.name: chat
gen_ai.provider.name: openai
gen_ai.request.model: gpt-4
gen_ai.usage.input_tokens: 10
gen_ai.usage.output_tokens: 25
```
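Because token counts land directly on span attributes, per-call cost can be derived from them downstream. A minimal sketch — the `PRICES` table and `span_cost` helper are hypothetical illustrations, not part of Botanu's API, and real per-token prices vary by provider and model:

```python
# Hypothetical per-1K-token prices (illustrative only)
PRICES = {
    "gpt-4": {"input": 0.03, "output": 0.06},
}

def span_cost(attributes: dict) -> float:
    """Estimate cost in USD from a GenAI span's token attributes."""
    model = attributes.get("gen_ai.request.model", "")
    price = PRICES.get(model)
    if price is None:
        return 0.0
    input_tokens = attributes.get("gen_ai.usage.input_tokens", 0)
    output_tokens = attributes.get("gen_ai.usage.output_tokens", 0)
    return (input_tokens * price["input"] + output_tokens * price["output"]) / 1000

attrs = {
    "gen_ai.request.model": "gpt-4",
    "gen_ai.usage.input_tokens": 10,
    "gen_ai.usage.output_tokens": 25,
}
print(round(span_cost(attrs), 6))  # 0.0018
```

Grouping such per-span costs by the `botanu.run_id` attribute is what makes run-level cost attribution possible.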
Automatically captures:
- Model and version
- Token usage with cache breakdown
- Stop reason
```python
# Automatically traced
response = await anthropic.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello"}]
)
```

Traces the full chain execution:

```python
# Each step is traced
chain = prompt | llm | parser
result = await chain.ainvoke({"input": "Hello"})
```

Auto-instrumentation works alongside manual tracking:
```python
import requests
from botanu import botanu_use_case, emit_outcome
from botanu.tracking.llm import track_llm_call

@botanu_use_case("Hybrid Workflow")
async def hybrid_example():
    # Auto-instrumented HTTP call
    data = requests.get("https://api.example.com/data")

    # Manual tracking for a custom provider
    with track_llm_call(provider="custom-llm", model="my-model") as tracker:
        response = await custom_llm_call(data.json())
        tracker.set_tokens(input_tokens=100, output_tokens=200)

    # Auto-instrumented database call
    await database.execute("INSERT INTO results VALUES (?)", response)

    emit_outcome("success")
```

To disable auto-instrumentation entirely, pass an empty package list:

```python
enable(
    service_name="my-service",
    auto_instrument_packages=[],  # Empty list
)
```

To instrument only specific packages:

```python
enable(
    service_name="my-service",
    auto_instrument_packages=["fastapi", "openai_v2"],  # Only these
)
```

If auto-instrumented spans are not appearing:

1. Check the library is installed:

   ```shell
   pip list | grep opentelemetry-instrumentation
   ```

2. Verify instrumentation is enabled:

   ```python
   from opentelemetry.instrumentation.requests import RequestsInstrumentor

   print(RequestsInstrumentor().is_instrumented_by_opentelemetry)
   ```

3. Ensure `enable()` is called before library imports:

   ```python
   from botanu import enable

   enable(service_name="my-service")

   # Import after enable()
   import requests
   ```

4. Check that the baggage propagator is configured:

   ```python
   from opentelemetry import propagate

   print(propagate.get_global_textmap())
   # Should include W3CBaggagePropagator
   ```

- Existing OTel Setup - Integration with existing OTel
- Collector Configuration - Collector setup
- Context Propagation - How context flows