OpenTelemetry-native run-level cost attribution for AI workflows.
Botanu adds runs on top of distributed tracing. A run represents a single business transaction that may span multiple LLM calls, database queries, and services. By correlating all operations to a stable run_id, you get accurate cost attribution without sampling artifacts.
```python
from botanu import enable, botanu_use_case, emit_outcome

enable(service_name="my-app")

@botanu_use_case(name="Customer Support")
async def handle_ticket(ticket_id: str):
    # All LLM calls, DB queries, and HTTP requests inside
    # are automatically instrumented and linked to this run
    context = await fetch_context(ticket_id)
    response = await generate_response(context)
    emit_outcome("success", value_type="tickets_resolved", value_amount=1)
    return response
```

That's it. All operations within the use case are automatically tracked.
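Under the hood, correlating every operation to a run amounts to propagating a stable `run_id` through the call context. A minimal stdlib-only sketch of that idea (illustrative only; `run_id_var` and `use_case` are hypothetical names, not Botanu's actual API):

```python
import asyncio
import contextvars
import functools
import uuid

# Hypothetical illustration: a context variable carries the run_id, so any
# operation awaited inside the decorated function can read the same value.
run_id_var = contextvars.ContextVar("run_id", default=None)

def use_case(name):
    def decorator(fn):
        @functools.wraps(fn)
        async def wrapper(*args, **kwargs):
            token = run_id_var.set(str(uuid.uuid4()))  # one run per invocation
            try:
                return await fn(*args, **kwargs)
            finally:
                run_id_var.reset(token)  # run ends with the function
        return wrapper
    return decorator

@use_case("Customer Support")
async def handle_ticket(ticket_id):
    # Anything called here sees the same run_id and could attach it to spans
    return (ticket_id, run_id_var.get())

ticket, run_id = asyncio.run(handle_ticket("T-1"))
```

In the real SDK this correlation rides on OpenTelemetry context propagation rather than a bare `ContextVar`, which is what lets it cross service boundaries.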
```bash
pip install "botanu[all]"
```

| Extra | Description |
|---|---|
| `sdk` | OpenTelemetry SDK + OTLP exporter |
| `instruments` | Auto-instrumentation for HTTP, databases |
| `genai` | Auto-instrumentation for LLM providers |
| `all` | All of the above (recommended) |
When you install `botanu[all]`, the following are automatically tracked:
- LLM Providers — OpenAI, Anthropic, Vertex AI, Bedrock, Azure OpenAI
- Databases — PostgreSQL, MySQL, SQLite, MongoDB, Redis
- HTTP — requests, httpx, urllib3, aiohttp
- Frameworks — FastAPI, Flask, Django, Starlette
- Messaging — Celery, Kafka
No manual instrumentation required.
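Auto-instrumentation libraries generally work by wrapping a client library's methods so that each call is timed and recorded without changes to application code. A simplified stdlib sketch of the pattern (the `record_span` recorder and `FakeLLMClient` are hypothetical stand-ins, not Botanu internals):

```python
import functools
import time

calls = []  # stand-in for a span exporter

def record_span(name, fn):
    """Wrap fn so every call is timed and recorded, preserving the result."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            calls.append({"name": name, "duration_s": time.perf_counter() - start})
    return wrapper

# Patching a (fake) client in place, the way an instrumentor would:
class FakeLLMClient:
    def complete(self, prompt):
        return f"echo: {prompt}"

FakeLLMClient.complete = record_span("llm.complete", FakeLLMClient.complete)
result = FakeLLMClient().complete("hi")
```

The real instrumentors additionally attach the active trace context to each recorded span, which is how calls end up linked to the enclosing run.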
For large-scale deployments, use zero-code instrumentation via OTel Operator:
```yaml
metadata:
  annotations:
    instrumentation.opentelemetry.io/inject-python: "true"
```

See the Kubernetes Deployment Guide for details.
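The inject annotation assumes an `Instrumentation` custom resource already exists in the cluster. A minimal example of that resource, with a placeholder Collector endpoint you would adjust to your deployment:

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: python-instrumentation
spec:
  exporter:
    # Placeholder; point this at your OpenTelemetry Collector service
    endpoint: http://otel-collector:4317
```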
- Python 3.9+
- OpenTelemetry Collector (recommended for production)
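For the Collector, a minimal pipeline that receives OTLP traces and prints them is enough to verify data is flowing; this is an illustrative sketch using standard Collector components, and the exporter choice is yours:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:
exporters:
  debug: {}
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [debug]
```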
See CONTRIBUTING.md. This project uses DCO sign-off.
```bash
git commit -s -m "Your commit message"
```

This project is an LF AI & Data Foundation project.