@@ -11,12 +11,6 @@ OpenTelemetry-native run-level cost attribution for AI workflows.

Botanu adds **runs** on top of distributed tracing. A run represents a single business transaction that may span multiple LLM calls, database queries, and services. By correlating all operations to a stable `run_id`, you get accurate cost attribution without sampling artifacts.

-**Key features:**
-- **Run-level attribution** — Link all costs to business outcomes
-- **Cross-service correlation** — W3C Baggage propagation
-- **OTel-native** — Works with any OpenTelemetry-compatible backend
-- **GenAI support** — OpenAI, Anthropic, Vertex AI, and more
-
## Quick Start

```python
@@ -26,64 +20,57 @@ enable(service_name="my-app")

@botanu_use_case(name="Customer Support")
async def handle_ticket(ticket_id: str):
+    # All LLM calls, DB queries, and HTTP requests inside
+    # are automatically instrumented and linked to this run
    context = await fetch_context(ticket_id)
    response = await generate_response(context)
    emit_outcome("success", value_type="tickets_resolved", value_amount=1)
    return response
```

+That's it. All operations within the use case are automatically tracked.
+
## Installation

```bash
-pip install botanu             # Core SDK
-pip install "botanu[sdk]"      # With OTel SDK + OTLP exporter
-pip install "botanu[genai]"    # With GenAI instrumentation
-pip install "botanu[all]"      # Everything included
+pip install "botanu[all]"
```
4338
| Extra | Description |
|-------|-------------|
-| `sdk` | OpenTelemetry SDK + OTLP HTTP exporter |
+| `sdk` | OpenTelemetry SDK + OTLP exporter |
| `instruments` | Auto-instrumentation for HTTP, databases |
-| `genai` | GenAI provider instrumentation |
-| `carriers` | Cross-service propagation (Celery, Kafka) |
-| `all` | All extras |
+| `genai` | Auto-instrumentation for LLM providers |
+| `all` | All of the above (recommended) |

-## LLM Tracking
+## What Gets Auto-Instrumented

-```python
-from botanu.tracking.llm import track_llm_call
-
-with track_llm_call(provider="openai", model="gpt-4") as tracker:
-    response = await openai.chat.completions.create(
-        model="gpt-4",
-        messages=[{"role": "user", "content": "Hello"}]
-    )
-    tracker.set_tokens(
-        input_tokens=response.usage.prompt_tokens,
-        output_tokens=response.usage.completion_tokens,
-    )
-```
+When you install `botanu[all]`, the following are automatically tracked:

-## Data Tracking
+- **LLM Providers** — OpenAI, Anthropic, Vertex AI, Bedrock, Azure OpenAI
+- **Databases** — PostgreSQL, MySQL, SQLite, MongoDB, Redis
+- **HTTP** — requests, httpx, urllib3, aiohttp
+- **Frameworks** — FastAPI, Flask, Django, Starlette
+- **Messaging** — Celery, Kafka

-```python
-from botanu.tracking.data import track_db_operation, track_storage_operation
+No manual instrumentation required.
+
+## Kubernetes Deployment

-with track_db_operation(system="postgresql", operation="SELECT") as db:
-    result = await cursor.execute(query)
-    db.set_result(rows_returned=len(result))
+For large-scale deployments, use zero-code instrumentation via the OTel Operator:

-with track_storage_operation(system="s3", operation="PUT") as storage:
-    await s3.put_object(Bucket="bucket", Key="key", Body=data)
-    storage.set_result(bytes_written=len(data))
+```yaml
+metadata:
+  annotations:
+    instrumentation.opentelemetry.io/inject-python: "true"
```

+See the [Kubernetes Deployment Guide](./docs/integration/kubernetes.md) for details.
+
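For context, the `inject-python` annotation only takes effect if the cluster runs the OpenTelemetry Operator with an `Instrumentation` resource defined. A minimal sketch of such a resource, per the Operator's CRD schema (the resource name and collector endpoint are illustrative assumptions):

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: python-instrumentation        # illustrative name
spec:
  exporter:
    endpoint: http://otel-collector:4318   # illustrative collector endpoint
  propagators:
    - tracecontext
    - baggage   # baggage propagation carries the run correlation across services
```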
## Documentation

- [Getting Started](./docs/getting-started/)
- [Concepts](./docs/concepts/)
-- [Tracking Guides](./docs/tracking/)
- [Integration](./docs/integration/)
- [API Reference](./docs/api/)
