Commit eb56528
Merge pull request #14 from botanu-ai/developer-deborah
docs: add Kubernetes deployment guide for zero-code instrumentation
2 parents 21c0cf5 + be33b8d

3 files changed: +394 −50 lines changed


README.md

Lines changed: 25 additions & 38 deletions
````diff
@@ -11,12 +11,6 @@ OpenTelemetry-native run-level cost attribution for AI workflows.
 
 Botanu adds **runs** on top of distributed tracing. A run represents a single business transaction that may span multiple LLM calls, database queries, and services. By correlating all operations to a stable `run_id`, you get accurate cost attribution without sampling artifacts.
 
-**Key features:**
-- **Run-level attribution** — Link all costs to business outcomes
-- **Cross-service correlation** — W3C Baggage propagation
-- **OTel-native** — Works with any OpenTelemetry-compatible backend
-- **GenAI support** — OpenAI, Anthropic, Vertex AI, and more
-
 ## Quick Start
 
 ```python
@@ -26,64 +20,57 @@ enable(service_name="my-app")
 
 @botanu_use_case(name="Customer Support")
 async def handle_ticket(ticket_id: str):
+    # All LLM calls, DB queries, and HTTP requests inside
+    # are automatically instrumented and linked to this run
     context = await fetch_context(ticket_id)
     response = await generate_response(context)
     emit_outcome("success", value_type="tickets_resolved", value_amount=1)
     return response
 ```
 
+That's it. All operations within the use case are automatically tracked.
+
 ## Installation
 
 ```bash
-pip install botanu            # Core SDK
-pip install "botanu[sdk]"     # With OTel SDK + OTLP exporter
-pip install "botanu[genai]"   # With GenAI instrumentation
-pip install "botanu[all]"     # Everything included
+pip install "botanu[all]"
 ```
 
 | Extra | Description |
 |-------|-------------|
-| `sdk` | OpenTelemetry SDK + OTLP HTTP exporter |
+| `sdk` | OpenTelemetry SDK + OTLP exporter |
 | `instruments` | Auto-instrumentation for HTTP, databases |
-| `genai` | GenAI provider instrumentation |
-| `carriers` | Cross-service propagation (Celery, Kafka) |
-| `all` | All extras |
+| `genai` | Auto-instrumentation for LLM providers |
+| `all` | All of the above (recommended) |
 
-## LLM Tracking
+## What Gets Auto-Instrumented
 
-```python
-from botanu.tracking.llm import track_llm_call
-
-with track_llm_call(provider="openai", model="gpt-4") as tracker:
-    response = await openai.chat.completions.create(
-        model="gpt-4",
-        messages=[{"role": "user", "content": "Hello"}]
-    )
-    tracker.set_tokens(
-        input_tokens=response.usage.prompt_tokens,
-        output_tokens=response.usage.completion_tokens,
-    )
-```
+When you install `botanu[all]`, the following are automatically tracked:
 
-## Data Tracking
+- **LLM Providers** — OpenAI, Anthropic, Vertex AI, Bedrock, Azure OpenAI
+- **Databases** — PostgreSQL, MySQL, SQLite, MongoDB, Redis
+- **HTTP** — requests, httpx, urllib3, aiohttp
+- **Frameworks** — FastAPI, Flask, Django, Starlette
+- **Messaging** — Celery, Kafka
 
-```python
-from botanu.tracking.data import track_db_operation, track_storage_operation
+No manual instrumentation required.
+
+## Kubernetes Deployment
 
-with track_db_operation(system="postgresql", operation="SELECT") as db:
-    result = await cursor.execute(query)
-    db.set_result(rows_returned=len(result))
+For large-scale deployments, use zero-code instrumentation via OTel Operator:
 
-with track_storage_operation(system="s3", operation="PUT") as storage:
-    await s3.put_object(Bucket="bucket", Key="key", Body=data)
-    storage.set_result(bytes_written=len(data))
+```yaml
+metadata:
+  annotations:
+    instrumentation.opentelemetry.io/inject-python: "true"
 ```
 
+See [Kubernetes Deployment Guide](./docs/integration/kubernetes.md) for details.
+
 ## Documentation
 
 - [Getting Started](./docs/getting-started/)
 - [Concepts](./docs/concepts/)
-- [Tracking Guides](./docs/tracking/)
 - [Integration](./docs/integration/)
 - [API Reference](./docs/api/)
````
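For context on the annotation added in the new "Kubernetes Deployment" section: with the OpenTelemetry Operator, the annotation goes on a workload's pod template, not on the Deployment itself. A minimal sketch of where it lands (names and image are hypothetical, and this assumes the Operator and a matching `Instrumentation` resource are already installed in the cluster):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: support-agent            # hypothetical workload name
spec:
  selector:
    matchLabels:
      app: support-agent
  template:
    metadata:
      annotations:
        # Tells the Operator's admission webhook to inject
        # the Python auto-instrumentation into this pod
        instrumentation.opentelemetry.io/inject-python: "true"
      labels:
        app: support-agent
    spec:
      containers:
        - name: app
          image: registry.example.com/support-agent:latest  # hypothetical image
```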

docs/index.md

Lines changed: 11 additions & 12 deletions
Original file line numberDiff line numberDiff line change
@@ -31,6 +31,7 @@ Botanu introduces **run-level attribution**: a unique `run_id` that follows your
3131
### Integration
3232

3333
- [Auto-Instrumentation](integration/auto-instrumentation.md) - Automatic instrumentation for common libraries
34+
- [Kubernetes Deployment](integration/kubernetes.md) - Zero-code instrumentation at scale
3435
- [Existing OTel Setup](integration/existing-otel.md) - Integrate with existing OpenTelemetry deployments
3536
- [Collector Configuration](integration/collector.md) - Configure the OpenTelemetry Collector
3637

@@ -48,21 +49,19 @@ Botanu introduces **run-level attribution**: a unique `run_id` that follows your
4849
## Quick Example
4950

5051
```python
51-
from botanu import init_botanu, botanu_use_case
52-
from botanu.tracking.llm import track_llm_call
52+
from botanu import enable, botanu_use_case, emit_outcome
5353

54-
# Initialize once at startup
55-
init_botanu(service_name="support-agent")
54+
enable(service_name="support-agent")
5655

5756
@botanu_use_case("Customer Support")
58-
def handle_ticket(ticket_id: str):
59-
# Every operation inside gets the same run_id
60-
with track_llm_call(provider="openai", model="gpt-4") as tracker:
61-
response = openai.chat.completions.create(...)
62-
tracker.set_tokens(
63-
input_tokens=response.usage.prompt_tokens,
64-
output_tokens=response.usage.completion_tokens,
65-
)
57+
async def handle_ticket(ticket_id: str):
58+
# All LLM calls, DB queries, and HTTP requests are auto-instrumented
59+
context = await fetch_context(ticket_id)
60+
response = await openai.chat.completions.create(
61+
model="gpt-4",
62+
messages=[{"role": "user", "content": context}]
63+
)
64+
emit_outcome("success", value_type="tickets_resolved", value_amount=1)
6665
return response
6766
```
6867
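Both files describe a `run_id` that "follows your" transaction across services; per the README this correlation rides on W3C Baggage. As a rough illustration of the header format involved (this is not Botanu's code — a stdlib-only sketch assuming plain `key=value` list members without properties):

```python
from urllib.parse import quote, unquote

def encode_baggage(entries: dict) -> str:
    # W3C Baggage header: comma-separated key=value members,
    # with values percent-encoded
    return ",".join(f"{k}={quote(v, safe='')}" for k, v in entries.items())

def decode_baggage(header: str) -> dict:
    # Inverse: split members, decode values
    entries = {}
    for member in header.split(","):
        key, _, value = member.strip().partition("=")
        entries[key] = unquote(value)
    return entries

header = encode_baggage({"botanu.run_id": "run-123"})
# → "botanu.run_id=run-123", carried on the outgoing HTTP `baggage` header
assert decode_baggage(header) == {"botanu.run_id": "run-123"}
```

In practice an OTel propagator does this encoding for every outbound request, which is how each downstream service recovers the same `run_id`.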

0 commit comments
