10 changes: 10 additions & 0 deletions .env.example
@@ -50,3 +50,13 @@ SGR__PROMPTS__CLARIFICATION_RESPONSE_FILE=path/to/your/clarification_response.txt
# =======================================================
# Note: MCP configuration is complex and better suited for config.yaml
# See config.yaml.example for MCP server configuration examples

# =======================================================
# Observability: Langfuse integration
# =======================================================
SGR__LANGFUSE__ENABLED=false
# SGR__LANGFUSE__PUBLIC_KEY=pk-lf-xxx
# SGR__LANGFUSE__SECRET_KEY=sk-lf-xxx
# SGR__LANGFUSE__HOST=http://localhost:3000

# Shorthand (credentials via LANGFUSE_* env vars): SGR__LANGFUSE=true
9 changes: 9 additions & 0 deletions config.yaml.example
@@ -10,6 +10,15 @@ llm:
  temperature: 0.4 # Temperature (0.0-1.0)
  # proxy: "socks5://127.0.0.1:1081" # Optional proxy (socks5:// or http://)

# Observability (Langfuse)
# When enabled, AgentFactory will create a Langfuse AsyncOpenAI client instead of the standard AsyncOpenAI.
# Credentials can be set here or via LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY / LANGFUSE_HOST env vars.
langfuse:
  enabled: false
  # public_key: "pk-lf-xxx"
  # secret_key: "sk-lf-xxx"
  # host: "http://localhost:3000"

# Execution Settings
execution:
  max_clarifications: 3 # Max clarification requests
21 changes: 21 additions & 0 deletions docs/en/framework/configuration.md
@@ -67,6 +67,27 @@ config = GlobalConfig.from_yaml("config.yaml")

An example can be found in [`config.yaml.example`](https://github.com/vamplabAI/sgr-agent-core/blob/main/config.yaml.example).

### Observability and Langfuse integration

SGR Agent Core supports optional [Langfuse](https://langfuse.com) integration for LLM tracing:

```yaml
langfuse:
  enabled: true
  public_key: "pk-lf-..."
  secret_key: "sk-lf-..."
  host: "https://cloud.langfuse.com" # or your self-hosted URL
```

Shorthand (when credentials are already in `LANGFUSE_*` env vars):

```yaml
langfuse: true
```
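
The same shorthand works through the environment: `SGR__LANGFUSE=true` (see `.env.example`), with credentials picked up from the native `LANGFUSE_*` variables.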

See the [Langfuse integration guide](langfuse.md) for all connection scenarios
(Langfuse Cloud, self-hosted, LiteLLM proxy), environment variable reference, and troubleshooting.

### Parameter Override

**Key Feature:** `AgentDefinition` inherits all parameters from `GlobalConfig` and overrides only those explicitly specified. This allows creating minimal configurations by specifying only necessary changes.
176 changes: 176 additions & 0 deletions docs/en/framework/langfuse.md
@@ -0,0 +1,176 @@
# Langfuse Integration

[Langfuse](https://langfuse.com) is an open-source observability platform for LLM applications.
When enabled, SGR Agent Core wraps the OpenAI client with Langfuse tracing — every LLM call
is automatically recorded as a trace with inputs, outputs, latency, and token usage.
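
Conceptually, enabling the integration just swaps the client class. A minimal sketch of the idea, assuming the factory builds its client roughly like this (`make_llm_client` is a hypothetical name; `langfuse.openai` is the drop-in tracing wrapper shipped with the Langfuse Python SDK):

```python
from openai import AsyncOpenAI


def make_llm_client(langfuse_enabled: bool, api_key: str, base_url: str):
    """Return a traced client when Langfuse is enabled, a plain one otherwise."""
    if langfuse_enabled:
        # Drop-in replacement: same interface as openai.AsyncOpenAI, but every
        # request is recorded as a Langfuse trace (inputs, outputs, latency, tokens).
        from langfuse.openai import AsyncOpenAI as TracedAsyncOpenAI

        return TracedAsyncOpenAI(api_key=api_key, base_url=base_url)
    return AsyncOpenAI(api_key=api_key, base_url=base_url)
```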

## Quick Start

Add the `langfuse` block to your `config.yaml`:

```yaml
langfuse:
  enabled: true
  public_key: "pk-lf-..."
  secret_key: "sk-lf-..."
  host: "https://cloud.langfuse.com" # or your self-hosted URL
```

That's it. On the next agent run you will see traces appearing in the Langfuse UI.

---

## Connection Scenarios

### Option 1: Langfuse Cloud

The simplest option — use the managed service at [cloud.langfuse.com](https://cloud.langfuse.com).

1. Sign up at [cloud.langfuse.com](https://cloud.langfuse.com) and create a project.
2. Copy **Public Key** and **Secret Key** from *Project Settings → API Keys*.
3. Add to `config.yaml`:

    ```yaml
    langfuse:
      enabled: true
      public_key: "pk-lf-..."
      secret_key: "sk-lf-..."
      host: "https://cloud.langfuse.com"
    ```

!!! note
    `host` defaults to `https://cloud.langfuse.com` in the Langfuse SDK, so you can omit it
    for the cloud deployment. It is shown here explicitly for clarity.

---

### Option 2: Self-Hosted Langfuse

Run Langfuse on your own infrastructure using the official Docker Compose setup.

1. Follow the [self-hosting guide](https://langfuse.com/docs/deployment/self-host) to start Langfuse locally:

    ```bash
    git clone https://github.com/langfuse/langfuse.git
    cd langfuse
    docker compose up -d
    ```

    Langfuse will be available at `http://localhost:3000` by default.

2. Open `http://localhost:3000`, create a project, and copy the API keys.

3. Point SGR at your instance:

    ```yaml
    langfuse:
      enabled: true
      public_key: "pk-lf-..."
      secret_key: "sk-lf-..."
      host: "http://localhost:3000"
    ```

!!! tip
    For production deployments, replace `localhost` with the hostname or IP of your Langfuse server
    (e.g. `https://langfuse.internal.example.com`).
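
To confirm the instance is reachable before running an agent, you can query Langfuse's public health endpoint:

```bash
# Returns a small JSON status payload when the server is healthy
curl http://localhost:3000/api/public/health
```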

---

### Option 3: Via LiteLLM Proxy

[LiteLLM](https://docs.litellm.ai/) can act as a unified proxy in front of multiple LLM providers
and forward traces to Langfuse automatically.

**Request flow:**

```
SGR Agent → LiteLLM Proxy → LLM Provider (OpenAI, etc.)
                 │
                 └─→ Langfuse (traces)
```

1. Configure LiteLLM to forward traces to Langfuse.
   In your LiteLLM `config.yaml`, add the Langfuse callback:

    ```yaml
    # litellm/config.yaml
    litellm_settings:
      success_callback: ["langfuse"]

    environment_variables:
      LANGFUSE_PUBLIC_KEY: "pk-lf-..."
      LANGFUSE_SECRET_KEY: "sk-lf-..."
      LANGFUSE_HOST: "https://cloud.langfuse.com"
    ```

2. In SGR's `config.yaml`, point the LLM base URL at your LiteLLM proxy and **disable** the
   SGR-level Langfuse integration (LiteLLM handles tracing itself):

    ```yaml
    llm:
      api_key: "your-litellm-api-key"
      base_url: "http://localhost:4000" # LiteLLM proxy address
      model: "gpt-4o"

    langfuse:
      enabled: false # LiteLLM handles tracing
    ```
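
With both configs in place, start the proxy before running the agent. A typical invocation with the standard LiteLLM CLI (4000 is LiteLLM's default port and matches the `base_url` above):

```bash
# Serve the LiteLLM proxy using the config from step 1
litellm --config litellm/config.yaml --port 4000
```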

!!! note
    Alternatively, you can enable Langfuse in SGR **and** in LiteLLM at the same time —
    you will get two levels of tracing (SGR-side LLM calls + LiteLLM-side routing).
    In most cases, one level is sufficient.

---

## Environment Variables

All `langfuse` config fields can be set via environment variables using the `SGR__LANGFUSE__*`
prefix:

```bash
SGR__LANGFUSE__ENABLED=true
SGR__LANGFUSE__PUBLIC_KEY=pk-lf-xxx
SGR__LANGFUSE__SECRET_KEY=sk-lf-xxx
SGR__LANGFUSE__HOST=http://localhost:3000
```

Alternatively, when credentials are already set in the native Langfuse environment variables, use the
shorthand to enable the SGR integration without duplicating them:

```bash
# These are read directly by the Langfuse SDK
LANGFUSE_PUBLIC_KEY=pk-lf-xxx
LANGFUSE_SECRET_KEY=sk-lf-xxx
LANGFUSE_HOST=http://localhost:3000
```

```yaml
# config.yaml — shorthand, credentials come from LANGFUSE_* env vars
langfuse: true
```

!!! warning "Loading `.env` files"
    Environment variables in `.env` are **not** loaded automatically by Python.
    SGR's server (`sgr` CLI) loads `.env` on startup via `python-dotenv`.
    If you run SGR as a library, call `load_dotenv()` yourself before initializing `GlobalConfig`.
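
A minimal sketch of the library scenario (assumes `python-dotenv` is installed; the `sgr_agent_core` import path is an assumption, adjust to the actual package):

```python
from dotenv import load_dotenv

# Read .env so SGR__LANGFUSE__* / LANGFUSE_* variables become visible
load_dotenv()

from sgr_agent_core import GlobalConfig  # import path is an assumption

config = GlobalConfig.from_yaml("config.yaml")
```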

---

## Troubleshooting

### `LangfuseImportError` / cannot import `langfuse`

If `langfuse.enabled` is `true` in the configuration but the `langfuse` package is not installed
or cannot be imported, agent startup raises `LangfuseImportError` with a clear message.
Install the project dependencies (`langfuse` is a core dependency of SGR Agent Core) or
disable Langfuse by setting `langfuse.enabled` to `false`.
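
For example, in a pip-based environment:

```bash
# Install the missing package directly; since langfuse is a declared
# dependency, reinstalling the project pulls it in as well
pip install langfuse
```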

### "Authentication error: Langfuse client initialized without public_key"

The Langfuse SDK cannot find credentials. Check the following:

- `public_key` and `secret_key` are set in `config.yaml` under `langfuse:`, **or**
`LANGFUSE_PUBLIC_KEY` / `LANGFUSE_SECRET_KEY` are present in the environment.
- If using `.env`, make sure the server was started via the `sgr` CLI (which loads `.env`)
rather than called directly as a Python module.
20 changes: 20 additions & 0 deletions docs/ru/framework/configuration.md
@@ -68,6 +68,26 @@ config = GlobalConfig.from_yaml("config.yaml")
An example can be found in [`config.yaml.example`](https://github.com/vamplabAI/sgr-agent-core/blob/main/config.yaml.example).


### Observability and Langfuse integration

SGR Agent Core supports optional [Langfuse](https://langfuse.com) integration for tracing LLM calls:

```yaml
langfuse:
  enabled: true
  public_key: "pk-lf-..."
  secret_key: "sk-lf-..."
  host: "https://cloud.langfuse.com" # or your self-hosted URL
```

Shorthand (when the keys are already set in `LANGFUSE_*` env vars):

```yaml
langfuse: true
```

See the [Langfuse integration guide](langfuse.md) for all connection scenarios (Langfuse Cloud, self-hosted, LiteLLM proxy), environment variable reference, and troubleshooting.

### Parameter Override
