# env.example
## Vercel AI SDK (Gemini) - Only used for CHAT, not embeddings
# https://ai-sdk.dev
GOOGLE_GENERATIVE_AI_API_KEY=
# Optional: Gemini model to use for chat (default: gemini-2.5-flash)
# Options:
#   gemini-2.5-flash      (default; free tier, 10 RPM / 250 RPD)
#   gemini-2.5-flash-lite (free tier, 15 RPM / 1,000 RPD)
#   gemini-2.0-flash-lite (free tier, 30 RPM)
#   gemini-2.5-pro        (best quality, higher cost)
# GEMINI_MODEL=gemini-2.5-flash
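Application code can resolve this optional override with a small helper; a minimal sketch (the helper name `resolveGeminiModel` is hypothetical, and the fallback mirrors the default documented above):

```typescript
// Resolve the chat model id from the environment, falling back to the
// documented default. The helper name is ours, not part of any SDK.
export function resolveGeminiModel(
  env: Record<string, string | undefined> = process.env,
): string {
  return env.GEMINI_MODEL ?? "gemini-2.5-flash";
}
```

The result can then be passed to the provider, e.g. `google(resolveGeminiModel())` with the Vercel AI SDK's Google provider, which reads `GOOGLE_GENERATIVE_AI_API_KEY` from the environment on its own.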
## Upstash Redis (REST) - For semantic caching
UPSTASH_REDIS_REST_URL=
UPSTASH_REDIS_REST_TOKEN=
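As a sketch of how a cache layer might consume these two variables, here is an exact-match lookup against the Upstash Redis REST API (the `rag:cache:` key scheme and all helper names are hypothetical; a true semantic cache would key on an embedding rather than a text hash):

```typescript
import { createHash } from "node:crypto";

// Derive a stable cache key from a normalized query string.
// (Exact-match keying only; semantic caching would hash an embedding.)
export function cacheKey(query: string): string {
  const normalized = query.trim().toLowerCase().replace(/\s+/g, " ");
  return "rag:cache:" + createHash("sha256").update(normalized).digest("hex");
}

// Minimal GET/SET over the Upstash Redis REST API, which maps Redis
// commands onto URL paths and returns JSON of the shape { result: ... }.
const url = () => process.env.UPSTASH_REDIS_REST_URL!;
const headers = () => ({
  Authorization: `Bearer ${process.env.UPSTASH_REDIS_REST_TOKEN}`,
});

export async function cacheGet(key: string): Promise<string | null> {
  const res = await fetch(`${url()}/get/${key}`, { headers: headers() });
  const body = (await res.json()) as { result: string | null };
  return body.result;
}

export async function cacheSet(
  key: string,
  value: string,
  ttlSeconds = 3600,
): Promise<void> {
  await fetch(
    `${url()}/set/${key}/${encodeURIComponent(value)}?EX=${ttlSeconds}`,
    { headers: headers() },
  );
}
```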
## Upstash Vector (REST) - For document storage and retrieval
# IMPORTANT: Create an index with BUILT-IN EMBEDDING MODEL (e.g., BAAI/bge-small-en-v1.5)
# This avoids external embedding API costs - Upstash handles embeddings for free!
UPSTASH_VECTOR_REST_URL=
UPSTASH_VECTOR_REST_TOKEN=
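With an index created that way, documents can be upserted and queried as raw text and Upstash embeds them server-side. A sketch against the REST data-plane endpoints (`/upsert-data` and `/query-data`; the `DocChunk` type and helper names are hypothetical):

```typescript
// A document chunk as our hypothetical ingestion step might produce it.
type DocChunk = { id: string; text: string };

const base = () => process.env.UPSTASH_VECTOR_REST_URL!;
const auth = () => ({
  Authorization: `Bearer ${process.env.UPSTASH_VECTOR_REST_TOKEN}`,
  "Content-Type": "application/json",
});

// Map chunks to the record shape the data endpoints accept: `data` is the
// raw text to embed, and metadata keeps the text for retrieval display.
export function toUpsertPayload(chunks: DocChunk[]) {
  return chunks.map((c) => ({
    id: c.id,
    data: c.text,
    metadata: { text: c.text },
  }));
}

export async function upsertChunks(chunks: DocChunk[]): Promise<void> {
  await fetch(`${base()}/upsert-data`, {
    method: "POST",
    headers: auth(),
    body: JSON.stringify(toUpsertPayload(chunks)),
  });
}

export async function retrieve(query: string, topK = 5) {
  const res = await fetch(`${base()}/query-data`, {
    method: "POST",
    headers: auth(),
    body: JSON.stringify({ data: query, topK, includeMetadata: true }),
  });
  return (await res.json()) as {
    result: Array<{ id: string; score: number; metadata?: { text: string } }>;
  };
}
```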
## Observability (optional)
# OpenTelemetry: export traces to any OTLP backend (Langfuse, Jaeger, Grafana, etc.)
# OTEL_SERVICE_NAME=serverless-rag
# OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=https://your-otel-backend/v1
# Langfuse: LLM observability (traces, token usage, cost). Get keys at https://langfuse.com
# LANGFUSE_PUBLIC_KEY=
# LANGFUSE_SECRET_KEY=
# LANGFUSE_BASE_URL=https://cloud.langfuse.com
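When exporting traces to Langfuse over OTLP, the exporter authenticates with HTTP Basic auth built from the two keys above (public key as username, secret key as password). A sketch of constructing that header (the helper name is hypothetical; verify the endpoint path and auth scheme against current Langfuse docs):

```typescript
// Build the Authorization header an OTLP exporter would send to Langfuse:
// Basic auth over "publicKey:secretKey", base64-encoded.
export function langfuseOtlpHeaders(publicKey: string, secretKey: string) {
  const basic = Buffer.from(`${publicKey}:${secretKey}`).toString("base64");
  return { Authorization: `Basic ${basic}` };
}
```

The generic `OTEL_SERVICE_NAME` / `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT` variables above work the same way for any OTLP backend; only the header differs per vendor.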