This sample demonstrates how to use Redis LangCache with ADK agents for managed semantic caching. Unlike the local SemanticCache example, LangCache handles embedding generation and vector storage server-side -- no local vectorizer or Redis instance is required.

To run this sample you'll need:
- Python 3.10+ (Python 3.12+ recommended)
- A LangCache account (sign up at https://redis.io/langcache)
- ADK and adk-redis installed
- Google API key (for the LLM)
First, install uv if you haven't already:
# macOS/Linux
curl -LsSf https://astral.sh/uv/install.sh | sh
# Or with pip
pip install uv

Then install the package with LangCache support:
uv pip install "adk-redis[langcache]" python-dotenv

Create a .env file in this directory:
# Required: Google API key for the agent
GOOGLE_API_KEY=your-google-api-key
# Required: LangCache credentials (from https://redis.io/langcache)
LANGCACHE_CACHE_ID=your-cache-id
LANGCACHE_API_KEY=your-api-key
# Optional: LangCache server URL (defaults to US East)
# LANGCACHE_SERVER_URL=https://aws-us-east-1.langcache.redis.io

Then run the demo:

uv run python main.py

This runs a demo that:
- Creates an agent with LangCache semantic caching enabled
- Sends multiple queries, including semantically similar ones
- Shows cache hits for similar queries
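For orientation, here is a minimal sketch of what such a script can look like. The `LangCacheProvider` keyword arguments are the ones documented at the end of this README, but the import path, the callback attribute names, and the model name are assumptions -- refer to the shipped main.py for the actual wiring.

```python
# Hypothetical sketch of a LangCache-backed agent setup (not the shipped main.py).
import os

from dotenv import load_dotenv
from google.adk.agents import Agent

# Assumed import path for the provider; adjust to match adk-redis.
from adk_redis import LangCacheProvider

load_dotenv()  # reads GOOGLE_API_KEY and LANGCACHE_* from the .env file above

# Managed semantic cache: embeddings and vector search happen on the LangCache side.
cache = LangCacheProvider(
    cache_id=os.environ["LANGCACHE_CACHE_ID"],
    api_key=os.environ["LANGCACHE_API_KEY"],
    server_url=os.getenv("LANGCACHE_SERVER_URL", "https://aws-us-east-1.langcache.redis.io"),
)

# Attach the cache's callbacks so similar prompts can short-circuit the model call.
# The attribute names below are illustrative, not the confirmed adk-redis API.
agent = Agent(
    name="langcache_agent",
    model="gemini-2.0-flash",  # example model; any ADK-supported model works
    instruction="Answer user questions concisely.",
    before_model_callback=cache.before_model_callback,
    after_model_callback=cache.after_model_callback,
)

# The demo then sends several semantically similar queries through an ADK runner
# and reports which responses were served from the cache.
```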
You can also explore the agent interactively with the ADK web UI:

adk web .

Then open http://localhost:8000 to interact with the cached agent.
langcache_cache/
├── main.py # Demo script
├── langcache_agent/
│ ├── __init__.py # Agent package initialization
│ └── agent.py # Agent with LangCache caching callbacks
└── README.md # This file
- **Before Model Callback**: Checks LangCache for a semantically similar prompt. If one is found, the cached response is returned immediately and the LLM call is skipped (see the sketch below).
- **After Model Callback**: Stores the prompt-response pair in LangCache for future similar queries.
- **Managed Embeddings**: LangCache generates embeddings server-side using optimized models. No local vectorizer setup is needed.
- **Exact + Semantic Search**: By default, LangCache uses both exact hash matching and semantic vector search to maximize cache hit rates.
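To make the two callbacks concrete, here is a self-contained sketch of the pattern. It is not the adk-redis implementation: a plain dict stands in for the LangCache service (so it only does exact matching), and the helper and variable names are invented for illustration. What it does show is the key mechanic -- returning an LlmResponse from the before-model callback makes ADK skip the LLM call entirely.

```python
# Conceptual sketch of the caching callbacks -- not the actual adk-redis code.
# A plain dict stands in for LangCache so the example is self-contained; the real
# provider performs exact + semantic lookups against the LangCache service instead.
from google.adk.models import LlmResponse
from google.genai import types

_fake_cache: dict[str, str] = {}  # stand-in for LangCache (exact matching only)


def _prompt_text(llm_request) -> str:
    """Pull the latest user text out of the request (simplified)."""
    if not llm_request.contents:
        return ""
    parts = llm_request.contents[-1].parts or []
    return " ".join(p.text for p in parts if getattr(p, "text", None))


def before_model(callback_context, llm_request):
    """Before-model callback: on a cache hit, return the stored response and skip the LLM."""
    prompt = _prompt_text(llm_request)
    cached = _fake_cache.get(prompt)  # LangCache would do exact + semantic search here
    if cached is not None:
        return LlmResponse(
            content=types.Content(role="model", parts=[types.Part(text=cached)])
        )
    callback_context.state["pending_prompt"] = prompt  # remember the prompt for the after callback
    return None  # cache miss: let the normal model call proceed


def after_model(callback_context, llm_response):
    """After-model callback: store the prompt/response pair for future similar queries."""
    prompt = callback_context.state.get("pending_prompt")
    if prompt and llm_response.content and llm_response.content.parts:
        _fake_cache[prompt] = llm_response.content.parts[0].text or ""
    return None  # returning None keeps the model's response unchanged
```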
| Feature | Local (semantic_cache) | Managed (langcache_cache) |
|---|---|---|
| Vectorizer | Local (HuggingFace, OpenAI, etc.) | Server-side (managed) |
| Redis instance | Required | Not required |
| Install extra | `adk-redis[search]` | `adk-redis[langcache]` |
| Provider class | `RedisVLCacheProvider` | `LangCacheProvider` |
| Setup complexity | Higher (Redis + vectorizer) | Lower (API key only) |
LangCacheProvider accepts the following options:

- `cache_id` (str): LangCache cache ID (required)
- `api_key` (str): LangCache API key (required)
- `server_url` (str): LangCache server URL
- `name` (str): Cache name identifier
- `ttl` (int | None): Time-to-live in seconds for cached entries
- `distance_threshold` (float | None): Semantic similarity threshold
- `use_exact_search` (bool): Enable exact hash matching (default: True)
- `use_semantic_search` (bool): Enable semantic vector search (default: True)
Cache key and scoping options:

- `first_message_only` (bool): Only cache the first message in a session
- `include_app_name` (bool): Include the app name in the cache key
- `include_user_id` (bool): Include the user ID in the cache key
- `include_session_id` (bool): Include the session ID in the cache key
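Putting the options together, a fully configured provider might look like the sketch below. The import path is an assumption, the values are illustrative, and the cache key/scoping options are shown on the same constructor for brevity -- check the adk-redis API docs for where each option actually lives.

```python
import os

from adk_redis import LangCacheProvider  # assumed import path

cache = LangCacheProvider(
    cache_id=os.environ["LANGCACHE_CACHE_ID"],
    api_key=os.environ["LANGCACHE_API_KEY"],
    server_url="https://aws-us-east-1.langcache.redis.io",
    name="langcache_demo",
    ttl=86400,                  # expire cached entries after 24 hours
    distance_threshold=0.1,     # tighten the semantic similarity match
    use_exact_search=True,      # keep exact hash matching enabled
    use_semantic_search=True,   # keep semantic vector search enabled
    first_message_only=False,   # cache every turn, not just the first message
    include_app_name=True,      # scope cached entries to this app
    include_user_id=False,      # share cached answers across users
    include_session_id=False,   # share cached answers across sessions
)
```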