This sample demonstrates the complete two-tier memory architecture using Redis Agent Memory Server with the adk-redis package:
- RedisWorkingMemorySessionService - Session management with auto-summarization
- RedisLongTermMemoryService - Persistent long-term memory with semantic search
```
┌────────────────────────────────────────────────────────────────┐
│                           ADK Agent                            │
├──────────────────────────────┬─────────────────────────────────┤
│    TIER 1: Working Memory    │    TIER 2: Long-Term Memory     │
├──────────────────────────────┼─────────────────────────────────┤
│ • Current session messages   │ • Extracted facts & preferences │
│ • Auto-summarization         │ • Semantic vector search        │
│ • Context window management  │ • Cross-session persistence     │
│ • TTL support                │ • Recency-boosted retrieval     │
├──────────────────────────────┴─────────────────────────────────┤
│                    Agent Memory Server API                     │
├────────────────────────────────────────────────────────────────┤
│                           Redis 8.4                            │
└────────────────────────────────────────────────────────────────┘
```
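The division of labor in the diagram can be illustrated with a small runnable model. This is a conceptual sketch only — these classes are not the adk-redis API, and in the real system summarization and fact extraction are LLM-driven on the Agent Memory Server:

```python
# Conceptual model of the two-tier design -- NOT the adk-redis API,
# just a runnable illustration of how the tiers divide responsibilities.

class WorkingMemory:
    """Tier 1: per-session message buffer with a crude token budget."""
    def __init__(self, context_window=8000):
        self.context_window = context_window
        self.messages = []   # current session only
        self.summary = ""    # rolling summary of evicted turns

    def add(self, role, text):
        self.messages.append((role, text))
        # Auto-summarize: fold the oldest turns into the summary when the
        # (very rough, whitespace-based) token estimate exceeds the budget.
        while sum(len(t.split()) for _, t in self.messages) > self.context_window:
            old_role, old_text = self.messages.pop(0)
            self.summary += f"{old_role} said: {old_text}. "

class LongTermMemory:
    """Tier 2: cross-session fact store with naive keyword 'search'."""
    def __init__(self):
        self.facts = []      # persists across sessions

    def extract(self, working):
        # Stand-in for the server's background, LLM-based fact extraction.
        for role, text in working.messages:
            if role == "user":
                self.facts.append(text)

    def search(self, query):
        # The real service does semantic (vector) search; this is keyword overlap.
        words = set(query.lower().split())
        return [f for f in self.facts if words & set(f.lower().split())]

ltm = LongTermMemory()
session1 = WorkingMemory()
session1.add("user", "My favorite programming language is Python.")
ltm.extract(session1)

session2 = WorkingMemory()              # fresh session: tier 1 starts empty...
print(ltm.search("favorite language"))  # ...but tier 2 still recalls the fact
```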
Prerequisites:
- Python 3.10+
- Docker (for Redis 8.4 and Agent Memory Server)
First, install uv if you haven't already:
```shell
# macOS/Linux
curl -LsSf https://astral.sh/uv/install.sh | sh

# Or with pip
pip install uv
```

Then install the package with all dependencies:

```shell
uv pip install "adk-redis[all]"
```

Next, start Redis 8.4.

Option A: Automated setup (recommended)
```shell
# Run from the repository root
./scripts/start-redis.sh
```

This script will automatically start Redis 8.4 with health checks and verify it's running correctly.
Option B: Manual setup
```shell
docker run -d --name redis -p 6379:6379 redis:8.4-alpine
```

Verify Redis is running:

```shell
docker ps | grep redis

# Or test the connection
docker exec redis redis-cli ping
# Should return: PONG
```

Note: Redis 8.4 includes the Redis Query Engine (evolved from RediSearch) with native support for vector search, full-text search, and JSON operations. Docker will automatically download the image (~40MB) on first run.
Next, start the Agent Memory Server:

```shell
docker run -d --name agent-memory-server -p 8088:8088 \
  -e REDIS_URL=redis://host.docker.internal:6379 \
  -e GEMINI_API_KEY=your-gemini-api-key \
  -e GENERATION_MODEL=gemini/gemini-2.0-flash \
  -e EMBEDDING_MODEL=gemini/text-embedding-004 \
  -e FAST_MODEL=gemini/gemini-2.0-flash \
  -e SLOW_MODEL=gemini/gemini-2.0-flash \
  -e EXTRACTION_DEBOUNCE_SECONDS=5 \
  redislabs/agent-memory-server:0.13.2 \
  agent-memory api --host 0.0.0.0 --port 8088 --task-backend=asyncio
```

Configuration Options:
- LLM Provider: Agent Memory Server uses LiteLLM and supports 100+ providers (OpenAI, Gemini, Anthropic, AWS Bedrock, Ollama, etc.). Set the appropriate environment variables for your provider (e.g., `GEMINI_API_KEY`, `GENERATION_MODEL=gemini/gemini-2.0-flash`). See the Agent Memory Server LLM Providers docs for details.
- Memory Extraction Debounce: `EXTRACTION_DEBOUNCE_SECONDS` controls how long to wait before extracting memories from a conversation (default: 300 seconds). Lower values (e.g., 5) provide faster memory extraction, while higher values reduce API calls.
- Embedding Models: Agent Memory Server also uses LiteLLM for embeddings. For local/offline embeddings, use Ollama (e.g., `EMBEDDING_MODEL=ollama/nomic-embed-text`, `REDISVL_VECTOR_DIMENSIONS=768`). Note: the `redis/langcache-embed-v1` model used in the semantic_cache example is not supported by Agent Memory Server (it's RedisVL-specific). See the Embedding Providers docs for all options.
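Putting the bullets above together, a local-embedding setup might use environment variables like these (a sketch — the `OLLAMA_API_BASE` endpoint is an assumption based on LiteLLM's Ollama defaults; the other names come from the bullets above):

```shell
# Example environment for local embeddings via Ollama
EXTRACTION_DEBOUNCE_SECONDS=5           # faster extraction, useful for demos
EMBEDDING_MODEL=ollama/nomic-embed-text
REDISVL_VECTOR_DIMENSIONS=768           # must match the embedding model's output size
OLLAMA_API_BASE=http://localhost:11434  # assumption: default local Ollama endpoint
```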
Verify the server is healthy:

```shell
curl http://localhost:8088/v1/health
```

Create `.env` in this directory:
```
GOOGLE_API_KEY=your-google-api-key
REDIS_MEMORY_SERVER_URL=http://localhost:8088
REDIS_MEMORY_NAMESPACE=adk_agent_memory
REDIS_MEMORY_EXTRACTION_STRATEGY=discrete
REDIS_MEMORY_CONTEXT_WINDOW=8000
REDIS_MEMORY_RECENCY_BOOST=true
```

Run the web server:
```shell
cd examples/simple_redis_memory
uv run python main.py
```

Open http://localhost:8080 in your browser.
Note: This project uses `uv` for dependency management. If you prefer `pip`, install the package first (`pip install "adk-redis[all]"`) and then run `python main.py`.
Run the interactive demo to see memory in action:

```shell
uv run python demo_conversation.py
```

This will:
- Create a session and share personal information
- Wait for memory extraction
- Create a NEW session and test memory recall
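The "wait for memory extraction" step exists because the server debounces extraction: facts are only pulled out of a conversation once it has been quiet for `EXTRACTION_DEBOUNCE_SECONDS`. A simplified, runnable illustration of that timing (the class here is illustrative, not part of the server, which runs extraction as an LLM-backed background task):

```python
# Sketch of debounced extraction: nothing is extracted until the
# conversation has been idle for the configured quiet period.
import time

class DebouncedExtractor:
    def __init__(self, debounce_seconds):
        self.debounce = debounce_seconds
        self.last_message_at = None
        self.extracted = False

    def on_message(self):
        # Each new message resets the quiet-period clock.
        self.last_message_at = time.monotonic()
        self.extracted = False

    def poll(self):
        # Extraction fires only after the quiet period has fully elapsed.
        if (self.last_message_at is not None and not self.extracted
                and time.monotonic() - self.last_message_at >= self.debounce):
            self.extracted = True
        return self.extracted

ex = DebouncedExtractor(debounce_seconds=0.2)
ex.on_message()
assert ex.poll() is False   # too soon: still inside the quiet window
time.sleep(0.25)
assert ex.poll() is True    # quiet period elapsed -> facts get extracted
```

This is why the default of 300 seconds trades latency for fewer LLM calls, while the demo's value of 5 makes memories appear almost immediately.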
Session 1 - Share information:

```
User: Hi, I'm Nitin. I'm a Machine Learning Engineer working on ML projects.
User: I love coffee, especially Berliner Frühstück Coffee from Berliner Kaffeerösterei.
User: My favorite programming language is Python.
```

Session 2 - Test memory recall:

```
User: What do you remember about me?
User: What's my favorite coffee?
```
| Feature | Working Memory (Tier 1) | Long-Term Memory (Tier 2) |
|---|---|---|
| Scope | Current session | All sessions |
| Auto-summarization | Yes | No |
| Semantic search | No | Yes |
| Fact extraction | Background | Persistent |
| TTL support | Yes | No |
| Variable | Default | Description |
|---|---|---|
| `REDIS_MEMORY_SERVER_URL` | `http://localhost:8088` | Memory server URL |
| `REDIS_MEMORY_NAMESPACE` | `adk_agent_memory` | Namespace for isolation |
| `REDIS_MEMORY_EXTRACTION_STRATEGY` | `discrete` | `discrete`, `summary`, or `preferences` |
| `REDIS_MEMORY_CONTEXT_WINDOW` | `8000` | Max tokens before summarization |
| `REDIS_MEMORY_RECENCY_BOOST` | `true` | Boost recent memories in search |
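Reading these variables with their defaults might look like the following. This is an illustrative sketch — the helper name and the boolean parsing are not part of adk-redis, which loads its own configuration:

```python
# Illustrative loader for the variables in the table above.
import os

def load_memory_config():
    """Read memory settings from the environment, falling back to the defaults."""
    return {
        "server_url": os.getenv("REDIS_MEMORY_SERVER_URL", "http://localhost:8088"),
        "namespace": os.getenv("REDIS_MEMORY_NAMESPACE", "adk_agent_memory"),
        "extraction_strategy": os.getenv("REDIS_MEMORY_EXTRACTION_STRATEGY", "discrete"),
        "context_window": int(os.getenv("REDIS_MEMORY_CONTEXT_WINDOW", "8000")),
        # Accept "true"/"True"/"TRUE"; anything else is treated as false.
        "recency_boost": os.getenv("REDIS_MEMORY_RECENCY_BOOST", "true").lower() == "true",
    }

config = load_memory_config()
print(config["context_window"])  # 8000 unless overridden in .env
```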