---
layout: default
title: "Letta Tutorial - Chapter 3: Agent Configuration"
nav_order: 3
has_children: false
parent: Letta Tutorial
---
Welcome to Chapter 3: Agent Configuration. In this part of Letta Tutorial: Stateful LLM Agents, you will build an intuitive mental model first, then move into concrete implementation details and practical production tradeoffs.
Customize agent personalities, system prompts, models, and behavior settings.
Letta agents are highly configurable. This chapter covers personas, system prompts, model selection, and fine-tuning agent behavior for different use cases.
Personas define the agent's character and behavior:

```bash
# Create an agent with a detailed persona
letta create --name mentor --persona "You are Alex, an experienced software engineering mentor. You have 15 years of experience in full-stack development, DevOps, and team leadership. You speak professionally but accessibly, always explaining concepts clearly. You ask thoughtful questions to understand problems deeply before offering solutions."

# Update an existing agent's persona
letta update-agent --name sam --persona "You are Sam, a cheerful and helpful AI assistant who remembers everything about your conversations. You're enthusiastic about learning new things and helping users solve problems."
```

System prompts provide detailed instructions:
```python
from letta import create_client

client = create_client()

# Create an agent with a custom system prompt
agent_config = {
    "name": "code-reviewer",
    "persona": "You are a senior code reviewer with expertise in Python, JavaScript, and Go.",
    "system": """You are a meticulous code reviewer. Follow these guidelines:
1. Check for security vulnerabilities
2. Verify code follows best practices
3. Look for performance issues
4. Ensure proper error handling
5. Suggest improvements with explanations
Always provide specific line references and explain the reasoning behind your suggestions.""",
    "model": "gpt-4o",
}

agent = client.create_agent(**agent_config)
```

Choose the right model for your use case:
```bash
# Fast and cost-effective
letta create --name fast-assistant --model gpt-4o-mini

# High-quality reasoning
letta create --name expert-assistant --model gpt-4o

# Creative tasks
letta create --name creative-writer --model gpt-4o
```

Configure model parameters:
```python
agent_config = {
    "name": "creative-writer",
    "model": "gpt-4o",
    "model_settings": {
        "temperature": 0.9,  # Higher for creativity
        "max_tokens": 2000,
        "top_p": 0.9,
    }
}
```

Customize memory behavior:
```python
# Configure memory settings
agent = client.create_agent(
    name="researcher",
    memory_config={
        "recall_memory_limit": 50,       # Messages to keep in recall
        "archival_memory_limit": 10000,  # Max archival entries
        "working_memory_limit": 10,      # Core memory blocks
    }
)
```

Enable tools for enhanced capabilities:
```python
agent_config = {
    "name": "web-researcher",
    "tools": ["web_search", "web_scrape", "save_file"],
    "system": "You are a research assistant who can search the web and save findings."
}
```

```bash
letta create --name code-assistant \
  --persona "You are an expert programmer who writes clean, efficient, well-documented code." \
  --model gpt-4o \
  --system "Focus on:
- Writing readable, maintainable code
- Following language-specific best practices
- Adding helpful comments
- Considering edge cases
- Optimizing for performance when relevant"
```

```python
meeting_agent = client.create_agent(
    name="meeting-facilitator",
    persona="You are a professional meeting facilitator who keeps discussions on track and ensures all voices are heard.",
    system="""Meeting facilitation guidelines:
1. Start with agenda confirmation
2. Time management - keep to schedule
3. Ensure balanced participation
4. Summarize key decisions
5. End with action items and owners""",
    model="gpt-4o-mini"
)
```

```bash
letta create --name learning-coach \
  --persona "You are a patient, encouraging learning coach who adapts to each student's pace and style." \
  --system "Adapt your teaching to the learner:
- Assess current knowledge level
- Break complex topics into digestible chunks
- Use analogies and examples
- Provide practice exercises
- Give constructive feedback"
```

Save and reuse configurations:
```python
# Define templates
TEMPLATES = {
    "code-reviewer": {
        "persona": "Expert code reviewer with 10+ years experience",
        "model": "gpt-4o",
        "system": "Focus on security, performance, maintainability...",
        "tools": ["run_tests", "check_security"]
    },
    "customer-support": {
        "persona": "Friendly, empathetic customer support specialist",
        "model": "gpt-4o-mini",
        "system": "Be patient, ask clarifying questions, escalate when needed...",
        "tools": ["search_kb", "create_ticket"]
    }
}

# Create from template
def create_from_template(name, template_name):
    config = TEMPLATES[template_name].copy()
    config["name"] = name
    return client.create_agent(**config)
```

Modify agents after creation:
```bash
# Update model
letta update-agent --name sam --model gpt-4o

# Change persona
letta update-agent --name sam --persona "New persona description"

# Update system prompt
letta update-agent --name sam --system "New system instructions"
```

Different settings for development vs. production:
```python
import os

def create_agent_for_env(name, base_config):
    config = base_config.copy()
    config["name"] = name
    if os.getenv("ENV") == "production":
        config["model"] = "gpt-4o"  # Higher quality
        config["model_settings"] = {"temperature": 0.1}  # More consistent
    else:
        config["model"] = "gpt-4o-mini"  # Faster, cheaper
        config["model_settings"] = {"temperature": 0.7}  # More creative
    return client.create_agent(**config)
```

Persona best practices:

- Be Specific: Include role, experience level, and communication style
- Define Boundaries: What the agent should/shouldn't do
- Add Context: Industry knowledge, specializations
System prompt guidelines:

- Clear Instructions: Use numbered lists for complex procedures
- Examples: Include input/output examples
- Constraints: Define limits and boundaries
- Error Handling: How to respond to unclear requests
Model selection:

- gpt-4o: Complex reasoning, high-quality output
- gpt-4o-mini: Fast, cost-effective for simple tasks
- Local Models: Privacy, cost control (via a compatibility layer)
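The guidelines above can be sketched as a small prompt builder. The `build_system_prompt` helper below is hypothetical (not part of the Letta API); it simply assembles numbered instructions, an example, explicit constraints, and an error-handling rule into a single system string:

```python
# Hypothetical helper (not part of the Letta API): assembles a system prompt
# following the guidelines above -- numbered instructions, an input/output
# example, explicit constraints, and an error-handling rule.
def build_system_prompt(role, instructions, example, constraints, fallback):
    numbered = "\n".join(f"{i}. {step}" for i, step in enumerate(instructions, 1))
    bounded = "\n".join(f"- {c}" for c in constraints)
    return (
        f"You are {role}. Follow these guidelines:\n{numbered}\n\n"
        f"Example:\n{example}\n\n"
        f"Constraints:\n{bounded}\n\n"
        f"If a request is unclear: {fallback}"
    )

prompt = build_system_prompt(
    role="a senior code reviewer",
    instructions=[
        "Check for security vulnerabilities",
        "Suggest improvements with explanations",
    ],
    example="Input: a Python diff. Output: line-referenced review comments.",
    constraints=["Do not rewrite entire files", "Keep comments under 100 words"],
    fallback="ask one clarifying question before reviewing",
)
```

The resulting string can be passed as the `system` field of an agent config, as in the examples earlier in this chapter.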
Validate agent behavior:

```python
def test_agent_config(agent_name, test_cases):
    """Test agent responses to ensure the configuration works as expected."""
    for test_input, expected_behavior in test_cases:
        response = client.send_message(agent_name, test_input)
        # Validate that the response matches the expected behavior
        assert expected_behavior in response.content

# Test cases for a code reviewer
test_cases = [
    ("Review this function", "security check"),
    ("Optimize this code", "performance suggestion"),
    ("What's wrong here?", "specific feedback")
]

test_agent_config("code-reviewer", test_cases)
```

Track configuration changes:
```python
import json
from datetime import datetime

# Save configurations
def save_config(agent, version="v1"):
    config = {
        "persona": agent.persona,
        "system": agent.system,
        "model": agent.model,
        "tools": agent.tools,
        "version": version,
        "created": datetime.now().isoformat()
    }
    with open(f"configs/{agent.name}_{version}.json", "w") as f:
        json.dump(config, f, indent=2)
```

This allows you to experiment with different configurations and roll back if needed.
Next: Add custom tools and functions to extend agent capabilities.
Most teams struggle here because the hard part is not writing more code but drawing clear boundaries around `name`, `persona`, and the rest of the agent configuration so behavior stays predictable as complexity grows.
In practical terms, this chapter helps you avoid three common failures:
- coupling core logic too tightly to one implementation path
- missing the handoff boundaries between setup, execution, and validation
- shipping changes without clear rollback or observability strategy
After working through this chapter, you should be able to reason about Chapter 3: Agent Configuration as an operating subsystem inside Letta Tutorial: Stateful LLM Agents, with explicit contracts for inputs, state transitions, and outputs.
Use the implementation notes around `model`, `system`, and the code examples as your checklist when adapting these patterns to your own repository.
Under the hood, Chapter 3: Agent Configuration usually follows a repeatable control path:
- Context bootstrap: initialize runtime config and prerequisites for `name`.
- Input normalization: shape incoming data so `persona` receives stable contracts.
- Core execution: run the main logic branch and propagate intermediate state through `agent`.
- Policy and safety checks: enforce limits, auth scopes, and failure boundaries.
- Output composition: return canonical result payloads for downstream consumers.
- Operational telemetry: emit logs/metrics needed for debugging and performance tuning.
When debugging, walk this sequence in order and confirm each stage has explicit success/failure conditions.
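One way to make each stage's success or failure explicit is to run the control path as a list of named steps and stop at the first failure. This is a generic sketch of that pattern, not Letta code:

```python
# Generic sketch (not Letta code): run the control path as named stages,
# record an explicit pass/fail result per stage, and stop at the first failure.
def run_stages(stages, context):
    results = []
    for name, stage in stages:
        try:
            context = stage(context)
            results.append((name, "ok"))
        except Exception as exc:
            results.append((name, f"failed: {exc}"))
            break  # walk the sequence in order; stop at the first failure
    return context, results

# Toy stages standing in for bootstrap -> normalization -> execution
stages = [
    ("bootstrap", lambda ctx: {**ctx, "ready": True}),
    ("normalize", lambda ctx: {**ctx, "input": ctx["raw"].strip()}),
    ("execute", lambda ctx: {**ctx, "output": ctx["input"].upper()}),
]
context, results = run_stages(stages, {"raw": "  hello  "})
```

The `results` list doubles as the debugging trail: the first non-`"ok"` entry tells you which stage to inspect.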
Use the following upstream sources to verify implementation details while reading this chapter:
- View Repo (github.com): the authoritative upstream reference.
- Awesome Code Docs (github.com): the authoritative upstream reference.
Suggested trace strategy:
- search upstream code for `name` and `persona` to map concrete implementation paths
- compare documentation claims against actual runtime/config code before reusing patterns in production