Commit 1bea6b2 — Merge pull request #563 from MervinPraison/claude/issue-459-20250531_155633
fix: resolve MCP tools not working in Streamlit applications (issue #459)

---
title: "MCP with Streamlit"
description: "How to properly integrate MCP (Model Context Protocol) tools with PraisonAI Agents in Streamlit applications"
---

# MCP with Streamlit

This guide explains how to integrate MCP (Model Context Protocol) tools with PraisonAI Agents in Streamlit applications, covering common pitfalls and working solutions.

## Common Issues

When integrating MCP tools with Streamlit, users often encounter these problems:

### Issue #1: Agent Re-initialization

**Problem**: The agent is re-initialized on every Streamlit interaction, causing MCP tools to fail.

**Solution**: Use Streamlit's session state so the agent is initialized only once.

### Issue #2: LLM Provider Format

**Problem**: Using the provider/model format, such as `"ollama/llama3.2"`, can cause tool-calling issues in Streamlit environments.

**Solution**: Use standard model names such as `"gpt-4o-mini"` instead of the provider/model format.

### Issue #3: Missing Error Handling

**Problem**: The user gets no feedback when MCP initialization fails.

**Solution**: Implement comprehensive error handling with user-friendly messages.

## Working Example

Here's a complete working example that demonstrates the correct approach:

```python
import streamlit as st
from praisonaiagents import Agent
from praisonaiagents.mcp import MCP
import traceback

st.title("🏠 AI Airbnb Assistant")

# Configuration in sidebar
with st.sidebar:
    st.header("⚙️ Configuration")

    # Use standard model names, not provider/model format
    llm_model = st.selectbox(
        "Choose LLM Model",
        options=[
            "gpt-4o-mini",  # ✅ Correct format
            "gpt-4o",
            "claude-3-5-sonnet-20241022"
        ],
        index=0
    )

    debug_mode = st.checkbox("Enable Debug Mode", value=False)

# Initialize session state (CRITICAL for Streamlit)
if "agent_initialized" not in st.session_state:
    st.session_state.agent_initialized = False
    st.session_state.agent = None
    st.session_state.init_error = None

# Function to initialize the agent with proper error handling
def initialize_agent():
    try:
        with st.spinner("🔄 Initializing AI agent..."):
            # Create MCP tools
            mcp_tools = MCP(
                "npx -y @openbnb/mcp-server-airbnb --ignore-robots-txt",
                timeout=60,
                debug=debug_mode
            )

            # Create the agent - this should only happen ONCE
            agent = Agent(
                instructions="You are a helpful Airbnb assistant...",
                llm=llm_model,  # Use standard format
                tools=mcp_tools,
                verbose=debug_mode
            )

        return agent, None

    except Exception as e:
        error_msg = f"Failed to initialize agent: {str(e)}"
        if debug_mode:
            error_msg += f"\n\nFull traceback:\n{traceback.format_exc()}"
        return None, error_msg

# Agent initialization (only runs once)
if not st.session_state.agent_initialized:
    if st.button("🚀 Initialize AI Assistant", type="primary"):
        agent, error = initialize_agent()

        if agent:
            st.session_state.agent = agent
            st.session_state.agent_initialized = True
            st.session_state.init_error = None
            st.success("✅ AI Assistant initialized successfully!")
            st.rerun()
        else:
            st.session_state.init_error = error
            st.error(f"❌ Initialization failed: {error}")

# Main interface (only shown once the agent is ready)
if st.session_state.agent_initialized and st.session_state.agent:
    query = st.text_input("🔍 What are you looking for?")

    if st.button("Search") and query:
        try:
            with st.spinner("🔍 Searching..."):
                # Use the SAME agent instance from session state
                result = st.session_state.agent.start(query)
                st.write(result)

        except Exception as e:
            st.error(f"❌ Search failed: {str(e)}")
```

## Best Practices

### 1. Session State Management
Always use Streamlit's session state to manage the agent lifecycle:

```python
# ✅ Correct - Initialize once in session state
if "agent" not in st.session_state:
    st.session_state.agent = Agent(...)

# ❌ Wrong - Re-initializes on every interaction
agent = Agent(...)
```

### 2. LLM Model Selection
Use standard model names; avoid the provider/model format in Streamlit:

```python
# ✅ Correct - Standard model names
llm="gpt-4o-mini"
llm="claude-3-5-sonnet-20241022"

# ❌ Problematic in Streamlit - Provider/model format
llm="ollama/llama3.2"
llm="anthropic/claude-3-5-sonnet"
```

### 3. Error Handling
Implement comprehensive error handling:

```python
try:
    # MCP initialization
    mcp_tools = MCP("your-mcp-command")
    agent = Agent(tools=mcp_tools, ...)

except Exception as e:
    st.error(f"Initialization failed: {str(e)}")
    # Provide troubleshooting tips
```

### 4. User Feedback
Provide clear feedback during initialization:

```python
with st.spinner("🔄 Initializing AI agent and MCP tools..."):
    # Initialization code here
    pass

if initialization_successful:
    st.success("✅ AI Assistant initialized successfully!")
else:
    st.error("❌ Initialization failed")
```

## Troubleshooting

### MCP Tools Not Found
If you get "MCP tool cannot be found" errors:

1. **Check the MCP Server Command**: Ensure the MCP server command is correct and the server starts successfully
2. **Verify Dependencies**: Make sure all required dependencies (Node.js, npm, etc.) are installed
3. **Test Manually**: Try running the MCP server command manually first
4. **Enable Debug Mode**: Set `debug=True` in the MCP constructor for detailed logs
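
Step 3 can be done from a terminal before involving Streamlit at all. As a sketch, assuming Node.js and npm are installed, the server used in this guide can be launched directly:

```shell
# Confirm the Node.js toolchain the MCP server depends on is available
node --version
npm --version

# Launch the Airbnb MCP server manually; a healthy server keeps running
# and prints startup output instead of exiting with an error
npx -y @openbnb/mcp-server-airbnb --ignore-robots-txt
```

If the last command exits immediately with an error, fix that before debugging the Streamlit side.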

### Tool Calling Issues
If tools aren't being called properly:

1. **Use the Standard LLM Format**: Avoid the provider/model format like `"ollama/llama3.2"`
2. **Check API Keys**: Ensure the proper environment variables are set for your chosen LLM
3. **Session State**: Verify the agent is properly stored in session state
4. **Timeout Settings**: Increase the timeout if needed: `MCP(command, timeout=120)`
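
For step 2, a small helper can fail fast with a clear message when the expected key is missing. This is a hypothetical sketch (`require_api_key` is not part of `praisonaiagents`); the prefix-to-variable mapping covers only the models used in this guide:

```python
import os

# Map model-name prefixes to the environment variable each provider expects.
# Hypothetical mapping for the models in this guide; extend as needed.
KEY_VARS = {
    "gpt-": "OPENAI_API_KEY",
    "claude-": "ANTHROPIC_API_KEY",
}

def require_api_key(model: str) -> str:
    """Return the env var name for `model`, raising if the key is unset."""
    for prefix, var in KEY_VARS.items():
        if model.startswith(prefix):
            if not os.environ.get(var):
                raise RuntimeError(f"{var} is not set, but {model} requires it")
            return var
    raise ValueError(f"Unknown model prefix for {model!r}")

# Demo only: use a placeholder key so the check passes
os.environ.setdefault("OPENAI_API_KEY", "sk-demo")
print(require_api_key("gpt-4o-mini"))  # → OPENAI_API_KEY
```

Calling this once before `initialize_agent()` turns a confusing tool-calling failure into an actionable error message.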

### Threading Conflicts
If you encounter threading-related errors:

1. **Single Agent Instance**: Use only one agent instance per session
2. **Proper Cleanup**: Let Streamlit handle cleanup naturally
3. **Avoid Manual Threading**: Don't create additional threads in Streamlit
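
The initialize-once discipline behind point 1 can be illustrated without Streamlit at all. In this sketch a plain dict stands in for `st.session_state`, and the counter shows the expensive constructor runs exactly once across simulated reruns:

```python
# A plain dict stands in for st.session_state to show the pattern in isolation.
session_state = {}
init_count = 0

def expensive_agent():
    """Placeholder for Agent(..., tools=MCP(...)) construction."""
    global init_count
    init_count += 1
    return object()

def rerun():
    # Mirrors: if "agent" not in st.session_state: st.session_state.agent = ...
    if "agent" not in session_state:
        session_state["agent"] = expensive_agent()
    return session_state["agent"]

# Simulate several Streamlit reruns: same instance every time, one init.
agents = [rerun() for _ in range(5)]
assert all(a is agents[0] for a in agents)
print(init_count)  # → 1
```

In a real app, Streamlit re-executes the whole script on every interaction, so this guard is what keeps a single agent (and its MCP subprocess) alive for the session.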

## Complete Working Example

For a complete, production-ready example, see:
- [`examples/python/ui/mcp-streamlit-airbnb.py`](https://github.com/MervinPraison/PraisonAI/blob/main/examples/python/ui/mcp-streamlit-airbnb.py)

This example includes:
- ✅ Proper session state management
- ✅ Comprehensive error handling
- ✅ User-friendly interface
- ✅ Debug mode support
- ✅ Troubleshooting guidance
- ✅ Configuration options

## Environment Setup

Make sure your environment has the required dependencies:

```bash
# Install PraisonAI Agents
pip install praisonaiagents

# Install Streamlit
pip install streamlit

# For the Airbnb MCP server example
npm install -g @openbnb/mcp-server-airbnb

# Set environment variables for your chosen LLM
export OPENAI_API_KEY="your-key"
# or
export ANTHROPIC_API_KEY="your-key"
```

## Running the Example

```bash
# Run the working example
streamlit run examples/python/ui/mcp-streamlit-airbnb.py

# Or create your own based on the patterns above
streamlit run your-mcp-app.py
```

## Summary

The keys to successfully using MCP with Streamlit are:

1. **Proper session state management** - Initialize the agent only once
2. **Standard LLM format** - Avoid the provider/model format in Streamlit
3. **Comprehensive error handling** - Provide user feedback
4. **Correct usage patterns** - Follow Streamlit best practices

By following these guidelines, MCP tools will work reliably in your Streamlit applications without requiring any modifications to the core MCP implementation.
