docs: Update streaming.mdx to cover tool follow-up retries and new in-stream error messages #253
Conversation
…-stream error messages

- Add 'Streaming with Tools' section with Mermaid diagram explaining two-phase flow
- Add 'Error Handling in the Stream' section documenting new error sentinel format
- Update 'Handle errors in callbacks' accordion to explain both layers of error handling
- Add troubleshooting entry for '[Error: ... ref: followup-...]' messages
- Extend Related cards to include Rate Limiter with 3-column layout
- Add cross-link in rate-limiter.mdx explaining shared rate limiting behavior

Fixes #247

Co-authored-by: Mervin Praison <MervinPraison@users.noreply.github.com>
📝 Walkthrough

Documentation updates to clarify rate limiter behavior and tool-enabled streaming. The rate-limiter.mdx Overview now mentions reuse across initial and follow-up LLM calls. The streaming.mdx documentation is expanded with two new sections on streaming with tools and error handling, updated error guidance, troubleshooting additions, and a new Rate Limiter resource card.
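The shared-limiter behavior described above can be illustrated with a minimal sketch. The class and method names here are hypothetical, not the praisonaiagents API; the point is that the initial LLM call and the tool follow-up call acquire from the same limiter instance, so follow-ups never bypass the configured rate.

```python
import time

class SimpleRateLimiter:
    """Minimal interval-based limiter shared across all LLM call phases.
    Hypothetical sketch, not the praisonaiagents implementation."""

    def __init__(self, min_interval: float) -> None:
        self.min_interval = min_interval
        self._last_call = 0.0
        self.acquired = 0  # count of grants, for illustration

    def acquire(self) -> None:
        # Sleep just long enough to keep calls min_interval apart.
        wait = self.min_interval - (time.monotonic() - self._last_call)
        if wait > 0:
            time.sleep(wait)
        self._last_call = time.monotonic()
        self.acquired += 1

limiter = SimpleRateLimiter(min_interval=0.01)
limiter.acquire()  # initial LLM call
limiter.acquire()  # tool follow-up call reuses the same limiter
print(limiter.acquired)  # → 2
```

Because both phases go through one `acquire()` path, the follow-up inherits whatever throttling the initial call was subject to.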
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~12 minutes
🚥 Pre-merge checks: ✅ 5 passed
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/features/streaming.mdx`:
- Around line 237-250: The code example uses a non-runnable placeholder
tools=[...], which breaks copy-paste; replace it with a concrete tool (e.g., the
existing get_weather function from the previous section) and include the
necessary import/registration so the Agent instantiation is self-contained:
import or define get_weather, wrap it as a tool the Agent accepts (matching the
project’s tool API), pass tools=[get_weather] into Agent(...), and keep the rest
(full accumulation and sentinel detection using iter_stream) unchanged so the
snippet runs unmodified and demonstrates the sentinel-detection flow with
Agent.iter_stream.
ℹ️ Review info

- Configuration used: defaults
- Review profile: CHILL
- Plan: Pro
- Run ID: bc85751b-8d7d-4d56-9b06-95895df03a85
📒 Files selected for processing (2)

- docs/features/rate-limiter.mdx
- docs/features/streaming.mdx
```python
from praisonaiagents import Agent

agent = Agent(instructions="You are a helpful assistant", tools=[...])

full = ""
for chunk in agent.iter_stream("Research and summarize quantum computing"):
    full += chunk
    print(chunk, end="", flush=True)

if "[Error:" in full and "ref:" in full:
    # Surface ref to your logs / retry externally
    print(f"\n⚠️ Error detected, check logs for correlation ID")
```
Replace the tools=[...] placeholder with a runnable example.
The literal [...] makes this snippet fail to execute on copy-paste, which contradicts the documentation standard that every Python example must run unmodified. Reuse the get_weather tool from the section above (or any concrete function) so readers can actually reproduce the sentinel-detection flow.
♻️ Proposed fix

```diff
-from praisonaiagents import Agent
-
-agent = Agent(instructions="You are a helpful assistant", tools=[...])
+from praisonaiagents import Agent
+
+def get_weather(city: str) -> str:
+    """Get weather for a city."""
+    return f"Weather in {city}: 72°F, sunny"
+
+agent = Agent(instructions="You are a helpful assistant", tools=[get_weather])
```

As per coding guidelines: "Every Python code example must include all necessary imports and run without modification".
Code Review
This pull request updates the documentation for the rate limiter and streaming features, explaining that the rate limiter is shared across LLM call phases and introducing a new error sentinel for failed tool execution follow-ups. Review feedback recommends applying standard color schemes to the new Mermaid diagram for consistency and replacing an invalid Python placeholder in a code example with runnable code.
```mermaid
sequenceDiagram
    participant U as User
    participant A as Agent
    participant L as LLM
    participant T as Tools

    U->>A: Request with stream=True
    A->>L: Phase 1 (streamed)
    L-->>A: "I'll use tool_name..."
    A->>T: Execute tool_name()
    T-->>A: Tool result
    A->>L: Phase 2 follow-up (streamed)
    L-->>A: Synthesized response
    A-->>U: Combined stream

    Note over L: Both phases use retry-wrapped LLM calls
```
The new Mermaid sequence diagram is missing the standard color scheme mentioned in the PR checklist and used in other diagrams in this file. Applying these colors ensures visual consistency across the documentation.
```mermaid
sequenceDiagram
    participant U as User #6366F1
    participant A as Agent #F59E0B
    participant L as LLM #189AB4
    participant T as Tools #10B981

    U->>A: Request with stream=True
    A->>L: Phase 1 (streamed)
    L-->>A: "I'll use tool_name..."
    A->>T: Execute tool_name()
    T-->>A: Tool result
    A->>L: Phase 2 follow-up (streamed)
    L-->>A: Synthesized response
    A-->>U: Combined stream

    Note over L: Both phases use retry-wrapped LLM calls
```
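The two-phase flow in the diagram can be sketched as a pure-Python generator. This is a hypothetical simulation of the control flow, not the praisonaiagents implementation: phase 1 streams the model's tool-call announcement, the tool executes, and phase 2 streams the follow-up synthesized from its result, with the caller seeing one combined stream.

```python
from typing import Callable, Iterator

def iter_two_phase_stream(
    phase1_chunks: list[str],
    tool: Callable[[], str],
    phase2: Callable[[str], list[str]],
) -> Iterator[str]:
    """Simulate two-phase tool streaming: stream phase 1, run the tool,
    then stream the follow-up built from the tool's result."""
    for chunk in phase1_chunks:   # Phase 1 (streamed)
        yield chunk
    result = tool()               # Execute tool between phases
    for chunk in phase2(result):  # Phase 2 follow-up (streamed)
        yield chunk

chunks = iter_two_phase_stream(
    ["I'll use ", "get_weather... "],
    lambda: "72°F, sunny",
    lambda r: ["It is ", r, " in Paris."],
)
print("".join(chunks))  # → I'll use get_weather... It is 72°F, sunny in Paris.
```

Because both loops yield into the same generator, the consumer cannot tell where phase 1 ends and phase 2 begins, which is exactly why an in-stream error sentinel is needed when the follow-up fails.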
Fixes #247
Summary
Updates docs/features/streaming.mdx to document the new streaming behavior introduced in PraisonAI PR #1538, where tool follow-up responses now use retry-wrapped LLM calls and surface visible error messages instead of silently failing.
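The behavior documented here — retry-wrapped follow-up calls that surface a visible in-stream error instead of failing silently — can be sketched as follows. All names are hypothetical; the actual PraisonAI implementation may differ.

```python
import time
import uuid
from typing import Callable

def call_with_retry(call: Callable[[], str], retries: int = 3, backoff: float = 0.0) -> str:
    """Retry a follow-up LLM call; on exhaustion, return a visible
    in-stream error sentinel rather than raising or failing silently.
    Hypothetical sketch, not the actual PraisonAI code."""
    last_error: Exception | None = None
    for attempt in range(retries):
        try:
            return call()
        except Exception as exc:
            last_error = exc
            time.sleep(backoff * (2 ** attempt))  # exponential backoff between attempts
    # All attempts failed: emit the sentinel with a correlation ref.
    ref = f"followup-{uuid.uuid4().hex[:8]}"
    return f"[Error: {last_error} ref: {ref}]"

def flaky() -> str:
    raise RuntimeError("rate limited")

out = call_with_retry(flaky, retries=2)
print(out)  # e.g. "[Error: rate limited ref: followup-1a2b3c4d]"
```

Returning the sentinel as ordinary stream text keeps the consumer loop simple: callers that ignore it still get a degraded-but-complete response, while callers that check for it can log the ref and retry externally.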
Changes Made
Quality Checklist
Generated with Claude Code