ROB-1689 support stream holmes chat function #1886
Walkthrough

A new boolean `stream` field is added to the Holmes chat parameter models; when enabled, `holmes_chat` streams response chunks from the Holmes API to the websocket as they arrive.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant holmes_chat
    participant Holmes API
    participant WebSocket
    User->>holmes_chat: Invoke with params (stream=True)
    holmes_chat->>Holmes API: POST /api/chat (stream=True)
    loop For each chunk in response
        Holmes API-->>holmes_chat: Send chunk
        holmes_chat->>WebSocket: ws(data=chunk)
    end
    holmes_chat-->>User: (returns after streaming)
```
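The flow in the diagram can be sketched in Python. This is a minimal illustrative sketch, not the PR's actual code: `holmes_chat_stream`, `ws_send`, and the injected `post` callable are hypothetical names; only the overall shape (POST with `stream=True`, forward each chunk to the websocket, return after streaming) comes from the diagram and review.

```python
def holmes_chat_stream(url, payload, ws_send, post):
    """Hypothetical sketch of the streaming flow shown in the diagram.

    `post` is a requests.post-style callable (injected so the sketch is
    testable without a network); `ws_send` stands in for event.ws.
    """
    with post(url, json=payload, stream=True) as resp:
        resp.raise_for_status()
        # Forward each non-empty chunk to the websocket as it arrives
        for line in resp.iter_content(chunk_size=None, decode_unicode=True):
            if line:
                ws_send(line)
    # Returns after streaming completes; no aggregated response is built
```

In the actual playbook, `post` would presumably be `requests.post` and `ws_send` the event's websocket sender.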
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~8 minutes
Actionable comments posted: 0
🧹 Nitpick comments (1)
src/robusta/core/playbooks/internal/ai_integration.py (1)
356-369: Streaming implementation looks good with minor suggestions.

The streaming implementation correctly uses a context manager, appropriate headers, and processes chunks as they arrive. The early return prevents duplicate processing.
Consider these improvements for robustness:
```diff
 for line in resp.iter_content(
     chunk_size=None, decode_unicode=True
-):  # Avoid streaming chunks from holmes. send them as they arrive.
+):  # Send chunks from Holmes as they arrive
     if line:
+        # Log chunk size for debugging if needed
+        logging.debug(f"Streaming chunk size: {len(line)}")
         event.ws(data=line)
```

The comment could be clearer about what's happening, and optional debug logging could help with troubleshooting streaming issues.
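Applied, the suggested change might look like the following standalone helper. This is an illustrative sketch, not the PR's code: `forward_stream` and `ws_send` are hypothetical names, and `resp` stands for any requests-style response exposing `iter_content`.

```python
import logging


def forward_stream(resp, ws_send):
    """Forward streamed chunks to the websocket as they arrive.

    `resp` is any requests-style response with iter_content();
    `ws_send` is the websocket send callable (event.ws in the review).
    """
    for line in resp.iter_content(chunk_size=None, decode_unicode=True):
        if line:
            # Optional debug logging, as the nitpick suggests
            logging.debug("Streaming chunk size: %d", len(line))
            ws_send(line)
```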
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- src/robusta/core/model/base_params.py (2 hunks)
- src/robusta/core/playbooks/internal/ai_integration.py (1 hunks)
- src/robusta/core/reporting/holmes.py (2 hunks)
🔇 Additional comments (6)
src/robusta/core/model/base_params.py (2)
5-5: LGTM! Necessary import for the new field.

The Field import is required for the stream parameter definition with a default value.

193-193: LGTM! Well-structured field addition.

The stream field is properly defined with a default value of False, ensuring backward compatibility while enabling the new streaming functionality.
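As a sketch of what such a field definition typically looks like in pydantic (the model name comes from the review; the surrounding fields are omitted, and the exact definition in the PR may differ):

```python
from pydantic import BaseModel, Field


class HolmesChatParams(BaseModel):
    # ... other existing fields omitted ...
    stream: bool = Field(default=False)  # off by default, so existing callers are unaffected
```

Defaulting to `False` is what keeps the change backward compatible: callers that never mention `stream` get the old non-streaming behavior.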
src/robusta/core/reporting/holmes.py (2)
3-3: LGTM! Required import addition.

Adding Field to the pydantic import is necessary for the stream field definition.

41-41: LGTM! Consistent field definition.

The stream field definition matches the pattern used in HolmesChatParams, maintaining consistency across the codebase.
src/robusta/core/playbooks/internal/ai_integration.py (2)
354-355: LGTM! Clean parameter passing and URL definition.

Good practice to pass the stream parameter and define the URL once for reuse in both streaming and non-streaming paths.

371-371: LGTM! Preserved backward compatibility.

The non-streaming path remains unchanged, ensuring backward compatibility for existing functionality.
No description provided.