FIX: Thought Chain UI Improvements #45
Conversation
Change "Preparing database information..." to "Gathering context..." for a more accurate description of the initial processing step. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude <noreply@anthropic.com>
- Remove EnhancedPromptInfo.jsx (renamed to PromptInfo.jsx as the only implementation)
- Remove unused props: isThoughtChainReceived, llmModel, coderLlmModel, chatIntent
- Remove unused memoized values from Conversation.jsx
- Memoize processedChain with useMemo to prevent re-computation
- Memoize errorDetails object to prevent unnecessary re-renders
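The `errorDetails` memoization matters because a fresh object literal on every render breaks reference equality, defeating `React.memo` on the child. A minimal React-free model of the caching behavior `useMemo` provides (the helper and field names below are hypothetical, not the PR's actual code):

```javascript
// Minimal React-free model of useMemo's dependency-based caching.
// makeStableFactory is a hypothetical illustration, not a React API.
function makeStableFactory(compute) {
  let lastDeps;
  let lastValue;
  return (deps) => {
    const unchanged =
      lastDeps !== undefined &&
      deps.length === lastDeps.length &&
      deps.every((d, i) => Object.is(d, lastDeps[i]));
    if (!unchanged) {
      lastValue = compute(...deps);
      lastDeps = deps;
    }
    return lastValue;
  };
}

const getErrorDetails = makeStableFactory((message) => ({ message }));
const first = getErrorDetails(["Copy failed"]);
const second = getErrorDetails(["Copy failed"]);
console.log(first === second); // true — reference is stable until deps change
```

Without this caching, `{ message } !== { message }` on every render, so the child receiving `errorDetails` re-renders unnecessarily.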
Add CSS for collapsible thought chain component:
- Collapsed by default, shows latest message with step count
- Expand to see all messages in chronological order
- Proper padding and styling for collapsed/expanded states
| Filename | Overview |
|---|---|
| backend/backend/application/context/llm_context.py | Straightforward string change from 'Preparing database information...' to 'Gathering context...' |
| frontend/src/ide/chat-ai/Conversation.jsx | Removed EnhancedPromptInfo import, unused llmModels prop, and computed values; added errorDetailsMemo for a stable prop reference |
| frontend/src/ide/chat-ai/EnhancedPromptInfo.jsx | File deleted — functionality consolidated into PromptInfo.jsx |
| frontend/src/ide/chat-ai/ExistingChat.jsx | No longer passes llmModels to Conversation; all other Conversation props remain unchanged |
| frontend/src/ide/chat-ai/PromptInfo.jsx | Merged EnhancedPromptInfo into PromptInfo; added collapsible thought chain via Ant Design Collapse; all five previous review issues resolved; minor unused catch variable remains |
| frontend/src/ide/chat-ai/ThoughtChainEnhancements.css | Added CSS for collapsible thought chain: .thought-chain-collapse, .thought-chain-collapse-header, and related layout classes |
Flowchart
```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A["Conversation.jsx\nreceives message prop"] --> B["Derives errorDetailsMemo\n(useMemo)"]
    B --> C["PromptInfo\nthoughtChain / shouldStream / errorDetails"]
    C --> D{thoughtChain\nnot empty?}
    D -- Yes --> E["processedChain useMemo\npairs attempt + DISAPPROVE REASON"]
    E --> F["renderThoughtChain()"]
    F --> G["Ant Design Collapse\ncollapsed by default"]
    G -- Collapsed --> H["Header: latest message\n+ N previous steps count"]
    G -- Expanded --> I["Body: full timeline\nrenderMessage() for each step"]
    D -- No --> J{shouldStream?}
    J -- Yes --> K["Fallback shimmer\n'Processing...'"]
    J -- No --> L["return null\n(completed conversation)"]
```
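The pairing step in the flowchart can be sketched as a plain function: a `[DISAPPROVE REASON]` entry is folded into the preceding attempt instead of being rendered as its own step. The field names and exact fold-in behavior are assumptions, not the PR's actual code:

```javascript
// Hypothetical sketch of the processedChain pairing step.
const DISAPPROVE_PREFIX = "[DISAPPROVE REASON]";

function buildProcessedChain(messages) {
  const chain = [];
  for (const msg of messages) {
    if (msg.startsWith(DISAPPROVE_PREFIX) && chain.length > 0) {
      // Attach the reason to the previous attempt rather than
      // emitting it as a standalone step.
      chain[chain.length - 1].disapproveReason = msg
        .slice(DISAPPROVE_PREFIX.length)
        .trim();
    } else {
      chain.push({ text: msg });
    }
  }
  return chain;
}

const chain = buildProcessedChain([
  "Attempt 1: generate SQL",
  "[DISAPPROVE REASON] missing join condition",
  "Attempt 2: generate SQL",
]);
console.log(chain.length); // 2 — the reason is attached, not a separate step
```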
This is a comment left during a code review.
Path: frontend/src/ide/chat-ai/PromptInfo.jsx
Line: 140-148
Comment:
**Unused `err` parameter in catch block**
The caught error is declared but never used — neither logged nor surfaced in the notification description. This will trigger a `no-unused-vars` ESLint warning and silently discards the failure reason, making clipboard issues harder to debug.
```suggestion
.catch(() => {
notification.error({
message: "Copy failed",
description: "Could not copy to clipboard",
placement: "topRight",
duration: 2,
});
});
```
How can I resolve this? If you propose a fix, please make it concise.
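One concise resolution is to keep the `err` parameter and surface its message in the notification description. A sketch with the payload factored into a helper so the formatting is testable (the helper name is hypothetical; the notification fields follow the suggestion above):

```javascript
// Keep err and include its message, falling back when none is available.
// clipboardErrorNotification is a hypothetical helper name.
function clipboardErrorNotification(err) {
  return {
    message: "Copy failed",
    description: `Could not copy to clipboard: ${err?.message ?? "unknown error"}`,
    placement: "topRight",
    duration: 2,
  };
}

// Usage in the catch block:
//   .catch((err) => notification.error(clipboardErrorNotification(err)));
console.log(clipboardErrorNotification(new Error("denied")).description);
```

This satisfies `no-unused-vars` and preserves the failure reason for debugging.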
Prevent showing indefinite "Processing..." shimmer for completed conversations that have no thought chain data.
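The fallback decision this fixes can be sketched as a pure function (names and return values are assumptions): shimmer only while a response is still streaming; a completed conversation with no chain renders nothing.

```javascript
// Hypothetical sketch of the render decision in PromptInfo.
function promptInfoView(thoughtChain, shouldStream) {
  if (thoughtChain && thoughtChain.length > 0) return "chain";
  return shouldStream ? "shimmer" : null; // null: completed, nothing to show
}

console.log(promptInfoView([], true));          // "shimmer"
console.log(promptInfoView([], false));         // null
console.log(promptInfoView(["step 1"], false)); // "chain"
```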
Only show "(N previous steps)" label when previousCount > 0 to avoid displaying misleading "(0 previous steps)" for single-message chains.
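A minimal sketch of the guarded label (function name and exact wording assumed):

```javascript
// Single-message chains get no suffix at all, never "(0 previous steps)".
function previousStepsLabel(previousCount) {
  return previousCount > 0 ? ` (${previousCount} previous steps)` : "";
}

console.log(previousStepsLabel(3)); // " (3 previous steps)"
console.log(previousStepsLabel(0)); // ""
```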
Rename local variable in parseMessage from errorDetails to msgErrorDetails to prevent shadowing the component-level errorDetails prop.
Extract display text from object messages for the key instead of getting "[object Object]" from string coercion.
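A sketch of key derivation that avoids the collision: coercing an object with `String()` yields `"[object Object]"` for every object step, so all such keys would be identical. The `text`/`message` field names below are assumptions:

```javascript
// Derive a stable, distinct key from a step that may be a string or object.
function messageKeyText(step, index) {
  if (typeof step === "string") return step;
  if (step && typeof step === "object") {
    return step.text ?? step.message ?? JSON.stringify(step);
  }
  return String(index);
}

console.log(String({ text: "Gathering context..." }));           // "[object Object]"
console.log(messageKeyText({ text: "Gathering context..." }, 0)); // "Gathering context..."
```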
wicky-zipstack left a comment:
LGTM — clean consolidation, all prior review issues resolved. Two minor P2s for follow-up: unused llmModels required prop in Conversation.jsx, and latest message duplication when Collapse is expanded.
Prevent duplicate display of latest message by hiding the header content when expanded, since all messages are shown in the body.
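The duplicate-avoidance rule can be sketched as (function name hypothetical): when expanded, the body already lists every message, so the header renders no message text.

```javascript
// Header shows the latest message only while collapsed.
function collapseHeaderContent(latestMessage, isExpanded) {
  return isExpanded ? null : latestMessage;
}

console.log(collapseHeaderContent("Gathering context...", false)); // "Gathering context..."
console.log(collapseHeaderContent("Gathering context...", true));  // null
```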
Override Ant Design's default 8px border-radius on .ant-collapse-content to match the parent container's 12px border-radius.
The llmModels prop was no longer used after removing the llmModelDisplayName and coderLlmModelDisplayName memoized values.
@wicky-zipstack Thanks for pointing it out. Both P2 items have been addressed as well.
The fallback regex /^[^:]+:\s*.+/ matched any string with a colon, which could incorrectly consume legitimate messages like "Creating: model.sql". Keep only the explicit [DISAPPROVE REASON] prefix check.
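The false positive is easy to demonstrate: the regex accepts any "label: detail" string, while the explicit prefix check does not.

```javascript
// The over-broad fallback matches ordinary progress messages too.
const broadFallback = /^[^:]+:\s*.+/;
console.log(broadFallback.test("Creating: model.sql")); // true — false positive

// The explicit prefix check only matches real disapprove entries.
const isDisapprove = (msg) => msg.startsWith("[DISAPPROVE REASON]");
console.log(isDisapprove("Creating: model.sql"));          // false
console.log(isDisapprove("[DISAPPROVE REASON] bad join")); // true
```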
The Conversation component no longer uses llmModels, so stop passing it from ExistingChat.
What
Consolidated `EnhancedPromptInfo` and `PromptInfo` into a single `PromptInfo` component.
Why
Maintaining two overlapping components (`EnhancedPromptInfo` and `PromptInfo`) added unnecessary complexity.
How
- Updated the initial status string in `llm_context.py`
- Renamed `EnhancedPromptInfo.jsx` to `PromptInfo.jsx` and removed the legacy component
- Used Ant Design's `Collapse` component for the collapsible UI
- Removed unused props (`isThoughtChainReceived`, `llmModel`, `coderLlmModel`, `chatIntent`)
- Added `useMemo` for the `processedChain` and `errorDetails` objects
- Show the latest message in the header only while collapsed (`isInProgress && !isExpanded`)

Can this PR break any existing features. If yes, please list possible items. If no, please explain why. (PS: Admins do not merge the PR without this section filled)
Database Migrations
Env Config
Relevant Docs
Related Issues or PRs
Dependencies Versions
Notes on Testing
Screenshots
Collapsed
Expanded
Checklist
I have read and understood the Contribution Guidelines.