- LLM-powered CommandSafetyChecker for intelligent command analysis
- SafetyCache with LRU eviction and TTL-based expiration for safety decisions
- Built-in model configurations with /model init command for new users
- ToolResult dataclass for improved type safety in tool handlers
- Centralized ClippySettings system with environment variable management
- Refactor AgentLoopConfig dataclass to consolidate run_agent_loop parameters
- Replace large if-elif chain with dispatch table in executor.py
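The dispatch-table refactor above follows a common pattern: map tool names to handler functions instead of branching. A hypothetical sketch (handler names are invented for illustration, not the project's actual tool set):

```python
def handle_read(args: dict) -> str:
    return f"read {args['path']}"

def handle_write(args: dict) -> str:
    return f"wrote {args['path']}"

# One flat table replaces the if-elif chain; adding a tool is one entry.
TOOL_DISPATCH = {
    "read_file": handle_read,
    "write_file": handle_write,
}

def execute(tool_name: str, args: dict) -> str:
    handler = TOOL_DISPATCH.get(tool_name)
    if handler is None:
        raise ValueError(f"unknown tool: {tool_name}")
    return handler(args)
```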
- Improve find_replace path handling with absolute paths and recursive glob matching
- Enhance model commands with built-in indicator column and improved completions
- Streamline version bumping process with git tag automation
- Narrow exception handling for file operations to catch specific exception types
- Resolve path handling issues in find_replace file collection
- Prevent deadlocks by using RLock instead of Lock for reentrant calls
- Simplified streaming for more consistent real-time responses
- Fixed text appearing on wrong line below attachments
- Fixed duplicate text in streaming responses
- Fixed words running together in responses
- Real-time responses now stream continuously as they're generated
- Simplified and more reliable message processing across all AI providers
- Removed old compatibility code for a cleaner, faster experience
- Six new specialty subagents: architect, debugger, security, performance, integrator, and researcher
- Track your token usage and estimated costs with detailed session reports
- Enhanced safety rules for file deletion to better protect your data
- Simplified subagent model selection - now fully user-controlled
- Cleaner status display showing essential token metrics
- Fixed token tracking to prevent double-counting and ensure accurate usage
- Fixed /subagent command bug that was causing errors
- Better version management for more accurate updates
- Fixed incorrect line range calculation when using negative numbers
- New 'grepper' subagent for safely searching and gathering information without making changes
- Better handling of file line requests that go beyond the end of the file
- Updated provider documentation with current examples and providers
- More reliable test results for stuck detection
- Improved build process for cleaner releases
- Fixed version mismatch between project files
- New safety control commands to turn checks on/off and view status
- Quickly toggle safety checks while the app is running
- Comprehensive documentation including best practices, troubleshooting, and migration guides
- Better test coverage for safety integration features
- Choose which AI model checks command safety for more control
- Safety checks now use configurable models for better flexibility
- Command cache clears automatically when switching safety models
- Updated default AI model to gpt-5-mini for better performance
- Enhanced command safety checker to be smarter about development workflows
- Improved first-time setup to use model configurations dynamically
- Better error handling when reading file lines with invalid ranges
- Fixed crashes when requesting lines outside file boundaries
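Range clamping of the kind described in the last few bullets might look like the following sketch. The exact rules, including how negative indices are interpreted, are assumptions, not the project's documented behavior:

```python
def clamp_line_range(num_lines: int, start: int, end: int) -> tuple[int, int]:
    """Clamp a 1-based, inclusive line range to a file's actual bounds.
    Negative indices count from the end, as in Python slicing."""
    if start < 0:
        start = max(1, num_lines + start + 1)
    if end < 0:
        end = max(1, num_lines + end + 1)
    start = max(1, min(start, num_lines))  # never before line 1
    end = max(start, min(end, num_lines))  # never past EOF or before start
    return start, end
```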
- Command safety checks now use caching for faster performance
- Safety cache settings can be configured to reduce unnecessary checks
- Added example scripts showing cache performance benefits
- New intelligent safety agent protects you from dangerous commands by analyzing them before execution
- Better protection against harmful commands with automatic safety checking when an AI provider is available
- Enhanced security documentation with detailed examples and troubleshooting guides
- New command completion for /model remove and /model threshold commands
- Improve model command help text formatting and organization
- Separate built-in indicator into dedicated column in model list display
- Update provider reference from /providers to /provider list
- Streamline version bumping process with git tag automation and pre-bump validation
- Resolve path handling issues in find_replace tool for absolute and relative glob patterns
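File collection with recursive globs and absolute patterns, as described above, could be sketched like this (function name and behavior are illustrative, not the project's actual implementation):

```python
from pathlib import Path

def collect_files(pattern: str, root: str = ".") -> list[Path]:
    """Resolve a glob pattern to absolute file paths.
    Absolute patterns are matched from their own anchor; relative
    patterns are matched recursively under `root`."""
    p = Path(pattern)
    if p.is_absolute():
        # Path.glob needs a relative pattern, so split off the anchor.
        anchor = Path(p.anchor)
        matches = anchor.glob(str(p.relative_to(anchor)))
    else:
        matches = Path(root).rglob(pattern)  # recursive by default
    return sorted(q.resolve() for q in matches if q.is_file())
```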
- /model init command helps new users get started with default models
- Built-in model configurations for all providers
- Better code organization and reliability
- Cleaner model management with built-in and user-defined models
- Improved error handling for file operations
- Fixed JSON loading error handling
- Removed redundant error handling in file operations
- Better reliability with improved threading for complex operations
- More organized code structure for better performance and stability
- Enhanced error handling with specific exceptions for file operations
- Cleaner tool management with improved result handling
- Better model detection with shared utility functions
- Assistant responses now display with better formatting and visual indicators
- Cleaner text display by removing extra blank lines from responses
- App now shuts down more reliably with configurable cleanup timing
- Tests run significantly faster while maintaining accuracy
- More reliable monitoring and cleanup of background tasks
- Control command output visibility with new settings option
- Set custom timeouts for command execution
- Better test coverage with multiple report formats
- Comprehensive changelog documenting complete project history
- Added support for Groq, Mistral AI, Together AI, and Minimax AI providers
- Enhanced model command completion for better tab suggestions
- Better error messages when provider configuration is missing
- More reliable app performance with improved thread safety
- Enhanced security with protection against dangerous commands
- Fixed a security vulnerability that allowed shell injection attacks
- Improved command execution reliability with better timeout handling
- Enhanced security with path validation for file operations
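Two standard techniques behind fixes like the shell-injection and path-validation items above are running commands as argument lists (never `shell=True`) and resolving paths before comparing them against a workspace root. A hedged sketch with illustrative names:

```python
import subprocess
from pathlib import Path

def run_safely(args: list[str], timeout: float = 300.0) -> str:
    """Run a command as an argument list, so shell metacharacters in
    arguments are passed literally instead of being interpreted."""
    result = subprocess.run(args, capture_output=True, text=True, timeout=timeout)
    return result.stdout

def validate_path(candidate: str, workspace: Path) -> Path:
    """Reject paths that escape the workspace (e.g. via ../ traversal)."""
    resolved = (workspace / candidate).resolve()
    resolved.relative_to(workspace.resolve())  # raises ValueError if outside
    return resolved
```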
- Better error handling for external integrations
- Improved conversation management with automatic compaction
- Cleaner model management by removing temporary commands
- Faster and more reliable AI provider connections
- Fixed conversation history not updating after auto-compaction
- Prevented crashes from external tool connection issues
- DeepSeek provider now available as an AI option with reasoner model support
- Read specific line ranges from files with new read_lines tool
- Better organized model list display with visual indicators and clearer formatting
- Quick model switching with '/model ' shortcut command
- Automatic re-authentication for Claude Code OAuth sessions
- Standardized provider commands for better consistency
- Fixed ASCII art alignment in the welcome message display
- Fixed provider command interface inconsistency
- Fixed model threshold command parsing with multi-part arguments
- Interactive setup wizard guides you through AI provider configuration on first run
- Cleaner first-time experience without forcing a default model selection
- Better error messages guide you to setup instead of mentioning default models
- Create custom commands with interactive step-by-step wizards
- Use project-level custom commands that override global settings for team collaboration
- Configure AI models with an interactive 5-step setup wizard
- Manage AI providers more easily with optional API key support
- Better tab completion now includes your custom commands
- Cleaner provider list focusing on actively supported services
- Subagents can now auto-approve specific tools without manual confirmation
- Simplified system prompt for faster and more focused responses
- Enhanced tool descriptions for better clarity and understanding
- Added safety features to prevent execution of dangerous commands
- Streamlined file editing with centralized usage guidelines
- Fixed circular import issues in tool handling
- Resolved trailing whitespace problems in parallel task execution
- Create your own custom slash commands for automation
- Read specific line ranges from files with new read_lines tool
- Get started quickly with custom commands quickstart guide
- Better organized command system for improved reliability
- Enhanced model management with easier switching between AI models
- More comprehensive help system with custom commands integration
- Better conversation management with intelligent token tracking
- Automatic truncation of oversized tool results to keep responses manageable
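Oversized-result truncation typically keeps the head and tail of the output and marks the cut. A sketch assuming character-based limits (the project may measure in tokens instead; the marker text is illustrative):

```python
def truncate_result(text: str, max_chars: int = 4000) -> str:
    """Keep the head and tail of an oversized tool result, marking the cut."""
    if len(text) <= max_chars:
        return text
    keep = (max_chars - 30) // 2          # reserve ~30 chars for the marker
    omitted = len(text) - 2 * keep
    return f"{text[:keep]}\n[... {omitted} chars truncated ...]\n{text[-keep:]}"
```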
- Streamlined documentation with improved structure and easier navigation
- Enhanced test reliability and code quality improvements
- Fixed OAuth authentication issues with proper test environment setup
- Resolved trailing whitespace problems in parallel task execution
- Fixed test isolation by clearing conflicting environment variables
- Better support for parallel task execution with improved iteration limits
- More reliable subagent coordination during complex multi-task operations
- Cleaned up code structure for better reliability
- Automatic recovery system for stuck subagents during parallel tasks
- Better ZAI provider compatibility with conversation summaries
- Streamlined command structure and improved type checking
- Enhanced OAuth token handling and error management
- Fixed recursive directory listing functionality
- Resolved OAuth test environment interference
- Fixed conversation compaction issues with ZAI GLM-4.6 model
- Sign in with Claude Code subscription using OAuth authentication
- New vaporwave dream mode with retro 90s-themed interface and animations
- Better organization with clear tool names in results
- Safer directory browsing without automatic recursion
- AI can now complete tasks without step limits
- Fixed issue where tool results didn't show which tool was used
- Fetch web pages directly for research and documentation
- Press Ctrl+J to easily create multi-line inputs
- AI can now complete complex tasks without step limits
- Better performance for file operations
- Fixed directory listing to prevent unintended file access
- Fetch content from web pages for research and documentation
- Press Ctrl+J to easily create multi-line inputs
- Better help commands with detailed guidance for models and servers
- Cleaner welcome screen with improved centering and layout
- Enhanced notifications show conversation space savings when auto-compacted
- Updated token usage threshold takes effect immediately without restart
- Fixed model threshold cache not updating when changed during session
- Added support for Hugging Face models and AI providers
- New paperclip ASCII art banner for better appearance
- Streamlined documentation for easier onboarding
- Updated provider list and Anthropic API key configuration
- Cleaner example configuration files
- Simplified welcome message to focus on essential information for new users
- Added a fun ASCII art welcome banner with Clippy's classic greeting
- New think tool helps AI organize thoughts before taking action
- AI can now plan internally before executing tasks
- Better reasoning process for more accurate results
- New YOLO mode for auto-approving all actions
- Streamlined workflow with automatic approvals
- New /init command automatically creates project documentation files
- Enhance existing documentation with project-specific insights using --refine flag
- Cleaner AI responses by removing extra blank lines at the start
- Better project analysis detects structure, dependencies, and development commands
- Support for Anthropic and Google Gemini AI providers
- HuggingFace model integration for more AI options
- Better handling of custom AI providers with OpenAI-compatible settings
- More flexible provider system with improved model identification
- Fixed issue with prefixed models not using correct provider settings
- Better AI model management with improved configuration options
- More reliable tool calling with enhanced compatibility
- Enhanced performance and stability for all AI interactions
- Get helpful suggestions when you type an incorrect slash command
- Better error messages for unknown commands in both interactive and quick modes
- New /truncate command to manage conversation length
- Copy and move files with validation and progress tracking
- Find and replace text across multiple files with preview mode
- Better tool organization with categories and smart suggestions
- Help commands grouped by category for easier navigation
- Consolidated the tool catalog into fewer, more powerful tools
- New project analysis tool for security scanning and code quality assessment
- Real-world examples and development scenarios added to documentation
- Enhanced interactive mode with progress indicators and smart file completion
- Better model management UI with detailed status panels
- Streamlined file operations using familiar shell commands
- Improved error recovery with contextual suggestions
- Fixed automated execution issues in CI pipelines by removing interactive flag
- Automatic file validation checks for common formats like Python, JSON, and YAML when writing files
- Binary file detection prevents errors when working with images, documents, and other non-text files
- Better error messages with actionable guidance when file operations fail
- File validation can be skipped for files over 1 MB to keep things fast
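A common way to implement the binary-file and size checks described above is a NUL-byte sniff on the first chunk plus a size guard. A sketch with illustrative limits and names:

```python
from pathlib import Path

MAX_VALIDATED_SIZE = 1_000_000  # skip validation above ~1 MB (illustrative)

def looks_binary(path: Path, sniff_bytes: int = 8192) -> bool:
    """Heuristic: treat a file as binary if its first chunk contains NUL."""
    with path.open("rb") as f:
        return b"\x00" in f.read(sniff_bytes)

def should_validate(path: Path) -> bool:
    """Validate only small, text-like files."""
    return path.stat().st_size <= MAX_VALIDATED_SIZE and not looks_binary(path)
```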
- Fixed issue with error handling that could cause problems with external tool connections
- Mistral AI now available as a provider option
- Enable and disable MCP servers for better control
- /model load command makes switching AI models faster
- Better tab completion for model commands with context-aware suggestions
- Command timeout increased to 5 minutes with configurable options
- Cleaner documentation with restructured features section
- Fixed search patterns starting with dash being misinterpreted as flags
- Prevented special formatting characters from causing display errors
- Fixed duplicate commands in MCP manager
- Save and resume conversations automatically
- Interactive conversation picker when resuming
- Better file change previews with cleaner formatting
- Auto-generates timestamps for saved conversations
- Shows conversation history when loading saves
- Fixed crash when messages contain special formatting characters
- Prevented display errors with mismatched text formatting
- MiniMax provider now available as an AI option
- Smart file completion suggests files without typing @ symbol
- Better tab completion for file references with @ symbol
- Enhanced file detection by analyzing paths and extensions
- Fixed display issues when tool outputs contain special characters
- Prevented rendering artifacts in diff content
- Tab completion for slash commands makes typing faster
- New 'clippy-code' command as an alternative way to start the app
- Auto-enters interactive mode when no task is provided
- Handles more complex tasks with the operation limit increased to 100
- Added Chutes.ai as a new AI provider option
- Simplified provider names for cleaner display
- Better search behavior with ripgrep's automatic recursive search
- Fixed search tool flag handling for more reliable results
- Better token usage tracking with model-specific limits
- Status command now shows how usage is calculated
- Model names and IDs are now case-insensitive for easier matching
- Conversations automatically summarize when they get too long to save space
- Clippy now has a fun personality with paperclip-themed jokes and puns
- Model management commands now work better in document mode
- File editing is more reliable with exact string matching instead of patterns
- Switching between AI models works better with validation and case-insensitive matching
- Document mode header now shows your current working directory
- Fixed multi-line pattern deletion in file edits
- Fixed issues with trailing newlines when deleting text patterns
- Switch between AI models more easily with case-insensitive matching
- Edit files with simpler exact string matching instead of complex patterns
- Better error messages when model switching fails
- More reliable multi-line pattern deletion
- Smarter fuzzy matching finds similar text when exact match isn't found
- Fixed issues with trailing newlines when deleting multi-line patterns
- Fixed pattern matching that counted wrong number of occurrences
- New subagent system for delegating complex tasks to specialized AI agents
- Set custom models for different subagent types with /subagent commands
- Better multi-line pattern handling for file edits
- Enhanced approval dialogs with improved error handling
- Clear visual indicators show which subagent is working
- Fixed file editing issues with patterns ending in newlines
- Fixed potential runtime errors with external tool connections
- Simplified configuration by removing environment variable fallbacks
- Better model selection with explicit model names and IDs
- Cleaner setup process with updated documentation and examples
- Cleaner terminal output when connecting to external tools
- Better error messages for troubleshooting connection issues
- Manage your own AI models and providers with new commands like /model add/remove/default
- Better visual feedback with a spinner while AI is thinking
- More flexible model system with separate provider and user configurations
- Fixed a security issue where text markup could break the UI
- Added detailed logging to help track what the app is doing
- Current working directory now displayed in the document header for easier navigation
- Better error tracking with detailed logs when something goes wrong
- Simplified approval system with clearer yes/no/allow options
- Better approval prompt validation with helpful error messages
- App now asks if you want to continue when reaching task limits
- Switched to faster dependency management for quicker updates
- Project renamed to 'clippy-code' for clearer branding and consistency
- Added automated publishing workflow for smoother updates
- Enhanced type checking for better reliability
- Better console message formatting for improved readability
- Fixed security issue with error message display to prevent markup injection
- You can now approve MCP tools on-the-fly without pre-configuring trust settings
- New block editing operations for replacing and deleting multi-line text sections
- Advanced regex replacement with support for capture groups and flags
- Enhanced file editing with more precise block-based operations
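Regex replacement with capture groups and flags, as described above, maps naturally onto `re.sub`. A sketch assuming single-letter flag strings (the flag letters are an assumption, not the tool's documented interface):

```python
import re

def regex_replace(text: str, pattern: str, replacement: str, flags: str = "") -> str:
    """Apply a regex substitution with optional single-letter flags:
    i = ignorecase, m = multiline, s = dotall."""
    flag_map = {"i": re.IGNORECASE, "m": re.MULTILINE, "s": re.DOTALL}
    compiled_flags = 0
    for letter in flags:
        compiled_flags |= flag_map[letter]  # KeyError on unknown letters
    return re.sub(pattern, replacement, text, flags=compiled_flags)
```

Capture groups carry through via backreferences in the replacement, e.g. `\1` for the first group.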
- Better MCP tool compatibility and result formatting
- Expanded documentation with comprehensive MCP integration guides and examples
- Customized scrollbar appearance for better conversation viewing
- Connect to external tools through Model Context Protocol (MCP) servers
- New MCP commands: list, tools, refresh, allow, and revoke
- Enhanced approval dialog with expandable details and better error messages
- Manually trust MCP servers for better security control
- Better error handling with contextual suggestions when things go wrong
- Improved grep tool now accepts both 'path' and 'paths' for flexibility
- Fixed UI display issues in document mode
- Resolved connection reliability problems with external servers
- Edit tool now supports multi-line patterns for complex file changes
- Manage auto-approvals with new /auto command (list, revoke, clear)
- Approval dialog redesigned with modern Windows-style security interface
- Handles longer tasks with the operation limit increased from 25 to 50
- Better code organization for improved reliability and performance
- File edits are more reliable with strict pattern matching and validation
- Fixed paperclip icon splitting in document mode
- Improved edit tool reliability with better error handling and validation
- Support for ZAI provider with GLM models
- Better search with familiar grep flags and glob patterns
- Improved file search handling with proper glob expansion
- File search and editing tools now work properly
- Fixed conversation compaction error when missing newlines
- Improved documentation formatting consistency
- Visual indicator shows when AI is thinking
- New edit tool for precise line-based file modifications
- Better search using ripgrep when available
- Directory listings now show folders with trailing slash
- Fixed a security issue that allowed directory traversal
- Improved error messages and reliability
- New grep tool for searching patterns across files
- Read multiple files at once with read_files tool
- Document mode now shows conversation like a chat interface
- Better error messages when something goes wrong
- Directory listings now respect .gitignore rules
- Auto-loads project documentation when available
- Cleaner document interface with modern conversation display
- Improved approval UI for tool actions
- Fixed duplicate messages in interactive mode
- Fixed Enter key submission issues in document mode
- Fixed output display and text formatting in document UI
- Use /compact to summarize long conversations and save space
- Check your token usage with /status command
- Better reliability with automatic retries for failed connections
- Removed response length limits so the AI determines the best response size
- Switch between different AI models during conversations using /model commands
- See responses appear in real-time as they're being written
- Better support for multiple AI providers with separate API keys
- Press ESC twice to quickly interrupt long responses
- Improved reliability for model switching and configuration
- Works with any OpenAI-compatible service (OpenAI, Cerebras, Ollama, and more)
- Simplified setup with OpenAI as the default provider
- Cleaner code structure for better reliability and performance
- Updated guides and examples to help you get started faster
- Choose your preferred AI provider - now supporting both Anthropic and OpenAI models
- Interactive chat mode for longer conversations, plus quick command mode for single tasks
- Safety system that automatically approves safe operations while asking for confirmation on risky ones
- Better documentation with comprehensive guides and examples
- New development tools for easier testing and code quality checks
- Improved configuration system with clear setup instructions