Rust TUI Coder is a terminal-based AI coding assistant built in Rust. It provides an interactive terminal interface for working with Large Language Models (LLMs) on coding tasks, file operations, and project management.
```
┌─────────────────────────────────────────────────────────────┐
│ User Interface │
│ (ui.rs) │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ Conversation │ │ Tool Logs │ │ Status Bar │ │
│ │ Display │ │ Display │ │ │ │
│ └──────────────┘ └──────────────┘ └──────────────┘ │
└─────────────────────────────────────────────────────────────┘
▲ │
│ │
│ ▼
┌─────────────────────────────────────────────────────────────┐
│ Application State │
│ (app.rs) │
│ • User input buffer │
│ • Conversation history │
│ • Tool execution logs │
│ • Usage tracking (tokens, requests, tools) │
│ • Scroll state management │
│ • Streaming state │
└─────────────────────────────────────────────────────────────┘
▲ │
│ │
│ ▼
┌─────────────────────────────────────────────────────────────┐
│ LLM Interface │
│ (llm.rs) │
│ • API communication │
│ • Message formatting │
│ • Token counting │
│ • Streaming support │
│ • Tool call parsing │
└─────────────────────────────────────────────────────────────┘
▲ │
│ │
│ ▼
┌─────────────────────────────────────────────────────────────┐
│ Agent System │
│ (agent.rs) │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ File Ops │ │ Code Exec │ │ Plan Mgmt │ │
│ │ • read │ │ • python │ │ • create │ │
│ │ • write │ │ • bash │ │ • update │ │
│ │ • append │ │ • node │ │ • clear │ │
│ │ • search │ │ • ruby │ │ │ │
│ │ • delete │ │ │ │ │ │
│ └──────────────┘ └──────────────┘ └──────────────┘ │
│ ┌──────────────┐ ┌──────────────┐ │
│ │ Directory │ │ Git Ops │ │
│ │ • create │ │ • status │ │
│ │ • list │ │ │ │
│ │ • recurse │ │ │ │
│ └──────────────┘ └──────────────┘ │
└─────────────────────────────────────────────────────────────┘
▲ │
│ │
│ ▼
┌─────────────────────────────────────────────────────────────┐
│ Configuration │
│ (config.rs) │
│ • LLM settings (API key, base URL, model) │
│ • Provider selection (OpenAI, Anthropic, Local) │
│ • TOML file parsing │
└─────────────────────────────────────────────────────────────┘
```
main.rs
Responsibilities:
- Initialize the terminal UI using ratatui
- Load configuration from config.toml
- Set up the main event loop
- Handle user input (keyboard events)
- Coordinate between the UI, App state, and LLM
Key Functions:
- main(): Entry point; sets up the terminal and runs the event loop
- run_app(): Main application loop
- Event handling for keyboard input
Dependencies:
- crossterm for terminal manipulation
- ratatui for TUI rendering
- tokio for the async runtime
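The shape of that event loop might look like this (a minimal sketch: run_app and the crossterm/ratatui/tokio stack are from this document, while the App fields shown, the Esc-to-quit binding, and the poll timeout are illustrative assumptions):

```rust
use std::{io, time::Duration};
use crossterm::event::{self, Event, KeyCode};
use ratatui::{backend::Backend, Terminal};

// Placeholder standing in for the real App struct from app.rs.
struct App {
    user_input: String,
    should_quit: bool,
}

fn run_app<B: Backend>(terminal: &mut Terminal<B>, app: &mut App) -> io::Result<()> {
    loop {
        // Redraw the whole frame; `ui` is the rendering function in ui.rs.
        terminal.draw(|_frame| { /* ui(_frame, app) */ })?;

        // Poll with a timeout so the loop stays responsive while streaming.
        if event::poll(Duration::from_millis(100))? {
            if let Event::Key(key) = event::read()? {
                match key.code {
                    KeyCode::Esc => app.should_quit = true,
                    KeyCode::Enter => { /* submit app.user_input via llm.rs */ }
                    KeyCode::Backspace => { app.user_input.pop(); }
                    KeyCode::Char(c) => app.user_input.push(c),
                    _ => {}
                }
            }
        }
        if app.should_quit {
            return Ok(());
        }
    }
}
```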
app.rs
Responsibilities:
- Maintain application state
- Track conversation history
- Manage tool execution logs
- Track usage statistics (tokens, requests, tools)
- Handle scroll positions
- Manage streaming state
Key Structures:
```rust
pub struct App {
pub user_input: String,
pub conversation: Vec<String>,
pub status_message: String,
pub tool_logs: Vec<String>,
pub is_executing_tool: bool,
pub current_tool: String,
// Usage tracking
pub session_start_time: Instant,
pub tokens_used: u64,
pub total_requests: u64,
pub total_tools_executed: u64,
// Scrolling and streaming
pub conversation_scroll_position: usize,
pub tool_logs_scroll_position: usize,
pub is_streaming: bool,
pub current_streaming_message: String,
}
```
Key Methods:
- new(): Initialize the app with default state
- add_tool_log(): Add tool execution logs
- increment_*(): Track usage metrics
- scroll_*(): Manage scroll positions
- start_streaming(), update_streaming_message(), finish_streaming(): Handle streaming responses
- get_usage_summary(): Generate usage statistics
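As an illustration, two of these methods might be implemented roughly as follows (the method names are from the list above; the bodies are a sketch assuming the fields shown in the struct):

```rust
impl App {
    // Record a tool execution entry for the tool-logs pane.
    pub fn add_tool_log(&mut self, entry: impl Into<String>) {
        self.tool_logs.push(entry.into());
    }

    // Summarize session usage, e.g. for the status bar.
    pub fn get_usage_summary(&self) -> String {
        let elapsed = self.session_start_time.elapsed().as_secs();
        format!(
            "session: {elapsed}s | tokens: {} | requests: {} | tools: {}",
            self.tokens_used, self.total_requests, self.total_tools_executed
        )
    }
}
```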
ui.rs
Responsibilities:
- Render the terminal UI using ratatui
- Display conversation history
- Display tool execution logs
- Show status messages and the input area
- Handle scroll rendering
Layout:
```
┌─────────────────────────────────────────┐
│ Conversation Area (70% height)          │
│ • User messages                         │
│ • Agent responses                       │
│ • Scrollable with Up/Down               │
└─────────────────────────────────────────┘
┌─────────────────────────────────────────┐
│ Tool Logs Area (20% height)             │
│ • Tool execution details                │
│ • Results and errors                    │
└─────────────────────────────────────────┘
┌─────────────────────────────────────────┐
│ Status Bar (1 line)                     │
│ • Commands and shortcuts                │
└─────────────────────────────────────────┘
┌─────────────────────────────────────────┐
│ Input Area (remaining)                  │
│ • User input with cursor                │
└─────────────────────────────────────────┘
```
Key Functions:
- ui(): Main rendering function
- Renders blocks with Block::default().borders(Borders::ALL)
- Uses Paragraph widgets for text display
- Implements scrolling with the scroll() method
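Put together, the rendering could be structured like this (a sketch against a recent ratatui API; the widget titles and exact constraints are illustrative):

```rust
use ratatui::{
    layout::{Constraint, Direction, Layout},
    widgets::{Block, Borders, Paragraph},
    Frame,
};

// `App` is the state struct from app.rs.
fn ui(frame: &mut Frame, app: &App) {
    let areas = Layout::default()
        .direction(Direction::Vertical)
        .constraints([
            Constraint::Percentage(70), // conversation
            Constraint::Percentage(20), // tool logs
            Constraint::Length(1),      // status bar
            Constraint::Min(3),         // input area
        ])
        .split(frame.area());

    let conversation = Paragraph::new(app.conversation.join("\n"))
        .block(Block::default().borders(Borders::ALL).title("Conversation"))
        .scroll((app.conversation_scroll_position as u16, 0));
    frame.render_widget(conversation, areas[0]);
    // Tool logs, status bar, and input area follow the same pattern.
}
```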
llm.rs
Responsibilities:
- Communicate with LLM APIs (OpenAI, Anthropic, local)
- Format messages for API requests
- Parse API responses
- Handle tool calls from LLM
- Count tokens for usage tracking
- Support streaming responses
Key Structures:
```rust
pub struct Message {
pub role: String,
pub content: String,
}
pub struct LlmResponse {
pub content: String,
pub tool_calls: Vec<ToolCall>,
pub tokens_used: u64,
}
```
Key Functions:
- send_message(): Send a message to the LLM and get a response
- send_message_streaming(): Send a message with streaming support
- estimate_tokens(): Estimate the token count for a piece of text
- Format requests for different providers (OpenAI format)
API Support:
- OpenAI (GPT-3.5, GPT-4)
- Anthropic (Claude models)
- Local models (via OpenAI-compatible API)
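For the OpenAI-compatible path, assembling a request body could look like the following sketch (build_request_body is a hypothetical helper; the field names follow the OpenAI chat-completions wire format):

```rust
use serde_json::{json, Value};

// Hypothetical helper: build an OpenAI-style chat-completions request body
// from the Message structs above.
fn build_request_body(model: &str, messages: &[Message], stream: bool) -> Value {
    let messages: Vec<Value> = messages
        .iter()
        .map(|m| json!({ "role": m.role, "content": m.content }))
        .collect();
    json!({ "model": model, "messages": messages, "stream": stream })
}
```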
agent.rs
Responsibilities:
- Execute tool calls requested by LLM
- Provide file system operations
- Execute code in various languages
- Manage project plans
- Handle git operations
Tool Categories:
File Operations:
- read_file: Read file contents
- write_file: Write content to a file
- append_to_file: Append content to a file
- search_and_replace: Search and replace within a file
- delete_file: Delete a file
Directory Operations:
- create_directory: Create a directory (with parents)
- list_directory: List directory contents
- list_directory_recursive: List a directory tree
Code Execution:
- execute_python: Run Python code
- execute_bash: Run bash commands
- execute_node: Run Node.js code
- execute_ruby: Run Ruby code
Plan Management:
- create_plan: Create an implementation plan
- update_plan_step: Update a plan step's status
- clear_plan: Clear the plan
Git Operations:
- git_status: Get the git repository status
Key Functions:
- execute_tool(): Main dispatcher for tool execution
- Individual tool implementation functions
- Error handling and result formatting
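The dispatcher shape might look like this sketch (the tool names are from the list above; the JSON argument format and the two arms shown are illustrative):

```rust
use std::{fs, process::Command};

pub fn execute_tool(name: &str, args: &serde_json::Value) -> Result<String, String> {
    match name {
        "read_file" => {
            let path = args["path"].as_str().ok_or("missing 'path' argument")?;
            fs::read_to_string(path).map_err(|e| e.to_string())
        }
        "git_status" => {
            let out = Command::new("git")
                .args(["status", "--short"])
                .output()
                .map_err(|e| e.to_string())?;
            Ok(String::from_utf8_lossy(&out.stdout).into_owned())
        }
        // ...the remaining tools follow the same pattern...
        other => Err(format!("unknown tool: {other}")),
    }
}
```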
config.rs
Responsibilities:
- Load configuration from TOML file
- Parse LLM settings
- Provide configuration to other modules
Configuration Structure:
```toml
[llm]
provider = "openai" # optional: openai, anthropic, local
api_key = "your-api-key"
api_base_url = "https://api.openai.com/v1"
model_name = "gpt-4"Key Functions:
Config::from_file(): Load config from file path- Error handling for missing/invalid config
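With serde and the toml crate, loading might look like this sketch (the struct fields mirror the TOML keys above; the error messages are illustrative):

```rust
use serde::Deserialize;

#[derive(Deserialize)]
pub struct Config {
    pub llm: LlmConfig,
}

#[derive(Deserialize)]
pub struct LlmConfig {
    pub provider: Option<String>, // optional: openai, anthropic, local
    pub api_key: String,
    pub api_base_url: String,
    pub model_name: String,
}

impl Config {
    // Fail fast with a readable message if the file is missing or malformed.
    pub fn from_file(path: &str) -> Result<Self, String> {
        let raw = std::fs::read_to_string(path)
            .map_err(|e| format!("cannot read {path}: {e}"))?;
        toml::from_str(&raw).map_err(|e| format!("invalid config: {e}"))
    }
}
```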
Message Flow:
- User types a message in the input area
- Press Enter to submit
- main.rs receives the input and adds it to the conversation
- Message sent to the LLM via llm.rs
- LLM response received (may include tool calls)
- If tool calls are present:
  - Each tool call executed via agent.rs
  - Results logged to the tool logs
  - Results sent back to the LLM
  - LLM generates the final response
- Response added to the conversation
- UI updated to show the new messages
- Usage statistics updated
Tool Execution Flow:
- LLM returns tool calls in its response
- main.rs parses the tool calls
- For each tool:
  - Log "Executing tool: {name}" to the tool logs
  - Call agent::execute_tool() with the tool name and arguments
  - Capture the result or error
  - Log the result/error to the tool logs
- Format the tool results for the LLM
- Send the tool results back to the LLM
- Receive and display the final LLM response
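The loop behind this flow might be shaped as follows (a sketch: send_message, execute_tool, Message, and LlmResponse are from this document, while ToolCall's name/arguments fields and the "tool" role string are assumptions):

```rust
// The loop keeps calling the LLM until no tool calls remain, feeding each
// tool result back into the message history.
async fn handle_turn(app: &mut App, mut messages: Vec<Message>) -> Result<(), String> {
    loop {
        let response = send_message(&messages).await?;
        app.total_requests += 1;
        app.tokens_used += response.tokens_used;

        if response.tool_calls.is_empty() {
            // No tools requested: this is the final answer.
            app.conversation.push(response.content);
            return Ok(());
        }
        for call in &response.tool_calls {
            app.add_tool_log(format!("Executing tool: {}", call.name));
            let result = execute_tool(&call.name, &call.arguments)
                .unwrap_or_else(|e| format!("error: {e}"));
            app.add_tool_log(result.clone());
            // Feed the result back so the LLM can continue.
            messages.push(Message { role: "tool".into(), content: result });
        }
    }
}
```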
Streaming Flow:
- User submits a message
- send_message_streaming() is called
- App enters the streaming state
- For each chunk received:
  - Update current_streaming_message
  - Render the UI to show the partial message
- When complete:
  - Add the final message to the conversation
  - Exit the streaming state
  - Update the scroll position
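A sketch of the three streaming methods named earlier, assuming the App fields shown above:

```rust
impl App {
    pub fn start_streaming(&mut self) {
        self.is_streaming = true;
        self.current_streaming_message.clear();
    }

    // Called once per chunk; the next UI draw picks up the partial text.
    pub fn update_streaming_message(&mut self, chunk: &str) {
        self.current_streaming_message.push_str(chunk);
    }

    pub fn finish_streaming(&mut self) {
        let message = std::mem::take(&mut self.current_streaming_message);
        self.conversation.push(message);
        self.is_streaming = false;
    }
}
```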
Design Decisions:
ratatui for the TUI:
- Modern, actively maintained TUI library
- Good performance and flexibility
- Widget-based architecture
tokio for async:
- Efficient async I/O for API calls
- Non-blocking tool execution
- Streaming support
TOML for configuration:
- Human-readable format
- Easy to edit
- Strong typing with serde
Tool call format:
- LLM-friendly format
- Easy to parse and validate
- Extensible design
State management:
- Centralized in the App struct
- Immutable where possible
- Clear ownership boundaries
Error Handling:
- File operations: Return descriptive errors
- API calls: Handle network errors, timeouts
- Tool execution: Capture and log errors
- Configuration: Fail fast with clear messages
Performance Considerations:
- Token estimation: O(n) character-based approximation
- Conversation history: Stored in memory (consider limits for long sessions)
- Tool execution: Synchronous but with progress indication
- UI rendering: Only on state changes
- Scrolling: Efficient with view windows
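The character-based token estimate can be as simple as the following sketch (four characters per token is a common rule of thumb, not a measured constant):

```rust
// O(n) over the input; good enough for usage display, not for billing.
fn estimate_tokens(text: &str) -> u64 {
    (text.chars().count() as u64 + 3) / 4
}
```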
Security Considerations:
- API keys: Stored in the config file (should not be committed)
- Code execution: Direct shell access (use in trusted environments)
- File operations: No sandboxing (user responsibility)
- Input validation: Basic validation on tool arguments
Adding a New Tool:
- Define the tool schema in agent.rs
- Implement the tool function
- Add it to the execute_tool() dispatcher
- Document it in the system prompt
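For example, a hypothetical word_count tool would touch the first three steps like this (the schema uses the OpenAI function-calling shape; the tool itself is invented for illustration):

```rust
use serde_json::{json, Value};

// Step 1: the schema advertised to the LLM (OpenAI function-calling shape).
fn word_count_schema() -> Value {
    json!({
        "type": "function",
        "function": {
            "name": "word_count",
            "description": "Count the words in a file",
            "parameters": {
                "type": "object",
                "properties": { "path": { "type": "string" } },
                "required": ["path"]
            }
        }
    })
}

// Step 2: the implementation.
fn word_count(args: &Value) -> Result<String, String> {
    let path = args["path"].as_str().ok_or("missing 'path' argument")?;
    let text = std::fs::read_to_string(path).map_err(|e| e.to_string())?;
    Ok(text.split_whitespace().count().to_string())
}

// Step 3: one new arm in execute_tool():
//     "word_count" => word_count(args),
```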
Adding a New Provider:
- Add provider-specific formatting in llm.rs
- Update the configuration schema
- Test against the provider's API
Customizing the UI:
- Modify the styling in ui.rs
- Add color schemes
- Support configuration options
Testing:
- Unit tests: Individual functions and modules
- Integration tests: Component interactions
- Performance tests: Large inputs and long sessions
- Edge case tests: Error conditions and boundaries
See TESTING.md for detailed test documentation.
- Build: cargo build --release
- Test: cargo test
- Lint: cargo clippy
- Package: cargo package
- Publish: cargo publish
See PUBLISH.md for publishing instructions.