Welcome to Rust TUI Coder! This guide will help you get up and running quickly.
- Rust 1.70 or higher (a quick version check is shown below)
- An API key for one of:
  - OpenAI (GPT-3.5, GPT-4)
  - Anthropic (Claude)
  - Local LLM with OpenAI-compatible API
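You can confirm the toolchain requirement with:

```bash
rustc --version   # should report 1.70 or newer
cargo --version
```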
```bash
cargo install rust_tui_coder
```

Alternatively, build from source:

```bash
# Clone the repository
git clone https://github.com/yourusername/rust_tui_coder.git
cd rust_tui_coder

# Build the project
cargo build --release

# The binary will be at target/release/rct
```

Create a config.toml file in your project directory:
```toml
[llm]
api_key = "your-api-key-here"
api_base_url = "https://api.openai.com/v1"
model_name = "gpt-4"
```

You can also use the example configuration:
```bash
cp config_example.toml config.toml
# Edit config.toml with your API key
```

For OpenAI:

```toml
[llm]
provider = "openai"  # Optional, auto-detected
api_key = "sk-..."
api_base_url = "https://api.openai.com/v1"
model_name = "gpt-4"  # or "gpt-3.5-turbo"
```

For Anthropic:

```toml
[llm]
provider = "anthropic"
api_key = "sk-ant-..."
api_base_url = "https://api.anthropic.com"
model_name = "claude-3-opus-20240229"
```

For a local LLM:

```toml
[llm]
provider = "local"
api_key = "not-needed"
api_base_url = "http://localhost:11434/v1"  # Ollama default
model_name = "codellama"
```
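If you use the local option, you can confirm the server is reachable and the model has been pulled before launching. A minimal check, assuming a default Ollama install (adjust the URL and model name for your setup):

```bash
# Verify the Ollama server is running and list locally available models
curl -s http://localhost:11434/api/tags

# Or use the Ollama CLI
ollama list
```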
Instead of config.toml, you can use environment variables:

```bash
export LLM_API_KEY="your-api-key"
export LLM_API_BASE_URL="https://api.openai.com/v1"
export LLM_MODEL_NAME="gpt-4"
```

To start the application, run:

```bash
rct
```

Or if built from source:
```bash
./target/release/rct
```

You'll see a terminal interface with:
- Conversation area (top) - Shows your chat with the AI
- Tool logs area (middle) - Shows tool execution details
- Status bar - Shows available commands
- Input area (bottom) - Where you type your messages
- Type a message in the input area:

  ```
  Create a hello world program in Python
  ```

- Press Enter to send
- Watch as the AI:
  - Generates code
  - Uses tools (like `write_file`)
  - Executes the code
  - Shows you the results
- Type your message at the bottom
- Press Enter to send
- Watch the response in the conversation area
| Key | Action |
|---|---|
| `Enter` | Send message |
| `Up` / `Down` | Scroll conversation |
| `PgUp` / `PgDn` | Page up/down |
| `Home` | Scroll to top |
| `End` | Scroll to bottom |
| `Ctrl+C` | Quit application |
| Command | Description |
|---|---|
| `/quit` | Exit the application |
| `/stats` | Show session statistics |
```
Create a file named hello.py with a hello world program
```

The AI will:
- Write the code
- Save it to `hello.py`
- Confirm the file was created
```
Read example.txt and add a timestamp at the beginning
```
The AI will:
- Read the file
- Add a timestamp
- Update the file
- Show you the changes
```
Write a Python script to calculate fibonacci numbers and run it
```
The AI will:
- Write the script
- Save it to a file
- Execute it
- Show you the output
```
Show me the current git status
```

The AI will use the `git_status` tool to show repository status.
```
Create a plan to build a REST API with user authentication
```

The AI will:
- Create a structured plan
- Save it to `plan.md`

You can then ask it to implement the steps one by one.
When the AI needs to perform actions, it uses tools:
- `read_file` - Read file contents
- `write_file` - Create/overwrite file
- `append_to_file` - Add to end of file
- `search_and_replace` - Find and replace text
- `delete_file` - Remove file
- `create_directory` - Create folders
- `list_directory` - List folder contents
- `list_directory_recursive` - Show folder tree
- `execute_python` - Run Python code
- `execute_bash` - Run shell commands
- `execute_node` - Run JavaScript
- `execute_ruby` - Run Ruby code
- `create_plan` - Make implementation plan
- `update_plan_step` - Mark steps complete
- `clear_plan` - Remove current plan
- `git_status` - Check git status
Tool execution is shown in the Tool Logs area.

Be specific: instead of "Make a website", try "Create an HTML file with a form that collects name and email".
Instead of asking for everything at once, work step-by-step:
- "Create the project structure"
- "Implement the database models"
- "Add the API endpoints"
For complex projects:

```
Create a plan to build a todo list application with React and Express
```

Then:

```
Implement step 1 of the plan
```
The tool logs area shows exactly what the AI is doing. Check it to:
- Verify file operations
- See command outputs
- Understand execution results
You can refine the AI's work:
```
The function is good but add error handling
```

You can also ask the AI to explain its work:

```
Explain what this code does
```

or

```
Why did you use this approach?
```
Type `/stats` to see:
- Session duration
- Tokens used
- Number of requests
- Tools executed
- Average tokens per request
- Use Up/Down to scroll through conversation
- Use PgUp/PgDn for faster scrolling
- Use Home/End to jump to top/bottom
If you want to start a new plan:
```
Clear the current plan
```
Solution: Create a config.toml file in the directory where you run the app.
```bash
cp config_example.toml config.toml
# Edit with your API key
```

Solution: Check your API key in config.toml:
- OpenAI keys start with `sk-`
- Anthropic keys start with `sk-ant-`
- Ensure no extra spaces or quotes
Solution: Check your `api_base_url` (a quick connectivity check is shown below):
- OpenAI: `https://api.openai.com/v1`
- Anthropic: `https://api.anthropic.com`
- Local: Ensure your local server is running
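A quick way to verify connectivity against an OpenAI-compatible endpoint (using the `LLM_API_KEY` variable from earlier; swap in your own URL and key as needed) is to list the available models:

```bash
# Returns a JSON list of models when the URL and key are valid
curl -s https://api.openai.com/v1/models \
  -H "Authorization: Bearer $LLM_API_KEY"
```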
Solution: Check the tool logs for details. Common issues:
- File permissions
- Missing dependencies (Python, Node, etc.; see the check below)
- Invalid file paths
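If the logs point to a missing interpreter, you can verify that the runtimes behind the execution tools are installed (the exact binaries the tools invoke may differ; these are common defaults):

```bash
python3 --version
node --version
ruby --version
```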
Solution:
- Ensure your terminal supports UTF-8
- Try a different terminal emulator
- Check terminal size (minimum 80x24 recommended; a quick check is shown below)
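To confirm your locale and current terminal size, assuming a POSIX shell with `tput` available:

```bash
locale                              # look for a UTF-8 value in LANG / LC_CTYPE
echo "$(tput cols)x$(tput lines)"   # should be at least 80x24
```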
Solution:
- Your terminal may not support all features
- Try running `export TERM=xterm-256color`
While not directly supported in config, you can mention preferences:
```
Please be concise in your responses
```
The app operates in the directory where you launch it. To work on a specific project:
```bash
cd /path/to/your/project
rct
```

Create a config.toml in each project directory, or use environment variables:

```bash
cd project1
LLM_MODEL_NAME="gpt-3.5-turbo" rct
```

Now that you're set up, explore these resources:
- README.md - Full feature documentation
- ARCHITECTURE.md - System design details
- API.md - API reference
- EXAMPLES.md - More usage examples
- Read the README for detailed features
- Check ARCHITECTURE.md for system internals
- Review API.md for technical details
Q: How much does it cost? A: Cost depends on your LLM provider and usage. Check with OpenAI/Anthropic for pricing.
Q: Can I use it offline? A: Yes, with a local model (Ollama, LM Studio).
Q: Is my code safe? A: Code is processed by your chosen LLM provider. Read their privacy policies.
Q: Can I customize the tools? A: Currently, tools are built-in. Custom tools require modifying the source code.
Q: What languages can I execute? A: Python, Bash, Node.js, and Ruby are supported out of the box.
- Issues: Report bugs on GitHub
- Features: Suggest features via GitHub issues
- Documentation: All docs are in the `docs/` folder
Happy coding!