Commit 44ae2eb ("Updated README")
1 parent 1e922e0 commit 44ae2eb
1 file changed: README.md (89 additions & 126 deletions)

Committed to being **free** and **simple** - no downloads or tedious setups required. Works on Windows, Linux, macOS.

## **Table of Contents**

- [Features](#features)
- [Installation](#installation)
## 📥 **Installation**

### Installation with Python package manager

To install Code-Interpreter, run the following command:

```bash
pip install open-code-interpreter
```

To run the interpreter with Python:

```bash
python interpreter.py -m 'z-ai-glm-5' -md 'code'
```

Make sure you have installed the required packages and set up your API keys in the `.env` file before running the interpreter.

### Installation with Git

To get started with Code-Interpreter, follow these steps:

1. Clone the repository:
```bash
git clone https://github.com/haseeb-heaven/code-interpreter.git
cd code-interpreter
```
2. Install the required packages:
```bash
pip install -r requirements.txt
```
3. Copy the example environment file and add the keys you plan to use:
```bash
copy .env.example .env
```
On Linux/macOS, use `cp .env.example .env` instead.

## API Key setup for All models

Follow the steps below to obtain and set up the API keys for each service:

1. **Obtain the API keys:**
   - HuggingFace: Visit [HuggingFace Tokens](https://huggingface.co/settings/tokens) and get your Access Token.
   - Google Gemini: Visit [Google AI Studio](https://makersuite.google.com/app/apikey) and click on the **Create API Key** button.
   - OpenAI: Visit [OpenAI Dashboard](https://platform.openai.com/account/api-keys), sign up or log in, navigate to the API section in your account dashboard, and click on the **Create New Key** button.
   - Groq AI: Visit [Groq AI Console](https://console.groq.com/keys), sign up or log in, and click on the **Create API Key** button.
   - Anthropic AI: Visit [Anthropic AI Console](https://console.anthropic.com/settings/keys), sign up or log in, and click on the **Create Key** button.
   - NVIDIA API Catalog: Visit [NVIDIA Build](https://build.nvidia.com/), create a key, and use `NVIDIA_API_KEY`.
   - Z AI: Visit [Z AI Docs](https://docs.z.ai/) and use `Z_AI_API_KEY`.
   - OpenRouter: Visit [OpenRouter Keys](https://openrouter.ai/settings/keys) and use `OPENROUTER_API_KEY`.
   - Browser Use: Visit [Browser Use Docs](https://docs.browser-use.com/) and use `BROWSER_USE_API_KEY`.

2. **Save the API keys:**

   Create a `.env` file in your project root directory and add the following lines:

```bash
export HUGGINGFACE_API_KEY="Your HuggingFace API Key"
# ... (keys for the remaining providers follow the same pattern)
export OPENROUTER_API_KEY="Your OpenRouter API Key"
export BROWSER_USE_API_KEY="Your Browser Use API Key"
```
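The `export` lines above behave like any other shell variables. A throwaway sketch to illustrate the format (the demo file path and token values are made up; the interpreter loads the real `.env` from the project root itself):

```shell
# Write a demo .env file and source it, then confirm the variables
# are visible to the current shell.
cat > /tmp/demo.env <<'EOF'
export HUGGINGFACE_API_KEY="hf_demo_token"
export OPENROUTER_API_KEY="or_demo_token"
EOF
. /tmp/demo.env
echo "$HUGGINGFACE_API_KEY"
```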

## Offline models setup

This interpreter supports offline models via **LM Studio** and **Ollama**. Follow the steps below:

- Download any model from [LM Studio](https://lmstudio.ai/) like _Phi-2_, _Code-Llama_, or _Mistral_.
- In the app, go to the **Local Server** option and select the model.
- Start the server and copy the **URL** (LM Studio will provide it).
- For Ollama, run `ollama serve` and copy the **URL** it provides.
- Open the config file `configs/local-model.json` and paste the **URL** into the `api_base` field.
- Set the model name to `local-model` and run the interpreter:

```bash
python interpreter.py -md 'code' -m 'local-model'
```
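For illustration, `configs/local-model.json` might end up looking like the fragment below. Only the `api_base` field comes from the steps above; the other fields and the URL (LM Studio's usual default local-server address) are assumptions, so check your own config file for the exact shape:

```json
{
  "model": "local-model",
  "api_base": "http://localhost:1234/v1",
  "temperature": 0.1,
  "max_tokens": 2048
}
```

For Ollama, the served URL is typically `http://localhost:11434` instead.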

## 🌟 **Features**

- 🚀 Executes generated code from instructions
- 💾 Saves and edits code with an advanced editor
- 📡 Supports offline models via LM Studio and Ollama
- 📜 Command history and mode selection
- 🧠 Multiple models and languages (Python/JavaScript)
- 👀 Code review before execution
- 🛡️ Safe sandbox execution with timeout and security
- 🔁 Self-repair for failed executions
- 💻 Cross-platform (Windows/macOS/Linux)
- 🤝 Integrates with HuggingFace, OpenAI, Gemini, Groq, Claude, DeepSeek, NVIDIA, Z AI, OpenRouter, and Browser Use
- 🎯 Versatile tasks: file ops, image/video editing, data analysis

## 🛡️ **Safety Features**

The interpreter displays the current safety mode in the session banner:

- **[SAFE MODE]** - Default mode with safety restrictions enabled (green)
- **[UNSAFE MODE ⚠️]** - Unrestricted mode (red with warning emoji)

### Dangerous Operation Handling

The interpreter handles dangerous operations with a single confirmation prompt:

@@ -164,97 +154,74 @@ The interpreter handles dangerous operations with a single confirmation prompt:
164154
- Single prompt for ALL operations (safe or dangerous)
165155
- Safe operations: `Execute the code? (Y/N):`
166156
- Dangerous operations: `⚠️ Dangerous operation. Continue? (Y/N):`
167-
- Operations execute only if you confirm with 'Y'
157+
- Operations execute only if you confirm with `Y`
168158

169-
To enable unsafe mode, use the `--unsafe` flag:
159+
To enable unsafe mode:
170160
```bash
171-
interpreter --unsafe
161+
python interpreter.py --unsafe
172162
```
173163

174-
**Warning:** Use unsafe mode with caution! Dangerous operations can delete or modify your files.
164+
To enable safe mode:
165+
```bash
166+
python interpreter.py --sandbox
167+
```
168+
169+
> **Warning:** Use unsafe mode with caution. Dangerous operations can delete or modify your files.
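The confirmation flow above can be sketched as a tiny shell function. This is illustrative only; the real prompt is implemented inside the interpreter and reads the answer interactively:

```shell
# Hypothetical sketch of the single-confirmation rule: one Y/N answer
# decides whether an operation runs, whatever its risk level.
confirm_execution() {
  answer="$1"  # the real tool reads this from the user
  case "$answer" in
    Y|y) echo "executing" ;;
    *)   echo "aborted" ;;
  esac
}
confirm_execution "Y"   # prints "executing"
confirm_execution "N"   # prints "aborted"
```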

## 🛠️ **Usage**

To use Code-Interpreter, use the following command options:

- List of all **programming languages**:
  - `python` - Python programming language.
  - `javascript` - JavaScript programming language.

- List of all **modes**:
  - `code` - Generates code from your instructions.
  - `script` - Generates shell scripts from your instructions.
  - `command` - Generates single-line commands from your instructions.
  - `vision` - Generates a description of an image or video.
  - `chat` - Chat with your files and data.

- See [Models.MD](Models.MD) for the complete list of supported models.

193-
- Basic usage (with least options)</br>
194-
```python
188+
### Start TUI (default)
189+
```bash
195190
python interpreter.py
196191
```
197-
- `python interpreter.py` now opens the TUI and uses arrow-key navigation in a real terminal.
198-
- The TUI falls back to plain text prompts when stdin is piped or not attached to a terminal.
199-
- In `--tui` sessions, `/mode`, `/model`, `/language`, and `/settings` can reopen interactive selectors from inside the live chat interface.
200-
201-
- Launch the classic prompt-based CLI directly</br>
202-
```python
203-
python interpreter.py --cli -m 'z-ai-glm-5' -md 'code'
204-
```
205-
- `python interpreter.py --cli` automatically picks the best configured model from your `.env` file if you do not pass `-m`.
206-
- Safe sandbox protections are enabled by default in `v3.1.0`.
207-
- Use `--unsafe` only when you explicitly want to bypass the execution safety policy.
208-
- LLM request retries are bounded to a maximum of `3` transient retry attempts before the final error is shown.
209-
210-
- Launch the selector-based TUI</br>
211-
```python
212-
python interpreter.py --tui
213-
```
214192

215-
- Using different models (replace 'model-name' with your chosen model) </br>
216-
```python
217-
python interpreter.py --cli -md 'code' -m 'model-name' -dc
218-
```
219-
220-
- Using different modes (replace 'mode-name' with your chosen mode) </br>
221-
```python
222-
python interpreter.py --cli -m 'model-name' -md 'mode-name'
223-
```
193+
`python interpreter.py` opens the TUI and uses arrow-key navigation in a real terminal. The TUI falls back to plain text prompts when stdin is piped or not attached to a terminal.
224194
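A plausible sketch of that fallback check (an assumption about the mechanism, not the interpreter's actual code):

```shell
# Pick the UI based on whether stdin is attached to a terminal.
if [ -t 0 ]; then
  ui_mode="tui"    # interactive terminal: arrow-key TUI
else
  ui_mode="plain"  # piped or redirected stdin: plain text prompts
fi
echo "$ui_mode"
```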

225-
- Using auto execution </br>
226-
```python
227-
python interpreter.py -m 'wizard-coder' -md 'code' -dc -e
195+
### Open CLI mode
196+
```bash
197+
python interpreter.py --cli
228198
```
229199

230-
- Saving the code </br>
231-
```python
232-
python interpreter.py -m 'code-llama' -md 'code' -s
233-
```
200+
`python interpreter.py --cli` automatically picks the best configured model from your `.env` file if you do not pass `-m`.
234201

235-
- Selecting a language (replace 'language-name' with your chosen language) </br>
236-
```python
237-
python interpreter.py -m 'gemini-pro' -md 'code' -s -l 'language-name'
202+
### Run with sandbox (safe)
203+
```bash
204+
python interpreter.py --tui --sandbox
238205
```
239206

240-
- Switching to File mode for prompt input (Here providing filename is optional) </br>
241-
```python
242-
python interpreter.py -m 'gemini-pro' -md 'code' --file 'my_prompt_file.txt'
207+
### Run without sandbox (unsafe)
208+
```bash
209+
python interpreter.py --cli --no-sandbox
243210
```

### Upgrade interpreter

```bash
python interpreter.py --upgrade
```

### Live CLI smoke validation (stable models only)

```bash
python scripts/validate_models_cli.py --providers gemini,groq --tier stable --mode chat
python scripts/validate_models_cli.py --providers openai,anthropic,deepseek,huggingface --tier stable --mode chat
python scripts/validate_models_cli.py --providers nvidia,z-ai,browser-use,openrouter --tier stable --mode chat
```

### Direct provider examples

```bash
python interpreter.py -m 'nvidia-nemotron' -md 'chat' -dc
python interpreter.py -m 'z-ai-glm-5' -md 'chat' -dc
```

When sandbox mode is enabled, commands and generated code run with the same safe sandbox restrictions.

When sandbox mode is disabled, execution runs in unsafe mode without sandbox restrictions, intended only for trusted local workflows.

## 🖥️ **Interpreter Commands**

Here are the available commands:

- `/upgrade` - Upgrade the interpreter.
- 📁 `/prompt` - Switch the prompt mode between _File_ and _Input_ modes.
- 🐞 `/debug` - Toggle Debug mode for debugging.
- 📦 `/sandbox` - Toggles the secure sandbox system.

## ⚙️ **Settings**

You can customize the settings of the current model from its `.json` file. It contains all the necessary parameters such as `temperature`, `max_tokens`, and more.

### Steps to add your own custom API Server

To integrate your own API server for OpenAI instead of the default server, follow these steps:

1. Navigate to the `Configs` directory.
2. Open the configuration file for the model you want to modify (`gpt-3.5-turbo.json` or `gpt-4.json`).
3. Add the following key-value pair to the JSON object, replacing the URL with that of your custom API server:
```json
"api_base": "https://my-custom-base.com"
```
4. Save and close the file.

Now, whenever you select that model, the system will automatically use your custom server.
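Put together, a modified config might look like the fragment below. Only `api_base` comes from the steps above; the remaining fields are illustrative placeholders rather than the file's real contents:

```json
{
  "model": "gpt-4",
  "api_base": "https://my-custom-base.com",
  "temperature": 0.7,
  "max_tokens": 2048
}
```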

338306
## **Steps to add new models**
339307

340-
### **Manual Method**
341-
1. 📋 Copy the `.json` file and rename it to `configs/hf-model-new.json`.
342-
2. 🛠️ Modify the parameters of the model like `start_sep`, `end_sep`.
343-
3. 📝 Set the model name from Hugging Face to `"model": "Model name here"`.
344-
4. 🚀 Now, you can use it like this: `python interpreter.py -m 'hf-model-new' -md 'code' -e`.
345-
5. 📁 Make sure the `-m 'hf-model-new'` matches the config file inside the `configs` folder.
308+
### Manual Method
309+
1. Copy the `.json` file and rename it to `configs/hf-model-new.json`.
310+
2. Modify the parameters of the model like `start_sep`, `end_sep`.
311+
3. Set the model name from Hugging Face: `"model": "Model name here"`.
312+
4. Use it like this: `python interpreter.py -m 'hf-model-new' -md 'code'`.
313+
5. Make sure the `-m 'hf-model-new'` matches the config file inside the `configs` folder.
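The manual steps above can be sketched as commands. This writes to a temporary directory for illustration (the real file belongs in the repo's `configs/` folder), and the separator values are placeholders, not real defaults:

```shell
# Create a new model config by hand, mirroring steps 1-3.
mkdir -p /tmp/configs
cat > /tmp/configs/hf-model-new.json <<'EOF'
{
  "model": "Model name here",
  "start_sep": "<start>",
  "end_sep": "<end>"
}
EOF
# Confirm the file was written and names the model.
grep -c '"model"' /tmp/configs/hf-model-new.json   # prints 1
```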

### Automatic Method

1. Go to the `scripts` directory and run the `config_builder` script.
2. For Linux/macOS run `config_builder.sh`; for Windows run `config_builder.bat`.
3. Follow the instructions and enter the model name and parameters.
4. The script will automatically create the `.json` file for you.

## Star History

## 📌 **Versioning**

Current version: **3.2.2**

Quick highlights:

- **v3.2.2** - Added sandbox security, improved the Code Interpreter architecture, fixed execution language routing, restored sandbox toggle compatibility, added subprocess security delegation, and improved safe-mode timeout handling.
- **v3.2.1** - Added a mode indicator ([SAFE MODE] or [UNSAFE MODE ⚠️]) to the session banner, implemented strict safety blocking for dangerous operations in SAFE MODE, and added a single confirmation prompt for operations in UNSAFE MODE.
- **v3.1.0** - Added OpenRouter free-model aliases, made `openrouter/free` the default OpenRouter selection, improved simple-task code generation, added fresh TUI screenshots, and prepared release packaging assets.
- **v3.0.0** - Added a default execution safety sandbox, a dangerous command/code circuit breaker, bounded ReACT-style repair retries after failures, clearer execution feedback, and polished CLI/TUI runtime output.
- **v2.4.1** - Added NVIDIA, Z AI, Browser Use, `.env.example`, and the `--cli` / `--tui` startup flows.

Full release history: [CHANGELOG.md](CHANGELOG.md)

This project is licensed under the **MIT License**. For more details, please refer to the LICENSE file.

Please note the following additional licensing details:

This project is a client interface only. All models are provided by their respective third-party providers and are subject to their own terms of service.

## 🙏 **Acknowledgments**