3. Copy the example environment file and add the keys you plan to use:

```bash
copy .env.example .env
```
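
`copy` is the Windows shell command; on macOS/Linux, the equivalent is:

```bash
cp .env.example .env
```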
## API Key setup for All models
Follow the steps below to obtain and set up the API keys for each service:
1. **Obtain the API keys:**
- HuggingFace: Visit [HuggingFace Tokens](https://huggingface.co/settings/tokens) and get your Access Token.
- Google Gemini: Visit [Google AI Studio](https://makersuite.google.com/app/apikey) and click on the **Create API Key** button.
- OpenAI: Visit [OpenAI Dashboard](https://platform.openai.com/account/api-keys), sign up or log in, navigate to the API section in your account dashboard, and click on the **Create New Key** button.
- Groq AI: Visit [Groq AI Console](https://console.groq.com/keys), sign up or log in, and click on the **Create API Key** button.
- Anthropic AI: Visit [Anthropic AI Console](https://console.anthropic.com/settings/keys), sign up or log in, and click on the **Create Key** button.
- NVIDIA API Catalog: Visit [NVIDIA Build](https://build.nvidia.com/), create a key, and use `NVIDIA_API_KEY`.
- Z AI: Visit [Z AI Docs](https://docs.z.ai/) and use `Z_AI_API_KEY`.
- OpenRouter: Visit [OpenRouter Keys](https://openrouter.ai/settings/keys) and use `OPENROUTER_API_KEY`.
- Browser Use: Visit [Browser Use Docs](https://docs.browser-use.com/) and use `BROWSER_USE_API_KEY`.
2. **Save the API keys:**
Create a `.env` file in your project root directory and add the following lines:
```bash
export HUGGINGFACE_API_KEY="Your HuggingFace API Key"
export GEMINI_API_KEY="Your Google Gemini API Key"
export OPENAI_API_KEY="Your OpenAI API Key"
export GROQ_API_KEY="Your Groq AI API Key"
export ANTHROPIC_API_KEY="Your Anthropic AI API Key"
export DEEPSEEK_API_KEY="Your DeepSeek API Key"
export NVIDIA_API_KEY="Your NVIDIA API Key"
export Z_AI_API_KEY="Your Z AI API Key"
export OPENROUTER_API_KEY="Your OpenRouter API Key"
export BROWSER_USE_API_KEY="Your Browser Use API Key"
```
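
Because the file uses `export` lines, you can also load the keys straight into a shell session before launching (a minimal sketch, assuming a POSIX shell):

```bash
# Load the exported keys into the current shell, then start the interpreter.
source .env
python interpreter.py
```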
## Offline models setup
The interpreter supports offline models via **LM Studio** and **Ollama**. Follow the steps below:
- Download any model from [LM Studio](https://lmstudio.ai/), such as _Phi-2_, _Code-Llama_, or _Mistral_.
- In the app, go to the **Local Server** option and select the model.
- Start the server and copy the **URL** that LM Studio provides.
- For Ollama, run `ollama serve` and copy the **URL** it prints.
- Open the config file `configs/local-model.json` and paste the **URL** into the `api_base` field.
- Set the model name to `local-model` and run the interpreter:
```bash
python interpreter.py -md 'code' -m 'local-model'
```
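
For example, an end-to-end Ollama session might look like the sketch below. The model name is a placeholder, and the URLs are assumptions based on the tools' standard defaults (Ollama serves on `http://localhost:11434`; LM Studio's local server typically uses `http://localhost:1234/v1`):

```bash
# Pull a model and start Ollama's local server (default URL: http://localhost:11434).
ollama pull mistral
ollama serve
# Paste the URL into the "api_base" field of configs/local-model.json,
# then point the interpreter at the local model:
python interpreter.py -md 'code' -m 'local-model'
```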
## 🌟 **Features**
- 🚀 Executes generated code from instructions
- 💾 Saves and edits code with advanced editor
- 📡 Supports offline models via LM Studio and Ollama
- 📜 Command history and mode selection
- 🧠 Multiple models and languages (Python/JavaScript)
- 👀 Code review before execution
- 🛡️ Safe sandbox execution with timeout and security
- 🔁 Self-repair for failed executions
- 💻 Cross-platform (Windows/macOS/Linux)
- 🤝 Integrates with HuggingFace, OpenAI, Gemini, Groq, Claude, DeepSeek, NVIDIA, Z AI, OpenRouter, Browser Use
- 🎯 Versatile tasks: file ops, image/video editing, data analysis
## 🛡️ **Safety Features**

The interpreter displays the current safety mode in the session banner:

- **[SAFE MODE]** - Default mode with safety restrictions enabled (green)
- **[UNSAFE MODE ⚠️]** - Unrestricted mode (red with warning emoji)

### Dangerous Operation Handling
The interpreter handles dangerous operations with a single confirmation prompt:
- Single prompt for ALL operations (safe or dangerous)

`python interpreter.py` opens the TUI and uses arrow-key navigation in a real terminal. The TUI falls back to plain text prompts when stdin is piped or not attached to a terminal.
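
To pick a frontend explicitly, here is a sketch based on the `--cli` / `--tui` startup flags mentioned in the changelog below:

```bash
python interpreter.py --tui   # interactive terminal UI with arrow-key navigation
python interpreter.py --cli   # plain text prompts, e.g. when stdin is piped
```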

When sandbox mode is enabled, commands and generated code run with the same safeguards.

When sandbox mode is disabled, execution runs in unsafe mode without sandbox restrictions, intended only for trusted local workflows.
## 🖥️ **Interpreter Commands**
Here are the available commands:
- ⏫ `/upgrade` - Upgrade the interpreter.
- 📁 `/prompt` - Switch between _File_ and _Input_ prompt modes.
- 🐞 `/debug` - Toggle debug mode.
- 📦 `/sandbox` - Toggle the secure sandbox system.
## ⚙️ **Settings**
You can customize the settings of the current model from the `.json` file. It contains all the necessary parameters such as `temperature`, `max_tokens`, and more.
### Steps to add your own custom API Server
To integrate your own API server for OpenAI instead of the default server, follow these steps:
1. Navigate to the `configs` directory.
2. Open the configuration file for the model you want to modify (`gpt-3.5-turbo.json` or `gpt-4.json`).
3. Add the following key-value pair to the JSON object:
```json
"api_base": "https://my-custom-base.com"
```
4. Save and close the file.
Now, whenever you select that model, the system will automatically use your custom server.
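
For example, assuming you added `api_base` to `configs/gpt-4.json`, selecting that model routes requests through your server:

```bash
# Requests for gpt-4 now go to the api_base set above instead of the default server.
python interpreter.py -m 'gpt-4' -md 'code'
```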
## **Steps to add new models**
### Manual Method
1. Copy an existing `.json` config file and rename it to `configs/hf-model-new.json`.
2. Modify the model's parameters such as `start_sep` and `end_sep`.
3. Set the model name from Hugging Face: `"model": "Model name here"`.
4. Use it like this: `python interpreter.py -m 'hf-model-new' -md 'code'`.
5. Make sure the `-m 'hf-model-new'` matches the config file inside the `configs` folder.
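
Putting the manual steps together, a minimal sketch (the source config and the Hugging Face model name are illustrative placeholders):

```bash
# Steps 1-3: copy an existing config, then edit the copy to set
# "model" (and "start_sep"/"end_sep" if needed).
cp configs/gpt-4.json configs/hf-model-new.json
# e.g. in configs/hf-model-new.json:  "model": "mistralai/Mistral-7B-Instruct-v0.2"
# Steps 4-5: the -m value must match the config file name:
python interpreter.py -m 'hf-model-new' -md 'code'
```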
### Automatic Method
1. Go to the `scripts` directory and run the `config_builder` script.
2. For Linux/macOS, run `config_builder.sh`; for Windows, run `config_builder.bat`.
3. Follow the instructions and enter the model name and parameters.
4. The script will automatically create the `.json` file for you.
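
For example (script names from the steps above; the prompts are interactive):

```bash
cd scripts
./config_builder.sh      # Linux/macOS
# config_builder.bat     # Windows
```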
## Star History

If you're interested in contributing to **Code-Interpreter**, we'd love to have your help!

## 📌 **Versioning**
Current version: **3.2.2**
Quick highlights:
- **v3.2.2** - Added sandbox security, improved Code Interpreter architecture, fixed execution language routing, restored sandbox toggle compatibility, added subprocess security delegation, and improved safe-mode timeout handling.
- **v3.2.1** - Added mode indicator ([SAFE MODE] or [UNSAFE MODE ⚠️]) in session banner, implemented strict safety blocking for dangerous operations in SAFE MODE, added single confirmation prompt for operations in UNSAFE MODE.
- **v3.1.0** - Added OpenRouter free-model aliases, made `openrouter/free` the default OpenRouter selection, improved simple-task code generation, added fresh TUI screenshots, and prepared release packaging assets.
- **v3.0.0** - Added a default execution safety sandbox, dangerous command/code circuit breaker, bounded ReACT-style repair retries after failures, clearer execution feedback, and polished CLI/TUI runtime output.
- **v2.4.1** - Added NVIDIA, Z AI, Browser Use, `.env.example`, and `--cli` / `--tui` startup flows.

Full release history: [CHANGELOG.md](CHANGELOG.md)

This project is licensed under the **MIT License**. For more details, please refer to the LICENSE file.
Please note the following additional licensing detail:
This project is a client interface only. All models are provided by their respective third-party providers and subject to their own terms of service.