**Open Interpreter** lets LLMs run code (Python, JavaScript, Shell, and more) locally. You can chat with Open Interpreter through a ChatGPT-like interface in your terminal by running `$ interpreter` after installing.
This provides a natural-language interface to your computer's general-purpose capabilities.

### An interactive demo is also available on Google Colab

[](https://colab.research.google.com/drive/1WKmRXZgsErej2xUriKzxrEAXdxMSgWbb?usp=sharing)

### Along with an example voice interface, inspired by _Her_

[](https://colab.research.google.com/drive/1NojYGHDgxH6Y1G1oxThEBBb2AtyODBIK)
## Quick Start
### Install
```shell
pip install open-interpreter
```
However, OpenAI's service is hosted, closed-source, and heavily restricted:
- No internet access.
- [Limited set of pre-installed packages](https://wfhbrian.com/artificial-intelligence/mastering-chatgpts-code-interpreter-list-of-python-packages/).
- 100 MB maximum upload, 120-second runtime limit.
- State is cleared (along with any generated files or links) when the environment dies.
This combines the power of GPT-4's Code Interpreter with the flexibility of your local environment.

## Commands

**Update:** The Generator Update (0.1.5) introduced streaming:

```python
message = "What operating system are we on?"

for chunk in interpreter.chat(message, display=False, stream=True):
    print(chunk)
```
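Since `chat(..., stream=True)` returns a plain Python generator, the chunks can be consumed like any other iterator. A minimal sketch with a toy generator standing in for the real call (the chunk shape here is illustrative, not Open Interpreter's exact schema):

```python
# Toy stand-in for interpreter.chat(..., stream=True): a generator
# yielding small message deltas that the caller accumulates.
def fake_stream(text):
    for word in text.split():
        yield {"role": "assistant", "type": "message", "content": word + " "}

# Accumulate the streamed deltas into one reply string.
reply = "".join(chunk["content"] for chunk in fake_stream("We are on Linux"))
print(reply.strip())  # → We are on Linux
```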
### Interactive Chat
To start an interactive chat in your terminal, either run `interpreter` from the command line:
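For example:

```shell
interpreter
```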
Alternatively, you can use Llamafile without installing any third-party software:

```shell
interpreter --local
```
For a more detailed guide, check out [this video by Mike Bird](https://www.youtube.com/watch?v=CEs51hGWuGU&si=cN7f6QhfT4edfG5H).
**How to run LM Studio in the background.**
1. Download [LM Studio](https://lmstudio.ai/), then start it.
2. Select a model then click **↓ Download**.
3. Click the **↔️** button on the left (below 💬).
4. Select your model at the top, then click **Start Server**.
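Once the server is running (LM Studio exposes an OpenAI-compatible endpoint, by default on `http://localhost:1234/v1`), Open Interpreter can be pointed at it. Assuming the CLI's `--api_base` flag, the invocation would look like:

```shell
interpreter --api_base http://localhost:1234/v1
```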
## How Does it Work?
Open Interpreter equips a [function-calling language model](https://platform.openai.com/docs/guides/function-calling) with an `exec()` function, which accepts a `language` (like "Python" or "JavaScript") and `code` to run.
We then stream the model's messages, code, and your system's outputs to the terminal as Markdown.
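To make the shape of that tool concrete, here is a self-contained sketch of such an `exec()` function (a hypothetical helper, not Open Interpreter's actual implementation): it dispatches the `code` string to the named language's runtime via `subprocess` and returns the captured output, which is what would be streamed back to the model.

```python
import subprocess

# Map each supported language to the command that runs an inline snippet.
RUNTIMES = {
    "python": ["python3", "-c"],
    "shell": ["sh", "-c"],
}

def exec_code(language: str, code: str) -> str:
    """Run `code` in the runtime for `language`; return combined output."""
    cmd = RUNTIMES[language.lower()] + [code]
    result = subprocess.run(cmd, capture_output=True, text=True, timeout=30)
    return result.stdout + result.stderr

print(exec_code("python", "print(2 + 2)"))  # → 4
```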
## Access Documentation Offline
The full [documentation](https://docs.openinterpreter.com/) is accessible on-the-go without the need for an internet connection.
```shell
mintlify dev
```
A new browser window should open. The documentation will be available at [http://localhost:3000](http://localhost:3000) as long as the documentation server is running.
## Contributing
Thank you for your interest in contributing! We welcome involvement from the community.
Please see our [contributing guidelines](https://github.com/OpenInterpreter/open-interpreter/blob/main/docs/CONTRIBUTING.md) for more details on how to get involved.
## Roadmap
Visit [our roadmap](https://github.com/OpenInterpreter/open-interpreter/blob/main/docs/ROADMAP.md) to preview the future of Open Interpreter.
> Having access to a junior programmer working at the speed of your fingertips ... can make new workflows effortless and efficient, as well as open the benefits of programming to new audiences.