diff --git a/README.md b/README.md
index ec5e502c8b..514e2e7859 100644
--- a/README.md
+++ b/README.md
@@ -15,10 +15,8 @@
-
+
-
-
-**Open Interpreter** lets LLMs run code (Python, Javascript, Shell, and more) locally. You can chat with Open Interpreter through a ChatGPT-like interface in your terminal by running `$ interpreter` after installing.
+**Open Interpreter** lets LLMs run code (Python, JavaScript, Shell, and more) locally. You can chat with Open Interpreter through a ChatGPT-like interface in your terminal by running `$ interpreter` after installing.
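+
+The same chat is also available as a Python API — a minimal sketch, assuming a recent package layout (the import path may differ between versions; `interpreter.chat` is the entry point used throughout this README):
+
+```python
+# Sketch only: adjust the import to match your installed version.
+from interpreter import interpreter
+
+interpreter.chat("What operating system are we on?")  # executes a single request
+interpreter.chat()                                    # starts an interactive chat
+```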
@@ -36,19 +34,18 @@ This provides a natural-language interface to your computer's general-purpose ca
## Demo
-https://github.com/OpenInterpreter/open-interpreter/assets/63927363/37152071-680d-4423-9af3-64836a6f7b60
+[Demo video](https://github.com/OpenInterpreter/open-interpreter/assets/63927363/37152071-680d-4423-9af3-64836a6f7b60)
-#### An interactive demo is also available on Google Colab:
+### An interactive demo is also available on Google Colab
[](https://colab.research.google.com/drive/1WKmRXZgsErej2xUriKzxrEAXdxMSgWbb?usp=sharing)
-#### Along with an example voice interface, inspired by _Her_:
+### Along with an example voice interface, inspired by _Her_
[](https://colab.research.google.com/drive/1NojYGHDgxH6Y1G1oxThEBBb2AtyODBIK)
## Quick Start
-
### Install
```shell
@@ -85,7 +82,7 @@ OpenAI's release of [Code Interpreter](https://openai.com/blog/chatgpt-plugins#c
However, OpenAI's service is hosted, closed-source, and heavily restricted:
- No internet access.
-- [Limited set of pre-installed packages](https://wfhbrian.com/mastering-chatgpts-code-interpreter-list-of-python-packages/).
+- [Limited set of pre-installed packages](https://wfhbrian.com/artificial-intelligence/mastering-chatgpts-code-interpreter-list-of-python-packages/).
- 100 MB maximum upload, 120.0 second runtime limit.
- State is cleared (along with any generated files or links) when the environment dies.
@@ -97,15 +94,6 @@ This combines the power of GPT-4's Code Interpreter with the flexibility of your
## Commands
-**Update:** The Generator Update (0.1.5) introduced streaming:
-
-```python
-message = "What operating system are we on?"
-
-for chunk in interpreter.chat(message, display=False, stream=True):
- print(chunk)
-```
-
### Interactive Chat
To start an interactive chat in your terminal, either run `interpreter` from the command line:
@@ -197,9 +185,9 @@ interpreter.llm.model = "gpt-3.5-turbo"
#### Terminal
-Open Interpreter can use OpenAI-compatible server to run models locally. (LM Studio, jan.ai, ollama etc)
+Open Interpreter can use an OpenAI-compatible server to run models locally (in LM Studio, Jan.ai, Ollama, etc.).
-Simply run `interpreter` with the api_base URL of your inference server (for LM studio it is `http://localhost:1234/v1` by default):
+Simply run `interpreter` with the `api_base` URL of your inference server (for LM Studio it is `http://localhost:1234/v1` by default):
```shell
interpreter --api_base "http://localhost:1234/v1" --api_key "fake_key"
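+
+The same endpoint can be set from Python — a sketch, assuming the `interpreter.llm` attributes mirror the CLI flags above (`api_base` and `api_key` here are assumed counterparts of `--api_base` and `--api_key`):
+
+```python
+from interpreter import interpreter
+
+# Assumed Python-side equivalents of the CLI flags above.
+interpreter.llm.api_base = "http://localhost:1234/v1"  # your local inference server
+interpreter.llm.api_key = "fake_key"                   # many local servers ignore the key
+interpreter.chat()
+```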
@@ -211,11 +199,11 @@ Alternatively you can use Llamafile without installing any third party software
interpreter --local
```
-for a more detailed guide check out [this video by Mike Bird](https://www.youtube.com/watch?v=CEs51hGWuGU?si=cN7f6QhfT4edfG5H)
+For a more detailed guide, check out [this video by Mike Bird](https://www.youtube.com/watch?v=CEs51hGWuGU&si=cN7f6QhfT4edfG5H)
**How to run LM Studio in the background.**
-1. Download [https://lmstudio.ai/](https://lmstudio.ai/) then start it.
+1. Download [LM Studio](https://lmstudio.ai/), then start it.
2. Select a model then click **↓ Download**.
3. Click the **↔️** button on the left (below 💬).
4. Select your model at the top, then click **Start Server**.
@@ -351,11 +339,11 @@ There is **experimental** support for a [safe mode](https://github.com/OpenInter
## How Does it Work?
-Open Interpreter equips a [function-calling language model](https://platform.openai.com/docs/guides/gpt/function-calling) with an `exec()` function, which accepts a `language` (like "Python" or "JavaScript") and `code` to run.
+Open Interpreter equips a [function-calling language model](https://platform.openai.com/docs/guides/function-calling) with an `exec()` function, which accepts a `language` (like "Python" or "JavaScript") and `code` to run.
We then stream the model's messages, code, and your system's outputs to the terminal as Markdown.
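+
+In spirit, the loop resembles the following — a minimal sketch of the idea, not Open Interpreter's actual internals (`exec_code` and the runtime mapping are illustrative only):
+
+```python
+import subprocess
+
+# Illustrative mapping; the real project supports many more languages.
+RUNTIMES = {"python": ["python", "-c"], "shell": ["bash", "-c"]}
+
+def exec_code(language: str, code: str) -> str:
+    """The exec() tool handed to the model: run `code`, return its output."""
+    result = subprocess.run(RUNTIMES[language.lower()] + [code],
+                            capture_output=True, text=True)
+    return result.stdout + result.stderr
+
+print(exec_code("python", "print(2 + 2)"))  # -> 4
+```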
-# Access Documentation Offline
+## Access Documentation Offline
The full [documentation](https://docs.openinterpreter.com/) is accessible on-the-go without the need for an internet connection.
@@ -383,13 +371,13 @@ mintlify dev
A new browser window should open. The documentation will be available at [http://localhost:3000](http://localhost:3000) as long as the documentation server is running.
-# Contributing
+## Contributing
Thank you for your interest in contributing! We welcome involvement from the community.
Please see our [contributing guidelines](https://github.com/OpenInterpreter/open-interpreter/blob/main/docs/CONTRIBUTING.md) for more details on how to get involved.
-# Roadmap
+## Roadmap
Visit [our roadmap](https://github.com/OpenInterpreter/open-interpreter/blob/main/docs/ROADMAP.md) to preview the future of Open Interpreter.
@@ -400,5 +388,3 @@ Visit [our roadmap](https://github.com/OpenInterpreter/open-interpreter/blob/mai
> Having access to a junior programmer working at the speed of your fingertips ... can make new workflows effortless and efficient, as well as open the benefits of programming to new audiences.
>
> — _OpenAI's Code Interpreter Release_
-
-