From 133f93b53fb4861c7c007fbdf0fdd61da0577620 Mon Sep 17 00:00:00 2001
From: endolith
Date: Thu, 27 Nov 2025 23:59:32 -0500
Subject: [PATCH 1/7] Use markdown linter on README

Fix heading levels
Remove dangling page tag and pointless break at end of file
Convert HTML image tag to markdown
---
 README.md | 19 +++++++------------
 1 file changed, 7 insertions(+), 12 deletions(-)

diff --git a/README.md b/README.md
index ec5e502c8b..bcc7559d8d 100644
--- a/README.md
+++ b/README.md
@@ -15,10 +15,8 @@
-local_explorer
+![local_explorer](https://github.com/OpenInterpreter/open-interpreter/assets/63927363/d941c3b4-b5ad-4642-992c-40edf31e2e7a)
-
-


 **Open Interpreter** lets LLMs run code (Python, Javascript, Shell, and more) locally. You can chat with Open Interpreter through a ChatGPT-like interface in your terminal by running `$ interpreter` after installing.
@@ -36,19 +34,18 @@ This provides a natural-language interface to your computer's general-purpose ca
 
 ## Demo
 
-https://github.com/OpenInterpreter/open-interpreter/assets/63927363/37152071-680d-4423-9af3-64836a6f7b60
+<https://github.com/OpenInterpreter/open-interpreter/assets/63927363/37152071-680d-4423-9af3-64836a6f7b60>
 
-#### An interactive demo is also available on Google Colab:
+### An interactive demo is also available on Google Colab
 
 [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1WKmRXZgsErej2xUriKzxrEAXdxMSgWbb?usp=sharing)
 
-#### Along with an example voice interface, inspired by _Her_:
+### Along with an example voice interface, inspired by _Her_
 
 [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1NojYGHDgxH6Y1G1oxThEBBb2AtyODBIK)
 
 ## Quick Start
 
-
 ### Install
 
 ```shell
@@ -355,7 +352,7 @@ Open Interpreter equips a [function-calling language model](https://platform.openai.com/docs/guides/gpt/function-calling) with an `exec()` function, which accepts a `language` (like "Python" or "JavaScript") and `code` to run.
 
 We then stream the model's messages, code, and your system's outputs to the terminal as Markdown.
 
-# Access Documentation Offline
+## Access Documentation Offline
 
 The full [documentation](https://docs.openinterpreter.com/) is accessible on-the-go without the need for an internet connection.
 
@@ -383,13 +380,13 @@ mintlify dev
 
 A new browser window should open. The documentation will be available at [http://localhost:3000](http://localhost:3000) as long as the documentation server is running.
 
-# Contributing
+## Contributing
 
 Thank you for your interest in contributing! We welcome involvement from the community.
 
 Please see our [contributing guidelines](https://github.com/OpenInterpreter/open-interpreter/blob/main/docs/CONTRIBUTING.md) for more details on how to get involved.
 
-# Roadmap
+## Roadmap
 
 Visit [our roadmap](https://github.com/OpenInterpreter/open-interpreter/blob/main/docs/ROADMAP.md) to preview the future of Open Interpreter.
 
@@ -400,5 +397,3 @@ Visit [our roadmap](https://github.com/OpenInterpreter/open-interpreter/blob/mai
 > Having access to a junior programmer working at the speed of your fingertips ... can make new workflows effortless and efficient, as well as open the benefits of programming to new audiences.
 >
 > — _OpenAI's Code Interpreter Release_
-
-
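PATCH 1 brings the README in line with markdownlint's heading rules: a single top-level heading and no skipped levels. Purely as an illustration of what the linter checks — this sketch is not part of the patch, and mapping the changes to rules MD001 (heading-increment) and MD025 (single-h1) is our reading — the two checks amount to:

```python
import re

def heading_issues(markdown: str) -> list[str]:
    """Minimal sketch of markdownlint's MD025 and MD001 heading checks."""
    issues = []
    # Collect the level (number of '#') of every ATX heading in the document.
    levels = [len(m.group(1)) for m in re.finditer(r"^(#{1,6}) ", markdown, re.M)]
    if levels.count(1) > 1:
        issues.append("multiple H1 headings (MD025)")
    for prev, cur in zip(levels, levels[1:]):
        if cur > prev + 1:
            issues.append(f"heading level jumps from h{prev} to h{cur} (MD001)")
    return issues

# A level jump like the `## Demo` → `####` one this patch fixes is flagged:
print(heading_issues("# Title\n## Demo\n#### Subsection\n"))
```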
From 798f7ca49f6f3b72714dec181da2619a006c09c3 Mon Sep 17 00:00:00 2001
From: endolith
Date: Fri, 28 Nov 2025 00:14:16 -0500
Subject: [PATCH 2/7] Add title to demo video link

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index bcc7559d8d..6f29c25b72 100644
--- a/README.md
+++ b/README.md
@@ -34,7 +34,7 @@ This provides a natural-language interface to your computer's general-purpose ca
 
 ## Demo
 
-<https://github.com/OpenInterpreter/open-interpreter/assets/63927363/37152071-680d-4423-9af3-64836a6f7b60>
+[Demo video](https://github.com/OpenInterpreter/open-interpreter/assets/63927363/37152071-680d-4423-9af3-64836a6f7b60)
 
 ### An interactive demo is also available on Google Colab
 

From 77885b33bbe690039ad2684da540cedf4cd66768 Mon Sep 17 00:00:00 2001
From: endolith
Date: Fri, 28 Nov 2025 00:17:31 -0500
Subject: [PATCH 3/7] Fix typos in Local section

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 6f29c25b72..307e71e40c 100644
--- a/README.md
+++ b/README.md
@@ -194,9 +194,9 @@ interpreter.llm.model = "gpt-3.5-turbo"
 
 #### Terminal
 
-Open Interpreter can use OpenAI-compatible server to run models locally. (LM Studio, jan.ai, ollama etc)
+Open Interpreter can use an OpenAI-compatible server to run models locally (LM Studio, Jan.ai, Ollama, etc.)
 
-Simply run `interpreter` with the api_base URL of your inference server (for LM studio it is `http://localhost:1234/v1` by default):
+Simply run `interpreter` with the `api_base` URL of your inference server (for LM Studio it is `http://localhost:1234/v1` by default):
 ```shell
 interpreter --api_base "http://localhost:1234/v1" --api_key "fake_key"
 ```

From 973325629c7f71a135697cb749a6e26bc7cc5be4 Mon Sep 17 00:00:00 2001
From: endolith
Date: Sat, 29 Nov 2025 00:17:27 -0500
Subject: [PATCH 4/7] Fix YouTube URL

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 307e71e40c..b9274e51d2 100644
--- a/README.md
+++ b/README.md
@@ -208,7 +208,7 @@ Alternatively you can use Llamafile without installing any third party software
 interpreter --local
 ```
 
-for a more detailed guide check out [this video by Mike Bird](https://www.youtube.com/watch?v=CEs51hGWuGU?si=cN7f6QhfT4edfG5H)
+For a more detailed guide, check out [this video by Mike Bird](https://www.youtube.com/watch?v=CEs51hGWuGU&si=cN7f6QhfT4edfG5H)
 
 **How to run LM Studio in the background.**
 

From 2f7f45d9f9782e0e91c91a4211258c2f6f794add Mon Sep 17 00:00:00 2001
From: endolith
Date: Sat, 13 Dec 2025 10:15:09 -0500
Subject: [PATCH 5/7] Update older URLs

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index b9274e51d2..c449593c36 100644
--- a/README.md
+++ b/README.md
@@ -82,7 +82,7 @@ OpenAI's release of [Code Interpreter](https://openai.com/blog/chatgpt-plugins#c
 However, OpenAI's service is hosted, closed-source, and heavily restricted:
 
 - No internet access.
-- [Limited set of pre-installed packages](https://wfhbrian.com/mastering-chatgpts-code-interpreter-list-of-python-packages/).
+- [Limited set of pre-installed packages](https://wfhbrian.com/artificial-intelligence/mastering-chatgpts-code-interpreter-list-of-python-packages/).
 - 100 MB maximum upload, 120.0 second runtime limit.
 - State is cleared (along with any generated files or links) when the environment dies.
 
@@ -348,7 +348,7 @@ There is **experimental** support for a [safe mode](https://github.com/OpenInter
 
 ## How Does it Work?
 
-Open Interpreter equips a [function-calling language model](https://platform.openai.com/docs/guides/gpt/function-calling) with an `exec()` function, which accepts a `language` (like "Python" or "JavaScript") and `code` to run.
+Open Interpreter equips a [function-calling language model](https://platform.openai.com/docs/guides/function-calling) with an `exec()` function, which accepts a `language` (like "Python" or "JavaScript") and `code` to run.
 
 We then stream the model's messages, code, and your system's outputs to the terminal as Markdown.
 

From 028857271ff1a343adab4b47f85acf953d6dbb62 Mon Sep 17 00:00:00 2001
From: endolith
Date: Sat, 13 Dec 2025 10:15:17 -0500
Subject: [PATCH 6/7] Add name to LM Studio link

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index c449593c36..a5c5c1c85f 100644
--- a/README.md
+++ b/README.md
@@ -212,7 +212,7 @@ for a more detailed guide check out [this video by Mike Bird](https://www.youtub
 
 **How to run LM Studio in the background.**
 
-1. Download [https://lmstudio.ai/](https://lmstudio.ai/) then start it.
+1. Download [LM Studio](https://lmstudio.ai/) then start it.
 2. Select a model then click **↓ Download**.
 3. Click the **↔️** button on the left (below 💬).
 4. Select your model at the top, then click **Start Server**.
From 1730ba3ff349e2e9d3bfca0632c2badd6989d657 Mon Sep 17 00:00:00 2001
From: endolith
Date: Sat, 29 Nov 2025 00:17:21 -0500
Subject: [PATCH 7/7] Remove duplicate streaming code and old update notice

---
 README.md | 9 ---------
 1 file changed, 9 deletions(-)

diff --git a/README.md b/README.md
index a5c5c1c85f..514e2e7859 100644
--- a/README.md
+++ b/README.md
@@ -94,15 +94,6 @@ This combines the power of GPT-4's Code Interpreter with the flexibility of your
 
 ## Commands
 
-**Update:** The Generator Update (0.1.5) introduced streaming:
-
-```python
-message = "What operating system are we on?"
-
-for chunk in interpreter.chat(message, display=False, stream=True):
-    print(chunk)
-```
-
 ### Interactive Chat
 
 To start an interactive chat in your terminal, either run `interpreter` from the command line:
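A side note on the URL fix in PATCH 4: a second `?` does not start a new query parameter — everything after the first `?` belongs to a single query string, so `?si=` was being absorbed into the `v=` value. The standard library shows the difference (illustrative only; the URLs are the ones from the patch):

```python
from urllib.parse import urlparse, parse_qs

# Before the fix: the stray '?' is absorbed into the value of 'v'.
bad = "https://www.youtube.com/watch?v=CEs51hGWuGU?si=cN7f6QhfT4edfG5H"
print(parse_qs(urlparse(bad).query))
# → {'v': ['CEs51hGWuGU?si=cN7f6QhfT4edfG5H']}

# After the fix: '&' correctly separates the two parameters.
good = "https://www.youtube.com/watch?v=CEs51hGWuGU&si=cN7f6QhfT4edfG5H"
print(parse_qs(urlparse(good).query))
# → {'v': ['CEs51hGWuGU'], 'si': ['cN7f6QhfT4edfG5H']}
```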