`docs/features/ai-knowledge/skills.md`

Skills are reusable, markdown-based instruction sets that you can attach to models.

## How Skills Work
Skills behave differently depending on how they are activated:
### User-Selected Skills ($ Mention)
When you mention a skill in chat with `$`, its **full content is injected directly** into the system prompt. The model has immediate access to the complete instructions without needing any extra tool calls.
### Model-Attached Skills
Skills bound to a model use a **lazy-loading** architecture to keep the context window efficient:

1. **Manifest injection** — Only a lightweight manifest containing the skill's **name** and **description** is injected into the system prompt.
2. **On-demand loading** — The model receives a `view_skill` builtin tool. When it determines it needs a skill's full instructions, it calls `view_skill` with the skill name to load the complete content.
This design means that even if many skills are attached to a model, only the ones the model actually needs are loaded into context.
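The manifest-plus-`view_skill` flow can be sketched in a few lines of Python. This is an illustration of the lazy-loading pattern only, not Open WebUI's actual implementation; the skill data and manifest wording are made up:

```python
# Hypothetical sketch of the lazy-loading pattern: only name and
# description go into the system prompt; full content is fetched by
# a view_skill tool call when the model asks for it.

skills = {
    "code-review-guidelines": {
        "description": "House rules for reviewing pull requests.",
        "content": "# Code Review Guidelines\n\n1. Check tests.\n2. Check naming.",
    },
}

def build_manifest(attached: dict) -> str:
    """Render the lightweight manifest injected into the system prompt."""
    lines = ["Available skills (call view_skill(name) for full instructions):"]
    for name, skill in attached.items():
        lines.append(f"- {name}: {skill['description']}")
    return "\n".join(lines)

def view_skill(name: str) -> str:
    """Builtin tool: load a skill's full content on demand."""
    skill = skills.get(name)
    return skill["content"] if skill else f"Unknown skill: {name}"

print(build_manifest(skills))                 # short manifest in context
print(view_skill("code-review-guidelines"))   # full content, loaded on demand
```

Only the short manifest occupies context up front; the multi-line `content` enters the conversation only after the model decides to call `view_skill`.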
## Creating a Skill

Navigate to **Workspace → Skills** and click **+ New Skill**.

| Field | Description |
| :--- | :--- |
| **Name** | A human-readable display name (e.g., "Code Review Guidelines"). |
| **Skill ID** | A unique slug identifier, auto-generated from the name (e.g., `code-review-guidelines`). Editable during creation, read-only afterwards. |
| **Description** | A short summary shown in the manifest. For model-attached skills, the model uses this to decide whether to load the full instructions. |
| **Content** | The full skill instructions in **Markdown**. For user-selected skills this is injected directly; for model-attached skills it is loaded on-demand via `view_skill`. |
Click **Save & Create** to finalize.
## Using Skills
### In Chat ($ Mention)
Type `$` in the chat input to open the skill picker. Select a skill, and it will be attached to the message as a **skill mention** (similar to `@` for models or `#` for knowledge). The skill's **full content** is injected directly into the conversation, giving the model immediate access to the complete instructions.
### Bound to a Model

You can permanently attach skills to a model so they are always available:

3. Check the skills you want this model to always have access to.
4. Click **Save**.
When a user chats with that model, the selected skills' manifests (name and description) are automatically injected, and the model can load the full content on-demand via `view_skill`.

---

Running bare metal gives the model shell access to your actual machine. Only use this for local development or testing.
:::
### MCP Server Mode
Open Terminal can also run as an [MCP (Model Context Protocol)](/features/extensibility/plugin/tools/openapi-servers/mcp) server, exposing all its endpoints as MCP tools. This requires an additional dependency:
```bash
pip install open-terminal[mcp]
```
Then start the MCP server:
```bash
# stdio transport (default — for local MCP clients)
open-terminal mcp
# streamable-http transport (for remote/networked MCP clients)
```

| Option | Default | Description |
| :--- | :--- | :--- |
| `--port` | `8000` | Bind port (streamable-http only) |
Under the hood, this uses [FastMCP](https://github.com/jlowin/fastmcp) to automatically convert every FastAPI endpoint into an MCP tool — no manual tool definitions needed.
### Docker Compose (with Open WebUI)
The `/execute` endpoint description in the OpenAPI spec automatically includes…

**Query parameters:**

| Parameter | Default | Description |
| :--- | :--- | :--- |
| `stream` | `false` | If `true`, stream output as JSONL instead of waiting for completion |
| `wait` | `null` | Seconds to wait for the command to finish before returning (0–300). If the command completes in time, output is included inline. `null` to return immediately. |
| `tail` | (all) | Return only the last N output entries. Useful to limit response size for AI agents. |
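To make the `stream` behavior concrete, here is a minimal sketch of consuming such a JSONL stream in Python. The event shape (`type`, `data`, `code` fields) is an assumption for illustration, not the endpoint's documented schema:

```python
import json

# Illustrative JSONL output as /execute?stream=true might emit it;
# the field names here are assumptions, not the documented schema.
raw_stream = """\
{"type": "stdout", "data": "hello\\n"}
{"type": "stderr", "data": "warning: low disk space\\n"}
{"type": "exit", "code": 0}
"""

# Each line is one self-contained JSON event, so they can be
# processed incrementally as they arrive.
events = [json.loads(line) for line in raw_stream.splitlines() if line.strip()]
stdout = "".join(e["data"] for e in events if e.get("type") == "stdout")
exit_code = next(e["code"] for e in events if e.get("type") == "exit")
print(stdout.strip(), exit_code)
```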
For example: `curl -X POST "http://localhost:8000/execute?wait=5"`
:::info File-Backed Process Output
All background process output (stdout/stderr) is persisted to JSONL log files under `~/.open-terminal/logs/processes/`. This means output is never lost, even if the server restarts. The response includes `next_offset` for stateless incremental polling — pass it as the `offset` query parameter on subsequent status requests to get only new output. The `log_path` field shows the path to the raw JSONL log file.
:::
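A minimal sketch of the incremental-polling idea in Python, reading a JSONL log from a byte offset. The log location and record fields here are illustrative, not the server's exact schema:

```python
import json
import os
import tempfile

# Stateless incremental polling over a file-backed JSONL log: each
# read starts at the caller-supplied byte offset and returns the new
# entries plus the next offset to pass on the following poll.

def read_new_output(log_path: str, offset: int):
    """Return (new_entries, next_offset) starting from a byte offset."""
    entries = []
    with open(log_path, "rb") as f:
        f.seek(offset)
        for line in f:
            entries.append(json.loads(line))
        next_offset = f.tell()
    return entries, next_offset

# Simulate a process appending output between two polls.
log = os.path.join(tempfile.mkdtemp(), "proc.jsonl")
with open(log, "a") as f:
    f.write(json.dumps({"stream": "stdout", "data": "line 1"}) + "\n")

first, off = read_new_output(log, 0)      # first poll: one entry

with open(log, "a") as f:
    f.write(json.dumps({"stream": "stdout", "data": "line 2"}) + "\n")

second, off = read_new_output(log, off)   # second poll: only the new entry
```

Because the offset lives with the caller rather than the server, any number of clients can poll the same log independently, and a server restart loses nothing.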
### Search File Contents

**`GET /files/search`**

Search for a text pattern across files in a directory. Returns structured matches with file paths, line numbers, and matching lines. Skips binary files automatically.

**Query parameters:**

| Parameter | Type | Default | Description |
| :--- | :--- | :--- | :--- |
| `query` | string | (required) | Text or regex pattern to search for |
| `path` | string | `.` | Directory or file to search in |
| `regex` | boolean | `false` | Treat query as a regex pattern |
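Conceptually, the endpoint behaves like the following Python sketch (an illustration, not the server's implementation; the real endpoint also accepts a single file as `path`, which this directory-only sketch omits):

```python
import os
import re

# Walk a directory, skip files that don't decode as text (a simple
# stand-in for binary detection), and collect structured matches.

def search_files(query: str, path: str = ".", use_regex: bool = False):
    pattern = re.compile(query) if use_regex else None
    matches = []
    for root, _dirs, files in os.walk(path):
        for name in files:
            file_path = os.path.join(root, name)
            try:
                with open(file_path, encoding="utf-8") as f:
                    for lineno, line in enumerate(f, start=1):
                        hit = pattern.search(line) if pattern else query in line
                        if hit:
                            matches.append(
                                {"file": file_path, "line": lineno, "text": line.rstrip("\n")}
                            )
            except (UnicodeDecodeError, OSError):
                continue  # treat undecodable files as binary and skip them
    return matches
```

On a directory containing one UTF-8 text file with the word `hello` and one undecodable binary file, this returns a single structured match and silently skips the binary.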

---

These models excel at multi-step reasoning, proper JSON formatting, and autonomous…

| Tool | Description |
| :--- | :--- |
| `query_knowledge_files` | Search *file contents* inside KBs using vector search. **This is your main tool for finding information.** When a KB is attached to the model, searches are automatically scoped to that KB. |
| `search_knowledge_files` | Search files across accessible knowledge bases by filename (not content). |
| `view_knowledge_file` | Get the full content of a file from a knowledge base. |

| Category | Tools | Description |
| :--- | :--- | :--- |
| **Web Search** | `search_web`, `fetch_url` | Search the web and fetch URL content |
| **Image Generation** | `generate_image`, `edit_image` | Generate and edit images |
| **Code Interpreter** | `execute_code` | Execute code in a sandboxed environment |
| **Channels** | `search_channels`, `search_channel_messages`, `view_channel_message`, `view_channel_thread` | Search channels and channel messages |
| **Skills** | `view_skill` | Load skill instructions on-demand from the manifest |

These per-category toggles only appear when the main **Builtin Tools** capability is enabled.

Enabling a per-model category toggle does **not** override global feature flags. For example, if `ENABLE_NOTES` is disabled globally (Admin Panel), Notes tools will not be available even if the "Notes" category is enabled for the model. The per-model toggles only allow you to *further restrict* what's already available—they cannot enable features that are disabled at the global level.
**Web Search**, **Image Generation**, and **Code Interpreter** built-in tools have an additional layer of control: the **per-chat feature toggle** in the chat input bar. For these tools to be injected in Native Mode, **all three conditions** must be met:

1. **Global config enabled** — the feature is turned on in the Admin Panel (e.g., `ENABLE_WEB_SEARCH`)
2. **Model capability enabled** — the model has the capability checked in Workspace > Models (e.g., "Web Search")
3. **Per-chat toggle enabled** — the user has activated the feature for this specific chat via the chat input bar toggles
This means users can disable web search (or image generation, or code interpreter) on a per-conversation basis, even if it's enabled globally and on the model. This is useful for chats where information must stay offline or where you want to prevent unintended tool usage.
:::
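The three-layer gate is a straightforward logical AND; a tiny sketch with illustrative names (not Open WebUI's internal API):

```python
# A per-chat feature tool is injected in Native Mode only when all
# three layers agree: global config, model capability, chat toggle.

def feature_tool_enabled(global_config: bool, model_capability: bool, chat_toggle: bool) -> bool:
    return global_config and model_capability and chat_toggle

# Web search enabled globally and on the model, but off for this chat:
assert feature_tool_enabled(True, True, False) is False
# All three layers on, so the tool is injected:
assert feature_tool_enabled(True, True, True) is True
```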
:::tip Full Agentic Experience
For the best out-of-the-box agentic experience, administrators can enable **Web Search**, **Image Generation**, and **Code Interpreter** as default features for a model. In the **Admin Panel > Settings > Models**, find the **Model Specific Settings** for your target model and toggle these three on under **Default Features**. This ensures they are active in every new chat by default, so users get the full tool-calling experience without manually enabling each toggle. Users can still turn them off per-chat if needed.
:::
:::tip Builtin Tools vs File Context
**Builtin Tools** controls whether the model gets *tools* for autonomous retrieval. It does **not** control whether file content is injected via RAG—that's controlled by the separate **File Context** capability.
:::