This repository was archived by the owner on Mar 11, 2026. It is now read-only.

Commit c99e251

Merge pull request #78 from QingFeng-awa/refactoring-ollama-integration-docs
Improve the Ollama integration docs
2 parents 7c954e9 + a2be1bd commit c99e251

4 files changed

Lines changed: 45 additions & 127 deletions

Lines changed: 24 additions & 66 deletions
````diff
@@ -1,88 +1,46 @@
 # Integrating Ollama
 
-🦙 Ollama is a free, open-source tool that lets you run large language models (LLMs) on your own computer. (hardwares must meet requirements).
+🦙 Ollama is a free, open-source tool that lets you run large language models (LLMs) on your own computer. (hardware must meet requirements)
 
 ## Download and Install Ollama
 
-You can download `Ollama` in [https://ollama.com/](https://ollama.com/)
+You can download Ollama from [https://ollama.com](https://ollama.com/download).
 
 ## Select and Pull a Model
 
-* Choose the model you want to use at [https://ollama.com/search](https://ollama.com/search) or HuggingFace.
-* In the terminal (PowerShell on Windows), enter `ollama pull <model_name>` to download the model.
-* `model_name` format: `<model_name>:<model_version>`. For example, `deepseek-r1:8b`.
-* The 8b parameter model requires at least 16GB of video memory (VRAM). Refer to other documentation for detailed information on configurations and parameter sizes.
-* After pulling is complete, use `ollama list` to view the models you have pulled.
-* Then use `ollama run <model_name>` to run the model.
+Choose the model you want to use at [https://ollama.com/search](https://ollama.com/search).
 
-## Configure AstrBot
-
-* In AstrBot:
-* ![image](https://github.com/user-attachments/assets/5f55556f-0278-4350-82b3-c430d855c6bb)
-
-* Save the configuration.
-* Enter `/provider` to view the models configured in AstrBot.
-* For Mac/Windows users deploying AstrBot with Docker Desktop, enter `http://host.docker.internal:11434/v1` for the API Base URL.
-* For Linux users deploying AstrBot with Docker, enter `http://172.17.0.1:11434/v1` for the API Base URL, or replace `172.17.0.1` with your public IP address (ensure that port 11434 is allowed by the host system).
-* If LM Studio is deployed using Docker, ensure that port 11434 is mapped to the host.
-
-## FAQ
-
-* Error:
-* `AstrBot request failed.`
-* `Error type: NotFoundError`
-* `Error message: Error code: 404 - {'error': {'message': 'model "llama3.1-8b" not found, try pulling it first', 'type': 'api_error', 'param': None, 'code': None}}`
-* Please refer to the instructions above and use `ollama pull <model_name>` to pull the model.
-* Then use `ollama run <model_name>` to run the model.
-
-
-# Integrating Ollama to Use Models Such as DeepSeek-R1
-
-Ollama lets you deploy models on your local computer (the hardware must meet the requirements)
-
-### Download and Install Ollama
-
-https://ollama.com/
+In the terminal (PowerShell on Windows), enter `ollama pull <model_name>` to download the model.
 
-### Select the Model You Want to Use
+model_name format: `<model_name>:<model_version>`. For example, `deepseek-r1:8b`.
+> The 8b parameter model requires at least 16GB of video memory (VRAM). Refer to other documentation for detailed information on configurations and parameter sizes.
 
-Choose the model you want to use at https://ollama.com/search.
+After pulling is complete, use `ollama list` to view the models you have pulled.
 
-In the terminal (on windows, powershell), enter `ollama pull <model_name>` to download the model.
+Then use `ollama run <model_name>` to run the model.
 
-model_name format: `<model_name>:<model_version>`, e.g. `deepseek-r1:8b`
-
-> The 8b model requires at least 16GB of VRAM. For details on configurations and parameter sizes, see the other documentation.
-
-After pulling is complete, `ollama list` shows the models you have pulled.
-
-Then use `ollama run <model_name>` to run the model.
-
-### Configure AstrBot
-
-In AstrBot:
-
-![image](https://github.com/user-attachments/assets/5f55556f-0278-4350-82b3-c430d855c6bb)
+## Configure AstrBot
 
-Save the configuration.
+Open the AstrBot WebUI, locate Service Provider Management, click on Add Provider, find and click on `Ollama`.
+![image](/source/images/ollama/image.png)
 
-> Enter /provider to view the models configured in AstrBot
+Save the configuration.
 
-> For Mac/Windows users deploying AstrBot with Docker Desktop, enter `http://host.docker.internal:11434/v1` for the API Base URL
-> For Linux users deploying AstrBot with Docker, enter `http://172.17.0.1:11434/v1` for the API Base URL, or replace `172.17.0.1` with your public IP IP (ensure that port 11434 is allowed by the host system).
+::: tip
 
-If LM Studio is deployed using Docker, ensure that port 11434 is mapped to the host.
+For Mac/Windows users deploying AstrBot with Docker Desktop, enter `http://host.docker.internal:11434/v1` for the API Base URL.\
+For Linux users deploying AstrBot with Docker, enter `http://172.17.0.1:11434/v1` for the API Base URL, or replace `172.17.0.1` with your public IP address (ensure that port 11434 is allowed by the host system).\
+If Ollama is deployed using Docker, ensure that port 11434 is mapped to the host.
 
+:::
 
-### FAQ
+## FAQ
 
-Error:
-```
-AstrBot request failed.
-Error type: NotFoundError
-Error message: Error code: 404 - {'error': {'message': 'model "llama3.1-8b" not found, try pulling it first', 'type': 'api_error', 'param': None, 'code': None}}
+Error:
 ```
+AstrBot request failed.
+Error type: NotFoundError
+Error message: Error code: 404 - {'error': {'message': 'model "llama3.1-8b" not found, try pulling it first', 'type': 'api_error', 'param': None, 'code': None}}
 
-Please follow the tutorial above and use `ollama pull <model_name>` to pull the model.
-
-Then use `ollama run <model_name>` to run the model.
+```
+Please refer to the instructions above and use `ollama pull <model_name>` to pull the model, then use `ollama run <model_name>` to run the model.
````
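For reference, the pull/list/run workflow the revised page describes comes down to the following terminal session; a minimal sketch, using the page's own `deepseek-r1:8b` example:

```bash
# Download the model; names follow the <model_name>:<model_version> format
ollama pull deepseek-r1:8b

# Confirm the model now appears among the pulled models
ollama list

# Start an interactive session with the model
ollama run deepseek-r1:8b
```

This is also the remedy for the 404 in the FAQ: the model named in the error, `llama3.1-8b`, was never pulled, and it does not match the `<model_name>:<model_version>` format either (the corresponding registry tag would presumably be `llama3.1:8b`).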

package-lock.json

Lines changed: 4 additions & 4 deletions
Some generated files are not rendered by default.

source/images/ollama/image.png

229 KB
Lines changed: 17 additions & 57 deletions
````diff
@@ -1,80 +1,42 @@
-# Integrating Ollama
+# Integrating Ollama
 
-🦙 Ollama is a free, open-source tool that lets you run large language models (LLMs) on your own computer. (hardwares must meet requirements).
+🦙 Ollama is a free, open-source application that lets you run large language models (LLMs) on your own computer. (hardware must meet requirements)
 
-## Download and Install Ollama
+## Download and Install Ollama
 
-You can download `Ollama` in [https://ollama.com/](https://ollama.com/)
+You can download Ollama from [https://ollama.com](https://ollama.com/download).
 
-## Select and Pull a Model
-
-* Choose the model you want to use at [https://ollama.com/search](https://ollama.com/search) or HuggingFace.
-* In the terminal (PowerShell on Windows), enter `ollama pull <model_name>` to download the model.
-* `model_name` format: `<model_name>:<model_version>`. For example, `deepseek-r1:8b`.
-* The 8b parameter model requires at least 16GB of video memory (VRAM). Refer to other documentation for detailed information on configurations and parameter sizes.
-* After pulling is complete, use `ollama list` to view the models you have pulled.
-* Then use `ollama run <model_name>` to run the model.
-
-## Configure AstrBot
-
-* In AstrBot:
-* ![image](https://github.com/user-attachments/assets/5f55556f-0278-4350-82b3-c430d855c6bb)
-
-* Save the configuration.
-* Enter `/provider` to view the models configured in AstrBot.
-* For Mac/Windows users deploying AstrBot with Docker Desktop, enter `http://host.docker.internal:11434/v1` for the API Base URL.
-* For Linux users deploying AstrBot with Docker, enter `http://172.17.0.1:11434/v1` for the API Base URL, or replace `172.17.0.1` with your public IP address (ensure that port 11434 is allowed by the host system).
-* If LM Studio is deployed using Docker, ensure that port 11434 is mapped to the host.
-
-## FAQ
-
-* Error:
-* `AstrBot request failed.`
-* `Error type: NotFoundError`
-* `Error message: Error code: 404 - {'error': {'message': 'model "llama3.1-8b" not found, try pulling it first', 'type': 'api_error', 'param': None, 'code': None}}`
-* Please refer to the instructions above and use `ollama pull <model_name>` to pull the model.
-* Then use `ollama run <model_name>` to run the model.
-
-
-# Integrating Ollama to Use Models Such as DeepSeek-R1
-
-Ollama lets you deploy models on your local computer (the hardware must meet the requirements)
-
-### Download and Install Ollama
-
-https://ollama.com/
-
-### Select the Model You Want to Use
+## Select the Model You Want to Use
 
 Choose the model you want to use at https://ollama.com/search.
 
-In the terminal (on windows, powershell), enter `ollama pull <model_name>` to download the model.
+In the terminal (on Windows, Powershell), enter `ollama pull <model_name>` to download the model.
 
 model_name format: `<model_name>:<model_version>`, e.g. `deepseek-r1:8b`
 
 > The 8b model requires at least 16GB of VRAM. For details on configurations and parameter sizes, see the other documentation.
 
-After pulling is complete, `ollama list` shows the models you have pulled.
+After pulling is complete, enter `ollama list` to view the models you have pulled.
 
 Then use `ollama run <model_name>` to run the model.
 
-### Configure AstrBot
+## Configure AstrBot
 
-In AstrBot:
-
-![image](https://github.com/user-attachments/assets/5f55556f-0278-4350-82b3-c430d855c6bb)
+Open the AstrBot console -> Service Providers page, click Add Model Provider, then find and click `Ollama`
+![image](/source/images/ollama/image.png)
 
 Save the configuration.
 
-> Enter /provider to view the models configured in AstrBot
+::: tip
 
-> For Mac/Windows users deploying AstrBot with Docker Desktop, enter `http://host.docker.internal:11434/v1` for the API Base URL
-> For Linux users deploying AstrBot with Docker, enter `http://172.17.0.1:11434/v1` for the API Base URL, or replace `172.17.0.1` with your public IP IP (ensure that port 11434 is allowed by the host system).
+For Mac/Windows users deploying AstrBot with Docker Desktop, enter `http://host.docker.internal:11434/v1` for the API Base URL.\
+For Linux users deploying AstrBot with Docker, enter `http://172.17.0.1:11434/v1` for the API Base URL, or replace `172.17.0.1` with your public IP (ensure that port 11434 is allowed by the host system).\
+If Ollama is deployed using Docker, ensure that port 11434 is mapped to the host.
 
-If LM Studio is deployed using Docker, ensure that port 11434 is mapped to the host.
+:::
 
 
-### FAQ
+## FAQ
 
 Error:
 ```
@@ -83,6 +45,4 @@ AstrBot request failed.
 Error message: Error code: 404 - {'error': {'message': 'model "llama3.1-8b" not found, try pulling it first', 'type': 'api_error', 'param': None, 'code': None}}
 ```
 
-Please follow the tutorial above and use `ollama pull <model_name>` to pull the model.
-
-Then use `ollama run <model_name>` to run the model.
+Please follow the tutorial above and use `ollama pull <model_name>` to pull the model, then use `ollama run <model_name>` to run the model.
````
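Before saving the provider configuration, it can be worth confirming that the chosen API Base URL actually responds. A minimal sketch, assuming a non-Docker setup where Ollama listens on `localhost:11434` and serves its OpenAI-compatible API under the `/v1` prefix used above; substitute `host.docker.internal` or `172.17.0.1` per the tip:

```bash
# List the models the endpoint exposes; the pulled model should appear
curl http://localhost:11434/v1/models

# Send a single chat completion through the OpenAI-compatible API
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "deepseek-r1:8b", "messages": [{"role": "user", "content": "Hello"}]}'
```

If the second call returns the 404 shown in the FAQ, the `model` value in the request does not match any model that has been pulled.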
