# Integrating Ollama

🦙 Ollama is a free, open-source tool that lets you run large language models (LLMs) on your own computer (hardware must meet requirements).

## Download and Install Ollama

You can download Ollama from [https://ollama.com/download](https://ollama.com/download).
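
On Linux, the official install script from the download page can be run directly in the terminal (macOS and Windows users should use the installer from the download page instead):

```shell
$ curl -fsSL https://ollama.com/install.sh | sh
```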

## Select and Pull a Model

Choose the model you want to use at [https://ollama.com/search](https://ollama.com/search).
In the terminal (PowerShell on Windows), enter `ollama pull <model_name>` to download the model.

`model_name` format: `<model_name>:<model_version>`. For example, `deepseek-r1:8b`.

> The 8b-parameter model requires at least 16GB of video memory (VRAM). Refer to other documentation for detailed information on configurations and parameter sizes.

After pulling is complete, use `ollama list` to view the models you have pulled.

Then use `ollama run <model_name>` to run the model.
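
Putting the steps above together, a typical terminal session looks like this (the model tag is just an example):

```shell
# Download the model
$ ollama pull deepseek-r1:8b

# Verify it was downloaded
$ ollama list

# Start the model (opens an interactive chat in the terminal)
$ ollama run deepseek-r1:8b
```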

## Configure AstrBot

Open the AstrBot WebUI, go to Service Provider Management, click Add Provider, then find and select `Ollama`.

Save the configuration.

::: tip
For Mac/Windows users deploying AstrBot with Docker Desktop, enter `http://host.docker.internal:11434/v1` for the API Base URL.

For Linux users deploying AstrBot with Docker, enter `http://172.17.0.1:11434/v1` for the API Base URL, or replace `172.17.0.1` with your public IP address (ensure that port 11434 is allowed by the host system).

If Ollama is deployed using Docker, ensure that port 11434 is mapped to the host.
:::
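
The `/v1` suffix in the API Base URL is Ollama's OpenAI-compatible endpoint, which is what AstrBot calls behind the scenes. As a quick sanity check outside AstrBot, you can construct the same chat request yourself; a minimal sketch (the `build_chat_request` helper is purely illustrative, and the base URL assumes a local, non-Docker Ollama):

```python
import json

def build_chat_request(base_url: str, model: str, prompt: str):
    """Build the URL and JSON body for an OpenAI-compatible chat request."""
    url = base_url.rstrip("/") + "/chat/completions"
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, body

# Same endpoint and model name that AstrBot would use
url, body = build_chat_request("http://localhost:11434/v1", "deepseek-r1:8b", "Hello!")
print(url)
print(json.dumps(body))
```

POSTing that body to the printed URL (for example with `curl` or `requests`) should return a completion if the model has been pulled and Ollama is running.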

## FAQ

Error:

```
AstrBot request failed.
Error type: NotFoundError
Error message: Error code: 404 - {'error': {'message': 'model "llama3.1-8b" not found, try pulling it first', 'type': 'api_error', 'param': None, 'code': None}}
```

Please refer to the instructions above and use `ollama pull <model_name>` to pull the model, then use `ollama run <model_name>` to run the model.
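
In this particular example, the 404 is likely because the tag is malformed: Ollama tags use a colon between name and version, so the model would normally be pulled as `llama3.1:8b` rather than `llama3.1-8b` (check the model's page on ollama.com for its exact tag):

```shell
# Pull the model under its exact tag, then confirm it appears
$ ollama pull llama3.1:8b
$ ollama list
```

The model name configured in AstrBot must match a tag shown by `ollama list` exactly.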