Commit 7dc347b

Authored by RychidM, jackspiering, and crypt0rr
Adds Ollama service (#250)
* Adds Ollama service configuration with Docker Compose
* Comment out optional network configurations and ollama API key in .env for clarity
* Add Ollama service to general README table and update compose file
* Update .env and README for Ollama service configuration
  - Add time zone setting for containers in .env
  - Improve formatting of the configuration table in README

Co-authored-by: Jack Spiering <46534141+jackspiering@users.noreply.github.com>
Co-authored-by: Bart <57799908+crypt0rr@users.noreply.github.com>
1 parent 2783425 commit 7dc347b

File tree

4 files changed, +203 -0 lines changed

README.md

Lines changed: 1 addition & 0 deletions
@@ -187,6 +187,7 @@ A huge thank you to all our contributors! ScaleTail wouldn’t be what it is tod
 | 🖥️ **Node-RED** | A flow-based development tool for visual programming. | [Details](services/nodered) |
 | 🖥️ **Portainer** | A lightweight management UI which allows you to easily manage your Docker environments. | [Details](services/portainer) |
 | 🔍 **searXNG** | A free internet metasearch engine which aggregates results from various search services. | [Details](services/searxng) |
+| 🧠 **Ollama** | A self-hosted solution for running open large language models (LLMs) locally with an OpenAI-compatible API. | [Details](services/ollama) |

 ### 📈 Monitoring and Analytics

services/ollama/.env

Lines changed: 21 additions & 0 deletions

@@ -0,0 +1,21 @@
#version=1.1
#URL=https://github.com/tailscale-dev/ScaleTail
#COMPOSE_PROJECT_NAME= # Optional: only use when running multiple deployments on the same infrastructure.

# Service Configuration
SERVICE=ollama
IMAGE_URL=ollama/ollama:latest

# Network Configuration
SERVICEPORT=11434 # Ollama's default API port. Uncomment the "ports:" section in compose.yaml to expose it to the LAN.
DNS_SERVER=9.9.9.9 # Preferred DNS server for Tailscale. Uncomment the "dns:" section in compose.yaml to enable.

# Tailscale Configuration
TS_AUTHKEY= # Auth key from https://tailscale.com/admin/authkeys. See: https://tailscale.com/kb/1085/auth-keys#generate-an-auth-key for instructions.

# Time zone setting for containers
TZ=Europe/Amsterdam # See: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones

# Any container environment variables are declared below. See: https://docs.docker.com/compose/how-tos/environment-variables/
# Ollama-specific variables
# OLLAMA_API_KEY= # Optional: set a secret key to restrict API access (leave blank to disable auth)
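Before bringing the stack up, it can help to confirm that the required auth key is actually populated. A minimal sketch, assuming the simple `KEY=value` lines above (which also happen to be valid shell):

```bash
# Warn early if TS_AUTHKEY is empty in .env
set -a; . ./.env 2>/dev/null; set +a
if [ -z "$TS_AUTHKEY" ]; then
  echo "TS_AUTHKEY is not set in .env" >&2
else
  echo "TS_AUTHKEY is set"
fi
```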

services/ollama/README.md

Lines changed: 101 additions & 0 deletions

@@ -0,0 +1,101 @@
# Ollama with Tailscale Sidecar Configuration

This Docker Compose configuration sets up [Ollama](https://ollama.com) with Tailscale as a sidecar container, keeping the API securely reachable over your Tailnet.

## Ollama

[Ollama](https://ollama.com) lets you run large language models (LLMs) locally, such as Llama 3, Mistral, and Gemma, with a simple API compatible with the OpenAI client format. Pairing it with Tailscale means you can access your local models from any device on your Tailnet (phone, laptop, remote machine) without exposing the API to the public internet.
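Because recent Ollama releases also expose OpenAI-compatible endpoints under `/v1`, any OpenAI-style client can point at this host. A minimal sketch (the `<tailscale-ip>` placeholder and `llama3` model name are whatever your own deployment uses):

```bash
# Chat completion via Ollama's OpenAI-compatible endpoint
curl http://<tailscale-ip>:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "llama3",
        "messages": [{"role": "user", "content": "Hello!"}]
      }'
```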
## Configuration Overview

In this setup, the `tailscale-ollama` container runs Tailscale, which manages secure networking for Ollama. The `app-ollama` container uses Docker's `network_mode: service:tailscale`, so all of its traffic is routed through the Tailscale network stack. The Ollama API remains Tailnet-only by default unless you explicitly expose the port to your LAN.

An optional `yourNetwork` external Docker network is attached to the `tailscale` container. This allows other containers on the same host (such as Open WebUI or other LLM frontends) to reach Ollama via its Tailscale IP, keeping inter-container communication on the same overlay network.

## Prerequisites

- The host user must be in the `docker` group.
- The `/dev/net/tun` device must be available on the host (standard on most Linux systems).
- Pre-create the bind-mount directories before starting the stack to avoid Docker creating root-owned folders:

  ```bash
  mkdir -p config ts/state ollama-data
  ```

- If you use the optional `yourNetwork` network, create it first if it does not already exist:

  ```bash
  docker network create yourNetwork
  ```

If you don't use a shared proxy network, remove the `networks:` sections from `compose.yaml`.

## Volumes

| Path | Purpose |
| --------------- | ----------------------------------------------------------------- |
| `./config` | Tailscale serve config (`serve.json`) |
| `./ts/state` | Tailscale persistent state |
| `./ollama-data` | Downloaded Ollama models (can be large; ensure enough disk space) |

## MagicDNS and HTTPS

Tailscale Serve is pre-configured to proxy HTTPS on port 443 to Ollama's internal port 11434. To enable it:

1. Uncomment `TS_ACCEPT_DNS=true` in the `tailscale` service environment.
2. Ensure your Tailnet has MagicDNS and HTTPS certificates enabled in the [Tailscale admin console](https://login.tailscale.com/admin/dns).
3. The `serve.json` config in `compose.yaml` uses `$TS_CERT_DOMAIN` automatically; no manual editing is needed.

You can then reach Ollama at `https://ollama.<your-tailnet-name>.ts.net`.
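For reference, with a hypothetical certificate domain of `ollama.example-tailnet.ts.net`, the serve config declared in `compose.yaml` expands to roughly:

```json
{
  "TCP": {"443": {"HTTPS": true}},
  "Web": {
    "ollama.example-tailnet.ts.net:443": {
      "Handlers": {"/": {"Proxy": "http://127.0.0.1:11434"}}
    }
  },
  "AllowFunnel": {"ollama.example-tailnet.ts.net:443": false}
}
```

Funnel stays disabled, so the hostname is reachable from your Tailnet only.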
## Port Exposure (LAN access)

By default, the `ports:` section is commented out, so Ollama is only accessible over your Tailnet. If you also want LAN access (e.g. from devices not on Tailscale), uncomment it in `compose.yaml`:

```yaml
ports:
  - 0.0.0.0:11434:11434
```

This is optional and not required for Tailnet-only usage.

## API Key (Optional)

Ollama supports a simple bearer token for API access. Set `OLLAMA_API_KEY` in your `.env` file to enable it. Leave it blank to allow unauthenticated access (safe when the API is Tailnet-only).

## First-time Setup

After starting the stack, pull a model to get started:

```bash
docker exec app-ollama ollama pull llama3
```

You can then send requests to the API:

```bash
curl http://<tailscale-ip>:11434/api/generate \
  -d '{"model": "llama3", "prompt": "Hello!"}'
```
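Note that `/api/generate` streams its reply as newline-delimited JSON objects by default; add `"stream": false` to the request body to receive a single JSON response instead:

```bash
curl http://<tailscale-ip>:11434/api/generate \
  -d '{"model": "llama3", "prompt": "Hello!", "stream": false}'
```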
Or, if using HTTPS via Tailscale Serve:

```bash
curl https://ollama.<your-tailnet-name>.ts.net/api/generate \
  -d '{"model": "llama3", "prompt": "Hello!"}'
```

## Files to check

Check the following files before starting the stack, as some variables must be defined upfront:

- `.env`: set `TS_AUTHKEY` (required); optionally set `OLLAMA_API_KEY`.

## Useful Links

- [Ollama official site](https://ollama.com)
- [Ollama model library](https://ollama.com/library)
- [Ollama GitHub](https://github.com/ollama/ollama)
- [Tailscale auth keys](https://tailscale.com/kb/1085/auth-keys)
- [Tailscale Serve docs](https://tailscale.com/kb/1312/serve)
- [Open WebUI](https://github.com/open-webui/open-webui), a popular browser-based UI for Ollama

services/ollama/compose.yaml

Lines changed: 80 additions & 0 deletions

@@ -0,0 +1,80 @@
configs:
  ts-serve:
    content: |
      {"TCP":{"443":{"HTTPS":true}},
      "Web":{"$${TS_CERT_DOMAIN}:443":
      {"Handlers":{"/":
      {"Proxy":"http://127.0.0.1:11434"}}}},
      "AllowFunnel":{"$${TS_CERT_DOMAIN}:443":false}}

services:
  # Make sure you have updated/checked the .env file with the correct variables.
  # All the ${xx} variables need to be defined there.

  # Tailscale Sidecar Configuration
  tailscale:
    image: tailscale/tailscale:latest # Image to be used
    container_name: tailscale-${SERVICE} # Name for local container management
    hostname: ${SERVICE} # Name used within your Tailscale environment
    environment:
      - TS_AUTHKEY=${TS_AUTHKEY}
      - TS_STATE_DIR=/var/lib/tailscale
      - TS_SERVE_CONFIG=/config/serve.json # Tailscale Serve configuration to expose the web interface on your local Tailnet - remove this line if not required
      - TS_USERSPACE=false
      - TS_ENABLE_HEALTH_CHECK=true # Enable healthcheck endpoint: "/healthz"
      - TS_LOCAL_ADDR_PORT=127.0.0.1:41234 # The <addr>:<port> for the healthz endpoint
      - TS_AUTH_ONCE=true
      # - TS_ACCEPT_DNS=true # Uncomment when using MagicDNS
    configs:
      - source: ts-serve
        target: /config/serve.json
    volumes:
      - ./config:/config # Config folder used to store Tailscale files
      - ./ts/state:/var/lib/tailscale # Tailscale requirement
    devices:
      - /dev/net/tun:/dev/net/tun # Network configuration for Tailscale to work
    cap_add:
      - net_admin # Tailscale requirement
      - sys_module # Required to load kernel modules for Tailscale
    #ports:
    #  - 0.0.0.0:${SERVICEPORT}:${SERVICEPORT} # Binding ${SERVICEPORT} to the local network - may be removed if only exposure to your Tailnet is required
    # If any DNS issues arise, use your preferred DNS provider by uncommenting the config below
    #dns:
    #  - ${DNS_SERVER}

    # networks:
    #   - yourNetwork # Optional: connect to an existing proxy network so other containers can reach Ollama via its Tailscale IP

    healthcheck:
      test: ["CMD", "wget", "--spider", "-q", "http://127.0.0.1:41234/healthz"] # Check Tailscale has a Tailnet IP and is operational
      interval: 1m # How often to perform the check
      timeout: 10s # Time to wait for the check to succeed
      retries: 3 # Number of retries before marking as unhealthy
      start_period: 10s # Time to wait before starting health checks
    restart: always

  # Ollama
  application:
    image: ${IMAGE_URL} # Image to be used
    network_mode: service:tailscale # Sidecar configuration to route Ollama through Tailscale
    container_name: app-${SERVICE} # Name for local container management
    environment:
      - OLLAMA_HOST=0.0.0.0:11434
      - OLLAMA_KEEP_ALIVE=24h # Optional: keeps models loaded in memory (default is 5 minutes)
      # - OLLAMA_API_KEY=${OLLAMA_API_KEY} # Optional: set an API key to restrict access
    volumes:
      - ./${SERVICE}-data:/root/.ollama # Stores downloaded models
    depends_on:
      tailscale:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "pgrep", "-f", "${SERVICE}"] # Check if the Ollama process is running
      interval: 1m # How often to perform the check
      timeout: 10s # Time to wait for the check to succeed
      retries: 3 # Number of retries before marking as unhealthy
      start_period: 30s # Time to wait before starting health checks
    restart: always

# networks:
#   yourNetwork:
#     external: true # Assumes an existing external Docker network named "yourNetwork"
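As a hypothetical illustration of the optional `yourNetwork` pattern, a frontend such as Open WebUI running on the same host could join that network and reach Ollama through the Tailscale sidecar by container name (the service name and image tag here are illustrative):

```yaml
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    environment:
      - OLLAMA_BASE_URL=http://tailscale-ollama:11434 # The sidecar's container_name on the shared network
    networks:
      - yourNetwork

networks:
  yourNetwork:
    external: true # Same pre-created network referenced above
```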
