
Commit 9d34be8

rajbos and Copilot committed
refactor: switch Ollama install to Docker service container
Replace the manual install+start steps with a service container so Dependabot's github-actions ecosystem can automatically keep the ollama/ollama image version up to date.

Changes:
- Add a services.ollama block (ollama/ollama:0.23.4) with a bind mount of /home/runner/.ollama -> /root/.ollama so the actions/cache model cache is visible inside the running container
- Remove the 'Install Ollama' step (no longer needed)
- Rename the 'Start Ollama' step to 'Ensure Ollama model is available'; Ollama is already running as a service, so we only need to wait for readiness and pull the model via the REST API if it is not cached

Caching still works: actions/cache restores blobs to the bind-mounted host path after the service starts; because it is a live bind mount, the files are immediately visible inside the container. The pull call is a no-op when the blobs already exist.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
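The cache check this commit relies on pipes the service's /api/tags JSON into an inline python3 one-liner. As a minimal sketch, the same decision logic can be written as a standalone function; the sample payloads below are illustrative, not captured from a real Ollama server:

```python
import json


def model_available(tags_json: str, model: str = "qwen2.5:1.5b") -> bool:
    """Return True if an Ollama /api/tags payload already lists the model.

    Mirrors the workflow's inline check: /api/tags returns a JSON object
    shaped like {"models": [{"name": "..."}, ...]}; a substring match on
    each name is enough since the tag uniquely identifies the model.
    """
    data = json.loads(tags_json)
    return any(model in m["name"] for m in data.get("models", []))


# Illustrative payloads (assumed shape, not real server output):
cached = json.dumps({"models": [{"name": "qwen2.5:1.5b"}]})
empty = json.dumps({"models": []})
```

When this check returns False, the workflow falls through to the POST /api/pull branch; when the cache restored the blobs, it returns True and the pull is skipped entirely.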
1 parent 10fb004 commit 9d34be8

1 file changed: .github/workflows/check-toolnames.yml

Lines changed: 18 additions & 19 deletions
@@ -176,6 +176,13 @@ jobs:
       issues: write
       contents: write
       pull-requests: write
+    services:
+      ollama:
+        image: ollama/ollama:0.23.4
+        ports:
+          - 11434:11434
+        volumes:
+          - /home/runner/.ollama:/root/.ollama
     steps:
       - name: Harden the runner (Audit all outbound calls)
         uses: step-security/harden-runner@a5ad31d6a139d249332a2605b85202e8c0b78450 # v2.19.1
@@ -254,23 +261,10 @@ jobs:
           key: ollama-${{ runner.os }}-qwen2.5-1.5b-v1
           restore-keys: ollama-${{ runner.os }}-qwen2.5-1.5b-
 
-      - name: Install Ollama
-        if: steps.extract.outputs.has_missing == 'true'
-        env:
-          OLLAMA_VERSION: "v0.23.4"
-          OLLAMA_SHA256: "c0822ce85413647f8502862c7179740311f271fcff8f21d61c6d352729f4c28d"
-        run: |
-          curl -fsSL "https://github.com/ollama/ollama/releases/download/${OLLAMA_VERSION}/ollama-linux-amd64.tar.zst" -o ollama-linux-amd64.tar.zst
-          echo "${OLLAMA_SHA256}  ollama-linux-amd64.tar.zst" | sha256sum -c
-          sudo tar -I zstd -xf ollama-linux-amd64.tar.zst -C /usr
-          rm ollama-linux-amd64.tar.zst
-
-      - name: Start Ollama and ensure model is available
+      - name: Ensure Ollama model is available
         if: steps.extract.outputs.has_missing == 'true'
         run: |
-          export OLLAMA_MODELS="$HOME/.ollama/models"
-          ollama serve &
-          echo "Waiting for Ollama to be ready..."
+          echo "Waiting for Ollama service to be ready..."
           for i in {1..30}; do
             if curl -fsS http://localhost:11434/api/tags >/dev/null 2>&1; then
               echo "Ollama is ready (attempt $i)"
@@ -279,11 +273,16 @@ jobs:
             sleep 2
           done
           curl -fsS http://localhost:11434/api/tags >/dev/null
-          if ! ollama list 2>/dev/null | grep -q "qwen2.5:1.5b"; then
-            echo "Pulling qwen2.5:1.5b model..."
-            ollama pull qwen2.5:1.5b
-          else
+          if curl -fsS http://localhost:11434/api/tags | \
+             python3 -c "import json,sys; d=json.load(sys.stdin); exit(0 if any('qwen2.5:1.5b' in m['name'] for m in d.get('models',[])) else 1)" \
+             2>/dev/null; then
             echo "Model qwen2.5:1.5b already available in cache"
+          else
+            echo "Pulling qwen2.5:1.5b model..."
+            curl -X POST http://localhost:11434/api/pull \
+              -H "Content-Type: application/json" \
+              -d '{"name":"qwen2.5:1.5b","stream":false}' \
+              --max-time 600
           fi
 
       - name: Generate friendly names with SLM
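The renamed step waits for the service container with a bounded retry loop (`for i in {1..30}` with a 2-second sleep) before checking the model. The same poll-until-ready pattern can be sketched in Python; `fake_probe` below is a stand-in for the real `curl` against /api/tags, used so the sketch runs without a live server:

```python
import time


def wait_for_ready(probe, attempts: int = 30, delay: float = 2.0) -> int:
    """Poll `probe` (a callable returning True once the service answers)
    up to `attempts` times, sleeping `delay` seconds between tries.
    Returns the attempt number that succeeded; raises if all attempts fail,
    mirroring the workflow's final unguarded curl that fails the job."""
    for i in range(1, attempts + 1):
        if probe():
            return i
        time.sleep(delay)
    raise TimeoutError(f"service not ready after {attempts} attempts")


# Fake probe that becomes ready on the third call:
calls = {"n": 0}


def fake_probe():
    calls["n"] += 1
    return calls["n"] >= 3


attempt = wait_for_ready(fake_probe, delay=0)
```

Bounding the loop matters here: a service container that never becomes healthy would otherwise hang the job until the runner's overall timeout.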
