2 changes: 1 addition & 1 deletion dockerfile
@@ -1,4 +1,4 @@
-FROM ollama/ollama:0.17.7
+FROM ollama/ollama:rocm
P1 Breaking change: ROCm image is AMD GPU-only

The ollama/ollama:rocm tag is specifically built for AMD GPUs using the ROCm framework. Switching to this tag means the image will no longer work for users running on NVIDIA GPUs (CUDA) or CPU-only environments — which are far more common deployment targets.

The original ollama/ollama:0.17.7 is a multi-arch image (linux/amd64 and linux/arm64) that supports NVIDIA (CUDA) and CPU inference. Replacing it with the ROCm variant is a significant regression in hardware compatibility.

If AMD GPU support is needed, consider instead using a build argument or a separate Dockerfile variant rather than changing the default image. If the intent truly is AMD-only, this needs to be explicitly documented and communicated to users of this project.
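One way to keep the multi-backend default while still allowing an AMD variant is a build argument on the FROM line — a minimal sketch, assuming the pinned 0.17.7 tag from this repository (whether a versioned ROCm tag such as 0.17.7-rocm exists upstream is an assumption to verify on Docker Hub):

```dockerfile
# Base image tag is selectable at build time; defaults to the
# multi-backend, version-pinned image.
# NOTE: availability of a versioned "-rocm" tag upstream is an assumption.
ARG OLLAMA_TAG=0.17.7
FROM ollama/ollama:${OLLAMA_TAG}
```

AMD users could then opt in with `docker build --build-arg OLLAMA_TAG=rocm .` while everyone else keeps the pinned default.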


P1 Floating/mutable tag breaks reproducibility

ollama/ollama:rocm is a rolling, mutable tag that gets updated regularly (Docker Hub shows it was updated just hours ago). This means two builds at different points in time will produce different images, making it impossible to guarantee reproducible builds and potentially introducing silent breaking changes or regressions.

The previous tag 0.17.7 was an immutable, pinned version. If a ROCm-based image is required, it should be pinned to a specific versioned ROCm tag, such as ollama/ollama:0.x.y-rocm (if that format exists upstream), or pinned by digest.
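To pin by digest, the tag can be resolved with standard Docker CLI commands (shown here as a sketch against a live registry; the digest value changes whenever the tag is re-pushed, so the output below will differ over time):

```shell
# Resolve the current digest for the rocm tag without pulling the image
docker buildx imagetools inspect ollama/ollama:rocm

# Or, after pulling, read the repo digest from local image metadata
docker pull ollama/ollama:rocm
docker image inspect --format '{{index .RepoDigests 0}}' ollama/ollama:rocm
```

The resolved `name@sha256:...` reference can then be used directly in the Dockerfile's FROM line.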

Suggested change
-FROM ollama/ollama:rocm
+FROM ollama/ollama:0.17.7


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify whether compose/deployment config exposes AMD ROCm devices.
rg -n -C3 'devices:|/dev/kfd|/dev/dri|device_cgroup_rules|group_add' docker-compose.yml

Repository: appwrite/docker-ollama

Length of output: 48


🏁 Script executed:

#!/bin/bash
# Check if docker-compose files exist and their locations
find . -name "*docker-compose*" -type f 2>/dev/null

Repository: appwrite/docker-ollama

Length of output: 86


🏁 Script executed:

#!/bin/bash
# List files at the repository root to understand structure
ls -la | head -30

Repository: appwrite/docker-ollama

Length of output: 590


🏁 Script executed:

#!/bin/bash
# Check the contents of docker-compose.yml if it exists at root
if [ -f "docker-compose.yml" ]; then
  cat -n docker-compose.yml
fi

Repository: appwrite/docker-ollama

Length of output: 554


Add AMD GPU device configuration to expose ROCm acceleration.

The Dockerfile uses FROM ollama/ollama:rocm but docker-compose.yml does not expose AMD GPU devices (/dev/kfd, /dev/dri). Without this, the container will silently fall back to CPU and cause major performance regressions. Add the following to the ollama service:

    devices:
      - /dev/kfd
      - /dev/dri
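Putting the fix together, the ollama service in docker-compose.yml might look like the following sketch (the service name, build context, and group membership are assumptions based on the review context, not confirmed repository contents):

```yaml
services:
  ollama:
    build: .
    # Expose AMD GPU devices so ROCm can reach the hardware
    devices:
      - /dev/kfd
      - /dev/dri
    # Some host distributions also require the container user to be in
    # the video (and sometimes render) group to access the GPU nodes
    group_add:
      - video
```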
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@dockerfile` at line 1, the docker-compose configuration is missing AMD GPU
device mappings, so the ollama/ollama:rocm image will fall back to CPU; update
the ollama service (service name "ollama") in docker-compose.yml to add a
devices block exposing /dev/kfd and /dev/dri so ROCm acceleration is available
to the container (i.e., add a devices: entry listing /dev/kfd and /dev/dri under
the ollama service).

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

What is the current immutable digest for ollama/ollama:rocm, and which platforms does that digest support?

💡 Result:

The current immutable digest for ollama/ollama:rocm is sha256:e6777885093e. It supports the linux/amd64 platform exclusively, per the OS/ARCH listing on the Docker Hub tags page for the rocm tag (last pushed 2 days ago as of 2026-03-30). No other platforms, such as arm64, are listed for rocm, unlike the latest tag.

Pin ollama/ollama:rocm to a digest for reproducible builds.

Using a floating tag makes builds non-deterministic and can introduce unreviewed runtime changes. The current rocm tag resolves to sha256:e6777885093e and supports only linux/amd64.

Suggested change
-FROM ollama/ollama:rocm
+FROM ollama/ollama:rocm@sha256:e6777885093e
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-FROM ollama/ollama:rocm
+FROM ollama/ollama:rocm@sha256:e6777885093e
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@dockerfile` at line 1, replace the floating image reference in the
Dockerfile's FROM instruction (FROM ollama/ollama:rocm) with a digest-pinned
reference (e.g. FROM ollama/ollama:rocm@sha256:e6777885093e) to ensure
reproducible builds; update the FROM line to use the `@sha256:...` form, and
optionally add a brief comment noting the pinned digest and supported platform
(linux/amd64).


# Preload specific models
ARG MODELS