Conversation
Walkthrough: The Dockerfile base image was updated from `ollama/ollama:0.17.7` to `ollama/ollama:rocm`.
Estimated code review effort: 🎯 1 (Trivial) | ⏱️ ~3 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (inconclusive)
Greptile Summary: This PR changes the base Docker image from the pinned, multi-backend `ollama/ollama:0.17.7` tag to the floating, AMD-only `ollama/ollama:rocm` tag. Key concerns:
Confidence Score: 2/5. Not safe to merge: this change breaks GPU/CPU compatibility for all non-AMD users and loses build reproducibility. Two P1 issues are present: (1) switching to a ROCm-only image is a breaking hardware-compatibility regression for NVIDIA and CPU users, and (2) using a floating tag eliminates reproducible builds. The PR also has no description or test plan.
Important Files Changed
Reviews (1): Last reviewed commit: "changed image"
```diff
@@ -1,4 +1,4 @@
-FROM ollama/ollama:0.17.7
+FROM ollama/ollama:rocm
```
Breaking change: ROCm image is AMD GPU-only
The ollama/ollama:rocm tag is specifically built for AMD GPUs using the ROCm framework. Switching to this tag means the image will no longer work for users running on NVIDIA GPUs (CUDA) or CPU-only environments — which are far more common deployment targets.
The original ollama/ollama:0.17.7 is a multi-backend image that supports NVIDIA, Apple Silicon (Metal), and CPU inference. Replacing it with the ROCm variant is a significant regression in hardware compatibility.
If AMD GPU support is needed, consider instead using a build argument or a separate Dockerfile variant rather than changing the default image. If the intent truly is AMD-only, this needs to be explicitly documented and communicated to users of this project.
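The build-argument approach suggested above could be sketched roughly as follows; the `OLLAMA_TAG` name and its default are illustrative choices, not taken from this repository:

```dockerfile
# Sketch of a build-arg based base image selection.
# OLLAMA_TAG is an assumed name; the default keeps the pinned multi-backend image.
ARG OLLAMA_TAG=0.17.7
FROM ollama/ollama:${OLLAMA_TAG}
```

An AMD-specific image could then be built with `docker build --build-arg OLLAMA_TAG=rocm .`, while the default build remains compatible with NVIDIA and CPU-only deployments.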
```diff
@@ -1,4 +1,4 @@
-FROM ollama/ollama:0.17.7
+FROM ollama/ollama:rocm
```
Floating/mutable tag breaks reproducibility
ollama/ollama:rocm is a rolling, mutable tag that gets updated regularly (Docker Hub shows it was updated just hours ago). This means two builds at different points in time will produce different images, making it impossible to guarantee reproducible builds and potentially introducing silent breaking changes or regressions.
The previous tag 0.17.7 was an immutable, pinned version. If a ROCm-based image is required, it should be pinned to a specific versioned ROCm tag, such as ollama/ollama:0.x.y-rocm (if that format exists upstream), or pinned by digest.
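Digest pinning would look roughly like the sketch below. The `<digest>` value is a placeholder, not a real digest: a full image digest is 64 hex characters, and the current one for a tag can be resolved with `docker buildx imagetools inspect ollama/ollama:rocm`.

```dockerfile
# Digest-pinned base image (sketch). Replace <digest> with the full
# 64-hex-character value resolved from the registry, e.g. via:
#   docker buildx imagetools inspect ollama/ollama:rocm
FROM ollama/ollama:rocm@sha256:<digest>
```

A pinned digest is immutable even if the `rocm` tag is later repointed, so rebuilds always fetch the reviewed image.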
```diff
-FROM ollama/ollama:rocm
+FROM ollama/ollama:0.17.7
```
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@dockerfile`:
- Line 1: The docker-compose configuration is missing AMD GPU device mappings so
the ollama/ollama:rocm image will fall back to CPU; update the ollama service
(service name "ollama") in docker-compose.yml to add a devices block exposing
/dev/kfd and /dev/dri so ROCm acceleration is available to the container (i.e.,
add a devices: entry listing /dev/kfd and /dev/dri under the ollama service).
- Line 1: Replace the floating image reference in the Dockerfile's FROM
instruction (FROM ollama/ollama:rocm) with a digest-pinned reference (e.g. FROM
ollama/ollama@sha256:e6777885093e) to ensure reproducible builds; update the
FROM line in the Dockerfile to use the `@sha256`:... form, and optionally add a
brief comment noting the pinned digest and supported platform (linux/amd64).
```diff
@@ -1,4 +1,4 @@
-FROM ollama/ollama:0.17.7
+FROM ollama/ollama:rocm
```
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
# Verify whether compose/deployment config exposes AMD ROCm devices.
rg -n -C3 'devices:|/dev/kfd|/dev/dri|device_cgroup_rules|group_add' docker-compose.yml
```
Repository: appwrite/docker-ollama
Length of output: 48
🏁 Script executed:
```shell
#!/bin/bash
# Check if docker-compose files exist and their locations
find . -name "*docker-compose*" -type f 2>/dev/null
```
Repository: appwrite/docker-ollama
Length of output: 86
🏁 Script executed:
```shell
#!/bin/bash
# List files at the repository root to understand structure
ls -la | head -30
```
Repository: appwrite/docker-ollama
Length of output: 590
🏁 Script executed:
```shell
#!/bin/bash
# Check the contents of docker-compose.yml if it exists at root
if [ -f "docker-compose.yml" ]; then
  cat -n docker-compose.yml
fi
```
Repository: appwrite/docker-ollama
Length of output: 554
Add AMD GPU device configuration to expose ROCm acceleration.
The Dockerfile uses `FROM ollama/ollama:rocm`, but docker-compose.yml does not expose the AMD GPU devices (`/dev/kfd`, `/dev/dri`). Without them, the container will silently fall back to CPU inference, causing a major performance regression. Add the following to the `ollama` service:

```yaml
devices:
  - /dev/kfd
  - /dev/dri
```
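For context, that `devices` block slots into the compose service roughly as follows. This is a minimal sketch: the service name `ollama` comes from the review comment, but the port, volume, and other fields are illustrative assumptions, not this repository's actual docker-compose.yml:

```yaml
# docker-compose.yml (sketch; only the devices block is from the review,
# the remaining fields are assumed for illustration)
services:
  ollama:
    image: ollama/ollama:rocm
    devices:
      - /dev/kfd   # ROCm compute interface
      - /dev/dri   # Direct Rendering Infrastructure (GPU access)
    ports:
      - "11434:11434"   # default ollama API port
    volumes:
      - ollama:/root/.ollama
volumes:
  ollama:
```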
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@dockerfile` at line 1, The docker-compose configuration is missing AMD GPU
device mappings so the ollama/ollama:rocm image will fall back to CPU; update
the ollama service (service name "ollama") in docker-compose.yml to add a
devices block exposing /dev/kfd and /dev/dri so ROCm acceleration is available
to the container (i.e., add a devices: entry listing /dev/kfd and /dev/dri under
the ollama service).
🧩 Analysis chain
🌐 Web query:
What is the current immutable digest for ollama/ollama:rocm, and which platforms does that digest support?
💡 Result:
The current immutable digest for ollama/ollama:rocm is sha256:e6777885093e. It supports the linux/amd64 platform exclusively, as indicated by the OS/ARCH listing on Docker Hub tags page for the rocm tag (last pushed 2 days ago, aligning with today's date of 2026-03-30 given recent updates). No other platforms like arm64 are listed for rocm, unlike the latest tag.
Citations:
- 1: https://registry.hub.docker.com/r/ollama/ollama/tags
- 2: https://hub.docker.com/r/ollama/ollama/tags
Pin ollama/ollama:rocm to a digest for reproducible builds.
Using a floating tag makes builds non-deterministic and can introduce unreviewed runtime changes. The current rocm tag resolves to sha256:e6777885093e and supports only linux/amd64.
Suggested change:

```diff
-FROM ollama/ollama:rocm
+FROM ollama/ollama:rocm@sha256:e6777885093e
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-FROM ollama/ollama:rocm
+FROM ollama/ollama:rocm@sha256:e6777885093e
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@dockerfile` at line 1, Replace the floating image reference in the
Dockerfile's FROM instruction (FROM ollama/ollama:rocm) with a digest-pinned
reference (e.g. FROM ollama/ollama@sha256:e6777885093e) to ensure reproducible
builds; update the FROM line in the Dockerfile to use the `@sha256`:... form, and
optionally add a brief comment noting the pinned digest and supported platform
(linux/amd64).
What does this PR do?
(Provide a description of what this PR does.)
Test Plan
(Write your test plan here. If you changed any code, please provide us with clear instructions on how you verified your changes work.)
Related PRs and Issues
(If this PR is related to any other PR or resolves any issue or related to any issue link all related PR and issues here.)
Have you read the Contributing Guidelines on issues?
(Write your answer here.)