
feat: add Ollama provider support #163

Open

majiayu000 wants to merge 8 commits into simular-ai:main from majiayu000:feat/ollama-support

Conversation

majiayu000 commented Dec 30, 2025

Summary

  • Add an LMMEngineOllama class to engine.py for local Ollama model support
  • Ollama exposes an OpenAI-compatible API; the default endpoint is http://localhost:11434/v1
  • Supports the OLLAMA_HOST environment variable for a custom endpoint (see the sketch below)
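
A minimal sketch of direct construction (the constructor signature here is an assumption based on the summary above, not verified against engine.py):

from gui_agents.s3.core.engine import LMMEngineOllama

# base_url may be omitted: resolution falls back to OLLAMA_HOST,
# then to the default http://localhost:11434/v1.
engine = LMMEngineOllama(model="llama3.2-vision")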

Usage

agent_s \
    --provider ollama \
    --model llama3.2-vision \
    --ground_provider ollama \
    --ground_url http://localhost:11434 \
    --ground_model llama3.2-vision \
    --grounding_width 1920 \
    --grounding_height 1080

Closes #44

Summary by CodeRabbit

  • New Features

    • Added support for three new LLM engines (Ollama, DeepSeek, Qwen) with flexible configuration via parameters or environment variables, automatic endpoint normalization, and clear validation errors when credentials are missing.
  • Tests

    • Added tests covering initialization, endpoint resolution, and error handling for the new engine types.


Add LMMEngineOllama class to support local Ollama models.
Ollama uses OpenAI-compatible API with default endpoint http://localhost:11434/v1.

Closes simular-ai#44

Signed-off-by: majiayu000 <1835304752@qq.com>
coderabbitai (bot) commented Dec 30, 2025

📝 Walkthrough

Adds support for three new LLM engine providers—Ollama, DeepSeek, and Qwen—inside LMMAgent.__init__, deriving endpoints and API keys from parameters or environment variables, validating required values, normalizing endpoints, and creating engines via LMMEngineOpenAI. New tests cover initialization behaviors.

Changes

Provider Engine Support (gui_agents/s3/core/mllm.py)
Added engine_type branches for ollama, deepseek, and qwen in LMMAgent.__init__. Each branch resolves base_url and api_key from engine_params or environment variables (OLLAMA_HOST, DEEPSEEK_ENDPOINT_URL, DEEPSEEK_API_KEY, QWEN_ENDPOINT_URL, QWEN_API_KEY), enforces provider-specific validation and defaults (Ollama may default api_key to "ollama"; DeepSeek and Qwen require API keys), normalizes endpoint suffixes (ensures /v1 where applied), and instantiates LMMEngineOpenAI. No public signature changes.

Test Suite (tests/test_providers.py)
New unit tests validating LMMAgent initialization for ollama, deepseek, and qwen: environment normalization, error on missing Ollama host, base_url resolution from params and env (including /v1 normalization), and that the created engine is LMMEngineOpenAI.

Manifests (requirements.txt, pyproject.toml, manifest_file)
Updated dependency and manifest files (manifest and packaging metadata changes).
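
To make the branching concrete, here is a simplified sketch of the resolution described above (the function name and error messages are illustrative; the actual logic lives inline in LMMAgent.__init__):

import os

def resolve_engine_params(engine_type, engine_params):
    # Simplified view of the per-provider resolution in LMMAgent.__init__.
    if engine_type == "ollama":
        base_url = engine_params.get("base_url") or os.getenv("OLLAMA_HOST")
        if not base_url:
            raise ValueError("Set OLLAMA_HOST or pass base_url for Ollama.")
        if not base_url.rstrip("/").endswith("/v1"):
            base_url = base_url.rstrip("/") + "/v1"  # normalize endpoint suffix
        engine_params["base_url"] = base_url
        engine_params.setdefault("api_key", "ollama")  # local default key
    elif engine_type in ("deepseek", "qwen"):
        # These providers read *_ENDPOINT_URL / *_API_KEY env vars and
        # require an API key, raising a clear validation error otherwise.
        prefix = engine_type.upper()
        api_key = engine_params.get("api_key") or os.getenv(f"{prefix}_API_KEY")
        if not api_key:
            raise ValueError(f"{prefix}_API_KEY is required for {engine_type}.")
        engine_params["api_key"] = api_key
    return engine_params  # then: self.engine = LMMEngineOpenAI(**engine_params)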

Sequence Diagram(s)

(omitted)

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

🐰 I hopped through configs, keys, and hosts today,
Ollama, DeepSeek, Qwen came out to play,
Env vars whispered endpoints, trimmed to view,
Engines spun up steady, tests checked through,
I nibbled a carrot and then bounced away 🥕

🚥 Pre-merge checks: 2 passed, 3 failed

❌ Failed checks (2 warnings, 1 inconclusive)

  • Out of Scope Changes check (⚠️ Warning): The PR adds support for DeepSeek and Qwen providers beyond the stated scope of Ollama provider support, and includes general dependency/manifest changes not directly related to Ollama. Resolution: remove the DeepSeek and Qwen engine_type branches to keep this PR focused on Ollama; defer those features to separate PRs as suggested by reviewer feedback.
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 62.50%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.
  • Linked Issues check (❓ Inconclusive): The linked issue #44 contains no specific coding requirements or objectives, so the changes cannot be validated against stated goals. Resolution: examine issue #44 for specific requirements and verify the implementation aligns with them.

✅ Passed checks (2 passed)

  • Description check (✅ Passed): Check skipped; CodeRabbit's high-level summary is enabled.
  • Title check (✅ Passed): The title "feat: add Ollama provider support" directly matches the main changeset, which adds Ollama as a new LLM provider through extended engine_type handling in LMMAgent.__init__.



coderabbitai (bot) left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
gui_agents/s3/core/engine.py (1)

448-464: Initialization looks correct.

The __init__ method follows the pattern of other engine implementations. Note that request_interval is calculated but not used in the generate method—this is consistent with several other engine classes in the file (e.g., LMMEngineGemini, LMMEngineOpenRouter), suggesting it may be used externally or reserved for future rate limiting.

The **kwargs parameter is unused but matches the pattern across all engine classes for forward compatibility.
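
For illustration, the initialization pattern the comment describes looks roughly like this (a sketch with the base class omitted; the rate-limit arithmetic and parameter names are assumptions, not the file's exact code):

class LMMEngineOllama:
    def __init__(self, base_url=None, api_key=None, model=None, rate_limit=-1, **kwargs):
        self.base_url = base_url
        self.api_key = api_key
        self.model = model
        # Computed but not consumed by generate(); kept for parity with other
        # engines and possibly for external or future rate limiting.
        self.request_interval = 0 if rate_limit == -1 else 60.0 / rate_limit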

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2cb57fb and 432a9f4.

📒 Files selected for processing (2)
  • gui_agents/s3/core/engine.py
  • gui_agents/s3/core/mllm.py
🧰 Additional context used
🧬 Code graph analysis (1)
gui_agents/s3/core/mllm.py (1)
gui_agents/s3/core/engine.py (1)
  • LMMEngineOllama (448-491)
🪛 Ruff (0.14.10)

gui_agents/s3/core/engine.py
  • 456-456: Unused method argument: kwargs (ARG002)
  • 474-474: Unused method argument: kwargs (ARG002)

🔇 Additional comments (4)
gui_agents/s3/core/mllm.py (3)

9-9: LGTM!

The import is correctly added and follows the existing pattern.


39-40: LGTM!

The engine initialization follows the established pattern and is correctly integrated into the engine selection logic.


135-135: LGTM!

Correctly includes LMMEngineOllama in the API-style inference type check, consistent with Ollama's OpenAI-compatible API.

gui_agents/s3/core/engine.py (1)

476-480: Base URL handling is well-designed.

The fallback chain (instance attribute → OLLAMA_HOST env var → localhost default) and automatic /v1 suffix normalization provide good flexibility and user convenience.
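
A rough sketch of that chain (hypothetical helper name; the actual code in engine.py may differ in detail):

import os

def resolve_ollama_base_url(explicit=None):
    # Fallback chain: explicit value -> OLLAMA_HOST env var -> localhost default.
    base_url = explicit or os.getenv("OLLAMA_HOST") or "http://localhost:11434"
    # Normalize to the OpenAI-compatible /v1 suffix.
    if not base_url.rstrip("/").endswith("/v1"):
        base_url = base_url.rstrip("/") + "/v1"
    return base_url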

Comment thread gui_agents/s3/core/engine.py Outdated
This ensures consistency with other engine implementations (OpenAI, Gemini,
OpenRouter, etc.) and allows callers to pass additional parameters like
stop, top_p, etc.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Richard-Simular (Collaborator) commented
Hi, thanks for the contribution!

I believe for any OpenAI-compatible provider you can just initialize an LMMEngineOpenAI with a custom base_url:

engine = LMMEngineOpenAI(
    base_url="http://localhost:11434/v1",
    api_key="apikey",
    model="llama3.2"
)

Can you check whether this works for your use case? If so, I'd prefer to avoid adding additional boilerplate code.

import os
base_url = os.getenv("OLLAMA_HOST")
if base_url:
    if not base_url.endswith("/v1"):
        base_url = base_url.rstrip("/") + "/v1"  # completion assumed from the /v1 normalization described above
Richard-Simular (Collaborator) commented Jan 14, 2026

Thanks!
Instead of defaulting to localhost, raising an actionable error is better and easier for devs to debug.

Would you mind including DeepSeek and Qwen in this PR and closing #164?

I think we can check this in after the change and some tests.

@Richard-Simular Richard-Simular self-requested a review January 19, 2026 08:40
Richard-Simular (Collaborator) commented Jan 19, 2026

Thanks @majiayu000! Three items left before we can merge:

  1. DeepSeek appears OpenAI-compatible. If so, let’s treat it like the Ollama provider and reuse LMMEngineOpenAI (with DeepSeek defaults such as base_url and env vars) rather than adding a separate LMMEngineDeepSeek(LMMEngine) class; see the sketch after this comment.
  2. The tests you added look like smoke tests (mostly instance checks). I don’t think they’re strictly necessary right now—we can add more comprehensive tests later.
  3. Please run black locally (e.g. black gui_agents) so the automatic linter passes.

Ping me when it’s ready and I’ll take another look. If you’d like, I can also push fixes directly to this PR.
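
For item 1, a minimal sketch of that reuse (the endpoint and model name are assumptions based on DeepSeek's public OpenAI-compatible API, not code from this PR):

import os

from gui_agents.s3.core.engine import LMMEngineOpenAI

# DeepSeek's API is OpenAI-compatible, so the existing engine can point at it
# directly; DEEPSEEK_ENDPOINT_URL overrides the assumed public endpoint.
engine = LMMEngineOpenAI(
    base_url=os.getenv("DEEPSEEK_ENDPOINT_URL", "https://api.deepseek.com/v1"),
    api_key=os.environ["DEEPSEEK_API_KEY"],
    model="deepseek-chat",
)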

coderabbitai (bot) left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@gui_agents/s3/core/mllm.py`:
- Around line 38-94: The branch handling engine_type ==
"deepseek"/"qwen"/"ollama" should stop relying on "key" not in engine_params and
instead check falsy values (use not engine_params.get("api_key") / not
engine_params.get("base_url")), ensure DeepSeek's default base_url includes the
/v1 suffix, and normalize any env-provided base_url for DeepSeek and Qwen by
appending "/v1" when missing (same approach used for Ollama); update the checks
and normalization where engine_params is prepared before creating
LMMEngineOpenAI and keep assigning self.engine =
LMMEngineOpenAI(**engine_params).
🧹 Nitpick comments (1)
gui_agents/s3/core/mllm.py (1)

189-189: Remove duplicate LMMEngineParasail in the isinstance tuple (Line 189).
Redundant entry; no behavior change but reduces confusion.

🧹 Suggested cleanup
-                LMMEngineParasail,
                 LMMEngineParasail,

Comment thread gui_agents/s3/core/mllm.py
coderabbitai (bot) left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@tests/test_providers.py`:
- Around line 9-16: The setUp method in tests/test_providers.py only clears
OLLAMA_HOST, DEEPSEEK_API_KEY, and QWEN_API_KEY but misses endpoint environment
variables which can make tests flaky; update the setUp function to also remove
DEEPSEEK_ENDPOINT_URL and QWEN_ENDPOINT_URL from os.environ (check for their
presence like the others) so the tests use the expected defaults and remain
hermetic.
♻️ Duplicate comments (1)
gui_agents/s3/core/mllm.py (1)

38-66: Handle falsy base_url / api_key for Ollama + DeepSeek.

If CLI args supply base_url=None (key present), the branch is skipped and DeepSeek/Ollama can silently fall back to OpenAI defaults. Treat falsy values as missing (aligned with the Qwen branch).

Proposed fix
                 elif engine_type == "ollama":
                     # Reuse LMMEngineOpenAI for Ollama
-                    if "base_url" not in engine_params:
+                    if not engine_params.get("base_url"):
                         import os
@@
-                    if "api_key" not in engine_params:
+                    if not engine_params.get("api_key"):
                         engine_params["api_key"] = "ollama"
                     self.engine = LMMEngineOpenAI(**engine_params)
                 elif engine_type == "deepseek":
-                    if "base_url" not in engine_params:
+                    if not engine_params.get("base_url"):
                         import os

Comment thread tests/test_providers.py
majiayu000 (Author) commented

@Richard-Simular Thanks for the detailed feedback!


Development

Successfully merging this pull request may close these issues: Ollama support? (#44)