feat: add Ollama provider support #163
Add `LMMEngineOllama` class to support local Ollama models. Ollama exposes an OpenAI-compatible API with the default endpoint `http://localhost:11434/v1`.

Closes simular-ai#44

Signed-off-by: majiayu000 <1835304752@qq.com>
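For context, a minimal sketch of what such an engine class could look like, assuming the OpenAI Python SDK and a constructor pattern similar to the other engines (attribute and parameter names here are assumptions, not the merged code):

```python
# Sketch only: mirrors the PR description, not the exact merged implementation.
import os

from openai import OpenAI


class LMMEngineOllama:
    def __init__(self, base_url=None, api_key=None, model=None, **kwargs):
        self.model = model
        # Explicit arg -> OLLAMA_HOST env var -> localhost default.
        base_url = base_url or os.getenv("OLLAMA_HOST") or "http://localhost:11434/v1"
        # Ollama ignores the key, but the OpenAI client requires a non-empty one.
        self.llm_client = OpenAI(base_url=base_url, api_key=api_key or "ollama")

    def generate(self, messages, temperature=0.0, max_new_tokens=None, **kwargs):
        completion = self.llm_client.chat.completions.create(
            model=self.model,
            messages=messages,
            temperature=temperature,
            max_tokens=max_new_tokens or 4096,
        )
        return completion.choices[0].message.content
```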
📝 Walkthrough

Adds support for three new LLM engine providers—Ollama, DeepSeek, and Qwen—inside `gui_agents/s3/core/engine.py`, wired into the engine selection in `gui_agents/s3/core/mllm.py`.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: 2 passed, 3 failed (2 warnings, 1 inconclusive)
Actionable comments posted: 1
🧹 Nitpick comments (1)
gui_agents/s3/core/engine.py (1)
`448-464`: **Initialization looks correct.** The `__init__` method follows the pattern of other engine implementations. Note that `request_interval` is calculated but not used in the `generate` method—this is consistent with several other engine classes in the file (e.g., `LMMEngineGemini`, `LMMEngineOpenRouter`), suggesting it may be used externally or reserved for future rate limiting. The `**kwargs` parameter is unused but matches the pattern across all engine classes for forward compatibility.
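Illustratively, the shared pattern being described looks roughly like this (a sketch inferred from the comment; the `rate_limit` semantics are an assumption):

```python
class LMMEngineExample:  # hypothetical name, illustrating the shared pattern
    def __init__(self, rate_limit: int = -1, **kwargs):  # **kwargs unused, kept for forward compatibility
        # -1 disables rate limiting; otherwise derive the minimum seconds between
        # requests. Computed here but never consulted by generate(), per the review.
        self.request_interval = 0 if rate_limit == -1 else 60.0 / rate_limit
```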
📜 Review details
📒 Files selected for processing (2)
- `gui_agents/s3/core/engine.py`
- `gui_agents/s3/core/mllm.py`
🧰 Additional context used
🧬 Code graph analysis (1)
gui_agents/s3/core/mllm.py (1)
gui_agents/s3/core/engine.py (1)
`LMMEngineOllama` (448-491)
🪛 Ruff (0.14.10)
gui_agents/s3/core/engine.py
456-456: Unused method argument: kwargs
(ARG002)
474-474: Unused method argument: kwargs
(ARG002)
🔇 Additional comments (4)
gui_agents/s3/core/mllm.py (3)
`9-9`: LGTM! The import is correctly added and follows the existing pattern.
`39-40`: LGTM! The engine initialization follows the established pattern and is correctly integrated into the engine selection logic.
`135-135`: LGTM! Correctly includes `LMMEngineOllama` in the API-style inference type check, consistent with Ollama's OpenAI-compatible API.

gui_agents/s3/core/engine.py (1)
`476-480`: **Base URL handling is well-designed.** The fallback chain (instance attribute → `OLLAMA_HOST` env var → localhost default) and automatic `/v1` suffix normalization provide good flexibility and user convenience.
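As a standalone sketch of that fallback chain (the helper name is hypothetical):

```python
import os


def resolve_ollama_base_url(base_url=None):
    # Instance attribute -> OLLAMA_HOST env var -> localhost default, then /v1 normalization.
    url = base_url or os.getenv("OLLAMA_HOST") or "http://localhost:11434"
    return url if url.endswith("/v1") else url.rstrip("/") + "/v1"


# resolve_ollama_base_url()                       -> "http://localhost:11434/v1"
# resolve_ollama_base_url("http://gpu-box:11434") -> "http://gpu-box:11434/v1"
```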
This ensures consistency with other engine implementations (OpenAI, Gemini, OpenRouter, etc.) and allows callers to pass additional parameters such as `stop`, `top_p`, etc.

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Hi, thanks for the contribution! I believe for any OpenAI-compatible provider you can just init an `LMMEngineOpenAI` with a custom `base_url`. Can you check whether this works for your use case? If so, I'd prefer to avoid adding additional boilerplate code.
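For instance, something like the following (a hypothetical call; the constructor parameter names are assumptions based on this thread, not a confirmed signature):

```python
from gui_agents.s3.core.engine import LMMEngineOpenAI

engine = LMMEngineOpenAI(
    model="llama3.2-vision",
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="ollama",  # Ollama accepts any non-empty key
)
```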
```python
import os

base_url = os.getenv("OLLAMA_HOST")
if base_url:
    if not base_url.endswith("/v1"):
        base_url = base_url + "/v1"  # continuation inferred from the /v1 normalization discussed below
```
Thanks!

Instead of defaulting to localhost, raising an actionable error is better and easier for devs to debug (see the sketch after this comment).

Would you mind including DeepSeek and Qwen in this PR and closing #164?

I think we can check it in after the change and some tests.
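A sketch of the suggested error-raising behavior (the helper name is hypothetical; `engine_params` follows the dict used in `mllm.py`):

```python
import os


def require_ollama_base_url(engine_params: dict) -> str:
    # Fail fast with an actionable message instead of silently assuming localhost.
    base_url = engine_params.get("base_url") or os.getenv("OLLAMA_HOST")
    if not base_url:
        raise ValueError(
            "Ollama endpoint not configured: pass base_url (e.g. via --ground_url) "
            "or set the OLLAMA_HOST environment variable, e.g. http://localhost:11434"
        )
    return base_url
```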
Thanks @majiayu000! Three items left before we can merge:
Ping me when it’s ready and I’ll take another look. If you’d like, I can also push fixes directly to this PR.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@gui_agents/s3/core/mllm.py`:
- Around line 38-94: The branch handling `engine_type == "deepseek"/"qwen"/"ollama"` should stop relying on `"key" not in engine_params` and instead check falsy values (use `not engine_params.get("api_key")` / `not engine_params.get("base_url")`), ensure DeepSeek's default `base_url` includes the `/v1` suffix, and normalize any env-provided `base_url` for DeepSeek and Qwen by appending `"/v1"` when missing (the same approach used for Ollama); update the checks and normalization where `engine_params` is prepared before creating `LMMEngineOpenAI`, and keep assigning `self.engine = LMMEngineOpenAI(**engine_params)`.
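One way the DeepSeek branch could look after that change (a sketch; the function names and the default endpoint URL are assumptions, with the env var names taken from this thread):

```python
import os


def _with_v1_suffix(url: str) -> str:  # hypothetical helper
    return url if url.endswith("/v1") else url.rstrip("/") + "/v1"


def prepare_deepseek_params(engine_params: dict) -> dict:
    # Falsy checks instead of key-presence checks, per the instruction above.
    if not engine_params.get("base_url"):
        engine_params["base_url"] = os.getenv("DEEPSEEK_ENDPOINT_URL", "https://api.deepseek.com")
    engine_params["base_url"] = _with_v1_suffix(engine_params["base_url"])
    if not engine_params.get("api_key"):
        engine_params["api_key"] = os.getenv("DEEPSEEK_API_KEY")
    return engine_params
```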
🧹 Nitpick comments (1)
gui_agents/s3/core/mllm.py (1)
`189-189`: Remove the duplicate `LMMEngineParasail` in the `isinstance` tuple (Line 189). The entry is redundant; removing it causes no behavior change but reduces confusion.

🧹 Suggested cleanup

```diff
-    LMMEngineParasail, LMMEngineParasail,
+    LMMEngineParasail,
```
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@tests/test_providers.py`:
- Around line 9-16: The `setUp` method in `tests/test_providers.py` only clears `OLLAMA_HOST`, `DEEPSEEK_API_KEY`, and `QWEN_API_KEY` but misses the endpoint environment variables, which can make tests flaky; update `setUp` to also remove `DEEPSEEK_ENDPOINT_URL` and `QWEN_ENDPOINT_URL` from `os.environ` (checking for their presence like the others) so the tests use the expected defaults and remain hermetic.
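One way to implement that suggestion (a sketch of the `setUp` method):

```python
import os
import unittest


class TestProviders(unittest.TestCase):
    def setUp(self):
        # Remove every provider-related variable so tests exercise the documented
        # defaults and stay hermetic; pop(..., None) is a no-op when a var is absent.
        for var in (
            "OLLAMA_HOST",
            "DEEPSEEK_API_KEY",
            "DEEPSEEK_ENDPOINT_URL",
            "QWEN_API_KEY",
            "QWEN_ENDPOINT_URL",
        ):
            os.environ.pop(var, None)
```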
♻️ Duplicate comments (1)
gui_agents/s3/core/mllm.py (1)
`38-66`: **Handle falsy `base_url`/`api_key` for Ollama + DeepSeek.** If CLI args supply `base_url=None` (the key is present), the branch is skipped and DeepSeek/Ollama can silently fall back to OpenAI defaults. Treat falsy values as missing (aligned with the Qwen branch).

Proposed fix

```diff
 elif engine_type == "ollama":
     # Reuse LMMEngineOpenAI for Ollama
-    if "base_url" not in engine_params:
+    if not engine_params.get("base_url"):
         import os
@@
-    if "api_key" not in engine_params:
+    if not engine_params.get("api_key"):
         engine_params["api_key"] = "ollama"
     self.engine = LMMEngineOpenAI(**engine_params)
 elif engine_type == "deepseek":
-    if "base_url" not in engine_params:
+    if not engine_params.get("base_url"):
         import os
```
@Richard-Simular Thanks for the detailed feedback!
Summary
- Added `LMMEngineOllama` class to `engine.py` for local Ollama model support
- Default endpoint: `http://localhost:11434/v1`
- `OLLAMA_HOST` environment variable supported for custom endpoints

Usage
```bash
agent_s \
  --provider ollama \
  --model llama3.2-vision \
  --ground_provider ollama \
  --ground_url http://localhost:11434 \
  --ground_model llama3.2-vision \
  --grounding_width 1920 \
  --grounding_height 1080
```

Closes #44
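Before launching, you can sanity-check the local endpoint (assuming an Ollama version recent enough to expose the OpenAI-compatible `/v1` routes):

```bash
ollama pull llama3.2-vision             # make sure the model is available locally
curl http://localhost:11434/v1/models   # should list the pulled models
```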
Summary by CodeRabbit
New Features
Tests