Feat/add litellm plugin #546
📝 Walkthrough

This PR adds a new LiteLLM plugin for Vision Agents that provides a unified interface to 100+ LLM providers via the litellm SDK. The plugin implements `LiteLLMChatCompletions` as an `LLM` subclass with streaming and non-streaming response paths, event emission for request tracking and timing metrics, tool/function schema normalization into provider format, and exception handling for litellm error types. Supporting changes include package metadata, workspace registration, public exports, documentation, and structural tests.
Actionable comments posted: 6
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 22fc1df0-9a52-4c79-9688-da2fb08d2980
📒 Files selected for processing (7)
- plugins/litellm/README.md
- plugins/litellm/py.typed
- plugins/litellm/pyproject.toml
- plugins/litellm/tests/test_litellm_llm.py
- plugins/litellm/vision_agents/plugins/litellm/__init__.py
- plugins/litellm/vision_agents/plugins/litellm/litellm_llm.py
- pyproject.toml
```toml
license = "MIT"
dependencies = [
    "vision-agents",
    "litellm>=1.60.0,<2.0.0",
```
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
# Check PyPI for litellm versions and security advisories
curl -s https://pypi.org/pypi/litellm/json | jq -r '.releases | keys[]' | grep '^1\.60\.' | head -5

gh api graphql -f query='
{
  securityVulnerabilities(first: 5, ecosystem: PIP, package: "litellm") {
    nodes {
      advisory {
        summary
        severity
        publishedAt
      }
      vulnerableVersionRange
      firstPatchedVersion {
        identifier
      }
    }
  }
}'
```

Repository: GetStream/Vision-Agents
Length of output: 1354
Update the litellm constraint to exclude CRITICAL and HIGH severity vulnerabilities.
The constraint `litellm>=1.60.0,<2.0.0` allows vulnerable versions:
- CRITICAL: SQL Injection in Proxy API key verification (≥1.81.16, <1.83.7)
- HIGH: Authenticated command execution via MCP stdio (≥1.74.2, <1.83.7)
- HIGH: Server-Side Template Injection in /prompts/test (≥1.80.5, <1.83.7)
- HIGH: Sandbox escape in custom-code guardrail (≥1.81.8, <1.83.10)

Restrict the dependency to a patched version or narrow the range to exclude the affected versions.
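One possible fix, taking the advisories above at face value, is to raise the floor past the last patched version they mention (1.83.10); verify the exact patched release on PyPI before adopting this:

```toml
# Hypothetical narrowed constraint; 1.83.10 is taken from the advisory list above.
dependencies = [
    "vision-agents",
    "litellm>=1.83.10,<2.0.0",
]
```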
```python
# Diff excerpt, decoration stripped. The imports and PLUGIN_PATH definition
# below are assumptions: they are not shown in the excerpt but are required
# for the tests to run.
import ast
from pathlib import Path

import pytest

# Assumed location of the plugin module under test.
PLUGIN_PATH = (
    Path(__file__).resolve().parents[1]
    / "vision_agents" / "plugins" / "litellm" / "litellm_llm.py"
)


class TestLiteLLMPluginStructure:
    def _parse(self):
        return ast.parse(PLUGIN_PATH.read_text())

    def test_file_exists(self):
        assert PLUGIN_PATH.exists()

    def test_has_litellm_chat_completions_class(self):
        tree = self._parse()
        classes = [n.name for n in ast.walk(tree) if isinstance(n, ast.ClassDef)]
        assert "LiteLLMChatCompletions" in classes

    def test_inherits_llm(self):
        tree = self._parse()
        for node in ast.walk(tree):
            if isinstance(node, ast.ClassDef) and node.name == "LiteLLMChatCompletions":
                base_names = [b.id for b in node.bases if isinstance(b, ast.Name)]
                assert "LLM" in base_names
                return
        pytest.fail("LiteLLMChatCompletions not found")

    def test_has_simple_response(self):
        tree = self._parse()
        for node in ast.walk(tree):
            if isinstance(node, ast.ClassDef) and node.name == "LiteLLMChatCompletions":
                methods = [
                    n.name for n in node.body if isinstance(n, ast.AsyncFunctionDef)
                ]
                assert "simple_response" in methods
                assert "create_response" in methods
                return

    def test_has_streaming_handler(self):
        src = PLUGIN_PATH.read_text()
        assert "_handle_streaming" in src
        assert "_handle_non_streaming" in src

    def test_uses_drop_params_true(self):
        src = PLUGIN_PATH.read_text()
        assert '"drop_params": True' in src

    def test_uses_litellm_acompletion(self):
        src = PLUGIN_PATH.read_text()
        assert "litellm.acompletion" in src

    def test_emits_events(self):
        src = PLUGIN_PATH.read_text()
        assert "LLMRequestStartedEvent" in src
        assert "LLMResponseChunkEvent" in src
        assert "LLMResponseCompletedEvent" in src

    def test_plugin_name(self):
        src = PLUGIN_PATH.read_text()
        assert 'PLUGIN_NAME = "litellm"' in src

    def test_converts_tools_to_provider_format(self):
        src = PLUGIN_PATH.read_text()
        assert "_convert_tools_to_provider_format" in src

    def test_extracts_tool_calls(self):
        src = PLUGIN_PATH.read_text()
        assert "_extract_tool_calls_from_response" in src


class TestPluginPackage:
    def test_pyproject_exists(self):
        pyproject = Path(__file__).resolve().parents[1] / "pyproject.toml"
        assert pyproject.exists()

    def test_litellm_in_dependencies(self):
        pyproject = (Path(__file__).resolve().parents[1] / "pyproject.toml").read_text()
        assert "litellm" in pyproject

    def test_init_exports_class(self):
        init = (
            Path(__file__).resolve().parents[1]
            / "vision_agents"
            / "plugins"
            / "litellm"
            / "__init__.py"
        ).read_text()
        assert "LiteLLMChatCompletions" in init
```
🛠️ Refactor suggestion | 🟠 Major | 🏗️ Heavy lift
Replace structure checks with behavioral tests.
All tests verify code structure (AST parsing, string presence) rather than testing actual behavior. This violates the coding guideline: "ALWAYS test behavior, not calling a path. Assert on outputs and state, not method calls."
Tests should instantiate LiteLLMChatCompletions, call methods, and assert on outputs. For example:
- Test that `_convert_tools_to_provider_format` correctly transforms ToolSchema objects
- Test that `_extract_tool_calls_from_response` handles valid and invalid responses
- Test that event emission works (if events can be captured in tests)
Structure checks don't verify correctness and will pass even if the implementation is broken.
As per coding guidelines: "ALWAYS test behavior, not calling a path. Assert on outputs and state, not method calls."
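A behavioral test in this spirit might look like the sketch below. The converter body is a stand-in, not the plugin's actual implementation: the real `_convert_tools_to_provider_format` signature and the shape of ToolSchema objects are not shown in the diff, so plain dicts and an OpenAI-style tools output are assumptions here.

```python
def convert_tools_to_provider_format(tools):
    # Stand-in for LiteLLMChatCompletions._convert_tools_to_provider_format;
    # assumes tools are dicts and the provider format is OpenAI-style
    # {"type": "function", "function": {...}} entries.
    return [
        {
            "type": "function",
            "function": {
                "name": t["name"],
                "description": t.get("description", ""),
                "parameters": t.get("parameters", {"type": "object", "properties": {}}),
            },
        }
        for t in tools
    ]


def test_converter_emits_openai_style_entries():
    # Behavioral test: feed a concrete input and assert on the output shape,
    # rather than grepping the source for a method name.
    tools = [
        {
            "name": "get_weather",
            "description": "Look up weather",
            "parameters": {"type": "object", "properties": {"city": {"type": "string"}}},
        }
    ]
    result = convert_tools_to_provider_format(tools)
    assert result[0]["type"] == "function"
    assert result[0]["function"]["name"] == "get_weather"
    assert result[0]["function"]["parameters"]["properties"]["city"]["type"] == "string"
```

Unlike the string-presence checks, this test fails if the transformation logic regresses, even when the method still exists.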
Why
- Adds a `litellm` plugin package under `plugins/litellm/`, enabling access to 100+ LLM providers (OpenAI, Anthropic, Google, Azure, Bedrock, Ollama, etc.) via the litellm SDK
- Adds a `LiteLLMChatCompletions` class extending the `LLM` base with streaming, tool calling, and event emission
- Follows the same layout as `plugins/openai/`, `plugins/anthropic/`, etc.

Changes
- `plugins/litellm/pyproject.toml` - new plugin package `vision-agents-plugins-litellm` with `litellm>=1.60.0,<2.0.0` dependency
- `plugins/litellm/vision_agents/plugins/litellm/__init__.py` - exports `LiteLLMChatCompletions`
- `plugins/litellm/vision_agents/plugins/litellm/litellm_llm.py` - main LLM class with:
  - `simple_response()` for the standard agent loop
  - `create_response()` for direct usage
  - `_handle_streaming()` with chunk events and TTFT tracking
  - `_handle_non_streaming()` for non-stream completions
  - `_convert_tools_to_provider_format()` for function calling
  - `_extract_tool_calls_from_response()` for tool call parsing
  - `drop_params=True` for cross-provider kwargs compatibility
  - exception handling (`AuthenticationError`, `RateLimitError`, `APIConnectionError`, `APIError`)
  - events (`LLMRequestStartedEvent`, `LLMResponseChunkEvent`, `LLMResponseCompletedEvent`)
- `plugins/litellm/README.md` - usage docs
- `plugins/litellm/tests/test_litellm_llm.py` - 14 unit tests (all passing)
- `pyproject.toml` (root) - registered `plugins/litellm` in workspace members and dev dependencies

Tests
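The streaming path with chunk events and TTFT (time-to-first-token) tracking can be sketched roughly as below. This is an illustration of the pattern only: `handle_streaming`, the `emit` callback, and the dict-shaped events are hypothetical stand-ins for the plugin's `_handle_streaming` and its typed event classes.

```python
import time


async def handle_streaming(stream, emit):
    """Consume an async stream of text chunks, emitting a chunk event per
    piece and a completed event carrying the full text and TTFT."""
    start = time.monotonic()
    ttft = None
    text_parts = []
    async for chunk in stream:
        if ttft is None:
            # Latency until the first chunk arrives.
            ttft = time.monotonic() - start
        text_parts.append(chunk)
        emit({"type": "chunk", "delta": chunk})
    emit({"type": "completed", "text": "".join(text_parts), "ttft_s": ttft})
```

The key design point mirrored here is that TTFT is captured once, on the first chunk, while the full response text is accumulated for the final completed event.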
Unit tests (14/14 passing):
Lint (ruff):
Live E2E (streaming + non-streaming against real API):
Example usage
See https://docs.litellm.ai/docs/providers for 100+ supported model strings.
Risk / Compatibility
- New optional plugin package (`vision-agents-plugins-litellm`); does not affect the base install
- `drop_params=True` silently drops provider-unsupported kwargs
- litellm-specific errors are handled, with unrecognized errors falling back to `Exception`