feat: Add targeted operations agent server #99
adamoutler wants to merge 13 commits into makeplane:main from
Conversation
Important: Review skipped (draft detected). Please check the settings in the CodeRabbit UI or the ⚙️ Run configuration. Configuration used: defaults. Review profile: CHILL. Plan: Pro.
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
📝 Walkthrough

Adds a new "Journey" subsystem (MCP servers, journey tools, LOD filtering, resolver caching, sanitization), generated MCP tool proxies, CI/workflow and docker-compose entries, many new tests, and runtime deps for markdown/HTML sanitization.
Sequence Diagram(s)

sequenceDiagram
actor User
participant Client as MCP Client
participant MCP as Plane MCP<br/>Journey Server
participant Resolver as Entity<br/>Resolver
participant Cache as Workspace<br/>Context Cache
participant API as Plane API
User->>Client: search_tickets(project_slug, query, filters)
Client->>MCP: MCP tool call: search_tickets
MCP->>Resolver: resolve_project(slug)
Resolver->>Cache: check project cache
alt cache hit
Cache-->>Resolver: project_id
else
Resolver->>API: GET /projects
API-->>Resolver: projects
Resolver->>Cache: store project_id
end
Resolver-->>MCP: project_id
MCP->>API: GET /work_items?project_id=...
API-->>MCP: paginated work_items
MCP->>Resolver: resolve_state(s) as needed
Resolver->>Cache: check state cache
alt state cache miss
Resolver->>API: GET /states?project_id=...
API-->>Resolver: states
Resolver->>Cache: store states
end
MCP->>MCP: apply_lod(transform, markdownify)
MCP-->>Client: { results, next_cursor, warnings }
Client-->>User: formatted results
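The cache-aside lookup in the diagram above can be sketched as follows. The class and method names here (ProjectResolver, list_projects) are illustrative only, not the PR's actual EntityResolver/WorkspaceContextCache API:

```python
# Minimal cache-aside sketch of the resolve_project flow shown above.
# Assumption: the API client exposes a list_projects() returning dicts
# with "slug" and "id"; the real PR code may differ.
class ProjectResolver:
    def __init__(self, api_client):
        self.api = api_client
        self._cache: dict[str, str] = {}  # slug -> project_id

    def resolve_project(self, slug: str) -> str:
        # Cache hit: return immediately without touching the API.
        if slug in self._cache:
            return self._cache[slug]
        # Cache miss: fetch the project list once and store every slug.
        for project in self.api.list_projects():
            self._cache[project["slug"]] = project["id"]
        try:
            return self._cache[slug]
        except KeyError:
            raise ValueError(f"Project '{slug}' not found") from None
```

A second resolve of the same slug never hits the API again, which is exactly the alt/else split in the diagram.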
sequenceDiagram
actor User
participant Client as MCP Client
participant MCP as Plane MCP<br/>Journey Server
participant Resolver as Entity<br/>Resolver
participant API as Plane API
User->>Client: create_ticket(title, project_slug, state, labels, cycle)
Client->>MCP: MCP tool call: create_ticket
MCP->>Resolver: resolve_project(slug)
Resolver->>API: GET /projects
API-->>Resolver: project_id
Resolver-->>MCP: project_id
MCP->>Resolver: resolve_state(project_id, state_name)
Resolver->>API: GET /states?project_id=...
API-->>Resolver: state_id
MCP->>API: GET /labels
API-->>MCP: existing labels
MCP->>API: POST /labels (create missing up to cap)
API-->>MCP: label_ids
alt cycle provided
MCP->>API: GET /cycles
API-->>MCP: cycles
alt not found
MCP->>API: POST /cycles
API-->>MCP: cycle_id
end
MCP->>API: POST /cycles/add_work_items
API-->>MCP: success
end
MCP->>API: POST /work_items {payload}
API-->>MCP: created work_item {sequence_id}
MCP-->>Client: { status: "success", ticket_id: "PROJ-123" }
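The label step in the create_ticket flow above (fetch existing labels, then create missing ones up to a cap) can be sketched like this. The client methods and the cap value are assumptions for illustration, not the PR's actual API:

```python
# Hedged sketch of "POST /labels (create missing up to cap)" from the
# diagram above; client.list_labels/create_label are hypothetical names.
MAX_NEW_LABELS = 5  # assumed cap; the PR's actual limit may differ

def get_or_create_label_ids(client, project_id: str, wanted: list[str]) -> list[str]:
    # Fetch existing labels once and index them case-insensitively.
    existing = {label["name"].lower(): label["id"]
                for label in client.list_labels(project_id)}
    ids, created = [], 0
    for name in wanted:
        key = name.lower()
        if key in existing:
            ids.append(existing[key])
        elif created < MAX_NEW_LABELS:
            # Create each missing label, but never more than the cap.
            new = client.create_label(project_id, name)
            existing[key] = new["id"]
            ids.append(new["id"])
            created += 1
    return ids
```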
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 15
Note
Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.
🟡 Minor comments (8)
plane_mcp/journey/cache.py-64-68 (1)
64-68: ⚠️ Potential issue | 🟡 Minor: Inconsistent error handling: returns error dict instead of raising.
When the API fetch fails, this returns {"error": "..."} instead of raising an exception. Callers may not check for the error key, leading to silent failures. Consider raising the exception, or at least documenting this behavior clearly.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plane_mcp/journey/cache.py` around lines 64 - 68, The except block in get_workspace_context currently logs the error and returns a dict {"error": ...}, which can be missed by callers; change this to raise a clear exception instead (e.g., raise RuntimeError(f"Could not connect to Plane API: {e}. Check PLANE_API_KEY and PLANE_BASE_URL.") from e) so callers must handle the failure; keep the logging line if you want the message recorded, but remove the dict return and re-raise the error (or raise a specific custom exception) so the failure isn't silently ignored when using the context variable.

plane_mcp/resolver.py-150-156 (1)
150-156: ⚠️ Potential issue | 🟡 Minor: Fix indentation at line 152.
There appears to be an extra leading space at line 152, causing inconsistent indentation within the except block.
Suggested fix
  except HttpError as e:
      if getattr(e, "status_code", None) == 404:
-         raise EntityResolutionError(
-             f"Ticket '{ticket_id}' not found.",
-             available_options=[]
-         )
+         raise EntityResolutionError(
+             f"Ticket '{ticket_id}' not found.",
+             available_options=[]
+         ) from e
-     raise RuntimeError(f"Failed to retrieve ticket '{ticket_id}': {str(e)}")
+     raise RuntimeError(f"Failed to retrieve ticket '{ticket_id}': {e!s}") from e

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plane_mcp/resolver.py` around lines 150 - 156, The except HttpError block has inconsistent indentation on the getattr check (an extra leading space before "if getattr(e, \"status_code\", None) == 404:"); fix by aligning that line with the other statements in the except block so the if, raise EntityResolutionError, and subsequent raise RuntimeError are consistently indented inside the except HttpError handler in resolver.py (look for the except HttpError as e block around ticket retrieval).

plane_mcp/journey/cache.py-15-19 (1)
15-19: ⚠️ Potential issue | 🟡 Minor: Silent exception swallowing hides cache corruption.
The except Exception: pass silently ignores JSON parse errors or file read failures. At minimum, log the exception to aid debugging when cache reads fail unexpectedly.

Suggested fix
  try:
      with open(cache_file, "r") as f:
          context = json.load(f)
- except Exception:
-     pass
+ except Exception as e:
+     logging.getLogger(__name__).debug(f"Cache read failed, will refresh: {e}")

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plane_mcp/journey/cache.py` around lines 15 - 19, The code block that reads the cache (using cache_file and json.load into context) currently swallows all exceptions with "except Exception: pass"; change this to catch the exception as e and log it (e.g., using a module logger via logging.getLogger(__name__) and logger.exception or logger.warning) including the cache_file path and the error message, then continue to preserve the current fallback behavior (i.e., keep context as empty/default).

tests/test_generated_tools.py-4-13 (1)
4-13: ⚠️ Potential issue | 🟡 Minor: Make this test fail when the generator drops or renames a target function.
check_file_for_uuid() only validates annotations if it happens to encounter one of the hard-coded names. If a generated function is renamed or omitted, these tests still go green because nothing asserts that the expected functions were actually seen.

One way to close the false-negative gap
 def check_file_for_uuid(filepath):
     with open(filepath, "r") as f:
         tree = ast.parse(f.read())
-
+
+    expected_functions = {
+        "get_project",
+        "list_issues",
+        "create_issue_raw",
+        "update_issue_raw",
+        "delete_issue",
+        "list_states",
+        "create_state",
+    }
+    seen_functions = set()
+
     for node in ast.walk(tree):
-        if isinstance(node, ast.FunctionDef) and node.name in ('get_project', 'list_issues', 'create_issue_raw', 'update_issue_raw', 'delete_issue', 'list_states', 'create_state'):
+        if isinstance(node, ast.FunctionDef) and node.name in expected_functions:
+            seen_functions.add(node.name)
             for arg in node.args.args:
                 if arg.arg.endswith('_id'):
+                    assert arg.annotation is not None, f"{arg.arg} in {node.name} is missing an annotation!"
                     assert ast.unparse(arg.annotation) == "uuid.UUID", f"{arg.arg} in {node.name} is not UUID!"
+
+    assert seen_functions == expected_functions, f"Missing generated functions: {sorted(expected_functions - seen_functions)}"

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_generated_tools.py` around lines 4 - 13, The test currently only validates annotations when it happens to encounter one of the hard-coded function names in check_file_for_uuid; update check_file_for_uuid to track which expected functions were seen and fail if any are missing or renamed. Specifically, define an expected_functions set containing 'get_project', 'list_issues', 'create_issue_raw', 'update_issue_raw', 'delete_issue', 'list_states', 'create_state', collect seen_functions when iterating ast.FunctionDef nodes, perform the existing UUID-annotation checks for those seen functions (as currently done), and after walking the tree assert that expected_functions is a subset of seen_functions (or assert equality) with a clear message listing any missing function names so the test fails if the generator dropped or renamed a target function.

plane_mcp/journey/__main__.py-19-22 (1)
19-22: ⚠️ Potential issue | 🟡 Minor: SSE mode declared but not implemented.
ServerMode.SSE is defined in the enum, and combined_lifespan accepts an sse_app parameter, but the main() function has no handling for SSE mode. This will cause ServerMode(sys.argv[1]) to accept "sse" as valid input but then fall through without any action.

Either implement SSE mode or remove it from the enum to avoid silent no-ops.
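If SSE is kept, one minimal way to close the gap is an explicit mode-to-transport mapping so every declared mode has a handler. This is a hedged sketch only; whether FastMCP's http_app() accepts transport="sse" the same way it accepts "streamable-http" is an assumption to verify against the actual wiring in main():

```python
from enum import Enum

class ServerMode(Enum):
    STDIO = "stdio"
    HTTP = "http"
    SSE = "sse"

def pick_transport(mode: ServerMode) -> str:
    # Every declared mode maps to an explicit transport, so "sse"
    # can no longer parse successfully and then silently do nothing.
    mapping = {
        ServerMode.STDIO: "stdio",
        ServerMode.HTTP: "streamable-http",
        ServerMode.SSE: "sse",
    }
    return mapping[mode]

# Hypothetical use in main():
#   app = mcp.http_app(transport=pick_transport(server_mode))
```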
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plane_mcp/journey/__main__.py` around lines 19 - 22, The enum ServerMode includes SSE but main() doesn't handle it; either remove ServerMode.SSE or add handling: update main() to accept ServerMode.SSE, create the sse_app via combined_lifespan(sse_app=...) (matching the combined_lifespan signature) and start the SSE server (or await/run the ASGI app) analogous to how HTTP and STDIO are handled; reference ServerMode, combined_lifespan, and main() so you add the branch that constructs and runs the sse_app (or delete ServerMode.SSE and any sse_app parameter usage if you choose removal).

plane_mcp/journey/__main__.py-71-71 (1)
71-71: ⚠️ Potential issue | 🟡 Minor: Log message references incorrect path.
The log states the server is available at /mcp, but the app is mounted at / (line 57). Update the message to reflect the actual endpoint.

🔧 Proposed fix
- logger.info(f"Starting HTTP server for Streamable HTTP at /mcp on port {port}")
+ logger.info(f"Starting HTTP server for Streamable HTTP at / on port {port}")

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plane_mcp/journey/__main__.py` at line 71, The log message in the startup code uses logger.info and incorrectly states the server is available at "/mcp" while the FastAPI/ASGI app is mounted at "/" (see the app mounting code and the logger.info call). Update the logger.info call to reference the actual endpoint used by the app ("/") and keep the rest of the message (port variable) unchanged so it accurately reports "Starting HTTP server for Streamable HTTP at / on port {port}".

plane_mcp/journey/tools/create_update.py-25-25 (1)
25-25: ⚠️ Potential issue | 🟡 Minor: Rename l to label to avoid Ruff E741 violation.
The dict comprehension uses the ambiguous single-letter variable l, which Ruff flags under rule E741. Rename it to label for clarity.

Fix

- name_to_id = {l.name.lower(): l.id for l in existing if l.name}
+ name_to_id = {label.name.lower(): label.id for label in existing if label.name}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plane_mcp/journey/tools/create_update.py` at line 25, Rename the single-letter loop variable in the dict comprehension that builds name_to_id to avoid Ruff E741: change the comprehension "{l.name.lower(): l.id for l in existing if l.name}" to use a descriptive variable name like "label" (e.g., "{label.name.lower(): label.id for label in existing if label.name}") so the symbol reference is updated for the name_to_id construction involving the existing iterable.

tests/test_journey_methods_comprehensive.py-200-200 (1)
200-200: ⚠️ Potential issue | 🟡 Minor: Split these chained mock statements so Ruff E702 passes.
Multiple statements on single lines separated by semicolons prevent the test file from passing linting checks. This blocks CI before the test assertions run.
Example cleanup
- mock_state_resp = MagicMock(); mock_state_resp.results = [mock_state]; mock_client.states.list.return_value = mock_state_resp
+ mock_state_resp = MagicMock()
+ mock_state_resp.results = [mock_state]
+ mock_client.states.list.return_value = mock_state_resp

Lines 200, 203, 209, 274, 306, 314, 354, 390, 415, 432 each have 2–3 chained statements separated by semicolons.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_journey_methods_comprehensive.py` at line 200, The test file contains chained statements on single lines (e.g., "mock_state_resp = MagicMock(); mock_state_resp.results = [mock_state]; mock_client.states.list.return_value = mock_state_resp") which triggers Ruff E702; fix by splitting each semicolon-separated statement into its own line: create the MagicMock assignment, then set its .results attribute on the next line, then assign it to mock_client.states.list.return_value on a following line. Apply the same split for the other occurrences referenced in the comment (lines with similar chained mocks such as those around mock_state_resp, mock_state, and mock_client.states.list.return_value) so each statement is on its own line.
🧹 Nitpick comments (21)
tests/test_integration.py (1)
1-10: Consider moving the import after the module docstring.
The import pytest statement appears before the module docstring (lines 3-10). While syntactically valid, Python convention places the module docstring first. This is a minor style preference.

Suggested reordering
+"""
+Simple integration test for Plane MCP Server.
+
+Environment Variables Required:
+    PLANE_TEST_API_KEY: API key for authentication
+    PLANE_TEST_WORKSPACE_SLUG: Workspace slug for testing
+    PLANE_TEST_MCP_URL: MCP server URL (default: http://localhost:8211)
+"""
+
 import pytest
-
-"""
-Simple integration test for Plane MCP Server.
-
-Environment Variables Required:
-    PLANE_TEST_API_KEY: API key for authentication
-    PLANE_TEST_WORKSPACE_SLUG: Workspace slug for testing
-    PLANE_TEST_MCP_URL: MCP server URL (default: http://localhost:8211)
-"""
 import asyncio

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_integration.py` around lines 1 - 10, Move the module docstring to the very top of the file and place the import statement for pytest immediately after it; specifically relocate the existing "import pytest" so the module-level triple-quoted string (the docstring describing environment variables) is the first item in the file, followed by the "import pytest" line to follow Python convention.

plane_mcp/tools/journeys.py (1)
66-67: Chain exceptions with raise ... from e.
When re-raising as a different exception type, chain it to preserve the original traceback for debugging.
Suggested fix
  except Exception as e:
-     raise ValueError(f"Could not find ticket {project_identifier}-{issue_sequence}: {str(e)}")
+     raise ValueError(f"Could not find ticket {project_identifier}-{issue_sequence}: {e!s}") from e

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plane_mcp/tools/journeys.py` around lines 66 - 67, The except block currently re-raises a ValueError without chaining the original exception (the line raising ValueError(f"Could not find ticket {project_identifier}-{issue_sequence}: {str(e)}")), so update that raise to chain the original exception using "from e" (i.e., raise ValueError(... ) from e) to preserve the original traceback and diagnostics while keeping the existing message and variables project_identifier and issue_sequence.

plane_mcp/journey/cache.py (1)
44-48: Rename ambiguous variable l to improve readability.
The variable l (lowercase L) is easily confused with 1 (digit one). Static analysis flagged this at lines 46, 51, and 99. Use a more descriptive name.

Suggested fix

  try:
-     l_res = ctx.client.labels.list(workspace_slug=ctx.workspace_slug, project_id=p.id)
-     labels_by_project[p.identifier] = [l.name for l in l_res.results if l.name]
+     label_res = ctx.client.labels.list(workspace_slug=ctx.workspace_slug, project_id=p.id)
+     labels_by_project[p.identifier] = [label.name for label in label_res.results if label.name]

And at line 51:

- all_labels = sorted({l for ll in labels_by_project.values() for l in ll})
+ all_labels = sorted({label for label_list in labels_by_project.values() for label in label_list})

And at line 99:

- return ", ".join([f"'{l}'" for l in labels])
+ return ", ".join([f"'{label}'" for label in labels])

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plane_mcp/journey/cache.py` around lines 44 - 48, Rename the ambiguous loop/iteration variable `l` to a clearer name like `label` (or `lbl`) wherever it's used in this file (notably in the list comprehension inside the try block that builds labels_by_project: the expression currently using `l.name` and any other occurrences flagged around the same area), update all references (e.g., `l.name`) to `label.name` to improve readability and avoid confusion with the digit one, and run tests/lint to ensure no other references remain.

plane_mcp/resolver.py (3)
16-16: Misplaced import statement.
The import time at line 16 is placed after the class definition (lines 9-13). Move it to the top of the file with other imports for consistency.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plane_mcp/resolver.py` at line 16, The import statement "import time" is misplaced below the class definition in resolver.py; move the "import time" line up into the top-level imports block alongside the other imports in plane_mcp/resolver.py so all imports are grouped before the class (the class definition around lines 9-13) for consistency and style.
72-75: Chain exceptions to preserve traceback.
When catching exceptions and re-raising as a different type, use raise ... from e to preserve the original traceback. This applies to lines 75, 114, 132, 156, and 160.

Example fix for line 75
  except Exception as e:
      if isinstance(e, EntityResolutionError):
          raise
-     raise RuntimeError(f"Failed to fetch projects: {str(e)}")
+     raise RuntimeError(f"Failed to fetch projects: {e!s}") from e

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plane_mcp/resolver.py` around lines 72 - 75, The except blocks that catch Exception and re-raise a RuntimeError should chain the original exception to preserve traceback; replace instances like raise RuntimeError(f"Failed to fetch projects: {str(e)}") with raise RuntimeError(f"Failed to fetch projects: {str(e)}") from e (and do the same for the other RuntimeError re-raises present in the file), keeping the special-case check if isinstance(e, EntityResolutionError): raise as-is so that EntityResolutionError is re-raised unmodified.
3-3: Use Python 3.10+ type syntax per coding guidelines.
The coding guidelines specify: "Use Python 3.10+ union syntax (str | None) instead of Union types". Replace Optional[List[str]] with list[str] | None.

Suggested fix
-from typing import Dict, List, Optional
+from typing import TYPE_CHECKING

And at line 11:

- def __init__(self, message: str, available_options: Optional[List[str]] = None):
+ def __init__(self, message: str, available_options: list[str] | None = None):

As per coding guidelines: "Use Python 3.10+ union syntax (str | None) instead of Union types".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plane_mcp/resolver.py` at line 3, Replace occurrences of Optional[List[str]] with the Python 3.10+ union syntax list[str] | None (e.g., change any type annotations using Optional[List[str]] to list[str] | None), and update the imports in resolver.py by removing List and Optional from the typing import (keep Dict or convert to built-in dict[...] if you prefer) so there are no unused typing imports; locate usages by searching for Optional[List[str]] and the typing import line.

plane_mcp/tools/generated_core.py (2)
26-41: Redundant .format() call after f-string.
The f-string already interpolates {workspace_slug}, making the subsequent .format(workspace_slug=workspace_slug) redundant. This pattern repeats throughout all tools in this file.

Example fix for get_workspace
- url = f"{client.config.base_path}{client.projects.base_path}".replace("/workspaces", "") + f"/api/v1/workspaces/{workspace_slug}/".format(workspace_slug=workspace_slug)
+ url = f"{client.config.base_path}{client.projects.base_path}".replace("/workspaces", "") + f"/api/v1/workspaces/{workspace_slug}/"

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plane_mcp/tools/generated_core.py` around lines 26 - 41, The URL construction uses an f-string and then redundantly calls .format(workspace_slug=workspace_slug) — remove the trailing .format(...) so the URL uses only the f-string interpolation (e.g., the line building url near get_plane_client_context()/get_workspace); search for and eliminate the same redundant .format(...) calls throughout this file (generated_core.py) wherever f-strings already include {workspace_slug} or other placeholders to avoid double interpolation and potential runtime errors.
12-42: Extract common URL/header construction to reduce duplication.
Every tool repeats the same URL prefix construction, header building, and auth logic. Consider extracting this into a helper function to improve maintainability and reduce ~15 lines of boilerplate per tool.
Suggested helper extraction
def _build_request(client, workspace_slug: str, path: str) -> tuple[str, dict]:
    """Build URL and headers for Plane API request."""
    base = f"{client.config.base_path}{client.projects.base_path}".replace("/workspaces", "")
    url = f"{base}/api/v1/workspaces/{workspace_slug}{path}"
    headers = {"Content-Type": "application/json", "Accept": "application/json"}
    if client.config.access_token:
        headers["Authorization"] = f"Bearer {client.config.access_token}"
    elif client.config.api_key:
        headers["x-api-key"] = client.config.api_key
    return url, headers

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plane_mcp/tools/generated_core.py` around lines 12 - 42, Extract the duplicated URL and header construction into a helper (e.g., _build_request(client, workspace_slug: str, path: str) -> tuple[str, dict]) and use it inside register_core_generated_tools' get_workspace to build the request URL and headers; specifically, move the logic that composes base = f"{client.config.base_path}{client.projects.base_path}".replace("/workspaces", ""), the workspace path assembly ("/api/v1/workspaces/{workspace_slug}/" or using a path param), and the auth header selection (access_token -> "Authorization": f"Bearer ...", elif api_key -> "x-api-key") into _build_request, return (url, headers), and replace the inline construction in get_workspace with a call to _build_request(client, workspace_slug, "/") and then call requests.request using the returned url and headers.

plane_mcp/tools/generated_metadata.py (1)
12-42: Same refactoring opportunities as generated_core.py.
This file exhibits the same patterns flagged in generated_core.py:
- Redundant .format() calls after f-strings (e.g., line 29)
- Repeated URL/header construction that could be extracted to a shared helper
- Returns raw dict[str, Any] instead of Pydantic models per coding guidelines

Consider creating a shared utility module (e.g., plane_mcp/tools/_http_utils.py) used by both generated files to eliminate the ~200 lines of duplicated boilerplate.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plane_mcp/tools/generated_metadata.py` around lines 12 - 42, The list_states function in register_metadata_generated_tools repeats the same problematic patterns: remove the redundant .format() after the f-string (in the URL construction), extract URL and header building into a shared helper (e.g., create helpers in a new plane_mcp/tools/_http_utils.py such as build_api_url(client, workspace_slug, path_suffix) and build_auth_headers(client)), and change the return type from raw dict[str, Any] to a Pydantic model (or parse the response into an existing model) per coding guidelines; update register_metadata_generated_tools/list_states to call the new helpers and return the parsed model instead of raw JSON.

plane_mcp/journey/tools/__init__.py (1)
3-6: Separate third-party and local imports so Ruff I passes.
from fastmcp import FastMCP and the relative imports are in the same import block, which will trip the import-sorting rule on plane_mcp/**/*.py. As per coding guidelines, "Lint Python code with rules: E, F, I, UP, B and line length: 120."

Small cleanup
 from fastmcp import FastMCP
-from .read import register_read_tools
+
 from .create_update import register_create_update_tools
+from .read import register_read_tools
 from .workflow import register_workflow_tools

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plane_mcp/journey/tools/__init__.py` around lines 3 - 6, The imports mix a third-party import with local relative imports which fails Ruff's import-sorting rule; split them into two blocks so third-party imports come first. Move the line importing FastMCP (symbol: FastMCP) into its own top import group and keep the relative imports (register_read_tools, register_create_update_tools, register_workflow_tools) in a separate group below to satisfy Ruff's "I" rule and the project's linting rules.

plane_mcp/journey/__main__.py (2)
51-53: Misleading variable naming: sse_mcp used for HTTP mode.
The variable is named sse_mcp and sse_app, but this code path handles HTTP mode with streamable-http transport, not SSE. This naming mismatch can confuse maintainers.

♻️ Suggested rename for clarity
  if server_mode == ServerMode.HTTP:
-     sse_mcp = get_stdio_mcp()
-     sse_app = sse_mcp.http_app(transport="streamable-http")
+     http_mcp = get_stdio_mcp()
+     http_app = http_mcp.http_app(transport="streamable-http")
      app = Starlette(
          routes=[
-             Mount("/", app=sse_app),
+             Mount("/", app=http_app),
          ],
-         lifespan=lambda app: sse_app.lifespan(sse_app),
+         lifespan=lambda app: http_app.lifespan(http_app),
      )

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plane_mcp/journey/__main__.py` around lines 51 - 53, The variables sse_mcp and sse_app are misleading in the HTTP branch (server_mode == ServerMode.HTTP); rename them to reflect HTTP/streamable transport (e.g., stdio_mcp or http_mcp and http_app or streamable_app) and update all uses accordingly where get_stdio_mcp() is called and where http_app(transport="streamable-http") is assigned so the names match the HTTP path and avoid SSE confusion.
38-39: Consider user-friendly error handling for invalid server mode.
If an invalid mode is passed, ServerMode(sys.argv[1]) raises a generic ValueError. A clearer error message would improve usability.

♻️ Suggested improvement
  if len(sys.argv) > 1:
-     server_mode = ServerMode(sys.argv[1])
+     try:
+         server_mode = ServerMode(sys.argv[1])
+     except ValueError:
+         valid_modes = ", ".join(m.value for m in ServerMode)
+         raise ValueError(f"Invalid server mode '{sys.argv[1]}'. Valid modes: {valid_modes}")

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plane_mcp/journey/__main__.py` around lines 38 - 39, Wrap the conversion ServerMode(sys.argv[1]) in a try/except to catch ValueError and print a clear, user-friendly message (including the invalid value and listing valid ServerMode options) before exiting; update the logic around server_mode in __main__.py (the ServerMode and server_mode usage) to handle the invalid input path gracefully (log/print the message and call sys.exit(2) or similar) so callers see a descriptive error instead of a raw traceback.

plane_mcp/journey/lod.py (3)
107-108: Rename ambiguous loop variable l.
Single-letter l is easily confused with 1 (one). Use a descriptive name.

♻️ Proposed fix
  if "labels" in data and isinstance(data["labels"], list):
-     result["labels"] = [l.get("name") if isinstance(l, dict) and "name" in l else l for l in data["labels"]]
+     result["labels"] = [
+         label.get("name") if isinstance(label, dict) and "name" in label else label
+         for label in data["labels"]
+     ]

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plane_mcp/journey/lod.py` around lines 107 - 108, The list comprehension that builds result["labels"] uses a single-letter loop variable `l`, which is ambiguous; change it to a descriptive name (for example `label` or `label_obj`) in the comprehension over data["labels"] and update the conditional checks accordingly so the expression becomes something like: iterate as `for label in data["labels"]` and use `isinstance(label, dict) and "name" in label` and `label.get("name")` to produce the same output with a clear variable name.
41-43: UUID detection heuristic is fragile.
The check len(state_str) == 36 and "-" in state_str can match non-UUIDs. Consider using a regex or the uuid module for validation.

♻️ Suggested improvement
+import re
+
+_UUID_PATTERN = re.compile(r'^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$', re.IGNORECASE)
+
 def _hydrate_state(data: Dict[str, Any], project_identifier: Optional[str] = None) -> None:
     """If state is a raw UUID, attempt to hydrate its name."""
     state_val = data.get("state")
     if state_val:
         state_str = str(state_val)
-        if len(state_str) == 36 and "-" in state_str:
+        if _UUID_PATTERN.match(state_str):

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plane_mcp/journey/lod.py` around lines 41 - 43, The UUID detection using `if len(state_str) == 36 and "-" in state_str` is fragile; replace that heuristic in the block handling `state_val`/`state_str` with a proper validation using the uuid module (import uuid) and attempt to construct `uuid.UUID(state_str)` inside a try/except to confirm it’s a valid UUID, falling back to the non-UUID path on ValueError; update any logic in the same scope that branches based on this check (the variables `state_val` and `state_str`) to use the result of the UUID validation instead of the length-and-dash test.
60-61: Silent exception swallowing hides bugs.
The bare except Exception: pass silently discards all errors during state hydration. This could mask configuration issues, network failures, or bugs. At minimum, log the exception.

♻️ Proposed fix
+import logging
+
+logger = logging.getLogger(__name__)
+
 def _hydrate_state(data: Dict[str, Any], project_identifier: Optional[str] = None) -> None:
     """If state is a raw UUID, attempt to hydrate its name."""
     state_val = data.get("state")
     if state_val:
         state_str = str(state_val)
         if len(state_str) == 36 and "-" in state_str:
             try:
                 client, workspace_slug = get_plane_client_context()
                 resolver = EntityResolver(client, workspace_slug)
                 # ... existing code ...
-            except Exception:
-                pass
+            except Exception as e:
+                logger.debug("Could not hydrate state %s: %s", state_str, e)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plane_mcp/journey/lod.py` around lines 60 - 61, Replace the silent "except Exception: pass" inside the state hydration try/except block in plane_mcp/journey/lod.py with proper error handling: catch the exception as "except Exception as e:" and log it (e.g., logger.exception("Error hydrating state") or logger.error("Error hydrating state: %s", e)) so the failure is visible; optionally re-raise or handle specific exception types if recovery is expected.

plane_mcp/journey/base.py (3)
40-41: Use `raise ... from None` to suppress exception chaining.

When re-raising with a new message, use `from None` to avoid confusing tracebacks showing the original `ValueError` from `int()`.

♻️ Proposed fix

```diff
 try:
     issue_sequence = int(parts[1])
 except ValueError:
-    raise ValueError(f"Invalid ticket sequence in '{ticket_id}'. Must be an integer (e.g., 123).")
+    raise ValueError(f"Invalid ticket sequence in '{ticket_id}'. Must be an integer (e.g., 123).") from None
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plane_mcp/journey/base.py` around lines 40 - 41, In the except ValueError handler that currently does `raise ValueError(f"Invalid ticket sequence in '{ticket_id}'. Must be an integer (e.g., 123).")`, suppress exception chaining by re-raising the new ValueError with `from None`; update the re-raise in the function/method handling `ticket_id` parsing in plane_mcp/journey/base.py so the new ValueError is raised as `ValueError(...) from None` to avoid exposing the original int() traceback.
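The effect of `from None` can be seen in a self-contained sketch; `parse_sequence` is an illustrative stand-in for the ticket parsing in `base.py`, not the real function:

```python
def parse_sequence(ticket_id: str) -> int:
    """Parse the numeric sequence from a 'PROJ-123' style ticket id."""
    parts = ticket_id.split("-")
    try:
        return int(parts[1])
    except ValueError:
        # from None suppresses the implicit "During handling of the above
        # exception..." chain, so only this message reaches the caller.
        raise ValueError(
            f"Invalid ticket sequence in '{ticket_id}'. Must be an integer (e.g., 123)."
        ) from None
```

After `from None`, the re-raised error has `__cause__` set to `None` and `__suppress_context__` set to `True`, which is what hides the original `int()` traceback.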
59-62: Silent exception swallowing in decorator.

The bare `except Exception: pass` when parsing `ticket_id` for project identifier extraction silently ignores errors. Consider logging at debug level.

♻️ Suggested improvement

```diff
 if not project_identifier and "ticket_id" in kwargs:
     try:
         project_identifier = kwargs["ticket_id"].split("-")[0]
-    except Exception:
-        pass
+    except (AttributeError, IndexError):
+        pass  # ticket_id format doesn't match expected pattern
```

Narrowing the exception type makes the intent clearer and avoids swallowing unexpected errors.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plane_mcp/journey/base.py` around lines 59 - 62, Replace the bare except in the ticket_id parsing so you only catch expected errors (e.g., KeyError, AttributeError, IndexError) when extracting project_identifier from kwargs["ticket_id"]. In the function where this occurs (the decorator/handler that sets project_identifier from kwargs["ticket_id"]), change the except to those specific exceptions and add a debug-level log via the module logger (or existing logger) that records the failed input and reason; leave project_identifier as None or unset if parsing fails. Ensure you reference the exact expression kwargs["ticket_id"].split("-")[0] and the variable project_identifier when updating the code.
121-124: Use `is` for type identity comparison.

`sig.return_annotation == list` uses equality which can have edge cases. Use `is` for type comparisons.

♻️ Proposed fix

```diff
 try:
     sig = inspect.signature(func)
-    if sig.return_annotation == list or str(sig.return_annotation).startswith("list"):
+    if sig.return_annotation is list or str(sig.return_annotation).startswith("list"):
         return [{"error": error_msg}]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plane_mcp/journey/base.py` around lines 121 - 124, The return-annotation check in the try block uses equality (sig.return_annotation == list) which should use identity for types; update the code in the block that obtains sig = inspect.signature(func) (the check inside the try) to use "is" for comparing to the built-in list type (i.e., change the equality comparison on sig.return_annotation to use "is") while keeping the existing string-based fallback (str(sig.return_annotation).startswith("list")) so the branch that returns [{"error": error_msg}] still triggers for list-typed annotations.

plane_mcp/journey/server.py (2)
38-42: Dummy OAuth credentials risk silent misconfiguration in production.

Using `"dummy_client_id"` and `"dummy_client_secret"` as defaults means OAuth will silently fail with cryptic errors instead of failing fast when credentials are missing. Consider raising an error or at least logging a warning when defaults are used.

♻️ Suggested defensive check

```diff
+client_id = os.getenv("PLANE_OAUTH_PROVIDER_CLIENT_ID")
+client_secret = os.getenv("PLANE_OAUTH_PROVIDER_CLIENT_SECRET")
+
+if not client_id or not client_secret:
+    logger.warning(
+        "OAuth credentials not configured. Set PLANE_OAUTH_PROVIDER_CLIENT_ID "
+        "and PLANE_OAUTH_PROVIDER_CLIENT_SECRET for OAuth functionality."
+    )
+
 # Initialize the MCP server
 oauth_mcp = FastMCP(
     "Plane Journey MCP Server",
     icons=[Icon(src="https://plane.so/favicon.ico", alt="Plane Journey MCP Server")],
     website_url="https://plane.so",
     auth=PlaneOAuthProvider(
-        client_id=os.getenv("PLANE_OAUTH_PROVIDER_CLIENT_ID", "dummy_client_id"),
-        client_secret=os.getenv("PLANE_OAUTH_PROVIDER_CLIENT_SECRET", "dummy_client_secret"),
+        client_id=client_id or "dummy_client_id",
+        client_secret=client_secret or "dummy_client_secret",
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plane_mcp/journey/server.py` around lines 38 - 42, The current instantiation uses hardcoded defaults for client_id/client_secret which can hide misconfiguration; update the code that builds the OAuth provider (the parameters client_id and client_secret) to read the env vars without silent defaults (use os.getenv("PLANE_OAUTH_PROVIDER_CLIENT_ID") and os.getenv("PLANE_OAUTH_PROVIDER_CLIENT_SECRET")), then add a defensive check after reading them: if either is missing, either raise a clear exception (e.g., ValueError) to fail fast or emit a prominent error via the existing logger before exiting; ensure the change references the same parameter names client_id and client_secret so callers remain unchanged.
19-24: Missing error handling for invalidREDIS_PORT.If
REDIS_PORTis set to a non-integer value,int(redis_port)will raise aValueErrorwithout context. Consider validating and providing a clearer error message.♻️ Suggested improvement
redis_host = os.getenv("REDIS_HOST") redis_port = os.getenv("REDIS_PORT") if redis_host and redis_port: + try: + port_int = int(redis_port) + except ValueError: + raise ValueError(f"REDIS_PORT must be an integer, got: {redis_port}") logger.info("Using Redis for token storage") - client_storage = RedisStore(host=redis_host, port=int(redis_port)) + client_storage = RedisStore(host=redis_host, port=port_int)🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plane_mcp/journey/server.py` around lines 19 - 24, The code currently casts REDIS_PORT with int(redis_port) which will raise a ValueError with no context; update the Redis initialization around redis_host/redis_port to validate and convert redis_port inside a try/except, logging a clear message via logger (including the invalid value and that Redis init failed) and handle the failure (e.g., exit or fallback) before constructing RedisStore; reference the variables redis_port, redis_host, the RedisStore constructor and client_storage when making this change.

tests/test_journey_methods_comprehensive.py (1)
187-258: Add the disabled-cycles regression on the create path too.

`begin_work` has explicit 400-fallback coverage, but `create_ticket(..., cycle_name=...)` does not. Mirroring that case here would catch the current `cycles.list()` failure path in `CreateUpdateJourney` before it ships again.
Verify each finding against the current code and only fix it if needed. In `@tests/test_journey_methods_comprehensive.py` around lines 187 - 258, The test lacks coverage for the "disabled cycles" 400-error regression on the create path; update test_create_ticket_with_all_params to simulate cycles.list raising an HTTP 400 (or client exception) similar to the begin_work coverage so CreateUpdateJourney.create_ticket handles it gracefully. Specifically, change/mock mock_client.cycles.list to raise an appropriate 400 HTTP error (or side_effect that mimics the client behavior), then assert that create_ticket still returns successfully (result["key"] == "TEST-99"), that mock_client.cycles.create and mock_client.cycles.add_work_items are not called in the 400 case, and that other assertions (labels, workspace/project/data fields) still pass; reference CreateUpdateJourney.create_ticket, mock_client.cycles.list, mock_client.cycles.create, and mock_client.cycles.add_work_items when making the changes.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 065a98c9-682d-441a-b8e0-7a66974d769b
⛔ Files ignored due to path filters (1)
`uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (30)
- .github/workflows/build-branch.yml
- .github/workflows/ci.yml
- .gitignore
- docker-compose.yml
- plane_mcp/__main__.py
- plane_mcp/journey/__init__.py
- plane_mcp/journey/__main__.py
- plane_mcp/journey/base.py
- plane_mcp/journey/cache.py
- plane_mcp/journey/lod.py
- plane_mcp/journey/server.py
- plane_mcp/journey/tools/__init__.py
- plane_mcp/journey/tools/create_update.py
- plane_mcp/journey/tools/read.py
- plane_mcp/journey/tools/workflow.py
- plane_mcp/resolver.py
- plane_mcp/sanitize.py
- plane_mcp/tools/generated_core.py
- plane_mcp/tools/generated_metadata.py
- plane_mcp/tools/journeys.py
- pyproject.toml
- tests/conftest.py
- tests/test_generated_tools.py
- tests/test_input_validation.py
- tests/test_integration.py
- tests/test_journey_lod.py
- tests/test_journey_methods_comprehensive.py
- tests/test_journey_tools.py
- tests/test_resolver.py
- tests/test_smart_updates.py
Thanks for the review. Will begin tomorrow. You know why he's called coderabbit? Every time he hops in here his comments multiply. |
1. ci.yml: tighten permissions (contents:read), branch-aware Docker tagging
2. journey/__main__.py: fix per_page variable naming, log path, and server mode error
3. base.py: modernize typing (3.10+), from None, narrow exception, is list
4. cache.py: add debug log on cache read failure, rename ambiguous 'l' var
5. lod.py: use uuid module for UUID detection, debug log for hydration failures, modernize typing
6. server.py: validate REDIS_PORT is int, warn on placeholder OAuth creds
7. tools/__init__.py: separate FastMCP from local imports
8. create_update.py: wrap cycles.list() in try/except (covers disabled-cycles 400)
9. read.py: fix pagination cursor (per_page=min(limit,100)), label exception returns LLM-friendly response
10. resolver.py: move import time to top, modernize typing, add from e to raises
11. tools/journeys.py: deleted (confirmed dead code, live tools are in journey/tools/)
12. test_journey_methods_comprehensive.py: update per_page assertion, split semicolon stmts
- test_basic_search_returns_results: sanity check against live TEST project
- test_per_page_equals_limit: verifies per_page=min(limit,100) via module-level patch
- test_pagination_cursor_no_overlap: confirms page 2 has no overlap with page 1
- test_label_exception_returns_llm_friendly:
Mode A: unknown label name -> warning issued, results still returned
Mode B: labels.list raises -> empty results + LLM-friendly warning (no 500)
All 4 tests pass against https://kanban.hackedyour.info (plane-mcp workspace)
Actionable comments posted: 8
♻️ Duplicate comments (1)
plane_mcp/journey/tools/read.py (1)
76-87: ⚠️ Potential issue | 🟠 Major

Honor explicit assignee IDs and fix the tool contract.

This block only adds `"me"`; every other assignee value is ignored, so a request like `assignees=["<uuid>"]` silently becomes unfiltered. The tool docstring at Line 228 also still tells agents to send usernames, which does not match the intended behavior.

🛠️ Suggested fix

```diff
 if assignees:
     assignee_ids = []
     for a in assignees:
         if a.lower() == "me":
             try:
                 me = client.users.get_me()
                 if hasattr(me, "id"):
                     assignee_ids.append(me.id)
-            except Exception:
-                pass
+            except Exception as e:
+                import logging
+                logging.getLogger(__name__).debug("Failed to resolve 'me' assignee: %s", e)
+        else:
+            assignee_ids.append(a)
     if assignee_ids:
         query_params["assignees"] = ",".join(assignee_ids)
@@
-    assignees: List of usernames or 'me' to filter by.
+    assignees: List of assignee UUIDs, or 'me' to resolve the current user.
```

Based on learnings: `search_tickets.assignees` is intended to accept assignee UUIDs, and `"me"` is the only macro resolved via `client.users.get_me()`.

Also applies to: 218-229
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@plane_mcp/journey/__main__.py`:
- Around line 19-22: ServerMode includes SSE but main() never handles it, so
either implement SSE handling or reject it; update the main() dispatch (the
logic that checks ServerMode in main()) to add a branch for ServerMode.SSE that
invokes the SSE server startup/handler, or change ServerMode to remove/disable
SSE and add validation to raise an error when ServerMode.SSE is selected;
reference the ServerMode enum and the main() dispatch branches (currently
handling STDIO and HTTP) and ensure the chosen fix returns/blocks correctly
instead of falling through.
- Around line 55-64: The HTTP branch is mounting the unauthenticated stdio MCP
via get_stdio_mcp() and exposing it as streamable HTTP; replace that with the
authenticated HTTP app factories (use the header/OAuth authenticated MCP/http
app factories imported above instead of get_stdio_mcp()) and construct the
Starlette app with the combined_lifespan function (wire combined_lifespan into
the lifespan instead of http_app.lifespan). Specifically, stop calling
get_stdio_mcp(), call the authenticated factory(ies) that produce an object
exposing http_app (the same symbol http_mcp.http_app), mount that http_app, and
pass combined_lifespan to Starlette so both header and OAuth apps are served
with proper authentication.
In `@plane_mcp/journey/cache.py`:
- Around line 8-10: The cache filename is global and needs to be namespaced per
workspace (and preferably per Plane base URL); change how cache_file is
constructed so it includes a normalized workspace_slug (and optionally a
normalized base_url) — e.g., build a key like
f"workspace_{slugified_workspace_slug}_{slugified_base_url}_context_cache.json"
and store it under cache_dir; update any nearby load/save helpers that reference
cache_file (look for variables/functions named cache_file, cache_dir, and any
read/write helpers in the same module) to use the new namespaced filename and
ensure slugification/URL-safe encoding to avoid invalid filesystem characters.
In `@plane_mcp/journey/tools/create_update.py`:
- Around line 108-122: The code currently treats a failure in
client.cycles.add_work_items as a hard failure after new_ticket was already
created; wrap the call to client.cycles.add_work_items in a try/except, catch
and log the exception (or emit a warning/partial-success result) but do not
re-raise so the function returns the created ticket key from new_ticket (from
client.work_items.create) even if cycle attachment fails; ensure you reference
new_ticket.id, client.cycles.add_work_items and client.work_items.create when
updating the logic and return {"key":
f"{project_slug}-{new_ticket.sequence_id}"} regardless, adding a
warning/metadata field or log entry to indicate the cycle attach failed.
In `@plane_mcp/journey/tools/read.py`:
- Around line 38-40: The code currently uses per_page = min(limit, 100) only per
request which still allows callers to pass a larger limit and causes the loop to
fetch multiple pages; clamp the total effective limit up-front (e.g.,
effective_limit = min(limit, 100)) before starting pagination and then use
remaining = effective_limit - fetched to compute per_page = min(remaining, 100)
inside the loop (and stop fetching once fetched >= effective_limit). Apply this
change to every occurrence that sets per_page from limit (references: per_page,
limit, query_params and the pagination loop) so the function never returns more
than the documented max of 100.
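The clamp-then-paginate pattern described above can be sketched as a standalone loop; `fetch_page` is a hypothetical stand-in for the Plane API call, not the real client method:

```python
def fetch_all(fetch_page, limit: int, page_size: int = 100) -> list:
    """Fetch up to min(limit, page_size) items, clamping before the loop."""
    effective_limit = min(limit, page_size)  # never exceed the documented max
    if effective_limit <= 0:
        return []
    results: list = []
    cursor = None
    while len(results) < effective_limit:
        # per_page shrinks to exactly what is still needed on the last page
        per_page = min(effective_limit - len(results), page_size)
        page, cursor = fetch_page(per_page=per_page, cursor=cursor)
        results.extend(page)
        if not page or cursor is None:
            break
    return results[:effective_limit]
```

Because the clamp happens before the loop, a caller passing `limit=250` still receives at most 100 items, matching the documented contract.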
In `@plane_mcp/resolver.py`:
- Around line 17-22: The global caches and TTLs (_GLOBAL_PROJECT_CACHE,
_GLOBAL_STATE_CACHE, _GLOBAL_WORK_ITEM_CACHE, _CACHE_LAST_UPDATED,
_CACHE_TTL_SECONDS) must be scoped by workspace_slug (and by project_id for
state TTLs) to avoid cross-workspace pollution; change these top-level dicts to
map workspace_slug -> per-workspace cache (e.g., _GLOBAL_PROJECT_CACHE:
dict[str, dict[str,str]], _GLOBAL_WORK_ITEM_CACHE: dict[str, dict[str,str]]),
make _GLOBAL_STATE_CACHE map workspace_slug -> dict[project_id -> state_map],
and replace _CACHE_LAST_UPDATED with per-key timestamps keyed by workspace_slug
(and workspace_slug+project_id for state TTLs), then update any call sites such
as resolve_project and state resolution logic to read/write using the
workspace-scoped keys and enforce TTL checks using the scoped timestamps and
_CACHE_TTL_SECONDS.
In `@tests/e2e_pagination_test.py`:
- Around line 18-22: The e2e_journey fixture always creates a live client via
get_plane_client_context which breaks local runs; update the fixture to detect
missing E2E credentials (e.g., when get_plane_client_context returns falsy
client/workspace or raises a credential-specific error) and call pytest.skip
with a clear message instead of initializing EntityResolver/ReadJourney;
reference the e2e_journey fixture and the get_plane_client_context,
EntityResolver, and ReadJourney symbols when implementing the conditional skip
and ensure pytest is imported for pytest.skip.
In `@tests/test_journey_methods_comprehensive.py`:
- Line 450: The single-line statement creating and mutating the mock violates
Ruff E702; split the chained statements into separate lines by first assigning
existing_label = MagicMock() and then on subsequent lines set
existing_label.name = "bug" and existing_label.id = "lbl-1" so the variable and
its attribute assignments are on their own lines (referencing existing_label and
MagicMock in the test).
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: af1b77d4-c4a3-485f-ba45-5881889d8c93
📒 Files selected for processing (12)
- .github/workflows/ci.yml
- plane_mcp/journey/__main__.py
- plane_mcp/journey/base.py
- plane_mcp/journey/cache.py
- plane_mcp/journey/lod.py
- plane_mcp/journey/server.py
- plane_mcp/journey/tools/__init__.py
- plane_mcp/journey/tools/create_update.py
- plane_mcp/journey/tools/read.py
- plane_mcp/resolver.py
- tests/e2e_pagination_test.py
- tests/test_journey_methods_comprehensive.py
✅ Files skipped from review due to trivial changes (2)
- .github/workflows/ci.yml
- plane_mcp/journey/tools/__init__.py
- __main__.py: HTTP mode uses get_header_mcp() (authenticated) instead of
get_stdio_mcp(); SSE mode now raises NotImplementedError explicitly
- cache.py: workspace context cache file scoped by workspace_slug to prevent
cross-tenant cache pollution in multi-workspace deployments
- resolver.py: global project/state/work_item caches keyed by workspace_slug;
TTL timestamps per workspace; eliminates cross-workspace UUID collisions
- create_update.py: cycles.add_work_items wrapped in try/except; returns
partial-success {key, status:warning} if cycle attach fails after ticket create
- read.py: clamp limit=min(limit,100) before pagination loop + [:limit] slice
before apply_lod; early limit<=0 exit moved above clamp
- e2e_pagination_test.py: fixture skips cleanly when E2E credentials absent;
adds pytestmark=pytest.mark.e2e for selective exclusion
- test_journey_methods_comprehensive.py: split semicolon-chained line (Ruff E702)
- tests/: update cache seeding to new workspace-scoped dict structure
Actionable comments posted: 4
♻️ Duplicate comments (1)
plane_mcp/journey/tools/read.py (1)
79-90: ⚠️ Potential issue | 🟡 Minor

Add debug logging to the `except` handler instead of bare `pass`.

The exception handler at lines 88–89 currently swallows errors silently. Replace `except Exception: pass` with a debug log statement to aid troubleshooting when `get_me()` fails.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plane_mcp/journey/tools/read.py` around lines 79 - 90, Replace the bare except: pass in the block that handles client.users.get_me() failures with a debug log that records the exception and context (e.g., mention that resolving "me" for assignee "me" failed), using the module logger or Python's logging with exc_info=True so the stack trace is captured; update the handler around the client.users.get_me() call where assignee_ids is appended (the loop over assignees / variable assignee_ids) to log the exception instead of silently swallowing it.
🧹 Nitpick comments (3)
plane_mcp/resolver.py (1)
92-93: Consider splitting this long conditional for readability.

Line 92 exceeds a comfortable reading length. Extracting the TTL check into a helper or splitting the condition improves maintainability.

♻️ Optional refactor

```diff
+    def _is_cache_valid(self, cache_key: str) -> bool:
+        return time.time() - self._ts[cache_key] < _CACHE_TTL_SECONDS
+
     def resolve_state(self, project_identifier: str, state_name: str) -> str:
         ...
         project_id = self.resolve_project(project_identifier)
         key = state_name.lower()
-        if project_id in self._state_cache and key in self._state_cache[project_id] and (time.time() - self._ts["states"] < _CACHE_TTL_SECONDS):
+        if project_id in self._state_cache and key in self._state_cache[project_id] and self._is_cache_valid("states"):
             return self._state_cache[project_id][key]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plane_mcp/resolver.py` around lines 92 - 93, The long conditional in the resolver uses three checks in one line (project_id in self._state_cache, key in self._state_cache[project_id], and (time.time() - self._ts["states"] < _CACHE_TTL_SECONDS)) which hurts readability; extract the TTL check into a small helper or local boolean (e.g., _is_state_cache_fresh() or fresh = (time.time() - self._ts["states"] < _CACHE_TTL_SECONDS)) and then rewrite the if in two clearer parts such as first checking project_id/key existence against self._state_cache and then checking the fresh boolean before returning self._state_cache[project_id][key], referencing the existing symbols _state_cache, _ts["states"], _CACHE_TTL_SECONDS, project_id and key.

plane_mcp/journey/__main__.py (2)
14-14: Remove unused import `get_oauth_mcp`.

`get_oauth_mcp` is imported but never used in this module. The HTTP mode only uses `get_header_mcp()`.

🧹 Suggested fix

```diff
-from plane_mcp.journey.server import get_header_mcp, get_oauth_mcp, get_stdio_mcp
+from plane_mcp.journey.server import get_header_mcp, get_stdio_mcp
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plane_mcp/journey/__main__.py` at line 14, Remove the unused import get_oauth_mcp from the import statement that currently reads "from plane_mcp.journey.server import get_header_mcp, get_oauth_mcp, get_stdio_mcp"; leave only get_header_mcp and get_stdio_mcp (and verify there are no other references to get_oauth_mcp in this module). This eliminates the unused-symbol warning and keeps imports minimal.
25-32: `combined_lifespan` is defined but unused.

This function combines three app lifespans, but the HTTP branch (lines 62-84) only uses `http_app.lifespan(http_app)`. Either remove the unused function or wire it in if multi-app lifespan management is intended.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plane_mcp/journey/__main__.py` around lines 25 - 32, The helper combined_lifespan (asynccontextmanager) is defined but never used; either remove it or wire it into the HTTP branch where lifespans are started: if you intend to manage multiple app lifespans together, replace the single-use http_app.lifespan(http_app) with async with combined_lifespan(oauth_app, header_app, sse_app): (or call combined_lifespan with the appropriate app instances you construct) and ensure asynccontextmanager is imported/kept; otherwise delete combined_lifespan and any related unused imports to avoid dead code.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@plane_mcp/journey/cache.py`:
- Around line 7-32: get_cached_workspace_context can raise UnboundLocalError
because ctx is only set inside the try; initialize ctx before the try (e.g., ctx
= None) or assign a safe default object (with client=None and
workspace_slug=None) so later checks like "if ctx.client and ctx.workspace_slug"
won't reference an unbound name; update the function to always bind ctx before
using it and adjust the conditional that checks ctx.client/ctx.workspace_slug
accordingly.
In `@plane_mcp/journey/tools/create_update.py`:
- Around line 222-247: The registered create_ticket in
register_create_update_tools is missing the state_name parameter accepted by
CreateUpdateJourney.create_ticket; update the create_ticket signature to include
state_name: str | None = None, pass state_name=state_name into
journey.create_ticket, and update the create_ticket docstring to document the
state_name argument so callers can set the initial ticket state; keep the rest
of the wiring (get_plane_client_context, EntityResolver, mcp.tool,
mcp_error_boundary) unchanged.
In `@plane_mcp/journey/tools/read.py`:
- Around line 148-151: The variable issue_sequence returned from parse_ticket_id
in read_ticket is unused and triggers Ruff RUF059; change the unpacking to
discard it by renaming it to _issue_sequence (e.g., project_identifier,
_issue_sequence = self.parse_ticket_id(ticket_id)) so intent is clear and the
linter warning is silenced; keep all other logic in read_ticket, including calls
to self.resolver.resolve_ticket and self.resolver.resolve_project, unchanged.
In `@plane_mcp/resolver.py`:
- Around line 138-141: The except block that catches ValueError after parsing
issue_sequence (around the int(parts[1]) code in resolver.py) should re-raise
the new ValueError using exception chaining suppression: change the raise to use
"raise ValueError(...) from None" so the new message isn't attributed to the
original exception context; update the raise in the except ValueError that
currently raises f"Invalid ticket sequence in '{ticket_id}'..." to include "from
None".
---
Duplicate comments:
In `@plane_mcp/journey/tools/read.py`:
- Around line 79-90: Replace the bare except: pass in the block that handles
client.users.get_me() failures with a debug log that records the exception and
context (e.g., mention that resolving "me" for assignee "me" failed), using the
module logger or Python's logging with exc_info=True so the stack trace is
captured; update the handler around the client.users.get_me() call where
assignee_ids is appended (the loop over assignees / variable assignee_ids) to
log the exception instead of silently swallowing it.
---
Nitpick comments:
In `@plane_mcp/journey/__main__.py`:
- Line 14: Remove the unused import get_oauth_mcp from the import statement that
currently reads "from plane_mcp.journey.server import get_header_mcp,
get_oauth_mcp, get_stdio_mcp"; leave only get_header_mcp and get_stdio_mcp (and
verify there are no other references to get_oauth_mcp in this module). This
eliminates the unused-symbol warning and keeps imports minimal.
- Around line 25-32: The helper combined_lifespan (asynccontextmanager) is
defined but never used; either remove it or wire it into the HTTP branch where
lifespans are started: if you intend to manage multiple app lifespans together,
replace the single-use http_app.lifespan(http_app) with async with
combined_lifespan(oauth_app, header_app, sse_app): (or call combined_lifespan
with the appropriate app instances you construct) and ensure asynccontextmanager
is imported/kept; otherwise delete combined_lifespan and any related unused
imports to avoid dead code.
In `@plane_mcp/resolver.py`:
- Around line 92-93: The long conditional in the resolver uses three checks in
one line (project_id in self._state_cache, key in self._state_cache[project_id],
and (time.time() - self._ts["states"] < _CACHE_TTL_SECONDS)) which hurts
readability; extract the TTL check into a small helper or local boolean (e.g.,
_is_state_cache_fresh() or fresh = (time.time() - self._ts["states"] <
_CACHE_TTL_SECONDS)) and then rewrite the if in two clearer parts such as first
checking project_id/key existence against self._state_cache and then checking
the fresh boolean before returning self._state_cache[project_id][key],
referencing the existing symbols _state_cache, _ts["states"],
_CACHE_TTL_SECONDS, project_id and key.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 1770ece9-845c-46c5-9c8d-24e6fcf59e71
📒 Files selected for processing (9)
- plane_mcp/journey/__main__.py
- plane_mcp/journey/cache.py
- plane_mcp/journey/tools/create_update.py
- plane_mcp/journey/tools/read.py
- plane_mcp/resolver.py
- tests/conftest.py
- tests/e2e_pagination_test.py
- tests/test_journey_methods_comprehensive.py
- tests/test_journey_tools.py
🚧 Files skipped from review as they are similar to previous changes (2)
- tests/conftest.py
- tests/e2e_pagination_test.py
- cache.py: init ctx=None before try block to prevent UnboundLocalError if get_plane_client_context() raises; guard with 'if ctx is not None' before accessing ctx.client/workspace_slug
- create_update.py: expose state_name parameter on the registered MCP tool so callers can set the initial ticket state (was silently dropped); update docstring
- read.py: rename issue_sequence -> _issue_sequence (Ruff RUF059 unused var)
- resolver.py: add 'from None' to ValueError re-raise in resolve_ticket to satisfy Ruff B904 exception chaining rule
♻️ Duplicate comments (1)
plane_mcp/journey/tools/read.py (1)
79-90: ⚠️ Potential issue | 🟡 Minor

Assignee `get_me()` failure is silently swallowed.

Per past discussion, the developer mentioned adding a debug log when `get_me()` fails, but the current code still shows a bare `except Exception: pass` (lines 87-88). This means if `get_me()` fails, the assignee filter is silently dropped with no observability.

🛠️ Add debug logging as previously discussed

```diff
 if a.lower() == "me":
     try:
         me = client.users.get_me()
         if hasattr(me, "id"):
             assignee_ids.append(me.id)
-    except Exception:
-        pass
+    except Exception as e:
+        import logging
+        logging.getLogger(__name__).debug(f"Failed to resolve 'me' assignee: {e}")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plane_mcp/journey/tools/read.py` around lines 79 - 90, The except block around client.users.get_me() currently swallows all errors; update it to catch Exception as e and emit a debug-level log with the exception and context (e.g., "failed to resolve 'me' for assignees") before continuing so failures are observable; modify the block around client.users.get_me(), assignee_ids, and query_params to log the exception (using the module logger or client.logger.debug) while still allowing the loop to continue.
🧹 Nitpick comments (1)
plane_mcp/resolver.py (1)
56-82: Consider paginating when listing projects.

The resolver fetches projects with `per_page=100` (line 60) but doesn't handle pagination. Workspaces with >100 projects would silently miss some, leading to false `EntityResolutionError` when resolving valid project identifiers.

♻️ Suggested pagination loop

```diff
 # Fetch all projects to cache and find
 try:
+    available = []
+    cursor = None
+    while True:
+        params = PaginatedQueryParams(per_page=100)
+        if cursor:
+            params.cursor = cursor
-    projects_resp = self.client.projects.list(
-        workspace_slug=self.workspace_slug,
-        params=PaginatedQueryParams(per_page=100)
-    )
-    available = []
-    for p in projects_resp.results:
+        projects_resp = self.client.projects.list(
+            workspace_slug=self.workspace_slug,
+            params=params
+        )
+        for p in projects_resp.results:
             self._project_cache[p.identifier.upper()] = p.id
             self._project_cache[p.name.upper()] = p.id
             available.append(p.identifier)
             if hasattr(p, 'slug') and p.slug:
                 self._project_cache[p.slug.upper()] = p.id
+        cursor = getattr(projects_resp, "next_cursor", None)
+        if not cursor:
+            break
     self._ts["projects"] = time.time()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plane_mcp/resolver.py` around lines 56 - 82, The current project fetch in resolver.py uses self.client.projects.list with PaginatedQueryParams(per_page=100) but does not handle pagination, so results beyond the first page are ignored and _project_cache (and subsequent resolution in resolve_project or similar functions) can miss valid projects; update the logic around self.client.projects.list to iterate through all pages (e.g., follow a next page/link or increment page params) collecting results from every page before populating _project_cache and setting self._ts["projects"], ensuring the available list includes identifiers from every page and preserving existing behavior of raising EntityResolutionError when the key is absent; keep using PaginatedQueryParams but loop until no more pages or results.
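For reference, the cursor-style pagination loop being suggested can be sketched in isolation; the `FakePage`/`FakeProjects` stubs below are hypothetical stand-ins for the SDK, not its real client:

```python
class FakePage:
    """Minimal paginated response: results plus an optional next cursor."""
    def __init__(self, results, next_cursor=None):
        self.results = results
        self.next_cursor = next_cursor

class FakeProjects:
    """Stub API that returns two pages, to exercise the loop."""
    def __init__(self):
        self._pages = {None: FakePage(["P1", "P2"], "cur2"),
                       "cur2": FakePage(["P3"])}
    def list(self, cursor=None):
        return self._pages[cursor]

def list_all_projects(projects_api):
    """Collect results from every page, not just the first."""
    available, cursor = [], None
    while True:
        page = projects_api.list(cursor=cursor)
        available.extend(page.results)
        cursor = getattr(page, "next_cursor", None)
        if not cursor:
            break
    return available

print(list_all_projects(FakeProjects()))  # ['P1', 'P2', 'P3']
```

The key point is the loop terminates only when the response carries no next cursor, so a workspace with more than one page of projects is fully cached.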
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 3dbc075d-29cb-4b76-aa88-fe81bbe94644
📒 Files selected for processing (4)
- plane_mcp/journey/cache.py
- plane_mcp/journey/tools/create_update.py
- plane_mcp/journey/tools/read.py
- plane_mcp/resolver.py
- cycles.add_work_items is now non-fatal: failure logs a warning and sets a partial-success message rather than crashing the whole batch (same pattern as create_ticket's cycle attach fix)
- State transition exception handler narrowed: EntityResolutionError (In Progress state not found) is caught silently as before; any other Exception (API error) is now logged with a warning instead of being silently swallowed by bare 'except ValueError'
Actionable comments posted: 1
🧹 Nitpick comments (2)
plane_mcp/journey/tools/workflow.py (2)
146-151: Inconsistent exception handling: catch `EntityResolutionError` instead of `ValueError`. Line 117 catches `EntityResolutionError` for the "In Progress" state lookup, but here line 150 catches `ValueError`. While this works because `EntityResolutionError` is a `ValueError` subclass, catching the broader type could inadvertently suppress unrelated `ValueError` exceptions from other code paths. Use `EntityResolutionError` for consistency with `begin_work` and to match what `resolve_state()` actually raises.

♻️ Proposed fix
```diff
 for state_name in ["Done", "Completed"]:
     try:
         state_id = self.resolver.resolve_state(project_identifier, state_name)
         break
-    except ValueError:
+    except EntityResolutionError:
         continue
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plane_mcp/journey/tools/workflow.py` around lines 146 - 151, The loop that resolves "Done" / "Completed" states currently catches ValueError; change the exception type to EntityResolutionError to match what resolve_state actually raises and to be consistent with the begin_work handling—update the except clause in the for loop that calls self.resolver.resolve_state(project_identifier, state_name) to except EntityResolutionError and leave the continue behavior unchanged.
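The subclass relationship this comment relies on can be demonstrated in isolation; the toy `EntityResolutionError` below is an assumption about how the real class is declared, based on the comment's claim that it subclasses `ValueError`:

```python
class EntityResolutionError(ValueError):
    """Assumed declaration: a ValueError subclass, as the review comment states."""

def catches(exc, handler_type):
    """Return True if `except handler_type` would catch `exc`."""
    try:
        raise exc
    except handler_type:
        return True
    except Exception:
        return False

# `except ValueError` catches the subclass, which is why the current code "works"...
print(catches(EntityResolutionError("x"), ValueError))   # True
# ...but it also swallows plain ValueErrors, which the narrower catch would not:
print(catches(ValueError("y"), EntityResolutionError))   # False
```

This is why the broader `except ValueError` is risky: it silently absorbs any unrelated `ValueError` raised inside the try block.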
101-122: Broad exception catches are intentional but could be narrowed. Static analysis flags the `Exception` catches at lines 101 and 120. Per the commit message, these are intentional for fault tolerance: cycle failures and state transition errors should not crash the entire batch operation. However, catching bare `Exception` can mask unexpected errors (e.g., `TypeError`, `AttributeError` from bugs). Consider narrowing to `HttpError` or a tuple of expected exception types while still maintaining the non-fatal behavior.

💡 Optional: Narrow exception types
Exceptioncan mask unexpected errors (e.g.,TypeError,AttributeErrorfrom bugs). Consider narrowing toHttpErroror a tuple of expected exception types while still maintaining the non-fatal behavior.💡 Optional: Narrow exception types
# Add all tickets in this project to the cycle; non-fatal on failure try: client.cycles.add_work_items( workspace_slug=workspace_slug, project_id=proj_id, cycle_id=cycle_id, issue_ids=w_ids ) msg += f" Added to cycle '{cycle_name}'." - except Exception as e: + except (HttpError, RuntimeError) as e: logger.warning("Failed to add tickets to cycle '%s' for project %s: %s", cycle_name, proj_identifier, e) msg += f" Warning: tickets processed but could not be added to cycle '{cycle_name}'."except EntityResolutionError: # "In Progress" state not found in this project — skip silently, it's optional pass - except Exception as e: + except (HttpError, RuntimeError) as e: # Unexpected API error — log but don't fail the whole batch logger.warning("State transition to 'In Progress' failed for project %s: %s", proj_identifier, e)🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plane_mcp/journey/tools/workflow.py` around lines 101 - 122, Replace the broad "except Exception as e" handlers in the cycle-assignment and state-transition blocks with a narrow catch of the expected API/HTTP error types (e.g., HttpError, ApiError, requests.RequestException) so only API communication failures are swallowed while other bugs still surface; specifically, update the except around the cycle-add logic and the except around the resolve_state/client.work_items.update (referencing resolve_state, client.work_items.update, UpdateWorkItem and the logger.warning calls) to catch a tuple of those API/HTTP exceptions, and leave EntityResolutionError handling unchanged—optionally add a small final except Exception that logs with logger.exception and re-raises to avoid silently hiding unexpected errors.
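A small, self-contained sketch of the narrowing pattern; `HttpError` here is a local stand-in for the SDK's API error type, not an import from it:

```python
import logging

logger = logging.getLogger(__name__)

class HttpError(Exception):
    """Hypothetical stand-in for the SDK's API error type."""

def attach_to_cycle(action):
    """Run an API call; swallow only expected API failures, let bugs surface."""
    try:
        action()
        return "added"
    except (HttpError, RuntimeError) as e:
        # Expected transport/API failure: log and degrade to a partial success
        logger.warning("cycle attach failed: %s", e)
        return "warning"

def boom_api():
    raise HttpError("503 Service Unavailable")

def boom_bug():
    raise TypeError("bug in our code")

print(attach_to_cycle(boom_api))  # 'warning': API failure is non-fatal
try:
    attach_to_cycle(boom_bug)
except TypeError:
    print("bug surfaced")  # programming errors still propagate
```

The tuple keeps the batch fault-tolerant for the failure modes you anticipate while letting genuine programming errors crash loudly instead of being logged as warnings.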
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@plane_mcp/journey/tools/workflow.py`:
- Around line 3-10: Remove the unused import get_cached_project_slugs_docstring
and consolidate the two separate imports from plane_mcp.resolver into a single
import statement that imports both EntityResolutionError and EntityResolver;
update the import block so only used symbols (EntityResolutionError,
EntityResolver) are imported and the stray get_cached_project_slugs_docstring
import is deleted.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: ab657c54-a729-4953-bd63-8103ae272df7
📒 Files selected for processing (1)
plane_mcp/journey/tools/workflow.py
- Merge separate EntityResolver and EntityResolutionError imports into one line
- Remove unused get_cached_project_slugs_docstring import
Ready for merge after maintainer review.
…variables)
- Automatically sorted and fixed imports via 'ruff check --fix'
- Wrapped docstrings and logger calls to stay under 120 characters to comply with E501
- Removed unused dictionary variable allocations
- Updated test assertions to match the new line-wrapped message strings
Force-pushed 6b640c8 to 8b732e2
Description
This PR introduces an alternative, intent-driven, macro-like, human-readable interface for the Plane MCP server, designed specifically for AI agents, alongside updated documentation and CI/CD enhancements.
It is important to note that this is NOT a rearchitecture or replacement of the existing 1:1 REST API wrapper. The original comprehensive HTTP tools remain intact.
Instead, this PR introduces a new `agent` module that lives alongside the `http` module. By simply swapping `http` for `agent` in the configuration, the surface area is scoped down to 7 highly optimized tools.

🚀 Ready to Test: A fully operational, drop-in replacement image is available on Docker Hub for immediate testing: adamoutler/plane-mcp-server:plane-agent
The Problem with the `http` API for Agents

The `http` module exposes 108 tools. Agent environments with strict tool caps (like Antigravity) cannot load the `http` API at all, making it unusable for these platforms.

The Solution: The Agent Namespace
The new `agent` namespace abstracts complex multi-step workflows into single, intent-driven tool calls. This bypasses environment tool caps, drastically improves the "Agent Experience" (AX), and reduces token usage.

The `agent` Tool Roster:
- `search_tickets` (currently the only working search in the repo)
- `read_ticket`
- `create_ticket`
- `update_ticket`
- `transition_ticket`
- `begin_work`
- `complete_work`
To explicitly address context bloat, the new tools utilize LOD profiles to return exactly what the agent needs and nothing more:
- `SUMMARY`
- `STANDARD` (default for `read_ticket`)
- `FULL`

Resolves / Addresses Community Issues
This PR directly resolves or provides a viable workaround for several outstanding community requests:
- The `agent` namespace limits tools to 7, addressing @Nek-12's issue regarding too many tools exhausting AI contexts.
- `list_work_items` #74: The new `search_tickets` tool handles the filtering on work items requested by @andreyrd, and the LOD profiles resolve @juliettefournier-econ's request to fix the level of detail to prevent full-item context dumps.
Test Scenarios
- `search_tickets` successfully queries and returns concise issue lists.
- `agent` namespace.
- `begin_work`.
- The `http` API wrapper remains completely unaffected.
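The LOD profiles described above could be sketched as a simple field filter; the field sets below are illustrative assumptions for this sketch, not the PR's actual profile definitions:

```python
# Illustrative field sets per profile; the PR's real profiles may differ.
LOD_FIELDS = {
    "SUMMARY": {"id", "name", "state"},
    "STANDARD": {"id", "name", "state", "priority", "assignees"},
    "FULL": None,  # None means: return everything
}

def apply_lod(item: dict, profile: str) -> dict:
    """Trim a work item to the fields allowed by the given LOD profile."""
    fields = LOD_FIELDS[profile]
    if fields is None:
        return item
    return {k: v for k, v in item.items() if k in fields}

ticket = {"id": "T-1", "name": "Fix login", "state": "In Progress",
          "priority": "high", "assignees": ["me"],
          "description_html": "<p>long body...</p>"}
print(apply_lod(ticket, "SUMMARY"))  # only id/name/state survive
```

Dropping heavy fields such as HTML descriptions at `SUMMARY` level is what keeps search results from dumping full items into the agent's context.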
Summary by CodeRabbit
New Features
Infrastructure
Tests