Conversation
…lock endpoints GitSyncMiddleware extends Starlette's BaseHTTPMiddleware, which wraps every request in an anyio task group and proxies receive/send through memory-channel streams. For SSE endpoints, this adds per-chunk scheduler hops, ExceptionGroup wrapping, and prevents the handler from seeing the real ASGI channels. Add a GitSyncMiddleware.__call__ override that short-circuits @no_write_lock endpoints to a pure-ASGI path: resolve the manager, call ensure_fresh_for_read, attach manager to scope state, then delegate straight to self.app(scope, receive, send) with the real uvicorn channels. Non-@no_write_lock requests still go through super().__call__() → dispatch() unchanged. Dispatch's unreachable `is_self_managed` block (formerly handled @no_write_lock pass-through) is removed — @no_write_lock now never reaches dispatch. Tests: - Unit tests for the bypass routing matrix, state attachment, error path, non-HTTP scope delegation, and manager-absent pass-through. - Integration test exercising the full ASGI stack on client disconnect. - Static invariant test across the app's route table: every project-scoped endpoint whose return annotation includes StreamingResponse must carry @no_write_lock. - Adds a code-review guideline noting SSE endpoints require @no_write_lock. Note: this is a plumbing improvement and a precondition for forthcoming SSE cancellation work. It does not by itself restore client-disconnect cancellation — that is a separate Starlette-version issue handled in a follow-up. Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
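The short-circuit described above can be sketched as a pure-ASGI path (a minimal sketch, not the real GitSyncMiddleware: the resolver and manager hooks are stand-in callables, and only the `_git_sync_no_write_lock` marker and `ensure_fresh_for_read` call from the description are assumed):

```python
import asyncio

NO_WRITE_LOCK_ATTR = "_git_sync_no_write_lock"


class GitSyncBypassSketch:
    """Sketch of the __call__ override: @no_write_lock endpoints skip the
    BaseHTTPMiddleware machinery and run against the real ASGI channels."""

    def __init__(self, app, resolve_endpoint, get_manager):
        self.app = app                            # downstream ASGI app
        self.resolve_endpoint = resolve_endpoint  # scope -> endpoint or None
        self.get_manager = get_manager            # scope -> manager or None

    async def __call__(self, scope, receive, send):
        if scope["type"] != "http":
            # Non-HTTP scopes (websocket, lifespan) go straight through.
            await self.app(scope, receive, send)
            return
        endpoint = self.resolve_endpoint(scope)
        if endpoint is not None and getattr(endpoint, NO_WRITE_LOCK_ATTR, False):
            manager = self.get_manager(scope)
            if manager is not None:
                await manager.ensure_fresh_for_read()
                # Attach the manager so the handler can read it from scope state.
                scope.setdefault("state", {})["git_sync_manager"] = manager
            # Delegate with the *real* channels: no memory-stream proxying.
            await self.app(scope, receive, send)
            return
        # Everything else would take the normal super().__call__() -> dispatch()
        # path in the real middleware; the sketch just forwards.
        await self.app(scope, receive, send)
```

In the real middleware the final branch would call `super().__call__()` so non-bypassed requests still go through `dispatch()` unchanged.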
…ellation Starlette 0.41.3+ removed the unconditional `listen_for_disconnect` task from StreamingResponse, breaking server-side cancellation when SSE clients disconnect. This adds a CancellableStreamingResponse subclass that restores the previous behavior using an anyio task group, and swaps it into the three SSE helper functions (eval runner, document extractor, RAG workflow). Includes unit tests exercising happy-path streaming, client disconnect cancellation, background task execution, ASGI spec version independence, and exception propagation. Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
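The restored race can be sketched with plain asyncio (the real subclass uses an anyio task group as described; `stream_until_disconnect` and its helpers below are illustrative names, not Starlette's API):

```python
import asyncio


async def stream_until_disconnect(generate, receive, send):
    """Race the streaming body against a disconnect listener; whichever
    finishes first cancels the other (sketch of the task-group behavior).
    Returns True if the stream was cancelled by a disconnect."""

    async def stream_response():
        async for chunk in generate():
            await send({"type": "http.response.body", "body": chunk, "more_body": True})
        await send({"type": "http.response.body", "body": b"", "more_body": False})

    async def listen_for_disconnect():
        while True:
            message = await receive()
            if message["type"] == "http.disconnect":
                return

    stream = asyncio.create_task(stream_response())
    listener = asyncio.create_task(listen_for_disconnect())
    done, pending = await asyncio.wait(
        {stream, listener}, return_when=asyncio.FIRST_COMPLETED
    )
    for task in pending:
        # Cancelling the stream task cancels the generator too, so its
        # finally block gets a chance to run.
        task.cancel()
    await asyncio.gather(*pending, return_exceptions=True)
    return stream.cancelled()
```

This is the same shape the commit's unit tests exercise: normal completion finishes the stream task first, while a client disconnect finishes the listener first and tears the generator down.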
📝 Walkthrough
Adds a cancellable SSE response class and switches three SSE call sites to use it; introduces an ASGI-level GitSync middleware bypass for `@no_write_lock` endpoints.
Changes
Sequence Diagram(s)sequenceDiagram
participant Client
participant ASGI as "ASGI Server"
participant Handler as "CancellableStreamingResponse"
participant TaskGroup as "AnyIO TaskGroup"
participant StreamTask as "stream_response(send)"
participant DisconnectTask as "listen_for_disconnect(receive)"
participant Generator as "Endpoint Generator"
Client->>ASGI: HTTP Request (SSE)
ASGI->>Handler: call(scope, receive, send)
Handler->>TaskGroup: create task group
TaskGroup->>StreamTask: start stream_response(send)
TaskGroup->>DisconnectTask: start listen_for_disconnect(receive)
StreamTask->>Generator: iterate
Generator-->>StreamTask: yield chunk
StreamTask->>ASGI: send(http.response.body)
ASGI->>Client: stream chunk
alt Client disconnects
Client->>ASGI: http.disconnect
ASGI->>DisconnectTask: deliver disconnect
DisconnectTask->>TaskGroup: return -> trigger cancel_scope.cancel()
TaskGroup->>StreamTask: CancelledError -> generator cancellation
Generator->>Generator: run finally block
TaskGroup->>Handler: exit
else Normal completion
Generator-->>StreamTask: StopAsyncIteration
StreamTask->>TaskGroup: finish
TaskGroup->>Handler: exit normally
Generator->>Generator: run finally block
end
sequenceDiagram
participant Client
participant Middleware as "GitSyncMiddleware"
participant Router as "App Router / dispatch"
participant Endpoint as "@_git_sync_no_write_lock Endpoint"
participant Manager as "GitSyncManager"
Client->>Middleware: HTTP Request
Middleware->>Middleware: __call__(scope, receive, send)
Middleware->>Middleware: resolve endpoint & attributes
alt has _git_sync_no_write_lock
Middleware->>Manager: _get_or_create_manager()
Manager-->>Middleware: manager or None
opt manager found
Middleware->>Manager: ensure_fresh_for_read()
alt success
Middleware->>scope["state"]: attach manager
Middleware->>Endpoint: forward to app (self.app(scope, receive, send))
else GitSyncError
Middleware->>Client: JSON error response (mapped status/detail)
end
end
else no
Middleware->>Router: normal dispatch path
end
Endpoint-->>Client: Response (possibly SSE via CancellableStreamingResponse)
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
Possibly related PRs
Code Review
This pull request introduces CancellableStreamingResponse to ensure that server-side work for SSE endpoints is correctly cancelled upon client disconnect, addressing a regression in newer Starlette versions. It also updates GitSyncMiddleware with an ASGI bypass for @no_write_lock endpoints to facilitate direct streaming and prevent response buffering. Review feedback identifies a critical bug where the middleware could crash if an endpoint is not resolved and suggests a performance optimization to cache resolved endpoints in the ASGI scope to avoid redundant route scanning.
        return

        request = Request(scope)
        endpoint = self._resolve_endpoint(request)
_resolve_endpoint performs a linear scan of all registered routes. By calling it here in __call__ and then again in dispatch (via the super().__call__ path), the routing overhead is doubled for every standard HTTP request. Consider caching the resolved endpoint in the ASGI scope (e.g., scope["_git_sync_endpoint"] = endpoint) and updating dispatch to check this cache before performing another scan.
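A sketch of the suggested scope cache (the key name and helper are hypothetical; presence-checking the key keeps a resolved `None` from triggering a second scan):

```python
CACHE_KEY = "_git_sync_endpoint"


def resolve_endpoint_cached(scope, resolve_endpoint):
    """Resolve the endpoint once per request, caching the result in the
    ASGI scope dict so a later dispatch() can reuse it instead of
    rescanning the route table.

    A cached None (no matching route) is distinguished from "not yet
    resolved" by checking for the key's presence, not its truthiness.
    """
    if CACHE_KEY in scope:
        return scope[CACHE_KEY]
    endpoint = resolve_endpoint(scope)
    scope[CACHE_KEY] = endpoint
    return endpoint
```

The scope dict travels with the request through the middleware stack, which is what makes it a natural per-request cache here.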
📊 Coverage Report
Overall Coverage: 92%
Diff: origin/main...HEAD
Summary
Line-by-line (line-by-line diff coverage; lines marked `!` are uncovered)

libs/server/kiln_server/cancellable_streaming_response.py, lines 20-29:
   20      """
   21
   22      async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:
   23          if scope["type"] != "http":
 ! 24              await super().__call__(scope, receive, send)
 ! 25              return
   26
   27          with collapse_excgroups():
   28              async with anyio.create_task_group() as task_group:

libs/server/kiln_server/document_api.py, lines 174-182:
  174
  175      # Send the final complete message the app expects, and uses to stop listening
  176      yield "data: complete\n\n"
  177
! 178      return CancellableStreamingResponse(
  179          content=event_generator(),
  180          media_type="text/event-stream",
  181      )
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Actionable comments posted: 4
🧹 Nitpick comments (1)
.agents/api_code_review.md (1)
58-58: Minor phrasing nit in new SSE checklist rule. "needed or closing the browser won't stop the background job" reads a bit awkwardly — a comma (or "; otherwise") would clarify the conditional. Also consider splitting the compound bullet into two sub-bullets for readability since it mixes two distinct rules.
✏️ Suggested rewording

-- SSE endpoints should have 1) `@no_write_lock` annotation (which allows direct streaming). SSE endpoints missing it won't stream. 2) return a CancellableStreamingResponse - needed or closing the browser won't stop the background job (unless a comment says using a StreamingResponse is desired in this case).
+- SSE endpoints should have:
+  1. `@no_write_lock` annotation (which allows direct streaming). SSE endpoints missing it won't stream.
+  2. Return a `CancellableStreamingResponse`, otherwise closing the browser won't stop the background job (unless a comment explicitly states a plain `StreamingResponse` is desired).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.agents/api_code_review.md at line 58, Revise the SSE checklist bullet to improve clarity and readability: split the compound bullet into two sub-bullets — one stating that SSE endpoints must have the `@no_write_lock` annotation to allow direct streaming, and another stating they must return a CancellableStreamingResponse (otherwise closing the browser won’t stop the background job); mention StreamingResponse only if intentionally desired. Ensure the second sub-bullet uses a comma or “; otherwise” to make the conditional clear and mirror the exact symbols `@no_write_lock`, CancellableStreamingResponse, and StreamingResponse so reviewers can find the related code paths.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@app/desktop/git_sync/test_middleware.py`:
- Around line 1186-1218: The test is using StreamingResponse which exercises the
legacy Starlette disconnect path; replace the response with
CancellableStreamingResponse in sse_endpoint (use
CancellableStreamingResponse(gen(), media_type="text/event-stream")) and update
the ASGI scope's "asgi" version to {"version": "3.0", "spec_version": "2.4"} (or
set spec_version: "2.4") so the test hits the spec_version >= 2.4 fast path;
ensure any imports/reference to StreamingResponse are updated to import
CancellableStreamingResponse and keep the same generator logic and receive()
disconnect simulation.
In `@app/desktop/git_sync/test_sse_invariants.py`:
- Around line 24-28: The helper _return_type_is_streaming currently swallows
errors from get_type_hints and returns False, masking routes with unresolved
annotations; change the except block in _return_type_is_streaming to raise an
AssertionError (or re-raise a descriptive exception) that includes the
endpoint/function identifier (use fn.__name__ or repr(fn)) and the original
exception message so the guardrail test fails loudly when type-hint resolution
fails for a route instead of silently skipping it.
In `@app/desktop/studio_server/eval_api.py`:
- Around line 6-9: Move the CancellableStreamingResponse import into the
existing kiln_server import block so it is grouped with the other kiln_server.*
imports; specifically, remove the standalone "from
kiln_server.cancellable_streaming_response import CancellableStreamingResponse"
that sits between kiln_ai imports and add it to the block where other
kiln_server imports appear (keep the symbol name CancellableStreamingResponse
unchanged).
In `@specs/projects/sse_cancellable_response/functional_spec.md`:
- Around line 86-90: Update the completed spec to reference the actual test file
path used in the PR: replace the mention of
app/desktop/git_sync/test_cancellable_streaming_response.py with
libs/server/kiln_server/test_cancellable_streaming_response.py so the spec
accurately reflects the canonical location added in the PR; ensure any
surrounding text or links in the spec that point to the old path are updated to
the new libs/server/kiln_server path.
---
Nitpick comments:
In @.agents/api_code_review.md:
- Line 58: Revise the SSE checklist bullet to improve clarity and readability:
split the compound bullet into two sub-bullets — one stating that SSE endpoints
must have the `@no_write_lock` annotation to allow direct streaming, and another
stating they must return a CancellableStreamingResponse (otherwise closing the
browser won’t stop the background job); mention StreamingResponse only if
intentionally desired. Ensure the second sub-bullet uses a comma or “;
otherwise” to make the conditional clear and mirror the exact symbols
`@no_write_lock`, CancellableStreamingResponse, and StreamingResponse so reviewers
can find the related code paths.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 4aec5aaa-20e4-4c18-a28b-0df73ec4b2bb
📒 Files selected for processing (13)
.agents/api_code_review.md
app/desktop/git_sync/middleware.py
app/desktop/git_sync/test_middleware.py
app/desktop/git_sync/test_sse_invariants.py
app/desktop/studio_server/eval_api.py
libs/server/kiln_server/cancellable_streaming_response.py
libs/server/kiln_server/document_api.py
libs/server/kiln_server/test_cancellable_streaming_response.py
specs/projects/sse_cancellable_response/architecture.md
specs/projects/sse_cancellable_response/functional_spec.md
specs/projects/sse_cancellable_response/implementation_plan.md
specs/projects/sse_cancellable_response/phase_plans/phase_1.md
specs/projects/sse_cancellable_response/project_overview.md
def _return_type_is_streaming(fn: typing.Callable) -> bool:
    try:
        hints = get_type_hints(fn)
    except Exception:
        return False
Don’t silently skip routes when type-hint resolution fails.
Returning False here can hide a project-scoped SSE route from the invariant if one annotation fails to resolve. Since this is a guardrail test, fail with the endpoint name instead.
🧪 Proposed fix
 def _return_type_is_streaming(fn: typing.Callable) -> bool:
     try:
         hints = get_type_hints(fn)
-    except Exception:
-        return False
+    except Exception as exc:
+        raise AssertionError(
+            f"Could not resolve return type hints for "
+            f"{fn.__module__}.{fn.__qualname__}"
+        ) from exc

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
def _return_type_is_streaming(fn: typing.Callable) -> bool:
    try:
        hints = get_type_hints(fn)
    except Exception as exc:
        raise AssertionError(
            f"Could not resolve return type hints for "
            f"{fn.__module__}.{fn.__qualname__}"
        ) from exc
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/desktop/git_sync/test_sse_invariants.py` around lines 24 - 28, The helper
_return_type_is_streaming currently swallows errors from get_type_hints and
returns False, masking routes with unresolved annotations; change the except
block in _return_type_is_streaming to raise an AssertionError (or re-raise a
descriptive exception) that includes the endpoint/function identifier (use
fn.__name__ or repr(fn)) and the original exception message so the guardrail
test fails loudly when type-hint resolution fails for a route instead of
silently skipping it.
from fastapi.responses import StreamingResponse
from kiln_ai.adapters.eval.eval_runner import EvalRunner
from kiln_server.cancellable_streaming_response import CancellableStreamingResponse
from kiln_ai.adapters.fine_tune.finetune_run_config_id import (
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Confirm ruff's isort rules are enabled in the project config (would cause this import to fail lint).
fd -t f -H 'pyproject.toml|ruff.toml|.ruff.toml' | xargs -I{} sh -c 'echo "--- {} ---"; cat {}' | rg -n -C3 '\[tool\.ruff|select|isort|I\"|I\b'

Repository: Kiln-AI/Kiln
Length of output: 217
🏁 Script executed:
#!/bin/bash
# Find and display ruff configuration files
fd -t f -H 'pyproject.toml|ruff.toml|.ruff.toml' | head -5 | xargs -I{} sh -c 'echo "--- {} ---"; cat {}'

Repository: Kiln-AI/Kiln
Length of output: 5882
🏁 Script executed:
#!/bin/bash
# Read the eval_api.py file to check actual import order at lines 1-50
head -50 app/desktop/studio_server/eval_api.py | cat -n

Repository: Kiln-AI/Kiln
Length of output: 2276
🏁 Script executed:
#!/bin/bash
# Also check what's around line 34-40 where kiln_server imports are supposedly located
sed -n '30,45p' app/desktop/studio_server/eval_api.py | cat -n

Repository: Kiln-AI/Kiln
Length of output: 780
Move CancellableStreamingResponse import to the existing kiln_server.* import block.
The import on line 8 is placed between kiln_ai imports, violating isort sorting rules. Ruff's import sorting rules (I in extend-select in pyproject.toml) require imports to be grouped alphabetically by module within the third-party section. All other kiln_server.* imports are correctly grouped together at lines 34–40; this one should be placed with them.
✏️ Proposed fix

 from kiln_ai.adapters.eval.eval_runner import EvalRunner
-from kiln_server.cancellable_streaming_response import CancellableStreamingResponse
 from kiln_ai.adapters.fine_tune.finetune_run_config_id import (

Then add it with the other kiln_server imports:

 from kiln_ai.utils.name_generator import generate_memorable_name
+from kiln_server.cancellable_streaming_response import CancellableStreamingResponse
 from kiln_server.git_sync_decorators import build_save_context, no_write_lock

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/desktop/studio_server/eval_api.py` around lines 6 - 9, Move the
CancellableStreamingResponse import into the existing kiln_server import block
so it is grouped with the other kiln_server.* imports; specifically, remove the
standalone "from kiln_server.cancellable_streaming_response import
CancellableStreamingResponse" that sits between kiln_ai imports and add it to
the block where other kiln_server imports appear (keep the symbol name
CancellableStreamingResponse unchanged).
## Tests

Add a single new test file: `app/desktop/git_sync/test_cancellable_streaming_response.py` (placed here so it runs in CI with the other git_sync tests; the subclass lives in `libs/server` but sits near the middleware bypass that makes it meaningful).

Alternatively place it at `libs/server/kiln_server/test_cancellable_streaming_response.py` if that fits the project's test-locality convention better. Either location is acceptable — pick the one consistent with how neighboring code is tested.
Update the completed spec to the chosen test path.
The PR added libs/server/kiln_server/test_cancellable_streaming_response.py, but the spec still names app/desktop/git_sync/test_cancellable_streaming_response.py as the primary location. Since this spec is marked complete, make the actual path canonical.
📝 Proposed fix
-Add a single new test file: `app/desktop/git_sync/test_cancellable_streaming_response.py` (placed here so it runs in CI with the other git_sync tests; the subclass lives in `libs/server` but sits near the middleware bypass that makes it meaningful).
-
-Alternatively place it at `libs/server/kiln_server/test_cancellable_streaming_response.py` if that fits the project's test-locality convention better. Either location is acceptable — pick the one consistent with how neighboring code is tested.
+Add a single new test file: `libs/server/kiln_server/test_cancellable_streaming_response.py`, colocated with the `CancellableStreamingResponse` implementation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@specs/projects/sse_cancellable_response/functional_spec.md` around lines 86 -
90, Update the completed spec to reference the actual test file path used in the
PR: replace the mention of
app/desktop/git_sync/test_cancellable_streaming_response.py with
libs/server/kiln_server/test_cancellable_streaming_response.py so the spec
accurately reflects the canonical location added in the PR; ensure any
surrounding text or links in the spec that point to the old path are updated to
the new libs/server/kiln_server path.
🧹 Nitpick comments (2)
app/desktop/git_sync/middleware.py (2)
142-145: Dead condition in `needs_lock` after the bypass.
`@no_write_lock` endpoints are now fully intercepted in `__call__` and never reach `dispatch()`, so `not getattr(endpoint, "_git_sync_no_write_lock", False)` is always `True` here. The clause can be dropped for clarity, or kept as defense-in-depth with a comment explaining why.

♻️ Optional simplification

- needs_lock = (
-     request.method in MUTATING_METHODS
-     or getattr(endpoint, "_git_sync_write_lock", False)
- ) and not getattr(endpoint, "_git_sync_no_write_lock", False)
+ # `@no_write_lock` endpoints are handled in __call__ before reaching
+ # dispatch(), so we only need to decide between write-lock and read paths.
+ needs_lock = request.method in MUTATING_METHODS or getattr(
+     endpoint, "_git_sync_write_lock", False
+ )

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/desktop/git_sync/middleware.py` around lines 142 - 145, The boolean expression computing needs_lock is using the check not getattr(endpoint, "_git_sync_no_write_lock", False) which is dead because `@no_write_lock` endpoints are already intercepted in __call__ and never reach dispatch(); update the logic in the dispatch path to remove that redundant clause (or, if you prefer defense-in-depth, keep it but add a clarifying comment) so that needs_lock only combines request.method in MUTATING_METHODS and getattr(endpoint, "_git_sync_write_lock", False); specifically edit the computation of needs_lock in dispatch (referencing needs_lock, dispatch, __call__, _git_sync_no_write_lock, _git_sync_write_lock, and MUTATING_METHODS) to drop the always-true negation or document why it remains.
81-84: Endpoint is resolved twice per non-bypassed request.
`_resolve_endpoint` does a linear scan over `app.routes`; `__call__` runs it, then `dispatch()` runs it again on the same request. For large route tables this doubles the cost. If you want to avoid it, stash the resolved endpoint in `scope` from `__call__` and reuse it in `dispatch`:

♻️ Optional: cache endpoint in scope

 async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:
     if scope["type"] != "http":
         await super().__call__(scope, receive, send)
         return
     request = Request(scope)
     endpoint = self._resolve_endpoint(request)
+    scope["_git_sync_endpoint"] = endpoint
     if endpoint is None or not getattr(endpoint, "_git_sync_no_write_lock", False):
         await super().__call__(scope, receive, send)
         return
 @@
-    endpoint = self._resolve_endpoint(request)
+    endpoint = request.scope.get("_git_sync_endpoint", None)
+    if "_git_sync_endpoint" not in request.scope:
+        endpoint = self._resolve_endpoint(request)

Given the docstring already calls out the linear-scan cost as acceptable for typical apps, feel free to skip.
Also applies to: 132-132
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/desktop/git_sync/middleware.py` around lines 81 - 84, The code currently calls _resolve_endpoint twice (once in __call__ and again in dispatch), which duplicates the linear scan over app.routes; modify __call__ to stash the resolved endpoint into scope (e.g., scope["endpoint"] or a prefixed key) when it computes endpoint and then update dispatch to first check scope for that cached endpoint before calling _resolve_endpoint, preserving the existing check for getattr(endpoint, "_git_sync_no_write_lock", False) so behaviour is unchanged but the expensive resolution is only done once per request.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@app/desktop/git_sync/middleware.py`:
- Around line 142-145: The boolean expression computing needs_lock is using the
check not getattr(endpoint, "_git_sync_no_write_lock", False) which is dead
because `@no_write_lock` endpoints are already intercepted in __call__ and never
reach dispatch(); update the logic in the dispatch path to remove that redundant
clause (or, if you prefer defense-in-depth, keep it but add a clarifying
comment) so that needs_lock only combines request.method in MUTATING_METHODS and
getattr(endpoint, "_git_sync_write_lock", False); specifically edit the
computation of needs_lock in dispatch (referencing needs_lock, dispatch,
__call__, _git_sync_no_write_lock, _git_sync_write_lock, and MUTATING_METHODS)
to drop the always-true negation or document why it remains.
- Around line 81-84: The code currently calls _resolve_endpoint twice (once in
__call__ and again in dispatch), which duplicates the linear scan over
app.routes; modify __call__ to stash the resolved endpoint into scope (e.g.,
scope["endpoint"] or a prefixed key) when it computes endpoint and then update
dispatch to first check scope for that cached endpoint before calling
_resolve_endpoint, preserving the existing check for getattr(endpoint,
"_git_sync_no_write_lock", False) so behaviour is unchanged but the expensive
resolution is only done once per request.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 618ac958-8cdc-439b-bfe3-1774a9330923
📒 Files selected for processing (1)
app/desktop/git_sync/middleware.py
- test_middleware.py: switch the integration SSE disconnect test to CancellableStreamingResponse with asgi spec_version "2.4" so it actually exercises the production uvicorn code path. Previously the endpoint returned plain StreamingResponse under the default spec_version "2.0", which takes Starlette's working pre-regression branch — the test passed even when call sites forgot the subclass.
- test_sse_invariants.py: narrow the get_type_hints catch to (TypeError, NameError) and log a warning so an unresolved forward reference no longer silently excludes an endpoint from the @no_write_lock invariant check.
- middleware.py: document why dispatch intentionally omits request.state.git_sync_manager (the ASGI bypass in _no_write_lock_asgi is the only path that attaches it today), so a future edit adding build_save_context to a regular read endpoint fails loudly at review rather than with AttributeError at runtime.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
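The spec_version gate the first bullet relies on can be sketched as follows (a hedged sketch based only on this commit's claim that ASGI spec_version >= 2.4 takes the fast path; the helper name and the major.minor parsing are illustrative, not Starlette's code):

```python
def supports_early_disconnect(scope):
    """Report whether the server advertises an ASGI spec_version of 2.4 or
    newer, the level at which http.disconnect can arrive mid-stream.
    Defaults to "2.0" when the server omits the field."""
    spec = scope.get("asgi", {}).get("spec_version", "2.0")
    major, minor = (int(part) for part in spec.split(".")[:2])
    return (major, minor) >= (2, 4)
```

This is why the test must build its scope with `{"asgi": {"version": "3.0", "spec_version": "2.4"}}`: the default "2.0" scope exercises the legacy branch instead.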
🧹 Nitpick comments (2)
app/desktop/git_sync/middleware.py (1)
75-87: `_resolve_endpoint` runs twice on the dispatch path.
`__call__` now resolves the endpoint for every HTTP request to decide bypass-vs-dispatch. When the decision is "dispatch" (the common case for every non-SSE project route + every non-project route), `dispatch()` calls `_resolve_endpoint` again at line 138. Given the comment on `_resolve_endpoint` already flags the linear-scan cost, consider threading the result through (e.g., stash it on the scope or pass to `dispatch` via an instance attribute per-request) so the scan runs once.

Not blocking — route tables here are small — but trivially avoidable duplicate work on every hot-path request.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/desktop/git_sync/middleware.py` around lines 75 - 87, The code currently calls _resolve_endpoint twice (once in __call__ to decide bypass vs dispatch, and again in dispatch), so stash the resolved endpoint result to avoid the duplicate linear-scan: when __call__ computes endpoint, attach it to the request scope (e.g., scope["_resolved_endpoint"]) or set a per-request attribute on the middleware instance, then update dispatch to check for and consume that cached value instead of calling _resolve_endpoint again; ensure _no_write_lock_asgi and dispatch read and clear the cached endpoint so subsequent middleware/requests are unaffected.

app/desktop/git_sync/test_middleware.py (1)
1123-1165: Minor: `wraps=None` is redundant and the "try/except: pass" hides regressions.
`patch.object(GitSyncMiddleware, "_resolve_endpoint", wraps=None)` behaves identically to `patch.object(GitSyncMiddleware, "_resolve_endpoint")` since `wraps=None` is already the default — the parameter isn't doing anything here. Also, `try: ... except Exception: pass` swallows every error inside `app(scope, receive, send)`, which would mask real regressions (e.g. if `__call__` started blowing up for websocket scopes for unrelated reasons). Consider narrowing the except or removing it — the `asyncio.wait_for` timeout is sufficient to bound the call, and an exception from the websocket super path is the normal outcome you can let propagate up to the assertion or assert against explicitly.

♻️ Suggested tightening

- with patch.object(
-     GitSyncMiddleware, "_resolve_endpoint", wraps=None
- ) as mock_resolve:
-     try:
-         await asyncio.wait_for(app(scope, receive, send), timeout=1.0)
-     except Exception:
-         pass
+ with patch.object(GitSyncMiddleware, "_resolve_endpoint") as mock_resolve:
+     with contextlib.suppress(asyncio.TimeoutError):
+         await asyncio.wait_for(app(scope, receive, send), timeout=1.0)
  mock_resolve.assert_not_called()

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/desktop/git_sync/test_middleware.py` around lines 1123 - 1165, The patch is redundant and the test hides failures: remove the unnecessary wraps=None when patching GitSyncMiddleware._resolve_endpoint (use patch.object(GitSyncMiddleware, "_resolve_endpoint") or a context manager) and eliminate the broad try/except that swallows all exceptions around await asyncio.wait_for(app(scope, receive, send), timeout=1.0); instead let exceptions propagate (or catch only asyncio.TimeoutError if you need to assert on timeouts) so failures in GitSyncMiddleware.__call__ for websocket scopes are not masked.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@app/desktop/git_sync/middleware.py`:
- Around line 75-87: The code currently calls _resolve_endpoint twice (once in
__call__ to decide bypass vs dispatch, and again in dispatch), so stash the
resolved endpoint result to avoid the duplicate linear-scan: when __call__
computes endpoint, attach it to the request scope (e.g.,
scope["_resolved_endpoint"]) or set a per-request attribute on the middleware
instance, then update dispatch to check for and consume that cached value
instead of calling _resolve_endpoint again; ensure _no_write_lock_asgi and
dispatch read and clear the cached endpoint so subsequent middleware/requests
are unaffected.
In `@app/desktop/git_sync/test_middleware.py`:
- Around line 1123-1165: The patch is redundant and the test hides failures:
remove the unnecessary wraps=None when patching
GitSyncMiddleware._resolve_endpoint (use patch.object(GitSyncMiddleware,
"_resolve_endpoint") or a context manager) and eliminate the broad try/except
that swallows all exceptions around await asyncio.wait_for(app(scope, receive,
send), timeout=1.0); instead let exceptions propagate (or catch only
asyncio.TimeoutError if you need to assert on timeouts) so failures in
GitSyncMiddleware.__call__ for websocket scopes are not masked.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 7ae483b9-e801-45f5-affc-71679e42ae21
📒 Files selected for processing (3)
app/desktop/git_sync/middleware.py
app/desktop/git_sync/test_middleware.py
app/desktop/git_sync/test_sse_invariants.py
🚧 Files skipped from review as they are similar to previous changes (1)
- app/desktop/git_sync/test_sse_invariants.py
If we go with this fix, you can take this to also use the fix on the two remaining SSE endpoints (chat):
Summary by CodeRabbit
Bug Fixes
New Features
Tests
Documentation