# Conflicts:
#   daiv/chat/api/utils.py
#   daiv/chat/api/views.py
#   uv.lock
Drop the now-redundant set_message_in_progress override on
RuntimeContextLangGraphAGUIAgent: upstream already guards the None
sentinel merge with or {}, so the override was a no-op (and was
triggering Liskov type-check errors because its signature widened
MessageInProgress to dict[str, Any]).
The get_stream_kwargs override is kept because upstream still dict-
merges config/context via .update(), which is incompatible with our
frozen RuntimeCtx dataclass passed to LangGraph as context_schema.
The old tests imported MODEL_ID and hit /models endpoints that were removed when the chat API was rewritten on top of CopilotKit/AG-UI. They failed at collection time. Replace them with coverage of the view's only custom behavior: the 404 raised when X-Repo-ID or X-Ref headers are missing. Everything past that point is third-party (AG-UI, LangGraph, ninja) and not worth exercising in unit tests.
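The one behavior left to cover is that header check. A minimal sketch of the guard under test — `require_repo_headers` and this `HttpError` shape are illustrative stand-ins, not the real django-ninja view code:

```python
# Hypothetical sketch of the view's only custom behavior: reject
# requests missing X-Repo-ID or X-Ref with a 404. The real view sits
# on django-ninja + AG-UI; names here are assumptions for illustration.

class HttpError(Exception):
    def __init__(self, status_code: int, message: str):
        super().__init__(message)
        self.status_code = status_code


def require_repo_headers(headers: dict) -> tuple[str, str]:
    """Return (repo_id, ref) or raise a 404 if either header is absent."""
    repo_id = headers.get("X-Repo-ID")
    ref = headers.get("X-Ref")
    if not repo_id or not ref:
        raise HttpError(404, "Repository not found")
    return repo_id, ref
```

Everything downstream of this guard is third-party behavior, which is why the tests stop here.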
…n activity detail
- Use aget_or_create to resolve TOCTOU races on thread creation (ChatThread.aget_or_create_from_activity and the API endpoint's implicit-create path)
- Replace role-normalization ternary with a lookup dict
- Use reverse("chat_list") instead of hardcoded URLs in breadcrumbs
- Rewrite _extract_first_user_message as a generator expression
- Drop dead <template id="message-template"> block and the stale template lookup in chat-stream.js; the inline renderer has always been the only path
- Index tool_calls by id (Map) for O(1) lookup instead of Array.find on every arg chunk
- Debounce scrollToBottom via requestAnimationFrame to avoid layout thrash under high-rate TEXT_MESSAGE_CHUNK events
- Drop unused 'expired' Alpine prop; the template already handles it server-side
- Route issue_addressor.py and review_addressor.py through core.checkpointer.open_checkpointer instead of inline AsyncRedisSaver.from_conn_string
- Wrap the AG-UI event generator in try/except: on any exception, log and emit a synthetic RUN_ERROR event so the browser surfaces a real error instead of silently terminating the SSE stream with an empty assistant bubble.
- Replace read-then-write with a single conditional aupdate that claims the active_run_id slot only if currently empty — parallel tabs now race to a single winner and the loser gets 409.
- Lift slot release out of the handler into a _release_thread helper used by the generator's finally block.
- Add tests: happy-path active_run_id cleanup, exception-path cleanup with RUN_ERROR emission, plus unit tests for _extract_first_user_message edge cases (empty, non-string content, whitespace-only).
- Client: console.error on network failure + friendlier user message, map known HTTP errors (403/404/409) to actionable copy instead of dumping raw response body, escalate malformed SSE frames from warn to error.
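The run-slot claim above can be sketched as a compare-and-set. The real code uses a conditional Django `aupdate` (a filter on an empty `active_run_id`, so the database performs the check-and-write atomically); here an in-memory dict stands in for the table so the single-winner semantics are visible — names like `claim_run_slot` are illustrative:

```python
# In-memory stand-in for the ChatThread table; the DB version is
# atomic, this demo is single-threaded so check-then-set suffices.
threads: dict[int, dict] = {1: {"active_run_id": None}}


def claim_run_slot(thread_id: int, run_id: str) -> int:
    """Return 200 if this run claimed the slot, 409 if already taken.

    Roughly equivalent to the conditional update:
        ChatThread.objects.filter(id=thread_id, active_run_id__isnull=True)
                          .aupdate(active_run_id=run_id)  # rows updated: 1 or 0
    """
    row = threads[thread_id]
    if row["active_run_id"] is None:
        row["active_run_id"] = run_id
        return 200
    return 409


def release_run_slot(thread_id: int) -> None:
    """Called from the stream generator's finally block (_release_thread)."""
    threads[thread_id]["active_run_id"] = None
```

Two parallel tabs both calling `claim_run_slot` get one 200 and one 409, which is the race-to-a-single-winner behavior the bullet describes.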
…CDN scripts, slimmer rail
…esume
- Empty state now mounts a single-repo picker (extracted partial reused
by activity); composer fades in once a repo is chosen.
- Render reasoning/thinking blocks as collapsible segments, including
out-of-band reasoning_content from DeepSeek/Qwen/xAI providers.
- Fold the skill tool's injected SKILL.md body into the tool result so
reload doesn't show it as a phantom user turn.
- Files-touched rail derives ops (added/modified/deleted/renamed) from
the sandbox's unified-diff patch, covering bash mutations too.
- edit_file diff body uses jsdiff for real hunks; tool signatures parse
partial-JSON args so summaries update mid-stream.
- New /threads/{id}/status endpoint + activeRunId hydration lets a
reloaded page detect when an in-flight run releases its slot.
- OpenRouter reasoning kwargs send enabled=true (z.ai GLM ignores effort
alone).
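A hedged sketch of the status payload behind the new endpoint: a reloaded page hydrates `activeRunId`, then polls `/threads/{id}/status` and watches for the slot to empty. Field names here are assumptions, not the real schema:

```python
# Illustrative /threads/{id}/status payload builder. A reloaded page
# compares active_run_id against the run it hydrated with; once the
# in-flight run releases its slot, active_run_id goes back to None.

def thread_status(thread: dict) -> dict:
    active = thread.get("active_run_id")
    return {
        "id": thread["id"],
        "active_run_id": active,          # None once the run releases its slot
        "is_running": active is not None,
    }
```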
- Flesh out chat-text typography: heading scale with editorial rules under h1/h2, list markers and task-list checkboxes, blockquote side rail, fading hr, kbd keycap, mark/img defaults, and an amber-tinted inline-code chip with a heading-aware override.
- Tables get 100% width, scroll on overflow, header tint, zebra rows, and hover highlight so transcripts read as data, not text.
- Pre blocks pick up a thin custom scrollbar so horizontal scroll feels intentional.
- grep signature folds an optional glob arg into the scope label; glob expanded body parses the Python-list-repr result so users see one path per line instead of a quoted one-liner.
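The glob-body parsing in the last bullet amounts to evaluating a Python-list repr safely. The real renderer is client-side JavaScript; this Python sketch (`glob_paths` is a made-up name) shows the parsing rule with `ast.literal_eval` and a fall-through for anything that isn't a list:

```python
import ast


def glob_paths(result: str) -> str:
    """Turn a Python-list repr like "['a.py', 'b.py']" into one path
    per line; return the raw string unchanged if it doesn't parse as a list."""
    try:
        parsed = ast.literal_eval(result)
    except (ValueError, SyntaxError):
        return result  # not a Python literal: show the quoted one-liner as-is
    if isinstance(parsed, list):
        return "\n".join(str(p) for p in parsed)
    return result
```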
When a chat opens on a branch that already has an open merge request (created in a prior conversation, by the agent on a different thread, or by a human via the platform UI), the composer pill now appears without waiting for the agent to touch git state.
- New RepoClient.get_merge_request_by_branches abstract + GitLab/GitHub implementations; SWE client raises NotImplementedError.
- Shared chat.repo_state.aget_existing_mr_payload helper backed by the cached RepositoryConfig.get_config for default-branch lookup.
- ChatThreadDetailView falls back to the helper at server-render time.
- _emit_repo_state falls back to the helper when LangGraph state has no MR, so the streamed daiv:repo_state event surfaces existing MRs after the first message of a new chat.
- Render diff_to_metadata subagent structured-response tools (PullRequestMetadata, CommitMetadata) as compact phase chips in the assistant turn and status bar; silence their text/reasoning frames upstream via emit-messages: False so partial JSON never leaks.
- Add an MR pill next to the repo chip that hydrates from the server-rendered checkpoint and updates live on daiv:repo_state custom events, with an accent on the branch segment once GitMiddleware commits to a working ref.
- Tighten the new-chat empty state: drop the rail column, dock the composer under the hero, stack subtitles in one grid cell to kill the layout jump on repo pick, and soften the composer fade so the top border stays visible.
- Suppress asyncio CancelledError tracebacks logged by asgiref when an SSE client disconnects mid-stream; cleanup already runs in the view's finally, the trace is just noise.
Stop suppressing tool-call events whose id differs from the active task when the call itself is another task — sibling parallel-audit subagents were being dropped on the floor, leaving the matching TOOL_CALL_RESULT with nothing to bind to until a page reload rehydrated from checkpoint. For genuinely inner tool calls, tick an innerToolsCount on the parent task segment so the user has some live signal during the long quiet window, surfaced as a small "N steps" chip on the running task card.
Move from a post-run ``daiv:repo_state`` CustomEvent to AG-UI's native STATE_SNAPSHOT stream for the composer MR pill. ``GitMiddleware`` now publishes ``merge_request`` on the public output schema and seeds it with any pre-existing open MR on the current branch, so the pill reflects reality from the first turn — no end-of-run checkpoint probe.

Split ``chat/api/views.py`` into ``streaming.py`` (SSE generator + ``RuntimeContextLangGraphAGUIAgent``), ``threads.py`` (run-slot claim/release + ref persistence), and ``event_filter.py`` (the subagent event reorder/suppress that previously lived inline).

Drop the running-task step counter; the filter now handles nested frames server-side, so the client-side workaround is dead weight.

Tool-stream polish:
- web_search returns a JSON array; the renderer parses it into per-hit cards and a hit-count badge
- web_fetch / gitlab / gh prefix failures with ``error:`` and the gitlab CLI truncation appends a sentinel — both feed result-row badges
- new body renderers for web_fetch, web_search, gitlab, gh
- activity-stream uses class-map objects so Alpine swaps status variants cleanly instead of letting old classes linger
ag_ui_langgraph only tracks one current_stream.tool_call_id, so when the LLM emits parallel tool_calls in one AIMessage, sibling args are attributed to the first call's id and concatenated into its segment. Generalize the subagent event filter to:
- synthesize TOOL_CALL_START/ARGS/END for any tcid not naturally started (covers the dropped-start case AND parallel siblings),
- drop TOOL_CALL_ARGS whose underlying chunk index > 0 (misrouted),
- yield STATE_SNAPSHOT before synthesized segments so state-driven UI updates commit first.

Also surface read_file errors and "no matches" in grep results in the chat tool renderer.
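The first two filter rules can be sketched as a generator over the event stream. The dict-based event shape and `filter_tool_events` name are illustrative, not the real ag_ui_langgraph types; the STATE_SNAPSHOT ordering rule is omitted for brevity:

```python
# Sketch of the generalized filter: remember which tool_call_ids have
# seen a natural TOOL_CALL_START; synthesize a START for any id that
# first appears via ARGS/END, and drop ARGS frames whose underlying
# chunk index > 0 (the misrouted parallel-sibling case).

def filter_tool_events(events):
    started: set[str] = set()
    for ev in events:
        tcid = ev.get("tool_call_id")
        if ev["type"] == "TOOL_CALL_START":
            started.add(tcid)
        elif ev["type"] in ("TOOL_CALL_ARGS", "TOOL_CALL_END"):
            if ev["type"] == "TOOL_CALL_ARGS" and ev.get("chunk_index", 0) > 0:
                continue  # args misattributed to the wrong call: drop
            if tcid not in started:
                started.add(tcid)
                # synthesize the START this sibling never got
                yield {"type": "TOOL_CALL_START", "tool_call_id": tcid}
        yield ev
```

With this in place a parallel sibling's first args chunk is preceded by a synthesized START, so its later TOOL_CALL_RESULT has a segment to bind to without a reload.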
Introduces a shared titling pipeline (heuristic + LLM fallback via django-tasks) that backfills concise human-readable titles on Activity and ChatThread rows. Webhook-triggered activities reuse the issue/MR title; scheduled batch runs get a deterministic name; prompt-driven runs and chat threads enqueue an async generation task with LangSmith monitoring metadata. Threads created from an activity reuse its existing title to avoid a duplicate LLM call.
…nescaping'

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
Add a pk attribute to _FakeActivity and patch generate_title_task so the post-create title enqueue path introduced in 166e628 doesn't crash mock-driven submit_job tests.