
feat: add experimental global AI assistant option #4567

Merged
josephjclark merged 12 commits into main from 4532-experimental-global-assistant
Apr 8, 2026

Conversation

@elias-ba
Contributor

@elias-ba elias-ba commented Mar 26, 2026

Description

Adds an opt-in checkbox to the AI Assistant UI that routes messages to Apollo's global chat endpoint instead of the separate job_chat/workflow_chat endpoints. The global chat unifies both behind a router that intelligently decides how to handle each request.

Gated behind the existing experimental features preference - only users who have enabled experimental features in their profile see the checkbox.
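The routing decision described above can be sketched as follows. This is an illustrative TypeScript rendering, not the actual Lightning code: the `selectEndpoint` name, the `ChatContext` type, and the legacy endpoint paths are assumptions; only `/services/global_chat/stream` appears in the commit messages below.

```typescript
// Hypothetical sketch of the opt-in routing: when the experimental
// checkbox is ticked, every message goes to the unified global_chat
// router; otherwise the existing per-context endpoints are used.
type ChatContext = "job" | "workflow";

function selectEndpoint(
  context: ChatContext,
  useGlobalAssistant: boolean
): string {
  if (useGlobalAssistant) {
    // One router endpoint decides server-side how to handle the request.
    return "/services/global_chat/stream";
  }
  // Legacy behaviour: separate endpoints per context (paths assumed).
  return context === "job" ? "/services/job_chat" : "/services/workflow_chat";
}
```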

Closes #4532

Validation steps

  1. Enable experimental features in user profile
  2. Open a workflow in the editor, open the AI assistant panel
  3. Verify "Global assistant (experimental)" checkbox appears next to the PII warning
  4. Check the checkbox - badge should change to "Global (experimental)" in amber
  5. Send a message - verify it routes through global_chat (check Apollo logs for "HTTP stream connected to global_chat")
  6. On the workflow canvas, ask to modify workflow structure - verify "Generated Workflow" card with Apply
  7. Navigate to a job, ask to modify code - verify "Generated Job Code" card with Preview/Apply
  8. Uncheck the checkbox - verify messages go through normal job_chat/workflow_chat as before
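Steps 4 and 8 above hinge on the checkbox state, which (per the commits below) is persisted to localStorage. A minimal sketch of that behaviour, assuming a hypothetical storage key and an assumed default badge label ("Global (experimental)" is the only label confirmed in the PR):

```typescript
// Minimal key/value interface so the sketch is testable without a DOM;
// in the browser, window.localStorage satisfies it.
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

// Storage key name is hypothetical.
const STORAGE_KEY = "lightning.ai.useGlobalAssistant";

function loadGlobalAssistantPreference(storage: KVStore): boolean {
  return storage.getItem(STORAGE_KEY) === "true";
}

function saveGlobalAssistantPreference(storage: KVStore, enabled: boolean): void {
  storage.setItem(STORAGE_KEY, String(enabled));
}

function badgeLabel(enabled: boolean): string {
  // Step 4: the badge switches to "Global (experimental)" when active.
  // The inactive label is a placeholder assumption.
  return enabled ? "Global (experimental)" : "AI Assistant";
}
```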

Additional notes for the reviewer

AI Usage

  • I have used Claude Code
  • I have used another model
  • I have not used AI

Pre-submission checklist

  • I have performed an AI review of my code (we recommend using /review with Claude Code)
  • I have implemented and tested all related authorization policies. (e.g., :owner, :admin, :editor, :viewer)
  • I have updated the changelog.
  • I have ticked a box in "AI usage" in this PR

@github-project-automation github-project-automation bot moved this to New Issues in Core Mar 26, 2026
@elias-ba elias-ba marked this pull request as ready for review March 26, 2026 03:07
@elias-ba elias-ba force-pushed the 4532-experimental-global-assistant branch from 54a117e to cda4590 on March 26, 2026 03:59
@codecov

codecov bot commented Mar 26, 2026

Codecov Report

❌ Patch coverage is 98.43750% with 1 line in your changes missing coverage. Please review.
✅ Project coverage is 89.58%. Comparing base (7a47ef2) to head (ebc5938).
⚠️ Report is 11 commits behind head on main.

Files with missing lines Patch % Lines
lib/lightning_web/channels/ai_assistant_channel.ex 90.00% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #4567      +/-   ##
==========================================
+ Coverage   89.51%   89.58%   +0.06%     
==========================================
  Files         441      441              
  Lines       21205    21265      +60     
==========================================
+ Hits        18982    19050      +68     
+ Misses       2223     2215       -8     

☔ View full report in Codecov by Sentry.

Base automatically changed from 3585-ai-assistant-streaming to main March 27, 2026 15:25
@elias-ba elias-ba force-pushed the 4532-experimental-global-assistant branch from 8185c5e to 5b5dbda on March 27, 2026 16:41
elias-ba added 11 commits March 27, 2026 19:37
- Add global_chat_stream/2 to ApolloClient for /services/global_chat/stream
- Add global_chat?/1 and process_global_message/2 to MessageProcessor
- Route to global_chat when use_global_assistant is in session meta
- Pass use_global_assistant and page through channel message_options
- Add experimental_features_enabled to workflow channel get_context
- Add query_global_stream/3 using process_stream with build_global_message
- build_global_message extracts code from attachments (job_code or workflow_yaml)
- Context-aware: on job step prefers job_code, on overview prefers workflow_yaml
- Resolves job_id from job_key in attachment by matching against workflow jobs
- Add experimental_features_enabled to SessionContextResponseSchema
- Map to experimentalFeaturesEnabled in SessionContextStore
- Add useExperimentalFeaturesEnabled hook
- Add use_global_assistant and page to MessageOptions type
- Add Global assistant (experimental) checkbox to ChatInput with
  localStorage persistence and onGlobalAssistantChange callback
- Pass use_global_assistant and page through AIChannelRegistry buildJoinParams
- Wire checkbox through AIAssistantPanel with showGlobalAssistantOption,
  isGlobalAssistantActive, and onGlobalAssistantChange props
- AIAssistantPanelWrapper includes workflow YAML and page when global
  assistant is active, derives page from workflow/job context
- Badge shows "Global (experimental)" in amber when active
The Zod schema adds a default value for the new optional field,
so the parsed output includes it. Update the test fixture to match.
Split build_global_message into smaller functions to satisfy
credo's complexity threshold.
- ApolloClient: global_chat_stream payload, nil filtering, error handling
- AiAssistant: query_global_stream SSE processing, job_code vs workflow_yaml
  extraction, job resolution from job_key, error handling
- MessageProcessor: global_chat routing when use_global_assistant is set
- AiAssistantChannel: session options and message options with global assistant
- WorkflowChannel: experimental_features_enabled in get_context
Cover workflow_yaml fallback on overview pages, nil job_key handling
in resolve_job_from_key, and non-list attachments.
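The context-aware extraction in `build_global_message` (job step prefers `job_code`, overview prefers `workflow_yaml`) can be illustrated as below. The real implementation is Elixir inside Lightning's AiAssistant module; this TypeScript sketch, including the `Attachment` shape and `extractCode` name, is an assumption for illustration only.

```typescript
// Page context the user is on when sending the message.
type Page = "job" | "overview";

// Assumed attachment shape: either or both code fields may be present.
interface Attachment {
  job_code?: string;
  workflow_yaml?: string;
}

function extractCode(page: Page, attachment: Attachment): string | undefined {
  // On a job step, prefer the job's code; on the workflow overview,
  // prefer the workflow YAML. Fall back to whichever field is present.
  if (page === "job") {
    return attachment.job_code ?? attachment.workflow_yaml;
  }
  return attachment.workflow_yaml ?? attachment.job_code;
}
```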
@elias-ba elias-ba force-pushed the 4532-experimental-global-assistant branch from 5b5dbda to 8a8e32f on March 27, 2026 19:38
@hanna-paasivirta

@elias-ba When I try a conversational turn with no YAML generation in the workflow chat (no experimental features turned on) I get some errors:


[screenshot: AI Assistant workflow chat showing an error]

Sometimes the streamed answer disappears and it seems to regenerate a new answer, which in this case was malformed:

[screenshot: malformed regenerated answer in the workflow chat]

I'm not sure why at this stage, but I'll continue testing.

@hanna-paasivirta

@elias-ba When I try a conversational turn in the global chat (experimental features turned on), I get a similar error:

[screenshot: AI Assistant error in Global (experimental) mode]

When the global chat routes to job_code_agent and the job can't be
resolved to a DB record, store the job_key in message meta as
from_global_job_code so the frontend can render "Generated Job Code"
with a diff view instead of a generic workflow card.
@josephjclark
Collaborator

Hi @elias-ba - haven't run deep testing but the behaviour looks generally right. Happy with the UI.

I'm not seeing any streaming; is that expected? And @hanna-paasivirta, do we have many streaming events integrated into the global assistant? It's so slow that giving users that feedback is going to be very important.

I've seen some very bad responses come back from job chat - just wanted to see how broken it is and yes, it feels a bit broken. I think I'm OK with that - it depends on if the problems are frontend or backend. I might take a look at the implementation this afternoon just to better understand what we have here.

@hanna-paasivirta

@josephjclark No, I haven't implemented streaming in the global agent.

Collaborator

@theroinaochieng theroinaochieng left a comment


Elias, the expected user experience looks good to me. I wasn't able to validate the data streaming described in the validation steps, though.

@hanna-paasivirta

I will go ahead and fix the problem in the screenshots separately (OpenFn/apollo#442). So this PR can be merged.

Collaborator

@josephjclark josephjclark left a comment


We think the outstanding issues are in production or purely on the Apollo side, so there's no action to take here.

We are prioritising streaming in the global chat because I think that's important.

@josephjclark josephjclark merged commit baeb59a into main Apr 8, 2026
8 checks passed
@josephjclark josephjclark deleted the 4532-experimental-global-assistant branch April 8, 2026 09:22
@github-project-automation github-project-automation bot moved this from New Issues to Done in Core Apr 8, 2026
@elias-ba
Contributor Author

elias-ba commented Apr 8, 2026

Thanks so much for taking the time to test this @josephjclark @hanna-paasivirta @theroinaochieng, and for helping merge it!

@josephjclark @theroinaochieng what's happening is that streaming actually works when the global assistant routes a question directly to the workflow or job chat agents. But when it goes through the planner (which handles the more involved, multi-step requests), the planner doesn't stream the response text yet: it just sends a couple of status updates like "Thinking..." and then the full answer lands all at once. That's why it feels slow with no feedback. The good news is that everything on the Lightning side is already set up to handle it, so once @hanna-paasivirta adds streaming to the planner on the Apollo side, it should just work here without any further changes.
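The two behaviours described here can be sketched with a small event reducer. The event shapes below are assumptions for illustration, not Apollo's actual wire format: chat agents emit incremental tokens, while the planner currently emits only status updates followed by one complete answer.

```typescript
// Assumed stream event shapes: "token" events arrive incrementally from
// the chat agents; the planner today sends "status" updates and then a
// single "complete" event carrying the whole answer.
type StreamEvent =
  | { type: "status"; message: string }
  | { type: "token"; text: string }
  | { type: "complete"; text: string };

function reduceEvents(events: StreamEvent[]): { status: string[]; answer: string } {
  const status: string[] = [];
  let answer = "";
  for (const ev of events) {
    if (ev.type === "status") {
      status.push(ev.message); // e.g. "Thinking..."
    } else if (ev.type === "token") {
      answer += ev.text; // incremental streaming (chat agents)
    } else {
      answer = ev.text; // whole answer at once (planner today)
    }
  }
  return { status, answer };
}
```

Once the planner streams tokens too, the same reducer handles both paths, which matches the claim that the Lightning side needs no further changes.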
