
Releases: EvolutionAPI/evo-nexus

v0.33.0 — Plugin contract release

26 Apr 18:09
aa33f7d


Bundles five plugin-contract PRs merged on 2026-04-25 to unblock plugin nutri v0.1.0 (and any future plugin needing per-endpoint role enforcement, public token-bound portals, or safe uninstall). See CHANGELOG.md for details.

Highlights:

  • requires_role on writable_data (PR #55)
  • :current_user_id auto-injected on readonly_data (PR #55)
  • public_pages capability (PR #53)
  • HTML shell content negotiation (PR #56)
  • safe_uninstall capability (PR #54)
  • rate-limit + security headers (PR #52)
  • 409 CONFLICT now surfaces the actual reason (e.g. version mismatch) instead of just the status code (PR #57)

v0.32.3

25 Apr 16:49


EvoNexus v0.32.3 — Workspace folder navigation fix + share link reuse

Patch release fixing a long-standing Workspace UI bug where folders refused to open and the dev console flooded with 400 Path is a directory requests on every click. It also brings a small UX win to the file share dialog (reusing existing share links instead of generating a new token every time) and includes the upstream PR #51 (private-repo plugin update flow, ClickUp webhook compatibility, and a DetachedInstanceError fix).

What's New

Workspace folder click no longer remounts the page

The SectionBoundary in App.tsx was keyed on location.key || location.pathname, so every navigate({replace:true}) inside /workspace/*, /agents/:name, /tickets/:id, /skills/:name and /docs produced a new key and React tore down + remounted the entire page. In the Workspace this wiped selectedPath, the expanded state in every TreeItem, and component refs on every folder click — folders never stayed open and the URL→state effect re-fired the file probe (GET /api/workspace/file?path=workspace/development → 400) on every mount.

The fix collapses subpaths within the same section into a single stable routeKey so the boundary doesn't remount between intra-section navigations. The boundary still resets between sections (so error state still clears when you leave a broken page).
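A minimal sketch of that collapse, assuming a helper that keys on the first path segment (name and placement are illustrative, not the actual App.tsx code):

```typescript
// Hypothetical helper: collapse '/workspace/a/b' and '/workspace/c'
// into the same key so the boundary doesn't remount between them,
// while '/workspace/*' and '/agents/*' still get distinct keys.
function routeKey(pathname: string): string {
  const section = pathname.split('/').filter(Boolean)[0] ?? '';
  return '/' + section;
}
```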

Smaller follow-ups in the same area:

  • FileTree.handleClick now has explicit open / close branches instead of a blind setExpanded(prev => !prev) toggle, where any re-trigger could flip a freshly opened folder back closed.
  • Workspace.tsx keeps a knownDirsRef so the URL→state deep-link effect can skip the redundant file probe when the path is already known to be a directory.
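The explicit branches can be sketched as pure helpers (hypothetical names; the real handler calls setExpanded with equivalents of these):

```typescript
type ExpandedMap = Record<string, boolean>;

// Open is idempotent: a duplicate click or a re-fired effect can't
// flip a freshly opened folder back closed.
function openDir(prev: ExpandedMap, path: string): ExpandedMap {
  return prev[path] ? prev : { ...prev, [path]: true };
}

function closeDir(prev: ExpandedMap, path: string): ExpandedMap {
  return prev[path] ? { ...prev, [path]: false } : prev;
}
```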

Share link reuse

Clicking the share button on a file used to mint a new token every single time, polluting file_shares with duplicates and leaving stale links live until they expired. The dialog now probes the new GET /api/shares/by-path?path=X endpoint on open and, if there's already an active share (enabled + non-expired) for that file, displays the existing link with formatted expiry and a view counter — reusing the same URL. A destructive Revoke and regenerate action is available when you actually want to rotate the link (e.g. it leaked).

The new by-path endpoint inherits the same permission gate (workspace.manage) and folder-access check as POST /api/shares.
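The reuse decision can be sketched as a small predicate (field names here are assumptions based on the description above, not the actual /api/shares schema):

```typescript
// Assumed shape of a share record returned by the by-path probe.
interface Share { enabled: boolean; expires_at: string | null; url: string }

// An active share is enabled and either never expires or expires later
// than "now"; the dialog reuses it instead of minting a new token.
function isActive(s: Share, now: Date = new Date()): boolean {
  return s.enabled && (s.expires_at === null || new Date(s.expires_at) > now);
}
```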

Included from PR #51

  • Private-repo plugin update flow
  • ClickUp webhook compatibility
  • DetachedInstanceError fix

Full Changelog: v0.32.2...v0.32.3

v0.32.2

24 Apr 23:13


EvoNexus v0.32.2 — Fix Claude binary resolution on glibc Linux VPS

Patch release working around a bug in @anthropic-ai/claude-agent-sdk (v0.2.104+) that broke the in-dashboard chat on production Linux VPS installs with no local repro on macOS. If your EvoNexus chat started returning Error: Claude Code native binary not found at .../claude-agent-sdk-linux-x64-musl/claude out of nowhere after a restart or redeploy, this is for you.

What's New

Root cause

The SDK's auto-discovery function (sdk.mjs) tries the -musl platform package before glibc on every Linux host, regardless of the host's actual libc:

```js
// simplified from sdk.mjs
const candidates = platform === 'linux'
  ? [`claude-agent-sdk-linux-${arch}-musl`, `claude-agent-sdk-linux-${arch}`]
  : [`claude-agent-sdk-${platform}-${arch}`];
for (const pkg of candidates) { try { return require.resolve(...); } catch {} }
```

On Ubuntu/Debian VPS installs where both sibling packages ended up in node_modules (common with pnpm, happens with npm too depending on lockfile state), the SDK spawned the musl binary. Because the musl dynamic loader (/lib/ld-musl-x86_64.so.1) doesn't exist on glibc, the kernel returned ENOENT — which the SDK reported as "native binary not found" (misleading; the file does exist, just can't be executed).

Upstream tracking: anthropics/claude-agent-sdk-typescript#296.

Fix in chat-bridge.js

Added resolveClaudeExecutable() that:

  1. Honors CLAUDE_CODE_EXECUTABLE env var if set (explicit override for exotic setups)
  2. Probes /lib and /usr/lib for ld-musl-* to detect the host's actual libc
  3. Reorders the candidate platform packages (glibc-first on glibc hosts, musl-first on Alpine)
  4. Passes the resolved absolute path via queryOptions.pathToClaudeCodeExecutable so the SDK skips its own discovery
  5. Falls back to the SDK default if nothing resolves — preserves macOS development flow where node_modules may not have the linux-specific sibling packages

Pin to exact SDK version

dashboard/terminal-server/package.json now pins @anthropic-ai/claude-agent-sdk to exact 0.2.119 (was ^0.2.104). This stops a fresh npm install from silently drifting to a newer minor and picking up a regression before upstream lands a proper libc-aware fix.

Upgrading

```bash
cd /path/to/evo-nexus
git pull
cd dashboard/terminal-server && npm install
sudo systemctl restart evo-nexus
```

Once upstream ships the libc-aware fix (see issue #296), we'll drop the workaround and just bump the SDK.


Full Changelog: v0.32.1...v0.32.2

v0.32.1

24 Apr 21:59


EvoNexus v0.32.1 — Fresh-build fix for PluginDetail

Patch release. v0.32.0 shipped with a TypeScript strict-mode error that only surfaced on fresh frontend builds — local incremental builds passed because .tsbuildinfo had the file cached as clean. Anyone running npm run build from a clean checkout (Docker image build, fresh git clone, CI, or a user pulling the new release) hit:

```
src/pages/PluginDetail.tsx:542:11 - error TS2322: Type 'unknown' is not assignable to type 'string | number | bigint | boolean | ReactElement | ...'.
```

What's Fixed

  • PluginDetail.tsx — narrow manifest['description'] with a typeof === 'string' check before using it as a truthy guard and JSX child. Previously the Record<string, unknown> lookup was cast only in the <dd> body, not in the && condition, so strict tsc -b rejected the truthy check as unknown in JSX.
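A minimal sketch of the narrowing, assuming the manifest is typed as Record<string, unknown> (helper name is illustrative):

```typescript
type Manifest = Record<string, unknown>;

// typeof narrows unknown to string, so the value is safe both as a
// truthy guard and as a JSX child under strict mode.
function descriptionOf(manifest: Manifest): string | null {
  const d = manifest['description'];
  return typeof d === 'string' && d.length > 0 ? d : null;
}
```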

If You Were Affected

Pull main, rebuild the frontend:

```bash
git pull origin main
cd dashboard/frontend && npm run build
```

Or if you're on Docker, pull the new image (v0.32.1 tag).


Full Changelog: v0.32.0...v0.32.1

v0.32.0

24 Apr 21:55


EvoNexus v0.32.0 — Plugin System v1

A big one. Plugins are here — a full extensibility layer with 15 capabilities, a pre-install security scanner, per-capability toggles, update diff previews, and a reference plugin (pm-essentials) shipping exemplars of every surface. Alongside it: a security-hardening pass (PRD #37) and a batch of community-reported fixes across integrations, networking, and Docker.

What's New

Plugin System v1 (#41)

Plugins are git-installable bundles that contribute agents, skills, commands, rules, routines, heartbeats, widgets, MCP servers, custom UI pages, and seed data — all declared in a single plugin.yaml manifest.

Core runtime

  • Pydantic-validated manifests with 15 typed capabilities
  • git:// / https:// tarball / uploaded ZIP install with SHA-256 integrity check
  • Semver-aware SQL migration runner with rollback (sqlparse-backed)
  • Atomic directory ops with rollback on failure
  • Claude Code hooks dispatcher (PreToolUse / PostToolUse / Stop / SubagentStop) with per-plugin SQLite circuit breaker
  • Crash-recovery on boot for orphaned installs via .install-state.json
  • Plugin-contributed rows tagged with source_plugin across tickets, projects, goals, missions, goal_tasks, and triggers — uninstall cleans them surgically without touching user data
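The integrity check above can be sketched as (a hypothetical helper; the real installer wires this into the download path):

```python
import hashlib

# Compare the downloaded bundle's SHA-256 against the expected digest
# before unpacking; a mismatch aborts the install.
def verify_integrity(data: bytes, expected_hex: str) -> bool:
    return hashlib.sha256(data).hexdigest() == expected_hex.lower()
```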

REST API + dashboard UI

  • GET/POST/PATCH/DELETE /api/plugins (full CRUD)
  • GET /api/plugins/marketplace (curated registry)
  • POST /api/plugins/upload (ZIP / tar.gz, 20 MB cap, zip-slip guard)
  • GET /api/plugins/<slug>/update/preview (diff before applying)
  • New /plugins page with marketplace grid, install wizard (source → security scan → config → confirm), plugin detail with widgets, capabilities toggles, MCP banner, and Update with diff preview
  • New /mcp-servers system page aggregating ~/.claude.json entries grouped by plugin / native

CLI

  • evo-nexus plugin init — scaffold from template
  • evo-nexus plugin install <source> / list / uninstall <slug> / update <slug>
  • Starter template under cli/templates/plugin-skeleton/

Plugin capabilities that close v1

  • Wave 1.1 — Per-capability toggles: ON/OFF per capability without uninstalling. Disable cascades to .claude/{agents,skills,commands}/plugin-{slug}-* (rename to .disabled), routines skipped by scheduler, hooks and heartbeats skipped by dispatchers.
  • Wave 1.2 — Update diff preview: read-only preview with added / removed / modified capabilities per type, SQL migration SHA conflict detection, breaking-changes flag, 5-minute cache.
  • Wave 2.0 — Icons + agent avatars: agent_meta_seed.py ships metadata for 38 native agents served via GET /api/agent-meta; plugin agents contribute custom icons with img fallback.
  • Wave 2.1 — Custom UI pages: plugins can contribute React pages mounted at /plugins-ui/:slug/*, sidebar groups, and writable SQLite resources (column allowlist + jsonschema validation). window.EvoNexus SDK injected post-login.
  • Wave 2.2r — Plugin integrations: declare env-var-based integrations with optional HTTP health checks running as in-process heartbeats (zero Claude CLI overhead).
  • Wave 2.3 — Plugin MCP servers: plugins declare MCP servers with command whitelist and shell-metachar block; 6-layer atomic write to ~/.claude.json with flock, timestamped backups (retention 10), drift detection.
  • Wave 2.5 — Pre-install security scan: hybrid regex + LLM scanner, 13 pattern categories, 57-domain whitelist, anti-hallucination guard, 7-day cache, APPROVE / WARN / BLOCK adaptive button with admin BLOCK override.

Plugin management skills — plugin-install, plugin-list, plugin-uninstall, plugin-update, plugin-marketplace, plugin-health and plugin-security-scan expose plugin operations to every agent.

Security hardening (PRD #37)

  • /api/health/deep now requires admin session — previously leaked filesystem paths, provider identity, secret-key source and error details to unauthenticated callers. /api/health is now a minimal public liveness probe (status only).
  • Plugin install sources restricted — resolve_source rejects local filesystem paths, file://, ssh://, and non-HTTPS schemes with a clear ValueError. Only github:, https:// tarballs, or uploaded ZIP / tar.gz are accepted.
  • Plugin triggers ship disabled by default — they stay off regardless of the YAML value unless it is explicitly "true", so a malicious plugin can't auto-fire hooks on install.
  • Frontend route splitting — top-level bundles are code-split; main chunk drops substantially on first load.
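The default-off trigger gate can be sketched as (a hypothetical one-liner; the real check lives in manifest parsing):

```python
# Only an explicit true/"true" enables a plugin trigger; any other
# value — missing, "yes", 1, None — ships disabled.
def trigger_enabled(yaml_value) -> bool:
    return str(yaml_value).strip().lower() == "true"
```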

Community fixes

  • #49 — Integration status verifies all declared env keys. list_integrations previously checked a single key per entry, so Evolution API / Evolution Go / Evo CRM appeared configured with only the token or only the URL. Schema now keys: list[str]; configured means every declared key is non-empty. Bling fixed to use real OAuth2 vars (BLING_CLIENT_ID / BLING_CLIENT_SECRET, run make bling-auth). Omie now requires both OMIE_APP_KEY and OMIE_APP_SECRET.
  • #18 — Safer start-services.sh cleanup. pkill -f 'python.*app.py' was killing unrelated Python processes on multi-project hosts. Replaced with explicit pinned match on venv interpreter + absolute script path; TCP 8080 (or EVONEXUS_PORT) freed directly via fuser / lsof — falls back to lsof -ti tcp:$PORT | kill when fuser is absent (BSD-ish / macOS).
  • #35 — Terminal detects RFC1918 + CGNAT hostnames as local. Previously only localhost / 127.0.0.1 were "local"; bare-metal installs fell back to /terminal on the same origin which didn't exist. Now covers RFC1918, RFC6598 CGNAT (100.64.0.0/10, common on Brazilian VPS), link-local, IPv6 loopback. New VITE_TERMINAL_URL explicit override.
  • #45 — docker-compose.proxy.yml for reverse-proxy hosts. Sibling compose file that uses expose: instead of ports: so Coolify / Dokploy / Traefik / Caddy own external traffic while containers stay reachable by name inside the Docker network. Volumes identical to hub.yml for no-loss migration.
  • #46 — Supabase / Neon / Railway pooler docs. New docs/knowledge-database.md with provider cheat-sheet (port 5432 vs 6543, Supabase IPv6-only edge case, error reference table). Inline hint added under the Knowledge connections wizard input.
  • #26 — History cleanup dry-run script. scripts/clean-history.sh — safe-by-default helper that clones the remote as --mirror and previews removal of ~283 MB of orphaned PNG avatar blobs via git filter-repo. Verifies develop/main HEAD trees are byte-identical and all tags preserved before the maintainer force-pushes. CONTRIBUTING.md gains the partial-clone recipe so new contributors download ~10 MB instead of ~290 MB in the interim.
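The all-declared-keys rule from #49 above can be sketched as (helper name is illustrative, not the actual list_integrations code):

```python
import os

# An integration counts as "configured" only when every declared env
# key is non-empty — token-only or URL-only no longer passes.
def integration_configured(keys, env=None):
    env = os.environ if env is None else env
    return bool(keys) and all(env.get(k, "").strip() for k in keys)
```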

Other changes

  • Telegram notifications moved from skills to routines. run_skill(notify_telegram=True) appends a one-shot send instruction at the end of the prompt — guarantees exactly one send per execution.
  • Merged routines listing. /api/routines now merges declared routines with execution metrics, so newly installed plugin routines and unrun core routines show up with zeroed metrics.
  • Brain Repo sync no longer blocks HTTP requests. New brain_repo/job_runner.py background executor serialises sync / milestone / bootstrap ops; endpoints enqueue and return immediately; /backups and /brain-repo poll state and expose Cancel.
  • EVONEXUS_DEV=1 — toggles Flask auto-reloader for backend development.

Migration notes

  • If you set BLING_ACCESS_TOKEN manually, run make bling-auth to obtain the OAuth credentials — the legacy env var is no longer recognized.
  • Tooling scraping /api/health/deep for internals must now authenticate as admin.

Full Changelog: v0.31.0...v0.32.0

v0.31.0

24 Apr 12:29


EvoNexus v0.31.0 — Brain Repo, Onboarding Wizard and Unified Backups

What's New

Brain Repo — automatic GitHub versioning of your workspace

Every change to memory/, workspace/, customizations/ and config-safe/ now lives in a dedicated private GitHub repo (evo-brain-<username>) per user. A file watcher with 30 s debounce persists changes to GitHub, and you can restore any daily, weekly or milestone snapshot via a streaming SSE engine.

Security posture is explicit:

  • GitHub PATs are Fernet-encrypted at rest with BRAIN_REPO_MASTER_KEY (auto-generated on first boot).
  • connect/sync endpoints return 500 CRYPTO_UNAVAILABLE when encryption is unavailable — the old plaintext fallback was removed.
  • A 21-pattern secrets scanner (AWS / GitHub / Anthropic / OpenAI / Stripe / JWT / SSH / …) runs before every push. Matched files are deleted and never leave the machine.
  • Import-time API and crypto contract checks emit CRITICAL logs on drift so regressions surface at startup, not in production.
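A sketch of the scanner's shape with three illustrative patterns (the real scanner has 21 and also deletes matched files before the push):

```python
import re

# Illustrative subset of the pre-push secret patterns.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),            # AWS access key id
    re.compile(r"ghp_[A-Za-z0-9]{36}"),         # GitHub PAT (classic)
    re.compile(r"sk-ant-[A-Za-z0-9\-_]{20,}"),  # Anthropic API key
]

def has_secret(text: str) -> bool:
    return any(p.search(text) for p in SECRET_PATTERNS)
```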

Onboarding wizard

First-time users now land in a multi-step wizard instead of the raw Overview:

  • Pick an AI provider — Anthropic, OpenAI, OpenRouter or Codex OAuth, each with its own sub-flow (validated API keys for OpenAI/OpenRouter, /auth-start flow for Codex).
  • Optionally connect a brain repo — create a new private repo or reuse an existing one detected via the GitHub Search API.
  • Restore from a snapshot — for users coming back after a reinstall, with type-to-confirm safety gate and SSE progress through 10 steps (clone → validate → migrate → secrets-scan → KB validate → backup → swap → KB import → manifest update → done).

A one-time OracleWelcomeBanner on /agents dismisses itself via POST /auth/mark-agents-visited.

Unified /backups page

Local ZIPs, S3 and Brain Repo are now one surface instead of three stacked widgets:

  • 3 destination cards at the top with explicit status badges, path/bucket/repo display and last_error / crypto_ready danger states with Reconnect CTA.
  • Tabs with per-source counts and independent refresh.
  • Import dropdown — upload a .zip (existing) or pull from Brain Repo (new).
  • Contextual restore modal — merge/replace for local/S3 ZIPs, SSE progress + include_kb + kb_key_matches checkboxes for Brain Repo.
  • 30 s visibility-aware polling so watcher auto-sync results appear without manual refresh.

Brain Repo settings page

A fully i18n'd /settings/brain-repo with status card (connected / sync / pending / last_error / crypto-danger banner), Sync now, Create milestone, Disconnect with confirmation.

CLI and dev environment

  • setup.py now prompts for brain-repo during initial setup and collects the PAT; backup.py --target github does a manual sync through the dashboard's own code path.
  • New docker-compose.dev.yml with live-reload backend + named volumes, Dockerfile.dev with --legacy-peer-deps and globally-installed @anthropic-ai/claude-code + @gitlawb/openclaude, CRLF-safe entrypoint scripts, and a step-by-step DEV-SETUP.md.

i18n

About 270 new keys across onboarding.*, restore.*, brainRepoSettings.*, backups.* and agents.welcomeBanner.* — en-US, pt-BR and es bundles with identical key trees and natural translations (no English fallbacks).


Full Changelog: v0.30.4...v0.31.0

v0.30.4

24 Apr 02:11


EvoNexus v0.30.4 — Entrypoint Race Fix + Docker Install, Done Right

Patch release that unblocks every fresh Docker / Swarm / Portainer deploy and ships a complete first-class Docker install experience.

🐛 P0 Fix — race condition in first-boot bootstrap

When `dashboard`, `telegram` and `scheduler` share the same `/workspace/config` named volume (the canonical Docker / Swarm / Portainer deploy pattern), the 3 containers start in parallel and raced on four bootstrap operations in `entrypoint.sh`:

  1. `[ ! -f .env ] && cp .env.example .env` — one wins, others crash with `File exists`. Scheduler crashed visibly on every fresh v0.30.3 deploy.
  2. Same pattern for `providers.json` / `heartbeats.yaml`.
  3. `grep -q EVONEXUS_SECRET_KEY || echo … >> .env` — two processes both see "not found" and append two different keys. Flask picks one at random per request, invalidating sessions silently.
  4. Same pattern for `KNOWLEDGE_MASTER_KEY` — silently corrupted Knowledge Base encryption, losing all configured DB connections on second boot.

Fix: wrap the whole bootstrap section in `flock` on a lockfile inside the shared volume — serializes all containers mounting the same volume, regardless of start order. `flock` is part of `util-linux`, already in both base images. Also added `cp -n` (no-clobber) as belt-and-suspenders.
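A sketch of the serialized bootstrap under flock (paths, the lock filename, and the temp-dir default are illustrative; the real entrypoint.sh guards more files):

```shell
# CONFIG_DIR defaults to a temp dir here just so the sketch is runnable;
# in the real deploy it is the shared /workspace/config volume.
CONFIG_DIR="${CONFIG_DIR:-$(mktemp -d)}"

bootstrap() {
  # cp -n: no-clobber, so a losing container is a no-op instead of a crash
  cp -n "$CONFIG_DIR/.env.example" "$CONFIG_DIR/.env" 2>/dev/null || true
  # append-if-missing runs under the lock, so the key is written exactly once
  grep -qs EVONEXUS_SECRET_KEY "$CONFIG_DIR/.env" \
    || echo "EVONEXUS_SECRET_KEY=$(head -c 32 /dev/urandom | base64 | tr -d '\n')" >> "$CONFIG_DIR/.env"
}

# flock on a file inside the shared volume serializes every container
# mounting it, regardless of start order
exec 9>"$CONFIG_DIR/.bootstrap.lock"
flock 9
bootstrap
flock -u 9
```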

Validated end-to-end: booting all 3 services simultaneously, scheduler survives, keys appear exactly once in `.env`, dashboard healthy.

📚 Docker, done right

New: `docker-compose.hub.yml` — one-command install

No git clone, no build step. Pulls official multi-arch images from Docker Hub:

```bash
curl -O https://raw.githubusercontent.com/EvolutionAPI/evo-nexus/main/docker-compose.hub.yml
docker compose -f docker-compose.hub.yml up -d
open http://localhost:8080
```

`depends_on: condition: service_healthy` orders the boot (dashboard → telegram+scheduler), adding defense-in-depth on top of the flock fix.

New: `docs/guides/docker-install.md` — complete tutorial

Covers prerequisites (only Docker Engine 24+ — the image ships everything), first-boot wizard walkthrough, update flow, backup/restore recipes, advanced section documenting how to pass secrets via environment variables or Docker Secrets (for CI/CD and immutable-infra users), running behind Caddy for public HTTPS, and a full troubleshooting table.

Updated: README, getting-started, updating

  • `README.md` — Docker is now Method 1 in Quick Start. Prerequisites split into Docker-only vs CLI-flow tracks.
  • `docs/getting-started.md` — install-method chooser table at the top (Docker / npx / manual clone).
  • `docs/guides/updating.md` — new section for the `docker compose pull && up -d` upgrade path.

Upgrade

Using `docker-compose.hub.yml` (new recommended path)

```bash
docker compose -f docker-compose.hub.yml pull
docker compose -f docker-compose.hub.yml up -d
```

Docker Swarm / Portainer

```bash
docker service update --image evoapicloud/evo-nexus-dashboard:v0.30.4 evonexus_evonexus_dashboard
docker service update --image evoapicloud/evo-nexus-runtime:v0.30.4 evonexus_evonexus_telegram
docker service update --image evoapicloud/evo-nexus-runtime:v0.30.4 evonexus_evonexus_scheduler
```

CLI / from source

```bash
git pull origin main
# then rebuild the dashboard as usual
```


Full Changelog: v0.30.3...v0.30.4

v0.30.3

24 Apr 01:08


EvoNexus v0.30.3 — Multi-arch Docker Images

Patch release completing the Docker Hub migration: images now ship as multi-arch manifests so ARM hosts can pull without platform overrides.

What's New

linux/amd64 + linux/arm64 in every tag

Every tag under evoapicloud/evo-nexus-{dashboard,runtime} now ships a manifest list covering both architectures:

  • amd64 — existing flow (x86_64 servers, traditional VPS)
  • arm64 — Apple Silicon (M1/M2/M3 Macs), AWS Graviton, Oracle Cloud ARM free tier, Raspberry Pi, most modern ARM VPS

```bash
# Works on any platform — Docker auto-picks the right arch
docker pull evoapicloud/evo-nexus-dashboard:v0.30.3

# Inspect the manifest to confirm multi-arch
docker buildx imagetools inspect evoapicloud/evo-nexus-dashboard:v0.30.3
```

Under the hood: the publish workflow runs docker/setup-qemu-action@v3 to emulate arm64 on the amd64 GitHub runners, then passes platforms: linux/amd64,linux/arm64 to build-push-action. The node-pty native addon compiles from source via node-gyp inside each builder, so there's no prebuilt-binary-per-arch concern.

Repo cleanup

  • Removed evonexus.portainer.stack.yml — it was a pre-configured stack pointing at a fork's namespace (marcelolealhub/*) and a specific host (advancedbot.com.br), which could mislead new users into pulling from the wrong registry. The canonical template at evonexus.stack.yml already supports Portainer/Traefik deployments and points at the official images.

Full Changelog: v0.30.2...v0.30.3

v0.30.2

24 Apr 00:38


EvoNexus v0.30.2 — Docker Hub Migration

Patch release focused on distribution: Docker images now ship from the official evoapicloud namespace on Docker Hub, so Swarm/Portainer users no longer need to fill in a registry placeholder or run docker login on their manager nodes.

What's New

Official Docker Hub images

Both images are published publicly on every main push and version tag:

  • evoapicloud/evo-nexus-dashboard — Flask + React + embedded terminal + Claude CLI (used by the evonexus_dashboard service)
  • evoapicloud/evo-nexus-runtime — Node/Python runtime (used by evonexus_telegram and evonexus_scheduler)

```bash
docker pull evoapicloud/evo-nexus-dashboard:latest
docker pull evoapicloud/evo-nexus-runtime:latest
```

The Swarm stack template evonexus.stack.yml no longer has an OWNER placeholder — the images point straight at the public Docker Hub namespace.

Cleaner CI pipeline

  • Deleted dashboard.yml workflow — it was building a Python+React-only dashboard image (Dockerfile.dashboard) that nothing consumed, and it was failing on main with a TypeScript peer-dependency conflict. The Swarm workflow (docker-publish.yml) already produces a strict superset with the terminal-server and both CLIs baked in.
  • docker-publish.yml switched to Docker Hub — uses DOCKERHUB_USERNAME + DOCKERHUB_TOKEN secrets (repo/org level). No more ghcr.io auth dance for downstream consumers.

Updated upgrade guide

docs/guides/updating.md now documents the Docker Hub upgrade path (docker service update --image evoapicloud/evo-nexus-*:vX.Y.Z ...) alongside the existing git-pull flow for local development.


Full Changelog: v0.30.1...v0.30.2

v0.30.1

23 Apr 22:03


EvoNexus v0.30.1 — Thread session swap + assignee dropdown + goals seed

Patch release focused on thread UX polish.

What's New

Thread Context injected on every session

When a thread opens, the agent now receives an explicit briefing telling it:

  • It is running inside a persistent thread, not a fresh one-shot session
  • The thread title and description (fixed scope)
  • Its own assigned agent slug — with an explicit instruction not to invoke the Agent tool with subagent_type: <self> (the behaviour was causing confusing @agent calling @agent patterns like Zara invoking Zara)
  • The default workspace_path for artifacts
  • Where memory.md lives and how the 20-turn summarisation works
  • That --resume keeps the conversation alive across browser closes and day breaks

The memory.md content itself is appended when non-empty, so new threads get the scope block and older threads also surface accumulated knowledge.

Fixed

Thread switch leaked the previous conversation

Switching threads via the sidebar kept threadSessionId pinned to the old ticket and <AgentChat> kept rendering the old messages until a full page reload. Two fixes in TicketDetail.tsx:

  • Reset threadSessionId whenever ticket?.id changes so the auto-init re-runs for the new ticket
  • <AgentChat key={ticket.id}> forces a full remount — closes the old WebSocket, resets the message buffer, re-runs internal effects cleanly

Topics assignee dropdown hid 18 of 38 agents

The Assign-to-agent combobox sliced the list at 20 items, silently dropping agents whose slugs come later alphabetically. Removed the slice; bumped max-h-48 to max-h-72 so ~12 agents are visible at once and all 38 are reachable by scrolling.

Goals: Evolution-specific seed leaking into open-source installs

dashboard/backend/app.py was seeding a hardcoded Evolution Revenue $1M Q4 2026 mission with 3 projects (evo-ai, evo-summit, evo-academy) and 5 goals on first boot — inappropriate for an open-source project consumed by any org. Removed the seed block so new instances start empty. The /goals empty state now points at the /create-goal skill instead of the misleading "Run the backend migration to seed initial data" message.

Existing installations with the seed already applied can clean it manually:

```sql
DELETE FROM goal_tasks;
DELETE FROM goals;
DELETE FROM projects;
DELETE FROM missions;
```

Full Changelog: v0.30.0...v0.30.1