
PATCH /config hot-reload w/ selective cache invalidation (#15) #16

Closed
shuv1337 wants to merge 21 commits into dev from config-hot-reload

Conversation

shuv1337 (Collaborator) commented Nov 13, 2025

PATCH /config hot-reload w/ selective cache invalidation


Note

Introduce hot-reloadable PATCH /config with targeted cache invalidation, robust JSONC persistence, plugin path handling, and full tests/specs, plus a unified state registry and updates across subsystems.

  • API/Runtime:
    • PATCH /config: supports scope (project/global), returns merged config, and (when OPENCODE_CONFIG_HOT_RELOAD=true) publishes config.updated events; GET /config reflects hot-reload mode.
    • Enriches TUI toasts with affected config sections.
  • Config system:
    • New persistence layer: file locks, backups, atomic writes, JSONC-preserving incremental edits (write.ts), and diffs (diff.ts).
    • New errors (error.ts), global file resolver, hot-reload switch, and plugin path resolution (preserve scoped specifiers; resolve relative/absolute to file://).
    • Appends discovered plugin/* files to plugin list.
  • State/Instance infrastructure:
    • State.register with named invalidation and State.invalidate(name, key?); Instance.invalidate/forEach helpers; bootstrap wires config invalidation listener.
  • Subsystem updates to registered state:
    • Migrate agent, command, filewatcher, format, lsp, mcp, permission, plugin (with cleanup), provider, and tool-registry; file listing refresh logic hardened.
  • SDK/Docs:
    • Add config.updated event to SDK types; add specs/config-spec.md for PATCH /config.
  • Tests:
    • Add hot-reload, persistence (JSONC), and config/plugin path tests; update existing config tests accordingly.
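To make the endpoint's shape concrete, here is a minimal client-side sketch. The `scope` field and hot-reload gating come from the summary above; the exact payload shape, port, and helper name are assumptions, not the real SDK API:

```typescript
// Hypothetical sketch of a PATCH /config payload. Only `scope`
// (project/global) is taken from the PR summary; the rest is assumed.
type ConfigPatch = {
  scope: "project" | "global"
  update: Record<string, unknown>
}

function buildConfigPatch(scope: ConfigPatch["scope"], update: Record<string, unknown>): ConfigPatch {
  return { scope, update }
}

const patch = buildConfigPatch("project", { theme: "dark" })
const body = JSON.stringify(patch)
// A real client would then send something like:
//   await fetch("http://localhost:4096/config", {
//     method: "PATCH",
//     headers: { "content-type": "application/json" },
//     body,
//   })
```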

Written by Cursor Bugbot for commit 980c331.

Summary by CodeRabbit

  • New Features

    • Configuration updates now support runtime changes without requiring a restart
    • Automatic backup and rollback functionality on update errors
    • Enhanced plugin reference resolution supporting file paths and various URL formats
    • Improved configuration validation with detailed error messages
    • Targeted component invalidation when configuration changes occur
  • Documentation

    • Added configuration update endpoint specification



chatgpt-codex-connector (Bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.


Comment on lines +110 to +116
const hotReloadEnabled = isConfigHotReloadEnabled()
if (hotReloadEnabled && input.scope === "global") {
await State.invalidate("config")
}
if (hotReloadEnabled && input.scope === "project") {
await Instance.invalidate("config")
}


P1: Config writes no longer refresh in-memory state when hot reload is off

The previous implementation of Config.update always disposed the current instance after persisting a project config, ensuring subsequent reads saw the new settings. After the refactor the cache invalidation is conditional on OPENCODE_CONFIG_HOT_RELOAD being true, and when the flag is not set (the default) no invalidation occurs – the process keeps using the stale state returned by the old Config.get() call even though the file was updated. Any user saving their config without explicitly enabling the hot-reload flag will see the new JSON written to disk but all agents, providers and other config-driven features continue to run with the old configuration until the process is restarted. Consider invalidating State/Instance unconditionally (or falling back to the previous Instance.dispose() behaviour) so that config updates remain effective when hot reload is disabled.
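The suggested fix can be sketched as an in-memory store that always drops its cache after a write and gates only the event fan-out on the hot-reload flag. Names here are illustrative stand-ins, not the real `Config`/`State` modules:

```typescript
// Sketch of "invalidate unconditionally, fan out conditionally".
// `createConfigStore`, `publish`, and the cache layout are assumptions.
type Config = Record<string, unknown>

function createConfigStore(hotReload: boolean, publish: (cfg: Config) => void) {
  let disk: Config = {}
  let cache: Config | null = null

  return {
    get(): Config {
      if (!cache) cache = { ...disk } // lazy (re)load from "disk"
      return cache
    },
    update(patch: Config) {
      disk = { ...disk, ...patch } // persist
      cache = null                 // invalidate unconditionally
      if (hotReload) publish(this.get()) // fan-out only when hot reload is on
    },
  }
}
```

With this shape, a write is visible to the next `get()` even when the hot-reload flag is off, while event subscribers are only notified when it is on.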


@shuv1337 shuv1337 changed the base branch from integration to dev November 14, 2025 00:20
* Fix cache corruption bug in refresh function

cursor (Bot) left a comment



Comment thread packages/opencode/src/provider/provider.ts
directories,
}
})
}


Bug: Logic Duplication Creates Dead Code

Duplicate conditional block sets result.share = "auto" twice when result.autoshare === true && !result.share. The second check on lines 185-187 is redundant since the first check on lines 181-183 already performs this exact assignment, making the second block dead code that will never execute with a different outcome.


}

keyRecords.delete(entry.init)
}


Bug: Named Registry Leak: Unbounded Growth

State.invalidate removes disposed entries from recordsByKey but never removes them from namedRegistry, creating a memory leak. On subsequent invalidations, the function iterates over stale NamedEntry objects that no longer have corresponding state in recordsByKey, causing the registry to grow unbounded. The entries should be removed from the namedRegistry Set after disposal.
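A simplified sketch of the suggested cleanup: when a named entry is disposed, remove it from both maps so the registry stays bounded. The structure below is illustrative; the real State module's types differ:

```typescript
// Sketch of bounded named-state invalidation. `Entry`, `register`, and
// `invalidate` are simplified stand-ins for the real State internals.
type Entry = { key: string; dispose: () => void }

const namedRegistry = new Map<string, Set<Entry>>()
const recordsByKey = new Map<string, Entry>()

function register(name: string, entry: Entry) {
  let set = namedRegistry.get(name)
  if (!set) namedRegistry.set(name, (set = new Set()))
  set.add(entry)
  recordsByKey.set(entry.key, entry)
}

function invalidate(name: string, key?: string) {
  const entries = namedRegistry.get(name)
  if (!entries) return
  for (const entry of entries) {
    if (key && entry.key !== key) continue
    entry.dispose()
    recordsByKey.delete(entry.key)
    entries.delete(entry) // the fix: prune the named registry too
  }
  // Drop the name entirely once no entries remain
  if (entries.size === 0) namedRegistry.delete(name)
}
```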


files: [],
dirs: [],
})
refresh()


Bug: Race condition returns empty file cache initially

The files() method returns cache immediately without waiting for fetchEntries() to complete. When refresh() is called, it starts the async fetchEntries() operation but doesn't await it, so the first call to files() returns the initial empty cache { files: [], dirs: [] } before any files are discovered. The old implementation passed cache directly to the async function which would populate it, but the refactored version starts the fetch asynchronously and returns stale data.
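The usual fix for this shape of race is to track the in-flight scan as a promise and have readers await it before returning the cache. A minimal sketch, with `scan` standing in for the real ripgrep-backed `fetchEntries()`:

```typescript
// Sketch of the promise-tracking pattern: callers never observe the
// pre-scan empty cache. `createFileCache` and `scan` are assumed names.
function createFileCache(scan: () => Promise<string[]>) {
  let cache: string[] = []
  let inflight: Promise<void> | null = null

  const refresh = () => {
    if (!inflight) {
      inflight = scan()
        .then((files) => { cache = files })
        .finally(() => { inflight = null })
    }
    return inflight
  }
  refresh() // kick off the initial scan without blocking construction

  return {
    async files() {
      if (inflight) await inflight // wait for any scan in progress
      return cache
    },
  }
}
```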



coderabbitai (Bot) commented Nov 21, 2025

Important

Review skipped

Review was skipped due to path filters

⛔ Files ignored due to path filters (1)
  • packages/sdk/js/src/gen/types.gen.ts is excluded by !**/gen/**


Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

This PR refactors state management across multiple modules to use State.register instead of Instance.state, introducing comprehensive configuration hot-reload support. It adds new modules for config operations (backup, diff, locking, persistence, invalidation, error handling) and modernizes the config update API to support scoped (project/global) updates with atomic writes, JSONC preservation, and file-level locking. The server's config endpoint now publishes events for hot-reload scenarios, and a new invalidation system targets affected subsystems based on config diffs.

Changes

Cohort / File(s) / Summary

  • State Registration Refactoring (packages/opencode/src/agent/agent.ts, packages/opencode/src/command/index.ts, packages/opencode/src/file/watcher.ts, packages/opencode/src/format/index.ts, packages/opencode/src/lsp/index.ts, packages/opencode/src/mcp/index.ts, packages/opencode/src/permission/index.ts, packages/opencode/src/plugin/index.ts, packages/opencode/src/provider/provider.ts, packages/opencode/src/tool/registry.ts): Replaced Instance.state(...) with State.register(name, dirResolver, initAsync, disposeAsync?) for unified lifecycle management across subsystems.
  • Config Core Infrastructure (packages/opencode/src/config/config.ts): Major refactoring: introduced loadStateFromDisk, plugin path resolution, the Config.Updated event, readFreshConfig(), a scoped update() API with return metadata, and state wiring via State.register.
  • Config Utilities (packages/opencode/src/config/backup.ts, packages/opencode/src/config/global-file.ts, packages/opencode/src/config/hot-reload.ts): New modules providing timestamped backup/restore, global config path resolution, and hot-reload flag detection.
  • Config Diff & Validation (packages/opencode/src/config/diff.ts, packages/opencode/src/config/error.ts): New modules for computing config diffs across all fields and structured error types (ConfigUpdateError, ConfigValidationError, ConfigWriteConflictError, ConfigWriteError).
  • Config Persistence & Locking (packages/opencode/src/config/persist.ts, packages/opencode/src/config/write.ts, packages/opencode/src/config/lock.ts): New modules implementing scoped config updates with atomic writes, JSONC/JSON support with comment preservation, incremental updates, and filesystem-based distributed locking with stale-lock detection.
  • Config Invalidation (packages/opencode/src/config/invalidation.ts): New module orchestrating targeted subsystem invalidations in response to config diffs; supports hot-reload-driven event subscription and parallel invalidation tasks.
  • State & Project Infrastructure (packages/opencode/src/project/state.ts, packages/opencode/src/project/bootstrap.ts, packages/opencode/src/project/instance.ts): Extended the State namespace with register() and invalidate() for named state tracking and pattern-based invalidation; added Instance.invalidate() and Instance.forEach() methods; bootstrap now calls ConfigInvalidation.setup().
  • Server & File Updates (packages/opencode/src/server/server.ts, packages/opencode/src/file/index.ts): The server's /config PATCH endpoint now computes scope, calls the scoped Config.update(), publishes hot-reload events, and enriches TUI toasts with changed sections; the file watcher cache now refreshes via a dedicated refresh() function.
  • Tests & Documentation (packages/opencode/test/config/config.test.ts, packages/opencode/test/config/hot-reload.test.ts, packages/opencode/test/config/write.test.ts, specs/config-spec.md): New test suites covering plugin resolution, JSONC preservation, hot-reload workflows, and global/project scoped updates; the spec documents PATCH /config endpoint behavior including locking, backup, and validation.

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant Server
    participant Config as Config Module
    participant Persist as Persist Module
    participant Lock as Lock Module
    participant Invalidate as Invalidation
    participant Subsystems as Subsystems
    
    Client->>Server: PATCH /config (scope, update)
    Server->>Lock: acquireLock(filepath)
    Lock-->>Server: releaseLock()
    Server->>Persist: update({scope, update, directory})
    Persist->>Config: readFreshConfig()
    Config-->>Persist: before state
    Persist->>Persist: merge & validate
    Persist->>Persist: createBackup(filepath)
    Persist->>Persist: writeConfigFile (atomic)
    Persist-->>Server: {before, after, diff, filepath}
    Server->>Invalidate: ConfigInvalidation.apply({diff, scope, directory})
    Invalidate->>Subsystems: invalidate(provider, mcp, lsp, ...)
    Subsystems-->>Invalidate: done
    Invalidate->>Server: done
    alt hot-reload enabled
        Server->>Config: Bus.publish(Config.Event.Updated)
        Config-->>Client: event with before/after
    end
    Server-->>Client: 200 OK {after config}
sequenceDiagram
    participant App
    participant Bootstrap as InstanceBootstrap
    participant ConfigInvalidation
    participant ConfigBus as Config.Updated Event
    participant Subsystems
    
    App->>Bootstrap: InstanceBootstrap()
    Bootstrap->>ConfigInvalidation: setup()
    alt hot-reload enabled
        ConfigInvalidation->>ConfigBus: subscribe to Config.Updated
        Note over ConfigBus: on config change event
        ConfigBus->>ConfigInvalidation: fire event(before, after, diff, scope, directory)
        ConfigInvalidation->>ConfigInvalidation: compute targets from diff
        ConfigInvalidation->>Subsystems: invalidate(provider, mcp, lsp, ...)
        Subsystems-->>ConfigInvalidation: done
    else hot-reload disabled
        Note over ConfigBus: no event subscription
    end
    Bootstrap-->>App: done

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60–75 minutes

Areas requiring extra attention:

  • packages/opencode/src/config/persist.ts — Atomic config writes with backup/restore lifecycle, validation error mapping, and state cache invalidation logic; verify error handling paths and backup cleanup on success/failure.
  • packages/opencode/src/config/write.ts — JSONC/JSON dual-format support with incremental updates, deep equality checks, and atomic file operations; validate correctness of path-based UpdateInstructions and nested key handling.
  • packages/opencode/src/config/invalidation.ts — Complex diff-to-targets mapping, parallel invalidation orchestration, and hot-reload event wiring; ensure all affected subsystems are correctly identified and no deadlocks occur in concurrent invalidations.
  • packages/opencode/src/config/config.ts — Major refactoring of loading pipeline, plugin path resolution (file://, relative, home-relative), and new event emission; verify plugin specifier normalization and state initialization order.
  • packages/opencode/src/project/state.ts — New register() and invalidate() functions with named registry and wildcard pattern support; confirm lifecycle management and disposal correctness.
  • packages/opencode/src/server/server.ts — Scoped config update flow, LastConfigUpdate caching, global fan-out notification, and error aggregation; validate that toast enrichment logic and error propagation are correct.
  • Hot-reload integration — Verify that hot-reload flag gating, event sequencing, and per-directory invalidation work correctly across all modules; check test coverage for edge cases (concurrent updates, scope changes, missing contexts).

Poem

🐰 Hoppy Config Changes

State hops to register, neat and clean,
Config diffs dance with locks in between,
JSONC whispers keep comments safe,
Hot reload flows through every base,
Backups backup before the final leap!
✨🔧

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage — ⚠️ Warning: docstring coverage is 1.47%, below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve coverage.

✅ Passed checks (2 passed)
  • Description Check — ✅ Passed: check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed: the title clearly describes the main change, adding PATCH /config hot-reload with selective cache invalidation, and directly reflects a core feature of the changeset.



coderabbitai (Bot) left a comment


Actionable comments posted: 5

♻️ Duplicate comments (4)
packages/opencode/src/file/index.ts (1)

122-163: Still a race: files() can return empty cache before scan finishes

refresh() kicks off fetchEntries() without awaiting it, and files() just returns cache (initially empty) while a scan is in progress. First callers of files()/search() can therefore observe { files: [], dirs: [] } even when the project has files. This is the same behavior raised in the earlier review.

A concrete fix is to track the in-flight scan as a promise and always await it before returning cache:

   const state = Instance.state(async () => {
     type Entry = { files: string[]; dirs: string[] }
-    let cache: Entry = { files: [], dirs: [] }
-    let fetching = false
-    const fetchEntries = async () => {
-      const temp: Entry = { files: [], dirs: [] }
+    let cache: Entry = { files: [], dirs: [] }
+    let fetching: Promise<void> | null = null
+    const fetchEntries = async () => {
+      const temp: Entry = { files: [], dirs: [] }
       const set = new Set<string>()
       for await (const file of Ripgrep.files({ cwd: Instance.directory })) {
-        temp.files.push(file)
+        temp.files.push(file)
         let current = file
         while (true) {
           const dir = path.dirname(current)
           if (dir === ".") break
           if (dir === current) break
           current = dir
           if (set.has(dir)) continue
           set.add(dir)
-          temp.dirs.push(dir + "/")
+          temp.dirs.push(dir + "/")
         }
       }
-      cache = temp
-    }
-    const refresh = () => {
-      fetching = true
-      fetchEntries()
-        .catch((error) => {
-          if (error instanceof Error && "code" in error && error.code === "ENOENT") return
-          log.error("failed to refresh files", { error })
-        })
-        .finally(() => {
-          fetching = false
-        })
-    }
-    refresh()
+      cache = temp
+    }
+    const refresh = () => {
+      if (!fetching) {
+        fetching = fetchEntries()
+          .catch((error) => {
+            const err = error as NodeJS.ErrnoException
+            if (err?.code === "ENOENT") return
+            log.error("failed to refresh files", { error })
+          })
+          .finally(() => {
+            fetching = null
+          })
+      }
+      return fetching
+    }
+    await refresh()

     return {
       async files() {
-        if (!fetching) {
-          refresh()
-        }
+        await refresh()
         return cache
       },
     }
   })

This keeps the API the same while ensuring callers never see the pre-populated empty cache.

packages/opencode/src/config/persist.ts (1)

61-142: Re-check config cache refresh semantics when hot reload is disabled

update() only invalidates the "config" state when isConfigHotReloadEnabled() is true:

  • Global scope: await State.invalidate("config")
  • Project scope: await Instance.invalidate("config")

When the flag is false, you still write the updated config to disk and return a fresh after via Config.readFreshConfig(), but no cache invalidation happens for the in‑memory state driving Config.get() and all the subsystems that depend on it. Unless Config.get() itself bypasses caching when hot reload is off, this recreates the previously‑reported behavior: users can PATCH/save config successfully, see the file updated, but the running process continues using stale config until restart.

Given the earlier P1 comment on this exact issue, it’d be good to either:

  • Ensure config state is refreshed even when hot reload is disabled (e.g. always invalidating "config" for the current directory, and gate only the fan‑out / subsystem hot‑reload on the env flag), or
  • Clearly document that PATCH /config only takes effect at process restart when OPENCODE_CONFIG_HOT_RELOAD is not enabled.

Also, in the config.update.cacheInvalidated log entry, cacheInvalidated is computed as hotReloadEnabled && input.scope === "global", so it reports false even when you invalidate project config via Instance.invalidate("config"). That’s a small observability bug worth correcting.

#!/bin/bash
# Quick sanity check suggestion (run in repo):
# 1. Start opencode with OPENCODE_CONFIG_HOT_RELOAD unset/false.
# 2. From a client, call GET /config, then PATCH /config with a simple change (e.g. theme).
# 3. Call GET /config again and verify whether the change is visible.
# 4. Also check logs for `config.update.cacheInvalidated` and confirm whether project-scoped updates are logged as invalidated.
packages/opencode/src/project/state.ts (1)

9-17: Named-state registry entries are never cleaned up, causing an unbounded leak

State.register adds NamedEntry instances into namedRegistry, and invalidate disposes underlying recordsByKey entries, but the corresponding NamedEntry objects are never removed from their Set. Over time (especially across many directories or repeated disposals), this grows without bound and every invalidate walks stale entries that can no longer be disposed.

You can fix this by deleting NamedEntry instances as you invalidate them and pruning empty name sets, e.g.:

  export async function invalidate(name: string, key?: string) {
@@
-    const entries = namedRegistry.get(name)
+    const entries = namedRegistry.get(name)
     if (!entries) {
       return
     }
@@
-    const tasks: Promise<void>[] = []
-    for (const entry of entries) {
-      if (key && entry.key !== key) continue
-
-      const keyRecords = recordsByKey.get(entry.key)
-      if (!keyRecords) continue
-
-      const stateEntry = keyRecords.get(entry.init)
-      if (!stateEntry) continue
+    const tasks: Promise<void>[] = []
+    for (const entry of entries) {
+      if (key && entry.key !== key) continue
+
+      const keyRecords = recordsByKey.get(entry.key)
+      const stateEntry = keyRecords?.get(entry.init)
+
+      // Drop stale registry entries that no longer have backing state
+      if (!keyRecords || !stateEntry) {
+        entries.delete(entry)
+        continue
+      }
@@
-      keyRecords.delete(entry.init)
+      keyRecords.delete(entry.init)
+      if (keyRecords.size === 0) {
+        recordsByKey.delete(entry.key)
+      }
+
+      // Remove the named entry now that its state has been disposed
+      entries.delete(entry)
     }
+
+    // If there are no more entries for this name, drop it from the registry
+    if (entries.size === 0) {
+      namedRegistry.delete(name)
+    }

This keeps the named registry bounded and avoids re-processing dead entries on subsequent invalidations.

Also applies to: 38-67, 92-136

packages/opencode/src/config/config.ts (1)

181-187: Duplicate autoshare → share logic remains

The duplicated if (result.autoshare === true && !result.share) blocks still appear unchanged, so the second block is dead code. Since this was already flagged in a previous review, it would be good to remove the duplicate and keep just one normalization block.

🧹 Nitpick comments (21)
packages/opencode/src/provider/provider.ts (1)

242-488: Provider state initialization and config/database/model merge look solid

The new State.register("provider", ...) initializer cleanly centralizes provider/model setup:

  • Merges config providers with the ModelsDev database, using existing model entries (parsed.models[modelID]) as the base, so config overrides now correctly layer on top of database metadata.
  • Tracks realIdByKey for aliased model IDs and consults it in getModel, which should avoid the earlier alias mismatch issues.
  • Applies disabled_providers consistently when loading env keys, Auth entries, custom loaders, and plugin-based auth, and filters providers with no remaining models.
  • Adds github-copilot-enterprise by deriving from the Copilot DB entry and wires it into the plugin auth load path, which aligns with the intended enterprise behavior.

One small improvement: you can reuse the config snapshot instead of calling Config.get() again when computing disabled providers, which also guarantees both paths see the same config instance:

-      const config = await Config.get()
+      const config = await Config.get()
       const database = await ModelsDev.get()
@@
-      const disabled = await Config.get().then((cfg) => new Set(cfg.disabled_providers ?? []))
+      const disabled = new Set(config.disabled_providers ?? [])

Beyond that, the refactor to State.register and the GitHub Copilot Enterprise handling look good to me.

packages/opencode/test/config/write.test.ts (1)

1-75: JSONC write tests accurately cover incremental-update behavior

Both tests are well-targeted:

  • They verify that writeConfigFile preserves comments and updates fields in-place for JSONC.
  • They ensure the resulting file is still valid JSONC (no parse errors) after incremental edits.
  • Temp directory handling and cleanup are correct.

You might optionally add an assertion that the newly added agent.plan block is present in the second test, but as-is these tests already provide good coverage of the intended behavior.

packages/opencode/src/command/index.ts (1)

4-4: Command state registration aligns with new State registry and keeps behavior intact

Using State.register("command", () => Instance.directory, async () => { ... }) cleanly moves command config into the central state system while preserving behavior: you still hydrate from cfg.command, and you still ensure a synthesized init command when it’s absent. This should make command respond correctly to config invalidations.

One minor note: result is a plain {[name: string]: Info} with keys coming from user config. If you ever expect arbitrary/untrusted names (e.g., __proto__), you might prefer a Map<string, Info> to completely sidestep prototype-key edge cases, though it’s not a blocker here.

Also applies to: 40-69

packages/opencode/src/mcp/index.ts (1)

11-11: MCP State.register migration looks correct; consider slightly richer cleanup logging

The new State.register("mcp", () => Instance.directory, async () => { ... }, async (state) => { ... }) wiring preserves the prior behavior: clients and status are built from cfg.mcp, and all active clients are closed on disposal. This should integrate cleanly with config-driven invalidation.

For observability, you might consider iterating with Object.entries(state.clients) so the cleanup log can include the MCP name alongside the error; currently only the error object is logged, which can make it harder to attribute failures when multiple MCP clients are in play.

Also applies to: 57-94

specs/config-spec.md (1)

1-100: PATCH /config spec is clear and matches the new config workflow

This spec does a good job describing the end-to-end behavior: scope selection, atomic write with lock + backup, JSONC comment preservation, hot-reload gating via OPENCODE_CONFIG_HOT_RELOAD, and config.updated fan-out semantics. It also clearly explains the legacy/non-hot-reload path and client workflows.

The only suggestion is process-oriented: as the lock timeout, backup behavior, and error shapes evolve, this doc will need to be kept in sync with lock.ts, persist.ts, and error.ts. Consider adding a brief “implementation reference” section listing those modules to make future cross-checks easier.

packages/opencode/src/project/instance.ts (1)

51-75: forEach should guard against rejected context promises to avoid aborting mid-fan-out

The new invalidate helper is a thin, correct wrapper over State.invalidate.

For forEach, the general structure (iterating cached contexts, running fn inside context.provide, collecting per-directory errors, and logging a warning) is good. However, if any contextPromise in cache rejects (e.g., a previous Project.fromDirectory failure), const ctx = await contextPromise will throw and short-circuit the whole loop, skipping remaining directories and bypassing your error aggregation logic.

You can make this more robust by catching at the contextPromise level as well:

-    for (const [directory, contextPromise] of cache) {
-      const ctx = await contextPromise
-      await context
+    for (const [directory, contextPromise] of cache) {
+      const ctx = await contextPromise.catch((error) => {
+        errors.push({
+          directory,
+          error: error instanceof Error ? error : new Error(String(error)),
+        })
+        return undefined
+      })
+      if (!ctx) continue
+      await context
         .provide(ctx, async () => {
           await fn(directory)
         })

This keeps the behavior you want (per-directory error collection and logging) even if some cached contexts never successfully initialized.

packages/opencode/src/plugin/index.ts (1)

9-9: Plugin State.register wiring is solid; consider stronger typing and per-plugin error isolation

The new plugin state registration integrates well with the rest of the system:

  • The initializer constructs a PluginInput with Instance.project/worktree/directory and an opencode client backed by Server.App().fetch.
  • It respects config.plugin, conditionally adds default plugins based on Flag.OPENCODE_DISABLE_DEFAULT_PLUGINS, and installs non-file:// plugins via BunProc.install.
  • The disposer walks state.hooks and calls any cleanup() hooks, logging failures without aborting the whole cleanup.

Two small improvements to consider:

  1. Type the hooks array explicitly

Right now const hooks = [] infers a very loose type. Since you already import Hooks, you can tighten this:

-      const hooks = []
+      const hooks: Hooks[] = []

This will make downstream uses like hook.config/hook.event better-typed.

  2. Isolate failures from individual plugin exports

A single bad plugin (or export) currently causes the entire plugin state initialization to reject. If you’d rather keep the rest of the plugins usable, wrap per-export initialization in a try/catch and log failures:

-        for (const [_name, fn] of Object.entries<PluginInstance>(mod)) {
-          const init = await fn(input)
-          hooks.push(init)
-        }
+        for (const [exportName, fn] of Object.entries<PluginInstance>(mod)) {
+          try {
+            const init = await fn(input)
+            hooks.push(init)
+          } catch (error) {
+            log.error("Failed to initialize plugin export", {
+              path: plugin,
+              exportName,
+              error,
+            })
+          }
+        }

This keeps misbehaving plugins from taking down the whole plugin subsystem while still surfacing clear logs.

Also applies to: 15-67

packages/opencode/src/format/index.ts (1)

43-59: Formatter override merge semantics are subtle; consider making arrays explicit

The override block:

const result: Formatter.Info = mergeDeep(formatters[name] ?? {}, {
  command: [],
  extensions: [],
  ...item,
})
if (result.command.length === 0) continue
result.enabled = async () => true
result.name = name
formatters[name] = result

relies on mergeDeep’s array behavior plus a command: [] default. If arrays in the second argument overwrite the base arrays (which is common for deep‑merge utilities), then:

  • A config that only sets environment for an existing formatter (no command) can end up with result.command empty, causing the continue to run and the environment override to be dropped silently.
  • Similar surprises are possible for extensions.

To avoid subtle bugs and make the behavior obvious, I’d suggest handling command/extensions explicitly instead of via mergeDeep, e.g.:

  • Start from the base Formatter.Info.
  • Override command only when item.command is provided.
  • Override extensions only when item.extensions is provided.
  • Always merge/override environment and other scalar fields.

And add a small test that verifies environment‑only overrides and partial overrides behave as intended.
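A minimal sketch of the explicit-merge approach, using a stand-in FormatterInfo type and an illustrative applyOverride helper (neither is the actual module's API) — the point is that command and extensions are only replaced when the override provides them, so environment-only overrides survive:

```typescript
// Stand-in for Formatter.Info; field names are assumptions for illustration.
type FormatterInfo = {
  name: string
  command: string[]
  extensions: string[]
  environment?: Record<string, string>
}

function applyOverride(
  base: FormatterInfo | undefined,
  name: string,
  item: Partial<FormatterInfo>,
): FormatterInfo | undefined {
  const result: FormatterInfo = {
    name,
    // keep the base command unless the override explicitly provides one
    command: item.command ?? base?.command ?? [],
    extensions: item.extensions ?? base?.extensions ?? [],
    // environment is merged rather than replaced
    environment: { ...base?.environment, ...item.environment },
  }
  // only now is an empty command a real "disable this formatter" signal
  if (result.command.length === 0) return undefined
  return result
}

// An environment-only override no longer wipes out the base command:
const base: FormatterInfo = { name: "prettier", command: ["prettier", "-w"], extensions: [".ts"] }
const merged = applyOverride(base, "prettier", { environment: { NODE_ENV: "test" } })
```

With this shape, the surprising `continue` path can only trigger when a config genuinely clears the command.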

packages/opencode/test/config/config.test.ts (1)

271-297: Use pathToFileURL for plugin file URLs to keep tests OS-agnostic

The new plugin tests look good and accurately exercise scoped vs. relative plugin handling, but the expectation in the “appends plugins discovered from directories…” test builds a file:// URL manually:

expect(config.plugin).toEqual([
  "local-plugin",
  `file://${path.join(globalTmp.path, "plugin", "custom.ts")}`,
])

On Windows this will generate an invalid URL (file://C:\...) and won’t match how the config loader constructs file URLs (which appears to mirror the earlier test using pathToFileURL).

You can make this portable and consistent with the previous test by reusing pathToFileURL:

-        const config = await Config.get()
-        expect(config.plugin).toEqual(["local-plugin", `file://${path.join(globalTmp.path, "plugin", "custom.ts")}`])
+        const config = await Config.get()
+        const pluginFileUrl = pathToFileURL(path.join(globalTmp.path, "plugin", "custom.ts")).href
+        expect(config.plugin).toEqual(["local-plugin", pluginFileUrl])

[pathToFileURL is already imported at the top of this file.]

Also applies to: 465-505

packages/opencode/test/config/hot-reload.test.ts (1)

57-72: “event subscriber…” test doesn’t actually exercise subscription ordering

The helper subscribeWithContext is designed to run Config.Event.Updated handlers inside the correct Instance context, but the test:

test("event subscriber sees refreshed config before targeted invalidations", async () => {
  // ...
  const response = await patchConfig(directory, { model: "event-model" }, "global")
  expect(response.status).toBe(200)

  await Instance.provide({
    directory,
    fn: async () => {
      const config = await Config.get()
      expect(config.model).toBe("event-model")
    },
  })
})

never attaches a subscriber and doesn’t assert anything about the ordering of “subscriber sees refreshed config” vs. subsequent invalidations. It effectively just re-checks that a global PATCH updates the model for that workspace.

Consider either:

  • Updating the test to use subscribeWithContext and assert that inside the event callback Config.get() already reflects the updated config; or
  • Renaming the test to reflect what it actually covers if ordering is already validated elsewhere.

This will keep the test suite’s intent aligned with behavior and make regressions around event ordering easier to catch.

Also applies to: 240-262
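To make the intended assertion concrete, here is a self-contained toy model of the ordering the test name promises — a subscriber list and a cached value standing in for the real Bus/Config machinery (all names here are illustrative, not the project's API):

```typescript
type Handler = (model: string) => void
const handlers: Handler[] = []
let cachedModel = "old-model"
const seenBySubscriber: string[] = []

// attach a subscriber (the real test would use subscribeWithContext)
handlers.push((model) => seenBySubscriber.push(model))

function publishConfigUpdated(next: string) {
  cachedModel = next // refresh the cache first...
  for (const fn of handlers) fn(cachedModel) // ...then notify subscribers
  // targeted invalidations would run only after this point
}

publishConfigUpdated("event-model")
```

The real test would assert, inside the event callback, that Config.get() already returns the updated model — which is exactly what the current test body never checks.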

packages/opencode/src/config/invalidation.ts (5)

19-56: Unused diff parameters in invalidate helpers

Several helpers (invalidateProvider, invalidateMCP, invalidateLSP, invalidatePlugin) accept a diff: ConfigDiff parameter they don’t use. Either drop the parameter or rename to _diff to avoid lint noise and make intent clearer.


58-133: Reliance on top‑level diff flags for invalidation coverage

applyInternal only triggers most invalidations based on boolean flags like diff.provider, diff.mcp, diff.plugin, etc., while more granular fields (providerKeys, mcpKeys, pluginAdded, pluginRemoved) are ignored for invalidation. This assumes computeDiff always sets the corresponding top‑level flags whenever nested keys change; if that ever isn’t true, provider/MCP/plugin/tool‑registry invalidations will silently be skipped. Consider asserting this invariant in tests or deriving providerChanged/mcpChanged/pluginChanged from both the flags and the nested key buckets.
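One way to harden this is to derive the "changed" signals from both the boolean flags and the nested key buckets, so invalidation cannot be skipped if computeDiff ever sets only one of them. A sketch against a minimal stand-in for the ConfigDiff shape:

```typescript
// Minimal stand-in for the relevant ConfigDiff fields.
type ConfigDiff = {
  provider?: boolean
  providerKeys?: string[]
  mcp?: boolean
  mcpKeys?: string[]
}

// Changed if either the top-level flag is set or any nested key was touched.
function providerChanged(diff: ConfigDiff): boolean {
  return Boolean(diff.provider) || (diff.providerKeys?.length ?? 0) > 0
}

function mcpChanged(diff: ConfigDiff): boolean {
  return Boolean(diff.mcp) || (diff.mcpKeys?.length ?? 0) > 0
}
```

This makes the invariant self-enforcing instead of relying on computeDiff always keeping flags and key buckets in sync.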


58-71: Directory selection for global‑scope invalidation

targetDirectory falls back to process.cwd() when input.directory is absent, even for scope: "global". If a global config update is published without a specific directory, this will only refresh/invalidate the instance rooted at the current working directory, not all active instances. If the intended behaviour is “global fan‑out”, consider routing global updates through Instance.forEach (or ensuring the publisher includes each instance’s directory), and document this expectation.
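A sketch of the "global fan-out" alternative, with a simple array standing in for the live instance registry (the real iteration helper would be Instance.forEach; everything here is illustrative):

```typescript
// Stand-in for the set of active instances.
const activeDirectories = ["/work/a", "/work/b"]
const refreshedDirs: string[] = []

function forEachInstance(fn: (dir: string) => void) {
  for (const dir of activeDirectories) fn(dir)
}

function applyGlobalUpdate() {
  // Instead of falling back to process.cwd(), visit every active instance
  // so a global config change refreshes all of them.
  forEachInstance((dir) => {
    refreshedDirs.push(dir) // real code would invalidate that instance's state
  })
}

applyGlobalUpdate()
```

Whichever behaviour is intended, documenting it on the publisher side avoids silent single-instance refreshes.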


135-155: Aggregate error handling hides which target failed

The Promise.all(tasks) block logs a single "Targeted config invalidation failed" message with only error: String(error), losing which specific invalidation(s) failed. If multiple targets are scheduled, this makes debugging harder. Consider catching/logging per‑task (or at least including targets and sections in the error log) while still allowing others to proceed.
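A per-task sketch of that idea, with synchronous thunks standing in for the real async invalidations (for promises, Promise.allSettled gives the same per-task visibility):

```typescript
type Task = { target: string; run: () => void }

// Catch per task so one failure neither stops the others nor hides its target.
function runInvalidations(tasks: Task[]): string[] {
  const failed: string[] = []
  for (const task of tasks) {
    try {
      task.run()
    } catch (error) {
      failed.push(task.target)
      // real code: log.error("config invalidation failed", { target: task.target, error })
    }
  }
  return failed
}

const failedTargets = runInvalidations([
  { target: "provider", run: () => {} },
  { target: "mcp", run: () => { throw new Error("boom") } },
])
```

The log now names the failing target(s) instead of collapsing everything into one opaque message.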


160-182: Event payload assumptions in setup subscriber

The setup subscriber destructures diff, scope, directory, and refreshed from event.properties as any. This tightly couples to how the event is published (including refreshed), but the Zod schema for Config.Event.Updated only guarantees diff, scope, directory?, refreshed?. Please ensure the publisher always sets refreshed consistently with the earlier cache invalidation (so applyInternal doesn’t redundantly invalidate "config"), and consider narrowing the cast or validating event.properties before passing into apply.
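A dependency-free sketch of narrowing event.properties before use, mirroring the documented payload fields (diff, scope, optional directory/refreshed) instead of casting to any; the function name is illustrative:

```typescript
type UpdatedPayload = {
  diff: Record<string, unknown>
  scope: "project" | "global"
  directory?: string
  refreshed?: boolean
}

function parseUpdatedPayload(value: unknown): UpdatedPayload | undefined {
  if (typeof value !== "object" || value === null) return undefined
  const v = value as Record<string, unknown>
  // positive checks narrow scope to the literal union
  const scope = v.scope === "project" || v.scope === "global" ? v.scope : undefined
  if (!scope) return undefined
  if (typeof v.diff !== "object" || v.diff === null) return undefined
  return {
    diff: v.diff as Record<string, unknown>,
    scope,
    directory: typeof v.directory === "string" ? v.directory : undefined,
    refreshed: typeof v.refreshed === "boolean" ? v.refreshed : undefined,
  }
}
```

Since the event already has a Zod schema, reusing that schema's safeParse here would be equivalent and avoid the hand-rolled checks.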

packages/opencode/src/config/write.ts (3)

25-49: JSON vs JSONC handling and initial write semantics

isJsonc treats both .jsonc and .json as JSONC for incremental updates, but the initial write path (!existingContent || !file.exists()) always does a full JSON.stringify rewrite, losing any existing JSONC comments/formatting if existingContent is non‑null but file.exists() has raced to false. This is probably fine under the current locked writer (since existingContent should only be non‑null for an existing file), but it’s worth confirming the invariants around existingContent and whether you ever intend to preserve JSONC comments on first write.


117-174: Incremental update behaviour when previous is unavailable

When options.previous is absent, applyIncrementalUpdates only sets keys present in newConfig and never removes keys that disappeared, which can leave stale keys from the existing file. From persist.update, this happens when previousNormalized is missing (e.g., parse/validation failures of existing content). If that’s intentional (best‑effort, non‑destructive updates), it would be good to document; otherwise, consider using the parsed raw object as previous to allow deletions to propagate even in that case.


176-221: Nested instruction key derivation couples tightly to ConfigDiff

buildNestedInstructions derives diffChildKeys from diff.providerKeys/diff.mcpKeys plus unions of previous/next keys. This is reasonable, but it again assumes computeDiff is in sync with these expectations (e.g., that provider/mcp key buckets are always present when needed). Make sure tests cover cases where individual provider/MCP entries are added, removed, or modified so nested updates don’t miss or over‑update keys.

packages/opencode/src/config/config.ts (3)

43-73: Plugin file reference resolution: Windows and relative path coverage

resolvePluginFileReference correctly normalizes ~/, Windows‑style relative prefixes, and absolute paths into file:// URLs, and uses URL + pathToFileURL to resolve relative specifiers against the config file. One minor nit: normalizeWindowsPath only runs in the relative‑URL branch; Windows absolute paths resolved via pathToFileURL are still handled correctly, but specifiers like C:\foo\bar.ts on non‑Windows platforms won’t be treated as path‑like (which is probably acceptable). Overall, this is a good improvement.


89-165: Plugin aggregation and potential duplication across directories

loadStateFromDisk now aggregates plugin file URLs into pluginFiles and appends them to result.plugin after processing all directories. This ensures you don’t lose global or project‑level plugin discoveries, but it may introduce duplicates if the same plugin path appears in multiple directories or config sources. Consider de‑duplicating result.plugin (e.g., via a Set) before returning the final config to avoid redundant plugin registrations.
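The de-duplication itself is a one-liner; a minimal sketch that preserves first-seen order (the merged values are illustrative):

```typescript
// De-duplicate plugin entries while keeping first-seen order.
function dedupePlugins(plugins: string[]): string[] {
  return [...new Set(plugins)]
}

const mergedPlugins = dedupePlugins([
  "local-plugin",
  "file:///home/user/.config/opencode/plugin/custom.ts",
  "local-plugin", // discovered again via a second directory
])
```

Running this over result.plugin just before returning the final config would prevent redundant registrations without changing ordering semantics.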


879-895: Config.update depends on ambient Instance.directory when directory omitted

update() defaults scope to "project" and directory to Instance.directory before delegating to ./persist. This is fine as long as all callers either (a) run inside an Instance.provide context or (b) explicitly pass directory. If you ever call Config.update for project scope outside an instance context, this will throw Context.NotFound via Instance.directory. It’s worth documenting this requirement and perhaps asserting in tests.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a8832fa and 980c331.

⛔ Files ignored due to path filters (1)
  • packages/sdk/js/src/gen/types.gen.ts is excluded by !**/gen/**
📒 Files selected for processing (29)
  • packages/opencode/src/agent/agent.ts (2 hunks)
  • packages/opencode/src/command/index.ts (2 hunks)
  • packages/opencode/src/config/backup.ts (1 hunks)
  • packages/opencode/src/config/config.ts (9 hunks)
  • packages/opencode/src/config/diff.ts (1 hunks)
  • packages/opencode/src/config/error.ts (1 hunks)
  • packages/opencode/src/config/global-file.ts (1 hunks)
  • packages/opencode/src/config/hot-reload.ts (1 hunks)
  • packages/opencode/src/config/invalidation.ts (1 hunks)
  • packages/opencode/src/config/lock.ts (1 hunks)
  • packages/opencode/src/config/persist.ts (1 hunks)
  • packages/opencode/src/config/write.ts (1 hunks)
  • packages/opencode/src/file/index.ts (2 hunks)
  • packages/opencode/src/file/watcher.ts (2 hunks)
  • packages/opencode/src/format/index.ts (2 hunks)
  • packages/opencode/src/lsp/index.ts (2 hunks)
  • packages/opencode/src/mcp/index.ts (2 hunks)
  • packages/opencode/src/permission/index.ts (2 hunks)
  • packages/opencode/src/plugin/index.ts (1 hunks)
  • packages/opencode/src/project/bootstrap.ts (1 hunks)
  • packages/opencode/src/project/instance.ts (1 hunks)
  • packages/opencode/src/project/state.ts (3 hunks)
  • packages/opencode/src/provider/provider.ts (2 hunks)
  • packages/opencode/src/server/server.ts (4 hunks)
  • packages/opencode/src/tool/registry.ts (2 hunks)
  • packages/opencode/test/config/config.test.ts (4 hunks)
  • packages/opencode/test/config/hot-reload.test.ts (1 hunks)
  • packages/opencode/test/config/write.test.ts (1 hunks)
  • specs/config-spec.md (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (22)
packages/opencode/src/project/bootstrap.ts (1)
packages/opencode/src/project/instance.ts (1)
  • Instance (15-92)
packages/opencode/src/config/diff.ts (1)
packages/opencode/src/config/config.ts (2)
  • Info (527-712)
  • Info (714-714)
packages/opencode/src/file/index.ts (2)
packages/opencode/src/util/log.ts (1)
  • error (135-139)
packages/opencode/src/project/instance.ts (1)
  • Instance (15-92)
packages/opencode/src/tool/registry.ts (3)
packages/opencode/src/config/config.ts (3)
  • state (197-197)
  • Info (527-712)
  • Info (714-714)
packages/opencode/src/project/instance.ts (2)
  • state (48-50)
  • Instance (15-92)
packages/opencode/src/tool/tool.ts (1)
  • Info (18-34)
packages/opencode/src/permission/index.ts (4)
packages/opencode/src/config/config.ts (1)
  • state (197-197)
packages/opencode/src/tool/registry.ts (1)
  • state (27-58)
packages/opencode/src/project/instance.ts (2)
  • state (48-50)
  • Instance (15-92)
packages/opencode/src/file/time.ts (1)
  • state (6-15)
packages/opencode/src/config/invalidation.ts (3)
packages/opencode/src/config/diff.ts (1)
  • ConfigDiff (4-26)
packages/opencode/src/project/instance.ts (1)
  • Instance (15-92)
packages/opencode/src/config/hot-reload.ts (1)
  • isConfigHotReloadEnabled (1-3)
packages/opencode/src/command/index.ts (3)
packages/opencode/src/config/config.ts (3)
  • state (197-197)
  • Info (527-712)
  • Info (714-714)
packages/opencode/src/tool/registry.ts (1)
  • state (27-58)
packages/opencode/src/project/instance.ts (2)
  • state (48-50)
  • Instance (15-92)
packages/opencode/src/lsp/index.ts (3)
packages/opencode/src/config/config.ts (1)
  • state (197-197)
packages/opencode/src/tool/registry.ts (1)
  • state (27-58)
packages/opencode/src/project/instance.ts (2)
  • state (48-50)
  • Instance (15-92)
packages/opencode/src/server/server.ts (3)
packages/opencode/src/project/instance.ts (2)
  • directory (39-41)
  • Instance (15-92)
packages/opencode/src/config/hot-reload.ts (1)
  • isConfigHotReloadEnabled (1-3)
packages/opencode/src/cli/cmd/tui/event.ts (1)
  • TuiEvent (4-39)
packages/opencode/test/config/write.test.ts (1)
packages/opencode/src/config/write.ts (1)
  • writeConfigFile (25-49)
packages/opencode/src/config/persist.ts (8)
packages/opencode/src/project/instance.ts (2)
  • directory (39-41)
  • Instance (15-92)
packages/opencode/src/config/global-file.ts (1)
  • resolveGlobalFile (5-8)
packages/opencode/src/config/config.ts (3)
  • Info (527-712)
  • Info (714-714)
  • update (879-895)
packages/opencode/src/config/diff.ts (2)
  • ConfigDiff (4-26)
  • computeDiff (45-123)
packages/opencode/src/config/lock.ts (1)
  • acquireLock (90-133)
packages/opencode/src/config/backup.ts (2)
  • createBackup (3-12)
  • restoreBackup (14-17)
packages/opencode/src/config/write.ts (2)
  • writeConfigFile (25-49)
  • writeFileAtomically (265-279)
packages/opencode/src/config/hot-reload.ts (1)
  • isConfigHotReloadEnabled (1-3)
packages/opencode/test/config/hot-reload.test.ts (3)
packages/opencode/src/project/instance.ts (2)
  • directory (39-41)
  • Instance (15-92)
packages/opencode/src/project/bootstrap.ts (1)
  • InstanceBootstrap (15-30)
packages/opencode/src/config/config.ts (1)
  • directories (897-899)
packages/opencode/src/agent/agent.ts (3)
packages/opencode/src/config/config.ts (3)
  • state (197-197)
  • Info (527-712)
  • Info (714-714)
packages/opencode/src/tool/registry.ts (1)
  • state (27-58)
packages/opencode/src/project/instance.ts (2)
  • state (48-50)
  • Instance (15-92)
packages/opencode/src/project/state.ts (4)
packages/opencode/src/config/config.ts (1)
  • state (197-197)
packages/opencode/src/tool/registry.ts (2)
  • state (27-58)
  • register (78-86)
packages/opencode/src/project/instance.ts (3)
  • state (48-50)
  • dispose (76-79)
  • invalidate (51-53)
packages/opencode/src/file/time.ts (1)
  • state (6-15)
packages/opencode/src/mcp/index.ts (3)
packages/opencode/src/config/config.ts (1)
  • state (197-197)
packages/opencode/src/tool/registry.ts (1)
  • state (27-58)
packages/opencode/src/project/instance.ts (2)
  • state (48-50)
  • Instance (15-92)
packages/opencode/test/config/config.test.ts (2)
packages/opencode/test/fixture/fixture.ts (1)
  • tmpdir (11-28)
packages/opencode/src/project/instance.ts (1)
  • Instance (15-92)
packages/opencode/src/config/config.ts (4)
packages/opencode/src/project/instance.ts (4)
  • directory (39-41)
  • Instance (15-92)
  • worktree (42-44)
  • state (48-50)
packages/opencode/src/config/global-file.ts (1)
  • resolveGlobalFile (5-8)
packages/opencode/src/config/persist.ts (1)
  • update (61-183)
packages/opencode/src/config/diff.ts (1)
  • ConfigDiff (4-26)
packages/opencode/src/provider/provider.ts (3)
packages/opencode/src/config/config.ts (1)
  • state (197-197)
packages/opencode/src/tool/registry.ts (1)
  • state (27-58)
packages/opencode/src/project/instance.ts (2)
  • state (48-50)
  • Instance (15-92)
packages/opencode/src/config/write.ts (2)
packages/opencode/src/config/diff.ts (1)
  • ConfigDiff (4-26)
packages/opencode/src/config/config.ts (2)
  • Info (527-712)
  • Info (714-714)
packages/opencode/src/file/watcher.ts (4)
packages/opencode/src/config/config.ts (1)
  • state (197-197)
packages/opencode/src/tool/registry.ts (1)
  • state (27-58)
packages/opencode/src/project/instance.ts (2)
  • state (48-50)
  • Instance (15-92)
packages/opencode/src/file/time.ts (1)
  • state (6-15)
packages/opencode/src/plugin/index.ts (4)
packages/plugin/src/index.ts (2)
  • Plugin (27-27)
  • PluginInput (19-25)
packages/opencode/src/config/config.ts (1)
  • state (197-197)
packages/opencode/src/tool/registry.ts (1)
  • state (27-58)
packages/opencode/src/project/instance.ts (2)
  • state (48-50)
  • Instance (15-92)
packages/opencode/src/format/index.ts (4)
packages/opencode/src/config/config.ts (3)
  • state (197-197)
  • Info (527-712)
  • Info (714-714)
packages/opencode/src/tool/registry.ts (2)
  • state (27-58)
  • enabled (126-145)
packages/opencode/src/project/instance.ts (2)
  • state (48-50)
  • Instance (15-92)
packages/opencode/src/format/formatter.ts (14)
  • enabled (18-20)
  • enabled (27-29)
  • enabled (66-74)
  • enabled (111-120)
  • enabled (127-129)
  • enabled (136-139)
  • enabled (146-148)
  • enabled (155-178)
  • enabled (185-205)
  • enabled (212-220)
  • enabled (227-229)
  • enabled (236-238)
  • enabled (245-247)
  • Info (6-12)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Cursor Bugbot
🔇 Additional comments (16)
packages/opencode/src/config/hot-reload.ts (1)

1-3: Env flag helper is clear and minimal

Implementation is straightforward, typed, and matches the documented OPENCODE_CONFIG_HOT_RELOAD === "true" contract. No changes needed.

packages/opencode/src/project/bootstrap.ts (1)

13-29: Config invalidation bootstrap ordering looks appropriate

Calling await ConfigInvalidation.setup() at the start of InstanceBootstrap (before Plugin/Format/LSP/File init) is a good place to wire hot-reload invalidation so subsequent subsystems are ready to react to config updates. No issues from this change alone.

packages/opencode/src/file/watcher.ts (1)

5-77: State.register-based watcher lifecycle looks correct

The switch from Instance.state to State.register("filewatcher", ...) is consistent with the new state system:

  • Directory scoping via () => Instance.directory matches other modules (config, tool-registry).
  • Non-git projects cleanly no-op by returning {} and the disposer safely checks for sub.
  • FileWatcher.init() still respects Flag.OPENCODE_EXPERIMENTAL_WATCHER before touching state.

This refactor looks good.

packages/opencode/src/config/global-file.ts (1)

5-7: Global config resolver is simple and matches documented behavior

resolveGlobalFile() eagerly creates Global.Path.config and returns opencode.jsonc under it, which aligns with the spec’s global-scope semantics and keeps the path resolution centralized. No issues from a correctness or lifecycle standpoint.

packages/opencode/src/permission/index.ts (1)

7-7: Permission state registration and teardown behavior look sound

Registering permission state via State.register("permission", () => Instance.directory, () => { ... }, async (state) => { ... }) keeps the prior in-memory structure while making it invalidation-aware. On dispose, all pending requests are rejected with RejectedError, which is an appropriate default for hot-reload or shutdown.

No functional issues spotted in this block.

Also applies to: 53-85

packages/opencode/src/tool/registry.ts (1)

27-58: State.register migration for tool registry looks consistent

The registry state is now properly keyed as "tool-registry" and scoped by Instance.directory, matching the new state system used elsewhere (config, format, etc.). The async initializer still aggregates tools from config directories and plugins with unchanged behavior, so this refactor looks safe.

packages/opencode/src/config/diff.ts (1)

4-123: ConfigDiff and computeDiff implementation look solid

The ConfigDiff shape and computeDiff() logic line up well with the Config.Info schema: you capture all of the important top‑level sections, include key‑level details for provider/mcp, and expose pluginAdded/pluginRemoved for targeted invalidation. computeKeysChanged() handles undefined inputs safely. No issues from a correctness or readability standpoint.

packages/opencode/src/server/server.ts (2)

84-100: Config PATCH flow and LastConfigUpdate cache are well structured

The new LastConfigUpdate map plus rememberConfigUpdate() give you a cheap, per-directory record of which sections changed, and the 60s cleanup interval should prevent the cache from growing unbounded.

The /config PATCH handler’s integration with Config.update and hot-reload is also solid:

  • Scope handling (project vs global) is explicit and defaults sensibly.
  • diffForPublish is used for downstream events and to derive sections for TUI enrichment.
  • When hot reload is enabled:
    • Project scope publishes a single Config.Event.Updated for the current directory.
    • Global scope fans out via Instance.forEach, remembers sections per directory, and aggregates any publish failures into a single error with good logging context.

This looks like a good balance between correctness, observability, and keeping the hot-reload behavior clearly behind the env flag.

Also applies to: 252-303


1784-1813: Nice TUI toast enrichment based on recent config updates

The /tui/show-toast handler’s enrichment logic is well-contained and defensive:

  • It only rewrites messages that match “Saved (global|project) config”.
  • It prefers a fresh (< 10s) LastConfigUpdate entry for the current directory/scope and falls back to the freshest matching scope overall.
  • It replaces the -> undefined suffix when present, or rebuilds the message to include the changed sections or “no changes”.
  • Any failure is swallowed, so the endpoint never breaks toast delivery.

This should make config save toasts much more informative without risking instability.

packages/opencode/src/config/error.ts (1)

1-45: Structured config error types look good

The four NamedError definitions (ConfigUpdateError, ConfigValidationError, ConfigWriteConflictError, ConfigWriteError) have clear, appropriately scoped payload schemas and align well with the update/validation/write flows described in the PR. This should make error handling and logging around config operations much more consistent.

packages/opencode/src/config/write.ts (1)

281-295: syncDirectory logging and error tolerance look good

The directory fsync logic correctly skips Windows and tolerates EISDIR/ENOENT, only warning on other failures. This is a nice balance between durability and portability.

packages/opencode/src/config/config.ts (5)

27-41: Plugin path detection helper looks solid

isPathLikePluginSpecifier cleanly distinguishes path‑like plugin values (file URLs, relative, home‑relative, and absolute paths, including common Windows prefixes) from bare specifiers. This should make downstream resolution behaviour much more predictable.


123-132: Filesystem.up start/stop parameters

Using Filesystem.up({ targets: [".opencode"], start: directory, stop: worktree }) makes the search for .opencode directories explicit and bounded, which is a nice robustness improvement over unbounded upward traversal. This looks good.


197-202: Config state registration and readFreshConfig semantics

state = State.register("config", () => Instance.directory, loadStateFromDisk) plus readFreshConfig() (which bypasses state and hits disk directly) is a clear separation between cached and fresh config. This matches how persist.update chooses between Config.get() and readFreshConfig() based on hot‑reload, and looks correct.


716-746: Global config loading via resolveGlobalFile

loadGlobalConfig now resolves the global JSONC path via resolveGlobalFile() and merges config.json, opencode.json, and the resolved global file. This ensures the config directory exists and centralizes the global path logic, which should reduce path drift and race conditions. Looks good.


760-799: New plugin specifier normalization in load

The post‑parse loop that rewrites data.plugin[i] using resolvePluginFileReference when isPathLikePluginSpecifier holds gives you normalized file URLs in memory without mutating the original file (unless $schema is missing). This should make downstream plugin resolution simpler and more consistent across project/global configs.

Comment on lines +43 to +185
const state = State.register(
  "agent",
  () => Instance.directory,
  async () => {
    const cfg = await Config.get()
    const defaultTools = cfg.tools ?? {}
    const defaultPermission: Info["permission"] = {
      edit: "allow",
      bash: {
        "cut*": "allow",
        "diff*": "allow",
        "du*": "allow",
        "file *": "allow",
        "find * -delete*": "ask",
        "find * -exec*": "ask",
        "find * -fprint*": "ask",
        "find * -fls*": "ask",
        "find * -fprintf*": "ask",
        "find * -ok*": "ask",
        "find *": "allow",
        "git diff*": "allow",
        "git log*": "allow",
        "git show*": "allow",
        "git status*": "allow",
        "git branch": "allow",
        "git branch -v": "allow",
        "grep*": "allow",
        "head*": "allow",
        "less*": "allow",
        "ls*": "allow",
        "more*": "allow",
        "pwd*": "allow",
        "rg*": "allow",
        "sort --output=*": "ask",
        "sort -o *": "ask",
        "sort*": "allow",
        "stat*": "allow",
        "tail*": "allow",
        "tree -o *": "ask",
        "tree*": "allow",
        "uniq*": "allow",
        "wc*": "allow",
        "whereis*": "allow",
        "which*": "allow",
        "*": "allow",
      },
      webfetch: "allow",
      doom_loop: "ask",
      external_directory: "ask",
    }
    const agentPermission = mergeAgentPermissions(defaultPermission, cfg.permission ?? {})

    const planPermission = mergeAgentPermissions(
      {
        edit: "deny",
        bash: {
          "cut*": "allow",
          "diff*": "allow",
          "du*": "allow",
          "file *": "allow",
          "find * -delete*": "ask",
          "find * -exec*": "ask",
          "find * -fprint*": "ask",
          "find * -fls*": "ask",
          "find * -fprintf*": "ask",
          "find * -ok*": "ask",
          "find *": "allow",
          "git diff*": "allow",
          "git log*": "allow",
          "git show*": "allow",
          "git status*": "allow",
          "git branch": "allow",
          "git branch -v": "allow",
          "grep*": "allow",
          "head*": "allow",
          "less*": "allow",
          "ls*": "allow",
          "more*": "allow",
          "pwd*": "allow",
          "rg*": "allow",
          "sort --output=*": "ask",
          "sort -o *": "ask",
          "sort*": "allow",
          "stat*": "allow",
          "tail*": "allow",
          "tree -o *": "ask",
          "tree*": "allow",
          "uniq*": "allow",
          "wc*": "allow",
          "whereis*": "allow",
          "which*": "allow",
          "*": "ask",
        },
        webfetch: "allow",
      },
      cfg.permission ?? {},
    )

    const result: Record<string, Info> = {
      general: {
        name: "general",
        description:
          "General-purpose agent for researching complex questions, searching for code, and executing multi-step tasks. When you are searching for a keyword or file and are not confident that you will find the right match in the first few tries use this agent to perform the search for you.",
        tools: {
          todoread: false,
          todowrite: false,
          ...defaultTools,
        },
        options: {},
        permission: agentPermission,
        mode: "subagent",
        builtIn: true,
      },
      build: {
        name: "build",
        tools: { ...defaultTools },
        options: {},
        permission: agentPermission,
        mode: "primary",
        builtIn: true,
      },
      plan: {
        name: "plan",
        options: {},
        permission: planPermission,
        tools: {
          ...defaultTools,
        },
        mode: "primary",
        builtIn: true,
      },
    }
    for (const [key, value] of Object.entries(cfg.agent ?? {})) {
      if (value.disable) {
        delete result[key]
        continue
      }
      let item = result[key]
      if (!item)
        item = result[key] = {
          name: key,
          mode: "all",
          permission: agentPermission,
          options: {},
          tools: {},
          builtIn: false,
        }
      const { name, model, prompt, tools, description, temperature, top_p, mode, color, permission, ...extra } = value
      item.options = {
        ...item.options,
        ...extra,
      }
      if (model) item.model = Provider.parseModel(model)
      if (prompt) item.prompt = prompt
      if (tools)
        item.tools = {
          ...item.tools,
          ...tools,
        }
      item.tools = {
        ...defaultTools,
        ...item.tools,
      }
      if (description) item.description = description
      if (temperature != undefined) item.temperature = temperature
      if (top_p != undefined) item.topP = top_p
      if (mode) item.mode = mode
      if (color) item.color = color
      // just here for consistency & to prevent it from being added as an option
      if (name) item.name = name

      if (permission ?? cfg.permission) {
        item.permission = mergeAgentPermissions(cfg.permission ?? {}, permission ?? {})
      }
    }
    return result
  },
)

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

Per-agent permission merge can drop doom_loop / external_directory defaults and mutates inputs

The overall State.register("agent", …) flow looks solid, but the permission merging has a subtle layering issue:

  • You compute agentPermission and planPermission with the intended defaults (including doom_loop: "ask" and external_directory: "ask" for the general/build agents).
  • In the per-agent override loop, you then do:
if (permission ?? cfg.permission) {
  item.permission = mergeAgentPermissions(cfg.permission ?? {}, permission ?? {})
}

For built-in agents that have a corresponding cfg.agent.<name> entry but no per-agent permission, and a top-level cfg.permission present, this re-merge uses only cfg.permission as the base and discards the earlier defaultPermission-derived fields. That can turn doom_loop / external_directory from "ask" into undefined for those agents, changing safety defaults whenever the user both sets global permissions and customizes agent metadata.

A safer pattern is to treat the existing item.permission as the base and only apply per-agent overrides:

-        if (permission ?? cfg.permission) {
-          item.permission = mergeAgentPermissions(cfg.permission ?? {}, permission ?? {})
-        }
+        if (permission) {
+          // Start from whatever base permission was assigned (default/plan/global),
+          // and layer per-agent overrides on top.
+          item.permission = mergeAgentPermissions(item.permission, permission)
+        }

Separately, mergeAgentPermissions mutates its input objects:

```ts
if (typeof basePermission.bash === "string") {
  basePermission.bash = { "*": basePermission.bash }
}
if (typeof overridePermission.bash === "string") {
  overridePermission.bash = { "*": overridePermission.bash }
}
```

Since cfg.permission is passed directly here, this normalizes and mutates the config object in-place. Even if current callers tolerate it, it’s surprising for a merge helper and makes future reuse of cfg.permission harder to reason about. Consider cloning the inputs first (or normalizing into locals) to keep mergeAgentPermissions side‑effect free.

Also applies to: 226-263


I'll verify the review comment by examining the actual code. Let me search the codebase for the relevant files and functions.


---

🏁 Script executed:

```shell
#!/bin/bash
# Find and display the agent.ts file at the specified lines
fd -t f "agent.ts" --path "*/agent/agent.ts" | head -5
```

Length of output: 233


🏁 Script executed:

```shell
#!/bin/bash
# Search for mergeAgentPermissions function definition
rg -n "mergeAgentPermissions" --type ts -A 10 -B 2
```

Length of output: 2776


🏁 Script executed:

```shell
#!/bin/bash
# Check the structure of packages/opencode/src/agent directory
fd -t f . "packages/opencode/src/agent/" 2>/dev/null | head -20
```

Length of output: 77



---

🏁 Script executed:

```shell
#!/bin/bash
# Get the full mergeAgentPermissions function (lines 226-263)
sed -n '226,263p' packages/opencode/src/agent/agent.ts
```

Length of output: 1007


🏁 Script executed:

```shell
#!/bin/bash
# Get the full state.register block to see permission setup (lines 43-100)
sed -n '43,100p' packages/opencode/src/agent/agent.ts
```

Length of output: 1759


🏁 Script executed:

```shell
#!/bin/bash
# Get the per-agent override loop section (lines 155-185)
sed -n '155,185p' packages/opencode/src/agent/agent.ts
```

Length of output: 1042


Permission layering loses defaults and mergeAgentPermissions mutates inputs

The review comment accurately identifies two real issues in the code:

Issue 1: Permission layering can drop doom_loop / external_directory defaults

Verified in the code flow:

  • defaultPermission includes doom_loop: "ask" and external_directory: "ask" (lines 50-57)
  • mergeAgentPermissions function returns these fields without defaults (lines 255-262): doom_loop: merged.doom_loop and external_directory: merged.external_directory can be undefined
  • At line 179-181, for built-in agents with config entries but no per-agent permission field, the condition if (permission ?? cfg.permission) is true, triggering a re-merge that uses only cfg.permission as the base
  • This overwrites the previously assigned defaults, potentially leaving doom_loop and external_directory undefined

Issue 2: mergeAgentPermissions mutates its inputs in-place

Confirmed at lines 227-235 where basePermission.bash and overridePermission.bash are directly modified. Since cfg.permission is passed as basePermission at line 180, the config object itself gets mutated with normalized bash structure—unexpected side effects for a utility function.

Suggested fix is sound: Using item.permission as the base for per-agent merges would preserve previously assigned defaults while only applying per-agent overrides.

🤖 Prompt for AI Agents
packages/opencode/src/agent/agent.ts lines 43-185: permission layering currently
overwrites defaults and the merge helper mutates inputs causing cfg to be
changed; fix by (1) changing the per-agent merge at the end to use the
already-assigned item.permission as the base (item.permission =
mergeAgentPermissions(item.permission, permission ?? {})) so defaults
(doom_loop, external_directory, etc.) are preserved, and (2) update
mergeAgentPermissions to never mutate its arguments (create new objects/clone
bash maps before normalizing/merging and always return a fresh merged object) so
cfg.permission is not modified.

Comment on lines +3 to +17
```ts
export async function createBackup(filepath: string): Promise<string> {
  const timestamp = new Date().toISOString().replace(/[:.]/g, "-")
  const backupPath = `${filepath}.bak-${timestamp}`

  if (await Bun.file(filepath).exists()) {
    await fs.copyFile(filepath, backupPath)
  }

  return backupPath
}

export async function restoreBackup(backupPath: string, targetPath: string): Promise<void> {
  await fs.copyFile(backupPath, targetPath)
  await fs.unlink(backupPath)
}
```

⚠️ Potential issue | 🟡 Minor

Handle missing backups gracefully in restoreBackup

createBackup returns a backup path even when the original file doesn’t exist, but restoreBackup unconditionally copies from that path. If a write fails on a brand-new config (no original file), calling restoreBackup will currently throw ENOENT instead of behaving as “no backup to restore”.

Consider making restoreBackup tolerant of a missing backup file (and cleaning up best-effort):

```diff
-export async function restoreBackup(backupPath: string, targetPath: string): Promise<void> {
-  await fs.copyFile(backupPath, targetPath)
-  await fs.unlink(backupPath)
-}
+export async function restoreBackup(backupPath: string, targetPath: string): Promise<void> {
+  try {
+    await fs.copyFile(backupPath, targetPath)
+  } catch (error: any) {
+    if (error?.code === "ENOENT") {
+      // No backup to restore; nothing to do.
+      return
+    }
+    throw error
+  }
+
+  await fs.unlink(backupPath).catch((error: any) => {
+    if (error?.code === "ENOENT") return
+    throw error
+  })
+}
```

Alternatively, createBackup could return null/undefined when no backup is created, and callers could skip restore in that case.

🤖 Prompt for AI Agents
In packages/opencode/src/config/backup.ts around lines 3 to 17, restoreBackup
currently unconditionally copies from a backup path that may not exist; change
restoreBackup to be tolerant of missing backups by first checking that the
backup file exists before attempting to copy, and if it does not exist simply
return; perform the copy and unlink inside a try/catch so any filesystem errors
are handled gracefully and ignore ENOENT when cleaning up (best-effort unlink),
and ensure the function still resolves void on missing backup instead of
throwing.

Comment on lines +90 to +132
```ts
export async function acquireLock(filepath: string, options?: LockOptions): Promise<() => Promise<void>> {
  const normalized = path.normalize(filepath)
  const timeout = options?.timeout ?? 30000
  const staleAfter = options?.staleAfter ?? LOCKFILE_STALE_AFTER_MS
  const startTime = Date.now()

  while (fileLocks.has(normalized)) {
    const waited = Date.now() - startTime

    if (waited > 5000 && waited < 5100) {
      log.warn("lock acquisition taking longer than expected", {
        filepath: normalized,
        waited,
      })
    }

    if (waited > timeout) {
      throw new Error(`Lock timeout: could not acquire lock for ${normalized} after ${waited}ms`)
    }

    await fileLocks.get(normalized)
    await Bun.sleep(10)
  }

  let releaseFn: () => void
  const lockPromise = new Promise<void>((resolve) => {
    releaseFn = resolve
  })

  fileLocks.set(normalized, lockPromise)

  const releaseFilesystem = await acquireFilesystemLock({
    filepath: normalized,
    timeout,
    staleAfter,
    startTime,
  })

  return async () => {
    fileLocks.delete(normalized)
    releaseFn!()
    await releaseFilesystem()
  }
```

⚠️ Potential issue | 🔴 Critical

Deadlock risk when filesystem lock acquisition fails

If acquireFilesystemLock throws (timeout or fs error), acquireLock leaves a pending promise in fileLocks and never calls releaseFn. Subsequent callers will hit while (fileLocks.has(normalized)) and then await fileLocks.get(normalized), which never resolves, effectively deadlocking local lock acquisition for that path.

To make failure modes safe, clean up the in-memory lock and resolve waiters when acquisition fails:

```diff
-  const releaseFilesystem = await acquireFilesystemLock({
-    filepath: normalized,
-    timeout,
-    staleAfter,
-    startTime,
-  })
-
-  return async () => {
-    fileLocks.delete(normalized)
-    releaseFn!()
-    await releaseFilesystem()
-  }
+  let releaseFilesystem: (() => Promise<void>) | undefined
+  try {
+    releaseFilesystem = await acquireFilesystemLock({
+      filepath: normalized,
+      timeout,
+      staleAfter,
+      startTime,
+    })
+  } catch (error) {
+    // Unblock local waiters if we fail to acquire the filesystem lock.
+    fileLocks.delete(normalized)
+    releaseFn!()
+    throw error
+  }
+
+  return async () => {
+    fileLocks.delete(normalized)
+    releaseFn!()
+    await releaseFilesystem!()
+  }
```

This preserves the timeout behavior while ensuring callers don’t get stuck after a failed acquisition.

🤖 Prompt for AI Agents
In packages/opencode/src/config/lock.ts around lines 90 to 132, the code sets an
in-memory promise in fileLocks then calls acquireFilesystemLock but if that call
throws the in-memory promise never resolves and the entry is never removed,
causing deadlocks for subsequent acquirers; wrap the acquireFilesystemLock call
in a try/catch, and on any error delete the fileLocks entry and resolve the
pending releaseFn before rethrowing the error (ensure releaseFn is defined
before the try and call it safely), so waiters are unblocked and the in-memory
lock is cleaned up while preserving the original error/timeout behavior.

Comment on lines +265 to +279
```ts
export async function writeFileAtomically(filepath: string, content: string): Promise<void> {
  const directory = path.dirname(filepath)
  const tempName = `${path.basename(filepath)}.${randomUUID()}.tmp`
  const tempPath = path.join(directory, tempName)
  await fs.mkdir(directory, { recursive: true })
  const handle = await fs.open(tempPath, constants.O_WRONLY | constants.O_CREAT | constants.O_TRUNC, 0o600)
  await handle.writeFile(content, "utf8")
  await handle.sync()
  await handle.close()
  await fs.rename(tempPath, filepath).catch(async (error) => {
    await fs.unlink(tempPath).catch(() => {})
    throw error
  })
  await syncDirectory(directory)
}
```

🛠️ Refactor suggestion | 🟠 Major

Ensure atomic write closes handle and cleans temp file on all error paths

writeFileAtomically closes the file handle and unlinks the temp file only on the happy path and on rename failure; errors from writeFile or sync will leak the handle and leave the temp file behind. Wrapping the body in a try/finally that closes the handle (and best‑effort unlinks the temp file) would make this much more robust under disk errors.

```diff
-  const handle = await fs.open(tempPath, constants.O_WRONLY | constants.O_CREAT | constants.O_TRUNC, 0o600)
-  await handle.writeFile(content, "utf8")
-  await handle.sync()
-  await handle.close()
-  await fs.rename(tempPath, filepath).catch(async (error) => {
-    await fs.unlink(tempPath).catch(() => {})
-    throw error
-  })
+  const handle = await fs.open(tempPath, constants.O_WRONLY | constants.O_CREAT | constants.O_TRUNC, 0o600)
+  try {
+    await handle.writeFile(content, "utf8")
+    await handle.sync()
+  } finally {
+    await handle.close().catch(() => {})
+  }
+  await fs.rename(tempPath, filepath).catch(async (error) => {
+    await fs.unlink(tempPath).catch(() => {})
+    throw error
+  })
```

```ts
import { Config } from "../config/config"
import { spawn } from "child_process"
import { Instance } from "../project/instance"
import { State } from "../project/state"
```

⚠️ Potential issue | 🟠 Major

LSP State.register usage is fine; guard against spreading undefined for custom servers

Switching LSP to

```ts
const state = State.register("lsp", () => Instance.directory, async () => { ... }, async (state) => { ... })
```

is consistent with the rest of the subsystem migrations and should make LSP state hot-reloadable via config invalidation. The disposer shutting down all clients via client.shutdown() is also appropriate.

One edge case in the initializer to consider (pre-existing but worth fixing here):

```ts
for (const [name, item] of Object.entries(cfg.lsp ?? {})) {
  const existing = servers[name]
  ...
  servers[name] = {
    ...existing,
    id: name,
    root: existing?.root ?? (async () => Instance.directory),
    extensions: item.extensions ?? existing?.extensions ?? [],
    spawn: async (root) => { ... },
  }
}
```

If cfg.lsp allows defining custom LSP server IDs that don’t correspond to any built-in LSPServer entry, existing will be undefined. Object spread tolerates that ({ ...undefined } evaluates to {}), so this won’t actually throw, but the mix of ...existing with existing?.root reads as if a built-in entry is guaranteed. To make the undefined case explicit while preserving override semantics for known servers, you can default existing to an empty object:

```diff
-        const existing = servers[name]
+        const existing = servers[name]
...
-        servers[name] = {
-          ...existing,
+        servers[name] = {
+          ...(existing ?? {}),
           id: name,
           root: existing?.root ?? (async () => Instance.directory),
           extensions: item.extensions ?? existing?.extensions ?? [],
           spawn: async (root) => { ... },
        }
```

This keeps overrides working for built-ins and makes the undefined case explicit when users introduce new LSP configs.

Also applies to: 62-126, 80-108

🤖 Prompt for AI Agents
In packages/opencode/src/lsp/index.ts around line 9 (and similarly for ranges
62-126 and 80-108), the initializer spreads `existing` which can be undefined
for user-defined custom servers causing a runtime TypeError; fix by defaulting
`existing` to an empty object before spreading (e.g. treat `existing` as
`existing ?? {}`) so overrides still merge for built-ins while preventing
crashes for new custom server entries, and apply the same defensive default
wherever `...existing` is used in those listed line ranges.

@shuv1337 shuv1337 closed this Nov 21, 2025