PATCH /config hot-reload w/ selective cache invalidation (#15) #16
💡 Codex Review
Here are some automated review suggestions for this pull request.
```ts
const hotReloadEnabled = isConfigHotReloadEnabled()
if (hotReloadEnabled && input.scope === "global") {
  await State.invalidate("config")
}
if (hotReloadEnabled && input.scope === "project") {
  await Instance.invalidate("config")
}
```
Config writes no longer refresh in-memory state when hot reload is off
The previous implementation of Config.update always disposed the current instance after persisting a project config, ensuring subsequent reads saw the new settings. After the refactor the cache invalidation is conditional on OPENCODE_CONFIG_HOT_RELOAD being true, and when the flag is not set (the default) no invalidation occurs – the process keeps using the stale state returned by the old Config.get() call even though the file was updated. Any user saving their config without explicitly enabling the hot-reload flag will see the new JSON written to disk but all agents, providers and other config-driven features continue to run with the old configuration until the process is restarted. Consider invalidating State/Instance unconditionally (or falling back to the previous Instance.dispose() behaviour) so that config updates remain effective when hot reload is disabled.
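A minimal sketch of the suggested fix, with hypothetical stand-ins for the PR's `State`/`Instance` modules (the real ones live under `packages/opencode/src/project`): invalidate the cached config unconditionally after a write, and let the flag gate only the hot-reload fan-out.

```typescript
// Hypothetical stand-ins for State/Instance, for illustration only.
const caches = new Map<string, unknown>()
const State = {
  invalidate(key: string) {
    caches.delete(`global:${key}`)
  },
}
const Instance = {
  invalidate(key: string) {
    caches.delete(`project:${key}`)
  },
}

// After persisting a config write, always drop the in-memory cache so the
// next Config.get() re-reads the file; the hot-reload flag only decides
// whether the Config.Event.Updated fan-out is published.
function afterConfigWrite(scope: "global" | "project", hotReloadEnabled: boolean): boolean {
  if (scope === "global") State.invalidate("config")
  if (scope === "project") Instance.invalidate("config")
  return hotReloadEnabled // caller publishes the update event only when true
}
```

This keeps writes effective without the flag while still reserving subsystem hot-reload for opt-in users.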
* Fix cache corruption bug in refresh function
```ts
      directories,
    }
  })
}
```
There was a problem hiding this comment.
Bug: Logic Duplication Creates Dead Code
Duplicate conditional block sets `result.share = "auto"` twice when `result.autoshare === true && !result.share`. The second check on lines 185-187 is redundant since the first check on lines 181-183 already performs this exact assignment, making the second block dead code that will never execute with a different outcome.
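The surviving logic reduces to a single normalization step; a sketch with an assumed minimal `result` shape (the real config type is much larger):

```typescript
// Assumed minimal shape of the parsed config, for illustration.
type ShareConfig = { autoshare?: boolean; share?: "auto" | "manual" | "disabled" }

// Map the legacy `autoshare: true` flag onto `share: "auto"` exactly once,
// leaving an explicitly-set `share` untouched.
function normalizeShare(result: ShareConfig): ShareConfig {
  if (result.autoshare === true && !result.share) {
    result.share = "auto"
  }
  return result
}
```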
```ts
  }

  keyRecords.delete(entry.init)
}
```
Bug: Named Registry Leak: Unbounded Growth
State.invalidate removes disposed entries from recordsByKey but never removes them from namedRegistry, creating a memory leak. On subsequent invalidations, the function iterates over stale NamedEntry objects that no longer have corresponding state in recordsByKey, causing the registry to grow unbounded. The entries should be removed from the namedRegistry Set after disposal.
```ts
  files: [],
  dirs: [],
})
refresh()
```
Bug: Race condition returns empty file cache initially
The files() method returns cache immediately without waiting for fetchEntries() to complete. When refresh() is called, it starts the async fetchEntries() operation but doesn't await it, so the first call to files() returns the initial empty cache { files: [], dirs: [] } before any files are discovered. The old implementation passed cache directly to the async function which would populate it, but the refactored version starts the fetch asynchronously and returns stale data.
**Important:** Review skipped. Review was skipped due to path filters.

⛔ Files ignored due to path filters (1)

CodeRabbit blocks several paths by default. You can override this behavior by explicitly including those paths in the path filters. For example, including … You can disable this status message by setting the …

**Note:** Other AI code review bot(s) detected. CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

**Walkthrough**

This PR refactors state management across multiple modules to use …

**Changes**
**Sequence Diagram(s)**

```mermaid
sequenceDiagram
    participant Client
    participant Server
    participant Config as Config Module
    participant Persist as Persist Module
    participant Lock as Lock Module
    participant Invalidate as Invalidation
    participant Subsystems as Subsystems
    Client->>Server: PATCH /config (scope, update)
    Server->>Lock: acquireLock(filepath)
    Lock-->>Server: releaseLock()
    Server->>Persist: update({scope, update, directory})
    Persist->>Config: readFreshConfig()
    Config-->>Persist: before state
    Persist->>Persist: merge & validate
    Persist->>Persist: createBackup(filepath)
    Persist->>Persist: writeConfigFile (atomic)
    Persist-->>Server: {before, after, diff, filepath}
    Server->>Invalidate: ConfigInvalidation.apply({diff, scope, directory})
    Invalidate->>Subsystems: invalidate(provider, mcp, lsp, ...)
    Subsystems-->>Invalidate: done
    Invalidate->>Server: done
    alt hot-reload enabled
        Server->>Config: Bus.publish(Config.Event.Updated)
        Config-->>Client: event with before/after
    end
    Server-->>Client: 200 OK {after config}
```
```mermaid
sequenceDiagram
    participant App
    participant Bootstrap as InstanceBootstrap
    participant ConfigInvalidation
    participant ConfigBus as Config.Updated Event
    participant Subsystems
    App->>Bootstrap: InstanceBootstrap()
    Bootstrap->>ConfigInvalidation: setup()
    alt hot-reload enabled
        ConfigInvalidation->>ConfigBus: subscribe to Config.Updated
        Note over ConfigBus: on config change event
        ConfigBus->>ConfigInvalidation: fire event(before, after, diff, scope, directory)
        ConfigInvalidation->>ConfigInvalidation: compute targets from diff
        ConfigInvalidation->>Subsystems: invalidate(provider, mcp, lsp, ...)
        Subsystems-->>ConfigInvalidation: done
    else hot-reload disabled
        Note over ConfigBus: no event subscription
    end
    Bootstrap-->>App: done
```
**Estimated code review effort:** 🎯 4 (Complex) | ⏱️ ~60–75 minutes

Areas requiring extra attention:
**Pre-merge checks and finishing touches**

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 5
♻️ Duplicate comments (4)
packages/opencode/src/file/index.ts (1)
122-163: Still a race: `files()` can return empty cache before scan finishes

`refresh()` kicks off `fetchEntries()` without awaiting it, and `files()` just returns `cache` (initially empty) while a scan is in progress. First callers of `files()`/`search()` can therefore observe `{ files: [], dirs: [] }` even when the project has files. This is the same behavior raised in the earlier review. A concrete fix is to track the in-flight scan as a promise and always await it before returning `cache`:

```diff
 const state = Instance.state(async () => {
   type Entry = { files: string[]; dirs: string[] }
   let cache: Entry = { files: [], dirs: [] }
-  let fetching = false
+  let fetching: Promise<void> | null = null
   const fetchEntries = async () => {
     const temp: Entry = { files: [], dirs: [] }
     const set = new Set<string>()
     for await (const file of Ripgrep.files({ cwd: Instance.directory })) {
       temp.files.push(file)
       let current = file
       while (true) {
         const dir = path.dirname(current)
         if (dir === ".") break
         if (dir === current) break
         current = dir
         if (set.has(dir)) continue
         set.add(dir)
         temp.dirs.push(dir + "/")
       }
     }
     cache = temp
   }
   const refresh = () => {
-    fetching = true
-    fetchEntries()
-      .catch((error) => {
-        if (error instanceof Error && "code" in error && error.code === "ENOENT") return
-        log.error("failed to refresh files", { error })
-      })
-      .finally(() => {
-        fetching = false
-      })
-  }
-  refresh()
+    if (!fetching) {
+      fetching = fetchEntries()
+        .catch((error) => {
+          const err = error as NodeJS.ErrnoException
+          if (err?.code === "ENOENT") return
+          log.error("failed to refresh files", { error })
+        })
+        .finally(() => {
+          fetching = null
+        })
+    }
+    return fetching
+  }
+  await refresh()
   return {
     async files() {
-      if (!fetching) {
-        refresh()
-      }
+      await refresh()
       return cache
     },
   }
 })
```

This keeps the API the same while ensuring callers never see the pre-populated empty cache.
packages/opencode/src/config/persist.ts (1)
61-142: Re-check config cache refresh semantics when hot reload is disabled

`update()` only invalidates the `"config"` state when `isConfigHotReloadEnabled()` is true:

- Global scope: `await State.invalidate("config")`
- Project scope: `await Instance.invalidate("config")`

When the flag is false, you still write the updated config to disk and return a fresh `after` via `Config.readFreshConfig()`, but no cache invalidation happens for the in-memory state driving `Config.get()` and all the subsystems that depend on it. Unless `Config.get()` itself bypasses caching when hot reload is off, this recreates the previously-reported behavior: users can PATCH/save config successfully, see the file updated, but the running process continues using stale config until restart. Given the earlier P1 comment on this exact issue, it'd be good to either:

- Ensure config state is refreshed even when hot reload is disabled (e.g. always invalidating `"config"` for the current directory, and gate only the fan-out / subsystem hot-reload on the env flag), or
- Clearly document that PATCH /config only takes effect at process restart when `OPENCODE_CONFIG_HOT_RELOAD` is not enabled.

Also, in the `config.update.cacheInvalidated` log entry, `cacheInvalidated` is computed as `hotReloadEnabled && input.scope === "global"`, so it reports `false` even when you invalidate project config via `Instance.invalidate("config")`. That's a small observability bug worth correcting.

```bash
#!/bin/bash
# Quick sanity check suggestion (run in repo):
# 1. Start opencode with OPENCODE_CONFIG_HOT_RELOAD unset/false.
# 2. From a client, call GET /config, then PATCH /config with a simple change (e.g. theme).
# 3. Call GET /config again and verify whether the change is visible.
# 4. Also check logs for `config.update.cacheInvalidated` and confirm whether
#    project-scoped updates are logged as invalidated.
```

packages/opencode/src/project/state.ts (1)
9-17: Named-state registry entries are never cleaned up, causing an unbounded leak

`State.register` adds `NamedEntry` instances into `namedRegistry`, and `invalidate` disposes underlying `recordsByKey` entries, but the corresponding `NamedEntry` objects are never removed from their `Set`. Over time (especially across many directories or repeated disposals), this grows without bound and every invalidate walks stale entries that can no longer be disposed.

You can fix this by deleting `NamedEntry` instances as you invalidate them and pruning empty name sets, e.g.:

```diff
 export async function invalidate(name: string, key?: string) {
@@
   const entries = namedRegistry.get(name)
   if (!entries) {
     return
   }
@@
   const tasks: Promise<void>[] = []
   for (const entry of entries) {
     if (key && entry.key !== key) continue

-    const keyRecords = recordsByKey.get(entry.key)
-    if (!keyRecords) continue
-
-    const stateEntry = keyRecords.get(entry.init)
-    if (!stateEntry) continue
+    const keyRecords = recordsByKey.get(entry.key)
+    const stateEntry = keyRecords?.get(entry.init)
+
+    // Drop stale registry entries that no longer have backing state
+    if (!keyRecords || !stateEntry) {
+      entries.delete(entry)
+      continue
+    }
@@
     keyRecords.delete(entry.init)
+    if (keyRecords.size === 0) {
+      recordsByKey.delete(entry.key)
+    }
+
+    // Remove the named entry now that its state has been disposed
+    entries.delete(entry)
   }
+
+  // If there are no more entries for this name, drop it from the registry
+  if (entries.size === 0) {
+    namedRegistry.delete(name)
+  }
```

This keeps the named registry bounded and avoids re-processing dead entries on subsequent invalidations.

Also applies to: 38-67, 92-136
packages/opencode/src/config/config.ts (1)
181-187: Duplicate `autoshare` → `share` logic remains

The duplicated `if (result.autoshare === true && !result.share)` blocks still appear unchanged, so the second block is dead code. Since this was already flagged in a previous review, it would be good to remove the duplicate and keep just one normalization block.
🧹 Nitpick comments (21)
packages/opencode/src/provider/provider.ts (1)
242-488: Provider state initialization and config/database/model merge look solid

The new `State.register("provider", ...)` initializer cleanly centralizes provider/model setup:

- Merges config providers with the ModelsDev database, using existing model entries (`parsed.models[modelID]`) as the base, so config overrides now correctly layer on top of database metadata.
- Tracks `realIdByKey` for aliased model IDs and consults it in `getModel`, which should avoid the earlier alias mismatch issues.
- Applies `disabled_providers` consistently when loading env keys, Auth entries, custom loaders, and plugin-based auth, and filters providers with no remaining models.
- Adds `github-copilot-enterprise` by deriving from the Copilot DB entry and wires it into the plugin auth load path, which aligns with the intended enterprise behavior.

One small improvement: you can reuse the `config` snapshot instead of calling `Config.get()` again when computing disabled providers, which also guarantees both paths see the same config instance:

```diff
 const config = await Config.get()
 const database = await ModelsDev.get()
@@
-const disabled = await Config.get().then((cfg) => new Set(cfg.disabled_providers ?? []))
+const disabled = new Set(config.disabled_providers ?? [])
```

Beyond that, the refactor to State.register and the GitHub Copilot Enterprise handling look good to me.
packages/opencode/test/config/write.test.ts (1)
1-75: JSONC write tests accurately cover incremental-update behavior

Both tests are well-targeted:

- They verify that `writeConfigFile` preserves comments and updates fields in-place for JSONC.
- They ensure the resulting file is still valid JSONC (no parse errors) after incremental edits.
- Temp directory handling and cleanup are correct.

You might optionally add an assertion that the newly added `agent.plan` block is present in the second test, but as-is these tests already provide good coverage of the intended behavior.

packages/opencode/src/command/index.ts (1)
4-4: Command state registration aligns with new State registry and keeps behavior intact

Using `State.register("command", () => Instance.directory, async () => { ... })` cleanly moves command config into the central state system while preserving behavior: you still hydrate from `cfg.command`, and you still ensure a synthesized `init` command when it's absent. This should make `command` respond correctly to config invalidations.

One minor note: `result` is a plain `{ [name: string]: Info }` with keys coming from user config. If you ever expect arbitrary/untrusted names (e.g., `__proto__`), you might prefer a `Map<string, Info>` to completely sidestep prototype-key edge cases, though it's not a blocker here.

Also applies to: 40-69
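To illustrate the prototype-key hazard the note alludes to (a standalone sketch, not the module's code):

```typescript
// Plain objects treat "__proto__" specially: assigning through it does not
// create an own property, so a user-supplied name can silently vanish.
const plain: { [name: string]: number } = {}
const userSuppliedName = "__proto__"
plain[userSuppliedName] = 1
// Object.keys(plain) is still [] — the assignment hit the prototype setter.

// A Map has no such special-casing; every string is an ordinary key.
const safe = new Map<string, number>()
safe.set(userSuppliedName, 1)
```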
packages/opencode/src/mcp/index.ts (1)
11-11: MCP State.register migration looks correct; consider slightly richer cleanup logging

The new `State.register("mcp", () => Instance.directory, async () => { ... }, async (state) => { ... })` wiring preserves the prior behavior: clients and status are built from `cfg.mcp`, and all active clients are closed on disposal. This should integrate cleanly with config-driven invalidation.

For observability, you might consider iterating with `Object.entries(state.clients)` so the cleanup log can include the MCP name alongside the error; currently only the error object is logged, which can make it harder to attribute failures when multiple MCP clients are in play.

Also applies to: 57-94
specs/config-spec.md (1)
1-100: PATCH /config spec is clear and matches the new config workflow

This spec does a good job describing the end-to-end behavior: scope selection, atomic write with lock + backup, JSONC comment preservation, hot-reload gating via `OPENCODE_CONFIG_HOT_RELOAD`, and `config.updated` fan-out semantics. It also clearly explains the legacy/non-hot-reload path and client workflows.

The only suggestion is process-oriented: as the lock timeout, backup behavior, and error shapes evolve, this doc will need to be kept in sync with `lock.ts`, `persist.ts`, and `error.ts`. Consider adding a brief "implementation reference" section listing those modules to make future cross-checks easier.

packages/opencode/src/project/instance.ts (1)
51-75: forEach should guard against rejected context promises to avoid aborting mid-fan-out

The new `invalidate` helper is a thin, correct wrapper over `State.invalidate`.

For `forEach`, the general structure (iterating cached contexts, running `fn` inside `context.provide`, collecting per-directory errors, and logging a warning) is good. However, if any `contextPromise` in `cache` rejects (e.g., a previous `Project.fromDirectory` failure), `const ctx = await contextPromise` will throw and short-circuit the whole loop, skipping remaining directories and bypassing your error aggregation logic.

You can make this more robust by catching at the `contextPromise` level as well:

```diff
-    for (const [directory, contextPromise] of cache) {
-      const ctx = await contextPromise
-      await context
+    for (const [directory, contextPromise] of cache) {
+      const ctx = await contextPromise.catch((error) => {
+        errors.push({
+          directory,
+          error: error instanceof Error ? error : new Error(String(error)),
+        })
+        return undefined
+      })
+      if (!ctx) continue
+      await context
         .provide(ctx, async () => {
           await fn(directory)
         })
```

This keeps the behavior you want (per-directory error collection and logging) even if some cached contexts never successfully initialized.
packages/opencode/src/plugin/index.ts (1)
9-9: Plugin State.register wiring is solid; consider stronger typing and per-plugin error isolation

The new plugin state registration integrates well with the rest of the system:

- The initializer constructs a `PluginInput` with `Instance.project`/`worktree`/`directory` and an opencode client backed by `Server.App().fetch`.
- It respects `config.plugin`, conditionally adds default plugins based on `Flag.OPENCODE_DISABLE_DEFAULT_PLUGINS`, and installs non-`file://` plugins via `BunProc.install`.
- The disposer walks `state.hooks` and calls any `cleanup()` hooks, logging failures without aborting the whole cleanup.

Two small improvements to consider:

1. Type the hooks array explicitly. Right now `const hooks = []` infers a very loose type. Since you already import `Hooks`, you can tighten this:

```diff
-    const hooks = []
+    const hooks: Hooks[] = []
```

This will make downstream uses like `hook.config`/`hook.event` better-typed.

2. Isolate failures from individual plugin exports. A single bad plugin (or export) currently causes the entire plugin state initialization to reject. If you'd rather keep the rest of the plugins usable, wrap per-export initialization in a try/catch and log failures:

```diff
-    for (const [_name, fn] of Object.entries<PluginInstance>(mod)) {
-      const init = await fn(input)
-      hooks.push(init)
-    }
+    for (const [exportName, fn] of Object.entries<PluginInstance>(mod)) {
+      try {
+        const init = await fn(input)
+        hooks.push(init)
+      } catch (error) {
+        log.error("Failed to initialize plugin export", {
+          path: plugin,
+          exportName,
+          error,
+        })
+      }
+    }
```

This keeps misbehaving plugins from taking down the whole plugin subsystem while still surfacing clear logs.

Also applies to: 15-67
packages/opencode/src/format/index.ts (1)
43-59: Formatter override merge semantics are subtle; consider making arrays explicit

The override block:

```ts
const result: Formatter.Info = mergeDeep(formatters[name] ?? {}, {
  command: [],
  extensions: [],
  ...item,
})
if (result.command.length === 0) continue
result.enabled = async () => true
result.name = name
formatters[name] = result
```

relies on `mergeDeep`'s array behavior plus a `command: []` default. If arrays in the second argument overwrite the base arrays (which is common for deep-merge utilities), then:

- A config that only sets `environment` for an existing formatter (no `command`) can end up with `result.command` empty, causing the `continue` to run and the environment override to be dropped silently.
- Similar surprises are possible for `extensions`.

To avoid subtle bugs and make the behavior obvious, I'd suggest handling `command`/`extensions` explicitly instead of via `mergeDeep`, e.g.:

- Start from the base `Formatter.Info`.
- Override `command` only when `item.command` is provided.
- Override `extensions` only when `item.extensions` is provided.
- Always merge/override `environment` and other scalar fields.

And add a small test that verifies environment-only overrides and partial overrides behave as intended.
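A sketch of the explicit-override approach under an assumed minimal `Formatter.Info` shape (`applyOverride` is a hypothetical helper, not the module's API):

```typescript
// Assumed minimal formatter shape, for illustration.
type FormatterInfo = {
  command: string[]
  extensions: string[]
  environment?: Record<string, string>
}
type FormatterOverride = Partial<FormatterInfo>

// Arrays are replaced only when the override actually provides them, so an
// environment-only override can no longer zero out `command` and get dropped.
function applyOverride(base: FormatterInfo, item: FormatterOverride): FormatterInfo {
  return {
    command: item.command ?? base.command,
    extensions: item.extensions ?? base.extensions,
    environment: { ...base.environment, ...item.environment },
  }
}
```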
packages/opencode/test/config/config.test.ts (1)
271-297: Use `pathToFileURL` for plugin file URLs to keep tests OS-agnostic

The new plugin tests look good and accurately exercise scoped vs. relative plugin handling, but the expectation in the "appends plugins discovered from directories…" test builds a `file://` URL manually:

```ts
expect(config.plugin).toEqual([
  "local-plugin",
  `file://${path.join(globalTmp.path, "plugin", "custom.ts")}`,
])
```

On Windows this will generate an invalid URL (`file://C:\...`) and won't match how the config loader constructs file URLs (which appears to mirror the earlier test using `pathToFileURL`).

You can make this portable and consistent with the previous test by reusing `pathToFileURL`:

```diff
-    const config = await Config.get()
-    expect(config.plugin).toEqual(["local-plugin", `file://${path.join(globalTmp.path, "plugin", "custom.ts")}`])
+    const config = await Config.get()
+    const pluginFileUrl = pathToFileURL(path.join(globalTmp.path, "plugin", "custom.ts")).href
+    expect(config.plugin).toEqual(["local-plugin", pluginFileUrl])
```

(`pathToFileURL` is already imported at the top of this file.)
Also applies to: 465-505
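For reference, Node's `pathToFileURL` (from `node:url`) handles the platform differences automatically:

```typescript
import { pathToFileURL } from "node:url"
import path from "node:path"

// On POSIX, separator-joined paths become file:///... URLs; on Windows,
// drive letters and back-slashes are encoded correctly (file:///C:/...),
// which naive "file://" + path concatenation gets wrong.
const p = path.join(path.sep, "tmp", "plugin", "custom.ts")
const url = pathToFileURL(p).href
```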
packages/opencode/test/config/hot-reload.test.ts (1)
57-72: "event subscriber…" test doesn't actually exercise subscription ordering

The helper `subscribeWithContext` is designed to run `Config.Event.Updated` handlers inside the correct `Instance` context, but the test:

```ts
test("event subscriber sees refreshed config before targeted invalidations", async () => {
  // ...
  const response = await patchConfig(directory, { model: "event-model" }, "global")
  expect(response.status).toBe(200)
  await Instance.provide({
    directory,
    fn: async () => {
      const config = await Config.get()
      expect(config.model).toBe("event-model")
    },
  })
})
```

never attaches a subscriber and doesn't assert anything about the ordering of "subscriber sees refreshed config" vs. subsequent invalidations. It effectively just re-checks that a global PATCH updates the model for that workspace.

Consider either:

- Updating the test to use `subscribeWithContext` and assert that inside the event callback `Config.get()` already reflects the updated config; or
- Renaming the test to reflect what it actually covers if ordering is already validated elsewhere.

This will keep the test suite's intent aligned with behavior and make regressions around event ordering easier to catch.
Also applies to: 240-262
packages/opencode/src/config/invalidation.ts (5)
19-56: Unused `diff` parameters in invalidate helpers

Several helpers (`invalidateProvider`, `invalidateMCP`, `invalidateLSP`, `invalidatePlugin`) accept a `diff: ConfigDiff` parameter they don't use. Either drop the parameter or rename it to `_diff` to avoid lint noise and make intent clearer.
58-133: Reliance on top-level diff flags for invalidation coverage

`applyInternal` only triggers most invalidations based on boolean flags like `diff.provider`, `diff.mcp`, `diff.plugin`, etc., while more granular fields (`providerKeys`, `mcpKeys`, `pluginAdded`, `pluginRemoved`) are ignored for invalidation. This assumes `computeDiff` always sets the corresponding top-level flags whenever nested keys change; if that ever isn't true, provider/MCP/plugin/tool-registry invalidations will silently be skipped. Consider asserting this invariant in tests or deriving `providerChanged`/`mcpChanged`/`pluginChanged` from both the flags and the nested key buckets.
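One way to assert that invariant defensively is to derive each section flag from both signals; a sketch with an assumed subset of the PR's `ConfigDiff` shape:

```typescript
// Assumed subset of ConfigDiff, for illustration.
type DiffSubset = {
  provider?: boolean
  providerKeys?: string[]
}

// Treat a section as changed when either the boolean flag is set or the
// granular key bucket is non-empty, so a computeDiff bug cannot silently
// skip the corresponding invalidation.
function providerChanged(diff: DiffSubset): boolean {
  return Boolean(diff.provider) || (diff.providerKeys?.length ?? 0) > 0
}
```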
58-71: Directory selection for global-scope invalidation

`targetDirectory` falls back to `process.cwd()` when `input.directory` is absent, even for `scope: "global"`. If a global config update is published without a specific directory, this will only refresh/invalidate the instance rooted at the current working directory, not all active instances. If the intended behaviour is "global fan-out", consider routing global updates through `Instance.forEach` (or ensuring the publisher includes each instance's directory), and document this expectation.
135-155: Aggregate error handling hides which target failed

The `Promise.all(tasks)` block logs a single `"Targeted config invalidation failed"` message with only `error: String(error)`, losing which specific invalidation(s) failed. If multiple targets are scheduled, this makes debugging harder. Consider catching/logging per-task (or at least including `targets` and `sections` in the error log) while still allowing others to proceed.
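A sketch of per-target error attribution using `Promise.allSettled` (the names here are illustrative, not the module's):

```typescript
// Run every invalidation target even if some fail, and report which ones failed.
async function runTargets(tasks: Record<string, () => Promise<void>>): Promise<string[]> {
  const names = Object.keys(tasks)
  const results = await Promise.allSettled(names.map((name) => tasks[name]()))
  const failed: string[] = []
  results.forEach((result, i) => {
    if (result.status === "rejected") {
      // A real implementation would log result.reason alongside names[i] here.
      failed.push(names[i])
    }
  })
  return failed
}
```

Unlike `Promise.all`, `allSettled` never short-circuits, so one failing target cannot mask the others.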
160-182: Event payload assumptions in `setup` subscriber

The `setup` subscriber destructures `diff`, `scope`, `directory`, and `refreshed` from `event.properties as any`. This tightly couples to how the event is published (including `refreshed`), but the Zod schema for `Config.Event.Updated` only guarantees `diff`, `scope`, `directory?`, `refreshed?`. Please ensure the publisher always sets `refreshed` consistently with the earlier cache invalidation (so `applyInternal` doesn't redundantly invalidate `"config"`), and consider narrowing the cast or validating `event.properties` before passing into `apply`.

packages/opencode/src/config/write.ts (3)
25-49: JSON vs JSONC handling and initial write semantics

`isJsonc` treats both `.jsonc` and `.json` as JSONC for incremental updates, but the initial write path (`!existingContent || !file.exists()`) always does a full `JSON.stringify` rewrite, losing any existing JSONC comments/formatting if `existingContent` is non-null but `file.exists()` has raced to `false`. This is probably fine under the current locked writer (since `existingContent` should only be non-null for an existing file), but it's worth confirming the invariants around `existingContent` and whether you ever intend to preserve JSONC comments on first write.
117-174: Incremental update behaviour when `previous` is unavailable

When `options.previous` is absent, `applyIncrementalUpdates` only sets keys present in `newConfig` and never removes keys that disappeared, which can leave stale keys from the existing file. From `persist.update`, this happens when `previousNormalized` is missing (e.g., parse/validation failures of existing content). If that's intentional (best-effort, non-destructive updates), it would be good to document; otherwise, consider using the parsed raw object as `previous` to allow deletions to propagate even in that case.
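When a `previous` snapshot is available, propagating deletions reduces to a key diff; a minimal sketch:

```typescript
// Keys present in the previous config but absent from the next one should be
// deleted from the file during an incremental update.
function removedKeys(
  previous: Record<string, unknown>,
  next: Record<string, unknown>,
): string[] {
  return Object.keys(previous).filter((key) => !(key in next))
}
```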
176-221: Nested instruction key derivation couples tightly to `ConfigDiff`

`buildNestedInstructions` derives `diffChildKeys` from `diff.providerKeys`/`diff.mcpKeys` plus unions of previous/next keys. This is reasonable, but it again assumes `computeDiff` is in sync with these expectations (e.g., that provider/mcp key buckets are always present when needed). Make sure tests cover cases where individual provider/MCP entries are added, removed, or modified so nested updates don't miss or over-update keys.

packages/opencode/src/config/config.ts (3)
43-73: Plugin file reference resolution: Windows and relative path coverage

`resolvePluginFileReference` correctly normalizes `~/`, Windows-style relative prefixes, and absolute paths into `file://` URLs, and uses `URL` + `pathToFileURL` to resolve relative specifiers against the config file. One minor nit: `normalizeWindowsPath` only runs in the relative-URL branch, so Windows absolute paths resolved via `pathToFileURL` will still include backslashes correctly, but specifiers like `C:\foo\bar.ts` on non-Windows platforms won't be treated as path-like (which is probably acceptable). Overall, this is a good improvement.
89-165: Plugin aggregation and potential duplication across directories

`loadStateFromDisk` now aggregates plugin file URLs into `pluginFiles` and appends them to `result.plugin` after processing all directories. This ensures you don't lose global or project-level plugin discoveries, but it may introduce duplicates if the same plugin path appears in multiple directories or config sources. Consider de-duplicating `result.plugin` (e.g., via a `Set`) before returning the final config to avoid redundant plugin registrations.
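De-duplication while preserving first-seen order is a one-liner with `Set`:

```typescript
// Later duplicates are dropped; the first occurrence keeps its position,
// so earlier (e.g. global) plugin entries still win their slot.
function dedupePlugins(plugins: string[]): string[] {
  return [...new Set(plugins)]
}
```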
879-895: `Config.update` depends on ambient `Instance.directory` when `directory` omitted

`update()` defaults `scope` to `"project"` and `directory` to `Instance.directory` before delegating to `./persist`. This is fine as long as all callers either (a) run inside an `Instance.provide` context or (b) explicitly pass `directory`. If you ever call `Config.update` for project scope outside an instance context, this will throw `Context.NotFound` via `Instance.directory`. It's worth documenting this requirement and perhaps asserting in tests.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`packages/sdk/js/src/gen/types.gen.ts` is excluded by `!**/gen/**`
📒 Files selected for processing (29)
- `packages/opencode/src/agent/agent.ts` (2 hunks)
- `packages/opencode/src/command/index.ts` (2 hunks)
- `packages/opencode/src/config/backup.ts` (1 hunk)
- `packages/opencode/src/config/config.ts` (9 hunks)
- `packages/opencode/src/config/diff.ts` (1 hunk)
- `packages/opencode/src/config/error.ts` (1 hunk)
- `packages/opencode/src/config/global-file.ts` (1 hunk)
- `packages/opencode/src/config/hot-reload.ts` (1 hunk)
- `packages/opencode/src/config/invalidation.ts` (1 hunk)
- `packages/opencode/src/config/lock.ts` (1 hunk)
- `packages/opencode/src/config/persist.ts` (1 hunk)
- `packages/opencode/src/config/write.ts` (1 hunk)
- `packages/opencode/src/file/index.ts` (2 hunks)
- `packages/opencode/src/file/watcher.ts` (2 hunks)
- `packages/opencode/src/format/index.ts` (2 hunks)
- `packages/opencode/src/lsp/index.ts` (2 hunks)
- `packages/opencode/src/mcp/index.ts` (2 hunks)
- `packages/opencode/src/permission/index.ts` (2 hunks)
- `packages/opencode/src/plugin/index.ts` (1 hunk)
- `packages/opencode/src/project/bootstrap.ts` (1 hunk)
- `packages/opencode/src/project/instance.ts` (1 hunk)
- `packages/opencode/src/project/state.ts` (3 hunks)
- `packages/opencode/src/provider/provider.ts` (2 hunks)
- `packages/opencode/src/server/server.ts` (4 hunks)
- `packages/opencode/src/tool/registry.ts` (2 hunks)
- `packages/opencode/test/config/config.test.ts` (4 hunks)
- `packages/opencode/test/config/hot-reload.test.ts` (1 hunk)
- `packages/opencode/test/config/write.test.ts` (1 hunk)
- `specs/config-spec.md` (1 hunk)
🔇 Additional comments (16)
packages/opencode/src/config/hot-reload.ts (1)
1-3: Env flag helper is clear and minimal

Implementation is straightforward, typed, and matches the documented OPENCODE_CONFIG_HOT_RELOAD === "true" contract. No changes needed.

packages/opencode/src/project/bootstrap.ts (1)
13-29: Config invalidation bootstrap ordering looks appropriate

Calling await ConfigInvalidation.setup() at the start of InstanceBootstrap (before Plugin/Format/LSP/File init) is a good place to wire hot-reload invalidation so subsequent subsystems are ready to react to config updates. No issues from this change alone.

packages/opencode/src/file/watcher.ts (1)
5-77: State.register-based watcher lifecycle looks correct

The switch from Instance.state to State.register("filewatcher", ...) is consistent with the new state system:

- Directory scoping via () => Instance.directory matches other modules (config, tool-registry).
- Non-git projects cleanly no-op by returning {}, and the disposer safely checks for sub.
- FileWatcher.init() still respects Flag.OPENCODE_EXPERIMENTAL_WATCHER before touching state.

This refactor looks good.
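The register/invalidate lifecycle used by the watcher (and the other migrated modules) can be illustrated with a minimal, hypothetical sketch — the names mirror the review discussion, not the real State API:

```typescript
// Hypothetical, simplified sketch of the State.register pattern: lazily
// initialized state keyed by a scope, with invalidation and an optional
// disposer. Not the actual opencode implementation.
type Entry<T> = { value?: T; initialized: boolean }

function register<T>(scope: () => string, init: () => T, dispose?: (value: T) => void) {
  const entries = new Map<string, Entry<T>>()
  return {
    get(): T {
      const key = scope()
      let entry = entries.get(key)
      if (!entry || !entry.initialized) {
        // Run the initializer once per scope, then cache the result.
        entry = { value: init(), initialized: true }
        entries.set(key, entry)
      }
      return entry.value as T
    },
    invalidate(): void {
      const key = scope()
      const entry = entries.get(key)
      // Dispose the old value (if any) so the next get() re-initializes.
      if (entry?.initialized && dispose) dispose(entry.value as T)
      entries.delete(key)
    },
  }
}

// Usage: the initializer runs once per scope until invalidated.
let initCount = 0
const state = register(() => "/project", () => ({ id: ++initCount }))
state.get() // initializes
state.get() // cached
state.invalidate()
state.get() // re-initializes
```

The disposer hook is what lets a module like the file watcher close subscriptions cleanly when config invalidation tears the state down.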
packages/opencode/src/config/global-file.ts (1)
5-7: Global config resolver is simple and matches documented behavior
resolveGlobalFile() eagerly creates Global.Path.config and returns opencode.jsonc under it, which aligns with the spec’s global-scope semantics and keeps the path resolution centralized. No issues from a correctness or lifecycle standpoint.

packages/opencode/src/permission/index.ts (1)
7-7: Permission state registration and teardown behavior look sound

Registering permission state via State.register("permission", () => Instance.directory, () => { ... }, async (state) => { ... }) keeps the prior in-memory structure while making it invalidation-aware. On dispose, all pending requests are rejected with RejectedError, which is an appropriate default for hot-reload or shutdown.

No functional issues spotted in this block.
Also applies to: 53-85
packages/opencode/src/tool/registry.ts (1)
27-58: State.register migration for tool registry looks consistent

The registry state is now properly keyed as "tool-registry" and scoped by Instance.directory, matching the new state system used elsewhere (config, format, etc.). The async initializer still aggregates tools from config directories and plugins with unchanged behavior, so this refactor looks safe.

packages/opencode/src/config/diff.ts (1)
4-123: ConfigDiff and computeDiff implementation look solid

The ConfigDiff shape and computeDiff() logic line up well with the Config.Info schema: you capture all of the important top-level sections, include key-level details for provider/mcp, and expose pluginAdded/pluginRemoved for targeted invalidation. computeKeysChanged() handles undefined inputs safely. No issues from a correctness or readability standpoint.

packages/opencode/src/server/server.ts (2)
84-100: Config PATCH flow and LastConfigUpdate cache are well structured

The new LastConfigUpdate map plus rememberConfigUpdate() give you a cheap, per-directory record of which sections changed, and the 60s cleanup interval should prevent the cache from growing unbounded.

The /config PATCH handler’s integration with Config.update and hot-reload is also solid:

- Scope handling (project vs global) is explicit and defaults sensibly.
- diffForPublish is used for downstream events and to derive sections for TUI enrichment.
- When hot reload is enabled:
  - Project scope publishes a single Config.Event.Updated for the current directory.
  - Global scope fans out via Instance.forEach, remembers sections per directory, and aggregates any publish failures into a single error with good logging context.

This looks like a good balance between correctness, observability, and keeping the hot-reload behavior clearly behind the env flag.
Also applies to: 252-303
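The "aggregate any publish failures into a single error" pattern noted above can be sketched as follows; the input shape and names are assumptions for illustration, not the real server code:

```typescript
// Hypothetical sketch: collect per-directory publish failures and fold them
// into one error instead of stopping at the first failure.
interface PublishResult {
  directory: string
  error?: Error
}

function aggregatePublishFailures(results: PublishResult[]): Error | undefined {
  const failures = results.filter((r) => r.error !== undefined)
  if (failures.length === 0) return undefined
  // Preserve every failing directory and message in the aggregate.
  const detail = failures.map((f) => `${f.directory}: ${f.error!.message}`).join("; ")
  return new Error(`config update publish failed for ${failures.length} directories: ${detail}`)
}
```

Collecting all results before raising keeps one misbehaving instance from hiding failures (or successes) in the others.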
1784-1813: Nice TUI toast enrichment based on recent config updates

The /tui/show-toast handler’s enrichment logic is well-contained and defensive:

- It only rewrites messages that match “Saved (global|project) config”.
- It prefers a fresh (< 10s) LastConfigUpdate entry for the current directory/scope and falls back to the freshest matching scope overall.
- It replaces the -> undefined suffix when present, or rebuilds the message to include the changed sections or “no changes”.
- Any failure is swallowed, so the endpoint never breaks toast delivery.
This should make config save toasts much more informative without risking instability.
packages/opencode/src/config/error.ts (1)
1-45: Structured config error types look good

The four NamedError definitions (ConfigUpdateError, ConfigValidationError, ConfigWriteConflictError, ConfigWriteError) have clear, appropriately scoped payload schemas and align well with the update/validation/write flows described in the PR. This should make error handling and logging around config operations much more consistent.

packages/opencode/src/config/write.ts (1)
281-295: syncDirectory logging and error tolerance look good

The directory fsync logic correctly skips Windows and tolerates EISDIR/ENOENT, only warning on other failures. This is a nice balance between durability and portability.

packages/opencode/src/config/config.ts (5)
27-41: Plugin path detection helper looks solid
isPathLikePluginSpecifier cleanly distinguishes path-like plugin values (file URLs, relative, home-relative, and absolute paths, including common Windows prefixes) from bare specifiers. This should make downstream resolution behaviour much more predictable.
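As a rough illustration of those detection rules, a sketch might look like the following — this is a hedged re-derivation from the description above, not the actual helper in config.ts:

```typescript
// Hypothetical sketch of path-like plugin specifier detection. The real
// isPathLikePluginSpecifier may differ in detail.
function isPathLikePluginSpecifier(value: string): boolean {
  if (value.startsWith("file://")) return true // file URLs
  if (value.startsWith("./") || value.startsWith("../")) return true // relative paths
  if (value.startsWith("~/")) return true // home-relative paths
  if (value.startsWith("/")) return true // POSIX absolute paths
  if (/^[A-Za-z]:[\\/]/.test(value)) return true // Windows drive prefix, e.g. C:\
  if (value.startsWith("\\\\")) return true // Windows UNC paths
  return false // bare specifier, e.g. an npm package name
}
```

Anything not matching a path shape falls through to bare-specifier resolution, which keeps the two code paths unambiguous.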
123-132: Filesystem.up start/stop parameters

Using Filesystem.up({ targets: [".opencode"], start: directory, stop: worktree }) makes the search for .opencode directories explicit and bounded, which is a nice robustness improvement over unbounded upward traversal. This looks good.
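The bounded upward walk can be sketched as a pure path computation — a hypothetical illustration of the start/stop semantics, not the real Filesystem.up:

```typescript
import * as path from "node:path"

// Hypothetical sketch: walk from `start` toward the root, but never past
// `stop`, returning the candidate paths that would be probed for `target`.
function upwardCandidates(start: string, stop: string, target: string): string[] {
  const candidates: string[] = []
  let current = path.resolve(start)
  const bound = path.resolve(stop)
  while (true) {
    candidates.push(path.join(current, target))
    if (current === bound) break // honor the stop boundary (the worktree)
    const parent = path.dirname(current)
    if (parent === current) break // safety net: hit the filesystem root
    current = parent
  }
  return candidates
}
```

The explicit `stop` is what prevents the search from escaping the worktree and scanning unrelated parent directories.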
197-202: Config state registration and readFreshConfig semantics

state = State.register("config", () => Instance.directory, loadStateFromDisk) plus readFreshConfig() (which bypasses state and hits disk directly) is a clear separation between cached and fresh config. This matches how persist.update chooses between Config.get() and readFreshConfig() based on hot-reload, and looks correct.
716-746: Global config loading via resolveGlobalFile

loadGlobalConfig now resolves the global JSONC path via resolveGlobalFile() and merges config.json, opencode.json, and the resolved global file. This ensures the config directory exists and centralizes the global path logic, which should reduce path drift and race conditions. Looks good.
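The layered merge precedence (config.json, then opencode.json, then the resolved global file, later sources winning) can be sketched generically — the real loader uses its own merge utility, so this is only an illustration of the ordering:

```typescript
// Hypothetical sketch of layered config merging: later sources win key-by-key,
// and plain objects are merged recursively so nested sections combine.
type Json = { [key: string]: unknown }

function isPlainObject(value: unknown): value is Json {
  return typeof value === "object" && value !== null && !Array.isArray(value)
}

function mergeDeep(base: Json, override: Json): Json {
  const out: Json = { ...base }
  for (const [key, value] of Object.entries(override)) {
    const existing = out[key]
    out[key] =
      isPlainObject(existing) && isPlainObject(value)
        ? mergeDeep(existing, value) // recurse into nested sections
        : value // later source wins for scalars and arrays
  }
  return out
}

// Fold the sources in precedence order, lowest priority first.
function mergeConfigLayers(sources: Json[]): Json {
  return sources.reduce((acc, src) => mergeDeep(acc, src), {})
}
```

Ordering the array lowest-priority-first keeps the reduce direction obvious and makes adding another layer a one-line change.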
760-799: New plugin specifier normalization in load

The post-parse loop that rewrites data.plugin[i] using resolvePluginFileReference when isPathLikePluginSpecifier holds gives you normalized file URLs in memory without mutating the original file (unless $schema is missing). This should make downstream plugin resolution simpler and more consistent across project/global configs.
| const state = State.register( | ||
| "agent", | ||
| () => Instance.directory, | ||
| async () => { | ||
| const cfg = await Config.get() | ||
| const defaultTools = cfg.tools ?? {} | ||
| const defaultPermission: Info["permission"] = { | ||
| edit: "allow", | ||
| bash: { | ||
| "cut*": "allow", | ||
| "diff*": "allow", | ||
| "du*": "allow", | ||
| "file *": "allow", | ||
| "find * -delete*": "ask", | ||
| "find * -exec*": "ask", | ||
| "find * -fprint*": "ask", | ||
| "find * -fls*": "ask", | ||
| "find * -fprintf*": "ask", | ||
| "find * -ok*": "ask", | ||
| "find *": "allow", | ||
| "git diff*": "allow", | ||
| "git log*": "allow", | ||
| "git show*": "allow", | ||
| "git status*": "allow", | ||
| "git branch": "allow", | ||
| "git branch -v": "allow", | ||
| "grep*": "allow", | ||
| "head*": "allow", | ||
| "less*": "allow", | ||
| "ls*": "allow", | ||
| "more*": "allow", | ||
| "pwd*": "allow", | ||
| "rg*": "allow", | ||
| "sort --output=*": "ask", | ||
| "sort -o *": "ask", | ||
| "sort*": "allow", | ||
| "stat*": "allow", | ||
| "tail*": "allow", | ||
| "tree -o *": "ask", | ||
| "tree*": "allow", | ||
| "uniq*": "allow", | ||
| "wc*": "allow", | ||
| "whereis*": "allow", | ||
| "which*": "allow", | ||
| "*": "ask", | ||
| "*": "allow", | ||
| }, | ||
| webfetch: "allow", | ||
| }, | ||
| cfg.permission ?? {}, | ||
| ) | ||
| doom_loop: "ask", | ||
| external_directory: "ask", | ||
| } | ||
| const agentPermission = mergeAgentPermissions(defaultPermission, cfg.permission ?? {}) | ||
|
|
||
| const result: Record<string, Info> = { | ||
| general: { | ||
| name: "general", | ||
| description: | ||
| "General-purpose agent for researching complex questions, searching for code, and executing multi-step tasks. When you are searching for a keyword or file and are not confident that you will find the right match in the first few tries use this agent to perform the search for you.", | ||
| tools: { | ||
| todoread: false, | ||
| todowrite: false, | ||
| ...defaultTools, | ||
| const planPermission = mergeAgentPermissions( | ||
| { | ||
| edit: "deny", | ||
| bash: { | ||
| "cut*": "allow", | ||
| "diff*": "allow", | ||
| "du*": "allow", | ||
| "file *": "allow", | ||
| "find * -delete*": "ask", | ||
| "find * -exec*": "ask", | ||
| "find * -fprint*": "ask", | ||
| "find * -fls*": "ask", | ||
| "find * -fprintf*": "ask", | ||
| "find * -ok*": "ask", | ||
| "find *": "allow", | ||
| "git diff*": "allow", | ||
| "git log*": "allow", | ||
| "git show*": "allow", | ||
| "git status*": "allow", | ||
| "git branch": "allow", | ||
| "git branch -v": "allow", | ||
| "grep*": "allow", | ||
| "head*": "allow", | ||
| "less*": "allow", | ||
| "ls*": "allow", | ||
| "more*": "allow", | ||
| "pwd*": "allow", | ||
| "rg*": "allow", | ||
| "sort --output=*": "ask", | ||
| "sort -o *": "ask", | ||
| "sort*": "allow", | ||
| "stat*": "allow", | ||
| "tail*": "allow", | ||
| "tree -o *": "ask", | ||
| "tree*": "allow", | ||
| "uniq*": "allow", | ||
| "wc*": "allow", | ||
| "whereis*": "allow", | ||
| "which*": "allow", | ||
| "*": "ask", | ||
| }, | ||
| webfetch: "allow", | ||
| }, | ||
| options: {}, | ||
| permission: agentPermission, | ||
| mode: "subagent", | ||
| builtIn: true, | ||
| }, | ||
| build: { | ||
| name: "build", | ||
| tools: { ...defaultTools }, | ||
| options: {}, | ||
| permission: agentPermission, | ||
| mode: "primary", | ||
| builtIn: true, | ||
| }, | ||
| plan: { | ||
| name: "plan", | ||
| options: {}, | ||
| permission: planPermission, | ||
| tools: { | ||
| ...defaultTools, | ||
| cfg.permission ?? {}, | ||
| ) | ||
|
|
||
| const result: Record<string, Info> = { | ||
| general: { | ||
| name: "general", | ||
| description: | ||
| "General-purpose agent for researching complex questions, searching for code, and executing multi-step tasks. When you are searching for a keyword or file and are not confident that you will find the right match in the first few tries use this agent to perform the search for you.", | ||
| tools: { | ||
| todoread: false, | ||
| todowrite: false, | ||
| ...defaultTools, | ||
| }, | ||
| options: {}, | ||
| permission: agentPermission, | ||
| mode: "subagent", | ||
| builtIn: true, | ||
| }, | ||
| mode: "primary", | ||
| builtIn: true, | ||
| }, | ||
| } | ||
| for (const [key, value] of Object.entries(cfg.agent ?? {})) { | ||
| if (value.disable) { | ||
| delete result[key] | ||
| continue | ||
| } | ||
| let item = result[key] | ||
| if (!item) | ||
| item = result[key] = { | ||
| name: key, | ||
| mode: "all", | ||
| build: { | ||
| name: "build", | ||
| tools: { ...defaultTools }, | ||
| options: {}, | ||
| permission: agentPermission, | ||
| mode: "primary", | ||
| builtIn: true, | ||
| }, | ||
| plan: { | ||
| name: "plan", | ||
| options: {}, | ||
| tools: {}, | ||
| builtIn: false, | ||
| } | ||
| const { name, model, prompt, tools, description, temperature, top_p, mode, permission, color, ...extra } = value | ||
| item.options = { | ||
| ...item.options, | ||
| ...extra, | ||
| permission: planPermission, | ||
| tools: { | ||
| ...defaultTools, | ||
| }, | ||
| mode: "primary", | ||
| builtIn: true, | ||
| }, | ||
| } | ||
| if (model) item.model = Provider.parseModel(model) | ||
| if (prompt) item.prompt = prompt | ||
| if (tools) | ||
| for (const [key, value] of Object.entries(cfg.agent ?? {})) { | ||
| if (value.disable) { | ||
| delete result[key] | ||
| continue | ||
| } | ||
| let item = result[key] | ||
| if (!item) | ||
| item = result[key] = { | ||
| name: key, | ||
| mode: "all", | ||
| permission: agentPermission, | ||
| options: {}, | ||
| tools: {}, | ||
| builtIn: false, | ||
| } | ||
| const { name, model, prompt, tools, description, temperature, top_p, mode, color, permission, ...extra } = value | ||
| item.options = { | ||
| ...item.options, | ||
| ...extra, | ||
| } | ||
| if (model) item.model = Provider.parseModel(model) | ||
| if (prompt) item.prompt = prompt | ||
| if (tools) | ||
| item.tools = { | ||
| ...item.tools, | ||
| ...tools, | ||
| } | ||
| item.tools = { | ||
| ...defaultTools, | ||
| ...item.tools, | ||
| ...tools, | ||
| } | ||
| item.tools = { | ||
| ...defaultTools, | ||
| ...item.tools, | ||
| } | ||
| if (description) item.description = description | ||
| if (temperature != undefined) item.temperature = temperature | ||
| if (top_p != undefined) item.topP = top_p | ||
| if (mode) item.mode = mode | ||
| if (color) item.color = color | ||
| // just here for consistency & to prevent it from being added as an option | ||
| if (name) item.name = name | ||
| if (description) item.description = description | ||
| if (temperature != undefined) item.temperature = temperature | ||
| if (top_p != undefined) item.topP = top_p | ||
| if (mode) item.mode = mode | ||
| if (color) item.color = color | ||
| // just here for consistency & to prevent it from being added as an option | ||
| if (name) item.name = name | ||
|
|
||
| if (permission ?? cfg.permission) { | ||
| item.permission = mergeAgentPermissions(cfg.permission ?? {}, permission ?? {}) | ||
| if (permission ?? cfg.permission) { | ||
| item.permission = mergeAgentPermissions(cfg.permission ?? {}, permission ?? {}) | ||
| } | ||
| } | ||
| } | ||
| return result | ||
| }) | ||
| return result | ||
| }, | ||
| ) |
Per-agent permission merge can drop doom_loop / external_directory defaults and mutates inputs
The overall State.register("agent", …) flow looks solid, but the permission merging has a subtle layering issue:
- You compute agentPermission and planPermission with the intended defaults (including doom_loop: "ask" and external_directory: "ask" for the general/build agents).
- In the per-agent override loop, you then do:

if (permission ?? cfg.permission) {
  item.permission = mergeAgentPermissions(cfg.permission ?? {}, permission ?? {})
}

For built-in agents that have a corresponding cfg.agent.<name> entry but no per-agent permission, and a top-level cfg.permission present, this re-merge uses only cfg.permission as the base and discards the earlier defaultPermission-derived fields. That can turn doom_loop / external_directory from "ask" into undefined for those agents, changing safety defaults whenever the user both sets global permissions and customizes agent metadata.
A safer pattern is to treat the existing item.permission as the base and only apply per-agent overrides:
- if (permission ?? cfg.permission) {
- item.permission = mergeAgentPermissions(cfg.permission ?? {}, permission ?? {})
- }
+ if (permission) {
+ // Start from whatever base permission was assigned (default/plan/global),
+ // and layer per-agent overrides on top.
+ item.permission = mergeAgentPermissions(item.permission, permission)
+ }Separately, mergeAgentPermissions mutates its input objects:
if (typeof basePermission.bash === "string") {
basePermission.bash = { "*": basePermission.bash }
}
if (typeof overridePermission.bash === "string") {
overridePermission.bash = { "*": overridePermission.bash }
}Since cfg.permission is passed directly here, this normalizes and mutates the config object in-place. Even if current callers tolerate it, it’s surprising for a merge helper and makes future reuse of cfg.permission harder to reason about. Consider cloning the inputs first (or normalizing into locals) to keep mergeAgentPermissions side‑effect free.
Also applies to: 226-263
Permission layering loses defaults and mergeAgentPermissions mutates inputs
The review comment accurately identifies two real issues in the code:
Issue 1: Permission layering can drop doom_loop / external_directory defaults
Verified in the code flow:
- defaultPermission includes doom_loop: "ask" and external_directory: "ask" (lines 50-57)
- The mergeAgentPermissions function returns these fields without defaults (lines 255-262): doom_loop: merged.doom_loop and external_directory: merged.external_directory can be undefined
- At line 179-181, for built-in agents with config entries but no per-agent permission field, the condition if (permission ?? cfg.permission) is true, triggering a re-merge that uses only cfg.permission as the base
- This overwrites the previously assigned defaults, potentially leaving doom_loop and external_directory undefined
Issue 2: mergeAgentPermissions mutates its inputs in-place
Confirmed at lines 227-235 where basePermission.bash and overridePermission.bash are directly modified. Since cfg.permission is passed as basePermission at line 180, the config object itself gets mutated with normalized bash structure—unexpected side effects for a utility function.
Suggested fix is sound: Using item.permission as the base for per-agent merges would preserve previously assigned defaults while only applying per-agent overrides.
🤖 Prompt for AI Agents
packages/opencode/src/agent/agent.ts lines 43-185: permission layering currently
overwrites defaults and the merge helper mutates inputs causing cfg to be
changed; fix by (1) changing the per-agent merge at the end to use the
already-assigned item.permission as the base (item.permission =
mergeAgentPermissions(item.permission, permission ?? {})) so defaults
(doom_loop, external_directory, etc.) are preserved, and (2) update
mergeAgentPermissions to never mutate its arguments (create new objects/clone
bash maps before normalizing/merging and always return a fresh merged object) so
cfg.permission is not modified.
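A side-effect-free variant of the merge, along the lines the fix above prescribes, could look like this — the field names follow the review text, and the real Info["permission"] shape may differ:

```typescript
// Hypothetical non-mutating sketch of mergeAgentPermissions: bash rules are
// normalized into fresh objects so the caller's cfg.permission is never
// rewritten in place, and base defaults survive unless explicitly overridden.
type Rule = "allow" | "ask" | "deny"
type BashRules = Record<string, Rule>
interface AgentPermission {
  edit?: Rule
  bash?: Rule | BashRules
  webfetch?: Rule
  doom_loop?: Rule
  external_directory?: Rule
}

function normalizeBash(bash: AgentPermission["bash"]): BashRules {
  if (bash === undefined) return {}
  // Clone into a fresh object instead of mutating the caller's value.
  return typeof bash === "string" ? { "*": bash } : { ...bash }
}

function mergeAgentPermissions(base: AgentPermission, override: AgentPermission): AgentPermission {
  return {
    ...base, // defaults like doom_loop/external_directory carry through
    ...override, // per-agent overrides win field-by-field
    bash: { ...normalizeBash(base.bash), ...normalizeBash(override.bash) },
  }
}
```

Because every input is cloned before normalization, calling this with cfg.permission as the base leaves the config object untouched, and layering per-agent overrides on top of item.permission preserves the earlier defaults.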
| export async function createBackup(filepath: string): Promise<string> { | ||
| const timestamp = new Date().toISOString().replace(/[:.]/g, "-") | ||
| const backupPath = `${filepath}.bak-${timestamp}` | ||
|
|
||
| if (await Bun.file(filepath).exists()) { | ||
| await fs.copyFile(filepath, backupPath) | ||
| } | ||
|
|
||
| return backupPath | ||
| } | ||
|
|
||
| export async function restoreBackup(backupPath: string, targetPath: string): Promise<void> { | ||
| await fs.copyFile(backupPath, targetPath) | ||
| await fs.unlink(backupPath) | ||
| } |
Handle missing backups gracefully in restoreBackup
createBackup returns a backup path even when the original file doesn’t exist, but restoreBackup unconditionally copyFiles from that path. If a write fails on a brand-new config (no original file), calling restoreBackup will currently throw ENOENT instead of behaving as “no backup to restore”.
Consider making restoreBackup tolerant of a missing backup file (and cleaning up best-effort):
-export async function restoreBackup(backupPath: string, targetPath: string): Promise<void> {
- await fs.copyFile(backupPath, targetPath)
- await fs.unlink(backupPath)
-}
+export async function restoreBackup(backupPath: string, targetPath: string): Promise<void> {
+ try {
+ await fs.copyFile(backupPath, targetPath)
+ } catch (error: any) {
+ if (error?.code === "ENOENT") {
+ // No backup to restore; nothing to do.
+ return
+ }
+ throw error
+ }
+
+ await fs.unlink(backupPath).catch((error: any) => {
+ if (error?.code === "ENOENT") return
+ throw error
+ })
+}Alternatively, createBackup could return null/undefined when no backup is created, and callers could skip restore in that case.
🤖 Prompt for AI Agents
In packages/opencode/src/config/backup.ts around lines 3 to 17, restoreBackup
currently unconditionally copies from a backup path that may not exist; change
restoreBackup to be tolerant of missing backups by first checking that the
backup file exists before attempting to copy, and if it does not exist simply
return; perform the copy and unlink inside a try/catch so any filesystem errors
are handled gracefully and ignore ENOENT when cleaning up (best-effort unlink),
and ensure the function still resolves void on missing backup instead of
throwing.
| export async function acquireLock(filepath: string, options?: LockOptions): Promise<() => Promise<void>> { | ||
| const normalized = path.normalize(filepath) | ||
| const timeout = options?.timeout ?? 30000 | ||
| const staleAfter = options?.staleAfter ?? LOCKFILE_STALE_AFTER_MS | ||
| const startTime = Date.now() | ||
|
|
||
| while (fileLocks.has(normalized)) { | ||
| const waited = Date.now() - startTime | ||
|
|
||
| if (waited > 5000 && waited < 5100) { | ||
| log.warn("lock acquisition taking longer than expected", { | ||
| filepath: normalized, | ||
| waited, | ||
| }) | ||
| } | ||
|
|
||
| if (waited > timeout) { | ||
| throw new Error(`Lock timeout: could not acquire lock for ${normalized} after ${waited}ms`) | ||
| } | ||
|
|
||
| await fileLocks.get(normalized) | ||
| await Bun.sleep(10) | ||
| } | ||
|
|
||
| let releaseFn: () => void | ||
| const lockPromise = new Promise<void>((resolve) => { | ||
| releaseFn = resolve | ||
| }) | ||
|
|
||
| fileLocks.set(normalized, lockPromise) | ||
|
|
||
| const releaseFilesystem = await acquireFilesystemLock({ | ||
| filepath: normalized, | ||
| timeout, | ||
| staleAfter, | ||
| startTime, | ||
| }) | ||
|
|
||
| return async () => { | ||
| fileLocks.delete(normalized) | ||
| releaseFn!() | ||
| await releaseFilesystem() | ||
| } |
Deadlock risk when filesystem lock acquisition fails
If acquireFilesystemLock throws (timeout or fs error), acquireLock leaves a pending promise in fileLocks and never calls releaseFn. Subsequent callers will hit while (fileLocks.has(normalized)) and then await fileLocks.get(normalized), which never resolves, effectively deadlocking local lock acquisition for that path.
To make failure modes safe, clean up the in-memory lock and resolve waiters when acquisition fails:
- const releaseFilesystem = await acquireFilesystemLock({
- filepath: normalized,
- timeout,
- staleAfter,
- startTime,
- })
-
- return async () => {
- fileLocks.delete(normalized)
- releaseFn!()
- await releaseFilesystem()
- }
+ let releaseFilesystem: (() => Promise<void>) | undefined
+ try {
+ releaseFilesystem = await acquireFilesystemLock({
+ filepath: normalized,
+ timeout,
+ staleAfter,
+ startTime,
+ })
+ } catch (error) {
+ // Unblock local waiters if we fail to acquire the filesystem lock.
+ fileLocks.delete(normalized)
+ releaseFn!()
+ throw error
+ }
+
+ return async () => {
+ fileLocks.delete(normalized)
+ releaseFn!()
+ await releaseFilesystem!()
+ }This preserves the timeout behavior while ensuring callers don’t get stuck after a failed acquisition.
🤖 Prompt for AI Agents
In packages/opencode/src/config/lock.ts around lines 90 to 132, the code sets an
in-memory promise in fileLocks then calls acquireFilesystemLock but if that call
throws the in-memory promise never resolves and the entry is never removed,
causing deadlocks for subsequent acquirers; wrap the acquireFilesystemLock call
in a try/catch, and on any error delete the fileLocks entry and resolve the
pending releaseFn before rethrowing the error (ensure releaseFn is defined
before the try and call it safely), so waiters are unblocked and the in-memory
lock is cleaned up while preserving the original error/timeout behavior.
| export async function writeFileAtomically(filepath: string, content: string): Promise<void> { | ||
| const directory = path.dirname(filepath) | ||
| const tempName = `${path.basename(filepath)}.${randomUUID()}.tmp` | ||
| const tempPath = path.join(directory, tempName) | ||
| await fs.mkdir(directory, { recursive: true }) | ||
| const handle = await fs.open(tempPath, constants.O_WRONLY | constants.O_CREAT | constants.O_TRUNC, 0o600) | ||
| await handle.writeFile(content, "utf8") | ||
| await handle.sync() | ||
| await handle.close() | ||
| await fs.rename(tempPath, filepath).catch(async (error) => { | ||
| await fs.unlink(tempPath).catch(() => {}) | ||
| throw error | ||
| }) | ||
| await syncDirectory(directory) | ||
| } |
🛠️ Refactor suggestion | 🟠 Major
Ensure atomic write closes handle and cleans temp file on all error paths
writeFileAtomically closes the file handle and unlinks the temp file only on the happy path and on rename failure; errors from writeFile or sync will leak the handle and leave the temp file behind. Wrapping the body in a try/finally that closes the handle (and best‑effort unlinks the temp file) would make this much more robust under disk errors.
- const handle = await fs.open(tempPath, constants.O_WRONLY | constants.O_CREAT | constants.O_TRUNC, 0o600)
- await handle.writeFile(content, "utf8")
- await handle.sync()
- await handle.close()
- await fs.rename(tempPath, filepath).catch(async (error) => {
- await fs.unlink(tempPath).catch(() => {})
- throw error
- })
+ const handle = await fs.open(tempPath, constants.O_WRONLY | constants.O_CREAT | constants.O_TRUNC, 0o600)
+ try {
+ await handle.writeFile(content, "utf8")
+ await handle.sync()
+ } finally {
+ await handle.close().catch(() => {})
+ }
+ await fs.rename(tempPath, filepath).catch(async (error) => {
+ await fs.unlink(tempPath).catch(() => {})
+ throw error
+ })| import { Config } from "../config/config" | ||
| import { spawn } from "child_process" | ||
| import { Instance } from "../project/instance" | ||
| import { State } from "../project/state" |
LSP State.register usage is fine; guard against spreading undefined for custom servers
Switching LSP to

```ts
const state = State.register("lsp", () => Instance.directory, async () => { ... }, async (state) => { ... })
```

is consistent with the rest of the subsystem migrations and should make LSP state hot-reloadable via config invalidation. The disposer shutting down all clients via `client.shutdown()` is also appropriate.
One edge case in the initializer to consider (pre-existing but worth fixing here):

```ts
for (const [name, item] of Object.entries(cfg.lsp ?? {})) {
  const existing = servers[name]
  ...
  servers[name] = {
    ...existing,
    id: name,
    root: existing?.root ?? (async () => Instance.directory),
    extensions: item.extensions ?? existing?.extensions ?? [],
    spawn: async (root) => { ... },
  }
}
```

If `cfg.lsp` allows defining custom LSP server IDs that don't correspond to any built-in `LSPServer` entry, `existing` will be `undefined`, and `...existing` will throw at runtime (`TypeError: Cannot convert undefined or null to object`). To make this safe while preserving override semantics for known servers, you can default `existing` to an empty object:
```diff
-  const existing = servers[name]
+  const existing = servers[name]
   ...
-  servers[name] = {
-    ...existing,
+  servers[name] = {
+    ...(existing ?? {}),
     id: name,
     root: existing?.root ?? (async () => Instance.directory),
     extensions: item.extensions ?? existing?.extensions ?? [],
     spawn: async (root) => { ... },
   }
```

This keeps overrides working for built-ins and avoids crashes when users introduce new LSP configs.
Also applies to: 62-126, 80-108
🤖 Prompt for AI Agents
```
In packages/opencode/src/lsp/index.ts around line 9 (and similarly for ranges
62-126 and 80-108), the initializer spreads `existing` which can be undefined
for user-defined custom servers causing a runtime TypeError; fix by defaulting
`existing` to an empty object before spreading (e.g. treat `existing` as
`existing ?? {}`) so overrides still merge for built-ins while preventing
crashes for new custom server entries, and apply the same defensive default
wherever `...existing` is used in those listed line ranges.
```
PATCH /config hot-reload w/ selective cache invalidation
Note
Introduce hot-reloadable PATCH /config with targeted cache invalidation, robust JSONC persistence, plugin path handling, and full tests/specs, plus a unified state registry and updates across subsystems.
- **PATCH /config**: supports `scope` (project/global), returns the merged config, and (when `OPENCODE_CONFIG_HOT_RELOAD=true`) publishes `config.updated` events; `GET /config` reflects hot-reload mode.
- JSONC persistence (`write.ts`) and diffs (`diff.ts`).
- Errors (`error.ts`), global file resolver, hot-reload switch, and plugin path resolution (preserve scoped specifiers; resolve relative/absolute to `file://`).
- Maps `plugin/*` files to the `plugin` list.
- `State.register` with named invalidation and `State.invalidate(name, key?)`; `Instance.invalidate`/`forEach` helpers; bootstrap wires the config invalidation listener.
- Subsystems updated: `agent`, `command`, `filewatcher`, `format`, `lsp`, `mcp`, `permission`, `plugin` (with cleanup), `provider`, and `tool-registry`; file listing refresh logic hardened.
- Adds the `config.updated` event to SDK types; adds `specs/config-spec.md` for `PATCH /config`.

Written by Cursor Bugbot for commit 980c331.
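The registry-with-named-invalidation pattern the summary describes can be sketched roughly as follows. This is a minimal illustrative version, not the actual opencode API: the names `register`/`invalidate`, their signatures, and the cache shape are all assumptions.

```typescript
// Minimal sketch of a named-state registry with selective invalidation.
// All names and signatures here are illustrative, not the real opencode API.

type Init<T> = () => T | Promise<T>
type Dispose<T> = (state: T) => void | Promise<void>

interface Entry {
  init: Init<unknown>
  dispose?: Dispose<unknown>
  cache?: { value: unknown }
}

const registry = new Map<string, Entry>()

function register<T>(name: string, init: Init<T>, dispose?: Dispose<T>) {
  registry.set(name, { init, dispose: dispose as Dispose<unknown> | undefined })
  return {
    // Lazily build the state on first access; later calls hit the cache.
    async get(): Promise<T> {
      const entry = registry.get(name)!
      if (!entry.cache) entry.cache = { value: await entry.init() }
      return entry.cache.value as T
    },
  }
}

// Invalidate one named state: drop the cache and run its disposer (if any),
// so the next get() re-runs the initializer against the fresh config.
async function invalidate(name: string): Promise<void> {
  const entry = registry.get(name)
  if (!entry?.cache) return
  const { value } = entry.cache
  entry.cache = undefined
  if (entry.dispose) await entry.dispose(value)
}
```

A config write handler would then call something like `invalidate("config")` only for the scope that actually changed, leaving unrelated cached state warm.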