This repository was archived by the owner on Mar 30, 2026. It is now read-only.
49 changes: 41 additions & 8 deletions src/plugin.ts
@@ -1410,7 +1410,7 @@ export const createAntigravityPlugin = (providerId: string) => async (

// Note: AccountManager now ensures the current auth is always included in accounts

const accountManager = await AccountManager.loadFromDisk(auth);
let accountManager = await AccountManager.loadFromDisk(auth);
activeAccountManager = accountManager;
if (accountManager.getAccountCount() > 0) {
accountManager.requestSaveToDisk();
@@ -1461,9 +1461,8 @@ export const createAntigravityPlugin = (providerId: string) => async (
return fetch(input, init);
}

if (accountManager.getAccountCount() === 0) {
throw new Error("No Antigravity accounts configured. Run `opencode auth login`.");
}
accountManager = await AccountManager.loadFromDisk(latestAuth);
activeAccountManager = accountManager;
Comment on lines +1464 to +1465
Contributor

**P2: Disk read on every request**

`AccountManager.loadFromDisk` is now called at the top of every `fetch` invocation, unconditionally. For heavy workloads (e.g. many parallel tool-call requests in a single agent turn) this multiplies disk reads proportionally.

Since the intent is only to avoid staleness between user interactions, it could be worth debouncing or caching the result for a short window (e.g. 1–5 s) so that a burst of concurrent requests within the same turn shares one disk read. This is not a correctness issue, but it may become noticeable at scale.

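The debounce/cache idea from this comment could be sketched as a small TTL wrapper around the loader. This is an illustrative sketch, not the plugin's actual API; `withTtlCache` and the TTL value are assumptions, and the real code would wrap `() => AccountManager.loadFromDisk(auth)`.

```typescript
type Loader<T> = () => Promise<T>;

// Wrap an async loader so a burst of callers within `ttlMs` shares one
// underlying load (e.g. one disk read) instead of issuing one each.
function withTtlCache<T>(load: Loader<T>, ttlMs: number): Loader<T> {
  let cached: { value: T; expires: number } | undefined;
  let inFlight: Promise<T> | undefined;
  return async () => {
    if (cached && Date.now() < cached.expires) {
      return cached.value;
    }
    if (!inFlight) {
      // Start one load; concurrent callers await the same promise.
      inFlight = load()
        .then((value) => {
          cached = { value, expires: Date.now() + ttlMs };
          return value;
        })
        .finally(() => {
          inFlight = undefined;
        });
    }
    return inFlight;
  };
}
```

The in-flight promise handles the concurrent-burst case (many parallel tool calls share one read), while the TTL covers back-to-back sequential requests within the same turn.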

Comment on lines +1464 to +1465
Contributor

⚠️ Potential issue | 🟠 Major

**Avoid replacing the `AccountManager` object mid-session.**

On line 1464 and line 1486, this hot-swaps the manager instance instead of reconciling it in place. `AccountManager` carries session-local state (`sessionOffsetApplied`, toast debounce, pending save timer), so the swap can re-apply the PID offset on later requests, re-fire "Using account..." toasts, and still let the old instance persist a stale snapshot after the reload. `rateLimitStateByAccountQuota` / `accountFailureState` are also keyed by `account.index`, and line 1427 leaves `refreshQueue` bound to the previous manager. This needs a reload handover that disposes/rebinds the old state instead of just reassigning the variable.

Also applies to: 1477-1494

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `src/plugin.ts` around lines 1464-1465, the code hot-swaps the manager (`accountManager = await AccountManager.loadFromDisk(latestAuth); activeAccountManager = accountManager`), which breaks session-local state (`sessionOffsetApplied`, toast debounce, pending save timer) and leaves `refreshQueue` and maps like `rateLimitStateByAccountQuota` / `accountFailureState` bound to the old instance. Instead, implement a reload handover:

1. Call `AccountManager.loadFromDisk(latestAuth)` to obtain `newManager`.
2. Transfer/merge session-local flags and timers from the existing manager into `newManager` (or add a `reconcileFrom(oldManager)` method on `AccountManager`).
3. Rebind `refreshQueue` and any external maps to `newManager`.
4. Dispose the old manager: cancel timers, clear debouncers, and stop using its index-based keys.
5. Set `activeAccountManager` to the reconciled manager, and apply the same pattern to the other reload locations around lines 1477-1494.
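The handover steps above could be sketched roughly as follows. All names here (`SessionState`, `ManagerHandle`, `reconcileFrom`, `dispose`) are assumptions modeled on the review comment, not the plugin's real types.

```typescript
// Session-local state that must survive a reload (field names are illustrative).
interface SessionState {
  sessionOffsetApplied: boolean;
  lastToastAtMs: number;
}

class ManagerHandle {
  private saveTimer: ReturnType<typeof setTimeout> | undefined;

  constructor(public session: SessionState) {}

  scheduleSave(delayMs: number, save: () => void): void {
    this.saveTimer = setTimeout(save, delayMs);
  }

  // Adopt the old instance's session-local state, then neutralize the old
  // instance so it cannot fire timers or persist a stale snapshot later.
  reconcileFrom(old: ManagerHandle): void {
    this.session = { ...old.session };
    old.dispose();
  }

  dispose(): void {
    if (this.saveTimer !== undefined) clearTimeout(this.saveTimer);
    this.saveTimer = undefined;
  }
}
```

The key property is that `reconcileFrom` both copies forward the flags that suppress repeated side effects (offset application, toasts) and cancels the old instance's pending save, so the stale snapshot can never be written after the swap.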


const urlString = toUrlString(input);
const family = getModelFamilyFromUrl(urlString);
@@ -1474,6 +1473,26 @@ export const createAntigravityPlugin = (providerId: string) => async (
debugLines.push(line);
};
pushDebug(`request=${urlString}`);
let reloadedProviderState = false;
const reloadProviderState = async (reason: string): Promise<boolean> => {
if (reloadedProviderState) {
return false;
}
const reloadedAuth = await getAuth();
if (!isOAuthAuth(reloadedAuth)) {
return false;
}
reloadedProviderState = true;
accountManager = await AccountManager.loadFromDisk(reloadedAuth);
activeAccountManager = accountManager;
rateLimitStateByAccountQuota.clear();
emptyResponseAttempts.clear();
accountFailureState.clear();
rateLimitToastShown = false;
softQuotaToastShown = false;
Comment on lines +1488 to +1492
Contributor

**P2: Clearing shared state affects concurrent in-flight requests**

`rateLimitStateByAccountQuota`, `emptyResponseAttempts`, `accountFailureState`, `rateLimitToastShown`, and `softQuotaToastShown` are all closure-level variables shared across every concurrent request in the same plugin instance. When one request triggers `reloadProviderState`, these maps are cleared globally, which means any other request that is mid-flight at the same moment loses its accumulated per-attempt tracking (e.g. backoff state, empty-response counters).

In practice the reload path is rare and the worst case is that a concurrent request retries against a previously-known rate-limited account once before re-discovering the limit. This is unlikely to be noticeable, but it's worth noting that the clear is not scoped to the triggering request.

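One way to scope the clear, sketched here under assumed names rather than the plugin's real ones, is to group the retry-tracking maps into an "epoch" object that each request captures once at the start of `fetch`. A reload then swaps in a fresh epoch for new requests without mutating the one in-flight requests hold:

```typescript
// Retry-tracking state grouped into one "epoch" object. A reload replaces
// currentEpoch for requests that start afterwards; in-flight requests keep
// the reference they captured, so their counters are never cleared under them.
interface Epoch {
  rateLimitState: Map<string, number>;
  failureState: Map<string, number>;
}

let currentEpoch: Epoch = {
  rateLimitState: new Map(),
  failureState: new Map(),
};

// Each request captures the epoch once, at the top of its fetch handler.
function beginRequest(): Epoch {
  return currentEpoch;
}

// Called from the reload path instead of clearing the shared maps in place.
function reloadProviderState(): void {
  currentEpoch = { rateLimitState: new Map(), failureState: new Map() };
}
```

This trades a small amount of memory (the old epoch lives until its last in-flight request finishes) for the guarantee that a reload never erases another request's backoff or empty-response counters mid-attempt.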

pushDebug(`reload-provider-state reason=${reason} accounts=${accountManager.getAccountCount()}`);
return true;
};

type FailureContext = {
response: Response;
@@ -1554,6 +1573,9 @@ export const createAntigravityPlugin = (providerId: string) => async (
} = routingDecision;

if (accountCount === 0) {
if (await reloadProviderState("no-accounts")) {
continue;
}
throw new Error("No Antigravity accounts available. Run `opencode auth login`.");
}

@@ -1646,6 +1668,9 @@ export const createAntigravityPlugin = (providerId: string) => async (
// 0 means disabled (wait indefinitely)
const maxWaitMs = (config.max_rate_limit_wait_seconds ?? 300) * 1000;
if (maxWaitMs > 0 && waitMs > maxWaitMs) {
if (await reloadProviderState(`all-rate-limited:${family}`)) {
continue;
}
const waitTimeFormatted = formatWaitTime(waitMs);
await showToast(
`Rate limited for ${waitTimeFormatted}. Try again later or add another account.`,
@@ -1827,6 +1852,7 @@ export const createAntigravityPlugin = (providerId: string) => async (
const warmupUrl = toWarmupStreamUrl(prepared.request);
const warmupHeaders = new Headers(prepared.init.headers ?? {});
warmupHeaders.set("accept", "text/event-stream");
warmupHeaders.delete("x-goog-api-key");

const warmupInit: RequestInit = {
...prepared.init,
@@ -1979,16 +2005,23 @@ export const createAntigravityPlugin = (providerId: string) => async (
},
);

const requestHeaders = new Headers(prepared.init.headers ?? {});
requestHeaders.delete("x-goog-api-key");
const requestInit: RequestInit = {
...prepared.init,
headers: requestHeaders,
};

const originalUrl = toUrlString(input);
const resolvedUrl = toUrlString(prepared.request);
pushDebug(`endpoint=${currentEndpoint}`);
pushDebug(`resolved=${resolvedUrl}`);
const debugContext = startAntigravityDebugRequest({
originalUrl,
resolvedUrl,
method: prepared.init.method,
headers: prepared.init.headers,
body: prepared.init.body,
method: requestInit.method,
headers: requestInit.headers,
body: requestInit.body,
streaming: prepared.streaming,
projectId: projectContext.effectiveProjectId,
});
@@ -2022,7 +2055,7 @@ export const createAntigravityPlugin = (providerId: string) => async (
tokenConsumed = getTokenTracker().consume(account.index);
}

const response = await fetch(prepared.request, prepared.init);
const response = await fetch(prepared.request, requestInit);
pushDebug(`status=${response.status} ${response.statusText}`);


35 changes: 35 additions & 0 deletions src/plugin/accounts.test.ts
@@ -1,6 +1,7 @@
import { beforeEach, describe, expect, it, vi } from "vitest";

import { AccountManager, type ModelFamily, type HeaderStyle, parseRateLimitReason, calculateBackoffMs, type RateLimitReason, resolveQuotaGroup } from "./accounts";
import { saveAccounts } from "./storage";
import type { AccountStorageV4 } from "./storage";
import type { OAuthAuthDetails } from "./types";

@@ -255,6 +256,40 @@ describe("AccountManager", () => {
expect(snapshot[1]?.expires).toBe(123);
});

it("saveToDisk preserves empty rateLimitResetTimes so stale limits can be cleared", async () => {
const stored: AccountStorageV4 = {
version: 4,
accounts: [
{
refreshToken: "r1",
projectId: "p1",
addedAt: 1,
lastUsed: 0,
rateLimitResetTimes: { claude: 123456 },
},
],
activeIndex: 0,
};

const manager = new AccountManager(undefined, stored);
const account = manager.getAccounts()[0];
if (!account) throw new Error("Account not found");

account.rateLimitResetTimes = {};
await manager.saveToDisk();

expect(vi.mocked(saveAccounts)).toHaveBeenCalledWith(
expect.objectContaining({
accounts: [
expect.objectContaining({
refreshToken: "r1",
rateLimitResetTimes: {},
}),
],
}),
);
});

it("debounces toast display for same account", () => {
vi.useFakeTimers();
vi.setSystemTime(new Date(0));
2 changes: 1 addition & 1 deletion src/plugin/accounts.ts
@@ -1003,7 +1003,7 @@ export class AccountManager {
lastUsed: a.lastUsed,
enabled: a.enabled,
lastSwitchReason: a.lastSwitchReason,
rateLimitResetTimes: Object.keys(a.rateLimitResetTimes).length > 0 ? a.rateLimitResetTimes : undefined,
rateLimitResetTimes: { ...a.rateLimitResetTimes },
Contributor

⚠️ Potential issue | 🟠 Major

**Don't treat every empty `rateLimitResetTimes` map as an explicit clear.**

On line 1006, every save from an account with no in-memory limits now emits `{}`. `mergeAccountStorage()` treats `{}` as "delete the stored limits", so a stale writer can wipe a fresher rate-limit record from another session/process. A simple case: session A still has `{}` in memory, session B persists a fresh 429 window, then A's delayed `saveToDisk()` clears B's limit again. This sentinel needs to be emitted only when this instance actually cleared limits, not for every empty map.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `src/plugin/accounts.ts` at line 1006, `mergeAccountStorage` treats an empty object for `rateLimitResetTimes` as an explicit delete, but the save path now always emits `{}` for empty in-memory maps. Change the persistence so it only writes the `rateLimitResetTimes` property when this instance actually cleared limits or when the map is non-empty: in the save path that currently sets `rateLimitResetTimes: { ...a.rateLimitResetTimes }`, omit the property if the map is empty (or write the explicit cleared sentinel only when an explicit-clear flag is set), and ensure `saveToDisk()` / `mergeAccountStorage()` use that presence/flag to perform deletes.
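The conditional write described above could look roughly like this. The `explicitlyClearedLimits` flag is an assumption; the real plugin would set it wherever limits are deliberately deleted in memory, and reset it after a successful save.

```typescript
// Persisted shape mirrors the diff; `explicitlyClearedLimits` is an assumed
// flag that the in-memory clear path would set.
interface AccountLimitsState {
  rateLimitResetTimes: Record<string, number>;
  explicitlyClearedLimits: boolean;
}

// Emit the `{}` "delete stored limits" sentinel only when this instance
// actually cleared limits; otherwise omit the field so mergeAccountStorage
// preserves whatever another session persisted.
function serializeRateLimits(
  a: AccountLimitsState,
): Record<string, number> | undefined {
  if (Object.keys(a.rateLimitResetTimes).length > 0) {
    return { ...a.rateLimitResetTimes };
  }
  return a.explicitlyClearedLimits ? {} : undefined;
}
```

With this split, a session that merely never observed any limits writes nothing (so a concurrent session's fresh 429 window survives the merge), while a session that actively cleared a stale limit still propagates that clear.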

coolingDownUntil: a.coolingDownUntil,
cooldownReason: a.cooldownReason,
fingerprint: a.fingerprint,
67 changes: 67 additions & 0 deletions src/plugin/storage.test.ts
@@ -1,10 +1,12 @@
import { describe, expect, it, vi, beforeEach } from "vitest";
import {
deduplicateAccountsByEmail,
mergeAccountStorage,
migrateV2ToV3,
loadAccounts,
type AccountMetadata,
type AccountStorage,
type AccountStorageV4,
} from "./storage";
import { promises as fs } from "node:fs";
import {
@@ -209,6 +211,71 @@ describe("deduplicateAccountsByEmail", () => {
});
});

describe("mergeAccountStorage", () => {
it("lets an empty incoming rateLimitResetTimes clear stale stored limits", () => {
const existing: AccountStorageV4 = {
version: 4,
accounts: [
{
refreshToken: "r1",
projectId: "p1",
addedAt: 1,
lastUsed: 10,
rateLimitResetTimes: { claude: 123456 },
},
],
activeIndex: 0,
};
const incoming: AccountStorageV4 = {
version: 4,
accounts: [
{
refreshToken: "r1",
projectId: "p1",
addedAt: 1,
lastUsed: 20,
rateLimitResetTimes: {},
},
],
activeIndex: 0,
};

const merged = mergeAccountStorage(existing, incoming);
expect(merged.accounts[0]?.rateLimitResetTimes).toBeUndefined();
});

it("preserves existing limits when incoming storage does not specify them", () => {
const existing: AccountStorageV4 = {
version: 4,
accounts: [
{
refreshToken: "r1",
projectId: "p1",
addedAt: 1,
lastUsed: 10,
rateLimitResetTimes: { claude: 123456 },
},
],
activeIndex: 0,
};
const incoming: AccountStorageV4 = {
version: 4,
accounts: [
{
refreshToken: "r1",
projectId: "p1",
addedAt: 1,
lastUsed: 20,
},
],
activeIndex: 0,
};

const merged = mergeAccountStorage(existing, incoming);
expect(merged.accounts[0]?.rateLimitResetTimes).toEqual({ claude: 123456 });
});
});

vi.mock("node:fs", async () => {
const actual = await vi.importActual<typeof import("node:fs")>("node:fs");
return {
16 changes: 11 additions & 5 deletions src/plugin/storage.ts
@@ -399,7 +399,7 @@ async function withFileLock<T>(path: string, fn: () => Promise<T>): Promise<T> {
}
}

function mergeAccountStorage(
export function mergeAccountStorage(
existing: AccountStorageV4,
incoming: AccountStorageV4,
): AccountStorageV4 {
@@ -415,16 +415,22 @@ function mergeAccountStorage(
if (acc.refreshToken) {
const existingAcc = accountMap.get(acc.refreshToken);
if (existingAcc) {
const incomingRateLimitResetTimes = acc.rateLimitResetTimes;
const mergedRateLimitResetTimes = incomingRateLimitResetTimes === undefined
? existingAcc.rateLimitResetTimes
: Object.keys(incomingRateLimitResetTimes).length === 0
? undefined
: {
...existingAcc.rateLimitResetTimes,
...incomingRateLimitResetTimes,
};
accountMap.set(acc.refreshToken, {
...existingAcc,
...acc,
// Preserve manually configured projectId/managedProjectId if not in incoming
projectId: acc.projectId ?? existingAcc.projectId,
managedProjectId: acc.managedProjectId ?? existingAcc.managedProjectId,
rateLimitResetTimes: {
...existingAcc.rateLimitResetTimes,
...acc.rateLimitResetTimes,
},
rateLimitResetTimes: mergedRateLimitResetTimes,
lastUsed: Math.max(existingAcc.lastUsed || 0, acc.lastUsed || 0),
});
} else {