Commit 51783a4

test(integration): use page.clock for proactive token-refresh tests
The MemoryTokenCache cross-tab tests assumed JWT TTL = 60s and waited 50s of wall-clock time for the proactive refresh timer to fire. The test instances now issue 300s tokens, so the timer is scheduled at ~283s (TTL - 15s leeway - 2s lead) and the 50s window never reaches it: refreshRequests.length stays at 0 and the assertion fails.

Replace the wall-clock waits with `page.clock.install()` + `clock.fastForward('5:00')` so the timer can be fired deterministically regardless of the instance's actual TTL, and so the tests no longer take ~50s of CI runtime each.

For the single-session test we sequence the fast-forwards (tab1 first, then tab2 after the broadcast has had a chance to clear tab2's pending timer) so we still exercise the BroadcastChannel-based dedup. For the multi-session test we fast-forward both tabs in parallel, because different sessions have different tokenIds and don't dedupe.
1 parent 4bbebcf commit 51783a4
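The timing arithmetic in the commit message can be made concrete with a small sketch. The helper name and default parameters below are illustrative, built only from the numbers quoted above (15s leeway, 2s lead), not taken from the codebase:

```typescript
// Fire time of the proactive-refresh timer, per the commit message:
// TTL minus a 15s leeway minus a 2s lead. Illustrative helper, not
// an actual function from the codebase.
function refreshFireTimeSeconds(ttlSeconds: number, leewaySeconds = 15, leadSeconds = 2): number {
  return ttlSeconds - leewaySeconds - leadSeconds;
}

// With the old 60s TTL the timer fired at 43s, inside the 50s wait window.
const oldFire = refreshFireTimeSeconds(60); // 43
// With 300s tokens it fires at 283s, far past the 50s wall-clock wait,
// so the assertion on refreshRequests.length saw 0 requests.
const newFire = refreshFireTimeSeconds(300); // 283
```

This is why the fix fast-forwards a generous '5:00' of virtual time rather than hardcoding any particular fire time.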

2 files changed: 47 additions & 23 deletions

integration/tests/session-token-cache/multi-session.test.ts

Lines changed: 16 additions & 7 deletions
@@ -247,9 +247,13 @@ testAgainstRunningApps({ withEnv: [appConfigs.envs.withSessionTasks] })(
      * deterministic.
      */
     test('multi-session scheduled refreshes produce one request per session', async ({ context }) => {
-      test.setTimeout(90_000);
-
       const page1 = await context.newPage();
+
+      // Install a virtual clock so the proactive-refresh timer can be
+      // fast-forwarded without depending on the instance's JWT TTL or
+      // waiting wall-clock time.
+      await page1.clock.install();
       await page1.goto(app.serverUrl);
       await page1.waitForFunction(() => (window as any).Clerk?.loaded);

@@ -265,6 +269,7 @@ testAgainstRunningApps({ withEnv: [appConfigs.envs.withSessionTasks] })(
       expect(user1SessionId).toBeDefined();

       const page2 = await context.newPage();
+      await page2.clock.install();
       await page2.goto(app.serverUrl);
       await page2.waitForFunction(() => (window as any).Clerk?.loaded);

@@ -306,13 +311,17 @@ testAgainstRunningApps({ withEnv: [appConfigs.envs.withSessionTasks] })(
         await route.continue();
       });

-      // Wait for proactive refresh timers to fire.
-      // Default token TTL is 60s; onRefresh fires at 60 - 15 - 2 = 43s from iat.
-      // Uses page.evaluate to avoid the global actionTimeout (10s) capping the wait.
-      await page1.evaluate(() => new Promise(resolve => setTimeout(resolve, 50_000)));
+      // Fast-forward both tabs past their refresh fire times. The two tabs
+      // hold different sessions and therefore different tokenIds, so
+      // BroadcastChannel does NOT deduplicate; each tab's timer fires its
+      // own /tokens request.
+      await Promise.all([page1.clock.fastForward('5:00'), page2.clock.fastForward('5:00')]);
+
+      // Real-time wait so both network round-trips complete.
+      // eslint-disable-next-line playwright/no-wait-for-timeout
+      await page1.waitForTimeout(1000);

       // Two different sessions should each produce exactly one refresh request.
-      // BroadcastChannel deduplication is per-tokenId, so different sessions refresh independently.
       expect(refreshRequests.length).toBe(2);

       const refreshedSessionIds = new Set(refreshRequests.map(r => r.sessionId));
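For readers unfamiliar with virtual clocks, the property these tests rely on is that fast-forwarding deterministically fires any pending timer whose deadline falls inside the skipped span, no matter how far away that deadline is in wall-clock terms. A toy model of that behavior (this is an illustration, not Playwright's `page.clock` implementation):

```typescript
// Toy virtual clock: fastForward fires every pending timer whose deadline
// lies within the skipped window, in deadline order. Illustrative model
// only; Playwright's real clock patches Date, setTimeout, etc. in-page.
class ToyClock {
  private now = 0;
  private timers: { at: number; fn: () => void }[] = [];

  setTimeout(fn: () => void, ms: number): void {
    this.timers.push({ at: this.now + ms, fn });
  }

  fastForward(ms: number): void {
    const target = this.now + ms;
    for (;;) {
      // Earliest timer due within the window, allowing callbacks to reschedule.
      const due = this.timers.filter(t => t.at <= target).sort((a, b) => a.at - b.at)[0];
      if (!due) break;
      this.timers.splice(this.timers.indexOf(due), 1);
      this.now = due.at;
      due.fn();
    }
    this.now = target;
  }
}

// Hypothetical proactive-refresh schedule: fire at TTL - 15s leeway - 2s lead.
const clock = new ToyClock();
let refreshes = 0;
const ttlSeconds = 300; // the instance now issues 300s tokens
clock.setTimeout(() => refreshes++, (ttlSeconds - 15 - 2) * 1000);

clock.fastForward(50_000); // the old 50s wait: the 283s timer never fires
const afterOldWait = refreshes; // 0

clock.fastForward(5 * 60 * 1000); // fastForward('5:00'): sweeps past 283s
const afterFastForward = refreshes; // 1
```

The same model explains the single-session sequencing: if a broadcast handler cancels tab2's timer before tab2 is fast-forwarded, no second request can fire.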

integration/tests/session-token-cache/single-session.test.ts

Lines changed: 31 additions & 16 deletions
@@ -131,18 +131,25 @@ testAgainstRunningApps({ withEnv: [appConfigs.envs.withEmailCodes] })(

     /**
      * Test Flow:
-     * 1. Open two tabs with the same browser context (shared cookies)
+     * 1. Open two tabs with a virtual clock installed, same browser context
      * 2. Sign in on tab1, reload tab2 to pick up the session
      * 3. Both tabs hydrate their token cache with the session token
-     * 4. Start counting /tokens requests, then wait for the timers to fire
+     * 4. Start counting /tokens requests, then fast-forward virtual time
+     *    past the proactive-refresh fire time
      * 5. Assert only 1 /tokens request was made (not 2)
      */
     test('multi-tab scheduled refreshes are deduped to a single request', async ({ context }) => {
-      test.setTimeout(90_000);
-
       const page1 = await context.newPage();
       const page2 = await context.newPage();

+      // Install a virtual clock on each page before any Clerk code runs so
+      // we can fast-forward the proactive-refresh timer without waiting
+      // wall-clock time. Each tab's onRefresh fires at TTL - 15s leeway - 2s
+      // lead; the actual TTL depends on the instance's JWT settings, so we
+      // fast-forward generously rather than hardcoding 43s.
+      await page1.clock.install();
+      await page2.clock.install();
+
       await page1.goto(app.serverUrl);
       await page2.goto(app.serverUrl);

@@ -167,28 +174,36 @@ testAgainstRunningApps({ withEnv: [appConfigs.envs.withEmailCodes] })(
       await u2.po.expect.toBeSignedIn();

       // Both tabs are now signed in and have hydrated their token caches
-      // via Session constructor -> #hydrateCache, each with an independent
-      // onRefresh timer that fires at ~43s (TTL 60s - 15s leeway - 2s lead).
+      // via Session constructor -> #hydrateCache, each with its own onRefresh
+      // timer scheduled at ~TTL - 17s.
       // Start counting /tokens requests from this point.
       const refreshRequests: string[] = [];
       await context.route('**/v1/client/sessions/*/tokens*', async route => {
         refreshRequests.push(route.request().url());
         await route.continue();
       });

-      // Wait for proactive refresh timers to fire.
-      // Default token TTL is 60s; onRefresh fires at 60 - 15 - 2 = 43s from iat.
-      // We wait 50s to give comfortable buffer, this includes the broadcast delay.
-      //
-      // Uses page.evaluate instead of page.waitForTimeout to avoid
-      // the global actionTimeout (10s) silently capping the wait.
-      await page1.evaluate(() => new Promise(resolve => setTimeout(resolve, 50_000)));
+      // Fast-forward tab1 past its refresh fire time. Tab1's timer fires,
+      // its /tokens request goes out (real network), and on success it
+      // broadcasts the new token to tab2.
+      await page1.clock.fastForward('5:00');
+
+      // Real-time wait: lets the network round-trip complete and the
+      // BroadcastChannel message reach tab2's handler, which clears tab2's
+      // pending refresh timer.
+      // eslint-disable-next-line playwright/no-wait-for-timeout
+      await page2.waitForTimeout(1000);
+
+      // Now fast-forward tab2 past where its timer would have fired. Because
+      // the broadcast already cleared its refresh timer, no /tokens request
+      // should fire from tab2.
+      await page2.clock.fastForward('5:00');

-      // Only one tab should have made a /tokens request; the other tab should have
-      // received the refreshed token via BroadcastChannel.
+      // Only tab1 should have made a /tokens request; tab2 got the refreshed
+      // token via BroadcastChannel.
       expect(refreshRequests.length).toBe(1);

-      // Both tabs should still have valid tokens after the refresh cycle
+      // Both tabs should still have valid, identical tokens after the refresh.
       const [page1Token, page2Token] = await Promise.all([
         page1.evaluate(() => (window as any).Clerk.session?.getToken()),
         page2.evaluate(() => (window as any).Clerk.session?.getToken()),
