Playback performance and sync #1595

Closed
richiemcilroy wants to merge 349 commits into main from cursor/playback-performance-and-sync-dec3

Conversation


@richiemcilroy richiemcilroy commented Feb 13, 2026

Refactor audio playback to a low-latency streaming path, optimize seeking logic, and reduce decoder initialization overhead.

This addresses user reports of audio lag and delayed startup, improves scrubbing responsiveness, and aims for smoother 60fps playback with better A/V synchronization across all supported platforms.




Note

High Risk
Touches core playback/audio timing, seek handling, and prefetch concurrency; regressions could manifest as A/V desync, stalls, or incorrect frames under rapid seeking across platforms.

Overview
Editor playback now supports live seeking while playing: the Tauri seek_to/set_playhead_position commands only emit state on real changes and forward seeks to the active PlaybackHandle, while the timeline UI batches and coalesces drag seeks via requestAnimationFrame to avoid command storms.

The playback runtime is refactored to add a seek channel with generation-based invalidation, keyed BTreeMap prefetch buffering, dynamic FPS-scaled prefetch/timeout/skip tuning, and startup/seek telemetry; audio playback defaults to the streaming buffer path with prerendered mode gated by CAP_AUDIO_PRERENDER_PLAYBACK, and AudioPlaybackBuffer is enabled cross-platform (including Windows).
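The generation-based invalidation plus keyed BTreeMap buffering described above can be sketched in a few lines of Rust. This is a hypothetical illustration, not Cap's actual types: a seek bumps a generation counter, and prefetched frames tagged with an older generation are discarded instead of rendered.

```rust
use std::collections::BTreeMap;

// Illustrative sketch only; names do not match the real playback.rs types.
struct PrefetchBuffer {
    generation: u64,
    frames: BTreeMap<(u64, u32), Vec<u8>>, // (generation, frame_number) -> decoded bytes
}

impl PrefetchBuffer {
    fn new() -> Self {
        Self { generation: 0, frames: BTreeMap::new() }
    }

    // A seek bumps the generation and drops everything buffered for the
    // old playhead position.
    fn seek(&mut self) {
        self.generation += 1;
        self.frames.clear();
    }

    // The prefetch task tags each frame with the generation it observed
    // when decoding started; frames from a pre-seek generation are rejected.
    fn insert(&mut self, generation: u64, frame: u32, data: Vec<u8>) -> bool {
        if generation != self.generation {
            return false; // stale frame from before the seek
        }
        self.frames.insert((generation, frame), data);
        true
    }

    fn take(&mut self, frame: u32) -> Option<Vec<u8>> {
        self.frames.remove(&(self.generation, frame))
    }
}
```

Keying the map by (generation, frame) also keeps buffered frames ordered, which is presumably what motivated replacing the old VecDeque with a BTreeMap.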

Benchmarking is expanded with new scrub + startup metrics, optional JSON output for playback-test-runner and decode-benchmark (incl. fragmented input support), plus new docs/runbook and scripts to aggregate, validate, finalize, publish, analyze, and baseline-compare playback benchmark matrix results.

Written by Cursor Bugbot for commit 0cbac24. This will update automatically on new commits.

Greptile Overview

Greptile Summary

Refactors playback to use live seeking without stop/restart, switching from VecDeque to BTreeMap prefetch buffer with generation-based concurrency control to discard stale frames after seeks. Audio switches to low-latency streaming by default (prerender mode behind CAP_AUDIO_PRERENDER_PLAYBACK env var). Adds dynamic prefetch window tuning based on FPS, seek coalescing in Timeline UI via RAF batching, and telemetry for first-render and seek-settle latency.

Key improvements:

  • Eliminated playback stop/restart on seek - PlaybackHandle.seek() updates running loop via watch channel
  • Generation tagging prevents race where stale prefetched frames from old seek positions corrupt playback
  • Frontend Timeline coalesces rapid seeks with scheduleSeek + RAF to reduce IPC churn
  • Tauri commands avoid no-op state emissions and forward seeks directly to active handle
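The coalescing idea from the bullets above (the real implementation is TypeScript using requestAnimationFrame) can be illustrated with a minimal Rust sketch: drain every queued seek request per tick and forward only the most recent one, so the backend sees one command per frame tick rather than one per mouse event. Names here are hypothetical.

```rust
use std::sync::mpsc;

// Drain all pending seek requests and keep only the latest; earlier
// requests are superseded, which is exactly what RAF batching achieves
// in the Timeline UI.
fn coalesce_pending_seeks(rx: &mpsc::Receiver<u32>) -> Option<u32> {
    let mut latest = None;
    while let Ok(frame) = rx.try_recv() {
        latest = Some(frame); // later requests win
    }
    latest
}
```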

Issues found:

  • Lock acquisition failures in is_in_flight check (playback.rs:795) silently return false, triggering redundant decode
  • Timeline seek deduplication (index.tsx:284) blocks retry after failed seek because lastCompletedSeekFrame only updates on success
  • Audio resampler reset() (audio.rs:441) swallows errors, leaving stale state that causes sync drift
  • Dynamic skip threshold reduction during sustained lag may over-skip keyframes, worsening recovery

Confidence Score: 2/5

  • High risk - core playback timing and concurrency changes with multiple race conditions and error-swallowing paths that can cause A/V desync or frame corruption under load
  • Score reflects three critical logic bugs (redundant decode race, seek retry blocking, audio reset failure) plus complex generation-based concurrency that hasn't been battle-tested across platforms. The refactor touches ~800 lines in playback loop with new BTreeMap buffer, dual in-flight tracking, and dynamic threshold tuning - all high-risk areas for regressions in frame correctness and A/V sync
  • Pay close attention to crates/editor/src/playback.rs (concurrency/race conditions), apps/desktop/src/routes/editor/Timeline/index.tsx (seek retry logic), and crates/editor/src/audio.rs (error handling)

Important Files Changed

  • crates/editor/src/playback.rs: Major refactor adds live seeking with generation-based concurrency, dynamic prefetch tuning, BTreeMap buffer, and telemetry; high complexity with race condition risks
  • crates/editor/src/audio.rs: Adds Windows-specific set_playhead_smooth drift tolerance and safer reset() error handling; low-risk audio changes
  • apps/desktop/src-tauri/src/lib.rs: Optimizes seek_to and set_playhead_position to avoid no-op state emissions and forward seeks to the active PlaybackHandle
  • apps/desktop/src/routes/editor/Timeline/index.tsx: Adds RAF-based seek coalescing and removes stop/restart on seek during playback; simpler scrubbing path with de-duplication

Sequence Diagram

sequenceDiagram
    participant UI as Timeline UI
    participant Tauri as Tauri Commands
    participant PH as PlaybackHandle
    participant PL as Playback Loop
    participant PF as Prefetch Task
    participant AU as Audio Thread
    
    UI->>Tauri: seekTo(frame)
    Tauri->>Tauri: Check state changed
    alt State Changed
        Tauri->>Tauri: Emit state update
        Tauri->>PH: seek(frame)
        PH->>PL: seek_rx.send(frame)
        PL->>PL: Increment seek_generation
        PL->>PL: Clear prefetch_buffer & frame_cache
        PL->>PF: Send new generation via seek_generation_tx
        PL->>AU: Update audio_playhead_tx
        PF->>PF: Drop stale frames (old generation)
        PF->>PF: Reset prefetch from new frame
        PF->>PL: Send new frames with generation tag
        PL->>PL: Filter by generation, render frames
    end
    
    Note over UI,AU: Live seeking without stop/restart
    
    UI->>UI: scheduleSeek coalescing
    UI->>Tauri: Batched IPC call
    Note over PL,PF: Generation-aware concurrency prevents stale frames

Last reviewed commit: 0cbac24


cursor bot commented Feb 13, 2026

Cursor Agent can help with this pull request. Just @cursor in comments and I'll start working on changes in this branch.
Learn more about Cursor Agents

richiemcilroy and others added 29 commits February 13, 2026 22:26
Co-authored-by: Richie McIlroy <richiemcilroy@users.noreply.github.com>
richiemcilroy and others added 23 commits February 14, 2026 00:14
Co-authored-by: Richie McIlroy <richiemcilroy@users.noreply.github.com>
}
}

if buffered_wait_prefetch_changed {

Small perf nit: if a seek came in during the wait loop, you continue right after this and the seek handler clears prefetch_buffer anyway, so you can bail before trimming.

Suggested change:

    - if buffered_wait_prefetch_changed {
    + if seek_rx.has_changed().unwrap_or(false) {
    +     continue;
    + }
    + if buffered_wait_prefetch_changed {
    +     trim_prefetch_buffer(&mut prefetch_buffer, frame_number);
    + }

@cursor cursor bot left a comment

Cursor Bugbot has reviewed your changes and found 3 potential issues.


frameNumber === pendingSeekFrame ||
frameNumber === inFlightSeekFrame ||
frameNumber === lastCompletedSeekFrame
) {

Seek dedupe blocks valid repeat seeks

Medium Severity

scheduleSeek drops requests when frameNumber === lastCompletedSeekFrame. Because lastCompletedSeekFrame is never cleared on normal playback progress, a later seek back to a previously completed frame can be ignored even after the playhead moved away. This makes timeline scrubbing intermittently no-op in Timeline/index.tsx.

Additional Locations (1)
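One possible shape of a fix, sketched in Rust rather than the TypeScript of Timeline/index.tsx (all names hypothetical): keep the last-completed dedupe, but invalidate it as soon as playback renders a different frame, so seeking back to an old target is no longer silently dropped.

```rust
// Hypothetical dedupe helper; illustrates invalidating the remembered
// frame once the playhead moves off it.
struct SeekDedupe {
    last_completed: Option<u32>,
}

impl SeekDedupe {
    fn new() -> Self {
        Self { last_completed: None }
    }

    // Only suppress a seek if it targets the frame we just completed
    // and the playhead has not moved since.
    fn should_send(&self, frame: u32) -> bool {
        self.last_completed != Some(frame)
    }

    fn on_seek_completed(&mut self, frame: u32) {
        self.last_completed = Some(frame);
    }

    // Called as playback advances; once we render any other frame, a
    // repeat seek to the old target is meaningful again.
    fn on_playback_frame(&mut self, frame: u32) {
        if self.last_completed != Some(frame) {
            self.last_completed = None;
        }
    }
}
```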


*current_frame = frame_number;
true
}
});

Repeated same-frame seeks are ignored

Medium Severity

PlaybackHandle::seek uses seek_tx.send_if_modified, so a seek to the same frame as the last requested seek is dropped. Because seek_tx is never updated as playback advances, later valid seeks back to that frame can be ignored even after the playhead moved away.
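A minimal sketch of one mitigation (hypothetical names, not the actual PlaybackHandle API): tag every seek with a monotonically increasing sequence number, so a compare-before-send channel like send_if_modified never sees two identical values even when the target frame repeats.

```rust
// Each seek becomes a unique (sequence, frame) pair, so value-equality
// deduplication in the channel can no longer swallow a repeat seek to
// the same frame.
struct TaggedSeeks {
    next_seq: u64,
}

impl TaggedSeeks {
    fn new() -> Self {
        Self { next_seq: 0 }
    }

    fn tag(&mut self, frame: u32) -> (u64, u32) {
        let seq = self.next_seq;
        self.next_seq += 1;
        (seq, frame)
    }
}
```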


if (zoomRafId !== null) cancelAnimationFrame(zoomRafId);
if (scrollRafId !== null) cancelAnimationFrame(scrollRafId);
if (seekRafId !== null) cancelAnimationFrame(seekRafId);
});

Unmount doesn’t cancel pending seek pipeline

Low Severity

onCleanup only cancels current RAF IDs, but it does not invalidate flushPendingSeek while an async commands.seekTo is in flight. After unmount, the finally block can schedule another RAF and continue issuing seeks from a disposed Timeline instance.

Additional Locations (1)
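A minimal sketch of the missing guard (hypothetical names; the real code is TypeScript): share a disposed flag between the component and the async completion path, set it on unmount, and check it before scheduling any follow-up work.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

// Illustrative only: the Rust analogue of a "disposed" flag checked in
// the `finally` path after an async seek resolves.
struct SeekPipeline {
    disposed: Arc<AtomicBool>,
}

impl SeekPipeline {
    fn new() -> Self {
        Self { disposed: Arc::new(AtomicBool::new(false)) }
    }

    // Equivalent of onCleanup on unmount: mark the pipeline dead.
    fn dispose(&self) {
        self.disposed.store(true, Ordering::SeqCst);
    }

    // Returns whether it is still safe to schedule the next flush.
    fn try_schedule_next(&self) -> bool {
        !self.disposed.load(Ordering::SeqCst)
    }
}
```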


@cursor cursor bot left a comment

Cursor Bugbot has reviewed your changes and found 2 potential issues.



if let Some(handle) = playback_handle {
handle.seek(frame_number);
}

Seek dedupe uses stale state

Medium Severity

seek_to and set_playhead_position now skip handle.seek(...) when state.playhead_position already equals frame_number. But playhead_position is not advanced by playback frames, so it can be stale. This drops valid seeks to a previously requested frame while playback has moved, making scrubbing/rewind requests intermittently ignored.

Additional Locations (1)


}
}

if let Err(error) = fs::write(&combined_path, &combined_data) {

Fragment merge can exhaust memory

Medium Severity

get_fragmented_video_duration reads init.mp4 and every .m4s segment into one in-memory Vec before writing a temp file. Large fragmented recordings can allocate very large buffers and terminate the benchmark process with OOM, since there is no size guard or streaming append path.
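A streaming alternative might look like the following sketch (function and parameter names are hypothetical, not the benchmark's actual API): copy the init segment and each .m4s fragment into the combined file with io::copy, so peak memory stays at io::copy's internal buffer size instead of the total recording size.

```rust
use std::fs::File;
use std::io::{self, Write};
use std::path::Path;

// Illustrative streaming merge: appends init + segments without ever
// holding more than one copy buffer in memory.
fn merge_fragments(init: &Path, segments: &[&Path], out: &Path) -> io::Result<()> {
    let mut dst = File::create(out)?;
    io::copy(&mut File::open(init)?, &mut dst)?;
    for seg in segments {
        io::copy(&mut File::open(seg)?, &mut dst)?;
    }
    dst.flush()
}
```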


Contributor

@greptile-apps greptile-apps bot left a comment

21 files reviewed, 6 comments

Edit Code Review Agent Settings | Greptile

Comment on lines 795 to +805

            prefetched.segment_index,
        ))
    } else {
    -    let is_in_flight = main_in_flight
    -        .read()
    -        .map(|guard| guard.contains(&frame_number))
    -        .unwrap_or(false);
    +    let in_flight_key = (seek_generation, frame_number);
    +    let is_in_flight = playback_prefetch_in_flight
    +        .read()
    +        .map(|guard| guard.contains(&in_flight_key))
    +        .unwrap_or(false)
    +        || playback_decode_in_flight
    +            .read()
    +            .map(|guard| guard.contains(&in_flight_key))
Contributor

both playback_prefetch_in_flight and playback_decode_in_flight locks checked separately with .unwrap_or(false) fallback - if lock acquisition fails, returns false and playback starts redundant decode even though frame may already be in-flight in prefetch task

(Path: crates/editor/src/playback.rs, lines 795-805)
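One conservative fix, as a hedged sketch (names hypothetical, not the actual playback.rs helpers): treat a failed lock acquisition as "in flight", so the failure mode is waiting for a frame that may already be coming, rather than kicking off a redundant decode.

```rust
use std::collections::HashSet;
use std::sync::RwLock;

// If either in-flight lock cannot be read (e.g. poisoned), assume the
// frame IS in flight instead of defaulting to false.
fn is_in_flight(
    prefetch: &RwLock<HashSet<(u64, u32)>>,
    decode: &RwLock<HashSet<(u64, u32)>>,
    key: (u64, u32),
) -> bool {
    let check = |lock: &RwLock<HashSet<(u64, u32)>>| {
        lock.read()
            .map(|guard| guard.contains(&key))
            .unwrap_or(true) // lock failure: be conservative
    };
    check(prefetch) || check(decode)
}
```

The trade-off: a persistently poisoned lock would stall rather than over-decode, so this likely also wants a timeout on the waiting side.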

Comment on lines +280 to +292
function scheduleSeek(frameNumber: number) {
if (
frameNumber === pendingSeekFrame ||
frameNumber === inFlightSeekFrame ||
frameNumber === lastCompletedSeekFrame
) {
return;
}
pendingSeekFrame = frameNumber;
if (seekRafId === null) {
seekRafId = requestAnimationFrame(flushPendingSeek);
}
}
Contributor

deduplicates seeks against lastCompletedSeekFrame which is only set on success (line 311) - if seekTo throws (line 313), lastCompletedSeekFrame stays stale, and retrying the same frame later gets incorrectly rejected as duplicate on line 284

(Path: apps/desktop/src/routes/editor/Timeline/index.tsx, lines 280-292)

Comment on lines 439 to 443

    pub fn reset(&mut self) {
    -    *self = Self::new(self.output).unwrap();
    +    if let Ok(resampler) = Self::new(self.output) {
    +        *self = resampler;
    +    }
    }
Contributor

Self::new() can fail and return Err, but error is silently ignored - resampler stays in old state, causing audio sync drift or corruption after seek when sample rate/channel config changes

(Path: crates/editor/src/audio.rs, lines 439-443)
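A sketch of the Result-propagating shape the comment is asking for. The Resampler below is a stand-in with a fallible constructor, not the real cap audio type: the point is that reset surfaces the error instead of silently keeping the stale state.

```rust
// Hypothetical stand-in for the audio resampler; `rate` plays the role
// of the output config the real constructor can reject.
struct Resampler {
    rate: u32,
}

impl Resampler {
    fn new(rate: u32) -> Result<Self, String> {
        if rate == 0 {
            return Err("invalid sample rate".into());
        }
        Ok(Self { rate })
    }

    // Propagate the construction error so the caller can reinitialize
    // or stop playback, rather than drifting on stale resampler state.
    fn reset(&mut self) -> Result<(), String> {
        *self = Self::new(self.rate)?;
        Ok(())
    }
}
```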

Comment on lines +413 to 416

        scheduled_in_flight_frames.insert(frame_num);
        if let Ok(mut in_flight_guard) = prefetch_in_flight.write() {
    -        in_flight_guard.insert(frame_num);
    +        in_flight_guard.insert((generation, frame_num));
        }
Contributor

tracked in both local scheduled_in_flight_frames HashSet (line 413) and shared prefetch_in_flight RwLock (line 415) - consider using only the local set to reduce contention, since prefetch task already removes from shared set on completion

Note: If this suggestion doesn't match your team's coding style, reply to this and let me know. I'll remember it for next time!

(Path: crates/editor/src/playback.rs, lines 413-416)

Comment on lines +1918 to +1929
let state_changed = {
let state = editor_instance.state.lock().await;
state.playhead_position != frame_number
};

if state_changed {
editor_instance
.modify_and_emit_state(|state| {
state.playhead_position = frame_number;
})
.await;
}
Contributor

locks state twice (lines 1919, 1932) - second lock held during seek() call which may briefly block - consider reading both playhead_position and playback_task in single lock scope to reduce hold time

(Path: apps/desktop/src-tauri/src/lib.rs, lines 1918-1929)

Comment on lines 1073 to 1093
@@ -724,10 +1093,29 @@ impl Playback {
Contributor

dynamic_skip_threshold reduced progressively during sustained lag (late_streak) - this aggressively drops frames when decoder is slow, but may skip over keyframes needed for next GOP, potentially causing further decode delays instead of recovery

(Path: crates/editor/src/playback.rs, lines 1073-1093)
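A keyframe-aware variant of the skip policy could look like this sketch (purely illustrative; the real threshold tuning in playback.rs is more involved): the dynamic threshold still tightens under sustained lag, but a keyframe is always decoded because the next GOP cannot decode without it.

```rust
// Decide whether to skip a late frame. Keyframes are exempt from the
// dynamic threshold so lag recovery never strands the next GOP.
fn should_skip(is_keyframe: bool, lateness_ms: f64, dynamic_threshold_ms: f64) -> bool {
    if is_keyframe {
        return false; // always decode keyframes
    }
    lateness_ms > dynamic_threshold_ms
}
```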

3 participants