
feat: add audio waveform, audio edit, 1.1 zoom #326

Open
PromptEngineer48 wants to merge 2 commits into siddharthvaddem:main from PromptEngineer48:feat/audio-waveform

Conversation


PromptEngineer48 commented Apr 4, 2026

Description

Adds several quality-of-life enhancements to the OpenScreen video editor: a 1.1× zoom level, interactive audio adjustment controls (highpass, compression, treble, loudness) with real-time preview via the Web Audio API, and an audio waveform visualization in the timeline for precision editing.

Motivation

Editors needed finer-grained zoom granularity (1.1×) for subtle motion zooms. They also lacked any audio post-processing tools, making it impossible to clean up low-end rumble, boost treble clarity, or normalize loudness — all common needs when editing screen recordings with narration. The audio waveform gives a visual reference for cuts, making trim decisions far more precise without having to rely purely on the timecode.

Type of Change

  • New Feature
  • Bug Fix
  • Refactor / Code Cleanup
  • Documentation Update
  • Other

Related Issue(s)

N/A

Screenshots / Video

1.1× Zoom Level added to the timeline selector:

The zoom selector now includes a 1.1× option alongside the existing 1.25×, 1.5×, 1.8×, 2.2×, and 3.5× levels.

Audio Adjustments panel in Settings:

A new Audio Adjustments accordion section exposes four draggable sliders:

  • Highpass Filter (Hz) — cuts low-frequency rumble
  • Compression (ratio) — evens out loudness peaks
  • Treble (+dB) — adds clarity/air to the vocal range
  • Loudness Target (LUFS/dB makeup gain) — normalizes perceived volume
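As a rough sketch, the four sliders above map onto Web Audio node parameters along these lines. Field and function names here are assumptions for illustration, not the PR's actual code; the dB-to-linear conversion for the gain node is the standard 10^(dB/20):

```typescript
// Hypothetical AudioSettings shape; field names assumed from the PR summary.
interface AudioSettings {
  highpassHz: number;       // BiquadFilterNode (type "highpass") frequency
  compressionRatio: number; // DynamicsCompressorNode ratio
  trebleDb: number;         // BiquadFilterNode (type "highshelf") gain in dB
  loudnessDb: number;       // GainNode makeup gain, stored in dB
}

// Pure mapping from slider values to node parameter values.
function toNodeParams(s: AudioSettings) {
  return {
    highpassFrequency: s.highpassHz,
    compressorRatio: s.compressionRatio,
    trebleShelfGainDb: s.trebleDb,
    // dB → linear amplitude: 10^(dB/20)
    loudnessLinearGain: Math.pow(10, s.loudnessDb / 20),
  };
}
```

Keeping this mapping pure makes it easy to feed the same values into both the live playback graph and an offline export graph.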

Audio Waveform in Timeline:

A scrollable, zoomable waveform row now sits at the bottom of the timeline editor, synchronized with the current viewport. It renders using canvas and reacts to panning/zooming in real time.
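The peak computation behind such a waveform strip can be sketched as a binning pass over the decoded samples. This is illustrative only: the PR's lib/audioWaveform.ts is described as computing RMS peaks, and max absolute value is used here as a simpler stand-in:

```typescript
// Bin decoded samples into numPeaks bars, normalized to the loudest bar.
function computePeaks(rawData: Float32Array, numPeaks: number): number[] {
  // Clamp bin size to >= 1 so very short buffers don't produce empty bins.
  const samplesPerPeak = Math.max(1, Math.floor(rawData.length / numPeaks));
  const peaks: number[] = [];
  let globalMax = 0;
  for (let i = 0; i < numPeaks; i++) {
    const start = i * samplesPerPeak;
    let max = 0;
    for (let j = start; j < start + samplesPerPeak && j < rawData.length; j++) {
      const v = Math.abs(rawData[j]);
      if (v > max) max = v;
    }
    peaks.push(max);
    if (max > globalMax) globalMax = max;
  }
  // Normalize so the tallest bar fills the row height.
  return globalMax > 0 ? peaks.map((p) => p / globalMax) : peaks;
}
```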

Testing

  1. Open the editor with any video that has audio.
  2. Zoom level: Select a zoom region → open Settings → verify 1.1× appears as the first zoom level option.
  3. Audio Adjustments: Scroll the Settings panel to the bottom → expand "Audio Adjustments" → drag the sliders and play back the video to hear real-time changes.
  4. Waveform: After loading a video, wait ~2s for the waveform to process. It will appear in the AUDIO row at the bottom of the timeline. Pan and zoom the timeline to confirm the waveform stays in sync.
  5. Export: Export an MP4 with non-default audio settings. Verify the exported file reflects the audio adjustments.

Checklist

  • I have performed a self-review of my code.
  • I have added any necessary screenshots or videos.
  • I have linked related issue(s) and updated the changelog if applicable.

Technical Summary (for reviewers)

  • types.ts: Added AudioSettings interface + DEFAULT_AUDIO_SETTINGS
  • projectPersistence.ts: Serialize/deserialize audioSettings in project files
  • useEditorHistory.ts: audioSettings tracked in undo/redo history
  • SettingsPanel.tsx: New "Audio Adjustments" accordion with 4 range sliders
  • VideoPlayback.tsx: Live Web Audio node graph (BiquadFilter → DynamicsCompressor → Gain)
  • audioEncoder.ts: Export pipeline applies the same audio graph during MP4 render
  • timeline/Item.tsx: Added 1.1× to the ZOOM_LABELS map
  • lib/audioWaveform.ts: New: reads the file via electronAPI, decodes with AudioContext.decodeAudioData, computes RMS peaks
  • timeline/AudioWaveform.tsx: New: canvas-based waveform renderer with ResizeObserver + DPR support
  • timeline/TimelineEditor.tsx: Renders <AudioWaveform> at the bottom of the timeline inside the dnd-timeline context


Summary by CodeRabbit

  • New Features

    • Real-time audio controls (highpass, compression, treble, loudness) in the editor
    • Audio waveform visualization on the timeline
    • Audio-aware export pipeline with detailed audio progress reporting
  • Improvements

    • Enhanced zoom options and finer granularity (new 1.1×–5.0× range)
    • Editor persists and restores audio settings with project state
    • HUD overlay now appears in the system taskbar/dock


coderabbitai bot commented Apr 4, 2026

📝 Walkthrough


Adds persistent audio settings, real-time Web Audio processing in playback, canvas waveform generation in the timeline, waveform peak generation for files, audio-aware export processing, UI controls in SettingsPanel, zoom depth adjustments, and makes the HUD overlay window appear in the system taskbar.

Changes

  • Types & Defaults (src/components/video-editor/types.ts): Added AudioSettings and DEFAULT_AUDIO_SETTINGS; extended ZoomDepth to include 7, adjusted ZOOM_DEPTH_SCALES, and changed DEFAULT_ZOOM_DEPTH to 4.
  • Settings UI (src/components/video-editor/SettingsPanel.tsx): Added optional audioSettings props and callbacks (onAudioSettingsChange, onAudioSettingsCommit); new "Audio" accordion with four sliders; zoom grid expanded to 7 columns and labels reordered.
  • Editor State & Persistence (src/components/video-editor/projectPersistence.ts, src/hooks/useEditorHistory.ts, src/components/video-editor/VideoEditor.tsx): Persisted audioSettings in editor/project state, initialized history with defaults, and threaded audioSettings through state snapshots, UI wiring, and save/load flows.
  • Playback Processing (src/components/video-editor/VideoPlayback.tsx): Added an optional audioSettings prop; initializes the Web Audio graph (highpass → compressor → treble → gain) and updates node parameters via setTargetAtTime on settings changes.
  • Waveform UI (src/components/video-editor/timeline/AudioWaveform.tsx, src/components/video-editor/timeline/TimelineEditor.tsx, src/components/video-editor/timeline/Item.tsx): New AudioWaveform component (HiDPI canvas, ResizeObserver, drawing of visible-range bars); TimelineEditor accepts/passes audioPeaks and isAudioWaveformLoading; adjusted zoom labels.
  • Waveform Generation (src/lib/audioWaveform.ts): New generateWaveformPeaks(videoUrl, numPeaks?) supporting local file reads via window.electronAPI.readBinaryFile and HTTP/blob/data fetch; decodes audio and returns normalized peaks.
  • Export Pipeline (src/lib/exporter/audioEncoder.ts, src/lib/exporter/videoExporter.ts, src/lib/exporter/types.ts): AudioProcessor.process(...) now accepts audioSettings and onProgress; added OfflineAudioContext-based processing that applies effects when settings differ from defaults and reports audio progress; VideoExporterConfig threads audioSettings; ExportProgress includes audioProgress.
  • Export Dialog UI (src/components/video-editor/ExportDialog.tsx): Uses progress.audioProgress to compute and display finalizingProgress for the MP4 finalization UI.
  • Electron Window (electron/windows.ts): Changed the HUD overlay BrowserWindow option skipTaskbar from true to false.

Sequence Diagram

sequenceDiagram
    actor User
    participant SettingsPanel
    participant VideoEditor
    participant VideoPlayback
    participant AudioContext as Web Audio Context
    participant VideoExporter

    User->>SettingsPanel: adjust audio sliders
    SettingsPanel->>VideoEditor: onAudioSettingsChange(settings)
    VideoEditor->>VideoEditor: updateState(audioSettings)
    VideoEditor->>VideoPlayback: pass audioSettings
    VideoPlayback->>AudioContext: update nodes (highpass, compressor, treble, gain)
    AudioContext-->>VideoPlayback: audio changes applied (smoothed)

    User->>SettingsPanel: commit audio changes
    SettingsPanel->>VideoEditor: onAudioSettingsCommit(settings)
    VideoEditor->>VideoEditor: commitState + persist project

    User->>VideoEditor: export video
    VideoEditor->>VideoExporter: export(config with audioSettings)
    VideoExporter->>VideoExporter: compare audioSettings vs defaults
    alt audioSettings differ
        VideoExporter->>AudioContext: run OfflineAudioContext pipeline (apply effects)
        AudioContext-->>VideoExporter: processed audio blob (reports audioProgress)
    end
    VideoExporter->>User: final export

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

  • Add webcam mask shape support #288 — Similar cross-cutting editor setting addition that threads a new setting through SettingsPanel, VideoEditor, VideoPlayback, persistence, and export paths.

Suggested reviewers

  • siddharthvaddem

"I nibble peaks in midnight code,
sliders hum where green waves flowed,
compressors, highs, a gentle shove —
export sings the tweaks I love. 🐇🎶"

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 8.33%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions that are missing them.

✅ Passed checks (2 passed)

  • Title check: ✅ Passed. The PR title directly addresses all three main features added: audio waveform, audio editing controls, and the 1.1× zoom level.
  • Description check: ✅ Passed. The PR description covers all required sections: detailed motivation, type of change, screenshots with descriptions, testing steps with verification procedures, and a comprehensive technical summary.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 735a13958f



let lastNode: AudioNode = sourceNode;

if (audioSettings) {

P1 Badge Skip default audio FX for speed-only exports

process() computes hasActiveAudioSettings, but renderPitchPreservedTimelineAudio() still applies the filter chain whenever audioSettings is merely present. Because speed edits always enter this path and VideoEditor passes defaults, exports with speed regions but untouched audio sliders get unintended processing (including the default loudnessDb: -12 attenuation), so output audio is altered even when users did not opt into audio edits.

Useful? React with 👍 / 👎.
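One way to express the suggested guard is a small neutrality check that the export path consults before building the FX chain. This is a sketch: most default values and the helper name are assumptions, with loudnessDb: -12 taken from the comment above and highpassHz: 80 from the slider defaults shown later in this review.

```typescript
interface AudioSettings {
  highpassHz: number;
  compressionRatio: number;
  trebleDb: number;
  loudnessDb: number;
}

// Assumed defaults: these represent "untouched sliders", so exports should
// treat settings equal to them as a no-op rather than applying the chain.
const DEFAULT_AUDIO_SETTINGS: AudioSettings = {
  highpassHz: 80,
  compressionRatio: 4,
  trebleDb: 3,
  loudnessDb: -12,
};

function isNeutralAudioSettings(s: AudioSettings | undefined): boolean {
  if (!s) return true;
  return (Object.keys(DEFAULT_AUDIO_SETTINGS) as (keyof AudioSettings)[]).every(
    (k) => s[k] === DEFAULT_AUDIO_SETTINGS[k],
  );
}
```

With this in place, renderPitchPreservedTimelineAudio() would skip the filter chain when isNeutralAudioSettings(audioSettings) is true, instead of keying off mere presence of the object.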

generateWaveformPeaks(videoPath)
.then((res) => {
console.log("[Waveform] peaks generated:", res.peaks.length);
setAudioPeaks(res.peaks);

P2 Badge Ignore stale waveform results after source changes

This async waveform effect has no cancellation/version guard, so if videoPath changes while a prior generateWaveformPeaks() call is in flight, the older promise can resolve later and overwrite audioPeaks (and loading state) for the newly selected video. The regression is visible when users switch/open projects quickly and see a waveform that does not match the current media.

Useful? React with 👍 / 👎.
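The suggested guard can be sketched with a version counter; a cancelled flag in the effect cleanup works equally well. All names here are illustrative, not the actual VideoEditor.tsx code:

```typescript
// Each new videoPath bumps the counter; a resolving promise only commits
// its peaks if it is still the latest request.
let waveformRequestId = 0;
let audioPeaks: number[] | null = null;

// Called when videoPath changes: invalidates any in-flight request.
function startWaveformRequest(): number {
  return ++waveformRequestId;
}

// Called when a generateWaveformPeaks promise resolves.
function commitPeaks(requestId: number, peaks: number[]): boolean {
  if (requestId !== waveformRequestId) return false; // stale result, dropped
  audioPeaks = peaks;
  return true;
}
```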

import type { WebcamLayoutPreset } from "@/lib/compositeLayout";

-export type ZoomDepth = 1 | 2 | 3 | 4 | 5 | 6;
+export type ZoomDepth = 1 | 2 | 3 | 4 | 5 | 6 | 7;

P2 Badge Update project zoom validation for the new depth level

Adding depth 7 here enables the new 5× zoom option, but project normalization still accepts only [1,2,3,4,5,6] in normalizeProjectEditor (projectPersistence.ts), so saved zoom regions at depth 7 are coerced back to the default when reopening a project. This causes user edits at the new zoom level to be lost across save/load.

Useful? React with 👍 / 👎.
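A minimal sketch of the extended validation, with the new depth 7 included. The DEFAULT_ZOOM_DEPTH of 4 is taken from the walkthrough below; the helper name is hypothetical:

```typescript
// Accepted zoom depths after this PR; anything else falls back to the default.
const VALID_ZOOM_DEPTHS: readonly number[] = [1, 2, 3, 4, 5, 6, 7];
const DEFAULT_ZOOM_DEPTH = 4;

function normalizeZoomDepth(depth: unknown): number {
  return typeof depth === "number" && VALID_ZOOM_DEPTHS.includes(depth)
    ? depth
    : DEFAULT_ZOOM_DEPTH;
}
```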


coderabbitai bot left a comment


Actionable comments posted: 8

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/components/video-editor/projectPersistence.ts (1)

193-193: ⚠️ Potential issue | 🔴 Critical

Bug: Missing depth 7 in validation causes new zoom level to be rejected.

The validation array [1, 2, 3, 4, 5, 6] doesn't include the newly added depth 7. When loading a project with depth 7 zoom regions, they will be reset to DEFAULT_ZOOM_DEPTH.

🐛 Fix: Include depth 7 in validation
-						depth: [1, 2, 3, 4, 5, 6].includes(region.depth) ? region.depth : DEFAULT_ZOOM_DEPTH,
+						depth: [1, 2, 3, 4, 5, 6, 7].includes(region.depth) ? region.depth : DEFAULT_ZOOM_DEPTH,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/components/video-editor/projectPersistence.ts` at line 193, The depth
validation currently checks [1,2,3,4,5,6] and thus rejects region.depth === 7;
update the validation in the loader/serializer where region.depth is normalized
(the expression using region.depth and DEFAULT_ZOOM_DEPTH) to include 7 (e.g.,
[1,2,3,4,5,6,7]) so depth 7 zoom regions are preserved instead of being reset to
DEFAULT_ZOOM_DEPTH.
🧹 Nitpick comments (2)
src/components/video-editor/VideoPlayback.tsx (1)

275-278: The "LUFS" label is misleading for a simple gain control.

The loudnessDb is converted to linear gain using 10^(dB/20), which is a standard dB-to-linear conversion. However, the UI displays this as "LUFS" (Loudness Units Full Scale), which is an integrated loudness measurement standard—not the same as applying a simple gain.

True LUFS-based loudness normalization requires analyzing the audio and dynamically adjusting gain. The current implementation is a static makeup gain, which is fine functionally but the label is technically inaccurate.

Consider renaming the UI label from "LUFS" to "dB" or "Makeup Gain" to accurately describe the behavior.
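The distinction can be made concrete with a short sketch. Plain RMS stands in here for a true integrated LUFS measurement, which would additionally apply K-weighting and gating:

```typescript
// Static makeup gain, matching what the code does: a fixed multiplier
// derived from a dB value via the standard 10^(dB/20) conversion.
const staticGain = (db: number): number => Math.pow(10, db / 20);

// Loudness normalization, by contrast, measures the programme first and
// derives gain from the distance to a target level.
function normalizationGain(samples: Float32Array, targetDb: number): number {
  let sum = 0;
  for (let i = 0; i < samples.length; i++) sum += samples[i] * samples[i];
  const rms = Math.sqrt(sum / samples.length);
  const rmsDb = 20 * Math.log10(rms > 0 ? rms : 1e-12);
  return staticGain(targetDb - rmsDb); // gain depends on the measured signal
}
```

staticGain ignores the signal entirely, which is why "dB" or "Makeup Gain" describes the control more accurately than "LUFS".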

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/components/video-editor/VideoPlayback.tsx` around lines 275 - 278, The UI
label "LUFS" is misleading because the code (loudnessDb -> linearGain via
Math.pow(10, loudnessDb/20) applied to loudnessGainNodeRef.current.gain)
implements a static makeup gain, not LUFS measurement/normalization; update the
UI label and any associated tooltip from "LUFS" to a more accurate term such as
"dB" or "Makeup Gain" wherever the loudness control is rendered (the control
tied to loudnessDb and loudnessGainNodeRef), and adjust any helper text or tests
to reflect the new label.
src/components/video-editor/projectPersistence.ts (1)

398-403: Consider validating individual audioSettings fields for robustness.

The shallow merge approach is consistent with other fields in this normalizer, but unlike numeric fields elsewhere that use isFiniteNumber checks, the audioSettings fields are merged without type/range validation. If corrupted project data contains invalid values (e.g., highpassHz: "invalid"), they would pass through.

This is a low-risk edge case since the source is typically trusted serialized state, but for defensive consistency you could validate each field:

♻️ Optional: Add field-level validation
 		audioSettings: {
 			...DEFAULT_AUDIO_SETTINGS,
-			...(editor.audioSettings && typeof editor.audioSettings === "object"
-				? editor.audioSettings
-				: {}),
+			...(editor.audioSettings && typeof editor.audioSettings === "object"
+				? {
+						highpassHz: isFiniteNumber(editor.audioSettings.highpassHz)
+							? clamp(editor.audioSettings.highpassHz, 0, 300)
+							: DEFAULT_AUDIO_SETTINGS.highpassHz,
+						compressionRatio: isFiniteNumber(editor.audioSettings.compressionRatio)
+							? clamp(editor.audioSettings.compressionRatio, 1, 20)
+							: DEFAULT_AUDIO_SETTINGS.compressionRatio,
+						trebleDb: isFiniteNumber(editor.audioSettings.trebleDb)
+							? clamp(editor.audioSettings.trebleDb, 0, 15)
+							: DEFAULT_AUDIO_SETTINGS.trebleDb,
+						loudnessDb: isFiniteNumber(editor.audioSettings.loudnessDb)
+							? clamp(editor.audioSettings.loudnessDb, -24, 0)
+							: DEFAULT_AUDIO_SETTINGS.loudnessDb,
+					}
+				: {}),
 		},
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/components/video-editor/projectPersistence.ts` around lines 398 - 403,
The shallow merge for audioSettings (using DEFAULT_AUDIO_SETTINGS and
editor.audioSettings) allows invalid types/values to slip through; update the
normalization in projectPersistence.ts to validate each audioSettings field
(e.g., highpassHz, lowpassHz, gainDb, etc.) before merging by reusing the
existing isFiniteNumber helper and applying appropriate ranges or fallbacks to
DEFAULT_AUDIO_SETTINGS for each field when editor.audioSettings has invalid
values; locate the merge around audioSettings and replace the direct spread with
per-field checks that coerce/ignore non-finite or out-of-range numbers.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/components/video-editor/SettingsPanel.tsx`:
- Line 1046: Replace hardcoded English strings in the Audio Adjustments section
of the SettingsPanel component with i18n calls: wrap the section title and all
slider labels/descriptions in t(...) (e.g., change "Audio Adjustments" to
t("audio.title") and use t("audio.highpass"), t("audio.highpassDesc"), etc. for
each slider label/description), ensuring you import/use the same t function
already used elsewhere in SettingsPanel.tsx so keys match the project's
translation keys.
- Around line 1060-1075: The handlers in SettingsPanel.tsx use the non-null
assertion audioSettings! inside the Slider callbacks which will throw if
audioSettings is undefined; update the onValueChange and onValueCommit for the
highpass (and the other three sliders: Compression, Treble, Loudness Target) to
guard by building a safe base object (e.g., const base = audioSettings ?? { /*
default audio settings */ }) and then spread that base when calling
onAudioSettingsChange/onAudioSettingsCommit, and ensure the committed value uses
the same fallback default (highpassHz default 80) so no spreading of undefined
occurs.

In `@src/components/video-editor/timeline/AudioWaveform.tsx`:
- Around line 108-122: The two hardcoded UI strings in AudioWaveform.tsx
("Audio" label and "Processing waveform…") bypass i18n and must be replaced with
translated strings; import and use the project's i18n hook (e.g., useTranslation
or the existing t function used elsewhere in the codebase) inside the
AudioWaveform component and call t(...) for both texts (suggest keys like
"timeline.audio" and "timeline.processingWaveform"), then update the locale
resource files with these keys so translations render instead of the English
literals.

In `@src/components/video-editor/VideoEditor.tsx`:
- Around line 369-387: The useEffect that calls generateWaveformPeaks(videoPath)
can update state after videoPath has changed; add a local cancelled flag (e.g.,
let cancelled = false) inside the effect and return a cleanup that sets
cancelled = true, then before calling setAudioPeaks, setIsAudioWaveformLoading,
or toast inside the .then/.catch/.finally handlers check if cancelled is true
and skip any state/log updates if so; target the generateWaveformPeaks call and
its promise handlers (and ensure the else branch still clears state immediately
when videoPath is falsy).
- Around line 1279-1280: The handleExport useCallback is capturing stale
audioSettings so exports use previous values; update the useCallback dependency
array for handleExport to include audioSettings (the same state passed into
VideoExporter and GifExporter) so the callback is recreated when audioSettings
change, and if audioSettings is an object ensure it is stable/memoized (or
derive primitive deps) to avoid unnecessary re-creations.

In `@src/components/video-editor/VideoPlayback.tsx`:
- Around line 219-258: The audio init effect currently runs only on mount
(useEffect with []), so if videoRef.current is null it never initializes; update
the logic to run when the video element becomes available by either adding the
video reference/state to the effect dependencies (e.g., depend on
videoRef.current or a boolean like videoReady) or move the initialization into
the video load handler (e.g., handleLoadedMetadata) and call the same setup
routine; ensure you preserve the _audioInitialized guard and set
audioCtxRef/current node refs (highpassNodeRef, compressorNodeRef,
trebleNodeRef, loudnessGainNodeRef) in the moved/updated init so the Web Audio
graph is built once when the DOM video element exists.

In `@src/lib/audioWaveform.ts`:
- Around line 45-60: The peaks computation currently reads only channel 0 and
computes samplesPerPeak with Math.floor(rawData.length / numPeaks) which can be
zero for very short buffers; update the logic in the peaks calculation (symbols:
audioBuffer.getChannelData, rawData, samplesPerPeak, numPeaks, peaks, globalMax)
to (1) mix across all channels by iterating audioBuffer.numberOfChannels and
taking the max absolute sample from every channel for each sample index used in
a bin, and (2) clamp samplesPerPeak to at least 1 (e.g., Math.max(1,
Math.floor(...))) so bins are never size 0; then compute per-bin max across
those mixed samples and update peaks[i] and globalMax accordingly.

In `@src/lib/exporter/audioEncoder.ts`:
- Around line 33-42: The code treats specific literals as "no audio processing"
but those literals actually equal DEFAULT_AUDIO_SETTINGS which is not neutral;
update the hasActiveAudioSettings check to detect the real neutral state by
comparing audioSettings to DEFAULT_AUDIO_SETTINGS (deep equality) or by
adding/using a helper like isNeutralAudioSettings(audioSettings) and ensure that
when audioSettings equals DEFAULT_AUDIO_SETTINGS we still consider it as active
processing so the conditional guarding timeline playback (the if using
sortedSpeedRegions and hasActiveAudioSettings, and functions
processTrimOnlyAudio(...) and timeline playback path) selects the
audio-processing/timeline path rather than falling back to processTrimOnlyAudio;
reference audioSettings, DEFAULT_AUDIO_SETTINGS, hasActiveAudioSettings,
sortedSpeedRegions, and processTrimOnlyAudio when making the change.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: e6c3126a-fdfe-4106-a2fc-e2a66fcbcd62

📥 Commits

Reviewing files that changed from the base of the PR and between 21893f0 and 735a139.

⛔ Files ignored due to path filters (1)
  • package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (13)
  • electron/windows.ts
  • src/components/video-editor/SettingsPanel.tsx
  • src/components/video-editor/VideoEditor.tsx
  • src/components/video-editor/VideoPlayback.tsx
  • src/components/video-editor/projectPersistence.ts
  • src/components/video-editor/timeline/AudioWaveform.tsx
  • src/components/video-editor/timeline/Item.tsx
  • src/components/video-editor/timeline/TimelineEditor.tsx
  • src/components/video-editor/types.ts
  • src/hooks/useEditorHistory.ts
  • src/lib/audioWaveform.ts
  • src/lib/exporter/audioEncoder.ts
  • src/lib/exporter/videoExporter.ts

<AccordionTrigger className="py-2.5 hover:no-underline">
<div className="flex items-center gap-2">
<Volume2 className="w-4 h-4 text-[#34B27B]" />
<span className="text-xs font-medium">Audio Adjustments</span>

⚠️ Potential issue | 🟡 Minor

Hardcoded strings break i18n consistency.

The "Audio Adjustments" section uses hardcoded English strings while the rest of the panel uses the t() function for internationalization. This includes the section title and all slider labels/descriptions.

Consider using translation keys like:

  • t("audio.title") for "Audio Adjustments"
  • t("audio.highpass"), t("audio.highpassDesc") for slider labels

Comment on lines +1060 to +1075
<Slider
value={[audioSettings?.highpassHz ?? 80]}
onValueChange={(values) =>
onAudioSettingsChange?.({ ...audioSettings!, highpassHz: values[0] })
}
onValueCommit={() =>
onAudioSettingsCommit?.({
...audioSettings!,
highpassHz: audioSettings?.highpassHz ?? 80,
})
}
min={0}
max={300}
step={10}
className="w-full [&_[role=slider]]:bg-[#34B27B] [&_[role=slider]]:border-[#34B27B] [&_[role=slider]]:h-3 [&_[role=slider]]:w-3"
/>

⚠️ Potential issue | 🟠 Major

Potential runtime error when audioSettings is undefined.

The onValueChange and onValueCommit handlers use { ...audioSettings!, ... } which will throw if audioSettings is undefined. Although the slider values fall back to defaults via nullish coalescing, the callbacks still attempt to spread an undefined object.

🐛 Fix: Guard against undefined audioSettings
 								<Slider
 									value={[audioSettings?.highpassHz ?? 80]}
 									onValueChange={(values) =>
-										onAudioSettingsChange?.({ ...audioSettings!, highpassHz: values[0] })
+										audioSettings && onAudioSettingsChange?.({ ...audioSettings, highpassHz: values[0] })
 									}
 									onValueCommit={() =>
-										onAudioSettingsCommit?.({
-											...audioSettings!,
-											highpassHz: audioSettings?.highpassHz ?? 80,
-										})
+										audioSettings && onAudioSettingsCommit?.(audioSettings)
 									}

Apply the same pattern to the other three sliders (Compression, Treble, Loudness Target).


Comment on lines +108 to +122
<span className="text-[9px] font-semibold text-slate-500 uppercase tracking-widest px-2">
Audio
</span>
</div>

{/* Waveform area */}
<div
ref={containerRef}
className="absolute top-0 bottom-0 overflow-hidden"
style={{ left: sidebarWidth, right: 0 }}
>
{isLoading ? (
<div className="absolute inset-0 flex items-center justify-center gap-1.5 text-slate-600">
<Loader2 className="w-3 h-3 animate-spin text-[#34B27B]/70" />
<span className="text-[9px] font-medium text-slate-600">Processing waveform…</span>

⚠️ Potential issue | 🟡 Minor

Localize the new waveform strings.

"Audio" and "Processing waveform…" bypass the existing i18n layer, so this row will stay English even when the rest of the editor is translated.


Comment on lines +369 to +387
useEffect(() => {
if (videoPath) {
setIsAudioWaveformLoading(true);
generateWaveformPeaks(videoPath)
.then((res) => {
console.log("[Waveform] peaks generated:", res.peaks.length);
setAudioPeaks(res.peaks);
})
.catch((err) => {
console.error("[Waveform] FAILED:", err);
toast.error("Waveform failed: " + String(err).slice(0, 80));
setAudioPeaks(null);
})
.finally(() => setIsAudioWaveformLoading(false));
} else {
setAudioPeaks(null);
setIsAudioWaveformLoading(false);
}
}, [videoPath]);

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

find . -type f -name "VideoEditor.tsx" | head -5

Repository: siddharthvaddem/openscreen

Length of output: 115


🏁 Script executed:

# Read the VideoEditor.tsx file to check lines 369-387
cat -n src/components/video-editor/VideoEditor.tsx | sed -n '365,390p'

Repository: siddharthvaddem/openscreen

Length of output: 924


🏁 Script executed:

# Search for generateWaveformPeaks implementation
rg "generateWaveformPeaks" --type ts --type tsx -B 2 -A 10

Repository: siddharthvaddem/openscreen

Length of output: 97


🏁 Script executed:

rg "generateWaveformPeaks" -B 2 -A 10

Repository: siddharthvaddem/openscreen

Length of output: 2820


🏁 Script executed:

# Also search for where generateWaveformPeaks is defined/imported
rg "generateWaveformPeaks\s*=" -B 5 -A 15

Repository: siddharthvaddem/openscreen

Length of output: 52


🏁 Script executed:

cat -n src/lib/audioWaveform.ts

Repository: siddharthvaddem/openscreen

Length of output: 2686


🏁 Script executed:

# Check if there are similar async patterns in the codebase with proper cleanup
rg "useEffect.*async|Promise\(\)" -A 5 src/components/video-editor/VideoEditor.tsx | head -50

Repository: siddharthvaddem/openscreen

Length of output: 52


🏁 Script executed:

# Verify if there are any AbortController or cancellation patterns used elsewhere
rg "AbortController|AbortSignal|cancelled" src/components/video-editor/VideoEditor.tsx

Repository: siddharthvaddem/openscreen

Length of output: 52


Cancel in-flight waveform requests when the video path changes.

The useEffect at lines 369–387 has a race condition: if videoPath changes while generateWaveformPeaks(videoPath) is still running, the older promise will eventually resolve and overwrite the state for the new video. This can set audioPeaks to stale data and incorrectly clear isAudioWaveformLoading before the new waveform finishes processing.

Add a cleanup function with a cancelled flag to prevent stale state updates:

Suggested fix
 	useEffect(() => {
+		let cancelled = false;
+
 		if (videoPath) {
 			setIsAudioWaveformLoading(true);
 			generateWaveformPeaks(videoPath)
 				.then((res) => {
+					if (!cancelled) {
-					console.log("[Waveform] peaks generated:", res.peaks.length);
-					setAudioPeaks(res.peaks);
+						console.log("[Waveform] peaks generated:", res.peaks.length);
+						setAudioPeaks(res.peaks);
+					}
 				})
 				.catch((err) => {
+					if (!cancelled) {
-					console.error("[Waveform] FAILED:", err);
-					toast.error("Waveform failed: " + String(err).slice(0, 80));
-					setAudioPeaks(null);
+						console.error("[Waveform] FAILED:", err);
+						toast.error("Waveform failed: " + String(err).slice(0, 80));
+						setAudioPeaks(null);
+					}
 				})
-				.finally(() => setIsAudioWaveformLoading(false));
+				.finally(() => {
+					if (!cancelled) {
+						setIsAudioWaveformLoading(false);
+					}
+				});
 		} else {
 			setAudioPeaks(null);
 			setIsAudioWaveformLoading(false);
 		}
+
+		return () => {
+			cancelled = true;
+		};
 	}, [videoPath]);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/components/video-editor/VideoEditor.tsx` around lines 369 - 387, The
useEffect that calls generateWaveformPeaks(videoPath) can update state after
videoPath has changed; add a local cancelled flag (e.g., let cancelled = false)
inside the effect and return a cleanup that sets cancelled = true, then before
calling setAudioPeaks, setIsAudioWaveformLoading, or toast inside the
.then/.catch/.finally handlers check if cancelled is true and skip any state/log
updates if so; target the generateWaveformPeaks call and its promise handlers
(and ensure the else branch still clears state immediately when videoPath is
falsy).

Comment on lines +1279 to 1280
audioSettings,
onProgress: (progress: ExportProgress) => {

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

find . -type f -name "VideoEditor.tsx" | head -5

Repository: siddharthvaddem/openscreen

Length of output: 115


🏁 Script executed:

wc -l src/components/video-editor/VideoEditor.tsx

Repository: siddharthvaddem/openscreen

Length of output: 118


🏁 Script executed:

sed -n '1270,1290p' src/components/video-editor/VideoEditor.tsx

Repository: siddharthvaddem/openscreen

Length of output: 608


🏁 Script executed:

sed -n '1320,1360p' src/components/video-editor/VideoEditor.tsx

Repository: siddharthvaddem/openscreen

Length of output: 915


🏁 Script executed:

sed -n '1260,1290p' src/components/video-editor/VideoEditor.tsx

Repository: siddharthvaddem/openscreen

Length of output: 833


🏁 Script executed:

sed -n '1240,1280p' src/components/video-editor/VideoEditor.tsx

Repository: siddharthvaddem/openscreen

Length of output: 1149


🏁 Script executed:

rg -n "const handleExport" src/components/video-editor/VideoEditor.tsx

Repository: siddharthvaddem/openscreen

Length of output: 154


🏁 Script executed:

sed -n '1080,1360p' src/components/video-editor/VideoEditor.tsx

Repository: siddharthvaddem/openscreen

Length of output: 8659


🏁 Script executed:

rg -n "audioSettings" src/components/video-editor/VideoEditor.tsx | head -20

Repository: siddharthvaddem/openscreen

Length of output: 436


🏁 Script executed:

rg -n "setAudioSettings\|useState.*audioSettings" src/components/video-editor/VideoEditor.tsx

Repository: siddharthvaddem/openscreen

Length of output: 52


🏁 Script executed:

sed -n '85,95p' src/components/video-editor/VideoEditor.tsx

Repository: siddharthvaddem/openscreen

Length of output: 298


🏁 Script executed:

sed -n '1560,1570p' src/components/video-editor/VideoEditor.tsx

Repository: siddharthvaddem/openscreen

Length of output: 509


🏁 Script executed:

sed -n '1680,1695p' src/components/video-editor/VideoEditor.tsx

Repository: siddharthvaddem/openscreen

Length of output: 782


🏁 Script executed:

rg -n "const \[.*audioSettings" src/components/video-editor/VideoEditor.tsx

Repository: siddharthvaddem/openscreen

Length of output: 52


🏁 Script executed:

sed -n '50,120p' src/components/video-editor/VideoEditor.tsx

Repository: siddharthvaddem/openscreen

Length of output: 2716


handleExport captures stale audioSettings due to missing dependency.

audioSettings is passed to both VideoExporter and GifExporter configs but is missing from the handleExport useCallback dependency array. When audio settings change, the callback isn't recreated, so exports immediately after changing an audio control will use the previous settings.

Add audioSettings to the dependency array:

Suggested fix
 			handleExportSaved,
 			cursorTelemetry,
+			audioSettings,
 		],
 	);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/components/video-editor/VideoEditor.tsx` around lines 1279 - 1280, The
handleExport useCallback is capturing stale audioSettings so exports use
previous values; update the useCallback dependency array for handleExport to
include audioSettings (the same state passed into VideoExporter and GifExporter)
so the callback is recreated when audioSettings change, and if audioSettings is
an object ensure it is stable/memoized (or derive primitive deps) to avoid
unnecessary re-creations.

Comment on lines +219 to +258
useEffect(() => {
const video = videoRef.current;
if (!video || (video as any)._audioInitialized) return;
(video as any)._audioInitialized = true;

try {
const ctx = new window.AudioContext();
audioCtxRef.current = ctx;

const source = ctx.createMediaElementSource(video);

const highpass = ctx.createBiquadFilter();
highpass.type = "highpass";

const compressor = ctx.createDynamicsCompressor();

const treble = ctx.createBiquadFilter();
treble.type = "highshelf";
treble.frequency.value = 3000;

const loudnessGain = ctx.createGain();

source.connect(highpass);
highpass.connect(compressor);
compressor.connect(treble);
treble.connect(loudnessGain);
loudnessGain.connect(ctx.destination);

highpassNodeRef.current = highpass;
compressorNodeRef.current = compressor;
trebleNodeRef.current = treble;
loudnessGainNodeRef.current = loudnessGain;

if (!video.paused) {
ctx.resume().catch(console.error);
}
} catch (e) {
console.error("Failed to initialize Web Audio API:", e);
}
}, []);

⚠️ Potential issue | 🟠 Major

Audio initialization may silently fail if video element isn't mounted on first effect run.

The useEffect has an empty dependency array [], meaning it runs only once on component mount. If videoRef.current is null at that moment (which can happen since the video element renders later in the JSX), the audio graph will never be initialized. There's no retry mechanism.

🐛 Fix: Add video element to dependencies or use a separate initialization trigger
 	useEffect(() => {
 		const video = videoRef.current;
 		if (!video || (video as any)._audioInitialized) return;
+		// Ensure video has a valid source before initializing audio
+		if (!video.src) return;
 		(video as any)._audioInitialized = true;

 		try {
 			// ... audio initialization code
 		} catch (e) {
 			console.error("Failed to initialize Web Audio API:", e);
+			// Reset flag to allow retry
+			(video as any)._audioInitialized = false;
 		}
-	}, []);
+	}, [videoPath]); // Re-run when video path changes

Alternatively, trigger initialization in handleLoadedMetadata or when videoReady becomes true.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/components/video-editor/VideoPlayback.tsx` around lines 219 - 258, The
audio init effect currently runs only on mount (useEffect with []), so if
videoRef.current is null it never initializes; update the logic to run when the
video element becomes available by either adding the video reference/state to
the effect dependencies (e.g., depend on videoRef.current or a boolean like
videoReady) or move the initialization into the video load handler (e.g.,
handleLoadedMetadata) and call the same setup routine; ensure you preserve the
_audioInitialized guard and set audioCtxRef/current node refs (highpassNodeRef,
compressorNodeRef, trebleNodeRef, loudnessGainNodeRef) in the moved/updated init
so the Web Audio graph is built once when the DOM video element exists.
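
One detail the snippet above leaves implicit: GainNode.gain is a linear multiplier, while the Loudness Target slider is expressed in dB of makeup gain, so a conversion is needed wherever the gain node is updated. A minimal sketch of the standard conversion (the helper name is illustrative; the PR's actual setting code is not shown here):

```typescript
// Standard dB-to-linear amplitude conversion: gain = 10^(dB / 20).
function dbToLinearGain(db: number): number {
	return Math.pow(10, db / 20);
}

// e.g. loudnessGain.gain.value = dbToLinearGain(audioSettings.loudnessDb);
```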

Comment on lines +45 to +60
// 3. Compute peaks — use channel 0 (or mix down if stereo)
const rawData = audioBuffer.getChannelData(0);
const samplesPerPeak = Math.floor(rawData.length / numPeaks);
const peaks = new Float32Array(numPeaks);

let globalMax = 0;
for (let i = 0; i < numPeaks; i++) {
const start = i * samplesPerPeak;
let max = 0;
for (let j = 0; j < samplesPerPeak; j++) {
const abs = Math.abs(rawData[start + j] || 0);
if (abs > max) max = abs;
}
peaks[i] = max;
if (max > globalMax) globalMax = max;
}

⚠️ Potential issue | 🟠 Major

Build peaks from all channels and clamp the bin size.

This only samples channel 0, so right-only audio can render as silence. Math.floor(rawData.length / numPeaks) also becomes 0 on very short clips, which makes every peak stay at 0. Both cases produce a misleading blank waveform even though the source contains audio.

Suggested fix
-	const rawData = audioBuffer.getChannelData(0);
-	const samplesPerPeak = Math.floor(rawData.length / numPeaks);
+	const channelData = Array.from({ length: audioBuffer.numberOfChannels }, (_, index) =>
+		audioBuffer.getChannelData(index),
+	);
+	const sampleCount = channelData[0]?.length ?? 0;
+	const samplesPerPeak = Math.max(1, Math.ceil(sampleCount / numPeaks));
 	const peaks = new Float32Array(numPeaks);

 	let globalMax = 0;
 	for (let i = 0; i < numPeaks; i++) {
 		const start = i * samplesPerPeak;
 		let max = 0;
-		for (let j = 0; j < samplesPerPeak; j++) {
-			const abs = Math.abs(rawData[start + j] || 0);
-			if (abs > max) max = abs;
+		const end = Math.min(sampleCount, start + samplesPerPeak);
+		for (let j = start; j < end; j++) {
+			for (const channel of channelData) {
+				const abs = Math.abs(channel[j] ?? 0);
+				if (abs > max) max = abs;
+			}
 		}
 		peaks[i] = max;
 		if (max > globalMax) globalMax = max;
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/lib/audioWaveform.ts` around lines 45 - 60, The peaks computation
currently reads only channel 0 and computes samplesPerPeak with
Math.floor(rawData.length / numPeaks) which can be zero for very short buffers;
update the logic in the peaks calculation (symbols: audioBuffer.getChannelData,
rawData, samplesPerPeak, numPeaks, peaks, globalMax) to (1) mix across all
channels by iterating audioBuffer.numberOfChannels and taking the max absolute
sample from every channel for each sample index used in a bin, and (2) clamp
samplesPerPeak to at least 1 (e.g., Math.max(1, Math.floor(...))) so bins are
never size 0; then compute per-bin max across those mixed samples and update
peaks[i] and globalMax accordingly.
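
The fixed computation can be exercised standalone. A self-contained sketch of the multi-channel, clamped-bin version described above (function and parameter names are illustrative, not the PR's actual code):

```typescript
// Compute per-bin peak amplitudes across ALL channels, with the bin size
// clamped to at least 1 sample so very short clips still produce peaks.
function computePeaks(channels: Float32Array[], numPeaks: number): Float32Array {
	const sampleCount = channels[0]?.length ?? 0;
	const samplesPerPeak = Math.max(1, Math.ceil(sampleCount / numPeaks));
	const peaks = new Float32Array(numPeaks);
	for (let i = 0; i < numPeaks; i++) {
		const start = i * samplesPerPeak;
		const end = Math.min(sampleCount, start + samplesPerPeak);
		let max = 0;
		for (let j = start; j < end; j++) {
			for (const channel of channels) {
				const abs = Math.abs(channel[j] ?? 0);
				if (abs > max) max = abs;
			}
		}
		peaks[i] = max;
	}
	return peaks;
}
```

With this shape, right-only audio contributes to the peaks, and a clip shorter than numPeaks simply leaves the trailing bins at 0 instead of zeroing everything.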

Comment on lines +33 to +42
const hasActiveAudioSettings =
!!audioSettings &&
(audioSettings.highpassHz !== 80 ||
audioSettings.compressionRatio !== 4 ||
audioSettings.trebleDb !== 5 ||
audioSettings.loudnessDb !== -12);

// Speed edits must use timeline playback to preserve pitch.
// Audio edits also use timeline playback to evaluate the Web Audio graph.
if (sortedSpeedRegions.length > 0 || hasActiveAudioSettings) {

⚠️ Potential issue | 🟠 Major

Don't treat the default preset as “no audio processing.”

These literals match DEFAULT_AUDIO_SETTINGS, but that preset is not neutral. Any project saved with the default preset will drop back to processTrimOnlyAudio(...) here and export different audio than the preview path in src/components/video-editor/VideoPlayback.tsx.

Suggested fix
-		const hasActiveAudioSettings =
-			!!audioSettings &&
-			(audioSettings.highpassHz !== 80 ||
-				audioSettings.compressionRatio !== 4 ||
-				audioSettings.trebleDb !== 5 ||
-				audioSettings.loudnessDb !== -12);
+		const hasActiveAudioSettings = !!audioSettings;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/lib/exporter/audioEncoder.ts` around lines 33 - 42, The code treats
specific literals as "no audio processing" but those literals actually equal
DEFAULT_AUDIO_SETTINGS which is not neutral; update the hasActiveAudioSettings
check to detect the real neutral state by comparing audioSettings to
DEFAULT_AUDIO_SETTINGS (deep equality) or by adding/using a helper like
isNeutralAudioSettings(audioSettings) and ensure that when audioSettings equals
DEFAULT_AUDIO_SETTINGS we still consider it as active processing so the
conditional guarding timeline playback (the if using sortedSpeedRegions and
hasActiveAudioSettings, and functions processTrimOnlyAudio(...) and timeline
playback path) selects the audio-processing/timeline path rather than falling
back to processTrimOnlyAudio; reference audioSettings, DEFAULT_AUDIO_SETTINGS,
hasActiveAudioSettings, sortedSpeedRegions, and processTrimOnlyAudio when making
the change.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/lib/exporter/audioEncoder.ts`:
- Around line 427-477: The encodeAudioBufferToBlob method currently uses a live
AudioContext + MediaRecorder which records in real time; replace that real-time
path with a non-realtime WebCodecs-based encoder similar to the trim-only path:
convert the AudioBuffer frames to raw PCM frames, instantiate an AudioEncoder
(or the same encoder used in the trim-only flow), feed encoded chunks to collect
a Blob/ArrayBuffer, and return that result; update encodeAudioBufferToBlob to
branch to WebCodecs when available (fall back to the existing MediaRecorder flow
if not), reuse getSupportedAudioMimeType/AUDIO_BITRATE semantics for
container/mime decisions, and ensure proper close/flush of the AudioEncoder and
cleanup before resolving the Promise.
- Around line 502-504: The current branch returns a full-length segment when
keptSegments is empty, which re-introduces audio that should be fully trimmed;
update the early-return in the function that computes segments so that if
keptSegments.length === 0 you return an empty array ([]) instead of [{ startMs:
0, endMs: totalDurationMs, speed: 1 }], and ensure the return type still matches
the function signature and any callers tolerate an empty segments array (adjust
consumer expectations if necessary).
- Around line 400-410: The highpass filter is being set from
audioSettings.highpassHz which may be 0 (slider min), but
BiquadFilterNode.frequency requires positive values; update the UI slider in
SettingsPanel (where the highpassHz slider is defined) to use min={20} OR add
runtime validation in applyAudioSettings()/the code that sets highpass.frequency
(e.g., before calling offCtx.createBiquadFilter() or when assigning
audioSettings.highpassHz) to clamp/replace non-positive values with a safe floor
like 20 Hz so highpass.frequency.value is never set to 0 or negative; ensure you
reference the highpass variable and audioSettings.highpassHz in your change so
the validation/slider update is consistent.
ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 5639a43f-4204-4b1d-8105-08bea2d218d8

📥 Commits

Reviewing files that changed from the base of the PR and between 735a139 and d53aecd.

📒 Files selected for processing (4)
  • src/components/video-editor/ExportDialog.tsx
  • src/lib/exporter/audioEncoder.ts
  • src/lib/exporter/types.ts
  • src/lib/exporter/videoExporter.ts
✅ Files skipped from review due to trivial changes (1)
  • src/lib/exporter/types.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/lib/exporter/videoExporter.ts

Comment on lines +400 to +410
const highpass = offCtx.createBiquadFilter();
highpass.type = "highpass";
highpass.frequency.value = audioSettings.highpassHz;

const compressor = offCtx.createDynamicsCompressor();
compressor.ratio.value = audioSettings.compressionRatio;

const treble = offCtx.createBiquadFilter();
treble.type = "highshelf";
treble.frequency.value = 3000;
treble.gain.value = audioSettings.trebleDb;

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check if AudioSettings type or UI has validation constraints
rg -n "AudioSettings" --type=ts -A 10 | head -80
rg -n "highpassHz|compressionRatio" --type=ts -C 3 | head -100

Repository: siddharthvaddem/openscreen

Length of output: 12326


🏁 Script executed:

#!/bin/bash
# Check Slider max values and range constraints for audio settings
rg -n "highpassHz|compressionRatio|trebleDb|loudnessDb" src/components/video-editor/SettingsPanel.tsx -A 5 -B 5 | grep -E "(min|max|value|Slider)" | head -60

Repository: siddharthvaddem/openscreen

Length of output: 1070


🏁 Script executed:

#!/bin/bash
# Check if there's any validation in applyAudioSettings
sed -n '384,450p' src/lib/exporter/audioEncoder.ts

Repository: siddharthvaddem/openscreen

Length of output: 2266


Fix invalid highpass frequency constraint in UI.

The highpassHz slider allows min={0} (SettingsPanel.tsx:1071), but BiquadFilterNode.frequency requires positive values. Setting frequency to 0 Hz will cause browser clamping or errors. Either update the slider to min={20} (typical audio range floor) or add runtime validation in applyAudioSettings().

The compressionRatio slider constraint (min={1}, max={20}) already aligns correctly with DynamicsCompressorNode.ratio requirements.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/lib/exporter/audioEncoder.ts` around lines 400 - 410, The highpass filter
is being set from audioSettings.highpassHz which may be 0 (slider min), but
BiquadFilterNode.frequency requires positive values; update the UI slider in
SettingsPanel (where the highpassHz slider is defined) to use min={20} OR add
runtime validation in applyAudioSettings()/the code that sets highpass.frequency
(e.g., before calling offCtx.createBiquadFilter() or when assigning
audioSettings.highpassHz) to clamp/replace non-positive values with a safe floor
like 20 Hz so highpass.frequency.value is never set to 0 or negative; ensure you
reference the highpass variable and audioSettings.highpassHz in your change so
the validation/slider update is consistent.
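
Either remedy reduces to the same clamp. A minimal sketch of the runtime-validation option (the constant and helper names are assumptions, not from the PR):

```typescript
// 20 Hz is the conventional floor of the audible range and a safe minimum
// for BiquadFilterNode.frequency, which must be positive.
const HIGHPASS_FLOOR_HZ = 20;

function safeHighpassHz(requestedHz: number): number {
	return Number.isFinite(requestedHz)
		? Math.max(HIGHPASS_FLOOR_HZ, requestedHz)
		: HIGHPASS_FLOOR_HZ;
}

// e.g. highpass.frequency.value = safeHighpassHz(audioSettings.highpassHz);
```

If the slider min is raised to 20 instead, this guard becomes redundant but still harmless as defense in depth.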

Comment on lines +427 to +477
/**
* Encodes an AudioBuffer to a WebM/Opus Blob.
* Uses a short MediaRecorder session fed from a live AudioContext
* to serialize the already-processed buffer — much faster than real-time
* since the buffer is fully built before this step.
*/
private async encodeAudioBufferToBlob(buffer: AudioBuffer): Promise<Blob> {
// Create a live AudioContext to play back the processed buffer into a MediaRecorder
const ctx = new AudioContext({ sampleRate: buffer.sampleRate });
const dest = ctx.createMediaStreamDestination();
const source = ctx.createBufferSource();
source.buffer = buffer;
source.connect(dest);

const mimeType = this.getSupportedAudioMimeType();
const options: MediaRecorderOptions = {
audioBitsPerSecond: AUDIO_BITRATE,
...(mimeType ? { mimeType } : {}),
};

const recorder = new MediaRecorder(dest.stream, options);
const chunks: Blob[] = [];

const recordedBlobPromise = new Promise<Blob>((resolve, reject) => {
recorder.ondataavailable = (event: BlobEvent) => {
if (event.data && event.data.size > 0) chunks.push(event.data);
};
recorder.onerror = () => reject(new Error("MediaRecorder failed during audio encoding"));
recorder.onstop = () => {
const type = mimeType || chunks[0]?.type || "audio/webm";
resolve(new Blob(chunks, { type }));
};
});

recorder.start();
source.start(0);

// Wait for playback to end (this is proportional to audio duration, but since
// the buffer is already processed the recording is the only remaining real-time step)
await new Promise<void>((resolve) => {
source.onended = () => resolve();
});

if (recorder.state !== "inactive") recorder.stop();
dest.stream.getTracks().forEach((t) => t.stop());
source.disconnect();
dest.disconnect();
await ctx.close();

return recordedBlobPromise;
}

⚠️ Potential issue | 🟠 Major

Encoding step runs in real-time, not faster.

The comment claims this is "much faster than real-time", but MediaRecorder fed from a live AudioContext plays and records the buffer in real-time. A 10-minute audio track will take ~10 minutes to encode, which may be acceptable for short clips but could frustrate users with longer content.

Consider using WebCodecs AudioEncoder directly (similar to the trim-only path) to encode the processed AudioBuffer without real-time playback.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/lib/exporter/audioEncoder.ts` around lines 427 - 477, The
encodeAudioBufferToBlob method currently uses a live AudioContext +
MediaRecorder which records in real time; replace that real-time path with a
non-realtime WebCodecs-based encoder similar to the trim-only path: convert the
AudioBuffer frames to raw PCM frames, instantiate an AudioEncoder (or the same
encoder used in the trim-only flow), feed encoded chunks to collect a
Blob/ArrayBuffer, and return that result; update encodeAudioBufferToBlob to
branch to WebCodecs when available (fall back to the existing MediaRecorder flow
if not), reuse getSupportedAudioMimeType/AUDIO_BITRATE semantics for
container/mime decisions, and ensure proper close/flush of the AudioEncoder and
cleanup before resolving the Promise.
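
A WebCodecs path would first need the processed AudioBuffer's planar channel data interleaved into a single buffer before constructing AudioData with format "f32". That preparatory step is pure computation and can be sketched independently (helper name is illustrative; the encoder wiring itself is omitted):

```typescript
// Interleave planar per-channel Float32Arrays (as returned by
// AudioBuffer.getChannelData) into one frame-interleaved buffer,
// the layout expected by AudioData's "f32" sample format.
function interleave(channels: Float32Array[]): Float32Array {
	const frames = channels[0]?.length ?? 0;
	const out = new Float32Array(frames * channels.length);
	for (let frame = 0; frame < frames; frame++) {
		for (let ch = 0; ch < channels.length; ch++) {
			out[frame * channels.length + ch] = channels[ch][frame];
		}
	}
	return out;
}
```

From there, chunks of the interleaved buffer would be wrapped in AudioData objects and fed to an AudioEncoder configured for Opus, with the MediaRecorder flow kept as a fallback where WebCodecs is unavailable.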

Comment on lines +502 to 504
if (keptSegments.length === 0) {
return [{ startMs: 0, endMs: totalDurationMs, speed: 1 }];
}

⚠️ Potential issue | 🔴 Critical

Returns full audio when everything is trimmed.

If keptSegments is empty (e.g., trim regions cover the entire timeline), the function returns the full original duration at 1× speed instead of an empty array. This would include audio that should have been trimmed out.

Suggested fix
 		if (keptSegments.length === 0) {
-			return [{ startMs: 0, endMs: totalDurationMs, speed: 1 }];
+			return [];
 		}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/lib/exporter/audioEncoder.ts` around lines 502 - 504, The current branch
returns a full-length segment when keptSegments is empty, which re-introduces
audio that should be fully trimmed; update the early-return in the function that
computes segments so that if keptSegments.length === 0 you return an empty array
([]) instead of [{ startMs: 0, endMs: totalDurationMs, speed: 1 }], and ensure
the return type still matches the function signature and any callers tolerate an
empty segments array (adjust consumer expectations if necessary).
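
The invariant being requested — trims covering the whole timeline must produce no kept segments — can be illustrated with a standalone helper (hypothetical sketch, not the PR's actual segment logic):

```typescript
interface Segment {
	startMs: number;
	endMs: number;
	speed: number;
}

// Invert trim regions over the timeline into kept segments.
// Full-coverage trims correctly yield [] rather than the whole clip.
function invertTrims(
	trims: Array<{ startMs: number; endMs: number }>,
	totalDurationMs: number,
): Segment[] {
	const sorted = [...trims].sort((a, b) => a.startMs - b.startMs);
	const kept: Segment[] = [];
	let cursor = 0;
	for (const trim of sorted) {
		if (trim.startMs > cursor) {
			kept.push({ startMs: cursor, endMs: trim.startMs, speed: 1 });
		}
		cursor = Math.max(cursor, trim.endMs);
	}
	if (cursor < totalDurationMs) {
		kept.push({ startMs: cursor, endMs: totalDurationMs, speed: 1 });
	}
	return kept;
}
```

Downstream consumers then need to treat an empty array as "no audio" rather than an error, which is the caller-side adjustment the prompt mentions.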
