feat: add audio waveform, audio edit, 1.1 zoom #326
PromptEngineer48 wants to merge 2 commits into siddharthvaddem:main from
Conversation
📝 Walkthrough

Adds persistent audio settings, real-time Web Audio processing in playback, canvas waveform generation in the timeline, waveform peak generation for files, audio-aware export processing, UI controls in SettingsPanel, zoom depth adjustments, and makes the HUD overlay window appear in the system taskbar.

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    actor User
    participant SettingsPanel
    participant VideoEditor
    participant VideoPlayback
    participant AudioContext as Web Audio Context
    participant VideoExporter
    User->>SettingsPanel: adjust audio sliders
    SettingsPanel->>VideoEditor: onAudioSettingsChange(settings)
    VideoEditor->>VideoEditor: updateState(audioSettings)
    VideoEditor->>VideoPlayback: pass audioSettings
    VideoPlayback->>AudioContext: update nodes (highpass, compressor, treble, gain)
    AudioContext-->>VideoPlayback: audio changes applied (smoothed)
    User->>SettingsPanel: commit audio changes
    SettingsPanel->>VideoEditor: onAudioSettingsCommit(settings)
    VideoEditor->>VideoEditor: commitState + persist project
    User->>VideoEditor: export video
    VideoEditor->>VideoExporter: export(config with audioSettings)
    VideoExporter->>VideoExporter: compare audioSettings vs defaults
    alt audioSettings differ
        VideoExporter->>AudioContext: run OfflineAudioContext pipeline (apply effects)
        AudioContext-->>VideoExporter: processed audio blob (reports audioProgress)
    end
    VideoExporter->>User: final export
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
🚥 Pre-merge checks | ✅ 2 passed | ❌ 1 failed (1 warning)
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 735a13958f
src/lib/exporter/audioEncoder.ts (outdated)

```ts
let lastNode: AudioNode = sourceNode;

if (audioSettings) {
```
Skip default audio FX for speed-only exports
`process()` computes `hasActiveAudioSettings`, but `renderPitchPreservedTimelineAudio()` still applies the filter chain whenever `audioSettings` is merely present. Speed edits always enter this path, and VideoEditor passes the defaults, so exports with speed regions but untouched audio sliders get unintended processing (including the default `loudnessDb: -12` attenuation): output audio is altered even when users did not opt into audio edits.
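As a sketch of the gating this comment suggests, the export path could skip the FX chain whenever the settings are indistinguishable from the untouched defaults. The `AudioSettings` shape uses the four fields discussed in this PR, but the default values here are illustrative assumptions (only `loudnessDb: -12` and `highpassHz: 80` appear in this review):

```typescript
interface AudioSettings {
  highpassHz: number;
  compressionRatio: number;
  trebleDb: number;
  loudnessDb: number;
}

// Assumed defaults; only loudnessDb (-12) and highpassHz (80) come from the PR.
const DEFAULT_AUDIO_SETTINGS: AudioSettings = {
  highpassHz: 80,
  compressionRatio: 4,
  trebleDb: 3,
  loudnessDb: -12,
};

function settingsEqual(a: AudioSettings, b: AudioSettings): boolean {
  return (
    a.highpassHz === b.highpassHz &&
    a.compressionRatio === b.compressionRatio &&
    a.trebleDb === b.trebleDb &&
    a.loudnessDb === b.loudnessDb
  );
}

// Speed-only exports pass the defaults; treat that as "untouched" and skip FX.
function shouldApplyAudioFx(settings: AudioSettings | undefined): boolean {
  return settings !== undefined && !settingsEqual(settings, DEFAULT_AUDIO_SETTINGS);
}
```

With this gate, `renderPitchPreservedTimelineAudio()` would bypass the filter chain for speed-only exports while still applying it once any slider moves.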
```ts
generateWaveformPeaks(videoPath)
  .then((res) => {
    console.log("[Waveform] peaks generated:", res.peaks.length);
    setAudioPeaks(res.peaks);
```
Ignore stale waveform results after source changes
This async waveform effect has no cancellation/version guard, so if videoPath changes while a prior generateWaveformPeaks() call is in flight, the older promise can resolve later and overwrite audioPeaks (and loading state) for the newly selected video. The regression is visible when users switch/open projects quickly and see a waveform that does not match the current media.
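The needed guard can be reduced to a version counter, independent of React (the identifiers below are illustrative, not the component's real ones): each effect run takes a ticket, and only the holder of the latest ticket may commit results.

```typescript
// Each new waveform request invalidates all earlier in-flight requests.
function createWaveformGuard() {
  let version = 0;
  return {
    // Call when an effect run starts (e.g. when videoPath changes).
    begin(): number {
      return ++version;
    },
    // Call before committing state from an async result.
    isCurrent(v: number): boolean {
      return v === version;
    },
  };
}
```

Inside the effect, `const ticket = guard.begin()` runs first, and every `.then`/`.catch`/`.finally` handler checks `guard.isCurrent(ticket)` before touching `audioPeaks` or the loading flag.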
```diff
 import type { WebcamLayoutPreset } from "@/lib/compositeLayout";

-export type ZoomDepth = 1 | 2 | 3 | 4 | 5 | 6;
+export type ZoomDepth = 1 | 2 | 3 | 4 | 5 | 6 | 7;
```
Update project zoom validation for the new depth level
Adding depth 7 here enables the new 5× zoom option, but project normalization still accepts only [1,2,3,4,5,6] in normalizeProjectEditor (projectPersistence.ts), so saved zoom regions at depth 7 are coerced back to the default when reopening a project. This causes user edits at the new zoom level to be lost across save/load.
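One way to prevent this class of drift is to derive both the type and the validation from a single source of truth. This is a sketch under assumptions: the identifiers mirror the PR, but the surrounding code and the default depth value are guesses:

```typescript
// Single source of truth for allowed zoom depths.
const ZOOM_DEPTHS = [1, 2, 3, 4, 5, 6, 7] as const;
type ZoomDepth = (typeof ZOOM_DEPTHS)[number];

// Assumed default; the project's real DEFAULT_ZOOM_DEPTH may differ.
const DEFAULT_ZOOM_DEPTH: ZoomDepth = 2;

// Persistence-layer normalization now can't disagree with the type.
function normalizeZoomDepth(value: unknown): ZoomDepth {
  return (ZOOM_DEPTHS as readonly number[]).includes(value as number)
    ? (value as ZoomDepth)
    : DEFAULT_ZOOM_DEPTH;
}
```

Adding a depth 8 later would then only require editing `ZOOM_DEPTHS`; the type union and `normalizeProjectEditor`'s check would update together.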
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/components/video-editor/projectPersistence.ts (1)
193-193: ⚠️ Potential issue | 🔴 Critical

Bug: Missing depth `7` in validation causes new zoom level to be rejected.

The validation array `[1, 2, 3, 4, 5, 6]` doesn't include the newly added depth `7`. When loading a project with depth-7 zoom regions, they will be reset to `DEFAULT_ZOOM_DEPTH`.

🐛 Fix: Include depth 7 in validation

```diff
-        depth: [1, 2, 3, 4, 5, 6].includes(region.depth) ? region.depth : DEFAULT_ZOOM_DEPTH,
+        depth: [1, 2, 3, 4, 5, 6, 7].includes(region.depth) ? region.depth : DEFAULT_ZOOM_DEPTH,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/video-editor/projectPersistence.ts` at line 193, The depth validation currently checks [1,2,3,4,5,6] and thus rejects region.depth === 7; update the validation in the loader/serializer where region.depth is normalized (the expression using region.depth and DEFAULT_ZOOM_DEPTH) to include 7 (e.g., [1,2,3,4,5,6,7]) so depth 7 zoom regions are preserved instead of being reset to DEFAULT_ZOOM_DEPTH.
🧹 Nitpick comments (2)
src/components/video-editor/VideoPlayback.tsx (1)
275-278: The "LUFS" label is misleading for a simple gain control.

The `loudnessDb` is converted to linear gain using `10^(dB/20)`, which is a standard dB-to-linear conversion. However, the UI displays this as "LUFS" (Loudness Units Full Scale), which is an integrated loudness measurement standard, not the same as applying a simple gain. True LUFS-based loudness normalization requires analyzing the audio and dynamically adjusting gain. The current implementation is a static makeup gain, which is fine functionally, but the label is technically inaccurate.

Consider renaming the UI label from "LUFS" to "dB" or "Makeup Gain" to accurately describe the behavior.
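For reference, the conversion in question is the standard amplitude mapping, which is exactly why "dB" is the accurate label here:

```typescript
// Static makeup gain: decibels to linear amplitude (the 10^(dB/20) rule).
function dbToLinearGain(db: number): number {
  return Math.pow(10, db / 20);
}

// 0 dB is unity gain; -6 dB is roughly half amplitude; -12 dB roughly a quarter.
```

Nothing here measures program loudness, so no value produced by this function is a LUFS figure.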
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/video-editor/VideoPlayback.tsx` around lines 275 - 278, The UI label "LUFS" is misleading because the code (loudnessDb -> linearGain via Math.pow(10, loudnessDb/20) applied to loudnessGainNodeRef.current.gain) implements a static makeup gain, not LUFS measurement/normalization; update the UI label and any associated tooltip from "LUFS" to a more accurate term such as "dB" or "Makeup Gain" wherever the loudness control is rendered (the control tied to loudnessDb and loudnessGainNodeRef), and adjust any helper text or tests to reflect the new label.

src/components/video-editor/projectPersistence.ts (1)
398-403: Consider validating individual `audioSettings` fields for robustness.

The shallow merge approach is consistent with other fields in this normalizer, but unlike numeric fields elsewhere that use `isFiniteNumber` checks, the `audioSettings` fields are merged without type/range validation. If corrupted project data contains invalid values (e.g., `highpassHz: "invalid"`), they would pass through. This is a low-risk edge case since the source is typically trusted serialized state, but for defensive consistency you could validate each field:
♻️ Optional: Add field-level validation
```diff
   audioSettings: {
     ...DEFAULT_AUDIO_SETTINGS,
-    ...(editor.audioSettings && typeof editor.audioSettings === "object"
-      ? editor.audioSettings
-      : {}),
+    ...(editor.audioSettings && typeof editor.audioSettings === "object"
+      ? {
+          highpassHz: isFiniteNumber(editor.audioSettings.highpassHz)
+            ? clamp(editor.audioSettings.highpassHz, 0, 300)
+            : DEFAULT_AUDIO_SETTINGS.highpassHz,
+          compressionRatio: isFiniteNumber(editor.audioSettings.compressionRatio)
+            ? clamp(editor.audioSettings.compressionRatio, 1, 20)
+            : DEFAULT_AUDIO_SETTINGS.compressionRatio,
+          trebleDb: isFiniteNumber(editor.audioSettings.trebleDb)
+            ? clamp(editor.audioSettings.trebleDb, 0, 15)
+            : DEFAULT_AUDIO_SETTINGS.trebleDb,
+          loudnessDb: isFiniteNumber(editor.audioSettings.loudnessDb)
+            ? clamp(editor.audioSettings.loudnessDb, -24, 0)
+            : DEFAULT_AUDIO_SETTINGS.loudnessDb,
+        }
+      : {}),
   },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/video-editor/projectPersistence.ts` around lines 398 - 403, The shallow merge for audioSettings (using DEFAULT_AUDIO_SETTINGS and editor.audioSettings) allows invalid types/values to slip through; update the normalization in projectPersistence.ts to validate each audioSettings field (e.g., highpassHz, lowpassHz, gainDb, etc.) before merging by reusing the existing isFiniteNumber helper and applying appropriate ranges or fallbacks to DEFAULT_AUDIO_SETTINGS for each field when editor.audioSettings has invalid values; locate the merge around audioSettings and replace the direct spread with per-field checks that coerce/ignore non-finite or out-of-range numbers.
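The suggested per-field normalization can be sketched end to end as follows. The ranges follow the slider bounds quoted in the diff; the defaults for `compressionRatio` and `trebleDb` are assumptions (only `highpassHz: 80` and `loudnessDb: -12` appear elsewhere in this review):

```typescript
interface AudioSettings {
  highpassHz: number;
  compressionRatio: number;
  trebleDb: number;
  loudnessDb: number;
}

// Assumed defaults; compressionRatio and trebleDb values are placeholders.
const DEFAULT_AUDIO_SETTINGS: AudioSettings = {
  highpassHz: 80,
  compressionRatio: 4,
  trebleDb: 3,
  loudnessDb: -12,
};

const isFiniteNumber = (v: unknown): v is number =>
  typeof v === "number" && Number.isFinite(v);

const clamp = (v: number, lo: number, hi: number) =>
  Math.min(hi, Math.max(lo, v));

function normalizeField(value: unknown, lo: number, hi: number, fallback: number): number {
  return isFiniteNumber(value) ? clamp(value, lo, hi) : fallback;
}

// Coerce arbitrary serialized data into a valid AudioSettings object.
function normalizeAudioSettings(raw: unknown): AudioSettings {
  const src =
    raw && typeof raw === "object"
      ? (raw as Partial<Record<keyof AudioSettings, unknown>>)
      : {};
  return {
    highpassHz: normalizeField(src.highpassHz, 0, 300, DEFAULT_AUDIO_SETTINGS.highpassHz),
    compressionRatio: normalizeField(src.compressionRatio, 1, 20, DEFAULT_AUDIO_SETTINGS.compressionRatio),
    trebleDb: normalizeField(src.trebleDb, 0, 15, DEFAULT_AUDIO_SETTINGS.trebleDb),
    loudnessDb: normalizeField(src.loudnessDb, -24, 0, DEFAULT_AUDIO_SETTINGS.loudnessDb),
  };
}
```

A corrupted project file then degrades to sensible defaults instead of leaking strings or out-of-range numbers into the audio graph.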
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/components/video-editor/SettingsPanel.tsx`:
- Line 1046: Replace hardcoded English strings in the Audio Adjustments section
of the SettingsPanel component with i18n calls: wrap the section title and all
slider labels/descriptions in t(...) (e.g., change "Audio Adjustments" to
t("audio.title") and use t("audio.highpass"), t("audio.highpassDesc"), etc. for
each slider label/description), ensuring you import/use the same t function
already used elsewhere in SettingsPanel.tsx so keys match the project's
translation keys.
- Around line 1060-1075: The handlers in SettingsPanel.tsx use the non-null
assertion audioSettings! inside the Slider callbacks which will throw if
audioSettings is undefined; update the onValueChange and onValueCommit for the
highpass (and the other three sliders: Compression, Treble, Loudness Target) to
guard by building a safe base object (e.g., const base = audioSettings ?? { /*
default audio settings */ }) and then spread that base when calling
onAudioSettingsChange/onAudioSettingsCommit, and ensure the committed value uses
the same fallback default (highpassHz default 80) so no spreading of undefined
occurs.
In `@src/components/video-editor/timeline/AudioWaveform.tsx`:
- Around line 108-122: The two hardcoded UI strings in AudioWaveform.tsx
("Audio" label and "Processing waveform…") bypass i18n and must be replaced with
translated strings; import and use the project's i18n hook (e.g., useTranslation
or the existing t function used elsewhere in the codebase) inside the
AudioWaveform component and call t(...) for both texts (suggest keys like
"timeline.audio" and "timeline.processingWaveform"), then update the locale
resource files with these keys so translations render instead of the English
literals.
In `@src/components/video-editor/VideoEditor.tsx`:
- Around line 369-387: The useEffect that calls generateWaveformPeaks(videoPath)
can update state after videoPath has changed; add a local cancelled flag (e.g.,
let cancelled = false) inside the effect and return a cleanup that sets
cancelled = true, then before calling setAudioPeaks, setIsAudioWaveformLoading,
or toast inside the .then/.catch/.finally handlers check if cancelled is true
and skip any state/log updates if so; target the generateWaveformPeaks call and
its promise handlers (and ensure the else branch still clears state immediately
when videoPath is falsy).
- Around line 1279-1280: The handleExport useCallback is capturing stale
audioSettings so exports use previous values; update the useCallback dependency
array for handleExport to include audioSettings (the same state passed into
VideoExporter and GifExporter) so the callback is recreated when audioSettings
change, and if audioSettings is an object ensure it is stable/memoized (or
derive primitive deps) to avoid unnecessary re-creations.
In `@src/components/video-editor/VideoPlayback.tsx`:
- Around line 219-258: The audio init effect currently runs only on mount
(useEffect with []), so if videoRef.current is null it never initializes; update
the logic to run when the video element becomes available by either adding the
video reference/state to the effect dependencies (e.g., depend on
videoRef.current or a boolean like videoReady) or move the initialization into
the video load handler (e.g., handleLoadedMetadata) and call the same setup
routine; ensure you preserve the _audioInitialized guard and set
audioCtxRef/current node refs (highpassNodeRef, compressorNodeRef,
trebleNodeRef, loudnessGainNodeRef) in the moved/updated init so the Web Audio
graph is built once when the DOM video element exists.
In `@src/lib/audioWaveform.ts`:
- Around line 45-60: The peaks computation currently reads only channel 0 and
computes samplesPerPeak with Math.floor(rawData.length / numPeaks) which can be
zero for very short buffers; update the logic in the peaks calculation (symbols:
audioBuffer.getChannelData, rawData, samplesPerPeak, numPeaks, peaks, globalMax)
to (1) mix across all channels by iterating audioBuffer.numberOfChannels and
taking the max absolute sample from every channel for each sample index used in
a bin, and (2) clamp samplesPerPeak to at least 1 (e.g., Math.max(1,
Math.floor(...))) so bins are never size 0; then compute per-bin max across
those mixed samples and update peaks[i] and globalMax accordingly.
In `@src/lib/exporter/audioEncoder.ts`:
- Around line 33-42: The code treats specific literals as "no audio processing"
but those literals actually equal DEFAULT_AUDIO_SETTINGS which is not neutral;
update the hasActiveAudioSettings check to detect the real neutral state by
comparing audioSettings to DEFAULT_AUDIO_SETTINGS (deep equality) or by
adding/using a helper like isNeutralAudioSettings(audioSettings) and ensure that
when audioSettings equals DEFAULT_AUDIO_SETTINGS we still consider it as active
processing so the conditional guarding timeline playback (the if using
sortedSpeedRegions and hasActiveAudioSettings, and functions
processTrimOnlyAudio(...) and timeline playback path) selects the
audio-processing/timeline path rather than falling back to processTrimOnlyAudio;
reference audioSettings, DEFAULT_AUDIO_SETTINGS, hasActiveAudioSettings,
sortedSpeedRegions, and processTrimOnlyAudio when making the change.
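The fixed peak computation described in the `audioWaveform.ts` item above can be sketched as follows, using a minimal stand-in for the Web Audio `AudioBuffer` interface; the final normalization step is an assumption based on typical waveform rendering:

```typescript
// Minimal structural stand-in for AudioBuffer's channel access.
interface ChannelData {
  numberOfChannels: number;
  getChannelData(ch: number): Float32Array;
}

function computePeaks(buffer: ChannelData, numPeaks: number): number[] {
  const length = buffer.getChannelData(0).length;
  // Clamp to 1 so very short buffers never produce zero-sample bins.
  const samplesPerPeak = Math.max(1, Math.floor(length / numPeaks));
  const peaks: number[] = [];
  let globalMax = 0;
  for (let i = 0; i < numPeaks; i++) {
    const start = i * samplesPerPeak;
    const end = Math.min(start + samplesPerPeak, length);
    let binMax = 0;
    // Mix across all channels: take the loudest absolute sample in the bin.
    for (let ch = 0; ch < buffer.numberOfChannels; ch++) {
      const data = buffer.getChannelData(ch);
      for (let s = start; s < end; s++) {
        const abs = Math.abs(data[s]);
        if (abs > binMax) binMax = abs;
      }
    }
    peaks.push(binMax);
    if (binMax > globalMax) globalMax = binMax;
  }
  // Normalize so the tallest peak fills the waveform lane (assumed behavior).
  return globalMax > 0 ? peaks.map((p) => p / globalMax) : peaks;
}
```

This covers both findings: a loud right channel now shows up in the waveform, and a clip shorter than the requested peak count no longer divides into empty bins.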
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: e6c3126a-fdfe-4106-a2fc-e2a66fcbcd62
⛔ Files ignored due to path filters (1)
`package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (13)
electron/windows.ts
src/components/video-editor/SettingsPanel.tsx
src/components/video-editor/VideoEditor.tsx
src/components/video-editor/VideoPlayback.tsx
src/components/video-editor/projectPersistence.ts
src/components/video-editor/timeline/AudioWaveform.tsx
src/components/video-editor/timeline/Item.tsx
src/components/video-editor/timeline/TimelineEditor.tsx
src/components/video-editor/types.ts
src/hooks/useEditorHistory.ts
src/lib/audioWaveform.ts
src/lib/exporter/audioEncoder.ts
src/lib/exporter/videoExporter.ts
```tsx
<AccordionTrigger className="py-2.5 hover:no-underline">
  <div className="flex items-center gap-2">
    <Volume2 className="w-4 h-4 text-[#34B27B]" />
    <span className="text-xs font-medium">Audio Adjustments</span>
```
Hardcoded strings break i18n consistency.
The "Audio Adjustments" section uses hardcoded English strings while the rest of the panel uses the t() function for internationalization. This includes the section title and all slider labels/descriptions.
Consider using translation keys like:
- `t("audio.title")` for "Audio Adjustments"
- `t("audio.highpass")`, `t("audio.highpassDesc")` for slider labels
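To make the suggestion concrete, here is a hypothetical minimal lookup using the keys proposed above; the values and the `t()` shape are illustrative, not the project's actual i18n setup:

```typescript
// Hypothetical English resource bundle for the new audio section.
const resources: Record<string, string> = {
  "audio.title": "Audio Adjustments",
  "audio.highpass": "High-pass Filter",
  "audio.highpassDesc": "Removes low-frequency rumble below the cutoff",
};

function t(key: string): string {
  // Fall back to the key itself so missing translations stay visible.
  return resources[key] ?? key;
}
```

The JSX then reads `<span>{t("audio.title")}</span>`, and adding a locale is a resource-file change rather than a component edit.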
```tsx
<Slider
  value={[audioSettings?.highpassHz ?? 80]}
  onValueChange={(values) =>
    onAudioSettingsChange?.({ ...audioSettings!, highpassHz: values[0] })
  }
  onValueCommit={() =>
    onAudioSettingsCommit?.({
      ...audioSettings!,
      highpassHz: audioSettings?.highpassHz ?? 80,
    })
  }
  min={0}
  max={300}
  step={10}
  className="w-full [&_[role=slider]]:bg-[#34B27B] [&_[role=slider]]:border-[#34B27B] [&_[role=slider]]:h-3 [&_[role=slider]]:w-3"
/>
```
Malformed settings object when audioSettings is undefined.
The onValueChange and onValueCommit handlers use { ...audioSettings!, ... }. The non-null assertion only silences the compiler; at runtime, spreading an undefined audioSettings yields an object that silently drops every field except the one being set, so consumers receive an incomplete settings object even though the slider values themselves fall back to defaults via nullish coalescing.
🐛 Fix: Guard against undefined audioSettings

```diff
 <Slider
   value={[audioSettings?.highpassHz ?? 80]}
   onValueChange={(values) =>
-    onAudioSettingsChange?.({ ...audioSettings!, highpassHz: values[0] })
+    audioSettings && onAudioSettingsChange?.({ ...audioSettings, highpassHz: values[0] })
   }
   onValueCommit={() =>
-    onAudioSettingsCommit?.({
-      ...audioSettings!,
-      highpassHz: audioSettings?.highpassHz ?? 80,
-    })
+    audioSettings && onAudioSettingsCommit?.(audioSettings)
   }
```

Apply the same pattern to the other three sliders (Compression, Treble, Loudness Target).
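An alternative, also floated in the AI-agent prompt, is a safe base object instead of a guard clause. This is a plain-TypeScript sketch of that pattern; the `AudioSettings` shape and fallback values are assumptions (only `highpassHz: 80` appears in the snippet above):

```typescript
interface AudioSettings {
  highpassHz: number;
  compressionRatio: number;
  trebleDb: number;
  loudnessDb: number;
}

// Assumed fallbacks; only highpassHz's default of 80 comes from the PR.
const FALLBACK_SETTINGS: AudioSettings = {
  highpassHz: 80,
  compressionRatio: 4,
  trebleDb: 3,
  loudnessDb: -12,
};

// Build updates from a guaranteed-complete base instead of asserting with `!`.
function withHighpass(
  current: AudioSettings | undefined,
  highpassHz: number,
): AudioSettings {
  const base = current ?? FALLBACK_SETTINGS;
  return { ...base, highpassHz };
}
```

Each slider callback then calls something like `onAudioSettingsChange?.(withHighpass(audioSettings, values[0]))`, and the committed object is always complete.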
```tsx
<span className="text-[9px] font-semibold text-slate-500 uppercase tracking-widest px-2">
  Audio
</span>
</div>

{/* Waveform area */}
<div
  ref={containerRef}
  className="absolute top-0 bottom-0 overflow-hidden"
  style={{ left: sidebarWidth, right: 0 }}
>
  {isLoading ? (
    <div className="absolute inset-0 flex items-center justify-center gap-1.5 text-slate-600">
      <Loader2 className="w-3 h-3 animate-spin text-[#34B27B]/70" />
      <span className="text-[9px] font-medium text-slate-600">Processing waveform…</span>
```
Localize the new waveform strings.
"Audio" and "Processing waveform…" bypass the existing i18n layer, so this row will stay English even when the rest of the editor is translated.
```tsx
useEffect(() => {
  if (videoPath) {
    setIsAudioWaveformLoading(true);
    generateWaveformPeaks(videoPath)
      .then((res) => {
        console.log("[Waveform] peaks generated:", res.peaks.length);
        setAudioPeaks(res.peaks);
      })
      .catch((err) => {
        console.error("[Waveform] FAILED:", err);
        toast.error("Waveform failed: " + String(err).slice(0, 80));
        setAudioPeaks(null);
      })
      .finally(() => setIsAudioWaveformLoading(false));
  } else {
    setAudioPeaks(null);
    setIsAudioWaveformLoading(false);
  }
}, [videoPath]);
```
Cancel in-flight waveform requests when the video path changes.
The useEffect at lines 369–387 has a race condition: if videoPath changes while generateWaveformPeaks(videoPath) is still running, the older promise will eventually resolve and overwrite the state for the new video. This can set audioPeaks to stale data and incorrectly clear isAudioWaveformLoading before the new waveform finishes processing.
Add a cleanup function with a cancelled flag to prevent stale state updates:
Suggested fix

```diff
 useEffect(() => {
+  let cancelled = false;
+
   if (videoPath) {
     setIsAudioWaveformLoading(true);
     generateWaveformPeaks(videoPath)
       .then((res) => {
+        if (!cancelled) {
-        console.log("[Waveform] peaks generated:", res.peaks.length);
-        setAudioPeaks(res.peaks);
+          console.log("[Waveform] peaks generated:", res.peaks.length);
+          setAudioPeaks(res.peaks);
+        }
       })
       .catch((err) => {
+        if (!cancelled) {
-        console.error("[Waveform] FAILED:", err);
-        toast.error("Waveform failed: " + String(err).slice(0, 80));
-        setAudioPeaks(null);
+          console.error("[Waveform] FAILED:", err);
+          toast.error("Waveform failed: " + String(err).slice(0, 80));
+          setAudioPeaks(null);
+        }
       })
-      .finally(() => setIsAudioWaveformLoading(false));
+      .finally(() => {
+        if (!cancelled) {
+          setIsAudioWaveformLoading(false);
+        }
+      });
   } else {
     setAudioPeaks(null);
     setIsAudioWaveformLoading(false);
   }
+
+  return () => {
+    cancelled = true;
+  };
 }, [videoPath]);
```
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| useEffect(() => { | |
| if (videoPath) { | |
| setIsAudioWaveformLoading(true); | |
| generateWaveformPeaks(videoPath) | |
| .then((res) => { | |
| console.log("[Waveform] peaks generated:", res.peaks.length); | |
| setAudioPeaks(res.peaks); | |
| }) | |
| .catch((err) => { | |
| console.error("[Waveform] FAILED:", err); | |
| toast.error("Waveform failed: " + String(err).slice(0, 80)); | |
| setAudioPeaks(null); | |
| }) | |
| .finally(() => setIsAudioWaveformLoading(false)); | |
| } else { | |
| setAudioPeaks(null); | |
| setIsAudioWaveformLoading(false); | |
| } | |
| }, [videoPath]); | |
| useEffect(() => { | |
| let cancelled = false; | |
| if (videoPath) { | |
| setIsAudioWaveformLoading(true); | |
| generateWaveformPeaks(videoPath) | |
| .then((res) => { | |
| if (!cancelled) { | |
| console.log("[Waveform] peaks generated:", res.peaks.length); | |
| setAudioPeaks(res.peaks); | |
| } | |
| }) | |
| .catch((err) => { | |
| if (!cancelled) { | |
| console.error("[Waveform] FAILED:", err); | |
| toast.error("Waveform failed: " + String(err).slice(0, 80)); | |
| setAudioPeaks(null); | |
| } | |
| }) | |
| .finally(() => { | |
| if (!cancelled) { | |
| setIsAudioWaveformLoading(false); | |
| } | |
| }); | |
| } else { | |
| setAudioPeaks(null); | |
| setIsAudioWaveformLoading(false); | |
| } | |
| return () => { | |
| cancelled = true; | |
| }; | |
| }, [videoPath]); |
```ts
  audioSettings,
  onProgress: (progress: ExportProgress) => {
```
handleExport captures stale audioSettings due to missing dependency.
audioSettings is passed to both VideoExporter and GifExporter configs but is missing from the handleExport useCallback dependency array. When audio settings change, the callback isn't recreated, so exports immediately after changing an audio control will use the previous settings.
Add audioSettings to the dependency array:
Suggested fix

```diff
     handleExportSaved,
     cursorTelemetry,
+    audioSettings,
   ],
 );
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/components/video-editor/VideoEditor.tsx` around lines 1279 - 1280, The
handleExport useCallback is capturing stale audioSettings so exports use
previous values; update the useCallback dependency array for handleExport to
include audioSettings (the same state passed into VideoExporter and GifExporter)
so the callback is recreated when audioSettings change, and if audioSettings is
an object ensure it is stable/memoized (or derive primitive deps) to avoid
unnecessary re-creations.
```ts
useEffect(() => {
  const video = videoRef.current;
  if (!video || (video as any)._audioInitialized) return;
  (video as any)._audioInitialized = true;

  try {
    const ctx = new window.AudioContext();
    audioCtxRef.current = ctx;

    const source = ctx.createMediaElementSource(video);

    const highpass = ctx.createBiquadFilter();
    highpass.type = "highpass";

    const compressor = ctx.createDynamicsCompressor();

    const treble = ctx.createBiquadFilter();
    treble.type = "highshelf";
    treble.frequency.value = 3000;

    const loudnessGain = ctx.createGain();

    source.connect(highpass);
    highpass.connect(compressor);
    compressor.connect(treble);
    treble.connect(loudnessGain);
    loudnessGain.connect(ctx.destination);

    highpassNodeRef.current = highpass;
    compressorNodeRef.current = compressor;
    trebleNodeRef.current = treble;
    loudnessGainNodeRef.current = loudnessGain;

    if (!video.paused) {
      ctx.resume().catch(console.error);
    }
  } catch (e) {
    console.error("Failed to initialize Web Audio API:", e);
  }
}, []);
```
Audio initialization may silently fail if video element isn't mounted on first effect run.
The useEffect has an empty dependency array [], meaning it runs only once on component mount. If videoRef.current is null at that moment (which can happen since the video element renders later in the JSX), the audio graph will never be initialized. There's no retry mechanism.
🐛 Fix: Add video element to dependencies or use a separate initialization trigger

```diff
 useEffect(() => {
   const video = videoRef.current;
   if (!video || (video as any)._audioInitialized) return;
+  // Ensure video has a valid source before initializing audio
+  if (!video.src) return;
   (video as any)._audioInitialized = true;
   try {
     // ... audio initialization code
   } catch (e) {
     console.error("Failed to initialize Web Audio API:", e);
+    // Reset flag to allow retry
+    (video as any)._audioInitialized = false;
   }
-}, []);
+}, [videoPath]); // Re-run when video path changes
```

Alternatively, trigger initialization in handleLoadedMetadata or when videoReady becomes true.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/components/video-editor/VideoPlayback.tsx` around lines 219 - 258, The
audio init effect currently runs only on mount (useEffect with []), so if
videoRef.current is null it never initializes; update the logic to run when the
video element becomes available by either adding the video reference/state to
the effect dependencies (e.g., depend on videoRef.current or a boolean like
videoReady) or move the initialization into the video load handler (e.g.,
handleLoadedMetadata) and call the same setup routine; ensure you preserve the
_audioInitialized guard and set audioCtxRef/current node refs (highpassNodeRef,
compressorNodeRef, trebleNodeRef, loudnessGainNodeRef) in the moved/updated init
so the Web Audio graph is built once when the DOM video element exists.
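The guard logic above can be factored into a small pure helper so the same check runs from both the mount effect and the load handler. This is an illustrative sketch, not code from the PR; the `AudioInitTarget` shape and `shouldInitAudio` name are assumptions:

```typescript
// Hypothetical helper extracting the init guard so both the mount effect and
// handleLoadedMetadata can share one initialization routine.
interface AudioInitTarget {
  src?: string;
  _audioInitialized?: boolean;
}

function shouldInitAudio(video: AudioInitTarget | null): boolean {
  if (!video) return false;                  // element not mounted yet
  if (video._audioInitialized) return false; // audio graph already built
  if (!video.src) return false;              // no media source attached yet
  return true;
}

// Possible call sites (illustrative):
// useEffect(() => { if (shouldInitAudio(videoRef.current)) initAudioGraph(); }, [videoPath]);
// <video onLoadedMetadata={() => { if (shouldInitAudio(videoRef.current)) initAudioGraph(); }} />
```

Because the guard is centralized, retrying after a failed init only requires clearing `_audioInitialized`.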
```ts
// 3. Compute peaks — use channel 0 (or mix down if stereo)
const rawData = audioBuffer.getChannelData(0);
const samplesPerPeak = Math.floor(rawData.length / numPeaks);
const peaks = new Float32Array(numPeaks);

let globalMax = 0;
for (let i = 0; i < numPeaks; i++) {
  const start = i * samplesPerPeak;
  let max = 0;
  for (let j = 0; j < samplesPerPeak; j++) {
    const abs = Math.abs(rawData[start + j] || 0);
    if (abs > max) max = abs;
  }
  peaks[i] = max;
  if (max > globalMax) globalMax = max;
}
```
Build peaks from all channels and clamp the bin size.
This only samples channel 0, so right-only audio can render as silence. Math.floor(rawData.length / numPeaks) also becomes 0 on very short clips, which makes every peak stay at 0. Both cases produce a misleading blank waveform even though the source contains audio.
Suggested fix

```diff
-  const rawData = audioBuffer.getChannelData(0);
-  const samplesPerPeak = Math.floor(rawData.length / numPeaks);
+  const channelData = Array.from({ length: audioBuffer.numberOfChannels }, (_, index) =>
+    audioBuffer.getChannelData(index),
+  );
+  const sampleCount = channelData[0]?.length ?? 0;
+  const samplesPerPeak = Math.max(1, Math.ceil(sampleCount / numPeaks));
   const peaks = new Float32Array(numPeaks);

   let globalMax = 0;
   for (let i = 0; i < numPeaks; i++) {
     const start = i * samplesPerPeak;
     let max = 0;
-    for (let j = 0; j < samplesPerPeak; j++) {
-      const abs = Math.abs(rawData[start + j] || 0);
-      if (abs > max) max = abs;
+    const end = Math.min(sampleCount, start + samplesPerPeak);
+    for (let j = start; j < end; j++) {
+      for (const channel of channelData) {
+        const abs = Math.abs(channel[j] ?? 0);
+        if (abs > max) max = abs;
+      }
     }
     peaks[i] = max;
     if (max > globalMax) globalMax = max;
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/lib/audioWaveform.ts` around lines 45 - 60, The peaks computation
currently reads only channel 0 and computes samplesPerPeak with
Math.floor(rawData.length / numPeaks) which can be zero for very short buffers;
update the logic in the peaks calculation (symbols: audioBuffer.getChannelData,
rawData, samplesPerPeak, numPeaks, peaks, globalMax) to (1) mix across all
channels by iterating audioBuffer.numberOfChannels and taking the max absolute
sample from every channel for each sample index used in a bin, and (2) clamp
samplesPerPeak to at least 1 (e.g., Math.max(1, Math.floor(...))) so bins are
never size 0; then compute per-bin max across those mixed samples and update
peaks[i] and globalMax accordingly.
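The fixed peak computation can be sketched as a standalone function. Plain `number[][]` stands in for `AudioBuffer` channel data here so the logic is environment-agnostic; the function name is illustrative:

```typescript
// Per-bin max-absolute-sample peaks across all channels, with the bin size
// clamped to at least 1 so very short buffers still produce non-zero peaks.
function computePeaks(channels: number[][], numPeaks: number): number[] {
  const sampleCount = channels[0]?.length ?? 0;
  const samplesPerPeak = Math.max(1, Math.ceil(sampleCount / numPeaks));
  const peaks = new Array<number>(numPeaks).fill(0);
  for (let i = 0; i < numPeaks; i++) {
    const start = i * samplesPerPeak;
    const end = Math.min(sampleCount, start + samplesPerPeak);
    let max = 0;
    for (let j = start; j < end; j++) {
      for (const channel of channels) {
        const abs = Math.abs(channel[j] ?? 0);
        if (abs > max) max = abs;
      }
    }
    peaks[i] = max;
  }
  return peaks;
}
```

With this shape, a right-only stereo clip still yields visible peaks, and a clip shorter than `numPeaks` samples pads trailing bins with zero instead of blanking the whole waveform.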
```ts
const hasActiveAudioSettings =
  !!audioSettings &&
  (audioSettings.highpassHz !== 80 ||
    audioSettings.compressionRatio !== 4 ||
    audioSettings.trebleDb !== 5 ||
    audioSettings.loudnessDb !== -12);

// Speed edits must use timeline playback to preserve pitch.
// Audio edits also use timeline playback to evaluate the Web Audio graph.
if (sortedSpeedRegions.length > 0 || hasActiveAudioSettings) {
```
Don't treat the default preset as “no audio processing.”
These literals match DEFAULT_AUDIO_SETTINGS, but that preset is not neutral. Any project saved with the default preset will drop back to processTrimOnlyAudio(...) here and export different audio than the preview path in src/components/video-editor/VideoPlayback.tsx.
Suggested fix

```diff
-const hasActiveAudioSettings =
-  !!audioSettings &&
-  (audioSettings.highpassHz !== 80 ||
-    audioSettings.compressionRatio !== 4 ||
-    audioSettings.trebleDb !== 5 ||
-    audioSettings.loudnessDb !== -12);
+const hasActiveAudioSettings = !!audioSettings;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/lib/exporter/audioEncoder.ts` around lines 33 - 42, The code treats
specific literals as "no audio processing" but those literals actually equal
DEFAULT_AUDIO_SETTINGS which is not neutral; update the hasActiveAudioSettings
check to detect the real neutral state by comparing audioSettings to
DEFAULT_AUDIO_SETTINGS (deep equality) or by adding/using a helper like
isNeutralAudioSettings(audioSettings) and ensure that when audioSettings equals
DEFAULT_AUDIO_SETTINGS we still consider it as active processing so the
conditional guarding timeline playback (the if using sortedSpeedRegions and
hasActiveAudioSettings, and functions processTrimOnlyAudio(...) and timeline
playback path) selects the audio-processing/timeline path rather than falling
back to processTrimOnlyAudio; reference audioSettings, DEFAULT_AUDIO_SETTINGS,
hasActiveAudioSettings, sortedSpeedRegions, and processTrimOnlyAudio when making
the change.
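If the codebase ever does need to detect the default preset explicitly (rather than treating any settings object as active), the AI prompt's `isNeutralAudioSettings`-style helper could compare against the exported defaults instead of hard-coded literals. A minimal sketch, assuming the `AudioSettings` shape and default values shown in the finding:

```typescript
interface AudioSettings {
  highpassHz: number;
  compressionRatio: number;
  trebleDb: number;
  loudnessDb: number;
}

// Mirrors the literals in the finding; the real values live in types.ts.
const DEFAULT_AUDIO_SETTINGS: AudioSettings = {
  highpassHz: 80,
  compressionRatio: 4,
  trebleDb: 5,
  loudnessDb: -12,
};

// True when every field equals the default preset (or no settings exist).
function matchesDefaults(settings: AudioSettings | undefined): boolean {
  if (!settings) return true;
  return (Object.keys(DEFAULT_AUDIO_SETTINGS) as (keyof AudioSettings)[]).every(
    (key) => settings[key] === DEFAULT_AUDIO_SETTINGS[key],
  );
}
```

Note the review's point still stands: since the default preset is not acoustically neutral, `matchesDefaults(...)` returning true must not be taken to mean "skip processing".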
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/lib/exporter/audioEncoder.ts`:
- Around line 427-477: The encodeAudioBufferToBlob method currently uses a live
AudioContext + MediaRecorder which records in real time; replace that real-time
path with a non-realtime WebCodecs-based encoder similar to the trim-only path:
convert the AudioBuffer frames to raw PCM frames, instantiate an AudioEncoder
(or the same encoder used in the trim-only flow), feed encoded chunks to collect
a Blob/ArrayBuffer, and return that result; update encodeAudioBufferToBlob to
branch to WebCodecs when available (fall back to the existing MediaRecorder flow
if not), reuse getSupportedAudioMimeType/AUDIO_BITRATE semantics for
container/mime decisions, and ensure proper close/flush of the AudioEncoder and
cleanup before resolving the Promise.
- Around line 502-504: The current branch returns a full-length segment when
keptSegments is empty, which re-introduces audio that should be fully trimmed;
update the early-return in the function that computes segments so that if
keptSegments.length === 0 you return an empty array ([]) instead of [{ startMs:
0, endMs: totalDurationMs, speed: 1 }], and ensure the return type still matches
the function signature and any callers tolerate an empty segments array (adjust
consumer expectations if necessary).
- Around line 400-410: The highpass filter is being set from
audioSettings.highpassHz which may be 0 (slider min), but
BiquadFilterNode.frequency requires positive values; update the UI slider in
SettingsPanel (where the highpassHz slider is defined) to use min={20} OR add
runtime validation in applyAudioSettings()/the code that sets highpass.frequency
(e.g., before calling offCtx.createBiquadFilter() or when assigning
audioSettings.highpassHz) to clamp/replace non-positive values with a safe floor
like 20 Hz so highpass.frequency.value is never set to 0 or negative; ensure you
reference the highpass variable and audioSettings.highpassHz in your change so
the validation/slider update is consistent.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 5639a43f-4204-4b1d-8105-08bea2d218d8
📒 Files selected for processing (4)
- src/components/video-editor/ExportDialog.tsx
- src/lib/exporter/audioEncoder.ts
- src/lib/exporter/types.ts
- src/lib/exporter/videoExporter.ts
✅ Files skipped from review due to trivial changes (1)
- src/lib/exporter/types.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- src/lib/exporter/videoExporter.ts
```ts
const highpass = offCtx.createBiquadFilter();
highpass.type = "highpass";
highpass.frequency.value = audioSettings.highpassHz;

const compressor = offCtx.createDynamicsCompressor();
compressor.ratio.value = audioSettings.compressionRatio;

const treble = offCtx.createBiquadFilter();
treble.type = "highshelf";
treble.frequency.value = 3000;
treble.gain.value = audioSettings.trebleDb;
```
Fix invalid highpass frequency constraint in UI.
The highpassHz slider allows min={0} (SettingsPanel.tsx:1071), but BiquadFilterNode.frequency requires positive values. Setting frequency to 0 Hz will cause browser clamping or errors. Either update the slider to min={20} (typical audio range floor) or add runtime validation in applyAudioSettings().
The compressionRatio slider constraint (min={1}, max={20}) already aligns correctly with DynamicsCompressorNode.ratio requirements.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/lib/exporter/audioEncoder.ts` around lines 400 - 410, The highpass filter
is being set from audioSettings.highpassHz which may be 0 (slider min), but
BiquadFilterNode.frequency requires positive values; update the UI slider in
SettingsPanel (where the highpassHz slider is defined) to use min={20} OR add
runtime validation in applyAudioSettings()/the code that sets highpass.frequency
(e.g., before calling offCtx.createBiquadFilter() or when assigning
audioSettings.highpassHz) to clamp/replace non-positive values with a safe floor
like 20 Hz so highpass.frequency.value is never set to 0 or negative; ensure you
reference the highpass variable and audioSettings.highpassHz in your change so
the validation/slider update is consistent.
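The runtime-validation option can be sketched as a clamp applied at the assignment site. The 20 Hz floor comes from the finding; the helper name and `MIN_HIGHPASS_HZ` constant are illustrative:

```typescript
// Clamp a user-provided highpass cutoff to a safe positive floor before
// assigning it to BiquadFilterNode.frequency; also guards NaN/Infinity.
const MIN_HIGHPASS_HZ = 20;

function clampHighpassHz(value: number): number {
  if (!Number.isFinite(value) || value < MIN_HIGHPASS_HZ) return MIN_HIGHPASS_HZ;
  return value;
}

// Usage at the assignment site:
// highpass.frequency.value = clampHighpassHz(audioSettings.highpassHz);
```

Clamping in code keeps exports safe even if an older project file persisted a 0 Hz value before the slider minimum was raised.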
```ts
/**
 * Encodes an AudioBuffer to a WebM/Opus Blob.
 * Uses a short MediaRecorder session fed from a live AudioContext
 * to serialize the already-processed buffer — much faster than real-time
 * since the buffer is fully built before this step.
 */
private async encodeAudioBufferToBlob(buffer: AudioBuffer): Promise<Blob> {
  // Create a live AudioContext to play back the processed buffer into a MediaRecorder
  const ctx = new AudioContext({ sampleRate: buffer.sampleRate });
  const dest = ctx.createMediaStreamDestination();
  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.connect(dest);

  const mimeType = this.getSupportedAudioMimeType();
  const options: MediaRecorderOptions = {
    audioBitsPerSecond: AUDIO_BITRATE,
    ...(mimeType ? { mimeType } : {}),
  };

  const recorder = new MediaRecorder(dest.stream, options);
  const chunks: Blob[] = [];

  const recordedBlobPromise = new Promise<Blob>((resolve, reject) => {
    recorder.ondataavailable = (event: BlobEvent) => {
      if (event.data && event.data.size > 0) chunks.push(event.data);
    };
    recorder.onerror = () => reject(new Error("MediaRecorder failed during audio encoding"));
    recorder.onstop = () => {
      const type = mimeType || chunks[0]?.type || "audio/webm";
      resolve(new Blob(chunks, { type }));
    };
  });

  recorder.start();
  source.start(0);

  // Wait for playback to end (this is proportional to audio duration, but since
  // the buffer is already processed the recording is the only remaining real-time step)
  await new Promise<void>((resolve) => {
    source.onended = () => resolve();
  });

  if (recorder.state !== "inactive") recorder.stop();
  dest.stream.getTracks().forEach((t) => t.stop());
  source.disconnect();
  dest.disconnect();
  await ctx.close();

  return recordedBlobPromise;
}
```
Encoding step runs in real-time, not faster.
The comment claims this is "much faster than real-time", but MediaRecorder fed from a live AudioContext plays and records the buffer in real-time. A 10-minute audio track will take ~10 minutes to encode, which may be acceptable for short clips but could frustrate users with longer content.
Consider using WebCodecs AudioEncoder directly (similar to the trim-only path) to encode the processed AudioBuffer without real-time playback.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/lib/exporter/audioEncoder.ts` around lines 427 - 477, The
encodeAudioBufferToBlob method currently uses a live AudioContext +
MediaRecorder which records in real time; replace that real-time path with a
non-realtime WebCodecs-based encoder similar to the trim-only path: convert the
AudioBuffer frames to raw PCM frames, instantiate an AudioEncoder (or the same
encoder used in the trim-only flow), feed encoded chunks to collect a
Blob/ArrayBuffer, and return that result; update encodeAudioBufferToBlob to
branch to WebCodecs when available (fall back to the existing MediaRecorder flow
if not), reuse getSupportedAudioMimeType/AUDIO_BITRATE semantics for
container/mime decisions, and ensure proper close/flush of the AudioEncoder and
cleanup before resolving the Promise.
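The first step of the suggested WebCodecs path is flattening the AudioBuffer's planar channel data into the interleaved PCM layout an `AudioData`/`AudioEncoder` pipeline consumes. A sketch of that conversion, using plain arrays in place of `Float32Array` channel data so it stays environment-agnostic; the function name is illustrative:

```typescript
// Convert planar channel data ([L...][R...]) to interleaved frames (LRLR...),
// the "f32" sample layout accepted when constructing WebCodecs AudioData.
function interleaveChannels(channels: number[][]): number[] {
  const frameCount = channels[0]?.length ?? 0;
  const out = new Array<number>(frameCount * channels.length);
  for (let frame = 0; frame < frameCount; frame++) {
    for (let ch = 0; ch < channels.length; ch++) {
      out[frame * channels.length + ch] = channels[ch][frame] ?? 0;
    }
  }
  return out;
}
```

From there, the non-realtime flow would chunk the interleaved samples into `AudioData` frames, feed them to a configured `AudioEncoder`, and collect encoded chunks into the output container, with the existing MediaRecorder path kept as a fallback where WebCodecs is unavailable.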
```ts
if (keptSegments.length === 0) {
  return [{ startMs: 0, endMs: totalDurationMs, speed: 1 }];
}
```
Returns full audio when everything is trimmed.
If keptSegments is empty (e.g., trim regions cover the entire timeline), the function returns the full original duration at 1× speed instead of an empty array. This would include audio that should have been trimmed out.
Suggested fix

```diff
   if (keptSegments.length === 0) {
-    return [{ startMs: 0, endMs: totalDurationMs, speed: 1 }];
+    return [];
   }
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```ts
if (keptSegments.length === 0) {
  return [];
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/lib/exporter/audioEncoder.ts` around lines 502 - 504, The current branch
returns a full-length segment when keptSegments is empty, which re-introduces
audio that should be fully trimmed; update the early-return in the function that
computes segments so that if keptSegments.length === 0 you return an empty array
([]) instead of [{ startMs: 0, endMs: totalDurationMs, speed: 1 }], and ensure
the return type still matches the function signature and any callers tolerate an
empty segments array (adjust consumer expectations if necessary).
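The corrected behavior can be illustrated with a pure function that derives kept segments by subtracting trim regions from the full duration. This is a hedged sketch under assumed types, not the PR's actual implementation:

```typescript
interface Region { startMs: number; endMs: number; }
interface Segment extends Region { speed: number; }

// Subtract trim regions from [0, totalDurationMs); returns [] when trims
// cover the whole timeline instead of falling back to the full-length audio.
function computeKeptSegments(trims: Region[], totalDurationMs: number): Segment[] {
  const sorted = [...trims].sort((a, b) => a.startMs - b.startMs);
  const kept: Segment[] = [];
  let cursor = 0;
  for (const trim of sorted) {
    if (trim.startMs > cursor) {
      kept.push({ startMs: cursor, endMs: trim.startMs, speed: 1 });
    }
    cursor = Math.max(cursor, trim.endMs);
  }
  if (cursor < totalDurationMs) {
    kept.push({ startMs: cursor, endMs: totalDurationMs, speed: 1 });
  }
  return kept;
}
```

Callers then need to tolerate an empty array, e.g. by skipping the audio track entirely when nothing survives trimming.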
Description
Adds several quality-of-life enhancements to the OpenScreen video editor: a 1.1× zoom level, interactive audio adjustment controls (highpass, compression, treble, loudness) with real-time preview via the Web Audio API, and an audio waveform visualization in the timeline for precision editing.
Motivation
Editors needed finer-grained zoom granularity (1.1×) for subtle motion zooms. They also lacked any audio post-processing tools, making it impossible to clean up low-end rumble, boost treble clarity, or normalize loudness — all common needs when editing screen recordings with narration. The audio waveform gives a visual reference for cuts, making trim decisions far more precise without having to rely purely on the timecode.
Type of Change
Related Issue(s)
N/A
Screenshots / Video
1.1× Zoom Level added to the timeline selector:
Audio Adjustments panel in Settings:
Audio Waveform in Timeline:
Testing
- 1.1× appears as the first zoom level option.

Checklist
Technical Summary (for reviewers)
- `types.ts`: `AudioSettings` interface + `DEFAULT_AUDIO_SETTINGS`
- `projectPersistence.ts`: `audioSettings` in project files
- `useEditorHistory.ts`: `audioSettings` tracked in undo/redo history
- `SettingsPanel.tsx`
- `VideoPlayback.tsx`
- `audioEncoder.ts`
- `timeline/Item.tsx`: `1.1×` added to `ZOOM_LABELS` map
- `lib/audioWaveform.ts`: loads audio via `electronAPI`, decodes with `AudioContext.decodeAudioData`, computes RMS peaks
- `timeline/AudioWaveform.tsx`
- `timeline/TimelineEditor.tsx`: renders `<AudioWaveform>` at bottom of timeline inside dnd-timeline context
New Features
Improvements