This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
NeuralNote is a cross-platform audio plugin (VST3/AU/Standalone) that performs real-time Audio-to-MIDI transcription using Spotify's Basic Pitch model. The plugin is built with JUCE and uses RTNeural for CNN inference and ONNXRuntime for feature extraction (Constant-Q Transform + Harmonic Stacking).
Clone with submodules:

```shell
git clone --recurse-submodules --shallow-submodules https://github.com/DamRsn/NeuralNote
```

On macOS, run:

```shell
./build.sh
```

This script:
- Downloads the pre-built ONNXRuntime static library
- Extracts the optimized ONNX model to Lib/ModelData/features_model.ort
- Configures CMake in Release mode with tests enabled
- Builds all targets (Standalone, VST3, AU)
- Runs unit tests

Output: ./build/NeuralNote_artefacts/Release/Standalone/NeuralNote.app/Contents/MacOS/NeuralNote
For Visual Studio 2022 (MSVC 19.35.x):

```shell
.\build.bat
```

For other MSVC versions, manually build ONNXRuntime first (see README.md for detailed steps), then run .\build.bat.
On Linux, run:

```shell
./build.sh
```

Tests are disabled on Linux by default. Output: ./build/NeuralNote_artefacts/Release/Standalone/NeuralNote
After running the build script once, you can load the project in any CMake-compatible IDE (CLion, Visual Studio, VSCode). The build script only needs to run once to download ONNXRuntime dependencies.
Run unit tests (macOS/Windows only):

```shell
./build/Tests/UnitTests_artefacts/Release/UnitTests
```

Configure with CMake options:

```shell
cmake -S . -B build -DCMAKE_BUILD_TYPE=Release \
    -DUniversalBinary=ON \
    -DLTO=ON \
    -DBUILD_UNIT_TESTS=ON
```

- UniversalBinary: build a macOS universal binary (x86_64 + arm64)
- LTO: enable link-time optimization
- BUILD_UNIT_TESTS: enable unit tests

Project layout:

```
NeuralNote/
├── NeuralNote/              # Plugin-specific code (JUCE plugin wrapper + UI)
│   ├── PluginSources/       # JUCE plugin processor (PluginProcessor, PluginEditor)
│   ├── Source/              # Application logic (managers, controllers, components)
│   └── Assets/              # UI resources (fonts, icons, images)
├── Lib/                     # Core transcription engine (reusable library)
│   ├── Model/               # BasicPitch implementation (CNN, Features, Notes)
│   ├── Components/          # UI components (Keyboard, Knobs, Sliders)
│   ├── DSP/                 # Audio processing (Resampler)
│   ├── MidiPostProcessing/  # MIDI quantization and scaling
│   ├── Player/              # Audio/MIDI playback
│   ├── Utils/               # Utility functions
│   └── ModelData/           # Model weights (.json files for CNN, .ort for features)
├── Tests/                   # Unit tests
└── ThirdParty/              # External dependencies (JUCE, RTNeural, ONNXRuntime)
```
Plugin Layer (NeuralNote/):
- PluginProcessor: Main audio plugin processor, manages the state machine (Empty → Recording → Processing → Populated)
- PluginEditor: Plugin UI, hosts the main view
- TranscriptionManager: Coordinates the transcription workflow on a background thread pool
- SourceAudioManager: Manages input audio recording and file loading (.wav, .aiff, .flac, .mp3, .ogg)
- Player: Handles playback of source audio and synthesized MIDI
- SynthController: Generates audio from MIDI notes for preview
Transcription Engine (Lib/Model/):
- BasicPitch: Main API for transcription. Call transcribeToMIDI(audio, numSamples), then retrieve results with getNoteEvents()
- Features: Computes CQT + Harmonic Stacking features using ONNXRuntime
- BasicPitchCNN: Runs CNN inference using RTNeural (4 sequential 2D conv models)
- Notes: Converts posteriorgrams (note/onset/contour probabilities) to MIDI note events
Pipeline: Raw Audio (22.05kHz) → Features (CQT) → BasicPitchCNN → Posteriorgrams → Notes → MIDI Events
UI Components (NeuralNote/Source/Components/):
- NeuralNoteMainView: Main container view
- CombinedAudioMidiRegion: Waveform + piano roll display
- PianoRoll: Piano roll visualization with zoom/scroll
- AudioRegion: Waveform display
- MidiFileDrag: Drag-and-drop MIDI export
- Custom widgets: Keyboard, Knob, MinMaxNoteSlider, QuantizeForceSlider
The plugin uses a state machine (see State enum in PluginProcessor.h):
- EmptyAudioAndMidiRegions: No audio loaded
- Recording: Currently recording audio input
- Processing: Running transcription (background thread)
- PopulatedAudioAndMidiRegions: Transcription complete, ready for playback/export
State is stored in:
- AudioProcessorValueTreeState (APVTS): Automatable parameters (sensitivity, quantization, etc.)
- ValueTree: General plugin state (non-automatable settings)
- Audio thread: Handles processBlock() for audio recording/playback
- Background thread pool: Runs transcription jobs via TranscriptionManager
- Timer callbacks: Check for job completion and update the UI (see TranscriptionManager::timerCallback())
BasicPitch is split into two parts:
- Features (ONNXRuntime): CQT + Harmonic Stacking → features_model.ort
- CNN (RTNeural): 4 sequential 2D conv models → .json weight files in Lib/ModelData/
The CNN was split into 4 models to work around RTNeural's sequential processing requirement. Model weights are embedded as binary data via juce_add_binary_data().
The core transcription code in Lib/Model/ and model weights in Lib/ModelData/ are designed to be reusable in other projects. Key files:
- BasicPitch.h/.cpp: Main API
- Features.h/.cpp: CQT feature extraction
- BasicPitchCNN.h/.cpp: CNN inference (built separately with optimization flags)
- Notes.h/.cpp: Posteriorgram → MIDI conversion
BasicPitchCNN.cpp is compiled as a separate static library with forced optimization (-O3 on GCC/Clang, /O2 on MSVC) even in Debug builds to maintain real-time performance. This is controlled by the RTNeural_Release CMake option.
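A hedged sketch of how such per-target optimization can be expressed in CMake. Only the RTNeural_Release option name comes from the source; the target name and exact flags are illustrative, so see the project's actual CMakeLists.txt for the real setup:

```cmake
# Illustrative: force optimization on one library even in Debug builds.
option(RTNeural_Release "Build BasicPitchCNN with optimizations in all configs" ON)

add_library(BasicPitchCNN STATIC BasicPitchCNN.cpp)

if(RTNeural_Release)
    if(MSVC)
        target_compile_options(BasicPitchCNN PRIVATE /O2)
    else()
        target_compile_options(BasicPitchCNN PRIVATE -O3)
    endif()
endif()
```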
The build scripts download a custom pre-built ONNXRuntime static library from the libonnxruntime-neuralnote repository, built with ort-builder. The download includes a runtime-optimized ONNX model (features_model.ort).
On Windows with non-MSVC 19.35.x compilers, you must manually rebuild ONNXRuntime (see README.md).
- macOS: Code is signed and notarized for distribution. Use sign_and_package_neuralnote_macos.sh for release builds.
- Windows: Code is not signed. Users may need to bypass SmartScreen warnings.
- Linux: Raw binaries provided (no installer). Tests are disabled by default.
This project contributed 2D convolution support to RTNeural (PR #89), which is used for the BasicPitch CNN layers.
See PACKAGING.md for detailed packaging instructions:
- macOS: Use Packages.app and sign_and_package_neuralnote_macos.sh
- Windows: Use Inno Setup to build the installer from Installers/Windows/neuralnote.iss
Build with:
```shell
cmake -S . -B build -DCMAKE_BUILD_TYPE=Release -DUniversalBinary=ON -DLTO=ON
cmake --build build -j $(nproc)
```

Background Image Requirements:
- All background images MUST be rescaled to 1000x640 pixels
- Use Graphics::ResamplingQuality::highResamplingQuality for best quality
- Both loading and reloading must include rescaling
Example (from NeuralNoteMainView.cpp:232-233):
```cpp
mBackgroundImage = ImageCache::getFromMemory(BinaryData::background_png, BinaryData::background_pngSize)
                       .rescaled(1000, 640, Graphics::ResamplingQuality::highResamplingQuality);
```

Common Mistake: Loading/reloading images without the .rescaled() step causes distorted backgrounds.
Color Definitions: Lib/Components/UIDefines.h:66-74
- BLACK: RGB(14, 14, 14)
- WHITE_SOLID: RGB(255, 253, 246) - off-white cream
- WHITE_TRANSPARENT: RGBA(255, 253, 246, 0.7f)
- KNOB_GREY: RGB(218, 221, 217)

Maintain color consistency across the UI.
Adding items to settings menu (NeuralNoteMainView.cpp:132+):
- Increment item_id
- Create a PopupMenu::Item with a label
- Set its ID and enabled state
- For toggle items: add it to mSettingsMenuItemsShouldBeTicked
- Define the action lambda
- Call setAction() and addItem()
Handled in NeuralNoteMainView::keyPressed():
- Use ModifierKeys::commandModifier for Cmd/Ctrl
- Return true if the key was handled, false otherwise
- Check KeyPress equality: key == KeyPress('b', ModifierKeys::commandModifier, 0)
Location: Settings menu → "Reload Background" or Cmd+B
- Checks ~/Desktop/background.png first
- Falls back to the embedded BinaryData::background_png
- Always rescales to 1000x640
- Useful for testing different background designs without rebuilding
- Never name things "FIXED" or "UPDATED" in filenames or app names
- Use descriptive, neutral names for builds and artifacts
- Avoid generic placeholder names in commit messages and code