Merged
29 commits
bba6612  port over other PR (Aug 21, 2024)
f4b1374  fix (Aug 21, 2024)
0c9fe09  Merge branch 'main' into staging (CodyCBakerPhD, Aug 23, 2024)
de0ae02  Merge branch 'main' into staging (CodyCBakerPhD, Aug 25, 2024)
9ec9b25  Merge branch 'main' into staging (CodyCBakerPhD, Aug 31, 2024)
a2ce75e  Merge branch 'main' into staging (rly, Nov 13, 2024)
0ea23b9  Use latest neuroconv 0.6.5 (rly, Nov 13, 2024)
454c29f  Update manage_neuroconv.py (rly, Nov 13, 2024)
f3d6358  Update manage_neuroconv.py (rly, Nov 13, 2024)
6a286a3  [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Nov 13, 2024)
57fec0c  Fix missing scikit-learn (rly, Nov 13, 2024)
ae67d2f  Merge branch 'main' into staging (rly, Jan 13, 2025)
6a61076  Clean env files (rly, Jan 13, 2025)
5491209  Merge branch 'main' into staging (rly, Feb 1, 2025)
83c7c24  Update neuroconv, nwbinspector (rly, Feb 1, 2025)
00de462  Set spikeinterface n_jobs = -1 for parallelism (rly, Feb 1, 2025)
1b34bc6  Merge branch 'main' into staging (rly, Apr 6, 2025)
61dc0f0  Pin to neuroconv 0.6.0 (rly, Apr 6, 2025)
6e81201  Fix numcodecs issue (rly, Apr 8, 2025)
7e5099e  Update environment-MAC-intel.yml (rly, Apr 8, 2025)
2469bfb  Update environment-MAC-intel.yml (rly, Apr 8, 2025)
7d00db0  Update environment-MAC-intel.yml (rly, Apr 8, 2025)
b48e4dc  Update environment-MAC-intel.yml (rly, Apr 8, 2025)
fa2f503  Update environment-MAC-intel.yml (rly, Apr 8, 2025)
0eb19de  Try collecting h5py modules (rly, Apr 8, 2025)
c242b6b  Experiment - install latest hdf5 conda (rly, Apr 8, 2025)
715a804  Just set hdf5 1.14.4 in conda - mac intel env (rly, Apr 8, 2025)
7100d6c  Try pinning h5py to 3.12.1 only (rly, Apr 8, 2025)
d639cb4  [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Apr 8, 2025)
8 changes: 3 additions & 5 deletions environments/environment-Linux.yml
@@ -15,13 +15,11 @@ dependencies:
       - flask-cors == 4.0.0
       - flask_restx == 1.1.0
       - werkzeug < 3.0 # werkzeug 3.0 deprecates features used by flask 2.3.2. Remove this when updating flask.
-      # For stability, NeuroConv is pinned at a commit just prior to breaking SpikeInterface compatibility
-      - neuroconv @ git+https://github.com/catalystneuro/neuroconv.git@fa636458aa5c321f1c2c08f6e682b4a52d5a83f3#neuroconv[dandi,compressors,ecephys,ophys,behavior,text]
-      # For stability, pinning SpikeInterface to a version that works with NeuroConv and with tutorial generation
-      - spikeinterface == 0.100.5
+      - neuroconv[dandi,compressors,ecephys,ophys,behavior,text] == 0.6.0
       - scikit-learn == 1.4.0 # Tutorial data generation
       - tqdm_publisher >= 0.0.1 # Progress bars
       - tzlocal >= 5.2 # Frontend timezone handling
       - ndx-pose == 0.1.1
-      - nwbinspector==0.6.2
+      - nwbinspector == 0.6.2
       - tables
+      - numcodecs < 0.16.0 # numcodecs 0.16.0 is not compatible with zarr 2.18.5
8 changes: 3 additions & 5 deletions environments/environment-MAC-apple-silicon.yml
@@ -23,12 +23,10 @@ dependencies:
       - werkzeug < 3.0 # werkzeug 3.0 deprecates features used by flask 2.3.2. Remove this when updating flask.
       # NOTE: the NeuroConv wheel on PyPI includes sonpy which is not compatible with arm64, so build and install
       # NeuroConv from GitHub, which will remove the sonpy dependency when building from Mac arm64
-      # For stability, NeuroConv is pinned at a commit just prior to breaking SpikeInterface compatibility
-      - neuroconv @ git+https://github.com/catalystneuro/neuroconv.git@fa636458aa5c321f1c2c08f6e682b4a52d5a83f3#neuroconv[dandi,compressors,ecephys,ophys,behavior,text]
-      # For stability, pinning SpikeInterface to a version that works with NeuroConv and with tutorial generation
-      - spikeinterface == 0.100.5
+      - neuroconv[dandi,compressors,ecephys,ophys,behavior,text] == 0.6.0
       - scikit-learn == 1.4.0 # Tutorial data generation
       - tqdm_publisher >= 0.0.1 # Progress bars
       - tzlocal >= 5.2 # Frontend timezone handling
       - ndx-pose == 0.1.1
-      - nwbinspector==0.6.2
+      - nwbinspector == 0.6.2
+      - numcodecs < 0.16.0 # numcodecs 0.16.0 is not compatible with zarr 2.18.5
12 changes: 6 additions & 6 deletions environments/environment-MAC-intel.yml
@@ -7,7 +7,6 @@ dependencies:
   - nodejs = 18.16.1
   # install these from conda-forge so that dependent packages get included in the distributable
   - jsonschema = 4.18.0 # installs jsonschema-specifications
-  - pytables = 3.10.2 # Install from conda-forge because PyPI version results in hdf5 conflicts and missing libs
   - pip
   - pip:
       - setuptools==70.0.0
@@ -19,12 +18,13 @@ dependencies:
       - flask-cors == 4.0.0
       - flask_restx == 1.1.0
       - werkzeug < 3.0 # werkzeug 3.0 deprecates features used by flask 2.3.2. Remove this when updating flask.
-      # For stability, NeuroConv is pinned at a commit just prior to breaking SpikeInterface compatibility
-      - neuroconv @ git+https://github.com/catalystneuro/neuroconv.git@fa636458aa5c321f1c2c08f6e682b4a52d5a83f3#neuroconv[dandi,compressors,ecephys,ophys,behavior,text]
-      # For stability, pinning SpikeInterface to a version that works with NeuroConv and with tutorial generation
-      - spikeinterface == 0.100.5
+      - neuroconv[dandi,compressors,ecephys,ophys,behavior,text] == 0.6.0
       - scikit-learn == 1.4.0 # Tutorial data generation
       - tqdm_publisher >= 0.0.1 # Progress bars
       - tzlocal >= 5.2 # Frontend timezone handling
       - ndx-pose == 0.1.1
-      - nwbinspector==0.6.2
+      - nwbinspector == 0.6.2
+      - numcodecs < 0.16.0 # numcodecs 0.16.0 is not compatible with zarr 2.18.5
+      - h5py == 3.12.1 # 3.13.0 uses features in hdf5 1.14.4 that is not available in earlier hdf5 libs packaged
+      # with tables==3.9.1 (latest that can be used by neuroconv 0.6.0).
+      # h5py and tables need to be consistent for electron build for unknown reason
8 changes: 3 additions & 5 deletions environments/environment-Windows.yml
@@ -18,13 +18,11 @@ dependencies:
       - flask-cors === 3.0.10
       - flask_restx == 1.1.0
       - werkzeug < 3.0 # werkzeug 3.0 deprecates features used by flask 2.3.2. Remove this when updating flask.
-      # For stability, NeuroConv is pinned at a commit just prior to breaking SpikeInterface compatibility
-      - neuroconv @ git+https://github.com/catalystneuro/neuroconv.git@fa636458aa5c321f1c2c08f6e682b4a52d5a83f3#neuroconv[dandi,compressors,ecephys,ophys,behavior,text]
-      # For stability, pinning SpikeInterface to a version that works with NeuroConv and with tutorial generation
-      - spikeinterface == 0.100.5
+      - neuroconv[dandi,compressors,ecephys,ophys,behavior,text] == 0.6.0
       - scikit-learn == 1.4.0 # Tutorial data generation
       - tqdm_publisher >= 0.0.1 # Progress bars
       - tzlocal >= 5.2 # Frontend timezone handling
       - ndx-pose == 0.1.1
-      - nwbinspector==0.6.2
+      - nwbinspector == 0.6.2
       - tables
+      - numcodecs < 0.16.0 # numcodecs 0.16.0 is not compatible with zarr 2.18.5
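All four environment files move from a Git-commit install of NeuroConv to the released 0.6.0 wheel and normalize the pin spelling. A small helper like the following (hypothetical, not part of the PR; the regex and function names are illustrative assumptions) can parse pins of the shape used in these files and compare them against what is actually installed:

```python
# Hypothetical helper: parse pip-style pins like the ones in the diffs above
# (e.g. "- nwbinspector == 0.6.2") and compare them to installed versions.
import re
from importlib.metadata import PackageNotFoundError, version

PIN_PATTERN = re.compile(
    r"^\s*-?\s*(?P<name>[A-Za-z0-9_.-]+)"  # distribution name
    r"(?:\[[^\]]*\])?"                     # optional extras, e.g. [dandi,ecephys]
    r"\s*(?P<op>===|==|>=|<=|<|>)\s*"      # version operator
    r"(?P<version>[A-Za-z0-9.]+)"          # pinned version
)

def parse_pin(line: str):
    """Return (name, operator, version) for a pin line, or None (e.g. git URLs)."""
    match = PIN_PATTERN.match(line.split("#")[0])
    return match.groups() if match else None

def check_pin(line: str) -> str:
    """Report whether the installed distribution satisfies an exact pin."""
    parsed = parse_pin(line)
    if parsed is None:
        return f"unparsed: {line.strip()}"
    name, op, pinned = parsed
    try:
        installed = version(name)
    except PackageNotFoundError:
        return f"{name}: not installed"
    if op in ("==", "===") and installed != pinned:
        return f"{name}: installed {installed}, pinned {pinned}"
    return f"{name}: ok"
```

Note that the old `neuroconv @ git+https://...` lines deliberately return None from the parser, since a commit pin carries no comparable version number.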
37 changes: 24 additions & 13 deletions src/pyflask/manageNeuroconv/manage_neuroconv.py
@@ -1668,11 +1668,13 @@ def generate_test_data(output_path: str):
     """
     Autogenerate the data formats needed for the tutorial pipeline.
 
-    Consists of a single-probe single-segment SpikeGLX recording (both AP and LF bands) as well as Phy spiking data.
+    Consists of a single-probe single-segment SpikeGLX recording (both AP and LF bands) as well as Phy sorting data.
     """
     import spikeinterface
-    from spikeinterface.exporters import export_to_phy
-    from spikeinterface.preprocessing import bandpass_filter, resample, scale
+    import spikeinterface.exporters
+    import spikeinterface.preprocessing
+
+    spikeinterface.set_global_job_kwargs(n_jobs=-1)
 
     base_path = Path(output_path)
     spikeglx_output_folder = base_path / "spikeglx"
@@ -1687,8 +1689,8 @@ def generate_test_data(output_path: str):
     lf_sampling_frequency = 2_500.0
     downsample_factor = int(ap_sampling_frequency / lf_sampling_frequency)
 
-    # Generate synthetic spiking and voltage traces with waveforms around them
-    artificial_ap_band_in_uV, spiking = spikeinterface.generate_ground_truth_recording(
+    # Generate synthetic sorting and voltage traces with waveforms around them
+    artificial_ap_band_in_uV, sorting = spikeinterface.generate_ground_truth_recording(
         durations=[duration_in_s],
         sampling_frequency=ap_sampling_frequency,
         num_channels=number_of_channels,
@@ -1697,12 +1699,18 @@ def generate_test_data(output_path: str):
         seed=0,  # Fixed seed for reproducibility
     )
 
-    unscaled_artificial_ap_band = scale(recording=artificial_ap_band_in_uV, gain=1 / conversion_factor_to_uV)
+    unscaled_artificial_ap_band = spikeinterface.preprocessing.scale(
+        recording=artificial_ap_band_in_uV, gain=1 / conversion_factor_to_uV
+    )
     int16_artificial_ap_band = unscaled_artificial_ap_band.astype(dtype="int16")
     int16_artificial_ap_band.set_channel_gains(conversion_factor_to_uV)
 
-    unscaled_artificial_lf_filter = bandpass_filter(recording=unscaled_artificial_ap_band, freq_min=0.5, freq_max=1_000)
-    unscaled_artificial_lf_band = resample(recording=unscaled_artificial_lf_filter, resample_rate=2_500)
+    unscaled_artificial_lf_filter = spikeinterface.preprocessing.bandpass_filter(
+        recording=unscaled_artificial_ap_band, freq_min=0.5, freq_max=1_000
+    )
+    unscaled_artificial_lf_band = spikeinterface.preprocessing.decimate(
+        recording=unscaled_artificial_lf_filter, decimation_factor=downsample_factor
+    )
     int16_artificial_lf_band = unscaled_artificial_lf_band.astype(dtype="int16")
     int16_artificial_lf_band.set_channel_gains(conversion_factor_to_uV)
 
@@ -1725,13 +1733,16 @@ def generate_test_data(output_path: str):
     with open(file=lf_meta_file_path, mode="w") as io:
         io.write(lf_meta_content)
 
-    # Make Phy folder
-    waveform_extractor = spikeinterface.extract_waveforms(
-        recording=artificial_ap_band_in_uV, sorting=spiking, mode="memory"
+    # Make Phy folder - see https://spikeinterface.readthedocs.io/en/latest/modules/exporters.html
+    sorting_analyzer = spikeinterface.create_sorting_analyzer(
+        sorting=sorting, recording=artificial_ap_band_in_uV, mode="memory", sparse=False
     )
+    sorting_analyzer.compute(["random_spikes", "waveforms", "templates", "noise_levels"])
+    sorting_analyzer.compute("spike_amplitudes")
+    sorting_analyzer.compute("principal_components", n_components=5, mode="by_channel_local")
 
-    export_to_phy(
-        waveform_extractor=waveform_extractor, output_folder=phy_output_folder, remove_if_exists=True, copy_binary=False
+    spikeinterface.exporters.export_to_phy(
+        sorting_analyzer=sorting_analyzer, output_folder=phy_output_folder, remove_if_exists=True, copy_binary=False
     )
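The switch from `resample` to `decimate` in the diff above keeps every Nth sample instead of interpolating, using the factor computed earlier in the function (int(30_000 / 2_500) = 12). A minimal pure-Python sketch of that sample-selection behavior (illustrative only, not the SpikeInterface implementation, which operates on Recording objects and relies on the preceding 0.5-1000 Hz bandpass filter to avoid aliasing at the lower rate):

```python
# Illustrative sketch: decimation keeps every `factor`-th sample,
# turning a 30 kHz AP band into a 2.5 kHz LF band.
ap_sampling_frequency = 30_000.0
lf_sampling_frequency = 2_500.0
downsample_factor = int(ap_sampling_frequency / lf_sampling_frequency)  # 12

def decimate_samples(samples, factor):
    # No anti-alias filtering here; the PR bandpass-filters to 1 kHz first,
    # which keeps the signal below the new 1.25 kHz Nyquist limit.
    return samples[::factor]

fake_ap_band = list(range(120))  # 120 fake AP-band samples
fake_lf_band = decimate_samples(fake_ap_band, downsample_factor)  # 10 samples
```

Because the decimation factor must be an integer, this also explains why the code derives it from the two sampling rates rather than passing a target rate, as the old `resample(resample_rate=2_500)` call did.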

