
Commit 8f79421

ecobost and alejoe91 authored
Fix some typos in documentation (#4419)
Co-authored-by: Alessio Buccino <alejoe9187@gmail.com>
1 parent 797f0e8 commit 8f79421

17 files changed: 62 additions & 60 deletions

.github/workflows/all-tests.yml

Lines changed: 2 additions & 2 deletions

@@ -105,7 +105,7 @@ jobs:
        run: echo "dataset_hash=$(git ls-remote https://gin.g-node.org/NeuralEnsemble/ephy_testing_data.git HEAD | cut -f1)" >> $GITHUB_OUTPUT

      - name: Cache datasets
-       if: env.RUN_EXTRACTORS_TESTS == 'true'
+       if: env.RUN_EXTRACTORS_TESTS == 'true' || env.RUN_PREPROCESSING_TESTS == 'true'
        id: cache-datasets
        uses: actions/cache/restore@v4
        with:
@@ -115,7 +115,7 @@ jobs:

      - name: Install git-annex
        shell: bash
-       if: env.RUN_EXTRACTORS_TESTS == 'true'
+       if: env.RUN_EXTRACTORS_TESTS == 'true' || env.RUN_PREPROCESSING_TESTS == 'true'
        run: |
          pip install datalad-installer
          if [ ${{ runner.os }} = 'Linux' ]; then

doc/how_to/auto_label_units.rst

Lines changed: 3 additions & 2 deletions

@@ -295,8 +295,9 @@ page <https://huggingface.co/SpikeInterface>`__.
 .. image:: auto_label_units_files/auto_label_units_27_1.png


-**NOTE:** If you want to train your own models, see the `UnitRefine
-repo <%60https://github.com/anoushkajain/UnitRefine%60>`__ for
+.. note::
+   If you want to train your own models, see the `UnitRefine
+   repo <https://github.com/anoushkajain/UnitRefine>`__ for
 instructions!

 This “How To” demonstrated how to automatically label units after spike

doc/index.rst

Lines changed: 1 addition & 1 deletion

@@ -12,7 +12,7 @@ amazing algorithms and formats that we interface with. See them all, and how to
 `references page <https://spikeinterface.readthedocs.io/en/latest/references.html>`_. In the past year, we have added support
 for the following tools:

-- Bombcell. `Bombcell: automated curation and cell classification of spike-sorted electrophysiology data. 2023. <https://doi.org/10.5281/zenodo.8172822>`_ (`docs <https://spikeinterface.readthedocs.io/en/stable/how_to/auto_label_units.html#bombcell>`_)
+- Bombcell. `Bombcell: automated curation and cell classification of spike-sorted electrophysiology data. <https://doi.org/10.5281/zenodo.8172822>`_ (`docs <https://spikeinterface.readthedocs.io/en/stable/how_to/auto_label_units.html#bombcell>`_)
 - SLAy. `SLAy-ing oversplitting errors in high-density electrophysiology spike sorting <https://www.biorxiv.org/content/10.1101/2025.06.20.660590v2>`_ (`docs <https://spikeinterface.readthedocs.io/en/latest/modules/curation.html#auto-merging-units>`_)
 - Lupin, Spykingcircus2 and Tridesclous2. `Opening the black box: a modular approach to spike sorting <https://www.biorxiv.org/content/10.64898/2026.01.23.701239v1>`_ (`docs <https://spikeinterface.readthedocs.io/en/stable/modules/sorters.html#supported-spike-sorters>`_)
 - RT-Sort. `RT-Sort: An action potential propagation-based algorithm for real time spike detection and sorting with millisecond latencies <https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0312438>`_ (`docs <https://spikeinterface.readthedocs.io/en/stable/modules/sorters.html#supported-spike-sorters>`_)

doc/modules/benchmark.rst

Lines changed: 18 additions & 18 deletions

@@ -2,31 +2,31 @@ Benchmark module
 ================

-Historically, this module was used to compare/benchmark sorters against ground truth
+Historically, this module was used to compare/benchmark sorters against ground truth.
 With this, sorters can be challenged in multiple situations (noise, drift, small/high snr,
 small/high spike rate, high/small probe density, ...).

 The main idea is to generate a synthetic recording using the internal generators
 :py:func:`~spikeinterface.generation.generate_drifting_recording` or external tools
-like ***mearec**. And then to compare the output of each sorter to the ground truth sorting.
-Then, theses comparisons can be plotted in various ways to explore all strengths and weakness of
+like **mearec**, and then to compare the output of each sorter to the ground truth sorting.
+Then, these comparisons can be plotted in various ways to explore all strengths and weaknesses of
 sorter tools. The very first paper of spikeinterface was about that, see [Buccino]_.

 Since version 0.102.0, the concept of *benchmark* has been extended to challenge/study specific
-steps of the sorting pipeline, for instance the motion estimation methods has been carrfully studied
+steps of the sorting pipeline: for instance, the motion estimation methods have been carefully studied
 in [Garcia2024]_ and some localisation methods have been compared in [Scopin2024]_.
-Also, very specific details (the ability for a sorting to recover collision spike) has been
+Also, very specific details (the ability for a sorting to recover collision spikes) have been
 studied in [Garcia2022]_.

-Now, almost all steps of the spike sorting step has implemented in spikeinterface and then
-all this steps can be benchmarked more or less the same way with dedicated classes:
+Now, almost all steps of the spike sorting pipeline have been implemented in spikeinterface and
+all these steps can be benchmarked more or less the same way with dedicated classes:

 * :py:func:`~spikeinterface.sortingcomponents.peak_detection.detect_peaks()`
   methods can be compared with :py:class:`~spikeinterface.benchmark.benchmark_peak_detection.PeakDetectionStudy`
 * :py:func:`~spikeinterface.sortingcomponents.peak_localization.localize_peaks()`
   methods can be compared with :py:class:`~spikeinterface.benchmark.benchmark_peak_localization.PeakLocalizationStudy`
 * :py:func:`~spikeinterface.sortingcomponents.motion.estimate_motion()`
-  methods can be compared with :py:class:`~spikeinterface.benchmark.benchmark_motion_estimation.MotionEstimationStudyStudy`
+  methods can be compared with :py:class:`~spikeinterface.benchmark.benchmark_motion_estimation.MotionEstimationStudy`
 * :py:func:`~spikeinterface.sortingcomponents.clustering.find_clusters_from_peaks()`
   methods can be compared with :py:class:`~spikeinterface.benchmark.benchmark_clustering.ClusteringStudy`
 * :py:func:`~spikeinterface.sortingcomponents.matching.find_spikes_from_templates()`
@@ -41,19 +41,19 @@ All these benchmark study classes share the same design:

 * They accept as input a dict of "cases". A case being a mix of **one method** (or one sorter)
   in a **particular situation** (drift or not, low/high snr, ...) with **some parameters**.
-  With this in mind, this is very easy to test either algorithm but also there parameters.
-* Study classes has 4 steps : create cases, run methods, compute results and plot results.
+  With this in mind, it is very easy to test either algorithms or their parameters.
+* Study classes have 4 steps: create cases, run methods, compute results and plot results.
 * Study classes have dedicated plot functions or more general plotting (for instance accuracy vs snr)
-* Study classes also cases handle the concept of "levels" : this allows you to compare several
+* Study classes also handle the concept of "levels": this allows you to compare several
   complexities at the same time. For instance, compare kilosort4 vs kilosort2.5 (level 0) for
   different noise amplitudes (level 1) combined with several motion vectors (level 2).
 * When plotting, levels can be grouped to make averages.
 * Internally, they almost all use the :py:mod:`~spikeinterface.comparison` module.
-  In short this module can compare a set of spiketrains against ground truth spiketrains.
-  The van diagram (True Posistive, False positive, False negative) against each ground truth units is
+  In short, this module can compare a set of spiketrains against ground truth spiketrains.
+  The Venn diagram (True positive, False positive, False negative) against each ground truth unit is
   performed.
   An internal agreement matrix is also constructed. With this machinery many metrics can be taken
-  to estimate the quality of a methods : accuracy, recall, precision
+  to estimate the quality of the methods: accuracy, recall, precision.
 * Study classes are persistent on disk. The mechanism is based on an intrinsic
   organization into a "study_folder" with several subfolders: results, sorting_analyzer, run_logs,
   cases...
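The accuracy, recall and precision scores mentioned above follow directly from the true positive (TP), false positive (FP) and false negative (FN) spike counts of a ground-truth unit. A minimal plain-Python sketch of these formulas (an illustration of the idea only, not the :py:mod:`~spikeinterface.comparison` API):

```python
# Agreement scores for one ground-truth unit, from matched spike counts:
#   tp = sorted spikes matched to ground-truth spikes
#   fp = sorted spikes with no ground-truth match
#   fn = ground-truth spikes the sorter missed
def agreement_scores(tp, fp, fn):
    return {
        "accuracy": tp / (tp + fp + fn),
        "recall": tp / (tp + fn),      # fraction of true spikes recovered
        "precision": tp / (tp + fp),   # fraction of sorted spikes that are real
    }

print(agreement_scores(tp=90, fp=10, fn=20))
# {'accuracy': 0.75, 'recall': 0.8181818181818182, 'precision': 0.9}
```

An accuracy of 1.0 requires both perfect recall and perfect precision, which makes it a convenient single-number summary for plots such as accuracy vs snr.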
@@ -158,8 +158,8 @@ Here a simple code block to generate
 The :py:func:`~spikeinterface.sortingcomponents.peak_detection.detect_peaks()` function
 proposes mainly (with some variants) 2 main methods:

-* "locally_exclussive" : a multichannel peak detection by threhold crossing that taken
-  in account the neighbor channels
+* "locally_exclusive": a multichannel peak detection by threshold crossing that takes into
+  account the neighbor channels.
 * "matched_filtering": a method based on convolution by a kernel that "looks like a spike"
   at several spatial scales.
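The "locally exclusive" idea (threshold crossing plus suppression of smaller nearby peaks) can be illustrated on a single channel in a few lines. This toy detector is a hypothetical sketch, not the spikeinterface implementation, which operates on multichannel traces and uses the neighbor channels for the exclusion:

```python
def detect_negative_peaks(trace, threshold, exclude_sweep=3):
    """Toy single-channel detector: local minima below -threshold,
    keeping only the largest peak within each exclude_sweep window."""
    candidates = [
        i for i in range(1, len(trace) - 1)
        if trace[i] < -threshold
        and trace[i] <= trace[i - 1]
        and trace[i] <= trace[i + 1]
    ]
    peaks = []
    # visit candidates from most to least negative, dropping any that fall
    # inside the exclusion window of an already accepted (larger) peak
    for i in sorted(candidates, key=lambda i: trace[i]):
        if all(abs(i - p) > exclude_sweep for p in peaks):
            peaks.append(i)
    return sorted(peaks)

trace = [0.0, -1.0, -6.0, -2.0, 0.5, 0.0, -7.5, -3.0, 0.0]
print(detect_negative_peaks(trace, threshold=5.0))  # [2, 6]
```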

@@ -256,9 +256,9 @@ version of spikeinterface for benchmark but re-generating the same figures shoul
 new version of spikeinterface.

 Note that since this publication, new methods have been published (DREDGe and MEDiCINe) and implemented in spikeinterface,
-so runnning a new comparison could make sens.
+so running a new comparison could make sense.

-Lets be *open-and-reproducible-science*, this is so trendy. This 120 lines script will make the same
+Let's be *open-and-reproducible-science*, this is so trendy. This 120-line script will get the same
 job done [Garcia2024]_.

doc/modules/curation.rst

Lines changed: 1 addition & 1 deletion

@@ -284,7 +284,7 @@ This format has two parts:
 * "format_version" : format specification
 * "unit_ids" : the list of unit_ids
 * "label_definitions" : list of label categories and possible labels per category.
-  Every category can be *exclusive=True* onely one label or *exclusive=False* several labels possible
+  Every category can be *exclusive=True* (only one label) or *exclusive=False* (several labels possible).

 * **manual output** curation with the following keys:
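A hypothetical validator makes the *exclusive* semantics concrete. The "label_options"/"exclusive" keys below mirror the format described above (assuming they match the actual specification); the helper function itself is not part of spikeinterface:

```python
def validate_unit_labels(unit_labels, label_definitions):
    """Check one unit's chosen labels against per-category definitions."""
    for category, chosen in unit_labels.items():
        definition = label_definitions[category]
        # an exclusive category admits at most one label
        if definition["exclusive"] and len(chosen) > 1:
            raise ValueError(f"category {category!r} is exclusive, got {chosen}")
        unknown = set(chosen) - set(definition["label_options"])
        if unknown:
            raise ValueError(f"unknown labels {unknown} in category {category!r}")
    return True

label_definitions = {
    "quality": {"label_options": ["good", "noise", "MUA"], "exclusive": True},
    "tags": {"label_options": ["burst", "slow"], "exclusive": False},
}
# one exclusive label plus several non-exclusive ones: valid
validate_unit_labels({"quality": ["good"], "tags": ["burst", "slow"]}, label_definitions)
# two labels in the exclusive "quality" category would raise a ValueError
```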

doc/modules/generation.rst

Lines changed: 1 addition & 1 deletion

@@ -17,7 +17,7 @@ Brain Laboratory - Brain Wide Map (available on
 You can check out this collection of over 600 templates from this `web app <https://spikeinterface.github.io/hybrid_template_library/>`_.

 The :py:mod:`spikeinterface.generation` module offers tools to interact with this database to select and download templates,
-manupulating (e.g. rescaling and relocating them), and construct hybrid recordings with them.
+manipulate them (e.g. rescaling and relocating), and construct hybrid recordings with them.
 Importantly, recordings from long-shank probes, such as Neuropixels, usually experience drifts.
 Such drifts can be taken into account in order to smoothly inject spikes into the recording.

doc/modules/metrics.rst

Lines changed: 9 additions & 9 deletions

@@ -13,7 +13,7 @@ Currently, it contains the following submodules:

 All metrics extensions inherit from the :py:class:`~spikeinterface.core.analyzer_extension_core.BaseMetricExtension`
 base class, which provides a common interface for computing and retrieving metrics and has convenience methods to access
-metric information. For example, you can get the list of available metrics using the and their descriptions with:
+metric information. For example, you can get the list of available metrics and their descriptions with:

 .. code-block:: python

@@ -67,18 +67,18 @@ metric information. For example, you can get the list of available metrics using
 'extremum channel (1/um). Uses exponential or linear fit based '
 'on linear_fit parameter.',
 'main_peak_to_trough_ratio': 'Ratio of main peak amplitude to trough amplitude',
-'main_to_next_extremum_duration': 'Duration in seconds from main extremum to next extremum.',
+'main_to_next_extremum_duration': 'Duration in seconds from main extremum to next extremum.',
 'num_negative_peaks': 'Number of negative peaks (troughs) in the template',
 'num_positive_peaks': 'Number of positive peaks in the template',
-'peak_after_to_trough_ratio': 'Ratio of peak after amplitude to trough amplitude',
+'peak_after_to_trough_ratio': 'Ratio of peak after amplitude to trough amplitude',
 'peak_after_width': 'Width of the main peak after trough in seconds',
-'peak_before_to_peak_after_ratio': 'Ratio of peak before amplitude to peak after amplitude',
-'peak_before_to_trough_ratio': 'Ratio of peak before amplitude to trough amplitude',
+'peak_before_to_peak_after_ratio': 'Ratio of peak before amplitude to peak after amplitude',
+'peak_before_to_trough_ratio': 'Ratio of peak before amplitude to trough amplitude',
 'peak_before_width': 'Width of the main peak before trough in seconds',
-'peak_half_width': 'Duration in s at half the amplitude of the peak (maximum) of the template.',
-'peak_to_trough_duration': 'Duration in seconds between the trough (minimum) and the next peak (maximum) of the template.',
-'recovery_slope': 'Slope of the recovery phase of the template, after the peak (maximum) returning to baseline in uV/s.',
-'repolarization_slope': 'Slope of the repolarization phase of the template, between the trough (minimum) and return to baseline in uV/s.',
+'peak_half_width': 'Duration in s at half the amplitude of the peak (maximum) of the template.',
+'peak_to_trough_duration': 'Duration in seconds between the trough (minimum) and the next peak (maximum) of the template.',
+'recovery_slope': 'Slope of the recovery phase of the template, after the peak (maximum) returning to baseline in uV/s.',
+'repolarization_slope': 'Slope of the repolarization phase of the template, between the trough (minimum) and return to baseline in uV/s.',
 'spread': 'Spread of the template amplitude in um, calculated as the distance between channels whose templates exceed the spread_threshold.',
 'trough_half_width': 'Duration in s at half the amplitude of the trough (minimum) of the template.',
 'trough_width': 'Width of the main trough in seconds',

doc/modules/metrics/quality_metrics.rst

Lines changed: 1 addition & 1 deletion

@@ -12,7 +12,7 @@ Completeness metrics (or 'false negative'/'type II' metrics) aim to identify whe
 Examples include: presence ratio, amplitude cutoff, NN-miss rate.
 Drift metrics aim to identify changes in waveforms which occur when spike sorters fail to successfully track neurons in the case of electrode drift.

-The quality metrics are saved as an extension of a :doc:`SortingAnalyzer <../postprocessing>`. Some metrics can only be computed if certain extensions have been computed first. For example the drift metrics can only be computed the spike locations extension has been computed. By default, as many metrics as possible are computed. Which ones are computed depends on which other extensions have
+The quality metrics are saved as an extension of a :doc:`SortingAnalyzer <../postprocessing>`. Some metrics can only be computed if certain extensions have been computed first. For example the drift metrics can only be computed if the spike locations extension has been computed. By default, as many metrics as possible are computed. Which ones are computed depends on which other extensions have
 been computed.

 In detail, the default metrics are (click on each metric to find out more about them!):

doc/modules/metrics/spiketrain_metrics.rst

Lines changed: 1 addition & 1 deletion

@@ -7,4 +7,4 @@ Currently, the following metrics are implemented:
 - "num_spikes": number of spikes in the spike train.
 - "firing_rate": firing rate of the spike train (spikes per second).

-# TODO: Add more metrics such as ISI distribution, CV, etc.
+.. TODO: Add more metrics such as ISI distribution, CV, etc.
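Both implemented metrics are simple functions of one unit's spike train. A plain-Python sketch, assuming spike times in seconds and a known recording duration (not the spikeinterface API):

```python
def spiketrain_metrics(spike_times_s, duration_s):
    """num_spikes and firing_rate (in Hz) for one unit's spike train."""
    num_spikes = len(spike_times_s)
    return {"num_spikes": num_spikes, "firing_rate": num_spikes / duration_s}

print(spiketrain_metrics([0.1, 0.5, 1.2, 3.4], duration_s=10.0))
# {'num_spikes': 4, 'firing_rate': 0.4}
```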

doc/modules/metrics/template_metrics.rst

Lines changed: 6 additions & 5 deletions

@@ -134,6 +134,12 @@ template across the probe (these are computed by default if the number of channe
 is greater than 64, but can be forced on or off with the :code:`include_multi_channel_metrics`
 parameter).

+
+.. code-block:: python
+
+    tm = sorting_analyzer.compute(input="template_metrics", include_multi_channel_metrics=True)
+
+
 These are the multi-channel metrics that can be computed:

 velocity_fits
@@ -159,9 +165,4 @@ above 20% of the maximum amplitude (default). Template amplitudes are normalized
 and optionally smoothed over space using a Gaussian filter (default sigma is 20µm).


-.. code-block:: python
-
-    tm = sorting_analyzer.compute(input="template_metrics", include_multi_channel_metrics=True)
-
-
 For more information, see :py:func:`~spikeinterface.postprocessing.compute_template_metrics`

0 commit comments
