README.md (4 changes: 4 additions & 0 deletions)
@@ -135,6 +135,10 @@ Suggested workflows, depending on the image folder contents:
and start drawing rectangles over the images. Masks and rectangle vertices are saved as described in [Save Layers](#save-layers).
Note that masks can be reloaded and edited at a later stage by dropping the `vertices.csv` file onto the canvas.

5. **Detect Outliers to Refine Labels**
Open napari as described in [Usage](#usage) and open the `CollectedData_<ScorerName>.h5` file. Click on the button cluster and wait a few seconds. It will show a new layer with the cluster. You can click on a point and see the image on the right with the keypoints. If you decided to refine that frame, click show img and refine them. You can go back to the cluster layer by clicking on close img and refine another image. When you're done, you need to do ctl s to save it. And now you can retrain the network!

Comment on lines +139 to +140

Copilot AI Mar 27, 2026

This new workflow step is a single long run-on paragraph with several unclear instructions (e.g., “ctl s”) and inconsistent button naming vs the UI labels (“cluster pose”, “show img”, “close img”). Please rewrite for clarity (short steps, consistent button names, and correct shortcut notation like “Ctrl+S” / “Cmd+S”).

Suggested change
Open napari as described in [Usage](#usage) and open the `CollectedData_<ScorerName>.h5` file. Click on the button cluster and wait a few seconds. It will show a new layer with the cluster. You can click on a point and see the image on the right with the keypoints. If you decided to refine that frame, click show img and refine them. You can go back to the cluster layer by clicking on close img and refine another image. When you're done, you need to do ctl s to save it. And now you can retrain the network!
1. Open napari as described in [Usage](#usage).
2. Open the `CollectedData_<ScorerName>.h5` file.
3. Click the `cluster pose` button and wait a few seconds; a new layer with the clusters will appear.
4. Click a point in the cluster layer to display the corresponding image with keypoints on the right.
5. If you decide to refine that frame, click `show img` and adjust the keypoints as needed.
6. When you are finished with that frame, click `close img` to return to the cluster layer and select another point to refine.
7. When you are done refining, save your annotations with **Ctrl+S** (Windows/Linux) or **Cmd+S** (macOS).
8. You can now retrain the network using the refined labels.

Copilot uses AI. Check for mistakes.

### Workflow flowchart

```mermaid
src/napari_deeplabcut/__init__.py (1 change: 1 addition & 0 deletions)
@@ -17,3 +17,4 @@
get_config_reader,
)
from ._writer import write_hdf, write_masks

src/napari_deeplabcut/_reader.py (3 changes: 2 additions & 1 deletion)
@@ -193,11 +193,12 @@ def read_hdf(filename: str) -> List[LayerData]:
if isinstance(temp.index, pd.MultiIndex):
temp.index = [os.path.join(*row) for row in temp.index]
df = (
temp.stack(["individuals", "bodyparts"])
temp.stack(["individuals", "bodyparts"])#, dropna=False)
.reindex(header.individuals, level="individuals")
.reindex(header.bodyparts, level="bodyparts")
.reset_index()
)
#df.fillna(0, inplace=True)
Comment on lines +196 to +201

Copilot AI Mar 27, 2026

The new inline commented parameters (#, dropna=False)) and commented-out df.fillna(...) leave confusing dead code in a critical reader path. Please either apply the intended behavior change cleanly (with proper argument/formatting) or remove the commented-out code to keep the reader logic clear.

Suggested change
temp.stack(["individuals", "bodyparts"])#, dropna=False)
.reindex(header.individuals, level="individuals")
.reindex(header.bodyparts, level="bodyparts")
.reset_index()
)
#df.fillna(0, inplace=True)
temp.stack(["individuals", "bodyparts"])
.reindex(header.individuals, level="individuals")
.reindex(header.bodyparts, level="bodyparts")
.reset_index()
)

nrows = df.shape[0]
data = np.empty((nrows, 3))
image_paths = df["level_0"]
src/napari_deeplabcut/_widgets.py (146 changes: 145 additions & 1 deletion)
@@ -1,4 +1,13 @@
import os
from collections import defaultdict
from functools import partial
import numpy as np
import pandas as pd
from matplotlib.backends.backend_qt5agg import FigureCanvasQTAgg as FigureCanvas
from matplotlib.figure import Figure
from types import MethodType
from typing import Optional, Sequence, Union
from napari.layers import Image, Points
from collections import defaultdict, namedtuple
from copy import deepcopy
from datetime import datetime
@@ -45,9 +54,38 @@
QStyleOption,
QVBoxLayout,
QWidget,
QPushButton,
)

from napari_deeplabcut.kmeans import cluster_data
from napari_deeplabcut import keypoints
from napari_deeplabcut.misc import to_os_dir_sep, find_project_name


class Worker(QtCore.QObject):
started = QtCore.Signal()
finished = QtCore.Signal()
value = QtCore.Signal(object)
Comment on lines +65 to +68

Copilot AI Mar 27, 2026

QtCore is referenced (QtCore.QObject, QtCore.Signal, QtCore.QThread) but the file only imports symbols from qtpy.QtCore (e.g., Signal) and never imports the QtCore module itself. This will raise NameError when importing the plugin. Either from qtpy import QtCore or switch the class to use the already-imported Signal, QObject, and QThread directly.


def __init__(self, func):
super().__init__()
self.func = func

def run(self):
out = self.func()
self.value.emit(out)
self.finished.emit()


def move_to_separate_thread(func):
thread = QtCore.QThread()
worker = Worker(func)
worker.moveToThread(thread)
thread.started.connect(worker.run)
worker.finished.connect(thread.quit)
worker.finished.connect(worker.deleteLater)
worker.finished.connect(thread.deleteLater)
return worker, thread
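The `Worker`/`QThread` hand-off above is the standard Qt worker-object pattern. As a library-agnostic sketch of the same idea (standard library only, not Qt code from this PR; `run_in_background` is an illustrative stand-in name):

```python
import threading
import queue


def run_in_background(func, on_result):
    """Run func on a worker thread and hand its result to a callback.

    Mirrors the Worker/QThread pattern above: the callable runs off the
    calling thread, and its return value is delivered once it finishes.
    """
    def target():
        on_result(func())

    thread = threading.Thread(target=target, daemon=True)
    thread.start()
    return thread


# Usage: collect the result back on the calling thread via a queue.
results = queue.Queue()
t = run_in_background(lambda: sum(range(10)), results.put)
t.join()
print(results.get())  # 45
```

In Qt the same hand-off is done with signals instead of a queue, which keeps the result delivery on the GUI event loop.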
from napari_deeplabcut._reader import _load_config
from napari_deeplabcut._writer import _write_config, _write_image, _form_df
from napari_deeplabcut.misc import (
@@ -538,7 +576,6 @@ def __init__(self, napari_viewer):
self.viewer = napari_viewer
self.viewer.layers.events.inserted.connect(self.on_insert)
self.viewer.layers.events.removed.connect(self.on_remove)

self.viewer.window.qt_viewer._get_and_try_preferred_reader = MethodType(
_get_and_try_preferred_reader,
self.viewer.window.qt_viewer,
@@ -779,6 +816,108 @@ def _store_crop_coordinates(self, *args):
_write_config(config_path, cfg)
break

self.add_clustering_buttons()

Comment on lines +819 to +820

Copilot AI Mar 27, 2026

The clustering UI is initialized inside _store_crop_coordinates (after the crop-coordinates workflow). That means users won’t see the new “cluster pose/show img/close img” buttons unless they click “Store crop coordinates” first, which doesn’t match the README workflow. Please move clustering UI initialization to a more appropriate place (e.g., widget __init__ / menu construction) so it’s always available when a suitable Points layer is present.

# Initialize an empty canvas onto which to draw the images
self.fig = Figure(tight_layout=True, dpi=100)
self.fig.patch.set_facecolor("None")
self.ax = self.fig.add_subplot(111)
self.ax.invert_yaxis()
self.ax.set_axis_off()
self._im = None
self._scatter = self.ax.scatter([], [])
self.canvas = FigureCanvas(self.fig)

self.show()

def add_clustering_buttons(self):
layout = QHBoxLayout()
btn_cluster = QPushButton('cluster pose', self)
btn_cluster.clicked.connect(self.on_click)
btn_show = QPushButton('show img', self)
btn_show.clicked.connect(self.on_click_show_img)
btn_close = QPushButton('close img', self)
btn_close.clicked.connect(self.on_click_close_img)
layout.addWidget(btn_cluster)
layout.addWidget(btn_show)
layout.addWidget(btn_close)
self._layout.addLayout(layout)

def _show_clusters(self, input_):
points, names = input_
points[:, [0, 1]] = points[:, [1, 0]]
colors = points[:, 2] + 1

dict_prop_points = {'colorn': colors, 'frame': names}
clust_layer = self.viewer.add_points(
points[:, :2],
size=8,
name='cluster',
features=dict_prop_points,
face_color='colorn',
face_colormap='plasma',
)
clust_layer.mode = 'select'

self.viewer.window.add_dock_widget(self.canvas, name='frames')
self.viewer.layers[0].visible = False

self._df = pd.read_hdf(self.viewer.layers[0].source.path)
self._df.index = ['/'.join(row) for row in list(self._df.index)]

Copilot AI Mar 27, 2026

The frame-index normalization uses '/'.join(...) on self._df.index. This is not OS-agnostic (Windows paths will use \\ from os.path.join in read_hdf) and will also behave incorrectly if the index values are already strings (it will join characters). Prefer using the existing to_os_dir_sep() helper (or consistently use os.path.join/Path) and only join when the index value is a tuple/MultiIndex entry.

Suggested change
self._df.index = ['/'.join(row) for row in list(self._df.index)]
# Normalize frame index to OS-appropriate path strings.
self._df.index = [
to_os_dir_sep(os.path.join(*idx)) if isinstance(idx, tuple) else to_os_dir_sep(idx)
for idx in self._df.index
]


root = self.viewer.layers[0].metadata['root']
filenames = list(self.viewer.layers[0].metadata['paths'])
project_name = find_project_name(root)
project_path = os.path.join(root.split(project_name)[0], project_name)

@clust_layer.mouse_drag_callbacks.append
def get_event(clust_layer, event):
inds = list(clust_layer.selected_data)
if not inds:
return

if len(inds) > 1:
self.viewer.status = 'Please select only one data point.'
return

ind = inds[0]
filename = clust_layer.properties['frame'][ind]
bpts = self._df.loc[filename].to_numpy().reshape((-1, 2))
self.step = filenames.index(filename)

with Image_.open(os.path.join(project_path, filename)) as img:
im = np.asarray(img)
Comment on lines +888 to +889

Copilot AI Mar 27, 2026

Image_ is used to open images, but it’s not imported anywhere in this module. This will raise NameError the first time a cluster point is selected. Please import PIL.Image (or whichever image loader is intended) and ensure it’s included in dependencies if not already present.

if self._im is None:
self._im = self.ax.imshow(im)
else:
self._im.set_data(im)
self._scatter.set_offsets(bpts)
self.canvas.draw()

def on_click(self):
layer = self.viewer.layers.selection.active
if not isinstance(layer, Points):
print("Only Points layers can be clustered.")
return

func = partial(cluster_data, layer)
self.worker, self.thread = move_to_separate_thread(func)
self.worker.value.connect(self._show_clusters)
self.thread.start()

def on_click_show_img(self):
self.viewer.layers[0].visible = True
self.viewer.layers[1].visible = False
self.viewer.dims.set_current_step(0, self.step)
self.viewer.add_image(self._im.get_array(), name='image refine label')
self.viewer.layers.move_selected(0, 2)

def on_click_close_img(self):
self.viewer.layers.remove('image refine label')
self.viewer.layers.move_selected(0, 1)
self.viewer.layers[0].visible = False
self.viewer.layers[1].visible = True

Comment on lines +909 to +920

Copilot AI Mar 27, 2026

These handlers assume specific layer ordering (layers[0] is the source Points layer and layers[1] is the cluster layer) and use layers.remove('image refine label'), which likely won’t work because LayersList.remove typically expects a Layer instance (not a name). This is brittle if users have other layers open, or if the new image layer isn’t present. Please store explicit references to the created layers (cluster layer + refine image layer) and show/hide/remove them via those references (or by name indexing like del viewer.layers[name]).

Suggested change
self.viewer.layers[0].visible = True
self.viewer.layers[1].visible = False
self.viewer.dims.set_current_step(0, self.step)
self.viewer.add_image(self._im.get_array(), name='image refine label')
self.viewer.layers.move_selected(0, 2)
def on_click_close_img(self):
self.viewer.layers.remove('image refine label')
self.viewer.layers.move_selected(0, 1)
self.viewer.layers[0].visible = False
self.viewer.layers[1].visible = True
# Lazily determine and cache source and cluster layers to avoid relying
# on fixed layer positions in the viewer.
source_layer = getattr(self, "_source_points_layer", None)
cluster_layer = getattr(self, "_cluster_points_layer", None)
if source_layer is None or cluster_layer is None:
layers_list = list(self.viewer.layers)
if len(layers_list) < 2:
# Not enough layers to determine source/cluster; abort safely.
return
source_layer = layers_list[0]
cluster_layer = layers_list[1]
self._source_points_layer = source_layer
self._cluster_points_layer = cluster_layer
source_layer.visible = True
cluster_layer.visible = False
self.viewer.dims.set_current_step(0, self.step)
# Store a reference to the refine image layer so it can be removed safely.
refine_layer = self.viewer.add_image(
self._im.get_array(), name='image refine label'
)
self._refine_image_layer = refine_layer
self.viewer.layers.move_selected(0, 2)
def on_click_close_img(self):
# Safely remove the refine image layer if it exists.
refine_layer = getattr(self, "_refine_image_layer", None)
if refine_layer is not None and refine_layer in self.viewer.layers:
self.viewer.layers.remove(refine_layer)
self._refine_image_layer = None
self.viewer.layers.move_selected(0, 1)
source_layer = getattr(self, "_source_points_layer", None)
cluster_layer = getattr(self, "_cluster_points_layer", None)
if source_layer is not None:
source_layer.visible = False
if cluster_layer is not None:
cluster_layer.visible = True

def _form_dropdown_menus(self, store):
menu = KeypointsDropdownMenu(store)
self.viewer.dims.events.current_step.connect(
@@ -871,6 +1010,10 @@ def _remap_frame_indices(self, layer):

def on_insert(self, event):
layer = event.source[-1]
# FIXME Is the following necessary?
if any(s in str(layer) for s in ('cluster', 'refine')):
Comment on lines +1013 to +1014

Copilot AI Mar 27, 2026

Filtering inserted layers via any(s in str(layer) ...) is unreliable: str(layer) isn’t a stable API and may match unrelated layers, causing metadata propagation and store setup to be skipped unexpectedly. If this guard is needed, it should check explicit layer attributes (e.g., layer.name against exact names) or use a dedicated flag on layers created by this widget.

Suggested change
# FIXME Is the following necessary?
if any(s in str(layer) for s in ('cluster', 'refine')):
# Skip auxiliary layers created by this widget (e.g. clustering/refinement results)
layer_name = getattr(layer, "name", "")
if isinstance(layer, Points) and layer_name in ("cluster", "refine"):

return

if isinstance(layer, Image):
paths = layer.metadata.get("paths")
if paths is None: # Then it's a video file
@@ -1091,6 +1234,7 @@ def __init__(
):
super().__init__(parent)
self.store = store

self.store.layer.events.current_properties.connect(self.update_menus)
self._locked = False

src/napari_deeplabcut/_writer.py (2 changes: 1 addition & 1 deletion)
@@ -106,4 +106,4 @@ def write_masks(foldername, data, metadata):
output_path = filename.format(os.path.splitext(image_name)[0], shape_inds[n])
_write_image(mask, output_path)
napari_write_shapes(os.path.join(folder, "vertices.csv"), data, metadata)
return folder
return folder
src/napari_deeplabcut/kmeans.py (45 changes: 45 additions & 0 deletions)
@@ -0,0 +1,45 @@
import numpy as np
import pandas as pd
from scipy.spatial.distance import pdist
from sklearn.cluster import DBSCAN
from sklearn.decomposition import PCA
Comment on lines +1 to +5

Copilot AI Mar 27, 2026

This module introduces hard dependencies on scipy (pdist) and scikit-learn (DBSCAN, PCA), but the project’s declared install_requires doesn’t include them. Without adding these to the package dependencies, the plugin will fail to import in a clean environment. Please add the dependencies (or guard the imports and provide a clear error) and document the requirement.

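A generic sketch of the guarded-import approach the comment suggests (standard library only; the helper names and error text are illustrative, not code from this PR):

```python
def optional_import(module_name):
    """Return the imported module, or None when it is not installed."""
    try:
        return __import__(module_name)
    except ImportError:
        return None


def require(module, feature, hint):
    """Raise an actionable error when an optional dependency is missing."""
    if module is None:
        raise ImportError(f"{feature} requires {hint}; please install it.")
    return module


# 'math' ships with Python; the second name does not exist anywhere.
assert optional_import("math") is not None
assert optional_import("definitely_not_installed_xyz") is None
```

This defers the failure from import time to first use, so the rest of the plugin keeps working when the optional clustering dependencies are absent.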
from napari_deeplabcut._writer import _conv_layer_to_df
from napari_deeplabcut.misc import DLCHeader
Comment on lines +6 to +7

Copilot AI Mar 27, 2026

_conv_layer_to_df is imported from napari_deeplabcut._writer, but that function doesn’t exist in _writer.py (only _form_df is defined). This import will raise at runtime and break clustering. Either import and use the existing _form_df (wrapping the layer metadata/properties like other code in _widgets.py), or add the missing conversion function to _writer.py.



def _cluster(data):
pca = PCA(n_components=2)
principalComponents = pca.fit_transform(data)

# putting components in a dataframe for later
PCA_components = pd.DataFrame(principalComponents)

dbscan=DBSCAN(eps=9.7, min_samples=20, algorithm='ball_tree', metric='minkowski', leaf_size=90, p=2)

# fit - perform DBSCAN clustering from features, or distance matrix.
dbscan = dbscan.fit(PCA_components)
cluster1 = dbscan.labels_
Comment on lines +4 to +21

Copilot AI Mar 27, 2026

The file/module is named kmeans.py but the implementation uses DBSCAN (density-based clustering) rather than k-means. This mismatch is confusing for maintenance and discoverability. Consider renaming the module/functions to reflect DBSCAN (or implement actual k-means if that’s the intended algorithm).

Suggested change
from sklearn.cluster import DBSCAN
from sklearn.decomposition import PCA
from napari_deeplabcut._writer import _conv_layer_to_df
from napari_deeplabcut.misc import DLCHeader
def _cluster(data):
pca = PCA(n_components=2)
principalComponents = pca.fit_transform(data)
# putting components in a dataframe for later
PCA_components = pd.DataFrame(principalComponents)
dbscan=DBSCAN(eps=9.7, min_samples=20, algorithm='ball_tree', metric='minkowski', leaf_size=90, p=2)
# fit - perform DBSCAN clustering from features, or distance matrix.
dbscan = dbscan.fit(PCA_components)
cluster1 = dbscan.labels_
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from napari_deeplabcut._writer import _conv_layer_to_df
from napari_deeplabcut.misc import DLCHeader
def _cluster(data, n_clusters: int = 8):
pca = PCA(n_components=2)
principalComponents = pca.fit_transform(data)
# putting components in a dataframe for later
PCA_components = pd.DataFrame(principalComponents)
kmeans = KMeans(n_clusters=n_clusters, random_state=0)
# fit - perform k-means clustering from features.
kmeans = kmeans.fit(PCA_components)
cluster1 = kmeans.labels_


return PCA_components, cluster1


def cluster_data(points_layer):
df = _conv_layer_to_df(
points_layer.data, points_layer.metadata, points_layer.properties
)
try:
df = df.drop('single', axis=1, level='individuals')
except KeyError:
pass
df.dropna(inplace=True)
header = DLCHeader(df.columns)
try:
df = df.stack('individuals').droplevel('individuals')
except KeyError:
pass
df.index = ['/'.join(row) for row in df.index]
xy = df.to_numpy().reshape((-1, len(header.bodyparts), 2))
Comment on lines +40 to +41

Copilot AI Mar 27, 2026

df.index = ['/'.join(row) for row in df.index] assumes each index entry is an iterable of path parts and forces POSIX separators. If the index entries are already strings (common after read_hdf) this will join characters, and on Windows it won’t match the os.path.join paths used elsewhere. Use to_os_dir_sep() / Path normalization and only join when dealing with tuples/MultiIndex entries.

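The failure mode described here is easy to demonstrate with the standard library (a sketch; `normalize_index_entry` is a hypothetical helper, not code from this PR):

```python
import os


def normalize_index_entry(entry):
    """Join tuple index entries with the OS separator; pass strings through."""
    if isinstance(entry, tuple):
        return os.path.join(*entry)
    return entry


# '/'.join over a plain string iterates its characters:
assert "/".join("labeled-data") == "l/a/b/e/l/e/d/-/d/a/t/a"

# Whereas the guarded helper only joins genuine path tuples:
assert normalize_index_entry(("a", "b.png")) == os.path.join("a", "b.png")
assert normalize_index_entry("a/b.png") == "a/b.png"
```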
# TODO Normalize dists by longest length?
dists = np.vstack([pdist(data, "euclidean") for data in xy])
points = np.c_[_cluster(dists)] # x, y, label
return points, list(df.index)
Comment on lines +26 to +45

Copilot AI Mar 27, 2026

cluster_data introduces non-trivial data reshaping and clustering logic but currently has no tests. Please add unit tests for expected shapes/labels (including noise label -1 from DBSCAN) using a small synthetic Points-layer-like input, similar to existing pytest coverage in src/napari_deeplabcut/_tests.

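For reference, the condensed distance vector that `pdist` produces per frame (the clustering features in `cluster_data`) has length n*(n-1)/2 for n bodyparts, in pair order (0,1), (0,2), ..., (n-2,n-1). A shape test could build on a pure-Python equivalent like this sketch (`condensed_pdist` is a hypothetical stand-in, standard library only):

```python
import math
from itertools import combinations


def condensed_pdist(points):
    """Pairwise Euclidean distances in the same condensed order as
    scipy.spatial.distance.pdist: (0,1), (0,2), ..., (n-2,n-1)."""
    return [math.dist(p, q) for p, q in combinations(points, 2)]


# Three keypoints -> 3 * (3 - 1) / 2 = 3 distances.
frame = [(0.0, 0.0), (3.0, 4.0), (0.0, 4.0)]
dists = condensed_pdist(frame)
assert len(dists) == 3
assert dists == [5.0, 4.0, 3.0]
```

A unit test would then assert that the points array returned by `cluster_data` has one row per frame and that DBSCAN's noise label -1 is handled.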
src/napari_deeplabcut/misc.py (9 changes: 9 additions & 0 deletions)
@@ -1,15 +1,24 @@
from __future__ import annotations

import os
import re
from enum import Enum, EnumMeta
from itertools import cycle
from pathlib import Path
from typing import Dict, List, Optional, Sequence, Tuple, Union

import numpy as np
import pandas as pd
from napari.utils import colormaps


def find_project_name(s):
pat = re.compile('.+-.+-\d{4}-\d{1,2}-\d{1,2}')

Copilot AI Mar 27, 2026

The regex pattern is written as a normal string, so \d is an unrecognized escape sequence. Python currently preserves the backslash (so the pattern does still match digits), but newer versions emit a SyntaxWarning and unrecognized escapes are slated to become errors. Use a raw string (e.g., r"...\d{4}...") or escape the backslashes explicitly. Also note that find_project_name implicitly returns None when nothing matches, and callers such as root.split(project_name) will raise on None; consider explicitly returning a fallback or raising if no project name is found.

Suggested change
pat = re.compile('.+-.+-\d{4}-\d{1,2}-\d{1,2}')
pat = re.compile(r'.+-.+-\d{4}-\d{1,2}-\d{1,2}')

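A quick demonstration (standard library only): the non-raw pattern happens to work today because Python preserves unrecognized escapes, but the raw string avoids the warning and the planned future breakage.

```python
import re

# The backslashes are written out explicitly here to avoid the invalid
# escape; re interprets \d as the digit class in both patterns.
pat_plain = re.compile(".+-.+-\\d{4}-\\d{1,2}-\\d{1,2}")  # escaped form
pat_raw = re.compile(r".+-.+-\d{4}-\d{1,2}-\d{1,2}")      # preferred form

for pat in (pat_plain, pat_raw):
    # Matches a DLC-style project folder name (illustrative example).
    assert pat.search("myproject-alice-2023-4-28") is not None
    # No digits, no match.
    assert pat.search("not-a-project") is None
```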
for part in Path(s).parts[::-1]:
if pat.search(part):
return part
Comment on lines +15 to +19

Copilot AI Mar 27, 2026

New helper find_project_name is used by the clustering workflow but has no test coverage. Given the path parsing/regex sensitivity across OSes, please add unit tests (e.g., POSIX + Windows style paths, and a case where no match is found) alongside existing test_misc.py coverage.

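Such a test could look roughly like this sketch (the local copy below mirrors the PR's find_project_name with the raw-string fix applied, parameterized over pure path flavors so both OS styles run anywhere; the project names are illustrative):

```python
import re
from pathlib import Path, PurePosixPath, PureWindowsPath


def find_project_name(s, _path_cls=Path):
    """Local mirror of the PR helper, with the raw-string regex fix."""
    pat = re.compile(r".+-.+-\d{4}-\d{1,2}-\d{1,2}")
    for part in _path_cls(s).parts[::-1]:
        if pat.search(part):
            return part
    return None


# POSIX-style path
posix = "/data/demo-alice-2023-4-28/labeled-data/vid1"
assert find_project_name(posix, PurePosixPath) == "demo-alice-2023-4-28"

# Windows-style path
win = r"C:\data\demo-alice-2023-4-28\labeled-data\vid1"
assert find_project_name(win, PureWindowsPath) == "demo-alice-2023-4-28"

# No match: callers must handle the None explicitly.
assert find_project_name("/tmp/nothing/here", PurePosixPath) is None
```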


def unsorted_unique(array: Sequence) -> np.ndarray:
"""Return the unsorted unique elements of an array."""
_, inds = np.unique(array, return_index=True)
src/napari_deeplabcut/napari.yaml (9 changes: 9 additions & 0 deletions)
@@ -26,6 +26,7 @@ contributions:
- id: napari-deeplabcut.make_keypoint_controls
python_name: napari_deeplabcut._widgets:KeypointControls
title: Make keypoint controls

readers:
- command: napari-deeplabcut.get_hdf_reader
accepts_directories: false
@@ -42,6 +43,7 @@
- command: napari-deeplabcut.get_folder_parser
accepts_directories: true
filename_patterns: ['*']

writers:
- command: napari-deeplabcut.write_hdf
layer_types: ["points{1}"]
@@ -52,3 +54,10 @@
widgets:
- command: napari-deeplabcut.make_keypoint_controls
display_name: Keypoint controls
kmeans:
- command: napari-deeplabcut.get_hdf_reader1
accepts_directories: false
filename_patterns: ['*.h5']
- command: napari-deeplabcut.get_folder_parser1
accepts_directories: true
filename_patterns: ['*']
Comment on lines 56 to +63

Copilot AI Mar 27, 2026

contributions only supports recognized extension points (e.g., commands, readers, writers, widgets). The new kmeans: section is not a valid napari manifest entry and references napari-deeplabcut.get_hdf_reader1/get_folder_parser1, which are not declared under commands (and don’t exist in the codebase). This will likely make the plugin manifest invalid and prevent the plugin from loading. Please remove this section or wire the feature through existing commands/widgets (and add any new commands under contributions.commands).

Suggested change
display_name: Keypoint controls
kmeans:
- command: napari-deeplabcut.get_hdf_reader1
accepts_directories: false
filename_patterns: ['*.h5']
- command: napari-deeplabcut.get_folder_parser1
accepts_directories: true
filename_patterns: ['*']
display_name: Keypoint controls

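For reference, wiring new functionality through the recognized extension points would look roughly like this (a hedged sketch only; the command id, python_name, and display name are illustrative, not declarations from this PR):

```yaml
contributions:
  commands:
    - id: napari-deeplabcut.make_cluster_widget
      python_name: napari_deeplabcut._widgets:ClusterWidget
      title: Cluster pose for outlier detection
  widgets:
    - command: napari-deeplabcut.make_cluster_widget
      display_name: Cluster pose
```

Every `command` referenced under `readers`, `writers`, or `widgets` must first be declared under `contributions.commands`, which is why the unregistered `get_hdf_reader1`/`get_folder_parser1` entries would invalidate the manifest.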