
Commit fae546f

[Apply lint] Add PR-only pre-commit workflow & lint all (#166)
* Refine pre-commit hook stages and ruff. Add `default_stages` and explicitly set `stages` for each hook so checks that don't modify files can run in both pre-commit and manual/CI, while modifiers run only locally. Reorganize and document the ruff hooks into local autofix/write entries (run with `--fix`/`--unsafe-fixes` and format write) and CI check-only entries (`--output-format=github`, `--check`/`--diff`). Also set stages for pyproject-fmt and validate-pyproject, add the check-* hooks to pre-commit/manual, and clarify behavior with inline comments.

* [lint apply] Add PR-only pre-commit workflow. Add `.github/workflows/format.yml`: a GitHub Actions workflow that runs pre-commit only on the files changed in a PR. It triggers on `pull_request` (opened, synchronize, reopened), uses a `detect_changes` job to check out the full history and compute the list of files changed against the base branch, and exposes that list as a job output. A `precommit` job (guarded by a condition that changed files exist) checks out the PR branch, sets up Python 3.12, installs pre-commit, and runs pre-commit only on the changed files.

* Run pre-commit on all files. Manual fixes:
  - Reordered some `__init__` calls and added some `__all__`
  - Added a best-effort `tensorrt` import in one file
  - `__init__.py` ignores E402 because it requires a registry update first

* Update format.yml

* Use multiline GITHUB_OUTPUT for changed files. Replace the single-line echo that wrote CHANGED_FILES to `$GITHUB_OUTPUT` with a here-document (`key<<EOF ... EOF`) to correctly export multi-line file lists. This preserves newlines and spaces in the changed-files output so downstream workflow steps receive the full list.

* Update format.yml

* Fix circular import

* Fix all E501

* Export `Engine` for public access
1 parent 35e0432 commit fae546f

File tree: 85 files changed (+770 additions, -1046 deletions)


.github/workflows/format.yml

Lines changed: 61 additions & 0 deletions
New file:

```yaml
name: pre-commit (PR only on changed files)

on:
  pull_request:
    types: [opened, synchronize, reopened]

jobs:
  detect_changes:
    runs-on: ubuntu-latest
    outputs:
      changed: ${{ steps.changed_files.outputs.changed }}

    steps:
      - name: Checkout full history
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Detect changed files
        id: changed_files
        run: |
          git fetch origin ${{ github.base_ref }}
          CHANGED_FILES=$(git diff --name-only origin/${{ github.base_ref }}...HEAD)

          {
            echo "changed<<EOF"
            echo "$CHANGED_FILES"
            echo "EOF"
          } >> "$GITHUB_OUTPUT"

      - name: Show changed files
        run: |
          echo "Changed files:"
          echo "${{ steps.changed_files.outputs.changed }}"

  precommit:
    needs: detect_changes
    runs-on: ubuntu-latest
    if: ${{ needs.detect_changes.outputs.changed != '' }}

    steps:
      - name: Checkout PR branch
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
          ref: ${{ github.head_ref }}

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.12"

      - name: Install pre-commit
        run: pip install pre-commit

      - name: Run pre-commit (CI check-only stage) on changed files
        env:
          CHANGED_FILES: ${{ needs.detect_changes.outputs.changed }}
        run: |
          mapfile -t files <<< "$CHANGED_FILES"
          pre-commit run --hook-stage manual --files "${files[@]}" --show-diff-on-failure
```

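The multiline `GITHUB_OUTPUT` pattern used by the workflow can be exercised locally, since `GITHUB_OUTPUT` is just a file that Actions parses. A minimal sketch (the file names are illustrative):

```shell
#!/usr/bin/env bash
set -euo pipefail

# GITHUB_OUTPUT is just a file that Actions parses; point it at a temp file.
GITHUB_OUTPUT=$(mktemp)

# A multi-line list of changed files, as `git diff --name-only` would emit.
CHANGED_FILES=$'src/app.py\ndocs/README.md\ntests/test_app.py'

# The here-document form (changed<<EOF ... EOF) preserves the newlines;
# a single-line `echo "changed=$CHANGED_FILES"` would break the key=value
# format as soon as the value contains a newline.
{
  echo "changed<<EOF"
  echo "$CHANGED_FILES"
  echo "EOF"
} >> "$GITHUB_OUTPUT"

cat "$GITHUB_OUTPUT"

# Downstream, mapfile splits the multi-line value back into an array,
# so each path becomes one argument to `pre-commit run --files`.
mapfile -t files <<< "$CHANGED_FILES"
echo "count=${#files[@]}"
```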
.pre-commit-config.yaml

Lines changed: 39 additions & 6 deletions
```diff
@@ -1,28 +1,61 @@
+default_stages: [pre-commit]
+
 repos:
   - repo: https://github.com/pre-commit/pre-commit-hooks
     rev: v6.0.0
     hooks:
+      # These are safe to run in both local & CI (they don't require "fix vs check" split)
       - id: check-added-large-files
+        stages: [pre-commit, manual]
       - id: check-yaml
+        stages: [pre-commit, manual]
       - id: check-toml
+        stages: [pre-commit, manual]
+      - id: check-merge-conflict
+        stages: [pre-commit, manual]
+
+      # These modify files. Run locally only (pre-commit stage).
       - id: end-of-file-fixer
-      - id: name-tests-test
-        args: [--pytest-test-first]
+        stages: [pre-commit]
       - id: trailing-whitespace
-      - id: check-merge-conflict
+        stages: [pre-commit]
+
   - repo: https://github.com/tox-dev/pyproject-fmt
     rev: v2.15.2
     hooks:
       - id: pyproject-fmt
+        stages: [pre-commit]  # modifies -> local only
+
   - repo: https://github.com/abravalheri/validate-pyproject
     rev: v0.25
     hooks:
       - id: validate-pyproject
+        stages: [pre-commit, manual]
+
   - repo: https://github.com/astral-sh/ruff-pre-commit
     rev: v0.15.0
     hooks:
-      # Run the formatter.
+      # --------------------------
+      # LOCAL AUTOFIX (developers)
+      # --------------------------
+      - id: ruff-check
+        name: ruff-check (fix)
+        args: [--fix, --unsafe-fixes]
+        stages: [pre-commit]
+
       - id: ruff-format
-      # Run the linter.
+        name: ruff-format (write)
+        stages: [pre-commit]
+
+      # --------------------------
+      # CI CHECK-ONLY (no writes)
+      # --------------------------
       - id: ruff-check
-        args: [--fix,--unsafe-fixes]
+        name: ruff-check (ci)
+        args: [--output-format=github]
+        stages: [manual]
+
+      - id: ruff-format
+        name: ruff-format (ci)
+        args: [--check, --diff]
+        stages: [manual]
```
MANIFEST.in

Lines changed: 1 addition & 1 deletion
```diff
@@ -1,3 +1,3 @@
 include dlclive/check_install/*
 include dlclive/modelzoo/model_configs/*.yaml
-include dlclive/modelzoo/project_configs/*.yaml
+include dlclive/modelzoo/project_configs/*.yaml
```

(The last line is replaced by identical content: end-of-file-fixer adding the missing final newline.)

README.md

Lines changed: 39 additions & 39 deletions
All 39 changed lines are whitespace-only: the trailing-whitespace hook stripped trailing spaces throughout the README, which is why the addition and deletion counts match. The rendered content of the README is unchanged.

benchmarking/run_dlclive_benchmark.py

Lines changed: 5 additions & 6 deletions
```diff
@@ -8,15 +8,14 @@
 # Script for running the official benchmark from Kane et al, 2020.
 # Please share your results at https://github.com/DeepLabCut/DLC-inferencespeed-benchmark

-import os, pathlib
 import glob
+import os
+import pathlib

 from dlclive import benchmark_videos, download_benchmarking_data
 from dlclive.engine import Engine

-datafolder = os.path.join(
-    pathlib.Path(__file__).parent.absolute(), "Data-DLC-live-benchmark"
-)
+datafolder = os.path.join(pathlib.Path(__file__).parent.absolute(), "Data-DLC-live-benchmark")

 if not os.path.isdir(datafolder): # only download if data doesn't exist!
     # Downloading data.... this takes a while (see terminal)
@@ -44,7 +43,7 @@
         video_path=dog_video,
         output=out_dir,
         n_frames=n_frames,
-        pixels=pixels
+        pixels=pixels,
     )

 for model_path in mouse_models:
@@ -54,5 +53,5 @@
         video_path=mouse_video,
         output=out_dir,
         n_frames=n_frames,
-        pixels=pixels
+        pixels=pixels,
     )
```
