README.md (23 additions, 4 deletions)
@@ -60,7 +60,7 @@
- [x] **AI/LLM-assisted skill installation** — community-contributed skills installed and configured via AI agent
- [x] **GPU / NPU / CPU (AIPC) aware installation** — auto-detect hardware, install matching frameworks, convert models to optimal format
- [x] **Hardware environment layer** — shared [`env_config.py`](skills/lib/env_config.py) for auto-detection + model optimization across NVIDIA, AMD, Apple Silicon, Intel, and CPU
- - [ ] **Skill development** — 18 skills across 9 categories, actively expanding with community contributions
+ - [ ] **Skill development** — 19 skills across 10 categories, actively expanding with community contributions
## 🧩 Skill Catalog
@@ -70,9 +70,10 @@ Each skill is a self-contained module with its own model, parameters, and [commu
|----------|-------|--------------|:------:|
| **Detection** | [`yolo-detection-2026`](skills/detection/yolo-detection-2026/) | Real-time 80+ class detection — auto-accelerated via TensorRT / CoreML / OpenVINO / ONNX | ✅ |
| **Camera Providers** | [`eufy`](skills/camera-providers/eufy/) · [`reolink`](skills/camera-providers/reolink/) · [`tapo`](skills/camera-providers/tapo/) | Direct camera integrations via RTSP | 📐 |
| **Streaming** | [`go2rtc-cameras`](skills/streaming/go2rtc-cameras/) | RTSP → WebRTC live view | 📐 |
Watch your cameras **without seeing faces, clothing, or identities**. The [depth-estimation skill](skills/transformation/depth-estimation/) transforms live feeds into colorized depth maps using [Depth Anything v2](https://github.com/DepthAnything/Depth-Anything-V2) — warm colors for nearby objects, cool colors for distant ones.
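The warm-near / cool-far mapping is just a colormap applied to normalized depth. A minimal pure-Python sketch of the idea (the function and color ramp are illustrative, not the skill's actual code, which uses the model's real depth output):

```python
def depth_to_rgb(depth, d_min, d_max):
    """Map a depth value to an RGB triple: warm (red) when near, cool (blue) when far."""
    # Normalize into [0, 1]; 0 = nearest, 1 = farthest.
    t = (depth - d_min) / (d_max - d_min) if d_max > d_min else 0.0
    t = min(max(t, 0.0), 1.0)
    # Linear ramp from red (near) through green (mid-range) to blue (far).
    r = int(255 * (1.0 - t))
    b = int(255 * t)
    g = int(255 * (1.0 - abs(2.0 * t - 1.0)))  # green peaks at mid-range depth
    return (r, g, b)

print(depth_to_rgb(0.5, 0.0, 10.0))  # near pixel: warm (red-dominant)
print(depth_to_rgb(9.5, 0.0, 10.0))  # far pixel: cool (blue-dominant)
```

Applied per-pixel to a Depth Anything v2 output map, this yields the colorized view without any identity-revealing detail ever being rendered.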
**HomeSec-Bench** is a 143-test security benchmark that measures how well your local AI performs as a security guard. It tests what matters: Can it detect a person in fog? Classify a break-in vs. a delivery? Resist prompt injection? Route alerts correctly at 3 AM?
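Mechanically, a benchmark like this reduces to running test cases and aggregating pass rates per category. A simplified scorer sketch (the categories and result format here are hypothetical, not HomeSec-Bench's real schema):

```python
from collections import defaultdict

def score_benchmark(results):
    """Aggregate (category, passed) results into per-category pass rates.

    `results` is a list of (category, passed) tuples, a simplified
    stand-in for the benchmark's real per-test result records.
    """
    totals, passes = defaultdict(int), defaultdict(int)
    for category, passed in results:
        totals[category] += 1
        if passed:
            passes[category] += 1
    return {cat: passes[cat] / totals[cat] for cat in totals}

results = [
    ("detection", True), ("detection", True), ("detection", False),
    ("prompt-injection", True), ("alert-routing", False),
]
print(score_benchmark(results))
```

Per-category rates matter more than a single aggregate score: a model that detects well but fails every prompt-injection case is not a usable security guard.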
- { role: 'system', content: 'You are Aegis. Keep all responses under 50 words.' },
+ { role: 'system', content: 'You are Aegis. Keep all responses succinct.' },
  { role: 'user', content: 'Give me a very detailed, comprehensive explanation of how the security classification system works with all four levels and examples of each.' },
]);
const c = stripThink(r.content);
// Model should produce something reasonable — not crash or refuse
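For illustration, here is a Python re-implementation of what a helper like `stripThink` plausibly does: removing `<think>…</think>` reasoning blocks so assertions run against only the visible answer. This is an assumption; the test suite's actual helper may behave differently.

```python
import re

def strip_think(text: str) -> str:
    """Strip <think>...</think> reasoning blocks from a model response.

    Assumed behavior modeled on the stripThink helper referenced above;
    the real helper is not shown here and may differ.
    """
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()

print(strip_think("<think>plan the reply</think>All clear at the front door."))
```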
    description: "Path to COCO-format dataset (from dataset-annotation skill)"
    group: Training

  - name: epochs
    label: "Training Epochs"
    type: number
    default: 50
    group: Training

  - name: batch_size
    label: "Batch Size"
    type: number
    default: 16
    description: "Adjust based on GPU VRAM"
    group: Training

  - name: auto_export
    label: "Auto-Export to Optimal Format"
    type: boolean
    default: true
    description: "Automatically convert to TensorRT/CoreML/OpenVINO after training"
    group: Deployment

  - name: deploy_as_skill
    label: "Deploy as Detection Skill"
    type: boolean
    default: false
    description: "Replace the active YOLO detection model with the fine-tuned version"
    group: Deployment

capabilities:
  training:
    script: scripts/train.py
    description: "Fine-tune YOLO models on custom annotated datasets"
---

# Model Training

Agent-driven custom model training powered by Aegis's Training Agent. Closes the annotation-to-deployment loop: take a COCO dataset from `dataset-annotation`, fine-tune a YOLO model, auto-export to the optimal format for your hardware, and optionally deploy it as your active detection skill.
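As a sketch of how a run starts, user-supplied parameters can be merged over the frontmatter defaults and sanity-checked before a long training job kicks off. The parameter names mirror the frontmatter above, but the helper itself is hypothetical, not the contents of `scripts/train.py`:

```python
def build_train_args(params, defaults):
    """Merge user parameters over frontmatter defaults into training kwargs.

    Illustrative only: parameter names follow the skill frontmatter,
    but this is not the actual training script.
    """
    args = dict(defaults)
    # None means "not supplied", so the frontmatter default wins.
    args.update({k: v for k, v in params.items() if v is not None})
    # Fail fast before spending GPU hours on a bad configuration.
    if args["epochs"] <= 0:
        raise ValueError("epochs must be positive")
    if args["batch_size"] <= 0:
        raise ValueError("batch_size must be positive")
    return args

defaults = {"epochs": 50, "batch_size": 16, "auto_export": True, "deploy_as_skill": False}
print(build_train_args({"epochs": 100, "batch_size": None}, defaults))
```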

## What You Get

- **Fine-tune YOLO26** — start from nano/small/medium/large pre-trained weights
- **COCO dataset input** — uses standard format from `dataset-annotation` skill
- **Hardware-aware training** — auto-detects CUDA, MPS, ROCm, or CPU
- **Auto-export** — converts trained model to TensorRT / CoreML / OpenVINO / ONNX via `env_config.py`
- **One-click deploy** — replace the active detection model with your fine-tuned version
- **Training telemetry** — real-time loss, mAP, and epoch progress streamed to Aegis UI
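The auto-export step above boils down to a mapping from detected platform to export format. A minimal sketch of that decision (the platform keys and format strings here are assumptions; the real detection and conversion logic lives in `env_config.py`):

```python
# Illustrative platform-to-format table; the actual mapping is in env_config.py.
EXPORT_FORMATS = {
    "nvidia": "tensorrt",  # NVIDIA GPU -> TensorRT engine
    "apple": "coreml",     # Apple Silicon -> CoreML
    "intel": "openvino",   # Intel CPU/iGPU/NPU -> OpenVINO
    "cpu": "onnx",         # generic CPU fallback
}

def pick_export_format(platform: str) -> str:
    """Return the export format for a detected platform, falling back to ONNX."""
    return EXPORT_FORMATS.get(platform, "onnx")

print(pick_export_format("apple"))    # coreml
print(pick_export_format("unknown"))  # onnx
```

ONNX as the fallback is the conservative choice: every supported runtime can consume it, even when no vendor-specific accelerator is detected.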