Commit 4b2bcd2

docs: add Privacy section to main README, update skill catalog status
- Change depth-estimation category from Transformation to Privacy
- Mark depth-estimation as ✅ Ready (was 📐 Planned)
- Add dedicated '🔒 Privacy — Depth Map Anonymization' section
- Link to TransformSkillBase for building new privacy skills
1 parent 1f32a9b commit 4b2bcd2

File tree

1 file changed: +19 −1 lines changed
README.md

Lines changed: 19 additions & 1 deletion
@@ -71,7 +71,7 @@ Each skill is a self-contained module with its own model, parameters, and [commu
 | **Detection** | [`yolo-detection-2026`](skills/detection/yolo-detection-2026/) | Real-time 80+ class detection — auto-accelerated via TensorRT / CoreML / OpenVINO / ONNX | ✅ |
 | **Analysis** | [`home-security-benchmark`](skills/analysis/home-security-benchmark/) | [143-test evaluation suite](#-homesec-bench--how-secure-is-your-local-ai) for LLM & VLM security performance | ✅ |
 | | [`sam2-segmentation`](skills/analysis/sam2-segmentation/) | Click-to-segment with pixel-perfect masks | 📐 |
-| **Transformation** | [`depth-estimation`](skills/transformation/depth-estimation/) | Monocular depth maps with Depth Anything v2 | 📐 |
+| **Privacy** | [`depth-estimation`](skills/transformation/depth-estimation/) | [Real-time depth-map privacy transform](#-privacy--depth-map-anonymization) — anonymize camera feeds while preserving activity | ✅ |
 | **Annotation** | [`dataset-annotation`](skills/annotation/dataset-annotation/) | AI-assisted labeling → COCO export | 📐 |
 | **Camera Providers** | [`eufy`](skills/camera-providers/eufy/) · [`reolink`](skills/camera-providers/reolink/) · [`tapo`](skills/camera-providers/tapo/) | Direct camera integrations via RTSP | 📐 |
 | **Streaming** | [`go2rtc-cameras`](skills/streaming/go2rtc-cameras/) | RTSP → WebRTC live view | 📐 |
@@ -143,6 +143,24 @@ Camera → Frame Governor → detect.py (JSONL) → Aegis IPC → Live Overlay
📖 [Full Skill Documentation →](skills/detection/yolo-detection-2026/SKILL.md)
## 🔒 Privacy — Depth Map Anonymization
Watch your cameras **without seeing faces, clothing, or identities**. The [depth-estimation skill](skills/transformation/depth-estimation/) transforms live feeds into colorized depth maps using [Depth Anything v2](https://github.com/DepthAnything/Depth-Anything-V2) — warm colors for nearby objects, cool colors for distant ones.
```
Camera Frame ──→ Depth Anything v2 ──→ Colorized Depth Map ──→ Aegis Overlay
   (live)            (0.5 FPS)          warm=near, cool=far      (privacy on)
```
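The colorization stage of this pipeline can be sketched in plain NumPy. This is an illustrative sketch, not the skill's actual code — the function name `colorize_depth` is hypothetical, and real monocular models (including Depth Anything v2) may emit inverse depth, which would flip the ramp:

```python
import numpy as np

def colorize_depth(depth: np.ndarray) -> np.ndarray:
    """Map a raw depth map to RGB: warm (red) = near, cool (blue) = far.

    `depth` holds per-pixel distance estimates (larger = farther in this sketch).
    """
    d = depth.astype(np.float32)
    span = float(d.max() - d.min())
    # Normalize to [0, 1]; guard against a constant-depth frame.
    t = (d - d.min()) / span if span > 0 else np.zeros_like(d)
    rgb = np.empty((*d.shape, 3), dtype=np.uint8)
    rgb[..., 0] = ((1.0 - t) * 255).astype(np.uint8)  # red: strongest when near
    rgb[..., 1] = 0                                   # green unused in this sketch
    rgb[..., 2] = (t * 255).astype(np.uint8)          # blue: strongest when far
    return rgb
```

Production code would typically apply a perceptual colormap (e.g. OpenCV's `cv2.COLORMAP_INFERNO` via `cv2.applyColorMap`) rather than a bare red-to-blue ramp, but the normalize-then-map structure is the same.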
- 🛡️ **Full anonymization** — `depth_only` mode hides all visual identity while preserving spatial activity
- 🎨 **Overlay mode** — blend depth on top of the original feed with adjustable opacity
- **Rate-limited** — 0.5 FPS frontend capture + backend scheduler keeps GPU load minimal
- 🧩 **Extensible** — new privacy skills (blur, pixelation, silhouette) can subclass [`TransformSkillBase`](skills/transformation/depth-estimation/scripts/transform_base.py)
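A new privacy skill along these lines might look roughly as follows. The stand-in base class, the `transform` method name, and `PixelateTransform` are all illustrative — the real `TransformSkillBase` interface lives in the repo and may differ:

```python
import numpy as np

class TransformSkillBase:
    """Stand-in for the repo's TransformSkillBase; the real interface may differ."""

    def transform(self, frame: np.ndarray) -> np.ndarray:
        raise NotImplementedError

class PixelateTransform(TransformSkillBase):
    """Example privacy transform: pixelate a frame by coarse block sampling."""

    def __init__(self, block: int = 16):
        self.block = block

    def transform(self, frame: np.ndarray) -> np.ndarray:
        h, w = frame.shape[:2]
        small = frame[::self.block, ::self.block]  # keep one pixel per block
        # Blow each sampled pixel back up into a block-sized square.
        big = small.repeat(self.block, axis=0).repeat(self.block, axis=1)
        return big[:h, :w]
```

The per-frame hook keeps each skill a pure `frame in, frame out` function, so the frame governor and rate limiting stay in the host pipeline.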
Runs on the same [hardware acceleration stack](#hardware-acceleration) as YOLO detection — CUDA, MPS, ROCm, OpenVINO, or CPU.
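The overlay mode described above amounts to an alpha blend of the colorized depth map with the original frame. A minimal sketch — the function name and signature are assumptions, not the skill's API:

```python
import numpy as np

def overlay_depth(frame: np.ndarray, depth_rgb: np.ndarray,
                  opacity: float = 0.6) -> np.ndarray:
    """Blend a colorized depth map over the original camera frame.

    opacity=1.0 shows only the depth map; opacity=0.0 shows only the frame.
    """
    blended = (opacity * depth_rgb.astype(np.float32)
               + (1.0 - opacity) * frame.astype(np.float32))
    return blended.astype(np.uint8)
```

This is the same weighted sum that OpenCV's `cv2.addWeighted` computes.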
📖 [Full Skill Documentation →](skills/transformation/depth-estimation/SKILL.md) · 📖 [README →](skills/transformation/depth-estimation/README.md)
## 📊 HomeSec-Bench — How Secure Is Your Local AI?
**HomeSec-Bench** is a 143-test security benchmark that measures how well your local AI performs as a security guard. It tests what matters: Can it detect a person in fog? Classify a break-in vs. a delivery? Resist prompt injection? Route alerts correctly at 3 AM?
