---
name: model-training
description: "Agent-driven YOLO fine-tuning — annotate, train, export, deploy"
version: 1.0.0

parameters:
  - name: base_model
    label: "Base Model"
    type: select
    options: ["yolo26n", "yolo26s", "yolo26m", "yolo26l"]
    default: "yolo26n"
    description: "Pre-trained model to fine-tune"
    group: Training

  - name: dataset_dir
    label: "Dataset Directory"
    type: string
    default: "~/datasets"
    description: "Path to COCO-format dataset (from dataset-annotation skill)"
    group: Training

  - name: epochs
    label: "Training Epochs"
    type: number
    default: 50
    group: Training

  - name: batch_size
    label: "Batch Size"
    type: number
    default: 16
    description: "Adjust based on GPU VRAM"
    group: Training

  - name: auto_export
    label: "Auto-Export to Optimal Format"
    type: boolean
    default: true
    description: "Automatically convert to TensorRT/CoreML/OpenVINO after training"
    group: Deployment

  - name: deploy_as_skill
    label: "Deploy as Detection Skill"
    type: boolean
    default: false
    description: "Replace the active YOLO detection model with the fine-tuned version"
    group: Deployment

capabilities:
  training:
    script: scripts/train.py
    description: "Fine-tune YOLO models on custom annotated datasets"
---

# Model Training

Agent-driven custom model training powered by Aegis's Training Agent. Closes the annotation-to-deployment loop: take a COCO dataset from `dataset-annotation`, fine-tune a YOLO model, auto-export to the optimal format for your hardware, and optionally deploy it as your active detection skill.

## What You Get

- **Fine-tune YOLO26** — start from nano/small/medium/large pre-trained weights
- **COCO dataset input** — uses standard format from `dataset-annotation` skill
- **Hardware-aware training** — auto-detects CUDA, MPS, ROCm, or CPU
- **Auto-export** — converts trained model to TensorRT / CoreML / OpenVINO / ONNX via `env_config.py`
- **One-click deploy** — replace the active detection model with your fine-tuned version
- **Training telemetry** — real-time loss, mAP, and epoch progress streamed to Aegis UI

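The hardware-detection and auto-export bullets pair a detected device with a target export format. A minimal sketch of that mapping is below; it is an illustration only, not the actual `env_config.py` logic, and the ROCm-to-ONNX fallback is an assumption (the manifest lists the four formats but not the exact pairing).

```python
def pick_export_format(device: str) -> str:
    """Map a detected compute device to an export format.

    Hypothetical sketch; the real selection lives in env_config.py.
    """
    mapping = {
        "cuda": "tensorrt",   # NVIDIA GPUs -> TensorRT engines (.engine)
        "mps": "coreml",      # Apple Silicon -> Core ML packages (.mlpackage)
        "rocm": "onnx",       # AMD GPUs -> portable ONNX (assumed fallback)
        "cpu": "openvino",    # CPU-only hosts -> OpenVINO IR
    }
    return mapping.get(device, "onnx")  # default to portable ONNX
```

Keeping this a pure function makes the device-to-format decision trivial to unit-test, independent of whether a GPU is present on the test machine.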
## Training Loop (Aegis Training Agent)

```
dataset-annotation     model-training              yolo-detection-2026
┌─────────────┐        ┌──────────────────┐        ┌──────────────────┐
│  Annotate   │───────▶│  Fine-tune YOLO  │───────▶│  Deploy custom   │
│  Review     │  COCO  │  Auto-export     │  .pt   │  model as active │
│  Export     │  JSON  │  Validate mAP    │ .engine│  detection skill │
└─────────────┘        └──────────────────┘        └──────────────────┘
       ▲                                                    │
       └────────────────────────────────────────────────────┘
         Feedback loop: better detection → better annotation
```

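The fine-tune stage in the middle box can be sketched against the Ultralytics `YOLO` training API. Assumptions not confirmed by the manifest: that the skill uses the `ultralytics` package, that the dataset directory contains a `data.yaml`, and that `runs/train/` is the output location (the protocol examples below only show `runs/train/best.pt`).

```python
def build_train_kwargs(dataset_dir: str, epochs: int = 50,
                       batch_size: int = 16) -> dict:
    """Translate the skill's parameters into YOLO.train() keyword arguments.

    Sketch only: data.yaml and the runs/train output names are assumptions.
    """
    return {
        "data": f"{dataset_dir}/data.yaml",  # assumed dataset config filename
        "epochs": epochs,
        "batch": batch_size,  # ultralytics names this argument 'batch'
        "project": "runs",
        "name": "train",      # -> checkpoints land in runs/train/
    }

# With the ultralytics package installed, training would then look like:
#   from ultralytics import YOLO
#   model = YOLO("yolo26n.pt")  # the base_model parameter
#   model.train(**build_train_kwargs("~/datasets/front_door_people"))
```

Separating argument construction from the training call keeps the parameter mapping testable without GPU hardware or downloaded weights.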
## Protocol

### Aegis → Skill (stdin)
```jsonl
{"event": "train", "dataset_path": "~/datasets/front_door_people/", "base_model": "yolo26n", "epochs": 50, "batch_size": 16}
{"event": "export", "model_path": "runs/train/best.pt", "formats": ["coreml", "tensorrt"]}
{"event": "validate", "model_path": "runs/train/best.pt", "dataset_path": "~/datasets/front_door_people/"}
```

### Skill → Aegis (stdout)
```jsonl
{"event": "ready", "gpu": "mps", "base_models": ["yolo26n", "yolo26s", "yolo26m", "yolo26l"]}
{"event": "progress", "epoch": 12, "total_epochs": 50, "loss": 0.043, "mAP50": 0.87, "mAP50_95": 0.72}
{"event": "training_complete", "model_path": "runs/train/best.pt", "metrics": {"mAP50": 0.91, "mAP50_95": 0.78, "params": "2.6M"}}
{"event": "export_complete", "format": "coreml", "path": "runs/train/best.mlpackage", "speedup": "2.1x vs PyTorch"}
{"event": "validation", "mAP50": 0.91, "per_class": [{"class": "person", "ap": 0.95}, {"class": "car", "ap": 0.88}]}
```

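A standard-library sketch of how a skill process might consume these stdin events and emit stdout events. The event names mirror the protocol above, but the handler bodies are placeholders, not the real `scripts/train.py`: a real `train` handler would stream `progress` events during training rather than return a single reply.

```python
import json
import sys


def emit(event: dict) -> None:
    """Write one JSONL event to stdout, as in the Skill -> Aegis examples."""
    sys.stdout.write(json.dumps(event) + "\n")
    sys.stdout.flush()


def handle(event: dict) -> dict:
    """Dispatch one Aegis -> Skill event. Placeholder replies only."""
    kind = event.get("event")
    if kind == "train":
        # Real code would launch training here and stream progress events.
        return {"event": "training_complete", "model_path": "runs/train/best.pt"}
    if kind == "export":
        return {"event": "export_complete", "formats": event.get("formats", [])}
    if kind == "validate":
        return {"event": "validation", "model_path": event.get("model_path")}
    return {"event": "error", "message": f"unknown event: {kind}"}


def main() -> None:
    # Announce readiness, then process one JSONL event per stdin line.
    emit({"event": "ready"})
    for line in sys.stdin:
        if line.strip():
            emit(handle(json.loads(line)))
```

One reply per request keeps the sketch small; the actual protocol is asynchronous, interleaving many `progress` events before `training_complete`.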
## Setup

```bash
python3 -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
```