---
name: monitor
description: Monitor submitted jobs (PTQ, evaluation, deployment) on SLURM clusters. Use when the user asks "check job status", "is my job done", "monitor my evaluation", "what's the status of the PTQ", "check on job 12345", or after any skill submits a long-running job. Also triggers on "nel status", "squeue", or any request to check progress of a previously submitted job.
---

# Job Monitor

Monitor jobs submitted to SLURM clusters — PTQ quantization, NEL evaluation, model deployment, or raw SLURM jobs.

## When to use

1. **Auto-monitor** — another skill (PTQ, evaluation, deployment) just submitted a job. Register the job and set up monitoring immediately.
2. **User-initiated** — the user asks about a job's status, possibly in a new conversation. Check the registry, identify the job, and report.

---

## Job Registry

All active jobs are tracked in `.claude/active_jobs.json`. This file is the single source of truth for what's being monitored.

```json
[
  {
    "type": "nel",
    "id": "<invocation_id or slurm_job_id>",
    "host": "<cluster_hostname>",
    "user": "<ssh_user>",
    "submitted": "YYYY-MM-DD HH:MM",
    "description": "<what this job does>",
    "last_status": "<last known status>"
  }
]
```

`type` is one of: `nel`, `slurm`, `launcher`.

---

## On Job Submission

Every time a job is submitted (by any skill or manually):

1. **Add an entry** to `.claude/active_jobs.json`. Create the file if it doesn't exist.
2. **Set up a durable recurring cron** (if one isn't already running) that polls all registered jobs every 15 minutes. The cron prompt should: read the registry, check each job, report state changes to the user, remove completed jobs, and delete itself when the registry is empty.

Always do both steps. Don't try to predict job duration.
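The registration step can be sketched as a small helper; the function name is illustrative, and the entry fields follow the registry schema above:

```python
import json
from pathlib import Path

REGISTRY = Path(".claude/active_jobs.json")

def register_job(entry: dict, registry: Path = REGISTRY) -> None:
    """Append a job entry to the registry, creating the file if it doesn't exist."""
    registry.parent.mkdir(parents=True, exist_ok=True)
    jobs = json.loads(registry.read_text()) if registry.exists() else []
    jobs.append(entry)
    registry.write_text(json.dumps(jobs, indent=2))
```

Appending rather than overwriting keeps entries from multiple skills in one file, so a single cron can poll them all.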

---

## On Cron Fire / Status Check

Whether triggered by the cron or by the user asking "check status":

1. **Read the registry** from `.claude/active_jobs.json`
2. **Check each job** using the appropriate method (see below)
3. **Report only state changes** — compare against `last_status` in the registry
4. **Update `last_status`** in the registry
5. **Remove completed jobs** — any job in a terminal state (COMPLETED, FAILED, CANCELLED, KILLED)
6. **If the registry is empty** — delete the recurring cron
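Steps 3–5 can be sketched as one pure function, assuming the fresh statuses have already been fetched per job type; the names here are illustrative:

```python
TERMINAL = {"COMPLETED", "FAILED", "CANCELLED", "KILLED"}

def reconcile(jobs: list[dict], fresh: dict[str, str]) -> tuple[list[str], list[dict]]:
    """Compare fresh statuses against last_status: return the state changes
    to report and the (updated) jobs that should remain in the registry."""
    changes, remaining = [], []
    for job in jobs:
        status = fresh.get(job["id"], job["last_status"])
        if status != job["last_status"]:
            changes.append(f"{job['id']}: {job['last_status']} -> {status}")
        if status not in TERMINAL:
            remaining.append({**job, "last_status": status})
    return changes, remaining
```

If `remaining` comes back empty, that is the signal for step 6: delete the recurring cron.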

---

## How to Check Each Job Type

### NEL jobs (`type: nel`)

- **Check:** `nel status <id>`
- **On completion:** `nel info <id>` to fetch results
- **On failure:** `nel info <id> --logs`, then inspect server/client/SLURM logs via SSH

### Launcher jobs (`type: launcher`)

- **Check:** Tail the launcher's background output file for key events
- **Key events:** experiment ID, SLURM job ID, container import, calibration progress, export path, final status
- **On failure:** Look for `Traceback`, `Error`, or `FAILED` in the output

### Raw SLURM jobs (`type: slurm`)

- **Check:** `ssh <host> "squeue -j <id> -h -o '%T %M %R'"` — if empty, the job has left the queue
- **On completion:** `ssh <host> "sacct -j <id> --format=State,ExitCode,Elapsed -n"`
- **On failure:** Check the job's output log file
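Interpreting the raw SLURM check can be sketched as follows; this assumes the exact `squeue`/`sacct` output formats shown above, and the function names are illustrative:

```python
def classify_squeue(output: str) -> str:
    """Interpret `squeue -j <id> -h -o '%T %M %R'` output.
    An empty result means the job left the queue; fall back to sacct."""
    line = output.strip()
    if not line:
        return "LEFT_QUEUE"
    return line.split()[0]  # first field (%T) is the job state, e.g. RUNNING

def classify_sacct(output: str) -> str:
    """Interpret `sacct -j <id> --format=State,ExitCode,Elapsed -n` output,
    taking the State field of the first (parent) job record."""
    for line in output.splitlines():
        fields = line.split()
        if fields:
            return fields[0]
    return "UNKNOWN"
```

Taking only the first whitespace-separated field also handles states like `CANCELLED by <uid>`, which sacct reports with a trailing annotation.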

---

## Identifying Jobs (user-initiated, no ID given)

When the user asks about a job without specifying an ID, check in order:

1. `.claude/active_jobs.json` — most reliable, has context
2. `nel ls runs --since 1d` — recent NEL runs
3. `ssh <host> "squeue -u <user>"` — active SLURM jobs
4. `ls -lt tools/launcher/experiments/cicd/ | head -10` — recent launcher experiments
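The lookup order above is a first-match fallback chain; a minimal sketch, with each probe callable standing in for one of the four lookups (names are illustrative):

```python
from typing import Callable

def identify_job(probes: list[tuple[str, Callable[[], list[str]]]]) -> tuple[str, list[str]]:
    """Run each probe in order; return the source name and first non-empty
    candidate list. Each probe wraps one lookup (registry, nel ls, squeue, ls)."""
    for source, probe in probes:
        candidates = probe()
        if candidates:
            return source, candidates
    return "none", []
```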

---

## Reporting Guidelines

- **Report state changes proactively** — PENDING → RUNNING, or job completes
- **Aggregate multiple jobs** — "2 of 4 completed (MMLU-Pro: 42.3%, GSM8K: 67.1%), 1 running, 1 pending"
- **Summarize, don't echo** — interpret events ("Calibration complete, exporting checkpoint"), not raw logs
- **On failure, diagnose immediately** — check logs and report the root cause without waiting for the user to ask
- **Minimize noise** — don't report "still running" unless the user is actively asking
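An aggregate line like the one above can be sketched by counting registry entries per `last_status`; the function name and phrasing are illustrative:

```python
from collections import Counter

def summarize(jobs: list[dict]) -> str:
    """Build a one-line aggregate, e.g. '2 of 4 completed, 1 running, 1 pending'."""
    counts = Counter(job["last_status"] for job in jobs)
    parts = [f"{counts['COMPLETED']} of {len(jobs)} completed"]
    if counts["RUNNING"]:
        parts.append(f"{counts['RUNNING']} running")
    if counts["PENDING"]:
        parts.append(f"{counts['PENDING']} pending")
    return ", ".join(parts)
```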