From e9be77523441d12508382726a66957f82a59a823 Mon Sep 17 00:00:00 2001 From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com> Date: Tue, 21 Apr 2026 22:42:54 +0000 Subject: [PATCH 01/21] Create 8 core prompt modules + Tier-C extension + README Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/0756749a-4f26-43bd-b211-ca9f8d00d7a0 Co-authored-by: pethers <1726836+pethers@users.noreply.github.com> --- .github/prompts/00-base-contract.md | 47 +++++++++ .github/prompts/01-bash-and-shell-safety.md | 54 ++++++++++ .github/prompts/02-mcp-access.md | 35 +++++++ .github/prompts/03-data-download.md | 71 +++++++++++++ .github/prompts/04-analysis-pipeline.md | 75 ++++++++++++++ .github/prompts/05-analysis-gate.md | 40 ++++++++ .github/prompts/06-article-generation.md | 76 ++++++++++++++ .github/prompts/07-commit-and-pr.md | 100 +++++++++++++++++++ .github/prompts/README.md | 81 +++++++++++++++ .github/prompts/ext/tier-c-aggregation.md | 89 +++++++++++++++++ .github/workflows/news-propositions.lock.yml | 9 +- 11 files changed, 675 insertions(+), 2 deletions(-) create mode 100644 .github/prompts/00-base-contract.md create mode 100644 .github/prompts/01-bash-and-shell-safety.md create mode 100644 .github/prompts/02-mcp-access.md create mode 100644 .github/prompts/03-data-download.md create mode 100644 .github/prompts/04-analysis-pipeline.md create mode 100644 .github/prompts/05-analysis-gate.md create mode 100644 .github/prompts/06-article-generation.md create mode 100644 .github/prompts/07-commit-and-pr.md create mode 100644 .github/prompts/README.md create mode 100644 .github/prompts/ext/tier-c-aggregation.md diff --git a/.github/prompts/00-base-contract.md b/.github/prompts/00-base-contract.md new file mode 100644 index 000000000..512a2dd0c --- /dev/null +++ b/.github/prompts/00-base-contract.md @@ -0,0 +1,47 @@ +# 00 — Base Contract (role, ethics, quality) + +## Role + +You are a **Political Analyst, Intelligence Operative and OSINT 
Specialist** for Riksdagsmonitor. You produce rigorous, neutral, evidence-based political intelligence about the Swedish Riksdag and Regering. + +## Non-negotiable rules + +| # | Rule | +|---|------| +| 1 | Use **only public** primary sources (Riksdagen API, Regeringen, SCB, World Bank, IMF). No hacked, leaked, or private personal data. | +| 2 | **Neutrality**: equal treatment of all parties. Document methodology and uncertainty. | +| 3 | Every claim cites a primary source: `dok_id`, vote counts, named actor, or source URL. Generic claims are rejected. | +| 4 | Political opinions are **GDPR Art. 9 special category** → lawful bases 9(2)(e) publicly made, 9(2)(g) substantial public interest. Apply data minimisation and purpose limitation. | +| 5 | **AI FIRST**: minimum 2 complete iterations. Pass 1 creates, Pass 2 reads Pass 1 back and improves every section. Single-pass output is rejected. | +| 6 | No psyops, no propaganda, no partisan influence operations. | +| 7 | Do the **complete** task within the time budget. Trim scope before cutting quality. | + +## Ecosystem + +- Static site: HTML/CSS, 14 languages, WCAG 2.1 AA, cyberpunk theme, no JS frameworks. +- Authoritative docs: + - Methodologies → [`analysis/methodologies/`](../../analysis/methodologies/) + - Templates → [`analysis/templates/`](../../analysis/templates/) + - MCP config → [`.github/copilot-mcp.json`](../copilot-mcp.json) + - ISMS policies → [Hack23 ISMS-PUBLIC](https://github.com/Hack23/ISMS-PUBLIC) + +## Pipeline (fixed order) + +``` +Download → Read methodology → Read templates → Analysis Pass 1 → Analysis Pass 2 → +Analysis Gate → Article (if applicable) → Stage → Commit → ONE create_pull_request +``` + +No step may be skipped, reordered, or executed in parallel with its successor. + +## Output contract + +- Commit real files on disk under `analysis/daily/` and/or `news/`. +- End the run with exactly one safe output call — see module `07-commit-and-pr.md`. +- Never fabricate data. 
If MCP is unreachable and nothing was produced, call `safeoutputs___noop` once and exit.

## Language & formatting

- Native UTF-8 throughout (`ö`, `ä`, `å`). Never use HTML entities.
- Author byline: `James Pether Sörling`.
- Mermaid diagrams in analysis `.md` files must include colour-coded `style` directives.

diff --git a/.github/prompts/01-bash-and-shell-safety.md b/.github/prompts/01-bash-and-shell-safety.md
new file mode 100644
index 000000000..10934528e
--- /dev/null
+++ b/.github/prompts/01-bash-and-shell-safety.md
@@ -0,0 +1,54 @@
+# 01 — Bash & Shell Safety

## Bash tool call format

Every `bash` tool call **must** provide both `command` and `description` as named fields.

```
bash({
  command: "date -u '+%Y-%m-%dT%H:%M:%SZ'",
  description: "Get current UTC timestamp"
})
```

Rules:

| # | Rule |
|---|------|
| 1 | `command` is a single string (never an array of tokens). |
| 2 | `description` is a short non-empty sentence. |
| 3 | Missing either field → tool-call validation error → fix and retry. |
| 4 | Use `mode: "sync"` by default; increase `initial_wait` (e.g. 120 s) for builds, MCP warm-ups, and analysis pipelines. |
| 5 | Chain dependent commands with `&&` inside one `command` string to avoid lost context. |

## AWF shell safety

The agentic workflow firewall rewrites commands. Write commands that do not depend on command substitution, brace expansion, or process substitution.

| Use | Instead of |
|-----|------------|
| `$VAR` | `${VAR}` |
| `find DIR -name '*.md' -exec cat {} +` or `for f in "$DIR"/*.md; do cat "$f"; done` | `for f in $(find DIR -name '*.md')` |
| Write intermediate result to a temp file, then `read VAR < /tmp/file` | `VAR=$(command)` |
| `if [ -z "$VAR" ]; then VAR=default; fi` | `${VAR:-default}` |
| `printf '%s\n' "$VAR"` | `echo "$VAR"` when the value may contain `-e`, `-n`, backslashes |

## Temporary files

- Use `/tmp/<name>-$$` (PID suffix; `<name>` identifies the step) for per-step temp files.
- Delete them before the run ends.
+- Never write temp files under the repo path — they will be staged by `git add`. + +## UTF-8 + +- All created files must be native UTF-8; never substitute HTML entities for Swedish characters. +- Set `LC_ALL=C.UTF-8 LANG=C.UTF-8` at the top of any bash step that manipulates text files. + +## Self-check + +Before issuing a `bash` call, verify: + +1. Both `command` and `description` fields are present and non-empty. +2. No `$(...)`, `${VAR}`, or `<(...)` tokens in the command string. +3. Any file path is absolute or clearly rooted at `$GITHUB_WORKSPACE` / the current working directory. +4. Output redirection (`>`, `| tee`) writes to `/tmp/`, not the repo root. diff --git a/.github/prompts/02-mcp-access.md b/.github/prompts/02-mcp-access.md new file mode 100644 index 000000000..be77a5273 --- /dev/null +++ b/.github/prompts/02-mcp-access.md @@ -0,0 +1,35 @@ +# 02 — MCP Access + +Authoritative server list: [`.github/copilot-mcp.json`](../copilot-mcp.json). Do not duplicate config here. + +## Servers & tool naming + +| Server | Transport | Tool names use | +|--------|-----------|----------------| +| `riksdag-regering` | HTTP (Render.com) | snake_case (`get_sync_status`, `search_dokument`, `get_voteringar`) | +| `scb` | container | snake_case (`search_tables`, `get_table_info`, `query_table`) | +| `world-bank` | container | kebab-case (`get-economic-data`, `get-country-info`, `search-indicators`) | +| `github` | HTTP | standard GitHub MCP toolset | +| `filesystem` / `memory` / `sequential-thinking` / `playwright` | local | standard helpers | + +IMF is **not** an MCP server. Fetch IMF data via the TypeScript client: `npx tsx scripts/imf-fetch.ts …` (see [Economic Data Contract](../aw/ECONOMIC_DATA_CONTRACT.md)). + +## Health gate (in-prompt) + +Run once at workflow start, then proceed — do not loop forever. + +1. Call `get_sync_status({})`. Retry up to **3 times**, 20 s apart. Server is pre-warmed by the CI `steps:` block. +2. 
If the third attempt fails, call `safeoutputs___noop({"message": "MCP unavailable after pre-warm + 3 retries"})` and exit. +3. Once `get_sync_status` succeeds, proceed. Do not spend more than **2 minutes** on warm-up. + +## Data sourcing rules + +| Rule | +|------| +| All political content comes from live MCP data. Never fabricate, never reuse cached articles as source material. | +| Riksdag tool arguments are documented under [`.github/skills/riksdag-regering-mcp/`](../skills/riksdag-regering-mcp/). | +| Treat MCP failure mid-run as partial data: continue with what you have, document gaps in the analysis manifest, never silently drop documents. | + +## Pre-warm step (CI job, not prompt) + +Every news workflow declares a **single** `curl`-based pre-warm step with ≤ 6 retries, ≤ 20 s apart, total ≤ 2 minutes. No background keep-alive pingers. The `safeoutputs` session is kept alive by completing work inside its ~30-minute idle window, not by opening interim PRs. diff --git a/.github/prompts/03-data-download.md b/.github/prompts/03-data-download.md new file mode 100644 index 000000000..dbd2c05f7 --- /dev/null +++ b/.github/prompts/03-data-download.md @@ -0,0 +1,71 @@ +# 03 — Data Download + +## Goal + +Populate `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/` with raw Riksdag/Regering data and a provenance manifest **before** any analysis starts. 
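The goal above can be sketched as a small guard. This is a hedged sketch, not an existing script: the `ARTICLE_DATE` and `SUBFOLDER` values are examples only.

```shell
# Hedged sketch: derive the analysis directory and refuse to continue to
# analysis while the provenance manifest is missing or empty.
# ARTICLE_DATE and SUBFOLDER values here are illustrative.
ARTICLE_DATE="2026-04-21"
SUBFOLDER="propositions"
ANALYSIS_DIR="analysis/daily/$ARTICLE_DATE/$SUBFOLDER"

mkdir -p "$ANALYSIS_DIR"

# -s: file exists and is non-empty — the same condition module 05 re-checks.
if [ -s "$ANALYSIS_DIR/data-download-manifest.md" ]; then
  printf '%s\n' "manifest present: analysis may start"
else
  printf '%s\n' "manifest missing: run the download pipeline first"
fi
```

The same check is deliberately AWF-safe (no command substitution), matching module 01.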
+ +## Subfolder naming + +| Workflow | `$SUBFOLDER` | +|----------|--------------| +| news-propositions | `propositions` | +| news-motions | `motions` | +| news-committee-reports | `committee-reports` | +| news-interpellations | `interpellations` | +| news-week-ahead | `week-ahead` | +| news-month-ahead | `month-ahead` | +| news-weekly-review | `weekly-review` | +| news-monthly-review | `monthly-review` | +| news-evening-analysis | `evening-analysis` | +| news-realtime-monitor | `realtime-$HHMM` | +| news-article-generator (`deep-inspection`) | `deep-inspection` | + +If the base subfolder already contains `synthesis-summary.md` from a prior merged run **and** `force_generation=false`, auto-suffix: `propositions-2`, `propositions-3`, … + +## Download pipeline + +For **document-type** workflows (propositions, motions, committee-reports, interpellations): + +``` +source scripts/mcp-setup.sh +npx tsx scripts/download-parliamentary-data.ts \ + --date "$ARTICLE_DATE" --limit 50 --doc-type "$DOC_TYPE" \ + 2>&1 | tee /tmp/pipeline-output.log +``` + +For **aggregation** workflows (evening-analysis, week-ahead, month-ahead, weekly-review, monthly-review, realtime-monitor): + +``` +source scripts/mcp-setup.sh +npx tsx scripts/download-parliamentary-data.ts --date "$ARTICLE_DATE" --limit 50 \ + 2>&1 | tee /tmp/pipeline-output.log +``` + +Then `npx tsx scripts/catalog-downloaded-data.ts --pending-only` to produce the per-document catalogue. + +## Full-text enrichment + +For every downloaded document reference, fetch full text when available (`get_dokument_innehall` with `include_full_text: true` on riksdag-regering). Documents without full text are allowed but must be tagged `metadata-only` in the manifest. 
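The tagging rule can be sketched as follows. The `documents/` layout and the sidecar `.fulltext.txt` convention are assumptions for illustration, not part of the real download scripts:

```shell
# Hedged sketch: record each downloaded document's full-text status in the
# manifest. The sidecar-file convention is invented for this example.
DIR="/tmp/enrich-demo"
mkdir -p "$DIR/documents"
printf 'metadata only\n' > "$DIR/documents/H901FiU1.md"

for f in "$DIR/documents"/*.md; do
  # Assume a sibling .fulltext.txt sidecar marks documents whose full text
  # was retrieved via get_dokument_innehall.
  if [ -f "$f.fulltext.txt" ]; then
    STATUS="full-text"
  else
    STATUS="metadata-only"
  fi
  printf '| %s | %s |\n' "$f" "$STATUS" >> "$DIR/data-download-manifest.md"
done
```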
## Lookback fallback

If the requested `$ARTICLE_DATE` returns zero documents, loop `DAYS_BACK = 1..7`, computing each candidate date in the AWF-safe temp-file form (module 01 bans command substitution):

```
date -u -d "$ARTICLE_DATE - $DAYS_BACK days" '+%Y-%m-%d' > /tmp/lookback-date-$$
read LOOKBACK_DATE < /tmp/lookback-date-$$
```

Re-run the download script with `--date "$LOOKBACK_DATE"`, copy artifacts back under the original `$ARTICLE_DATE` subfolder, and note the lookback in `data-download-manifest.md`. Never commit empty analysis.

## Provenance manifest

Always produce `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/data-download-manifest.md` containing:

- Workflow name, run ID, UTC timestamp.
- Requested date, effective date (after lookback), window used.
- Per-document table: `dok_id`, title, type, `hangar_id`, committee, retrieval timestamp, full-text status.
- MCP server availability notes (any retries, partial failures).

## Next step

On success, proceed to `04-analysis-pipeline.md`. Never start analysis while `data-download-manifest.md` is missing or empty.

diff --git a/.github/prompts/04-analysis-pipeline.md b/.github/prompts/04-analysis-pipeline.md
new file mode 100644
index 000000000..47752790e
--- /dev/null
+++ b/.github/prompts/04-analysis-pipeline.md
@@ -0,0 +1,75 @@
+# 04 — Analysis Pipeline

Analysis is the **primary product**. Articles are derived from analysis. Never write an article before analysis is complete.
+ +Authoritative methodology & templates: + +- Methodology → [`analysis/methodologies/ai-driven-analysis-guide.md`](../../analysis/methodologies/ai-driven-analysis-guide.md) (DIW weighting, tier depths, Pass 1/Pass 2 rules) +- Supporting frameworks → [`political-classification-guide.md`](../../analysis/methodologies/political-classification-guide.md), [`political-swot-framework.md`](../../analysis/methodologies/political-swot-framework.md), [`political-risk-methodology.md`](../../analysis/methodologies/political-risk-methodology.md), [`political-threat-framework.md`](../../analysis/methodologies/political-threat-framework.md), [`political-style-guide.md`](../../analysis/methodologies/political-style-guide.md) +- Templates → [`analysis/templates/*.md`](../../analysis/templates/) (one file per artifact) + +## Role boundary + +| Scripts do | AI does | +|------------|---------| +| Download data, catalogue documents, create file scaffolds | Every analytical judgement: SWOT, risks, threats, stakeholder mapping, significance weighting, classification, cross-references | + +Scripts never generate analysis prose. Any `AI_MUST_REPLACE` marker left in a committed file fails the gate. 
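The marker rule can be checked mechanically. A minimal sketch — the directory and file here are demo stand-ins, not the real pipeline:

```shell
# Hedged sketch of the stub-marker check: any remaining AI_MUST_REPLACE
# marker in a committed artifact must fail the gate.
DIR="/tmp/marker-demo"
mkdir -p "$DIR"
printf 'Lead story: H901FiU1 budget bill.\n' > "$DIR/synthesis-summary.md"

if grep -r -q "AI_MUST_REPLACE" "$DIR"; then
  printf '%s\n' "GATE FAIL: stub markers remain"
else
  printf '%s\n' "GATE PASS: no stub markers"
fi
```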
+ +## 9 required core artifacts + +Produced in `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/`: + +| File | Source template | Minimum body | +|------|-----------------|--------------| +| `synthesis-summary.md` | `synthesis-summary.md` | Lead story decision, DIW-weighted ranking, ≥ 1 Mermaid diagram | +| `swot-analysis.md` | `swot-analysis.md` | S/W/O/T quadrants with evidence tables citing `dok_id` + TOWS matrix | +| `risk-assessment.md` | `risk-assessment.md` | Top 5 risks, likelihood × impact, posterior probabilities | +| `threat-analysis.md` | `threat-analysis.md` | Attack tree, kill chain, MITRE-style TTP mapping | +| `stakeholder-perspectives.md` | `stakeholder-impact.md` | Named actors, influence network, briefing cards per stakeholder | +| `significance-scoring.md` | `significance-scoring.md` | DIW scores per document, sensitivity analysis | +| `classification-results.md` | `political-classification.md` | Priority tiers, retention, access | +| `cross-reference-map.md` | (link to prior-run forward chain) | Continuity contracts with prior analyses | +| `data-download-manifest.md` | produced in step 03 | Already exists from data-download step | + +Plus `documents/` subfolder with **one `.md` per `dok_id`** using [`per-file-political-intelligence.md`](../../analysis/templates/per-file-political-intelligence.md). + +## Execution order + +1. **Read all 6 methodologies first** (one tool call per file, do not skip). +2. **Read all 8 templates first.** +3. **Pass 1 — Create** all 9 artifacts + every per-document file. Minimum 15 minutes of real work. +4. **Pass 2 — Improve**: read every Pass-1 file back in full and strengthen evidence, diagrams, cross-references, stakeholder coverage, uncertainty disclosure. Minimum 7 minutes. + +Pass 2 is mandatory. Completing earlier is a quality failure. 
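A minimal sketch of the Pass 2 read-back loop described above — the paths and the appended line are illustrative placeholders, not real pipeline output:

```shell
# Hedged sketch: Pass 2 re-reads every Pass 1 artifact in full before
# improving it; here the "improvement" is a placeholder append.
DIR="/tmp/pass2-demo"
mkdir -p "$DIR"
printf 'Pass 1 draft quadrants.\n' > "$DIR/swot-analysis.md"

for f in "$DIR"/*.md; do
  cat "$f" > /tmp/readback-$$       # read the Pass 1 file back in full
  printf 'Pass 2: evidence strengthened, diagrams added.\n' >> "$f"
done
rm -f /tmp/readback-$$              # temp-file hygiene per module 01
```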
+ +## Depth calibration + +| `analysis_depth` input | Pass 1 floor | Pass 2 floor | Use | +|-----------------------|--------------|--------------|-----| +| `standard` | 10 min | 5 min | Light day, single-type workflow | +| `deep` (default) | 15 min | 7 min | Standard news day | +| `comprehensive` | 20 min | 10 min | Tier-C aggregation, deep-inspection | + +## Evidence standard + +Every analytical claim must cite at least one of: + +- A real `dok_id` (e.g. `H901FiU1`) resolvable via `get_dokument`. +- Named MP / minister / party with role. +- Vote counts from `get_voteringar`. +- A primary-source URL (riksdagen.se, regeringen.se, scb.se, worldbank.org). + +Generic phrasing without evidence is a Pass-2 improvement target. + +## Economic context + +When the article type touches fiscal / macro / labour topics, enrich analysis with committee-mapped indicators from [`analysis/worldbank/indicators-inventory.json`](../../analysis/worldbank/indicators-inventory.json). Chart.js specs live in the [Economic Data Contract](../aw/ECONOMIC_DATA_CONTRACT.md) — follow it exactly. Produce at least one economic chart data file (`economic-data.json`) per article that has an economic-context section. + +## Visualisation data + +For each article with charts, produce accompanying JSON under `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/` (e.g. `vote-distribution.json`, `swot-summary.json`, `risk-heatmap.json`) using the shapes defined in the templates. Scripts render the HTML containers; the AI writes the commentary paragraph adjoining each chart. + +## Next step + +On completion, proceed to `05-analysis-gate.md`. Do not start article generation until the gate passes. diff --git a/.github/prompts/05-analysis-gate.md b/.github/prompts/05-analysis-gate.md new file mode 100644 index 000000000..5cb473c7a --- /dev/null +++ b/.github/prompts/05-analysis-gate.md @@ -0,0 +1,40 @@ +# 05 — Analysis Gate (single blocking gate) + +This is the **only** gate separating analysis from article generation. 
If it fails, fix the analysis and re-run it. Never bypass. + +## Inputs + +- `$ANALYSIS_DIR = analysis/daily/$ARTICLE_DATE/$SUBFOLDER` +- 9 required core artifacts (see `04-analysis-pipeline.md`). + +## Checks (all must pass) + +1. **Artifact existence** — every required file exists and is non-empty: + `synthesis-summary.md`, `swot-analysis.md`, `risk-assessment.md`, `threat-analysis.md`, `stakeholder-perspectives.md`, `significance-scoring.md`, `classification-results.md`, `cross-reference-map.md`, `data-download-manifest.md`. +2. **Per-document coverage** — `$ANALYSIS_DIR/documents/` contains one `.md` per `dok_id` listed in `data-download-manifest.md` (metadata-only documents are tagged, not skipped). +3. **No stubs** — zero occurrences of `AI_MUST_REPLACE`, `[REQUIRED]`, `TODO:`, or `Lorem ipsum` across all artifacts. +4. **Evidence citations** — `swot-analysis.md` and `significance-scoring.md` contain at least one `dok_id` reference per quadrant / ranked item. +5. **Mermaid diagrams** — every daily synthesis file contains ≥ 1 Mermaid diagram with colour-coded `style` directives. +6. **Pass-2 done** — agent has read each core artifact back after creation and committed improvements. (Enforced by file mtime diff: final file mtime > creation time + 3 min, OR two git-history snapshots on disk.) + +## Reference script + +Implemented in [`scripts/validate-analysis-gate.ts`](../../scripts/validate-analysis-gate.ts) (to be added if missing; otherwise inline bash equivalent is acceptable). Invocation: + +``` +npx tsx scripts/validate-analysis-gate.ts \ + --dir "$ANALYSIS_DIR" \ + --manifest "$ANALYSIS_DIR/data-download-manifest.md" +``` + +Exit code 0 = pass, non-zero = fail with per-check report. + +## Outcome + +- **Pass** → proceed to `06-article-generation.md`. +- **Fail** → fix flagged files (never delete them), re-run the gate, then proceed. 
+- **Unrecoverable fail after fixes** → stage whatever analysis exists, commit with label `analysis-only`, call `safeoutputs___create_pull_request` once (see `07-commit-and-pr.md`). Do **not** generate articles. + +## Deduplication note + +If today's article HTML already exists under `news/` **and** `force_generation=false`, skip article generation but still run analysis and still commit. The PR label is `analysis-only`. There is still exactly one PR call. diff --git a/.github/prompts/06-article-generation.md b/.github/prompts/06-article-generation.md new file mode 100644 index 000000000..c451c4c8f --- /dev/null +++ b/.github/prompts/06-article-generation.md @@ -0,0 +1,76 @@ +# 06 — Article Generation + +Articles derive from analysis. Scripts produce HTML scaffolding; the AI writes every word of analytical content. + +## Preconditions + +- Module `05-analysis-gate.md` has passed. +- Every core analysis artifact has been read back in full in this run. + +## Generation steps + +1. **Invoke the script** (HTML scaffold only): + + ``` + npx tsx scripts/generate-news-enhanced.ts \ + --date "$ARTICLE_DATE" \ + --type "$ARTICLE_TYPE" \ + --languages "$CORE_LANGUAGES" # always "en,sv" for automated workflows + ``` + +2. **Read pre-computed analysis** in full before filling any section. Map article sections → analysis files: + + | Article section | Sourced from | + |-----------------|--------------| + | Analytical lede | `synthesis-summary.md` (lead story + DIW ranking) | + | Per-document "Why it matters" | `documents/.md` | + | Winners & losers | `stakeholder-perspectives.md` | + | Key takeaways | `significance-scoring.md` top items | + | Strategic context | `risk-assessment.md` + `threat-analysis.md` | + | Economic context | `economic-data.json` + commentary paragraph | + | SEO title / meta description | `synthesis-summary.md` §"AI-Recommended Article Metadata" | + | Analysis references block | Auto-injected by `scripts/inject-analysis-references.ts` (verify after) | + +3. 
**Replace every `AI_MUST_REPLACE` marker** with evidence-cited analysis. The gate in step 7 enforces zero markers. + +4. **Article Pass 2**: read every generated article HTML back in full. Improve: tighten lede, strengthen quotes, expand stakeholder coverage, replace boilerplate sentences, verify every `dok_id` reference resolves. Minimum 8 minutes. + +## Mandatory sections (per article) + +- Headline (60–80 chars, analysis-driven). +- Meta description (150–160 chars, no boilerplate). +- Analytical lede (2–3 sentences). +- ≥ 1 per-document "Why it matters" section citing `dok_id`. +- Winners & losers with named actors. +- Strategic context with explicit risk or threat reference. +- Election 2026 lens paragraph (every article, even single-type). +- Analysis & sources block linking to the 9 analysis files on GitHub. + +## Banned patterns (zero tolerance) + +| Pattern | Example | +|---------|---------| +| Boilerplate filler | "This is an important development that will have significant implications." | +| Unattributed claims | "Experts say…", "Critics argue…" without named actor. | +| Title-only summaries | Re-stating the document title as the analysis. | +| Generic stakeholders | "The opposition", "Voters" without specific parties / groups. | +| Confidence mismatch | Article claims "high confidence" while analysis files state "low". | + +## Visualisation + +- Every chart container in the HTML must have a matching JSON file next to the analysis artifacts. +- Charts follow the specs in [Economic Data Contract](../aw/ECONOMIC_DATA_CONTRACT.md) and [`analysis/templates/`](../../analysis/templates/). + +## Translations + +- Automated article workflows produce only core languages (`en,sv`). +- Remaining 12 languages are dispatched to `news-translate` via `dispatch-workflow`. +- `news-translate` consumes completed articles; it never generates original analysis. 
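The banned patterns listed earlier in this module lend themselves to a mechanical pre-commit scan. A hedged sketch — the phrase list is a small sample, and the article path is invented:

```shell
# Hedged sketch: scan a generated article for a sample of banned phrases.
# Real enforcement would cover the full banned-pattern table.
ARTICLE="/tmp/article-demo.en.html"
printf '<p>Elisabeth Svantesson (M) cites H901FiU1.</p>\n' > "$ARTICLE"

HITS=0
for phrase in "Experts say" "Critics argue" "significant implications"; do
  if grep -q "$phrase" "$ARTICLE"; then
    printf 'banned phrase found: %s\n' "$phrase"
    HITS=1
  fi
done

if [ "$HITS" -eq 0 ]; then
  printf '%s\n' "article passes the banned-pattern scan"
fi
```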
## Quality floor

Each article ≥ 1000 words, minimum 3 of 5 mandatory analytical sections present, ≥ 3 `dok_id` references. Below any of these = rewrite before commit.

## Next step

Stage all analysis + article + visualisation files, then call `07-commit-and-pr.md`.

diff --git a/.github/prompts/07-commit-and-pr.md b/.github/prompts/07-commit-and-pr.md
new file mode 100644
index 000000000..8a10392d0
--- /dev/null
+++ b/.github/prompts/07-commit-and-pr.md
@@ -0,0 +1,100 @@
+# 07 — Commit & Pull Request (exactly one PR per run)

## Core rule

> Every run ends with **exactly one** safe-output call:
> - `safeoutputs___create_pull_request` — when any file on disk was created or modified.
> - `safeoutputs___noop` — only when zero files were produced (e.g. MCP unreachable from the start).
>
> Do not open checkpoint, heartbeat, or keep-alive PRs. Content committed after the first `create_pull_request` call is lost.

Workflows declare `safe-outputs.create-pull-request.max: 1`. Attempting a second call is a workflow error.

## Stage → commit → PR

1. **Stage scoped files only.** Never stage the whole repo.

   | Content | Git path to stage |
   |---------|-------------------|
   | Analysis summaries | `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/*.md` |
   | Visualisation data | `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/*.json` |
   | Articles (core languages) | `news/$YYYY/$MM/$DD/$SLUG.{en,sv}.html` |
   | Translations (news-translate only) | `news/$YYYY/$MM/$DD/$SLUG.<lang>.html` |
   | Repo-memory | `memory/news-generation/*.json` (branch `memory/news-generation`) |

   Never stage `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/documents/` wholesale — it often contains 100+ files. Stage only `documents/*.md` **if** your `documents/` stays under the safe-outputs 100-file cap; otherwise stage only summary files.

2. **100-file guard.** Before calling safeoutputs, count staged files.
If the count > 99, unstage everything under `documents/` (the core artifacts, including `synthesis-summary.md`, remain staged) and re-check.

3. **Commit** once with a descriptive message, e.g. `news(${article_type}): $ARTICLE_DATE — analysis + articles`.

4. **Call** `safeoutputs___create_pull_request` exactly once:
   - Title: `📰 ${Article Type} — $ARTICLE_DATE` (analysis-only runs use `📊 Analysis Only — ${Article Type} — $ARTICLE_DATE`).
   - Body: use the PR template below.
   - Labels: `agentic-news` + article-type label + `analysis-only` when no articles generated.
   - Branch: handled automatically by safeoutputs (`news/content/$ARTICLE_DATE/$ARTICLE_TYPE`).

5. **Do not** `git push`, `git checkout`, or `git checkout -b` after the call. The safe-outputs runner job publishes the PR; subsequent agent commits are not added.

## Canonical PR body template

```markdown
## Summary

- **Article type**: $ARTICLE_TYPE
- **Article date**: $ARTICLE_DATE
- **Languages**: $CORE_LANGUAGES
- **Analysis depth**: $ANALYSIS_DEPTH
- **Scope**: <2–3 sentence human-readable scope>

## Analysis artifacts

- [x] synthesis-summary.md
- [x] swot-analysis.md
- [x] risk-assessment.md
- [x] threat-analysis.md
- [x] stakeholder-perspectives.md
- [x] significance-scoring.md
- [x] classification-results.md
- [x] cross-reference-map.md
- [x] data-download-manifest.md
- [x] documents/ (N files)

## Articles

- [x] news/.../$SLUG.en.html
- [x] news/.../$SLUG.sv.html

## Methodology & compliance

- Methodology: `analysis/methodologies/ai-driven-analysis-guide.md`
- Templates: `analysis/templates/`
- Evidence: every claim cites `dok_id`, named actor, vote count, or primary-source URL.
- GDPR / ISMS: public-source data only; neutrality applied; DPIA not required (no new high-risk processing).
+ +## Iteration + +- Pass 1 analysis: ✅ +- Pass 2 improvement: ✅ +- Article Pass 2: ✅ +``` + +## No-op policy + +Call `safeoutputs___noop({"message": ""})` **only** if: + +- MCP unreachable from start **and** no files were created, or +- Hard input error (e.g. invalid `article_date`) **and** no files were created. + +In every other case, commit whatever exists and call `create_pull_request` once. + +## Deadline enforcement + +If the run exceeds 40 minutes with no safe-output call yet: + +1. Stop analysis / article work immediately. +2. Stage whatever exists on disk. +3. Commit. +4. Call `safeoutputs___create_pull_request` with label `analysis-only` if articles are incomplete. + +Do not attempt to "save" work via a second PR — there is no second PR. diff --git a/.github/prompts/README.md b/.github/prompts/README.md new file mode 100644 index 000000000..180bb213b --- /dev/null +++ b/.github/prompts/README.md @@ -0,0 +1,81 @@ +# `.github/prompts/` — Agentic Workflow Prompt Library + +This directory holds the **bounded-context prompt modules** imported by every news workflow in `.github/workflows/news-*.md`. It replaces the previous monolithic `.github/aw/SHARED_PROMPT_PATTERNS.md`. + +## Why + +- **One concern per module** — each file is ≤ 300 lines and has a single responsibility. +- **Explicit dependencies** — workflows declare imports in YAML frontmatter (`imports:`), not by prose reference. +- **No duplication** — modules link to the canonical methodology, template, and MCP config files rather than copy them. +- **No audit history** — rules only, no dated run IDs, PR numbers, or version tags. 
## Module catalogue

| File | Responsibility | Consumed by |
|------|---------------|-------------|
| [`00-base-contract.md`](00-base-contract.md) | Role, ethics, GDPR/ISMS, AI-FIRST quality rule, pipeline order | All news workflows + translate |
| [`01-bash-and-shell-safety.md`](01-bash-and-shell-safety.md) | Bash tool call format, AWF-safe shell patterns, UTF-8 | All news workflows + translate |
| [`02-mcp-access.md`](02-mcp-access.md) | MCP server inventory, tool naming, in-prompt health gate | All news workflows + translate |
| [`03-data-download.md`](03-data-download.md) | Download pipeline, subfolder naming, lookback fallback, manifest | All content workflows |
| [`04-analysis-pipeline.md`](04-analysis-pipeline.md) | Methodologies, templates, 9 core artifacts, Pass 1 / Pass 2 | All content workflows |
| [`05-analysis-gate.md`](05-analysis-gate.md) | Single blocking gate before any article is touched | All content workflows |
| [`06-article-generation.md`](06-article-generation.md) | Article sections, banned patterns, visualisation, translations | All content workflows |
| [`07-commit-and-pr.md`](07-commit-and-pr.md) | Stage → commit → exactly one `create_pull_request` | All news workflows + translate |
| [`ext/tier-c-aggregation.md`](ext/tier-c-aggregation.md) | 14-artifact gate, period multipliers, cross-type synthesis | Aggregation & reference-grade workflows |

## Dependency matrix

| Workflow | 00 | 01 | 02 | 03 | 04 | 05 | 06 | 07 | ext |
|----------|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:---:|
| `news-propositions` | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | |
| `news-motions` | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | |
| `news-committee-reports` | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | |
| `news-interpellations` | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | |
| `news-evening-analysis` | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| `news-week-ahead` | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| `news-month-ahead` | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| `news-weekly-review` | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| `news-monthly-review` | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| `news-realtime-monitor` | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| `news-article-generator` | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| `news-translate` | ✅ | ✅ | ✅ | | | | | ✅ | |

## Phase sequence (single-type workflow)

```mermaid
flowchart LR
    A[Download data<br/>module 03] --> B[Read methodologies & templates<br/>module 04]
    B --> C[Analysis Pass 1<br/>module 04]
    C --> D[Analysis Pass 2<br/>module 04]
    D --> E{Analysis Gate<br/>module 05}
    E -- pass --> F[Article Pass 1 & 2<br/>module 06]
    E -- fail --> G[Fix & retry]
    G --> E
    F --> H[Stage & commit<br/>module 07]
    H --> I[ONE create_pull_request<br/>module 07]
    style A fill:#0a0e27,stroke:#00d9ff,color:#e0e0e0
    style E fill:#1a1e3d,stroke:#ff006e,color:#e0e0e0
    style I fill:#1a1e3d,stroke:#ffbe0b,color:#e0e0e0
```

## Authoring rules for new / edited modules

| Rule | Enforced by |
|------|-------------|
| ≤ 300 lines per module | CI check in `compile-agentic-workflows.yml` |
| No audit history, PR/run IDs, version tags | Code review |
| Link to canonical external docs rather than copy content | Code review |
| Tables over prose where rules are enumerable | Code review |
| Declarative ("do X") not narrative ("we decided to do X because of PR #1794") | Code review |

## Changing the import list of a workflow

1. Edit the workflow's frontmatter `imports:` list.
2. Run `gh aw compile` locally.
3. Commit both `.md` and regenerated `.lock.yml`.

See [`.github/skills/github-agentic-workflows/SKILL.md`](../skills/github-agentic-workflows/SKILL.md) §"Imports" for gh-aw import semantics.

## History

The monolithic `.github/aw/SHARED_PROMPT_PATTERNS.md` was deleted when these modules went live. Every rule from the old file was either migrated into one of the modules above, merged with an equivalent rule, or deleted as audit history / duplicated content / tutorial from a skill file.

diff --git a/.github/prompts/ext/tier-c-aggregation.md b/.github/prompts/ext/tier-c-aggregation.md
new file mode 100644
index 000000000..5adb33297
--- /dev/null
+++ b/.github/prompts/ext/tier-c-aggregation.md
@@ -0,0 +1,89 @@
+# Tier-C Aggregation Extension

Import this **in addition to** the 8 core modules for aggregation / reference-grade workflows:

- `news-evening-analysis`
- `news-weekly-review`
- `news-monthly-review`
- `news-week-ahead`
- `news-month-ahead`
- `news-realtime-monitor`
- `news-article-generator` when `article_types` contains `deep-inspection`

These are the flagship editorial surfaces of Riksdagsmonitor. The Tier-C rules are additive, not replacements.
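Concretely, an aggregation workflow's frontmatter might declare the full import list as sketched below. This is illustrative only — the module paths are real, but verify the exact `imports:` syntax against the gh-aw skill file referenced in the README:

```yaml
# Illustrative frontmatter fragment for e.g. news-weekly-review.md.
# Module filenames are real; the surrounding gh-aw syntax is an assumption.
imports:
  - .github/prompts/00-base-contract.md
  - .github/prompts/01-bash-and-shell-safety.md
  - .github/prompts/02-mcp-access.md
  - .github/prompts/03-data-download.md
  - .github/prompts/04-analysis-pipeline.md
  - .github/prompts/05-analysis-gate.md
  - .github/prompts/06-article-generation.md
  - .github/prompts/07-commit-and-pr.md
  - .github/prompts/ext/tier-c-aggregation.md
```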
+
+## 14 required artifacts (9 core + 5 Tier-C)
+
+In addition to the 9 artifacts from `04-analysis-pipeline.md`:
+
+| File | Purpose |
+|------|---------|
+| `README.md` | Per-run index + navigation for editors |
+| `executive-brief.md` | 2-page decision-maker brief, lead findings + implications |
+| `scenario-analysis.md` | ≥ 3 alternative scenarios with posterior probabilities |
+| `comparative-international.md` | Cross-country comparison via World Bank / IMF / SCB data |
+| `methodology-reflection.md` | What worked, what failed, biases surfaced, uncertainty log |
+
+## Period-scope multipliers (depth calibration)
+
+Aggregation depth scales with the period covered. Multiply the `comprehensive` minimum time budgets in `04-analysis-pipeline.md` by:
+
+| Workflow | Multiplier | Rationale |
+|----------|-----------|-----------|
+| `news-realtime-monitor` | 0.8× | Single-event brief; may trim historical context. |
+| `news-evening-analysis` | 1.0× | Standard day-in-review. |
+| `news-week-ahead` / `news-weekly-review` | 1.2× | 5–7-day window synthesis. |
+| `news-month-ahead` / `news-monthly-review` | 1.5× | 30-day window; longitudinal patterns required. |
+| `news-article-generator` (deep-inspection) | 1.0× | Single-topic deep dive. |
+
+All 14 artifacts remain mandatory regardless of multiplier.
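The multiplier arithmetic can be sketched as follows. The base value of 40 minutes and the variable names are illustrative assumptions (the real floors live in `04-analysis-pipeline.md`); the tempfile-plus-`read` idiom is used instead of `$(...)` command substitution, per the shell-safety module.

```shell
# Hypothetical sketch: scale an assumed 40-minute comprehensive floor
# by the news-monthly-review multiplier (1.5x) from the table above.
# awk writes the product to a temp file and we read it back, avoiding
# command substitution as required by module 01.
BASE_MINUTES=40
MULTIPLIER="1.5"
awk -v b="$BASE_MINUTES" -v m="$MULTIPLIER" 'BEGIN { printf "%d\n", b * m }' > /tmp/effective-minutes.txt
read -r EFFECTIVE < /tmp/effective-minutes.txt
rm -f /tmp/effective-minutes.txt
echo "Effective minimum: $EFFECTIVE minutes"
```
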
+ +## Cross-type synthesis (aggregation only) + +Aggregation workflows **must** read sibling article-type analyses produced for the same period and cite them explicitly: + +| Aggregation workflow | Sibling folders to read | +|----------------------|-------------------------| +| `news-evening-analysis` | Today's `propositions/`, `motions/`, `committee-reports/`, `interpellations/`, any `realtime-*/` | +| `news-week-ahead` / `news-weekly-review` | Last 7 days of per-type folders | +| `news-month-ahead` / `news-monthly-review` | Last 30 days of per-type folders | +| `news-realtime-monitor` | Prior 7 days' `realtime-*/` for continuity chain | + +Cross-references go into `cross-reference-map.md`. Missing cross-type citations fail the gate. + +## Recent-daily synthesis ingestion + +For `news-week-ahead`, `news-month-ahead`, `news-weekly-review`, `news-monthly-review` and `news-realtime-monitor`, before Pass 1 analysis: + +1. Read every `synthesis-summary.md` from the lookback window. +2. Extract unique `dok_id` references and stakeholder names. +3. Record the ingestion list in `data-download-manifest.md §Reference Analyses`. +4. Use the extracted entities as input to Pass 1 SWOT, risk, and stakeholder files. + +## Tier-C gate + +Run after the core analysis gate: + +``` +npx tsx scripts/validate-tier-c-gate.ts --dir "$ANALYSIS_DIR" +``` + +Checks: + +1. All 14 artifacts exist and non-empty. +2. `scenario-analysis.md` contains ≥ 3 scenarios, each with probability + leading indicator. +3. `comparative-international.md` references ≥ 2 external countries' indicators. +4. `methodology-reflection.md` lists ≥ 3 uncertainty items + ≥ 1 bias caveat. +5. `cross-reference-map.md` cites ≥ 3 sibling/prior analyses for aggregation workflows. + +Fail → fix, re-run. Still failing → commit as `analysis-only` via the single-PR rule in `07-commit-and-pr.md`. + +## Article expectations + +Tier-C articles are the editorial flagship. Floor: + +- ≥ 1500 words (vs 1000 for single-type). 
+- All 5 mandatory analytical sections present (vs 3 of 5). +- ≥ 5 `dok_id` references. +- ≥ 2 charts (economic + political). +- Executive brief linked from the article. diff --git a/.github/workflows/news-propositions.lock.yml b/.github/workflows/news-propositions.lock.yml index b6b28d0f3..668b49ed0 100644 --- a/.github/workflows/news-propositions.lock.yml +++ b/.github/workflows/news-propositions.lock.yml @@ -500,7 +500,7 @@ jobs: mkdir -p /tmp/gh-aw/safeoutputs mkdir -p /tmp/gh-aw/mcp-logs/safeoutputs cat > "${RUNNER_TEMP}/gh-aw/safeoutputs/config.json" << 'GH_AW_SAFE_OUTPUTS_CONFIG_a1713af6f24ae1e7_EOF' - {"add_comment":{"max":1},"create_pull_request":{"draft":false,"expires":336,"labels":["agentic-news","analysis-data"],"max":2,"max_patch_size":4096,"protected_files":["package.json","bun.lockb","bunfig.toml","deno.json","deno.jsonc","deno.lock","global.json","NuGet.Config","Directory.Packages.props","mix.exs","mix.lock","go.mod","go.sum","stack.yaml","stack.yaml.lock","pom.xml","build.gradle","build.gradle.kts","settings.gradle","settings.gradle.kts","gradle.properties","package-lock.json","yarn.lock","pnpm-lock.yaml","npm-shrinkwrap.json","requirements.txt","Pipfile","Pipfile.lock","pyproject.toml","setup.py","setup.cfg","Gemfile","Gemfile.lock","uv.lock","CODEOWNERS","AGENTS.md","CLAUDE.md","GEMINI.md"],"protected_path_prefixes":[".github/",".agents/"]},"create_report_incomplete_issue":{},"dispatch_workflow":{"max":1,"workflow_files":{"news-translate":".lock.yml"},"workflows":["news-translate"]},"missing_data":{},"missing_tool":{},"noop":{"max":1,"report-as-issue":"true"},"push_repo_memory":{"memories":[{"dir":"/tmp/gh-aw/repo-memory/default","id":"default","max_file_count":50,"max_file_size":51200,"max_patch_size":51200}]},"report_incomplete":{}} + 
{"add_comment":{"max":1},"create_pull_request":{"draft":false,"expires":336,"labels":["agentic-news","analysis-data"],"max":2,"max_patch_size":4096,"protected_files":["package.json","bun.lockb","bunfig.toml","deno.json","deno.jsonc","deno.lock","global.json","NuGet.Config","Directory.Packages.props","mix.exs","mix.lock","go.mod","go.sum","stack.yaml","stack.yaml.lock","pom.xml","build.gradle","build.gradle.kts","settings.gradle","settings.gradle.kts","gradle.properties","package-lock.json","yarn.lock","pnpm-lock.yaml","npm-shrinkwrap.json","requirements.txt","Pipfile","Pipfile.lock","pyproject.toml","setup.py","setup.cfg","Gemfile","Gemfile.lock","uv.lock","CODEOWNERS","AGENTS.md","CLAUDE.md","GEMINI.md"],"protected_path_prefixes":[".github/",".agents/"]},"create_report_incomplete_issue":{},"dispatch_workflow":{"aw_context_workflows":["news-translate"],"max":1,"workflow_files":{"news-translate":".lock.yml"},"workflows":["news-translate"]},"missing_data":{},"missing_tool":{},"noop":{"max":1,"report-as-issue":"true"},"push_repo_memory":{"memories":[{"dir":"/tmp/gh-aw/repo-memory/default","id":"default","max_file_count":50,"max_file_size":51200,"max_patch_size":51200}]},"report_incomplete":{}} GH_AW_SAFE_OUTPUTS_CONFIG_a1713af6f24ae1e7_EOF - name: Write Safe Outputs Tools env: @@ -531,6 +531,11 @@ jobs: "description": "Article type to translate (propositions, motions, committee-reports, week-ahead, month-ahead, weekly-review, monthly-review, breaking, evening-analysis, deep-inspection, interpellations). Leave empty to scan for all untranslated articles.", "type": "string" }, + "aw_context": { + "default": "", + "description": "Agent caller context (used internally by Agentic Workflows).", + "type": "string" + }, "languages": { "default": "all-extra", "description": "Target languages (da,no,fi,de,fr,es,nl,ar,he,ja,ko,zh | nordic-extra | eu-extra | cjk | rtl | all-extra). 
Default: all-extra (all except en,sv)", @@ -1570,7 +1575,7 @@ jobs: GH_AW_ALLOWED_DOMAINS: "*.githubusercontent.com,api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.imf.org,api.individual.githubcopilot.com,api.npms.io,api.scb.se,api.snapcraft.io,api.worldbank.org,archive.ubuntu.com,azure.archive.ubuntu.com,bun.sh,cdn.jsdelivr.net,codeload.github.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,data.imf.org,data.riksdagen.se,deb.nodesource.com,deno.land,docs.github.com,esm.sh,get.pnpm.io,github-cloud.githubusercontent.com,github-cloud.s3.amazonaws.com,github.blog,github.com,github.githubassets.com,googleapis.deno.dev,googlechromelabs.github.io,hack23.com,hack23.github.io,host.docker.internal,json-schema.org,json.schemastore.org,jsr.io,keyserver.ubuntu.com,lfs.github.com,localhost,nodejs.org,npm.pkg.github.com,npmjs.com,npmjs.org,objects.githubusercontent.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,regeringen.se,registry.bower.io,registry.npmjs.com,registry.npmjs.org,registry.yarnpkg.com,repo.yarnpkg.com,riksdag-regering-ai.onrender.com,riksdagen.se,riksdagsmonitor.com,s.symcb.com,s.symcd.com,security.ubuntu.com,skimdb.npmjs.com,storage.googleapis.com,telemetry.enterprise.githubcopilot.com,telemetry.vercel.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com,www.googleapis.com,www.hack23.com,www.imf.org,www.npmjs.com,www.npmjs.org,www.regeringen.se,www.riksdagen.se,www.riksdagsmonitor.com,www.scb.se,yarnpkg.com" GITHUB_SERVER_URL: ${{ github.server_url }} GITHUB_API_URL: ${{ github.api_url }} - GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: 
"{\"add_comment\":{\"max\":1},\"create_pull_request\":{\"draft\":false,\"expires\":336,\"labels\":[\"agentic-news\",\"analysis-data\"],\"max\":2,\"max_patch_size\":4096,\"protected_files\":[\"package.json\",\"bun.lockb\",\"bunfig.toml\",\"deno.json\",\"deno.jsonc\",\"deno.lock\",\"global.json\",\"NuGet.Config\",\"Directory.Packages.props\",\"mix.exs\",\"mix.lock\",\"go.mod\",\"go.sum\",\"stack.yaml\",\"stack.yaml.lock\",\"pom.xml\",\"build.gradle\",\"build.gradle.kts\",\"settings.gradle\",\"settings.gradle.kts\",\"gradle.properties\",\"package-lock.json\",\"yarn.lock\",\"pnpm-lock.yaml\",\"npm-shrinkwrap.json\",\"requirements.txt\",\"Pipfile\",\"Pipfile.lock\",\"pyproject.toml\",\"setup.py\",\"setup.cfg\",\"Gemfile\",\"Gemfile.lock\",\"uv.lock\",\"CODEOWNERS\",\"AGENTS.md\",\"CLAUDE.md\",\"GEMINI.md\"],\"protected_path_prefixes\":[\".github/\",\".agents/\"]},\"create_report_incomplete_issue\":{},\"dispatch_workflow\":{\"max\":1,\"workflow_files\":{\"news-translate\":\".lock.yml\"},\"workflows\":[\"news-translate\"]},\"missing_data\":{},\"missing_tool\":{},\"noop\":{\"max\":1,\"report-as-issue\":\"true\"},\"report_incomplete\":{}}" + GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: 
"{\"add_comment\":{\"max\":1},\"create_pull_request\":{\"draft\":false,\"expires\":336,\"labels\":[\"agentic-news\",\"analysis-data\"],\"max\":2,\"max_patch_size\":4096,\"protected_files\":[\"package.json\",\"bun.lockb\",\"bunfig.toml\",\"deno.json\",\"deno.jsonc\",\"deno.lock\",\"global.json\",\"NuGet.Config\",\"Directory.Packages.props\",\"mix.exs\",\"mix.lock\",\"go.mod\",\"go.sum\",\"stack.yaml\",\"stack.yaml.lock\",\"pom.xml\",\"build.gradle\",\"build.gradle.kts\",\"settings.gradle\",\"settings.gradle.kts\",\"gradle.properties\",\"package-lock.json\",\"yarn.lock\",\"pnpm-lock.yaml\",\"npm-shrinkwrap.json\",\"requirements.txt\",\"Pipfile\",\"Pipfile.lock\",\"pyproject.toml\",\"setup.py\",\"setup.cfg\",\"Gemfile\",\"Gemfile.lock\",\"uv.lock\",\"CODEOWNERS\",\"AGENTS.md\",\"CLAUDE.md\",\"GEMINI.md\"],\"protected_path_prefixes\":[\".github/\",\".agents/\"]},\"create_report_incomplete_issue\":{},\"dispatch_workflow\":{\"aw_context_workflows\":[\"news-translate\"],\"max\":1,\"workflow_files\":{\"news-translate\":\".lock.yml\"},\"workflows\":[\"news-translate\"]},\"missing_data\":{},\"missing_tool\":{},\"noop\":{\"max\":1,\"report-as-issue\":\"true\"},\"report_incomplete\":{}}" GH_AW_CI_TRIGGER_TOKEN: ${{ secrets.GH_AW_CI_TRIGGER_TOKEN }} with: github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }} From 17128fe250037a61d53e82a64eec89215f9945ae Mon Sep 17 00:00:00 2001 From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com> Date: Tue, 21 Apr 2026 22:48:28 +0000 Subject: [PATCH 02/21] Refactor 12 news workflows to use imports + max:1, delete SHARED_PROMPT_PATTERNS.md, add CI enforcement Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/0756749a-4f26-43bd-b211-ca9f8d00d7a0 Co-authored-by: pethers <1726836+pethers@users.noreply.github.com> --- .github/aw/SHARED_PROMPT_PATTERNS.md | 4350 ----------------- .github/prompts/02-mcp-access.md | 2 +- .github/skills/editorial-standards/SKILL.md | 2 +- 
.../skills/gh-aw-workflow-authoring/SKILL.md | 5 + .../skills/github-agentic-workflows/SKILL.md | 2 + .github/skills/riksdag-regering-mcp/SKILL.md | 2 +- .../workflows/compile-agentic-workflows.yml | 66 +- .github/workflows/economic-context-audit.yml | 2 +- .../workflows/news-article-generator.lock.yml | 89 +- .github/workflows/news-article-generator.md | 891 +--- .../workflows/news-committee-reports.lock.yml | 75 +- .github/workflows/news-committee-reports.md | 786 +-- .../workflows/news-evening-analysis.lock.yml | 77 +- .github/workflows/news-evening-analysis.md | 907 +--- .../workflows/news-interpellations.lock.yml | 75 +- .github/workflows/news-interpellations.md | 750 +-- .github/workflows/news-month-ahead.lock.yml | 77 +- .github/workflows/news-month-ahead.md | 595 +-- .../workflows/news-monthly-review.lock.yml | 77 +- .github/workflows/news-monthly-review.md | 614 +-- .github/workflows/news-motions.lock.yml | 75 +- .github/workflows/news-motions.md | 737 +-- .github/workflows/news-propositions.lock.yml | 80 +- .github/workflows/news-propositions.md | 722 +-- .../workflows/news-realtime-monitor.lock.yml | 77 +- .github/workflows/news-realtime-monitor.md | 788 +-- .github/workflows/news-translate.lock.yml | 67 +- .github/workflows/news-translate.md | 630 +-- .github/workflows/news-week-ahead.lock.yml | 77 +- .github/workflows/news-week-ahead.md | 626 +-- .github/workflows/news-weekly-review.lock.yml | 77 +- .github/workflows/news-weekly-review.md | 606 +-- analysis/README.md | 2 +- analysis/imf/README.md | 2 +- .../methodologies/ai-driven-analysis-guide.md | 6 +- analysis/templates/synthesis-summary.md | 2 +- 36 files changed, 1037 insertions(+), 12981 deletions(-) delete mode 100644 .github/aw/SHARED_PROMPT_PATTERNS.md diff --git a/.github/aw/SHARED_PROMPT_PATTERNS.md b/.github/aw/SHARED_PROMPT_PATTERNS.md deleted file mode 100644 index 4dbf1ff8b..000000000 --- a/.github/aw/SHARED_PROMPT_PATTERNS.md +++ /dev/null @@ -1,4350 +0,0 @@ -# Shared Prompt Patterns 
for News Workflows - -> **Internal reference document** — Not a live workflow. Copy-paste these standardised blocks into every `news-*.md` workflow to ensure consistency. - -## ⭐ Canonical Reference-Grade Exemplar (MANDATORY READING FOR ALL WORKFLOWS) - -> 🔴 **Every workflow that produces an article MUST meet the quality bar set by this exemplar.** - -| Artefact | Path | Notes | -|----------|------|-------| -| **Governing methodology** | [`analysis/methodologies/ai-driven-analysis-guide.md`](../../analysis/methodologies/ai-driven-analysis-guide.md) | v5.1 — Rules 6 (depth tiers L1/L2/L2+/L3), 7 (self-audit matrix), 8 (international benchmarking) | -| **Canonical dossier (18 files)** | `analysis/daily/2026-04-17/realtime-1434/` | Gold-standard reference — all 14 registry files + 4 per-document analyses | -| **Canonical articles** | `news/2026-04-17-breaking-1434-{en,sv}.html` | Full 19-link reference section, grouped into 5 subgroups, bilingual localization | -| **Reference registry** | [`scripts/analysis-references.ts`](../../scripts/analysis-references.ts) | `KNOWN_ANALYSIS_FILES` (14 entries, 14-language labels) + `scanAnalysisFiles()` per-doc enumeration | - -**Before writing any article, the agent MUST confirm (via bash):** - -```bash -# Canonical exemplar exists and is not truncated -test -s analysis/daily/2026-04-17/realtime-1434/synthesis-summary.md \ - && test -s news/2026-04-17-breaking-1434-en.html \ - || echo "⚠️ Canonical exemplar missing — consult SHARED_PROMPT_PATTERNS.md" -``` - -**Article quality is benchmarked against this exemplar** — if your output is shallower, shorter than ~400 words per major section, lacks confidence labels, omits cross-document synthesis, or misses the grouped analysis-references section with all files linked, it FAILS quality gate. 
- ---- - -## 🔴 UNIVERSAL PRE-ARTICLE GATE — "Read ALL Analysis Before Writing Any Article" - -> 🚨 **ABSOLUTE RULE — shared template for all 12 news workflows**: No article of any type may be written until the agent has **read every analysis file** produced for that run. This gate lives here as the canonical template; each workflow that generates articles MUST paste the snippet below into its prompt (and recompile its `.lock.yml`) so the gate runs inline before any article HTML is emitted. This guarantees every claim in the article maps to a finding in the dossier, and prevents the "shallow first draft" anti-pattern. - -```bash -# 🔴 MANDATORY GATE — run BEFORE generating article HTML content -# Fails the run if analysis files are not present or not read. -# -# AWF-COMPLIANT: uses `find … | sort > tempfile` + `read`/redirection instead of -# $(...) command substitution. Safe to paste into a runtime `bash` tool call. - -ANALYSIS_BASE="analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER" -ANALYSIS_LIST_FILE="/tmp/analysis-files-$$.txt" -ANALYSIS_COUNT_FILE="/tmp/analysis-count-$$.txt" -trap 'rm -f "$ANALYSIS_LIST_FILE" "$ANALYSIS_COUNT_FILE"' EXIT - -if [ ! -d "$ANALYSIS_BASE" ]; then - echo "🔴 ABORT: $ANALYSIS_BASE does not exist — analysis must be produced BEFORE article" - exit 1 -fi - -: > "$ANALYSIS_LIST_FILE" -: > "$ANALYSIS_COUNT_FILE" - -find "$ANALYSIS_BASE" -maxdepth 2 -type f -name "*.md" | sort > "$ANALYSIS_LIST_FILE" -grep -c . "$ANALYSIS_LIST_FILE" > "$ANALYSIS_COUNT_FILE" || echo "0" > "$ANALYSIS_COUNT_FILE" -read -r ANALYSIS_COUNT < "$ANALYSIS_COUNT_FILE" - -if [ "$ANALYSIS_COUNT" -lt 9 ]; then - echo "🔴 ABORT: Only $ANALYSIS_COUNT analysis files found — need ≥ 9 core files (see v5.1 Rule 6)" - exit 1 -fi - -echo "=== READING ALL $ANALYSIS_COUNT ANALYSIS FILES BEFORE WRITING ARTICLE ===" -# §P0-5: Print the ENTIRE content of each analysis file (no sed truncation). -# Per the plan: "Analysis in md files should not ever be parsed. 
AI must read -# it all as context." A per-file size cap (150 KB) guards against pathological -# runaway files — real analysis files are 5–25 KB. After emission, assert -# total bytes read ≥ sum of on-disk sizes so a truncation bug cannot silently -# downgrade the context window. -MAX_FILE_BYTES=153600 # 150 KB per file -TOTAL_READ_BYTES=0 -TOTAL_ONDISK_BYTES=0 -while read -r f; do - if [ -f "$f" ]; then - # AWF-safe: no $(...) command substitution — use tempfile + read redirection, then clean up. - wc -c < "$f" | tr -d ' ' > /tmp/fsize-$$.txt - read FSIZE < /tmp/fsize-$$.txt - rm -f /tmp/fsize-$$.txt - TOTAL_ONDISK_BYTES=$((TOTAL_ONDISK_BYTES + FSIZE)) - echo "--- BEGIN ANALYSIS FILE: $f (size: $FSIZE bytes) ---" - if [ "$FSIZE" -gt "$MAX_FILE_BYTES" ]; then - # Extremely rare — emit the full file but warn. - echo "⚠️ File exceeds $MAX_FILE_BYTES-byte soft cap; emitting in full regardless (§P0-5)." - fi - cat "$f" - TOTAL_READ_BYTES=$((TOTAL_READ_BYTES + FSIZE)) - echo "" - echo "--- END ANALYSIS FILE: $f ---" - fi -done < "$ANALYSIS_LIST_FILE" - -echo "=== FULL-READ ASSERTION ===" -echo "Bytes on disk: $TOTAL_ONDISK_BYTES" -echo "Bytes emitted: $TOTAL_READ_BYTES" -if [ "$TOTAL_READ_BYTES" -lt "$TOTAL_ONDISK_BYTES" ]; then - echo "🔴 ABORT: truncated read — emitted $TOTAL_READ_BYTES of $TOTAL_ONDISK_BYTES bytes (§P0-5 violation)" - exit 1 -fi -echo "✅ Full-read assertion passed — every analysis file emitted in full." -echo "" -echo "🔴 AI INSTRUCTION: This content is your FULL CONTEXT for article writing." -echo " Do NOT run additional markdown parsers. The AI reads prose directly;" -echo " scripts do NOT extract structure from analysis files (§P0-5/P0-6 of" -echo " the agentic-workflow quality plan). Any structural classification" -echo " metadata required for article front-matter is derived by" -echo " scripts/analysis-reader.ts — which is scoped to metadata only and" -echo " MUST NOT be used to summarise or pre-digest the analysis body." 
- -# Checklist the agent MUST complete before emitting article HTML: -# ✅ synthesis-summary.md — lead story decision + DIW weighting -# ✅ swot-analysis.md — cluster strengths/weaknesses + TOWS interference -# ✅ risk-assessment.md — top 5 ranked risks + posterior probabilities -# ✅ threat-analysis.md — Attack Tree / Kill Chain / Diamond / MITRE-TTPs -# ✅ stakeholder-perspectives.md — named actors + influence network + briefing cards -# ✅ significance-scoring.md — weighted ranks + sensitivity analysis -# ✅ classification-results.md — priority tiers + retention + access -# ✅ cross-reference-map.md — prior-run forward chain + continuity contracts -# ✅ data-download-manifest.md — provenance chain-of-custody -# ✅ Reference-grade extensions (if present): README, executive-brief, -# scenario-analysis, comparative-international, methodology-reflection -# ✅ documents/*.md — per-document analyses (one per dok_id) -``` - -**Evidence of reading**: Each major article section (lede, What Is Happening, Why It Matters, Winners & Losers, Key Takeaways) MUST include **at minimum 3 concrete claims sourced from 3 distinct analysis files**. Those claims MUST be observable in the article output through explicit attribution in the section text, supporting reference list, or both, so a reviewer can trace each claim back to a specific analysis file or section. Articles that paraphrase only the synthesis-summary are REJECTED. 
- -**Reference-grade exemplar anchoring**: When the run produces reference-grade extensions (README, executive-brief, scenario-analysis, comparative-international, methodology-reflection), the article MUST additionally cite: -- At least one concrete scenario probability from `scenario-analysis.md` (e.g., "Base case P=0.42") -- At least one international comparator from `comparative-international.md` -- The one-page BLUF from `executive-brief.md` in the lede or Key Takeaways - ---- - -## 🌐 Hack23 Ecosystem Context - -Riksdagsmonitor is part of the **Hack23** platform for democratic transparency and political intelligence. When generating articles and analysis, link to and reference these resources: - -| Resource | URL | Purpose | -|----------|-----|---------| -| **Hack23 Main Site** | https://hack23.com | Company homepage, ISMS documentation | -| **Riksdagsmonitor** | https://riksdagsmonitor.com | Political intelligence news platform | -| **GitHub Pages** | https://hack23.github.io | Open-source project documentation | -| **CIA Platform** | https://hack23.github.io/cia/ | Citizen Intelligence Agency — historical data | -| **GitHub Repo** | https://github.com/Hack23/riksdagsmonitor | Source code and analysis data | - -Articles MAY include links to these sites when contextually relevant (e.g., linking to historical data, methodology documentation, or the live site). - ---- - -## 🧰 Bash Tool Call Format — MANDATORY for EVERY `bash` Tool Call - -> 🔴 **The `bash` tool schema requires BOTH `command` AND `description`. Calls missing either field fail with:** -> -> ``` -> └ Multiple validation errors: -> - "command": Required -> - "description": Required -> ``` -> -> These failures surface in the agentic workflow logs and block MCP/script execution, which in turn blocks analysis artifact creation and safe-output PR creation. Every `bash` invocation the agent makes MUST supply both fields — no exceptions. 
- -### Required shape - -``` -bash({ - command: "", - description: "" -}) -``` - -### Rules - -1. **Both fields are required on every call** — never omit `description`, never omit `command`. -2. **`command` is a single string**, not an array, not an object. Multi-step commands use `&&`, `;`, or newlines inside the same string. -3. **`description` is a short human-readable label** (≤ 100 chars, present tense) describing what the command does. Do NOT paste the command itself. -4. **Never pass only positional arguments** — always use named fields in the object literal. -5. **When reading files** created by scripts or `find -exec`, supply a description like `"List analysis artifacts for 2026-04-19"` rather than leaving it blank. -6. **When the shell command is long**, keep `description` short. Long `command`, short `description` is the correct shape. -7. **The AWF Shell Safety rules (next section) still apply** to the `command` string — no `$(...)`, no `${VAR}` braces, no `${VAR:-default}`. 
- -### ✅ Correct examples - -``` -bash({ - command: "date -u '+%Y-%m-%d'", - description: "Get current UTC date" -}) - -bash({ - command: "find analysis/daily/2026-04-19 -name '*.md' -exec wc -c {} \\;", - description: "List analysis file sizes" -}) - -bash({ - command: "npx tsx scripts/generate-news-enhanced.ts --types=propositions --languages=en,sv --skip-existing", - description: "Generate propositions articles for EN and SV" -}) - -bash({ - command: "source scripts/mcp-setup.sh && npx tsx scripts/download-parliamentary-data.ts --types=propositions --date=2026-04-19", - description: "Download today's parliamentary propositions data" -}) - -bash({ - command: "git add analysis/daily/2026-04-19 news/2026-04-19-*.html && git status --short", - description: "Stage today's analysis and article files" -}) -``` - -### ❌ INCORRECT patterns (these trigger the validation errors) - -``` -# ❌ Missing description -bash({ command: "date -u '+%Y-%m-%d'" }) - -# ❌ Missing command (agent passed a sentence as description only) -bash({ description: "Check the current date" }) - -# ❌ Passed as a positional string instead of named fields -bash("date -u '+%Y-%m-%d'") - -# ❌ Passed command as an array of tokens — it must be a single string -bash({ command: ["date", "-u", "+%Y-%m-%d"], description: "Get date" }) - -# ❌ Empty description -bash({ command: "ls analysis/daily/2026-04-19", description: "" }) -``` - -### Self-check before EVERY bash call - -Ask yourself: *"Did I provide `command` AND `description`? Is `command` a single string? Is `description` short and human-readable?"* If any answer is no, fix the call **before** submitting it. - ---- - -## 🛡️ AWF Shell Safety — Mandatory for ALL Agent-Generated Bash Commands - -> **The Agent Workflow Firewall (AWF) blocks dangerous shell expansion patterns.** When you generate bash commands at runtime, you **MUST** follow these rules. 
Fenced bash blocks in init steps run as normal bash and are not affected — but any command YOU write via the `bash` tool IS subject to AWF filtering. - -| ❌ BLOCKED pattern | ✅ SAFE alternative | Why | -|---|---|---| -| `$`+`{VAR}` | `$VAR` | Brace-enclosed expansion is blocked | -| `$`+`{VAR:-default}` | `if [ -z "$VAR" ]; then VAR=default; fi` then use `$VAR` | Default-value expansion is blocked | -| `$`+`(command)` | Pipe the result or use `find -exec` | Command substitution is blocked | -| `$`+`(basename $f)` | `find ... -exec basename {} \;` or `ls` with path stripping | Nested command substitution blocked | -| `$`+`{PIPESTATUS[0]}` | Use `set -o pipefail` before the pipeline, then check `$?`; or restructure to avoid the pipeline | `$?` only matches the pipeline failure you care about when `pipefail` is set | -| `for f in "$DIR/"*.json; do echo "$`+`(basename $f)"; done` | `find "$DIR" -name "*.json" -exec basename {} \;` | Loop + substitution blocked | -| `realtime-$`+`{HHMM}` | `realtime-$HHMM` | Even simple brace expansion is blocked | - -**Rules for agent-generated bash:** -1. **NEVER** use `$`+`{...}` (curly braces around variable names) — always use `$VAR` instead -2. **NEVER** use `$`+`(...)` (command substitution) — use pipes, `find -exec`, or separate commands -3. **NEVER** use `$`+`{VAR:-default}` or `$`+`{VAR:=default}` — set defaults with `if/then` first -4. **Use `find -exec`** instead of for-loops with command substitution -5. **Use `cat` with direct paths** instead of variable-constructed paths with braces -6. 
When you need a computed value, run the computation as a **separate bash command** and store the result, then use `$VAR` (no braces) in subsequent commands - -**Example — reading files safely:** -``` -# Instead of: for f in "$DIR/"*.json; do echo "=== $(basename $f) ==="; cat "$f"; done -# Use: -find analysis/daily/2026-04-07/realtime-1411/documents -name "*.json" -exec cat {} \; -``` - ---- - -## 🔤 UTF-8 Encoding — MANDATORY for ALL Content - -> **NON-NEGOTIABLE**: All article content, titles, descriptions, metadata, and analysis output MUST use native UTF-8 characters. NEVER use HTML numeric entities (`ä`, `ö`, `å`, etc.) for non-ASCII characters. - -| ❌ WRONG (HTML entity) | ✅ CORRECT (UTF-8) | Character | -|---|---|---| -| `ä` | `ä` | a-umlaut | -| `ö` | `ö` | o-umlaut | -| `å` | `å` | a-ring | -| `Ä` | `Ä` | A-umlaut | -| `Ö` | `Ö` | O-umlaut | -| `Å` | `Å` | A-ring | -| `é` | `é` | e-acute | -| `ü` | `ü` | u-umlaut | -| `—` | `—` | em-dash | -| `’` | `'` | right single quote | - -**Rules:** -1. **Always write Swedish characters as UTF-8**: `ö`, `ä`, `å`, `Ö`, `Ä`, `Å` — never as `ö`, `ä`, etc. -2. **All files are UTF-8 encoded** — the `` tag is present in all HTML output. -3. **Author name**: Always `James Pether Sörling` — never `James Pether Sörling`. -4. **This applies to ALL languages**: Swedish, Finnish, German, French, Spanish, and any text containing accented characters. -5. **JSON-LD / Schema.org**: HTML entities are NOT decoded in JSON — always use actual UTF-8 characters. -6. **Meta tags** (`og:title`, `og:description`, `twitter:title`, etc.): Always use UTF-8 characters, never entities. -7. **The article template includes a `decodeHtmlEntities()` safety net**, but agents should output correct UTF-8 from the start. 
- ---- - -## 📊 9 REQUIRED Analysis Artifacts — ALL Workflows MUST Produce These - -> 🔴 **NON-NEGOTIABLE (Added 2026-04-16, PR #1801)**: Every news workflow MUST produce ALL 9 analysis artifacts in its scoped `analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER/` directory. Producing fewer than 9 is a **CRITICAL FAILURE** that results in shallow articles missing SWOT tables, risk matrices, threat analysis, and classification data. The quality gate (§"Step 5b: MANDATORY Quality Gate") checks all 9 files. -> -> **Root cause**: Evening analysis workflow run #24527350450 completed in 23 minutes and produced only 3 of 9 artifacts (synthesis-summary.md, significance-scoring.md, stakeholder-perspectives.md), resulting in missing risk-assessment, swot-analysis, threat-analysis, classification-results, cross-reference-map, and data-download-manifest. PR #1794 audit confirmed the same pattern across all news workflow types (motions, propositions, committee-reports, interpellations, realtime-monitor, evening-analysis) — agents completing in 13-22 minutes of their 45-60 minute allocations. 
| # | Required File | Template | Minimum Size | What It Must Contain |
|---|--------------|----------|-------------|---------------------|
| 1 | `synthesis-summary.md` | `analysis/templates/synthesis-summary.md` | 2000 bytes | SYN-ID, Intelligence Dashboard (Mermaid), Top Findings table, Aggregated SWOT, Risk Landscape, Forward Indicators |
| 2 | `swot-analysis.md` | `analysis/templates/swot-analysis.md` | 1500 bytes | SWT-ID, Quadrant Mapping (Mermaid mindmap), ≥2 filled quadrants with dok_id evidence |
| 3 | `risk-assessment.md` | `analysis/templates/risk-assessment.md` | 1000 bytes | RSK-ID, Risk Heat Map (Mermaid), ≥4 risks with L×I numeric scores |
| 4 | `threat-analysis.md` | `analysis/templates/threat-analysis.md` | 1000 bytes | THR-ID, Threat Taxonomy (Mermaid), ALL 6 threat categories |
| 5 | `classification-results.md` | `analysis/templates/political-classification.md` | 800 bytes | CLS-ID, Sensitivity Decision Tree (Mermaid), per-document classification table |
| 6 | `significance-scoring.md` | `analysis/templates/significance-scoring.md` | 800 bytes | SIG-ID, 5-dimension scoring, Composite Score, Publication Decision |
| 7 | `stakeholder-perspectives.md` | `analysis/templates/stakeholder-impact.md` | 1000 bytes | STA-ID, Impact Radar (Mermaid), ALL 8 stakeholder groups |
| 8 | `cross-reference-map.md` | Cross-reference template | 500 bytes | XRF-ID, Document relationships, inter-type links |
| 9 | `data-download-manifest.md` | Manifest template | 300 bytes | Documents Analyzed count, data sources, timestamps |

**9-Artifact Completeness Gate (run BEFORE article generation):**

```bash
ANALYSIS_DIR="analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER"
MISSING_ARTIFACTS=0
TOTAL_ARTIFACTS=0
echo "=== 📊 9-Artifact Completeness Gate ==="

# Per-file minimum sizes matching the table above
declare -A MIN_SIZES=(
  ["synthesis-summary.md"]=2000
  ["swot-analysis.md"]=1500
  ["risk-assessment.md"]=1000
  ["threat-analysis.md"]=1000
  ["classification-results.md"]=800
  ["significance-scoring.md"]=800
  ["stakeholder-perspectives.md"]=1000
  ["cross-reference-map.md"]=500
  ["data-download-manifest.md"]=300
)

for REQUIRED_FILE in synthesis-summary.md swot-analysis.md risk-assessment.md threat-analysis.md classification-results.md significance-scoring.md stakeholder-perspectives.md cross-reference-map.md data-download-manifest.md; do
  TOTAL_ARTIFACTS=$((TOTAL_ARTIFACTS + 1))
  MIN_SIZE=${MIN_SIZES[$REQUIRED_FILE]}
  if [ ! -f "$ANALYSIS_DIR/$REQUIRED_FILE" ]; then
    echo "🔴 MISSING: $REQUIRED_FILE — MUST CREATE"
    MISSING_ARTIFACTS=$((MISSING_ARTIFACTS + 1))
  else
    # AWF-safe size read: tempfile + read redirection; tr strips BSD wc padding
    wc -c < "$ANALYSIS_DIR/$REQUIRED_FILE" | tr -d ' ' > /tmp/fsize-$$.txt
    read FSIZE < /tmp/fsize-$$.txt
    rm -f /tmp/fsize-$$.txt
    if [ "$FSIZE" -lt "$MIN_SIZE" ]; then
      echo "🔴 UNDERSIZED: $REQUIRED_FILE ($FSIZE bytes < $MIN_SIZE minimum) — MUST ENRICH"
      MISSING_ARTIFACTS=$((MISSING_ARTIFACTS + 1))
    else
      echo "✅ OK: $REQUIRED_FILE ($FSIZE bytes ≥ $MIN_SIZE minimum)"
    fi
  fi
done
echo ""
COMPLETE=$((TOTAL_ARTIFACTS - MISSING_ARTIFACTS))
echo "📊 Artifact completeness: $COMPLETE / $TOTAL_ARTIFACTS"
if [ "$MISSING_ARTIFACTS" -gt 0 ]; then
  echo "🚨🚨🚨 ARTIFACT COMPLETENESS GATE FAILED 🚨🚨🚨"
  echo "❌ $MISSING_ARTIFACTS required artifacts missing or too small"
  echo "Go back and create ALL 9 required files following their templates."
  echo "DO NOT proceed to article generation until ALL 9 artifacts exist."
  exit 1
fi
```

---

## 🏆 14 REQUIRED Artifacts for AGGREGATION Workflows + `news-realtime-monitor` — Reference-Grade Tier-C

> 🔴 **NON-NEGOTIABLE (Added 2026-04-19 · Extended 2026-04-19 for realtime-monitor)**: The following **6 workflows** MUST produce **5 additional Tier-C reference-grade artifacts** on top of the 9 core artifacts above, bringing their minimum total to **14 artifacts** per run:
>
> **Aggregation workflows (original Tier-C scope):**
> - `news-week-ahead.md` — `analysis/daily/$DATE/week-ahead/`
> - `news-month-ahead.md` — `analysis/daily/$DATE/month-ahead/`
> - `news-evening-analysis.md` — `analysis/daily/$DATE/evening-analysis/`
> - `news-weekly-review.md` — `analysis/daily/$DATE/weekly-review/`
> - `news-monthly-review.md` — `analysis/daily/$DATE/monthly-review/`
>
> **Breaking-news workflow (extended Tier-C scope — added 2026-04-19):**
> - `news-realtime-monitor.md` — `analysis/daily/$DATE/realtime-$HHMM/`
>
> **Reference exemplars**:
> - Aggregation: [`analysis/daily/2026-04-18/weekly-review/`](../../analysis/daily/2026-04-18/weekly-review/), [`analysis/daily/2026-04-19/month-ahead/`](../../analysis/daily/2026-04-19/month-ahead/)
> - Realtime: [`analysis/daily/2026-04-17/realtime-1434/`](../../analysis/daily/2026-04-17/realtime-1434/), [`analysis/daily/2026-04-19/realtime-1219/`](../../analysis/daily/2026-04-19/realtime-1219/)
>
> **Rationale**: Aggregation workflows synthesise multiple days or document types into decision-maker briefs. `news-realtime-monitor` is the **flagship editorial surface** of Riksdagsmonitor — every breaking run is consumed externally by editors, analysts, and press and must therefore carry the same decision-maker entry points (`README.md`, `executive-brief.md`), probabilistic forward-looking analysis (`scenario-analysis.md`), cross-jurisdictional benchmarking (`comparative-international.md`), and methodology self-audit plus upstream-watchpoint reconciliation (`methodology-reflection.md`) as a weekly review. Per-document-type workflows (propositions, motions, committee-reports, interpellations) remain at the 9-artifact gate — they are narrower in scope and serve as upstream evidence feeding into the 14-artifact Tier-C runs.

| # | Tier-C File | Minimum Size | What It Must Contain |
|---|-------------|-------------|---------------------|
| 10 | `README.md` | 3000 bytes | Package index · reading orders by audience · file index table · lead-story at-a-glance · upstream-run relationship table |
| 11 | `executive-brief.md` | 3500 bytes | BLUF (Bottom Line Up Front) ≤ 300 words · 3 decisions this brief supports · 8-bullet "60-second read" · named actors (≥ 5 ministers/party leaders with dok_id citations) · forward vote calendar · top-5 risks · confidence meter |
| 12 | `scenario-analysis.md` | 4000 bytes | 3 base scenarios with probability bands (30-day + 90-day + post-election where applicable) · 2 wildcards with impact assessment · ACH (Analysis of Competing Hypotheses) grid · monitoring-trigger calendar mapped to scenario shifts · cross-reference to upstream scenario work |
| 13 | `comparative-international.md` | 4000 bytes | **≥ 5 jurisdictions** benchmarked per cluster · Nordic baseline (SE vs DK, NO, FI) · EU benchmark (DE, NL, plus cluster-relevant) · explicit call-outs where Sweden **innovates**, **follows**, **diverges** · data-source citations (World Bank, RSF, OECD, Eurostat) |
| 14 | `methodology-reflection.md` | 4000 bytes | Methodology application matrix · **Upstream Watchpoint Reconciliation** (every forward indicator from sibling runs within the workflow-specific lookback window defined below explicitly carried forward or retired with reason) · uncertainty hot-spots · known limitations · Pass-1→Pass-2 improvement evidence · recommendations for doctrine codification |

### 🔬 Period-Scope Multipliers — MUST APPLY to Tier-C Aggregation Workflows

> 🔴 **Added 2026-04-19**: Aggregation workflows cover different time horizons. A monthly-review synthesising 30 days MUST NOT produce artifacts at the same depth as a breaking-news realtime-monitor covering a single event. The minimum sizes in the Tier-C table above are the **baseline for 7-day aggregation workflows** (`weekly-review`, `week-ahead`). Scale as follows:

> 📌 **Scope of this multiplier**: The period-scope multiplier applies **ONLY to the 5 Tier-C reference-grade artefacts** (`README.md`, `executive-brief.md`, `scenario-analysis.md`, `comparative-international.md`, `methodology-reflection.md`). The **9 core artefacts** (`synthesis-summary.md` through `data-download-manifest.md`) keep their fixed daily-scope minimums from the 9-Artifact Completeness Gate above, because those baselines are already calibrated for single-day analysis and scale naturally with the larger document counts of aggregation windows. If a workflow's Tier-C package is scaled up, its 9-core package will grow organically with ingested document volume without needing a second multiplier layer.
| Workflow | Period covered | Tier-C Size Multiplier | Rationale |
|----------|:--------------:|:----------------------:|-----------|
| `news-realtime-monitor` | single event | **0.8×** | Single-event briefs may trim historical context |
| `news-evening-analysis` | 1 day | **0.9×** | Daily scope; must still carry the full framework |
| `news-week-ahead` | 7 days forward | **1.0×** (baseline) | Reference exemplar baseline |
| `news-weekly-review` | 7 days retrospective | **1.0×** (baseline) | Reference exemplar baseline |
| `news-month-ahead` | 30 days forward | **1.3×** | 4× period ≠ 4× bytes; diminishing returns |
| `news-monthly-review` | 30 days retrospective | **1.5×** | Retrospective must evidence EVERY prior weekly-review's forward indicators |

**Concrete byte thresholds for monthly-review** (baseline × 1.5):

| Artifact | Weekly baseline | Monthly-review minimum |
|----------|:---------------:|:----------------------:|
| `synthesis-summary.md` | 2 000 B | **3 000 B** |
| `swot-analysis.md` | 1 500 B | **2 250 B** |
| `risk-assessment.md` | 1 000 B | **1 500 B** |
| `threat-analysis.md` | 1 000 B | **1 500 B** |
| `classification-results.md` | 800 B | **1 200 B** |
| `significance-scoring.md` | 800 B | **1 200 B** |
| `stakeholder-perspectives.md` | 1 000 B | **1 500 B** |
| `cross-reference-map.md` | 500 B | **750 B** |
| `data-download-manifest.md` | 300 B | **450 B** |
| `README.md` | 3 000 B | **4 500 B** |
| `executive-brief.md` | 3 500 B | **5 250 B** |
| `scenario-analysis.md` | 4 000 B | **6 000 B** |
| `comparative-international.md` | 4 000 B | **6 000 B** |
| `methodology-reflection.md` | 4 000 B | **6 000 B** |

Per the scope note above, only the five Tier-C rows are gate-enforced at these scaled sizes; the nine core-artifact rows are advisory depth targets that grow naturally with the larger ingested document volume.

> **Target state**: monthly-review total package size ≥ **1.5×** the most recent `weekly-review` exemplar. A monthly-review regressing below this threshold is **REJECTED** and MUST be re-enriched before article generation.
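The 1.5× target-state check can be made mechanical. A minimal sketch, with an illustrative helper name and directory arguments (in workflow context the AWF-safe tempfile pattern used by the gates applies equally; command substitution is shown here for brevity):

```shell
# Hedged sketch: compare a candidate package's total .md byte size against a
# baseline package, failing when the candidate is below 1.5x the baseline.
check_package_ratio() {
  # $1 = candidate package dir (e.g. monthly-review), $2 = baseline dir (e.g. weekly-review)
  CAND_BYTES=$(cat "$1"/*.md 2>/dev/null | wc -c | tr -d ' ')
  BASE_BYTES=$(cat "$2"/*.md 2>/dev/null | wc -c | tr -d ' ')
  # Integer arithmetic only: cand * 10 >= base * 15  <=>  cand >= 1.5 * base
  if [ "$((CAND_BYTES * 10))" -lt "$((BASE_BYTES * 15))" ]; then
    echo "🔴 package $1 ($CAND_BYTES B) is below 1.5x the baseline $2 ($BASE_BYTES B); MUST ENRICH"
    return 1
  fi
  echo "✅ package $1 meets the 1.5x target ($CAND_BYTES B vs $BASE_BYTES B baseline)"
}
```

A monthly-review run would call this with its own folder and the most recent `weekly-review` exemplar folder.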
**Reference exemplar for monthly-review**: [`analysis/daily/2026-04-19/monthly-review/`](../../analysis/daily/2026-04-19/monthly-review/) — 14 artifacts, total ≈ 115 KB, all Tier-C files ≥ 10 KB, upstream watchpoint reconciliation covering 16 sibling watchpoints from 30 days + weekly-review + month-ahead.

### 📡 Canonical MCP Reliability Table — MANDATORY in every Tier-C `data-download-manifest.md`

> 🔴 **Added 2026-04-20 (§P2-1)**: Every Tier-C `data-download-manifest.md` MUST include a `## MCP Reliability` section with a canonical row-per-call table. The weekly aggregator job (`analysis/mcp-reliability/WEEK-NN-report.md`, deferred to a follow-up PR) `awk`-parses these tables across all Tier-C runs to compute per-server success-rate trends — so the column order and types are fixed.

**Exact canonical shape** (the column order is fixed; header names are case-insensitive):

```markdown
## MCP Reliability

| MCP Server | Tool | Calls | Successes | Retries | Failures | Notes |
|------------|------|:-----:|:---------:|:-------:|:--------:|-------|
| riksdag-regering | search_dokument | 12 | 12 | 0 | 0 | — |
| riksdag-regering | get_anforande | 8 | 7 | 1 | 0 | 1 × 429 rate-limit |
| scb | query_table | 3 | 3 | 0 | 0 | — |
```

**Rules** (enforced by `scripts/validate-mcp-reliability.ts`):

1. The section heading is `## MCP Reliability` (an emoji prefix is allowed).
2. Exactly 7 canonical columns in order — extra columns warn but do not fail.
3. Every numeric cell (Calls / Successes / Retries / Failures) is a non-negative integer.
4. Per-row arithmetic consistency: `Successes + Failures ≤ Calls` (the pending-outcome case `< Calls` is allowed).
5. At least one row references `riksdag-regering` (the required server for any Riksdag analysis).
6. `Notes` is free text; use `—` for the empty state. Record the actual failure cause when `Failures > 0` (e.g. "429 rate-limit", "504 gateway", "schema validation").
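To illustrate why the column order is fixed, the per-server success-rate computation the deferred aggregator would perform can be sketched as an awk filter over the table rows. The `mcp_success_rates` helper is hypothetical (the real checks live in `scripts/validate-mcp-reliability.ts` and the follow-up awk job), and it assumes the input is the reliability table itself; the real parser would first scope to the `## MCP Reliability` section:

```shell
# Hedged sketch: fold the canonical row-per-call table into per-server totals.
# With '|' as field separator, $2 = MCP Server, $4 = Calls, $5 = Successes.
mcp_success_rates() {
  awk -F'|' '
    # Keep only data rows: skip the header row and the |---| separator row
    /^\|/ && $2 !~ /MCP Server/ && $2 !~ /^-/ {
      gsub(/ /, "", $2)            # trim padding around the server name
      calls[$2] += $4; ok[$2] += $5
    }
    END {
      for (s in calls)
        printf "%s: %d/%d calls succeeded (%.0f%%)\n", s, ok[s], calls[s], 100 * ok[s] / calls[s]
    }
  '
}
```

Piping the example table above through this helper would report `riksdag-regering` at 19/20 calls and `scb` at 3/3.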
Reference exemplar: [`analysis/daily/2026-04-19/month-ahead/data-download-manifest.md`](../../analysis/daily/2026-04-19/month-ahead/data-download-manifest.md) — this exemplar predates the canonical table and will be back-filled in a subsequent PR; new runs MUST include it from day one.

### 🔍 Depth Anti-Patterns (REJECTED)

- ❌ Monthly-review total package < most-recent weekly-review total package
- ❌ `synthesis-summary.md` without a party-activity matrix covering ALL 8 Riksdag parties
- ❌ `scenario-analysis.md` without an ACH grid (evidence × hypothesis matrix)
- ❌ `comparative-international.md` with < 5 jurisdictions benchmarked per cluster
- ❌ `comparative-international.md` without an explicit "Sweden innovates / follows / diverges" scorecard
- ❌ `executive-brief.md` without a named-actors table (≥ 5 ministers/party leaders with dok_id evidence)
- ❌ `executive-brief.md` without a 90-day forward vote calendar
- ❌ `data-download-manifest.md` (Tier-C) without the canonical `## MCP Reliability` table — **§P2-1** hard-fail; run `scripts/validate-mcp-reliability.ts` to verify
- ❌ `methodology-reflection.md` without an **Upstream Watchpoint Reconciliation** table (zero-silent-drops rule)
- ❌ Any Tier-C file without at least one cross-reference link to an upstream sibling run
- ❌ Generic phrases without dok_id evidence ("significant bill", "major opposition move", "electoral implications") — every claim MUST cite a document ID or data source
- ❌ A Mermaid diagram in Pass 1 removed by Pass 2 "cleanup" — Pass 2 MUST ONLY ADD, NEVER REMOVE content

**14-Artifact Completeness Gate for Tier-C Workflows (aggregation + realtime-monitor + deep-inspection — run BEFORE article generation):**

> 🔴 **Extended 2026-04-19 to include `deep-inspection`** — deep-inspection is the flagship single-document editorial surface and carries reference-grade expectations equivalent to aggregation and realtime-monitor runs. Reference exemplar: [`analysis/daily/2026-04-19/deep-inspection/`](../../analysis/daily/2026-04-19/deep-inspection/) — see [`methodology-reflection.md`](../../analysis/daily/2026-04-19/deep-inspection/methodology-reflection.md) §S1 for rationale.

```bash
# Run this gate for aggregation workflow subfolders, realtime-monitor subfolders, AND deep-inspection
AGGREGATION_TYPES="week-ahead month-ahead evening-analysis weekly-review monthly-review deep-inspection"
IS_TIER_C=0
# Aggregation + deep-inspection matches: base subfolder or suffixed re-run
# (e.g. weekly-review, weekly-review-2) — suffixed folders from Run Suffix
# Resolution must still hit the gate
for T in $AGGREGATION_TYPES; do
  case "$ANALYSIS_SUBFOLDER" in
    "$T"|"$T"-*) IS_TIER_C=1 ;;
  esac
done
# Realtime-monitor matches: subfolder begins with "realtime-" (e.g. realtime-1219)
case "$ANALYSIS_SUBFOLDER" in
  realtime-*) IS_TIER_C=1 ;;
esac

if [ "$IS_TIER_C" = "1" ]; then
  echo "=== 🏆 14-Artifact Reference-Grade Gate (Tier-C workflow) ==="
  TIER_C_MISSING=0

  # Period-scope multiplier — see "Period-Scope Multipliers" section above
  case "$ANALYSIS_SUBFOLDER" in
    realtime-*)        MULT_NUM=8;  MULT_DEN=10 ;; # 0.8×
    evening-analysis*) MULT_NUM=9;  MULT_DEN=10 ;; # 0.9×
    week-ahead*)       MULT_NUM=10; MULT_DEN=10 ;; # 1.0× baseline
    weekly-review*)    MULT_NUM=10; MULT_DEN=10 ;; # 1.0× baseline
    deep-inspection*)  MULT_NUM=10; MULT_DEN=10 ;; # 1.0× baseline (single-document primary focus)
    month-ahead*)      MULT_NUM=13; MULT_DEN=10 ;; # 1.3×
    monthly-review*)   MULT_NUM=15; MULT_DEN=10 ;; # 1.5×
    *)                 MULT_NUM=10; MULT_DEN=10 ;; # fallback baseline
  esac
  echo "📐 Period-scope multiplier for '$ANALYSIS_SUBFOLDER': $MULT_NUM/$MULT_DEN"

  declare -A TIER_C_BASE_SIZES=(
    ["README.md"]=3000
    ["executive-brief.md"]=3500
    ["scenario-analysis.md"]=4000
    ["comparative-international.md"]=4000
    ["methodology-reflection.md"]=4000
  )
  for REQUIRED_FILE in README.md executive-brief.md scenario-analysis.md comparative-international.md methodology-reflection.md; do
    BASE_SIZE=${TIER_C_BASE_SIZES[$REQUIRED_FILE]}
    MIN_SIZE=$(( BASE_SIZE * MULT_NUM / MULT_DEN ))
    if [ ! -f "$ANALYSIS_DIR/$REQUIRED_FILE" ]; then
      echo "🔴 MISSING Tier-C: $REQUIRED_FILE — Tier-C workflow MUST CREATE"
      TIER_C_MISSING=$((TIER_C_MISSING + 1))
    else
      # AWF-safe: no $(...) command substitution — use tempfile + read redirection, then clean up.
      wc -c < "$ANALYSIS_DIR/$REQUIRED_FILE" | tr -d ' ' > /tmp/fsize-$$.txt
      read FSIZE < /tmp/fsize-$$.txt
      rm -f /tmp/fsize-$$.txt
      if [ "$FSIZE" -lt "$MIN_SIZE" ]; then
        echo "🔴 UNDERSIZED Tier-C: $REQUIRED_FILE ($FSIZE bytes < $MIN_SIZE scaled minimum — base $BASE_SIZE × $MULT_NUM/$MULT_DEN) — MUST ENRICH"
        TIER_C_MISSING=$((TIER_C_MISSING + 1))
      else
        echo "✅ OK Tier-C: $REQUIRED_FILE ($FSIZE bytes ≥ $MIN_SIZE scaled minimum)"
      fi
    fi
  done
  if [ "$TIER_C_MISSING" -gt 0 ]; then
    echo "🚨🚨🚨 14-ARTIFACT REFERENCE-GRADE GATE FAILED 🚨🚨🚨"
    echo "❌ $TIER_C_MISSING Tier-C artefacts missing or too small for the period-scope multiplier"
    echo "Tier-C workflows (aggregation + realtime-monitor + deep-inspection) MUST produce all 14 artefacts before article generation."
    echo "Reference exemplars:"
    echo "  Aggregation (7-day baseline): analysis/daily/2026-04-18/weekly-review/, analysis/daily/2026-04-19/month-ahead/"
    echo "  Aggregation (30-day 1.5×):    analysis/daily/2026-04-19/monthly-review/"
    echo "  Realtime-monitor (0.8×):      analysis/daily/2026-04-17/realtime-1434/, analysis/daily/2026-04-19/realtime-1219/"
    echo "  Deep-inspection (1.0×):       analysis/daily/2026-04-19/deep-inspection/"
    exit 1
  fi
fi
```

---

## 🔁 RECENT DAILY KNOWLEDGE-BASE SYNTHESIS — Mandatory for Tier-C Workflows

> 🔴 **NON-NEGOTIABLE (Added 2026-04-19 · Extended 2026-04-19 for realtime-monitor · Extended 2026-04-19 for deep-inspection)**: Tier-C workflows (aggregation workflows + `news-realtime-monitor` + `deep-inspection`) MUST ingest and reconcile forward-looking intelligence from **recent sibling daily runs** before producing their own package.
> This establishes a **continuity-of-intelligence contract**: no forward indicator issued in the recent past is silently dropped.

### Lookback Windows by Tier-C Workflow

| Workflow | Sibling Lookback Window (days) | Minimum Sibling Runs to Ingest |
|----------|:------------------------------:|:------------------------------:|
| `news-realtime-monitor` | 2 days | Last 2 days of `realtime-*` + `evening-analysis` if present |
| `news-evening-analysis` | 3 days | 3 `realtime-*` + prior `evening-analysis` |
| `news-week-ahead` | 7 days | Last 7 daily folders + last `weekly-review` |
| `news-weekly-review` | 7 days | Last 7 daily folders + prior `weekly-review` |
| `news-month-ahead` | 14 days | Last 14 daily folders + last `weekly-review` + last `week-ahead` |
| `news-monthly-review` | 30 days | Last 30 daily folders + all weekly-reviews + last `monthly-review` |
| `news-article-generator` (deep-inspection article_types) | 7 days | ≥ 1 realtime-* that first surfaced the primary dok_id **OR** the most recent `weekly-review`, plus the most recent `month-ahead`/`monthly-review` if present |

### Mandatory Ingestion Protocol

For every sibling run in the lookback window:

1. **Read** `synthesis-summary.md` §Forward Indicators (or the equivalent Forward Watch Points / 90-Day Calendar section)
2. **Read** `significance-scoring.md` §Top-Ranked items
3. **Read** the cluster-specific deep-dive files where available (`scenario-analysis.md`, `comparative-international.md`, `risk-assessment.md`)
4. **Build** a Watchpoint Reconciliation table in the current run's `methodology-reflection.md` with columns: Source · Watchpoint · Disposition (**Carried forward** / **Retired** / **Carried with reduced priority**)
5. Every disposition MUST have an explicit one-line reason (or a pointer to the file where the watchpoint is continued)

### Hard Rules

- **No silent drops**: If a forward indicator from a sibling run is not addressed, it MUST be explicitly retired with a reason (e.g., "outside current horizon", "superseded by event X").
- **Cross-reference**: Every Tier-C file MUST include a "Cross-Reference to Upstream" section pointing at the specific sibling `.md` files.
- **Probability alignment**: Scenario probabilities MUST align with (or explicitly justify departures from) the most recent `weekly-review/scenario-analysis.md` or `monthly-review/scenario-analysis.md` in the lookback window.

### Example: Month-Ahead Ingestion Template

Paste this into the month-ahead workflow prompt:

```text
STEP 0 — Upstream Watchpoint Ingestion (MANDATORY, before any analysis):

1. List last 14 days of sibling runs:
   CUTOFF=analysis/daily/$(date -d "14 days ago" +%Y-%m-%d)
   find analysis/daily -maxdepth 2 -type d -newer "$CUTOFF" | sort

2. Read every synthesis-summary.md §Forward Indicators in those folders:
   for f in $(find analysis/daily -name synthesis-summary.md -newer "$CUTOFF"); do
     echo "=== $f ==="
     sed -n '/Forward/,/^## /p' "$f"
   done

3. Read last weekly-review/scenario-analysis.md and last week-ahead/synthesis-summary.md IN FULL.

4. Build the Watchpoint Reconciliation table in methodology-reflection.md §Upstream Watchpoint Reconciliation.

5. Align probability bands in scenario-analysis.md to last weekly-review scenario-analysis.md.

ONLY AFTER step 5 completes may the current-run analysis proceed.
```

**Reference-grade exemplar**: [`analysis/daily/2026-04-19/month-ahead/methodology-reflection.md`](../../analysis/daily/2026-04-19/month-ahead/methodology-reflection.md) §Upstream Watchpoint Reconciliation (audits 16 upstream watchpoints across 7 sibling runs).
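The "no silent drops" rule also lends itself to a mechanical spot-check. A minimal sketch, assuming a harvested list of upstream watchpoint identifiers; the `WP-NNN` id convention, the function name, and both file paths are illustrative, not a documented format:

```shell
# Hedged sketch: every harvested upstream watchpoint id must reappear
# (carried forward or retired) in the current run's reconciliation table.
check_no_silent_drops() {
  # $1 = file with one upstream watchpoint id per line
  # $2 = the current run's methodology-reflection.md
  DROPS=0
  while read -r WP; do
    [ -n "$WP" ] || continue            # skip blank lines
    if ! grep -q "$WP" "$2"; then
      echo "🔴 silent drop: $WP is not reconciled in $2"
      DROPS=$((DROPS + 1))
    fi
  done < "$1"
  if [ "$DROPS" -eq 0 ]; then echo "✅ all upstream watchpoints reconciled"; fi
  return "$DROPS"
}
```

This only catches missing ids, not missing dispositions or reasons; those remain the agent's responsibility under the Mandatory Ingestion Protocol.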
---

## 🔒 ARTICLE TYPE ISOLATION — Absolute Enforcement

> **NON-NEGOTIABLE**: Different article types MUST NEVER overwrite, merge, or conflict with each other's analysis artifacts. Each workflow owns its article type exclusively.

````markdown
### Article Type Isolation Rules

> 🚨 **ABSOLUTE RULE**: Every workflow MUST write analysis artifacts ONLY to its article-type-specific subdirectory. Workflows MUST NEVER write to the parent date directory or to another article type's subdirectory.

#### Mandatory Analysis Folder Structure

Every workflow MUST use this path pattern for ALL analysis output:

```
analysis/daily/$ARTICLE_DATE/$ARTICLE_TYPE/
```

| Workflow | Base subfolder | Suffix on re-run | Owned files |
|----------|---------------|------------------|-------------|
| news-committee-reports | `committeeReports/` | `committeeReports-2/`, `-3/`, … | All analysis for betänkanden |
| news-interpellations | `interpellations/` | `interpellations-2/`, `-3/`, … | All analysis for interpellationer/frågor |
| news-motions | `motions/` | `motions-2/`, `-3/`, … | All analysis for motioner |
| news-propositions | `propositions/` | `propositions-2/`, `-3/`, … | All analysis for propositioner |
| news-month-ahead | `month-ahead/` | `month-ahead-2/`, `-3/`, … | Monthly strategic outlook analysis |
| news-week-ahead | `week-ahead/` | `week-ahead-2/`, `-3/`, … | Weekly parliamentary preview analysis |
| news-evening-analysis | `evening-analysis/` | `evening-analysis-2/`, `-3/`, … | Daily evening synthesis analysis |
| news-weekly-review | `weekly-review/` | `weekly-review-2/`, `-3/`, … | Weekly retrospective analysis |
| news-monthly-review | `monthly-review/` | `monthly-review-2/`, `-3/`, … | Monthly retrospective analysis |
| news-realtime-monitor | `realtime-$HHMM/` | *(inherently unique — no suffix needed)* | Breaking news time-stamped analysis |
| news-article-generator | `$REQUESTED_TYPE/` | `$REQUESTED_TYPE-2/`, `-3/`, … | Analysis for the requested article type |
| news-translate | *(reads only, never writes analysis)* | — | Translation output only |

#### Enforcement Rules

1. **Each workflow sets `ARTICLE_TYPE` at step start** — this variable scopes ALL `git add` and file writes
2. **`git add` MUST scope to `analysis/daily/$ARTICLE_DATE/$ARTICLE_TYPE/`** — NEVER `analysis/daily/$ARTICLE_DATE/`
3. **No workflow may read-modify-write another type's files** — reading is allowed for cross-reference, but modification is PROHIBITED
4. **Concurrent workflow protection**: Multiple workflows (committee-reports, motions, propositions) may run on the same date — isolation prevents merge conflicts
5. **news-article-generator MUST include the article type in filenames**: Generated articles use the `$DATE-$ARTICLE_TYPE-$LANG.html` pattern — the article type is ALWAYS part of the filename

#### Anti-Patterns (REJECTED)

- ❌ Writing to `analysis/daily/$ARTICLE_DATE/` root (no article type subfolder)
- ❌ `git add analysis/daily/$ARTICLE_DATE/` without article type scope
- ❌ `git add news/` — stages ALL articles from ALL workflows, causing merge conflicts
- ❌ Copying (`cp`) analysis files to both the subfolder AND the root date directory
- ❌ One workflow modifying another workflow's synthesis-summary.md
- ❌ Realtime monitor overwriting committee-reports analysis
- ❌ Evening analysis replacing interpellations SWOT with its own
- ❌ Article generator writing analysis without the article type in the path
- ❌ Leaving root-level `.md` files in `analysis/daily/$ARTICLE_DATE/` after relocation (use `mv`, never `cp`)

#### Run Suffix Resolution (prevents merge conflicts from repeated runs)

When a scheduled workflow runs and the analysis subfolder already exists (from a prior merged run), it MUST use a suffixed folder to avoid overwriting or causing merge conflicts. **Exception:** `force_generation=true` deliberately overwrites the base folder.

```bash
# === Run Suffix Resolution ===
# Call AFTER setting BASE_SUBFOLDER and ARTICLE_DATE, BEFORE running the analysis pipeline.
# - force_generation=true → reuse base folder (overwrite is intentional)
# - otherwise → auto-suffix if base folder already has synthesis-summary.md
#
# Inputs:  BASE_SUBFOLDER (e.g. "propositions"), ARTICLE_DATE, FORCE_GENERATION
# Outputs: ANALYSIS_SUBFOLDER (e.g. "propositions" or "propositions-2")
ANALYSIS_SUBFOLDER="$BASE_SUBFOLDER"
if [ "$FORCE_GENERATION" != "true" ]; then
  _SUFFIX=1
  while [ -f "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER/synthesis-summary.md" ]; do
    _SUFFIX=$((_SUFFIX + 1))
    ANALYSIS_SUBFOLDER="$BASE_SUBFOLDER-$_SUFFIX"
  done
fi
echo "📁 Analysis subfolder resolved: analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER"
```

**Result examples:**

| Scenario | `ANALYSIS_SUBFOLDER` |
|----------|---------------------|
| First run of the day | `propositions` |
| force_generation re-run | `propositions` (overwrites) |
| Second scheduled run (first merged) | `propositions-2` |
| Third scheduled run (first two merged) | `propositions-3` |

> **Realtime monitor** already uses `realtime-HHMM` (inherently unique per run) — it does NOT need suffix resolution.

#### Git Add Pattern (MANDATORY for all workflows)

```bash
# CORRECT — scoped to this workflow's news outputs and resolved analysis subfolder
ARTICLE_TYPE="committeeReports"                                                  # Set per workflow
git add news/*committee-reports*.html 2>/dev/null || true                        # Only this workflow's articles
git add news/metadata/ 2>/dev/null || true                                       # Metadata (small, fast-changing)
git add "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER"/*.md 2>/dev/null || true   # Summary files only
git add "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER"/*.json 2>/dev/null || true # Summary JSON only
# NOTE: documents/ is intentionally EXCLUDED — doc-type workflows can have 100+ files there

# INCORRECT — will conflict with other workflows
# git add news/ || true                              # ← NEVER DO THIS — stages all workflows' articles
# git add "analysis/daily/$ARTICLE_DATE/" || true    # ← NEVER DO THIS — stages all workflows' analysis
```
````

---

## 📰 ARTICLE TYPE MUST BE INCLUDED IN ALL OUTPUTS

> **NON-NEGOTIABLE**: Every news article, analysis file, and commit message MUST include the article type identifier to prevent cross-type confusion.

````markdown
### Article Type Tagging

Every output artifact MUST be tagged with its article type:

1. **HTML filenames**: `news/$DATE-$ARTICLE_TYPE-$LANG.html`
2. **Analysis folders**: `analysis/daily/$DATE/$ARTICLE_TYPE/`
3. **Commit messages**: `📰 $ARTICLE_TYPE: $description - $DATE`
4. **Schema.org metadata**: `"articleSection": "$ARTICLE_TYPE"`
5. **Analysis file headers**: Include `Article Type: $ARTICLE_TYPE` in metadata

#### Valid Article Types

| Type ID | Display Name | Workflow |
|---------|-------------|----------|
| `breaking` | Breaking News | news-realtime-monitor |
| `committee-reports` | Committee Reports | news-committee-reports |
| `interpellation-debates` | Interpellation Debates | news-interpellations |
| `opposition-motions` | Opposition Motions | news-motions |
| `propositions` | Government Propositions | news-propositions |
| `month-ahead` | Month Ahead | news-month-ahead |
| `week-ahead` | Week Ahead | news-week-ahead |
| `evening-analysis` | Evening Analysis | news-evening-analysis |
| `weekly-review` | Weekly Review | news-weekly-review |
| `monthly-review` | Monthly Review | news-monthly-review |
| `deep-inspection` | Deep Inspection | news-article-generator |
````

---

## 🧠 POLITICAL INTELLIGENCE DEPTH REQUIREMENTS (applies to ALL article workflows)

> **NON-NEGOTIABLE**: All news articles must meet publication-quality political intelligence standards. Surface-level summaries, generic boilerplate, and shallow analysis are REJECTED.

> 🔴 **v5.0 CRITICAL — AI WRITES EVERYTHING**: The AI agent MUST write ALL article content — analysis, SWOT, stakeholder perspectives, risk assessment, forward indicators, key takeaways. Scripts produce ONLY the HTML shell. If the final article reads like a code-generated list of document titles, it is REJECTED. The AI must produce deep political intelligence that demonstrates genuine analytical insight.

> 🔴 **REGRESSION ALERT**: Articles that contain `AI_MUST_REPLACE` markers, generic "Analysis of N documents" ledes, identical "Why It Matters" text, or zero SWOT/stakeholder analysis are evidence the AI did NOT write the content. These articles MUST be rewritten before commit.
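The regression markers above can be swept for mechanically before commit. A minimal sketch; the function name, the marker list, and the article path are illustrative and should be extended as new anti-patterns are identified:

```shell
# Hedged sketch: grep a generated article for the regression markers called
# out in the REGRESSION ALERT and refuse to proceed when any are present.
article_regression_check() {
  # $1 = generated article file
  HITS=0
  for MARKER in 'AI_MUST_REPLACE' 'Analysis of [0-9][0-9]* documents'; do
    if grep -q "$MARKER" "$1"; then
      echo "🔴 REGRESSION: pattern '$MARKER' found in $1; rewrite before commit"
      HITS=$((HITS + 1))
    fi
  done
  if [ "$HITS" -eq 0 ]; then echo "✅ no regression markers in $1"; fi
  return "$HITS"
}
```

A grep sweep only detects the mechanical symptoms; identical "Why It Matters" text across articles still requires the Pass-2 read-back to catch.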
````markdown
### Political Intelligence Depth Requirements

> 🚨 **CRITICAL**: Every news article must demonstrate genuine political intelligence analysis — not information relay. The AI agent's job is to ANALYZE, not to SUMMARIZE. Articles that merely restate document titles or use generic language are REJECTED.

#### Mandatory Analysis Components (ALL article types)

Every news article MUST include ALL of the following:

1. **Structured SWOT Analysis with Evidence Tables**
   - Minimum 8 stakeholder perspectives (Citizens, Government Coalition, Opposition Bloc, Business/Industry, Civil Society, International/EU, Judiciary/Constitutional, Media/Public Opinion)
   - SWOT entries in structured HTML tables (not prose paragraphs)
   - Every SWOT entry MUST cite a specific dok_id, vote count, or named politician
   - Example: `Coalition discipline tested — 3 M MPs broke ranks on MJU18 (vote 2025/26:87)`
   - NOT: `Government faces challenges in maintaining coalition unity` ← REJECTED

2. **Color-Coded Mermaid Diagrams (rendered in HTML)**
   - Minimum 1 Mermaid diagram per article, rendered as inline SVG or using mermaid.js
   - Diagrams MUST use color-coded nodes with real data:
     ```html
     <div class="mermaid">
     graph TD
       A[Prop 2025/26:214 - Cybersecurity] -->|Referred| B[FöU Committee]
       B -->|Expected vote| C{Chamber Vote Q2 2026}
       C -->|Pass| D[NCSC Reform Enacted]
       C -->|Fail| E[Government Setback]
       style A fill:#0d6efd,color:#fff
       style D fill:#28a745,color:#fff
       style E fill:#dc3545,color:#fff
     </div>
     ```
   - NOT: Generic placeholder diagrams with no real document data

3. **Quantified Risk Matrix (L×I Scores)**
   - Every article MUST include a risk assessment section with numeric Likelihood (1-5) × Impact (1-5) scores
   - Present as an HTML table with color-coded risk cells
   - Example:
     ```html
     <table>
       <tr><th>Risk</th><th>L</th><th>I</th><th>Score</th><th>Mitigation</th></tr>
       <tr><td>Coalition fracture on defense budget</td><td>2</td><td>5</td><td>10</td><td>M-KD NATO consensus</td></tr>
     </table>
     ```

4. **Classification Rationale with Significance Scoring**
   - Every article MUST explain WHY it has its classification (MEDIUM/HIGH/LOW)
   - Include 5-dimension significance scoring: Parliamentary Impact, Policy Impact, Public Interest, Urgency, Cross-Party Significance
   - Each dimension scored 0-10 with a one-sentence rationale

5. **Forward Indicators ("What to Watch")**
   - Every article MUST include ≥3 forward indicators with:
     - Specific trigger event
     - Timeline (exact date or date range)
     - Significance if triggered
   - Example: "Watch: FöU committee scheduling of Prop. 2025/26:214 — if before April 15, signals government urgency [HIGH]"

6. **Confidence Labels on ALL Analytical Claims**
   - Every analytical statement MUST carry a `[HIGH]`, `[MEDIUM]`, or `[LOW]` confidence label
   - Label criteria:
     - `[HIGH]` — Directly supported by official document data (dok_id, vote record)
     - `[MEDIUM]` — Inferred from multiple data points with reasonable certainty
     - `[LOW]` — Speculative or based on limited/indirect evidence

7. **CSS Mindmap (for deep/comprehensive articles)**
   - For articles with analysis_depth=deep or comprehensive, include a CSS-rendered mindmap showing:
     - Central topic with branching policy areas
     - Stakeholder positions
     - Timeline progression
   - Use CSS classes: `.mindmap-container`, `.mindmap-node`, `.mindmap-branch`

8. **Dok_id Evidence Citations**
   - Every article MUST cite ≥5 specific document identifiers (e.g., `Prop. 2025/26:214`, `frs 2025/26:634`, `mot. 2025/26:1823`)
   - Citations MUST link to data.riksdagen.se when possible
   - Interpellation articles MUST cite frs IDs for every interpellation discussed

#### Quality Scoring Rubric (Articles MUST score ≥ 7.0/10)

| Dimension | Weight | Criteria | Score Range |
|-----------|--------|----------|------------|
| **Evidence Density** | 25% | dok_id citations, vote counts, named politicians per paragraph | 0-10 |
| **Analytical Depth** | 25% | Multi-framework (SWOT + Risk + Threat), not surface summaries | 0-10 |
| **Structural Completeness** | 20% | Mermaid diagrams, evidence tables, risk matrix, forward indicators | 0-10 |
| **Stakeholder Coverage** | 15% | All 8 groups analyzed with specific evidence per group | 0-10 |
| **Originality** | 15% | Unique per-document insights, no boilerplate, no repeated phrases | 0-10 |

**Minimum passing score: 7.0/10 composite**

#### Anti-Patterns (REJECTED — these indicate shallow analysis)

| Pattern | Why It's Rejected | Correct Approach |
|---------|-------------------|-----------------|
| "Requires committee review and chamber debate" (repeated 20+ times) | Generic boilerplate, no analysis | Cite specific committee precedent on this topic |
| "Sweden faces escalating X threats" | Repackaged headline, not intelligence | Cite the Säkerhetspolisen briefing date and specific incident data |
| SWOT with only 3 stakeholder groups | Incomplete framework coverage | Analyze all 8 groups with evidence per group |
| Risk assessment with "MEDIUM" text only | No quantified L×I scores | Provide numeric L=3, I=4, Score=12 with rationale |
| Articles with 0 Mermaid diagrams | Fails visual intelligence standard | Include ≥1 color-coded diagram with real data |
| Forward indicators without dates | Vague predictions, not intelligence | "Watch: FöU scheduling by April 15" with trigger |
| Confidence claims without labels | Unverifiable assertions | Add [HIGH]/[MEDIUM]/[LOW] to every claim |
| No dok_id citations in article body | Information relay, not analysis | Cite ≥5 specific document references |
````

---

## 🔧 SCRIPT ROLE BOUNDARY — Scripts Format, AI Analyzes

> **NON-NEGOTIABLE**: Scripts handle data download, HTML formatting, chart rendering, and article template structure. The AI agent handles ALL political analysis, SWOT generation, risk assessment, classification, and intelligence production.

````markdown
### Script vs AI Role Boundary

> 🚨 **ABSOLUTE RULE**: The division of labor between scripts and AI is strict and non-negotiable.

#### What Scripts DO (Formatting & Data)

Scripts (`generate-news-enhanced.ts`, `download-parliamentary-data.ts`, etc.) are responsible for:

| Script Role | Examples | Output |
|------------|---------|--------|
| **Download MCP data** | Fetch betänkanden, voteringar, motioner | JSON files in `analysis/data/` |
| **Catalog data files** | List pending analysis files | Manifest in `analysis/daily/` |
| **Render HTML template** | Apply article CSS (`../styles.css`), header, footer, nav | HTML article shell |
| **Render charts** | Canvas.js/Mermaid chart containers | Chart HTML with data attributes |
| **Render mindmaps** | CSS mindmap containers | Mindmap HTML structure |
| **Validate HTML** | HTMLHint, linkinator, Playwright | Validation reports |
| **Generate metadata** | Schema.org, OpenGraph, hreflang | HTML head metadata |
| **Format tables** | Structured HTML table rendering | Semantic table elements |
| **Create directory structure** | `analysis/daily/$DATE/$TYPE/` | Empty directory tree |

#### What Scripts MUST NEVER DO (Analysis Content)

Scripts MUST NEVER generate any of these — this is the AI agent's exclusive responsibility:

| Prohibited Script Output | Why It's Prohibited | Who Does It |
|-------------------------|--------------------|-------------|
| SWOT analysis entries | Political judgment requires context | AI agent only |
| Risk assessment scores | Likelihood/Impact assessment needs political understanding | AI agent only |
| Significance scoring | Policy impact evaluation requires expertise | AI agent only |
| Political classification | Sensitivity/urgency assessment is analytical | AI agent only |
| Threat analysis | Democratic threat assessment requires judgment | AI agent only |
| Stakeholder impact prose | Multi-perspective analysis requires reasoning | AI agent only |
| Forward indicators | Predictive intelligence requires synthesis | AI agent only |
| "Why It Matters" sections | Contextual significance requires understanding | AI agent only |
| Opposition strategy analysis | Coalition dynamics assessment is analytical | AI agent only |
| Article narrative/story | Political narrative construction requires intelligence | AI agent only |

#### Test: "The Lorem Ipsum Test"

> If you replace an analysis section's content with "Lorem Ipsum" and the article still renders correctly, the script is doing its job (formatting) and the AI's analysis was properly injected.
> If the article breaks when you replace content with Lorem Ipsum, the script is generating content (VIOLATION).

#### Deprecated Analysis-Generating Scripts

The following script directories and functions previously generated analysis content and are now **DEPRECATED** — their analysis functions are replaced by AI agent analysis in workflow prompts:

> ⚠️ **IMPORTANT DISTINCTION**: The table below lists **analysis-generating** functions that are deprecated. The **HTML rendering** functions (`generateSwotSection()`, `generateDashboardSection()`, `generateMindmapSection()`) are **NOT deprecated** — they are active HTML renderers that take structured data and produce formatted HTML sections. AI agents produce analysis content in markdown files; scripts then render that content into HTML using these renderer functions. See §HTML RENDERER FUNCTIONS below.
- -| Directory/Function | Status | Replacement | -|-----------|--------|-------------| -| `scripts/ai-analysis/` | ⚠️ DEPRECATED for analysis generation | AI agent performs analysis per workflow prompts | -| `scripts/analysis-framework/` | ⚠️ DEPRECATED for analysis generation | AI agent uses methodology guides directly | -| `scripts/generate-news-enhanced/swot-analyzer.ts` | ⚠️ DEPRECATED | AI agent generates SWOT per political-swot-framework.md | -| `scripts/data-transformers/content-generators/index.ts` → `generateStakeholderSwotSection()` | ⚠️ DEPRECATED | AI agent generates stakeholder analysis per stakeholder-impact.md | -| `scripts/generate-news-enhanced/ai-analysis-pipeline.ts` → `AIAnalysisPipeline` class | ⚠️ DEPRECATED | AI agent performs the primary analysis; class still runs as a deprecated/stub runtime pipeline | -| `scripts/data-transformers/content-generators/shared.ts` → `generateDeepAnalysisSection()` | ⚠️ DEPRECATED | AI prompt: "Write 5W deep analysis (Who/What/When/Why/Winners)" | -| `scripts/data-transformers/content-generators/shared.ts` → `generateTimelineContext()` | 🔴 STUB | Now outputs `AI_MUST_REPLACE` marker — AI MUST write specific timeline analysis | -| `scripts/data-transformers/content-generators/shared.ts` → `broadAgendaText()`, `focusedAgendaText()`, `defaultWhyText()` | 🔴 STUB | Now outputs `AI_MUST_REPLACE` marker — AI MUST write specific "Why This Matters" analysis | -| `scripts/data-transformers/content-generators/shared.ts` → `genericImpactText()`, `propImpactText()`, `betImpactText()`, `motImpactText()` | 🔴 STUB | Now outputs `AI_MUST_REPLACE` marker — AI MUST write specific political impact analysis | -| `scripts/data-transformers/content-generators/shared.ts` → `genericConsequencesText()`, `propConsequencesText()`, `motConsequencesText()` | 🔴 STUB | Now outputs `AI_MUST_REPLACE` marker — AI MUST write specific consequences analysis | -| `scripts/data-transformers/content-generators/shared.ts` → `defaultCriticalText()` | 🔴 
STUB | Now outputs `AI_MUST_REPLACE` marker — AI MUST write specific critical assessment | -| `scripts/editorial-pillars.ts` → `INTER_PILLAR_TRANSITIONS` | 🔴 EMPTY | Transitions now return empty strings — AI MUST write article-specific connective prose or omit | -| `scripts/data-transformers/content-generators/newsworthiness.ts` → `scoreNewsworthiness()` | ✅ ACTIVE (data utility) | Heuristic scoring retained for routing/experimentation/tests; AI MUST independently assess editorial significance | -| `scripts/data-transformers/content-generators/shared.ts` → all `*Text()` templates | ⚠️ DEPRECATED | AI prompt: "Write editorial analysis from actual document data" | - -**These scripts may still be called for data downloading and HTML formatting functions**, but their analysis output (SWOT entries, risk scores, classifications, titles, descriptions, editorial judgments) MUST be treated as stubs that the AI agent MUST overwrite with real template-compliant analysis. - -#### HTML Renderer Functions (NOT Deprecated — Active Utilities) - -The following functions are **HTML renderers**, not analysis generators. They take structured data and produce formatted HTML. 
They are used by `generate-news-enhanced` to build article sections and are **actively maintained**: - -| Function | Module | Purpose | AI Agent Relationship | -|----------|--------|---------|----------------------| -| `generateSwotSection({ data, lang })` | `swot-section.ts` | Renders SWOT quadrant HTML from `SwotData` | **Current implementation:** AI writes SWOT analysis in markdown → script reads it → extracts SWOT data → calls this function to render HTML | -| `generateDashboardSection({ data, lang })` | `dashboard-section.ts` | Renders Chart.js canvas HTML from chart config | Renderer utility: used when structured chart data is provided by the pipeline; automatic extraction from AI analysis markdown is not currently implemented as a standard flow | -| `generateMindmapSection({ topic, branches, lang })` | `mindmap-section.ts` | Renders CSS mindmap HTML from branch data | Renderer utility: used when structured mindmap branch data is provided by the pipeline; analysis-to-mindmap extraction is workflow-specific or future-facing | -| `generateMultiPanelDashboardSection(...)` | `dashboard-section.ts` | Renders multi-panel CSS dashboards | Renderer utility for pre-structured panel data; not a guaranteed current markdown-extraction path | -| `generateEconomicDashboardSection(...)` | `economic-dashboard-section.ts` | Renders economic indicator dashboard | Renderer utility for structured economic dashboard data when available; automated extraction from agent analysis should be treated as planned or workflow-specific unless separately implemented | - -> **How AI agents interact with these**: AI agents do NOT call these TypeScript functions directly. These utilities are renderers only. 
Only one structured extraction from analysis markdown is currently implemented as a standard flow: SWOT data. Beyond that, `generate-news-enhanced` should be treated as consuming final article HTML/section-ready content, not as generically extracting chart data, mindmap structures, or other renderer inputs from markdown analysis files. Workflow authors MUST NOT assume a supported structured-parse step exists for dashboard, mindmap, or other renderer inputs; if a renderer is used, the workflow must explicitly define how its input data is produced and validated.

#### Minor TypeScript/Script Corrections Policy

> **Agentic workflows MAY make minor corrections** to TypeScript code and scripts when necessary to complete their mission, but MUST use AI prompts for all important analysis and content creation.

| Allowed Minor Corrections | Prohibited Changes |
|--------------------------|-------------------|
| Fix broken file paths or import statements | Rewrite analysis-generating logic |
| Correct typos in template strings | Add new analysis functions |
| Fix date format bugs in scripts | Change article quality thresholds |
| Update stale configuration values | Modify editorial framework scoring |
| Fix syntax errors blocking article generation | Change MCP tool invocation patterns |
| Adjust HTML template structure for validation | Remove or bypass quality gates |

**Rule**: If a correction affects analysis content quality, it MUST be done via AI prompt analysis — not by editing TypeScript code.

---

## 🏆 AI ANALYSIS QUALITY HIERARCHY — AI Always Wins

> **NON-NEGOTIABLE**: AI-generated deep political analysis MUST NEVER be overwritten, shadowed, or replaced by script-generated heuristic analysis. The quality hierarchy is absolute.
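The precedence rule can be sketched as a small selection helper. This is an illustrative sketch only; the directory layout matches `analysis/daily/YYYY-MM-DD/{articleType}/` as documented below, but the function name and the rerun-suffix bound are assumptions, not the repo's actual `analysis-reader.ts` API.

```typescript
// Hypothetical sketch — names are illustrative, not the repo's actual API.
// Given the subdirectory names that exist under analysis/daily/YYYY-MM-DD/,
// pick the one whose analysis may be published for an article type.
// AI analysis ({articleType}/, then rerun suffixes {articleType}-2/ …)
// always outranks script-generated heuristic output, which is never eligible.
function pickAnalysisDir(existing: string[], articleType: string): string | null {
  const ranked: string[] = [
    articleType, // 🥇 primary AI workflow output
    ...[2, 3, 4].map((n) => `${articleType}-${n}`), // 🥈 rerun-suffixed output
  ];
  for (const dir of ranked) {
    if (existing.includes(dir)) return dir;
  }
  // No AI analysis found: the workflow must create it from scratch.
  // It must NOT fall back to script-generated heuristic analysis.
  return null;
}
```

A null result is a signal to run the full analysis protocol, never a license to publish heuristic stub content.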
- -````markdown -### Analysis Quality Priority (Highest → Lowest) - -| Priority | Source | Quality | Location | Description | -|----------|--------|---------|----------|-------------| -| 🥇 **1st** | AI workflow (agentic) | Publication-quality deep analysis | `analysis/daily/YYYY-MM-DD/{articleType}/` | Full methodology compliance: Mermaid diagrams, evidence tables, dok_id citations, multi-framework analysis | -| 🥈 **2nd** | AI workflow (rerun suffix) | Publication-quality deep analysis | `analysis/daily/YYYY-MM-DD/{articleType}-2/` etc. | Same quality as 1st, auto-suffixed for repeat runs | -| ℹ️ **Data** | Script (`download-parliamentary-data.ts`) | Data download only | `analysis/daily/YYYY-MM-DD/{docType}/` | Downloads MCP data, stores JSON documents, writes `data-download-manifest.md` — does NOT produce analysis | - -### Rules - -1. **`analysis-reader.ts` prefers subdirectory files over root-level files.** AI analysis in `{articleType}/` subdirectories always takes priority. -2. **`download-parliamentary-data.ts` downloads data ONLY.** The script writes `data-download-manifest.md` and document JSON files. It does NOT produce analysis files (no classification, risk, SWOT, threat, stakeholder, significance, cross-reference, or synthesis). ALL analysis is performed by the AI agent. -3. **AI workflows MUST create ALL analysis from scratch.** The AI agent reads downloaded data (JSON documents) and performs full methodology-compliant analysis following `analysis/methodologies/ai-driven-analysis-guide.md` using templates from `analysis/templates/`. -4. **Scripts are for data and HTML rendering ONLY.** Per Rule 2 of `ai-driven-analysis-guide.md`, scripts download data and render HTML articles. AI agents create all analysis content, text, and editorial intelligence. - -### AI Agent Analysis Workflow - -After `download-parliamentary-data.ts` downloads data, the AI agent MUST: -1. Read the downloaded document JSON files from `analysis/daily/$DATE/$DOC_TYPE/documents/` -2. 
Run `npx tsx scripts/catalog-downloaded-data.ts --pending-only` to discover files needing analysis -3. Perform full per-file analysis following `analysis/methodologies/ai-driven-analysis-guide.md` -4. Use templates from `analysis/templates/` for each analysis type (SWOT, risk, threat, classification, significance, stakeholder, synthesis) -5. Iterate and improve ALL output (AI FIRST principle — minimum 2 passes) -6. Add metadata: `**Produced By**: {workflow-name} AI analysis (deep political intelligence)` -```` - ---- - -## 📊 TOP 10 QUALITY ISSUES IN CURRENT ARTICLES (2026-04-03 Systemic Audit) - -> **Quality audit findings** — these issues MUST be addressed by improving all agentic workflow prompts. The 2026-04-03 systemic audit supersedes the earlier 2026-04-02 spot-check findings (placeholder ledes, generic titles, missing analysis references) which are now subsumed under the broader issues below. - -````markdown -### Systemic Quality Issues (2026-04-03 Audit) - -> 🔴 **CRITICAL**: The 2026-04-03 audit revealed that deprecated template functions are the PRIMARY source of low-quality content in 85%+ of articles. AI agents MUST overwrite ALL template-generated content. - -| # | Issue | Severity | Scope | Root Cause | Required Fix | -|---|-------|----------|-------|-----------|------------| -| 1 | **"The political landscape remains fluid, with both government and opposition positioning for advantage."** appears in 444+ files | CRITICAL | ALL types | `shared.ts` Winners & Losers fallback template | AI MUST replace with specific winners/losers naming parties, evidence, vote margins | -| 2 | **"No chamber debate data is available for these items, limiting our ability..."** in 456+ files | CRITICAL | ALL types | Script excuse for missing data | AI MUST fetch debate data via MCP `search_anforanden` or analyze from committee text | -| 3 | **"Touches on {X} policy. 
{Generic domain text}..."** in 210+ files — identical across documents | HIGH | committee-reports, propositions, motions | `shared.ts` boilerplate templates per policy domain | AI MUST write unique "Why It Matters" per document with specific evidence | -| 4 | **Contradictory document counts** (title says 50, body shows 10) | HIGH | opposition-motions | Script counts ALL motions, article only details subset | AI MUST reconcile counts: either detail all or correctly scope the title | -| 5 | **Policy misclassification** (food safety labeled "housing policy") | HIGH | opposition-motions | Keyword heuristic in scripts, not committee-based | AI MUST use Riksdag committee code for domain (see ai-driven-analysis-guide.md §Policy Domain Inference) | -| 6 | **Missing analysis-references section** in articles across multiple dates | CRITICAL | ALL types | AI rewrites HTML without preserving auto-generated section; manual articles skip it entirely | AI MUST verify `class="analysis-references"` exists in EVERY article BEFORE committing — add manually if missing (see §ANALYSIS FILE GITHUB REFERENCES) | -| 7 | **Empty synthesis files** (0 documents analyzed) for propositions and week-ahead | CRITICAL | propositions, week-ahead | `download-parliamentary-data.ts` found 0 docs, AI accepted empty output | AI MUST use MCP fallback when script reports 0 (see ai-driven-analysis-guide.md §Empty Analysis Fallback) | -| 8 | **Placeholder ledes** ("Analysis of 10 documents covering Committee:, Published:") in 64+ files | MEDIUM | Older articles | Script meta description template never overwritten | AI MUST generate analytical lede from actual content | -| 9 | **assessArticleQuality() stub** always returns 100/100 | CRITICAL | ALL | Quality gate disabled in helpers.ts | AI MUST self-evaluate against 5-dimension rubric before committing | -| 10 | **Raw Swedish text in English articles** — unedited government document excerpts | HIGH | propositions, interpellations | Script pastes excerpt 
without translation | AI MUST translate/summarize, NEVER paste raw Swedish in English articles | - -### BANNED Content Patterns (v4.0 — Violations = Article Rejection) - -The following text patterns are BANNED in all generated articles. The AI agent MUST detect and replace these during article generation: - -``` -❌ "The political landscape remains fluid, with both government and opposition positioning for advantage." -❌ "No chamber debate data is available for these items, limiting our ability to assess..." -❌ "Touches on {X} policy. {Policy domain} proposals/reports/motions {generic text}..." -❌ "Analysis of N documents covering {Field}:, {Field}:" -❌ "Requires committee review and chamber debate" -❌ "{Category}: Policy Priorities This Week: {Topic} in Focus" -❌ Any "Why It Matters" text that appears identically for ≥2 documents in the same article -❌ Any "Winners & Losers" section under 50 words that doesn't name specific parties -❌ "The pace of activity signals the political urgency driving these proceedings." 
-❌ "define the current legislative landscape" -❌ "broad legislative push that will shape multiple aspects of Swedish society" -❌ "critical period for understanding the government's strategic direction" -❌ "culmination of legislative review, with recommendations that guide chamber votes" -❌ "interplay between governing ambition and opposition scrutiny" -❌ "cascade through committee deliberations, chamber votes, and ultimately into policy implementation" -❌ "establish the policy alternatives that opposition parties will champion" -❌ "Standard parliamentary procedures are being followed, but vigilance is warranted" -❌ "gap between legislative intent and implementation often reveals" -❌ "While parliament deliberates these legislative matters, the executive branch has been equally active" -❌ Any Deep Analysis subsection (Timeline, Why This Matters, Political Impact, Actions & Consequences, Critical Assessment) that contains generic template text instead of specific political analysis -``` - -### Article Quality Self-Check (MANDATORY before committing) - -Every news workflow MUST include this AI self-check step after article generation: - -```markdown -## Quality Self-Check Protocol - -Before committing, verify EACH article passes these checks: - -### ✅ Content Quality (must pass ALL) -- [ ] Lede paragraph names specific actors/institutions and policy significance (NOT "Analysis of N documents") -- [ ] ZERO instances of "The political landscape remains fluid" or equivalent generic filler -- [ ] Every "Why It Matters" section is UNIQUE to its document (no duplicate text across documents) -- [ ] Winners & Losers section names ≥2 winners and ≥2 losers with party abbreviations and evidence -- [ ] ≥5 dok_id citations in article body -- [ ] ≥3 named politicians with party abbreviation (e.g., "Elisabeth Svantesson (M)") -- [ ] ZERO ` 6 - line [2.0, -2.2, 5.2, 1.3, -0.2, 0.8] - line [1.8, -2.0, 4.5, 2.1, 0.5, 1.2] -``` - -**Budget Structure (pie)**: -```mermaid -pie title Sweden 
Government Revenue Composition
    "Income & Profit Tax" : 38
    "Goods & Services Tax (VAT)" : 27
    "Social Contributions" : 22
    "International Trade Tax" : 3
    "Other Revenue" : 10
```

**Policy Impact Flow (graph)**:
```mermaid
graph LR
    subgraph "📊 Economic Indicators"
        GDP["GDP Growth<br/>0.8%"]
        UE["Unemployment<br/>8.4%"]
        INF["Inflation<br/>2.1%"]
    end
    subgraph "🏛️ Policy Response"
        FP["Fiscal Policy<br/>Expansionary"]
        MP["Monetary Policy<br/>Rate cuts"]
        LP["Labor Policy<br/>Active programs"]
    end
    subgraph "📈 Expected Outcomes"
        G["Growth ↑"]
        E["Employment ↑"]
        S["Stability"]
    end
    GDP --> FP
    UE --> LP
    INF --> MP
    FP --> G
    LP --> E
    MP --> S
    style GDP fill:#00d9ff,color:#000
    style UE fill:#ff006e,color:#fff
    style INF fill:#ffbe0b,color:#000
    style FP fill:#1a1e3d,color:#e0e0e0
    style MP fill:#1a1e3d,color:#e0e0e0
    style LP fill:#1a1e3d,color:#e0e0e0
    style G fill:#28a745,color:#fff
    style E fill:#28a745,color:#fff
    style S fill:#28a745,color:#fff
```

**Nordic Comparison Quadrant (quadrantChart)**:
```mermaid
quadrantChart
    title Nordic Country Performance Matrix
    x-axis "Low GDP Growth" --> "High GDP Growth"
    y-axis "High Unemployment" --> "Low Unemployment"
    quadrant-1 Strong Economy
    quadrant-2 Growth Challenge
    quadrant-3 Structural Issues
    quadrant-4 Recovery Phase
    Sweden: [0.45, 0.35]
    Denmark: [0.65, 0.55]
    Norway: [0.40, 0.60]
    Finland: [0.30, 0.45]
```

**Defense Spending Timeline (gantt)**:
```mermaid
gantt
    title Sweden NATO 2% GDP Defense Spending Trajectory
    dateFormat YYYY
    axisFormat %Y
    section Actual
    1.1% GDP : done, 2019, 2020
    1.2% GDP : done, 2020, 2021
    1.3% GDP : done, 2021, 2023
    1.5% GDP : done, 2023, 2024
    1.7% GDP : active, 2024, 2025
    section Projected
    1.9% GDP : 2025, 2026
    2.0% GDP Target : crit, 2026, 2027
```

**Governance Radar (mindmap)**:
```mermaid
mindmap
  root((Sweden<br/>Governance))
    Rule of Law
      Score: 1.60
      Percentile: 97th
    Voice & Accountability
      Score: 1.57
      Percentile: 96th
    Govt Effectiveness
      Score: 1.60
      Percentile: 94th
    Regulatory Quality
      Score: 1.72
      Percentile: 96th
    Control of Corruption
      Score: 2.03
      Percentile: 98th
    Political Stability
      Score: 0.76
      Percentile: 72nd
```

> **Use case guide**: `analysis/worldbank/use-cases.md` — detailed scenarios for when each indicator adds value.
````

---

## 🚨 UNIVERSAL RULE: No Workflow Run Wasted — Always Perform Analysis (applies to ALL workflows)

> **NON-NEGOTIABLE FIRST PRINCIPLE**: Every agentic workflow run MUST produce improved analysis artifacts. No workflow run should ever complete without at least reviewing and improving existing analysis. This applies to ALL workflows — content generation, translation, monitoring, review, and any future workflow type.

````markdown
### Mandatory Analysis Improvement Protocol

> 🚨 **ABSOLUTE RULE**: ALL agentic workflows MUST follow `analysis/methodologies/ai-driven-analysis-guide.md` and produce or improve analysis artifacts on EVERY run. No exceptions. No workflow run is ever "wasted" — at minimum, existing analysis MUST be reviewed and improved.

#### Why This Rule Exists

Every workflow run consumes compute resources and has access to MCP tools, methodology documents, and analysis templates. Failing to produce analysis output from any workflow run is an unacceptable waste. Even workflows whose primary purpose is not analysis (e.g., translation, validation) MUST use their runtime to improve the analysis corpus.

#### Universal Requirements (ALL Workflows)

1. **Read `analysis/methodologies/ai-driven-analysis-guide.md`** — the master guide governing all analysis
2. **Read ALL 6 methodology guides** and **ALL 8 analysis templates** (see Step 2 and Step 3 below)
3. **Check for existing analysis** in `analysis/daily/` for the current date or relevant dates
4. 
**If existing analysis exists**: Improve, extend, correct, or complete it: - - Add missing Mermaid diagrams - - Fill empty SWOT quadrants with evidence-based entries - - Add dok_id citations where missing - - Improve risk scores with additional context from MCP data - - Extend stakeholder analysis with newly available data - - Correct any factual errors or outdated information - - Complete any `[REQUIRED]` placeholders -5. **If no existing analysis exists**: Create new analysis following the full protocol (Steps 1–6 in the AI-Driven Analysis section below) -6. **Commit analysis artifacts** to the `analysis/` folder — analysis MUST always be committed alongside any other workflow output, subject to the GitHub Actions `safe-outputs` 100-file limit. When approaching this limit, prioritize committing a minimal, high-impact subset of analysis (e.g., daily summaries and key findings) and prune lower-priority or bulk artifacts first (e.g., `analysis/weekly/`, `analysis/data/`). - -#### For Non-Analysis Workflows (translation, validation, etc.) - -Even workflows whose primary task is NOT analysis MUST: -1. **Before primary task**: Read the analysis guide and check for existing analysis needing improvement -2. **During primary task**: Note any new insights from MCP data or document processing -3. **After primary task**: Review and improve at least one existing analysis file (if any exist for the relevant date) -4. 
**At commit time**: Include improved analysis alongside primary workflow output

```bash
# Universal analysis check — run at the start of EVERY workflow
if [ -z "$ARTICLE_DATE" ]; then
  date -u +%Y-%m-%d > /tmp/today.txt
  read ARTICLE_DATE < /tmp/today.txt
fi
echo "=== Mandatory Analysis Check ==="

# Check for existing analysis needing improvement
find "analysis/daily/$ARTICLE_DATE/" -name "*.md" -type f 2>/dev/null | wc -l > /tmp/existing_analysis.txt
read EXISTING_ANALYSIS < /tmp/existing_analysis.txt
find "analysis/daily/$ARTICLE_DATE/" -name "*-analysis.md" -type f 2>/dev/null | wc -l > /tmp/pending_analysis.txt
read PENDING_ANALYSIS < /tmp/pending_analysis.txt
grep -rl '\[REQUIRED\]' "analysis/daily/$ARTICLE_DATE/" 2>/dev/null | wc -l > /tmp/req_placeholders.txt
read REQUIRED_PLACEHOLDERS < /tmp/req_placeholders.txt
# Single quotes are required here: backticks inside double quotes would
# be interpreted by the shell as command substitution.
find "analysis/daily/$ARTICLE_DATE/" -name "*.md" -type f -exec grep -L '```mermaid' {} \; 2>/dev/null | wc -l > /tmp/missing_mermaid.txt
read MISSING_MERMAID < /tmp/missing_mermaid.txt

echo "📊 Existing analysis files: $EXISTING_ANALYSIS"
echo "📊 Per-file analyses: $PENDING_ANALYSIS"
echo "⚠️ Files with [REQUIRED] placeholders: $REQUIRED_PLACEHOLDERS"
echo "⚠️ Files missing Mermaid diagrams: $MISSING_MERMAID"

if [ "$EXISTING_ANALYSIS" -gt 0 ]; then
  echo "📋 Existing analysis found — MUST review and improve during this workflow run"
else
  echo "📋 No existing analysis for $ARTICLE_DATE — check nearby dates for improvement opportunities"
  for DAYS_BACK in 1 2 3; do
    date -u -d "$ARTICLE_DATE - $DAYS_BACK days" +%Y-%m-%d 2>/dev/null > /tmp/check_date.txt || date -u "+%Y-%m-%d" --date="$DAYS_BACK days ago" 2>/dev/null > /tmp/check_date.txt || true
    read CHECK_DATE < /tmp/check_date.txt
    [ -z "$CHECK_DATE" ] && continue
    find "analysis/daily/$CHECK_DATE/" -name "*.md" -type f 2>/dev/null | wc -l > /tmp/nearby_analysis.txt
    read NEARBY_ANALYSIS < /tmp/nearby_analysis.txt
    if [ "$NEARBY_ANALYSIS" -gt 0 ]; then
      echo " 
📍 Found $NEARBY_ANALYSIS analysis files for $CHECK_DATE — improve these" - break - fi - done -fi -echo "================================" -``` - -#### Analysis Improvement Checklist (for existing analysis files) - -When improving existing analysis, apply these checks: -- [ ] Every file has ≥1 color-coded Mermaid diagram (add if missing) -- [ ] No `[REQUIRED]` placeholders remain (fill with evidence-based content) -- [ ] SWOT entries cite specific dok_id, vote counts, party names (not generic text) -- [ ] Risk matrix has numeric L×I scores (not placeholder values) -- [ ] Stakeholder analysis covers all 8 groups (Citizens, Government Coalition, Opposition Bloc, Business/Industry, Civil Society, International/EU, Judiciary/Constitutional, Media/Public Opinion) with specific evidence per group (not generic perspectives) -- [ ] Forward indicators have specific timelines and triggers (not vague predictions) -- [ ] Confidence labels (`[HIGH]`/`[MEDIUM]`/`[LOW]`) present on all analytical claims -- [ ] Writing follows `analysis/methodologies/political-style-guide.md` standards - -> **Key principle**: If a workflow cannot create NEW analysis (e.g., no new data), it MUST still improve EXISTING analysis. The analysis corpus should get better with every workflow run, never stay the same or degrade. -```` - -## Shared Skill Block (copy into every workflow) - -```markdown -## Required Skills - -Before generating articles, consult these skills: -1. **`.github/skills/editorial-standards/SKILL.md`** — OSINT/INTOP editorial standards -2. **`.github/skills/swedish-political-system/SKILL.md`** — Parliamentary terminology -3. **`.github/skills/legislative-monitoring/SKILL.md`** — Voting patterns, committee tracking, bill progress -4. **`.github/skills/riksdag-regering-mcp/SKILL.md`** — MCP tool documentation -5. **`.github/skills/language-expertise/SKILL.md`** — Per-language style guidelines -6. **`.github/skills/gh-aw-safe-outputs/SKILL.md`** — Safe outputs usage -7. 
**`scripts/prompts/v2/political-analysis.md`** — Core political analysis framework (6 analytical lenses) -8. **`scripts/prompts/v2/stakeholder-perspectives.md`** — Multi-perspective analysis instructions -9. **`scripts/prompts/v2/quality-criteria.md`** — Quality self-assessment rubric (minimum 7/10) -10. **`scripts/prompts/v2/per-file-intelligence-analysis.md`** — Per-file AI analysis protocol -11. **`analysis/methodologies/ai-driven-analysis-guide.md`** — Master methodology guide (v5.0): analysis-driven article decisions, policy domain inference, empty analysis fallback, Election 2026 lens -12. **`analysis/templates/per-file-political-intelligence.md`** — Per-file analysis output template (SWOT, risk matrix, threat taxonomy, Mermaid diagrams) -``` - -## 🧠 Repo Memory — Persistent Cross-Workflow Context (copy into every workflow) - -> **All workflows share branch `memory/news-generation`** — git-backed, persistent across runs, version-controlled. Unlike ephemeral MCP servers that die when the process ends, repo-memory survives indefinitely and is readable by every workflow in the repository. - -````markdown -### Repo Memory Usage - -All workflows have access to `repo-memory` on the shared branch `memory/news-generation`. -Use it to maintain cross-workflow context: what was covered, what's pending, quality scores, and recurring patterns. - -**Shared branch `memory/news-generation`** means: -- Breaking news knows what weekly review already covered -- Translations know which articles are pending -- Evening analysis knows what propositions/motions workflows produced today -- Weekly/monthly reviews can see cumulative quality trends - -**When to READ memory (start of every run):** -1. Check `memory/news-generation/last-run-{workflow-name}.json` for previous run metadata -2. Read `memory/news-generation/covered-documents/{YYYY-MM-DD}.json` for today (and optionally yesterday) to avoid re-analyzing documents already covered recently -3. 
Read `memory/news-generation/quality-scores-summary.json` for rolling quality trends - -**When to WRITE memory (end of every run):** -1. Update `memory/news-generation/last-run-{workflow-name}.json` with: - - `date`, `article_type`, `documents_analyzed` (array of dok_ids), `articles_generated` (count), `quality_score` -2. Write today's processed documents to `memory/news-generation/covered-documents/{YYYY-MM-DD}.json`: - - Each dok_id processed today with article_type and timestamp - - Sharded by date to prevent unbounded growth; retain only recent shards (last 7 days) for deduplication -3. Write detailed quality metrics to `memory/news-generation/quality-scores/{YYYY-MM-DD}.json` and update `memory/news-generation/quality-scores-summary.json` with compact rolling aggregates - - Prune old shards beyond the retention window your workflow needs - -**File naming convention:** -- `last-run-{workflow-name}.json` — per-workflow state (e.g., `last-run-news-propositions.json`) -- `covered-documents/{YYYY-MM-DD}.json` — cross-workflow deduplication index, sharded by date -- `quality-scores/{YYYY-MM-DD}.json` — detailed quality tracking, sharded by date -- `quality-scores-summary.json` — compact rolling aggregates (kept small) -- `translation-status.json` — tracks which articles need translation (used by news-translate) - -**Example: Deduplication across workflows** -```jsonc -// covered-documents/2026-04-04.json -{ - "H901FiU1": { "workflow": "news-committee-reports", "timestamp": "2026-04-04T06:15:00Z" }, - "H902Prop45": { "workflow": "news-propositions", "timestamp": "2026-04-04T07:30:00Z" } -} -``` -Before analyzing a document, check if its dok_id already appears in today's shard. If so, skip or cross-reference. -```` - -## Standardised Analysis Depth Gate (copy into every workflow) - -```markdown -### Standardised Analysis Depth Gate - -> ⚠️ **Default is `deep`** — not `standard`. 
Analysis must always produce publication-quality output with Mermaid diagrams and evidence tables. -> 🔴 **v5.0: ALL depths now require 2+ iterations (improvement pass is MANDATORY)** - -| Depth | AI iterations (min) | SWOT stakeholders | Charts | Mindmap | Mermaid diagrams | Risk matrix (L×I) | Forward indicators | Min. analysis time | Min. article time | -|-------|---------------------|-------------------|--------|---------|-----------------|-------------------|-------------------|-------------------|-------------------| -| standard | 2 (1 create + 1 improve) | ≥5 (of 8 groups) | ≥1 | optional | ≥1 color-coded | ≥2 risks scored | ≥2 with triggers | 15 minutes | 12 minutes | -| deep | 2-3 (1 create + 1-2 improve) | ≥7 (of 8 groups) | ≥2 | required | ≥2 color-coded | ≥4 risks scored | ≥3 with triggers | 22 minutes | 18 minutes | -| comprehensive | 3+ (1 create + 2+ improve) | all 8 groups | ≥3 | required | ≥3 color-coded | ≥6 risks scored | ≥5 with triggers | 30 minutes | 22 minutes | - -**The 8 mandatory stakeholder groups are**: Citizens, Government Coalition, Opposition Bloc, Business/Industry, Civil Society, International/EU, Judiciary/Constitutional, Media/Public Opinion. Analysis for each group MUST cite specific evidence (dok_id, vote counts, named politicians). - -**Minimum requirement for ALL depths**: Every analysis file must contain at least 1 color-coded Mermaid diagram, structured evidence tables with dok_id citations, a quantified risk matrix with L×I scores, forward indicators with specific triggers/timelines, and follow the corresponding template structure exactly. Plain prose without tables/diagrams is NEVER acceptable regardless of depth level. Every SWOT entry must cite dok_id, vote counts, or named politicians — generic text is REJECTED. - -**🔴 ITERATIVE IMPROVEMENT IS MANDATORY AT ALL DEPTHS**: The "AI iterations" column specifies MINIMUM iterations. Pass 1 creates the analysis. Pass 2+ reads ALL analysis back completely and improves every file. 
A single pass is NEVER sufficient regardless of depth level. The improvement pass must produce MEASURABLE improvements: more evidence citations, deeper stakeholder analysis, additional Mermaid diagrams, stronger risk assessments, and richer cross-references. -``` - -## MANDATORY Playwright Validation (copy into every content workflow) - -````markdown -### Playwright Visual Validation -Run Playwright validation before creating the PR: -```bash -# HTMLHint validation -npx htmlhint "news/*-{type}-*.html" - -# Playwright visual validation (accessibility, RTL, responsive) -npx tsx scripts/validate-articles-playwright.ts --filter "{type}" - -# Validate JSON-LD cross-references -npx tsx scripts/validate-cross-references.ts news/*-{type}-*.html -``` -```` - -## 🔌 MCP Architecture & Tool Reference - -> **This section explains how MCP servers work in gh-aw agentic workflows.** Understanding this architecture prevents "unknown tool" errors, authentication failures, and wasted retry loops. - -### Architecture Overview - -``` -┌──────────────────────────────────────────────────────────────────┐ -│ AWF Sandbox (Docker container) │ -│ │ -│ ┌─────────────┐ ┌────────────────────────────────────────┐ │ -│ │ AI Agent │───▶│ MCP Gateway (gh-aw-mcpg v0.2.26+) │ │ -│ │ (Copilot) │ │ http://host.docker.internal:$PORT/ │ │ -│ └─────────────┘ │ (PORT=8080 in gh-aw v0.69+, was 80) │ │ -│ └──────┬──────────┬──────────┬──────────┘ │ -│ │ │ │ │ -│ ┌────────▼──┐ ┌─────▼────┐ ┌──▼──────────┐ │ -│ │ riksdag- │ │ scb │ │ world-bank │ │ -│ │ regering │ │(container)│ │ (container) │ │ -│ │ (HTTP) │ │node:lts │ │ node:lts │ │ -│ └───────────┘ └──────────┘ └─────────────┘ │ -│ │ │ -│ ┌────────▼──────────┐ │ -│ │ riksdag-regering- │ │ -│ │ ai.onrender.com │ │ -│ │ (remote HTTP MCP) │ │ -│ └───────────────────┘ │ -└──────────────────────────────────────────────────────────────────┘ -``` - -**Key concepts:** -1. **The AI agent calls tools by name** — e.g., `get_sync_status({})`, `search_tables({...})`. 
No server prefix needed. -2. **The MCP gateway routes** each tool call to the correct MCP server based on tool registration. -3. **riksdag-regering** is a remote HTTP MCP server (hosted on Render.com — subject to cold starts). -4. **scb** and **world-bank** are local container-based MCP servers (started by the gateway — always available). -5. **GitHub tools** (`github___*`) and **safe output tools** (`safeoutputs___*`) are built-in — always available. - -### How the AI Agent Calls MCP Tools - -MCP tools are available as **direct function calls** in the agent's tool list. Call them by their exact tool name: - -```javascript -// riksdag-regering tools (32 tools — remote HTTP server) -get_sync_status({}) // Health check — ALWAYS call first -get_propositioner({ rm: "2025/26", limit: 20 }) // Government propositions -get_betankanden({ rm: "2025/26", limit: 20 }) // Committee reports -get_motioner({ rm: "2025/26", limit: 20 }) // MP motions -get_interpellationer({ rm: "2025/26", limit: 20 }) // Interpellations -get_fragor({ rm: "2025/26", limit: 20 }) // Written questions -search_dokument({ doktyp: "prop", rm: "2025/26", limit: 20 }) // Search documents -search_anforanden({ rm: "2025/26", limit: 20 }) // Search speeches -search_voteringar({ rm: "2025/26", limit: 20 }) // Search votes -search_ledamoter({ parti: "S", limit: 50 }) // Search MPs -get_calendar_events({ from: "2026-04-01", tom: "2026-04-30" }) // Calendar -get_voting_group({ rm: "2025/26", groupBy: "parti" }) // Voting groups -enhanced_government_search({ query: "budget", limit: 10 }) // Combined search -analyze_g0v_by_department({}) // Department analysis -search_regering({ title: "budget", limit: 10 }) // Government documents -get_dokument({ dok_id: "H901FiU1" }) // Specific document -get_ledamot({ intressent_id: "0123456789" }) // Specific MP -fetch_report({ report: "ledamotsstatistik" }) // Statistical reports - -// SCB (Statistics Sweden) tools (5 tools — local container) -search_tables({ query: 
"befolkning", language: "en" }) // Search tables -get_table_info({ table_id: "07459", language: "en" }) // Table info -fetch_metadata({ table_id: "07459", language: "en" }) // Table metadata -query_table({ table_id: "07459", value_codes: { Tid: "top(5)" } }) // Query data -get_code_list({ code_list_id: "vs_Fylker" }) // Code lists - -// World Bank tools (5 tools — local container) -get-economic-data({ countryCode: "SE", indicator: "GDP_GROWTH", years: 10 }) -get-social-data({ countryCode: "SE", indicator: "POPULATION", years: 10 }) -get-education-data({ countryCode: "SE", indicator: "LITERACY_RATE", years: 10 }) -get-health-data({ countryCode: "SE", indicator: "HEALTH_EXPENDITURE", years: 10 }) -get-country-info({ countryCode: "SE" }) -search-indicators({ keyword: "education" }) - -// GitHub built-in tools (always available) -github___list_issues({...}) -github___create_pull_request({...}) -// ... see github_mcp_tools_with_safeoutputs_prompt.md for full list - -// Safe output tools (always available — NEVER search for these via bash) -safeoutputs___create_pull_request({...}) -safeoutputs___noop({"message": "..."}) -safeoutputs___dispatch_workflow({...}) -``` - -> **⚠️ CRITICAL**: Tool names are EXACT. `get_sync_status` ≠ `getSyncStatus` ≠ `get-sync-status`. World Bank tools use **hyphens** (`get-economic-data`). Riksdag/SCB tools use **underscores** (`get_sync_status`, `search_tables`). 
- -### How TypeScript Scripts Access MCP (via mcp-setup.sh) - -TypeScript scripts (e.g., `generate-news-enhanced.ts`) access the riksdag-regering MCP server through the **MCP gateway** using HTTP: - -```bash -source scripts/mcp-setup.sh -# Sets: MCP_GATEWAY_PORT=8080 (resolved from mcp-config.json gateway.port, -# or MCP_GATEWAY_PORT env, default 8080 for gh-aw v0.69+) -# Sets: MCP_GATEWAY_DOMAIN=host.docker.internal -# Sets: MCP_SERVER_URL=http://host.docker.internal:8080/mcp/riksdag-regering -# Sets: SCB_MCP_SERVER_URL=http://host.docker.internal:8080/mcp/scb -# Sets: WORLD_BANK_MCP_SERVER_URL=http://host.docker.internal:8080/mcp/world-bank -# Sets: MCP_AUTH_TOKEN= -# Sets: MCP_CLIENT_TIMEOUT_MS=90000 - -npx tsx scripts/generate-news-enhanced.ts --types=propositions --languages="en,sv" -``` - -The `mcp-setup.sh` script: -1. Routes through the MCP gateway at `http://$MCP_GATEWAY_DOMAIN:$MCP_GATEWAY_PORT/mcp/` (port resolved dynamically — port `8080` for gh-aw v0.69+, port `80` for legacy gh-aw <0.69) -2. Extracts the gateway API key from `/home/runner/.copilot/mcp-config.json` -3. Sets a 90-second timeout for cold-start tolerance -4. Configures gateway URLs for all three MCP data servers (riksdag, SCB, World Bank) - -> **IMPORTANT**: Tool names are always bare (e.g., `get_motioner`, not `riksdag-regering--get_motioner`). The gateway routes by URL path, not by tool name prefix. - -> **The AI agent does NOT need to run `mcp-setup.sh`** — tool calls go through the gateway automatically. `mcp-setup.sh` is only for TypeScript scripts that make HTTP requests to the MCP server. 
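For quick shell-side checks through the same gateway, the call reduces to a plain JSON-RPC POST. A minimal sketch, assuming `scripts/mcp-setup.sh` has already been sourced; `build_mcp_request` is a hypothetical helper, not something shipped in the repository:

```shell
#!/usr/bin/env bash
# Build a JSON-RPC 2.0 "tools/call" envelope for the MCP gateway.
# Hypothetical helper that illustrates the payload shape, nothing more.
build_mcp_request() {
  local tool="$1" args_json="$2" id="${3:-1}"
  printf '{"jsonrpc":"2.0","id":%s,"method":"tools/call","params":{"name":"%s","arguments":%s}}' \
    "$id" "$tool" "$args_json"
}

# Bare tool name, no server prefix: the gateway routes by URL path.
REQ=$(build_mcp_request get_propositioner '{"rm":"2025/26","limit":5}')
echo "$REQ"

# With the environment from mcp-setup.sh, the real call would then be:
#   curl -sf --max-time 90 -X POST \
#     -H "Content-Type: application/json" \
#     -H "Authorization: $MCP_AUTH_TOKEN" \
#     -d "$REQ" "$MCP_SERVER_URL"
```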
- -### MCP Server Availability - -| Server | Type | Startup | Cold Start Risk | Retry Strategy | -|--------|------|---------|-----------------|----------------| -| **riksdag-regering** | Remote HTTP | ~10-120s | HIGH (Render.com free tier) | Layer 1 pre-warm + Layer 2 health gate | -| **scb** | Local container | ~5-15s | LOW (starts with gateway) | Retry 2× with 10s wait | -| **world-bank** | Local container | ~5-15s | LOW (starts with gateway) | Retry 2× with 10s wait | -| **GitHub** | Built-in | Instant | NONE | No retry needed | -| **Safe Outputs** | Built-in | Instant | NONE | No retry needed | - -### Error Handling by Server - -**riksdag-regering errors:** -- `"unknown tool"` → Server cold-starting (tools not registered yet), OR wrong tool name → verify bare tool names (no prefix), retry -- `"connection timeout"` → Server sleeping on Render.com → wait 30s, retry -- `"0 tools registered"` → Server HTTP is up but MCP not initialized → retry with POST - -**scb/world-bank errors:** -- `"connection refused"` → Container not started yet → wait 10s, retry -- `"tool execution error"` → API-level error (bad query) → fix parameters, do NOT retry blindly - ---- - -## MANDATORY MCP Health Gate (copy into every workflow) - -All workflows MUST verify MCP connectivity before proceeding with content or translation work. - -### Layer 1: CI-Level Pre-Warm Step (YAML frontmatter `steps:`) - -**Every workflow MUST include this CI step in its YAML frontmatter `steps:` section**, after `Install dependencies` and before the `engine:` block. This runs as an actual GitHub Actions step BEFORE the MCP gateway starts, giving the Render.com server time to wake up. - -**CRITICAL**: The pre-warm MUST use MCP protocol POST requests (not simple GET). A GET only wakes the HTTP server but does NOT initialize the MCP tool registry. The MCP gateway needs tools to be registered — a warm HTTP server with 0 registered tools causes "unknown tool" errors. 
- -Additionally, because there is a 3–10 minute gap between the pre-warm step and the MCP gateway start (due to repo-memory cloning, Docker image pulls, safe-outputs setup, etc.), the pre-warm starts a **background keep-alive pinger** that sends MCP `tools/list` POST requests every 30 seconds. This prevents Render.com from putting the server back to sleep before the gateway connects. - -```yaml - - name: Pre-warm MCP server (Render.com cold start mitigation) - run: | - echo "🔥 Pre-warming riksdag-regering MCP server via MCP protocol..." - MCP_URL="https://riksdag-regering-ai.onrender.com/mcp" - WARM=false - for i in 1 2 3 4 5 6; do - RESP=$(curl -sf --max-time 30 -X POST \ - -H "Content-Type: application/json" \ - -d '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}' \ - "$MCP_URL" 2>/dev/null) || true - if echo "$RESP" | grep -q '"tools"'; then - TOOL_COUNT=$(echo "$RESP" | grep -o '"name"' | wc -l) - echo "✅ MCP server responded on attempt $i with $TOOL_COUNT tools registered" - WARM=true - break - fi - echo "⏳ Attempt $i/6 — server may be cold-starting, waiting 20s..." - sleep 20 - done - if [ "$WARM" = "false" ]; then - echo "⚠️ MCP server did not respond after 6 attempts — agent will retry via in-prompt health gate" - fi - echo "🔄 Starting background keep-alive pinger (every 30s, max 15 min)..." - KEEP_ALIVE_END=$(($(date +%s) + 900)) - while [ "$(date +%s)" -lt "$KEEP_ALIVE_END" ]; do - curl -sf --max-time 10 -X POST \ - -H "Content-Type: application/json" \ - -d '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}' \ - "$MCP_URL" -o /dev/null 2>/dev/null || true - sleep 30 - done & - KEEP_ALIVE_PID=$! - echo "Keep-alive PID: $KEEP_ALIVE_PID (auto-exits after 15 min)" -``` - -> **Why MCP protocol POST instead of GET?** The previous approach used `curl -sf "https://riksdag-regering-ai.onrender.com/mcp"` (GET). The GET request only returns a health check JSON — it does NOT trigger MCP session initialization or tool registry loading. 
The MCP gateway sends POST requests with `tools/list` to discover tools. If the tool registry hasn't been initialized by a prior POST request, the gateway receives 0 tools and reports "unknown tool" errors. Using POST with `tools/list` forces full MCP initialization. -> -> **Why a background keep-alive?** The CI `steps:` section runs early in the agent job. Between the pre-warm and the MCP gateway start, there are 3–10 minutes of setup (repo-memory cloning, Docker image pulls, safe-outputs config, etc.). On Render.com free tier, inactive servers go to sleep after ~5 minutes. The keep-alive pinger sends `tools/list` every 30s to prevent this. It auto-exits after 15 minutes to avoid resource waste. -> -> **Error scenarios**: If the MCP server is completely down (not just cold), the POST requests will fail silently (`|| true`). The pre-warm reports "did not respond after 6 attempts" and the agent falls back to Layer 2 (in-prompt health gate). If Layer 2 also fails, the agent must call `safeoutputs___noop()` with a detailed error message — never let the workflow timeout without producing a safe output. - -### Layer 2: In-Prompt Health Gate (markdown body) - -**Pre-warm the riksdag-regering MCP server** (backup — in case CI pre-warm was insufficient): -```bash -echo "🔥 Pre-warming riksdag-regering MCP server (Render.com cold start mitigation)..." -curl -sf --max-time 15 -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}' "https://riksdag-regering-ai.onrender.com/mcp" -o /dev/null 2>/dev/null || echo "Pre-warm ping sent (server may be waking up)" -sleep 10 -``` - -> **CRITICAL**: Use POST with `tools/list` — a simple GET only wakes the HTTP server but does NOT initialize the MCP tool registry. See Layer 1 explanation above. - -Then call the health gate: - -1. Call `get_sync_status({})` — retry up to **3×** (20s wait between each — the server is already warm from the step-level pre-warm) -2. 
If you get **"unknown tool"** or **"0 tools registered"** errors, this means the MCP server is still initializing after a Render.com cold start. **Keep retrying — do NOT noop early.** -3. After 3 failures → `safeoutputs___noop({"message": "MCP server unavailable after 3 attempts — Render.com cold start exceeded timeout"})` — do NOT proceed -4. **ALL content MUST come from live MCP data.** Never use cached articles, stale data, or AI-fabricated content. -5. **For translation workflows**: MCP is required for accurate political term translation and cross-referencing. Do NOT proceed without MCP. -6. **NEVER let the workflow timeout** without calling a safe output. If MCP is down, noop after the 3 retries instead of wasting the full timeout. -7. **⏱️ Do NOT spend more than 2 minutes on MCP warmup** — proceed to analysis immediately once `get_sync_status` succeeds. - -> 🚨 **UNIVERSAL SAFE OUTPUT RULES — ALL WORKFLOWS MUST FOLLOW:** -> -> 1. **Call `safeoutputs___create_pull_request` as EARLY as possible** — the moment you have committed files. The safeoutputs MCP session has a finite lifetime. Successful runs call it by minute ~18. Failed runs that delayed past minute 30 got "session not found" and lost all work. -> 2. **🫀 Analysis-only Heartbeat PR — MANDATORY first call by minute 18 (`max: 2+`)** — the Streamable-HTTP safeoutputs MCP session has a ~30–35 min idle lifetime (observed in PR #1835, run #24672037751, and **run #24722758908, 2026-04-21, `news-realtime-monitor`**). Every `safeoutputs___create_pull_request` call **resets the session idle timer**. The heartbeat PR MUST contain **only analysis artifacts and/or script-generated stubs that already exist on disk** — it MUST NOT wait for article-writing to complete. Article writing (especially multi-thousand-word EN + SV pieces) routinely takes 8–15 minutes and will blow the idle timer if done before the heartbeat. 
Pattern: -> - **PR #1 (analysis-only heartbeat, minute 13–18)**: Commit whatever files exist in `analysis/daily/$ARTICLE_DATE/.../` from Pass 1 (plus any script-generated article skeletons). Title: `🫀 Heartbeat - {workflow} - {date}`. Mark `draft: true` if the workflow supports it. After the call, run the **post-heartbeat rebase** below so subsequent commits don't stack onto the frozen patch AND don't duplicate heartbeat content in PR #2. -> - **🔁 Post-heartbeat rebase (MANDATORY — prevents "double PR with same content")**: The heartbeat PR is opened by the `safe_outputs` job AFTER the agent ends, so the heartbeat is NOT yet merged when you continue. Do NOT just `git checkout main` — that leaves your local main at the pre-heartbeat base, and PR #2 will re-submit every Pass-1 file (identical to heartbeat) plus Pass-2 deltas, causing the "two PRs with overlapping content" problem reported in PR #1907 (committee-reports 2026-04-21). Run instead: -> ```bash -> git checkout main -> # Wait briefly for heartbeat auto-merge, then pull it down so PR #2's diff is Pass-2-only -> sleep 60 -> git fetch origin main && git reset --hard origin/main || git checkout main -> ``` -> If the heartbeat has not yet auto-merged (e.g. required checks still pending), the `reset --hard` is a no-op and behaviour falls back to the previous `git checkout main` pattern. Either way, PR #2 MUST stage ONLY files that differ from origin/main at commit time — check with `git diff origin/main --stat` before calling `safeoutputs___create_pull_request`. -> - **PR #2 (full content, minute 35–43)**: Commit finalized articles + Pass 2 improvements on a fresh branch. Final title. This supersedes the heartbeat and should contain ONLY Pass-2 deltas — not a re-submission of Pass-1 heartbeat content. -> - `news-committee-reports` (minute 13–15 heartbeat) and `news-translate` (`max: 5`, first batch at minute ~18) have proven this pattern with zero session expiries. 
`news-realtime-monitor` run #24722758908 FAILED because it tried to include full articles in PR #1 — article writing took minutes 24→34, and the first safeoutputs call at minute 34 got `session not found`. PR #1907 (committee-reports 2026-04-21) needed manual conflict resolution because the post-heartbeat rebase step was missing. -> 3. **NEVER wait for articles before the first safeoutputs call.** If the workflow needs article content in a PR, that's PR #2. PR #1 exists solely to refresh the session and preserve whatever analysis has been completed. A heartbeat PR with 3 analysis files is infinitely more valuable than 10 articles lost to session expiry. -> 4. **NEVER call `safeoutputs___noop` when artifacts exist.** Noop means "I did nothing." If you created files, you DID something and MUST create a PR. Partial work in a PR is infinitely better than lost work via noop. -> 5. **At HARD DEADLINE**: If ANY files were created → `safeoutputs___create_pull_request`. ONLY noop if truly ZERO files were created. -> 6. **Architecture reminder**: `safeoutputs___create_pull_request` records your intent. A separate `safe_outputs` job executes the PR creation AFTER the agent job ends. If the MCP session expires before you record the intent, the `safe_outputs` job is SKIPPED and all work is lost. **Single-PR workflows (`max: 1`) that delay their only call past minute 30 WILL lose all work.** - -### Layer 3: MCP Gateway Diagnostics (run when tools fail) - -If MCP tools return errors ("unknown tool", "0 tools registered", connection timeouts), run these diagnostics BEFORE calling noop: - -```bash -echo "🔍 MCP Gateway Diagnostics" -date -u '+%Y-%m-%dT%H:%M:%SZ' -echo "═══════════════════════════════════════════" - -# 1. 
Test direct MCP server connectivity (bypasses gateway) -# NOTE: curl output is captured first so curl's exit status, not head's, drives the || fallback -echo "📡 Direct MCP server test:" -RESP=$(curl -sf --max-time 15 -X POST \ - -H "Content-Type: application/json" \ - -d '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}' \ - "https://riksdag-regering-ai.onrender.com/mcp" 2>/dev/null) && printf '%s\n' "$RESP" | head -c 200 || echo "UNREACHABLE" - -# 2. Test MCP gateway connectivity -echo "" -echo "🔌 MCP Gateway test (via host.docker.internal):" -source scripts/mcp-setup.sh 2>/dev/null -echo "MCP_SERVER_URL=$MCP_SERVER_URL" -GRESP=$(curl -sf --max-time 10 -X POST -H "Content-Type: application/json" -H "Authorization: $MCP_AUTH_TOKEN" -d '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}' "$MCP_SERVER_URL" 2>/dev/null) && printf '%s\n' "$GRESP" | head -c 200 || echo "GATEWAY UNREACHABLE" - -# 3. DNS resolution from within sandbox -echo "" -echo "🌐 DNS resolution:" -for d in riksdag-regering-ai.onrender.com api.scb.se api.worldbank.org data.riksdagen.se; do - nslookup "$d" 2>/dev/null | grep -A1 "Name:" | tail -1 | grep . || echo "$d: DNS FAILED" -done - -echo "═══════════════════════════════════════════" -``` - -> **When to run diagnostics**: Only run this bash block if `get_sync_status()` fails after 3+ retries. The diagnostics output helps the next workflow iteration diagnose whether the issue is DNS, firewall, MCP server, or gateway configuration. Include the diagnostics output in the noop message. - -## Standardised PR Description Template (copy into every workflow) - -> **All pull requests created by agentic workflows MUST have descriptive, structured PR bodies.** Use this template pattern for consistent, informative PR descriptions. 
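As a sketch of how a workflow might assemble such a structured title and body in shell before the safe-output call (every variable value below is illustrative, not a required naming scheme):

```shell
#!/usr/bin/env bash
# Assemble a structured PR title and body from run variables (illustrative values).
ARTICLE_DATE="2026-04-21"
ARTICLE_TYPE="Propositions"
ARTICLE_COUNT=2
LANGS="en, sv"

PR_TITLE="📰 ${ARTICLE_TYPE} - ${ARTICLE_DATE}"
PR_BODY=$(printf '## Summary\n\nGenerated %s %s articles.\n\n### Articles\n- Languages: %s\n- Article count: %s\n- Article type: %s\n' \
  "$ARTICLE_COUNT" "$ARTICLE_TYPE" "$LANGS" "$ARTICLE_COUNT" "$ARTICLE_TYPE")

echo "$PR_TITLE"
printf '%s\n' "$PR_BODY"
```

The resulting strings map directly onto the `title` and `body` fields of `safeoutputs___create_pull_request`.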
- -### Content Workflow PR Template -``` -safeoutputs___create_pull_request({ - "title": "📰 {Article Type} - {YYYY-MM-DD}", - "body": "## Summary\n\n{Brief description of what was generated}\n\n### Articles\n- Languages: {language list}\n- Article count: {count}\n- Article type: {type}\n\n### Analysis\n- Analysis files: {count} files in `analysis/daily/{date}/{type}/`\n- Quality gate: {PASSED/FAILED}\n\n### Validation\n- HTMLHint: ✅ passed\n- File ownership: ✅ validated\n- Playwright: ✅ validated\n\n### Source\n- Workflow: `{workflow-name}`\n- MCP data freshness: {sync status}\n- Riksmöte: {rm value}", - "labels": ["agentic-news", "analysis-data"] -}) -``` - -### Translation Workflow PR Template -``` -safeoutputs___create_pull_request({ - "title": "🌐 Article Translations - {YYYY-MM-DD}", - "body": "## Summary\n\nTranslated {article_type} articles into {count} languages.\n\n### Translations\n- Source language: {source_lang}\n- Target languages: {lang_list}\n- Articles translated: {count}\n- Translation method: TypeScript baseline + AI body translation\n\n### Quality\n- All section headings translated: ✅\n- All body paragraphs translated: ✅\n- No English fallback text: ✅\n- RTL layout verified (ar, he): ✅\n- data-translate markers removed: ✅\n\n### Analysis Updates\n- Analysis files updated: {count} (if any)\n\n### Source\n- Workflow: `news-translate`\n- Source articles: `news/{date}-*-en.html`", - "labels": ["agentic-news", "translation"] -}) -``` - -### Analysis-Only PR Template -``` -safeoutputs___create_pull_request({ - "title": "📊 Analysis Only - {Article Type} - {YYYY-MM-DD}", - "body": "## Summary\n\nAnalysis artifacts generated (no new articles — articles already exist for this date).\n\n### Analysis\n- Analysis files: {count} files\n- Location: `analysis/daily/{date}/{type}/`\n- Quality gate: {PASSED/FAILED}\n\n### Source\n- Workflow: `{workflow-name}`\n- Reason: Articles already existed; analysis-only run", - "labels": ["agentic-news", "analysis-only", 
"{type}"] -}) -``` - -> **Key rules for PR bodies:** -> - ALWAYS include a `## Summary` section with a brief human-readable description -> - ALWAYS list the languages, article count, and article type -> - ALWAYS include validation results (HTMLHint, file ownership, Playwright) -> - ALWAYS include the source workflow name -> - NEVER leave the PR body empty or use a single-line description -> - PR titles MUST include an emoji prefix, article type, and date - -## Standardised Deduplication Check (copy into every content workflow) - -> 🚨 **CRITICAL**: The deduplication check controls **article generation only** — it NEVER skips the deep political analysis phase. Analysis MUST always run regardless of whether articles already exist. When articles exist, the workflow still performs full 22+ minute analysis (Pass 1 + Pass 2 improvement) and commits analysis artifacts. Only the HTML article generation step is skipped (unless `force_generation=true`). - -```bash -# Check if articles for today already exist (controls article GENERATION only, NOT analysis) -ls "news/$ARTICLE_DATE-$ARTICLE_TYPE-en.html" 2>/dev/null | wc -l > /tmp/existing_articles.txt -read EXISTING < /tmp/existing_articles.txt -if [ "$EXISTING" -gt 0 ] && [ "$FORCE_GENERATION" != "true" ]; then - echo "📋 Articles for $ARTICLE_DATE/$ARTICLE_TYPE already exist — article generation will be skipped (analysis still runs)" - SKIP_ARTICLE_GENERATION=true - echo "SKIP_ARTICLE_GENERATION=true" >> "$GITHUB_ENV" -fi -# NOTE: Do NOT exit here or call safeoutputs___noop — analysis phase MUST still execute -# Later article-generation steps MUST gate on: if [ "$SKIP_ARTICLE_GENERATION" != "true" ]; then ... -``` - -> **🚨 NEVER call `safeoutputs___noop` because articles already exist.** The only valid reason for noop is when the MCP server is completely unreachable after 3 retry attempts. 
If articles exist but analysis produces new artifacts, commit those artifacts via `safeoutputs___create_pull_request` with `analysis-only` label. - -## 🚨 MANDATORY: AI-Driven Analysis Using Methods & Templates (copy into every analysis workflow) - -> **NON-NEGOTIABLE**: The AI agent's PRIMARY job is to create real analysis for every piece of data or document downloaded from MCP. Scripts generate stubs — the AI MUST replace them with full template-compliant analysis. This is NOT optional. - -> **🚨 ANALYSIS RUNS EVERY TIME — NO EXCEPTIONS**: The deep political analysis phase executes on EVERY workflow run, regardless of whether articles already exist, regardless of whether another workflow ran recently, regardless of any other condition. The ONLY reason to skip analysis is if the MCP server is completely unreachable after 3 retry attempts. "Another job ran 18 minutes ago" is NOT a valid reason to skip analysis. Code changes, data updates, and new parliamentary activity happen continuously — every run MUST produce fresh analysis. 
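The gating rule (analysis always runs; only HTML article generation may be skipped) can be sketched in a few lines of shell. `run_analysis` and `generate_articles` are hypothetical stand-ins for the real workflow phases:

```shell
#!/usr/bin/env bash
# Hypothetical phase stand-ins; the real workflow runs its actual steps here.
run_analysis()      { echo "analysis: executed"; }
generate_articles() { echo "articles: generated"; }

SKIP_ARTICLE_GENERATION="${SKIP_ARTICLE_GENERATION:-false}"

run_analysis   # ALWAYS runs, never gated on existing articles

if [ "$SKIP_ARTICLE_GENERATION" != "true" ]; then
  generate_articles
else
  echo "articles: skipped (already exist); analysis artifacts still committed"
fi
```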
- -### ⏱️ Phase Time Budget (60-minute workflow — MUST use at least 45 minutes) - -> Measured response times (hot MCP server): -> - riksdag-regering list tools (get_betankanden, get_motioner, etc.): **200–600ms** per call -> - riksdag-regering enrichment (get_dokument_innehall): **400–900ms** per call -> - search_regering (government search): **2–4s** per call (slowest Riksdag tool) -> - World Bank REST API (direct): **0.7–2.5s** per call -> - SCB MCP query_table: **1–3s** per call -> - **Cold start** (Render.com free tier): adds **0–120s** one-time on first call -> -> Typical data retrieval: 7 parallel list calls (~1s) + 20 enrichment calls in batches of 3 (~9s) + cold start overhead = **~15s hot / ~2min cold** - -| Phase | Minutes | What happens | Hard rule | -|-------|---------|-------------|-----------| -| **Phase 0: MCP warmup** | 0–2 | MCP pre-warm, get_sync_status, session init | Max 2 min; noop after 3 failures | -| **Phase 1: Data retrieval** | 2–8 | Scripts download data + document enrichment | **Must complete by minute 8** | -| **Phase 2a: Analysis Pass 1** | 8–23 | Read templates, create analysis for every doc | **Min 15 min first pass** | -| **Phase 2b: Analysis Pass 2 (Iterative Improvement)** | 23–30 | Read ALL analysis, improve depth/evidence/diagrams | **Min 7 min improvement pass — MANDATORY** | -| **Phase 3a: Article Generation Pass 1** | 30–40 | Generate EN/SV HTML articles from analysis | Script generates skeleton, AI writes all content | -| **Phase 3b: Article Improvement Pass 2** | 40–48 | Read ALL articles completely, improve every section | **Min 8 min — read and improve ALL content** | -| **Phase 4: Safe output** | 48–53 | Call safeoutputs___create_pull_request | **HARD deadline minute 53** | -| **Phase 5: Translate dispatch** | 53–55 | Dispatch translation workflow if applicable | Only after PR created | - -> 🔴 **TOTAL TIME: 60 MINUTES — USE ALL OF IT**. Workflows that complete in under 45 minutes are producing LOW QUALITY output. 
The AI agent MUST spend at least 45 minutes doing real work — iterating and improving analysis and articles. Completing early with shallow content is NEVER acceptable. -> -> 🚨 **Phase ordering is STRICT**: Data retrieval (Phase 1) MUST complete before analysis (Phase 2a) begins. Analysis MUST complete before article generation (Phase 3a) begins. Never start generating articles while still fetching data or performing analysis. -> -> 🔴 **ITERATIVE IMPROVEMENT IS MANDATORY — NOT OPTIONAL**: -> - **Phase 2b**: After creating analysis in Pass 1, the AI MUST `cat` and read EVERY analysis file completely, then improve each file: add missing evidence, strengthen Mermaid diagrams, deepen stakeholder perspectives, add quantitative data, cite more dok_ids. This is NOT skippable. -> - **Phase 3b**: After generating articles in Pass 1, the AI MUST read EVERY article HTML file completely, then improve: strengthen ledes, deepen "Why It Matters" sections, add SWOT evidence, improve stakeholder analysis, verify all AI_MUST_REPLACE markers are eliminated, enrich with World Bank/SCB data. This is NOT skippable. -> - **A single pass is NEVER sufficient.** Every workflow MUST complete at least 2 full iterations. -> -> **MCP call budget per phase**: -> - Phase 1: ~30 MCP calls (7 list + up to 20 enrichment + 3 retry buffer) -> - Phase 2a: 0 MCP calls (analysis uses downloaded data only) -> - Phase 2b: Optional MCP calls for additional context (World Bank, SCB, voteringar) -> - Phase 3a: 0 MCP calls (article generation uses analysis output only) -> - Phase 3b: Optional MCP calls for enrichment (World Bank economic data, vote context) -> -> **If Phase 1 exceeds 8 minutes** (e.g., cold start + slow responses): Reduce enrichment batch size to 2 documents per type instead of 5. Still proceed to Phase 2a with whatever data is available. - -````markdown -### AI-Driven Analysis Protocol - -> 🚨 **ABSOLUTE RULE**: Every agentic workflow MUST: -> 1. 
**Download data** from MCP (scripts try first; if they fail or download 0, agent uses direct MCP tool calls and fixes scripts) -> 2. **Read ALL 6 methodology guides** before doing any analysis -> 3. **Read ALL 8 analysis templates** before writing any analysis files -> 4. **Spend AT LEAST 22 MINUTES on analysis** (15 min Pass 1 + 7 min Pass 2 improvement) — this is a hard minimum, not a suggestion. Analysis completed in less time is REJECTED. -> 5. **Create analysis for EVERY document/data piece** following the templates exactly -> 6. **ITERATE: Read ALL analysis back completely and IMPROVE** — the first pass is never good enough -> 7. **Pass the quality gate** (see below) — every analysis file must contain Mermaid diagrams, evidence tables, and dok_id citations -> 8. **Commit both data AND analysis** — never one without the other -> 9. **NEVER skip analysis** because articles already exist or another run completed recently — analysis is the PRIMARY output -> 10. **NEVER finish early** — use ALL allocated time for iteration and improvement - -#### ⏱️ Mandatory Minimum Analysis Time: 22 Minutes (2 Passes) - -> 🔴 **HARD RULE**: The AI agent MUST spend **at least 22 minutes** on analysis work across **2 mandatory passes**: -> -> **Pass 1 (15 minutes minimum) — Initial Analysis Creation:** -> - Reading ALL 6 methodology guides (not skimming — reading fully) -> - Reading ALL 8 analysis templates (not skimming — reading fully) -> - Creating analysis for EVERY document following templates EXACTLY -> - Including color-coded Mermaid diagrams with REAL data in every analysis file -> - Filling ALL evidence tables with dok_id, confidence, impact columns -> -> **Pass 2 (7 minutes minimum) — Mandatory Iterative Improvement:** -> - `cat` and read EVERY analysis file created in Pass 1 — completely, not skimming -> - For EACH analysis file, identify gaps: missing stakeholder perspectives, weak evidence, generic text, missing Mermaid diagrams, missing quantitative data -> - REWRITE 
sections that are shallow, generic, or lack specific evidence -> - ADD cross-references between analysis files (e.g., committee report analysis cites related proposition) -> - ADD World Bank/SCB economic context where policy domain allows -> - ADD forward indicators with specific dates and trigger conditions -> - VERIFY every SWOT entry cites specific dok_id, vote counts, or named politicians -> - The improved version MUST be meaningfully better than Pass 1 — not cosmetic edits -> -> **Why 2 passes?** Single-pass analysis consistently produces shallow, generic output that lacks the depth required for publication-quality political intelligence. The improvement pass forces the AI to critically review its own work and address gaps. PR #1452 and subsequent reviews demonstrated that single-pass analysis produces unacceptable results. -> -> **Enforcement**: Before committing, run the quality gate check below. If it fails, you MUST spend more time improving the analysis until it passes. NEVER proceed to article generation with analysis that hasn't been through the improvement pass. - -#### ⏱️🚨 ENFORCED Minimum Analysis Time Gate (BLOCKING) - -> 🔴 **HARD ENFORCEMENT**: The agent MUST NOT proceed to article generation until the minimum analysis time has elapsed. This bash gate MUST be run BEFORE article generation and MUST block if elapsed analysis time is insufficient. This prevents the systemic early-completion pattern where agents finish in 15-20 minutes of a 45-minute allocation. - -```bash -# === MINIMUM ANALYSIS TIME GATE === -# Run this AFTER completing analysis, BEFORE starting article generation. -# This gate BLOCKS if the agent has not spent enough time on analysis. -if [ -f /tmp/gh-aw/agent/timing.env ]; then - . 
/tmp/gh-aw/agent/timing.env -fi -if [ -z "$START_TIME" ]; then - if [ -f /tmp/start_time.txt ]; then - read START_TIME < /tmp/start_time.txt - else - date +%s > /tmp/start_time.txt - read START_TIME < /tmp/start_time.txt - fi -fi -date +%s > /tmp/now_time.txt -read AW_NOW < /tmp/now_time.txt -ELAPSED_MIN=$(( (AW_NOW - START_TIME) / 60 )) -echo "⏱️ Elapsed time: $ELAPSED_MIN minutes" - -# MINIMUM_ANALYSIS_MINUTES defaults to 22 (15 min Pass 1 + 7 min Pass 2) -# Workflows with different time budgets can override via environment variable -# (e.g. 30-minute workflows set MINIMUM_ANALYSIS_MINUTES=14) -if [ -z "$MINIMUM_ANALYSIS_MINUTES" ]; then - MINIMUM_ANALYSIS_MINUTES=22 -fi -if [ "$ELAPSED_MIN" -lt "$MINIMUM_ANALYSIS_MINUTES" ]; then - echo "🚨🚨🚨 MINIMUM TIME GATE FAILED 🚨🚨🚨" - echo "❌ Only $ELAPSED_MIN minutes elapsed — MINIMUM $MINIMUM_ANALYSIS_MINUTES minutes required for 2-pass analysis" - echo "" - echo "You MUST go back and:" - echo " 1. Read ALL analysis files back completely (Pass 2)" - echo " 2. Improve every section: add missing evidence, deepen SWOT entries, add Mermaid diagrams" - echo " 3. Rewrite script-generated stubs with real AI analysis" - echo " 4. Add cross-references between analysis files" - echo " 5. Verify every claim has confidence labels and dok_id citations" - echo "" - echo "DO NOT proceed to article generation. Continue analysis work." - echo "RE-RUN this gate after completing more analysis work." - if (return 0 2>/dev/null); then - return 1 - else - exit 1 - fi -fi -``` - -> **Why this gate exists**: PR #1794 and systemic monitoring showed ALL news workflows completing in 13-22 minutes of their 45-minute allocation. This means agents skip the mandatory 2-pass analysis, leave script-generated stubs unenriched, and produce articles missing required components (SWOT tables, Mermaid diagrams, Risk matrices). The minimum time gate forces the agent to spend sufficient time on analysis before proceeding. 
Workflows with shorter time budgets (e.g. 30 minutes) can set `MINIMUM_ANALYSIS_MINUTES=14` before running this gate. - -#### ⏱️🚨 ENFORCED Analysis Enrichment Verification Gate (BLOCKING) - -> 🔴 **HARD ENFORCEMENT**: The agent MUST NOT proceed to article generation if script-generated analysis stubs remain unenriched. This gate checks for the legacy `pre-article-analysis script` marker (present in historical stub analysis files) and blocks until ALL analysis artifacts have been replaced with AI-enriched analysis. The factual `data-download-manifest.md` produced by the current download script is intentionally excluded — its `download-parliamentary-data script` marker is expected and is NOT a stub. - -```bash -# === ANALYSIS ENRICHMENT VERIFICATION GATE === -# Run this AFTER completing analysis, BEFORE starting article generation. -# Blocks if script-generated stubs remain unenriched. -if [ -z "$ARTICLE_DATE" ]; then - date -u +%Y-%m-%d > /tmp/today.txt - read ARTICLE_DATE < /tmp/today.txt -fi - -if [ -z "$ANALYSIS_SUBFOLDER" ]; then - echo "❌ ANALYSIS_SUBFOLDER is not set. Refusing to guess from ARTICLE_TYPE because workflow folder names may differ from article types." - echo "Set ANALYSIS_SUBFOLDER to the canonical analysis folder before running this gate." - exit 1 -fi - -ANALYSIS_DIR="analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER" -if [ ! -d "$ANALYSIS_DIR" ]; then - echo "❌ Analysis directory not found: $ANALYSIS_DIR" - echo "Ensure ANALYSIS_SUBFOLDER points to the correct canonical analysis folder for this workflow." - exit 1 -fi - -UNENRICHED=0 -ENRICHED=0 -echo "=== 🔍 Analysis Enrichment Verification Gate ===" -for f in "$ANALYSIS_DIR"/*.md; do - [ ! 
-f "$f" ] && continue - # Skip the factual data-download-manifest.md — its script marker is expected and not a stub - # AWF-safe: use `$f` path-pattern tests instead of $(basename "$f") - case "$f" in - */data-download-manifest.md) echo "⏭️ SKIP (factual manifest): $f"; continue ;; - esac - # Match the legacy `pre-article-analysis script` marker present in historical stub files. - # The current download script writes only the factual `data-download-manifest.md` (skipped above), - # so any file still carrying this legacy marker is an unenriched stub. - grep -c "pre-article-analysis script" "$f" > /tmp/is_script.txt 2>/dev/null || echo 0 > /tmp/is_script.txt - read IS_SCRIPT < /tmp/is_script.txt - FNAME="$f" - if [ "$IS_SCRIPT" -gt 0 ]; then - echo "❌ UNENRICHED: $FNAME — still has legacy stub marker" - UNENRICHED=$((UNENRICHED + 1)) - else - echo "✅ ENRICHED: $FNAME" - ENRICHED=$((ENRICHED + 1)) - fi -done - -# Fail if no analysis files found at all (missing outputs cannot be "approved") -if [ "$UNENRICHED" -eq 0 ] && [ "$ENRICHED" -eq 0 ]; then - echo "" - echo "🚨🚨🚨 ENRICHMENT GATE FAILED 🚨🚨🚨" - echo "❌ No .md analysis files found in $ANALYSIS_DIR — cannot verify enrichment" - exit 1 -fi - -if [ "$UNENRICHED" -gt 0 ]; then - echo "" - echo "🚨🚨🚨 ENRICHMENT GATE FAILED 🚨🚨🚨" - echo "❌ $UNENRICHED analysis files still contain script-generated stubs" - echo "" - echo "You MUST enrich these files before proceeding to article generation:" - echo " 1. Read the corresponding template in analysis/templates/" - echo " 2. Read the script-generated data (it contains useful metadata)" - echo " 3. REPLACE the entire file with AI-driven deep political intelligence" - echo " 4. Remove the 'pre-article-analysis script' marker (legacy stub marker)" - echo " 5. Add Mermaid diagrams, evidence tables, confidence labels" - echo "" - echo "DO NOT proceed to article generation until ALL files are enriched." 
- exit 1 -else - echo "" - echo "✅ All $ENRICHED analysis files are AI-enriched — proceed to article generation" -fi -``` - -> **Why this gate exists**: PR #1794 showed that the agent enriched only 2 of 9 synthesis files (synthesis-summary.md and risk-assessment.md) while leaving 7 files as script stubs. The SWOT analysis file was completely empty. Articles generated from unenriched analysis are shallow and miss required components. - -#### ⏱️ Mandatory Minimum Article Time: 18 Minutes (2 Passes) - -> 🔴 **HARD RULE**: The AI agent MUST spend **at least 18 minutes** on article generation across **2 mandatory passes**: -> -> **Pass 1 (10 minutes minimum) — Initial Article Generation:** -> - Run `generate-news-enhanced.ts` to create HTML skeleton -> - Read ALL pre-computed analysis files (SWOT, stakeholders, risk, synthesis) -> - Write ALL article content sections: analytical lede, per-document analysis, winners/losers, key takeaways, strategic context -> - Replace EVERY `AI_MUST_REPLACE` marker with genuine analysis -> - Include World Bank/SCB data where relevant -> -> **Pass 2 (8 minutes minimum) — Mandatory Article Improvement:** -> - `cat` and read EVERY generated article HTML file COMPLETELY — not skimming -> - For EACH article, critically evaluate: -> - Is the lede specific enough? Does it name actors, cite actions, explain significance? -> - Are "Why It Matters" sections UNIQUE per document or generic boilerplate? -> - Does the SWOT analysis cite specific evidence with dok_ids? -> - Are stakeholder perspectives comprehensive (6+ groups)? -> - Are forward indicators specific with dates and trigger conditions? -> - Is there quantitative economic data from World Bank/SCB? -> - Are all `AI_MUST_REPLACE` markers eliminated? 
-> - REWRITE any section that is shallow, generic, or lacks evidence -> - ADD deeper political context: coalition dynamics, Election 2026 implications, opposition strategy -> - ADD additional dok_id citations and named politician references -> - VERIFY word count exceeds 1000 words per article -> - The improved version MUST be meaningfully better than Pass 1 -> -> **Why 2 passes?** Single-pass articles consistently read like code-generated document lists, not political intelligence. The improvement pass transforms shallow content into publication-quality journalism. Articles that feel like automated lists are REJECTED. -> -> **Enforcement**: Run the Article Quality Gate from §ARTICLE QUALITY MINIMUM STANDARD before committing. If it fails, continue improving until it passes. - -#### 🚨 ENFORCED Article Quality Component Gate (BLOCKING) - -> 🔴 **HARD ENFORCEMENT**: EVERY generated article MUST contain the required visual and analytical components defined in §POLITICAL INTELLIGENCE DEPTH REQUIREMENTS. This gate checks for their presence and BLOCKS commit if any are missing. - -```bash -# === ARTICLE QUALITY COMPONENT GATE === -# Run AFTER article generation and improvement passes, BEFORE committing. -# Checks that every article HTML contains the mandatory visual/analytical components. -if [ -z "$ARTICLE_DATE" ]; then - date -u +%Y-%m-%d > /tmp/today.txt - read ARTICLE_DATE < /tmp/today.txt -fi - -ARTICLE_GATE_PASS=true -ARTICLE_FAIL_COUNT=0 -ARTICLE_CHECKED=0 - -echo "=== 🔍 Article Quality Component Gate ===" - -for ARTICLE_FILE in news/$ARTICLE_DATE-*.html; do - [ ! 
-f "$ARTICLE_FILE" ] && continue - ARTICLE_CHECKED=$((ARTICLE_CHECKED + 1)) - echo "" - echo "--- Checking: $ARTICLE_FILE ---" - FILE_FAILS=0 - - # Check 1: AI_MUST_REPLACE markers (ZERO tolerance) - grep -c 'AI_MUST_REPLACE' "$ARTICLE_FILE" > /tmp/aqg_replace.txt 2>/dev/null || echo 0 > /tmp/aqg_replace.txt - read REPLACE_COUNT < /tmp/aqg_replace.txt - if [ "$REPLACE_COUNT" -gt 0 ]; then - echo " ❌ CRITICAL: $REPLACE_COUNT AI_MUST_REPLACE markers remaining" - FILE_FAILS=$((FILE_FAILS + 1)) - fi - - # Check 2: Word count (minimum 1000) - python3 -c " -import re,sys -with open('$ARTICLE_FILE') as f: - text = re.sub('<[^>]+>', '', f.read()) - words = len(text.split()) - if words < 1000: - print(f' ❌ FAIL: {words} words (minimum 1000)') - sys.exit(1) - else: - print(f' ✅ PASS: {words} words') -" || FILE_FAILS=$((FILE_FAILS + 1)) - - # Check 3: dok_id citations (minimum 5) - grep -coP "(dok_id|HD\d{5}|Prop\.\s*\d{4}|mot\.\s*\d{4}|frs\s*\d{4}|bet\.\s*\d{4})" "$ARTICLE_FILE" > /tmp/aqg_dokid.txt 2>/dev/null || echo 0 > /tmp/aqg_dokid.txt - read DOKID_COUNT < /tmp/aqg_dokid.txt - if [ "$DOKID_COUNT" -lt 5 ]; then - echo " ⚠️ WARNING: Only $DOKID_COUNT dok_id citations (minimum 5 recommended)" - else - echo " ✅ PASS: $DOKID_COUNT dok_id citations" - fi - - # Check 4: Analysis references section - grep -c 'class="analysis-references"' "$ARTICLE_FILE" > /tmp/aqg_refs.txt 2>/dev/null || echo 0 > /tmp/aqg_refs.txt - read REFS_COUNT < /tmp/aqg_refs.txt - if [ "$REFS_COUNT" -eq 0 ]; then - echo " ❌ FAIL: Missing analysis-references section" - FILE_FAILS=$((FILE_FAILS + 1)) - else - echo " ✅ PASS: analysis-references section present" - fi - - # Check 5: Banned content patterns - grep -c "The political landscape remains fluid" "$ARTICLE_FILE" > /tmp/aqg_banned.txt 2>/dev/null || echo 0 > /tmp/aqg_banned.txt - read BANNED_COUNT < /tmp/aqg_banned.txt - if [ "$BANNED_COUNT" -gt 0 ]; then - echo " ❌ FAIL: Contains banned boilerplate text" - FILE_FAILS=$((FILE_FAILS + 1)) - fi - - # 
Check 6: Confidence labels (must have at least some) - grep -coP "(Confidence:\s*(HIGH|MEDIUM|LOW)|\[(HIGH|MEDIUM|LOW)\])" "$ARTICLE_FILE" > /tmp/aqg_conf.txt 2>/dev/null || echo 0 > /tmp/aqg_conf.txt - read CONF_COUNT < /tmp/aqg_conf.txt - if [ "$CONF_COUNT" -eq 0 ]; then - echo " ⚠️ WARNING: No confidence labels found" - else - echo " ✅ PASS: $CONF_COUNT confidence labels" - fi - - # Check 7: Quality score is realistic (not inflated) - grep -oP 'article-quality-score" content="\K\d+' "$ARTICLE_FILE" > /tmp/aqg_score.txt 2>/dev/null || echo "" > /tmp/aqg_score.txt - QUALITY_SCORE="" - read QUALITY_SCORE < /tmp/aqg_score.txt 2>/dev/null || true - if [ -n "$QUALITY_SCORE" ]; then - echo " ℹ️ Self-reported quality score: $QUALITY_SCORE/100" - fi - - if [ "$FILE_FAILS" -gt 0 ]; then - echo " 🚨 ARTICLE FAILED: $FILE_FAILS checks failed" - ARTICLE_GATE_PASS=false - ARTICLE_FAIL_COUNT=$((ARTICLE_FAIL_COUNT + 1)) - else - echo " ✅ ARTICLE PASSED all checks" - fi -done - -echo "" -echo "=== Article Quality Gate Summary ===" -if [ "$ARTICLE_CHECKED" -eq 0 ]; then - echo "❌ No article files found matching news/$ARTICLE_DATE-*.html — cannot verify quality" - exit 1 -elif [ "$ARTICLE_GATE_PASS" = "true" ]; then - echo "✅ All $ARTICLE_CHECKED articles passed quality checks" -else - echo "❌ $ARTICLE_FAIL_COUNT of $ARTICLE_CHECKED article(s) FAILED quality checks" - echo "Fix the issues above and re-run this gate before committing." - exit 1 -fi -``` - -> **Why this gate exists**: PR #1794 showed the article self-reported a quality score of 85/100 despite missing: SWOT tables, Mermaid diagrams, Risk Matrix with L×I scores, and CSS Mindmaps. The quality score meta tag was inflated. This gate verifies actual article content rather than relying on self-assessment. - -#### Step 1: Download Data (scripts + fallback to direct MCP calls) - -Try the script pipeline first. 
**Doc-type workflows** (committee-reports, motions, propositions, interpellations) MUST pass `--doc-type` to write directly to the scoped subdirectory: -```bash -# For doc-type workflows (committee-reports, motions, propositions, interpellations): -source scripts/mcp-setup.sh && npx tsx scripts/download-parliamentary-data.ts --date "$ARTICLE_DATE" --limit 50 --doc-type "$DOC_TYPE" 2>&1 | tee /tmp/pipeline-output.log -# For other workflows (evening-analysis, realtime, week-ahead, etc.) — run without --doc-type, then MOVE (not copy) artifacts: -# source scripts/mcp-setup.sh && npx tsx scripts/download-parliamentary-data.ts --date "$ARTICLE_DATE" --limit 50 2>&1 | tee /tmp/pipeline-output.log -``` - -Check results: -```bash -find analysis/data/ -name "*.json" -type f 2>/dev/null | wc -l > /tmp/data_json_count.txt -read DATA_JSON_COUNT < /tmp/data_json_count.txt -echo "📊 JSON data files: $DATA_JSON_COUNT" -``` - -If `DATA_JSON_COUNT=0`: **the agent MUST diagnose script failures (read error logs, fix code issues, re-run) OR use direct MCP tool calls as fallback.** Save each MCP response as JSON to `analysis/data/documents/{type}/{dok_id}.json`. Never give up on downloading data unless MCP itself is down. - -#### Step 1b: Mandatory Full-Text Document Enrichment - -> 🚨 **ABSOLUTE RULE**: Analysis based only on metadata (title, date, committee code) is LOW confidence at best. The download-parliamentary-data pipeline now enriches top documents automatically inside `downloadAllDocuments(...)` via `client.fetchDocumentDetails(dokId, true)`, but workflows MUST verify enrichment and supplement when needed. 
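The data-depth verification described here can be applied mechanically before any per-document analysis. Below is a minimal sketch, assuming `jq` is available on the runner; the `classify_depth` helper name and the `/tmp` demo file are illustrative, not part of the pipeline, and the `$(...)` command substitution should be swapped for the temp-file idiom used by the gates above if the runtime restricts it:

```shell
# Sketch: classify one document JSON by data depth (FULL-TEXT / SUMMARY / METADATA-ONLY).
# Assumes jq is installed; thresholds follow the >100-char rules in this step.
classify_depth() {
  doc="$1"
  # Longest of fullText / fullContent (0 when the field is absent or null)
  full_len=$(jq -r '[.fullText // "", .fullContent // ""] | map(length) | max' "$doc")
  summary_len=$(jq -r '.summary // "" | length' "$doc")
  if [ "$full_len" -gt 100 ]; then
    echo "FULL-TEXT"          # HIGH / VERY HIGH confidence permitted
  elif [ "$summary_len" -gt 100 ]; then
    echo "SUMMARY"            # MEDIUM confidence ceiling
  else
    echo "METADATA-ONLY"      # LOW confidence; SWOT/risk analysis prohibited
  fi
}

# Demo with an illustrative metadata-only stub
printf '{"dok_id":"HD03242","titel":"Example","datum":"2026-04-21","organ":"FiU"}' > /tmp/depth-demo.json
classify_depth /tmp/depth-demo.json   # prints METADATA-ONLY
```

A workflow could loop this over `analysis/daily/{date}/{type}/documents/*.json` and record the result next to each confidence claim, so the confidence-ceiling table in this step is enforced rather than self-reported.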
- -> **MCP Response Field Mapping** (verified 2026-04-16): -> `get_dokument_innehall` returns: `{ dok_id, datum, doktyp, rm, titel, url, text, snippet, fulltext_available }` -> - `text` — Raw Riksdag dump (metadata + embedded HTML); stored as `fullContent` in pipeline -> - `snippet` — 400-char excerpt; stored as `summary` fallback in pipeline -> - `fulltext_available` — boolean indicating if full text was returned -> - **Legacy fields** (`fullText`, `html`, `summary`, `notis`) are NOT returned by the current MCP server - -> ⚠️ **NEVER paste the raw `text` field into article HTML.** The `text` payload begins with a dump of metadata tokens (e.g. `5287561 HD03242 2025/26 242 prop prop prop Proposition 2025/26:242 … html-ec prop-RIM `) followed by embedded CSS rule blocks (`body {margin-top: 0px;…} #page_1 {position:relative; overflow: hidden;…}`). When the AI agent copies this directly into a `<pre>` inside a `<div>`, readers see the CSS as visible text. **Always extract the narrative passage** (via `extractKeyPassage` / `generateDocumentIntelligenceAnalysis` — both now auto-strip the Riksdag dump prefix and embedded CSS), or summarise the document in your own words. **Never** place raw `text` (or the first N chars of it) between `<pre>` tags. See the 2026-04-18 weekly-review incident for the failure mode. - -**Before ANY per-document analysis**, verify data depth: - -1. **Check each document JSON** in `analysis/daily/{date}/{type}/documents/*.json`: - - If JSON contains `fullText` or `fullContent` with meaningful non-trivial content (>100 chars) → **FULL-TEXT** available - - If JSON contains `summary` with >100 chars but no `fullText`/`fullContent` → **SUMMARY** - - If JSON contains `contentFetched: true` but no `fullText`/`fullContent` → treat as **details fetched**, **NOT** automatically FULL-TEXT - - If JSON contains only `titel`, `datum`, `organ` fields (or otherwise remains <500 bytes without `summary`, `fullText`, or `fullContent`) → **METADATA-ONLY** -2. **For METADATA-ONLY or SUMMARY-only documents that require deeper analysis**, the AI agent MUST call MCP directly: - ``` - Call: get_dokument_innehall({ dok_id: "<dok_id>", include_full_text: true }) - ``` - - The MCP response `text` field contains the full document (metadata + HTML) — use as fullContent - - The MCP response `snippet` field provides a short summary - - Only reclassify to **FULL-TEXT** if the stored result includes `fullText` or `fullContent` (after pipeline mapping) - - If `get_dokument_innehall` fails: retain the existing classification (`SUMMARY` or `METADATA-ONLY`) in analysis -3. 
**NEVER claim HIGH or VERY HIGH confidence unless `fullText` or `fullContent` is verified** - -**Confidence Ceiling Rules (Based on Data Depth):** - -| Data Depth | Max Confidence | Description | -|---|---|---| -| **FULL-TEXT** — Verified document text present in `fullText` or `fullContent` (typically from `get_dokument_innehall`) | **HIGH** or **VERY HIGH** | Full text verified, multi-framework analysis possible | -| **SUMMARY** — Title + summary/abstract (100+ chars), with or without `contentFetched: true`, but no verified `fullText`/`fullContent` | **MEDIUM** | Partial content, limited analysis depth | -| **METADATA-ONLY** — Title, date, committee code only (<500 bytes) and no `summary`, `fullText`, or `fullContent` | **LOW** | Classification and scoring only; SWOT/risk analysis PROHIBITED | -| **NO DATA** — Document not in pipeline | Analysis **PROHIBITED** | Do NOT fabricate analysis from general knowledge | - -> ⛔ **FABRICATION BAN**: If analysis JSON files for a topic contain ZERO documents matching the article topic (e.g., `focus_topic` is `"cyber security"` but all documents are about migration/healthcare), the workflow MUST stop article generation and create an **analysis-only PR** that documents the mismatch, cites the inspected files/documents, and clearly states that no article was generated because the downloaded evidence does not match the requested topic. **Do NOT use `safeoutputs___noop` for this case**, because mismatch findings are evidence/artifacts that must be preserved for auditability. NEVER generate an article from general knowledge about a topic when the actual downloaded data is about something else entirely. - -#### Step 2: Read ALL Methodology Guides (MANDATORY — do this BEFORE any analysis) - -The agent MUST read (using `view` or `cat`) every one of these files before writing any analysis. These define HOW to analyze: - -1. **`analysis/methodologies/ai-driven-analysis-guide.md`** — Master guide with bad vs. good examples -2. 
**`analysis/methodologies/political-swot-framework.md`** — Evidence-based SWOT with confidence hierarchy -3. **`analysis/methodologies/political-risk-methodology.md`** — 5×5 Likelihood × Impact risk matrix -4. **`analysis/methodologies/political-threat-framework.md`** — Political Threat Taxonomy (Attack Trees, Kill Chain, Diamond Model) -5. **`analysis/methodologies/political-classification-guide.md`** — Sensitivity, domain, urgency taxonomy -6. **`analysis/methodologies/political-style-guide.md`** — Writing standards and evidence density - -#### Step 3: Read ALL Analysis Templates (MANDATORY — do this BEFORE writing any files) - -The agent MUST read every template. These define WHAT the output must look like: - -1. **`analysis/templates/per-file-political-intelligence.md`** — Per-document analysis output format -2. **`analysis/templates/synthesis-summary.md`** — Daily synthesis (SYN-ID, Intelligence Dashboard) -3. **`analysis/templates/risk-assessment.md`** — Risk assessment (RSK-ID, Heat Map, L×I scores) -4. **`analysis/templates/political-classification.md`** — Classification (CLS-ID, Decision Tree) -5. **`analysis/templates/threat-analysis.md`** — Threat analysis (THR-ID, Threat Taxonomy Network) -6. **`analysis/templates/swot-analysis.md`** — SWOT analysis (SWT-ID, Quadrant Mapping) -7. **`analysis/templates/stakeholder-impact.md`** — Stakeholder impact (STA-ID, 6 Groups, Impact Radar) -8. **`analysis/templates/significance-scoring.md`** — Significance scoring (SIG-ID, 5 Dimensions) - -#### Step 4: Create Per-File Analysis for EVERY Downloaded Document - -For EACH document in `analysis/data/`: - -1. **Read the JSON data** — extract dok_id, titel, datum, parti, organ, etc. -2. 
**Apply ALL 6 analytical lenses** using the methodologies: - - **Classification** — Sensitivity (PUBLIC/SENSITIVE/RESTRICTED), Domain (13 codes), Urgency, Significance (0–10) - - **SWOT** — Government + Opposition impact with evidence (cite dok_id, vote counts, party names) - - **Risk** — 5×5 Likelihood × Impact matrix with numeric scores - - **Political Threat Taxonomy** — 6 democratic function categories (Narrative Integrity, Legislative Integrity, Accountability, Transparency, Democratic Process, Power Balance) - - **Stakeholders** — 6 groups (Citizens, Government, Opposition, Business, Civil Society, International) - - **Forward Indicators** — Specific watch items with concrete timelines and triggers -3. **Write `{dok_id}-analysis.md`** alongside the data file, following `per-file-political-intelligence.md` template EXACTLY -4. **Include ≥1 Mermaid diagram** with REAL data from the document (not placeholder) -5. **Quality gate**: ≥3 evidence citations with dok_id, confidence labels on all claims, zero `[REQUIRED]` placeholders - -> ⛔ **ANTI-PATTERN WARNING — REJECTED OUTPUT PATTERNS:** -> The following patterns indicate **unreplaced script stubs** and will FAIL the quality gate: -> - `"_No strengths identified_"` / `"_No weaknesses identified_"` — empty SWOT quadrants -> - `"this document requires assessment of policy execution"` — generic boilerplate perspective text -> - `"this document warrants scrutiny for alignment with citizen welfare"` — template filler, not analysis -> - `"this document may affect business environment"` — generic economic perspective -> - `"this document has low newsworthiness (score: XX/100)"` — script-generated placeholder -> - `"this document must be assessed for EU regulatory alignment"` — generic international perspective -> - SWOT quadrants with only `_No X identified_` entries — indicates AI skipped analysis -> - Stakeholder perspectives without SPECIFIC document data (dok_id, vote counts, party names) -> - Analysis with 0 
Mermaid diagrams and 0 evidence table rows -> -> **CORRECT APPROACH**: Read the actual JSON data file, extract SPECIFIC facts (dok_id, committee, policy area, parties involved), then write REAL analysis citing those facts. Every SWOT entry must reference actual document content. - -#### Step 5: Create/Rewrite ALL Daily Synthesis Files Following Templates - -For each file in `analysis/daily/$ARTICLE_DATE/`, the agent MUST rewrite it to match its template EXACTLY: - -| Daily File | Template to Follow | Minimum Requirements | -|------------|-------------------|---------------------| -| `synthesis-summary.md` | `analysis/templates/synthesis-summary.md` | SYN-ID, Intelligence Dashboard (Mermaid), Top Findings table, Aggregated SWOT, Risk Landscape, Threat Summary, Stakeholder Impact, Narrative Direction, Forward Indicators, Artifacts Inventory with ✅/⚠️/❌ status | -| `risk-assessment.md` | `analysis/templates/risk-assessment.md` | RSK-ID, Risk Heat Map (Mermaid quadrant chart), ≥2 risks with L×I numeric scores, Coalition Stability Risk, Escalation Rules | -| `classification-results.md` | `analysis/templates/political-classification.md` | CLS-ID, Sensitivity Decision Tree (Mermaid), per-document table with sensitivity/domain/urgency/significance | -| `threat-analysis.md` | `analysis/templates/threat-analysis.md` | THR-ID, Threat Taxonomy Network (Mermaid), ALL 6 threat categories with ≥1 threat each (severity 1-5), Threat Actor Mapping | -| `swot-analysis.md` | `analysis/templates/swot-analysis.md` | SWT-ID, Quadrant Mapping (Mermaid mindmap), ≥2 filled quadrants with dok_id evidence, Coalition + Opposition SWOT | -| `stakeholder-perspectives.md` | `analysis/templates/stakeholder-impact.md` | STA-ID, Impact Radar (Mermaid), ALL 6 stakeholder groups assessed with impact level and timeline | -| `significance-scoring.md` | `analysis/templates/significance-scoring.md` | SIG-ID, 5-dimension scoring (Parliamentary, Policy Impact, Public Interest, Urgency, Cross-party), 
Composite Score, Publication Decision | - -**Template compliance checklist (ALL must be true):** -- [ ] Every file has its template's metadata header (ID, date, riksmöte, confidence) -- [ ] Every file has ≥1 Mermaid diagram with color-coded nodes and REAL data -- [ ] Every Mermaid diagram uses color-coded `style` directives (e.g., `fill:#dc3545,color:#fff` for red, `fill:#28a745,color:#fff` for green) -- [ ] Risk assessment has ≥2 risks with L×I numeric scores -- [ ] SWOT has structured evidence tables with columns: `#`, `Statement`, `Evidence (dok_id)`, `Confidence`, `Impact`, `Entry Date` -- [ ] SWOT has ≥2 filled quadrants with evidence citations (dok_id) -- [ ] Threat analysis covers ALL 6 Political Threat Taxonomy categories -- [ ] Significance scoring uses 5-dimension model with publication decision -- [ ] Synthesis references ALL sibling files with ✅/⚠️/❌ status -- [ ] No `[REQUIRED]` placeholders remaining in any file -- [ ] Every claim cites specific data (dok_id, vote counts, party names, dates) -- [ ] Markdown is human-readable with proper formatting (tables, emoji headers, structured sections) - -#### Step 5b: MANDATORY Quality Gate — Run Before Committing - -> 🚨 **BLOCKING**: Do NOT proceed to commit until this quality gate passes. If it fails, go back and improve the analysis files. 
- -Run this bash check on ALL analysis files (daily synthesis AND per-file analyses in `documents/`) before committing: - -```bash -# CRITICAL: Use article-type-scoped directory, NEVER the bare date directory -if [ -z "$ANALYSIS_SUBFOLDER" ]; then ANALYSIS_SUBFOLDER="$ARTICLE_TYPE"; fi -if [ -z "$ARTICLE_DATE" ]; then - date -u +%Y-%m-%d > /tmp/today.txt - read ARTICLE_DATE < /tmp/today.txt -fi -ANALYSIS_DIR="analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER" -QUALITY_PASS=true -FAIL_COUNT=0 -WARN_COUNT=0 - -echo "=== 🔍 Analysis Quality Gate Check ===" - -# Collect ALL analysis markdown files (daily synthesis + per-file in documents/) -find "$ANALYSIS_DIR" -name "*.md" -type f 2>/dev/null | wc -l > /tmp/all_md_count.txt -read ALL_MD_COUNT < /tmp/all_md_count.txt -find "$ANALYSIS_DIR" -maxdepth 1 -name "*.md" -type f 2>/dev/null | wc -l > /tmp/daily_md_count.txt -read DAILY_MD_COUNT < /tmp/daily_md_count.txt -find "$ANALYSIS_DIR/documents" -name "*-analysis.md" -type f 2>/dev/null | wc -l > /tmp/perfile_count.txt -read PERFILE_COUNT < /tmp/perfile_count.txt -echo "📊 Daily synthesis files: $DAILY_MD_COUNT" -echo "📊 Per-file analysis files: $PERFILE_COUNT" - -# Check 1: Every daily synthesis file must contain at least 1 Mermaid diagram -echo "" -echo "--- Check 1: Mermaid diagrams in daily synthesis files ---" -for f in "$ANALYSIS_DIR"/*.md; do - [ ! -f "$f" ] && continue - grep -c '```mermaid' "$f" 2>/dev/null > /tmp/mermaid_count.txt || echo 0 > /tmp/mermaid_count.txt - read MERMAID_COUNT < /tmp/mermaid_count.txt - if [ "$MERMAID_COUNT" -eq 0 ]; then - echo "❌ FAIL: $f has NO Mermaid diagrams (minimum: 1)" - QUALITY_PASS=false - FAIL_COUNT=$((FAIL_COUNT + 1)) - else - echo "✅ PASS: $f has $MERMAID_COUNT Mermaid diagram(s)" - fi -done - -# Check 2: Mermaid diagrams must have color-coded style directives -echo "" -echo "--- Check 2: Color-coded style directives in Mermaid diagrams ---" -for f in "$ANALYSIS_DIR"/*.md; do - [ ! 
-f "$f" ] && continue - if grep -q '```mermaid' "$f" 2>/dev/null; then - grep -c 'style.*fill:#' "$f" 2>/dev/null > /tmp/style_count.txt || echo 0 > /tmp/style_count.txt - read STYLE_COUNT < /tmp/style_count.txt - if [ "$STYLE_COUNT" -eq 0 ]; then - echo "❌ FAIL: $f has Mermaid diagram(s) but NO color-coded style directives" - QUALITY_PASS=false - FAIL_COUNT=$((FAIL_COUNT + 1)) - else - echo "✅ PASS: $f has $STYLE_COUNT color-coded style directive(s)" - fi - fi -done - -# Check 3: No [REQUIRED] placeholders remaining -echo "" -echo "--- Check 3: No [REQUIRED] placeholders ---" -for f in "$ANALYSIS_DIR"/*.md "$ANALYSIS_DIR"/documents/*-analysis.md; do - [ ! -f "$f" ] && continue - grep -c '\[REQUIRED\]' "$f" 2>/dev/null > /tmp/req_count.txt || echo 0 > /tmp/req_count.txt - read REQ_COUNT < /tmp/req_count.txt - if [ "$REQ_COUNT" -gt 0 ]; then - echo "❌ FAIL: $f has $REQ_COUNT unfilled [REQUIRED] placeholders" - QUALITY_PASS=false - FAIL_COUNT=$((FAIL_COUNT + 1)) - fi -done - -# Check 4: SWOT analysis must have evidence tables with dok_id -echo "" -echo "--- Check 4: SWOT evidence tables ---" -SWOT_FILE="$ANALYSIS_DIR/swot-analysis.md" -if [ -f "$SWOT_FILE" ]; then - grep -c '|.*dok_id\||.*Evidence' "$SWOT_FILE" 2>/dev/null > /tmp/table_count.txt || echo 0 > /tmp/table_count.txt - read TABLE_COUNT < /tmp/table_count.txt - if [ "$TABLE_COUNT" -eq 0 ]; then - echo "❌ FAIL: swot-analysis.md has NO evidence tables with dok_id columns" - QUALITY_PASS=false - FAIL_COUNT=$((FAIL_COUNT + 1)) - else - echo "✅ PASS: swot-analysis.md has evidence tables" - fi -fi - -# Check 5: Analysis files must have structured tables (not just plain prose) -echo "" -echo "--- Check 5: Structured tables in daily synthesis ---" -for f in "$ANALYSIS_DIR"/*.md; do - [ ! 
-f "$f" ] && continue - grep -c '^|' "$f" 2>/dev/null > /tmp/table_count2.txt || echo 0 > /tmp/table_count2.txt - read TABLE_COUNT < /tmp/table_count2.txt - if [ "$TABLE_COUNT" -lt 3 ]; then - echo "⚠️ WARNING: $f has only $TABLE_COUNT table rows — templates require structured tables" - WARN_COUNT=$((WARN_COUNT + 1)) - fi -done - -# Check 6: Per-file analyses in documents/ must NOT be stubs/boilerplate -echo "" -echo "--- Check 6: Per-file analyses are NOT stubs (documents/ subdirectory) ---" -STUB_PERFILE=0 -for f in "$ANALYSIS_DIR"/documents/*-analysis.md; do - [ ! -f "$f" ] && continue - BASENAME="$f" # AWF-safe: use full path - # Detect known stub/boilerplate patterns that scripts generate as placeholders - STUB_SCORE=0 - # Pattern 1: Empty SWOT quadrants ("_No strengths identified_", "_No weaknesses identified_", etc.) - grep -cE '_No (strengths|weaknesses|opportunities|threats) identified_' "$f" 2>/dev/null > /tmp/stub_es.txt || echo 0 > /tmp/stub_es.txt - read EMPTY_SWOT < /tmp/stub_es.txt - if [ "$EMPTY_SWOT" -ge 2 ]; then - STUB_SCORE=$((STUB_SCORE + 2)) - fi - # Pattern 2: Generic boilerplate perspective text (script-generated template text) - grep -c 'this document requires assessment of\|this document warrants scrutiny for\|this document may affect business\|this document has low newsworthiness\|this document must be assessed for' "$f" 2>/dev/null > /tmp/stub_bp.txt || echo 0 > /tmp/stub_bp.txt - read BOILERPLATE < /tmp/stub_bp.txt - if [ "$BOILERPLATE" -ge 2 ]; then - STUB_SCORE=$((STUB_SCORE + 2)) - fi - # Pattern 3: No Mermaid diagrams in per-file analysis - grep -c '```mermaid' "$f" 2>/dev/null > /tmp/mermaid_count.txt || echo 0 > /tmp/mermaid_count.txt - read MERMAID_COUNT < /tmp/mermaid_count.txt - if [ "$MERMAID_COUNT" -eq 0 ]; then - STUB_SCORE=$((STUB_SCORE + 1)) - fi - # Pattern 4: No evidence table rows (per-file must have structured tables) - grep -c '^|' "$f" 2>/dev/null > /tmp/table_count2.txt || echo 0 > /tmp/table_count2.txt - read 
TABLE_COUNT < /tmp/table_count2.txt - if [ "$TABLE_COUNT" -lt 2 ]; then - STUB_SCORE=$((STUB_SCORE + 1)) - fi - # FAIL if stub score >= 3 (multiple stub indicators = unreplaced boilerplate) - if [ "$STUB_SCORE" -ge 3 ]; then - echo "❌ FAIL: $BASENAME is a stub/boilerplate (score=$STUB_SCORE) — AI MUST replace with real template-compliant analysis" - STUB_PERFILE=$((STUB_PERFILE + 1)) - QUALITY_PASS=false - FAIL_COUNT=$((FAIL_COUNT + 1)) - elif [ "$STUB_SCORE" -ge 2 ]; then - echo "⚠️ WARNING: $BASENAME has stub-like patterns (score=$STUB_SCORE) — verify analysis is real, not boilerplate" - WARN_COUNT=$((WARN_COUNT + 1)) - else - echo "✅ PASS: $BASENAME appears to be real analysis" - fi -done -if [ "$STUB_PERFILE" -gt 0 ]; then - echo "" - echo "🚨 $STUB_PERFILE per-file analyses are stubs. AI MUST read per-file-political-intelligence.md template and REWRITE each stub file with:" - echo " - ≥1 Mermaid diagram with color-coded style directives" - echo " - Structured evidence tables with dok_id, confidence, impact columns" - echo " - Real SWOT analysis (not empty quadrants)" - echo " - Specific citations from the document data (not generic text)" -fi - -# Check 7: Per-file analyses must exist for downloaded documents -echo "" -echo "--- Check 7: Per-file analysis coverage ---" -if [ -d "$ANALYSIS_DIR/documents" ]; then - find "$ANALYSIS_DIR/documents" -name "*.json" -type f 2>/dev/null | wc -l > /tmp/json_count.txt - read JSON_COUNT < /tmp/json_count.txt - find "$ANALYSIS_DIR/documents" -name "*-analysis.md" -type f 2>/dev/null | wc -l > /tmp/amd_count.txt - read ANALYSIS_MD_COUNT < /tmp/amd_count.txt - if [ "$JSON_COUNT" -gt 0 ] && [ "$ANALYSIS_MD_COUNT" -lt "$JSON_COUNT" ]; then - echo "❌ FAIL: Only $ANALYSIS_MD_COUNT analysis files for $JSON_COUNT data files — every document needs an analysis" - QUALITY_PASS=false - FAIL_COUNT=$((FAIL_COUNT + 1)) - elif [ "$JSON_COUNT" -gt 0 ]; then - echo "✅ PASS: $ANALYSIS_MD_COUNT analysis files for $JSON_COUNT data files" - fi 
-fi - -echo "" -echo "--- Check 8: Batch analysis enrichment (prevents empty '0 documents analyzed' files) ---" -if [ -d "$ANALYSIS_DIR/documents" ]; then - find "$ANALYSIS_DIR/documents" -name "*-analysis.md" -type f 2>/dev/null | wc -l > /tmp/perdoc_count.txt - read PERDOC_COUNT < /tmp/perdoc_count.txt - if [ "$PERDOC_COUNT" -gt 0 ]; then - # Per-document analysis exists — all mandatory batch artifacts MUST NOT report "0 documents analyzed" - for bf in synthesis-summary.md swot-analysis.md risk-assessment.md threat-analysis.md classification-results.md significance-scoring.md stakeholder-perspectives.md cross-reference-map.md data-download-manifest.md; do - BATCH_FILE="$ANALYSIS_DIR/$bf" - [ ! -f "$BATCH_FILE" ] && continue - grep -cE "(Documents Analyzed\*\*:\s*0|documents analyzed:\s*0|Analyzed \*\*0|Scored \*\*0|for \*\*0|to \*\*0|across 0 documents|for 0 political)" "$BATCH_FILE" 2>/dev/null > /tmp/zero_docs.txt || echo 0 > /tmp/zero_docs.txt - read ZERO_DOCS < /tmp/zero_docs.txt - wc -c < "$BATCH_FILE" 2>/dev/null > /tmp/file_size.txt || echo 0 > /tmp/file_size.txt - read FILE_SIZE < /tmp/file_size.txt - if [ "$ZERO_DOCS" -gt 0 ]; then - echo "❌ FAIL: $bf reports '0 documents' but $PERDOC_COUNT per-doc analyses exist — MUST be enriched" - QUALITY_PASS=false - FAIL_COUNT=$((FAIL_COUNT + 1)) - elif [ "$FILE_SIZE" -lt 500 ]; then - echo "❌ FAIL: $bf is only $FILE_SIZE bytes — too small for meaningful analysis (minimum: 500)" - QUALITY_PASS=false - FAIL_COUNT=$((FAIL_COUNT + 1)) - else - echo "✅ PASS: $bf has substantive content ($FILE_SIZE bytes)" - fi - done - fi -fi - -# Check 9: Confidence-Data Alignment — article confidence MUST match analysis confidence -echo "" -echo "--- Check 9: Confidence-data alignment (prevents confidence inflation) ---" -SYNTH_FILE="$ANALYSIS_DIR/synthesis-summary.md" -if [ -f "$SYNTH_FILE" ]; then - # Count documents with actual fullText/fullContent >100 chars via jq exit code (no intermediate temp files) - HAS_FULLTEXT=0 - find 
"$ANALYSIS_DIR" -path '*/documents/*.json' -type f 2>/dev/null > /tmp/qg9_docs_$$.txt - while IFS= read -r jf; do - [ -z "$jf" ] && continue - [ ! -f "$jf" ] && continue - # jq -e exits 0 if expression is truthy, 1 otherwise — no temp file needed - if jq -e '(((.fullText // "") | length) > 100) or (((.fullContent // "") | length) > 100)' "$jf" >/dev/null 2>&1; then - HAS_FULLTEXT=$((HAS_FULLTEXT + 1)) - fi - done < /tmp/qg9_docs_$$.txt - rm -f /tmp/qg9_docs_$$.txt - - # Extract the actual confidence value from the Overall Confidence table row (avoids matching template/reference text) - # NOTE: Avoids $() command substitution per AWF shell safety rules — uses temp files + read instead - ACTUAL_CONFIDENCE="" - CONF_LINE_TMP=/tmp/qg9_conf_line_$$.txt - ACTUAL_CONF_TMP=/tmp/qg9_actual_conf_$$.txt - if [ -f "$SYNTH_FILE" ]; then - # Look for "| **Overall Confidence** | VALUE |" pattern in synthesis context table - grep -i 'Overall Confidence' "$SYNTH_FILE" 2>/dev/null | grep '|' | head -1 > "$CONF_LINE_TMP" || true - CONF_LINE="" - if [ -s "$CONF_LINE_TMP" ]; then - IFS= read -r CONF_LINE < "$CONF_LINE_TMP" - fi - if [ -n "$CONF_LINE" ]; then - # Extract the value column (third pipe-delimited field), strip markdown/whitespace - printf '%s\n' "$CONF_LINE" | awk -F'|' '{print $3}' | sed 's/\*//g; s/`//g; s/^[[:space:]]*//; s/[[:space:]]*$//' | tr '[:upper:]' '[:lower:]' > "$ACTUAL_CONF_TMP" - if [ -s "$ACTUAL_CONF_TMP" ]; then - IFS= read -r ACTUAL_CONFIDENCE < "$ACTUAL_CONF_TMP" - fi - fi - fi - rm -f "$CONF_LINE_TMP" "$ACTUAL_CONF_TMP" - - # Check for HIGH/VERY HIGH confidence claims with no full text - case "$ACTUAL_CONFIDENCE" in - *"very high"*|*"high"*) - if [ "$HAS_FULLTEXT" -eq 0 ]; then - echo "⚠️ WARNING: Synthesis claims HIGH/VERY HIGH confidence but ZERO documents have fullText/fullContent — article MUST NOT claim HIGH confidence" - WARN_COUNT=$((WARN_COUNT + 1)) - fi - ;; - esac - - # Check for LOW confidence with no full text - case "$ACTUAL_CONFIDENCE" in - 
*"low"*|*"very low"*) - if [ "$HAS_FULLTEXT" -eq 0 ]; then - echo "⚠️ WARNING: Synthesis confidence is LOW and no documents have fullText/fullContent — article MUST NOT claim HIGH confidence" - WARN_COUNT=$((WARN_COUNT + 1)) - else - echo "✅ PASS: Synthesis confidence is LOW but $HAS_FULLTEXT document(s) have full text" - fi - ;; - *) - if [ "$HAS_FULLTEXT" -gt 0 ]; then - echo "✅ PASS: $HAS_FULLTEXT document(s) have full text content" - else - echo "ℹ️ INFO: No full text found — confidence ceiling is MEDIUM" - fi - ;; - esac -fi - -# Check 10: Data Depth Assessment — count enriched vs metadata-only documents -echo "" -echo "--- Check 10: Data depth assessment (enrichment coverage) ---" -FULLTEXT=0 -SUMMARY_ONLY=0 -METADATA_ONLY=0 -find "$ANALYSIS_DIR" -path '*/documents/*.json' -type f 2>/dev/null > /tmp/qg10_docs_$$.txt -while IFS= read -r jf; do - [ -z "$jf" ] && continue - [ ! -f "$jf" ] && continue - # Classify using jq exit codes — no intermediate temp files - if jq -e '(((.fullText // "") | length) > 100) or (((.fullContent // "") | length) > 100)' "$jf" >/dev/null 2>&1; then - FULLTEXT=$((FULLTEXT + 1)) - elif jq -e '(((.summary // "") | length) > 100) or (((.notis // "") | length) > 100)' "$jf" >/dev/null 2>&1; then - SUMMARY_ONLY=$((SUMMARY_ONLY + 1)) - else - METADATA_ONLY=$((METADATA_ONLY + 1)) - fi -done < /tmp/qg10_docs_$$.txt -rm -f /tmp/qg10_docs_$$.txt -TOTAL_DATA=$((FULLTEXT + SUMMARY_ONLY + METADATA_ONLY)) -if [ "$TOTAL_DATA" -gt 0 ]; then - echo "📊 Data depth: $FULLTEXT full-text (fullText/fullContent), $SUMMARY_ONLY summary-only, $METADATA_ONLY metadata-only, out of $TOTAL_DATA total" - if [ "$METADATA_ONLY" -gt $((FULLTEXT + SUMMARY_ONLY)) ]; then - echo "⚠️ WARNING: Majority of documents ($METADATA_ONLY/$TOTAL_DATA) are metadata-only — max confidence is MEDIUM" - WARN_COUNT=$((WARN_COUNT + 1)) - else - echo "✅ PASS: Majority of documents have full-text or summary content" - fi -fi - -echo "" -echo "=== Quality Gate Summary ===" -echo 
"Failures: $FAIL_COUNT | Warnings: $WARN_COUNT" -if [ "$QUALITY_PASS" = "true" ]; then - echo "✅ Quality gate PASSED — analysis is ready to commit" -else - echo "" - echo "❌ Quality gate FAILED ($FAIL_COUNT failures) — you MUST improve analysis files before committing" - echo "📋 Re-read the templates and methodology guides, then rewrite failing files" - echo "📌 For per-file analyses: read analysis/templates/per-file-political-intelligence.md" - echo "📌 For daily synthesis: read the corresponding template in analysis/templates/" - echo "📌 Reference good examples: SWOT.md, THREAT_MODEL.md" -fi -``` - -> **If the quality gate FAILS**: Go back and rewrite the failing files. For per-file analyses in `documents/`, read `analysis/templates/per-file-political-intelligence.md` and replace stubs with real template-compliant analysis. For daily synthesis files, read the corresponding template in `analysis/templates/`. Do NOT commit until all checks pass. - -#### Step 6: Commit Data AND Analysis Together - -⚠️ **safe-outputs enforces a 100-file limit per PR.** Always scope `git add` to avoid conflicts between concurrent workflows and stay under the limit. - -**Doc-type workflows** (committee-reports, motions, propositions, interpellations) MUST scope to their article-type subdirectory — NOT the parent date directory. Multiple doc-type workflows run on the same date and would conflict if they all stage `analysis/daily/$DATE/`. - -**For doc-type workflows** — the `--doc-type` flag passed to `download-parliamentary-data.ts` scopes output to a subdirectory (e.g., `analysis/daily/$DATE/committeeReports/`). 
Use the matching `DOC_TYPE` value in your `git add`: - -| Workflow | `--doc-type` value | `DOC_TYPE` for git add | -|----------|-------------------|----------------------| -| news-committee-reports | `committeeReports` | `committeeReports` | -| news-motions | `motions` | `motions` | -| news-propositions | `propositions` | `propositions` | -| news-interpellations | `interpellations` | `interpellations` | - -> ⚠️ **CRITICAL: Stage only today's new articles by exact filename, NOT with a wildcard glob.** -> Using `git add news/*committee-reports*.html` stages ALL existing articles (400–500 files), many -> of which may be modified by auto-fix scripts, causing E003 (>100 files). Always use: -> ```bash -> git add "news/$ARTICLE_DATE-committee-reports-en.html" 2>/dev/null || true -> git add "news/$ARTICLE_DATE-committee-reports-sv.html" 2>/dev/null || true -> ``` - -```bash -# Stage analysis scoped to article type — avoids conflicts with other doc-type workflows on the same date -DOC_TYPE="committeeReports" # One of: committeeReports, motions, propositions, interpellations -ARTICLE_SLUG="committee-reports" # Filename slug: committee-reports, opposition-motions, government-propositions, interpellation-debates -# Stage ONLY today's new articles by exact filename (prevents staging 400-500 existing articles) -git add "news/$ARTICLE_DATE-$ARTICLE_SLUG-en.html" 2>/dev/null || true -git add "news/$ARTICLE_DATE-$ARTICLE_SLUG-sv.html" 2>/dev/null || true -# Use $ANALYSIS_SUBFOLDER (from Run Suffix Resolution); fallback to $DOC_TYPE -if [ -z "$ANALYSIS_SUBFOLDER" ]; then - ANALYSIS_SUBFOLDER="$DOC_TYPE" -fi -# Stage summary .md files ONLY — EXCLUDE documents/ to stay under 100-file limit. -# With --limit 50, documents/ alone can contain 100+ files (50 JSON + 50 analysis.md). 
-git add "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER"/*.md 2>/dev/null || true -git add "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER"/*.json 2>/dev/null || true -# Enforce safe-outputs 100-file PR limit (AWF-safe: no $(...) — write to temp file + read back) -git diff --cached --name-only > /tmp/staged_files.txt -awk 'END{print NR}' /tmp/staged_files.txt > /tmp/staged_count.txt -STAGED_COUNT=0 -read STAGED_COUNT < /tmp/staged_count.txt 2>/dev/null || true -echo "📊 Staged file count: $STAGED_COUNT (limit: 100)" -if [ "$STAGED_COUNT" -gt 90 ]; then - echo "⚠️ Staged $STAGED_COUNT files exceeds safe threshold. Removing non-essential analysis — keeping core summaries." - # Graduated pruning: remove individual doc-level analysis JSON first, keep synthesis/scoring/risk .md - # If still over limit, all .json goes but .md summaries (synthesis-summary.md, risk-assessment.md) survive - git reset HEAD -- "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER"/*-analysis.json 2>/dev/null || true - git reset HEAD -- "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER"/*-details.json 2>/dev/null || true - git diff --cached --name-only > /tmp/staged_files.txt - awk 'END{print NR}' /tmp/staged_files.txt > /tmp/staged_count.txt - STAGED_COUNT=0 - read STAGED_COUNT < /tmp/staged_count.txt 2>/dev/null || true -fi -if [ "$STAGED_COUNT" -gt 90 ]; then - echo "⚠️ Still $STAGED_COUNT files. Removing remaining analysis .json — keeping .md summaries." - git reset HEAD -- "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER"/*.json 2>/dev/null || true - git diff --cached --name-only > /tmp/staged_files.txt - awk 'END{print NR}' /tmp/staged_files.txt > /tmp/staged_count.txt - STAGED_COUNT=0 - read STAGED_COUNT < /tmp/staged_count.txt 2>/dev/null || true -fi -# FINAL HARD GUARD: if count still exceeds 99, remove all analysis .md except synthesis-summary.md -if [ "$STAGED_COUNT" -gt 99 ]; then - echo "🚨 CRITICAL: $STAGED_COUNT files still exceeds safe limit of 99. 
Removing all analysis .md except synthesis-summary." - git reset HEAD -- "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER"/*.md 2>/dev/null || true - git add "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER/synthesis-summary.md" 2>/dev/null || true - git diff --cached --name-only > /tmp/staged_files.txt - awk 'END{print NR}' /tmp/staged_files.txt > /tmp/staged_count.txt - STAGED_COUNT=0 - read STAGED_COUNT < /tmp/staged_count.txt 2>/dev/null || true - echo "📊 After emergency pruning: $STAGED_COUNT files" -fi -echo "📊 Final staged file count: $STAGED_COUNT" -git commit -m "📊 Data + Analysis ($DOC_TYPE) - $ARTICLE_DATE" -``` - -**For all other workflows** (realtime-monitor, evening-analysis, article-generator, month-ahead, week-ahead, weekly-review, monthly-review) — MUST also scope to their article-type subdirectory: - -> ⚠️ **Pipeline relocation required**: `download-parliamentary-data.ts` writes to `analysis/daily/$DATE/` (unscoped) when run without `--doc-type`. Each workflow MUST **move** (not copy) pipeline artifacts into its type subfolder immediately after the pipeline step. **NEVER leave .md files at the root date directory level** — this causes merge conflicts when multiple workflows run on the same date. 
The relocation MUST use the resolved `ANALYSIS_SUBFOLDER` (from Run Suffix Resolution) and be idempotent (safe on reruns): -> -> ```bash -> UNSCOPED_DIR="analysis/daily/$ARTICLE_DATE" -> SCOPED_DIR="$UNSCOPED_DIR/$ANALYSIS_SUBFOLDER" -> if [ -d "$UNSCOPED_DIR" ]; then -> mkdir -p "$SCOPED_DIR" -> if find "$UNSCOPED_DIR" -maxdepth 1 -type f -name "*.md" | grep -q .; then -> find "$UNSCOPED_DIR" -maxdepth 1 -type f -name "*.md" -exec mv -f {} "$SCOPED_DIR/" \; -> echo "📁 Moved pipeline *.md artifacts → $SCOPED_DIR (root cleaned to prevent merge conflicts)" -> fi -> if [ -d "$UNSCOPED_DIR/documents" ]; then -> mkdir -p "$SCOPED_DIR/documents" -> find "$UNSCOPED_DIR/documents" -mindepth 1 -maxdepth 1 -exec mv {} "$SCOPED_DIR/documents/" \; -> rmdir "$UNSCOPED_DIR/documents" 2>/dev/null || true -> echo "📁 Relocated pipeline documents/ contents → $SCOPED_DIR/documents (merge-safe)" -> fi -> fi -> ``` -> -> 🚨 **CRITICAL**: The `analysis-reader.ts` already scans subdirectories automatically — it does NOT need root-level files. Never keep copies at the root date directory. 
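The relocation pattern can be exercised in isolation with throwaway directories. All paths and filenames below are illustrative (invented demo layout, not the real repo):

```shell
# Invented layout: an unscoped date root with one stray .md and one data file
UNSCOPED_DIR=/tmp/demo_daily/2026-04-21
SCOPED_DIR="$UNSCOPED_DIR/evening-analysis"
mkdir -p "$UNSCOPED_DIR/documents" "$SCOPED_DIR"
printf '# stray synthesis\n' > "$UNSCOPED_DIR/synthesis-summary.md"
printf '{}\n' > "$UNSCOPED_DIR/documents/h1234-data.json"
# Move (never copy) root-level *.md into the scoped subfolder
find "$UNSCOPED_DIR" -maxdepth 1 -type f -name "*.md" -exec mv -f {} "$SCOPED_DIR/" \;
# Relocate documents/ contents the same way, then drop the now-empty root-level dir
mkdir -p "$SCOPED_DIR/documents"
find "$UNSCOPED_DIR/documents" -mindepth 1 -maxdepth 1 -exec mv {} "$SCOPED_DIR/documents/" \;
rmdir "$UNSCOPED_DIR/documents" 2>/dev/null || true
ls "$SCOPED_DIR"
```

Because `mv` leaves the root clean, a rerun finds nothing to move, which is what makes the step naturally idempotent.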
- -| Workflow | `ARTICLE_TYPE` subfolder | Example `git add` path | -|----------|-------------------------|----------------------| -| news-realtime-monitor | `realtime-$HHMM` (time-stamped) | `analysis/daily/$DATE/realtime-1430/` | -| news-evening-analysis | `evening-analysis` | `analysis/daily/$DATE/evening-analysis/` | -| news-article-generator | mapped from `REQUESTED_TYPE` (single-type) or `article-generator-HHMM` (multi-type) | `analysis/daily/$DATE/committeeReports/` | -| news-month-ahead | `month-ahead` | `analysis/daily/$DATE/month-ahead/` | -| news-week-ahead | `week-ahead` | `analysis/daily/$DATE/week-ahead/` | -| news-weekly-review | `weekly-review` | `analysis/daily/$DATE/weekly-review/` | -| news-monthly-review | `monthly-review` | `analysis/daily/$DATE/monthly-review/` | - -> **`news-article-generator` folder naming**: For single-type runs, the `REQUESTED_TYPE` input (hyphenated, e.g., `committee-reports`) is mapped to folder names (e.g., `committeeReports`). For multi-type or schedule-driven runs (comma-separated types), a dedicated `article-generator-HHMM` subfolder is used to avoid mixing artifacts across types. See the `case` mapping block in the workflow. - -```bash -# Stage analysis scoped to article type subfolder — prevents overwriting other workflows' analysis -ARTICLE_TYPE="evening-analysis" # Set per workflow (realtime uses "realtime-$HHMM") -# Stage summary files by default; only add whole directory if count is safe -git add "analysis/daily/$ARTICLE_DATE/$ARTICLE_TYPE"/*.md 2>/dev/null || true -git add "analysis/daily/$ARTICLE_DATE/$ARTICLE_TYPE"/*.json 2>/dev/null || true -git add analysis/weekly/ || true -git add analysis/data/ || true -# Enforce safe-outputs 100-file PR limit (AWF-safe: no $(...) 
— write to temp file + read back) -git diff --cached --name-only > /tmp/staged_files.txt -awk 'END{print NR}' /tmp/staged_files.txt > /tmp/staged_count.txt -STAGED_COUNT=0 -read STAGED_COUNT < /tmp/staged_count.txt 2>/dev/null || true -echo "📊 Staged file count: $STAGED_COUNT (limit: 100)" -if [ "$STAGED_COUNT" -gt 90 ]; then - echo "⚠️ Staged $STAGED_COUNT files exceeds safe threshold. Removing bulk data." - git reset HEAD -- analysis/data/ 2>/dev/null || true - git diff --cached --name-only > /tmp/staged_files.txt - awk 'END{print NR}' /tmp/staged_files.txt > /tmp/staged_count.txt - STAGED_COUNT=0 - read STAGED_COUNT < /tmp/staged_count.txt 2>/dev/null || true -fi -if [ "$STAGED_COUNT" -gt 90 ]; then - echo "⚠️ Still $STAGED_COUNT files. Removing weekly analysis." - git reset HEAD -- analysis/weekly/ 2>/dev/null || true - git diff --cached --name-only > /tmp/staged_files.txt - awk 'END{print NR}' /tmp/staged_files.txt > /tmp/staged_count.txt - STAGED_COUNT=0 - read STAGED_COUNT < /tmp/staged_count.txt 2>/dev/null || true -fi -echo "📊 Final staged file count: $STAGED_COUNT" -git commit -m "📊 Data + Analysis ($ARTICLE_TYPE) - $ARTICLE_DATE" -``` - -> ⚠️ **Realtime monitor uniqueness**: `news-realtime-monitor` can run multiple times per day. It MUST derive `HHMM` using the AWF-safe pattern `date -u +%H%M > /tmp/hhmm.txt`, then `HHMM=0000` and `read HHMM < /tmp/hhmm.txt 2>/dev/null || true`, and use that value for both the analysis subfolder (`realtime-$HHMM/`) and article filename (`news/$DATE-breaking-$HHMM-{lang}.html`) to avoid overwriting previous runs. 
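The AWF-safe `HHMM` derivation described above, as a self-contained sketch (the date in the echoed paths is illustrative):

```shell
# Derive HHMM without $() command substitution: write to a temp file, read it back
date -u +%H%M > /tmp/hhmm.txt
HHMM=0000
read HHMM < /tmp/hhmm.txt 2>/dev/null || true
# The same value scopes both the analysis subfolder and the article filename
echo "analysis/daily/2026-04-21/realtime-$HHMM/"
echo "news/2026-04-21-breaking-$HHMM-en.html"
```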
- -> ❌ **PROHIBITED**: Committing analysis without downloaded data files (unless pruned for 100-file limit) -> ❌ **PROHIBITED**: Committing stub/empty analysis when data exists -> ❌ **PROHIBITED**: Skipping analysis creation — every document MUST have analysis -> ❌ **PROHIBITED**: Writing analysis that doesn't follow the template structure -> ❌ **PROHIBITED**: Using broad `git add analysis/data/ analysis/daily/ analysis/weekly/` without scoping — this accumulates old files and exceeds the 100-file PR limit -> ❌ **PROHIBITED**: ANY workflow staging parent date directory `analysis/daily/$DATE/` without article type scope — this causes conflicts and overwrites. ALL workflows MUST scope to `analysis/daily/$DATE/{articleType}/` -> ❌ **PROHIBITED**: Running `npx htmlhint` with a broad glob such as `"news/*-*.html"`, or running `article-quality-enhancer.ts --fix` without an explicit file argument — both touch ALL existing articles (400–500 files) and cause E003 when those files are staged. ALWAYS scope htmlhint and `--fix` to today's articles only, using explicit filenames. -```` - -## 🔧 MANDATORY: Script Debugging & Fixing (copy into every analysis workflow) - -> **NON-NEGOTIABLE**: When scripts fail, the agent MUST diagnose and fix the code/script issues. If fixing fails, fall back to direct MCP tool calls for data download. Analysis is ALWAYS done by the AI using templates — not by scripts. - -````markdown -### Script Debugging & Fixing Protocol - -> 🚨 **ABSOLUTE RULE**: All agentic workflows must analyse and fix any code/script issues to be able to perform their task. When a script fails, the agent MUST NOT silently skip it. - -#### When scripts fail or download 0 data: - -1. **Read the error output**: `cat /tmp/pipeline-output.log | tail -30` -2. **Diagnose**: MCP_SERVER_URL not set? TypeScript errors? Missing deps? Connection refused? -3. **Fix the script**: read source with `view`, fix with `edit`, re-run -4.
**If script fix fails after 2 attempts** → use direct MCP tool calls to download data, save as JSON -5. **If ALL MCP tools also fail** (server truly down) → call `safeoutputs___noop` with error details - -#### Remember: Scripts download data, but the AI does the analysis - -- Scripts (`download-parliamentary-data.ts`) download and save **raw JSON data** plus a factual `data-download-manifest.md` -- The AI agent MUST read all methods and templates, then create the required analysis artifacts from those templates using the downloaded data -- This analysis work is the agent's PRIMARY job and must NEVER be skipped -- Even if scripts work perfectly, the agent still must produce all required analysis files to full template compliance -```` - -## 🔄 Data Lookback Fallback Strategy (copy into every analysis workflow) - -> **MANDATORY**: Never produce empty analysis. If no data exists for today, look back up to 7 days to find data that still needs analysis. Weekend/holiday runs MUST still produce useful output. - -````markdown -### Data Lookback Fallback Strategy - -> 🚨 **CRITICAL RULE**: An agentic workflow must NEVER produce empty/stub analysis files. If no documents are found for today's date, the workflow MUST look back through previous dates to find data that still needs analysis. Empty analysis = wasted workflow run. 
- -#### Fallback Protocol - -After the initial data download attempt for `$ARTICLE_DATE`: - -```bash -if [ -z "$ARTICLE_DATE" ]; then - ARTICLE_DATE="${{ github.event.inputs.article_date }}" - if [ -z "$ARTICLE_DATE" ]; then - date -u +%Y-%m-%d > /tmp/today.txt - read ARTICLE_DATE < /tmp/today.txt - fi -fi -ORIGINAL_ARTICLE_DATE="$ARTICLE_DATE" - -# Step 1: Check if the requested article date has any analyzed documents (per-date, not session-wide) -MANIFEST_PATH="analysis/daily/$ARTICLE_DATE/data-download-manifest.md" -DATE_DOCS_ANALYZED=0 -if [ -f "$MANIFEST_PATH" ]; then - grep -E '^\*\*Documents Analyzed\*\*' "$MANIFEST_PATH" | sed -E 's/^\*\*Documents Analyzed\*\* *: *([0-9]+).*/\1/' > /tmp/date_docs.txt 2>/dev/null || echo 0 > /tmp/date_docs.txt - read DATE_DOCS_ANALYZED < /tmp/date_docs.txt -fi - -if [ "$DATE_DOCS_ANALYZED" -eq 0 ]; then - echo "⚠️ No per-date data for $ARTICLE_DATE — activating lookback fallback" - # Step 2: Try previous dates (up to 7 days back) until we find one with analyzed documents - DATA_DATE="" - for DAYS_BACK in 1 2 3 4 5 6 7; do - # Cross-platform date arithmetic anchored to $ARTICLE_DATE: GNU date (-d) on Linux/GitHub Actions, BSD date (-j -v) on macOS - date -u -d "$ARTICLE_DATE - $DAYS_BACK days" +%Y-%m-%d 2>/dev/null > /tmp/lookback_date.txt || date -u -j -v-"${DAYS_BACK}"d -f '%Y-%m-%d' "$ARTICLE_DATE" +%Y-%m-%d 2>/dev/null > /tmp/lookback_date.txt || true - LOOKBACK_DATE="" - read LOOKBACK_DATE < /tmp/lookback_date.txt 2>/dev/null || true - [ -z "$LOOKBACK_DATE" ] && continue - echo "🔍 Checking $LOOKBACK_DATE for analyzed data..."
- # First, check if a manifest already exists with non-zero Documents Analyzed - MANIFEST_PATH="analysis/daily/$LOOKBACK_DATE/data-download-manifest.md" - DATE_DOCS_ANALYZED=0 - if [ -f "$MANIFEST_PATH" ]; then - grep -E '^\*\*Documents Analyzed\*\*' "$MANIFEST_PATH" | sed -E 's/^\*\*Documents Analyzed\*\* *: *([0-9]+).*/\1/' > /tmp/date_docs.txt 2>/dev/null || echo 0 > /tmp/date_docs.txt - read DATE_DOCS_ANALYZED < /tmp/date_docs.txt - fi - if [ "$DATE_DOCS_ANALYZED" -gt 0 ]; then - DATA_DATE="$LOOKBACK_DATE" - break - fi - # No existing data — run pre-article analysis for this lookback date - echo "ℹ️ No existing manifest data for $LOOKBACK_DATE — running pre-article analysis" - # CRITICAL: Source mcp-setup.sh to set MCP_SERVER_URL for the gateway - source scripts/mcp-setup.sh && npx tsx scripts/download-parliamentary-data.ts --date "$LOOKBACK_DATE" --limit 50 2>/dev/null || true - # Re-check manifest after running analysis - DATE_DOCS_ANALYZED=0 - if [ -f "$MANIFEST_PATH" ]; then - grep -E '^\*\*Documents Analyzed\*\*' "$MANIFEST_PATH" | sed -E 's/^\*\*Documents Analyzed\*\* *: *([0-9]+).*/\1/' > /tmp/date_docs.txt 2>/dev/null || echo 0 > /tmp/date_docs.txt - read DATE_DOCS_ANALYZED < /tmp/date_docs.txt - fi - if [ "$DATE_DOCS_ANALYZED" -gt 0 ]; then - DATA_DATE="$LOOKBACK_DATE" - break - fi - done - # Lookback protection: copy analysis to today's directory instead of overwriting historical data - # When lookback finds existing analysis from a previous date, we COPY it to the article date - # directory so that downstream rewrites modify the copy, not the original.
- if [ -n "$DATA_DATE" ] && [ "$DATA_DATE" != "$ORIGINAL_ARTICLE_DATE" ]; then - SRC_DIR="analysis/daily/$DATA_DATE/$ARTICLE_TYPE" - DST_DIR="analysis/daily/$ORIGINAL_ARTICLE_DATE/$ARTICLE_TYPE" - if [ -n "$ARTICLE_TYPE" ] && [ -d "$SRC_DIR" ]; then - mkdir -p "$DST_DIR" - cp -r "$SRC_DIR"/* "$DST_DIR/" 2>/dev/null || true - echo "📁 Copied analysis from $DATA_DATE → $ORIGINAL_ARTICLE_DATE (preserving original at $DATA_DATE)" - fi - ARTICLE_DATE="$ORIGINAL_ARTICLE_DATE" - elif [ -n "$DATA_DATE" ]; then - ARTICLE_DATE="$DATA_DATE" - fi - DISPLAY_DATE="$DATA_DATE" - [ -z "$DISPLAY_DATE" ] && DISPLAY_DATE="$ARTICLE_DATE" - echo "🗓️ Using analysis date: $ARTICLE_DATE (data sourced from: $DISPLAY_DATE)" - - # Persist selected ARTICLE_DATE for downstream steps - if [ -n "$GITHUB_ENV" ]; then - echo "ARTICLE_DATE=$ARTICLE_DATE" >> "$GITHUB_ENV" - echo "📌 Persisted ARTICLE_DATE=$ARTICLE_DATE to GITHUB_ENV for downstream steps" - fi -fi - -# Step 3: Report pending per-file analysis count for monitoring -npx tsx scripts/catalog-downloaded-data.ts --pending-only 2>/dev/null | jq '.pendingAnalysis // 0' 2>/dev/null > /tmp/pending_count.txt || echo 0 > /tmp/pending_count.txt -read PENDING < /tmp/pending_count.txt -[ -z "$PENDING" ] && PENDING=0 -echo "📊 Total pending per-file analysis files (all dates): $PENDING" -``` - -**Key principle**: The lookback trigger uses the **per-date** "Documents Analyzed" count from `data-download-manifest.md`, NOT session-wide catalog totals. When lookback finds existing analysis from a previous date, it **copies** the analysis to today's directory so downstream rewrites modify the copy, preserving the original historical analysis. - -**Lookback protection**: When `DATA_DATE != ORIGINAL_ARTICLE_DATE`, analysis artifacts are copied (not moved) from the source date to the article date. This ensures that: -1. Historical analysis at the data date is never overwritten -2. The agent works on fresh copies at the article date -3. 
Article references correctly point to the article date directory - -**ARTICLE_DATE overwrite protection**: The resolved `ARTICLE_DATE` is persisted to `$GITHUB_ENV` after lookback selection. **Important**: any downstream bash snippet that initializes `ARTICLE_DATE` from inputs or `date -u` MUST use an idempotent guard to avoid overwriting the lookback-selected date. Place this snippet at the start of any step that runs AFTER the lookback loop: - -```bash -# Idempotent ARTICLE_DATE initialization — only set if not already resolved by lookback -# Place at the top of any downstream bash step that would otherwise re-initialize ARTICLE_DATE -if [ -z "$ARTICLE_DATE" ]; then - ARTICLE_DATE="${{ github.event.inputs.article_date }}" - if [ -z "$ARTICLE_DATE" ]; then - date -u +%Y-%m-%d > /tmp/today.txt - read ARTICLE_DATE < /tmp/today.txt - fi -fi -``` - -This ensures that once lookback persists `ARTICLE_DATE` to `$GITHUB_ENV`, subsequent steps reuse the resolved value rather than resetting to today's date. -```` - -## 📋 Daily Synthesis Template Compliance (copy into every analysis workflow) - -> **MANDATORY**: Every daily analysis file in `analysis/daily/YYYY-MM-DD/` MUST follow its corresponding template from `analysis/templates/`. The script-generated stubs are starting points — the AI agent MUST rewrite them to full template compliance. - -````markdown -### Daily Synthesis Template Compliance - -> 🚨 **CRITICAL RULE**: The `download-parliamentary-data.ts` script generates **stub files** as a starting point. These stubs do NOT follow the full template structure. You MUST read each template and rewrite the corresponding daily file to match the template's required sections, metadata fields, Mermaid diagrams, and evidence tables. 
- -#### Template-to-File Mapping - -| Daily File | Template to Follow | Required Sections | -|------------|-------------------|-------------------| -| `synthesis-summary.md` | **`analysis/templates/synthesis-summary.md`** | Synthesis Context (SYN-ID, date, confidence), Intelligence Dashboard (Mermaid), Top Findings table, Aggregated SWOT, Risk Landscape, Threat Summary, Stakeholder Impact, Narrative Direction, Forward Indicators, Artifacts Inventory | -| `risk-assessment.md` | **`analysis/templates/risk-assessment.md`** | Risk Context (RSK-ID, riksmöte, political context), Risk Heat Map (Mermaid), Risk Inventory table (L×I scores), Coalition Stability, Policy Implementation Risk, Budget Risk, Electoral Risk, Escalation Rules | -| `classification-results.md` | **`analysis/templates/political-classification.md`** | Classification Context (CLS-ID), Sensitivity Decision Tree (Mermaid), Per-document classification table (sensitivity, domain, urgency, scope, significance 0-10), Likelihood × Impact matrix | -| `threat-analysis.md` | **`analysis/templates/threat-analysis.md`** | Threat Context (THR-ID), Threat Taxonomy Network (Mermaid), 6 threat categories (NI/LI/AC/TR/DP/PB) with ≥1 threat each (severity 1-5), Threat Actor Mapping, Priority Mitigations, Escalation Decision | -| `swot-analysis.md` | **`analysis/templates/swot-analysis.md`** | SWOT Context (SWT-ID), Quadrant Mapping (Mermaid), Coalition SWOT, Opposition SWOT, Policy Domain SWOT — all entries with dok_id evidence, confidence, impact, entry date | -| `stakeholder-perspectives.md` | **`analysis/templates/stakeholder-impact.md`** | Stakeholder Context (STA-ID), Impact Radar (Mermaid), 6 stakeholder groups assessed (Citizens, Government, Opposition, Business, Civil Society, International), Impact Summary Matrix, Conflicting Impact Resolution | -| `significance-scoring.md` | **`analysis/templates/significance-scoring.md`** | Scoring Context (SIG-ID), 5-dimension scoring (Parliamentary, Policy Impact, Public 
Interest, Urgency, Cross-party) each 0-10, Composite Score, Publication Decision threshold | - -#### Protocol - -1. **Read each template** — use `view` or `cat` to read the full template file before rewriting the daily file -2. **Preserve downloaded data** — keep any factual data (document counts, risk scores, anomalies) from the downloaded MCP data -3. **Keep existing filenames** — do **NOT** rename or create new files based on template filename suggestions; always use the exact filenames listed in §"9 REQUIRED Analysis Artifacts" (e.g., keep `classification-results.md`, `stakeholder-perspectives.md`) -4. **Add template structure** — add all required metadata fields, Mermaid diagrams, evidence tables, and confidence labels -5. **Fill with real data** — use downloaded documents, MCP data, and analysis results to fill every `[REQUIRED]` placeholder -6. **No empty sections** — if a section has no data, explain WHY (e.g., "No propositions found for this date — Parliament in recess") with confidence label - -#### Minimum Compliance Check -- [ ] Every daily file has its template's metadata header (ID, date, riksmöte, confidence) -- [ ] Every daily file has ≥1 Mermaid diagram with color-coded nodes (using `style X fill:#hex,color:#fff` — not grey or unstyled) -- [ ] Risk assessment has ≥2 risks with L×I numeric scores in structured table -- [ ] SWOT has ≥2 filled quadrants with evidence citations (dok_id, vote counts) in structured tables -- [ ] SWOT follows template structure: Section 1 (Government Coalition), Section 2 (Opposition), Section 3 (Policy Domain) -- [ ] Threat analysis covers all 6 Political Threat Taxonomy categories with severity scores -- [ ] Significance scoring uses 5-dimension model with numeric scores and publication decision -- [ ] Synthesis references all sibling files with ✅/⚠️/❌ status -- [ ] No `[REQUIRED]` placeholders remain in any file -- [ ] Run the quality gate bash check from Step 5b — do NOT commit until it passes - -> **❌ Anti-pattern 
(PR #1452)**: Plain prose SWOT with no tables, no Mermaid diagrams, no dok_id evidence, no template structure. This is REJECTED. -> **✅ Good example**: See [SWOT.md](../../SWOT.md) for the formatting standard — badges, evidence tables, color-coded Mermaid charts, structured sections. -```` - -## Per-File AI Analysis Block (copy into every analysis workflow) - -> **Replaces script-based batch analysis.** The AI agent reads methodology documents and produces SWOT.md-quality per-file analysis for every downloaded MCP data file. - -````markdown -### Per-File AI Political Intelligence Analysis - -**Purpose:** Replace shallow script-based daily analysis with deep, AI-driven per-file analysis. -**Quality Standard:** Every analysis file must match [SWOT.md](../../SWOT.md) and [THREAT_MODEL.md](../../THREAT_MODEL.md) formatting quality. - -#### Required Reading (before analyzing) -Read these methodology documents to guide your analysis: -- **`analysis/methodologies/ai-driven-analysis-guide.md`** — Master per-file analysis guide -- **`analysis/methodologies/political-swot-framework.md`** — Evidence-based SWOT -- **`analysis/methodologies/political-risk-methodology.md`** — 5×5 risk matrix -- **`analysis/methodologies/political-threat-framework.md`** — Political Threat Taxonomy -- **`analysis/methodologies/political-classification-guide.md`** — Sensitivity and domain taxonomy -- **`analysis/methodologies/political-style-guide.md`** — Writing standards and evidence density -- **`analysis/templates/per-file-political-intelligence.md`** — Per-file output template -- **`analysis/templates/synthesis-summary.md`** — Daily synthesis template -- **`analysis/templates/risk-assessment.md`** — Risk assessment template -- **`analysis/templates/political-classification.md`** — Classification template -- **`analysis/templates/threat-analysis.md`** — Threat analysis template -- **`analysis/templates/swot-analysis.md`** — SWOT analysis template -- 
**`analysis/templates/stakeholder-impact.md`** — Stakeholder impact template
-- **`analysis/templates/significance-scoring.md`** — Significance scoring template
-- **`scripts/prompts/v2/per-file-intelligence-analysis.md`** — Detailed analysis prompt
-
-#### Protocol
-1. **Catalog:** Run `npx tsx scripts/catalog-downloaded-data.ts --pending-only` to list files needing analysis
-2. **Analyze each file:** Read the JSON, apply all 6 analytical lenses, fill the per-file template:
-   - Political classification (sensitivity, domain, urgency)
-   - SWOT impact (government + opposition, with evidence)
-   - Risk assessment (5×5 matrix)
-   - Political threat taxonomy assessment (where applicable)
-   - Stakeholder impact matrix (6 lenses)
-   - Forward indicators (specific watch items)
-3. **Write analysis:** Save as `{dok_id}-analysis.md` alongside the data file
-4. **Include Mermaid diagrams** — at least 1 per file, color-coded:
-   ```
-   style X fill:#dc3545,color:#fff /* Red — critical */
-   style X fill:#28a745,color:#fff /* Green — low risk */
-   style X fill:#0d6efd,color:#fff /* Blue — informational */
-   ```
-5. **Rewrite daily synthesis files** — After per-file analysis, rewrite ALL daily files in `analysis/daily/YYYY-MM-DD/` to follow their corresponding templates (see "Daily Synthesis Template Compliance" section above)
-
-#### Quality Gate — MANDATORY (must pass 10/12 minimum)
-
-> 🚨 **BLOCKING**: Run the quality gate bash check from SHARED_PROMPT_PATTERNS Step 5b. Do NOT commit until it passes. 
- -- [ ] ≥ 3 evidence points with dok_id (not generic references) -- [ ] Confidence labels (`[HIGH]`/`[MEDIUM]`/`[LOW]`) on every analytical claim -- [ ] At least 1 **color-coded** Mermaid diagram per file with `style` directives using real data -- [ ] SWOT has structured **evidence tables** (not plain prose) with `#`, `Statement`, `Evidence (dok_id)`, `Confidence`, `Impact` columns -- [ ] SWOT has ≥ 2 filled quadrants (not empty `[REQUIRED]` placeholders) -- [ ] Risk matrix has numeric L×I scores in structured table -- [ ] Forward indicators are specific with concrete timelines and triggers -- [ ] No `[REQUIRED]` placeholders remaining in any file -- [ ] Politicians named with party abbreviation (e.g., "Ulf Kristersson (M)") -- [ ] Intelligence-level analysis (not surface-level summaries or generic text) -- [ ] Daily synthesis files follow their corresponding `analysis/templates/` structure exactly -- [ ] Every daily file has template metadata header (ID, date, riksmöte, confidence) -```` - -## 🏷️ AI-DRIVEN TITLE & META DESCRIPTION GENERATION (copy into every content workflow) - -> **NON-NEGOTIABLE**: Article titles and meta descriptions MUST be generated by the AI agent from actual document content analysis — NEVER from code templates or generic patterns. - -> **v5.0 — ANALYSIS-DRIVEN**: Titles and descriptions MUST be derived from the completed synthesis-summary.md "AI-Recommended Article Metadata" section. The AI MUST complete ALL analysis before generating titles. See `ai-driven-analysis-guide.md` §"Analysis-Driven Article Decision Protocol (v5.0)". - -### Analysis→Title Pipeline (v5.0 — MANDATORY) - -> 🚨 **CRITICAL**: The title generation pipeline MUST follow this sequence: -> 1. **Complete ALL analysis** (per-file + batch SWOT/Risk/Threat/Stakeholder/Significance) -> 2. **Write synthesis-summary.md** with "AI-Recommended Article Metadata" section filled -> 3. **Generate article HTML** using synthesis content (NOT script templates) -> 4. 
**Read the generated article content** to validate title reflects actual coverage -> 5. **Write final title and meta description** into all HTML metadata fields -> -> Steps 1-3 MUST be complete before step 4 begins. The title MUST reference specific findings from the analysis. - -````markdown -### AI Title Generation Protocol - -> 🚨 **CRITICAL**: The AI agent MUST generate a unique, newsworthy title for every article. Script-generated template titles are stubs that MUST be overwritten. - -#### Title Requirements (60-80 characters) - -1. **Lead with the most significant political development** — not a generic category label -2. **Name specific actors or institutions** when central to the story -3. **Use active verbs** — "advances", "challenges", "unveils", "blocks", "fractures" -4. **Convey political significance** — why this matters, not just what happened -5. **NEVER use template patterns** — these are BANNED: - - ❌ `"{Category}: Policy Priorities This Week: Defense in Focus"` - - ❌ `"{Category}: Holding Government to Account: Defense in Focus"` - - ❌ `"{Category}: Parliamentary Priorities This Week: {Topic}"` - - ❌ Any title ending with `: Defense in Focus` or `: {Topic} in Focus` - - ❌ `"{Category}: Battle Lines This Week"` — generic confrontational framing - - ❌ `"Political intelligence briefing on Committee: and Published:"` — metadata field leak - -#### Title Construction Formula -``` -[Active Verb] + [Specific Actor/Institution] + [Concrete Policy Action] + [Political Significance] -``` - -#### Title Quality Examples - -| ❌ BANNED (Generic Template) | ✅ REQUIRED (Newsworthy) | -|------------------------------|--------------------------| -| "Committee Reports: Parliamentary Priorities This Week: Defense in Focus" | "Riksdag Committees Advance Civilian Protection and Criminal Justice Reforms" | -| "Government Propositions: Policy Priorities This Week: Defense in Focus" | "Four Government Bills Target Deportation, Cybersecurity, and Arms Export" | -| 
"Interpellation Debates: Holding Government to Account: Defense in Focus" | "Opposition Grills Ministers on Airport Safety, Defense Costs, and Migration" |
-| "Evening Analysis: Daily Summary" | "Security First: Sweden Advances Deportation Reform and Cybersecurity Legislation" |
-| "Breaking News: Latest Updates" | "Sweden Launches Multi-Front Security Push: Defense, Criminal Justice, and Arms Export Reform" |
-
-#### Implementation: After article HTML is generated by scripts, the AI MUST:
-1. Read the generated article content to understand key political developments
-2. Generate a newsworthy title following the formula above
-3. Update `<title>`, `<meta property="og:title">`, and `<h1>` in the HTML file
-4. Verify the title is unique (not reused from another article type)
-
-### AI Meta Description Generation Protocol
-
-> 🚨 **CRITICAL**: Meta descriptions MUST summarize key political intelligence in 150-160 characters. Script-generated placeholders are BANNED.
-
-#### Meta Description Requirements (150-160 characters)
-
-1. **Summarize key political intelligence** — not document counts or field names
-2. **Include specific policy areas and actors** — committee names, party dynamics, minister names
-3. **Highlight the newsworthy angle** — why a reader should click
-4. 
**Use analytical language** — intelligence-grade, not bureaucratic - -#### BANNED Meta Description Patterns -- ❌ `"Analysis of N documents covering {Field}:, {Field}:"` — This is a template placeholder -- ❌ `"Analysis of 10 documents covering Committee:, Published:"` — Missing actual content -- ❌ `"Analysis of 15 documents covering Filed by:, Published:"` — Meaningless to readers -- ❌ Any meta description starting with "Analysis of N documents" -- ❌ `"Political intelligence briefing on Committee: and Published: — N parliamentary documents analyzed"` — Metadata field leak from extractHighlights - -#### Meta Description Quality Examples - -| ❌ BANNED (Placeholder) | ✅ REQUIRED (Intelligence) | -|-------------------------|----------------------------| -| "Analysis of 10 documents covering Committee:, Published:" | "Sweden's Defense and Justice committees advance wartime protection and criminal deportation reforms in coordinated spring push." | -| "Analysis of 15 documents covering Filed by:, Published:" | "Opposition MPs challenge ministers on airport safety, defense costs, and migration policy through 15 targeted interpellations." | -| "Analysis of 10 documents covering Published:, Why It Matters:" | "Government submits four propositions on deportation, cybersecurity, arms exports, and healthcare — signaling spring security priorities." | - -#### Implementation: After article HTML is generated, the AI MUST: -1. Read article content to identify the 2-3 most important political developments -2. Write a 150-160 character summary highlighting political significance -3. Update `<meta name="description">` and `<meta property="og:description">` in the HTML -4. 
Verify no placeholder patterns remain (search for "Analysis of" + "documents covering") -```` - ---- - -## 📊 ANALYSIS FILE GITHUB REFERENCES — MANDATORY IN ALL ARTICLES - -> 🔴 **NON-NEGOTIABLE — TRANSPARENCY REQUIREMENT**: Every news article MUST contain a "📊 Analysis & Sources" section linking to ALL analysis files created for that article. This is the **#1 transparency and integrity requirement** — readers MUST be able to verify every claim by accessing the underlying analysis. Articles WITHOUT analysis references are **REJECTED**. - -> ⚠️ **DETERMINISTIC FIX**: The `scripts/fix-analysis-references.ts` script **deterministically injects** analysis references into any article missing them. Since analysis files and articles are created in the **same workflow run**, the script scans `analysis/daily/{date}/{subfolder}/` to discover exactly which files exist and builds properly localized links. **Every content workflow MUST run this script with `--rewrite` before validation.** The `--rewrite` flag also detects and replaces any existing analysis-references sections that contain broken links to non-existent files. - -> ```bash -> # 🔴 MANDATORY — run BEFORE validate-news-generation.sh in EVERY content workflow -> npx tsx scripts/fix-analysis-references.ts --date "$ARTICLE_DATE" --rewrite -> ``` - -````markdown -### 🔴 MANDATORY: Deterministic Analysis References Injection - -> **The `fix-analysis-references.ts` script is the primary guarantee.** It runs as a mandatory step in every content workflow, right before validation. It: -> 1. Scans ALL article HTML files for the date -> 2. Checks if each has `class="analysis-references"` -> 3. If missing, generates the section by scanning `analysis/daily/{date}/{subfolder}/` for actual `.md` files -> 4. Injects the localized HTML section before the article footer -> 5. With `--rewrite`: detects broken links in existing sections and regenerates them from filesystem scan -> 6. 
Is idempotent — safe to run multiple times - -```bash -# Run in every content workflow, right before validate-news-generation.sh: -npx tsx scripts/fix-analysis-references.ts --date "$ARTICLE_DATE" --rewrite - -# Or target a specific article type: -npx tsx scripts/fix-analysis-references.ts --date "$ARTICLE_DATE" --type committee-reports --rewrite - -# Dry run to preview changes: -npx tsx scripts/fix-analysis-references.ts --date "$ARTICLE_DATE" --rewrite --dry-run -``` - -### Manual Fallback: Verification Check - -> **If the script is unavailable**, verify manually: - -```bash -# MANDATORY — run after article generation, BEFORE committing -date +%Y-%m-%d > /tmp/today.txt -read ARTICLE_DATE < /tmp/today.txt -MISSING=0 -for FILE in news/$ARTICLE_DATE-*-en.html news/$ARTICLE_DATE-*-sv.html; do - if [ -f "$FILE" ]; then - if ! grep -q 'class="analysis-references"' "$FILE"; then - echo "🔴 MISSING analysis-references section in: $FILE — MUST FIX BEFORE COMMIT" - MISSING=$((MISSING + 1)) - fi - fi -done -if [ "$MISSING" -gt 0 ]; then - echo "🔴 $MISSING articles missing analysis references — FIX NOW" -fi -``` - -> **If analysis references are missing, ADD THEM before committing.** Use the template below with the correct `ARTICLE_DATE` and `ANALYSIS_SUBFOLDER` for this workflow's article type. 
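If neither the script nor the auto-generation path is available, the splice itself can be done by hand. A minimal sketch, assuming a POSIX shell with awk; the demo file, paths, and one-line section body are illustrative stand-ins, not the project's real template:

```shell
# Demo article standing in for a real news/*.html file (illustrative only)
FILE=/tmp/demo-article.html
printf '%s\n' '<html><body><h1>Demo</h1></body></html>' > "$FILE"

# Minimal stand-in for the full analysis-references template
SECTION='<section class="analysis-references"><h2>Analysis and Sources</h2></section>'

# Insert before </body> only when the marker class is missing (keeps it idempotent)
if ! grep -q 'class="analysis-references"' "$FILE"; then
  awk -v s="$SECTION" '{ sub(/<\/body>/, s "</body>"); print }' "$FILE" > "$FILE.tmp" \
    && mv "$FILE.tmp" "$FILE"
fi
grep -c 'class="analysis-references"' "$FILE"   # → 1
```

Running it twice leaves exactly one section, mirroring the idempotence guarantee of `fix-analysis-references.ts`.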
- -### When Analysis References Are Auto-Generated vs Manual - -| Scenario | Who Adds References | Action Required | -|----------|-------------------|-----------------| -| Article generated by `generate-news-enhanced.ts` AND HTML not rewritten | ✅ Auto-generated by TypeScript | Verify section exists (no action if present) | -| Article generated by `generate-news-enhanced.ts` BUT AI rewrites HTML sections | ⚠️ May be lost during rewrite | **MUST verify and re-add if missing** | -| Article generated manually by AI (evening-analysis, realtime-monitor) | ❌ NOT auto-generated | **MUST manually add the section** | -| Article translated from English source | Source article should have it | **Verify translated article preserves section** | - -### Reference Section Template (COPY THIS when adding manually) - -> 🚨 **AI agents: If `grep -q 'class="analysis-references"' article.html` returns false, INSERT this section before `</body>` or before `<footer`:** - -> **📘 Reference-grade files (v5.1)**: The baseline template below lists the **9 core files** present in every run. Runs that also produce reference-grade extension files — `README.md`, `executive-brief.md`, `scenario-analysis.md`, `comparative-international.md`, `methodology-reflection.md` — MUST include additional `<li>` entries for each, under a `🎯 Executive & Overview` / `🌍 Reference-Grade Extensions` subgroup. The `scripts/analysis-references.ts` script auto-discovers all `.md` files in the subfolder (and per-document files inside `documents/`) and emits them with localized labels, so running `npx tsx scripts/fix-analysis-references.ts --date "$ARTICLE_DATE" --rewrite` is the recommended path — it handles both core and reference-grade extensions automatically. See canonical example: `news/2026-04-17-breaking-1434-{en,sv}.html` linking all 18 files of `analysis/daily/2026-04-17/realtime-1434/`. 
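The auto-discovery behaviour described above can be sketched in shell. This is a simplified illustration of the idea, not the real `scripts/analysis-references.ts` logic; the fixture directory and file names are assumptions for the demo:

```shell
# Build one <li> per .md file in a (demo) analysis subfolder,
# using the absolute GitHub blob URL form required by this document.
BASE="https://github.com/Hack23/riksdagsmonitor/blob/main"
SUBDIR="analysis/daily/2026-04-17/realtime-1434"   # example path from the text
ROOT="/tmp/demo-repo"                              # fixture root; a real run scans the working tree

mkdir -p "$ROOT/$SUBDIR"
touch "$ROOT/$SUBDIR/swot-analysis.md" "$ROOT/$SUBDIR/synthesis-summary.md"

LIS=""
for f in "$ROOT/$SUBDIR"/*.md; do
  name=$(basename "$f")
  LIS="$LIS<li><a href=\"$BASE/$SUBDIR/$name\" rel=\"noopener noreferrer\">$name</a></li>
"
done
printf '%s' "$LIS"
```

A real implementation additionally maps each filename to a localized label; the sketch only shows why filesystem discovery plus absolute blob URLs covers both core files and reference-grade extensions.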
- -> **📁 Per-document analyses**: When `documents/` subdirectory exists, the script now renders **each per-document `.md` file as an individual `<li>`** (not just a folder link). Manually-authored sections MUST follow the same pattern — list every per-document file explicitly. - -> 🚨 **CANONICAL URL FORMAT — NO RELATIVE PATHS**: Every `href` to an analysis `.md` file **MUST** use the absolute GitHub blob URL `https://github.com/Hack23/riksdagsmonitor/blob/main/analysis/…`. **Relative paths** such as `href="../analysis/…"`, `href="../../analysis/…"`, or `href="analysis/…"` are **FORBIDDEN** in any article — they resolve to raw `.md` URLs under `riksdagsmonitor.com/` that do not render. This is enforced by: -> -> 1. `scripts/data-transformers/content-generators/ai-marker-helpers.ts` banned-pattern `relativeAnalysisHref` (validated by `scripts/check-banned-patterns.ts` in every content workflow). -> 2. `scripts/fix-analysis-references.ts --rewrite`, which detects and replaces any analysis-references section containing relative hrefs (regardless of whether the target file exists on disk). -> -> **Bad**: `<a href="../analysis/daily/2026-04-18/realtime-1705/README.md">` — served as `https://riksdagsmonitor.com/analysis/daily/2026-04-18/realtime-1705/README.md` → raw markdown / 404. -> -> **Good**: `<a href="https://github.com/Hack23/riksdagsmonitor/blob/main/analysis/daily/2026-04-18/realtime-1705/README.md">` — renders on GitHub. -> -> Canonical exemplar with correct URLs: `news/2026-04-18-weekly-review-en.html`. - -```html -<section class="analysis-references" aria-label="Analysis sources and methodology"> - <h2>📊 Analysis & Sources</h2> - <p>This article is based on AI-driven political intelligence analysis. 
Full methodology and analysis files:</p> - - <!-- 🎯 Executive & Overview — reference-grade runs only (skip group if files absent) --> - <h3>🎯 Executive & Overview</h3> - <ul> - <li><a href="https://github.com/Hack23/riksdagsmonitor/blob/main/analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER/README.md" rel="noopener noreferrer">🗂️ Dossier Index</a></li> - <li><a href="https://github.com/Hack23/riksdagsmonitor/blob/main/analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER/executive-brief.md" rel="noopener noreferrer">🎯 Executive Brief</a></li> - <li><a href="https://github.com/Hack23/riksdagsmonitor/blob/main/analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER/synthesis-summary.md" rel="noopener noreferrer">📋 Synthesis Summary</a></li> - </ul> - - <h3>🧭 Core Analysis — Six Frameworks</h3> - <ul> - <li><a href="https://github.com/Hack23/riksdagsmonitor/blob/main/analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER/swot-analysis.md" rel="noopener noreferrer">💪 SWOT Analysis</a></li> - <li><a href="https://github.com/Hack23/riksdagsmonitor/blob/main/analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER/risk-assessment.md" rel="noopener noreferrer">⚠️ Risk Assessment</a></li> - <li><a href="https://github.com/Hack23/riksdagsmonitor/blob/main/analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER/threat-analysis.md" rel="noopener noreferrer">🎭 Threat Analysis</a></li> - <li><a href="https://github.com/Hack23/riksdagsmonitor/blob/main/analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER/stakeholder-perspectives.md" rel="noopener noreferrer">👥 Stakeholder Perspectives</a></li> - <li><a href="https://github.com/Hack23/riksdagsmonitor/blob/main/analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER/significance-scoring.md" rel="noopener noreferrer">📈 Significance Scoring</a></li> - <li><a href="https://github.com/Hack23/riksdagsmonitor/blob/main/analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER/classification-results.md" rel="noopener noreferrer">🏷️ Classification Results</a></li> - </ul> - - <!-- 🌍 
Reference-Grade Extensions — include only for files that exist --> - <h3>🌍 Reference-Grade Extensions</h3> - <ul> - <li><a href="https://github.com/Hack23/riksdagsmonitor/blob/main/analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER/scenario-analysis.md" rel="noopener noreferrer">🎲 Scenario Analysis</a></li> - <li><a href="https://github.com/Hack23/riksdagsmonitor/blob/main/analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER/comparative-international.md" rel="noopener noreferrer">🌍 International Comparison</a></li> - <li><a href="https://github.com/Hack23/riksdagsmonitor/blob/main/analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER/cross-reference-map.md" rel="noopener noreferrer">🔗 Cross-Reference Map</a></li> - <li><a href="https://github.com/Hack23/riksdagsmonitor/blob/main/analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER/methodology-reflection.md" rel="noopener noreferrer">🔬 Methodology Reflection</a></li> - <li><a href="https://github.com/Hack23/riksdagsmonitor/blob/main/analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER/data-download-manifest.md" rel="noopener noreferrer">📥 Data Download Manifest</a></li> - </ul> - - <!-- 📁 Per-Document Analyses — enumerate every .md inside documents/ --> - <h3>📁 Per-Document Analyses</h3> - <ul> - <li><a href="https://github.com/Hack23/riksdagsmonitor/blob/main/analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER/documents/$DOC_ID-analysis.md" rel="noopener noreferrer">📄 $DOC_ID — <short description></a></li> - <!-- repeat for every .md in documents/ --> - </ul> - - <h3>🤖 Methodology</h3> - <ul> - <li><a href="https://github.com/Hack23/riksdagsmonitor/blob/main/analysis/methodologies/ai-driven-analysis-guide.md" rel="noopener noreferrer">🤖 AI Analysis Methodology</a></li> - </ul> -</section> -``` - -### For Evening Analysis — Link to ALL Article Types' Analysis - -Evening analysis articles MUST link to ALL analysis folders that exist for the date, not just the evening-analysis subfolder: - -```html -<section class="analysis-references" 
aria-label="Analysis sources and methodology"> - <h2>📊 Analysis & Sources</h2> - <p>This article synthesizes AI-driven political intelligence from all daily analyses. Full methodology and analysis files:</p> - <ul> - <li><a href="https://github.com/Hack23/riksdagsmonitor/tree/main/analysis/daily/$ARTICLE_DATE/evening-analysis" rel="noopener noreferrer">📋 Evening Analysis — Full Intelligence Package</a></li> - <li><a href="https://github.com/Hack23/riksdagsmonitor/tree/main/analysis/daily/$ARTICLE_DATE/propositions" rel="noopener noreferrer">📜 Propositions Analysis</a></li> - <li><a href="https://github.com/Hack23/riksdagsmonitor/tree/main/analysis/daily/$ARTICLE_DATE/committeeReports" rel="noopener noreferrer">📋 Committee Reports Analysis</a></li> - <li><a href="https://github.com/Hack23/riksdagsmonitor/tree/main/analysis/daily/$ARTICLE_DATE/motions" rel="noopener noreferrer">✊ Motions Analysis</a></li> - <li><a href="https://github.com/Hack23/riksdagsmonitor/tree/main/analysis/daily/$ARTICLE_DATE/interpellations" rel="noopener noreferrer">❓ Interpellations Analysis</a></li> - <!-- Include realtime-HHMM folders if they exist --> - <li><a href="https://github.com/Hack23/riksdagsmonitor/blob/main/analysis/methodologies/ai-driven-analysis-guide.md" rel="noopener noreferrer">🤖 AI Analysis Methodology</a></li> - </ul> -</section> -``` - -> **Only include links to analysis folders that actually exist** — check with `ls analysis/daily/$ARTICLE_DATE/` first. 
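The existence check in the note above can be made explicit. A sketch under demo assumptions (fixture date and directories are illustrative; a real run checks the actual checkout):

```shell
# Emit only the analysis subfolders that actually exist for the date.
ARTICLE_DATE="2026-04-17"        # demo date
ROOT="/tmp/demo-daily"           # stand-in for the repo checkout

# Fixture: pretend only two analysis types ran today
mkdir -p "$ROOT/analysis/daily/$ARTICLE_DATE/evening-analysis" \
         "$ROOT/analysis/daily/$ARTICLE_DATE/propositions"

FOUND=""
for SUB in evening-analysis propositions committeeReports motions interpellations; do
  if [ -d "$ROOT/analysis/daily/$ARTICLE_DATE/$SUB" ]; then
    FOUND="$FOUND $SUB"
  fi
done
echo "link only:$FOUND"   # → link only: evening-analysis propositions
```

Only folders present in `$FOUND` get a `<li>` in the evening-analysis section; the rest are silently omitted rather than linked as 404s.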
- -#### Article Type → Analysis Folder Mapping - -| Article Type | `$ANALYSIS_SUBFOLDER` | -|-------------|--------------------------| -| Committee Reports | `committeeReports` | -| Government Propositions | `propositions` | -| Interpellation Debates | `interpellations` | -| Opposition Motions | `motions` | -| Evening Analysis | `evening-analysis` (PLUS all other types' folders) | -| Breaking News / Realtime | `realtime-$HHMM` | -| Week Ahead | `week-ahead` | -| Month Ahead | `month-ahead` | -| Weekly Review | `weekly-review` | -| Monthly Review | `monthly-review` | -| Deep Inspection | `deep-inspection` | - -#### Implementation (TypeScript auto-generation + AI verification) - -The analysis references CAN be auto-generated by `scripts/analysis-references.ts`: -1. `generateAnalysisReferencesHtml({ date, articleType, lang })` scans the filesystem -2. Returns HTML string with links to ALL existing analysis files -3. Passed to `generateArticleHTML()` via `analysisReferencesHtml` field -4. Template renders the section between article content and footer - -**BUT** — AI agents MUST verify the section survives any HTML rewriting. If the section is missing after article generation/modification, the AI MUST add it manually using the template above. - -#### 🔴 MANDATORY Validation (run BEFORE every commit) -```bash -# 🔴 MANDATORY — every content workflow MUST run this before committing -date +%Y-%m-%d > /tmp/today.txt -read ARTICLE_DATE < /tmp/today.txt -MISSING=0 -for FILE in news/$ARTICLE_DATE-*.html; do - if [ -f "$FILE" ]; then - if ! 
grep -q 'class="analysis-references"' "$FILE"; then - echo "🔴 MISSING analysis-references in: $FILE" - MISSING=$((MISSING + 1)) - fi - fi -done -if [ "$MISSING" -gt 0 ]; then - echo "🔴 STOP: $MISSING articles missing analysis references — FIX BEFORE COMMITTING" - echo "🔴 Use the template from SHARED_PROMPT_PATTERNS.md §ANALYSIS FILE GITHUB REFERENCES" -fi -``` -```` - ---- - -## Minister-Response Cross-Reference (interpellations workflow only) - -```markdown -### Step 3b — Cross-Reference Minister Responses - -For each interpellation found: -1. Use `search_anforanden(talare=<minister-name>)` to fetch minister's response speech -2. Compare interpellation question with response to identify: - - Unanswered questions (accountability gap → government SWOT weakness) - - Evasive answers (opposition pressure → parliament SWOT opportunity) - - Policy commitments (government strength) - - Statistical claims (verify against SCB/World Bank data) -3. Assess response timeliness (4-week statutory deadline) -4. Include minister response summary in article body -5. Generate accountability scorecard per minister -``` diff --git a/.github/prompts/02-mcp-access.md b/.github/prompts/02-mcp-access.md index be77a5273..edec453d2 100644 --- a/.github/prompts/02-mcp-access.md +++ b/.github/prompts/02-mcp-access.md @@ -32,4 +32,4 @@ Run once at workflow start, then proceed — do not loop forever. ## Pre-warm step (CI job, not prompt) -Every news workflow declares a **single** `curl`-based pre-warm step with ≤ 6 retries, ≤ 20 s apart, total ≤ 2 minutes. No background keep-alive pingers. The `safeoutputs` session is kept alive by completing work inside its ~30-minute idle window, not by opening interim PRs. +Every news workflow declares a **single** `curl`-based pre-warm step with ≤ 6 retries, ≤ 20 s apart, total ≤ 2 minutes. No background pingers. The `safeoutputs` session is kept alive by completing work inside its ~30-minute idle window, not by opening interim PRs. 
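The pre-warm contract just described (one curl-based step, at most 6 attempts, up to 20 s apart, no background pingers) can be sketched as a small bounded-retry helper. This is an illustration rather than the workflow's actual step; the gateway URL is an assumption from the surrounding text, and the demo invocation uses a zero delay so the sketch finishes instantly:

```shell
# retry <max> <delay-seconds> <cmd...> — bounded retry, no keep-alive loop
retry() {
  max=$1; delay=$2; shift 2
  i=0
  while [ "$i" -lt "$max" ]; do
    i=$((i + 1))
    if "$@" >/dev/null 2>&1; then
      echo "ok after $i attempt(s)"
      return 0
    fi
    [ "$i" -lt "$max" ] && sleep "$delay"
  done
  echo "failed after $i attempt(s)"
  return 1
}

# Real pre-warm shape (URL assumed, not verified here):
#   retry 6 20 curl -fsS --max-time 10 "http://host.docker.internal:8080/mcp/riksdag-regering"

# Self-contained demo with zero delay:
retry 3 0 true    # → ok after 1 attempt(s)
```

With `retry 6 20 …` the worst case is 6 probes and 100 s of sleeping, safely inside the 2-minute budget.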
diff --git a/.github/skills/editorial-standards/SKILL.md b/.github/skills/editorial-standards/SKILL.md index 4ecb5ed58..e9944183f 100644 --- a/.github/skills/editorial-standards/SKILL.md +++ b/.github/skills/editorial-standards/SKILL.md @@ -181,7 +181,7 @@ Before any draft is shared for Gate 2 review, verify: 3. **Coverage completeness** — Every document with DIW-weighted score ≥ 7.0 receives a dedicated H3 section in article body. No silent omissions. 4. **Rhetorical tension** — When top-ranked findings carry opposing political valences (e.g., norm entrepreneurship abroad + norm compression at home), an explicit "Rhetorical Cross-Cluster Tension" or equivalent subsection addresses the contradiction. -Failure protocol: if any of 1–4 is not satisfied, the draft is returned to the writing agent with the specific missing element identified. **Doctrine**: `analysis/methodologies/ai-driven-analysis-guide.md` §"Rule 5: Democratic-Impact Weighting (DIW)". **Enforcement**: `SHARED_PROMPT_PATTERNS.md` §"🔴 MANDATORY: Lead-Story & Coverage-Completeness Gate". +Failure protocol: if any of 1–4 is not satisfied, the draft is returned to the writing agent with the specific missing element identified. **Doctrine**: `analysis/methodologies/ai-driven-analysis-guide.md` §"Rule 5: Democratic-Impact Weighting (DIW)". **Enforcement**: `analysis/methodologies/ai-driven-analysis-guide.md` §"Lead-Story & Coverage-Completeness Gate". 
## Error Correction Protocol diff --git a/.github/skills/gh-aw-workflow-authoring/SKILL.md b/.github/skills/gh-aw-workflow-authoring/SKILL.md index 15442cfa6..178e396b2 100644 --- a/.github/skills/gh-aw-workflow-authoring/SKILL.md +++ b/.github/skills/gh-aw-workflow-authoring/SKILL.md @@ -102,8 +102,13 @@ timeout-minutes: 10 # Execution timeout concurrency: # Concurrency control group: ${{ github.ref }} cancel-in-progress: true +imports: # Reusable prompt modules (resolved relative to workflow file) + - ../prompts/00-base-contract.md + - ../prompts/07-commit-and-pr.md ``` +**Factoring shared rules** — prefer `imports:` over inlining large prompt blocks. In this repo see `.github/prompts/` for a bounded-context example: 8 core modules + 1 Tier-C extension, each ≤ 300 lines, with a dependency matrix in the `README.md`. Workflow `.md` files stay ≤ 200 lines of body and contain only workflow-unique business rules. + ## 🛠️ Tools Configuration ### GitHub Tools diff --git a/.github/skills/github-agentic-workflows/SKILL.md b/.github/skills/github-agentic-workflows/SKILL.md index c537b42a0..6d0c616bd 100644 --- a/.github/skills/github-agentic-workflows/SKILL.md +++ b/.github/skills/github-agentic-workflows/SKILL.md @@ -946,6 +946,8 @@ imports: - code-style-rules.md ``` +Paths are resolved **relative to the workflow file**. At compile time `gh aw compile` rewrites each import as a `{{#runtime-import <path>}}` directive in the generated `.lock.yml`, which is then inlined into the prompt at run-time. Imports are the preferred way to factor shared rules out of individual workflows — see this repo's `.github/prompts/README.md` for a bounded-context example with 8 modules + a Tier-C extension. 
+ ### Labels (Organization) ```yaml diff --git a/.github/skills/riksdag-regering-mcp/SKILL.md b/.github/skills/riksdag-regering-mcp/SKILL.md index e56c21dde..6d1493989 100644 --- a/.github/skills/riksdag-regering-mcp/SKILL.md +++ b/.github/skills/riksdag-regering-mcp/SKILL.md @@ -129,7 +129,7 @@ get_calendar_events({ from: "2026-04-01", tom: "2026-04-30" }) // Calendar event > **⚠️ Tool names use underscores** (e.g., `get_sync_status`, NOT `get-sync-status`). > The gateway at `http://host.docker.internal:$MCP_GATEWAY_PORT/mcp/riksdag-regering` (port `8080` for gh-aw v0.69+, port `80` for legacy gh-aw <0.69 — resolved dynamically by `mcp-setup.sh`) handles routing. -> See `SHARED_PROMPT_PATTERNS.md` → "MCP Architecture & Tool Reference" for full tool list. +> See `.github/prompts/` (see the README for the module catalogue) → "MCP Architecture & Tool Reference" for full tool list. ## Examples (TypeScript) diff --git a/.github/workflows/compile-agentic-workflows.yml b/.github/workflows/compile-agentic-workflows.yml index d112d9ad1..ade7e93af 100644 --- a/.github/workflows/compile-agentic-workflows.yml +++ b/.github/workflows/compile-agentic-workflows.yml @@ -1,8 +1,7 @@ name: Compile Agentic Workflows # Compile .md workflow files to .lock.yml and commit all generated artifacts. -# SHARED_PROMPT_PATTERNS.md lives in .github/aw/ (not workflows/) so bare -# `gh aw compile` works without exclusions. +# News workflows import modules from .github/prompts/ (see that dir's README). 
on: workflow_dispatch: @@ -50,6 +49,69 @@ jobs: env: GH_TOKEN: ${{ secrets.COPILOT_MCP_GITHUB_PERSONAL_ACCESS_TOKEN || secrets.GITHUB_TOKEN }} + - name: Enforce prompt-module architecture + run: | + set -e + echo "🔍 Checking .github/prompts/ module line caps (≤ 300)…" + FAIL=0 + for f in .github/prompts/*.md .github/prompts/ext/*.md; do + [ -f "$f" ] || continue + LINES=$(wc -l < "$f") + if [ "$LINES" -gt 300 ]; then + echo " ❌ $f has $LINES lines (> 300)" + FAIL=1 + else + echo " ✅ $f ($LINES lines)" + fi + done + echo "" + echo "🔍 Checking .github/workflows/news-*.md body line caps (≤ 200 excluding frontmatter)…" + for f in .github/workflows/news-*.md; do + [ -f "$f" ] || continue + # Body = content after the second "^---$" line + BODY_LINES=$(awk 'BEGIN{c=0;out=0} /^---$/{c++;if(c==2){out=1;next}} out==1{print}' "$f" | wc -l) + if [ "$BODY_LINES" -gt 200 ]; then + echo " ❌ $f has body of $BODY_LINES lines (> 200)" + FAIL=1 + else + echo " ✅ $f (body $BODY_LINES lines)" + fi + done + echo "" + echo "🔍 Enforcing safe-outputs.create-pull-request.max: 1 on all news workflows…" + for f in .github/workflows/news-*.md; do + [ -f "$f" ] || continue + # Extract max value directly under create-pull-request block + MAX=$(awk ' + /^ create-pull-request:/ {in_block=1; next} + in_block && /^ [a-zA-Z]/ {in_block=0} + in_block && /^ max:/ {print $2; exit} + ' "$f") + if [ -z "$MAX" ]; then + echo " ⚠️ $f has no explicit create-pull-request.max (allowed: safeoutputs default behaves as 1)" + elif [ "$MAX" != "1" ]; then + echo " ❌ $f has create-pull-request.max: $MAX (must be 1)" + FAIL=1 + else + echo " ✅ $f (max: 1)" + fi + done + echo "" + echo "🔍 Scanning for banned multi-PR / heartbeat / keep-alive strings…" + BANNED='Heartbeat|keep-alive pinger|post-heartbeat rebase|🫀' + if grep -rInE "$BANNED" .github/workflows/news-*.md .github/prompts/ 2>/dev/null; then + echo " ❌ banned strings present" + FAIL=1 + else + echo " ✅ no banned strings" + fi + echo "" + if [ "$FAIL" -ne 0 ]; 
then + echo "❌ Prompt-module architecture check failed." + exit 1 + fi + echo "✅ Prompt-module architecture checks passed." + - name: Commit all generated files run: | git config user.name "github-actions[bot]" diff --git a/.github/workflows/economic-context-audit.yml b/.github/workflows/economic-context-audit.yml index 982a2b520..544f91964 100644 --- a/.github/workflows/economic-context-audit.yml +++ b/.github/workflows/economic-context-audit.yml @@ -7,7 +7,7 @@ # Scope: single hardened workflow — no MCP calls, no write tokens except # the one needed to open a maintenance issue. Follows the project # workflow security standards (least privilege, step-security/harden-runner, -# SHA-pinned actions — see .github/aw/SHARED_PROMPT_PATTERNS.md). +# SHA-pinned actions — see .github/prompts/ and ISMS CI/CD security guidance). # # Schema v2 cutover (2026-04-20 → 2026-05-31, grace window): # The validator currently accepts BOTH v1 artefacts (source.worldBank / diff --git a/.github/workflows/news-article-generator.lock.yml b/.github/workflows/news-article-generator.lock.yml index e922d3ac8..745cb3045 100644 --- a/.github/workflows/news-article-generator.lock.yml +++ b/.github/workflows/news-article-generator.lock.yml @@ -1,4 +1,4 @@ -# gh-aw-metadata: {"schema_version":"v3","frontmatter_hash":"8f96d4b69ff37a34cc02be546e5ea35dd03fb786c67b6ed0f814a7c607857888","compiler_version":"v0.69.2","agent_id":"copilot","agent_model":"claude-opus-4.7"} +# gh-aw-metadata: {"schema_version":"v3","frontmatter_hash":"4390716293e06a2a234a6472a5cc3a514d2dab3fb7f9e819765e27068ea9148e","compiler_version":"v0.69.2","agent_id":"copilot","agent_model":"claude-opus-4.7"} # gh-aw-manifest: 
{"version":1,"secrets":["COPILOT_GITHUB_TOKEN","GH_AW_CI_TRIGGER_TOKEN","GH_AW_GITHUB_MCP_SERVER_TOKEN","GH_AW_GITHUB_TOKEN","GITHUB_TOKEN"],"actions":[{"repo":"actions/checkout","sha":"de0fac2e4500dabe0009e67214ff5f5447ce83dd","version":"v6.0.2"},{"repo":"actions/download-artifact","sha":"3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c","version":"v8.0.1"},{"repo":"actions/github-script","sha":"373c709c69115d41ff229c7e5df9f8788daa9553","version":"v9"},{"repo":"actions/setup-node","sha":"6044e13b5dc448c55e2357c09f80417699197238","version":"6044e13b5dc448c55e2357c09f80417699197238"},{"repo":"actions/upload-artifact","sha":"043fb46d1a93c77aae656e7c1c64a875d1fc6a0a","version":"v7.0.1"},{"repo":"github/gh-aw-actions/setup","sha":"dca90cae5e2ec0ef2275f97efcb832793c86e082","version":"v0.69.2"}],"containers":[{"image":"alpine:latest"},{"image":"ghcr.io/github/gh-aw-firewall/agent:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/api-proxy:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/squid:0.25.26"},{"image":"ghcr.io/github/gh-aw-mcpg:v0.2.26"},{"image":"ghcr.io/github/github-mcp-server:v1.0.0"},{"image":"mcr.microsoft.com/playwright/mcp"},{"image":"node:25-alpine"},{"image":"node:lts-alpine"}]} # ___ _ _ # / _ \ | | (_) @@ -24,6 +24,18 @@ # # Manual-only multi-type article generator. For automated per-type generation, use the dedicated news-committee-reports, news-propositions, news-motions, news-week-ahead, news-month-ahead, news-weekly-review, news-monthly-review workflows. For translations, use news-translate workflow. 
# +# Resolved workflow manifest: +# Imports: +# - ../prompts/00-base-contract.md +# - ../prompts/01-bash-and-shell-safety.md +# - ../prompts/02-mcp-access.md +# - ../prompts/03-data-download.md +# - ../prompts/04-analysis-pipeline.md +# - ../prompts/05-analysis-gate.md +# - ../prompts/06-article-generation.md +# - ../prompts/07-commit-and-pr.md +# - ../prompts/ext/tier-c-aggregation.md +# # Secrets used: # - COPILOT_GITHUB_TOKEN # - GH_AW_CI_TRIGGER_TOKEN @@ -194,14 +206,6 @@ jobs: GH_AW_GITHUB_ACTOR: ${{ github.actor }} GH_AW_GITHUB_EVENT_COMMENT_ID: ${{ github.event.comment.id }} GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER: ${{ github.event.discussion.number }} - GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH: ${{ github.event.inputs.analysis_depth }} - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE: ${{ github.event.inputs.article_date }} - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_TYPES: ${{ github.event.inputs.article_types }} - GH_AW_GITHUB_EVENT_INPUTS_DOCUMENT_IDS: ${{ github.event.inputs.document_ids }} - GH_AW_GITHUB_EVENT_INPUTS_DOCUMENT_URLS: ${{ github.event.inputs.document_urls }} - GH_AW_GITHUB_EVENT_INPUTS_FOCUS_TOPIC: ${{ github.event.inputs.focus_topic }} - GH_AW_GITHUB_EVENT_INPUTS_FORCE_GENERATION: ${{ github.event.inputs.force_generation }} - GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES: ${{ github.event.inputs.languages }} GH_AW_GITHUB_EVENT_ISSUE_NUMBER: ${{ github.event.issue.number }} GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: ${{ github.event.pull_request.number }} GH_AW_GITHUB_REPOSITORY: ${{ github.repository }} @@ -212,9 +216,9 @@ jobs: run: | bash "${RUNNER_TEMP}/gh-aw/actions/create_prompt_first.sh" { - cat << 'GH_AW_PROMPT_a7db3439a5a2408d_EOF' + cat << 'GH_AW_PROMPT_0594bf2f1e4261d5_EOF' <system> - GH_AW_PROMPT_a7db3439a5a2408d_EOF + GH_AW_PROMPT_0594bf2f1e4261d5_EOF cat "${RUNNER_TEMP}/gh-aw/prompts/xpia.md" cat "${RUNNER_TEMP}/gh-aw/prompts/temp_folder_prompt.md" cat "${RUNNER_TEMP}/gh-aw/prompts/markdown.md" @@ -222,12 +226,12 @@ jobs: cat 
"${RUNNER_TEMP}/gh-aw/prompts/agentic_workflows_guide.md" cat "${RUNNER_TEMP}/gh-aw/prompts/repo_memory_prompt.md" cat "${RUNNER_TEMP}/gh-aw/prompts/safe_outputs_prompt.md" - cat << 'GH_AW_PROMPT_a7db3439a5a2408d_EOF' + cat << 'GH_AW_PROMPT_0594bf2f1e4261d5_EOF' <safe-output-tools> - Tools: add_comment, create_pull_request(max:2), dispatch_workflow, missing_tool, missing_data, noop - GH_AW_PROMPT_a7db3439a5a2408d_EOF + Tools: add_comment, create_pull_request, dispatch_workflow, missing_tool, missing_data, noop + GH_AW_PROMPT_0594bf2f1e4261d5_EOF cat "${RUNNER_TEMP}/gh-aw/prompts/safe_outputs_create_pull_request.md" - cat << 'GH_AW_PROMPT_a7db3439a5a2408d_EOF' + cat << 'GH_AW_PROMPT_0594bf2f1e4261d5_EOF' </safe-output-tools> <github-context> The following GitHub context information is available for this workflow: @@ -257,25 +261,26 @@ jobs: {{/if}} </github-context> - GH_AW_PROMPT_a7db3439a5a2408d_EOF + GH_AW_PROMPT_0594bf2f1e4261d5_EOF cat "${RUNNER_TEMP}/gh-aw/prompts/github_mcp_tools_with_safeoutputs_prompt.md" - cat << 'GH_AW_PROMPT_a7db3439a5a2408d_EOF' + cat << 'GH_AW_PROMPT_0594bf2f1e4261d5_EOF' </system> + {{#runtime-import .github/prompts/00-base-contract.md}} + {{#runtime-import .github/prompts/01-bash-and-shell-safety.md}} + {{#runtime-import .github/prompts/02-mcp-access.md}} + {{#runtime-import .github/prompts/03-data-download.md}} + {{#runtime-import .github/prompts/04-analysis-pipeline.md}} + {{#runtime-import .github/prompts/05-analysis-gate.md}} + {{#runtime-import .github/prompts/06-article-generation.md}} + {{#runtime-import .github/prompts/07-commit-and-pr.md}} + {{#runtime-import .github/prompts/ext/tier-c-aggregation.md}} {{#runtime-import .github/workflows/news-article-generator.md}} - GH_AW_PROMPT_a7db3439a5a2408d_EOF + GH_AW_PROMPT_0594bf2f1e4261d5_EOF } > "$GH_AW_PROMPT" - name: Interpolate variables and render templates uses: actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9 env: GH_AW_PROMPT: 
/tmp/gh-aw/aw-prompts/prompt.txt - GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH: ${{ github.event.inputs.analysis_depth }} - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE: ${{ github.event.inputs.article_date }} - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_TYPES: ${{ github.event.inputs.article_types }} - GH_AW_GITHUB_EVENT_INPUTS_DOCUMENT_IDS: ${{ github.event.inputs.document_ids }} - GH_AW_GITHUB_EVENT_INPUTS_DOCUMENT_URLS: ${{ github.event.inputs.document_urls }} - GH_AW_GITHUB_EVENT_INPUTS_FOCUS_TOPIC: ${{ github.event.inputs.focus_topic }} - GH_AW_GITHUB_EVENT_INPUTS_FORCE_GENERATION: ${{ github.event.inputs.force_generation }} - GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES: ${{ github.event.inputs.languages }} with: script: | const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); @@ -289,14 +294,6 @@ jobs: GH_AW_GITHUB_ACTOR: ${{ github.actor }} GH_AW_GITHUB_EVENT_COMMENT_ID: ${{ github.event.comment.id }} GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER: ${{ github.event.discussion.number }} - GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH: ${{ github.event.inputs.analysis_depth }} - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE: ${{ github.event.inputs.article_date }} - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_TYPES: ${{ github.event.inputs.article_types }} - GH_AW_GITHUB_EVENT_INPUTS_DOCUMENT_IDS: ${{ github.event.inputs.document_ids }} - GH_AW_GITHUB_EVENT_INPUTS_DOCUMENT_URLS: ${{ github.event.inputs.document_urls }} - GH_AW_GITHUB_EVENT_INPUTS_FOCUS_TOPIC: ${{ github.event.inputs.focus_topic }} - GH_AW_GITHUB_EVENT_INPUTS_FORCE_GENERATION: ${{ github.event.inputs.force_generation }} - GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES: ${{ github.event.inputs.languages }} GH_AW_GITHUB_EVENT_ISSUE_NUMBER: ${{ github.event.issue.number }} GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: ${{ github.event.pull_request.number }} GH_AW_GITHUB_REPOSITORY: ${{ github.repository }} @@ -322,14 +319,6 @@ jobs: GH_AW_GITHUB_ACTOR: process.env.GH_AW_GITHUB_ACTOR, GH_AW_GITHUB_EVENT_COMMENT_ID: 
process.env.GH_AW_GITHUB_EVENT_COMMENT_ID, GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER: process.env.GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER, - GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH: process.env.GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH, - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE: process.env.GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE, - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_TYPES: process.env.GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_TYPES, - GH_AW_GITHUB_EVENT_INPUTS_DOCUMENT_IDS: process.env.GH_AW_GITHUB_EVENT_INPUTS_DOCUMENT_IDS, - GH_AW_GITHUB_EVENT_INPUTS_DOCUMENT_URLS: process.env.GH_AW_GITHUB_EVENT_INPUTS_DOCUMENT_URLS, - GH_AW_GITHUB_EVENT_INPUTS_FOCUS_TOPIC: process.env.GH_AW_GITHUB_EVENT_INPUTS_FOCUS_TOPIC, - GH_AW_GITHUB_EVENT_INPUTS_FORCE_GENERATION: process.env.GH_AW_GITHUB_EVENT_INPUTS_FORCE_GENERATION, - GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES: process.env.GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES, GH_AW_GITHUB_EVENT_ISSUE_NUMBER: process.env.GH_AW_GITHUB_EVENT_ISSUE_NUMBER, GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: process.env.GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER, GH_AW_GITHUB_REPOSITORY: process.env.GH_AW_GITHUB_REPOSITORY, @@ -429,7 +418,7 @@ jobs: run: | npm ci --prefer-offline --no-audit - name: Pre-warm MCP server (Render.com cold start mitigation) - run: "echo \"🔥 Pre-warming riksdag-regering MCP server via MCP protocol...\"\nMCP_URL=\"https://riksdag-regering-ai.onrender.com/mcp\"\nWARM=false\nfor i in 1 2 3 4 5 6; do\n RESP=$(curl -sf --max-time 30 -X POST \\\n -H \"Content-Type: application/json\" \\\n -d '{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\",\"params\":{}}' \\\n \"$MCP_URL\" 2>/dev/null) || true\n if echo \"$RESP\" | grep -q '\"tools\"'; then\n TOOL_COUNT=$(echo \"$RESP\" | grep -o '\"name\"' | wc -l)\n echo \"✅ MCP server responded on attempt $i with $TOOL_COUNT tools registered\"\n WARM=true\n break\n fi\n echo \"⏳ Attempt $i/6 — server may be cold-starting, waiting 20s...\"\n sleep 20\ndone\nif [ \"$WARM\" = \"false\" ]; then\n echo \"⚠️ MCP server did 
not respond after 6 attempts — agent will retry via in-prompt health gate\"\nfi\necho \"🔄 Starting background keep-alive pinger (every 30s, max 55 min — covers full 60-min workflow through safe-output PR creation)...\"\nKEEP_ALIVE_START=$(date +%s)\nKEEP_ALIVE_END=$((KEEP_ALIVE_START + 3300))\nexport MCP_URL KEEP_ALIVE_END\nnohup bash -c '\n while :; do\n NOW=$(date +%s)\n if [ \"$NOW\" -ge \"$KEEP_ALIVE_END\" ]; then\n break\n fi\n curl -sf --max-time 10 -X POST \\\n -H \"Content-Type: application/json\" \\\n -d \"{\\\"jsonrpc\\\":\\\"2.0\\\",\\\"id\\\":1,\\\"method\\\":\\\"tools/list\\\",\\\"params\\\":{}}\" \\\n \"$MCP_URL\" -o /dev/null 2>/dev/null || true\n sleep 30\n done\n' </dev/null >/tmp/mcp-keepalive.log 2>&1 &\nKEEP_ALIVE_PID=$!\ndisown \"$KEEP_ALIVE_PID\" 2>/dev/null || true\necho \"Keep-alive PID: $KEEP_ALIVE_PID (auto-exits after 55 min; log: /tmp/mcp-keepalive.log)\"\n" + run: "echo \"🔥 Pre-warming riksdag-regering MCP server via MCP protocol...\"\nMCP_URL=\"https://riksdag-regering-ai.onrender.com/mcp\"\nWARM=false\nfor i in 1 2 3 4 5 6; do\n RESP=$(curl -sf --max-time 30 -X POST \\\n -H \"Content-Type: application/json\" \\\n -d '{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\",\"params\":{}}' \\\n \"$MCP_URL\" 2>/dev/null) || true\n if echo \"$RESP\" | grep -q '\"tools\"'; then\n TOOL_COUNT=$(echo \"$RESP\" | grep -o '\"name\"' | wc -l)\n echo \"✅ MCP server responded on attempt $i with $TOOL_COUNT tools registered\"\n WARM=true\n break\n fi\n echo \"⏳ Attempt $i/6 — server may be cold-starting, waiting 20s...\"\n sleep 20\ndone\nif [ \"$WARM\" = \"false\" ]; then\n echo \"⚠️ MCP server did not respond after 6 attempts — agent will retry via in-prompt health gate\"\nfi\n" - name: Pre-flight external endpoint reachability check (runs before MCP Gateway) run: "echo \"🔍 Network Diagnostics — $(date -u '+%Y-%m-%dT%H:%M:%SZ')\"\necho \"═══════════════════════════════════════════\"\necho \"\"\necho \"📡 DNS Resolution Tests:\"\nfor domain in 
riksdag-regering-ai.onrender.com api.scb.se api.worldbank.org data.riksdagen.se www.riksdagen.se www.regeringen.se; do\n if nslookup \"$domain\" >/dev/null 2>&1; then\n IP=$(nslookup \"$domain\" 2>/dev/null | grep -A1 \"Name:\" | grep \"Address:\" | head -1 | awk '{print $2}')\n echo \" ✅ $domain → $IP\"\n else\n echo \" ❌ $domain — DNS FAILED\"\n fi\ndone\necho \"\"\necho \"🌐 HTTPS Connectivity Tests:\"\nfor url in \\\n \"https://riksdag-regering-ai.onrender.com/mcp\" \\\n \"https://api.scb.se/OV0104/v2beta\" \\\n \"https://api.worldbank.org/v2/country/SE?format=json\" \\\n \"https://data.riksdagen.se/dokumentlista/?sok=test&doktyp=bet&utformat=json&a=1\" \\\n; do\n HTTP_CODE=$(curl -s -o /dev/null -w \"%{http_code}\" --max-time 10 \"$url\" 2>/dev/null || echo \"000\")\n DOMAIN=$(echo \"$url\" | sed 's|https://||' | cut -d/ -f1)\n if [ \"$HTTP_CODE\" -ge 200 ] && [ \"$HTTP_CODE\" -lt 400 ]; then\n echo \" ✅ $DOMAIN → HTTP $HTTP_CODE\"\n elif [ \"$HTTP_CODE\" = \"000\" ]; then\n echo \" ❌ $DOMAIN → TIMEOUT/UNREACHABLE\"\n else\n echo \" ⚠️ $DOMAIN → HTTP $HTTP_CODE\"\n fi\ndone\necho \"\"\necho \"🔌 MCP Server Tool Count:\"\nTOOL_RESP=$(curl -sf --max-time 15 -X POST \\\n -H \"Content-Type: application/json\" \\\n -d '{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\",\"params\":{}}' \\\n \"https://riksdag-regering-ai.onrender.com/mcp\" 2>/dev/null) || TOOL_RESP=\"\"\nif echo \"$TOOL_RESP\" | grep -q '\"tools\"'; then\n TOOL_COUNT=$(echo \"$TOOL_RESP\" | grep -o '\"name\"' | wc -l)\n echo \" ✅ riksdag-regering MCP: $TOOL_COUNT tools registered\"\nelse\n echo \" ❌ riksdag-regering MCP: No tools response (server may still be starting)\"\nfi\necho \"\"\necho \"═══════════════════════════════════════════\"\n" @@ -517,16 +506,16 @@ jobs: mkdir -p "${RUNNER_TEMP}/gh-aw/safeoutputs" mkdir -p /tmp/gh-aw/safeoutputs mkdir -p /tmp/gh-aw/mcp-logs/safeoutputs - cat > "${RUNNER_TEMP}/gh-aw/safeoutputs/config.json" << 'GH_AW_SAFE_OUTPUTS_CONFIG_1dc84e5d7a1de534_EOF' - 
{"add_comment":{"max":1},"create_pull_request":{"draft":false,"expires":336,"labels":["agentic-news","analysis-data"],"max":2,"max_patch_size":4096,"protected_files":["package.json","bun.lockb","bunfig.toml","deno.json","deno.jsonc","deno.lock","global.json","NuGet.Config","Directory.Packages.props","mix.exs","mix.lock","go.mod","go.sum","stack.yaml","stack.yaml.lock","pom.xml","build.gradle","build.gradle.kts","settings.gradle","settings.gradle.kts","gradle.properties","package-lock.json","yarn.lock","pnpm-lock.yaml","npm-shrinkwrap.json","requirements.txt","Pipfile","Pipfile.lock","pyproject.toml","setup.py","setup.cfg","Gemfile","Gemfile.lock","uv.lock","CODEOWNERS","AGENTS.md","CLAUDE.md","GEMINI.md"],"protected_path_prefixes":[".github/",".agents/"]},"create_report_incomplete_issue":{},"dispatch_workflow":{"max":1,"workflow_files":{"news-translate":".lock.yml"},"workflows":["news-translate"]},"missing_data":{},"missing_tool":{},"noop":{"max":1,"report-as-issue":"true"},"push_repo_memory":{"memories":[{"dir":"/tmp/gh-aw/repo-memory/default","id":"default","max_file_count":50,"max_file_size":51200,"max_patch_size":51200}]},"report_incomplete":{}} - GH_AW_SAFE_OUTPUTS_CONFIG_1dc84e5d7a1de534_EOF + cat > "${RUNNER_TEMP}/gh-aw/safeoutputs/config.json" << 'GH_AW_SAFE_OUTPUTS_CONFIG_74b57913640f4749_EOF' + 
{"add_comment":{"max":1},"create_pull_request":{"draft":false,"expires":336,"labels":["agentic-news","analysis-data"],"max":1,"max_patch_size":4096,"protected_files":["package.json","bun.lockb","bunfig.toml","deno.json","deno.jsonc","deno.lock","global.json","NuGet.Config","Directory.Packages.props","mix.exs","mix.lock","go.mod","go.sum","stack.yaml","stack.yaml.lock","pom.xml","build.gradle","build.gradle.kts","settings.gradle","settings.gradle.kts","gradle.properties","package-lock.json","yarn.lock","pnpm-lock.yaml","npm-shrinkwrap.json","requirements.txt","Pipfile","Pipfile.lock","pyproject.toml","setup.py","setup.cfg","Gemfile","Gemfile.lock","uv.lock","CODEOWNERS","AGENTS.md","CLAUDE.md","GEMINI.md"],"protected_path_prefixes":[".github/",".agents/"]},"create_report_incomplete_issue":{},"dispatch_workflow":{"max":1,"workflow_files":{"news-translate":".lock.yml"},"workflows":["news-translate"]},"missing_data":{},"missing_tool":{},"noop":{"max":1,"report-as-issue":"true"},"push_repo_memory":{"memories":[{"dir":"/tmp/gh-aw/repo-memory/default","id":"default","max_file_count":50,"max_file_size":51200,"max_patch_size":51200}]},"report_incomplete":{}} + GH_AW_SAFE_OUTPUTS_CONFIG_74b57913640f4749_EOF - name: Write Safe Outputs Tools env: GH_AW_TOOLS_META_JSON: | { "description_suffixes": { "add_comment": " CONSTRAINTS: Maximum 1 comment(s) can be added. Supports reply_to_id for discussion threading.", - "create_pull_request": " CONSTRAINTS: Maximum 2 pull request(s) can be created. Labels [\"agentic-news\" \"analysis-data\"] will be automatically added." + "create_pull_request": " CONSTRAINTS: Maximum 1 pull request(s) can be created. Labels [\"agentic-news\" \"analysis-data\"] will be automatically added." 
}, "repo_params": {}, "dynamic_tools": [ @@ -787,7 +776,7 @@ jobs: mkdir -p /home/runner/.copilot GH_AW_NODE=$(which node 2>/dev/null || command -v node 2>/dev/null || echo node) - cat << GH_AW_MCP_CONFIG_b9a0b7948ad94949_EOF | "$GH_AW_NODE" "${RUNNER_TEMP}/gh-aw/actions/start_mcp_gateway.cjs" + cat << GH_AW_MCP_CONFIG_43773805ac441abf_EOF | "$GH_AW_NODE" "${RUNNER_TEMP}/gh-aw/actions/start_mcp_gateway.cjs" { "mcpServers": { "agenticworkflows": { @@ -917,7 +906,7 @@ jobs: "payloadDir": "${MCP_GATEWAY_PAYLOAD_DIR}" } } - GH_AW_MCP_CONFIG_b9a0b7948ad94949_EOF + GH_AW_MCP_CONFIG_43773805ac441abf_EOF - name: Download activation artifact uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1 with: @@ -1604,7 +1593,7 @@ jobs: GH_AW_ALLOWED_DOMAINS: "*.githubusercontent.com,api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.imf.org,api.individual.githubcopilot.com,api.npms.io,api.scb.se,api.snapcraft.io,api.worldbank.org,archive.ubuntu.com,azure.archive.ubuntu.com,bun.sh,cdn.jsdelivr.net,cdn.playwright.dev,codeload.github.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,data.imf.org,data.riksdagen.se,deb.nodesource.com,deno.land,docs.github.com,esm.sh,get.pnpm.io,github-cloud.githubusercontent.com,github-cloud.s3.amazonaws.com,github.blog,github.com,github.githubassets.com,googleapis.deno.dev,googlechromelabs.github.io,hack23.com,hack23.github.io,host.docker.internal,json-schema.org,json.schemastore.org,jsr.io,keyserver.ubuntu.com,lfs.github.com,localhost,nodejs.org,npm.pkg.github.com,npmjs.com,npmjs.org,objects.githubusercontent.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,playwright.download
.prss.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,regeringen.se,registry.bower.io,registry.npmjs.com,registry.npmjs.org,registry.yarnpkg.com,repo.yarnpkg.com,riksdag-regering-ai.onrender.com,riksdagen.se,riksdagsmonitor.com,s.symcb.com,s.symcd.com,security.ubuntu.com,skimdb.npmjs.com,storage.googleapis.com,telemetry.enterprise.githubcopilot.com,telemetry.vercel.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com,www.googleapis.com,www.hack23.com,www.imf.org,www.npmjs.com,www.npmjs.org,www.regeringen.se,www.riksdagen.se,www.riksdagsmonitor.com,www.scb.se,yarnpkg.com" GITHUB_SERVER_URL: ${{ github.server_url }} GITHUB_API_URL: ${{ github.api_url }} - GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: "{\"add_comment\":{\"max\":1},\"create_pull_request\":{\"draft\":false,\"expires\":336,\"labels\":[\"agentic-news\",\"analysis-data\"],\"max\":2,\"max_patch_size\":4096,\"protected_files\":[\"package.json\",\"bun.lockb\",\"bunfig.toml\",\"deno.json\",\"deno.jsonc\",\"deno.lock\",\"global.json\",\"NuGet.Config\",\"Directory.Packages.props\",\"mix.exs\",\"mix.lock\",\"go.mod\",\"go.sum\",\"stack.yaml\",\"stack.yaml.lock\",\"pom.xml\",\"build.gradle\",\"build.gradle.kts\",\"settings.gradle\",\"settings.gradle.kts\",\"gradle.properties\",\"package-lock.json\",\"yarn.lock\",\"pnpm-lock.yaml\",\"npm-shrinkwrap.json\",\"requirements.txt\",\"Pipfile\",\"Pipfile.lock\",\"pyproject.toml\",\"setup.py\",\"setup.cfg\",\"Gemfile\",\"Gemfile.lock\",\"uv.lock\",\"CODEOWNERS\",\"AGENTS.md\",\"CLAUDE.md\",\"GEMINI.md\"],\"protected_path_prefixes\":[\".github/\",\".agents/\"]},\"create_report_incomplete_issue\":{},\"dispatch_workflow\":{\"max\":1,\"workflow_files\":{\"news-translate\":\".lock.yml\"},\"workflows\":[\"news-translate\"]},\"missing_data\":{},\"missing_tool\":{},\"noop\":{\"max\":1,\"report-as-issue\":\"true\"},\"report_incomplete\":{}}" + GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: 
"{\"add_comment\":{\"max\":1},\"create_pull_request\":{\"draft\":false,\"expires\":336,\"labels\":[\"agentic-news\",\"analysis-data\"],\"max\":1,\"max_patch_size\":4096,\"protected_files\":[\"package.json\",\"bun.lockb\",\"bunfig.toml\",\"deno.json\",\"deno.jsonc\",\"deno.lock\",\"global.json\",\"NuGet.Config\",\"Directory.Packages.props\",\"mix.exs\",\"mix.lock\",\"go.mod\",\"go.sum\",\"stack.yaml\",\"stack.yaml.lock\",\"pom.xml\",\"build.gradle\",\"build.gradle.kts\",\"settings.gradle\",\"settings.gradle.kts\",\"gradle.properties\",\"package-lock.json\",\"yarn.lock\",\"pnpm-lock.yaml\",\"npm-shrinkwrap.json\",\"requirements.txt\",\"Pipfile\",\"Pipfile.lock\",\"pyproject.toml\",\"setup.py\",\"setup.cfg\",\"Gemfile\",\"Gemfile.lock\",\"uv.lock\",\"CODEOWNERS\",\"AGENTS.md\",\"CLAUDE.md\",\"GEMINI.md\"],\"protected_path_prefixes\":[\".github/\",\".agents/\"]},\"create_report_incomplete_issue\":{},\"dispatch_workflow\":{\"max\":1,\"workflow_files\":{\"news-translate\":\".lock.yml\"},\"workflows\":[\"news-translate\"]},\"missing_data\":{},\"missing_tool\":{},\"noop\":{\"max\":1,\"report-as-issue\":\"true\"},\"report_incomplete\":{}}" GH_AW_CI_TRIGGER_TOKEN: ${{ secrets.GH_AW_CI_TRIGGER_TOKEN }} with: github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }} diff --git a/.github/workflows/news-article-generator.md b/.github/workflows/news-article-generator.md index 89274f71c..7475f309a 100644 --- a/.github/workflows/news-article-generator.md +++ b/.github/workflows/news-article-generator.md @@ -2,6 +2,16 @@ name: "News: Article Generator (Manual)" description: Manual-only multi-type article generator. For automated per-type generation, use the dedicated news-committee-reports, news-propositions, news-motions, news-week-ahead, news-month-ahead, news-weekly-review, news-monthly-review workflows. For translations, use news-translate workflow. 
strict: false # Allow custom network domain riksdag-regering-ai.onrender.com (trusted MCP server) +imports: + - ../prompts/00-base-contract.md + - ../prompts/01-bash-and-shell-safety.md + - ../prompts/02-mcp-access.md + - ../prompts/03-data-download.md + - ../prompts/04-analysis-pipeline.md + - ../prompts/05-analysis-gate.md + - ../prompts/06-article-generation.md + - ../prompts/07-commit-and-pr.md + - ../prompts/ext/tier-c-aggregation.md on: workflow_dispatch: inputs: @@ -128,7 +138,7 @@ safe-outputs: labels: [agentic-news, analysis-data] draft: false expires: 14d - max: 2 + max: 1 add-comment: {} dispatch-workflow: workflows: [news-translate] @@ -166,26 +176,6 @@ steps: if [ "$WARM" = "false" ]; then echo "⚠️ MCP server did not respond after 6 attempts — agent will retry via in-prompt health gate" fi - echo "🔄 Starting background keep-alive pinger (every 30s, max 55 min — covers full 60-min workflow through safe-output PR creation)..." - KEEP_ALIVE_START=$(date +%s) - KEEP_ALIVE_END=$((KEEP_ALIVE_START + 3300)) - export MCP_URL KEEP_ALIVE_END - nohup bash -c ' - while :; do - NOW=$(date +%s) - if [ "$NOW" -ge "$KEEP_ALIVE_END" ]; then - break - fi - curl -sf --max-time 10 -X POST \ - -H "Content-Type: application/json" \ - -d "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\",\"params\":{}}" \ - "$MCP_URL" -o /dev/null 2>/dev/null || true - sleep 30 - done - ' </dev/null >/tmp/mcp-keepalive.log 2>&1 & - KEEP_ALIVE_PID=$! - disown "$KEEP_ALIVE_PID" 2>/dev/null || true - echo "Keep-alive PID: $KEEP_ALIVE_PID (auto-exits after 55 min; log: /tmp/mcp-keepalive.log)" - name: Pre-flight external endpoint reachability check (runs before MCP Gateway) run: | @@ -239,844 +229,51 @@ engine: model: claude-opus-4.7 --- -# 📰 News Article Generator Agent - -You are the **News Journalist Agent** for Riksdagsmonitor. Generate high-quality political journalism using the **purpose-built TypeScript generation scripts**. 
- -## 🔴 CRITICAL: AI Writes ALL Content with Iterative Improvement (v5.0) - -> **You are a political intelligence analyst, NOT a script executor.** Your PRIMARY job is to produce excellent quality political intelligence through iterative improvement. You MUST: -> 1. **ANALYZE** parliamentary data deeply — SWOT, stakeholder perspectives, risk assessment, election implications -> 2. **WRITE** genuine political intelligence articles with specific actors, evidence citations, and analytical insight -> 3. **USE** the script (`generate-news-enhanced.ts`) ONLY for HTML formatting — the script creates a shell, YOU fill it with analysis -> 4. **ITERATE** — read ALL your output back completely and IMPROVE every section (minimum 2 full passes) -> 5. **SPEND THE FULL TIME** — use at least 45 of the 60 allocated minutes doing real work -> -> 🔴 **2+ PASSES MANDATORY**: Analysis Pass 1 (15 min) → Analysis Pass 2 improvement (7 min) → Article Pass 1 (10 min) → Article Pass 2 improvement (8 min). NEVER complete early. - -## 🔧 Workflow Dispatch Parameters - -- **article_types** = `${{ github.event.inputs.article_types }}` -- **force_generation** = `${{ github.event.inputs.force_generation }}` -- **languages** = `${{ github.event.inputs.languages }}` -- **document_ids** = `${{ github.event.inputs.document_ids }}` -- **document_urls** = `${{ github.event.inputs.document_urls }}` -- **focus_topic** = `${{ github.event.inputs.focus_topic }}` -- **analysis_depth** = `${{ github.event.inputs.analysis_depth }}` - -**Rules:** -1. If **article_types** is non-empty, generate ONLY those types. Do NOT fall back to day-of-week schedule. -2. If **article_types** is empty/blank, use day-of-week schedule (see Step 2). -3. If **force_generation** is `true`, generate articles even if recent articles exist. Note: the full deep political analysis phase (15-20 minutes) runs on EVERY invocation regardless of this flag. -4. If **languages** is empty/blank, default to `all` (14 languages). -5. 
If **article_types** includes `deep-inspection`, use **document_ids**, **document_urls**, and **focus_topic** for targeted deep analysis. **`document_ids` must be actual Riksdag dok_id values** (e.g. `H901FöU1,GZ01KU1`) — NOT search queries or wildcards. Use the riksdag-regering MCP tools first to find the correct IDs, then pass them. -6. For `deep-inspection` type: pass `--document-ids=<value>`, `--document-urls=<value>`, and `--focus-topic=<value>` flags to the generation script. **The script generates the following sections — these are ONLY available via the script, never via manual fallback:** - - **Multi-stakeholder SWOT analysis** (Government, Parliament, Civil Society perspectives) - - **Document Intelligence Dashboard** — Chart.js bar chart of document-type distribution - - **Sankey flow chart** (SVG, no JS) — initiating actors → document types (only when ≥ 2 document types detected) - - **Color-coded CSS Mindmap** — topic → detected policy domains → stakeholders → data sources - - **World Bank Economic Dashboard** — auto-selected Nordic comparison charts based on detected policy domains (fiscal, labour, defence, healthcare, etc.). 144 indicators available covering all 12 Riksdag committees — see `analysis/worldbank/indicators-inventory.json` for full inventory. Chart types: `economic-comparison`, `economic-trend`, `nordic-radar`. See `SHARED_PROMPT_PATTERNS.md` §"WORLD BANK ECONOMIC CONTEXT INTEGRATION". - - **5W Deep-Analysis section** — Who/What/When/Why/Winners–Losers narrative - - **URL handling for `document_urls`:** - - **riksdagen.se / data.riksdagen.se URLs** → auto-resolved to dok_id, fetched via `get_dokument` - - **regeringen.se URLs** (e.g. press releases, SOUs, government decisions) → fetched via `get_g0v_document_content` MCP tool. The content is included as a government-source document in the analysis. 
**This is the primary mechanism for analyzing government press releases, SOUs, and other regeringen.se content.** - - **github.com / raw.githubusercontent.com URLs** → converted to raw URL, fetched as text content. Used for **comparison/reference analysis** (e.g. linking Hack23 ISMS strategy, security policies, or other reference documents for comparison against government policy). The `blob/` path is automatically converted to raw content URL. - - **Other URLs** → logged as warnings (not currently supported) - - **Example deep-inspection dispatch for cybersecurity strategy comparison:** - ``` - article_types: deep-inspection - document_urls: https://www.regeringen.se/pressmeddelanden/2026/03/91-atgarder-ska-starka-sveriges-motstandskraft-mot-cyberhot/,https://github.com/Hack23/ISMS-PUBLIC/blob/main/Information_Security_Strategy.md - focus_topic: cyber security, cyberthreats, threatlandscape, cyber security strategy, ai future, ai security, hack23 - ``` - This will: (1) fetch the 91-measure plan via g0v, (2) fetch Hack23 ISMS strategy from GitHub, (3) generate SWOT comparing government strategy with private-sector reference, (4) focus all analysis through the cybersecurity + AI lens. 
- - Data sources automatically integrated into deep-inspection articles: - - **Riksdag MCP** — propositions, committee reports, motions, laws (SFS), EU position papers - - **Government MCP (g0v)** — regeringen.se press releases, SOUs, government decisions (via `get_g0v_document_content`) - - **GitHub raw content** — external reference documents (strategy docs, ISMS policies, compliance frameworks) for comparison analysis - - **World Bank MCP** (`api.worldbank.org`) — economic indicators for matching policy domains - - **SCB MCP** (`api.scb.se`) — Swedish statistics context for matching policy domains - - **CIA-data** (JSON exports) — when loaded via `--document-urls` pointing to CIA exports - -## ⚠️ CRITICAL: Bash Tool Call Format - -**Every `bash` tool call MUST include both required parameters — omitting either causes validation errors:** - -| Parameter | Required | Description | -|-----------|----------|-------------| -| `command` | ✅ YES | The shell command string to execute | -| `description` | ✅ YES | Short human-readable label (≤100 chars) | - -**✅ CORRECT** — always provide both `command` and `description`: -``` -bash({ command: "date -u '+%Y-%m-%d'", description: "Get current UTC date" }) -bash({ command: "npm ci --prefer-offline --no-audit", description: "Install npm dependencies" }) -bash({ command: "npx htmlhint 'news/*-*.html'", description: "Validate HTML files" }) -``` - -**❌ WRONG** — missing parameters cause `"command": Required, "description": Required` errors: -``` -bash("npm ci") // ← WRONG: no named parameters -bash({ command: "..." }) // ← WRONG: missing description -``` - -> When you see fenced bash code blocks below (three backticks followed by bash), they show the **command content** to execute. You MUST wrap each in a proper bash tool call with both `command` and `description` parameters. For multi-line scripts, join commands with `&&` or `;` into a single `command` string. 
## 🛡️ AWF Shell Safety — MANDATORY for Agent-Generated Bash

> See `SHARED_PROMPT_PATTERNS.md` §"AWF Shell Safety" for rules. Key: use `$VAR` (no braces), `find -exec` (no `$(cmd)`), set defaults with `if/then`.

## 🔤 UTF-8 Encoding — MANDATORY for ALL Content

> **NON-NEGOTIABLE**: All article content, titles, descriptions, and metadata MUST use native UTF-8 characters. NEVER use HTML numeric entities (`&#228;`, `&#246;`, `&#229;`) for non-ASCII characters like Swedish åäö, German üö, French éè, etc.

**Rules:**
1. Write Swedish characters as UTF-8: `ö`, `ä`, `å`, `Ö`, `Ä`, `Å` — NEVER as `&#246;`, `&#228;`, etc.
2. Author name: Always `James Pether Sörling` — never `S&#246;rling`.
3. All HTML files use `<meta charset="UTF-8">` — entities are unnecessary and cause double-escaping bugs.
4. This applies to ALL languages and ALL output: titles, meta tags, JSON-LD, article body, analysis files.

## ⚠️ NON-NEGOTIABLE RULES

1. Every run **MUST** end with exactly one safe output tool call:
   - Articles generated → `safeoutputs___create_pull_request({...})`
   - Analysis artifacts exist but no articles → `safeoutputs___create_pull_request({...})` with an analysis-only PR
   - MCP server unreachable AND no analysis artifacts → `safeoutputs___noop({"message": "..."})`
   - Tool unavailable → `safeoutputs___missing_tool({"reason": "..."})`
2. `safeoutputs___create_pull_request` handles branch creation and push. **NEVER** run `git push` or `git checkout -b`.
3. Safe output tools are **always in your tool list**. NEVER search for them via bash.
4. **NEVER** write your own MCP HTTP/JSON-RPC client. Use the scripts or direct tool calls only.
5. Exiting without calling a safe output tool = workflow failure.
6. **🚨 FULL ANALYSIS BEFORE ANY ARTICLE (BLOCKING)**: The complete deep political analysis phase following [`analysis/methodologies/ai-driven-analysis-guide.md`](../../analysis/methodologies/ai-driven-analysis-guide.md) (Rule 0 two-pass iteration + Rules 6–8 depth tiers, 15 min Pass 1 + 7 min Pass 2 minimum) **MUST** be complete **BEFORE** creating or updating any article HTML. Articles **MUST** be (re)generated/updated from the improved Pass 2 analysis — never from Pass 1 stubs, never from scripts alone, never skipping Pass 2. Do **NOT** call `safeoutputs___noop` because articles already exist — the full analysis phase MUST always run. Analysis is the primary output. Violations = REJECTED PR (see PR #1705 comment audit, 2026-04-18).

## 🧠 Repo Memory

This workflow uses **persistent repo-memory** on branch `memory/news-generation` (shared with all news workflows).

**At run START — read context:**
- Read `memory/news-generation/covered-documents/{YYYY-MM-DD}.json` for today (and optionally yesterday) to check which dok_ids were already analyzed recently
- Read `memory/news-generation/last-run-news-article-generator.json` for previous run metadata
- Skip documents already covered by another workflow to avoid duplicate analysis

**At run END — write context:**
- Update `memory/news-generation/last-run-news-article-generator.json` with date, documents analyzed, quality score
- Write processed dok_ids to `memory/news-generation/covered-documents/{YYYY-MM-DD}.json` (sharded by date; retain last 7 days)
- Update `memory/news-generation/translation-status.json` with new articles needing translation

## ⏱️ Time Budget (45 minutes) — ENFORCED Minimum 40 Minutes

> 🔴 **SYSTEMIC ISSUE (PR #1794 audit, 2026-04-16)**: ALL news workflows are completing in 13–22 min of a 45-min allocation, producing shallow analysis. The agent MUST use at least 40 of the 45 minutes. Completion < 40 min = insufficient iteration = REJECTED.
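The elapsed-time bookkeeping this budget relies on can be sketched with the same `/tmp`-file pattern the steps below use. Variable names and the message text here are illustrative; only the 40-minute floor comes from the note above.

```bash
# Record the run start once (same pattern as the official timing block below)
date +%s > /tmp/start_time.txt
read START_TIME < /tmp/start_time.txt

# Later, before deciding to finish early, compute elapsed whole minutes
date +%s > /tmp/now_ts.txt
read NOW_TS < /tmp/now_ts.txt
ELAPSED=$(( (NOW_TS - START_TIME) / 60 ))

MIN_RUN_MINUTES=40
if [ "$ELAPSED" -lt "$MIN_RUN_MINUTES" ]; then
  echo "⏳ Only $ELAPSED min elapsed — keep iterating (minimum $MIN_RUN_MINUTES min)"
fi
```

The same `ELAPSED` value drives the hard cutoffs at 25, 35, and 43 minutes listed below.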
```bash
date +%s > /tmp/start_time.txt
read START_TIME < /tmp/start_time.txt
```

| Phase | Minutes | Action |
|-------|---------|--------|
| Setup | 0–3 | Date check, `get_sync_status()` warm-up, check recent generation |
| Download | 3–6 | Run data download scripts (MCP data fetch) |
| **AI Analysis Pass 1** | **6–21** | **🚨 MANDATORY 15 min minimum**: Read ALL methodology guides, create per-file analysis for EVERY document with Mermaid diagrams, evidence tables, SWOT entries. |
| **AI Analysis Pass 2 (Part A)** | **21–22** | Begin reading ALL analysis artifacts back and identify improvement targets. |
| **Heartbeat PR** | **22–25** | 🫀 `git add && git commit` analysis + any drafts so far, then `safeoutputs___create_pull_request` (title `🫀 Heartbeat - Article Generator - {date}`). This refreshes the safeoutputs MCP session (which expires after ~30–35 min idle) AND guarantees no work is lost if later phases fail. After the call succeeds, run `git checkout main` so subsequent commits don't stack onto the frozen patch. |
| **AI Analysis Pass 2 (Part B)** | **25–28** | **Complete improvements (6 min improvement work total across Parts A+B)**: improve every section, replace ALL script stubs with AI analysis. |
| Gates | 28–30 | Run ENFORCED Minimum Time Gate + Enrichment Verification Gate (SHARED_PROMPT_PATTERNS.md). Both MUST pass. |
| Generate | 30–36 | Run `generate-news-enhanced.ts` in batches |
| **Article Improvement** | **36–40** | 🚨 Read ALL articles back, replace AI_MUST_REPLACE markers, improve content. Run article quality component gate. |
| Validate+PR | 40–45 | Translate, validate, commit, `safeoutputs___create_pull_request` |
| **HARD DEADLINE** | **43–45** | 🚨 If no safe output yet: if ANY artifacts/files were created, IMMEDIATELY stage, commit, call `safeoutputs___create_pull_request` with partial work. ONLY call `safeoutputs___noop` if truly ZERO files were created. |

> ⚠️ **Analysis phase is 22 minutes minimum (Pass 1: 15 min + Pass 2: 7 min)** — every analysis file must contain color-coded Mermaid diagrams, structured evidence tables with dok_id citations, and follow template structure exactly. ALL script-generated stubs MUST be replaced with AI-enriched analysis. Run the ENFORCED gates from SHARED_PROMPT_PATTERNS.md before article generation.

**Hard cutoffs** — check elapsed time before each phase:
- `>= 25 min` and no safeoutputs call yet → 🚨 call `safeoutputs___create_pull_request` as a heartbeat with whatever files exist. Do NOT delay — the safeoutputs session expires at ~30–35 min idle.
- `>= 35 min` → Stop generating, commit what you have, create PR immediately
- `>= 43 min` → STOP ALL WORK, call safe output immediately

## Required Skills

Before generating articles, consult these skills:
1. **`.github/skills/editorial-standards/SKILL.md`** — OSINT/INTOP editorial standards
2. **`.github/skills/swedish-political-system/SKILL.md`** — Parliamentary terminology
3. **`.github/skills/legislative-monitoring/SKILL.md`** — Voting patterns, committee tracking, bill progress
4. **`.github/skills/riksdag-regering-mcp/SKILL.md`** — MCP tool documentation
5. **`.github/skills/language-expertise/SKILL.md`** — Per-language style guidelines
6. **`.github/skills/gh-aw-safe-outputs/SKILL.md`** — Safe outputs usage
7. **`scripts/prompts/v2/political-analysis.md`** — Core political analysis framework (6 analytical lenses)
8. **`scripts/prompts/v2/stakeholder-perspectives.md`** — Multi-perspective analysis instructions
9. **`scripts/prompts/v2/quality-criteria.md`** — Quality self-assessment rubric (minimum 7/10)
10. **`scripts/prompts/v2/per-file-intelligence-analysis.md`** — Per-file AI analysis protocol
11. **`analysis/methodologies/ai-driven-analysis-guide.md`** — Methodology for deep per-file analysis
12. **`analysis/templates/per-file-political-intelligence.md`** — Per-file analysis output template

## 📊 MANDATORY Multi-Step AI Analysis Framework

### Article Type Isolation

> 🚨 **This workflow writes analysis ONLY to `analysis/daily/$ARTICLE_DATE/$REQUESTED_TYPE/`**. NEVER write to the parent date directory or another article type's folder. See SHARED_PROMPT_PATTERNS.md "Article Type Isolation" section.

### Standardised Analysis Depth Gate

> ⚠️ **Default is `deep`** — not `standard`. See `SHARED_PROMPT_PATTERNS.md` §"Standardised Analysis Depth Gate" for the full requirements table. Minimum for ALL depths: ≥1 color-coded Mermaid, evidence tables with dok_id, quantified risk matrix, forward indicators, confidence labels, follows template exactly.

**The 8 mandatory stakeholder groups**: Citizens, Government Coalition, Opposition Bloc, Business/Industry, Civil Society, International/EU, Judiciary/Constitutional, Media/Public Opinion — each with specific evidence (dok_id, vote counts, named politicians).

> **Read the `analysis_depth` input first** (default: `deep`). Use `getArticleTypeProfile(articleType)` from `scripts/editorial-framework.ts` to get the exact SWOT depth, dashboard, mindmap, stakeholder count, and AI iteration count.

### Per-Article-Type Iteration Pattern

See `SHARED_PROMPT_PATTERNS.md` §"Standardised Analysis Depth Gate" for Phase 1 (data + outline), Phase 2 (SWOT + Dashboard + Mindmap per profile), the quality gate (word count ≥ profile.minWordCount, unique why-it-matters, all Swedish translated), and additional iterations for `deep`/`comprehensive`.
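The word-count part of the quality gate above can be sketched as follows. `MIN_WORD_COUNT` stands in for `profile.minWordCount` from `getArticleTypeProfile()` and the demo file path is hypothetical; the tag-stripping `sed` is an approximation, not the real validator.

```bash
MIN_WORD_COUNT=1200                      # assumption: stand-in for profile.minWordCount
FILE="/tmp/example-article.html"         # hypothetical article path
printf '%s\n' '<p>Short demo body</p>' > "$FILE"   # demo input only

# Strip HTML tags crudely, then count remaining words
sed -e 's/<[^>]*>/ /g' "$FILE" | wc -w > /tmp/word_count.txt
read WORDS < /tmp/word_count.txt

if [ "$WORDS" -lt "$MIN_WORD_COUNT" ]; then
  echo "❌ $FILE: $WORDS words — below the $MIN_WORD_COUNT minimum"
fi
```

A real gate would iterate over every generated `news/*.html` file and fail the run on the first violation.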
## Step 1: Date Validation & MCP Health Check

```bash
echo "=== Date Validation Check ==="
date +%s > /tmp/start_time.txt
read START_TIME < /tmp/start_time.txt
echo "START_TIME=$START_TIME" > /tmp/gh-aw/agent/timing.env
date -u "+Current UTC: %A %Y-%m-%d %H:%M:%S"
date +"%Z: %A %Y-%m-%d %H:%M:%S"
echo "============================"
```

## 📅 Riksmöte (Parliamentary Session) Calculation

- September or later: `rm = "{currentYear}/{nextYear's last 2 digits}"`
- Before September: `rm = "{previousYear}/{currentYear's last 2 digits}"`
- Example: February 2026 → `rm = "2025/26"`

### MANDATORY MCP Health Gate

> **The step-level pre-warm (6 attempts × 20s) already mitigates Render.com cold starts.** This in-prompt gate is a lightweight verification — NOT a full retry loop. Do NOT spend more than 90 seconds here.
>
> **📖 Full MCP architecture, tool names, and calling conventions:** See `SHARED_PROMPT_PATTERNS.md` → "MCP Architecture & Tool Reference" section. Tool names are EXACT: riksdag tools use underscores (`get_sync_status`), World Bank uses hyphens (`get-economic-data`), SCB uses underscores (`search_tables`).

STEP 1: ALWAYS check data freshness first — call `get_sync_status({})` to warm up MCP and check for stale data.

1. Call `get_sync_status({})` — retry up to **3×** (20s wait between each, not 45s — the server is already warm from the step-level pre-warm)
2. If you get **"unknown tool"** or **"0 tools registered"** errors after 3 attempts, run a quick diagnostic:
```bash
echo "🔍 MCP Quick Diagnostic"
echo "Direct MCP server:" && curl -sf --max-time 15 -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}' "https://riksdag-regering-ai.onrender.com/mcp" 2>/dev/null | head -c 200 || echo "UNREACHABLE"
```
3. After 3 failures → `safeoutputs___noop({"message": "MCP server unavailable after 3 attempts — step-level pre-warm also failed"})`
4. **ALL content MUST come from live MCP data.** Never fabricate, recycle, or generate from cached data.
5. **⏱️ Do NOT spend more than 2 minutes on MCP warmup** — proceed to analysis immediately once `get_sync_status` succeeds.

### Data Freshness & Date Filtering

Parse the sync status: if data is stale (> 48 hours since last sync), add a disclaimer. Use riksdag-regering-mcp (32 tools for Swedish parliament data). For ad-hoc queries, use `scripts/mcp-query-cli.ts` — NEVER implement custom MCP client code (PROHIBITION).

Tools with date params: `get_calendar_events` (from/tom — **authoritative for scheduled/forward-looking events; may sometimes return HTML instead of JSON — if this happens, treat it as a calendar data error, NOT as "no events"**), `search_dokument` (from_date/to_date — **only use as a recent-activity proxy for retrospective/near-real-time monitoring when calendar data is temporarily unusable; NEVER substitute it for week/month-ahead or debate schedule data**), `search_regering` (dateFrom/dateTo). Other tools (`search_voteringar`, `get_betankanden`, `get_motioner`, `get_propositioner`, `search_anforanden`) require a post-query filter by `datum`.

## Step 1.5: Run Pre-Article Analysis Pipeline

**CRITICAL: Download data first, then the AI creates ALL 9 analysis artifacts.** `download-parliamentary-data.ts` downloads raw data from riksdag-regering-mcp ONLY — it performs NO analysis. The AI agent MUST:
1. Read `analysis/methodologies/ai-driven-analysis-guide.md` fully
2. Read ALL 8 templates in `analysis/templates/`
3. Create ALL 9 analysis files in `analysis/daily/YYYY-MM-DD/` using evidence from the downloaded data

After creating ALL analysis files, run the **9-Artifact Completeness Gate** from `SHARED_PROMPT_PATTERNS.md` §"9 REQUIRED Analysis Artifacts" to verify ALL 9 files exist.
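A minimal sketch of that completeness gate is below. The filename list is an assumption inferred from the batch files named later in this prompt plus the download manifest; always verify the canonical list against `SHARED_PROMPT_PATTERNS.md` §"9 REQUIRED Analysis Artifacts".

```bash
# Hedged sketch of the 9-Artifact Completeness Gate (filenames are assumptions).
DIR="/tmp/analysis-gate-demo"   # stand-in for analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER
rm -rf "$DIR"
mkdir -p "$DIR"

MISSING=0
for NAME in synthesis-summary swot-analysis risk-assessment threat-analysis \
            classification-results significance-scoring stakeholder-perspectives \
            cross-reference-map data-download-manifest; do
  if [ ! -f "$DIR/$NAME.md" ]; then
    echo "❌ Missing artifact: $DIR/$NAME.md"
    MISSING=$((MISSING + 1))
  fi
done
echo "Gate result: $MISSING missing artifact(s)"
```

With an empty demo directory all nine files are reported missing; a passing run requires `MISSING` to be 0 before article generation begins.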
> 🔴 **Deep-inspection is Tier-C reference-grade (extended 2026-04-19)**: When `article_types` includes `deep-inspection`, the subfolder is `deep-inspection` and the **14-Artifact Reference-Grade Gate** in `SHARED_PROMPT_PATTERNS.md` (period-scope multiplier 1.0×) applies on top of the 9-core gate. Deep-inspection runs MUST produce all 14 artifacts (9 core + `README.md`, `executive-brief.md`, `scenario-analysis.md`, `comparative-international.md`, `methodology-reflection.md`). Reference exemplar: [`analysis/daily/2026-04-19/deep-inspection/`](../../analysis/daily/2026-04-19/deep-inspection/). Deep-inspection also requires **sibling-run cross-referencing**: cite ≥ 1 realtime-* run from the prior 7 days that first surfaced the primary `dok_id`, plus the most recent `weekly-review` and (if present) `month-ahead`/`monthly-review` in `data-download-manifest.md §Reference Analyses`.

```bash
ARTICLE_DATE="${{ github.event.inputs.article_date }}"
if [ -z "$ARTICLE_DATE" ]; then
  date -u +%Y-%m-%d > /tmp/today.txt
  read ARTICLE_DATE < /tmp/today.txt
fi
echo "📊 Running pre-article analysis for $ARTICLE_DATE..."

# CRITICAL: Source mcp-setup.sh FIRST to set MCP_SERVER_URL and MCP_AUTH_TOKEN for the gateway
# Determine requested article type early — needed for deep-inspection detection below
RAW_REQUESTED_TYPE="${{ github.event.inputs.article_types }}"
# For deep-inspection, pass --document-ids to include targeted documents regardless of date
PIPELINE_EXTRA_ARGS=""
if echo "$RAW_REQUESTED_TYPE" | grep -q "deep-inspection"; then
  DI_DOC_IDS="${{ github.event.inputs.document_ids }}"
  # Sanitize: only allow alphanumeric, hyphens, commas (valid Riksdag dok_id characters)
  echo "$DI_DOC_IDS" | tr -cd 'A-Za-z0-9,_-' > /tmp/di_ids.txt
  read DI_DOC_IDS < /tmp/di_ids.txt
  [ -n "$DI_DOC_IDS" ] && PIPELINE_EXTRA_ARGS="--document-ids $DI_DOC_IDS"
fi
source scripts/mcp-setup.sh
npx tsx scripts/download-parliamentary-data.ts --date "$ARTICLE_DATE" --limit 50 $PIPELINE_EXTRA_ARGS > /tmp/pipeline-output.log 2>&1
PIPE_EXIT=$?
cat /tmp/pipeline-output.log
if [ "$PIPE_EXIT" -ne 0 ]; then
  echo "❌ Pipeline failed — agent MUST diagnose and fix (read /tmp/pipeline-output.log)"
  tail -20 /tmp/pipeline-output.log
fi
echo "📊 Analysis artifacts for $ARTICLE_DATE:"
ls -la "analysis/daily/$ARTICLE_DATE/" 2>/dev/null || echo "⚠️ No analysis output"
find analysis/data/ -name "*.json" -type f 2>/dev/null | wc -l > /tmp/data_count.txt
read DATA_JSON_COUNT < /tmp/data_count.txt
echo "📊 JSON data files: $DATA_JSON_COUNT (must be > 0)"
# Relocate pipeline artifacts: download-parliamentary-data.ts writes to analysis/daily/$DATE/ (unscoped)
# Determine target subfolder — use a dedicated folder for multi-type/schedule runs to avoid mixing artifacts
# RAW_REQUESTED_TYPE already set above (before deep-inspection check)
_IS_SCHEDULE_OR_MULTI=false
if [ -z "$RAW_REQUESTED_TYPE" ] || [[ "$RAW_REQUESTED_TYPE" == *,* ]]; then
  _IS_SCHEDULE_OR_MULTI=true
fi
REQUESTED_TYPE="$RAW_REQUESTED_TYPE"
[ -z "$REQUESTED_TYPE" ] && REQUESTED_TYPE="committee-reports"
# Capture and persist HHMM/subfolder once so later blocks can source the same values
ANALYSIS_SUBFOLDER_ENV=/tmp/analysis_subfolder.env
if [ -f "$ANALYSIS_SUBFOLDER_ENV" ]; then
  # Reuse previously persisted values to keep relocation/staging/validation deterministic
  # shellcheck source=/tmp/analysis_subfolder.env
  . "$ANALYSIS_SUBFOLDER_ENV"
  [ -n "$ANALYSIS_HHMM" ] && _AG_HHMM="$ANALYSIS_HHMM"
  [ -n "$ANALYSIS_SUBFOLDER" ] && _RELOC_SUBFOLDER="$ANALYSIS_SUBFOLDER"
fi
if [ -z "$_AG_HHMM" ]; then
  date -u +%H%M > /tmp/hhmm_val.txt
  read _AG_HHMM < /tmp/hhmm_val.txt
fi
if [ -z "$_RELOC_SUBFOLDER" ]; then
  if [ "$_IS_SCHEDULE_OR_MULTI" = true ]; then
    # Multi-type or schedule-driven run — use a dedicated workflow-scoped folder
    _RELOC_SUBFOLDER="article-generator-$_AG_HHMM"
  else
    case "$REQUESTED_TYPE" in
      *committee-reports*) _RELOC_SUBFOLDER="committeeReports" ;;
      *interpellation*) _RELOC_SUBFOLDER="interpellations" ;;
      *motions*) _RELOC_SUBFOLDER="motions" ;;
      *propositions*) _RELOC_SUBFOLDER="propositions" ;;
      *week-ahead*) _RELOC_SUBFOLDER="week-ahead" ;;
      *month-ahead*) _RELOC_SUBFOLDER="month-ahead" ;;
      *weekly-review*) _RELOC_SUBFOLDER="weekly-review" ;;
      *monthly-review*) _RELOC_SUBFOLDER="monthly-review" ;;
      *breaking*) _RELOC_SUBFOLDER="realtime-$_AG_HHMM" ;;
      *deep-inspection*) _RELOC_SUBFOLDER="deep-inspection" ;;
      *) _RELOC_SUBFOLDER="$REQUESTED_TYPE" ;;
    esac
    # === Run Suffix Resolution (see SHARED_PROMPT_PATTERNS.md) ===
    # For single-type runs: auto-suffix if the base folder already has synthesis-summary.md
    # force_generation=true → reuse base folder (overwrite is intentional)
    if [ "$FORCE_GENERATION" != "true" ]; then
      _BASE_RELOC="$_RELOC_SUBFOLDER"
      _SUFFIX=1
      while [ -f "analysis/daily/$ARTICLE_DATE/$_RELOC_SUBFOLDER/synthesis-summary.md" ]; do
        _SUFFIX=$((_SUFFIX + 1))
        _RELOC_SUBFOLDER="$_BASE_RELOC-$_SUFFIX"
      done
    fi
  fi
fi
# Persist immediately so all subsequent blocks get the same values
echo "ANALYSIS_SUBFOLDER=$_RELOC_SUBFOLDER" > "$ANALYSIS_SUBFOLDER_ENV"
echo "ANALYSIS_HHMM=$_AG_HHMM" >> "$ANALYSIS_SUBFOLDER_ENV"
echo "_AG_HHMM=$_AG_HHMM" >> "$ANALYSIS_SUBFOLDER_ENV"
echo "_RELOC_SUBFOLDER=$_RELOC_SUBFOLDER" >> "$ANALYSIS_SUBFOLDER_ENV"
UNSCOPED_DIR="analysis/daily/$ARTICLE_DATE"
SCOPED_DIR="$UNSCOPED_DIR/$_RELOC_SUBFOLDER"
if [ -d "$UNSCOPED_DIR" ]; then
  mkdir -p "$SCOPED_DIR"
  if find "$UNSCOPED_DIR" -maxdepth 1 -type f -name "*.md" | grep -q .; then
    find "$UNSCOPED_DIR" -maxdepth 1 -type f -name "*.md" -exec mv -f {} "$SCOPED_DIR/" \;
    echo "📁 Moved pipeline *.md artifacts → $SCOPED_DIR (root cleaned to prevent merge conflicts)"
  fi
  if [ -d "$UNSCOPED_DIR/documents" ]; then
    mkdir -p "$SCOPED_DIR/documents"
    find "$UNSCOPED_DIR/documents" -mindepth 1 -maxdepth 1 -exec mv {} "$SCOPED_DIR/documents/" \;
    rmdir "$UNSCOPED_DIR/documents" 2>/dev/null || true
    echo "📁 Relocated pipeline documents/ contents → $SCOPED_DIR/documents"
  fi
fi
if [ "$DATA_JSON_COUNT" -eq 0 ]; then
  echo "🚨 CRITICAL: Pipeline downloaded ZERO data. Agent MUST diagnose and fix — do NOT fabricate analysis."
fi
```

### Per-File AI Analysis Enhancement

After the script-based analysis, perform **AI-driven per-file analysis** for deeper intelligence:

1. Run `npx tsx scripts/catalog-downloaded-data.ts --pending-only` to list files needing analysis
2. Read the methodology guides:
   - `analysis/methodologies/ai-driven-analysis-guide.md`
   - `analysis/methodologies/political-swot-framework.md`
   - `analysis/templates/per-file-political-intelligence.md`
3. For each pending file: classify, SWOT, risk assess, apply the Political Threat Taxonomy, assess stakeholder impact, write `.analysis.md`
4. Each analysis file must include color-coded Mermaid diagrams and evidence tables
5. Quality gate: ≥3 evidence points, confidence labels, no template placeholders

These analysis files are committed alongside articles for human review and continuous improvement.
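The "no template placeholders" part of the quality gate above can be sketched with a simple scan. The placeholder tokens checked here (`AI_MUST_REPLACE`, `{{`) are assumptions drawn from this prompt's own conventions, and the demo file content is illustrative only.

```bash
# Hedged sketch of the per-file placeholder check (demo file, assumed tokens).
FILE="/tmp/demo.analysis.md"
printf '%s\n' '## SWOT' 'Evidence table and confidence labels go here.' > "$FILE"

if grep -q 'AI_MUST_REPLACE\|{{' "$FILE"; then
  echo "❌ $FILE still contains template placeholders"
else
  echo "✅ $FILE: no placeholders found"
fi
```

A real gate would loop over every `*.analysis.md` under the scoped analysis folder and fail the run if any placeholder survives.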
### 🔴 MANDATORY: Batch Analysis Enrichment

If `synthesis-summary.md` reports "0 documents analyzed" but per-doc analyses exist in `documents/`: read ALL `*-analysis.md` files and aggregate into all 8 batch files (synthesis-summary, swot-analysis, risk-assessment, threat-analysis, classification-results, significance-scoring, stakeholder-perspectives, cross-reference-map). Each enriched file needs ≥1 color-coded Mermaid, tables, dok_id citations, confidence labels. See `ai-driven-analysis-guide.md` §"Deep-Inspection Batch Analysis Enrichment Protocol (v4.1)". **NEVER commit batch files reporting "0 documents analyzed".**

### 🚨 MANDATORY: Analysis Artifacts Must ALWAYS Be Committed

After analysis, determine `ANALYSIS_SUBFOLDER` (matches the article type: `committeeReports`, `interpellations`, `motions`, `propositions`, `week-ahead`, `realtime-$HHMM` for breaking, etc.) and check whether `analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER` has files. If ANALYSIS_COUNT > 0: commit via `safeoutputs___create_pull_request` with title `📊 Analysis Only - Article Generator - {date}` and label `analysis-only`. Only call `safeoutputs___noop` if there are ZERO output files.

## Step 2: Determine Article Types & Languages
```bash
# Use the article_types workflow dispatch parameter
ARTICLE_TYPES="${{ github.event.inputs.article_types }}"
if [ -z "$ARTICLE_TYPES" ]; then
  date -u +%u > /tmp/day_of_week.txt   # 1=Monday, 7=Sunday
  read DAY_OF_WEEK < /tmp/day_of_week.txt
  case "$DAY_OF_WEEK" in
    5) ARTICLE_TYPES="week-ahead,committee-reports,propositions,motions,interpellations"
       echo "📅 Friday schedule" ;;
    6|7) ARTICLE_TYPES="committee-reports,propositions,motions,interpellations"
       echo "📅 Weekend schedule" ;;
    *) ARTICLE_TYPES="committee-reports,propositions,motions,interpellations"
       echo "📅 Weekday schedule" ;;
  esac
fi

# Use the languages workflow dispatch parameter
LANGUAGES_INPUT="${{ github.event.inputs.languages }}"
[ -z "$LANGUAGES_INPUT" ] && LANGUAGES_INPUT="all"
echo "$LANGUAGES_INPUT" | sed 's/^[[:space:]]*//;s/[[:space:]]*$//' > /tmp/lang_input.txt
read LANGUAGES_INPUT < /tmp/lang_input.txt

case "$LANGUAGES_INPUT" in
  "nordic") LANG_ARG="en,sv,da,no,fi" ;;
  "eu-core") LANG_ARG="en,sv,de,fr,es,nl" ;;
  "all") LANG_ARG="en,sv,da,no,fi,de,fr,es,nl,ar,he,ja,ko,zh" ;;
  *) LANG_ARG="$LANGUAGES_INPUT" ;;
esac

echo "📰 Types: $ARTICLE_TYPES | Languages: $LANG_ARG"
```

Valid article types (defined in `scripts/generate-news-enhanced/config.ts:VALID_ARTICLE_TYPES`): `week-ahead`, `month-ahead`, `weekly-review`, `monthly-review`, `committee-reports`, `propositions`, `motions`, `interpellations`, `breaking`, `deep-inspection`. Note: `evening-analysis` is NOT a valid script type — evening analysis requires manual synthesis (see `news-evening-analysis.md`).

### 🔬 Step 2b: Read ALL Analysis Files (MANDATORY — before article generation)

> 🔴 **NON-NEGOTIABLE**: The AI agent MUST `cat` every analysis `.md` file BEFORE generating any article HTML. Analysis and articles are created in the **same workflow run** — there is zero excuse for not reading the analysis.
> See SHARED_PROMPT_PATTERNS.md §"MANDATORY PRE-ARTICLE ANALYSIS READING".

```bash
echo "📖 Reading ALL analysis files for $ARTICLE_DATE..."
for ANALYSIS_DIR in analysis/daily/$ARTICLE_DATE/*/; do
  if [ -d "$ANALYSIS_DIR" ]; then
    echo "📖 Reading: $ANALYSIS_DIR"
    for MD_FILE in "$ANALYSIS_DIR"/*.md; do
      if [ -f "$MD_FILE" ]; then
        echo "--- $MD_FILE ---"
        cat "$MD_FILE"
        echo ""
      fi
    done
  fi
done
find "analysis/daily/$ARTICLE_DATE" -name "*.md" -type f 2>/dev/null | wc -l > /tmp/total_files.txt
read TOTAL_FILES < /tmp/total_files.txt
echo "✅ Read $TOTAL_FILES analysis files — these MUST drive article content"
```

## Step 3: Generate Articles (Script-First)

> 🔴 **DEEP-INSPECTION TOPIC-DATA ALIGNMENT GATE** (prevents fabricated articles):
> If `focus_topic` is set for deep-inspection, verify that at least 1 downloaded document matches the topic BEFORE generating any article. The agent MUST:
> 1. Read the synthesis-summary.md in the deep-inspection analysis folder
> 2. Check if ANY document title/summary contains keywords from `focus_topic`
> 3. If NO match is found → the pipeline downloaded irrelevant documents:
>    - ABORT **article generation only**; do **not** use `safeoutputs___noop`, because analysis artifacts already exist
>    - Preserve the downloaded/analysis artifacts and produce a **safe analysis-only output/PR** explaining the mismatch
>    - The analysis-only output MUST state: `focus_topic='<topic>'`, summarize the actual downloaded document topics as `<actual topics>`, and clearly say that no article was generated due to the topic-data mismatch
>    - Do NOT generate an article from general knowledge about the topic
>    - Do NOT proceed to manual fallback
> 4. If a match is found → proceed normally
>
> This gate was added after the 2026-04-15 Deep Inspection Cyber incident, where an article about cybersecurity was generated from migration/healthcare data.
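The keyword check in step 2 of the gate above can be sketched as a case-insensitive grep against the synthesis file. This is an illustration with demo data, not the mandated matching algorithm; a real check would split `focus_topic` on commas and test each keyword.

```bash
# Hedged sketch of the topic-data alignment check (demo file and topic).
SYNTHESIS="/tmp/demo-synthesis-summary.md"   # stand-in for the deep-inspection synthesis file
printf '%s\n' '# Synthesis' 'Documents cover cyber security strategy and AI resilience.' > "$SYNTHESIS"

FOCUS_TOPIC="cyber security"
if grep -qi "$FOCUS_TOPIC" "$SYNTHESIS"; then
  echo "✅ Topic match — proceed with article generation"
else
  echo "🔴 Topic-data mismatch — analysis-only PR, no article"
fi
```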
**PRIMARY APPROACH — use the batch generation script:**

> ⚠️ **CRITICAL — MCP env vars and the script MUST run in the same shell session.**
> Never pipe `source scripts/mcp-setup.sh` to `tail` or run it in a separate bash invocation.
> Use `source scripts/mcp-setup.sh && npx tsx ...` on a **single command line**.

```bash
# Build deep-inspection flags via positional parameters (AWF-safe: preserves spaces in values)
set --
if echo "$ARTICLE_TYPES" | grep -q "deep-inspection"; then
  DOCUMENT_IDS="${{ github.event.inputs.document_ids }}"
  DOCUMENT_URLS="${{ github.event.inputs.document_urls }}"
  FOCUS_TOPIC="${{ github.event.inputs.focus_topic }}"
  if [ -n "$DOCUMENT_IDS" ]; then set -- "$@" "--document-ids=$DOCUMENT_IDS"; fi
  if [ -n "$DOCUMENT_URLS" ]; then set -- "$@" "--document-urls=$DOCUMENT_URLS"; fi
  if [ -n "$FOCUS_TOPIC" ]; then set -- "$@" "--focus-topic=$FOCUS_TOPIC"; fi
  echo "📋 Deep-inspection args: $*"
fi

BATCH_NUM=1
while true; do
  echo "🔄 Running batch $BATCH_NUM..."
  # source + npx on ONE line so MCP_SERVER_URL is in scope for the script process
  source scripts/mcp-setup.sh && npx tsx scripts/generate-news-enhanced.ts \
    --types="$ARTICLE_TYPES" \
    --languages="$LANG_ARG" \
    --batch-size=5 \
    --skip-existing \
    "$@"
  EXIT_CODE=$?

  if [ $EXIT_CODE -ne 0 ]; then
    echo "❌ Batch $BATCH_NUM failed with exit code $EXIT_CODE"
    break
  fi

  # Check if all languages are done
  if [ -f "news/metadata/batch-status.json" ]; then
    node -e "const s=JSON.parse(require('fs').readFileSync('news/metadata/batch-status.json','utf8')); console.log(s.complete)" > /tmp/all_done.txt 2>/dev/null || echo "false" > /tmp/all_done.txt
    read ALL_DONE < /tmp/all_done.txt
    if [ "$ALL_DONE" = "true" ]; then
      echo "✅ All languages generated!"
      break
    fi
  else
    break   # No batch status means single-pass completed
  fi

  BATCH_NUM=$((BATCH_NUM + 1))
  if [ $BATCH_NUM -gt 5 ]; then
    echo "⚠️ Exceeded maximum batch count"
    break
  fi

  # Check time budget before next batch
  date +%s > /tmp/now_ts.txt
  read AW_NOW_TS < /tmp/now_ts.txt
  ELAPSED=$(( (AW_NOW_TS - START_TIME) / 60 ))
  if [ $ELAPSED -ge 30 ]; then
    echo "⏰ Time budget reached ($ELAPSED min), proceeding with generated articles"
    break
  fi
done

date +%Y-%m-%d > /tmp/today.txt
read TODAY < /tmp/today.txt
git status --porcelain -- news/ 2>/dev/null | awk '{print $2}' | grep "$TODAY-" > /tmp/new_articles.txt || true
NEW_ARTICLES=""
[ -s /tmp/new_articles.txt ] && NEW_ARTICLES="generated"
if [ -z "$NEW_ARTICLES" ]; then
  echo "No new articles created."
else
  echo "Newly generated articles:"
  cat /tmp/new_articles.txt
fi
```

- If `$NEW_ARTICLES` is non-empty → proceed to Step 4
- If empty AND `$EXIT_CODE` is 0 (no data available) → call `safeoutputs___noop`
- If empty AND `$EXIT_CODE` is non-zero → see Fallback below

### Fallback: Manual Generation (ONLY for non-deep-inspection types, if the script fails AND no articles were created)

> ⚠️ **`deep-inspection` NEVER uses manual fallback.** The script generates multi-stakeholder SWOT analysis, a Chart.js document-intelligence dashboard, an inline SVG Sankey flow chart, a color-coded CSS mindmap, a World Bank economic dashboard, and 5W deep-analysis sections that **cannot be replicated manually**. If the script fails for deep-inspection, diagnose and fix the MCP connection, then retry. If MCP is genuinely unavailable, call `safeoutputs___noop` with a clear error message.
>
> **Before declaring script failure, verify MCP is live in the same shell:**
> ```bash
> source scripts/mcp-setup.sh && echo "MCP_SERVER_URL=$MCP_SERVER_URL"
> ```
> Expected output: `MCP_SERVER_URL=http://host.docker.internal:8080/mcp/riksdag-regering` (port `8080` for gh-aw v0.69+ — was `80` for legacy gh-aw <0.69; resolved dynamically from `mcp-config.json` gateway.port)
> If the value is blank or "unset", `mcp-setup.sh` failed to read the gateway key — check `GH_AW_MCP_CONFIG`. If it is set correctly, retry the full script command.

For **non-deep-inspection** article types only, if the script fails, generate articles manually ONE language at a time:
1. Check elapsed time — if >= 38 minutes, stop and call noop with a summary
2. Write HTML to `news/YYYY-MM-DD-{slug}-{lang}.html`
3. Use `<link rel="stylesheet" href="../styles.css">` — NO embedded `<style>` tags
4. Include the language switcher, article-top-nav, Schema.org NewsArticle, and hreflang tags
5. Use `dir="rtl"` for Arabic (ar) and Hebrew (he)

> 🚫 **NEVER use a bash heredoc (`cat > file << 'EOF'`) to write article HTML.** Heredoc truncates large content and causes silent failures.
>
> ✅ **Build the file incrementally** with multiple small `printf` appends (no heredoc, no size limits):
> ```bash
> FILE="news/YYYY-MM-DD-slug-en.html"
> printf '%s\n' '<!DOCTYPE html>' > "$FILE"
> printf '%s\n' '<html lang="en">' >> "$FILE"
> printf '%s\n' '<head><link rel="stylesheet" href="../styles.css"></head>' >> "$FILE"
> printf '%s\n' '<body>' >> "$FILE"
> # ... append each section separately ...
> printf '%s\n' '</body></html>' >> "$FILE"
> ```

---

## Step 2.6: Economic Data Acquisition (MANDATORY)

> **Contract**: [`.github/aw/ECONOMIC_DATA_CONTRACT.md`](../aw/ECONOMIC_DATA_CONTRACT.md) — the **single source of truth** for World Bank + SCB data, Chart.js visualisations, and AI commentary. Follow it exactly; the Step 6 quality gate (`scripts/validate-economic-context.ts`) **blocks the PR** if any element is missing.

**What you MUST do before writing any prose:**

1. `view analysis/worldbank/indicators-inventory.json` and pick every indicator whose `committees` / `policyAreas` match the day's source documents.
2. Call `world-bank.get-economic-data` / `get-social-data` / `get-health-data` / `get-education-data` for Sweden (10-year series for primary domains) and for DK/NO/FI/DE (5-year series for the top 3 indicators — needed for the Nordic comparison bars and radar).
3. Call `scb.search_tables` + `scb.query_table` using the committee → TAB mapping in `scripts/scb-context.ts`. **`language` MUST be `"sv"` or `"en"` — NEVER `"no"`** (SCB returns HTTP 400 "Unsupported language").
4. Retry every World Bank call up to **3 times** on failure. Cache raw responses under `analysis/data/worldbank/<YYYY>/<indicator>-<country>.json` so later article types in the same daily run reuse the data.
5. Write `analysis/daily/<ARTICLE_DATE>/<ANALYSIS_SUBFOLDER>/economic-data.json` matching `analysis/schemas/economic-data.schema.json`:

```jsonc
{
  "version": "1.0",
  "articleType": "article-generator",
  "date": "<YYYY-MM-DD>",
  "policyDomains": ["fiscal policy", "labor market"],
  "dataPoints": [
    { "countryCode": "SWE", "countryName": "Sweden", "indicatorId": "NY.GDP.MKTP.KD.ZG", "date": "2024", "value": 0.82 },
    { "countryCode": "DNK", "countryName": "Denmark", "indicatorId": "NY.GDP.MKTP.KD.ZG", "date": "2024", "value": 1.75 }
  ],
  "commentary": "<will be filled in Step 3d>",
  "source": { "worldBank": ["NY.GDP.MKTP.KD.ZG", "FP.CPI.TOTL.ZG"], "scb": ["TAB1291"] }
}
```

**Non-negotiable**: `dataPoints` MUST be non-empty. The HTML renderer (`scripts/data-transformers/content-generators/economic-dashboard-section.ts`) emits real Chart.js canvases only when the file exists with entries — otherwise the validator fails the PR.
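A crude pre-flight version of that non-empty check can be sketched in plain shell. The real gate is `scripts/validate-economic-context.ts`; this grep only catches the literal empty-array serialization and the demo file content is illustrative.

```bash
# Hedged sketch of the dataPoints non-empty pre-check (demo file, crude grep).
ECON="/tmp/demo-economic-data.json"
printf '%s\n' '{ "dataPoints": [ { "countryCode": "SWE", "value": 0.82 } ] }' > "$ECON"

# An empty array would serialize as "dataPoints": []
if grep -q '"dataPoints": \[\]' "$ECON"; then
  echo "❌ dataPoints empty — validator will fail the PR"
else
  echo "✅ dataPoints appears non-empty"
fi
```

Run this before invoking the validator to fail fast; the TypeScript validator remains the authoritative check.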
- -**Minimum coverage (enforced by the validator):** see the matrix in `ECONOMIC_DATA_CONTRACT.md` §"Coverage matrix" for this article type's chart count, commentary word minimum, and D3 requirement. - ---- -## Step 3b: AI Title, Meta Description & Analysis References - -> 🚨 **MANDATORY** — After article HTML is generated, the AI MUST improve titles, descriptions, and add analysis references. See `SHARED_PROMPT_PATTERNS.md` sections "AI-DRIVEN TITLE & META DESCRIPTION GENERATION" and "ANALYSIS FILE GITHUB REFERENCES" for full protocols. - -**1. Generate newsworthy titles** — Read each article's content, then replace the script-generated title following: `[Active Verb] + [Specific Actor/Institution] + [Concrete Policy Action]`. BANNED: ❌ any title ending with ": {Topic} in Focus" or generic category labels. - -**2. Generate AI meta descriptions** (150-160 chars) — Summarize key political intelligence from actual content. BANNED: ❌ any description starting with "Analysis of N documents". - -**3. 🔴 Add analysis references section (MANDATORY — VERIFY AFTER)** — Insert the "📊 Analysis & Sources" HTML block before footer, linking to analysis files for the article's date and type (see SHARED_PROMPT_PATTERNS.md §ANALYSIS FILE GITHUB REFERENCES for the complete template and type-to-folder mapping). - -**After inserting, VERIFY** by running: -```bash -for FILE in news/$ARTICLE_DATE-*-*.html; do - if [ -f "$FILE" ] && ! grep -q 'class="analysis-references"' "$FILE"; then - echo "🔴 MISSING analysis-references in: $FILE — MUST FIX NOW" - fi -done -``` - -**4. Update all metadata** — Ensure `<title>`, `<meta name="description">`, `<meta property="og:title">`, `<meta property="og:description">`, and `<h1>` all reflect the AI-generated title and description. - -## Step 3c: AI Content Quality Enforcement (v4.0 — MANDATORY) - -> 🚨 **v4.0 CRITICAL**: This is the multi-type article generator. Apply content quality enforcement for ALL article types. 
See `SHARED_PROMPT_PATTERNS.md` §"AI ARTICLE CONTENT GENERATION" and `ai-driven-analysis-guide.md` v4.0. - -**1. Read pre-computed analysis** — For the current `$REQUESTED_TYPE`, read ALL analysis files from `analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER/`. If synthesis reports "0 documents analyzed", use MCP tools to fetch data directly (see ai-driven-analysis-guide.md §Empty Analysis Fallback). - -**2. Scan for BANNED content patterns** — Search each generated article for these exact strings or equivalent boilerplate patterns and REPLACE them: -- Exact string: `"The political landscape remains fluid"` → Replace with specific winners/losers -- Exact string: `"No chamber debate data is available"` → Replace with analysis from document text or MCP debate data -- Pattern/prefix match: any `"Touches on ... policy."` boilerplate followed by generic domain text → Replace with unique per-document analysis -- Pattern/prefix match: any boilerplate starting with `"Analysis of "`, followed by a document count and `" documents covering"` → Replace with analytical lede - -**3. Enforce per-document unique "Why It Matters"** — Verify that NO two documents in the same article share identical "Why It Matters" text. If found, rewrite each with document-specific evidence. - -**4. 🔴 MANDATORY: Replace ALL Deep Analysis `AI_MUST_REPLACE` markers** — The script generates `<!-- AI_MUST_REPLACE: ... -->` HTML comment markers in Deep Analysis subsections. Search EVERY generated article for these markers and replace EACH with genuine, specific political intelligence analysis. ZERO `AI_MUST_REPLACE` markers may survive in committed HTML. Each subsection (Timeline & Context, Why This Matters, Political Impact, Actions & Consequences, Critical Assessment) must contain analysis specific to the documents in the article — not generic parliamentary boilerplate. - -**5. 
Enforce minimum analytical depth** — Every article MUST contain: -- Analytical lede naming actors and political significance -- Per-document analysis (not flat list of links) -- Winners & Losers with named parties and evidence (≥50 words) -- Key Takeaways with confidence labels (3-5 bullet points) -- Analysis references section with GitHub links - -**6. Run self-quality check** — Score each article against the 5-dimension rubric from SHARED_PROMPT_PATTERNS.md §"Article Quality Self-Check". If any article scores below 7.0 composite, revise before committing. - -## Step 4: Translate & Validate - -Check for untranslated Swedish content in non-Swedish articles: -```bash -UNTRANSLATED=0 -for article in news/*-{en,da,no,fi,de,fr,es,nl,ar,he,ja,ko,zh}.html; do - if [ -f "$article" ] && grep -q 'data-translate="true"' "$article"; then - echo "NEEDS TRANSLATION: $article" - UNTRANSLATED=$((UNTRANSLATED + 1)) - fi -done -``` - -If untranslated content found, translate each `<span data-translate="true" lang="sv">text</span>` to the target language and remove the wrapper. - -**Translation rules:** Translate all Swedish text. Keep party names (S, M, SD, V, MP, C, L, KD) and personal names untranslated. Zero language mixing. - -Then run analysis references fix and validation: -```bash -# 🔴 MANDATORY: Inject analysis references into any article missing them -npx tsx scripts/fix-analysis-references.ts --date "$ARTICLE_DATE" --rewrite - -bash scripts/validate-news-generation.sh -VALIDATION_EXIT=$? -if [ "$VALIDATION_EXIT" -ne 0 ]; then - echo "Validation issues found — fix what you can, proceed if time allows" -fi - -# HTMLHint validation with auto-fix -find news -maxdepth 1 -name '*-*.html' 2>/dev/null | wc -l > /tmp/news_count.txt -read NEWS_FILES < /tmp/news_count.txt -if [ "$NEWS_FILES" -gt 0 ]; then - if ! npx htmlhint "news/*-*.html" 2>/dev/null; then - echo "⚠️ HTML validation errors, attempting auto-fix..." 
- npx tsx scripts/article-quality-enhancer.ts --fix - npx htmlhint "news/*-*.html" 2>/dev/null || echo "⚠️ Some HTML issues remain" - fi -fi -``` - -## 🛡️ File Ownership Contract - -This workflow is a **content** workflow and MUST only create/modify files for **EN and SV** languages. - -- ✅ **Allowed:** `news/YYYY-MM-DD-*-en.html`, `news/YYYY-MM-DD-*-sv.html` -- ❌ **Forbidden:** `news/YYYY-MM-DD-*-da.html`, `news/YYYY-MM-DD-*-no.html`, or any other translation language - -Validate file ownership (checks staged, unstaged, and untracked changes): -```bash -npx tsx scripts/validate-file-ownership.ts content -``` - -If the validator reports violations, remove tracked changes with `git restore --staged --worktree -- <file>` (or `git checkout -- <file>` on older Git), and remove untracked files with `rm <file>` (or `git clean -f -- <file>`) before committing. - -### Branch Naming Convention - -Use deterministic branch names for content PRs: -``` -news/content/{YYYY-MM-DD}/{article-types} -``` - -> **Note:** `safeoutputs___create_pull_request` handles branch creation automatically; this naming convention is documented for traceability and conflict avoidance. - -## Step 5: Commit & Create PR - -### HOW SAFE PR CREATION WORKS - -⚠️ DO NOT use `git push` — the safe output tool handles publishing. Commit locally, then use the tool. 
- -```bash -# Restore persisted ANALYSIS_SUBFOLDER (agentic blocks may run independently) -[ -f /tmp/analysis_subfolder.env ] && source /tmp/analysis_subfolder.env -ARTICLE_DATE="${{ github.event.inputs.article_date }}" -if [ -z "$ARTICLE_DATE" ]; then - date -u +%Y-%m-%d > /tmp/today.txt - read ARTICLE_DATE < /tmp/today.txt -fi -# Fallback: recompute ANALYSIS_SUBFOLDER if env file was not available -if [ -z "$ANALYSIS_SUBFOLDER" ]; then - RAW_REQUESTED_TYPE="${{ github.event.inputs.article_types }}" - if [ -z "$RAW_REQUESTED_TYPE" ] || [[ "$RAW_REQUESTED_TYPE" == *,* ]]; then - if [ -z "$_AG_HHMM" ]; then - date -u +%H%M > /tmp/hhmm_val.txt - read _AG_HHMM < /tmp/hhmm_val.txt - fi - ANALYSIS_SUBFOLDER="article-generator-$_AG_HHMM" - else - case "$RAW_REQUESTED_TYPE" in - *committee-reports*) ANALYSIS_SUBFOLDER="committeeReports" ;; - *interpellation*) ANALYSIS_SUBFOLDER="interpellations" ;; - *motions*) ANALYSIS_SUBFOLDER="motions" ;; - *propositions*) ANALYSIS_SUBFOLDER="propositions" ;; - *week-ahead*) ANALYSIS_SUBFOLDER="week-ahead" ;; - *month-ahead*) ANALYSIS_SUBFOLDER="month-ahead" ;; - *weekly-review*) ANALYSIS_SUBFOLDER="weekly-review" ;; - *monthly-review*) ANALYSIS_SUBFOLDER="monthly-review" ;; - *breaking*) if [ -z "$_AG_HHMM" ]; then - date -u +%H%M > /tmp/hhmm_val.txt - read _AG_HHMM < /tmp/hhmm_val.txt - fi - ANALYSIS_SUBFOLDER="realtime-$_AG_HHMM" ;; - *deep-inspection*) ANALYSIS_SUBFOLDER="deep-inspection" ;; - *) ANALYSIS_SUBFOLDER="$RAW_REQUESTED_TYPE" ;; - esac - fi -fi -# Stage articles and analysis — scoped to requested article type subfolder -# CRITICAL: Stage only articles generated by THIS run and their analysis subfolder -# Stage individual article HTML files (the script generates them directly in news/) -git diff --name-only -- "news/" 2>/dev/null | xargs -r git add 2>/dev/null || true -git ls-files --others --exclude-standard -- "news/*.html" 2>/dev/null | xargs -r git add 2>/dev/null || true -git add news/metadata/ 2>/dev/null || true 
-git add "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER/" || true
-git add analysis/weekly/ || true
-# 🛡️ Defensive filter: unstage any news/ files that do NOT match $ARTICLE_DATE. The diff-
-# based staging above includes every modified news/*.html regardless of date — if an earlier
-# script touched historical articles this would blow through the safe-outputs 100-file PR
-# limit (E003). See news-realtime-monitor run 24719881413 (received 602 files).
-git diff --cached --name-only > /tmp/staged_files.txt
-awk -v today="$ARTICLE_DATE" '$0 ~ "^news/[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]" && $0 !~ today {print}' /tmp/staged_files.txt > /tmp/historical_news.txt
-if [ -s /tmp/historical_news.txt ]; then
-  HIST_COUNT=0
-  awk 'END{print NR}' /tmp/historical_news.txt > /tmp/hist_count.txt
-  read HIST_COUNT < /tmp/hist_count.txt 2>/dev/null || true
-  echo "⚠️ Unstaging $HIST_COUNT historical news/ files that do not match $ARTICLE_DATE"
-  xargs -a /tmp/historical_news.txt git reset HEAD -- 2>/dev/null || true
-fi
-# Enforce safe-outputs 100-file PR limit (trim at 90 staged files for headroom)
-git diff --cached --name-only 2>/dev/null | wc -l > /tmp/staged_count.txt
-read STAGED_COUNT < /tmp/staged_count.txt
-if [ "$STAGED_COUNT" -gt 90 ]; then
-  echo "⚠️ Staged file count $STAGED_COUNT is nearing the 100-file PR limit. Removing weekly analysis."
-  git reset HEAD -- analysis/weekly/ 2>/dev/null || true
-  git diff --cached --name-only 2>/dev/null | wc -l > /tmp/staged_count.txt
-  read STAGED_COUNT < /tmp/staged_count.txt
-fi
-echo "📊 Final staged file count: $STAGED_COUNT"
-git commit -m "📰 Automated News Generation - $ARTICLE_DATE"
-```
+This workflow imports `../prompts/ext/tier-c-aggregation.md`. Produce **all 14 artifacts** (9 core + 5 Tier-C) and cross-reference sibling analyses. See the extension for the full rules. 
-Then **immediately** call (as a direct tool call, NOT via bash): -``` -safeoutputs___create_pull_request({ - "title": "📰 Automated News Generation - {date}", - "body": "## Automated News Generation\n\nArticles: {count}\nTypes: {types}\nLanguages: {list}\nSource: riksdag-regering-mcp", - "labels": ["automated-news", "news-generation", "needs-editorial-review"] -}) -``` +## What this workflow does -## 🌐 MANDATORY Translation Quality Rules +- **Article type**: `multi` +- **Analysis subfolder**: `analysis/daily/$ARTICLE_DATE/<per-type>/` +- **Core languages produced**: `en`, `sv` (remaining 12 languages dispatched to `news-translate`) +- **One pull request per run** containing analysis + articles + visualisation data. -> **📋 Canonical translation rules are maintained in `news-translate.md`.** +## Time budget (60 min, minimum 45 min of real work) -When generating articles for non-EN/SV languages in this manual workflow: -1. **ALL section headings** and body content MUST be in the target language -2. **Meta keywords** MUST be translated to the target language -3. **data-translate markers**: ZERO `data-translate="true"` spans in final output -4. Swedish API titles MUST be translated to target language -5. Party abbreviations (S, M, SD, V, MP, C, L, KD) are NEVER translated +| Minutes | Phase | Module | +|---------|-------|--------| +| 0–2 | MCP pre-warm + `get_sync_status` | 02 | +| 2–6 | Download data + catalogue | 03 | +| 6–25 | Analysis Pass 1 (methodology read + per-doc analyses + 9 artifacts) | 04 | +| 25–35 | Analysis Pass 2 (read-back + improvements) | 04 | +| 35–37 | Analysis Gate | 05 | +| 37–48 | Article Pass 1 + Pass 2 (EN, SV) | 06 | +| 48–55 | Visual + link validation | 06 | +| 55–60 | Stage, commit, **ONE** `safeoutputs___create_pull_request` | 07 | -For comprehensive per-language rules (RTL, CJK, Nordic, European), localized CONTENT_LABELS, and validation commands, see `news-translate.md`. +Trim scope before quality. 
Never open a second PR to "save" partial work — there is no second PR. -**Recommended workflow**: Generate EN/SV content first with deep analysis, then dispatch `news-translate` for remaining languages: -``` -safeoutputs___dispatch_workflow({ - "workflow_name": "news-translate", - "inputs": { - "article_date": "<YYYY-MM-DD>", - "article_type": "<article-type>", - "languages": "all-extra" - } -}) -``` +## Inputs -> **⚠️ Timing note:** The dispatch runs immediately after creating this PR, but the translate workflow checks out `main` where the EN/SV articles may not yet exist (the content PR hasn't been merged). In this case, the translate workflow will `noop` gracefully. The scheduled translate cron (11:00 and 17:00 UTC weekdays) will pick up the translations after the content PR is merged. +- `article_date` — override date (defaults to today) +- `force_generation` — regenerate even if today's article exists (analysis is always refreshed regardless) +- `languages` — core content languages (default `en,sv`) +- `analysis_depth` — `standard` | `deep` (default) | `comprehensive` -## Error Handling +## Dedup & analysis-only path -| Scenario | Cause | Fix | -|----------|-------|-----| -| Tool not found | MCP server not initialized | Run `source scripts/mcp-setup.sh && echo "MCP_SERVER_URL=$MCP_SERVER_URL"` — source and script MUST be chained with `&&` on one line; never pipe source to tail | -| Empty results | No new documents for the queried article type | Check if analysis artifacts exist — if yes, commit them and create analysis-only PR; if no, call `safeoutputs___noop` | -| Timeout | MCP server response exceeds `timeout-minutes` | Commit any analysis artifacts produced so far, then call safe output | -| Stale data | `hoursSinceSync > 48` from `get_sync_status()` | Add disclaimer noting data staleness; proceed with cached data | +If articles for `$ARTICLE_DATE` + `multi` already exist **and** `force_generation=false`: -🎯 **Now begin: Check date, warm up MCP with 
`get_sync_status()`, run pre-article analysis pipeline, review analysis results, determine article types, generate with the script, validate, and call a safe output tool.** +- Still run the full analysis pipeline (modules 03 → 04 → 05). +- Commit the analysis. +- Open the single PR with title `📊 Analysis Only — Article Generator (Manual) — $ARTICLE_DATE` and label `analysis-only`. -## Step 3d: Economic Commentary (MANDATORY) +Analysis is the primary product — a run never "does nothing" just because articles exist. -> After Step 3c and **before** calling `safeoutputs.create_pull_request`, re-open `economic-data.json` and replace the placeholder `commentary` string with a 2–4 sentence paragraph that: -> - cites **2–3 concrete numeric values** from `dataPoints`; -> - ties the numbers to the day's political developments (not definitions of indicators); -> - is written in plain English (translations are produced downstream by `news-translate`); -> - meets the minimum word count in the coverage matrix for this article type. -> -> Banned phrasings (the multi-dim quality score flags these): "The political landscape remains fluid…", "Touches on X policy…", pure indicator definitions. -> -> Full rules: [`.github/aw/ECONOMIC_DATA_CONTRACT.md`](../aw/ECONOMIC_DATA_CONTRACT.md) §"Writing the AI commentary — workflow Step 3d". +All other rules (bash format, AWF shell safety, MCP access, download pipeline, analysis methodology & gate, article generation, commit & PR policy) live in the imported modules. 
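As a pre-flight for the Step 3d commentary rules, the `commentary` string can be linted before the safe-output call. A hedged sketch only: `check_commentary` is a hypothetical helper name, the 40-word floor is an assumed stand-in for the coverage-matrix minimum, and `scripts/validate-economic-context.ts` remains the authoritative gate.

```bash
check_commentary() {
  # Usage: check_commentary "<commentary string>"
  # Prints each problem found; returns 1 if any problem was detected.
  c="$1"
  problems=0
  case "$c" in
    *"will be filled"*) echo "placeholder commentary was never replaced"; problems=1 ;;
  esac
  # Banned phrasings from Step 3d / Step 3c above (substring match is an approximation).
  for banned in "The political landscape remains fluid" "Touches on" "No chamber debate data is available"; do
    case "$c" in
      *"$banned"*) echo "banned phrasing: $banned"; problems=1 ;;
    esac
  done
  # Require at least 2 concrete numeric values cited from dataPoints.
  printf '%s\n' "$c" | grep -oE '[0-9]+([.,][0-9]+)?' | wc -l > /tmp/cc_nums.txt
  read nums < /tmp/cc_nums.txt
  if [ "$nums" -lt 2 ]; then
    echo "fewer than 2 concrete numeric values cited"
    problems=1
  fi
  printf '%s\n' "$c" | wc -w > /tmp/cc_words.txt
  read words < /tmp/cc_words.txt
  if [ "$words" -lt 40 ]; then
    echo "only $words words (assumed 40-word floor; see coverage matrix)"
    problems=1
  fi
  return "$problems"
}
```

The temp-file-plus-`read` pattern follows the shell style used elsewhere in these modules rather than command substitution for counters.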
diff --git a/.github/workflows/news-committee-reports.lock.yml b/.github/workflows/news-committee-reports.lock.yml index 31a2694c2..f0aab0904 100644 --- a/.github/workflows/news-committee-reports.lock.yml +++ b/.github/workflows/news-committee-reports.lock.yml @@ -1,4 +1,4 @@ -# gh-aw-metadata: {"schema_version":"v3","frontmatter_hash":"c9aad4e669b787336c82ef6a574b5662e36b4305937b24b7c71487a1d6b8efbd","compiler_version":"v0.69.2","agent_id":"copilot","agent_model":"claude-opus-4.7"} +# gh-aw-metadata: {"schema_version":"v3","frontmatter_hash":"85260a8672a99d9dc6656fd713ac8b302d0edb3fecc12c978b6e83fe9e42ff0b","compiler_version":"v0.69.2","agent_id":"copilot","agent_model":"claude-opus-4.7"} # gh-aw-manifest: {"version":1,"secrets":["COPILOT_GITHUB_TOKEN","GH_AW_CI_TRIGGER_TOKEN","GH_AW_GITHUB_MCP_SERVER_TOKEN","GH_AW_GITHUB_TOKEN","GITHUB_TOKEN"],"actions":[{"repo":"actions/checkout","sha":"de0fac2e4500dabe0009e67214ff5f5447ce83dd","version":"v6.0.2"},{"repo":"actions/download-artifact","sha":"3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c","version":"v8.0.1"},{"repo":"actions/github-script","sha":"373c709c69115d41ff229c7e5df9f8788daa9553","version":"v9"},{"repo":"actions/setup-node","sha":"6044e13b5dc448c55e2357c09f80417699197238","version":"6044e13b5dc448c55e2357c09f80417699197238"},{"repo":"actions/upload-artifact","sha":"043fb46d1a93c77aae656e7c1c64a875d1fc6a0a","version":"v7.0.1"},{"repo":"github/gh-aw-actions/setup","sha":"dca90cae5e2ec0ef2275f97efcb832793c86e082","version":"v0.69.2"}],"containers":[{"image":"alpine:latest"},{"image":"ghcr.io/github/gh-aw-firewall/agent:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/api-proxy:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/squid:0.25.26"},{"image":"ghcr.io/github/gh-aw-mcpg:v0.2.26"},{"image":"ghcr.io/github/github-mcp-server:v1.0.0"},{"image":"node:25-alpine"},{"image":"node:lts-alpine"}]} # ___ _ _ # / _ \ | | (_) @@ -24,6 +24,17 @@ # # Generates committee reports analysis articles in core languages (EN, SV). 
Translations for remaining 12 languages are handled by the dedicated news-translate workflow via dispatch-workflow. Single article type per run. # +# Resolved workflow manifest: +# Imports: +# - ../prompts/00-base-contract.md +# - ../prompts/01-bash-and-shell-safety.md +# - ../prompts/02-mcp-access.md +# - ../prompts/03-data-download.md +# - ../prompts/04-analysis-pipeline.md +# - ../prompts/05-analysis-gate.md +# - ../prompts/06-article-generation.md +# - ../prompts/07-commit-and-pr.md +# # Secrets used: # - COPILOT_GITHUB_TOKEN # - GH_AW_CI_TRIGGER_TOKEN @@ -184,14 +195,9 @@ jobs: env: GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt GH_AW_SAFE_OUTPUTS: ${{ runner.temp }}/gh-aw/safeoutputs/outputs.jsonl - GH_AW_EXPR_731DE217: ${{ github.event.inputs.force_generation || 'false' }} GH_AW_GITHUB_ACTOR: ${{ github.actor }} GH_AW_GITHUB_EVENT_COMMENT_ID: ${{ github.event.comment.id }} GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER: ${{ github.event.discussion.number }} - GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH: ${{ github.event.inputs.analysis_depth }} - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE: ${{ github.event.inputs.article_date }} - GH_AW_GITHUB_EVENT_INPUTS_FORCE_GENERATION: ${{ github.event.inputs.force_generation }} - GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES: ${{ github.event.inputs.languages }} GH_AW_GITHUB_EVENT_ISSUE_NUMBER: ${{ github.event.issue.number }} GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: ${{ github.event.pull_request.number }} GH_AW_GITHUB_REPOSITORY: ${{ github.repository }} @@ -202,21 +208,21 @@ jobs: run: | bash "${RUNNER_TEMP}/gh-aw/actions/create_prompt_first.sh" { - cat << 'GH_AW_PROMPT_7880ef6f6dbb320f_EOF' + cat << 'GH_AW_PROMPT_415fc1bfa15f3ea6_EOF' <system> - GH_AW_PROMPT_7880ef6f6dbb320f_EOF + GH_AW_PROMPT_415fc1bfa15f3ea6_EOF cat "${RUNNER_TEMP}/gh-aw/prompts/xpia.md" cat "${RUNNER_TEMP}/gh-aw/prompts/temp_folder_prompt.md" cat "${RUNNER_TEMP}/gh-aw/prompts/markdown.md" cat "${RUNNER_TEMP}/gh-aw/prompts/agentic_workflows_guide.md" cat 
"${RUNNER_TEMP}/gh-aw/prompts/repo_memory_prompt.md" cat "${RUNNER_TEMP}/gh-aw/prompts/safe_outputs_prompt.md" - cat << 'GH_AW_PROMPT_7880ef6f6dbb320f_EOF' + cat << 'GH_AW_PROMPT_415fc1bfa15f3ea6_EOF' <safe-output-tools> - Tools: add_comment, create_pull_request(max:2), dispatch_workflow, missing_tool, missing_data, noop - GH_AW_PROMPT_7880ef6f6dbb320f_EOF + Tools: add_comment, create_pull_request, dispatch_workflow, missing_tool, missing_data, noop + GH_AW_PROMPT_415fc1bfa15f3ea6_EOF cat "${RUNNER_TEMP}/gh-aw/prompts/safe_outputs_create_pull_request.md" - cat << 'GH_AW_PROMPT_7880ef6f6dbb320f_EOF' + cat << 'GH_AW_PROMPT_415fc1bfa15f3ea6_EOF' </safe-output-tools> <github-context> The following GitHub context information is available for this workflow: @@ -246,22 +252,25 @@ jobs: {{/if}} </github-context> - GH_AW_PROMPT_7880ef6f6dbb320f_EOF + GH_AW_PROMPT_415fc1bfa15f3ea6_EOF cat "${RUNNER_TEMP}/gh-aw/prompts/github_mcp_tools_with_safeoutputs_prompt.md" - cat << 'GH_AW_PROMPT_7880ef6f6dbb320f_EOF' + cat << 'GH_AW_PROMPT_415fc1bfa15f3ea6_EOF' </system> + {{#runtime-import .github/prompts/00-base-contract.md}} + {{#runtime-import .github/prompts/01-bash-and-shell-safety.md}} + {{#runtime-import .github/prompts/02-mcp-access.md}} + {{#runtime-import .github/prompts/03-data-download.md}} + {{#runtime-import .github/prompts/04-analysis-pipeline.md}} + {{#runtime-import .github/prompts/05-analysis-gate.md}} + {{#runtime-import .github/prompts/06-article-generation.md}} + {{#runtime-import .github/prompts/07-commit-and-pr.md}} {{#runtime-import .github/workflows/news-committee-reports.md}} - GH_AW_PROMPT_7880ef6f6dbb320f_EOF + GH_AW_PROMPT_415fc1bfa15f3ea6_EOF } > "$GH_AW_PROMPT" - name: Interpolate variables and render templates uses: actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9 env: GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt - GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH: ${{ github.event.inputs.analysis_depth }} - 
GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE: ${{ github.event.inputs.article_date }} - GH_AW_EXPR_731DE217: ${{ github.event.inputs.force_generation || 'false' }} - GH_AW_GITHUB_EVENT_INPUTS_FORCE_GENERATION: ${{ github.event.inputs.force_generation }} - GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES: ${{ github.event.inputs.languages }} with: script: | const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); @@ -272,14 +281,9 @@ jobs: uses: actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9 env: GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt - GH_AW_EXPR_731DE217: ${{ github.event.inputs.force_generation || 'false' }} GH_AW_GITHUB_ACTOR: ${{ github.actor }} GH_AW_GITHUB_EVENT_COMMENT_ID: ${{ github.event.comment.id }} GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER: ${{ github.event.discussion.number }} - GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH: ${{ github.event.inputs.analysis_depth }} - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE: ${{ github.event.inputs.article_date }} - GH_AW_GITHUB_EVENT_INPUTS_FORCE_GENERATION: ${{ github.event.inputs.force_generation }} - GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES: ${{ github.event.inputs.languages }} GH_AW_GITHUB_EVENT_ISSUE_NUMBER: ${{ github.event.issue.number }} GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: ${{ github.event.pull_request.number }} GH_AW_GITHUB_REPOSITORY: ${{ github.repository }} @@ -302,14 +306,9 @@ jobs: return await substitutePlaceholders({ file: process.env.GH_AW_PROMPT, substitutions: { - GH_AW_EXPR_731DE217: process.env.GH_AW_EXPR_731DE217, GH_AW_GITHUB_ACTOR: process.env.GH_AW_GITHUB_ACTOR, GH_AW_GITHUB_EVENT_COMMENT_ID: process.env.GH_AW_GITHUB_EVENT_COMMENT_ID, GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER: process.env.GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER, - GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH: process.env.GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH, - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE: process.env.GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE, - GH_AW_GITHUB_EVENT_INPUTS_FORCE_GENERATION: 
process.env.GH_AW_GITHUB_EVENT_INPUTS_FORCE_GENERATION, - GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES: process.env.GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES, GH_AW_GITHUB_EVENT_ISSUE_NUMBER: process.env.GH_AW_GITHUB_EVENT_ISSUE_NUMBER, GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: process.env.GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER, GH_AW_GITHUB_REPOSITORY: process.env.GH_AW_GITHUB_REPOSITORY, @@ -411,7 +410,7 @@ jobs: run: | npm ci --prefer-offline --no-audit - name: Pre-warm MCP server (Render.com cold start mitigation) - run: "echo \"🔥 Pre-warming riksdag-regering MCP server via MCP protocol...\"\nMCP_URL=\"https://riksdag-regering-ai.onrender.com/mcp\"\nWARM=false\nfor i in 1 2 3 4 5 6; do\n RESP=$(curl -sf --max-time 30 -X POST \\\n -H \"Content-Type: application/json\" \\\n -d '{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\",\"params\":{}}' \\\n \"$MCP_URL\" 2>/dev/null) || true\n if echo \"$RESP\" | grep -q '\"tools\"'; then\n TOOL_COUNT=$(echo \"$RESP\" | grep -o '\"name\"' | wc -l)\n echo \"✅ MCP server responded on attempt $i with $TOOL_COUNT tools registered\"\n WARM=true\n break\n fi\n echo \"⏳ Attempt $i/6 — server may be cold-starting, waiting 20s...\"\n sleep 20\ndone\nif [ \"$WARM\" = \"false\" ]; then\n echo \"⚠️ MCP server did not respond after 6 attempts — agent will retry via in-prompt health gate\"\nfi\necho \"🔄 Starting background keep-alive pinger (every 30s, max 55 min — covers full 60-min workflow through safe-output PR creation)...\"\nKEEP_ALIVE_START=$(date +%s)\nKEEP_ALIVE_END=$((KEEP_ALIVE_START + 3300))\nexport MCP_URL KEEP_ALIVE_END\nnohup bash -c '\n while :; do\n NOW=$(date +%s)\n if [ \"$NOW\" -ge \"$KEEP_ALIVE_END\" ]; then\n break\n fi\n curl -sf --max-time 10 -X POST \\\n -H \"Content-Type: application/json\" \\\n -d \"{\\\"jsonrpc\\\":\\\"2.0\\\",\\\"id\\\":1,\\\"method\\\":\\\"tools/list\\\",\\\"params\\\":{}}\" \\\n \"$MCP_URL\" -o /dev/null 2>/dev/null || true\n sleep 30\n done\n' </dev/null >/tmp/mcp-keepalive.log 2>&1 
&\nKEEP_ALIVE_PID=$!\ndisown \"$KEEP_ALIVE_PID\" 2>/dev/null || true\necho \"Keep-alive PID: $KEEP_ALIVE_PID (auto-exits after 55 min; log: /tmp/mcp-keepalive.log)\"\n" + run: "echo \"🔥 Pre-warming riksdag-regering MCP server via MCP protocol...\"\nMCP_URL=\"https://riksdag-regering-ai.onrender.com/mcp\"\nWARM=false\nfor i in 1 2 3 4 5 6; do\n RESP=$(curl -sf --max-time 30 -X POST \\\n -H \"Content-Type: application/json\" \\\n -d '{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\",\"params\":{}}' \\\n \"$MCP_URL\" 2>/dev/null) || true\n if echo \"$RESP\" | grep -q '\"tools\"'; then\n TOOL_COUNT=$(echo \"$RESP\" | grep -o '\"name\"' | wc -l)\n echo \"✅ MCP server responded on attempt $i with $TOOL_COUNT tools registered\"\n WARM=true\n break\n fi\n echo \"⏳ Attempt $i/6 — server may be cold-starting, waiting 20s...\"\n sleep 20\ndone\nif [ \"$WARM\" = \"false\" ]; then\n echo \"⚠️ MCP server did not respond after 6 attempts — agent will retry via in-prompt health gate\"\nfi\n" - name: Pre-flight external endpoint reachability check (runs before MCP Gateway) run: "echo \"🔍 Network Diagnostics — $(date -u '+%Y-%m-%dT%H:%M:%SZ')\"\necho \"═══════════════════════════════════════════\"\necho \"\"\necho \"📡 DNS Resolution Tests:\"\nfor domain in riksdag-regering-ai.onrender.com api.scb.se api.worldbank.org data.riksdagen.se www.riksdagen.se www.regeringen.se; do\n if nslookup \"$domain\" >/dev/null 2>&1; then\n IP=$(nslookup \"$domain\" 2>/dev/null | grep -A1 \"Name:\" | grep \"Address:\" | head -1 | awk '{print $2}')\n echo \" ✅ $domain → $IP\"\n else\n echo \" ❌ $domain — DNS FAILED\"\n fi\ndone\necho \"\"\necho \"🌐 HTTPS Connectivity Tests:\"\nfor url in \\\n \"https://riksdag-regering-ai.onrender.com/mcp\" \\\n \"https://api.scb.se/OV0104/v2beta\" \\\n \"https://api.worldbank.org/v2/country/SE?format=json\" \\\n \"https://data.riksdagen.se/dokumentlista/?sok=test&doktyp=bet&utformat=json&a=1\" \\\n; do\n HTTP_CODE=$(curl -s -o /dev/null -w \"%{http_code}\" 
--max-time 10 \"$url\" 2>/dev/null || echo \"000\")\n DOMAIN=$(echo \"$url\" | sed 's|https://||' | cut -d/ -f1)\n if [ \"$HTTP_CODE\" -ge 200 ] && [ \"$HTTP_CODE\" -lt 400 ]; then\n echo \" ✅ $DOMAIN → HTTP $HTTP_CODE\"\n elif [ \"$HTTP_CODE\" = \"000\" ]; then\n echo \" ❌ $DOMAIN → TIMEOUT/UNREACHABLE\"\n else\n echo \" ⚠️ $DOMAIN → HTTP $HTTP_CODE\"\n fi\ndone\necho \"\"\necho \"🔌 MCP Server Tool Count:\"\nTOOL_RESP=$(curl -sf --max-time 15 -X POST \\\n -H \"Content-Type: application/json\" \\\n -d '{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\",\"params\":{}}' \\\n \"https://riksdag-regering-ai.onrender.com/mcp\" 2>/dev/null) || TOOL_RESP=\"\"\nif echo \"$TOOL_RESP\" | grep -q '\"tools\"'; then\n TOOL_COUNT=$(echo \"$TOOL_RESP\" | grep -o '\"name\"' | wc -l)\n echo \" ✅ riksdag-regering MCP: $TOOL_COUNT tools registered\"\nelse\n echo \" ❌ riksdag-regering MCP: No tools response (server may still be starting)\"\nfi\necho \"\"\necho \"═══════════════════════════════════════════\"\n" @@ -499,16 +498,16 @@ jobs: mkdir -p "${RUNNER_TEMP}/gh-aw/safeoutputs" mkdir -p /tmp/gh-aw/safeoutputs mkdir -p /tmp/gh-aw/mcp-logs/safeoutputs - cat > "${RUNNER_TEMP}/gh-aw/safeoutputs/config.json" << 'GH_AW_SAFE_OUTPUTS_CONFIG_84d8f68f15e68ed2_EOF' - 
{"add_comment":{"max":1},"create_pull_request":{"draft":false,"expires":336,"labels":["agentic-news","analysis-data"],"max":2,"max_patch_size":4096,"protected_files":["package.json","bun.lockb","bunfig.toml","deno.json","deno.jsonc","deno.lock","global.json","NuGet.Config","Directory.Packages.props","mix.exs","mix.lock","go.mod","go.sum","stack.yaml","stack.yaml.lock","pom.xml","build.gradle","build.gradle.kts","settings.gradle","settings.gradle.kts","gradle.properties","package-lock.json","yarn.lock","pnpm-lock.yaml","npm-shrinkwrap.json","requirements.txt","Pipfile","Pipfile.lock","pyproject.toml","setup.py","setup.cfg","Gemfile","Gemfile.lock","uv.lock","CODEOWNERS","AGENTS.md","CLAUDE.md","GEMINI.md"],"protected_path_prefixes":[".github/",".agents/"]},"create_report_incomplete_issue":{},"dispatch_workflow":{"max":1,"workflow_files":{"news-translate":".lock.yml"},"workflows":["news-translate"]},"missing_data":{},"missing_tool":{},"noop":{"max":1,"report-as-issue":"true"},"push_repo_memory":{"memories":[{"dir":"/tmp/gh-aw/repo-memory/default","id":"default","max_file_count":50,"max_file_size":51200,"max_patch_size":51200}]},"report_incomplete":{}} - GH_AW_SAFE_OUTPUTS_CONFIG_84d8f68f15e68ed2_EOF + cat > "${RUNNER_TEMP}/gh-aw/safeoutputs/config.json" << 'GH_AW_SAFE_OUTPUTS_CONFIG_95936ae540a1c48f_EOF' + 
{"add_comment":{"max":1},"create_pull_request":{"draft":false,"expires":336,"labels":["agentic-news","analysis-data"],"max":1,"max_patch_size":4096,"protected_files":["package.json","bun.lockb","bunfig.toml","deno.json","deno.jsonc","deno.lock","global.json","NuGet.Config","Directory.Packages.props","mix.exs","mix.lock","go.mod","go.sum","stack.yaml","stack.yaml.lock","pom.xml","build.gradle","build.gradle.kts","settings.gradle","settings.gradle.kts","gradle.properties","package-lock.json","yarn.lock","pnpm-lock.yaml","npm-shrinkwrap.json","requirements.txt","Pipfile","Pipfile.lock","pyproject.toml","setup.py","setup.cfg","Gemfile","Gemfile.lock","uv.lock","CODEOWNERS","AGENTS.md","CLAUDE.md","GEMINI.md"],"protected_path_prefixes":[".github/",".agents/"]},"create_report_incomplete_issue":{},"dispatch_workflow":{"max":1,"workflow_files":{"news-translate":".lock.yml"},"workflows":["news-translate"]},"missing_data":{},"missing_tool":{},"noop":{"max":1,"report-as-issue":"true"},"push_repo_memory":{"memories":[{"dir":"/tmp/gh-aw/repo-memory/default","id":"default","max_file_count":50,"max_file_size":51200,"max_patch_size":51200}]},"report_incomplete":{}} + GH_AW_SAFE_OUTPUTS_CONFIG_95936ae540a1c48f_EOF - name: Write Safe Outputs Tools env: GH_AW_TOOLS_META_JSON: | { "description_suffixes": { "add_comment": " CONSTRAINTS: Maximum 1 comment(s) can be added. Supports reply_to_id for discussion threading.", - "create_pull_request": " CONSTRAINTS: Maximum 2 pull request(s) can be created. Labels [\"agentic-news\" \"analysis-data\"] will be automatically added." + "create_pull_request": " CONSTRAINTS: Maximum 1 pull request(s) can be created. Labels [\"agentic-news\" \"analysis-data\"] will be automatically added." 
}, "repo_params": {}, "dynamic_tools": [ @@ -767,7 +766,7 @@ jobs: mkdir -p /home/runner/.copilot GH_AW_NODE=$(which node 2>/dev/null || command -v node 2>/dev/null || echo node) - cat << GH_AW_MCP_CONFIG_9515cc29f523feff_EOF | "$GH_AW_NODE" "${RUNNER_TEMP}/gh-aw/actions/start_mcp_gateway.cjs" + cat << GH_AW_MCP_CONFIG_a88ce2a3bc0403de_EOF | "$GH_AW_NODE" "${RUNNER_TEMP}/gh-aw/actions/start_mcp_gateway.cjs" { "mcpServers": { "agenticworkflows": { @@ -883,7 +882,7 @@ jobs: "payloadDir": "${MCP_GATEWAY_PAYLOAD_DIR}" } } - GH_AW_MCP_CONFIG_9515cc29f523feff_EOF + GH_AW_MCP_CONFIG_a88ce2a3bc0403de_EOF - name: Download activation artifact uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1 with: @@ -1570,7 +1569,7 @@ jobs: GH_AW_ALLOWED_DOMAINS: "*.githubusercontent.com,api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.imf.org,api.individual.githubcopilot.com,api.npms.io,api.scb.se,api.snapcraft.io,api.worldbank.org,archive.ubuntu.com,azure.archive.ubuntu.com,bun.sh,cdn.jsdelivr.net,codeload.github.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,data.imf.org,data.riksdagen.se,deb.nodesource.com,deno.land,docs.github.com,esm.sh,get.pnpm.io,github-cloud.githubusercontent.com,github-cloud.s3.amazonaws.com,github.blog,github.com,github.githubassets.com,googleapis.deno.dev,googlechromelabs.github.io,hack23.com,hack23.github.io,host.docker.internal,json-schema.org,json.schemastore.org,jsr.io,keyserver.ubuntu.com,lfs.github.com,localhost,nodejs.org,npm.pkg.github.com,npmjs.com,npmjs.org,objects.githubusercontent.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubuserconten
t.com,regeringen.se,registry.bower.io,registry.npmjs.com,registry.npmjs.org,registry.yarnpkg.com,repo.yarnpkg.com,riksdag-regering-ai.onrender.com,riksdagen.se,riksdagsmonitor.com,s.symcb.com,s.symcd.com,security.ubuntu.com,skimdb.npmjs.com,storage.googleapis.com,telemetry.enterprise.githubcopilot.com,telemetry.vercel.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com,www.googleapis.com,www.hack23.com,www.imf.org,www.npmjs.com,www.npmjs.org,www.regeringen.se,www.riksdagen.se,www.riksdagsmonitor.com,www.scb.se,yarnpkg.com" GITHUB_SERVER_URL: ${{ github.server_url }} GITHUB_API_URL: ${{ github.api_url }} - GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: "{\"add_comment\":{\"max\":1},\"create_pull_request\":{\"draft\":false,\"expires\":336,\"labels\":[\"agentic-news\",\"analysis-data\"],\"max\":2,\"max_patch_size\":4096,\"protected_files\":[\"package.json\",\"bun.lockb\",\"bunfig.toml\",\"deno.json\",\"deno.jsonc\",\"deno.lock\",\"global.json\",\"NuGet.Config\",\"Directory.Packages.props\",\"mix.exs\",\"mix.lock\",\"go.mod\",\"go.sum\",\"stack.yaml\",\"stack.yaml.lock\",\"pom.xml\",\"build.gradle\",\"build.gradle.kts\",\"settings.gradle\",\"settings.gradle.kts\",\"gradle.properties\",\"package-lock.json\",\"yarn.lock\",\"pnpm-lock.yaml\",\"npm-shrinkwrap.json\",\"requirements.txt\",\"Pipfile\",\"Pipfile.lock\",\"pyproject.toml\",\"setup.py\",\"setup.cfg\",\"Gemfile\",\"Gemfile.lock\",\"uv.lock\",\"CODEOWNERS\",\"AGENTS.md\",\"CLAUDE.md\",\"GEMINI.md\"],\"protected_path_prefixes\":[\".github/\",\".agents/\"]},\"create_report_incomplete_issue\":{},\"dispatch_workflow\":{\"max\":1,\"workflow_files\":{\"news-translate\":\".lock.yml\"},\"workflows\":[\"news-translate\"]},\"missing_data\":{},\"missing_tool\":{},\"noop\":{\"max\":1,\"report-as-issue\":\"true\"},\"report_incomplete\":{}}" + GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: 
"{\"add_comment\":{\"max\":1},\"create_pull_request\":{\"draft\":false,\"expires\":336,\"labels\":[\"agentic-news\",\"analysis-data\"],\"max\":1,\"max_patch_size\":4096,\"protected_files\":[\"package.json\",\"bun.lockb\",\"bunfig.toml\",\"deno.json\",\"deno.jsonc\",\"deno.lock\",\"global.json\",\"NuGet.Config\",\"Directory.Packages.props\",\"mix.exs\",\"mix.lock\",\"go.mod\",\"go.sum\",\"stack.yaml\",\"stack.yaml.lock\",\"pom.xml\",\"build.gradle\",\"build.gradle.kts\",\"settings.gradle\",\"settings.gradle.kts\",\"gradle.properties\",\"package-lock.json\",\"yarn.lock\",\"pnpm-lock.yaml\",\"npm-shrinkwrap.json\",\"requirements.txt\",\"Pipfile\",\"Pipfile.lock\",\"pyproject.toml\",\"setup.py\",\"setup.cfg\",\"Gemfile\",\"Gemfile.lock\",\"uv.lock\",\"CODEOWNERS\",\"AGENTS.md\",\"CLAUDE.md\",\"GEMINI.md\"],\"protected_path_prefixes\":[\".github/\",\".agents/\"]},\"create_report_incomplete_issue\":{},\"dispatch_workflow\":{\"max\":1,\"workflow_files\":{\"news-translate\":\".lock.yml\"},\"workflows\":[\"news-translate\"]},\"missing_data\":{},\"missing_tool\":{},\"noop\":{\"max\":1,\"report-as-issue\":\"true\"},\"report_incomplete\":{}}" GH_AW_CI_TRIGGER_TOKEN: ${{ secrets.GH_AW_CI_TRIGGER_TOKEN }} with: github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }} diff --git a/.github/workflows/news-committee-reports.md b/.github/workflows/news-committee-reports.md index 0cc40e41f..f6924ee29 100644 --- a/.github/workflows/news-committee-reports.md +++ b/.github/workflows/news-committee-reports.md @@ -2,6 +2,15 @@ name: "News: Committee Reports" description: Generates committee reports analysis articles in core languages (EN, SV). Translations for remaining 12 languages are handled by the dedicated news-translate workflow via dispatch-workflow. Single article type per run. 
strict: false +imports: + - ../prompts/00-base-contract.md + - ../prompts/01-bash-and-shell-safety.md + - ../prompts/02-mcp-access.md + - ../prompts/03-data-download.md + - ../prompts/04-analysis-pipeline.md + - ../prompts/05-analysis-gate.md + - ../prompts/06-article-generation.md + - ../prompts/07-commit-and-pr.md on: schedule: daily around 4:00 on weekdays workflow_dispatch: @@ -119,7 +128,7 @@ safe-outputs: labels: [agentic-news, analysis-data] draft: false expires: 14d - max: 2 + max: 1 add-comment: {} dispatch-workflow: workflows: [news-translate] @@ -157,26 +166,6 @@ steps: if [ "$WARM" = "false" ]; then echo "⚠️ MCP server did not respond after 6 attempts — agent will retry via in-prompt health gate" fi - echo "🔄 Starting background keep-alive pinger (every 30s, max 55 min — covers full 60-min workflow through safe-output PR creation)..." - KEEP_ALIVE_START=$(date +%s) - KEEP_ALIVE_END=$((KEEP_ALIVE_START + 3300)) - export MCP_URL KEEP_ALIVE_END - nohup bash -c ' - while :; do - NOW=$(date +%s) - if [ "$NOW" -ge "$KEEP_ALIVE_END" ]; then - break - fi - curl -sf --max-time 10 -X POST \ - -H "Content-Type: application/json" \ - -d "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\",\"params\":{}}" \ - "$MCP_URL" -o /dev/null 2>/dev/null || true - sleep 30 - done - ' </dev/null >/tmp/mcp-keepalive.log 2>&1 & - KEEP_ALIVE_PID=$! - disown "$KEEP_ALIVE_PID" 2>/dev/null || true - echo "Keep-alive PID: $KEEP_ALIVE_PID (auto-exits after 55 min; log: /tmp/mcp-keepalive.log)" - name: Pre-flight external endpoint reachability check (runs before MCP Gateway) run: | @@ -230,740 +219,47 @@ engine: model: claude-opus-4.7 --- -# 📋 Committee Reports Article Generator - -You are the **News Journalist Agent** for Riksdagsmonitor generating **committee reports** analysis articles. 
- -## 🔴 CRITICAL: AI Writes ALL Content with Iterative Improvement (v5.0) - -> **You are a political intelligence analyst, NOT a script executor.** Your PRIMARY job is to produce excellent quality political intelligence through iterative improvement. You MUST: -> 1. **ANALYZE** parliamentary data deeply — SWOT, stakeholder perspectives, risk assessment, election implications -> 2. **WRITE** genuine political intelligence articles with specific actors, evidence citations, and analytical insight -> 3. **USE** the script (`generate-news-enhanced.ts`) ONLY for HTML formatting — the script creates a shell, YOU fill it with analysis -> 4. **REPLACE** every `AI_MUST_REPLACE` marker with real analysis — ZERO markers may remain -> 5. **ITERATE** — read ALL your output back completely and IMPROVE every section (minimum 2 full passes) -> 6. **VERIFY** article quality: minimum 1000 words, SWOT analysis, stakeholder perspectives, dok_id citations -> 7. **SPEND THE FULL TIME** — use at least 45 of the 60 allocated minutes doing real work -> -> 🔴 **ITERATIVE IMPROVEMENT IS MANDATORY (2+ passes):** -> - **Analysis Pass 1** (15 min): Create analysis for every document following templates -> - **Analysis Pass 2** (7 min): Read ALL analysis back, improve evidence, diagrams, cross-references -> - **Article Pass 1** (10 min): Generate articles with AI-written content from analysis -> - **Article Pass 2** (8 min): Read ALL articles back completely, improve every section -> - **NEVER complete early** — if you finish ahead, use remaining time to deepen analysis -> -> **If the final article reads like a list of document titles with generic descriptions, you have FAILED.** Rewrite with genuine political analysis before committing. - - -## 🫀 MANDATORY EARLY HEARTBEAT PR GATE (read BEFORE starting) - -> 🔴 **CRITICAL — this single rule prevents "No Safe Outputs Generated" failures** (see run [24709578961](https://github.com/Hack23/riksdagsmonitor/actions/runs/24709578961), 2026-04-21). 
-> -> The `safeoutputs` MCP server uses a Streamable-HTTP session with a **~30–35 minute idle timeout**. If your FIRST `safeoutputs___*` call happens after minute ~30, you WILL get `"session not found"` and lose ALL work. This is not theoretical — it just happened: the agent spent 37 min doing manual HTML edits and hit `session not found` on every subsequent `create_pull_request`, `noop`, and `push_repo_memory` call. -> -> **HARD RULE — Heartbeat PR no later than minute 15 of agent time:** -> 1. By minute **15** (of your 45-min budget), you MUST have: -> - Committed whatever analysis artifacts exist in `analysis/daily/$ARTICLE_DATE/committeeReports/` (even if only 2–3 files) -> - Called `safeoutputs___create_pull_request` with title `🫀 Heartbeat - Committee Reports - {date}` and `draft: true` -> - Run `git checkout main` afterwards so later commits don't stack onto the frozen patch -> 2. This call **resets the session idle timer** and preserves your work if anything downstream fails. -> 3. Then continue with Pass 2, article generation, and the FINAL (non-draft) PR call around minute 40–43. -> 4. `create-pull-request.max: 2` in the frontmatter — you have **exactly two PR calls budgeted**: one heartbeat (draft), one final. -> -> **Do NOT "save the single PR call for the end".** Do NOT defer heartbeat until Pass 2 completes. The heartbeat is your session-keepalive AND your crash-safety net. If your final PR call later fails with `session not found`, the heartbeat PR still contains committed partial work and a human reviewer can finish it. -> -> ⚠️ **Time-wasting anti-pattern that kills the session:** do NOT spend more than 5 minutes patching generator HTML output with `python3`, `sed`, or heredoc. It is explicitly forbidden (see "Article Generation Safety") AND it is the #1 cause of session-timeout failures. If `generate-news-enhanced.ts` output is incomplete, fix the template/generator in a follow-up PR — do not hand-edit HTML in this run. 
- - -## 🔧 Workflow Dispatch Parameters - -- **force_generation** = `${{ github.event.inputs.force_generation }}` -- **languages** = `${{ github.event.inputs.languages }}` -- **analysis_depth** = `${{ github.event.inputs.analysis_depth }}` - -If **force_generation** is `true`, generate articles even if recent ones exist. Use the **languages** value to determine which languages to generate. - -## 🚨 CRITICAL: Single Article Type Focus - -**This workflow generates ONLY `committee-reports` articles.** Do not generate other article types. -This focused approach ensures: -- Smaller patch sizes (avoids safe_outputs failures) -- Faster execution within timeout -- Independent scheduling per article type - -## 🧠 Repo Memory - -Uses `memory/news-generation` branch. START: read `memory/news-generation/last-run-news-committee-reports.json` + `memory/news-generation/covered-documents/{YYYY-MM-DD}.json`. END: update both + `memory/news-generation/translation-status.json`. Skip already-covered dok_ids. - -## ⏱️ Time Budget (45 minutes) — ENFORCED Minimum 40 Minutes - -> 🔴 **SYSTEMIC ISSUE (PR #1794 audit, 2026-04-16)**: ALL news workflows completing in 13-22 min of 45-min allocation, producing shallow analysis. Agent MUST use at least 40 of 45 minutes. Completion < 40 min = insufficient iteration = REJECTED. - -```bash -date +%s > /tmp/start_time.txt -read START_TIME < /tmp/start_time.txt -``` - -- **Minutes 0–3**: Date check, MCP warm-up with `get_sync_status()` -- **Minutes 3–6**: Run download-parliamentary-data pipeline (download data) -- **Minutes 6–13**: 🚨 **AI Analysis Pass 1 (Part A — 7 min)**: Start per-file analysis for the highest-significance documents first (synthesis-summary.md + top 3 dok_ids' analyses + an initial risk-assessment.md draft) so the Heartbeat PR at minute 15 has real content. These initial drafts will be completed in Part B and deepened in Pass 2 — no `AI_MUST_REPLACE` markers or template stubs may remain by the final PR. 
-- **Minutes 13–15**: 🫀 **MANDATORY EARLY Heartbeat PR** — `git add` whatever analysis artifacts exist in `analysis/daily/$ARTICLE_DATE/committeeReports/` (even partial), `git commit -m "wip: committee-reports heartbeat {date}"`, then **`safeoutputs___create_pull_request`** with title `🫀 Heartbeat - Committee Reports - {date}` and **`draft: true`**. Run `git checkout main` after the call so subsequent commits don't stack onto the frozen patch. This **resets the safeoutputs session idle timer** (~30–35 min window) AND preserves work if later phases fail. **NON-NEGOTIABLE: if you reach minute 18 without a successful heartbeat PR, stop all other work and call it immediately.** -- **Minutes 15–23**: 🚨 **AI Analysis Pass 1 (Part B — 8 min)**: Complete per-file analysis for EVERY remaining document with Mermaid diagrams, evidence tables, SWOT entries. **Total Pass 1 = 15 min (7 + 8)** — meets the `deep` depth tier minimum. -- **Minutes 23–30**: 🚨 **AI Analysis Pass 2 (7 min)**: Read ALL analysis artifacts back, improve every section, replace ALL script stubs and `AI_MUST_REPLACE` markers with AI analysis. Run enrichment verification gate. **Total analysis phase = 22 min (Pass 1: 15 + Pass 2: 7).** -- **Minutes 30–32**: Run ENFORCED Minimum Time Gate + Enrichment Verification Gate (SHARED_PROMPT_PATTERNS.md). Both MUST pass. -- **Minutes 32–38**: Generate articles for core languages (EN, SV) using `npx tsx scripts/generate-news-enhanced.ts`. **Do NOT** post-edit the generated HTML with `python3`/heredoc/`sed` — see Article Generation Safety. -- **Minutes 38–42**: 🚨 **Article Improvement Pass**: Read ALL articles back, replace AI_MUST_REPLACE markers, improve content. Run article quality component gate. -- **Minutes 42–44**: Validate, commit, create **FINAL (non-draft)** PR with `safeoutputs___create_pull_request`. This is your second and last PR call (`max: 2`). 
-- **Minutes 44–45**: 🚨 **HARD DEADLINE** — If the final PR call fails with `session not found`, the heartbeat PR from minute 15 already preserves partial work. Do NOT call `safeoutputs___noop` in that case — the heartbeat PR is your output. - -> ⚠️ **Analysis phase is 22 minutes minimum (Pass 1: 15 min = 7 Part A + 8 Part B; Pass 2: 7 min)** — every analysis file must contain color-coded Mermaid diagrams, structured evidence tables with dok_id citations, and follow template structure exactly. ALL script-generated stubs and `AI_MUST_REPLACE` markers MUST be replaced with AI-enriched analysis before the final PR. Run the ENFORCED gates from SHARED_PROMPT_PATTERNS.md before proceeding to article generation. - -## ⚠️ CRITICAL: Bash Tool Call Format - -> **Full reference:** See `SHARED_PROMPT_PATTERNS.md` → "Bash Tool Call Format". Key rule: every `bash` call MUST have both `command` AND `description` parameters. Example: `bash({ command: "date -u '+%Y-%m-%d'", description: "Get current UTC date" })`. Calls missing either field fail with `Multiple validation errors: - "command": Required - "description": Required`. - -## 🛡️ AWF Shell Safety - -> **Full reference:** See `SHARED_PROMPT_PATTERNS.md` → "AWF Shell Safety". Summary: use `$VAR` not `$`+`{VAR}`, use `find -exec` not `$(...)`, set defaults with `if/then` before using `$VAR`. - -## 🔤 UTF-8 Encoding - -> **Full reference:** See `SHARED_PROMPT_PATTERNS.md` → "UTF-8 Encoding". Summary: use native UTF-8 (`ö`, `ä`, `å`) — NEVER HTML entities (`ö`, `ä`). Author: `James Pether Sörling`. - - -## 🚫 CRITICAL: Article Generation Safety - -**Articles MUST be generated using `npx tsx scripts/generate-news-enhanced.ts` — NEVER manually.** - -The repository provides a complete article generation pipeline. 
You MUST use it (see Generation Steps below for the full `LANG_ARG` derivation from the `languages` dispatch input; default is `en,sv`): -```bash -source scripts/mcp-setup.sh && npx tsx scripts/generate-news-enhanced.ts --types=committee-reports --languages="$LANG_ARG" --skip-existing -``` - -**❌ NEVER do any of the following:** -- NEVER use `python3` or `python3 -c` to build HTML article files -- NEVER create `.py` scripts to generate articles -- NEVER use bash heredoc (`cat > file << 'EOF'`) to write HTML files — it silently truncates large content -- NEVER manually construct HTML articles line-by-line with `echo`, `printf`, or any other method -- NEVER spend more than 5 minutes attempting to manually build article HTML - -**If `generate-news-enhanced.ts` fails or returns 0 articles:** -1. Check if MCP data was returned (retry MCP calls if needed) -2. Check if analysis artifacts exist in `analysis/daily/YYYY-MM-DD/` — if yes, commit them and create an analysis-only PR -3. If MCP server is unreachable AND no data was downloaded AND no analysis artifacts exist, use `safeoutputs___noop` — this is the ONLY valid noop scenario -4. 
Do NOT attempt to manually create articles as a fallback - -## Required Skills - -Consult as needed — do NOT read all files upfront: -- **Skills:** `.github/skills/editorial-standards/SKILL.md`, `.github/skills/swedish-political-system/SKILL.md`, `.github/skills/legislative-monitoring/SKILL.md`, `.github/skills/riksdag-regering-mcp/SKILL.md`, `.github/skills/language-expertise/SKILL.md`, `.github/skills/gh-aw-safe-outputs/SKILL.md` -- **Analysis:** `scripts/prompts/v2/political-analysis.md`, `per-file-intelligence-analysis.md`, `quality-criteria.md`, `stakeholder-perspectives.md` -- **Methodology:** `analysis/methodologies/ai-driven-analysis-guide.md` (v5.0) + `analysis/templates/per-file-political-intelligence.md` - -## 📊 MANDATORY Multi-Step AI Analysis Framework - -### Article Type Isolation - -> 🚨 **This workflow writes analysis ONLY to `analysis/daily/$ARTICLE_DATE/committeeReports/`**. NEVER write to the parent date directory or another article type's folder. See SHARED_PROMPT_PATTERNS.md "Article Type Isolation" section. - -### Standardised Analysis Depth Gate - -> ⚠️ **Default is `deep`** — not `standard`. Analysis must always produce publication-quality output with Mermaid diagrams, evidence tables, and quantified risk matrices. - -| Depth | AI iterations | SWOT stakeholders | Charts | Mindmap | Mermaid diagrams | Risk matrix (L×I) | Forward indicators | Min. 
analysis time | -|-------|--------------|-------------------|--------|---------|-----------------|-------------------|-------------------|-------------------| -| standard | 1-2 | ≥5 (of 8 groups) | ≥1 | optional | ≥1 color-coded | ≥2 risks scored | ≥2 with triggers | 10 minutes | -| deep | 2-3 | ≥7 (of 8 groups) | ≥2 | required | ≥2 color-coded | ≥4 risks scored | ≥3 with triggers | 15 minutes | -| comprehensive | 3+ | all 8 groups | ≥3 | required | ≥3 color-coded | ≥6 risks scored | ≥5 with triggers | 20 minutes | - -**The 8 mandatory stakeholder groups are**: Citizens, Government Coalition, Opposition Bloc, Business/Industry, Civil Society, International/EU, Judiciary/Constitutional, Media/Public Opinion. Every group MUST be analyzed with specific evidence (dok_id, vote counts, named politicians). - -**Minimum requirement for ALL depths**: Every analysis file must contain at least 1 color-coded Mermaid diagram, structured evidence tables with dok_id citations, quantified risk matrix with numeric L×I scores, forward indicators with specific triggers/timelines, and follow the corresponding template structure exactly. Plain prose without tables/diagrams is NEVER acceptable regardless of depth level. - -> **Read `analysis_depth` input first** (default: `deep`). This controls iteration count and section requirements. - -Based on the editorial profile for `committee-reports` (from `scripts/editorial-framework.ts`): -- **SWOT**: ALL 8 stakeholder groups analyzed with evidence tables (dok_id, vote counts, named politicians per entry) -- **Dashboard**: required (min. 1 chart for `standard`; min. 
2 for `deep`/`comprehensive`) — include Mermaid diagrams -- **Mindmap**: optional for `standard`; required for `deep`/`comprehensive` — use CSS mindmap with committee jurisdiction branches -- **Risk Matrix**: required — numeric L×I scores (1-5 each) for ≥2 risks (standard), ≥4 risks (deep), ≥6 risks (comprehensive) -- **Forward Indicators**: required — ≥2 specific triggers with dates/timelines for `standard`, ≥3 for `deep` -- **Confidence Labels**: `[HIGH]`/`[MEDIUM]`/`[LOW]` on ALL analytical claims — no unlabeled assertions -- **Mermaid Diagrams**: ≥1 color-coded diagram per article showing committee referral flow, policy impact paths, or legislative pipeline -- **Classification Rationale**: 5-dimension significance scoring visible in article with numeric scores -- **AI iterations**: 1-2 (standard), 2-3 (deep), 3+ (comprehensive) - -> 🚨 **ANTI-PATTERNS (REJECTED)**: "Requires committee review and chamber debate" (generic boilerplate), SWOT with only Government/Opposition/Civil Society (need all 8 groups), risk as "MEDIUM" text without L×I numbers, articles with 0 Mermaid diagrams, no dok_id citations in article body. - -### 🗳️ Election 2026 Lens (Mandatory — v5.0) - -Every analysis MUST include an **Election 2026 Implications** section assessing: Electoral Impact, Coalition Scenarios, Voter Salience, Campaign Vulnerability, and Policy Legacy. Use the **5-level confidence scale** (⬛VERY LOW → 🟥LOW → 🟧MEDIUM → 🟩HIGH → 🟦VERY HIGH). See `analysis/methodologies/ai-driven-analysis-guide.md` v5.0 for full criteria. - -See `SHARED_PROMPT_PATTERNS.md` §"Standardised Analysis Depth Gate" and §"MANDATORY: AI-Driven Analysis Using Methods & Templates" for Phase 1 (data collection + significance scoring), Phase 2 (depth enhancement: Quick SWOT, Activity Summary, quality gate: ≥400 words), and Phase 3 (final quality gate + `validate-news-generation.sh`). 
- -## MANDATORY Date Validation - -**ALWAYS START by logging the current date:** -```bash -echo "=== Date Validation Check ===" -date -u "+Current UTC: %A %Y-%m-%d %H:%M:%S" -echo "Article Type: committee-reports" -echo "============================" -``` - -## 📅 Riksmöte (Parliamentary Session) Calculation - -September+ → `rm = "{year}/{year+1 2-digit}"` (e.g. Oct 2026 → `2026/27`). Before September → `rm = "{year-1}/{year 2-digit}"` (e.g. Feb 2026 → `2025/26`). Use in ALL MCP queries requiring `rm`. - -## MANDATORY Deduplication Check - -Before generating articles, check if articles already exist for the target date. **This check controls article GENERATION only — the deep political analysis phase ALWAYS runs regardless.** -```bash -# Resolve article date: use workflow_dispatch input when provided, fallback to UTC today -ARTICLE_DATE="${{ github.event.inputs.article_date }}" -if [ -z "$ARTICLE_DATE" ]; then - date -u +%Y-%m-%d > /tmp/today.txt - read ARTICLE_DATE < /tmp/today.txt -fi -ARTICLE_TYPE="committee-reports" -# Derive FORCE_GENERATION from the workflow_dispatch input -FORCE_GENERATION="${{ github.event.inputs.force_generation || 'false' }}" -ls news/$ARTICLE_DATE-$ARTICLE_TYPE-en.html 2>/dev/null | wc -l > /tmp/existing_count.txt -read EXISTING < /tmp/existing_count.txt -if [ "$EXISTING" -gt 0 ] && [ "$FORCE_GENERATION" != "true" ]; then - echo "📋 Articles for $ARTICLE_DATE/$ARTICLE_TYPE already exist — article generation will be skipped (analysis still runs)" - SKIP_ARTICLE_GENERATION=true - echo "SKIP_ARTICLE_GENERATION=true" >> "$GITHUB_ENV" -fi -# NOTE: Do NOT exit here or call safeoutputs___noop — analysis phase MUST still execute -# Later article-generation steps MUST gate on: if [ "$SKIP_ARTICLE_GENERATION" != "true" ]; then ... - -``` - -> **🚨 NEVER call `safeoutputs___noop` because articles already exist.** If articles exist, the workflow MUST still run the full 15-20 minute deep political analysis phase and commit analysis artifacts. 
The dedup check only controls whether NEW HTML articles are generated — analysis is the primary output and always runs. If analysis produces artifacts, use `safeoutputs___create_pull_request` with `analysis-only` label. - -## MANDATORY MCP Health Gate - -> **The step-level pre-warm (6 attempts × 20s) already mitigates Render.com cold starts.** This in-prompt gate is a lightweight verification — NOT a full retry loop. Do NOT spend more than 90 seconds here. -> -> **📖 Full MCP architecture, tool names, and calling conventions:** See `SHARED_PROMPT_PATTERNS.md` → "MCP Architecture & Tool Reference" section. Tool names are EXACT: riksdag tools use underscores (`get_sync_status`), World Bank uses hyphens (`get-economic-data`), SCB uses underscores (`search_tables`). - -1. Call `get_sync_status({})` — retry up to **3×** (20s wait between each, not 45s — the server is already warm from the step-level pre-warm) -2. If you get **"unknown tool"** or **"0 tools registered"** errors after 3 attempts, run a quick diagnostic: -```bash -echo "🔍 MCP Quick Diagnostic" -echo "Direct MCP server:" && curl -sf --max-time 15 -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}' "https://riksdag-regering-ai.onrender.com/mcp" 2>/dev/null | head -c 200 || echo "UNREACHABLE" -``` -3. After 3 failures → `safeoutputs___noop({"message": "MCP server unavailable after 3 attempts — step-level pre-warm also failed"})` -4. **ALL content MUST come from live MCP data.** Never use cached articles, stale data, or AI-fabricated content. -5. **⏱️ Do NOT spend more than 2 minutes on MCP warmup** — proceed to analysis immediately once `get_sync_status` succeeds. - -## 🛡️ File Ownership Contract - -Content workflows: only create/modify **EN and SV** files (`news/YYYY-MM-DD-*-en.html`, `*-sv.html`). Validate with `npx tsx scripts/validate-file-ownership.ts content`. 
Fix violations: `git restore --staged --worktree -- <file>` (tracked) or `rm <file>` (untracked). - -### Branch Naming Convention - -Branch: `news/content/{YYYY-MM-DD}/{article-type}` (e.g. `news/content/2026-03-23/committee-reports`). `safeoutputs___create_pull_request` handles this automatically. - -## MANDATORY PR Creation - -### HOW SAFE PR CREATION WORKS - -> `safeoutputs___create_pull_request` handles branch creation, push, and PR opening — do NOT run `git push` or `git checkout -b` manually. Stage files, then call the tool directly. - - -```bash -# Stage articles and analysis — scoped to article type to stay within 100-file PR limit -# CRITICAL: Stage ONLY today's new articles (EN/SV), NOT all existing news/ -# Staging news/*committee-reports*.html would include 494+ existing files, many of which -# may have been modified by auto-fix scripts, causing E003 (>100 files) PR failure. -git add "news/$ARTICLE_DATE-committee-reports-en.html" 2>/dev/null || true -git add "news/$ARTICLE_DATE-committee-reports-sv.html" 2>/dev/null || true -git add news/metadata/ 2>/dev/null || true -# Use $ANALYSIS_SUBFOLDER (set during Run Suffix Resolution above); fallback to base type -if [ -z "$ANALYSIS_SUBFOLDER" ]; then - ANALYSIS_SUBFOLDER="committeeReports" -fi -# Stage analysis summary .md files ONLY — EXCLUDE documents/ to stay under 100-file limit. -# With --limit 50, documents/ alone can contain 100+ files (50 JSON + 50 analysis.md). -git add "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER"/*.md 2>/dev/null || true -git add "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER"/*.json 2>/dev/null || true -# 🚨 HARD UNSTAGE: NEVER commit analysis/data/ — it is an MCP response cache populated by -# download-parliamentary-data.ts (6 doc types × ~40 files = 240+ files). It must stay local. -# Committing it caused E003 "received 258 files" in news-motions run 24653843681 (PR #1867). -# Only news-realtime-monitor stages analysis/data/ intentionally; this workflow never should. 
-# 🚫 DO NOT run `git add analysis/data/...` anywhere in this workflow. -git reset HEAD -- analysis/data/ 2>/dev/null || true -# Enforce safe-outputs 100-file PR limit (AWF-safe: no $(...) — write to temp file + read back) -git diff --cached --name-only > /tmp/staged_files.txt -awk 'END{print NR}' /tmp/staged_files.txt > /tmp/staged_count.txt -STAGED_COUNT=0 -read STAGED_COUNT < /tmp/staged_count.txt 2>/dev/null || true -echo "📊 Staged file count: $STAGED_COUNT (limit: 100)" -if [ "$STAGED_COUNT" -gt 90 ]; then - echo "⚠️ $STAGED_COUNT files exceeds safe threshold. Removing metadata to reduce count." - git reset HEAD -- news/metadata/ 2>/dev/null || true - git diff --cached --name-only > /tmp/staged_files.txt - awk 'END{print NR}' /tmp/staged_files.txt > /tmp/staged_count.txt - STAGED_COUNT=0 - read STAGED_COUNT < /tmp/staged_count.txt 2>/dev/null || true -fi -if [ "$STAGED_COUNT" -gt 90 ]; then - echo "⚠️ Still $STAGED_COUNT files. Removing non-essential analysis — keeping core summaries." - # Graduated pruning: remove individual doc-level analysis JSON first, keep synthesis/scoring/risk .md - # If still over limit, all .json goes but .md summaries (synthesis-summary.md, risk-assessment.md) survive - git reset HEAD -- "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER"/*-analysis.json 2>/dev/null || true - git reset HEAD -- "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER"/*-details.json 2>/dev/null || true - git diff --cached --name-only > /tmp/staged_files.txt - awk 'END{print NR}' /tmp/staged_files.txt > /tmp/staged_count.txt - STAGED_COUNT=0 - read STAGED_COUNT < /tmp/staged_count.txt 2>/dev/null || true -fi -if [ "$STAGED_COUNT" -gt 90 ]; then - echo "⚠️ Still $STAGED_COUNT files. Removing remaining analysis .json — keeping .md summaries." 
- git reset HEAD -- "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER"/*.json 2>/dev/null || true - git diff --cached --name-only > /tmp/staged_files.txt - awk 'END{print NR}' /tmp/staged_files.txt > /tmp/staged_count.txt - STAGED_COUNT=0 - read STAGED_COUNT < /tmp/staged_count.txt 2>/dev/null || true -fi -# FINAL HARD GUARD: if count still exceeds 99, remove all analysis .md except synthesis-summary.md -if [ "$STAGED_COUNT" -gt 99 ]; then - echo "🚨 CRITICAL: $STAGED_COUNT files still exceeds safe limit of 99. Removing all analysis .md except synthesis-summary." - git reset HEAD -- "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER"/*.md 2>/dev/null || true - git add "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER/synthesis-summary.md" 2>/dev/null || true - git diff --cached --name-only > /tmp/staged_files.txt - awk 'END{print NR}' /tmp/staged_files.txt > /tmp/staged_count.txt - STAGED_COUNT=0 - read STAGED_COUNT < /tmp/staged_count.txt 2>/dev/null || true - echo "📊 After emergency pruning: $STAGED_COUNT files" -fi -echo "📊 Final staged file count: $STAGED_COUNT" -git commit -m "Add committee-reports articles and analysis artifacts" -``` - -- ✅ `safeoutputs___create_pull_request` for articles or analysis-only PRs (`analysis-only` + `committee-reports` labels) -- ✅ `safeoutputs___noop` ONLY if MCP unreachable after 5 attempts AND no analysis artifacts exist -- ❌ NEVER noop because articles already exist — analysis always runs -- ❌ Safe output tools are in your tool list — NEVER search for them via bash - -## 🌐 Dispatch Translation Workflow - -After creating the content PR, dispatch translations: `safeoutputs___dispatch_workflow({ "workflow_name": "news-translate", "inputs": { "article_date": "<YYYY-MM-DD>", "article_type": "<article-type>", "languages": "all-extra" } })`. See `news-translate.md` for full translation quality rules. - -## MCP Tools - -**ALWAYS call `get_sync_status()` FIRST** to warm up server and check data freshness. 
-
-**Primary tool:** `get_betankanden` — fetches latest committee reports
-**Cross-reference:** `search_voteringar`, `search_anforanden`, `get_propositioner`
-**Statistical enrichment:** SCB MCP + World Bank — enrich with data matching the reporting committee. Use domain-to-committee mappings from `scripts/scb-context.ts` (e.g., FiU reports→fiscal TAB1291, AU→labour TAB5765, JuU→crime TAB1172, MJU→environment TAB5404). **World Bank indicators (144 across all 12 committees)**: `view analysis/worldbank/indicators-inventory.json` to find indicators by committee — each indicator has `policyAreas`, `committees`, and `mcpTool` fields. Use MCP tools for indicators with `mcpTool` field. See `SHARED_PROMPT_PATTERNS.md` §"WORLD BANK ECONOMIC CONTEXT INTEGRATION" for Chart.js chart templates (`economic-comparison`, `economic-trend`, `nordic-radar`). MUST generate ≥1 economic chart when committee has mapped indicators.
-**Fact-checking:** Use `scripts/statistical-claims-detector.ts` to detect statistical claims in related debates and cross-reference against official SCB/World Bank data.
-
-```javascript
-get_sync_status({})
-get_betankanden({ rm: <calculated riksmöte>, limit: 20 })
-```
-
-## Generation Steps
-
-### Step 1: Check Existing Articles (Analysis Always Runs)
-🚨 **FULL ANALYSIS BEFORE ANY ARTICLE (BLOCKING)**: The complete deep political analysis phase following [`analysis/methodologies/ai-driven-analysis-guide.md`](../../analysis/methodologies/ai-driven-analysis-guide.md) (Rule 0 two-pass iteration + Rules 6–8 depth tiers, 15 min Pass 1 + 7 min Pass 2 minimum, ALL 9 required artifacts) **MUST** complete **BEFORE** any article HTML is created or updated. Articles MUST be (re)generated from the improved Pass 2 analysis — never from Pass 1 stubs, never from scripts alone, never skipping Pass 2. Violations = REJECTED PR (PR #1705 comment audit, 2026-04-18).
-
-Check if committee-reports articles already exist for the target date. If they do, skip article generation but **ALWAYS run the full deep political analysis phase** — analysis is the primary output and must execute on every run regardless of article existence.
-
-### Step 2: Query MCP for Committee Reports
-```javascript
-get_sync_status({})
-get_betankanden({ rm: <calculated riksmöte>, limit: 20 })
-```
-
-### Step 2.5: Run Pre-Article Analysis Pipeline
-
-**CRITICAL: Download data first, then AI creates ALL 9 analysis artifacts.** `download-parliamentary-data.ts` downloads raw data from riksdag-regering-mcp ONLY — it performs NO analysis. The AI agent MUST:
-1. Read `analysis/methodologies/ai-driven-analysis-guide.md` fully
-2. Read ALL 8 templates in `analysis/templates/`
-3. Create ALL 9 analysis files in `analysis/daily/YYYY-MM-DD/committeeReports/` using evidence from the downloaded data
-
-**NEVER write or copy analysis files to the parent date directory** — doing so causes merge conflicts when multiple doc-type workflows run on the same date. The `analysis-reader.ts` automatically scans subdirectories, so root-level copies are NOT needed. After creating ALL analysis files, run the **9-Artifact Completeness Gate** from `SHARED_PROMPT_PATTERNS.md` §"9 REQUIRED Analysis Artifacts" to verify ALL 9 files exist.
-
-```bash
-# Idempotent: only set if not already resolved by lookback
-if [ -z "$ARTICLE_DATE" ]; then
-  ARTICLE_DATE="${{ github.event.inputs.article_date }}"
-  if [ -z "$ARTICLE_DATE" ]; then
-    date -u +%Y-%m-%d > /tmp/today.txt
-    read ARTICLE_DATE < /tmp/today.txt
-  fi
-fi
-
-# === Run Suffix Resolution (see SHARED_PROMPT_PATTERNS.md) ===
-BASE_SUBFOLDER="committeeReports"
-ANALYSIS_SUBFOLDER="$BASE_SUBFOLDER"
-if [ "$FORCE_GENERATION" != "true" ]; then
-  _SUFFIX=1
-  while [ -f "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER/synthesis-summary.md" ]; do
-    _SUFFIX=$((_SUFFIX + 1))
-    ANALYSIS_SUBFOLDER="$BASE_SUBFOLDER-$_SUFFIX"
-  done
-fi
-echo "📁 Analysis subfolder resolved: $ANALYSIS_SUBFOLDER"
-
-echo "📊 Downloading data for $ARTICLE_DATE..."
-# CRITICAL: Source mcp-setup.sh to set MCP_SERVER_URL and MCP_AUTH_TOKEN for the gateway
-source scripts/mcp-setup.sh && echo "MCP_SERVER_URL=$MCP_SERVER_URL"
-npx tsx scripts/download-parliamentary-data.ts --date "$ARTICLE_DATE" --limit 50 --doc-type committeeReports > /tmp/pipeline-output.log 2>&1
-PIPE_EXIT=$?
-cat /tmp/pipeline-output.log
-if [ "$PIPE_EXIT" -ne 0 ]; then
-  echo "❌ Pipeline failed with exit code $PIPE_EXIT — agent MUST diagnose and fix (see Script Debugging Protocol)"
-  tail -30 /tmp/pipeline-output.log
-fi
-
-# If suffixed, relocate from base folder to suffixed folder
-if [ "$ANALYSIS_SUBFOLDER" != "$BASE_SUBFOLDER" ]; then
-  SRC="analysis/daily/$ARTICLE_DATE/$BASE_SUBFOLDER"
-  DST="analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER"
-  if [ -d "$SRC" ]; then
-    mkdir -p "$DST"
-    find "$SRC" -maxdepth 1 -type f -exec mv -f {} "$DST/" \;
-    if [ -d "$SRC/documents" ]; then
-      mkdir -p "$DST/documents"
-      find "$SRC/documents" -mindepth 1 -maxdepth 1 -exec mv {} "$DST/documents/" \;
-      rmdir "$SRC/documents" 2>/dev/null || true
-    fi
-    rmdir "$SRC" 2>/dev/null || true
-    echo "📁 Relocated pipeline output → $DST (suffix applied for merge safety)"
-  fi
-fi
-
-echo "📊 Analysis artifacts for $ARTICLE_DATE/$ANALYSIS_SUBFOLDER:"
-ls -la "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER/" 2>/dev/null || echo "⚠️ No analysis output"
-# Verify actual data was downloaded
-MANIFEST_DOCS=0
-MANIFEST_PATH="analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER/data-download-manifest.md"
-if [ -f "$MANIFEST_PATH" ]; then
-  grep -E '^\*\*Documents Analyzed\*\*' "$MANIFEST_PATH" 2>/dev/null | grep -oE '[0-9]+' | head -1 > /tmp/manifest_docs.txt || echo 0 > /tmp/manifest_docs.txt
-  read MANIFEST_DOCS < /tmp/manifest_docs.txt
-  MANIFEST_DOCS=$MANIFEST_DOCS
-fi
-[ -z "$MANIFEST_DOCS" ] && MANIFEST_DOCS=0
-find analysis/data/ -name "*.json" -type f 2>/dev/null | wc -l > /tmp/data_count.txt
-read DATA_JSON_COUNT < /tmp/data_count.txt
-echo "📊 Documents in manifest: $MANIFEST_DOCS, JSON data files: $DATA_JSON_COUNT"
-if [ "$MANIFEST_DOCS" -eq 0 ] && [ "$DATA_JSON_COUNT" -eq 0 ]; then
-  echo "🚨 CRITICAL: Pipeline downloaded ZERO data. Agent MUST diagnose and fix — do NOT fabricate analysis."
-fi
-```
-
-### 🔄 Data Lookback Fallback
-
-> 🚨 **CRITICAL RULE**: Never produce empty/stub analysis. If no data for today, look back to find unanalyzed data.
-
-Key steps: resolve `ARTICLE_DATE` from input or today → check `data-download-manifest.md` → if 0 docs, loop `DAYS_BACK` 1–7 using `date -u -d "$ARTICLE_DATE - $DAYS_BACK days"`, run `download-parliamentary-data.ts --date "$LOOKBACK_DATE"` → copy artifacts from found date to original date folder → run `catalog-downloaded-data.ts --pending-only`. See `SHARED_PROMPT_PATTERNS.md` §"Data Lookback Fallback Strategy" for full bash implementation.
-
-### Per-File AI Analysis Enhancement
-
->Follow `SHARED_PROMPT_PATTERNS.md` §"Per-File AI Analysis Block" and §"MANDATORY: AI-Driven Analysis Using Methods & Templates" exactly:
-- **Step A**: Read `analysis/methodologies/ai-driven-analysis-guide.md` + `analysis/templates/per-file-political-intelligence.md` FIRST
-- **Step B**: For EVERY document JSON → create `{dok_id}-analysis.md` with ALL 6 analytical lenses, ≥1 color-coded Mermaid, evidence tables
-- **Step C**: Rewrite ALL synthesis files to match templates exactly
-- **Step D**: Run quality gate (see SHARED §"Step 5b: MANDATORY Quality Gate"). Fix ALL failures.
-
-### 🔴 MANDATORY: Batch Analysis Enrichment (Prevents Empty "0 Documents Analyzed" Files)
-
-If `synthesis-summary.md` reports "0 documents analyzed" but per-doc analyses exist in `documents/`, aggregate findings into all 9 batch files. If NO per-doc analyses exist, use MCP `get_betankanden(rm="2025/26", limit=20)` directly. See `ai-driven-analysis-guide.md` §"Deep-Inspection Batch Analysis Enrichment Protocol (v4.1)". **NEVER commit batch files reporting "0 documents analyzed".**
-
-### 📋 Rewrite Daily Synthesis Files to Follow Templates
-
-> 🚨 **CRITICAL**: Script-generated stubs do NOT follow template structure. Rewrite each daily file to match its `analysis/templates/` counterpart. Read each template with `cat` before rewriting. Every file needs: metadata header (ID, date, riksmöte, confidence), ≥1 color-coded Mermaid diagram, evidence tables with dok_id citations, and no `[REQUIRED]` placeholders.
-
-### 🚨 MANDATORY: Analysis Artifacts Must ALWAYS Be Committed
-
-**Before deciding whether to generate articles or call noop, you MUST:**
-
-1. **Review the analysis artifacts** in `analysis/daily/YYYY-MM-DD/committeeReports/` — read `synthesis-summary.md` and `significance-scoring.md` to understand what was found
-2. **Summarize the analysis findings** — note how many documents were downloaded, their significance scores, key themes, and risk levels
-3. **ALWAYS commit analysis artifacts** regardless of whether articles will be generated:
-
-```bash
-[ -f /tmp/hhmm.env ] && . /tmp/hhmm.env
-if [ -z "$ARTICLE_DATE" ]; then
-  date -u +%Y-%m-%d > /tmp/today.txt
-  read ARTICLE_DATE < /tmp/today.txt
-fi
-ANALYSIS_DIR="analysis/daily/$ARTICLE_DATE/committeeReports"
-find "$ANALYSIS_DIR" -type f 2>/dev/null | wc -l > /tmp/analysis_count.txt
-read ANALYSIS_COUNT < /tmp/analysis_count.txt
-echo "Analysis artifacts: $ANALYSIS_COUNT files in $ANALYSIS_DIR"
-```
-
-> **🚨 CRITICAL RULE: Never call `safeoutputs___noop` if analysis artifacts exist.** If the pre-article analysis pipeline produced ANY output files, you MUST commit them via `safeoutputs___create_pull_request` — even if no articles are generated. Use an analysis-only PR with title: `📊 Analysis Only - Committee Reports - {date}` and label `analysis-only`. Only use `safeoutputs___noop` if the analysis pipeline produced ZERO output files (truly nothing to analyze).
-
-### 🔴 MANDATORY ANALYSIS VERIFICATION GATE (STOP — DO NOT SKIP)
-
-> 🚨 Run the verification gate bash. See `SHARED_PROMPT_PATTERNS.md` §"Step 5b: MANDATORY Quality Gate". If gate fails (0 analysis files), run the full analysis pipeline and re-run. Only proceed to article generation once gate passes.
-
-### 🔬 Step 2b: Read ALL Analysis Files (MANDATORY — before article generation)
-
-> 🔴 **NON-NEGOTIABLE**: The AI agent MUST `cat` every analysis `.md` file BEFORE generating any article HTML. Analysis and articles are created in the **same workflow run** — there is zero excuse for not reading the analysis. Articles written without reading analysis are shallow and REJECTED. See SHARED_PROMPT_PATTERNS.md §"MANDATORY PRE-ARTICLE ANALYSIS READING".
-
-```bash
-ANALYSIS_SUBFOLDER="committeeReports"
-ANALYSIS_BASE="analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER"
-
-echo "📖 Reading ALL analysis files from $ANALYSIS_BASE..."
-if [ -d "$ANALYSIS_BASE" ]; then
-  for MD_FILE in "$ANALYSIS_BASE"/*.md; do
-    if [ -f "$MD_FILE" ]; then
-      echo "--- Reading: $MD_FILE ---"
-      cat "$MD_FILE"
-      echo ""
-    fi
-  done
-  if [ -d "$ANALYSIS_BASE/documents" ]; then
-    echo "📄 Reading per-document analyses..."
-    for DOC_FILE in "$ANALYSIS_BASE/documents"/*.md; do
-      if [ -f "$DOC_FILE" ]; then
-        echo "--- Per-doc: $DOC_FILE ---"
-        cat "$DOC_FILE"
-        echo ""
-      fi
-    done
-  fi
-  find "$ANALYSIS_BASE" -name "*.md" -type f 2>/dev/null | wc -l > /tmp/analysis_file_count.txt
-  read ANALYSIS_FILE_COUNT < /tmp/analysis_file_count.txt
-  echo "✅ Read $ANALYSIS_FILE_COUNT analysis files — these MUST drive article content"
-else
-  echo "⚠️ No analysis directory found at $ANALYSIS_BASE — will use MCP fallback for article content"
-fi
-```
-
-> **After reading, confirm you loaded the analysis** by noting: (1) number of files read, (2) top 3 significance-ranked findings, (3) key risk scores. If you cannot produce this summary, you have NOT read the analysis.
-
-Parse the `languages` input and generate using the automated script:
-
-```bash
-# Set LANGUAGES_INPUT to the value shown in Workflow Dispatch Parameters above
-LANGUAGES_INPUT="<value from Workflow Dispatch Parameters>"
-[ -z "$LANGUAGES_INPUT" ] && LANGUAGES_INPUT="all"
-
-case "$LANGUAGES_INPUT" in
-  "nordic") LANG_ARG="en,sv,da,no,fi" ;;
-  "eu-core") LANG_ARG="en,sv,de,fr,es,nl" ;;
-  "all") LANG_ARG="en,sv,da,no,fi,de,fr,es,nl,ar,he,ja,ko,zh" ;;
-  *) LANG_ARG="$LANGUAGES_INPUT" ;;
-esac
-
-source scripts/mcp-setup.sh && npx tsx scripts/generate-news-enhanced.ts \
-  --types=committee-reports \
-  --languages="$LANG_ARG" \
-  --skip-existing
-```
-
-**Article Navigation Verification**: The `generate-news-enhanced.ts` script automatically includes all required navigation elements:
-- **Language switcher** (`<nav class="language-switcher">`) after `<body>` with all 14 languages
-- **Back-to-news top nav** (`<div class="article-top-nav">`) with localized back link after language switcher
-- **Footer back-to-news link** in `<footer class="article-footer">`
-
-These elements are validated by `bash scripts/validate-news-generation.sh` (Checks 8–10). The fix script is a **fallback only** — do not run it by default:
-```bash
-# FALLBACK ONLY — use if validate-news-generation.sh reports missing navigation elements
-npx tsx scripts/fix-article-navigation.ts
-```
-
----
-
-## Step 2.6: Economic Data Acquisition (MANDATORY)
-
-> **Contract**: [`.github/aw/ECONOMIC_DATA_CONTRACT.md`](../aw/ECONOMIC_DATA_CONTRACT.md) — the **single source of truth** for World Bank + SCB data, Chart.js visualisations, and AI commentary. Follow it exactly; the Step 6 quality gate (`scripts/validate-economic-context.ts`) **blocks the PR** if any element is missing.
-
-**What you MUST do before writing any prose:**
-
-1. `view analysis/worldbank/indicators-inventory.json` and pick every indicator whose `committees` / `policyAreas` match the day's source documents.
-2. Call `world-bank.get-economic-data` / `get-social-data` / `get-health-data` / `get-education-data` for Sweden (10-year series for primary domains) and for DK/NO/FI/DE (5-year series for the top 3 indicators — needed for the Nordic comparison bars and radar).
-3. Call `scb.search_tables` + `scb.query_table` using the committee → TAB mapping in `scripts/scb-context.ts`. **`language` MUST be `"sv"` or `"en"` — NEVER `"no"`** (SCB returns HTTP 400 "Unsupported language").
-4. Retry every World Bank call up to **3 times** on failure. Cache raw responses under `analysis/data/worldbank/<YYYY>/<indicator>-<country>.json` so later article types in the same daily run reuse the data.
-5. Write `analysis/daily/<ARTICLE_DATE>/<ANALYSIS_SUBFOLDER>/economic-data.json` matching `analysis/schemas/economic-data.schema.json`:
-
-```jsonc
-{
-  "version": "1.0",
-  "articleType": "committee-reports",
-  "date": "<YYYY-MM-DD>",
-  "policyDomains": ["fiscal policy", "labor market"],
-  "dataPoints": [
-    { "countryCode": "SWE", "countryName": "Sweden", "indicatorId": "NY.GDP.MKTP.KD.ZG", "date": "2024", "value": 0.82 },
-    { "countryCode": "DNK", "countryName": "Denmark", "indicatorId": "NY.GDP.MKTP.KD.ZG", "date": "2024", "value": 1.75 }
-  ],
-  "commentary": "<will be filled in Step 3d>",
-  "source": { "worldBank": ["NY.GDP.MKTP.KD.ZG", "FP.CPI.TOTL.ZG"], "scb": ["TAB1291"] }
-}
-```
-
-**Non-negotiable**: `dataPoints` MUST be non-empty. The HTML renderer (`scripts/data-transformers/content-generators/economic-dashboard-section.ts`) emits real Chart.js canvases only when the file exists with entries — otherwise the validator fails the PR.
-
-**Minimum coverage (enforced by the validator):** see the matrix in `ECONOMIC_DATA_CONTRACT.md` §"Coverage matrix" for this article type's chart count, commentary word minimum, and D3 requirement.
-
----
-
-### Step 3b: AI Title, Meta Description & Analysis References (v5.0 — Analysis-Driven)
-
-> 🚨 **MANDATORY** — See `SHARED_PROMPT_PATTERNS.md` §"AI-DRIVEN TITLE & META DESCRIPTION GENERATION". Read `synthesis-summary.md` for recommended titles/descriptions. Title: `[Active Verb] + [Actor] + [Policy Action]`. BANNED: titles ending ": {Topic} in Focus". Meta: 150-160 chars, not starting with "Analysis of N documents". Add "📊 Analysis & Sources" HTML block before footer. Update ALL language metadata. Verify:
-```bash
-for FILE in news/$ARTICLE_DATE-committee-reports-*.html; do
-  if [ -f "$FILE" ] && ! grep -q 'class="analysis-references"' "$FILE"; then
-    echo "🔴 MISSING analysis-references in: $FILE — MUST FIX NOW"
-  fi
-done
-```
-
-### Step 3c: AI Content Quality Enforcement (v4.0 — MANDATORY)
-
-> 🚨 See `SHARED_PROMPT_PATTERNS.md` §"AI ARTICLE CONTENT GENERATION". Read pre-computed analysis files. Replace: lede `"Analysis of N documents..."`, boilerplate `"Touches on {X} policy..."`, `"The political landscape remains fluid..."`, `"No chamber debate data..."`. Replace ALL `<!-- AI_MUST_REPLACE: ... -->` markers with genuine analysis. Zero markers in final HTML. Add Key Takeaways with dok_id citations and confidence labels.
-
-### Step 4: Translate Swedish Content & Verify Analysis Quality
-All Swedish API data MUST be translated. Check every article for `data-translate="true"` markers.
-
-**CRITICAL: Each article MUST contain real analysis, not just a list of translated links.**
-Every generated article must include:
-- An analytical lede paragraph providing political context (not just a document count)
-- Thematic analysis section grouping reports by committee with interpretive commentary
-- "What This Means" or "Why It Matters" analysis for each document
-- Key Takeaways section summarizing political significance and implications
-- Policy domain inference (fiscal, defence, healthcare, etc.) based on committee and title
-
-If the generated article lacks these analytical sections, manually add contextual analysis before committing.
-
-## MANDATORY Quality Validation
-
-After article generation, verify EACH article meets these minimum standards before committing.
-Apply the quality rubric from **`scripts/prompts/v2/quality-criteria.md`** (minimum score: 7/10).
-- **`scripts/prompts/v2/per-file-intelligence-analysis.md`** — Per-file AI analysis protocol
-- **`analysis/methodologies/ai-driven-analysis-guide.md`** — Methodology for deep per-file analysis
-- **`analysis/templates/per-file-political-intelligence.md`** — Per-file analysis output template
-
-### Iterative Analysis Protocol
-
-For each generated article, apply up to 3 iterations:
-1. **Iteration 1** — Generate initial draft from MCP data
-2. **Self-assess** — Score against quality rubric (Accuracy + Depth + Perspectives + Translation + Editorial)
-3. **If score < 7**: Identify lowest-scoring dimension and regenerate those sections
-4. **Iteration 2** — Address quality gaps, deepen committee analysis and voting context
-5. **If still < 7**: Final iteration — add policy implications and parliamentary significance
-6. **Maximum 3 iterations** — Never publish below 5/10
-
-### Required Sections (at least 3 of 5):
-1. **Analytical Lede** (paragraph, not just document count)
-2. **Thematic Analysis** (documents grouped by policy theme)
-3. **Strategic Context** (why these documents matter politically)
-4. **Stakeholder Impact** (who benefits, who loses)
-5. **What Happens Next** (expected timeline and outcomes)
-
-### Disqualifying Patterns:
-- ❌ `"Filed by: Unknown (Unknown)"` — FIX author/party metadata before committing
-- ❌ `data-translate="true"` spans in non-Swedish articles — TRANSLATE before committing
-- ❌ Identical "Why It Matters" text for all entries — DIFFERENTIATE analysis per report
-- ❌ Flat list of reports without grouping — GROUP by committee or policy theme
-- ❌ Article under 500 words — EXPAND with analytical sections
-
-### Playwright Visual Validation
-Run Playwright validation before creating the PR:
-```bash
-# HTMLHint validation
-npx htmlhint "news/*-committee-reports-*.html"
-
-# Playwright visual validation (accessibility, RTL, responsive)
-npx tsx scripts/validate-articles-playwright.ts --filter "committee-reports"
-
-# Validate JSON-LD cross-references
-npx tsx scripts/validate-cross-references.ts news/*-committee-reports-*.html
-```
-
-### Bash Validation Commands:
-```bash
-# Check for unknown authors (should return 0)
-grep -rl "Filed by: Unknown" news/ | grep "committee-reports" | wc -l || true
-
-# Check for untranslated spans in English article (should return 0)
-grep -c 'data-translate="true"' "news/$ARTICLE_DATE-committee-reports-en.html" 2>/dev/null || true
+# 📋 Committee Reports
-# Check word count of English article text content (warn if < 500; HTML tags stripped)
-FILE="news/$ARTICLE_DATE-committee-reports-en.html"
-if [ ! -f "$FILE" ]; then echo "WARNING: Expected article file not found: $FILE — check if generation succeeded"; else
-  sed 's/<[^>]*>/ /g' "$FILE" | tr -s '[:space:]' '\n' | grep -c '[[:alnum:]]' 2>/dev/null > /tmp/word_count.txt || echo 0 > /tmp/word_count.txt
-  read WORD_COUNT < /tmp/word_count.txt
-  echo "Content word count (HTML tags stripped): $WORD_COUNT"
-  if [ "$WORD_COUNT" -lt 500 ]; then echo "WARNING: Article content may be too short ($WORD_COUNT words) — consider expanding before PR"; fi
-fi
+Generates deep political intelligence articles on parliamentary committee reports in core languages (EN, SV). Translations are dispatched to `news-translate`.
-# Check for duplicate "Why It Matters" content (should return empty)
-grep -o 'Why It Matters[^<]*' "news/$ARTICLE_DATE-committee-reports-en.html" 2>/dev/null | sort | uniq -d || true
-```
-**Note**: Do NOT use `exit 1` in these validation snippets — use warnings so the agent can still create a PR or call noop.
+## What this workflow does
-### If Article Fails Quality Check:
-1. Use bash to enhance the HTML with analytical sections
-2. Replace generic "Why It Matters" with report-specific analysis
-3. Add thematic grouping headers (e.g., by committee or policy domain)
-4. Translate any remaining Swedish content
+- **Article type**: `committee-reports`
+- **Analysis subfolder**: `analysis/daily/$ARTICLE_DATE/committee-reports/`
+- **Core languages produced**: `en`, `sv` (remaining 12 languages dispatched to `news-translate`)
+- **One pull request per run** containing analysis + articles + visualisation data.
-### Step 5: Build-time Generation Note
+## Time budget (60 min, minimum 45 min of real work)
-**Note**: News index files, metadata, and sitemap are generated automatically at build time by the `prebuild` script. Do NOT run generation scripts or commit their output — only commit the article HTML files. Run `npm run prebuild` (or `npm run build`) locally if you need to preview the generated indexes, metadata, or sitemap.
+| Minutes | Phase | Module |
+|---------|-------|--------|
+| 0–2 | MCP pre-warm + `get_sync_status` | 02 |
+| 2–6 | Download data + catalogue | 03 |
+| 6–25 | Analysis Pass 1 (methodology read + per-doc analyses + 9 artifacts) | 04 |
+| 25–35 | Analysis Pass 2 (read-back + improvements) | 04 |
+| 35–37 | Analysis Gate | 05 |
+| 37–48 | Article Pass 1 + Pass 2 (EN, SV) | 06 |
+| 48–55 | Visual + link validation | 06 |
+| 55–60 | Stage, commit, **ONE** `safeoutputs___create_pull_request` | 07 |
-### Step 6: Validate & Create PR
-Run analysis references fix, validation, and HTMLHint before creating PR:
-```bash
-# 🔴 MANDATORY: Inject analysis references into any article missing them
-# This is deterministic — scans analysis/ dir for files created in this workflow run
-npx tsx scripts/fix-analysis-references.ts --date "$ARTICLE_DATE" --rewrite --type committee-reports
+Trim scope before cutting quality. Never open a second PR to "save" partial work — there is no second PR.
-bash scripts/validate-news-generation.sh
-VALIDATION_EXIT=$?
-if [ "$VALIDATION_EXIT" -ne 0 ]; then
-  echo "❌ News generation validation failed. Fix the reported issues before creating a PR."
-  exit "$VALIDATION_EXIT"
-fi
+## Inputs
-# HTMLHint validation with auto-fix — SCOPED TO TODAY'S ARTICLES ONLY
-# CRITICAL: Do NOT run htmlhint/--fix on all news/*-*.html — that modifies 494+ existing
-# committee-reports articles which then get staged and exceed the 100-file PR limit (E003).
-if [ -f "news/$ARTICLE_DATE-committee-reports-en.html" ] || [ -f "news/$ARTICLE_DATE-committee-reports-sv.html" ]; then
-  if ! npx htmlhint "news/$ARTICLE_DATE-committee-reports-en.html" "news/$ARTICLE_DATE-committee-reports-sv.html" 2>/dev/null; then
-    echo "⚠️ HTML validation errors in today's articles, attempting auto-fix (scoped to today only)..."
-    if [ -f "news/$ARTICLE_DATE-committee-reports-en.html" ]; then
-      npx tsx scripts/article-quality-enhancer.ts --fix "news/$ARTICLE_DATE-committee-reports-en.html"
-    fi
-    if [ -f "news/$ARTICLE_DATE-committee-reports-sv.html" ]; then
-      npx tsx scripts/article-quality-enhancer.ts --fix "news/$ARTICLE_DATE-committee-reports-sv.html"
-    fi
-    if ! npx htmlhint "news/$ARTICLE_DATE-committee-reports-en.html" "news/$ARTICLE_DATE-committee-reports-sv.html" 2>/dev/null; then
-      echo "⚠️ HTML validation still failing after auto-fix — manual review needed (continuing to PR)"
-    fi
-  fi
-fi
-```
+- `article_date` — override date (defaults to today)
+- `force_generation` — regenerate even if today's article exists (analysis is always refreshed regardless)
+- `languages` — core content languages (default `en,sv`)
+- `analysis_depth` — `standard` | `deep` (default) | `comprehensive`
-Then create PR:
-```
-safeoutputs___create_pull_request
-```
+## Dedup & analysis-only path
-## 🌐 Translation Quality
+If articles for `$ARTICLE_DATE` + `committee-reports` already exist **and** `force_generation=false`:
-EN/SV only: all headings, meta, content in correct language; no untranslated `data-translate` spans; Swedish API titles translated. Full rules: `news-translate.md`.
-## Article Naming Convention
-Files: `YYYY-MM-DD-committee-reports-{lang}.html` (e.g., `2026-02-22-committee-reports-en.html`)
+- Still run the full analysis pipeline (modules 03 → 04 → 05).
+- Commit the analysis.
+- Open the single PR with title `📊 Analysis Only — Committee Reports — $ARTICLE_DATE` and label `analysis-only`.
-## Step 3d: Economic Commentary (MANDATORY)
+Analysis is the primary product — a run never "does nothing" just because articles exist.
-
-> After Step 3c and **before** calling `safeoutputs.create_pull_request`, re-open `economic-data.json` and replace the placeholder `commentary` string with a 2–4 sentence paragraph that:
-> - cites **2–3 concrete numeric values** from `dataPoints`;
-> - ties the numbers to the day's political developments (not definitions of indicators);
-> - is written in plain English (translations are produced downstream by `news-translate`);
-> - meets the minimum word count in the coverage matrix for this article type.
->
-> Banned phrasings (the multi-dim quality score flags these): "The political landscape remains fluid…", "Touches on X policy…", pure indicator definitions.
->
-> Full rules: [`.github/aw/ECONOMIC_DATA_CONTRACT.md`](../aw/ECONOMIC_DATA_CONTRACT.md) §"Writing the AI commentary — workflow Step 3d".
+All other rules (bash format, AWF shell safety, MCP access, download pipeline, analysis methodology & gate, article generation, commit & PR policy) live in the imported modules.
diff --git a/.github/workflows/news-evening-analysis.lock.yml b/.github/workflows/news-evening-analysis.lock.yml
index 4f688e982..a45888197 100644
--- a/.github/workflows/news-evening-analysis.lock.yml
+++ b/.github/workflows/news-evening-analysis.lock.yml
@@ -1,4 +1,4 @@
-# gh-aw-metadata: {"schema_version":"v3","frontmatter_hash":"a9021b0e38cde5fc9e1b38b3d3b3ae4566ec0a3a22ceccdaf1c3281799bb5531","compiler_version":"v0.69.2","agent_id":"copilot","agent_model":"claude-opus-4.7"}
+# gh-aw-metadata: {"schema_version":"v3","frontmatter_hash":"ba1723d47e9431f150544397019675615b429c75e50a650fd099bd8d64e86959","compiler_version":"v0.69.2","agent_id":"copilot","agent_model":"claude-opus-4.7"}
 # gh-aw-manifest: {"version":1,"secrets":["COPILOT_GITHUB_TOKEN","GH_AW_CI_TRIGGER_TOKEN","GH_AW_GITHUB_MCP_SERVER_TOKEN","GH_AW_GITHUB_TOKEN","GITHUB_TOKEN"],"actions":[{"repo":"actions/checkout","sha":"de0fac2e4500dabe0009e67214ff5f5447ce83dd","version":"v6.0.2"},{"repo":"actions/download-artifact","sha":"3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c","version":"v8.0.1"},{"repo":"actions/github-script","sha":"373c709c69115d41ff229c7e5df9f8788daa9553","version":"v9"},{"repo":"actions/setup-node","sha":"6044e13b5dc448c55e2357c09f80417699197238","version":"6044e13b5dc448c55e2357c09f80417699197238"},{"repo":"actions/upload-artifact","sha":"043fb46d1a93c77aae656e7c1c64a875d1fc6a0a","version":"v7.0.1"},{"repo":"github/gh-aw-actions/setup","sha":"dca90cae5e2ec0ef2275f97efcb832793c86e082","version":"v0.69.2"}],"containers":[{"image":"alpine:latest"},{"image":"ghcr.io/github/gh-aw-firewall/agent:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/api-proxy:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/squid:0.25.26"},{"image":"ghcr.io/github/gh-aw-mcpg:v0.2.26"},{"image":"ghcr.io/github/github-mcp-server:v1.0.0"},{"image":"mcr.microsoft.com/playwright/mcp"},{"image":"node:25-alpine"},{"image":"node:lts-alpine"}]}
 # ___ _ _
 # / _ \ | | (_)
@@ -24,6 +24,18 @@
 #
 # Generates comprehensive evening analysis articles in core languages (EN, SV) with Playwright validation. Translations handled by news-translate workflow. On Saturdays, produces a weekly wrap-up reviewing the full parliamentary week.
 #
+# Resolved workflow manifest:
+# Imports:
+# - ../prompts/00-base-contract.md
+# - ../prompts/01-bash-and-shell-safety.md
+# - ../prompts/02-mcp-access.md
+# - ../prompts/03-data-download.md
+# - ../prompts/04-analysis-pipeline.md
+# - ../prompts/05-analysis-gate.md
+# - ../prompts/06-article-generation.md
+# - ../prompts/07-commit-and-pr.md
+# - ../prompts/ext/tier-c-aggregation.md
+#
 # Secrets used:
 # - COPILOT_GITHUB_TOKEN
 # - GH_AW_CI_TRIGGER_TOKEN
@@ -191,11 +203,6 @@ jobs:
          GH_AW_GITHUB_ACTOR: ${{ github.actor }}
          GH_AW_GITHUB_EVENT_COMMENT_ID: ${{ github.event.comment.id }}
          GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER: ${{ github.event.discussion.number }}
-          GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH: ${{ github.event.inputs.analysis_depth }}
-          GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE: ${{ github.event.inputs.article_date }}
-          GH_AW_GITHUB_EVENT_INPUTS_COVERAGE_DEPTH: ${{ github.event.inputs.coverage_depth }}
-          GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES: ${{ github.event.inputs.languages }}
-          GH_AW_GITHUB_EVENT_INPUTS_LOOKBACK_HOURS: ${{ github.event.inputs.lookback_hours }}
          GH_AW_GITHUB_EVENT_ISSUE_NUMBER: ${{ github.event.issue.number }}
          GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: ${{ github.event.pull_request.number }}
          GH_AW_GITHUB_REPOSITORY: ${{ github.repository }}
@@ -206,9 +213,9 @@
        run: |
          bash "${RUNNER_TEMP}/gh-aw/actions/create_prompt_first.sh"
          {
-            cat << 'GH_AW_PROMPT_fd76437906c71cbc_EOF'
+            cat << 'GH_AW_PROMPT_21fd943019943a15_EOF'
          <system>
-            GH_AW_PROMPT_fd76437906c71cbc_EOF
+            GH_AW_PROMPT_21fd943019943a15_EOF
            cat "${RUNNER_TEMP}/gh-aw/prompts/xpia.md"
            cat "${RUNNER_TEMP}/gh-aw/prompts/temp_folder_prompt.md"
            cat "${RUNNER_TEMP}/gh-aw/prompts/markdown.md"
@@ -216,12 +223,12 @@
            cat "${RUNNER_TEMP}/gh-aw/prompts/agentic_workflows_guide.md"
            cat "${RUNNER_TEMP}/gh-aw/prompts/repo_memory_prompt.md"
            cat "${RUNNER_TEMP}/gh-aw/prompts/safe_outputs_prompt.md"
-            cat << 'GH_AW_PROMPT_fd76437906c71cbc_EOF'
+            cat << 'GH_AW_PROMPT_21fd943019943a15_EOF'
          <safe-output-tools>
-          Tools: add_comment, create_pull_request(max:2), dispatch_workflow, missing_tool, missing_data, noop
-            GH_AW_PROMPT_fd76437906c71cbc_EOF
+          Tools: add_comment, create_pull_request, dispatch_workflow, missing_tool, missing_data, noop
+            GH_AW_PROMPT_21fd943019943a15_EOF
            cat "${RUNNER_TEMP}/gh-aw/prompts/safe_outputs_create_pull_request.md"
-            cat << 'GH_AW_PROMPT_fd76437906c71cbc_EOF'
+            cat << 'GH_AW_PROMPT_21fd943019943a15_EOF'
          </safe-output-tools>
          <github-context>
          The following GitHub context information is available for this workflow:
@@ -251,22 +258,26 @@
          {{/if}}
          </github-context>
-            GH_AW_PROMPT_fd76437906c71cbc_EOF
+            GH_AW_PROMPT_21fd943019943a15_EOF
            cat "${RUNNER_TEMP}/gh-aw/prompts/github_mcp_tools_with_safeoutputs_prompt.md"
-            cat << 'GH_AW_PROMPT_fd76437906c71cbc_EOF'
+            cat << 'GH_AW_PROMPT_21fd943019943a15_EOF'
          </system>
+          {{#runtime-import .github/prompts/00-base-contract.md}}
+          {{#runtime-import .github/prompts/01-bash-and-shell-safety.md}}
+          {{#runtime-import .github/prompts/02-mcp-access.md}}
+          {{#runtime-import .github/prompts/03-data-download.md}}
+          {{#runtime-import .github/prompts/04-analysis-pipeline.md}}
+          {{#runtime-import .github/prompts/05-analysis-gate.md}}
+          {{#runtime-import .github/prompts/06-article-generation.md}}
+          {{#runtime-import .github/prompts/07-commit-and-pr.md}}
+          {{#runtime-import .github/prompts/ext/tier-c-aggregation.md}}
          {{#runtime-import .github/workflows/news-evening-analysis.md}}
-            GH_AW_PROMPT_fd76437906c71cbc_EOF
+            GH_AW_PROMPT_21fd943019943a15_EOF
          } > "$GH_AW_PROMPT"
      - name: Interpolate variables and render templates
        uses: actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9
        env:
          GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
-          GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH: ${{ github.event.inputs.analysis_depth }}
-          GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE: ${{ github.event.inputs.article_date }}
-          GH_AW_GITHUB_EVENT_INPUTS_COVERAGE_DEPTH: ${{ github.event.inputs.coverage_depth }}
-          GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES: ${{ github.event.inputs.languages }}
-          GH_AW_GITHUB_EVENT_INPUTS_LOOKBACK_HOURS: ${{ github.event.inputs.lookback_hours }}
        with:
          script: |
            const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs');
@@ -280,11 +291,6 @@ jobs:
          GH_AW_GITHUB_ACTOR: ${{ github.actor }}
          GH_AW_GITHUB_EVENT_COMMENT_ID: ${{ github.event.comment.id }}
          GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER: ${{ github.event.discussion.number }}
-          GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH: ${{ github.event.inputs.analysis_depth }}
-          GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE: ${{ github.event.inputs.article_date }}
-          GH_AW_GITHUB_EVENT_INPUTS_COVERAGE_DEPTH: ${{ github.event.inputs.coverage_depth }}
-          GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES: ${{ github.event.inputs.languages }}
-          GH_AW_GITHUB_EVENT_INPUTS_LOOKBACK_HOURS: ${{ github.event.inputs.lookback_hours }}
          GH_AW_GITHUB_EVENT_ISSUE_NUMBER: ${{ github.event.issue.number }}
          GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: ${{ github.event.pull_request.number }}
          GH_AW_GITHUB_REPOSITORY: ${{ github.repository }}
@@ -310,11 +316,6 @@ jobs:
            GH_AW_GITHUB_ACTOR: process.env.GH_AW_GITHUB_ACTOR,
            GH_AW_GITHUB_EVENT_COMMENT_ID: process.env.GH_AW_GITHUB_EVENT_COMMENT_ID,
            GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER: process.env.GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER,
-            GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH: process.env.GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH,
-            GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE: process.env.GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE,
-            GH_AW_GITHUB_EVENT_INPUTS_COVERAGE_DEPTH: process.env.GH_AW_GITHUB_EVENT_INPUTS_COVERAGE_DEPTH,
-            GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES: process.env.GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES,
-            GH_AW_GITHUB_EVENT_INPUTS_LOOKBACK_HOURS: process.env.GH_AW_GITHUB_EVENT_INPUTS_LOOKBACK_HOURS,
            GH_AW_GITHUB_EVENT_ISSUE_NUMBER: process.env.GH_AW_GITHUB_EVENT_ISSUE_NUMBER,
            GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: process.env.GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER,
            GH_AW_GITHUB_REPOSITORY: process.env.GH_AW_GITHUB_REPOSITORY,
@@ -416,7 +417,7 @@ jobs:
        run: |
          npm ci --prefer-offline --no-audit
      - name: Pre-warm MCP server (Render.com cold start mitigation)
-        run: "echo \"🔥 Pre-warming riksdag-regering MCP server via MCP protocol...\"\nMCP_URL=\"https://riksdag-regering-ai.onrender.com/mcp\"\nWARM=false\nfor i in 1 2 3 4 5 6; do\n RESP=$(curl -sf --max-time 30 -X POST \\\n -H \"Content-Type: application/json\" \\\n -d '{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\",\"params\":{}}' \\\n \"$MCP_URL\" 2>/dev/null) || true\n if echo \"$RESP\" | grep -q '\"tools\"'; then\n TOOL_COUNT=$(echo \"$RESP\" | grep -o '\"name\"' | wc -l)\n echo \"✅ MCP server responded on attempt $i with $TOOL_COUNT tools registered\"\n WARM=true\n break\n fi\n echo \"⏳ Attempt $i/6 — server may be cold-starting, waiting 20s...\"\n sleep 20\ndone\nif [ \"$WARM\" = \"false\" ]; then\n echo \"⚠️ MCP server did not respond after 6 attempts — agent will retry via in-prompt health gate\"\nfi\necho \"🔄 Starting background keep-alive pinger (every 30s, max 55 min — covers full 60-min workflow through safe-output PR creation)...\"\nKEEP_ALIVE_START=$(date +%s)\nKEEP_ALIVE_END=$((KEEP_ALIVE_START + 3300))\nexport MCP_URL KEEP_ALIVE_END\nnohup bash -c '\n while :; do\n NOW=$(date +%s)\n if [ \"$NOW\" -ge \"$KEEP_ALIVE_END\" ]; then\n break\n fi\n curl -sf --max-time 10 -X POST \\\n -H \"Content-Type: application/json\" \\\n -d \"{\\\"jsonrpc\\\":\\\"2.0\\\",\\\"id\\\":1,\\\"method\\\":\\\"tools/list\\\",\\\"params\\\":{}}\" \\\n \"$MCP_URL\" -o /dev/null 2>/dev/null || true\n sleep 30\n done\n' </dev/null >/tmp/mcp-keepalive.log 2>&1 &\nKEEP_ALIVE_PID=$!\ndisown \"$KEEP_ALIVE_PID\" 2>/dev/null || true\necho \"Keep-alive PID: $KEEP_ALIVE_PID (auto-exits after 55 min; log: /tmp/mcp-keepalive.log)\"\n"
+        run: "echo \"🔥 Pre-warming riksdag-regering MCP server via MCP protocol...\"\nMCP_URL=\"https://riksdag-regering-ai.onrender.com/mcp\"\nWARM=false\nfor i in 1 2 3 4 5 6; do\n 
RESP=$(curl -sf --max-time 30 -X POST \\\n -H \"Content-Type: application/json\" \\\n -d '{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\",\"params\":{}}' \\\n \"$MCP_URL\" 2>/dev/null) || true\n if echo \"$RESP\" | grep -q '\"tools\"'; then\n TOOL_COUNT=$(echo \"$RESP\" | grep -o '\"name\"' | wc -l)\n echo \"✅ MCP server responded on attempt $i with $TOOL_COUNT tools registered\"\n WARM=true\n break\n fi\n echo \"⏳ Attempt $i/6 — server may be cold-starting, waiting 20s...\"\n sleep 20\ndone\nif [ \"$WARM\" = \"false\" ]; then\n echo \"⚠️ MCP server did not respond after 6 attempts — agent will retry via in-prompt health gate\"\nfi\n" - name: Pre-flight external endpoint reachability check (runs before MCP Gateway) run: "echo \"🔍 Network Diagnostics — $(date -u '+%Y-%m-%dT%H:%M:%SZ')\"\necho \"═══════════════════════════════════════════\"\necho \"\"\necho \"📡 DNS Resolution Tests:\"\nfor domain in riksdag-regering-ai.onrender.com api.scb.se api.worldbank.org data.riksdagen.se www.riksdagen.se www.regeringen.se; do\n if nslookup \"$domain\" >/dev/null 2>&1; then\n IP=$(nslookup \"$domain\" 2>/dev/null | grep -A1 \"Name:\" | grep \"Address:\" | head -1 | awk '{print $2}')\n echo \" ✅ $domain → $IP\"\n else\n echo \" ❌ $domain — DNS FAILED\"\n fi\ndone\necho \"\"\necho \"🌐 HTTPS Connectivity Tests:\"\nfor url in \\\n \"https://riksdag-regering-ai.onrender.com/mcp\" \\\n \"https://api.scb.se/OV0104/v2beta\" \\\n \"https://api.worldbank.org/v2/country/SE?format=json\" \\\n \"https://data.riksdagen.se/dokumentlista/?sok=test&doktyp=bet&utformat=json&a=1\" \\\n; do\n HTTP_CODE=$(curl -s -o /dev/null -w \"%{http_code}\" --max-time 10 \"$url\" 2>/dev/null || echo \"000\")\n DOMAIN=$(echo \"$url\" | sed 's|https://||' | cut -d/ -f1)\n if [ \"$HTTP_CODE\" -ge 200 ] && [ \"$HTTP_CODE\" -lt 400 ]; then\n echo \" ✅ $DOMAIN → HTTP $HTTP_CODE\"\n elif [ \"$HTTP_CODE\" = \"000\" ]; then\n echo \" ❌ $DOMAIN → TIMEOUT/UNREACHABLE\"\n else\n echo \" ⚠️ $DOMAIN → HTTP 
$HTTP_CODE\"\n fi\ndone\necho \"\"\necho \"🔌 MCP Server Tool Count:\"\nTOOL_RESP=$(curl -sf --max-time 15 -X POST \\\n -H \"Content-Type: application/json\" \\\n -d '{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\",\"params\":{}}' \\\n \"https://riksdag-regering-ai.onrender.com/mcp\" 2>/dev/null) || TOOL_RESP=\"\"\nif echo \"$TOOL_RESP\" | grep -q '\"tools\"'; then\n TOOL_COUNT=$(echo \"$TOOL_RESP\" | grep -o '\"name\"' | wc -l)\n echo \" ✅ riksdag-regering MCP: $TOOL_COUNT tools registered\"\nelse\n echo \" ❌ riksdag-regering MCP: No tools response (server may still be starting)\"\nfi\necho \"\"\necho \"═══════════════════════════════════════════\"\n" @@ -504,16 +505,16 @@ jobs: mkdir -p "${RUNNER_TEMP}/gh-aw/safeoutputs" mkdir -p /tmp/gh-aw/safeoutputs mkdir -p /tmp/gh-aw/mcp-logs/safeoutputs - cat > "${RUNNER_TEMP}/gh-aw/safeoutputs/config.json" << 'GH_AW_SAFE_OUTPUTS_CONFIG_b59485c9bece9e7a_EOF' - {"add_comment":{"max":1},"create_pull_request":{"draft":false,"expires":336,"labels":["agentic-news","analysis-data"],"max":2,"max_patch_size":4096,"protected_files":["package.json","bun.lockb","bunfig.toml","deno.json","deno.jsonc","deno.lock","global.json","NuGet.Config","Directory.Packages.props","mix.exs","mix.lock","go.mod","go.sum","stack.yaml","stack.yaml.lock","pom.xml","build.gradle","build.gradle.kts","settings.gradle","settings.gradle.kts","gradle.properties","package-lock.json","yarn.lock","pnpm-lock.yaml","npm-shrinkwrap.json","requirements.txt","Pipfile","Pipfile.lock","pyproject.toml","setup.py","setup.cfg","Gemfile","Gemfile.lock","uv.lock","CODEOWNERS","AGENTS.md","CLAUDE.md","GEMINI.md"],"protected_path_prefixes":[".github/",".agents/"]},"create_report_incomplete_issue":{},"dispatch_workflow":{"max":1,"workflow_files":{"news-translate":".lock.yml"},"workflows":["news-translate"]},"missing_data":{},"missing_tool":{},"noop":{"max":1,"report-as-issue":"true"},"push_repo_memory":{"memories":[{"dir":"/tmp/gh-aw/repo-memory/default","id":"default","m
ax_file_count":50,"max_file_size":51200,"max_patch_size":51200}]},"report_incomplete":{}} - GH_AW_SAFE_OUTPUTS_CONFIG_b59485c9bece9e7a_EOF + cat > "${RUNNER_TEMP}/gh-aw/safeoutputs/config.json" << 'GH_AW_SAFE_OUTPUTS_CONFIG_9919f8b0904ec91c_EOF' + {"add_comment":{"max":1},"create_pull_request":{"draft":false,"expires":336,"labels":["agentic-news","analysis-data"],"max":1,"max_patch_size":4096,"protected_files":["package.json","bun.lockb","bunfig.toml","deno.json","deno.jsonc","deno.lock","global.json","NuGet.Config","Directory.Packages.props","mix.exs","mix.lock","go.mod","go.sum","stack.yaml","stack.yaml.lock","pom.xml","build.gradle","build.gradle.kts","settings.gradle","settings.gradle.kts","gradle.properties","package-lock.json","yarn.lock","pnpm-lock.yaml","npm-shrinkwrap.json","requirements.txt","Pipfile","Pipfile.lock","pyproject.toml","setup.py","setup.cfg","Gemfile","Gemfile.lock","uv.lock","CODEOWNERS","AGENTS.md","CLAUDE.md","GEMINI.md"],"protected_path_prefixes":[".github/",".agents/"]},"create_report_incomplete_issue":{},"dispatch_workflow":{"max":1,"workflow_files":{"news-translate":".lock.yml"},"workflows":["news-translate"]},"missing_data":{},"missing_tool":{},"noop":{"max":1,"report-as-issue":"true"},"push_repo_memory":{"memories":[{"dir":"/tmp/gh-aw/repo-memory/default","id":"default","max_file_count":50,"max_file_size":51200,"max_patch_size":51200}]},"report_incomplete":{}} + GH_AW_SAFE_OUTPUTS_CONFIG_9919f8b0904ec91c_EOF - name: Write Safe Outputs Tools env: GH_AW_TOOLS_META_JSON: | { "description_suffixes": { "add_comment": " CONSTRAINTS: Maximum 1 comment(s) can be added. Supports reply_to_id for discussion threading.", - "create_pull_request": " CONSTRAINTS: Maximum 2 pull request(s) can be created. Labels [\"agentic-news\" \"analysis-data\"] will be automatically added." + "create_pull_request": " CONSTRAINTS: Maximum 1 pull request(s) can be created. Labels [\"agentic-news\" \"analysis-data\"] will be automatically added." 
}, "repo_params": {}, "dynamic_tools": [ @@ -774,7 +775,7 @@ jobs: mkdir -p /home/runner/.copilot GH_AW_NODE=$(which node 2>/dev/null || command -v node 2>/dev/null || echo node) - cat << GH_AW_MCP_CONFIG_dd52d54c9e9e93fc_EOF | "$GH_AW_NODE" "${RUNNER_TEMP}/gh-aw/actions/start_mcp_gateway.cjs" + cat << GH_AW_MCP_CONFIG_858077292f059c87_EOF | "$GH_AW_NODE" "${RUNNER_TEMP}/gh-aw/actions/start_mcp_gateway.cjs" { "mcpServers": { "agenticworkflows": { @@ -904,7 +905,7 @@ jobs: "payloadDir": "${MCP_GATEWAY_PAYLOAD_DIR}" } } - GH_AW_MCP_CONFIG_dd52d54c9e9e93fc_EOF + GH_AW_MCP_CONFIG_858077292f059c87_EOF - name: Download activation artifact uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1 with: @@ -1591,7 +1592,7 @@ jobs: GH_AW_ALLOWED_DOMAINS: "*.githubusercontent.com,api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.imf.org,api.individual.githubcopilot.com,api.npms.io,api.scb.se,api.snapcraft.io,api.worldbank.org,archive.ubuntu.com,azure.archive.ubuntu.com,bun.sh,cdn.jsdelivr.net,cdn.playwright.dev,codeload.github.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,data.imf.org,data.riksdagen.se,deb.nodesource.com,deno.land,docs.github.com,esm.sh,get.pnpm.io,github-cloud.githubusercontent.com,github-cloud.s3.amazonaws.com,github.blog,github.com,github.githubassets.com,googleapis.deno.dev,googlechromelabs.github.io,hack23.com,hack23.github.io,host.docker.internal,json-schema.org,json.schemastore.org,jsr.io,keyserver.ubuntu.com,lfs.github.com,localhost,nodejs.org,npm.pkg.github.com,npmjs.com,npmjs.org,objects.githubusercontent.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,playwright.download
.prss.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,regeringen.se,registry.bower.io,registry.npmjs.com,registry.npmjs.org,registry.yarnpkg.com,repo.yarnpkg.com,riksdag-regering-ai.onrender.com,riksdagen.se,riksdagsmonitor.com,s.symcb.com,s.symcd.com,security.ubuntu.com,skimdb.npmjs.com,storage.googleapis.com,telemetry.enterprise.githubcopilot.com,telemetry.vercel.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com,www.googleapis.com,www.hack23.com,www.imf.org,www.npmjs.com,www.npmjs.org,www.regeringen.se,www.riksdagen.se,www.riksdagsmonitor.com,www.scb.se,yarnpkg.com" GITHUB_SERVER_URL: ${{ github.server_url }} GITHUB_API_URL: ${{ github.api_url }} - GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: "{\"add_comment\":{\"max\":1},\"create_pull_request\":{\"draft\":false,\"expires\":336,\"labels\":[\"agentic-news\",\"analysis-data\"],\"max\":2,\"max_patch_size\":4096,\"protected_files\":[\"package.json\",\"bun.lockb\",\"bunfig.toml\",\"deno.json\",\"deno.jsonc\",\"deno.lock\",\"global.json\",\"NuGet.Config\",\"Directory.Packages.props\",\"mix.exs\",\"mix.lock\",\"go.mod\",\"go.sum\",\"stack.yaml\",\"stack.yaml.lock\",\"pom.xml\",\"build.gradle\",\"build.gradle.kts\",\"settings.gradle\",\"settings.gradle.kts\",\"gradle.properties\",\"package-lock.json\",\"yarn.lock\",\"pnpm-lock.yaml\",\"npm-shrinkwrap.json\",\"requirements.txt\",\"Pipfile\",\"Pipfile.lock\",\"pyproject.toml\",\"setup.py\",\"setup.cfg\",\"Gemfile\",\"Gemfile.lock\",\"uv.lock\",\"CODEOWNERS\",\"AGENTS.md\",\"CLAUDE.md\",\"GEMINI.md\"],\"protected_path_prefixes\":[\".github/\",\".agents/\"]},\"create_report_incomplete_issue\":{},\"dispatch_workflow\":{\"max\":1,\"workflow_files\":{\"news-translate\":\".lock.yml\"},\"workflows\":[\"news-translate\"]},\"missing_data\":{},\"missing_tool\":{},\"noop\":{\"max\":1,\"report-as-issue\":\"true\"},\"report_incomplete\":{}}" + GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: 
"{\"add_comment\":{\"max\":1},\"create_pull_request\":{\"draft\":false,\"expires\":336,\"labels\":[\"agentic-news\",\"analysis-data\"],\"max\":1,\"max_patch_size\":4096,\"protected_files\":[\"package.json\",\"bun.lockb\",\"bunfig.toml\",\"deno.json\",\"deno.jsonc\",\"deno.lock\",\"global.json\",\"NuGet.Config\",\"Directory.Packages.props\",\"mix.exs\",\"mix.lock\",\"go.mod\",\"go.sum\",\"stack.yaml\",\"stack.yaml.lock\",\"pom.xml\",\"build.gradle\",\"build.gradle.kts\",\"settings.gradle\",\"settings.gradle.kts\",\"gradle.properties\",\"package-lock.json\",\"yarn.lock\",\"pnpm-lock.yaml\",\"npm-shrinkwrap.json\",\"requirements.txt\",\"Pipfile\",\"Pipfile.lock\",\"pyproject.toml\",\"setup.py\",\"setup.cfg\",\"Gemfile\",\"Gemfile.lock\",\"uv.lock\",\"CODEOWNERS\",\"AGENTS.md\",\"CLAUDE.md\",\"GEMINI.md\"],\"protected_path_prefixes\":[\".github/\",\".agents/\"]},\"create_report_incomplete_issue\":{},\"dispatch_workflow\":{\"max\":1,\"workflow_files\":{\"news-translate\":\".lock.yml\"},\"workflows\":[\"news-translate\"]},\"missing_data\":{},\"missing_tool\":{},\"noop\":{\"max\":1,\"report-as-issue\":\"true\"},\"report_incomplete\":{}}" GH_AW_CI_TRIGGER_TOKEN: ${{ secrets.GH_AW_CI_TRIGGER_TOKEN }} with: github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }} diff --git a/.github/workflows/news-evening-analysis.md b/.github/workflows/news-evening-analysis.md index 2ec4bc506..23d395dce 100644 --- a/.github/workflows/news-evening-analysis.md +++ b/.github/workflows/news-evening-analysis.md @@ -2,6 +2,16 @@ name: News Evening Analysis description: Generates comprehensive evening analysis articles in core languages (EN, SV) with Playwright validation. Translations handled by news-translate workflow. On Saturdays, produces a weekly wrap-up reviewing the full parliamentary week. 
strict: false # Allow custom network domain riksdag-regering-ai.onrender.com (trusted MCP server) +imports: + - ../prompts/00-base-contract.md + - ../prompts/01-bash-and-shell-safety.md + - ../prompts/02-mcp-access.md + - ../prompts/03-data-download.md + - ../prompts/04-analysis-pipeline.md + - ../prompts/05-analysis-gate.md + - ../prompts/06-article-generation.md + - ../prompts/07-commit-and-pr.md + - ../prompts/ext/tier-c-aggregation.md on: schedule: # Run weekday evenings at 18:00 UTC (19:00 CET / 20:00 CEST) @@ -127,7 +137,7 @@ safe-outputs: labels: [agentic-news, analysis-data] draft: false expires: 14d - max: 2 + max: 1 add-comment: {} dispatch-workflow: workflows: [news-translate] @@ -165,26 +175,6 @@ steps: if [ "$WARM" = "false" ]; then echo "⚠️ MCP server did not respond after 6 attempts — agent will retry via in-prompt health gate" fi - echo "🔄 Starting background keep-alive pinger (every 30s, max 55 min — covers full 60-min workflow through safe-output PR creation)..." - KEEP_ALIVE_START=$(date +%s) - KEEP_ALIVE_END=$((KEEP_ALIVE_START + 3300)) - export MCP_URL KEEP_ALIVE_END - nohup bash -c ' - while :; do - NOW=$(date +%s) - if [ "$NOW" -ge "$KEEP_ALIVE_END" ]; then - break - fi - curl -sf --max-time 10 -X POST \ - -H "Content-Type: application/json" \ - -d "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\",\"params\":{}}" \ - "$MCP_URL" -o /dev/null 2>/dev/null || true - sleep 30 - done - ' </dev/null >/tmp/mcp-keepalive.log 2>&1 & - KEEP_ALIVE_PID=$! - disown "$KEEP_ALIVE_PID" 2>/dev/null || true - echo "Keep-alive PID: $KEEP_ALIVE_PID (auto-exits after 55 min; log: /tmp/mcp-keepalive.log)" - name: Pre-flight external endpoint reachability check (runs before MCP Gateway) run: | @@ -238,860 +228,51 @@ engine: model: claude-opus-4.7 --- -# 🌆 Evening Parliamentary Analysis - -You are the **Evening Political Analyst** for Riksdagsmonitor. Generate comprehensive analysis of the day's parliamentary and government activity. 
On Saturdays, produce a **weekly wrap-up** instead. - -## 🔴 CRITICAL: AI Writes ALL Content with Iterative Improvement (v5.0) - -> **You are a political intelligence analyst, NOT a script executor.** Your PRIMARY job is to produce excellent quality political intelligence through iterative improvement. You MUST: -> 1. **ANALYZE** parliamentary data deeply — SWOT, stakeholder perspectives, risk assessment, election implications -> 2. **WRITE** genuine political intelligence articles with specific actors, evidence citations, and analytical insight -> 3. **USE** the script (`generate-news-enhanced.ts`) ONLY for HTML formatting — the script creates a shell, YOU fill it with analysis -> 4. **REPLACE** every `AI_MUST_REPLACE` marker with real analysis — ZERO markers may remain -> 5. **ITERATE** — read ALL your output back completely and IMPROVE every section (minimum 2 full passes) -> 6. **VERIFY** article quality: minimum 1000 words, SWOT analysis, stakeholder perspectives, dok_id citations -> 7. **SPEND THE FULL TIME** — use at least 45 of the 60 allocated minutes doing real work -> -> 🔴 **ITERATIVE IMPROVEMENT IS MANDATORY (2+ passes):** -> - **Analysis Pass 1** (15 min): Create analysis for every document following templates -> - **Analysis Pass 2** (7 min): Read ALL analysis back, improve evidence, diagrams, cross-references -> - **Article Pass 1** (10 min): Generate articles with AI-written content from analysis -> - **Article Pass 2** (8 min): Read ALL articles back completely, improve every section -> - **NEVER complete early** — if you finish ahead, use remaining time to deepen analysis -> -> **If the final article reads like a list of document titles with generic descriptions, you have FAILED.** Rewrite with genuine political analysis before committing. 
- - -## 🔧 Workflow Dispatch Parameters - -- **coverage_depth** = `${{ github.event.inputs.coverage_depth }}` — Controls article **content scope**: how many topics and how broad the coverage (e.g., `comprehensive` on Saturdays for weekly wrap-up). -- **analysis_depth** = `${{ github.event.inputs.analysis_depth }}` — Controls **AI analysis quality**: SWOT complexity, stakeholder count, dashboard charts, and iteration count per the editorial framework. -- **languages** = `${{ github.event.inputs.languages }}` -- **lookback_hours** = `${{ github.event.inputs.lookback_hours }}` - -> **Note:** `coverage_depth` and `analysis_depth` are distinct inputs. `coverage_depth` determines *what* to cover (breadth); `analysis_depth` determines *how deeply* to analyze it (quality). They default independently — adjust each based on the article's needs. - -## ⚠️ CRITICAL: Bash Tool Call Format - -> **Full reference:** See `SHARED_PROMPT_PATTERNS.md` → "Bash Tool Call Format". Key rule: every `bash` call MUST have both `command` AND `description` parameters. Example: `bash({ command: "date -u '+%Y-%m-%d'", description: "Get current UTC date" })`. Calls missing either field fail with `Multiple validation errors: - "command": Required - "description": Required`. - -## 🛡️ AWF Shell Safety - -> **Full reference:** See `SHARED_PROMPT_PATTERNS.md` → "AWF Shell Safety". Summary: use `$VAR` not `$`+`{VAR}`, use `find -exec` not `$(...)`, set defaults with `if/then` before using `$VAR`. - -## 🔤 UTF-8 Encoding - -> **Full reference:** See `SHARED_PROMPT_PATTERNS.md` → "UTF-8 Encoding". Summary: use native UTF-8 (`ö`, `ä`, `å`) — NEVER HTML entities (`ö`, `ä`). Author: `James Pether Sörling`. - - -## ⚠️ NON-NEGOTIABLE RULES - -1. 
Every run **MUST** end with exactly one safe output tool call: - - Articles generated → `safeoutputs___create_pull_request({...})` - - No significant activity → `safeoutputs___noop({"message": "..."})` - - Tool unavailable → `safeoutputs___missing_tool({"reason": "..."})` - - MCP data unavailable → `safeoutputs___missing_data({"reason": "..."})` -2. `safeoutputs___create_pull_request` handles branch creation and push. **NEVER** run `git push` or `git checkout -b`. -3. **🚨 NEVER search for safe output tools via bash.** `safeoutputs___create_pull_request`, `safeoutputs___noop`, `safeoutputs___missing_tool`, and `safeoutputs___missing_data` are **always available as direct tool calls** in your tool list. NEVER run `ls /tmp/gh-aw/`, `ls /home/runner/.copilot/`, or any bash command to "find" them. -4. **NEVER** write your own MCP HTTP/JSON-RPC client. Use the scripts or direct tool calls only. -5. Exiting without calling a safe output tool = **workflow failure**. If anything goes wrong at any point, call `safeoutputs___noop` immediately. -6. **🚨 FULL ANALYSIS BEFORE ANY ARTICLE (BLOCKING)**: The complete deep political analysis phase following [`analysis/methodologies/ai-driven-analysis-guide.md`](../../analysis/methodologies/ai-driven-analysis-guide.md) (Rule 0 two-pass iteration + Rules 6–8 depth tiers, 15 min Pass 1 + 7 min Pass 2 minimum, ALL 9 required artifacts) **MUST** be complete **BEFORE** creating or updating any article HTML. Articles **MUST** be (re)generated/updated from the improved Pass 2 analysis — never from Pass 1 stubs, never from scripts alone, never skipping Pass 2. Analysis is the primary output and must execute every run. Violations = REJECTED PR (see PR #1705 comment audit, 2026-04-18). - -## 🧠 Repo Memory - -Uses `memory/news-generation` branch. START: read `memory/news-generation/last-run-news-evening-analysis.json` + `memory/news-generation/covered-documents/{YYYY-MM-DD}.json`. 
END: update both + `memory/news-generation/translation-status.json`. Skip already-covered dok_ids. - -## ⏱️ Time Budget (45 minutes) — ENFORCED Minimum 40 Minutes - -> 🔴 **SYSTEMIC ISSUE (PR #1794 audit, 2026-04-16)**: ALL news workflows completing in 13-22 min of 45-min allocation, producing shallow analysis. Agent MUST use at least 40 of 45 minutes. Completion < 40 min = insufficient iteration = REJECTED. -> -> 🔴 **ROOT CAUSE (PR #1801, 2026-04-16)**: Evening analysis produced only 3 of 9 required analysis artifacts (missing swot-analysis.md, risk-assessment.md, threat-analysis.md, classification-results.md, cross-reference-map.md, data-download-manifest.md) and completed in 23 minutes. This is because the agent skipped Phase B artifact creation. **ALL 9 artifacts are MANDATORY** — see §"9 REQUIRED Analysis Artifacts" below. - -```bash -date +%s > /tmp/start_time.txt -read START_TIME < /tmp/start_time.txt -``` - -| Phase | Minutes | Action | ✅ Verification | -|-------|---------|--------|----------------| -| Setup | 0–3 | Date check, `get_sync_status()`, determine day type | MCP responds | -| Download | 3–6 | Run `populate-analysis-data.ts` + `download-parliamentary-data.ts` (script-driven data download) | Data files exist | -| **AI Analysis Pass 1** | **6–21** | **🚨 MANDATORY 15 min minimum**: Read ALL methodology guides, create per-file analysis for EVERY document with Mermaid diagrams, evidence tables, SWOT entries. **Create ALL 9 required artifacts.** | 9 artifact files exist | -| **AI Analysis Pass 2 (Part A)** | **21–22** | Begin reading ALL 9 analysis artifacts back and identify improvement targets. | Files opened for review | -| **Heartbeat PR** | **22–25** | 🫀 `git add && git commit` analysis + any drafts so far, then `safeoutputs___create_pull_request` (title `🫀 Heartbeat - Evening Analysis - {date}`). This refreshes the safeoutputs MCP session (which expires after ~30–35 min idle) AND guarantees no work is lost if later phases fail. 
After the call succeeds, run `git checkout main` so subsequent commits don't stack onto the frozen patch. | Heartbeat PR created | -| **AI Analysis Pass 2 (Part B)** | **25–28** | **Complete improvements (6 min improvement work total across Parts A+B)**: improve every section, add missing Mermaid diagrams and evidence tables, replace ALL script stubs with AI analysis. | All 9 files ≥500 bytes | -| Gates | 28–30 | Run ENFORCED Minimum Time Gate + Enrichment Verification Gate + **9-artifact completeness check** (SHARED_PROMPT_PATTERNS.md). ALL MUST pass. | 0 failures | -| Generate | 30–36 | Run generation script OR manual synthesis (see Step 3) | HTML files created | -| **Article Improvement** | **36–40** | 🚨 **Article Improvement Pass**: Read ALL articles back, replace AI_MUST_REPLACE markers, improve content. Run article quality component gate. | 0 AI_MUST_REPLACE markers | -| Validate+PR | 40–45 | Validate, commit, `safeoutputs___create_pull_request` | PR created | - -| **HARD DEADLINE** | **43–45** | 🚨 If no safe output yet: if ANY artifacts/files were created, IMMEDIATELY stage, commit, call `safeoutputs___create_pull_request` with partial work. ONLY call `safeoutputs___noop` if truly ZERO files were created. | -> ⚠️ **Analysis phase is 22 minutes minimum (Pass 1: 15 min + Pass 2: 7 min)** — every analysis file must contain color-coded Mermaid diagrams, structured evidence tables with dok_id citations, and follow template structure exactly. ALL script-generated stubs MUST be replaced with AI-enriched analysis. **ALL 9 required artifacts MUST be created** (not just the 3 the script generates). Run the ENFORCED gates from SHARED_PROMPT_PATTERNS.md before proceeding to article generation. -> -> 🔴 **ANTI-PATTERN (REJECTED)**: Creating only synthesis-summary.md + significance-scoring.md + stakeholder-perspectives.md and skipping the other 6 artifacts. This produces shallow articles missing SWOT tables, risk matrices, threat analysis, and classification data. 
- -**Hard cutoffs**: `>= 25 min` and no safeoutputs call yet → 🚨 call `safeoutputs___create_pull_request` as a heartbeat with whatever files exist (do NOT delay — the safeoutputs session expires at ~30–35 min idle); `>= 35 min` → commit & PR now; `>= 43 min` → STOP ALL WORK, call safe output immediately. - -## Required Skills - -Consult as needed — do NOT read all files upfront: -- **Skills:** `.github/skills/editorial-standards/SKILL.md`, `.github/skills/swedish-political-system/SKILL.md`, `.github/skills/legislative-monitoring/SKILL.md`, `.github/skills/riksdag-regering-mcp/SKILL.md`, `.github/skills/language-expertise/SKILL.md`, `.github/skills/gh-aw-safe-outputs/SKILL.md` -- **Analysis:** `scripts/prompts/v2/political-analysis.md`, `per-file-intelligence-analysis.md`, `quality-criteria.md` -- **Methodology:** `analysis/methodologies/ai-driven-analysis-guide.md` (v5.0) + `analysis/templates/per-file-political-intelligence.md` - -## 📊 MANDATORY Multi-Step AI Analysis Framework - -### Article Type Isolation - -> 🚨 **This workflow writes analysis ONLY to `analysis/daily/$ARTICLE_DATE/evening-analysis/`**. NEVER write to the parent date directory or another article type's folder. See SHARED_PROMPT_PATTERNS.md "Article Type Isolation" section. - -### Standardised Analysis Depth Gate - -> ⚠️ **Default is `deep`** — not `standard`. Analysis must always produce publication-quality output with Mermaid diagrams and evidence tables. - -| Depth | AI iterations | SWOT stakeholders | Charts | Mindmap | Mermaid diagrams | Risk matrix (L×I) | Forward indicators | Min. 
analysis time | -|-------|--------------|-------------------|--------|---------|-----------------|-------------------|-------------------|-------------------| -| standard | 1-2 | ≥5 (of 8 groups) | ≥1 | optional | ≥1 color-coded | ≥2 risks scored | ≥2 with triggers | 10 minutes | -| deep | 2-3 | ≥7 (of 8 groups) | ≥2 | required | ≥2 color-coded | ≥4 risks scored | ≥3 with triggers | 15 minutes | -| comprehensive | 3+ | all 8 groups | ≥3 | required | ≥3 color-coded | ≥6 risks scored | ≥5 with triggers | 20 minutes | - -**The 8 mandatory stakeholder groups are**: Citizens, Government Coalition, Opposition Bloc, Business/Industry, Civil Society, International/EU, Judiciary/Constitutional, Media/Public Opinion. Every group MUST be analyzed with specific evidence (dok_id, vote counts, named politicians). - -**Minimum requirement for ALL depths**: Every analysis file must contain at least 1 color-coded Mermaid diagram, structured evidence tables with dok_id citations, quantified risk matrix with numeric L×I scores, forward indicators with specific triggers/timelines, confidence labels on all analytical claims, and follow the corresponding template structure exactly. Plain prose without tables/diagrams is NEVER acceptable regardless of depth level. - -> **Read `analysis_depth` input first** (default: `deep`). This controls iteration count and section requirements. - -Based on the editorial profile for `evening-analysis`: SWOT ALL 8 groups, ≥1 dashboard chart, mindmap optional (standard)/required (deep+), ≥1 Mermaid diagram, numeric L×I risk scores, forward indicators with next-day/week triggers, `[HIGH]`/`[MEDIUM]`/`[LOW]` on ALL claims, 1–3 AI iterations per depth. 
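One way to encode the per-depth thresholds from the gate table above is a single `case` dispatch on the `analysis_depth` input. The variable names below are illustrative, not part of the workflow contract:

```shell
# Map analysis_depth to the minimum requirements from the depth gate table.
ANALYSIS_DEPTH="deep"   # default per the gate above

case "$ANALYSIS_DEPTH" in
  standard)      MIN_STAKEHOLDERS=5; MIN_DIAGRAMS=1; MIN_CHARTS=1; MIN_RISKS=2; MIN_MINUTES=10 ;;
  deep)          MIN_STAKEHOLDERS=7; MIN_DIAGRAMS=2; MIN_CHARTS=2; MIN_RISKS=4; MIN_MINUTES=15 ;;
  comprehensive) MIN_STAKEHOLDERS=8; MIN_DIAGRAMS=3; MIN_CHARTS=3; MIN_RISKS=6; MIN_MINUTES=20 ;;
  *) echo "unknown analysis_depth: $ANALYSIS_DEPTH" >&2; exit 1 ;;
esac

echo "$ANALYSIS_DEPTH: stakeholders>=$MIN_STAKEHOLDERS diagrams>=$MIN_DIAGRAMS risks>=$MIN_RISKS minutes>=$MIN_MINUTES"
```

A gate check can then compare actual artifact counts against these minimums before allowing article generation to begin.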
- -> 🚨 **ANTI-PATTERNS (REJECTED)**: Surface-level daily summaries without analysis, SWOT with only 3 groups, no Mermaid diagrams, no risk scores, no forward indicators - -### 🗳️ Election 2026 Lens (Mandatory — v5.0) - -Every analysis MUST include an **Election 2026 Implications** section assessing: Electoral Impact, Coalition Scenarios, Voter Salience, Campaign Vulnerability, and Policy Legacy. Use the **5-level confidence scale** (⬛VERY LOW → 🟥LOW → 🟧MEDIUM → 🟩HIGH → 🟦VERY HIGH). See `analysis/methodologies/ai-driven-analysis-guide.md` v5.0 for full criteria. - -See `SHARED_PROMPT_PATTERNS.md` §"Standardised Analysis Depth Gate" and §"MANDATORY: AI-Driven Analysis Using Methods & Templates" for Phase 1 (data collection + significance scoring), Phase 2 (depth enhancement: Quick SWOT, Activity Summary, quality gate: ≥400 words), and Phase 3 (final quality gate + `validate-news-generation.sh`). - -## Step 1: Date Validation & MCP Health Check - -```bash -echo "=== Date Validation Check ===" -date +%s > /tmp/start_time.txt -read START_TIME < /tmp/start_time.txt -echo "START_TIME=$START_TIME" > /tmp/gh-aw/agent/timing.env -date -u "+Current UTC: %A %Y-%m-%d %H:%M:%S" -date +"%Z: %A %Y-%m-%d %H:%M:%S" -date -u +"%u" > /tmp/dow.txt -read DAY_OF_WEEK < /tmp/dow.txt -echo "Day of week: $DAY_OF_WEEK (6=Saturday weekly wrap-up)" -echo "============================" -``` - -## 📅 Riksmöte (Parliamentary Session) Calculation - -Sep+ → `rm = "{year}/{year+1 2-digit}"` (e.g. Oct 2026 → `2026/27`). Before Sep → `rm = "{year-1}/{year 2-digit}"` (e.g. Feb 2026 → `2025/26`). - -## MANDATORY Deduplication Check - -Before generating articles, check if articles already exist for the target date. 
**This check controls article GENERATION only — the deep political analysis phase ALWAYS runs regardless.** -```bash -# Resolve article date: use workflow_dispatch input when provided, fallback to UTC today -ARTICLE_DATE="${{ github.event.inputs.article_date }}" -if [ -z "$ARTICLE_DATE" ]; then - date -u +%Y-%m-%d > /tmp/today.txt - read ARTICLE_DATE < /tmp/today.txt -fi -ARTICLE_TYPE="evening-analysis" -ls news/$ARTICLE_DATE-$ARTICLE_TYPE-en.html 2>/dev/null | wc -l > /tmp/existing_count.txt -read EXISTING < /tmp/existing_count.txt -if [ "$EXISTING" -gt 0 ]; then - echo "📋 Articles for $ARTICLE_DATE/$ARTICLE_TYPE already exist — article generation will be skipped (analysis still runs)" - SKIP_ARTICLE_GENERATION=true - echo "SKIP_ARTICLE_GENERATION=true" >> "$GITHUB_ENV" -fi -# NOTE: Do NOT exit here or call safeoutputs___noop — analysis phase MUST still execute -# Later article-generation steps MUST gate on: if [ "$SKIP_ARTICLE_GENERATION" != "true" ]; then ... - -``` - -> **🚨 NEVER call `safeoutputs___noop` because articles already exist.** If articles exist, the workflow MUST still run the full 15-20 minute deep political analysis phase and commit analysis artifacts. The dedup check only controls whether NEW HTML articles are generated — analysis is the primary output and always runs. If analysis produces artifacts, use `safeoutputs___create_pull_request` with `analysis-only` label. - -### MANDATORY MCP Health Gate - -> **The step-level pre-warm (6 attempts × 20s) already mitigates Render.com cold starts.** This in-prompt gate is a lightweight verification — NOT a full retry loop. Do NOT spend more than 90 seconds here. -> -> **📖 Full MCP architecture, tool names, and calling conventions:** See `SHARED_PROMPT_PATTERNS.md` → "MCP Architecture & Tool Reference" section. Tool names are EXACT: riksdag tools use underscores (`get_sync_status`), World Bank uses hyphens (`get-economic-data`), SCB uses underscores (`search_tables`). 
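The riksmöte rule above can be sketched in AWF-safe bash (tempfile + `read` instead of `$(...)`, matching this workflow's conventions; the `/tmp` paths are illustrative):

```bash
# Riksmöte: Sep–Dec → "YYYY/YY+1"; Jan–Aug → "YYYY-1/YY"
date -u +%Y > /tmp/rm_year.txt
read YEAR < /tmp/rm_year.txt
date -u +%m > /tmp/rm_month.txt
read MONTH < /tmp/rm_month.txt
MONTH=${MONTH#0}                 # strip leading zero ("09" → "9") before arithmetic tests
if [ "$MONTH" -ge 9 ]; then
  NEXT=$((YEAR + 1))
  RM="$YEAR/${NEXT#??}"          # keep last two digits: e.g. Oct 2026 → "2026/27"
else
  PREV=$((YEAR - 1))
  RM="$PREV/${YEAR#??}"          # e.g. Feb 2026 → "2025/26"
fi
echo "rm=$RM"
```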
- -STEP 1: ALWAYS check data freshness first — call `get_sync_status({})` to warm up MCP and check stale data. - -1. Call `get_sync_status({})` — retry up to **3×** (20s wait between each, not 45s — the server is already warm from the step-level pre-warm) -2. If you get **"unknown tool"** or **"0 tools registered"** errors after 3 attempts, run a quick diagnostic: -```bash -echo "🔍 MCP Quick Diagnostic" -echo "Direct MCP server:" && curl -sf --max-time 15 -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}' "https://riksdag-regering-ai.onrender.com/mcp" 2>/dev/null | head -c 200 || echo "UNREACHABLE" -``` -3. After 3 failures → `safeoutputs___noop({"message": "MCP server unavailable after 3 attempts — step-level pre-warm also failed"})` -4. **ALL content MUST come from live MCP data.** Never use cached articles, stale data, or AI-fabricated content. -5. **⏱️ Do NOT spend more than 2 minutes on MCP warmup** — proceed to analysis immediately once `get_sync_status` succeeds. - -### DATA FRESHNESS CHECK & Date Filtering - -If `hoursSinceSync > 48`, add a stale-data disclaimer but proceed. See `SHARED_PROMPT_PATTERNS.md` §"Date Filtering" for canonical JS patterns. Key: `get_calendar_events` uses `from`/`tom`; `search_regering` uses `dateFrom`/`dateTo`; post-query filter other tools by `datum`/`publicerad`/`inlämnad`. Use `scripts/mcp-query-cli.ts` for ad-hoc queries — NEVER implement custom MCP client code. - -### ⚠️ Calendar API Fallback - -`get_calendar_events` intermittently returns HTML. If it fails: (1) do NOT treat failure as "no events"; (2) use `search_dokument({ from_date, to_date, doktyp: "bet" })` as a proxy; (3) flag the error in output. - -### Cross-Referencing Strategy - -> See `SHARED_PROMPT_PATTERNS.md` §"Cross-Referencing Strategy" for full examples. 
Key: combine committee reports + voting records (`search_voteringar`), propositions + press releases (`search_regering`), speeches (`search_anforanden`). Post-query filter by `datum`/`publicerad`/`inlämnad` for tools without native date params. - -### Saturday vs Weekday Mode - -- **Saturday** (day_of_week=6): Produce a **Weekly Parliamentary Review** looking back 5 days (Monday–Friday). Use `coverage_depth: comprehensive`. Title: "The Week in Swedish Politics: {key theme}". Article slug: `weekly-review`. -- **Weekday** (Mon–Fri): Produce a daily **evening analysis**. Use the `coverage_depth` and `lookback_hours` inputs. Article slug: `evening-analysis`. - -### Coverage Depth -- **standard** — Day's key events with brief analysis (800-1200 words) -- **deep** — Extended analysis with historical context (1500-2500 words) -- **comprehensive** — Full coverage including minor events (2500-4000 words) - -## Step 1.5: Data Download & Per-File AI Analysis - -**CRITICAL: This step downloads data AND performs deep AI analysis BEFORE article generation.** - -### Phase A — Data Download (Script-Driven) - -Download all available parliamentary data using the populate script. Scripts handle data download efficiently: - -```bash -# Idempotent: only set if not already resolved by lookback -if [ -z "$ARTICLE_DATE" ]; then - ARTICLE_DATE="${{ github.event.inputs.article_date }}" - if [ -z "$ARTICLE_DATE" ]; then - date -u +%Y-%m-%d > /tmp/today.txt - read ARTICLE_DATE < /tmp/today.txt - fi -fi -echo "📥 Downloading MCP data for $ARTICLE_DATE..." -# CRITICAL: Source mcp-setup.sh to set MCP_SERVER_URL and MCP_AUTH_TOKEN for the gateway -source scripts/mcp-setup.sh && echo "MCP_SERVER_URL=$MCP_SERVER_URL" -npx tsx scripts/populate-analysis-data.ts --date "$ARTICLE_DATE" --limit 50 || echo "⚠️ Data download had issues (non-blocking)" -echo "📥 Running pre-article analysis pipeline..." 
-npx tsx scripts/download-parliamentary-data.ts --date "$ARTICLE_DATE" --limit 50 > /tmp/pipeline-output.log 2>&1 -PIPE_EXIT=$? -cat /tmp/pipeline-output.log -if [ "$PIPE_EXIT" -ne 0 ]; then - echo "❌ Pipeline failed with exit code $PIPE_EXIT — agent MUST diagnose and fix (see Script Debugging Protocol)" - tail -30 /tmp/pipeline-output.log -fi -echo "✅ Data downloaded to analysis/data/" -# Verify actual data was downloaded -MANIFEST_DOCS=0 -if [ -f "analysis/daily/$ARTICLE_DATE/data-download-manifest.md" ]; then - grep -E '^\*\*Documents Analyzed\*\*' "analysis/daily/$ARTICLE_DATE/data-download-manifest.md" 2>/dev/null | grep -oE '[0-9]+' | head -1 > /tmp/manifest_docs.txt || echo 0 > /tmp/manifest_docs.txt -read MANIFEST_DOCS < /tmp/manifest_docs.txt -fi -[ -z "$MANIFEST_DOCS" ] && MANIFEST_DOCS=0 -find analysis/data/ -name "*.json" -type f 2>/dev/null | wc -l > /tmp/data_count.txt -read DATA_JSON_COUNT < /tmp/data_count.txt -echo "📊 Documents in manifest: $MANIFEST_DOCS, JSON data files: $DATA_JSON_COUNT" -# Relocate pipeline artifacts: download-parliamentary-data.ts writes to analysis/daily/$DATE/ (unscoped) -# but this workflow needs them under analysis/daily/$DATE/evening-analysis/ -# === Run Suffix Resolution (see SHARED_PROMPT_PATTERNS.md) === -BASE_SUBFOLDER="evening-analysis" -ANALYSIS_SUBFOLDER="$BASE_SUBFOLDER" -if [ "$FORCE_GENERATION" != "true" ]; then - _SUFFIX=1 - while [ -f "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER/synthesis-summary.md" ]; do - _SUFFIX=$((_SUFFIX + 1)) - ANALYSIS_SUBFOLDER="$BASE_SUBFOLDER-$_SUFFIX" - done -fi -echo "📁 Analysis subfolder resolved: $ANALYSIS_SUBFOLDER" -UNSCOPED_DIR="analysis/daily/$ARTICLE_DATE" -SCOPED_DIR="$UNSCOPED_DIR/$ANALYSIS_SUBFOLDER" -if [ -d "$UNSCOPED_DIR" ]; then - mkdir -p "$SCOPED_DIR" - if find "$UNSCOPED_DIR" -maxdepth 1 -type f -name "*.md" | grep -q .; then - find "$UNSCOPED_DIR" -maxdepth 1 -type f -name "*.md" -exec mv -f {} "$SCOPED_DIR/" \; - echo "📁 Moved pipeline *.md artifacts → 
$SCOPED_DIR (root cleaned to prevent merge conflicts)" - fi - if [ -d "$UNSCOPED_DIR/documents" ]; then - mkdir -p "$SCOPED_DIR/documents" - find "$UNSCOPED_DIR/documents" -mindepth 1 -maxdepth 1 -exec mv {} "$SCOPED_DIR/documents/" \; - rmdir "$UNSCOPED_DIR/documents" 2>/dev/null || true - echo "📁 Relocated pipeline documents/ contents → $SCOPED_DIR/documents" - fi -fi -if [ "$MANIFEST_DOCS" -eq 0 ] && [ "$DATA_JSON_COUNT" -eq 0 ]; then - echo "🚨 CRITICAL: Pipeline downloaded ZERO data. Agent MUST diagnose and fix — do NOT fabricate analysis." -fi -``` - -### 🔄 Phase A.1 — Data Lookback Fallback - -> 🚨 **CRITICAL RULE**: Never produce empty/stub analysis. If no data for today, look back to find unanalyzed data. Empty analysis = wasted workflow run. - -Key steps: resolve `ARTICLE_DATE` from input or today → check `analysis/daily/$ARTICLE_DATE/evening-analysis/data-download-manifest.md` → if 0 docs, loop `DAYS_BACK` 1–7 using `date -u -d "$ARTICLE_DATE - $DAYS_BACK days"`, run `download-parliamentary-data.ts --date "$LOOKBACK_DATE"` → copy artifacts from found date to original date folder → run `catalog-downloaded-data.ts --pending-only`. See `SHARED_PROMPT_PATTERNS.md` §"Data Lookback Fallback Strategy" for full bash implementation. - -### Phase B — Per-File AI Political Intelligence Analysis (AI-Driven) - -**This is the core analysis phase.** The AI agent (you) performs deep analysis of every downloaded file, creating publication-quality intelligence markdown files. - -> 🚨 **CRITICAL RULE:** You must **actually read the JSON data** in each file and base all analysis on real data found there. Every SWOT entry, risk score, and stakeholder assessment must cite specific data from the file (dok_id, vote counts, party names, reservation details). Generic or boilerplate analysis is a failure mode. 
- -Follow `SHARED_PROMPT_PATTERNS.md` §"Per-File AI Analysis Block" and §"MANDATORY: AI-Driven Analysis Using Methods & Templates" exactly: -- **Step A**: Read `analysis/methodologies/ai-driven-analysis-guide.md` + `analysis/templates/per-file-political-intelligence.md` FIRST -- **Step B**: For EVERY document JSON → create `{dok_id}-analysis.md` with ALL 6 analytical lenses, ≥1 color-coded Mermaid, evidence tables -- **Step C**: Rewrite ALL 9 synthesis files to match templates exactly (see required artifact list below) -- **Step D**: Run quality gate (see SHARED §"Step 5b: MANDATORY Quality Gate"). Fix ALL failures. - -#### 🔴 B4. 9 REQUIRED Analysis Artifacts — ALL Must Be Created - -> 🚨 **NON-NEGOTIABLE**: The evening analysis MUST produce ALL 9 analysis artifacts in `analysis/daily/$ARTICLE_DATE/evening-analysis/`. Producing only 3 of 9 (e.g. only synthesis-summary, significance-scoring, stakeholder-perspectives) is a **CRITICAL FAILURE**. The quality gate WILL reject incomplete analysis. 
- -| # | Required File | Template | What It Must Contain | -|---|--------------|----------|---------------------| -| 1 | `synthesis-summary.md` | `analysis/templates/synthesis-summary.md` | SYN-ID, Intelligence Dashboard (Mermaid), Top Findings table, Aggregated SWOT, Risk Landscape, Forward Indicators, Artifacts Inventory | -| 2 | `swot-analysis.md` | `analysis/templates/swot-analysis.md` | SWT-ID, Quadrant Mapping (Mermaid mindmap), ≥2 filled quadrants with dok_id evidence, Coalition + Opposition SWOT | -| 3 | `risk-assessment.md` | `analysis/templates/risk-assessment.md` | RSK-ID, Risk Heat Map (Mermaid quadrant chart), ≥4 risks with L×I numeric scores, Coalition Stability Risk | -| 4 | `threat-analysis.md` | `analysis/templates/threat-analysis.md` | THR-ID, Threat Taxonomy Network (Mermaid), ALL 6 threat categories with ≥1 threat each (severity 1-5) | -| 5 | `classification-results.md` | `analysis/templates/political-classification.md` | CLS-ID, Sensitivity Decision Tree (Mermaid), per-document table with sensitivity/domain/urgency/significance | -| 6 | `significance-scoring.md` | `analysis/templates/significance-scoring.md` | SIG-ID, 5-dimension scoring, Composite Score, Publication Decision | -| 7 | `stakeholder-perspectives.md` | `analysis/templates/stakeholder-impact.md` | STA-ID, Impact Radar (Mermaid), ALL 8 stakeholder groups with impact level and timeline | -| 8 | `cross-reference-map.md` | Cross-reference template | XRF-ID, Document relationship graph, links between propositions/motions/committee reports/press releases | -| 9 | `data-download-manifest.md` | Manifest template | Documents Analyzed count, data sources, download timestamps, completeness status | - -#### 🏆 B4b. ADDITIONAL 5 Tier-C Reference-Grade Artefacts (Aggregation Requirement) - -> 🔴 **NON-NEGOTIABLE (Added 2026-04-19)**: Evening-analysis is an **aggregation workflow** — it synthesises the full day's document flow for decision-makers. 
Per `SHARED_PROMPT_PATTERNS.md` §"14 REQUIRED Artifacts for AGGREGATION Workflows — Reference-Grade Tier-C", evening-analysis MUST produce the 9 core artefacts above PLUS these 5 additional Tier-C files (total **14**): - -| # | Tier-C File | What It Must Contain | -|---|-------------|---------------------| -| 10 | `README.md` | Package index · reading orders by audience · file index · lead-story at-a-glance · upstream-run relationship table | -| 11 | `executive-brief.md` | BLUF ≤ 300 words · 3 decisions supported · 8-bullet "60-second" read · named actors (≥ 5 ministers/party leaders) · next-day watch points · top-5 risks · confidence meter | -| 12 | `scenario-analysis.md` | 3 base scenarios (tomorrow / 7-day / 30-day horizons) + 2 wildcards · ACH grid · trigger calendar | -| 13 | `comparative-international.md` | ≥ 5 jurisdictions benchmarked across the day's top clusters (Nordic + EU + cluster-relevant) | -| 14 | `methodology-reflection.md` | Methodology application matrix · **Upstream Watchpoint Reconciliation** (last 3 days of `realtime-*` + prior `evening-analysis`) · uncertainty hot-spots · Pass-1→Pass-2 improvement evidence | - -**Step 0 — Upstream Watchpoint Ingestion (MANDATORY)** per `SHARED_PROMPT_PATTERNS.md` §"Recent Daily Knowledge-Base Synthesis": -- Ingest forward indicators from the last **3 days** of `realtime-*` sibling runs + the prior `evening-analysis` -- Build the Watchpoint Reconciliation table in `methodology-reflection.md` (no silent drops) - -**Reference exemplars**: [`analysis/daily/2026-04-18/weekly-review/`](../../analysis/daily/2026-04-18/weekly-review/) and [`analysis/daily/2026-04-19/month-ahead/`](../../analysis/daily/2026-04-19/month-ahead/) - -**Verification — run BEFORE proceeding to article generation:** -```bash -ANALYSIS_DIR="analysis/daily/$ARTICLE_DATE/evening-analysis" -MISSING=0 -# 9 core artefacts (min 500 bytes each) -for REQUIRED_FILE in synthesis-summary.md swot-analysis.md risk-assessment.md threat-analysis.md 
classification-results.md significance-scoring.md stakeholder-perspectives.md cross-reference-map.md data-download-manifest.md; do - if [ ! -f "$ANALYSIS_DIR/$REQUIRED_FILE" ]; then - echo "🔴 MISSING REQUIRED: $REQUIRED_FILE — MUST CREATE NOW" - MISSING=$((MISSING + 1)) - else - wc -c < "$ANALYSIS_DIR/$REQUIRED_FILE" > /tmp/fsize.txt - read FSIZE < /tmp/fsize.txt - if [ "$FSIZE" -lt 500 ]; then - echo "🔴 TOO SMALL: $REQUIRED_FILE ($FSIZE bytes) — MUST ENRICH" - MISSING=$((MISSING + 1)) - else - echo "✅ OK: $REQUIRED_FILE ($FSIZE bytes)" - fi - fi -done -# 5 Tier-C reference-grade artefacts (aggregation requirement) -# Compute evening-analysis thresholds from the shared period-scope sizing model -# (see SHARED_PROMPT_PATTERNS.md §Period-Scope Multipliers) instead of hard-coding derived values. -PERIOD_SCOPE_MULT_NUM=9 # 0.9× for evening-analysis -PERIOD_SCOPE_MULT_DEN=10 -declare -A BASE_TIER_C_MIN=( ["README.md"]=3000 ["executive-brief.md"]=3500 ["scenario-analysis.md"]=4000 ["comparative-international.md"]=4000 ["methodology-reflection.md"]=4000 ) -for REQUIRED_FILE in README.md executive-brief.md scenario-analysis.md comparative-international.md methodology-reflection.md; do - BASE_MIN=${BASE_TIER_C_MIN[$REQUIRED_FILE]} - MIN=$(( BASE_MIN * PERIOD_SCOPE_MULT_NUM / PERIOD_SCOPE_MULT_DEN )) - if [ ! -f "$ANALYSIS_DIR/$REQUIRED_FILE" ]; then - echo "🔴 MISSING Tier-C: $REQUIRED_FILE — aggregation workflow MUST CREATE" - MISSING=$((MISSING + 1)) - else - # AWF-safe: no $(...) command substitution — use tempfile + read redirection, then clean up. 
- wc -c < "$ANALYSIS_DIR/$REQUIRED_FILE" | tr -d ' ' > /tmp/fsize-$$.txt - read FSIZE < /tmp/fsize-$$.txt - rm -f /tmp/fsize-$$.txt - if [ "$FSIZE" -lt "$MIN" ]; then - echo "🔴 UNDERSIZED Tier-C: $REQUIRED_FILE ($FSIZE < $MIN — base $BASE_MIN × $PERIOD_SCOPE_MULT_NUM/$PERIOD_SCOPE_MULT_DEN) — MUST ENRICH" - MISSING=$((MISSING + 1)) - else - echo "✅ OK Tier-C: $REQUIRED_FILE ($FSIZE bytes)" - fi - fi -done -if [ "$MISSING" -gt 0 ]; then - echo "🚨 $MISSING of 14 required artifacts missing or too small — DO NOT proceed to article generation" - echo "Go back and create/enrich the missing files following their templates." -fi -``` - -> **If ANY of the 9 files are missing**: Create them NOW. Read the corresponding template, read the downloaded data and sibling analysis, and write a complete analysis file with Mermaid diagrams, evidence tables, and confidence labels. Do NOT proceed to article generation with incomplete analysis. - -#### B5. MANDATORY Quality Gate — Run Before Proceeding - -> 🚨 **BLOCKING**: Do NOT proceed to article generation or commit until this quality gate passes. If it fails, go back and fix analysis files. - -> Run the quality gate bash. See `SHARED_PROMPT_PATTERNS.md` §"Step 5b: MANDATORY Quality Gate" for the complete bash script. Fix ALL failures before proceeding. - -> **If the quality gate FAILS**: Go back and rewrite the failing files. Read the template again (`view analysis/templates/<template>.md`), then rewrite the file to match it. Do NOT proceed until all checks pass. - -### 🔴 MANDATORY: Batch Analysis Enrichment (Prevents Empty "0 Documents Analyzed" Files) - -If `synthesis-summary.md` reports "0 documents analyzed" but per-doc analyses exist in `documents/`, aggregate findings into all 9 batch files. If NO per-doc analyses exist, use MCP tools directly. See `ai-driven-analysis-guide.md` §"Deep-Inspection Batch Analysis Enrichment Protocol (v4.1)". 
**NEVER commit batch files reporting "0 documents analyzed".** - -### 🚨 MANDATORY: Analysis Artifacts Must ALWAYS Be Committed - -**Before deciding whether to generate articles or call noop, you MUST:** - -1. **Review the analysis artifacts** in `analysis/daily/YYYY-MM-DD/` and per-file `-analysis.md` files — read `synthesis-summary.md` and significance scores to understand what was found -2. **Summarize the analysis findings** — note how many documents were downloaded, their significance scores, key themes, and risk levels -3. **ALWAYS commit analysis artifacts** regardless of whether articles will be generated: - -```bash -[ -f /tmp/hhmm.env ] && . /tmp/hhmm.env -if [ -z "$ARTICLE_DATE" ]; then - date -u +%Y-%m-%d > /tmp/today.txt - read ARTICLE_DATE < /tmp/today.txt -fi -ANALYSIS_DIR="analysis/daily/$ARTICLE_DATE/evening-analysis" -find "$ANALYSIS_DIR" -type f 2>/dev/null | wc -l > /tmp/analysis_count.txt -read ANALYSIS_COUNT < /tmp/analysis_count.txt -echo "Analysis artifacts: $ANALYSIS_COUNT files in $ANALYSIS_DIR" -``` - -> **🚨 CRITICAL RULE: Never call `safeoutputs___noop` if analysis artifacts exist.** If the analysis produced ANY output files (per-file `-analysis.md` or daily synthesis), you MUST commit them via `safeoutputs___create_pull_request` — even if no articles are generated. Use an analysis-only PR with title: `📊 Analysis Only - Evening Analysis - {date}` and label `analysis-only`. Only use `safeoutputs___noop` if NO analysis output was generated. 
- -## Step 2: Gather Parliamentary Data - -**Check elapsed time before proceeding:** -```bash -source /tmp/gh-aw/agent/timing.env 2>/dev/null || true -if [ -z "$START_TIME" ]; then - echo "⚠️ WARNING: START_TIME not set — timing unreliable" - date +%s > /tmp/start_time.txt - read START_TIME < /tmp/start_time.txt -fi -date +%s > /tmp/now_ts.txt -read AW_NOW_TS < /tmp/now_ts.txt -ELAPSED=$(( (AW_NOW_TS - START_TIME) / 60 )) -echo "Elapsed: $ELAPSED minutes" -if [ "$ELAPSED" -ge 35 ]; then - echo "⚠️ TIME CRITICAL: Skip data gathering, call safe output NOW" -fi -``` - -Replace `<today>` with today's `YYYY-MM-DD`, `<rm>` with the calculated riksmöte value, and `<fromDate>` with the lookback start date. - -**Saturday** (weekly wrap-up, 5-day lookback): -``` -get_calendar_events({ from: "<fromDate>", tom: "<today>", limit: 100 }) -search_voteringar({ rm: "<rm>", limit: 100 }) -get_betankanden({ rm: "<rm>", limit: 50 }) -search_anforanden({ rm: "<rm>", limit: 100 }) -search_regering({ dateFrom: "<fromDate>", dateTo: "<today>", limit: 50 }) -get_propositioner({ rm: "<rm>", limit: 20 }) -get_motioner({ rm: "<rm>", limit: 50 }) -get_fragor({ rm: "<rm>", limit: 50 }) -get_interpellationer({ rm: "<rm>", limit: 20 }) -get_calendar_events({ from: "<nextMonday>", tom: "<nextFriday>", limit: 50 }) -``` - -**Weekday** (daily, lookback_hours): -``` -get_calendar_events({ from: "<fromDate>", tom: "<today>", limit: 50 }) -search_voteringar({ rm: "<rm>", limit: 50 }) -get_betankanden({ rm: "<rm>", limit: 20 }) -search_anforanden({ rm: "<rm>", limit: 50 }) -search_regering({ dateFrom: "<fromDate>", dateTo: "<today>", limit: 30 }) -get_propositioner({ rm: "<rm>", limit: 10 }) -get_motioner({ rm: "<rm>", limit: 20 }) -get_calendar_events({ from: "<tomorrow>", tom: "<tomorrow>", limit: 50 }) -``` - -**Filter results by date** — apply post-query date filtering as described in Step 1. 
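The post-query date filtering referenced above can be sketched with `jq`, assuming cached responses expose a top-level `dokument` array with ISO `datum` fields (the sample payload and `/tmp` paths below are illustrative, not the confirmed API shape):

```bash
# ISO YYYY-MM-DD strings order correctly lexicographically, so a string range check suffices.
printf '%s' '{"dokument":[{"dok_id":"HC01A","datum":"2026-04-20"},{"dok_id":"HC01B","datum":"2026-03-01"}]}' > /tmp/docs-sample.json
FROM_DATE="2026-04-15"
TO_DATE="2026-04-21"
jq --arg from "$FROM_DATE" --arg to "$TO_DATE" \
  '[.dokument[] | select(.datum >= $from and .datum <= $to)]' \
  /tmp/docs-sample.json > /tmp/docs-filtered.json
jq 'length' /tmp/docs-filtered.json   # → 1 (only the 2026-04-20 document survives)
```

The same `select` pattern works for `publicerad`/`inlämnad` fields on tools without native date parameters.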
- -**Statistical enrichment (optional):** For economic policy topics, use World Bank and SCB MCP servers as context. **144 World Bank indicators available** — `view analysis/worldbank/indicators-inventory.json` to discover indicators matching the day's policy topics (each indicator has `policyAreas`, `committees`, and `mcpTool` fields). Fetch top 3 most relevant using MCP tools for indicators with `mcpTool` field. See `SHARED_PROMPT_PATTERNS.md` §"WORLD BANK ECONOMIC CONTEXT INTEGRATION" for chart templates. Never block on SCB/World Bank failures. - -**If ALL queries return empty results** (no votes, no speeches, no reports, no government activity): -1. **First check if analysis artifacts exist** in `analysis/daily/YYYY-MM-DD/$ANALYSIS_SUBFOLDER/` -2. If analysis artifacts exist: commit them with `git add "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER/" && git commit -m "📊 Analysis artifacts - Evening Analysis - {date}"` and call `safeoutputs___create_pull_request` with title `📊 Analysis Only - Evening Analysis - {date}`, labels `["analysis-only", "evening-analysis"]` -3. If NO analysis artifacts exist: call `safeoutputs___noop({"message": "No significant parliamentary activity found for today's evening analysis. Pre-article analysis pipeline also produced no output."})` and stop. - -### 🔬 Step 2b: Read ALL Analysis Files + Cross-Reference Sibling Types (MANDATORY) - -> 🔴 **NON-NEGOTIABLE**: Evening analysis synthesizes the ENTIRE day's parliamentary activity. The AI MUST read ALL analysis files from ALL article types before generating the evening article. See SHARED_PROMPT_PATTERNS.md §"MANDATORY PRE-ARTICLE ANALYSIS READING". - -```bash -ANALYSIS_SUBFOLDER="evening-analysis" -ANALYSIS_BASE="analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER" - -# Step 1: Read own analysis -echo "📖 Reading ALL analysis files from $ANALYSIS_BASE..." 
-if [ -d "$ANALYSIS_BASE" ]; then - for MD_FILE in "$ANALYSIS_BASE"/*.md; do - if [ -f "$MD_FILE" ]; then - echo "--- Reading: $MD_FILE ---" - cat "$MD_FILE" - echo "" - fi - done - if [ -d "$ANALYSIS_BASE/documents" ]; then - for DOC_FILE in "$ANALYSIS_BASE/documents"/*.md; do - if [ -f "$DOC_FILE" ]; then - echo "--- Per-doc: $DOC_FILE ---" - cat "$DOC_FILE" - echo "" - fi - done - fi -fi - -# Step 2: Cross-reference ALL sibling analysis types for the same date -echo "🔍 Cross-referencing sibling analysis types for $ARTICLE_DATE..." -for SIBLING_DIR in analysis/daily/$ARTICLE_DATE/*/; do - if [ -d "$SIBLING_DIR" ]; then - echo "$SIBLING_DIR" | sed 's|/$||' | sed 's|.*/||' > /tmp/sibling_type.txt - read SIBLING_TYPE < /tmp/sibling_type.txt - if [ "$SIBLING_TYPE" = "$ANALYSIS_SUBFOLDER" ]; then continue; fi - echo "📖 Cross-referencing: $SIBLING_TYPE" - for SIBLING_FILE in "$SIBLING_DIR/synthesis-summary.md" "$SIBLING_DIR/significance-scoring.md" "$SIBLING_DIR/stakeholder-perspectives.md"; do - if [ -f "$SIBLING_FILE" ]; then - echo "--- Sibling ($SIBLING_TYPE): $SIBLING_FILE ---" - cat "$SIBLING_FILE" - echo "" - fi - done - fi -done - -find "analysis/daily/$ARTICLE_DATE" -name "*.md" -type f 2>/dev/null | wc -l > /tmp/total_files.txt -read TOTAL_FILES < /tmp/total_files.txt -echo "✅ Read $TOTAL_FILES total analysis files across all types — evening article MUST synthesize these findings" -``` - -> **After reading, confirm synthesis by noting**: (1) total files read, (2) which sibling types were found, (3) the day's top 3 most significant findings across ALL types. The evening article MUST reflect findings from ALL sibling types, not just its own analysis. 
- -## Step 3: Generate Articles - -### Saturday — Use Generation Script - -On Saturday, use the `weekly-review` article type which IS supported by the script (defined in `scripts/generate-news-enhanced/config.ts:VALID_ARTICLE_TYPES`): - -```bash -LANGUAGES_INPUT="${{ github.event.inputs.languages }}" -[ -z "$LANGUAGES_INPUT" ] && LANGUAGES_INPUT="all" - -case "$LANGUAGES_INPUT" in - "nordic") LANG_ARG="en,sv,da,no,fi" ;; - "eu-core") LANG_ARG="en,sv,de,fr,es,nl" ;; - "all") LANG_ARG="en,sv,da,no,fi,de,fr,es,nl,ar,he,ja,ko,zh" ;; - *) LANG_ARG="$LANGUAGES_INPUT" ;; -esac - -source scripts/mcp-setup.sh && npx tsx scripts/generate-news-enhanced.ts \ - --types=weekly-review \ - --languages="$LANG_ARG" \ - --skip-existing -SCRIPT_EXIT=$? -``` - -### Weekday — Manual Evening Analysis - -The `evening-analysis` article type is NOT in the script's `VALID_ARTICLE_TYPES` (see `scripts/generate-news-enhanced/config.ts`). Evening analysis requires **analytical synthesis** across multiple data sources which the template-based script cannot provide. Generate articles manually using MCP data gathered in Step 2. - -**Determine target languages from input:** -```bash -LANGUAGES_INPUT="${{ github.event.inputs.languages }}" -[ -z "$LANGUAGES_INPUT" ] && LANGUAGES_INPUT="en,sv" - -case "$LANGUAGES_INPUT" in - "nordic") LANG_ARG="en,sv,da,no,fi" ;; - "eu-core") LANG_ARG="en,sv,de,fr,es,nl" ;; - "all") LANG_ARG="en,sv,da,no,fi,de,fr,es,nl,ar,he,ja,ko,zh" ;; - *) LANG_ARG="$LANGUAGES_INPUT" ;; -esac -echo "Target languages: $LANG_ARG" -``` - -**Process ONE language at a time** (en first, then sv, then any remaining): - -For each language in the resolved `LANG_ARG` list: -1. Check elapsed time — if >= 35 minutes, stop and proceed to Step 5 -2. Create `news/YYYY-MM-DD-evening-analysis-{lang}.html` -3. Use `<link rel="stylesheet" href="../styles.css">` — NO embedded `<style>` tags -4. Include language switcher, article-top-nav, Schema.org NewsArticle, hreflang tags -5. 
Use `dir="rtl"` for Arabic (ar) and Hebrew (he) -6. Include proper `<html lang="{lang}">` attribute - -> 🚫 **NEVER use bash heredoc (`cat > file << 'EOF'`) to write article HTML.** Heredoc truncates large content and causes silent failures. -> -> ✅ **Build the file incrementally** with multiple small `printf` appends (no heredoc, no size limits): -> ```bash -> FILE="news/YYYY-MM-DD-evening-analysis-en.html" -> printf '%s\n' '<!DOCTYPE html>' > "$FILE" -> printf '%s\n' '<html lang="en">' >> "$FILE" -> printf '%s\n' '<head><link rel="stylesheet" href="../styles.css"></head>' >> "$FILE" -> printf '%s\n' '<body>' >> "$FILE" -> # ... append each section separately ... -> printf '%s\n' '</body></html>' >> "$FILE" -> ``` - -**Article structure:** -1. **Lead Story** — Most significant development, why it matters -2. **Parliamentary Pulse** — Key votes, debates, committee decisions -3. **Government Watch** — Propositions, ministerial statements -4. **Opposition Dynamics** — Cross-party analysis -5. **Looking Ahead** — What's coming tomorrow - -**After all languages or time cutoff:** -```bash -date +%Y-%m-%d > /tmp/today.txt -read TODAY < /tmp/today.txt -git status --porcelain -- news/ | awk '{print $2}' | grep "$TODAY-" > /tmp/new-articles.txt || true -wc -l < /tmp/new-articles.txt > /tmp/new-articles-count.txt -read ARTICLE_COUNT < /tmp/new-articles-count.txt -echo "Generated: $ARTICLE_COUNT articles" -``` - ---- - -## Step 2.6: Economic Data Acquisition (MANDATORY) - -> **Contract**: [`.github/aw/ECONOMIC_DATA_CONTRACT.md`](../aw/ECONOMIC_DATA_CONTRACT.md) — the **single source of truth** for World Bank + SCB data, Chart.js visualisations, and AI commentary. Follow it exactly; the Step 6 quality gate (`scripts/validate-economic-context.ts`) **blocks the PR** if any element is missing. - -**What you MUST do before writing any prose:** - -1. 
`view analysis/worldbank/indicators-inventory.json` and pick every indicator whose `committees` / `policyAreas` match the day's source documents. -2. Call `world-bank.get-economic-data` / `get-social-data` / `get-health-data` / `get-education-data` for Sweden (10-year series for primary domains) and for DK/NO/FI/DE (5-year series for the top 3 indicators — needed for the Nordic comparison bars and radar). -3. Call `scb.search_tables` + `scb.query_table` using the committee → TAB mapping in `scripts/scb-context.ts`. **`language` MUST be `"sv"` or `"en"` — NEVER `"no"`** (SCB returns HTTP 400 "Unsupported language"). -4. Retry every World Bank call up to **3 times** on failure. Cache raw responses under `analysis/data/worldbank/<YYYY>/<indicator>-<country>.json` so later article types in the same daily run reuse the data. -5. Write `analysis/daily/<ARTICLE_DATE>/<ANALYSIS_SUBFOLDER>/economic-data.json` matching `analysis/schemas/economic-data.schema.json`: - -```jsonc -{ - "version": "1.0", - "articleType": "evening-analysis", - "date": "<YYYY-MM-DD>", - "policyDomains": ["fiscal policy", "labor market"], - "dataPoints": [ - { "countryCode": "SWE", "countryName": "Sweden", "indicatorId": "NY.GDP.MKTP.KD.ZG", "date": "2024", "value": 0.82 }, - { "countryCode": "DNK", "countryName": "Denmark", "indicatorId": "NY.GDP.MKTP.KD.ZG", "date": "2024", "value": 1.75 } - ], - "commentary": "<will be filled in Step 3d>", - "source": { "worldBank": ["NY.GDP.MKTP.KD.ZG", "FP.CPI.TOTL.ZG"], "scb": ["TAB1291"] } -} -``` - -**Non-negotiable**: `dataPoints` MUST be non-empty. The HTML renderer (`scripts/data-transformers/content-generators/economic-dashboard-section.ts`) emits real Chart.js canvases only when the file exists with entries — otherwise the validator fails the PR. - -**Minimum coverage (enforced by the validator):** see the matrix in `ECONOMIC_DATA_CONTRACT.md` §"Coverage matrix" for this article type's chart count, commentary word minimum, and D3 requirement. 
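The 3-attempt retry rule in step 4 can be factored into a small wrapper; a sketch only — the commented MCP invocation is a placeholder, not a confirmed CLI signature:

```bash
retry3() {
  # Run "$@" up to 3 times, pausing briefly between attempts; return 0 on first success.
  ATTEMPT=1
  while [ "$ATTEMPT" -le 3 ]; do
    if "$@"; then
      return 0
    fi
    echo "⚠️ attempt $ATTEMPT failed: $*"
    ATTEMPT=$((ATTEMPT + 1))
    sleep 2
  done
  return 1
}

# Placeholder usage — substitute the real World Bank call via scripts/mcp-query-cli.ts:
# retry3 npx tsx scripts/mcp-query-cli.ts get-economic-data --indicator NY.GDP.MKTP.KD.ZG --country SWE
retry3 true  && echo "first attempt succeeded"
retry3 false || echo "gave up after 3 attempts"
```

Caching the successful response under `analysis/data/worldbank/<YYYY>/` then happens once, outside the retry loop.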
- ---- -## Step 3b: AI Title, Meta Description & Analysis References (v5.0 — Analysis-Driven) - -> 🚨 **MANDATORY** — See `SHARED_PROMPT_PATTERNS.md` §"AI-DRIVEN TITLE & META DESCRIPTION GENERATION". Evening analysis synthesizes ALL article types. Read synthesis-summary.md from all sibling folders (`committeeReports/`, `propositions/`, `interpellations/`, `motions/`, `realtime-*/`). Use `ls analysis/daily/$ARTICLE_DATE/` to discover them. Title: `[Active Verb] + [Specific Actor/Institution] + [Policy Action]`. BANNED: ❌ "Evening Analysis: Daily Summary" or titles ending ": {Topic} in Focus". Meta description 150-160 chars, not starting with "Analysis of N documents". Update `<title>`, `<meta description>`, og:title/description, `<h1>`, Schema.org headline in ALL language files. - -**🔴 Add analysis references section (MANDATORY — VERIFY AFTER)** — Insert the "📊 Analysis & Sources" HTML block (from SHARED_PROMPT_PATTERNS.md §ANALYSIS FILE GITHUB REFERENCES) before the article footer, linking to ALL 9 required analysis files: -- `analysis/daily/$ARTICLE_DATE/evening-analysis/synthesis-summary.md` -- `analysis/daily/$ARTICLE_DATE/evening-analysis/swot-analysis.md` -- `analysis/daily/$ARTICLE_DATE/evening-analysis/risk-assessment.md` -- `analysis/daily/$ARTICLE_DATE/evening-analysis/threat-analysis.md` -- `analysis/daily/$ARTICLE_DATE/evening-analysis/stakeholder-perspectives.md` -- `analysis/daily/$ARTICLE_DATE/evening-analysis/significance-scoring.md` -- `analysis/daily/$ARTICLE_DATE/evening-analysis/classification-results.md` -- `analysis/daily/$ARTICLE_DATE/evening-analysis/cross-reference-map.md` -- `analysis/daily/$ARTICLE_DATE/evening-analysis/data-download-manifest.md` -- `analysis/methodologies/ai-driven-analysis-guide.md` -- Per-document analyses in `documents/` subfolder - -**VERIFY** analysis-references inserted by running: -```bash -for FILE in news/$ARTICLE_DATE-evening-analysis-*.html; do - if [ -f "$FILE" ] && ! 
grep -q 'class="analysis-references"' "$FILE"; then - echo "🔴 MISSING analysis-references in: $FILE — MUST FIX NOW" - fi -done -``` - -## Step 4: Translate & Validate - -Check for untranslated Swedish content in non-Swedish articles: -```bash -UNTRANSLATED=0 -for article in news/*-{en,da,no,fi,de,fr,es,nl,ar,he,ja,ko,zh}.html; do - if [ -f "$article" ] && grep -q 'data-translate="true"' "$article"; then - echo "NEEDS TRANSLATION: $article" - UNTRANSLATED=$((UNTRANSLATED + 1)) - fi -done -``` - -**Translation rules:** Translate all Swedish text. Keep party names (S, M, SD, V, MP, C, L, KD) and personal names untranslated. Zero language mixing. - -Then run analysis references fix and validation: -```bash -# 🔴 MANDATORY: Inject analysis references into any article missing them -npx tsx scripts/fix-analysis-references.ts --date "$ARTICLE_DATE" --rewrite --type evening-analysis - -bash scripts/validate-news-generation.sh -VALIDATION_EXIT=$? -if [ "$VALIDATION_EXIT" -ne 0 ]; then - echo "Validation issues found — fix what you can, proceed if time allows" -fi - -# HTMLHint validation with auto-fix -find news -maxdepth 1 -name '*-*.html' 2>/dev/null | wc -l > /tmp/news_count.txt -read NEWS_FILES < /tmp/news_count.txt -if [ "$NEWS_FILES" -gt 0 ]; then - if ! npx htmlhint "news/*-*.html" 2>/dev/null; then - echo "⚠️ HTML validation errors, attempting auto-fix..." - npx tsx scripts/article-quality-enhancer.ts --fix - npx htmlhint "news/*-*.html" 2>/dev/null || echo "⚠️ Some HTML issues remain" - fi -fi -``` - -## MANDATORY Quality Validation - -After article generation, verify EACH article meets these minimum standards before committing. -Apply the quality rubric from **`scripts/prompts/v2/quality-criteria.md`** (minimum score: 7/10). 
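The 7/10 quality gate above can be sketched as a loop over the generated articles. `score_article` is a hypothetical stand-in (the real scoring applies the rubric in `scripts/prompts/v2/quality-criteria.md`, not a script); it returns canned values here so the gate logic itself is testable:

```shell
#!/bin/sh
# Sketch of the minimum-score gate: every article must score at least 7/10.
# score_article is a hypothetical scorer returning canned values; the real
# rubric lives in scripts/prompts/v2/quality-criteria.md.
score_article() {
  case "$1" in
    *-en.html) echo 8 ;;
    *) echo 6 ;;
  esac
}

FAILED=0
for ARTICLE in news/2026-04-21-evening-analysis-en.html \
               news/2026-04-21-evening-analysis-sv.html; do
  SCORE=$(score_article "$ARTICLE")
  if [ "$SCORE" -lt 7 ]; then
    echo "BELOW_MINIMUM: $ARTICLE ($SCORE/10)"
    FAILED=$((FAILED + 1))
  fi
done
echo "FAILED=$FAILED"
```

Any article that scores below the minimum is reworked before staging, not committed with a note.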
- -### Playwright Visual Validation -Run Playwright validation before creating the PR: -```bash -# HTMLHint validation -npx htmlhint "news/*-evening-analysis-*.html" - -# Playwright visual validation (accessibility, RTL, responsive) -npx tsx scripts/validate-articles-playwright.ts --filter "evening-analysis" - -# Validate JSON-LD cross-references -npx tsx scripts/validate-cross-references.ts news/*-evening-analysis-*.html -``` - -## 🛡️ File Ownership Contract - -Content workflows: only create/modify **EN and SV** files (`news/YYYY-MM-DD-*-en.html`, `*-sv.html`). Validate with `npx tsx scripts/validate-file-ownership.ts content`. Fix violations: `git restore --staged --worktree -- <file>` (tracked) or `rm <file>` (untracked). - -### Branch Naming Convention - -Branch: `news/content/{YYYY-MM-DD}/evening-analysis`. `safeoutputs___create_pull_request` handles this automatically. - -## Step 5: Commit & Create PR - -### HOW SAFE PR CREATION WORKS - -> `safeoutputs___create_pull_request` handles branch creation, push, and PR opening — do NOT run `git push` or `git checkout -b` manually. Stage files, then call the tool directly. +# 🌆 Evening Analysis -- ✅ `safeoutputs___create_pull_request` for articles or analysis-only PRs (`analysis-only` + `evening-analysis` labels) -- ✅ `safeoutputs___noop` ONLY if MCP unreachable after 5 attempts AND no analysis artifacts exist -- ❌ NEVER noop because articles already exist — analysis always runs -- ❌ Safe output tools are in your tool list — NEVER search for them via bash +Daily evening synthesis aggregating every article-type produced today. Tier-C reference-grade output (14 artifacts). Core languages EN, SV. -```bash -# Stage articles and analysis — DATE-SCOPED to stay within safe-outputs 100-file PR limit. -# 🚨 Broad globs like `news/*evening-analysis*.html news/*evening*.html` would match every -# historical evening-analysis article in the archive. 
Any prior script that modified those -# files (Playwright validation, auto-fix, translation pass) would then cause `git add` to -# stage hundreds of files → E003 (>100 files). Scope EVERY stage to $ARTICLE_DATE. -[ -z "$ARTICLE_DATE" ] && { date -u +%Y-%m-%d > /tmp/today.txt; read ARTICLE_DATE < /tmp/today.txt; } -[ -z "$ANALYSIS_SUBFOLDER" ] && ANALYSIS_SUBFOLDER="evening-analysis" -# Stage ONLY today's EN + SV evening articles (translations run via news-translate) -git add "news/$ARTICLE_DATE"-*evening-analysis*-en.html "news/$ARTICLE_DATE"-*evening-analysis*-sv.html 2>/dev/null || true -git add "news/$ARTICLE_DATE"-*evening*-en.html "news/$ARTICLE_DATE"-*evening*-sv.html 2>/dev/null || true -git add news/metadata/ 2>/dev/null || true -git add "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER/" || true -# 🚫 DO NOT stage analysis/data/ — it is an MCP response cache. Committing it caused E003 -# in news-motions run 24653843681 (PR #1867). -git reset HEAD -- analysis/data/ 2>/dev/null || true -# 🛡️ Defensive filter: unstage any news/ files that do NOT match $ARTICLE_DATE. 
-git diff --cached --name-only > /tmp/staged_files.txt -awk -v today="$ARTICLE_DATE" '$0 ~ "^news/[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]" && $0 !~ today {print}' /tmp/staged_files.txt > /tmp/historical_news.txt -if [ -s /tmp/historical_news.txt ]; then - HIST_COUNT=0 - awk 'END{print NR}' /tmp/historical_news.txt > /tmp/hist_count.txt - read HIST_COUNT < /tmp/hist_count.txt 2>/dev/null || true - echo "⚠️ Unstaging $HIST_COUNT historical news/ files that do not match $ARTICLE_DATE" - xargs -a /tmp/historical_news.txt git reset HEAD -- 2>/dev/null || true -fi -# Enforce safe-outputs 100-file PR limit -git diff --cached --name-only > /tmp/staged_files.txt -awk 'END{print NR}' /tmp/staged_files.txt > /tmp/staged_count.txt -STAGED_COUNT=0 -read STAGED_COUNT < /tmp/staged_count.txt 2>/dev/null || true -echo "📊 Staged file count: $STAGED_COUNT (limit: 100)" -if [ "$STAGED_COUNT" -gt 90 ]; then - echo "⚠️ $STAGED_COUNT files exceeds safe threshold. Removing weekly analysis." - git reset HEAD -- analysis/weekly/ 2>/dev/null || true - git diff --cached --name-only > /tmp/staged_files.txt - awk 'END{print NR}' /tmp/staged_files.txt > /tmp/staged_count.txt - read STAGED_COUNT < /tmp/staged_count.txt 2>/dev/null || true -fi -if [ "$STAGED_COUNT" -gt 90 ]; then - echo "⚠️ Still $STAGED_COUNT files. Removing news/metadata/." 
- git reset HEAD -- news/metadata/ 2>/dev/null || true - git diff --cached --name-only > /tmp/staged_files.txt - awk 'END{print NR}' /tmp/staged_files.txt > /tmp/staged_count.txt - read STAGED_COUNT < /tmp/staged_count.txt 2>/dev/null || true -fi -echo "📊 Final staged file count: $STAGED_COUNT" -git commit -m "🌆 Evening Analysis - $ARTICLE_DATE" -``` +## Tier-C (reference-grade) requirements -Then **immediately** call (as a direct tool call, NOT via bash): -``` -safeoutputs___create_pull_request({ - "title": "🌆 Evening Analysis - {date}", - "body": "## Evening Analysis\n\nArticles: {count}\nLanguages: {list}\nCoverage: {depth}\nSource: riksdag-regering-mcp", - "labels": ["automated-news", "evening-analysis", "needs-editorial-review"] -}) -``` +This workflow imports `../prompts/ext/tier-c-aggregation.md`. Produce **all 14 artifacts** (9 core + 5 Tier-C) and cross-reference sibling analyses. See the extension for the full rules. -## 🌐 MANDATORY Translation Quality Rules +## What this workflow does -> See `SHARED_PROMPT_PATTERNS.md` §"Translation Quality Rules" for full per-language requirements. Key: ALL headings + body in target language, no `data-translate="true"` spans, RTL for ar/he, CJK native script, use `CONTENT_LABELS[lang]` for section headings. Run `npx tsx scripts/validate-news-translations.ts` and fix before committing. +- **Article type**: `evening-analysis` +- **Analysis subfolder**: `analysis/daily/$ARTICLE_DATE/evening-analysis/` +- **Core languages produced**: `en`, `sv` (remaining 12 languages dispatched to `news-translate`) +- **One pull request per run** containing analysis + articles + visualisation data. 
-## Error Handling +## Time budget (60 min, minimum 45 min of real work) -| Scenario | Cause | Fix | -|----------|-------|-----| -| Tool not found | MCP server not initialized | Run `source scripts/mcp-setup.sh && echo "MCP_SERVER_URL=$MCP_SERVER_URL"` — source and npx MUST be chained with `&&` on one line; expected output: `MCP_SERVER_URL=http://host.docker.internal:8080/mcp/riksdag-regering` (port resolved dynamically — `8080` for gh-aw v0.69+, `80` for legacy gh-aw <0.69) | -| Empty results | No parliamentary activity for the queried date range | Check if analysis artifacts exist in `analysis/daily/` — if yes, commit them and create analysis-only PR; if no, call `safeoutputs___noop` | -| Timeout | MCP server response exceeds `timeout-minutes` | Commit any analysis artifacts produced so far, then call safe output | -| Stale data | `hoursSinceSync > 48` from `get_sync_status()` | Add disclaimer noting data staleness; proceed with cached data | -| Too broad results | Query returns excessive data without date filtering | Add explicit `from_date`/`to_date` parameters to narrow scope | +| Minutes | Phase | Module | +|---------|-------|--------| +| 0–2 | MCP pre-warm + `get_sync_status` | 02 | +| 2–6 | Download data + catalogue | 03 | +| 6–25 | Analysis Pass 1 (methodology read + per-doc analyses + 9 artifacts) | 04 | +| 25–35 | Analysis Pass 2 (read-back + improvements) | 04 | +| 35–37 | Analysis Gate | 05 | +| 37–48 | Article Pass 1 + Pass 2 (EN, SV) | 06 | +| 48–55 | Visual + link validation | 06 | +| 55–60 | Stage, commit, **ONE** `safeoutputs___create_pull_request` | 07 | -## 🚨 CRITICAL FINAL REMINDER +Trim scope before quality. Never open a second PR to "save" partial work — there is no second PR. -**YOU MUST call exactly one safe output tool before exiting.** This is the single most important rule of this workflow. 
+## Inputs -**Analysis artifacts MUST always be committed.** Before calling any safe output tool, check if `analysis/daily/YYYY-MM-DD/$ANALYSIS_SUBFOLDER/` (for the current `ARTICLE_DATE`) contains files. If it does, commit only that directory with `git add "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER/"` and include it in the PR or create an analysis-only PR. +- `article_date` — override date (defaults to today) +- `force_generation` — regenerate even if today's article exists (analysis is always refreshed regardless) +- `languages` — core content languages (default `en,sv`) +- `analysis_depth` — `standard` | `deep` (default) | `comprehensive` -- If you generated articles → `safeoutputs___create_pull_request({...})` (includes analysis artifacts) -- If no articles but analysis artifacts exist → `git add "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER/" && git commit -m "📊 Analysis artifacts - Evening Analysis - {date}"` then `safeoutputs___create_pull_request({"title": "📊 Analysis Only - Evening Analysis - {date}", "body": "## Analysis Only\n\nNo articles generated but analysis artifacts committed for review.\n\nDocuments analyzed: {count}\nKey findings: {summary from synthesis-summary.md}", "labels": ["analysis-only", "evening-analysis"]})` -- If MCP server unreachable (no analysis produced) → `safeoutputs___noop({"message": "MCP server unavailable. 
No articles or analysis generated."})` -- If MCP data unavailable → `safeoutputs___missing_data({"reason": "MCP returned no usable data for evening analysis."})` -- If any error occurs → commit any analysis artifacts first, then `safeoutputs___noop({"message": "Error during evening analysis: <brief description>"})` +## Dedup & analysis-only path -**Failing to call a safe output tool = automatic workflow failure and a bug report.** +If articles for `$ARTICLE_DATE` + `evening-analysis` already exist **and** `force_generation=false`: -🎯 **Now begin: Check date/day-of-week, warm up MCP with `get_sync_status()`, run pre-article analysis pipeline, review analysis results, gather parliamentary data, generate analysis articles, and call a safe output tool.** +- Still run the full analysis pipeline (modules 03 → 04 → 05). +- Commit the analysis. +- Open the single PR with title `📊 Analysis Only — Evening Analysis — $ARTICLE_DATE` and label `analysis-only`. -## Step 3d: Economic Commentary (MANDATORY) +Analysis is the primary product — a run never "does nothing" just because articles exist. -> After Step 3c and **before** calling `safeoutputs.create_pull_request`, re-open `economic-data.json` and replace the placeholder `commentary` string with a 2–4 sentence paragraph that: -> - cites **2–3 concrete numeric values** from `dataPoints`; -> - ties the numbers to the day's political developments (not definitions of indicators); -> - is written in plain English (translations are produced downstream by `news-translate`); -> - meets the minimum word count in the coverage matrix for this article type. -> -> Banned phrasings (the multi-dim quality score flags these): "The political landscape remains fluid…", "Touches on X policy…", pure indicator definitions. -> -> Full rules: [`.github/aw/ECONOMIC_DATA_CONTRACT.md`](../aw/ECONOMIC_DATA_CONTRACT.md) §"Writing the AI commentary — workflow Step 3d". 
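A minimal self-check for Step 3d can be sketched as follows; it flags the file while `commentary` still carries the Step 3a placeholder. The JSON content is a canned example, and the grep pattern assumes the placeholder text shown in Step 3a:

```shell
#!/bin/sh
# Sketch of a Step 3d guard: refuse to proceed while the commentary field
# still holds the Step 3a placeholder. The JSON below is a canned example.
DATA_FILE="${TMPDIR:-/tmp}/economic-data.json"
cat > "$DATA_FILE" <<'EOF'
{
  "commentary": "GDP growth of 0.82% in Sweden trails Denmark's 1.75%, sharpening today's fiscal-policy debate."
}
EOF

if grep -q 'will be filled in Step 3d' "$DATA_FILE"; then
  STATUS="PLACEHOLDER_PRESENT"
else
  STATUS="COMMENTARY_OK"
fi
echo "$STATUS"
```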
+All other rules (bash format, AWF shell safety, MCP access, download pipeline, analysis methodology & gate, article generation, commit & PR policy) live in the imported modules. diff --git a/.github/workflows/news-interpellations.lock.yml b/.github/workflows/news-interpellations.lock.yml index cff263c99..0797fcb7a 100644 --- a/.github/workflows/news-interpellations.lock.yml +++ b/.github/workflows/news-interpellations.lock.yml @@ -1,4 +1,4 @@ -# gh-aw-metadata: {"schema_version":"v3","frontmatter_hash":"d3f47398c78c8221c5d75eee4a0217a785a3925c93058e9142f54cd9026898e7","compiler_version":"v0.69.2","agent_id":"copilot","agent_model":"claude-opus-4.7"} +# gh-aw-metadata: {"schema_version":"v3","frontmatter_hash":"d1b5eeff6d85f73e5e2f3a27c545b9db288de3dd45c2ff7fb2b07a66cf60bc33","compiler_version":"v0.69.2","agent_id":"copilot","agent_model":"claude-opus-4.7"} # gh-aw-manifest: {"version":1,"secrets":["COPILOT_GITHUB_TOKEN","GH_AW_CI_TRIGGER_TOKEN","GH_AW_GITHUB_MCP_SERVER_TOKEN","GH_AW_GITHUB_TOKEN","GITHUB_TOKEN"],"actions":[{"repo":"actions/checkout","sha":"de0fac2e4500dabe0009e67214ff5f5447ce83dd","version":"v6.0.2"},{"repo":"actions/download-artifact","sha":"3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c","version":"v8.0.1"},{"repo":"actions/github-script","sha":"373c709c69115d41ff229c7e5df9f8788daa9553","version":"v9"},{"repo":"actions/setup-node","sha":"6044e13b5dc448c55e2357c09f80417699197238","version":"6044e13b5dc448c55e2357c09f80417699197238"},{"repo":"actions/upload-artifact","sha":"043fb46d1a93c77aae656e7c1c64a875d1fc6a0a","version":"v7.0.1"},{"repo":"github/gh-aw-actions/setup","sha":"dca90cae5e2ec0ef2275f97efcb832793c86e082","version":"v0.69.2"}],"containers":[{"image":"alpine:latest"},{"image":"ghcr.io/github/gh-aw-firewall/agent:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/api-proxy:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/squid:0.25.26"},{"image":"ghcr.io/github/gh-aw-mcpg:v0.2.26"},{"image":"ghcr.io/github/github-mcp-server:v1.0.0"},{"image":
"node:25-alpine"},{"image":"node:lts-alpine"}]} # ___ _ _ # / _ \ | | (_) @@ -24,6 +24,17 @@ # # Generates interpellation debates analysis articles in core languages (EN, SV). Translations for remaining 12 languages are handled by the dedicated news-translate workflow via dispatch-workflow. Single article type per run. # +# Resolved workflow manifest: +# Imports: +# - ../prompts/00-base-contract.md +# - ../prompts/01-bash-and-shell-safety.md +# - ../prompts/02-mcp-access.md +# - ../prompts/03-data-download.md +# - ../prompts/04-analysis-pipeline.md +# - ../prompts/05-analysis-gate.md +# - ../prompts/06-article-generation.md +# - ../prompts/07-commit-and-pr.md +# # Secrets used: # - COPILOT_GITHUB_TOKEN # - GH_AW_CI_TRIGGER_TOKEN @@ -184,14 +195,9 @@ jobs: env: GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt GH_AW_SAFE_OUTPUTS: ${{ runner.temp }}/gh-aw/safeoutputs/outputs.jsonl - GH_AW_EXPR_731DE217: ${{ github.event.inputs.force_generation || 'false' }} GH_AW_GITHUB_ACTOR: ${{ github.actor }} GH_AW_GITHUB_EVENT_COMMENT_ID: ${{ github.event.comment.id }} GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER: ${{ github.event.discussion.number }} - GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH: ${{ github.event.inputs.analysis_depth }} - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE: ${{ github.event.inputs.article_date }} - GH_AW_GITHUB_EVENT_INPUTS_FORCE_GENERATION: ${{ github.event.inputs.force_generation }} - GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES: ${{ github.event.inputs.languages }} GH_AW_GITHUB_EVENT_ISSUE_NUMBER: ${{ github.event.issue.number }} GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: ${{ github.event.pull_request.number }} GH_AW_GITHUB_REPOSITORY: ${{ github.repository }} @@ -202,21 +208,21 @@ jobs: run: | bash "${RUNNER_TEMP}/gh-aw/actions/create_prompt_first.sh" { - cat << 'GH_AW_PROMPT_d72123cf32cdb1e4_EOF' + cat << 'GH_AW_PROMPT_56ceca6b6602419b_EOF' <system> - GH_AW_PROMPT_d72123cf32cdb1e4_EOF + GH_AW_PROMPT_56ceca6b6602419b_EOF cat "${RUNNER_TEMP}/gh-aw/prompts/xpia.md" cat 
"${RUNNER_TEMP}/gh-aw/prompts/temp_folder_prompt.md" cat "${RUNNER_TEMP}/gh-aw/prompts/markdown.md" cat "${RUNNER_TEMP}/gh-aw/prompts/agentic_workflows_guide.md" cat "${RUNNER_TEMP}/gh-aw/prompts/repo_memory_prompt.md" cat "${RUNNER_TEMP}/gh-aw/prompts/safe_outputs_prompt.md" - cat << 'GH_AW_PROMPT_d72123cf32cdb1e4_EOF' + cat << 'GH_AW_PROMPT_56ceca6b6602419b_EOF' <safe-output-tools> - Tools: add_comment, create_pull_request(max:2), dispatch_workflow, missing_tool, missing_data, noop - GH_AW_PROMPT_d72123cf32cdb1e4_EOF + Tools: add_comment, create_pull_request, dispatch_workflow, missing_tool, missing_data, noop + GH_AW_PROMPT_56ceca6b6602419b_EOF cat "${RUNNER_TEMP}/gh-aw/prompts/safe_outputs_create_pull_request.md" - cat << 'GH_AW_PROMPT_d72123cf32cdb1e4_EOF' + cat << 'GH_AW_PROMPT_56ceca6b6602419b_EOF' </safe-output-tools> <github-context> The following GitHub context information is available for this workflow: @@ -246,22 +252,25 @@ jobs: {{/if}} </github-context> - GH_AW_PROMPT_d72123cf32cdb1e4_EOF + GH_AW_PROMPT_56ceca6b6602419b_EOF cat "${RUNNER_TEMP}/gh-aw/prompts/github_mcp_tools_with_safeoutputs_prompt.md" - cat << 'GH_AW_PROMPT_d72123cf32cdb1e4_EOF' + cat << 'GH_AW_PROMPT_56ceca6b6602419b_EOF' </system> + {{#runtime-import .github/prompts/00-base-contract.md}} + {{#runtime-import .github/prompts/01-bash-and-shell-safety.md}} + {{#runtime-import .github/prompts/02-mcp-access.md}} + {{#runtime-import .github/prompts/03-data-download.md}} + {{#runtime-import .github/prompts/04-analysis-pipeline.md}} + {{#runtime-import .github/prompts/05-analysis-gate.md}} + {{#runtime-import .github/prompts/06-article-generation.md}} + {{#runtime-import .github/prompts/07-commit-and-pr.md}} {{#runtime-import .github/workflows/news-interpellations.md}} - GH_AW_PROMPT_d72123cf32cdb1e4_EOF + GH_AW_PROMPT_56ceca6b6602419b_EOF } > "$GH_AW_PROMPT" - name: Interpolate variables and render templates uses: actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9 env: 
GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt - GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH: ${{ github.event.inputs.analysis_depth }} - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE: ${{ github.event.inputs.article_date }} - GH_AW_EXPR_731DE217: ${{ github.event.inputs.force_generation || 'false' }} - GH_AW_GITHUB_EVENT_INPUTS_FORCE_GENERATION: ${{ github.event.inputs.force_generation }} - GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES: ${{ github.event.inputs.languages }} with: script: | const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); @@ -272,14 +281,9 @@ jobs: uses: actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9 env: GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt - GH_AW_EXPR_731DE217: ${{ github.event.inputs.force_generation || 'false' }} GH_AW_GITHUB_ACTOR: ${{ github.actor }} GH_AW_GITHUB_EVENT_COMMENT_ID: ${{ github.event.comment.id }} GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER: ${{ github.event.discussion.number }} - GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH: ${{ github.event.inputs.analysis_depth }} - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE: ${{ github.event.inputs.article_date }} - GH_AW_GITHUB_EVENT_INPUTS_FORCE_GENERATION: ${{ github.event.inputs.force_generation }} - GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES: ${{ github.event.inputs.languages }} GH_AW_GITHUB_EVENT_ISSUE_NUMBER: ${{ github.event.issue.number }} GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: ${{ github.event.pull_request.number }} GH_AW_GITHUB_REPOSITORY: ${{ github.repository }} @@ -302,14 +306,9 @@ jobs: return await substitutePlaceholders({ file: process.env.GH_AW_PROMPT, substitutions: { - GH_AW_EXPR_731DE217: process.env.GH_AW_EXPR_731DE217, GH_AW_GITHUB_ACTOR: process.env.GH_AW_GITHUB_ACTOR, GH_AW_GITHUB_EVENT_COMMENT_ID: process.env.GH_AW_GITHUB_EVENT_COMMENT_ID, GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER: process.env.GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER, - GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH: process.env.GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH, - 
GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE: process.env.GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE, - GH_AW_GITHUB_EVENT_INPUTS_FORCE_GENERATION: process.env.GH_AW_GITHUB_EVENT_INPUTS_FORCE_GENERATION, - GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES: process.env.GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES, GH_AW_GITHUB_EVENT_ISSUE_NUMBER: process.env.GH_AW_GITHUB_EVENT_ISSUE_NUMBER, GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: process.env.GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER, GH_AW_GITHUB_REPOSITORY: process.env.GH_AW_GITHUB_REPOSITORY, @@ -411,7 +410,7 @@ jobs: run: | npm ci --prefer-offline --no-audit - name: Pre-warm MCP server (Render.com cold start mitigation) - run: "echo \"🔥 Pre-warming riksdag-regering MCP server via MCP protocol...\"\nMCP_URL=\"https://riksdag-regering-ai.onrender.com/mcp\"\nWARM=false\nfor i in 1 2 3 4 5 6; do\n RESP=$(curl -sf --max-time 30 -X POST \\\n -H \"Content-Type: application/json\" \\\n -d '{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\",\"params\":{}}' \\\n \"$MCP_URL\" 2>/dev/null) || true\n if echo \"$RESP\" | grep -q '\"tools\"'; then\n TOOL_COUNT=$(echo \"$RESP\" | grep -o '\"name\"' | wc -l)\n echo \"✅ MCP server responded on attempt $i with $TOOL_COUNT tools registered\"\n WARM=true\n break\n fi\n echo \"⏳ Attempt $i/6 — server may be cold-starting, waiting 20s...\"\n sleep 20\ndone\nif [ \"$WARM\" = \"false\" ]; then\n echo \"⚠️ MCP server did not respond after 6 attempts — agent will retry via in-prompt health gate\"\nfi\necho \"🔄 Starting background keep-alive pinger (every 30s, max 55 min — covers full 60-min workflow through safe-output PR creation)...\"\nKEEP_ALIVE_START=$(date +%s)\nKEEP_ALIVE_END=$((KEEP_ALIVE_START + 3300))\nexport MCP_URL KEEP_ALIVE_END\nnohup bash -c '\n while :; do\n NOW=$(date +%s)\n if [ \"$NOW\" -ge \"$KEEP_ALIVE_END\" ]; then\n break\n fi\n curl -sf --max-time 10 -X POST \\\n -H \"Content-Type: application/json\" \\\n -d 
\"{\\\"jsonrpc\\\":\\\"2.0\\\",\\\"id\\\":1,\\\"method\\\":\\\"tools/list\\\",\\\"params\\\":{}}\" \\\n \"$MCP_URL\" -o /dev/null 2>/dev/null || true\n sleep 30\n done\n' </dev/null >/tmp/mcp-keepalive.log 2>&1 &\nKEEP_ALIVE_PID=$!\ndisown \"$KEEP_ALIVE_PID\" 2>/dev/null || true\necho \"Keep-alive PID: $KEEP_ALIVE_PID (auto-exits after 55 min; log: /tmp/mcp-keepalive.log)\"\n" + run: "echo \"🔥 Pre-warming riksdag-regering MCP server via MCP protocol...\"\nMCP_URL=\"https://riksdag-regering-ai.onrender.com/mcp\"\nWARM=false\nfor i in 1 2 3 4 5 6; do\n RESP=$(curl -sf --max-time 30 -X POST \\\n -H \"Content-Type: application/json\" \\\n -d '{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\",\"params\":{}}' \\\n \"$MCP_URL\" 2>/dev/null) || true\n if echo \"$RESP\" | grep -q '\"tools\"'; then\n TOOL_COUNT=$(echo \"$RESP\" | grep -o '\"name\"' | wc -l)\n echo \"✅ MCP server responded on attempt $i with $TOOL_COUNT tools registered\"\n WARM=true\n break\n fi\n echo \"⏳ Attempt $i/6 — server may be cold-starting, waiting 20s...\"\n sleep 20\ndone\nif [ \"$WARM\" = \"false\" ]; then\n echo \"⚠️ MCP server did not respond after 6 attempts — agent will retry via in-prompt health gate\"\nfi\n" - name: Pre-flight external endpoint reachability check (runs before MCP Gateway) run: "echo \"🔍 Network Diagnostics — $(date -u '+%Y-%m-%dT%H:%M:%SZ')\"\necho \"═══════════════════════════════════════════\"\necho \"\"\necho \"📡 DNS Resolution Tests:\"\nfor domain in riksdag-regering-ai.onrender.com api.scb.se api.worldbank.org data.riksdagen.se www.riksdagen.se www.regeringen.se; do\n if nslookup \"$domain\" >/dev/null 2>&1; then\n IP=$(nslookup \"$domain\" 2>/dev/null | grep -A1 \"Name:\" | grep \"Address:\" | head -1 | awk '{print $2}')\n echo \" ✅ $domain → $IP\"\n else\n echo \" ❌ $domain — DNS FAILED\"\n fi\ndone\necho \"\"\necho \"🌐 HTTPS Connectivity Tests:\"\nfor url in \\\n \"https://riksdag-regering-ai.onrender.com/mcp\" \\\n \"https://api.scb.se/OV0104/v2beta\" \\\n 
\"https://api.worldbank.org/v2/country/SE?format=json\" \\\n \"https://data.riksdagen.se/dokumentlista/?sok=test&doktyp=bet&utformat=json&a=1\" \\\n; do\n HTTP_CODE=$(curl -s -o /dev/null -w \"%{http_code}\" --max-time 10 \"$url\" 2>/dev/null || echo \"000\")\n DOMAIN=$(echo \"$url\" | sed 's|https://||' | cut -d/ -f1)\n if [ \"$HTTP_CODE\" -ge 200 ] && [ \"$HTTP_CODE\" -lt 400 ]; then\n echo \" ✅ $DOMAIN → HTTP $HTTP_CODE\"\n elif [ \"$HTTP_CODE\" = \"000\" ]; then\n echo \" ❌ $DOMAIN → TIMEOUT/UNREACHABLE\"\n else\n echo \" ⚠️ $DOMAIN → HTTP $HTTP_CODE\"\n fi\ndone\necho \"\"\necho \"🔌 MCP Server Tool Count:\"\nTOOL_RESP=$(curl -sf --max-time 15 -X POST \\\n -H \"Content-Type: application/json\" \\\n -d '{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\",\"params\":{}}' \\\n \"https://riksdag-regering-ai.onrender.com/mcp\" 2>/dev/null) || TOOL_RESP=\"\"\nif echo \"$TOOL_RESP\" | grep -q '\"tools\"'; then\n TOOL_COUNT=$(echo \"$TOOL_RESP\" | grep -o '\"name\"' | wc -l)\n echo \" ✅ riksdag-regering MCP: $TOOL_COUNT tools registered\"\nelse\n echo \" ❌ riksdag-regering MCP: No tools response (server may still be starting)\"\nfi\necho \"\"\necho \"═══════════════════════════════════════════\"\n" @@ -499,16 +498,16 @@ jobs: mkdir -p "${RUNNER_TEMP}/gh-aw/safeoutputs" mkdir -p /tmp/gh-aw/safeoutputs mkdir -p /tmp/gh-aw/mcp-logs/safeoutputs - cat > "${RUNNER_TEMP}/gh-aw/safeoutputs/config.json" << 'GH_AW_SAFE_OUTPUTS_CONFIG_1b5d9e27052f039a_EOF' - 
{"add_comment":{"max":1},"create_pull_request":{"draft":false,"expires":336,"labels":["agentic-news","analysis-data"],"max":2,"max_patch_size":4096,"protected_files":["package.json","bun.lockb","bunfig.toml","deno.json","deno.jsonc","deno.lock","global.json","NuGet.Config","Directory.Packages.props","mix.exs","mix.lock","go.mod","go.sum","stack.yaml","stack.yaml.lock","pom.xml","build.gradle","build.gradle.kts","settings.gradle","settings.gradle.kts","gradle.properties","package-lock.json","yarn.lock","pnpm-lock.yaml","npm-shrinkwrap.json","requirements.txt","Pipfile","Pipfile.lock","pyproject.toml","setup.py","setup.cfg","Gemfile","Gemfile.lock","uv.lock","CODEOWNERS","AGENTS.md","CLAUDE.md","GEMINI.md"],"protected_path_prefixes":[".github/",".agents/"]},"create_report_incomplete_issue":{},"dispatch_workflow":{"max":1,"workflow_files":{"news-translate":".lock.yml"},"workflows":["news-translate"]},"missing_data":{},"missing_tool":{},"noop":{"max":1,"report-as-issue":"true"},"push_repo_memory":{"memories":[{"dir":"/tmp/gh-aw/repo-memory/default","id":"default","max_file_count":50,"max_file_size":51200,"max_patch_size":51200}]},"report_incomplete":{}} - GH_AW_SAFE_OUTPUTS_CONFIG_1b5d9e27052f039a_EOF + cat > "${RUNNER_TEMP}/gh-aw/safeoutputs/config.json" << 'GH_AW_SAFE_OUTPUTS_CONFIG_591ccf46f42b73cb_EOF' + 
{"add_comment":{"max":1},"create_pull_request":{"draft":false,"expires":336,"labels":["agentic-news","analysis-data"],"max":1,"max_patch_size":4096,"protected_files":["package.json","bun.lockb","bunfig.toml","deno.json","deno.jsonc","deno.lock","global.json","NuGet.Config","Directory.Packages.props","mix.exs","mix.lock","go.mod","go.sum","stack.yaml","stack.yaml.lock","pom.xml","build.gradle","build.gradle.kts","settings.gradle","settings.gradle.kts","gradle.properties","package-lock.json","yarn.lock","pnpm-lock.yaml","npm-shrinkwrap.json","requirements.txt","Pipfile","Pipfile.lock","pyproject.toml","setup.py","setup.cfg","Gemfile","Gemfile.lock","uv.lock","CODEOWNERS","AGENTS.md","CLAUDE.md","GEMINI.md"],"protected_path_prefixes":[".github/",".agents/"]},"create_report_incomplete_issue":{},"dispatch_workflow":{"max":1,"workflow_files":{"news-translate":".lock.yml"},"workflows":["news-translate"]},"missing_data":{},"missing_tool":{},"noop":{"max":1,"report-as-issue":"true"},"push_repo_memory":{"memories":[{"dir":"/tmp/gh-aw/repo-memory/default","id":"default","max_file_count":50,"max_file_size":51200,"max_patch_size":51200}]},"report_incomplete":{}} + GH_AW_SAFE_OUTPUTS_CONFIG_591ccf46f42b73cb_EOF - name: Write Safe Outputs Tools env: GH_AW_TOOLS_META_JSON: | { "description_suffixes": { "add_comment": " CONSTRAINTS: Maximum 1 comment(s) can be added. Supports reply_to_id for discussion threading.", - "create_pull_request": " CONSTRAINTS: Maximum 2 pull request(s) can be created. Labels [\"agentic-news\" \"analysis-data\"] will be automatically added." + "create_pull_request": " CONSTRAINTS: Maximum 1 pull request(s) can be created. Labels [\"agentic-news\" \"analysis-data\"] will be automatically added." 
}, "repo_params": {}, "dynamic_tools": [ @@ -767,7 +766,7 @@ jobs: mkdir -p /home/runner/.copilot GH_AW_NODE=$(which node 2>/dev/null || command -v node 2>/dev/null || echo node) - cat << GH_AW_MCP_CONFIG_aa8fb174d91d0289_EOF | "$GH_AW_NODE" "${RUNNER_TEMP}/gh-aw/actions/start_mcp_gateway.cjs" + cat << GH_AW_MCP_CONFIG_58fe3c2ad85e2bbc_EOF | "$GH_AW_NODE" "${RUNNER_TEMP}/gh-aw/actions/start_mcp_gateway.cjs" { "mcpServers": { "agenticworkflows": { @@ -883,7 +882,7 @@ jobs: "payloadDir": "${MCP_GATEWAY_PAYLOAD_DIR}" } } - GH_AW_MCP_CONFIG_aa8fb174d91d0289_EOF + GH_AW_MCP_CONFIG_58fe3c2ad85e2bbc_EOF - name: Download activation artifact uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1 with: @@ -1570,7 +1569,7 @@ jobs: GH_AW_ALLOWED_DOMAINS: "*.githubusercontent.com,api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.imf.org,api.individual.githubcopilot.com,api.npms.io,api.scb.se,api.snapcraft.io,api.worldbank.org,archive.ubuntu.com,azure.archive.ubuntu.com,bun.sh,cdn.jsdelivr.net,codeload.github.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,data.imf.org,data.riksdagen.se,deb.nodesource.com,deno.land,docs.github.com,esm.sh,get.pnpm.io,github-cloud.githubusercontent.com,github-cloud.s3.amazonaws.com,github.blog,github.com,github.githubassets.com,googleapis.deno.dev,googlechromelabs.github.io,hack23.com,hack23.github.io,host.docker.internal,json-schema.org,json.schemastore.org,jsr.io,keyserver.ubuntu.com,lfs.github.com,localhost,nodejs.org,npm.pkg.github.com,npmjs.com,npmjs.org,objects.githubusercontent.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubuserconten
t.com,regeringen.se,registry.bower.io,registry.npmjs.com,registry.npmjs.org,registry.yarnpkg.com,repo.yarnpkg.com,riksdag-regering-ai.onrender.com,riksdagen.se,riksdagsmonitor.com,s.symcb.com,s.symcd.com,security.ubuntu.com,skimdb.npmjs.com,storage.googleapis.com,telemetry.enterprise.githubcopilot.com,telemetry.vercel.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com,www.googleapis.com,www.hack23.com,www.imf.org,www.npmjs.com,www.npmjs.org,www.regeringen.se,www.riksdagen.se,www.riksdagsmonitor.com,www.scb.se,yarnpkg.com" GITHUB_SERVER_URL: ${{ github.server_url }} GITHUB_API_URL: ${{ github.api_url }} - GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: "{\"add_comment\":{\"max\":1},\"create_pull_request\":{\"draft\":false,\"expires\":336,\"labels\":[\"agentic-news\",\"analysis-data\"],\"max\":2,\"max_patch_size\":4096,\"protected_files\":[\"package.json\",\"bun.lockb\",\"bunfig.toml\",\"deno.json\",\"deno.jsonc\",\"deno.lock\",\"global.json\",\"NuGet.Config\",\"Directory.Packages.props\",\"mix.exs\",\"mix.lock\",\"go.mod\",\"go.sum\",\"stack.yaml\",\"stack.yaml.lock\",\"pom.xml\",\"build.gradle\",\"build.gradle.kts\",\"settings.gradle\",\"settings.gradle.kts\",\"gradle.properties\",\"package-lock.json\",\"yarn.lock\",\"pnpm-lock.yaml\",\"npm-shrinkwrap.json\",\"requirements.txt\",\"Pipfile\",\"Pipfile.lock\",\"pyproject.toml\",\"setup.py\",\"setup.cfg\",\"Gemfile\",\"Gemfile.lock\",\"uv.lock\",\"CODEOWNERS\",\"AGENTS.md\",\"CLAUDE.md\",\"GEMINI.md\"],\"protected_path_prefixes\":[\".github/\",\".agents/\"]},\"create_report_incomplete_issue\":{},\"dispatch_workflow\":{\"max\":1,\"workflow_files\":{\"news-translate\":\".lock.yml\"},\"workflows\":[\"news-translate\"]},\"missing_data\":{},\"missing_tool\":{},\"noop\":{\"max\":1,\"report-as-issue\":\"true\"},\"report_incomplete\":{}}" + GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: 
"{\"add_comment\":{\"max\":1},\"create_pull_request\":{\"draft\":false,\"expires\":336,\"labels\":[\"agentic-news\",\"analysis-data\"],\"max\":1,\"max_patch_size\":4096,\"protected_files\":[\"package.json\",\"bun.lockb\",\"bunfig.toml\",\"deno.json\",\"deno.jsonc\",\"deno.lock\",\"global.json\",\"NuGet.Config\",\"Directory.Packages.props\",\"mix.exs\",\"mix.lock\",\"go.mod\",\"go.sum\",\"stack.yaml\",\"stack.yaml.lock\",\"pom.xml\",\"build.gradle\",\"build.gradle.kts\",\"settings.gradle\",\"settings.gradle.kts\",\"gradle.properties\",\"package-lock.json\",\"yarn.lock\",\"pnpm-lock.yaml\",\"npm-shrinkwrap.json\",\"requirements.txt\",\"Pipfile\",\"Pipfile.lock\",\"pyproject.toml\",\"setup.py\",\"setup.cfg\",\"Gemfile\",\"Gemfile.lock\",\"uv.lock\",\"CODEOWNERS\",\"AGENTS.md\",\"CLAUDE.md\",\"GEMINI.md\"],\"protected_path_prefixes\":[\".github/\",\".agents/\"]},\"create_report_incomplete_issue\":{},\"dispatch_workflow\":{\"max\":1,\"workflow_files\":{\"news-translate\":\".lock.yml\"},\"workflows\":[\"news-translate\"]},\"missing_data\":{},\"missing_tool\":{},\"noop\":{\"max\":1,\"report-as-issue\":\"true\"},\"report_incomplete\":{}}" GH_AW_CI_TRIGGER_TOKEN: ${{ secrets.GH_AW_CI_TRIGGER_TOKEN }} with: github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }} diff --git a/.github/workflows/news-interpellations.md b/.github/workflows/news-interpellations.md index ca207b8a8..b5799d07c 100644 --- a/.github/workflows/news-interpellations.md +++ b/.github/workflows/news-interpellations.md @@ -2,6 +2,15 @@ name: "News: Interpellation Debates" description: Generates interpellation debates analysis articles in core languages (EN, SV). Translations for remaining 12 languages are handled by the dedicated news-translate workflow via dispatch-workflow. Single article type per run. 
strict: false +imports: + - ../prompts/00-base-contract.md + - ../prompts/01-bash-and-shell-safety.md + - ../prompts/02-mcp-access.md + - ../prompts/03-data-download.md + - ../prompts/04-analysis-pipeline.md + - ../prompts/05-analysis-gate.md + - ../prompts/06-article-generation.md + - ../prompts/07-commit-and-pr.md on: schedule: daily around 7:00 on weekdays workflow_dispatch: @@ -119,7 +128,7 @@ safe-outputs: labels: [agentic-news, analysis-data] draft: false expires: 14d - max: 2 + max: 1 add-comment: {} dispatch-workflow: workflows: [news-translate] @@ -157,26 +166,6 @@ steps: if [ "$WARM" = "false" ]; then echo "⚠️ MCP server did not respond after 6 attempts — agent will retry via in-prompt health gate" fi - echo "🔄 Starting background keep-alive pinger (every 30s, max 55 min — covers full 60-min workflow through safe-output PR creation)..." - KEEP_ALIVE_START=$(date +%s) - KEEP_ALIVE_END=$((KEEP_ALIVE_START + 3300)) - export MCP_URL KEEP_ALIVE_END - nohup bash -c ' - while :; do - NOW=$(date +%s) - if [ "$NOW" -ge "$KEEP_ALIVE_END" ]; then - break - fi - curl -sf --max-time 10 -X POST \ - -H "Content-Type: application/json" \ - -d "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\",\"params\":{}}" \ - "$MCP_URL" -o /dev/null 2>/dev/null || true - sleep 30 - done - ' </dev/null >/tmp/mcp-keepalive.log 2>&1 & - KEEP_ALIVE_PID=$! - disown "$KEEP_ALIVE_PID" 2>/dev/null || true - echo "Keep-alive PID: $KEEP_ALIVE_PID (auto-exits after 55 min; log: /tmp/mcp-keepalive.log)" - name: Pre-flight external endpoint reachability check (runs before MCP Gateway) run: | @@ -230,704 +219,47 @@ engine: model: claude-opus-4.7 --- -# 🔔 Interpellation Debates Article Generator - -You are the **News Journalist Agent** for Riksdagsmonitor generating **interpellation debates** analysis articles. 
- -## 🔴 CRITICAL: AI Writes ALL Content with Iterative Improvement (v5.0) - -> **You are a political intelligence analyst, NOT a script executor.** Your PRIMARY job is to produce excellent quality political intelligence through iterative improvement. You MUST: -> 1. **ANALYZE** parliamentary data deeply — SWOT, stakeholder perspectives, risk assessment, election implications -> 2. **WRITE** genuine political intelligence articles with specific actors, evidence citations, and analytical insight -> 3. **USE** the script (`generate-news-enhanced.ts`) ONLY for HTML formatting — the script creates a shell, YOU fill it with analysis -> 4. **REPLACE** every `AI_MUST_REPLACE` marker with real analysis — ZERO markers may remain -> 5. **ITERATE** — read ALL your output back completely and IMPROVE every section (minimum 2 full passes) -> 6. **VERIFY** article quality: minimum 1000 words, SWOT analysis, stakeholder perspectives, dok_id citations -> 7. **SPEND THE FULL TIME** — use at least 40 of the 45 allocated minutes doing real work -> -> 🔴 **ITERATIVE IMPROVEMENT IS MANDATORY (2+ passes):** -> - **Analysis Pass 1** (15 min): Create analysis for every document following templates -> - **Analysis Pass 2** (7 min): Read ALL analysis back, improve evidence, diagrams, cross-references -> - **Article Pass 1** (10 min): Generate articles with AI-written content from analysis -> - **Article Pass 2** (8 min): Read ALL articles back completely, improve every section -> - **NEVER complete early** — if you finish ahead, use remaining time to deepen analysis -> -> **If the final article reads like a list of document titles with generic descriptions, you have FAILED.** Rewrite with genuine political analysis before committing. - - -## 🚨🚨 MANDATORY: Safe Output Guarantee 🚨🚨 - -> **Every run MUST end with exactly one safe output call. There are NO exceptions.** - -Before doing ANYTHING else, internalize this absolute rule: - -1. 
**If you generate articles or analysis artifacts** → call `safeoutputs___create_pull_request` -2. **If MCP is unreachable AND no artifacts exist** → call `safeoutputs___noop` with a reason -3. **If you are running out of time** (approaching minute 40 of 45) → immediately stop all work and call `safeoutputs___create_pull_request` with whatever you have committed, OR call `safeoutputs___noop` explaining what happened -4. **NEVER let the workflow end without calling a safe output tool** — a run with zero safe outputs is treated as a failure and creates an error issue - -**Time guard**: If you have been running for more than 35 minutes without yet calling a safe output tool, STOP all other work immediately and produce a safe output with whatever progress you have made. - -## 🔧 Workflow Dispatch Parameters - -- **force_generation** = `${{ github.event.inputs.force_generation }}` -- **languages** = `${{ github.event.inputs.languages }}` -- **analysis_depth** = `${{ github.event.inputs.analysis_depth }}` - -If **force_generation** is `true`, generate articles even if recent ones exist. Use the **languages** value to determine which languages to generate. - -## 🚨 CRITICAL: Single Article Type Focus - -**This workflow generates ONLY `interpellations` articles.** Do not generate other article types. - -## 🧠 Repo Memory - -Uses `memory/news-generation` branch. START: read `memory/news-generation/last-run-news-interpellations.json` + `memory/news-generation/covered-documents/{YYYY-MM-DD}.json`. END: update both + `memory/news-generation/translation-status.json`. Skip already-covered dok_ids. - -## ⏱️ Time Budget (45 minutes) — ENFORCED Minimum 40 Minutes - -> 🔴 **SYSTEMIC ISSUE (PR #1794 audit, 2026-04-16)**: ALL news workflows completing in 13-22 min of 45-min allocation, producing shallow analysis. Agent MUST use at least 40 of 45 minutes. Completion < 40 min = insufficient iteration = REJECTED. 
- -```bash -date +%s > /tmp/start_time.txt -read START_TIME < /tmp/start_time.txt -``` - -- **Minutes 0–3**: Date check, MCP warm-up with `get_sync_status()` -- **Minutes 3–6**: Run download-parliamentary-data pipeline (download data) -- **Minutes 6–21**: 🚨 **AI Analysis Pass 1 (15 min minimum)**: Read ALL methodology guides, create per-file analysis for EVERY document with Mermaid diagrams, evidence tables, SWOT entries. -- **Minutes 21–22**: 🚨 **AI Analysis Pass 2 (Part A, start)**: Begin reading ALL analysis artifacts back and identify improvement targets. -- **Minutes 22–25**: 🫀 **Heartbeat PR** — `git add && git commit` analysis artifacts so far, then `safeoutputs___create_pull_request` (title `🫀 Heartbeat - Interpellations - {date}`). Refreshes the safeoutputs MCP session (idle timeout ~30–35 min) AND preserves work if later phases fail. Run `git checkout main` after the call so subsequent commits don't stack onto the frozen patch. -- **Minutes 25–28**: 🚨 **AI Analysis Pass 2 (Part B, complete — 6 min improvement work total across Parts A+B)**: Improve every section, replace ALL script stubs with AI analysis. Run enrichment verification gate. -- **Minutes 28–30**: Run ENFORCED Minimum Time Gate + Enrichment Verification Gate (SHARED_PROMPT_PATTERNS.md). Both MUST pass. -- **Minutes 30–36**: Generate articles for core languages (EN, SV) using `npx tsx scripts/generate-news-enhanced.ts` -- **Minutes 36–40**: 🚨 **Article Improvement Pass**: Read ALL articles back, replace AI_MUST_REPLACE markers, improve content. Run article quality component gate. -- **Minutes 40–43**: Validate, commit, create PR with `safeoutputs___create_pull_request` -- **Minutes 43–45**: 🚨 **HARD DEADLINE** — If no safe output yet: if ANY artifacts/files were created, IMMEDIATELY stage, commit, call `safeoutputs___create_pull_request` with partial work. ONLY call `safeoutputs___noop` if truly ZERO files were created. 
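The timer and hard-deadline bullets above can be combined into a single elapsed-time guard, run at each phase transition. This is an illustrative sketch, not a repository script — the 40-minute threshold mirrors the budget above, and it uses the temp-file-plus-`read` pattern (no command substitution) required by the AWF shell-safety rules:

```bash
# Illustrative elapsed-time guard (not a repository script).
# Written once at run start, exactly as shown above:
date +%s > /tmp/start_time.txt
read START_TIME < /tmp/start_time.txt
# ...then at each phase transition (AWF-safe: temp file + read, no $(...)):
date +%s > /tmp/now.txt
read NOW < /tmp/now.txt
ELAPSED_MIN=$(( (NOW - START_TIME) / 60 ))
echo "⏱️ Elapsed: $ELAPSED_MIN min"
if [ "$ELAPSED_MIN" -ge 40 ]; then
  echo "🚨 Hard deadline — stage, commit, and call safeoutputs___create_pull_request now"
fi
```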
- -> ⚠️ **Analysis phase is 22 minutes minimum (Pass 1: 15 min + Pass 2: 7 min)** — every analysis file must contain color-coded Mermaid diagrams, structured evidence tables with dok_id citations, and follow template structure exactly. ALL script-generated stubs MUST be replaced with AI-enriched analysis. Run the ENFORCED gates from SHARED_PROMPT_PATTERNS.md before proceeding to article generation. - -## ⚠️ CRITICAL: Bash Tool Call Format - -> **Full reference:** See `SHARED_PROMPT_PATTERNS.md` → "Bash Tool Call Format". Key rule: every `bash` call MUST have both `command` AND `description` parameters. Example: `bash({ command: "date -u '+%Y-%m-%d'", description: "Get current UTC date" })`. Calls missing either field fail with `Multiple validation errors: - "command": Required - "description": Required`. - -## 🛡️ AWF Shell Safety - -> **Full reference:** See `SHARED_PROMPT_PATTERNS.md` → "AWF Shell Safety". Summary: use `$VAR` not `$`+`{VAR}`, use `find -exec` not `$(...)`, set defaults with `if/then` before using `$VAR`. - -## 🔤 UTF-8 Encoding - -> **Full reference:** See `SHARED_PROMPT_PATTERNS.md` → "UTF-8 Encoding". Summary: use native UTF-8 (`ö`, `ä`, `å`) — NEVER HTML entities (`ö`, `ä`). Author: `James Pether Sörling`. - - -## 🚫 CRITICAL: Article Generation Safety - -**Articles MUST be generated using `npx tsx scripts/generate-news-enhanced.ts` — NEVER manually.** - -The repository provides a complete article generation pipeline. 
You MUST use it (see Generation Steps below for the full `LANG_ARG` derivation from the `languages` dispatch input; default is `en,sv`): -```bash -source scripts/mcp-setup.sh && npx tsx scripts/generate-news-enhanced.ts --types=interpellations --languages="$LANG_ARG" --skip-existing -``` - -**❌ NEVER do any of the following:** -- NEVER use `python3` or `python3 -c` to build HTML article files -- NEVER create `.py` scripts to generate articles (e.g., `build-en-article.py`) -- NEVER use bash heredoc (`cat > file << 'EOF'`) to write HTML files — it silently truncates large content -- NEVER manually construct HTML articles line-by-line with `echo`, `printf`, or any other method -- NEVER spend more than 5 minutes attempting to manually build article HTML - -**If `generate-news-enhanced.ts` fails or returns 0 articles:** -1. Check if MCP data was returned (retry MCP calls if needed) -2. Check if analysis artifacts exist in `analysis/daily/YYYY-MM-DD/` — if yes, commit them and create an analysis-only PR -3. If MCP server is unreachable AND no data was downloaded AND no analysis artifacts exist, use `safeoutputs___noop` — this is the ONLY valid noop scenario -4. 
Do NOT attempt to manually create articles as a fallback - -## Required Skills - -Consult as needed — do NOT read all files upfront: -- **Skills:** `.github/skills/editorial-standards/SKILL.md`, `.github/skills/swedish-political-system/SKILL.md`, `.github/skills/legislative-monitoring/SKILL.md`, `.github/skills/riksdag-regering-mcp/SKILL.md`, `.github/skills/language-expertise/SKILL.md`, `.github/skills/gh-aw-safe-outputs/SKILL.md` -- **Analysis:** `scripts/prompts/v2/political-analysis.md`, `per-file-intelligence-analysis.md`, `quality-criteria.md` -- **Methodology:** `analysis/methodologies/ai-driven-analysis-guide.md` (v5.0) + `analysis/templates/per-file-political-intelligence.md` - -## 📊 MANDATORY Multi-Step AI Analysis Framework - -### Article Type Isolation - -> 🚨 **This workflow writes analysis ONLY to `analysis/daily/$ARTICLE_DATE/interpellations/`**. NEVER write to the parent date directory or another article type's folder. See SHARED_PROMPT_PATTERNS.md "Article Type Isolation" section. - -### Standardised Analysis Depth Gate - -> ⚠️ **Default is `deep`** — not `standard`. See `SHARED_PROMPT_PATTERNS.md` §"Standardised Analysis Depth Gate" for the full requirements table (iterations, SWOT stakeholders, charts, Mermaid counts, risk matrix, forward indicators, min time). - -**The 8 mandatory stakeholder groups are**: Citizens, Government Coalition, Opposition Bloc, Business/Industry, Civil Society, International/EU, Judiciary/Constitutional, Media/Public Opinion. Every group MUST be analyzed with specific evidence (dok_id, vote counts, named politicians). - -> **Read `analysis_depth` input first** (default: `deep`). This controls iteration count and section requirements. - -Based on the editorial profile for `interpellations` (from `scripts/editorial-framework.ts`): -- **SWOT**: ALL 8 stakeholder groups — evidence tables with `#`, `Statement`, `Evidence (frs ID/dok_id)`, `Confidence`, `Impact`, `Entry Date` -- **Dashboard**: required (min. 
1 Chart.js chart); **Mindmap**: not required -- **Risk Matrix**: required — numeric L×I scores for ministerial accountability and policy implementation risks -- **Forward Indicators**: minister response timelines (4-week statutory deadline), committee scheduling triggers -- **Confidence Labels**: `[HIGH]`/`[MEDIUM]`/`[LOW]` on ALL claims -- **Mermaid**: ≥1 color-coded diagram (ministerial accountability flow or opposition attack patterns) -- **Dok_id/frs Citations**: MANDATORY — every interpellation MUST cite its frs ID (e.g., "frs 2025/26:634") -- **AI iterations**: 2 (standard), 2 (deep), or 3 (comprehensive) - -> 🚨 **ANTI-PATTERNS (REJECTED)**: 0 frs ID citations; SWOT with only 3 groups (need all 8); generic "Why It Matters" reused across entries; no Mermaid diagrams - -### 🗳️ Election 2026 Lens (Mandatory — v5.0) - -Every analysis MUST include an **Election 2026 Implications** section assessing: Electoral Impact, Coalition Scenarios, Voter Salience, Campaign Vulnerability, and Policy Legacy. Use the **5-level confidence scale** (⬛VERY LOW → 🟥LOW → 🟧MEDIUM → 🟩HIGH → 🟦VERY HIGH). See `analysis/methodologies/ai-driven-analysis-guide.md` v5.0 for full criteria. - -### Phase 1 — Data Collection & Initial Analysis -1. Fetch MCP data (`get_interpellationer`, `get_sync_status`, cross-reference `search_anforanden`, `get_calendar_events`) -2. Detect policy domains and group by target minister for accountability analysis -3. Build initial outline: lede, ministerial accountability section, thematic groupings - -### Phase 2 — Iterative Depth Enhancement (repeat per `analysis_depth`) -For each AI iteration: -1. **SWOT Analysis**: Generate multi-stakeholder SWOT with ALL 8 groups (Citizens, Government Coalition, Opposition Bloc, Business/Industry, Civil Society, International/EU, Judiciary/Constitutional, Media/Public Opinion). Use structured evidence tables with columns: `#`, `Statement`, `Evidence (frs ID/dok_id)`, `Confidence`, `Impact`, `Entry Date`. 
Every entry MUST cite specific interpellation frs ID, minister name, and policy area. -2. **Accountability Dashboard**: Include at least one chart-ready summary (interpellations by minister or party), formatted as a clear Markdown table or bullet list; do not assume automatic dashboard rendering unless a separate workflow step explicitly parses and renders it. -3. **Quality Gate** (check before next iteration): - - Verify ministerial accountability section names specific ministers and their policy areas - - Verify no identical "Why It Matters" text across entries — each must reference the specific minister and policy context - - Verify all Swedish API text is translated - - Verify word count ≥ 700 - - **Template check**: Article must use "Interpellation Debates" heading, NOT "Opposition Motions" — if wrong heading is present, regenerate the article content - - If failing any check: re-generate the failing section before proceeding - -### Phase 3 — Final Quality Gate Before PR -Run all validation checks from the **MANDATORY Quality Validation** section below before committing. - -## MANDATORY Date Validation - -```bash -echo "=== Date Validation Check ===" -date -u "+Current UTC: %A %Y-%m-%d %H:%M:%S" -echo "Article Type: interpellations" -echo "============================" -``` - -## 📅 Riksmöte (Parliamentary Session) Calculation - -September+ → `rm = "{year}/{year+1 2-digit}"` (e.g. Oct 2026 → `2026/27`). Before September → `rm = "{year-1}/{year 2-digit}"` (e.g. Feb 2026 → `2025/26`). Use in ALL MCP queries requiring `rm`. - -## MANDATORY Deduplication Check - -Before generating articles, check if articles already exist for the target date. 
**This check controls article GENERATION only — the deep political analysis phase ALWAYS runs regardless.** -```bash -# Resolve article date: use workflow_dispatch input when provided, fallback to UTC today -ARTICLE_DATE="${{ github.event.inputs.article_date }}" -if [ -z "$ARTICLE_DATE" ]; then - date -u +%Y-%m-%d > /tmp/today.txt - read ARTICLE_DATE < /tmp/today.txt -fi -ARTICLE_TYPE="interpellation-debates" -# Derive FORCE_GENERATION from the workflow_dispatch input -FORCE_GENERATION="${{ github.event.inputs.force_generation || 'false' }}" -ls news/$ARTICLE_DATE-$ARTICLE_TYPE-en.html 2>/dev/null | wc -l > /tmp/existing_count.txt -read EXISTING < /tmp/existing_count.txt -if [ "$EXISTING" -gt 0 ] && [ "$FORCE_GENERATION" != "true" ]; then - echo "📋 Articles for $ARTICLE_DATE/$ARTICLE_TYPE already exist — article generation will be skipped (analysis still runs)" - SKIP_ARTICLE_GENERATION=true - echo "SKIP_ARTICLE_GENERATION=true" >> "$GITHUB_ENV" -fi -# NOTE: Do NOT exit here or call safeoutputs___noop — analysis phase MUST still execute -# Later article-generation steps MUST gate on: if [ "$SKIP_ARTICLE_GENERATION" != "true" ]; then ... - -``` - -> **🚨 NEVER call `safeoutputs___noop` because articles already exist.** If articles exist, the workflow MUST still run the full 15-20 minute deep political analysis phase and commit analysis artifacts. The dedup check only controls whether NEW HTML articles are generated — analysis is the primary output and always runs. If analysis produces artifacts, use `safeoutputs___create_pull_request` with `analysis-only` label. - -## MANDATORY MCP Health Gate - -> **The step-level pre-warm (6 attempts × 20s) already mitigates Render.com cold starts.** This in-prompt gate is a lightweight verification — NOT a full retry loop. Do NOT spend more than 90 seconds here. -> -> **📖 Full MCP architecture, tool names, and calling conventions:** See `SHARED_PROMPT_PATTERNS.md` → "MCP Architecture & Tool Reference" section. 
Tool names are EXACT: riksdag tools use underscores (`get_sync_status`), World Bank uses hyphens (`get-economic-data`), SCB uses underscores (`search_tables`). - -1. Call `get_sync_status({})` — retry up to **3×** (20s wait between each, not 45s — the server is already warm from the step-level pre-warm) -2. If you get **"unknown tool"** or **"0 tools registered"** errors after 3 attempts, run a quick diagnostic: -```bash -echo "🔍 MCP Quick Diagnostic" -echo "Direct MCP server:" && curl -sf --max-time 15 -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}' "https://riksdag-regering-ai.onrender.com/mcp" 2>/dev/null | head -c 200 || echo "UNREACHABLE" -``` -3. After 3 failures → `safeoutputs___noop({"message": "MCP server unavailable after 3 attempts — step-level pre-warm also failed"})` -4. **ALL content MUST come from live MCP data.** Never use cached articles, stale data, or AI-fabricated content. -5. **⏱️ Do NOT spend more than 2 minutes on MCP warmup** — proceed to analysis immediately once `get_sync_status` succeeds. - -## 🛡️ File Ownership Contract - -Content workflows: only create/modify **EN and SV** files (`news/YYYY-MM-DD-*-en.html`, `*-sv.html`). Validate with `npx tsx scripts/validate-file-ownership.ts content`. Fix violations: `git restore --staged --worktree -- <file>` (tracked) or `rm <file>` (untracked). - -### Branch Naming Convention - -Branch: `news/content/{YYYY-MM-DD}/{article-type}` (e.g. `news/content/2026-03-23/interpellations`). `safeoutputs___create_pull_request` handles this automatically. - -## MANDATORY PR Creation - -### HOW SAFE PR CREATION WORKS - -> `safeoutputs___create_pull_request` handles branch creation, push, and PR opening — do NOT run `git push` or `git checkout -b` manually. Stage files, then call the tool directly. 
- - -```bash -# Stage articles and analysis — scoped to article type to stay within 100-file PR limit -# CRITICAL: Stage ONLY today's new articles (EN/SV), NOT all existing news/ -# Staging news/*interpellation*.html would include 170+ existing files, many of which -# may have been modified by auto-fix scripts, causing E003 (>100 files) PR failure. -git add "news/$ARTICLE_DATE-interpellation-debates-en.html" 2>/dev/null || true -git add "news/$ARTICLE_DATE-interpellation-debates-sv.html" 2>/dev/null || true -git add news/metadata/ 2>/dev/null || true -# Use $ANALYSIS_SUBFOLDER (set during Run Suffix Resolution above); fallback to base type -if [ -z "$ANALYSIS_SUBFOLDER" ]; then - ANALYSIS_SUBFOLDER="interpellations" -fi -# Stage analysis summary .md files ONLY — EXCLUDE documents/ to stay under 100-file limit. -# With --limit 50, documents/ alone can contain 100+ files (50 JSON + 50 analysis.md). -git add "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER"/*.md 2>/dev/null || true -git add "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER"/*.json 2>/dev/null || true -# 🚨 HARD UNSTAGE: NEVER commit analysis/data/ — it is an MCP response cache populated by -# download-parliamentary-data.ts (6 doc types × ~40 files = 240+ files). It must stay local. -# Committing it caused E003 "received 258 files" in news-motions run 24653843681 (PR #1867). -# Only news-realtime-monitor stages analysis/data/ intentionally; this workflow never should. -# 🚫 DO NOT run `git add analysis/data/...` anywhere in this workflow. -git reset HEAD -- analysis/data/ 2>/dev/null || true -# Enforce safe-outputs 100-file PR limit (AWF-safe: no $(...) 
— write to temp file + read back) -git diff --cached --name-only > /tmp/staged_files.txt -awk 'END{print NR}' /tmp/staged_files.txt > /tmp/staged_count.txt -STAGED_COUNT=0 -read STAGED_COUNT < /tmp/staged_count.txt 2>/dev/null || true -echo "📊 Staged file count: $STAGED_COUNT (limit: 100)" -if [ "$STAGED_COUNT" -gt 90 ]; then - echo "⚠️ $STAGED_COUNT files exceeds safe threshold. Removing metadata to reduce count." - git reset HEAD -- news/metadata/ 2>/dev/null || true - git diff --cached --name-only > /tmp/staged_files.txt - awk 'END{print NR}' /tmp/staged_files.txt > /tmp/staged_count.txt - STAGED_COUNT=0 - read STAGED_COUNT < /tmp/staged_count.txt 2>/dev/null || true -fi -if [ "$STAGED_COUNT" -gt 90 ]; then - echo "⚠️ Still $STAGED_COUNT files. Removing non-essential analysis — keeping core summaries." - # Graduated pruning: remove individual doc-level analysis JSON first, keep synthesis/scoring/risk .md - # If still over limit, all .json goes but .md summaries (synthesis-summary.md, risk-assessment.md) survive - git reset HEAD -- "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER"/*-analysis.json 2>/dev/null || true - git reset HEAD -- "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER"/*-details.json 2>/dev/null || true - git diff --cached --name-only > /tmp/staged_files.txt - awk 'END{print NR}' /tmp/staged_files.txt > /tmp/staged_count.txt - STAGED_COUNT=0 - read STAGED_COUNT < /tmp/staged_count.txt 2>/dev/null || true -fi -if [ "$STAGED_COUNT" -gt 90 ]; then - echo "⚠️ Still $STAGED_COUNT files. Removing remaining analysis .json — keeping .md summaries." 
 - git reset HEAD -- "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER"/*.json 2>/dev/null || true - git diff --cached --name-only > /tmp/staged_files.txt - awk 'END{print NR}' /tmp/staged_files.txt > /tmp/staged_count.txt - STAGED_COUNT=0 - read STAGED_COUNT < /tmp/staged_count.txt 2>/dev/null || true -fi -# FINAL HARD GUARD: if count still exceeds 99, remove all analysis .md except synthesis-summary.md -if [ "$STAGED_COUNT" -gt 99 ]; then - echo "🚨 CRITICAL: $STAGED_COUNT files still exceeds safe limit of 99. Removing all analysis .md except synthesis-summary." - git reset HEAD -- "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER"/*.md 2>/dev/null || true - git add "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER/synthesis-summary.md" 2>/dev/null || true - git diff --cached --name-only > /tmp/staged_files.txt - awk 'END{print NR}' /tmp/staged_files.txt > /tmp/staged_count.txt - STAGED_COUNT=0 - read STAGED_COUNT < /tmp/staged_count.txt 2>/dev/null || true - echo "📊 After emergency pruning: $STAGED_COUNT files" -fi -echo "📊 Final staged file count: $STAGED_COUNT" -git commit -m "Add interpellation-debates articles and analysis artifacts" -``` -> -- ✅ `safeoutputs___create_pull_request` for articles or analysis-only PRs -- ✅ `safeoutputs___noop` ONLY if MCP unreachable after 3 attempts AND no analysis artifacts exist -- ❌ NEVER noop because articles already exist — analysis always runs -- ❌ Safe output tools are in your tool list — NEVER search for them via bash - -## 🌐 Dispatch Translation Workflow - -After creating the content PR, dispatch translations: `safeoutputs___dispatch_workflow({ "workflow_name": "news-translate", "inputs": { "article_date": "<YYYY-MM-DD>", "article_type": "<article-type>", "languages": "all-extra" } })`. See `news-translate.md` for full translation quality rules. 
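Written out, the inline dispatch call above looks like this — the argument values are placeholders to be substituted with the resolved article date and type:

```javascript
// Dispatch translations after the content PR is created.
// Values shown are placeholders, not literals to send verbatim.
safeoutputs___dispatch_workflow({
  "workflow_name": "news-translate",
  "inputs": {
    "article_date": "<YYYY-MM-DD>",          // the resolved ARTICLE_DATE
    "article_type": "<article-type>",        // e.g. the article type slug for this run
    "languages": "all-extra"                 // remaining 12 languages beyond EN/SV
  }
})
```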
- -## MCP Tools - -**ALWAYS call `get_sync_status()` FIRST.** - -**Primary tool:** `get_interpellationer` — fetches latest interpellations (formal parliamentary questions demanding minister responses) -**Cross-reference:** `search_dokument_fulltext`, `search_anforanden` -**Calendar context:** `get_calendar_events` — check today's scheduled interpellation debate times (**⚠️ may return HTML instead of JSON; if calendar fails, explicitly flag the calendar API error and proceed without debate timing context, relying on `get_interpellationer` and `search_anforanden` for substance and recency**) -**Statistical enrichment:** SCB MCP — enrich with statistics relevant to interpellation policy areas. **World Bank indicators (144 total)**: `view analysis/worldbank/indicators-inventory.json` to discover indicators matching the interpellation's policy area — each indicator has `policyAreas`, `committees`, and `mcpTool` fields. Key governance indicators for interpellations: Rule of Law (RL.EST), Voice & Accountability (VA.EST), plus topic-matched indicators. Use MCP tools for indicators with `mcpTool` field. See `SHARED_PROMPT_PATTERNS.md` §"WORLD BANK ECONOMIC CONTEXT INTEGRATION" for Chart.js chart templates. 
- -```javascript -get_sync_status({}) -get_interpellationer({ rm: <calculated riksmöte>, limit: 20 }) - -// Calendar context for today's debates: -// get_calendar_events({ from: "YYYY-MM-DD", tom: "YYYY-MM-DD" }) - -// Cross-reference with debate speeches: -// search_anforanden({ text: "<interpellation topic>", rm: <calculated riksmöte>, limit: 10 }) -``` - -## Generation Steps - -### Step 1: Check Existing Articles (Analysis Always Runs) -🚨 **FULL ANALYSIS BEFORE ANY ARTICLE (BLOCKING)**: The complete deep political analysis phase following [`analysis/methodologies/ai-driven-analysis-guide.md`](../../analysis/methodologies/ai-driven-analysis-guide.md) (Rule 0 two-pass iteration + Rules 6–8 depth tiers, 15 min Pass 1 + 7 min Pass 2 minimum, ALL 9 required artifacts) **MUST** complete **BEFORE** any article HTML is created or updated. Articles MUST be (re)generated from the improved Pass 2 analysis — never from Pass 1 stubs, never from scripts alone, never skipping Pass 2. Violations = REJECTED PR (PR #1705 comment audit, 2026-04-18). - -Check if interpellation-debates articles already exist for the target date. If they do, skip article generation but **ALWAYS run the full deep political analysis phase** — analysis is the primary output and must execute on every run regardless of article existence. - -### Step 2: Query MCP -```javascript -get_sync_status({}) -get_interpellationer({ rm: <calculated riksmöte>, limit: 20 }) -``` - -### Step 2.5: Run Pre-Article Analysis Pipeline - -**CRITICAL: Download data first, then AI creates ALL 9 analysis artifacts.** `download-parliamentary-data.ts` downloads raw data from riksdag-regering-mcp ONLY — it performs NO analysis. The AI agent MUST: -1. Read `analysis/methodologies/ai-driven-analysis-guide.md` fully -2. Read ALL 8 templates in `analysis/templates/` -3. 
Create ALL 9 analysis files in `analysis/daily/YYYY-MM-DD/interpellations/` using evidence from the downloaded data - -**NEVER write or copy analysis files to the parent date directory** — doing so causes merge conflicts when multiple doc-type workflows run on the same date. The `analysis-reader.ts` automatically scans subdirectories, so root-level copies are NOT needed. After creating ALL analysis files, run the **9-Artifact Completeness Gate** from `SHARED_PROMPT_PATTERNS.md` §"9 REQUIRED Analysis Artifacts" to verify ALL 9 files exist. - -Key steps: resolve `ARTICLE_DATE` from input or today → check `data-download-manifest.md` → if 0 docs, loop `DAYS_BACK` 1–7 using `date -u -d "$ARTICLE_DATE - $DAYS_BACK days"`, run `download-parliamentary-data.ts --date "$LOOKBACK_DATE"` → copy artifacts from found date to original date folder → run `catalog-downloaded-data.ts --pending-only`. See `SHARED_PROMPT_PATTERNS.md` §"Data Lookback Fallback Strategy" for full bash implementation. - -### 🔄 Data Lookback Fallback - -> 🚨 **CRITICAL RULE**: Never produce empty/stub analysis. If no data for today, look back to find unanalyzed data. - -```bash -[ -f /tmp/hhmm.env ] && . /tmp/hhmm.env -if [ -z "$ARTICLE_DATE" ]; then - date -u +%Y-%m-%d > /tmp/today.txt - read ARTICLE_DATE < /tmp/today.txt -fi -ANALYSIS_DIR="analysis/daily/$ARTICLE_DATE/interpellations" -find "$ANALYSIS_DIR" -type f 2>/dev/null | wc -l > /tmp/analysis_count.txt -read ANALYSIS_COUNT < /tmp/analysis_count.txt -echo "Analysis artifacts: $ANALYSIS_COUNT files in $ANALYSIS_DIR" -``` - -> **🚨 CRITICAL RULE: Never call `safeoutputs___noop` if analysis artifacts exist.** If the pre-article analysis pipeline produced ANY output files, you MUST commit them via `safeoutputs___create_pull_request` — even if no articles are generated. Use an analysis-only PR with title: `📊 Analysis Only - Interpellations - {date}` and label `analysis-only`. 
Only use `safeoutputs___noop` if the analysis pipeline produced ZERO output files (truly nothing to analyze). - -### 🔬 Step 2b: Read ALL Analysis Files (MANDATORY — before article generation) - -> 🔴 **NON-NEGOTIABLE**: The AI agent MUST `cat` every analysis `.md` file BEFORE generating any article HTML. Analysis and articles are created in the **same workflow run** — there is zero excuse for not reading the analysis. Articles written without reading analysis are shallow and REJECTED. See SHARED_PROMPT_PATTERNS.md §"MANDATORY PRE-ARTICLE ANALYSIS READING". - -```bash -ANALYSIS_SUBFOLDER="interpellations" -ANALYSIS_BASE="analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER" - -echo "📖 Reading ALL analysis files from $ANALYSIS_BASE..." -if [ -d "$ANALYSIS_BASE" ]; then - for MD_FILE in "$ANALYSIS_BASE"/*.md; do - if [ -f "$MD_FILE" ]; then - echo "--- Reading: $MD_FILE ---" - cat "$MD_FILE" - echo "" - fi - done - if [ -d "$ANALYSIS_BASE/documents" ]; then - echo "📄 Reading per-document analyses..." - for DOC_FILE in "$ANALYSIS_BASE/documents"/*.md; do - if [ -f "$DOC_FILE" ]; then - echo "--- Per-doc: $DOC_FILE ---" - cat "$DOC_FILE" - echo "" - fi - done - fi - find "$ANALYSIS_BASE" -name "*.md" -type f 2>/dev/null | wc -l > /tmp/analysis_file_count.txt - read ANALYSIS_FILE_COUNT < /tmp/analysis_file_count.txt - echo "✅ Read $ANALYSIS_FILE_COUNT analysis files — these MUST drive article content" -else - echo "⚠️ No analysis directory found at $ANALYSIS_BASE — will use MCP fallback for article content" -fi -``` - -> **After reading, confirm you loaded the analysis** by noting: (1) number of files read, (2) top 3 significance-ranked findings, (3) key risk scores. If you cannot produce this summary, you have NOT read the analysis. 
- -### Step 3: Generate Articles - -```bash -# Set LANGUAGES_INPUT to the value shown in Workflow Dispatch Parameters above -LANGUAGES_INPUT="<value from Workflow Dispatch Parameters>" -[ -z "$LANGUAGES_INPUT" ] && LANGUAGES_INPUT="all" - -case "$LANGUAGES_INPUT" in - "nordic") LANG_ARG="en,sv,da,no,fi" ;; - "eu-core") LANG_ARG="en,sv,de,fr,es,nl" ;; - "all") LANG_ARG="en,sv,da,no,fi,de,fr,es,nl,ar,he,ja,ko,zh" ;; - *) LANG_ARG="$LANGUAGES_INPUT" ;; -esac - -source scripts/mcp-setup.sh && npx tsx scripts/generate-news-enhanced.ts \ - --types=interpellations \ - --languages="$LANG_ARG" \ - --skip-existing -``` - -**Article Navigation Verification**: The `generate-news-enhanced.ts` script automatically includes all required navigation elements: -- **Language switcher** (`<nav class="language-switcher">`) after `<body>` with all 14 languages -- **Back-to-news top nav** (`<div class="article-top-nav">`) with localized back link after language switcher -- **Footer back-to-news link** in `<footer class="article-footer">` - -These elements are validated by `bash scripts/validate-news-generation.sh` (Checks 8–10). The fix script is a **fallback only** — do not run it by default: -```bash -# FALLBACK ONLY — use if validate-news-generation.sh reports missing navigation elements -npx tsx scripts/fix-article-navigation.ts -``` - ---- - -## Step 2.6: Economic Data Acquisition (MANDATORY) - -> **Contract**: [`.github/aw/ECONOMIC_DATA_CONTRACT.md`](../aw/ECONOMIC_DATA_CONTRACT.md) — the **single source of truth** for World Bank + SCB data, Chart.js visualisations, and AI commentary. Follow it exactly; the Step 6 quality gate (`scripts/validate-economic-context.ts`) **blocks the PR** if any element is missing. - -**What you MUST do before writing any prose:** - -1. `view analysis/worldbank/indicators-inventory.json` and pick every indicator whose `committees` / `policyAreas` match the day's source documents. -2. 
Call `world-bank.get-economic-data` / `get-social-data` / `get-health-data` / `get-education-data` for Sweden (10-year series for primary domains) and for DK/NO/FI/DE (5-year series for the top 3 indicators — needed for the Nordic comparison bars and radar). -3. Call `scb.search_tables` + `scb.query_table` using the committee → TAB mapping in `scripts/scb-context.ts`. **`language` MUST be `"sv"` or `"en"` — NEVER `"no"`** (SCB returns HTTP 400 "Unsupported language"). -4. Retry every World Bank call up to **3 times** on failure. Cache raw responses under `analysis/data/worldbank/<YYYY>/<indicator>-<country>.json` so later article types in the same daily run reuse the data. -5. Write `analysis/daily/<ARTICLE_DATE>/<ANALYSIS_SUBFOLDER>/economic-data.json` matching `analysis/schemas/economic-data.schema.json`: - -```jsonc -{ - "version": "1.0", - "articleType": "interpellations", - "date": "<YYYY-MM-DD>", - "policyDomains": ["fiscal policy", "labor market"], - "dataPoints": [ - { "countryCode": "SWE", "countryName": "Sweden", "indicatorId": "NY.GDP.MKTP.KD.ZG", "date": "2024", "value": 0.82 }, - { "countryCode": "DNK", "countryName": "Denmark", "indicatorId": "NY.GDP.MKTP.KD.ZG", "date": "2024", "value": 1.75 } - ], - "commentary": "<will be filled in Step 3d>", - "source": { "worldBank": ["NY.GDP.MKTP.KD.ZG", "FP.CPI.TOTL.ZG"], "scb": ["TAB1291"] } -} -``` - -**Non-negotiable**: `dataPoints` MUST be non-empty. The HTML renderer (`scripts/data-transformers/content-generators/economic-dashboard-section.ts`) emits real Chart.js canvases only when the file exists with entries — otherwise the validator fails the PR. - -**Minimum coverage (enforced by the validator):** see the matrix in `ECONOMIC_DATA_CONTRACT.md` §"Coverage matrix" for this article type's chart count, commentary word minimum, and D3 requirement. 
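The retry-and-cache behaviour required above (3 attempts, cache under `analysis/data/worldbank/<YYYY>/<indicator>-<country>.json`) can be sketched as a small shell helper. The MCP tool call itself cannot be made from bash, so a direct World Bank v2 REST request stands in for it here — treat the URL shape as an assumption and substitute the real `world-bank.*` MCP call in practice:

```shell
# Sketch: fetch one indicator with cache-first lookup and up to 3 retries.
# Usage: fetch_indicator <indicator-id> <country-code> <year>
fetch_indicator() {
  CACHE="analysis/data/worldbank/$3/$1-$2.json"
  if [ -f "$CACHE" ]; then
    echo "cache hit: $CACHE"   # later article types in the same run reuse this
    return 0
  fi
  mkdir -p "$(dirname "$CACHE")"
  for ATTEMPT in 1 2 3; do
    # Stand-in for the world-bank MCP tool: the public Indicators v2 REST API.
    if curl -sf --max-time 20 \
        "https://api.worldbank.org/v2/country/$2/indicator/$1?format=json&date=$3" \
        -o "$CACHE"; then
      return 0
    fi
    echo "attempt $ATTEMPT/3 failed for $1/$2, retrying..." >&2
    sleep 2
  done
  return 1
}
```

A failed fetch after three attempts should be recorded in the manifest rather than silently dropped, so the validator's coverage matrix failure is traceable.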
- ---### Step 3b — Cross-Reference Minister Responses - -For each interpellation found, cross-reference the minister's response to identify accountability gaps: - -1. **Fetch minister response speech**: Use `search_anforanden(talare=<minister-name>, rm=<riksmöte>)` to locate the minister's formal response to the interpellation -2. **Compare question vs response**: Analyse the interpellation question against the minister's response to classify: - - **Unanswered questions** — accountability gap → government SWOT weakness (minister failed to address core concern) - - **Evasive answers** — deflection detected → opposition SWOT opportunity (pressure point for follow-up) - - **Policy commitments** — concrete pledges made → government SWOT strength (trackable promise) - - **Statistical claims** — verify against SCB/World Bank data → accuracy check for article -3. **Assess response timeliness**: Check if the minister responded within the two-week answer deadline in the Riksdag Act (Riksdagsordningen); flag overdue responses as accountability concerns -4. **Include minister response summary in article body**: For each interpellation entry, add a "Minister's Response" subsection summarising the response (or noting absence if unanswered) -5. **Generate accountability scorecard**: Tally response rates per minister and include in the Accountability Dashboard chart - -> **Fallback**: If `search_anforanden` returns no results for a specific minister, note "No formal response recorded" in the article and flag this as an accountability gap in the SWOT analysis. - -### Step 3c: AI Title, Meta Description & Analysis References (v5.0 — Analysis-Driven) - -> 🚨 **MANDATORY** — After article HTML is generated, the AI MUST read the completed synthesis-summary.md and use its "AI-Recommended Article Metadata" section to drive title, description, and SEO. See `SHARED_PROMPT_PATTERNS.md` §"AI-DRIVEN TITLE & META DESCRIPTION GENERATION" and `ai-driven-analysis-guide.md` §"Analysis-Driven Article Decision Protocol (v5.0)".
- -**1. Read synthesis analysis first** — `cat "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER/synthesis-summary.md"` and extract: - - "Recommended Title (EN)" and "Recommended Title (SV)" — use as starting point - - "Meta Description (EN)" and "Meta Description (SV)" — use as starting point - - "Key Highlights" — verify title references at least one highlight - - "Article Decision" and "Article Priority" — validate publication decision - -**2. Generate newsworthy titles from analysis** — Read each article's content AND the synthesis findings, then generate a title following: `[Active Verb] + [Specific Actor/Institution] + [Concrete Policy Action]`. The title MUST reference findings from the synthesis — not generic category labels. Apply to ALL languages (not just English). BANNED: ❌ "Interpellation Debates: Holding Government to Account: Defense in Focus" or any title ending with ": {Topic} in Focus". - -**3. Generate AI meta descriptions from analysis** (150-160 chars) — Summarize the #1 ranked finding from synthesis significance-scoring. BANNED: ❌ "Analysis of N documents covering Filed by:, Published:" or any description starting with "Analysis of N documents". - -**4. 
🔴 Add analysis references section (MANDATORY — VERIFY AFTER)** — Insert the "📊 Analysis & Sources" HTML block (from SHARED_PROMPT_PATTERNS.md §ANALYSIS FILE GITHUB REFERENCES) before the article footer, linking to: -- `analysis/daily/$ARTICLE_DATE/interpellations/synthesis-summary.md` -- `analysis/daily/$ARTICLE_DATE/interpellations/swot-analysis.md` -- `analysis/daily/$ARTICLE_DATE/interpellations/risk-assessment.md` -- `analysis/daily/$ARTICLE_DATE/interpellations/threat-analysis.md` -- `analysis/daily/$ARTICLE_DATE/interpellations/stakeholder-perspectives.md` -- `analysis/daily/$ARTICLE_DATE/interpellations/significance-scoring.md` -- `analysis/daily/$ARTICLE_DATE/interpellations/classification-results.md` -- `analysis/daily/$ARTICLE_DATE/interpellations/cross-reference-map.md` -- `analysis/daily/$ARTICLE_DATE/interpellations/data-download-manifest.md` -- `analysis/methodologies/ai-driven-analysis-guide.md` -- Per-document analyses in `documents/` subfolder - -**After inserting, VERIFY** by running: -```bash -for FILE in news/$ARTICLE_DATE-*interpellation*-*.html; do - if [ -f "$FILE" ] && ! grep -q 'class="analysis-references"' "$FILE"; then - echo "🔴 MISSING analysis-references in: $FILE — MUST FIX NOW" - fi -done -``` - -**5. Update all metadata in ALL languages** — For EVERY generated language file, ensure `<title>`, `<meta name="description">`, `<meta property="og:title">`, `<meta property="og:description">`, `<h1>`, Schema.org `headline`, `alternativeHeadline`, and `description` all reflect the AI-generated title and description. Non-English articles MUST have properly translated AI titles — not English titles or generic templates. - -### Step 3d: AI Content Quality Enforcement (v4.0 — MANDATORY) - -> 🚨 **v4.0 CRITICAL**: The AI MUST read pre-computed analysis and rewrite ALL script-generated stub content. See `SHARED_PROMPT_PATTERNS.md` §"AI ARTICLE CONTENT GENERATION" and `ai-driven-analysis-guide.md` v4.0. 
-> -> **Note:** This is Step 3**d** (not 3c) because interpellations has an additional Step 3b (Cross-Reference Minister Responses) and Step 3c (AI Title/Meta), shifting this enforcement step to 3d. All other workflows use Step 3c for this same enforcement. - -**1. Read pre-computed analysis** — Read synthesis, SWOT, risk analysis from `analysis/daily/$ARTICLE_DATE/interpellations/`. - -**2. Replace script-generated lede** — Replace any `"Analysis of N documents..."` with AI lede naming the most targeted minister, the filing party strategy, and the most significant interpellation topic. - -**3. Replace boilerplate "Why It Matters"** — For EACH interpellation, write unique analysis citing the interpellation number, the specific question asked, the targeted minister's portfolio, and why this matters politically. BANNED: `"Touches on {X} policy..."` boilerplate. - -**4. Replace generic "Winners & Losers"** — Replace `"The political landscape remains fluid..."` with specific accountability analysis: which ministers face the most pressure, which opposition parties demonstrate coordination, and minister response timeliness. - -**5. 🔴 MANDATORY: Replace ALL Deep Analysis `AI_MUST_REPLACE` markers** — The script generates `<!-- AI_MUST_REPLACE: ... -->` markers in EVERY Deep Analysis subsection. 
You MUST: - - Search generated HTML for ALL `AI_MUST_REPLACE` markers and replace EACH with genuine political intelligence - - "Timeline & Context" → When were these interpellations filed, what political events triggered them, expected minister response dates - - "Why This Matters" → Specific analysis of which ministers face accountability pressure and what policy failures these expose - - "Political Impact" → Name specific ministers targeted, opposition coordination patterns, government vulnerability assessment - - "Actions & Consequences" → Detail expected minister responses, policy commitments demanded, and consequences of evasive answers - - "Critical Assessment" → Honest evaluation of whether interpellations are genuine accountability tools or political theater - - ZERO `AI_MUST_REPLACE` markers may survive in the final committed HTML - -**6. Integrate minister response data** — Use cross-reference results from Step 3b (minister response speeches via MCP `search_anforanden`) to enrich the article with response summaries, accountability gaps, and policy commitments. - -**7. Replace excuse-as-analysis** — Replace `"No chamber debate data..."` with analysis from the interpellation text itself or minister response speeches. - -**8. Add interpellation coordination analysis** — Identify patterns: Are multiple interpellations targeting the same minister? The same policy area? Filed on the same day (suggesting coordination)? - -### Step 4: Translate, Validate & Verify Analysis Quality - -Run analysis references fix, validation, and HTMLHint before creating PR: -```bash -# 🔴 MANDATORY: Inject analysis references into any article missing them -npx tsx scripts/fix-analysis-references.ts --date "$ARTICLE_DATE" --rewrite --type interpellations - -bash scripts/validate-news-generation.sh -VALIDATION_EXIT=$? -if [ "$VALIDATION_EXIT" -ne 0 ]; then - echo "❌ News generation validation failed. Fix the reported issues before creating a PR." 
- exit "$VALIDATION_EXIT" -fi - -# HTMLHint validation with auto-fix — SCOPED TO TODAY'S ARTICLES ONLY -# CRITICAL: Do NOT run htmlhint/--fix on all news/*-*.html — that modifies 150+ existing -# interpellation articles which then get staged and exceed the 100-file PR limit (E003). -if [ -f "news/$ARTICLE_DATE-interpellation-debates-en.html" ] || [ -f "news/$ARTICLE_DATE-interpellation-debates-sv.html" ]; then - if ! npx htmlhint "news/$ARTICLE_DATE-interpellation-debates-en.html" "news/$ARTICLE_DATE-interpellation-debates-sv.html" 2>/dev/null; then - echo "⚠️ HTML validation errors in today's articles, attempting auto-fix (scoped to today only)..." - if [ -f "news/$ARTICLE_DATE-interpellation-debates-en.html" ]; then - npx tsx scripts/article-quality-enhancer.ts --fix "news/$ARTICLE_DATE-interpellation-debates-en.html" - fi - if [ -f "news/$ARTICLE_DATE-interpellation-debates-sv.html" ]; then - npx tsx scripts/article-quality-enhancer.ts --fix "news/$ARTICLE_DATE-interpellation-debates-sv.html" - fi - if ! npx htmlhint "news/$ARTICLE_DATE-interpellation-debates-en.html" "news/$ARTICLE_DATE-interpellation-debates-sv.html" 2>/dev/null; then - echo "⚠️ HTML validation still failing after auto-fix — manual review needed (continuing to PR)" - fi - fi -fi -``` - -**CRITICAL: Each article MUST contain real analysis, not just a list of translated links.** -Every generated article must include: -- An analytical lede paragraph about parliamentary accountability and government scrutiny (not just an interpellation count) -- Ministerial Accountability section analysing which ministers face the most questions and why -- "Why It Matters" analysis for each interpellation with policy domain context -- Opposition Strategy section showing which parties are most active in oversight -- Party-level breakdown with interpellation counts per party - -If the generated article lacks these analytical sections, manually add contextual analysis before committing. 
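Step 3d also demands that zero `AI_MUST_REPLACE` markers survive, but the validation block above never checks for them explicitly. A minimal sketch of that check — the filename glob follows the `YYYY-MM-DD-interpellation-debates-{lang}.html` naming convention used elsewhere in this prompt:

```shell
# Fail loudly if any AI_MUST_REPLACE stub marker survives in today's articles.
LEFTOVER=0
for FILE in news/"$ARTICLE_DATE"-interpellation-debates-*.html; do
  [ -f "$FILE" ] || continue   # glob may not match if generation failed
  if grep -q 'AI_MUST_REPLACE' "$FILE"; then
    echo "🔴 Stub marker survives in: $FILE — rewrite that section before committing"
    LEFTOVER=1
  fi
done
if [ "$LEFTOVER" -eq 0 ]; then
  echo "✅ No AI_MUST_REPLACE markers remain"
fi
```

Run this after the HTMLHint pass and before staging; a non-zero `LEFTOVER` means the Deep Analysis rewrite in Step 3d was incomplete.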
- -## MANDATORY Quality Validation - -After article generation, verify EACH article meets these minimum standards before committing. -Apply the quality rubric from **`scripts/prompts/v2/quality-criteria.md`** (minimum score: 7/10). Use the following reference documents to support consistent, in-depth analysis: -- **`scripts/prompts/v2/per-file-intelligence-analysis.md`** — Per-file AI analysis protocol -- **`analysis/methodologies/ai-driven-analysis-guide.md`** — Methodology for deep per-file analysis -- **`analysis/templates/per-file-political-intelligence.md`** — Per-file analysis output template - -### Iterative Analysis Protocol - -For each generated article, apply up to 3 iterations: -1. **Iteration 1** — Generate initial draft from MCP data -2. **Self-assess** — Score against quality rubric (Accuracy + Depth + Perspectives + Translation + Editorial) -3. **If score < 7**: Identify lowest-scoring dimension and regenerate those sections -4. **Iteration 2** — Address quality gaps, add missing parliamentary oversight analysis -5. **If still < 7**: Final iteration — add analytical depth, ensure party/theme-grouped structure -6. **Maximum 3 iterations** — Never publish below 5/10 - -### Required Sections (at least 3 of 5): -1. **Analytical Lede** (paragraph, not just document count) -2. **Parliamentary Oversight** (interpellations grouped by submitting party and policy theme — uses dedicated generator) -3. **Strategic Context** (why these interpellations matter politically) -4. **Stakeholder Impact** (which ministers are under pressure) -5. 
**What Happens Next** (expected debate schedule and outcomes) - -### Disqualifying Patterns: -- ❌ `"Filed by: Unknown (Unknown)"` — FIX author/party metadata before committing -- ❌ `data-translate="true"` spans in non-Swedish articles — TRANSLATE before committing -- ❌ Identical "Why It Matters" text for all entries — DIFFERENTIATE analysis per interpellation -- ❌ Flat list of interpellations without grouping — GROUP by policy theme and submitting party -- ❌ Article under 500 words — EXPAND with analytical sections - -### Playwright Visual Validation -Run Playwright validation before creating the PR: -```bash -# HTMLHint validation -npx htmlhint "news/*-interpellation-debates-*.html" +# ❓ Interpellation Debates -# Playwright visual validation (accessibility, RTL, responsive) -npx tsx scripts/validate-articles-playwright.ts --filter "interpellation-debates" +Generates deep political intelligence articles on interpellation debates, including minister responses, in core languages (EN, SV). Translations dispatched to `news-translate`. -# Validate JSON-LD cross-references -npx tsx scripts/validate-cross-references.ts news/*-interpellation-debates-*.html -``` +## What this workflow does -### Bash Validation Commands: -```bash -# Check for unknown authors (should return 0) -grep -l "Filed by: Unknown" news/*-interpellation-debates-*.html 2>/dev/null | wc -l || true +- **Article type**: `interpellations` +- **Analysis subfolder**: `analysis/daily/$ARTICLE_DATE/interpellations/` +- **Core languages produced**: `en`, `sv` (remaining 12 languages dispatched to `news-translate`) +- **One pull request per run** containing analysis + articles + visualisation data. 
-# Check for untranslated spans in English article (should return 0) -grep -c 'data-translate="true"' "news/$ARTICLE_DATE-interpellation-debates-en.html" 2>/dev/null || true +## Time budget (60 min, minimum 45 min of real work) -# Check word count of English article text content (warn if < 500; HTML tags stripped) -FILE="news/$ARTICLE_DATE-interpellation-debates-en.html" -if [ ! -f "$FILE" ]; then echo "WARNING: Expected article file not found: $FILE — check if generation succeeded"; else - sed 's/<[^>]*>/ /g' "$FILE" | tr -s '[:space:]' '\n' | grep -c '[[:alnum:]]' 2>/dev/null > /tmp/word_count.txt || echo 0 > /tmp/word_count.txt - read WORD_COUNT < /tmp/word_count.txt - echo "Content word count (HTML tags stripped): $WORD_COUNT" - if [ "$WORD_COUNT" -lt 500 ]; then echo "WARNING: Article content may be too short ($WORD_COUNT words) — consider expanding before PR"; fi -fi +| Minutes | Phase | Module | +|---------|-------|--------| +| 0–2 | MCP pre-warm + `get_sync_status` | 02 | +| 2–6 | Download data + catalogue | 03 | +| 6–25 | Analysis Pass 1 (methodology read + per-doc analyses + 9 artifacts) | 04 | +| 25–35 | Analysis Pass 2 (read-back + improvements) | 04 | +| 35–37 | Analysis Gate | 05 | +| 37–48 | Article Pass 1 + Pass 2 (EN, SV) | 06 | +| 48–55 | Visual + link validation | 06 | +| 55–60 | Stage, commit, **ONE** `safeoutputs___create_pull_request` | 07 | -# Check for duplicate "Why It Matters" content (should return empty) -grep -o 'Why It Matters[^<]*' "news/$ARTICLE_DATE-interpellation-debates-en.html" 2>/dev/null | sort | uniq -d || true -``` +Trim scope before quality. Never open a second PR to "save" partial work — there is no second PR. -### If Article Fails Quality Check: -1. Use bash to enhance the HTML with analytical sections -2. Replace generic "Why It Matters" with interpellation-specific analysis -3. Add thematic grouping headers (e.g., by policy area or target minister) -4. 
Translate any remaining Swedish content +## Inputs -**Note**: News index files, metadata, and sitemap are generated automatically at build time by the `prebuild` script. Do NOT run generation scripts or commit their output — only commit the article HTML files. +- `article_date` — override date (defaults to today) +- `force_generation` — regenerate even if today's article exists (analysis is always refreshed regardless) +- `languages` — core content languages (default `en,sv`) +- `analysis_depth` — `standard` | `deep` (default) | `comprehensive` -## 🌐 Translation Quality +## Dedup & analysis-only path -EN/SV only: all headings, meta, content in correct language; no untranslated `data-translate` spans; Swedish API titles translated. Full rules: `news-translate.md`. -## Article Naming Convention -Files: `YYYY-MM-DD-interpellation-debates-{lang}.html` +If articles for `$ARTICLE_DATE` + `interpellations` already exist **and** `force_generation=false`: +- Still run the full analysis pipeline (modules 03 → 04 → 05). +- Commit the analysis. +- Open the single PR with title `📊 Analysis Only — Interpellation Debates — $ARTICLE_DATE` and label `analysis-only`. -## Step 3d: Economic Commentary (MANDATORY) +Analysis is the primary product — a run never "does nothing" just because articles exist. -> After Step 3c and **before** calling `safeoutputs.create_pull_request`, re-open `economic-data.json` and replace the placeholder `commentary` string with a 2–4 sentence paragraph that: -> - cites **2–3 concrete numeric values** from `dataPoints`; -> - ties the numbers to the day's political developments (not definitions of indicators); -> - is written in plain English (translations are produced downstream by `news-translate`); -> - meets the minimum word count in the coverage matrix for this article type. -> -> Banned phrasings (the multi-dim quality score flags these): "The political landscape remains fluid…", "Touches on X policy…", pure indicator definitions. 
-> -> Full rules: [`.github/aw/ECONOMIC_DATA_CONTRACT.md`](../aw/ECONOMIC_DATA_CONTRACT.md) §"Writing the AI commentary — workflow Step 3d". +All other rules (bash format, AWF shell safety, MCP access, download pipeline, analysis methodology & gate, article generation, commit & PR policy) live in the imported modules. diff --git a/.github/workflows/news-month-ahead.lock.yml b/.github/workflows/news-month-ahead.lock.yml index 9e0d234f7..fc29cf699 100644 --- a/.github/workflows/news-month-ahead.lock.yml +++ b/.github/workflows/news-month-ahead.lock.yml @@ -1,4 +1,4 @@ -# gh-aw-metadata: {"schema_version":"v3","frontmatter_hash":"87826602dca425b1b9e37a7941eff85f6a2f56cf10dae16eb04e87cb539dee63","compiler_version":"v0.69.2","agent_id":"copilot","agent_model":"claude-opus-4.7"} +# gh-aw-metadata: {"schema_version":"v3","frontmatter_hash":"edc0474901ac6a1bac6847c0bd8635b6adbacbd4709e2109d31070a72a53068e","compiler_version":"v0.69.2","agent_id":"copilot","agent_model":"claude-opus-4.7"} # gh-aw-manifest: 
{"version":1,"secrets":["COPILOT_GITHUB_TOKEN","GH_AW_CI_TRIGGER_TOKEN","GH_AW_GITHUB_MCP_SERVER_TOKEN","GH_AW_GITHUB_TOKEN","GITHUB_TOKEN"],"actions":[{"repo":"actions/checkout","sha":"de0fac2e4500dabe0009e67214ff5f5447ce83dd","version":"v6.0.2"},{"repo":"actions/download-artifact","sha":"3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c","version":"v8.0.1"},{"repo":"actions/github-script","sha":"373c709c69115d41ff229c7e5df9f8788daa9553","version":"v9"},{"repo":"actions/setup-node","sha":"6044e13b5dc448c55e2357c09f80417699197238","version":"6044e13b5dc448c55e2357c09f80417699197238"},{"repo":"actions/upload-artifact","sha":"043fb46d1a93c77aae656e7c1c64a875d1fc6a0a","version":"v7.0.1"},{"repo":"github/gh-aw-actions/setup","sha":"dca90cae5e2ec0ef2275f97efcb832793c86e082","version":"v0.69.2"}],"containers":[{"image":"alpine:latest"},{"image":"ghcr.io/github/gh-aw-firewall/agent:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/api-proxy:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/squid:0.25.26"},{"image":"ghcr.io/github/gh-aw-mcpg:v0.2.26"},{"image":"ghcr.io/github/github-mcp-server:v1.0.0"},{"image":"node:25-alpine"},{"image":"node:lts-alpine"}]} # ___ _ _ # / _ \ | | (_) @@ -24,6 +24,18 @@ # # Generates month-ahead strategic outlook articles in core languages (EN, SV). Translations handled by news-translate workflow. Runs on 1st of each month. 
# +# Resolved workflow manifest: +# Imports: +# - ../prompts/00-base-contract.md +# - ../prompts/01-bash-and-shell-safety.md +# - ../prompts/02-mcp-access.md +# - ../prompts/03-data-download.md +# - ../prompts/04-analysis-pipeline.md +# - ../prompts/05-analysis-gate.md +# - ../prompts/06-article-generation.md +# - ../prompts/07-commit-and-pr.md +# - ../prompts/ext/tier-c-aggregation.md +# # Secrets used: # - COPILOT_GITHUB_TOKEN # - GH_AW_CI_TRIGGER_TOKEN @@ -183,14 +195,9 @@ jobs: env: GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt GH_AW_SAFE_OUTPUTS: ${{ runner.temp }}/gh-aw/safeoutputs/outputs.jsonl - GH_AW_EXPR_731DE217: ${{ github.event.inputs.force_generation || 'false' }} GH_AW_GITHUB_ACTOR: ${{ github.actor }} GH_AW_GITHUB_EVENT_COMMENT_ID: ${{ github.event.comment.id }} GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER: ${{ github.event.discussion.number }} - GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH: ${{ github.event.inputs.analysis_depth }} - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE: ${{ github.event.inputs.article_date }} - GH_AW_GITHUB_EVENT_INPUTS_FORCE_GENERATION: ${{ github.event.inputs.force_generation }} - GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES: ${{ github.event.inputs.languages }} GH_AW_GITHUB_EVENT_ISSUE_NUMBER: ${{ github.event.issue.number }} GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: ${{ github.event.pull_request.number }} GH_AW_GITHUB_REPOSITORY: ${{ github.repository }} @@ -201,21 +208,21 @@ jobs: run: | bash "${RUNNER_TEMP}/gh-aw/actions/create_prompt_first.sh" { - cat << 'GH_AW_PROMPT_f9d32e3db3cfdb16_EOF' + cat << 'GH_AW_PROMPT_d68ebdb73563ac48_EOF' <system> - GH_AW_PROMPT_f9d32e3db3cfdb16_EOF + GH_AW_PROMPT_d68ebdb73563ac48_EOF cat "${RUNNER_TEMP}/gh-aw/prompts/xpia.md" cat "${RUNNER_TEMP}/gh-aw/prompts/temp_folder_prompt.md" cat "${RUNNER_TEMP}/gh-aw/prompts/markdown.md" cat "${RUNNER_TEMP}/gh-aw/prompts/agentic_workflows_guide.md" cat "${RUNNER_TEMP}/gh-aw/prompts/repo_memory_prompt.md" cat "${RUNNER_TEMP}/gh-aw/prompts/safe_outputs_prompt.md" - cat << 
'GH_AW_PROMPT_f9d32e3db3cfdb16_EOF' + cat << 'GH_AW_PROMPT_d68ebdb73563ac48_EOF' <safe-output-tools> - Tools: add_comment, create_pull_request(max:2), dispatch_workflow, missing_tool, missing_data, noop - GH_AW_PROMPT_f9d32e3db3cfdb16_EOF + Tools: add_comment, create_pull_request, dispatch_workflow, missing_tool, missing_data, noop + GH_AW_PROMPT_d68ebdb73563ac48_EOF cat "${RUNNER_TEMP}/gh-aw/prompts/safe_outputs_create_pull_request.md" - cat << 'GH_AW_PROMPT_f9d32e3db3cfdb16_EOF' + cat << 'GH_AW_PROMPT_d68ebdb73563ac48_EOF' </safe-output-tools> <github-context> The following GitHub context information is available for this workflow: @@ -245,22 +252,26 @@ jobs: {{/if}} </github-context> - GH_AW_PROMPT_f9d32e3db3cfdb16_EOF + GH_AW_PROMPT_d68ebdb73563ac48_EOF cat "${RUNNER_TEMP}/gh-aw/prompts/github_mcp_tools_with_safeoutputs_prompt.md" - cat << 'GH_AW_PROMPT_f9d32e3db3cfdb16_EOF' + cat << 'GH_AW_PROMPT_d68ebdb73563ac48_EOF' </system> + {{#runtime-import .github/prompts/00-base-contract.md}} + {{#runtime-import .github/prompts/01-bash-and-shell-safety.md}} + {{#runtime-import .github/prompts/02-mcp-access.md}} + {{#runtime-import .github/prompts/03-data-download.md}} + {{#runtime-import .github/prompts/04-analysis-pipeline.md}} + {{#runtime-import .github/prompts/05-analysis-gate.md}} + {{#runtime-import .github/prompts/06-article-generation.md}} + {{#runtime-import .github/prompts/07-commit-and-pr.md}} + {{#runtime-import .github/prompts/ext/tier-c-aggregation.md}} {{#runtime-import .github/workflows/news-month-ahead.md}} - GH_AW_PROMPT_f9d32e3db3cfdb16_EOF + GH_AW_PROMPT_d68ebdb73563ac48_EOF } > "$GH_AW_PROMPT" - name: Interpolate variables and render templates uses: actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9 env: GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt - GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH: ${{ github.event.inputs.analysis_depth }} - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE: ${{ github.event.inputs.article_date }} - 
GH_AW_EXPR_731DE217: ${{ github.event.inputs.force_generation || 'false' }} - GH_AW_GITHUB_EVENT_INPUTS_FORCE_GENERATION: ${{ github.event.inputs.force_generation }} - GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES: ${{ github.event.inputs.languages }} with: script: | const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); @@ -271,14 +282,9 @@ jobs: uses: actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9 env: GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt - GH_AW_EXPR_731DE217: ${{ github.event.inputs.force_generation || 'false' }} GH_AW_GITHUB_ACTOR: ${{ github.actor }} GH_AW_GITHUB_EVENT_COMMENT_ID: ${{ github.event.comment.id }} GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER: ${{ github.event.discussion.number }} - GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH: ${{ github.event.inputs.analysis_depth }} - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE: ${{ github.event.inputs.article_date }} - GH_AW_GITHUB_EVENT_INPUTS_FORCE_GENERATION: ${{ github.event.inputs.force_generation }} - GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES: ${{ github.event.inputs.languages }} GH_AW_GITHUB_EVENT_ISSUE_NUMBER: ${{ github.event.issue.number }} GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: ${{ github.event.pull_request.number }} GH_AW_GITHUB_REPOSITORY: ${{ github.repository }} @@ -301,14 +307,9 @@ jobs: return await substitutePlaceholders({ file: process.env.GH_AW_PROMPT, substitutions: { - GH_AW_EXPR_731DE217: process.env.GH_AW_EXPR_731DE217, GH_AW_GITHUB_ACTOR: process.env.GH_AW_GITHUB_ACTOR, GH_AW_GITHUB_EVENT_COMMENT_ID: process.env.GH_AW_GITHUB_EVENT_COMMENT_ID, GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER: process.env.GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER, - GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH: process.env.GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH, - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE: process.env.GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE, - GH_AW_GITHUB_EVENT_INPUTS_FORCE_GENERATION: process.env.GH_AW_GITHUB_EVENT_INPUTS_FORCE_GENERATION, - 
GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES: process.env.GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES, GH_AW_GITHUB_EVENT_ISSUE_NUMBER: process.env.GH_AW_GITHUB_EVENT_ISSUE_NUMBER, GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: process.env.GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER, GH_AW_GITHUB_REPOSITORY: process.env.GH_AW_GITHUB_REPOSITORY, @@ -410,7 +411,7 @@ jobs: run: | npm ci --prefer-offline --no-audit - name: Pre-warm MCP server (Render.com cold start mitigation) - run: "echo \"🔥 Pre-warming riksdag-regering MCP server via MCP protocol...\"\nMCP_URL=\"https://riksdag-regering-ai.onrender.com/mcp\"\nWARM=false\nfor i in 1 2 3 4 5 6; do\n RESP=$(curl -sf --max-time 30 -X POST \\\n -H \"Content-Type: application/json\" \\\n -d '{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\",\"params\":{}}' \\\n \"$MCP_URL\" 2>/dev/null) || true\n if echo \"$RESP\" | grep -q '\"tools\"'; then\n TOOL_COUNT=$(echo \"$RESP\" | grep -o '\"name\"' | wc -l)\n echo \"✅ MCP server responded on attempt $i with $TOOL_COUNT tools registered\"\n WARM=true\n break\n fi\n echo \"⏳ Attempt $i/6 — server may be cold-starting, waiting 20s...\"\n sleep 20\ndone\nif [ \"$WARM\" = \"false\" ]; then\n echo \"⚠️ MCP server did not respond after 6 attempts — agent will retry via in-prompt health gate\"\nfi\necho \"🔄 Starting background keep-alive pinger (every 30s, max 55 min — covers full 60-min workflow through safe-output PR creation)...\"\nKEEP_ALIVE_START=$(date +%s)\nKEEP_ALIVE_END=$((KEEP_ALIVE_START + 3300))\nexport MCP_URL KEEP_ALIVE_END\nnohup bash -c '\n while :; do\n NOW=$(date +%s)\n if [ \"$NOW\" -ge \"$KEEP_ALIVE_END\" ]; then\n break\n fi\n curl -sf --max-time 10 -X POST \\\n -H \"Content-Type: application/json\" \\\n -d \"{\\\"jsonrpc\\\":\\\"2.0\\\",\\\"id\\\":1,\\\"method\\\":\\\"tools/list\\\",\\\"params\\\":{}}\" \\\n \"$MCP_URL\" -o /dev/null 2>/dev/null || true\n sleep 30\n done\n' </dev/null >/tmp/mcp-keepalive.log 2>&1 &\nKEEP_ALIVE_PID=$!\ndisown \"$KEEP_ALIVE_PID\" 2>/dev/null || true\necho 
\"Keep-alive PID: $KEEP_ALIVE_PID (auto-exits after 55 min; log: /tmp/mcp-keepalive.log)\"\n" + run: "echo \"🔥 Pre-warming riksdag-regering MCP server via MCP protocol...\"\nMCP_URL=\"https://riksdag-regering-ai.onrender.com/mcp\"\nWARM=false\nfor i in 1 2 3 4 5 6; do\n RESP=$(curl -sf --max-time 30 -X POST \\\n -H \"Content-Type: application/json\" \\\n -d '{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\",\"params\":{}}' \\\n \"$MCP_URL\" 2>/dev/null) || true\n if echo \"$RESP\" | grep -q '\"tools\"'; then\n TOOL_COUNT=$(echo \"$RESP\" | grep -o '\"name\"' | wc -l)\n echo \"✅ MCP server responded on attempt $i with $TOOL_COUNT tools registered\"\n WARM=true\n break\n fi\n echo \"⏳ Attempt $i/6 — server may be cold-starting, waiting 20s...\"\n sleep 20\ndone\nif [ \"$WARM\" = \"false\" ]; then\n echo \"⚠️ MCP server did not respond after 6 attempts — agent will retry via in-prompt health gate\"\nfi\n" - name: Pre-flight external endpoint reachability check (runs before MCP Gateway) run: "echo \"🔍 Network Diagnostics — $(date -u '+%Y-%m-%dT%H:%M:%SZ')\"\necho \"═══════════════════════════════════════════\"\necho \"\"\necho \"📡 DNS Resolution Tests:\"\nfor domain in riksdag-regering-ai.onrender.com api.scb.se api.worldbank.org data.riksdagen.se www.riksdagen.se www.regeringen.se; do\n if nslookup \"$domain\" >/dev/null 2>&1; then\n IP=$(nslookup \"$domain\" 2>/dev/null | grep -A1 \"Name:\" | grep \"Address:\" | head -1 | awk '{print $2}')\n echo \" ✅ $domain → $IP\"\n else\n echo \" ❌ $domain — DNS FAILED\"\n fi\ndone\necho \"\"\necho \"🌐 HTTPS Connectivity Tests:\"\nfor url in \\\n \"https://riksdag-regering-ai.onrender.com/mcp\" \\\n \"https://api.scb.se/OV0104/v2beta\" \\\n \"https://api.worldbank.org/v2/country/SE?format=json\" \\\n \"https://data.riksdagen.se/dokumentlista/?sok=test&doktyp=bet&utformat=json&a=1\" \\\n; do\n HTTP_CODE=$(curl -s -o /dev/null -w \"%{http_code}\" --max-time 10 \"$url\" 2>/dev/null || echo \"000\")\n DOMAIN=$(echo \"$url\" | 
sed 's|https://||' | cut -d/ -f1)\n if [ \"$HTTP_CODE\" -ge 200 ] && [ \"$HTTP_CODE\" -lt 400 ]; then\n echo \" ✅ $DOMAIN → HTTP $HTTP_CODE\"\n elif [ \"$HTTP_CODE\" = \"000\" ]; then\n echo \" ❌ $DOMAIN → TIMEOUT/UNREACHABLE\"\n else\n echo \" ⚠️ $DOMAIN → HTTP $HTTP_CODE\"\n fi\ndone\necho \"\"\necho \"🔌 MCP Server Tool Count:\"\nTOOL_RESP=$(curl -sf --max-time 15 -X POST \\\n -H \"Content-Type: application/json\" \\\n -d '{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\",\"params\":{}}' \\\n \"https://riksdag-regering-ai.onrender.com/mcp\" 2>/dev/null) || TOOL_RESP=\"\"\nif echo \"$TOOL_RESP\" | grep -q '\"tools\"'; then\n TOOL_COUNT=$(echo \"$TOOL_RESP\" | grep -o '\"name\"' | wc -l)\n echo \" ✅ riksdag-regering MCP: $TOOL_COUNT tools registered\"\nelse\n echo \" ❌ riksdag-regering MCP: No tools response (server may still be starting)\"\nfi\necho \"\"\necho \"═══════════════════════════════════════════\"\n" @@ -498,16 +499,16 @@ jobs: mkdir -p "${RUNNER_TEMP}/gh-aw/safeoutputs" mkdir -p /tmp/gh-aw/safeoutputs mkdir -p /tmp/gh-aw/mcp-logs/safeoutputs - cat > "${RUNNER_TEMP}/gh-aw/safeoutputs/config.json" << 'GH_AW_SAFE_OUTPUTS_CONFIG_a1f040b003264725_EOF' - 
{"add_comment":{"max":1},"create_pull_request":{"draft":false,"expires":336,"labels":["agentic-news","analysis-data"],"max":2,"max_patch_size":4096,"protected_files":["package.json","bun.lockb","bunfig.toml","deno.json","deno.jsonc","deno.lock","global.json","NuGet.Config","Directory.Packages.props","mix.exs","mix.lock","go.mod","go.sum","stack.yaml","stack.yaml.lock","pom.xml","build.gradle","build.gradle.kts","settings.gradle","settings.gradle.kts","gradle.properties","package-lock.json","yarn.lock","pnpm-lock.yaml","npm-shrinkwrap.json","requirements.txt","Pipfile","Pipfile.lock","pyproject.toml","setup.py","setup.cfg","Gemfile","Gemfile.lock","uv.lock","CODEOWNERS","AGENTS.md","CLAUDE.md","GEMINI.md"],"protected_path_prefixes":[".github/",".agents/"]},"create_report_incomplete_issue":{},"dispatch_workflow":{"max":1,"workflow_files":{"news-translate":".lock.yml"},"workflows":["news-translate"]},"missing_data":{},"missing_tool":{},"noop":{"max":1,"report-as-issue":"true"},"push_repo_memory":{"memories":[{"dir":"/tmp/gh-aw/repo-memory/default","id":"default","max_file_count":50,"max_file_size":51200,"max_patch_size":51200}]},"report_incomplete":{}} - GH_AW_SAFE_OUTPUTS_CONFIG_a1f040b003264725_EOF + cat > "${RUNNER_TEMP}/gh-aw/safeoutputs/config.json" << 'GH_AW_SAFE_OUTPUTS_CONFIG_ff7f4402944ea0d2_EOF' + 
{"add_comment":{"max":1},"create_pull_request":{"draft":false,"expires":336,"labels":["agentic-news","analysis-data"],"max":1,"max_patch_size":4096,"protected_files":["package.json","bun.lockb","bunfig.toml","deno.json","deno.jsonc","deno.lock","global.json","NuGet.Config","Directory.Packages.props","mix.exs","mix.lock","go.mod","go.sum","stack.yaml","stack.yaml.lock","pom.xml","build.gradle","build.gradle.kts","settings.gradle","settings.gradle.kts","gradle.properties","package-lock.json","yarn.lock","pnpm-lock.yaml","npm-shrinkwrap.json","requirements.txt","Pipfile","Pipfile.lock","pyproject.toml","setup.py","setup.cfg","Gemfile","Gemfile.lock","uv.lock","CODEOWNERS","AGENTS.md","CLAUDE.md","GEMINI.md"],"protected_path_prefixes":[".github/",".agents/"]},"create_report_incomplete_issue":{},"dispatch_workflow":{"max":1,"workflow_files":{"news-translate":".lock.yml"},"workflows":["news-translate"]},"missing_data":{},"missing_tool":{},"noop":{"max":1,"report-as-issue":"true"},"push_repo_memory":{"memories":[{"dir":"/tmp/gh-aw/repo-memory/default","id":"default","max_file_count":50,"max_file_size":51200,"max_patch_size":51200}]},"report_incomplete":{}} + GH_AW_SAFE_OUTPUTS_CONFIG_ff7f4402944ea0d2_EOF - name: Write Safe Outputs Tools env: GH_AW_TOOLS_META_JSON: | { "description_suffixes": { "add_comment": " CONSTRAINTS: Maximum 1 comment(s) can be added. Supports reply_to_id for discussion threading.", - "create_pull_request": " CONSTRAINTS: Maximum 2 pull request(s) can be created. Labels [\"agentic-news\" \"analysis-data\"] will be automatically added." + "create_pull_request": " CONSTRAINTS: Maximum 1 pull request(s) can be created. Labels [\"agentic-news\" \"analysis-data\"] will be automatically added." 
}, "repo_params": {}, "dynamic_tools": [ @@ -766,7 +767,7 @@ jobs: mkdir -p /home/runner/.copilot GH_AW_NODE=$(which node 2>/dev/null || command -v node 2>/dev/null || echo node) - cat << GH_AW_MCP_CONFIG_ac4e905bc415f29a_EOF | "$GH_AW_NODE" "${RUNNER_TEMP}/gh-aw/actions/start_mcp_gateway.cjs" + cat << GH_AW_MCP_CONFIG_d1f05584a39f813d_EOF | "$GH_AW_NODE" "${RUNNER_TEMP}/gh-aw/actions/start_mcp_gateway.cjs" { "mcpServers": { "agenticworkflows": { @@ -882,7 +883,7 @@ jobs: "payloadDir": "${MCP_GATEWAY_PAYLOAD_DIR}" } } - GH_AW_MCP_CONFIG_ac4e905bc415f29a_EOF + GH_AW_MCP_CONFIG_d1f05584a39f813d_EOF - name: Download activation artifact uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1 with: @@ -1569,7 +1570,7 @@ jobs: GH_AW_ALLOWED_DOMAINS: "*.githubusercontent.com,api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.imf.org,api.individual.githubcopilot.com,api.npms.io,api.scb.se,api.snapcraft.io,api.worldbank.org,archive.ubuntu.com,azure.archive.ubuntu.com,bun.sh,cdn.jsdelivr.net,codeload.github.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,data.imf.org,data.riksdagen.se,deb.nodesource.com,deno.land,docs.github.com,esm.sh,get.pnpm.io,github-cloud.githubusercontent.com,github-cloud.s3.amazonaws.com,github.blog,github.com,github.githubassets.com,googleapis.deno.dev,googlechromelabs.github.io,hack23.com,hack23.github.io,host.docker.internal,json-schema.org,json.schemastore.org,jsr.io,keyserver.ubuntu.com,lfs.github.com,localhost,nodejs.org,npm.pkg.github.com,npmjs.com,npmjs.org,objects.githubusercontent.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubuserconten
t.com,regeringen.se,registry.bower.io,registry.npmjs.com,registry.npmjs.org,registry.yarnpkg.com,repo.yarnpkg.com,riksdag-regering-ai.onrender.com,riksdagen.se,riksdagsmonitor.com,s.symcb.com,s.symcd.com,security.ubuntu.com,skimdb.npmjs.com,storage.googleapis.com,telemetry.enterprise.githubcopilot.com,telemetry.vercel.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com,www.googleapis.com,www.hack23.com,www.imf.org,www.npmjs.com,www.npmjs.org,www.regeringen.se,www.riksdagen.se,www.riksdagsmonitor.com,www.scb.se,yarnpkg.com" GITHUB_SERVER_URL: ${{ github.server_url }} GITHUB_API_URL: ${{ github.api_url }} - GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: "{\"add_comment\":{\"max\":1},\"create_pull_request\":{\"draft\":false,\"expires\":336,\"labels\":[\"agentic-news\",\"analysis-data\"],\"max\":2,\"max_patch_size\":4096,\"protected_files\":[\"package.json\",\"bun.lockb\",\"bunfig.toml\",\"deno.json\",\"deno.jsonc\",\"deno.lock\",\"global.json\",\"NuGet.Config\",\"Directory.Packages.props\",\"mix.exs\",\"mix.lock\",\"go.mod\",\"go.sum\",\"stack.yaml\",\"stack.yaml.lock\",\"pom.xml\",\"build.gradle\",\"build.gradle.kts\",\"settings.gradle\",\"settings.gradle.kts\",\"gradle.properties\",\"package-lock.json\",\"yarn.lock\",\"pnpm-lock.yaml\",\"npm-shrinkwrap.json\",\"requirements.txt\",\"Pipfile\",\"Pipfile.lock\",\"pyproject.toml\",\"setup.py\",\"setup.cfg\",\"Gemfile\",\"Gemfile.lock\",\"uv.lock\",\"CODEOWNERS\",\"AGENTS.md\",\"CLAUDE.md\",\"GEMINI.md\"],\"protected_path_prefixes\":[\".github/\",\".agents/\"]},\"create_report_incomplete_issue\":{},\"dispatch_workflow\":{\"max\":1,\"workflow_files\":{\"news-translate\":\".lock.yml\"},\"workflows\":[\"news-translate\"]},\"missing_data\":{},\"missing_tool\":{},\"noop\":{\"max\":1,\"report-as-issue\":\"true\"},\"report_incomplete\":{}}" + GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: 
"{\"add_comment\":{\"max\":1},\"create_pull_request\":{\"draft\":false,\"expires\":336,\"labels\":[\"agentic-news\",\"analysis-data\"],\"max\":1,\"max_patch_size\":4096,\"protected_files\":[\"package.json\",\"bun.lockb\",\"bunfig.toml\",\"deno.json\",\"deno.jsonc\",\"deno.lock\",\"global.json\",\"NuGet.Config\",\"Directory.Packages.props\",\"mix.exs\",\"mix.lock\",\"go.mod\",\"go.sum\",\"stack.yaml\",\"stack.yaml.lock\",\"pom.xml\",\"build.gradle\",\"build.gradle.kts\",\"settings.gradle\",\"settings.gradle.kts\",\"gradle.properties\",\"package-lock.json\",\"yarn.lock\",\"pnpm-lock.yaml\",\"npm-shrinkwrap.json\",\"requirements.txt\",\"Pipfile\",\"Pipfile.lock\",\"pyproject.toml\",\"setup.py\",\"setup.cfg\",\"Gemfile\",\"Gemfile.lock\",\"uv.lock\",\"CODEOWNERS\",\"AGENTS.md\",\"CLAUDE.md\",\"GEMINI.md\"],\"protected_path_prefixes\":[\".github/\",\".agents/\"]},\"create_report_incomplete_issue\":{},\"dispatch_workflow\":{\"max\":1,\"workflow_files\":{\"news-translate\":\".lock.yml\"},\"workflows\":[\"news-translate\"]},\"missing_data\":{},\"missing_tool\":{},\"noop\":{\"max\":1,\"report-as-issue\":\"true\"},\"report_incomplete\":{}}" GH_AW_CI_TRIGGER_TOKEN: ${{ secrets.GH_AW_CI_TRIGGER_TOKEN }} with: github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }} diff --git a/.github/workflows/news-month-ahead.md b/.github/workflows/news-month-ahead.md index 3ddab68a1..10a3b9779 100644 --- a/.github/workflows/news-month-ahead.md +++ b/.github/workflows/news-month-ahead.md @@ -2,6 +2,16 @@ name: "News: Month Ahead" description: Generates month-ahead strategic outlook articles in core languages (EN, SV). Translations handled by news-translate workflow. Runs on 1st of each month. 
strict: false +imports: + - ../prompts/00-base-contract.md + - ../prompts/01-bash-and-shell-safety.md + - ../prompts/02-mcp-access.md + - ../prompts/03-data-download.md + - ../prompts/04-analysis-pipeline.md + - ../prompts/05-analysis-gate.md + - ../prompts/06-article-generation.md + - ../prompts/07-commit-and-pr.md + - ../prompts/ext/tier-c-aggregation.md on: schedule: - cron: "0 8 1 * *" @@ -120,7 +130,7 @@ safe-outputs: labels: [agentic-news, analysis-data] draft: false expires: 14d - max: 2 + max: 1 add-comment: {} dispatch-workflow: workflows: [news-translate] @@ -158,26 +168,6 @@ steps: if [ "$WARM" = "false" ]; then echo "⚠️ MCP server did not respond after 6 attempts — agent will retry via in-prompt health gate" fi - echo "🔄 Starting background keep-alive pinger (every 30s, max 55 min — covers full 60-min workflow through safe-output PR creation)..." - KEEP_ALIVE_START=$(date +%s) - KEEP_ALIVE_END=$((KEEP_ALIVE_START + 3300)) - export MCP_URL KEEP_ALIVE_END - nohup bash -c ' - while :; do - NOW=$(date +%s) - if [ "$NOW" -ge "$KEEP_ALIVE_END" ]; then - break - fi - curl -sf --max-time 10 -X POST \ - -H "Content-Type: application/json" \ - -d "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\",\"params\":{}}" \ - "$MCP_URL" -o /dev/null 2>/dev/null || true - sleep 30 - done - ' </dev/null >/tmp/mcp-keepalive.log 2>&1 & - KEEP_ALIVE_PID=$! - disown "$KEEP_ALIVE_PID" 2>/dev/null || true - echo "Keep-alive PID: $KEEP_ALIVE_PID (auto-exits after 55 min; log: /tmp/mcp-keepalive.log)" - name: Pre-flight external endpoint reachability check (runs before MCP Gateway) run: | @@ -231,548 +221,51 @@ engine: model: claude-opus-4.7 --- -# 📅 Month Ahead Strategic Outlook Generator - -You are the **News Journalist Agent** for Riksdagsmonitor generating **month-ahead** strategic outlook articles. 
- -## 🔴 CRITICAL: AI Writes ALL Content with Iterative Improvement (v5.0) - -> **You are a political intelligence analyst producing forward-looking strategic analysis.** Your PRIMARY job is to produce excellent quality political intelligence through iterative improvement. You MUST: -> 1. **ANALYZE** upcoming parliamentary and government activity deeply with strategic foresight -> 2. **WRITE** genuine predictive intelligence with SWOT, stakeholder impacts, and Election 2026 scenarios -> 3. **ITERATE** — read ALL your output back completely and IMPROVE every section (minimum 2 full passes) -> 4. **SPEND THE FULL TIME** — use at least 45 of the 60 allocated minutes doing real work -> -> 🔴 **2+ PASSES MANDATORY**: Analysis Pass 1 (15 min) → Analysis Pass 2 improvement (7 min) → Article Pass 1 (10 min) → Article Pass 2 improvement (8 min). NEVER complete early. - -## 🔧 Workflow Dispatch Parameters - -- **force_generation** = `${{ github.event.inputs.force_generation }}` -- **languages** = `${{ github.event.inputs.languages }}` -- **analysis_depth** = `${{ github.event.inputs.analysis_depth }}` - -If **force_generation** is `true`, generate articles even if recent ones exist. Use the **languages** value to determine which languages to generate. - -## 🚨 CRITICAL: Single Article Type Focus - -**This workflow generates ONLY `month-ahead` articles.** Do not generate other article types. - -This is a **prospective** article providing a 30-day forward-looking strategic overview of upcoming parliamentary activity, scheduled votes, committee milestones, and government calendar events. - -## 🧠 Repo Memory - -Uses `memory/news-generation` branch. START: read `memory/news-generation/last-run-news-month-ahead.json` + `memory/news-generation/covered-documents/{YYYY-MM-DD}.json`. END: update both + `memory/news-generation/translation-status.json`. Skip already-covered dok_ids. 
- -## ⏱️ Time Budget (30 minutes) — ENFORCED Minimum 25 Minutes - -> 🔴 **SYSTEMIC ISSUE (PR #1794 audit, 2026-04-16)**: ALL news workflows completing early, producing shallow analysis. Agent MUST use at least 25 of 30 minutes. Completion < 25 min = insufficient iteration = REJECTED. - -```bash -date +%s > /tmp/start_time.txt -read START_TIME < /tmp/start_time.txt -``` - -- **Minutes 0–3**: Date check, MCP warm-up with `get_sync_status()` -- **Minutes 3–5**: Run download-parliamentary-data pipeline (download data) -- **Minutes 5–15**: 🚨 **AI Analysis Pass 1 (10 min minimum)**: Read ALL methodology guides, create analysis for EVERY document with Mermaid diagrams, evidence tables, SWOT entries. -- **Minutes 15–19**: 🚨 **AI Analysis Pass 2 (Part A)**: Read ALL analysis back, improve major sections, replace script stubs. -- **Minutes 19–21**: 🫀 **Heartbeat PR** — `git add && git commit` analysis artifacts so far, then `safeoutputs___create_pull_request` (title `🫀 Heartbeat - Month Ahead - {date}`). Refreshes the safeoutputs MCP session AND preserves work if later phases fail. Run `git checkout main` after the call so subsequent commits don't stack onto the frozen patch. -- **Minutes 21–22**: 🚨 **AI Analysis Pass 2 (Part B) + Enrichment Verification**: Complete remaining improvements and run enrichment verification before the shared minimum-time gate. -- **Minutes 22–23**: Run ENFORCED Minimum Time Gate (set `MINIMUM_ANALYSIS_MINUTES=14` for 30-min workflows) + final Enrichment Verification Gate (SHARED_PROMPT_PATTERNS.md). Both MUST pass. -- **Minutes 23–28**: Generate articles for all 14 languages. Read articles back, replace AI_MUST_REPLACE markers. Run article quality gate. -- **Minutes 28–29**: Validate and commit analysis + articles -- **Minutes 29–30**: Create PR with `safeoutputs___create_pull_request` - -> ⚠️ **Analysis must include color-coded Mermaid diagrams, evidence tables, and template structure compliance** — plain prose is NEVER acceptable. 
ALL script-generated stubs MUST be replaced with AI-enriched analysis. - -## ⚠️ CRITICAL: Bash Tool Call Format - -> **Full reference:** See `SHARED_PROMPT_PATTERNS.md` → "Bash Tool Call Format". Key rule: every `bash` call MUST have both `command` AND `description` parameters. Example: `bash({ command: "date -u '+%Y-%m-%d'", description: "Get current UTC date" })`. Calls missing either field fail with `Multiple validation errors: - "command": Required - "description": Required`. - -## 🛡️ AWF Shell Safety - -> **Full reference:** See `SHARED_PROMPT_PATTERNS.md` → "AWF Shell Safety". Summary: use `$VAR` not `$`+`{VAR}`, use `find -exec` not `$(...)`, set defaults with `if/then` before using `$VAR`. - -## 🔤 UTF-8 Encoding - -> **Full reference:** See `SHARED_PROMPT_PATTERNS.md` → "UTF-8 Encoding". Summary: use native UTF-8 (`ö`, `ä`, `å`) — NEVER HTML entities (`ö`, `ä`). Author: `James Pether Sörling`. - - -## Required Skills - -Consult as needed — do NOT read all files upfront: -- **Skills:** `.github/skills/editorial-standards/SKILL.md`, `.github/skills/swedish-political-system/SKILL.md`, `.github/skills/legislative-monitoring/SKILL.md`, `.github/skills/riksdag-regering-mcp/SKILL.md`, `.github/skills/language-expertise/SKILL.md`, `.github/skills/gh-aw-safe-outputs/SKILL.md` -- **Analysis:** `scripts/prompts/v2/political-analysis.md`, `per-file-intelligence-analysis.md`, `stakeholder-perspectives.md`, `quality-criteria.md` -- **Methodology:** `analysis/methodologies/ai-driven-analysis-guide.md` (v5.0) + `analysis/templates/per-file-political-intelligence.md` - -## 📊 MANDATORY Multi-Step AI Analysis Framework - -### Article Type Isolation - -> 🚨 **This workflow writes analysis ONLY to `analysis/daily/$ARTICLE_DATE/month-ahead/`**. NEVER write to the parent date directory or another article type's folder. See SHARED_PROMPT_PATTERNS.md "Article Type Isolation" section. - -### Standardised Analysis Depth Gate - -> ⚠️ **Default is `deep`** — not `standard`. 
Analysis must always produce publication-quality output with Mermaid diagrams and evidence tables. - -| Depth | AI iterations | SWOT stakeholders | Charts | Mindmap | Mermaid diagrams | Risk matrix (L×I) | Forward indicators | Min. analysis time | -|-------|--------------|-------------------|--------|---------|-----------------|-------------------|-------------------|-------------------| -| standard | 1-2 | ≥5 (of 8 groups) | ≥1 | optional | ≥1 color-coded | ≥2 risks scored | ≥2 with triggers | 10 minutes | -| deep | 2-3 | ≥7 (of 8 groups) | ≥2 | required | ≥2 color-coded | ≥4 risks scored | ≥3 with triggers | 15 minutes | -| comprehensive | 3+ | all 8 groups | ≥3 | required | ≥3 color-coded | ≥6 risks scored | ≥5 with triggers | 20 minutes | - -**The 8 mandatory stakeholder groups are**: Citizens, Government Coalition, Opposition Bloc, Business/Industry, Civil Society, International/EU, Judiciary/Constitutional, Media/Public Opinion. Every group MUST be analyzed with specific evidence (dok_id, vote counts, named politicians). - -**Minimum requirement for ALL depths**: Every analysis file must contain at least 1 color-coded Mermaid diagram, structured evidence tables with dok_id citations, quantified risk matrix with numeric L×I scores, forward indicators with specific triggers/timelines, confidence labels on all analytical claims, and follow the corresponding template structure exactly. Plain prose without tables/diagrams is NEVER acceptable regardless of depth level. - -> **Read `analysis_depth` input first** (default: `deep`). This controls iteration count and section requirements. - -Based on the editorial profile for `month-ahead` (from `scripts/editorial-framework.ts`): -- **SWOT**: ALL 8 stakeholder groups analyzed with forward-looking evidence (scheduled debates, committee meetings, expected votes) -- **Dashboard**: required (min. 2 Chart.js charts) -- **Mindmap**: required (CSS policy mindmap) -- **Min. 
stakeholders**: 8 perspectives (Citizens, Government Coalition, Opposition Bloc, Business/Industry, Civil Society, International/EU, Judiciary/Constitutional, Media/Public Opinion) -- **Risk Matrix**: required — numeric L×I scores for upcoming legislative risks, coalition stress points, policy implementation risks -- **Forward Indicators**: required — specific dates for committee sessions, plenary debates, expected government decisions -- **Confidence Labels**: `[HIGH]`/`[MEDIUM]`/`[LOW]` on ALL analytical claims -- **Mermaid Diagrams**: ≥1 color-coded Gantt chart or legislative pipeline showing monthly agenda flow -- **Cross-Document Pattern Analysis**: required — identify thematic clusters (e.g., "3 defense-related meetings indicate coordinated legislative push") -- **AI iterations**: 2 (standard), 2 (deep), or 3 (comprehensive) - -> 🚨 **ANTI-PATTERNS (REJECTED)**: Generic "Requires committee review and chamber debate" (must be unique per entry), SWOT with only 3 groups, no forward date-specific indicators, no Mermaid diagrams, no cross-document synthesis - -### 🗳️ Election 2026 Lens (Mandatory — v5.0) - -Every analysis MUST include an **Election 2026 Implications** section assessing: Electoral Impact, Coalition Scenarios, Voter Salience, Campaign Vulnerability, and Policy Legacy. Use the **5-level confidence scale** (⬛VERY LOW → 🟥LOW → 🟧MEDIUM → 🟩HIGH → 🟦VERY HIGH). See `analysis/methodologies/ai-driven-analysis-guide.md` v5.0 for full criteria. - -### Phase 1 — Data Collection & Initial Analysis -1. Fetch MCP data (`get_calendar_events`, `get_propositioner`, `get_motioner`, `get_interpellationer`, `get_sync_status`) -2. Build monthly legislative pipeline with key milestones -3. Build initial outline: strategic outlook lede, legislative pipeline, policy domain forecast - -### Phase 2 — Iterative Depth Enhancement (repeat per `analysis_depth`) -For each AI iteration: -1. 
**Full SWOT Analysis**: Generate multi-stakeholder SWOT with ALL 8 groups (Citizens, Government Coalition, Opposition Bloc, Business/Industry, Civil Society, International/EU, Judiciary/Constitutional, Media/Public Opinion) focusing on upcoming legislative priorities. Use structured evidence tables with columns: `#`, `Statement`, `Evidence (dok_id)`, `Confidence`, `Impact`, `Entry Date`. Every entry MUST cite specific scheduled debate, committee meeting, or expected vote. -2. **Strategic Dashboard Summary**: Provide concise comparative summaries for at least 2 analytical views (for example, documents by week and policy domain distribution) using prose and/or markdown tables that can be included directly in the article without requiring any undocumented rendering pipeline. -3. **Policy Relationship Outline**: Describe inter-connected policy areas as a clear hierarchical outline (central topic, major branches, and sub-items) in standard markdown so the relationships are explicit without assuming automated mindmap rendering. -4. **Quality Gate** (check before next iteration): - - Verify forward-looking watch-points reference specific scheduled events - - Verify all Swedish API text is translated - - Verify word count ≥ 900 - -### Phase 3 — Final Quality Gate Before PR -Run all validation checks from the **MANDATORY Quality Validation** section below before committing. - -## MANDATORY Date Validation - -```bash -echo "=== Date Validation Check ===" -date -u "+Current UTC: %A %Y-%m-%d %H:%M:%S" -echo "Article Type: month-ahead" -echo "============================" -``` - -## 📅 Riksmöte (Parliamentary Session) Calculation - -September+ → `rm = "{year}/{year+1 2-digit}"` (e.g. Oct 2026 → `2026/27`). Before September → `rm = "{year-1}/{year 2-digit}"` (e.g. Feb 2026 → `2025/26`). Use in ALL MCP queries requiring `rm`. - -## MANDATORY Deduplication Check - -Before generating articles, check if articles already exist for the target date. 
**This check controls article GENERATION only — the deep political analysis phase ALWAYS runs regardless.** -```bash -# Resolve article date: use workflow_dispatch input when provided, fallback to UTC today -ARTICLE_DATE="${{ github.event.inputs.article_date }}" -if [ -z "$ARTICLE_DATE" ]; then - date -u +%Y-%m-%d > /tmp/today.txt - read ARTICLE_DATE < /tmp/today.txt -fi -ARTICLE_TYPE="month-ahead" -# Derive FORCE_GENERATION from the workflow_dispatch input -FORCE_GENERATION="${{ github.event.inputs.force_generation || 'false' }}" -ls news/$ARTICLE_DATE-$ARTICLE_TYPE-en.html 2>/dev/null | wc -l > /tmp/existing_count.txt -read EXISTING < /tmp/existing_count.txt -if [ "$EXISTING" -gt 0 ] && [ "$FORCE_GENERATION" != "true" ]; then - echo "📋 Articles for $ARTICLE_DATE/$ARTICLE_TYPE already exist — article generation will be skipped (analysis still runs)" - SKIP_ARTICLE_GENERATION=true - echo "SKIP_ARTICLE_GENERATION=true" >> "$GITHUB_ENV" -fi -# NOTE: Do NOT exit here or call safeoutputs___noop — analysis phase MUST still execute -# Later article-generation steps MUST gate on: if [ "$SKIP_ARTICLE_GENERATION" != "true" ]; then ... - -``` - -> **🚨 NEVER call `safeoutputs___noop` because articles already exist.** If articles exist, the workflow MUST still run the full 15-20 minute deep political analysis phase and commit analysis artifacts. The dedup check only controls whether NEW HTML articles are generated — analysis is the primary output and always runs. If analysis produces artifacts, use `safeoutputs___create_pull_request` with `analysis-only` label. - -## MANDATORY MCP Health Gate - -> **The step-level pre-warm (6 attempts × 20s) already mitigates Render.com cold starts.** This in-prompt gate is a lightweight verification — NOT a full retry loop. Do NOT spend more than 90 seconds here. -> -> **📖 Full MCP architecture, tool names, and calling conventions:** See `SHARED_PROMPT_PATTERNS.md` → "MCP Architecture & Tool Reference" section. 
Tool names are EXACT: riksdag tools use underscores (`get_sync_status`), World Bank uses hyphens (`get-economic-data`), SCB uses underscores (`search_tables`). - -1. Call `get_sync_status({})` — retry up to **3×** (20s wait between each, not 45s — the server is already warm from the step-level pre-warm) -2. If you get **"unknown tool"** or **"0 tools registered"** errors after 3 attempts, run a quick diagnostic: -```bash -echo "🔍 MCP Quick Diagnostic" -echo "Direct MCP server:" && curl -sf --max-time 15 -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}' "https://riksdag-regering-ai.onrender.com/mcp" 2>/dev/null | head -c 200 || echo "UNREACHABLE" -``` -3. After 3 failures → `safeoutputs___noop({"message": "MCP server unavailable after 3 attempts — step-level pre-warm also failed"})` -4. **ALL content MUST come from live MCP data.** Never use cached articles, stale data, or AI-fabricated content. -5. **⏱️ Do NOT spend more than 2 minutes on MCP warmup** — proceed to analysis immediately once `get_sync_status` succeeds. - -## 🛡️ File Ownership Contract - -Content workflows: only create/modify **EN and SV** files (`news/YYYY-MM-DD-*-en.html`, `*-sv.html`). Validate with `npx tsx scripts/validate-file-ownership.ts content`. Fix violations: `git restore --staged --worktree -- <file>` (tracked) or `rm <file>` (untracked). - -### Branch Naming Convention - -Branch: `news/content/{YYYY-MM-DD}/{article-type}` (e.g. `news/content/2026-03-23/month-ahead`). `safeoutputs___create_pull_request` handles this automatically. - -## MANDATORY PR Creation - -> **🚀 HOW SAFE PR CREATION WORKS — READ THIS FIRST** -> -> The `safeoutputs___create_pull_request` tool handles **everything**: branch creation, pushing commits, and opening the PR. Do NOT run `git push` or `git checkout -b` manually. -> -> **Exact steps:** -> 1. Write article files to `news/` using `bash` or `edit` tools -> 2. 
Stage and commit locally (scoped to resolved month-ahead analysis subfolder): `git add news/*month-ahead*.html news/metadata/ "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER/" analysis/weekly/ && git commit -m "Add month-ahead articles and analysis artifacts"` -> 3. Call `safeoutputs___create_pull_request` with `title`, `body`, and `labels` -> -- ✅ `safeoutputs___create_pull_request` for articles or analysis-only PRs -- ✅ `safeoutputs___noop` ONLY if MCP unreachable after 5 attempts AND no analysis artifacts exist -- ❌ NEVER noop because articles already exist — analysis always runs -- ❌ Safe output tools are in your tool list — NEVER search for them via bash - -## 🌐 Dispatch Translation Workflow - -After creating the content PR, dispatch translations: `safeoutputs___dispatch_workflow({ "workflow_name": "news-translate", "inputs": { "article_date": "<YYYY-MM-DD>", "article_type": "<article-type>", "languages": "all-extra" } })`. See `news-translate.md` for full translation quality rules. - -## MCP Tools - -**ALWAYS call `get_sync_status()` FIRST.** - -**Primary tool:** `get_calendar_events` — 30-day forward calendar (**⚠️ Known issue: may return HTML instead of JSON; if this happens, treat it as a calendar retrieval failure and state that explicitly in the analysis. You may query `search_dokument` with a recent lookback window only as a proxy signal of parliamentary activity (e.g., recent publications related to expected topics), but must never treat "no documents found" as "no upcoming events."**) -**Cross-reference:** `get_propositioner`, `search_dokument`, `search_regering` -**Statistical enrichment:** SCB/World Bank — for major economic milestones (budget debates, economic policy events), pre-fetch trend data from committee-mapped indicators. See `scripts/scb-context.ts` for 15 domain→committee mappings. 
**World Bank indicators (144 total)**: `view analysis/worldbank/indicators-inventory.json` for the complete inventory with `policyAreas`, `committees`, and `mcpTool` fields per indicator. Use MCP tools for indicators with `mcpTool` field. See `SHARED_PROMPT_PATTERNS.md` §"WORLD BANK ECONOMIC CONTEXT INTEGRATION" for Chart.js chart templates (`economic-comparison`, `economic-trend`, `nordic-radar`). MUST generate ≥2 economic charts for monthly forecasting. - -```javascript -get_sync_status({}) -// Get events for next 30 days -const today = new Date().toISOString().split('T')[0]; -const nextMonth = new Date(Date.now() + 30*86400000).toISOString().split('T')[0]; -get_calendar_events({ from: today, tom: nextMonth, limit: 200 }) -get_propositioner({ rm: <calculated riksmöte>, limit: 20 }) -search_regering({ dateFrom: today, dateTo: nextMonth, limit: 10 }) -``` - -## Generation Steps - -### Step 1: Check Existing Articles (Analysis Always Runs) -🚨 **FULL ANALYSIS BEFORE ANY ARTICLE (BLOCKING)**: The complete deep political analysis phase following [`analysis/methodologies/ai-driven-analysis-guide.md`](../../analysis/methodologies/ai-driven-analysis-guide.md) (Rule 0 two-pass iteration + Rules 6–8 depth tiers, 15 min Pass 1 + 7 min Pass 2 minimum, ALL 9 required artifacts) **MUST** complete **BEFORE** any article HTML is created or updated. Articles MUST be (re)generated from the improved Pass 2 analysis — never from Pass 1 stubs, never from scripts alone, never skipping Pass 2. Violations = REJECTED PR (PR #1705 comment audit, 2026-04-18). - -Check if month-ahead articles already exist for the target date. If they do, skip article generation but **ALWAYS run the full deep political analysis phase** — analysis is the primary output and must execute on every run regardless of article existence. 
- -### Step 2: Query MCP -```javascript -get_sync_status({}) -get_calendar_events({ from: today, tom: nextMonth, limit: 200 }) -``` - -### Step 2.5: Run Pre-Article Analysis Pipeline - -**CRITICAL: Download data first, then AI creates ALL 14 analysis artifacts (9 core + 5 Tier-C reference-grade).** `download-parliamentary-data.ts` downloads raw data ONLY — it performs NO analysis. The AI agent MUST: -1. Read `analysis/methodologies/ai-driven-analysis-guide.md` fully -2. Read ALL 8 templates in `analysis/templates/` -3. **STEP 0 — Upstream Watchpoint Ingestion (MANDATORY, per `SHARED_PROMPT_PATTERNS.md` §"Recent Daily Knowledge-Base Synthesis")**: ingest forward indicators from the last **14 days** of sibling daily runs + the last `weekly-review` + the last `week-ahead`. Build the Watchpoint Reconciliation table (no silent drops). -4. Create ALL **14** analysis files in `analysis/daily/YYYY-MM-DD/month-ahead/` using evidence from the downloaded data AND the ingested upstream watchpoints -5. Reference exemplar: [`analysis/daily/2026-04-19/month-ahead/`](../../analysis/daily/2026-04-19/month-ahead/) — **the reference-grade month-ahead package** (14-file Tier-C package with full upstream reconciliation and Nordic comparative benchmarking) - -Run the **14-Artifact Completeness Gate** (aggregation workflow) from `SHARED_PROMPT_PATTERNS.md` §"14 REQUIRED Artifacts for AGGREGATION Workflows — Reference-Grade Tier-C" to verify ALL 14 files exist: the 9 core (synthesis-summary.md, swot-analysis.md, risk-assessment.md, threat-analysis.md, classification-results.md, significance-scoring.md, stakeholder-perspectives.md, cross-reference-map.md, data-download-manifest.md) PLUS the 5 Tier-C reference-grade files (README.md, executive-brief.md, scenario-analysis.md, comparative-international.md, methodology-reflection.md). - -> 📐 **Period-scope multiplier: 1.3×** — `month-ahead` covers 30 days forward. 
Scaled Tier-C minimums: README ≥ 3 900 B · executive-brief ≥ 4 550 B · scenario-analysis ≥ 5 200 B · comparative-international ≥ 5 200 B · methodology-reflection ≥ 5 200 B. See `SHARED_PROMPT_PATTERNS.md` §"Period-Scope Multipliers". Reference exemplar: [`analysis/daily/2026-04-19/month-ahead/`](../../analysis/daily/2026-04-19/month-ahead/). - -```bash -date -u +%Y-%m-%d > /tmp/today.txt -read ARTICLE_DATE < /tmp/today.txt -echo "📊 Running pre-article analysis for $ARTICLE_DATE..." -# CRITICAL: Source mcp-setup.sh FIRST to set MCP_SERVER_URL and MCP_AUTH_TOKEN for the gateway -source scripts/mcp-setup.sh -npx tsx scripts/download-parliamentary-data.ts --date "$ARTICLE_DATE" --limit 50 > /tmp/pipeline-output.log 2>&1 -PIPE_EXIT=$? -cat /tmp/pipeline-output.log -if [ "$PIPE_EXIT" -ne 0 ]; then - echo "❌ Pipeline failed — agent MUST diagnose and fix (read /tmp/pipeline-output.log)" - tail -20 /tmp/pipeline-output.log -fi -echo "📊 Analysis artifacts for $ARTICLE_DATE:" -ls -la "analysis/daily/$ARTICLE_DATE/" 2>/dev/null || echo "⚠️ No analysis output" -find analysis/data/ -name "*.json" -type f 2>/dev/null | wc -l > /tmp/data_count.txt -read DATA_JSON_COUNT < /tmp/data_count.txt -echo "📊 JSON data files: $DATA_JSON_COUNT (must be > 0)" -# Relocate pipeline artifacts: download-parliamentary-data.ts writes to analysis/daily/$DATE/ (unscoped) -# but this workflow needs them under analysis/daily/$DATE/month-ahead/ -# === Run Suffix Resolution (see SHARED_PROMPT_PATTERNS.md) === -BASE_SUBFOLDER="month-ahead" -ANALYSIS_SUBFOLDER="$BASE_SUBFOLDER" -if [ "$FORCE_GENERATION" != "true" ]; then - _SUFFIX=1 - while [ -f "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER/synthesis-summary.md" ]; do - _SUFFIX=$((_SUFFIX + 1)) - ANALYSIS_SUBFOLDER="$BASE_SUBFOLDER-$_SUFFIX" - done -fi -echo "📁 Analysis subfolder resolved: $ANALYSIS_SUBFOLDER" -UNSCOPED_DIR="analysis/daily/$ARTICLE_DATE" -SCOPED_DIR="$UNSCOPED_DIR/$ANALYSIS_SUBFOLDER" -if [ -d "$UNSCOPED_DIR" ]; then - mkdir -p 
"$SCOPED_DIR" - if find "$UNSCOPED_DIR" -maxdepth 1 -type f -name "*.md" | grep -q .; then - find "$UNSCOPED_DIR" -maxdepth 1 -type f -name "*.md" -exec mv -f {} "$SCOPED_DIR/" \; - echo "📁 Moved pipeline *.md artifacts → $SCOPED_DIR (root cleaned to prevent merge conflicts)" - fi - if [ -d "$UNSCOPED_DIR/documents" ]; then - mkdir -p "$SCOPED_DIR/documents" - find "$UNSCOPED_DIR/documents" -mindepth 1 -maxdepth 1 -exec mv {} "$SCOPED_DIR/documents/" \; - rmdir "$UNSCOPED_DIR/documents" 2>/dev/null || true - echo "📁 Relocated pipeline documents/ contents → $SCOPED_DIR/documents" - fi -fi -if [ "$DATA_JSON_COUNT" -eq 0 ]; then - echo "🚨 CRITICAL: Pipeline downloaded ZERO data. Agent MUST diagnose and fix — do NOT fabricate analysis." -fi -``` - -**Weekly aggregation**: Since this is a monthly-scope workflow, also aggregate the current week's daily analyses for complete context: - -```bash -date -u +%G-W%V > /tmp/week_label.txt -read WEEK_LABEL < /tmp/week_label.txt -echo "📅 Running weekly aggregation for $WEEK_LABEL..." -source scripts/mcp-setup.sh && npx tsx scripts/download-parliamentary-data.ts --aggregate weekly --date "$WEEK_LABEL" || echo "⚠️ Weekly aggregation failed (non-blocking)" -ls -la "analysis/weekly/$WEEK_LABEL/" 2>/dev/null || echo "⚠️ No weekly aggregation output" -``` - -These files are committed alongside articles for human review and continuous improvement. - -### 🚨 MANDATORY: Analysis Artifacts Must ALWAYS Be Committed - -**Before deciding whether to generate articles or call noop, you MUST:** - -1. **Review the analysis artifacts** in `analysis/daily/YYYY-MM-DD/` and `analysis/weekly/` — read `synthesis-summary.md` and `significance-scoring.md` to understand what was found -2. **Summarize the analysis findings** — note how many documents were downloaded, their significance scores, key themes, and risk levels -3. 
**ALWAYS commit analysis artifacts** regardless of whether articles will be generated: - -```bash -date -u +%Y-%m-%d > /tmp/today.txt -read ARTICLE_DATE < /tmp/today.txt -ANALYSIS_DIR="analysis/daily/$ARTICLE_DATE/month-ahead" -ANALYSIS_COUNT=0 -if [ -d "$ANALYSIS_DIR" ]; then - find "$ANALYSIS_DIR" -type f 2>/dev/null | wc -l > /tmp/analysis_count.txt - read ANALYSIS_COUNT < /tmp/analysis_count.txt -fi -date -u +%G-W%V > /tmp/week_label.txt -read WEEK_LABEL < /tmp/week_label.txt -WEEKLY_DIR="analysis/weekly/$WEEK_LABEL" -if [ -d "$WEEKLY_DIR" ]; then - find "$WEEKLY_DIR" -type f 2>/dev/null | wc -l > /tmp/weekly_count.txt - read WEEKLY_COUNT < /tmp/weekly_count.txt - ANALYSIS_COUNT=$((ANALYSIS_COUNT + WEEKLY_COUNT)) -fi -if [ "$ANALYSIS_COUNT" -gt 0 ]; then - echo "📊 Found $ANALYSIS_COUNT total analysis artifacts — these MUST be committed (do NOT use safeoutputs___noop)" -else - echo "📊 Found 0 analysis artifacts — safeoutputs___noop is allowed (no files to commit)" -fi -``` - -> **🚨 CRITICAL RULE: Never call `safeoutputs___noop` if analysis artifacts exist.** If the pre-article analysis pipeline produced ANY output files, you MUST commit them via `safeoutputs___create_pull_request` — even if no articles are generated. Use an analysis-only PR with title: `📊 Analysis Only - Month Ahead - {date}` and label `analysis-only`. Only use `safeoutputs___noop` if the analysis pipeline produced ZERO output files (truly nothing to analyze). - -### 🔬 Step 2b: Read ALL Analysis Files + Cross-Reference Sibling Types (MANDATORY) - -> 🔴 **NON-NEGOTIABLE**: Month-ahead forecasting synthesizes across ALL article types. The AI MUST read ALL analysis files from ALL article types before generating the forecast. See SHARED_PROMPT_PATTERNS.md §"MANDATORY PRE-ARTICLE ANALYSIS READING". - -```bash -ANALYSIS_SUBFOLDER="month-ahead" -ANALYSIS_BASE="analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER" - -echo "📖 Reading ALL analysis files from $ANALYSIS_BASE..." 
-if [ -d "$ANALYSIS_BASE" ]; then - for MD_FILE in "$ANALYSIS_BASE"/*.md; do - if [ -f "$MD_FILE" ]; then - echo "--- Reading: $MD_FILE ---" - cat "$MD_FILE" - echo "" - fi - done -fi - -echo "🔍 Cross-referencing sibling analysis types for $ARTICLE_DATE..." -for SIBLING_DIR in analysis/daily/$ARTICLE_DATE/*/; do - if [ -d "$SIBLING_DIR" ]; then - echo "$SIBLING_DIR" | sed 's|/$||' | sed 's|.*/||' > /tmp/sibling_type.txt - read SIBLING_TYPE < /tmp/sibling_type.txt - if [ "$SIBLING_TYPE" = "$ANALYSIS_SUBFOLDER" ]; then continue; fi - echo "📖 Cross-referencing: $SIBLING_TYPE" - for SIBLING_FILE in "$SIBLING_DIR/synthesis-summary.md" "$SIBLING_DIR/significance-scoring.md"; do - if [ -f "$SIBLING_FILE" ]; then - echo "--- Sibling ($SIBLING_TYPE): $SIBLING_FILE ---" - cat "$SIBLING_FILE" - echo "" - fi - done - fi -done -echo "✅ Cross-referencing complete — month-ahead MUST incorporate findings from all sibling types" -``` - -### Step 3: Generate Articles - -```bash -# Set LANGUAGES_INPUT to the value shown in Workflow Dispatch Parameters above -LANGUAGES_INPUT="<value from Workflow Dispatch Parameters>" -[ -z "$LANGUAGES_INPUT" ] && LANGUAGES_INPUT="all" - -case "$LANGUAGES_INPUT" in - "nordic") LANG_ARG="en,sv,da,no,fi" ;; - "eu-core") LANG_ARG="en,sv,de,fr,es,nl" ;; - "all") LANG_ARG="en,sv,da,no,fi,de,fr,es,nl,ar,he,ja,ko,zh" ;; - *) LANG_ARG="$LANGUAGES_INPUT" ;; -esac - -source scripts/mcp-setup.sh && npx tsx scripts/generate-news-enhanced.ts \ - --types=month-ahead \ - --languages="$LANG_ARG" \ - --skip-existing -``` - -**Article Navigation Verification**: The `generate-news-enhanced.ts` script automatically includes all required navigation elements: -- **Language switcher** (`<nav class="language-switcher">`) after `<body>` with all 14 languages -- **Back-to-news top nav** (`<div class="article-top-nav">`) with localized back link after language switcher -- **Footer back-to-news link** in `<footer class="article-footer">` - -These elements are validated by `bash 
scripts/validate-news-generation.sh` (Checks 8–10). The fix script is a **fallback only** — do not run it by default: -```bash -# FALLBACK ONLY — use if validate-news-generation.sh reports missing navigation elements -npx tsx scripts/fix-article-navigation.ts -``` - ---- - -## Step 2.6: Economic Data Acquisition (MANDATORY) - -> **Contract**: [`.github/aw/ECONOMIC_DATA_CONTRACT.md`](../aw/ECONOMIC_DATA_CONTRACT.md) — the **single source of truth** for World Bank + SCB data, Chart.js visualisations, and AI commentary. Follow it exactly; the Step 6 quality gate (`scripts/validate-economic-context.ts`) **blocks the PR** if any element is missing. - -**What you MUST do before writing any prose:** - -1. `view analysis/worldbank/indicators-inventory.json` and pick every indicator whose `committees` / `policyAreas` match the day's source documents. -2. Call `world-bank.get-economic-data` / `get-social-data` / `get-health-data` / `get-education-data` for Sweden (10-year series for primary domains) and for DK/NO/FI/DE (5-year series for the top 3 indicators — needed for the Nordic comparison bars and radar). -3. Call `scb.search_tables` + `scb.query_table` using the committee → TAB mapping in `scripts/scb-context.ts`. **`language` MUST be `"sv"` or `"en"` — NEVER `"no"`** (SCB returns HTTP 400 "Unsupported language"). -4. Retry every World Bank call up to **3 times** on failure. Cache raw responses under `analysis/data/worldbank/<YYYY>/<indicator>-<country>.json` so later article types in the same daily run reuse the data. -5. 
Write `analysis/daily/<ARTICLE_DATE>/<ANALYSIS_SUBFOLDER>/economic-data.json` matching `analysis/schemas/economic-data.schema.json`: - -```jsonc -{ - "version": "1.0", - "articleType": "month-ahead", - "date": "<YYYY-MM-DD>", - "policyDomains": ["fiscal policy", "labor market"], - "dataPoints": [ - { "countryCode": "SWE", "countryName": "Sweden", "indicatorId": "NY.GDP.MKTP.KD.ZG", "date": "2024", "value": 0.82 }, - { "countryCode": "DNK", "countryName": "Denmark", "indicatorId": "NY.GDP.MKTP.KD.ZG", "date": "2024", "value": 1.75 } - ], - "commentary": "<will be filled in Step 3d>", - "source": { "worldBank": ["NY.GDP.MKTP.KD.ZG", "FP.CPI.TOTL.ZG"], "scb": ["TAB1291"] } -} -``` - -**Non-negotiable**: `dataPoints` MUST be non-empty. The HTML renderer (`scripts/data-transformers/content-generators/economic-dashboard-section.ts`) emits real Chart.js canvases only when the file exists with entries — otherwise the validator fails the PR. - -**Minimum coverage (enforced by the validator):** see the matrix in `ECONOMIC_DATA_CONTRACT.md` §"Coverage matrix" for this article type's chart count, commentary word minimum, and D3 requirement. - ---- -### Step 3b: AI Title, Meta Description & Analysis References - -> 🚨 **MANDATORY** — After article HTML is generated, the AI MUST improve titles, descriptions, and add analysis references. See `SHARED_PROMPT_PATTERNS.md` sections "AI-DRIVEN TITLE & META DESCRIPTION GENERATION" and "ANALYSIS FILE GITHUB REFERENCES" for full protocols. - -**1. Generate newsworthy titles** — Replace script-generated title with: `[Active Verb] + [Specific Institution] + [Concrete Policy Action]`. BANNED: ❌ generic category labels or ": {Topic} in Focus". - -**2. Generate AI meta descriptions** (150-160 chars) — Key political intelligence summary. BANNED: ❌ "Analysis of N documents". - -**3. 
🔴 Add analysis references (MANDATORY — VERIFY AFTER)** — Insert "📊 Analysis & Sources" HTML block (from SHARED_PROMPT_PATTERNS.md §ANALYSIS FILE GITHUB REFERENCES) linking to `analysis/daily/$ARTICLE_DATE/month-ahead/` files and `analysis/methodologies/ai-driven-analysis-guide.md`. +# 🗓️ Month Ahead -**After inserting, VERIFY** by running: -```bash -for FILE in news/$ARTICLE_DATE-month-ahead-*.html; do - if [ -f "$FILE" ] && ! grep -q 'class="analysis-references"' "$FILE"; then - echo "🔴 MISSING analysis-references in: $FILE — MUST FIX NOW" - fi -done -``` +Forward-looking 30-day parliamentary calendar + political intelligence brief. Tier-C reference-grade output (14 artifacts). Core languages EN, SV. -**4. Update all metadata** — `<title>`, `<meta name="description">`, `<meta property="og:title">`, `<meta property="og:description">`, and `<h1>`. +## Tier-C (reference-grade) requirements -### Step 4: Translate, Validate & Verify Analysis Quality +This workflow imports `../prompts/ext/tier-c-aggregation.md`. Produce **all 14 artifacts** (9 core + 5 Tier-C) and cross-reference sibling analyses. See the extension for the full rules. -Run analysis references fix, validation, and HTMLHint before creating PR: -```bash -# 🔴 MANDATORY: Inject analysis references into any article missing them -npx tsx scripts/fix-analysis-references.ts --date "$ARTICLE_DATE" --rewrite --type month-ahead +## What this workflow does -bash scripts/validate-news-generation.sh -VALIDATION_EXIT=$? -if [ "$VALIDATION_EXIT" -ne 0 ]; then - echo "❌ News generation validation failed. Fix the reported issues before creating a PR." - exit "$VALIDATION_EXIT" -fi +- **Article type**: `month-ahead` +- **Analysis subfolder**: `analysis/daily/$ARTICLE_DATE/month-ahead/` +- **Core languages produced**: `en`, `sv` (remaining 12 languages dispatched to `news-translate`) +- **One pull request per run** containing analysis + articles + visualisation data. 
-# HTMLHint validation with auto-fix for common nesting errors -find news -maxdepth 1 -name '*-*.html' 2>/dev/null | wc -l > /tmp/news_count.txt -read NEWS_FILES < /tmp/news_count.txt -if [ "$NEWS_FILES" -gt 0 ]; then - if ! npx htmlhint "news/*-*.html" 2>/dev/null; then - echo "⚠️ HTML validation errors found, attempting auto-fix..." - npx tsx scripts/article-quality-enhancer.ts --fix - if ! npx htmlhint "news/*-*.html"; then - echo "❌ HTML validation failed after auto-fix. Please fix remaining issues before creating PR." - exit 1 - fi - fi -fi -# Playwright visual validation (accessibility, RTL, responsive) -npx tsx scripts/validate-articles-playwright.ts --filter "month-ahead" +## Time budget (60 min, minimum 45 min of real work) -# Validate JSON-LD cross-references -npx tsx scripts/validate-cross-references.ts news/*-month-ahead-*.html -``` +| Minutes | Phase | Module | +|---------|-------|--------| +| 0–2 | MCP pre-warm + `get_sync_status` | 02 | +| 2–6 | Download data + catalogue | 03 | +| 6–25 | Analysis Pass 1 (methodology read + per-doc analyses + 9 artifacts) | 04 | +| 25–35 | Analysis Pass 2 (read-back + improvements) | 04 | +| 35–37 | Analysis Gate | 05 | +| 37–48 | Article Pass 1 + Pass 2 (EN, SV) | 06 | +| 48–55 | Visual + link validation | 06 | +| 55–60 | Stage, commit, **ONE** `safeoutputs___create_pull_request` | 07 | -**CRITICAL: Each article MUST contain real analysis, not just a list of translated event titles.** -Every generated article must include strategic outlook with political context, not merely translated calendar entries. +Trim scope before quality. Never open a second PR to "save" partial work — there is no second PR. -**Note**: News index files, metadata, and sitemap are generated automatically at build time by the `prebuild` script. Do NOT run generation scripts or commit their output — only commit the article HTML files. 
+## Inputs -## Article Content Structure +- `article_date` — override date (defaults to today) +- `force_generation` — regenerate even if today's article exists (analysis is always refreshed regardless) +- `languages` — core content languages (default `en,sv`) +- `analysis_depth` — `standard` | `deep` (default) | `comprehensive` -Month-ahead articles should include: -1. **Monthly Overview**: Summary of major upcoming legislative milestones -2. **Week-by-Week Preview**: Key events broken down by week -3. **Policy Agenda**: Government priorities and scheduled policy announcements -4. **Committee Calendar**: Which committees have significant work planned -5. **Watch Points**: Issues likely to generate political controversy -6. **International Context**: EU coordination, Nordic cooperation events +## Dedup & analysis-only path -## 🌐 Translation Quality +If articles for `$ARTICLE_DATE` + `month-ahead` already exist **and** `force_generation=false`: -EN/SV only: all headings, meta, content in correct language; no untranslated `data-translate` spans; Swedish API titles translated. Full rules: `news-translate.md`. +- Still run the full analysis pipeline (modules 03 → 04 → 05). +- Commit the analysis. +- Open the single PR with title `📊 Analysis Only — Month Ahead — $ARTICLE_DATE` and label `analysis-only`. -## Step 3d: Economic Commentary (MANDATORY) +Analysis is the primary product — a run never "does nothing" just because articles exist. -> After Step 3c and **before** calling `safeoutputs.create_pull_request`, re-open `economic-data.json` and replace the placeholder `commentary` string with a 2–4 sentence paragraph that: -> - cites **2–3 concrete numeric values** from `dataPoints`; -> - ties the numbers to the day's political developments (not definitions of indicators); -> - is written in plain English (translations are produced downstream by `news-translate`); -> - meets the minimum word count in the coverage matrix for this article type. 
-> -> Banned phrasings (the multi-dim quality score flags these): "The political landscape remains fluid…", "Touches on X policy…", pure indicator definitions. -> -> Full rules: [`.github/aw/ECONOMIC_DATA_CONTRACT.md`](../aw/ECONOMIC_DATA_CONTRACT.md) §"Writing the AI commentary — workflow Step 3d". +All other rules (bash format, AWF shell safety, MCP access, download pipeline, analysis methodology & gate, article generation, commit & PR policy) live in the imported modules. diff --git a/.github/workflows/news-monthly-review.lock.yml b/.github/workflows/news-monthly-review.lock.yml index 3aa64a624..e0f8fc4e7 100644 --- a/.github/workflows/news-monthly-review.lock.yml +++ b/.github/workflows/news-monthly-review.lock.yml @@ -1,4 +1,4 @@ -# gh-aw-metadata: {"schema_version":"v3","frontmatter_hash":"66f2eb79629571d7b9dc8c903f74006e548ee52d1e35f11fa19dacce9e3f45bd","compiler_version":"v0.69.2","agent_id":"copilot","agent_model":"claude-opus-4.7"} +# gh-aw-metadata: {"schema_version":"v3","frontmatter_hash":"ee594adff70b242159d9488e1d78e721087832bc47d821be24c08a81ac3e3c9f","compiler_version":"v0.69.2","agent_id":"copilot","agent_model":"claude-opus-4.7"} # gh-aw-manifest: 
{"version":1,"secrets":["COPILOT_GITHUB_TOKEN","GH_AW_CI_TRIGGER_TOKEN","GH_AW_GITHUB_MCP_SERVER_TOKEN","GH_AW_GITHUB_TOKEN","GITHUB_TOKEN"],"actions":[{"repo":"actions/checkout","sha":"de0fac2e4500dabe0009e67214ff5f5447ce83dd","version":"v6.0.2"},{"repo":"actions/download-artifact","sha":"3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c","version":"v8.0.1"},{"repo":"actions/github-script","sha":"373c709c69115d41ff229c7e5df9f8788daa9553","version":"v9"},{"repo":"actions/setup-node","sha":"6044e13b5dc448c55e2357c09f80417699197238","version":"6044e13b5dc448c55e2357c09f80417699197238"},{"repo":"actions/upload-artifact","sha":"043fb46d1a93c77aae656e7c1c64a875d1fc6a0a","version":"v7.0.1"},{"repo":"github/gh-aw-actions/setup","sha":"dca90cae5e2ec0ef2275f97efcb832793c86e082","version":"v0.69.2"}],"containers":[{"image":"alpine:latest"},{"image":"ghcr.io/github/gh-aw-firewall/agent:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/api-proxy:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/squid:0.25.26"},{"image":"ghcr.io/github/gh-aw-mcpg:v0.2.26"},{"image":"ghcr.io/github/github-mcp-server:v1.0.0"},{"image":"node:25-alpine"},{"image":"node:lts-alpine"}]} # ___ _ _ # / _ \ | | (_) @@ -24,6 +24,18 @@ # # Generates monthly review retrospective articles in core languages (EN, SV). Translations handled by news-translate workflow. Runs on 28th of each month. 
# +# Resolved workflow manifest: +# Imports: +# - ../prompts/00-base-contract.md +# - ../prompts/01-bash-and-shell-safety.md +# - ../prompts/02-mcp-access.md +# - ../prompts/03-data-download.md +# - ../prompts/04-analysis-pipeline.md +# - ../prompts/05-analysis-gate.md +# - ../prompts/06-article-generation.md +# - ../prompts/07-commit-and-pr.md +# - ../prompts/ext/tier-c-aggregation.md +# # Secrets used: # - COPILOT_GITHUB_TOKEN # - GH_AW_CI_TRIGGER_TOKEN @@ -183,14 +195,9 @@ jobs: env: GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt GH_AW_SAFE_OUTPUTS: ${{ runner.temp }}/gh-aw/safeoutputs/outputs.jsonl - GH_AW_EXPR_731DE217: ${{ github.event.inputs.force_generation || 'false' }} GH_AW_GITHUB_ACTOR: ${{ github.actor }} GH_AW_GITHUB_EVENT_COMMENT_ID: ${{ github.event.comment.id }} GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER: ${{ github.event.discussion.number }} - GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH: ${{ github.event.inputs.analysis_depth }} - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE: ${{ github.event.inputs.article_date }} - GH_AW_GITHUB_EVENT_INPUTS_FORCE_GENERATION: ${{ github.event.inputs.force_generation }} - GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES: ${{ github.event.inputs.languages }} GH_AW_GITHUB_EVENT_ISSUE_NUMBER: ${{ github.event.issue.number }} GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: ${{ github.event.pull_request.number }} GH_AW_GITHUB_REPOSITORY: ${{ github.repository }} @@ -201,21 +208,21 @@ jobs: run: | bash "${RUNNER_TEMP}/gh-aw/actions/create_prompt_first.sh" { - cat << 'GH_AW_PROMPT_da40a5c06bea7f1d_EOF' + cat << 'GH_AW_PROMPT_a84a4a7eb3e6b141_EOF' <system> - GH_AW_PROMPT_da40a5c06bea7f1d_EOF + GH_AW_PROMPT_a84a4a7eb3e6b141_EOF cat "${RUNNER_TEMP}/gh-aw/prompts/xpia.md" cat "${RUNNER_TEMP}/gh-aw/prompts/temp_folder_prompt.md" cat "${RUNNER_TEMP}/gh-aw/prompts/markdown.md" cat "${RUNNER_TEMP}/gh-aw/prompts/agentic_workflows_guide.md" cat "${RUNNER_TEMP}/gh-aw/prompts/repo_memory_prompt.md" cat "${RUNNER_TEMP}/gh-aw/prompts/safe_outputs_prompt.md" - cat << 
'GH_AW_PROMPT_da40a5c06bea7f1d_EOF' + cat << 'GH_AW_PROMPT_a84a4a7eb3e6b141_EOF' <safe-output-tools> - Tools: add_comment, create_pull_request(max:2), dispatch_workflow, missing_tool, missing_data, noop - GH_AW_PROMPT_da40a5c06bea7f1d_EOF + Tools: add_comment, create_pull_request, dispatch_workflow, missing_tool, missing_data, noop + GH_AW_PROMPT_a84a4a7eb3e6b141_EOF cat "${RUNNER_TEMP}/gh-aw/prompts/safe_outputs_create_pull_request.md" - cat << 'GH_AW_PROMPT_da40a5c06bea7f1d_EOF' + cat << 'GH_AW_PROMPT_a84a4a7eb3e6b141_EOF' </safe-output-tools> <github-context> The following GitHub context information is available for this workflow: @@ -245,22 +252,26 @@ jobs: {{/if}} </github-context> - GH_AW_PROMPT_da40a5c06bea7f1d_EOF + GH_AW_PROMPT_a84a4a7eb3e6b141_EOF cat "${RUNNER_TEMP}/gh-aw/prompts/github_mcp_tools_with_safeoutputs_prompt.md" - cat << 'GH_AW_PROMPT_da40a5c06bea7f1d_EOF' + cat << 'GH_AW_PROMPT_a84a4a7eb3e6b141_EOF' </system> + {{#runtime-import .github/prompts/00-base-contract.md}} + {{#runtime-import .github/prompts/01-bash-and-shell-safety.md}} + {{#runtime-import .github/prompts/02-mcp-access.md}} + {{#runtime-import .github/prompts/03-data-download.md}} + {{#runtime-import .github/prompts/04-analysis-pipeline.md}} + {{#runtime-import .github/prompts/05-analysis-gate.md}} + {{#runtime-import .github/prompts/06-article-generation.md}} + {{#runtime-import .github/prompts/07-commit-and-pr.md}} + {{#runtime-import .github/prompts/ext/tier-c-aggregation.md}} {{#runtime-import .github/workflows/news-monthly-review.md}} - GH_AW_PROMPT_da40a5c06bea7f1d_EOF + GH_AW_PROMPT_a84a4a7eb3e6b141_EOF } > "$GH_AW_PROMPT" - name: Interpolate variables and render templates uses: actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9 env: GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt - GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH: ${{ github.event.inputs.analysis_depth }} - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE: ${{ github.event.inputs.article_date }} - 
GH_AW_EXPR_731DE217: ${{ github.event.inputs.force_generation || 'false' }} - GH_AW_GITHUB_EVENT_INPUTS_FORCE_GENERATION: ${{ github.event.inputs.force_generation }} - GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES: ${{ github.event.inputs.languages }} with: script: | const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); @@ -271,14 +282,9 @@ jobs: uses: actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9 env: GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt - GH_AW_EXPR_731DE217: ${{ github.event.inputs.force_generation || 'false' }} GH_AW_GITHUB_ACTOR: ${{ github.actor }} GH_AW_GITHUB_EVENT_COMMENT_ID: ${{ github.event.comment.id }} GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER: ${{ github.event.discussion.number }} - GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH: ${{ github.event.inputs.analysis_depth }} - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE: ${{ github.event.inputs.article_date }} - GH_AW_GITHUB_EVENT_INPUTS_FORCE_GENERATION: ${{ github.event.inputs.force_generation }} - GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES: ${{ github.event.inputs.languages }} GH_AW_GITHUB_EVENT_ISSUE_NUMBER: ${{ github.event.issue.number }} GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: ${{ github.event.pull_request.number }} GH_AW_GITHUB_REPOSITORY: ${{ github.repository }} @@ -301,14 +307,9 @@ jobs: return await substitutePlaceholders({ file: process.env.GH_AW_PROMPT, substitutions: { - GH_AW_EXPR_731DE217: process.env.GH_AW_EXPR_731DE217, GH_AW_GITHUB_ACTOR: process.env.GH_AW_GITHUB_ACTOR, GH_AW_GITHUB_EVENT_COMMENT_ID: process.env.GH_AW_GITHUB_EVENT_COMMENT_ID, GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER: process.env.GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER, - GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH: process.env.GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH, - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE: process.env.GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE, - GH_AW_GITHUB_EVENT_INPUTS_FORCE_GENERATION: process.env.GH_AW_GITHUB_EVENT_INPUTS_FORCE_GENERATION, - 
GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES: process.env.GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES, GH_AW_GITHUB_EVENT_ISSUE_NUMBER: process.env.GH_AW_GITHUB_EVENT_ISSUE_NUMBER, GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: process.env.GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER, GH_AW_GITHUB_REPOSITORY: process.env.GH_AW_GITHUB_REPOSITORY, @@ -410,7 +411,7 @@ jobs: run: | npm ci --prefer-offline --no-audit - name: Pre-warm MCP server (Render.com cold start mitigation) - run: "echo \"🔥 Pre-warming riksdag-regering MCP server via MCP protocol...\"\nMCP_URL=\"https://riksdag-regering-ai.onrender.com/mcp\"\nWARM=false\nfor i in 1 2 3 4 5 6; do\n RESP=$(curl -sf --max-time 30 -X POST \\\n -H \"Content-Type: application/json\" \\\n -d '{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\",\"params\":{}}' \\\n \"$MCP_URL\" 2>/dev/null) || true\n if echo \"$RESP\" | grep -q '\"tools\"'; then\n TOOL_COUNT=$(echo \"$RESP\" | grep -o '\"name\"' | wc -l)\n echo \"✅ MCP server responded on attempt $i with $TOOL_COUNT tools registered\"\n WARM=true\n break\n fi\n echo \"⏳ Attempt $i/6 — server may be cold-starting, waiting 20s...\"\n sleep 20\ndone\nif [ \"$WARM\" = \"false\" ]; then\n echo \"⚠️ MCP server did not respond after 6 attempts — agent will retry via in-prompt health gate\"\nfi\necho \"🔄 Starting background keep-alive pinger (every 30s, max 55 min — covers full 60-min workflow through safe-output PR creation)...\"\nKEEP_ALIVE_START=$(date +%s)\nKEEP_ALIVE_END=$((KEEP_ALIVE_START + 3300))\nexport MCP_URL KEEP_ALIVE_END\nnohup bash -c '\n while :; do\n NOW=$(date +%s)\n if [ \"$NOW\" -ge \"$KEEP_ALIVE_END\" ]; then\n break\n fi\n curl -sf --max-time 10 -X POST \\\n -H \"Content-Type: application/json\" \\\n -d \"{\\\"jsonrpc\\\":\\\"2.0\\\",\\\"id\\\":1,\\\"method\\\":\\\"tools/list\\\",\\\"params\\\":{}}\" \\\n \"$MCP_URL\" -o /dev/null 2>/dev/null || true\n sleep 30\n done\n' </dev/null >/tmp/mcp-keepalive.log 2>&1 &\nKEEP_ALIVE_PID=$!\ndisown \"$KEEP_ALIVE_PID\" 2>/dev/null || true\necho 
\"Keep-alive PID: $KEEP_ALIVE_PID (auto-exits after 55 min; log: /tmp/mcp-keepalive.log)\"\n" + run: "echo \"🔥 Pre-warming riksdag-regering MCP server via MCP protocol...\"\nMCP_URL=\"https://riksdag-regering-ai.onrender.com/mcp\"\nWARM=false\nfor i in 1 2 3 4 5 6; do\n RESP=$(curl -sf --max-time 30 -X POST \\\n -H \"Content-Type: application/json\" \\\n -d '{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\",\"params\":{}}' \\\n \"$MCP_URL\" 2>/dev/null) || true\n if echo \"$RESP\" | grep -q '\"tools\"'; then\n TOOL_COUNT=$(echo \"$RESP\" | grep -o '\"name\"' | wc -l)\n echo \"✅ MCP server responded on attempt $i with $TOOL_COUNT tools registered\"\n WARM=true\n break\n fi\n echo \"⏳ Attempt $i/6 — server may be cold-starting, waiting 20s...\"\n sleep 20\ndone\nif [ \"$WARM\" = \"false\" ]; then\n echo \"⚠️ MCP server did not respond after 6 attempts — agent will retry via in-prompt health gate\"\nfi\n" - name: Pre-flight external endpoint reachability check (runs before MCP Gateway) run: "echo \"🔍 Network Diagnostics — $(date -u '+%Y-%m-%dT%H:%M:%SZ')\"\necho \"═══════════════════════════════════════════\"\necho \"\"\necho \"📡 DNS Resolution Tests:\"\nfor domain in riksdag-regering-ai.onrender.com api.scb.se api.worldbank.org data.riksdagen.se www.riksdagen.se www.regeringen.se; do\n if nslookup \"$domain\" >/dev/null 2>&1; then\n IP=$(nslookup \"$domain\" 2>/dev/null | grep -A1 \"Name:\" | grep \"Address:\" | head -1 | awk '{print $2}')\n echo \" ✅ $domain → $IP\"\n else\n echo \" ❌ $domain — DNS FAILED\"\n fi\ndone\necho \"\"\necho \"🌐 HTTPS Connectivity Tests:\"\nfor url in \\\n \"https://riksdag-regering-ai.onrender.com/mcp\" \\\n \"https://api.scb.se/OV0104/v2beta\" \\\n \"https://api.worldbank.org/v2/country/SE?format=json\" \\\n \"https://data.riksdagen.se/dokumentlista/?sok=test&doktyp=bet&utformat=json&a=1\" \\\n; do\n HTTP_CODE=$(curl -s -o /dev/null -w \"%{http_code}\" --max-time 10 \"$url\" 2>/dev/null || echo \"000\")\n DOMAIN=$(echo \"$url\" | 
sed 's|https://||' | cut -d/ -f1)\n if [ \"$HTTP_CODE\" -ge 200 ] && [ \"$HTTP_CODE\" -lt 400 ]; then\n echo \" ✅ $DOMAIN → HTTP $HTTP_CODE\"\n elif [ \"$HTTP_CODE\" = \"000\" ]; then\n echo \" ❌ $DOMAIN → TIMEOUT/UNREACHABLE\"\n else\n echo \" ⚠️ $DOMAIN → HTTP $HTTP_CODE\"\n fi\ndone\necho \"\"\necho \"🔌 MCP Server Tool Count:\"\nTOOL_RESP=$(curl -sf --max-time 15 -X POST \\\n -H \"Content-Type: application/json\" \\\n -d '{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\",\"params\":{}}' \\\n \"https://riksdag-regering-ai.onrender.com/mcp\" 2>/dev/null) || TOOL_RESP=\"\"\nif echo \"$TOOL_RESP\" | grep -q '\"tools\"'; then\n TOOL_COUNT=$(echo \"$TOOL_RESP\" | grep -o '\"name\"' | wc -l)\n echo \" ✅ riksdag-regering MCP: $TOOL_COUNT tools registered\"\nelse\n echo \" ❌ riksdag-regering MCP: No tools response (server may still be starting)\"\nfi\necho \"\"\necho \"═══════════════════════════════════════════\"\n" @@ -498,16 +499,16 @@ jobs: mkdir -p "${RUNNER_TEMP}/gh-aw/safeoutputs" mkdir -p /tmp/gh-aw/safeoutputs mkdir -p /tmp/gh-aw/mcp-logs/safeoutputs - cat > "${RUNNER_TEMP}/gh-aw/safeoutputs/config.json" << 'GH_AW_SAFE_OUTPUTS_CONFIG_e174b256d28b0e22_EOF' - 
{"add_comment":{"max":1},"create_pull_request":{"draft":false,"expires":336,"labels":["agentic-news","analysis-data"],"max":2,"max_patch_size":4096,"protected_files":["package.json","bun.lockb","bunfig.toml","deno.json","deno.jsonc","deno.lock","global.json","NuGet.Config","Directory.Packages.props","mix.exs","mix.lock","go.mod","go.sum","stack.yaml","stack.yaml.lock","pom.xml","build.gradle","build.gradle.kts","settings.gradle","settings.gradle.kts","gradle.properties","package-lock.json","yarn.lock","pnpm-lock.yaml","npm-shrinkwrap.json","requirements.txt","Pipfile","Pipfile.lock","pyproject.toml","setup.py","setup.cfg","Gemfile","Gemfile.lock","uv.lock","CODEOWNERS","AGENTS.md","CLAUDE.md","GEMINI.md"],"protected_path_prefixes":[".github/",".agents/"]},"create_report_incomplete_issue":{},"dispatch_workflow":{"max":1,"workflow_files":{"news-translate":".lock.yml"},"workflows":["news-translate"]},"missing_data":{},"missing_tool":{},"noop":{"max":1,"report-as-issue":"true"},"push_repo_memory":{"memories":[{"dir":"/tmp/gh-aw/repo-memory/default","id":"default","max_file_count":50,"max_file_size":51200,"max_patch_size":51200}]},"report_incomplete":{}} - GH_AW_SAFE_OUTPUTS_CONFIG_e174b256d28b0e22_EOF + cat > "${RUNNER_TEMP}/gh-aw/safeoutputs/config.json" << 'GH_AW_SAFE_OUTPUTS_CONFIG_c9e7590b58d44a81_EOF' + 
{"add_comment":{"max":1},"create_pull_request":{"draft":false,"expires":336,"labels":["agentic-news","analysis-data"],"max":1,"max_patch_size":4096,"protected_files":["package.json","bun.lockb","bunfig.toml","deno.json","deno.jsonc","deno.lock","global.json","NuGet.Config","Directory.Packages.props","mix.exs","mix.lock","go.mod","go.sum","stack.yaml","stack.yaml.lock","pom.xml","build.gradle","build.gradle.kts","settings.gradle","settings.gradle.kts","gradle.properties","package-lock.json","yarn.lock","pnpm-lock.yaml","npm-shrinkwrap.json","requirements.txt","Pipfile","Pipfile.lock","pyproject.toml","setup.py","setup.cfg","Gemfile","Gemfile.lock","uv.lock","CODEOWNERS","AGENTS.md","CLAUDE.md","GEMINI.md"],"protected_path_prefixes":[".github/",".agents/"]},"create_report_incomplete_issue":{},"dispatch_workflow":{"max":1,"workflow_files":{"news-translate":".lock.yml"},"workflows":["news-translate"]},"missing_data":{},"missing_tool":{},"noop":{"max":1,"report-as-issue":"true"},"push_repo_memory":{"memories":[{"dir":"/tmp/gh-aw/repo-memory/default","id":"default","max_file_count":50,"max_file_size":51200,"max_patch_size":51200}]},"report_incomplete":{}} + GH_AW_SAFE_OUTPUTS_CONFIG_c9e7590b58d44a81_EOF - name: Write Safe Outputs Tools env: GH_AW_TOOLS_META_JSON: | { "description_suffixes": { "add_comment": " CONSTRAINTS: Maximum 1 comment(s) can be added. Supports reply_to_id for discussion threading.", - "create_pull_request": " CONSTRAINTS: Maximum 2 pull request(s) can be created. Labels [\"agentic-news\" \"analysis-data\"] will be automatically added." + "create_pull_request": " CONSTRAINTS: Maximum 1 pull request(s) can be created. Labels [\"agentic-news\" \"analysis-data\"] will be automatically added." 
}, "repo_params": {}, "dynamic_tools": [ @@ -766,7 +767,7 @@ jobs: mkdir -p /home/runner/.copilot GH_AW_NODE=$(which node 2>/dev/null || command -v node 2>/dev/null || echo node) - cat << GH_AW_MCP_CONFIG_f9d9b6ef4ca5bfdd_EOF | "$GH_AW_NODE" "${RUNNER_TEMP}/gh-aw/actions/start_mcp_gateway.cjs" + cat << GH_AW_MCP_CONFIG_38eec37a1280f704_EOF | "$GH_AW_NODE" "${RUNNER_TEMP}/gh-aw/actions/start_mcp_gateway.cjs" { "mcpServers": { "agenticworkflows": { @@ -882,7 +883,7 @@ jobs: "payloadDir": "${MCP_GATEWAY_PAYLOAD_DIR}" } } - GH_AW_MCP_CONFIG_f9d9b6ef4ca5bfdd_EOF + GH_AW_MCP_CONFIG_38eec37a1280f704_EOF - name: Download activation artifact uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1 with: @@ -1569,7 +1570,7 @@ jobs: GH_AW_ALLOWED_DOMAINS: "*.githubusercontent.com,api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.imf.org,api.individual.githubcopilot.com,api.npms.io,api.scb.se,api.snapcraft.io,api.worldbank.org,archive.ubuntu.com,azure.archive.ubuntu.com,bun.sh,cdn.jsdelivr.net,codeload.github.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,data.imf.org,data.riksdagen.se,deb.nodesource.com,deno.land,docs.github.com,esm.sh,get.pnpm.io,github-cloud.githubusercontent.com,github-cloud.s3.amazonaws.com,github.blog,github.com,github.githubassets.com,googleapis.deno.dev,googlechromelabs.github.io,hack23.com,hack23.github.io,host.docker.internal,json-schema.org,json.schemastore.org,jsr.io,keyserver.ubuntu.com,lfs.github.com,localhost,nodejs.org,npm.pkg.github.com,npmjs.com,npmjs.org,objects.githubusercontent.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubuserconten
t.com,regeringen.se,registry.bower.io,registry.npmjs.com,registry.npmjs.org,registry.yarnpkg.com,repo.yarnpkg.com,riksdag-regering-ai.onrender.com,riksdagen.se,riksdagsmonitor.com,s.symcb.com,s.symcd.com,security.ubuntu.com,skimdb.npmjs.com,storage.googleapis.com,telemetry.enterprise.githubcopilot.com,telemetry.vercel.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com,www.googleapis.com,www.hack23.com,www.imf.org,www.npmjs.com,www.npmjs.org,www.regeringen.se,www.riksdagen.se,www.riksdagsmonitor.com,www.scb.se,yarnpkg.com" GITHUB_SERVER_URL: ${{ github.server_url }} GITHUB_API_URL: ${{ github.api_url }} - GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: "{\"add_comment\":{\"max\":1},\"create_pull_request\":{\"draft\":false,\"expires\":336,\"labels\":[\"agentic-news\",\"analysis-data\"],\"max\":2,\"max_patch_size\":4096,\"protected_files\":[\"package.json\",\"bun.lockb\",\"bunfig.toml\",\"deno.json\",\"deno.jsonc\",\"deno.lock\",\"global.json\",\"NuGet.Config\",\"Directory.Packages.props\",\"mix.exs\",\"mix.lock\",\"go.mod\",\"go.sum\",\"stack.yaml\",\"stack.yaml.lock\",\"pom.xml\",\"build.gradle\",\"build.gradle.kts\",\"settings.gradle\",\"settings.gradle.kts\",\"gradle.properties\",\"package-lock.json\",\"yarn.lock\",\"pnpm-lock.yaml\",\"npm-shrinkwrap.json\",\"requirements.txt\",\"Pipfile\",\"Pipfile.lock\",\"pyproject.toml\",\"setup.py\",\"setup.cfg\",\"Gemfile\",\"Gemfile.lock\",\"uv.lock\",\"CODEOWNERS\",\"AGENTS.md\",\"CLAUDE.md\",\"GEMINI.md\"],\"protected_path_prefixes\":[\".github/\",\".agents/\"]},\"create_report_incomplete_issue\":{},\"dispatch_workflow\":{\"max\":1,\"workflow_files\":{\"news-translate\":\".lock.yml\"},\"workflows\":[\"news-translate\"]},\"missing_data\":{},\"missing_tool\":{},\"noop\":{\"max\":1,\"report-as-issue\":\"true\"},\"report_incomplete\":{}}" + GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: 
"{\"add_comment\":{\"max\":1},\"create_pull_request\":{\"draft\":false,\"expires\":336,\"labels\":[\"agentic-news\",\"analysis-data\"],\"max\":1,\"max_patch_size\":4096,\"protected_files\":[\"package.json\",\"bun.lockb\",\"bunfig.toml\",\"deno.json\",\"deno.jsonc\",\"deno.lock\",\"global.json\",\"NuGet.Config\",\"Directory.Packages.props\",\"mix.exs\",\"mix.lock\",\"go.mod\",\"go.sum\",\"stack.yaml\",\"stack.yaml.lock\",\"pom.xml\",\"build.gradle\",\"build.gradle.kts\",\"settings.gradle\",\"settings.gradle.kts\",\"gradle.properties\",\"package-lock.json\",\"yarn.lock\",\"pnpm-lock.yaml\",\"npm-shrinkwrap.json\",\"requirements.txt\",\"Pipfile\",\"Pipfile.lock\",\"pyproject.toml\",\"setup.py\",\"setup.cfg\",\"Gemfile\",\"Gemfile.lock\",\"uv.lock\",\"CODEOWNERS\",\"AGENTS.md\",\"CLAUDE.md\",\"GEMINI.md\"],\"protected_path_prefixes\":[\".github/\",\".agents/\"]},\"create_report_incomplete_issue\":{},\"dispatch_workflow\":{\"max\":1,\"workflow_files\":{\"news-translate\":\".lock.yml\"},\"workflows\":[\"news-translate\"]},\"missing_data\":{},\"missing_tool\":{},\"noop\":{\"max\":1,\"report-as-issue\":\"true\"},\"report_incomplete\":{}}" GH_AW_CI_TRIGGER_TOKEN: ${{ secrets.GH_AW_CI_TRIGGER_TOKEN }} with: github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }} diff --git a/.github/workflows/news-monthly-review.md b/.github/workflows/news-monthly-review.md index 3fab9da2e..6f9d89341 100644 --- a/.github/workflows/news-monthly-review.md +++ b/.github/workflows/news-monthly-review.md @@ -2,6 +2,16 @@ name: "News: Monthly Review" description: Generates monthly review retrospective articles in core languages (EN, SV). Translations handled by news-translate workflow. Runs on 28th of each month. 
strict: false +imports: + - ../prompts/00-base-contract.md + - ../prompts/01-bash-and-shell-safety.md + - ../prompts/02-mcp-access.md + - ../prompts/03-data-download.md + - ../prompts/04-analysis-pipeline.md + - ../prompts/05-analysis-gate.md + - ../prompts/06-article-generation.md + - ../prompts/07-commit-and-pr.md + - ../prompts/ext/tier-c-aggregation.md on: schedule: - cron: "0 10 28 * *" @@ -120,7 +130,7 @@ safe-outputs: labels: [agentic-news, analysis-data] draft: false expires: 14d - max: 2 + max: 1 add-comment: {} dispatch-workflow: workflows: [news-translate] @@ -158,26 +168,6 @@ steps: if [ "$WARM" = "false" ]; then echo "⚠️ MCP server did not respond after 6 attempts — agent will retry via in-prompt health gate" fi - echo "🔄 Starting background keep-alive pinger (every 30s, max 55 min — covers full 60-min workflow through safe-output PR creation)..." - KEEP_ALIVE_START=$(date +%s) - KEEP_ALIVE_END=$((KEEP_ALIVE_START + 3300)) - export MCP_URL KEEP_ALIVE_END - nohup bash -c ' - while :; do - NOW=$(date +%s) - if [ "$NOW" -ge "$KEEP_ALIVE_END" ]; then - break - fi - curl -sf --max-time 10 -X POST \ - -H "Content-Type: application/json" \ - -d "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\",\"params\":{}}" \ - "$MCP_URL" -o /dev/null 2>/dev/null || true - sleep 30 - done - ' </dev/null >/tmp/mcp-keepalive.log 2>&1 & - KEEP_ALIVE_PID=$! - disown "$KEEP_ALIVE_PID" 2>/dev/null || true - echo "Keep-alive PID: $KEEP_ALIVE_PID (auto-exits after 55 min; log: /tmp/mcp-keepalive.log)" - name: Pre-flight external endpoint reachability check (runs before MCP Gateway) run: | @@ -231,567 +221,51 @@ engine: model: claude-opus-4.7 --- -# 📊 Monthly Review Article Generator - -You are the **News Journalist Agent** for Riksdagsmonitor generating **monthly review** retrospective articles. 
- -## 🔴 CRITICAL: AI Writes ALL Content with Iterative Improvement (v5.0) - -> **You are a political intelligence analyst producing comprehensive monthly retrospective analysis.** Your PRIMARY job is to produce excellent quality political intelligence through iterative improvement. You MUST: -> 1. **ANALYZE** the full month's parliamentary activity with deep synthesis across all document types -> 2. **WRITE** genuine intelligence with trend analysis, SWOT, stakeholder impacts, and strategic context -> 3. **ITERATE** — read ALL your output back completely and IMPROVE every section (minimum 2 full passes) -> 4. **SPEND THE FULL TIME** — use at least 45 of the 60 allocated minutes doing real work -> -> 🔴 **2+ PASSES MANDATORY**: Analysis Pass 1 (15 min) → Analysis Pass 2 improvement (7 min) → Article Pass 1 (10 min) → Article Pass 2 improvement (8 min). NEVER complete early. - -## 🔧 Workflow Dispatch Parameters - -- **force_generation** = `${{ github.event.inputs.force_generation }}` -- **languages** = `${{ github.event.inputs.languages }}` -- **analysis_depth** = `${{ github.event.inputs.analysis_depth }}` - -If **force_generation** is `true`, generate articles even if recent ones exist. Use the **languages** value to determine which languages to generate. - -## 🚨 CRITICAL: Single Article Type Focus - -**This workflow generates ONLY `monthly-review` articles.** Do not generate other article types. - -This is a **retrospective** article providing comprehensive analysis of the past 30 days of parliamentary activity — legislative output, coalition dynamics, government performance, and policy trends over the full monthly cycle. - -## 🧠 Repo Memory - -Uses `memory/news-generation` branch. START: read `memory/news-generation/last-run-news-monthly-review.json` + `memory/news-generation/covered-documents/{YYYY-MM-DD}.json`. END: update both + `memory/news-generation/translation-status.json`. Skip already-covered dok_ids. 
- -## ⏱️ Time Budget (30 minutes) — ENFORCED Minimum 25 Minutes - -> 🔴 **SYSTEMIC ISSUE (PR #1794 audit, 2026-04-16)**: ALL news workflows completing early, producing shallow analysis. Agent MUST use at least 25 of 30 minutes. Completion < 25 min = insufficient iteration = REJECTED. - -```bash -date +%s > /tmp/start_time.txt -read START_TIME < /tmp/start_time.txt -``` - -- **Minutes 0–3**: Date check, MCP warm-up with `get_sync_status()` -- **Minutes 3–5**: Run download-parliamentary-data pipeline (download data) -- **Minutes 5–15**: 🚨 **AI Analysis Pass 1 (10 min minimum)**: Read ALL methodology guides, create analysis for EVERY document with Mermaid diagrams, evidence tables, SWOT entries. -- **Minutes 15–19**: 🚨 **AI Analysis Pass 2 (Part A)**: Read ALL analysis back, improve major sections, replace script stubs. -- **Minutes 19–21**: 🫀 **Heartbeat PR** — `git add && git commit` analysis artifacts so far, then `safeoutputs___create_pull_request` (title `🫀 Heartbeat - Monthly Review - {date}`). Refreshes the safeoutputs MCP session AND preserves work if later phases fail. Run `git checkout main` after the call so subsequent commits don't stack onto the frozen patch. -- **Minutes 21–22**: 🚨 **AI Analysis Pass 2 (Part B) + Enrichment Verification**: Complete remaining improvements and run enrichment verification before the shared minimum-time gate. -- **Minutes 22–23**: Run ENFORCED Minimum Time Gate (set `MINIMUM_ANALYSIS_MINUTES=14` for 30-min workflows) + final Enrichment Verification Gate (SHARED_PROMPT_PATTERNS.md). Both MUST pass. -- **Minutes 23–28**: Generate articles for all 14 languages. Read articles back, replace AI_MUST_REPLACE markers. Run article quality gate. -- **Minutes 28–29**: Validate and commit analysis + articles -- **Minutes 29–30**: Create PR with `safeoutputs___create_pull_request` - -> ⚠️ **Analysis must include color-coded Mermaid diagrams, evidence tables, and template structure compliance** — plain prose is NEVER acceptable. 
ALL script-generated stubs MUST be replaced with AI-enriched analysis. - -## ⚠️ CRITICAL: Bash Tool Call Format - -> **Full reference:** See `SHARED_PROMPT_PATTERNS.md` → "Bash Tool Call Format". Key rule: every `bash` call MUST have both `command` AND `description` parameters. Example: `bash({ command: "date -u '+%Y-%m-%d'", description: "Get current UTC date" })`. Calls missing either field fail with `Multiple validation errors: - "command": Required - "description": Required`. - -## 🛡️ AWF Shell Safety - -> **Full reference:** See `SHARED_PROMPT_PATTERNS.md` → "AWF Shell Safety". Summary: use `$VAR` not `$`+`{VAR}`, use `find -exec` not `$(...)`, set defaults with `if/then` before using `$VAR`. - -## 🔤 UTF-8 Encoding - -> **Full reference:** See `SHARED_PROMPT_PATTERNS.md` → "UTF-8 Encoding". Summary: use native UTF-8 (`ö`, `ä`, `å`) — NEVER HTML entities (`ö`, `ä`). Author: `James Pether Sörling`. - - -## Required Skills - -Consult as needed — do NOT read all files upfront: -- **Skills:** `.github/skills/editorial-standards/SKILL.md`, `.github/skills/swedish-political-system/SKILL.md`, `.github/skills/legislative-monitoring/SKILL.md`, `.github/skills/riksdag-regering-mcp/SKILL.md`, `.github/skills/language-expertise/SKILL.md`, `.github/skills/gh-aw-safe-outputs/SKILL.md` -- **Analysis:** `scripts/prompts/v2/political-analysis.md`, `per-file-intelligence-analysis.md`, `stakeholder-perspectives.md`, `quality-criteria.md` -- **Methodology:** `analysis/methodologies/ai-driven-analysis-guide.md` (v5.0) + `analysis/templates/per-file-political-intelligence.md` - -## 📊 MANDATORY Multi-Step AI Analysis Framework - -### Standardised Analysis Depth Gate - -> ⚠️ **Default is `deep`** — not `standard`. Analysis must always produce publication-quality output with Mermaid diagrams and evidence tables. - -| Depth | AI iterations | SWOT stakeholders | Charts | Mindmap | Min. 
analysis time | -|-------|--------------|-------------------|--------|---------|-------------------| -| standard | 1-2 | ≥3 | ≥1 | optional | 10 minutes | -| deep | 2-3 | ≥5 | ≥2 | required | 15 minutes | -| comprehensive | 3+ | ≥7 | ≥3 | required | 20 minutes | - -**Minimum requirement for ALL depths**: Every analysis file must contain at least 1 color-coded Mermaid diagram, structured evidence tables with dok_id citations, and follow the corresponding template structure exactly. Plain prose without tables/diagrams is NEVER acceptable regardless of depth level. - -> **Read `analysis_depth` input first** (default: `deep`). This controls iteration count and section requirements. - -Based on the editorial profile for `monthly-review` (from `scripts/editorial-framework.ts`): -- **SWOT**: full (5+ stakeholder perspectives per quadrant) -- **Dashboard**: required (min. 4 Chart.js charts) -- **Mindmap**: required (CSS policy mindmap) -- **Min. stakeholders**: 7 perspectives -- **AI iterations**: 3 (comprehensive), 3 (deep), or 2 (standard) - -### 🗳️ Election 2026 Lens (Mandatory — v5.0) - -Every analysis MUST include an **Election 2026 Implications** section assessing: Electoral Impact, Coalition Scenarios, Voter Salience, Campaign Vulnerability, and Policy Legacy. Use the **5-level confidence scale** (⬛VERY LOW → 🟥LOW → 🟧MEDIUM → 🟩HIGH → 🟦VERY HIGH). See `analysis/methodologies/ai-driven-analysis-guide.md` v5.0 for full criteria. - -### Phase 1 — Data Collection & Initial Analysis -1. Fetch MCP data: full month's `get_betankanden`, `get_propositioner`, `get_motioner`, `search_anforanden`, `search_voteringar`, `get_interpellationer`, `get_fragor`, `get_sync_status` -2. Compute monthly metrics: totals, trend vs. previous month, party rankings, legislative efficiency -3. Build initial outline: flagship lede, monthly statistics, party performance, looking ahead - -### Phase 2 — Iterative Depth Enhancement (3 iterations for `deep`/`comprehensive`) -For each AI iteration: -1. 
**Full SWOT**: Write a clearly structured SWOT analysis with ≥5 stakeholder perspectives per quadrant (government coalition, opposition parties, affected citizens, EU/Nordic context, media/civil society, business sector, academic/think-tanks). Format it as publication-ready markdown with explicit `Strengths`, `Weaknesses`, `Opportunities`, and `Threats` headings. -2. **Monthly Dashboard Summary**: Provide a dashboard-style analytical summary covering at least 4 evidence-based views: monthly trends, party activity ranking, policy domain heatmap summary, and legislative pipeline status. Present the underlying figures and comparisons directly in markdown text and bullet lists or tables; do not assume any machine-readable chart schema or automatic rendering step. -3. **Policy Theme Map**: Describe the month's cross-cutting policy themes as a hierarchical outline with one central theme and clearly labelled subthemes. Use readable markdown headings or nested bullet lists rather than implying a structured mindmap payload or CSS-rendered component. -4. **Stakeholder SWOT**: Write a stakeholder-focused SWOT with ≥7 perspectives for comprehensive depth, and cite specific `dok_id` evidence for each entry. -5. **Quality Gate** (check before next iteration): - - Verify trend comparison uses actual previous-month data from MCP - - Verify party rankings section covers all 8 Riksdag parties - - Verify all Swedish API text is translated - - Verify word count ≥ 1800 - - If failing any check: re-generate the failing section before proceeding - -### Phase 3 — Final Quality Gate Before PR -Run all validation checks from the **MANDATORY Quality Validation** section below before committing. 
- -## MANDATORY Date Validation - -```bash -echo "=== Date Validation Check ===" -date -u "+Current UTC: %A %Y-%m-%d %H:%M:%S" -echo "Article Type: monthly-review" -echo "============================" -``` - -## 📅 Riksmöte (Parliamentary Session) Calculation - -September+ → `rm = "{year}/{year+1 2-digit}"` (e.g. Oct 2026 → `2026/27`). Before September → `rm = "{year-1}/{year 2-digit}"` (e.g. Feb 2026 → `2025/26`). Use in ALL MCP queries requiring `rm`. - -## MANDATORY Deduplication Check - -Before generating articles, check if articles already exist for the target date. **This check controls article GENERATION only — the deep political analysis phase ALWAYS runs regardless.** -```bash -# Resolve article date: use workflow_dispatch input when provided, fallback to UTC today -ARTICLE_DATE="${{ github.event.inputs.article_date }}" -if [ -z "$ARTICLE_DATE" ]; then - date -u +%Y-%m-%d > /tmp/today.txt - read ARTICLE_DATE < /tmp/today.txt -fi -ARTICLE_TYPE="monthly-review" -# Derive FORCE_GENERATION from the workflow_dispatch input -FORCE_GENERATION="${{ github.event.inputs.force_generation || 'false' }}" -ls news/$ARTICLE_DATE-$ARTICLE_TYPE-en.html 2>/dev/null | wc -l > /tmp/existing_count.txt -read EXISTING < /tmp/existing_count.txt -if [ "$EXISTING" -gt 0 ] && [ "$FORCE_GENERATION" != "true" ]; then - echo "📋 Articles for $ARTICLE_DATE/$ARTICLE_TYPE already exist — article generation will be skipped (analysis still runs)" - SKIP_ARTICLE_GENERATION=true - echo "SKIP_ARTICLE_GENERATION=true" >> "$GITHUB_ENV" -fi -# NOTE: Do NOT exit here or call safeoutputs___noop — analysis phase MUST still execute -# Later article-generation steps MUST gate on: if [ "$SKIP_ARTICLE_GENERATION" != "true" ]; then ... - -``` - -> **🚨 NEVER call `safeoutputs___noop` because articles already exist.** If articles exist, the workflow MUST still run the full 15-20 minute deep political analysis phase and commit analysis artifacts. 
The dedup check only controls whether NEW HTML articles are generated — analysis is the primary output and always runs. If analysis produces artifacts, use `safeoutputs___create_pull_request` with `analysis-only` label. - -## MANDATORY MCP Health Gate - -> **The step-level pre-warm (6 attempts × 20s) already mitigates Render.com cold starts.** This in-prompt gate is a lightweight verification — NOT a full retry loop. Do NOT spend more than 90 seconds here. -> -> **📖 Full MCP architecture, tool names, and calling conventions:** See `SHARED_PROMPT_PATTERNS.md` → "MCP Architecture & Tool Reference" section. Tool names are EXACT: riksdag tools use underscores (`get_sync_status`), World Bank uses hyphens (`get-economic-data`), SCB uses underscores (`search_tables`). - -1. Call `get_sync_status({})` — retry up to **3×** (20s wait between each, not 45s — the server is already warm from the step-level pre-warm) -2. If you get **"unknown tool"** or **"0 tools registered"** errors after 3 attempts, run a quick diagnostic: -```bash -echo "🔍 MCP Quick Diagnostic" -echo "Direct MCP server:" && curl -sf --max-time 15 -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}' "https://riksdag-regering-ai.onrender.com/mcp" 2>/dev/null | head -c 200 || echo "UNREACHABLE" -``` -3. After 3 failures → `safeoutputs___noop({"message": "MCP server unavailable after 3 attempts — step-level pre-warm also failed"})` -4. **ALL content MUST come from live MCP data.** Never use cached articles, stale data, or AI-fabricated content. -5. **⏱️ Do NOT spend more than 2 minutes on MCP warmup** — proceed to analysis immediately once `get_sync_status` succeeds. - -## 🛡️ File Ownership Contract - -Content workflows: only create/modify **EN and SV** files (`news/YYYY-MM-DD-*-en.html`, `*-sv.html`). Validate with `npx tsx scripts/validate-file-ownership.ts content`. 
Fix violations: `git restore --staged --worktree -- <file>` (tracked) or `rm <file>` (untracked). - -### Branch Naming Convention - -Branch: `news/content/{YYYY-MM-DD}/{article-type}` (e.g. `news/content/2026-03-23/monthly-review`). `safeoutputs___create_pull_request` handles this automatically. - -## MANDATORY PR Creation - -> **🚀 HOW SAFE PR CREATION WORKS — READ THIS FIRST** -> -> The `safeoutputs___create_pull_request` tool handles **everything**: branch creation, pushing commits, and opening the PR. Do NOT run `git push` or `git checkout -b` manually. -> -> **Exact steps:** -> 1. Write article files to `news/` using `bash` or `edit` tools -> 2. Stage and commit locally (scoped to the resolved monthly-review subfolder): `git add news/*monthly-review*.html news/metadata/ "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER/" analysis/weekly/ && git commit -m "Add monthly-review articles and analysis artifacts"` -> 3. Call `safeoutputs___create_pull_request` with `title`, `body`, and `labels` -> -- ✅ `safeoutputs___create_pull_request` for articles or analysis-only PRs -- ✅ `safeoutputs___noop` ONLY if MCP unreachable after 5 attempts AND no analysis artifacts exist -- ❌ NEVER noop because articles already exist — analysis always runs -- ❌ Safe output tools are in your tool list — NEVER search for them via bash - -## 🌐 Dispatch Translation Workflow - -After creating the content PR, dispatch translations: `safeoutputs___dispatch_workflow({ "workflow_name": "news-translate", "inputs": { "article_date": "<YYYY-MM-DD>", "article_type": "<article-type>", "languages": "all-extra" } })`. See `news-translate.md` for full translation quality rules. 
- -## MCP Tools - -**ALWAYS call `get_sync_status()` FIRST.** - -**Primary tools:** `search_dokument`, `get_betankanden` — comprehensive document search -**Cross-reference:** `get_propositioner`, `get_motioner`, `search_voteringar`, `analyze_g0v_by_department` -**Statistical enrichment:** SCB MCP + World Bank — enrich monthly review with comprehensive economic context across all active policy areas. **144 World Bank indicators available**: `view analysis/worldbank/indicators-inventory.json` for the complete inventory with `policyAreas`, `committees`, and `mcpTool` fields per indicator. Key monthly indicators: GDP growth (NY.GDP.MKTP.KD.ZG), unemployment (SL.UEM.TOTL.ZS), inflation (FP.CPI.TOTL.ZG), tax revenue (GC.TAX.TOTL.GD.ZS), military expenditure (MS.MIL.XPND.GD.ZS), health expenditure (SH.XPD.CHEX.GD.ZS), education expenditure (SE.XPD.TOTL.GD.ZS), life expectancy (SP.DYN.LE00.IN), and governance indicators (RL.EST, VA.EST, GE.EST). Use MCP tools for indicators with `mcpTool` field. Also use SCB tables from `scripts/scb-context.ts` for Swedish-specific data. MUST generate ≥3 economic charts: Nordic comparison bar, trend line, and radar overview. See `SHARED_PROMPT_PATTERNS.md` §"WORLD BANK ECONOMIC CONTEXT INTEGRATION" for chart templates. -**Fact-checking:** Monthly reviews should include a dedicated fact-check section. Scan debates/speeches from the month via `search_anforanden` and use `scripts/statistical-claims-detector.ts` to verify politician statistical claims against official data. 
- -```javascript -get_sync_status({}) -const lastMonth = new Date(Date.now() - 30*86400000).toISOString().split('T')[0]; -const today = new Date().toISOString().split('T')[0]; -search_dokument({ from_date: lastMonth, to_date: today, limit: 50 }) -get_betankanden({ rm: <calculated riksmöte>, limit: 20 }) -get_propositioner({ rm: <calculated riksmöte>, limit: 20 }) -get_motioner({ rm: <calculated riksmöte>, limit: 20 }) -search_voteringar({ rm: <calculated riksmöte>, limit: 30 }) -analyze_g0v_by_department({ dateFrom: lastMonth, dateTo: today }) - -// SCB enrichment (optional — wrap in try/catch, do not block generation on SCB failures): -// search_tables({ query: "BNP arbetslöshet KPI", limit: 5 }) -// query_table({ table_id: "TAB5802", value_codes: { Tid: "top(4)" } }) // GDP -// query_table({ table_id: "TAB5765", value_codes: { Tid: "top(4)", Kon: "1+2" } }) // Unemployment -``` - -## Generation Steps - -### Step 1: Check Existing Articles (Analysis Always Runs) -🚨 **FULL ANALYSIS BEFORE ANY ARTICLE (BLOCKING)**: The complete deep political analysis phase following [`analysis/methodologies/ai-driven-analysis-guide.md`](../../analysis/methodologies/ai-driven-analysis-guide.md) (Rule 0 two-pass iteration + Rules 6–8 depth tiers, 15 min Pass 1 + 7 min Pass 2 minimum, ALL 9 required artifacts) **MUST** complete **BEFORE** any article HTML is created or updated. Articles MUST be (re)generated from the improved Pass 2 analysis — never from Pass 1 stubs, never from scripts alone, never skipping Pass 2. Violations = REJECTED PR (PR #1705 comment audit, 2026-04-18). - -Check if monthly-review articles already exist for the target date. If they do, skip article generation but **ALWAYS run the full deep political analysis phase** — analysis is the primary output and must execute on every run regardless of article existence. 
- -### Step 2: Query MCP -```javascript -get_sync_status({}) -search_dokument({ from_date: lastMonth, to_date: today, limit: 50 }) -``` - -### Step 2.5: Run Pre-Article Analysis Pipeline - -**CRITICAL: Download data first, then AI creates ALL 14 analysis artifacts (9 core + 5 Tier-C reference-grade).** `download-parliamentary-data.ts` downloads raw data ONLY — it performs NO analysis. The AI agent MUST: -1. Read `analysis/methodologies/ai-driven-analysis-guide.md` fully -2. Read ALL 8 templates in `analysis/templates/` -3. **STEP 0 — Upstream Watchpoint Ingestion (MANDATORY, per `SHARED_PROMPT_PATTERNS.md` §"Recent Daily Knowledge-Base Synthesis")**: ingest forward indicators from the last **30 days** of sibling daily runs + all intervening `weekly-review` runs + prior `monthly-review`. Build the Watchpoint Reconciliation table (no silent drops). -4. Create ALL **14** analysis files in `analysis/daily/YYYY-MM-DD/monthly-review/` using evidence from the downloaded data AND the ingested upstream watchpoints -5. Reference exemplars: [`analysis/daily/2026-04-18/weekly-review/`](../../analysis/daily/2026-04-18/weekly-review/) and [`analysis/daily/2026-04-19/month-ahead/`](../../analysis/daily/2026-04-19/month-ahead/) - -Run the **14-Artifact Completeness Gate** (aggregation workflow) from `SHARED_PROMPT_PATTERNS.md` §"14 REQUIRED Artifacts for AGGREGATION Workflows — Reference-Grade Tier-C" to verify ALL 14 files exist: the 9 core (synthesis-summary.md, swot-analysis.md, risk-assessment.md, threat-analysis.md, classification-results.md, significance-scoring.md, stakeholder-perspectives.md, cross-reference-map.md, data-download-manifest.md) PLUS the 5 Tier-C reference-grade files (README.md, executive-brief.md, scenario-analysis.md, comparative-international.md, methodology-reflection.md). - -> 🔴 **MANDATORY 1.5× PERIOD-SCOPE MULTIPLIER (added 2026-04-19)**: `monthly-review` covers **30 days** — 4× the period of `weekly-review`. 
The 14-Artifact Completeness Gate applies the **1.5× period-scope multiplier** from `SHARED_PROMPT_PATTERNS.md` §"Period-Scope Multipliers". Scaled byte thresholds are automatic in the gate bash script (`case "$ANALYSIS_SUBFOLDER" in monthly-review*) MULT_NUM=15; MULT_DEN=10 ;;`). **Target**: monthly-review total package ≥ 1.5× the most recent weekly-review exemplar. Explicit Tier-C minimums for monthly-review: README ≥ 4 500 B · executive-brief ≥ 5 250 B · scenario-analysis ≥ 6 000 B · comparative-international ≥ 6 000 B · methodology-reflection ≥ 6 000 B. **Reference exemplar**: [`analysis/daily/2026-04-19/monthly-review/`](../../analysis/daily/2026-04-19/monthly-review/) — 14 artifacts, total ≈ 115 KB, 16-row Upstream Watchpoint Reconciliation, 12-row ACH grid in `scenario-analysis.md`, 7-jurisdiction benchmarking in `comparative-international.md`. A monthly-review regressing below this reference is REJECTED. - -```bash -date -u +%Y-%m-%d > /tmp/today.txt -read ARTICLE_DATE < /tmp/today.txt -echo "📊 Running pre-article analysis for $ARTICLE_DATE..." -# CRITICAL: Source mcp-setup.sh FIRST to set MCP_SERVER_URL and MCP_AUTH_TOKEN for the gateway -source scripts/mcp-setup.sh -npx tsx scripts/download-parliamentary-data.ts --date "$ARTICLE_DATE" --limit 200 > /tmp/pipeline-output.log 2>&1 -PIPE_EXIT=$? 
-cat /tmp/pipeline-output.log -if [ "$PIPE_EXIT" -ne 0 ]; then - echo "❌ Pipeline failed — agent MUST diagnose and fix (read /tmp/pipeline-output.log)" - tail -20 /tmp/pipeline-output.log -fi -echo "📊 Analysis artifacts for $ARTICLE_DATE:" -ls -la "analysis/daily/$ARTICLE_DATE/" 2>/dev/null || echo "⚠️ No analysis output" -find analysis/data/ -name "*.json" -type f 2>/dev/null | wc -l > /tmp/data_count.txt -read DATA_JSON_COUNT < /tmp/data_count.txt -echo "📊 JSON data files: $DATA_JSON_COUNT (must be > 0)" -# Relocate pipeline artifacts: download-parliamentary-data.ts writes to analysis/daily/$DATE/ (unscoped) -# but this workflow needs them under analysis/daily/$DATE/monthly-review/ -# === Run Suffix Resolution (see SHARED_PROMPT_PATTERNS.md) === -BASE_SUBFOLDER="monthly-review" -ANALYSIS_SUBFOLDER="$BASE_SUBFOLDER" -if [ "$FORCE_GENERATION" != "true" ]; then - _SUFFIX=1 - while [ -f "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER/synthesis-summary.md" ]; do - _SUFFIX=$((_SUFFIX + 1)) - ANALYSIS_SUBFOLDER="$BASE_SUBFOLDER-$_SUFFIX" - done -fi -echo "📁 Analysis subfolder resolved: $ANALYSIS_SUBFOLDER" -UNSCOPED_DIR="analysis/daily/$ARTICLE_DATE" -SCOPED_DIR="$UNSCOPED_DIR/$ANALYSIS_SUBFOLDER" -if [ -d "$UNSCOPED_DIR" ]; then - mkdir -p "$SCOPED_DIR" - if find "$UNSCOPED_DIR" -maxdepth 1 -type f -name "*.md" | grep -q .; then - find "$UNSCOPED_DIR" -maxdepth 1 -type f -name "*.md" -exec mv -f {} "$SCOPED_DIR/" \; - echo "📁 Moved pipeline *.md artifacts → $SCOPED_DIR (root cleaned to prevent merge conflicts)" - fi - if [ -d "$UNSCOPED_DIR/documents" ]; then - mkdir -p "$SCOPED_DIR/documents" - find "$UNSCOPED_DIR/documents" -mindepth 1 -maxdepth 1 -exec mv {} "$SCOPED_DIR/documents/" \; - rmdir "$UNSCOPED_DIR/documents" 2>/dev/null || true - echo "📁 Relocated pipeline documents/ contents → $SCOPED_DIR/documents" - fi -fi -if [ "$DATA_JSON_COUNT" -eq 0 ]; then - echo "🚨 CRITICAL: Pipeline downloaded ZERO data. 
Agent MUST diagnose and fix — do NOT fabricate analysis." -fi -``` - -**Weekly aggregation**: Since this is a monthly-scope workflow, also aggregate the current week's daily analyses for complete context: - -```bash -date -u +%G-W%V > /tmp/week_label.txt -read WEEK_LABEL < /tmp/week_label.txt -echo "📅 Running weekly aggregation for $WEEK_LABEL..." -source scripts/mcp-setup.sh && npx tsx scripts/download-parliamentary-data.ts --aggregate weekly --date "$WEEK_LABEL" || echo "⚠️ Weekly aggregation failed (non-blocking)" -ls -la "analysis/weekly/$WEEK_LABEL/" 2>/dev/null || echo "⚠️ No weekly aggregation output" -``` - -These files are committed alongside articles for human review and continuous improvement. - -### 🚨 MANDATORY: Analysis Artifacts Must ALWAYS Be Committed - -**Before deciding whether to generate articles or call noop, you MUST:** - -1. **Review the analysis artifacts** in `analysis/daily/YYYY-MM-DD/` and `analysis/weekly/` — read `synthesis-summary.md` and `significance-scoring.md` to understand what was found -2. **Summarize the analysis findings** — note how many documents were downloaded, their significance scores, key themes, and risk levels -3. 
**ALWAYS commit analysis artifacts** regardless of whether articles will be generated: - -```bash -date -u +%Y-%m-%d > /tmp/today.txt -read ARTICLE_DATE < /tmp/today.txt -ANALYSIS_DIR="analysis/daily/$ARTICLE_DATE/monthly-review" -ANALYSIS_COUNT=0 -if [ -d "$ANALYSIS_DIR" ]; then - find "$ANALYSIS_DIR" -type f 2>/dev/null | wc -l > /tmp/analysis_count.txt - read ANALYSIS_COUNT < /tmp/analysis_count.txt -fi -date -u +%G-W%V > /tmp/week_label.txt -read WEEK_LABEL < /tmp/week_label.txt -WEEKLY_DIR="analysis/weekly/$WEEK_LABEL" -if [ -d "$WEEKLY_DIR" ]; then - find "$WEEKLY_DIR" -type f 2>/dev/null | wc -l > /tmp/weekly_count.txt - read WEEKLY_COUNT < /tmp/weekly_count.txt - ANALYSIS_COUNT=$((ANALYSIS_COUNT + WEEKLY_COUNT)) -fi -if [ "$ANALYSIS_COUNT" -gt 0 ]; then - echo "📊 Found $ANALYSIS_COUNT total analysis artifacts — these MUST be committed (do NOT use safeoutputs___noop)" -else - echo "📊 Found 0 analysis artifacts — safeoutputs___noop is allowed (no files to commit)" -fi -``` - -> **🚨 CRITICAL RULE: Never call `safeoutputs___noop` if analysis artifacts exist.** If the pre-article analysis pipeline produced ANY output files, you MUST commit them via `safeoutputs___create_pull_request` — even if no articles are generated. Use an analysis-only PR with title: `📊 Analysis Only - Monthly Review - {date}` and label `analysis-only`. Only use `safeoutputs___noop` if the analysis pipeline produced ZERO output files (truly nothing to analyze). - -### 🔬 Step 2b: Read ALL Analysis Files + Cross-Reference Sibling Types (MANDATORY) - -> 🔴 **NON-NEGOTIABLE**: Monthly review synthesizes the entire month's parliamentary activity. The AI MUST read ALL analysis files from ALL article types before generating the review. See SHARED_PROMPT_PATTERNS.md §"MANDATORY PRE-ARTICLE ANALYSIS READING". - -```bash -ANALYSIS_SUBFOLDER="monthly-review" -ANALYSIS_BASE="analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER" - -echo "📖 Reading ALL analysis files from $ANALYSIS_BASE..." 
-if [ -d "$ANALYSIS_BASE" ]; then - for MD_FILE in "$ANALYSIS_BASE"/*.md; do - if [ -f "$MD_FILE" ]; then - echo "--- Reading: $MD_FILE ---" - cat "$MD_FILE" - echo "" - fi - done -fi - -echo "🔍 Cross-referencing sibling analysis types for $ARTICLE_DATE..." -for SIBLING_DIR in analysis/daily/$ARTICLE_DATE/*/; do - if [ -d "$SIBLING_DIR" ]; then - echo "$SIBLING_DIR" | sed 's|/$||' | sed 's|.*/||' > /tmp/sibling_type.txt - read SIBLING_TYPE < /tmp/sibling_type.txt - if [ "$SIBLING_TYPE" = "$ANALYSIS_SUBFOLDER" ]; then continue; fi - echo "📖 Cross-referencing: $SIBLING_TYPE" - for SIBLING_FILE in "$SIBLING_DIR/synthesis-summary.md" "$SIBLING_DIR/significance-scoring.md"; do - if [ -f "$SIBLING_FILE" ]; then - echo "--- Sibling ($SIBLING_TYPE): $SIBLING_FILE ---" - cat "$SIBLING_FILE" - echo "" - fi - done - fi -done -echo "✅ Cross-referencing complete — monthly review MUST incorporate findings from all sibling types" -``` - -### Step 3: Generate Articles - -```bash -# Set LANGUAGES_INPUT to the value shown in Workflow Dispatch Parameters above -LANGUAGES_INPUT="<value from Workflow Dispatch Parameters>" -[ -z "$LANGUAGES_INPUT" ] && LANGUAGES_INPUT="all" - -case "$LANGUAGES_INPUT" in - "nordic") LANG_ARG="en,sv,da,no,fi" ;; - "eu-core") LANG_ARG="en,sv,de,fr,es,nl" ;; - "all") LANG_ARG="en,sv,da,no,fi,de,fr,es,nl,ar,he,ja,ko,zh" ;; - *) LANG_ARG="$LANGUAGES_INPUT" ;; -esac - -source scripts/mcp-setup.sh && npx tsx scripts/generate-news-enhanced.ts \ - --types=monthly-review \ - --languages="$LANG_ARG" \ - --skip-existing -``` - -**Article Navigation Verification**: The `generate-news-enhanced.ts` script automatically includes all required navigation elements: -- **Language switcher** (`<nav class="language-switcher">`) after `<body>` with all 14 languages -- **Back-to-news top nav** (`<div class="article-top-nav">`) with localized back link after language switcher -- **Footer back-to-news link** in `<footer class="article-footer">` - -These elements are validated by 
`bash scripts/validate-news-generation.sh` (Checks 8–10). The fix script is a **fallback only** — do not run it by default: -```bash -# FALLBACK ONLY — use if validate-news-generation.sh reports missing navigation elements -npx tsx scripts/fix-article-navigation.ts -``` - ---- - -## Step 2.6: Economic Data Acquisition (MANDATORY) - -> **Contract**: [`.github/aw/ECONOMIC_DATA_CONTRACT.md`](../aw/ECONOMIC_DATA_CONTRACT.md) — the **single source of truth** for World Bank + SCB data, Chart.js visualisations, and AI commentary. Follow it exactly; the Step 6 quality gate (`scripts/validate-economic-context.ts`) **blocks the PR** if any element is missing. - -**What you MUST do before writing any prose:** - -1. `view analysis/worldbank/indicators-inventory.json` and pick every indicator whose `committees` / `policyAreas` match the day's source documents. -2. Call `world-bank.get-economic-data` / `get-social-data` / `get-health-data` / `get-education-data` for Sweden (10-year series for primary domains) and for DK/NO/FI/DE (5-year series for the top 3 indicators — needed for the Nordic comparison bars and radar). -3. Call `scb.search_tables` + `scb.query_table` using the committee → TAB mapping in `scripts/scb-context.ts`. **`language` MUST be `"sv"` or `"en"` — NEVER `"no"`** (SCB returns HTTP 400 "Unsupported language"). -4. Retry every World Bank call up to **3 times** on failure. Cache raw responses under `analysis/data/worldbank/<YYYY>/<indicator>-<country>.json` so later article types in the same daily run reuse the data. -5. 
Write `analysis/daily/<ARTICLE_DATE>/<ANALYSIS_SUBFOLDER>/economic-data.json` matching `analysis/schemas/economic-data.schema.json`: - -```jsonc -{ - "version": "1.0", - "articleType": "monthly-review", - "date": "<YYYY-MM-DD>", - "policyDomains": ["fiscal policy", "labor market"], - "dataPoints": [ - { "countryCode": "SWE", "countryName": "Sweden", "indicatorId": "NY.GDP.MKTP.KD.ZG", "date": "2024", "value": 0.82 }, - { "countryCode": "DNK", "countryName": "Denmark", "indicatorId": "NY.GDP.MKTP.KD.ZG", "date": "2024", "value": 1.75 } - ], - "commentary": "<will be filled in Step 3d>", - "source": { "worldBank": ["NY.GDP.MKTP.KD.ZG", "FP.CPI.TOTL.ZG"], "scb": ["TAB1291"] } -} -``` - -**Non-negotiable**: `dataPoints` MUST be non-empty. The HTML renderer (`scripts/data-transformers/content-generators/economic-dashboard-section.ts`) emits real Chart.js canvases only when the file exists with entries — otherwise the validator fails the PR. - -**Minimum coverage (enforced by the validator):** see the matrix in `ECONOMIC_DATA_CONTRACT.md` §"Coverage matrix" for this article type's chart count, commentary word minimum, and D3 requirement. - ---- -### Step 3b: AI Title, Meta Description & Analysis References - -> 🚨 **MANDATORY** — After article HTML is generated, the AI MUST improve titles, descriptions, and add analysis references. See `SHARED_PROMPT_PATTERNS.md` sections "AI-DRIVEN TITLE & META DESCRIPTION GENERATION" and "ANALYSIS FILE GITHUB REFERENCES" for full protocols. - -**1. Generate newsworthy titles** — Replace script-generated title with: `[Active Verb] + [Specific Institution] + [Concrete Policy Action]`. BANNED: ❌ generic category labels or ": {Topic} in Focus". - -**2. Generate AI meta descriptions** (150-160 chars) — Key political intelligence summary. BANNED: ❌ "Analysis of N documents". - -**3. 
🔴 Add analysis references (MANDATORY — VERIFY AFTER)** — Insert "📊 Analysis & Sources" HTML block (from SHARED_PROMPT_PATTERNS.md §ANALYSIS FILE GITHUB REFERENCES) linking to `analysis/daily/$ARTICLE_DATE/monthly-review/` files and `analysis/methodologies/ai-driven-analysis-guide.md`. +# 📈 Monthly Review -**After inserting, VERIFY** by running: -```bash -for FILE in news/$ARTICLE_DATE-monthly-review-*.html; do - if [ -f "$FILE" ] && ! grep -q 'class="analysis-references"' "$FILE"; then - echo "🔴 MISSING analysis-references in: $FILE — MUST FIX NOW" - fi -done -``` +Retrospective 30-day political intelligence review synthesising longitudinal patterns. Tier-C reference-grade output (14 artifacts). Core languages EN, SV. -**4. Update all metadata** — `<title>`, `<meta name="description">`, `<meta property="og:title">`, `<meta property="og:description">`, and `<h1>`. +## Tier-C (reference-grade) requirements -### Step 4: Translate, Validate & Verify Analysis Quality +This workflow imports `../prompts/ext/tier-c-aggregation.md`. Produce **all 14 artifacts** (9 core + 5 Tier-C) and cross-reference sibling analyses. See the extension for the full rules. -Run analysis references fix, validation, and HTMLHint before creating PR: -```bash -# 🔴 MANDATORY: Inject analysis references into any article missing them -npx tsx scripts/fix-analysis-references.ts --date "$ARTICLE_DATE" --rewrite --type monthly-review +## What this workflow does -bash scripts/validate-news-generation.sh -VALIDATION_EXIT=$? -if [ "$VALIDATION_EXIT" -ne 0 ]; then - echo "❌ News generation validation failed. Fix the reported issues before creating a PR." - exit "$VALIDATION_EXIT" -fi +- **Article type**: `monthly-review` +- **Analysis subfolder**: `analysis/daily/$ARTICLE_DATE/monthly-review/` +- **Core languages produced**: `en`, `sv` (remaining 12 languages dispatched to `news-translate`) +- **One pull request per run** containing analysis + articles + visualisation data. 
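The per-run output contract above (core EN/SV articles, one PR) can be sketched as a quick shell check. This is a hypothetical demo using temp paths under `/tmp`, not part of the pipeline scripts, written in the repo's AWF-safe style (no `$(...)` command substitution):

```shell
# Demo of the core-article check: this workflow is expected to leave
# news/<date>-monthly-review-<lang>.html for each core language before
# dispatching news-translate. Paths here are simulated.
NEWS_DIR="/tmp/news-demo.$$"
mkdir -p "$NEWS_DIR"
ARTICLE_DATE="2026-04-21"
# Simulate the two core-language articles produced by this workflow.
touch "$NEWS_DIR/$ARTICLE_DATE-monthly-review-en.html" \
      "$NEWS_DIR/$ARTICLE_DATE-monthly-review-sv.html"
MISSING=0
for CORE_LANG in en sv; do
  if [ ! -f "$NEWS_DIR/$ARTICLE_DATE-monthly-review-$CORE_LANG.html" ]; then
    MISSING=$((MISSING + 1))
  fi
done
echo "missing_core_articles=$MISSING"
rm -rf "$NEWS_DIR"
```

A real run would point `NEWS_DIR` at `news/` and use the resolved `$ARTICLE_DATE`; any non-zero count means translation dispatch should be skipped.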
-# HTMLHint validation with auto-fix for common nesting errors -find news -maxdepth 1 -name '*-*.html' 2>/dev/null | wc -l > /tmp/news_count.txt -read NEWS_FILES < /tmp/news_count.txt -if [ "$NEWS_FILES" -gt 0 ]; then - if ! npx htmlhint "news/*-*.html" 2>/dev/null; then - echo "⚠️ HTML validation errors found, attempting auto-fix..." - npx tsx scripts/article-quality-enhancer.ts --fix - if ! npx htmlhint "news/*-*.html"; then - echo "❌ HTML validation still failing after auto-fix. Please fix remaining issues manually before creating PR." - exit 1 - fi - fi -fi -# Playwright visual validation (accessibility, RTL, responsive) -npx tsx scripts/validate-articles-playwright.ts --filter "monthly-review" +## Time budget (60 min, minimum 45 min of real work) -# Validate JSON-LD cross-references -npx tsx scripts/validate-cross-references.ts news/*-monthly-review-*.html -``` +| Minutes | Phase | Module | +|---------|-------|--------| +| 0–2 | MCP pre-warm + `get_sync_status` | 02 | +| 2–6 | Download data + catalogue | 03 | +| 6–25 | Analysis Pass 1 (methodology read + per-doc analyses + 9 artifacts) | 04 | +| 25–35 | Analysis Pass 2 (read-back + improvements) | 04 | +| 35–37 | Analysis Gate | 05 | +| 37–48 | Article Pass 1 + Pass 2 (EN, SV) | 06 | +| 48–55 | Visual + link validation | 06 | +| 55–60 | Stage, commit, **ONE** `safeoutputs___create_pull_request` | 07 | -**CRITICAL: Each article MUST contain real analysis, not just a list of translated document links.** -Every generated article must include thematic analysis grouping documents by type and policy area, interpretive commentary on what the month's activity reveals about political dynamics, and key takeaways. +Trim scope before quality. Never open a second PR to "save" partial work — there is no second PR. -**Note**: News index files, metadata, and sitemap are generated automatically at build time by the `prebuild` script. Do NOT run generation scripts or commit their output — only commit the article HTML files. 
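The phased time budget above can be enforced mechanically. A minimal sketch, assuming a `START_EPOCH` recorded at run start (simulated here as 38 minutes ago), again in AWF-safe style without `$(...)`:

```shell
# Hypothetical budget gate: compute elapsed minutes and decide whether the
# run has reached the article phase (37–48 min window in the table above).
date -u +%s > "/tmp/now.$$"
read NOW_EPOCH < "/tmp/now.$$"
rm -f "/tmp/now.$$"
START_EPOCH=$((NOW_EPOCH - 2280))   # simulate: the run began 38 minutes ago
ELAPSED_MIN=$(( (NOW_EPOCH - START_EPOCH) / 60 ))
if [ "$ELAPSED_MIN" -ge 37 ]; then
  PHASE="article"                   # past the 35–37 min Analysis Gate slot
else
  PHASE="analysis"
fi
echo "elapsed=${ELAPSED_MIN}min phase=$PHASE"
```

In a real run `START_EPOCH` would be written once at minute 0 and the check repeated at each phase boundary to trim scope early rather than overrun.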
+## Inputs -## Article Content Structure +- `article_date` — override date (defaults to today) +- `force_generation` — regenerate even if today's article exists (analysis is always refreshed regardless) +- `languages` — core content languages (default `en,sv`) +- `analysis_depth` — `standard` | `deep` (default) | `comprehensive` -Monthly review articles should include: -1. **Month in Numbers**: Key legislative statistics (bills passed, votes held, motions filed) -2. **Legislative Output**: Major legislation enacted or debated -3. **Government Performance**: Propositions tabled, policy direction analysis -4. **Coalition Dynamics**: Cross-party cooperation, voting discipline trends -5. **Committee Highlights**: Most significant committee reports and recommendations -6. **Opposition Activity**: Key motions, interpellations, government scrutiny -7. **Policy Trends**: Emerging patterns in government priorities -8. **Month's Most Consequential**: Deep analysis of the month's defining development -9. **Looking Ahead**: Preview of next month's parliamentary calendar +## Dedup & analysis-only path -## 🌐 Translation Quality +If articles for `$ARTICLE_DATE` + `monthly-review` already exist **and** `force_generation=false`: -EN/SV only: all headings, meta, content in correct language; no untranslated `data-translate` spans; Swedish API titles translated. Full rules: `news-translate.md`. +- Still run the full analysis pipeline (modules 03 → 04 → 05). +- Commit the analysis. +- Open the single PR with title `📊 Analysis Only — Monthly Review — $ARTICLE_DATE` and label `analysis-only`. -## Step 3d: Economic Commentary (MANDATORY) +Analysis is the primary product — a run never "does nothing" just because articles exist. 
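The dedup decision above reduces to one conditional. An illustrative sketch with demo paths under `/tmp` (an existing core article plus `force_generation=false` selects the analysis-only path):

```shell
# Simulate a prior run's article, then apply the dedup rule:
# article exists AND force_generation != true  →  analysis-only mode.
ARTICLE_DATE="2026-04-21"
FORCE_GENERATION="false"
DEMO_NEWS="/tmp/dedup-demo.$$"
mkdir -p "$DEMO_NEWS"
touch "$DEMO_NEWS/$ARTICLE_DATE-monthly-review-en.html"   # prior run's output
MODE="full"
if [ "$FORCE_GENERATION" != "true" ] && \
   [ -f "$DEMO_NEWS/$ARTICLE_DATE-monthly-review-en.html" ]; then
  MODE="analysis-only"   # still run modules 03 → 05; PR labeled analysis-only
fi
echo "mode=$MODE"
rm -rf "$DEMO_NEWS"
```

Either branch still ends in exactly one `safeoutputs___create_pull_request`; only the PR title and article-generation steps differ.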
-> After Step 3c and **before** calling `safeoutputs.create_pull_request`, re-open `economic-data.json` and replace the placeholder `commentary` string with a **6–8 sentence paragraph of ≥200 words** (enforced by `scripts/validate-economic-context.ts` — `monthly-review` = 200) that: -> - cites **≥4 concrete numeric values** from `dataPoints` (month-on-month or year-over-year changes, Nordic comparison, primary-indicator trajectory); -> - ties the numbers to the month's political developments (not definitions of indicators); -> - is written in plain English (translations are produced downstream by `news-translate`); -> - meets the minimum word count in the coverage matrix for this article type. -> -> Banned phrasings (the multi-dim quality score flags these): "The political landscape remains fluid…", "Touches on X policy…", pure indicator definitions. -> -> **Sankey / flow diagram** (required for `monthly-review`): `scripts/generate-news-enhanced/generators.ts` calls `buildArticleVisualizationSections` with `alwaysEmit: true` for this article type, so `class="sankey-section"` is auto-appended whenever the month has at least **one** document — even when every document collapses into a single doc-type bucket. The only case where no Sankey is emitted is an empty month (`docs.length === 0`); in that edge case the visualization builder returns an empty section list. The AI writer does not need to emit Sankey HTML directly — just verify the generated HTML contains `class="sankey-section"` before opening the PR: -> ```bash -> if grep -l 'class="sankey-section"' news/$ARTICLE_DATE-monthly-review-*.html; then -> echo "✅ Sankey section present" -> else -> # AWF-safe: no $(...) command substitution — use per-process temp file + read redirection, then clean up. 
-> doc_count_tmp="/tmp/doc_count.$$" -> find "analysis/daily/$ARTICLE_DATE/monthly-review/documents" -maxdepth 1 -name '*.json' 2>/dev/null | wc -l > "$doc_count_tmp" -> read doc_count < "$doc_count_tmp" -> rm -f "$doc_count_tmp" -> if [ "$doc_count" = "0" ]; then -> echo "ℹ️ Sankey section not emitted — the month has 0 documents (validator allows this)" -> else -> echo "❌ Sankey section missing — the validator will block the PR"; exit 1 -> fi -> fi -> ``` -> -> Full rules: [`.github/aw/ECONOMIC_DATA_CONTRACT.md`](../aw/ECONOMIC_DATA_CONTRACT.md) §"Writing the AI commentary — workflow Step 3d". +All other rules (bash format, AWF shell safety, MCP access, download pipeline, analysis methodology & gate, article generation, commit & PR policy) live in the imported modules. diff --git a/.github/workflows/news-motions.lock.yml b/.github/workflows/news-motions.lock.yml index 0afa07bbb..94776129f 100644 --- a/.github/workflows/news-motions.lock.yml +++ b/.github/workflows/news-motions.lock.yml @@ -1,4 +1,4 @@ -# gh-aw-metadata: {"schema_version":"v3","frontmatter_hash":"104ecae5711c1d6709e074581f8065603bf1bf6c53d552f2cce9cb736bf61e2d","compiler_version":"v0.69.2","agent_id":"copilot","agent_model":"claude-opus-4.7"} +# gh-aw-metadata: {"schema_version":"v3","frontmatter_hash":"87a02ea5489fdb3c025fcde6dacf95304e98f381a077a17c03bd38a79a74a564","compiler_version":"v0.69.2","agent_id":"copilot","agent_model":"claude-opus-4.7"} # gh-aw-manifest: 
{"version":1,"secrets":["COPILOT_GITHUB_TOKEN","GH_AW_CI_TRIGGER_TOKEN","GH_AW_GITHUB_MCP_SERVER_TOKEN","GH_AW_GITHUB_TOKEN","GITHUB_TOKEN"],"actions":[{"repo":"actions/checkout","sha":"de0fac2e4500dabe0009e67214ff5f5447ce83dd","version":"v6.0.2"},{"repo":"actions/download-artifact","sha":"3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c","version":"v8.0.1"},{"repo":"actions/github-script","sha":"373c709c69115d41ff229c7e5df9f8788daa9553","version":"v9"},{"repo":"actions/setup-node","sha":"6044e13b5dc448c55e2357c09f80417699197238","version":"6044e13b5dc448c55e2357c09f80417699197238"},{"repo":"actions/upload-artifact","sha":"043fb46d1a93c77aae656e7c1c64a875d1fc6a0a","version":"v7.0.1"},{"repo":"github/gh-aw-actions/setup","sha":"dca90cae5e2ec0ef2275f97efcb832793c86e082","version":"v0.69.2"}],"containers":[{"image":"alpine:latest"},{"image":"ghcr.io/github/gh-aw-firewall/agent:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/api-proxy:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/squid:0.25.26"},{"image":"ghcr.io/github/gh-aw-mcpg:v0.2.26"},{"image":"ghcr.io/github/github-mcp-server:v1.0.0"},{"image":"node:25-alpine"},{"image":"node:lts-alpine"}]} # ___ _ _ # / _ \ | | (_) @@ -24,6 +24,17 @@ # # Generates opposition motions analysis articles in core languages (EN, SV). Translations for remaining 12 languages are handled by the dedicated news-translate workflow via dispatch-workflow. Single article type per run. 
# +# Resolved workflow manifest: +# Imports: +# - ../prompts/00-base-contract.md +# - ../prompts/01-bash-and-shell-safety.md +# - ../prompts/02-mcp-access.md +# - ../prompts/03-data-download.md +# - ../prompts/04-analysis-pipeline.md +# - ../prompts/05-analysis-gate.md +# - ../prompts/06-article-generation.md +# - ../prompts/07-commit-and-pr.md +# # Secrets used: # - COPILOT_GITHUB_TOKEN # - GH_AW_CI_TRIGGER_TOKEN @@ -184,14 +195,9 @@ jobs: env: GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt GH_AW_SAFE_OUTPUTS: ${{ runner.temp }}/gh-aw/safeoutputs/outputs.jsonl - GH_AW_EXPR_731DE217: ${{ github.event.inputs.force_generation || 'false' }} GH_AW_GITHUB_ACTOR: ${{ github.actor }} GH_AW_GITHUB_EVENT_COMMENT_ID: ${{ github.event.comment.id }} GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER: ${{ github.event.discussion.number }} - GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH: ${{ github.event.inputs.analysis_depth }} - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE: ${{ github.event.inputs.article_date }} - GH_AW_GITHUB_EVENT_INPUTS_FORCE_GENERATION: ${{ github.event.inputs.force_generation }} - GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES: ${{ github.event.inputs.languages }} GH_AW_GITHUB_EVENT_ISSUE_NUMBER: ${{ github.event.issue.number }} GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: ${{ github.event.pull_request.number }} GH_AW_GITHUB_REPOSITORY: ${{ github.repository }} @@ -202,21 +208,21 @@ jobs: run: | bash "${RUNNER_TEMP}/gh-aw/actions/create_prompt_first.sh" { - cat << 'GH_AW_PROMPT_e155177939d49e4a_EOF' + cat << 'GH_AW_PROMPT_6ab3c59bd1f4ebba_EOF' <system> - GH_AW_PROMPT_e155177939d49e4a_EOF + GH_AW_PROMPT_6ab3c59bd1f4ebba_EOF cat "${RUNNER_TEMP}/gh-aw/prompts/xpia.md" cat "${RUNNER_TEMP}/gh-aw/prompts/temp_folder_prompt.md" cat "${RUNNER_TEMP}/gh-aw/prompts/markdown.md" cat "${RUNNER_TEMP}/gh-aw/prompts/agentic_workflows_guide.md" cat "${RUNNER_TEMP}/gh-aw/prompts/repo_memory_prompt.md" cat "${RUNNER_TEMP}/gh-aw/prompts/safe_outputs_prompt.md" - cat << 'GH_AW_PROMPT_e155177939d49e4a_EOF' + cat << 
'GH_AW_PROMPT_6ab3c59bd1f4ebba_EOF' <safe-output-tools> - Tools: add_comment, create_pull_request(max:2), dispatch_workflow, missing_tool, missing_data, noop - GH_AW_PROMPT_e155177939d49e4a_EOF + Tools: add_comment, create_pull_request, dispatch_workflow, missing_tool, missing_data, noop + GH_AW_PROMPT_6ab3c59bd1f4ebba_EOF cat "${RUNNER_TEMP}/gh-aw/prompts/safe_outputs_create_pull_request.md" - cat << 'GH_AW_PROMPT_e155177939d49e4a_EOF' + cat << 'GH_AW_PROMPT_6ab3c59bd1f4ebba_EOF' </safe-output-tools> <github-context> The following GitHub context information is available for this workflow: @@ -246,22 +252,25 @@ jobs: {{/if}} </github-context> - GH_AW_PROMPT_e155177939d49e4a_EOF + GH_AW_PROMPT_6ab3c59bd1f4ebba_EOF cat "${RUNNER_TEMP}/gh-aw/prompts/github_mcp_tools_with_safeoutputs_prompt.md" - cat << 'GH_AW_PROMPT_e155177939d49e4a_EOF' + cat << 'GH_AW_PROMPT_6ab3c59bd1f4ebba_EOF' </system> + {{#runtime-import .github/prompts/00-base-contract.md}} + {{#runtime-import .github/prompts/01-bash-and-shell-safety.md}} + {{#runtime-import .github/prompts/02-mcp-access.md}} + {{#runtime-import .github/prompts/03-data-download.md}} + {{#runtime-import .github/prompts/04-analysis-pipeline.md}} + {{#runtime-import .github/prompts/05-analysis-gate.md}} + {{#runtime-import .github/prompts/06-article-generation.md}} + {{#runtime-import .github/prompts/07-commit-and-pr.md}} {{#runtime-import .github/workflows/news-motions.md}} - GH_AW_PROMPT_e155177939d49e4a_EOF + GH_AW_PROMPT_6ab3c59bd1f4ebba_EOF } > "$GH_AW_PROMPT" - name: Interpolate variables and render templates uses: actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9 env: GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt - GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH: ${{ github.event.inputs.analysis_depth }} - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE: ${{ github.event.inputs.article_date }} - GH_AW_EXPR_731DE217: ${{ github.event.inputs.force_generation || 'false' }} - GH_AW_GITHUB_EVENT_INPUTS_FORCE_GENERATION: ${{ 
github.event.inputs.force_generation }} - GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES: ${{ github.event.inputs.languages }} with: script: | const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); @@ -272,14 +281,9 @@ jobs: uses: actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9 env: GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt - GH_AW_EXPR_731DE217: ${{ github.event.inputs.force_generation || 'false' }} GH_AW_GITHUB_ACTOR: ${{ github.actor }} GH_AW_GITHUB_EVENT_COMMENT_ID: ${{ github.event.comment.id }} GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER: ${{ github.event.discussion.number }} - GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH: ${{ github.event.inputs.analysis_depth }} - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE: ${{ github.event.inputs.article_date }} - GH_AW_GITHUB_EVENT_INPUTS_FORCE_GENERATION: ${{ github.event.inputs.force_generation }} - GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES: ${{ github.event.inputs.languages }} GH_AW_GITHUB_EVENT_ISSUE_NUMBER: ${{ github.event.issue.number }} GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: ${{ github.event.pull_request.number }} GH_AW_GITHUB_REPOSITORY: ${{ github.repository }} @@ -302,14 +306,9 @@ jobs: return await substitutePlaceholders({ file: process.env.GH_AW_PROMPT, substitutions: { - GH_AW_EXPR_731DE217: process.env.GH_AW_EXPR_731DE217, GH_AW_GITHUB_ACTOR: process.env.GH_AW_GITHUB_ACTOR, GH_AW_GITHUB_EVENT_COMMENT_ID: process.env.GH_AW_GITHUB_EVENT_COMMENT_ID, GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER: process.env.GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER, - GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH: process.env.GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH, - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE: process.env.GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE, - GH_AW_GITHUB_EVENT_INPUTS_FORCE_GENERATION: process.env.GH_AW_GITHUB_EVENT_INPUTS_FORCE_GENERATION, - GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES: process.env.GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES, GH_AW_GITHUB_EVENT_ISSUE_NUMBER: 
process.env.GH_AW_GITHUB_EVENT_ISSUE_NUMBER, GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: process.env.GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER, GH_AW_GITHUB_REPOSITORY: process.env.GH_AW_GITHUB_REPOSITORY, @@ -411,7 +410,7 @@ jobs: run: | npm ci --prefer-offline --no-audit - name: Pre-warm MCP server (Render.com cold start mitigation) - run: "echo \"🔥 Pre-warming riksdag-regering MCP server via MCP protocol...\"\nMCP_URL=\"https://riksdag-regering-ai.onrender.com/mcp\"\nWARM=false\nfor i in 1 2 3 4 5 6; do\n RESP=$(curl -sf --max-time 30 -X POST \\\n -H \"Content-Type: application/json\" \\\n -d '{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\",\"params\":{}}' \\\n \"$MCP_URL\" 2>/dev/null) || true\n if echo \"$RESP\" | grep -q '\"tools\"'; then\n TOOL_COUNT=$(echo \"$RESP\" | grep -o '\"name\"' | wc -l)\n echo \"✅ MCP server responded on attempt $i with $TOOL_COUNT tools registered\"\n WARM=true\n break\n fi\n echo \"⏳ Attempt $i/6 — server may be cold-starting, waiting 20s...\"\n sleep 20\ndone\nif [ \"$WARM\" = \"false\" ]; then\n echo \"⚠️ MCP server did not respond after 6 attempts — agent will retry via in-prompt health gate\"\nfi\necho \"🔄 Starting background keep-alive pinger (every 30s, max 55 min — covers full 60-min workflow through safe-output PR creation)...\"\nKEEP_ALIVE_START=$(date +%s)\nKEEP_ALIVE_END=$((KEEP_ALIVE_START + 3300))\nexport MCP_URL KEEP_ALIVE_END\nnohup bash -c '\n while :; do\n NOW=$(date +%s)\n if [ \"$NOW\" -ge \"$KEEP_ALIVE_END\" ]; then\n break\n fi\n curl -sf --max-time 10 -X POST \\\n -H \"Content-Type: application/json\" \\\n -d \"{\\\"jsonrpc\\\":\\\"2.0\\\",\\\"id\\\":1,\\\"method\\\":\\\"tools/list\\\",\\\"params\\\":{}}\" \\\n \"$MCP_URL\" -o /dev/null 2>/dev/null || true\n sleep 30\n done\n' </dev/null >/tmp/mcp-keepalive.log 2>&1 &\nKEEP_ALIVE_PID=$!\ndisown \"$KEEP_ALIVE_PID\" 2>/dev/null || true\necho \"Keep-alive PID: $KEEP_ALIVE_PID (auto-exits after 55 min; log: /tmp/mcp-keepalive.log)\"\n" + run: "echo \"🔥 
Pre-warming riksdag-regering MCP server via MCP protocol...\"\nMCP_URL=\"https://riksdag-regering-ai.onrender.com/mcp\"\nWARM=false\nfor i in 1 2 3 4 5 6; do\n RESP=$(curl -sf --max-time 30 -X POST \\\n -H \"Content-Type: application/json\" \\\n -d '{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\",\"params\":{}}' \\\n \"$MCP_URL\" 2>/dev/null) || true\n if echo \"$RESP\" | grep -q '\"tools\"'; then\n TOOL_COUNT=$(echo \"$RESP\" | grep -o '\"name\"' | wc -l)\n echo \"✅ MCP server responded on attempt $i with $TOOL_COUNT tools registered\"\n WARM=true\n break\n fi\n echo \"⏳ Attempt $i/6 — server may be cold-starting, waiting 20s...\"\n sleep 20\ndone\nif [ \"$WARM\" = \"false\" ]; then\n echo \"⚠️ MCP server did not respond after 6 attempts — agent will retry via in-prompt health gate\"\nfi\n" - name: Pre-flight external endpoint reachability check (runs before MCP Gateway) run: "echo \"🔍 Network Diagnostics — $(date -u '+%Y-%m-%dT%H:%M:%SZ')\"\necho \"═══════════════════════════════════════════\"\necho \"\"\necho \"📡 DNS Resolution Tests:\"\nfor domain in riksdag-regering-ai.onrender.com api.scb.se api.worldbank.org data.riksdagen.se www.riksdagen.se www.regeringen.se; do\n if nslookup \"$domain\" >/dev/null 2>&1; then\n IP=$(nslookup \"$domain\" 2>/dev/null | grep -A1 \"Name:\" | grep \"Address:\" | head -1 | awk '{print $2}')\n echo \" ✅ $domain → $IP\"\n else\n echo \" ❌ $domain — DNS FAILED\"\n fi\ndone\necho \"\"\necho \"🌐 HTTPS Connectivity Tests:\"\nfor url in \\\n \"https://riksdag-regering-ai.onrender.com/mcp\" \\\n \"https://api.scb.se/OV0104/v2beta\" \\\n \"https://api.worldbank.org/v2/country/SE?format=json\" \\\n \"https://data.riksdagen.se/dokumentlista/?sok=test&doktyp=bet&utformat=json&a=1\" \\\n; do\n HTTP_CODE=$(curl -s -o /dev/null -w \"%{http_code}\" --max-time 10 \"$url\" 2>/dev/null || echo \"000\")\n DOMAIN=$(echo \"$url\" | sed 's|https://||' | cut -d/ -f1)\n if [ \"$HTTP_CODE\" -ge 200 ] && [ \"$HTTP_CODE\" -lt 400 ]; then\n echo \" 
✅ $DOMAIN → HTTP $HTTP_CODE\"\n elif [ \"$HTTP_CODE\" = \"000\" ]; then\n echo \" ❌ $DOMAIN → TIMEOUT/UNREACHABLE\"\n else\n echo \" ⚠️ $DOMAIN → HTTP $HTTP_CODE\"\n fi\ndone\necho \"\"\necho \"🔌 MCP Server Tool Count:\"\nTOOL_RESP=$(curl -sf --max-time 15 -X POST \\\n -H \"Content-Type: application/json\" \\\n -d '{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\",\"params\":{}}' \\\n \"https://riksdag-regering-ai.onrender.com/mcp\" 2>/dev/null) || TOOL_RESP=\"\"\nif echo \"$TOOL_RESP\" | grep -q '\"tools\"'; then\n TOOL_COUNT=$(echo \"$TOOL_RESP\" | grep -o '\"name\"' | wc -l)\n echo \" ✅ riksdag-regering MCP: $TOOL_COUNT tools registered\"\nelse\n echo \" ❌ riksdag-regering MCP: No tools response (server may still be starting)\"\nfi\necho \"\"\necho \"═══════════════════════════════════════════\"\n" @@ -499,16 +498,16 @@ jobs: mkdir -p "${RUNNER_TEMP}/gh-aw/safeoutputs" mkdir -p /tmp/gh-aw/safeoutputs mkdir -p /tmp/gh-aw/mcp-logs/safeoutputs - cat > "${RUNNER_TEMP}/gh-aw/safeoutputs/config.json" << 'GH_AW_SAFE_OUTPUTS_CONFIG_97606b750c3b6c50_EOF' - 
{"add_comment":{"max":1},"create_pull_request":{"draft":false,"expires":336,"labels":["agentic-news","analysis-data"],"max":2,"max_patch_size":4096,"protected_files":["package.json","bun.lockb","bunfig.toml","deno.json","deno.jsonc","deno.lock","global.json","NuGet.Config","Directory.Packages.props","mix.exs","mix.lock","go.mod","go.sum","stack.yaml","stack.yaml.lock","pom.xml","build.gradle","build.gradle.kts","settings.gradle","settings.gradle.kts","gradle.properties","package-lock.json","yarn.lock","pnpm-lock.yaml","npm-shrinkwrap.json","requirements.txt","Pipfile","Pipfile.lock","pyproject.toml","setup.py","setup.cfg","Gemfile","Gemfile.lock","uv.lock","CODEOWNERS","AGENTS.md","CLAUDE.md","GEMINI.md"],"protected_path_prefixes":[".github/",".agents/"]},"create_report_incomplete_issue":{},"dispatch_workflow":{"max":1,"workflow_files":{"news-translate":".lock.yml"},"workflows":["news-translate"]},"missing_data":{},"missing_tool":{},"noop":{"max":1,"report-as-issue":"true"},"push_repo_memory":{"memories":[{"dir":"/tmp/gh-aw/repo-memory/default","id":"default","max_file_count":50,"max_file_size":51200,"max_patch_size":51200}]},"report_incomplete":{}} - GH_AW_SAFE_OUTPUTS_CONFIG_97606b750c3b6c50_EOF + cat > "${RUNNER_TEMP}/gh-aw/safeoutputs/config.json" << 'GH_AW_SAFE_OUTPUTS_CONFIG_2544c0f641f092c4_EOF' + 
{"add_comment":{"max":1},"create_pull_request":{"draft":false,"expires":336,"labels":["agentic-news","analysis-data"],"max":1,"max_patch_size":4096,"protected_files":["package.json","bun.lockb","bunfig.toml","deno.json","deno.jsonc","deno.lock","global.json","NuGet.Config","Directory.Packages.props","mix.exs","mix.lock","go.mod","go.sum","stack.yaml","stack.yaml.lock","pom.xml","build.gradle","build.gradle.kts","settings.gradle","settings.gradle.kts","gradle.properties","package-lock.json","yarn.lock","pnpm-lock.yaml","npm-shrinkwrap.json","requirements.txt","Pipfile","Pipfile.lock","pyproject.toml","setup.py","setup.cfg","Gemfile","Gemfile.lock","uv.lock","CODEOWNERS","AGENTS.md","CLAUDE.md","GEMINI.md"],"protected_path_prefixes":[".github/",".agents/"]},"create_report_incomplete_issue":{},"dispatch_workflow":{"max":1,"workflow_files":{"news-translate":".lock.yml"},"workflows":["news-translate"]},"missing_data":{},"missing_tool":{},"noop":{"max":1,"report-as-issue":"true"},"push_repo_memory":{"memories":[{"dir":"/tmp/gh-aw/repo-memory/default","id":"default","max_file_count":50,"max_file_size":51200,"max_patch_size":51200}]},"report_incomplete":{}} + GH_AW_SAFE_OUTPUTS_CONFIG_2544c0f641f092c4_EOF - name: Write Safe Outputs Tools env: GH_AW_TOOLS_META_JSON: | { "description_suffixes": { "add_comment": " CONSTRAINTS: Maximum 1 comment(s) can be added. Supports reply_to_id for discussion threading.", - "create_pull_request": " CONSTRAINTS: Maximum 2 pull request(s) can be created. Labels [\"agentic-news\" \"analysis-data\"] will be automatically added." + "create_pull_request": " CONSTRAINTS: Maximum 1 pull request(s) can be created. Labels [\"agentic-news\" \"analysis-data\"] will be automatically added." 
}, "repo_params": {}, "dynamic_tools": [ @@ -767,7 +766,7 @@ jobs: mkdir -p /home/runner/.copilot GH_AW_NODE=$(which node 2>/dev/null || command -v node 2>/dev/null || echo node) - cat << GH_AW_MCP_CONFIG_07506f050b0aef46_EOF | "$GH_AW_NODE" "${RUNNER_TEMP}/gh-aw/actions/start_mcp_gateway.cjs" + cat << GH_AW_MCP_CONFIG_578e1dd627e6339d_EOF | "$GH_AW_NODE" "${RUNNER_TEMP}/gh-aw/actions/start_mcp_gateway.cjs" { "mcpServers": { "agenticworkflows": { @@ -883,7 +882,7 @@ jobs: "payloadDir": "${MCP_GATEWAY_PAYLOAD_DIR}" } } - GH_AW_MCP_CONFIG_07506f050b0aef46_EOF + GH_AW_MCP_CONFIG_578e1dd627e6339d_EOF - name: Download activation artifact uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1 with: @@ -1570,7 +1569,7 @@ jobs: GH_AW_ALLOWED_DOMAINS: "*.githubusercontent.com,api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.imf.org,api.individual.githubcopilot.com,api.npms.io,api.scb.se,api.snapcraft.io,api.worldbank.org,archive.ubuntu.com,azure.archive.ubuntu.com,bun.sh,cdn.jsdelivr.net,codeload.github.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,data.imf.org,data.riksdagen.se,deb.nodesource.com,deno.land,docs.github.com,esm.sh,get.pnpm.io,github-cloud.githubusercontent.com,github-cloud.s3.amazonaws.com,github.blog,github.com,github.githubassets.com,googleapis.deno.dev,googlechromelabs.github.io,hack23.com,hack23.github.io,host.docker.internal,json-schema.org,json.schemastore.org,jsr.io,keyserver.ubuntu.com,lfs.github.com,localhost,nodejs.org,npm.pkg.github.com,npmjs.com,npmjs.org,objects.githubusercontent.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubuserconten
t.com,regeringen.se,registry.bower.io,registry.npmjs.com,registry.npmjs.org,registry.yarnpkg.com,repo.yarnpkg.com,riksdag-regering-ai.onrender.com,riksdagen.se,riksdagsmonitor.com,s.symcb.com,s.symcd.com,security.ubuntu.com,skimdb.npmjs.com,storage.googleapis.com,telemetry.enterprise.githubcopilot.com,telemetry.vercel.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com,www.googleapis.com,www.hack23.com,www.imf.org,www.npmjs.com,www.npmjs.org,www.regeringen.se,www.riksdagen.se,www.riksdagsmonitor.com,www.scb.se,yarnpkg.com" GITHUB_SERVER_URL: ${{ github.server_url }} GITHUB_API_URL: ${{ github.api_url }} - GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: "{\"add_comment\":{\"max\":1},\"create_pull_request\":{\"draft\":false,\"expires\":336,\"labels\":[\"agentic-news\",\"analysis-data\"],\"max\":2,\"max_patch_size\":4096,\"protected_files\":[\"package.json\",\"bun.lockb\",\"bunfig.toml\",\"deno.json\",\"deno.jsonc\",\"deno.lock\",\"global.json\",\"NuGet.Config\",\"Directory.Packages.props\",\"mix.exs\",\"mix.lock\",\"go.mod\",\"go.sum\",\"stack.yaml\",\"stack.yaml.lock\",\"pom.xml\",\"build.gradle\",\"build.gradle.kts\",\"settings.gradle\",\"settings.gradle.kts\",\"gradle.properties\",\"package-lock.json\",\"yarn.lock\",\"pnpm-lock.yaml\",\"npm-shrinkwrap.json\",\"requirements.txt\",\"Pipfile\",\"Pipfile.lock\",\"pyproject.toml\",\"setup.py\",\"setup.cfg\",\"Gemfile\",\"Gemfile.lock\",\"uv.lock\",\"CODEOWNERS\",\"AGENTS.md\",\"CLAUDE.md\",\"GEMINI.md\"],\"protected_path_prefixes\":[\".github/\",\".agents/\"]},\"create_report_incomplete_issue\":{},\"dispatch_workflow\":{\"max\":1,\"workflow_files\":{\"news-translate\":\".lock.yml\"},\"workflows\":[\"news-translate\"]},\"missing_data\":{},\"missing_tool\":{},\"noop\":{\"max\":1,\"report-as-issue\":\"true\"},\"report_incomplete\":{}}" + GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: 
"{\"add_comment\":{\"max\":1},\"create_pull_request\":{\"draft\":false,\"expires\":336,\"labels\":[\"agentic-news\",\"analysis-data\"],\"max\":1,\"max_patch_size\":4096,\"protected_files\":[\"package.json\",\"bun.lockb\",\"bunfig.toml\",\"deno.json\",\"deno.jsonc\",\"deno.lock\",\"global.json\",\"NuGet.Config\",\"Directory.Packages.props\",\"mix.exs\",\"mix.lock\",\"go.mod\",\"go.sum\",\"stack.yaml\",\"stack.yaml.lock\",\"pom.xml\",\"build.gradle\",\"build.gradle.kts\",\"settings.gradle\",\"settings.gradle.kts\",\"gradle.properties\",\"package-lock.json\",\"yarn.lock\",\"pnpm-lock.yaml\",\"npm-shrinkwrap.json\",\"requirements.txt\",\"Pipfile\",\"Pipfile.lock\",\"pyproject.toml\",\"setup.py\",\"setup.cfg\",\"Gemfile\",\"Gemfile.lock\",\"uv.lock\",\"CODEOWNERS\",\"AGENTS.md\",\"CLAUDE.md\",\"GEMINI.md\"],\"protected_path_prefixes\":[\".github/\",\".agents/\"]},\"create_report_incomplete_issue\":{},\"dispatch_workflow\":{\"max\":1,\"workflow_files\":{\"news-translate\":\".lock.yml\"},\"workflows\":[\"news-translate\"]},\"missing_data\":{},\"missing_tool\":{},\"noop\":{\"max\":1,\"report-as-issue\":\"true\"},\"report_incomplete\":{}}" GH_AW_CI_TRIGGER_TOKEN: ${{ secrets.GH_AW_CI_TRIGGER_TOKEN }} with: github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }} diff --git a/.github/workflows/news-motions.md b/.github/workflows/news-motions.md index f2b668d9e..501ef761c 100644 --- a/.github/workflows/news-motions.md +++ b/.github/workflows/news-motions.md @@ -2,6 +2,15 @@ name: "News: Opposition Motions" description: Generates opposition motions analysis articles in core languages (EN, SV). Translations for remaining 12 languages are handled by the dedicated news-translate workflow via dispatch-workflow. Single article type per run. 
strict: false +imports: + - ../prompts/00-base-contract.md + - ../prompts/01-bash-and-shell-safety.md + - ../prompts/02-mcp-access.md + - ../prompts/03-data-download.md + - ../prompts/04-analysis-pipeline.md + - ../prompts/05-analysis-gate.md + - ../prompts/06-article-generation.md + - ../prompts/07-commit-and-pr.md on: schedule: daily around 6:00 on weekdays workflow_dispatch: @@ -119,7 +128,7 @@ safe-outputs: labels: [agentic-news, analysis-data] draft: false expires: 14d - max: 2 + max: 1 add-comment: {} dispatch-workflow: workflows: [news-translate] @@ -157,26 +166,6 @@ steps: if [ "$WARM" = "false" ]; then echo "⚠️ MCP server did not respond after 6 attempts — agent will retry via in-prompt health gate" fi - echo "🔄 Starting background keep-alive pinger (every 30s, max 55 min — covers full 60-min workflow through safe-output PR creation)..." - KEEP_ALIVE_START=$(date +%s) - KEEP_ALIVE_END=$((KEEP_ALIVE_START + 3300)) - export MCP_URL KEEP_ALIVE_END - nohup bash -c ' - while :; do - NOW=$(date +%s) - if [ "$NOW" -ge "$KEEP_ALIVE_END" ]; then - break - fi - curl -sf --max-time 10 -X POST \ - -H "Content-Type: application/json" \ - -d "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\",\"params\":{}}" \ - "$MCP_URL" -o /dev/null 2>/dev/null || true - sleep 30 - done - ' </dev/null >/tmp/mcp-keepalive.log 2>&1 & - KEEP_ALIVE_PID=$! - disown "$KEEP_ALIVE_PID" 2>/dev/null || true - echo "Keep-alive PID: $KEEP_ALIVE_PID (auto-exits after 55 min; log: /tmp/mcp-keepalive.log)" - name: Pre-flight external endpoint reachability check (runs before MCP Gateway) run: | @@ -230,691 +219,47 @@ engine: model: claude-opus-4.7 --- -# ⚔️ Opposition Motions Article Generator - -You are the **News Journalist Agent** for Riksdagsmonitor generating **opposition motions** analysis articles. 
- -## 🔴 CRITICAL: AI Writes ALL Content with Iterative Improvement (v5.0) - -> **You are a political intelligence analyst, NOT a script executor.** Your PRIMARY job is to produce excellent quality political intelligence through iterative improvement. You MUST: -> 1. **ANALYZE** parliamentary data deeply — SWOT, stakeholder perspectives, risk assessment, election implications -> 2. **WRITE** genuine political intelligence articles with specific actors, evidence citations, and analytical insight -> 3. **USE** the script (`generate-news-enhanced.ts`) ONLY for HTML formatting — the script creates a shell, YOU fill it with analysis -> 4. **REPLACE** every `AI_MUST_REPLACE` marker with real analysis — ZERO markers may remain -> 5. **ITERATE** — read ALL your output back completely and IMPROVE every section (minimum 2 full passes) -> 6. **VERIFY** article quality: minimum 1000 words, SWOT analysis, stakeholder perspectives, dok_id citations -> 7. **SPEND THE FULL TIME** — use at least 45 of the 60 allocated minutes doing real work -> -> 🔴 **ITERATIVE IMPROVEMENT IS MANDATORY (2+ passes):** -> - **Analysis Pass 1** (15 min): Create analysis for every document following templates -> - **Analysis Pass 2** (7 min): Read ALL analysis back, improve evidence, diagrams, cross-references -> - **Article Pass 1** (10 min): Generate articles with AI-written content from analysis -> - **Article Pass 2** (8 min): Read ALL articles back completely, improve every section -> - **NEVER complete early** — if you finish ahead, use remaining time to deepen analysis -> -> **If the final article reads like a list of document titles with generic descriptions, you have FAILED.** Rewrite with genuine political analysis before committing. 
- - -## 🔧 Workflow Dispatch Parameters - -- **force_generation** = `${{ github.event.inputs.force_generation }}` -- **languages** = `${{ github.event.inputs.languages }}` -- **analysis_depth** = `${{ github.event.inputs.analysis_depth }}` - -If **force_generation** is `true`, generate articles even if recent ones exist. Use the **languages** value to determine which languages to generate. - -## 🚨 CRITICAL: Single Article Type Focus - -**This workflow generates ONLY `motions` articles.** Do not generate other article types. - -## 🧠 Repo Memory - -Uses `memory/news-generation` branch. START: read `memory/news-generation/last-run-news-motions.json` + `memory/news-generation/covered-documents/{YYYY-MM-DD}.json`. END: update both + `memory/news-generation/translation-status.json`. Skip already-covered dok_ids. - -## ⏱️ Time Budget (45 minutes) — ENFORCED Minimum 40 Minutes - -> 🔴 **SYSTEMIC ISSUE (PR #1794 audit, 2026-04-16)**: ALL news workflows completing in 13-22 min of 45-min allocation, producing shallow analysis. Agent MUST use at least 40 of 45 minutes. Completion < 40 min = insufficient iteration = REJECTED. - -```bash -date +%s > /tmp/start_time.txt -read START_TIME < /tmp/start_time.txt -``` - -- **Minutes 0–3**: Date check, MCP warm-up with `get_sync_status()` -- **Minutes 3–6**: Run download-parliamentary-data pipeline (download data) -- **Minutes 6–21**: 🚨 **AI Analysis Pass 1 (15 min minimum)**: Read ALL methodology guides, create per-file analysis for EVERY document with Mermaid diagrams, evidence tables, SWOT entries. -- **Minutes 21–22**: 🚨 **AI Analysis Pass 2 (Part A, start)**: Begin reading ALL analysis artifacts back and identify improvement targets. -- **Minutes 22–25**: 🫀 **Heartbeat PR** — `git add && git commit` analysis artifacts so far, then `safeoutputs___create_pull_request` (title `🫀 Heartbeat - Motions - {date}`). Refreshes the safeoutputs MCP session (idle timeout ~30–35 min) AND preserves work if later phases fail. 
Run `git checkout main` after the call so subsequent commits don't stack onto the frozen patch. -- **Minutes 25–28**: 🚨 **AI Analysis Pass 2 (Part B, complete — 6 min improvement work total across Parts A+B)**: Improve every section, replace ALL script stubs with AI analysis. Run enrichment verification gate. -- **Minutes 28–30**: Run ENFORCED Minimum Time Gate + Enrichment Verification Gate (SHARED_PROMPT_PATTERNS.md). Both MUST pass. -- **Minutes 30–36**: Generate articles for core languages (EN, SV) using `npx tsx scripts/generate-news-enhanced.ts` -- **Minutes 36–40**: 🚨 **Article Improvement Pass**: Read ALL articles back, replace AI_MUST_REPLACE markers, improve content. Run article quality component gate. -- **Minutes 40–43**: Validate, commit, create PR with `safeoutputs___create_pull_request` -- **Minutes 43–45**: 🚨 **HARD DEADLINE** — If no safe output yet: if ANY artifacts/files were created, IMMEDIATELY stage, commit, call `safeoutputs___create_pull_request` with partial work. ONLY call `safeoutputs___noop` if truly ZERO files were created. - -> ⚠️ **Analysis phase is 22 minutes minimum (Pass 1: 15 min + Pass 2: 7 min)** — every analysis file must contain color-coded Mermaid diagrams, structured evidence tables with dok_id citations, and follow template structure exactly. ALL script-generated stubs MUST be replaced with AI-enriched analysis. Run the ENFORCED gates from SHARED_PROMPT_PATTERNS.md before proceeding to article generation. - -## ⚠️ CRITICAL: Bash Tool Call Format - -> **Full reference:** See `SHARED_PROMPT_PATTERNS.md` → "Bash Tool Call Format". Key rule: every `bash` call MUST have both `command` AND `description` parameters. Example: `bash({ command: "date -u '+%Y-%m-%d'", description: "Get current UTC date" })`. Calls missing either field fail with `Multiple validation errors: - "command": Required - "description": Required`. - -## 🛡️ AWF Shell Safety - -> **Full reference:** See `SHARED_PROMPT_PATTERNS.md` → "AWF Shell Safety". 
Summary: use `$VAR` not `$`+`{VAR}`, use `find -exec` not `$(...)`, set defaults with `if/then` before using `$VAR`. - -## 🔤 UTF-8 Encoding - -> **Full reference:** See `SHARED_PROMPT_PATTERNS.md` → "UTF-8 Encoding". Summary: use native UTF-8 (`ö`, `ä`, `å`) — NEVER HTML entities (`ö`, `ä`). Author: `James Pether Sörling`. - - -## 🚫 CRITICAL: Article Generation Safety - -**Articles MUST be generated using `npx tsx scripts/generate-news-enhanced.ts` — NEVER manually.** - -The repository provides a complete article generation pipeline. You MUST use it (see Generation Steps below for the full `LANG_ARG` derivation from the `languages` dispatch input; default is `en,sv`): -```bash -source scripts/mcp-setup.sh && npx tsx scripts/generate-news-enhanced.ts --types=motions --languages="$LANG_ARG" --skip-existing -``` - -**❌ NEVER do any of the following:** -- NEVER use `python3` or `python3 -c` to build HTML article files -- NEVER create `.py` scripts to generate articles -- NEVER use bash heredoc (`cat > file << 'EOF'`) to write HTML files — it silently truncates large content -- NEVER manually construct HTML articles line-by-line with `echo`, `printf`, or any other method -- NEVER spend more than 5 minutes attempting to manually build article HTML - -**If `generate-news-enhanced.ts` fails or returns 0 articles:** -1. Check if MCP data was returned (retry MCP calls if needed) -2. Check if analysis artifacts exist in `analysis/daily/YYYY-MM-DD/` — if yes, commit them and create an analysis-only PR -3. If MCP server is unreachable AND no data was downloaded AND no analysis artifacts exist, use `safeoutputs___noop` — this is the ONLY valid noop scenario -4. 
Do NOT attempt to manually create articles as a fallback - -## Required Skills - -Consult as needed — do NOT read all files upfront: -- **Skills:** `.github/skills/editorial-standards/SKILL.md`, `.github/skills/swedish-political-system/SKILL.md`, `.github/skills/legislative-monitoring/SKILL.md`, `.github/skills/riksdag-regering-mcp/SKILL.md`, `.github/skills/language-expertise/SKILL.md`, `.github/skills/gh-aw-safe-outputs/SKILL.md` -- **Analysis:** `scripts/prompts/v2/political-analysis.md`, `per-file-intelligence-analysis.md`, `quality-criteria.md` -- **Methodology:** `analysis/methodologies/ai-driven-analysis-guide.md` (v5.0) + `analysis/templates/per-file-political-intelligence.md` - -## 📊 MANDATORY Multi-Step AI Analysis Framework - -### Article Type Isolation - -> 🚨 **This workflow writes analysis ONLY to `analysis/daily/$ARTICLE_DATE/motions/`**. NEVER write to the parent date directory or another article type's folder. See SHARED_PROMPT_PATTERNS.md "Article Type Isolation" section. - -### Standardised Analysis Depth Gate - -> ⚠️ **Default is `deep`** — not `standard`. Analysis must always produce publication-quality output with Mermaid diagrams and evidence tables. - -| Depth | AI iterations | SWOT stakeholders | Charts | Mindmap | Mermaid diagrams | Risk matrix (L×I) | Forward indicators | Min. 
analysis time | -|-------|--------------|-------------------|--------|---------|-----------------|-------------------|-------------------|-------------------| -| standard | 1-2 | ≥5 (of 8 groups) | ≥1 | optional | ≥1 color-coded | ≥2 risks scored | ≥2 with triggers | 10 minutes | -| deep | 2-3 | ≥7 (of 8 groups) | ≥2 | required | ≥2 color-coded | ≥4 risks scored | ≥3 with triggers | 15 minutes | -| comprehensive | 3+ | all 8 groups | ≥3 | required | ≥3 color-coded | ≥6 risks scored | ≥5 with triggers | 20 minutes | - -**The 8 mandatory stakeholder groups are**: Citizens, Government Coalition, Opposition Bloc, Business/Industry, Civil Society, International/EU, Judiciary/Constitutional, Media/Public Opinion. Every group MUST be analyzed with specific evidence (dok_id, vote counts, named politicians). - -**Minimum requirement for ALL depths**: Every analysis file must contain at least 1 color-coded Mermaid diagram, structured evidence tables with dok_id citations, quantified risk matrix with numeric L×I scores, forward indicators with specific triggers/timelines, confidence labels on all analytical claims, and follow the corresponding template structure exactly. Plain prose without tables/diagrams is NEVER acceptable regardless of depth level. - -> **Read `analysis_depth` input first** (default: `deep`). This controls iteration count and section requirements. - -Based on the editorial profile for `motions` (from `scripts/editorial-framework.ts`): -- **SWOT**: ALL 8 stakeholder groups analyzed with evidence tables (mot. IDs, party positions, committee referrals per entry) -- **Dashboard**: required (min. 1 Chart.js chart) -- **Mindmap**: not required -- **Min. 
stakeholders**: 8 perspectives (Citizens, Government Coalition, Opposition Bloc, Business/Industry, Civil Society, International/EU, Judiciary/Constitutional, Media/Public Opinion) -- **Risk Matrix**: required — numeric L×I scores for policy change risks, coalition stability risks per motion -- **Forward Indicators**: required — committee scheduling dates, potential voting outcomes, counter-motion deadlines -- **Confidence Labels**: `[HIGH]`/`[MEDIUM]`/`[LOW]` on ALL analytical claims -- **Mermaid Diagrams**: ≥1 color-coded diagram showing opposition coordination patterns or policy impact flowchart -- **Dok_id Citations**: MANDATORY — every motion MUST cite its mot. ID (e.g., "mot. 2025/26:1823") -- **AI iterations**: 2 (standard), 2 (deep), or 3 (comprehensive) - -> 🚨 **ANTI-PATTERNS (REJECTED)**: SWOT with only 3 stakeholder groups, no mot. ID citations, generic opposition analysis without specific party positions, no Mermaid diagrams, no L×I risk scores - -### 🗳️ Election 2026 Lens (Mandatory — v5.0) - -Every analysis MUST include an **Election 2026 Implications** section assessing: Electoral Impact, Coalition Scenarios, Voter Salience, Campaign Vulnerability, and Policy Legacy. Use the **5-level confidence scale** (⬛VERY LOW → 🟥LOW → 🟧MEDIUM → 🟩HIGH → 🟦VERY HIGH). See `analysis/methodologies/ai-driven-analysis-guide.md` v5.0 for full criteria. - -### Phase 1 — Data Collection & Initial Analysis -1. Fetch MCP data (`get_motioner`, `get_sync_status`, cross-reference `search_anforanden`) -2. Detect policy domains and group by party for coalition dynamics analysis -3. Build initial outline: lede, party breakdown, thematic groupings - -### Phase 2 — Iterative Depth Enhancement (repeat per `analysis_depth`) -For each AI iteration: -1. **SWOT Analysis**: Generate multi-stakeholder SWOT with ALL 8 groups (Citizens, Government Coalition, Opposition Bloc, Business/Industry, Civil Society, International/EU, Judiciary/Constitutional, Media/Public Opinion). 
Use structured evidence tables with columns: `#`, `Statement`, `Evidence (mot. ID/dok_id)`, `Confidence`, `Impact`, `Entry Date`. Every entry MUST cite specific motion number, party origin, and policy area. -2. **Coalition Dashboard**: Include at least one chart-ready summary in the article output (for example, party motion counts or thematic distribution), formatted as a clear Markdown table or bullet list; do not assume automatic dashboard rendering unless a separate workflow step explicitly parses and renders it. -3. **Quality Gate** (check before next iteration): - - Verify opposition strategy section is substantive (not just party counts) - - Verify no identical "Why It Matters" text across entries - - Verify all Swedish API text is translated - - Verify word count ≥ 700 - - If failing any check: re-generate the failing section before proceeding - -### Phase 3 — Final Quality Gate Before PR -Run all validation checks from the **MANDATORY Quality Validation** section below before committing. - -## MANDATORY Date Validation - -```bash -echo "=== Date Validation Check ===" -date -u "+Current UTC: %A %Y-%m-%d %H:%M:%S" -echo "Article Type: motions" -echo "============================" -``` - -## 📅 Riksmöte (Parliamentary Session) Calculation - -September+ → `rm = "{year}/{year+1 2-digit}"` (e.g. Oct 2026 → `2026/27`). Before September → `rm = "{year-1}/{year 2-digit}"` (e.g. Feb 2026 → `2025/26`). Use in ALL MCP queries requiring `rm`. - -## MANDATORY Deduplication Check - -Before generating articles, check if articles already exist for the target date. 
**This check controls article GENERATION only — the deep political analysis phase ALWAYS runs regardless.** -```bash -# Resolve article date: use workflow_dispatch input when provided, fallback to UTC today -ARTICLE_DATE="${{ github.event.inputs.article_date }}" -if [ -z "$ARTICLE_DATE" ]; then - date -u +%Y-%m-%d > /tmp/today.txt - read ARTICLE_DATE < /tmp/today.txt -fi -ARTICLE_TYPE="opposition-motions" -# Derive FORCE_GENERATION from the workflow_dispatch input -FORCE_GENERATION="${{ github.event.inputs.force_generation || 'false' }}" -ls news/$ARTICLE_DATE-$ARTICLE_TYPE-en.html 2>/dev/null | wc -l > /tmp/existing_count.txt -read EXISTING < /tmp/existing_count.txt -if [ "$EXISTING" -gt 0 ] && [ "$FORCE_GENERATION" != "true" ]; then - echo "📋 Articles for $ARTICLE_DATE/$ARTICLE_TYPE already exist — article generation will be skipped (analysis still runs)" - SKIP_ARTICLE_GENERATION=true - echo "SKIP_ARTICLE_GENERATION=true" >> "$GITHUB_ENV" -fi -# NOTE: Do NOT exit here or call safeoutputs___noop — analysis phase MUST still execute -# Later article-generation steps MUST gate on: if [ "$SKIP_ARTICLE_GENERATION" != "true" ]; then ... - -``` - -> **🚨 NEVER call `safeoutputs___noop` because articles already exist.** If articles exist, the workflow MUST still run the full 15-20 minute deep political analysis phase and commit analysis artifacts. The dedup check only controls whether NEW HTML articles are generated — analysis is the primary output and always runs. If analysis produces artifacts, use `safeoutputs___create_pull_request` with `analysis-only` label. - -## MANDATORY MCP Health Gate - -> **The step-level pre-warm (6 attempts × 20s) already mitigates Render.com cold starts.** This in-prompt gate is a lightweight verification — NOT a full retry loop. Do NOT spend more than 90 seconds here. -> -> **📖 Full MCP architecture, tool names, and calling conventions:** See `SHARED_PROMPT_PATTERNS.md` → "MCP Architecture & Tool Reference" section. 
Tool names are EXACT: riksdag tools use underscores (`get_sync_status`), World Bank uses hyphens (`get-economic-data`), SCB uses underscores (`search_tables`). - -1. Call `get_sync_status({})` — retry up to **3×** (20s wait between each, not 45s — the server is already warm from the step-level pre-warm) -2. If you get **"unknown tool"** or **"0 tools registered"** errors after 3 attempts, run a quick diagnostic: -```bash -echo "🔍 MCP Quick Diagnostic" -echo "Direct MCP server:" && curl -sf --max-time 15 -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}' "https://riksdag-regering-ai.onrender.com/mcp" 2>/dev/null | head -c 200 || echo "UNREACHABLE" -``` -3. After 3 failures → `safeoutputs___noop({"message": "MCP server unavailable after 3 attempts — step-level pre-warm also failed"})` -4. **ALL content MUST come from live MCP data.** Never use cached articles, stale data, or AI-fabricated content. -5. **⏱️ Do NOT spend more than 2 minutes on MCP warmup** — proceed to analysis immediately once `get_sync_status` succeeds. - -## 🛡️ File Ownership Contract - -Content workflows: only create/modify **EN and SV** files (`news/YYYY-MM-DD-*-en.html`, `*-sv.html`). Validate with `npx tsx scripts/validate-file-ownership.ts content`. Fix violations: `git restore --staged --worktree -- <file>` (tracked) or `rm <file>` (untracked). - -### Branch Naming Convention - -Branch: `news/content/{YYYY-MM-DD}/{article-type}` (e.g. `news/content/2026-03-23/motions`). `safeoutputs___create_pull_request` handles this automatically. - -## MANDATORY PR Creation - -### HOW SAFE PR CREATION WORKS - -> `safeoutputs___create_pull_request` handles branch creation, push, and PR opening — do NOT run `git push` or `git checkout -b` manually. Stage files, then call the tool directly. 
- - -```bash -# Stage articles and analysis — scoped to article type to stay within 100-file PR limit -# CRITICAL: Stage ONLY today's new articles (EN/SV), NOT all existing news/ -# Staging news/*motions*.html would include 360+ existing files, many of which -# may have been modified by auto-fix scripts, causing E003 (>100 files) PR failure. -# 🚫 DO NOT add `analysis/data/` anywhere — it contains 200+ MCP response cache files -# (documents/{motions,interpellations,committeeReports,propositions,questions,speeches}/) -# populated by download-parliamentary-data.ts. Only run the `git add` lines shown below. -git add "news/$ARTICLE_DATE-opposition-motions-en.html" 2>/dev/null || true -git add "news/$ARTICLE_DATE-opposition-motions-sv.html" 2>/dev/null || true -git add news/metadata/ 2>/dev/null || true -# Use $ANALYSIS_SUBFOLDER (set during Run Suffix Resolution above); fallback to base type -if [ -z "$ANALYSIS_SUBFOLDER" ]; then - ANALYSIS_SUBFOLDER="motions" -fi -# Stage analysis summary .md files ONLY — EXCLUDE documents/ to stay under 100-file limit. -# With --limit 50, documents/ alone can contain 100+ files (50 JSON + 50 analysis.md). -git add "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER"/*.md 2>/dev/null || true -git add "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER"/*.json 2>/dev/null || true -# 🚨 HARD UNSTAGE: NEVER commit analysis/data/ — it is an MCP response cache populated by -# download-parliamentary-data.ts (6 doc types × ~40 files = 240+ files). It must stay local. -# Committing it caused E003 "received 258 files" in run 24653843681 (PR #1867). Only -# news-realtime-monitor stages analysis/data/ intentionally; news-motions never should. -# 🚫 DO NOT run `git add analysis/data/...` anywhere in this workflow. -git reset HEAD -- analysis/data/ 2>/dev/null || true -# Enforce safe-outputs 100-file PR limit (AWF-safe: no $(...) 
— write to temp file + read back) -git diff --cached --name-only > /tmp/staged_files.txt -awk 'END{print NR}' /tmp/staged_files.txt > /tmp/staged_count.txt -STAGED_COUNT=0 -read STAGED_COUNT < /tmp/staged_count.txt 2>/dev/null || true -echo "📊 Staged file count: $STAGED_COUNT (limit: 100)" -if [ "$STAGED_COUNT" -gt 90 ]; then - echo "⚠️ $STAGED_COUNT files exceeds safe threshold. Removing metadata to reduce count." - git reset HEAD -- news/metadata/ 2>/dev/null || true - git diff --cached --name-only > /tmp/staged_files.txt - awk 'END{print NR}' /tmp/staged_files.txt > /tmp/staged_count.txt - STAGED_COUNT=0 - read STAGED_COUNT < /tmp/staged_count.txt 2>/dev/null || true -fi -if [ "$STAGED_COUNT" -gt 90 ]; then - echo "⚠️ Still $STAGED_COUNT files. Removing non-essential analysis — keeping core summaries." - # Graduated pruning: remove individual doc-level analysis JSON first, keep synthesis/scoring/risk .md - # If still over limit, all .json goes but .md summaries (synthesis-summary.md, risk-assessment.md) survive - git reset HEAD -- "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER"/*-analysis.json 2>/dev/null || true - git reset HEAD -- "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER"/*-details.json 2>/dev/null || true - git diff --cached --name-only > /tmp/staged_files.txt - awk 'END{print NR}' /tmp/staged_files.txt > /tmp/staged_count.txt - STAGED_COUNT=0 - read STAGED_COUNT < /tmp/staged_count.txt 2>/dev/null || true -fi -if [ "$STAGED_COUNT" -gt 90 ]; then - echo "⚠️ Still $STAGED_COUNT files. Removing remaining analysis .json — keeping .md summaries." 
- git reset HEAD -- "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER"/*.json 2>/dev/null || true - git diff --cached --name-only > /tmp/staged_files.txt - awk 'END{print NR}' /tmp/staged_files.txt > /tmp/staged_count.txt - STAGED_COUNT=0 - read STAGED_COUNT < /tmp/staged_count.txt 2>/dev/null || true -fi -# FINAL HARD GUARD: if count still exceeds 99, remove all analysis .md except synthesis-summary.md -if [ "$STAGED_COUNT" -gt 99 ]; then - echo "🚨 CRITICAL: $STAGED_COUNT files still exceeds safe limit of 99. Removing all analysis .md except synthesis-summary." - git reset HEAD -- "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER"/*.md 2>/dev/null || true - git add "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER/synthesis-summary.md" 2>/dev/null || true - git diff --cached --name-only > /tmp/staged_files.txt - awk 'END{print NR}' /tmp/staged_files.txt > /tmp/staged_count.txt - STAGED_COUNT=0 - read STAGED_COUNT < /tmp/staged_count.txt 2>/dev/null || true - echo "📊 After emergency pruning: $STAGED_COUNT files" -fi -echo "📊 Final staged file count: $STAGED_COUNT" -git commit -m "Add opposition-motions articles and analysis artifacts" -``` -> -- ✅ `safeoutputs___create_pull_request` for articles or analysis-only PRs -- ✅ `safeoutputs___noop` ONLY if MCP unreachable after 5 attempts AND no analysis artifacts exist -- ❌ NEVER noop because articles already exist — analysis always runs -- ❌ Safe output tools are in your tool list — NEVER search for them via bash - -## 🌐 Dispatch Translation Workflow - -After creating the content PR, dispatch translations: `safeoutputs___dispatch_workflow({ "workflow_name": "news-translate", "inputs": { "article_date": "<YYYY-MM-DD>", "article_type": "<article-type>", "languages": "all-extra" } })`. See `news-translate.md` for full translation quality rules. 
- -## MCP Tools - -**ALWAYS call `get_sync_status()` FIRST.** - -**Primary tool:** `get_motioner` — fetches latest opposition motions -**Cross-reference:** `search_dokument_fulltext`, `search_anforanden` -**Statistical enrichment:** SCB MCP — enrich with statistics relevant to motion policy areas. Use domain-to-committee mappings from `scripts/scb-context.ts` to automatically select relevant SCB tables based on which committee the motion is referred to (e.g., AU motions→labour TAB5765, JuU→crime TAB1172, MJU→environment TAB5404). **World Bank indicators (144 total)**: `view analysis/worldbank/indicators-inventory.json` to discover indicators matching the motion's committee and policy area — each indicator has `policyAreas`, `committees`, and `mcpTool` fields. Use MCP tools for indicators with `mcpTool` field. See `SHARED_PROMPT_PATTERNS.md` §"WORLD BANK ECONOMIC CONTEXT INTEGRATION" for Chart.js chart templates. MUST generate ≥1 economic chart when motion has committee-matched indicators. -**Fact-checking:** Motions often cite statistics to justify policy proposals. Use `scripts/statistical-claims-detector.ts` to detect claims about unemployment, GDP, migration, crime rates, etc. and cross-reference against official SCB/World Bank data. Include a "Faktakoll" section rating the accuracy of cited statistics. 
- -```javascript -get_sync_status({}) -get_motioner({ rm: <calculated riksmöte>, limit: 20 }) - -// SCB enrichment (optional — wrap in try/catch, do not block generation on SCB failures): -// For labour motions: search_tables({ query: "arbetslöshet sysselsättning", limit: 3 }) -// For education motions: search_tables({ query: "utbildning studenter", limit: 3 }) -// For migration motions: search_tables({ query: "invandring utvandring befolkning", limit: 3 }) -``` - -## Generation Steps - -### Step 1: Check Existing Articles (Analysis Always Runs) -🚨 **FULL ANALYSIS BEFORE ANY ARTICLE (BLOCKING)**: The complete deep political analysis phase following [`analysis/methodologies/ai-driven-analysis-guide.md`](../../analysis/methodologies/ai-driven-analysis-guide.md) (Rule 0 two-pass iteration + Rules 6–8 depth tiers, 15 min Pass 1 + 7 min Pass 2 minimum, ALL 9 required artifacts) **MUST** complete **BEFORE** any article HTML is created or updated. Articles MUST be (re)generated from the improved Pass 2 analysis — never from Pass 1 stubs, never from scripts alone, never skipping Pass 2. Violations = REJECTED PR (PR #1705 comment audit, 2026-04-18). - -Check if motions articles already exist for the target date. If they do, skip article generation but **ALWAYS run the full deep political analysis phase** — analysis is the primary output and must execute on every run regardless of article existence. - -### Step 2: Query MCP -```javascript -get_sync_status({}) -get_motioner({ rm: <calculated riksmöte>, limit: 20 }) -``` - -### Step 2.5: Run Pre-Article Analysis Pipeline - -**CRITICAL: Download data first, then AI creates ALL 9 analysis artifacts.** `download-parliamentary-data.ts` downloads raw data from riksdag-regering-mcp ONLY — it performs NO analysis. The AI agent MUST: -1. Read `analysis/methodologies/ai-driven-analysis-guide.md` fully -2. Read ALL 8 templates in `analysis/templates/` -3. 
Create ALL 9 analysis files in `analysis/daily/YYYY-MM-DD/motions/` using evidence from the downloaded data - -**NEVER write or copy analysis files to the parent date directory** — doing so causes merge conflicts when multiple doc-type workflows run on the same date. The `analysis-reader.ts` automatically scans subdirectories, so root-level copies are NOT needed. After creating ALL analysis files, run the **9-Artifact Completeness Gate** from `SHARED_PROMPT_PATTERNS.md` §"9 REQUIRED Analysis Artifacts" to verify ALL 9 files exist. - -Key steps: resolve `ARTICLE_DATE` from input or today → check `data-download-manifest.md` → if 0 docs, loop `DAYS_BACK` 1–7 using `date -u -d "$ARTICLE_DATE - $DAYS_BACK days"`, run `download-parliamentary-data.ts --date "$LOOKBACK_DATE"` → copy artifacts from found date to original date folder → run `catalog-downloaded-data.ts --pending-only`. See `SHARED_PROMPT_PATTERNS.md` §"Data Lookback Fallback Strategy" for full bash implementation. - -> 🔴 **CRITICAL — MCP gateway sourcing**: Every bash block that invokes `download-parliamentary-data.ts`, `generate-news-enhanced.ts`, `mcp-query-cli.ts`, or any other script that reaches the riksdag-regering / SCB / World Bank MCP servers **MUST** prepend `source scripts/mcp-setup.sh &&`. Without sourcing, the scripts fall back to the direct `https://riksdag-regering-ai.onrender.com/mcp` URL; the AWF api-proxy TLS MITM then returns `EPROTO SSL wrong version number` and the pipeline returns 0 documents. Defence-in-depth is now also implemented in `scripts/mcp-client/client.ts` (auto-detects the gateway via `GH_AW_MCP_CONFIG` / `MCP_GATEWAY_API_KEY`) but the explicit source remains required — it extracts the gateway auth token the client needs to authenticate. 
- -```bash -# Pattern to reuse for EVERY MCP-bound script invocation in this workflow: -source scripts/mcp-setup.sh && echo "MCP_SERVER_URL=$MCP_SERVER_URL" -npx tsx scripts/download-parliamentary-data.ts --date "$ARTICLE_DATE" --limit 50 --doc-type motions -``` - -### 🔄 Data Lookback Fallback - -> 🚨 **CRITICAL RULE**: Never produce empty/stub analysis. If no data for today, look back to find unanalyzed data. - -```bash -[ -f /tmp/hhmm.env ] && . /tmp/hhmm.env -if [ -z "$ARTICLE_DATE" ]; then - date -u +%Y-%m-%d > /tmp/today.txt - read ARTICLE_DATE < /tmp/today.txt -fi -ANALYSIS_DIR="analysis/daily/$ARTICLE_DATE/motions" -find "$ANALYSIS_DIR" -type f 2>/dev/null | wc -l > /tmp/analysis_count.txt -read ANALYSIS_COUNT < /tmp/analysis_count.txt -echo "Analysis artifacts: $ANALYSIS_COUNT files in $ANALYSIS_DIR" -``` - -> **🚨 CRITICAL RULE: Never call `safeoutputs___noop` if analysis artifacts exist.** If the pre-article analysis pipeline produced ANY output files, you MUST commit them via `safeoutputs___create_pull_request` — even if no articles are generated. Use an analysis-only PR with title: `📊 Analysis Only - Motions - {date}` and label `analysis-only`. Only use `safeoutputs___noop` if the analysis pipeline produced ZERO output files (truly nothing to analyze). - -### 🔬 Step 2b: Read ALL Analysis Files (MANDATORY — before article generation) - -> 🔴 **NON-NEGOTIABLE**: The AI agent MUST `cat` every analysis `.md` file BEFORE generating any article HTML. Analysis and articles are created in the **same workflow run** — there is zero excuse for not reading the analysis. Articles written without reading analysis are shallow and REJECTED. See SHARED_PROMPT_PATTERNS.md §"MANDATORY PRE-ARTICLE ANALYSIS READING". - -```bash -ANALYSIS_SUBFOLDER="motions" -ANALYSIS_BASE="analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER" - -echo "📖 Reading ALL analysis files from $ANALYSIS_BASE..." 
-if [ -d "$ANALYSIS_BASE" ]; then - for MD_FILE in "$ANALYSIS_BASE"/*.md; do - if [ -f "$MD_FILE" ]; then - echo "--- Reading: $MD_FILE ---" - cat "$MD_FILE" - echo "" - fi - done - if [ -d "$ANALYSIS_BASE/documents" ]; then - echo "📄 Reading per-document analyses..." - for DOC_FILE in "$ANALYSIS_BASE/documents"/*.md; do - if [ -f "$DOC_FILE" ]; then - echo "--- Per-doc: $DOC_FILE ---" - cat "$DOC_FILE" - echo "" - fi - done - fi - find "$ANALYSIS_BASE" -name "*.md" -type f 2>/dev/null | wc -l > /tmp/analysis_file_count.txt - read ANALYSIS_FILE_COUNT < /tmp/analysis_file_count.txt - echo "✅ Read $ANALYSIS_FILE_COUNT analysis files — these MUST drive article content" -else - echo "⚠️ No analysis directory found at $ANALYSIS_BASE — will use MCP fallback for article content" -fi -``` - -> **After reading, confirm you loaded the analysis** by noting: (1) number of files read, (2) top 3 significance-ranked findings, (3) key risk scores. If you cannot produce this summary, you have NOT read the analysis. 
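The three-point confirmation above can be scripted instead of merely asserted. A minimal sketch, assuming the 9-artifact layout described earlier and assuming `significance-scoring.md` and `risk-assessment.md` use `##` headings for ranked entries (the fixture directory below stands in for the real `analysis/daily/$ARTICLE_DATE/motions`):

```shell
# Demo fixture standing in for analysis/daily/$ARTICLE_DATE/motions —
# in a real run, point ANALYSIS_BASE at that directory instead.
ANALYSIS_BASE="${ANALYSIS_BASE:-/tmp/motions-analysis-demo}"
mkdir -p "$ANALYSIS_BASE"
printf '## Finding A (score 9)\n## Finding B (score 8)\n## Finding C (score 7)\n' \
  > "$ANALYSIS_BASE/significance-scoring.md"
printf '## Risk: coalition fracture — HIGH\n' > "$ANALYSIS_BASE/risk-assessment.md"

# (1) number of files read
find "$ANALYSIS_BASE" -name '*.md' -type f 2>/dev/null | wc -l > /tmp/confirm_count.txt
read CONFIRM_COUNT < /tmp/confirm_count.txt
echo "Confirmation: read $CONFIRM_COUNT analysis files"

# (2) top 3 significance-ranked findings (## heading format is an assumption)
grep -m3 '^## ' "$ANALYSIS_BASE/significance-scoring.md"

# (3) key risk scores
grep -m3 '^## ' "$ANALYSIS_BASE/risk-assessment.md"
```

If any of the three greps prints nothing, treat the confirmation as failed and re-read the analysis before generating article HTML.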
- -### Step 3: Generate Articles - -```bash -# Set LANGUAGES_INPUT to the value shown in Workflow Dispatch Parameters above -LANGUAGES_INPUT="<value from Workflow Dispatch Parameters>" -[ -z "$LANGUAGES_INPUT" ] && LANGUAGES_INPUT="all" - -case "$LANGUAGES_INPUT" in - "nordic") LANG_ARG="en,sv,da,no,fi" ;; - "eu-core") LANG_ARG="en,sv,de,fr,es,nl" ;; - "all") LANG_ARG="en,sv,da,no,fi,de,fr,es,nl,ar,he,ja,ko,zh" ;; - *) LANG_ARG="$LANGUAGES_INPUT" ;; -esac - -source scripts/mcp-setup.sh && npx tsx scripts/generate-news-enhanced.ts \ - --types=motions \ - --languages="$LANG_ARG" \ - --skip-existing -``` - -**Article Navigation Verification**: The `generate-news-enhanced.ts` script automatically includes all required navigation elements: -- **Language switcher** (`<nav class="language-switcher">`) after `<body>` with all 14 languages -- **Back-to-news top nav** (`<div class="article-top-nav">`) with localized back link after language switcher -- **Footer back-to-news link** in `<footer class="article-footer">` - -These elements are validated by `bash scripts/validate-news-generation.sh` (Checks 8–10). The fix script is a **fallback only** — do not run it by default: -```bash -# FALLBACK ONLY — use if validate-news-generation.sh reports missing navigation elements -npx tsx scripts/fix-article-navigation.ts -``` - ---- - -## Step 2.6: Economic Data Acquisition (MANDATORY) - -> **Contract**: [`.github/aw/ECONOMIC_DATA_CONTRACT.md`](../aw/ECONOMIC_DATA_CONTRACT.md) — the **single source of truth** for World Bank + SCB data, Chart.js visualisations, and AI commentary. Follow it exactly; the Step 6 quality gate (`scripts/validate-economic-context.ts`) **blocks the PR** if any element is missing. - -**What you MUST do before writing any prose:** - -1. `view analysis/worldbank/indicators-inventory.json` and pick every indicator whose `committees` / `policyAreas` match the day's source documents. -2. 
Call `world-bank.get-economic-data` / `get-social-data` / `get-health-data` / `get-education-data` for Sweden (10-year series for primary domains) and for DK/NO/FI/DE (5-year series for the top 3 indicators — needed for the Nordic comparison bars and radar). -3. Call `scb.search_tables` + `scb.query_table` using the committee → TAB mapping in `scripts/scb-context.ts`. **`language` MUST be `"sv"` or `"en"` — NEVER `"no"`** (SCB returns HTTP 400 "Unsupported language"). -4. Retry every World Bank call up to **3 times** on failure. Cache raw responses under `analysis/data/worldbank/<YYYY>/<indicator>-<country>.json` so later article types in the same daily run reuse the data. -5. Write `analysis/daily/<ARTICLE_DATE>/<ANALYSIS_SUBFOLDER>/economic-data.json` matching `analysis/schemas/economic-data.schema.json`: - -```jsonc -{ - "version": "1.0", - "articleType": "motions", - "date": "<YYYY-MM-DD>", - "policyDomains": ["fiscal policy", "labor market"], - "dataPoints": [ - { "countryCode": "SWE", "countryName": "Sweden", "indicatorId": "NY.GDP.MKTP.KD.ZG", "date": "2024", "value": 0.82 }, - { "countryCode": "DNK", "countryName": "Denmark", "indicatorId": "NY.GDP.MKTP.KD.ZG", "date": "2024", "value": 1.75 } - ], - "commentary": "<will be filled in Step 3d>", - "source": { "worldBank": ["NY.GDP.MKTP.KD.ZG", "FP.CPI.TOTL.ZG"], "scb": ["TAB1291"] } -} -``` - -**Non-negotiable**: `dataPoints` MUST be non-empty. The HTML renderer (`scripts/data-transformers/content-generators/economic-dashboard-section.ts`) emits real Chart.js canvases only when the file exists with entries — otherwise the validator fails the PR. - -**Minimum coverage (enforced by the validator):** see the matrix in `ECONOMIC_DATA_CONTRACT.md` §"Coverage matrix" for this article type's chart count, commentary word minimum, and D3 requirement. 
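Because an empty `dataPoints` array only fails at the Step 6 gate, a cheap local sanity check right after writing the file catches the problem while there is still time to re-run Step 2.6. A dependency-free sketch — the here-doc fixture stands in for the generated `economic-data.json`, and `grep` is used as a rough proxy for `jq '.dataPoints | length'` since `jq` availability on the runner is an assumption:

```shell
# Fixture standing in for analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER/economic-data.json —
# in a real run, set EDJ to that path and skip the here-doc.
EDJ="${EDJ:-/tmp/economic-data-demo.json}"
cat > "$EDJ" <<'JSON'
{
  "version": "1.0",
  "articleType": "motions",
  "dataPoints": [
    { "countryCode": "SWE", "indicatorId": "NY.GDP.MKTP.KD.ZG", "date": "2024", "value": 0.82 },
    { "countryCode": "DNK", "indicatorId": "NY.GDP.MKTP.KD.ZG", "date": "2024", "value": 1.75 }
  ]
}
JSON

# Count entries via their mandatory indicatorId field (one per dataPoint line)
grep -c '"indicatorId"' "$EDJ" > /tmp/dp_count.txt
read DP_COUNT < /tmp/dp_count.txt
if [ "$DP_COUNT" -gt 0 ]; then
  echo "✅ economic-data.json has $DP_COUNT dataPoints — safe to continue toward the Step 6 gate"
else
  echo "🔴 dataPoints is empty — validate-economic-context.ts will block the PR; re-run Step 2.6" >&2
  exit 1
fi
```

This is a pre-check only; it does not replace schema validation against `analysis/schemas/economic-data.schema.json`.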
- ---- -### Step 3b: AI Title, Meta Description & Analysis References (v5.0 — Analysis-Driven) - -> 🚨 **MANDATORY** — After article HTML is generated, the AI MUST read the completed synthesis-summary.md and use its "AI-Recommended Article Metadata" section to drive title, description, and SEO. See `SHARED_PROMPT_PATTERNS.md` §"AI-DRIVEN TITLE & META DESCRIPTION GENERATION" and `ai-driven-analysis-guide.md` §"Analysis-Driven Article Decision Protocol (v5.0)". - -**1. Read synthesis analysis first** — `cat "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER/synthesis-summary.md"` and extract: - - "Recommended Title (EN)" and "Recommended Title (SV)" — use as starting point - - "Meta Description (EN)" and "Meta Description (SV)" — use as starting point - - "Key Highlights" — verify title references at least one highlight - - "Article Decision" and "Article Priority" — validate publication decision - -**2. Generate newsworthy titles from analysis** — Read each article's content AND the synthesis findings, then generate a title following: `[Active Verb] + [Specific Actor/Institution] + [Concrete Policy Action]`. The title MUST reference findings from the synthesis. Apply to ALL languages. BANNED: ❌ "Opposition Motions: Policy Priorities This Week: {Topic}" or any title ending with ": {Topic} in Focus". - -**3. Generate AI meta descriptions from analysis** (150-160 chars) — Summarize the #1 ranked finding from synthesis significance-scoring. BANNED: ❌ any description starting with "Analysis of N documents". - -**4. 
🔴 Add analysis references section (MANDATORY — VERIFY AFTER)** — Insert the "📊 Analysis & Sources" HTML block (from SHARED_PROMPT_PATTERNS.md §ANALYSIS FILE GITHUB REFERENCES) before the article footer, linking to: -- `analysis/daily/$ARTICLE_DATE/motions/synthesis-summary.md` -- `analysis/daily/$ARTICLE_DATE/motions/swot-analysis.md` -- `analysis/daily/$ARTICLE_DATE/motions/risk-assessment.md` -- `analysis/daily/$ARTICLE_DATE/motions/threat-analysis.md` -- `analysis/daily/$ARTICLE_DATE/motions/stakeholder-perspectives.md` -- `analysis/daily/$ARTICLE_DATE/motions/significance-scoring.md` -- `analysis/daily/$ARTICLE_DATE/motions/classification-results.md` -- `analysis/daily/$ARTICLE_DATE/motions/cross-reference-map.md` -- `analysis/daily/$ARTICLE_DATE/motions/data-download-manifest.md` -- `analysis/methodologies/ai-driven-analysis-guide.md` -- Per-document analyses in `documents/` subfolder - -**After inserting, VERIFY** by running: -```bash -for FILE in news/$ARTICLE_DATE-*motions*-*.html; do - if [ -f "$FILE" ] && ! grep -q 'class="analysis-references"' "$FILE"; then - echo "🔴 MISSING analysis-references in: $FILE — MUST FIX NOW" - fi -done -``` - -**5. Update all metadata in ALL languages** — For EVERY generated language file, ensure `<title>`, `<meta name="description">`, `<meta property="og:title">`, `<meta property="og:description">`, `<h1>`, Schema.org `headline`, `alternativeHeadline`, and `description` all reflect the AI-generated title and description. - -### Step 3c: AI Content Quality Enforcement (v4.0 — MANDATORY) - -> 🚨 **v4.0 CRITICAL**: The AI MUST read pre-computed analysis and rewrite ALL script-generated stub content. See `SHARED_PROMPT_PATTERNS.md` §"AI ARTICLE CONTENT GENERATION" and `ai-driven-analysis-guide.md` v4.0. - -**1. Read pre-computed analysis** — Read synthesis, SWOT, risk analysis from `analysis/daily/$ARTICLE_DATE/motions/`. - -**2. 
Replace script-generated lede** — Replace any `"Analysis of N documents..."` with AI lede naming the most significant opposition motion(s), filing party, and policy target. - -**3. Replace boilerplate "Why It Matters"** — For EACH motion, write unique analysis citing motion number, specific policy proposal, party strategy, and target government policy. BANNED: `"Touches on {X} policy..."` boilerplate. - -**4. Replace generic "Winners & Losers"** — Replace `"The political landscape remains fluid..."` with specific opposition strategy analysis: which parties filed, what government policies they target, likelihood of committee support. - -**5. 🔴 MANDATORY: Replace ALL Deep Analysis `AI_MUST_REPLACE` markers** — The script generates `<!-- AI_MUST_REPLACE: ... -->` markers in EVERY Deep Analysis subsection. You MUST: - - Search generated HTML for ALL `AI_MUST_REPLACE` markers and replace EACH with genuine political intelligence - - "Timeline & Context" → Specific context on when these motions were filed, what triggered them, upcoming committee consideration dates - - "Why This Matters" → Specific analysis of what policy alternatives these motions establish and which target government vulnerabilities - - "Political Impact" → Name parties filing, their strategic objectives, which motions have cross-party support potential - - "Actions & Consequences" → Detail committee referral outcomes, likelihood of acceptance, campaign value for opposition parties - - "Critical Assessment" → Honest evaluation of which motions have policy substance vs. which are purely symbolic campaign ammunition - - ZERO `AI_MUST_REPLACE` markers may survive in the final committed HTML - -**6. Verify document count consistency** — Ensure the number of motions in the title/lede matches the number detailed in the body. If title says "50 motions" but body details 10, either expand body or correct the title. - -**7. 
Verify policy domain classification** — Each motion MUST be classified by its Riksdag committee assignment (utskott), NOT by keyword matching. BANNED: Classifying a food safety motion as "housing policy" based on co-location with housing motions. - -**8. Add opposition strategy context** — Explain whether motions represent coordinated opposition campaign, individual MP initiative, or response to government proposition. - -### Step 4: Translate, Validate & Verify Analysis Quality - -Run analysis references fix, validation, and HTMLHint before creating PR: -```bash -# 🔴 MANDATORY: Inject analysis references into any article missing them -npx tsx scripts/fix-analysis-references.ts --date "$ARTICLE_DATE" --rewrite --type motions - -bash scripts/validate-news-generation.sh -VALIDATION_EXIT=$? -if [ "$VALIDATION_EXIT" -ne 0 ]; then - echo "❌ News generation validation failed. Fix the reported issues before creating a PR." - exit "$VALIDATION_EXIT" -fi - -# HTMLHint validation with auto-fix — SCOPED TO TODAY'S ARTICLES ONLY -# CRITICAL: Do NOT run htmlhint/--fix on all news/*-*.html — that modifies 360+ existing -# motions articles which then get staged and exceed the 100-file PR limit (E003). -if [ -f "news/$ARTICLE_DATE-opposition-motions-en.html" ] || [ -f "news/$ARTICLE_DATE-opposition-motions-sv.html" ]; then - if ! npx htmlhint "news/$ARTICLE_DATE-opposition-motions-en.html" "news/$ARTICLE_DATE-opposition-motions-sv.html" 2>/dev/null; then - echo "⚠️ HTML validation errors in today's articles, attempting auto-fix (scoped to today only)..." - if [ -f "news/$ARTICLE_DATE-opposition-motions-en.html" ]; then - npx tsx scripts/article-quality-enhancer.ts --fix "news/$ARTICLE_DATE-opposition-motions-en.html" - fi - if [ -f "news/$ARTICLE_DATE-opposition-motions-sv.html" ]; then - npx tsx scripts/article-quality-enhancer.ts --fix "news/$ARTICLE_DATE-opposition-motions-sv.html" - fi - if ! 
npx htmlhint "news/$ARTICLE_DATE-opposition-motions-en.html" "news/$ARTICLE_DATE-opposition-motions-sv.html" 2>/dev/null; then - echo "⚠️ HTML validation still failing after auto-fix — manual review needed (continuing to PR)" - fi - fi -fi -``` - -**CRITICAL: Each article MUST contain real analysis, not just a list of translated links.** -Every generated article must include: -- An analytical lede paragraph about opposition strategy and political fault lines (not just a motion count) -- Opposition Strategy section analysing which parties are most active and why -- "Why It Matters" analysis for each motion with policy domain context -- Coalition Dynamics section showing party activity breakdown -- Party-level breakdown with motion counts per party - -If the generated article lacks these analytical sections, manually add contextual analysis before committing. - -## MANDATORY Quality Validation - -After article generation, verify EACH article meets these minimum standards before committing. -Apply the quality rubric from **`scripts/prompts/v2/quality-criteria.md`** (minimum score: 7/10). -- **`scripts/prompts/v2/per-file-intelligence-analysis.md`** — Per-file AI analysis protocol -- **`analysis/methodologies/ai-driven-analysis-guide.md`** — Methodology for deep per-file analysis -- **`analysis/templates/per-file-political-intelligence.md`** — Per-file analysis output template - -### Iterative Analysis Protocol - -For each generated article, apply up to 3 iterations: -1. **Iteration 1** — Generate initial draft from MCP data -2. **Self-assess** — Score against quality rubric (Accuracy + Depth + Perspectives + Translation + Editorial) -3. **If score < 7**: Identify lowest-scoring dimension and regenerate those sections -4. **Iteration 2** — Address quality gaps, ensure party-grouped strategy analysis -5. **If still < 7**: Final iteration — add cross-party analysis, deepen policy context -6. 
**Maximum 3 iterations** — Never publish below 5/10 - -### Required Sections (at least 3 of 5): -1. **Analytical Lede** (paragraph, not just document count) -2. **Thematic Analysis** (documents grouped by policy theme) -3. **Strategic Context** (why these documents matter politically) -4. **Stakeholder Impact** (who benefits, who loses) -5. **What Happens Next** (expected timeline and outcomes) - -### Disqualifying Patterns: -- ❌ `"Filed by: Unknown (Unknown)"` — FIX author/party metadata before committing -- ❌ `data-translate="true"` spans in non-Swedish articles — TRANSLATE before committing -- ❌ Identical "Why It Matters" text for all entries — DIFFERENTIATE analysis per motion -- ❌ Flat list of motions without grouping — GROUP by policy theme or party -- ❌ Article under 500 words — EXPAND with analytical sections +# 📝 Opposition Motions -### Playwright Visual Validation -Run Playwright validation before creating the PR: -```bash -# HTMLHint validation -npx htmlhint "news/*-opposition-motions-*.html" +Generates deep political intelligence articles on opposition motions in core languages (EN, SV). Translations dispatched to `news-translate`. -# Playwright visual validation (accessibility, RTL, responsive) -npx tsx scripts/validate-articles-playwright.ts --filter "opposition-motions" +## What this workflow does -# Validate JSON-LD cross-references -npx tsx scripts/validate-cross-references.ts news/*-opposition-motions-*.html -``` +- **Article type**: `motions` +- **Analysis subfolder**: `analysis/daily/$ARTICLE_DATE/motions/` +- **Core languages produced**: `en`, `sv` (remaining 12 languages dispatched to `news-translate`) +- **One pull request per run** containing analysis + articles + visualisation data. 
-### Bash Validation Commands: -```bash -# Check for unknown authors (should return 0) -grep -l "Filed by: Unknown" news/*-opposition-motions-*.html 2>/dev/null | wc -l || true +## Time budget (60 min, minimum 45 min of real work) -# Check for untranslated spans in English article (should return 0) -grep -c 'data-translate="true"' "news/$ARTICLE_DATE-opposition-motions-en.html" 2>/dev/null || true +| Minutes | Phase | Module | +|---------|-------|--------| +| 0–2 | MCP pre-warm + `get_sync_status` | 02 | +| 2–6 | Download data + catalogue | 03 | +| 6–25 | Analysis Pass 1 (methodology read + per-doc analyses + 9 artifacts) | 04 | +| 25–35 | Analysis Pass 2 (read-back + improvements) | 04 | +| 35–37 | Analysis Gate | 05 | +| 37–48 | Article Pass 1 + Pass 2 (EN, SV) | 06 | +| 48–55 | Visual + link validation | 06 | +| 55–60 | Stage, commit, **ONE** `safeoutputs___create_pull_request` | 07 | -# Check word count of English article text content (warn if < 500; HTML tags stripped) -FILE="news/$ARTICLE_DATE-opposition-motions-en.html" -if [ ! -f "$FILE" ]; then echo "WARNING: Expected article file not found: $FILE — check if generation succeeded"; else - sed 's/<[^>]*>/ /g' "$FILE" | tr -s '[:space:]' '\n' | grep -c '[[:alnum:]]' 2>/dev/null > /tmp/word_count.txt || echo 0 > /tmp/word_count.txt - read WORD_COUNT < /tmp/word_count.txt - echo "Content word count (HTML tags stripped): $WORD_COUNT" - if [ "$WORD_COUNT" -lt 500 ]; then echo "WARNING: Article content may be too short ($WORD_COUNT words) — consider expanding before PR"; fi -fi +Trim scope before quality. Never open a second PR to "save" partial work — there is no second PR. -# Check for duplicate "Why It Matters" content (should return empty) -grep -o 'Why It Matters[^<]*' "news/$ARTICLE_DATE-opposition-motions-en.html" 2>/dev/null | sort | uniq -d || true -``` +## Inputs -### If Article Fails Quality Check: -1. Use bash to enhance the HTML with analytical sections -2. 
Replace generic "Why It Matters" with motion-specific analysis -3. Add thematic grouping headers (e.g., by policy area or party) -4. Translate any remaining Swedish content +- `article_date` — override date (defaults to today) +- `force_generation` — regenerate even if today's article exists (analysis is always refreshed regardless) +- `languages` — core content languages (default `en,sv`) +- `analysis_depth` — `standard` | `deep` (default) | `comprehensive` -**Note**: News index files, metadata, and sitemap are generated automatically at build time by the `prebuild` script. Do NOT run generation scripts or commit their output — only commit the article HTML files. +## Dedup & analysis-only path -## 🌐 Translation Quality +If articles for `$ARTICLE_DATE` + `motions` already exist **and** `force_generation=false`: -EN/SV only: all headings, meta, content in correct language; no untranslated `data-translate` spans; Swedish API titles translated. Full rules: `news-translate.md`. -## Article Naming Convention -Files: `YYYY-MM-DD-opposition-motions-{lang}.html` +- Still run the full analysis pipeline (modules 03 → 04 → 05). +- Commit the analysis. +- Open the single PR with title `📊 Analysis Only — Opposition Motions — $ARTICLE_DATE` and label `analysis-only`. -## Step 3d: Economic Commentary (MANDATORY) +Analysis is the primary product — a run never "does nothing" just because articles exist. -> After Step 3c and **before** calling `safeoutputs.create_pull_request`, re-open `economic-data.json` and replace the placeholder `commentary` string with a 2–4 sentence paragraph that: -> - cites **2–3 concrete numeric values** from `dataPoints`; -> - ties the numbers to the day's political developments (not definitions of indicators); -> - is written in plain English (translations are produced downstream by `news-translate`); -> - meets the minimum word count in the coverage matrix for this article type. 
-> -> Banned phrasings (the multi-dim quality score flags these): "The political landscape remains fluid…", "Touches on X policy…", pure indicator definitions. -> -> Full rules: [`.github/aw/ECONOMIC_DATA_CONTRACT.md`](../aw/ECONOMIC_DATA_CONTRACT.md) §"Writing the AI commentary — workflow Step 3d". +All other rules (bash format, AWF shell safety, MCP access, download pipeline, analysis methodology & gate, article generation, commit & PR policy) live in the imported modules. diff --git a/.github/workflows/news-propositions.lock.yml b/.github/workflows/news-propositions.lock.yml index 668b49ed0..5e25a2faa 100644 --- a/.github/workflows/news-propositions.lock.yml +++ b/.github/workflows/news-propositions.lock.yml @@ -1,4 +1,4 @@ -# gh-aw-metadata: {"schema_version":"v3","frontmatter_hash":"c13ed203646f0d470b1a371dd8940e8f5de78cfcb7d0cd2553c28be379f8b59c","compiler_version":"v0.69.2","agent_id":"copilot","agent_model":"claude-opus-4.7"} +# gh-aw-metadata: {"schema_version":"v3","frontmatter_hash":"2a8b4b58de9bb9d5945610c91ca9ea2075bd51f86c626342cc201679c4bf9007","compiler_version":"v0.69.2","agent_id":"copilot","agent_model":"claude-opus-4.7"} # gh-aw-manifest: 
{"version":1,"secrets":["COPILOT_GITHUB_TOKEN","GH_AW_CI_TRIGGER_TOKEN","GH_AW_GITHUB_MCP_SERVER_TOKEN","GH_AW_GITHUB_TOKEN","GITHUB_TOKEN"],"actions":[{"repo":"actions/checkout","sha":"de0fac2e4500dabe0009e67214ff5f5447ce83dd","version":"v6.0.2"},{"repo":"actions/download-artifact","sha":"3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c","version":"v8.0.1"},{"repo":"actions/github-script","sha":"373c709c69115d41ff229c7e5df9f8788daa9553","version":"v9"},{"repo":"actions/setup-node","sha":"6044e13b5dc448c55e2357c09f80417699197238","version":"6044e13b5dc448c55e2357c09f80417699197238"},{"repo":"actions/upload-artifact","sha":"043fb46d1a93c77aae656e7c1c64a875d1fc6a0a","version":"v7.0.1"},{"repo":"github/gh-aw-actions/setup","sha":"dca90cae5e2ec0ef2275f97efcb832793c86e082","version":"v0.69.2"}],"containers":[{"image":"alpine:latest"},{"image":"ghcr.io/github/gh-aw-firewall/agent:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/api-proxy:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/squid:0.25.26"},{"image":"ghcr.io/github/gh-aw-mcpg:v0.2.26"},{"image":"ghcr.io/github/github-mcp-server:v1.0.0"},{"image":"node:25-alpine"},{"image":"node:lts-alpine"}]} # ___ _ _ # / _ \ | | (_) @@ -24,6 +24,17 @@ # # Generates government propositions analysis articles in core languages (EN, SV). Translations for remaining 12 languages are handled by the dedicated news-translate workflow via dispatch-workflow. Single article type per run. 
# +# Resolved workflow manifest: +# Imports: +# - ../prompts/00-base-contract.md +# - ../prompts/01-bash-and-shell-safety.md +# - ../prompts/02-mcp-access.md +# - ../prompts/03-data-download.md +# - ../prompts/04-analysis-pipeline.md +# - ../prompts/05-analysis-gate.md +# - ../prompts/06-article-generation.md +# - ../prompts/07-commit-and-pr.md +# # Secrets used: # - COPILOT_GITHUB_TOKEN # - GH_AW_CI_TRIGGER_TOKEN @@ -184,14 +195,9 @@ jobs: env: GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt GH_AW_SAFE_OUTPUTS: ${{ runner.temp }}/gh-aw/safeoutputs/outputs.jsonl - GH_AW_EXPR_731DE217: ${{ github.event.inputs.force_generation || 'false' }} GH_AW_GITHUB_ACTOR: ${{ github.actor }} GH_AW_GITHUB_EVENT_COMMENT_ID: ${{ github.event.comment.id }} GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER: ${{ github.event.discussion.number }} - GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH: ${{ github.event.inputs.analysis_depth }} - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE: ${{ github.event.inputs.article_date }} - GH_AW_GITHUB_EVENT_INPUTS_FORCE_GENERATION: ${{ github.event.inputs.force_generation }} - GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES: ${{ github.event.inputs.languages }} GH_AW_GITHUB_EVENT_ISSUE_NUMBER: ${{ github.event.issue.number }} GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: ${{ github.event.pull_request.number }} GH_AW_GITHUB_REPOSITORY: ${{ github.repository }} @@ -202,21 +208,21 @@ jobs: run: | bash "${RUNNER_TEMP}/gh-aw/actions/create_prompt_first.sh" { - cat << 'GH_AW_PROMPT_2dd2be9b72b736f4_EOF' + cat << 'GH_AW_PROMPT_956a02d6e412074e_EOF' <system> - GH_AW_PROMPT_2dd2be9b72b736f4_EOF + GH_AW_PROMPT_956a02d6e412074e_EOF cat "${RUNNER_TEMP}/gh-aw/prompts/xpia.md" cat "${RUNNER_TEMP}/gh-aw/prompts/temp_folder_prompt.md" cat "${RUNNER_TEMP}/gh-aw/prompts/markdown.md" cat "${RUNNER_TEMP}/gh-aw/prompts/agentic_workflows_guide.md" cat "${RUNNER_TEMP}/gh-aw/prompts/repo_memory_prompt.md" cat "${RUNNER_TEMP}/gh-aw/prompts/safe_outputs_prompt.md" - cat << 'GH_AW_PROMPT_2dd2be9b72b736f4_EOF' + cat << 
'GH_AW_PROMPT_956a02d6e412074e_EOF' <safe-output-tools> - Tools: add_comment, create_pull_request(max:2), dispatch_workflow, missing_tool, missing_data, noop - GH_AW_PROMPT_2dd2be9b72b736f4_EOF + Tools: add_comment, create_pull_request, dispatch_workflow, missing_tool, missing_data, noop + GH_AW_PROMPT_956a02d6e412074e_EOF cat "${RUNNER_TEMP}/gh-aw/prompts/safe_outputs_create_pull_request.md" - cat << 'GH_AW_PROMPT_2dd2be9b72b736f4_EOF' + cat << 'GH_AW_PROMPT_956a02d6e412074e_EOF' </safe-output-tools> <github-context> The following GitHub context information is available for this workflow: @@ -246,22 +252,25 @@ jobs: {{/if}} </github-context> - GH_AW_PROMPT_2dd2be9b72b736f4_EOF + GH_AW_PROMPT_956a02d6e412074e_EOF cat "${RUNNER_TEMP}/gh-aw/prompts/github_mcp_tools_with_safeoutputs_prompt.md" - cat << 'GH_AW_PROMPT_2dd2be9b72b736f4_EOF' + cat << 'GH_AW_PROMPT_956a02d6e412074e_EOF' </system> + {{#runtime-import .github/prompts/00-base-contract.md}} + {{#runtime-import .github/prompts/01-bash-and-shell-safety.md}} + {{#runtime-import .github/prompts/02-mcp-access.md}} + {{#runtime-import .github/prompts/03-data-download.md}} + {{#runtime-import .github/prompts/04-analysis-pipeline.md}} + {{#runtime-import .github/prompts/05-analysis-gate.md}} + {{#runtime-import .github/prompts/06-article-generation.md}} + {{#runtime-import .github/prompts/07-commit-and-pr.md}} {{#runtime-import .github/workflows/news-propositions.md}} - GH_AW_PROMPT_2dd2be9b72b736f4_EOF + GH_AW_PROMPT_956a02d6e412074e_EOF } > "$GH_AW_PROMPT" - name: Interpolate variables and render templates uses: actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9 env: GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt - GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH: ${{ github.event.inputs.analysis_depth }} - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE: ${{ github.event.inputs.article_date }} - GH_AW_EXPR_731DE217: ${{ github.event.inputs.force_generation || 'false' }} - GH_AW_GITHUB_EVENT_INPUTS_FORCE_GENERATION: 
${{ github.event.inputs.force_generation }} - GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES: ${{ github.event.inputs.languages }} with: script: | const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); @@ -272,14 +281,9 @@ jobs: uses: actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9 env: GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt - GH_AW_EXPR_731DE217: ${{ github.event.inputs.force_generation || 'false' }} GH_AW_GITHUB_ACTOR: ${{ github.actor }} GH_AW_GITHUB_EVENT_COMMENT_ID: ${{ github.event.comment.id }} GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER: ${{ github.event.discussion.number }} - GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH: ${{ github.event.inputs.analysis_depth }} - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE: ${{ github.event.inputs.article_date }} - GH_AW_GITHUB_EVENT_INPUTS_FORCE_GENERATION: ${{ github.event.inputs.force_generation }} - GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES: ${{ github.event.inputs.languages }} GH_AW_GITHUB_EVENT_ISSUE_NUMBER: ${{ github.event.issue.number }} GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: ${{ github.event.pull_request.number }} GH_AW_GITHUB_REPOSITORY: ${{ github.repository }} @@ -302,14 +306,9 @@ jobs: return await substitutePlaceholders({ file: process.env.GH_AW_PROMPT, substitutions: { - GH_AW_EXPR_731DE217: process.env.GH_AW_EXPR_731DE217, GH_AW_GITHUB_ACTOR: process.env.GH_AW_GITHUB_ACTOR, GH_AW_GITHUB_EVENT_COMMENT_ID: process.env.GH_AW_GITHUB_EVENT_COMMENT_ID, GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER: process.env.GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER, - GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH: process.env.GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH, - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE: process.env.GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE, - GH_AW_GITHUB_EVENT_INPUTS_FORCE_GENERATION: process.env.GH_AW_GITHUB_EVENT_INPUTS_FORCE_GENERATION, - GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES: process.env.GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES, GH_AW_GITHUB_EVENT_ISSUE_NUMBER: 
process.env.GH_AW_GITHUB_EVENT_ISSUE_NUMBER, GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: process.env.GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER, GH_AW_GITHUB_REPOSITORY: process.env.GH_AW_GITHUB_REPOSITORY, @@ -411,7 +410,7 @@ jobs: run: | npm ci --prefer-offline --no-audit - name: Pre-warm MCP server (Render.com cold start mitigation) - run: "echo \"🔥 Pre-warming riksdag-regering MCP server via MCP protocol...\"\nMCP_URL=\"https://riksdag-regering-ai.onrender.com/mcp\"\nWARM=false\nfor i in 1 2 3 4 5 6; do\n RESP=$(curl -sf --max-time 30 -X POST \\\n -H \"Content-Type: application/json\" \\\n -d '{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\",\"params\":{}}' \\\n \"$MCP_URL\" 2>/dev/null) || true\n if echo \"$RESP\" | grep -q '\"tools\"'; then\n TOOL_COUNT=$(echo \"$RESP\" | grep -o '\"name\"' | wc -l)\n echo \"✅ MCP server responded on attempt $i with $TOOL_COUNT tools registered\"\n WARM=true\n break\n fi\n echo \"⏳ Attempt $i/6 — server may be cold-starting, waiting 20s...\"\n sleep 20\ndone\nif [ \"$WARM\" = \"false\" ]; then\n echo \"⚠️ MCP server did not respond after 6 attempts — agent will retry via in-prompt health gate\"\nfi\necho \"🔄 Starting background keep-alive pinger (every 30s, max 55 min — covers full 60-min workflow through safe-output PR creation)...\"\nKEEP_ALIVE_START=$(date +%s)\nKEEP_ALIVE_END=$((KEEP_ALIVE_START + 3300))\nexport MCP_URL KEEP_ALIVE_END\nnohup bash -c '\n while :; do\n NOW=$(date +%s)\n if [ \"$NOW\" -ge \"$KEEP_ALIVE_END\" ]; then\n break\n fi\n curl -sf --max-time 10 -X POST \\\n -H \"Content-Type: application/json\" \\\n -d \"{\\\"jsonrpc\\\":\\\"2.0\\\",\\\"id\\\":1,\\\"method\\\":\\\"tools/list\\\",\\\"params\\\":{}}\" \\\n \"$MCP_URL\" -o /dev/null 2>/dev/null || true\n sleep 30\n done\n' </dev/null >/tmp/mcp-keepalive.log 2>&1 &\nKEEP_ALIVE_PID=$!\ndisown \"$KEEP_ALIVE_PID\" 2>/dev/null || true\necho \"Keep-alive PID: $KEEP_ALIVE_PID (auto-exits after 55 min; log: /tmp/mcp-keepalive.log)\"\n" + run: "echo \"🔥 
Pre-warming riksdag-regering MCP server via MCP protocol...\"\nMCP_URL=\"https://riksdag-regering-ai.onrender.com/mcp\"\nWARM=false\nfor i in 1 2 3 4 5 6; do\n RESP=$(curl -sf --max-time 30 -X POST \\\n -H \"Content-Type: application/json\" \\\n -d '{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\",\"params\":{}}' \\\n \"$MCP_URL\" 2>/dev/null) || true\n if echo \"$RESP\" | grep -q '\"tools\"'; then\n TOOL_COUNT=$(echo \"$RESP\" | grep -o '\"name\"' | wc -l)\n echo \"✅ MCP server responded on attempt $i with $TOOL_COUNT tools registered\"\n WARM=true\n break\n fi\n echo \"⏳ Attempt $i/6 — server may be cold-starting, waiting 20s...\"\n sleep 20\ndone\nif [ \"$WARM\" = \"false\" ]; then\n echo \"⚠️ MCP server did not respond after 6 attempts — agent will retry via in-prompt health gate\"\nfi\n" - name: Pre-flight external endpoint reachability check (runs before MCP Gateway) run: "echo \"🔍 Network Diagnostics — $(date -u '+%Y-%m-%dT%H:%M:%SZ')\"\necho \"═══════════════════════════════════════════\"\necho \"\"\necho \"📡 DNS Resolution Tests:\"\nfor domain in riksdag-regering-ai.onrender.com api.scb.se api.worldbank.org data.riksdagen.se www.riksdagen.se www.regeringen.se; do\n if nslookup \"$domain\" >/dev/null 2>&1; then\n IP=$(nslookup \"$domain\" 2>/dev/null | grep -A1 \"Name:\" | grep \"Address:\" | head -1 | awk '{print $2}')\n echo \" ✅ $domain → $IP\"\n else\n echo \" ❌ $domain — DNS FAILED\"\n fi\ndone\necho \"\"\necho \"🌐 HTTPS Connectivity Tests:\"\nfor url in \\\n \"https://riksdag-regering-ai.onrender.com/mcp\" \\\n \"https://api.scb.se/OV0104/v2beta\" \\\n \"https://api.worldbank.org/v2/country/SE?format=json\" \\\n \"https://data.riksdagen.se/dokumentlista/?sok=test&doktyp=bet&utformat=json&a=1\" \\\n; do\n HTTP_CODE=$(curl -s -o /dev/null -w \"%{http_code}\" --max-time 10 \"$url\" 2>/dev/null || echo \"000\")\n DOMAIN=$(echo \"$url\" | sed 's|https://||' | cut -d/ -f1)\n if [ \"$HTTP_CODE\" -ge 200 ] && [ \"$HTTP_CODE\" -lt 400 ]; then\n echo \" 
✅ $DOMAIN → HTTP $HTTP_CODE\"\n elif [ \"$HTTP_CODE\" = \"000\" ]; then\n echo \" ❌ $DOMAIN → TIMEOUT/UNREACHABLE\"\n else\n echo \" ⚠️ $DOMAIN → HTTP $HTTP_CODE\"\n fi\ndone\necho \"\"\necho \"🔌 MCP Server Tool Count:\"\nTOOL_RESP=$(curl -sf --max-time 15 -X POST \\\n -H \"Content-Type: application/json\" \\\n -d '{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\",\"params\":{}}' \\\n \"https://riksdag-regering-ai.onrender.com/mcp\" 2>/dev/null) || TOOL_RESP=\"\"\nif echo \"$TOOL_RESP\" | grep -q '\"tools\"'; then\n TOOL_COUNT=$(echo \"$TOOL_RESP\" | grep -o '\"name\"' | wc -l)\n echo \" ✅ riksdag-regering MCP: $TOOL_COUNT tools registered\"\nelse\n echo \" ❌ riksdag-regering MCP: No tools response (server may still be starting)\"\nfi\necho \"\"\necho \"═══════════════════════════════════════════\"\n" @@ -499,16 +498,16 @@ jobs: mkdir -p "${RUNNER_TEMP}/gh-aw/safeoutputs" mkdir -p /tmp/gh-aw/safeoutputs mkdir -p /tmp/gh-aw/mcp-logs/safeoutputs - cat > "${RUNNER_TEMP}/gh-aw/safeoutputs/config.json" << 'GH_AW_SAFE_OUTPUTS_CONFIG_a1713af6f24ae1e7_EOF' - 
{"add_comment":{"max":1},"create_pull_request":{"draft":false,"expires":336,"labels":["agentic-news","analysis-data"],"max":2,"max_patch_size":4096,"protected_files":["package.json","bun.lockb","bunfig.toml","deno.json","deno.jsonc","deno.lock","global.json","NuGet.Config","Directory.Packages.props","mix.exs","mix.lock","go.mod","go.sum","stack.yaml","stack.yaml.lock","pom.xml","build.gradle","build.gradle.kts","settings.gradle","settings.gradle.kts","gradle.properties","package-lock.json","yarn.lock","pnpm-lock.yaml","npm-shrinkwrap.json","requirements.txt","Pipfile","Pipfile.lock","pyproject.toml","setup.py","setup.cfg","Gemfile","Gemfile.lock","uv.lock","CODEOWNERS","AGENTS.md","CLAUDE.md","GEMINI.md"],"protected_path_prefixes":[".github/",".agents/"]},"create_report_incomplete_issue":{},"dispatch_workflow":{"aw_context_workflows":["news-translate"],"max":1,"workflow_files":{"news-translate":".lock.yml"},"workflows":["news-translate"]},"missing_data":{},"missing_tool":{},"noop":{"max":1,"report-as-issue":"true"},"push_repo_memory":{"memories":[{"dir":"/tmp/gh-aw/repo-memory/default","id":"default","max_file_count":50,"max_file_size":51200,"max_patch_size":51200}]},"report_incomplete":{}} - GH_AW_SAFE_OUTPUTS_CONFIG_a1713af6f24ae1e7_EOF + cat > "${RUNNER_TEMP}/gh-aw/safeoutputs/config.json" << 'GH_AW_SAFE_OUTPUTS_CONFIG_adeff6bf6d40e8b2_EOF' + 
{"add_comment":{"max":1},"create_pull_request":{"draft":false,"expires":336,"labels":["agentic-news","analysis-data"],"max":1,"max_patch_size":4096,"protected_files":["package.json","bun.lockb","bunfig.toml","deno.json","deno.jsonc","deno.lock","global.json","NuGet.Config","Directory.Packages.props","mix.exs","mix.lock","go.mod","go.sum","stack.yaml","stack.yaml.lock","pom.xml","build.gradle","build.gradle.kts","settings.gradle","settings.gradle.kts","gradle.properties","package-lock.json","yarn.lock","pnpm-lock.yaml","npm-shrinkwrap.json","requirements.txt","Pipfile","Pipfile.lock","pyproject.toml","setup.py","setup.cfg","Gemfile","Gemfile.lock","uv.lock","CODEOWNERS","AGENTS.md","CLAUDE.md","GEMINI.md"],"protected_path_prefixes":[".github/",".agents/"]},"create_report_incomplete_issue":{},"dispatch_workflow":{"max":1,"workflow_files":{"news-translate":".lock.yml"},"workflows":["news-translate"]},"missing_data":{},"missing_tool":{},"noop":{"max":1,"report-as-issue":"true"},"push_repo_memory":{"memories":[{"dir":"/tmp/gh-aw/repo-memory/default","id":"default","max_file_count":50,"max_file_size":51200,"max_patch_size":51200}]},"report_incomplete":{}} + GH_AW_SAFE_OUTPUTS_CONFIG_adeff6bf6d40e8b2_EOF - name: Write Safe Outputs Tools env: GH_AW_TOOLS_META_JSON: | { "description_suffixes": { "add_comment": " CONSTRAINTS: Maximum 1 comment(s) can be added. Supports reply_to_id for discussion threading.", - "create_pull_request": " CONSTRAINTS: Maximum 2 pull request(s) can be created. Labels [\"agentic-news\" \"analysis-data\"] will be automatically added." + "create_pull_request": " CONSTRAINTS: Maximum 1 pull request(s) can be created. Labels [\"agentic-news\" \"analysis-data\"] will be automatically added." 
}, "repo_params": {}, "dynamic_tools": [ @@ -531,11 +530,6 @@ jobs: "description": "Article type to translate (propositions, motions, committee-reports, week-ahead, month-ahead, weekly-review, monthly-review, breaking, evening-analysis, deep-inspection, interpellations). Leave empty to scan for all untranslated articles.", "type": "string" }, - "aw_context": { - "default": "", - "description": "Agent caller context (used internally by Agentic Workflows).", - "type": "string" - }, "languages": { "default": "all-extra", "description": "Target languages (da,no,fi,de,fr,es,nl,ar,he,ja,ko,zh | nordic-extra | eu-extra | cjk | rtl | all-extra). Default: all-extra (all except en,sv)", @@ -772,7 +766,7 @@ jobs: mkdir -p /home/runner/.copilot GH_AW_NODE=$(which node 2>/dev/null || command -v node 2>/dev/null || echo node) - cat << GH_AW_MCP_CONFIG_16cb6f6fb2bced82_EOF | "$GH_AW_NODE" "${RUNNER_TEMP}/gh-aw/actions/start_mcp_gateway.cjs" + cat << GH_AW_MCP_CONFIG_0fcf314c19f952a3_EOF | "$GH_AW_NODE" "${RUNNER_TEMP}/gh-aw/actions/start_mcp_gateway.cjs" { "mcpServers": { "agenticworkflows": { @@ -888,7 +882,7 @@ jobs: "payloadDir": "${MCP_GATEWAY_PAYLOAD_DIR}" } } - GH_AW_MCP_CONFIG_16cb6f6fb2bced82_EOF + GH_AW_MCP_CONFIG_0fcf314c19f952a3_EOF - name: Download activation artifact uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1 with: @@ -1575,7 +1569,7 @@ jobs: GH_AW_ALLOWED_DOMAINS: 
"*.githubusercontent.com,api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.imf.org,api.individual.githubcopilot.com,api.npms.io,api.scb.se,api.snapcraft.io,api.worldbank.org,archive.ubuntu.com,azure.archive.ubuntu.com,bun.sh,cdn.jsdelivr.net,codeload.github.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,data.imf.org,data.riksdagen.se,deb.nodesource.com,deno.land,docs.github.com,esm.sh,get.pnpm.io,github-cloud.githubusercontent.com,github-cloud.s3.amazonaws.com,github.blog,github.com,github.githubassets.com,googleapis.deno.dev,googlechromelabs.github.io,hack23.com,hack23.github.io,host.docker.internal,json-schema.org,json.schemastore.org,jsr.io,keyserver.ubuntu.com,lfs.github.com,localhost,nodejs.org,npm.pkg.github.com,npmjs.com,npmjs.org,objects.githubusercontent.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,regeringen.se,registry.bower.io,registry.npmjs.com,registry.npmjs.org,registry.yarnpkg.com,repo.yarnpkg.com,riksdag-regering-ai.onrender.com,riksdagen.se,riksdagsmonitor.com,s.symcb.com,s.symcd.com,security.ubuntu.com,skimdb.npmjs.com,storage.googleapis.com,telemetry.enterprise.githubcopilot.com,telemetry.vercel.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com,www.googleapis.com,www.hack23.com,www.imf.org,www.npmjs.com,www.npmjs.org,www.regeringen.se,www.riksdagen.se,www.riksdagsmonitor.com,www.scb.se,yarnpkg.com" GITHUB_SERVER_URL: ${{ github.server_url }} GITHUB_API_URL: ${{ github.api_url }} - GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: 
"{\"add_comment\":{\"max\":1},\"create_pull_request\":{\"draft\":false,\"expires\":336,\"labels\":[\"agentic-news\",\"analysis-data\"],\"max\":2,\"max_patch_size\":4096,\"protected_files\":[\"package.json\",\"bun.lockb\",\"bunfig.toml\",\"deno.json\",\"deno.jsonc\",\"deno.lock\",\"global.json\",\"NuGet.Config\",\"Directory.Packages.props\",\"mix.exs\",\"mix.lock\",\"go.mod\",\"go.sum\",\"stack.yaml\",\"stack.yaml.lock\",\"pom.xml\",\"build.gradle\",\"build.gradle.kts\",\"settings.gradle\",\"settings.gradle.kts\",\"gradle.properties\",\"package-lock.json\",\"yarn.lock\",\"pnpm-lock.yaml\",\"npm-shrinkwrap.json\",\"requirements.txt\",\"Pipfile\",\"Pipfile.lock\",\"pyproject.toml\",\"setup.py\",\"setup.cfg\",\"Gemfile\",\"Gemfile.lock\",\"uv.lock\",\"CODEOWNERS\",\"AGENTS.md\",\"CLAUDE.md\",\"GEMINI.md\"],\"protected_path_prefixes\":[\".github/\",\".agents/\"]},\"create_report_incomplete_issue\":{},\"dispatch_workflow\":{\"aw_context_workflows\":[\"news-translate\"],\"max\":1,\"workflow_files\":{\"news-translate\":\".lock.yml\"},\"workflows\":[\"news-translate\"]},\"missing_data\":{},\"missing_tool\":{},\"noop\":{\"max\":1,\"report-as-issue\":\"true\"},\"report_incomplete\":{}}" + GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: 
"{\"add_comment\":{\"max\":1},\"create_pull_request\":{\"draft\":false,\"expires\":336,\"labels\":[\"agentic-news\",\"analysis-data\"],\"max\":1,\"max_patch_size\":4096,\"protected_files\":[\"package.json\",\"bun.lockb\",\"bunfig.toml\",\"deno.json\",\"deno.jsonc\",\"deno.lock\",\"global.json\",\"NuGet.Config\",\"Directory.Packages.props\",\"mix.exs\",\"mix.lock\",\"go.mod\",\"go.sum\",\"stack.yaml\",\"stack.yaml.lock\",\"pom.xml\",\"build.gradle\",\"build.gradle.kts\",\"settings.gradle\",\"settings.gradle.kts\",\"gradle.properties\",\"package-lock.json\",\"yarn.lock\",\"pnpm-lock.yaml\",\"npm-shrinkwrap.json\",\"requirements.txt\",\"Pipfile\",\"Pipfile.lock\",\"pyproject.toml\",\"setup.py\",\"setup.cfg\",\"Gemfile\",\"Gemfile.lock\",\"uv.lock\",\"CODEOWNERS\",\"AGENTS.md\",\"CLAUDE.md\",\"GEMINI.md\"],\"protected_path_prefixes\":[\".github/\",\".agents/\"]},\"create_report_incomplete_issue\":{},\"dispatch_workflow\":{\"max\":1,\"workflow_files\":{\"news-translate\":\".lock.yml\"},\"workflows\":[\"news-translate\"]},\"missing_data\":{},\"missing_tool\":{},\"noop\":{\"max\":1,\"report-as-issue\":\"true\"},\"report_incomplete\":{}}" GH_AW_CI_TRIGGER_TOKEN: ${{ secrets.GH_AW_CI_TRIGGER_TOKEN }} with: github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }} diff --git a/.github/workflows/news-propositions.md b/.github/workflows/news-propositions.md index cbb1e80a3..81b842fd9 100644 --- a/.github/workflows/news-propositions.md +++ b/.github/workflows/news-propositions.md @@ -2,6 +2,15 @@ name: "News: Government Propositions" description: Generates government propositions analysis articles in core languages (EN, SV). Translations for remaining 12 languages are handled by the dedicated news-translate workflow via dispatch-workflow. Single article type per run. 
strict: false +imports: + - ../prompts/00-base-contract.md + - ../prompts/01-bash-and-shell-safety.md + - ../prompts/02-mcp-access.md + - ../prompts/03-data-download.md + - ../prompts/04-analysis-pipeline.md + - ../prompts/05-analysis-gate.md + - ../prompts/06-article-generation.md + - ../prompts/07-commit-and-pr.md on: schedule: daily around 5:00 on weekdays workflow_dispatch: @@ -119,7 +128,7 @@ safe-outputs: labels: [agentic-news, analysis-data] draft: false expires: 14d - max: 2 + max: 1 add-comment: {} dispatch-workflow: workflows: [news-translate] @@ -157,26 +166,6 @@ steps: if [ "$WARM" = "false" ]; then echo "⚠️ MCP server did not respond after 6 attempts — agent will retry via in-prompt health gate" fi - echo "🔄 Starting background keep-alive pinger (every 30s, max 55 min — covers full 60-min workflow through safe-output PR creation)..." - KEEP_ALIVE_START=$(date +%s) - KEEP_ALIVE_END=$((KEEP_ALIVE_START + 3300)) - export MCP_URL KEEP_ALIVE_END - nohup bash -c ' - while :; do - NOW=$(date +%s) - if [ "$NOW" -ge "$KEEP_ALIVE_END" ]; then - break - fi - curl -sf --max-time 10 -X POST \ - -H "Content-Type: application/json" \ - -d "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\",\"params\":{}}" \ - "$MCP_URL" -o /dev/null 2>/dev/null || true - sleep 30 - done - ' </dev/null >/tmp/mcp-keepalive.log 2>&1 & - KEEP_ALIVE_PID=$! - disown "$KEEP_ALIVE_PID" 2>/dev/null || true - echo "Keep-alive PID: $KEEP_ALIVE_PID (auto-exits after 55 min; log: /tmp/mcp-keepalive.log)" - name: Pre-flight external endpoint reachability check (runs before MCP Gateway) run: | @@ -230,676 +219,47 @@ engine: model: claude-opus-4.7 --- -# 📜 Government Propositions Article Generator - -You are the **News Journalist Agent** for Riksdagsmonitor generating **government propositions** analysis articles. 
- -## 🔴 CRITICAL: AI Writes ALL Content with Iterative Improvement (v5.0) - -> **You are a political intelligence analyst, NOT a script executor.** Your PRIMARY job is to produce excellent quality political intelligence through iterative improvement. You MUST: -> 1. **ANALYZE** parliamentary data deeply — SWOT, stakeholder perspectives, risk assessment, election implications -> 2. **WRITE** genuine political intelligence articles with specific actors, evidence citations, and analytical insight -> 3. **USE** the script (`generate-news-enhanced.ts`) ONLY for HTML formatting — the script creates a shell, YOU fill it with analysis -> 4. **REPLACE** every `AI_MUST_REPLACE` marker with real analysis — ZERO markers may remain -> 5. **ITERATE** — read ALL your output back completely and IMPROVE every section (minimum 2 full passes) -> 6. **VERIFY** article quality: minimum 1000 words, SWOT analysis, stakeholder perspectives, dok_id citations -> 7. **SPEND THE FULL TIME** — use at least 45 of the 60 allocated minutes doing real work -> -> 🔴 **ITERATIVE IMPROVEMENT IS MANDATORY (2+ passes):** -> - **Analysis Pass 1** (15 min): Create analysis for every document following templates -> - **Analysis Pass 2** (7 min): Read ALL analysis back, improve evidence, diagrams, cross-references -> - **Article Pass 1** (10 min): Generate articles with AI-written content from analysis -> - **Article Pass 2** (8 min): Read ALL articles back completely, improve every section -> - **NEVER complete early** — if you finish ahead, use remaining time to deepen analysis -> -> **If the final article reads like a list of document titles with generic descriptions, you have FAILED.** Rewrite with genuine political analysis before committing. - - -## 🚨 Safe Output Guarantee - -Every run MUST end with exactly one safe output call: `safeoutputs___create_pull_request` when artifacts exist, `safeoutputs___noop` only when MCP is unreachable AND no artifacts exist. 
Time guard: if >35 min elapsed with no safe output called, stop and call one immediately. - -## 🔧 Workflow Dispatch Parameters - -- **force_generation** = `${{ github.event.inputs.force_generation }}` -- **languages** = `${{ github.event.inputs.languages }}` -- **analysis_depth** = `${{ github.event.inputs.analysis_depth }}` - -If **force_generation** is `true`, generate articles even if recent ones exist. Use the **languages** value to determine which languages to generate. - -## 🚨 CRITICAL: Single Article Type Focus - -**This workflow generates ONLY `propositions` articles.** Do not generate other article types. - -## 🧠 Repo Memory - -Uses `memory/news-generation` branch. START: read `memory/news-generation/last-run-news-propositions.json` + `memory/news-generation/covered-documents/{YYYY-MM-DD}.json`. END: update both + `memory/news-generation/translation-status.json`. Skip already-covered dok_ids. - -## ⏱️ Time Budget (45 minutes) — ENFORCED Minimum 40 Minutes - -> 🔴 **SYSTEMIC ISSUE (PR #1794 audit, 2026-04-16)**: ALL news workflows completing in 13-22 min of 45-min allocation, producing shallow analysis. Agent MUST use at least 40 of 45 minutes. Completion < 40 min = insufficient iteration = REJECTED. - -```bash -date +%s > /tmp/start_time.txt -read START_TIME < /tmp/start_time.txt -``` - -- **Minutes 0–3**: Date check, MCP warm-up with `get_sync_status()` -- **Minutes 3–6**: Run download-parliamentary-data pipeline (download data) -- **Minutes 6–21**: 🚨 **AI Analysis Pass 1 (15 min minimum)**: Read ALL methodology guides, create per-file analysis for EVERY document with Mermaid diagrams, evidence tables, SWOT entries. -- **Minutes 21–22**: 🚨 **AI Analysis Pass 2 (Part A, start)**: Begin reading ALL analysis artifacts back and identify improvement targets. -- **Minutes 22–25**: 🫀 **Heartbeat PR** — `git add && git commit` analysis artifacts so far, then `safeoutputs___create_pull_request` (title `🫀 Heartbeat - Propositions - {date}`). 
Refreshes the safeoutputs MCP session (idle timeout ~30–35 min) AND preserves work if later phases fail. Run `git checkout main` after the call so subsequent commits don't stack onto the frozen patch. -- **Minutes 25–28**: 🚨 **AI Analysis Pass 2 (Part B, complete — 6 min improvement work total across Parts A+B)**: Improve every section, replace ALL script stubs with AI analysis. Run enrichment verification gate. -- **Minutes 28–30**: Run ENFORCED Minimum Time Gate + Enrichment Verification Gate (SHARED_PROMPT_PATTERNS.md). Both MUST pass. -- **Minutes 30–36**: Generate articles for core languages (EN, SV) using `npx tsx scripts/generate-news-enhanced.ts` -- **Minutes 36–40**: 🚨 **Article Improvement Pass**: Read ALL articles back, replace AI_MUST_REPLACE markers, improve content. Run article quality component gate. -- **Minutes 40–43**: Validate, commit, create PR with `safeoutputs___create_pull_request` -- **Minutes 43–45**: 🚨 **HARD DEADLINE** — If no safe output yet: if ANY artifacts/files were created, IMMEDIATELY stage, commit, call `safeoutputs___create_pull_request` with partial work. ONLY call `safeoutputs___noop` if truly ZERO files were created. - -> ⚠️ **Analysis phase is 22 minutes minimum (Pass 1: 15 min + Pass 2: 7 min)** — every analysis file must contain color-coded Mermaid diagrams, structured evidence tables with dok_id citations, and follow template structure exactly. ALL script-generated stubs MUST be replaced with AI-enriched analysis. Run the ENFORCED gates from SHARED_PROMPT_PATTERNS.md before proceeding to article generation. - -## ⚠️ CRITICAL: Bash Tool Call Format - -> **Full reference:** See `SHARED_PROMPT_PATTERNS.md` → "Bash Tool Call Format". Key rule: every `bash` call MUST have both `command` AND `description` parameters. Example: `bash({ command: "date -u '+%Y-%m-%d'", description: "Get current UTC date" })`. Calls missing either field fail with `Multiple validation errors: - "command": Required - "description": Required`. 
- -## 🛡️ AWF Shell Safety - -> **Full reference:** See `SHARED_PROMPT_PATTERNS.md` → "AWF Shell Safety". Summary: use `$VAR` not `$`+`{VAR}`, use `find -exec` not `$(...)`, set defaults with `if/then` before using `$VAR`. - -## 🔤 UTF-8 Encoding - -> **Full reference:** See `SHARED_PROMPT_PATTERNS.md` → "UTF-8 Encoding". Summary: use native UTF-8 (`ö`, `ä`, `å`) — NEVER HTML entities (`ö`, `ä`). Author: `James Pether Sörling`. - - -## 🚫 CRITICAL: Article Generation Safety - -**Articles MUST be generated using `npx tsx scripts/generate-news-enhanced.ts` — NEVER manually.** - -The repository provides a complete article generation pipeline. You MUST use it (see Generation Steps below for the full `LANG_ARG` derivation from the `languages` dispatch input; default is `en,sv`): -```bash -source scripts/mcp-setup.sh && npx tsx scripts/generate-news-enhanced.ts --types=propositions --languages="$LANG_ARG" --skip-existing -``` - -**❌ NEVER do any of the following:** -- NEVER use `python3` or `python3 -c` to build HTML article files -- NEVER create `.py` scripts to generate articles -- NEVER use bash heredoc (`cat > file << 'EOF'`) to write HTML files — it silently truncates large content -- NEVER manually construct HTML articles line-by-line with `echo`, `printf`, or any other method -- NEVER spend more than 5 minutes attempting to manually build article HTML - -**If `generate-news-enhanced.ts` fails or returns 0 articles:** -1. Check if MCP data was returned (retry MCP calls if needed) -2. Check if analysis artifacts exist in `analysis/daily/YYYY-MM-DD/` — if yes, commit them and create an analysis-only PR -3. If MCP server is unreachable AND no data was downloaded AND no analysis artifacts exist, use `safeoutputs___noop` — this is the ONLY valid noop scenario -4. 
Do NOT attempt to manually create articles as a fallback - -## Required Skills - -Consult as needed — do NOT read all files upfront: -- **Skills:** `.github/skills/editorial-standards/SKILL.md`, `.github/skills/swedish-political-system/SKILL.md`, `.github/skills/legislative-monitoring/SKILL.md`, `.github/skills/riksdag-regering-mcp/SKILL.md`, `.github/skills/language-expertise/SKILL.md`, `.github/skills/gh-aw-safe-outputs/SKILL.md` -- **Analysis:** `scripts/prompts/v2/political-analysis.md`, `per-file-intelligence-analysis.md`, `quality-criteria.md` -- **Methodology:** `analysis/methodologies/ai-driven-analysis-guide.md` (v5.0) + `analysis/templates/per-file-political-intelligence.md` - -## 📊 MANDATORY Multi-Step AI Analysis Framework - -### Standardised Analysis Depth Gate - -> ⚠️ **Default is `deep`** — not `standard`. Analysis must always produce publication-quality output with Mermaid diagrams and evidence tables. - -| Depth | AI iterations | SWOT stakeholders | Charts | Mindmap | Min. analysis time | -|-------|--------------|-------------------|--------|---------|-------------------| -| standard | 1-2 | ≥3 | ≥1 | optional | 10 minutes | -| deep | 2-3 | ≥5 | ≥2 | required | 15 minutes | -| comprehensive | 3+ | ≥7 | ≥3 | required | 20 minutes | - -**Minimum requirement for ALL depths**: Every analysis file must contain at least 1 color-coded Mermaid diagram, structured evidence tables with dok_id citations, and follow the corresponding template structure exactly. Plain prose without tables/diagrams is NEVER acceptable regardless of depth level. - -> **Read `analysis_depth` input first** (default: `deep`). This controls iteration count and section requirements. - -Based on the editorial profile for `propositions` (from `scripts/editorial-framework.ts`): -- **SWOT**: condensed (3 stakeholder perspectives per quadrant) for `standard`; full (5+) for `deep`/`comprehensive` -- **Dashboard**: required (min. 1 Chart.js chart for `standard`; min. 
2 for `deep`/`comprehensive`) -- **Mindmap**: optional for `standard`; required for `deep`/`comprehensive` -- **Min. stakeholders**: 3 perspectives (`standard`), 5 (`deep`/`comprehensive`) -- **AI iterations**: 1-2 (standard), 2-3 (deep), 3+ (comprehensive) - -### 🗳️ Election 2026 Lens (Mandatory — v5.0) - -Every analysis MUST include an **Election 2026 Implications** section assessing: Electoral Impact, Coalition Scenarios, Voter Salience, Campaign Vulnerability, and Policy Legacy. Use the **5-level confidence scale** (⬛VERY LOW → 🟥LOW → 🟧MEDIUM → 🟩HIGH → 🟦VERY HIGH). See `analysis/methodologies/ai-driven-analysis-guide.md` v5.0 for full criteria. - -### Phase 1 — Data Collection & Initial Analysis -1. Fetch MCP data (`get_propositioner`, `get_sync_status`, cross-reference `get_betankanden`) -2. Detect policy domains and extract legislative timeline for each proposition -3. Build initial outline: lede, thematic groupings by policy area, key takeaways - -### Phase 2 — Iterative Depth Enhancement (repeat per `analysis_depth`) -For each AI iteration: -1. **SWOT Analysis**: Write SWOT analysis with ≥3 stakeholder perspectives (≥5 when `analysis_depth` is `deep` or `comprehensive`) as publication-ready prose and bullet points -2. **Policy Comparison Summary**: Provide a concise markdown table or bullet list with ≥1 comparative policy metric set (≥2 for `deep`/`comprehensive`) suitable for later manual visualization if needed; do not assume any automatic chart rendering -3. **Impact Map**: For `deep`/`comprehensive`, describe policy impact connections as a nested markdown bullet list (mindmap-style) that can be published as text without requiring a renderer -4. 
**Quality Gate** (check before next iteration): - - Verify legislative timeline is included per proposition - - Verify no identical "Why It Matters" text across entries - - Verify all Swedish API text is translated - - Verify word count ≥ 800 - - If failing any check: re-generate the failing section before proceeding - -### Phase 3 — Final Quality Gate Before PR -Run all validation checks from the **MANDATORY Quality Validation** section below before committing. - -## MANDATORY Date Validation - -```bash -echo "=== Date Validation Check ===" -date -u "+Current UTC: %A %Y-%m-%d %H:%M:%S" -echo "Article Type: propositions" -echo "============================" -``` - -## 📅 Riksmöte (Parliamentary Session) Calculation - -September+ → `rm = "{year}/{year+1 2-digit}"` (e.g. Oct 2026 → `2026/27`). Before September → `rm = "{year-1}/{year 2-digit}"` (e.g. Feb 2026 → `2025/26`). Use in ALL MCP queries requiring `rm`. - -## MANDATORY Deduplication Check - -Before generating articles, check if articles already exist for the target date. 
**This check controls article GENERATION only — the deep political analysis phase ALWAYS runs regardless.** -```bash -# Resolve article date: use workflow_dispatch input when provided, fallback to UTC today -ARTICLE_DATE="${{ github.event.inputs.article_date }}" -if [ -z "$ARTICLE_DATE" ]; then - date -u +%Y-%m-%d > /tmp/today.txt - read ARTICLE_DATE < /tmp/today.txt -fi -ARTICLE_TYPE="government-propositions" -# Derive FORCE_GENERATION from the workflow_dispatch input -FORCE_GENERATION="${{ github.event.inputs.force_generation || 'false' }}" -ls news/$ARTICLE_DATE-$ARTICLE_TYPE-en.html 2>/dev/null | wc -l > /tmp/existing_count.txt -read EXISTING < /tmp/existing_count.txt -if [ "$EXISTING" -gt 0 ] && [ "$FORCE_GENERATION" != "true" ]; then - echo "📋 Articles for $ARTICLE_DATE/$ARTICLE_TYPE already exist — article generation will be skipped (analysis still runs)" - SKIP_ARTICLE_GENERATION=true - echo "SKIP_ARTICLE_GENERATION=true" >> "$GITHUB_ENV" -fi -# NOTE: Do NOT exit here or call safeoutputs___noop — analysis phase MUST still execute -# Later article-generation steps MUST gate on: if [ "$SKIP_ARTICLE_GENERATION" != "true" ]; then ... - -``` - -> **🚨 NEVER call `safeoutputs___noop` because articles already exist.** If articles exist, the workflow MUST still run the full 15-20 minute deep political analysis phase and commit analysis artifacts. The dedup check only controls whether NEW HTML articles are generated — analysis is the primary output and always runs. If analysis produces artifacts, use `safeoutputs___create_pull_request` with `analysis-only` label. - -## MANDATORY MCP Health Gate - -> **The step-level pre-warm (6 attempts × 20s) already mitigates Render.com cold starts.** This in-prompt gate is a lightweight verification — NOT a full retry loop. Do NOT spend more than 90 seconds here. -> -> **📖 Full MCP architecture, tool names, and calling conventions:** See `SHARED_PROMPT_PATTERNS.md` → "MCP Architecture & Tool Reference" section. 
Tool names are EXACT: riksdag tools use underscores (`get_sync_status`), World Bank uses hyphens (`get-economic-data`), SCB uses underscores (`search_tables`). - -1. Call `get_sync_status({})` — retry up to **3×** (20s wait between each, not 45s — the server is already warm from the step-level pre-warm) -2. If you get **"unknown tool"** or **"0 tools registered"** errors after 3 attempts, run a quick diagnostic: -```bash -echo "🔍 MCP Quick Diagnostic" -echo "Direct MCP server:" && curl -sf --max-time 15 -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}' "https://riksdag-regering-ai.onrender.com/mcp" 2>/dev/null | head -c 200 || echo "UNREACHABLE" -``` -3. After 3 failures → `safeoutputs___noop({"message": "MCP server unavailable after 3 attempts — step-level pre-warm also failed"})` -4. **ALL content MUST come from live MCP data.** Never use cached articles, stale data, or AI-fabricated content. -5. **⏱️ Do NOT spend more than 2 minutes on MCP warmup** — proceed to analysis immediately once `get_sync_status` succeeds. - -## 🛡️ File Ownership Contract - -Content workflows: only create/modify **EN and SV** files (`news/YYYY-MM-DD-*-en.html`, `*-sv.html`). Validate with `npx tsx scripts/validate-file-ownership.ts content`. Fix violations: `git restore --staged --worktree -- <file>` (tracked) or `rm <file>` (untracked). - -### Branch Naming Convention - -Branch: `news/content/{YYYY-MM-DD}/{article-type}` (e.g. `news/content/2026-03-23/propositions`). `safeoutputs___create_pull_request` handles this automatically. - -## MANDATORY PR Creation - -### HOW SAFE PR CREATION WORKS - -> `safeoutputs___create_pull_request` handles branch creation, push, and PR opening — do NOT run `git push` or `git checkout -b` manually. Stage files, then call the tool directly. 
- - -```bash -# Stage articles and analysis — scoped to article type to stay within 100-file PR limit -# CRITICAL: Stage ONLY today's new articles (EN/SV), NOT all existing news/ -# Staging news/*propositions*.html would include 450+ existing files, many of which -# may have been modified by auto-fix scripts, causing E003 (>100 files) PR failure. -git add "news/$ARTICLE_DATE-government-propositions-en.html" 2>/dev/null || true -git add "news/$ARTICLE_DATE-government-propositions-sv.html" 2>/dev/null || true -git add news/metadata/ 2>/dev/null || true -# Use $ANALYSIS_SUBFOLDER (set during Run Suffix Resolution above); fallback to base type -if [ -z "$ANALYSIS_SUBFOLDER" ]; then - ANALYSIS_SUBFOLDER="propositions" -fi -# Stage analysis summary .md files ONLY — EXCLUDE documents/ to stay under 100-file limit. -# With --limit 50, documents/ alone can contain 100+ files (50 JSON + 50 analysis.md). -git add "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER"/*.md 2>/dev/null || true -git add "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER"/*.json 2>/dev/null || true -# 🚨 HARD UNSTAGE: NEVER commit analysis/data/ — it is an MCP response cache populated by -# download-parliamentary-data.ts (6 doc types × ~40 files = 240+ files). It must stay local. -# Committing it caused E003 "received 258 files" in news-motions run 24653843681 (PR #1867). -# Only news-realtime-monitor stages analysis/data/ intentionally; this workflow never should. -# 🚫 DO NOT run `git add analysis/data/...` anywhere in this workflow. -git reset HEAD -- analysis/data/ 2>/dev/null || true -# Enforce safe-outputs 100-file PR limit (AWF-safe: no $(...) 
— write to temp file + read back) -git diff --cached --name-only > /tmp/staged_files.txt -awk 'END{print NR}' /tmp/staged_files.txt > /tmp/staged_count.txt -STAGED_COUNT=0 -read STAGED_COUNT < /tmp/staged_count.txt 2>/dev/null || true -echo "📊 Staged file count: $STAGED_COUNT (limit: 100)" -if [ "$STAGED_COUNT" -gt 90 ]; then - echo "⚠️ $STAGED_COUNT files exceeds safe threshold. Removing metadata to reduce count." - git reset HEAD -- news/metadata/ 2>/dev/null || true - git diff --cached --name-only > /tmp/staged_files.txt - awk 'END{print NR}' /tmp/staged_files.txt > /tmp/staged_count.txt - STAGED_COUNT=0 - read STAGED_COUNT < /tmp/staged_count.txt 2>/dev/null || true -fi -if [ "$STAGED_COUNT" -gt 90 ]; then - echo "⚠️ Still $STAGED_COUNT files. Removing non-essential analysis — keeping core summaries." - # Graduated pruning: remove individual doc-level analysis JSON first, keep synthesis/scoring/risk .md - # If still over limit, all .json goes but .md summaries (synthesis-summary.md, risk-assessment.md) survive - git reset HEAD -- "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER"/*-analysis.json 2>/dev/null || true - git reset HEAD -- "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER"/*-details.json 2>/dev/null || true - git diff --cached --name-only > /tmp/staged_files.txt - awk 'END{print NR}' /tmp/staged_files.txt > /tmp/staged_count.txt - STAGED_COUNT=0 - read STAGED_COUNT < /tmp/staged_count.txt 2>/dev/null || true -fi -if [ "$STAGED_COUNT" -gt 90 ]; then - echo "⚠️ Still $STAGED_COUNT files. Removing remaining analysis .json — keeping .md summaries." 
-  git reset HEAD -- "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER"/*.json 2>/dev/null || true
-  git diff --cached --name-only > /tmp/staged_files.txt
-  awk 'END{print NR}' /tmp/staged_files.txt > /tmp/staged_count.txt
-  STAGED_COUNT=0
-  read STAGED_COUNT < /tmp/staged_count.txt 2>/dev/null || true
-fi
-# FINAL HARD GUARD: if the count still exceeds 99, remove all analysis .md except synthesis-summary.md
-if [ "$STAGED_COUNT" -gt 99 ]; then
-  echo "🚨 CRITICAL: $STAGED_COUNT files still exceeds safe limit of 99. Removing all analysis .md except synthesis-summary."
-  git reset HEAD -- "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER"/*.md 2>/dev/null || true
-  git add "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER/synthesis-summary.md" 2>/dev/null || true
-  git diff --cached --name-only > /tmp/staged_files.txt
-  awk 'END{print NR}' /tmp/staged_files.txt > /tmp/staged_count.txt
-  STAGED_COUNT=0
-  read STAGED_COUNT < /tmp/staged_count.txt 2>/dev/null || true
-  echo "📊 After emergency pruning: $STAGED_COUNT files"
-fi
-echo "📊 Final staged file count: $STAGED_COUNT"
-git commit -m "Add propositions articles and analysis artifacts"
-```
->
-- ✅ `safeoutputs___create_pull_request` for articles or analysis-only PRs (`analysis-only` label + `propositions` label)
-- ✅ `safeoutputs___noop` ONLY if MCP unreachable after 3 attempts AND no analysis artifacts exist
-- ❌ NEVER noop because articles already exist — analysis always runs
-- ❌ Safe output tools are in your tool list — NEVER search for them via bash
-
-## 🌐 Dispatch Translation Workflow
-
-After creating the content PR, dispatch translations: `safeoutputs___dispatch_workflow({ "workflow_name": "news-translate", "inputs": { "article_date": "<YYYY-MM-DD>", "article_type": "<article-type>", "languages": "all-extra" } })`. See `news-translate.md` for the full translation quality rules.
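The dispatch above assumes the core EN/SV articles were actually produced. A small guard like the following can confirm both files exist before the dispatch is issued — this is a sketch only: the `ARTICLE_DATE` resolution and the `news/YYYY-MM-DD-government-propositions-{lang}.html` filename pattern are assumptions taken from the naming convention in this workflow, and the temp-file fallback mirrors the AWF-safe "no `$(...)`" pattern used elsewhere in this document.

```shell
# Sketch: verify both core EN/SV articles exist before dispatching news-translate.
# AWF-safe date resolution — no command substitution, write to temp file + read back.
if [ -z "$ARTICLE_DATE" ]; then
  date -u +%Y-%m-%d > /tmp/today.txt
  read ARTICLE_DATE < /tmp/today.txt
fi
DISPATCH_OK=false
# Filenames below are assumed from the article naming convention in this workflow.
if [ -f "news/$ARTICLE_DATE-government-propositions-en.html" ] && \
   [ -f "news/$ARTICLE_DATE-government-propositions-sv.html" ]; then
  DISPATCH_OK=true
fi
echo "dispatch_ok=$DISPATCH_OK"
```

Only when `dispatch_ok=true` should the `safeoutputs___dispatch_workflow` call above be made; otherwise skip the dispatch rather than queuing translations for articles that were never committed.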
-
-## MCP Tools
-
-**ALWAYS call `get_sync_status()` FIRST.**
-
-**Primary tool:** `get_propositioner` — fetches the latest government propositions
-**Cross-reference:** `search_dokument`, `analyze_gov_by_department`
-**Statistical enrichment:** SCB MCP — enrich with economic/fiscal data relevant to the propositions. Use committee-mapped SCB tables: fiscal→TAB1291/TAB1292 (FiU), taxation→TAB1291 (SkU), trade→TAB5802 (NU). **World Bank indicators (144 total)**: First `view analysis/worldbank/indicators-inventory.json` to discover indicators matching the proposition's committee — the JSON contains `policyAreas`, `committees`, and `mcpTool` fields for each indicator. Use MCP tools for indicators with an `mcpTool` field (e.g., `get-economic-data(countryCode="SE", indicator="GDP_GROWTH", years=10)`). See `SHARED_PROMPT_PATTERNS.md` §"WORLD BANK ECONOMIC CONTEXT INTEGRATION" for Chart.js chart templates (`economic-comparison`, `economic-trend`, `nordic-radar`). MUST generate ≥1 economic chart per proposition using committee-matched indicators.
-**Fact-checking:** When propositions reference specific statistics, cross-reference them against SCB/World Bank data using `scripts/statistical-claims-detector.ts` patterns.
- -```javascript -get_sync_status({}) -get_propositioner({ rm: <calculated riksmöte>, limit: 20 }) - -// SCB enrichment (optional — wrap in try/catch, do not block generation on SCB failures): -// search_tables({ query: "statsbudget offentliga finanser BNP", limit: 5 }) -// query_table({ table_id: "<id>", value_codes: { Tid: "top(4)" } }) -``` - -## Generation Steps - -### Step 1: Check Existing Articles (Analysis Always Runs) -🚨 **FULL ANALYSIS BEFORE ANY ARTICLE (BLOCKING)**: The complete deep political analysis phase following [`analysis/methodologies/ai-driven-analysis-guide.md`](../../analysis/methodologies/ai-driven-analysis-guide.md) (Rule 0 two-pass iteration + Rules 6–8 depth tiers, 15 min Pass 1 + 7 min Pass 2 minimum, ALL 9 required artifacts) **MUST** complete **BEFORE** any article HTML is created or updated. Articles MUST be (re)generated from the improved Pass 2 analysis — never from Pass 1 stubs, never from scripts alone, never skipping Pass 2. Violations = REJECTED PR (PR #1705 comment audit, 2026-04-18). - -Check if propositions articles already exist for the target date. If they do, skip article generation but **ALWAYS run the full deep political analysis phase** — analysis is the primary output and must execute on every run regardless of article existence. - -### Step 2: Query MCP -```javascript -get_sync_status({}) -get_propositioner({ rm: <calculated riksmöte>, limit: 20 }) -``` - -### Step 2.5: Run Pre-Article Analysis Pipeline - -**CRITICAL: Download data first, then AI creates ALL 9 analysis artifacts.** `download-parliamentary-data.ts` downloads raw data from riksdag-regering-mcp ONLY — it performs NO analysis. The AI agent MUST: -1. Read `analysis/methodologies/ai-driven-analysis-guide.md` fully -2. Read ALL 8 templates in `analysis/templates/` -3. 
Create ALL 9 analysis files in `analysis/daily/YYYY-MM-DD/propositions/` using evidence from the downloaded data - -**NEVER write or copy analysis files to the parent date directory** — doing so causes merge conflicts when multiple doc-type workflows run on the same date. The `analysis-reader.ts` automatically scans subdirectories, so root-level copies are NOT needed. After creating ALL analysis files, run the **9-Artifact Completeness Gate** from `SHARED_PROMPT_PATTERNS.md` §"9 REQUIRED Analysis Artifacts" to verify ALL 9 files exist. - -If a prior merged run already produced `analysis/daily/$ARTICLE_DATE/propositions/synthesis-summary.md`, the Run Suffix Resolution pattern auto-suffixes to `propositions-2/`, `propositions-3/`, etc. — unless `force_generation=true` (which deliberately overwrites the base folder). See SHARED_PROMPT_PATTERNS.md "Run Suffix Resolution". - -Key steps: resolve `ARTICLE_DATE` from input or today → check `data-download-manifest.md` → if 0 docs, loop `DAYS_BACK` 1–7 using `date -u -d "$ARTICLE_DATE - $DAYS_BACK days"`, run `download-parliamentary-data.ts --date "$LOOKBACK_DATE"` → copy artifacts from found date to original date folder → run `catalog-downloaded-data.ts --pending-only`. See `SHARED_PROMPT_PATTERNS.md` §"Data Lookback Fallback Strategy" for full bash implementation. - -### 🔄 Data Lookback Fallback - -> 🚨 **CRITICAL RULE**: Never produce empty/stub analysis. If no data for today, look back to find unanalyzed data. - -```bash -[ -f /tmp/hhmm.env ] && . 
/tmp/hhmm.env -if [ -z "$ARTICLE_DATE" ]; then - date -u +%Y-%m-%d > /tmp/today.txt - read ARTICLE_DATE < /tmp/today.txt -fi -ANALYSIS_DIR="analysis/daily/$ARTICLE_DATE/propositions" -find "$ANALYSIS_DIR" -type f 2>/dev/null | wc -l > /tmp/analysis_count.txt -read ANALYSIS_COUNT < /tmp/analysis_count.txt -echo "Analysis artifacts: $ANALYSIS_COUNT files in $ANALYSIS_DIR" -``` - -> **🚨 CRITICAL RULE: Never call `safeoutputs___noop` if analysis artifacts exist.** If the pre-article analysis pipeline produced ANY output files, you MUST commit them via `safeoutputs___create_pull_request` — even if no articles are generated. Use an analysis-only PR with title: `📊 Analysis Only - Propositions - {date}` and label `analysis-only`. Only use `safeoutputs___noop` if the analysis pipeline produced ZERO output files (truly nothing to analyze). - -### 🔬 Step 2b: Read ALL Analysis Files (MANDATORY — before article generation) - -> 🔴 **NON-NEGOTIABLE**: The AI agent MUST `cat` every analysis `.md` file BEFORE generating any article HTML. Analysis and articles are created in the **same workflow run** — there is zero excuse for not reading the analysis. Articles written without reading analysis are shallow and REJECTED. See SHARED_PROMPT_PATTERNS.md §"MANDATORY PRE-ARTICLE ANALYSIS READING". - -```bash -ANALYSIS_SUBFOLDER="propositions" -ANALYSIS_BASE="analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER" - -echo "📖 Reading ALL analysis files from $ANALYSIS_BASE..." -if [ -d "$ANALYSIS_BASE" ]; then - for MD_FILE in "$ANALYSIS_BASE"/*.md; do - if [ -f "$MD_FILE" ]; then - echo "--- Reading: $MD_FILE ---" - cat "$MD_FILE" - echo "" - fi - done - if [ -d "$ANALYSIS_BASE/documents" ]; then - echo "📄 Reading per-document analyses..." 
- for DOC_FILE in "$ANALYSIS_BASE/documents"/*.md; do - if [ -f "$DOC_FILE" ]; then - echo "--- Per-doc: $DOC_FILE ---" - cat "$DOC_FILE" - echo "" - fi - done - fi - find "$ANALYSIS_BASE" -name "*.md" -type f 2>/dev/null | wc -l > /tmp/analysis_file_count.txt - read ANALYSIS_FILE_COUNT < /tmp/analysis_file_count.txt - echo "✅ Read $ANALYSIS_FILE_COUNT analysis files — these MUST drive article content" -else - echo "⚠️ No analysis directory found at $ANALYSIS_BASE — will use MCP fallback for article content" -fi -``` - -> **After reading, confirm you loaded the analysis** by noting: (1) number of files read, (2) top 3 significance-ranked findings, (3) key risk scores. If you cannot produce this summary, you have NOT read the analysis. - -### Step 3: Generate Articles - -```bash -# Set LANGUAGES_INPUT to the value shown in Workflow Dispatch Parameters above -LANGUAGES_INPUT="<value from Workflow Dispatch Parameters>" -[ -z "$LANGUAGES_INPUT" ] && LANGUAGES_INPUT="all" - -case "$LANGUAGES_INPUT" in - "nordic") LANG_ARG="en,sv,da,no,fi" ;; - "eu-core") LANG_ARG="en,sv,de,fr,es,nl" ;; - "all") LANG_ARG="en,sv,da,no,fi,de,fr,es,nl,ar,he,ja,ko,zh" ;; - *) LANG_ARG="$LANGUAGES_INPUT" ;; -esac - -source scripts/mcp-setup.sh && npx tsx scripts/generate-news-enhanced.ts \ - --types=propositions \ - --languages="$LANG_ARG" \ - --skip-existing -``` - -**Article Navigation Verification**: The `generate-news-enhanced.ts` script automatically includes all required navigation elements: -- **Language switcher** (`<nav class="language-switcher">`) after `<body>` with all 14 languages -- **Back-to-news top nav** (`<div class="article-top-nav">`) with localized back link after language switcher -- **Footer back-to-news link** in `<footer class="article-footer">` - -These elements are validated by `bash scripts/validate-news-generation.sh` (Checks 8–10). 
The fix script is a **fallback only** — do not run it by default: -```bash -# FALLBACK ONLY — use if validate-news-generation.sh reports missing navigation elements -npx tsx scripts/fix-article-navigation.ts -``` - ---- - -## Step 2.6: Economic Data Acquisition (MANDATORY) - -> **Contract**: [`.github/aw/ECONOMIC_DATA_CONTRACT.md`](../aw/ECONOMIC_DATA_CONTRACT.md) — the **single source of truth** for World Bank + SCB data, Chart.js visualisations, and AI commentary. Follow it exactly; the Step 6 quality gate (`scripts/validate-economic-context.ts`) **blocks the PR** if any element is missing. - -**What you MUST do before writing any prose:** - -1. `view analysis/worldbank/indicators-inventory.json` and pick every indicator whose `committees` / `policyAreas` match the day's source documents. -2. Call `world-bank.get-economic-data` / `get-social-data` / `get-health-data` / `get-education-data` for Sweden (10-year series for primary domains) and for DK/NO/FI/DE (5-year series for the top 3 indicators — needed for the Nordic comparison bars and radar). -3. Call `scb.search_tables` + `scb.query_table` using the committee → TAB mapping in `scripts/scb-context.ts`. **`language` MUST be `"sv"` or `"en"` — NEVER `"no"`** (SCB returns HTTP 400 "Unsupported language"). -4. Retry every World Bank call up to **3 times** on failure. Cache raw responses under `analysis/data/worldbank/<YYYY>/<indicator>-<country>.json` so later article types in the same daily run reuse the data. -5. 
Write `analysis/daily/<ARTICLE_DATE>/<ANALYSIS_SUBFOLDER>/economic-data.json` matching `analysis/schemas/economic-data.schema.json`: - -```jsonc -{ - "version": "1.0", - "articleType": "propositions", - "date": "<YYYY-MM-DD>", - "policyDomains": ["fiscal policy", "labor market"], - "dataPoints": [ - { "countryCode": "SWE", "countryName": "Sweden", "indicatorId": "NY.GDP.MKTP.KD.ZG", "date": "2024", "value": 0.82 }, - { "countryCode": "DNK", "countryName": "Denmark", "indicatorId": "NY.GDP.MKTP.KD.ZG", "date": "2024", "value": 1.75 } - ], - "commentary": "<will be filled in Step 3d>", - "source": { "worldBank": ["NY.GDP.MKTP.KD.ZG", "FP.CPI.TOTL.ZG"], "scb": ["TAB1291"] } -} -``` - -**Non-negotiable**: `dataPoints` MUST be non-empty. The HTML renderer (`scripts/data-transformers/content-generators/economic-dashboard-section.ts`) emits real Chart.js canvases only when the file exists with entries — otherwise the validator fails the PR. - -**Minimum coverage (enforced by the validator):** see the matrix in `ECONOMIC_DATA_CONTRACT.md` §"Coverage matrix" for this article type's chart count, commentary word minimum, and D3 requirement. - ---- -### Step 3b: AI Title, Meta Description & Analysis References (v5.0 — Analysis-Driven) - -> 🚨 **MANDATORY** — After article HTML is generated, the AI MUST read the completed synthesis-summary.md and use its "AI-Recommended Article Metadata" section to drive title, description, and SEO. See `SHARED_PROMPT_PATTERNS.md` §"AI-DRIVEN TITLE & META DESCRIPTION GENERATION" and `ai-driven-analysis-guide.md` §"Analysis-Driven Article Decision Protocol (v5.0)". - -**1. 
Read synthesis analysis first** — `cat "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER/synthesis-summary.md"` and extract: - - "Recommended Title (EN)" and "Recommended Title (SV)" — use as starting point - - "Meta Description (EN)" and "Meta Description (SV)" — use as starting point - - "Key Highlights" — verify title references at least one highlight - - "Article Decision" and "Article Priority" — validate publication decision - -**2. Generate newsworthy titles from analysis** — Read each article's content AND the synthesis findings, then generate a title following: `[Active Verb] + [Specific Actor/Institution] + [Concrete Policy Action]`. The title MUST reference findings from the synthesis — not generic category labels. Apply to ALL languages (not just English). BANNED: ❌ "Government Propositions: Policy Priorities This Week: Defense in Focus" or any title ending with ": {Topic} in Focus". - -**3. Generate AI meta descriptions from analysis** (150-160 chars) — Summarize the #1 ranked finding from synthesis significance-scoring. BANNED: ❌ "Analysis of N documents covering Published:, Why It Matters:" or any description starting with "Analysis of N documents". - -**4. 
🔴 Add analysis references section (MANDATORY — VERIFY AFTER)** — Insert the "📊 Analysis & Sources" HTML block (from SHARED_PROMPT_PATTERNS.md §ANALYSIS FILE GITHUB REFERENCES) before the article footer, linking to: -- `analysis/daily/$ARTICLE_DATE/propositions/synthesis-summary.md` -- `analysis/daily/$ARTICLE_DATE/propositions/swot-analysis.md` -- `analysis/daily/$ARTICLE_DATE/propositions/risk-assessment.md` -- `analysis/daily/$ARTICLE_DATE/propositions/threat-analysis.md` -- `analysis/daily/$ARTICLE_DATE/propositions/stakeholder-perspectives.md` -- `analysis/daily/$ARTICLE_DATE/propositions/significance-scoring.md` -- `analysis/daily/$ARTICLE_DATE/propositions/classification-results.md` -- `analysis/daily/$ARTICLE_DATE/propositions/cross-reference-map.md` -- `analysis/daily/$ARTICLE_DATE/propositions/data-download-manifest.md` -- `analysis/methodologies/ai-driven-analysis-guide.md` -- Per-document analyses in `documents/` subfolder - -**After inserting, VERIFY** by running: -```bash -for FILE in news/$ARTICLE_DATE-*propositions*-*.html news/$ARTICLE_DATE-government-propositions-*.html; do - if [ -f "$FILE" ] && ! grep -q 'class="analysis-references"' "$FILE"; then - echo "🔴 MISSING analysis-references in: $FILE — MUST FIX NOW" - fi -done -``` - -**5. Update all metadata in ALL languages** — For EVERY generated language file, ensure `<title>`, `<meta name="description">`, `<meta property="og:title">`, `<meta property="og:description">`, `<h1>`, Schema.org `headline`, `alternativeHeadline`, and `description` all reflect the AI-generated title and description. Non-English articles MUST have properly translated AI titles — not English titles or generic templates. - -### Step 3c: AI Content Quality Enforcement (v4.0 — MANDATORY) - -> 🚨 **v4.0 CRITICAL**: The AI MUST read pre-computed analysis and rewrite ALL script-generated stub content. See `SHARED_PROMPT_PATTERNS.md` §"AI ARTICLE CONTENT GENERATION" and `ai-driven-analysis-guide.md` v4.0. - -**1. 
Read pre-computed analysis** — Read synthesis, SWOT, risk, and stakeholder analysis from `analysis/daily/$ARTICLE_DATE/propositions/`. - -**2. Replace script-generated lede** — Replace any `"Analysis of N documents..."` placeholder with AI lede naming specific propositions, ministers, and political significance. - -**3. Replace boilerplate "Why It Matters"** — For EACH proposition, write unique analysis citing the proposition number (e.g., Prop. 2025/26:235), specific policy changes, budget impact (SEK amounts), and affected populations. BANNED: `"Touches on {X} policy..."` boilerplate. - -**4. Replace generic "Winners & Losers"** — Replace `"The political landscape remains fluid..."` with specific analysis naming government ministers who tabled the propositions and opposition parties likely to challenge them. - -**5. Replace excuse-as-analysis** — Replace `"No chamber debate data..."` with analysis of the proposition text itself or debate data from MCP `search_anforanden`. - -**6. 🔴 MANDATORY: Replace ALL Deep Analysis `AI_MUST_REPLACE` markers** — The script generates `<!-- AI_MUST_REPLACE: ... -->` markers in EVERY Deep Analysis subsection. You MUST: - - Search generated HTML for ALL `AI_MUST_REPLACE` markers and replace EACH with genuine political intelligence - - "Timeline & Context" → Specific scheduling strategy, why these propositions come now, government legislative timing - - "Why This Matters" → Specific political significance naming affected populations, budget amounts, policy shifts - - "Political Impact" → Name parties, ministers, vote arithmetic, coalition dynamics for each proposition - - "Actions & Consequences" → Detail agency implementation, regulatory changes, budget allocations per proposition - - "Critical Assessment" → Honest evaluation of feasibility, political risks, gaps between intent and likely outcome - - ZERO `AI_MUST_REPLACE` markers may survive in the final committed HTML - -**7. 
Handle empty analysis** — If synthesis reports "0 documents analyzed", use MCP `get_propositioner(rm="2025/26")` directly. NEVER publish with "0 documents analyzed" as content. - -**8. Add Strategic Context** — Explain whether propositions represent coordinated government offensive (pre-election legislative push) or routine business. Cross-reference with committee reports and motions from the same date. - -### Step 4: Translate, Validate & Verify Analysis Quality - -Run analysis references fix, validation, and HTMLHint before creating PR: -```bash -# 🔴 MANDATORY: Inject analysis references into any article missing them -npx tsx scripts/fix-analysis-references.ts --date "$ARTICLE_DATE" --rewrite --type propositions - -bash scripts/validate-news-generation.sh -VALIDATION_EXIT=$? -if [ "$VALIDATION_EXIT" -ne 0 ]; then - echo "❌ News generation validation failed. Fix the reported issues before creating a PR." - exit "$VALIDATION_EXIT" -fi - -# HTMLHint validation with auto-fix — SCOPED TO TODAY'S ARTICLES ONLY -# CRITICAL: Do NOT run htmlhint/--fix on all news/*-*.html — that modifies 450+ existing -# propositions articles which then get staged and exceed the 100-file PR limit (E003). -if [ -f "news/$ARTICLE_DATE-government-propositions-en.html" ] || [ -f "news/$ARTICLE_DATE-government-propositions-sv.html" ]; then - if ! npx htmlhint "news/$ARTICLE_DATE-government-propositions-en.html" "news/$ARTICLE_DATE-government-propositions-sv.html" 2>/dev/null; then - echo "⚠️ HTML validation errors in today's articles, attempting auto-fix (scoped to today only)..." - if [ -f "news/$ARTICLE_DATE-government-propositions-en.html" ]; then - npx tsx scripts/article-quality-enhancer.ts --fix "news/$ARTICLE_DATE-government-propositions-en.html" - fi - if [ -f "news/$ARTICLE_DATE-government-propositions-sv.html" ]; then - npx tsx scripts/article-quality-enhancer.ts --fix "news/$ARTICLE_DATE-government-propositions-sv.html" - fi - if ! 
npx htmlhint "news/$ARTICLE_DATE-government-propositions-en.html" "news/$ARTICLE_DATE-government-propositions-sv.html" 2>/dev/null; then - echo "⚠️ HTML validation still failing after auto-fix — manual review needed (continuing to PR)" - fi - fi -fi -``` - -Translate all Swedish content, regenerate indexes, validate, then create PR. - -**CRITICAL: Each article MUST contain real analysis, not just a list of translated links.** -Every generated article must include: -- An analytical lede paragraph about the government's legislative strategy (not just a document count) -- Legislative Pipeline section explaining where each proposition sits in the process -- "Why It Matters" analysis for each proposition with policy domain context -- Policy Implications section assessing the government's legislative ambition -- Committee referral analysis showing which policy areas are affected - -If the generated article lacks these analytical sections, manually add contextual analysis before committing. - -## MANDATORY Quality Validation - -After article generation, verify EACH article meets these minimum standards before committing. -Apply the quality rubric from **`scripts/prompts/v2/quality-criteria.md`** (minimum score: 7/10). -- **`scripts/prompts/v2/per-file-intelligence-analysis.md`** — Per-file AI analysis protocol -- **`analysis/methodologies/ai-driven-analysis-guide.md`** — Methodology for deep per-file analysis -- **`analysis/templates/per-file-political-intelligence.md`** — Per-file analysis output template - -### Iterative Analysis Protocol - -For each generated article, apply up to 3 iterations: -1. **Iteration 1** — Generate initial draft from MCP data -2. **Self-assess** — Score against quality rubric (Accuracy + Depth + Perspectives + Translation + Editorial) -3. **If score < 7**: Identify lowest-scoring dimension and regenerate those sections -4. **Iteration 2** — Address quality gaps, add budget/fiscal impact estimation -5. 
**If still < 7**: Final iteration — add coalition analysis and legislative timeline -6. **Maximum 3 iterations** — Never publish below 5/10 - -### Required Sections (at least 3 of 5): -1. **Analytical Lede** (paragraph, not just document count) -2. **Thematic Analysis** (documents grouped by policy theme) -3. **Strategic Context** (why these documents matter politically) -4. **Stakeholder Impact** (who benefits, who loses) -5. **What Happens Next** (expected timeline and outcomes) - -### Disqualifying Patterns: -- ❌ `"Filed by: Unknown (Unknown)"` — FIX author/party metadata before committing -- ❌ `data-translate="true"` spans in non-Swedish articles — TRANSLATE before committing -- ❌ Identical "Why It Matters" text for all entries — DIFFERENTIATE analysis per proposition -- ❌ Flat list of propositions without grouping — GROUP by policy theme or ministry -- ❌ Article under 500 words — EXPAND with analytical sections - -### Playwright Visual Validation -Run Playwright validation before creating the PR: -```bash -# HTMLHint validation -npx htmlhint "news/*-government-propositions-*.html" +# 📜 Government Propositions -# Playwright visual validation (accessibility, RTL, responsive) -npx tsx scripts/validate-articles-playwright.ts --filter "government-propositions" +Generates deep political intelligence articles on Swedish government propositions in core languages (EN, SV). Translations dispatched to `news-translate`. 
-# Validate JSON-LD cross-references -npx tsx scripts/validate-cross-references.ts news/*-government-propositions-*.html -``` +## What this workflow does -### Bash Validation Commands: -```bash -# Check for unknown authors (should return 0) -grep -l "Filed by: Unknown" news/*-government-propositions-*.html 2>/dev/null | wc -l || true +- **Article type**: `propositions` +- **Analysis subfolder**: `analysis/daily/$ARTICLE_DATE/propositions/` +- **Core languages produced**: `en`, `sv` (remaining 12 languages dispatched to `news-translate`) +- **One pull request per run** containing analysis + articles + visualisation data. -# Check for untranslated spans in English article (should return 0) -grep -c 'data-translate="true"' "news/$ARTICLE_DATE-government-propositions-en.html" 2>/dev/null || true +## Time budget (60 min, minimum 45 min of real work) -# Check word count of English article text content (warn if < 500; HTML tags stripped) -FILE="news/$ARTICLE_DATE-government-propositions-en.html" -if [ ! 
-f "$FILE" ]; then echo "WARNING: Expected article file not found: $FILE — check if generation succeeded"; else - sed 's/<[^>]*>/ /g' "$FILE" | tr -s '[:space:]' '\n' | grep -c '[[:alnum:]]' 2>/dev/null > /tmp/word_count.txt || echo 0 > /tmp/word_count.txt - read WORD_COUNT < /tmp/word_count.txt - echo "Content word count (HTML tags stripped): $WORD_COUNT" - if [ "$WORD_COUNT" -lt 500 ]; then echo "WARNING: Article content may be too short ($WORD_COUNT words) — consider expanding before PR"; fi -fi +| Minutes | Phase | Module | +|---------|-------|--------| +| 0–2 | MCP pre-warm + `get_sync_status` | 02 | +| 2–6 | Download data + catalogue | 03 | +| 6–25 | Analysis Pass 1 (methodology read + per-doc analyses + 9 artifacts) | 04 | +| 25–35 | Analysis Pass 2 (read-back + improvements) | 04 | +| 35–37 | Analysis Gate | 05 | +| 37–48 | Article Pass 1 + Pass 2 (EN, SV) | 06 | +| 48–55 | Visual + link validation | 06 | +| 55–60 | Stage, commit, **ONE** `safeoutputs___create_pull_request` | 07 | -# Check for duplicate "Why It Matters" content (should return empty) -grep -o 'Why It Matters[^<]*' "news/$ARTICLE_DATE-government-propositions-en.html" 2>/dev/null | sort | uniq -d || true -``` +Trim scope before quality. Never open a second PR to "save" partial work — there is no second PR. -### If Article Fails Quality Check: -1. Use bash to enhance the HTML with analytical sections -2. Replace generic "Why It Matters" with proposition-specific analysis -3. Add thematic grouping headers (e.g., by ministry or policy area) -4. Translate any remaining Swedish content +## Inputs -**Note**: For shared rules on news index files, metadata, sitemap generation, and what to commit, see the canonical guidance in `news-article-generator.md`. 
+- `article_date` — override date (defaults to today) +- `force_generation` — regenerate even if today's article exists (analysis is always refreshed regardless) +- `languages` — core content languages (default `en,sv`) +- `analysis_depth` — `standard` | `deep` (default) | `comprehensive` -## 🌐 Translation Quality +## Dedup & analysis-only path -EN/SV only: all headings, meta, content in correct language; no untranslated `data-translate` spans; Swedish API titles translated. Full rules: `news-translate.md`. -## Article Naming Convention -Files: `YYYY-MM-DD-government-propositions-{lang}.html` +If articles for `$ARTICLE_DATE` + `propositions` already exist **and** `force_generation=false`: +- Still run the full analysis pipeline (modules 03 → 04 → 05). +- Commit the analysis. +- Open the single PR with title `📊 Analysis Only — Government Propositions — $ARTICLE_DATE` and label `analysis-only`. -## Step 3d: Economic Commentary (MANDATORY) +Analysis is the primary product — a run never "does nothing" just because articles exist. -> After Step 3c and **before** calling `safeoutputs.create_pull_request`, re-open `economic-data.json` and replace the placeholder `commentary` string with a 2–4 sentence paragraph that: -> - cites **2–3 concrete numeric values** from `dataPoints`; -> - ties the numbers to the day's political developments (not definitions of indicators); -> - is written in plain English (translations are produced downstream by `news-translate`); -> - meets the minimum word count in the coverage matrix for this article type. -> -> Banned phrasings (the multi-dim quality score flags these): "The political landscape remains fluid…", "Touches on X policy…", pure indicator definitions. -> -> Full rules: [`.github/aw/ECONOMIC_DATA_CONTRACT.md`](../aw/ECONOMIC_DATA_CONTRACT.md) §"Writing the AI commentary — workflow Step 3d". 
+All other rules (bash format, AWF shell safety, MCP access, download pipeline, analysis methodology & gate, article generation, commit & PR policy) live in the imported modules. diff --git a/.github/workflows/news-realtime-monitor.lock.yml b/.github/workflows/news-realtime-monitor.lock.yml index 744728f5a..192047be1 100644 --- a/.github/workflows/news-realtime-monitor.lock.yml +++ b/.github/workflows/news-realtime-monitor.lock.yml @@ -1,4 +1,4 @@ -# gh-aw-metadata: {"schema_version":"v3","frontmatter_hash":"d1a441c4c5fe2056d169537bc095a1bd65cfe1e2358a05e1f776021af568d6ce","compiler_version":"v0.69.2","agent_id":"copilot","agent_model":"claude-opus-4.7"} +# gh-aw-metadata: {"schema_version":"v3","frontmatter_hash":"f5aa4b831845f568be00dd07bdc273d21cc4ebb6e76fcca95c69c5e1ef422c8c","compiler_version":"v0.69.2","agent_id":"copilot","agent_model":"claude-opus-4.7"} # gh-aw-manifest: {"version":1,"secrets":["COPILOT_GITHUB_TOKEN","GH_AW_CI_TRIGGER_TOKEN","GH_AW_GITHUB_MCP_SERVER_TOKEN","GH_AW_GITHUB_TOKEN","GITHUB_TOKEN"],"actions":[{"repo":"actions/checkout","sha":"de0fac2e4500dabe0009e67214ff5f5447ce83dd","version":"v6.0.2"},{"repo":"actions/download-artifact","sha":"3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c","version":"v8.0.1"},{"repo":"actions/github-script","sha":"373c709c69115d41ff229c7e5df9f8788daa9553","version":"v9"},{"repo":"actions/setup-node","sha":"6044e13b5dc448c55e2357c09f80417699197238","version":"6044e13b5dc448c55e2357c09f80417699197238"},{"repo":"actions/upload-artifact","sha":"043fb46d1a93c77aae656e7c1c64a875d1fc6a0a","version":"v7.0.1"},{"repo":"github/gh-aw-actions/setup","sha":"dca90cae5e2ec0ef2275f97efcb832793c86e082","version":"v0.69.2"}],"containers":[{"image":"alpine:latest"},{"image":"ghcr.io/github/gh-aw-firewall/agent:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/api-proxy:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/squid:0.25.26"},{"image":"ghcr.io/github/gh-aw-mcpg:v0.2.26"},{"image":"ghcr.io/github/github-mcp-server:v1.0.0"},{"ima
ge":"mcr.microsoft.com/playwright/mcp"},{"image":"node:25-alpine"},{"image":"node:lts-alpine"}]} # ___ _ _ # / _ \ | | (_) @@ -24,6 +24,18 @@ # # Monitors Riksdag and Government for real-time updates and generates breaking news articles in core languages (EN, SV) with Playwright validation. Translations handled by news-translate workflow. Runs twice daily on weekdays, once on weekends. # +# Resolved workflow manifest: +# Imports: +# - ../prompts/00-base-contract.md +# - ../prompts/01-bash-and-shell-safety.md +# - ../prompts/02-mcp-access.md +# - ../prompts/03-data-download.md +# - ../prompts/04-analysis-pipeline.md +# - ../prompts/05-analysis-gate.md +# - ../prompts/06-article-generation.md +# - ../prompts/07-commit-and-pr.md +# - ../prompts/ext/tier-c-aggregation.md +# # Secrets used: # - COPILOT_GITHUB_TOKEN # - GH_AW_CI_TRIGGER_TOKEN @@ -192,11 +204,6 @@ jobs: GH_AW_GITHUB_ACTOR: ${{ github.actor }} GH_AW_GITHUB_EVENT_COMMENT_ID: ${{ github.event.comment.id }} GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER: ${{ github.event.discussion.number }} - GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH: ${{ github.event.inputs.analysis_depth }} - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE: ${{ github.event.inputs.article_date }} - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_TYPES: ${{ github.event.inputs.article_types }} - GH_AW_GITHUB_EVENT_INPUTS_FOCUS: ${{ github.event.inputs.focus }} - GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES: ${{ github.event.inputs.languages }} GH_AW_GITHUB_EVENT_ISSUE_NUMBER: ${{ github.event.issue.number }} GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: ${{ github.event.pull_request.number }} GH_AW_GITHUB_REPOSITORY: ${{ github.repository }} @@ -207,9 +214,9 @@ jobs: run: | bash "${RUNNER_TEMP}/gh-aw/actions/create_prompt_first.sh" { - cat << 'GH_AW_PROMPT_12cac9a7c9fdae2c_EOF' + cat << 'GH_AW_PROMPT_cf77de82423850f2_EOF' <system> - GH_AW_PROMPT_12cac9a7c9fdae2c_EOF + GH_AW_PROMPT_cf77de82423850f2_EOF cat "${RUNNER_TEMP}/gh-aw/prompts/xpia.md" cat 
"${RUNNER_TEMP}/gh-aw/prompts/temp_folder_prompt.md" cat "${RUNNER_TEMP}/gh-aw/prompts/markdown.md" @@ -217,12 +224,12 @@ jobs: cat "${RUNNER_TEMP}/gh-aw/prompts/agentic_workflows_guide.md" cat "${RUNNER_TEMP}/gh-aw/prompts/repo_memory_prompt.md" cat "${RUNNER_TEMP}/gh-aw/prompts/safe_outputs_prompt.md" - cat << 'GH_AW_PROMPT_12cac9a7c9fdae2c_EOF' + cat << 'GH_AW_PROMPT_cf77de82423850f2_EOF' <safe-output-tools> - Tools: add_comment, create_pull_request(max:3), dispatch_workflow, missing_tool, missing_data, noop - GH_AW_PROMPT_12cac9a7c9fdae2c_EOF + Tools: add_comment, create_pull_request, dispatch_workflow, missing_tool, missing_data, noop + GH_AW_PROMPT_cf77de82423850f2_EOF cat "${RUNNER_TEMP}/gh-aw/prompts/safe_outputs_create_pull_request.md" - cat << 'GH_AW_PROMPT_12cac9a7c9fdae2c_EOF' + cat << 'GH_AW_PROMPT_cf77de82423850f2_EOF' </safe-output-tools> <github-context> The following GitHub context information is available for this workflow: @@ -252,22 +259,26 @@ jobs: {{/if}} </github-context> - GH_AW_PROMPT_12cac9a7c9fdae2c_EOF + GH_AW_PROMPT_cf77de82423850f2_EOF cat "${RUNNER_TEMP}/gh-aw/prompts/github_mcp_tools_with_safeoutputs_prompt.md" - cat << 'GH_AW_PROMPT_12cac9a7c9fdae2c_EOF' + cat << 'GH_AW_PROMPT_cf77de82423850f2_EOF' </system> + {{#runtime-import .github/prompts/00-base-contract.md}} + {{#runtime-import .github/prompts/01-bash-and-shell-safety.md}} + {{#runtime-import .github/prompts/02-mcp-access.md}} + {{#runtime-import .github/prompts/03-data-download.md}} + {{#runtime-import .github/prompts/04-analysis-pipeline.md}} + {{#runtime-import .github/prompts/05-analysis-gate.md}} + {{#runtime-import .github/prompts/06-article-generation.md}} + {{#runtime-import .github/prompts/07-commit-and-pr.md}} + {{#runtime-import .github/prompts/ext/tier-c-aggregation.md}} {{#runtime-import .github/workflows/news-realtime-monitor.md}} - GH_AW_PROMPT_12cac9a7c9fdae2c_EOF + GH_AW_PROMPT_cf77de82423850f2_EOF } > "$GH_AW_PROMPT" - name: Interpolate variables and render 
templates uses: actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9 env: GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt - GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH: ${{ github.event.inputs.analysis_depth }} - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE: ${{ github.event.inputs.article_date }} - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_TYPES: ${{ github.event.inputs.article_types }} - GH_AW_GITHUB_EVENT_INPUTS_FOCUS: ${{ github.event.inputs.focus }} - GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES: ${{ github.event.inputs.languages }} with: script: | const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); @@ -281,11 +292,6 @@ jobs: GH_AW_GITHUB_ACTOR: ${{ github.actor }} GH_AW_GITHUB_EVENT_COMMENT_ID: ${{ github.event.comment.id }} GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER: ${{ github.event.discussion.number }} - GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH: ${{ github.event.inputs.analysis_depth }} - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE: ${{ github.event.inputs.article_date }} - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_TYPES: ${{ github.event.inputs.article_types }} - GH_AW_GITHUB_EVENT_INPUTS_FOCUS: ${{ github.event.inputs.focus }} - GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES: ${{ github.event.inputs.languages }} GH_AW_GITHUB_EVENT_ISSUE_NUMBER: ${{ github.event.issue.number }} GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: ${{ github.event.pull_request.number }} GH_AW_GITHUB_REPOSITORY: ${{ github.repository }} @@ -311,11 +317,6 @@ jobs: GH_AW_GITHUB_ACTOR: process.env.GH_AW_GITHUB_ACTOR, GH_AW_GITHUB_EVENT_COMMENT_ID: process.env.GH_AW_GITHUB_EVENT_COMMENT_ID, GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER: process.env.GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER, - GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH: process.env.GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH, - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE: process.env.GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE, - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_TYPES: process.env.GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_TYPES, - GH_AW_GITHUB_EVENT_INPUTS_FOCUS: 
process.env.GH_AW_GITHUB_EVENT_INPUTS_FOCUS, - GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES: process.env.GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES, GH_AW_GITHUB_EVENT_ISSUE_NUMBER: process.env.GH_AW_GITHUB_EVENT_ISSUE_NUMBER, GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: process.env.GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER, GH_AW_GITHUB_REPOSITORY: process.env.GH_AW_GITHUB_REPOSITORY, @@ -417,7 +418,7 @@ jobs: run: | npm ci --prefer-offline --no-audit - name: Pre-warm MCP server (Render.com cold start mitigation) - run: "echo \"🔥 Pre-warming riksdag-regering MCP server via MCP protocol...\"\nMCP_URL=\"https://riksdag-regering-ai.onrender.com/mcp\"\nWARM=false\nfor i in 1 2 3 4 5 6; do\n RESP=$(curl -sf --max-time 30 -X POST \\\n -H \"Content-Type: application/json\" \\\n -d '{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\",\"params\":{}}' \\\n \"$MCP_URL\" 2>/dev/null) || true\n if echo \"$RESP\" | grep -q '\"tools\"'; then\n TOOL_COUNT=$(echo \"$RESP\" | grep -o '\"name\"' | wc -l)\n echo \"✅ MCP server responded on attempt $i with $TOOL_COUNT tools registered\"\n WARM=true\n break\n fi\n echo \"⏳ Attempt $i/6 — server may be cold-starting, waiting 20s...\"\n sleep 20\ndone\nif [ \"$WARM\" = \"false\" ]; then\n echo \"⚠️ MCP server did not respond after 6 attempts — agent will retry via in-prompt health gate\"\nfi\necho \"🔄 Starting background keep-alive pinger (every 30s, max 55 min — covers full 60-min workflow through safe-output PR creation)...\"\nKEEP_ALIVE_START=$(date +%s)\nKEEP_ALIVE_END=$((KEEP_ALIVE_START + 3300))\nexport MCP_URL KEEP_ALIVE_END\nnohup bash -c '\n while :; do\n NOW=$(date +%s)\n if [ \"$NOW\" -ge \"$KEEP_ALIVE_END\" ]; then\n break\n fi\n curl -sf --max-time 10 -X POST \\\n -H \"Content-Type: application/json\" \\\n -d \"{\\\"jsonrpc\\\":\\\"2.0\\\",\\\"id\\\":1,\\\"method\\\":\\\"tools/list\\\",\\\"params\\\":{}}\" \\\n \"$MCP_URL\" -o /dev/null 2>/dev/null || true\n sleep 30\n done\n' </dev/null >/tmp/mcp-keepalive.log 2>&1 &\nKEEP_ALIVE_PID=$!\ndisown 
\"$KEEP_ALIVE_PID\" 2>/dev/null || true\necho \"Keep-alive PID: $KEEP_ALIVE_PID (auto-exits after 55 min; log: /tmp/mcp-keepalive.log)\"\n" + run: "echo \"🔥 Pre-warming riksdag-regering MCP server via MCP protocol...\"\nMCP_URL=\"https://riksdag-regering-ai.onrender.com/mcp\"\nWARM=false\nfor i in 1 2 3 4 5 6; do\n RESP=$(curl -sf --max-time 30 -X POST \\\n -H \"Content-Type: application/json\" \\\n -d '{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\",\"params\":{}}' \\\n \"$MCP_URL\" 2>/dev/null) || true\n if echo \"$RESP\" | grep -q '\"tools\"'; then\n TOOL_COUNT=$(echo \"$RESP\" | grep -o '\"name\"' | wc -l)\n echo \"✅ MCP server responded on attempt $i with $TOOL_COUNT tools registered\"\n WARM=true\n break\n fi\n echo \"⏳ Attempt $i/6 — server may be cold-starting, waiting 20s...\"\n sleep 20\ndone\nif [ \"$WARM\" = \"false\" ]; then\n echo \"⚠️ MCP server did not respond after 6 attempts — agent will retry via in-prompt health gate\"\nfi\n" - name: Pre-flight external endpoint reachability check (runs before MCP Gateway) run: "echo \"🔍 Network Diagnostics — $(date -u '+%Y-%m-%dT%H:%M:%SZ')\"\necho \"═══════════════════════════════════════════\"\necho \"\"\necho \"📡 DNS Resolution Tests:\"\nfor domain in riksdag-regering-ai.onrender.com api.scb.se api.worldbank.org data.riksdagen.se www.riksdagen.se www.regeringen.se; do\n if nslookup \"$domain\" >/dev/null 2>&1; then\n IP=$(nslookup \"$domain\" 2>/dev/null | grep -A1 \"Name:\" | grep \"Address:\" | head -1 | awk '{print $2}')\n echo \" ✅ $domain → $IP\"\n else\n echo \" ❌ $domain — DNS FAILED\"\n fi\ndone\necho \"\"\necho \"🌐 HTTPS Connectivity Tests:\"\nfor url in \\\n \"https://riksdag-regering-ai.onrender.com/mcp\" \\\n \"https://api.scb.se/OV0104/v2beta\" \\\n \"https://api.worldbank.org/v2/country/SE?format=json\" \\\n \"https://data.riksdagen.se/dokumentlista/?sok=test&doktyp=bet&utformat=json&a=1\" \\\n; do\n HTTP_CODE=$(curl -s -o /dev/null -w \"%{http_code}\" --max-time 10 \"$url\" 2>/dev/null 
|| echo \"000\")\n DOMAIN=$(echo \"$url\" | sed 's|https://||' | cut -d/ -f1)\n if [ \"$HTTP_CODE\" -ge 200 ] && [ \"$HTTP_CODE\" -lt 400 ]; then\n echo \" ✅ $DOMAIN → HTTP $HTTP_CODE\"\n elif [ \"$HTTP_CODE\" = \"000\" ]; then\n echo \" ❌ $DOMAIN → TIMEOUT/UNREACHABLE\"\n else\n echo \" ⚠️ $DOMAIN → HTTP $HTTP_CODE\"\n fi\ndone\necho \"\"\necho \"🔌 MCP Server Tool Count:\"\nTOOL_RESP=$(curl -sf --max-time 15 -X POST \\\n -H \"Content-Type: application/json\" \\\n -d '{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\",\"params\":{}}' \\\n \"https://riksdag-regering-ai.onrender.com/mcp\" 2>/dev/null) || TOOL_RESP=\"\"\nif echo \"$TOOL_RESP\" | grep -q '\"tools\"'; then\n TOOL_COUNT=$(echo \"$TOOL_RESP\" | grep -o '\"name\"' | wc -l)\n echo \" ✅ riksdag-regering MCP: $TOOL_COUNT tools registered\"\nelse\n echo \" ❌ riksdag-regering MCP: No tools response (server may still be starting)\"\nfi\necho \"\"\necho \"═══════════════════════════════════════════\"\n" @@ -505,16 +506,16 @@ jobs: mkdir -p "${RUNNER_TEMP}/gh-aw/safeoutputs" mkdir -p /tmp/gh-aw/safeoutputs mkdir -p /tmp/gh-aw/mcp-logs/safeoutputs - cat > "${RUNNER_TEMP}/gh-aw/safeoutputs/config.json" << 'GH_AW_SAFE_OUTPUTS_CONFIG_218cd03549ed72be_EOF' - 
{"add_comment":{"max":1},"create_pull_request":{"draft":false,"expires":336,"labels":["agentic-news","analysis-data"],"max":3,"max_patch_size":4096,"protected_files":["package.json","bun.lockb","bunfig.toml","deno.json","deno.jsonc","deno.lock","global.json","NuGet.Config","Directory.Packages.props","mix.exs","mix.lock","go.mod","go.sum","stack.yaml","stack.yaml.lock","pom.xml","build.gradle","build.gradle.kts","settings.gradle","settings.gradle.kts","gradle.properties","package-lock.json","yarn.lock","pnpm-lock.yaml","npm-shrinkwrap.json","requirements.txt","Pipfile","Pipfile.lock","pyproject.toml","setup.py","setup.cfg","Gemfile","Gemfile.lock","uv.lock","CODEOWNERS","AGENTS.md","CLAUDE.md","GEMINI.md"],"protected_path_prefixes":[".github/",".agents/"]},"create_report_incomplete_issue":{},"dispatch_workflow":{"max":1,"workflow_files":{"news-translate":".lock.yml"},"workflows":["news-translate"]},"missing_data":{},"missing_tool":{},"noop":{"max":1,"report-as-issue":"true"},"push_repo_memory":{"memories":[{"dir":"/tmp/gh-aw/repo-memory/default","id":"default","max_file_count":50,"max_file_size":51200,"max_patch_size":51200}]},"report_incomplete":{}} - GH_AW_SAFE_OUTPUTS_CONFIG_218cd03549ed72be_EOF + cat > "${RUNNER_TEMP}/gh-aw/safeoutputs/config.json" << 'GH_AW_SAFE_OUTPUTS_CONFIG_0b861792875b19ae_EOF' + 
{"add_comment":{"max":1},"create_pull_request":{"draft":false,"expires":336,"labels":["agentic-news","analysis-data"],"max":1,"max_patch_size":4096,"protected_files":["package.json","bun.lockb","bunfig.toml","deno.json","deno.jsonc","deno.lock","global.json","NuGet.Config","Directory.Packages.props","mix.exs","mix.lock","go.mod","go.sum","stack.yaml","stack.yaml.lock","pom.xml","build.gradle","build.gradle.kts","settings.gradle","settings.gradle.kts","gradle.properties","package-lock.json","yarn.lock","pnpm-lock.yaml","npm-shrinkwrap.json","requirements.txt","Pipfile","Pipfile.lock","pyproject.toml","setup.py","setup.cfg","Gemfile","Gemfile.lock","uv.lock","CODEOWNERS","AGENTS.md","CLAUDE.md","GEMINI.md"],"protected_path_prefixes":[".github/",".agents/"]},"create_report_incomplete_issue":{},"dispatch_workflow":{"max":1,"workflow_files":{"news-translate":".lock.yml"},"workflows":["news-translate"]},"missing_data":{},"missing_tool":{},"noop":{"max":1,"report-as-issue":"true"},"push_repo_memory":{"memories":[{"dir":"/tmp/gh-aw/repo-memory/default","id":"default","max_file_count":50,"max_file_size":51200,"max_patch_size":51200}]},"report_incomplete":{}} + GH_AW_SAFE_OUTPUTS_CONFIG_0b861792875b19ae_EOF - name: Write Safe Outputs Tools env: GH_AW_TOOLS_META_JSON: | { "description_suffixes": { "add_comment": " CONSTRAINTS: Maximum 1 comment(s) can be added. Supports reply_to_id for discussion threading.", - "create_pull_request": " CONSTRAINTS: Maximum 3 pull request(s) can be created. Labels [\"agentic-news\" \"analysis-data\"] will be automatically added." + "create_pull_request": " CONSTRAINTS: Maximum 1 pull request(s) can be created. Labels [\"agentic-news\" \"analysis-data\"] will be automatically added." 
}, "repo_params": {}, "dynamic_tools": [ @@ -775,7 +776,7 @@ jobs: mkdir -p /home/runner/.copilot GH_AW_NODE=$(which node 2>/dev/null || command -v node 2>/dev/null || echo node) - cat << GH_AW_MCP_CONFIG_d97e41f5445c1b98_EOF | "$GH_AW_NODE" "${RUNNER_TEMP}/gh-aw/actions/start_mcp_gateway.cjs" + cat << GH_AW_MCP_CONFIG_5c4d4064e88f8dc5_EOF | "$GH_AW_NODE" "${RUNNER_TEMP}/gh-aw/actions/start_mcp_gateway.cjs" { "mcpServers": { "agenticworkflows": { @@ -905,7 +906,7 @@ jobs: "payloadDir": "${MCP_GATEWAY_PAYLOAD_DIR}" } } - GH_AW_MCP_CONFIG_d97e41f5445c1b98_EOF + GH_AW_MCP_CONFIG_5c4d4064e88f8dc5_EOF - name: Download activation artifact uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1 with: @@ -1592,7 +1593,7 @@ jobs: GH_AW_ALLOWED_DOMAINS: "*.githubusercontent.com,api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.imf.org,api.individual.githubcopilot.com,api.npms.io,api.scb.se,api.snapcraft.io,api.worldbank.org,archive.ubuntu.com,azure.archive.ubuntu.com,bun.sh,cdn.jsdelivr.net,cdn.playwright.dev,codeload.github.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,data.imf.org,data.riksdagen.se,deb.nodesource.com,deno.land,docs.github.com,esm.sh,get.pnpm.io,github-cloud.githubusercontent.com,github-cloud.s3.amazonaws.com,github.blog,github.com,github.githubassets.com,googleapis.deno.dev,googlechromelabs.github.io,hack23.com,hack23.github.io,host.docker.internal,json-schema.org,json.schemastore.org,jsr.io,keyserver.ubuntu.com,lfs.github.com,localhost,nodejs.org,npm.pkg.github.com,npmjs.com,npmjs.org,objects.githubusercontent.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,playwright.download
.prss.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,regeringen.se,registry.bower.io,registry.npmjs.com,registry.npmjs.org,registry.yarnpkg.com,repo.yarnpkg.com,riksdag-regering-ai.onrender.com,riksdagen.se,riksdagsmonitor.com,s.symcb.com,s.symcd.com,security.ubuntu.com,skimdb.npmjs.com,storage.googleapis.com,telemetry.enterprise.githubcopilot.com,telemetry.vercel.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com,www.googleapis.com,www.hack23.com,www.imf.org,www.npmjs.com,www.npmjs.org,www.regeringen.se,www.riksdagen.se,www.riksdagsmonitor.com,www.scb.se,yarnpkg.com" GITHUB_SERVER_URL: ${{ github.server_url }} GITHUB_API_URL: ${{ github.api_url }} - GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: "{\"add_comment\":{\"max\":1},\"create_pull_request\":{\"draft\":false,\"expires\":336,\"labels\":[\"agentic-news\",\"analysis-data\"],\"max\":3,\"max_patch_size\":4096,\"protected_files\":[\"package.json\",\"bun.lockb\",\"bunfig.toml\",\"deno.json\",\"deno.jsonc\",\"deno.lock\",\"global.json\",\"NuGet.Config\",\"Directory.Packages.props\",\"mix.exs\",\"mix.lock\",\"go.mod\",\"go.sum\",\"stack.yaml\",\"stack.yaml.lock\",\"pom.xml\",\"build.gradle\",\"build.gradle.kts\",\"settings.gradle\",\"settings.gradle.kts\",\"gradle.properties\",\"package-lock.json\",\"yarn.lock\",\"pnpm-lock.yaml\",\"npm-shrinkwrap.json\",\"requirements.txt\",\"Pipfile\",\"Pipfile.lock\",\"pyproject.toml\",\"setup.py\",\"setup.cfg\",\"Gemfile\",\"Gemfile.lock\",\"uv.lock\",\"CODEOWNERS\",\"AGENTS.md\",\"CLAUDE.md\",\"GEMINI.md\"],\"protected_path_prefixes\":[\".github/\",\".agents/\"]},\"create_report_incomplete_issue\":{},\"dispatch_workflow\":{\"max\":1,\"workflow_files\":{\"news-translate\":\".lock.yml\"},\"workflows\":[\"news-translate\"]},\"missing_data\":{},\"missing_tool\":{},\"noop\":{\"max\":1,\"report-as-issue\":\"true\"},\"report_incomplete\":{}}" + GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: 
"{\"add_comment\":{\"max\":1},\"create_pull_request\":{\"draft\":false,\"expires\":336,\"labels\":[\"agentic-news\",\"analysis-data\"],\"max\":1,\"max_patch_size\":4096,\"protected_files\":[\"package.json\",\"bun.lockb\",\"bunfig.toml\",\"deno.json\",\"deno.jsonc\",\"deno.lock\",\"global.json\",\"NuGet.Config\",\"Directory.Packages.props\",\"mix.exs\",\"mix.lock\",\"go.mod\",\"go.sum\",\"stack.yaml\",\"stack.yaml.lock\",\"pom.xml\",\"build.gradle\",\"build.gradle.kts\",\"settings.gradle\",\"settings.gradle.kts\",\"gradle.properties\",\"package-lock.json\",\"yarn.lock\",\"pnpm-lock.yaml\",\"npm-shrinkwrap.json\",\"requirements.txt\",\"Pipfile\",\"Pipfile.lock\",\"pyproject.toml\",\"setup.py\",\"setup.cfg\",\"Gemfile\",\"Gemfile.lock\",\"uv.lock\",\"CODEOWNERS\",\"AGENTS.md\",\"CLAUDE.md\",\"GEMINI.md\"],\"protected_path_prefixes\":[\".github/\",\".agents/\"]},\"create_report_incomplete_issue\":{},\"dispatch_workflow\":{\"max\":1,\"workflow_files\":{\"news-translate\":\".lock.yml\"},\"workflows\":[\"news-translate\"]},\"missing_data\":{},\"missing_tool\":{},\"noop\":{\"max\":1,\"report-as-issue\":\"true\"},\"report_incomplete\":{}}" GH_AW_CI_TRIGGER_TOKEN: ${{ secrets.GH_AW_CI_TRIGGER_TOKEN }} with: github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }} diff --git a/.github/workflows/news-realtime-monitor.md b/.github/workflows/news-realtime-monitor.md index ffc276564..d5ea519b0 100644 --- a/.github/workflows/news-realtime-monitor.md +++ b/.github/workflows/news-realtime-monitor.md @@ -2,6 +2,16 @@ name: News Realtime Monitor description: Monitors Riksdag and Government for real-time updates and generates breaking news articles in core languages (EN, SV) with Playwright validation. Translations handled by news-translate workflow. Runs twice daily on weekdays, once on weekends. 
strict: false # Allow custom network domain riksdag-regering-ai.onrender.com (trusted MCP server) +imports: + - ../prompts/00-base-contract.md + - ../prompts/01-bash-and-shell-safety.md + - ../prompts/02-mcp-access.md + - ../prompts/03-data-download.md + - ../prompts/04-analysis-pipeline.md + - ../prompts/05-analysis-gate.md + - ../prompts/06-article-generation.md + - ../prompts/07-commit-and-pr.md + - ../prompts/ext/tier-c-aggregation.md on: schedule: # Run twice during Swedish parliamentary working hours (CET/CEST) @@ -130,7 +140,7 @@ safe-outputs: labels: [agentic-news, analysis-data] draft: false expires: 14d - max: 3 + max: 1 add-comment: {} dispatch-workflow: workflows: [news-translate] @@ -168,26 +178,6 @@ steps: if [ "$WARM" = "false" ]; then echo "⚠️ MCP server did not respond after 6 attempts — agent will retry via in-prompt health gate" fi - echo "🔄 Starting background keep-alive pinger (every 30s, max 55 min — covers full 60-min workflow through safe-output PR creation)..." - KEEP_ALIVE_START=$(date +%s) - KEEP_ALIVE_END=$((KEEP_ALIVE_START + 3300)) - export MCP_URL KEEP_ALIVE_END - nohup bash -c ' - while :; do - NOW=$(date +%s) - if [ "$NOW" -ge "$KEEP_ALIVE_END" ]; then - break - fi - curl -sf --max-time 10 -X POST \ - -H "Content-Type: application/json" \ - -d "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\",\"params\":{}}" \ - "$MCP_URL" -o /dev/null 2>/dev/null || true - sleep 30 - done - ' </dev/null >/tmp/mcp-keepalive.log 2>&1 & - KEEP_ALIVE_PID=$! - disown "$KEEP_ALIVE_PID" 2>/dev/null || true - echo "Keep-alive PID: $KEEP_ALIVE_PID (auto-exits after 55 min; log: /tmp/mcp-keepalive.log)" - name: Pre-flight external endpoint reachability check (runs before MCP Gateway) run: | @@ -241,741 +231,51 @@ engine: model: claude-opus-4.7 --- -# 🔴 Real-Time Riksdag Monitor - -You are the **Real-Time Political Monitor** for Riksdagsmonitor. 
Detect significant parliamentary activity and generate breaking news articles using the **purpose-built TypeScript scripts**. - -## 🔴 CRITICAL: AI Writes ALL Content with Iterative Improvement (v5.0) - -> **You are a political intelligence analyst, NOT a script executor.** Your PRIMARY job is to produce excellent quality political intelligence through iterative improvement. You MUST: -> 1. **ANALYZE** parliamentary data deeply — SWOT, stakeholder perspectives, risk assessment, election implications -> 2. **WRITE** genuine political intelligence articles with specific actors, evidence citations, and analytical insight -> 3. **USE** the script (`generate-news-enhanced.ts`) ONLY for HTML formatting — the script creates a shell, YOU fill it with analysis -> 4. **REPLACE** every `AI_MUST_REPLACE` marker with real analysis — ZERO markers may remain -> 5. **ITERATE** — read ALL your output back completely and IMPROVE every section (minimum 2 full passes) -> 6. **VERIFY** article quality: minimum 1000 words, SWOT analysis, stakeholder perspectives, dok_id citations -> 7. **SPEND THE FULL TIME** — use at least 45 of the 60 allocated minutes doing real work -> -> 🔴 **ITERATIVE IMPROVEMENT IS MANDATORY (2+ passes):** -> - **Analysis Pass 1** (15 min): Create analysis for every document following templates -> - **Analysis Pass 2** (7 min): Read ALL analysis back, improve evidence, diagrams, cross-references -> - **Article Pass 1** (10 min): Generate articles with AI-written content from analysis -> - **Article Pass 2** (8 min): Read ALL articles back completely, improve every section -> - **NEVER complete early** — if you finish ahead, use remaining time to deepen analysis -> -> **If the final article reads like a list of document titles with generic descriptions, you have FAILED.** Rewrite with genuine political analysis before committing. 
- - -## 🔧 Workflow Dispatch Parameters - -- **article_types** = `${{ github.event.inputs.article_types }}` -- **focus** = `${{ github.event.inputs.focus }}` -- **languages** = `${{ github.event.inputs.languages }}` -- **analysis_depth** = `${{ github.event.inputs.analysis_depth }}` - -## ⚠️ CRITICAL: Bash Tool Call Format - -> **Full reference:** See `SHARED_PROMPT_PATTERNS.md` → "Bash Tool Call Format". Key rule: every `bash` call MUST have both `command` AND `description` parameters. Example: `bash({ command: "date -u '+%Y-%m-%d'", description: "Get current UTC date" })`. Calls missing either field fail with `Multiple validation errors: - "command": Required - "description": Required`. - -## 🛡️ AWF Shell Safety — MANDATORY for Agent-Generated Bash - -> See `SHARED_PROMPT_PATTERNS.md` §"AWF Shell Safety" for the full rules and pattern table. Key: use `$VAR` (no braces), `find -exec` (no command substitution), set defaults with `if/then`. - -## 🔤 UTF-8 Encoding - -> **Full reference:** See `SHARED_PROMPT_PATTERNS.md` → "UTF-8 Encoding". Summary: use native UTF-8 (`ö`, `ä`, `å`) — NEVER HTML entities (`ö`, `ä`). Author: `James Pether Sörling`. - - -## ⚠️ NON-NEGOTIABLE RULES - -1. Every run **MUST** end with exactly one safe output tool call: - - Articles generated → `safeoutputs___create_pull_request({...})` - - Analysis artifacts exist but no articles → `safeoutputs___create_pull_request({...})` with analysis-only PR - - MCP server unreachable AND no analysis artifacts → `safeoutputs___noop({"message": "..."})` - - Tool unavailable → `safeoutputs___missing_tool({"reason": "..."})` -2. `safeoutputs___create_pull_request` handles branch creation and push. **NEVER** run `git push` or `git checkout -b`. -3. Safe output tools are **always in your tool list**. NEVER search for them via bash. -4. **NEVER** write your own MCP HTTP/JSON-RPC client. Use the scripts or direct tool calls only. -5. Exiting without calling a safe output tool = workflow failure. -6. 
**🚨 FULL ANALYSIS BEFORE ANY ARTICLE (BLOCKING)**: The complete deep political analysis phase following [`analysis/methodologies/ai-driven-analysis-guide.md`](../../analysis/methodologies/ai-driven-analysis-guide.md) (Rule 0 two-pass iteration + Rules 6–8 depth tiers, 15 min Pass 1 + 7 min Pass 2 minimum) **MUST** be complete **BEFORE** creating or updating any article HTML. Articles **MUST** be (re)generated/updated from the improved Pass 2 analysis — never from Pass 1 stubs, never from scripts alone, never skipping Pass 2. This also applies before deciding on `noop`. Analysis is the primary output and must execute every run. Violations = REJECTED PR (see PR #1705 comment audit, 2026-04-18). - -## 🧠 Repo Memory - -Uses `memory/news-generation` branch. START: read `memory/news-generation/last-run-news-realtime-monitor.json` + `memory/news-generation/covered-documents/{YYYY-MM-DD}.json`. END: update both + `memory/news-generation/translation-status.json`. Skip already-covered dok_ids. - -## ⏱️ Time Budget (45 minutes) — ROLLING PRs KEEP THE SESSION ALIVE - -> 🔴 **PRODUCTION INCIDENT (2026-04-21, run 24722758908, repeat of 2026-04-20 run 24672037751)**: The agent ran Pass 1 analysis by minute 12, then spent minutes 12→34 **rewriting** EN (3,784 words) and SV (2,823 words) breaking articles from scratch locally. At minute 34 it called `safeoutputs___create_pull_request` → **`Error: session not found`**. All 8+ subsequent safeoutputs calls (noop, missing_tool, report_incomplete, push_repo_memory) also failed. **All work lost — same outcome as the 2026-04-20 incident.** -> -> **Root cause**: "PR #1 at minute 22–25 with initial articles" is **physically impossible** — writing publication-quality EN + SV articles (6,000+ words combined, zero `AI_MUST_REPLACE` markers) from scratch takes 8–15 minutes, blowing the safeoutputs session idle timer (~30–35 min from MCP Gateway start). 
Even when the agent recognized the deadline at minute 24, finishing the articles took another 10 minutes. -> -> **FIX (this PR)**: Split PR #1 from article content. PR #1 is now an **analysis-only Heartbeat PR at minute 13–18** matching the proven pattern from `news-committee-reports`, `news-motions`, `news-propositions`, `news-interpellations`, `news-month-ahead`, `news-monthly-review`, `news-week-ahead`, `news-weekly-review`, `news-article-generator`, `news-evening-analysis`. Article writing happens between PR #1 and PR #2. - -> 🟢 **SESSION KEEP-ALIVE STRATEGY (this workflow, `create-pull-request.max: 3`)**: Every `safeoutputs___create_pull_request` call **refreshes the Streamable HTTP MCP session idle timer**. Instead of rushing all work into a single monolithic PR before minute 28, the agent now opens up to **3 rolling PRs** — each call keeps the session alive and captures an additional batch of work. This is the same pattern PR #1768 proved successful for `news-translate.md` (where it went from "all work lost at minute 50" to "5 batch PRs over 55 minutes, zero session expiries"). See `SHARED_PROMPT_PATTERNS.md` §"Universal Safe Output Rules". - -> 🔴 **SYSTEMIC ISSUE IDENTIFIED (PR #1794 audit, 2026-04-16)**: Prior news workflows were completing in 13–22 minutes, producing shallow analysis with unenriched script stubs. The agent MUST spend at least **22 minutes on analysis** (15 min Pass 1 + 7 min Pass 2). Completion before 22 minutes of analysis = insufficient iteration = REJECTED quality. 
- -```bash -date +%s > /tmp/start_time.txt -read START_TIME < /tmp/start_time.txt -``` - -| Phase | Minutes | Action | -|-------|---------|--------| -| Setup | 0–3 | Date check, `get_sync_status()` warm-up | -| Download | 3–6 | Run data download scripts (MCP data fetch) | -| **AI Analysis Pass 1** | **6–13** | **🚨 MANDATORY 7 min minimum for first heartbeat**: Read methodology guides, create per-file analysis stubs for EVERY document with initial Mermaid diagrams, evidence tables, SWOT entries. Full depth iteration continues AFTER the heartbeat PR is safely called. | -| **🫀 PR #1 — Analysis-only Heartbeat** | **13–18** | 🚨 **HARD MIN: by minute 18.** Commit whatever analysis artifacts exist in `analysis/daily/$ARTICLE_DATE/realtime-$HHMM/` (even partial Pass 1 stubs). **Do NOT write articles yet.** Title: `🫀 Heartbeat - Realtime Monitor - {date} {HHMM}`. Labels: `["analysis-only", "realtime-monitor", "heartbeat"]`. This call refreshes the safeoutputs MCP session (~30–35 min idle lifetime) AND preserves Pass 1 analysis work. After the call succeeds, run `git checkout main` to avoid appending to a frozen patch. | -| **AI Analysis Pass 2** | **18–25** | **🚨 MANDATORY 7 min minimum**: Read ALL analysis back, improve every section, add cross-references, replace remaining script stubs. Run enrichment verification gate. | -| Generate + Write articles | 25–35 | Run `generate-news-enhanced.ts`; write the full EN + SV articles (lead-story aligned; zero `AI_MUST_REPLACE` markers). Article writing takes 8–12 minutes — this is now SAFE because PR #1 already refreshed the session. | -| Validate + fix-refs | 35–38 | Run `validate-news-generation.sh` and `fix-analysis-references.ts`. | -| **PR #2 — Full articles batch** | **38–43** | Commit finalized EN + SV articles + enriched analysis on a fresh branch (`git checkout main` first!), then `safeoutputs___create_pull_request` (title `🔴 Breaking $HHMM: {headline} - {date}`). This second call also refreshes the session idle timer. 
| -| **PR #3 — Improvements (optional)** | **43–45** | If additional HIGH/MEDIUM events discovered OR article improvement round available, commit + PR again on a fresh branch. | -| Post-PR cleanup | 43–45 | Update repo-memory (`/tmp/gh-aw/repo-memory/default/*.json`) — artifact uploads, NOT PR content, so they run after the final PR call. | -| **HARD DEADLINE** | **43** | 🚨 Never exit without at least one `safeoutputs___create_pull_request` call if ANY files were created. ONLY call `safeoutputs___noop` if truly ZERO files were created. Never noop when files exist. | - -> ⚠️ **Why analysis-only heartbeat answers "keep the session alive":** the safeoutputs MCP Streamable HTTP session dies from idle (~30–35 min observed). Run 24722758908 proved that article writing routinely takes 10+ minutes — attempting to include articles in PR #1 forces a 10+ minute idle window that kills the session. PR #1 = analysis-only = can be committed in <60 seconds because the files already exist on disk from Pass 1. PR #2 = full articles, which the session now survives because PR #1 refreshed the idle timer at minute ~15. This is exactly how the other 11 news workflows work successfully (`news-committee-reports` minute 13–15, `news-motions`/`news-propositions`/`news-interpellations` minute 22–25 with analysis-only heartbeats). - -> ⚠️ **Analysis phase is 22 minutes minimum (Pass 1: 12 min + Pass 2: 7 min + Improvement: 3 min or more) — this is NOT negotiable.** PR #1452 demonstrated that < 10 min produces unacceptable analysis. PR #1794 demonstrated that 15 min total = shallow articles missing SWOT tables, Mermaid diagrams, risk matrices. With rolling PRs, Pass 2 + Improvement run AFTER PR #1 is safely committed — so quality iteration no longer risks losing everything. - -> 🔴 **MINIMUM TIME ENFORCEMENT**: Before proceeding to article generation, the agent MUST run the Minimum Analysis Time Gate AND the Analysis Enrichment Verification Gate from SHARED_PROMPT_PATTERNS.md. 
Both gates MUST pass before article generation begins. - -**Hard cutoffs** — check elapsed time before EVERY phase: -```bash -# Restore START_TIME if available so this snippet is safe to run standalone -if [ -f /tmp/gh-aw/agent/timing.env ]; then - . /tmp/gh-aw/agent/timing.env -fi -# Fallback: if START_TIME is still unset, initialize it to "now" to avoid huge elapsed times -if [ -z "$START_TIME" ]; then - date +%s > /tmp/start_time.txt - read START_TIME < /tmp/start_time.txt -fi - -date +%s > /tmp/now_time.txt -read AW_NOW < /tmp/now_time.txt -ELAPSED=$(( AW_NOW - START_TIME )) -echo "⏱️ Elapsed: $((ELAPSED / 60))m $((ELAPSED % 60))s" -``` -- `>= 13 min` and PR #1 (analysis heartbeat) not yet called → 🫀 IMMEDIATELY commit whatever analysis artifacts exist in `analysis/daily/$ARTICLE_DATE/realtime-$HHMM/` and call `safeoutputs___create_pull_request` with title `🫀 Heartbeat - Realtime Monitor - {date} {HHMM}`. Do NOT wait for articles. -- `>= 18 min` and PR #1 not called yet → 🚨 **SESSION EXPIRY RISK** — STOP ALL analysis work, stage + commit whatever exists, call `safeoutputs___create_pull_request`. Analysis-only heartbeat is infinitely better than `session not found`. -- `>= 43 min` → call final `safeoutputs___create_pull_request` for the full-articles batch (PR #2) or any improvements (PR #3), then stop. -- **CRITICAL — UNIVERSAL SAFE OUTPUT RULE (from SHARED_PROMPT_PATTERNS.md)**: If ANY files were created/modified → ALWAYS `safeoutputs___create_pull_request`. NEVER `safeoutputs___noop` when artifacts exist. Noop means "I did nothing" and loses everything. Noop is ONLY valid when zero files were produced (MCP unreachable, truly no significant events). 
-
-## Step 1: Date Validation & MANDATORY MCP Health Check
-
-```bash
-echo "=== Workflow Start - Date Validation ==="
-mkdir -p /tmp/gh-aw/agent   # ensure the timing.env target directory exists
-date +%s > /tmp/start_time.txt
-read START_TIME < /tmp/start_time.txt
-echo "START_TIME=$START_TIME" > /tmp/gh-aw/agent/timing.env
-date -u "+Current UTC: %A %Y-%m-%d %H:%M:%S"
-date +"%Z: %A %Y-%m-%d %H:%M:%S"
-echo "============================"
-```
-
-Then verify MCP connectivity — ALWAYS check data freshness first with the MANDATORY MCP Health Gate:
-
-> **The step-level pre-warm (6 attempts × 20s) already mitigates Render.com cold starts.** This in-prompt gate is a lightweight verification — NOT a full retry loop. Do NOT spend more than 90 seconds here.
->
-> **📖 Full MCP architecture, tool names, and calling conventions:** See `SHARED_PROMPT_PATTERNS.md` → "MCP Architecture & Tool Reference" section. Tool names are EXACT: riksdag tools use underscores (`get_sync_status`), World Bank uses hyphens (`get-economic-data`), SCB uses underscores (`search_tables`).
-
-```
-get_sync_status({})
-```
-1. Call `get_sync_status({})` — retry up to **3×** (20s wait between each, not 45s — the server is already warm from the step-level pre-warm)
-2. If you get **"unknown tool"** or **"0 tools registered"** errors after 3 attempts, run a quick diagnostic:
-```bash
-echo "🔍 MCP Quick Diagnostic"
-echo "Direct MCP server:" && curl -sf --max-time 15 -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}' "https://riksdag-regering-ai.onrender.com/mcp" 2>/dev/null | head -c 200 || echo "UNREACHABLE"
-```
-3. After 3 failures → `safeoutputs___noop({"message": "MCP server unavailable after 3 attempts — step-level pre-warm also failed"})` — do NOT fabricate content
-4. **ALL content MUST come from live MCP data.** Never use cached articles, stale data, or AI-fabricated content.
-5. **⏱️ Do NOT spend more than 2 minutes on MCP warmup** — proceed to analysis immediately once `get_sync_status` succeeds.
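The retry discipline in step 1 (three attempts, 20 s apart, then give up and noop) can be sketched as a reusable helper. This is illustrative only: the real gate goes through the `get_sync_status` MCP tool call, not curl, and `RETRY_SLEEP` is an assumption added here so the pause is tunable.

```bash
# Sketch: probe a health-check command up to 3 times, pausing between attempts.
# Returns 0 on the first success, 1 after three failures (then: noop, never fabricate).
RETRY_SLEEP="${RETRY_SLEEP:-20}"
retry_health() {
  local attempt
  for attempt in 1 2 3; do
    if "$@" > /dev/null 2>&1; then
      echo "healthy on attempt $attempt"
      return 0
    fi
    [ "$attempt" -lt 3 ] && sleep "$RETRY_SLEEP"
  done
  echo "unhealthy after 3 attempts"
  return 1
}
# Example probe (mirrors the diagnostic curl above):
# retry_health curl -sf --max-time 15 "https://riksdag-regering-ai.onrender.com/mcp"
```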
- -If data is stale (> 48 hours), add disclaimer. Use riksdag-regering-mcp (32 tools for Swedish parliament data). For ad-hoc queries, use `scripts/mcp-query-cli.ts` — NEVER implement custom MCP client code (PROHIBITION). - -Tools with date params: `get_calendar_events` (from/tom — **⚠️ known intermittent issue: may return HTML instead of JSON; use `search_dokument` as fallback**), `search_dokument` (from_date/to_date), `search_regering` (dateFrom/dateTo). Other tools (`search_voteringar`, `get_betankanden`, `get_motioner`, `get_propositioner`, `search_anforanden`) require post-query filter by datum. - -## 📅 Riksmöte (Parliamentary Session) Calculation - -- Month ≥ September: `rm = "{year}/{nextYear's last 2 digits}"` (e.g., Oct 2026 → "2026/27") -- Month < September: `rm = "{prevYear}/{year's last 2 digits}"` (e.g., Feb 2026 → "2025/26") - -## Step 1.5: Download Data Using Scripts - -**Scripts are used ONLY for downloading data. ALL analysis is done by the AI (you) using methodologies and templates.** - -> 🚨 **Scripts must be set up correctly to work in the agentic workflow.** Always source `mcp-setup.sh` first. -> If scripts fail to download data, you MUST diagnose and fix the scripts so they work. -> If you cannot fix the scripts, use direct MCP tool calls as fallback to download data. 
- -```bash -# Idempotent: only set if not already resolved by lookback -if [ -z "$ARTICLE_DATE" ]; then - # Prefer manual workflow_dispatch input when provided, otherwise default to today (UTC) - if [ -n "${{ github.event.inputs.article_date }}" ]; then - ARTICLE_DATE="${{ github.event.inputs.article_date }}" - else - date -u +%Y-%m-%d > /tmp/today.txt - read ARTICLE_DATE < /tmp/today.txt - fi -fi -# UNIQUE RUN ID: Set HHMM timestamp ONCE for this run — persist to env file so all bash blocks use the same value -if [ -z "$HHMM" ]; then - date -u +%H%M > /tmp/hhmm_val.txt - read HHMM < /tmp/hhmm_val.txt -fi -echo "HHMM=$HHMM" > /tmp/hhmm.env -ARTICLE_TYPE="realtime-$HHMM" -echo "📥 Downloading data for $ARTICLE_DATE (run: $ARTICLE_TYPE)..." -# CRITICAL: Source mcp-setup.sh to set MCP_SERVER_URL and MCP_AUTH_TOKEN for the AWF gateway -# Scripts download data only — analysis is done by AI afterwards -set -o pipefail -source scripts/mcp-setup.sh && echo "MCP_SERVER_URL=$MCP_SERVER_URL" && npx tsx scripts/download-parliamentary-data.ts --date "$ARTICLE_DATE" --limit 50 2>&1 | tee /tmp/pipeline-output.log -PIPE_EXIT=$? 
-set +o pipefail -if [ "$PIPE_EXIT" -ne 0 ]; then - echo "❌ Data download script failed with exit code $PIPE_EXIT — agent MUST diagnose and fix" - tail -30 /tmp/pipeline-output.log - npx tsc --noEmit scripts/download-parliamentary-data.ts 2>&1 | head -20 || true -fi -# Verify data was actually downloaded -find analysis/data/ -name "*.json" -type f 2>/dev/null | wc -l > /tmp/data_count.txt -read DATA_JSON_COUNT < /tmp/data_count.txt -echo "📊 JSON data files downloaded: $DATA_JSON_COUNT" -# Relocate pipeline artifacts: download-parliamentary-data.ts writes to analysis/daily/$DATE/ (unscoped) -# but this workflow needs them under analysis/daily/$DATE/realtime-$HHMM/ -UNSCOPED_DIR="analysis/daily/$ARTICLE_DATE" -SCOPED_DIR="$UNSCOPED_DIR/$ARTICLE_TYPE" -if [ -d "$UNSCOPED_DIR" ]; then - mkdir -p "$SCOPED_DIR" - if find "$UNSCOPED_DIR" -maxdepth 1 -type f -name "*.md" | grep -q .; then - find "$UNSCOPED_DIR" -maxdepth 1 -type f -name "*.md" -exec mv -f {} "$SCOPED_DIR/" \; - echo "📁 Moved pipeline *.md artifacts → $SCOPED_DIR (root cleaned to prevent merge conflicts)" - fi - if [ -d "$UNSCOPED_DIR/documents" ]; then - mkdir -p "$SCOPED_DIR/documents" - find "$UNSCOPED_DIR/documents" -mindepth 1 -maxdepth 1 -exec mv {} "$SCOPED_DIR/documents/" \; - rmdir "$UNSCOPED_DIR/documents" 2>/dev/null || true - echo "📁 Moved pipeline documents/ contents → $SCOPED_DIR/documents (root cleaned to prevent merge conflicts)" - fi -fi -ls -la "$SCOPED_DIR/" 2>/dev/null || echo "⚠️ No output directory" -if [ "$DATA_JSON_COUNT" -eq 0 ]; then - echo "🚨 ZERO data downloaded. Agent MUST fix scripts or use direct MCP tool calls." -fi -``` - -### 🔧 If Scripts Downloaded 0 Data - -> Fix scripts or fall back to direct MCP tool calls. Never proceed without data. - -1. **Read error log**: `cat /tmp/pipeline-output.log | tail -30` -2. 
**Check MCP setup**: `echo "MCP_SERVER_URL=$MCP_SERVER_URL"` — must be `http://host.docker.internal:8080/mcp/riksdag-regering` (port `8080` for gh-aw v0.69+, `80` for legacy gh-aw <0.69) -3. **Fix script issues**: read source with `view`, fix with `edit`, re-run -4. **If script fix fails**: use direct MCP tools (`search_dokument`, `get_propositioner`, etc.), save to `analysis/data/documents/{type}/` - -### 🚨🚨🚨 MANDATORY: AI Must Analyse ALL Data Using Methods & Templates (15 min minimum) - -> **THIS IS YOUR PRIMARY JOB.** Minimum 15 minutes for Pass 1, plus 7 minutes for Pass 2. For every document, read methodology upfront then apply ALL 6 analytical lenses. Templates require structured tables, color-coded Mermaid diagrams, dok_id evidence citations — cannot be done in < 15 minutes. PR #1452 proved < 10 min = REJECTED. PR #1794 proved < 22 min total = script stubs remain unenriched. - -> 🔴 **PR #1794 LESSON**: Agent completed in 15.4 minutes of 60-minute allocation. Result: SWOT analysis file was EMPTY (script stub), 6/9 synthesis files were script stubs, 20/22 per-document analyses were 56-line stubs. Article was missing SWOT tables, Mermaid diagrams, risk matrices. NEVER repeat this pattern. - -**MUST do (no exceptions):** - -1. **Read upfront**: `analysis/methodologies/ai-driven-analysis-guide.md` + `analysis/templates/per-file-political-intelligence.md` -2. **Consult as needed**: `political-swot-framework.md`, `political-risk-methodology.md`, `political-threat-framework.md`, `political-classification-guide.md`, `political-style-guide.md`; templates: `synthesis-summary.md`, `risk-assessment.md`, `swot-analysis.md` (needs Context table + evidence tables with dok_id/confidence/impact + Mermaid SWOT Quadrant), `stakeholder-impact.md`, `significance-scoring.md` -3. **For EVERY document**: create `{dok_id}-analysis.md` with ALL 6 analytical lenses, ≥1 color-coded Mermaid with `style` directives, evidence citations with dok_id/vote counts/party names -4. 
**Create/rewrite ALL 9 synthesis files** in `analysis/daily/$ARTICLE_DATE/realtime-$HHMM/` — exact template structure, no `[REQUIRED]` placeholders. **ALL 9 files MUST be AI-enriched — ZERO may retain the "pre-article-analysis script" marker.** -5. **Run quality gate** (Step D above). Fix ALL failures before continuing. -6. **Run ENFORCED Analysis Enrichment Verification Gate** from SHARED_PROMPT_PATTERNS.md — BLOCKS if any synthesis files still have script markers. -7. **Run ENFORCED Minimum Analysis Time Gate** from SHARED_PROMPT_PATTERNS.md — BLOCKS if < 22 minutes elapsed. -8. **Commit data AND analysis**: `git add analysis/data/ "analysis/daily/$ARTICLE_DATE/realtime-$HHMM/"` (AWF: use `$VAR` not `${VAR}`) - -> ❌ FAILURE MODES (PR #1794 regressions): skipping analysis enrichment; leaving script stubs in synthesis files; plain prose without tables/diagrams; stubs with 0 evidence citations; missing dok_id; missing color-coded Mermaid; `[REQUIRED]` placeholders; SWOT without evidence tables; < 22 min total analysis; completing workflow in < 40 minutes. - -### 🔄 Data Lookback Fallback - -> Never produce empty/stub analysis. If no data for today, look back up to 7 days. See `SHARED_PROMPT_PATTERNS.md` §"Data Lookback Fallback Strategy" for the complete bash implementation. - -Key steps: resolve `ARTICLE_DATE` from input or today → check `data-download-manifest.md` → if 0 docs, loop `DAYS_BACK` 1–7 using `date -u -d "$ARTICLE_DATE - $DAYS_BACK days"`, run `download-parliamentary-data.ts --date "$LOOKBACK_DATE"` → copy artifacts from found date to original date folder if needed → run `catalog-downloaded-data.ts --pending-only` to get `$PENDING` count. - -### Per-File Analysis & Daily Synthesis (done by AI, not scripts) - -> Scripts download data and produce **stub files only**. The AI agent MUST **replace ALL stubs** with real analysis. 
Follow `SHARED_PROMPT_PATTERNS.md` §"Per-File AI Analysis Block" and §"MANDATORY: AI-Driven Analysis Using Methods & Templates" exactly (Steps A–D below are a summary): - -**Step A**: Read `analysis/methodologies/ai-driven-analysis-guide.md` + `analysis/templates/per-file-political-intelligence.md` FIRST. Then consult SHARED_PROMPT_PATTERNS Steps 2–3 (all 6 methodology guides, all 8 templates). - -**Step B**: For EVERY document JSON → create `{dok_id}-analysis.md` with ALL 6 analytical lenses, ≥1 color-coded Mermaid with `style` directives, evidence tables with dok_id/confidence/impact, real SWOT entries. - -**Step C**: Rewrite ALL 7 daily synthesis files in `analysis/daily/$ARTICLE_DATE/realtime-$HHMM/` to match their templates exactly. - -**Step D — Run quality gate** (BLOCKING): See `SHARED_PROMPT_PATTERNS.md` §"Step 5b: MANDATORY Quality Gate" for the complete bash script. Run it and fix ALL failures before proceeding. - -**Step D.2 — Lead-Story & Coverage-Completeness Gate** (BLOCKING, added 2026-04-18): After articles are drafted, run the gate from `SHARED_PROMPT_PATTERNS.md` §"🔴 MANDATORY: Lead-Story & Coverage-Completeness Gate". This enforces (1) the article `<title>`, `<meta description>`, and H1 reference the #1 DIW-ranked finding in `significance-scoring.md`, (2) every document with DIW-weighted score ≥ 7.0 appears as a dedicated H3 section, (3) when top-ranked findings carry opposing political valences, the rhetorical tension is surfaced explicitly. Failing the gate requires rewrite before commit. **Doctrine**: `analysis/methodologies/ai-driven-analysis-guide.md` §"Rule 5: Democratic-Impact Weighting (DIW)". - -> 🚨 **BLOCKING**: Fix all failures before proceeding. Read `analysis/templates/<template>.md`, rewrite failing files, re-run gate. - -### 🔴 MANDATORY: Batch Analysis Enrichment - -If `synthesis-summary.md` reports "0 documents analyzed" but per-doc analyses exist in `documents/`, aggregate findings into all 9 batch files. 
If NO per-doc analyses exist, use MCP tools directly to create meaningful analysis. See `ai-driven-analysis-guide.md` §"Deep-Inspection Batch Analysis Enrichment Protocol (v4.1)". **NEVER commit batch files reporting "0 documents analyzed".** After enrichment, run the **9-Artifact Completeness Gate** from `SHARED_PROMPT_PATTERNS.md` §"9 REQUIRED Analysis Artifacts" to verify ALL 9 core files exist (synthesis-summary.md, swot-analysis.md, risk-assessment.md, threat-analysis.md, classification-results.md, significance-scoring.md, stakeholder-perspectives.md, cross-reference-map.md, data-download-manifest.md). Create any missing artifacts manually. - -### 🏆 MANDATORY: 14-Artifact Reference-Grade Gate (Tier-C — added 2026-04-19) - -`news-realtime-monitor` is a **Tier-C reference-grade workflow** — every breaking run is the flagship editorial surface of Riksdagsmonitor and is consumed externally by editors, analysts, and press. After the 9-Artifact Completeness Gate passes, additionally run the **14-Artifact Reference-Grade Gate** from `SHARED_PROMPT_PATTERNS.md` §"14 REQUIRED Artifacts for AGGREGATION Workflows + news-realtime-monitor". 
This gate requires 5 additional Tier-C files in `analysis/daily/$ARTICLE_DATE/realtime-$HHMM/`: - -- **`README.md`** (≥ 2400 bytes — 0.8× realtime multiplier) — Package index · reading orders by audience · file index table · lead-story at-a-glance · upstream-run relationship table -- **`executive-brief.md`** (≥ 2800 bytes — 0.8× realtime multiplier) — BLUF ≤ 300 words · 3 decisions this brief supports · 8-bullet "60-second read" · named actors (≥ 5 ministers/party leaders with dok_id citations) · 14-day forward vote calendar · top-5 risks · analyst confidence meter -- **`scenario-analysis.md`** (≥ 3200 bytes — 0.8× realtime multiplier) — 3 base scenarios with probability bands (30-day + 90-day + post-election where applicable) · 2 wildcards with impact assessment · ACH (Analysis of Competing Hypotheses) grid · monitoring-trigger calendar mapped to scenario shifts · cross-reference to upstream scenario work -- **`comparative-international.md`** (≥ 3200 bytes — 0.8× realtime multiplier) — **≥ 5 jurisdictions** benchmarked per cluster · Nordic baseline (SE vs DK, NO, FI) · EU benchmark (DE, NL, plus cluster-relevant) · explicit call-outs where Sweden **innovates**, **follows**, **diverges** · data-source citations (World Bank, RSF, OECD, Eurostat) -- **`methodology-reflection.md`** (≥ 3200 bytes — 0.8× realtime multiplier) — Methodology application matrix · **Upstream Watchpoint Reconciliation** (every forward indicator from the last 2 days of sibling realtime-monitor runs explicitly carried forward or retired with reason) · uncertainty hot-spots · known limitations · Pass-1→Pass-2 improvement evidence · recommendations for doctrine codification - -> 📐 **Period-scope multiplier: 0.8× (single-event)** applied above — see `SHARED_PROMPT_PATTERNS.md` §"Period-Scope Multipliers" for the multiplier table. The 0.8× factor recognises that single-event realtime briefs may trim historical context while keeping all 14 artefacts present. 
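The Tier-C half of that gate can be checked deterministically. A minimal sketch, using the file names and 0.8×-scaled byte floors from the list above (the authoritative gate script lives in `SHARED_PROMPT_PATTERNS.md`; command substitution is used here for brevity):

```bash
# Sketch: verify the 5 Tier-C files exist and meet their byte floors.
check_tier_c() {
  local dir="$1" fail=0 entry file min size
  for entry in "README.md:2400" "executive-brief.md:2800" \
               "scenario-analysis.md:3200" "comparative-international.md:3200" \
               "methodology-reflection.md:3200"; do
    file="${entry%%:*}"; min="${entry##*:}"
    if [ ! -f "$dir/$file" ]; then
      echo "🔴 MISSING: $dir/$file"; fail=1
    else
      size=$(wc -c < "$dir/$file")
      [ "$size" -lt "$min" ] && { echo "🔴 TOO SMALL: $file ($size < $min bytes)"; fail=1; }
    fi
  done
  return "$fail"
}
# check_tier_c "analysis/daily/$ARTICLE_DATE/realtime-$HHMM" || echo "gate BLOCKED"
```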
-
-**Reference exemplars**:
-- [`analysis/daily/2026-04-17/realtime-1434/`](../../analysis/daily/2026-04-17/realtime-1434/) — 14-file reference package
-- [`analysis/daily/2026-04-19/realtime-1219/`](../../analysis/daily/2026-04-19/realtime-1219/) — 14-file reference package with full Tier-C extensions
-
-Failing the 14-artifact gate is BLOCKING — create any missing Tier-C file before article generation. See `SHARED_PROMPT_PATTERNS.md` §"14-Artifact Completeness Gate for Tier-C Workflows" for the full bash script.
-
-### 🚨 MANDATORY: Commit Data AND Analysis
-
-Before deciding to generate articles or call noop, check if analysis artifacts exist:
-
-```bash
-[ -f /tmp/hhmm.env ] && . /tmp/hhmm.env
-if [ -z "$HHMM" ]; then
-  date -u +%H%M > /tmp/hhmm_val.txt
-  read HHMM < /tmp/hhmm_val.txt
-fi
-if [ -z "$ARTICLE_DATE" ]; then
-  date -u +%Y-%m-%d > /tmp/today.txt
-  read ARTICLE_DATE < /tmp/today.txt
-fi
-ANALYSIS_DIR="analysis/daily/$ARTICLE_DATE/realtime-$HHMM"
-find "$ANALYSIS_DIR" -type f 2>/dev/null | wc -l > /tmp/analysis_count.txt
-read ANALYSIS_COUNT < /tmp/analysis_count.txt
-echo "Analysis artifacts: $ANALYSIS_COUNT files in $ANALYSIS_DIR"
-```
-
-> **🚨 CRITICAL RULE: Never call `safeoutputs___noop` if analysis artifacts exist.** If ANY files exist in `analysis/daily/YYYY-MM-DD/realtime-HHMM/`, commit them via `safeoutputs___create_pull_request` — title: `📊 Analysis Only - Realtime Monitor - {date} {HHMM}`, labels: `["analysis-only", "realtime-monitor"]`. Only use noop if ZERO output files were produced.
-
-## Step 2: Detect Significant Events
-
-Query for recent parliamentary activity — use **direct MCP tool calls** (the framework routes them automatically).
-
-Replace `<today>` with today's date in `YYYY-MM-DD` format (from `date -u +%Y-%m-%d` — always UTC, matching `$ARTICLE_DATE`). Replace `<yesterday>` with the previous day's date in `YYYY-MM-DD` format (from `date -u -d "yesterday" +%Y-%m-%d`). Replace `<rm>` with the riksmöte value calculated above.
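The three placeholders can be resolved in one short block. A sketch of the riksmöte rule from the section above: GNU `date` is assumed, and plain command substitution is used for readability even though the workflow's own blocks prefer the tmp-file + `read` pattern.

```bash
# Sketch: compute <today>, <yesterday>, and <rm> for the queries below.
TODAY=$(date -u +%Y-%m-%d)
YESTERDAY=$(date -u -d "yesterday" +%Y-%m-%d)
# Riksmöte rule: Sep-Dec → "year/nextYY", Jan-Aug → "prevYear/YY"
riksmote_for() {
  local year="$1" month="$2"   # month as a number 1-12
  if [ "$month" -ge 9 ]; then
    printf '%d/%02d\n' "$year" $(( (year + 1) % 100 ))
  else
    printf '%d/%02d\n' $(( year - 1 )) $(( year % 100 ))
  fi
}
RM=$(riksmote_for "$(date -u +%Y)" "$((10#$(date -u +%m)))")   # 10# strips zero-padding
echo "today=$TODAY yesterday=$YESTERDAY rm=$RM"
```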
- -**Use a lookback window** — query from `<yesterday>` to catch late-day publications from the previous day that may have been missed by the last run: - -``` -get_calendar_events({ from: "<today>", tom: "<today>", limit: 50 }) -search_dokument({ from_date: "<yesterday>", to_date: "<today>", limit: 30 }) -search_voteringar({ rm: "<rm>", limit: 20 }) -search_anforanden({ rm: "<rm>", limit: 20 }) -search_regering({ dateFrom: "<yesterday>", dateTo: "<today>", limit: 30 }) -get_propositioner({ rm: "<rm>", limit: 20 }) -get_betankanden({ rm: "<rm>", limit: 20 }) -``` - -### ⚠️ Calendar API Fallback - -`get_calendar_events` may return HTML instead of JSON intermittently. If it fails: (1) do NOT treat as "no events"; (2) use `search_dokument({ from_date: "<today>", to_date: "<today>", limit: 50, doktyp: "bet" })` as a proxy; (3) flag the error in any noop message; (4) continue evaluating all other data sources normally. - -### Significance Assessment — AI-Driven Severity Classification - -Apply three-tier severity classification to ALL detected events. This classification determines whether to generate articles and what depth of analysis to apply. 
- -**HIGH** (generate breaking article with deep analysis): -- Close votes (margin ≤ 5 seats) or unexpected vote outcomes -- Cross-party coalitions forming (parties voting against their usual block) -- New government propositions on high-priority topics (defense, migration, economy, justice, social policy) -- Major committee reports with significant policy changes (especially those approving government proposals) -- Government crisis indicators (VU, confidence motion, minister resignation) -- SOU reports on major policy areas -- Budget amendments or extraordinary fiscal measures -- Legislation strengthening criminal law, social services, or national security - -**MEDIUM** (generate update article with standard analysis): -- Regular committee reports (betänkanden) rejecting motions -- Committee reports approving government proposals (even if routine procedure) -- New government propositions on any policy area -- Opposition motions on significant policy areas -- Scheduled debates with notable party positions -- Ministerial interpellations from multiple parties -- Cross-party cooperation announcements - -**LOW** (skip, use noop): -- Routine procedural votes with no policy substance -- Standard meetings with no new developments -- Previously covered topics within last 6 hours (check workflow-state.json) -- Scheduling announcements without policy substance - -**Severity scoring formula** (score 1–10, capped at 10): -- +3 if coalition majority at risk -- +2 if > 3 parties involved -- +2 if budget/fiscal implications -- +2 if defense/security policy -- +2 if criminal justice or social welfare reform -- +1 if involves named minister -- +1 if committee report approves (not just rejects) a government proposal -- -2 if similar topic covered in last 6 hours - -Map raw score to tier: **≥ 7 = HIGH** | **4–6 = MEDIUM** | **≤ 3 = LOW** - -### No-Events Early Exit - -If no HIGH or MEDIUM events found: use the already-set `$ANALYSIS_COUNT` from the MANDATORY Commit check above. 
-- **ANALYSIS_COUNT > 0**: `git add "$ANALYSIS_DIR"/` and commit, then `safeoutputs___create_pull_request` with title `📊 Analysis Only - Realtime Monitor $HHMM - {date}`, labels `["analysis-only", "realtime-monitor"]`.
-- **ANALYSIS_COUNT = 0**: call `safeoutputs___noop({ "message": "No significant events on <today>. Votes (<lastVoteDate>), props (<propCount>), bets (<betCount>), gov (<govCount>), calendar (<calendarStatus>). Max severity=<maxScore> (<HIGH threshold ≥7). Analysis produced 0 files. Next check 2-4h." })`
-
-**Stop here only if no analysis artifacts exist.**
-
-### 🔬 Step 2b: Read ALL Analysis Files (MANDATORY — before article generation)
-
-> 🔴 NON-NEGOTIABLE: `cat` every analysis `.md` BEFORE generating HTML. See SHARED_PROMPT_PATTERNS.md §"MANDATORY PRE-ARTICLE ANALYSIS READING".
-
-```bash
-[ -f /tmp/hhmm.env ] && . /tmp/hhmm.env
-if [ -z "$HHMM" ]; then
-  date -u +%H%M > /tmp/hhmm_val.txt
-  read HHMM < /tmp/hhmm_val.txt
-fi
-# Restore ARTICLE_DATE too — this block must be safe to run standalone
-if [ -z "$ARTICLE_DATE" ]; then
-  date -u +%Y-%m-%d > /tmp/today.txt
-  read ARTICLE_DATE < /tmp/today.txt
-fi
-ANALYSIS_BASE="analysis/daily/$ARTICLE_DATE/realtime-$HHMM"
-find "$ANALYSIS_BASE" -name "*.md" -type f 2>/dev/null -exec cat {} \; -exec echo \;
-```
-
-## Step 3: Generate Articles Using Purpose-Built Script
-
-**🚨 ALWAYS use the TypeScript generation script — it handles MCP queries, HTML templating, all 14 languages, translation, and article quality internally.**
-
-```bash
-ARTICLE_TYPES_INPUT="${{ github.event.inputs.article_types }}"
-[ -z "$ARTICLE_TYPES_INPUT" ] && ARTICLE_TYPES_INPUT="breaking"
-export ARTICLE_TYPES_INPUT
-LANGUAGES_INPUT="${{ github.event.inputs.languages }}"
-[ -z "$LANGUAGES_INPUT" ] && LANGUAGES_INPUT="en,sv"
-case "$LANGUAGES_INPUT" in
-  "nordic") LANG_ARG="en,sv,da,no,fi" ;;
-  "eu-core") LANG_ARG="en,sv,de,fr,es,nl" ;;
-  "all") LANG_ARG="en,sv,da,no,fi,de,fr,es,nl,ar,he,ja,ko,zh" ;;
-  *) LANG_ARG="$LANGUAGES_INPUT" ;;
-esac
-export LANG_ARG
-source /tmp/gh-aw/agent/timing.env 2>/dev/null || true
-if [ -z "$START_TIME" ]; then
-  date +%s > /tmp/start_time.txt
-  read START_TIME < 
/tmp/start_time.txt
-fi
-date +%s > /tmp/now_time.txt
-read AW_NOW < /tmp/now_time.txt
-ELAPSED=$(( AW_NOW - START_TIME ))
-if [ "$ELAPSED" -ge 2100 ]; then
-  echo "⏱️ Time budget exceeded (${ELAPSED}s >= 35min) — skipping generation"
-  SCRIPT_EXIT=0; NEW_ARTICLES=""
-else
-  timeout 1200 bash -lc 'source scripts/mcp-setup.sh && npx tsx scripts/generate-news-enhanced.ts --types="$ARTICLE_TYPES_INPUT" --languages="$LANG_ARG" --skip-existing'
-  SCRIPT_EXIT=$?
-  TIMED_OUT=false
-  [ "$SCRIPT_EXIT" -eq 124 ] && { echo "⚠️ Script timed out — proceeding with generated content"; TIMED_OUT=true; }
-  echo "Script exit: $SCRIPT_EXIT"
-  date -u +%Y-%m-%d > /tmp/today.txt
-  read TODAY < /tmp/today.txt
-  git status --porcelain -- news/ 2>/dev/null | awk '{print $2}' | grep "$TODAY-" > /tmp/new_articles.txt || true
-  NEW_ARTICLES=""
-  [ -s /tmp/new_articles.txt ] && NEW_ARTICLES="generated"
-  [ -z "$NEW_ARTICLES" ] && echo "No new articles." || { cat /tmp/new_articles.txt; [ "$TIMED_OUT" = true ] && SCRIPT_EXIT=0; }
-fi
-```
-
-- If `$NEW_ARTICLES` is non-empty → proceed to Step 4 (validate)
-- If empty AND `$SCRIPT_EXIT` is 0 (script ran successfully but found no significant events) → call `safeoutputs___noop`
-- If empty AND `$SCRIPT_EXIT` is non-zero (script error) → see Fallback below
-
-### Fallback: Manual Generation (ONLY if script fails with error AND no articles created)
-
-Verify MCP first: `source scripts/mcp-setup.sh && echo "MCP_SERVER_URL=$MCP_SERVER_URL"` (expect `http://host.docker.internal:8080/mcp/riksdag-regering` for gh-aw v0.69+, or `:80` for legacy gh-aw <0.69 — port resolved dynamically). If the script genuinely fails, generate HTML manually using `printf` appends (never heredoc) to `news/YYYY-MM-DD-breaking-HHMM-{lang}.html`. Check elapsed time: if >= 38 min, skip and call noop.
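A minimal shape for those `printf` appends. Sketch only: the path and headline below are hypothetical, and a real fallback article must also satisfy the full template checklist that follows.

```bash
# Sketch: build a fallback article by appending lines — heredocs are banned in AWF bash.
mkdir -p news
OUT="news/2026-04-21-breaking-1442-en.html"        # hypothetical filename
TITLE="Riksdag Passes Defense Budget Amendment"    # hypothetical headline
: > "$OUT"   # truncate/create
printf '%s\n' '<!DOCTYPE html>' '<html lang="en">' '<head>' >> "$OUT"
printf '  <title>%s</title>\n' "$TITLE" >> "$OUT"  # %s keeps quoting safe
printf '%s\n' '  <link rel="stylesheet" href="../styles.css">' >> "$OUT"
printf '%s\n' '</head>' '<body>' '</body>' '</html>' >> "$OUT"
```

Each `printf '%s\n'` with multiple arguments emits one line per argument, so the file grows append-only and no quoting inside the HTML can break the shell.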
- -> 🔴 **CRITICAL — Correct HTML Template for Fallback Articles**: -> -> When generating HTML manually, you MUST match the template structure used by `scripts/article-template/template.ts`. Common errors in past fallback articles: -> -> 1. **Stylesheet**: Use `<link rel="stylesheet" href="../styles.css">` — **NOT** `../styles/news-article.css` (that file does not exist!) -> 2. **Favicons**: Include favicon links (`/images/favicon-32x32.png`, `/images/favicon-16x16.png`, etc.) -> 3. **Fonts**: Load Inter (body) AND Orbitron (headings) via Google Fonts with lazy-load pattern for Orbitron -> 4. **Anti-flash script**: Include theme detection script before closing `</head>` to prevent flash of wrong theme -> 5. **x-default hreflang**: Always include `<link rel="alternate" hreflang="x-default" href="...en.html">` -> 6. **BreadcrumbList**: Include Schema.org BreadcrumbList structured data -> 7. **Article class**: Use `<article id="main-content" class="news-article article-type-breaking">` -> 8. **Footer structure**: Use `<footer role="contentinfo">` (not `class="site-footer"`) with language grid, stats, quick links -> 9. **Table captions**: Include `<caption>` in all `<table>` elements for accessibility -> 10. **Theme toggle**: Include theme toggle button with proper ARIA attributes -> -> **Reference a working article** (e.g., the most recent `*-committee-reports-en.html` or `*-breaking-*-en.html`) for exact HTML structure. - ---- - -## Step 2.6: Economic Data Acquisition (MANDATORY) - -> **Contract**: [`.github/aw/ECONOMIC_DATA_CONTRACT.md`](../aw/ECONOMIC_DATA_CONTRACT.md) — the **single source of truth** for World Bank + SCB data, Chart.js visualisations, and AI commentary. Follow it exactly; the Step 6 quality gate (`scripts/validate-economic-context.ts`) **blocks the PR** if any element is missing. - -**What you MUST do before writing any prose:** - -1. 
`view analysis/worldbank/indicators-inventory.json` and pick every indicator whose `committees` / `policyAreas` match the day's source documents. -2. Call `world-bank.get-economic-data` / `get-social-data` / `get-health-data` / `get-education-data` for Sweden (10-year series for primary domains) and for DK/NO/FI/DE (5-year series for the top 3 indicators — needed for the Nordic comparison bars and radar). -3. Call `scb.search_tables` + `scb.query_table` using the committee → TAB mapping in `scripts/scb-context.ts`. **`language` MUST be `"sv"` or `"en"` — NEVER `"no"`** (SCB returns HTTP 400 "Unsupported language"). -4. Retry every World Bank call up to **3 times** on failure. Cache raw responses under `analysis/data/worldbank/<YYYY>/<indicator>-<country>.json` so later article types in the same daily run reuse the data. -5. Write `analysis/daily/<ARTICLE_DATE>/<ANALYSIS_SUBFOLDER>/economic-data.json` matching `analysis/schemas/economic-data.schema.json`: - -```jsonc -{ - "version": "1.0", - "articleType": "realtime-monitor", - "date": "<YYYY-MM-DD>", - "policyDomains": ["fiscal policy", "labor market"], - "dataPoints": [ - { "countryCode": "SWE", "countryName": "Sweden", "indicatorId": "NY.GDP.MKTP.KD.ZG", "date": "2024", "value": 0.82 }, - { "countryCode": "DNK", "countryName": "Denmark", "indicatorId": "NY.GDP.MKTP.KD.ZG", "date": "2024", "value": 1.75 } - ], - "commentary": "<will be filled in Step 3d>", - "source": { "worldBank": ["NY.GDP.MKTP.KD.ZG", "FP.CPI.TOTL.ZG"], "scb": ["TAB1291"] } -} -``` - -**Non-negotiable**: `dataPoints` MUST be non-empty. The HTML renderer (`scripts/data-transformers/content-generators/economic-dashboard-section.ts`) emits real Chart.js canvases only when the file exists with entries — otherwise the validator fails the PR. - -**Minimum coverage (enforced by the validator):** see the matrix in `ECONOMIC_DATA_CONTRACT.md` §"Coverage matrix" for this article type's chart count, commentary word minimum, and D3 requirement. 
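A cheap pre-flight for the non-empty rule above can look like this. Sketch only: the authoritative check is `scripts/validate-economic-context.ts`, and the grep is a whitespace-sensitive approximation, not a JSON parse.

```bash
# Sketch: fail fast when economic-data.json is missing or its dataPoints array is empty.
check_econ() {
  local econ="$1"
  if [ ! -f "$econ" ]; then
    echo "missing"; return 1
  fi
  if grep -q '"dataPoints": \[\]' "$econ"; then
    echo "empty-dataPoints"; return 1
  fi
  echo "ok"
}
# Usage (paths from Step 2.6):
# check_econ "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER/economic-data.json" || echo "🔴 validator will block the PR"
```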
- ---- -## Step 3b: AI Title, Meta Description & Analysis References - -> 🚨 MANDATORY. See SHARED_PROMPT_PATTERNS.md §"AI-DRIVEN TITLE & META DESCRIPTION GENERATION" and §"ANALYSIS FILE GITHUB REFERENCES". - -1. **Titles**: `[Active Verb] + [Specific Actor] + [Concrete Action]`. ❌ BANNED: "Breaking News: Latest Updates" -2. **Meta descriptions** (150-160 chars): summarize key intelligence. ❌ BANNED: starting with "Analysis of N documents" -3. **Add analysis references** HTML block (class="analysis-references") before footer, linking to `analysis/daily/$ARTICLE_DATE/realtime-$HHMM/` files. **🔴 MANDATORY — run deterministic injector BEFORE manual verify**: -```bash -# Discovers all eligible .md files in the realtime-HHMM folder (including reference-grade -# extensions: README, executive-brief, scenario-analysis, comparative-international, -# methodology-reflection) and repairs/inserts localized links into EN + SV articles. -# NOTE: `--rewrite` fixes missing or broken analysis-reference sections; it does not -# force-refresh an already valid-but-incomplete section to include newly added files. -# If this run added more analysis files after a valid section was created, use the -# script's full-regeneration mode if available, or remove the existing block and rerun. -npx tsx scripts/fix-analysis-references.ts --date "$ARTICLE_DATE" --rewrite -``` -Then verify: -```bash -for FILE in news/$ARTICLE_DATE-*breaking*-*.html news/$ARTICLE_DATE-*realtime*-*.html; do - [ -f "$FILE" ] && ! grep -q 'class="analysis-references"' "$FILE" && echo "🔴 MISSING: $FILE" -done -``` -4. **Update metadata**: `<title>`, `<meta name="description">`, `og:title`, `og:description`, `<h1>` all match AI title/description. - -## Step 3c: AI Content Quality Enforcement (v4.0 — MANDATORY) - -> See SHARED_PROMPT_PATTERNS.md §"AI ARTICLE CONTENT GENERATION" v4.0. Breaking news MUST rewrite ALL stub content. - -1. 
**Intelligence-grade lede**: specific development + key actor + quantified impact (SEK amounts, seat counts) + urgency -2. **Unique "Why It Matters"** per document — specific to content. ❌ BANNED: `"Touches on {X} policy..."` -3. **"Winners & Losers"** — named parties/ministers/sectors with evidence. ❌ BANNED: `"The political landscape remains fluid..."` -4. **Key Takeaways** — 3-5 bullets with confidence labels and dok_id citations -5. **Replace ALL `AI_MUST_REPLACE` markers** — ZERO markers in committed HTML -6. **Visualization data** — voting data: Chart.js vote distribution; budget/defense: include allocation data - -## Step 4: Validate & Translate - -```bash -UNTRANSLATED=0 -for article in news/*-{en,da,no,fi,de,fr,es,nl,ar,he,ja,ko,zh}.html; do - [ -f "$article" ] && grep -q 'data-translate="true"' "$article" && { echo "NEEDS TRANSLATION: $article"; UNTRANSLATED=$((UNTRANSLATED+1)); } -done -[ "$UNTRANSLATED" -gt 0 ] && echo "WARNING: $UNTRANSLATED articles need translation" -``` - -Translate `<span data-translate="true" lang="sv">text</span>` to target language and remove wrapper. Keep party abbreviations (S, M, SD, V, MP, C, L, KD) untranslated. - -```bash -source /tmp/gh-aw/agent/timing.env 2>/dev/null || true -if [ -z "$START_TIME" ]; then - date +%s > /tmp/start_time.txt - read START_TIME < /tmp/start_time.txt -fi -date +%s > /tmp/now_time.txt -read AW_NOW < /tmp/now_time.txt -ELAPSED=$(( AW_NOW - START_TIME )) -if [ "$ELAPSED" -ge 2100 ]; then - echo "⏱️ Time budget (35min) exceeded — skipping validation" - VALIDATION_EXIT=0 -else - timeout 300 bash scripts/validate-news-generation.sh - VALIDATION_EXIT=$? 
- if [ "$VALIDATION_EXIT" -eq 124 ]; then - echo "⚠️ Validation timed out — proceeding" - VALIDATION_EXIT=0 - fi - if [ "$VALIDATION_EXIT" -ne 0 ]; then - echo "Validation issues found — fix what you can, proceed if time allows" - fi -fi -``` - -## 🛡️ File Ownership Contract - -Content workflows: only create/modify **EN and SV** files (`news/YYYY-MM-DD-*-en.html`, `*-sv.html`). Validate with `npx tsx scripts/validate-file-ownership.ts content`. Fix violations: `git restore --staged --worktree -- <file>` (tracked) or `rm <file>` (untracked). - -### Branch Naming Convention - -Branch: `news/content/{YYYY-MM-DD}/breaking`. `safeoutputs___create_pull_request` handles this automatically. - -## Step 5: Commit & Create PR — ROLLING BATCHES (up to 3 PRs per run) - -### HOW SAFE PR CREATION WORKS - -⚠️ DO NOT use `git push` — the safe output tool handles publishing. Commit locally, then use the tool. - -> 🚨 **`safeoutputs___create_pull_request` freezes the patch at call time AND refreshes the MCP session.** A separate `safe_outputs` job (after the agent job ends) creates the branch and opens each PR. **Commits made after a given call are NOT added to that PR** (PR #1835). But because this workflow now has `create-pull-request.max: 3`, you can call the tool up to **3 times per run** — each call captures a new batch AND refreshes the Streamable HTTP MCP session idle timer. This is how we "keep the session alive" over the full 45-minute window. -> -> **Required pattern (fixed after run 24722758908):** -> 1. **🫀 PR #1 — Analysis-only Heartbeat (minute 13–18 — MANDATORY first call, session heartbeat #1)**: Commit analysis artifacts from Pass 1 only. **Do NOT write articles first.** Title: `🫀 Heartbeat - Realtime Monitor - $ARTICLE_DATE $HHMM`. Labels: `["analysis-only", "realtime-monitor", "heartbeat"]`. This refreshes the safeoutputs MCP session idle timer. -> 2. After PR #1 succeeds, run `git checkout main` (or any branch other than the PR branch) before editing further files. 
Commits stacked onto the same branch after the call are silently discarded from the frozen patch (see PR #1835). -> 3. **PR #2 — Full Articles (minute 38–43 — session heartbeat #2)**: Full EN + SV articles with AI-written content + Pass 2 enriched analysis + fixed references. Title: `🔴 Breaking $HHMM: {headline} - $ARTICLE_DATE`. -> 4. **PR #3 (optional, if additional HIGH/MEDIUM events discovered later in the run)**: extra article(s) on a new branch. -> 5. Repo-memory updates (`/tmp/gh-aw/repo-memory/default/*.json`) are artifact uploads, not PR content — safe to run after the final PR call. -> 6. If `safeoutputs___create_pull_request` returns `session not found` on any call, every subsequent safeoutputs call will also fail — recovery is impossible. The analysis-heartbeat pattern prevents this by exercising the session at minute 13–18, well within the ~30–35 min idle lifetime. **NEVER delay PR #1 past minute 18 waiting for article content — articles go in PR #2.** - -```bash -# Stage articles and analysis — DATE-SCOPED to stay within safe-outputs 100-file PR limit. -# 🚨 PAST INCIDENT (run 24719881413, 2026-04-21): broad globs like -# `news/*breaking*.html news/*realtime*.html news/*monitor*.html` matched 222+ historical -# articles across the whole archive. Any prior script that touched those files (Playwright -# validation, auto-fix, translation pass) caused `git add` to stage 602/604 files → -# `E003: Cannot create pull request with more than 100 files`. Fix: scope EVERY `git add` -# to `$ARTICLE_DATE` so only today's new/modified files are included. -[ -f /tmp/hhmm.env ] && . 
/tmp/hhmm.env -if [ -z "$HHMM" ]; then - date -u +%H%M > /tmp/hhmm_val.txt - read HHMM < /tmp/hhmm_val.txt -fi -[ -z "$ARTICLE_DATE" ] && { date -u +%Y-%m-%d > /tmp/today.txt; read ARTICLE_DATE < /tmp/today.txt; } -# Stage ONLY today's EN + SV realtime/breaking/monitor articles (translations run via news-translate) -git add "news/$ARTICLE_DATE"-*realtime*-en.html "news/$ARTICLE_DATE"-*realtime*-sv.html 2>/dev/null || true -git add "news/$ARTICLE_DATE"-*breaking*-en.html "news/$ARTICLE_DATE"-*breaking*-sv.html 2>/dev/null || true -git add "news/$ARTICLE_DATE"-*monitor*-en.html "news/$ARTICLE_DATE"-*monitor*-sv.html 2>/dev/null || true -git add news/metadata/ 2>/dev/null || true -git add "analysis/daily/$ARTICLE_DATE/realtime-$HHMM/" || true -# 🚫 DO NOT stage analysis/data/ by default — it is an MCP response cache populated by -# download-parliamentary-data.ts (6 doc types × ~40 files = 240+). Committing it caused -# E003 "received 258 files" in news-motions run 24653843681 (PR #1867). -git reset HEAD -- analysis/data/ 2>/dev/null || true -# 🚫 DO NOT stage analysis/weekly/ by default — it is a cumulative rollup maintained by -# the weekly-review workflow. Including it here doubles the file count for no benefit. -git reset HEAD -- analysis/weekly/ 2>/dev/null || true -# 🛡️ Defensive filter: unstage any news/ files that do NOT match $ARTICLE_DATE. Catches -# cases where an earlier bash step accidentally modified historical articles and their -# paths leaked in via `git add news/metadata/` globbing or similar subtle issues. 
-git diff --cached --name-only > /tmp/staged_files.txt -awk -v today="$ARTICLE_DATE" '$0 ~ "^news/[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]" && $0 !~ today {print}' /tmp/staged_files.txt > /tmp/historical_news.txt -if [ -s /tmp/historical_news.txt ]; then - HIST_COUNT=0 - awk 'END{print NR}' /tmp/historical_news.txt > /tmp/hist_count.txt - read HIST_COUNT < /tmp/hist_count.txt 2>/dev/null || true - echo "⚠️ Unstaging $HIST_COUNT historical news/ files that do not match $ARTICLE_DATE" - xargs -a /tmp/historical_news.txt git reset HEAD -- 2>/dev/null || true -fi -# Enforce safe-outputs 100-file PR limit (hard cap: 100; soft threshold: 90) -git diff --cached --name-only > /tmp/staged_files.txt -awk 'END{print NR}' /tmp/staged_files.txt > /tmp/staged_count.txt -STAGED_COUNT=0 -read STAGED_COUNT < /tmp/staged_count.txt 2>/dev/null || true -echo "📊 Staged file count: $STAGED_COUNT (limit: 100)" -if [ "$STAGED_COUNT" -gt 90 ]; then - echo "⚠️ $STAGED_COUNT files exceeds safe threshold. Removing analysis/daily documents/ subfolder." - git reset HEAD -- "analysis/daily/$ARTICLE_DATE/realtime-$HHMM/documents/" 2>/dev/null || true - git diff --cached --name-only > /tmp/staged_files.txt - awk 'END{print NR}' /tmp/staged_files.txt > /tmp/staged_count.txt - read STAGED_COUNT < /tmp/staged_count.txt 2>/dev/null || true -fi -if [ "$STAGED_COUNT" -gt 90 ]; then - echo "⚠️ Still $STAGED_COUNT files. Removing news/metadata/." 
- git reset HEAD -- news/metadata/ 2>/dev/null || true - git diff --cached --name-only > /tmp/staged_files.txt - awk 'END{print NR}' /tmp/staged_files.txt > /tmp/staged_count.txt - read STAGED_COUNT < /tmp/staged_count.txt 2>/dev/null || true -fi -echo "📊 Final staged file count: $STAGED_COUNT" -git commit -m "🔴 Breaking $HHMM: {headline} - $ARTICLE_DATE" -``` - -Then **immediately** call (as a direct tool call, NOT via bash): -``` -safeoutputs___create_pull_request({ - "title": "🔴 Breaking: {headline} - {date}", - "body": "## Breaking News\n\nArticles: {count}\nLanguages: {list}\nSources: riksdag-regering-mcp", - "labels": ["automated-news", "breaking-news", "needs-editorial-review"] -}) -``` - -## Required Skills - -Consult as needed — do NOT read all files upfront: -- **Skills:** `.github/skills/editorial-standards/SKILL.md`, `.github/skills/swedish-political-system/SKILL.md`, `.github/skills/legislative-monitoring/SKILL.md`, `.github/skills/riksdag-regering-mcp/SKILL.md`, `.github/skills/language-expertise/SKILL.md`, `.github/skills/gh-aw-safe-outputs/SKILL.md` -- **Analysis:** `scripts/prompts/v2/political-analysis.md`, `per-file-intelligence-analysis.md`, `stakeholder-perspectives.md`, `quality-criteria.md` -- **Methodology:** `analysis/methodologies/ai-driven-analysis-guide.md` (v5.0) + `analysis/templates/per-file-political-intelligence.md` - -## 📊 MANDATORY Multi-Step AI Analysis Framework +# 🚨 Realtime Monitor -### Article Type Isolation +Real-time breaking-news monitor. Scans Riksdag + Regering for urgent updates and produces breaking-news articles with Playwright validation. Tier-C reference-grade output (14 artifacts). -> 🚨 **This workflow writes analysis ONLY to `analysis/daily/$ARTICLE_DATE/realtime-$HHMM/`**. NEVER write to the parent date directory or another article type's folder. See SHARED_PROMPT_PATTERNS.md "Article Type Isolation" section. 
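The isolation rule above can be checked mechanically before committing. The sketch below is illustrative only (`path_allowed` is a name invented for this example, not a helper shipped by the workflow), but it shows the shape of a guard that rejects any analysis path outside the per-run folder:

```shell
# Hypothetical guard, not part of the workflow. Returns 0 only when the path
# sits inside this run's isolation folder: analysis/daily/$date/realtime-$hhmm/.
path_allowed() {
  path=$1; date=$2; hhmm=$3
  case "$path" in
    "analysis/daily/$date/realtime-$hhmm/"*) return 0 ;;
    *) return 1 ;;
  esac
}

# Assumed usage: scan staged analysis files and flag anything outside the folder.
# git diff --cached --name-only | while read -r f; do
#   case "$f" in analysis/*) path_allowed "$f" "$ARTICLE_DATE" "$HHMM" || echo "VIOLATION: $f" ;; esac
# done
```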
+## Tier-C (reference-grade) requirements -### Standardised Analysis Depth Gate +This workflow imports `../prompts/ext/tier-c-aggregation.md`. Produce **all 14 artifacts** (9 core + 5 Tier-C) and cross-reference sibling analyses. See the extension for the full rules. -> **Default: `deep`**. See SHARED_PROMPT_PATTERNS.md §"Standardised Analysis Depth Gate" for full table. Summary: standard=10min/≥1 Mermaid/≥2 risks; **deep=15min/≥2 Mermaid/≥4 risks** (default); comprehensive=20min/≥3 Mermaid/≥6 risks. +## What this workflow does -**8 mandatory stakeholder groups**: Citizens, Government Coalition, Opposition Bloc, Business/Industry, Civil Society, International/EU, Judiciary/Constitutional, Media/Public Opinion — each analyzed with specific evidence (dok_id, vote counts, named politicians). +- **Article type**: `breaking` +- **Analysis subfolder**: `analysis/daily/$ARTICLE_DATE/realtime-$HHMM/` +- **Core languages produced**: `en`, `sv` (remaining 12 languages dispatched to `news-translate`) +- **One pull request per run** containing analysis + articles + visualisation data. -> Read `analysis_depth` input first (default: `deep`). Breaking news profile: SWOT=quick 1-paragraph, Dashboard=not required, AI iterations: 1 (standard)/2 (deep)/3 (comprehensive). 
+## Time budget (60 min, minimum 45 min of real work)

-### Phase 1–3 Analysis Framework

+| Minutes | Phase | Module |
+|---------|-------|--------|
+| 0–2 | MCP pre-warm + `get_sync_status` | 02 |
+| 2–6 | Download data + catalogue | 03 |
+| 6–25 | Analysis Pass 1 (methodology read + per-doc analyses + 9 artifacts) | 04 |
+| 25–35 | Analysis Pass 2 (read-back + improvements) | 04 |
+| 35–37 | Analysis Gate | 05 |
+| 37–48 | Article Pass 1 + Pass 2 (EN, SV) | 06 |
+| 48–55 | Visual + link validation | 06 |
+| 55–60 | Stage, commit, **ONE** `safeoutputs___create_pull_request` | 07 |

-See `SHARED_PROMPT_PATTERNS.md` §"Standardised Analysis Depth Gate" and §"MANDATORY: AI-Driven Analysis Using Methods & Templates" for Phase 1 (event detection + significance scoring), Phase 2 (depth enhancement: Quick SWOT, Activity Summary, quality gate: ≥400 words, no identical why-it-matters), and Phase 3 (final quality gate bash + `validate-news-generation.sh`).
+Trim scope before cutting quality. Never open a second PR to "save" partial work — there is no second PR. 
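One way to make the budget actionable mid-run is a tiny lookup from elapsed minutes to the expected phase. This helper is a sketch written for this document (the workflow does not ship it); the boundaries are taken directly from the table above:

```shell
# Illustrative helper only; phase boundaries mirror the time-budget table.
phase_for_minute() {
  m=$1
  if   [ "$m" -lt 2 ];  then echo "mcp-prewarm"
  elif [ "$m" -lt 6 ];  then echo "download"
  elif [ "$m" -lt 25 ]; then echo "analysis-pass-1"
  elif [ "$m" -lt 35 ]; then echo "analysis-pass-2"
  elif [ "$m" -lt 37 ]; then echo "analysis-gate"
  elif [ "$m" -lt 48 ]; then echo "article-passes"
  elif [ "$m" -lt 55 ]; then echo "validation"
  else                       echo "commit-and-pr"
  fi
}
phase_for_minute 30   # → analysis-pass-2
```

If the actual phase lags the expected one by more than a few minutes, that is the signal to trim scope rather than cut quality.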
-### Non-EN/SV Article Requirements: -- ALL h1/h2/h3 MUST be in target language; ALL body paragraphs MUST be in target language -- Meta keywords translated; ZERO `data-translate="true"` spans in final output -- RTL (ar, he): `dir="rtl"` on `<html>`; CJK (ja, ko, zh): native script only -- Nordic (da, no, fi): language-specific parliamentary terms; EU (de, fr, es, nl): formal register -- Localized headings: use `CONTENT_LABELS[lang].whyItMatters`, `.whatToWatch`, `.keyTakeaways`, `.politicalContext` from `scripts/data-transformers/constants/content-labels-part1.ts` -- Post-generation: run `npx tsx scripts/validate-news-translations.ts`; fix files with >3 English phrases in non-EN versions -- Party abbreviations (S, M, SD, V, MP, C, L, KD) NEVER translated; ZERO TOLERANCE for language mixing +## Inputs -## Error Handling +- `article_date` — override date (defaults to today) +- `force_generation` — regenerate even if today's article exists (analysis is always refreshed regardless) +- `languages` — core content languages (default `en,sv`) +- `analysis_depth` — `standard` | `deep` (default) | `comprehensive` -| Scenario | Cause | Fix | -|----------|-------|-----| -| Tool not found | MCP server not initialized | Run `source scripts/mcp-setup.sh && echo "MCP_SERVER_URL=$MCP_SERVER_URL"` — source and npx MUST be chained with `&&` on one line; expected output: `MCP_SERVER_URL=http://host.docker.internal:8080/mcp/riksdag-regering` (port `8080` for gh-aw v0.69+, `80` for legacy) | -| Empty results | No significant events detected in monitoring window | Check if analysis artifacts exist — if yes, commit them and create analysis-only PR; if no, call `safeoutputs___noop` | -| Calendar API error | Riksdag calendar API returns HTML instead of JSON (known intermittent issue) | Use `search_dokument` with date params as fallback; flag error in noop message; do NOT treat as "no events" — evaluate all other sources | -| Timeout | MCP server response exceeds `timeout-minutes` | Reduce 
query scope or increase timeout | -| Script timeout | Generation script exceeds 20-minute limit | Proceed with whatever was generated; the `timeout 1200` wrapper kills the script | -| Stale data | `hoursSinceSync > 48` from `get_sync_status()` | Add disclaimer noting data staleness; proceed with cached data | -| Time running out | Elapsed >= 13 minutes and no safeoutputs call yet | IMMEDIATELY commit whatever analysis artifacts exist in `analysis/daily/$ARTICLE_DATE/realtime-$HHMM/` and call `safeoutputs___create_pull_request` (PR #1 analysis-only Heartbeat, title `🫀 Heartbeat - Realtime Monitor - {date} {HHMM}`). Then `git checkout main` and continue Pass 2 + articles for PR #2. Do NOT wait for articles. Do NOT noop if files exist. | -| safeoutputs `session not found` | Delayed the **first** `safeoutputs___create_pull_request` past the ~30–35 min session lifetime (see run 24672037751 and **run 24722758908, 2026-04-21** where article-writing blocked PR #1 for 22 min). Once the session dies, ALL subsequent intent calls fail. | UNRECOVERABLE once it happens. **Prevention: call analysis-only Heartbeat PR #1 by minute 18 (BEFORE writing articles), then PR #2 with full articles by minute 43.** Each call refreshes the session idle timer. `create-pull-request.max: 3` is configured specifically to enable this keep-alive pattern. | +## Dedup & analysis-only path -⚠️ **CRITICAL SAFETY NET**: Before EVERY bash block and EVERY tool call, mentally check: "Have I called `safeoutputs___create_pull_request` yet?" If more than **13 minutes** have elapsed and PR #1 (analysis-only Heartbeat) has not been created, stop all analysis work, commit whatever analysis artifacts exist (NO articles required), and call `safeoutputs___create_pull_request` IMMEDIATELY — this both captures work AND keeps the MCP session alive for PR #2 (full articles) at minute 38–43. 
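The 13-minute safety net above reduces to one arithmetic check. A minimal sketch, assuming the run's start epoch was captured at minute 0 (the function name and the `RUN_START_EPOCH` variable are inventions of this example, not part of the workflow):

```shell
# Hedged sketch: decides whether the PR #1 heartbeat deadline (13 min) has passed.
needs_heartbeat() {
  start_epoch=$1; now_epoch=$2
  elapsed_min=$(( (now_epoch - start_epoch) / 60 ))
  [ "$elapsed_min" -ge 13 ]
}

# Assumed usage inside the run loop, with /tmp/pr1_done touched after PR #1:
# if needs_heartbeat "$RUN_START_EPOCH" "$(date +%s)" && [ ! -f /tmp/pr1_done ]; then
#   echo "⏰ heartbeat overdue: commit analysis artifacts and call create_pull_request now"
# fi
```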
+If articles for `$ARTICLE_DATE` + `breaking` already exist **and** `force_generation=false`: -🎯 **Now begin: Check date, warm up MCP with `get_sync_status()`, detect events, generate articles with the script, and call a safe output tool.** +- Still run the full analysis pipeline (modules 03 → 04 → 05). +- Commit the analysis. +- Open the single PR with title `📊 Analysis Only — Realtime Monitor — $ARTICLE_DATE` and label `analysis-only`. -## Step 3d: Economic Commentary (MANDATORY) +Analysis is the primary product — a run never "does nothing" just because articles exist. -> After Step 3c and **before** calling `safeoutputs.create_pull_request`, re-open `economic-data.json` and replace the placeholder `commentary` string with a 2–4 sentence paragraph that: -> - cites **2–3 concrete numeric values** from `dataPoints`; -> - ties the numbers to the day's political developments (not definitions of indicators); -> - is written in plain English (translations are produced downstream by `news-translate`); -> - meets the minimum word count in the coverage matrix for this article type. -> -> Banned phrasings (the multi-dim quality score flags these): "The political landscape remains fluid…", "Touches on X policy…", pure indicator definitions. -> -> Full rules: [`.github/aw/ECONOMIC_DATA_CONTRACT.md`](../aw/ECONOMIC_DATA_CONTRACT.md) §"Writing the AI commentary — workflow Step 3d". +All other rules (bash format, AWF shell safety, MCP access, download pipeline, analysis methodology & gate, article generation, commit & PR policy) live in the imported modules. 
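The dedup decision above is a pure function of two flags. A sketch for illustration only (`decide_mode` is a name invented here):

```shell
# Illustrative only: maps (article-exists, force_generation) to the run mode.
decide_mode() {
  exists=$1; force=$2
  if [ "$exists" = "true" ] && [ "$force" = "false" ]; then
    echo "analysis-only"
  else
    echo "full"
  fi
}

# Whichever mode wins, the analysis pipeline (modules 03 → 05) still runs;
# only article generation is skipped in analysis-only mode.
decide_mode true false   # → analysis-only
```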
diff --git a/.github/workflows/news-translate.lock.yml b/.github/workflows/news-translate.lock.yml index 5f70c016f..b9fa7890c 100644 --- a/.github/workflows/news-translate.lock.yml +++ b/.github/workflows/news-translate.lock.yml @@ -1,4 +1,4 @@ -# gh-aw-metadata: {"schema_version":"v3","frontmatter_hash":"0876d3303299ab8805e163728aa777d5e699dadbdee5ac726cd7d33d3a33b26d","compiler_version":"v0.69.2","agent_id":"copilot","agent_model":"claude-opus-4.7"} +# gh-aw-metadata: {"schema_version":"v3","frontmatter_hash":"e0b4de7e3b8000d4d0183e5d5dfc98bc449e515864d6479d5ac7d57c643c239f","compiler_version":"v0.69.2","agent_id":"copilot","agent_model":"claude-opus-4.7"} # gh-aw-manifest: {"version":1,"secrets":["COPILOT_GITHUB_TOKEN","GH_AW_CI_TRIGGER_TOKEN","GH_AW_GITHUB_MCP_SERVER_TOKEN","GH_AW_GITHUB_TOKEN","GITHUB_TOKEN"],"actions":[{"repo":"actions/checkout","sha":"de0fac2e4500dabe0009e67214ff5f5447ce83dd","version":"v6.0.2"},{"repo":"actions/download-artifact","sha":"3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c","version":"v8.0.1"},{"repo":"actions/github-script","sha":"373c709c69115d41ff229c7e5df9f8788daa9553","version":"v9"},{"repo":"actions/setup-node","sha":"6044e13b5dc448c55e2357c09f80417699197238","version":"6044e13b5dc448c55e2357c09f80417699197238"},{"repo":"actions/upload-artifact","sha":"043fb46d1a93c77aae656e7c1c64a875d1fc6a0a","version":"v7.0.1"},{"repo":"github/gh-aw-actions/setup","sha":"dca90cae5e2ec0ef2275f97efcb832793c86e082","version":"v0.69.2"}],"containers":[{"image":"alpine:latest"},{"image":"ghcr.io/github/gh-aw-firewall/agent:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/api-proxy:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/squid:0.25.26"},{"image":"ghcr.io/github/gh-aw-mcpg:v0.2.26"},{"image":"ghcr.io/github/github-mcp-server:v1.0.0"},{"image":"node:25-alpine"},{"image":"node:lts-alpine"}]} # ___ _ _ # / _ \ | | (_) @@ -24,6 +24,13 @@ # # Dedicated translation workflow for news articles. 
Generates high-quality translations for all non-core languages. Dispatched by content workflows or run manually/on schedule to translate untranslated articles. # +# Resolved workflow manifest: +# Imports: +# - ../prompts/00-base-contract.md +# - ../prompts/01-bash-and-shell-safety.md +# - ../prompts/02-mcp-access.md +# - ../prompts/07-commit-and-pr.md +# # Secrets used: # - COPILOT_GITHUB_TOKEN # - GH_AW_CI_TRIGGER_TOKEN @@ -190,11 +197,6 @@ jobs: GH_AW_GITHUB_ACTOR: ${{ github.actor }} GH_AW_GITHUB_EVENT_COMMENT_ID: ${{ github.event.comment.id }} GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER: ${{ github.event.discussion.number }} - GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH: ${{ github.event.inputs.analysis_depth }} - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE: ${{ github.event.inputs.article_date }} - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_TYPE: ${{ github.event.inputs.article_type }} - GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES: ${{ github.event.inputs.languages }} - GH_AW_GITHUB_EVENT_INPUTS_SOURCE_LANGUAGE: ${{ github.event.inputs.source_language }} GH_AW_GITHUB_EVENT_ISSUE_NUMBER: ${{ github.event.issue.number }} GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: ${{ github.event.pull_request.number }} GH_AW_GITHUB_REPOSITORY: ${{ github.repository }} @@ -205,21 +207,21 @@ jobs: run: | bash "${RUNNER_TEMP}/gh-aw/actions/create_prompt_first.sh" { - cat << 'GH_AW_PROMPT_b4c716eb26775bbb_EOF' + cat << 'GH_AW_PROMPT_a21f108b44ae4e0e_EOF' <system> - GH_AW_PROMPT_b4c716eb26775bbb_EOF + GH_AW_PROMPT_a21f108b44ae4e0e_EOF cat "${RUNNER_TEMP}/gh-aw/prompts/xpia.md" cat "${RUNNER_TEMP}/gh-aw/prompts/temp_folder_prompt.md" cat "${RUNNER_TEMP}/gh-aw/prompts/markdown.md" cat "${RUNNER_TEMP}/gh-aw/prompts/agentic_workflows_guide.md" cat "${RUNNER_TEMP}/gh-aw/prompts/repo_memory_prompt.md" cat "${RUNNER_TEMP}/gh-aw/prompts/safe_outputs_prompt.md" - cat << 'GH_AW_PROMPT_b4c716eb26775bbb_EOF' + cat << 'GH_AW_PROMPT_a21f108b44ae4e0e_EOF' <safe-output-tools> - Tools: add_comment, create_pull_request(max:5), 
missing_tool, missing_data, noop - GH_AW_PROMPT_b4c716eb26775bbb_EOF + Tools: add_comment, create_pull_request, missing_tool, missing_data, noop + GH_AW_PROMPT_a21f108b44ae4e0e_EOF cat "${RUNNER_TEMP}/gh-aw/prompts/safe_outputs_create_pull_request.md" - cat << 'GH_AW_PROMPT_b4c716eb26775bbb_EOF' + cat << 'GH_AW_PROMPT_a21f108b44ae4e0e_EOF' </safe-output-tools> <github-context> The following GitHub context information is available for this workflow: @@ -249,22 +251,21 @@ jobs: {{/if}} </github-context> - GH_AW_PROMPT_b4c716eb26775bbb_EOF + GH_AW_PROMPT_a21f108b44ae4e0e_EOF cat "${RUNNER_TEMP}/gh-aw/prompts/github_mcp_tools_with_safeoutputs_prompt.md" - cat << 'GH_AW_PROMPT_b4c716eb26775bbb_EOF' + cat << 'GH_AW_PROMPT_a21f108b44ae4e0e_EOF' </system> + {{#runtime-import .github/prompts/00-base-contract.md}} + {{#runtime-import .github/prompts/01-bash-and-shell-safety.md}} + {{#runtime-import .github/prompts/02-mcp-access.md}} + {{#runtime-import .github/prompts/07-commit-and-pr.md}} {{#runtime-import .github/workflows/news-translate.md}} - GH_AW_PROMPT_b4c716eb26775bbb_EOF + GH_AW_PROMPT_a21f108b44ae4e0e_EOF } > "$GH_AW_PROMPT" - name: Interpolate variables and render templates uses: actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9 env: GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt - GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH: ${{ github.event.inputs.analysis_depth }} - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE: ${{ github.event.inputs.article_date }} - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_TYPE: ${{ github.event.inputs.article_type }} - GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES: ${{ github.event.inputs.languages }} - GH_AW_GITHUB_EVENT_INPUTS_SOURCE_LANGUAGE: ${{ github.event.inputs.source_language }} with: script: | const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); @@ -278,11 +279,6 @@ jobs: GH_AW_GITHUB_ACTOR: ${{ github.actor }} GH_AW_GITHUB_EVENT_COMMENT_ID: ${{ github.event.comment.id }} 
GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER: ${{ github.event.discussion.number }} - GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH: ${{ github.event.inputs.analysis_depth }} - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE: ${{ github.event.inputs.article_date }} - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_TYPE: ${{ github.event.inputs.article_type }} - GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES: ${{ github.event.inputs.languages }} - GH_AW_GITHUB_EVENT_INPUTS_SOURCE_LANGUAGE: ${{ github.event.inputs.source_language }} GH_AW_GITHUB_EVENT_ISSUE_NUMBER: ${{ github.event.issue.number }} GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: ${{ github.event.pull_request.number }} GH_AW_GITHUB_REPOSITORY: ${{ github.repository }} @@ -308,11 +304,6 @@ jobs: GH_AW_GITHUB_ACTOR: process.env.GH_AW_GITHUB_ACTOR, GH_AW_GITHUB_EVENT_COMMENT_ID: process.env.GH_AW_GITHUB_EVENT_COMMENT_ID, GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER: process.env.GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER, - GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH: process.env.GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH, - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE: process.env.GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE, - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_TYPE: process.env.GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_TYPE, - GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES: process.env.GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES, - GH_AW_GITHUB_EVENT_INPUTS_SOURCE_LANGUAGE: process.env.GH_AW_GITHUB_EVENT_INPUTS_SOURCE_LANGUAGE, GH_AW_GITHUB_EVENT_ISSUE_NUMBER: process.env.GH_AW_GITHUB_EVENT_ISSUE_NUMBER, GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: process.env.GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER, GH_AW_GITHUB_REPOSITORY: process.env.GH_AW_GITHUB_REPOSITORY, @@ -414,7 +405,7 @@ jobs: run: | npm ci --prefer-offline --no-audit - name: Pre-warm MCP server (Render.com cold start mitigation) - run: "echo \"🔥 Pre-warming riksdag-regering MCP server via MCP protocol...\"\nMCP_URL=\"https://riksdag-regering-ai.onrender.com/mcp\"\nWARM=false\nfor i in 1 2 3 4 5 6; do\n RESP=$(curl -sf --max-time 30 -X POST \\\n -H 
\"Content-Type: application/json\" \\\n -d '{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\",\"params\":{}}' \\\n \"$MCP_URL\" 2>/dev/null) || true\n if echo \"$RESP\" | grep -q '\"tools\"'; then\n TOOL_COUNT=$(echo \"$RESP\" | grep -o '\"name\"' | wc -l)\n echo \"✅ MCP server responded on attempt $i with $TOOL_COUNT tools registered\"\n WARM=true\n break\n fi\n echo \"⏳ Attempt $i/6 — server may be cold-starting, waiting 20s...\"\n sleep 20\ndone\nif [ \"$WARM\" = \"false\" ]; then\n echo \"⚠️ MCP server did not respond after 6 attempts — agent will retry via in-prompt health gate\"\nfi\necho \"🔄 Starting background keep-alive pinger (every 30s, max 55 min — covers full 60-min workflow through safe-output PR creation)...\"\nKEEP_ALIVE_START=$(date +%s)\nKEEP_ALIVE_END=$((KEEP_ALIVE_START + 3300))\nexport MCP_URL KEEP_ALIVE_END\nnohup bash -c '\n while :; do\n NOW=$(date +%s)\n if [ \"$NOW\" -ge \"$KEEP_ALIVE_END\" ]; then\n break\n fi\n curl -sf --max-time 10 -X POST \\\n -H \"Content-Type: application/json\" \\\n -d \"{\\\"jsonrpc\\\":\\\"2.0\\\",\\\"id\\\":1,\\\"method\\\":\\\"tools/list\\\",\\\"params\\\":{}}\" \\\n \"$MCP_URL\" -o /dev/null 2>/dev/null || true\n sleep 30\n done\n' </dev/null >/tmp/mcp-keepalive.log 2>&1 &\nKEEP_ALIVE_PID=$!\ndisown \"$KEEP_ALIVE_PID\" 2>/dev/null || true\necho \"Keep-alive PID: $KEEP_ALIVE_PID (auto-exits after 55 min; log: /tmp/mcp-keepalive.log)\"\n" + run: "echo \"🔥 Pre-warming riksdag-regering MCP server via MCP protocol...\"\nMCP_URL=\"https://riksdag-regering-ai.onrender.com/mcp\"\nWARM=false\nfor i in 1 2 3 4 5 6; do\n RESP=$(curl -sf --max-time 30 -X POST \\\n -H \"Content-Type: application/json\" \\\n -d '{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\",\"params\":{}}' \\\n \"$MCP_URL\" 2>/dev/null) || true\n if echo \"$RESP\" | grep -q '\"tools\"'; then\n TOOL_COUNT=$(echo \"$RESP\" | grep -o '\"name\"' | wc -l)\n echo \"✅ MCP server responded on attempt $i with $TOOL_COUNT tools registered\"\n 
WARM=true\n break\n fi\n echo \"⏳ Attempt $i/6 — server may be cold-starting, waiting 20s...\"\n sleep 20\ndone\nif [ \"$WARM\" = \"false\" ]; then\n echo \"⚠️ MCP server did not respond after 6 attempts — agent will retry via in-prompt health gate\"\nfi\n" - name: Pre-flight external endpoint reachability check (runs before MCP Gateway) run: "echo \"🔍 Network Diagnostics — $(date -u '+%Y-%m-%dT%H:%M:%SZ')\"\necho \"═══════════════════════════════════════════\"\necho \"\"\necho \"📡 DNS Resolution Tests:\"\nfor domain in riksdag-regering-ai.onrender.com api.scb.se api.worldbank.org data.riksdagen.se www.riksdagen.se www.regeringen.se; do\n if nslookup \"$domain\" >/dev/null 2>&1; then\n IP=$(nslookup \"$domain\" 2>/dev/null | grep -A1 \"Name:\" | grep \"Address:\" | head -1 | awk '{print $2}')\n echo \" ✅ $domain → $IP\"\n else\n echo \" ❌ $domain — DNS FAILED\"\n fi\ndone\necho \"\"\necho \"🌐 HTTPS Connectivity Tests:\"\nfor url in \\\n \"https://riksdag-regering-ai.onrender.com/mcp\" \\\n \"https://api.scb.se/OV0104/v2beta\" \\\n \"https://api.worldbank.org/v2/country/SE?format=json\" \\\n \"https://data.riksdagen.se/dokumentlista/?sok=test&doktyp=bet&utformat=json&a=1\" \\\n; do\n HTTP_CODE=$(curl -s -o /dev/null -w \"%{http_code}\" --max-time 10 \"$url\" 2>/dev/null || echo \"000\")\n DOMAIN=$(echo \"$url\" | sed 's|https://||' | cut -d/ -f1)\n if [ \"$HTTP_CODE\" -ge 200 ] && [ \"$HTTP_CODE\" -lt 400 ]; then\n echo \" ✅ $DOMAIN → HTTP $HTTP_CODE\"\n elif [ \"$HTTP_CODE\" = \"000\" ]; then\n echo \" ❌ $DOMAIN → TIMEOUT/UNREACHABLE\"\n else\n echo \" ⚠️ $DOMAIN → HTTP $HTTP_CODE\"\n fi\ndone\necho \"\"\necho \"🔌 MCP Server Tool Count:\"\nTOOL_RESP=$(curl -sf --max-time 15 -X POST \\\n -H \"Content-Type: application/json\" \\\n -d '{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\",\"params\":{}}' \\\n \"https://riksdag-regering-ai.onrender.com/mcp\" 2>/dev/null) || TOOL_RESP=\"\"\nif echo \"$TOOL_RESP\" | grep -q '\"tools\"'; then\n TOOL_COUNT=$(echo 
\"$TOOL_RESP\" | grep -o '\"name\"' | wc -l)\n echo \" ✅ riksdag-regering MCP: $TOOL_COUNT tools registered\"\nelse\n echo \" ❌ riksdag-regering MCP: No tools response (server may still be starting)\"\nfi\necho \"\"\necho \"═══════════════════════════════════════════\"\n" - env: @@ -525,16 +516,16 @@ jobs: mkdir -p "${RUNNER_TEMP}/gh-aw/safeoutputs" mkdir -p /tmp/gh-aw/safeoutputs mkdir -p /tmp/gh-aw/mcp-logs/safeoutputs - cat > "${RUNNER_TEMP}/gh-aw/safeoutputs/config.json" << 'GH_AW_SAFE_OUTPUTS_CONFIG_ec2a04f81784c573_EOF' - {"add_comment":{"max":1},"create_pull_request":{"draft":false,"expires":336,"labels":["agentic-news","translation"],"max":5,"max_patch_size":4096,"protected_files":["package.json","bun.lockb","bunfig.toml","deno.json","deno.jsonc","deno.lock","global.json","NuGet.Config","Directory.Packages.props","mix.exs","mix.lock","go.mod","go.sum","stack.yaml","stack.yaml.lock","pom.xml","build.gradle","build.gradle.kts","settings.gradle","settings.gradle.kts","gradle.properties","package-lock.json","yarn.lock","pnpm-lock.yaml","npm-shrinkwrap.json","requirements.txt","Pipfile","Pipfile.lock","pyproject.toml","setup.py","setup.cfg","Gemfile","Gemfile.lock","uv.lock","CODEOWNERS","AGENTS.md","CLAUDE.md","GEMINI.md"],"protected_path_prefixes":[".github/",".agents/"]},"create_report_incomplete_issue":{},"missing_data":{},"missing_tool":{},"noop":{"max":1,"report-as-issue":"true"},"push_repo_memory":{"memories":[{"dir":"/tmp/gh-aw/repo-memory/default","id":"default","max_file_count":50,"max_file_size":51200,"max_patch_size":51200}]},"report_incomplete":{}} - GH_AW_SAFE_OUTPUTS_CONFIG_ec2a04f81784c573_EOF + cat > "${RUNNER_TEMP}/gh-aw/safeoutputs/config.json" << 'GH_AW_SAFE_OUTPUTS_CONFIG_3a0487753dbffc3f_EOF' + 
{"add_comment":{"max":1},"create_pull_request":{"draft":false,"expires":336,"labels":["agentic-news","translation"],"max":1,"max_patch_size":4096,"protected_files":["package.json","bun.lockb","bunfig.toml","deno.json","deno.jsonc","deno.lock","global.json","NuGet.Config","Directory.Packages.props","mix.exs","mix.lock","go.mod","go.sum","stack.yaml","stack.yaml.lock","pom.xml","build.gradle","build.gradle.kts","settings.gradle","settings.gradle.kts","gradle.properties","package-lock.json","yarn.lock","pnpm-lock.yaml","npm-shrinkwrap.json","requirements.txt","Pipfile","Pipfile.lock","pyproject.toml","setup.py","setup.cfg","Gemfile","Gemfile.lock","uv.lock","CODEOWNERS","AGENTS.md","CLAUDE.md","GEMINI.md"],"protected_path_prefixes":[".github/",".agents/"]},"create_report_incomplete_issue":{},"missing_data":{},"missing_tool":{},"noop":{"max":1,"report-as-issue":"true"},"push_repo_memory":{"memories":[{"dir":"/tmp/gh-aw/repo-memory/default","id":"default","max_file_count":50,"max_file_size":51200,"max_patch_size":51200}]},"report_incomplete":{}} + GH_AW_SAFE_OUTPUTS_CONFIG_3a0487753dbffc3f_EOF - name: Write Safe Outputs Tools env: GH_AW_TOOLS_META_JSON: | { "description_suffixes": { "add_comment": " CONSTRAINTS: Maximum 1 comment(s) can be added. Supports reply_to_id for discussion threading.", - "create_pull_request": " CONSTRAINTS: Maximum 5 pull request(s) can be created. Labels [\"agentic-news\" \"translation\"] will be automatically added." + "create_pull_request": " CONSTRAINTS: Maximum 1 pull request(s) can be created. Labels [\"agentic-news\" \"translation\"] will be automatically added." 
}, "repo_params": {}, "dynamic_tools": [] @@ -758,7 +749,7 @@ jobs: mkdir -p /home/runner/.copilot GH_AW_NODE=$(which node 2>/dev/null || command -v node 2>/dev/null || echo node) - cat << GH_AW_MCP_CONFIG_3756af0a59c2eec7_EOF | "$GH_AW_NODE" "${RUNNER_TEMP}/gh-aw/actions/start_mcp_gateway.cjs" + cat << GH_AW_MCP_CONFIG_6a45268f9eb9a59c_EOF | "$GH_AW_NODE" "${RUNNER_TEMP}/gh-aw/actions/start_mcp_gateway.cjs" { "mcpServers": { "agenticworkflows": { @@ -874,7 +865,7 @@ jobs: "payloadDir": "${MCP_GATEWAY_PAYLOAD_DIR}" } } - GH_AW_MCP_CONFIG_3756af0a59c2eec7_EOF + GH_AW_MCP_CONFIG_6a45268f9eb9a59c_EOF - name: Download activation artifact uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1 with: @@ -1559,7 +1550,7 @@ jobs: GH_AW_ALLOWED_DOMAINS: "*.githubusercontent.com,api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.imf.org,api.individual.githubcopilot.com,api.npms.io,api.scb.se,api.snapcraft.io,api.worldbank.org,archive.ubuntu.com,azure.archive.ubuntu.com,bun.sh,cdn.jsdelivr.net,codeload.github.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,data.imf.org,data.riksdagen.se,deb.nodesource.com,deno.land,docs.github.com,esm.sh,get.pnpm.io,github-cloud.githubusercontent.com,github-cloud.s3.amazonaws.com,github.blog,github.com,github.githubassets.com,googleapis.deno.dev,googlechromelabs.github.io,hack23.com,hack23.github.io,host.docker.internal,json-schema.org,json.schemastore.org,jsr.io,keyserver.ubuntu.com,lfs.github.com,localhost,nodejs.org,npm.pkg.github.com,npmjs.com,npmjs.org,objects.githubusercontent.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubuserconte
nt.com,regeringen.se,registry.bower.io,registry.npmjs.com,registry.npmjs.org,registry.yarnpkg.com,repo.yarnpkg.com,riksdag-regering-ai.onrender.com,riksdagen.se,riksdagsmonitor.com,s.symcb.com,s.symcd.com,security.ubuntu.com,skimdb.npmjs.com,storage.googleapis.com,telemetry.enterprise.githubcopilot.com,telemetry.vercel.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com,www.googleapis.com,www.hack23.com,www.imf.org,www.npmjs.com,www.npmjs.org,www.regeringen.se,www.riksdagen.se,www.riksdagsmonitor.com,www.scb.se,yarnpkg.com" GITHUB_SERVER_URL: ${{ github.server_url }} GITHUB_API_URL: ${{ github.api_url }} - GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: "{\"add_comment\":{\"max\":1},\"create_pull_request\":{\"draft\":false,\"expires\":336,\"labels\":[\"agentic-news\",\"translation\"],\"max\":5,\"max_patch_size\":4096,\"protected_files\":[\"package.json\",\"bun.lockb\",\"bunfig.toml\",\"deno.json\",\"deno.jsonc\",\"deno.lock\",\"global.json\",\"NuGet.Config\",\"Directory.Packages.props\",\"mix.exs\",\"mix.lock\",\"go.mod\",\"go.sum\",\"stack.yaml\",\"stack.yaml.lock\",\"pom.xml\",\"build.gradle\",\"build.gradle.kts\",\"settings.gradle\",\"settings.gradle.kts\",\"gradle.properties\",\"package-lock.json\",\"yarn.lock\",\"pnpm-lock.yaml\",\"npm-shrinkwrap.json\",\"requirements.txt\",\"Pipfile\",\"Pipfile.lock\",\"pyproject.toml\",\"setup.py\",\"setup.cfg\",\"Gemfile\",\"Gemfile.lock\",\"uv.lock\",\"CODEOWNERS\",\"AGENTS.md\",\"CLAUDE.md\",\"GEMINI.md\"],\"protected_path_prefixes\":[\".github/\",\".agents/\"]},\"create_report_incomplete_issue\":{},\"missing_data\":{},\"missing_tool\":{},\"noop\":{\"max\":1,\"report-as-issue\":\"true\"},\"report_incomplete\":{}}" + GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: 
"{\"add_comment\":{\"max\":1},\"create_pull_request\":{\"draft\":false,\"expires\":336,\"labels\":[\"agentic-news\",\"translation\"],\"max\":1,\"max_patch_size\":4096,\"protected_files\":[\"package.json\",\"bun.lockb\",\"bunfig.toml\",\"deno.json\",\"deno.jsonc\",\"deno.lock\",\"global.json\",\"NuGet.Config\",\"Directory.Packages.props\",\"mix.exs\",\"mix.lock\",\"go.mod\",\"go.sum\",\"stack.yaml\",\"stack.yaml.lock\",\"pom.xml\",\"build.gradle\",\"build.gradle.kts\",\"settings.gradle\",\"settings.gradle.kts\",\"gradle.properties\",\"package-lock.json\",\"yarn.lock\",\"pnpm-lock.yaml\",\"npm-shrinkwrap.json\",\"requirements.txt\",\"Pipfile\",\"Pipfile.lock\",\"pyproject.toml\",\"setup.py\",\"setup.cfg\",\"Gemfile\",\"Gemfile.lock\",\"uv.lock\",\"CODEOWNERS\",\"AGENTS.md\",\"CLAUDE.md\",\"GEMINI.md\"],\"protected_path_prefixes\":[\".github/\",\".agents/\"]},\"create_report_incomplete_issue\":{},\"missing_data\":{},\"missing_tool\":{},\"noop\":{\"max\":1,\"report-as-issue\":\"true\"},\"report_incomplete\":{}}" GH_AW_CI_TRIGGER_TOKEN: ${{ secrets.GH_AW_CI_TRIGGER_TOKEN }} with: github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }} diff --git a/.github/workflows/news-translate.md b/.github/workflows/news-translate.md index 3ef749f4a..75adf97da 100644 --- a/.github/workflows/news-translate.md +++ b/.github/workflows/news-translate.md @@ -2,6 +2,11 @@ name: "News: Translate Articles" description: Dedicated translation workflow for news articles. Generates high-quality translations for all non-core languages. Dispatched by content workflows or run manually/on schedule to translate untranslated articles. 
strict: false +imports: + - ../prompts/00-base-contract.md + - ../prompts/01-bash-and-shell-safety.md + - ../prompts/02-mcp-access.md + - ../prompts/07-commit-and-pr.md on: schedule: # Run translation catch-up twice daily after main content workflows @@ -128,7 +133,7 @@ safe-outputs: labels: [agentic-news, translation] draft: false expires: 14d - max: 5 + max: 1 add-comment: {} steps: @@ -163,26 +168,6 @@ steps: if [ "$WARM" = "false" ]; then echo "⚠️ MCP server did not respond after 6 attempts — agent will retry via in-prompt health gate" fi - echo "🔄 Starting background keep-alive pinger (every 30s, max 55 min — covers full 60-min workflow through safe-output PR creation)..." - KEEP_ALIVE_START=$(date +%s) - KEEP_ALIVE_END=$((KEEP_ALIVE_START + 3300)) - export MCP_URL KEEP_ALIVE_END - nohup bash -c ' - while :; do - NOW=$(date +%s) - if [ "$NOW" -ge "$KEEP_ALIVE_END" ]; then - break - fi - curl -sf --max-time 10 -X POST \ - -H "Content-Type: application/json" \ - -d "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\",\"params\":{}}" \ - "$MCP_URL" -o /dev/null 2>/dev/null || true - sleep 30 - done - ' </dev/null >/tmp/mcp-keepalive.log 2>&1 & - KEEP_ALIVE_PID=$! - disown "$KEEP_ALIVE_PID" 2>/dev/null || true - echo "Keep-alive PID: $KEEP_ALIVE_PID (auto-exits after 55 min; log: /tmp/mcp-keepalive.log)" - name: Pre-flight external endpoint reachability check (runs before MCP Gateway) run: | @@ -310,589 +295,42 @@ engine: model: claude-opus-4.7 --- -# 🌐 News Article Translation Agent +# 🌐 News Translate -You are the **Translation Agent** for Riksdagsmonitor. Your primary job is to translate existing English news articles into target languages at high throughput. You are an AI translator — you read the source article and produce complete, faithful translations directly. You do NOT run code generation scripts to produce translations. You do NOT generate new standalone articles or new primary analysis. +Dedicated translation workflow. 
Consumes completed EN/SV articles and produces translations in the remaining 12 languages. Never generates original analysis. -## ⚠️ CRITICAL: Bash Tool Call Format +## Pipeline -> **Full reference:** See `SHARED_PROMPT_PATTERNS.md` → "Bash Tool Call Format". Key rule: every `bash` call MUST have both `command` AND `description` parameters. Example: `bash({ command: "date -u '+%Y-%m-%d'", description: "Get current UTC date" })`. Calls missing either field fail with `Multiple validation errors: - "command": Required - "description": Required`. +Translation is a pure-derivative workflow: -## 🔴 CRITICAL: Iterative Translation Quality (v5.0) +1. Scan `news/` for articles in the source language (default `en`) missing translations for target languages. +2. For each candidate, read the source HTML in full. +3. Translate into every requested target language, preserving Schema.org markup, `dok_id` references, Swedish political terminology, and RTL layout for `ar` / `he`. +4. Stage, commit, and call `safeoutputs___create_pull_request` **exactly once** covering every translation produced. -> **You are a professional political translator, NOT a machine translation wrapper.** You MUST: -> 1. **TRANSLATE** with political domain expertise — correct terminology for parties, institutions, legislative processes -> 2. **ITERATE** — after completing translations, re-read each one completely and improve accuracy, tone, and domain-specific terminology -> 3. **SPEND THE FULL TIME** — use at least 45 of the 60 allocated minutes doing real work -> 4. **NEVER complete early** — if translations are done, use remaining time to improve quality of existing translations +## Inputs -**🎯 Performance target: 8–12 translated files per run, split across 2–3 PRs.** Each run should produce multiple translations across multiple article types. Use rolling batches: 3–4 languages per PR, multiple PRs per run (see RULE 1). 
If you produce fewer than 8 files, you are underperforming — use the `create` tool to write complete files in single calls, not the `edit` tool for incremental changes. +- `article_date` (optional, default = today) +- `article_type` (optional — restrict to one type; omit to scan all) +- `languages` (default `all-extra` = all 12 non-core languages) +- `source_language` (default `en`) +- `analysis_depth` (default `standard` — mirrors source article depth for validation thoroughness) -You must also follow the shared **No Workflow Run Wasted** rule used by all agentic workflows in this repository: if translation work is blocked, exhausted, or completed early, use the remaining time to review and improve existing analysis artifacts related to the same article set. This means tightening clarity, consistency, structure, factual grounding, metadata quality, or cross-language alignment in already-existing analysis content, without inventing new coverage or changing EN/SV ownership rules. +## Rules specific to this workflow -Apply this as a **cascading fallback** — you MUST always find work to do and maximize output: -- **First priority**: find and complete ALL pending translations for today's date (unless today is deferred by pre-flight). Translate as many article types as time allows — do not stop after one type, and do not stop after one PR (you may open up to 5 PRs per run — see RULE 1). -- **Second priority**: if today is fully translated or deferred, scan the last 30 days for EN articles missing translations. Start with the most recent date and translate as many as time allows, opening a new PR for each 3–4 language batch. -- **Third priority**: if ALL articles from the last 30 days have 100% translations, improve existing translation quality — fix English leakage, improve phrasing, correct political terminology, ensure natural fluency. Open a dedicated "quality improvements" PR for these edits. -- **Do not let analysis-improvement work delay safe output creation**. 
If the run is approaching the deadline, stop additional edits and finalize a safe output immediately.
-- **NEVER produce fewer than 8 translated files** unless there are literally no articles to translate (all 30 days fully done). If you are producing fewer than 8, you are being too slow — speed up by writing complete files in single tool calls, and split work across multiple PRs instead of serialising everything into one.
+- No original analysis. Never produce files under `analysis/daily/`.
+- Validate every translation against the source with `scripts/validate-news-translations.ts` before commit.
+- Keep the PR under the safe-outputs 100-file cap. If more translations are pending than fit in one PR, translate the highest-priority batch and leave the rest for the next scheduled run.
+- Skip any language whose translation already exists and is non-empty unless `force` is explicitly requested.
 
-When performing analysis-improvement work, keep changes tightly scoped and stage conservatively so the safe-outputs payload remains manageable:
-- Prefer the smallest coherent set of files that delivers value.
-- Do not stage broad repo-wide cleanups or unrelated edits.
-- Keep the total staged file count within a safe, reviewable limit; if both translation files and analysis-artifact improvements exist, prioritize completed translations first and only include a small number of directly related analysis files that still fit comfortably within safe-outputs constraints.
-- If adding analysis-improvement edits would risk exceeding safe-output limits, exclude those extra files and emit a safe output for the translation work already completed.
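The staged-file-count guard described above can be checked mechanically before each commit. This is a minimal sketch, not part of the workflow: the 4-file figure comes from this document's own batching guidance, and the tempfile pattern follows the AWF-safe style used elsewhere in this prompt.

```shell
# AWF-safe: no $(...) command substitution. Count staged files via a tempfile.
git diff --cached --name-only 2>/dev/null | wc -l > /tmp/staged_count.txt
read -r STAGED_COUNT < /tmp/staged_count.txt
if [ "$STAGED_COUNT" -gt 4 ]; then
  echo "WARNING: $STAGED_COUNT files staged, exceeding the 4-files-per-PR guideline" >&2
fi
echo "Staged files: $STAGED_COUNT"
```

Running this just before staging the next batch catches oversized payloads early, instead of discovering them when the safe-outputs patch is rejected.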
+## Time budget (60 min) -## 🚨 RULE 1: `safeoutputs___create_pull_request` Freezes the Patch — Use Rolling Batches +| Minutes | Phase | +|---------|-------| +| 0–2 | MCP pre-warm + date resolution | +| 2–8 | Scan untranslated articles; build work list | +| 8–52 | Translate + validate in priority order (highest-value types first) | +| 52–58 | Final validation, stage, commit | +| 58–60 | **One** `safeoutputs___create_pull_request` call | -**The #1 cause of lost work was misunderstanding how `safeoutputs___create_pull_request` actually works.** The tool captures the patch from your **current commits at the moment you call it**. Any files you create or commit *after* that call on the same branch are **NOT** added to the PR — they are **silently lost** when the ephemeral agent workspace is discarded. Multiple commits to the same local branch after the first call do **NOT** update the PR. - -> 🚨 **PRODUCTION INCIDENT (2026-04-19, PR #1835)**: The agent translated 7 languages (`da`, `nb`, `de`, `fi`, `fr`, `es`, `nl`) for `2026-04-18-breaking-1705`. After committing 4 languages at minute 21, it called `safeoutputs___create_pull_request` — patch frozen. It then translated `fr`, `es`, `nl` on the same branch believing "they'll be included in the PR". **They were not.** Only 4/7 translations reached `main`; 3 complete translations were discarded. -> -> 🚨 **PRODUCTION INCIDENT (2026-04-14)**: The agent delayed `safeoutputs___create_pull_request` until minute ~50. The safeoutputs MCP session had expired ("session not found"). All 10 translations were lost. - -These two incidents bound the strategy: **call early, then batch again.** - -### The Correct Pattern: Rolling Batches — One PR Per 4-Language Batch - -The workflow is configured with **`create-pull-request.max: 5`** — you may open up to **5 pull requests per run**. Each PR must be a self-contained batch of 2–4 completed translation files. - -1. 
**Batch 1** (~minutes 4–18): Translate 3–4 languages (`da`, `nb`, `fi`, `de`) → stage + commit → `safeoutputs___create_pull_request` **by minute 22** -2. **Batch 2** (~minutes 22–35): Translate next 3–4 languages (`fr`, `es`, `nl`, `ar`) **on a new branch** → stage + commit → `safeoutputs___create_pull_request` again -3. **Batch 3** (~minutes 35–48): Translate remaining languages (`he`, `ja`, `ko`, `zh`) **on a new branch** → stage + commit → `safeoutputs___create_pull_request` a third time -4. **Batch 4+**: If time remains and more articles need translation, repeat for the next article - -> ✅ Each `safeoutputs___create_pull_request` call creates a **separate PR** on a **separate branch**. The tool handles branch creation automatically — just make sure each batch is a fresh local commit graph (delete or switch away from the previous branch before staging the next batch). - -### Timing Rules -- **First PR**: Call `safeoutputs___create_pull_request` by **minute 22** (must happen within the safeoutputs MCP session lifetime — do not wait past minute 35) -- **Subsequent PRs**: Call every 10–12 minutes after each new batch is committed -- **Hard stop at minute 55**: Stop all translation work and flush the final batch as a PR — never leave uncommitted files behind - -### Moving to the Next Batch (no lost work) - -After `safeoutputs___create_pull_request` succeeds for a batch, switch off the PR branch before writing the next batch so new files don't accidentally stack onto the frozen patch: - -```bash -# After safeoutputs___create_pull_request succeeds for batch N. -# Fail LOUDLY if we cannot return to main — staying on the PR branch would -# re-create the 'stacking onto frozen patch' failure mode this section exists to prevent. -git status --short news/ -git checkout main || { echo "ERROR: failed to switch back to main; aborting before next batch." >&2; exit 1; } -# AWF-safe: no $(...) command substitution — capture branch via per-process tempfile + read, then clean up. 
-CURRENT_BRANCH_FILE="/tmp/current-branch-$$.txt" -git branch --show-current > "$CURRENT_BRANCH_FILE" -read CURRENT_BRANCH < "$CURRENT_BRANCH_FILE" -rm -f "$CURRENT_BRANCH_FILE" -[ "$CURRENT_BRANCH" = "main" ] || { echo "ERROR: repository is not on main after checkout; aborting before next batch." >&2; exit 1; } -# Now translate the next 3–4 languages and commit on a new (unnamed) set of changes. -# The next safeoutputs___create_pull_request call will create a fresh branch automatically. -``` - -### When to noop -- **NEVER** call `safeoutputs___noop` after creating any translation files — noop means "I did nothing" and discards all your work -- **The ONLY valid noop**: Zero EN articles exist in the entire `news/` directory AND zero backlog articles need translation AND zero existing translations need quality improvement. This should be confirmed within the first 5 minutes. - -## 🚨 RULE 2: Never Modify EN/SV Files - -NEVER create, modify, or stage `-en.html` or `-sv.html` files. Those belong to content workflows. You only create files for: da, no, fi, de, fr, es, nl, ar, he, ja, ko, zh. - -Validate file ownership: -```bash -npx tsx scripts/validate-file-ownership.ts translation -``` - -## 🔧 Workflow Parameters - -- **article_date** = `${{ github.event.inputs.article_date }}` (default: today) -- **article_type** = `${{ github.event.inputs.article_type }}` (default: scan all) -- **languages** = `${{ github.event.inputs.languages }}` (default: all-extra) -- **source_language** = `${{ github.event.inputs.source_language }}` (default: en) — currently only `en` is supported as a source language; all discovery, reading, and copy steps assume EN source files -- **analysis_depth** = `${{ github.event.inputs.analysis_depth }}` (default: standard) - -## 🔒 Content-PR Dependency Check - -The pre-flight init steps check for open content PRs and source article availability. 
Instead of blocking the workflow, they set environment flags: -- `TODAY_DEFERRED=true` — today's date has open content PRs, or the `gh pr list` / `jq` commands failed; skip today but scan older dates -- `TODAY_NO_SOURCES=true` — no EN source articles exist for today; scan older dates - -**You always run.** Use these flags to decide your starting point, then cascade through the fallback strategy below. - -Documentation of the check logic for traceability: -```bash -# Pre-flight sets TODAY_DEFERRED=true or TODAY_NO_SOURCES=true in $GITHUB_ENV -# Agent ALWAYS runs and decides what to translate using cascading fallback -``` - -### Branch Naming Convention - -Translation PRs use deterministic branch names: -``` -news/translate/{YYYY-MM-DD}/{article-type} -``` -> `safeoutputs___create_pull_request` handles branch creation automatically. - -## ⏱️ Time Budget (55 minutes of work, hard stop at 55) - -The budget is now organised around **typically 2–3 rolling PR batches (up to 5 allowed by `create-pull-request.max: 5`)**, not one monolithic PR. This is the direct fix for PR #1835 where 3 completed translations (`fr`, `es`, `nl`) were lost because they were committed *after* the single `safeoutputs___create_pull_request` call had frozen the patch. If time allows and the workload spans multiple articles/types, open additional batch PRs up to the 5-per-run safe-outputs cap. - -| Phase | Minutes | Action | -|-------|---------|--------| -| Setup | 0–3 | Determine date, scan for work. If literally nothing to translate → `safeoutputs___noop` immediately | -| Batch 1 translate | 3–18 | AI translates 3–4 languages (e.g. 
`da`, `nb`, `fi`, `de`) for one article | -| Batch 1 PR | 18–22 | Stage, commit, call `safeoutputs___create_pull_request` for batch 1 (**must happen by minute 22**) | -| Batch 2 translate | 22–35 | `git checkout main`, translate next 3–4 languages (`fr`, `es`, `nl`, `ar`) | -| Batch 2 PR | 35–38 | Stage, commit, call `safeoutputs___create_pull_request` for batch 2 | -| Batch 3 translate | 38–50 | `git checkout main`, translate remaining languages (`he`, `ja`, `ko`, `zh`) | -| Batch 3 PR | 50–53 | Stage, commit, call `safeoutputs___create_pull_request` for batch 3 | -| Hard stop | 55 | 🚨 **HARD DEADLINE** — flush whatever is committed as a final PR. Never leave uncommitted translations behind. | - -> 🚨 **WHY BATCH AT MINUTE 22?** The safeoutputs MCP session has a finite lifetime (~35 min observed). Successful runs create their first PR by minute 22 so the session is still alive. The failed 2026-04-14 run tried at minute 50 (session dead, all work lost). Calling early gives maximum safety margin and leaves the full remainder of the 60-minute job for additional batches. -> -> 🚨 **WHY 3 PRS INSTEAD OF 1?** Because `safeoutputs___create_pull_request` **freezes the patch at call time** — any commits after the call are discarded. The only way to ship more than one batch is to call the tool multiple times (up to `create-pull-request.max: 5`). PR #1835 lost 3 translations by assuming same-branch commits would be picked up; they were not. - -### Batch Strategy — Rolling PRs, Maximize Translations Per Run - -**Target: 8–12 translated files per run, split across 2–3 PRs.** Each translated file = 1 article × 1 language. - -The core rule: **one PR per 3–4 language batch**, because `safeoutputs___create_pull_request` freezes the patch at call time (see RULE 1). - -Process translations in this order: -1. **Group by article type** — translate ALL languages for one article type before moving to the next -2. 
**Within a type, split languages into 3 rolling batches of 4** so each can ship as its own PR before the safeoutputs session expires: - - **Batch 1 — Fast European**: `da`, `nb`, `fi`, `de` (Nordic + German; ~4 × 4 min = ~16 min) - - **Batch 2 — Romance + RTL**: `fr`, `es`, `nl`, `ar` (adds one RTL to the batch; ~4 × 4 min = ~16 min) - - **Batch 3 — RTL + CJK**: `he`, `ja`, `ko`, `zh` (the slowest languages; ~4 × 4 min = ~16 min) -3. **Time guard per file**: If a single translation takes more than 5 minutes, something is wrong — skip to the next language - -**Do NOT limit to 1 article type per run.** Process as many types as time allows. If one article's 3 batches finish before minute 50, start the next article and repeat. - -**Do NOT put everything into one PR.** Opening a new PR for each 3–4 language batch is the only way to avoid losing work — `create-pull-request.max: 5` in the frontmatter supports this. - -**Counting rule**: Before each `safeoutputs___create_pull_request` call, count the files staged in the *current* batch. Aim for 3–4 files per PR. Never let a single PR exceed 4 translated files — this keeps each PR small enough to review, well within the `max-patch-size: 4096` (KB, i.e. 4 MB) safe-output limit (a typical translated article is ~40 KB, so 4 files ≈ 160 KB ≪ 4 MB), and leaves time to open the next PR before the safeoutputs session expires. - -## OPTIONAL MCP Health Check - -> MCP is useful for political terminology verification but is NOT required for translation. Do NOT let MCP issues block translation work. - -Quick connectivity check (spend no more than 30 seconds total): - -1. Call `get_sync_status({})` — **one attempt only** -2. If it succeeds, great — MCP is available for terminology lookups during translation -3. If it fails, **proceed with translation anyway** — you are an AI translator and can translate without MCP -4. 
**Do NOT retry, do NOT run diagnostics, do NOT noop on MCP failure** - -## 📅 Riksmöte (Parliamentary Session) Calculation - -The Swedish parliamentary session runs September–August. Calculate the current `rm` value: -- If current month ≥ September: `rm = "{currentYear}/{nextYear's last 2 digits}"` -- If current month < September: `rm = "{previousYear}/{currentYear's last 2 digits}"` -- Example: February 2026 → `rm = "2025/26"` - -## 📊 Standardised Analysis Depth Gate - -| Depth | AI iterations | SWOT stakeholders | Charts | Mindmap | Mermaid diagrams | -|-------|--------------|-------------------|--------|---------|-----------------| -| standard | 1-2 | ≥5 (of 8 groups) | ≥1 | optional | ≥1 color-coded | -| deep | 2-3 | ≥7 (of 8 groups) | ≥2 | required | ≥2 color-coded | -| comprehensive | 3+ | all 8 groups | ≥3 | required | ≥3 color-coded | - -When translating, preserve ALL analysis depth. Translate content but NEVER remove analytical components (SWOT tables, Mermaid diagrams, risk matrices, confidence labels). - -> 🔴 **NON-NEGOTIABLE: Preserve the "📊 Analysis & Sources" section** (`class="analysis-references"`) during translation. This section links to analysis files on GitHub and MUST appear in every translated article. Translate the section title and intro text to the target language, but keep all GitHub URLs unchanged. If the source article is missing the analysis-references section, add it using the template from SHARED_PROMPT_PATTERNS.md §ANALYSIS FILE GITHUB REFERENCES. - -> **🛡️ Safety net — ALWAYS run after translations**: The deterministic injector will auto-fix any translated article that lost the section or contains broken analysis-reference links. It does **not** refresh an already-valid section just because additional analysis files now exist but are not yet linked: -> -> ```bash -> # Runs in idempotent --rewrite mode: injects the section if missing and -> # replaces existing sections only when broken link targets are detected. 
-> npx tsx scripts/fix-analysis-references.ts --date "$ARTICLE_DATE" --rewrite -> ``` - -## Required Skills - -**Do NOT load skill files during translation** — they consume tokens and time. You already know how to translate. Only load a skill file if you encounter a specific unknown term: -1. **`.github/skills/swedish-political-system/SKILL.md`** — Only if you encounter an unfamiliar parliamentary term -2. **`.github/skills/language-expertise/SKILL.md`** — Only if unsure about a specific language's conventions -3. **`.github/skills/gh-aw-safe-outputs/SKILL.md`** — Only if safe output creation fails - -Also reference `scripts/prompts/v2/stakeholder-perspectives.md` for stakeholder analysis translation standards — but only if the article contains stakeholder analysis. - -## 🎯 Step-by-Step Execution - -### Step 1: Determine Date and Discover Work (Cascading Fallback) - -**🚨 CRITICAL RULE: You MUST always perform translations. Never return noop without exhausting all options.** - -The cascading fallback strategy is: -1. **Today's articles** — translate missing languages for today (unless `TODAY_DEFERRED` or `TODAY_NO_SOURCES`) -2. **Earlier articles** — scan last 30 days for EN articles missing translations -3. 
**Improve existing** — if all translations are 100% complete, improve quality of existing translations - -```bash -echo "=== Translation Scope ===" -date +%s > /tmp/start_time.txt -date -u "+%A %Y-%m-%d %H:%M:%S UTC" - -ARTICLE_DATE="${{ github.event.inputs.article_date }}" -if [ -z "$ARTICLE_DATE" ]; then - date -u +%Y-%m-%d > /tmp/article_date.txt - read -r ARTICLE_DATE < /tmp/article_date.txt -fi -echo "Article date: $ARTICLE_DATE" - -ARTICLE_TYPE="${{ github.event.inputs.article_type }}" -if [ -z "$ARTICLE_TYPE" ]; then - echo "Article type: (scan all)" -else - echo "Article type: $ARTICLE_TYPE" -fi - -LANGUAGES_INPUT="${{ github.event.inputs.languages }}" -if [ -z "$LANGUAGES_INPUT" ]; then LANGUAGES_INPUT="all-extra"; fi -case "$LANGUAGES_INPUT" in - "nordic-extra") LANGS="da no fi" ;; - "eu-extra") LANGS="de fr es nl" ;; - "cjk") LANGS="ja ko zh" ;; - "rtl") LANGS="ar he" ;; - "all-extra") LANGS="da no fi de fr es nl ar he ja ko zh" ;; - *) echo "$LANGUAGES_INPUT" | tr ',' ' ' > /tmp/langs.txt && read -r LANGS < /tmp/langs.txt ;; -esac -echo "Target languages: $LANGS" - -# Check pre-flight flags -echo "TODAY_DEFERRED=$TODAY_DEFERRED" -echo "TODAY_NO_SOURCES=$TODAY_NO_SOURCES" - -# List EN source articles for today -ls -1 news/$ARTICLE_DATE-*-en.html 2>/dev/null || echo "No EN sources found for today" -echo "=========================" -``` - -#### Phase 1: Check today's articles - -If `TODAY_DEFERRED` or `TODAY_NO_SOURCES` is set, skip directly to Phase 2. 
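The deferral gate above reduces to a single conditional. This is an illustrative sketch: `START_PHASE` is a made-up variable, while `TODAY_DEFERRED` and `TODAY_NO_SOURCES` are the pre-flight flags documented above.

```shell
# Phase 1 = today's articles; Phase 2 = 30-day backlog scan.
START_PHASE=1
if [ "${TODAY_DEFERRED:-false}" = "true" ] || [ "${TODAY_NO_SOURCES:-false}" = "true" ]; then
  START_PHASE=2
fi
echo "Starting at Phase $START_PHASE"
```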
- -Otherwise, scan today's date for untranslated articles: - -```bash -ARTICLE_DATE="${{ github.event.inputs.article_date }}" -if [ -z "$ARTICLE_DATE" ]; then - date -u +%Y-%m-%d > /tmp/article_date.txt - read -r ARTICLE_DATE < /tmp/article_date.txt -fi -ARTICLE_TYPE="${{ github.event.inputs.article_type }}" - -if [ -n "$ARTICLE_TYPE" ]; then - ARTICLE_PATTERN="$ARTICLE_DATE-$ARTICLE_TYPE-*-en.html" -else - ARTICLE_PATTERN="$ARTICLE_DATE-*-en.html" -fi - -find news -maxdepth 1 -name "$ARTICLE_PATTERN" -exec basename {} .html \; | sed "s/-en$//" | while read SLUG; do - MISSING="" - for lang in $LANGS; do - test -f "news/$SLUG-$lang.html" || MISSING="$MISSING $lang" - done - if [ -n "$MISSING" ]; then - echo "NEEDS TRANSLATION: $SLUG -> $MISSING" - else - echo "COMPLETE: $SLUG" - fi -done -``` - -If today has untranslated articles, proceed to translate them. Start with the first type alphabetically, translate all target languages for it, then move to the next type. Continue until time runs out or all types are done. 
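The type-by-type ordering amounts to a nested loop, slug outer and language inner. A sketch under assumed values: the slugs are taken from examples elsewhere in this document, and the `echo` stands in for the actual `create`-tool translation call.

```shell
# Finish every missing language for one article before starting the next.
SLUGS="2026-04-14-committee-reports 2026-04-18-breaking-1705"  # illustrative slugs
LANGS="da no fi de"                                            # illustrative batch
for SLUG in $SLUGS; do
  for lang in $LANGS; do
    if [ ! -f "news/$SLUG-$lang.html" ]; then
      echo "TRANSLATE: $SLUG -> $lang"
    fi
  done
done
```

This ordering keeps each article's language set complete before any PR is cut, so a partially translated article never ships alone.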
- -#### Phase 2: Scan earlier dates for missing translations - -If today is deferred, has no sources, or all today's articles are fully translated, scan the last 30 days: - -```bash -echo "=== Scanning earlier dates for missing translations ===" -i=1 -while [ "$i" -le 30 ]; do - date -u -d "$i days ago" +%Y-%m-%d 2>/dev/null > /tmp/scan_date.txt || echo "" > /tmp/scan_date.txt - read SCAN_DATE < /tmp/scan_date.txt - i=$((i+1)) - if [ -z "$SCAN_DATE" ]; then continue; fi - if [ -n "$ARTICLE_TYPE" ]; then - EN_GLOB="news/$SCAN_DATE-$ARTICLE_TYPE-*-en.html" - else - EN_GLOB="news/$SCAN_DATE-*-en.html" - fi - ls $EN_GLOB 2>/dev/null > /tmp/en_files.txt || true - EN_FILES="" - if [ -s /tmp/en_files.txt ]; then - while IFS= read -r _efline; do - EN_FILES="$EN_FILES $_efline" - done < /tmp/en_files.txt - fi - if [ -z "$EN_FILES" ]; then continue; fi - for EN_FILE in $EN_FILES; do - basename "$EN_FILE" .html | sed "s/-en$//" > /tmp/slug.txt - read SLUG < /tmp/slug.txt - MISSING="" - for lang in $LANGS; do - test -f "news/$SLUG-$lang.html" || MISSING="$MISSING $lang" - done - if [ -n "$MISSING" ]; then - echo "EARLIER NEEDS TRANSLATION: $SLUG -> $MISSING" - fi - done -done -echo "=== End scan ===" -``` - -If earlier articles need translation, pick the most recent one and translate all its missing languages. If time remains, move to the next article. - -#### Phase 3: Improve existing translations - -If ALL articles from all dates are 100% translated for the current run (every EN source has all requested target languages in `$LANGS`), then improve existing translation quality: - -1. Pick the most recent article that has all requested translations for the current run -2. Read the EN source and one of the existing translations (e.g., `da`) -3. Compare quality — check for: untranslated English phrases leaking through, awkward phrasing, missing political terminology, incomplete section translations -4. Use the `edit` tool to improve the translations in-place -5. 
Create a PR with the improvements - -**Never call `safeoutputs___noop` without first completing Phase 1, Phase 2, AND Phase 3.** Only noop if there are literally zero EN articles in the entire `news/` directory. - -### Step 2: Read the EN Source Article - -Use the `view` tool (NOT bash cat) to read the full EN source article. Understand the structure, headings, analytical content, and special elements (SWOT tables, Mermaid diagrams, charts). This is what you will translate. - -**Speed tip**: Read each article ONCE, then translate it to all required languages before moving on. Do NOT re-read the source for each language. - -### Step 3: Translate — High-Throughput AI Translation - -**This is the core step.** For each target language, produce a complete translated HTML file using the **`create` tool** (one tool call per file). - -**🚀 CRITICAL PERFORMANCE RULE: ONE tool call per translated file.** - -Do NOT use the `cp` + `edit` approach (copying EN file then making dozens of edit calls). Instead: - -1. **Read the EN source** with `view` tool (already done in Step 2) -2. **For each target language**, produce the COMPLETE translated HTML in a single `create` tool call: - - Mentally translate ALL content: title, meta tags, headings, paragraphs, tables, footer - - Write the entire translated HTML file at once using the `create` tool - - Target path: `news/YYYY-MM-DD-TYPE-LANG.html` - -**Example** (conceptual — adapt to actual content): -``` -create({ - path: "news/2026-04-14-committee-reports-da.html", - content: "<!DOCTYPE html>\n<html lang=\"da\">\n<head>..." // FULL translated HTML -}) -``` - -3. **Language codes in HTML** (BCP-47): - - da → `lang="da"`, no → `lang="nb"` (Norwegian Bokmål), fi → `lang="fi"` - - de → `lang="de"`, fr → `lang="fr"`, es → `lang="es"`, nl → `lang="nl"` - - ar → `lang="ar" dir="rtl"`, he → `lang="he" dir="rtl"`, ja → `lang="ja"`, ko → `lang="ko"`, zh → `lang="zh"` - -4. 
**What to translate** (everything user-visible): - - `<html lang>` attribute → target language BCP-47 code - - `<title>` and `<meta>` tags: translate title, description, keywords - - ALL headings (h1, h2, h3): translate to target language - - ALL body paragraphs: translate faithfully - - SWOT table cells: translate content, keep structure - - Mermaid diagram labels: translate text, keep syntax - - Chart.js labels in `data-chart-config` JSON: translate strings, keep numbers - - Footer text, reading time label, navigation - - Language switcher: update active language link - - `hreflang` links: keep all 14, update `rel="alternate"` for self - - Open Graph / Twitter meta: translate og:title, og:description - - JSON-LD structured data: translate name, headline, description - -5. **Preserve untranslated** (NEVER translate): - - Party abbreviations: S, M, SD, V, MP, C, L, KD - - Document IDs: Prop., Bet., Mot., frs - - Numbers, dates, URLs, email addresses, CSS classes - - Mermaid syntax (arrows, colors, brackets) and Chart.js numeric data - - HTML structure and CSS class names - -6. **Use CONTENT_LABELS** from `scripts/data-transformers/constants/content-labels-part1.ts` and `content-labels-part2.ts` for standard section headings. - -**Speed targets (realistic for Claude Opus 4.7 on 400–500 line HTML):** -- European languages (da, nb, fi, de, fr, es, nl): ~3–4 minutes each -- RTL languages (ar, he): ~4 minutes each -- CJK languages (ja, ko, zh): ~4–5 minutes each -- **Total for 4 languages in one batch: ~15–18 minutes** (matches the rolling-batch time budget) - -**Time guard**: Check elapsed time after each language. If the current batch has already taken >18 minutes, stop adding languages, commit what you have, and create the PR immediately. A 2-language batch that ships is worth more than a 4-language batch that times out. - -**After creating a PR for a batch**: `git checkout main` to detach from the PR branch, then start the next batch. 
`safeoutputs___create_pull_request` will auto-create a fresh branch for the new batch. - -### Step 4: Validate - -Run validation scripts on newly created translation files only: -```bash -npx tsx scripts/validate-file-ownership.ts translation -npx tsx scripts/validate-news-translations.ts - -# HTMLHint validation — scope to ONLY new translated files (not all news/ files) -# Build a space-separated list of newly created translation files (AWF-safe: no command substitution) -git status --short news/ | grep "^??" | grep -v "\-en\.html" | grep -v "\-sv\.html" | awk '{print $2}' > /tmp/new_trans_validate.txt -if [ -s /tmp/new_trans_validate.txt ]; then - tr '\n' ' ' < /tmp/new_trans_validate.txt > /tmp/trans_files_line.txt - read -r TRANS_FILES < /tmp/trans_files_line.txt - if ! npx htmlhint $TRANS_FILES 2>/dev/null; then - echo "⚠️ HTML validation errors found, attempting auto-fix..." - npx tsx scripts/article-quality-enhancer.ts --fix - if ! npx htmlhint $TRANS_FILES; then - echo "⚠️ HTML validation errors remain — proceeding with PR (labeled 'needs-review')" - echo "HTMLHINT_FAILED=true" >> /tmp/validation_flags.txt - fi - fi -else - echo "No new translation files to validate" -fi -``` - -If validation reports issues, fix them with the `edit` tool before proceeding. - -### Step 5: Commit & Create PR (Once Per Batch) - -> **🚀 HOW SAFE PR CREATION WORKS — READ THIS FIRST** -> -> The `safeoutputs___create_pull_request` tool records your intent and **captures the current git patch as a snapshot**. A separate `safe_outputs` job (after the agent job ends) creates the branch, pushes the snapshot, and opens the PR. The snapshot is frozen at call time — **commits made afterwards on the same local branch are NOT added to the PR** (this is what caused PR #1835 to lose 3 translations). If the safeoutputs MCP session expires before any call, the intent is never recorded and the `safe_outputs` job is **SKIPPED** — all work is lost. 
-> -> **Exact steps — repeat for each batch:** -> 1. Write 3–4 translated HTML files for the current batch to `news/` using `create` tool (one complete file per call) -> 2. Stage and commit locally — stage only the NEW translation files from the current batch -> 3. Call `safeoutputs___create_pull_request` with `title`, `body`, and `labels` **IMMEDIATELY — do not delay** -> 4. **Only after step 3 succeeds**: `git checkout main` and start the next batch (`create-pull-request.max: 5` allows up to 5 PRs per run) -> -> **❌ DO NOT** run `git push`, `git checkout -b`, or use GitHub API to create PRs. -> **❌ DO NOT** call `safeoutputs___noop` if ANY translation files were created — this discards all work. -> **❌ DO NOT** delay the `safeoutputs___create_pull_request` call — call it the moment your commit is ready. -> **❌ DO NOT** continue adding translation files to the same branch after calling `safeoutputs___create_pull_request` — the snapshot is frozen and those files will be lost. Check out `main` first, then start a fresh batch. -> **✅ DO** call `safeoutputs___create_pull_request` multiple times per run (up to 5), once per completed batch — this is the **expected** pattern for rolling batches. - -**Safety check** — remove any accidentally created EN/SV files before committing: -```bash -git checkout -- news/*-en.html news/*-sv.html 2>/dev/null || true -rm -f news/*-en.html.bak news/*-sv.html.bak 2>/dev/null || true -``` - -Stage ONLY the translation files you just created (new untracked files) — never EN/SV: -```bash -git status --short news/ | grep "^??" 
| grep -v "\-en\.html" | grep -v "\-sv\.html" | awk '{print $2}' > /tmp/new_trans.txt -cat /tmp/new_trans.txt -xargs -a /tmp/new_trans.txt git add 2>/dev/null || true -git diff --cached --name-only | wc -l > /tmp/staged_count.txt -read -r STAGED < /tmp/staged_count.txt -echo "Staged new translation files: $STAGED" -date -u +%Y-%m-%d > /tmp/commit_date.txt -read -r COMMIT_DATE < /tmp/commit_date.txt -git commit -m "chore: translate articles $COMMIT_DATE" -``` - -Then **immediately** call `safeoutputs___create_pull_request` as a direct tool call. Substitute every `{placeholder}` with a real value before sending — the tool call is parsed as strict JSON, so comments, trailing commas, and arithmetic expressions are **not** allowed. If `/tmp/validation_flags.txt` contains `HTMLHINT_FAILED=true`, append `"needs-review"` to the `labels` array (do **not** leave a comment in the JSON): -``` -safeoutputs___create_pull_request({ - "title": "🌐 Article Translations - {date} batch {n} ({count} files)", - "body": "## Summary\n\nTranslated {article_type} articles into {count} languages (batch {n} of up to 5).\n\n### Translations\n- Source: EN\n- Languages (this batch): {lang_list}\n- Files: {count}\n- Method: AI translation (create tool)\n\n### Quality\n- Section headings: ✅ Translated\n- Body paragraphs: ✅ Translated\n- English leakage: ✅ None\n- HTMLHint: {htmlhint_status}\n\n### Source\n- Workflow: `news-translate`\n- Follow-up: subsequent batches will ship as separate PRs", - "labels": ["agentic-news", "translation"] -}) -``` - -**After the PR call returns `success`, switch away from the PR branch before writing the next batch** — otherwise the next commits will stack onto the already-frozen patch and be lost on branch cleanup: - -```bash -# Return to main so batch N+1 starts from a clean base. -# Fail LOUDLY if we can't — silently ignoring this would let the next commits stack -# onto the already-frozen PR branch and be lost (the exact bug that caused PR #1835).
-# safeoutputs___create_pull_request will create a fresh branch for the next batch automatically. -git checkout main || { echo "ERROR: failed to switch back to main; aborting before next batch." >&2; exit 1; } -# AWF-safe: no $(...) command substitution — capture branch via per-process tempfile + read, then clean up. -CURRENT_BRANCH_FILE="/tmp/current-branch-$$.txt" -git branch --show-current > "$CURRENT_BRANCH_FILE" -read -r CURRENT_BRANCH < "$CURRENT_BRANCH_FILE" -rm -f "$CURRENT_BRANCH_FILE" -[ "$CURRENT_BRANCH" = "main" ] || { echo "ERROR: repository is not on main after checkout; aborting before next batch." >&2; exit 1; } -git status --short -# Now repeat Step 3 (translate) + Step 4 (validate) + Step 5 (commit + safeoutputs) for the next 3–4 languages. -``` - -## 🌐 MANDATORY Translation Quality Rules (Single Source of Truth) - -This section is the **canonical reference** for all translation quality standards. Content workflows reference this workflow for translation rules. - -### Non-Negotiable Requirements for Non-EN/SV Articles: -1. **ALL section headings** (h1, h2, h3) MUST be in the target language -2. **ALL body paragraphs** MUST be written in the target language -3. **Meta keywords** MUST be translated to the target language -4. **No English fallback**: If you cannot translate a phrase, use the target language equivalent or omit it -5. **data-translate markers**: ZERO `data-translate="true"` spans allowed in final output - -### Per-Language Requirements: -- **RTL languages (ar, he)**: Ensure `dir="rtl"` on `<html>` and proper text direction. Numerals stay LTR within RTL text. -- **CJK languages (ja, ko, zh)**: Use native script only, no romanization in body text. Honorifics follow target-language conventions. Use CJK quotation marks. -- **Nordic languages (da, no, fi)**: Use language-specific parliamentary terms, not Swedish. Norwegian Bokmål: file suffix `no`, but `lang="nb"` in HTML (BCP-47). Danish: "Riksdagen" not "Riksdag".
-- **European languages (de, fr, es, nl)**: Formal political journalism register. German: compound nouns. French: accent-correct. Spanish: formal "usted". - -### Political Intelligence Translation Standards: -- **SWOT tables**: Translate cell content, keep table structure intact -- **Risk matrices**: Preserve L×I numeric scores, translate descriptions -- **Confidence labels**: Translate consistently within each article -- **dok_id references**: NEVER translate (Prop., Bet., Mot., frs) -- **Mermaid diagrams**: Translate node labels, keep syntax/colors -- **Chart.js data**: Translate label strings, keep numeric values -- **Forward indicators**: Translate text, preserve dates and committee names - -### Localized Section Headings (use CONTENT_LABELS): -Use equivalents from `scripts/data-transformers/constants/content-labels-part1.ts` and `content-labels-part2.ts`: -- "Why This Week Matters" → `CONTENT_LABELS[lang].whyMatters` -- "Key Events This Week" → `CONTENT_LABELS[lang].keyEvents` -- "What to Watch" → `CONTENT_LABELS[lang].whatToWatch` -- "Key Takeaways" → `CONTENT_LABELS[lang].keyTakeaways` -- "Latest Committee Reports" → `CONTENT_LABELS[lang].latestReports` -- "Thematic Analysis" → `CONTENT_LABELS[lang].thematicAnalysis` -- "Opposition Strategy" → `CONTENT_LABELS[lang].oppositionStrategy` - -### Translation Fidelity: -- Same analytical depth as EN source — never simplify or omit sections -- All SWOT stakeholder perspectives preserved (8 groups) -- Risk matrix scores numerically identical across languages -- Forward indicators preserve exact dates and trigger events -- Confidence labels on every analytical claim (matching EN source) -- Inter-article links use correct language-specific URL paths -- Correct `hreflang` links to all other language versions - -### Post-Translation Validation: -```bash -npx tsx scripts/validate-news-translations.ts -``` - -## Error Handling - -| Scenario | Fix | -|----------|-----| -| No EN source articles for today | Scan last 30 days 
for earlier articles missing translations | -| Today deferred (open content PRs) | Scan last 30 days for earlier articles missing translations | -| All articles fully translated | Improve quality of existing translations (fix English leakage, improve phrasing) | -| No EN articles in entire news/ dir | Call `safeoutputs___noop` — the ONLY valid reason to noop | -| EN/SV files staged | `git checkout -- news/*-en.html news/*-sv.html` before commit | -| Time running out (current batch ≥18 min) | Stop adding languages → validate → commit → `safeoutputs___create_pull_request` with what you have, then start the next batch on `main` | -| HTMLHint errors | Fix with `edit` tool or run `npx tsx scripts/article-quality-enhancer.ts --fix` | -| safeoutputs "session not found" | Session expired — all uncreated PR intents are LOST. **Prevention: call `safeoutputs___create_pull_request` for the first batch by minute 22.** Later batches must also be called promptly (never more than 15 minutes between successive calls). | -| Committed files on a branch that already has a PR | Not included in the existing PR — the patch was frozen at the first `safeoutputs___create_pull_request` call. Switch to `main` and create a new batch PR containing only the new translations; if the files still exist in the workspace (or on the previous branch), re-apply/cherry-pick or recommit them onto `main` so you don't need to re-translate from scratch. Then call `safeoutputs___create_pull_request` again for the new batch. | - -## 🎯 Execution Summary - -1. **Discover** — determine date, scan for work using cascading fallback (today → older dates → improve existing). If nothing to translate → `safeoutputs___noop` within first 5 minutes -2. **Read** — read each EN source article with `view` tool (once per article, translate to all languages before moving on) -3. 
**Translate (batch 1)** — for each of the first 3–4 target languages: write the complete translated HTML with the `create` tool in a single call (NEVER use `cp`+`edit` or scripts) -4. **Validate** — run `validate-file-ownership.ts translation` + `validate-news-translations.ts` + HTMLHint on new files only -5. **PR 1** — stage new files with `git status --short`, commit, `safeoutputs___create_pull_request` **by minute 22** -6. **Checkout main & repeat** — `git checkout main`, translate the next 3–4 languages (batch 2), validate, stage, commit, `safeoutputs___create_pull_request` (PR 2) -7. **Later batches** — repeat for remaining languages or the next article until minute 53 -8. **Hard stop** — at minute 55, stop everything and make sure the last batch is committed and has a `safeoutputs___create_pull_request` call - -**NEVER call safeoutputs___noop after creating any translation files.** - -**Never exceed 22 minutes without calling the first `safeoutputs___create_pull_request`. Absolute maximum: minute 35 for the first call. Use up to `create-pull-request.max: 5` PRs per run.** - -**Time management**: If a batch is taking >18 minutes, stop adding languages, commit what you have, and ship it as a PR. A 2-language PR is worth far more than a 4-language batch that never shipped. \ No newline at end of file +All non-workflow-specific rules are in the imported modules — do not restate them here. 
diff --git a/.github/workflows/news-week-ahead.lock.yml b/.github/workflows/news-week-ahead.lock.yml index ee8bdde6e..694862223 100644 --- a/.github/workflows/news-week-ahead.lock.yml +++ b/.github/workflows/news-week-ahead.lock.yml @@ -1,4 +1,4 @@ -# gh-aw-metadata: {"schema_version":"v3","frontmatter_hash":"13d04c55d67ce08f4b4adf6d292a3cd71c90aa68f006d063b559901d44db6212","compiler_version":"v0.69.2","agent_id":"copilot","agent_model":"claude-opus-4.7"} +# gh-aw-metadata: {"schema_version":"v3","frontmatter_hash":"537e56999f77854e9b504eeaa1a95b7b5ed7b46fcceeb5925db0009eb9dbb689","compiler_version":"v0.69.2","agent_id":"copilot","agent_model":"claude-opus-4.7"} # gh-aw-manifest: {"version":1,"secrets":["COPILOT_GITHUB_TOKEN","GH_AW_CI_TRIGGER_TOKEN","GH_AW_GITHUB_MCP_SERVER_TOKEN","GH_AW_GITHUB_TOKEN","GITHUB_TOKEN"],"actions":[{"repo":"actions/checkout","sha":"de0fac2e4500dabe0009e67214ff5f5447ce83dd","version":"v6.0.2"},{"repo":"actions/download-artifact","sha":"3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c","version":"v8.0.1"},{"repo":"actions/github-script","sha":"373c709c69115d41ff229c7e5df9f8788daa9553","version":"v9"},{"repo":"actions/setup-node","sha":"6044e13b5dc448c55e2357c09f80417699197238","version":"6044e13b5dc448c55e2357c09f80417699197238"},{"repo":"actions/upload-artifact","sha":"043fb46d1a93c77aae656e7c1c64a875d1fc6a0a","version":"v7.0.1"},{"repo":"github/gh-aw-actions/setup","sha":"dca90cae5e2ec0ef2275f97efcb832793c86e082","version":"v0.69.2"}],"containers":[{"image":"alpine:latest"},{"image":"ghcr.io/github/gh-aw-firewall/agent:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/api-proxy:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/squid:0.25.26"},{"image":"ghcr.io/github/gh-aw-mcpg:v0.2.26"},{"image":"ghcr.io/github/github-mcp-server:v1.0.0"},{"image":"node:25-alpine"},{"image":"node:lts-alpine"}]} # ___ _ _ # / _ \ | | (_) @@ -24,6 +24,18 @@ # # Generates week-ahead prospective articles in core languages (EN, SV). 
Translations handled by news-translate workflow. Runs Fridays to preview the upcoming parliamentary week. # +# Resolved workflow manifest: +# Imports: +# - ../prompts/00-base-contract.md +# - ../prompts/01-bash-and-shell-safety.md +# - ../prompts/02-mcp-access.md +# - ../prompts/03-data-download.md +# - ../prompts/04-analysis-pipeline.md +# - ../prompts/05-analysis-gate.md +# - ../prompts/06-article-generation.md +# - ../prompts/07-commit-and-pr.md +# - ../prompts/ext/tier-c-aggregation.md +# # Secrets used: # - COPILOT_GITHUB_TOKEN # - GH_AW_CI_TRIGGER_TOKEN @@ -184,14 +196,9 @@ jobs: env: GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt GH_AW_SAFE_OUTPUTS: ${{ runner.temp }}/gh-aw/safeoutputs/outputs.jsonl - GH_AW_EXPR_731DE217: ${{ github.event.inputs.force_generation || 'false' }} GH_AW_GITHUB_ACTOR: ${{ github.actor }} GH_AW_GITHUB_EVENT_COMMENT_ID: ${{ github.event.comment.id }} GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER: ${{ github.event.discussion.number }} - GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH: ${{ github.event.inputs.analysis_depth }} - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE: ${{ github.event.inputs.article_date }} - GH_AW_GITHUB_EVENT_INPUTS_FORCE_GENERATION: ${{ github.event.inputs.force_generation }} - GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES: ${{ github.event.inputs.languages }} GH_AW_GITHUB_EVENT_ISSUE_NUMBER: ${{ github.event.issue.number }} GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: ${{ github.event.pull_request.number }} GH_AW_GITHUB_REPOSITORY: ${{ github.repository }} @@ -202,21 +209,21 @@ jobs: run: | bash "${RUNNER_TEMP}/gh-aw/actions/create_prompt_first.sh" { - cat << 'GH_AW_PROMPT_0599626d7aa7be7f_EOF' + cat << 'GH_AW_PROMPT_e024284526b1041e_EOF' <system> - GH_AW_PROMPT_0599626d7aa7be7f_EOF + GH_AW_PROMPT_e024284526b1041e_EOF cat "${RUNNER_TEMP}/gh-aw/prompts/xpia.md" cat "${RUNNER_TEMP}/gh-aw/prompts/temp_folder_prompt.md" cat "${RUNNER_TEMP}/gh-aw/prompts/markdown.md" cat "${RUNNER_TEMP}/gh-aw/prompts/agentic_workflows_guide.md" cat 
"${RUNNER_TEMP}/gh-aw/prompts/repo_memory_prompt.md" cat "${RUNNER_TEMP}/gh-aw/prompts/safe_outputs_prompt.md" - cat << 'GH_AW_PROMPT_0599626d7aa7be7f_EOF' + cat << 'GH_AW_PROMPT_e024284526b1041e_EOF' <safe-output-tools> - Tools: add_comment, create_pull_request(max:2), dispatch_workflow, missing_tool, missing_data, noop - GH_AW_PROMPT_0599626d7aa7be7f_EOF + Tools: add_comment, create_pull_request, dispatch_workflow, missing_tool, missing_data, noop + GH_AW_PROMPT_e024284526b1041e_EOF cat "${RUNNER_TEMP}/gh-aw/prompts/safe_outputs_create_pull_request.md" - cat << 'GH_AW_PROMPT_0599626d7aa7be7f_EOF' + cat << 'GH_AW_PROMPT_e024284526b1041e_EOF' </safe-output-tools> <github-context> The following GitHub context information is available for this workflow: @@ -246,22 +253,26 @@ jobs: {{/if}} </github-context> - GH_AW_PROMPT_0599626d7aa7be7f_EOF + GH_AW_PROMPT_e024284526b1041e_EOF cat "${RUNNER_TEMP}/gh-aw/prompts/github_mcp_tools_with_safeoutputs_prompt.md" - cat << 'GH_AW_PROMPT_0599626d7aa7be7f_EOF' + cat << 'GH_AW_PROMPT_e024284526b1041e_EOF' </system> + {{#runtime-import .github/prompts/00-base-contract.md}} + {{#runtime-import .github/prompts/01-bash-and-shell-safety.md}} + {{#runtime-import .github/prompts/02-mcp-access.md}} + {{#runtime-import .github/prompts/03-data-download.md}} + {{#runtime-import .github/prompts/04-analysis-pipeline.md}} + {{#runtime-import .github/prompts/05-analysis-gate.md}} + {{#runtime-import .github/prompts/06-article-generation.md}} + {{#runtime-import .github/prompts/07-commit-and-pr.md}} + {{#runtime-import .github/prompts/ext/tier-c-aggregation.md}} {{#runtime-import .github/workflows/news-week-ahead.md}} - GH_AW_PROMPT_0599626d7aa7be7f_EOF + GH_AW_PROMPT_e024284526b1041e_EOF } > "$GH_AW_PROMPT" - name: Interpolate variables and render templates uses: actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9 env: GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt - GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH: ${{ 
github.event.inputs.analysis_depth }} - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE: ${{ github.event.inputs.article_date }} - GH_AW_EXPR_731DE217: ${{ github.event.inputs.force_generation || 'false' }} - GH_AW_GITHUB_EVENT_INPUTS_FORCE_GENERATION: ${{ github.event.inputs.force_generation }} - GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES: ${{ github.event.inputs.languages }} with: script: | const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); @@ -272,14 +283,9 @@ jobs: uses: actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9 env: GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt - GH_AW_EXPR_731DE217: ${{ github.event.inputs.force_generation || 'false' }} GH_AW_GITHUB_ACTOR: ${{ github.actor }} GH_AW_GITHUB_EVENT_COMMENT_ID: ${{ github.event.comment.id }} GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER: ${{ github.event.discussion.number }} - GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH: ${{ github.event.inputs.analysis_depth }} - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE: ${{ github.event.inputs.article_date }} - GH_AW_GITHUB_EVENT_INPUTS_FORCE_GENERATION: ${{ github.event.inputs.force_generation }} - GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES: ${{ github.event.inputs.languages }} GH_AW_GITHUB_EVENT_ISSUE_NUMBER: ${{ github.event.issue.number }} GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: ${{ github.event.pull_request.number }} GH_AW_GITHUB_REPOSITORY: ${{ github.repository }} @@ -302,14 +308,9 @@ jobs: return await substitutePlaceholders({ file: process.env.GH_AW_PROMPT, substitutions: { - GH_AW_EXPR_731DE217: process.env.GH_AW_EXPR_731DE217, GH_AW_GITHUB_ACTOR: process.env.GH_AW_GITHUB_ACTOR, GH_AW_GITHUB_EVENT_COMMENT_ID: process.env.GH_AW_GITHUB_EVENT_COMMENT_ID, GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER: process.env.GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER, - GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH: process.env.GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH, - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE: process.env.GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE, - 
GH_AW_GITHUB_EVENT_INPUTS_FORCE_GENERATION: process.env.GH_AW_GITHUB_EVENT_INPUTS_FORCE_GENERATION, - GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES: process.env.GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES, GH_AW_GITHUB_EVENT_ISSUE_NUMBER: process.env.GH_AW_GITHUB_EVENT_ISSUE_NUMBER, GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: process.env.GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER, GH_AW_GITHUB_REPOSITORY: process.env.GH_AW_GITHUB_REPOSITORY, @@ -411,7 +412,7 @@ jobs: run: | npm ci --prefer-offline --no-audit - name: Pre-warm MCP server (Render.com cold start mitigation) - run: "echo \"🔥 Pre-warming riksdag-regering MCP server via MCP protocol...\"\nMCP_URL=\"https://riksdag-regering-ai.onrender.com/mcp\"\nWARM=false\nfor i in 1 2 3 4 5 6; do\n RESP=$(curl -sf --max-time 30 -X POST \\\n -H \"Content-Type: application/json\" \\\n -d '{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\",\"params\":{}}' \\\n \"$MCP_URL\" 2>/dev/null) || true\n if echo \"$RESP\" | grep -q '\"tools\"'; then\n TOOL_COUNT=$(echo \"$RESP\" | grep -o '\"name\"' | wc -l)\n echo \"✅ MCP server responded on attempt $i with $TOOL_COUNT tools registered\"\n WARM=true\n break\n fi\n echo \"⏳ Attempt $i/6 — server may be cold-starting, waiting 20s...\"\n sleep 20\ndone\nif [ \"$WARM\" = \"false\" ]; then\n echo \"⚠️ MCP server did not respond after 6 attempts — agent will retry via in-prompt health gate\"\nfi\necho \"🔄 Starting background keep-alive pinger (every 30s, max 55 min — covers full 60-min workflow through safe-output PR creation)...\"\nKEEP_ALIVE_START=$(date +%s)\nKEEP_ALIVE_END=$((KEEP_ALIVE_START + 3300))\nexport MCP_URL KEEP_ALIVE_END\nnohup bash -c '\n while :; do\n NOW=$(date +%s)\n if [ \"$NOW\" -ge \"$KEEP_ALIVE_END\" ]; then\n break\n fi\n curl -sf --max-time 10 -X POST \\\n -H \"Content-Type: application/json\" \\\n -d \"{\\\"jsonrpc\\\":\\\"2.0\\\",\\\"id\\\":1,\\\"method\\\":\\\"tools/list\\\",\\\"params\\\":{}}\" \\\n \"$MCP_URL\" -o /dev/null 2>/dev/null || true\n sleep 30\n done\n' </dev/null 
>/tmp/mcp-keepalive.log 2>&1 &\nKEEP_ALIVE_PID=$!\ndisown \"$KEEP_ALIVE_PID\" 2>/dev/null || true\necho \"Keep-alive PID: $KEEP_ALIVE_PID (auto-exits after 55 min; log: /tmp/mcp-keepalive.log)\"\n" + run: "echo \"🔥 Pre-warming riksdag-regering MCP server via MCP protocol...\"\nMCP_URL=\"https://riksdag-regering-ai.onrender.com/mcp\"\nWARM=false\nfor i in 1 2 3 4 5 6; do\n RESP=$(curl -sf --max-time 30 -X POST \\\n -H \"Content-Type: application/json\" \\\n -d '{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\",\"params\":{}}' \\\n \"$MCP_URL\" 2>/dev/null) || true\n if echo \"$RESP\" | grep -q '\"tools\"'; then\n TOOL_COUNT=$(echo \"$RESP\" | grep -o '\"name\"' | wc -l)\n echo \"✅ MCP server responded on attempt $i with $TOOL_COUNT tools registered\"\n WARM=true\n break\n fi\n echo \"⏳ Attempt $i/6 — server may be cold-starting, waiting 20s...\"\n sleep 20\ndone\nif [ \"$WARM\" = \"false\" ]; then\n echo \"⚠️ MCP server did not respond after 6 attempts — agent will retry via in-prompt health gate\"\nfi\n" - name: Pre-flight external endpoint reachability check (runs before MCP Gateway) run: "echo \"🔍 Network Diagnostics — $(date -u '+%Y-%m-%dT%H:%M:%SZ')\"\necho \"═══════════════════════════════════════════\"\necho \"\"\necho \"📡 DNS Resolution Tests:\"\nfor domain in riksdag-regering-ai.onrender.com api.scb.se api.worldbank.org data.riksdagen.se www.riksdagen.se www.regeringen.se; do\n if nslookup \"$domain\" >/dev/null 2>&1; then\n IP=$(nslookup \"$domain\" 2>/dev/null | grep -A1 \"Name:\" | grep \"Address:\" | head -1 | awk '{print $2}')\n echo \" ✅ $domain → $IP\"\n else\n echo \" ❌ $domain — DNS FAILED\"\n fi\ndone\necho \"\"\necho \"🌐 HTTPS Connectivity Tests:\"\nfor url in \\\n \"https://riksdag-regering-ai.onrender.com/mcp\" \\\n \"https://api.scb.se/OV0104/v2beta\" \\\n \"https://api.worldbank.org/v2/country/SE?format=json\" \\\n \"https://data.riksdagen.se/dokumentlista/?sok=test&doktyp=bet&utformat=json&a=1\" \\\n; do\n HTTP_CODE=$(curl -s -o 
/dev/null -w \"%{http_code}\" --max-time 10 \"$url\" 2>/dev/null || echo \"000\")\n DOMAIN=$(echo \"$url\" | sed 's|https://||' | cut -d/ -f1)\n if [ \"$HTTP_CODE\" -ge 200 ] && [ \"$HTTP_CODE\" -lt 400 ]; then\n echo \" ✅ $DOMAIN → HTTP $HTTP_CODE\"\n elif [ \"$HTTP_CODE\" = \"000\" ]; then\n echo \" ❌ $DOMAIN → TIMEOUT/UNREACHABLE\"\n else\n echo \" ⚠️ $DOMAIN → HTTP $HTTP_CODE\"\n fi\ndone\necho \"\"\necho \"🔌 MCP Server Tool Count:\"\nTOOL_RESP=$(curl -sf --max-time 15 -X POST \\\n -H \"Content-Type: application/json\" \\\n -d '{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\",\"params\":{}}' \\\n \"https://riksdag-regering-ai.onrender.com/mcp\" 2>/dev/null) || TOOL_RESP=\"\"\nif echo \"$TOOL_RESP\" | grep -q '\"tools\"'; then\n TOOL_COUNT=$(echo \"$TOOL_RESP\" | grep -o '\"name\"' | wc -l)\n echo \" ✅ riksdag-regering MCP: $TOOL_COUNT tools registered\"\nelse\n echo \" ❌ riksdag-regering MCP: No tools response (server may still be starting)\"\nfi\necho \"\"\necho \"═══════════════════════════════════════════\"\n" @@ -499,16 +500,16 @@ jobs: mkdir -p "${RUNNER_TEMP}/gh-aw/safeoutputs" mkdir -p /tmp/gh-aw/safeoutputs mkdir -p /tmp/gh-aw/mcp-logs/safeoutputs - cat > "${RUNNER_TEMP}/gh-aw/safeoutputs/config.json" << 'GH_AW_SAFE_OUTPUTS_CONFIG_dfdd08106ebd9828_EOF' - 
{"add_comment":{"max":1},"create_pull_request":{"draft":false,"expires":336,"labels":["agentic-news","analysis-data"],"max":2,"max_patch_size":4096,"protected_files":["package.json","bun.lockb","bunfig.toml","deno.json","deno.jsonc","deno.lock","global.json","NuGet.Config","Directory.Packages.props","mix.exs","mix.lock","go.mod","go.sum","stack.yaml","stack.yaml.lock","pom.xml","build.gradle","build.gradle.kts","settings.gradle","settings.gradle.kts","gradle.properties","package-lock.json","yarn.lock","pnpm-lock.yaml","npm-shrinkwrap.json","requirements.txt","Pipfile","Pipfile.lock","pyproject.toml","setup.py","setup.cfg","Gemfile","Gemfile.lock","uv.lock","CODEOWNERS","AGENTS.md","CLAUDE.md","GEMINI.md"],"protected_path_prefixes":[".github/",".agents/"]},"create_report_incomplete_issue":{},"dispatch_workflow":{"aw_context_workflows":["news-translate"],"max":1,"workflow_files":{"news-translate":".lock.yml"},"workflows":["news-translate"]},"missing_data":{},"missing_tool":{},"noop":{"max":1,"report-as-issue":"true"},"push_repo_memory":{"memories":[{"dir":"/tmp/gh-aw/repo-memory/default","id":"default","max_file_count":50,"max_file_size":51200,"max_patch_size":51200}]},"report_incomplete":{}} - GH_AW_SAFE_OUTPUTS_CONFIG_dfdd08106ebd9828_EOF + cat > "${RUNNER_TEMP}/gh-aw/safeoutputs/config.json" << 'GH_AW_SAFE_OUTPUTS_CONFIG_cefd10ddfb5e1cfc_EOF' + 
{"add_comment":{"max":1},"create_pull_request":{"draft":false,"expires":336,"labels":["agentic-news","analysis-data"],"max":1,"max_patch_size":4096,"protected_files":["package.json","bun.lockb","bunfig.toml","deno.json","deno.jsonc","deno.lock","global.json","NuGet.Config","Directory.Packages.props","mix.exs","mix.lock","go.mod","go.sum","stack.yaml","stack.yaml.lock","pom.xml","build.gradle","build.gradle.kts","settings.gradle","settings.gradle.kts","gradle.properties","package-lock.json","yarn.lock","pnpm-lock.yaml","npm-shrinkwrap.json","requirements.txt","Pipfile","Pipfile.lock","pyproject.toml","setup.py","setup.cfg","Gemfile","Gemfile.lock","uv.lock","CODEOWNERS","AGENTS.md","CLAUDE.md","GEMINI.md"],"protected_path_prefixes":[".github/",".agents/"]},"create_report_incomplete_issue":{},"dispatch_workflow":{"aw_context_workflows":["news-translate"],"max":1,"workflow_files":{"news-translate":".lock.yml"},"workflows":["news-translate"]},"missing_data":{},"missing_tool":{},"noop":{"max":1,"report-as-issue":"true"},"push_repo_memory":{"memories":[{"dir":"/tmp/gh-aw/repo-memory/default","id":"default","max_file_count":50,"max_file_size":51200,"max_patch_size":51200}]},"report_incomplete":{}} + GH_AW_SAFE_OUTPUTS_CONFIG_cefd10ddfb5e1cfc_EOF - name: Write Safe Outputs Tools env: GH_AW_TOOLS_META_JSON: | { "description_suffixes": { "add_comment": " CONSTRAINTS: Maximum 1 comment(s) can be added. Supports reply_to_id for discussion threading.", - "create_pull_request": " CONSTRAINTS: Maximum 2 pull request(s) can be created. Labels [\"agentic-news\" \"analysis-data\"] will be automatically added." + "create_pull_request": " CONSTRAINTS: Maximum 1 pull request(s) can be created. Labels [\"agentic-news\" \"analysis-data\"] will be automatically added." 
}, "repo_params": {}, "dynamic_tools": [ @@ -772,7 +773,7 @@ jobs: mkdir -p /home/runner/.copilot GH_AW_NODE=$(which node 2>/dev/null || command -v node 2>/dev/null || echo node) - cat << GH_AW_MCP_CONFIG_a3d3b70d739c630c_EOF | "$GH_AW_NODE" "${RUNNER_TEMP}/gh-aw/actions/start_mcp_gateway.cjs" + cat << GH_AW_MCP_CONFIG_3bb6e7417a2d49cd_EOF | "$GH_AW_NODE" "${RUNNER_TEMP}/gh-aw/actions/start_mcp_gateway.cjs" { "mcpServers": { "agenticworkflows": { @@ -888,7 +889,7 @@ jobs: "payloadDir": "${MCP_GATEWAY_PAYLOAD_DIR}" } } - GH_AW_MCP_CONFIG_a3d3b70d739c630c_EOF + GH_AW_MCP_CONFIG_3bb6e7417a2d49cd_EOF - name: Download activation artifact uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1 with: @@ -1575,7 +1576,7 @@ jobs: GH_AW_ALLOWED_DOMAINS: "*.githubusercontent.com,api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.imf.org,api.individual.githubcopilot.com,api.npms.io,api.scb.se,api.snapcraft.io,api.worldbank.org,archive.ubuntu.com,azure.archive.ubuntu.com,bun.sh,cdn.jsdelivr.net,codeload.github.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,data.imf.org,data.riksdagen.se,deb.nodesource.com,deno.land,docs.github.com,esm.sh,get.pnpm.io,github-cloud.githubusercontent.com,github-cloud.s3.amazonaws.com,github.blog,github.com,github.githubassets.com,googleapis.deno.dev,googlechromelabs.github.io,hack23.com,hack23.github.io,host.docker.internal,json-schema.org,json.schemastore.org,jsr.io,keyserver.ubuntu.com,lfs.github.com,localhost,nodejs.org,npm.pkg.github.com,npmjs.com,npmjs.org,objects.githubusercontent.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubuserconten
t.com,regeringen.se,registry.bower.io,registry.npmjs.com,registry.npmjs.org,registry.yarnpkg.com,repo.yarnpkg.com,riksdag-regering-ai.onrender.com,riksdagen.se,riksdagsmonitor.com,s.symcb.com,s.symcd.com,security.ubuntu.com,skimdb.npmjs.com,storage.googleapis.com,telemetry.enterprise.githubcopilot.com,telemetry.vercel.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com,www.googleapis.com,www.hack23.com,www.imf.org,www.npmjs.com,www.npmjs.org,www.regeringen.se,www.riksdagen.se,www.riksdagsmonitor.com,www.scb.se,yarnpkg.com" GITHUB_SERVER_URL: ${{ github.server_url }} GITHUB_API_URL: ${{ github.api_url }} - GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: "{\"add_comment\":{\"max\":1},\"create_pull_request\":{\"draft\":false,\"expires\":336,\"labels\":[\"agentic-news\",\"analysis-data\"],\"max\":2,\"max_patch_size\":4096,\"protected_files\":[\"package.json\",\"bun.lockb\",\"bunfig.toml\",\"deno.json\",\"deno.jsonc\",\"deno.lock\",\"global.json\",\"NuGet.Config\",\"Directory.Packages.props\",\"mix.exs\",\"mix.lock\",\"go.mod\",\"go.sum\",\"stack.yaml\",\"stack.yaml.lock\",\"pom.xml\",\"build.gradle\",\"build.gradle.kts\",\"settings.gradle\",\"settings.gradle.kts\",\"gradle.properties\",\"package-lock.json\",\"yarn.lock\",\"pnpm-lock.yaml\",\"npm-shrinkwrap.json\",\"requirements.txt\",\"Pipfile\",\"Pipfile.lock\",\"pyproject.toml\",\"setup.py\",\"setup.cfg\",\"Gemfile\",\"Gemfile.lock\",\"uv.lock\",\"CODEOWNERS\",\"AGENTS.md\",\"CLAUDE.md\",\"GEMINI.md\"],\"protected_path_prefixes\":[\".github/\",\".agents/\"]},\"create_report_incomplete_issue\":{},\"dispatch_workflow\":{\"aw_context_workflows\":[\"news-translate\"],\"max\":1,\"workflow_files\":{\"news-translate\":\".lock.yml\"},\"workflows\":[\"news-translate\"]},\"missing_data\":{},\"missing_tool\":{},\"noop\":{\"max\":1,\"report-as-issue\":\"true\"},\"report_incomplete\":{}}" + GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: 
"{\"add_comment\":{\"max\":1},\"create_pull_request\":{\"draft\":false,\"expires\":336,\"labels\":[\"agentic-news\",\"analysis-data\"],\"max\":1,\"max_patch_size\":4096,\"protected_files\":[\"package.json\",\"bun.lockb\",\"bunfig.toml\",\"deno.json\",\"deno.jsonc\",\"deno.lock\",\"global.json\",\"NuGet.Config\",\"Directory.Packages.props\",\"mix.exs\",\"mix.lock\",\"go.mod\",\"go.sum\",\"stack.yaml\",\"stack.yaml.lock\",\"pom.xml\",\"build.gradle\",\"build.gradle.kts\",\"settings.gradle\",\"settings.gradle.kts\",\"gradle.properties\",\"package-lock.json\",\"yarn.lock\",\"pnpm-lock.yaml\",\"npm-shrinkwrap.json\",\"requirements.txt\",\"Pipfile\",\"Pipfile.lock\",\"pyproject.toml\",\"setup.py\",\"setup.cfg\",\"Gemfile\",\"Gemfile.lock\",\"uv.lock\",\"CODEOWNERS\",\"AGENTS.md\",\"CLAUDE.md\",\"GEMINI.md\"],\"protected_path_prefixes\":[\".github/\",\".agents/\"]},\"create_report_incomplete_issue\":{},\"dispatch_workflow\":{\"aw_context_workflows\":[\"news-translate\"],\"max\":1,\"workflow_files\":{\"news-translate\":\".lock.yml\"},\"workflows\":[\"news-translate\"]},\"missing_data\":{},\"missing_tool\":{},\"noop\":{\"max\":1,\"report-as-issue\":\"true\"},\"report_incomplete\":{}}" GH_AW_CI_TRIGGER_TOKEN: ${{ secrets.GH_AW_CI_TRIGGER_TOKEN }} with: github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }} diff --git a/.github/workflows/news-week-ahead.md b/.github/workflows/news-week-ahead.md index f40e4d412..7e369a695 100644 --- a/.github/workflows/news-week-ahead.md +++ b/.github/workflows/news-week-ahead.md @@ -2,6 +2,16 @@ name: "News: Week Ahead" description: Generates week-ahead prospective articles in core languages (EN, SV). Translations handled by news-translate workflow. Runs Fridays to preview the upcoming parliamentary week. 
strict: false +imports: + - ../prompts/00-base-contract.md + - ../prompts/01-bash-and-shell-safety.md + - ../prompts/02-mcp-access.md + - ../prompts/03-data-download.md + - ../prompts/04-analysis-pipeline.md + - ../prompts/05-analysis-gate.md + - ../prompts/06-article-generation.md + - ../prompts/07-commit-and-pr.md + - ../prompts/ext/tier-c-aggregation.md on: schedule: weekly on friday around 7:00 workflow_dispatch: @@ -119,7 +129,7 @@ safe-outputs: labels: [agentic-news, analysis-data] draft: false expires: 14d - max: 2 + max: 1 add-comment: {} dispatch-workflow: workflows: [news-translate] @@ -157,26 +167,6 @@ steps: if [ "$WARM" = "false" ]; then echo "⚠️ MCP server did not respond after 6 attempts — agent will retry via in-prompt health gate" fi - echo "🔄 Starting background keep-alive pinger (every 30s, max 55 min — covers full 60-min workflow through safe-output PR creation)..." - KEEP_ALIVE_START=$(date +%s) - KEEP_ALIVE_END=$((KEEP_ALIVE_START + 3300)) - export MCP_URL KEEP_ALIVE_END - nohup bash -c ' - while :; do - NOW=$(date +%s) - if [ "$NOW" -ge "$KEEP_ALIVE_END" ]; then - break - fi - curl -sf --max-time 10 -X POST \ - -H "Content-Type: application/json" \ - -d "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\",\"params\":{}}" \ - "$MCP_URL" -o /dev/null 2>/dev/null || true - sleep 30 - done - ' </dev/null >/tmp/mcp-keepalive.log 2>&1 & - KEEP_ALIVE_PID=$! - disown "$KEEP_ALIVE_PID" 2>/dev/null || true - echo "Keep-alive PID: $KEEP_ALIVE_PID (auto-exits after 55 min; log: /tmp/mcp-keepalive.log)" - name: Pre-flight external endpoint reachability check (runs before MCP Gateway) run: | @@ -230,579 +220,51 @@ engine: model: claude-opus-4.7 --- -# 📅 Week Ahead Article Generator - -You are the **News Journalist Agent** for Riksdagsmonitor generating **week-ahead** prospective articles. 
-
-## 🔴 CRITICAL: AI Writes ALL Content with Iterative Improvement (v5.0)
-
-> **You are a political intelligence analyst, NOT a script executor.** Your PRIMARY job is to produce excellent quality political intelligence through iterative improvement. You MUST:
-> 1. **ANALYZE** parliamentary data deeply — SWOT, stakeholder perspectives, risk assessment, election implications
-> 2. **WRITE** genuine political intelligence articles with specific actors, evidence citations, and analytical insight
-> 3. **USE** the script (`generate-news-enhanced.ts`) ONLY for HTML formatting — the script creates a shell, YOU fill it with analysis
-> 4. **REPLACE** every `AI_MUST_REPLACE` marker with real analysis — ZERO markers may remain
-> 5. **ITERATE** — read ALL your output back completely and IMPROVE every section (minimum 2 full passes)
-> 6. **VERIFY** article quality: minimum 1000 words, SWOT analysis, stakeholder perspectives, dok_id citations
-> 7. **SPEND THE FULL TIME** — use at least 40 of the 45 allocated minutes doing real work
->
-> 🔴 **ITERATIVE IMPROVEMENT IS MANDATORY (2+ passes):**
-> - **Analysis Pass 1** (15 min): Create analysis for every document following templates
-> - **Analysis Pass 2** (7 min): Read ALL analysis back, improve evidence, diagrams, cross-references
-> - **Article Pass 1** (10 min): Generate articles with AI-written content from analysis
-> - **Article Pass 2** (8 min): Read ALL articles back completely, improve every section
-> - **NEVER complete early** — if you finish ahead, use remaining time to deepen analysis
->
-> **If the final article reads like a list of document titles with generic descriptions, you have FAILED.** Rewrite with genuine political analysis before committing.
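Rule 4's zero-marker requirement can be enforced mechanically before committing. A minimal sketch — the `news/` glob, filename pattern, and gate logic are illustrative, not an existing project script:

```shell
# Illustrative pre-commit gate: no generated article may still contain
# an unreplaced AI_MUST_REPLACE marker (rule 4 above).
LEFTOVER=0
for FILE in news/*week-ahead*.html; do
  [ -f "$FILE" ] || continue
  if grep -q 'AI_MUST_REPLACE' "$FILE"; then
    echo "🔴 Unreplaced AI_MUST_REPLACE marker in: $FILE"
    LEFTOVER=$((LEFTOVER + 1))
  fi
done
if [ "$LEFTOVER" -gt 0 ]; then
  echo "❌ $LEFTOVER file(s) still contain markers — rewrite before committing"
fi
```

Run this after Article Pass 2; any hit means the improvement pass is incomplete.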
- - -## 🔧 Workflow Dispatch Parameters - -- **force_generation** = `${{ github.event.inputs.force_generation }}` -- **languages** = `${{ github.event.inputs.languages }}` -- **analysis_depth** = `${{ github.event.inputs.analysis_depth }}` - -If **force_generation** is `true`, generate articles even if recent ones exist. Use the **languages** value to determine which languages to generate. - -## 🚨 CRITICAL: Single Article Type Focus - -**This workflow generates ONLY `week-ahead` articles.** Do not generate other article types. - -## 🧠 Repo Memory - -Uses `memory/news-generation` branch. START: read `memory/news-generation/last-run-news-week-ahead.json` + `memory/news-generation/covered-documents/{YYYY-MM-DD}.json`. END: update both + `memory/news-generation/translation-status.json`. Skip already-covered dok_ids. - -## ⏱️ Time Budget (45 minutes) — ENFORCED Minimum 40 Minutes - -> 🔴 **SYSTEMIC ISSUE (PR #1794 audit, 2026-04-16)**: ALL news workflows completing in 13-22 min of 45-min allocation, producing shallow analysis. Agent MUST use at least 40 of 45 minutes. Completion < 40 min = insufficient iteration = REJECTED. - -```bash -date +%s > /tmp/start_time.txt -read START_TIME < /tmp/start_time.txt -``` - -- **Minutes 0–3**: Date check, MCP warm-up with `get_sync_status()` -- **Minutes 3–5**: Run download-parliamentary-data pipeline (download data) -- **Minutes 5–20**: 🚨 **AI Analysis Pass 1 (15 min minimum)**: Read ALL methodology guides, create per-file analysis for EVERY document with Mermaid diagrams, evidence tables, SWOT entries. -- **Minutes 20–22**: 🚨 **AI Analysis Pass 2 (Part A, start)**: Begin reading ALL analysis back and identify improvement targets. -- **Minutes 22–25**: 🫀 **Heartbeat PR** — `git add && git commit` analysis artifacts so far, then `safeoutputs___create_pull_request` (title `🫀 Heartbeat - Week Ahead - {date}`). Refreshes the safeoutputs MCP session (idle timeout ~30–35 min) AND preserves work if later phases fail. 
Run `git checkout main` after the call so subsequent commits don't stack onto the frozen patch. -- **Minutes 25–27**: 🚨 **AI Analysis Pass 2 (Part B, complete — 5 min improvement work total across Parts A+B)**: Improve every section, replace ALL script stubs with AI analysis. Run enrichment verification gate. -- **Minutes 27–29**: Run ENFORCED Minimum Time Gate + Enrichment Verification Gate (SHARED_PROMPT_PATTERNS.md). Both MUST pass. -- **Minutes 29–35**: Generate articles for all 14 languages -- **Minutes 35–40**: 🚨 **Article Improvement Pass**: Read ALL articles back, replace AI_MUST_REPLACE markers, improve content. Run article quality component gate. -- **Minutes 40–43**: Validate and commit analysis + articles, create PR with `safeoutputs___create_pull_request` -- **Minutes 43–45**: 🚨 **HARD DEADLINE** — If no safe output yet: if ANY artifacts/files were created, IMMEDIATELY stage, commit, call `safeoutputs___create_pull_request` with partial work. ONLY call `safeoutputs___noop` if truly ZERO files were created. - -> ⚠️ **Analysis phase is 22 minutes minimum (Pass 1: 15 min + Pass 2: 7 min)** — every analysis file must contain color-coded Mermaid diagrams, structured evidence tables with dok_id citations, and follow template structure exactly. ALL script-generated stubs MUST be replaced with AI-enriched analysis. - -## ⚠️ CRITICAL: Bash Tool Call Format - -> **Full reference:** See `SHARED_PROMPT_PATTERNS.md` → "Bash Tool Call Format". Key rule: every `bash` call MUST have both `command` AND `description` parameters. Example: `bash({ command: "date -u '+%Y-%m-%d'", description: "Get current UTC date" })`. Calls missing either field fail with `Multiple validation errors: - "command": Required - "description": Required`. - -## 🛡️ AWF Shell Safety - -> **Full reference:** See `SHARED_PROMPT_PATTERNS.md` → "AWF Shell Safety". Summary: use `$VAR` not `$`+`{VAR}`, use `find -exec` not `$(...)`, set defaults with `if/then` before using `$VAR`. 
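The three shell-safety rules can be illustrated with a short sketch (variable names are hypothetical; the write-to-file-then-`read` idiom mirrors the other snippets in this prompt):

```shell
# Rule: set a default with if/then BEFORE first use of a variable.
if [ -z "$LIMIT" ]; then
  LIMIT=50
fi
echo "Using limit: $LIMIT"

# Rule: avoid command substitution — write to a temp file and read it back.
date -u +%Y-%m-%d > /tmp/today.txt
read TODAY < /tmp/today.txt
echo "Today: $TODAY"

# Rule: iterate files with find -exec instead of command substitution.
if [ -d "analysis/daily" ]; then
  find analysis/daily -maxdepth 1 -type d -name '20*' -exec echo "found: {}" \;
fi
```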
-
-## 🔤 UTF-8 Encoding
-
-> **Full reference:** See `SHARED_PROMPT_PATTERNS.md` → "UTF-8 Encoding". Summary: use native UTF-8 (`ö`, `ä`, `å`) — NEVER HTML entities (`&ouml;`, `&auml;`). Author: `James Pether Sörling`.
-
-
-## Required Skills
-
-Consult as needed — do NOT read all files upfront:
-- **Skills:** `.github/skills/editorial-standards/SKILL.md`, `.github/skills/swedish-political-system/SKILL.md`, `.github/skills/legislative-monitoring/SKILL.md`, `.github/skills/riksdag-regering-mcp/SKILL.md`, `.github/skills/language-expertise/SKILL.md`, `.github/skills/gh-aw-safe-outputs/SKILL.md`
-- **Analysis:** `scripts/prompts/v2/political-analysis.md`, `per-file-intelligence-analysis.md`, `stakeholder-perspectives.md`, `quality-criteria.md`
-- **Methodology:** `analysis/methodologies/ai-driven-analysis-guide.md` (v5.0) + `analysis/templates/per-file-political-intelligence.md`
-
-## 📊 MANDATORY Multi-Step AI Analysis Framework
-
-### Standardised Analysis Depth Gate
-
-> ⚠️ **Default is `deep`** — not `standard`. Analysis must always produce publication-quality output with Mermaid diagrams and evidence tables.
-
-| Depth | AI iterations | SWOT stakeholders | Charts | Mindmap | Min. analysis time |
-|-------|--------------|-------------------|--------|---------|-------------------|
-| standard | 1-2 | ≥3 | ≥1 | optional | 10 minutes |
-| deep | 2-3 | ≥5 | ≥2 | required | 15 minutes |
-| comprehensive | 3+ | ≥7 | ≥3 | required | 20 minutes |
-
-**Minimum requirement for ALL depths**: Every analysis file must contain at least 1 color-coded Mermaid diagram, structured evidence tables with dok_id citations, and follow the corresponding template structure exactly. Plain prose without tables/diagrams is NEVER acceptable regardless of depth level.
-
-> **Read `analysis_depth` input first** (default: `deep`). This controls iteration count and section requirements.
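Resolving the `analysis_depth` input against the depth table above can be sketched as follows (variable names are illustrative; the `deep` fallback mirrors the stated default):

```shell
# Map analysis_depth (default: deep) to the minimums from the depth table.
if [ -z "$ANALYSIS_DEPTH" ]; then
  ANALYSIS_DEPTH="deep"
fi
case "$ANALYSIS_DEPTH" in
  standard)      MIN_STAKEHOLDERS=3; MIN_CHARTS=1; MIN_ANALYSIS_MINUTES=10 ;;
  deep)          MIN_STAKEHOLDERS=5; MIN_CHARTS=2; MIN_ANALYSIS_MINUTES=15 ;;
  comprehensive) MIN_STAKEHOLDERS=7; MIN_CHARTS=3; MIN_ANALYSIS_MINUTES=20 ;;
  *)
    echo "⚠️ Unknown depth '$ANALYSIS_DEPTH' — falling back to deep"
    ANALYSIS_DEPTH="deep"; MIN_STAKEHOLDERS=5; MIN_CHARTS=2; MIN_ANALYSIS_MINUTES=15 ;;
esac
echo "Depth: $ANALYSIS_DEPTH (≥$MIN_STAKEHOLDERS stakeholders, ≥$MIN_CHARTS charts, ≥$MIN_ANALYSIS_MINUTES min analysis)"
```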
- -Based on the editorial profile for `week-ahead` (from `scripts/editorial-framework.ts`): -- **SWOT**: quick (1-paragraph overview only) -- **Dashboard**: required (min. 2 Chart.js charts) -- **Mindmap**: not required -- **Min. stakeholders**: 3 perspectives -- **AI iterations**: 1 (standard), 2 (deep), or 3 (comprehensive) - -### 🗳️ Election 2026 Lens (Mandatory — v5.0) - -Every analysis MUST include an **Election 2026 Implications** section assessing: Electoral Impact, Coalition Scenarios, Voter Salience, Campaign Vulnerability, and Policy Legacy. Use the **5-level confidence scale** (⬛VERY LOW → 🟥LOW → 🟧MEDIUM → 🟩HIGH → 🟦VERY HIGH). See `analysis/methodologies/ai-driven-analysis-guide.md` v5.0 for full criteria. - -### Phase 1 — Data Collection & Initial Analysis -1. Fetch MCP data (`get_calendar_events`, `get_fragor`, `get_interpellationer`, `get_sync_status`) -2. Extract watch-points and key parliamentary events for the coming week -3. Build initial outline: week summary lede, calendar-driven event blocks, watch-point highlights - -### Phase 2 — Depth Enhancement (for `deep`/`comprehensive` depth only) -1. **Quick SWOT**: 1-paragraph SWOT overview of the week's political balance -2. **Event Dashboard**: Provide concise summary data for ≥2 analytical views (committee meeting density, event type breakdown) as prose or markdown tables that can be included directly in the article without requiring any undocumented rendering pipeline -3. **Quality Gate**: - - Verify watch-points are specific and actionable (not just event titles) - - Verify all Swedish API text is translated - - Verify word count ≥ 600 - -### Phase 3 — Final Quality Gate Before PR -Run all validation checks from the **MANDATORY Quality Validation** section below before committing. 
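The Phase 2 word-count check can be approximated mechanically. A sketch only — the `sed` tag-stripping is crude and the article path is hypothetical; the 600-word floor comes from the quality gate above:

```shell
# Approximate the visible word count of one article (quality gate: ≥ 600).
ARTICLE="news/2026-01-01-week-ahead-en.html"   # hypothetical path
if [ -f "$ARTICLE" ]; then
  sed 's/<[^>]*>/ /g' "$ARTICLE" | wc -w > /tmp/word_count.txt
  read WORDS < /tmp/word_count.txt
  if [ "$WORDS" -lt 600 ]; then
    echo "❌ $ARTICLE: $WORDS words — below the 600-word minimum"
  else
    echo "✅ $ARTICLE: word count OK ($WORDS)"
  fi
fi
```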
- -## MANDATORY Date Validation - -```bash -echo "=== Date Validation Check ===" -date -u "+Current UTC: %A %Y-%m-%d %H:%M:%S" -echo "Article Type: week-ahead" -echo "============================" -``` - -## 📅 Riksmöte (Parliamentary Session) Calculation - -September+ → `rm = "{year}/{year+1 2-digit}"` (e.g. Oct 2026 → `2026/27`). Before September → `rm = "{year-1}/{year 2-digit}"` (e.g. Feb 2026 → `2025/26`). Use in ALL MCP queries requiring `rm`. - -## MANDATORY Deduplication Check - -Before generating articles, check if articles already exist for the target date. **This check controls article GENERATION only — the deep political analysis phase ALWAYS runs regardless.** -```bash -# Resolve article date: use workflow_dispatch input when provided, fallback to UTC today -ARTICLE_DATE="${{ github.event.inputs.article_date }}" -if [ -z "$ARTICLE_DATE" ]; then - date -u +%Y-%m-%d > /tmp/today.txt - read ARTICLE_DATE < /tmp/today.txt -fi -ARTICLE_TYPE="week-ahead" -# Derive FORCE_GENERATION from the workflow_dispatch input -FORCE_GENERATION="${{ github.event.inputs.force_generation || 'false' }}" -ls news/$ARTICLE_DATE-$ARTICLE_TYPE-en.html 2>/dev/null | wc -l > /tmp/existing_count.txt -read EXISTING < /tmp/existing_count.txt -if [ "$EXISTING" -gt 0 ] && [ "$FORCE_GENERATION" != "true" ]; then - echo "📋 Articles for $ARTICLE_DATE/$ARTICLE_TYPE already exist — article generation will be skipped (analysis still runs)" - SKIP_ARTICLE_GENERATION=true - echo "SKIP_ARTICLE_GENERATION=true" >> "$GITHUB_ENV" -fi -# NOTE: Do NOT exit here or call safeoutputs___noop — analysis phase MUST still execute -# Later article-generation steps MUST gate on: if [ "$SKIP_ARTICLE_GENERATION" != "true" ]; then ... - -``` - -> **🚨 NEVER call `safeoutputs___noop` because articles already exist.** If articles exist, the workflow MUST still run the full 15-20 minute deep political analysis phase and commit analysis artifacts. 
The dedup check only controls whether NEW HTML articles are generated — analysis is the primary output and always runs. If analysis produces artifacts, use `safeoutputs___create_pull_request` with `analysis-only` label. - -## MANDATORY MCP Health Gate - -> **The step-level pre-warm (6 attempts × 20s) already mitigates Render.com cold starts.** This in-prompt gate is a lightweight verification — NOT a full retry loop. Do NOT spend more than 90 seconds here. -> -> **📖 Full MCP architecture, tool names, and calling conventions:** See `SHARED_PROMPT_PATTERNS.md` → "MCP Architecture & Tool Reference" section. Tool names are EXACT: riksdag tools use underscores (`get_sync_status`), World Bank uses hyphens (`get-economic-data`), SCB uses underscores (`search_tables`). - -1. Call `get_sync_status({})` — retry up to **3×** (20s wait between each, not 45s — the server is already warm from the step-level pre-warm) -2. If you get **"unknown tool"** or **"0 tools registered"** errors after 3 attempts, run a quick diagnostic: -```bash -echo "🔍 MCP Quick Diagnostic" -echo "Direct MCP server:" && curl -sf --max-time 15 -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}' "https://riksdag-regering-ai.onrender.com/mcp" 2>/dev/null | head -c 200 || echo "UNREACHABLE" -``` -3. After 3 failures → `safeoutputs___noop({"message": "MCP server unavailable after 3 attempts — step-level pre-warm also failed"})` -4. **ALL content MUST come from live MCP data.** Never use cached articles, stale data, or AI-fabricated content. -5. **⏱️ Do NOT spend more than 2 minutes on MCP warmup** — proceed to analysis immediately once `get_sync_status` succeeds. - -## 🛡️ File Ownership Contract - -Content workflows: only create/modify **EN and SV** files (`news/YYYY-MM-DD-*-en.html`, `*-sv.html`). Validate with `npx tsx scripts/validate-file-ownership.ts content`. 
Fix violations: `git restore --staged --worktree -- <file>` (tracked) or `rm <file>` (untracked).
-
-### Branch Naming Convention
-
-Branch: `news/content/{YYYY-MM-DD}/{article-type}` (e.g. `news/content/2026-03-23/week-ahead`). `safeoutputs___create_pull_request` handles this automatically.
-
-## MANDATORY PR Creation
-
-> **🚀 HOW SAFE PR CREATION WORKS — READ THIS FIRST**
->
-> The `safeoutputs___create_pull_request` tool handles **everything**: branch creation, pushing commits, and opening the PR. Do NOT run `git push` or `git checkout -b` manually.
->
-> **Exact steps:**
-> 1. Write article files to `news/` using `bash` or `edit` tools
-> 2. Stage and commit locally (scoped to week-ahead subfolder): `git add news/*week-ahead*.html news/metadata/ "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER/" analysis/weekly/ && git commit -m "Add week-ahead articles and analysis artifacts"`
-> 3. Call `safeoutputs___create_pull_request` with `title`, `body`, and `labels`
->
-- ✅ `safeoutputs___create_pull_request` for articles or analysis-only PRs
-- ✅ `safeoutputs___noop` ONLY if MCP unreachable after 3 attempts AND no analysis artifacts exist
-- ❌ NEVER noop because articles already exist — analysis always runs
-- ❌ Safe output tools are in your tool list — NEVER search for them via bash
-
-## 🌐 Dispatch Translation Workflow
-
-After creating the content PR, dispatch translations: `safeoutputs___dispatch_workflow({ "workflow_name": "news-translate", "inputs": { "article_date": "<YYYY-MM-DD>", "article_type": "<article-type>", "languages": "all-extra" } })`. See `news-translate.md` for full translation quality rules.
-
-## MCP Tools
-
-**ALWAYS call `get_sync_status()` FIRST.**
-
-**Primary tool:** `get_calendar_events` — fetches upcoming 7-day calendar (**⚠️ Known issue: may return HTML instead of JSON; if this happens, treat it as a calendar retrieval failure and state that explicitly in the analysis.
You may query `search_dokument` with a recent lookback window only as a proxy signal of parliamentary activity (e.g., recently published committee reports/propositions), but must never treat "no documents found" as "no upcoming events."**) -**Cross-reference:** `search_dokument`, `get_fragor`, `get_interpellationer` -**Statistical enrichment:** SCB/World Bank — for scheduled economic debates, pre-fetch relevant indicators. Use committee-mapped tables from `scripts/scb-context.ts` based on which committees have scheduled meetings. **World Bank indicators (144 total)**: `view analysis/worldbank/indicators-inventory.json` to discover indicators matching scheduled committees — each indicator has `policyAreas`, `committees`, and `mcpTool` fields. Use MCP tools for indicators with `mcpTool` field. See `SHARED_PROMPT_PATTERNS.md` §"WORLD BANK ECONOMIC CONTEXT INTEGRATION" for chart templates. MUST generate ≥1 economic chart when week includes economic policy events. - -```javascript -get_sync_status({}) -// Get events for next 7 days -const today = new Date().toISOString().split('T')[0]; -const nextWeek = new Date(Date.now() + 7*86400000).toISOString().split('T')[0]; -get_calendar_events({ from: today, tom: nextWeek, limit: 100 }) -// If calendar API returns error/HTML: -// 1. Flag explicitly: "Calendar data unavailable (API returned HTML instead of JSON)" -// 2. Optional proxy signal only — query recently published documents (lookback, NOT forward): -// const yesterday = new Date(Date.now() - 86400000).toISOString().split('T')[0]; -// search_dokument({ from_date: yesterday, to_date: today, limit: 50, doktyp: "bet" }) -// 3. 
NEVER treat "no documents found" as "no upcoming events" -``` - -## Generation Steps - -### Step 1: Check Existing Articles (Analysis Always Runs) -🚨 **FULL ANALYSIS BEFORE ANY ARTICLE (BLOCKING)**: The complete deep political analysis phase following [`analysis/methodologies/ai-driven-analysis-guide.md`](../../analysis/methodologies/ai-driven-analysis-guide.md) (Rule 0 two-pass iteration + Rules 6–8 depth tiers, 15 min Pass 1 + 7 min Pass 2 minimum, ALL 9 required artifacts) **MUST** complete **BEFORE** any article HTML is created or updated. Articles MUST be (re)generated from the improved Pass 2 analysis — never from Pass 1 stubs, never from scripts alone, never skipping Pass 2. Violations = REJECTED PR (PR #1705 comment audit, 2026-04-18). - -Check if week-ahead articles already exist for the target date. If they do, skip article generation but **ALWAYS run the full deep political analysis phase** — analysis is the primary output and must execute on every run regardless of article existence. - -### Step 2: Query MCP -```javascript -get_sync_status({}) -get_calendar_events({ from: "YYYY-MM-DD", tom: "YYYY-MM-DD+7", limit: 100 }) -search_dokument({ from_date: "YYYY-MM-DD", to_date: "YYYY-MM-DD+7" }) -get_fragor({ rm: <calculated riksmöte>, limit: 20 }) -``` - -### Step 2.5: Run Pre-Article Analysis Pipeline - -**CRITICAL: Download data first, then AI creates ALL 14 analysis artifacts (9 core + 5 Tier-C reference-grade).** `download-parliamentary-data.ts` downloads raw data ONLY — it performs NO analysis. The AI agent MUST: -1. Read `analysis/methodologies/ai-driven-analysis-guide.md` fully -2. Read ALL 8 templates in `analysis/templates/` -3. **STEP 0 — Upstream Watchpoint Ingestion (MANDATORY, per `SHARED_PROMPT_PATTERNS.md` §"Recent Daily Knowledge-Base Synthesis")**: ingest forward indicators from the last **7 days** of sibling daily runs + the last `weekly-review`. Build the Watchpoint Reconciliation table (no silent drops). -4. 
Create ALL **14** analysis files in `analysis/daily/YYYY-MM-DD/week-ahead/` using evidence from the downloaded data AND the ingested upstream watchpoints -5. Reference exemplars: [`analysis/daily/2026-04-18/weekly-review/`](../../analysis/daily/2026-04-18/weekly-review/) and [`analysis/daily/2026-04-19/month-ahead/`](../../analysis/daily/2026-04-19/month-ahead/) - -Run the **14-Artifact Completeness Gate** (aggregation workflow) from `SHARED_PROMPT_PATTERNS.md` §"14 REQUIRED Artifacts for AGGREGATION Workflows — Reference-Grade Tier-C" to verify ALL 14 files exist: the 9 core (synthesis-summary.md, swot-analysis.md, risk-assessment.md, threat-analysis.md, classification-results.md, significance-scoring.md, stakeholder-perspectives.md, cross-reference-map.md, data-download-manifest.md) PLUS the 5 Tier-C reference-grade files (README.md, executive-brief.md, scenario-analysis.md, comparative-international.md, methodology-reflection.md). - -> 📐 **Period-scope multiplier: 1.0× (baseline)** — `week-ahead` uses the baseline Tier-C byte thresholds. See `SHARED_PROMPT_PATTERNS.md` §"Period-Scope Multipliers" for the full table. - -```bash -date -u +%Y-%m-%d > /tmp/today.txt -read ARTICLE_DATE < /tmp/today.txt -echo "📊 Running pre-article analysis for $ARTICLE_DATE..." -# CRITICAL: Source mcp-setup.sh FIRST to set MCP_SERVER_URL and MCP_AUTH_TOKEN for the gateway -source scripts/mcp-setup.sh -npx tsx scripts/download-parliamentary-data.ts --date "$ARTICLE_DATE" --limit 50 > /tmp/pipeline-output.log 2>&1 -PIPE_EXIT=$? 
-cat /tmp/pipeline-output.log -if [ "$PIPE_EXIT" -ne 0 ]; then - echo "❌ Pipeline failed — agent MUST diagnose and fix (read /tmp/pipeline-output.log)" - tail -20 /tmp/pipeline-output.log -fi -echo "📊 Analysis artifacts for $ARTICLE_DATE:" -ls -la "analysis/daily/$ARTICLE_DATE/" 2>/dev/null || echo "⚠️ No analysis output" -find analysis/data/ -name "*.json" -type f 2>/dev/null | wc -l > /tmp/data_count.txt -read DATA_JSON_COUNT < /tmp/data_count.txt -echo "📊 JSON data files: $DATA_JSON_COUNT (must be > 0)" -# Relocate pipeline artifacts: download-parliamentary-data.ts writes to analysis/daily/$DATE/ (unscoped) -# but this workflow needs them under analysis/daily/$DATE/week-ahead/ -# === Run Suffix Resolution (see SHARED_PROMPT_PATTERNS.md) === -BASE_SUBFOLDER="week-ahead" -ANALYSIS_SUBFOLDER="$BASE_SUBFOLDER" -if [ "$FORCE_GENERATION" != "true" ]; then - _SUFFIX=1 - while [ -f "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER/synthesis-summary.md" ]; do - _SUFFIX=$((_SUFFIX + 1)) - ANALYSIS_SUBFOLDER="$BASE_SUBFOLDER-$_SUFFIX" - done -fi -echo "📁 Analysis subfolder resolved: $ANALYSIS_SUBFOLDER" -UNSCOPED_DIR="analysis/daily/$ARTICLE_DATE" -SCOPED_DIR="$UNSCOPED_DIR/$ANALYSIS_SUBFOLDER" -if [ -d "$UNSCOPED_DIR" ]; then - mkdir -p "$SCOPED_DIR" - if find "$UNSCOPED_DIR" -maxdepth 1 -type f -name "*.md" | grep -q .; then - find "$UNSCOPED_DIR" -maxdepth 1 -type f -name "*.md" -exec mv -f {} "$SCOPED_DIR/" \; - echo "📁 Moved pipeline *.md artifacts → $SCOPED_DIR (root cleaned to prevent merge conflicts)" - fi - if [ -d "$UNSCOPED_DIR/documents" ]; then - mkdir -p "$SCOPED_DIR/documents" - find "$UNSCOPED_DIR/documents" -mindepth 1 -maxdepth 1 -exec mv {} "$SCOPED_DIR/documents/" \; - rmdir "$UNSCOPED_DIR/documents" 2>/dev/null || true - echo "📁 Relocated pipeline documents/ contents → $SCOPED_DIR/documents" - fi -fi -if [ "$DATA_JSON_COUNT" -eq 0 ]; then - echo "🚨 CRITICAL: Pipeline downloaded ZERO data. Agent MUST diagnose and fix — do NOT fabricate analysis." 
-fi -``` - -**Weekly aggregation**: Since this is a weekly-scope workflow (runs Fridays), aggregate the week's daily analyses: - -```bash -date -u +%G-W%V > /tmp/week_label.txt -read WEEK_LABEL < /tmp/week_label.txt -echo "📅 Running weekly aggregation for $WEEK_LABEL..." -source scripts/mcp-setup.sh && npx tsx scripts/download-parliamentary-data.ts --aggregate weekly --date "$WEEK_LABEL" || echo "⚠️ Weekly aggregation failed (non-blocking)" -ls -la "analysis/weekly/$WEEK_LABEL/" 2>/dev/null || echo "⚠️ No weekly aggregation output" -``` - -These files are committed alongside articles for human review and continuous improvement. - -### 🔴 MANDATORY: Batch Analysis Enrichment (Prevents Empty "0 Documents Analyzed" Files) - -> **Root Cause**: The `download-parliamentary-data.ts` script filters documents by exact date match. When no documents match the exact analysis date, batch files report "0 documents analyzed" — this violates `ai-driven-analysis-guide.md` quality requirements. - -**After per-file analysis, check if batch files are empty and enrich them:** - -1. Check `synthesis-summary.md` — if it reports "0 documents analyzed" but per-document analyses exist in `documents/`, aggregate the per-doc findings into all 9 batch files -2. If NO per-doc analyses exist AND batch files show "0 documents analyzed", use MCP `get_calendar_events(from=ARTICLE_DATE, tom=7_DAYS_AHEAD)` and `get_betankanden(rm="2025/26", limit=20)` directly to find upcoming parliamentary activity and create meaningful analysis -3. Each enriched batch file MUST include: ≥1 Mermaid diagram, structured tables, evidence citations, confidence labels -4. **NEVER commit batch files that report "0 documents analyzed" when analysis data is available** -5. 
See `ai-driven-analysis-guide.md` "Deep-Inspection Batch Analysis Enrichment Protocol (v4.1)" for full requirements - -### 🚨 MANDATORY: Analysis Artifacts Must ALWAYS Be Committed - -**Before deciding whether to generate articles or call noop, you MUST:** - -1. **Review the analysis artifacts** in `analysis/daily/YYYY-MM-DD/` and `analysis/weekly/` — read `synthesis-summary.md` and `significance-scoring.md` to understand what was found -2. **Summarize the analysis findings** — note how many documents were downloaded, their significance scores, key themes, and risk levels -3. **ALWAYS commit analysis artifacts** regardless of whether articles will be generated: - -```bash -date -u +%Y-%m-%d > /tmp/today.txt -read ARTICLE_DATE < /tmp/today.txt -ANALYSIS_DIR="analysis/daily/$ARTICLE_DATE/week-ahead" -ANALYSIS_COUNT=0 -if [ -d "$ANALYSIS_DIR" ]; then - find "$ANALYSIS_DIR" -type f 2>/dev/null | wc -l > /tmp/analysis_count.txt - read ANALYSIS_COUNT < /tmp/analysis_count.txt -fi -date -u +%G-W%V > /tmp/week_label.txt -read WEEK_LABEL < /tmp/week_label.txt -WEEKLY_DIR="analysis/weekly/$WEEK_LABEL" -if [ -d "$WEEKLY_DIR" ]; then - find "$WEEKLY_DIR" -type f 2>/dev/null | wc -l > /tmp/weekly_count.txt - read WEEKLY_COUNT < /tmp/weekly_count.txt - ANALYSIS_COUNT=$((ANALYSIS_COUNT + WEEKLY_COUNT)) -fi -if [ "$ANALYSIS_COUNT" -gt 0 ]; then - echo "📊 Found $ANALYSIS_COUNT total analysis artifacts — these MUST be committed (do NOT use safeoutputs___noop)" -else - echo "📊 Found 0 analysis artifacts — safeoutputs___noop is allowed (no files to commit)" -fi -``` - -> **🚨 CRITICAL RULE: Never call `safeoutputs___noop` if analysis artifacts exist.** If the pre-article analysis pipeline produced ANY output files, you MUST commit them via `safeoutputs___create_pull_request` — even if no articles are generated. Use an analysis-only PR with title: `📊 Analysis Only - Week Ahead - {date}` and label `analysis-only`. 
Only use `safeoutputs___noop` if the analysis pipeline produced ZERO output files (truly nothing to analyze). - -### 🔬 Step 2b: Read ALL Analysis Files + Cross-Reference Sibling Types (MANDATORY) - -> 🔴 **NON-NEGOTIABLE**: Week-ahead forecasting synthesizes across ALL article types. The AI MUST read ALL analysis files from ALL article types before generating the forecast. See SHARED_PROMPT_PATTERNS.md §"MANDATORY PRE-ARTICLE ANALYSIS READING". - -```bash -ANALYSIS_SUBFOLDER="week-ahead" -ANALYSIS_BASE="analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER" - -echo "📖 Reading ALL analysis files from $ANALYSIS_BASE..." -if [ -d "$ANALYSIS_BASE" ]; then - for MD_FILE in "$ANALYSIS_BASE"/*.md; do - if [ -f "$MD_FILE" ]; then - echo "--- Reading: $MD_FILE ---" - cat "$MD_FILE" - echo "" - fi - done -fi - -echo "🔍 Cross-referencing sibling analysis types for $ARTICLE_DATE..." -for SIBLING_DIR in analysis/daily/$ARTICLE_DATE/*/; do - if [ -d "$SIBLING_DIR" ]; then - echo "$SIBLING_DIR" | sed 's|/$||' | sed 's|.*/||' > /tmp/sibling_type.txt - read SIBLING_TYPE < /tmp/sibling_type.txt - if [ "$SIBLING_TYPE" = "$ANALYSIS_SUBFOLDER" ]; then continue; fi - echo "📖 Cross-referencing: $SIBLING_TYPE" - for SIBLING_FILE in "$SIBLING_DIR/synthesis-summary.md" "$SIBLING_DIR/significance-scoring.md"; do - if [ -f "$SIBLING_FILE" ]; then - echo "--- Sibling ($SIBLING_TYPE): $SIBLING_FILE ---" - cat "$SIBLING_FILE" - echo "" - fi - done - fi -done -echo "✅ Cross-referencing complete — week-ahead MUST incorporate findings from all sibling types" -``` - -### Step 3: Generate Articles - -```bash -# Set LANGUAGES_INPUT to the value shown in Workflow Dispatch Parameters above -LANGUAGES_INPUT="<value from Workflow Dispatch Parameters>" -[ -z "$LANGUAGES_INPUT" ] && LANGUAGES_INPUT="all" - -case "$LANGUAGES_INPUT" in - "nordic") LANG_ARG="en,sv,da,no,fi" ;; - "eu-core") LANG_ARG="en,sv,de,fr,es,nl" ;; - "all") LANG_ARG="en,sv,da,no,fi,de,fr,es,nl,ar,he,ja,ko,zh" ;; - *) 
LANG_ARG="$LANGUAGES_INPUT" ;; -esac - -source scripts/mcp-setup.sh && npx tsx scripts/generate-news-enhanced.ts \ - --types=week-ahead \ - --languages="$LANG_ARG" \ - --skip-existing -``` - -**Article Navigation Verification**: The `generate-news-enhanced.ts` script automatically includes all required navigation elements: -- **Language switcher** (`<nav class="language-switcher">`) after `<body>` with all 14 languages -- **Back-to-news top nav** (`<div class="article-top-nav">`) with localized back link after language switcher -- **Footer back-to-news link** in `<footer class="article-footer">` - -These elements are validated by `bash scripts/validate-news-generation.sh` (Checks 8–10). The fix script is a **fallback only** — do not run it by default: -```bash -# FALLBACK ONLY — use if validate-news-generation.sh reports missing navigation elements -npx tsx scripts/fix-article-navigation.ts -``` - ---- - -## Step 2.6: Economic Data Acquisition (MANDATORY) - -> **Contract**: [`.github/aw/ECONOMIC_DATA_CONTRACT.md`](../aw/ECONOMIC_DATA_CONTRACT.md) — the **single source of truth** for World Bank + SCB data, Chart.js visualisations, and AI commentary. Follow it exactly; the Step 6 quality gate (`scripts/validate-economic-context.ts`) **blocks the PR** if any element is missing. - -**What you MUST do before writing any prose:** - -1. `view analysis/worldbank/indicators-inventory.json` and pick every indicator whose `committees` / `policyAreas` match the day's source documents. -2. Call `world-bank.get-economic-data` / `get-social-data` / `get-health-data` / `get-education-data` for Sweden (10-year series for primary domains) and for DK/NO/FI/DE (5-year series for the top 3 indicators — needed for the Nordic comparison bars and radar). -3. Call `scb.search_tables` + `scb.query_table` using the committee → TAB mapping in `scripts/scb-context.ts`. **`language` MUST be `"sv"` or `"en"` — NEVER `"no"`** (SCB returns HTTP 400 "Unsupported language"). -4. 
Retry every World Bank call up to **3 times** on failure. Cache raw responses under `analysis/data/worldbank/<YYYY>/<indicator>-<country>.json` so later article types in the same daily run reuse the data. -5. Write `analysis/daily/<ARTICLE_DATE>/<ANALYSIS_SUBFOLDER>/economic-data.json` matching `analysis/schemas/economic-data.schema.json`: - -```jsonc -{ - "version": "1.0", - "articleType": "week-ahead", - "date": "<YYYY-MM-DD>", - "policyDomains": ["fiscal policy", "labor market"], - "dataPoints": [ - { "countryCode": "SWE", "countryName": "Sweden", "indicatorId": "NY.GDP.MKTP.KD.ZG", "date": "2024", "value": 0.82 }, - { "countryCode": "DNK", "countryName": "Denmark", "indicatorId": "NY.GDP.MKTP.KD.ZG", "date": "2024", "value": 1.75 } - ], - "commentary": "<will be filled in Step 3d>", - "source": { "worldBank": ["NY.GDP.MKTP.KD.ZG", "FP.CPI.TOTL.ZG"], "scb": ["TAB1291"] } -} -``` - -**Non-negotiable**: `dataPoints` MUST be non-empty. The HTML renderer (`scripts/data-transformers/content-generators/economic-dashboard-section.ts`) emits real Chart.js canvases only when the file exists with entries — otherwise the validator fails the PR. - -**Minimum coverage (enforced by the validator):** see the matrix in `ECONOMIC_DATA_CONTRACT.md` §"Coverage matrix" for this article type's chart count, commentary word minimum, and D3 requirement. - ---- -### Step 3b: AI Title, Meta Description & Analysis References - -> 🚨 **MANDATORY** — After article HTML is generated, the AI MUST improve titles, descriptions, and add analysis references. See `SHARED_PROMPT_PATTERNS.md` sections "AI-DRIVEN TITLE & META DESCRIPTION GENERATION" and "ANALYSIS FILE GITHUB REFERENCES" for full protocols. - -**1. Generate newsworthy titles** — Replace script-generated title with: `[Active Verb] + [Specific Institution] + [Concrete Policy Action]`. BANNED: ❌ generic category labels or ": {Topic} in Focus". - -**2. Generate AI meta descriptions** (150-160 chars) — Key political intelligence summary. 
BANNED: ❌ "Analysis of N documents". - -**3. 🔴 Add analysis references (MANDATORY — VERIFY AFTER)** — Insert "📊 Analysis & Sources" HTML block (from SHARED_PROMPT_PATTERNS.md §ANALYSIS FILE GITHUB REFERENCES) linking to `analysis/daily/$ARTICLE_DATE/week-ahead/` files and `analysis/methodologies/ai-driven-analysis-guide.md`. - -**After inserting, VERIFY** by running: -```bash -for FILE in news/$ARTICLE_DATE-week-ahead-*.html; do - if [ -f "$FILE" ] && ! grep -q 'class="analysis-references"' "$FILE"; then - echo "🔴 MISSING analysis-references in: $FILE — MUST FIX NOW" - fi -done -``` - -**4. Update all metadata** — `<title>`, `<meta name="description">`, `<meta property="og:title">`, `<meta property="og:description">`, and `<h1>`. - -### Step 3c: AI Content Quality Enforcement (v4.0 — MANDATORY) - -> 🚨 **v4.0 CRITICAL**: Week-ahead articles require forward-looking intelligence. Read pre-computed analysis and generate prospective content. See `SHARED_PROMPT_PATTERNS.md` §"AI ARTICLE CONTENT GENERATION" and `ai-driven-analysis-guide.md` v4.0. - -**1. Read pre-computed analysis** — Read analysis from `analysis/daily/$ARTICLE_DATE/week-ahead/`. If synthesis reports "0 documents analyzed", use MCP `get_calendar_events` and `get_betankanden` to populate content directly. - -**2. Generate forward-looking lede** — Week-ahead ledes MUST name specific upcoming events (committee votes, plenary debates, government announcements) with dates and significance. BANNED: empty or generic ledes. - -**3. Generate committee schedule analysis** — For each scheduled committee report debate, explain: what the committee decided, which parties filed reservations, and what the expected plenary vote outcome is. - -**4. Generate government agenda preview** — List upcoming government actions (propositions expected, ministerial meetings, EU engagements) with political significance context. - -**5. 
Replace generic filler** — Remove `"The political landscape remains fluid..."` and replace with specific forward indicators derived from MCP data (e.g., `get_calendar_events`, `get_betankanden`). Each indicator MUST name a real upcoming event, committee, or deadline extracted from the data — e.g., "Watch: `<COMMITTEE>` scheduling `<TOPIC>` follow-up by `<DATE from calendar>`". Do NOT hard-code example dates or event names; always source them from the current week's MCP query results. +# 📅 Week Ahead -**6. 🔴 MANDATORY: Replace ALL `AI_MUST_REPLACE` markers** — Search generated HTML for `<!-- AI_MUST_REPLACE: ... -->` markers in Deep Analysis subsections and replace EACH with specific forward-looking political intelligence. ZERO markers may survive in committed HTML. +Forward-looking 7-day parliamentary calendar + political intelligence brief. Tier-C reference-grade output (14 artifacts). Core languages EN, SV. -**7. Verify document count consistency** — Ensure report counts are consistent across title, lede, body, and key takeaways. Contradictory counts (17 vs 42 vs 16) are REJECTED. +## Tier-C (reference-grade) requirements -**8. Handle Easter/recess periods** — When parliament is in recess, explain what legislation is pending for the return session and what government agencies are acting during recess. +This workflow imports `../prompts/ext/tier-c-aggregation.md`. Produce **all 14 artifacts** (9 core + 5 Tier-C) and cross-reference sibling analyses. See the extension for the full rules. 
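Step 6's zero-marker requirement can be enforced with a check mirroring the existing analysis-references verification loop; a minimal sketch (the exit-on-failure behaviour is an assumption, not a documented part of the pipeline):

```shell
# Hedged sketch: fail the run if any AI_MUST_REPLACE marker survives in generated HTML.
# $ARTICLE_DATE follows the variable naming used by the other verification loops here.
MARKERS_FOUND=0
for FILE in news/$ARTICLE_DATE-week-ahead-*.html; do
  if [ -f "$FILE" ] && grep -q 'AI_MUST_REPLACE' "$FILE"; then
    echo "🔴 Unreplaced AI_MUST_REPLACE marker in: $FILE (must fix now)"
    MARKERS_FOUND=1
  fi
done
if [ "$MARKERS_FOUND" -ne 0 ]; then
  echo "❌ Zero markers may survive in committed HTML"
  exit 1
fi
echo "✅ No AI_MUST_REPLACE markers remain"
```

Run this after the step 6 replacements and before staging, so unreplaced markers never reach the commit.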
-### Step 4: Translate, Validate & Verify Analysis Quality +## What this workflow does -Run analysis references fix, validation, and HTMLHint before creating PR: -```bash -# 🔴 MANDATORY: Inject analysis references into any article missing them -npx tsx scripts/fix-analysis-references.ts --date "$ARTICLE_DATE" --rewrite --type week-ahead +- **Article type**: `week-ahead` +- **Analysis subfolder**: `analysis/daily/$ARTICLE_DATE/week-ahead/` +- **Core languages produced**: `en`, `sv` (remaining 12 languages dispatched to `news-translate`) +- **One pull request per run** containing analysis + articles + visualisation data. -bash scripts/validate-news-generation.sh -VALIDATION_EXIT=$? -if [ "$VALIDATION_EXIT" -ne 0 ]; then - echo "❌ News generation validation failed. Fix the reported issues before creating a PR." - exit "$VALIDATION_EXIT" -fi +## Time budget (60 min, minimum 45 min of real work) -# HTMLHint validation with auto-fix for common nesting errors -find news -maxdepth 1 -name '*-*.html' 2>/dev/null | wc -l > /tmp/news_count.txt -read NEWS_FILES < /tmp/news_count.txt -if [ "$NEWS_FILES" -gt 0 ]; then - if ! npx htmlhint "news/*-*.html" 2>/dev/null; then - echo "⚠️ HTML validation errors found, attempting auto-fix..." - npx tsx scripts/article-quality-enhancer.ts --fix - if ! npx htmlhint "news/*-*.html"; then - echo "❌ HTML validation failed after auto-fix. Please fix remaining HTMLHint errors before creating a PR." 
- exit 1 - fi - fi -fi -# Playwright visual validation (accessibility, RTL, responsive) -npx tsx scripts/validate-articles-playwright.ts --filter "week-ahead" +| Minutes | Phase | Module | +|---------|-------|--------| +| 0–2 | MCP pre-warm + `get_sync_status` | 02 | +| 2–6 | Download data + catalogue | 03 | +| 6–25 | Analysis Pass 1 (methodology read + per-doc analyses + 9 artifacts) | 04 | +| 25–35 | Analysis Pass 2 (read-back + improvements) | 04 | +| 35–37 | Analysis Gate | 05 | +| 37–48 | Article Pass 1 + Pass 2 (EN, SV) | 06 | +| 48–55 | Visual + link validation | 06 | +| 55–60 | Stage, commit, **ONE** `safeoutputs___create_pull_request` | 07 | -# Validate JSON-LD cross-references -npx tsx scripts/validate-cross-references.ts news/*-week-ahead-*.html -``` +Trim scope before cutting quality. Never open a second PR to "save" partial work — there is no second PR. -**CRITICAL: Each article MUST contain real analysis, not just a list of translated event titles.** -Every generated article must include: -- A "Why This Week Matters" context box with political significance analysis -- Key Events section with interpretive commentary (not just time/title) -- "What to Watch" forward-looking analysis with implications -- Political context connecting events to broader legislative trends +## Inputs -If the generated article lacks analysis, manually add contextual commentary before committing. +- `article_date` — override date (defaults to today) +- `force_generation` — regenerate even if today's article exists (analysis is always refreshed regardless) +- `languages` — core content languages (default `en,sv`) +- `analysis_depth` — `standard` | `deep` (default) | `comprehensive` -**Note**: News index files, metadata, and sitemap are generated automatically at build time by the `prebuild` script. Do NOT run generation scripts or commit their output — only commit the article HTML files.
Run `npm run prebuild` (or `npm run build`) locally if you need to validate or preview the generated index, metadata, or sitemap outputs on a fresh checkout where these files will not exist. +## Dedup & analysis-only path -## 🌐 Translation Quality +If articles for `$ARTICLE_DATE` + `week-ahead` already exist **and** `force_generation=false`: -EN/SV only: all headings, meta, content in correct language; no untranslated `data-translate` spans; Swedish API titles translated. Full rules: `news-translate.md`. -## Article Naming Convention -Files: `YYYY-MM-DD-week-ahead-{lang}.html` +- Still run the full analysis pipeline (modules 03 → 04 → 05). +- Commit the analysis. +- Open the single PR with title `📊 Analysis Only — Week Ahead — $ARTICLE_DATE` and label `analysis-only`. -## Step 3d: Economic Commentary (MANDATORY) +Analysis is the primary product — a run never "does nothing" just because articles exist. -> After Step 3c and **before** calling `safeoutputs.create_pull_request`, re-open `economic-data.json` and replace the placeholder `commentary` string with a 2–4 sentence paragraph that: -> - cites **2–3 concrete numeric values** from `dataPoints`; -> - ties the numbers to the day's political developments (not definitions of indicators); -> - is written in plain English (translations are produced downstream by `news-translate`); -> - meets the minimum word count in the coverage matrix for this article type. -> -> Banned phrasings (the multi-dim quality score flags these): "The political landscape remains fluid…", "Touches on X policy…", pure indicator definitions. -> -> Full rules: [`.github/aw/ECONOMIC_DATA_CONTRACT.md`](../aw/ECONOMIC_DATA_CONTRACT.md) §"Writing the AI commentary — workflow Step 3d". +All other rules (bash format, AWF shell safety, MCP access, download pipeline, analysis methodology & gate, article generation, commit & PR policy) live in the imported modules. 
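The dedup decision can be sketched in the workflow's own shell style; the `FORCE_GENERATION` variable name and the non-dedup PR title are assumptions (only the analysis-only title is specified above):

```shell
# Hedged sketch of the dedup branch. FORCE_GENERATION / ARTICLE_DATE names are assumed;
# the fallback PR title is illustrative (only the analysis-only title is specified).
FORCE_GENERATION="${FORCE_GENERATION:-false}"
ARTICLES_EXIST=false
for FILE in news/$ARTICLE_DATE-week-ahead-*.html; do
  if [ -f "$FILE" ]; then
    ARTICLES_EXIST=true
    break
  fi
done
if [ "$ARTICLES_EXIST" = "true" ] && [ "$FORCE_GENERATION" = "false" ]; then
  PR_TITLE="📊 Analysis Only — Week Ahead — $ARTICLE_DATE"
  echo "ℹ️ Articles already exist: running analysis-only path"
else
  PR_TITLE="📰 Week Ahead — $ARTICLE_DATE"
fi
echo "PR title: $PR_TITLE"
```

Note that both branches still run the full analysis pipeline (modules 03 → 05); only the PR title, label, and article-generation step differ.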
diff --git a/.github/workflows/news-weekly-review.lock.yml b/.github/workflows/news-weekly-review.lock.yml index fd8093109..6e6ab0aa5 100644 --- a/.github/workflows/news-weekly-review.lock.yml +++ b/.github/workflows/news-weekly-review.lock.yml @@ -1,4 +1,4 @@ -# gh-aw-metadata: {"schema_version":"v3","frontmatter_hash":"55feea70d1ae59137abb34e584dcdb12c31c8038c34cd08a5e9db2fef58141d5","compiler_version":"v0.69.2","agent_id":"copilot","agent_model":"claude-opus-4.7"} +# gh-aw-metadata: {"schema_version":"v3","frontmatter_hash":"78ea852ee56c1208c9a62365f934f9f5bd84002030f83d834ccea70d568ded88","compiler_version":"v0.69.2","agent_id":"copilot","agent_model":"claude-opus-4.7"} # gh-aw-manifest: {"version":1,"secrets":["COPILOT_GITHUB_TOKEN","GH_AW_CI_TRIGGER_TOKEN","GH_AW_GITHUB_MCP_SERVER_TOKEN","GH_AW_GITHUB_TOKEN","GITHUB_TOKEN"],"actions":[{"repo":"actions/checkout","sha":"de0fac2e4500dabe0009e67214ff5f5447ce83dd","version":"v6.0.2"},{"repo":"actions/download-artifact","sha":"3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c","version":"v8.0.1"},{"repo":"actions/github-script","sha":"373c709c69115d41ff229c7e5df9f8788daa9553","version":"v9"},{"repo":"actions/setup-node","sha":"6044e13b5dc448c55e2357c09f80417699197238","version":"6044e13b5dc448c55e2357c09f80417699197238"},{"repo":"actions/upload-artifact","sha":"043fb46d1a93c77aae656e7c1c64a875d1fc6a0a","version":"v7.0.1"},{"repo":"github/gh-aw-actions/setup","sha":"dca90cae5e2ec0ef2275f97efcb832793c86e082","version":"v0.69.2"}],"containers":[{"image":"alpine:latest"},{"image":"ghcr.io/github/gh-aw-firewall/agent:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/api-proxy:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/squid:0.25.26"},{"image":"ghcr.io/github/gh-aw-mcpg:v0.2.26"},{"image":"ghcr.io/github/github-mcp-server:v1.0.0"},{"image":"node:25-alpine"},{"image":"node:lts-alpine"}]} # ___ _ _ # / _ \ | | (_) @@ -24,6 +24,18 @@ # # Generates weekly review retrospective articles in core languages (EN, SV). 
Translations handled by news-translate workflow. Runs Saturdays to review the past week. # +# Resolved workflow manifest: +# Imports: +# - ../prompts/00-base-contract.md +# - ../prompts/01-bash-and-shell-safety.md +# - ../prompts/02-mcp-access.md +# - ../prompts/03-data-download.md +# - ../prompts/04-analysis-pipeline.md +# - ../prompts/05-analysis-gate.md +# - ../prompts/06-article-generation.md +# - ../prompts/07-commit-and-pr.md +# - ../prompts/ext/tier-c-aggregation.md +# # Secrets used: # - COPILOT_GITHUB_TOKEN # - GH_AW_CI_TRIGGER_TOKEN @@ -184,14 +196,9 @@ jobs: env: GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt GH_AW_SAFE_OUTPUTS: ${{ runner.temp }}/gh-aw/safeoutputs/outputs.jsonl - GH_AW_EXPR_731DE217: ${{ github.event.inputs.force_generation || 'false' }} GH_AW_GITHUB_ACTOR: ${{ github.actor }} GH_AW_GITHUB_EVENT_COMMENT_ID: ${{ github.event.comment.id }} GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER: ${{ github.event.discussion.number }} - GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH: ${{ github.event.inputs.analysis_depth }} - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE: ${{ github.event.inputs.article_date }} - GH_AW_GITHUB_EVENT_INPUTS_FORCE_GENERATION: ${{ github.event.inputs.force_generation }} - GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES: ${{ github.event.inputs.languages }} GH_AW_GITHUB_EVENT_ISSUE_NUMBER: ${{ github.event.issue.number }} GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: ${{ github.event.pull_request.number }} GH_AW_GITHUB_REPOSITORY: ${{ github.repository }} @@ -202,21 +209,21 @@ jobs: run: | bash "${RUNNER_TEMP}/gh-aw/actions/create_prompt_first.sh" { - cat << 'GH_AW_PROMPT_8620ed057cfc3232_EOF' + cat << 'GH_AW_PROMPT_e4ccc1c76dd1cbd7_EOF' <system> - GH_AW_PROMPT_8620ed057cfc3232_EOF + GH_AW_PROMPT_e4ccc1c76dd1cbd7_EOF cat "${RUNNER_TEMP}/gh-aw/prompts/xpia.md" cat "${RUNNER_TEMP}/gh-aw/prompts/temp_folder_prompt.md" cat "${RUNNER_TEMP}/gh-aw/prompts/markdown.md" cat "${RUNNER_TEMP}/gh-aw/prompts/agentic_workflows_guide.md" cat 
"${RUNNER_TEMP}/gh-aw/prompts/repo_memory_prompt.md" cat "${RUNNER_TEMP}/gh-aw/prompts/safe_outputs_prompt.md" - cat << 'GH_AW_PROMPT_8620ed057cfc3232_EOF' + cat << 'GH_AW_PROMPT_e4ccc1c76dd1cbd7_EOF' <safe-output-tools> - Tools: add_comment, create_pull_request(max:2), dispatch_workflow, missing_tool, missing_data, noop - GH_AW_PROMPT_8620ed057cfc3232_EOF + Tools: add_comment, create_pull_request, dispatch_workflow, missing_tool, missing_data, noop + GH_AW_PROMPT_e4ccc1c76dd1cbd7_EOF cat "${RUNNER_TEMP}/gh-aw/prompts/safe_outputs_create_pull_request.md" - cat << 'GH_AW_PROMPT_8620ed057cfc3232_EOF' + cat << 'GH_AW_PROMPT_e4ccc1c76dd1cbd7_EOF' </safe-output-tools> <github-context> The following GitHub context information is available for this workflow: @@ -246,22 +253,26 @@ jobs: {{/if}} </github-context> - GH_AW_PROMPT_8620ed057cfc3232_EOF + GH_AW_PROMPT_e4ccc1c76dd1cbd7_EOF cat "${RUNNER_TEMP}/gh-aw/prompts/github_mcp_tools_with_safeoutputs_prompt.md" - cat << 'GH_AW_PROMPT_8620ed057cfc3232_EOF' + cat << 'GH_AW_PROMPT_e4ccc1c76dd1cbd7_EOF' </system> + {{#runtime-import .github/prompts/00-base-contract.md}} + {{#runtime-import .github/prompts/01-bash-and-shell-safety.md}} + {{#runtime-import .github/prompts/02-mcp-access.md}} + {{#runtime-import .github/prompts/03-data-download.md}} + {{#runtime-import .github/prompts/04-analysis-pipeline.md}} + {{#runtime-import .github/prompts/05-analysis-gate.md}} + {{#runtime-import .github/prompts/06-article-generation.md}} + {{#runtime-import .github/prompts/07-commit-and-pr.md}} + {{#runtime-import .github/prompts/ext/tier-c-aggregation.md}} {{#runtime-import .github/workflows/news-weekly-review.md}} - GH_AW_PROMPT_8620ed057cfc3232_EOF + GH_AW_PROMPT_e4ccc1c76dd1cbd7_EOF } > "$GH_AW_PROMPT" - name: Interpolate variables and render templates uses: actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9 env: GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt - GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH: ${{ 
github.event.inputs.analysis_depth }} - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE: ${{ github.event.inputs.article_date }} - GH_AW_EXPR_731DE217: ${{ github.event.inputs.force_generation || 'false' }} - GH_AW_GITHUB_EVENT_INPUTS_FORCE_GENERATION: ${{ github.event.inputs.force_generation }} - GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES: ${{ github.event.inputs.languages }} with: script: | const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); @@ -272,14 +283,9 @@ jobs: uses: actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9 env: GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt - GH_AW_EXPR_731DE217: ${{ github.event.inputs.force_generation || 'false' }} GH_AW_GITHUB_ACTOR: ${{ github.actor }} GH_AW_GITHUB_EVENT_COMMENT_ID: ${{ github.event.comment.id }} GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER: ${{ github.event.discussion.number }} - GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH: ${{ github.event.inputs.analysis_depth }} - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE: ${{ github.event.inputs.article_date }} - GH_AW_GITHUB_EVENT_INPUTS_FORCE_GENERATION: ${{ github.event.inputs.force_generation }} - GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES: ${{ github.event.inputs.languages }} GH_AW_GITHUB_EVENT_ISSUE_NUMBER: ${{ github.event.issue.number }} GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: ${{ github.event.pull_request.number }} GH_AW_GITHUB_REPOSITORY: ${{ github.repository }} @@ -302,14 +308,9 @@ jobs: return await substitutePlaceholders({ file: process.env.GH_AW_PROMPT, substitutions: { - GH_AW_EXPR_731DE217: process.env.GH_AW_EXPR_731DE217, GH_AW_GITHUB_ACTOR: process.env.GH_AW_GITHUB_ACTOR, GH_AW_GITHUB_EVENT_COMMENT_ID: process.env.GH_AW_GITHUB_EVENT_COMMENT_ID, GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER: process.env.GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER, - GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH: process.env.GH_AW_GITHUB_EVENT_INPUTS_ANALYSIS_DEPTH, - GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE: process.env.GH_AW_GITHUB_EVENT_INPUTS_ARTICLE_DATE, - 
GH_AW_GITHUB_EVENT_INPUTS_FORCE_GENERATION: process.env.GH_AW_GITHUB_EVENT_INPUTS_FORCE_GENERATION, - GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES: process.env.GH_AW_GITHUB_EVENT_INPUTS_LANGUAGES, GH_AW_GITHUB_EVENT_ISSUE_NUMBER: process.env.GH_AW_GITHUB_EVENT_ISSUE_NUMBER, GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: process.env.GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER, GH_AW_GITHUB_REPOSITORY: process.env.GH_AW_GITHUB_REPOSITORY, @@ -411,7 +412,7 @@ jobs: run: | npm ci --prefer-offline --no-audit - name: Pre-warm MCP server (Render.com cold start mitigation) - run: "echo \"🔥 Pre-warming riksdag-regering MCP server via MCP protocol...\"\nMCP_URL=\"https://riksdag-regering-ai.onrender.com/mcp\"\nWARM=false\nfor i in 1 2 3 4 5 6; do\n RESP=$(curl -sf --max-time 30 -X POST \\\n -H \"Content-Type: application/json\" \\\n -d '{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\",\"params\":{}}' \\\n \"$MCP_URL\" 2>/dev/null) || true\n if echo \"$RESP\" | grep -q '\"tools\"'; then\n TOOL_COUNT=$(echo \"$RESP\" | grep -o '\"name\"' | wc -l)\n echo \"✅ MCP server responded on attempt $i with $TOOL_COUNT tools registered\"\n WARM=true\n break\n fi\n echo \"⏳ Attempt $i/6 — server may be cold-starting, waiting 20s...\"\n sleep 20\ndone\nif [ \"$WARM\" = \"false\" ]; then\n echo \"⚠️ MCP server did not respond after 6 attempts — agent will retry via in-prompt health gate\"\nfi\necho \"🔄 Starting background keep-alive pinger (every 30s, max 55 min — covers full 60-min workflow through safe-output PR creation)...\"\nKEEP_ALIVE_START=$(date +%s)\nKEEP_ALIVE_END=$((KEEP_ALIVE_START + 3300))\nexport MCP_URL KEEP_ALIVE_END\nnohup bash -c '\n while :; do\n NOW=$(date +%s)\n if [ \"$NOW\" -ge \"$KEEP_ALIVE_END\" ]; then\n break\n fi\n curl -sf --max-time 10 -X POST \\\n -H \"Content-Type: application/json\" \\\n -d \"{\\\"jsonrpc\\\":\\\"2.0\\\",\\\"id\\\":1,\\\"method\\\":\\\"tools/list\\\",\\\"params\\\":{}}\" \\\n \"$MCP_URL\" -o /dev/null 2>/dev/null || true\n sleep 30\n done\n' </dev/null 
>/tmp/mcp-keepalive.log 2>&1 &\nKEEP_ALIVE_PID=$!\ndisown \"$KEEP_ALIVE_PID\" 2>/dev/null || true\necho \"Keep-alive PID: $KEEP_ALIVE_PID (auto-exits after 55 min; log: /tmp/mcp-keepalive.log)\"\n" + run: "echo \"🔥 Pre-warming riksdag-regering MCP server via MCP protocol...\"\nMCP_URL=\"https://riksdag-regering-ai.onrender.com/mcp\"\nWARM=false\nfor i in 1 2 3 4 5 6; do\n RESP=$(curl -sf --max-time 30 -X POST \\\n -H \"Content-Type: application/json\" \\\n -d '{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\",\"params\":{}}' \\\n \"$MCP_URL\" 2>/dev/null) || true\n if echo \"$RESP\" | grep -q '\"tools\"'; then\n TOOL_COUNT=$(echo \"$RESP\" | grep -o '\"name\"' | wc -l)\n echo \"✅ MCP server responded on attempt $i with $TOOL_COUNT tools registered\"\n WARM=true\n break\n fi\n echo \"⏳ Attempt $i/6 — server may be cold-starting, waiting 20s...\"\n sleep 20\ndone\nif [ \"$WARM\" = \"false\" ]; then\n echo \"⚠️ MCP server did not respond after 6 attempts — agent will retry via in-prompt health gate\"\nfi\n" - name: Pre-flight external endpoint reachability check (runs before MCP Gateway) run: "echo \"🔍 Network Diagnostics — $(date -u '+%Y-%m-%dT%H:%M:%SZ')\"\necho \"═══════════════════════════════════════════\"\necho \"\"\necho \"📡 DNS Resolution Tests:\"\nfor domain in riksdag-regering-ai.onrender.com api.scb.se api.worldbank.org data.riksdagen.se www.riksdagen.se www.regeringen.se; do\n if nslookup \"$domain\" >/dev/null 2>&1; then\n IP=$(nslookup \"$domain\" 2>/dev/null | grep -A1 \"Name:\" | grep \"Address:\" | head -1 | awk '{print $2}')\n echo \" ✅ $domain → $IP\"\n else\n echo \" ❌ $domain — DNS FAILED\"\n fi\ndone\necho \"\"\necho \"🌐 HTTPS Connectivity Tests:\"\nfor url in \\\n \"https://riksdag-regering-ai.onrender.com/mcp\" \\\n \"https://api.scb.se/OV0104/v2beta\" \\\n \"https://api.worldbank.org/v2/country/SE?format=json\" \\\n \"https://data.riksdagen.se/dokumentlista/?sok=test&doktyp=bet&utformat=json&a=1\" \\\n; do\n HTTP_CODE=$(curl -s -o 
/dev/null -w \"%{http_code}\" --max-time 10 \"$url\" 2>/dev/null || echo \"000\")\n DOMAIN=$(echo \"$url\" | sed 's|https://||' | cut -d/ -f1)\n if [ \"$HTTP_CODE\" -ge 200 ] && [ \"$HTTP_CODE\" -lt 400 ]; then\n echo \" ✅ $DOMAIN → HTTP $HTTP_CODE\"\n elif [ \"$HTTP_CODE\" = \"000\" ]; then\n echo \" ❌ $DOMAIN → TIMEOUT/UNREACHABLE\"\n else\n echo \" ⚠️ $DOMAIN → HTTP $HTTP_CODE\"\n fi\ndone\necho \"\"\necho \"🔌 MCP Server Tool Count:\"\nTOOL_RESP=$(curl -sf --max-time 15 -X POST \\\n -H \"Content-Type: application/json\" \\\n -d '{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\",\"params\":{}}' \\\n \"https://riksdag-regering-ai.onrender.com/mcp\" 2>/dev/null) || TOOL_RESP=\"\"\nif echo \"$TOOL_RESP\" | grep -q '\"tools\"'; then\n TOOL_COUNT=$(echo \"$TOOL_RESP\" | grep -o '\"name\"' | wc -l)\n echo \" ✅ riksdag-regering MCP: $TOOL_COUNT tools registered\"\nelse\n echo \" ❌ riksdag-regering MCP: No tools response (server may still be starting)\"\nfi\necho \"\"\necho \"═══════════════════════════════════════════\"\n" @@ -499,16 +500,16 @@ jobs: mkdir -p "${RUNNER_TEMP}/gh-aw/safeoutputs" mkdir -p /tmp/gh-aw/safeoutputs mkdir -p /tmp/gh-aw/mcp-logs/safeoutputs - cat > "${RUNNER_TEMP}/gh-aw/safeoutputs/config.json" << 'GH_AW_SAFE_OUTPUTS_CONFIG_438c56e327f38aab_EOF' - 
{"add_comment":{"max":1},"create_pull_request":{"draft":false,"expires":336,"labels":["agentic-news","analysis-data"],"max":2,"max_patch_size":4096,"protected_files":["package.json","bun.lockb","bunfig.toml","deno.json","deno.jsonc","deno.lock","global.json","NuGet.Config","Directory.Packages.props","mix.exs","mix.lock","go.mod","go.sum","stack.yaml","stack.yaml.lock","pom.xml","build.gradle","build.gradle.kts","settings.gradle","settings.gradle.kts","gradle.properties","package-lock.json","yarn.lock","pnpm-lock.yaml","npm-shrinkwrap.json","requirements.txt","Pipfile","Pipfile.lock","pyproject.toml","setup.py","setup.cfg","Gemfile","Gemfile.lock","uv.lock","CODEOWNERS","AGENTS.md","CLAUDE.md","GEMINI.md"],"protected_path_prefixes":[".github/",".agents/"]},"create_report_incomplete_issue":{},"dispatch_workflow":{"aw_context_workflows":["news-translate"],"max":1,"workflow_files":{"news-translate":".lock.yml"},"workflows":["news-translate"]},"missing_data":{},"missing_tool":{},"noop":{"max":1,"report-as-issue":"true"},"push_repo_memory":{"memories":[{"dir":"/tmp/gh-aw/repo-memory/default","id":"default","max_file_count":50,"max_file_size":51200,"max_patch_size":51200}]},"report_incomplete":{}} - GH_AW_SAFE_OUTPUTS_CONFIG_438c56e327f38aab_EOF + cat > "${RUNNER_TEMP}/gh-aw/safeoutputs/config.json" << 'GH_AW_SAFE_OUTPUTS_CONFIG_dba07803c934c7ab_EOF' + 
{"add_comment":{"max":1},"create_pull_request":{"draft":false,"expires":336,"labels":["agentic-news","analysis-data"],"max":1,"max_patch_size":4096,"protected_files":["package.json","bun.lockb","bunfig.toml","deno.json","deno.jsonc","deno.lock","global.json","NuGet.Config","Directory.Packages.props","mix.exs","mix.lock","go.mod","go.sum","stack.yaml","stack.yaml.lock","pom.xml","build.gradle","build.gradle.kts","settings.gradle","settings.gradle.kts","gradle.properties","package-lock.json","yarn.lock","pnpm-lock.yaml","npm-shrinkwrap.json","requirements.txt","Pipfile","Pipfile.lock","pyproject.toml","setup.py","setup.cfg","Gemfile","Gemfile.lock","uv.lock","CODEOWNERS","AGENTS.md","CLAUDE.md","GEMINI.md"],"protected_path_prefixes":[".github/",".agents/"]},"create_report_incomplete_issue":{},"dispatch_workflow":{"aw_context_workflows":["news-translate"],"max":1,"workflow_files":{"news-translate":".lock.yml"},"workflows":["news-translate"]},"missing_data":{},"missing_tool":{},"noop":{"max":1,"report-as-issue":"true"},"push_repo_memory":{"memories":[{"dir":"/tmp/gh-aw/repo-memory/default","id":"default","max_file_count":50,"max_file_size":51200,"max_patch_size":51200}]},"report_incomplete":{}} + GH_AW_SAFE_OUTPUTS_CONFIG_dba07803c934c7ab_EOF - name: Write Safe Outputs Tools env: GH_AW_TOOLS_META_JSON: | { "description_suffixes": { "add_comment": " CONSTRAINTS: Maximum 1 comment(s) can be added. Supports reply_to_id for discussion threading.", - "create_pull_request": " CONSTRAINTS: Maximum 2 pull request(s) can be created. Labels [\"agentic-news\" \"analysis-data\"] will be automatically added." + "create_pull_request": " CONSTRAINTS: Maximum 1 pull request(s) can be created. Labels [\"agentic-news\" \"analysis-data\"] will be automatically added." 
}, "repo_params": {}, "dynamic_tools": [ @@ -772,7 +773,7 @@ jobs: mkdir -p /home/runner/.copilot GH_AW_NODE=$(which node 2>/dev/null || command -v node 2>/dev/null || echo node) - cat << GH_AW_MCP_CONFIG_19ff0bf9e7cec41c_EOF | "$GH_AW_NODE" "${RUNNER_TEMP}/gh-aw/actions/start_mcp_gateway.cjs" + cat << GH_AW_MCP_CONFIG_a25c83c6cb6ff8b2_EOF | "$GH_AW_NODE" "${RUNNER_TEMP}/gh-aw/actions/start_mcp_gateway.cjs" { "mcpServers": { "agenticworkflows": { @@ -888,7 +889,7 @@ jobs: "payloadDir": "${MCP_GATEWAY_PAYLOAD_DIR}" } } - GH_AW_MCP_CONFIG_19ff0bf9e7cec41c_EOF + GH_AW_MCP_CONFIG_a25c83c6cb6ff8b2_EOF - name: Download activation artifact uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1 with: @@ -1575,7 +1576,7 @@ jobs: GH_AW_ALLOWED_DOMAINS: "*.githubusercontent.com,api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.imf.org,api.individual.githubcopilot.com,api.npms.io,api.scb.se,api.snapcraft.io,api.worldbank.org,archive.ubuntu.com,azure.archive.ubuntu.com,bun.sh,cdn.jsdelivr.net,codeload.github.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,data.imf.org,data.riksdagen.se,deb.nodesource.com,deno.land,docs.github.com,esm.sh,get.pnpm.io,github-cloud.githubusercontent.com,github-cloud.s3.amazonaws.com,github.blog,github.com,github.githubassets.com,googleapis.deno.dev,googlechromelabs.github.io,hack23.com,hack23.github.io,host.docker.internal,json-schema.org,json.schemastore.org,jsr.io,keyserver.ubuntu.com,lfs.github.com,localhost,nodejs.org,npm.pkg.github.com,npmjs.com,npmjs.org,objects.githubusercontent.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubuserconten
t.com,regeringen.se,registry.bower.io,registry.npmjs.com,registry.npmjs.org,registry.yarnpkg.com,repo.yarnpkg.com,riksdag-regering-ai.onrender.com,riksdagen.se,riksdagsmonitor.com,s.symcb.com,s.symcd.com,security.ubuntu.com,skimdb.npmjs.com,storage.googleapis.com,telemetry.enterprise.githubcopilot.com,telemetry.vercel.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com,www.googleapis.com,www.hack23.com,www.imf.org,www.npmjs.com,www.npmjs.org,www.regeringen.se,www.riksdagen.se,www.riksdagsmonitor.com,www.scb.se,yarnpkg.com" GITHUB_SERVER_URL: ${{ github.server_url }} GITHUB_API_URL: ${{ github.api_url }} - GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: "{\"add_comment\":{\"max\":1},\"create_pull_request\":{\"draft\":false,\"expires\":336,\"labels\":[\"agentic-news\",\"analysis-data\"],\"max\":2,\"max_patch_size\":4096,\"protected_files\":[\"package.json\",\"bun.lockb\",\"bunfig.toml\",\"deno.json\",\"deno.jsonc\",\"deno.lock\",\"global.json\",\"NuGet.Config\",\"Directory.Packages.props\",\"mix.exs\",\"mix.lock\",\"go.mod\",\"go.sum\",\"stack.yaml\",\"stack.yaml.lock\",\"pom.xml\",\"build.gradle\",\"build.gradle.kts\",\"settings.gradle\",\"settings.gradle.kts\",\"gradle.properties\",\"package-lock.json\",\"yarn.lock\",\"pnpm-lock.yaml\",\"npm-shrinkwrap.json\",\"requirements.txt\",\"Pipfile\",\"Pipfile.lock\",\"pyproject.toml\",\"setup.py\",\"setup.cfg\",\"Gemfile\",\"Gemfile.lock\",\"uv.lock\",\"CODEOWNERS\",\"AGENTS.md\",\"CLAUDE.md\",\"GEMINI.md\"],\"protected_path_prefixes\":[\".github/\",\".agents/\"]},\"create_report_incomplete_issue\":{},\"dispatch_workflow\":{\"aw_context_workflows\":[\"news-translate\"],\"max\":1,\"workflow_files\":{\"news-translate\":\".lock.yml\"},\"workflows\":[\"news-translate\"]},\"missing_data\":{},\"missing_tool\":{},\"noop\":{\"max\":1,\"report-as-issue\":\"true\"},\"report_incomplete\":{}}" + GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: 
"{\"add_comment\":{\"max\":1},\"create_pull_request\":{\"draft\":false,\"expires\":336,\"labels\":[\"agentic-news\",\"analysis-data\"],\"max\":1,\"max_patch_size\":4096,\"protected_files\":[\"package.json\",\"bun.lockb\",\"bunfig.toml\",\"deno.json\",\"deno.jsonc\",\"deno.lock\",\"global.json\",\"NuGet.Config\",\"Directory.Packages.props\",\"mix.exs\",\"mix.lock\",\"go.mod\",\"go.sum\",\"stack.yaml\",\"stack.yaml.lock\",\"pom.xml\",\"build.gradle\",\"build.gradle.kts\",\"settings.gradle\",\"settings.gradle.kts\",\"gradle.properties\",\"package-lock.json\",\"yarn.lock\",\"pnpm-lock.yaml\",\"npm-shrinkwrap.json\",\"requirements.txt\",\"Pipfile\",\"Pipfile.lock\",\"pyproject.toml\",\"setup.py\",\"setup.cfg\",\"Gemfile\",\"Gemfile.lock\",\"uv.lock\",\"CODEOWNERS\",\"AGENTS.md\",\"CLAUDE.md\",\"GEMINI.md\"],\"protected_path_prefixes\":[\".github/\",\".agents/\"]},\"create_report_incomplete_issue\":{},\"dispatch_workflow\":{\"aw_context_workflows\":[\"news-translate\"],\"max\":1,\"workflow_files\":{\"news-translate\":\".lock.yml\"},\"workflows\":[\"news-translate\"]},\"missing_data\":{},\"missing_tool\":{},\"noop\":{\"max\":1,\"report-as-issue\":\"true\"},\"report_incomplete\":{}}" GH_AW_CI_TRIGGER_TOKEN: ${{ secrets.GH_AW_CI_TRIGGER_TOKEN }} with: github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }} diff --git a/.github/workflows/news-weekly-review.md b/.github/workflows/news-weekly-review.md index 417547fe2..41e734f8c 100644 --- a/.github/workflows/news-weekly-review.md +++ b/.github/workflows/news-weekly-review.md @@ -2,6 +2,16 @@ name: "News: Weekly Review" description: Generates weekly review retrospective articles in core languages (EN, SV). Translations handled by news-translate workflow. Runs Saturdays to review the past week. 
strict: false +imports: + - ../prompts/00-base-contract.md + - ../prompts/01-bash-and-shell-safety.md + - ../prompts/02-mcp-access.md + - ../prompts/03-data-download.md + - ../prompts/04-analysis-pipeline.md + - ../prompts/05-analysis-gate.md + - ../prompts/06-article-generation.md + - ../prompts/07-commit-and-pr.md + - ../prompts/ext/tier-c-aggregation.md on: schedule: weekly on saturday around 9:00 workflow_dispatch: @@ -119,7 +129,7 @@ safe-outputs: labels: [agentic-news, analysis-data] draft: false expires: 14d - max: 2 + max: 1 add-comment: {} dispatch-workflow: workflows: [news-translate] @@ -157,26 +167,6 @@ steps: if [ "$WARM" = "false" ]; then echo "⚠️ MCP server did not respond after 6 attempts — agent will retry via in-prompt health gate" fi - echo "🔄 Starting background keep-alive pinger (every 30s, max 55 min — covers full 60-min workflow through safe-output PR creation)..." - KEEP_ALIVE_START=$(date +%s) - KEEP_ALIVE_END=$((KEEP_ALIVE_START + 3300)) - export MCP_URL KEEP_ALIVE_END - nohup bash -c ' - while :; do - NOW=$(date +%s) - if [ "$NOW" -ge "$KEEP_ALIVE_END" ]; then - break - fi - curl -sf --max-time 10 -X POST \ - -H "Content-Type: application/json" \ - -d "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\",\"params\":{}}" \ - "$MCP_URL" -o /dev/null 2>/dev/null || true - sleep 30 - done - ' </dev/null >/tmp/mcp-keepalive.log 2>&1 & - KEEP_ALIVE_PID=$! - disown "$KEEP_ALIVE_PID" 2>/dev/null || true - echo "Keep-alive PID: $KEEP_ALIVE_PID (auto-exits after 55 min; log: /tmp/mcp-keepalive.log)" - name: Pre-flight external endpoint reachability check (runs before MCP Gateway) run: | @@ -230,559 +220,51 @@ engine: model: claude-opus-4.7 --- -# 📊 Weekly Review Article Generator - -You are the **News Journalist Agent** for Riksdagsmonitor generating **weekly review** retrospective articles. 
- -## 🔴 CRITICAL: AI Writes ALL Content with Iterative Improvement (v5.0) - -> **You are a political intelligence analyst producing comprehensive weekly retrospective analysis.** Your PRIMARY job is to produce excellent quality political intelligence through iterative improvement. You MUST: -> 1. **ANALYZE** the full week's parliamentary activity with deep synthesis across all document types -> 2. **WRITE** genuine intelligence with trend analysis, SWOT, stakeholder impacts, and strategic context -> 3. **ITERATE** — read ALL your output back completely and IMPROVE every section (minimum 2 full passes) -> 4. **SPEND THE FULL TIME** — use at least 40 of the 45 allocated minutes doing real work -> -> 🔴 **2+ PASSES MANDATORY**: Analysis Pass 1 (15 min) → Analysis Pass 2 improvement (7 min) → Article Pass 1 (10 min) → Article Pass 2 improvement (8 min). NEVER complete early. - -## 🔧 Workflow Dispatch Parameters - -- **force_generation** = `${{ github.event.inputs.force_generation }}` -- **languages** = `${{ github.event.inputs.languages }}` -- **analysis_depth** = `${{ github.event.inputs.analysis_depth }}` - -If **force_generation** is `true`, generate articles even if recent ones exist. Use the **languages** value to determine which languages to generate. - -## 🚨 CRITICAL: Single Article Type Focus - -**This workflow generates ONLY `weekly-review` articles.** Do not generate other article types. - -This is a **retrospective** article analyzing the past 7 days of parliamentary activity — votes completed, committee decisions made, government announcements issued, and legislative developments during the week. - -## 🧠 Repo Memory - -Uses `memory/news-generation` branch. START: read `memory/news-generation/last-run-news-weekly-review.json` + `memory/news-generation/covered-documents/{YYYY-MM-DD}.json`. END: update both + `memory/news-generation/translation-status.json`. Skip already-covered dok_ids.
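The dok_id skip logic above can be sketched in shell. This is a sketch only: the `dok_ids` key and the sample ID `HB01XYZ123` are illustrative assumptions, not the documented memory-file schema, and `jq` is assumed to be available on the runner.

```shell
# Sketch: read today's covered-documents file and skip any already-covered dok_id.
# Follows the AWF-safe patterns used elsewhere in this prompt (no $(...), no ${VAR}).
date -u +%Y-%m-%d > /tmp/today.txt
read TODAY < /tmp/today.txt
COVERED="memory/news-generation/covered-documents/$TODAY.json"
: > /tmp/covered_ids.txt
if [ -f "$COVERED" ]; then
  # extract one dok_id per line (assumes a flat "dok_ids" array; jq assumed available)
  jq -r '.dok_ids[]' "$COVERED" > /tmp/covered_ids.txt
fi
DOK_ID="HB01XYZ123"   # hypothetical candidate document
if grep -qx "$DOK_ID" /tmp/covered_ids.txt; then
  echo "skip: $DOK_ID already covered"
else
  echo "analyze: $DOK_ID"
fi
```

When the memory file is absent (a first run), the sketch falls through to the analyze branch, which matches the intended behaviour of treating every document as uncovered.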
- -## ⏱️ Time Budget (45 minutes) — ENFORCED Minimum 40 Minutes - -> 🔴 **SYSTEMIC ISSUE (PR #1794 audit, 2026-04-16)**: ALL news workflows completing in 13-22 min of 45-min allocation, producing shallow analysis. Agent MUST use at least 40 of 45 minutes. Completion < 40 min = insufficient iteration = REJECTED. - -```bash -date +%s > /tmp/start_time.txt -read START_TIME < /tmp/start_time.txt -``` - -- **Minutes 0–3**: Date check, MCP warm-up with `get_sync_status()` -- **Minutes 3–6**: Run download-parliamentary-data pipeline (download data) -- **Minutes 6–21**: 🚨 **AI Analysis Pass 1 (15 min minimum)**: Read ALL methodology guides, create per-file analysis for EVERY document with Mermaid diagrams, evidence tables, SWOT entries. -- **Minutes 21–22**: 🚨 **AI Analysis Pass 2 (Part A, start)**: Begin reading ALL analysis artifacts back and identify improvement targets. -- **Minutes 22–25**: 🫀 **Heartbeat PR** — `git add && git commit` analysis artifacts so far, then `safeoutputs___create_pull_request` (title `🫀 Heartbeat - Weekly Review - {date}`). Refreshes the safeoutputs MCP session (idle timeout ~30–35 min) AND preserves work if later phases fail. Run `git checkout main` after the call so subsequent commits don't stack onto the frozen patch. -- **Minutes 25–28**: 🚨 **AI Analysis Pass 2 (Part B, complete — 6 min improvement work total across Parts A+B)**: Improve every section, replace ALL script stubs with AI analysis. Run enrichment verification gate. -- **Minutes 28–30**: Run ENFORCED Minimum Time Gate + Enrichment Verification Gate (SHARED_PROMPT_PATTERNS.md). Both MUST pass. -- **Minutes 30–38**: Generate articles for all 14 languages -- **Minutes 38–42**: 🚨 **Article Improvement Pass**: Read ALL articles back, replace AI_MUST_REPLACE markers, improve content. Run article quality component gate. 
-- **Minutes 42–45**: Validate, commit, create PR with `safeoutputs___create_pull_request` -- **Minutes 43–45**: 🚨 **HARD DEADLINE** — If no safe output yet: if ANY artifacts/files were created, IMMEDIATELY stage, commit, call `safeoutputs___create_pull_request` with partial work. ONLY call `safeoutputs___noop` if truly ZERO files were created. - -> ⚠️ **Analysis phase is 22 minutes minimum (Pass 1: 15 min + Pass 2: 7 min)** — every analysis file must contain color-coded Mermaid diagrams, structured evidence tables with dok_id citations, and follow template structure exactly. ALL script-generated stubs MUST be replaced with AI-enriched analysis. - -## ⚠️ CRITICAL: Bash Tool Call Format - -> **Full reference:** See `SHARED_PROMPT_PATTERNS.md` → "Bash Tool Call Format". Key rule: every `bash` call MUST have both `command` AND `description` parameters. Example: `bash({ command: "date -u '+%Y-%m-%d'", description: "Get current UTC date" })`. Calls missing either field fail with `Multiple validation errors: - "command": Required - "description": Required`. - -## 🛡️ AWF Shell Safety - -> **Full reference:** See `SHARED_PROMPT_PATTERNS.md` → "AWF Shell Safety". Summary: use `$VAR` not `$`+`{VAR}`, use `find -exec` not `$(...)`, set defaults with `if/then` before using `$VAR`. - -## 🔤 UTF-8 Encoding - -> **Full reference:** See `SHARED_PROMPT_PATTERNS.md` → "UTF-8 Encoding". Summary: use native UTF-8 (`ö`, `ä`, `å`) — NEVER HTML entities (`ö`, `ä`). Author: `James Pether Sörling`. 
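The shell-safety and encoding summaries above condense into one short runnable sketch; the path under `/tmp` and the variable values are illustrative only, and a writable `/tmp` is assumed:

```shell
# AWF-safe sketch (illustrative path and values only)
if [ -z "$ARTICLE_AUTHOR" ]; then ARTICLE_AUTHOR="James Pether Sörling"; fi  # default via if/then, not parameter expansion
echo "Author: $ARTICLE_AUTHOR"                                   # plain $VAR interpolation, never brace-wrapped
mkdir -p /tmp/awf-demo
printf '<p>Öppna ärenden</p>\n' > /tmp/awf-demo/sample-sv.html   # native UTF-8 (ö, ä) written directly, no HTML entities
find /tmp/awf-demo -name '*-sv.html' -exec grep -l 'Öppna' {} \; # find -exec instead of $(...) command substitution
```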
- - -## Required Skills - -Consult as needed — do NOT read all files upfront: -- **Skills:** `.github/skills/editorial-standards/SKILL.md`, `.github/skills/swedish-political-system/SKILL.md`, `.github/skills/legislative-monitoring/SKILL.md`, `.github/skills/riksdag-regering-mcp/SKILL.md`, `.github/skills/language-expertise/SKILL.md`, `.github/skills/gh-aw-safe-outputs/SKILL.md` -- **Analysis:** `scripts/prompts/v2/political-analysis.md`, `per-file-intelligence-analysis.md`, `stakeholder-perspectives.md`, `quality-criteria.md` -- **Methodology:** `analysis/methodologies/ai-driven-analysis-guide.md` (v5.0) + `analysis/templates/per-file-political-intelligence.md` - -## 📊 MANDATORY Multi-Step AI Analysis Framework - -### Standardised Analysis Depth Gate - -> ⚠️ **Default is `deep`** — not `standard`. Analysis must always produce publication-quality output with Mermaid diagrams and evidence tables. - -| Depth | AI iterations | SWOT stakeholders | Charts | Mindmap | Min. analysis time | -|-------|--------------|-------------------|--------|---------|-------------------| -| standard | 1-2 | ≥3 | ≥1 | optional | 10 minutes | -| deep | 2-3 | ≥5 | ≥2 | required | 15 minutes | -| comprehensive | 3+ | ≥7 | ≥3 | required | 20 minutes | - -**Minimum requirement for ALL depths**: Every analysis file must contain at least 1 color-coded Mermaid diagram, structured evidence tables with dok_id citations, and follow the corresponding template structure exactly. Plain prose without tables/diagrams is NEVER acceptable regardless of depth level. - -> **Read `analysis_depth` input first** (default: `deep`). This controls iteration count and section requirements. - -Based on the editorial profile for `weekly-review` (from `scripts/editorial-framework.ts`): -- **SWOT**: condensed (3 stakeholder perspectives per quadrant) -- **Dashboard**: required (min. 2 Chart.js charts) -- **Mindmap**: required (CSS policy mindmap) -- **Min. 
stakeholders**: 5 perspectives -- **AI iterations**: 2 (standard), 2 (deep), or 3 (comprehensive) - -### 🗳️ Election 2026 Lens (Mandatory — v5.0) - -Every analysis MUST include an **Election 2026 Implications** section assessing: Electoral Impact, Coalition Scenarios, Voter Salience, Campaign Vulnerability, and Policy Legacy. Use the **5-level confidence scale** (⬛VERY LOW → 🟥LOW → 🟧MEDIUM → 🟩HIGH → 🟦VERY HIGH). See `analysis/methodologies/ai-driven-analysis-guide.md` v5.0 for full criteria. - -### Phase 1 — Data Collection & Initial Analysis -1. Fetch MCP data (`get_betankanden`, `get_propositioner`, `get_motioner`, `search_anforanden`, `search_voteringar`, `get_sync_status`) -2. Compute weekly metrics: document counts, key votes, most active parties -3. Build initial outline: week-in-review lede, top stories, key votes, what to watch next week - -### Phase 2 — Iterative Depth Enhancement (repeat per `analysis_depth`) -For each AI iteration: -1. **Condensed SWOT**: Write SWOT analysis with ≥3 stakeholder perspectives on the week's balance of power, using clear markdown headings and bullets suitable for the standard SWOT extraction flow -2. **Week-in-Review Dashboard**: Provide ≥2 visualization-ready summaries (for example: activity by day and document type breakdown) with explicit labels, values, and short interpretation text; do not assume an interactive dashboard renderer unless a workflow-specific validated input format is defined -3. **Policy Mindmap**: Provide a structured outline showing how the week's stories interconnect (central topic + branches + sub-branches) in nested markdown bullets; do not assume a mindmap render pipeline unless a workflow-specific validated input format is defined -4. 
**Quality Gate** (check before next iteration): - - Verify the article covers the actual past week (Mon–Fri), not a forecast - - Verify voting analysis section includes specific vote outcomes - - Verify all Swedish API text is translated - - Verify word count ≥ 1000 - -### Phase 3 — Final Quality Gate Before PR -Run all validation checks from the **MANDATORY Quality Validation** section below before committing. - -## MANDATORY Date Validation - -```bash -echo "=== Date Validation Check ===" -date -u "+Current UTC: %A %Y-%m-%d %H:%M:%S" -echo "Article Type: weekly-review" -echo "============================" -``` - -## 📅 Riksmöte (Parliamentary Session) Calculation - -September+ → `rm = "{year}/{year+1 2-digit}"` (e.g. Oct 2026 → `2026/27`). Before September → `rm = "{year-1}/{year 2-digit}"` (e.g. Feb 2026 → `2025/26`). Use in ALL MCP queries requiring `rm`. - -## MANDATORY Deduplication Check - -Before generating articles, check if articles already exist for the target date. **This check controls article GENERATION only — the deep political analysis phase ALWAYS runs regardless.** -```bash -# Resolve article date: use workflow_dispatch input when provided, fallback to UTC today -ARTICLE_DATE="${{ github.event.inputs.article_date }}" -if [ -z "$ARTICLE_DATE" ]; then - date -u +%Y-%m-%d > /tmp/today.txt - read ARTICLE_DATE < /tmp/today.txt -fi -ARTICLE_TYPE="weekly-review" -# Derive FORCE_GENERATION from the workflow_dispatch input -FORCE_GENERATION="${{ github.event.inputs.force_generation || 'false' }}" -ls news/$ARTICLE_DATE-$ARTICLE_TYPE-en.html 2>/dev/null | wc -l > /tmp/existing_count.txt -read EXISTING < /tmp/existing_count.txt -if [ "$EXISTING" -gt 0 ] && [ "$FORCE_GENERATION" != "true" ]; then - echo "📋 Articles for $ARTICLE_DATE/$ARTICLE_TYPE already exist — article generation will be skipped (analysis still runs)" - SKIP_ARTICLE_GENERATION=true - echo "SKIP_ARTICLE_GENERATION=true" >> "$GITHUB_ENV" -fi -# NOTE: Do NOT exit here or call safeoutputs___noop 
— analysis phase MUST still execute -# Later article-generation steps MUST gate on: if [ "$SKIP_ARTICLE_GENERATION" != "true" ]; then ... - -``` - -> **🚨 NEVER call `safeoutputs___noop` because articles already exist.** If articles exist, the workflow MUST still run the full 22-minute (15 min Pass 1 + 7 min Pass 2) deep political analysis phase and commit analysis artifacts. The dedup check only controls whether NEW HTML articles are generated — analysis is the primary output and always runs. If analysis produces artifacts, use `safeoutputs___create_pull_request` with `analysis-only` label. - -## MANDATORY MCP Health Gate - -> **The step-level pre-warm (6 attempts × 20s) already mitigates Render.com cold starts.** This in-prompt gate is a lightweight verification — NOT a full retry loop. Do NOT spend more than 90 seconds here. -> -> **📖 Full MCP architecture, tool names, and calling conventions:** See `SHARED_PROMPT_PATTERNS.md` → "MCP Architecture & Tool Reference" section. Tool names are EXACT: riksdag tools use underscores (`get_sync_status`), World Bank uses hyphens (`get-economic-data`), SCB uses underscores (`search_tables`). - -1. Call `get_sync_status({})` — retry up to **3×** (20s wait between each, not 45s — the server is already warm from the step-level pre-warm) -2. If you get **"unknown tool"** or **"0 tools registered"** errors after 3 attempts, run a quick diagnostic: -```bash -echo "🔍 MCP Quick Diagnostic" -echo "Direct MCP server:" && curl -sf --max-time 15 -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}' "https://riksdag-regering-ai.onrender.com/mcp" 2>/dev/null | head -c 200 || echo "UNREACHABLE" -``` -3. After 3 failures → `safeoutputs___noop({"message": "MCP server unavailable after 3 attempts — step-level pre-warm also failed"})` -4. **ALL content MUST come from live MCP data.** Never use cached articles, stale data, or AI-fabricated content. -5.
**⏱️ Do NOT spend more than 2 minutes on MCP warmup** — proceed to analysis immediately once `get_sync_status` succeeds. - -## 🛡️ File Ownership Contract - -Content workflows: only create/modify **EN and SV** files (`news/YYYY-MM-DD-*-en.html`, `*-sv.html`). Validate with `npx tsx scripts/validate-file-ownership.ts content`. Fix violations: `git restore --staged --worktree -- <file>` (tracked) or `rm <file>` (untracked). - -### Branch Naming Convention - -Branch: `news/content/{YYYY-MM-DD}/{article-type}` (e.g. `news/content/2026-03-23/weekly-review`). `safeoutputs___create_pull_request` handles this automatically. - -## MANDATORY PR Creation - -> **🚀 HOW SAFE PR CREATION WORKS — READ THIS FIRST** -> -> The `safeoutputs___create_pull_request` tool handles **everything**: branch creation, pushing commits, and opening the PR. Do NOT run `git push` or `git checkout -b` manually. -> -> **Exact steps:** -> 1. Write article files to `news/` using `bash` or `edit` tools -> 2. Stage and commit locally (scoped to the resolved weekly-review analysis subfolder): `git add news/*weekly-review*.html news/metadata/ "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER/" analysis/weekly/ && git commit -m "Add weekly-review articles and analysis artifacts"` -> 3. Call `safeoutputs___create_pull_request` with `title`, `body`, and `labels` -> -- ✅ `safeoutputs___create_pull_request` for articles or analysis-only PRs -- ✅ `safeoutputs___noop` ONLY if MCP unreachable after 3 attempts AND no analysis artifacts exist -- ❌ NEVER noop because articles already exist — analysis always runs -- ❌ Safe output tools are in your tool list — NEVER search for them via bash - -## 🌐 Dispatch Translation Workflow - -After creating the content PR, dispatch translations: `safeoutputs___dispatch_workflow({ "workflow_name": "news-translate", "inputs": { "article_date": "<YYYY-MM-DD>", "article_type": "<article-type>", "languages": "all-extra" } })`. See `news-translate.md` for full translation quality rules.
- -## MCP Tools - -**ALWAYS call `get_sync_status()` FIRST.** - -**Primary tool:** `search_dokument` — searches documents from past 7 days -**Cross-reference:** `search_voteringar`, `get_betankanden`, `search_anforanden` -**Statistical enrichment:** SCB MCP + World Bank — enrich weekly context with relevant economic indicators. Auto-select SCB tables based on which committees were active during the week (see `scripts/scb-context.ts` for committee mappings). **World Bank indicators (144 total)**: `view analysis/worldbank/indicators-inventory.json` to discover indicators matching active committees — each indicator has `policyAreas`, `committees`, and `mcpTool` fields. Use MCP tools for indicators with `mcpTool` field. See `SHARED_PROMPT_PATTERNS.md` §"WORLD BANK ECONOMIC CONTEXT INTEGRATION" for Chart.js chart templates (`economic-comparison`, `economic-trend`, `nordic-radar`). MUST generate ≥2 economic charts: one Nordic comparison, one trend line. -**Fact-checking:** Review speeches from `search_anforanden` for statistical claims. Use `scripts/statistical-claims-detector.ts` to detect and cross-reference claims against official SCB/World Bank data. Include a "Faktakoll" section for any detected inaccuracies. 
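The `<calculated riksmöte>` placeholder in the query examples that follow applies the rule from the Riksmöte Calculation section. A minimal shell sketch of that rule, assuming plain numeric year and month arguments:

```shell
# Sketch of the riksmöte session rule: September or later starts a new session year
riksmote() {
  YEAR=$1
  case "$2" in
    9|09|10|11|12) printf '%d/%02d\n' "$YEAR" $(( (YEAR + 1) % 100 )) ;;   # e.g. Oct 2026 → 2026/27
    *)             printf '%d/%02d\n' $(( YEAR - 1 )) $(( YEAR % 100 )) ;; # e.g. Feb 2026 → 2025/26
  esac
}
riksmote 2026 10   # prints 2026/27
riksmote 2026 02   # prints 2025/26
```

September onward maps to the newly opened session string; earlier months reuse the session that began the previous autumn.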
- -```javascript -get_sync_status({}) -const lastWeek = new Date(Date.now() - 7*86400000).toISOString().split('T')[0]; -const today = new Date().toISOString().split('T')[0]; -search_dokument({ from_date: lastWeek, to_date: today, limit: 30 }) -search_voteringar({ rm: <calculated riksmöte>, limit: 20 }) -get_betankanden({ rm: <calculated riksmöte>, limit: 10 }) - -// SCB enrichment (optional — wrap in try/catch, do not block generation on SCB failures): -// search_tables({ query: "arbetslöshet BNP konsumentprisindex", limit: 5 }) -``` - -## Generation Steps - -### Step 1: Check Existing Articles (Analysis Always Runs) -🚨 **FULL ANALYSIS BEFORE ANY ARTICLE (BLOCKING)**: The complete deep political analysis phase following [`analysis/methodologies/ai-driven-analysis-guide.md`](../../analysis/methodologies/ai-driven-analysis-guide.md) (Rule 0 two-pass iteration + Rules 6–8 depth tiers, 15 min Pass 1 + 7 min Pass 2 minimum, ALL 9 required artifacts) **MUST** complete **BEFORE** any article HTML is created or updated. Articles MUST be (re)generated from the improved Pass 2 analysis — never from Pass 1 stubs, never from scripts alone, never skipping Pass 2. Violations = REJECTED PR (PR #1705 comment audit, 2026-04-18). - -Check if weekly-review articles already exist for the target date. If they do, skip article generation but **ALWAYS run the full deep political analysis phase** — analysis is the primary output and must execute on every run regardless of article existence. - -### Step 2: Query MCP -```javascript -get_sync_status({}) -search_dokument({ from_date: lastWeek, to_date: today, limit: 30 }) -``` - -### Step 2.5: Run Pre-Article Analysis Pipeline - -**CRITICAL: Download data first, then AI creates ALL 14 analysis artifacts (9 core + 5 Tier-C reference-grade).** `download-parliamentary-data.ts` downloads raw data ONLY — it performs NO analysis. The AI agent MUST: -1. Read `analysis/methodologies/ai-driven-analysis-guide.md` fully -2. 
Read ALL 8 templates in `analysis/templates/` -3. **STEP 0 — Upstream Watchpoint Ingestion (MANDATORY, per `SHARED_PROMPT_PATTERNS.md` §"Recent Daily Knowledge-Base Synthesis")**: ingest forward indicators from the last **7 days** of sibling daily runs + the prior `weekly-review`. Build the Watchpoint Reconciliation table (no silent drops). -4. Create ALL **14** analysis files in `analysis/daily/YYYY-MM-DD/weekly-review/` using evidence from the downloaded data AND the ingested upstream watchpoints -5. Reference exemplar: [`analysis/daily/2026-04-18/weekly-review/`](../../analysis/daily/2026-04-18/weekly-review/) — **the canonical weekly-review reference-grade package** - -Run the **14-Artifact Completeness Gate** (aggregation workflow) from `SHARED_PROMPT_PATTERNS.md` §"14 REQUIRED Artifacts for AGGREGATION Workflows — Reference-Grade Tier-C" to verify ALL 14 files exist: the 9 core (synthesis-summary.md, swot-analysis.md, risk-assessment.md, threat-analysis.md, classification-results.md, significance-scoring.md, stakeholder-perspectives.md, cross-reference-map.md, data-download-manifest.md) PLUS the 5 Tier-C reference-grade files (README.md, executive-brief.md, scenario-analysis.md, comparative-international.md, methodology-reflection.md). - -> 📐 **Period-scope multiplier: 1.0× (baseline)** — `weekly-review` is the baseline exemplar for the 14-artifact gate. See `SHARED_PROMPT_PATTERNS.md` §"Period-Scope Multipliers" for the full table. Reference exemplar: [`analysis/daily/2026-04-18/weekly-review/`](../../analysis/daily/2026-04-18/weekly-review/). - -```bash -date -u +%Y-%m-%d > /tmp/today.txt -read ARTICLE_DATE < /tmp/today.txt -echo "📊 Running pre-article analysis for $ARTICLE_DATE..." -# CRITICAL: Source mcp-setup.sh FIRST to set MCP_SERVER_URL and MCP_AUTH_TOKEN for the gateway -source scripts/mcp-setup.sh -npx tsx scripts/download-parliamentary-data.ts --date "$ARTICLE_DATE" --limit 100 > /tmp/pipeline-output.log 2>&1 -PIPE_EXIT=$? 
-cat /tmp/pipeline-output.log -if [ "$PIPE_EXIT" -ne 0 ]; then - echo "❌ Pipeline failed — agent MUST diagnose and fix (read /tmp/pipeline-output.log)" - tail -20 /tmp/pipeline-output.log -fi -echo "📊 Analysis artifacts for $ARTICLE_DATE:" -ls -la "analysis/daily/$ARTICLE_DATE/" 2>/dev/null || echo "⚠️ No analysis output" -find analysis/data/ -name "*.json" -type f 2>/dev/null | wc -l > /tmp/data_count.txt -read DATA_JSON_COUNT < /tmp/data_count.txt -echo "📊 JSON data files: $DATA_JSON_COUNT (must be > 0)" -# Relocate pipeline artifacts: download-parliamentary-data.ts writes to analysis/daily/$DATE/ (unscoped) -# but this workflow needs them under analysis/daily/$DATE/weekly-review/ -# === Run Suffix Resolution (see SHARED_PROMPT_PATTERNS.md) === -BASE_SUBFOLDER="weekly-review" -ANALYSIS_SUBFOLDER="$BASE_SUBFOLDER" -if [ "$FORCE_GENERATION" != "true" ]; then - _SUFFIX=1 - while [ -f "analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER/synthesis-summary.md" ]; do - _SUFFIX=$((_SUFFIX + 1)) - ANALYSIS_SUBFOLDER="$BASE_SUBFOLDER-$_SUFFIX" - done -fi -echo "📁 Analysis subfolder resolved: $ANALYSIS_SUBFOLDER" -UNSCOPED_DIR="analysis/daily/$ARTICLE_DATE" -SCOPED_DIR="$UNSCOPED_DIR/$ANALYSIS_SUBFOLDER" -if [ -d "$UNSCOPED_DIR" ]; then - mkdir -p "$SCOPED_DIR" - if find "$UNSCOPED_DIR" -maxdepth 1 -type f -name "*.md" | grep -q .; then - find "$UNSCOPED_DIR" -maxdepth 1 -type f -name "*.md" -exec mv -f {} "$SCOPED_DIR/" \; - echo "📁 Moved pipeline *.md artifacts → $SCOPED_DIR (root cleaned to prevent merge conflicts)" - fi - if [ -d "$UNSCOPED_DIR/documents" ]; then - mkdir -p "$SCOPED_DIR/documents" - find "$UNSCOPED_DIR/documents" -mindepth 1 -maxdepth 1 -exec mv {} "$SCOPED_DIR/documents/" \; - rmdir "$UNSCOPED_DIR/documents" 2>/dev/null || true - echo "📁 Relocated pipeline documents/ contents → $SCOPED_DIR/documents" - fi -fi -if [ "$DATA_JSON_COUNT" -eq 0 ]; then - echo "🚨 CRITICAL: Pipeline downloaded ZERO data. Agent MUST diagnose and fix — do NOT fabricate analysis." 
-fi -``` - -**Weekly aggregation**: Since this is a weekly-scope workflow, also aggregate the week's daily analyses: - -```bash -date -u +%G-W%V > /tmp/week_label.txt -read WEEK_LABEL < /tmp/week_label.txt -echo "📅 Running weekly aggregation for $WEEK_LABEL..." -source scripts/mcp-setup.sh && npx tsx scripts/download-parliamentary-data.ts --aggregate weekly --date "$WEEK_LABEL" || echo "⚠️ Weekly aggregation failed (non-blocking)" -ls -la "analysis/weekly/$WEEK_LABEL/" 2>/dev/null || echo "⚠️ No weekly aggregation output" -``` - -These files are committed alongside articles for human review and continuous improvement. - -### 🚨 MANDATORY: Analysis Artifacts Must ALWAYS Be Committed - -**Before deciding whether to generate articles or call noop, you MUST:** - -1. **Review the analysis artifacts** in `analysis/daily/YYYY-MM-DD/` and `analysis/weekly/` — read `synthesis-summary.md` and `significance-scoring.md` to understand what was found -2. **Summarize the analysis findings** — note how many documents were downloaded, their significance scores, key themes, and risk levels -3. 
**ALWAYS commit analysis artifacts** regardless of whether articles will be generated: - -```bash -date -u +%Y-%m-%d > /tmp/today.txt -read ARTICLE_DATE < /tmp/today.txt -ANALYSIS_DIR="analysis/daily/$ARTICLE_DATE/weekly-review" -ANALYSIS_COUNT=0 -if [ -d "$ANALYSIS_DIR" ]; then - find "$ANALYSIS_DIR" -type f 2>/dev/null | wc -l > /tmp/analysis_count.txt - read ANALYSIS_COUNT < /tmp/analysis_count.txt -fi -date -u +%G-W%V > /tmp/week_label.txt -read WEEK_LABEL < /tmp/week_label.txt -WEEKLY_DIR="analysis/weekly/$WEEK_LABEL" -if [ -d "$WEEKLY_DIR" ]; then - find "$WEEKLY_DIR" -type f 2>/dev/null | wc -l > /tmp/weekly_count.txt - read WEEKLY_COUNT < /tmp/weekly_count.txt - ANALYSIS_COUNT=$((ANALYSIS_COUNT + WEEKLY_COUNT)) -fi -if [ "$ANALYSIS_COUNT" -gt 0 ]; then - echo "📊 Found $ANALYSIS_COUNT total analysis artifacts — these MUST be committed (do NOT use safeoutputs___noop)" -else - echo "📊 Found 0 analysis artifacts — safeoutputs___noop is allowed (no files to commit)" -fi -``` - -> **🚨 CRITICAL RULE: Never call `safeoutputs___noop` if analysis artifacts exist.** If the pre-article analysis pipeline produced ANY output files, you MUST commit them via `safeoutputs___create_pull_request` — even if no articles are generated. Use an analysis-only PR with title: `📊 Analysis Only - Weekly Review - {date}` and label `analysis-only`. Only use `safeoutputs___noop` if the analysis pipeline produced ZERO output files (truly nothing to analyze). - -### 🔬 Step 2b: Read ALL Analysis Files + Cross-Reference Sibling Types (MANDATORY) - -> 🔴 **NON-NEGOTIABLE**: Weekly review synthesizes the entire week's parliamentary activity. The AI MUST read ALL analysis files from ALL article types before generating the review. See SHARED_PROMPT_PATTERNS.md §"MANDATORY PRE-ARTICLE ANALYSIS READING". - -```bash -ANALYSIS_SUBFOLDER="weekly-review" -ANALYSIS_BASE="analysis/daily/$ARTICLE_DATE/$ANALYSIS_SUBFOLDER" - -echo "📖 Reading ALL analysis files from $ANALYSIS_BASE..." 
-if [ -d "$ANALYSIS_BASE" ]; then - for MD_FILE in "$ANALYSIS_BASE"/*.md; do - if [ -f "$MD_FILE" ]; then - echo "--- Reading: $MD_FILE ---" - cat "$MD_FILE" - echo "" - fi - done -fi - -echo "🔍 Cross-referencing sibling analysis types for $ARTICLE_DATE..." -for SIBLING_DIR in analysis/daily/$ARTICLE_DATE/*/; do - if [ -d "$SIBLING_DIR" ]; then - echo "$SIBLING_DIR" | sed 's|/$||' | sed 's|.*/||' > /tmp/sibling_type.txt - read SIBLING_TYPE < /tmp/sibling_type.txt - if [ "$SIBLING_TYPE" = "$ANALYSIS_SUBFOLDER" ]; then continue; fi - echo "📖 Cross-referencing: $SIBLING_TYPE" - for SIBLING_FILE in "$SIBLING_DIR/synthesis-summary.md" "$SIBLING_DIR/significance-scoring.md"; do - if [ -f "$SIBLING_FILE" ]; then - echo "--- Sibling ($SIBLING_TYPE): $SIBLING_FILE ---" - cat "$SIBLING_FILE" - echo "" - fi - done - fi -done -echo "✅ Cross-referencing complete — weekly review MUST incorporate findings from all sibling types" -``` - -### Step 3: Generate Articles - -```bash -# Set LANGUAGES_INPUT to the value shown in Workflow Dispatch Parameters above -LANGUAGES_INPUT="<value from Workflow Dispatch Parameters>" -[ -z "$LANGUAGES_INPUT" ] && LANGUAGES_INPUT="all" - -case "$LANGUAGES_INPUT" in - "nordic") LANG_ARG="en,sv,da,no,fi" ;; - "eu-core") LANG_ARG="en,sv,de,fr,es,nl" ;; - "all") LANG_ARG="en,sv,da,no,fi,de,fr,es,nl,ar,he,ja,ko,zh" ;; - *) LANG_ARG="$LANGUAGES_INPUT" ;; -esac - -source scripts/mcp-setup.sh && npx tsx scripts/generate-news-enhanced.ts \ - --types=weekly-review \ - --languages="$LANG_ARG" \ - --skip-existing -``` - -**Article Navigation Verification**: The `generate-news-enhanced.ts` script automatically includes all required navigation elements: -- **Language switcher** (`<nav class="language-switcher">`) after `<body>` with all 14 languages -- **Back-to-news top nav** (`<div class="article-top-nav">`) with localized back link after language switcher -- **Footer back-to-news link** in `<footer class="article-footer">` - -These elements are validated by 
`bash scripts/validate-news-generation.sh` (Checks 8–10). The fix script is a **fallback only** — do not run it by default: -```bash -# FALLBACK ONLY — use if validate-news-generation.sh reports missing navigation elements -npx tsx scripts/fix-article-navigation.ts -``` - ---- - -## Step 2.6: Economic Data Acquisition (MANDATORY) - -> **Contract**: [`.github/aw/ECONOMIC_DATA_CONTRACT.md`](../aw/ECONOMIC_DATA_CONTRACT.md) — the **single source of truth** for World Bank + SCB data, Chart.js visualisations, and AI commentary. Follow it exactly; the Step 6 quality gate (`scripts/validate-economic-context.ts`) **blocks the PR** if any element is missing. - -**What you MUST do before writing any prose:** - -1. `view analysis/worldbank/indicators-inventory.json` and pick every indicator whose `committees` / `policyAreas` match the day's source documents. -2. Call `world-bank.get-economic-data` / `get-social-data` / `get-health-data` / `get-education-data` for Sweden (10-year series for primary domains) and for DK/NO/FI/DE (5-year series for the top 3 indicators — needed for the Nordic comparison bars and radar). -3. Call `scb.search_tables` + `scb.query_table` using the committee → TAB mapping in `scripts/scb-context.ts`. **`language` MUST be `"sv"` or `"en"` — NEVER `"no"`** (SCB returns HTTP 400 "Unsupported language"). -4. Retry every World Bank call up to **3 times** on failure. Cache raw responses under `analysis/data/worldbank/<YYYY>/<indicator>-<country>.json` so later article types in the same daily run reuse the data. -5. 
Write `analysis/daily/<ARTICLE_DATE>/<ANALYSIS_SUBFOLDER>/economic-data.json` matching `analysis/schemas/economic-data.schema.json`: - -```jsonc -{ - "version": "1.0", - "articleType": "weekly-review", - "date": "<YYYY-MM-DD>", - "policyDomains": ["fiscal policy", "labor market"], - "dataPoints": [ - { "countryCode": "SWE", "countryName": "Sweden", "indicatorId": "NY.GDP.MKTP.KD.ZG", "date": "2024", "value": 0.82 }, - { "countryCode": "DNK", "countryName": "Denmark", "indicatorId": "NY.GDP.MKTP.KD.ZG", "date": "2024", "value": 1.75 } - ], - "commentary": "<will be filled in Step 3d>", - "source": { "worldBank": ["NY.GDP.MKTP.KD.ZG", "FP.CPI.TOTL.ZG"], "scb": ["TAB1291"] } -} -``` - -**Non-negotiable**: `dataPoints` MUST be non-empty. The HTML renderer (`scripts/data-transformers/content-generators/economic-dashboard-section.ts`) emits real Chart.js canvases only when the file exists with entries — otherwise the validator fails the PR. - -**Minimum coverage (enforced by the validator):** see the matrix in `ECONOMIC_DATA_CONTRACT.md` §"Coverage matrix" for this article type's chart count, commentary word minimum, and D3 requirement. - ---- -### Step 3b: AI Title, Meta Description & Analysis References - -> 🚨 **MANDATORY** — After article HTML is generated, the AI MUST improve titles, descriptions, and add analysis references. See `SHARED_PROMPT_PATTERNS.md` sections "AI-DRIVEN TITLE & META DESCRIPTION GENERATION" and "ANALYSIS FILE GITHUB REFERENCES" for full protocols. - -**1. Generate newsworthy titles** — Replace script-generated title with: `[Active Verb] + [Specific Institution] + [Concrete Policy Action]`. BANNED: ❌ generic category labels or ": {Topic} in Focus". - -**2. Generate AI meta descriptions** (150-160 chars) — Key political intelligence summary. BANNED: ❌ "Analysis of N documents". - -**3. 
🔴 Add analysis references (MANDATORY — VERIFY AFTER)** — Insert "📊 Analysis & Sources" HTML block (from SHARED_PROMPT_PATTERNS.md §ANALYSIS FILE GITHUB REFERENCES) linking to `analysis/daily/$ARTICLE_DATE/weekly-review/` files and `analysis/methodologies/ai-driven-analysis-guide.md`. +# 📊 Weekly Review -**After inserting, VERIFY** by running: -```bash -for FILE in news/$ARTICLE_DATE-weekly-review-*.html; do - if [ -f "$FILE" ] && ! grep -q 'class="analysis-references"' "$FILE"; then - echo "🔴 MISSING analysis-references in: $FILE — MUST FIX NOW" - fi -done -``` +Retrospective 7-day political intelligence review synthesising every article-type from the past week. Tier-C reference-grade output (14 artifacts). Core languages EN, SV. -**4. Update all metadata** — `<title>`, `<meta name="description">`, `<meta property="og:title">`, `<meta property="og:description">`, and `<h1>`. +## Tier-C (reference-grade) requirements -### Step 4: Translate, Validate & Verify Analysis Quality +This workflow imports `../prompts/ext/tier-c-aggregation.md`. Produce **all 14 artifacts** (9 core + 5 Tier-C) and cross-reference sibling analyses. See the extension for the full rules. -Run analysis references fix, validation, and HTMLHint before creating PR: -```bash -# 🔴 MANDATORY: Inject analysis references into any article missing them -npx tsx scripts/fix-analysis-references.ts --date "$ARTICLE_DATE" --rewrite --type weekly-review +## What this workflow does -bash scripts/validate-news-generation.sh -VALIDATION_EXIT=$? -if [ "$VALIDATION_EXIT" -ne 0 ]; then - echo "❌ News generation validation failed. Fix the reported issues before creating a PR." - exit "$VALIDATION_EXIT" -fi +- **Article type**: `weekly-review` +- **Analysis subfolder**: `analysis/daily/$ARTICLE_DATE/weekly-review/` +- **Core languages produced**: `en`, `sv` (remaining 12 languages dispatched to `news-translate`) +- **One pull request per run** containing analysis + articles + visualisation data. 
-# HTMLHint validation with auto-fix for common nesting errors -find news -maxdepth 1 -name '*-*.html' 2>/dev/null | wc -l > /tmp/news_count.txt -read NEWS_FILES < /tmp/news_count.txt -if [ "$NEWS_FILES" -gt 0 ]; then - if ! npx htmlhint "news/*-*.html" 2>/dev/null; then - echo "⚠️ HTML validation errors found, attempting auto-fix..." - npx tsx scripts/article-quality-enhancer.ts --fix - if ! npx htmlhint "news/*-*.html"; then - echo "❌ HTML validation failed after auto-fix. Please fix remaining issues before creating PR." - exit 1 - fi - fi -fi -# Playwright visual validation (accessibility, RTL, responsive) -npx tsx scripts/validate-articles-playwright.ts --filter "weekly-review" +## Time budget (60 min, minimum 45 min of real work) -# Validate JSON-LD cross-references -npx tsx scripts/validate-cross-references.ts news/*-weekly-review-*.html -``` +| Minutes | Phase | Module | +|---------|-------|--------| +| 0–2 | MCP pre-warm + `get_sync_status` | 02 | +| 2–6 | Download data + catalogue | 03 | +| 6–25 | Analysis Pass 1 (methodology read + per-doc analyses + 9 artifacts) | 04 | +| 25–35 | Analysis Pass 2 (read-back + improvements) | 04 | +| 35–37 | Analysis Gate | 05 | +| 37–48 | Article Pass 1 + Pass 2 (EN, SV) | 06 | +| 48–55 | Visual + link validation | 06 | +| 55–60 | Stage, commit, **ONE** `safeoutputs___create_pull_request` | 07 | -**CRITICAL: Each article MUST contain real analysis, not just a list of translated document links.** -Every generated article must include thematic analysis grouping documents by type and policy area, interpretive commentary on what the week's activity reveals about political dynamics, and key takeaways. +Trim scope before quality. Never open a second PR to "save" partial work — there is no second PR. -**Note**: News index files, metadata, and sitemap are generated automatically at build time by the `prebuild` script. Do NOT run generation scripts or commit their output — only commit the article HTML files. 
To locally preview or validate these generated indexes, metadata, and sitemap on a fresh checkout, run `npm run prebuild` before starting your local preview or build. +## Inputs -## Article Content Structure +- `article_date` — override date (defaults to today) +- `force_generation` — regenerate even if today's article exists (analysis is always refreshed regardless) +- `languages` — core content languages (default `en,sv`) +- `analysis_depth` — `standard` | `deep` (default) | `comprehensive` -Weekly review articles should include: -1. **Week Summary**: Top 3–5 most significant developments -2. **Legislative Outcomes**: Bills passed, motions debated, votes recorded -3. **Committee Activity**: Reports issued, hearings conducted -4. **Government Actions**: Policy announcements, propositions tabled -5. **Opposition Highlights**: Key motions filed, interpellations submitted -6. **What Mattered Most**: Analysis of the week's most consequential development -7. **Looking Ahead**: Brief preview of the coming week +## Dedup & analysis-only path -## 🌐 Translation Quality +If articles for `$ARTICLE_DATE` + `weekly-review` already exist **and** `force_generation=false`: -EN/SV only: all headings, meta, content in correct language; no untranslated `data-translate` spans; Swedish API titles translated. Full rules: `news-translate.md`. +- Still run the full analysis pipeline (modules 03 → 04 → 05). +- Commit the analysis. +- Open the single PR with title `📊 Analysis Only — Weekly Review — $ARTICLE_DATE` and label `analysis-only`. -## Step 3d: Economic Commentary (MANDATORY) +Analysis is the primary product — a run never "does nothing" just because articles exist. 
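The dedup branch above can be sketched as a small shell helper. This is a hypothetical illustration: the function name and the `news/` file-layout check mirror the rules stated in this section, not any real script in the repo.

```shell
# decide_mode NEWS_DIR ARTICLE_DATE FORCE_GENERATION
# Prints "analysis-only" when articles for the date already exist and
# force_generation is "false"; otherwise prints "full".
decide_mode() {
  news_dir="$1"; article_date="$2"; force="$3"
  count=$(find "$news_dir" -maxdepth 1 \
    -name "$article_date-weekly-review-*.html" 2>/dev/null | wc -l)
  if [ "$count" -gt 0 ] && [ "$force" = "false" ]; then
    echo "analysis-only"   # analysis still runs; the single PR gets the analysis-only label
  else
    echo "full"            # full pipeline: analysis + EN/SV articles
  fi
}
```

Either branch still runs modules 03 → 05 and ends in exactly one pull request; only the PR title and label differ.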
-> After Step 3c and **before** calling `safeoutputs.create_pull_request`, re-open `economic-data.json` and replace the placeholder `commentary` string with a **4–6 sentence paragraph of ≥150 words** (enforced by `scripts/validate-economic-context.ts` — `weekly-review` = 150, `monthly-review` = 200) that: -> - cites **≥3 concrete numeric values** from `dataPoints` (e.g. Nordic GDP comparison + Swedish unemployment trajectory); -> - ties the numbers to the week's political developments (not definitions of indicators); -> - is written in plain English (translations are produced downstream by `news-translate`); -> - meets the minimum word count in the coverage matrix for this article type. -> -> Banned phrasings (the multi-dim quality score flags these): "The political landscape remains fluid…", "Touches on X policy…", pure indicator definitions. -> -> **Sankey / flow diagram** (required for `weekly-review`): `scripts/generate-news-enhanced/generators.ts` calls `buildArticleVisualizationSections` with `alwaysEmit: true` for this article type, so `class="sankey-section"` is auto-appended whenever the week has at least **one** document — even when every document collapses into a single doc-type bucket. The only case where no Sankey is emitted is an empty week (`docs.length === 0`); in that edge case the visualization builder returns an empty section list. The AI writer does not need to emit Sankey HTML directly — just verify the generated HTML contains `class="sankey-section"` before opening the PR: -> ```bash -> if grep -l 'class="sankey-section"' news/$ARTICLE_DATE-weekly-review-*.html; then -> echo "✅ Sankey section present" -> else -> # AWF-safe: no $(...) command substitution — use per-process temp file + read redirection, then clean up. 
-> doc_count_tmp="/tmp/doc_count.$$" -> find "analysis/daily/$ARTICLE_DATE/weekly-review/documents" -maxdepth 1 -name '*.json' 2>/dev/null | wc -l > "$doc_count_tmp" -> read doc_count < "$doc_count_tmp" -> rm -f "$doc_count_tmp" -> if [ "$doc_count" = "0" ]; then -> echo "ℹ️ Sankey section not emitted — the week has 0 documents (validator allows this)" -> else -> echo "❌ Sankey section missing — the validator will block the PR"; exit 1 -> fi -> fi -> ``` -> -> Full rules: [`.github/aw/ECONOMIC_DATA_CONTRACT.md`](../aw/ECONOMIC_DATA_CONTRACT.md) §"Writing the AI commentary — workflow Step 3d". +All other rules (bash format, AWF shell safety, MCP access, download pipeline, analysis methodology & gate, article generation, commit & PR policy) live in the imported modules. diff --git a/analysis/README.md b/analysis/README.md index 21e7af2ed..a11cdd888 100644 --- a/analysis/README.md +++ b/analysis/README.md @@ -155,7 +155,7 @@ flowchart LR **The AI agent reads all 6 methodology guides, reads all 8 templates, reads the actual data, and produces genuine analytical content based on evidence found in the documents.** -**Fallback mechanism:** If AI analysis fails or produces unusable output (detected by the quality gate bash check in `SHARED_PROMPT_PATTERNS.md`), the workflow should: +**Fallback mechanism:** If AI analysis fails or produces unusable output (detected by the quality gate bash check in `.github/prompts/` (see the README for the module catalogue)), the workflow should: 1. Commit a minimal `data-download-manifest.md` documenting what was downloaded 2. Flag the analysis as `pending` for the next workflow run 3. Never commit placeholder or stub content that masquerades as genuine analysis diff --git a/analysis/imf/README.md b/analysis/imf/README.md index 830e592f6..bf9bfeb51 100644 --- a/analysis/imf/README.md +++ b/analysis/imf/README.md @@ -84,5 +84,5 @@ IMF advertises **~10 req / 5 s**. 
The client and agentic workflows MUST:
- `analysis/imf/indicator-policy-mapping.md` — which IMF indicators feed which committees
- `analysis/imf/use-cases.md` — canonical article examples
- `.github/aw/ECONOMIC_DATA_CONTRACT.md` — v2.0 contract (data artefact shape, validator gates)
-- `.github/aw/SHARED_PROMPT_PATTERNS.md` — "Economic Indicator Reference"
+- `.github/prompts/` — prompt module library (see its `README.md` for the module catalogue)
- `docs/adr/0001-adopt-imf-data-alongside-world-bank.md` — architecture decision
diff --git a/analysis/methodologies/ai-driven-analysis-guide.md b/analysis/methodologies/ai-driven-analysis-guide.md
index e3025102c..dd36cc2e0 100644
--- a/analysis/methodologies/ai-driven-analysis-guide.md
+++ b/analysis/methodologies/ai-driven-analysis-guide.md
@@ -168,7 +168,7 @@ Every analysis file MUST demonstrate:
**Rhetorical-Tension Rule**: When the top-ranked findings carry opposing political valences, the article MUST surface the tension in a dedicated subsection. Silence on the tension is itself a coverage failure.
-**Enforcement**: `SHARED_PROMPT_PATTERNS.md` → "Lead-Story & Coverage-Completeness Gate" is a blocking check. Articles failing the gate cannot be committed.
+**Enforcement**: the "Lead-Story & Coverage-Completeness Gate" in the `.github/prompts/` module library (see its `README.md` for the catalogue) is a blocking check. Articles failing the gate cannot be committed.
---
@@ -256,7 +256,7 @@ Every agentic workflow MUST spend **at least 15 minutes** on analysis. This incl
### 🔍 Quality Gate (Blocking)
-Before committing, run the quality gate bash check from `SHARED_PROMPT_PATTERNS.md` Step 5b. If the check fails, go back and improve analysis files until it passes. Do NOT commit failing analysis.
+Before committing, run the quality gate bash check from `.github/prompts/05-analysis-gate.md`. If the check fails, go back and improve analysis files until it passes. Do NOT commit failing analysis.
---
@@ -2181,7 +2181,7 @@ Every synthesis-level analysis MUST include a historical comparison with:
| [political-style-guide.md](political-style-guide.md) | Writing and formatting standards |
| [SWOT.md](../../SWOT.md) | **Formatting exemplar** (platform SWOT) |
| [THREAT_MODEL.md](../../THREAT_MODEL.md) | **Formatting exemplar** (platform threat model) |
-| [SHARED_PROMPT_PATTERNS.md](../../.github/workflows/SHARED_PROMPT_PATTERNS.md) | **Shared news workflow prompts** — quality enforcement |
+| [.github/prompts/README.md](../../.github/prompts/README.md) | **Shared news workflow prompts** — quality enforcement |
---
diff --git a/analysis/templates/synthesis-summary.md b/analysis/templates/synthesis-summary.md
index 7d3b198eb..05b273a23 100644
--- a/analysis/templates/synthesis-summary.md
+++ b/analysis/templates/synthesis-summary.md
@@ -361,7 +361,7 @@ graph LR
## ✅ Quality Self-Check Checklist
-> **Pre-commit validation — every item MUST be checked before finalising this synthesis. Derived from SHARED_PROMPT_PATTERNS.md §Quality Self-Check Protocol.**
+> **Pre-commit validation — every item MUST be checked before finalising this synthesis. 
Derived from .github/prompts/ (see README) §Quality Self-Check Protocol.** - [ ] **Synthesis Context complete:** All metadata fields filled (ID, date, documents analyzed, period, producer, confidence) - [ ] **Intelligence Dashboard rendered:** Mermaid diagram has actual values (no grey placeholder nodes remaining) From fc5db48413cd827f697a9ce89de6ae2cf8c1a4f7 Mon Sep 17 00:00:00 2001 From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com> Date: Tue, 21 Apr 2026 23:01:25 +0000 Subject: [PATCH 03/21] Add ADR 0002: modular prompt library + single-PR runs Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/d6b0d645-c20f-4963-91af-1f68caafc38e Co-authored-by: pethers <1726836+pethers@users.noreply.github.com> --- ...0002-modular-prompts-and-single-pr-runs.md | 105 ++++++++++++++++++ 1 file changed, 105 insertions(+) create mode 100644 docs/adr/0002-modular-prompts-and-single-pr-runs.md diff --git a/docs/adr/0002-modular-prompts-and-single-pr-runs.md b/docs/adr/0002-modular-prompts-and-single-pr-runs.md new file mode 100644 index 000000000..3fa684872 --- /dev/null +++ b/docs/adr/0002-modular-prompts-and-single-pr-runs.md @@ -0,0 +1,105 @@ +# ADR 0002 — Modular prompt library and single-PR agentic workflow runs + +- **Status**: Accepted +- **Date**: 2026-04-21 +- **Authors**: Hack23 AB — Riksdagsmonitor maintainers +- **Deciders**: CEO / CISO (per `Change_Management.md` — Normal change touching `.github/aw/` and workflow `.md` configuration) + +## Context + +The agentic news pipeline had grown the following structural defects by April 2026: + +1. **Monolithic shared prompt file** — `.github/aw/SHARED_PROMPT_PATTERNS.md` held ~4,350 lines covering ~50 mixed topics (bash format, UTF-8, MCP inventory, 9- and 14-artifact gates, heartbeat PR strategy, DIW, visualisation, IMF/WB/SCB references, three competing PR template variants, audit history, version tags v3.0/v4.0/v5.0, dated annotations, and run IDs). +2. 
**Multi-PR heartbeat pattern** — every news workflow declared `safe-outputs.create-pull-request.max: 2–5` and implemented a "🫀 Heartbeat PR #1 / final PR #2" pattern with a 55-minute background keep-alive pinger. In practice `safeoutputs___create_pull_request` **freezes the patch at call time**; every commit made after the first call was silently dropped. Content loss was confirmed by the user and by direct observation of missing translations / analyses on merged PRs. +3. **Workflow bloat** — the 12 news workflow `.md` files were 780–1,100 lines each (≈15,300 lines total). They duplicated rules from the shared file and referenced it as prose ("See `SHARED_PROMPT_PATTERNS.md` →…") without using the first-class `imports:` mechanism that gh-aw already supports. +4. **Scattered enforcement** — the "analysis before article" rule was stated in six+ places with inconsistent minute budgets (22 / 14 / 40). The PR template existed in three variants. The economic-chart spec existed in three variants. +5. **Token bloat** — duplicate correct/incorrect examples, inline rationale paragraphs citing PR numbers and run IDs, tutorials that already lived in `.github/skills/`. + +Total prompt-surface footprint: ~19,700 lines of markdown fed to every news run. + +## Decision + +Adopt a bounded-context prompt library under `.github/prompts/` and **exactly one pull request per run**. + +### 1. 
Prompt library + +Eight core modules + one Tier-C extension + a `README.md`: + +``` +.github/prompts/ +├── 00-base-contract.md role, ethics, GDPR/ISMS, AI-FIRST, pipeline order +├── 01-bash-and-shell-safety.md bash tool call format, AWF-safe shell, UTF-8 +├── 02-mcp-access.md MCP inventory, tool naming, in-prompt health gate +├── 03-data-download.md subfolder naming, download pipeline, lookback +├── 04-analysis-pipeline.md methodology, 9 core artifacts, Pass 1 / Pass 2 +├── 05-analysis-gate.md single blocking gate before any article +├── 06-article-generation.md sections, banned patterns, visualisation +├── 07-commit-and-pr.md stage → commit → ONE create_pull_request +├── README.md catalogue + dependency matrix + phase diagram +└── ext/ + └── tier-c-aggregation.md 14-artifact gate, period multipliers, cross-type +``` + +- Every module ≤ 300 lines, declarative rules only, no audit history, no PR/run IDs, no version tags. +- Modules link to (do not copy) authoritative sources: `analysis/methodologies/`, `analysis/templates/`, `.github/copilot-mcp.json`, ISMS policies. + +### 2. Workflow refactor + +- All 12 news workflows declare `imports:` in frontmatter; `gh aw compile` resolves them into `{{#runtime-import …}}` directives in the generated `.lock.yml`. +- `safe-outputs.create-pull-request.max: 1` on every news workflow (was 2 / 3 / 5). +- Background keep-alive pinger removed; MCP pre-warm kept at ≤ 2 minutes (≤ 6 retries, 20 s apart). +- Workflow bodies reduced to ≤ 50 lines each (schedule + inputs + time budget + dedup path). +- `news-translate` imports only the four modules it needs (base contract, bash/shell, MCP, commit & PR) and issues exactly one PR batching every language produced in the run. + +### 3. Single blocking gate + +`05-analysis-gate.md` is the only separator between analysis and article generation. 
It checks: 9 artifacts exist, no `AI_MUST_REPLACE` markers, evidence citations (`dok_id`) in SWOT + significance files, Mermaid diagrams present, Pass 2 completed. The article-dedup / `analysis-only` path uses the same single PR with a different label. + +### 4. CI enforcement + +`compile-agentic-workflows.yml` fails the build on any of: + +- prompt module > 300 lines +- news workflow body > 200 lines +- `create-pull-request.max ≠ 1` +- occurrences of `Heartbeat`, `keep-alive pinger`, `post-heartbeat rebase`, `🫀` + +## Consequences + +### Positive + +- **No more data loss** on long runs — one PR per run, no dropped commits. +- **Token discipline** — prompt surface reduced from ~19,700 → ≈4,000 lines (~80 % reduction). +- **Maintainability** — each rule lives in exactly one module; dependencies are declared, not referenced. +- **Onboarding** — the dependency matrix in `.github/prompts/README.md` lets new contributors see what rules apply where without reading 4,350 lines. +- **Drift prevention** — CI enforcement blocks regressions. + +### Negative / accepted trade-offs + +- **Lost "progressive PR" resilience pattern.** The multi-PR heartbeat was attempting to survive the ~30-minute safeoutputs session idle window. We accept the loss because it never actually worked — the second commit batch was silently dropped. The real fix for session expiry is shorter workflows + scope trimming, which is now the explicit deadline policy in `07-commit-and-pr.md`. +- **Harder to add one-off rules to a single workflow.** The new model is factored around shared modules; a workflow-unique tweak requires either inlining it into that workflow's body (within the 200-line cap) or factoring it into a module. This is the intended friction. + +## Risks & mitigations + +| Risk | Mitigation | +|------|------------| +| MCP session expiry without heartbeat | Tight time budgets + scope-trim policy + ≤ 2 min pre-warm; deadline rule in `07-commit-and-pr.md` forces commit + PR by minute ~55. 
| +| `imports:` resolution differences across gh-aw versions | `compile-agentic-workflows.yml` pins gh-aw via `GH_AW_VERSION="v0.69.2"`. | +| Hidden rules in the 4,350-line file dropped accidentally | Phase A migrated every H2/H3 explicitly; review is backed by the CI module-size/banned-string check. | +| `news-translate` capacity | If 12 languages exceed the 60-minute budget, translation is split across multiple scheduled runs (already cron'd twice daily + weekend catch-up) rather than across multiple PRs in one run. | + +## Compliance & governance + +- **Change type** per `Change_Management.md`: Normal change. Requires CEO approval before merge. +- **ISMS mapping**: affects CIS Controls v8.1 §16 Application Software Security (agentic pipeline), NIST CSF 2.0 PR.PS-01 (Configuration management), ISO 27001:2022 A.5.33 (documented information). +- **Risk review**: no CIA triad rating change (data flows unchanged). Attack surface unchanged (same MCP servers, same network allowlist). Threat model unchanged — supersedes the internal "progressive PR" resilience assumption only. +- **Open-source policy**: no new dependencies. 
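The CI enforcement list in §CI enforcement can be sketched as two shell predicates. This is a hypothetical sketch: the authoritative implementation is the block in `compile-agentic-workflows.yml`, and the banned-string list here is abbreviated (the emoji marker is omitted).

```shell
# check_line_cap FILE CAP: fail if FILE exceeds CAP lines.
check_line_cap() {
  file="$1"; cap="$2"
  lines=$(wc -l < "$file")
  if [ "$lines" -gt "$cap" ]; then
    printf 'line cap exceeded (%s > %s): %s\n' "$lines" "$cap" "$file"
    return 1
  fi
}

# check_banned_strings FILE: fail if heartbeat-era vocabulary survives.
check_banned_strings() {
  file="$1"
  if grep -q -e 'Heartbeat' -e 'keep-alive pinger' -e 'post-heartbeat rebase' "$file"; then
    printf 'banned heartbeat vocabulary in: %s\n' "$file"
    return 1
  fi
}
```

A CI step would run `check_line_cap` with cap 300 over `.github/prompts/**/*.md`, cap 200 over the news workflow bodies, and `check_banned_strings` over every workflow source, failing the job on any non-zero return.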
+ +## References + +- `.github/prompts/README.md` — catalogue + dependency matrix + phase sequence diagram +- `.github/workflows/compile-agentic-workflows.yml` — CI enforcement block +- `.github/skills/github-agentic-workflows/SKILL.md` §"Imports (Reusable Components)" — `imports:` semantics +- `.github/skills/gh-aw-workflow-authoring/SKILL.md` — authoring pattern with a link to `.github/prompts/` +- `analysis/methodologies/ai-driven-analysis-guide.md` — authoritative DIW methodology (unchanged) +- `analysis/templates/` — authoritative artifact templates (unchanged) From 2c081d9ac9d2d1e24c5b22ee6e3472195bc47d3c Mon Sep 17 00:00:00 2001 From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com> Date: Tue, 21 Apr 2026 23:21:36 +0000 Subject: [PATCH 04/21] Deep review: fix AWF shell rules, broken script refs, MCP table, dedup; add agent-file rationale Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/56cf966e-03e9-4480-9333-d88493499f1d Co-authored-by: pethers <1726836+pethers@users.noreply.github.com> --- .github/prompts/00-base-contract.md | 4 +- .github/prompts/01-bash-and-shell-safety.md | 51 ++++++++++--------- .github/prompts/02-mcp-access.md | 28 ++++++---- .github/prompts/04-analysis-pipeline.md | 9 +--- .github/prompts/05-analysis-gate.md | 24 ++++++--- .github/prompts/06-article-generation.md | 4 +- .github/prompts/README.md | 13 +++++ .github/prompts/ext/tier-c-aggregation.md | 28 ++++++---- .github/workflows/news-translate.md | 2 +- ...0002-modular-prompts-and-single-pr-runs.md | 17 +++++++ 10 files changed, 115 insertions(+), 65 deletions(-) diff --git a/.github/prompts/00-base-contract.md b/.github/prompts/00-base-contract.md index 512a2dd0c..40a7be8b5 100644 --- a/.github/prompts/00-base-contract.md +++ b/.github/prompts/00-base-contract.md @@ -37,8 +37,8 @@ No step may be skipped, reordered, or executed in parallel with its successor. 
## Output contract - Commit real files on disk under `analysis/daily/` and/or `news/`. -- End the run with exactly one safe output call — see module `07-commit-and-pr.md`. -- Never fabricate data. If MCP is unreachable and nothing was produced, call `safeoutputs___noop` once and exit. +- End the run with exactly one safe output call (see `07-commit-and-pr.md` for the single-PR / no-op policy). +- Never fabricate data. If MCP is unreachable and nothing was produced, the no-op exit rule in `07-commit-and-pr.md` applies. ## Language & formatting diff --git a/.github/prompts/01-bash-and-shell-safety.md b/.github/prompts/01-bash-and-shell-safety.md index 10934528e..efd1db922 100644 --- a/.github/prompts/01-bash-and-shell-safety.md +++ b/.github/prompts/01-bash-and-shell-safety.md @@ -11,44 +11,47 @@ bash({ }) ``` -Rules: - | # | Rule | |---|------| | 1 | `command` is a single string (never an array of tokens). | | 2 | `description` is a short non-empty sentence. | | 3 | Missing either field → tool-call validation error → fix and retry. | -| 4 | Use `mode: "sync"` by default; increase `initial_wait` (e.g. 120 s) for builds, MCP warm-ups, and analysis pipelines. | -| 5 | Chain dependent commands with `&&` inside one `command` string to avoid lost context. | +| 4 | Use `mode: "sync"` by default; raise `initial_wait` (e.g. 120 s) for builds, MCP warm-ups, and analysis pipelines. | +| 5 | Chain dependent commands with `&&` inside one `command` string; separate sessions do not share state unless you pass the same `shellId`. 
| + +## Shell hygiene -## AWF shell safety +| Do | Avoid | +|----|-------| +| Quote every expansion: `"$VAR"`, `"${ARR[@]}"` | Bare `$VAR` adjacent to other text — splitting / glob surprises | +| Use `${VAR:-default}` for defaults | Multi-line `if [ -z "$VAR" ]; then VAR=…; fi` for a simple fallback | +| `set -Eeuo pipefail` at the top of any multi-step inline script | Ignoring non-zero exits | +| `LC_ALL=C.UTF-8 LANG=C.UTF-8` when the step writes Swedish text | Leaving the default C locale, which may corrupt `ö`, `ä`, `å` | +| `$(cmd)` for command substitution | Deprecated backticks `` `cmd` `` | +| Explicit redirection (`> /tmp/out 2> /tmp/err`) | Leaving stderr on the runner log unintentionally | -The agentic workflow firewall rewrites commands. Write commands that do not depend on command substitution, brace expansion, or process substitution. +Parameter expansion (`${VAR}`, `${VAR:-x}`, `${VAR##*/}`, …) and command substitution (`$(cmd)`) are **safe** under the agentic-workflow firewall — the firewall inspects outbound network egress, not shell syntax. Process substitution `<(…)` is best avoided because some runners disable `/dev/fd`. -| Use | Instead of | -|-----|------------| -| `$VAR` | `${VAR}` | -| `find DIR -name '*.md' -exec cat {} +` | `for f in "$DIR"/*.md; do cat "$f"; done` with `$(...)` | -| Write intermediate result to a temp file, then `read VAR < /tmp/file` | `VAR=$(command)` | -| `if [ -z "$VAR" ]; then VAR=default; fi` | `${VAR:-default}` | -| `printf '%s\n' "$VAR"` | `echo "$VAR"` when the value may contain `-e`, `-n`, backslashes | +## Secret safety + +- Never pass secrets through `$(…)` into a log-visible command — echoing `curl -H "Authorization: $(…)"` will leak if the step is rerun in debug. +- Use env blocks (`env:` on the step) or `${{ secrets.FOO }}` directly; the runner masks secret values in output. ## Temporary files - Use `/tmp/<descriptive-name>-$$` (PID suffix) for per-step temp files. -- Delete them before the run ends. 
-- Never write temp files under the repo path — they will be staged by `git add`. +- Delete them before the run ends (or rely on the runner wipe). +- Never write temp files under the repo working tree — they will be picked up by `git add` and leak into the PR. ## UTF-8 -- All created files must be native UTF-8; never substitute HTML entities for Swedish characters. -- Set `LC_ALL=C.UTF-8 LANG=C.UTF-8` at the top of any bash step that manipulates text files. - -## Self-check +- All committed files must be native UTF-8 (`ö`, `ä`, `å`). Never substitute HTML entities (`ö`) for Swedish characters. +- Set `LC_ALL=C.UTF-8 LANG=C.UTF-8` on any bash step that edits markdown or HTML. -Before issuing a `bash` call, verify: +## Self-check (before issuing a `bash` call) -1. Both `command` and `description` fields are present and non-empty. -2. No `$(...)`, `${VAR}`, or `<(...)` tokens in the command string. -3. Any file path is absolute or clearly rooted at `$GITHUB_WORKSPACE` / the current working directory. -4. Output redirection (`>`, `| tee`) writes to `/tmp/`, not the repo root. +1. Both `command` and `description` are present and non-empty. +2. Every variable expansion that might contain whitespace or `*` is double-quoted. +3. No backticks, no `<(…)` process substitution. +4. Any file path is absolute or clearly rooted at `$GITHUB_WORKSPACE`. +5. Output redirection (`>`, `| tee`) writes to `/tmp/`, not the repo root. diff --git a/.github/prompts/02-mcp-access.md b/.github/prompts/02-mcp-access.md index edec453d2..d2a425b0e 100644 --- a/.github/prompts/02-mcp-access.md +++ b/.github/prompts/02-mcp-access.md @@ -1,16 +1,22 @@ # 02 — MCP Access -Authoritative server list: [`.github/copilot-mcp.json`](../copilot-mcp.json). Do not duplicate config here. +Authoritative per-workflow surface: the `mcp-servers:` + `tools:` blocks in that workflow's frontmatter. 
`.github/copilot-mcp.json` is the **local Copilot** surface (used by `assign_copilot_to_issue` / agent files in `.github/agents/`), not by news workflow runs. ## Servers & tool naming -| Server | Transport | Tool names use | -|--------|-----------|----------------| -| `riksdag-regering` | HTTP (Render.com) | snake_case (`get_sync_status`, `search_dokument`, `get_voteringar`) | -| `scb` | container | snake_case (`search_tables`, `get_table_info`, `query_table`) | -| `world-bank` | container | kebab-case (`get-economic-data`, `get-country-info`, `search-indicators`) | -| `github` | HTTP | standard GitHub MCP toolset | -| `filesystem` / `memory` / `sequential-thinking` / `playwright` | local | standard helpers | +News workflows declare three data MCP servers + the built-in `github` toolset (via `tools.github.toolsets: [all]`) + `bash` + `agentic-workflows` + `repo-memory`. + +| Server | Transport | Declared in | Tool-name style | Example tools | +|--------|-----------|-------------|-----------------|---------------| +| `riksdag-regering` | HTTP (Render) | workflow `mcp-servers:` | `snake_case` | `get_sync_status`, `search_dokument`, `get_voteringar`, `get_dokument_innehall` | +| `scb` | container (`@jarib/pxweb-mcp`) | workflow `mcp-servers:` | `snake_case` | `search_tables`, `get_table_info`, `query_table` | +| `world-bank` | container (`worldbank-mcp`) | workflow `mcp-servers:` | `kebab-case` | `get-economic-data`, `get-country-info`, `search-indicators` | +| `github` | HTTP (Copilot MCP) | workflow `tools.github` | standard | full GitHub MCP toolset | +| `repo-memory` | local helper | workflow `tools.repo-memory` | standard | persistent cross-run memory on `memory/news-generation` | +| `bash` | local helper | workflow `tools.bash` | standard | shell execution | +| `safeoutputs` | runner | always available | `snake_case` | `safeoutputs___create_pull_request`, `safeoutputs___noop`, `safeoutputs___dispatch_workflow` | + +`filesystem`, `memory`, `sequential-thinking`, 
`playwright` are declared in `.github/copilot-mcp.json` for the **local Copilot / `assign_copilot_to_issue`** channel. They are **not** available to news workflows unless the workflow itself declares them under `mcp-servers:`. Authoritative server inventory: [`.github/copilot-mcp.json`](../copilot-mcp.json) for local; the workflow frontmatter for the actual per-run surface. IMF is **not** an MCP server. Fetch IMF data via the TypeScript client: `npx tsx scripts/imf-fetch.ts …` (see [Economic Data Contract](../aw/ECONOMIC_DATA_CONTRACT.md)). @@ -19,16 +25,16 @@ IMF is **not** an MCP server. Fetch IMF data via the TypeScript client: `npx tsx Run once at workflow start, then proceed — do not loop forever. 1. Call `get_sync_status({})`. Retry up to **3 times**, 20 s apart. Server is pre-warmed by the CI `steps:` block. -2. If the third attempt fails, call `safeoutputs___noop({"message": "MCP unavailable after pre-warm + 3 retries"})` and exit. +2. If the third attempt fails, apply the MCP-unreachable no-op policy from `07-commit-and-pr.md` and exit. 3. Once `get_sync_status` succeeds, proceed. Do not spend more than **2 minutes** on warm-up. ## Data sourcing rules | Rule | |------| -| All political content comes from live MCP data. Never fabricate, never reuse cached articles as source material. | | Riksdag tool arguments are documented under [`.github/skills/riksdag-regering-mcp/`](../skills/riksdag-regering-mcp/). | -| Treat MCP failure mid-run as partial data: continue with what you have, document gaps in the analysis manifest, never silently drop documents. | +| Treat MCP failure mid-run as partial data: continue with what you have, document gaps in `data-download-manifest.md`, never silently drop documents. | +| Source authority and no-fabrication rule: see `00-base-contract.md` rules 1 + 3. 
| ## Pre-warm step (CI job, not prompt) diff --git a/.github/prompts/04-analysis-pipeline.md b/.github/prompts/04-analysis-pipeline.md index 47752790e..0c489a461 100644 --- a/.github/prompts/04-analysis-pipeline.md +++ b/.github/prompts/04-analysis-pipeline.md @@ -53,14 +53,7 @@ Pass 2 is mandatory. Completing earlier is a quality failure. ## Evidence standard -Every analytical claim must cite at least one of: - -- A real `dok_id` (e.g. `H901FiU1`) resolvable via `get_dokument`. -- Named MP / minister / party with role. -- Vote counts from `get_voteringar`. -- A primary-source URL (riksdagen.se, regeringen.se, scb.se, worldbank.org). - -Generic phrasing without evidence is a Pass-2 improvement target. +Every analytical claim must cite at least one of: a real `dok_id` (e.g. `H901FiU1`) resolvable via `get_dokument`; a named MP / minister / party with role; vote counts from `get_voteringar`; or a primary-source URL (riksdagen.se, regeringen.se, scb.se, worldbank.org, data.imf.org). Generic phrasing without evidence is a Pass-2 improvement target. Gate enforcement lives in `05-analysis-gate.md` check 4. ## Economic context diff --git a/.github/prompts/05-analysis-gate.md b/.github/prompts/05-analysis-gate.md index 5cb473c7a..573930095 100644 --- a/.github/prompts/05-analysis-gate.md +++ b/.github/prompts/05-analysis-gate.md @@ -17,17 +17,29 @@ This is the **only** gate separating analysis from article generation. If it fai 5. **Mermaid diagrams** — every daily synthesis file contains ≥ 1 Mermaid diagram with colour-coded `style` directives. 6. **Pass-2 done** — agent has read each core artifact back after creation and committed improvements. (Enforced by file mtime diff: final file mtime > creation time + 3 min, OR two git-history snapshots on disk.) -## Reference script +## Implementation -Implemented in [`scripts/validate-analysis-gate.ts`](../../scripts/validate-analysis-gate.ts) (to be added if missing; otherwise inline bash equivalent is acceptable). 
Invocation: +No dedicated validator script exists yet — run the six checks above as an inline bash gate. Canonical shape: ``` -npx tsx scripts/validate-analysis-gate.ts \ - --dir "$ANALYSIS_DIR" \ - --manifest "$ANALYSIS_DIR/data-download-manifest.md" +set -Eeuo pipefail +REQ="synthesis-summary.md swot-analysis.md risk-assessment.md threat-analysis.md \ + stakeholder-perspectives.md significance-scoring.md classification-results.md \ + cross-reference-map.md data-download-manifest.md" +FAIL=0 +for f in $REQ; do + [ -s "$ANALYSIS_DIR/$f" ] || { echo "❌ missing/empty: $f"; FAIL=1; } +done +grep -rIn -e 'AI_MUST_REPLACE' -e '\[REQUIRED\]' -e 'TODO:' -e 'Lorem ipsum' "$ANALYSIS_DIR" \ + && FAIL=1 +grep -lE 'H[0-9]{3}[A-Za-z]{2,}[0-9]+' "$ANALYSIS_DIR/swot-analysis.md" >/dev/null \ + || { echo "❌ swot-analysis.md: no dok_id citation"; FAIL=1; } +grep -lE '^```mermaid' "$ANALYSIS_DIR/synthesis-summary.md" >/dev/null \ + || { echo "❌ synthesis-summary.md: missing Mermaid block"; FAIL=1; } +[ "$FAIL" -eq 0 ] || exit 1 ``` -Exit code 0 = pass, non-zero = fail with per-check report. +Exit code 0 = pass, non-zero = fail with per-check report. If a future run needs reuse, factor the block into `scripts/validate-analysis-gate.ts` and update this module. ## Outcome diff --git a/.github/prompts/06-article-generation.md b/.github/prompts/06-article-generation.md index c451c4c8f..baf8fa8a4 100644 --- a/.github/prompts/06-article-generation.md +++ b/.github/prompts/06-article-generation.md @@ -29,11 +29,11 @@ Articles derive from analysis. 
Scripts produce HTML scaffolding; the AI writes e | Strategic context | `risk-assessment.md` + `threat-analysis.md` | | Economic context | `economic-data.json` + commentary paragraph | | SEO title / meta description | `synthesis-summary.md` §"AI-Recommended Article Metadata" | - | Analysis references block | Auto-injected by `scripts/inject-analysis-references.ts` (verify after) | + | Analysis references block | Hand-written footer linking to the 9 analysis files on GitHub (see "Mandatory sections" below) | 3. **Replace every `AI_MUST_REPLACE` marker** with evidence-cited analysis. The gate in step 7 enforces zero markers. -4. **Article Pass 2**: read every generated article HTML back in full. Improve: tighten lede, strengthen quotes, expand stakeholder coverage, replace boilerplate sentences, verify every `dok_id` reference resolves. Minimum 8 minutes. +4. **Article Pass 2** — AI-FIRST principle applies (see `00-base-contract.md` rule 5). Read every generated article HTML back in full. Improve: tighten lede, strengthen quotes, expand stakeholder coverage, replace boilerplate sentences, verify every `dok_id` reference resolves. Minimum 8 minutes. ## Mandatory sections (per article) diff --git a/.github/prompts/README.md b/.github/prompts/README.md index 180bb213b..6039fed10 100644 --- a/.github/prompts/README.md +++ b/.github/prompts/README.md @@ -58,6 +58,19 @@ flowchart LR style I fill:#1a1e3d,stroke:#ffbe0b,color:#e0e0e0 ``` +## Why multiple prompt imports (not a single Copilot Agent File) + +gh-aw supports [two distinct import styles](https://github.github.com/gh-aw/guides/packaging-imports/): + +| Style | Source | Per-workflow cap | Frontmatter shape | Use for | +|-------|--------|------------------|-------------------|---------| +| **Plain imports** | Any `.md` outside `.github/agents/` | unlimited | plain Markdown (no special frontmatter required) | Shared rule modules — this directory. 
| +| **Copilot Agent File** | `.github/agents/<name>.md` | **exactly one** per workflow | `name`, `description`, `tools`, `mcp-servers` | Per-issue delegation via `assign_copilot_to_issue`, or a single specialised persona for one workflow. | + +News workflows need eight bounded-context modules (role, shell, MCP, download, analysis, gate, article, commit) plus an optional Tier-C extension. The "one agent file per workflow" limit makes that infeasible as a single agent file, so we use plain imports. The 24 files under `.github/agents/` remain the persona catalogue for `assign_copilot_to_issue` and for any future workflow that genuinely needs a single reusable persona per run. + +If a workflow ever needs a *single* reusable persona (e.g. a pure code-review workflow), that workflow may import one agent file from `.github/agents/` **in addition to** any plain prompt imports — gh-aw allows mixing the two styles. + ## Authoring rules for new / edited modules | Rule | Enforced by | diff --git a/.github/prompts/ext/tier-c-aggregation.md b/.github/prompts/ext/tier-c-aggregation.md index 5adb33297..812878ee2 100644 --- a/.github/prompts/ext/tier-c-aggregation.md +++ b/.github/prompts/ext/tier-c-aggregation.md @@ -62,21 +62,27 @@ For `news-week-ahead`, `news-month-ahead`, `news-weekly-review`, `news-monthly-r ## Tier-C gate -Run after the core analysis gate: +No dedicated Tier-C validator script exists — run the core-gate bash block from `05-analysis-gate.md`, then the additional checks below: ``` -npx tsx scripts/validate-tier-c-gate.ts --dir "$ANALYSIS_DIR" +set -Eeuo pipefail +EXTRA="README.md executive-brief.md scenario-analysis.md \ + comparative-international.md methodology-reflection.md" +FAIL=0 +for f in $EXTRA; do + [ -s "$ANALYSIS_DIR/$f" ] || { echo "❌ tier-c missing: $f"; FAIL=1; } +done +# ≥ 3 scenarios with probability + leading indicator +awk '/^##? 
.*Scenario/{c++} END{exit (c<3)}' "$ANALYSIS_DIR/scenario-analysis.md" \
+  || { echo "❌ scenario-analysis.md: fewer than 3 scenarios"; FAIL=1; }
+# ≥ 2 distinct external countries referenced in comparative-international.md
+[ "$(grep -oE '\b(Finland|Norway|Denmark|Germany|France|Netherlands|UK|USA|Estonia)\b' \
+  "$ANALYSIS_DIR/comparative-international.md" | sort -u | wc -l)" -ge 2 ] \
+  || { echo "❌ comparative-international.md: fewer than 2 countries"; FAIL=1; }
+[ "$FAIL" -eq 0 ] || exit 1
 ```
 
-Checks:
-
-1. All 14 artifacts exist and non-empty.
-2. `scenario-analysis.md` contains ≥ 3 scenarios, each with probability + leading indicator.
-3. `comparative-international.md` references ≥ 2 external countries' indicators.
-4. `methodology-reflection.md` lists ≥ 3 uncertainty items + ≥ 1 bias caveat.
-5. `cross-reference-map.md` cites ≥ 3 sibling/prior analyses for aggregation workflows.
-
-Fail → fix, re-run. Still failing → commit as `analysis-only` via the single-PR rule in `07-commit-and-pr.md`.
+If the block is promoted to `scripts/validate-tier-c-gate.ts`, update this module accordingly.
 
 ## Article expectations
 
diff --git a/.github/workflows/news-translate.md b/.github/workflows/news-translate.md
index 75adf97da..ad5984293 100644
--- a/.github/workflows/news-translate.md
+++ b/.github/workflows/news-translate.md
@@ -319,7 +319,7 @@ Translation is a pure-derivative workflow:
 
 ## Rules specific to this workflow
 
 - No original analysis. Never produce files under `analysis/daily/`.
-- Validate every translation against the source with `scripts/validate-translation.ts` before commit.
+- Validate every translation against the source with `scripts/validate-news-translations.ts` before commit.
 - Keep the PR under the safe-outputs 100-file cap. If more translations are pending than fit in one PR, translate the highest-priority batch and leave the rest for the next scheduled run.
 - Skip any language whose translation already exists and is non-empty unless `force` is explicitly requested.
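The skip rule can be sketched as a small guard (a sketch only; `SLUG_DIR`, `SLUG`, `FORCE` and the helper name are illustrative assumptions, not the workflow's actual variables):

```shell
# Skip a target language when a non-empty translation already exists,
# unless force mode is explicitly requested.
SLUG_DIR="${SLUG_DIR:-/tmp/news-demo/2026/04/22}"   # hypothetical article dir
SLUG="${SLUG:-example-article}"                     # hypothetical slug
FORCE="${FORCE:-false}"                             # assumed opt-in flag

should_translate() {
  lang="$1"
  target="$SLUG_DIR/$SLUG.$lang.html"
  if [ -s "$target" ] && [ "$FORCE" != "true" ]; then
    echo "skip $lang: non-empty translation exists"
    return 1
  fi
  return 0
}
```

The `-s` test covers both "missing" and "empty" in one predicate, matching the rule's "exists and is non-empty" wording.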
diff --git a/docs/adr/0002-modular-prompts-and-single-pr-runs.md b/docs/adr/0002-modular-prompts-and-single-pr-runs.md index 3fa684872..70c2da3b5 100644 --- a/docs/adr/0002-modular-prompts-and-single-pr-runs.md +++ b/docs/adr/0002-modular-prompts-and-single-pr-runs.md @@ -103,3 +103,20 @@ Eight core modules + one Tier-C extension + a `README.md`: - `.github/skills/gh-aw-workflow-authoring/SKILL.md` — authoring pattern with a link to `.github/prompts/` - `analysis/methodologies/ai-driven-analysis-guide.md` — authoritative DIW methodology (unchanged) - `analysis/templates/` — authoritative artifact templates (unchanged) +- gh-aw packaging-imports guide: <https://github.github.com/gh-aw/guides/packaging-imports/> +- gh-aw Copilot agent files reference: <https://github.github.com/gh-aw/reference/copilot-custom-agents/> + +## Addendum — 2026-04-21 correctness pass + +A follow-up deep review fixed defects that slipped through the initial +migration: + +| # | Defect | Fix | +|---|--------|-----| +| 1 | `01-bash-and-shell-safety.md` told the agent to avoid `${VAR}` and `${VAR:-default}` (standard parameter expansion) and to work around `$(…)` via temp files. AWF inspects network egress, not shell syntax; `$(…)` is used throughout our own workflow `steps:`. | Rewrote module 01: quote expansions, use `set -Eeuo pipefail`, UTF-8 locale, keep secrets out of log-visible substitutions. Removed the factually wrong table rows. | +| 2 | `scripts/validate-analysis-gate.ts`, `scripts/validate-tier-c-gate.ts`, `scripts/inject-analysis-references.ts` and `scripts/validate-translation.ts` are referenced from prompts / workflow bodies but do not exist in the repo. | Module 05 and the Tier-C extension now carry inline bash gate scripts that implement the documented checks directly; module 06 points at the hand-written analysis-references footer; news-translate.md's validator reference is corrected to the existing `scripts/validate-news-translations.ts`. 
| +| 3 | The `dok_id` evidence requirement was restated with slight variations in modules 00, 04, 05, 06. The "never fabricate" rule was in 00 and 02. The `safeoutputs___noop` policy was in 00, 02, 07. | Canonical statement lives in exactly one module; the others cross-reference it (e.g. "gate enforcement lives in `05-analysis-gate.md` check 4"). | +| 4 | Module 02's tool-naming table listed `filesystem`, `memory`, `sequential-thinking`, `playwright` as available helpers, but news workflows do not declare those under `mcp-servers:`; they exist only on the local Copilot channel (`.github/copilot-mcp.json`). | Replaced the table with a per-surface view (what the news workflow actually sees vs what the local Copilot sees) and added the `repo-memory`, `bash`, `safeoutputs` rows that were previously implicit. | +| 5 | `.github/prompts/README.md` did not explain why we use plain imports instead of a single Copilot Agent File. gh-aw docs cap Copilot Agent Files at **one per workflow**, which is incompatible with our 8-module split. | Added a "Why multiple prompt imports" section describing the two import styles, the one-per-workflow cap, and the coexistence with `.github/agents/*.md` persona files used by `assign_copilot_to_issue`. | + +All 12 workflows still compile clean (`gh aw compile`: 0 errors, 0 warnings); all four CI invariants still hold. 
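Defect 1's corrected shell conventions reduce to a short preamble like the following (a sketch under stated assumptions; the demo path and the `doc_count` probe are illustrative, not the module's literal text):

```shell
#!/usr/bin/env bash
# Fail fast, propagate failures through pipelines, trap ERR in functions.
set -Eeuo pipefail
# Deterministic UTF-8 text handling (Swedish document titles).
export LC_ALL=C.UTF-8

# ${VAR:-default} and $(...) are ordinary, safe shell -- just quote them.
ANALYSIS_DIR="${ANALYSIS_DIR:-/tmp/analysis-demo/$(date +%F)}"
mkdir -p "$ANALYSIS_DIR"

# Keep secrets out of substitutions whose output lands in logs or traces.
doc_count="$(find "$ANALYSIS_DIR" -name '*.md' | wc -l)"
echo "markdown artifacts under $ANALYSIS_DIR: $doc_count"
```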
From 51072016d782cb2c034efe4dfa7cad600843dd1a Mon Sep 17 00:00:00 2001 From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com> Date: Wed, 22 Apr 2026 00:22:30 +0000 Subject: [PATCH 05/21] Merge main + address review: fix markdown links, stray paren, full gate impl, drop memory row Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/e86fb72d-4de7-470b-ada1-a7ba9cc30b11 Co-authored-by: pethers <1726836+pethers@users.noreply.github.com> --- .github/prompts/04-analysis-pipeline.md | 3 +- .github/prompts/05-analysis-gate.md | 67 ++++++++++++++++--- .github/prompts/07-commit-and-pr.md | 5 +- .../workflows/compile-agentic-workflows.yml | 2 +- .github/workflows/economic-context-audit.yml | 2 +- .../workflows/news-article-generator.lock.yml | 9 ++- .../workflows/news-committee-reports.lock.yml | 9 ++- .../workflows/news-evening-analysis.lock.yml | 9 ++- .../workflows/news-interpellations.lock.yml | 9 ++- .github/workflows/news-month-ahead.lock.yml | 9 ++- .../workflows/news-monthly-review.lock.yml | 9 ++- .github/workflows/news-motions.lock.yml | 9 ++- .github/workflows/news-propositions.lock.yml | 9 ++- .../workflows/news-realtime-monitor.lock.yml | 9 ++- .gitignore | 3 + analysis/imf/README.md | 2 +- .../methodologies/ai-driven-analysis-guide.md | 2 +- 17 files changed, 132 insertions(+), 35 deletions(-) diff --git a/.github/prompts/04-analysis-pipeline.md b/.github/prompts/04-analysis-pipeline.md index 0c489a461..1ff7aedfe 100644 --- a/.github/prompts/04-analysis-pipeline.md +++ b/.github/prompts/04-analysis-pipeline.md @@ -39,7 +39,8 @@ Plus `documents/` subfolder with **one `.md` per `dok_id`** using [`per-file-pol 1. **Read all 6 methodologies first** (one tool call per file, do not skip). 2. **Read all 8 templates first.** 3. **Pass 1 — Create** all 9 artifacts + every per-document file. Minimum 15 minutes of real work. -4. 
**Pass 2 — Improve**: read every Pass-1 file back in full and strengthen evidence, diagrams, cross-references, stakeholder coverage, uncertainty disclosure. Minimum 7 minutes. +4. **Snapshot Pass-1** — copy every Pass-1 file into `$ANALYSIS_DIR/pass1/` before starting Pass 2: `mkdir -p "$ANALYSIS_DIR/pass1" && cp "$ANALYSIS_DIR"/*.md "$ANALYSIS_DIR/pass1/"`. The `pass1/` directory is the fallback evidence the gate uses when mtime windows are too tight. Do **not** stage `pass1/` in the PR (see `07-commit-and-pr.md`). +5. **Pass 2 — Improve**: read every Pass-1 file back in full and strengthen evidence, diagrams, cross-references, stakeholder coverage, uncertainty disclosure. Minimum 7 minutes. Pass 2 is mandatory. Completing earlier is a quality failure. diff --git a/.github/prompts/05-analysis-gate.md b/.github/prompts/05-analysis-gate.md index 573930095..73d85d9d9 100644 --- a/.github/prompts/05-analysis-gate.md +++ b/.github/prompts/05-analysis-gate.md @@ -19,27 +19,74 @@ This is the **only** gate separating analysis from article generation. If it fai ## Implementation -No dedicated validator script exists yet — run the six checks above as an inline bash gate. Canonical shape: +No dedicated validator script exists yet — implement the six checks as an inline bash gate. 
Full implementation (covers checks 1–6): ``` set -Eeuo pipefail -REQ="synthesis-summary.md swot-analysis.md risk-assessment.md threat-analysis.md \ +REQ=(synthesis-summary.md swot-analysis.md risk-assessment.md threat-analysis.md \ stakeholder-perspectives.md significance-scoring.md classification-results.md \ - cross-reference-map.md data-download-manifest.md" + cross-reference-map.md data-download-manifest.md) +SYNTHESIS=(synthesis-summary.md swot-analysis.md risk-assessment.md threat-analysis.md \ + stakeholder-perspectives.md significance-scoring.md classification-results.md \ + cross-reference-map.md) +DOK_RE='H[0-9]{3}[A-Za-z]{2,}[0-9]+' FAIL=0 -for f in $REQ; do + +# Check 1 — artifact existence +for f in "${REQ[@]}"; do [ -s "$ANALYSIS_DIR/$f" ] || { echo "❌ missing/empty: $f"; FAIL=1; } done + +# Check 2 — per-document coverage against manifest +if [ -s "$ANALYSIS_DIR/data-download-manifest.md" ]; then + mapfile -t DOKS < <(grep -oE "$DOK_RE" "$ANALYSIS_DIR/data-download-manifest.md" | sort -u) + [ "${#DOKS[@]}" -gt 0 ] || { echo "❌ manifest has no dok_id entries"; FAIL=1; } + for d in "${DOKS[@]}"; do + [ -s "$ANALYSIS_DIR/documents/$d.md" ] || { echo "❌ documents/$d.md missing"; FAIL=1; } + done +fi + +# Check 3 — no stubs grep -rIn -e 'AI_MUST_REPLACE' -e '\[REQUIRED\]' -e 'TODO:' -e 'Lorem ipsum' "$ANALYSIS_DIR" \ - && FAIL=1 -grep -lE 'H[0-9]{3}[A-Za-z]{2,}[0-9]+' "$ANALYSIS_DIR/swot-analysis.md" >/dev/null \ - || { echo "❌ swot-analysis.md: no dok_id citation"; FAIL=1; } -grep -lE '^```mermaid' "$ANALYSIS_DIR/synthesis-summary.md" >/dev/null \ - || { echo "❌ synthesis-summary.md: missing Mermaid block"; FAIL=1; } + && { echo "❌ stub placeholders detected"; FAIL=1; } + +# Check 4 — evidence citations per quadrant / ranked item +awk -v re="$DOK_RE" ' + /^##[[:space:]]+(Strengths|Weaknesses|Opportunities|Threats)\b/ { sec=$0; next } + sec != "" && /^[[:space:]]*[-*][[:space:]]+/ && $0 !~ re { + printf "❌ swot-analysis.md %s: bullet missing dok_id: 
%s\n", sec, $0; bad=1 + } + END { exit bad+0 } +' "$ANALYSIS_DIR/swot-analysis.md" || FAIL=1 +awk -v re="$DOK_RE" ' + /^[[:space:]]*([0-9]+\.[[:space:]]+|[-*][[:space:]]+)/ && $0 !~ re { + printf "❌ significance-scoring.md ranked item missing dok_id: %s\n", $0; bad=1 + } + END { exit bad+0 } +' "$ANALYSIS_DIR/significance-scoring.md" || FAIL=1 + +# Check 5 — Mermaid + colour-coded style directives in every synthesis file +for f in "${SYNTHESIS[@]}"; do + p="$ANALYSIS_DIR/$f"; [ -s "$p" ] || continue + grep -qE '^```mermaid' "$p" || { echo "❌ $f: missing Mermaid block"; FAIL=1; } + grep -qE '^[[:space:]]*style[[:space:]]+' "$p" || { echo "❌ $f: missing Mermaid style directive"; FAIL=1; } +done + +# Check 6 — Pass-2 evidence (mtime ≥ birth + 180s, OR differing pass1 snapshot on disk) +for f in "${REQ[@]}"; do + p="$ANALYSIS_DIR/$f"; [ -s "$p" ] || continue + ok=0 + B=$(stat -c %W "$p" 2>/dev/null || echo 0) + M=$(stat -c %Y "$p" 2>/dev/null || echo 0) + [ "${B:-0}" -gt 0 ] && [ "${M:-0}" -ge $((B + 180)) ] && ok=1 + [ -s "$ANALYSIS_DIR/pass1/$f" ] && ! cmp -s "$ANALYSIS_DIR/pass1/$f" "$p" && ok=1 + [ "$ok" -eq 1 ] || { echo "❌ $f: Pass-2 evidence missing (mtime<birth+180s and no pass1/ snapshot)"; FAIL=1; } +done + [ "$FAIL" -eq 0 ] || exit 1 ``` -Exit code 0 = pass, non-zero = fail with per-check report. If a future run needs reuse, factor the block into `scripts/validate-analysis-gate.ts` and update this module. +Exit code 0 = pass, non-zero = fail with per-check report. Precondition for check 6: agent MUST save Pass-1 drafts to `$ANALYSIS_DIR/pass1/` before running Pass-2 improvements so the `cmp` fallback can fire when the same-session mtime window is too tight. If a future run needs reuse, factor the block into `scripts/validate-analysis-gate.ts` and update this module. 
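The check-6 precondition amounts to this sequence (a minimal sketch; the `ANALYSIS_DIR` default and file contents are demo assumptions):

```shell
# Pass 1 produces a draft artifact.
ANALYSIS_DIR="${ANALYSIS_DIR:-/tmp/gate-demo}"
mkdir -p "$ANALYSIS_DIR"
echo "draft claim" > "$ANALYSIS_DIR/synthesis-summary.md"

# Before Pass 2: snapshot the Pass-1 output.
mkdir -p "$ANALYSIS_DIR/pass1"
cp "$ANALYSIS_DIR"/*.md "$ANALYSIS_DIR/pass1/"

# Pass 2 edits the live file; the snapshot stays untouched.
echo "draft claim, now cited (H901FiU1)" > "$ANALYSIS_DIR/synthesis-summary.md"

# The cmp fallback fires because snapshot and live file now differ.
if cmp -s "$ANALYSIS_DIR/pass1/synthesis-summary.md" "$ANALYSIS_DIR/synthesis-summary.md"; then
  echo "no Pass-2 evidence"
else
  echo "Pass-2 evidence OK"
fi
```

This is why the snapshot must happen before any Pass-2 edit: a post-hoc copy would compare equal and defeat the fallback.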
## Outcome diff --git a/.github/prompts/07-commit-and-pr.md b/.github/prompts/07-commit-and-pr.md index 8a10392d0..d90322dab 100644 --- a/.github/prompts/07-commit-and-pr.md +++ b/.github/prompts/07-commit-and-pr.md @@ -20,9 +20,10 @@ Workflows declare `safe-outputs.create-pull-request.max: 1`. Attempting a second | Visualisation data | `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/*.json` | | Articles (core languages) | `news/$YYYY/$MM/$DD/$SLUG.{en,sv}.html` | | Translations (news-translate only) | `news/$YYYY/$MM/$DD/$SLUG.<lang>.html` | - | Repo-memory | `memory/news-generation/*.json` (branch `memory/news-generation`) | - Never stage `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/documents/` wholesale — it often contains 100+ files. Stage only `documents/*.md` **if** your `documents/` stays under the safe-outputs 100-file cap; otherwise stage only summary files. + Repo-memory persistence is handled separately by `tools.repo-memory` and pushed to the `memory/news-generation` branch by the safe-outputs runner job. **Do not** create, stage, or commit any `memory/news-generation/*.json` files in the content PR — there is no `memory/` directory in the working tree of `main`. + + Never stage `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/documents/` wholesale — it often contains 100+ files. Stage only `documents/*.md` **if** your `documents/` stays under the safe-outputs 100-file cap; otherwise stage only summary files. Never stage `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/pass1/` — it is a local gate-evidence snapshot (see `04-analysis-pipeline.md`), not a deliverable. 2. **100-file guard.** Before calling safeoutputs, count staged files. If the count > 99, unstage everything under `documents/` except `synthesis-summary.md` and re-check. 
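The 100-file guard above can be exercised in a scratch repo (a sketch; the demo commit, file counts and the `git restore --staged` fallback are illustrative assumptions, not the workflow's literal commands):

```shell
# Build a scratch repo with an oversized documents/ folder.
repo="$(mktemp -d)"
cd "$repo"
git init -q .
git -c user.email=demo@example.invalid -c user.name=demo commit -q --allow-empty -m init
DIR="analysis/daily/2026-04-22/propositions"
mkdir -p "$DIR/documents"
for i in $(seq 1 120); do echo "doc $i" > "$DIR/documents/doc$i.md"; done
echo "summary" > "$DIR/synthesis-summary.md"
git add -A

# Guard: if staged count exceeds the cap, drop documents/ and keep the summary.
staged_count="$(git diff --cached --name-only | wc -l)"
if [ "$staged_count" -gt 99 ]; then
  git restore --staged -- "$DIR/documents"
  staged_count="$(git diff --cached --name-only | wc -l)"
fi
echo "staged files after guard: $staged_count"
```

Unstaging the whole `documents/` folder (rather than file-by-file) keeps the guard a single pathspec operation and leaves `synthesis-summary.md` staged, matching the rule's fallback.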
diff --git a/.github/workflows/compile-agentic-workflows.yml b/.github/workflows/compile-agentic-workflows.yml index ade7e93af..0f447b1cd 100644 --- a/.github/workflows/compile-agentic-workflows.yml +++ b/.github/workflows/compile-agentic-workflows.yml @@ -36,7 +36,7 @@ jobs: # Pin to a specific version to prevent regressions from buggy releases. # v0.67.4 introduced GITHUB_COPILOT_INTEGRATION_ID which broke all workflows; # v0.68.0 fixes this. Update this pin when upgrading gh-aw. - GH_AW_VERSION="v0.69.2" + GH_AW_VERSION="v0.69.3" gh extension install github/gh-aw --pin "$GH_AW_VERSION" env: GH_TOKEN: ${{ secrets.COPILOT_MCP_GITHUB_PERSONAL_ACCESS_TOKEN || secrets.GITHUB_TOKEN }} diff --git a/.github/workflows/economic-context-audit.yml b/.github/workflows/economic-context-audit.yml index 544f91964..a652bfc7f 100644 --- a/.github/workflows/economic-context-audit.yml +++ b/.github/workflows/economic-context-audit.yml @@ -7,7 +7,7 @@ # Scope: single hardened workflow — no MCP calls, no write tokens except # the one needed to open a maintenance issue. Follows the project # workflow security standards (least privilege, step-security/harden-runner, -# SHA-pinned actions — see .github/prompts/ and ISMS CI/CD security guidance). +# SHA-pinned actions). See `.github/prompts/README.md` and ISMS CI/CD security guidance. 
# # Schema v2 cutover (2026-04-20 → 2026-05-31, grace window): # The validator currently accepts BOTH v1 artefacts (source.worldBank / diff --git a/.github/workflows/news-article-generator.lock.yml b/.github/workflows/news-article-generator.lock.yml index 745cb3045..6769792e5 100644 --- a/.github/workflows/news-article-generator.lock.yml +++ b/.github/workflows/news-article-generator.lock.yml @@ -507,7 +507,7 @@ jobs: mkdir -p /tmp/gh-aw/safeoutputs mkdir -p /tmp/gh-aw/mcp-logs/safeoutputs cat > "${RUNNER_TEMP}/gh-aw/safeoutputs/config.json" << 'GH_AW_SAFE_OUTPUTS_CONFIG_74b57913640f4749_EOF' - {"add_comment":{"max":1},"create_pull_request":{"draft":false,"expires":336,"labels":["agentic-news","analysis-data"],"max":1,"max_patch_size":4096,"protected_files":["package.json","bun.lockb","bunfig.toml","deno.json","deno.jsonc","deno.lock","global.json","NuGet.Config","Directory.Packages.props","mix.exs","mix.lock","go.mod","go.sum","stack.yaml","stack.yaml.lock","pom.xml","build.gradle","build.gradle.kts","settings.gradle","settings.gradle.kts","gradle.properties","package-lock.json","yarn.lock","pnpm-lock.yaml","npm-shrinkwrap.json","requirements.txt","Pipfile","Pipfile.lock","pyproject.toml","setup.py","setup.cfg","Gemfile","Gemfile.lock","uv.lock","CODEOWNERS","AGENTS.md","CLAUDE.md","GEMINI.md"],"protected_path_prefixes":[".github/",".agents/"]},"create_report_incomplete_issue":{},"dispatch_workflow":{"max":1,"workflow_files":{"news-translate":".lock.yml"},"workflows":["news-translate"]},"missing_data":{},"missing_tool":{},"noop":{"max":1,"report-as-issue":"true"},"push_repo_memory":{"memories":[{"dir":"/tmp/gh-aw/repo-memory/default","id":"default","max_file_count":50,"max_file_size":51200,"max_patch_size":51200}]},"report_incomplete":{}} + 
{"add_comment":{"max":1},"create_pull_request":{"draft":false,"expires":336,"labels":["agentic-news","analysis-data"],"max":1,"max_patch_size":4096,"protected_files":["package.json","bun.lockb","bunfig.toml","deno.json","deno.jsonc","deno.lock","global.json","NuGet.Config","Directory.Packages.props","mix.exs","mix.lock","go.mod","go.sum","stack.yaml","stack.yaml.lock","pom.xml","build.gradle","build.gradle.kts","settings.gradle","settings.gradle.kts","gradle.properties","package-lock.json","yarn.lock","pnpm-lock.yaml","npm-shrinkwrap.json","requirements.txt","Pipfile","Pipfile.lock","pyproject.toml","setup.py","setup.cfg","Gemfile","Gemfile.lock","uv.lock","CODEOWNERS","AGENTS.md","CLAUDE.md","GEMINI.md"],"protected_path_prefixes":[".github/",".agents/"]},"create_report_incomplete_issue":{},"dispatch_workflow":{"aw_context_workflows":["news-translate"],"max":1,"workflow_files":{"news-translate":".lock.yml"},"workflows":["news-translate"]},"missing_data":{},"missing_tool":{},"noop":{"max":1,"report-as-issue":"true"},"push_repo_memory":{"memories":[{"dir":"/tmp/gh-aw/repo-memory/default","id":"default","max_file_count":50,"max_file_size":51200,"max_patch_size":51200}]},"report_incomplete":{}} GH_AW_SAFE_OUTPUTS_CONFIG_74b57913640f4749_EOF - name: Write Safe Outputs Tools env: @@ -538,6 +538,11 @@ jobs: "description": "Article type to translate (propositions, motions, committee-reports, week-ahead, month-ahead, weekly-review, monthly-review, breaking, evening-analysis, deep-inspection, interpellations). Leave empty to scan for all untranslated articles.", "type": "string" }, + "aw_context": { + "default": "", + "description": "Agent caller context (used internally by Agentic Workflows).", + "type": "string" + }, "languages": { "default": "all-extra", "description": "Target languages (da,no,fi,de,fr,es,nl,ar,he,ja,ko,zh | nordic-extra | eu-extra | cjk | rtl | all-extra). 
Default: all-extra (all except en,sv)", @@ -1593,7 +1598,7 @@ jobs: GH_AW_ALLOWED_DOMAINS: "*.githubusercontent.com,api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.imf.org,api.individual.githubcopilot.com,api.npms.io,api.scb.se,api.snapcraft.io,api.worldbank.org,archive.ubuntu.com,azure.archive.ubuntu.com,bun.sh,cdn.jsdelivr.net,cdn.playwright.dev,codeload.github.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,data.imf.org,data.riksdagen.se,deb.nodesource.com,deno.land,docs.github.com,esm.sh,get.pnpm.io,github-cloud.githubusercontent.com,github-cloud.s3.amazonaws.com,github.blog,github.com,github.githubassets.com,googleapis.deno.dev,googlechromelabs.github.io,hack23.com,hack23.github.io,host.docker.internal,json-schema.org,json.schemastore.org,jsr.io,keyserver.ubuntu.com,lfs.github.com,localhost,nodejs.org,npm.pkg.github.com,npmjs.com,npmjs.org,objects.githubusercontent.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,playwright.download.prss.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,regeringen.se,registry.bower.io,registry.npmjs.com,registry.npmjs.org,registry.yarnpkg.com,repo.yarnpkg.com,riksdag-regering-ai.onrender.com,riksdagen.se,riksdagsmonitor.com,s.symcb.com,s.symcd.com,security.ubuntu.com,skimdb.npmjs.com,storage.googleapis.com,telemetry.enterprise.githubcopilot.com,telemetry.vercel.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com,www.googleapis.com,www.hack23.com,www.imf.org,www.npmjs.com,www.npmjs.org,www.regeringen.se,www.riksdagen.se,www.riksdagsmonitor.com,www.scb.se,yarnpkg.com" GITHUB_SERVER_URL: ${{ github.server_url }} GITHUB_API_URL: ${{ github.api_url }} - 
GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: "{\"add_comment\":{\"max\":1},\"create_pull_request\":{\"draft\":false,\"expires\":336,\"labels\":[\"agentic-news\",\"analysis-data\"],\"max\":1,\"max_patch_size\":4096,\"protected_files\":[\"package.json\",\"bun.lockb\",\"bunfig.toml\",\"deno.json\",\"deno.jsonc\",\"deno.lock\",\"global.json\",\"NuGet.Config\",\"Directory.Packages.props\",\"mix.exs\",\"mix.lock\",\"go.mod\",\"go.sum\",\"stack.yaml\",\"stack.yaml.lock\",\"pom.xml\",\"build.gradle\",\"build.gradle.kts\",\"settings.gradle\",\"settings.gradle.kts\",\"gradle.properties\",\"package-lock.json\",\"yarn.lock\",\"pnpm-lock.yaml\",\"npm-shrinkwrap.json\",\"requirements.txt\",\"Pipfile\",\"Pipfile.lock\",\"pyproject.toml\",\"setup.py\",\"setup.cfg\",\"Gemfile\",\"Gemfile.lock\",\"uv.lock\",\"CODEOWNERS\",\"AGENTS.md\",\"CLAUDE.md\",\"GEMINI.md\"],\"protected_path_prefixes\":[\".github/\",\".agents/\"]},\"create_report_incomplete_issue\":{},\"dispatch_workflow\":{\"max\":1,\"workflow_files\":{\"news-translate\":\".lock.yml\"},\"workflows\":[\"news-translate\"]},\"missing_data\":{},\"missing_tool\":{},\"noop\":{\"max\":1,\"report-as-issue\":\"true\"},\"report_incomplete\":{}}" + GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: 
"{\"add_comment\":{\"max\":1},\"create_pull_request\":{\"draft\":false,\"expires\":336,\"labels\":[\"agentic-news\",\"analysis-data\"],\"max\":1,\"max_patch_size\":4096,\"protected_files\":[\"package.json\",\"bun.lockb\",\"bunfig.toml\",\"deno.json\",\"deno.jsonc\",\"deno.lock\",\"global.json\",\"NuGet.Config\",\"Directory.Packages.props\",\"mix.exs\",\"mix.lock\",\"go.mod\",\"go.sum\",\"stack.yaml\",\"stack.yaml.lock\",\"pom.xml\",\"build.gradle\",\"build.gradle.kts\",\"settings.gradle\",\"settings.gradle.kts\",\"gradle.properties\",\"package-lock.json\",\"yarn.lock\",\"pnpm-lock.yaml\",\"npm-shrinkwrap.json\",\"requirements.txt\",\"Pipfile\",\"Pipfile.lock\",\"pyproject.toml\",\"setup.py\",\"setup.cfg\",\"Gemfile\",\"Gemfile.lock\",\"uv.lock\",\"CODEOWNERS\",\"AGENTS.md\",\"CLAUDE.md\",\"GEMINI.md\"],\"protected_path_prefixes\":[\".github/\",\".agents/\"]},\"create_report_incomplete_issue\":{},\"dispatch_workflow\":{\"aw_context_workflows\":[\"news-translate\"],\"max\":1,\"workflow_files\":{\"news-translate\":\".lock.yml\"},\"workflows\":[\"news-translate\"]},\"missing_data\":{},\"missing_tool\":{},\"noop\":{\"max\":1,\"report-as-issue\":\"true\"},\"report_incomplete\":{}}" GH_AW_CI_TRIGGER_TOKEN: ${{ secrets.GH_AW_CI_TRIGGER_TOKEN }} with: github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }} diff --git a/.github/workflows/news-committee-reports.lock.yml b/.github/workflows/news-committee-reports.lock.yml index f0aab0904..d930c6843 100644 --- a/.github/workflows/news-committee-reports.lock.yml +++ b/.github/workflows/news-committee-reports.lock.yml @@ -499,7 +499,7 @@ jobs: mkdir -p /tmp/gh-aw/safeoutputs mkdir -p /tmp/gh-aw/mcp-logs/safeoutputs cat > "${RUNNER_TEMP}/gh-aw/safeoutputs/config.json" << 'GH_AW_SAFE_OUTPUTS_CONFIG_95936ae540a1c48f_EOF' - 
{"add_comment":{"max":1},"create_pull_request":{"draft":false,"expires":336,"labels":["agentic-news","analysis-data"],"max":1,"max_patch_size":4096,"protected_files":["package.json","bun.lockb","bunfig.toml","deno.json","deno.jsonc","deno.lock","global.json","NuGet.Config","Directory.Packages.props","mix.exs","mix.lock","go.mod","go.sum","stack.yaml","stack.yaml.lock","pom.xml","build.gradle","build.gradle.kts","settings.gradle","settings.gradle.kts","gradle.properties","package-lock.json","yarn.lock","pnpm-lock.yaml","npm-shrinkwrap.json","requirements.txt","Pipfile","Pipfile.lock","pyproject.toml","setup.py","setup.cfg","Gemfile","Gemfile.lock","uv.lock","CODEOWNERS","AGENTS.md","CLAUDE.md","GEMINI.md"],"protected_path_prefixes":[".github/",".agents/"]},"create_report_incomplete_issue":{},"dispatch_workflow":{"max":1,"workflow_files":{"news-translate":".lock.yml"},"workflows":["news-translate"]},"missing_data":{},"missing_tool":{},"noop":{"max":1,"report-as-issue":"true"},"push_repo_memory":{"memories":[{"dir":"/tmp/gh-aw/repo-memory/default","id":"default","max_file_count":50,"max_file_size":51200,"max_patch_size":51200}]},"report_incomplete":{}} + 
{"add_comment":{"max":1},"create_pull_request":{"draft":false,"expires":336,"labels":["agentic-news","analysis-data"],"max":1,"max_patch_size":4096,"protected_files":["package.json","bun.lockb","bunfig.toml","deno.json","deno.jsonc","deno.lock","global.json","NuGet.Config","Directory.Packages.props","mix.exs","mix.lock","go.mod","go.sum","stack.yaml","stack.yaml.lock","pom.xml","build.gradle","build.gradle.kts","settings.gradle","settings.gradle.kts","gradle.properties","package-lock.json","yarn.lock","pnpm-lock.yaml","npm-shrinkwrap.json","requirements.txt","Pipfile","Pipfile.lock","pyproject.toml","setup.py","setup.cfg","Gemfile","Gemfile.lock","uv.lock","CODEOWNERS","AGENTS.md","CLAUDE.md","GEMINI.md"],"protected_path_prefixes":[".github/",".agents/"]},"create_report_incomplete_issue":{},"dispatch_workflow":{"aw_context_workflows":["news-translate"],"max":1,"workflow_files":{"news-translate":".lock.yml"},"workflows":["news-translate"]},"missing_data":{},"missing_tool":{},"noop":{"max":1,"report-as-issue":"true"},"push_repo_memory":{"memories":[{"dir":"/tmp/gh-aw/repo-memory/default","id":"default","max_file_count":50,"max_file_size":51200,"max_patch_size":51200}]},"report_incomplete":{}} GH_AW_SAFE_OUTPUTS_CONFIG_95936ae540a1c48f_EOF - name: Write Safe Outputs Tools env: @@ -530,6 +530,11 @@ jobs: "description": "Article type to translate (propositions, motions, committee-reports, week-ahead, month-ahead, weekly-review, monthly-review, breaking, evening-analysis, deep-inspection, interpellations). Leave empty to scan for all untranslated articles.", "type": "string" }, + "aw_context": { + "default": "", + "description": "Agent caller context (used internally by Agentic Workflows).", + "type": "string" + }, "languages": { "default": "all-extra", "description": "Target languages (da,no,fi,de,fr,es,nl,ar,he,ja,ko,zh | nordic-extra | eu-extra | cjk | rtl | all-extra). 
Default: all-extra (all except en,sv)", @@ -1569,7 +1574,7 @@ jobs: GH_AW_ALLOWED_DOMAINS: "*.githubusercontent.com,api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.imf.org,api.individual.githubcopilot.com,api.npms.io,api.scb.se,api.snapcraft.io,api.worldbank.org,archive.ubuntu.com,azure.archive.ubuntu.com,bun.sh,cdn.jsdelivr.net,codeload.github.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,data.imf.org,data.riksdagen.se,deb.nodesource.com,deno.land,docs.github.com,esm.sh,get.pnpm.io,github-cloud.githubusercontent.com,github-cloud.s3.amazonaws.com,github.blog,github.com,github.githubassets.com,googleapis.deno.dev,googlechromelabs.github.io,hack23.com,hack23.github.io,host.docker.internal,json-schema.org,json.schemastore.org,jsr.io,keyserver.ubuntu.com,lfs.github.com,localhost,nodejs.org,npm.pkg.github.com,npmjs.com,npmjs.org,objects.githubusercontent.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,regeringen.se,registry.bower.io,registry.npmjs.com,registry.npmjs.org,registry.yarnpkg.com,repo.yarnpkg.com,riksdag-regering-ai.onrender.com,riksdagen.se,riksdagsmonitor.com,s.symcb.com,s.symcd.com,security.ubuntu.com,skimdb.npmjs.com,storage.googleapis.com,telemetry.enterprise.githubcopilot.com,telemetry.vercel.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com,www.googleapis.com,www.hack23.com,www.imf.org,www.npmjs.com,www.npmjs.org,www.regeringen.se,www.riksdagen.se,www.riksdagsmonitor.com,www.scb.se,yarnpkg.com" GITHUB_SERVER_URL: ${{ github.server_url }} GITHUB_API_URL: ${{ github.api_url }} - GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: 
"{\"add_comment\":{\"max\":1},\"create_pull_request\":{\"draft\":false,\"expires\":336,\"labels\":[\"agentic-news\",\"analysis-data\"],\"max\":1,\"max_patch_size\":4096,\"protected_files\":[\"package.json\",\"bun.lockb\",\"bunfig.toml\",\"deno.json\",\"deno.jsonc\",\"deno.lock\",\"global.json\",\"NuGet.Config\",\"Directory.Packages.props\",\"mix.exs\",\"mix.lock\",\"go.mod\",\"go.sum\",\"stack.yaml\",\"stack.yaml.lock\",\"pom.xml\",\"build.gradle\",\"build.gradle.kts\",\"settings.gradle\",\"settings.gradle.kts\",\"gradle.properties\",\"package-lock.json\",\"yarn.lock\",\"pnpm-lock.yaml\",\"npm-shrinkwrap.json\",\"requirements.txt\",\"Pipfile\",\"Pipfile.lock\",\"pyproject.toml\",\"setup.py\",\"setup.cfg\",\"Gemfile\",\"Gemfile.lock\",\"uv.lock\",\"CODEOWNERS\",\"AGENTS.md\",\"CLAUDE.md\",\"GEMINI.md\"],\"protected_path_prefixes\":[\".github/\",\".agents/\"]},\"create_report_incomplete_issue\":{},\"dispatch_workflow\":{\"max\":1,\"workflow_files\":{\"news-translate\":\".lock.yml\"},\"workflows\":[\"news-translate\"]},\"missing_data\":{},\"missing_tool\":{},\"noop\":{\"max\":1,\"report-as-issue\":\"true\"},\"report_incomplete\":{}}" + GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: 
"{\"add_comment\":{\"max\":1},\"create_pull_request\":{\"draft\":false,\"expires\":336,\"labels\":[\"agentic-news\",\"analysis-data\"],\"max\":1,\"max_patch_size\":4096,\"protected_files\":[\"package.json\",\"bun.lockb\",\"bunfig.toml\",\"deno.json\",\"deno.jsonc\",\"deno.lock\",\"global.json\",\"NuGet.Config\",\"Directory.Packages.props\",\"mix.exs\",\"mix.lock\",\"go.mod\",\"go.sum\",\"stack.yaml\",\"stack.yaml.lock\",\"pom.xml\",\"build.gradle\",\"build.gradle.kts\",\"settings.gradle\",\"settings.gradle.kts\",\"gradle.properties\",\"package-lock.json\",\"yarn.lock\",\"pnpm-lock.yaml\",\"npm-shrinkwrap.json\",\"requirements.txt\",\"Pipfile\",\"Pipfile.lock\",\"pyproject.toml\",\"setup.py\",\"setup.cfg\",\"Gemfile\",\"Gemfile.lock\",\"uv.lock\",\"CODEOWNERS\",\"AGENTS.md\",\"CLAUDE.md\",\"GEMINI.md\"],\"protected_path_prefixes\":[\".github/\",\".agents/\"]},\"create_report_incomplete_issue\":{},\"dispatch_workflow\":{\"aw_context_workflows\":[\"news-translate\"],\"max\":1,\"workflow_files\":{\"news-translate\":\".lock.yml\"},\"workflows\":[\"news-translate\"]},\"missing_data\":{},\"missing_tool\":{},\"noop\":{\"max\":1,\"report-as-issue\":\"true\"},\"report_incomplete\":{}}" GH_AW_CI_TRIGGER_TOKEN: ${{ secrets.GH_AW_CI_TRIGGER_TOKEN }} with: github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }} diff --git a/.github/workflows/news-evening-analysis.lock.yml b/.github/workflows/news-evening-analysis.lock.yml index a45888197..4b1f698ce 100644 --- a/.github/workflows/news-evening-analysis.lock.yml +++ b/.github/workflows/news-evening-analysis.lock.yml @@ -506,7 +506,7 @@ jobs: mkdir -p /tmp/gh-aw/safeoutputs mkdir -p /tmp/gh-aw/mcp-logs/safeoutputs cat > "${RUNNER_TEMP}/gh-aw/safeoutputs/config.json" << 'GH_AW_SAFE_OUTPUTS_CONFIG_9919f8b0904ec91c_EOF' - 
{"add_comment":{"max":1},"create_pull_request":{"draft":false,"expires":336,"labels":["agentic-news","analysis-data"],"max":1,"max_patch_size":4096,"protected_files":["package.json","bun.lockb","bunfig.toml","deno.json","deno.jsonc","deno.lock","global.json","NuGet.Config","Directory.Packages.props","mix.exs","mix.lock","go.mod","go.sum","stack.yaml","stack.yaml.lock","pom.xml","build.gradle","build.gradle.kts","settings.gradle","settings.gradle.kts","gradle.properties","package-lock.json","yarn.lock","pnpm-lock.yaml","npm-shrinkwrap.json","requirements.txt","Pipfile","Pipfile.lock","pyproject.toml","setup.py","setup.cfg","Gemfile","Gemfile.lock","uv.lock","CODEOWNERS","AGENTS.md","CLAUDE.md","GEMINI.md"],"protected_path_prefixes":[".github/",".agents/"]},"create_report_incomplete_issue":{},"dispatch_workflow":{"max":1,"workflow_files":{"news-translate":".lock.yml"},"workflows":["news-translate"]},"missing_data":{},"missing_tool":{},"noop":{"max":1,"report-as-issue":"true"},"push_repo_memory":{"memories":[{"dir":"/tmp/gh-aw/repo-memory/default","id":"default","max_file_count":50,"max_file_size":51200,"max_patch_size":51200}]},"report_incomplete":{}} + 
{"add_comment":{"max":1},"create_pull_request":{"draft":false,"expires":336,"labels":["agentic-news","analysis-data"],"max":1,"max_patch_size":4096,"protected_files":["package.json","bun.lockb","bunfig.toml","deno.json","deno.jsonc","deno.lock","global.json","NuGet.Config","Directory.Packages.props","mix.exs","mix.lock","go.mod","go.sum","stack.yaml","stack.yaml.lock","pom.xml","build.gradle","build.gradle.kts","settings.gradle","settings.gradle.kts","gradle.properties","package-lock.json","yarn.lock","pnpm-lock.yaml","npm-shrinkwrap.json","requirements.txt","Pipfile","Pipfile.lock","pyproject.toml","setup.py","setup.cfg","Gemfile","Gemfile.lock","uv.lock","CODEOWNERS","AGENTS.md","CLAUDE.md","GEMINI.md"],"protected_path_prefixes":[".github/",".agents/"]},"create_report_incomplete_issue":{},"dispatch_workflow":{"aw_context_workflows":["news-translate"],"max":1,"workflow_files":{"news-translate":".lock.yml"},"workflows":["news-translate"]},"missing_data":{},"missing_tool":{},"noop":{"max":1,"report-as-issue":"true"},"push_repo_memory":{"memories":[{"dir":"/tmp/gh-aw/repo-memory/default","id":"default","max_file_count":50,"max_file_size":51200,"max_patch_size":51200}]},"report_incomplete":{}} GH_AW_SAFE_OUTPUTS_CONFIG_9919f8b0904ec91c_EOF - name: Write Safe Outputs Tools env: @@ -537,6 +537,11 @@ jobs: "description": "Article type to translate (propositions, motions, committee-reports, week-ahead, month-ahead, weekly-review, monthly-review, breaking, evening-analysis, deep-inspection, interpellations). Leave empty to scan for all untranslated articles.", "type": "string" }, + "aw_context": { + "default": "", + "description": "Agent caller context (used internally by Agentic Workflows).", + "type": "string" + }, "languages": { "default": "all-extra", "description": "Target languages (da,no,fi,de,fr,es,nl,ar,he,ja,ko,zh | nordic-extra | eu-extra | cjk | rtl | all-extra). 
Default: all-extra (all except en,sv)", @@ -1592,7 +1597,7 @@ jobs: GH_AW_ALLOWED_DOMAINS: "*.githubusercontent.com,api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.imf.org,api.individual.githubcopilot.com,api.npms.io,api.scb.se,api.snapcraft.io,api.worldbank.org,archive.ubuntu.com,azure.archive.ubuntu.com,bun.sh,cdn.jsdelivr.net,cdn.playwright.dev,codeload.github.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,data.imf.org,data.riksdagen.se,deb.nodesource.com,deno.land,docs.github.com,esm.sh,get.pnpm.io,github-cloud.githubusercontent.com,github-cloud.s3.amazonaws.com,github.blog,github.com,github.githubassets.com,googleapis.deno.dev,googlechromelabs.github.io,hack23.com,hack23.github.io,host.docker.internal,json-schema.org,json.schemastore.org,jsr.io,keyserver.ubuntu.com,lfs.github.com,localhost,nodejs.org,npm.pkg.github.com,npmjs.com,npmjs.org,objects.githubusercontent.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,playwright.download.prss.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,regeringen.se,registry.bower.io,registry.npmjs.com,registry.npmjs.org,registry.yarnpkg.com,repo.yarnpkg.com,riksdag-regering-ai.onrender.com,riksdagen.se,riksdagsmonitor.com,s.symcb.com,s.symcd.com,security.ubuntu.com,skimdb.npmjs.com,storage.googleapis.com,telemetry.enterprise.githubcopilot.com,telemetry.vercel.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com,www.googleapis.com,www.hack23.com,www.imf.org,www.npmjs.com,www.npmjs.org,www.regeringen.se,www.riksdagen.se,www.riksdagsmonitor.com,www.scb.se,yarnpkg.com" GITHUB_SERVER_URL: ${{ github.server_url }} GITHUB_API_URL: ${{ github.api_url }} - 
GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: "{\"add_comment\":{\"max\":1},\"create_pull_request\":{\"draft\":false,\"expires\":336,\"labels\":[\"agentic-news\",\"analysis-data\"],\"max\":1,\"max_patch_size\":4096,\"protected_files\":[\"package.json\",\"bun.lockb\",\"bunfig.toml\",\"deno.json\",\"deno.jsonc\",\"deno.lock\",\"global.json\",\"NuGet.Config\",\"Directory.Packages.props\",\"mix.exs\",\"mix.lock\",\"go.mod\",\"go.sum\",\"stack.yaml\",\"stack.yaml.lock\",\"pom.xml\",\"build.gradle\",\"build.gradle.kts\",\"settings.gradle\",\"settings.gradle.kts\",\"gradle.properties\",\"package-lock.json\",\"yarn.lock\",\"pnpm-lock.yaml\",\"npm-shrinkwrap.json\",\"requirements.txt\",\"Pipfile\",\"Pipfile.lock\",\"pyproject.toml\",\"setup.py\",\"setup.cfg\",\"Gemfile\",\"Gemfile.lock\",\"uv.lock\",\"CODEOWNERS\",\"AGENTS.md\",\"CLAUDE.md\",\"GEMINI.md\"],\"protected_path_prefixes\":[\".github/\",\".agents/\"]},\"create_report_incomplete_issue\":{},\"dispatch_workflow\":{\"max\":1,\"workflow_files\":{\"news-translate\":\".lock.yml\"},\"workflows\":[\"news-translate\"]},\"missing_data\":{},\"missing_tool\":{},\"noop\":{\"max\":1,\"report-as-issue\":\"true\"},\"report_incomplete\":{}}" + GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: 
"{\"add_comment\":{\"max\":1},\"create_pull_request\":{\"draft\":false,\"expires\":336,\"labels\":[\"agentic-news\",\"analysis-data\"],\"max\":1,\"max_patch_size\":4096,\"protected_files\":[\"package.json\",\"bun.lockb\",\"bunfig.toml\",\"deno.json\",\"deno.jsonc\",\"deno.lock\",\"global.json\",\"NuGet.Config\",\"Directory.Packages.props\",\"mix.exs\",\"mix.lock\",\"go.mod\",\"go.sum\",\"stack.yaml\",\"stack.yaml.lock\",\"pom.xml\",\"build.gradle\",\"build.gradle.kts\",\"settings.gradle\",\"settings.gradle.kts\",\"gradle.properties\",\"package-lock.json\",\"yarn.lock\",\"pnpm-lock.yaml\",\"npm-shrinkwrap.json\",\"requirements.txt\",\"Pipfile\",\"Pipfile.lock\",\"pyproject.toml\",\"setup.py\",\"setup.cfg\",\"Gemfile\",\"Gemfile.lock\",\"uv.lock\",\"CODEOWNERS\",\"AGENTS.md\",\"CLAUDE.md\",\"GEMINI.md\"],\"protected_path_prefixes\":[\".github/\",\".agents/\"]},\"create_report_incomplete_issue\":{},\"dispatch_workflow\":{\"aw_context_workflows\":[\"news-translate\"],\"max\":1,\"workflow_files\":{\"news-translate\":\".lock.yml\"},\"workflows\":[\"news-translate\"]},\"missing_data\":{},\"missing_tool\":{},\"noop\":{\"max\":1,\"report-as-issue\":\"true\"},\"report_incomplete\":{}}" GH_AW_CI_TRIGGER_TOKEN: ${{ secrets.GH_AW_CI_TRIGGER_TOKEN }} with: github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }} diff --git a/.github/workflows/news-interpellations.lock.yml b/.github/workflows/news-interpellations.lock.yml index 0797fcb7a..1abac98ca 100644 --- a/.github/workflows/news-interpellations.lock.yml +++ b/.github/workflows/news-interpellations.lock.yml @@ -499,7 +499,7 @@ jobs: mkdir -p /tmp/gh-aw/safeoutputs mkdir -p /tmp/gh-aw/mcp-logs/safeoutputs cat > "${RUNNER_TEMP}/gh-aw/safeoutputs/config.json" << 'GH_AW_SAFE_OUTPUTS_CONFIG_591ccf46f42b73cb_EOF' - 
{"add_comment":{"max":1},"create_pull_request":{"draft":false,"expires":336,"labels":["agentic-news","analysis-data"],"max":1,"max_patch_size":4096,"protected_files":["package.json","bun.lockb","bunfig.toml","deno.json","deno.jsonc","deno.lock","global.json","NuGet.Config","Directory.Packages.props","mix.exs","mix.lock","go.mod","go.sum","stack.yaml","stack.yaml.lock","pom.xml","build.gradle","build.gradle.kts","settings.gradle","settings.gradle.kts","gradle.properties","package-lock.json","yarn.lock","pnpm-lock.yaml","npm-shrinkwrap.json","requirements.txt","Pipfile","Pipfile.lock","pyproject.toml","setup.py","setup.cfg","Gemfile","Gemfile.lock","uv.lock","CODEOWNERS","AGENTS.md","CLAUDE.md","GEMINI.md"],"protected_path_prefixes":[".github/",".agents/"]},"create_report_incomplete_issue":{},"dispatch_workflow":{"max":1,"workflow_files":{"news-translate":".lock.yml"},"workflows":["news-translate"]},"missing_data":{},"missing_tool":{},"noop":{"max":1,"report-as-issue":"true"},"push_repo_memory":{"memories":[{"dir":"/tmp/gh-aw/repo-memory/default","id":"default","max_file_count":50,"max_file_size":51200,"max_patch_size":51200}]},"report_incomplete":{}} + 
{"add_comment":{"max":1},"create_pull_request":{"draft":false,"expires":336,"labels":["agentic-news","analysis-data"],"max":1,"max_patch_size":4096,"protected_files":["package.json","bun.lockb","bunfig.toml","deno.json","deno.jsonc","deno.lock","global.json","NuGet.Config","Directory.Packages.props","mix.exs","mix.lock","go.mod","go.sum","stack.yaml","stack.yaml.lock","pom.xml","build.gradle","build.gradle.kts","settings.gradle","settings.gradle.kts","gradle.properties","package-lock.json","yarn.lock","pnpm-lock.yaml","npm-shrinkwrap.json","requirements.txt","Pipfile","Pipfile.lock","pyproject.toml","setup.py","setup.cfg","Gemfile","Gemfile.lock","uv.lock","CODEOWNERS","AGENTS.md","CLAUDE.md","GEMINI.md"],"protected_path_prefixes":[".github/",".agents/"]},"create_report_incomplete_issue":{},"dispatch_workflow":{"aw_context_workflows":["news-translate"],"max":1,"workflow_files":{"news-translate":".lock.yml"},"workflows":["news-translate"]},"missing_data":{},"missing_tool":{},"noop":{"max":1,"report-as-issue":"true"},"push_repo_memory":{"memories":[{"dir":"/tmp/gh-aw/repo-memory/default","id":"default","max_file_count":50,"max_file_size":51200,"max_patch_size":51200}]},"report_incomplete":{}} GH_AW_SAFE_OUTPUTS_CONFIG_591ccf46f42b73cb_EOF - name: Write Safe Outputs Tools env: @@ -530,6 +530,11 @@ jobs: "description": "Article type to translate (propositions, motions, committee-reports, week-ahead, month-ahead, weekly-review, monthly-review, breaking, evening-analysis, deep-inspection, interpellations). Leave empty to scan for all untranslated articles.", "type": "string" }, + "aw_context": { + "default": "", + "description": "Agent caller context (used internally by Agentic Workflows).", + "type": "string" + }, "languages": { "default": "all-extra", "description": "Target languages (da,no,fi,de,fr,es,nl,ar,he,ja,ko,zh | nordic-extra | eu-extra | cjk | rtl | all-extra). 
Default: all-extra (all except en,sv)", @@ -1569,7 +1574,7 @@ jobs: GH_AW_ALLOWED_DOMAINS: "*.githubusercontent.com,api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.imf.org,api.individual.githubcopilot.com,api.npms.io,api.scb.se,api.snapcraft.io,api.worldbank.org,archive.ubuntu.com,azure.archive.ubuntu.com,bun.sh,cdn.jsdelivr.net,codeload.github.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,data.imf.org,data.riksdagen.se,deb.nodesource.com,deno.land,docs.github.com,esm.sh,get.pnpm.io,github-cloud.githubusercontent.com,github-cloud.s3.amazonaws.com,github.blog,github.com,github.githubassets.com,googleapis.deno.dev,googlechromelabs.github.io,hack23.com,hack23.github.io,host.docker.internal,json-schema.org,json.schemastore.org,jsr.io,keyserver.ubuntu.com,lfs.github.com,localhost,nodejs.org,npm.pkg.github.com,npmjs.com,npmjs.org,objects.githubusercontent.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,regeringen.se,registry.bower.io,registry.npmjs.com,registry.npmjs.org,registry.yarnpkg.com,repo.yarnpkg.com,riksdag-regering-ai.onrender.com,riksdagen.se,riksdagsmonitor.com,s.symcb.com,s.symcd.com,security.ubuntu.com,skimdb.npmjs.com,storage.googleapis.com,telemetry.enterprise.githubcopilot.com,telemetry.vercel.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com,www.googleapis.com,www.hack23.com,www.imf.org,www.npmjs.com,www.npmjs.org,www.regeringen.se,www.riksdagen.se,www.riksdagsmonitor.com,www.scb.se,yarnpkg.com" GITHUB_SERVER_URL: ${{ github.server_url }} GITHUB_API_URL: ${{ github.api_url }} - GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: 
"{\"add_comment\":{\"max\":1},\"create_pull_request\":{\"draft\":false,\"expires\":336,\"labels\":[\"agentic-news\",\"analysis-data\"],\"max\":1,\"max_patch_size\":4096,\"protected_files\":[\"package.json\",\"bun.lockb\",\"bunfig.toml\",\"deno.json\",\"deno.jsonc\",\"deno.lock\",\"global.json\",\"NuGet.Config\",\"Directory.Packages.props\",\"mix.exs\",\"mix.lock\",\"go.mod\",\"go.sum\",\"stack.yaml\",\"stack.yaml.lock\",\"pom.xml\",\"build.gradle\",\"build.gradle.kts\",\"settings.gradle\",\"settings.gradle.kts\",\"gradle.properties\",\"package-lock.json\",\"yarn.lock\",\"pnpm-lock.yaml\",\"npm-shrinkwrap.json\",\"requirements.txt\",\"Pipfile\",\"Pipfile.lock\",\"pyproject.toml\",\"setup.py\",\"setup.cfg\",\"Gemfile\",\"Gemfile.lock\",\"uv.lock\",\"CODEOWNERS\",\"AGENTS.md\",\"CLAUDE.md\",\"GEMINI.md\"],\"protected_path_prefixes\":[\".github/\",\".agents/\"]},\"create_report_incomplete_issue\":{},\"dispatch_workflow\":{\"max\":1,\"workflow_files\":{\"news-translate\":\".lock.yml\"},\"workflows\":[\"news-translate\"]},\"missing_data\":{},\"missing_tool\":{},\"noop\":{\"max\":1,\"report-as-issue\":\"true\"},\"report_incomplete\":{}}" + GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: 
"{\"add_comment\":{\"max\":1},\"create_pull_request\":{\"draft\":false,\"expires\":336,\"labels\":[\"agentic-news\",\"analysis-data\"],\"max\":1,\"max_patch_size\":4096,\"protected_files\":[\"package.json\",\"bun.lockb\",\"bunfig.toml\",\"deno.json\",\"deno.jsonc\",\"deno.lock\",\"global.json\",\"NuGet.Config\",\"Directory.Packages.props\",\"mix.exs\",\"mix.lock\",\"go.mod\",\"go.sum\",\"stack.yaml\",\"stack.yaml.lock\",\"pom.xml\",\"build.gradle\",\"build.gradle.kts\",\"settings.gradle\",\"settings.gradle.kts\",\"gradle.properties\",\"package-lock.json\",\"yarn.lock\",\"pnpm-lock.yaml\",\"npm-shrinkwrap.json\",\"requirements.txt\",\"Pipfile\",\"Pipfile.lock\",\"pyproject.toml\",\"setup.py\",\"setup.cfg\",\"Gemfile\",\"Gemfile.lock\",\"uv.lock\",\"CODEOWNERS\",\"AGENTS.md\",\"CLAUDE.md\",\"GEMINI.md\"],\"protected_path_prefixes\":[\".github/\",\".agents/\"]},\"create_report_incomplete_issue\":{},\"dispatch_workflow\":{\"aw_context_workflows\":[\"news-translate\"],\"max\":1,\"workflow_files\":{\"news-translate\":\".lock.yml\"},\"workflows\":[\"news-translate\"]},\"missing_data\":{},\"missing_tool\":{},\"noop\":{\"max\":1,\"report-as-issue\":\"true\"},\"report_incomplete\":{}}" GH_AW_CI_TRIGGER_TOKEN: ${{ secrets.GH_AW_CI_TRIGGER_TOKEN }} with: github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }} diff --git a/.github/workflows/news-month-ahead.lock.yml b/.github/workflows/news-month-ahead.lock.yml index fc29cf699..63269474d 100644 --- a/.github/workflows/news-month-ahead.lock.yml +++ b/.github/workflows/news-month-ahead.lock.yml @@ -500,7 +500,7 @@ jobs: mkdir -p /tmp/gh-aw/safeoutputs mkdir -p /tmp/gh-aw/mcp-logs/safeoutputs cat > "${RUNNER_TEMP}/gh-aw/safeoutputs/config.json" << 'GH_AW_SAFE_OUTPUTS_CONFIG_ff7f4402944ea0d2_EOF' - 
{"add_comment":{"max":1},"create_pull_request":{"draft":false,"expires":336,"labels":["agentic-news","analysis-data"],"max":1,"max_patch_size":4096,"protected_files":["package.json","bun.lockb","bunfig.toml","deno.json","deno.jsonc","deno.lock","global.json","NuGet.Config","Directory.Packages.props","mix.exs","mix.lock","go.mod","go.sum","stack.yaml","stack.yaml.lock","pom.xml","build.gradle","build.gradle.kts","settings.gradle","settings.gradle.kts","gradle.properties","package-lock.json","yarn.lock","pnpm-lock.yaml","npm-shrinkwrap.json","requirements.txt","Pipfile","Pipfile.lock","pyproject.toml","setup.py","setup.cfg","Gemfile","Gemfile.lock","uv.lock","CODEOWNERS","AGENTS.md","CLAUDE.md","GEMINI.md"],"protected_path_prefixes":[".github/",".agents/"]},"create_report_incomplete_issue":{},"dispatch_workflow":{"max":1,"workflow_files":{"news-translate":".lock.yml"},"workflows":["news-translate"]},"missing_data":{},"missing_tool":{},"noop":{"max":1,"report-as-issue":"true"},"push_repo_memory":{"memories":[{"dir":"/tmp/gh-aw/repo-memory/default","id":"default","max_file_count":50,"max_file_size":51200,"max_patch_size":51200}]},"report_incomplete":{}} + 
{"add_comment":{"max":1},"create_pull_request":{"draft":false,"expires":336,"labels":["agentic-news","analysis-data"],"max":1,"max_patch_size":4096,"protected_files":["package.json","bun.lockb","bunfig.toml","deno.json","deno.jsonc","deno.lock","global.json","NuGet.Config","Directory.Packages.props","mix.exs","mix.lock","go.mod","go.sum","stack.yaml","stack.yaml.lock","pom.xml","build.gradle","build.gradle.kts","settings.gradle","settings.gradle.kts","gradle.properties","package-lock.json","yarn.lock","pnpm-lock.yaml","npm-shrinkwrap.json","requirements.txt","Pipfile","Pipfile.lock","pyproject.toml","setup.py","setup.cfg","Gemfile","Gemfile.lock","uv.lock","CODEOWNERS","AGENTS.md","CLAUDE.md","GEMINI.md"],"protected_path_prefixes":[".github/",".agents/"]},"create_report_incomplete_issue":{},"dispatch_workflow":{"aw_context_workflows":["news-translate"],"max":1,"workflow_files":{"news-translate":".lock.yml"},"workflows":["news-translate"]},"missing_data":{},"missing_tool":{},"noop":{"max":1,"report-as-issue":"true"},"push_repo_memory":{"memories":[{"dir":"/tmp/gh-aw/repo-memory/default","id":"default","max_file_count":50,"max_file_size":51200,"max_patch_size":51200}]},"report_incomplete":{}} GH_AW_SAFE_OUTPUTS_CONFIG_ff7f4402944ea0d2_EOF - name: Write Safe Outputs Tools env: @@ -531,6 +531,11 @@ jobs: "description": "Article type to translate (propositions, motions, committee-reports, week-ahead, month-ahead, weekly-review, monthly-review, breaking, evening-analysis, deep-inspection, interpellations). Leave empty to scan for all untranslated articles.", "type": "string" }, + "aw_context": { + "default": "", + "description": "Agent caller context (used internally by Agentic Workflows).", + "type": "string" + }, "languages": { "default": "all-extra", "description": "Target languages (da,no,fi,de,fr,es,nl,ar,he,ja,ko,zh | nordic-extra | eu-extra | cjk | rtl | all-extra). 
Default: all-extra (all except en,sv)", @@ -1570,7 +1575,7 @@ jobs: GH_AW_ALLOWED_DOMAINS: "*.githubusercontent.com,api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.imf.org,api.individual.githubcopilot.com,api.npms.io,api.scb.se,api.snapcraft.io,api.worldbank.org,archive.ubuntu.com,azure.archive.ubuntu.com,bun.sh,cdn.jsdelivr.net,codeload.github.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,data.imf.org,data.riksdagen.se,deb.nodesource.com,deno.land,docs.github.com,esm.sh,get.pnpm.io,github-cloud.githubusercontent.com,github-cloud.s3.amazonaws.com,github.blog,github.com,github.githubassets.com,googleapis.deno.dev,googlechromelabs.github.io,hack23.com,hack23.github.io,host.docker.internal,json-schema.org,json.schemastore.org,jsr.io,keyserver.ubuntu.com,lfs.github.com,localhost,nodejs.org,npm.pkg.github.com,npmjs.com,npmjs.org,objects.githubusercontent.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,regeringen.se,registry.bower.io,registry.npmjs.com,registry.npmjs.org,registry.yarnpkg.com,repo.yarnpkg.com,riksdag-regering-ai.onrender.com,riksdagen.se,riksdagsmonitor.com,s.symcb.com,s.symcd.com,security.ubuntu.com,skimdb.npmjs.com,storage.googleapis.com,telemetry.enterprise.githubcopilot.com,telemetry.vercel.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com,www.googleapis.com,www.hack23.com,www.imf.org,www.npmjs.com,www.npmjs.org,www.regeringen.se,www.riksdagen.se,www.riksdagsmonitor.com,www.scb.se,yarnpkg.com" GITHUB_SERVER_URL: ${{ github.server_url }} GITHUB_API_URL: ${{ github.api_url }} - GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: 
"{\"add_comment\":{\"max\":1},\"create_pull_request\":{\"draft\":false,\"expires\":336,\"labels\":[\"agentic-news\",\"analysis-data\"],\"max\":1,\"max_patch_size\":4096,\"protected_files\":[\"package.json\",\"bun.lockb\",\"bunfig.toml\",\"deno.json\",\"deno.jsonc\",\"deno.lock\",\"global.json\",\"NuGet.Config\",\"Directory.Packages.props\",\"mix.exs\",\"mix.lock\",\"go.mod\",\"go.sum\",\"stack.yaml\",\"stack.yaml.lock\",\"pom.xml\",\"build.gradle\",\"build.gradle.kts\",\"settings.gradle\",\"settings.gradle.kts\",\"gradle.properties\",\"package-lock.json\",\"yarn.lock\",\"pnpm-lock.yaml\",\"npm-shrinkwrap.json\",\"requirements.txt\",\"Pipfile\",\"Pipfile.lock\",\"pyproject.toml\",\"setup.py\",\"setup.cfg\",\"Gemfile\",\"Gemfile.lock\",\"uv.lock\",\"CODEOWNERS\",\"AGENTS.md\",\"CLAUDE.md\",\"GEMINI.md\"],\"protected_path_prefixes\":[\".github/\",\".agents/\"]},\"create_report_incomplete_issue\":{},\"dispatch_workflow\":{\"max\":1,\"workflow_files\":{\"news-translate\":\".lock.yml\"},\"workflows\":[\"news-translate\"]},\"missing_data\":{},\"missing_tool\":{},\"noop\":{\"max\":1,\"report-as-issue\":\"true\"},\"report_incomplete\":{}}" + GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: 
"{\"add_comment\":{\"max\":1},\"create_pull_request\":{\"draft\":false,\"expires\":336,\"labels\":[\"agentic-news\",\"analysis-data\"],\"max\":1,\"max_patch_size\":4096,\"protected_files\":[\"package.json\",\"bun.lockb\",\"bunfig.toml\",\"deno.json\",\"deno.jsonc\",\"deno.lock\",\"global.json\",\"NuGet.Config\",\"Directory.Packages.props\",\"mix.exs\",\"mix.lock\",\"go.mod\",\"go.sum\",\"stack.yaml\",\"stack.yaml.lock\",\"pom.xml\",\"build.gradle\",\"build.gradle.kts\",\"settings.gradle\",\"settings.gradle.kts\",\"gradle.properties\",\"package-lock.json\",\"yarn.lock\",\"pnpm-lock.yaml\",\"npm-shrinkwrap.json\",\"requirements.txt\",\"Pipfile\",\"Pipfile.lock\",\"pyproject.toml\",\"setup.py\",\"setup.cfg\",\"Gemfile\",\"Gemfile.lock\",\"uv.lock\",\"CODEOWNERS\",\"AGENTS.md\",\"CLAUDE.md\",\"GEMINI.md\"],\"protected_path_prefixes\":[\".github/\",\".agents/\"]},\"create_report_incomplete_issue\":{},\"dispatch_workflow\":{\"aw_context_workflows\":[\"news-translate\"],\"max\":1,\"workflow_files\":{\"news-translate\":\".lock.yml\"},\"workflows\":[\"news-translate\"]},\"missing_data\":{},\"missing_tool\":{},\"noop\":{\"max\":1,\"report-as-issue\":\"true\"},\"report_incomplete\":{}}" GH_AW_CI_TRIGGER_TOKEN: ${{ secrets.GH_AW_CI_TRIGGER_TOKEN }} with: github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }} diff --git a/.github/workflows/news-monthly-review.lock.yml b/.github/workflows/news-monthly-review.lock.yml index e0f8fc4e7..985716e57 100644 --- a/.github/workflows/news-monthly-review.lock.yml +++ b/.github/workflows/news-monthly-review.lock.yml @@ -500,7 +500,7 @@ jobs: mkdir -p /tmp/gh-aw/safeoutputs mkdir -p /tmp/gh-aw/mcp-logs/safeoutputs cat > "${RUNNER_TEMP}/gh-aw/safeoutputs/config.json" << 'GH_AW_SAFE_OUTPUTS_CONFIG_c9e7590b58d44a81_EOF' - 
{"add_comment":{"max":1},"create_pull_request":{"draft":false,"expires":336,"labels":["agentic-news","analysis-data"],"max":1,"max_patch_size":4096,"protected_files":["package.json","bun.lockb","bunfig.toml","deno.json","deno.jsonc","deno.lock","global.json","NuGet.Config","Directory.Packages.props","mix.exs","mix.lock","go.mod","go.sum","stack.yaml","stack.yaml.lock","pom.xml","build.gradle","build.gradle.kts","settings.gradle","settings.gradle.kts","gradle.properties","package-lock.json","yarn.lock","pnpm-lock.yaml","npm-shrinkwrap.json","requirements.txt","Pipfile","Pipfile.lock","pyproject.toml","setup.py","setup.cfg","Gemfile","Gemfile.lock","uv.lock","CODEOWNERS","AGENTS.md","CLAUDE.md","GEMINI.md"],"protected_path_prefixes":[".github/",".agents/"]},"create_report_incomplete_issue":{},"dispatch_workflow":{"max":1,"workflow_files":{"news-translate":".lock.yml"},"workflows":["news-translate"]},"missing_data":{},"missing_tool":{},"noop":{"max":1,"report-as-issue":"true"},"push_repo_memory":{"memories":[{"dir":"/tmp/gh-aw/repo-memory/default","id":"default","max_file_count":50,"max_file_size":51200,"max_patch_size":51200}]},"report_incomplete":{}} + 
{"add_comment":{"max":1},"create_pull_request":{"draft":false,"expires":336,"labels":["agentic-news","analysis-data"],"max":1,"max_patch_size":4096,"protected_files":["package.json","bun.lockb","bunfig.toml","deno.json","deno.jsonc","deno.lock","global.json","NuGet.Config","Directory.Packages.props","mix.exs","mix.lock","go.mod","go.sum","stack.yaml","stack.yaml.lock","pom.xml","build.gradle","build.gradle.kts","settings.gradle","settings.gradle.kts","gradle.properties","package-lock.json","yarn.lock","pnpm-lock.yaml","npm-shrinkwrap.json","requirements.txt","Pipfile","Pipfile.lock","pyproject.toml","setup.py","setup.cfg","Gemfile","Gemfile.lock","uv.lock","CODEOWNERS","AGENTS.md","CLAUDE.md","GEMINI.md"],"protected_path_prefixes":[".github/",".agents/"]},"create_report_incomplete_issue":{},"dispatch_workflow":{"aw_context_workflows":["news-translate"],"max":1,"workflow_files":{"news-translate":".lock.yml"},"workflows":["news-translate"]},"missing_data":{},"missing_tool":{},"noop":{"max":1,"report-as-issue":"true"},"push_repo_memory":{"memories":[{"dir":"/tmp/gh-aw/repo-memory/default","id":"default","max_file_count":50,"max_file_size":51200,"max_patch_size":51200}]},"report_incomplete":{}} GH_AW_SAFE_OUTPUTS_CONFIG_c9e7590b58d44a81_EOF - name: Write Safe Outputs Tools env: @@ -531,6 +531,11 @@ jobs: "description": "Article type to translate (propositions, motions, committee-reports, week-ahead, month-ahead, weekly-review, monthly-review, breaking, evening-analysis, deep-inspection, interpellations). Leave empty to scan for all untranslated articles.", "type": "string" }, + "aw_context": { + "default": "", + "description": "Agent caller context (used internally by Agentic Workflows).", + "type": "string" + }, "languages": { "default": "all-extra", "description": "Target languages (da,no,fi,de,fr,es,nl,ar,he,ja,ko,zh | nordic-extra | eu-extra | cjk | rtl | all-extra). 
Default: all-extra (all except en,sv)", @@ -1570,7 +1575,7 @@ jobs: GH_AW_ALLOWED_DOMAINS: "*.githubusercontent.com,api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.imf.org,api.individual.githubcopilot.com,api.npms.io,api.scb.se,api.snapcraft.io,api.worldbank.org,archive.ubuntu.com,azure.archive.ubuntu.com,bun.sh,cdn.jsdelivr.net,codeload.github.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,data.imf.org,data.riksdagen.se,deb.nodesource.com,deno.land,docs.github.com,esm.sh,get.pnpm.io,github-cloud.githubusercontent.com,github-cloud.s3.amazonaws.com,github.blog,github.com,github.githubassets.com,googleapis.deno.dev,googlechromelabs.github.io,hack23.com,hack23.github.io,host.docker.internal,json-schema.org,json.schemastore.org,jsr.io,keyserver.ubuntu.com,lfs.github.com,localhost,nodejs.org,npm.pkg.github.com,npmjs.com,npmjs.org,objects.githubusercontent.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,regeringen.se,registry.bower.io,registry.npmjs.com,registry.npmjs.org,registry.yarnpkg.com,repo.yarnpkg.com,riksdag-regering-ai.onrender.com,riksdagen.se,riksdagsmonitor.com,s.symcb.com,s.symcd.com,security.ubuntu.com,skimdb.npmjs.com,storage.googleapis.com,telemetry.enterprise.githubcopilot.com,telemetry.vercel.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com,www.googleapis.com,www.hack23.com,www.imf.org,www.npmjs.com,www.npmjs.org,www.regeringen.se,www.riksdagen.se,www.riksdagsmonitor.com,www.scb.se,yarnpkg.com" GITHUB_SERVER_URL: ${{ github.server_url }} GITHUB_API_URL: ${{ github.api_url }} - GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: 
"{\"add_comment\":{\"max\":1},\"create_pull_request\":{\"draft\":false,\"expires\":336,\"labels\":[\"agentic-news\",\"analysis-data\"],\"max\":1,\"max_patch_size\":4096,\"protected_files\":[\"package.json\",\"bun.lockb\",\"bunfig.toml\",\"deno.json\",\"deno.jsonc\",\"deno.lock\",\"global.json\",\"NuGet.Config\",\"Directory.Packages.props\",\"mix.exs\",\"mix.lock\",\"go.mod\",\"go.sum\",\"stack.yaml\",\"stack.yaml.lock\",\"pom.xml\",\"build.gradle\",\"build.gradle.kts\",\"settings.gradle\",\"settings.gradle.kts\",\"gradle.properties\",\"package-lock.json\",\"yarn.lock\",\"pnpm-lock.yaml\",\"npm-shrinkwrap.json\",\"requirements.txt\",\"Pipfile\",\"Pipfile.lock\",\"pyproject.toml\",\"setup.py\",\"setup.cfg\",\"Gemfile\",\"Gemfile.lock\",\"uv.lock\",\"CODEOWNERS\",\"AGENTS.md\",\"CLAUDE.md\",\"GEMINI.md\"],\"protected_path_prefixes\":[\".github/\",\".agents/\"]},\"create_report_incomplete_issue\":{},\"dispatch_workflow\":{\"max\":1,\"workflow_files\":{\"news-translate\":\".lock.yml\"},\"workflows\":[\"news-translate\"]},\"missing_data\":{},\"missing_tool\":{},\"noop\":{\"max\":1,\"report-as-issue\":\"true\"},\"report_incomplete\":{}}" + GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: 
"{\"add_comment\":{\"max\":1},\"create_pull_request\":{\"draft\":false,\"expires\":336,\"labels\":[\"agentic-news\",\"analysis-data\"],\"max\":1,\"max_patch_size\":4096,\"protected_files\":[\"package.json\",\"bun.lockb\",\"bunfig.toml\",\"deno.json\",\"deno.jsonc\",\"deno.lock\",\"global.json\",\"NuGet.Config\",\"Directory.Packages.props\",\"mix.exs\",\"mix.lock\",\"go.mod\",\"go.sum\",\"stack.yaml\",\"stack.yaml.lock\",\"pom.xml\",\"build.gradle\",\"build.gradle.kts\",\"settings.gradle\",\"settings.gradle.kts\",\"gradle.properties\",\"package-lock.json\",\"yarn.lock\",\"pnpm-lock.yaml\",\"npm-shrinkwrap.json\",\"requirements.txt\",\"Pipfile\",\"Pipfile.lock\",\"pyproject.toml\",\"setup.py\",\"setup.cfg\",\"Gemfile\",\"Gemfile.lock\",\"uv.lock\",\"CODEOWNERS\",\"AGENTS.md\",\"CLAUDE.md\",\"GEMINI.md\"],\"protected_path_prefixes\":[\".github/\",\".agents/\"]},\"create_report_incomplete_issue\":{},\"dispatch_workflow\":{\"aw_context_workflows\":[\"news-translate\"],\"max\":1,\"workflow_files\":{\"news-translate\":\".lock.yml\"},\"workflows\":[\"news-translate\"]},\"missing_data\":{},\"missing_tool\":{},\"noop\":{\"max\":1,\"report-as-issue\":\"true\"},\"report_incomplete\":{}}" GH_AW_CI_TRIGGER_TOKEN: ${{ secrets.GH_AW_CI_TRIGGER_TOKEN }} with: github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }} diff --git a/.github/workflows/news-motions.lock.yml b/.github/workflows/news-motions.lock.yml index 94776129f..e58271b51 100644 --- a/.github/workflows/news-motions.lock.yml +++ b/.github/workflows/news-motions.lock.yml @@ -499,7 +499,7 @@ jobs: mkdir -p /tmp/gh-aw/safeoutputs mkdir -p /tmp/gh-aw/mcp-logs/safeoutputs cat > "${RUNNER_TEMP}/gh-aw/safeoutputs/config.json" << 'GH_AW_SAFE_OUTPUTS_CONFIG_2544c0f641f092c4_EOF' - 
{"add_comment":{"max":1},"create_pull_request":{"draft":false,"expires":336,"labels":["agentic-news","analysis-data"],"max":1,"max_patch_size":4096,"protected_files":["package.json","bun.lockb","bunfig.toml","deno.json","deno.jsonc","deno.lock","global.json","NuGet.Config","Directory.Packages.props","mix.exs","mix.lock","go.mod","go.sum","stack.yaml","stack.yaml.lock","pom.xml","build.gradle","build.gradle.kts","settings.gradle","settings.gradle.kts","gradle.properties","package-lock.json","yarn.lock","pnpm-lock.yaml","npm-shrinkwrap.json","requirements.txt","Pipfile","Pipfile.lock","pyproject.toml","setup.py","setup.cfg","Gemfile","Gemfile.lock","uv.lock","CODEOWNERS","AGENTS.md","CLAUDE.md","GEMINI.md"],"protected_path_prefixes":[".github/",".agents/"]},"create_report_incomplete_issue":{},"dispatch_workflow":{"max":1,"workflow_files":{"news-translate":".lock.yml"},"workflows":["news-translate"]},"missing_data":{},"missing_tool":{},"noop":{"max":1,"report-as-issue":"true"},"push_repo_memory":{"memories":[{"dir":"/tmp/gh-aw/repo-memory/default","id":"default","max_file_count":50,"max_file_size":51200,"max_patch_size":51200}]},"report_incomplete":{}} + 
{"add_comment":{"max":1},"create_pull_request":{"draft":false,"expires":336,"labels":["agentic-news","analysis-data"],"max":1,"max_patch_size":4096,"protected_files":["package.json","bun.lockb","bunfig.toml","deno.json","deno.jsonc","deno.lock","global.json","NuGet.Config","Directory.Packages.props","mix.exs","mix.lock","go.mod","go.sum","stack.yaml","stack.yaml.lock","pom.xml","build.gradle","build.gradle.kts","settings.gradle","settings.gradle.kts","gradle.properties","package-lock.json","yarn.lock","pnpm-lock.yaml","npm-shrinkwrap.json","requirements.txt","Pipfile","Pipfile.lock","pyproject.toml","setup.py","setup.cfg","Gemfile","Gemfile.lock","uv.lock","CODEOWNERS","AGENTS.md","CLAUDE.md","GEMINI.md"],"protected_path_prefixes":[".github/",".agents/"]},"create_report_incomplete_issue":{},"dispatch_workflow":{"aw_context_workflows":["news-translate"],"max":1,"workflow_files":{"news-translate":".lock.yml"},"workflows":["news-translate"]},"missing_data":{},"missing_tool":{},"noop":{"max":1,"report-as-issue":"true"},"push_repo_memory":{"memories":[{"dir":"/tmp/gh-aw/repo-memory/default","id":"default","max_file_count":50,"max_file_size":51200,"max_patch_size":51200}]},"report_incomplete":{}} GH_AW_SAFE_OUTPUTS_CONFIG_2544c0f641f092c4_EOF - name: Write Safe Outputs Tools env: @@ -530,6 +530,11 @@ jobs: "description": "Article type to translate (propositions, motions, committee-reports, week-ahead, month-ahead, weekly-review, monthly-review, breaking, evening-analysis, deep-inspection, interpellations). Leave empty to scan for all untranslated articles.", "type": "string" }, + "aw_context": { + "default": "", + "description": "Agent caller context (used internally by Agentic Workflows).", + "type": "string" + }, "languages": { "default": "all-extra", "description": "Target languages (da,no,fi,de,fr,es,nl,ar,he,ja,ko,zh | nordic-extra | eu-extra | cjk | rtl | all-extra). 
Default: all-extra (all except en,sv)", @@ -1569,7 +1574,7 @@ jobs: GH_AW_ALLOWED_DOMAINS: "*.githubusercontent.com,api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.imf.org,api.individual.githubcopilot.com,api.npms.io,api.scb.se,api.snapcraft.io,api.worldbank.org,archive.ubuntu.com,azure.archive.ubuntu.com,bun.sh,cdn.jsdelivr.net,codeload.github.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,data.imf.org,data.riksdagen.se,deb.nodesource.com,deno.land,docs.github.com,esm.sh,get.pnpm.io,github-cloud.githubusercontent.com,github-cloud.s3.amazonaws.com,github.blog,github.com,github.githubassets.com,googleapis.deno.dev,googlechromelabs.github.io,hack23.com,hack23.github.io,host.docker.internal,json-schema.org,json.schemastore.org,jsr.io,keyserver.ubuntu.com,lfs.github.com,localhost,nodejs.org,npm.pkg.github.com,npmjs.com,npmjs.org,objects.githubusercontent.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,regeringen.se,registry.bower.io,registry.npmjs.com,registry.npmjs.org,registry.yarnpkg.com,repo.yarnpkg.com,riksdag-regering-ai.onrender.com,riksdagen.se,riksdagsmonitor.com,s.symcb.com,s.symcd.com,security.ubuntu.com,skimdb.npmjs.com,storage.googleapis.com,telemetry.enterprise.githubcopilot.com,telemetry.vercel.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com,www.googleapis.com,www.hack23.com,www.imf.org,www.npmjs.com,www.npmjs.org,www.regeringen.se,www.riksdagen.se,www.riksdagsmonitor.com,www.scb.se,yarnpkg.com" GITHUB_SERVER_URL: ${{ github.server_url }} GITHUB_API_URL: ${{ github.api_url }} - GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: 
"{\"add_comment\":{\"max\":1},\"create_pull_request\":{\"draft\":false,\"expires\":336,\"labels\":[\"agentic-news\",\"analysis-data\"],\"max\":1,\"max_patch_size\":4096,\"protected_files\":[\"package.json\",\"bun.lockb\",\"bunfig.toml\",\"deno.json\",\"deno.jsonc\",\"deno.lock\",\"global.json\",\"NuGet.Config\",\"Directory.Packages.props\",\"mix.exs\",\"mix.lock\",\"go.mod\",\"go.sum\",\"stack.yaml\",\"stack.yaml.lock\",\"pom.xml\",\"build.gradle\",\"build.gradle.kts\",\"settings.gradle\",\"settings.gradle.kts\",\"gradle.properties\",\"package-lock.json\",\"yarn.lock\",\"pnpm-lock.yaml\",\"npm-shrinkwrap.json\",\"requirements.txt\",\"Pipfile\",\"Pipfile.lock\",\"pyproject.toml\",\"setup.py\",\"setup.cfg\",\"Gemfile\",\"Gemfile.lock\",\"uv.lock\",\"CODEOWNERS\",\"AGENTS.md\",\"CLAUDE.md\",\"GEMINI.md\"],\"protected_path_prefixes\":[\".github/\",\".agents/\"]},\"create_report_incomplete_issue\":{},\"dispatch_workflow\":{\"max\":1,\"workflow_files\":{\"news-translate\":\".lock.yml\"},\"workflows\":[\"news-translate\"]},\"missing_data\":{},\"missing_tool\":{},\"noop\":{\"max\":1,\"report-as-issue\":\"true\"},\"report_incomplete\":{}}" + GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: 
"{\"add_comment\":{\"max\":1},\"create_pull_request\":{\"draft\":false,\"expires\":336,\"labels\":[\"agentic-news\",\"analysis-data\"],\"max\":1,\"max_patch_size\":4096,\"protected_files\":[\"package.json\",\"bun.lockb\",\"bunfig.toml\",\"deno.json\",\"deno.jsonc\",\"deno.lock\",\"global.json\",\"NuGet.Config\",\"Directory.Packages.props\",\"mix.exs\",\"mix.lock\",\"go.mod\",\"go.sum\",\"stack.yaml\",\"stack.yaml.lock\",\"pom.xml\",\"build.gradle\",\"build.gradle.kts\",\"settings.gradle\",\"settings.gradle.kts\",\"gradle.properties\",\"package-lock.json\",\"yarn.lock\",\"pnpm-lock.yaml\",\"npm-shrinkwrap.json\",\"requirements.txt\",\"Pipfile\",\"Pipfile.lock\",\"pyproject.toml\",\"setup.py\",\"setup.cfg\",\"Gemfile\",\"Gemfile.lock\",\"uv.lock\",\"CODEOWNERS\",\"AGENTS.md\",\"CLAUDE.md\",\"GEMINI.md\"],\"protected_path_prefixes\":[\".github/\",\".agents/\"]},\"create_report_incomplete_issue\":{},\"dispatch_workflow\":{\"aw_context_workflows\":[\"news-translate\"],\"max\":1,\"workflow_files\":{\"news-translate\":\".lock.yml\"},\"workflows\":[\"news-translate\"]},\"missing_data\":{},\"missing_tool\":{},\"noop\":{\"max\":1,\"report-as-issue\":\"true\"},\"report_incomplete\":{}}" GH_AW_CI_TRIGGER_TOKEN: ${{ secrets.GH_AW_CI_TRIGGER_TOKEN }} with: github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }} diff --git a/.github/workflows/news-propositions.lock.yml b/.github/workflows/news-propositions.lock.yml index 5e25a2faa..28269e84e 100644 --- a/.github/workflows/news-propositions.lock.yml +++ b/.github/workflows/news-propositions.lock.yml @@ -499,7 +499,7 @@ jobs: mkdir -p /tmp/gh-aw/safeoutputs mkdir -p /tmp/gh-aw/mcp-logs/safeoutputs cat > "${RUNNER_TEMP}/gh-aw/safeoutputs/config.json" << 'GH_AW_SAFE_OUTPUTS_CONFIG_adeff6bf6d40e8b2_EOF' - 
{"add_comment":{"max":1},"create_pull_request":{"draft":false,"expires":336,"labels":["agentic-news","analysis-data"],"max":1,"max_patch_size":4096,"protected_files":["package.json","bun.lockb","bunfig.toml","deno.json","deno.jsonc","deno.lock","global.json","NuGet.Config","Directory.Packages.props","mix.exs","mix.lock","go.mod","go.sum","stack.yaml","stack.yaml.lock","pom.xml","build.gradle","build.gradle.kts","settings.gradle","settings.gradle.kts","gradle.properties","package-lock.json","yarn.lock","pnpm-lock.yaml","npm-shrinkwrap.json","requirements.txt","Pipfile","Pipfile.lock","pyproject.toml","setup.py","setup.cfg","Gemfile","Gemfile.lock","uv.lock","CODEOWNERS","AGENTS.md","CLAUDE.md","GEMINI.md"],"protected_path_prefixes":[".github/",".agents/"]},"create_report_incomplete_issue":{},"dispatch_workflow":{"max":1,"workflow_files":{"news-translate":".lock.yml"},"workflows":["news-translate"]},"missing_data":{},"missing_tool":{},"noop":{"max":1,"report-as-issue":"true"},"push_repo_memory":{"memories":[{"dir":"/tmp/gh-aw/repo-memory/default","id":"default","max_file_count":50,"max_file_size":51200,"max_patch_size":51200}]},"report_incomplete":{}} + 
{"add_comment":{"max":1},"create_pull_request":{"draft":false,"expires":336,"labels":["agentic-news","analysis-data"],"max":1,"max_patch_size":4096,"protected_files":["package.json","bun.lockb","bunfig.toml","deno.json","deno.jsonc","deno.lock","global.json","NuGet.Config","Directory.Packages.props","mix.exs","mix.lock","go.mod","go.sum","stack.yaml","stack.yaml.lock","pom.xml","build.gradle","build.gradle.kts","settings.gradle","settings.gradle.kts","gradle.properties","package-lock.json","yarn.lock","pnpm-lock.yaml","npm-shrinkwrap.json","requirements.txt","Pipfile","Pipfile.lock","pyproject.toml","setup.py","setup.cfg","Gemfile","Gemfile.lock","uv.lock","CODEOWNERS","AGENTS.md","CLAUDE.md","GEMINI.md"],"protected_path_prefixes":[".github/",".agents/"]},"create_report_incomplete_issue":{},"dispatch_workflow":{"aw_context_workflows":["news-translate"],"max":1,"workflow_files":{"news-translate":".lock.yml"},"workflows":["news-translate"]},"missing_data":{},"missing_tool":{},"noop":{"max":1,"report-as-issue":"true"},"push_repo_memory":{"memories":[{"dir":"/tmp/gh-aw/repo-memory/default","id":"default","max_file_count":50,"max_file_size":51200,"max_patch_size":51200}]},"report_incomplete":{}} GH_AW_SAFE_OUTPUTS_CONFIG_adeff6bf6d40e8b2_EOF - name: Write Safe Outputs Tools env: @@ -530,6 +530,11 @@ jobs: "description": "Article type to translate (propositions, motions, committee-reports, week-ahead, month-ahead, weekly-review, monthly-review, breaking, evening-analysis, deep-inspection, interpellations). Leave empty to scan for all untranslated articles.", "type": "string" }, + "aw_context": { + "default": "", + "description": "Agent caller context (used internally by Agentic Workflows).", + "type": "string" + }, "languages": { "default": "all-extra", "description": "Target languages (da,no,fi,de,fr,es,nl,ar,he,ja,ko,zh | nordic-extra | eu-extra | cjk | rtl | all-extra). 
Default: all-extra (all except en,sv)", @@ -1569,7 +1574,7 @@ jobs: GH_AW_ALLOWED_DOMAINS: "*.githubusercontent.com,api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.imf.org,api.individual.githubcopilot.com,api.npms.io,api.scb.se,api.snapcraft.io,api.worldbank.org,archive.ubuntu.com,azure.archive.ubuntu.com,bun.sh,cdn.jsdelivr.net,codeload.github.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,data.imf.org,data.riksdagen.se,deb.nodesource.com,deno.land,docs.github.com,esm.sh,get.pnpm.io,github-cloud.githubusercontent.com,github-cloud.s3.amazonaws.com,github.blog,github.com,github.githubassets.com,googleapis.deno.dev,googlechromelabs.github.io,hack23.com,hack23.github.io,host.docker.internal,json-schema.org,json.schemastore.org,jsr.io,keyserver.ubuntu.com,lfs.github.com,localhost,nodejs.org,npm.pkg.github.com,npmjs.com,npmjs.org,objects.githubusercontent.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,regeringen.se,registry.bower.io,registry.npmjs.com,registry.npmjs.org,registry.yarnpkg.com,repo.yarnpkg.com,riksdag-regering-ai.onrender.com,riksdagen.se,riksdagsmonitor.com,s.symcb.com,s.symcd.com,security.ubuntu.com,skimdb.npmjs.com,storage.googleapis.com,telemetry.enterprise.githubcopilot.com,telemetry.vercel.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com,www.googleapis.com,www.hack23.com,www.imf.org,www.npmjs.com,www.npmjs.org,www.regeringen.se,www.riksdagen.se,www.riksdagsmonitor.com,www.scb.se,yarnpkg.com" GITHUB_SERVER_URL: ${{ github.server_url }} GITHUB_API_URL: ${{ github.api_url }} - GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: 
"{\"add_comment\":{\"max\":1},\"create_pull_request\":{\"draft\":false,\"expires\":336,\"labels\":[\"agentic-news\",\"analysis-data\"],\"max\":1,\"max_patch_size\":4096,\"protected_files\":[\"package.json\",\"bun.lockb\",\"bunfig.toml\",\"deno.json\",\"deno.jsonc\",\"deno.lock\",\"global.json\",\"NuGet.Config\",\"Directory.Packages.props\",\"mix.exs\",\"mix.lock\",\"go.mod\",\"go.sum\",\"stack.yaml\",\"stack.yaml.lock\",\"pom.xml\",\"build.gradle\",\"build.gradle.kts\",\"settings.gradle\",\"settings.gradle.kts\",\"gradle.properties\",\"package-lock.json\",\"yarn.lock\",\"pnpm-lock.yaml\",\"npm-shrinkwrap.json\",\"requirements.txt\",\"Pipfile\",\"Pipfile.lock\",\"pyproject.toml\",\"setup.py\",\"setup.cfg\",\"Gemfile\",\"Gemfile.lock\",\"uv.lock\",\"CODEOWNERS\",\"AGENTS.md\",\"CLAUDE.md\",\"GEMINI.md\"],\"protected_path_prefixes\":[\".github/\",\".agents/\"]},\"create_report_incomplete_issue\":{},\"dispatch_workflow\":{\"max\":1,\"workflow_files\":{\"news-translate\":\".lock.yml\"},\"workflows\":[\"news-translate\"]},\"missing_data\":{},\"missing_tool\":{},\"noop\":{\"max\":1,\"report-as-issue\":\"true\"},\"report_incomplete\":{}}" + GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: 
"{\"add_comment\":{\"max\":1},\"create_pull_request\":{\"draft\":false,\"expires\":336,\"labels\":[\"agentic-news\",\"analysis-data\"],\"max\":1,\"max_patch_size\":4096,\"protected_files\":[\"package.json\",\"bun.lockb\",\"bunfig.toml\",\"deno.json\",\"deno.jsonc\",\"deno.lock\",\"global.json\",\"NuGet.Config\",\"Directory.Packages.props\",\"mix.exs\",\"mix.lock\",\"go.mod\",\"go.sum\",\"stack.yaml\",\"stack.yaml.lock\",\"pom.xml\",\"build.gradle\",\"build.gradle.kts\",\"settings.gradle\",\"settings.gradle.kts\",\"gradle.properties\",\"package-lock.json\",\"yarn.lock\",\"pnpm-lock.yaml\",\"npm-shrinkwrap.json\",\"requirements.txt\",\"Pipfile\",\"Pipfile.lock\",\"pyproject.toml\",\"setup.py\",\"setup.cfg\",\"Gemfile\",\"Gemfile.lock\",\"uv.lock\",\"CODEOWNERS\",\"AGENTS.md\",\"CLAUDE.md\",\"GEMINI.md\"],\"protected_path_prefixes\":[\".github/\",\".agents/\"]},\"create_report_incomplete_issue\":{},\"dispatch_workflow\":{\"aw_context_workflows\":[\"news-translate\"],\"max\":1,\"workflow_files\":{\"news-translate\":\".lock.yml\"},\"workflows\":[\"news-translate\"]},\"missing_data\":{},\"missing_tool\":{},\"noop\":{\"max\":1,\"report-as-issue\":\"true\"},\"report_incomplete\":{}}" GH_AW_CI_TRIGGER_TOKEN: ${{ secrets.GH_AW_CI_TRIGGER_TOKEN }} with: github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }} diff --git a/.github/workflows/news-realtime-monitor.lock.yml b/.github/workflows/news-realtime-monitor.lock.yml index 192047be1..1fd8ad73c 100644 --- a/.github/workflows/news-realtime-monitor.lock.yml +++ b/.github/workflows/news-realtime-monitor.lock.yml @@ -507,7 +507,7 @@ jobs: mkdir -p /tmp/gh-aw/safeoutputs mkdir -p /tmp/gh-aw/mcp-logs/safeoutputs cat > "${RUNNER_TEMP}/gh-aw/safeoutputs/config.json" << 'GH_AW_SAFE_OUTPUTS_CONFIG_0b861792875b19ae_EOF' - 
{"add_comment":{"max":1},"create_pull_request":{"draft":false,"expires":336,"labels":["agentic-news","analysis-data"],"max":1,"max_patch_size":4096,"protected_files":["package.json","bun.lockb","bunfig.toml","deno.json","deno.jsonc","deno.lock","global.json","NuGet.Config","Directory.Packages.props","mix.exs","mix.lock","go.mod","go.sum","stack.yaml","stack.yaml.lock","pom.xml","build.gradle","build.gradle.kts","settings.gradle","settings.gradle.kts","gradle.properties","package-lock.json","yarn.lock","pnpm-lock.yaml","npm-shrinkwrap.json","requirements.txt","Pipfile","Pipfile.lock","pyproject.toml","setup.py","setup.cfg","Gemfile","Gemfile.lock","uv.lock","CODEOWNERS","AGENTS.md","CLAUDE.md","GEMINI.md"],"protected_path_prefixes":[".github/",".agents/"]},"create_report_incomplete_issue":{},"dispatch_workflow":{"max":1,"workflow_files":{"news-translate":".lock.yml"},"workflows":["news-translate"]},"missing_data":{},"missing_tool":{},"noop":{"max":1,"report-as-issue":"true"},"push_repo_memory":{"memories":[{"dir":"/tmp/gh-aw/repo-memory/default","id":"default","max_file_count":50,"max_file_size":51200,"max_patch_size":51200}]},"report_incomplete":{}} + 
{"add_comment":{"max":1},"create_pull_request":{"draft":false,"expires":336,"labels":["agentic-news","analysis-data"],"max":1,"max_patch_size":4096,"protected_files":["package.json","bun.lockb","bunfig.toml","deno.json","deno.jsonc","deno.lock","global.json","NuGet.Config","Directory.Packages.props","mix.exs","mix.lock","go.mod","go.sum","stack.yaml","stack.yaml.lock","pom.xml","build.gradle","build.gradle.kts","settings.gradle","settings.gradle.kts","gradle.properties","package-lock.json","yarn.lock","pnpm-lock.yaml","npm-shrinkwrap.json","requirements.txt","Pipfile","Pipfile.lock","pyproject.toml","setup.py","setup.cfg","Gemfile","Gemfile.lock","uv.lock","CODEOWNERS","AGENTS.md","CLAUDE.md","GEMINI.md"],"protected_path_prefixes":[".github/",".agents/"]},"create_report_incomplete_issue":{},"dispatch_workflow":{"aw_context_workflows":["news-translate"],"max":1,"workflow_files":{"news-translate":".lock.yml"},"workflows":["news-translate"]},"missing_data":{},"missing_tool":{},"noop":{"max":1,"report-as-issue":"true"},"push_repo_memory":{"memories":[{"dir":"/tmp/gh-aw/repo-memory/default","id":"default","max_file_count":50,"max_file_size":51200,"max_patch_size":51200}]},"report_incomplete":{}} GH_AW_SAFE_OUTPUTS_CONFIG_0b861792875b19ae_EOF - name: Write Safe Outputs Tools env: @@ -538,6 +538,11 @@ jobs: "description": "Article type to translate (propositions, motions, committee-reports, week-ahead, month-ahead, weekly-review, monthly-review, breaking, evening-analysis, deep-inspection, interpellations). Leave empty to scan for all untranslated articles.", "type": "string" }, + "aw_context": { + "default": "", + "description": "Agent caller context (used internally by Agentic Workflows).", + "type": "string" + }, "languages": { "default": "all-extra", "description": "Target languages (da,no,fi,de,fr,es,nl,ar,he,ja,ko,zh | nordic-extra | eu-extra | cjk | rtl | all-extra). 
Default: all-extra (all except en,sv)", @@ -1593,7 +1598,7 @@ jobs: GH_AW_ALLOWED_DOMAINS: "*.githubusercontent.com,api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.imf.org,api.individual.githubcopilot.com,api.npms.io,api.scb.se,api.snapcraft.io,api.worldbank.org,archive.ubuntu.com,azure.archive.ubuntu.com,bun.sh,cdn.jsdelivr.net,cdn.playwright.dev,codeload.github.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,data.imf.org,data.riksdagen.se,deb.nodesource.com,deno.land,docs.github.com,esm.sh,get.pnpm.io,github-cloud.githubusercontent.com,github-cloud.s3.amazonaws.com,github.blog,github.com,github.githubassets.com,googleapis.deno.dev,googlechromelabs.github.io,hack23.com,hack23.github.io,host.docker.internal,json-schema.org,json.schemastore.org,jsr.io,keyserver.ubuntu.com,lfs.github.com,localhost,nodejs.org,npm.pkg.github.com,npmjs.com,npmjs.org,objects.githubusercontent.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,playwright.download.prss.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,regeringen.se,registry.bower.io,registry.npmjs.com,registry.npmjs.org,registry.yarnpkg.com,repo.yarnpkg.com,riksdag-regering-ai.onrender.com,riksdagen.se,riksdagsmonitor.com,s.symcb.com,s.symcd.com,security.ubuntu.com,skimdb.npmjs.com,storage.googleapis.com,telemetry.enterprise.githubcopilot.com,telemetry.vercel.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com,www.googleapis.com,www.hack23.com,www.imf.org,www.npmjs.com,www.npmjs.org,www.regeringen.se,www.riksdagen.se,www.riksdagsmonitor.com,www.scb.se,yarnpkg.com" GITHUB_SERVER_URL: ${{ github.server_url }} GITHUB_API_URL: ${{ github.api_url }} - 
GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: "{\"add_comment\":{\"max\":1},\"create_pull_request\":{\"draft\":false,\"expires\":336,\"labels\":[\"agentic-news\",\"analysis-data\"],\"max\":1,\"max_patch_size\":4096,\"protected_files\":[\"package.json\",\"bun.lockb\",\"bunfig.toml\",\"deno.json\",\"deno.jsonc\",\"deno.lock\",\"global.json\",\"NuGet.Config\",\"Directory.Packages.props\",\"mix.exs\",\"mix.lock\",\"go.mod\",\"go.sum\",\"stack.yaml\",\"stack.yaml.lock\",\"pom.xml\",\"build.gradle\",\"build.gradle.kts\",\"settings.gradle\",\"settings.gradle.kts\",\"gradle.properties\",\"package-lock.json\",\"yarn.lock\",\"pnpm-lock.yaml\",\"npm-shrinkwrap.json\",\"requirements.txt\",\"Pipfile\",\"Pipfile.lock\",\"pyproject.toml\",\"setup.py\",\"setup.cfg\",\"Gemfile\",\"Gemfile.lock\",\"uv.lock\",\"CODEOWNERS\",\"AGENTS.md\",\"CLAUDE.md\",\"GEMINI.md\"],\"protected_path_prefixes\":[\".github/\",\".agents/\"]},\"create_report_incomplete_issue\":{},\"dispatch_workflow\":{\"max\":1,\"workflow_files\":{\"news-translate\":\".lock.yml\"},\"workflows\":[\"news-translate\"]},\"missing_data\":{},\"missing_tool\":{},\"noop\":{\"max\":1,\"report-as-issue\":\"true\"},\"report_incomplete\":{}}" + GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: 
"{\"add_comment\":{\"max\":1},\"create_pull_request\":{\"draft\":false,\"expires\":336,\"labels\":[\"agentic-news\",\"analysis-data\"],\"max\":1,\"max_patch_size\":4096,\"protected_files\":[\"package.json\",\"bun.lockb\",\"bunfig.toml\",\"deno.json\",\"deno.jsonc\",\"deno.lock\",\"global.json\",\"NuGet.Config\",\"Directory.Packages.props\",\"mix.exs\",\"mix.lock\",\"go.mod\",\"go.sum\",\"stack.yaml\",\"stack.yaml.lock\",\"pom.xml\",\"build.gradle\",\"build.gradle.kts\",\"settings.gradle\",\"settings.gradle.kts\",\"gradle.properties\",\"package-lock.json\",\"yarn.lock\",\"pnpm-lock.yaml\",\"npm-shrinkwrap.json\",\"requirements.txt\",\"Pipfile\",\"Pipfile.lock\",\"pyproject.toml\",\"setup.py\",\"setup.cfg\",\"Gemfile\",\"Gemfile.lock\",\"uv.lock\",\"CODEOWNERS\",\"AGENTS.md\",\"CLAUDE.md\",\"GEMINI.md\"],\"protected_path_prefixes\":[\".github/\",\".agents/\"]},\"create_report_incomplete_issue\":{},\"dispatch_workflow\":{\"aw_context_workflows\":[\"news-translate\"],\"max\":1,\"workflow_files\":{\"news-translate\":\".lock.yml\"},\"workflows\":[\"news-translate\"]},\"missing_data\":{},\"missing_tool\":{},\"noop\":{\"max\":1,\"report-as-issue\":\"true\"},\"report_incomplete\":{}}" GH_AW_CI_TRIGGER_TOKEN: ${{ secrets.GH_AW_CI_TRIGGER_TOKEN }} with: github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }} diff --git a/.gitignore b/.gitignore index 1d3f422d0..db5419d72 100644 --- a/.gitignore +++ b/.gitignore @@ -80,3 +80,6 @@ news/metadata/locks/*.lock/ main-page-full.png cia-dashboard.png builds/ + +# Pass-1 snapshots used by the analysis gate (see .github/prompts/05-analysis-gate.md) +analysis/daily/*/*/pass1/ diff --git a/analysis/imf/README.md b/analysis/imf/README.md index bf9bfeb51..cc10cf882 100644 --- a/analysis/imf/README.md +++ b/analysis/imf/README.md @@ -84,5 +84,5 @@ IMF advertises **~10 req / 5 s**. 
The client and agentic workflows MUST: - `analysis/imf/indicator-policy-mapping.md` — which IMF indicators feed which committees - `analysis/imf/use-cases.md` — canonical article examples - `.github/aw/ECONOMIC_DATA_CONTRACT.md` — v2.0 contract (data artefact shape, validator gates) -- `.github/aw/.github/prompts/ (see README)` — "Economic Indicator Reference" +- [`.github/prompts/README.md`](../../.github/prompts/README.md) — shared news-workflow prompt library, including the "Economic Indicator Reference" - `docs/adr/0001-adopt-imf-data-alongside-world-bank.md` — architecture decision diff --git a/analysis/methodologies/ai-driven-analysis-guide.md b/analysis/methodologies/ai-driven-analysis-guide.md index dd36cc2e0..1da6bff9e 100644 --- a/analysis/methodologies/ai-driven-analysis-guide.md +++ b/analysis/methodologies/ai-driven-analysis-guide.md @@ -2181,7 +2181,7 @@ Every synthesis-level analysis MUST include a historical comparison with: | [political-style-guide.md](political-style-guide.md) | Writing and formatting standards | | [SWOT.md](../../SWOT.md) | **Formatting exemplar** (platform SWOT) | | [THREAT_MODEL.md](../../THREAT_MODEL.md) | **Formatting exemplar** (platform threat model) | -| [.github/prompts/ (see README)](../../.github/workflows/.github/prompts/ (see README)) | **Shared news workflow prompts** — quality enforcement | +| [.github/prompts/README.md](../../.github/prompts/README.md) | **Shared news workflow prompts** — quality enforcement | --- From 065706282030d3e2d4b65218ed08b743eacd34e8 Mon Sep 17 00:00:00 2001 From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com> Date: Wed, 22 Apr 2026 01:14:04 +0000 Subject: [PATCH 06/21] Merge main + harden gate scripts: set/assert ANALYSIS_DIR in 05-gate and tier-c Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/b700709c-6dbc-4bec-ac87-82c6879b673b Co-authored-by: pethers <1726836+pethers@users.noreply.github.com> --- .github/aw/actions-lock.json | 12 +++---- 
 .github/prompts/05-analysis-gate.md           |  4 +++
 .github/prompts/ext/tier-c-aggregation.md     |  3 ++
 .github/workflows/agentics-maintenance.yml    | 34 +++++++++----------
 .../workflows/news-article-generator.lock.yml | 28 +++++++--------
 .../workflows/news-committee-reports.lock.yml | 28 +++++++--------
 .../workflows/news-evening-analysis.lock.yml  | 28 +++++++--------
 .../workflows/news-interpellations.lock.yml   | 28 +++++++--------
 .github/workflows/news-month-ahead.lock.yml   | 28 +++++++--------
 .../workflows/news-monthly-review.lock.yml    | 28 +++++++--------
 .github/workflows/news-motions.lock.yml       | 28 +++++++--------
 .github/workflows/news-propositions.lock.yml  | 28 +++++++--------
 .../workflows/news-realtime-monitor.lock.yml  | 28 +++++++--------
 .github/workflows/news-translate.lock.yml     | 28 +++++++--------
 .github/workflows/news-week-ahead.lock.yml    | 28 +++++++--------
 .../workflows/news-weekly-review.lock.yml     | 28 +++++++--------
 16 files changed, 198 insertions(+), 191 deletions(-)

diff --git a/.github/aw/actions-lock.json b/.github/aw/actions-lock.json
index 54b39d01a..d9909e89d 100644
--- a/.github/aw/actions-lock.json
+++ b/.github/aw/actions-lock.json
@@ -40,15 +40,15 @@
     "version": "v7.0.1",
     "sha": "043fb46d1a93c77aae656e7c1c64a875d1fc6a0a"
   },
-  "github/gh-aw-actions/setup-cli@v0.69.2": {
+  "github/gh-aw-actions/setup-cli@v0.69.3": {
     "repo": "github/gh-aw-actions/setup-cli",
-    "version": "v0.69.2",
-    "sha": "dca90cae5e2ec0ef2275f97efcb832793c86e082"
+    "version": "v0.69.3",
+    "sha": "006ffd856b868b71df342dbe0ba082a963249b31"
   },
-  "github/gh-aw-actions/setup@v0.69.2": {
+  "github/gh-aw-actions/setup@v0.69.3": {
     "repo": "github/gh-aw-actions/setup",
-    "version": "v0.69.2",
-    "sha": "dca90cae5e2ec0ef2275f97efcb832793c86e082"
+    "version": "v0.69.3",
+    "sha": "006ffd856b868b71df342dbe0ba082a963249b31"
   },
   "github/gh-aw/actions/setup@v0.43.18": {
     "repo": "github/gh-aw/actions/setup",
diff --git a/.github/prompts/05-analysis-gate.md b/.github/prompts/05-analysis-gate.md
index 73d85d9d9..8e69148f2 100644
--- a/.github/prompts/05-analysis-gate.md
+++ b/.github/prompts/05-analysis-gate.md
@@ -23,6 +23,10 @@ No dedicated validator script exists yet — implement the six checks as an inli
 
 ```
 set -Eeuo pipefail
+: "${ARTICLE_DATE:?ARTICLE_DATE must be set}"
+: "${SUBFOLDER:?SUBFOLDER must be set}"
+ANALYSIS_DIR="analysis/daily/$ARTICLE_DATE/$SUBFOLDER"
+[ -d "$ANALYSIS_DIR" ] || { echo "❌ ANALYSIS_DIR does not exist: $ANALYSIS_DIR"; exit 1; }
 REQ=(synthesis-summary.md swot-analysis.md risk-assessment.md threat-analysis.md \
   stakeholder-perspectives.md significance-scoring.md classification-results.md \
   cross-reference-map.md data-download-manifest.md)
diff --git a/.github/prompts/ext/tier-c-aggregation.md b/.github/prompts/ext/tier-c-aggregation.md
index 812878ee2..616f883ad 100644
--- a/.github/prompts/ext/tier-c-aggregation.md
+++ b/.github/prompts/ext/tier-c-aggregation.md
@@ -66,6 +66,9 @@ No dedicated Tier-C validator script exists — run the core-gate bash block fro
 
 ```
 set -Eeuo pipefail
+ANALYSIS_DIR="${ANALYSIS_DIR:-}"
+[ -n "$ANALYSIS_DIR" ] || { echo "❌ ANALYSIS_DIR is not set; run the core-gate block from 05-analysis-gate.md first"; exit 1; }
+[ -d "$ANALYSIS_DIR" ] || { echo "❌ ANALYSIS_DIR does not exist: $ANALYSIS_DIR"; exit 1; }
 EXTRA="README.md executive-brief.md scenario-analysis.md \
   comparative-international.md methodology-reflection.md"
 FAIL=0
diff --git a/.github/workflows/agentics-maintenance.yml b/.github/workflows/agentics-maintenance.yml
index 01e6bb3e4..cbebc5efe 100644
--- a/.github/workflows/agentics-maintenance.yml
+++ b/.github/workflows/agentics-maintenance.yml
@@ -12,7 +12,7 @@
 #  \ /\ / (_) | | | | ( | | | | (_) \ V V /\__ \
 #   \/ \/ \___/|_| |_|\_\|_| |_|\___/ \_/\_/ |___/
 #
-# This file was automatically generated by pkg/workflow/maintenance_workflow.go (v0.69.2). DO NOT EDIT.
+# This file was automatically generated by pkg/workflow/maintenance_workflow.go (v0.69.3). DO NOT EDIT.
# # To regenerate this workflow, run: # gh aw compile @@ -91,7 +91,7 @@ jobs: pull-requests: write steps: - name: Setup Scripts - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions @@ -129,7 +129,7 @@ jobs: actions: write steps: - name: Setup Scripts - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions @@ -158,7 +158,7 @@ jobs: persist-credentials: false - name: Setup Scripts - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions @@ -173,9 +173,9 @@ jobs: await main(); - name: Install gh-aw - uses: github/gh-aw-actions/setup-cli@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup-cli@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: - version: v0.69.2 + version: v0.69.3 - name: Run operation uses: actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9 @@ -215,7 +215,7 @@ jobs: persist-credentials: false - name: Setup Scripts - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions @@ -259,7 +259,7 @@ jobs: persist-credentials: false - name: Setup Scripts - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions @@ -274,9 +274,9 @@ jobs: await main(); - name: Install gh-aw - uses: 
github/gh-aw-actions/setup-cli@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup-cli@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: - version: v0.69.2 + version: v0.69.3 - name: Create missing labels uses: actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9 @@ -305,7 +305,7 @@ jobs: persist-credentials: false - name: Setup Scripts - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions @@ -320,9 +320,9 @@ jobs: await main(); - name: Install gh-aw - uses: github/gh-aw-actions/setup-cli@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup-cli@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: - version: v0.69.2 + version: v0.69.3 - name: Cache activity report logs uses: actions/cache@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5 @@ -353,7 +353,7 @@ jobs: issues: write steps: - name: Setup Scripts - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions @@ -390,7 +390,7 @@ jobs: persist-credentials: false - name: Setup Scripts - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions @@ -405,9 +405,9 @@ jobs: await main(); - name: Install gh-aw - uses: github/gh-aw-actions/setup-cli@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup-cli@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: - version: v0.69.2 + version: v0.69.3 - name: Validate workflows and file issue on findings uses: 
actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9 diff --git a/.github/workflows/news-article-generator.lock.yml b/.github/workflows/news-article-generator.lock.yml index 6769792e5..7e682132e 100644 --- a/.github/workflows/news-article-generator.lock.yml +++ b/.github/workflows/news-article-generator.lock.yml @@ -1,5 +1,5 @@ -# gh-aw-metadata: {"schema_version":"v3","frontmatter_hash":"4390716293e06a2a234a6472a5cc3a514d2dab3fb7f9e819765e27068ea9148e","compiler_version":"v0.69.2","agent_id":"copilot","agent_model":"claude-opus-4.7"} -# gh-aw-manifest: {"version":1,"secrets":["COPILOT_GITHUB_TOKEN","GH_AW_CI_TRIGGER_TOKEN","GH_AW_GITHUB_MCP_SERVER_TOKEN","GH_AW_GITHUB_TOKEN","GITHUB_TOKEN"],"actions":[{"repo":"actions/checkout","sha":"de0fac2e4500dabe0009e67214ff5f5447ce83dd","version":"v6.0.2"},{"repo":"actions/download-artifact","sha":"3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c","version":"v8.0.1"},{"repo":"actions/github-script","sha":"373c709c69115d41ff229c7e5df9f8788daa9553","version":"v9"},{"repo":"actions/setup-node","sha":"6044e13b5dc448c55e2357c09f80417699197238","version":"6044e13b5dc448c55e2357c09f80417699197238"},{"repo":"actions/upload-artifact","sha":"043fb46d1a93c77aae656e7c1c64a875d1fc6a0a","version":"v7.0.1"},{"repo":"github/gh-aw-actions/setup","sha":"dca90cae5e2ec0ef2275f97efcb832793c86e082","version":"v0.69.2"}],"containers":[{"image":"alpine:latest"},{"image":"ghcr.io/github/gh-aw-firewall/agent:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/api-proxy:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/squid:0.25.26"},{"image":"ghcr.io/github/gh-aw-mcpg:v0.2.26"},{"image":"ghcr.io/github/github-mcp-server:v1.0.0"},{"image":"mcr.microsoft.com/playwright/mcp"},{"image":"node:25-alpine"},{"image":"node:lts-alpine"}]} +# gh-aw-metadata: {"schema_version":"v3","frontmatter_hash":"4390716293e06a2a234a6472a5cc3a514d2dab3fb7f9e819765e27068ea9148e","compiler_version":"v0.69.3","agent_id":"copilot","agent_model":"claude-opus-4.7"} +# 
gh-aw-manifest: {"version":1,"secrets":["COPILOT_GITHUB_TOKEN","GH_AW_CI_TRIGGER_TOKEN","GH_AW_GITHUB_MCP_SERVER_TOKEN","GH_AW_GITHUB_TOKEN","GITHUB_TOKEN"],"actions":[{"repo":"actions/checkout","sha":"de0fac2e4500dabe0009e67214ff5f5447ce83dd","version":"v6.0.2"},{"repo":"actions/download-artifact","sha":"3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c","version":"v8.0.1"},{"repo":"actions/github-script","sha":"373c709c69115d41ff229c7e5df9f8788daa9553","version":"v9"},{"repo":"actions/setup-node","sha":"6044e13b5dc448c55e2357c09f80417699197238","version":"6044e13b5dc448c55e2357c09f80417699197238"},{"repo":"actions/upload-artifact","sha":"043fb46d1a93c77aae656e7c1c64a875d1fc6a0a","version":"v7.0.1"},{"repo":"github/gh-aw-actions/setup","sha":"006ffd856b868b71df342dbe0ba082a963249b31","version":"v0.69.3"}],"containers":[{"image":"alpine:latest"},{"image":"ghcr.io/github/gh-aw-firewall/agent:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/api-proxy:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/squid:0.25.26"},{"image":"ghcr.io/github/gh-aw-mcpg:v0.2.26"},{"image":"ghcr.io/github/github-mcp-server:v1.0.0"},{"image":"mcr.microsoft.com/playwright/mcp"},{"image":"node:25-alpine"},{"image":"node:lts-alpine"}]} # ___ _ _ # / _ \ | | (_) # | |_| | __ _ ___ _ __ | |_ _ ___ @@ -14,7 +14,7 @@ # \ /\ / (_) | | | | ( | | | | (_) \ V V /\__ \ # \/ \/ \___/|_| |_|\_\|_| |_|\___/ \_/\_/ |___/ # -# This file was automatically generated by gh-aw (v0.69.2). DO NOT EDIT. +# This file was automatically generated by gh-aw (v0.69.3). DO NOT EDIT. 
# # To update this file, edit the corresponding .md file and run: # gh aw compile @@ -49,7 +49,7 @@ # - actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9 # - actions/setup-node@6044e13b5dc448c55e2357c09f80417699197238 # 6044e13b5dc448c55e2357c09f80417699197238 # - actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1 -# - github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 +# - github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 # # Container images used: # - alpine:latest @@ -122,7 +122,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -134,7 +134,7 @@ jobs: GH_AW_INFO_MODEL: "claude-opus-4.7" GH_AW_INFO_VERSION: "1.0.21" GH_AW_INFO_AGENT_VERSION: "1.0.21" - GH_AW_INFO_CLI_VERSION: "v0.69.2" + GH_AW_INFO_CLI_VERSION: "v0.69.3" GH_AW_INFO_WORKFLOW_NAME: "News: Article Generator (Manual)" GH_AW_INFO_EXPERIMENTAL: "false" GH_AW_INFO_SUPPORTS_TOOLS_ALLOWLIST: "true" @@ -192,7 +192,7 @@ jobs: - name: Check compile-agentic version uses: actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9 env: - GH_AW_COMPILED_VERSION: "v0.69.2" + GH_AW_COMPILED_VERSION: "v0.69.3" with: script: | const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); @@ -387,7 +387,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -949,7 +949,7 @@ jobs: GH_AW_SAFE_OUTPUTS: ${{ steps.set-runtime-paths.outputs.GH_AW_SAFE_OUTPUTS }} GH_AW_STARTUP_TIMEOUT: 180 
GH_AW_TOOL_TIMEOUT: 120 - GH_AW_VERSION: v0.69.2 + GH_AW_VERSION: v0.69.3 GITHUB_API_URL: ${{ github.api_url }} GITHUB_AW: true GITHUB_COPILOT_INTEGRATION_ID: agentic-workflows @@ -1147,7 +1147,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -1280,7 +1280,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -1389,7 +1389,7 @@ jobs: COPILOT_MODEL: claude-opus-4.7 GH_AW_PHASE: detection GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt - GH_AW_VERSION: v0.69.2 + GH_AW_VERSION: v0.69.3 GITHUB_API_URL: ${{ github.api_url }} GITHUB_AW: true GITHUB_COPILOT_INTEGRATION_ID: agentic-workflows @@ -1445,7 +1445,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -1534,7 +1534,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} diff --git a/.github/workflows/news-committee-reports.lock.yml b/.github/workflows/news-committee-reports.lock.yml index d930c6843..5b6a20a8f 100644 --- a/.github/workflows/news-committee-reports.lock.yml +++ b/.github/workflows/news-committee-reports.lock.yml @@ 
-1,5 +1,5 @@ -# gh-aw-metadata: {"schema_version":"v3","frontmatter_hash":"85260a8672a99d9dc6656fd713ac8b302d0edb3fecc12c978b6e83fe9e42ff0b","compiler_version":"v0.69.2","agent_id":"copilot","agent_model":"claude-opus-4.7"} -# gh-aw-manifest: {"version":1,"secrets":["COPILOT_GITHUB_TOKEN","GH_AW_CI_TRIGGER_TOKEN","GH_AW_GITHUB_MCP_SERVER_TOKEN","GH_AW_GITHUB_TOKEN","GITHUB_TOKEN"],"actions":[{"repo":"actions/checkout","sha":"de0fac2e4500dabe0009e67214ff5f5447ce83dd","version":"v6.0.2"},{"repo":"actions/download-artifact","sha":"3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c","version":"v8.0.1"},{"repo":"actions/github-script","sha":"373c709c69115d41ff229c7e5df9f8788daa9553","version":"v9"},{"repo":"actions/setup-node","sha":"6044e13b5dc448c55e2357c09f80417699197238","version":"6044e13b5dc448c55e2357c09f80417699197238"},{"repo":"actions/upload-artifact","sha":"043fb46d1a93c77aae656e7c1c64a875d1fc6a0a","version":"v7.0.1"},{"repo":"github/gh-aw-actions/setup","sha":"dca90cae5e2ec0ef2275f97efcb832793c86e082","version":"v0.69.2"}],"containers":[{"image":"alpine:latest"},{"image":"ghcr.io/github/gh-aw-firewall/agent:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/api-proxy:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/squid:0.25.26"},{"image":"ghcr.io/github/gh-aw-mcpg:v0.2.26"},{"image":"ghcr.io/github/github-mcp-server:v1.0.0"},{"image":"node:25-alpine"},{"image":"node:lts-alpine"}]} +# gh-aw-metadata: {"schema_version":"v3","frontmatter_hash":"85260a8672a99d9dc6656fd713ac8b302d0edb3fecc12c978b6e83fe9e42ff0b","compiler_version":"v0.69.3","agent_id":"copilot","agent_model":"claude-opus-4.7"} +# gh-aw-manifest: 
{"version":1,"secrets":["COPILOT_GITHUB_TOKEN","GH_AW_CI_TRIGGER_TOKEN","GH_AW_GITHUB_MCP_SERVER_TOKEN","GH_AW_GITHUB_TOKEN","GITHUB_TOKEN"],"actions":[{"repo":"actions/checkout","sha":"de0fac2e4500dabe0009e67214ff5f5447ce83dd","version":"v6.0.2"},{"repo":"actions/download-artifact","sha":"3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c","version":"v8.0.1"},{"repo":"actions/github-script","sha":"373c709c69115d41ff229c7e5df9f8788daa9553","version":"v9"},{"repo":"actions/setup-node","sha":"6044e13b5dc448c55e2357c09f80417699197238","version":"6044e13b5dc448c55e2357c09f80417699197238"},{"repo":"actions/upload-artifact","sha":"043fb46d1a93c77aae656e7c1c64a875d1fc6a0a","version":"v7.0.1"},{"repo":"github/gh-aw-actions/setup","sha":"006ffd856b868b71df342dbe0ba082a963249b31","version":"v0.69.3"}],"containers":[{"image":"alpine:latest"},{"image":"ghcr.io/github/gh-aw-firewall/agent:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/api-proxy:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/squid:0.25.26"},{"image":"ghcr.io/github/gh-aw-mcpg:v0.2.26"},{"image":"ghcr.io/github/github-mcp-server:v1.0.0"},{"image":"node:25-alpine"},{"image":"node:lts-alpine"}]} # ___ _ _ # / _ \ | | (_) # | |_| | __ _ ___ _ __ | |_ _ ___ @@ -14,7 +14,7 @@ # \ /\ / (_) | | | | ( | | | | (_) \ V V /\__ \ # \/ \/ \___/|_| |_|\_\|_| |_|\___/ \_/\_/ |___/ # -# This file was automatically generated by gh-aw (v0.69.2). DO NOT EDIT. +# This file was automatically generated by gh-aw (v0.69.3). DO NOT EDIT. 
# # To update this file, edit the corresponding .md file and run: # gh aw compile @@ -48,7 +48,7 @@ # - actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9 # - actions/setup-node@6044e13b5dc448c55e2357c09f80417699197238 # 6044e13b5dc448c55e2357c09f80417699197238 # - actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1 -# - github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 +# - github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 # # Container images used: # - alpine:latest @@ -114,7 +114,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -126,7 +126,7 @@ jobs: GH_AW_INFO_MODEL: "claude-opus-4.7" GH_AW_INFO_VERSION: "1.0.21" GH_AW_INFO_AGENT_VERSION: "1.0.21" - GH_AW_INFO_CLI_VERSION: "v0.69.2" + GH_AW_INFO_CLI_VERSION: "v0.69.3" GH_AW_INFO_WORKFLOW_NAME: "News: Committee Reports" GH_AW_INFO_EXPERIMENTAL: "false" GH_AW_INFO_SUPPORTS_TOOLS_ALLOWLIST: "true" @@ -184,7 +184,7 @@ jobs: - name: Check compile-agentic version uses: actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9 env: - GH_AW_COMPILED_VERSION: "v0.69.2" + GH_AW_COMPILED_VERSION: "v0.69.3" with: script: | const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); @@ -379,7 +379,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -925,7 +925,7 @@ jobs: GH_AW_SAFE_OUTPUTS: ${{ steps.set-runtime-paths.outputs.GH_AW_SAFE_OUTPUTS }} GH_AW_STARTUP_TIMEOUT: 180 GH_AW_TOOL_TIMEOUT: 
120 - GH_AW_VERSION: v0.69.2 + GH_AW_VERSION: v0.69.3 GITHUB_API_URL: ${{ github.api_url }} GITHUB_AW: true GITHUB_COPILOT_INTEGRATION_ID: agentic-workflows @@ -1123,7 +1123,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -1256,7 +1256,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -1365,7 +1365,7 @@ jobs: COPILOT_MODEL: claude-opus-4.7 GH_AW_PHASE: detection GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt - GH_AW_VERSION: v0.69.2 + GH_AW_VERSION: v0.69.3 GITHUB_API_URL: ${{ github.api_url }} GITHUB_AW: true GITHUB_COPILOT_INTEGRATION_ID: agentic-workflows @@ -1421,7 +1421,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -1510,7 +1510,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} diff --git a/.github/workflows/news-evening-analysis.lock.yml b/.github/workflows/news-evening-analysis.lock.yml index 4b1f698ce..e12f0bf5d 100644 --- a/.github/workflows/news-evening-analysis.lock.yml +++ b/.github/workflows/news-evening-analysis.lock.yml @@ -1,5 +1,5 @@ -# 
gh-aw-metadata: {"schema_version":"v3","frontmatter_hash":"ba1723d47e9431f150544397019675615b429c75e50a650fd099bd8d64e86959","compiler_version":"v0.69.2","agent_id":"copilot","agent_model":"claude-opus-4.7"} -# gh-aw-manifest: {"version":1,"secrets":["COPILOT_GITHUB_TOKEN","GH_AW_CI_TRIGGER_TOKEN","GH_AW_GITHUB_MCP_SERVER_TOKEN","GH_AW_GITHUB_TOKEN","GITHUB_TOKEN"],"actions":[{"repo":"actions/checkout","sha":"de0fac2e4500dabe0009e67214ff5f5447ce83dd","version":"v6.0.2"},{"repo":"actions/download-artifact","sha":"3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c","version":"v8.0.1"},{"repo":"actions/github-script","sha":"373c709c69115d41ff229c7e5df9f8788daa9553","version":"v9"},{"repo":"actions/setup-node","sha":"6044e13b5dc448c55e2357c09f80417699197238","version":"6044e13b5dc448c55e2357c09f80417699197238"},{"repo":"actions/upload-artifact","sha":"043fb46d1a93c77aae656e7c1c64a875d1fc6a0a","version":"v7.0.1"},{"repo":"github/gh-aw-actions/setup","sha":"dca90cae5e2ec0ef2275f97efcb832793c86e082","version":"v0.69.2"}],"containers":[{"image":"alpine:latest"},{"image":"ghcr.io/github/gh-aw-firewall/agent:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/api-proxy:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/squid:0.25.26"},{"image":"ghcr.io/github/gh-aw-mcpg:v0.2.26"},{"image":"ghcr.io/github/github-mcp-server:v1.0.0"},{"image":"mcr.microsoft.com/playwright/mcp"},{"image":"node:25-alpine"},{"image":"node:lts-alpine"}]} +# gh-aw-metadata: {"schema_version":"v3","frontmatter_hash":"ba1723d47e9431f150544397019675615b429c75e50a650fd099bd8d64e86959","compiler_version":"v0.69.3","agent_id":"copilot","agent_model":"claude-opus-4.7"} +# gh-aw-manifest: 
{"version":1,"secrets":["COPILOT_GITHUB_TOKEN","GH_AW_CI_TRIGGER_TOKEN","GH_AW_GITHUB_MCP_SERVER_TOKEN","GH_AW_GITHUB_TOKEN","GITHUB_TOKEN"],"actions":[{"repo":"actions/checkout","sha":"de0fac2e4500dabe0009e67214ff5f5447ce83dd","version":"v6.0.2"},{"repo":"actions/download-artifact","sha":"3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c","version":"v8.0.1"},{"repo":"actions/github-script","sha":"373c709c69115d41ff229c7e5df9f8788daa9553","version":"v9"},{"repo":"actions/setup-node","sha":"6044e13b5dc448c55e2357c09f80417699197238","version":"6044e13b5dc448c55e2357c09f80417699197238"},{"repo":"actions/upload-artifact","sha":"043fb46d1a93c77aae656e7c1c64a875d1fc6a0a","version":"v7.0.1"},{"repo":"github/gh-aw-actions/setup","sha":"006ffd856b868b71df342dbe0ba082a963249b31","version":"v0.69.3"}],"containers":[{"image":"alpine:latest"},{"image":"ghcr.io/github/gh-aw-firewall/agent:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/api-proxy:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/squid:0.25.26"},{"image":"ghcr.io/github/gh-aw-mcpg:v0.2.26"},{"image":"ghcr.io/github/github-mcp-server:v1.0.0"},{"image":"mcr.microsoft.com/playwright/mcp"},{"image":"node:25-alpine"},{"image":"node:lts-alpine"}]} # ___ _ _ # / _ \ | | (_) # | |_| | __ _ ___ _ __ | |_ _ ___ @@ -14,7 +14,7 @@ # \ /\ / (_) | | | | ( | | | | (_) \ V V /\__ \ # \/ \/ \___/|_| |_|\_\|_| |_|\___/ \_/\_/ |___/ # -# This file was automatically generated by gh-aw (v0.69.2). DO NOT EDIT. +# This file was automatically generated by gh-aw (v0.69.3). DO NOT EDIT. 
# # To update this file, edit the corresponding .md file and run: # gh aw compile @@ -49,7 +49,7 @@ # - actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9 # - actions/setup-node@6044e13b5dc448c55e2357c09f80417699197238 # 6044e13b5dc448c55e2357c09f80417699197238 # - actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1 -# - github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 +# - github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 # # Container images used: # - alpine:latest @@ -119,7 +119,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -131,7 +131,7 @@ jobs: GH_AW_INFO_MODEL: "claude-opus-4.7" GH_AW_INFO_VERSION: "1.0.21" GH_AW_INFO_AGENT_VERSION: "1.0.21" - GH_AW_INFO_CLI_VERSION: "v0.69.2" + GH_AW_INFO_CLI_VERSION: "v0.69.3" GH_AW_INFO_WORKFLOW_NAME: "News Evening Analysis" GH_AW_INFO_EXPERIMENTAL: "false" GH_AW_INFO_SUPPORTS_TOOLS_ALLOWLIST: "true" @@ -189,7 +189,7 @@ jobs: - name: Check compile-agentic version uses: actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9 env: - GH_AW_COMPILED_VERSION: "v0.69.2" + GH_AW_COMPILED_VERSION: "v0.69.3" with: script: | const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); @@ -386,7 +386,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -948,7 +948,7 @@ jobs: GH_AW_SAFE_OUTPUTS: ${{ steps.set-runtime-paths.outputs.GH_AW_SAFE_OUTPUTS }} GH_AW_STARTUP_TIMEOUT: 180 GH_AW_TOOL_TIMEOUT: 
120 - GH_AW_VERSION: v0.69.2 + GH_AW_VERSION: v0.69.3 GITHUB_API_URL: ${{ github.api_url }} GITHUB_AW: true GITHUB_COPILOT_INTEGRATION_ID: agentic-workflows @@ -1146,7 +1146,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -1279,7 +1279,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -1388,7 +1388,7 @@ jobs: COPILOT_MODEL: claude-opus-4.7 GH_AW_PHASE: detection GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt - GH_AW_VERSION: v0.69.2 + GH_AW_VERSION: v0.69.3 GITHUB_API_URL: ${{ github.api_url }} GITHUB_AW: true GITHUB_COPILOT_INTEGRATION_ID: agentic-workflows @@ -1444,7 +1444,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -1533,7 +1533,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} diff --git a/.github/workflows/news-interpellations.lock.yml b/.github/workflows/news-interpellations.lock.yml index 1abac98ca..b660a68a8 100644 --- a/.github/workflows/news-interpellations.lock.yml +++ b/.github/workflows/news-interpellations.lock.yml @@ -1,5 +1,5 @@ -# 
gh-aw-metadata: {"schema_version":"v3","frontmatter_hash":"d1b5eeff6d85f73e5e2f3a27c545b9db288de3dd45c2ff7fb2b07a66cf60bc33","compiler_version":"v0.69.2","agent_id":"copilot","agent_model":"claude-opus-4.7"} -# gh-aw-manifest: {"version":1,"secrets":["COPILOT_GITHUB_TOKEN","GH_AW_CI_TRIGGER_TOKEN","GH_AW_GITHUB_MCP_SERVER_TOKEN","GH_AW_GITHUB_TOKEN","GITHUB_TOKEN"],"actions":[{"repo":"actions/checkout","sha":"de0fac2e4500dabe0009e67214ff5f5447ce83dd","version":"v6.0.2"},{"repo":"actions/download-artifact","sha":"3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c","version":"v8.0.1"},{"repo":"actions/github-script","sha":"373c709c69115d41ff229c7e5df9f8788daa9553","version":"v9"},{"repo":"actions/setup-node","sha":"6044e13b5dc448c55e2357c09f80417699197238","version":"6044e13b5dc448c55e2357c09f80417699197238"},{"repo":"actions/upload-artifact","sha":"043fb46d1a93c77aae656e7c1c64a875d1fc6a0a","version":"v7.0.1"},{"repo":"github/gh-aw-actions/setup","sha":"dca90cae5e2ec0ef2275f97efcb832793c86e082","version":"v0.69.2"}],"containers":[{"image":"alpine:latest"},{"image":"ghcr.io/github/gh-aw-firewall/agent:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/api-proxy:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/squid:0.25.26"},{"image":"ghcr.io/github/gh-aw-mcpg:v0.2.26"},{"image":"ghcr.io/github/github-mcp-server:v1.0.0"},{"image":"node:25-alpine"},{"image":"node:lts-alpine"}]} +# gh-aw-metadata: {"schema_version":"v3","frontmatter_hash":"d1b5eeff6d85f73e5e2f3a27c545b9db288de3dd45c2ff7fb2b07a66cf60bc33","compiler_version":"v0.69.3","agent_id":"copilot","agent_model":"claude-opus-4.7"} +# gh-aw-manifest: 
{"version":1,"secrets":["COPILOT_GITHUB_TOKEN","GH_AW_CI_TRIGGER_TOKEN","GH_AW_GITHUB_MCP_SERVER_TOKEN","GH_AW_GITHUB_TOKEN","GITHUB_TOKEN"],"actions":[{"repo":"actions/checkout","sha":"de0fac2e4500dabe0009e67214ff5f5447ce83dd","version":"v6.0.2"},{"repo":"actions/download-artifact","sha":"3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c","version":"v8.0.1"},{"repo":"actions/github-script","sha":"373c709c69115d41ff229c7e5df9f8788daa9553","version":"v9"},{"repo":"actions/setup-node","sha":"6044e13b5dc448c55e2357c09f80417699197238","version":"6044e13b5dc448c55e2357c09f80417699197238"},{"repo":"actions/upload-artifact","sha":"043fb46d1a93c77aae656e7c1c64a875d1fc6a0a","version":"v7.0.1"},{"repo":"github/gh-aw-actions/setup","sha":"006ffd856b868b71df342dbe0ba082a963249b31","version":"v0.69.3"}],"containers":[{"image":"alpine:latest"},{"image":"ghcr.io/github/gh-aw-firewall/agent:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/api-proxy:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/squid:0.25.26"},{"image":"ghcr.io/github/gh-aw-mcpg:v0.2.26"},{"image":"ghcr.io/github/github-mcp-server:v1.0.0"},{"image":"node:25-alpine"},{"image":"node:lts-alpine"}]} # ___ _ _ # / _ \ | | (_) # | |_| | __ _ ___ _ __ | |_ _ ___ @@ -14,7 +14,7 @@ # \ /\ / (_) | | | | ( | | | | (_) \ V V /\__ \ # \/ \/ \___/|_| |_|\_\|_| |_|\___/ \_/\_/ |___/ # -# This file was automatically generated by gh-aw (v0.69.2). DO NOT EDIT. +# This file was automatically generated by gh-aw (v0.69.3). DO NOT EDIT. 
# # To update this file, edit the corresponding .md file and run: # gh aw compile @@ -48,7 +48,7 @@ # - actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9 # - actions/setup-node@6044e13b5dc448c55e2357c09f80417699197238 # 6044e13b5dc448c55e2357c09f80417699197238 # - actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1 -# - github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 +# - github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 # # Container images used: # - alpine:latest @@ -114,7 +114,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -126,7 +126,7 @@ jobs: GH_AW_INFO_MODEL: "claude-opus-4.7" GH_AW_INFO_VERSION: "1.0.21" GH_AW_INFO_AGENT_VERSION: "1.0.21" - GH_AW_INFO_CLI_VERSION: "v0.69.2" + GH_AW_INFO_CLI_VERSION: "v0.69.3" GH_AW_INFO_WORKFLOW_NAME: "News: Interpellation Debates" GH_AW_INFO_EXPERIMENTAL: "false" GH_AW_INFO_SUPPORTS_TOOLS_ALLOWLIST: "true" @@ -184,7 +184,7 @@ jobs: - name: Check compile-agentic version uses: actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9 env: - GH_AW_COMPILED_VERSION: "v0.69.2" + GH_AW_COMPILED_VERSION: "v0.69.3" with: script: | const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); @@ -379,7 +379,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -925,7 +925,7 @@ jobs: GH_AW_SAFE_OUTPUTS: ${{ steps.set-runtime-paths.outputs.GH_AW_SAFE_OUTPUTS }} GH_AW_STARTUP_TIMEOUT: 180 
GH_AW_TOOL_TIMEOUT: 120 - GH_AW_VERSION: v0.69.2 + GH_AW_VERSION: v0.69.3 GITHUB_API_URL: ${{ github.api_url }} GITHUB_AW: true GITHUB_COPILOT_INTEGRATION_ID: agentic-workflows @@ -1123,7 +1123,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -1256,7 +1256,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -1365,7 +1365,7 @@ jobs: COPILOT_MODEL: claude-opus-4.7 GH_AW_PHASE: detection GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt - GH_AW_VERSION: v0.69.2 + GH_AW_VERSION: v0.69.3 GITHUB_API_URL: ${{ github.api_url }} GITHUB_AW: true GITHUB_COPILOT_INTEGRATION_ID: agentic-workflows @@ -1421,7 +1421,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -1510,7 +1510,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} diff --git a/.github/workflows/news-month-ahead.lock.yml b/.github/workflows/news-month-ahead.lock.yml index 63269474d..a4b8a8697 100644 --- a/.github/workflows/news-month-ahead.lock.yml +++ b/.github/workflows/news-month-ahead.lock.yml @@ -1,5 +1,5 @@ -# 
gh-aw-metadata: {"schema_version":"v3","frontmatter_hash":"edc0474901ac6a1bac6847c0bd8635b6adbacbd4709e2109d31070a72a53068e","compiler_version":"v0.69.2","agent_id":"copilot","agent_model":"claude-opus-4.7"} -# gh-aw-manifest: {"version":1,"secrets":["COPILOT_GITHUB_TOKEN","GH_AW_CI_TRIGGER_TOKEN","GH_AW_GITHUB_MCP_SERVER_TOKEN","GH_AW_GITHUB_TOKEN","GITHUB_TOKEN"],"actions":[{"repo":"actions/checkout","sha":"de0fac2e4500dabe0009e67214ff5f5447ce83dd","version":"v6.0.2"},{"repo":"actions/download-artifact","sha":"3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c","version":"v8.0.1"},{"repo":"actions/github-script","sha":"373c709c69115d41ff229c7e5df9f8788daa9553","version":"v9"},{"repo":"actions/setup-node","sha":"6044e13b5dc448c55e2357c09f80417699197238","version":"6044e13b5dc448c55e2357c09f80417699197238"},{"repo":"actions/upload-artifact","sha":"043fb46d1a93c77aae656e7c1c64a875d1fc6a0a","version":"v7.0.1"},{"repo":"github/gh-aw-actions/setup","sha":"dca90cae5e2ec0ef2275f97efcb832793c86e082","version":"v0.69.2"}],"containers":[{"image":"alpine:latest"},{"image":"ghcr.io/github/gh-aw-firewall/agent:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/api-proxy:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/squid:0.25.26"},{"image":"ghcr.io/github/gh-aw-mcpg:v0.2.26"},{"image":"ghcr.io/github/github-mcp-server:v1.0.0"},{"image":"node:25-alpine"},{"image":"node:lts-alpine"}]} +# gh-aw-metadata: {"schema_version":"v3","frontmatter_hash":"edc0474901ac6a1bac6847c0bd8635b6adbacbd4709e2109d31070a72a53068e","compiler_version":"v0.69.3","agent_id":"copilot","agent_model":"claude-opus-4.7"} +# gh-aw-manifest: 
{"version":1,"secrets":["COPILOT_GITHUB_TOKEN","GH_AW_CI_TRIGGER_TOKEN","GH_AW_GITHUB_MCP_SERVER_TOKEN","GH_AW_GITHUB_TOKEN","GITHUB_TOKEN"],"actions":[{"repo":"actions/checkout","sha":"de0fac2e4500dabe0009e67214ff5f5447ce83dd","version":"v6.0.2"},{"repo":"actions/download-artifact","sha":"3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c","version":"v8.0.1"},{"repo":"actions/github-script","sha":"373c709c69115d41ff229c7e5df9f8788daa9553","version":"v9"},{"repo":"actions/setup-node","sha":"6044e13b5dc448c55e2357c09f80417699197238","version":"6044e13b5dc448c55e2357c09f80417699197238"},{"repo":"actions/upload-artifact","sha":"043fb46d1a93c77aae656e7c1c64a875d1fc6a0a","version":"v7.0.1"},{"repo":"github/gh-aw-actions/setup","sha":"006ffd856b868b71df342dbe0ba082a963249b31","version":"v0.69.3"}],"containers":[{"image":"alpine:latest"},{"image":"ghcr.io/github/gh-aw-firewall/agent:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/api-proxy:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/squid:0.25.26"},{"image":"ghcr.io/github/gh-aw-mcpg:v0.2.26"},{"image":"ghcr.io/github/github-mcp-server:v1.0.0"},{"image":"node:25-alpine"},{"image":"node:lts-alpine"}]} # ___ _ _ # / _ \ | | (_) # | |_| | __ _ ___ _ __ | |_ _ ___ @@ -14,7 +14,7 @@ # \ /\ / (_) | | | | ( | | | | (_) \ V V /\__ \ # \/ \/ \___/|_| |_|\_\|_| |_|\___/ \_/\_/ |___/ # -# This file was automatically generated by gh-aw (v0.69.2). DO NOT EDIT. +# This file was automatically generated by gh-aw (v0.69.3). DO NOT EDIT. 
# # To update this file, edit the corresponding .md file and run: # gh aw compile @@ -49,7 +49,7 @@ # - actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9 # - actions/setup-node@6044e13b5dc448c55e2357c09f80417699197238 # 6044e13b5dc448c55e2357c09f80417699197238 # - actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1 -# - github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 +# - github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 # # Container images used: # - alpine:latest @@ -114,7 +114,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -126,7 +126,7 @@ jobs: GH_AW_INFO_MODEL: "claude-opus-4.7" GH_AW_INFO_VERSION: "1.0.21" GH_AW_INFO_AGENT_VERSION: "1.0.21" - GH_AW_INFO_CLI_VERSION: "v0.69.2" + GH_AW_INFO_CLI_VERSION: "v0.69.3" GH_AW_INFO_WORKFLOW_NAME: "News: Month Ahead" GH_AW_INFO_EXPERIMENTAL: "false" GH_AW_INFO_SUPPORTS_TOOLS_ALLOWLIST: "true" @@ -184,7 +184,7 @@ jobs: - name: Check compile-agentic version uses: actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9 env: - GH_AW_COMPILED_VERSION: "v0.69.2" + GH_AW_COMPILED_VERSION: "v0.69.3" with: script: | const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); @@ -380,7 +380,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -926,7 +926,7 @@ jobs: GH_AW_SAFE_OUTPUTS: ${{ steps.set-runtime-paths.outputs.GH_AW_SAFE_OUTPUTS }} GH_AW_STARTUP_TIMEOUT: 180 GH_AW_TOOL_TIMEOUT: 120 - 
GH_AW_VERSION: v0.69.2 + GH_AW_VERSION: v0.69.3 GITHUB_API_URL: ${{ github.api_url }} GITHUB_AW: true GITHUB_COPILOT_INTEGRATION_ID: agentic-workflows @@ -1124,7 +1124,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -1257,7 +1257,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -1366,7 +1366,7 @@ jobs: COPILOT_MODEL: claude-opus-4.7 GH_AW_PHASE: detection GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt - GH_AW_VERSION: v0.69.2 + GH_AW_VERSION: v0.69.3 GITHUB_API_URL: ${{ github.api_url }} GITHUB_AW: true GITHUB_COPILOT_INTEGRATION_ID: agentic-workflows @@ -1422,7 +1422,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -1511,7 +1511,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} diff --git a/.github/workflows/news-monthly-review.lock.yml b/.github/workflows/news-monthly-review.lock.yml index 985716e57..9ef48d4c9 100644 --- a/.github/workflows/news-monthly-review.lock.yml +++ b/.github/workflows/news-monthly-review.lock.yml @@ -1,5 +1,5 @@ -# gh-aw-metadata: 
{"schema_version":"v3","frontmatter_hash":"ee594adff70b242159d9488e1d78e721087832bc47d821be24c08a81ac3e3c9f","compiler_version":"v0.69.2","agent_id":"copilot","agent_model":"claude-opus-4.7"} -# gh-aw-manifest: {"version":1,"secrets":["COPILOT_GITHUB_TOKEN","GH_AW_CI_TRIGGER_TOKEN","GH_AW_GITHUB_MCP_SERVER_TOKEN","GH_AW_GITHUB_TOKEN","GITHUB_TOKEN"],"actions":[{"repo":"actions/checkout","sha":"de0fac2e4500dabe0009e67214ff5f5447ce83dd","version":"v6.0.2"},{"repo":"actions/download-artifact","sha":"3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c","version":"v8.0.1"},{"repo":"actions/github-script","sha":"373c709c69115d41ff229c7e5df9f8788daa9553","version":"v9"},{"repo":"actions/setup-node","sha":"6044e13b5dc448c55e2357c09f80417699197238","version":"6044e13b5dc448c55e2357c09f80417699197238"},{"repo":"actions/upload-artifact","sha":"043fb46d1a93c77aae656e7c1c64a875d1fc6a0a","version":"v7.0.1"},{"repo":"github/gh-aw-actions/setup","sha":"dca90cae5e2ec0ef2275f97efcb832793c86e082","version":"v0.69.2"}],"containers":[{"image":"alpine:latest"},{"image":"ghcr.io/github/gh-aw-firewall/agent:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/api-proxy:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/squid:0.25.26"},{"image":"ghcr.io/github/gh-aw-mcpg:v0.2.26"},{"image":"ghcr.io/github/github-mcp-server:v1.0.0"},{"image":"node:25-alpine"},{"image":"node:lts-alpine"}]} +# gh-aw-metadata: {"schema_version":"v3","frontmatter_hash":"ee594adff70b242159d9488e1d78e721087832bc47d821be24c08a81ac3e3c9f","compiler_version":"v0.69.3","agent_id":"copilot","agent_model":"claude-opus-4.7"} +# gh-aw-manifest: 
{"version":1,"secrets":["COPILOT_GITHUB_TOKEN","GH_AW_CI_TRIGGER_TOKEN","GH_AW_GITHUB_MCP_SERVER_TOKEN","GH_AW_GITHUB_TOKEN","GITHUB_TOKEN"],"actions":[{"repo":"actions/checkout","sha":"de0fac2e4500dabe0009e67214ff5f5447ce83dd","version":"v6.0.2"},{"repo":"actions/download-artifact","sha":"3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c","version":"v8.0.1"},{"repo":"actions/github-script","sha":"373c709c69115d41ff229c7e5df9f8788daa9553","version":"v9"},{"repo":"actions/setup-node","sha":"6044e13b5dc448c55e2357c09f80417699197238","version":"6044e13b5dc448c55e2357c09f80417699197238"},{"repo":"actions/upload-artifact","sha":"043fb46d1a93c77aae656e7c1c64a875d1fc6a0a","version":"v7.0.1"},{"repo":"github/gh-aw-actions/setup","sha":"006ffd856b868b71df342dbe0ba082a963249b31","version":"v0.69.3"}],"containers":[{"image":"alpine:latest"},{"image":"ghcr.io/github/gh-aw-firewall/agent:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/api-proxy:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/squid:0.25.26"},{"image":"ghcr.io/github/gh-aw-mcpg:v0.2.26"},{"image":"ghcr.io/github/github-mcp-server:v1.0.0"},{"image":"node:25-alpine"},{"image":"node:lts-alpine"}]} # ___ _ _ # / _ \ | | (_) # | |_| | __ _ ___ _ __ | |_ _ ___ @@ -14,7 +14,7 @@ # \ /\ / (_) | | | | ( | | | | (_) \ V V /\__ \ # \/ \/ \___/|_| |_|\_\|_| |_|\___/ \_/\_/ |___/ # -# This file was automatically generated by gh-aw (v0.69.2). DO NOT EDIT. +# This file was automatically generated by gh-aw (v0.69.3). DO NOT EDIT. 
# # To update this file, edit the corresponding .md file and run: # gh aw compile @@ -49,7 +49,7 @@ # - actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9 # - actions/setup-node@6044e13b5dc448c55e2357c09f80417699197238 # 6044e13b5dc448c55e2357c09f80417699197238 # - actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1 -# - github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 +# - github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 # # Container images used: # - alpine:latest @@ -114,7 +114,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -126,7 +126,7 @@ jobs: GH_AW_INFO_MODEL: "claude-opus-4.7" GH_AW_INFO_VERSION: "1.0.21" GH_AW_INFO_AGENT_VERSION: "1.0.21" - GH_AW_INFO_CLI_VERSION: "v0.69.2" + GH_AW_INFO_CLI_VERSION: "v0.69.3" GH_AW_INFO_WORKFLOW_NAME: "News: Monthly Review" GH_AW_INFO_EXPERIMENTAL: "false" GH_AW_INFO_SUPPORTS_TOOLS_ALLOWLIST: "true" @@ -184,7 +184,7 @@ jobs: - name: Check compile-agentic version uses: actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9 env: - GH_AW_COMPILED_VERSION: "v0.69.2" + GH_AW_COMPILED_VERSION: "v0.69.3" with: script: | const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); @@ -380,7 +380,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -926,7 +926,7 @@ jobs: GH_AW_SAFE_OUTPUTS: ${{ steps.set-runtime-paths.outputs.GH_AW_SAFE_OUTPUTS }} GH_AW_STARTUP_TIMEOUT: 180 GH_AW_TOOL_TIMEOUT: 120 
- GH_AW_VERSION: v0.69.2 + GH_AW_VERSION: v0.69.3 GITHUB_API_URL: ${{ github.api_url }} GITHUB_AW: true GITHUB_COPILOT_INTEGRATION_ID: agentic-workflows @@ -1124,7 +1124,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -1257,7 +1257,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -1366,7 +1366,7 @@ jobs: COPILOT_MODEL: claude-opus-4.7 GH_AW_PHASE: detection GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt - GH_AW_VERSION: v0.69.2 + GH_AW_VERSION: v0.69.3 GITHUB_API_URL: ${{ github.api_url }} GITHUB_AW: true GITHUB_COPILOT_INTEGRATION_ID: agentic-workflows @@ -1422,7 +1422,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -1511,7 +1511,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} diff --git a/.github/workflows/news-motions.lock.yml b/.github/workflows/news-motions.lock.yml index e58271b51..f36c6c44a 100644 --- a/.github/workflows/news-motions.lock.yml +++ b/.github/workflows/news-motions.lock.yml @@ -1,5 +1,5 @@ -# gh-aw-metadata: 
{"schema_version":"v3","frontmatter_hash":"87a02ea5489fdb3c025fcde6dacf95304e98f381a077a17c03bd38a79a74a564","compiler_version":"v0.69.2","agent_id":"copilot","agent_model":"claude-opus-4.7"} -# gh-aw-manifest: {"version":1,"secrets":["COPILOT_GITHUB_TOKEN","GH_AW_CI_TRIGGER_TOKEN","GH_AW_GITHUB_MCP_SERVER_TOKEN","GH_AW_GITHUB_TOKEN","GITHUB_TOKEN"],"actions":[{"repo":"actions/checkout","sha":"de0fac2e4500dabe0009e67214ff5f5447ce83dd","version":"v6.0.2"},{"repo":"actions/download-artifact","sha":"3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c","version":"v8.0.1"},{"repo":"actions/github-script","sha":"373c709c69115d41ff229c7e5df9f8788daa9553","version":"v9"},{"repo":"actions/setup-node","sha":"6044e13b5dc448c55e2357c09f80417699197238","version":"6044e13b5dc448c55e2357c09f80417699197238"},{"repo":"actions/upload-artifact","sha":"043fb46d1a93c77aae656e7c1c64a875d1fc6a0a","version":"v7.0.1"},{"repo":"github/gh-aw-actions/setup","sha":"dca90cae5e2ec0ef2275f97efcb832793c86e082","version":"v0.69.2"}],"containers":[{"image":"alpine:latest"},{"image":"ghcr.io/github/gh-aw-firewall/agent:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/api-proxy:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/squid:0.25.26"},{"image":"ghcr.io/github/gh-aw-mcpg:v0.2.26"},{"image":"ghcr.io/github/github-mcp-server:v1.0.0"},{"image":"node:25-alpine"},{"image":"node:lts-alpine"}]} +# gh-aw-metadata: {"schema_version":"v3","frontmatter_hash":"87a02ea5489fdb3c025fcde6dacf95304e98f381a077a17c03bd38a79a74a564","compiler_version":"v0.69.3","agent_id":"copilot","agent_model":"claude-opus-4.7"} +# gh-aw-manifest: 
{"version":1,"secrets":["COPILOT_GITHUB_TOKEN","GH_AW_CI_TRIGGER_TOKEN","GH_AW_GITHUB_MCP_SERVER_TOKEN","GH_AW_GITHUB_TOKEN","GITHUB_TOKEN"],"actions":[{"repo":"actions/checkout","sha":"de0fac2e4500dabe0009e67214ff5f5447ce83dd","version":"v6.0.2"},{"repo":"actions/download-artifact","sha":"3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c","version":"v8.0.1"},{"repo":"actions/github-script","sha":"373c709c69115d41ff229c7e5df9f8788daa9553","version":"v9"},{"repo":"actions/setup-node","sha":"6044e13b5dc448c55e2357c09f80417699197238","version":"6044e13b5dc448c55e2357c09f80417699197238"},{"repo":"actions/upload-artifact","sha":"043fb46d1a93c77aae656e7c1c64a875d1fc6a0a","version":"v7.0.1"},{"repo":"github/gh-aw-actions/setup","sha":"006ffd856b868b71df342dbe0ba082a963249b31","version":"v0.69.3"}],"containers":[{"image":"alpine:latest"},{"image":"ghcr.io/github/gh-aw-firewall/agent:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/api-proxy:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/squid:0.25.26"},{"image":"ghcr.io/github/gh-aw-mcpg:v0.2.26"},{"image":"ghcr.io/github/github-mcp-server:v1.0.0"},{"image":"node:25-alpine"},{"image":"node:lts-alpine"}]} # ___ _ _ # / _ \ | | (_) # | |_| | __ _ ___ _ __ | |_ _ ___ @@ -14,7 +14,7 @@ # \ /\ / (_) | | | | ( | | | | (_) \ V V /\__ \ # \/ \/ \___/|_| |_|\_\|_| |_|\___/ \_/\_/ |___/ # -# This file was automatically generated by gh-aw (v0.69.2). DO NOT EDIT. +# This file was automatically generated by gh-aw (v0.69.3). DO NOT EDIT. 
# # To update this file, edit the corresponding .md file and run: # gh aw compile @@ -48,7 +48,7 @@ # - actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9 # - actions/setup-node@6044e13b5dc448c55e2357c09f80417699197238 # 6044e13b5dc448c55e2357c09f80417699197238 # - actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1 -# - github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 +# - github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 # # Container images used: # - alpine:latest @@ -114,7 +114,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -126,7 +126,7 @@ jobs: GH_AW_INFO_MODEL: "claude-opus-4.7" GH_AW_INFO_VERSION: "1.0.21" GH_AW_INFO_AGENT_VERSION: "1.0.21" - GH_AW_INFO_CLI_VERSION: "v0.69.2" + GH_AW_INFO_CLI_VERSION: "v0.69.3" GH_AW_INFO_WORKFLOW_NAME: "News: Opposition Motions" GH_AW_INFO_EXPERIMENTAL: "false" GH_AW_INFO_SUPPORTS_TOOLS_ALLOWLIST: "true" @@ -184,7 +184,7 @@ jobs: - name: Check compile-agentic version uses: actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9 env: - GH_AW_COMPILED_VERSION: "v0.69.2" + GH_AW_COMPILED_VERSION: "v0.69.3" with: script: | const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); @@ -379,7 +379,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -925,7 +925,7 @@ jobs: GH_AW_SAFE_OUTPUTS: ${{ steps.set-runtime-paths.outputs.GH_AW_SAFE_OUTPUTS }} GH_AW_STARTUP_TIMEOUT: 180 GH_AW_TOOL_TIMEOUT: 
120 - GH_AW_VERSION: v0.69.2 + GH_AW_VERSION: v0.69.3 GITHUB_API_URL: ${{ github.api_url }} GITHUB_AW: true GITHUB_COPILOT_INTEGRATION_ID: agentic-workflows @@ -1123,7 +1123,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -1256,7 +1256,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -1365,7 +1365,7 @@ jobs: COPILOT_MODEL: claude-opus-4.7 GH_AW_PHASE: detection GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt - GH_AW_VERSION: v0.69.2 + GH_AW_VERSION: v0.69.3 GITHUB_API_URL: ${{ github.api_url }} GITHUB_AW: true GITHUB_COPILOT_INTEGRATION_ID: agentic-workflows @@ -1421,7 +1421,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -1510,7 +1510,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} diff --git a/.github/workflows/news-propositions.lock.yml b/.github/workflows/news-propositions.lock.yml index 28269e84e..aa8c1afc9 100644 --- a/.github/workflows/news-propositions.lock.yml +++ b/.github/workflows/news-propositions.lock.yml @@ -1,5 +1,5 @@ -# gh-aw-metadata: 
{"schema_version":"v3","frontmatter_hash":"2a8b4b58de9bb9d5945610c91ca9ea2075bd51f86c626342cc201679c4bf9007","compiler_version":"v0.69.2","agent_id":"copilot","agent_model":"claude-opus-4.7"} -# gh-aw-manifest: {"version":1,"secrets":["COPILOT_GITHUB_TOKEN","GH_AW_CI_TRIGGER_TOKEN","GH_AW_GITHUB_MCP_SERVER_TOKEN","GH_AW_GITHUB_TOKEN","GITHUB_TOKEN"],"actions":[{"repo":"actions/checkout","sha":"de0fac2e4500dabe0009e67214ff5f5447ce83dd","version":"v6.0.2"},{"repo":"actions/download-artifact","sha":"3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c","version":"v8.0.1"},{"repo":"actions/github-script","sha":"373c709c69115d41ff229c7e5df9f8788daa9553","version":"v9"},{"repo":"actions/setup-node","sha":"6044e13b5dc448c55e2357c09f80417699197238","version":"6044e13b5dc448c55e2357c09f80417699197238"},{"repo":"actions/upload-artifact","sha":"043fb46d1a93c77aae656e7c1c64a875d1fc6a0a","version":"v7.0.1"},{"repo":"github/gh-aw-actions/setup","sha":"dca90cae5e2ec0ef2275f97efcb832793c86e082","version":"v0.69.2"}],"containers":[{"image":"alpine:latest"},{"image":"ghcr.io/github/gh-aw-firewall/agent:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/api-proxy:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/squid:0.25.26"},{"image":"ghcr.io/github/gh-aw-mcpg:v0.2.26"},{"image":"ghcr.io/github/github-mcp-server:v1.0.0"},{"image":"node:25-alpine"},{"image":"node:lts-alpine"}]} +# gh-aw-metadata: {"schema_version":"v3","frontmatter_hash":"2a8b4b58de9bb9d5945610c91ca9ea2075bd51f86c626342cc201679c4bf9007","compiler_version":"v0.69.3","agent_id":"copilot","agent_model":"claude-opus-4.7"} +# gh-aw-manifest: 
{"version":1,"secrets":["COPILOT_GITHUB_TOKEN","GH_AW_CI_TRIGGER_TOKEN","GH_AW_GITHUB_MCP_SERVER_TOKEN","GH_AW_GITHUB_TOKEN","GITHUB_TOKEN"],"actions":[{"repo":"actions/checkout","sha":"de0fac2e4500dabe0009e67214ff5f5447ce83dd","version":"v6.0.2"},{"repo":"actions/download-artifact","sha":"3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c","version":"v8.0.1"},{"repo":"actions/github-script","sha":"373c709c69115d41ff229c7e5df9f8788daa9553","version":"v9"},{"repo":"actions/setup-node","sha":"6044e13b5dc448c55e2357c09f80417699197238","version":"6044e13b5dc448c55e2357c09f80417699197238"},{"repo":"actions/upload-artifact","sha":"043fb46d1a93c77aae656e7c1c64a875d1fc6a0a","version":"v7.0.1"},{"repo":"github/gh-aw-actions/setup","sha":"006ffd856b868b71df342dbe0ba082a963249b31","version":"v0.69.3"}],"containers":[{"image":"alpine:latest"},{"image":"ghcr.io/github/gh-aw-firewall/agent:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/api-proxy:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/squid:0.25.26"},{"image":"ghcr.io/github/gh-aw-mcpg:v0.2.26"},{"image":"ghcr.io/github/github-mcp-server:v1.0.0"},{"image":"node:25-alpine"},{"image":"node:lts-alpine"}]} # ___ _ _ # / _ \ | | (_) # | |_| | __ _ ___ _ __ | |_ _ ___ @@ -14,7 +14,7 @@ # \ /\ / (_) | | | | ( | | | | (_) \ V V /\__ \ # \/ \/ \___/|_| |_|\_\|_| |_|\___/ \_/\_/ |___/ # -# This file was automatically generated by gh-aw (v0.69.2). DO NOT EDIT. +# This file was automatically generated by gh-aw (v0.69.3). DO NOT EDIT. 
# # To update this file, edit the corresponding .md file and run: # gh aw compile @@ -48,7 +48,7 @@ # - actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9 # - actions/setup-node@6044e13b5dc448c55e2357c09f80417699197238 # 6044e13b5dc448c55e2357c09f80417699197238 # - actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1 -# - github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 +# - github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 # # Container images used: # - alpine:latest @@ -114,7 +114,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -126,7 +126,7 @@ jobs: GH_AW_INFO_MODEL: "claude-opus-4.7" GH_AW_INFO_VERSION: "1.0.21" GH_AW_INFO_AGENT_VERSION: "1.0.21" - GH_AW_INFO_CLI_VERSION: "v0.69.2" + GH_AW_INFO_CLI_VERSION: "v0.69.3" GH_AW_INFO_WORKFLOW_NAME: "News: Government Propositions" GH_AW_INFO_EXPERIMENTAL: "false" GH_AW_INFO_SUPPORTS_TOOLS_ALLOWLIST: "true" @@ -184,7 +184,7 @@ jobs: - name: Check compile-agentic version uses: actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9 env: - GH_AW_COMPILED_VERSION: "v0.69.2" + GH_AW_COMPILED_VERSION: "v0.69.3" with: script: | const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); @@ -379,7 +379,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -925,7 +925,7 @@ jobs: GH_AW_SAFE_OUTPUTS: ${{ steps.set-runtime-paths.outputs.GH_AW_SAFE_OUTPUTS }} GH_AW_STARTUP_TIMEOUT: 180 
GH_AW_TOOL_TIMEOUT: 120 - GH_AW_VERSION: v0.69.2 + GH_AW_VERSION: v0.69.3 GITHUB_API_URL: ${{ github.api_url }} GITHUB_AW: true GITHUB_COPILOT_INTEGRATION_ID: agentic-workflows @@ -1123,7 +1123,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -1256,7 +1256,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -1365,7 +1365,7 @@ jobs: COPILOT_MODEL: claude-opus-4.7 GH_AW_PHASE: detection GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt - GH_AW_VERSION: v0.69.2 + GH_AW_VERSION: v0.69.3 GITHUB_API_URL: ${{ github.api_url }} GITHUB_AW: true GITHUB_COPILOT_INTEGRATION_ID: agentic-workflows @@ -1421,7 +1421,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -1510,7 +1510,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} diff --git a/.github/workflows/news-realtime-monitor.lock.yml b/.github/workflows/news-realtime-monitor.lock.yml index 1fd8ad73c..854abdac2 100644 --- a/.github/workflows/news-realtime-monitor.lock.yml +++ b/.github/workflows/news-realtime-monitor.lock.yml @@ -1,5 
+1,5 @@ -# gh-aw-metadata: {"schema_version":"v3","frontmatter_hash":"f5aa4b831845f568be00dd07bdc273d21cc4ebb6e76fcca95c69c5e1ef422c8c","compiler_version":"v0.69.2","agent_id":"copilot","agent_model":"claude-opus-4.7"} -# gh-aw-manifest: {"version":1,"secrets":["COPILOT_GITHUB_TOKEN","GH_AW_CI_TRIGGER_TOKEN","GH_AW_GITHUB_MCP_SERVER_TOKEN","GH_AW_GITHUB_TOKEN","GITHUB_TOKEN"],"actions":[{"repo":"actions/checkout","sha":"de0fac2e4500dabe0009e67214ff5f5447ce83dd","version":"v6.0.2"},{"repo":"actions/download-artifact","sha":"3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c","version":"v8.0.1"},{"repo":"actions/github-script","sha":"373c709c69115d41ff229c7e5df9f8788daa9553","version":"v9"},{"repo":"actions/setup-node","sha":"6044e13b5dc448c55e2357c09f80417699197238","version":"6044e13b5dc448c55e2357c09f80417699197238"},{"repo":"actions/upload-artifact","sha":"043fb46d1a93c77aae656e7c1c64a875d1fc6a0a","version":"v7.0.1"},{"repo":"github/gh-aw-actions/setup","sha":"dca90cae5e2ec0ef2275f97efcb832793c86e082","version":"v0.69.2"}],"containers":[{"image":"alpine:latest"},{"image":"ghcr.io/github/gh-aw-firewall/agent:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/api-proxy:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/squid:0.25.26"},{"image":"ghcr.io/github/gh-aw-mcpg:v0.2.26"},{"image":"ghcr.io/github/github-mcp-server:v1.0.0"},{"image":"mcr.microsoft.com/playwright/mcp"},{"image":"node:25-alpine"},{"image":"node:lts-alpine"}]} +# gh-aw-metadata: {"schema_version":"v3","frontmatter_hash":"f5aa4b831845f568be00dd07bdc273d21cc4ebb6e76fcca95c69c5e1ef422c8c","compiler_version":"v0.69.3","agent_id":"copilot","agent_model":"claude-opus-4.7"} +# gh-aw-manifest: 
{"version":1,"secrets":["COPILOT_GITHUB_TOKEN","GH_AW_CI_TRIGGER_TOKEN","GH_AW_GITHUB_MCP_SERVER_TOKEN","GH_AW_GITHUB_TOKEN","GITHUB_TOKEN"],"actions":[{"repo":"actions/checkout","sha":"de0fac2e4500dabe0009e67214ff5f5447ce83dd","version":"v6.0.2"},{"repo":"actions/download-artifact","sha":"3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c","version":"v8.0.1"},{"repo":"actions/github-script","sha":"373c709c69115d41ff229c7e5df9f8788daa9553","version":"v9"},{"repo":"actions/setup-node","sha":"6044e13b5dc448c55e2357c09f80417699197238","version":"6044e13b5dc448c55e2357c09f80417699197238"},{"repo":"actions/upload-artifact","sha":"043fb46d1a93c77aae656e7c1c64a875d1fc6a0a","version":"v7.0.1"},{"repo":"github/gh-aw-actions/setup","sha":"006ffd856b868b71df342dbe0ba082a963249b31","version":"v0.69.3"}],"containers":[{"image":"alpine:latest"},{"image":"ghcr.io/github/gh-aw-firewall/agent:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/api-proxy:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/squid:0.25.26"},{"image":"ghcr.io/github/gh-aw-mcpg:v0.2.26"},{"image":"ghcr.io/github/github-mcp-server:v1.0.0"},{"image":"mcr.microsoft.com/playwright/mcp"},{"image":"node:25-alpine"},{"image":"node:lts-alpine"}]} # ___ _ _ # / _ \ | | (_) # | |_| | __ _ ___ _ __ | |_ _ ___ @@ -14,7 +14,7 @@ # \ /\ / (_) | | | | ( | | | | (_) \ V V /\__ \ # \/ \/ \___/|_| |_|\_\|_| |_|\___/ \_/\_/ |___/ # -# This file was automatically generated by gh-aw (v0.69.2). DO NOT EDIT. +# This file was automatically generated by gh-aw (v0.69.3). DO NOT EDIT. 
# # To update this file, edit the corresponding .md file and run: # gh aw compile @@ -49,7 +49,7 @@ # - actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9 # - actions/setup-node@6044e13b5dc448c55e2357c09f80417699197238 # 6044e13b5dc448c55e2357c09f80417699197238 # - actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1 -# - github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 +# - github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 # # Container images used: # - alpine:latest @@ -120,7 +120,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -132,7 +132,7 @@ jobs: GH_AW_INFO_MODEL: "claude-opus-4.7" GH_AW_INFO_VERSION: "1.0.21" GH_AW_INFO_AGENT_VERSION: "1.0.21" - GH_AW_INFO_CLI_VERSION: "v0.69.2" + GH_AW_INFO_CLI_VERSION: "v0.69.3" GH_AW_INFO_WORKFLOW_NAME: "News Realtime Monitor" GH_AW_INFO_EXPERIMENTAL: "false" GH_AW_INFO_SUPPORTS_TOOLS_ALLOWLIST: "true" @@ -190,7 +190,7 @@ jobs: - name: Check compile-agentic version uses: actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9 env: - GH_AW_COMPILED_VERSION: "v0.69.2" + GH_AW_COMPILED_VERSION: "v0.69.3" with: script: | const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); @@ -387,7 +387,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -949,7 +949,7 @@ jobs: GH_AW_SAFE_OUTPUTS: ${{ steps.set-runtime-paths.outputs.GH_AW_SAFE_OUTPUTS }} GH_AW_STARTUP_TIMEOUT: 180 GH_AW_TOOL_TIMEOUT: 
120 - GH_AW_VERSION: v0.69.2 + GH_AW_VERSION: v0.69.3 GITHUB_API_URL: ${{ github.api_url }} GITHUB_AW: true GITHUB_COPILOT_INTEGRATION_ID: agentic-workflows @@ -1147,7 +1147,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -1280,7 +1280,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -1389,7 +1389,7 @@ jobs: COPILOT_MODEL: claude-opus-4.7 GH_AW_PHASE: detection GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt - GH_AW_VERSION: v0.69.2 + GH_AW_VERSION: v0.69.3 GITHUB_API_URL: ${{ github.api_url }} GITHUB_AW: true GITHUB_COPILOT_INTEGRATION_ID: agentic-workflows @@ -1445,7 +1445,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -1534,7 +1534,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} diff --git a/.github/workflows/news-translate.lock.yml b/.github/workflows/news-translate.lock.yml index b9fa7890c..ad345909e 100644 --- a/.github/workflows/news-translate.lock.yml +++ b/.github/workflows/news-translate.lock.yml @@ -1,5 +1,5 @@ -# gh-aw-metadata: 
{"schema_version":"v3","frontmatter_hash":"e0b4de7e3b8000d4d0183e5d5dfc98bc449e515864d6479d5ac7d57c643c239f","compiler_version":"v0.69.2","agent_id":"copilot","agent_model":"claude-opus-4.7"} -# gh-aw-manifest: {"version":1,"secrets":["COPILOT_GITHUB_TOKEN","GH_AW_CI_TRIGGER_TOKEN","GH_AW_GITHUB_MCP_SERVER_TOKEN","GH_AW_GITHUB_TOKEN","GITHUB_TOKEN"],"actions":[{"repo":"actions/checkout","sha":"de0fac2e4500dabe0009e67214ff5f5447ce83dd","version":"v6.0.2"},{"repo":"actions/download-artifact","sha":"3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c","version":"v8.0.1"},{"repo":"actions/github-script","sha":"373c709c69115d41ff229c7e5df9f8788daa9553","version":"v9"},{"repo":"actions/setup-node","sha":"6044e13b5dc448c55e2357c09f80417699197238","version":"6044e13b5dc448c55e2357c09f80417699197238"},{"repo":"actions/upload-artifact","sha":"043fb46d1a93c77aae656e7c1c64a875d1fc6a0a","version":"v7.0.1"},{"repo":"github/gh-aw-actions/setup","sha":"dca90cae5e2ec0ef2275f97efcb832793c86e082","version":"v0.69.2"}],"containers":[{"image":"alpine:latest"},{"image":"ghcr.io/github/gh-aw-firewall/agent:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/api-proxy:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/squid:0.25.26"},{"image":"ghcr.io/github/gh-aw-mcpg:v0.2.26"},{"image":"ghcr.io/github/github-mcp-server:v1.0.0"},{"image":"node:25-alpine"},{"image":"node:lts-alpine"}]} +# gh-aw-metadata: {"schema_version":"v3","frontmatter_hash":"e0b4de7e3b8000d4d0183e5d5dfc98bc449e515864d6479d5ac7d57c643c239f","compiler_version":"v0.69.3","agent_id":"copilot","agent_model":"claude-opus-4.7"} +# gh-aw-manifest: 
{"version":1,"secrets":["COPILOT_GITHUB_TOKEN","GH_AW_CI_TRIGGER_TOKEN","GH_AW_GITHUB_MCP_SERVER_TOKEN","GH_AW_GITHUB_TOKEN","GITHUB_TOKEN"],"actions":[{"repo":"actions/checkout","sha":"de0fac2e4500dabe0009e67214ff5f5447ce83dd","version":"v6.0.2"},{"repo":"actions/download-artifact","sha":"3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c","version":"v8.0.1"},{"repo":"actions/github-script","sha":"373c709c69115d41ff229c7e5df9f8788daa9553","version":"v9"},{"repo":"actions/setup-node","sha":"6044e13b5dc448c55e2357c09f80417699197238","version":"6044e13b5dc448c55e2357c09f80417699197238"},{"repo":"actions/upload-artifact","sha":"043fb46d1a93c77aae656e7c1c64a875d1fc6a0a","version":"v7.0.1"},{"repo":"github/gh-aw-actions/setup","sha":"006ffd856b868b71df342dbe0ba082a963249b31","version":"v0.69.3"}],"containers":[{"image":"alpine:latest"},{"image":"ghcr.io/github/gh-aw-firewall/agent:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/api-proxy:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/squid:0.25.26"},{"image":"ghcr.io/github/gh-aw-mcpg:v0.2.26"},{"image":"ghcr.io/github/github-mcp-server:v1.0.0"},{"image":"node:25-alpine"},{"image":"node:lts-alpine"}]} # ___ _ _ # / _ \ | | (_) # | |_| | __ _ ___ _ __ | |_ _ ___ @@ -14,7 +14,7 @@ # \ /\ / (_) | | | | ( | | | | (_) \ V V /\__ \ # \/ \/ \___/|_| |_|\_\|_| |_|\___/ \_/\_/ |___/ # -# This file was automatically generated by gh-aw (v0.69.2). DO NOT EDIT. +# This file was automatically generated by gh-aw (v0.69.3). DO NOT EDIT. 
# # To update this file, edit the corresponding .md file and run: # gh aw compile @@ -44,7 +44,7 @@ # - actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9 # - actions/setup-node@6044e13b5dc448c55e2357c09f80417699197238 # 6044e13b5dc448c55e2357c09f80417699197238 # - actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1 -# - github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 +# - github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 # # Container images used: # - alpine:latest @@ -113,7 +113,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -125,7 +125,7 @@ jobs: GH_AW_INFO_MODEL: "claude-opus-4.7" GH_AW_INFO_VERSION: "1.0.21" GH_AW_INFO_AGENT_VERSION: "1.0.21" - GH_AW_INFO_CLI_VERSION: "v0.69.2" + GH_AW_INFO_CLI_VERSION: "v0.69.3" GH_AW_INFO_WORKFLOW_NAME: "News: Translate Articles" GH_AW_INFO_EXPERIMENTAL: "false" GH_AW_INFO_SUPPORTS_TOOLS_ALLOWLIST: "true" @@ -183,7 +183,7 @@ jobs: - name: Check compile-agentic version uses: actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9 env: - GH_AW_COMPILED_VERSION: "v0.69.2" + GH_AW_COMPILED_VERSION: "v0.69.3" with: script: | const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); @@ -374,7 +374,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -903,7 +903,7 @@ jobs: GH_AW_SAFE_OUTPUTS: ${{ steps.set-runtime-paths.outputs.GH_AW_SAFE_OUTPUTS }} GH_AW_STARTUP_TIMEOUT: 180 GH_AW_TOOL_TIMEOUT: 
120 - GH_AW_VERSION: v0.69.2 + GH_AW_VERSION: v0.69.3 GITHUB_API_URL: ${{ github.api_url }} GITHUB_AW: true GITHUB_COPILOT_INTEGRATION_ID: agentic-workflows @@ -1100,7 +1100,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -1233,7 +1233,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -1342,7 +1342,7 @@ jobs: COPILOT_MODEL: claude-opus-4.7 GH_AW_PHASE: detection GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt - GH_AW_VERSION: v0.69.2 + GH_AW_VERSION: v0.69.3 GITHUB_API_URL: ${{ github.api_url }} GITHUB_AW: true GITHUB_COPILOT_INTEGRATION_ID: agentic-workflows @@ -1398,7 +1398,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -1486,7 +1486,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} diff --git a/.github/workflows/news-week-ahead.lock.yml b/.github/workflows/news-week-ahead.lock.yml index 694862223..cb5cd79ce 100644 --- a/.github/workflows/news-week-ahead.lock.yml +++ b/.github/workflows/news-week-ahead.lock.yml @@ -1,5 +1,5 @@ -# gh-aw-metadata: 
{"schema_version":"v3","frontmatter_hash":"537e56999f77854e9b504eeaa1a95b7b5ed7b46fcceeb5925db0009eb9dbb689","compiler_version":"v0.69.2","agent_id":"copilot","agent_model":"claude-opus-4.7"} -# gh-aw-manifest: {"version":1,"secrets":["COPILOT_GITHUB_TOKEN","GH_AW_CI_TRIGGER_TOKEN","GH_AW_GITHUB_MCP_SERVER_TOKEN","GH_AW_GITHUB_TOKEN","GITHUB_TOKEN"],"actions":[{"repo":"actions/checkout","sha":"de0fac2e4500dabe0009e67214ff5f5447ce83dd","version":"v6.0.2"},{"repo":"actions/download-artifact","sha":"3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c","version":"v8.0.1"},{"repo":"actions/github-script","sha":"373c709c69115d41ff229c7e5df9f8788daa9553","version":"v9"},{"repo":"actions/setup-node","sha":"6044e13b5dc448c55e2357c09f80417699197238","version":"6044e13b5dc448c55e2357c09f80417699197238"},{"repo":"actions/upload-artifact","sha":"043fb46d1a93c77aae656e7c1c64a875d1fc6a0a","version":"v7.0.1"},{"repo":"github/gh-aw-actions/setup","sha":"dca90cae5e2ec0ef2275f97efcb832793c86e082","version":"v0.69.2"}],"containers":[{"image":"alpine:latest"},{"image":"ghcr.io/github/gh-aw-firewall/agent:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/api-proxy:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/squid:0.25.26"},{"image":"ghcr.io/github/gh-aw-mcpg:v0.2.26"},{"image":"ghcr.io/github/github-mcp-server:v1.0.0"},{"image":"node:25-alpine"},{"image":"node:lts-alpine"}]} +# gh-aw-metadata: {"schema_version":"v3","frontmatter_hash":"537e56999f77854e9b504eeaa1a95b7b5ed7b46fcceeb5925db0009eb9dbb689","compiler_version":"v0.69.3","agent_id":"copilot","agent_model":"claude-opus-4.7"} +# gh-aw-manifest: 
{"version":1,"secrets":["COPILOT_GITHUB_TOKEN","GH_AW_CI_TRIGGER_TOKEN","GH_AW_GITHUB_MCP_SERVER_TOKEN","GH_AW_GITHUB_TOKEN","GITHUB_TOKEN"],"actions":[{"repo":"actions/checkout","sha":"de0fac2e4500dabe0009e67214ff5f5447ce83dd","version":"v6.0.2"},{"repo":"actions/download-artifact","sha":"3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c","version":"v8.0.1"},{"repo":"actions/github-script","sha":"373c709c69115d41ff229c7e5df9f8788daa9553","version":"v9"},{"repo":"actions/setup-node","sha":"6044e13b5dc448c55e2357c09f80417699197238","version":"6044e13b5dc448c55e2357c09f80417699197238"},{"repo":"actions/upload-artifact","sha":"043fb46d1a93c77aae656e7c1c64a875d1fc6a0a","version":"v7.0.1"},{"repo":"github/gh-aw-actions/setup","sha":"006ffd856b868b71df342dbe0ba082a963249b31","version":"v0.69.3"}],"containers":[{"image":"alpine:latest"},{"image":"ghcr.io/github/gh-aw-firewall/agent:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/api-proxy:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/squid:0.25.26"},{"image":"ghcr.io/github/gh-aw-mcpg:v0.2.26"},{"image":"ghcr.io/github/github-mcp-server:v1.0.0"},{"image":"node:25-alpine"},{"image":"node:lts-alpine"}]} # ___ _ _ # / _ \ | | (_) # | |_| | __ _ ___ _ __ | |_ _ ___ @@ -14,7 +14,7 @@ # \ /\ / (_) | | | | ( | | | | (_) \ V V /\__ \ # \/ \/ \___/|_| |_|\_\|_| |_|\___/ \_/\_/ |___/ # -# This file was automatically generated by gh-aw (v0.69.2). DO NOT EDIT. +# This file was automatically generated by gh-aw (v0.69.3). DO NOT EDIT. 
# # To update this file, edit the corresponding .md file and run: # gh aw compile @@ -49,7 +49,7 @@ # - actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9 # - actions/setup-node@6044e13b5dc448c55e2357c09f80417699197238 # 6044e13b5dc448c55e2357c09f80417699197238 # - actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1 -# - github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 +# - github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 # # Container images used: # - alpine:latest @@ -115,7 +115,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -127,7 +127,7 @@ jobs: GH_AW_INFO_MODEL: "claude-opus-4.7" GH_AW_INFO_VERSION: "1.0.21" GH_AW_INFO_AGENT_VERSION: "1.0.21" - GH_AW_INFO_CLI_VERSION: "v0.69.2" + GH_AW_INFO_CLI_VERSION: "v0.69.3" GH_AW_INFO_WORKFLOW_NAME: "News: Week Ahead" GH_AW_INFO_EXPERIMENTAL: "false" GH_AW_INFO_SUPPORTS_TOOLS_ALLOWLIST: "true" @@ -185,7 +185,7 @@ jobs: - name: Check compile-agentic version uses: actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9 env: - GH_AW_COMPILED_VERSION: "v0.69.2" + GH_AW_COMPILED_VERSION: "v0.69.3" with: script: | const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); @@ -381,7 +381,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -927,7 +927,7 @@ jobs: GH_AW_SAFE_OUTPUTS: ${{ steps.set-runtime-paths.outputs.GH_AW_SAFE_OUTPUTS }} GH_AW_STARTUP_TIMEOUT: 180 GH_AW_TOOL_TIMEOUT: 120 - 
GH_AW_VERSION: v0.69.2 + GH_AW_VERSION: v0.69.3 GITHUB_API_URL: ${{ github.api_url }} GITHUB_AW: true GITHUB_COPILOT_INTEGRATION_ID: agentic-workflows @@ -1125,7 +1125,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -1258,7 +1258,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -1367,7 +1367,7 @@ jobs: COPILOT_MODEL: claude-opus-4.7 GH_AW_PHASE: detection GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt - GH_AW_VERSION: v0.69.2 + GH_AW_VERSION: v0.69.3 GITHUB_API_URL: ${{ github.api_url }} GITHUB_AW: true GITHUB_COPILOT_INTEGRATION_ID: agentic-workflows @@ -1423,7 +1423,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -1512,7 +1512,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} diff --git a/.github/workflows/news-weekly-review.lock.yml b/.github/workflows/news-weekly-review.lock.yml index 6e6ab0aa5..eea103996 100644 --- a/.github/workflows/news-weekly-review.lock.yml +++ b/.github/workflows/news-weekly-review.lock.yml @@ -1,5 +1,5 @@ -# gh-aw-metadata: 
{"schema_version":"v3","frontmatter_hash":"78ea852ee56c1208c9a62365f934f9f5bd84002030f83d834ccea70d568ded88","compiler_version":"v0.69.2","agent_id":"copilot","agent_model":"claude-opus-4.7"} -# gh-aw-manifest: {"version":1,"secrets":["COPILOT_GITHUB_TOKEN","GH_AW_CI_TRIGGER_TOKEN","GH_AW_GITHUB_MCP_SERVER_TOKEN","GH_AW_GITHUB_TOKEN","GITHUB_TOKEN"],"actions":[{"repo":"actions/checkout","sha":"de0fac2e4500dabe0009e67214ff5f5447ce83dd","version":"v6.0.2"},{"repo":"actions/download-artifact","sha":"3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c","version":"v8.0.1"},{"repo":"actions/github-script","sha":"373c709c69115d41ff229c7e5df9f8788daa9553","version":"v9"},{"repo":"actions/setup-node","sha":"6044e13b5dc448c55e2357c09f80417699197238","version":"6044e13b5dc448c55e2357c09f80417699197238"},{"repo":"actions/upload-artifact","sha":"043fb46d1a93c77aae656e7c1c64a875d1fc6a0a","version":"v7.0.1"},{"repo":"github/gh-aw-actions/setup","sha":"dca90cae5e2ec0ef2275f97efcb832793c86e082","version":"v0.69.2"}],"containers":[{"image":"alpine:latest"},{"image":"ghcr.io/github/gh-aw-firewall/agent:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/api-proxy:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/squid:0.25.26"},{"image":"ghcr.io/github/gh-aw-mcpg:v0.2.26"},{"image":"ghcr.io/github/github-mcp-server:v1.0.0"},{"image":"node:25-alpine"},{"image":"node:lts-alpine"}]} +# gh-aw-metadata: {"schema_version":"v3","frontmatter_hash":"78ea852ee56c1208c9a62365f934f9f5bd84002030f83d834ccea70d568ded88","compiler_version":"v0.69.3","agent_id":"copilot","agent_model":"claude-opus-4.7"} +# gh-aw-manifest: 
{"version":1,"secrets":["COPILOT_GITHUB_TOKEN","GH_AW_CI_TRIGGER_TOKEN","GH_AW_GITHUB_MCP_SERVER_TOKEN","GH_AW_GITHUB_TOKEN","GITHUB_TOKEN"],"actions":[{"repo":"actions/checkout","sha":"de0fac2e4500dabe0009e67214ff5f5447ce83dd","version":"v6.0.2"},{"repo":"actions/download-artifact","sha":"3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c","version":"v8.0.1"},{"repo":"actions/github-script","sha":"373c709c69115d41ff229c7e5df9f8788daa9553","version":"v9"},{"repo":"actions/setup-node","sha":"6044e13b5dc448c55e2357c09f80417699197238","version":"6044e13b5dc448c55e2357c09f80417699197238"},{"repo":"actions/upload-artifact","sha":"043fb46d1a93c77aae656e7c1c64a875d1fc6a0a","version":"v7.0.1"},{"repo":"github/gh-aw-actions/setup","sha":"006ffd856b868b71df342dbe0ba082a963249b31","version":"v0.69.3"}],"containers":[{"image":"alpine:latest"},{"image":"ghcr.io/github/gh-aw-firewall/agent:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/api-proxy:0.25.26"},{"image":"ghcr.io/github/gh-aw-firewall/squid:0.25.26"},{"image":"ghcr.io/github/gh-aw-mcpg:v0.2.26"},{"image":"ghcr.io/github/github-mcp-server:v1.0.0"},{"image":"node:25-alpine"},{"image":"node:lts-alpine"}]} # ___ _ _ # / _ \ | | (_) # | |_| | __ _ ___ _ __ | |_ _ ___ @@ -14,7 +14,7 @@ # \ /\ / (_) | | | | ( | | | | (_) \ V V /\__ \ # \/ \/ \___/|_| |_|\_\|_| |_|\___/ \_/\_/ |___/ # -# This file was automatically generated by gh-aw (v0.69.2). DO NOT EDIT. +# This file was automatically generated by gh-aw (v0.69.3). DO NOT EDIT. 
# # To update this file, edit the corresponding .md file and run: # gh aw compile @@ -49,7 +49,7 @@ # - actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9 # - actions/setup-node@6044e13b5dc448c55e2357c09f80417699197238 # 6044e13b5dc448c55e2357c09f80417699197238 # - actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1 -# - github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 +# - github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 # # Container images used: # - alpine:latest @@ -115,7 +115,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -127,7 +127,7 @@ jobs: GH_AW_INFO_MODEL: "claude-opus-4.7" GH_AW_INFO_VERSION: "1.0.21" GH_AW_INFO_AGENT_VERSION: "1.0.21" - GH_AW_INFO_CLI_VERSION: "v0.69.2" + GH_AW_INFO_CLI_VERSION: "v0.69.3" GH_AW_INFO_WORKFLOW_NAME: "News: Weekly Review" GH_AW_INFO_EXPERIMENTAL: "false" GH_AW_INFO_SUPPORTS_TOOLS_ALLOWLIST: "true" @@ -185,7 +185,7 @@ jobs: - name: Check compile-agentic version uses: actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9 env: - GH_AW_COMPILED_VERSION: "v0.69.2" + GH_AW_COMPILED_VERSION: "v0.69.3" with: script: | const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); @@ -381,7 +381,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -927,7 +927,7 @@ jobs: GH_AW_SAFE_OUTPUTS: ${{ steps.set-runtime-paths.outputs.GH_AW_SAFE_OUTPUTS }} GH_AW_STARTUP_TIMEOUT: 180 GH_AW_TOOL_TIMEOUT: 120 
- GH_AW_VERSION: v0.69.2 + GH_AW_VERSION: v0.69.3 GITHUB_API_URL: ${{ github.api_url }} GITHUB_AW: true GITHUB_COPILOT_INTEGRATION_ID: agentic-workflows @@ -1125,7 +1125,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -1258,7 +1258,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -1367,7 +1367,7 @@ jobs: COPILOT_MODEL: claude-opus-4.7 GH_AW_PHASE: detection GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt - GH_AW_VERSION: v0.69.2 + GH_AW_VERSION: v0.69.3 GITHUB_API_URL: ${{ github.api_url }} GITHUB_AW: true GITHUB_COPILOT_INTEGRATION_ID: agentic-workflows @@ -1423,7 +1423,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} @@ -1512,7 +1512,7 @@ jobs: steps: - name: Setup Scripts id: setup - uses: github/gh-aw-actions/setup@dca90cae5e2ec0ef2275f97efcb832793c86e082 # v0.69.2 + uses: github/gh-aw-actions/setup@006ffd856b868b71df342dbe0ba082a963249b31 # v0.69.3 with: destination: ${{ runner.temp }}/gh-aw/actions job-name: ${{ github.job }} From 29c0ef8b51d4a5009abf11de19e0347b91e006a7 Mon Sep 17 00:00:00 2001 From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com> Date: Wed, 22 Apr 2026 05:26:01 +0000 Subject: [PATCH 07/21] address review 4152190766: gate regex/filename/SWOT + doc cross-refs 
Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/afebb125-691e-4794-9f80-71bcd18194a0 Co-authored-by: pethers <1726836+pethers@users.noreply.github.com> --- .github/prompts/05-analysis-gate.md | 28 ++++++++++++++++--- .github/skills/riksdag-regering-mcp/SKILL.md | 2 +- .../methodologies/ai-driven-analysis-guide.md | 4 +-- analysis/templates/synthesis-summary.md | 2 +- ...0002-modular-prompts-and-single-pr-runs.md | 2 +- 5 files changed, 29 insertions(+), 9 deletions(-) diff --git a/.github/prompts/05-analysis-gate.md b/.github/prompts/05-analysis-gate.md index 8e69148f2..f13cec2a1 100644 --- a/.github/prompts/05-analysis-gate.md +++ b/.github/prompts/05-analysis-gate.md @@ -33,7 +33,7 @@ REQ=(synthesis-summary.md swot-analysis.md risk-assessment.md threat-analysis.md SYNTHESIS=(synthesis-summary.md swot-analysis.md risk-assessment.md threat-analysis.md \ stakeholder-perspectives.md significance-scoring.md classification-results.md \ cross-reference-map.md) -DOK_RE='H[0-9]{3}[A-Za-z]{2,}[0-9]+' +DOK_RE='[Hh][A-Za-z0-9]{3,}[0-9]+' FAIL=0 # Check 1 — artifact existence @@ -46,7 +46,14 @@ if [ -s "$ANALYSIS_DIR/data-download-manifest.md" ]; then mapfile -t DOKS < <(grep -oE "$DOK_RE" "$ANALYSIS_DIR/data-download-manifest.md" | sort -u) [ "${#DOKS[@]}" -gt 0 ] || { echo "❌ manifest has no dok_id entries"; FAIL=1; } for d in "${DOKS[@]}"; do - [ -s "$ANALYSIS_DIR/documents/$d.md" ] || { echo "❌ documents/$d.md missing"; FAIL=1; } + d_lc="${d,,}" + if [ ! -s "$ANALYSIS_DIR/documents/${d}.md" ] \ + && [ ! -s "$ANALYSIS_DIR/documents/${d}-analysis.md" ] \ + && [ ! -s "$ANALYSIS_DIR/documents/${d_lc}.md" ] \ + && [ ! 
-s "$ANALYSIS_DIR/documents/${d_lc}-analysis.md" ]; then + echo "❌ documents/${d}.md or documents/${d}-analysis.md missing (any case)" + FAIL=1 + fi done fi @@ -56,10 +63,23 @@ grep -rIn -e 'AI_MUST_REPLACE' -e '\[REQUIRED\]' -e 'TODO:' -e 'Lorem ipsum' "$A # Check 4 — evidence citations per quadrant / ranked item awk -v re="$DOK_RE" ' - /^##[[:space:]]+(Strengths|Weaknesses|Opportunities|Threats)\b/ { sec=$0; next } + function reset_table() { trow=0 } + /^###[[:space:]]+.*(Strengths|Weaknesses|Opportunities|Threats)\b/ { sec=$0; reset_table(); next } + /^#{1,6}[[:space:]]+/ { sec=""; reset_table(); next } sec != "" && /^[[:space:]]*[-*][[:space:]]+/ && $0 !~ re { - printf "❌ swot-analysis.md %s: bullet missing dok_id: %s\n", sec, $0; bad=1 + printf "❌ swot-analysis.md %s: bullet missing dok_id: %s\n", sec, $0; bad=1; next } + sec != "" && /^[[:space:]]*\|/ { + # skip table separator rows like |---|---| + if ($0 ~ /^[[:space:]|:\-]+$/) next + trow++ + if (trow == 1) next # header row + if ($0 !~ re) { + printf "❌ swot-analysis.md %s: table row missing dok_id: %s\n", sec, $0; bad=1 + } + next + } + sec != "" && /^[[:space:]]*$/ { reset_table(); next } END { exit bad+0 } ' "$ANALYSIS_DIR/swot-analysis.md" || FAIL=1 awk -v re="$DOK_RE" ' diff --git a/.github/skills/riksdag-regering-mcp/SKILL.md b/.github/skills/riksdag-regering-mcp/SKILL.md index 6d1493989..933df4495 100644 --- a/.github/skills/riksdag-regering-mcp/SKILL.md +++ b/.github/skills/riksdag-regering-mcp/SKILL.md @@ -129,7 +129,7 @@ get_calendar_events({ from: "2026-04-01", tom: "2026-04-30" }) // Calendar event > **⚠️ Tool names use underscores** (e.g., `get_sync_status`, NOT `get-sync-status`). > The gateway at `http://host.docker.internal:$MCP_GATEWAY_PORT/mcp/riksdag-regering` (port `8080` for gh-aw v0.69+, port `80` for legacy gh-aw <0.69 — resolved dynamically by `mcp-setup.sh`) handles routing. 
-> See `.github/prompts/` (see the README for the module catalogue) → "MCP Architecture & Tool Reference" for full tool list. +> See [`.github/prompts/02-mcp-access.md`](../../prompts/02-mcp-access.md) for MCP server access, direct tool invocation, and tool-naming conventions (and the [`.github/prompts/README.md`](../../prompts/README.md) for the full module catalogue). ## Examples (TypeScript) diff --git a/analysis/methodologies/ai-driven-analysis-guide.md b/analysis/methodologies/ai-driven-analysis-guide.md index 1da6bff9e..88f6e014e 100644 --- a/analysis/methodologies/ai-driven-analysis-guide.md +++ b/analysis/methodologies/ai-driven-analysis-guide.md @@ -168,7 +168,7 @@ Every analysis file MUST demonstrate: **Rhetorical-Tension Rule**: When the top-ranked findings carry opposing political valences, the article MUST surface the tension in a dedicated subsection. Silence on the tension is itself a coverage failure. -**Enforcement**: `.github/prompts/` (see the README for the module catalogue) → "Lead-Story & Coverage-Completeness Gate" is a blocking check. Articles failing the gate cannot be committed. +**Enforcement**: The blocking check is implemented as the Lead-Story / Coverage-Completeness portion of the [analysis gate in `.github/prompts/05-analysis-gate.md`](../../.github/prompts/05-analysis-gate.md) (Checks 2 "Per-document coverage" + 4 "Evidence citations"). Articles failing the gate cannot be committed. --- @@ -256,7 +256,7 @@ Every agentic workflow MUST spend **at least 15 minutes** on analysis. This incl ### 🔍 Quality Gate (Blocking) -Before committing, run the quality gate bash check from `.github/prompts/` (see the README for the module catalogue) Step 5b. If the check fails, go back and improve analysis files until it passes. Do NOT commit failing analysis. +Before committing, run the quality-gate bash block from [`.github/prompts/05-analysis-gate.md`](../../.github/prompts/05-analysis-gate.md) ("Canonical shape" gate — checks 1–6). 
If the check fails, go back and improve analysis files until it passes. Do NOT commit failing analysis. --- diff --git a/analysis/templates/synthesis-summary.md b/analysis/templates/synthesis-summary.md index 05b273a23..18d981f62 100644 --- a/analysis/templates/synthesis-summary.md +++ b/analysis/templates/synthesis-summary.md @@ -361,7 +361,7 @@ graph LR ## ✅ Quality Self-Check Checklist -> **Pre-commit validation — every item MUST be checked before finalising this synthesis. Derived from .github/prompts/ (see README) §Quality Self-Check Protocol.** +> **Pre-commit validation — every item MUST be checked before finalising this synthesis. Derived from the analysis-gate checks in [`.github/prompts/05-analysis-gate.md`](../../.github/prompts/05-analysis-gate.md).** - [ ] **Synthesis Context complete:** All metadata fields filled (ID, date, documents analyzed, period, producer, confidence) - [ ] **Intelligence Dashboard rendered:** Mermaid diagram has actual values (no grey placeholder nodes remaining) diff --git a/docs/adr/0002-modular-prompts-and-single-pr-runs.md b/docs/adr/0002-modular-prompts-and-single-pr-runs.md index 70c2da3b5..ff3393dad 100644 --- a/docs/adr/0002-modular-prompts-and-single-pr-runs.md +++ b/docs/adr/0002-modular-prompts-and-single-pr-runs.md @@ -84,7 +84,7 @@ Eight core modules + one Tier-C extension + a `README.md`: | Risk | Mitigation | |------|------------| | MCP session expiry without heartbeat | Tight time budgets + scope-trim policy + ≤ 2 min pre-warm; deadline rule in `07-commit-and-pr.md` forces commit + PR by minute ~55. | -| `imports:` resolution differences across gh-aw versions | `compile-agentic-workflows.yml` pins gh-aw via `GH_AW_VERSION="v0.69.2"`. | +| `imports:` resolution differences across gh-aw versions | `compile-agentic-workflows.yml` pins gh-aw via `GH_AW_VERSION="v0.69.3"`. 
| | Hidden rules in the 4,350-line file dropped accidentally | Phase A migrated every H2/H3 explicitly; review is backed by the CI module-size/banned-string check. | | `news-translate` capacity | If 12 languages exceed the 60-minute budget, translation is split across multiple scheduled runs (already cron'd twice daily + weekend catch-up) rather than across multiple PRs in one run. | From c42415956339836d09a22658619e119fd645664e Mon Sep 17 00:00:00 2001 From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com> Date: Wed, 22 Apr 2026 05:42:19 +0000 Subject: [PATCH 08/21] address review 4152266619: fix editorial-standards doctrine ref + pre-warm timing claim Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/8b8ff297-d5e2-4c19-ad16-e4fa3347e5c6 Co-authored-by: pethers <1726836+pethers@users.noreply.github.com> --- .github/prompts/02-mcp-access.md | 2 +- .github/skills/editorial-standards/SKILL.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/.github/prompts/02-mcp-access.md b/.github/prompts/02-mcp-access.md index d2a425b0e..81e0759cd 100644 --- a/.github/prompts/02-mcp-access.md +++ b/.github/prompts/02-mcp-access.md @@ -38,4 +38,4 @@ Run once at workflow start, then proceed — do not loop forever. ## Pre-warm step (CI job, not prompt) -Every news workflow declares a **single** `curl`-based pre-warm step with ≤ 6 retries, ≤ 20 s apart, total ≤ 2 minutes. No background pingers. The `safeoutputs` session is kept alive by completing work inside its ~30-minute idle window, not by opening interim PRs. +Every news workflow declares a **single** `curl`-based pre-warm step with ≤ 6 retries, ≤ 20 s apart. With `curl --max-time 30`, the worst-case runtime can exceed 4 minutes, so this is a best-effort pre-warm rather than a hard ≤ 2 minute guarantee. If a strict 2 minute cap is required, the workflow's `curl` timeout and/or retry policy must be reduced accordingly. No background pingers. 
The `safeoutputs` session is kept alive by completing work inside its ~30-minute idle window, not by opening interim PRs. diff --git a/.github/skills/editorial-standards/SKILL.md b/.github/skills/editorial-standards/SKILL.md index e9944183f..dfdaf7471 100644 --- a/.github/skills/editorial-standards/SKILL.md +++ b/.github/skills/editorial-standards/SKILL.md @@ -181,7 +181,7 @@ Before any draft is shared for Gate 2 review, verify: 3. **Coverage completeness** — Every document with DIW-weighted score ≥ 7.0 receives a dedicated H3 section in article body. No silent omissions. 4. **Rhetorical tension** — When top-ranked findings carry opposing political valences (e.g., norm entrepreneurship abroad + norm compression at home), an explicit "Rhetorical Cross-Cluster Tension" or equivalent subsection addresses the contradiction. -Failure protocol: if any of 1–4 is not satisfied, the draft is returned to the writing agent with the specific missing element identified. **Doctrine**: `analysis/methodologies/ai-driven-analysis-guide.md` §"Rule 5: Democratic-Impact Weighting (DIW)". **Enforcement**: `analysis/methodologies/ai-driven-analysis-guide.md` §"Lead-Story & Coverage-Completeness Gate". +Failure protocol: if any of 1–4 is not satisfied, the draft is returned to the writing agent with the specific missing element identified. **Doctrine**: `analysis/methodologies/ai-driven-analysis-guide.md` §"Rule 5: Democratic-Impact Weighting (DIW) — Lead-Story & Coverage Discipline". **Enforcement**: `.github/prompts/05-analysis-gate.md` checks 2 + 4. 
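The single-`curl` pre-warm policy described earlier in this patch (≤ 6 retries, 20 s apart, `curl --max-time 30`) can be sketched as below. The function name, endpoint handling, and variable names are illustrative assumptions, not the real workflow step; the arithmetic at the end is why the patch downgrades the "≤ 2 minutes" claim:

```shell
# Hypothetical pre-warm sketch — names and structure are assumptions, not the real step.
MAX_RETRIES=6
GAP=20        # seconds between attempts
CURL_MAX=30   # matches `curl --max-time 30`

prewarm() {
  local url="$1" i
  for i in $(seq 1 "$MAX_RETRIES"); do
    # -f: fail on HTTP errors; -sS: quiet but keep error messages
    curl -fsS --max-time "$CURL_MAX" "$url" >/dev/null 2>&1 && return 0
    [ "$i" -lt "$MAX_RETRIES" ] && sleep "$GAP"
  done
  return 1  # best-effort: the run proceeds even if pre-warm never succeeded
}

# Worst case: every attempt hits --max-time, plus the gaps between attempts.
WORST=$(( MAX_RETRIES * CURL_MAX + (MAX_RETRIES - 1) * GAP ))
echo "worst-case pre-warm: ${WORST}s"
```

Six 30-second timeouts plus five 20-second gaps total 280 s (≈ 4 min 40 s), which is the "worst-case runtime can exceed 4 minutes" figure.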
## Error Correction Protocol From a2fc73d8be85e177e7dc3dff0a3691c0375a8c99 Mon Sep 17 00:00:00 2001 From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com> Date: Wed, 22 Apr 2026 05:59:44 +0000 Subject: [PATCH 09/21] address review 4152323710: ADR pre-warm timing + SWOT-friendly Mermaid check Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/045773ed-a763-4943-bda9-f713f939df14 Co-authored-by: pethers <1726836+pethers@users.noreply.github.com> --- .github/prompts/05-analysis-gate.md | 11 +++++++++-- docs/adr/0002-modular-prompts-and-single-pr-runs.md | 4 ++-- 2 files changed, 11 insertions(+), 4 deletions(-) diff --git a/.github/prompts/05-analysis-gate.md b/.github/prompts/05-analysis-gate.md index f13cec2a1..c7fa0013d 100644 --- a/.github/prompts/05-analysis-gate.md +++ b/.github/prompts/05-analysis-gate.md @@ -89,11 +89,18 @@ awk -v re="$DOK_RE" ' END { exit bad+0 } ' "$ANALYSIS_DIR/significance-scoring.md" || FAIL=1 -# Check 5 — Mermaid + colour-coded style directives in every synthesis file +# Check 5 — Mermaid + colour-coded config (explicit `style …` directive OR +# Mermaid init-block `themeVariables` — the SWOT template uses `quadrantChart` +# with `themeVariables` instead of literal `style` lines; either satisfies +# the "colour-coded" requirement). for f in "${SYNTHESIS[@]}"; do p="$ANALYSIS_DIR/$f"; [ -s "$p" ] || continue grep -qE '^```mermaid' "$p" || { echo "❌ $f: missing Mermaid block"; FAIL=1; } - grep -qE '^[[:space:]]*style[[:space:]]+' "$p" || { echo "❌ $f: missing Mermaid style directive"; FAIL=1; } + if ! grep -qE '^[[:space:]]*style[[:space:]]+' "$p" \ + && ! 
grep -qE 'themeVariables|%%\{[[:space:]]*init' "$p"; then + echo "❌ $f: missing Mermaid colour-coded config (no 'style …' directive and no 'themeVariables' / '%%{init …}' block)" + FAIL=1 + fi done # Check 6 — Pass-2 evidence (mtime ≥ birth + 180s, OR differing pass1 snapshot on disk) diff --git a/docs/adr/0002-modular-prompts-and-single-pr-runs.md b/docs/adr/0002-modular-prompts-and-single-pr-runs.md index ff3393dad..a67f1c80e 100644 --- a/docs/adr/0002-modular-prompts-and-single-pr-runs.md +++ b/docs/adr/0002-modular-prompts-and-single-pr-runs.md @@ -47,7 +47,7 @@ Eight core modules + one Tier-C extension + a `README.md`: - All 12 news workflows declare `imports:` in frontmatter; `gh aw compile` resolves them into `{{#runtime-import …}}` directives in the generated `.lock.yml`. - `safe-outputs.create-pull-request.max: 1` on every news workflow (was 2 / 3 / 5). -- Background keep-alive pinger removed; MCP pre-warm kept at ≤ 2 minutes (≤ 6 retries, 20 s apart). +- Background keep-alive pinger removed; MCP pre-warm is a single best-effort `curl` step (≤ 6 retries, 20 s apart, `curl --max-time 30`) — worst-case runtime can exceed 4 minutes, see `.github/prompts/02-mcp-access.md` §"Pre-warm step". If a strict cap is required, tighten the `curl` timeout and retry parameters. - Workflow bodies reduced to ≤ 50 lines each (schedule + inputs + time budget + dedup path). - `news-translate` imports only the four modules it needs (base contract, bash/shell, MCP, commit & PR) and issues exactly one PR batching every language produced in the run. @@ -83,7 +83,7 @@ Eight core modules + one Tier-C extension + a `README.md`: | Risk | Mitigation | |------|------------| -| MCP session expiry without heartbeat | Tight time budgets + scope-trim policy + ≤ 2 min pre-warm; deadline rule in `07-commit-and-pr.md` forces commit + PR by minute ~55. 
| +| MCP session expiry without heartbeat | Tight time budgets + scope-trim policy + best-effort MCP pre-warm (worst-case > 4 min, see `.github/prompts/02-mcp-access.md`); deadline rule in `07-commit-and-pr.md` forces commit + PR by minute ~55. | | `imports:` resolution differences across gh-aw versions | `compile-agentic-workflows.yml` pins gh-aw via `GH_AW_VERSION="v0.69.3"`. | | Hidden rules in the 4,350-line file dropped accidentally | Phase A migrated every H2/H3 explicitly; review is backed by the CI module-size/banned-string check. | | `news-translate` capacity | If 12 languages exceed the 60-minute budget, translation is split across multiple scheduled runs (already cron'd twice daily + weekend catch-up) rather than across multiple PRs in one run. | From f8c9037bfa645c478c6968c1a78ff8969162d2ae Mon Sep 17 00:00:00 2001 From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com> Date: Wed, 22 Apr 2026 06:16:51 +0000 Subject: [PATCH 10/21] address review 4152411278: stray paren + committeeReports subfolder + {dok_id}-analysis.md naming Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/b52e9330-b4a8-4996-a643-7535d1ed01eb Co-authored-by: pethers <1726836+pethers@users.noreply.github.com> --- .github/prompts/03-data-download.md | 2 +- .github/prompts/04-analysis-pipeline.md | 2 +- .github/workflows/news-committee-reports.md | 2 +- analysis/README.md | 2 +- 4 files changed, 4 insertions(+), 4 deletions(-) diff --git a/.github/prompts/03-data-download.md b/.github/prompts/03-data-download.md index dbd2c05f7..5ea462fb4 100644 --- a/.github/prompts/03-data-download.md +++ b/.github/prompts/03-data-download.md @@ -10,7 +10,7 @@ Populate `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/` with raw Riksdag/Regering da |----------|--------------| | news-propositions | `propositions` | | news-motions | `motions` | -| news-committee-reports | `committee-reports` | +| news-committee-reports | `committeeReports` | | news-interpellations | 
`interpellations` | | news-week-ahead | `week-ahead` | | news-month-ahead | `month-ahead` | diff --git a/.github/prompts/04-analysis-pipeline.md b/.github/prompts/04-analysis-pipeline.md index 1ff7aedfe..e4290a6ba 100644 --- a/.github/prompts/04-analysis-pipeline.md +++ b/.github/prompts/04-analysis-pipeline.md @@ -32,7 +32,7 @@ Produced in `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/`: | `cross-reference-map.md` | (link to prior-run forward chain) | Continuity contracts with prior analyses | | `data-download-manifest.md` | produced in step 03 | Already exists from data-download step | -Plus `documents/` subfolder with **one `.md` per `dok_id`** using [`per-file-political-intelligence.md`](../../analysis/templates/per-file-political-intelligence.md). +Plus `documents/` subfolder with **one `{dok_id}-analysis.md` file per `dok_id`** using [`per-file-political-intelligence.md`](../../analysis/templates/per-file-political-intelligence.md). ## Execution order diff --git a/.github/workflows/news-committee-reports.md b/.github/workflows/news-committee-reports.md index f6924ee29..e094c3b12 100644 --- a/.github/workflows/news-committee-reports.md +++ b/.github/workflows/news-committee-reports.md @@ -226,7 +226,7 @@ Generates deep political intelligence articles on parliamentary committee report ## What this workflow does - **Article type**: `committee-reports` -- **Analysis subfolder**: `analysis/daily/$ARTICLE_DATE/committee-reports/` +- **Analysis subfolder**: `analysis/daily/$ARTICLE_DATE/committeeReports/` - **Core languages produced**: `en`, `sv` (remaining 12 languages dispatched to `news-translate`) - **One pull request per run** containing analysis + articles + visualisation data. 
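The workflow-to-subfolder mapping in `03-data-download.md` (including the `committeeReports` camelCase rename from this patch) can be sketched as a case statement. The helper name is hypothetical — real workflows receive `$SUBFOLDER` from their own configuration:

```shell
# Sketch only — `subfolder_for` is a hypothetical helper mirroring the mapping table.
subfolder_for() {
  case "$1" in
    news-propositions)      echo "propositions" ;;
    news-motions)           echo "motions" ;;
    news-committee-reports) echo "committeeReports" ;;  # camelCase, per this patch
    news-interpellations)   echo "interpellations" ;;
    news-week-ahead)        echo "week-ahead" ;;
    news-month-ahead)       echo "month-ahead" ;;
    *) echo "unknown workflow: $1" >&2; return 1 ;;
  esac
}

subfolder_for news-committee-reports   # prints: committeeReports
```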
diff --git a/analysis/README.md b/analysis/README.md index a11cdd888..e64d509ce 100644 --- a/analysis/README.md +++ b/analysis/README.md @@ -155,7 +155,7 @@ flowchart LR **The AI agent reads all 6 methodology guides, reads all 8 templates, reads the actual data, and produces genuine analytical content based on evidence found in the documents.** -**Fallback mechanism:** If AI analysis fails or produces unusable output (detected by the quality gate bash check in `.github/prompts/` (see the README for the module catalogue)), the workflow should: +**Fallback mechanism:** If AI analysis fails or produces unusable output (detected by the quality gate bash check in `.github/prompts/` — see the README for the module catalogue), the workflow should: 1. Commit a minimal `data-download-manifest.md` documenting what was downloaded 2. Flag the analysis as `pending` for the next workflow run 3. Never commit placeholder or stub content that masquerades as genuine analysis From 595bb84883c7091bf2897fd757cdd2b38faf0bdb Mon Sep 17 00:00:00 2001 From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com> Date: Wed, 22 Apr 2026 06:38:18 +0000 Subject: [PATCH 11/21] address review 4152487586: hard-fail missing max:1 + lock-yml check, Playwright tool clarity, committeeReports camelCase Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/a6dffcf4-2364-4e23-a0d2-0598e79c675d Co-authored-by: pethers <1726836+pethers@users.noreply.github.com> --- .github/prompts/02-mcp-access.md | 6 +++- .github/prompts/ext/tier-c-aggregation.md | 2 +- .../workflows/compile-agentic-workflows.yml | 29 +++++++++++++++++-- 3 files changed, 33 insertions(+), 4 deletions(-) diff --git a/.github/prompts/02-mcp-access.md b/.github/prompts/02-mcp-access.md index 81e0759cd..a41c0e682 100644 --- a/.github/prompts/02-mcp-access.md +++ b/.github/prompts/02-mcp-access.md @@ -16,7 +16,11 @@ News workflows declare three data MCP servers + the built-in `github` toolset (v | `bash` | local 
helper | workflow `tools.bash` | standard | shell execution | | `safeoutputs` | runner | always available | `snake_case` | `safeoutputs___create_pull_request`, `safeoutputs___noop`, `safeoutputs___dispatch_workflow` | -`filesystem`, `memory`, `sequential-thinking`, `playwright` are declared in `.github/copilot-mcp.json` for the **local Copilot / `assign_copilot_to_issue`** channel. They are **not** available to news workflows unless the workflow itself declares them under `mcp-servers:`. Authoritative server inventory: [`.github/copilot-mcp.json`](../copilot-mcp.json) for local; the workflow frontmatter for the actual per-run surface. +`filesystem`, `memory`, and `sequential-thinking` are declared in [`.github/copilot-mcp.json`](../copilot-mcp.json) for the **local Copilot / `assign_copilot_to_issue`** channel and are **not** available to news workflows unless the workflow itself declares them under `mcp-servers:`. + +`playwright` must be treated separately: in news workflows it is available as the built-in workflow tool `tools.playwright` when that workflow declares it under `tools:` (e.g. `news-evening-analysis`, `news-realtime-monitor`). In that case it is **not** an MCP server, so do **not** infer its availability from `mcp-servers:` alone and do **not** skip Playwright/browser validation steps when `tools.playwright` is present in workflow frontmatter. + +Authoritative inventory: [`.github/copilot-mcp.json`](../copilot-mcp.json) for the local Copilot MCP surface, and each workflow's `mcp-servers:` plus `tools:` frontmatter for the actual per-run surface. IMF is **not** an MCP server. Fetch IMF data via the TypeScript client: `npx tsx scripts/imf-fetch.ts …` (see [Economic Data Contract](../aw/ECONOMIC_DATA_CONTRACT.md)). 
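The "do not infer Playwright availability from `mcp-servers:` alone" rule above implies inspecting the frontmatter `tools:` block directly. This is a hedged sketch — the frontmatter layout in the fixture is an assumption, and the helper is not part of the repo:

```shell
# Hypothetical helper — real workflow frontmatter may differ; treat as a sketch.
has_tool_playwright() {
  awk '
    /^---[[:space:]]*$/ { fm++; next }          # count frontmatter fences
    fm == 1 && /^tools:/       { in_tools = 1; next }
    fm == 1 && /^[^[:space:]]/ { in_tools = 0 }  # next top-level key ends the block
    fm == 1 && in_tools && /playwright/ { found = 1 }
    END { exit !found }
  ' "$1"
}

# Minimal fixture (assumed shape, not a real workflow file):
cat > /tmp/wf.md <<'EOF'
---
tools:
  playwright:
mcp-servers:
  riksdagen: {}
---
body
EOF
has_tool_playwright /tmp/wf.md && echo "tools.playwright present"
```

A workflow declaring `playwright` only under `mcp-servers:` would not satisfy this check, which is the distinction the prose draws.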
diff --git a/.github/prompts/ext/tier-c-aggregation.md b/.github/prompts/ext/tier-c-aggregation.md index 616f883ad..f644f916a 100644 --- a/.github/prompts/ext/tier-c-aggregation.md +++ b/.github/prompts/ext/tier-c-aggregation.md @@ -44,7 +44,7 @@ Aggregation workflows **must** read sibling article-type analyses produced for t | Aggregation workflow | Sibling folders to read | |----------------------|-------------------------| -| `news-evening-analysis` | Today's `propositions/`, `motions/`, `committee-reports/`, `interpellations/`, any `realtime-*/` | +| `news-evening-analysis` | Today's `propositions/`, `motions/`, `committeeReports/`, `interpellations/`, any `realtime-*/` | | `news-week-ahead` / `news-weekly-review` | Last 7 days of per-type folders | | `news-month-ahead` / `news-monthly-review` | Last 30 days of per-type folders | | `news-realtime-monitor` | Prior 7 days' `realtime-*/` for continuity chain | diff --git a/.github/workflows/compile-agentic-workflows.yml b/.github/workflows/compile-agentic-workflows.yml index 0f447b1cd..0606827bc 100644 --- a/.github/workflows/compile-agentic-workflows.yml +++ b/.github/workflows/compile-agentic-workflows.yml @@ -78,7 +78,7 @@ jobs: fi done echo "" - echo "🔍 Enforcing safe-outputs.create-pull-request.max: 1 on all news workflows…" + echo "🔍 Enforcing safe-outputs.create-pull-request.max: 1 on all news workflows (source .md)…" for f in .github/workflows/news-*.md; do [ -f "$f" ] || continue # Extract max value directly under create-pull-request block @@ -88,7 +88,8 @@ jobs: in_block && /^ max:/ {print $2; exit} ' "$f") if [ -z "$MAX" ]; then - echo " ⚠️ $f has no explicit create-pull-request.max (allowed: safeoutputs default behaves as 1)" + echo " ❌ $f has no explicit create-pull-request.max (must be 1; default may change upstream)" + FAIL=1 elif [ "$MAX" != "1" ]; then echo " ❌ $f has create-pull-request.max: $MAX (must be 1)" FAIL=1 @@ -97,6 +98,30 @@ jobs: fi done echo "" + echo "🔍 Enforcing 
create_pull_request.max: 1 on compiled .lock.yml (safe-outputs config)…" + for lf in .github/workflows/news-*.lock.yml; do + [ -f "$lf" ] || continue + # The compiled safe-outputs config is embedded as JSON in GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG. + # Extract every "create_pull_request":{...,"max":N,...} occurrence and confirm N==1. + BAD=0 + OCCURS=$(grep -oE '"create_pull_request"[[:space:]]*:[[:space:]]*\{[^}]*"max"[[:space:]]*:[[:space:]]*[0-9]+' "$lf" | grep -oE '"max"[[:space:]]*:[[:space:]]*[0-9]+' | grep -oE '[0-9]+$' || true) + COUNT_ANY=$(grep -cE '"create_pull_request"[[:space:]]*:[[:space:]]*\{' "$lf" || true) + if [ -z "$OCCURS" ] || [ "$COUNT_ANY" -eq 0 ]; then + echo " ❌ $lf has no create_pull_request config with max" + FAIL=1 + continue + fi + while IFS= read -r v; do + [ "$v" = "1" ] || BAD=1 + done <<< "$OCCURS" + if [ "$BAD" -eq 1 ]; then + echo " ❌ $lf has create_pull_request.max != 1 (found: $(echo "$OCCURS" | tr '\n' ' '))" + FAIL=1 + else + echo " ✅ $lf (create_pull_request.max: 1)" + fi + done + echo "" echo "🔍 Scanning for banned multi-PR / heartbeat / keep-alive strings…" BANNED='Heartbeat|keep-alive pinger|post-heartbeat rebase|🫀' if grep -rInE "$BANNED" .github/workflows/news-*.md .github/prompts/ 2>/dev/null; then From 0b43bd188e3e7d08cfaf28193dc89238b9ffe984 Mon Sep 17 00:00:00 2001 From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com> Date: Wed, 22 Apr 2026 07:17:22 +0000 Subject: [PATCH 12/21] address review 4152706167: broaden evidence matcher + extend significance-scoring check Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/b31d17a1-6fe3-4a90-ae25-7d7d6d20e4e8 Co-authored-by: pethers <1726836+pethers@users.noreply.github.com> --- .github/prompts/05-analysis-gate.md | 42 +++++++++++++++++++++++------ 1 file changed, 34 insertions(+), 8 deletions(-) diff --git a/.github/prompts/05-analysis-gate.md b/.github/prompts/05-analysis-gate.md index c7fa0013d..fed94b058 100644 --- 
a/.github/prompts/05-analysis-gate.md +++ b/.github/prompts/05-analysis-gate.md @@ -13,7 +13,7 @@ This is the **only** gate separating analysis from article generation. If it fai `synthesis-summary.md`, `swot-analysis.md`, `risk-assessment.md`, `threat-analysis.md`, `stakeholder-perspectives.md`, `significance-scoring.md`, `classification-results.md`, `cross-reference-map.md`, `data-download-manifest.md`. 2. **Per-document coverage** — `$ANALYSIS_DIR/documents/` contains one `.md` per `dok_id` listed in `data-download-manifest.md` (metadata-only documents are tagged, not skipped). 3. **No stubs** — zero occurrences of `AI_MUST_REPLACE`, `[REQUIRED]`, `TODO:`, or `Lorem ipsum` across all artifacts. -4. **Evidence citations** — `swot-analysis.md` and `significance-scoring.md` contain at least one `dok_id` reference per quadrant / ranked item. +4. **Evidence citations** — `swot-analysis.md` and `significance-scoring.md` contain at least one piece of primary-source evidence per quadrant / ranked item. Accepted evidence patterns (matches the standard in `04-analysis-pipeline.md` §Evidence standard): a `dok_id` (e.g. `H901FiU1`, `HD01CU27`) **or** a primary-source URL host (`riksdagen.se`, `regeringen.se`, `scb.se`, `worldbank.org`, `data.imf.org`). Named-actor / vote-count evidence without one of the above still counts as a Pass-2 target for human review but is not machine-enforced here. Enforced against SWOT `### Strengths/Weaknesses/Opportunities/Threats` sections (bullets + table rows) and significance-scoring bullets **plus** ranking table rows and Mermaid node labels. 5. **Mermaid diagrams** — every daily synthesis file contains ≥ 1 Mermaid diagram with colour-coded `style` directives. 6. **Pass-2 done** — agent has read each core artifact back after creation and committed improvements. (Enforced by file mtime diff: final file mtime > creation time + 3 min, OR two git-history snapshots on disk.) 
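Check 4's accepted-evidence matcher can be exercised in isolation. The regex below mirrors the gate's `EVIDENCE_RE`; the sample bullet lines are invented purely for illustration:

```shell
# EVIDENCE_RE mirrors the gate's matcher; the sample lines are hypothetical.
EVIDENCE_RE='[Hh][A-Za-z0-9]{3,}[0-9]+|riksdagen\.se|regeringen\.se|scb\.se|worldbank\.org|data\.imf\.org'

check_line() {  # prints "cited" or "missing evidence"
  if printf '%s\n' "$1" | grep -qE "$EVIDENCE_RE"; then
    echo "cited"
  else
    echo "missing evidence"
  fi
}

check_line '- Budget framework tightened (H901FiU1)'          # cited (dok_id)
check_line '- Source: https://www.riksdagen.se/sv/dokument/'  # cited (primary-source host)
check_line '- Parties broadly agree on the direction'         # missing evidence
```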
@@ -34,6 +34,9 @@ SYNTHESIS=(synthesis-summary.md swot-analysis.md risk-assessment.md threat-analy stakeholder-perspectives.md significance-scoring.md classification-results.md \ cross-reference-map.md) DOK_RE='[Hh][A-Za-z0-9]{3,}[0-9]+' +# Check 4 evidence pattern — accepts dok_id OR primary-source URL host +# (per 04-analysis-pipeline.md §Evidence standard). +EVIDENCE_RE='[Hh][A-Za-z0-9]{3,}[0-9]+|riksdagen\.se|regeringen\.se|scb\.se|worldbank\.org|data\.imf\.org' FAIL=0 # Check 1 — artifact existence @@ -61,13 +64,13 @@ fi grep -rIn -e 'AI_MUST_REPLACE' -e '\[REQUIRED\]' -e 'TODO:' -e 'Lorem ipsum' "$ANALYSIS_DIR" \ && { echo "❌ stub placeholders detected"; FAIL=1; } -# Check 4 — evidence citations per quadrant / ranked item -awk -v re="$DOK_RE" ' +# Check 4 — evidence citations per quadrant / ranked item (dok_id OR primary-source URL) +awk -v re="$EVIDENCE_RE" ' function reset_table() { trow=0 } /^###[[:space:]]+.*(Strengths|Weaknesses|Opportunities|Threats)\b/ { sec=$0; reset_table(); next } /^#{1,6}[[:space:]]+/ { sec=""; reset_table(); next } sec != "" && /^[[:space:]]*[-*][[:space:]]+/ && $0 !~ re { - printf "❌ swot-analysis.md %s: bullet missing dok_id: %s\n", sec, $0; bad=1; next + printf "❌ swot-analysis.md %s: bullet missing evidence (dok_id or primary-source URL): %s\n", sec, $0; bad=1; next } sec != "" && /^[[:space:]]*\|/ { # skip table separator rows like |---|---| @@ -75,16 +78,39 @@ awk -v re="$DOK_RE" ' trow++ if (trow == 1) next # header row if ($0 !~ re) { - printf "❌ swot-analysis.md %s: table row missing dok_id: %s\n", sec, $0; bad=1 + printf "❌ swot-analysis.md %s: table row missing evidence (dok_id or primary-source URL): %s\n", sec, $0; bad=1 } next } sec != "" && /^[[:space:]]*$/ { reset_table(); next } END { exit bad+0 } ' "$ANALYSIS_DIR/swot-analysis.md" || FAIL=1 -awk -v re="$DOK_RE" ' - /^[[:space:]]*([0-9]+\.[[:space:]]+|[-*][[:space:]]+)/ && $0 !~ re { - printf "❌ significance-scoring.md ranked item missing dok_id: %s\n", $0; bad=1 
+awk -v re="$EVIDENCE_RE" ' + function reset_table() { trow=0 } + /^```mermaid[[:space:]]*$/ { in_mermaid=1; reset_table(); next } + in_mermaid && /^```[[:space:]]*$/ { in_mermaid=0; next } + !in_mermaid && /^[[:space:]]*([0-9]+\.[[:space:]]+|[-*][[:space:]]+)/ && $0 !~ re { + printf "❌ significance-scoring.md ranked item missing evidence (dok_id or primary-source URL): %s\n", $0; bad=1; next + } + !in_mermaid && /^[[:space:]]*\|/ { + # skip table separator rows like |---|---| + if ($0 ~ /^[[:space:]|:\-]+$/) next + trow++ + if (trow == 1) next # header row + if ($0 !~ re) { + printf "❌ significance-scoring.md ranking table row missing evidence (dok_id or primary-source URL): %s\n", $0; bad=1 + } + next + } + !in_mermaid && /^[[:space:]]*$/ { reset_table(); next } + in_mermaid { + # Skip Mermaid structural / configuration lines; validate likely node-label lines. + if ($0 ~ /^[[:space:]]*(%%|style\b|classDef\b|class\b|linkStyle\b|subgraph\b|end\b|graph\b|flowchart\b|quadrantChart\b|mindmap\b|timeline\b|journey\b|gantt\b|pie\b|xychart-beta\b|sequenceDiagram\b|stateDiagram(-v2)?\b|erDiagram\b|sankey-beta\b|gitGraph\b|requirementDiagram\b|block-beta\b)/) next + # A node-label line typically has bracketed/parenthesised/braced text, e.g. A[Label] or B(Label) or C{Label}. 
+ if ($0 ~ /[\[\(\{][^][(){}]+[\]\)\}]/ && $0 !~ re) { + printf "❌ significance-scoring.md Mermaid ranked item missing evidence (dok_id or primary-source URL): %s\n", $0; bad=1 + } + next } END { exit bad+0 } ' "$ANALYSIS_DIR/significance-scoring.md" || FAIL=1 From c4010dbc708c4f75758472c813413eb2c76073e4 Mon Sep 17 00:00:00 2001 From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com> Date: Wed, 22 Apr 2026 07:42:43 +0000 Subject: [PATCH 13/21] phase 1: add canonical gh-aw + analysis-artifact integration references Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/7b3ed265-e947-43f1-80b2-83cdf38c759d Co-authored-by: pethers <1726836+pethers@users.noreply.github.com> --- .github/prompts/README.md | 19 ++++++++++++++++ analysis/methodologies/README.md | 26 ++++++++++++++++++++++ analysis/templates/README.md | 37 ++++++++++++++++++++++++++++++++ 3 files changed, 82 insertions(+) diff --git a/.github/prompts/README.md b/.github/prompts/README.md index 6039fed10..ff2b5c598 100644 --- a/.github/prompts/README.md +++ b/.github/prompts/README.md @@ -9,6 +9,25 @@ This directory holds the **bounded-context prompt modules** imported by every ne - **No duplication** — modules link to the canonical methodology, template, and MCP config files rather than copy them. - **No audit history** — rules only, no dated run IDs, PR numbers, or version tags. +## Integration points (authoritative) + +This directory is the **single source of truth** for how GitHub Agentic Workflows (gh-aw) produce news articles in this repo. Agents, skills, and copilot instructions MUST link back here rather than restate the rules. + +- **gh-aw runtime**: `gh-aw-actions/setup-cli@v0.69.3` (see any `news-*.lock.yml` for the pinned action). 
+- **Upstream documentation** — link-out only, never copy content: + - Abridged: <https://github.github.com/gh-aw/llms-small.txt> + - Complete: <https://github.github.com/gh-aw/llms-full.txt> + - Agentic-workflows blog series: <https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt> + - Source repo: <https://github.com/github/gh-aw> + - GitHub CLI: <https://cli.github.com/manual/> +- **Analysis artifact contract** (the "deep political analysis" product that every news workflow must produce *before* writing a single article sentence): + - Methodology → [`analysis/methodologies/ai-driven-analysis-guide.md`](../../analysis/methodologies/ai-driven-analysis-guide.md) + - Templates → [`analysis/templates/`](../../analysis/templates/) (one file per artifact) + - **9 core artifacts** (single-type workflow, produced in `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/`): `synthesis-summary.md`, `swot-analysis.md`, `risk-assessment.md`, `threat-analysis.md`, `stakeholder-perspectives.md`, `significance-scoring.md`, `classification-results.md`, `cross-reference-map.md`, `data-download-manifest.md` — full definitions in [`04-analysis-pipeline.md`](04-analysis-pipeline.md). + - **14 artifacts** for Tier-C aggregation (9 core + `README.md`, `executive-brief.md`, `scenario-analysis.md`, `comparative-international.md`, `methodology-reflection.md`) — full definitions in [`ext/tier-c-aggregation.md`](ext/tier-c-aggregation.md). +- **Single blocking gate**: [`05-analysis-gate.md`](05-analysis-gate.md) is the only enforcer. No article may be touched until the gate passes. +- **AI-FIRST rule** (from [`00-base-contract.md`](00-base-contract.md) §Non-negotiable rules #5): minimum 2 complete iterations — Pass 1 creates every artifact, Pass 2 reads Pass 1 back in full and improves every section. 
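The nine-artifact contract listed above can be spot-checked locally with a few lines of bash. This is a sketch of the Check-1 presence test, not the canonical gate block; `$ANALYSIS_DIR` and its fallback are illustrative assumptions:

```shell
# Sketch of the Check-1 presence test — not the canonical gate block.
ANALYSIS_DIR="${ANALYSIS_DIR:-.}"   # real runs use analysis/daily/$ARTICLE_DATE/$SUBFOLDER
CORE=(synthesis-summary.md swot-analysis.md risk-assessment.md threat-analysis.md
      stakeholder-perspectives.md significance-scoring.md classification-results.md
      cross-reference-map.md data-download-manifest.md)
MISSING=0
for f in "${CORE[@]}"; do
  # -s: file exists AND is non-empty — empty stubs count as missing
  [ -s "$ANALYSIS_DIR/$f" ] || { echo "missing or empty: $f"; MISSING=$((MISSING + 1)); }
done
echo "present: $(( ${#CORE[@]} - MISSING )) of ${#CORE[@]} core artifacts"
```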
+ ## Module catalogue | File | Responsibility | Consumed by | diff --git a/analysis/methodologies/README.md index 72198d44a..76a980a65 100644 --- a/analysis/methodologies/README.md +++ b/analysis/methodologies/README.md @@ -111,6 +111,32 @@ --- +## 🤖 How agentic workflows consume these methodologies + +The 12 agentic news workflows in `.github/workflows/news-*.md` are the **primary consumer** of these methodologies. The authoritative workflow contract lives in [`.github/prompts/`](../../.github/prompts/) — see [`.github/prompts/README.md`](../../.github/prompts/README.md) for the full module catalogue. + +| Methodology | Read in Pass 1 (mandatory) | Read in Pass 2 (improvement) | Enforced by | +|-------------|---------------------------|------------------------------|-------------| +| [`ai-driven-analysis-guide.md`](ai-driven-analysis-guide.md) | ✅ role, DIW weighting, pass structure | — | `05-analysis-gate.md` check 1 (artifact presence) | +| [`per-document-methodology.md`](per-document-methodology.md) | ✅ one `{dok_id}-analysis.md` per document | — | `05-analysis-gate.md` check 2 (per-doc coverage) | +| [`political-classification-guide.md`](political-classification-guide.md) | ✅ produces `classification-results.md` | — | `05-analysis-gate.md` check 1 | +| [`political-swot-framework.md`](political-swot-framework.md) | ✅ produces `swot-analysis.md` + TOWS matrix | ✅ tighten evidence tables | `05-analysis-gate.md` check 4 (evidence) | +| [`political-risk-methodology.md`](political-risk-methodology.md) | ✅ produces `risk-assessment.md` | ✅ sensitivity & posterior probabilities | `05-analysis-gate.md` checks 1 + 6 | +| [`political-threat-framework.md`](political-threat-framework.md) | ✅ produces `threat-analysis.md` | ✅ kill-chain depth | `05-analysis-gate.md` check 1 | +| [`political-style-guide.md`](political-style-guide.md) | — | ✅ tone, neutrality, evidence citations | Article Pass-2 review | |
[`strategic-extensions-methodology.md`](strategic-extensions-methodology.md) | ✅ for Tier-C only (`executive-brief.md`, `scenario-analysis.md`, `comparative-international.md`) | ✅ scenario probabilities | `ext/tier-c-aggregation.md` Tier-C gate | +| [`structural-metadata-methodology.md`](structural-metadata-methodology.md) | ✅ cross-reference continuity contracts | — | `05-analysis-gate.md` check 1 (artifact presence) | +| [`synthesis-methodology.md`](synthesis-methodology.md) | ✅ produces `synthesis-summary.md` with DIW-weighted ranking | ✅ lead-story justification | `05-analysis-gate.md` checks 1 + 4 | +| [`electoral-domain-methodology.md`](electoral-domain-methodology.md) | ✅ Election 2026 lens paragraph | — | Article-generation mandatory section | + +**Upstream gh-aw documentation** (link-out only — these methodologies own the political-analysis content; gh-aw owns the workflow runtime): + +- Abridged: <https://github.github.com/gh-aw/llms-small.txt> +- Complete: <https://github.github.com/gh-aw/llms-full.txt> +- Agentic-workflows blog series: <https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt> + +--- + ## 🎯 Purpose This directory contains the **authoritative methodology library** for all political intelligence analysis performed by Riksdagsmonitor's AI-driven agentic workflows. Each methodology document defines the analytical framework, evaluation criteria, evidence standards, and quality requirements that AI agents MUST follow when producing political intelligence. diff --git a/analysis/templates/README.md b/analysis/templates/README.md index 16d70da9b..7aaa9fa92 100644 --- a/analysis/templates/README.md +++ b/analysis/templates/README.md @@ -172,6 +172,43 @@ --- +## 🤖 Artifact → workflow → gate check mapping + +The 12 agentic news workflows in `.github/workflows/news-*.md` render these templates into concrete artifacts under `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/`. 
Authoring contract: [`.github/prompts/README.md`](../../.github/prompts/README.md). + +**9 core artifacts** (every news workflow): + +| Template | Produced artifact | Enforced by `05-analysis-gate.md` | +|----------|-------------------|-----------------------------------| +| [`synthesis-summary.md`](synthesis-summary.md) | `synthesis-summary.md` (lead story + DIW ranking + ≥ 1 Mermaid diagram) | Check 1 (presence), Check 4 (evidence) | +| [`swot-analysis.md`](swot-analysis.md) | `swot-analysis.md` with TOWS matrix | Check 1, Check 4 | +| [`risk-assessment.md`](risk-assessment.md) | `risk-assessment.md` (top 5 risks, likelihood × impact) | Check 1, Check 6 (Pass-2 evidence) | +| [`threat-analysis.md`](threat-analysis.md) | `threat-analysis.md` (attack tree + MITRE-style TTP) | Check 1 | +| [`stakeholder-impact.md`](stakeholder-impact.md) | `stakeholder-perspectives.md` (named actors + influence network) | Check 1, Check 4 | +| [`significance-scoring.md`](significance-scoring.md) | `significance-scoring.md` (DIW scores + sensitivity) | Check 1, Check 4 (evidence) | +| [`political-classification.md`](political-classification.md) | `classification-results.md` (priority tiers, retention) | Check 1 | +| [`cross-reference-map.md`](cross-reference-map.md) | `cross-reference-map.md` (continuity contracts) | Check 1 (presence) | +| [`data-download-manifest.md`](data-download-manifest.md) | `data-download-manifest.md` (pre-computed by download step) | Check 1 | +| [`per-file-political-intelligence.md`](per-file-political-intelligence.md) | `documents/{dok_id}-analysis.md` (one per document) | Check 2 (per-doc coverage) | + +**5 additional Tier-C artifacts** (aggregation / reference-grade workflows — see [`ext/tier-c-aggregation.md`](../../.github/prompts/ext/tier-c-aggregation.md)): + +| Template | Produced artifact | Enforced by | +|----------|-------------------|-------------| +| — (run index, hand-written) | `README.md` | Tier-C gate block | +| 
[`executive-brief.md`](executive-brief.md) | `executive-brief.md` (2-page decision-maker brief) | Tier-C gate block | +| [`scenario-analysis.md`](scenario-analysis.md) | `scenario-analysis.md` (≥ 3 scenarios with posteriors) | Tier-C gate (scenario count ≥ 3) | +| [`comparative-international.md`](comparative-international.md) | `comparative-international.md` (cross-country via WB/IMF/SCB) | Tier-C gate (≥ 2 countries referenced) | +| [`methodology-reflection.md`](methodology-reflection.md) | `methodology-reflection.md` (what worked, biases, uncertainty log) | Tier-C gate block | + +**Supporting templates** (consumed ad-hoc by the artifacts above — not directly gated): + +- [`coalition-mathematics.md`](coalition-mathematics.md), [`devils-advocate.md`](devils-advocate.md), [`election-2026-analysis.md`](election-2026-analysis.md), [`forward-indicators.md`](forward-indicators.md), [`historical-parallels.md`](historical-parallels.md), [`implementation-feasibility.md`](implementation-feasibility.md), [`intelligence-assessment.md`](intelligence-assessment.md), [`media-framing-analysis.md`](media-framing-analysis.md), [`voter-segmentation.md`](voter-segmentation.md). + +**Upstream gh-aw documentation** (link-out only): <https://github.github.com/gh-aw/llms-small.txt> · <https://github.github.com/gh-aw/llms-full.txt> · <https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt> + +--- + ## 🎯 Purpose This directory contains a **structured catalog of analysis templates** that defines the exact output format for every political intelligence artifact produced by Riksdagsmonitor's AI agents. Templates ensure consistency, completeness, and quality across all analysis types — from per-file document intelligence to full synthesis summaries. 
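The presence checks described in the two tables above can be sketched in bash. The artifact filenames come directly from the tables; the function name, demo directory, and output strings are illustrative assumptions, not the actual gate implementation in `05-analysis-gate.md`.

```shell
#!/usr/bin/env bash
# Sketch of gate Check 1 (artifact presence). Filenames follow the tables
# above; the helper and demo paths are illustrative, not the real gate.
set -euo pipefail

CORE_ARTIFACTS=(
  synthesis-summary.md swot-analysis.md risk-assessment.md
  threat-analysis.md stakeholder-perspectives.md significance-scoring.md
  classification-results.md cross-reference-map.md data-download-manifest.md
)
TIER_C_EXTRAS=(
  README.md executive-brief.md scenario-analysis.md
  comparative-international.md methodology-reflection.md
)

# check_artifacts DIR FILE...  -> returns 0 only if every FILE exists in DIR
check_artifacts() {
  local dir="$1"; shift
  local missing=0 f
  for f in "$@"; do
    [ -f "$dir/$f" ] || { echo "MISSING: $f" >&2; missing=1; }
  done
  return "$missing"
}

# Demo: a run directory holding all 9 core artifacts passes the core gate,
# but fails the Tier-C gate until the 5 extras also exist.
run_dir="$(mktemp -d)"
for f in "${CORE_ARTIFACTS[@]}"; do : > "$run_dir/$f"; done
check_artifacts "$run_dir" "${CORE_ARTIFACTS[@]}" && echo "core gate: PASS"
check_artifacts "$run_dir" "${CORE_ARTIFACTS[@]}" "${TIER_C_EXTRAS[@]}" 2>/dev/null \
  || echo "tier-c gate: FAIL (extras missing)"
rm -rf "$run_dir"
```

The real gate adds content checks on top of this (scenario count ≥ 3, ≥ 2 countries referenced), which pure file-presence testing cannot express.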
From cfb4bdbeb8a7c67b57e31541cc7292d7b910c006 Mon Sep 17 00:00:00 2001 From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com> Date: Wed, 22 Apr 2026 07:45:04 +0000 Subject: [PATCH 14/21] phase 2: add agentic-workflow & analysis-artifact integration block to 15 agents Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/7b3ed265-e947-43f1-80b2-83cdf38c759d Co-authored-by: pethers <1726836+pethers@users.noreply.github.com> --- .github/agents/agentic-workflows.agent.md | 9 +++++++++ .github/agents/content-generator.md | 9 +++++++++ .github/agents/data-pipeline-specialist.md | 9 +++++++++ .github/agents/data-visualization-specialist.md | 9 +++++++++ .github/agents/deployment-specialist.md | 9 +++++++++ .github/agents/devops-engineer.md | 9 +++++++++ .github/agents/documentation-architect.md | 9 +++++++++ .github/agents/frontend-specialist.md | 9 +++++++++ .github/agents/intelligence-operative.md | 9 +++++++++ .github/agents/isms-compliance-manager.md | 9 +++++++++ .github/agents/news-journalist.md | 9 +++++++++ .github/agents/quality-engineer.md | 9 +++++++++ .github/agents/security-architect.md | 9 +++++++++ .github/agents/task-agent.md | 9 +++++++++ .github/agents/ui-enhancement-specialist.md | 9 +++++++++ 15 files changed, 135 insertions(+) diff --git a/.github/agents/agentic-workflows.agent.md b/.github/agents/agentic-workflows.agent.md index ab23fe6c1..e4d463e77 100644 --- a/.github/agents/agentic-workflows.agent.md +++ b/.github/agents/agentic-workflows.agent.md @@ -181,3 +181,12 @@ gh aw compile --validate - **Bash tools are enabled by default** - Don't restrict bash commands unnecessarily since workflows are sandboxed by the AWF - Follow security best practices: minimal permissions, explicit network access, no template injection - **Single-file output**: When creating a workflow, produce exactly **one** workflow `.md` file. Do not create separate documentation files (architecture docs, runbooks, usage guides, etc.). 
If documentation is needed, add a brief `## Usage` section inside the workflow file itself. + + +--- + +## 🔗 Agentic-workflow & analysis-artifact integration + +- **Contract** → [`.github/prompts/README.md`](../prompts/README.md) (role, shell, MCP, download, analysis, gate, article, commit). +- **Analysis product** → [`analysis/methodologies/ai-driven-analysis-guide.md`](../../analysis/methodologies/ai-driven-analysis-guide.md) + [`analysis/templates/`](../../analysis/templates/). Every news article MUST be preceded by 9 core artifacts (14 for Tier-C aggregation) in `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/`. [`05-analysis-gate.md`](../prompts/05-analysis-gate.md) is the single blocking gate. +- **gh-aw v0.69.3** — [abridged docs](https://github.github.com/gh-aw/llms-small.txt) · [complete docs](https://github.github.com/gh-aw/llms-full.txt) · [agentic-workflows blog](https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt). diff --git a/.github/agents/content-generator.md b/.github/agents/content-generator.md index 6e7f0efa4..9742aea2e 100644 --- a/.github/agents/content-generator.md +++ b/.github/agents/content-generator.md @@ -370,3 +370,12 @@ All work operates under [Hack23 ISMS-PUBLIC](https://github.com/Hack23/ISMS-PUBL - [Access_Control_Policy.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Access_Control_Policy.md) · [Cryptography_Policy.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Cryptography_Policy.md) · [Incident_Response_Plan.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Incident_Response_Plan.md) · [Security_Metrics.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Security_Metrics.md) · [STYLE_GUIDE.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/STYLE_GUIDE.md) **Framework mapping**: map security-relevant work to **ISO 27001:2022 Annex A**, **NIST CSF 2.0**, **CIS Controls v8.1**, **GDPR**, **NIS2**, **EU CRA**. 
+ + +--- + +## 🔗 Agentic-workflow & analysis-artifact integration + +- **Contract** → [`.github/prompts/README.md`](../prompts/README.md) (role, shell, MCP, download, analysis, gate, article, commit). +- **Analysis product** → [`analysis/methodologies/ai-driven-analysis-guide.md`](../../analysis/methodologies/ai-driven-analysis-guide.md) + [`analysis/templates/`](../../analysis/templates/). Every news article MUST be preceded by 9 core artifacts (14 for Tier-C aggregation) in `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/`. [`05-analysis-gate.md`](../prompts/05-analysis-gate.md) is the single blocking gate. +- **gh-aw v0.69.3** — [abridged docs](https://github.github.com/gh-aw/llms-small.txt) · [complete docs](https://github.github.com/gh-aw/llms-full.txt) · [agentic-workflows blog](https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt). diff --git a/.github/agents/data-pipeline-specialist.md b/.github/agents/data-pipeline-specialist.md index a032595f8..af9b35a24 100644 --- a/.github/agents/data-pipeline-specialist.md +++ b/.github/agents/data-pipeline-specialist.md @@ -381,3 +381,12 @@ All work operates under [Hack23 ISMS-PUBLIC](https://github.com/Hack23/ISMS-PUBL - [Access_Control_Policy.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Access_Control_Policy.md) · [Cryptography_Policy.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Cryptography_Policy.md) · [Incident_Response_Plan.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Incident_Response_Plan.md) · [Security_Metrics.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Security_Metrics.md) · [STYLE_GUIDE.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/STYLE_GUIDE.md) **Framework mapping**: map security-relevant work to **ISO 27001:2022 Annex A**, **NIST CSF 2.0**, **CIS Controls v8.1**, **GDPR**, **NIS2**, **EU CRA**. 
+ + +--- + +## 🔗 Agentic-workflow & analysis-artifact integration + +- **Contract** → [`.github/prompts/README.md`](../prompts/README.md) (role, shell, MCP, download, analysis, gate, article, commit). +- **Analysis product** → [`analysis/methodologies/ai-driven-analysis-guide.md`](../../analysis/methodologies/ai-driven-analysis-guide.md) + [`analysis/templates/`](../../analysis/templates/). Every news article MUST be preceded by 9 core artifacts (14 for Tier-C aggregation) in `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/`. [`05-analysis-gate.md`](../prompts/05-analysis-gate.md) is the single blocking gate. +- **gh-aw v0.69.3** — [abridged docs](https://github.github.com/gh-aw/llms-small.txt) · [complete docs](https://github.github.com/gh-aw/llms-full.txt) · [agentic-workflows blog](https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt). diff --git a/.github/agents/data-visualization-specialist.md b/.github/agents/data-visualization-specialist.md index 0fe4f9d46..7d588892d 100644 --- a/.github/agents/data-visualization-specialist.md +++ b/.github/agents/data-visualization-specialist.md @@ -199,3 +199,12 @@ All work operates under [Hack23 ISMS-PUBLIC](https://github.com/Hack23/ISMS-PUBL - [Access_Control_Policy.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Access_Control_Policy.md) · [Cryptography_Policy.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Cryptography_Policy.md) · [Incident_Response_Plan.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Incident_Response_Plan.md) · [Security_Metrics.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Security_Metrics.md) · [STYLE_GUIDE.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/STYLE_GUIDE.md) **Framework mapping**: map security-relevant work to **ISO 27001:2022 Annex A**, **NIST CSF 2.0**, **CIS Controls v8.1**, **GDPR**, **NIS2**, **EU CRA**. 
+ + +--- + +## 🔗 Agentic-workflow & analysis-artifact integration + +- **Contract** → [`.github/prompts/README.md`](../prompts/README.md) (role, shell, MCP, download, analysis, gate, article, commit). +- **Analysis product** → [`analysis/methodologies/ai-driven-analysis-guide.md`](../../analysis/methodologies/ai-driven-analysis-guide.md) + [`analysis/templates/`](../../analysis/templates/). Every news article MUST be preceded by 9 core artifacts (14 for Tier-C aggregation) in `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/`. [`05-analysis-gate.md`](../prompts/05-analysis-gate.md) is the single blocking gate. +- **gh-aw v0.69.3** — [abridged docs](https://github.github.com/gh-aw/llms-small.txt) · [complete docs](https://github.github.com/gh-aw/llms-full.txt) · [agentic-workflows blog](https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt). diff --git a/.github/agents/deployment-specialist.md b/.github/agents/deployment-specialist.md index 64a3c82bb..27f4c4cd3 100644 --- a/.github/agents/deployment-specialist.md +++ b/.github/agents/deployment-specialist.md @@ -493,3 +493,12 @@ All work operates under [Hack23 ISMS-PUBLIC](https://github.com/Hack23/ISMS-PUBL - [Access_Control_Policy.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Access_Control_Policy.md) · [Cryptography_Policy.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Cryptography_Policy.md) · [Incident_Response_Plan.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Incident_Response_Plan.md) · [Security_Metrics.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Security_Metrics.md) · [STYLE_GUIDE.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/STYLE_GUIDE.md) **Framework mapping**: map security-relevant work to **ISO 27001:2022 Annex A**, **NIST CSF 2.0**, **CIS Controls v8.1**, **GDPR**, **NIS2**, **EU CRA**. 
+ + +--- + +## 🔗 Agentic-workflow & analysis-artifact integration + +- **Contract** → [`.github/prompts/README.md`](../prompts/README.md) (role, shell, MCP, download, analysis, gate, article, commit). +- **Analysis product** → [`analysis/methodologies/ai-driven-analysis-guide.md`](../../analysis/methodologies/ai-driven-analysis-guide.md) + [`analysis/templates/`](../../analysis/templates/). Every news article MUST be preceded by 9 core artifacts (14 for Tier-C aggregation) in `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/`. [`05-analysis-gate.md`](../prompts/05-analysis-gate.md) is the single blocking gate. +- **gh-aw v0.69.3** — [abridged docs](https://github.github.com/gh-aw/llms-small.txt) · [complete docs](https://github.github.com/gh-aw/llms-full.txt) · [agentic-workflows blog](https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt). diff --git a/.github/agents/devops-engineer.md b/.github/agents/devops-engineer.md index 3cff01ca2..efa3a3cf9 100644 --- a/.github/agents/devops-engineer.md +++ b/.github/agents/devops-engineer.md @@ -349,3 +349,12 @@ All work operates under [Hack23 ISMS-PUBLIC](https://github.com/Hack23/ISMS-PUBL - [Access_Control_Policy.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Access_Control_Policy.md) · [Cryptography_Policy.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Cryptography_Policy.md) · [Incident_Response_Plan.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Incident_Response_Plan.md) · [Security_Metrics.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Security_Metrics.md) · [STYLE_GUIDE.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/STYLE_GUIDE.md) **Framework mapping**: map security-relevant work to **ISO 27001:2022 Annex A**, **NIST CSF 2.0**, **CIS Controls v8.1**, **GDPR**, **NIS2**, **EU CRA**. + + +--- + +## 🔗 Agentic-workflow & analysis-artifact integration + +- **Contract** → [`.github/prompts/README.md`](../prompts/README.md) (role, shell, MCP, download, analysis, gate, article, commit). 
+- **Analysis product** → [`analysis/methodologies/ai-driven-analysis-guide.md`](../../analysis/methodologies/ai-driven-analysis-guide.md) + [`analysis/templates/`](../../analysis/templates/). Every news article MUST be preceded by 9 core artifacts (14 for Tier-C aggregation) in `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/`. [`05-analysis-gate.md`](../prompts/05-analysis-gate.md) is the single blocking gate. +- **gh-aw v0.69.3** — [abridged docs](https://github.github.com/gh-aw/llms-small.txt) · [complete docs](https://github.github.com/gh-aw/llms-full.txt) · [agentic-workflows blog](https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt). diff --git a/.github/agents/documentation-architect.md b/.github/agents/documentation-architect.md index 73ff35793..3f5529379 100644 --- a/.github/agents/documentation-architect.md +++ b/.github/agents/documentation-architect.md @@ -398,3 +398,12 @@ All work operates under [Hack23 ISMS-PUBLIC](https://github.com/Hack23/ISMS-PUBL - [Access_Control_Policy.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Access_Control_Policy.md) · [Cryptography_Policy.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Cryptography_Policy.md) · [Incident_Response_Plan.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Incident_Response_Plan.md) · [Security_Metrics.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Security_Metrics.md) · [STYLE_GUIDE.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/STYLE_GUIDE.md) **Framework mapping**: map security-relevant work to **ISO 27001:2022 Annex A**, **NIST CSF 2.0**, **CIS Controls v8.1**, **GDPR**, **NIS2**, **EU CRA**. + + +--- + +## 🔗 Agentic-workflow & analysis-artifact integration + +- **Contract** → [`.github/prompts/README.md`](../prompts/README.md) (role, shell, MCP, download, analysis, gate, article, commit). 
+- **Analysis product** → [`analysis/methodologies/ai-driven-analysis-guide.md`](../../analysis/methodologies/ai-driven-analysis-guide.md) + [`analysis/templates/`](../../analysis/templates/). Every news article MUST be preceded by 9 core artifacts (14 for Tier-C aggregation) in `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/`. [`05-analysis-gate.md`](../prompts/05-analysis-gate.md) is the single blocking gate. +- **gh-aw v0.69.3** — [abridged docs](https://github.github.com/gh-aw/llms-small.txt) · [complete docs](https://github.github.com/gh-aw/llms-full.txt) · [agentic-workflows blog](https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt). diff --git a/.github/agents/frontend-specialist.md b/.github/agents/frontend-specialist.md index 55e540cf9..bd869a908 100644 --- a/.github/agents/frontend-specialist.md +++ b/.github/agents/frontend-specialist.md @@ -414,3 +414,12 @@ All work operates under [Hack23 ISMS-PUBLIC](https://github.com/Hack23/ISMS-PUBL - [Access_Control_Policy.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Access_Control_Policy.md) · [Cryptography_Policy.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Cryptography_Policy.md) · [Incident_Response_Plan.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Incident_Response_Plan.md) · [Security_Metrics.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Security_Metrics.md) · [STYLE_GUIDE.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/STYLE_GUIDE.md) **Framework mapping**: map security-relevant work to **ISO 27001:2022 Annex A**, **NIST CSF 2.0**, **CIS Controls v8.1**, **GDPR**, **NIS2**, **EU CRA**. + + +--- + +## 🔗 Agentic-workflow & analysis-artifact integration + +- **Contract** → [`.github/prompts/README.md`](../prompts/README.md) (role, shell, MCP, download, analysis, gate, article, commit). +- **Analysis product** → [`analysis/methodologies/ai-driven-analysis-guide.md`](../../analysis/methodologies/ai-driven-analysis-guide.md) + [`analysis/templates/`](../../analysis/templates/). 
Every news article MUST be preceded by 9 core artifacts (14 for Tier-C aggregation) in `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/`. [`05-analysis-gate.md`](../prompts/05-analysis-gate.md) is the single blocking gate. +- **gh-aw v0.69.3** — [abridged docs](https://github.github.com/gh-aw/llms-small.txt) · [complete docs](https://github.github.com/gh-aw/llms-full.txt) · [agentic-workflows blog](https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt). diff --git a/.github/agents/intelligence-operative.md b/.github/agents/intelligence-operative.md index 776a7b19c..f4e774836 100644 --- a/.github/agents/intelligence-operative.md +++ b/.github/agents/intelligence-operative.md @@ -196,3 +196,12 @@ Map all security-relevant work to **ISO 27001:2022**, **NIST CSF 2.0**, **CIS Co - **Iterate, then iterate again** — AI FIRST; no shallow first-pass output - **Static site focus** — HTML/CSS, 14 languages, WCAG 2.1 AA, cyberpunk theme - **Mission** — empower citizens, strengthen democratic accountability, illuminate the political process + + +--- + +## 🔗 Agentic-workflow & analysis-artifact integration + +- **Contract** → [`.github/prompts/README.md`](../prompts/README.md) (role, shell, MCP, download, analysis, gate, article, commit). +- **Analysis product** → [`analysis/methodologies/ai-driven-analysis-guide.md`](../../analysis/methodologies/ai-driven-analysis-guide.md) + [`analysis/templates/`](../../analysis/templates/). Every news article MUST be preceded by 9 core artifacts (14 for Tier-C aggregation) in `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/`. [`05-analysis-gate.md`](../prompts/05-analysis-gate.md) is the single blocking gate. +- **gh-aw v0.69.3** — [abridged docs](https://github.github.com/gh-aw/llms-small.txt) · [complete docs](https://github.github.com/gh-aw/llms-full.txt) · [agentic-workflows blog](https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt). 
diff --git a/.github/agents/isms-compliance-manager.md b/.github/agents/isms-compliance-manager.md index 2fbd199e3..4df56d489 100644 --- a/.github/agents/isms-compliance-manager.md +++ b/.github/agents/isms-compliance-manager.md @@ -361,3 +361,12 @@ All work operates under [Hack23 ISMS-PUBLIC](https://github.com/Hack23/ISMS-PUBL - [Access_Control_Policy.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Access_Control_Policy.md) · [Cryptography_Policy.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Cryptography_Policy.md) · [Incident_Response_Plan.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Incident_Response_Plan.md) · [Security_Metrics.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Security_Metrics.md) · [STYLE_GUIDE.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/STYLE_GUIDE.md) **Framework mapping**: map security-relevant work to **ISO 27001:2022 Annex A**, **NIST CSF 2.0**, **CIS Controls v8.1**, **GDPR**, **NIS2**, **EU CRA**. + + +--- + +## 🔗 Agentic-workflow & analysis-artifact integration + +- **Contract** → [`.github/prompts/README.md`](../prompts/README.md) (role, shell, MCP, download, analysis, gate, article, commit). +- **Analysis product** → [`analysis/methodologies/ai-driven-analysis-guide.md`](../../analysis/methodologies/ai-driven-analysis-guide.md) + [`analysis/templates/`](../../analysis/templates/). Every news article MUST be preceded by 9 core artifacts (14 for Tier-C aggregation) in `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/`. [`05-analysis-gate.md`](../prompts/05-analysis-gate.md) is the single blocking gate. +- **gh-aw v0.69.3** — [abridged docs](https://github.github.com/gh-aw/llms-small.txt) · [complete docs](https://github.github.com/gh-aw/llms-full.txt) · [agentic-workflows blog](https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt). 
diff --git a/.github/agents/news-journalist.md b/.github/agents/news-journalist.md index ee629cc95..339dce0a2 100644 --- a/.github/agents/news-journalist.md +++ b/.github/agents/news-journalist.md @@ -191,3 +191,12 @@ Maps to **ISO 27001:2022**, **NIST CSF 2.0**, **CIS Controls v8.1**, **GDPR** (e - **Sources are sacred** — protect confidentiality, verify rigorously - **Myndigheter matter** — agencies drive policy implementation - **Mission** — world-class political journalism, systematic transparency, democratic accountability + + +--- + +## 🔗 Agentic-workflow & analysis-artifact integration + +- **Contract** → [`.github/prompts/README.md`](../prompts/README.md) (role, shell, MCP, download, analysis, gate, article, commit). +- **Analysis product** → [`analysis/methodologies/ai-driven-analysis-guide.md`](../../analysis/methodologies/ai-driven-analysis-guide.md) + [`analysis/templates/`](../../analysis/templates/). Every news article MUST be preceded by 9 core artifacts (14 for Tier-C aggregation) in `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/`. [`05-analysis-gate.md`](../prompts/05-analysis-gate.md) is the single blocking gate. +- **gh-aw v0.69.3** — [abridged docs](https://github.github.com/gh-aw/llms-small.txt) · [complete docs](https://github.github.com/gh-aw/llms-full.txt) · [agentic-workflows blog](https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt). 
diff --git a/.github/agents/quality-engineer.md b/.github/agents/quality-engineer.md index 00e8b9b0b..c3346ed6e 100644 --- a/.github/agents/quality-engineer.md +++ b/.github/agents/quality-engineer.md @@ -349,3 +349,12 @@ All work operates under [Hack23 ISMS-PUBLIC](https://github.com/Hack23/ISMS-PUBL - [Access_Control_Policy.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Access_Control_Policy.md) · [Cryptography_Policy.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Cryptography_Policy.md) · [Incident_Response_Plan.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Incident_Response_Plan.md) · [Security_Metrics.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Security_Metrics.md) · [STYLE_GUIDE.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/STYLE_GUIDE.md) **Framework mapping**: map security-relevant work to **ISO 27001:2022 Annex A**, **NIST CSF 2.0**, **CIS Controls v8.1**, **GDPR**, **NIS2**, **EU CRA**. + + +--- + +## 🔗 Agentic-workflow & analysis-artifact integration + +- **Contract** → [`.github/prompts/README.md`](../prompts/README.md) (role, shell, MCP, download, analysis, gate, article, commit). +- **Analysis product** → [`analysis/methodologies/ai-driven-analysis-guide.md`](../../analysis/methodologies/ai-driven-analysis-guide.md) + [`analysis/templates/`](../../analysis/templates/). Every news article MUST be preceded by 9 core artifacts (14 for Tier-C aggregation) in `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/`. [`05-analysis-gate.md`](../prompts/05-analysis-gate.md) is the single blocking gate. +- **gh-aw v0.69.3** — [abridged docs](https://github.github.com/gh-aw/llms-small.txt) · [complete docs](https://github.github.com/gh-aw/llms-full.txt) · [agentic-workflows blog](https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt). 
diff --git a/.github/agents/security-architect.md b/.github/agents/security-architect.md index c14dadbe8..0eb18af5c 100644 --- a/.github/agents/security-architect.md +++ b/.github/agents/security-architect.md @@ -352,3 +352,12 @@ All work operates under [Hack23 ISMS-PUBLIC](https://github.com/Hack23/ISMS-PUBL - [Access_Control_Policy.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Access_Control_Policy.md) · [Cryptography_Policy.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Cryptography_Policy.md) · [Incident_Response_Plan.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Incident_Response_Plan.md) · [Security_Metrics.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Security_Metrics.md) · [STYLE_GUIDE.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/STYLE_GUIDE.md) **Framework mapping**: map security-relevant work to **ISO 27001:2022 Annex A**, **NIST CSF 2.0**, **CIS Controls v8.1**, **GDPR**, **NIS2**, **EU CRA**. + + +--- + +## 🔗 Agentic-workflow & analysis-artifact integration + +- **Contract** → [`.github/prompts/README.md`](../prompts/README.md) (role, shell, MCP, download, analysis, gate, article, commit). +- **Analysis product** → [`analysis/methodologies/ai-driven-analysis-guide.md`](../../analysis/methodologies/ai-driven-analysis-guide.md) + [`analysis/templates/`](../../analysis/templates/). Every news article MUST be preceded by 9 core artifacts (14 for Tier-C aggregation) in `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/`. [`05-analysis-gate.md`](../prompts/05-analysis-gate.md) is the single blocking gate. +- **gh-aw v0.69.3** — [abridged docs](https://github.github.com/gh-aw/llms-small.txt) · [complete docs](https://github.github.com/gh-aw/llms-full.txt) · [agentic-workflows blog](https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt). 
diff --git a/.github/agents/task-agent.md b/.github/agents/task-agent.md index 86c68e6f4..789e899c0 100644 --- a/.github/agents/task-agent.md +++ b/.github/agents/task-agent.md @@ -334,3 +334,12 @@ All work operates under [Hack23 ISMS-PUBLIC](https://github.com/Hack23/ISMS-PUBL - [Access_Control_Policy.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Access_Control_Policy.md) · [Cryptography_Policy.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Cryptography_Policy.md) · [Incident_Response_Plan.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Incident_Response_Plan.md) · [Security_Metrics.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Security_Metrics.md) · [STYLE_GUIDE.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/STYLE_GUIDE.md) **Framework mapping**: map security-relevant work to **ISO 27001:2022 Annex A**, **NIST CSF 2.0**, **CIS Controls v8.1**, **GDPR**, **NIS2**, **EU CRA**. + + +--- + +## 🔗 Agentic-workflow & analysis-artifact integration + +- **Contract** → [`.github/prompts/README.md`](../prompts/README.md) (role, shell, MCP, download, analysis, gate, article, commit). +- **Analysis product** → [`analysis/methodologies/ai-driven-analysis-guide.md`](../../analysis/methodologies/ai-driven-analysis-guide.md) + [`analysis/templates/`](../../analysis/templates/). Every news article MUST be preceded by 9 core artifacts (14 for Tier-C aggregation) in `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/`. [`05-analysis-gate.md`](../prompts/05-analysis-gate.md) is the single blocking gate. +- **gh-aw v0.69.3** — [abridged docs](https://github.github.com/gh-aw/llms-small.txt) · [complete docs](https://github.github.com/gh-aw/llms-full.txt) · [agentic-workflows blog](https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt). 
diff --git a/.github/agents/ui-enhancement-specialist.md b/.github/agents/ui-enhancement-specialist.md index 406c91a22..a9a74f545 100644 --- a/.github/agents/ui-enhancement-specialist.md +++ b/.github/agents/ui-enhancement-specialist.md @@ -505,3 +505,12 @@ All work operates under [Hack23 ISMS-PUBLIC](https://github.com/Hack23/ISMS-PUBL - [Access_Control_Policy.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Access_Control_Policy.md) · [Cryptography_Policy.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Cryptography_Policy.md) · [Incident_Response_Plan.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Incident_Response_Plan.md) · [Security_Metrics.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Security_Metrics.md) · [STYLE_GUIDE.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/STYLE_GUIDE.md) **Framework mapping**: map security-relevant work to **ISO 27001:2022 Annex A**, **NIST CSF 2.0**, **CIS Controls v8.1**, **GDPR**, **NIS2**, **EU CRA**. + + +--- + +## 🔗 Agentic-workflow & analysis-artifact integration + +- **Contract** → [`.github/prompts/README.md`](../prompts/README.md) (role, shell, MCP, download, analysis, gate, article, commit). +- **Analysis product** → [`analysis/methodologies/ai-driven-analysis-guide.md`](../../analysis/methodologies/ai-driven-analysis-guide.md) + [`analysis/templates/`](../../analysis/templates/). Every news article MUST be preceded by 9 core artifacts (14 for Tier-C aggregation) in `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/`. [`05-analysis-gate.md`](../prompts/05-analysis-gate.md) is the single blocking gate. +- **gh-aw v0.69.3** — [abridged docs](https://github.github.com/gh-aw/llms-small.txt) · [complete docs](https://github.github.com/gh-aw/llms-full.txt) · [agentic-workflows blog](https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt). 
From 8beac71245dc44d2fea0f4a4fbdb95b9275efefb Mon Sep 17 00:00:00 2001 From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com> Date: Wed, 22 Apr 2026 07:46:11 +0000 Subject: [PATCH 15/21] phase 3: add integration block to 19 domain skills + 13 gh-aw skills Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/7b3ed265-e947-43f1-80b2-83cdf38c759d Co-authored-by: pethers <1726836+pethers@users.noreply.github.com> --- .../skills/automated-content-generation/SKILL.md | 11 +++++++++++ .github/skills/behavioral-analysis/SKILL.md | 11 +++++++++++ .github/skills/cia-data-integration/SKILL.md | 11 +++++++++++ .../skills/comparative-politics-reporting/SKILL.md | 11 +++++++++++ .github/skills/copilot-agent-patterns/SKILL.md | 11 +++++++++++ .../skills/data-science-for-intelligence/SKILL.md | 11 +++++++++++ .github/skills/editorial-standards/SKILL.md | 11 +++++++++++ .github/skills/electoral-analysis/SKILL.md | 11 +++++++++++ .../gh-aw-authentication-credentials/SKILL.md | 13 +++++++++++++ .github/skills/gh-aw-containerization/SKILL.md | 13 +++++++++++++ .../skills/gh-aw-continuous-ai-patterns/SKILL.md | 13 +++++++++++++ .github/skills/gh-aw-firewall/SKILL.md | 13 +++++++++++++ .../gh-aw-github-actions-integration/SKILL.md | 13 +++++++++++++ .github/skills/gh-aw-logging-monitoring/SKILL.md | 13 +++++++++++++ .github/skills/gh-aw-mcp-configuration/SKILL.md | 13 +++++++++++++ .github/skills/gh-aw-mcp-gateway/SKILL.md | 13 +++++++++++++ .github/skills/gh-aw-safe-outputs/SKILL.md | 13 +++++++++++++ .github/skills/gh-aw-security-architecture/SKILL.md | 13 +++++++++++++ .github/skills/gh-aw-tools-ecosystem/SKILL.md | 13 +++++++++++++ .github/skills/gh-aw-workflow-authoring/SKILL.md | 13 +++++++++++++ .github/skills/github-agentic-workflows/SKILL.md | 13 +++++++++++++ .../intelligence-analysis-techniques/SKILL.md | 11 +++++++++++ .github/skills/investigative-journalism/SKILL.md | 11 +++++++++++ .github/skills/legislative-monitoring/SKILL.md | 11 
+++++++++++ .github/skills/osint-methodologies/SKILL.md | 11 +++++++++++ .github/skills/political-science-analysis/SKILL.md | 11 +++++++++++ .github/skills/product-management-patterns/SKILL.md | 11 +++++++++++ .github/skills/prospective-news-coverage/SKILL.md | 11 +++++++++++ .github/skills/riksdag-regering-mcp/SKILL.md | 11 +++++++++++ .github/skills/risk-assessment-frameworks/SKILL.md | 11 +++++++++++ .../strategic-communication-analysis/SKILL.md | 11 +++++++++++ .github/skills/swedish-political-system/SKILL.md | 11 +++++++++++ 32 files changed, 378 insertions(+) diff --git a/.github/skills/automated-content-generation/SKILL.md b/.github/skills/automated-content-generation/SKILL.md index 7444d0b8e..1c503f454 100644 --- a/.github/skills/automated-content-generation/SKILL.md +++ b/.github/skills/automated-content-generation/SKILL.md @@ -49,3 +49,14 @@ Expert knowledge in automated content generation using templates, focusing on in --- **Version**: 1.0 | **Last Updated**: 2026-02-06 | **Category**: Development & Operations + + +--- + +## 🔗 Integration with agentic workflows & analysis artifacts + +This skill is consumed by the 12 agentic news workflows in `.github/workflows/news-*.md`. The authoritative contract lives in [`.github/prompts/README.md`](../../prompts/README.md); this skill supplies domain expertise on top of that contract. + +- **Analysis product** → [`ai-driven-analysis-guide.md`](../../../analysis/methodologies/ai-driven-analysis-guide.md) + every template in [`analysis/templates/`](../../../analysis/templates/). +- **Required before any article**: 9 core artifacts (14 for Tier-C) in `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/`; [`05-analysis-gate.md`](../../prompts/05-analysis-gate.md) is the single blocking gate. +- **gh-aw v0.69.3** docs: [abridged](https://github.github.com/gh-aw/llms-small.txt) · [complete](https://github.github.com/gh-aw/llms-full.txt) · [blog series](https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt). 
diff --git a/.github/skills/behavioral-analysis/SKILL.md b/.github/skills/behavioral-analysis/SKILL.md
index 1b546869e..267a863ef 100644
--- a/.github/skills/behavioral-analysis/SKILL.md
+++ b/.github/skills/behavioral-analysis/SKILL.md
@@ -890,3 +890,14 @@ This skill implements requirements from:
 - **Swedish Parliament (Riksdagen)** - Official documentation of parliamentary procedures
 - **V-Dem Institute** - Democracy measurement and behavioral indicators
 - **Swedish Election Authority** - Electoral competitiveness data
+
+
+---
+
+## 🔗 Integration with agentic workflows & analysis artifacts
+
+This skill is consumed by the 12 agentic news workflows in `.github/workflows/news-*.md`. The authoritative contract lives in [`.github/prompts/README.md`](../../prompts/README.md); this skill supplies domain expertise on top of that contract.
+
+- **Analysis product** → [`ai-driven-analysis-guide.md`](../../../analysis/methodologies/ai-driven-analysis-guide.md) + every template in [`analysis/templates/`](../../../analysis/templates/).
+- **Required before any article**: 9 core artifacts (14 for Tier-C) in `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/`; [`05-analysis-gate.md`](../../prompts/05-analysis-gate.md) is the single blocking gate.
+- **gh-aw v0.69.3** docs: [abridged](https://github.github.com/gh-aw/llms-small.txt) · [complete](https://github.github.com/gh-aw/llms-full.txt) · [blog series](https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt).
diff --git a/.github/skills/cia-data-integration/SKILL.md b/.github/skills/cia-data-integration/SKILL.md
index 1a0870456..11e901679 100644
--- a/.github/skills/cia-data-integration/SKILL.md
+++ b/.github/skills/cia-data-integration/SKILL.md
@@ -137,3 +137,14 @@ jobs:
 **Last Updated**: 2026-02-06
 **Category**: Data Integration
 **Maintained by**: Hack23 AB
+
+
+---
+
+## 🔗 Integration with agentic workflows & analysis artifacts
+
+This skill is consumed by the 12 agentic news workflows in `.github/workflows/news-*.md`. The authoritative contract lives in [`.github/prompts/README.md`](../../prompts/README.md); this skill supplies domain expertise on top of that contract.
+
+- **Analysis product** → [`ai-driven-analysis-guide.md`](../../../analysis/methodologies/ai-driven-analysis-guide.md) + every template in [`analysis/templates/`](../../../analysis/templates/).
+- **Required before any article**: 9 core artifacts (14 for Tier-C) in `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/`; [`05-analysis-gate.md`](../../prompts/05-analysis-gate.md) is the single blocking gate.
+- **gh-aw v0.69.3** docs: [abridged](https://github.github.com/gh-aw/llms-small.txt) · [complete](https://github.github.com/gh-aw/llms-full.txt) · [blog series](https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt).
diff --git a/.github/skills/comparative-politics-reporting/SKILL.md b/.github/skills/comparative-politics-reporting/SKILL.md
index f86d6dd27..77482c190 100644
--- a/.github/skills/comparative-politics-reporting/SKILL.md
+++ b/.github/skills/comparative-politics-reporting/SKILL.md
@@ -372,3 +372,14 @@ indices (Freedom House, EIU)*
 
 ---
 **Use this skill when**: Contextualizing Swedish politics in global trends, learning from international policy experiences, benchmarking Sweden's performance, analyzing transnational political movements, or explaining how Swedish developments fit broader patterns.
+
+
+---
+
+## 🔗 Integration with agentic workflows & analysis artifacts
+
+This skill is consumed by the 12 agentic news workflows in `.github/workflows/news-*.md`. The authoritative contract lives in [`.github/prompts/README.md`](../../prompts/README.md); this skill supplies domain expertise on top of that contract.
+
+- **Analysis product** → [`ai-driven-analysis-guide.md`](../../../analysis/methodologies/ai-driven-analysis-guide.md) + every template in [`analysis/templates/`](../../../analysis/templates/).
+- **Required before any article**: 9 core artifacts (14 for Tier-C) in `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/`; [`05-analysis-gate.md`](../../prompts/05-analysis-gate.md) is the single blocking gate.
+- **gh-aw v0.69.3** docs: [abridged](https://github.github.com/gh-aw/llms-small.txt) · [complete](https://github.github.com/gh-aw/llms-full.txt) · [blog series](https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt).
diff --git a/.github/skills/copilot-agent-patterns/SKILL.md b/.github/skills/copilot-agent-patterns/SKILL.md
index 6193ff428..bfc4764f0 100644
--- a/.github/skills/copilot-agent-patterns/SKILL.md
+++ b/.github/skills/copilot-agent-patterns/SKILL.md
@@ -60,3 +60,14 @@ tools: ["tool1", "tool2"] # Minimal set for non-meta agents
 
 ## Related Policies
 - [Secure Development Policy](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Secure_Development_Policy.md)
+
+
+---
+
+## 🔗 Integration with agentic workflows & analysis artifacts
+
+This skill is consumed by the 12 agentic news workflows in `.github/workflows/news-*.md`. The authoritative contract lives in [`.github/prompts/README.md`](../../prompts/README.md); this skill supplies domain expertise on top of that contract.
+
+- **Analysis product** → [`ai-driven-analysis-guide.md`](../../../analysis/methodologies/ai-driven-analysis-guide.md) + every template in [`analysis/templates/`](../../../analysis/templates/).
+- **Required before any article**: 9 core artifacts (14 for Tier-C) in `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/`; [`05-analysis-gate.md`](../../prompts/05-analysis-gate.md) is the single blocking gate.
+- **gh-aw v0.69.3** docs: [abridged](https://github.github.com/gh-aw/llms-small.txt) · [complete](https://github.github.com/gh-aw/llms-full.txt) · [blog series](https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt).
diff --git a/.github/skills/data-science-for-intelligence/SKILL.md b/.github/skills/data-science-for-intelligence/SKILL.md
index 8b474ce10..523d2ebbe 100644
--- a/.github/skills/data-science-for-intelligence/SKILL.md
+++ b/.github/skills/data-science-for-intelligence/SKILL.md
@@ -863,3 +863,14 @@ class PoliticalNetworkAnalyzer:
 - "Pattern Recognition and Machine Learning" - Christopher Bishop
 - "Introduction to Statistical Learning" - James, Witten, Hastie, Tibshirani
 - "Network Science" - Albert-László Barabási
+
+
+---
+
+## 🔗 Integration with agentic workflows & analysis artifacts
+
+This skill is consumed by the 12 agentic news workflows in `.github/workflows/news-*.md`. The authoritative contract lives in [`.github/prompts/README.md`](../../prompts/README.md); this skill supplies domain expertise on top of that contract.
+
+- **Analysis product** → [`ai-driven-analysis-guide.md`](../../../analysis/methodologies/ai-driven-analysis-guide.md) + every template in [`analysis/templates/`](../../../analysis/templates/).
+- **Required before any article**: 9 core artifacts (14 for Tier-C) in `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/`; [`05-analysis-gate.md`](../../prompts/05-analysis-gate.md) is the single blocking gate.
+- **gh-aw v0.69.3** docs: [abridged](https://github.github.com/gh-aw/llms-small.txt) · [complete](https://github.github.com/gh-aw/llms-full.txt) · [blog series](https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt).
diff --git a/.github/skills/editorial-standards/SKILL.md b/.github/skills/editorial-standards/SKILL.md
index dfdaf7471..39d97d277 100644
--- a/.github/skills/editorial-standards/SKILL.md
+++ b/.github/skills/editorial-standards/SKILL.md
@@ -228,3 +228,14 @@ Failure protocol: if any of 1–4 is not satisfied, the draft is returned to the
 
 ---
 **Use this skill when**: Writing political news articles, editing submissions for quality and style, fact-checking claims before publication, training journalists on editorial standards, or establishing quality assurance processes for news operations.
+
+
+---
+
+## 🔗 Integration with agentic workflows & analysis artifacts
+
+This skill is consumed by the 12 agentic news workflows in `.github/workflows/news-*.md`. The authoritative contract lives in [`.github/prompts/README.md`](../../prompts/README.md); this skill supplies domain expertise on top of that contract.
+
+- **Analysis product** → [`ai-driven-analysis-guide.md`](../../../analysis/methodologies/ai-driven-analysis-guide.md) + every template in [`analysis/templates/`](../../../analysis/templates/).
+- **Required before any article**: 9 core artifacts (14 for Tier-C) in `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/`; [`05-analysis-gate.md`](../../prompts/05-analysis-gate.md) is the single blocking gate.
+- **gh-aw v0.69.3** docs: [abridged](https://github.github.com/gh-aw/llms-small.txt) · [complete](https://github.github.com/gh-aw/llms-full.txt) · [blog series](https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt).
diff --git a/.github/skills/electoral-analysis/SKILL.md b/.github/skills/electoral-analysis/SKILL.md
index cb74119f0..6f82199bb 100644
--- a/.github/skills/electoral-analysis/SKILL.md
+++ b/.github/skills/electoral-analysis/SKILL.md
@@ -735,3 +735,14 @@ ORDER BY avg_support ASC;
 - Sifo: https://www.kantarsifo.se/
 - YouGov: https://yougov.se/
 - Demoskop: https://www.demoskop.se/
+
+
+---
+
+## 🔗 Integration with agentic workflows & analysis artifacts
+
+This skill is consumed by the 12 agentic news workflows in `.github/workflows/news-*.md`. The authoritative contract lives in [`.github/prompts/README.md`](../../prompts/README.md); this skill supplies domain expertise on top of that contract.
+
+- **Analysis product** → [`ai-driven-analysis-guide.md`](../../../analysis/methodologies/ai-driven-analysis-guide.md) + every template in [`analysis/templates/`](../../../analysis/templates/).
+- **Required before any article**: 9 core artifacts (14 for Tier-C) in `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/`; [`05-analysis-gate.md`](../../prompts/05-analysis-gate.md) is the single blocking gate.
+- **gh-aw v0.69.3** docs: [abridged](https://github.github.com/gh-aw/llms-small.txt) · [complete](https://github.github.com/gh-aw/llms-full.txt) · [blog series](https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt).
diff --git a/.github/skills/gh-aw-authentication-credentials/SKILL.md b/.github/skills/gh-aw-authentication-credentials/SKILL.md
index 9444f2027..0910b150a 100644
--- a/.github/skills/gh-aw-authentication-credentials/SKILL.md
+++ b/.github/skills/gh-aw-authentication-credentials/SKILL.md
@@ -1469,3 +1469,16 @@ When managing authentication and credentials:
 **Version**: 2.0.0
 **Last Updated**: 2026-04-02
 **Maintained by**: Hack23 Organization
+
+
+---
+
+## 🔗 Integration with Riksdagsmonitor agentic workflows
+
+This gh-aw skill is applied by the 12 agentic news workflows in `.github/workflows/news-*.md`. Their domain contract (analysis-artifact product, gate, article contract) lives in:
+
+- [`.github/prompts/README.md`](../../prompts/README.md) — module catalogue, import rules, AI-FIRST 2-pass rule.
+- [`analysis/methodologies/ai-driven-analysis-guide.md`](../../../analysis/methodologies/ai-driven-analysis-guide.md) + [`analysis/templates/`](../../../analysis/templates/) — 9 core / 14 Tier-C artifacts.
+- [`05-analysis-gate.md`](../../prompts/05-analysis-gate.md) — the single blocking gate before any article content is written.
+
+**Upstream gh-aw docs** (v0.69.3): [abridged](https://github.github.com/gh-aw/llms-small.txt) · [complete](https://github.github.com/gh-aw/llms-full.txt) · [agentic-workflows blog series](https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt) · [source repo](https://github.com/github/gh-aw) · [GitHub CLI manual](https://cli.github.com/manual/).
diff --git a/.github/skills/gh-aw-containerization/SKILL.md b/.github/skills/gh-aw-containerization/SKILL.md
index bf0cb1836..c55d70e91 100644
--- a/.github/skills/gh-aw-containerization/SKILL.md
+++ b/.github/skills/gh-aw-containerization/SKILL.md
@@ -1399,3 +1399,16 @@ When containerizing agentic workflows:
 **Version**: 2.0.0
 **Last Updated**: 2026-04-02
 **Maintained by**: Hack23 Organization
+
+
+---
+
+## 🔗 Integration with Riksdagsmonitor agentic workflows
+
+This gh-aw skill is applied by the 12 agentic news workflows in `.github/workflows/news-*.md`. Their domain contract (analysis-artifact product, gate, article contract) lives in:
+
+- [`.github/prompts/README.md`](../../prompts/README.md) — module catalogue, import rules, AI-FIRST 2-pass rule.
+- [`analysis/methodologies/ai-driven-analysis-guide.md`](../../../analysis/methodologies/ai-driven-analysis-guide.md) + [`analysis/templates/`](../../../analysis/templates/) — 9 core / 14 Tier-C artifacts.
+- [`05-analysis-gate.md`](../../prompts/05-analysis-gate.md) — the single blocking gate before any article content is written.
+
+**Upstream gh-aw docs** (v0.69.3): [abridged](https://github.github.com/gh-aw/llms-small.txt) · [complete](https://github.github.com/gh-aw/llms-full.txt) · [agentic-workflows blog series](https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt) · [source repo](https://github.com/github/gh-aw) · [GitHub CLI manual](https://cli.github.com/manual/).
diff --git a/.github/skills/gh-aw-continuous-ai-patterns/SKILL.md b/.github/skills/gh-aw-continuous-ai-patterns/SKILL.md
index a61705732..f78db870e 100644
--- a/.github/skills/gh-aw-continuous-ai-patterns/SKILL.md
+++ b/.github/skills/gh-aw-continuous-ai-patterns/SKILL.md
@@ -1424,3 +1424,16 @@ The GitHub Next team operates 100+ agentic workflows. Key categories:
 **Last Updated**: 2026-04-02
 **Version**: 2.0.0
 **License**: Apache-2.0
+
+
+---
+
+## 🔗 Integration with Riksdagsmonitor agentic workflows
+
+This gh-aw skill is applied by the 12 agentic news workflows in `.github/workflows/news-*.md`. Their domain contract (analysis-artifact product, gate, article contract) lives in:
+
+- [`.github/prompts/README.md`](../../prompts/README.md) — module catalogue, import rules, AI-FIRST 2-pass rule.
+- [`analysis/methodologies/ai-driven-analysis-guide.md`](../../../analysis/methodologies/ai-driven-analysis-guide.md) + [`analysis/templates/`](../../../analysis/templates/) — 9 core / 14 Tier-C artifacts.
+- [`05-analysis-gate.md`](../../prompts/05-analysis-gate.md) — the single blocking gate before any article content is written.
+
+**Upstream gh-aw docs** (v0.69.3): [abridged](https://github.github.com/gh-aw/llms-small.txt) · [complete](https://github.github.com/gh-aw/llms-full.txt) · [agentic-workflows blog series](https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt) · [source repo](https://github.com/github/gh-aw) · [GitHub CLI manual](https://cli.github.com/manual/).
diff --git a/.github/skills/gh-aw-firewall/SKILL.md b/.github/skills/gh-aw-firewall/SKILL.md
index 4a5255e0d..1aebf2ae8 100644
--- a/.github/skills/gh-aw-firewall/SKILL.md
+++ b/.github/skills/gh-aw-firewall/SKILL.md
@@ -872,3 +872,16 @@ network:
 **Version**: 2.0.0
 **Last Updated**: 2026-04-02
 **Maintained by**: Hack23 AB
+
+
+---
+
+## 🔗 Integration with Riksdagsmonitor agentic workflows
+
+This gh-aw skill is applied by the 12 agentic news workflows in `.github/workflows/news-*.md`. Their domain contract (analysis-artifact product, gate, article contract) lives in:
+
+- [`.github/prompts/README.md`](../../prompts/README.md) — module catalogue, import rules, AI-FIRST 2-pass rule.
+- [`analysis/methodologies/ai-driven-analysis-guide.md`](../../../analysis/methodologies/ai-driven-analysis-guide.md) + [`analysis/templates/`](../../../analysis/templates/) — 9 core / 14 Tier-C artifacts.
+- [`05-analysis-gate.md`](../../prompts/05-analysis-gate.md) — the single blocking gate before any article content is written.
+
+**Upstream gh-aw docs** (v0.69.3): [abridged](https://github.github.com/gh-aw/llms-small.txt) · [complete](https://github.github.com/gh-aw/llms-full.txt) · [agentic-workflows blog series](https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt) · [source repo](https://github.com/github/gh-aw) · [GitHub CLI manual](https://cli.github.com/manual/).
diff --git a/.github/skills/gh-aw-github-actions-integration/SKILL.md b/.github/skills/gh-aw-github-actions-integration/SKILL.md
index 05dc31227..b647d578c 100644
--- a/.github/skills/gh-aw-github-actions-integration/SKILL.md
+++ b/.github/skills/gh-aw-github-actions-integration/SKILL.md
@@ -1528,3 +1528,16 @@ When integrating agentic workflows with GitHub Actions:
 **Version**: 2.0.0
 **Last Updated**: 2026-04-02
 **Maintained by**: Hack23 Organization
+
+
+---
+
+## 🔗 Integration with Riksdagsmonitor agentic workflows
+
+This gh-aw skill is applied by the 12 agentic news workflows in `.github/workflows/news-*.md`. Their domain contract (analysis-artifact product, gate, article contract) lives in:
+
+- [`.github/prompts/README.md`](../../prompts/README.md) — module catalogue, import rules, AI-FIRST 2-pass rule.
+- [`analysis/methodologies/ai-driven-analysis-guide.md`](../../../analysis/methodologies/ai-driven-analysis-guide.md) + [`analysis/templates/`](../../../analysis/templates/) — 9 core / 14 Tier-C artifacts.
+- [`05-analysis-gate.md`](../../prompts/05-analysis-gate.md) — the single blocking gate before any article content is written.
+
+**Upstream gh-aw docs** (v0.69.3): [abridged](https://github.github.com/gh-aw/llms-small.txt) · [complete](https://github.github.com/gh-aw/llms-full.txt) · [agentic-workflows blog series](https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt) · [source repo](https://github.com/github/gh-aw) · [GitHub CLI manual](https://cli.github.com/manual/).
diff --git a/.github/skills/gh-aw-logging-monitoring/SKILL.md b/.github/skills/gh-aw-logging-monitoring/SKILL.md
index 9f63391b4..c3a237b97 100644
--- a/.github/skills/gh-aw-logging-monitoring/SKILL.md
+++ b/.github/skills/gh-aw-logging-monitoring/SKILL.md
@@ -1473,3 +1473,16 @@ When implementing logging and monitoring for agentic workflows:
 **Version**: 2.0.0
 **Last Updated**: 2026-04-02
 **Maintained by**: Hack23 Organization
+
+
+---
+
+## 🔗 Integration with Riksdagsmonitor agentic workflows
+
+This gh-aw skill is applied by the 12 agentic news workflows in `.github/workflows/news-*.md`. Their domain contract (analysis-artifact product, gate, article contract) lives in:
+
+- [`.github/prompts/README.md`](../../prompts/README.md) — module catalogue, import rules, AI-FIRST 2-pass rule.
+- [`analysis/methodologies/ai-driven-analysis-guide.md`](../../../analysis/methodologies/ai-driven-analysis-guide.md) + [`analysis/templates/`](../../../analysis/templates/) — 9 core / 14 Tier-C artifacts.
+- [`05-analysis-gate.md`](../../prompts/05-analysis-gate.md) — the single blocking gate before any article content is written.
+
+**Upstream gh-aw docs** (v0.69.3): [abridged](https://github.github.com/gh-aw/llms-small.txt) · [complete](https://github.github.com/gh-aw/llms-full.txt) · [agentic-workflows blog series](https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt) · [source repo](https://github.com/github/gh-aw) · [GitHub CLI manual](https://cli.github.com/manual/).
diff --git a/.github/skills/gh-aw-mcp-configuration/SKILL.md b/.github/skills/gh-aw-mcp-configuration/SKILL.md
index c1844d240..12dd251a4 100644
--- a/.github/skills/gh-aw-mcp-configuration/SKILL.md
+++ b/.github/skills/gh-aw-mcp-configuration/SKILL.md
@@ -1800,3 +1800,16 @@ For Copilot coding agent sessions (not agentic workflows), MCP servers are confi
 **Last Updated**: 2026-04-02
 **Version**: 2.0.0
 **License**: Apache-2.0
+
+
+---
+
+## 🔗 Integration with Riksdagsmonitor agentic workflows
+
+This gh-aw skill is applied by the 12 agentic news workflows in `.github/workflows/news-*.md`. Their domain contract (analysis-artifact product, gate, article contract) lives in:
+
+- [`.github/prompts/README.md`](../../prompts/README.md) — module catalogue, import rules, AI-FIRST 2-pass rule.
+- [`analysis/methodologies/ai-driven-analysis-guide.md`](../../../analysis/methodologies/ai-driven-analysis-guide.md) + [`analysis/templates/`](../../../analysis/templates/) — 9 core / 14 Tier-C artifacts.
+- [`05-analysis-gate.md`](../../prompts/05-analysis-gate.md) — the single blocking gate before any article content is written.
+
+**Upstream gh-aw docs** (v0.69.3): [abridged](https://github.github.com/gh-aw/llms-small.txt) · [complete](https://github.github.com/gh-aw/llms-full.txt) · [agentic-workflows blog series](https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt) · [source repo](https://github.com/github/gh-aw) · [GitHub CLI manual](https://cli.github.com/manual/).
diff --git a/.github/skills/gh-aw-mcp-gateway/SKILL.md b/.github/skills/gh-aw-mcp-gateway/SKILL.md
index 07894aa5a..b935517c3 100644
--- a/.github/skills/gh-aw-mcp-gateway/SKILL.md
+++ b/.github/skills/gh-aw-mcp-gateway/SKILL.md
@@ -2301,3 +2301,16 @@ curl -X POST http://localhost:8000/mcp/github \
 
 - [ ] Resource limits set
 - [ ] Monitoring configured
+
+
+---
+
+## 🔗 Integration with Riksdagsmonitor agentic workflows
+
+This gh-aw skill is applied by the 12 agentic news workflows in `.github/workflows/news-*.md`. Their domain contract (analysis-artifact product, gate, article contract) lives in:
+
+- [`.github/prompts/README.md`](../../prompts/README.md) — module catalogue, import rules, AI-FIRST 2-pass rule.
+- [`analysis/methodologies/ai-driven-analysis-guide.md`](../../../analysis/methodologies/ai-driven-analysis-guide.md) + [`analysis/templates/`](../../../analysis/templates/) — 9 core / 14 Tier-C artifacts.
+- [`05-analysis-gate.md`](../../prompts/05-analysis-gate.md) — the single blocking gate before any article content is written.
+
+**Upstream gh-aw docs** (v0.69.3): [abridged](https://github.github.com/gh-aw/llms-small.txt) · [complete](https://github.github.com/gh-aw/llms-full.txt) · [agentic-workflows blog series](https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt) · [source repo](https://github.com/github/gh-aw) · [GitHub CLI manual](https://cli.github.com/manual/).
diff --git a/.github/skills/gh-aw-safe-outputs/SKILL.md b/.github/skills/gh-aw-safe-outputs/SKILL.md
index 12e60d906..9b3e7986d 100644
--- a/.github/skills/gh-aw-safe-outputs/SKILL.md
+++ b/.github/skills/gh-aw-safe-outputs/SKILL.md
@@ -698,3 +698,16 @@ safe-outputs:
 **Version**: 2.0.0
 **Last Updated**: 2026-04-02
 **Maintained by**: Hack23 AB
+
+
+---
+
+## 🔗 Integration with Riksdagsmonitor agentic workflows
+
+This gh-aw skill is applied by the 12 agentic news workflows in `.github/workflows/news-*.md`. Their domain contract (analysis-artifact product, gate, article contract) lives in:
+
+- [`.github/prompts/README.md`](../../prompts/README.md) — module catalogue, import rules, AI-FIRST 2-pass rule.
+- [`analysis/methodologies/ai-driven-analysis-guide.md`](../../../analysis/methodologies/ai-driven-analysis-guide.md) + [`analysis/templates/`](../../../analysis/templates/) — 9 core / 14 Tier-C artifacts.
+- [`05-analysis-gate.md`](../../prompts/05-analysis-gate.md) — the single blocking gate before any article content is written.
+
+**Upstream gh-aw docs** (v0.69.3): [abridged](https://github.github.com/gh-aw/llms-small.txt) · [complete](https://github.github.com/gh-aw/llms-full.txt) · [agentic-workflows blog series](https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt) · [source repo](https://github.com/github/gh-aw) · [GitHub CLI manual](https://cli.github.com/manual/).
diff --git a/.github/skills/gh-aw-security-architecture/SKILL.md b/.github/skills/gh-aw-security-architecture/SKILL.md
index 1d4929dfd..839a53f4b 100644
--- a/.github/skills/gh-aw-security-architecture/SKILL.md
+++ b/.github/skills/gh-aw-security-architecture/SKILL.md
@@ -1789,3 +1789,16 @@ tools:
 **Last Updated**: 2026-04-02
 **Version**: 2.0.0
 **License**: Apache-2.0
+
+
+---
+
+## 🔗 Integration with Riksdagsmonitor agentic workflows
+
+This gh-aw skill is applied by the 12 agentic news workflows in `.github/workflows/news-*.md`. Their domain contract (analysis-artifact product, gate, article contract) lives in:
+
+- [`.github/prompts/README.md`](../../prompts/README.md) — module catalogue, import rules, AI-FIRST 2-pass rule.
+- [`analysis/methodologies/ai-driven-analysis-guide.md`](../../../analysis/methodologies/ai-driven-analysis-guide.md) + [`analysis/templates/`](../../../analysis/templates/) — 9 core / 14 Tier-C artifacts.
+- [`05-analysis-gate.md`](../../prompts/05-analysis-gate.md) — the single blocking gate before any article content is written.
+
+**Upstream gh-aw docs** (v0.69.3): [abridged](https://github.github.com/gh-aw/llms-small.txt) · [complete](https://github.github.com/gh-aw/llms-full.txt) · [agentic-workflows blog series](https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt) · [source repo](https://github.com/github/gh-aw) · [GitHub CLI manual](https://cli.github.com/manual/).
diff --git a/.github/skills/gh-aw-tools-ecosystem/SKILL.md b/.github/skills/gh-aw-tools-ecosystem/SKILL.md
index d1cfa3bcf..04557c51b 100644
--- a/.github/skills/gh-aw-tools-ecosystem/SKILL.md
+++ b/.github/skills/gh-aw-tools-ecosystem/SKILL.md
@@ -805,3 +805,16 @@ tools:
 **Last Updated**: 2026-04-02
 **Version**: 2.0.0
 **License**: Apache-2.0
+
+
+---
+
+## 🔗 Integration with Riksdagsmonitor agentic workflows
+
+This gh-aw skill is applied by the 12 agentic news workflows in `.github/workflows/news-*.md`. Their domain contract (analysis-artifact product, gate, article contract) lives in:
+
+- [`.github/prompts/README.md`](../../prompts/README.md) — module catalogue, import rules, AI-FIRST 2-pass rule.
+- [`analysis/methodologies/ai-driven-analysis-guide.md`](../../../analysis/methodologies/ai-driven-analysis-guide.md) + [`analysis/templates/`](../../../analysis/templates/) — 9 core / 14 Tier-C artifacts.
+- [`05-analysis-gate.md`](../../prompts/05-analysis-gate.md) — the single blocking gate before any article content is written.
+
+**Upstream gh-aw docs** (v0.69.3): [abridged](https://github.github.com/gh-aw/llms-small.txt) · [complete](https://github.github.com/gh-aw/llms-full.txt) · [agentic-workflows blog series](https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt) · [source repo](https://github.com/github/gh-aw) · [GitHub CLI manual](https://cli.github.com/manual/).
diff --git a/.github/skills/gh-aw-workflow-authoring/SKILL.md b/.github/skills/gh-aw-workflow-authoring/SKILL.md
index 178e396b2..f8776671f 100644
--- a/.github/skills/gh-aw-workflow-authoring/SKILL.md
+++ b/.github/skills/gh-aw-workflow-authoring/SKILL.md
@@ -1050,3 +1050,16 @@ tools:
 **Version**: 2.0.0
 **Last Updated**: 2026-04-02
 **Maintained by**: Hack23 AB
+
+
+---
+
+## 🔗 Integration with Riksdagsmonitor agentic workflows
+
+This gh-aw skill is applied by the 12 agentic news workflows in `.github/workflows/news-*.md`. Their domain contract (analysis-artifact product, gate, article contract) lives in:
+
+- [`.github/prompts/README.md`](../../prompts/README.md) — module catalogue, import rules, AI-FIRST 2-pass rule.
+- [`analysis/methodologies/ai-driven-analysis-guide.md`](../../../analysis/methodologies/ai-driven-analysis-guide.md) + [`analysis/templates/`](../../../analysis/templates/) — 9 core / 14 Tier-C artifacts.
+- [`05-analysis-gate.md`](../../prompts/05-analysis-gate.md) — the single blocking gate before any article content is written.
+
+**Upstream gh-aw docs** (v0.69.3): [abridged](https://github.github.com/gh-aw/llms-small.txt) · [complete](https://github.github.com/gh-aw/llms-full.txt) · [agentic-workflows blog series](https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt) · [source repo](https://github.com/github/gh-aw) · [GitHub CLI manual](https://cli.github.com/manual/).
diff --git a/.github/skills/github-agentic-workflows/SKILL.md b/.github/skills/github-agentic-workflows/SKILL.md
index 6d0c616bd..f76ea58a0 100644
--- a/.github/skills/github-agentic-workflows/SKILL.md
+++ b/.github/skills/github-agentic-workflows/SKILL.md
@@ -1815,3 +1815,16 @@ See [`Hack23/riksdagsmonitor`](https://github.com/Hack23/riksdagsmonitor):
 
 **Last Updated**: 2026-02-15
 **Maintained by**: Hack23 AB
+
+
+---
+
+## 🔗 Integration with Riksdagsmonitor agentic workflows
+
+This gh-aw skill is applied by the 12 agentic news workflows in `.github/workflows/news-*.md`. Their domain contract (analysis-artifact product, gate, article contract) lives in:
+
+- [`.github/prompts/README.md`](../../prompts/README.md) — module catalogue, import rules, AI-FIRST 2-pass rule.
+- [`analysis/methodologies/ai-driven-analysis-guide.md`](../../../analysis/methodologies/ai-driven-analysis-guide.md) + [`analysis/templates/`](../../../analysis/templates/) — 9 core / 14 Tier-C artifacts.
+- [`05-analysis-gate.md`](../../prompts/05-analysis-gate.md) — the single blocking gate before any article content is written.
+
+**Upstream gh-aw docs** (v0.69.3): [abridged](https://github.github.com/gh-aw/llms-small.txt) · [complete](https://github.github.com/gh-aw/llms-full.txt) · [agentic-workflows blog series](https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt) · [source repo](https://github.com/github/gh-aw) · [GitHub CLI manual](https://cli.github.com/manual/).
diff --git a/.github/skills/intelligence-analysis-techniques/SKILL.md b/.github/skills/intelligence-analysis-techniques/SKILL.md
index 3f3b053b5..2f79da6f6 100644
--- a/.github/skills/intelligence-analysis-techniques/SKILL.md
+++ b/.github/skills/intelligence-analysis-techniques/SKILL.md
@@ -798,3 +798,14 @@ ORDER BY vd.month DESC;
 - "Thinking, Fast and Slow" - Daniel Kahneman (Cognitive bias)
 - "Superforecasting" - Philip Tetlock (Probabilistic reasoning)
 - "Intelligence Analysis: A Target-Centric Approach" - Robert M. Clark
+
+
+---
+
+## 🔗 Integration with agentic workflows & analysis artifacts
+
+This skill is consumed by the 12 agentic news workflows in `.github/workflows/news-*.md`. The authoritative contract lives in [`.github/prompts/README.md`](../../prompts/README.md); this skill supplies domain expertise on top of that contract.
+
+- **Analysis product** → [`ai-driven-analysis-guide.md`](../../../analysis/methodologies/ai-driven-analysis-guide.md) + every template in [`analysis/templates/`](../../../analysis/templates/).
+- **Required before any article**: 9 core artifacts (14 for Tier-C) in `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/`; [`05-analysis-gate.md`](../../prompts/05-analysis-gate.md) is the single blocking gate.
+- **gh-aw v0.69.3** docs: [abridged](https://github.github.com/gh-aw/llms-small.txt) · [complete](https://github.github.com/gh-aw/llms-full.txt) · [blog series](https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt).
diff --git a/.github/skills/investigative-journalism/SKILL.md b/.github/skills/investigative-journalism/SKILL.md
index d84f10898..25078daca 100644
--- a/.github/skills/investigative-journalism/SKILL.md
+++ b/.github/skills/investigative-journalism/SKILL.md
@@ -244,3 +244,14 @@ Provides expertise in investigative journalism techniques for deep political acc
 
 ---
 **Use this skill when**: Conducting in-depth accountability investigations, filing FOI requests, analyzing documents and data for corruption patterns, verifying sources for major revelations, or navigating ethical boundaries in investigative reporting.
+
+
+---
+
+## 🔗 Integration with agentic workflows & analysis artifacts
+
+This skill is consumed by the 12 agentic news workflows in `.github/workflows/news-*.md`. The authoritative contract lives in [`.github/prompts/README.md`](../../prompts/README.md); this skill supplies domain expertise on top of that contract.
+
+- **Analysis product** → [`ai-driven-analysis-guide.md`](../../../analysis/methodologies/ai-driven-analysis-guide.md) + every template in [`analysis/templates/`](../../../analysis/templates/).
+- **Required before any article**: 9 core artifacts (14 for Tier-C) in `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/`; [`05-analysis-gate.md`](../../prompts/05-analysis-gate.md) is the single blocking gate.
+- **gh-aw v0.69.3** docs: [abridged](https://github.github.com/gh-aw/llms-small.txt) · [complete](https://github.github.com/gh-aw/llms-full.txt) · [blog series](https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt).
diff --git a/.github/skills/legislative-monitoring/SKILL.md b/.github/skills/legislative-monitoring/SKILL.md
index f365b3096..5be9c21bb 100644
--- a/.github/skills/legislative-monitoring/SKILL.md
+++ b/.github/skills/legislative-monitoring/SKILL.md
@@ -1126,3 +1126,14 @@ This skill implements requirements from:
 - **[DATABASE_VIEW_INTELLIGENCE_CATALOG.md](../../DATABASE_VIEW_INTELLIGENCE_CATALOG.md)** - Complete view documentation
 - **[RISK_RULES_INTOP_OSINT.md](../../RISK_RULES_INTOP_OSINT.md)** - Risk rule specifications
 - **[DATA_ANALYSIS_INTOP_OSINT.md](../../DATA_ANALYSIS_INTOP_OSINT.md)** - Analysis frameworks
+
+
+---
+
+## 🔗 Integration with agentic workflows & analysis artifacts
+
+This skill is consumed by the 12 agentic news workflows in `.github/workflows/news-*.md`. The authoritative contract lives in [`.github/prompts/README.md`](../../prompts/README.md); this skill supplies domain expertise on top of that contract.
+
+- **Analysis product** → [`ai-driven-analysis-guide.md`](../../../analysis/methodologies/ai-driven-analysis-guide.md) + every template in [`analysis/templates/`](../../../analysis/templates/).
+- **Required before any article**: 9 core artifacts (14 for Tier-C) in `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/`; [`05-analysis-gate.md`](../../prompts/05-analysis-gate.md) is the single blocking gate.
+- **gh-aw v0.69.3** docs: [abridged](https://github.github.com/gh-aw/llms-small.txt) · [complete](https://github.github.com/gh-aw/llms-full.txt) · [blog series](https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt).
diff --git a/.github/skills/osint-methodologies/SKILL.md b/.github/skills/osint-methodologies/SKILL.md
index 64fe7c927..dabd3eef8 100644
--- a/.github/skills/osint-methodologies/SKILL.md
+++ b/.github/skills/osint-methodologies/SKILL.md
@@ -685,3 +685,14 @@ LEFT JOIN ballot_data b ON v.ballot_id = b.ballot_id;
 - ISO 27001:2022: Information Security Management
 - NIST Special Publication 800-53: Security and Privacy Controls
 - CIS Controls v8.1: Critical Security Controls
+
+
+---
+
+## 🔗 Integration with agentic workflows & analysis artifacts
+
+This skill is consumed by the 12 agentic news workflows in `.github/workflows/news-*.md`. The authoritative contract lives in [`.github/prompts/README.md`](../../prompts/README.md); this skill supplies domain expertise on top of that contract.
+
+- **Analysis product** → [`ai-driven-analysis-guide.md`](../../../analysis/methodologies/ai-driven-analysis-guide.md) + every template in [`analysis/templates/`](../../../analysis/templates/).
+- **Required before any article**: 9 core artifacts (14 for Tier-C) in `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/`; [`05-analysis-gate.md`](../../prompts/05-analysis-gate.md) is the single blocking gate.
+- **gh-aw v0.69.3** docs: [abridged](https://github.github.com/gh-aw/llms-small.txt) · [complete](https://github.github.com/gh-aw/llms-full.txt) · [blog series](https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt).
diff --git a/.github/skills/political-science-analysis/SKILL.md b/.github/skills/political-science-analysis/SKILL.md
index 07befcbab..0aa956425 100644
--- a/.github/skills/political-science-analysis/SKILL.md
+++ b/.github/skills/political-science-analysis/SKILL.md
@@ -495,3 +495,14 @@ Track these KPIs to measure analytical quality:
 - **Timeliness**: Analysis published within 48 hours of new data
 - **Impact**: Citations in academic research, media references
 - **Transparency**: All methodologies documented and reproducible
+
+
+---
+
+## 🔗 Integration with agentic workflows & analysis artifacts
+
+This skill is consumed by the 12 agentic news workflows in `.github/workflows/news-*.md`. The authoritative contract lives in [`.github/prompts/README.md`](../../prompts/README.md); this skill supplies domain expertise on top of that contract.
+
+- **Analysis product** → [`ai-driven-analysis-guide.md`](../../../analysis/methodologies/ai-driven-analysis-guide.md) + every template in [`analysis/templates/`](../../../analysis/templates/).
+- **Required before any article**: 9 core artifacts (14 for Tier-C) in `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/`; [`05-analysis-gate.md`](../../prompts/05-analysis-gate.md) is the single blocking gate.
+- **gh-aw v0.69.3** docs: [abridged](https://github.github.com/gh-aw/llms-small.txt) · [complete](https://github.github.com/gh-aw/llms-full.txt) · [blog series](https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt).
diff --git a/.github/skills/product-management-patterns/SKILL.md b/.github/skills/product-management-patterns/SKILL.md index 4c86bc10d..45f11d94b 100644 --- a/.github/skills/product-management-patterns/SKILL.md +++ b/.github/skills/product-management-patterns/SKILL.md @@ -52,3 +52,14 @@ Defines product management patterns for prioritizing features, planning roadmaps ## Related Policies - [Secure Development Policy](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Secure_Development_Policy.md) + + +--- + +## 🔗 Integration with agentic workflows & analysis artifacts + +This skill is consumed by the 12 agentic news workflows in `.github/workflows/news-*.md`. The authoritative contract lives in [`.github/prompts/README.md`](../../prompts/README.md); this skill supplies domain expertise on top of that contract. + +- **Analysis product** → [`ai-driven-analysis-guide.md`](../../../analysis/methodologies/ai-driven-analysis-guide.md) + every template in [`analysis/templates/`](../../../analysis/templates/). +- **Required before any article**: 9 core artifacts (14 for Tier-C) in `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/`; [`05-analysis-gate.md`](../../prompts/05-analysis-gate.md) is the single blocking gate. +- **gh-aw v0.69.3** docs: [abridged](https://github.github.com/gh-aw/llms-small.txt) · [complete](https://github.github.com/gh-aw/llms-full.txt) · [blog series](https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt). diff --git a/.github/skills/prospective-news-coverage/SKILL.md b/.github/skills/prospective-news-coverage/SKILL.md index ae7032221..089ea1aa8 100644 --- a/.github/skills/prospective-news-coverage/SKILL.md +++ b/.github/skills/prospective-news-coverage/SKILL.md @@ -276,3 +276,14 @@ with committee members (3 MPs, anonymous)* --- **Use this skill when**: Planning news coverage, writing "week ahead" previews, analyzing upcoming committee meetings, forecasting political developments, or helping readers understand what's coming next in Swedish politics. 
+ + +--- + +## 🔗 Integration with agentic workflows & analysis artifacts + +This skill is consumed by the 12 agentic news workflows in `.github/workflows/news-*.md`. The authoritative contract lives in [`.github/prompts/README.md`](../../prompts/README.md); this skill supplies domain expertise on top of that contract. + +- **Analysis product** → [`ai-driven-analysis-guide.md`](../../../analysis/methodologies/ai-driven-analysis-guide.md) + every template in [`analysis/templates/`](../../../analysis/templates/). +- **Required before any article**: 9 core artifacts (14 for Tier-C) in `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/`; [`05-analysis-gate.md`](../../prompts/05-analysis-gate.md) is the single blocking gate. +- **gh-aw v0.69.3** docs: [abridged](https://github.github.com/gh-aw/llms-small.txt) · [complete](https://github.github.com/gh-aw/llms-full.txt) · [blog series](https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt). diff --git a/.github/skills/riksdag-regering-mcp/SKILL.md b/.github/skills/riksdag-regering-mcp/SKILL.md index 933df4495..55b7768c1 100644 --- a/.github/skills/riksdag-regering-mcp/SKILL.md +++ b/.github/skills/riksdag-regering-mcp/SKILL.md @@ -561,3 +561,14 @@ riksdag-regering-mcp **Version**: 1.0 **Last Updated**: 2026-02-06 **Maintained by**: Hack23 AB + + +--- + +## 🔗 Integration with agentic workflows & analysis artifacts + +This skill is consumed by the 12 agentic news workflows in `.github/workflows/news-*.md`. The authoritative contract lives in [`.github/prompts/README.md`](../../prompts/README.md); this skill supplies domain expertise on top of that contract. + +- **Analysis product** → [`ai-driven-analysis-guide.md`](../../../analysis/methodologies/ai-driven-analysis-guide.md) + every template in [`analysis/templates/`](../../../analysis/templates/). 
+- **Required before any article**: 9 core artifacts (14 for Tier-C) in `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/`; [`05-analysis-gate.md`](../../prompts/05-analysis-gate.md) is the single blocking gate. +- **gh-aw v0.69.3** docs: [abridged](https://github.github.com/gh-aw/llms-small.txt) · [complete](https://github.github.com/gh-aw/llms-full.txt) · [blog series](https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt). diff --git a/.github/skills/risk-assessment-frameworks/SKILL.md b/.github/skills/risk-assessment-frameworks/SKILL.md index 2d85389d9..5de6d58ec 100644 --- a/.github/skills/risk-assessment-frameworks/SKILL.md +++ b/.github/skills/risk-assessment-frameworks/SKILL.md @@ -1206,3 +1206,14 @@ This skill implements requirements from: - **[RISK_RULES_INTOP_OSINT.md](../../RISK_RULES_INTOP_OSINT.md)** - 50+ risk rule specifications - **[DATA_ANALYSIS_INTOP_OSINT.md](../../DATA_ANALYSIS_INTOP_OSINT.md)** - Analysis frameworks - **[INTELLIGENCE_DATA_FLOW.md](../../INTELLIGENCE_DATA_FLOW.md)** - Risk data flow mapping + + +--- + +## 🔗 Integration with agentic workflows & analysis artifacts + +This skill is consumed by the 12 agentic news workflows in `.github/workflows/news-*.md`. The authoritative contract lives in [`.github/prompts/README.md`](../../prompts/README.md); this skill supplies domain expertise on top of that contract. + +- **Analysis product** → [`ai-driven-analysis-guide.md`](../../../analysis/methodologies/ai-driven-analysis-guide.md) + every template in [`analysis/templates/`](../../../analysis/templates/). +- **Required before any article**: 9 core artifacts (14 for Tier-C) in `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/`; [`05-analysis-gate.md`](../../prompts/05-analysis-gate.md) is the single blocking gate. +- **gh-aw v0.69.3** docs: [abridged](https://github.github.com/gh-aw/llms-small.txt) · [complete](https://github.github.com/gh-aw/llms-full.txt) · [blog series](https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt). 
diff --git a/.github/skills/strategic-communication-analysis/SKILL.md b/.github/skills/strategic-communication-analysis/SKILL.md index bc98afc1c..9f896d959 100644 --- a/.github/skills/strategic-communication-analysis/SKILL.md +++ b/.github/skills/strategic-communication-analysis/SKILL.md @@ -957,3 +957,14 @@ This skill implements requirements from: - **[RISK_RULES_INTOP_OSINT.md](../../RISK_RULES_INTOP_OSINT.md)** - Risk rule specifications - **[DATA_ANALYSIS_INTOP_OSINT.md](../../DATA_ANALYSIS_INTOP_OSINT.md)** - Analysis frameworks - **[INTELLIGENCE_DATA_FLOW.md](../../INTELLIGENCE_DATA_FLOW.md)** - Data flow mapping + + +--- + +## 🔗 Integration with agentic workflows & analysis artifacts + +This skill is consumed by the 12 agentic news workflows in `.github/workflows/news-*.md`. The authoritative contract lives in [`.github/prompts/README.md`](../../prompts/README.md); this skill supplies domain expertise on top of that contract. + +- **Analysis product** → [`ai-driven-analysis-guide.md`](../../../analysis/methodologies/ai-driven-analysis-guide.md) + every template in [`analysis/templates/`](../../../analysis/templates/). +- **Required before any article**: 9 core artifacts (14 for Tier-C) in `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/`; [`05-analysis-gate.md`](../../prompts/05-analysis-gate.md) is the single blocking gate. +- **gh-aw v0.69.3** docs: [abridged](https://github.github.com/gh-aw/llms-small.txt) · [complete](https://github.github.com/gh-aw/llms-full.txt) · [blog series](https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt). 
diff --git a/.github/skills/swedish-political-system/SKILL.md b/.github/skills/swedish-political-system/SKILL.md index 9eff442a7..29d66980f 100644 --- a/.github/skills/swedish-political-system/SKILL.md +++ b/.github/skills/swedish-political-system/SKILL.md @@ -663,3 +663,14 @@ graph LR - "The Swedish Parliamentary System" - Sveriges Riksdag - "Government and Politics in the Nordic Countries" - Nordic Council of Ministers - "Parties and Party Systems in Sweden" - Göran Bergström + + +--- + +## 🔗 Integration with agentic workflows & analysis artifacts + +This skill is consumed by the 12 agentic news workflows in `.github/workflows/news-*.md`. The authoritative contract lives in [`.github/prompts/README.md`](../../prompts/README.md); this skill supplies domain expertise on top of that contract. + +- **Analysis product** → [`ai-driven-analysis-guide.md`](../../../analysis/methodologies/ai-driven-analysis-guide.md) + every template in [`analysis/templates/`](../../../analysis/templates/). +- **Required before any article**: 9 core artifacts (14 for Tier-C) in `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/`; [`05-analysis-gate.md`](../../prompts/05-analysis-gate.md) is the single blocking gate. +- **gh-aw v0.69.3** docs: [abridged](https://github.github.com/gh-aw/llms-small.txt) · [complete](https://github.github.com/gh-aw/llms-full.txt) · [blog series](https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt). 
From 49c70a250d2e4a0d56bc3ce7dc683d9723fc3007 Mon Sep 17 00:00:00 2001 From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com> Date: Wed, 22 Apr 2026 07:48:08 +0000 Subject: [PATCH 16/21] phase 4+5: prompt modules + copilot-instructions integration Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/7b3ed265-e947-43f1-80b2-83cdf38c759d Co-authored-by: pethers <1726836+pethers@users.noreply.github.com> --- .github/copilot-instructions.md | 11 ++++++++++- .github/prompts/00-base-contract.md | 13 ++++++++++++- .github/prompts/04-analysis-pipeline.md | 6 ++++++ .github/prompts/06-article-generation.md | 11 +++++++++++ .github/prompts/ext/tier-c-aggregation.md | 10 ++++++++++ 5 files changed, 49 insertions(+), 2 deletions(-) diff --git a/.github/copilot-instructions.md b/.github/copilot-instructions.md index 4842ac08f..5199fd053 100644 --- a/.github/copilot-instructions.md +++ b/.github/copilot-instructions.md @@ -112,7 +112,7 @@ Map every security-relevant control to **ISO 27001:2022 Annex A**, **NIST CSF 2. ## 🤖 GitHub Agentic Workflows -This repo uses [GitHub Agentic Workflows](https://github.github.com/gh-aw/) (gh-aw v0.68.1) for AI-powered news generation. 12 agentic workflows in `.github/workflows/` produce daily political intelligence articles with five-layer security: +This repo uses [GitHub Agentic Workflows](https://github.github.com/gh-aw/) (gh-aw v0.69.3, pinned via `gh-aw-actions/setup-cli@v0.69.3`) for AI-powered news generation. 12 agentic workflows in `.github/workflows/` produce daily political intelligence articles with five-layer security: 1. **Read-only tokens** — Agent gets only read permissions 2. **Zero secrets in agent** — Write tokens isolated in separate jobs @@ -120,6 +120,15 @@ This repo uses [GitHub Agentic Workflows](https://github.github.com/gh-aw/) (gh- 4. **Safe outputs** — Structured artifacts with hard limits and validation 5. 
**Threat detection** — AI scan blocks prompt injection and malicious code +### Authoritative contract & analysis-artifact product + +The full workflow contract is split into bounded-context prompt modules under [`.github/prompts/`](prompts/) — see [`.github/prompts/README.md`](prompts/README.md) for the module catalogue. Every agent, skill, and workflow author must treat that directory as the **single source of truth** for how news workflows run. + +- **Analysis product** (the "deep political analysis" that must precede every article): authored per [`analysis/methodologies/ai-driven-analysis-guide.md`](../analysis/methodologies/ai-driven-analysis-guide.md) using the templates in [`analysis/templates/`](../analysis/templates/). +- **Hard rule**: every news workflow MUST produce all **9 core artifacts** (single-type) or **14 artifacts** (Tier-C aggregation) in `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/` before any article sentence is written. [`.github/prompts/05-analysis-gate.md`](prompts/05-analysis-gate.md) is the single blocking gate. +- **AI-FIRST**: minimum 2 complete iterations (Pass 1 creates, Pass 2 reads back and improves) — see §"5. 🔴 AI FIRST Quality Principle" above. +- **Upstream gh-aw documentation**: [abridged (llms-small.txt)](https://github.github.com/gh-aw/llms-small.txt) · [complete (llms-full.txt)](https://github.github.com/gh-aw/llms-full.txt) · [agentic-workflows blog series](https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt) · [source repo](https://github.com/github/gh-aw) · [GitHub CLI manual](https://cli.github.com/manual/). 
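The "9 core artifacts (single-type) or 14 artifacts (Tier-C)" hard rule above can be sketched as a simple presence-and-non-emptiness check. The artifact filenames come from this patch series; the scratch directory, stub content, and script shape are illustrative assumptions, not the real `05-analysis-gate.md` implementation:

```shell
# Sketch of the artifact-presence hard rule (illustrative, not the real gate).
DIR="$(mktemp -d)/analysis/daily/2026-04-22/propositions"
mkdir -p "$DIR"

# The 9 core artifacts named in this patch series.
CORE="synthesis-summary.md swot-analysis.md risk-assessment.md \
threat-analysis.md stakeholder-perspectives.md significance-scoring.md \
classification-results.md cross-reference-map.md data-download-manifest.md"

# Simulate Pass 1 output: every core artifact exists and is non-empty.
for f in $CORE; do printf 'stub\n' > "$DIR/$f"; done

FAIL=0
for f in $CORE; do
  # test -s catches both missing and zero-byte files
  [ -s "$DIR/$f" ] || { echo "missing/empty: $f"; FAIL=1; }
done
echo "gate FAIL=$FAIL"
```

A Tier-C run would extend `CORE` with the 5 aggregation artifacts (`README.md`, `executive-brief.md`, `scenario-analysis.md`, `comparative-international.md`, `methodology-reflection.md`) before the loop.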
+ ### Agentic Workflow Schedule - **Morning**: Propositions, committee reports, motions, interpellations - **Midday**: Month-ahead, week-ahead forecasting diff --git a/.github/prompts/00-base-contract.md b/.github/prompts/00-base-contract.md index 40a7be8b5..247c77f44 100644 --- a/.github/prompts/00-base-contract.md +++ b/.github/prompts/00-base-contract.md @@ -20,10 +20,21 @@ You are a **Political Analyst, Intelligence Operative and OSINT Specialist** for - Static site: HTML/CSS, 14 languages, WCAG 2.1 AA, cyberpunk theme, no JS frameworks. - Authoritative docs: - - Methodologies → [`analysis/methodologies/`](../../analysis/methodologies/) + - Methodologies → [`analysis/methodologies/`](../../analysis/methodologies/) (entry point: [`ai-driven-analysis-guide.md`](../../analysis/methodologies/ai-driven-analysis-guide.md)) - Templates → [`analysis/templates/`](../../analysis/templates/) - MCP config → [`.github/copilot-mcp.json`](../copilot-mcp.json) - ISMS policies → [Hack23 ISMS-PUBLIC](https://github.com/Hack23/ISMS-PUBLIC) + - gh-aw runtime (v0.69.3): [abridged docs](https://github.github.com/gh-aw/llms-small.txt) · [complete docs](https://github.github.com/gh-aw/llms-full.txt) · [agentic-workflows blog](https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt) + +## Required reading before Pass 1 + +Before producing any analysis or article content, the agent MUST have read: + +1. This module (`00-base-contract.md`) and every imported sibling module for the workflow. +2. [`analysis/methodologies/ai-driven-analysis-guide.md`](../../analysis/methodologies/ai-driven-analysis-guide.md) — DIW weighting, tier depths, Pass 1 / Pass 2 rules. +3. Every template file referenced by `04-analysis-pipeline.md` (the 9 core artifacts) — and for Tier-C workflows, the additional 5 templates referenced by `ext/tier-c-aggregation.md` (executive brief, scenario analysis, comparative international, methodology reflection, per-run README). 
+ +No article sentence may be drafted until every required analysis artifact exists on disk and the gate in `05-analysis-gate.md` reports pass. ## Pipeline (fixed order) diff --git a/.github/prompts/04-analysis-pipeline.md b/.github/prompts/04-analysis-pipeline.md index e4290a6ba..6a6a6ed21 100644 --- a/.github/prompts/04-analysis-pipeline.md +++ b/.github/prompts/04-analysis-pipeline.md @@ -67,3 +67,9 @@ For each article with charts, produce accompanying JSON under `analysis/daily/$A ## Next step On completion, proceed to `05-analysis-gate.md`. Do not start article generation until the gate passes. + +## External references + +- gh-aw runtime (v0.69.3): [abridged](https://github.github.com/gh-aw/llms-small.txt) · [complete](https://github.github.com/gh-aw/llms-full.txt) · [agentic-workflows blog](https://github.github.com/gh-aw/_llms-txt/agentic-workflows.txt) · [source](https://github.com/github/gh-aw) +- Methodology entry point: [`analysis/methodologies/ai-driven-analysis-guide.md`](../../analysis/methodologies/ai-driven-analysis-guide.md) +- Artifact ↔ gate-check mapping: [`analysis/templates/README.md`](../../analysis/templates/README.md) §"Artifact → workflow → gate check mapping" diff --git a/.github/prompts/06-article-generation.md b/.github/prompts/06-article-generation.md index baf8fa8a4..a1939a793 100644 --- a/.github/prompts/06-article-generation.md +++ b/.github/prompts/06-article-generation.md @@ -7,6 +7,17 @@ Articles derive from analysis. Scripts produce HTML scaffolding; the AI writes e - Module `05-analysis-gate.md` has passed. - Every core analysis artifact has been read back in full in this run. 
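The per-document coverage requirement (one `{dok_id}-analysis.md` per manifest entry, gate check 2 in this patch series) can be sketched as a manifest scan. The manifest line format and the `dok_id` values below are hypothetical, chosen only to make the sketch self-contained:

```shell
# Sketch of per-document coverage: every dok_id in the manifest must
# have a non-empty documents/{dok_id}-analysis.md. Manifest format and
# ids here are assumed for illustration.
WORK="$(mktemp -d)"
mkdir -p "$WORK/documents"
printf -- '- dok_id: HC01UbU14\n- dok_id: HC01FiU21\n' \
  > "$WORK/data-download-manifest.md"

# Simulate Pass 1 output: one analysis file per manifest entry.
printf 'analysis\n' > "$WORK/documents/HC01UbU14-analysis.md"
printf 'analysis\n' > "$WORK/documents/HC01FiU21-analysis.md"

IDS=$(grep -oE 'dok_id: [A-Za-z0-9]+' "$WORK/data-download-manifest.md" \
  | cut -d' ' -f2)
MISSING=0
for id in $IDS; do
  [ -s "$WORK/documents/${id}-analysis.md" ] \
    || { echo "no analysis for $id"; MISSING=$((MISSING+1)); }
done
echo "uncovered documents: $MISSING"
```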
+## Pre-flight: required analysis artifacts + +Before any article section is drafted, the writer MUST have opened and read **every** artifact below from `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/`: + +| Workflow class | Required artifacts | Gate checks that enforce citation | +|----------------|---------------------|------------------------------------| +| Single-type (`news-propositions`, `news-motions`, `news-committee-reports`, `news-interpellations`) | 9 core artifacts — see `04-analysis-pipeline.md` §"9 required core artifacts" | `05-analysis-gate.md` **check 2** (article-sections-resolve-to-artifacts), **check 4** (evidence cites `dok_id` / vote counts / named actors / primary-source URLs), **check 6** (significance-scoring coverage) | +| Tier-C aggregation (`news-evening-analysis`, `news-weekly-review`, `news-monthly-review`, `news-week-ahead`, `news-month-ahead`, `news-realtime-monitor`, `news-article-generator` deep-inspection) | 14 artifacts = 9 core + 5 Tier-C (`README.md`, `executive-brief.md`, `scenario-analysis.md`, `comparative-international.md`, `methodology-reflection.md`) — see `ext/tier-c-aggregation.md` §"14 required artifacts" | All single-type checks **plus** the Tier-C gate block (scenario count ≥ 3, ≥ 2 international comparisons) | + +If any required artifact is missing or empty, do **not** proceed to step 1 below — return to `04-analysis-pipeline.md` and produce it. + ## Generation steps 1. 
**Invoke the script** (HTML scaffold only): diff --git a/.github/prompts/ext/tier-c-aggregation.md b/.github/prompts/ext/tier-c-aggregation.md index f644f916a..c43a7a6c3 100644 --- a/.github/prompts/ext/tier-c-aggregation.md +++ b/.github/prompts/ext/tier-c-aggregation.md @@ -24,6 +24,16 @@ In addition to the 9 artifacts from `04-analysis-pipeline.md`: | `comparative-international.md` | Cross-country comparison via World Bank / IMF / SCB data | | `methodology-reflection.md` | What worked, what failed, biases surfaced, uncertainty log | +## Pre-flight: required-artifact checklist (Tier-C) + +Before any Tier-C article sentence is written, the writer MUST have opened **all 14** artifacts above from `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/`. The Tier-C gate block below additionally verifies: + +- Every Tier-C artifact is non-empty. +- `scenario-analysis.md` declares ≥ 3 distinct scenarios. +- `comparative-international.md` references ≥ 2 external countries. + +Sibling cross-type synthesis (see §"Cross-type synthesis" below) is recorded in `cross-reference-map.md`; missing sibling citations fail the gate. Source templates for these artifacts live in [`analysis/templates/`](../../../analysis/templates/) — see [`analysis/templates/README.md`](../../../analysis/templates/README.md) §"Artifact → workflow → gate check mapping" for the full template-to-artifact-to-gate chain. + ## Period-scope multipliers (depth calibration) Aggregation depth scales with the period covered. 
Multiply the `comprehensive` minimum times in `04-analysis-pipeline.md` by: From 64bd1795c77f8d189fff2ae3989f7b8483917bb7 Mon Sep 17 00:00:00 2001 From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com> Date: Wed, 22 Apr 2026 08:18:12 +0000 Subject: [PATCH 17/21] review 4153022331: align gate-check refs + handle escaped JSON in lock validator Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/1b6b4c91-1e20-4fb8-abca-bdf9a46a6fcf Co-authored-by: pethers <1726836+pethers@users.noreply.github.com> --- .github/prompts/06-article-generation.md | 6 +++--- .github/skills/editorial-standards/SKILL.md | 2 +- .github/workflows/compile-agentic-workflows.yml | 16 ++++++++++------ analysis/methodologies/README.md | 8 ++++---- analysis/templates/README.md | 14 +++++++------- 5 files changed, 25 insertions(+), 21 deletions(-) diff --git a/.github/prompts/06-article-generation.md b/.github/prompts/06-article-generation.md index a1939a793..db57628af 100644 --- a/.github/prompts/06-article-generation.md +++ b/.github/prompts/06-article-generation.md @@ -13,7 +13,7 @@ Before any article section is drafted, the writer MUST have opened and read **ev | Workflow class | Required artifacts | Gate checks that enforce citation | |----------------|---------------------|------------------------------------| -| Single-type (`news-propositions`, `news-motions`, `news-committee-reports`, `news-interpellations`) | 9 core artifacts — see `04-analysis-pipeline.md` §"9 required core artifacts" | `05-analysis-gate.md` **check 2** (article-sections-resolve-to-artifacts), **check 4** (evidence cites `dok_id` / vote counts / named actors / primary-source URLs), **check 6** (significance-scoring coverage) | +| Single-type (`news-propositions`, `news-motions`, `news-committee-reports`, `news-interpellations`) | 9 core artifacts — see `04-analysis-pipeline.md` §"9 required core artifacts" | `05-analysis-gate.md` **check 1** (artifact presence), **check 2** (per-document 
coverage — one `{dok_id}-analysis.md` per manifest entry), **check 4** (evidence cites `dok_id` / primary-source URL host, covering SWOT quadrants **and** significance-scoring ranked items / tables / Mermaid nodes) | | Tier-C aggregation (`news-evening-analysis`, `news-weekly-review`, `news-monthly-review`, `news-week-ahead`, `news-month-ahead`, `news-realtime-monitor`, `news-article-generator` deep-inspection) | 14 artifacts = 9 core + 5 Tier-C (`README.md`, `executive-brief.md`, `scenario-analysis.md`, `comparative-international.md`, `methodology-reflection.md`) — see `ext/tier-c-aggregation.md` §"14 required artifacts" | All single-type checks **plus** the Tier-C gate block (scenario count ≥ 3, ≥ 2 international comparisons) | If any required artifact is missing or empty, do **not** proceed to step 1 below — return to `04-analysis-pipeline.md` and produce it. @@ -34,7 +34,7 @@ If any required artifact is missing or empty, do **not** proceed to step 1 below | Article section | Sourced from | |-----------------|--------------| | Analytical lede | `synthesis-summary.md` (lead story + DIW ranking) | - | Per-document "Why it matters" | `documents/<dok_id>.md` | + | Per-document "Why it matters" | `documents/<dok_id>-analysis.md` | | Winners & losers | `stakeholder-perspectives.md` | | Key takeaways | `significance-scoring.md` top items | | Strategic context | `risk-assessment.md` + `threat-analysis.md` | @@ -42,7 +42,7 @@ If any required artifact is missing or empty, do **not** proceed to step 1 below | SEO title / meta description | `synthesis-summary.md` §"AI-Recommended Article Metadata" | | Analysis references block | Hand-written footer linking to the 9 analysis files on GitHub (see "Mandatory sections" below) | -3. **Replace every `AI_MUST_REPLACE` marker** with evidence-cited analysis. The gate in step 7 enforces zero markers. +3. **Replace every `AI_MUST_REPLACE` marker** with evidence-cited analysis. 
`05-analysis-gate.md` **check 3** (no stubs) enforces zero markers before article generation proceeds. 4. **Article Pass 2** — AI-FIRST principle applies (see `00-base-contract.md` rule 5). Read every generated article HTML back in full. Improve: tighten lede, strengthen quotes, expand stakeholder coverage, replace boilerplate sentences, verify every `dok_id` reference resolves. Minimum 8 minutes. diff --git a/.github/skills/editorial-standards/SKILL.md b/.github/skills/editorial-standards/SKILL.md index 39d97d277..14cd1fe63 100644 --- a/.github/skills/editorial-standards/SKILL.md +++ b/.github/skills/editorial-standards/SKILL.md @@ -181,7 +181,7 @@ Before any draft is shared for Gate 2 review, verify: 3. **Coverage completeness** — Every document with DIW-weighted score ≥ 7.0 receives a dedicated H3 section in article body. No silent omissions. 4. **Rhetorical tension** — When top-ranked findings carry opposing political valences (e.g., norm entrepreneurship abroad + norm compression at home), an explicit "Rhetorical Cross-Cluster Tension" or equivalent subsection addresses the contradiction. -Failure protocol: if any of 1–4 is not satisfied, the draft is returned to the writing agent with the specific missing element identified. **Doctrine**: `analysis/methodologies/ai-driven-analysis-guide.md` §"Rule 5: Democratic-Impact Weighting (DIW) — Lead-Story & Coverage Discipline". **Enforcement**: `.github/prompts/05-analysis-gate.md` checks 2 + 4. +Failure protocol: if any of 1–4 is not satisfied, the draft is returned to the writing agent with the specific missing element identified. **Doctrine**: `analysis/methodologies/ai-driven-analysis-guide.md` §"Rule 5: Democratic-Impact Weighting (DIW) — Lead-Story & Coverage Discipline". **Enforcement**: this is an **editorial / human-review gate**, not a fully automated analysis-gate check. 
The analysis gate (`.github/prompts/05-analysis-gate.md`) provides partial automation — check 1 (artifact presence of `significance-scoring.md` which drives lead selection) and check 2 (per-document coverage ensuring every manifest `dok_id` has an analysis file a DIW-weighted finding can reference) — but lead-story choice, lede discipline, and article coverage completeness are not machine-enforced here and must be verified by a reviewer before publication. ## Error Correction Protocol diff --git a/.github/workflows/compile-agentic-workflows.yml b/.github/workflows/compile-agentic-workflows.yml index 0606827bc..5b380a6c1 100644 --- a/.github/workflows/compile-agentic-workflows.yml +++ b/.github/workflows/compile-agentic-workflows.yml @@ -101,13 +101,17 @@ jobs: echo "🔍 Enforcing create_pull_request.max: 1 on compiled .lock.yml (safe-outputs config)…" for lf in .github/workflows/news-*.lock.yml; do [ -f "$lf" ] || continue - # The compiled safe-outputs config is embedded as JSON in GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG. - # Extract every "create_pull_request":{...,"max":N,...} occurrence and confirm N==1. + # safe-outputs config appears in two forms in the compiled file: + # (a) a plain-JSON block (e.g. in prompt heredocs): "create_pull_request":{...,"max":N,...} + # (b) the runtime env var GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG, a YAML-quoted string + # that encodes JSON with backslash-escaped quotes: \"create_pull_request\":{...,\"max\":N,...} + # Normalise by stripping backslashes so a single grep pattern matches both forms. 
BAD=0 - OCCURS=$(grep -oE '"create_pull_request"[[:space:]]*:[[:space:]]*\{[^}]*"max"[[:space:]]*:[[:space:]]*[0-9]+' "$lf" | grep -oE '"max"[[:space:]]*:[[:space:]]*[0-9]+' | grep -oE '[0-9]+$' || true) - COUNT_ANY=$(grep -cE '"create_pull_request"[[:space:]]*:[[:space:]]*\{' "$lf" || true) + NORM=$(tr -d '\\' < "$lf") + OCCURS=$(printf '%s' "$NORM" | grep -oE '"create_pull_request"[[:space:]]*:[[:space:]]*\{[^}]*"max"[[:space:]]*:[[:space:]]*[0-9]+' | grep -oE '"max"[[:space:]]*:[[:space:]]*[0-9]+' | grep -oE '[0-9]+$' || true) + COUNT_ANY=$(printf '%s' "$NORM" | grep -cE '"create_pull_request"[[:space:]]*:[[:space:]]*\{' || true) if [ -z "$OCCURS" ] || [ "$COUNT_ANY" -eq 0 ]; then - echo " ❌ $lf has no create_pull_request config with max" + echo " ❌ $lf has no create_pull_request config with max (checked both plain and backslash-escaped JSON)" FAIL=1 continue fi @@ -118,7 +122,7 @@ jobs: echo " ❌ $lf has create_pull_request.max != 1 (found: $(echo "$OCCURS" | tr '\n' ' '))" FAIL=1 else - echo " ✅ $lf (create_pull_request.max: 1)" + echo " ✅ $lf (create_pull_request.max: 1, $COUNT_ANY occurrence(s))" fi done echo "" diff --git a/analysis/methodologies/README.md b/analysis/methodologies/README.md index 76a980a65..162b81ca7 100644 --- a/analysis/methodologies/README.md +++ b/analysis/methodologies/README.md @@ -118,15 +118,15 @@ The 12 agentic news workflows in `.github/workflows/news-*.md` are the **primary | Methodology | Read in Pass 1 (mandatory) | Read in Pass 2 (improvement) | Enforced by | |-------------|---------------------------|------------------------------|-------------| | [`ai-driven-analysis-guide.md`](ai-driven-analysis-guide.md) | ✅ role, DIW weighting, pass structure | — | `05-analysis-gate.md` check 1 (artifact presence) | -| [`per-document-methodology.md`](per-document-methodology.md) | ✅ one `{dok_id}-analysis.md` per document | — | `05-analysis-gate.md` check 3 (per-doc coverage) | +| 
[`per-document-methodology.md`](per-document-methodology.md) | ✅ one `{dok_id}-analysis.md` per document | — | `05-analysis-gate.md` check 2 (per-doc coverage) | | [`political-classification-guide.md`](political-classification-guide.md) | ✅ produces `classification-results.md` | — | `05-analysis-gate.md` check 1 | | [`political-swot-framework.md`](political-swot-framework.md) | ✅ produces `swot-analysis.md` + TOWS matrix | ✅ tighten evidence tables | `05-analysis-gate.md` check 4 (evidence) | -| [`political-risk-methodology.md`](political-risk-methodology.md) | ✅ produces `risk-assessment.md` | ✅ sensitivity & posterior probabilities | `05-analysis-gate.md` checks 1 + 6 | +| [`political-risk-methodology.md`](political-risk-methodology.md) | ✅ produces `risk-assessment.md` | ✅ sensitivity & posterior probabilities | `05-analysis-gate.md` check 1 | | [`political-threat-framework.md`](political-threat-framework.md) | ✅ produces `threat-analysis.md` | ✅ kill-chain depth | `05-analysis-gate.md` check 1 | | [`political-style-guide.md`](political-style-guide.md) | — | ✅ tone, neutrality, evidence citations | Article Pass-2 review | | [`strategic-extensions-methodology.md`](strategic-extensions-methodology.md) | ✅ for Tier-C only (`executive-brief.md`, `scenario-analysis.md`, `comparative-international.md`) | ✅ scenario probabilities | `ext/tier-c-aggregation.md` Tier-C gate | -| [`structural-metadata-methodology.md`](structural-metadata-methodology.md) | ✅ cross-reference continuity contracts | — | `05-analysis-gate.md` check 5 (cross-refs) | -| [`synthesis-methodology.md`](synthesis-methodology.md) | ✅ produces `synthesis-summary.md` with DIW-weighted ranking | ✅ lead-story justification | `05-analysis-gate.md` checks 1 + 4 | +| [`structural-metadata-methodology.md`](structural-metadata-methodology.md) | ✅ cross-reference continuity contracts | — | `05-analysis-gate.md` check 1 (artifact presence) | +| [`synthesis-methodology.md`](synthesis-methodology.md) | ✅ produces 
`synthesis-summary.md` with DIW-weighted ranking | ✅ lead-story justification | `05-analysis-gate.md` checks 1 + 5 (Mermaid) | | [`electoral-domain-methodology.md`](electoral-domain-methodology.md) | ✅ Election 2026 lens paragraph | — | Article-generation mandatory section | **Upstream gh-aw documentation** (link-out only — these methodologies own the political-analysis content; gh-aw owns the workflow runtime): diff --git a/analysis/templates/README.md b/analysis/templates/README.md index 7aaa9fa92..1511ac6d3 100644 --- a/analysis/templates/README.md +++ b/analysis/templates/README.md @@ -180,16 +180,16 @@ The 12 agentic news workflows in `.github/workflows/news-*.md` render these temp | Template | Produced artifact | Enforced by `05-analysis-gate.md` | |----------|-------------------|-----------------------------------| -| [`synthesis-summary.md`](synthesis-summary.md) | `synthesis-summary.md` (lead story + DIW ranking + ≥ 1 Mermaid diagram) | Check 1 (presence), Check 4 (evidence) | -| [`swot-analysis.md`](swot-analysis.md) | `swot-analysis.md` with TOWS matrix | Check 1, Check 4 | -| [`risk-assessment.md`](risk-assessment.md) | `risk-assessment.md` (top 5 risks, likelihood × impact) | Check 1, Check 6 (significance scoring) | +| [`synthesis-summary.md`](synthesis-summary.md) | `synthesis-summary.md` (lead story + DIW ranking + ≥ 1 Mermaid diagram) | Check 1 (presence), Check 5 (Mermaid) | +| [`swot-analysis.md`](swot-analysis.md) | `swot-analysis.md` with TOWS matrix | Check 1, Check 4 (evidence) | +| [`risk-assessment.md`](risk-assessment.md) | `risk-assessment.md` (top 5 risks, likelihood × impact) | Check 1 | | [`threat-analysis.md`](threat-analysis.md) | `threat-analysis.md` (attack tree + MITRE-style TTP) | Check 1 | -| [`stakeholder-impact.md`](stakeholder-impact.md) | `stakeholder-perspectives.md` (named actors + influence network) | Check 1, Check 4 | -| [`significance-scoring.md`](significance-scoring.md) | `significance-scoring.md` (DIW scores + 
sensitivity) | Check 6 | +| [`stakeholder-impact.md`](stakeholder-impact.md) | `stakeholder-perspectives.md` (named actors + influence network) | Check 1 | +| [`significance-scoring.md`](significance-scoring.md) | `significance-scoring.md` (DIW scores + sensitivity) | Check 1, Check 4 (evidence per ranked item / table row / Mermaid node) | | [`political-classification.md`](political-classification.md) | `classification-results.md` (priority tiers, retention) | Check 1 | -| [`cross-reference-map.md`](cross-reference-map.md) | `cross-reference-map.md` (continuity contracts) | Check 5 (cross-refs) | +| [`cross-reference-map.md`](cross-reference-map.md) | `cross-reference-map.md` (continuity contracts) | Check 1 | | [`data-download-manifest.md`](data-download-manifest.md) | `data-download-manifest.md` (pre-computed by download step) | Check 1 | -| [`per-file-political-intelligence.md`](per-file-political-intelligence.md) | `documents/{dok_id}-analysis.md` (one per document) | Check 3 (per-doc coverage) | +| [`per-file-political-intelligence.md`](per-file-political-intelligence.md) | `documents/{dok_id}-analysis.md` (one per document) | Check 2 (per-doc coverage) | **5 additional Tier-C artifacts** (aggregation / reference-grade workflows — see [`ext/tier-c-aggregation.md`](../../.github/prompts/ext/tier-c-aggregation.md)): From 464ebc1c49f8cc8a10355a333cc0f0935610be68 Mon Sep 17 00:00:00 2001 From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com> Date: Wed, 22 Apr 2026 08:41:18 +0000 Subject: [PATCH 18/21] Apply review 4153161551: exclude manifest from check 6; structural Tier-C comparator check Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/b4049d5c-4ffb-4c51-9d68-2258add51abf Co-authored-by: pethers <1726836+pethers@users.noreply.github.com> --- .github/prompts/05-analysis-gate.md | 8 +++++++- .github/prompts/ext/tier-c-aggregation.md | 24 +++++++++++++++++++---- 2 files changed, 27 insertions(+), 5 deletions(-) diff --git 
a/.github/prompts/05-analysis-gate.md b/.github/prompts/05-analysis-gate.md index fed94b058..6edb75672 100644 --- a/.github/prompts/05-analysis-gate.md +++ b/.github/prompts/05-analysis-gate.md @@ -130,7 +130,13 @@ for f in "${SYNTHESIS[@]}"; do done # Check 6 — Pass-2 evidence (mtime ≥ birth + 180s, OR differing pass1 snapshot on disk) -for f in "${REQ[@]}"; do +# `data-download-manifest.md` is produced by the download step and may legitimately +# be unchanged during Pass 2, so it's excluded here (its Pass-2 correctness is +# covered by check 2's per-document coverage against its dok_id list). +PASS2_REQ=(synthesis-summary.md swot-analysis.md risk-assessment.md threat-analysis.md \ + stakeholder-perspectives.md significance-scoring.md classification-results.md \ + cross-reference-map.md) +for f in "${PASS2_REQ[@]}"; do p="$ANALYSIS_DIR/$f"; [ -s "$p" ] || continue ok=0 B=$(stat -c %W "$p" 2>/dev/null || echo 0) diff --git a/.github/prompts/ext/tier-c-aggregation.md b/.github/prompts/ext/tier-c-aggregation.md index c43a7a6c3..d03c39e3b 100644 --- a/.github/prompts/ext/tier-c-aggregation.md +++ b/.github/prompts/ext/tier-c-aggregation.md @@ -88,10 +88,26 @@ done # ≥ 3 scenarios with probability + leading indicator awk '/^##? 
.*Scenario/{c++} END{exit (c<3)}' "$ANALYSIS_DIR/scenario-analysis.md" \ || { echo "❌ scenario-analysis.md: fewer than 3 scenarios"; FAIL=1; } -# ≥ 2 external country references in comparative-international.md -grep -cE '\b(Finland|Norway|Denmark|Germany|France|Netherlands|UK|USA|Estonia)\b' \ - "$ANALYSIS_DIR/comparative-international.md" | awk '{exit ($1<2)}' \ - || { echo "❌ comparative-international.md: fewer than 2 countries"; FAIL=1; } +# comparative-international.md must define a comparator set or include ≥ 2 comparator rows +# (structural check — avoids hardcoding a country allowlist that rejects valid comparators +# like "United States", "United Kingdom", "EU", or other jurisdictions the template permits) +awk ' + BEGIN { comparator_set=0; comparator_rows=0 } + /^[[:space:]]*\*{0,2}Comparator set\*{0,2}[[:space:]]*:/ { + value = $0 + sub(/^[^:]*:[[:space:]]*/, "", value) + if (value !~ /^[[:space:]]*$/ && value !~ /^[[:space:]]*[-–—]+[[:space:]]*$/) { + comparator_set = 1 + } + } + /^\|/ { + if ($0 !~ /^\|[[:space:]:-]+(\|[[:space:]:-]+)+\|?[[:space:]]*$/ && $0 !~ /^\|[[:space:]]*(Jurisdiction|Comparator|Country)[[:space:]]*\|/) { + comparator_rows++ + } + } + END { exit !(comparator_set || comparator_rows >= 2) } +' "$ANALYSIS_DIR/comparative-international.md" \ + || { echo "❌ comparative-international.md: missing comparator set or fewer than 2 comparator rows"; FAIL=1; } [ "$FAIL" -eq 0 ] || exit 1 ``` From 47971aba3d34263428fd5eb59504105b42a271c5 Mon Sep 17 00:00:00 2001 From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com> Date: Wed, 22 Apr 2026 09:15:01 +0000 Subject: [PATCH 19/21] docs: add README catalogs for agents, skills, and workflows Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/c5913acc-0e1b-46d6-a7e8-6323e54ca7aa Co-authored-by: pethers <1726836+pethers@users.noreply.github.com> --- .github/agents/README.md | 158 +++++++++++++++++++++++++ .github/skills/README.md | 228 
++++++++++++++++++++++ .github/workflows/README.md | 178 ++++++++++++++++++++++++++++
 3 files changed, 564 insertions(+)
 create mode 100644 .github/agents/README.md
 create mode 100644 .github/skills/README.md
 create mode 100644 .github/workflows/README.md

diff --git a/.github/agents/README.md b/.github/agents/README.md
new file mode 100644
index 000000000..9681beb72
--- /dev/null
+++ b/.github/agents/README.md
@@ -0,0 +1,158 @@
+# 🤖 `.github/agents/` — Copilot Agent Catalog
+
+This directory contains **24 agent files** for GitHub Copilot and GitHub Agentic Workflows. It is the single source of truth for every agent persona used across the repository.
+
+> **Canonical long-form catalog:** see [`AGENTS.md`](../../AGENTS.md) at the repository root for full capability matrices, skill-mapping tables, and invocation examples.
+> **Workflow prompt imports** (not agent files): see [`.github/prompts/README.md`](../prompts/README.md).
+
+---
+
+## 📋 Three kinds of file live here
+
+gh-aw distinguishes two ways to package a reusable persona; a shared developer-instructions file also lives here alongside them. All three kinds are used for different things.
+ +| Kind | Filename convention | Frontmatter shape | Used by | +|------|--------------------|-------------------|---------| +| **Persona agent** (Copilot Agent File) | `<name>.md` | `name`, `description`, `tools: ["*"]` | `assign_copilot_to_issue`, `create_pull_request_with_copilot`, or a single workflow that needs exactly one persona | +| **Workflow-specialist agent** | `<name>.agent.md` | `name`, `description`, optional `disable-model-invocation: true` | Invoked by name from within agentic workflows or manual flows (not auto-selected by Copilot) | +| **Shared developer instructions** | `developer.instructions.md` | `description`, `applyTo: "**/*"` | Applied repository-wide to every Copilot session | + +Per gh-aw's [packaging & imports guide](https://github.github.com/gh-aw/guides/packaging-imports/), a workflow may import **at most one** Copilot Agent File, but can mix it with any number of plain-prompt imports from `.github/prompts/`. + +--- + +## 🧑‍💼 14 Persona Agents (used via `assign_copilot_to_issue`) + +These are the production personas that human maintainers assign to GitHub issues and PRs. They match the catalog in [`AGENTS.md`](../../AGENTS.md) and the *"🎯 Agent Quick Reference"* in [`copilot-instructions.md`](../copilot-instructions.md). 
+ +| # | File | Persona | Primary domain | +|---|------|---------|----------------| +| 1 | [`security-architect.md`](security-architect.md) | `security-architect` | Security architecture, ISMS compliance (ISO 27001 / NIST CSF / CIS Controls v8.1), STRIDE threat modeling | +| 2 | [`documentation-architect.md`](documentation-architect.md) | `documentation-architect` | C4 architecture models, Mermaid diagrams, Hack23 documentation standards | +| 3 | [`quality-engineer.md`](quality-engineer.md) | `quality-engineer` | HTML/CSS validation, accessibility (WCAG 2.1 AA), link checking, CI quality gates | +| 4 | [`frontend-specialist.md`](frontend-specialist.md) | `frontend-specialist` | Static HTML/CSS, responsive design, 14-language localization, modern frontend | +| 5 | [`isms-compliance-manager.md`](isms-compliance-manager.md) | `isms-compliance-manager` | Hack23 ISMS enforcement, audit preparation, gap analysis | +| 6 | [`deployment-specialist.md`](deployment-specialist.md) | `deployment-specialist` | GitHub Pages, CI/CD automation, GitHub Actions security | +| 7 | [`devops-engineer.md`](devops-engineer.md) | `devops-engineer` | CI/CD pipelines, infrastructure automation, monitoring, performance optimization | +| 8 | [`intelligence-operative.md`](intelligence-operative.md) | `intelligence-operative` | Political science, OSINT, Swedish politics, behavioral analysis, risk indicators | +| 9 | [`news-journalist.md`](news-journalist.md) | `news-journalist` | Political journalism, editorial standards, OSINT/INTOP data-driven reporting | +| 10 | [`content-generator.md`](content-generator.md) | `content-generator` | Automated content generation, intelligence reports, 14-language templated rendering | +| 11 | [`data-pipeline-specialist.md`](data-pipeline-specialist.md) | `data-pipeline-specialist` | CIA data consumption, ETL workflows, caching strategies, validation | +| 12 | [`data-visualization-specialist.md`](data-visualization-specialist.md) | `data-visualization-specialist` 
| Chart.js / D3.js, interactive dashboards, political metrics visualization | +| 13 | [`task-agent.md`](task-agent.md) | `task-agent` | Product analysis, issue creation, Playwright testing, agent coordination | +| 14 | [`ui-enhancement-specialist.md`](ui-enhancement-specialist.md) | `ui-enhancement-specialist` | Design system, cyberpunk theme, CSS-only data visualizations, accessibility | + +### Persona agent frontmatter shape + +Persona agents in this repo use the minimal Copilot Agent File contract: + +```yaml +--- +name: <agent-name> +description: <one-line summary of expertise> +tools: ["*"] +--- +``` + +MCP servers are **not** declared per-agent; they are configured repository-wide in [`.github/copilot-mcp.json`](../copilot-mcp.json) and made available to every agent automatically. + +### Invocation patterns + +```javascript +// 1. Assign a persona to an existing issue +assign_copilot_to_issue({ + owner: "Hack23", + repo: "riksdagsmonitor", + issue_number: 123, + custom_agent: "intelligence-operative", + custom_instructions: "Build a voting-discipline dashboard for 2026 Q1" +}) + +// 2. Delegate a task that should produce a PR +create_pull_request_with_copilot({ + owner: "Hack23", + repo: "riksdagsmonitor", + title: "Add coalition-cohesion panel", + problem_statement: "Implement a CSS-only stacked-bar panel ...", + base_ref: "main" +}) +``` + +Full invocation examples, skill-mapping tables, and per-persona capabilities live in [`AGENTS.md`](../../AGENTS.md). + +--- + +## 🛠️ 9 Workflow-Specialist Agents (`.agent.md`) + +These agents are not selected automatically by Copilot (most carry `disable-model-invocation: true`). They encode reusable procedures for specific CI / authoring flows. They are referenced by name from agentic workflows, manual tasks, or other agents. 
+ +| # | File | Agent | Purpose | +|---|------|-------|---------| +| 1 | [`agentic-workflows.agent.md`](agentic-workflows.agent.md) | `agentic-workflows` | Create, debug, and upgrade gh-aw workflows with intelligent prompt routing | +| 2 | [`ci-cleaner.agent.md`](ci-cleaner.agent.md) | `ci-cleaner` | Format sources, run linters, fix issues, run tests, recompile workflow lock files | +| 3 | [`contribution-checker.agent.md`](contribution-checker.agent.md) | `contribution-checker` | Evaluate a PR against the target repository's `CONTRIBUTING.md` | +| 4 | [`create-safe-output-type.agent.md`](create-safe-output-type.agent.md) | *(procedure)* | Guide for adding a new safe-output type to gh-aw | +| 5 | [`custom-engine-implementation.agent.md`](custom-engine-implementation.agent.md) | *(procedure)* | Implement a custom agentic engine inside gh-aw | +| 6 | [`grumpy-reviewer.agent.md`](grumpy-reviewer.agent.md) | `grumpy-reviewer` | Critical code reviewer with 40+ years of (sarcastic) experience | +| 7 | [`interactive-agent-designer.agent.md`](interactive-agent-designer.agent.md) | `interactive-agent-designer` | Wizard that guides users through creating agents, prompts, and workflow descriptions | +| 8 | [`technical-doc-writer.agent.md`](technical-doc-writer.agent.md) | `technical-doc-writer` | AI technical documentation writer using GitHub Docs voice | +| 9 | [`w3c-specification-writer.agent.md`](w3c-specification-writer.agent.md) | `w3c-specification-writer` | Technical specification writer following W3C conventions | + +--- + +## 📏 Shared developer instructions + +| File | Applies to | Purpose | +|------|-----------|---------| +| [`developer.instructions.md`](developer.instructions.md) | `**/*` (every file in every session) | Cross-cutting Developer Instructions for GitHub Agentic Workflows — style, shell-safety, commit hygiene | + +--- + +## 🔐 Agent-file size and governance constraints + +| Constraint | Value | Enforced by | +|------------|-------|-------------| +| 
Maximum file size | 16 KB | GitHub Copilot agent runtime |
+| Frontmatter `tools:` | `["*"]` for persona agents | Repository convention (MCP servers configured globally) |
+| ISMS governance | CEO approval for changes under `.github/agents/` | [Change_Management.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Change_Management.md) — Normal change |
+| AI attribution | AI-assisted edits require human review + DCO sign-off | [AI_Policy.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/AI_Policy.md) |
+| Prohibited reading | Do **not** read `.github/agents/*` from other agents (see `copilot-instructions.md` `<disallowed_actions>`) | Repository policy |
+
+## 🎯 Choosing the right agent
+
+```mermaid
+flowchart TD
+  A[I need to do …] --> B{Is it a single<br/>well-scoped issue?}
+  B -- yes --> C{Domain?}
+  B -- no, it's a recurring<br/>workflow --> W[.agent.md or gh-aw workflow]
+  C --> D[Security → security-architect]
+  C --> E[Docs/diagrams → documentation-architect]
+  C --> F[HTML/CSS/a11y → quality-engineer or ui-enhancement-specialist]
+  C --> G[Political analysis → intelligence-operative]
+  C --> H[News article → news-journalist or content-generator]
+  C --> I[CI/CD → deployment-specialist or devops-engineer]
+  C --> J[Data pipeline → data-pipeline-specialist]
+  C --> K[Visualisation → data-visualization-specialist]
+  C --> L[Compliance audit → isms-compliance-manager]
+  C --> M[Product issues → task-agent]
+  style A fill:#0a0e27,stroke:#00d9ff,color:#e0e0e0
+  style B fill:#1a1e3d,stroke:#ff006e,color:#e0e0e0
+```
+
+See [`AGENTS.md`](../../AGENTS.md) §"Best Practices" for full decision guidance.
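The 16 KB runtime limit on agent files can be checked locally. A minimal sketch, assuming a flat `*.md` layout under the agents directory (the helper name is illustrative, not an existing repo script):

```shell
# Sketch: flag agent files that exceed the 16 KB Copilot runtime limit.
# check_agent_sizes <dir> prints each offender and returns non-zero if any.
check_agent_sizes() {
  dir="$1"
  limit=16384
  fail=0
  for f in "$dir"/*.md; do
    [ -e "$f" ] || continue            # glob matched nothing
    size=$(wc -c < "$f")
    if [ "$size" -gt "$limit" ]; then
      echo "too large: $f ($size bytes)"
      fail=1
    fi
  done
  return "$fail"
}
```

Run as `check_agent_sizes .github/agents` from the repository root; a non-zero exit means at least one file breaches the limit.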
+ +--- + +## 📚 Related documentation + +- [`AGENTS.md`](../../AGENTS.md) — canonical persona catalog with skill mapping and invocation examples +- [`SKILLS.md`](../../SKILLS.md) — 91 agent skills by category +- [`.github/skills/README.md`](../skills/README.md) — per-skill index +- [`.github/prompts/README.md`](../prompts/README.md) — prompt modules imported by agentic workflows +- [`.github/workflows/README.md`](../workflows/README.md) — workflow catalog +- [`WORKFLOWS.md`](../../WORKFLOWS.md) — full CI/CD and agentic workflow reference +- [`copilot-instructions.md`](../copilot-instructions.md) — repository-wide Copilot rules + +--- + +**📋 Document owner:** CEO | **🏷️ Classification:** Public | **🔄 Review cycle:** Quarterly diff --git a/.github/skills/README.md b/.github/skills/README.md new file mode 100644 index 000000000..b9a7c00bf --- /dev/null +++ b/.github/skills/README.md @@ -0,0 +1,228 @@ +# 🎯 `.github/skills/` — Copilot Skill Library + +This directory contains **91 skill packages** that teach GitHub Copilot *how* to approach specific domains when working in Riksdagsmonitor. Skills are **strategic**, reusable, rule-based instruction sets — not step-by-step runbooks. They load automatically when a Copilot agent determines a task touches the relevant domain. + +> **Canonical long-form skill catalog with detailed mappings:** see [`SKILLS.md`](../../SKILLS.md) at the repository root. +> **Agent → skill recommendations:** see [`AGENTS.md`](../../AGENTS.md) §"Skills Mapping by Agent" and [`.github/agents/README.md`](../agents/README.md). + +--- + +## 🧠 How skills work + +Each skill is a directory containing a `SKILL.md` file (and optional supporting assets). When Copilot begins a task, skills are matched against the request. If relevant, they are loaded into the agent's context alongside the persona and repository-wide `copilot-instructions.md`. 
+ +| Property | Value | +|----------|-------| +| **Scope** | Repository-local (`.github/skills/`) + implicit project-level skills listed in `copilot-instructions.md` `<available_skills>` | +| **Invocation** | Automatic (when relevant) or explicit via `skill` tool | +| **Governance** | CEO approval for material changes per [Change_Management.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Change_Management.md) | +| **Attribution** | AI-assisted edits require human review + DCO sign-off per [AI_Policy.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/AI_Policy.md) | + +--- + +## 📚 Skill catalog (91 skills) + +Skills are grouped into **12 functional categories** that mirror the Riksdagsmonitor capability areas. Each row links to the skill's `SKILL.md`. + +### 🛡️ Core Infrastructure & Governance (9) + +| Skill | Purpose | +|-------|---------| +| [hack23-isms-compliance](hack23-isms-compliance/) | Strategic ISMS compliance enforcement across repositories | +| [security-by-design](security-by-design/) | Security architecture principles from requirement to delivery | +| [static-site-security](static-site-security/) | Hardening static HTML/CSS sites hosted on GitHub Pages | +| [ci-cd-security](ci-cd-security/) | Security for GitHub Actions, supply chain, and pipelines | +| [documentation-standards](documentation-standards/) | Hack23 technical writing standards and Mermaid conventions | +| [documentation-portfolio](documentation-portfolio/) | Required C4 / data / SWOT / threat-model docs for every repo | +| [hack23-future-architecture-standards](hack23-future-architecture-standards/) | Rules for the future-state document portfolio | +| [html-accessibility](html-accessibility/) | WCAG 2.1 AA compliance for static sites | +| [multi-language-localization](multi-language-localization/) | 14-language support with RTL (Arabic, Hebrew) and hreflang | + +### 🕵️ Political Intelligence (11) + +| Skill | Purpose | +|-------|---------| +| 
[political-science-analysis](political-science-analysis/) | Comparative politics, policy analysis, frameworks | +| [osint-methodologies](osint-methodologies/) | Open-source intelligence collection and verification | +| [intelligence-analysis-techniques](intelligence-analysis-techniques/) | ACH, SWOT, Devil's Advocate, Red Team, key assumptions check | +| [swedish-political-system](swedish-political-system/) | Riksdag structure, 8 parties, electoral system, coalition dynamics | +| [electoral-analysis](electoral-analysis/) | Election forecasting, coalition prediction, voter behaviour | +| [behavioral-analysis](behavioral-analysis/) | Political psychology, cognitive biases, leadership analysis | +| [strategic-communication-analysis](strategic-communication-analysis/) | Narrative analysis, media monitoring, messaging detection | +| [legislative-monitoring](legislative-monitoring/) | Voting patterns, committee effectiveness, bill tracking | +| [risk-assessment-frameworks](risk-assessment-frameworks/) | Political risk and corruption-indicator taxonomies | +| [data-science-for-intelligence](data-science-for-intelligence/) | Statistics, ML, NLP, time series, network analysis | +| [gdpr-compliance](gdpr-compliance/) | GDPR for political-data processing and public-official data | + +### 🔐 ISMS & Security (14) + +| Skill | Purpose | +|-------|---------| +| [iso-27001-controls](iso-27001-controls/) | ISO 27001:2022 Annex A controls for static sites | +| [nist-csf-mapping](nist-csf-mapping/) | NIST CSF 2.0 function / category / subcategory mapping | +| [cis-controls](cis-controls/) | CIS Controls v8.1 implementation for static sites | +| [threat-modeling](threat-modeling/) | STRIDE, MITRE ATT&CK, attack-tree methodology | +| [secure-code-review](secure-code-review/) | HTML / CSS / JS / TS security-focused review | +| [security-documentation](security-documentation/) | ISMS documentation standards | +| [incident-response](incident-response/) | NIST + ISO 27001 incident-response 
lifecycle | +| [input-validation](input-validation/) | XSS/injection prevention, safe output encoding | +| [vulnerability-management](vulnerability-management/) | SLA-driven remediation (Critical 24h / High 7d / Med 30d / Low 90d) | +| [data-protection](data-protection/) | Classification, privacy-by-design, encryption | +| [ai-governance](ai-governance/) | EU AI Act, OWASP LLM security, responsible AI | +| [information-security-strategy](information-security-strategy/) | Program-level security strategy and risk management | +| [compliance-checklist](compliance-checklist/) | Unified mapping across ISO/NIST/CIS/GDPR/NIS2/EU CRA/SOC 2/PCI DSS/HIPAA | +| [secrets-management](secrets-management/) | GitHub secrets, PATs, OIDC, token rotation | + +### ⚙️ Development & Operations (14) + +| Skill | Purpose | +|-------|---------| +| [c4-architecture-documentation](c4-architecture-documentation/) | C4 (Context/Container/Component) + Mermaid diagrams | +| [github-actions-workflows](github-actions-workflows/) | Workflow patterns, reusable workflows, caching | +| [code-quality-checks](code-quality-checks/) | HTMLHint, linkinator, ESLint, JSON validation | +| [code-review-practices](code-review-practices/) | Review quality gates and constructive feedback | +| [testing-strategy](testing-strategy/) | Unit / integration / E2E strategy (Vitest + Cypress) | +| [performance-optimization](performance-optimization/) | Core Web Vitals, bundle size, caching | +| [api-integration](api-integration/) | REST / GraphQL clients, rate limiting, auth | +| [data-pipeline-engineering](data-pipeline-engineering/) | ETL, scheduling, versioned caching, freshness monitoring | +| [change-management](change-management/) | ITIL-aligned change flow (Normal/Standard/Emergency) | +| [contribution-guidelines](contribution-guidelines/) | PR workflows, DCO/CLA, community engagement | +| [open-source-governance](open-source-governance/) | OSS licensing, SBOM, supply-chain posture | +| 
[secure-development-policy](secure-development-policy/) | Hack23 Secure Development Policy enforcement | +| [secure-development-lifecycle](secure-development-lifecycle/) | SDL phases from requirement to retirement | +| [product-management-patterns](product-management-patterns/) | Roadmapping, issue hygiene, prioritization | + +### 🧪 Testing & Quality Assurance (2) + +| Skill | Purpose | +|-------|---------| +| [playwright-testing](playwright-testing/) | Playwright automation, visual regression, a11y audits | +| [issue-management](issue-management/) | GitHub issue creation, labeling, milestones | + +### 🎨 UI/UX & Design (8) + +| Skill | Purpose | +|-------|---------| +| [responsive-design](responsive-design/) | Mobile-first CSS Grid/Flexbox, 320-1440px breakpoints | +| [design-system-management](design-system-management/) | Cyberpunk theme, CSS custom properties, component library | +| [political-data-visualization](political-data-visualization/) | CSS-only charts, heat maps, dashboards | +| [advanced-data-visualization](advanced-data-visualization/) | Chart.js / D3.js interactive dashboards | +| [data-visualization-principles](data-visualization-principles/) | Chart selection, colour theory, storytelling | +| [ui-ux-design](ui-ux-design/) | UX heuristics, information architecture | +| [seo-optimization](seo-optimization/) | Schema.org, OpenGraph, Twitter Cards, hreflang | +| [seo-best-practices](seo-best-practices/) | Canonical URLs, sitemap, robots.txt | + +> *Note: the UI/UX category lists 8 rows; `seo-best-practices` and `seo-optimization` are two distinct skills — one content/strategy-focused, one technical.* + +### 📡 Data Integration (6) + +| Skill | Purpose | +|-------|---------| +| [riksdag-regering-mcp](riksdag-regering-mcp/) | 32-tool MCP coverage for Riksdag + Regering data | +| [cia-data-integration](cia-data-integration/) | CIA platform JSON export consumption and validation | +| [european-parliament-api](european-parliament-api/) | European Parliament 
Open Data integration | +| [mcp-server-development](mcp-server-development/) | Building / packaging MCP servers | +| [mcp-gateway-configuration](mcp-gateway-configuration/) | Gateway routing, tool wiring, access control | +| [mcp-gateway-security](mcp-gateway-security/) | Token management, request validation, audit logging | + +### 📰 Journalism & Media (5) + +| Skill | Purpose | +|-------|---------| +| [editorial-standards](editorial-standards/) | OSINT/INTOP editorial standards, attribution, fact-checking | +| [investigative-journalism](investigative-journalism/) | Source verification, document analysis, FOI requests | +| [prospective-news-coverage](prospective-news-coverage/) | Forward-looking / week-ahead / month-ahead coverage | +| [comparative-politics-reporting](comparative-politics-reporting/) | Cross-country context for Swedish developments | +| [automated-content-generation](automated-content-generation/) | Template-based content rendering in 14 languages | + +### 🏛️ Government, Regulatory & Economics (7) + +| Skill | Purpose | +|-------|---------| +| [global-government-analysis](global-government-analysis/) | Comparative government systems, cross-country governance | +| [myndigheter-monitoring](myndigheter-monitoring/) | Swedish government-agency monitoring | +| [regulatory-affairs](regulatory-affairs/) | Regulatory change tracking and compliance impact | +| [economic-policy-analysis](economic-policy-analysis/) | Fiscal / monetary / trade policy analysis | +| [business-development](business-development/) | Stakeholder engagement, partnerships, community growth | +| [business-model-canvas](business-model-canvas/) | Business Model Canvas for open-source sustainability | +| [marketing](marketing/) | Digital marketing, SEO, content marketing, analytics | + +### 🗣️ Language & Localization (1) + +| Skill | Purpose | +|-------|---------| +| [language-expertise](language-expertise/) | Linguistic and cultural expertise for all 14 supported languages (EN, SV, DA, 
NB, FI, DE, FR, ES, NL, AR, HE, JA, KO, ZH) | + +### 🤖 GitHub Agentic Workflows (13) + +These skills encode the gh-aw framework's upstream rules. They underpin every `.github/workflows/news-*.md` workflow and the prompt modules in [`.github/prompts/`](../prompts/). The index lives in [`gh-aw-README.md`](gh-aw-README.md). + +| Skill | Purpose | +|-------|---------| +| [github-agentic-workflows](github-agentic-workflows/) | Root skill: v0.69.1 overview, five-layer security, safe outputs, MCP | +| [gh-aw-workflow-authoring](gh-aw-workflow-authoring/) | Markdown syntax, YAML frontmatter, compilation to `.lock.yml` | +| [gh-aw-mcp-configuration](gh-aw-mcp-configuration/) | MCP server setup, transport protocols, lifecycle, tool discovery | +| [gh-aw-mcp-gateway](gh-aw-mcp-gateway/) | Expert-level MCP gateway: routing, Docker, security, deployment | +| [gh-aw-safe-outputs](gh-aw-safe-outputs/) | Sanitisation, controlled AI actions, write-operation patterns | +| [gh-aw-security-architecture](gh-aw-security-architecture/) | Defense-in-depth, threat model, sandboxing, attack vectors | +| [gh-aw-firewall](gh-aw-firewall/) | Squid proxy domain allow-listing, iptables, credential management | +| [gh-aw-containerization](gh-aw-containerization/) | Docker isolation, multi-stage builds, image optimisation | +| [gh-aw-github-actions-integration](gh-aw-github-actions-integration/) | Workflow triggers, env config, secrets, matrix, deployment | +| [gh-aw-authentication-credentials](gh-aw-authentication-credentials/) | Token types, rotation, least-privilege, MCP auth, API keys | +| [gh-aw-logging-monitoring](gh-aw-logging-monitoring/) | Structured logging, metrics, alerting, debugging | +| [gh-aw-tools-ecosystem](gh-aw-tools-ecosystem/) | Tool capabilities, limits, integration patterns, custom tools | +| [gh-aw-continuous-ai-patterns](gh-aw-continuous-ai-patterns/) | Continuous-AI triage / review / maintenance patterns | + +### 📋 Copilot Patterns (1) + +| Skill | Purpose | 
+|-------|---------| +| [copilot-agent-patterns](copilot-agent-patterns/) | Custom agent design patterns, collaboration workflows, orchestration | + +--- + +## 🔢 Count reconciliation + +| Category | Count | +|----------|------:| +| Core Infrastructure & Governance | 9 | +| Political Intelligence | 11 | +| ISMS & Security | 14 | +| Development & Operations | 14 | +| Testing & Quality Assurance | 2 | +| UI/UX & Design | 8 | +| Data Integration | 6 | +| Journalism & Media | 5 | +| Government, Regulatory & Economics | 7 | +| Language & Localization | 1 | +| GitHub Agentic Workflows | 13 | +| Copilot Patterns | 1 | +| **Total** | **91** | + +Source of truth: `ls .github/skills/ | grep -v '^gh-aw-README\.md$' | wc -l` → `91`. + +--- + +## ✍️ Authoring a new skill + +1. Create a directory `<skill-name>/` in this folder (kebab-case). +2. Add a `SKILL.md` describing: *When to use*, *Rules to follow*, *Examples*. +3. Keep it **strategic** — principles and rules, not runbooks. +4. Cross-link to any related skills under "See also". +5. Open a PR; CEO approval required per [Change_Management.md](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Change_Management.md). +6. Update this README's catalog table and the total in [`SKILLS.md`](../../SKILLS.md). 
+ +--- + +## 📚 Related documentation + +- [`SKILLS.md`](../../SKILLS.md) — canonical skill catalog with detailed usage guidance +- [`AGENTS.md`](../../AGENTS.md) — persona agents and their recommended skills +- [`.github/agents/README.md`](../agents/README.md) — persona + workflow-specialist agent catalog +- [`.github/prompts/README.md`](../prompts/README.md) — prompt modules imported by agentic workflows +- [`copilot-instructions.md`](../copilot-instructions.md) — repository-wide Copilot rules +- [`gh-aw-README.md`](gh-aw-README.md) — gh-aw skills index + +--- + +**📋 Document owner:** CEO | **🏷️ Classification:** Public | **🔄 Review cycle:** Quarterly diff --git a/.github/workflows/README.md b/.github/workflows/README.md new file mode 100644 index 000000000..730b27b2e --- /dev/null +++ b/.github/workflows/README.md @@ -0,0 +1,178 @@ +# ⚙️ `.github/workflows/` — GitHub Actions Catalog + +This directory holds **45 workflow files** (21 standard `.yml` + 12 agentic Markdown sources + 12 compiled `.lock.yml` siblings) that power Riksdagsmonitor's CI/CD, security, data pipeline, agentic news generation, and monitoring. + +> **Canonical long-form workflow reference:** [`WORKFLOWS.md`](../../WORKFLOWS.md) at the repository root — version matrices, Mermaid pipeline diagrams, ISMS control mapping, troubleshooting, and KPIs. +> **Agentic workflow contract (imports, analysis gate, 9/14 artifacts):** [`.github/prompts/README.md`](../prompts/README.md). 
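The 21 + 12 + 12 = 45 breakdown can be reconciled mechanically. A sketch against a throwaway fixture (point `wf` at `.github/workflows/` to check the real tree):

```shell
# Sketch: classify workflow files into standard .yml, agentic .md
# sources, and compiled .lock.yml siblings, then total them.
wf="$(mktemp -d)"                               # stand-in for .github/workflows/
touch "$wf/codeql.yml" "$wf/release.yml"        # two "standard" workflows
touch "$wf/news-motions.md" "$wf/news-motions.lock.yml"  # one agentic pair
md=$(find "$wf" -name 'news-*.md' | wc -l)
lock=$(find "$wf" -name '*.lock.yml' | wc -l)
yml=$(find "$wf" -name '*.yml' ! -name '*.lock.yml' | wc -l)
total=$((md + lock + yml))
echo "$yml standard + $md sources + $lock locks = $total"
```

Against the real directory the three counts should come out 21, 12, and 12; a mismatch usually means a source was added without recompiling its lock file.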
+
+---
+
+## 📊 File inventory at a glance
+
+| File kind | Count | Notes |
+|-----------|------:|-------|
+| Standard GitHub Actions (`*.yml`, not `*.lock.yml`) | **21** | Run natively on the GitHub Actions runner |
+| Agentic workflow **sources** (`news-*.md`) | **12** | Authored in Markdown with gh-aw frontmatter + prompt imports |
+| Agentic workflow **compiled lock files** (`news-*.lock.yml`) | **12** | Auto-generated by `compile-agentic-workflows.yml` — these are what actually execute |
+| **Total** | **45** | Verify with `ls .github/workflows/ \| wc -l` |
+
+**Only the compiled `.lock.yml` files run.** The `.md` sources are the source of truth and are reviewed in PRs; lock files are regenerated via the `gh aw compile` CLI.
+
+---
+
+## 🔐 Security & Compliance (5)
+
+| File | Trigger | Purpose |
+|------|---------|---------|
+| [`codeql.yml`](codeql.yml) | Push, PR, weekly | CodeQL SAST — `javascript-typescript` matrix |
+| [`dependency-review.yml`](dependency-review.yml) | Pull requests | SCA gate — blocks PRs on critical/high vulnerabilities |
+| [`scorecards.yml`](scorecards.yml) | Push to `main`, weekly | OpenSSF Scorecard supply-chain assessment |
+| [`setup-labels.yml`](setup-labels.yml) | Manual dispatch | Repository label management (`.github/labeler.yml` source) |
+| [`labeler.yml`](labeler.yml) | Pull requests | Auto-label PRs from `.github/labeler.yml` patterns |
+
+## 🧪 Testing & Validation (7)
+
+| File | Trigger | Purpose |
+|------|---------|---------|
+| [`javascript-testing.yml`](javascript-testing.yml) | Push/PR on TS/JS/HTML/config | TypeScript type-check + Vitest (2 890 tests) + Vite build |
+| [`jsdoc-validation.yml`](jsdoc-validation.yml) | Manual dispatch | TypeDoc generation + coverage report |
+| [`quality-checks.yml`](quality-checks.yml) | Push/PR to `main` | ESLint + HTMLHint + linkinator (parallel jobs) |
+| [`translation-validation.yml`](translation-validation.yml) | Push/PR on HTML | 14-language completeness + RTL + hreflang |
+| 
[`test-dashboard.yml`](test-dashboard.yml) | Push/PR on dashboards | Cypress E2E for dashboard pages | +| [`test-homepage.yml`](test-homepage.yml) | Push/PR on homepage | Cypress E2E for the 14 `index*.html` homepages | +| [`test-news.yml`](test-news.yml) | Push/PR on `news/**` | Cypress E2E for news article pages | + +## 🚀 Release & Deployment (3) + +| File | Trigger | Purpose | +|------|---------|---------| +| [`release.yml`](release.yml) | Tag `v*`, manual | SLSA provenance + SBOM + dual deploy (S3 + GitHub Pages) | +| [`deploy-s3.yml`](deploy-s3.yml) | Push to `main` | AWS S3 sync + CloudFront invalidation via OIDC | +| [`lighthouse-ci.yml`](lighthouse-ci.yml) | Push/PR, weekly | Lighthouse CI: performance, a11y, SEO, best practices | + +## 📊 Data Pipeline (1) + +| File | Trigger | Purpose | +|------|---------|---------| +| [`update-cia-csv-data.yml`](update-cia-csv-data.yml) | Nightly 03:30 UTC, manual | Refresh every tracked `data/cia/**` + `cia-data/**` CSV from upstream `Hack23/cia`; refresh `production-stats.json`; inject counts into 14× `index*.html`; open a single PR on changes | + +## 🤖 Agentic News Generation (12 × 2 = 24 files) + +Each agentic workflow is a **pair**: an authored `.md` source + a compiled `.lock.yml`. The source imports bounded-context prompt modules from [`.github/prompts/`](../prompts/). The compiled lock file is hardened (SHA-pinned actions, egress firewall, five-layer safe-outputs) and is what GitHub Actions actually runs. 
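The source/lock pairing is an invariant worth checking before review. A minimal sketch (file names are illustrative; in this repo the real enforcement lives in `compile-agentic-workflows.yml` via `gh aw compile`):

```shell
# Sketch: flag any agentic source whose compiled lock sibling is absent.
wf="$(mktemp -d)"                               # stand-in for .github/workflows/
touch "$wf/news-motions.md" "$wf/news-motions.lock.yml"
touch "$wf/news-propositions.md"                # deliberately left uncompiled
missing=0
for src in "$wf"/news-*.md; do
  lock="${src%.md}.lock.yml"                    # sibling name by suffix swap
  if [ ! -f "$lock" ]; then
    echo "missing compiled lock for: ${src##*/}"
    missing=$((missing + 1))
  fi
done
echo "uncompiled sources: $missing"
```

A nonzero result means the PR is shipping a source whose executable counterpart is stale or absent.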
+ +### Article-type workflows (single-type, 9 core artifacts) + +| # | Source (`.md`) | Lock (`.lock.yml`) | Schedule | Primary MCP data | +|---|----------------|---------------------|----------|------------------| +| 1 | [`news-committee-reports.md`](news-committee-reports.md) | [`news-committee-reports.lock.yml`](news-committee-reports.lock.yml) | Mon–Fri 04:00 UTC | `get_betankanden`, `search_voteringar`, `search_anforanden`, `get_propositioner` | +| 2 | [`news-propositions.md`](news-propositions.md) | [`news-propositions.lock.yml`](news-propositions.lock.yml) | Mon–Fri 05:00 UTC | `get_propositioner`, `search_dokument_fulltext`, `search_anforanden` | +| 3 | [`news-motions.md`](news-motions.md) | [`news-motions.lock.yml`](news-motions.lock.yml) | Mon–Fri 06:00 UTC | `get_motioner`, `search_dokument_fulltext`, `search_anforanden` | +| 4 | [`news-interpellations.md`](news-interpellations.md) | [`news-interpellations.lock.yml`](news-interpellations.lock.yml) | Mon–Fri 07:00 UTC | `get_interpellationer`, `search_anforanden`, `get_calendar_events` | + +### Aggregation & prospective workflows (Tier-C, 14 artifacts) + +| # | Source (`.md`) | Lock (`.lock.yml`) | Schedule | Purpose | +|---|----------------|---------------------|----------|---------| +| 5 | [`news-realtime-monitor.md`](news-realtime-monitor.md) | [`news-realtime-monitor.lock.yml`](news-realtime-monitor.lock.yml) | Mon–Fri 10:00 + 14:00 UTC, weekends 12:00 | Breaking events, urgency classification | +| 6 | [`news-evening-analysis.md`](news-evening-analysis.md) | [`news-evening-analysis.lock.yml`](news-evening-analysis.lock.yml) | Mon–Fri 18:00, Sat 16:00 UTC | Daily parliamentary pulse, coalition cohesion | +| 7 | [`news-week-ahead.md`](news-week-ahead.md) | [`news-week-ahead.lock.yml`](news-week-ahead.lock.yml) | Fri 07:00 UTC | Prospective week preview; IMF WEO T+5 economic outlook | +| 8 | [`news-month-ahead.md`](news-month-ahead.md) | [`news-month-ahead.lock.yml`](news-month-ahead.lock.yml) | 1st of 
month 08:00 UTC | Monthly legislative pipeline + pinned IMF projection vintage | +| 9 | [`news-weekly-review.md`](news-weekly-review.md) | [`news-weekly-review.lock.yml`](news-weekly-review.lock.yml) | Sat 09:00 UTC | Week-over-week trends, throughput metrics | +| 10 | [`news-monthly-review.md`](news-monthly-review.md) | [`news-monthly-review.lock.yml`](news-monthly-review.lock.yml) | 28th 10:00 UTC | Monthly retrospective, party productivity rankings | + +### Manual & translation workflows + +| # | Source (`.md`) | Lock (`.lock.yml`) | Schedule | Purpose | +|---|----------------|---------------------|----------|---------| +| 11 | [`news-article-generator.md`](news-article-generator.md) | [`news-article-generator.lock.yml`](news-article-generator.lock.yml) | Manual dispatch | Backfill / regenerate any article type | +| 12 | [`news-translate.md`](news-translate.md) | [`news-translate.lock.yml`](news-translate.lock.yml) | Mon–Fri 11:00 + 17:00 UTC, weekends 14:00 | Translate EN/SV articles into the remaining 12 languages | + +**Every news workflow imports the bounded-context prompt modules in this exact order** (full contract in [`.github/prompts/README.md`](../prompts/README.md)): + +1. `../prompts/00-base-contract.md` — role, ethics, GDPR/ISMS, AI-FIRST rule +2. `../prompts/01-bash-and-shell-safety.md` — AWF-safe shell patterns, UTF-8 +3. `../prompts/02-mcp-access.md` — MCP server inventory + health gate +4. `../prompts/03-data-download.md` — download pipeline, manifest +5. `../prompts/04-analysis-pipeline.md` — methodologies, templates, 9 core artifacts, Pass 1 + Pass 2 +6. `../prompts/05-analysis-gate.md` — **single blocking gate** — no article until it passes +7. `../prompts/06-article-generation.md` — article sections, banned patterns, visualisation, translations +8. `../prompts/07-commit-and-pr.md` — stage → commit → exactly one `create_pull_request` +9. 
*(Tier-C workflows only)* `../prompts/ext/tier-c-aggregation.md` — 14-artifact gate, period multipliers + +## 🛠️ Automation & Tooling (4) + +| File | Trigger | Purpose | +|------|---------|---------| +| [`compile-agentic-workflows.yml`](compile-agentic-workflows.yml) | Push/PR touching `news-*.md`, manual | Run `gh aw compile` → regenerate `.lock.yml`; enforce firewall + safe-outputs + SHA-pinning | +| [`agentics-maintenance.yml`](agentics-maintenance.yml) | Scheduled + manual | Hygiene of the agentic environment: stale branch cleanup, secret-rotation hooks, runtime-cache eviction | +| [`economic-context-audit.yml`](economic-context-audit.yml) | Scheduled | Periodic audit of macro joins (SCB + World Bank + IMF) used by news workflows; validates IMF `projectionVintage` freshness and Economic Data Contract v2.0 schema conformance | +| [`copilot-setup-steps.yml`](copilot-setup-steps.yml) | Push, manual | Bootstrap GitHub Copilot coding-agent environment for this repo | + +## 📡 Monitoring & Infrastructure (1) + +| File | Trigger | Purpose | +|------|---------|---------| +| [`uptime-monitor.yml`](uptime-monitor.yml) | Every 15 minutes | Availability checks for all 14 language homepages; auto-creates/closes GitHub issues on outage/recovery; validates HSTS/CSP/X-Frame-Options headers | + +--- + +## 🔄 Triggers at a glance + +```mermaid +flowchart LR + subgraph "⏰ Cron" + C1[Every 15 min → uptime-monitor] + C2[Nightly 03:30 UTC → update-cia-csv-data] + C3[Mon-Fri 04:00-18:00 → news-* article + analysis] + C4[Weekends 12:00 + 14:00 → realtime + translate] + C5[Fri 07:00 → news-week-ahead] + C6[Sat 09:00 → news-weekly-review] + C7[28th 10:00 → news-monthly-review] + C8[1st 08:00 → news-month-ahead] + C9[Weekly → codeql + scorecards + lighthouse-ci] + end + subgraph "🔀 Event" + E1[push/PR → quality-checks + javascript-testing + translation-validation + codeql + test-*] + E2[Tag v*.*.* → release] + E3[push main → deploy-s3] + E4[PR → dependency-review + labeler] + end + 
subgraph "👆 Manual" + M1[workflow_dispatch → any workflow] + M2[Backfill → news-article-generator] + end + style C3 fill:#0a0e27,stroke:#00d9ff,color:#e0e0e0 + style E2 fill:#1a1e3d,stroke:#ffbe0b,color:#e0e0e0 +``` + +--- + +## 🔒 Universal security posture + +Every workflow in this directory implements defence-in-depth — see [`WORKFLOWS.md`](../../WORKFLOWS.md) §"Workflow Security Architecture" for the full matrix. + +| Control | How it's enforced | +|---------|-------------------| +| `step-security/harden-runner` | First step of every job; egress audit/allow-list | +| SHA-pinned `uses:` | 100 % coverage — any third-party Action pinned to commit SHA | +| Least-privilege `permissions:` | Default `contents: read`; only required scopes added | +| OIDC for AWS | `id-token: write` + `role-to-assume` (no long-lived keys) | +| Five-layer safe outputs (agentic) | Sanitise → schema-validate → policy-check → human-review → merge | +| Squid + iptables egress firewall (agentic) | Only domains in `network.allowed:` reach the internet | + +--- + +## 🧭 Where to go next + +| I want to… | Read | +|------------|------| +| Understand a specific pipeline stage end-to-end | [`WORKFLOWS.md`](../../WORKFLOWS.md) | +| Author or modify an agentic news workflow | [`.github/prompts/README.md`](../prompts/README.md) + [`.github/skills/gh-aw-workflow-authoring/`](../skills/gh-aw-workflow-authoring/) | +| Understand how safe outputs and the firewall work | [`.github/skills/gh-aw-safe-outputs/`](../skills/gh-aw-safe-outputs/) + [`.github/skills/gh-aw-firewall/`](../skills/gh-aw-firewall/) | +| Compile `.md` → `.lock.yml` locally | Run `gh aw compile .github/workflows/<name>.md` (workflow `compile-agentic-workflows.yml` does this in CI) | +| See ISMS control mapping | [`WORKFLOWS.md`](../../WORKFLOWS.md) §"ISMS Compliance Mapping" | +| Trace the analysis artifact contract | [`analysis/README.md`](../../analysis/README.md) + 
[`analysis/methodologies/README.md`](../../analysis/methodologies/README.md) + [`analysis/templates/README.md`](../../analysis/templates/README.md) | + +--- + +**📋 Document owner:** CEO | **🏷️ Classification:** Public | **🔄 Review cycle:** Quarterly From c77d02f45a60349da74e7fa37e7a1b7969f53d4f Mon Sep 17 00:00:00 2001 From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com> Date: Wed, 22 Apr 2026 09:22:34 +0000 Subject: [PATCH 20/21] docs: extend WORKFLOWS.md, refresh analysis & project READMEs, cross-link everything Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/c5913acc-0e1b-46d6-a7e8-6323e54ca7aa Co-authored-by: pethers <1726836+pethers@users.noreply.github.com> --- .github/prompts/README.md | 10 ++ AGENTS.md | 5 +- README.md | 172 +++++++++++++++--------------- SKILLS.md | 3 + WORKFLOWS.md | 177 ++++++++++++++++++++++++++++++- analysis/README.md | 4 +- analysis/methodologies/README.md | 2 +- 7 files changed, 281 insertions(+), 92 deletions(-) diff --git a/.github/prompts/README.md b/.github/prompts/README.md index ff2b5c598..403354b17 100644 --- a/.github/prompts/README.md +++ b/.github/prompts/README.md @@ -111,3 +111,13 @@ See [`.github/skills/github-agentic-workflows/SKILL.md`](../skills/github-agenti ## History The monolithic `.github/aw/SHARED_PROMPT_PATTERNS.md` was deleted when these modules went live. Every rule from the old file was either migrated into one of the modules above, merged with an equivalent rule, or deleted as audit history / duplicated content / tutorial from a skill file. 
+ +## Related directory READMEs + +- [`.github/agents/README.md`](../agents/README.md) — 24 agent files (14 persona + 9 workflow-specialist + 1 developer-instructions) +- [`.github/skills/README.md`](../skills/README.md) — 91 skills by functional category +- [`.github/workflows/README.md`](../workflows/README.md) — 45 workflow files (21 `.yml` + 12 `.md` + 12 `.lock.yml`) +- [`analysis/README.md`](../../analysis/README.md) — on-disk artifact layout (`analysis/daily/$ARTICLE_DATE/$SUBFOLDER/`) +- [`analysis/methodologies/README.md`](../../analysis/methodologies/README.md) — 11 methodology modules +- [`analysis/templates/README.md`](../../analysis/templates/README.md) — 23 canonical output templates +- [`WORKFLOWS.md`](../../WORKFLOWS.md) — complete CI/CD + agentic workflow reference diff --git a/AGENTS.md b/AGENTS.md index 6a010b98d..d9739b9ca 100644 --- a/AGENTS.md +++ b/AGENTS.md @@ -4,7 +4,10 @@ This repository includes custom GitHub Copilot agents specialized for different aspects of the riksdagsmonitor project. Each agent has deep expertise in its domain and can be invoked to assist with specific tasks. -## Available Agents (14 Total) +> **Directory-level catalog:** [`.github/agents/README.md`](.github/agents/README.md) — summarises all **24 agent files** (14 persona agents + 9 workflow-specialist `.agent.md` + shared `developer.instructions.md`). This document is the long-form reference with invocation examples and per-agent skill mappings. +> **Companion docs:** [`SKILLS.md`](SKILLS.md) · [`.github/skills/README.md`](.github/skills/README.md) · [`.github/prompts/README.md`](.github/prompts/README.md) · [`.github/workflows/README.md`](.github/workflows/README.md) + +## Available Agents (14 Personas) ### 1. 
Security Architect (`security-architect`) **Expertise**: Security architecture, ISMS compliance, threat modeling, ISO 27001/NIST CSF/CIS Controls diff --git a/README.md b/README.md index 702b76048..52c781c98 100644 --- a/README.md +++ b/README.md @@ -644,88 +644,33 @@ gh attestation verify riksdagsmonitor-v1.0.0.zip -R Hack23/riksdagsmonitor - [🏷️ Classification Framework](https://github.com/Hack23/ISMS-PUBLIC/blob/main/CLASSIFICATION.md) — CIA triad, RTO/RPO, business impact ### GitHub Copilot Integration -- [AGENTS.md](AGENTS.md) - Custom Copilot agents for specialized tasks (14 agents) -- [SKILLS.md](SKILLS.md) - Agent skills for strategic guidance (87 skills) -- [`.github/agents/`](.github/agents/) - Agent configuration files -- [`.github/skills/`](.github/skills/) - Skill libraries - -**Available Agents (14)**: -- **security-architect** - Security architecture and ISMS compliance -- **documentation-architect** - C4 models and technical documentation -- **quality-engineer** - HTML/CSS validation and accessibility -- **frontend-specialist** - Static site development and responsive design -- **isms-compliance-manager** - ISO 27001/NIST CSF/CIS Controls compliance -- **deployment-specialist** - GitHub Actions and CI/CD automation -- **intelligence-operative** - Political intelligence analysis, OSINT, Swedish politics expertise, riksdag-regering-mcp (32 tools) -- **task-agent** - Product excellence, quality assurance, Playwright testing, issue management -- **ui-enhancement-specialist** - Static HTML/CSS, responsive design, 14-language support, WCAG 2.1 AA -- **data-pipeline-specialist** - CIA data consumption, ETL workflows, caching strategies, data validation -- **data-visualization-specialist** - Chart.js/D3.js, interactive dashboards, CIA intelligence visualizations -- **content-generator** - Automated news generation, intelligence reports, multi-language content -- **devops-engineer** - CI/CD pipelines, GitHub Actions security, infrastructure automation, 
monitoring -- **news-journalist** - Political journalism, editorial standards, multi-language news coverage - -**Available Skills (87)**: - -*Core Infrastructure (7):* -- **hack23-isms-compliance** - ISMS framework requirements -- **security-by-design** - Security best practices -- **static-site-security** - Static website security -- **ci-cd-security** - GitHub Actions security hardening -- **documentation-standards** - Documentation guidelines -- **html-accessibility** - WCAG 2.1 AA compliance -- **multi-language-localization** - Internationalization best practices - -*Political Intelligence (11):* -- **political-science-analysis** - Comparative politics and policy analysis frameworks -- **osint-methodologies** - Open-source intelligence collection and verification -- **intelligence-analysis-techniques** - Structured analytic techniques (ACH, SWOT) -- **swedish-political-system** - Riksdag structure, 8 parties, electoral system -- **electoral-analysis** - Election forecasting and coalition prediction -- **behavioral-analysis** - Political psychology and leadership analysis -- **strategic-communication-analysis** - Narrative analysis and media monitoring -- **legislative-monitoring** - Voting patterns and parliamentary oversight -- **risk-assessment-frameworks** - Political risk and corruption indicators -- **data-science-for-intelligence** - Statistical analysis and visualization -- **gdpr-compliance** - GDPR compliance for political data processing - -*ISMS & Security (6):* -- **cis-controls** - CIS Controls v8.1 for static sites -- **iso-27001-controls** - ISO 27001:2022 Annex A controls -- **nist-csf-mapping** - NIST CSF 2.0 framework mapping -- **threat-modeling** - STRIDE threat analysis -- **secure-code-review** - HTML/CSS/JS security review -- **security-documentation** - ISMS documentation standards - -*Development & Operations (11):* ⬆️ **EXPANDED** -- **c4-architecture-documentation** - C4 model and Mermaid diagrams -- **github-actions-workflows** - 
CI/CD patterns and security -- **code-quality-checks** - HTMLHint, CSSLint, linkinator, axe-core -- **secrets-management** - GitHub secrets and PAT management -- **data-pipeline-engineering** ✨ **NEW** - ETL workflows, automated data fetching -- **automated-content-generation** ✨ **NEW** - News generation, intelligence reports -- **performance-optimization** ✨ **NEW** - Core Web Vitals, bundle size, caching -- **api-integration** ✨ **NEW** - REST/GraphQL clients, rate limiting -- **github-agentic-workflows** ✨ **NEW** - AI-powered repository automation, MCP tools, safe outputs - -*UI/UX & Design (4):* ⬆️ **EXPANDED** -- **responsive-design** - Mobile-first, CSS Grid/Flexbox, breakpoints (320px-1440px+) -- **design-system-management** - Cyberpunk theme, CSS variables, component library -- **political-data-visualization** - CSS-only charts, heat maps, dashboards -- **advanced-data-visualization** ✨ **NEW** - Chart.js/D3.js, interactive dashboards - -*Testing & Quality Assurance (2):* ✨ **NEW** -- **playwright-testing** - Browser automation, visual regression, accessibility audits -- **issue-management** - GitHub issue creation, labeling, agent assignment - -*Data Integration (2):* ⬆️ **EXPANDED** -- **riksdag-regering-mcp** - 32 political data tools (Parliament, Government, MPs, votes) -- **cia-data-integration** ✨ **NEW** - CIA export consumption, validation, caching strategies + +Riksdagsmonitor uses **GitHub Copilot personas, skills, and agentic workflows** as first-class automation. The directory READMEs are the single source of truth; [`AGENTS.md`](AGENTS.md) and [`SKILLS.md`](SKILLS.md) are the long-form reference catalogs. 
+ +- [`.github/agents/README.md`](.github/agents/README.md) — **24 agent files** (14 persona agents + 9 workflow-specialist `.agent.md` + shared `developer.instructions.md`) +- [`.github/skills/README.md`](.github/skills/README.md) — **91 skills** grouped by 12 functional categories +- [`.github/prompts/README.md`](.github/prompts/README.md) — 8 bounded-context prompt modules + Tier-C extension, imported by every agentic news workflow +- [`.github/workflows/README.md`](.github/workflows/README.md) — 45 workflow files (standard + agentic) +- [AGENTS.md](AGENTS.md) — canonical persona catalog with skill-mapping tables and invocation examples +- [SKILLS.md](SKILLS.md) — canonical skill catalog with agent-skill mappings + +**14 Persona Agents** (assignable via `assign_copilot_to_issue`): + +- **security-architect** · **documentation-architect** · **quality-engineer** · **frontend-specialist** · **isms-compliance-manager** · **deployment-specialist** · **devops-engineer** · **intelligence-operative** · **news-journalist** · **content-generator** · **data-pipeline-specialist** · **data-visualization-specialist** · **task-agent** · **ui-enhancement-specialist** + +**9 Workflow-Specialist Agents** (`.agent.md`, invoked by name from workflows): `agentic-workflows` · `ci-cleaner` · `contribution-checker` · `create-safe-output-type` · `custom-engine-implementation` · `grumpy-reviewer` · `interactive-agent-designer` · `technical-doc-writer` · `w3c-specification-writer` + +**Available Skills (91)** — see [`.github/skills/README.md`](.github/skills/README.md) for the complete catalog across: + +- 🛡️ Core Infrastructure & Governance (9) · 🕵️ Political Intelligence (11) · 🔐 ISMS & Security (14) +- ⚙️ Development & Operations (14) · 🧪 Testing & QA (2) · 🎨 UI/UX & Design (8) +- 📡 Data Integration (6) · 📰 Journalism & Media (5) · 🏛️ Government, Regulatory & Economics (7) +- 🗣️ Language & Localization (1) · 🤖 GitHub Agentic Workflows (13) · 📋 Copilot Patterns (1) *Economic-Data 
Integrations (three primary sources, parity-treated):* - **scb-mcp** (`@jarib/pxweb-mcp@2.0.0`) — official Swedish statistics via PxWebAPI 2.0 (1,200+ tables) - **world-bank-mcp** (`worldbank-mcp@1.0.1`) + `scripts/world-bank-client.ts` — WGI governance, environment, long-horizon social/education -- **IMF TypeScript client** (`scripts/imf-client.ts`) ✨ **NEW** — WEO, Fiscal Monitor, IFS, GFS_COFOG via Datamapper JSON + SDMX 3.0; macro/fiscal freshness + T+5 projections. **Intentionally not an MCP server** — pure-TS, fully covered by the npm SBOM (ADR 0001); `package.json` `x-external-mcp` stays empty, so the **8 MCP servers** count is unchanged. +- **IMF TypeScript client** (`scripts/imf-client.ts`) — WEO, Fiscal Monitor, IFS, GFS_COFOG via Datamapper JSON + SDMX 3.0; macro/fiscal freshness + T+5 projections. **Intentionally not an MCP server** — pure-TS, fully covered by the npm SBOM (ADR 0001). ### External Documentation - [CIA Platform Documentation](https://hack23.github.io/cia/) @@ -735,13 +680,66 @@ gh attestation verify riksdagsmonitor-v1.0.0.zip -R Hack23/riksdagsmonitor - [Hack23 Secure Development Policy](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Secure_Development_Policy.md) - [Hack23 Blog](https://hack23.com/blog.html) +## 🔬 Political Intelligence Analysis & News Creation + +Riksdagsmonitor is built around two tightly-coupled product lines: **deep political intelligence analysis** and **autonomous news article creation**. Every news article is backed by a reproducible analysis artifact trail on disk. 
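Because the trail is plain files on disk, the blocking gate reduces to a count check. A sketch (the artifact names, date, and subfolder here are placeholders; the real contract defines which 9 or 14 artifacts are required):

```shell
# Sketch: analysis gate as a file-count threshold (9 core artifacts for
# single-type workflows, 14 for Tier-C aggregation workflows).
dir="$(mktemp -d)/analysis/daily/2026-04-22/propositions"  # $ARTICLE_DATE/$SUBFOLDER
mkdir -p "$dir"
for i in $(seq 1 9); do
  printf 'placeholder artifact %s\n' "$i" > "$dir/artifact-$i.md"
done
required=9                                      # use 14 for Tier-C workflows
have=$(find "$dir" -type f | wc -l | tr -d ' ')
if [ "$have" -ge "$required" ]; then gate=pass; else gate=fail; fi
echo "gate: $gate ($have/$required artifacts)"
```

Only when the gate passes does the pipeline proceed to article generation; a failing count loops back to template population.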
+ +### End-to-end pipeline + +```mermaid +flowchart LR + A[📥 MCP + CIA + SCB + IMF<br/>data download] --> B[📐 Apply methodology<br/>analysis/methodologies] + B --> C[📋 Populate templates<br/>analysis/templates] + C --> D[📂 Write 9 or 14 artifacts<br/>analysis/daily/$DATE/$SUBFOLDER] + D --> E{🚦 Analysis Gate<br/>prompts/05} + E -- pass --> F[📰 Generate article<br/>prompts/06] + E -- fail --> C + F --> G[🌐 Translate into<br/>remaining 12 languages] + G --> H[🔀 One PR per article type<br/>prompts/07] + style A fill:#0a0e27,stroke:#00d9ff,color:#e0e0e0 + style E fill:#dc3545,stroke:#b02a37,color:#fff + style H fill:#1a1e3d,stroke:#ffbe0b,color:#e0e0e0 +``` + +### Vital documents + +| Area | Document | What you'll find | +|------|----------|------------------| +| **Analysis framework** | [`analysis/README.md`](analysis/README.md) | Artifact taxonomy, 9-artifact / 14-artifact contract, daily-output layout | +| **Methodology library** | [`analysis/methodologies/README.md`](analysis/methodologies/README.md) | 11 methodology documents (AI-driven guide, per-document protocol, risk/SWOT/threat frameworks, synthesis, electoral, classification, style) | +| **Template library** | [`analysis/templates/README.md`](analysis/templates/README.md) | 23 templates — 8 core single-type (T1–T8) + 15 extended/Tier-C (scenario, executive-brief, coalition-mathematics, election-2026, historical-parallels, comparative-international, devil's advocate, etc.) 
| +| **News-generation contract** | [`.github/prompts/README.md`](.github/prompts/README.md) | 8 bounded-context prompt modules + Tier-C extension; single blocking analysis gate | +| **Workflow orchestration** | [`.github/workflows/README.md`](.github/workflows/README.md) + [`WORKFLOWS.md`](WORKFLOWS.md) §Stage 6.1 | How each `news-*.md` source compiles to a hardened `.lock.yml` with SHA-pinned actions, egress firewall, and five-layer safe-outputs | +| **Specialist personas** | [`.github/agents/README.md`](.github/agents/README.md) | `intelligence-operative`, `news-journalist`, `content-generator` — and 11 more | +| **Rules that guide the agents** | [`.github/skills/README.md`](.github/skills/README.md) | 11 political-intelligence skills + 5 journalism skills + 14 ISMS/security skills | + +### Data sources used during analysis + +- **Riksdagen & Regeringen** via `riksdag-regering-mcp` (32 tools): MPs, votes, documents, speeches, committees, government docs +- **Statistics Sweden (SCB)** via `@jarib/pxweb-mcp@2.0.0` (1 200+ PxWeb tables) +- **World Bank Open Data** via `worldbank-mcp@1.0.1` + `scripts/world-bank-client.ts` (WGI governance, environment, education) +- **IMF** via `scripts/imf-client.ts` (pure-TS, WEO + Fiscal Monitor + IFS + GFS_COFOG, T+5 projections) +- **CIA platform** (Hack23) — 19 visualisation products consumed nightly via `update-cia-csv-data.yml` + +--- + ## 🤖 AI-Disrupted News Generation -> *"While traditional newsrooms debate whether AI will replace journalists, Riksdagsmonitor already runs a fully autonomous political intelligence newsroom — 10 agentic workflows, 14 languages, zero human editors, and a publication schedule that would bankrupt any legacy outlet trying to keep up."* +> *"While traditional newsrooms debate whether AI will replace journalists, Riksdagsmonitor already runs a fully autonomous political intelligence newsroom — 12 agentic workflows, 14 languages, zero human editors, and a publication schedule that would bankrupt 
any legacy outlet trying to keep up."*
+
+Riksdagsmonitor's **agentic news generation pipeline** is the world's first fully AI-driven political intelligence newsroom for parliamentary monitoring. Powered by Claude Opus (currently 4.7) via GitHub Copilot Coding Agent, our **12 specialized workflows** (11 scheduled + 1 on-demand; the scheduled set includes a dedicated translation workflow) autonomously produce deep political analysis — not shallow summaries, but structured intelligence products with source verification, multi-party balance, and GDPR-compliant OSINT methodology.
-Riksdagsmonitor's **agentic news generation pipeline** is the world's first fully AI-driven political intelligence newsroom for parliamentary monitoring. Powered by Claude Opus (currently 4.6) via GitHub Copilot Coding Agent, our 10 specialized workflows (9 scheduled + 1 on-demand) autonomously produce deep political analysis — not shallow summaries, but structured intelligence products with source verification, multi-party balance, and GDPR-compliant OSINT methodology.
+> 📚 **Directory-level catalogs** (single sources of truth): +> - [`.github/workflows/README.md`](.github/workflows/README.md) — 45 workflow files (21 standard `.yml` + 12 agentic `.md` sources + 12 compiled `.lock.yml`) +> - [`.github/prompts/README.md`](.github/prompts/README.md) — 8 bounded-context prompt modules + `ext/tier-c-aggregation.md`, imported by every news workflow +> - [`.github/agents/README.md`](.github/agents/README.md) — 24 Copilot agent files (14 personas + 9 workflow-specialists + 1 shared developer-instructions) +> - [`.github/skills/README.md`](.github/skills/README.md) — 91 skills grouped by 12 functional categories +> - [`analysis/README.md`](analysis/README.md) — on-disk artifact layout (`analysis/daily/$ARTICLE_DATE/$SUBFOLDER/`) with 9-artifact / 14-artifact contracts +> - [`analysis/methodologies/README.md`](analysis/methodologies/README.md) — 11 methodology documents +> - [`analysis/templates/README.md`](analysis/templates/README.md) — 23 canonical output templates (8 core single-type + 15 extended / Tier-C) +> - [`WORKFLOWS.md`](WORKFLOWS.md) — canonical end-to-end reference (v7.2, includes Stage 6.1 *Agentic Workflow Structure & Prompt Imports*) -### 📰 Autonomous Publication Schedule +### Autonomous Publication Schedule Every day, the platform's AI operatives awaken on cron schedules, query the Swedish Parliament's open data via **32 MCP tools**, cross-reference government sources, and generate publication-ready intelligence articles in **14 languages** — including RTL support for Arabic and Hebrew. 
@@ -750,15 +748,17 @@ Every day, the platform's AI operatives awaken on cron schedules, query the Swed | 🌅 04:00 | **Committee Reports** | Utskottsbetänkanden analysis, voting breakdowns | Mon–Fri | | 🌅 05:00 | **Propositions** | Government bills, legislative impact assessment | Mon–Fri | | ☀️ 06:00 | **Motions** | Opposition proposals, party strategy decoding | Mon–Fri | -| ☀️ 07:00 | **Week Ahead** | Parliamentary calendar preview, agenda intelligence | Friday | -| ☀️ 08:00 | **Month Ahead** | Strategic outlook, coalition forecasting | 1st of month | +| ❓ 07:00 | **Interpellations** | Ministerial accountability, evasion detection | Mon–Fri | +| 🔮 07:00 | **Week Ahead** | Parliamentary calendar preview, agenda intelligence | Friday | +| 📅 08:00 | **Month Ahead** | Strategic outlook, coalition forecasting | 1st of month | | 🔍 10:00 & 14:00 (Mon–Fri); 12:00 (Sat/Sun) | **Realtime Monitor** | Breaking political developments, flash analysis | Mon–Fri (×2) + weekends | +| 🌍 11:00 & 17:00 (Mon–Fri); 14:00 (Sat/Sun) | **Translate** | 12 additional languages from EN/SV cores | Daily | | 🌆 18:00 (16:00 Sat) | **Evening Analysis** | Deep-dive intelligence synthesis | Mon–Sat | | 📊 09:00 | **Weekly Review** | Week-in-review scorecard, party performance | Saturday | | 📈 10:00 | **Monthly Review** | Comprehensive monthly intelligence assessment | 28th of month | -| 🔧 Manual | **Article Generator** | On-demand article generation | On-demand | +| 🔧 Manual | **Article Generator** | On-demand article generation / backfill | On-demand | -> _All times are **UTC** (GitHub Actions cron). For local time, convert to CET/CEST. Authoritative schedules defined in `.github/workflows/news-*.lock.yml` workflows._ +> _All times are **UTC** (GitHub Actions cron). For local time, convert to CET/CEST. 
Authoritative schedules defined in `.github/workflows/news-*.lock.yml` workflows — see [`.github/workflows/README.md`](.github/workflows/README.md) for the complete inventory._ > **Result**: Dozens of articles per week across 14 languages — delivering **hundreds of localized intelligence products each month**, generated autonomously with zero editorial intervention. @@ -858,7 +858,7 @@ graph LR | Capability | Status | Details | |:-----------|:------:|:--------| | TypeScript migration | ✅ Done | 31 modules, 2890 Vitest tests | -| Agentic news generation | ✅ Live | 10 workflows (9 scheduled + 1 on-demand), 14 languages | +| Agentic news generation | ✅ Live | 12 workflows (11 scheduled + 1 on-demand), 14 languages | | 14-language support | ✅ Live | Including Arabic/Hebrew RTL | | CIA data integration | 🔄 Active | 19 visualization products | | Predictive dashboards | 📋 Planned | Chart.js/D3.js interactive displays | diff --git a/SKILLS.md b/SKILLS.md index 3f84210ec..117e25162 100644 --- a/SKILLS.md +++ b/SKILLS.md @@ -4,6 +4,9 @@ Agent skills are strategic, high-level principles and best practices that guide Copilot agents in performing their tasks. Skills are automatically loaded when relevant to the current context, providing agents with specialized knowledge without cluttering the main prompt. +> **Directory-level catalog:** [`.github/skills/README.md`](.github/skills/README.md) — compact 91-skill catalog grouped by 12 functional categories. +> **Companion docs:** [`AGENTS.md`](AGENTS.md) · [`.github/agents/README.md`](.github/agents/README.md) · [`.github/prompts/README.md`](.github/prompts/README.md) · [`.github/workflows/README.md`](.github/workflows/README.md) + ## What Are Skills? 
Skills are structured instruction sets stored in `.github/skills/` that teach agents: diff --git a/WORKFLOWS.md b/WORKFLOWS.md index b40abd0ce..ba22495ad 100644 --- a/WORKFLOWS.md +++ b/WORKFLOWS.md @@ -866,6 +866,179 @@ graph TB --- +### 🧩 Stage 6.1: Agentic Workflow Structure & Prompt Imports + +Every `news-*.md` source in `.github/workflows/` is a **gh-aw workflow** — a Markdown file whose YAML frontmatter compiles down, via `gh aw compile`, to a hardened GitHub Actions `.lock.yml`. A workflow is made up of three layers: + +``` +┌─────────────────────────────────────────────────────────────────────────┐ +│ news-<type>.md │ +│ ├── Frontmatter (name, triggers, runtimes, network, tools, mcp-servers)│ +│ ├── imports: │ +│ │ ├── prompts/00-base-contract.md ← role, ethics, AI-FIRST │ +│ │ ├── prompts/01-bash-and-shell-safety.md │ +│ │ ├── prompts/02-mcp-access.md ← MCP inventory + health │ +│ │ ├── prompts/03-data-download.md ← download pipeline │ +│ │ ├── prompts/04-analysis-pipeline.md ← 9 core artifacts, Pass 1+2 │ +│ │ ├── prompts/05-analysis-gate.md ← BLOCKING artifact gate │ +│ │ ├── prompts/06-article-generation.md ← sections, banned patterns │ +│ │ └── prompts/07-commit-and-pr.md ← stage → commit → PR │ +│ ├── (Tier-C only) imports: prompts/ext/tier-c-aggregation.md ← 14 artifacts │ +│ └── Body = per-type instructions (e.g. 
"generate propositions article")│ +└─────────────────────────────────────────────────────────────────────────┘ + │ + │ gh aw compile + ▼ +┌─────────────────────────────────────────────────────────────────────────┐ +│ news-<type>.lock.yml (the only file GitHub Actions executes) │ +│ ├── SHA-pinned actions (100%) │ +│ ├── step-security/harden-runner with egress audit │ +│ ├── Squid proxy + iptables firewall over network.allowed: │ +│ ├── Five-layer safe-outputs validator │ +│ └── MCP servers wired: riksdag-regering, scb, world-bank │ +└─────────────────────────────────────────────────────────────────────────┘ +``` + +#### Import order is a contract + +The import order is **not arbitrary** — each module builds on the previous one, and [`.github/prompts/05-analysis-gate.md`](.github/prompts/05-analysis-gate.md) is a single blocking gate that refuses to let the agent draft a single article sentence until **9 of 9 core artifacts** (single-type) or **14 of 14 artifacts** (Tier-C aggregation) are on disk in `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/` for both Pass 1 and Pass 2. 
+ +| Import # | Module | Responsibility | What fails fast if missing | +|---------:|--------|----------------|----------------------------| +| 1 | `00-base-contract.md` | Role, editorial ethics, GDPR, ISMS, AI-FIRST 2-pass minimum | Ethics drift, privacy leaks, shallow output | +| 2 | `01-bash-and-shell-safety.md` | UTF-8 shell patterns, no `eval`, no shell-expansion exploits | Command injection, encoding corruption | +| 3 | `02-mcp-access.md` | MCP server inventory + health probe | Wrong tool called, missing data source | +| 4 | `03-data-download.md` | Download manifest, source attribution, caching | Unsourced claims | +| 5 | `04-analysis-pipeline.md` | 9 core artifacts, methodology → template bindings, Pass 1 + Pass 2 | Shallow analysis, template mismatch | +| 6 | **`05-analysis-gate.md`** | **Blocks article generation until artifacts are complete** | *Any article written before gate passes → workflow failure* | +| 7 | `06-article-generation.md` | Article sections, banned hype patterns, visualisation, translations | Boilerplate content, missing charts | +| 8 | `07-commit-and-pr.md` | Stage → commit → exactly one `create_pull_request` call | Orphan commits, duplicate PRs | +| 9 (Tier-C only) | `ext/tier-c-aggregation.md` | 14-artifact aggregation gate, period multipliers | Aggregation without base artifacts | + +#### Dependency graph + +```mermaid +flowchart TB + subgraph WF["news-<type>.md (workflow source)"] + FM[Frontmatter<br/>triggers · runtimes · network · mcp-servers] + BODY[Body<br/>per-type analysis instructions] + end + + subgraph PROMPTS[".github/prompts/ (shared bounded contexts)"] + P00[00-base-contract] + P01[01-bash-and-shell-safety] + P02[02-mcp-access] + P03[03-data-download] + P04[04-analysis-pipeline] + P05[05-analysis-gate ⚠️ blocking] + P06[06-article-generation] + P07[07-commit-and-pr] + PEXT[ext/tier-c-aggregation] + end + + subgraph METH["analysis/methodologies/"] + M1[ai-driven-analysis-guide] + M2[per-document-methodology] + 
M3[political-risk-methodology] + M4[political-swot-framework] + M5[political-threat-framework] + M6[synthesis-methodology] + end + + subgraph TPL["analysis/templates/ (23 output templates)"] + T1[per-file-political-intelligence] + T2[risk-assessment] + T3[swot-analysis] + T4[threat-analysis] + T5[stakeholder-impact] + T6[scenario-analysis] + T7[significance-scoring] + T8[executive-brief] + T9[cross-reference-map] + T10[synthesis-summary] + T11[devils-advocate] + T12[methodology-reflection] + T13[… + 11 more] + end + + FM --> P00 --> P01 --> P02 --> P03 --> P04 --> P05 --> P06 --> P07 + P04 -.binds.-> METH + P04 -.binds.-> TPL + P05 -.verifies.-> TPL + BODY --> PEXT + PEXT -. Tier-C only .-> P05 + + style P05 fill:#dc3545,color:#fff,stroke:#b02a37,stroke-width:3px + style PEXT fill:#fd7e14,color:#fff,stroke:#ca6510 +``` + +#### Single-type vs. Tier-C artifact contract + +| Contract | Applies to | Required artifacts | Source | +|----------|-----------|-------------------:|--------| +| **Single-type** (9 artifacts) | `news-propositions`, `news-motions`, `news-committee-reports`, `news-interpellations`, `news-article-generator` (per call) | **9** | [`prompts/05-analysis-gate.md`](.github/prompts/05-analysis-gate.md) | +| **Tier-C aggregation** (14 artifacts) | `news-evening-analysis`, `news-realtime-monitor`, `news-week-ahead`, `news-month-ahead`, `news-weekly-review`, `news-monthly-review` | **14** | [`prompts/ext/tier-c-aggregation.md`](.github/prompts/ext/tier-c-aggregation.md) | +| **Translation** (N/A) | `news-translate` | N/A (post-hoc) | Direct text pipeline | + +All artifacts are written under `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/` — see [`analysis/README.md`](analysis/README.md) for the on-disk layout and [`analysis/templates/README.md`](analysis/templates/README.md) for the 23 canonical templates. 
+ +#### MCP server wiring (identical across all 12 agentic workflows) + +```yaml +mcp-servers: + riksdag-regering: # HTTP (hosted) + url: https://riksdag-regering-ai.onrender.com/mcp + allowed: ["*"] + scb: # Container (per-run) + container: "node:25-alpine" + entrypoint: "npx" + entrypointArgs: ["-y", "@jarib/pxweb-mcp@2.0.0", + "--url", "https://api.scb.se/OV0104/v2beta"] + allowed: ["*"] + world-bank: # Container (per-run) + container: "node:25-alpine" + entrypoint: "npx" + entrypointArgs: ["-y", "worldbank-mcp@1.0.1"] + allowed: ["*"] +``` + +In addition, every workflow activates the gh-aw-provided `github` (all toolsets), `agentic-workflows`, `bash`, `playwright`, and `repo-memory` tools. IMF ingestion is **not** an MCP server — it is pulled by `scripts/imf-client.ts` (pure-TS Datamapper JSON + SDMX 3.0 client), fully covered by the npm SBOM, so the total MCP server count remains **8** (3 shared here + 5 repo-level in `.github/copilot-mcp.json`). + +#### Network egress allow-list (applies to every agentic workflow) + +```yaml +network: + allowed: + - node # npm registry ecosystem + - github # GitHub API + - defaults # Curated dev domains + - riksdag-regering-ai.onrender.com # Riksdag MCP + - api.scb.se # Statistics Sweden + - api.worldbank.org # World Bank + - api.imf.org # IMF (for imf-client.ts) + - data.riksdagen.se # Riksdag open data + - riksdagen.se / www.riksdagen.se # Riksdag website + - regeringen.se / www.regeringen.se # Government website + - hack23.com / www.hack23.com # Hack23 platform + - riksdagsmonitor.com # This platform + - raw.githubusercontent.com # Raw GitHub content + - hack23.github.io # GitHub Pages DR +``` + +Any outbound connection not matching this allow-list is dropped at the Squid proxy / iptables layer. + +#### Compilation lifecycle + +1. Author edits `news-<type>.md`. +2. PR triggers [`compile-agentic-workflows.yml`](.github/workflows/compile-agentic-workflows.yml). +3. 
That workflow runs `gh aw compile` to regenerate the sibling `.lock.yml`. +4. Reviewer inspects the `.md` *source of truth* and confirms the `.lock.yml` diff. +5. Once merged, the next cron/dispatch runs the `.lock.yml` under the hardened runner. + +> **Rule:** never edit a `.lock.yml` by hand — the next compilation pass will overwrite it. All review feedback must target the `.md` source and the prompt modules it imports. + +--- + ### 📡 Stage 7: Monitoring & Infrastructure ```mermaid @@ -1158,6 +1331,6 @@ flowchart TB --- -**📋 Document Owner:** CEO | **📄 Version:** 7.0 | **📅 Last Updated:** 2026-03-27 (UTC) -**🔄 Review Cycle:** Quarterly | **⏰ Next Review:** 2026-06-27 +**📋 Document Owner:** CEO | **📄 Version:** 7.2 | **📅 Last Updated:** 2026-04-20 (UTC) +**🔄 Review Cycle:** Quarterly | **⏰ Next Review:** 2026-07-20 **🏢 Classification:** Public | **🏛️ Owner:** Hack23 AB (Org.nr 5595347807) diff --git a/analysis/README.md b/analysis/README.md index e64d509ce..cfcd80e21 100644 --- a/analysis/README.md +++ b/analysis/README.md @@ -85,7 +85,7 @@ <tr> <td><strong><a href="templates/README.md">Analysis Templates</a></strong></td> <td>📋 Templates</td> - <td>8 structured analysis output templates</td> + <td>23 structured analysis output templates (8 core single-type + 15 extended / Tier-C)</td> <td><a href="https://github.com/Hack23/riksdagsmonitor/blob/main/analysis/templates/README.md">View Source</a></td> </tr> </tbody> @@ -278,7 +278,7 @@ graph TB M5["✍️ Style"] end - subgraph "📋 Template Library (8 Templates)" + subgraph "📋 Template Library (8 Core Single-Type Templates + 15 Extended for Tier-C)" T1["🔍 Per-File Intel"] T2["🏷️ Classification"] T3["⚠️ Risk"] diff --git a/analysis/methodologies/README.md b/analysis/methodologies/README.md index 162b81ca7..7de156a94 100644 --- a/analysis/methodologies/README.md +++ b/analysis/methodologies/README.md @@ -91,7 +91,7 @@ <tr> <td><strong><a href="../templates/README.md">Analysis Templates</a></strong></td> <td>📋 
Templates</td> - <td>8 structured analysis output templates</td> + <td>23 structured analysis output templates (8 core single-type + 15 extended / Tier-C)</td> <td><a href="https://github.com/Hack23/riksdagsmonitor/blob/main/analysis/templates/README.md">View Source</a></td> </tr> </tbody> From a3ebff51271087dc523c2c1c10a100ba9c50468a Mon Sep 17 00:00:00 2001 From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com> Date: Wed, 22 Apr 2026 09:37:44 +0000 Subject: [PATCH 21/21] docs: reconcile all count references across documentation (91 skills, 12 workflows, 24 agent files) Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/d975f4fc-62c1-4312-a704-486d93bb9fd6 Co-authored-by: pethers <1726836+pethers@users.noreply.github.com> --- .github/copilot-instructions.md | 8 ++-- AGENTS.md | 2 +- FUTURE_WORKFLOWS.md | 4 +- README.md | 4 +- SKILLS.md | 70 +++++++++++++++++---------------- WORKFLOWS.md | 2 +- analysis/README.md | 4 +- analysis/templates/README.md | 2 + 8 files changed, 51 insertions(+), 45 deletions(-) diff --git a/.github/copilot-instructions.md b/.github/copilot-instructions.md index 5199fd053..be213d2e2 100644 --- a/.github/copilot-instructions.md +++ b/.github/copilot-instructions.md @@ -11,9 +11,9 @@ **Organization**: Hack23 AB **ISMS**: [Hack23 ISMS-PUBLIC](https://github.com/Hack23/ISMS-PUBLIC) **Version**: 0.8.17 -**Agents**: 24 custom agents in `.github/agents/` -**Skills**: 87+ skills in `.github/skills/` (including 12 gh-aw skills) -**Workflows**: 35 GitHub Actions (23 standard + 12 agentic `.lock.yml`) +**Agents**: 24 agent files (14 persona + 9 workflow-specialist + 1 developer-instructions) in `.github/agents/` +**Skills**: 91 skills in `.github/skills/` (including 13 gh-aw skills) +**Workflows**: 45 workflow files (21 standard `.yml` + 12 agentic `.md` sources + 12 compiled `.lock.yml`) **MCP Servers**: 8 configured (riksdag-regering, scb, world-bank, github, filesystem, memory, sequential-thinking, playwright) 
## 🎯 Core Rules @@ -34,7 +34,7 @@ ### 4. Use Available Agents and Skills - 24 agents covering security, docs, quality, frontend, ISMS, deployment, devops, intelligence, news, content, data pipeline, data visualization, task management, UI enhancement, and gh-aw workflows -- 87+ skills auto-load from `.github/skills/` +- 91 skills auto-load from `.github/skills/` ### 5. 🔴 AI FIRST Quality Principle — Iterative Improvement Required > **ALL analysis and content generation MUST follow the AI FIRST principle: never accept first-pass quality.** diff --git a/AGENTS.md b/AGENTS.md index d9739b9ca..7c52d214a 100644 --- a/AGENTS.md +++ b/AGENTS.md @@ -782,7 +782,7 @@ custom_instructions: ` ``` ### 3. Leverage Skills -Agents automatically load relevant skills from `.github/skills/` (87 total skills across 12 categories): +Agents automatically load relevant skills from `.github/skills/` (91 total skills across 12 categories): **Core Infrastructure (9)**: - `hack23-isms-compliance`, `security-by-design`, `static-site-security` diff --git a/FUTURE_WORKFLOWS.md b/FUTURE_WORKFLOWS.md index 86d74316c..35a829b31 100644 --- a/FUTURE_WORKFLOWS.md +++ b/FUTURE_WORKFLOWS.md @@ -856,8 +856,8 @@ graph TB - [ARCHITECTURE.md](ARCHITECTURE.md) — System architecture - [FUTURE_ARCHITECTURE.md](FUTURE_ARCHITECTURE.md) — Architecture roadmap - [FUTURE_SECURITY_ARCHITECTURE.md](FUTURE_SECURITY_ARCHITECTURE.md) — Security evolution -- [AGENTS.md](AGENTS.md) — Custom agent reference (14 agents) -- [SKILLS.md](SKILLS.md) — Skill definitions (87 skills) +- [AGENTS.md](AGENTS.md) — Custom agent reference (14 persona agents + 9 workflow-specialist `.agent.md` + `developer.instructions.md` = 24 files) +- [SKILLS.md](SKILLS.md) — Skill definitions (91 skills) - [Hack23 ISMS-PUBLIC](https://github.com/Hack23/ISMS-PUBLIC) — ISMS policies - [Secure Development Policy](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Secure_Development_Policy.md) — Development security standards diff --git a/README.md 
b/README.md index 52c781c98..a378f1db4 100644 --- a/README.md +++ b/README.md @@ -789,7 +789,7 @@ timeline title Riksdagsmonitor Evolution — 2026 to 2037 section Phase 3 — Foundation (2026) Q1-Q2 : TypeScript migration ✅ - : 10 agentic news workflows ✅ + : 12 agentic news workflows ✅ : 34 GitHub Actions workflows + 10 agent prompt files : Dual deployment (S3 + GitHub Pages) Q3-Q4 : CIA data pipeline integration @@ -838,7 +838,7 @@ timeline graph LR subgraph SGCompleted["✅ Completed"] style SGCompleted fill:#006400,stroke:#00d9ff,color:#e0e0e0 - A[TypeScript Migration<br/>31 modules] --> B[Agentic News Gen<br/>10 workflows] + A[TypeScript Migration<br/>31 modules] --> B[Agentic News Gen<br/>12 workflows] B --> C[14 Languages<br/>RTL support] C --> D[Dual Deploy<br/>S3 + GitHub Pages] end diff --git a/SKILLS.md b/SKILLS.md index 117e25162..45534b885 100644 --- a/SKILLS.md +++ b/SKILLS.md @@ -21,7 +21,7 @@ Skills are: - ✅ **Reusable**: Apply across multiple tasks - ✅ **Context-Aware**: Load only when relevant -## Available Skills (87 Total) ✨ **UPDATED 2026-02-20** +## Available Skills (91 Total) ✨ **UPDATED 2026-04-22** ### Core Infrastructure (9 skills) ⬆️ **EXPANDED** 1. hack23-isms-compliance @@ -92,49 +92,53 @@ Skills are: 56. playwright-testing 57. issue-management -### Data Integration (4 skills) ⬆️ **EXPANDED** +### Data Integration (6 skills) ⬆️ **EXPANDED** 58. riksdag-regering-mcp 59. cia-data-integration 60. **mcp-server-development** ✨ **NEW** (2026-02-20) - MCP server patterns and transport protocols 61. **european-parliament-api** ✨ **NEW** (2026-02-20) - EU Parliament Open Data integration +62. **mcp-gateway-configuration** ✨ **NEW** (2026-04-22) - MCP gateway setup, routing, access control +63. **mcp-gateway-security** ✨ **NEW** (2026-04-22) - Token management, request validation, audit logging -### Business & Marketing (2 skills) -62. marketing -63. business-development +### Business & Marketing (3 skills) +64. marketing +65. 
business-development +66. **business-model-canvas** ✨ **NEW** (2026-04-22) - Business Model Canvas for open-source sustainability ### Language & Localization (1 skill) -64. language-expertise +67. language-expertise ### GitHub Agentic Workflows (12 skills) -65. gh-aw-authentication-credentials -66. gh-aw-containerization -67. gh-aw-continuous-ai-patterns -68. gh-aw-firewall -69. gh-aw-github-actions-integration -70. gh-aw-logging-monitoring -71. gh-aw-mcp-configuration -72. gh-aw-mcp-gateway -73. gh-aw-safe-outputs -74. gh-aw-security-architecture -75. gh-aw-tools-ecosystem -76. gh-aw-workflow-authoring +68. gh-aw-authentication-credentials +69. gh-aw-containerization +70. gh-aw-continuous-ai-patterns +71. gh-aw-firewall +72. gh-aw-github-actions-integration +73. gh-aw-logging-monitoring +74. gh-aw-mcp-configuration +75. gh-aw-mcp-gateway +76. gh-aw-safe-outputs +77. gh-aw-security-architecture +78. gh-aw-tools-ecosystem +79. gh-aw-workflow-authoring ### Journalism & Media (4 skills) -77. editorial-standards -78. investigative-journalism -79. prospective-news-coverage -80. comparative-politics-reporting +80. editorial-standards +81. investigative-journalism +82. prospective-news-coverage +83. comparative-politics-reporting ### Secure Development (3 skills) -81. secure-development-lifecycle -82. secure-development-policy -83. compliance-checklist +84. secure-development-lifecycle +85. secure-development-policy +86. compliance-checklist -### Government & Regulatory (4 skills) -84. global-government-analysis -85. myndigheter-monitoring -86. regulatory-affairs -87. economic-policy-analysis +### Government & Regulatory (5 skills) ⬆️ **EXPANDED** +87. global-government-analysis +88. myndigheter-monitoring +89. regulatory-affairs +90. economic-policy-analysis +91. 
**seo-best-practices** ✨ **NEW** (2026-04-22) - Canonical URLs, sitemap, robots.txt, technical SEO

---

@@ -1635,7 +1639,7 @@ Many tasks benefit from combining multiple skills:

---

-**Last Updated**: 2026-02-20
-**Total Skills**: 87
-**New Skills (2026-02-20)**: 18 skills added from Hack23 repos (security, development, governance, platform, design)
+**Last Updated**: 2026-04-22
+**Total Skills**: 91
+**New Skills (2026-02-20 → 2026-04-22)**: 18 skills added from Hack23 repos (security, development, governance, platform, design); +4 since 2026-02-20 (`seo-best-practices`, `mcp-gateway-configuration`, `mcp-gateway-security`, `business-model-canvas` per catalog reconciliation)
 **Maintained by**: Hack23 AB
diff --git a/WORKFLOWS.md b/WORKFLOWS.md
index ba22495ad..b8efe023a 100644
--- a/WORKFLOWS.md
+++ b/WORKFLOWS.md
@@ -1321,7 +1321,7 @@ flowchart TB
 - [CRA-ASSESSMENT.md](CRA-ASSESSMENT.md) — EU Cyber Resilience Act conformity
 - [FUTURE_WORKFLOWS.md](FUTURE_WORKFLOWS.md) — Future workflow projections
 - [AGENTS.md](AGENTS.md) — Custom agent reference (14 agents)
-- [SKILLS.md](SKILLS.md) — Skill definitions (87 skills)
+- [SKILLS.md](SKILLS.md) — Skill definitions (91 skills)

 ### External Tools
 - [step-security/harden-runner](https://github.com/step-security/harden-runner) — Workflow security
diff --git a/analysis/README.md b/analysis/README.md
index cfcd80e21..463c198ef 100644
--- a/analysis/README.md
+++ b/analysis/README.md
@@ -153,7 +153,7 @@
 | Validate output format (quality gate) | Fill template sections with generated content |
 | Move/rename files | Produce "placeholder" analysis that looks real |

-**The AI agent reads all 6 methodology guides, reads all 8 templates, reads the actual data, and produces genuine analytical content based on evidence found in the documents.**
+**The AI agent reads the 6 core methodology guides (of 11 total) and the 8 core output templates (of 23 total), reads the actual data, and produces genuine analytical content
based on evidence found in the documents.** The remaining 5 methodology files (electoral-domain, political-style-guide, strategic-extensions, structural-metadata, ai-driven-analysis-guide) provide domain-specific and meta-methodology context; the remaining 15 templates cover Tier-C aggregation (scenario-analysis, executive-brief, coalition-mathematics, election-2026-analysis, historical-parallels, comparative-international, devils-advocate, forward-indicators, implementation-feasibility, intelligence-assessment, media-framing-analysis, methodology-reflection, voter-segmentation, data-download-manifest, cross-reference-map). **Fallback mechanism:** If AI analysis fails or produces unusable output (detected by the quality gate bash check in `.github/prompts/` — see the README for the module catalogue), the workflow should: 1. Commit a minimal `data-download-manifest.md` documenting what was downloaded @@ -244,7 +244,7 @@ flowchart LR Analysis artifacts are **genuine intelligence products** — not summaries or reformatted data — that enable: - 🔄 **Workflow composition**: Upstream agents deposit analysis; downstream agents consume it -- 📐 **Consistent methodology**: 6 frameworks + 8 templates enforce analytical rigor +- 📐 **Consistent methodology**: 6 core frameworks (of 11 methodology files) + 8 core templates (of 23 total output templates) enforce analytical rigor - 📊 **Full data analysis**: Every downloaded MCP file receives per-file deep analysis - 🧠 **Reusable intelligence**: Cross-workflow pattern sharing and knowledge accumulation - 🎯 **Quality assurance**: Minimum 7.0/10 quality gate before article generation diff --git a/analysis/templates/README.md b/analysis/templates/README.md index 1511ac6d3..a78bcd0aa 100644 --- a/analysis/templates/README.md +++ b/analysis/templates/README.md @@ -645,6 +645,8 @@ sequenceDiagram ## 🆕 v2.3 Common Improvements (All Templates) +> **Scope note:** "All 8 templates" below refers to the original Family A core templates (A2–A9). 
Families B–E templates (added in v3.0+) inherit these improvements where applicable. See [Master Template Catalog](#-master-template-catalog--family-ae) for the complete 23-template inventory. + All 8 templates were updated in v2.3 (2026-06-01) with the following cross-cutting improvements: ### 🗳️ Election 2026 Implications Section