Commit 8e0b33c

fix: deterministic audit metrics via run_summary.json cache and workflow-logs/ exclusion (#26148)
1 parent df1aeec commit 8e0b33c

5 files changed

Lines changed: 381 additions & 33 deletions

File tree

Lines changed: 82 additions & 0 deletions
# ADR-26148: Deterministic Audit Metrics via run_summary.json Cache and workflow-logs/ Exclusion

**Date**: 2026-04-14
**Status**: Draft
**Deciders**: pelikhan, Copilot

---
## Part 1 — Narrative (Human-Friendly)

### Context

The `audit` command reported wildly inconsistent `token_usage` and `turns` on repeated invocations for the same workflow run (observed: 9 turns / 381k tokens on one call, 22 turns / 4.7M tokens on another). Two compounding bugs caused this: (1) `AuditWorkflowRun` unconditionally re-processed all local log files on every call, even when a fully computed `run_summary.json` was already on disk; and (2) the log-file walk in `extractLogMetrics` did not exclude the `workflow-logs/` directory, which `downloadWorkflowRunLogs` populates with GitHub Actions step output. Those files capture the same agent stdout already present in the agent artifact logs, inflating token counts by roughly 12×.
### Decision

We will adopt a **cache-first strategy** for `AuditWorkflowRun`: before performing any API calls or log processing, check whether a valid `run_summary.json` exists on disk (validated by CLI version). If a cache hit is found, reconstruct `ProcessedRun` from the cached summary and return immediately via a shared `renderAuditReport` helper, bypassing all re-download and re-parse logic. We will additionally **exclude the `workflow-logs/` directory** from the `extractLogMetrics` log walk by returning `filepath.SkipDir` whenever the walk visits a directory named `workflow-logs`, preventing GitHub Actions runner captures from being counted as agent artifact data. Together, these two changes ensure that repeated `audit` calls for the same run produce identical metrics.
### Alternatives Considered

#### Alternative 1: Invalidate and Overwrite the Cache on Every Call

Rather than treating the cached `run_summary.json` as authoritative, re-process logs on every call and overwrite the cache. This would keep the cache "fresh" but would perpetuate the inconsistency problem: log re-processing can produce different values depending on which files are present at the time (e.g., if `workflow-logs/` has been populated between calls). This was rejected because consistency of audit metrics across repeated calls is the primary requirement.

#### Alternative 2: Exclude `workflow-logs/` Files by Name Pattern Instead of Directory Skip

Rather than skipping the entire `workflow-logs/` directory with `filepath.SkipDir`, selectively exclude individual files whose names match known GitHub Actions runner-log patterns (e.g., `*_Run log step.txt`). This would be fragile: GitHub Actions file naming conventions can change, and any unrecognized file would silently inflate metrics again. Skipping the entire directory by name is simpler, robust, and aligns with how `downloadWorkflowRunLogs` places its output.

#### Alternative 3: Store Canonical Metrics in a Separate Lock File

Record only the metrics (token usage, turns) in a dedicated lock file separate from `run_summary.json`, and read that lock file on subsequent calls. This adds file-system complexity without meaningful benefit over reusing the existing `run_summary.json` structure. The current `loadRunSummary` already performs CLI-version validation, providing a clean automatic invalidation mechanism.
### Consequences

#### Positive

- Repeated `audit` calls for the same run are now deterministic and produce identical output.
- The cache-hit path avoids all API calls and file re-parsing, making subsequent audits significantly faster.
- The `renderAuditReport` helper function eliminates the duplicated render and finalization logic that previously existed in both the fresh-download and (now) cache-hit code paths.
- Cache invalidation on CLI upgrade is automatic via the existing `CLIVersion` check in `loadRunSummary`.

#### Negative

- The first successful `audit` call becomes the canonical source of truth. If log files were incomplete on the first run (e.g., partial download), the cached metrics will be wrong until the cache is manually cleared or the CLI is upgraded.
- The `workflow-logs/` exclusion is a directory-name-based heuristic. If `downloadWorkflowRunLogs` ever changes the output directory name, the exclusion silently stops working.
- Adding a new top-level helper (`renderAuditReport`) increases the surface area of the package's internal API.

#### Neutral

- The `run_summary.json` format is unchanged; only the read/write ordering is adjusted (save-before-render in the fresh-download path).
- Existing tests for `loadRunSummary` and `saveRunSummary` remain valid; new regression tests were added for the cache-hit path and the `workflow-logs/` exclusion.
---

## Part 2 — Normative Specification (RFC 2119)

> The key words **MUST**, **MUST NOT**, **REQUIRED**, **SHALL**, **SHALL NOT**, **SHOULD**, **SHOULD NOT**, **RECOMMENDED**, **MAY**, and **OPTIONAL** in this section are to be interpreted as described in [RFC 2119](https://www.rfc-editor.org/rfc/rfc2119).
### Cache-First Audit Strategy

1. Implementations **MUST** check for a valid `run_summary.json` on disk before initiating any API calls or log-file processing in `AuditWorkflowRun`.
2. Implementations **MUST** treat a cache hit (valid `run_summary.json` with matching `CLIVersion`) as the authoritative source of metrics and return immediately without re-processing logs.
3. Implementations **MUST NOT** overwrite an existing `run_summary.json` when serving a cache hit; the cached file **MUST** remain unmodified.
4. Implementations **MUST** persist `run_summary.json` to disk before calling the render step in the fresh-download path, so that a render failure does not prevent future cache hits.
5. Implementations **SHOULD** log a message (at the appropriate verbosity level) indicating that a cached summary is being used, including the original `ProcessedAt` timestamp.
### Log Metric Extraction

1. Implementations **MUST** skip the `workflow-logs/` directory (and its contents) when walking the run output directory in `extractLogMetrics`.
2. Implementations **MUST** use `filepath.SkipDir` (or equivalent) to exclude the entire `workflow-logs/` subtree, not individual files within it.
3. Implementations **MUST NOT** include token-usage data found in `workflow-logs/` in the `LogMetrics.TokenUsage` or `LogMetrics.Turns` totals.
4. Implementations **MAY** log a debug message when skipping the `workflow-logs/` directory to aid in future diagnostics.
### Shared Render Path

1. Implementations **MUST** use a single shared function (currently `renderAuditReport`) to build and emit the audit report, invoked by both the cache-hit path and the fresh-download path.
2. The shared render function **MUST NOT** re-extract metrics from log files; it **MUST** use only the metrics passed to it by the caller.
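The second requirement amounts to a purity constraint on the render step: given the same inputs, both paths emit the same report. A minimal sketch, with `ProcessedRun` and `LogMetrics` reduced to illustrative fields and the report collapsed to one line:

```go
package main

import "fmt"

// Reduced stand-ins for the real pkg/cli types.
type LogMetrics struct {
	TokenUsage int
	Turns      int
}

type ProcessedRun struct {
	RunID int64
}

// renderAuditReport consumes only the metrics its caller passes in and
// performs no log re-extraction, so the cache-hit and fresh-download
// paths produce identical output for the same inputs. (Sketch only;
// the real helper also renders analyses and run comparisons.)
func renderAuditReport(run ProcessedRun, metrics LogMetrics) string {
	return fmt.Sprintf("run %d: %d tokens over %d turns", run.RunID, metrics.TokenUsage, metrics.Turns)
}

func main() {
	m := LogMetrics{TokenUsage: 381000, Turns: 9}
	// Both call sites pass metrics explicitly; neither lets the
	// renderer re-read the filesystem for them.
	fmt.Println(renderAuditReport(ProcessedRun{RunID: 26148}, m))
}
```

Keeping metric extraction out of the renderer is what makes requirement 1 meaningful: sharing the function would not guarantee identical output if it could still re-derive metrics from whatever files happen to be on disk.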
### Conformance

An implementation is considered conformant with this ADR if it satisfies all **MUST** and **MUST NOT** requirements above. Failure to meet any **MUST** or **MUST NOT** requirement constitutes non-conformance.

---

*This is a DRAFT ADR generated by the [Design Decision Gate](https://github.com/github/gh-aw/actions/runs/24396807146) workflow. The PR author must review, complete, and finalize this document before the PR can merge.*

pkg/cli/audit.go

Lines changed: 79 additions & 32 deletions
```diff
@@ -189,6 +189,41 @@ func AuditWorkflowRun(ctx context.Context, runID int64, owner, repo, hostname st
 		return auditJobRun(runID, jobID, stepNumber, owner, repo, hostname, runOutputDir, verbose, jsonOutput)
 	}

+	// Use cached run summary when available to ensure deterministic metrics across repeated calls.
+	// Re-processing the same log files can produce different results (e.g. when GitHub's API
+	// returns aggregated data that differs from the locally-stored firewall logs), so we always
+	// prefer the first fully-processed summary written to disk. The cache is automatically
+	// invalidated whenever the CLI version changes (see loadRunSummary).
+	if summary, ok := loadRunSummary(runOutputDir, verbose); ok {
+		auditLog.Printf("Using cached run summary for run %d (processed at %s)", runID, summary.ProcessedAt.Format(time.RFC3339))
+		if verbose {
+			fmt.Fprintln(os.Stderr, console.FormatInfoMessage(fmt.Sprintf("Using cached run summary for run %d (processed at %s)", runID, summary.ProcessedAt.Format(time.RFC3339))))
+		}
+		processedRun := ProcessedRun{
+			Run:                     summary.Run,
+			AwContext:               summary.AwContext,
+			TaskDomain:              summary.TaskDomain,
+			BehaviorFingerprint:     summary.BehaviorFingerprint,
+			AgenticAssessments:      summary.AgenticAssessments,
+			AccessAnalysis:          summary.AccessAnalysis,
+			FirewallAnalysis:        summary.FirewallAnalysis,
+			PolicyAnalysis:          summary.PolicyAnalysis,
+			RedactedDomainsAnalysis: summary.RedactedDomainsAnalysis,
+			MissingTools:            summary.MissingTools,
+			MissingData:             summary.MissingData,
+			Noops:                   summary.Noops,
+			MCPFailures:             summary.MCPFailures,
+			TokenUsage:              summary.TokenUsage,
+			GitHubRateLimitUsage:    summary.GitHubRateLimitUsage,
+			JobDetails:              summary.JobDetails,
+		}
+		// Override the cached LogsPath with the current runOutputDir so that downstream
+		// file reads (created items, aw_info, etc.) resolve correctly even if the run
+		// directory has been moved or copied since the summary was first written.
+		processedRun.Run.LogsPath = runOutputDir
+		return renderAuditReport(processedRun, summary.Metrics, summary.MCPToolUsage, runOutputDir, owner, repo, hostname, verbose, parse, jsonOutput)
+	}
+
 	// Check if we have locally cached artifacts first
 	hasLocalCache := fileutil.DirExists(runOutputDir) && !fileutil.IsDirEmpty(runOutputDir)
```
```diff
@@ -416,6 +451,50 @@ func AuditWorkflowRun(ctx context.Context, runID int64, owner, repo, hostname st
 	processedRun.BehaviorFingerprint = behaviorFingerprint
 	processedRun.AgenticAssessments = agenticAssessments

+	// Save run summary for caching future audit runs
+	summary := &RunSummary{
+		CLIVersion:              GetVersion(),
+		RunID:                   run.DatabaseID,
+		ProcessedAt:             time.Now(),
+		Run:                     run,
+		Metrics:                 metrics,
+		AwContext:               processedRun.AwContext,
+		TaskDomain:              processedRun.TaskDomain,
+		BehaviorFingerprint:     processedRun.BehaviorFingerprint,
+		AgenticAssessments:      processedRun.AgenticAssessments,
+		AccessAnalysis:          accessAnalysis,
+		FirewallAnalysis:        firewallAnalysis,
+		PolicyAnalysis:          policyAnalysis,
+		RedactedDomainsAnalysis: redactedDomainsAnalysis,
+		MissingTools:            missingTools,
+		MissingData:             missingData,
+		Noops:                   noops,
+		MCPFailures:             mcpFailures,
+		MCPToolUsage:            mcpToolUsage,
+		TokenUsage:              tokenUsageSummary,
+		GitHubRateLimitUsage:    rateLimitUsage,
+		ArtifactsList:           artifacts,
+		JobDetails:              jobDetails,
+	}
+
+	if err := saveRunSummary(runOutputDir, summary, verbose); err != nil {
+		if verbose {
+			fmt.Fprintln(os.Stderr, console.FormatWarningMessage(fmt.Sprintf("Failed to save run summary: %v", err)))
+		}
+	}
+
+	return renderAuditReport(processedRun, metrics, mcpToolUsage, runOutputDir, owner, repo, hostname, verbose, parse, jsonOutput)
+}
+
+// renderAuditReport builds and renders the audit report from a fully-populated processedRun.
+// It is called both when serving from a cached run summary and after a fresh processing pass,
+// ensuring that the two paths produce identical output.
+func renderAuditReport(processedRun ProcessedRun, metrics LogMetrics, mcpToolUsage *MCPToolUsageData, runOutputDir string, owner, repo, hostname string, verbose bool, parse bool, jsonOutput bool) error {
+	runID := processedRun.Run.DatabaseID
+
+	currentCreatedItems := extractCreatedItemsFromManifest(runOutputDir)
+	processedRun.Run.SafeItemsCount = len(currentCreatedItems)
+
 	currentSnapshot := buildAuditComparisonSnapshot(processedRun, currentCreatedItems)
 	comparison := buildAuditComparisonForRun(processedRun, currentSnapshot, runOutputDir, owner, repo, hostname, verbose)
```
```diff
@@ -474,38 +553,6 @@ func AuditWorkflowRun(ctx context.Context, runID int64, owner, repo, hostname st
 		}
 	}

-	// Save run summary for caching future audit runs
-	summary := &RunSummary{
-		CLIVersion:              GetVersion(),
-		RunID:                   run.DatabaseID,
-		ProcessedAt:             time.Now(),
-		Run:                     run,
-		Metrics:                 metrics,
-		AwContext:               processedRun.AwContext,
-		TaskDomain:              processedRun.TaskDomain,
-		BehaviorFingerprint:     processedRun.BehaviorFingerprint,
-		AgenticAssessments:      processedRun.AgenticAssessments,
-		AccessAnalysis:          accessAnalysis,
-		FirewallAnalysis:        firewallAnalysis,
-		PolicyAnalysis:          policyAnalysis,
-		RedactedDomainsAnalysis: redactedDomainsAnalysis,
-		MissingTools:            missingTools,
-		MissingData:             missingData,
-		Noops:                   noops,
-		MCPFailures:             mcpFailures,
-		MCPToolUsage:            mcpToolUsage,
-		TokenUsage:              tokenUsageSummary,
-		GitHubRateLimitUsage:    rateLimitUsage,
-		ArtifactsList:           artifacts,
-		JobDetails:              jobDetails,
-	}
-
-	if err := saveRunSummary(runOutputDir, summary, verbose); err != nil {
-		if verbose {
-			fmt.Fprintln(os.Stderr, console.FormatWarningMessage(fmt.Sprintf("Failed to save run summary: %v", err)))
-		}
-	}
-
 	// Display logs location (only for console output)
 	if !jsonOutput {
 		absOutputDir, _ := filepath.Abs(runOutputDir)
```