Merged
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -30,6 +30,7 @@ This project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.htm
- **Polars support**: `polars.DataFrame` and `polars.LazyFrame` now work in `plot()`, `materialize_nodes()`, `get_degrees()`, `get_indegrees()`, `get_outdegrees()`, and `hypergraph()`. Polars is an optional dependency — no behavior change when not installed. Upload path uses efficient Arrow conversion (`to_arrow()` with schema-metadata stripping and memoization); compute/hypergraph paths coerce to pandas at entry. `LazyFrame` is materialized via `.collect()` at each boundary. Adds `test_polars.py` with 17 tests; skips gracefully when polars is absent (#1133).

### Internal
- **AI skills / workflow**: Updated the local `plan` skill template to reflect current pygraphistry validation commands, and added a new `review` skill with phased multi-wave review flow, adversarial confirmation, and explicit GPU validation guidance (local `docker/test-gpu-local.sh` or `dgx-spark` evidence path when no local GPU is available).
- **GFQL / Cypher lowering — secondary whole-row carry survives chained reentry boundaries (#1256, #989, #999 partial)**: Two extensions to `_demote_secondary_whole_row_aliases` in `graphistry/compute/gfql/cypher/lowering.py` so multi-alias carry survives a chained reentry compile. (1) **Forwarding-item drop** — bare-identifier projection items in downstream `WITH` stages whose name is a secondary alias are dropped at compile time (the same intent as slice 4.3c, integrated into the post-#1071 active rewrite path). Without this, a forwarding pattern like `WITH a, x, friend, ...` triggered the bare-ref failfast even though the carry already lives as a hidden scalar on the reentry-source's row table. (2) **Hidden-column forwarding** — for every `(secondary_alias, prop)` reference collected from trailing clauses, the synthesized `__cypher_reentry_<S>_<X>__` hidden alias is appended as a bare passthrough item to every downstream `WITH` stage so each recursive `compile_cypher_query` call sees it as a scalar carry. Without this, the inner compile failed alias resolution on the rewritten hidden identifier (`Unknown Cypher alias '__cypher_reentry_x_id__' in RETURN clause`) once a chained reentry stage narrowed the projection scope. New positive tests cover (a) chained reentry where the trailing MATCH continues to use the same primary alias, (b) chained reentry where the source is rebound between boundaries (`MATCH (a)-[:R]->(friend) ... MATCH (friend)-[:S]->(c)`), and (c) multi-alias `DISTINCT` forwarding through a single boundary. New failfast test pins the remaining gap — a trailing MATCH that does not start from any carried alias (free-form intermediate MATCH, the LDBC SNB IC3 prefix shape) — to the existing scoped error so future closure of that lane is regression-locked. Closes the chained-reentry portion of #1256; the free-form intermediate-MATCH case remains open follow-up.
- **GFQL / Cypher lowering — `ReentryPlan` IR + multi-alias whole-row carry slices (#989, #1026, #999 partial)**: Introduces an explicit `ReentryPlan` + `CarriedAlias` dataclass (`graphistry/compute/gfql/cypher/reentry_plan.py`) as the compile-time contract between a prefix `WITH` stage and the trailing `MATCH`, replacing the implicit handshake spread across tuple returns from `_bounded_reentry_carry_columns`, the `scalar_reentry_alias` / `scalar_reentry_columns` fields on `CompiledCypherExecutionExtras`, and runtime contract re-extraction in `_compiled_query_reentry_contract` (#987 step 1). Plan is exposed via `compiled_query.reentry_plan` and threaded through `_map_terminal_reentry_query` + `_attach_graph_context`. Builds three additional admit slices on top of the #1071 lift: (1) **slice 4.3a** lifts the residual single-whole-row gate at the compile site (`lowering.py`) and runtime contract (`gfql_unified.py`) so `WITH a, x` admits whenever only the trailing-MATCH source alias is referenced downstream — `ReentryPlan.aliases` records all whole-row aliases as `CarriedAlias` entries, with downstream non-source-alias bare references emitting an actionable failfast pointing to #989; (2) **slice 4.3b** adds a compile-time prefix rewrite (`_rewrite_multi_whole_row_prefix`) that turns `WITH a, x` into `WITH a, x.id AS __carry_x__id__` for every property of `x` referenced in trailing clauses (collected via `_collect_non_source_alias_property_refs` walking `WhereClause.expr_tree`, `ProjectionStage.where`, `ReturnItem`, `OrderItem`), and AST-rewrites trailing `<non_source>.<prop>` references to property access on the reentry-alias's hidden column — closes the multi-alias case of `#1026` regression-lock (`MATCH (a), (x) WITH a, x OPTIONAL MATCH (a)-->(b) RETURN x.id, b.id` now executes correctly with left-outer-join semantics, previously raised `GFQLValidationError`); (3) **slice 4.3c** drops bare carried-alias items at compile time when downstream `WITH a, x, y, collect(...)` re-projects them, so the bare-ref failfast does not false-positive on forwarding patterns. Adds 5 new positive/structure tests + 2 failfast-scope tests, retargets one prior `pytest.raises(GFQLValidationError)` regression-lock to a positive row assertion, and adds the multi-alias `OPTIONAL MATCH` regression test `test_issue_1026_multi_alias_with_optional_match_carries_secondary_property`. Cross-reentry-boundary forwarding (carry survival across `MATCH (a)-[:KNOWS]-(friend)`, the IC3 LDBC SNB shape) remains follow-up work tracked as #999 / slice 4.3d (#989, #1026, #999, #987).
- **GFQL / Cypher lowering — multi-alias carry through `WITH` before MATCH re-entry (#1071)**: Lifted the residual constraint that `WITH` before a MATCH re-entry must project exactly one whole-row node alias. The local Cypher compiler now accepts patterns like `MATCH (p)-[:KNOWS]-(friend) WITH p, friend MATCH (friend)-[:IS_LOCATED_IN]->(c) RETURN p.firstName, c.name` (LDBC SNB IC1 shape). Implementation pre-rewrites the prefix `WITH` in `_compile_bounded_reentry_query`: the trailing-MATCH primary alias is preserved as the sole whole-row carry; secondary aliases are demoted to scalar property carries via synthesized `S.X AS __cypher_reentry_<S>_<X>__` items, with downstream `S.X` references rewritten to bare hidden identifiers that compose with the existing single-alias carried-scalar machinery (#1047 / #1068). Returning a secondary alias as a whole-row entity (`RETURN s`) and re-binding a secondary alias as a node variable in the trailing MATCH remain unsupported with precise errors. Two prior failfast tests retargeted to assert correct multi-alias behavior; new tests cover IC1-shape, three-alias carry, and the secondary-whole-row-return rejection.
6 changes: 4 additions & 2 deletions agents/skills/plan/SKILL.md
@@ -70,7 +70,7 @@ STOP → reload plan → find last ✅ step → continue from there

```markdown
# [Task] Plan
🔴 COLD START: reload skill first → `agents/skills/plan/SKILL.md`
🔴 COLD START: reload skill first → `.agents/skills/plan/SKILL.md`
File: `plans/[task]/plan.md` | Date: [DATE TZ] | Branch: [branch] | PR#[N] → [target] | Base: [branch @ SHA]

## Context (read-only)
@@ -90,8 +90,10 @@ Record ALL commands verbatim — exact flags, paths, env vars. No paraphrasing.
Preferred:
```bash
cat plans/[task]/plan.md | head -200 # reload plan
PYTHONPATH=. uv run --no-project --with lark --with pytest python -m pytest [test_path] -q
python -m pytest -q [test_path] # targeted tests
./bin/test-minimal-lite.sh # fast repo sentinel tests
./bin/ruff.sh && ./bin/typecheck.sh # lint + types
./bin/test.sh [optional_test_args] # broader test pass
gh pr checks [PR] --watch # CI
gh run view [RUN_ID] --json status,conclusion,jobs
git log --oneline -10
251 changes: 251 additions & 0 deletions agents/skills/review/SKILL.md
@@ -0,0 +1,251 @@
---
name: review
description: |
Structured PR review for pygraphistry. Input: PR number/branch (default current branch PR).
Output: findings and convergence artifacts under plans/<task>/.
Method: multi-wave, evidence-first review across spec, correctness, tests, security,
code quality, DRY, concurrency, performance, architecture, operability, and conventions.
---

# PR Review (pygraphistry)

## Invocation

```text
/review [<PR-number-or-branch>] [mode=findings|pr-comments|both] [fixes=deferred|inline]
```

Defaults:
- Target: current branch PR (`gh pr view --json number,headRefName,baseRefName,url`)
- `mode=findings`
- `fixes=deferred`

`mode`:
- `findings`: local artifacts only under `plans/<task>/`
- `pr-comments`: draft locally, post only after explicit user confirmation in the same session
- `both`: run findings then comment flow

`fixes`:
- `deferred`: read-only review
- `inline`: after each converged wave, apply confirmed `BLOCKER`/`IMPORTANT` fixes in separate commits
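The invocation grammar above can be sketched as a small argument parser. This is an illustrative helper, not part of the skill; the function name `parse_review_args` and the variable names are assumptions, but the defaults match the spec.

```shell
# Hypothetical parser for `/review [<target>] [mode=...] [fixes=...]` (sketch only).
target=""            # PR number or branch; empty means "current branch PR"
mode="findings"      # default per spec
fixes="deferred"     # default per spec

parse_review_args() {
  for arg in "$@"; do
    case "$arg" in
      mode=*)  mode="${arg#mode=}" ;;    # strip the "mode=" prefix
      fixes=*) fixes="${arg#fixes=}" ;;  # strip the "fixes=" prefix
      *)       target="$arg" ;;          # anything else is the PR/branch target
    esac
  done
}

parse_review_args 1256 mode=both fixes=inline
echo "target=$target mode=$mode fixes=$fixes"
```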

## Runtime Assumptions

- Run from repo root.
- `gh` is authenticated (`gh auth status`).
- Local branch reflects PR head (`origin/<base>...HEAD` matches intended review scope).
- `plans/` is local working memory and normally gitignored.

## Plan-First Requirement

Always use the plan skill flow:

1. If `plans/<task>/plan.md` exists, reuse it and append a review section.
2. Else reload `.agents/skills/plan/SKILL.md` and create `plans/<task>/plan.md`.
3. Record PR metadata, `mode`, `fixes`, and timestamp.
4. Reload plan before every step; update plan immediately after every step.

`<task>`: prefer `review-pr-<N>` or `<branch>-review`.
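The naming preference above could be automated with a small helper; `review_task_slug` is a hypothetical name, and the all-digits heuristic for "PR number vs branch" is an assumption.

```shell
# Sketch: derive the plans/<task> slug from a PR number or a branch name.
review_task_slug() {
  ref="$1"
  case "$ref" in
    *[!0-9]*) printf '%s-review\n' "$ref" ;;   # contains non-digits -> branch -> <branch>-review
    *)        printf 'review-pr-%s\n' "$ref" ;; # all digits -> PR number -> review-pr-<N>
  esac
}

review_task_slug 1256        # review-pr-1256
review_task_slug my-branch   # my-branch-review
```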

## Phase 0: Resolve Scope + Stack Context

1. Resolve PR context (number/title/url/head/base).
2. Record stack context:

```bash
gh pr view <PR> --json baseRefName,headRefName,title,body
gh pr list --base <headRefName>
```

3. If stacked, explicitly mark out-of-scope upstream/downstream work in `plan.md`.
4. Set diff range reference: `origin/<base>...HEAD`.

## Phase 1: Research Criteria Before Findings

Create `plans/<task>/research/` with:
- `context.md`
- `policies.md`
- `credentials.md`
- `canvas-<dimension>.md` (only for dimensions that apply)

### 1a) Collect context + changed files

```bash
gh pr view <PR> --json number,title,headRefName,baseRefName,url,body
git fetch origin <baseRefName> <headRefName>
git diff --name-only origin/<base>...HEAD
git log --oneline origin/<base>..HEAD
```

Record linked issue/spec refs from PR body and summarize PR intent.

### 1b) Discover applicable repo rules

Always inspect:
- `AGENTS.md`, `DEVELOP.md`, `ARCHITECTURE.md`, `CONTRIBUTING.md`, `README.md`, `CHANGELOG.md`
- `docs/source/**` relevant to changed areas
- CI/workflow context under `.github/workflows/` when check or publish behavior is touched
- Tooling configs (`pyproject.toml`, `mypy.ini`, `pytest.ini`) and helper scripts in `bin/`

Walk up from each changed file to repo root and include nearby `.md` guidance.

### 1c) Credentials gate (always, early)

Run before wave analysis:

```bash
git diff origin/<base>...HEAD -- '*.env*' 'custom.env*' 'docker-compose*.y*ml' '*.conf' '*.config.*' | \
grep -iE '(api_key|secret|token|password|bearer|authorization|aws_|azure_|openai_|anthropic_).{0,4}=' || \
echo "[clean] no obvious credential strings"

git diff origin/<base>...HEAD | \
grep -oE '(sk-[A-Za-z0-9]{20,}|AKIA[0-9A-Z]{16}|Bearer\s+[A-Za-z0-9._-]{20,})' | head
```

If any likely secret is found, raise a `BLOCKER` and stop until it is surfaced to the user.

## Phase 2: Multi-Wave Review Loop

Run waves until 2 consecutive waves show no significant advance.

Severity:
- `BLOCKER`: merge must not proceed
- `IMPORTANT`: should fix before merge
- `SUGGESTION`: non-blocking improvement

Suggested folder layout:

```text
plans/<task>/waves/wave-<N>/
<dimension>/findings-<file-slug>.md
<dimension>/report.md
adversarial/<finding-id>.md
adversarial/report.md
wave-report.md
```
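Materializing that layout before a wave can be a one-liner per directory; a minimal sketch, assuming an example task slug and an example subset of dimensions (both illustrative):

```shell
# Sketch: create the wave folder skeleton described above.
cd "$(mktemp -d)"              # scratch dir so the example is self-contained
task="review-pr-1256"          # example slug
wave=1
base="plans/$task/waves/wave-$wave"

mkdir -p "$base/adversarial"
for dim in correctness testing security; do  # only the dimensions that apply
  mkdir -p "$base/$dim"
done
touch "$base/wave-report.md"
```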

### Wave gate (start of each wave)

1. Reload `.agents/skills/plan/SKILL.md`
2. Reload `plans/<task>/plan.md`
3. Confirm current diff SHA and wave number
4. If prior wave had substantive findings, perform both:
- targeted amplification pass on prior findings/fixes
- clean-slate full pass on current diff

### Dimensions

Apply only relevant dimensions per PR:
- Spec conformance
- Correctness
- Testing
- Security
- Code quality
- DRY / reuse
- Concurrency / parallelism
- Performance
- Architecture
- Operability
- Repo conventions

Guidance:
- Keep dimensions independent (avoid blended "general review" prompts).
- For file-heavy diffs, parallelize by `(dimension, file)` and aggregate.
- Verify pre-existing patterns are not misreported as regressions:

```bash
git show origin/<base>:<file>
```

### Pygraphistry-specific review checks

- Test coverage should mirror changed areas in `graphistry/tests/**`.
- Prefer behavioral tests over implementation-detail assertions.
- Run focused validation before escalating severity when feasible:

```bash
python -m pytest -q [targeted_test]
./bin/ruff.sh [changed_path_or_pkg]
./bin/typecheck.sh [changed_path_or_pkg]
```

- For GPU-affecting PRs, require GPU-path validation evidence:
- Local GPU path: `cd docker && ./test-gpu-local.sh [targeted_test_or_path]`
- No local GPU available: run equivalent GPU validation on `dgx-spark` and record exact command + output artifact path in wave evidence.
- If startup/runtime claims are made, verify entrypoints/scripts in `bin/` and workflow behavior.
- For docs-only PRs, prioritize spec/documentation accuracy and navigability (toctree links, anchors, cross-refs).

### Per-file findings format

```markdown
## <file>
### Findings
- [BLOCKER] <description> — <file>:<line>
- [IMPORTANT] <description> — <file>:<line>
- [SUGGESTION] <description> — <file>:<line>

### Evidence
- <diff/code/test evidence with file:line references>
```

### Adversarial pass (required)

For each finding, attempt disproof and write verdict:
- `CONFIRMED`
- `DOWNGRADED`
- `REJECTED`

Only `CONFIRMED` and `DOWNGRADED` findings carry forward.

Each adversarial file should include:
- original claim
- disprove attempt
- verdict
- concrete proof (`file:line`, diff excerpt, or test output)
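The carry-forward rule above can be checked mechanically if each adversarial file records its verdict on a `verdict:` line; that line format is an assumption for this sketch, not something the skill mandates.

```shell
# Sketch: list adversarial files whose verdict carries forward.
cd "$(mktemp -d)"              # scratch dir with synthetic verdict files
mkdir -p adversarial
printf 'verdict: CONFIRMED\n'  > adversarial/F1.md
printf 'verdict: REJECTED\n'   > adversarial/F2.md
printf 'verdict: DOWNGRADED\n' > adversarial/F3.md

# -l prints only filenames whose contents match the verdict pattern
carried=$(grep -lE 'verdict: (CONFIRMED|DOWNGRADED)' adversarial/*.md)
echo "$carried"
```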

### Wave summary

Each wave writes `wave-report.md` with:
- inputs (diff SHA/range, dimensions, files)
- post-adversarial counts by severity
- fixes applied (if `fixes=inline`)
- finding resolution ledger (`FIXED`, `UNFIXED`, `DEFERRED`)
- convergence signal
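The post-adversarial counts by severity fall out of the per-file findings format, since each finding line carries a `[SEVERITY]` tag; a minimal sketch over a synthetic findings file (the file contents are made up for illustration):

```shell
# Sketch: compute severity counts for wave-report.md from tagged finding lines.
cd "$(mktemp -d)"
cat > findings.md <<'EOF'
- [BLOCKER] secret committed — custom.env:3
- [IMPORTANT] missing test — graphistry/tests/test_x.py:1
- [SUGGESTION] rename var — lowering.py:42
- [IMPORTANT] off-by-one — lowering.py:99
EOF

for sev in BLOCKER IMPORTANT SUGGESTION; do
  n=$(grep -c "\[$sev\]" findings.md)   # count lines tagged with this severity
  echo "$sev: $n"
done
```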

## Phase 3: Convergence Report

Write `plans/<task>/final-report.md` with:
- summary (waves run, convergence status)
- per-wave table (new findings + advance signal)
- resolution matrix (status of non-rejected findings)
- sections: `Blockers`, `Important`, `Suggestions`, `Rejected/False Positives`, `Methodology`

## Phase 4: Output Behavior

`findings` mode:
- Keep full artifacts in `plans/<task>/`
- Provide concise terminal summary with:
- target + mode + fixes policy
- waves + convergence status
- per-wave counts
- `FIXED` vs `UNFIXED`/`DEFERRED`
- path to `final-report.md`

`pr-comments` mode:
1. Draft local comments in `plans/<task>/comment-drafts/`
2. Ask for explicit confirmation
3. Post inline + top-level comments with `gh`

`both` mode:
- Complete findings artifacts first, then comment flow.

## Guardrails

- `fixes=deferred`: read-only; do not edit source files.
- `fixes=inline`: edit only files in PR diff and only for confirmed `BLOCKER`/`IMPORTANT` findings.
- Never skip lint/tests when applying inline fixes.
- Never force-push from review flow.
- Never post PR comments without explicit user confirmation.
- Always run credentials scan in Phase 1 before waves.
- Prefer exact file/line evidence over broad statements.
- If uncertain whether an issue is new, compare against `origin/<base>` before filing.