diff --git a/plugins/databases-on-aws/skills/dsql/SKILL.md b/plugins/databases-on-aws/skills/dsql/SKILL.md index ee3db2ba..600aaa5f 100644 --- a/plugins/databases-on-aws/skills/dsql/SKILL.md +++ b/plugins/databases-on-aws/skills/dsql/SKILL.md @@ -1,9 +1,6 @@ --- name: dsql -description: "Build with Aurora DSQL — manage schemas, execute queries, handle migrations, diagnose query plans, and develop applications with a serverless, distributed SQL database. Covers IAM auth, multi-tenant patterns, MySQL-to-DSQL migration, DDL operations, and query plan explainability. Triggers on phrases like: DSQL, Aurora DSQL, create DSQL table, DSQL schema, migrate to DSQL, distributed SQL database, serverless PostgreSQL-compatible database, DSQL query plan, DSQL EXPLAIN ANALYZE, why is my DSQL query slow." -license: Apache-2.0 -metadata: - tags: aws, aurora, dsql, distributed-sql, distributed, distributed-database, database, serverless, serverless-database, postgresql, postgres, sql, schema, migration, multi-tenant, iam-auth, aurora-dsql, mcp +description: "Build with Aurora DSQL — manage schemas, execute queries, handle migrations, diagnose query plans, and develop applications with a serverless, distributed SQL database. Covers IAM auth, multi-tenant patterns, MySQL-to-DSQL migration, DDL operations, and query plan explainability. Triggers on phrases like: DSQL, Aurora DSQL, create DSQL table, DSQL schema, migrate to DSQL, distributed SQL database, serverless PostgreSQL-compatible database, DSQL query plan, DSQL EXPLAIN ANALYZE, why is my DSQL query slow, DSQL query performance, DSQL full scan, DSQL DPU, DSQL query cost, DSQL latency, optimize this query, this query is slow, explain this plan, query performance, high DPU, make this faster, why is this doing a full scan." 
--- # Amazon Aurora DSQL Skill @@ -35,7 +32,7 @@ Load these files as needed for detailed guidance: **When:** Always load for guidance using or updating the DSQL MCP server **Contains:** Instructions for setting up the DSQL MCP server with 2 configuration options as -sampled in [.mcp.json](../../.mcp.json) +sampled in [mcp/.mcp.json](mcp/.mcp.json) 1. Documentation-Tools Only 2. Database Operations (requires a cluster endpoint) @@ -153,16 +150,18 @@ defaults that may change — when a user's decision depends on an exact limit, v | Max indexes per table | 24 | `aurora dsql index limits` | | Max columns per index | 8 | `aurora dsql index limits` | | IDENTITY/SEQUENCE CACHE values | 1 or >= 65536 | `aurora dsql sequence cache` | -| Supported column data types | See docs | `aurora dsql supported data types` | -**When to verify:** Before recommending batch sizes, connection pool settings, or schema designs where hitting a limit would cause failures; any time the exact number can affect user decision. +**When to verify:** Before recommending batch sizes, connection pool settings, or schema designs +where hitting a limit would cause failures. No need to verify for general guidance or when +the exact number doesn't affect the user's decision. -**Fallback:** If `awsknowledge` is unavailable, use the defaults above and flag that limits should be verified against [DSQL documentation](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/). +**Fallback:** If `awsknowledge` is unavailable, use the defaults above and note to the user +that limits should be verified against [DSQL documentation](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/). ## CLI Scripts Available -Bash scripts in [scripts/](../../scripts/) for cluster management (create, delete, list, cluster info), psql connection, and bulk data loading from local/s3 csv/tsv/parquet files. -See [scripts/README.md](../../scripts/README.md) for usage and hook configuration. 
+Bash scripts in [scripts/](scripts/) for cluster management (create, delete, list, cluster info), psql connection, and bulk data loading from local/S3 CSV/TSV/Parquet files. +See [scripts/README.md](scripts/README.md) for usage. --- @@ -206,7 +205,7 @@ ALTER COLUMN TYPE, DROP COLUMN, DROP CONSTRAINT → Table Recreation Pattern (Wo - MUST include tenant_id in all tables - MUST use `CREATE INDEX ASYNC` exclusively - MUST issue each DDL in its own transact call: `transact(["CREATE TABLE ..."])` -- MUST serialize arrays as TEXT or JSON; cast back at query time (`string_to_array(text, ',')` or `jsonb_array_elements_text(json::jsonb)`) +- MUST store arrays/JSON as TEXT ### Workflow 2: Safe Data Migration @@ -220,7 +219,10 @@ ALTER COLUMN TYPE, DROP COLUMN, DROP CONSTRAINT → Table Recreation Pattern (Wo - MUST batch updates under 3,000 rows in separate transact calls - MUST issue each ALTER TABLE in its own transaction -**Recovery — batch fails midway:** Rows already updated keep their new value (each batch committed independently). Resume by filtering on the unset state (`WHERE new_column IS NULL`) and continue. Re-running is safe because the filter naturally excludes completed rows. +**Recovery — batch fails midway:** Rows already updated keep their new value (each batch committed +in its own transaction). Resume by filtering on the unset state — e.g. add +`WHERE new_column IS NULL` (or the sentinel value) to the next UPDATE — and continue from there. +Re-running the entire migration is safe because the filter naturally excludes completed rows. ### Workflow 3: Application-Layer Referential Integrity @@ -252,7 +254,42 @@ MUST load [mysql-migrations/type-mapping.md](references/mysql-migrations/type-ma ### Workflow 8: Query Plan Explainability -Explains why the DSQL optimizer chose a particular plan.
**REQUIRES a structured Markdown diagnostic report is the deliverable** beyond conversation — run the workflow end-to-end before answering. Use the `aurora-dsql` MCP when connected; fall back to raw `psql` with a generated IAM token (see the fallback block below) otherwise. +Explains why the DSQL optimizer chose a particular plan. **REQUIRES a structured Markdown diagnostic report as the deliverable** — run the workflow end-to-end before answering. Use the `aurora-dsql` MCP when connected; fall back to raw `psql` with a generated IAM token (see the fallback block below) otherwise. + +#### Trigger Criteria + +Enter this workflow if **ANY** of these signals are present: + +| Signal | Examples | +|--------|----------| +| User provides SQL + mentions performance/speed/cost | "this query takes 8 seconds", "too slow", "optimize this", "make this faster" | +| User mentions DPU cost or resource consumption | "high DPU", "query cost is too high", "read DPU seems excessive" | +| User asks about a plan choice or scan type | "why is it doing a full scan?", "why not use the index?" 
| +| User pastes EXPLAIN / EXPLAIN ANALYZE output | Raw plan text in the message | +| User references a Query ID and asks about performance | "query abc-123 is slow" | +| User says "reassess" / "re-run" / "I added the index" | Phase 5 re-entry for an existing report | + +#### Context Disambiguation + +Before entering the workflow, confirm the query targets DSQL: + +| Condition | Action | +|-----------|--------| +| Only `aurora-dsql` MCP is connected (no other database MCPs) | Proceed — DSQL is the only target | +| User explicitly mentions DSQL, Aurora DSQL, or a known DSQL cluster | Proceed | +| Conversation already has prior DSQL interaction (earlier queries, schema ops) | Proceed | +| Multiple database MCPs are connected and no DSQL signal in the message | Ask the user which database they mean before proceeding | +| No database MCP is connected | Inform the user that the `aurora-dsql` MCP is required and offer the psql fallback | + +#### Routing (sub-path selection) + +| Condition | Path | +|-----------|------| +| User provides SQL but no plan output | Full workflow: Phase 0 → 1 → 2 → 3 → 4 | +| User pastes plan output + asks to fix/optimize | Full workflow: Phase 0 → 1 (re-capture fresh plan) → 2 → 3 → 4 | +| User pastes plan output + asks what it means (educational) | Full workflow: Phase 0 → 1 (re-capture fresh plan) → 2 → 3 → 4. The report is the explanation — do not produce a shorter conversational answer instead | +| Execution time >30s detected at Phase 1 | Phase 3 skips experiments per guc-experiments.md | +| User says "reassess" or equivalent | Re-run Phase 1–2, append Addendum to existing report | **Phase 0 — Load reference material.** Read all four before starting — each has content later phases need verbatim (node-type math, exact catalog SQL, the `>30s` skip protocol, required report elements): @@ -263,7 +300,7 @@ Explains why the DSQL optimizer chose a particular plan. 
Triggered by slow queri **Phase 1 — Capture the plan.** **ALWAYS** run `readonly_query("EXPLAIN ANALYZE VERBOSE …")` on the user's query verbatim (SELECT form) — **ALWAYS** capture a fresh plan from the cluster, even when the user describes the plan or reports an anomaly. **MAY** leverage `get_schema` or `information_schema` for schema sanity checks. When EXPLAIN errors (`relation does not exist`, `column does not exist`), **MUST** report the error verbatim — **MUST NOT** invent DSQL-specific semantics (e.g., case sensitivity, identifier quoting) as the root cause. Extract Query ID, Planning Time, Execution Time, DPU Estimate. **SELECT** runs as-is. **UPDATE/DELETE** rewrite to the equivalent SELECT (same join chain + WHERE) — the optimizer picks the same plan shape. **INSERT**, pl/pgsql, DO blocks, and functions **MUST** be rejected. **MUST NOT** use `transact --allow-writes` for plan capture; it bypasses MCP safety. -**Phase 2 — Gather evidence.** Using SQL from `catalog-queries.md`, query `pg_class`, `pg_stats`, `pg_indexes`, `COUNT(*)`, `COUNT(DISTINCT)`. Classify estimation errors per `plan-interpretation.md` (2x–5x minor, 5x–50x significant, 50x+ severe). Detect correlated predicates and data skew. +**Phase 2 — Gather evidence.** Using SQL from `catalog-queries.md`, query `pg_class`, `pg_stats`, `pg_indexes`, `COUNT(*)`, `COUNT(DISTINCT)`. Classify estimation errors per `plan-interpretation.md` (2x–5x minor, 5x–50x significant, 50x+ severe). Detect correlated predicates and data skew. When a Full Scan appears despite an apparently usable index, check for type coercion index bypass: retrieve indexed column types and compare against predicate literal types using the cross-type operator compatibility check in `plan-interpretation.md`. **Phase 3 — Experiment (conditional).** ≤30s: run GUC experiments per `guc-experiments.md` (default + merge-join-only) plus optional redundant-predicate test.
>30s: skip experiments, include the manual GUC testing SQL verbatim in the report, and skip the redundant-predicate test as well. Anomalous values (impossible row counts): confirm query results are correct despite the anomalous EXPLAIN, flag as a potential DSQL bug, and produce the Support Request Template from `report-format.md`. diff --git a/plugins/databases-on-aws/skills/dsql/references/query-plan/catalog-queries.md b/plugins/databases-on-aws/skills/dsql/references/query-plan/catalog-queries.md index 9b067cc8..d7458d97 100644 --- a/plugins/databases-on-aws/skills/dsql/references/query-plan/catalog-queries.md +++ b/plugins/databases-on-aws/skills/dsql/references/query-plan/catalog-queries.md @@ -103,6 +103,82 @@ Compare against `pg_stats.n_distinct`: - If `n_distinct` is positive: compare directly - If `n_distinct` is negative: multiply absolute value by actual row count to get estimated distinct count +## Column Types for Predicate Columns + +Retrieve the declared types for columns used in WHERE predicates and JOIN conditions, to detect type coercion index bypass (see plan-interpretation.md): + +```sql +SELECT + c.table_name, + c.column_name, + c.data_type, + c.udt_name, + c.is_nullable +FROM information_schema.columns c +WHERE c.table_schema = '{schema}' + AND c.table_name IN ('{table1}', '{table2}') + AND c.column_name IN ('{col1}', '{col2}'); +``` + +Cross-reference the column type against predicate literals visible in the EXPLAIN output. When the types differ, check the cross-type operator compatibility rules in plan-interpretation.md to determine whether the mismatch prevents index usage. + +## B-Tree Cross-Type Operator Support + +Determine which type pairs the DSQL B-Tree access method supports for index scans.
If a (predicate-type, column-type) pair has no registered operator, the index cannot be used for that comparison: + +```sql +SELECT DISTINCT + lt.typname AS left_type, + rt.typname AS right_type +FROM pg_amop ao +JOIN pg_type lt ON lt.oid = ao.amoplefttype +JOIN pg_type rt ON rt.oid = ao.amoprighttype +WHERE ao.amopmethod = 10003 + AND ao.amoplefttype != ao.amoprighttype +ORDER BY lt.typname, rt.typname; +``` + +This returns only the cross-type pairs (where left and right types differ). Same-type pairs are always supported. Use this to confirm whether a suspected type mismatch actually prevents index usage — if the pair appears in the result, the index CAN be used and the issue lies elsewhere. + +To check a specific pair: + +```sql +SELECT EXISTS ( + SELECT 1 + FROM pg_amop ao + JOIN pg_type lt ON lt.oid = ao.amoplefttype + JOIN pg_type rt ON rt.oid = ao.amoprighttype + WHERE ao.amopmethod = 10003 + AND lt.typname = '{predicate_type}' + AND rt.typname = '{column_type}' +) AS index_usable; +``` + +## Indexed Column Types + +Retrieve index definitions together with their column types to identify type coercion bypass candidates: + +```sql +SELECT + i.indexname, + i.tablename, + a.attname AS column_name, + t.typname AS column_type, + i.indexdef +FROM pg_indexes i +JOIN pg_class ic ON ic.relname = i.indexname +JOIN pg_index ix ON ix.indexrelid = ic.oid +JOIN pg_attribute a ON a.attrelid = ix.indrelid + AND a.attnum = ANY(ix.indkey) +JOIN pg_type t ON t.oid = a.atttypid +JOIN pg_namespace n ON n.oid = ic.relnamespace +WHERE n.nspname = '{schema}' + AND i.tablename IN ('{table1}', '{table2}') +ORDER BY i.tablename, i.indexname, a.attnum; +``` + +Use this when a Full Scan appears despite an apparently usable index — compare the index column's `column_type` against the predicate literal's inferred type. 
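As a usage sketch of the pair check above, here is a hypothetical instantiation — a `numeric` predicate literal against an indexed `bigint` (`int8`) column; the type names are illustrative placeholders:

```sql
-- Hypothetical instantiation: numeric predicate literal vs. indexed int8 column.
SELECT EXISTS (
  SELECT 1
  FROM pg_amop ao
  JOIN pg_type lt ON lt.oid = ao.amoplefttype
  JOIN pg_type rt ON rt.oid = ao.amoprighttype
  WHERE ao.amopmethod = 10003
    AND lt.typname = 'numeric'  -- predicate literal type
    AND rt.typname = 'int8'     -- indexed column type
) AS index_usable;
```

If the integer-family rule described in plan-interpretation.md holds, this pair should come back `false` while `'int4'` vs `'int8'` should come back `true` — but treat the live cluster's answer as authoritative, not the expectation.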
+ ## Value Distribution Analysis For columns with suspected data skew, retrieve the actual top-N value frequencies: diff --git a/plugins/databases-on-aws/skills/dsql/references/query-plan/plan-interpretation.md b/plugins/databases-on-aws/skills/dsql/references/query-plan/plan-interpretation.md index da4fefa5..610d738d 100644 --- a/plugins/databases-on-aws/skills/dsql/references/query-plan/plan-interpretation.md +++ b/plugins/databases-on-aws/skills/dsql/references/query-plan/plan-interpretation.md @@ -183,6 +183,63 @@ Detect physically impossible row counts in DSQL plan nodes: These anomalous values do not affect query correctness — only diagnostic output accuracy. +## Type Coercion and Index Bypass + +An index may exist on a column yet not be used when the predicate value's type does not match the column's declared type and no implicit cast exists between the two types. + +### Detection Pattern + +Flag this condition when **all** of the following are true: + +1. An index exists whose leading column matches a WHERE predicate column +2. The plan uses a Full Scan or Seq Scan on that table instead of an Index Scan +3. The predicate literal's type differs from the indexed column's declared type +4. The type pair is **not** among the cross-type operator pairs supported by the B-Tree access method (see the `pg_amop` check below) + +### Why It Happens + +DSQL (like PostgreSQL) can only use a B-Tree index when the comparison operator's input types match the index's operator class. When a predicate supplies a value of a different type: + +- If an implicit cast exists from the predicate type to the column type, the planner applies it transparently and can still use the index +- If no implicit cast exists, the planner must apply a per-row cast or comparison function that cannot use the index's ordering — resulting in a full scan + +This is particularly surprising to users because the query returns correct results (the cast happens at execution time, row by row) but performance degrades dramatically on large tables.
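A minimal sketch of the bypass, using a hypothetical table (names, types, and literals are illustrative only — always confirm with EXPLAIN on the actual cluster):

```sql
-- Hypothetical table: the primary key gives us an index on id (bigint).
CREATE TABLE orders (id BIGINT PRIMARY KEY, total NUMERIC);

-- Numeric literal: bigint vs numeric is not an index-supported cross-type
-- pair, so each row's id is cast for the comparison and the plan degrades
-- to a full scan. The query still returns the correct rows.
EXPLAIN SELECT * FROM orders WHERE id = 12345.0;

-- Integer-family literal: the comparison matches the index's operator
-- class, so an index lookup is possible.
EXPLAIN SELECT * FROM orders WHERE id = 12345;
```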
+ +### Determining Index-Compatible Type Pairs + +Rather than relying on a static matrix, query `pg_amop` directly on the cluster to determine which cross-type comparisons the DSQL B-Tree index access method supports. See catalog-queries.md for the exact SQL. + +The key insight: DSQL's B-Tree access method (amopmethod `10003`) only supports index scans when a registered operator exists for the specific (left-type, right-type) pair. If no operator is registered for the pair, the index cannot be used — regardless of whether a general-purpose implicit cast exists in `pg_cast`. + +In practice, cross-type index support is limited to the integer family (smallint, integer, bigint — all combinations). All other indexed types (text, numeric, uuid, timestamp, date, boolean, etc.) require an exact type match between the predicate and the indexed column for the index to be usable. + +### Quantifying Impact + +When this pattern is detected: + +``` +Full Scan rows processed = actual_rows from Full Scan node +Index Scan rows (expected) = estimated rows matching the predicate (from pg_stats selectivity) +Scan amplification = Full Scan rows / Index Scan rows (expected) +``` + +### Recommendation Template + +When a type coercion bypass is confirmed: + +- **Explicit cast in the predicate:** Rewrite `WHERE col = '42'` as `WHERE col = 42::float` (cast the literal to the column type) +- **Application-layer fix:** Ensure the application passes parameters with the correct type rather than relying on implicit conversion +- **Do NOT recommend changing the column type** to accommodate mismatched predicates — this masks the real issue and may break other queries + +### Evidence Gathering + +To confirm this pattern, cross-reference: + +1. The column type from `pg_attribute` or `information_schema.columns` (see catalog-queries.md) +2. The index definition from `pg_indexes` +3. The predicate literal in the EXPLAIN output (visible in `Filter:` or `Index Cond:` lines) +4. 
The supported cross-type operator pairs from the `pg_amop` query (see catalog-queries.md) + ## Projections and Row Width Capture Projections lists from Storage Scan and Storage Lookup nodes: diff --git a/plugins/databases-on-aws/skills/dsql/references/query-plan/query-rewrites-dsql-specific.md b/plugins/databases-on-aws/skills/dsql/references/query-plan/query-rewrites-dsql-specific.md new file mode 100644 index 00000000..4340bc35 --- /dev/null +++ b/plugins/databases-on-aws/skills/dsql/references/query-plan/query-rewrites-dsql-specific.md @@ -0,0 +1,91 @@ +# Query Rewrites Reference — DSQL-Specific + +SQL rewrites that address Aurora DSQL-specific behaviors and limitations. Apply these when the plan reveals inefficiency unique to DSQL's distributed architecture or optimizer constraints. + +## Table of Contents + +1. [Replace COUNT(*) with reltuples Estimate](#replace-count-with-reltuples-estimate) +2. [Split Large Joins to Enable Optimal Join Ordering](#split-large-joins-to-enable-optimal-join-ordering) + +--- + +## Replace COUNT(*) with reltuples Estimate + +When a query performs `COUNT(*)` on a large table, rewrite to use the `reltuples` value from `pg_class` for an approximate row count. This is a common workaround for cases where `COUNT(*)` is too slow or times out on large tables. + +**When to apply:** An approximate count is acceptable and the table is large enough that `COUNT(*)` is prohibitively expensive. + +**Do not apply:** The application requires an exact count.
+ +```sql +-- Original +SELECT COUNT(*) AS exact_count +FROM big_table; + +-- Rewritten (DSQL) +SELECT reltuples::bigint AS estimated_count +FROM pg_class +WHERE oid = 'public.big_table'::regclass; +``` + +```sql +-- Not applicable: exact count required +SELECT COUNT(*) AS exact_count +FROM big_table; +``` + +--- + +## Split Large Joins to Enable Optimal Join Ordering + +If a query joins more tables than the optimizer's DP threshold (e.g., 10 joins for Aurora DSQL), rewrite it into multiple subqueries each joining no more tables than the threshold, then join the subquery results. + +This allows the PostgreSQL-based DSQL engine to apply dynamic-programming (DP) join ordering within each smaller block, producing a better overall join plan than a greedy algorithm on many tables. + +**When to apply:** The total number of joined tables exceeds the DP threshold (`join_collapse_limit` or `from_collapse_limit`). Partition the join into CTEs each with table count at or below the threshold, push down relevant filters, and join the CTE results. + +**Do not apply:** The total table count is at or below the threshold, or splitting would prevent necessary cross-block optimizations. 
+ +```sql +-- Original +SELECT * +FROM R1 + JOIN R2 ON R1.id = R2.id + JOIN R3 ON R2.id = R3.id + JOIN R4 ON R3.id = R4.id + JOIN R5 ON R4.id = R5.id + JOIN R6 ON R5.id = R6.id + JOIN R7 ON R6.id = R7.id +WHERE Filters; + +-- Rewritten (DSQL) +WITH + sub1 AS ( + SELECT * + FROM R1 + JOIN R2 ON R1.id = R2.id + JOIN R3 ON R2.id = R3.id + JOIN R4 ON R3.id = R4.id + WHERE Filters -- the subset of Filters referencing R1..R4 + ), + sub2 AS ( + SELECT * + FROM R5 + JOIN R6 ON R5.id = R6.id + JOIN R7 ON R6.id = R7.id + WHERE Filters -- the subset of Filters referencing R5..R7 + ) +SELECT * +FROM sub1 +JOIN sub2 ON sub1.id = sub2.id; +``` + +```sql +-- Not applicable: total tables ≤ DP threshold +SELECT * +FROM R1 + JOIN R2 ON R1.id = R2.id + JOIN R3 ON R2.id = R3.id + JOIN R4 ON R3.id = R4.id +WHERE Filters; +``` diff --git a/plugins/databases-on-aws/skills/dsql/references/query-plan/query-rewrites-generic.md b/plugins/databases-on-aws/skills/dsql/references/query-plan/query-rewrites-generic.md new file mode 100644 index 00000000..4c01f0b5 --- /dev/null +++ b/plugins/databases-on-aws/skills/dsql/references/query-plan/query-rewrites-generic.md @@ -0,0 +1,594 @@ +# Query Rewrites Reference + +Generic SQL rewrites that can improve query performance. When a plan reveals inefficiency traceable to query structure (rather than missing indexes or stale statistics), recommend the applicable rewrite below. + +## Table of Contents + +1. [OR to IN](#or-to-in) +2. [LEFT JOIN with Null-Rejecting Predicate to INNER JOIN](#left-join-with-null-rejecting-predicate-to-inner-join) +3. [Propagate Filter to JOIN Columns](#propagate-filter-to-join-columns) +4. [Subquery Unnesting — Uncorrelated](#subquery-unnesting--uncorrelated) +5. [Subquery Unnesting — Correlated](#subquery-unnesting--correlated) +6. [Subquery Unnesting — Scalar](#subquery-unnesting--scalar) +7. [Push Computation to Constant Side](#push-computation-to-constant-side) +8. [Replace IN-Subquery with EXISTS](#replace-in-subquery-with-exists) +9. [Push GROUP BY into Subquery](#push-group-by-into-subquery) +10.
[Replace NOT IN with NOT EXISTS](#replace-not-in-with-not-exists) +11. [Flatten Nested UNION ALL](#flatten-nested-union-all) + +--- + +## OR to IN + +If a query contains multiple OR clauses comparing the same column to different constant values, rewrite them into a single IN clause. This enables more efficient index lookups and reduces redundant OR evaluations. + +**When to apply:** All OR comparisons target the same column using equality (`=`) with constant values. + +**Do not apply:** OR clauses compare different columns or involve non-constant expressions. + +```sql +-- Original +SELECT * +FROM R +WHERE R.key = c1 OR R.key = c2; + +-- Rewritten +SELECT * +FROM R +WHERE R.key IN (c1, c2); +``` + +```sql +-- Additional example +SELECT name, age +FROM employees +WHERE department_id = 1 OR department_id = 2 OR department_id = 3; + +-- Rewritten +SELECT name, age +FROM employees +WHERE department_id IN (1, 2, 3); +``` + +```sql +-- Not applicable: different columns involved +SELECT name, age +FROM employees +WHERE department_id = 1 OR location_id = 2; +``` + +--- + +## LEFT JOIN with Null-Rejecting Predicate to INNER JOIN + +If a query uses LEFT JOIN but the WHERE clause rejects NULLs on the joined table, rewrite as INNER JOIN. This enables a simpler, more efficient join plan. + +**When to apply:** The WHERE clause rejects NULLs from the right-hand side of a LEFT JOIN (e.g., `IS NOT NULL`, equality comparisons, or any predicate that cannot be true for NULL). + +**Do not apply:** NULLs from the right-hand side are required in the result. 
+ +```sql +-- Original +SELECT * +FROM R1 +LEFT JOIN R2 + ON R1.key = R2.key +WHERE R2.key IS NOT NULL; + +-- Rewritten +SELECT * +FROM R1 +JOIN R2 + ON R1.key = R2.key; +``` + +```sql +-- Not applicable: NULLs from R2 are required +SELECT * +FROM R1 +LEFT JOIN R2 + ON R1.key = R2.key; +``` + +--- + +## Propagate Filter to JOIN Columns + +If a query has an equality join condition and a filter predicate on one join attribute, propagate the filter to the corresponding attribute on the other table(s). This enables earlier filtering and reduces intermediate result sizes. + +**When to apply:** The filter predicate is on a column involved in an equality join condition. + +**Do not apply:** The predicate is on a non-join column. + +```sql +-- Original +SELECT * +FROM R1, R2 +WHERE R1.id = R2.id + AND R1.id > 10; + +-- Rewritten +SELECT * +FROM R1, R2 +WHERE R1.id = R2.id + AND R1.id > 10 + AND R2.id > 10; +``` + +```sql +-- Transitive propagation across multiple tables +SELECT * +FROM R1, R2, R3 +WHERE R1.id = R2.id + AND R2.id = R3.id + AND R1.id > 10; + +-- Rewritten +SELECT * +FROM R1, R2, R3 +WHERE R1.id = R2.id + AND R2.id = R3.id + AND R1.id > 10 + AND R2.id > 10 + AND R3.id > 10; +``` + +```sql +-- Not applicable: predicate is on a non-join column +SELECT * +FROM R1, R2 +WHERE R1.id = R2.id + AND R1.other_column > 10; +``` + +--- + +## Subquery Unnesting — Uncorrelated + +If a query contains an uncorrelated `IN (SELECT ...)` subquery, rewrite it as an explicit JOIN. This enables better join order optimizations and index usage. + +**When to apply:** The subquery does not reference columns from the outer query. + +**Do not apply:** The subquery is correlated (references outer query columns). 
+ +```sql +-- Original +SELECT * +FROM R +WHERE R.a IN ( + SELECT S.b + FROM S +); + +-- Rewritten +SELECT DISTINCT R.* +FROM R +JOIN S + ON R.a = S.b; +``` + +```sql +-- Additional example +SELECT order_id +FROM orders +WHERE customer_id IN ( + SELECT customer_id + FROM customers + WHERE country = 'US' +); + +-- Rewritten +SELECT DISTINCT orders.order_id +FROM orders +JOIN customers + ON orders.customer_id = customers.customer_id +WHERE customers.country = 'US'; +``` + +```sql +-- Not applicable: subquery is correlated +SELECT * +FROM R +WHERE R.a IN ( + SELECT S.b + FROM S + WHERE S.c = R.d +); +``` + +--- + +## Subquery Unnesting — Correlated + +If a query contains a correlated EXISTS subquery, rewrite it as an explicit JOIN. This exposes the subquery to better join optimizations, especially when indexes exist on the join columns. + +**When to apply:** The correlated subquery is inside an EXISTS clause and the correlation is expressible as a JOIN condition (typically equality). + +**Do not apply:** The correlation cannot be expressed as a simple JOIN condition. 
+ +```sql +-- Original +SELECT * +FROM R +WHERE EXISTS ( + SELECT 1 + FROM S + WHERE S.x = R.x + AND S.y > 0 +); + +-- Rewritten +SELECT DISTINCT R.* +FROM R +JOIN S + ON S.x = R.x + AND S.y > 0; +``` + +```sql +-- Additional example +SELECT product_id +FROM products +WHERE EXISTS ( + SELECT 1 + FROM product_reviews + WHERE product_reviews.product_id = products.product_id + AND product_reviews.rating >= 4 +); + +-- Rewritten +SELECT DISTINCT products.product_id +FROM products +JOIN product_reviews + ON product_reviews.product_id = products.product_id + AND product_reviews.rating >= 4; +``` + +```sql +-- Not applicable: correlation cannot be expressed as a JOIN condition +SELECT * +FROM R +WHERE EXISTS ( + SELECT 1 + FROM S + WHERE S.x + S.y = R.z +); +``` + +--- + +## Subquery Unnesting — Scalar + +If a query contains a scalar subquery in the SELECT clause computing an aggregate correlated by equality, rewrite it as a LEFT JOIN with GROUP BY. This reduces repeated subquery executions and enables better join planning. + +**When to apply:** The scalar subquery is correlated via equality and contains an aggregate function (MAX, MIN, COUNT, SUM). + +**Do not apply:** The scalar subquery is uncorrelated. 
+ +```sql +-- Original +SELECT + R.*, + (SELECT MAX(S.y) + FROM S + WHERE S.x = R.x) AS max_y +FROM R; + +-- Rewritten +SELECT + R.*, + Agg.max_y +FROM R +LEFT JOIN ( + SELECT x, MAX(y) AS max_y + FROM S + GROUP BY x +) AS Agg + ON Agg.x = R.x; +``` + +```sql +-- Additional example +SELECT + R.id, + R.name, + (SELECT COUNT(*) + FROM S + WHERE S.owner_id = R.id) AS s_count +FROM R; + +-- Rewritten +SELECT + R.id, + R.name, + Agg.s_count +FROM R +LEFT JOIN ( + SELECT owner_id, COUNT(*) AS s_count + FROM S + GROUP BY owner_id +) AS Agg + ON Agg.owner_id = R.id; +``` + +```sql +-- Not applicable: scalar subquery is not correlated +SELECT + R.*, + (SELECT MAX(S.y) FROM S) AS global_max_y +FROM R; +``` + +--- + +## Push Computation to Constant Side + +When a filter predicate applies invertible arithmetic to an indexed column, move the computation to the constant side so the column appears alone and indexes can be used. + +**When to apply:** All operations on the column are mathematically invertible (addition, subtraction, multiplication/division by non-zero constant). + +**Do not apply:** The computation involves non-invertible functions (substring, lower/upper, trigonometric functions) or moving the computation changes query semantics (precision loss, integer-division rounding). + +```sql +-- Original +SELECT * FROM titles +WHERE emp_no * 100 / 5 = 10000; + +-- Rewritten +SELECT * FROM titles +WHERE emp_no = 10000 * 5 / 100; +``` + +```sql +-- Additional example +SELECT * FROM orders +WHERE order_id + 5 > 100; + +-- Rewritten +SELECT * FROM orders +WHERE order_id > 100 - 5; +``` + +```sql +-- Not applicable: non-invertible function +SELECT * FROM users +WHERE substring(username, 1, 3) = 'abc'; +``` + +--- + +## Replace IN-Subquery with EXISTS + +When a column is compared to a subquery using IN and the subquery may return many rows, rewrite as a correlated EXISTS to leverage short-circuit evaluation.
+ +**When to apply:** The IN subquery returns a large or variable number of rows. + +**Do not apply:** The IN list is a small static set of constants. + +```sql +-- Original +SELECT * +FROM customers +WHERE customer_id IN ( + SELECT customer_id + FROM orders + WHERE order_date >= now() - interval '30 days' +); + +-- Rewritten +SELECT * +FROM customers c +WHERE EXISTS ( + SELECT 1 + FROM orders o + WHERE o.customer_id = c.customer_id + AND o.order_date >= now() - interval '30 days' +); +``` + +```sql +-- Additional example +SELECT product_id +FROM products +WHERE product_id IN ( + SELECT product_id + FROM inventory + WHERE quantity > 0 +); + +-- Rewritten +SELECT product_id +FROM products p +WHERE EXISTS ( + SELECT 1 + FROM inventory i + WHERE i.product_id = p.product_id + AND i.quantity > 0 +); +``` + +```sql +-- Not applicable: small static set of constants +SELECT * +FROM users +WHERE user_type IN ('admin', 'editor', 'viewer'); +``` + +--- + +## Push GROUP BY into Subquery + +When a query aggregates after joining a fact table to a dimension table, push the GROUP BY into a subquery on the fact table alone. This aggregates fewer rows and joins the smaller result to retrieve dimension columns. + +**When to apply:** The aggregation is on the fact table and additional columns come from a dimension table joined on the grouping key. + +**Do not apply:** No additional columns are needed beyond the grouping key.
+ +```sql +-- Original +SELECT c.customer_id, + c.first_name, + c.last_name, + COUNT(*) AS order_count +FROM customers c +JOIN orders o + ON c.customer_id = o.customer_id +GROUP BY c.customer_id, c.first_name, c.last_name; + +-- Rewritten +SELECT c.customer_id, + c.first_name, + c.last_name, + agg.order_count +FROM customers c +JOIN ( + SELECT customer_id, + COUNT(*) AS order_count + FROM orders + GROUP BY customer_id +) AS agg + ON c.customer_id = agg.customer_id; +``` + +```sql +-- Additional example +SELECT cat.category_name, + cat.description, + SUM(t.amount) AS total_amount +FROM categories cat +JOIN transactions t + ON cat.id = t.category_id +GROUP BY cat.category_name, cat.description; + +-- Rewritten +SELECT cat.category_name, + cat.description, + agg.total_amount +FROM categories cat +JOIN ( + SELECT category_id, + SUM(amount) AS total_amount + FROM transactions + GROUP BY category_id +) AS agg + ON cat.id = agg.category_id; +``` + +```sql +-- Not applicable: no additional columns needed +SELECT department_id, + SUM(salary) AS total_salary +FROM employees +GROUP BY department_id; +``` + +--- + +## Replace NOT IN with NOT EXISTS + +When a column is filtered with `NOT IN (subquery)`, rewrite as a correlated NOT EXISTS. This avoids building a large intermediate set and sidesteps NULL semantics issues with NOT IN. + +**When to apply:** The NOT IN subquery returns many rows or may contain NULLs. + +**Do not apply:** The exclusion list is a small static set of constants. 
+ +```sql +-- Original +SELECT * +FROM customers +WHERE customer_id NOT IN ( + SELECT customer_id + FROM blacklisted_customers +); + +-- Rewritten +SELECT * +FROM customers c +WHERE NOT EXISTS ( + SELECT 1 + FROM blacklisted_customers b + WHERE b.customer_id = c.customer_id +); +``` + +```sql +-- Additional example +SELECT product_id +FROM products +WHERE product_id NOT IN ( + SELECT product_id + FROM discontinued_products + WHERE discontinued = true +); + +-- Rewritten +SELECT p.product_id +FROM products p +WHERE NOT EXISTS ( + SELECT 1 + FROM discontinued_products d + WHERE d.product_id = p.product_id + AND d.discontinued = true +); +``` + +```sql +-- Not applicable: small static exclusion set +SELECT * +FROM items +WHERE item_type NOT IN ('typeA', 'typeB'); +``` + +--- + +## Flatten Nested UNION ALL + +When a query contains UNION ALL nested inside another UNION ALL, flatten all branches into a single UNION ALL to simplify the plan and reduce intermediate merge steps. + +**When to apply:** All set operations are UNION ALL (no deduplication). + +**Do not apply:** Any branch uses UNION (deduplicating), which must remain distinct. + +```sql +-- Original +SELECT * FROM sales_q1 +UNION ALL ( + SELECT * FROM sales_q2 + UNION ALL + SELECT * FROM sales_q3 +); + +-- Rewritten +SELECT * FROM sales_q1 +UNION ALL +SELECT * FROM sales_q2 +UNION ALL +SELECT * FROM sales_q3; +``` + +```sql +-- CTE example +-- Original +WITH a AS ( + SELECT * FROM t1 + UNION ALL + SELECT * FROM t2 +) +SELECT * FROM a +UNION ALL +SELECT * FROM t3; + +-- Rewritten +SELECT * FROM t1 +UNION ALL +SELECT * FROM t2 +UNION ALL +SELECT * FROM t3; +``` + +```sql +-- Not applicable: UNION (deduplicating) must stay distinct +SELECT * FROM t1 +UNION +SELECT * FROM t2; +```
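The NULL-semantics hazard noted in the NOT IN rewrite above is easy to demonstrate in isolation. A self-contained sketch (`unnest` over an array literal stands in for a real subquery):

```sql
-- NOT IN evaluates to UNKNOWN (not TRUE) whenever the subquery returns a
-- NULL, so this filters out every row and returns nothing.
SELECT 'kept' WHERE 2 NOT IN (SELECT unnest(ARRAY[1, NULL]::int[]));

-- NOT EXISTS tests for matches row by row and is unaffected by NULLs in
-- the subquery: this returns one row.
SELECT 'kept' WHERE NOT EXISTS (
  SELECT 1 FROM unnest(ARRAY[1, NULL]::int[]) AS v WHERE v = 2
);
```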