
Commit c438bb0

suryaiyer95, aidtya, and claude authored
feat: data-parity skill — TypeScript orchestrator, ClickHouse driver, partition support (#493)
* feat: add data-parity cross-database table comparison
  - Add DataParity engine integration via native Rust bindings
  - Add data-diff tool for LLM agent (profile, joindiff, hashdiff, cascade, auto)
  - Add ClickHouse driver support
  - Add data-parity skill: profile-first workflow, algorithm selection guide, CRITICAL warning that joindiff cannot run cross-database (always returns 0 diffs), output style rules (facts only, no editorializing)
  - Gitignore .altimate-code/ (credentials) and *.node (platform binaries)

* feat: add partition support to data_diff
  Split large tables by a date or numeric column before diffing. Each partition is diffed independently, then results are aggregated.
  New params:
  - partition_column: column to split on (date or numeric)
  - partition_granularity: day | week | month | year (for dates)
  - partition_bucket_size: bucket width for numeric columns
  New output field:
  - partition_results: per-partition breakdown (identical / differ / error)
  Dialect-aware SQL: Postgres, Snowflake, BigQuery, ClickHouse, MySQL. Skill updated with partition guidance and examples.

* feat: add categorical partition mode (string, enum, boolean)
  When partition_column is set without partition_granularity or partition_bucket_size, groups by raw DISTINCT values. Works for any non-date, non-numeric column: status, region, country, etc. WHERE clause uses equality: col = 'value' with proper escaping.

* fix: correct outcome shape handling in extractStats and formatOutcome
  Rust serializes ReladiffOutcome with serde tag 'mode', producing:
  {mode: 'diff', diff_rows: [...], stats: {rows_table1, rows_table2, exclusive_table1, exclusive_table2, updated, unchanged}}
  Previous code checked for {Match: {...}} / {Diff: {...}} shapes that never matched, causing partitioned diff to report all partitions as 'identical' with 0 rows.
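The three partition modes (date, numeric, categorical) can be sketched as a single WHERE-clause builder. This is an illustrative sketch, not the repo's actual `buildPartitionWhereClause`: the `PartitionMode` type and parameter names are assumptions, and identifiers are quoted with ANSI double-quotes for simplicity.

```typescript
// Hypothetical shape of the three partition modes described above.
type PartitionMode =
  | { kind: "date"; granularity: "day" | "week" | "month" | "year" }
  | { kind: "numeric"; bucketSize: number }
  | { kind: "categorical" }

function buildPartitionWhereClause(
  column: string,
  value: string | number,
  mode: PartitionMode,
): string {
  // ANSI identifier quoting for the sketch; real code is dialect-aware.
  const col = `"${column.replace(/"/g, '""')}"`
  switch (mode.kind) {
    case "date":
      // Compare the truncated date against the partition value.
      return `DATE_TRUNC('${mode.granularity}', ${col}) = '${value}'`
    case "numeric": {
      // Partition value is the bucket's lower bound; width is bucketSize.
      const lo = Number(value)
      return `${col} >= ${lo} AND ${col} < ${lo + mode.bucketSize}`
    }
    case "categorical":
      // Raw DISTINCT value, single quotes escaped by doubling.
      return `${col} = '${String(value).replace(/'/g, "''")}'`
  }
}
```

Note how the categorical arm doubles embedded single quotes, matching the "col = 'value' with proper escaping" behaviour the commit describes.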
  - extractStats(): check outcome.mode === 'diff', read from stats fields
  - mergeOutcomes(): aggregate mode-based outcomes correctly
  - summarize()/formatOutcome(): display mode-based shape with correct labels

* feat: rewrite data-parity skill with interactive, plan-first workflow
  Key changes based on feedback:
  - Always generate TODO plan before any tool is called
  - Enforce data_diff tool usage (never manual EXCEPT/JOIN SQL)
  - Add PK discovery + explicit user confirmation step
  - Profile pass is now mandatory before row-level diff
  - Ask user before expensive row-level diff on large tables:
    - <100K rows: proceed automatically
    - 100K-10M rows: ask with where_clause option
    - >10M rows: offer window/partition/full choices
  - Document partition modes (date/numeric/categorical) with examples
  - Add warehouse_list as first step to confirm connections

* fix: auto-discover extra_columns and exclude audit/timestamp columns from data diff
  The Rust engine only compares columns explicitly listed in extra_columns. When omitted, it was silently reporting all key-matched rows as 'identical' even when non-key values differed — a false positive bug.
  Changes:
  - Auto-discover columns from information_schema when extra_columns is omitted and source is a plain table name (not a SQL query)
  - Exclude audit/timestamp columns (updated_at, created_at, inserted_at, modified_at, _fivetran_*, _airbyte_*, publisher_last_updated_*, etc.) from comparison by default since they typically differ due to ETL timing
  - Report excluded columns in tool output so users know what was skipped
  - Fix misleading tool description that said 'Omit to compare all columns'
  - Update SKILL.md with critical guidance on extra_columns behavior

* fix: add `noLimit` option to driver `execute()` to prevent silent result truncation
  All drivers default to `LIMIT 1001` on SELECT queries and post-truncate to 1000 rows.
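The serde-tagged outcome shape and the corrected mode check can be illustrated as follows. The `DiffStats` field names come from the commit text above, but the exact `extractStats` signature and return convention are assumptions.

```typescript
// Sketch of the internally tagged shape Rust emits (serde tag = 'mode')
// and a mode-based stats reader — not the repo's exact implementation.
interface DiffStats {
  rows_table1: number
  rows_table2: number
  exclusive_table1: number
  exclusive_table2: number
  updated: number
  unchanged: number
}

type ReladiffOutcome =
  | { mode: "diff"; diff_rows: unknown[]; stats: DiffStats }
  | { mode: string; [key: string]: unknown }

function extractStats(outcome: ReladiffOutcome): DiffStats | null {
  // The discriminant is the `mode` field — the old {Diff: {...}} /
  // {Match: {...}} externally-tagged checks never matched this shape.
  if (outcome.mode === "diff" && "stats" in outcome) {
    return (outcome as { stats: DiffStats }).stats
  }
  return null
}
```

Checking the tag field instead of an externally tagged wrapper is what stops every partition from being misreported as 'identical' with 0 rows.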
  This silently drops rows when the data-diff engine needs complete result sets — a FULL OUTER JOIN returning >1000 diff rows would be truncated, causing the engine to undercount differences.
  - Add `ExecuteOptions { noLimit?: boolean }` to the `Connector` interface
  - When `noLimit: true`, set `effectiveLimit = 0` (falsy) so the existing LIMIT injection guard is skipped, and add `effectiveLimit > 0` to the truncation check so rows aren't sliced to zero
  - Update all 12 drivers: postgres, clickhouse, snowflake, bigquery, mysql, redshift, databricks, duckdb, oracle, sqlserver, sqlite, mongodb
  - Pass `{ noLimit: true }` from `data-diff.ts` `executeQuery()`
  Interactive SQL callers are unaffected — they continue to get the default 1000-row limit. Only the data-diff pipeline opts out.
  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: detect auto-timestamp defaults from database catalog and confirm exclusions with user
  Column exclusion now has two layers:
  1. Name-pattern matching (existing) — updated_at, created_at, _fivetran_synced, etc.
  2. Schema-level default detection (new) — queries column_default for NOW(), CURRENT_TIMESTAMP, GETDATE(), SYSDATE, SYSTIMESTAMP, etc.
  Covers PostgreSQL, MySQL, Snowflake, SQL Server, Oracle, ClickHouse, DuckDB, SQLite, and Redshift in a single round-trip (no extra query).
  The skill prompt now instructs the agent to present detected auto-timestamp columns to the user and ask for confirmation before excluding them, since migrations should preserve timestamps while ETL replication regenerates them.
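The noLimit mechanism boils down to two small pieces of logic. `computeEffectiveLimit` and `truncateRows` are hypothetical helper names for illustration; in the actual drivers this logic is inlined into each `execute()`.

```typescript
// Minimal sketch of the noLimit opt-out described above.
interface ExecuteOptions {
  noLimit?: boolean
}

function computeEffectiveLimit(limit: number | undefined, options?: ExecuteOptions): number {
  // 0 is falsy, so a driver's `if (effectiveLimit)` LIMIT-injection guard
  // is skipped and no LIMIT clause gets appended to the SQL.
  return options?.noLimit ? 0 : (limit ?? 1000)
}

function truncateRows<T>(rows: T[], effectiveLimit: number): { rows: T[]; truncated: boolean } {
  // The `effectiveLimit > 0` guard keeps noLimit results from being
  // sliced to zero rows (slice(0, 0) would drop everything).
  const truncated = effectiveLimit > 0 && rows.length > effectiveLimit
  return { rows: truncated ? rows.slice(0, effectiveLimit) : rows, truncated }
}
```

Both halves are needed: without the `> 0` guard, `noLimit: true` would turn every result set into an empty array instead of an unlimited one.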
* fix: address code review findings in data-diff orchestrator
  - `buildColumnDiscoverySQL`: escape single quotes in all interpolated table name parts to prevent SQL injection via crafted source/target names
  - `dateTruncExpr`: add Oracle case (`TRUNC(col, 'UNIT')`) — Oracle does not have `DATE_TRUNC`, so date-partitioned diffs on Oracle tables previously failed
  Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: address code review security and correctness findings
  - Apply esc() to Oracle and SQLite paths in buildColumnDiscoverySQL (SQL injection via table name was unpatched in these dialects)
  - Quote identifiers in resolveTableSources to prevent injection via table names containing semicolons or special characters
  - Surface SQL execution errors before feeding empty rows to the engine (silent false "match" when warehouse is unreachable is now an error)
  - Fix Oracle TRUNC() format model map: 'WEEK' → 'IW' (ISO week) ('WEEK' throws ORA-01800 on all Oracle versions)
  - Quote partition column identifier in buildPartitionWhereClause

* fix: resolve simulation suite failures — object stringification, error propagation, and test mock formats
  - `altimate-core-column-lineage`: fix `[object Object]` in `column_dict` output when source entries are `{ source_table, source_column }` objects instead of strings
  - `schema-inspect`: propagate `{ success: false, error }` dispatcher responses to `metadata.error` instead of silently returning empty schema
  - `sql-analyze`: guard against null/undefined result from dispatcher to prevent "undefined" literal in output
  - `lineage-check`: guard against null/undefined result from dispatcher to prevent "undefined" literal in output
  - `simulation-suite.test.ts`: fix `sql-translate` mock format — data fields must be flat (not wrapped in `data: {}`), add `source_dialect`/`target_dialect` to mock so assertions pass
  - `simulation-suite.test.ts`: fix `dbt-manifest` mock format — unwrap `data: {}` so `model_count` and `models` are accessible at
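The single-quote escaping fix can be sketched like this. `columnDiscoverySQL` is a simplified stand-in for the repo's `buildColumnDiscoverySQL`, and the exact query shape is an assumption; only the escaping technique is the point.

```typescript
// esc() doubles single quotes inside string literals, so a crafted table
// name cannot break out of the quoted literal and inject SQL.
function esc(literal: string): string {
  return literal.replace(/'/g, "''")
}

// quoteIdent() doubles embedded double quotes for ANSI identifiers, so a
// name containing semicolons or quotes stays inert when interpolated.
function quoteIdent(name: string): string {
  return `"${name.replace(/"/g, '""')}"`
}

// Hypothetical simplified discovery query showing esc() at each
// interpolation point.
function columnDiscoverySQL(schema: string, table: string): string {
  return (
    `SELECT column_name FROM information_schema.columns ` +
    `WHERE table_schema = '${esc(schema)}' AND table_name = '${esc(table)}'`
  )
}
```

A source name like `x' OR '1'='1` becomes the harmless literal `'x'' OR ''1''=''1'` rather than a live predicate.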
  top level
  Simulation suite: 695/839 → 839/839 (100%)

* refactor: remove existing-tool improvements — scope to data-diff only

* refactor: revert .gitignore changes — scope to data-diff only

* fix: silence @clickhouse/client internal stderr logger to prevent TUI corruption
  The @clickhouse/client package enables ERROR-level logging by default and writes `[ERROR][@clickhouse/client][Connection]` lines directly to stderr on auth/query failures. These raw writes corrupt the terminal TUI rendering.
  Set `log: { level: 127 }` (ClickHouseLogLevel.OFF) when creating the client — consistent with how Snowflake (`logLevel: 'OFF'`) and Databricks (no-op logger) already suppress their SDK loggers for the same reason.

* fix: SQL injection hardening, target partition discovery, and local pack script
  - Validate table names before interpolating into DESCRIBE/SHOW COLUMNS for ClickHouse and Snowflake — reject names with non-alphanumeric characters to prevent SQL injection; also quote parts with dialect-appropriate delimiters
  - Discover partition values from BOTH source and target tables and union the results — previously only source was queried, silently missing rows that existed only in target-side partitions
  - Add script/pack-local.ts: mirrors publish.ts but stops before npm publish; injects local altimate-core tarballs from /tmp/altimate-local-dist/ for local end-to-end testing

* feat: add Step 9 result presentation guidelines to data-parity skill
  Require that every diff result summary surfaces:
  - Exact scope (tables + warehouses compared)
  - Filters and time period applied (or explicitly states none)
  - Key columns used and how they were confirmed
  - Columns compared and excluded, with reasons (auto-timestamp, user request)
  - Algorithm used
  Includes example full result summary and guidance for identical results — emphasising that bare numbers without context are meaningless to the user.
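The validate-before-interpolate hardening for DESCRIBE/SHOW COLUMNS might look like the following sketch. The permitted character class and helper names are assumptions (the commit only says "reject names with non-alphanumeric characters"), and ClickHouse-style backtick delimiters are shown.

```typescript
// Reject any dotted part that isn't a plain identifier BEFORE it is
// interpolated into DESCRIBE/SHOW COLUMNS. Character class is assumed:
// leading letter/underscore, then alphanumerics, underscore, dollar.
function assertSafeTableName(name: string): void {
  for (const part of name.split(".")) {
    if (!/^[A-Za-z_][A-Za-z0-9_$]*$/.test(part)) {
      throw new Error(`Unsafe table name part: ${part}`)
    }
  }
}

function describeSQL(table: string): string {
  assertSafeTableName(table) // reject before any interpolation happens
  // Quote each dotted part with backticks (ClickHouse-style delimiters).
  const quoted = table.split(".").map((p) => `\`${p}\``).join(".")
  return `DESCRIBE TABLE ${quoted}`
}
```

Validation plus dialect-appropriate quoting is belt-and-braces: even a name that slipped past the regex would still be inert inside delimiters.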
* fix: use correct outcome format for empty/fallback partition results
  The partitioned diff returned `{ Match: { row_count: 0, algorithm: 'partitioned' } }` when no partition values were found or all partitions failed. This format lacks `mode: 'diff'`, so `formatOutcome` fell through to raw JSON.stringify instead of producing clean output. Use the standard Rust engine format: `{ mode: 'diff', stats: {...}, diff_rows: [] }`

* chore: remove pack-local.ts — dev-only utility, not part of the feature

* feat: add data-parity skill to builder prompt with table and SQL query comparison modes

* fix: address code review findings — Oracle TRUNC, dialect-aware quoting, query+partition guard
  - Oracle day granularity: 'DDD' (day-of-year) → 'DD' (day-of-month)
  - Add `quoteIdentForDialect()` helper: MySQL/ClickHouse use backticks, TSQL/Fabric use brackets, others use ANSI double-quotes
  - `buildPartitionDiscoverySQL` and `buildPartitionWhereClause` now use dialect-aware quoting instead of hardcoded double-quotes
  - `runPartitionedDiff` rejects SQL queries as source/target with a clear error — partitioning requires table names to discover column values

* fix: pin `duckdb` to 1.4.4 to prevent bun runtime timeout
  - Pin `duckdb` from `^1.0.0` to exact `1.4.4` in `packages/drivers`
  - Add `duckdb: 1.4.4` to root `package.json` for workspace resolution
  - Update `bun.lock`
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Revert "fix: pin `duckdb` to 1.4.4 to prevent bun runtime timeout"
  This reverts commit b2cf288.

---------

Co-authored-by: Aditya Pandey <aditya.p@altimate.ai>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
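A plausible shape for the `quoteIdentForDialect()` helper described in the commit message is sketched below; the `Dialect` union and exact dialect names are assumptions, but the three delimiter families (backticks, brackets, ANSI double quotes) come straight from the commit text.

```typescript
// Hypothetical sketch of dialect-aware identifier quoting.
type Dialect = "mysql" | "clickhouse" | "tsql" | "fabric" | "postgres" | "snowflake" | "oracle"

function quoteIdentForDialect(name: string, dialect: Dialect): string {
  switch (dialect) {
    case "mysql":
    case "clickhouse":
      // Backtick delimiters; escape embedded backticks by doubling.
      return `\`${name.replace(/`/g, "``")}\``
    case "tsql":
    case "fabric":
      // Bracket delimiters; escape closing brackets by doubling.
      return `[${name.replace(/]/g, "]]")}]`
    default:
      // ANSI double quotes for everything else.
      return `"${name.replace(/"/g, '""')}"`
  }
}
```

Hardcoding double quotes breaks on MySQL and ClickHouse (where `"order"` is a string literal unless ANSI_QUOTES is on), which is why partition SQL needed this helper.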
1 parent adaebe0 commit c438bb0

File tree

20 files changed: +1732 −57 lines changed

.opencode/skills/data-parity/SKILL.md

Lines changed: 411 additions & 0 deletions (large diff not rendered by default)

packages/drivers/src/bigquery.ts

Lines changed: 4 additions & 4 deletions

@@ -2,7 +2,7 @@
  * BigQuery driver using the `@google-cloud/bigquery` package.
  */

-import type { ConnectionConfig, Connector, ConnectorResult, SchemaColumn } from "./types"
+import type { ConnectionConfig, Connector, ConnectorResult, ExecuteOptions, SchemaColumn } from "./types"

 export async function connect(config: ConnectionConfig): Promise<Connector> {
   let BigQueryModule: any
@@ -37,8 +37,8 @@ export async function connect(config: ConnectionConfig): Promise<Connector> {
     client = new BigQuery(options)
   },

-  async execute(sql: string, limit?: number, binds?: any[]): Promise<ConnectorResult> {
-    const effectiveLimit = limit ?? 1000
+  async execute(sql: string, limit?: number, binds?: any[], execOptions?: ExecuteOptions): Promise<ConnectorResult> {
+    const effectiveLimit = execOptions?.noLimit ? 0 : (limit ?? 1000)
     const query = sql.replace(/;\s*$/, "")
     const isSelectLike = /^\s*(SELECT|WITH|VALUES)\b/i.test(sql)

@@ -58,7 +58,7 @@ export async function connect(config: ConnectionConfig): Promise<Connector> {

     const [rows] = await client.query(options)
     const columns = rows.length > 0 ? Object.keys(rows[0]) : []
-    const truncated = rows.length > effectiveLimit
+    const truncated = effectiveLimit > 0 && rows.length > effectiveLimit
     const limitedRows = truncated ? rows.slice(0, effectiveLimit) : rows

     return {

packages/drivers/src/clickhouse.ts

Lines changed: 6 additions & 3 deletions

@@ -5,7 +5,7 @@
  * Uses the official ClickHouse JS client which communicates over HTTP(S).
  */

-import type { ConnectionConfig, Connector, ConnectorResult, SchemaColumn } from "./types"
+import type { ConnectionConfig, Connector, ConnectorResult, ExecuteOptions, SchemaColumn } from "./types"

 export async function connect(config: ConnectionConfig): Promise<Connector> {
   let createClient: any
@@ -57,14 +57,17 @@ export async function connect(config: ConnectionConfig): Promise<Connector> {
       clientConfig.clickhouse_settings = config.clickhouse_settings
     }

+    // Silence the client's internal stderr logger — its ERROR-level output
+    // writes raw lines directly to stderr and corrupts terminal TUI rendering.
+    clientConfig.log = { level: 127 } // ClickHouseLogLevel.OFF = 127
     client = createClient(clientConfig)
   },

-  async execute(sql: string, limit?: number, _binds?: any[]): Promise<ConnectorResult> {
+  async execute(sql: string, limit?: number, _binds?: any[], options?: ExecuteOptions): Promise<ConnectorResult> {
     if (!client) {
       throw new Error("ClickHouse client not connected — call connect() first")
     }
-    const effectiveLimit = limit === undefined ? 1000 : limit
+    const effectiveLimit = options?.noLimit ? 0 : (limit ?? 1000)
     let query = sql

     // Strip string literals, then comments, for accurate SQL heuristic checks.

packages/drivers/src/databricks.ts

Lines changed: 4 additions & 4 deletions

@@ -2,7 +2,7 @@
  * Databricks driver using the `@databricks/sql` package.
  */

-import type { ConnectionConfig, Connector, ConnectorResult, SchemaColumn } from "./types"
+import type { ConnectionConfig, Connector, ConnectorResult, ExecuteOptions, SchemaColumn } from "./types"

 export async function connect(config: ConnectionConfig): Promise<Connector> {
   let databricksModule: any
@@ -44,8 +44,8 @@ export async function connect(config: ConnectionConfig): Promise<Connector> {
     })
   },

-  async execute(sql: string, limit?: number, binds?: any[]): Promise<ConnectorResult> {
-    const effectiveLimit = limit ?? 1000
+  async execute(sql: string, limit?: number, binds?: any[], options?: ExecuteOptions): Promise<ConnectorResult> {
+    const effectiveLimit = options?.noLimit ? 0 : (limit ?? 1000)
     let query = sql
     const isSelectLike = /^\s*(SELECT|WITH|VALUES)\b/i.test(sql)
     if (
@@ -65,7 +65,7 @@ export async function connect(config: ConnectionConfig): Promise<Connector> {
     await operation.close()

     const columns = rows.length > 0 ? Object.keys(rows[0]) : []
-    const truncated = rows.length > effectiveLimit
+    const truncated = effectiveLimit > 0 && rows.length > effectiveLimit
     const limitedRows = truncated ? rows.slice(0, effectiveLimit) : rows

     return {

packages/drivers/src/duckdb.ts

Lines changed: 4 additions & 4 deletions

@@ -2,7 +2,7 @@
  * DuckDB driver using the `duckdb` package.
  */

-import type { ConnectionConfig, Connector, ConnectorResult, SchemaColumn } from "./types"
+import type { ConnectionConfig, Connector, ConnectorResult, ExecuteOptions, SchemaColumn } from "./types"

 export async function connect(config: ConnectionConfig): Promise<Connector> {
   let duckdb: any
@@ -105,8 +105,8 @@ export async function connect(config: ConnectionConfig): Promise<Connector> {
     connection = db.connect()
   },

-  async execute(sql: string, limit?: number, binds?: any[]): Promise<ConnectorResult> {
-    const effectiveLimit = limit ?? 1000
+  async execute(sql: string, limit?: number, binds?: any[], options?: ExecuteOptions): Promise<ConnectorResult> {
+    const effectiveLimit = options?.noLimit ? 0 : (limit ?? 1000)

     let finalSql = sql
     const isSelectLike = /^\s*(SELECT|WITH|VALUES)\b/i.test(sql)
@@ -123,7 +123,7 @@ export async function connect(config: ConnectionConfig): Promise<Connector> {
       : await query(finalSql)
     const columns =
       rows.length > 0 ? Object.keys(rows[0]) : []
-    const truncated = rows.length > effectiveLimit
+    const truncated = effectiveLimit > 0 && rows.length > effectiveLimit
     const limitedRows = truncated ? rows.slice(0, effectiveLimit) : rows

     return {

packages/drivers/src/mysql.ts

Lines changed: 4 additions & 4 deletions

@@ -2,7 +2,7 @@
  * MySQL driver using the `mysql2` package.
  */

-import type { ConnectionConfig, Connector, ConnectorResult, SchemaColumn } from "./types"
+import type { ConnectionConfig, Connector, ConnectorResult, ExecuteOptions, SchemaColumn } from "./types"

 export async function connect(config: ConnectionConfig): Promise<Connector> {
   let mysql: any
@@ -41,8 +41,8 @@ export async function connect(config: ConnectionConfig): Promise<Connector> {
     pool = mysql.createPool(poolConfig)
   },

-  async execute(sql: string, limit?: number, _binds?: any[]): Promise<ConnectorResult> {
-    const effectiveLimit = limit ?? 1000
+  async execute(sql: string, limit?: number, _binds?: any[], options?: ExecuteOptions): Promise<ConnectorResult> {
+    const effectiveLimit = options?.noLimit ? 0 : (limit ?? 1000)
     let query = sql
     const isSelectLike = /^\s*(SELECT|WITH|VALUES)\b/i.test(sql)
     if (
@@ -56,7 +56,7 @@ export async function connect(config: ConnectionConfig): Promise<Connector> {
     const [rows, fields] = await pool.query(query)
     const columns = fields?.map((f: any) => f.name) ?? []
     const rowsArr = Array.isArray(rows) ? rows : []
-    const truncated = rowsArr.length > effectiveLimit
+    const truncated = effectiveLimit > 0 && rowsArr.length > effectiveLimit
     const limitedRows = truncated
       ? rowsArr.slice(0, effectiveLimit)
       : rowsArr

packages/drivers/src/oracle.ts

Lines changed: 4 additions & 4 deletions

@@ -2,7 +2,7 @@
  * Oracle driver using the `oracledb` package (thin mode, pure JS).
  */

-import type { ConnectionConfig, Connector, ConnectorResult, SchemaColumn } from "./types"
+import type { ConnectionConfig, Connector, ConnectorResult, ExecuteOptions, SchemaColumn } from "./types"

 export async function connect(config: ConnectionConfig): Promise<Connector> {
   let oracledb: any
@@ -37,8 +37,8 @@ export async function connect(config: ConnectionConfig): Promise<Connector> {
     })
   },

-  async execute(sql: string, limit?: number, _binds?: any[]): Promise<ConnectorResult> {
-    const effectiveLimit = limit ?? 1000
+  async execute(sql: string, limit?: number, _binds?: any[], options?: ExecuteOptions): Promise<ConnectorResult> {
+    const effectiveLimit = options?.noLimit ? 0 : (limit ?? 1000)
     let query = sql
     const isSelectLike = /^\s*(SELECT|WITH)\b/i.test(sql)

@@ -61,7 +61,7 @@ export async function connect(config: ConnectionConfig): Promise<Connector> {
     const columns =
       result.metaData?.map((m: any) => m.name) ??
       (rows.length > 0 ? Object.keys(rows[0]) : [])
-    const truncated = rows.length > effectiveLimit
+    const truncated = effectiveLimit > 0 && rows.length > effectiveLimit
     const limitedRows = truncated
       ? rows.slice(0, effectiveLimit)
       : rows

packages/drivers/src/postgres.ts

Lines changed: 4 additions & 4 deletions

@@ -2,7 +2,7 @@
  * PostgreSQL driver using the `pg` package.
  */

-import type { ConnectionConfig, Connector, ConnectorResult, SchemaColumn } from "./types"
+import type { ConnectionConfig, Connector, ConnectorResult, ExecuteOptions, SchemaColumn } from "./types"

 export async function connect(config: ConnectionConfig): Promise<Connector> {
   let pg: any
@@ -46,7 +46,7 @@ export async function connect(config: ConnectionConfig): Promise<Connector> {
     pool = new Pool(poolConfig)
   },

-  async execute(sql: string, limit?: number, _binds?: any[]): Promise<ConnectorResult> {
+  async execute(sql: string, limit?: number, _binds?: any[], options?: ExecuteOptions): Promise<ConnectorResult> {
     const client = await pool.connect()
     try {
       if (config.statement_timeout) {
@@ -57,7 +57,7 @@ export async function connect(config: ConnectionConfig): Promise<Connector> {
       }

       let query = sql
-      const effectiveLimit = limit ?? 1000
+      const effectiveLimit = options?.noLimit ? 0 : (limit ?? 1000)
       const isSelectLike = /^\s*(SELECT|WITH|VALUES)\b/i.test(sql)
       // Add LIMIT only for SELECT-like queries and if not already present
       if (
@@ -70,7 +70,7 @@ export async function connect(config: ConnectionConfig): Promise<Connector> {

       const result = await client.query(query)
       const columns = result.fields?.map((f: any) => f.name) ?? []
-      const truncated = result.rows.length > effectiveLimit
+      const truncated = effectiveLimit > 0 && result.rows.length > effectiveLimit
       const rows = truncated
         ? result.rows.slice(0, effectiveLimit)
         : result.rows

packages/drivers/src/redshift.ts

Lines changed: 4 additions & 4 deletions

@@ -3,7 +3,7 @@
  * Uses svv_ system views for introspection.
  */

-import type { ConnectionConfig, Connector, ConnectorResult, SchemaColumn } from "./types"
+import type { ConnectionConfig, Connector, ConnectorResult, ExecuteOptions, SchemaColumn } from "./types"

 export async function connect(config: ConnectionConfig): Promise<Connector> {
   let pg: any
@@ -46,10 +46,10 @@ export async function connect(config: ConnectionConfig): Promise<Connector> {
     pool = new Pool(poolConfig)
   },

-  async execute(sql: string, limit?: number, _binds?: any[]): Promise<ConnectorResult> {
+  async execute(sql: string, limit?: number, _binds?: any[], options?: ExecuteOptions): Promise<ConnectorResult> {
     const client = await pool.connect()
     try {
-      const effectiveLimit = limit ?? 1000
+      const effectiveLimit = options?.noLimit ? 0 : (limit ?? 1000)
       let query = sql
       const isSelectLike = /^\s*(SELECT|WITH|VALUES)\b/i.test(sql)
       if (
@@ -62,7 +62,7 @@ export async function connect(config: ConnectionConfig): Promise<Connector> {

       const result = await client.query(query)
       const columns = result.fields?.map((f: any) => f.name) ?? []
-      const truncated = result.rows.length > effectiveLimit
+      const truncated = effectiveLimit > 0 && result.rows.length > effectiveLimit
       const rows = truncated
         ? result.rows.slice(0, effectiveLimit)
         : result.rows

packages/drivers/src/snowflake.ts

Lines changed: 4 additions & 4 deletions

@@ -3,7 +3,7 @@
  */

 import * as fs from "fs"
-import type { ConnectionConfig, Connector, ConnectorResult, SchemaColumn } from "./types"
+import type { ConnectionConfig, Connector, ConnectorResult, ExecuteOptions, SchemaColumn } from "./types"

 export async function connect(config: ConnectionConfig): Promise<Connector> {
   let snowflake: any
@@ -232,8 +232,8 @@ export async function connect(config: ConnectionConfig): Promise<Connector> {
     })
   },

-  async execute(sql: string, limit?: number, binds?: any[]): Promise<ConnectorResult> {
-    const effectiveLimit = limit ?? 1000
+  async execute(sql: string, limit?: number, binds?: any[], options?: ExecuteOptions): Promise<ConnectorResult> {
+    const effectiveLimit = options?.noLimit ? 0 : (limit ?? 1000)
     let query = sql
     const isSelectLike = /^\s*(SELECT|WITH|VALUES|SHOW)\b/i.test(sql)
     if (
@@ -245,7 +245,7 @@ export async function connect(config: ConnectionConfig): Promise<Connector> {
     }

     const result = await executeQuery(query, binds)
-    const truncated = result.rows.length > effectiveLimit
+    const truncated = effectiveLimit > 0 && result.rows.length > effectiveLimit
     const rows = truncated
       ? result.rows.slice(0, effectiveLimit)
       : result.rows

0 commit comments
