[SPARK-56231][SQL] Bucket pruning and bucket join optimization for V2 file read path #55229

Draft
LuciferYang wants to merge 9 commits into apache:master from LuciferYang:SPARK-56231

Conversation

@LuciferYang
Contributor

PR Description

What changes were proposed in this pull request?

Enables bucket pruning and bucket join optimization for V2 file tables (BatchScanExec with FileScan), matching the V1 FileSourceScanExec behavior.

  • Thread BucketSpec from FileTable through FileScanBuilder to FileScan and all 6 concrete scan classes (Parquet, ORC, CSV, JSON, Text, Avro)
  • Implement bucketed file grouping in FileScan.partitions — files are grouped by bucket ID extracted from filenames, with optional bucket pruning and coalescing
  • Report HashPartitioning from DataSourceV2ScanExecBase.outputPartitioning for bucketed scans, enabling shuffle-free joins
  • Extend DisableUnnecessaryBucketedScan to handle BatchScanExec — disables bucketed scan when no downstream operator benefits from it
  • Extend CoalesceBucketsInJoin to handle BatchScanExec — coalesces bucket counts for joins between tables with different bucket numbers
  • Reuse FileSourceStrategy.genBucketSet (widened to private[sql]) for bucket pruning filter analysis
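The bucketed file grouping in the second bullet can be sketched standalone. This is not the PR's code: it only illustrates grouping file names by the bucket id that Spark encodes as an `_NNNNN` suffix in bucketed file names, then dropping buckets eliminated by pruning.

```scala
import scala.collection.immutable.BitSet

object BucketGroupingSketch {
  // Spark encodes the bucket id as an "_NNNNN" suffix before the file
  // extension, e.g. part-00000-<uuid>_00003.c000.snappy.parquet.
  private val BucketIdPattern = """.*_(\d+)(?:\..*)?$""".r

  def bucketId(fileName: String): Option[Int] = fileName match {
    case BucketIdPattern(id) => Some(id.toInt)
    case _                   => None
  }

  // One file group per bucket; buckets outside the pruned set are
  // skipped entirely (prunedBuckets = None means "no pruning").
  def groupByBucket(
      files: Seq[String],
      numBuckets: Int,
      prunedBuckets: Option[BitSet]): Map[Int, Seq[String]] = {
    val byBucket = files.groupBy(f => bucketId(f).getOrElse(-1))
    (0 until numBuckets)
      .filter(b => prunedBuckets.forall(_.contains(b)))
      .map(b => b -> byBucket.getOrElse(b, Seq.empty))
      .toMap
  }
}
```

Pruning pays off because a skipped bucket's files never appear in any input partition, so an equality filter on the bucket column touches only one of the N buckets.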

Why are the changes needed?

V2 file tables (default after the gate removal in SPARK-56170) do not support bucket pruning or bucket join optimizations. Workloads that rely on bucketed tables for performance would regress when using V2 file tables.

Does this PR introduce any user-facing change?

No. This is a performance optimization that makes V2 file tables match V1 behavior for bucketed reads.

How was this patch tested?

New V2BucketedReadSuite with 6 tests covering bucket pruning (equality and IN filters), shuffle avoidance in bucketed joins, disabling unnecessary bucketed scans, bucket coalescing, and disabling bucketing via config. The existing BucketedReadSuite (31 tests), DisableUnnecessaryBucketedScanSuite, and CoalesceBucketsInJoinSuite all pass (50 tests total).
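For a manual spot check of the shuffle avoidance (table names hypothetical), comparing plans before and after the change is enough:

```sql
-- With t1 and t2 bucketed identically on the join key, the plan should
-- contain no Exchange node once the scan reports HashPartitioning.
EXPLAIN FORMATTED
SELECT * FROM t1 JOIN t2 ON t1.id = t2.id;
```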

Was this patch authored or co-authored using generative AI tooling?

Generated-by: Claude Code

…Frame API writes and delete FallBackFileSourceV2

Key changes:
- FileWrite: added partitionSchema, customPartitionLocations,
  dynamicPartitionOverwrite, isTruncate; path creation and truncate
  logic; dynamic partition overwrite via FileCommitProtocol
- FileTable: createFileWriteBuilder with SupportsDynamicOverwrite
  and SupportsTruncate; capabilities now include TRUNCATE and
  OVERWRITE_DYNAMIC; fileIndex skips file existence checks when
  userSpecifiedSchema is provided (write path)
- All file format writes (Parquet, ORC, CSV, JSON, Text, Avro) use
  createFileWriteBuilder with partition/truncate/overwrite support
- DataFrameWriter.lookupV2Provider: enabled FileDataSourceV2 for
  non-partitioned Append and Overwrite via df.write.save(path)
- DataFrameWriter.insertInto: V1 fallback for file sources
  (TODO: SPARK-56175)
- DataFrameWriter.saveAsTable: V1 fallback for file sources
  (TODO: SPARK-56230, needs StagingTableCatalog)
- DataSourceV2Utils.getTableProvider: V1 fallback for file sources
  (TODO: SPARK-56175)
- Removed FallBackFileSourceV2 rule
- V2SessionCatalog.createTable: V1 FileFormat data type validation
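For context, the dynamic-partition-overwrite branch added above is reached via the existing session conf, not anything new in this PR:

```
spark.sql.sources.partitionOverwriteMode=dynamic
```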

…catalog table loading, and gate removal

Key changes:
- FileTable extends SupportsPartitionManagement with createPartition,
  dropPartition, listPartitionIdentifiers, partitionSchema
- Partition operations sync to catalog metastore (best-effort)
- V2SessionCatalog.loadTable returns FileTable instead of V1Table,
  sets catalogTable and useCatalogFileIndex on FileTable
- V2SessionCatalog.getDataSourceOptions includes storage.properties
  for proper option propagation (header, ORC bloom filter, etc.)
- V2SessionCatalog.createTable validates data types via FileTable
- FileTable.columns() restores NOT NULL constraints from catalogTable
- FileTable.partitioning() falls back to userSpecifiedPartitioning
  or catalog partition columns
- FileTable.fileIndex uses CatalogFileIndex when catalog has
  registered partitions (custom partition locations)
- FileTable.schema checks column name duplication for non-catalog
  tables only
- DataSourceV2Utils.getTableProvider: removed FileDataSourceV2 gate
- DataFrameWriter.insertInto: enabled V2 for file sources
- DataFrameWriter.saveAsTable: V1 fallback (TODO: SPARK-56230)
- ResolveSessionCatalog: V1 fallback for FileTable-backed commands
  (AnalyzeTable, AnalyzeColumn, TruncateTable, TruncatePartition,
  ShowPartitions, RecoverPartitions, AddPartitions, RenamePartitions,
  DropPartitions, SetTableLocation, CREATE TABLE validation,
  REPLACE TABLE blocking)
- FindDataSourceTable: streaming V1 fallback for FileTable
  (TODO: SPARK-56233)
- DataSource.planForWritingFileFormat: graceful V2 handling

Enable bucketed writes for V2 file tables via catalog BucketSpec.

Key changes:
- FileWrite: add bucketSpec field, use V1WritesUtils.getWriterBucketSpec()
  instead of hardcoded None
- FileTable: createFileWriteBuilder passes catalogTable.bucketSpec
  to the write pipeline
- FileDataSourceV2: getTable uses collect to skip BucketTransform
  (handled via catalogTable.bucketSpec instead)
- FileWriterFactory: use DynamicPartitionDataConcurrentWriter for
  bucketed writes since V2's RequiresDistributionAndOrdering cannot
  express hash-based ordering
- All 6 format Write/Table classes updated with BucketSpec parameter

Note: bucket pruning and bucket join (read-path optimization) are
not included in this patch (tracked under SPARK-56231).
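For context on why the writer must assign bucket ids the same way the reader expects: Spark maps each row to `pmod(hash(bucket columns), numBuckets)`. The sketch below uses Scala's `MurmurHash3` as a stand-in for Spark's `Murmur3Hash` expression, so its ids will not match real bucketed files; only the shape of the mapping is the point.

```scala
import scala.util.hashing.MurmurHash3

// Illustration only: scala.util.hashing.MurmurHash3 stands in for
// Spark's Murmur3Hash expression, so ids here differ from Spark's.
def bucketIdFor(key: String, numBuckets: Int): Int = {
  val h = MurmurHash3.stringHash(key)
  ((h % numBuckets) + numBuckets) % numBuckets // force a non-negative id
}
```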

Add RepairTableExec to sync filesystem partition directories with
catalog metastore for V2 file tables.

Key changes:
- New RepairTableExec: scans filesystem partitions via
  FileTable.listPartitionIdentifiers(), compares with catalog,
  registers missing partitions and drops orphaned entries
- DataSourceV2Strategy: route RepairTable and RecoverPartitions
  for FileTable to new V2 exec node
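Both existing SQL entry points end up at the new exec node for V2 file tables (`sales` is a hypothetical table name):

```sql
MSCK REPAIR TABLE sales;
ALTER TABLE sales RECOVER PARTITIONS;
```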

Implement SupportsOverwriteV2 for V2 file tables to support static
partition overwrite (INSERT OVERWRITE TABLE t PARTITION(p=1) SELECT ...).

Key changes:
- FileTable: replace SupportsTruncate with SupportsOverwriteV2 on
  WriteBuilder, implement overwrite(predicates)
- FileWrite: extend toBatch() to delete only the matching partition
  directory, ordered by partitionSchema
- FileTable.CAPABILITIES: add OVERWRITE_BY_FILTER
- All 6 format Write/Table classes: plumb overwritePredicates parameter

This is a prerequisite for SPARK-56304 (ifPartitionNotExists).
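A minimal sketch (names hypothetical, not the PR's code) of the delete-target resolution described above: equality predicates covering every partition column resolve to exactly one directory, with path segments ordered by the partition schema.

```scala
// Hypothetical helper, not Spark code: resolve equality predicates on
// partition columns to the single directory a static INSERT OVERWRITE
// truncates. Predicates must cover every partition column; the path
// segments follow the partition schema's column order.
def overwriteTargetDir(
    partitionSchema: Seq[String],
    predicates: Map[String, String]): Option[String] =
  if (partitionSchema.nonEmpty && partitionSchema.forall(predicates.contains))
    Some(partitionSchema.map(c => s"$c=${predicates(c)}").mkString("/"))
  else None
```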

@LuciferYang
Contributor Author

LuciferYang commented Apr 7, 2026

This is the 9th PR for SPARK-56170. The commit 770fafe3e1d0ae232488a2911f20fe5edc162911 contains the changes for this patch.
