
Commit e57198a

feat(metric): Add output skewness metric to detect skewed plans easier (#21211)
## Which issue does this PR close?

- Closes #.

## Rationale for this change

### Output Skewness Metric

This introduces a metric to quantify execution skew across partitions, inspired by recent work on intra-partition parallelism for Parquet scans (#20529). cc @Dandandan and @alamb (I think this is related to your ongoing work.)

For example, a Parquet scan with 4 partitions:

```
partition 1: output_rows = 100
partition 2: output_rows = 0
partition 3: output_rows = 0
partition 4: output_rows = 0
```

represents a highly skewed workload, where most of the work is concentrated in a single partition.

This metric normalizes skew into the range `[0%, 100%]`, where `0%` indicates a perfectly balanced distribution and `100%` indicates maximal skew. This makes it easy to detect skew via simple thresholds in automated tooling.

### Demo

Running ClickBench Q41, the Parquet scan has a skew score of 92%. From `EXPLAIN ANALYZE VERBOSE` we can verify that only 2 out of 14 partitions have output rows from the Parquet scan.

```sh
DataFusion CLI v52.3.0

> CREATE or replace EXTERNAL TABLE hits STORED AS PARQUET LOCATION '/Users/yongting/Code/datafusion/benchmarks/data/hits_partitioned';
0 row(s) fetched.
Elapsed 0.338 seconds.

-- Clickbench Q41
> explain analyze SELECT "WindowClientWidth", "WindowClientHeight", COUNT(*) AS PageViews FROM hits WHERE "CounterID" = 62 AND "EventDate" >= '2013-07-01' AND "EventDate" <= '2013-07-31' AND "IsRefresh" = 0 AND "DontCountHits" = 0 AND "URLHash" = 2868770270353813622 GROUP BY "WindowClientWidth", "WindowClientHeight" ORDER BY PageViews DESC LIMIT 10 OFFSET 10000;
...
DataSourceExec: ..., output_rows_skew=92.35%...]
```

### Definition

Let `r_i` be the output rows of partition `i`:

```
# Ranges over [1, partition_count]; if perfectly balanced, it reaches
# partition_count, the maximum effective parallelism
effective_parallelism = (sum(r_i))^2 / sum(r_i^2)

# Convert effective parallelism to a skewness from 0% (perfectly balanced)
# to 100% (extreme skew)
output_rows_skew = (1 - (effective_parallelism - 1) / (partition_count - 1)) * 100%
```

Examples:

- `[10, 10, 10, 10]` → skew = `0%`
- `[40, 0, 0, 0]` → skew = `100%`

### Motivation

There is more follow-up work to do to tackle skewness, and this metric can help guide it. Even with intra-partition parallelism (e.g., a morsel-driven Parquet scan), skew can still propagate to downstream operators:

```
partition 1: parquet_scan(output_rows=100) → FilterExec(90)
partition 2: parquet_scan(0) → FilterExec(0)
partition 3: parquet_scan(0) → FilterExec(0)
partition 4: parquet_scan(0) → FilterExec(0)
```

Downstream operators like `FilterExec` may remain underutilized, suggesting the need for additional techniques such as internal parallelism or repartitioning. This metric helps identify such cases and guide further optimization.

### Why calculate on output_rows

It's easy to implement since `output_rows` is already tracked. The downside is that interpreting it requires some knowledge: for instance, if we see a large skewness value on a Parquet `DataSource`, we want to ensure that a) Parquet has an internal parallelism mechanism to evenly distribute the work, and b) downstream operators won't be affected by the produced skewness.

## What changes are included in this PR?

Add a derived `output_rows_skew` metric to `DataSourceExec` with a Parquet source. In follow-ups I think it will also be useful for other data sources, and for `FilterExec` (which might also introduce skewness during execution).

## Are these changes tested?

sqllogictests

## Are there any user-facing changes?

No
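The skew definition above can be sketched as a small standalone function. This is an illustrative sketch, not the code added by this PR: the real implementation is `output_rows_skew_score` in `baseline.rs`, which works on `u128` row counts and returns a fraction rather than a percentage.

```rust
// Illustrative sketch of the output_rows_skew definition:
//   effective_parallelism = (sum r_i)^2 / sum(r_i^2)
//   skew = (1 - (ep - 1) / (n - 1)) * 100%
// Returns None when there is no data to score.
fn output_rows_skew(output_rows: &[u64]) -> Option<f64> {
    let n = output_rows.len();
    if n == 0 {
        return None;
    }
    // A single partition can't be skewed relative to itself
    if n == 1 {
        return Some(0.0);
    }
    let total: f64 = output_rows.iter().map(|&r| r as f64).sum();
    if total == 0.0 {
        return None;
    }
    let sum_of_squares: f64 = output_rows.iter().map(|&r| (r as f64).powi(2)).sum();
    let effective_parallelism = total.powi(2) / sum_of_squares;
    // Normalize: ep ranges over [1, n], so this maps to [0%, 100%]
    let balanced = (effective_parallelism - 1.0) / (n as f64 - 1.0);
    Some(((1.0 - balanced) * 100.0).clamp(0.0, 100.0))
}

fn main() {
    // The two worked examples from the PR description
    assert_eq!(output_rows_skew(&[10, 10, 10, 10]), Some(0.0));
    assert_eq!(output_rows_skew(&[40, 0, 0, 0]), Some(100.0));
    // Partial skew lands in between: here ep = 1600/1000 = 1.6, skew = 80%
    assert_eq!(output_rows_skew(&[30, 10, 0, 0]), Some(80.0));
    println!("ok");
}
```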
1 parent e883171 commit e57198a

File tree

7 files changed (+275, -32 lines)

datafusion/core/tests/sql/explain_analyze.rs

Lines changed: 3 additions & 2 deletions
```diff
@@ -69,7 +69,7 @@ async fn explain_analyze_baseline_metrics() {
     assert_metrics!(
         &formatted,
         "AggregateExec: mode=Partial, gby=[c1@0 as c1]",
-        "reduction_factor=5.1% (5/99)"
+        "reduction_factor=5.05% (5/99)"
     );

     {
@@ -887,7 +887,8 @@ async fn parquet_explain_analyze() {
         &formatted,
         "row_groups_pruned_statistics=1 total \u{2192} 1 matched"
     );
-    assert_contains!(&formatted, "scan_efficiency_ratio=14%");
+    assert_contains!(&formatted, "output_rows_skew=0%");
+    assert_contains!(&formatted, "scan_efficiency_ratio=13.99%");

    // The order of metrics is expected to be the same as the actual pruning order
    // (file-> row-group -> page)
```

datafusion/datasource/src/source.rs

Lines changed: 17 additions & 2 deletions
```diff
@@ -27,7 +27,9 @@ use datafusion_physical_plan::execution_plan::{
     Boundedness, EmissionType, SchedulingType,
 };
 use datafusion_physical_plan::metrics::SplitMetrics;
-use datafusion_physical_plan::metrics::{ExecutionPlanMetricsSet, MetricsSet};
+use datafusion_physical_plan::metrics::{
+    BaselineMetrics, ExecutionPlanMetricsSet, MetricsSet,
+};
 use datafusion_physical_plan::projection::ProjectionExec;
 use datafusion_physical_plan::stream::BatchSplitStream;
 use datafusion_physical_plan::{
@@ -356,7 +358,20 @@ impl ExecutionPlan for DataSourceExec {
     }

     fn metrics(&self) -> Option<MetricsSet> {
-        Some(self.data_source.metrics().clone_inner())
+        let mut metrics = self.data_source.metrics().clone_inner();
+
+        // Add `output_rows_skew` metric to the metrics set.
+        // Done here because it's a derived metric from output_rows metric.
+        if let Some(file_scan_config) =
+            self.data_source.as_any().downcast_ref::<FileScanConfig>()
+            && file_scan_config.file_source().file_type() == "parquet"
+            && let Some(output_rows_skew) =
+                BaselineMetrics::output_rows_skew_metric(&metrics)
+        {
+            metrics.push(output_rows_skew);
+        }
+
+        Some(metrics)
     }

     fn partition_statistics(&self, partition: Option<usize>) -> Result<Arc<Statistics>> {
```

datafusion/physical-expr-common/src/metrics/baseline.rs

Lines changed: 93 additions & 2 deletions
```diff
@@ -17,12 +17,16 @@

 //! Metrics common for almost all operators

-use std::task::Poll;
+use std::{borrow::Cow, collections::BTreeMap, sync::Arc, task::Poll};

 use arrow::record_batch::RecordBatch;
 use datafusion_common::{Result, utils::memory::get_record_batch_memory_size};

-use super::{Count, ExecutionPlanMetricsSet, MetricBuilder, Time, Timestamp};
+use super::{
+    Count, ExecutionPlanMetricsSet, Metric, MetricBuilder, MetricsSet, Time, Timestamp,
+};
+
+const OUTPUT_ROWS_SKEW_METRIC_NAME: &str = "output_rows_skew";

 /// Helper for creating and tracking common "baseline" metrics for
 /// each operator
@@ -125,6 +129,61 @@ impl BaselineMetrics {
         &self.output_batches
     }

+    /// Returns a derived metric that summarizes how unevenly `output_rows`
+    /// are distributed across partitions.
+    ///
+    /// The score is normalized to the range `[0%, 100%]`, where `0%`
+    /// indicates a perfectly balanced distribution and `100%` indicates the
+    /// most skewed distribution.
+    ///
+    /// The calculation is:
+    /// `effective_parallelism = square(sum(r_i)) / sum(square(r_i))`
+    /// `output_rows_skew = (1 - ((effective_parallelism - 1) / (partition_count - 1))) * 100%`
+    ///
+    /// Example: for 4 partitions with output rows `[10, 10, 10, 10]`,
+    /// `effective_parallelism = 40^2 / (10^2 + 10^2 + 10^2 + 10^2) = 4`,
+    /// so `output_rows_skew = 0%`. For `[40, 0, 0, 0]`, the score is `100%`.
+    pub fn output_rows_skew_metric(metrics: &MetricsSet) -> Option<Arc<Metric>> {
+        let output_rows = metrics
+            .iter()
+            .filter_map(|metric| match (metric.partition(), metric.value()) {
+                (Some(partition), super::MetricValue::OutputRows(count)) => {
+                    Some((partition, count.value() as u128))
+                }
+                _ => None,
+            })
+            .fold(
+                BTreeMap::<usize, u128>::new(),
+                |mut output_rows, (partition, rows)| {
+                    *output_rows.entry(partition).or_default() += rows;
+                    output_rows
+                },
+            )
+            .into_values()
+            .collect::<Vec<_>>();
+
+        if output_rows.is_empty() {
+            return None;
+        }
+
+        let ratio_metrics = super::RatioMetrics::new().with_display_raw_values(false);
+        if let Some(score) = output_rows_skew_score(&output_rows) {
+            ratio_metrics.set_part((score * 10_000.0).round() as usize);
+            ratio_metrics.set_total(10_000);
+        }
+
+        Some(Arc::new(
+            Metric::new(
+                super::MetricValue::Ratio {
+                    name: Cow::Borrowed(OUTPUT_ROWS_SKEW_METRIC_NAME),
+                    ratio_metrics,
+                },
+                None,
+            )
+            .with_type(super::MetricType::DEV),
+        ))
+    }
+
     /// Records the fact that this operator's execution is complete
     /// (recording the `end_time` metric).
     ///
@@ -178,6 +237,38 @@ impl Drop for BaselineMetrics {
     }
 }

+/// See [`BaselineMetrics::output_rows_skew_metric`] for the algorithm.
+fn output_rows_skew_score(output_rows: &[u128]) -> Option<f64> {
+    if output_rows.is_empty() {
+        return None;
+    }
+
+    let partition_count = output_rows.len();
+    if partition_count == 1 {
+        return Some(0.0);
+    }
+
+    let (total_rows, sum_of_squares) =
+        output_rows
+            .iter()
+            .fold((0.0, 0.0), |(total_rows, sum_of_squares), rows| {
+                let rows = *rows as f64;
+                (total_rows + rows, sum_of_squares + rows.powi(2))
+            });
+    if total_rows == 0.0 {
+        return None;
+    }
+
+    if sum_of_squares == 0.0 {
+        return None;
+    }
+
+    let effective_parallelism = total_rows.powi(2) / sum_of_squares;
+    let balanced_score = (effective_parallelism - 1.0) / (partition_count as f64 - 1.0);
+
+    Some((1.0 - balanced_score).clamp(0.0, 1.0))
+}
+
 /// Helper for creating and tracking spill-related metrics for
 /// each operator
 #[derive(Debug, Clone)]
```

datafusion/physical-expr-common/src/metrics/value.rs

Lines changed: 57 additions & 25 deletions
```diff
@@ -468,6 +468,8 @@ pub struct RatioMetrics {
     part: Arc<AtomicUsize>,
     total: Arc<AtomicUsize>,
     merge_strategy: RatioMergeStrategy,
+    /// Ratios are displayed as `1% (1/100)`; this controls the latter part.
+    display_raw_values: bool,
 }

 #[derive(Debug, Clone, Default)]
@@ -485,6 +487,7 @@ impl RatioMetrics {
             part: Arc::new(AtomicUsize::new(0)),
             total: Arc::new(AtomicUsize::new(0)),
             merge_strategy: RatioMergeStrategy::AddPartAddTotal,
+            display_raw_values: true,
         }
     }

@@ -493,6 +496,11 @@ impl RatioMetrics {
         self
     }

+    pub fn with_display_raw_values(mut self, display_raw_values: bool) -> Self {
+        self.display_raw_values = display_raw_values;
+        self
+    }
+
     /// Add `n` to the numerator (`part`) value
     pub fn add_part(&self, n: usize) {
         self.part.fetch_add(n, Ordering::Relaxed);
@@ -544,44 +552,53 @@ impl RatioMetrics {

 impl PartialEq for RatioMetrics {
     fn eq(&self, other: &Self) -> bool {
-        self.part() == other.part() && self.total() == other.total()
+        self.part() == other.part()
+            && self.total() == other.total()
+            && self.display_raw_values == other.display_raw_values
     }
 }

-/// Format a float number with `digits` most significant numbers.
-///
-/// fmt_significant(12.5) -> "12"
-/// fmt_significant(0.0543) -> "0.054"
-/// fmt_significant(0.000123) -> "0.00012"
-fn fmt_significant(mut x: f64, digits: usize) -> String {
-    if x == 0.0 {
-        return "0".to_string();
-    }
-
-    let exp = x.abs().log10().floor(); // exponent of first significant digit
-    let scale = 10f64.powf(-(exp - (digits as f64 - 1.0)));
-    x = (x * scale).round() / scale; // round to N significant digits
-    format!("{x}")
-}
-
 impl Display for RatioMetrics {
+    /// Format the ratio to a format like '18.26% (220/1150)'
     fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
         let part = self.part();
         let total = self.total();

+        // Format the ratio first (for example, `6667/10000` -> `66.67%`),
+        // then optionally append the raw values as ` (6.67 K/10.00 K)`.
+        if total == 0 {
+            write!(f, "N/A")?;
+        } else {
+            // Use basis points so we can round with integer math:
+            // e.g. 18.26% has basis point 1826
+            let basis_points = (((part as u128 * 10_000) + (total as u128 / 2))
+                / total as u128) as usize;
+            let whole = basis_points / 100;
+            let fractional = basis_points % 100;
+
+            if fractional == 0 {
+                write!(f, "{whole}%")?;
+            } else if fractional.is_multiple_of(10) {
+                write!(f, "{whole}.{}%", fractional / 10)?;
+            } else {
+                write!(f, "{whole}.{fractional:02}%")?;
+            }
+        }
+
+        if !self.display_raw_values {
+            return Ok(());
+        }
+
         if total == 0 {
             if part == 0 {
-                write!(f, "N/A (0/0)")
+                write!(f, " (0/0)")
             } else {
-                write!(f, "N/A ({}/0)", human_readable_count(part))
+                write!(f, " ({}/0)", human_readable_count(part))
             }
         } else {
-            let percentage = (part as f64 / total as f64) * 100.0;
-
             write!(
                 f,
-                "{}% ({}/{})",
-                fmt_significant(percentage, 2),
+                " ({}/{})",
                 human_readable_count(part),
                 human_readable_count(total)
             )
@@ -865,7 +882,8 @@ impl MetricValue {
             Self::Ratio {
                 name: name.clone(),
                 ratio_metrics: RatioMetrics::new()
-                    .with_merge_strategy(merge_strategy),
+                    .with_merge_strategy(merge_strategy)
+                    .with_display_raw_values(ratio_metrics.display_raw_values),
             }
         }
         Self::Custom { name, value } => Self::Custom {
@@ -1232,7 +1250,21 @@ mod tests {
         };
         tiny_ratio_metrics.add_part(1);
         tiny_ratio_metrics.add_total(3000);
-        assert_eq!("0.033% (1/3.00 K)", tiny_ratio.to_string());
+        assert_eq!("0.03% (1/3.00 K)", tiny_ratio.to_string());
+
+        ratio_metrics.set_part(6667);
+        ratio_metrics.set_total(10_000);
+        assert_eq!("66.67% (6.67 K/10.00 K)", ratio.to_string());
+
+        let percentage_only = RatioMetrics::new().with_display_raw_values(false);
+        let ratio = MetricValue::Ratio {
+            name: Cow::Borrowed("percentage_only"),
+            ratio_metrics: percentage_only.clone(),
+        };
+        assert_eq!("N/A", ratio.to_string());
+        percentage_only.set_part(6667);
+        percentage_only.set_total(10_000);
+        assert_eq!("66.67%", ratio.to_string());
     }

     #[test]
```
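The rewritten `Display` logic rounds with integer basis points instead of floating-point significant digits, which is why the expected test strings change from `0.033%` to `0.03%` and from `18%` to `18.31%` elsewhere in this commit. A minimal standalone sketch of that rounding (the free function `format_percent` is illustrative; in the library this logic lives inside `impl Display for RatioMetrics`):

```rust
// Illustrative sketch of basis-point percentage rounding: scale the ratio to
// 1/100ths of a percent with integer math (rounding half up), then trim
// trailing zeros from the fractional part.
fn format_percent(part: u64, total: u64) -> String {
    if total == 0 {
        // The real metric also appends raw values like " (0/0)" in this case
        return "N/A".to_string();
    }
    // e.g. part=1826, total=10000 -> 1826 basis points -> "18.26%"
    let basis_points = (part as u128 * 10_000 + total as u128 / 2) / total as u128;
    let whole = basis_points / 100;
    let fractional = basis_points % 100;
    if fractional == 0 {
        format!("{whole}%")
    } else if fractional % 10 == 0 {
        format!("{whole}.{}%", fractional / 10)
    } else {
        format!("{whole}.{fractional:02}%")
    }
}

fn main() {
    assert_eq!(format_percent(6_667, 10_000), "66.67%");
    // 1/3000 = 0.0333% rounds to 3 basis points -> "0.03%", not "0.033%"
    assert_eq!(format_percent(1, 3_000), "0.03%");
    assert_eq!(format_percent(1_826, 10_000), "18.26%");
    assert_eq!(format_percent(0, 0), "N/A");
    println!("ok");
}
```

Integer basis points avoid the float-rounding edge cases of the removed `fmt_significant` helper and give a fixed two-decimal display that is easy to assert on in tests.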

datafusion/sqllogictest/test_files/dynamic_filter_pushdown_config.slt

Lines changed: 1 addition & 1 deletion
```diff
@@ -104,7 +104,7 @@ Plan with Metrics
 03)----ProjectionExec: expr=[id@0 as id, value@1 as v, value@1 + id@0 as name], metrics=[output_rows=10, <slt:ignore>]
 04)------FilterExec: value@1 > 3, metrics=[output_rows=10, <slt:ignore>, selectivity=100% (10/10)]
 05)--------RepartitionExec: partitioning=RoundRobinBatch(4), input_partitions=1, metrics=[output_rows=10, <slt:ignore>]
-06)----------DataSourceExec: file_groups={1 group: [[WORKSPACE_ROOT/datafusion/sqllogictest/test_files/scratch/dynamic_filter_pushdown_config/test_data.parquet]]}, projection=[id, value], file_type=parquet, predicate=value@1 > 3 AND DynamicFilter [ value@1 IS NULL OR value@1 > 800 ], pruning_predicate=value_null_count@1 != row_count@2 AND value_max@0 > 3 AND (value_null_count@1 > 0 OR value_null_count@1 != row_count@2 AND value_max@0 > 800), required_guarantees=[], metrics=[output_rows=10, elapsed_compute=<slt:ignore>, output_bytes=80.0 B, files_ranges_pruned_statistics=1 total → 1 matched, row_groups_pruned_statistics=1 total → 1 matched -> 1 fully matched, row_groups_pruned_bloom_filter=1 total → 1 matched, page_index_pages_pruned=1 total → 1 matched, limit_pruned_row_groups=0 total → 0 matched, bytes_scanned=210, metadata_load_time=<slt:ignore>, scan_efficiency_ratio=18% (210/1.15 K)]
+06)----------DataSourceExec: file_groups={1 group: [[WORKSPACE_ROOT/datafusion/sqllogictest/test_files/scratch/dynamic_filter_pushdown_config/test_data.parquet]]}, projection=[id, value], file_type=parquet, predicate=value@1 > 3 AND DynamicFilter [ value@1 IS NULL OR value@1 > 800 ], pruning_predicate=value_null_count@1 != row_count@2 AND value_max@0 > 3 AND (value_null_count@1 > 0 OR value_null_count@1 != row_count@2 AND value_max@0 > 800), required_guarantees=[], metrics=[output_rows=10, elapsed_compute=<slt:ignore>, output_bytes=80.0 B, files_ranges_pruned_statistics=1 total → 1 matched, row_groups_pruned_statistics=1 total → 1 matched -> 1 fully matched, row_groups_pruned_bloom_filter=1 total → 1 matched, page_index_pages_pruned=1 total → 1 matched, limit_pruned_row_groups=0 total → 0 matched, bytes_scanned=210, metadata_load_time=<slt:ignore>, scan_efficiency_ratio=18.31% (210/1.15 K)]

 statement ok
 set datafusion.explain.analyze_level = dev;
```
