8 changes: 8 additions & 0 deletions datafusion/spark/src/function/datetime/mod.rs
@@ -27,6 +27,7 @@ pub mod last_day;
pub mod make_dt_interval;
pub mod make_interval;
pub mod next_day;
pub mod quarter;
pub mod time_trunc;
pub mod to_utc_timestamp;
pub mod trunc;
@@ -72,6 +73,7 @@ make_udf_function!(
unix_seconds,
unix::SparkUnixTimestamp::seconds
);
make_udf_function!(quarter::SparkQuarter, quarter);

pub mod expr_fn {
use datafusion_functions::export_functions;
@@ -179,6 +181,11 @@ pub mod expr_fn {
"Returns the number of seconds since epoch (1970-01-01 00:00:00 UTC) for the given timestamp `ts`.",
ts
));
export_functions!((
quarter,
"Returns the quarter of the year for date, in the range 1 to 4.",
arg1
));
}

pub fn functions() -> Vec<Arc<ScalarUDF>> {
@@ -204,5 +211,6 @@ pub fn functions() -> Vec<Arc<ScalarUDF>> {
unix_micros(),
unix_millis(),
unix_seconds(),
quarter(),
]
}
102 changes: 102 additions & 0 deletions datafusion/spark/src/function/datetime/quarter.rs
@@ -0,0 +1,102 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.

use arrow::array::{Array, ArrayRef};
use arrow::compute::{CastOptions, DatePart, cast_with_options, date_part};
use arrow::datatypes::{DataType, Field, FieldRef, TimeUnit};
use datafusion::logical_expr::{ColumnarValue, Signature, TypeSignature, Volatility};
use datafusion_common::utils::take_function_args;
use datafusion_common::{Result, internal_err};
use datafusion_expr::{ReturnFieldArgs, ScalarFunctionArgs, ScalarUDFImpl};
use datafusion_functions::utils::make_scalar_function;
use std::sync::Arc;

#[derive(Debug, PartialEq, Eq, Hash)]
pub struct SparkQuarter {
signature: Signature,
}

impl Default for SparkQuarter {
fn default() -> Self {
Self::new()
}
}

impl SparkQuarter {
pub fn new() -> Self {
Self {
signature: Signature::one_of(
vec![
TypeSignature::Exact(vec![DataType::Utf8]),
TypeSignature::Exact(vec![DataType::Utf8View]),
TypeSignature::Exact(vec![DataType::LargeUtf8]),
TypeSignature::Exact(vec![DataType::Date32]),
TypeSignature::Exact(vec![DataType::Timestamp(
Contributor

I think there is still one important gap here.

quarter is still declared with an exact Timestamp(Millisecond, None) signature, while Spark's date_part wrapper already uses the broader coercible timestamp path.
Because of that, timestamp inputs with other units or timezones can still get rejected during planning, even though the implementation below handles DataType::Timestamp(_, _) once execution starts.

Could we align this with the existing Spark datetime coercion model so quarter behaves consistently with the rest of that path?
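
A sketch of what the broader timestamp arm might look like, assuming the Coercion / TypeSignatureClass API used by DataFusion's coercible signatures; the exact constructor names vary between releases:

use datafusion_expr::{Coercion, TypeSignatureClass};

// Hypothetical replacement for the exact Timestamp(Millisecond, None) arm:
// accept any timestamp unit and timezone through the coercible path.
TypeSignature::Coercible(vec![Coercion::new_exact(TypeSignatureClass::Timestamp)]),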

Contributor Author

Fixed

TimeUnit::Millisecond,
None,
)]),
],
Volatility::Immutable,
),
}
}
}

impl ScalarUDFImpl for SparkQuarter {
fn name(&self) -> &str {
"quarter"
}

fn signature(&self) -> &Signature {
&self.signature
}

fn return_type(&self, _arg_types: &[DataType]) -> Result<DataType> {
internal_err!("return_field_from_args should be used instead")
}

fn return_field_from_args(&self, args: ReturnFieldArgs) -> Result<FieldRef> {
Ok(Arc::new(Field::new(
self.name(),
DataType::Int32,
args.arg_fields[0].is_nullable(),
Contributor

I think the return-field nullability needs to be loosened here.

Right now return_field_from_args mirrors the input field nullability, but the new string path can produce NULL even when the input is non-null. This patch adds cases like quarter('abc'::string) and quarter(''::string) returning NULL, so quarter(non_null_utf8_col) would still be advertised as Int32 NOT NULL even though execution can yield nulls.

That looks like a schema contract bug. It also differs from existing Spark helpers like next_day, which force nullable output when invalid strings can map to NULL.
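
A minimal sketch of the loosened contract, using only the API already present in this diff: always advertise the Int32 result as nullable, since the string path can produce NULL from non-null input.

fn return_field_from_args(&self, _args: ReturnFieldArgs) -> Result<FieldRef> {
    // Always nullable: quarter('abc'::string) evaluates to NULL even when the
    // input column is declared NOT NULL.
    Ok(Arc::new(Field::new(self.name(), DataType::Int32, true)))
}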

Contributor Author

Thanks. fixed

)))
}

fn invoke_with_args(&self, args: ScalarFunctionArgs) -> Result<ColumnarValue> {
make_scalar_function(spark_quarter, vec![])(&args.args)
}
}

fn spark_quarter(args: &[ArrayRef]) -> Result<ArrayRef> {
let [array] = take_function_args("quarter", args)?;
match array.data_type() {
DataType::Date32 | DataType::Timestamp(_, _) => {
let quarter = date_part(array, DatePart::Quarter)?;
Ok(quarter)
}
DataType::Utf8 | DataType::Utf8View | DataType::LargeUtf8 => {
let date_array =
cast_with_options(array, &DataType::Date32, &CastOptions::default())?;
Contributor

I am a bit concerned that the new string handling is narrower than the shared datetime coercion path.

This currently forces every string through a Date32 cast before calling date_part. That can reject valid timestamp-shaped strings that date_part already accepts elsewhere, for example date_part('second', '2020-09-08T12:00:12.12345678+00:00') in datafusion/sqllogictest/test_files/datetime/date_part.slt.

Because this does not route through the existing date_part('quarter', ...) behavior, quarter can still diverge from the rest of the datetime coercion model for string inputs. Could we reuse the same coercion path here so the behavior stays aligned?
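
One possible direction, sketched here under the assumption that casting strings to a timestamp type (rather than Date32) before date_part is acceptable; the Microsecond unit is an arbitrary choice and this is not verified against the shared date_part coercion:

DataType::Utf8 | DataType::Utf8View | DataType::LargeUtf8 => {
    // Hypothetical: parse strings as timestamps so values like
    // '2020-09-08T12:00:12.12345678+00:00' are not rejected by a Date32 cast.
    // With the default (safe) CastOptions, unparseable strings become NULL.
    let ts_array = cast_with_options(
        array,
        &DataType::Timestamp(TimeUnit::Microsecond, None),
        &CastOptions::default(),
    )?;
    Ok(date_part(&ts_array, DatePart::Quarter)?)
}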

Contributor Author

@kazantsev-maksim Mar 31, 2026

@kosiew datafusion's date_part does not support string types, you'll still have to cast to a date type first.

Contributor

Thanks for addressing the earlier feedback. One thing that still stands out is how string inputs are handled here.

Right now all strings are cast through Date32 before calling date_part. This means something like quarter('2020-09-08T12:00:12.12345678+00:00') goes through a narrower path compared to date_part('quarter', ...).

The previous review suggested aligning quarter with the shared datetime coercion behavior. With the current approach, timestamp-shaped strings that DataFusion already accepts elsewhere may still be rejected or behave differently here.

Could we reuse the same coercion path as date_part so the behavior stays consistent across functions?

Contributor Author

@kosiew Did you mean something like this?

let quarter = date_part(&date_array, DatePart::Quarter)?;
Ok(quarter)
}
data_type => {
internal_err!("quarter does not support: {data_type}")
}
}
}
64 changes: 54 additions & 10 deletions datafusion/sqllogictest/test_files/spark/datetime/quarter.slt
@@ -15,13 +15,57 @@
# specific language governing permissions and limitations
# under the License.

# This file was originally created by a porting script from:
# https://github.com/lakehq/sail/tree/43b6ed8221de5c4c4adbedbb267ae1351158b43c/crates/sail-spark-connect/tests/gold_data/function
# This file is part of the implementation of the datafusion-spark function library.
# For more information, please see:
# https://github.com/apache/datafusion/issues/15914

## Original Query: SELECT quarter('2016-08-31');
## PySpark 3.5.5 Result: {'quarter(2016-08-31)': 3, 'typeof(quarter(2016-08-31))': 'int', 'typeof(2016-08-31)': 'string'}
#query
#SELECT quarter('2016-08-31'::string);
query I
Contributor

The added coverage for DATE and TIMESTAMP inputs looks good 👍

That said, we’re still missing the specific Spark regression case that was called out earlier: SELECT quarter('2016-08-31');

Since the implementation still doesn’t accept plain string literals, not having this exact case in the SLT means the mismatch isn’t being caught.

It would be great to add this test back in so we lock in the expected Spark behavior and prevent regressions once the coercion issue is fixed.

SELECT quarter('2009-01-12'::date);
----
1

query I
SELECT quarter('1970-01-01'::date);
----
1

query I
SELECT quarter('1870-01-01'::date);
----
1

query I
SELECT quarter('2011-04-21'::date);
----
2

query I
SELECT quarter('2024-08-14'::date);
----
3

query I
SELECT quarter('2016-12-12'::date);
----
4

query I
SELECT quarter(NULL::date);
----
NULL

query I
SELECT quarter('2009-01-12 10:00:00'::timestamp);
----
1

query I
SELECT quarter('2009-01-12'::string);
Contributor

Nice to see the string coverage added here.

I think we still need the specific regression case from Spark's documented uncasted form.

Right now this file checks quarter('2009-01-12'::string), but it does not restore a plain string literal query like quarter('2016-08-31').

Since preserving that call shape was the reason for broadening the signature, could we add that case back as well?

Contributor Author

Fixed

----
1

query I
SELECT quarter('abc'::string);
----
NULL

query I
SELECT quarter(''::string);
----
NULL
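
For reference, the uncasted regression case requested in the review comments would look like this once plain string literals are accepted; the expected value 3 comes from the PySpark 3.5.5 result quoted at the top of this file:

query I
SELECT quarter('2016-08-31');
----
3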