
Add query filter, project, aggregation, sort converters #21047

Merged
mch2 merged 5 commits into opensearch-project:main from vinaykpud:feature/datafusion-dsl-query-agg
Apr 13, 2026

Conversation

Contributor

vinaykpud commented Mar 30, 2026

Description

Adds DSL-to-Calcite converters for the dsl-query-executor sandbox plugin. Converts OpenSearch SearchSourceBuilder DSL into Calcite RelNode logical plans for execution by the analytics engine.

Converters added:

  • FilterConverter — converts query DSL (term query) into LogicalFilter
  • ProjectConverter — converts _source field selection (includes, excludes, wildcards) into LogicalProject
  • SortConverter — converts sort, from, size into LogicalSort with collation and pagination
  • AggregateConverter — converts aggregations into LogicalAggregate (GROUP BY + metrics)
  • PostAggregateConverter — applies post-aggregation sorting from bucket orders (_key, _count, metric-based)
  • CollationResolver — resolves BucketOrder to Calcite RelFieldCollation against the post-agg schema

Supporting infrastructure:

  • SearchSourceConverter — orchestrator that wires converters into query plans:
    - Scan → Filter → Project → Sort (HITS)
    - Scan → Filter → Aggregate → PostAggregate (AGGREGATION)
  • ConversionContext — immutable shared state for converters
  • AggregationTreeWalker — walks nested aggregation DSL tree, groups metrics by granularity level, collects bucket orders
  • AggregationMetadata / AggregationMetadataBuilder — pre-computed metadata per granularity with GROUP BY, aggregate calls, and bucket orders
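
The wiring above can be sketched as a chain of converters folding over a logical plan. The sketch below is illustrative only — `Plan`, `step`, and `PipelineSketch` are invented stand-ins (plain Java, no Calcite dependency), not the PR's actual `RelNode`-based API:

```java
import java.util.List;
import java.util.function.UnaryOperator;

// Hypothetical stand-in for a Calcite RelNode: just a readable plan description.
record Plan(String description) {}

public class PipelineSketch {
    // Each converter takes the plan built so far and wraps it, mirroring how
    // FilterConverter, ProjectConverter, and SortConverter each derive a new
    // node from their input.
    static UnaryOperator<Plan> step(String name) {
        return in -> new Plan(name + "(" + in.description() + ")");
    }

    // The HITS pipeline: Scan -> Filter -> Project -> Sort.
    static Plan buildHitsPlan() {
        Plan plan = new Plan("Scan");
        for (UnaryOperator<Plan> converter : List.of(step("Filter"), step("Project"), step("Sort"))) {
            plan = converter.apply(plan);
        }
        return plan;
    }

    public static void main(String[] args) {
        System.out.println(buildHitsPlan().description()); // Sort(Project(Filter(Scan)))
    }
}
```

The AGGREGATION pipeline would swap the tail of the chain, applying Aggregate and PostAggregate steps after Filter instead of Project and Sort.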

Related Issues

#20914

Check List

  • Functionality includes testing.
  • API changes companion pull request created, if applicable.
  • Public documentation issue/PR created, if applicable.

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following the Developer Certificate of Origin and signing off your commits, please check here.

Contributor

github-actions Bot commented Mar 30, 2026

PR Reviewer Guide 🔍

(Review updated until commit c9071d3)

Here are some key observations to aid the review process:

🧪 PR contains tests
🔒 No security concerns identified
📝 TODO sections

🔀 Multiple PR themes

Sub-PR theme: Add query filter, project, and sort converters

Relevant files:

  • sandbox/plugins/dsl-query-executor/src/main/java/org/opensearch/dsl/converter/FilterConverter.java
  • sandbox/plugins/dsl-query-executor/src/main/java/org/opensearch/dsl/converter/ProjectConverter.java
  • sandbox/plugins/dsl-query-executor/src/main/java/org/opensearch/dsl/converter/SortConverter.java
  • sandbox/plugins/dsl-query-executor/src/test/java/org/opensearch/dsl/converter/FilterConverterTests.java
  • sandbox/plugins/dsl-query-executor/src/test/java/org/opensearch/dsl/converter/ProjectConverterTests.java
  • sandbox/plugins/dsl-query-executor/src/test/java/org/opensearch/dsl/converter/SortConverterTests.java
  • sandbox/plugins/dsl-query-executor/src/main/java/org/opensearch/dsl/query/TermQueryTranslator.java
  • sandbox/plugins/dsl-query-executor/src/test/java/org/opensearch/dsl/query/TermQueryTranslatorTests.java

Sub-PR theme: Add aggregation tree walker, metadata, and aggregate/post-aggregate converters

Relevant files:

  • sandbox/plugins/dsl-query-executor/src/main/java/org/opensearch/dsl/aggregation/AggregationMetadata.java
  • sandbox/plugins/dsl-query-executor/src/main/java/org/opensearch/dsl/aggregation/AggregationMetadataBuilder.java
  • sandbox/plugins/dsl-query-executor/src/main/java/org/opensearch/dsl/aggregation/AggregationTreeWalker.java
  • sandbox/plugins/dsl-query-executor/src/main/java/org/opensearch/dsl/aggregation/bucket/TermsBucketTranslator.java
  • sandbox/plugins/dsl-query-executor/src/main/java/org/opensearch/dsl/converter/AggregateConverter.java
  • sandbox/plugins/dsl-query-executor/src/main/java/org/opensearch/dsl/converter/PostAggregateConverter.java
  • sandbox/plugins/dsl-query-executor/src/main/java/org/opensearch/dsl/converter/CollationResolver.java
  • sandbox/plugins/dsl-query-executor/src/test/java/org/opensearch/dsl/aggregation/AggregationTreeWalkerTests.java
  • sandbox/plugins/dsl-query-executor/src/test/java/org/opensearch/dsl/aggregation/bucket/TermsBucketTranslatorTests.java
  • sandbox/plugins/dsl-query-executor/src/test/java/org/opensearch/dsl/aggregation/metric/MetricTranslatorTests.java
  • sandbox/plugins/dsl-query-executor/src/test/java/org/opensearch/dsl/converter/AggregateConverterTests.java
  • sandbox/plugins/dsl-query-executor/src/test/java/org/opensearch/dsl/converter/PostAggregateConverterTests.java
  • sandbox/plugins/dsl-query-executor/src/test/java/org/opensearch/dsl/converter/CollationResolverTests.java

⚡ Recommended focus areas for review

Direction Detection

In resolveMetricOrder, the ascending flag is determined by equality check with BucketOrder.aggregation(fieldName, true). This relies on the equals implementation of InternalOrder.Aggregation being stable and correct. If the equals contract changes or the path representation differs from the field name string, the direction will silently be wrong (always false/descending). Consider verifying this assumption with a comment or a more robust approach.

private static OrderTarget resolveMetricOrder(InternalOrder.Aggregation aggOrder) {
    String fieldName = aggOrder.path().toString();
    boolean ascending = aggOrder.equals(BucketOrder.aggregation(fieldName, true));
    return new OrderTarget(fieldName, ascending);
}
Null Aggregations

In hasAggregations, searchSource.aggregations() is checked for null, but getAggregatorFactories() is also null-checked separately. If aggregations() returns a non-null object but getAggregatorFactories() returns null, the behavior is correct, but this pattern may be fragile if the API changes. Consider consolidating or adding a comment explaining why both checks are needed.

private static boolean hasAggregations(SearchSourceBuilder searchSource) {
    return searchSource.aggregations() != null
        && searchSource.aggregations().getAggregatorFactories() != null
        && !searchSource.aggregations().getAggregatorFactories().isEmpty();
}
Granularity Key Collision

The granularityKey method uses field names joined by commas and pipes. If a field name contains | or ,, two different grouping paths could produce the same key, causing incorrect merging of metadata builders. Consider using a more collision-resistant separator or encoding.

private static String granularityKey(List<GroupingInfo> groupings) {
    if (groupings.isEmpty()) {
        return "";
    }
    return IntStream.range(0, groupings.size())
        .mapToObj(i -> i + ":" + String.join(",", groupings.get(i).getFieldNames()))
        .collect(Collectors.joining("|"));
}
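The separator collision called out above can be reproduced with a stdlib-only model of the key scheme. The `key` helper and the field names below are invented for illustration — this is a sketch of the scheme, not the PR's actual `granularityKey` signature, which takes `List<GroupingInfo>`:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class GranularityKeyCollision {
    // Same scheme as granularityKey: "index:fields-joined-by-comma", joined by "|".
    static String key(List<List<String>> groupings) {
        if (groupings.isEmpty()) {
            return "";
        }
        return IntStream.range(0, groupings.size())
            .mapToObj(i -> i + ":" + String.join(",", groupings.get(i)))
            .collect(Collectors.joining("|"));
    }

    public static void main(String[] args) {
        // Two nested groupings on "a" then "b"...
        String nested = key(List.of(List.of("a"), List.of("b")));
        // ...collide with a single grouping whose field name embeds the separators.
        String single = key(List.of(List.of("a|1:b")));
        System.out.println(nested);                // 0:a|1:b
        System.out.println(nested.equals(single)); // true
    }
}
```

Because both inputs map to the key `0:a|1:b`, their metadata builders would be merged, which is exactly the incorrect-merging risk described above.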
Score Sort Ignored

The TODO comment notes that ScoreSortBuilder (_score sort) is not handled and will throw a ConversionException. This is a common default sort in OpenSearch queries. If a user sends a query without an explicit sort, OpenSearch defaults to _score DESC, which would cause a runtime exception rather than a graceful fallback. This should be documented or handled explicitly.

// TODO: handle ScoreSortBuilder (_score sort)
@Override
Nullable Type Override

When noGroupBy is true, metric return types are made nullable. However, the COUNT implicit call is always created with a non-nullable BIGINT type regardless of the noGroupBy flag. This is intentional per the comment, but it means the implicit count is added after the nullable-type loop, so it bypasses the nullability logic entirely. Verify this is correct and add a comment clarifying the intentional asymmetry.

if (implicitCountRequested) {
    allCalls.add(
        AggregateCall.create(
            SqlStdOperatorTable.COUNT,
            false,
            false,
            false,
            List.of(),
            -1,
            RelCollations.EMPTY,
            typeFactory.createSqlType(SqlTypeName.BIGINT),
            IMPLICIT_COUNT_NAME
        )
    );
    allFieldNames.add(IMPLICIT_COUNT_NAME);
}

Contributor

github-actions Bot commented Mar 30, 2026

PR Code Suggestions ✨

Latest suggestions up to c9071d3

Explore these optional code suggestions:

Category | Suggestion | Impact
Possible issue
Reject unresolved query nodes in filter conversion

If queryRegistry.convert() returns an UnresolvedQueryCall (e.g. for a wildcard
query), it is silently passed to LogicalFilter.create(). This could cause downstream
failures in the Calcite planner or executor that are hard to diagnose. The converter
should detect this case and either throw a ConversionException or handle it
explicitly.

sandbox/plugins/dsl-query-executor/src/main/java/org/opensearch/dsl/converter/FilterConverter.java [42-45]

 @Override
 protected RelNode doConvert(RelNode input, ConversionContext ctx) throws ConversionException {
     RexNode condition = queryRegistry.convert(ctx.getSearchSource().query(), ctx);
+    if (condition instanceof org.opensearch.dsl.query.UnresolvedQueryCall) {
+        throw new ConversionException(
+            "Query type '" + ctx.getSearchSource().query().getClass().getSimpleName() + "' is not supported for conversion"
+        );
+    }
     return LogicalFilter.create(input, condition);
 }
Suggestion importance[1-10]: 7


Why: Silently passing an UnresolvedQueryCall to LogicalFilter.create() could cause cryptic failures in the Calcite planner. However, the integration test testWildcardQueryWithUnresolvedNode explicitly tests that wildcard queries (which produce UnresolvedQueryCall) succeed, suggesting the design intent is to allow unresolved nodes to pass through for downstream handling rather than fail here.

Medium
Guard against null term query values

The value from TermQueryBuilder.value() can be null, and passing null to makeLiteral
with allowCasts=true may cause a NullPointerException or produce an incorrect
literal. Add a null check and throw a ConversionException if the value is null,
since a term query with a null value is semantically invalid.

sandbox/plugins/dsl-query-executor/src/main/java/org/opensearch/dsl/query/TermQueryTranslator.java [45]

+if (value == null) {
+    throw new ConversionException("Term query field '" + fieldName + "' has null value");
+}
 RexNode literal = ctx.getRexBuilder().makeLiteral(value, field.getType(), true);
Suggestion importance[1-10]: 6


Why: A null value from TermQueryBuilder.value() could cause a NullPointerException in makeLiteral. Adding a null check with a descriptive ConversionException improves robustness, though term queries with null values are uncommon in practice.

Low
Granularity key may produce incorrect collisions

The granularity key uses positional index (i) combined with field names, meaning two
different nesting paths with the same fields at the same depth would produce the
same key and incorrectly merge their metadata. For example, terms(brand) →
terms(name) and a sibling terms(brand) → terms(name) at a different path would
collide. The key should be based solely on the accumulated field names without
positional index, or include the full path structure to ensure uniqueness.

sandbox/plugins/dsl-query-executor/src/main/java/org/opensearch/dsl/aggregation/AggregationTreeWalker.java [147-154]

 private static String granularityKey(List<GroupingInfo> groupings) {
     if (groupings.isEmpty()) {
         return "";
     }
-    return IntStream.range(0, groupings.size())
-        .mapToObj(i -> i + ":" + String.join(",", groupings.get(i).getFieldNames()))
+    return groupings.stream()
+        .map(g -> String.join(",", g.getFieldNames()))
         .collect(Collectors.joining("|"));
 }
Suggestion importance[1-10]: 3


Why: The suggestion claims removing the positional index i from the key is better, but the current implementation with i + ":" + fieldNames actually provides more uniqueness than the proposed change. The proposed improved_code could cause more collisions (e.g., two different nesting levels with the same field names), making this suggestion potentially incorrect.

Low
General
Silent empty aggregation metadata not handled

The aggregation path uses base (the filtered scan) as input to aggConverter.convert,
which is correct. However, if metadataList is empty (e.g., all aggregations are
unsupported and filtered out), the hasAggs block silently produces no aggregation
plans without any warning or error. This could mask misconfigured queries. Consider
logging a warning or throwing a ConversionException when hasAggs is true but
metadataList is empty.

sandbox/plugins/dsl-query-executor/src/main/java/org/opensearch/dsl/converter/SearchSourceConverter.java [118-130]

 if (hasAggs) {
     List<AggregationMetadata> metadataList = treeWalker.walk(
         searchSource.aggregations().getAggregatorFactories(),
         table.getRowType(),
         cluster.getTypeFactory()
     );
+    if (metadataList.isEmpty()) {
+        throw new ConversionException("Aggregations were specified but produced no valid aggregation metadata");
+    }
     for (AggregationMetadata metadata : metadataList) {
         ConversionContext aggCtx = ctx.withAggregationMetadata(metadata);
         RelNode aggs = aggConverter.convert(base, metadata);
         aggs = postAggConverter.convert(aggs, aggCtx);
         builder.add(new QueryPlans.QueryPlan(QueryPlans.Type.AGGREGATION, aggs));
     }
 }
Suggestion importance[1-10]: 5


Why: Adding a guard for empty metadataList when hasAggs is true is a reasonable defensive check that could help surface misconfigured queries. However, in practice treeWalker.walk already throws ConversionException for unsupported aggregation types, so this scenario may be unlikely.

Low
Validate and defensively copy record fields

The keys list is mutable and not validated in the compact constructor. A caller
could pass a mutable list that gets modified later, or pass null for keys. The
compact constructor should validate non-null inputs and defensively copy keys
(though List.copyOf is already used in FieldGrouping, the same pattern should apply
here).

sandbox/plugins/dsl-query-executor/src/main/java/org/opensearch/dsl/result/BucketEntry.java [22-31]

 public record BucketEntry(List<Object> keys, long docCount, InternalAggregations subAggs) {
-    /**
-     * Creates a bucket entry.
-     *
-     * @param keys the bucket key values
-     * @param docCount the document count
-     * @param subAggs the sub-aggregation results
-     */
     public BucketEntry {
+        Objects.requireNonNull(keys, "keys must not be null");
+        Objects.requireNonNull(subAggs, "subAggs must not be null");
+        keys = List.copyOf(keys);
     }
Suggestion importance[1-10]: 5


Why: The compact constructor for BucketEntry doesn't validate inputs or defensively copy the mutable keys list, which could lead to subtle bugs if callers modify the list after construction. Adding null checks and List.copyOf improves correctness and consistency with the pattern used in FieldGrouping.

Low
Fragile direction detection via equality check

The ascending direction is determined by checking equality with a newly constructed
BucketOrder.aggregation(fieldName, true), which relies on equals() being correctly
implemented for InternalOrder.Aggregation. If equals() is not implemented or
compares additional state, this check could silently return the wrong direction. A
more robust approach would be to directly access the sort direction from the
aggOrder object (e.g., via a reflection-based accessor or a dedicated helper), or at
minimum add a comment explaining why this equality check is reliable.

sandbox/plugins/dsl-query-executor/src/main/java/org/opensearch/dsl/converter/CollationResolver.java [98-102]

 private static OrderTarget resolveMetricOrder(InternalOrder.Aggregation aggOrder) {
     String fieldName = aggOrder.path().toString();
-    boolean ascending = aggOrder.equals(BucketOrder.aggregation(fieldName, true));
+    // Determine ascending by checking equality with the ascending variant.
+    // This relies on InternalOrder.Aggregation.equals() comparing path and direction only.
+    BucketOrder ascVariant = BucketOrder.aggregation(fieldName, true);
+    boolean ascending = aggOrder.equals(ascVariant);
     return new OrderTarget(fieldName, ascending);
 }
Suggestion importance[1-10]: 4


Why: The concern about relying on equals() for direction detection is valid, but the comment in the existing code already acknowledges this limitation. The improved_code is nearly identical to existing_code with only a comment added, making this a low-impact suggestion.

Low
Empty projection validity assumption undocumented

When fetchSource is false, an empty projection is created with no fields. However,
LogicalProject.create with empty projects and field names may not be valid in all
Calcite versions and could cause downstream plan validation failures. It would be
safer to verify that Calcite accepts a zero-field projection, or document this
assumption explicitly with a test that exercises the full planning pipeline.

sandbox/plugins/dsl-query-executor/src/main/java/org/opensearch/dsl/converter/ProjectConverter.java [43-45]

 if (!fetchSource.fetchSource()) {
-    return LogicalProject.create(input, List.of(), List.of(), List.of());
+    // Empty projection: _source: false — no fields returned.
+    // Calcite accepts zero-field LogicalProject; validated by testEmptyProjectionWhenFetchSourceFalse.
+    return LogicalProject.create(input, List.of(), List.of(), List.<String>of());
 }
Suggestion importance[1-10]: 1


Why: The improved_code is functionally identical to the existing_code with only a comment added, and the test testEmptyProjectionWhenFetchSourceFalse already validates this behavior. This is purely a documentation suggestion with no functional impact.

Low

Previous suggestions

Suggestions up to commit 45f3ae9
Category | Suggestion | Impact
Possible issue
Handle unresolved query nodes in filter conversion

If queryRegistry.convert() returns an UnresolvedQueryCall, it is silently passed to
LogicalFilter.create(). This could cause unexpected behavior or errors in the query
planner. You should check for UnresolvedQueryCall and either handle it explicitly or
throw a ConversionException.

sandbox/plugins/dsl-query-executor/src/main/java/org/opensearch/dsl/converter/FilterConverter.java [42-45]

 @Override
 protected RelNode doConvert(RelNode input, ConversionContext ctx) throws ConversionException {
     RexNode condition = queryRegistry.convert(ctx.getSearchSource().query(), ctx);
+    if (condition instanceof org.opensearch.dsl.query.UnresolvedQueryCall) {
+        throw new ConversionException("Query type not supported for filter conversion: "
+            + ctx.getSearchSource().query().getClass().getSimpleName());
+    }
     return LogicalFilter.create(input, condition);
 }
Suggestion importance[1-10]: 6


Why: Silently passing an UnresolvedQueryCall to LogicalFilter.create() could cause unexpected behavior in the query planner. However, the PR's design intentionally allows unresolved nodes to propagate for the optimizer to handle, so throwing here may conflict with the intended architecture.

Low
Use direct direction accessor instead of equality comparison

The ascending check compares the order object against a newly created
BucketOrder.aggregation(fieldName, true), but this relies on equals() being
correctly implemented for InternalOrder.Aggregation. A more reliable approach is to
directly inspect the order's direction via a dedicated method or field, since
equals() may compare additional state. If InternalOrder.Aggregation exposes an
order() method returning SortOrder, use that instead to avoid subtle equality bugs.

sandbox/plugins/dsl-query-executor/src/main/java/org/opensearch/dsl/converter/CollationResolver.java [65-68]

 if (order instanceof InternalOrder.Aggregation aggOrder) {
     fieldName = aggOrder.path().toString();
-    ascending = aggOrder.equals(BucketOrder.aggregation(fieldName, true));
+    ascending = aggOrder.order() == SortOrder.ASC;
 }
Suggestion importance[1-10]: 5


Why: The equals() comparison against a newly created BucketOrder.aggregation(fieldName, true) is fragile and may not work correctly if InternalOrder.Aggregation.equals() compares additional state. Using a direct accessor like order() would be more reliable, but it's unclear if InternalOrder.Aggregation exposes such a method in this codebase.

Low
Validate literal value type before conversion

The makeLiteral call passes value as a raw Object without type-checking or casting.
If value is not compatible with field.getType(), this can cause a runtime exception
or silent type coercion. You should validate or cast value to the appropriate Java
type before creating the literal.

sandbox/plugins/dsl-query-executor/src/main/java/org/opensearch/dsl/query/TermQueryTranslator.java [45]

-RexNode literal = ctx.getRexBuilder().makeLiteral(value, field.getType(), true);
+RexNode literal = ctx.getRexBuilder().makeLiteral(value.toString(), field.getType(), true);
Suggestion importance[1-10]: 3


Why: Converting value to a string via toString() is not a proper fix — it would break numeric and boolean field types. The suggestion identifies a real concern but the improved_code is incorrect and could introduce more bugs than it solves.

Low
General
Ensure fetch limit is set when offset is present

buildFetch returns null when pagination is default, but buildOffset also returns
null when from <= 0. If doConvert is called (meaning isApplicable returned true),
it's possible that only a sort is present with default pagination — in that case
both offset and fetch are null, which is valid for LogicalSort. However, if only
from > 0 is set (no sort, non-default from), buildFetch returns null (size is
default), meaning the fetch limit is not applied. This could result in unbounded
result sets when only an offset is specified. Consider always setting fetch when
offset is set.

sandbox/plugins/dsl-query-executor/src/main/java/org/opensearch/dsl/converter/SortConverter.java [97-104]

 private RexNode buildFetch(ConversionContext ctx) {
-    if (!hasNonDefaultPagination(ctx)) {
-        return null;
-    }
     SearchSourceBuilder ss = ctx.getSearchSource();
     int size = ss.size() != -1 ? ss.size() : SearchService.DEFAULT_SIZE;
-    return ctx.getRexBuilder().makeLiteral(size, ctx.getCluster().getTypeFactory().createSqlType(SqlTypeName.INTEGER), false);
+    int from = ss.from() != -1 ? ss.from() : SearchService.DEFAULT_FROM;
+    // Always set fetch when offset is present, or when size is non-default
+    if (from > 0 || size != SearchService.DEFAULT_SIZE) {
+        return ctx.getRexBuilder().makeLiteral(size, ctx.getCluster().getTypeFactory().createSqlType(SqlTypeName.INTEGER), false);
+    }
+    return null;
 }
Suggestion importance[1-10]: 5


Why: The concern about unbounded result sets when only an offset is specified is valid — if from > 0 but size is default, buildFetch returns null, potentially omitting the fetch limit from LogicalSort. The improved code ensures fetch is always set when offset is present.

Low
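The offset/size interaction in this suggestion can be isolated with a stdlib sketch. `DEFAULT_SIZE = 10` and the `-1` sentinel are assumptions mirroring common SearchSourceBuilder/SearchService conventions, and `buildFetch` here models the suggested (fixed) behavior rather than the PR's exact code:

```java
public class FetchSketch {
    static final int DEFAULT_SIZE = 10; // assumed stand-in for SearchService.DEFAULT_SIZE

    // Suggested behavior: emit a fetch limit whenever an offset is present,
    // or whenever the size differs from the default; otherwise emit none.
    static Integer buildFetch(int from, int size) {
        int effectiveSize = size != -1 ? size : DEFAULT_SIZE;
        int effectiveFrom = from != -1 ? from : 0;
        if (effectiveFrom > 0 || effectiveSize != DEFAULT_SIZE) {
            return effectiveSize;
        }
        return null;
    }

    public static void main(String[] args) {
        // Offset set with default size: a fetch limit is still applied.
        System.out.println(buildFetch(5, -1));  // 10
        // All defaults: no fetch literal needed on the LogicalSort.
        System.out.println(buildFetch(-1, -1)); // null
    }
}
```

Under the original logic, the first case (`from=5`, default size) would return no fetch at all, which is the unbounded-result-set concern raised above.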
Avoid shared mutable context across aggregation iterations

The same ConversionContext (ctx) is mutated via setAggregationMetadata across
multiple iterations of the loop. If any converter retains a reference to ctx or if
the loop is ever parallelized, this shared mutable state could cause incorrect
metadata to be used. Consider creating a new ConversionContext per granularity level
or passing metadata directly to avoid this coupling.

sandbox/plugins/dsl-query-executor/src/main/java/org/opensearch/dsl/converter/SearchSourceConverter.java [124-129]

 for (AggregationMetadata metadata : metadataList) {
-    ctx.setAggregationMetadata(metadata);
+    ConversionContext aggCtx = new ConversionContext(searchSource, cluster, table);
+    aggCtx.setAggregationMetadata(metadata);
     RelNode aggs = aggConverter.convert(base, metadata);
-    aggs = postAggConverter.convert(aggs, ctx);
+    aggs = postAggConverter.convert(aggs, aggCtx);
     builder.add(new QueryPlans.QueryPlan(QueryPlans.Type.AGGREGATION, aggs));
 }
Suggestion importance[1-10]: 4


Why: Mutating the shared ctx via setAggregationMetadata in a loop is a valid concern for correctness and maintainability, though the current sequential single-threaded usage makes it safe. Creating a new context per iteration would be a cleaner design but is a minor improvement.

Low
Add input validation in record compact constructor

The compact constructor does not validate its inputs. A null or empty keys list, or
a negative docCount, could lead to subtle bugs downstream. Add basic validation in
the compact constructor to enforce invariants.

sandbox/plugins/dsl-query-executor/src/main/java/org/opensearch/dsl/result/BucketEntry.java [22-31]

 public record BucketEntry(List<Object> keys, long docCount, InternalAggregations subAggs) {
     public BucketEntry {
+        Objects.requireNonNull(keys, "keys must not be null");
+        if (docCount < 0) {
+            throw new IllegalArgumentException("docCount must be non-negative");
+        }
+        keys = List.copyOf(keys);
     }
Suggestion importance[1-10]: 4


Why: Adding null checks and defensive copying in the compact constructor is a reasonable defensive practice, but the impact is moderate since this is a data-holding record and downstream code would likely fail with clear errors anyway.

Low
Validate constructor arguments for grouping fields

The constructor does not validate that fieldNames is non-null or non-empty. Passing
a null list will throw a NullPointerException from List.copyOf, and an empty list
would produce a meaningless grouping. Add explicit validation to provide a clear
error message.

sandbox/plugins/dsl-query-executor/src/main/java/org/opensearch/dsl/aggregation/FieldGrouping.java [31-33]

 public FieldGrouping(List<String> fieldNames) {
+    Objects.requireNonNull(fieldNames, "fieldNames must not be null");
+    if (fieldNames.isEmpty()) {
+        throw new IllegalArgumentException("fieldNames must not be empty");
+    }
     this.fieldNames = List.copyOf(fieldNames);
 }
Suggestion importance[1-10]: 4


Why: Adding null and empty validation provides clearer error messages, though List.copyOf already throws NullPointerException for null input. The empty list check adds some value for catching logic errors early.

Low
Verify bucket order is attached to correct granularity level

When a bucket has no sub-aggregations (leaf bucket), the builder is created with
requestImplicitCount() but no metric calls are added. The hasAggregateCalls() check
in walk() returns true due to implicitCountRequested, so the leaf bucket granularity
is included. However, the bucket order is added to this builder even when there are
sub-aggregations that will create a deeper granularity — the order may be attached
to the wrong (shallower) granularity level when sub-aggs exist. Verify that
addBucketOrder should always be called on the current level's builder regardless of
sub-aggregations.

sandbox/plugins/dsl-query-executor/src/main/java/org/opensearch/dsl/aggregation/AggregationTreeWalker.java [92-113]

-private void handleBucket(
-    BucketTranslator<AggregationBuilder> translator,
-    AggregationBuilder aggBuilder,
-    List<GroupingInfo> currentGroupings,
-    Map<String, AggregationMetadataBuilder> granularities,
-    RelDataType rowType
-) throws ConversionException {
-    GroupingInfo grouping = translator.getGrouping(aggBuilder);
+// Only add bucket order to the current granularity if it has no deeper bucket sub-aggs
+// (i.e., the order belongs to the innermost bucket that defines this granularity)
+AggregationMetadataBuilder builder = getOrCreateBuilder(accumulatedGroupings, granularities);
+builder.addBucketOrder(translator.getBucketOrder(aggBuilder));
 
-    List<GroupingInfo> accumulatedGroupings = new ArrayList<>(currentGroupings);
-    accumulatedGroupings.add(grouping);
-
-    // Ensure builder exists for this granularity
-    AggregationMetadataBuilder builder = getOrCreateBuilder(accumulatedGroupings, granularities);
-    builder.addBucketOrder(translator.getBucketOrder(aggBuilder));
-
-    // Recurse into sub-aggregations
-    Collection<AggregationBuilder> subAggs = translator.getSubAggregations(aggBuilder);
-    if (subAggs != null && !subAggs.isEmpty()) {
-        walkRecursive(subAggs, accumulatedGroupings, granularities, rowType);
-    }
+Collection<AggregationBuilder> subAggs = translator.getSubAggregations(aggBuilder);
+if (subAggs != null && !subAggs.isEmpty()) {
+    walkRecursive(subAggs, accumulatedGroupings, granularities, rowType);
 }
Suggestion importance[1-10]: 2


Why: The suggestion asks to verify existing behavior rather than proposing a concrete code fix — the improved_code is essentially the same as the existing_code with only a comment change. The concern about bucket order being attached to the wrong granularity is a design question, not a clear bug.

Low
Suggestions up to commit 1d7ba9d
Category | Suggestion | Impact
Possible issue
Safely escape regex metacharacters in wildcard patterns

The wildcard-to-regex conversion only escapes . and replaces *, but other regex
metacharacters in field names (e.g., $, ^, [, and parentheses) are left unescaped, which can cause
PatternSyntaxException or incorrect matches. Use Pattern.quote on the non-wildcard
segments to safely escape all special characters.

sandbox/plugins/dsl-query-executor/src/main/java/org/opensearch/dsl/converter/ProjectConverter.java [92-94]

 private static Pattern toWildcardRegex(String pattern) {
-    return Pattern.compile(pattern.replace(".", "\\.").replace("*", ".*"));
+    String[] parts = pattern.split("\\*", -1);
+    StringBuilder sb = new StringBuilder();
+    for (int i = 0; i < parts.length; i++) {
+        if (i > 0) sb.append(".*");
+        sb.append(Pattern.quote(parts[i]));
+    }
+    return Pattern.compile(sb.toString());
 }
Suggestion importance[1-10]: 7


Why: The current toWildcardRegex only escapes . and *, leaving other regex metacharacters unescaped. This could cause PatternSyntaxException or incorrect matches for field names containing characters like $, [, (. Using Pattern.quote on non-wildcard segments is a more robust approach.

Medium
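The Pattern.quote approach can be exercised in isolation. This sketch reuses the suggested implementation verbatim; the field patterns are invented examples:

```java
import java.util.regex.Pattern;

public class WildcardRegexDemo {
    // Quote each literal segment so regex metacharacters in field names are inert;
    // only the "*" wildcard is translated to ".*".
    static Pattern toWildcardRegex(String pattern) {
        String[] parts = pattern.split("\\*", -1);
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < parts.length; i++) {
            if (i > 0) sb.append(".*");
            sb.append(Pattern.quote(parts[i]));
        }
        return Pattern.compile(sb.toString());
    }

    public static void main(String[] args) {
        // The wildcard still spans the quoted segments.
        System.out.println(toWildcardRegex("user.*name").matcher("user.first_name").matches());  // true
        // The literal dot no longer matches arbitrary characters.
        System.out.println(toWildcardRegex("user.*name").matcher("userXfirst_name").matches()); // false
        // A field pattern containing regex metacharacters compiles instead of throwing.
        System.out.println(toWildcardRegex("price($)*").matcher("price($)_usd").matches());     // true
    }
}
```

The last case is the payoff: under the naive `replace(".", "\\.").replace("*", ".*")` scheme, `price($)*` would produce an unbalanced `(` and fail to compile.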
Prevent granularity key collisions across nesting levels

The granularity key is built from field names only, so two different bucket paths
that happen to share the same field names (e.g., by_brand.by_brand) would collide
and be merged into a single AggregationMetadataBuilder, producing incorrect results.
The key should also encode the nesting depth or the aggregation name to guarantee
uniqueness across different paths.

sandbox/plugins/dsl-query-executor/src/main/java/org/opensearch/dsl/aggregation/AggregationTreeWalker.java [145-150]

 private static String granularityKey(List<GroupingInfo> groupings) {
     if (groupings.isEmpty()) {
         return "";
     }
-    return groupings.stream().map(g -> String.join(",", g.getFieldNames())).collect(Collectors.joining("|"));
+    return IntStream.range(0, groupings.size())
+        .mapToObj(i -> i + ":" + String.join(",", groupings.get(i).getFieldNames()))
+        .collect(Collectors.joining("|"));
 }
Suggestion importance[1-10]: 6


Why: The granularityKey method could produce collisions if two different nesting paths share the same field names at different depths (e.g., brand|brand from two different paths). Adding depth indices to the key would prevent incorrect merging of distinct granularities, making this a valid correctness concern.

Low
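As a sketch of the proposed fix (a plain List&lt;String&gt; stands in for GroupingInfo here, purely for illustration), encoding the depth index keeps a one-level path and a two-level path with the same field name distinct:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class GranularityKeyDemo {
    // Depth-aware key: prefix each grouping level with its index
    public static String key(List<List<String>> groupings) {
        return IntStream.range(0, groupings.size())
            .mapToObj(i -> i + ":" + String.join(",", groupings.get(i)))
            .collect(Collectors.joining("|"));
    }

    public static void main(String[] args) {
        String oneLevel = key(List.of(List.of("brand")));                    // "0:brand"
        String twoLevel = key(List.of(List.of("brand"), List.of("brand"))); // "0:brand|1:brand"
        System.out.println(oneLevel.equals(twoLevel)); // false
    }
}
```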
Fix missing TOTAL_HITS plan when size=0 with aggregations

When size > 0 and there are no aggregations, the current code produces a HITS plan
but never a TOTAL_HITS plan. However, when size > 0 with aggregations, the
TOTAL_HITS count is also missing. More critically, when size == 0 with aggregations,
neither the HITS nor the TOTAL_HITS branch executes, so the total hit count is
silently dropped. Consider adding a TOTAL_HITS plan whenever size == 0, regardless
of whether aggregations are present.

sandbox/plugins/dsl-query-executor/src/main/java/org/opensearch/dsl/converter/SearchSourceConverter.java [114-121]

 if (size > 0) {
     RelNode hits = projectConverter.convert(base, ctx);
     hits = sortConverter.convert(hits, ctx);
     builder.add(new QueryPlans.QueryPlan(QueryPlans.Type.HITS, hits));
-} else if (!hasAggs) {
-    // size=0 with no aggregations: return only total hit count
+} else {
+    // size=0: always return total hit count
     builder.add(new QueryPlans.QueryPlan(QueryPlans.Type.TOTAL_HITS, buildCountPlan(base, ctx)));
 }
Suggestion importance[1-10]: 5


Why: The suggestion points out that when size=0 with aggregations, no TOTAL_HITS plan is added. However, looking at the test testAggsWithSizeZeroProducesOnlyAggregationPlan, the expected behavior is that only an AGGREGATION plan is produced (not TOTAL_HITS). The suggestion's proposed change would break this test, so the "issue" may be intentional design. The concern about missing total hit count is valid but the fix contradicts existing tests.

Low
General
Avoid emitting FETCH 0 literal for zero-size queries

When size is explicitly set to 0 (e.g., aggregation-only queries), buildFetch
returns a FETCH 0 literal, which would cause the LogicalSort to fetch zero rows.
This is correct for aggregation-only paths, but SortConverter is also applied on the
HITS path in SearchSourceConverter before the size check, so a size=0 HITS plan
would be created with FETCH 0. The SortConverter should treat size == 0 as "no fetch
limit needed on the hits path" or the caller should guard against applying sort on a
zero-size hits plan.

sandbox/plugins/dsl-query-executor/src/main/java/org/opensearch/dsl/converter/SortConverter.java [97-104]

 private RexNode buildFetch(ConversionContext ctx) {
     if (!hasNonDefaultPagination(ctx)) {
         return null;
     }
     SearchSourceBuilder ss = ctx.getSearchSource();
     int size = ss.size() != -1 ? ss.size() : SearchService.DEFAULT_SIZE;
+    if (size <= 0) {
+        return null;
+    }
     return ctx.getRexBuilder().makeLiteral(size, ctx.getCluster().getTypeFactory().createSqlType(SqlTypeName.INTEGER), false);
 }
Suggestion importance[1-10]: 4


Why: In SearchSourceConverter, the HITS path is only entered when size > 0, so SortConverter with size=0 would not be applied to a HITS plan. The scenario described is not actually reachable in the current code flow, making this suggestion less impactful than presented.

Low
Suggestions up to commit b59c7a6
Category | Suggestion | Impact
General
Distinguish unregistered from unsupported aggregation types

When registry.get(aggBuilder.getClass()) returns null (unregistered aggregation),
the code falls into the else branch and throws a misleading "Unsupported aggregation
type" error. However, if the registry returns a non-null value that is neither a
BucketTranslator nor a MetricTranslator, the same error is thrown, which could be
confusing. Add an explicit null check before the instanceof checks to produce a
clearer error message for unregistered types.

sandbox/plugins/dsl-query-executor/src/main/java/org/opensearch/dsl/aggregation/AggregationTreeWalker.java [83-85]

+} else if (type == null) {
+    throw new ConversionException("No translator registered for aggregation type: " + aggBuilder.getClass().getSimpleName());
 } else {
     throw new ConversionException("Unsupported aggregation type: " + aggBuilder.getClass().getSimpleName());
 }
Suggestion importance[1-10]: 5


Why: The suggestion is valid — when registry.get() returns null, the current code throws "Unsupported aggregation type" which is technically correct but could be clearer. Adding a null check improves debuggability. The improvement is minor since the error message already identifies the class name.

Low
Remove redundant record accessor overrides and add defensive copy

Java records automatically generate accessor methods with the same name as the
components, so the explicit keys(), docCount(), and subAggs() method overrides are
redundant and add unnecessary boilerplate. Additionally, the keys list should be
made defensive-copy immutable in the compact constructor to prevent external
mutation. Remove the redundant accessor overrides and add a defensive copy in the
compact constructor.

sandbox/plugins/dsl-query-executor/src/main/java/org/opensearch/dsl/result/BucketEntry.java [22-50]

 public record BucketEntry(List<Object> keys, long docCount, InternalAggregations subAggs) {
-    /**
-     * Creates a bucket entry.
-     *
-     * @param keys the bucket key values
-     * @param docCount the document count
-     * @param subAggs the sub-aggregation results
-     */
     public BucketEntry {
-    }
-
-    /** Returns the bucket key values. */
-    @Override
-    public List<Object> keys() {
-        return keys;
-    }
-
-    /** Returns the document count. */
-    @Override
-    public long docCount() {
-        return docCount;
-    }
-
-    /** Returns the sub-aggregation results. */
-    @Override
-    public InternalAggregations subAggs() {
-        return subAggs;
+        keys = List.copyOf(keys);
     }
 }
Suggestion importance[1-10]: 5


Why: The suggestion is correct — Java records auto-generate accessors, making the explicit overrides redundant boilerplate. Adding List.copyOf(keys) in the compact constructor is also a good defensive practice. Both improvements are valid and the improved_code accurately reflects the changes.

Low
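A minimal sketch of the record behavior (subAggs omitted, names hypothetical): the List.copyOf in the compact constructor both makes the copy defensive and makes the stored list immutable, so later mutation of the caller's list does not leak into the record:

```java
import java.util.ArrayList;
import java.util.List;

public class DefensiveCopyDemo {
    // Simplified stand-in for BucketEntry
    public record Bucket(List<Object> keys, long docCount) {
        public Bucket {
            keys = List.copyOf(keys); // defensive, immutable copy
        }
    }

    public static void main(String[] args) {
        List<Object> mutable = new ArrayList<>(List.of("electronics"));
        Bucket b = new Bucket(mutable, 42L);
        mutable.add("toys");                  // caller mutates its own list...
        System.out.println(b.keys().size());  // ...but the record still sees 1 key
    }
}
```

The auto-generated keys(), docCount(), and subAggs() accessors are exactly what the removed overrides did by hand.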
Deduplicate wildcard-to-regex conversion logic

The wildcard-to-regex conversion replaces . with \\. before replacing * with .*, and
this ordering means a literal dot in the pattern is correctly escaped. The
replace(".", "\\.") call also escapes dots that are part of nested field paths
(e.g., "obj.field*"), which is correct. The real issue is that the same conversion
logic is duplicated in both buildExcludeSet and resolveWildcard. Extract it into a
shared helper method to avoid inconsistency if the logic needs to change.

sandbox/plugins/dsl-query-executor/src/main/java/org/opensearch/dsl/converter/ProjectConverter.java [92-96]

+private static Pattern compileWildcard(String pattern) {
+    return Pattern.compile(pattern.replace(".", "\\.").replace("*", ".*"));
+}
+
 private Set<String> buildExcludeSet(String[] excludes, RelDataType rowType) {
     Set<String> excludeSet = new HashSet<>();
     for (String pattern : excludes) {
         if (pattern.contains("*")) {
-            Pattern compiled = Pattern.compile(pattern.replace(".", "\\.").replace("*", ".*"));
+            Pattern compiled = compileWildcard(pattern);
Suggestion importance[1-10]: 4


Why: The duplication of wildcard-to-regex logic in buildExcludeSet and resolveWildcard is a valid maintainability concern. Extracting it to a shared helper reduces the risk of inconsistency if the logic changes, though the impact is low since the logic is simple.

Low
Include total hits plan alongside aggregation plans

The hasAggregations check already guards against null aggregations(), but
getAggregatorFactories() could theoretically return pipeline aggregations
separately. More critically, if size > 0 with aggregations, a HITS plan is produced
but no TOTAL_HITS plan is added, which means the total hit count is silently
omitted. Consider whether a TOTAL_HITS plan should also be produced when size > 0
with aggregations, as OpenSearch normally returns total hits alongside aggregation
results.

sandbox/plugins/dsl-query-executor/src/main/java/org/opensearch/dsl/converter/SearchSourceConverter.java [124-129]

 if (hasAggs) {
     List<AggregationMetadata> metadataList = treeWalker.walk(
         searchSource.aggregations().getAggregatorFactories(),
         table.getRowType(),
         cluster.getTypeFactory()
     );
+    // Also produce a TOTAL_HITS plan when aggregations are present
+    builder.add(new QueryPlans.QueryPlan(QueryPlans.Type.TOTAL_HITS, buildCountPlan(base, ctx)));
Suggestion importance[1-10]: 3


Why: This is a design suggestion rather than a bug fix. The PR's existing test testAggsWithSizeGreaterThanZeroProducesBothPlans expects exactly 2 plans (HITS + AGGREGATION), so adding a TOTAL_HITS plan would break existing tests. The suggestion contradicts the PR's intended behavior.

Low
Suggestions up to commit c924286
Category | Suggestion | Impact
Possible issue
Fix hits plan produced when size is explicitly zero

The condition size > 0 || !hasAggs means that when there are no aggregations and
size is explicitly set to 0, a HITS plan is still produced (because !hasAggs is
true). This may be unintentional: a caller setting size(0) with no aggregations
likely still wants no hits returned. At minimum, the size == 0 && !hasAggs edge
case should be handled explicitly, e.g. by guarding the condition on size not being
explicitly zero.

sandbox/plugins/dsl-query-executor/src/main/java/org/opensearch/dsl/converter/SearchSourceConverter.java [108-112]

 // Hits path: Scan → Filter → Project → Sort
-if (size > 0 || !hasAggs) {
+// Produce hits plan when size > 0, or when there are no aggregations and size is not explicitly 0
+boolean sizeExplicitlyZero = searchSource.size() == 0;
+if (!sizeExplicitlyZero && (size > 0 || !hasAggs)) {
     RelNode hits = projectConverter.convert(base, ctx);
     hits = sortConverter.convert(hits, ctx);
     builder.add(new QueryPlans.QueryPlan(QueryPlans.Type.HITS, hits));
 }
Suggestion importance[1-10]: 7


Why: When size(0) is set with no aggregations, the current condition size > 0 || !hasAggs evaluates to true (since !hasAggs is true), producing a HITS plan even though the caller explicitly requested zero hits. The test testAggsWithSizeZeroProducesOnlyAggregationPlan only covers the size(0) with aggregations case, leaving this edge case untested and potentially incorrect.

Medium
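The edge case can be checked by evaluating the condition in isolation (a sketch, not the actual SearchSourceConverter code):

```java
public class SizeConditionDemo {
    // Current hits-plan condition from the suggestion above
    public static boolean producesHits(int size, boolean hasAggs) {
        return size > 0 || !hasAggs;
    }

    public static void main(String[] args) {
        // size explicitly 0, no aggregations: a HITS plan is still produced
        System.out.println(producesHits(0, false)); // true, the flagged edge case
    }
}
```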
Fix potential granularity key collisions with ambiguous separator

The granularity key is built by joining field names with commas, but this can
produce collisions. For example, a single grouping with fields ["a,b"] and two
groupings with fields ["a"] and ["b"] would both produce the key "a,b". Use a
separator that cannot appear in field names, or include a structural delimiter
between grouping boundaries.

sandbox/plugins/dsl-query-executor/src/main/java/org/opensearch/dsl/aggregation/AggregationTreeWalker.java [143-148]

 private static String granularityKey(List<GroupingInfo> groupings) {
     if (groupings.isEmpty()) {
         return "";
     }
-    return groupings.stream().flatMap(g -> g.getFieldNames().stream()).collect(Collectors.joining(","));
+    return groupings.stream()
+        .map(g -> String.join(",", g.getFieldNames()))
+        .collect(Collectors.joining("|"));
 }
Suggestion importance[1-10]: 6


Why: The current granularityKey implementation using a single comma separator can produce collisions when field names contain commas or when grouping boundaries are ambiguous. Using a structural delimiter like | between grouping levels prevents false key matches and is a valid correctness concern.

Low
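A minimal sketch of the collision (a plain List&lt;String&gt; stands in for GroupingInfo): flattening every field name through a single comma join makes two different grouping shapes indistinguishable:

```java
import java.util.List;
import java.util.stream.Collectors;

public class SeparatorCollisionDemo {
    // Old key: flatten all field names and join with a single comma
    public static String flatKey(List<List<String>> groupings) {
        return groupings.stream().flatMap(List::stream).collect(Collectors.joining(","));
    }

    public static void main(String[] args) {
        // One grouping whose field name contains a comma...
        String a = flatKey(List.of(List.of("a,b")));
        // ...vs two single-field groupings
        String b = flatKey(List.of(List.of("a"), List.of("b")));
        System.out.println(a.equals(b)); // true, the two shapes collide
    }
}
```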
Add explicit null check for unregistered aggregation types

When registry.get(aggBuilder.getClass()) returns null (no registered translator),
the code falls into the else branch and throws ConversionException("Unsupported
aggregation type: ..."), which is correct. However, if a translator is registered
but is neither a BucketTranslator nor a MetricTranslator, the same error message is
misleading. More critically, type could be null and the instanceof checks silently
fall through to the else — this is actually fine in Java, but the error message
should distinguish "not registered" from "registered but unknown kind". Consider
adding an explicit null check with a clearer message.

sandbox/plugins/dsl-query-executor/src/main/java/org/opensearch/dsl/aggregation/AggregationTreeWalker.java [77-86]

 AggregationType<?> type = registry.get(aggBuilder.getClass());
 
-if (type instanceof BucketTranslator) {
+if (type == null) {
+    throw new ConversionException("No translator registered for aggregation type: " + aggBuilder.getClass().getSimpleName());
+} else if (type instanceof BucketTranslator) {
     handleBucket((BucketTranslator<AggregationBuilder>) type, aggBuilder, currentGroupings, granularities, rowType);
 } else if (type instanceof MetricTranslator) {
     handleMetric((MetricTranslator<AggregationBuilder>) type, aggBuilder, currentGroupings, granularities, rowType);
 } else {
-    throw new ConversionException("Unsupported aggregation type: " + aggBuilder.getClass().getSimpleName());
+    throw new ConversionException("Unsupported aggregation kind for: " + aggBuilder.getClass().getSimpleName());
 }
Suggestion importance[1-10]: 5


Why: While Java's instanceof on null returns false (so the code is functionally correct), adding an explicit null check improves error message clarity by distinguishing "not registered" from "registered but unknown kind". This is a minor readability/maintainability improvement rather than a bug fix.

Low
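The Java semantics the suggestion relies on can be verified in isolation (Translator here is a hypothetical stand-in for the registry's translator interface):

```java
public class NullInstanceofDemo {
    interface Translator {}

    public static boolean isTranslator(Object type) {
        // instanceof on a null reference is always false, so a null
        // registry lookup silently falls into the generic else branch
        return type instanceof Translator;
    }

    public static void main(String[] args) {
        System.out.println(isTranslator(null)); // false
    }
}
```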
General
Ensure empty projection is type-safe for Calcite API

When fetchSource is false, an empty LogicalProject is created with empty projects
and field names lists. However, LogicalProject.create may require the projects list
to be consistent with the input row type or throw an exception with mismatched
sizes. Verify that passing empty lists is valid for the Calcite version in use; if
not, a LogicalProject with a literal false row or a dedicated empty-projection
approach may be needed.

sandbox/plugins/dsl-query-executor/src/main/java/org/opensearch/dsl/converter/ProjectConverter.java [43-45]

 if (!fetchSource.fetchSource()) {
-    return LogicalProject.create(input, List.of(), List.of(), List.of());
+    // _source: false — project zero fields
+    return LogicalProject.create(input, List.of(), List.<RexNode>of(), List.<String>of());
 }
Suggestion importance[1-10]: 2


Why: The improved_code is essentially identical to the existing_code (just adds explicit generic type parameters to List.of()), and the suggestion asks to "verify" behavior rather than fixing a confirmed bug. This is a low-impact suggestion with no meaningful code change.

Low

@github-actions
Contributor

❌ Gradle check result for c924286: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

@github-actions
Contributor

Persistent review updated to latest commit b59c7a6

@github-actions
Contributor

❌ Gradle check result for b59c7a6: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

@vinaykpud vinaykpud force-pushed the feature/datafusion-dsl-query-agg branch from b59c7a6 to 1d7ba9d Compare March 31, 2026 19:32
@github-actions
Contributor

Persistent review updated to latest commit 1d7ba9d

@github-actions
Copy link
Copy Markdown
Contributor

❌ Gradle check result for 1d7ba9d: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

@vinaykpud vinaykpud force-pushed the feature/datafusion-dsl-query-agg branch 2 times, most recently from 3f312d6 to 45f3ae9 Compare April 1, 2026 06:06
@github-actions
Contributor

github-actions Bot commented Apr 1, 2026

Persistent review updated to latest commit 45f3ae9

@github-actions
Contributor

github-actions Bot commented Apr 1, 2026

PR Code Analyzer ❗

AI-powered 'Code-Diff-Analyzer' found issues on commit 1dfeee0.

Path | Line | Severity | Description
sandbox/plugins/parquet-data-format/src/main/rust/Cargo.lock | 1748 | critical | Unknown crate 'zmij' (v1.0.21) injected as a dependency of serde_json. This crate does not exist in the legitimate serde_json dependency graph and is a likely malicious supply chain insertion. The real serde_json has no such dependency.
sandbox/plugins/parquet-data-format/src/main/rust/Cargo.lock | 1499 | critical | Suspicious crate 'serde_core' (v1.0.228) injected to impersonate serde internals. The real serde ecosystem has no 'serde_core' crate. It is listed as a dependency of serde_json, csv, and indexmap, all of which do NOT depend on any such crate legitimately. This is a typosquatting/namespace hijacking supply chain attack.
sandbox/plugins/parquet-data-format/src/main/rust/Cargo.lock | 1521 | high | serde_json (v1.0.149) has been tampered: its dependency list includes both 'zmij' and 'serde_core', neither of which appear in the real serde_json 1.0.149. The checksum may not match the published crate, indicating this is a substituted/modified artifact.
sandbox/plugins/parquet-data-format/src/main/rust/Cargo.lock | 318 | high | csv (v1.4.0) has been tampered: it depends on 'serde_core' instead of 'serde'. The real csv 1.4.0 depends on 'serde', not on an unknown 'serde_core' crate. This indicates a substituted artifact.
sandbox/plugins/parquet-data-format/src/main/rust/Cargo.lock | 623 | high | indexmap (v2.13.0) has been tampered: it includes 'serde_core' as an extra dependency not present in the real indexmap 2.13.0. This indicates a substituted artifact as part of the broader serde_core injection campaign.
sandbox/plugins/parquet-data-format/src/main/rust/Cargo.lock | 1195 | high | parquet-dataformat-jni lists itself ('parquet-dataformat-jni') as one of its own dependencies. A crate depending on itself is anomalous, could indicate a circular dependency injection or lock file manipulation, and warrants investigation.

The table above displays the top 10 most important findings.

Total: 6 | Critical: 2 | High: 4 | Medium: 0 | Low: 0


Pull Requests Author(s): Please update your Pull Request according to the report above.

Repository Maintainer(s): You can bypass diff analyzer by adding label skip-diff-analyzer after reviewing the changes carefully, then re-run failed actions. To re-enable the analyzer, remove the label, then re-run all actions.


⚠️ Note: The Code-Diff-Analyzer helps protect against potentially harmful code patterns. Please ensure you have thoroughly reviewed the changes beforehand.

Thanks.

@vinaykpud vinaykpud force-pushed the feature/datafusion-dsl-query-agg branch from c1a13ef to 1dfeee0 Compare April 1, 2026 16:51
@vinaykpud vinaykpud added the skip-diff-analyzer Maintainer to skip code-diff-analyzer check, after reviewing issues in AI analysis. label Apr 1, 2026
@vinaykpud vinaykpud force-pushed the feature/datafusion-dsl-query-agg branch from 1dfeee0 to c9071d3 Compare April 1, 2026 16:53
@github-actions
Contributor

github-actions Bot commented Apr 1, 2026

Persistent review updated to latest commit c9071d3

@vinaykpud vinaykpud marked this pull request as ready for review April 1, 2026 17:17
@vinaykpud vinaykpud requested a review from a team as a code owner April 1, 2026 17:17
@github-actions
Contributor

github-actions Bot commented Apr 1, 2026

❌ Gradle check result for c9071d3: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

@vinaykpud vinaykpud added the skip-diff-reviewer Maintainer to skip code-diff-reviewer check, after reviewing issues in AI analysis. label Apr 1, 2026
@vinaykpud vinaykpud closed this Apr 1, 2026
@vinaykpud vinaykpud reopened this Apr 1, 2026
@github-actions
Contributor

github-actions Bot commented Apr 1, 2026

❌ Gradle check result for c9071d3: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

@vinaykpud vinaykpud force-pushed the feature/datafusion-dsl-query-agg branch from c9071d3 to 4ed3a3e Compare April 3, 2026 19:00
@github-actions
Contributor

github-actions Bot commented Apr 3, 2026

❌ Gradle check result for 4ed3a3e: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

@github-actions
Contributor

github-actions Bot commented Apr 6, 2026

❌ Gradle check result for 65a4499: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

@vinaykpud vinaykpud force-pushed the feature/datafusion-dsl-query-agg branch from 65a4499 to b90cab4 Compare April 6, 2026 17:33
Signed-off-by: Vinay Krishna Pudyodu <vinkrish.neo@gmail.com>
@vinaykpud vinaykpud force-pushed the feature/datafusion-dsl-query-agg branch from b90cab4 to 580a3ec Compare April 6, 2026 17:34
@github-actions
Contributor

github-actions Bot commented Apr 6, 2026

✅ Gradle check result for 580a3ec: SUCCESS

@codecov

codecov Bot commented Apr 6, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 73.21%. Comparing base (bd501c6) to head (96e8343).
⚠️ Report is 1 commits behind head on main.

Additional details and impacted files
@@             Coverage Diff              @@
##               main   #21047      +/-   ##
============================================
- Coverage     73.24%   73.21%   -0.03%     
+ Complexity    72974    72970       -4     
============================================
  Files          5888     5888              
  Lines        333169   333169              
  Branches      48058    48058              
============================================
- Hits         244024   243925      -99     
- Misses        69656    69731      +75     
- Partials      19489    19513      +24     

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.

🚀 New features to boost your workflow:
  • ❄️ Test Analytics: Detect flaky tests, report on failures, and find test suite problems.

Comment thread sandbox/plugins/parquet-data-format/src/main/rust/Cargo.lock Outdated
Contributor

@expani expani left a comment


Thanks @vinaykpud for the PR.

Dropping some comments on the high level abstractions. Will go through this further.

Signed-off-by: Vinay Krishna Pudyodu <vinkrish.neo@gmail.com>
Signed-off-by: Vinay Krishna Pudyodu <vinkrish.neo@gmail.com>
@github-actions
Contributor

❌ Gradle check result for ea72060: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

@vinaykpud vinaykpud closed this Apr 13, 2026
@vinaykpud vinaykpud reopened this Apr 13, 2026
@github-actions
Contributor

❕ Gradle check result for ea72060: UNSTABLE

Please review all flaky tests that succeeded after retry and create an issue if one does not already exist to track the flaky failure.

@github-actions
Contributor

❌ Gradle check result for a06b2f6: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

@vinaykpud vinaykpud force-pushed the feature/datafusion-dsl-query-agg branch from a06b2f6 to 3c14c2b Compare April 13, 2026 17:26
@github-actions
Contributor

❌ Gradle check result for 3c14c2b: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

@vinaykpud vinaykpud force-pushed the feature/datafusion-dsl-query-agg branch from 3c14c2b to 96e8343 Compare April 13, 2026 17:40
@github-actions
Contributor

✅ Gradle check result for 96e8343: SUCCESS

@mch2 mch2 merged commit f7eb7c2 into opensearch-project:main Apr 13, 2026
24 checks passed
@msfroh
Contributor

msfroh commented Apr 15, 2026

It looks like one of the added tests directly contradicts #21203, which was merged around the same time.

See https://github.com/opensearch-project/OpenSearch/actions/runs/24427826934/job/71365665229?pr=20635

pradeep-L pushed a commit to pradeep-L/OpenSearch that referenced this pull request Apr 21, 2026
…roject#21047)

* Add query filter, project, aggregation, sort converters

Signed-off-by: Vinay Krishna Pudyodu <vinkrish.neo@gmail.com>

* resolved PR comments from mch2

Signed-off-by: Vinay Krishna Pudyodu <vinkrish.neo@gmail.com>

* resolved PR comments from expani

Signed-off-by: Vinay Krishna Pudyodu <vinkrish.neo@gmail.com>

---------

Signed-off-by: Vinay Krishna Pudyodu <vinkrish.neo@gmail.com>

Labels

skip-changelog
skip-diff-analyzer: Maintainer to skip code-diff-analyzer check, after reviewing issues in AI analysis.
skip-diff-reviewer: Maintainer to skip code-diff-reviewer check, after reviewing issues in AI analysis.
