@@ -359,7 +359,7 @@ public DataQualityReport getDataQualityReport(
    return searchRepository.genericAggregation(q, index, searchAggregation, subjectContext);
  }

-  public DataQualityReport getDataQualityReport(
+public DataQualityReport getDataQualityReport(
💡 Quality: Indentation damage on pre-existing lines (accidental whitespace changes)

Several pre-existing lines had their leading indentation removed (e.g., getDataQualityReport at line 362, searchDataQualityLineage at line 2787, SubjectContext at line 545, getDataQualityTab at line 74). These appear to be accidental whitespace-only changes that break the consistent indentation style of the codebase. They should be reverted to avoid noisy diffs.

Copilot AI, Apr 14, 2026:
This method signature lost its indentation (public DataQualityReport getDataQualityReport starts at column 0), which is inconsistent with surrounding code and may fail Spotless/format checks. Please reformat this section.

Suggested change:
-public DataQualityReport getDataQualityReport(
+  public DataQualityReport getDataQualityReport(
      String q, String aggQuery, String index, String domain, SubjectContext subjectContext)
      throws IOException {
    String queryWithDomain = addDomainFilter(q, domain, index);
@@ -368,6 +368,94 @@ public DataQualityReport getDataQualityReport(
        queryWithDomain, index, searchAggregation, subjectContext);
  }

  public List<Map<String, Object>> getDataQualityCheckImpact(
      int limit, String testCaseStatus, SubjectContext subjectContext) throws IOException {

    long thirtyDaysAgo = System.currentTimeMillis() - (30L * 24 * 60 * 60 * 1000);

    String query = buildTestCaseImpactQuery(testCaseStatus, thirtyDaysAgo);

    List<Map<String, Object>> testCases =
        searchRepository.searchTestCasesForImpact(query, 0, limit * 2, subjectContext);

    List<Map<String, Object>> rankedResults = new ArrayList<>();
    Map<String, Integer> maxValues = calculateMaxValues(testCases);

    int downstreamMax = maxValues.getOrDefault("downstreamUsage", 100);
    int consumerMax = maxValues.getOrDefault("consumerCount", 50);
    int incidentMax = maxValues.getOrDefault("recentIncidents", 10);

    for (Map<String, Object> testCase : testCases) {
      int downstreamUsage = ((Number) testCase.getOrDefault("downstreamUsage", 0)).intValue();
      int consumerCount = ((Number) testCase.getOrDefault("consumerCount", 0)).intValue();
      int recentIncidents = ((Number) testCase.getOrDefault("recentIncidents", 0)).intValue();

Comment on lines +388 to +392 (Copilot AI, Apr 14, 2026):

The scoring loop assumes downstreamUsage, consumerCount, and recentIncidents are present in the input docs, but the current retrieval code defaults them to 0. That makes impactScore always 0 and defeats the ranking.

Comment on lines +389 to +393 (Copilot AI, Apr 14, 2026):

downstreamUsage and consumerCount are inputs to the scoring model here, but they’re currently always 0 in the search results (set in searchTestCasesForImpact). That means the 40%/30% weights never affect the score, which is misleading given the feature description; please populate these metrics or remove them from scoring until they are available.

      double normalizedDownstream =
          downstreamMax > 0 ? Math.min((double) downstreamUsage / downstreamMax, 1.0) : 0;
      double normalizedConsumer =
          consumerMax > 0 ? Math.min((double) consumerCount / consumerMax, 1.0) : 0;
      double incidentFactor =
          incidentMax > 0 ? Math.min((double) recentIncidents / incidentMax, 1.0) : 0;

      double impactScore =
          (0.4 * normalizedDownstream) + (0.3 * normalizedConsumer) + (0.3 * incidentFactor);
      impactScore = Math.round(impactScore * 100.0) / 100.0;

      Map<String, Object> result = new HashMap<>(testCase);
      result.put("impactScore", impactScore * 100);
      rankedResults.add(result);
    }

    rankedResults.sort(
        (a, b) -> {
          double scoreA = ((Number) a.getOrDefault("impactScore", 0.0)).doubleValue();
          double scoreB = ((Number) b.getOrDefault("impactScore", 0.0)).doubleValue();
          return Double.compare(scoreB, scoreA);
        });

    return rankedResults.stream().limit(limit).collect(Collectors.toList());
  }
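The normalize-then-weight scheme used above can be exercised in isolation. A minimal sketch (class name and sample metric values are hypothetical; the 0.4/0.3/0.3 weights, the cap at 1.0, and the rounding mirror the method under review):

```java
public class ImpactScoreSketch {
  // Mirrors the method's scheme: each metric is scaled against the observed
  // maximum (capped at 1.0), combined with 40/30/30 weights, rounded to two
  // decimal places, then scaled to a 0-100 score.
  static double impactScore(int downstream, int consumers, int incidents,
                            int downstreamMax, int consumerMax, int incidentMax) {
    double nd = downstreamMax > 0 ? Math.min((double) downstream / downstreamMax, 1.0) : 0;
    double nc = consumerMax > 0 ? Math.min((double) consumers / consumerMax, 1.0) : 0;
    double ni = incidentMax > 0 ? Math.min((double) incidents / incidentMax, 1.0) : 0;
    double score = 0.4 * nd + 0.3 * nc + 0.3 * ni;
    return Math.round(score * 100.0) / 100.0 * 100;
  }

  public static void main(String[] args) {
    // A check sitting at the observed maxima scores 100; all-zero metrics score 0.
    System.out.println(impactScore(10, 5, 3, 10, 5, 3)); // 100.0
    System.out.println(impactScore(0, 0, 0, 10, 5, 3));  // 0.0
    // Half the downstream max, no consumers, max incidents: 0.4*0.5 + 0.3*1.0 = 0.5
    System.out.println(impactScore(5, 0, 3, 10, 5, 3));  // 50.0
  }
}
```

This also makes the reviewers' point concrete: with downstreamUsage and consumerCount pinned to 0, only the 0.3 incident term can ever contribute, so the score never exceeds 30.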

  private String buildTestCaseImpactQuery(String testCaseStatus, long thirtyDaysAgo) {
    StringBuilder query = new StringBuilder();
    query.append("{\"query\": {\"bool\": {\"must\": [");

    if (testCaseStatus != null && !testCaseStatus.isEmpty()) {
      query.append(
          String.format(
              "{\"term\": {\"testCaseStatus\": \"%s\"}},", testCaseStatus.toLowerCase()));
Copilot AI, Apr 14, 2026:

User input (testCaseStatus) is interpolated into the JSON query without escaping. A value containing quotes will break the JSON (and can be used for query injection); escape it (e.g., escapeDoubleQuotes) or build the query via a JSON builder.

Suggested change:
-      query.append(String.format("{\"term\": {\"testCaseStatus\": \"%s\"}},", testCaseStatus.toLowerCase()));
+      query.append(
+          String.format(
+              "{\"term\": {\"testCaseStatus\": \"%s\"}},",
+              escapeDoubleQuotes(testCaseStatus.toLowerCase())));
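escapeDoubleQuotes is named in the suggestion but its implementation is not shown in this diff; a minimal sketch of what such a helper could look like (the implementation below is an assumption, not the project's actual helper):

```java
public class EscapeSketch {
  // Hypothetical helper: backslash-escapes backslashes and double quotes so a
  // user-supplied value can be embedded inside a hand-built JSON string.
  static String escapeDoubleQuotes(String s) {
    return s.replace("\\", "\\\\").replace("\"", "\\\"");
  }

  public static void main(String[] args) {
    String raw = "fail\"ed"; // a value that would otherwise break the JSON
    String clause =
        String.format("{\"term\": {\"testCaseStatus\": \"%s\"}}", escapeDoubleQuotes(raw));
    System.out.println(clause); // {"term": {"testCaseStatus": "fail\"ed"}}
  }
}
```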
Copilot AI, Apr 14, 2026:

This term filter targets testCaseStatus, but in the TestCase index mapping the field is testCaseResult.testCaseStatus. Using the wrong field path means the status filter will never match any documents.

Suggested change:
-      query.append(String.format("{\"term\": {\"testCaseStatus\": \"%s\"}},", testCaseStatus.toLowerCase()));
+      query.append(
+          String.format(
+              "{\"term\": {\"testCaseResult.testCaseStatus\": \"%s\"}},",
+              testCaseStatus.toLowerCase()));
    }

    query.append(
        String.format(
            "{\"range\": {\"timestamp\": {\"gte\": %d}}}]}}", thirtyDaysAgo / 1000));
Copilot AI, Apr 14, 2026:

This range filter targets top-level timestamp, but in the TestCase index mapping the field is testCaseResult.timestamp. Using the wrong field path means the last-30-days filter won’t apply (and may return no results).

Suggested change:
-            "{\"range\": {\"timestamp\": {\"gte\": %d}}}]}}", thirtyDaysAgo / 1000));
+            "{\"range\": {\"testCaseResult.timestamp\": {\"gte\": %d}}}]}}",
+            thirtyDaysAgo / 1000));
Copilot AI, Apr 14, 2026:

The JSON built here is malformed: it opens {"query": {"bool": {"must": [ but the final append only closes with }]}}, missing a final }. This will fail query_filter parsing and the endpoint will return empty results.

Suggested change:
-            "{\"range\": {\"timestamp\": {\"gte\": %d}}}]}}", thirtyDaysAgo / 1000));
+            "{\"range\": {\"timestamp\": {\"gte\": %d}}}]}}}", thirtyDaysAgo / 1000));

🚨 Bug: JSON query still malformed — missing one closing brace

The fix at line 431 changed }}}} to }]}}, which correctly adds the missing ] for the must array. However, it still produces invalid JSON because there are only 2 closing braces after ] when 3 are needed.

The structure opened is: {"query": {"bool": {"must": [ — that's 3 { and 1 [. After the range clause closes its own 3 braces, the remaining closers should be ]}}} (1 ] + 3 }), but the code produces ]}} (1 ] + 2 }).

This means the Elasticsearch query is still malformed and searchTestCasesForImpact will throw a parse error, causing the entire impact ranking feature to return empty results.

Suggested fix: change the format string to include the missing brace:

    "{\"range\": {\"timestamp\": {\"gte\": %d}}}]}}}", thirtyDaysAgo / 1000));

Or better yet, use a JSON library to build queries instead of string concatenation to prevent these bracket-matching issues.
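The brace-counting argument above can be checked mechanically. A small sketch (naive depth counter; it ignores braces inside string literals, which is adequate for this query shape, and the class name is hypothetical):

```java
public class BraceCheck {
  // Returns the number of unmatched openers: 0 for balanced input,
  // positive when '{' or '[' are left open.
  static int unbalanced(String s) {
    int depth = 0;
    for (char c : s.toCharArray()) {
      if (c == '{' || c == '[') depth++;
      if (c == '}' || c == ']') depth--;
    }
    return depth;
  }

  public static void main(String[] args) {
    // The query shape under review: 3 '{' and 1 '[' opened, but the suffix
    // "]}}"  supplies only 2 closing braces after the ']'.
    String malformed =
        "{\"query\": {\"bool\": {\"must\": ["
            + "{\"range\": {\"timestamp\": {\"gte\": 0}}}]}}";
    String fixed = malformed + "}"; // the one extra '}' the reviewers ask for
    System.out.println(unbalanced(malformed)); // 1
    System.out.println(unbalanced(fixed));     // 0
  }
}
```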

    return query.toString();
Comment on lines +422 to +432 (Copilot AI, Apr 14, 2026):

buildTestCaseImpactQuery starts a JSON query string but never closes the bool.must array/object properly, so it will generate invalid JSON. Build this query with a JSON builder (or fix the missing closing brackets/braces).

Suggested change:
-    StringBuilder query = new StringBuilder();
-    query.append("{\"query\": {\"bool\": {\"must\": [");
-    if (testCaseStatus != null && !testCaseStatus.isEmpty()) {
-      query.append(String.format("{\"term\": {\"testCaseStatus\": \"%s\"}},", testCaseStatus.toLowerCase()));
-    }
-    query.append(
-        String.format(
-            "{\"range\": {\"timestamp\": {\"gte\": %d}}}}}", thirtyDaysAgo / 1000));
-    return query.toString();
+    JsonArrayBuilder mustClauses = Json.createArrayBuilder();
+    if (testCaseStatus != null && !testCaseStatus.isEmpty()) {
+      mustClauses.add(
+          Json.createObjectBuilder()
+              .add(
+                  "term",
+                  Json.createObjectBuilder()
+                      .add("testCaseStatus", testCaseStatus.toLowerCase())));
+    }
+    mustClauses.add(
+        Json.createObjectBuilder()
+            .add(
+                "range",
+                Json.createObjectBuilder()
+                    .add(
+                        "timestamp",
+                        Json.createObjectBuilder().add("gte", thirtyDaysAgo / 1000))));
+    JsonObjectBuilder query =
+        Json.createObjectBuilder()
+            .add(
+                "query",
+                Json.createObjectBuilder()
+                    .add(
+                        "bool",
+                        Json.createObjectBuilder().add("must", mustClauses)));
+    return query.build().toString();
  }

  private Map<String, Integer> calculateMaxValues(List<Map<String, Object>> testCases) {
    Map<String, Integer> maxValues = new HashMap<>();
    int maxDownstream = 0;
    int maxConsumer = 0;
    int maxIncident = 0;

    for (Map<String, Object> tc : testCases) {
      int downstream = ((Number) tc.getOrDefault("downstreamUsage", 0)).intValue();
      int consumer = ((Number) tc.getOrDefault("consumerCount", 0)).intValue();
      int incident = ((Number) tc.getOrDefault("recentIncidents", 0)).intValue();

      if (downstream > maxDownstream) maxDownstream = downstream;
      if (consumer > maxConsumer) maxConsumer = consumer;
      if (incident > maxIncident) maxIncident = incident;
    }

    maxValues.put("downstreamUsage", Math.max(maxDownstream, 1));
    maxValues.put("consumerCount", Math.max(maxConsumer, 1));
    maxValues.put("recentIncidents", Math.max(maxIncident, 1));

    return maxValues;
  }

  private String addDomainFilter(String query, String domain, String index) {
    if (nullOrEmpty(domain)) {
      return query;
@@ -542,10 +542,53 @@ public DataQualityReport getDataQualityReport(
    if (nullOrEmpty(aggregationQuery) || nullOrEmpty(index)) {
      throw new IllegalArgumentException("aggregationQuery and index are required parameters");
    }
-    SubjectContext subjectContext = getSubjectContext(securityContext);
+SubjectContext subjectContext = getSubjectContext(securityContext);
Copilot AI, Apr 14, 2026:

This line lost its indentation (starts at column 0), which is inconsistent with surrounding code and may fail Spotless/format checks. Please reformat to match the file’s indentation.

Suggested change:
-SubjectContext subjectContext = getSubjectContext(securityContext);
+    SubjectContext subjectContext = getSubjectContext(securityContext);
    return repository.getDataQualityReport(query, aggregationQuery, index, domain, subjectContext);
  }

  @GET
  @Path("/dataQualityCheckImpact")
  @Operation(
      operationId = "getDataQualityCheckImpact",
      summary = "Get Data Quality Check Impact Ranking",
      description =
          """
          Get data quality checks ranked by impact score. The impact score is calculated based on:
          - Downstream usage (number of downstream entities using the data)
          - Consumer count (number of direct consumers)
          - Recent incidents (failed test results in the last 30 days)
          This helps prioritize which data quality checks are most critical to fix.
          """,
      responses = {
        @ApiResponse(
            responseCode = "200",
            description = "List of data quality checks ranked by impact",
            content =
                @Content(
                    mediaType = "application/json",
                    schema = @Schema(implementation = List.class)))
      })
  public List<?> getDataQualityCheckImpact(
      @Context UriInfo uriInfo,
Comment on lines +569 to +572 (Copilot AI, Apr 14, 2026):

The endpoint returns List<?> and documents the response as List.class, even though a dedicated DataQualityCheckImpact schema/model was added. Please return a typed List<DataQualityCheckImpact> (or DTO) and update the OpenAPI response annotation to the correct item type so clients can generate proper bindings.
      @Context SecurityContext securityContext,
      @Parameter(
              description = "Number of results to return",
Comment on lines +571 to +575 (Copilot AI, Apr 14, 2026):

The endpoint currently returns List<?>, which loses type safety and makes the OpenAPI schema unhelpful. Prefer returning a typed List<DataQualityCheckImpact> (schema was added) to provide a stable API contract.
              schema = @Schema(type = "integer", defaultValue = "10"))
          @QueryParam("limit")
          @DefaultValue("10")
          int limit,
      @Parameter(
              description = "Filter by test case status (e.g., Failed, Success)",
              schema = @Schema(type = "string"))
          @QueryParam("testCaseStatus")
          String testCaseStatus)
      throws IOException {
Comment on lines +580 to +585 (Copilot AI, Apr 14, 2026):

testCaseStatus is accepted as a free-form String and then interpolated into the search JSON. To avoid invalid values (and potential query injection), model it as the existing TestCaseStatus enum and reject unknown values with a 400.
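A sketch of the enum-based validation the comment suggests. The enum constants below are assumptions for illustration (OpenMetadata's actual TestCaseStatus values may differ), and the IllegalArgumentException stands in for whatever the resource layer maps to a 400 response:

```java
public class StatusValidationSketch {
  // Hypothetical stand-in for the project's TestCaseStatus enum.
  enum TestCaseStatus { Success, Failed, Aborted, Queued }

  // Rejects unknown values up front instead of interpolating free-form input
  // into the hand-built search JSON.
  static TestCaseStatus parseStatus(String raw) {
    for (TestCaseStatus s : TestCaseStatus.values()) {
      if (s.name().equalsIgnoreCase(raw)) {
        return s;
      }
    }
    throw new IllegalArgumentException("Unknown testCaseStatus: " + raw);
  }

  public static void main(String[] args) {
    System.out.println(parseStatus("failed")); // Failed
    try {
      parseStatus("bogus\" OR 1=1"); // injection-shaped input is rejected
    } catch (IllegalArgumentException e) {
      System.out.println("rejected"); // rejected
    }
  }
}
```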
    List<AuthRequest> authRequests = getAuthRequestsForListOps();
    authorizer.authorizeRequests(securityContext, authRequests, AuthorizationLogic.ANY);
    SubjectContext subjectContext = getSubjectContext(securityContext);
    return repository.getDataQualityCheckImpact(limit, testCaseStatus, subjectContext);
  }

  @POST
  @Operation(
      operationId = "createLogicalTestSuite",

@@ -2785,11 +2785,92 @@ public Response searchEntityRelationship(
        fqn, upstreamDepth, downstreamDepth, queryFilter, deleted);
  }

-  public Response searchDataQualityLineage(
+public Response searchDataQualityLineage(
Copilot AI, Apr 14, 2026:

This method signature lost its indentation (public Response searchDataQualityLineage starts at column 0), which is inconsistent with the rest of the class and is likely to fail Spotless/format checks. Please reformat this block.

Suggested change:
-public Response searchDataQualityLineage(
+  public Response searchDataQualityLineage(
      String fqn, int upstreamDepth, String queryFilter, boolean deleted) throws IOException {
    return searchClient.searchDataQualityLineage(fqn, upstreamDepth, queryFilter, deleted);
  }

  @SuppressWarnings("unchecked")
  public List<Map<String, Object>> searchTestCasesForImpact(
      String query, int from, int size, SubjectContext subjectContext) throws IOException {
    SearchRequest searchRequest =
        new SearchRequest()
            .withIndex(Entity.TEST_CASE)
            .withQuery(query)
Copilot AI, Apr 14, 2026:

SearchRequest.withQuery(...) is treated as a query-string in the search layer; passing JSON DSL here won’t be parsed as DSL. If you want to pass ES query DSL, it needs to go in queryFilter (or the dedicated raw-DSL parameter), not in query.

Suggested change:
-            .withQuery(query)
+            .withQuery("*")
+            .withQueryFilter(query)
            .withFrom(from)
            .withSize(size)
            .withSortFieldParam("timestamp")
Comment on lines +2798 to +2802 (Copilot AI, Apr 14, 2026):

This request sorts by timestamp on the testCase index, but the test_case mapping doesn’t have a root timestamp field (it’s under testCaseResult.timestamp). This sort (and any range filter on timestamp) won’t behave as intended unless you use the correct index/field path.
            .withDeleted(false)
            .withSortOrder("desc");

    Response response = search(searchRequest, subjectContext);

    if (response.getStatus() != 200) {
      return new ArrayList<>();
    }

    String json = (String) response.getEntity();
    List<Map<String, Object>> results = new ArrayList<>();

    try {
      JsonNode hitsNode = JsonUtils.extractValue(json, HITS, HITS);
      if (hitsNode == null || !hitsNode.isArray()) {
        return results;
      }

      Map<String, Map<String, Object>> groupedTestCases = new LinkedHashMap<>();

      for (Iterator<JsonNode> it = hitsNode.elements(); it.hasNext(); ) {
        JsonNode jsonNode = it.next();
        JsonNode sourceNode = JsonUtils.extractValue(jsonNode.toString(), SEARCH_SOURCE);
        if (sourceNode != null) {
          String testCaseFQN = JsonUtils.extractValue(sourceNode.toString(), FULLY_QUALIFIED_NAME);
          String testCaseId = JsonUtils.extractValue(sourceNode.toString(), ID);
          String entityFQN = JsonUtils.extractValue(sourceNode.toString(), "entityFQN");
          String testCaseStatus = JsonUtils.extractValue(sourceNode.toString(), "testCaseStatus");
          String timestamp = JsonUtils.extractValue(sourceNode.toString(), "timestamp");
Comment on lines +2825 to +2832 (Copilot AI, Apr 14, 2026):

This loop repeatedly converts JsonNode to string and re-parses it via JsonUtils.extractValue(...). There are existing overloads that operate directly on JsonNode (used elsewhere in this class), which avoids extra parsing and is less error-prone.

Suggested change:
-        JsonNode sourceNode = JsonUtils.extractValue(jsonNode.toString(), SEARCH_SOURCE);
-        if (sourceNode != null) {
-          String testCaseFQN = JsonUtils.extractValue(sourceNode.toString(), FULLY_QUALIFIED_NAME);
-          String testCaseId = JsonUtils.extractValue(sourceNode.toString(), ID);
-          String entityFQN = JsonUtils.extractValue(sourceNode.toString(), "entityFQN");
-          String testCaseStatus = JsonUtils.extractValue(sourceNode.toString(), "testCaseStatus");
-          String timestamp = JsonUtils.extractValue(sourceNode.toString(), "timestamp");
+        JsonNode sourceNode = JsonUtils.extractValue(jsonNode, SEARCH_SOURCE);
+        if (sourceNode != null) {
+          String testCaseFQN = JsonUtils.extractValue(sourceNode, FULLY_QUALIFIED_NAME);
+          String testCaseId = JsonUtils.extractValue(sourceNode, ID);
+          String entityFQN = JsonUtils.extractValue(sourceNode, "entityFQN");
+          String testCaseStatus = JsonUtils.extractValue(sourceNode, "testCaseStatus");
+          String timestamp = JsonUtils.extractValue(sourceNode, "timestamp");

          if (testCaseFQN == null || testCaseFQN.isEmpty()) {
            continue;
          }

          Map<String, Object> doc;
          if (groupedTestCases.containsKey(testCaseFQN)) {
            doc = groupedTestCases.get(testCaseFQN);
            int runCount = ((Number) doc.getOrDefault("runCount", 0)).intValue();
💡 Edge Case: ClassCastException risk on getOrDefault with Integer 0

At lines 2840 and 2843, doc.getOrDefault("runCount", 0) and doc.getOrDefault("recentIncidents", 0) use an int literal as the default. Since the map stores Integer values from doc.put("runCount", 1), the (Number) cast works, and the auto-boxed Integer default is fine. The pattern is fragile, though: if anyone changes the stored value type (e.g., to Long), the cast chain could break. This is minor but worth noting for maintainability.

            doc.put("runCount", runCount + 1);

            int failedCount = ((Number) doc.getOrDefault("recentIncidents", 0)).intValue();
            if ("failed".equalsIgnoreCase(testCaseStatus)) {
              doc.put("recentIncidents", failedCount + 1);
            }
          } else {
            doc = new HashMap<>();
            doc.put("testCaseId", testCaseId);
            doc.put("testCaseFullyQualifiedName", testCaseFQN);
            doc.put("entityFullyQualifiedName", entityFQN);
            doc.put("testCaseStatus", testCaseStatus);
            doc.put("timestamp", timestamp);
            doc.put("runCount", 1);
            doc.put("recentIncidents", "failed".equalsIgnoreCase(testCaseStatus) ? 1 : 0);
            doc.put("downstreamUsage", 0);
            doc.put("consumerCount", 0);
Comment on lines +2859 to +2860:

⚠️ Bug: downstreamUsage and consumerCount always 0, crippling score

While recentIncidents is now correctly computed by counting failed runs (lines 2843-2846), downstreamUsage and consumerCount remain hardcoded to 0 (lines 2858-2859). Since the impact score formula is 0.4 * downstream + 0.3 * consumer + 0.3 * incidents, 70% of the score weight is permanently zeroed out. The ranking will only ever reflect recent incident counts, making the multi-factor scoring model misleading: users see weights advertised for downstream/consumer impact that have no effect.

This was flagged in the previous review and is only partially addressed by this commit.

Suggested fix. Either:

1. Populate `downstreamUsage` and `consumerCount` with real data from lineage/usage APIs, or
2. Remove these factors from the score formula and UI until they are implemented, to avoid confusing users with always-zero metrics.

            groupedTestCases.put(testCaseFQN, doc);
          }
        }
      }

      results = new ArrayList<>(groupedTestCases.values());
    } catch (Exception e) {
      LOG.error("Error parsing search test cases for impact", e);
    }

    return results;
  }
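The grouping performed above (one document per test case FQN, incrementing runCount for every hit and recentIncidents for every failed run) can be sketched in isolation. The class name and the sample FQNs below are hypothetical:

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class GroupRunsSketch {
  // Each hit is a (testCaseFQN, status) pair; hits sharing an FQN collapse into
  // one doc, mirroring the loop in searchTestCasesForImpact.
  static Map<String, Map<String, Object>> group(List<String[]> hits) {
    Map<String, Map<String, Object>> grouped = new LinkedHashMap<>();
    for (String[] hit : hits) {
      String fqn = hit[0];
      String status = hit[1];
      Map<String, Object> doc = grouped.get(fqn);
      if (doc != null) {
        doc.put("runCount", (int) doc.get("runCount") + 1);
        if ("failed".equalsIgnoreCase(status)) {
          doc.put("recentIncidents", (int) doc.get("recentIncidents") + 1);
        }
      } else {
        doc = new HashMap<>();
        doc.put("runCount", 1);
        doc.put("recentIncidents", "failed".equalsIgnoreCase(status) ? 1 : 0);
        grouped.put(fqn, doc);
      }
    }
    return grouped;
  }

  public static void main(String[] args) {
    List<String[]> hits = List.of(
        new String[] {"svc.db.t1.nullCheck", "Failed"},
        new String[] {"svc.db.t1.nullCheck", "Success"},
        new String[] {"svc.db.t1.nullCheck", "Failed"});
    Map<String, Object> doc = group(hits).get("svc.db.t1.nullCheck");
    System.out.println(doc.get("runCount"));        // 3
    System.out.println(doc.get("recentIncidents")); // 2
  }
}
```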

  public Response searchSchemaEntityRelationship(
      String fqn, int upstreamDepth, int downstreamDepth, String queryFilter, boolean deleted)
      throws IOException {
@@ -0,0 +1,74 @@
{
  "$id": "https://open-metadata.org/schema/tests/dataQualityCheckImpact.json",
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "DataQualityCheckImpact",
  "description": "Data Quality Check Impact model for ranking checks by business criticality.",
  "type": "object",
  "javaType": "org.openmetadata.schema.tests.DataQualityCheckImpact",
  "properties": {
    "testCaseId": {
      "description": "Unique identifier of the test case.",
      "$ref": "../type/basic.json#/definitions/uuid"
    },
    "testCaseFullyQualifiedName": {
      "description": "Fully qualified name of the test case.",
      "$ref": "../type/basic.json#/definitions/fullyQualifiedEntityName"
    },
    "testSuiteId": {
      "description": "Unique identifier of the test suite.",
      "$ref": "../type/basic.json#/definitions/uuid"
    },
    "testSuiteFullyQualifiedName": {
      "description": "Fully qualified name of the test suite.",
      "$ref": "../type/basic.json#/definitions/fullyQualifiedEntityName"
    },
    "entityFullyQualifiedName": {
Copilot AI, Apr 14, 2026:

This schema uses entityFullyQualifiedName, while the UI/API currently refer to entityFQN. Align the property names (and required fields) with the actual API payload so generated models and UI types stay consistent.

Suggested change:
-    "entityFullyQualifiedName": {
+    "entityFQN": {
      "description": "The data entity this test case is testing.",
      "$ref": "../type/basic.json#/definitions/fullyQualifiedEntityName"
    },
    "entityType": {
      "description": "The type of entity being tested.",
      "type": "string"
    },
    "impactScore": {
      "description": "Calculated impact score (0-100) based on downstream usage, consumers, and incidents.",
      "type": "number",
      "minimum": 0,
      "maximum": 100
    },
    "downstreamUsage": {
      "description": "Number of downstream entities using this data.",
      "type": "integer",
      "minimum": 0
    },
    "consumerCount": {
      "description": "Number of direct consumers of this data.",
      "type": "integer",
      "minimum": 0
    },
    "recentIncidents": {
      "description": "Number of failed test results in the last 30 days.",
      "type": "integer",
      "minimum": 0
    },
    "lastFailedAt": {
      "description": "Timestamp of the most recent test failure.",
      "$ref": "../type/basic.json#/definitions/timestamp"
    },
    "testCaseStatus": {
      "description": "Current status of the test case.",
      "$ref": "./basic.json#/definitions/testCaseStatus"
    },
    "dataQualityDimension": {
      "description": "Data quality dimension category.",
      "type": "string"
    }
  },
  "required": [
    "testCaseId",
    "testCaseFullyQualifiedName",
    "impactScore",
    "testCaseStatus"
  ],
  "additionalProperties": false
}