Commit a0cee89

Add server-side SAFE flag for UseQueryForMetadata rollout on DBSQL (#1437)

## Summary

Adds support for the server-side SAFE flag `enableUseQueryForThriftJdbc` to control the SHOW commands rollout for Thrift metadata operations on DBSQL warehouses.

### Priority order (matches the UseThriftClient pattern)

1. **Client-side param** (`UseQueryForMetadata` in the JDBC URL) — honoured first, unconditionally
2. **Server-side SAFE flag** (`enableUseQueryForThriftJdbc`) — checked for DBSQL warehouses only
3. **Default** (`0` = disabled)

### Examples

| `UseQueryForMetadata` in URL | Server flag | Compute | Result |
|------------------------------|-------------|---------|--------|
| `1` | any | any | SHOW commands **enabled** |
| `0` | any | any | SHOW commands **disabled** |
| not set | enabled | DBSQL warehouse | SHOW commands **enabled** |
| not set | disabled | DBSQL warehouse | SHOW commands **disabled** |
| not set | any | All-purpose cluster | SHOW commands **disabled** |

### Code changes

- Added a `resolveFeatureFlag(clientParam, serverFlagName)` helper in `DatabricksConnectionContext` — reusable for future client-first/server-fallback patterns
- Refactored `useQueryForMetadata()` to use the new helper

## Test plan

- [x] `DatabricksConnectionContextTest` — 110 tests pass
- [x] `DatabricksSessionTest` — 18 tests pass
- [x] Formatting clean

NO_CHANGELOG=true

This pull request was AI-assisted by Isaac.

---------

Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
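The priority order above can be sketched as a pure function. This is an illustration only, not driver code: `explicitParam`, `isWarehouse`, `clientDefault`, and `serverEnabled` are simplified stand-ins for what the driver actually resolves from the JDBC URL, the compute resource type, and the SAFE flag service.

```java
// Sketch of the client-first / server-fallback resolution described above.
// All names are illustrative; the real logic lives in
// DatabricksConnectionContext#resolveFeatureFlag.
public class FlagResolutionSketch {

  /**
   * @param explicitParam "1", "0", or null when the user did not set the URL param
   * @param isWarehouse   true for DBSQL warehouses, false for all-purpose clusters
   * @param clientDefault the driver's built-in default for the param
   * @param serverEnabled the server-side SAFE flag value
   */
  static boolean resolve(
      String explicitParam, boolean isWarehouse, boolean clientDefault, boolean serverEnabled) {
    // 1. Explicit client-side setting wins unconditionally
    if (explicitParam != null) {
      return explicitParam.equals("1");
    }
    // 2. All-purpose clusters never pick up the server-side flag
    if (!isWarehouse) {
      return false;
    }
    // 3. Warehouses: two-key rollout, both client default and server flag must agree
    return clientDefault && serverEnabled;
  }

  public static void main(String[] args) {
    // The five rows of the table above
    assert resolve("1", false, true, false);  // explicit 1: enabled anywhere
    assert !resolve("0", true, true, true);   // explicit 0: disabled anywhere
    assert resolve(null, true, true, true);   // warehouse, both keys on: enabled
    assert !resolve(null, true, true, false); // warehouse, server flag off: disabled
    assert !resolve(null, false, true, true); // cluster: always disabled
  }
}
```

With the param default flipped to `1` in this PR, the server-side flag effectively becomes the single rollout switch for warehouses, while explicit URL settings and all-purpose clusters are unaffected.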
1 parent 46bd9e0 · commit a0cee89

5 files changed: 183 additions & 4 deletions

NEXT_CHANGELOG.md

Lines changed: 8 additions & 0 deletions

```diff
@@ -2,6 +2,14 @@
 
 ## [Unreleased]
 
+### BREAKING CHANGES in 3.4.1
+
+1. **`getTables()`: Percent sign (`%`) in catalog argument is now treated as a literal character, not a wildcard.** Previously returned all tables; now returns zero rows unless a catalog named "%" exists. JDBC spec: catalog is an exact-match parameter, not a pattern. Migration: Pass `null` to search all catalogs.
+
+2. **`getColumnTypeName()`: DECIMAL columns now return `"DECIMAL"` without precision/scale** (e.g., `"DECIMAL"` not `"DECIMAL(10,2)"`). Use `getPrecision()` and `getScale()` for numeric constraints. JDBC spec: `getColumnTypeName()` returns the base type name only.
+
+3. **For DBSQL warehouses, metadata operations are now powered by SHOW SQL commands.** SQL Exec API mode was already powered by SHOW commands; now the same is true for Thrift server mode as well. To revert to native Thrift metadata RPCs, set `UseQueryForMetadata` to `0`.
+
 ### Added
 
 ### Updated
```
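Breaking change 2 above means callers that parsed precision and scale out of the type-name string must switch to `getPrecision()`/`getScale()`. A minimal sketch of the string-level difference; both helper methods are illustrative, not driver code:

```java
// Illustrates breaking change 2: getColumnTypeName() now returns the base
// type name only. These helpers mimic the old and new string shapes for
// comparison; they are not part of the driver.
public class DecimalTypeNameSketch {

  /** Old-style display string, reconstructed from getPrecision()/getScale(). */
  static String withPrecisionScale(String baseName, int precision, int scale) {
    return baseName + "(" + precision + "," + scale + ")";
  }

  /** New behavior: the base name, with any "(p,s)" suffix stripped. */
  static String baseTypeName(String typeName) {
    int paren = typeName.indexOf('(');
    return paren < 0 ? typeName : typeName.substring(0, paren);
  }

  public static void main(String[] args) {
    System.out.println(baseTypeName("DECIMAL(10,2)"));        // DECIMAL
    System.out.println(withPrecisionScale("DECIMAL", 10, 2)); // DECIMAL(10,2)
  }
}
```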

src/main/java/com/databricks/jdbc/api/impl/DatabricksConnectionContext.java

Lines changed: 58 additions & 1 deletion

```diff
@@ -45,6 +45,9 @@ public class DatabricksConnectionContext implements IDatabricksConnectionContext
   private static final String SQL_EXEC_FLAG_NAME =
       "databricks.partnerplatform.clientConfigsFeatureFlags.enableSqlExecForJdbc";
 
+  private static final String USE_QUERY_FOR_THRIFT_FLAG_NAME =
+      "databricks.partnerplatform.clientConfigsFeatureFlags.enableUseQueryForThriftJdbc";
+
   private final String host;
   @VisibleForTesting final int port;
   private final String schema;
@@ -1131,7 +1134,8 @@ public boolean enableShowCommandsForGetFunctions() {
 
   @Override
   public boolean useQueryForMetadata() {
-    return getParameter(DatabricksJdbcUrlParams.USE_QUERY_FOR_METADATA).equals("1");
+    return resolveFeatureFlag(
+        DatabricksJdbcUrlParams.USE_QUERY_FOR_METADATA, USE_QUERY_FOR_THRIFT_FLAG_NAME);
   }
 
   @Override
@@ -1194,6 +1198,59 @@ private String getParameterIgnoreDefault(DatabricksJdbcUrlParams key) {
     return this.parameters.getOrDefault(key.getParamName().toLowerCase(), null);
   }
 
+  /**
+   * Resolves a boolean feature flag with client-side priority over server-side.
+   *
+   * <p>Priority order:
+   *
+   * <ol>
+   *   <li>Client-side param (explicit user setting in JDBC URL) — honoured unconditionally
+   *   <li>Server-side feature flag (DBSQL warehouses only) — checked if user didn't set the param
+   *   <li>Default value from the param definition
+   * </ol>
+   *
+   * @param clientParam the JDBC URL parameter (e.g. USE_QUERY_FOR_METADATA)
+   * @param serverFlagName the server-side SAFE flag name
+   * @return true if the feature should be enabled
+   */
+  private boolean resolveFeatureFlag(DatabricksJdbcUrlParams clientParam, String serverFlagName) {
+    // 1. User explicitly set the param — honour it regardless of compute type
+    String explicitValue = getParameterIgnoreDefault(clientParam);
+    if (explicitValue != null) {
+      return explicitValue.equals("1");
+    }
+
+    // 2. No explicit setting + all-purpose cluster — always false
+    if (!(computeResource instanceof Warehouse)) {
+      return false;
+    }
+
+    // 3. No explicit setting + warehouse — enabled only when BOTH client default
+    // AND server-side flag agree. This gives a two-key rollout mechanism:
+    // flip the param default to "1" in the driver AND enable the server flag.
+    boolean clientDefault = getParameter(clientParam).equals("1");
+    boolean serverEnabled = false;
+    try {
+      serverEnabled =
+          DatabricksDriverFeatureFlagsContextFactory.getInstance(this)
+              .isFeatureEnabled(serverFlagName);
+    } catch (Exception e) {
+      LOGGER.debug("Failed to check server-side flag {}: {}", serverFlagName, e.getMessage());
+    }
+
+    if (clientDefault && serverEnabled) {
+      LOGGER.debug(
+          "Feature {} enabled for warehouse: client default={}, server flag {}={}",
+          clientParam.getParamName(),
+          clientDefault,
+          serverFlagName,
+          serverEnabled);
+      return true;
+    }
+
+    return false;
+  }
+
   private String getParameter(DatabricksJdbcUrlParams key, String defaultValue) {
     return this.parameters.getOrDefault(key.getParamName().toLowerCase(), defaultValue);
   }
```
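One behavior worth calling out in `resolveFeatureFlag` above: if the server-side flag lookup throws, the helper logs at debug level and treats the flag as disabled (fail-closed) rather than propagating the error. A minimal standalone sketch of that pattern, with a `BooleanSupplier` standing in for the SAFE flag service (an assumption for illustration, not the driver's API):

```java
import java.util.function.BooleanSupplier;

// Sketch of the fail-closed lookup in resolveFeatureFlag: a failure to reach
// the server-side flag service leaves the feature disabled instead of
// breaking connection setup. The supplier is a stand-in for
// DatabricksDriverFeatureFlagsContextFactory; names are illustrative.
public class FailClosedLookupSketch {

  static boolean serverFlagOrFalse(BooleanSupplier lookup) {
    try {
      return lookup.getAsBoolean();
    } catch (RuntimeException e) {
      // Mirrors the driver's LOGGER.debug(...) path: swallow and fall back
      return false;
    }
  }

  public static void main(String[] args) {
    assert serverFlagOrFalse(() -> true);
    assert !serverFlagOrFalse(() -> false);
    assert !serverFlagOrFalse(
        () -> {
          throw new IllegalStateException("flag service unavailable");
        });
  }
}
```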

src/main/java/com/databricks/jdbc/common/DatabricksJdbcUrlParams.java

Lines changed: 1 addition & 1 deletion

```diff
@@ -173,7 +173,7 @@ public enum DatabricksJdbcUrlParams {
   USE_QUERY_FOR_METADATA(
       "UseQueryForMetadata",
       "Use SQL SHOW commands instead of Thrift RPCs for metadata operations. When enabled, EnableShowCommandForGetFunctions is redundant",
-      "0"),
+      "1"),
   TREAT_METADATA_CATALOG_NAME_AS_PATTERN(
       "TreatMetadataCatalogNameAsPattern",
       "Treat catalog names as patterns in Thrift metadata RPCs. When disabled (default), wildcard characters in catalog names are escaped",
```

src/test/java/com/databricks/jdbc/api/impl/DatabricksConnectionContextTest.java

Lines changed: 87 additions & 2 deletions

```diff
@@ -1456,15 +1456,16 @@ public void testDefaultGetterCoverage() throws DatabricksSQLException {
 
   @Test
   public void testUseQueryForMetadataDefaultFalseForWarehouse() throws DatabricksSQLException {
-    // Warehouse URL without explicit UseQueryForMetadata — default is false (native RPCs)
+    // Warehouse without explicit setting — requires both client default AND server flag.
+    // Client default is "1" but no server flag set → false
     IDatabricksConnectionContext ctx =
         DatabricksConnectionContext.parse(TestConstants.VALID_URL_1, properties);
     assertFalse(ctx.useQueryForMetadata());
   }
 
   @Test
   public void testUseQueryForMetadataDefaultFalseForCluster() throws DatabricksSQLException {
-    // Cluster URL without explicit UseQueryForMetadata — default is false
+    // Cluster without explicit setting — always false regardless of defaults
     IDatabricksConnectionContext ctx =
         DatabricksConnectionContext.parse(TestConstants.VALID_CLUSTER_URL, properties);
     assertFalse(ctx.useQueryForMetadata());
@@ -1488,6 +1489,90 @@ public void testUseQueryForMetadataExplicitFalseOnWarehouse() throws DatabricksS
     assertFalse(ctx.useQueryForMetadata());
   }
 
+  @Test
+  public void testUseQueryForMetadata_serverFlagEnabled_warehouseReturnsTrue()
+      throws DatabricksSQLException {
+    // Warehouse without explicit setting — client default "1" + server flag enabled → true
+    DatabricksConnectionContext ctx =
+        (DatabricksConnectionContext)
+            DatabricksConnectionContext.parse(TestConstants.VALID_URL_1, properties);
+
+    Map<String, String> flags = new HashMap<>();
+    flags.put(
+        "databricks.partnerplatform.clientConfigsFeatureFlags.enableUseQueryForThriftJdbc", "true");
+    DatabricksDriverFeatureFlagsContextFactory.setFeatureFlagsContext(ctx, flags);
+
+    assertTrue(ctx.useQueryForMetadata());
+  }
+
+  @Test
+  public void testUseQueryForMetadata_serverFlagDisabled_warehouseReturnsFalse()
+      throws DatabricksSQLException {
+    // Warehouse without explicit setting — client default "1" but server flag disabled → false
+    DatabricksConnectionContext ctx =
+        (DatabricksConnectionContext)
+            DatabricksConnectionContext.parse(TestConstants.VALID_URL_1, properties);
+
+    Map<String, String> flags = new HashMap<>();
+    flags.put(
+        "databricks.partnerplatform.clientConfigsFeatureFlags.enableUseQueryForThriftJdbc",
+        "false");
+    DatabricksDriverFeatureFlagsContextFactory.setFeatureFlagsContext(ctx, flags);
+
+    assertFalse(ctx.useQueryForMetadata());
+  }
+
+  @Test
+  public void testUseQueryForMetadata_serverFlagEnabled_clusterIgnored()
+      throws DatabricksSQLException {
+    // All-purpose cluster — always false, server flag and client default both ignored
+    DatabricksConnectionContext ctx =
+        (DatabricksConnectionContext)
+            DatabricksConnectionContext.parse(TestConstants.VALID_CLUSTER_URL, properties);
+
+    Map<String, String> flags = new HashMap<>();
+    flags.put(
+        "databricks.partnerplatform.clientConfigsFeatureFlags.enableUseQueryForThriftJdbc", "true");
+    DatabricksDriverFeatureFlagsContextFactory.setFeatureFlagsContext(ctx, flags);
+
+    assertFalse(ctx.useQueryForMetadata());
+  }
+
+  @Test
+  public void testUseQueryForMetadata_clientExplicit1_overridesServerFlagDisabled()
+      throws DatabricksSQLException {
+    // Client sets UseQueryForMetadata=1 — should be honoured even if server flag is disabled
+    DatabricksConnectionContext ctx =
+        (DatabricksConnectionContext)
+            DatabricksConnectionContext.parse(
+                TestConstants.VALID_URL_1 + ";UseQueryForMetadata=1", properties);
+
+    Map<String, String> flags = new HashMap<>();
+    flags.put(
+        "databricks.partnerplatform.clientConfigsFeatureFlags.enableUseQueryForThriftJdbc",
+        "false");
+    DatabricksDriverFeatureFlagsContextFactory.setFeatureFlagsContext(ctx, flags);
+
+    assertTrue(ctx.useQueryForMetadata());
+  }
+
+  @Test
+  public void testUseQueryForMetadata_clientExplicit0_overridesServerFlagEnabled()
+      throws DatabricksSQLException {
+    // Client sets UseQueryForMetadata=0 — should be honoured even if server flag is enabled
+    DatabricksConnectionContext ctx =
+        (DatabricksConnectionContext)
+            DatabricksConnectionContext.parse(
+                TestConstants.VALID_URL_1 + ";UseQueryForMetadata=0", properties);
+
+    Map<String, String> flags = new HashMap<>();
+    flags.put(
+        "databricks.partnerplatform.clientConfigsFeatureFlags.enableUseQueryForThriftJdbc", "true");
+    DatabricksDriverFeatureFlagsContextFactory.setFeatureFlagsContext(ctx, flags);
+
+    assertFalse(ctx.useQueryForMetadata());
+  }
+
   // ---------------------------------------------------------------------------
   // Geospatial flag independence from complex datatype flag
   // ---------------------------------------------------------------------------
```

src/test/java/com/databricks/jdbc/api/impl/DatabricksSessionTest.java

Lines changed: 29 additions & 0 deletions

```diff
@@ -13,6 +13,7 @@
 import com.databricks.jdbc.api.internal.IDatabricksConnectionContext;
 import com.databricks.jdbc.common.DatabricksClientType;
 import com.databricks.jdbc.common.DatabricksJdbcUrlParams;
+import com.databricks.jdbc.common.safe.DatabricksDriverFeatureFlagsContextFactory;
 import com.databricks.jdbc.dbclient.impl.sqlexec.DatabricksMetadataQueryClient;
 import com.databricks.jdbc.dbclient.impl.sqlexec.DatabricksSdkClient;
 import com.databricks.jdbc.dbclient.impl.thrift.DatabricksThriftServiceClient;
@@ -22,6 +23,8 @@
 import com.databricks.jdbc.model.client.thrift.generated.TSessionHandle;
 import com.databricks.jdbc.telemetry.latency.DatabricksMetricsTimedProcessor;
 import java.sql.SQLException;
+import java.util.HashMap;
+import java.util.Map;
 import java.util.Properties;
 import org.junit.jupiter.api.Test;
 import org.junit.jupiter.api.extension.ExtendWith;
@@ -47,6 +50,11 @@ public class DatabricksSessionTest {
   static void setupWarehouse(boolean useThrift) throws SQLException {
     String url = useThrift ? WAREHOUSE_JDBC_URL : WAREHOUSE_JDBC_URL_WITH_SEA;
     connectionContext = DatabricksConnectionContext.parse(url, new Properties());
+    // Override feature flags with empty map to prevent test contamination from
+    // other test classes (e.g. DatabricksConnectionContextTest) that set flags
+    // on the shared static DatabricksDriverFeatureFlagsContextFactory.
+    DatabricksDriverFeatureFlagsContextFactory.setFeatureFlagsContext(
+        connectionContext, new HashMap<>());
   }
 
   private void setupCluster() throws SQLException {
@@ -328,6 +336,27 @@ public void testUseQueryForMetadataDisabledByDefaultForWarehouse() throws SQLExc
         "Default UseQueryForMetadata=0: warehouse uses native Thrift RPCs for metadata");
   }
 
+  @Test
+  public void testUseQueryForMetadataEnabledViaServerFlag() throws SQLException {
+    setupWarehouse(true /* useThrift */);
+    // Simulate server-side flag enabling SHOW commands for this warehouse
+    Map<String, String> flags = new HashMap<>();
+    flags.put(
+        "databricks.partnerplatform.clientConfigsFeatureFlags.enableUseQueryForThriftJdbc", "true");
+    DatabricksDriverFeatureFlagsContextFactory.setFeatureFlagsContext(connectionContext, flags);
+
+    assertTrue(connectionContext.useQueryForMetadata());
+    DatabricksSession session = new DatabricksSession(connectionContext, thriftClient);
+    assertInstanceOf(
+        DatabricksMetadataQueryClient.class,
+        session.getDatabricksMetadataClient(),
+        "Server flag enabled: warehouse should use SHOW commands for metadata");
+
+    // Clean up so other tests are not affected
+    DatabricksDriverFeatureFlagsContextFactory.setFeatureFlagsContext(
+        connectionContext, new HashMap<>());
+  }
+
   @Test
   public void testUseQueryForMetadataDisabledByDefaultForCluster() throws SQLException {
     connectionContext = DatabricksConnectionContext.parse(VALID_CLUSTER_URL, new Properties());
```
