
Commit 7969f85

[KYUUBI #7365] [DOC] Fix links
### Why are the changes needed?

The PR fixes broken links in the documentation. Some links broke after [replacing recommonmark with myst](#7237), while others (like [kyuubi 1.9.4/aqe](https://kyuubi.readthedocs.io/en/v1.9.4/deployment/spark/aqe.html#:~:text=Configuring%20by%20spark%2Ddefaults.conf)) used incorrect paths and were already broken before the change.

### How was this patch tested?

Built the documentation with the following command and verified the fixed links manually:

```shell
make html
```

### Was this patch authored or co-authored using generative AI tooling?

No

Closes #7365 from dnskr/docs-fix-links.

Closes #7365

9dcbeed [Denis Krivenko] [DOC] Fix links

Authored-by: Denis Krivenko <dnskrv88@gmail.com>
Signed-off-by: Denis Krivenko <dnskrv88@gmail.com>
1 parent 52b038b commit 7969f85
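Per the testing note above, the docs are a Sphinx project, so the build can be reproduced locally. A minimal sketch, assuming the sources live under `docs/` with a `requirements.txt` for the Python dependencies (both assumptions); `make linkcheck` is Sphinx's standard link-checking builder and is shown only as an optional extra, not something this patch relies on:

```shell
# Build the HTML docs and check links (run from the repository root).
cd docs
pip install -r requirements.txt   # assumed location of the docs' Python deps
make html                         # the command used to verify this change
make linkcheck                    # optional: Sphinx's built-in dead-link checker
```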

10 files changed

Lines changed: 15 additions & 15 deletions


docs/client/bi_tools/datagrip.md

Lines changed: 1 addition & 1 deletion
@@ -29,7 +29,7 @@ Please go to [Download DataGrip](https://www.jetbrains.com/datagrip/download) to

 ### Get Kyuubi Started

-[Get kyuubi server started](../../quick_start/quick_start.html) before you try DataGrip with kyuubi.
+[Get kyuubi server started](../../quick_start/quick_start.rst) before you try DataGrip with kyuubi.

 For debugging purpose, you can use `tail -f` or `tailf` to track the server log.
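As a hedged aside on the quick-start link above, getting the server running and following its log before pointing DataGrip at it looks roughly like this; the start script matches the binary distribution layout, while the exact log file name varies by user and host:

```shell
# Start the Kyuubi server and follow its log while connecting from DataGrip.
$KYUUBI_HOME/bin/kyuubi start
tail -f $KYUUBI_HOME/logs/kyuubi-*.out   # log file name varies by user and host
```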

docs/client/bi_tools/hue.md

Lines changed: 1 addition & 1 deletion
@@ -25,7 +25,7 @@

 ### Get Kyuubi Started

-[Get the server Started](../../quick_start/quick_start.html) first before your try Hue with Kyuubi.
+[Get the server Started](../../quick_start/quick_start.rst) first before your try Hue with Kyuubi.

 ```
 Welcome to

docs/client/jdbc/hive_jdbc.md

Lines changed: 2 additions & 2 deletions
@@ -22,7 +22,7 @@
 Kyuubi is fully compatible with Hive JDBC and ODBC drivers that let you connect to popular Business Intelligence (BI)
 tools to query, analyze and visualize data though Spark SQL engines.

-It's recommended to use [Kyuubi JDBC driver](./kyuubi_jdbc.html) for new applications.
+It's recommended to use [Kyuubi JDBC driver](./kyuubi_jdbc.rst) for new applications.

 ## Install Hive JDBC

@@ -53,7 +53,7 @@ libraryDependencies += "org.apache.hive" % "hive-jdbc" % "2.3.8"
 implementation group: 'org.apache.hive', name: 'hive-jdbc', version: '2.3.8'
 ```

-For BI tools, please refer to [Quick Start](../../quick_start/index.html) to check the guide for the BI tool used.
+For BI tools, please refer to [Quick Start](../../quick_start/index.rst) to check the guide for the BI tool used.
 If you find there is no specific document for the BI tool that you are using, don't worry, the configuration part for all BI tools are basically the same.
 Also, we will appreciate if you can help us to improve the document.
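To make the recommendation above concrete, a minimal sketch of connecting through the Hive-compatible endpoint with beeline; the host name and user are placeholders, and 10009 is Kyuubi's default frontend port:

```shell
# Any Hive JDBC-compatible client works; beeline is the quickest smoke test.
beeline -u 'jdbc:hive2://kyuubi-server:10009/default' -n your_user
```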

docs/client/rest/rest_api.md

Lines changed: 1 addition & 1 deletion
@@ -455,7 +455,7 @@ Refresh the Hadoop configurations of the Kyuubi server.

 ### POST /admin/refresh/user_defaults_conf

-Refresh the [user defaults configs](../../configuration/settings.html#user-defaults) with key in format in the form of `___{username}___.{config key}` from default property file.
+Refresh the [user defaults configs](../../configuration/settings.md#user-defaults) with key in format in the form of `___{username}___.{config key}` from default property file.

 ### POST /admin/refresh/kubernetes_conf
docs/deployment/engine_lifecycle.md

Lines changed: 1 addition & 1 deletion
@@ -51,7 +51,7 @@ To better improve the overall resource utilization of the cluster,

 The above two configurations can be used together to set the TTL of engines.
 These configurations are user-facing and able to use in JDBC urls.
-Note that, for [connection](engine_share_level.html#connection) share level engines that will be terminated at once when the connection is disconnected, these configurations not necessarily work in this case.
+Note that, for [connection](engine_share_level.md#connection) share level engines that will be terminated at once when the connection is disconnected, these configurations not necessarily work in this case.

 ### Executor TTL
docs/deployment/engine_on_yarn.md

Lines changed: 2 additions & 2 deletions
@@ -111,7 +111,7 @@ Please refer to [Spark properties](https://spark.apache.org/docs/latest/running-
 Kyuubi currently does not support Spark's [YARN-specific Kerberos Configuration](https://spark.apache.org/docs/3.0.1/running-on-yarn.html#kerberos),
 so `spark.kerberos.keytab` and `spark.kerberos.principal` should not use now.

-Instead, you can schedule a periodically `kinit` process via `crontab` task on the local machine that hosts Kyuubi server or simply use [Kyuubi Kinit](settings.html#kinit).
+Instead, you can schedule a periodically `kinit` process via `crontab` task on the local machine that hosts Kyuubi server or simply use [Kyuubi Kinit](../configuration/settings.md#kinit).

 ## Deploy Kyuubi Flink Engine on YARN

@@ -250,7 +250,7 @@ With regard to YARN application mode, Kerberos is supported natively by Flink, s

 With regard to YARN session mode, `security.kerberos.login.keytab` and `security.kerberos.login.principal` are not effective, as Kyuubi Flink SQL engine mainly relies on Flink SQL client which currently does not support [Flink Kerberos Configuration](https://nightlies.apache.org/flink/flink-docs-stable/docs/deployment/config/#security-kerberos-login-keytab),

-As a workaround, you can schedule a periodically `kinit` process via `crontab` task on the local machine that hosts Kyuubi server or simply use [Kyuubi Kinit](settings.html#kinit).
+As a workaround, you can schedule a periodically `kinit` process via `crontab` task on the local machine that hosts Kyuubi server or simply use [Kyuubi Kinit](../configuration/settings.md#kinit).

 ## Deploy Kyuubi Hive Engine on YARN
docs/deployment/high_availability_guide.md

Lines changed: 1 addition & 1 deletion
@@ -51,7 +51,7 @@ When applying HA to Kyuubi deployment, we need to be aware of the below two thin
 - `kyuubi.ha.addresses` - the external zookeeper cluster address for deploy a `k.i.`
 - `kyuubi.ha.namespace` - the root directory, a.k.a. the ServerSpace for deploy a `k.i.`

-For more configurations, please see the HA section of [Introduction to the Kyuubi Configurations System](./settings.html#ha)
+For more configurations, please see the HA section of [Introduction to the Kyuubi Configurations System](../configuration/settings.md#ha)

 ### Pseudo mode
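For context on the two HA keys listed above, a minimal sketch of pointing a Kyuubi instance at an external ZooKeeper ensemble; the hosts and namespace are placeholders:

```shell
# Append the HA settings to kyuubi-defaults.conf on each Kyuubi instance.
cat >> $KYUUBI_HOME/conf/kyuubi-defaults.conf <<'EOF'
kyuubi.ha.addresses=zk1:2181,zk2:2181,zk3:2181
kyuubi.ha.namespace=kyuubi
EOF
```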

docs/deployment/spark/aqe.md

Lines changed: 2 additions & 2 deletions
@@ -178,7 +178,7 @@ partition size > skewedPartitionFactor * the median partition size && \
 skewedPartitionThresholdInBytes
 ```

-As Spark splits skewed partitions targeting [spark.sql.adaptive.advisoryPartitionSizeInBytes](aqe.html#how-to-set-spark-sql-adaptive-advisorypartitionsizeinbytes), ideally `skewedPartitionThresholdInBytes` should be larger than `advisoryPartitionSizeInBytes`. In this case, anytime you increase `advisoryPartitionSizeInBytes`, you should also increase `skewedPartitionThresholdInBytes` if you tend to enable the feature.
+As Spark splits skewed partitions targeting [spark.sql.adaptive.advisoryPartitionSizeInBytes](#how-to-set-spark-sql-adaptive-advisorypartitionsizeinbytes), ideally `skewedPartitionThresholdInBytes` should be larger than `advisoryPartitionSizeInBytes`. In this case, anytime you increase `advisoryPartitionSizeInBytes`, you should also increase `skewedPartitionThresholdInBytes` if you tend to enable the feature.

 ### Hidden Features

@@ -210,7 +210,7 @@ Kyuubi is a long-running service to make it easier for end-users to use Spark SQ

 ### Setting Default Configurations

-[Configuring by `spark-defaults.conf`](../settings.html#via-spark-defaults-conf) at the engine side is the best way to set up Kyuubi with AQE. All engines will be instantiated with AQE enabled.
+[Configuring by `spark-defaults.conf`](../../configuration/settings.md#via-spark-defaults-conf) at the engine side is the best way to set up Kyuubi with AQE. All engines will be instantiated with AQE enabled.

 Here is a config setting that we use in our platform when deploying Kyuubi.
docs/deployment/spark/dynamic_allocation.md

Lines changed: 3 additions & 3 deletions
@@ -170,7 +170,7 @@ Kyuubi is a long-running service to make it easier for end-users to use Spark SQ

 ### Setting Default Configurations

-[Configuring by `spark-defaults.conf`](../settings.html#via-spark-defaults-conf) at the engine side is the best way to set up Kyuubi with DRA. All engines will be instantiated with DRA enabled.
+[Configuring by `spark-defaults.conf`](../../configuration/settings.md#via-spark-defaults-conf) at the engine side is the best way to set up Kyuubi with DRA. All engines will be instantiated with DRA enabled.

 Here is a config setting that we use in our platform when deploying Kyuubi.

@@ -198,7 +198,7 @@ Note that, ```spark.cleaner.periodicGC.interval=5min``` is useful here when ```s

 On the server-side, the workloads for different users might be different.

-Then we can set different defaults for them via the [User Defaults](../settings.html#user-defaults) in ```$KYUUBI_HOME/conf/kyuubi-defaults.conf```
+Then we can set different defaults for them via the [User Defaults](../../configuration/settings.md#user-defaults) in ```$KYUUBI_HOME/conf/kyuubi-defaults.conf```

 ```properties
 # For a user named kent

@@ -220,7 +220,7 @@ SELECT * FROM default.tableA;

 For the above case, the value - 33 will not affect as Spark does not support change core configurations in runtime.

-Instead, end-users can set them via [JDBC Connection URL](../settings.html#via-jdbc-connection-url) for some specific cases.
+Instead, end-users can set them via [JDBC Connection URL](../../configuration/settings.md#via-jdbc-connection-url) for some specific cases.

 ## References
docs/security/ldap.md

Lines changed: 1 addition & 1 deletion
@@ -56,5 +56,5 @@ kyuubi.authentication.ldap.userFilter=hive-admin,hive,hive-test,hive-user
 kyuubi.authentication.ldap.customLDAPQuery=(&(objectClass=group)(objectClass=top)(instanceType=4)(cn=Domain*)), (&(objectClass=person)(|(sAMAccountName=admin)(|(memberOf=CN=Domain Admins,CN=Users,DC=domain,DC=com)(memberOf=CN=Administrators,CN=Builtin,DC=domain,DC=com))))
 ```

-Please refer to [Settings for LDAP authentication in Kyuubi](../configuration/settings.html?highlight=LDAP#authentication)
+Please refer to [Settings for LDAP authentication in Kyuubi](../configuration/settings.md#authentication)
 for all configurations.
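For completeness, a heavily hedged sketch of enabling LDAP authentication alongside the filter settings shown above; the server URL is a placeholder and the exact key set should be checked against the linked settings page:

```shell
cat >> $KYUUBI_HOME/conf/kyuubi-defaults.conf <<'EOF'
kyuubi.authentication=LDAP
kyuubi.authentication.ldap.url=ldap://ldap.example.com:389
kyuubi.authentication.ldap.userFilter=hive-admin,hive,hive-test,hive-user
EOF
```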
