
Commit 74eb6c8

Merge branch 'master' into feat/spire-docs
2 parents dd37981 + 0775f64 commit 74eb6c8

19 files changed

Lines changed: 2000 additions & 435 deletions

.cspell/terms.txt

Lines changed: 2 additions & 1 deletion
@@ -22,4 +22,5 @@ cephclusters
 cephfilesystems
 cephobjectstores
 cephobjectstoreusers
-ghostunnel
+ghostunnel
+lvmdconfig

.gitignore

Lines changed: 1 addition & 0 deletions
@@ -1,3 +1,4 @@
+/.vscode
 **/node_modules
 **/dist
 local/*

README.md

Lines changed: 80 additions & 2 deletions
@@ -18,6 +18,84 @@ $ yarn install
 
 ## Updating AC CLI Documentation
 
-* `yarn update-ac-manual`: Update the AC CLI documentation in [docs/en/ui/cli_tools/ac/](docs/en/ui/cli_tools/ac/).
+The AC CLI documentation in [docs/en/ui/cli_tools/ac/](docs/en/ui/cli_tools/ac/) is generated from the AC repository and should be updated through the sync script instead of manual edits.
 
-**Important:** Do not manually edit files in [docs/en/ui/cli_tools/ac/](docs/en/ui/cli_tools/ac/) as they are managed by this update command.
+### Prerequisites
+
+* Install project dependencies with `yarn install`
+* Ensure `git` is available on your machine
+* Ensure your account has SSH access to `git@gitlab-ce.alauda.cn:alauda/ac.git`
+
+### Default Update Command
+
+Run the standard update command from the repository root:
+
+```bash
+yarn update-ac-manual
+```
+
+This command runs:
+
+```bash
+node scripts/update-ac-manual.js
+```
+
+When you use `yarn update-ac-manual`, the repository sync pulls the `release-1.0` branch from `git@gitlab-ce.alauda.cn:alauda/ac.git` by default.
+
+### What the Sync Script Does
+
+The update script uses the following locations:
+
+* Source repository: `git@gitlab-ce.alauda.cn:alauda/ac.git`
+* Temporary clone directory: `local/temp-ac-clone`
+* Source manual directory inside the AC repository: `manual/`
+* Target directory in this repository: `docs/en/ui/cli_tools/ac/`
+
+The workflow is:
+
+1. Clone the AC repository into `local/temp-ac-clone`
+2. Read the `manual/` directory from the cloned repository
+3. Remove all existing files under `docs/en/ui/cli_tools/ac/` except `index.mdx`
+4. Copy the files from `manual/` into `docs/en/ui/cli_tools/ac/`
+5. Strip leading numeric prefixes from top-level Markdown filenames such as `01_foo.md`
+6. Remove the temporary clone directory after the sync finishes
+
+### Updating from a Different Branch
+
+If you need to sync from a branch other than `release-1.0`, pass the branch name as an argument:
+
+```bash
+yarn update-ac-manual <branch>
+```
+
+This is equivalent to running the script directly:
+
+```bash
+node scripts/update-ac-manual.js <branch>
+```
+
+Replace `<branch>` with the branch name you want to pull from.
+
+If you run `node scripts/update-ac-manual.js` without a branch argument, the script defaults to `release-1.0`.
+
+### Recommended Verification Steps
+
+After updating the generated files, review and validate the result:
+
+```bash
+git diff -- docs/en/ui/cli_tools/ac
+yarn lint
+yarn build
+```
+
+If the English source changes need to be synchronized to localized content, run:
+
+```bash
+yarn translate
+```
+
+### Notes
+
+* Do not manually edit files in [docs/en/ui/cli_tools/ac/](docs/en/ui/cli_tools/ac/); they are managed by the update script
+* `index.mdx` in [docs/en/ui/cli_tools/ac/](docs/en/ui/cli_tools/ac/) is preserved by the sync process
+* If local preview output changes in the sidebar or navigation, restart `yarn dev`
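The prefix-stripping behavior described in step 5 of the sync workflow can be sketched in a few lines of Node-style JavaScript. This is an illustrative helper only, not the actual implementation in `scripts/update-ac-manual.js`:

```javascript
// Hypothetical sketch of the prefix-stripping step: turn "01_foo.md"
// into "foo.md" while leaving names without a numeric prefix untouched.
function stripNumericPrefix(filename) {
  // Match one or more leading digits followed by an underscore.
  return filename.replace(/^\d+_/, "");
}

console.log(stripNumericPrefix("01_foo.md"));     // foo.md
console.log(stripNumericPrefix("10_bar_baz.md")); // bar_baz.md
console.log(stripNumericPrefix("index.mdx"));     // index.mdx
```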

docs/en/configure/clusters/overview.mdx

Lines changed: 26 additions & 0 deletions
@@ -87,3 +87,29 @@ The platform also supports connecting and managing existing Kubernetes clusters,
 | User-provisioned Infrastructure | User | User | Platform | Partial |
 | Hosted Control Plane (HCP) | Platform | Shared nodes (Platform) | Platform | Partial |
 | Connected Cluster (Cloud or CNCF) | External Provider | External Provider | Partial / External | Minimal |
+
+<span id="version-compatibility"></span>
+
+## Version Compatibility
+
+When importing or connecting existing clusters, validate the Kubernetes version against the current ACP compatibility policy.
+
+### ACP 4.3 and Later
+
+- ACP 4.3 adds support for Kubernetes 1.34 for platform-managed cluster scenarios.
+- For upgrades to ACP 4.3, workload clusters must remain within the compatible version range (1.34, 1.33, 1.32, and 1.31) before the `global` cluster upgrade.
+- For third-party clusters, ACP 4.3 accepts Kubernetes versions in the range `>=1.19.0 <1.35.0` for management.
+- Product documentation continues to list only the Kubernetes versions that have passed product validation for third-party cluster support and the default Extend baseline.
+- Product validation for the Extend baseline covers the following capability areas:
+  - Installing and using Operators
+  - Installing and using Cluster Plugins
+  - ClickHouse-based logging
+  - VictoriaMetrics-based monitoring
+- This does not mean that all specific Operators or Cluster Plugins are covered by product validation.
+- For specific Operators or Cluster Plugins outside this baseline, refer to the relevant product documentation or contact technical support.
+- For ACP 4.3 and later, workload clusters no longer need to be on the single latest compatible Kubernetes minor release before the `global` cluster upgrade.
+
+### ACP 4.2 and Earlier
+
+- Upgrade workload clusters to the latest documented compatible Kubernetes version before upgrading the `global` cluster.
+- Use the [Kubernetes Support Matrix](../../overview/kubernetes-support-matrix.mdx) as the main reference for the documented version mapping.
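A range such as `>=1.19.0 <1.35.0` can be validated with a small semver-style check. The helper below is a hypothetical sketch for illustration, not a platform API:

```javascript
// Illustrative check of a Kubernetes version against the ACP 4.3
// third-party management range ">=1.19.0 <1.35.0".
function parseVersion(v) {
  const [major, minor, patch] = v.replace(/^v/, "").split(".").map(Number);
  return { major, minor, patch: patch || 0 };
}

function inManagementRange(version) {
  const { major, minor } = parseVersion(version);
  if (major !== 1) return false;
  return minor >= 19 && minor < 35; // >=1.19.0 and <1.35.0
}

console.log(inManagementRange("v1.34.1")); // true
console.log(inManagementRange("1.18.20")); // false
console.log(inManagementRange("1.35.0"));  // false
```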

docs/en/configure/networking/functions/configure_gatewayapi_gateway.mdx

Lines changed: 10 additions & 0 deletions
@@ -49,6 +49,16 @@ In YAML, this setting is configured in the companion `EnvoyProxy` resource at
 The advantage is ease of use and high-availability load balancing capabilities.
 To use LoadBalancer, the cluster must have LoadBalancer support, which can be enabled via [MetalLB](./configure_metallb.mdx#configure_ip_pool).
 
+When using MetalLB, you can specify a static VIP through service annotations. In the Web Console, use the **Service Annotation** field:
+
+```yaml
+metallb.universe.tf/address-pool: ADDRESS_POOL_NAME
+# Or specify a specific IP directly
+metallb.universe.tf/loadBalancerIPs: VIP_IP
+```
+
+For more details, see [How To Specify a VIP When Using MetalLB](../how_to/tasks_for_envoy_gateway.mdx#specify_vip_metallb).
+
 #### NodePort \{#access_gateway_via_nodeport}
 
 The advantage is that it doesn't require any external dependencies.

docs/en/configure/networking/how_to/tasks_for_envoy_gateway.mdx

Lines changed: 42 additions & 2 deletions
@@ -151,6 +151,46 @@ spec:
 NodePort can only be within a specific range, typically `30000-32767`. If you want the Gateway listener port and NodePort to be consistent, your listener port must also be within the NodePort range.
 :::
 
+### How To Specify a VIP When Using MetalLB \{#specify_vip_metallb}
+
+When using MetalLB as the LoadBalancer provider, you can specify a static VIP for the Gateway service through service annotations.
+
+```yaml
+apiVersion: gateway.envoyproxy.io/v1alpha1
+kind: EnvoyProxy
+metadata:
+  name: demo
+  namespace: demo
+spec:
+  provider:
+    type: Kubernetes
+    kubernetes:
+      envoyService:
+        type: LoadBalancer
+        annotations: # [!code callout]
+          metallb.universe.tf/address-pool: production # [!code callout]
+          metallb.universe.tf/loadBalancerIPs: VIP_IP # [!code callout]
+```
+
+<Callouts>
+1. Add MetalLB annotations in the `envoyService.annotations` field
+2. Specify the address pool name to allocate IP from
+3. Or specify a specific IP address (must be within the address pool range)
+</Callouts>
+
+**Available Annotations:**
+
+| Annotation | Description |
+| ---------- | ----------- |
+| `metallb.universe.tf/address-pool` | Select the address pool to allocate IP from |
+| `metallb.universe.tf/loadBalancerIPs` | Specify a specific IP address (supports multiple IPs, comma-separated) |
+
+:::note
+- The specified IP must be within a configured MetalLB address pool
+- Make sure MetalLB is properly installed and configured before specifying VIPs
+- For MetalLB configuration, see [Configure MetalLB](../functions/configure_metallb.mdx#configure_ip_pool)
+:::
+
 ### How To Add Pod Annotations In Envoy Gateway
 
 [Add pod annotation](./configure_endpoint_health_checker.mdx#add_pod_annotation_in_envoy_gateway)

@@ -363,8 +403,8 @@ For applications running inside the cluster, use the service ClusterIP instead of
 If you need to access the LoadBalancer VIP from any cluster node, change `externalTrafficPolicy` to `Cluster`:
 
 ```bash
-kubectl patch envoyproxy $GATEWAY_NAME -n $GATEWAY_NS --type='json' -p='[
-{"op": "replace", "path": "/spec/provider/kubernetes/envoyService/externalTrafficPolicy", "value": "Cluster"}
+kubectl patch envoyproxy $GATEWAY_NAME -n $GATEWAY_NS --type='json' -p='[
+  {"op": "replace", "path": "/spec/provider/kubernetes/envoyService/externalTrafficPolicy", "value": "Cluster"}
 ]'
 ```
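The annotation table added above notes that `metallb.universe.tf/loadBalancerIPs` accepts multiple comma-separated IPs. A hypothetical helper for splitting such an annotation value (illustrative only; MetalLB performs this parsing itself):

```javascript
// Split a metallb.universe.tf/loadBalancerIPs annotation value, which
// may carry multiple comma-separated IPs, into a clean list.
function parseLoadBalancerIPs(value) {
  return value
    .split(",")
    .map((ip) => ip.trim())
    .filter(Boolean); // drop empty entries from trailing commas
}

console.log(parseLoadBalancerIPs("192.168.1.10"));
// -> ["192.168.1.10"]
console.log(parseLoadBalancerIPs("192.168.1.10, 192.168.1.11"));
// -> ["192.168.1.10", "192.168.1.11"]
```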

docs/en/overview/kubernetes-support-matrix.mdx

Lines changed: 24 additions & 4 deletions
@@ -12,8 +12,8 @@ This document provides the Kubernetes version support matrix for <Term name="pro
 <Term name="productShort" /> supports multiple Kubernetes versions across different <Term name="productShort" /> releases. Understanding the supported versions is essential for:
 
 - **Creating clusters** – Determine which Kubernetes versions can be used when provisioning new clusters
-- **Upgrading <Term name="productShort" />** – Ensure all workload clusters meet compatibility requirements before upgrading the global cluster
-- **Managing third-party clusters** – Verify that public cloud or CNCF-compliant Kubernetes clusters are within the supported version range
+- **Upgrading <Term name="productShort" />** – Ensure all workload clusters meet the documented compatible-version requirements before upgrading the global cluster
+- **Managing third-party clusters** – Verify that public cloud or CNCF-compliant Kubernetes clusters are within the supported management range
 
 ## Version Support Matrix
 

@@ -28,12 +28,32 @@ This ensures consistency and simplifies the upgrade path for new clusters.
 
 | <Term name="productShort" /> Version | Supported for Cluster Creation | Compatible Versions |
 | --- | --- | --- |
+| <Term name="productShort" /> 4.3 | 1.34 | 1.34, 1.33, 1.32, 1.31 |
 | <Term name="productShort" /> 4.2 | 1.33 | 1.33, 1.32, 1.31, 1.30 |
 | <Term name="productShort" /> 4.1 | 1.32 | 1.32, 1.31, 1.30, 1.29 |
 | <Term name="productShort" /> 4.0 | 1.31, 1.30, 1.29, 1.28 | 1.31, 1.30, 1.29, 1.28 |
 
+## ACP 4.3 Notes
+
+- ACP 4.3 adds support for Kubernetes 1.34 for platform-managed cluster scenarios.
+- For upgrades to ACP 4.3, the workload-cluster compatible versions are 1.34, 1.33, 1.32, and 1.31.
+- This means environments upgrading from ACP 4.0 to ACP 4.3 can keep workload clusters on Kubernetes 1.31 through 1.34 while upgrading the global cluster.
+
+## Third-Party Cluster Management Range
+
+- For third-party clusters, ACP 4.3 accepts Kubernetes versions in the range `>=1.19.0 <1.35.0`.
+- This management range is separate from the Compatible Versions column, which is the authoritative prerequisite for upgrading the ACP global cluster.
+- Product documentation continues to list only the Kubernetes versions that have passed product validation for third-party cluster support and the default Extend baseline.
+- Product validation for the Extend baseline covers the following capability areas:
+  - Installing and using Operators
+  - Installing and using Cluster Plugins
+  - ClickHouse-based logging
+  - VictoriaMetrics-based monitoring
+- This does not mean that all specific Operators or Cluster Plugins are covered by product validation.
+- For specific Operators or Cluster Plugins outside this baseline, refer to the relevant product documentation or contact technical support.
+
 ## Upgrade Requirements
 
-For <Term name="productShort" /> 4.2 and earlier, **all** workload clusters must be upgraded to the **latest** Kubernetes version in the compatible versions list **before** upgrading the <Term name="productShort" /> global cluster.
+For <Term name="productShort" /> 4.3 and later, workload clusters only need to remain within the documented compatible version range before upgrading the <Term name="productShort" /> global cluster. For ACP 4.3, this means Kubernetes 1.31 through 1.34.
 
-In future releases, workload clusters will only need to be within the compatible versions range to upgrade the <Term name="productShort" /> global cluster.
+In <Term name="productShort" /> 4.2 and earlier, **all** workload clusters must be upgraded to the **latest** Kubernetes version in the compatible versions list **before** upgrading the <Term name="productShort" /> global cluster.
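The policy difference between ACP 4.3+ (stay within the range) and 4.2-and-earlier (must be on the latest compatible release) can be expressed as a small check. The data is transcribed from the support matrix above; the function name is illustrative, not a platform API:

```javascript
// Documented compatible Kubernetes versions per ACP release,
// transcribed from the support matrix (newest first).
const compatibleVersions = {
  "4.3": ["1.34", "1.33", "1.32", "1.31"],
  "4.2": ["1.33", "1.32", "1.31", "1.30"],
  "4.1": ["1.32", "1.31", "1.30", "1.29"],
  "4.0": ["1.31", "1.30", "1.29", "1.28"],
};

// For ACP 4.3 and later a workload cluster only needs to be inside the
// range; for 4.2 and earlier it must be on the latest entry.
function workloadClusterReady(acpVersion, k8sMinor) {
  const range = compatibleVersions[acpVersion] || [];
  const rangePolicy = parseFloat(acpVersion) >= 4.3;
  return rangePolicy ? range.includes(k8sMinor) : k8sMinor === range[0];
}

console.log(workloadClusterReady("4.3", "1.31")); // true  (within range)
console.log(workloadClusterReady("4.2", "1.31")); // false (must be 1.33)
```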

docs/en/overview/release_notes.mdx

Lines changed: 56 additions & 24 deletions
@@ -5,49 +5,81 @@ title: Release Notes
 
 # Release Notes
 
-## 4.2.0
+## 4.3.0
 
 ### Features and Enhancements
 
-#### Support for Kubernetes 1.33
+#### Support for Kubernetes 1.34
 
-ACP now supports **Kubernetes 1.33**, delivering the latest upstream features, performance improvements, and security enhancements from the Kubernetes community.
+ACP 4.3 adds support for **Kubernetes 1.34** for platform-managed cluster scenarios.
 
-#### ACP CLI (ac)
+For upgrades to ACP 4.3, the workload-cluster compatible versions are 1.34, 1.33, 1.32, and 1.31. This compatible-version requirement determines whether the `global` cluster can be upgraded and is separate from the third-party cluster management range.
 
-The new **ACP CLI (ac)** enables you to develop, build, deploy, and run applications on ACP with a seamless command-line experience.
+For more information, see [Kubernetes Support Matrix](/overview/kubernetes-support-matrix.mdx).
+
+#### CVO-Based Cluster Upgrade Workflow
+
+ACP 4.3 introduces a Cluster Version Operator (CVO)-based upgrade workflow for both `global` and workload clusters.
 
 Key capabilities include:
 
-* **kubectl-compatible commands**
-* **Integrated authentication** with ACP platform environments
-* **Unified session management** across multiple environments
-* **ACP-specific extensions** for platform access and cross-environment workflows
+* Preparing upgrade artifacts and the upgrade controller with `bash upgrade.sh`
+* Running preflight checks before execution
+* Requesting upgrades from the Web Console or by updating `ClusterVersionShadow.spec.desiredUpdate`
+* Inspecting conditions, preflight results, stages, and history from `cvsh.status`
+
+ACP CLI also introduces upgrade-oriented administrator commands such as `ac adm upgrade`, `ac adm upgrade status`, `--to-latest`, `--to`, and `--allow-explicit-upgrade` for requesting and troubleshooting workload cluster upgrades from the current context.
+
+For operational guidance, see [Upgrade](/upgrade/index.mdx).
+
+#### MicroOS-Based Global Clusters on Huawei DCS
+
+ACP 4.3 allows administrators to create the `global` cluster on Huawei DCS with MicroOS-based immutable infrastructure. This extends the immutable operating model from workload clusters to platform installation scenarios on DCS.
+
+For more information, see [About Immutable Infrastructure](/configure/clusters/immutable-infra.mdx).
+
+#### Huawei Cloud Stack Support in Immutable Infrastructure
+
+ACP 4.3 adds Immutable Infrastructure support for Huawei Cloud Stack (HCS). The HCS provider documentation now covers provider overview, installation, cluster creation, node management, cluster upgrades, and provider APIs in the Immutable Infrastructure documentation set.
+
+For more information, see [About Immutable Infrastructure](/configure/clusters/immutable-infra.mdx).
+
+#### VMware vSphere Support in the 4.3 Cycle
+
+ACP 4.3 begins introducing Immutable Infrastructure support for VMware vSphere. The provider work is now tracked in the Immutable Infrastructure documentation set, while the public installation details and finalized plugin naming are still being published.
+
+For more information, see [About Immutable Infrastructure](/configure/clusters/immutable-infra.mdx).
+
+#### New Web Console Preview Entry
+
+ACP Core now provides the top-navigation anchor required by the next-generation Web Console experience. When Alauda Container Platform Web Console Base is installed on the `global` cluster, users in the **Container Platform** and **Administrator** views can open the new console through a **Preview Next-Gen Console** entry in a separate browser tab.
+
+The experience is designed for gradual migration and works with the Web Console Base plugin on the global cluster and the Web Console Collector plugin on workload clusters.
+
+#### Containerd 2.0 Baseline
 
-For full feature details, see:
-[ACP CLI (ac)](/ui/cli_tools/ac/index.mdx)
+ACP 4.3 upgrades the platform runtime baseline to containerd 2.0. Review runtime-dependent operational procedures before upgrading environments that rely on customized containerd configuration.
 
-#### Hosted Control Plane (HCP)
+#### Expanded Third-Party Cluster Management Range
 
-**Released:**
+For third-party clusters, ACP 4.3 now accepts Kubernetes versions in the range `>=1.19.0 <1.35.0`.
 
-* **Alauda Container Platform Kubeadm Provider**
-* **Alauda Container Platform Hosted Control Plane**
-* **Alauda Container Platform SSH Infrastructure Provider**
+This management range is separate from the compatible Kubernetes versions used to determine whether the `global` cluster can be upgraded.
 
-**Lifecycle:** *Agnostic* (released asynchronously with ACP)
+Product documentation continues to publish only the Kubernetes versions that have passed product validation for third-party cluster support and the default Extend baseline.
 
-Hosted Control Plane decouples the control plane from worker nodes by hosting each cluster's control plane as containerized components within a management cluster. This architecture reduces resource usage, speeds up cluster creation and upgrades, and provides improved scalability for large multi-cluster environments.
+Product validation for the Extend baseline covers the following capability areas:
 
-For more information, see:
-[About Hosted Control Plane](/configure/clusters/about-hcp.mdx)
+* Installing and using Operators
+* Installing and using Cluster Plugins
+* ClickHouse-based logging
+* VictoriaMetrics-based monitoring
 
-### Deprecated and Removed Features
+This does not mean that all specific Operators or Cluster Plugins are covered by product validation.
 
-#### Kubernetes Version Upgrade Policy Update
+For specific Operators or Cluster Plugins outside this baseline, refer to the relevant product documentation or contact technical support.
 
-Starting from **ACP 4.2**, upgrading the Kubernetes version is **no longer optional**. When performing a cluster upgrade, the Kubernetes version must be upgraded together with other platform components.
-This change ensures version consistency across the cluster and reduces future maintenance windows.
+For more information, see [Kubernetes Support Matrix](/overview/kubernetes-support-matrix.mdx) and [Import Standard Kubernetes Cluster](/configure/clusters/managed/import/standard-kubernetes.mdx).
 
 ### Fixed Issues