// Module file: modules/before-updating-ocp.adoc

[id="before-updating-ocp_{context}"]
= Before updating the {product-title} cluster

[role="_abstract"]
Before updating your cluster, you must consider several factors to improve the chances of performing a successful update.

Consider the following information:

* Whether you have recently backed up etcd.
* In `PodDisruptionBudget`, if `minAvailable` is set to `1`, the nodes are drained to apply pending machine configs, which might block the eviction process. If several nodes are rebooted, all the pods might run on only one node, and the `PodDisruptionBudget` field can prevent the node drain.
* You might need to update the cloud provider resources for the new release if your cluster uses manually maintained credentials.
* You must review administrator acknowledgement requests, take any recommended actions, and provide the acknowledgement when you are ready.
* You can perform a partial update by updating the worker or custom pool nodes to accommodate the time it takes to update. You can pause and resume within the progress bar of each pool.
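For example, the `PodDisruptionBudget` scenario described above corresponds to a resource similar to the following sketch. The resource name and label are hypothetical:

[source,yaml]
----
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-pdb # hypothetical name
spec:
  minAvailable: 1 # a drain that would leave fewer than 1 available pod is blocked
  selector:
    matchLabels:
      app: example # hypothetical label
----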
[IMPORTANT]
====
* When an update is failing to complete, the Cluster Version Operator (CVO) reports the status of any blocking components while attempting to reconcile the update. Rolling your cluster back to a previous version is not supported. If your update is failing to complete, contact Red{nbsp}Hat support.
* Using the `unsupportedConfigOverrides` section to modify the configuration of an Operator is unsupported and might block cluster updates. You must remove this setting before you can update your cluster.
====
= Pausing a MachineHealthCheck resource by using the web console

[role="_abstract"]
During the update process, nodes in the cluster might become temporarily unavailable. For worker nodes, the machine health check might identify such nodes as unhealthy and reboot them. To avoid rebooting such nodes, pause all the `MachineHealthCheck` resources before updating the cluster.
.Prerequisites
.Procedure
. On the web console, navigate to *Compute*->*MachineHealthChecks*.
. For each `MachineHealthCheck` resource, pause the machine health checks by adding the `cluster.x-k8s.io/paused=""` annotation to the resource. For example, to add the annotation to the `machine-api-termination-handler` resource, complete the following steps:
.. Click the Options menu {kebab} next to the `machine-api-termination-handler` and click *Edit annotations*.
.. In the *Edit annotations* dialog, click *Add more*.
.. In the *Key* and *Value* fields, add `cluster.x-k8s.io/paused` and `""` values, respectively, and click *Save*.
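Applying the annotation through the steps above results in metadata similar to the following sketch on the `MachineHealthCheck` resource. The API version shown is the one commonly used for this resource, but verify it against your cluster:

[source,yaml]
----
apiVersion: machine.openshift.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: machine-api-termination-handler
  namespace: openshift-machine-api
  annotations:
    cluster.x-k8s.io/paused: "" # health checks are paused while this annotation is present
----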
= Changing the update server by using the web console

[role="_abstract"]
You can change the update server your cluster uses to retrieve information about update paths.
ifndef::openshift-origin[]
Changing the update server is optional. If you have an OpenShift Update Service (OSUS) installed and configured locally, you must set the URL for the server as the `upstream` to use the local server during updates.
endif::openshift-origin[]
.Procedure
. On the web console, navigate to *Administration*->*Cluster Settings* and click *version*.
. Click the *YAML* tab and then edit the `upstream` parameter value:
+
.Example YAML snippet
[source,yaml]
----
...
spec:
  clusterID: db93436d-7b05-42cc-b856-43e11ad2d31a
  upstream: '<update_server_url>'
...
----
+
Replace `<update_server_url>` with the URL for the update server.
+
The default `upstream` value is `\https://api.openshift.com/api/upgrades_info/v1/graph`.
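For instance, if you run a local OpenShift Update Service instance, the edited parameter might look like the following sketch. The host name is hypothetical:

[source,yaml]
----
spec:
  upstream: 'https://osus.example.com/api/upgrades_info/v1/graph' # hypothetical local OSUS URL
----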

// Module file: modules/update-upgrading-web.adoc

[id="update-upgrading-web_{context}"]
= Updating a cluster by using the web console
[role="_abstract"]
If updates are available, you can update your cluster from the web console.
You can find information about available {product-title} advisories and updates
[NOTE]
====
When you are ready to move to the next minor version, choose the channel that corresponds to that minor version.
The sooner you declare the update channel, the more effectively the cluster can recommend update paths to your target version.
The cluster might take some time to evaluate all the possible updates that are available and offer the best update recommendations to choose from.
Update recommendations can change over time, as they are based on what update options are available at the time.
If you cannot see an update path to your target minor version, keep updating your cluster to the latest patch release for your current version until the next minor version is available in the path.
====
+
If the *Update status* is not *Updates available*, you cannot update your cluster.
+
*Select channel* indicates the cluster version that your cluster is running or is updating to.
. Select a version to update to, and click *Save*.
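The channel chosen in the web console is stored in the `channel` field of the `ClusterVersion` resource; a minimal sketch, where the channel name is illustrative:

[source,yaml]
----
spec:
  channel: stable-4.14 # illustrative channel name; choose the channel for your target minor version
----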
[role="_abstract"]
In some specific use cases, you might want a more controlled update process where you do not want specific nodes updated concurrently with the rest of the cluster.
These use cases include, but are not limited to, the following situations:
* You have mission-critical applications that you do not want unavailable during the update. You can slowly test the applications on your nodes in small batches after the update.
* You have a small maintenance window that does not allow the time for all nodes to be updated, or you have multiple maintenance windows.
Pausing an MCP should be done with careful consideration and for short periods of time only.
====
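Pausing an MCP, as cautioned above, is controlled by the `spec.paused` field of the `MachineConfigPool` resource; a minimal sketch, where the pool name is hypothetical:

[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: workerpool-canary # hypothetical custom pool
spec:
  paused: true # no new machine configs are rolled out to this pool while paused
----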
If you want to use the canary rollout update process, see "Performing a canary rollout update".

// WARNING: This assembly has been moved into a subdirectory for 4.14+. Changes to this assembly for earlier versions should be done in separate PRs based off of their respective version branches. Otherwise, your cherry picks may fail.
[role="_abstract"]
You can perform minor version and patch updates on an {product-title} cluster by using the web console.
[NOTE]
If you want to use the canary rollout update process, see xref:../../updating/updating_a_cluster/update-using-custom-machine-config-pools.adoc#update-using-custom-machine-config-pools[Performing a canary rollout update].
[role="_additional-resources"]
.Additional resources
* xref:../../updating/updating_a_cluster/update-using-custom-machine-config-pools.adoc#update-using-custom-machine-config-pools[Performing a canary rollout update]