// Module included in the following assemblies:
//
// * machine_configuration/mco-update-boot-images-manual.adoc

:_mod-docs-content-type: PROCEDURE
[id="mco-update-boot-images-ibm-cloud_{context}"]
= Manually updating the boot image on an {ibm-cloud-name} cluster

[role="_abstract"]
For an {ibm-cloud-title} cluster, you can manually update the boot image for the compute nodes in your cluster by configuring your machine sets to use the latest {product-title} image as the boot image. This helps ensure that any new nodes can scale up properly.

[NOTE]
====
The standard boot image management feature is not supported for {ibm-cloud-title} clusters.
====

The following procedure, which includes steps to create environment variables that facilitate running the required commands, shows how to obtain {ibm-cloud-title} authentication credentials, download a boot image, upload that image to the {ibm-cloud-title} image service, and modify your compute machine sets to use the new boot image.

This procedure uses the default {ibm-cloud-title} Cloud Object Storage (COS) bucket in your cluster, which was created during cluster installation. Each COS bucket has a specific Cloud Resource Name (CRN), which the {ibm-cloud-title} CLI uses to select the correct COS bucket. The following procedure shows how to obtain the CRN for the default COS bucket. For more information on CRNs, see link:https://cloud.ibm.com/docs/account?topic=account-crn[Cloud Resource Names] in the {ibm-cloud-title} documentation.
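
A CRN is a colon-delimited identifier. For reference, a COS service instance CRN generally takes the following shape; the segment values shown here are illustrative placeholders, not values from your account:

[source,text]
----
crn:v1:bluemix:public:cloud-object-storage:global:a/<account_id>:<service_instance_id>::
----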

.Prerequisites

* You have completed the general boot image prerequisites as described in the "Prerequisites" section of the link:https://access.redhat.com/articles/7053165#prerequisites-2[{product-title} Boot Image Updates knowledgebase article].

* You have downloaded the latest version of the {product-title} installation program, `openshift-install`, from the {cluster-manager-url}. For more information, see "Obtaining the installation program."

* You have the {oc-first} installed.

* You have the link:https://cloud.ibm.com/docs/cli?topic=cli-getting-started[{ibm-cloud-title} CLI] installed.

* You have installed the {ibm-cloud-title} Virtual Private Cloud (VPC) CLI plugin.

* You have installed the {ibm-cloud-title} Cloud Object Storage CLI plugin.

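If the CLI plugins are not already installed, you can typically add them by running the following commands. The plugin names shown are the names commonly used in the {ibm-cloud-title} plugin repository; verify them with `ibmcloud plugin repo-plugins` if the installation fails:

[source,terminal]
----
$ ibmcloud plugin install vpc-infrastructure
----

[source,terminal]
----
$ ibmcloud plugin install cloud-object-storage
----
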
.Procedure

. Obtain the resource group and region from the `infrastructure` object and set the values in environment variables by running the following commands:
+
[source,terminal]
----
$ export RESOURCE_GROUP=$(oc get infrastructure cluster -o jsonpath='{.status.infrastructureName}')
----
+
[source,terminal]
----
$ export REGION=$(oc get infrastructure cluster -o jsonpath='{.status.platformStatus.ibmcloud.location}')
----

. Generate an {ibm-cloud-title} API key and log in to {ibm-cloud-title}:

.. Follow the instructions in link:https://www.ibm.com/docs/en/masv-and-l/cd?topic=cli-creating-your-cloud-api-key[Creating your {ibm-cloud-title} API key] in the {ibm-cloud-title} documentation to generate the API key.
+
To ensure that the key has the appropriate permissions, you must generate the key by using the same {ibm-cloud-title} account that you used to create the {product-title} cluster.

.. Set the API key in an environment variable by running the following command:
+
[source,terminal]
----
$ export IBM_API_KEY=<your_ibm_cloud_api_key>
----
+
Replace `<your_ibm_cloud_api_key>` with your {ibm-cloud-title} API key.

.. Log in to {ibm-cloud-title} by running the following command:
+
[source,terminal]
----
$ ibmcloud login --apikey ${IBM_API_KEY} -r ${REGION} -g ${RESOURCE_GROUP}
----
+
`IBM_API_KEY`, `REGION`, and `RESOURCE_GROUP` are environment variables you created in previous steps.
+
.Example output
[source,terminal]
----
API endpoint: https://cloud.ibm.com
Authenticating...
Retrieving API key token...
OK

Targeted account OpenShift-QE (xxxxxxxxxxxxxxxx) <-> xxxxxx

Targeted resource group xxxxxxx-ibm3h-9pbgg

Targeted region eu-gb


API endpoint: https://cloud.ibm.com
Region: eu-gb
User: xxxxx
Account: xxxxx
Resource group: xxxxx
----

. Obtain the URL of the {op-system} image to use as the boot image and set the location in an environment variable by running one of the following commands, based on your cluster architecture:
+
* Linux (x86_64, amd64):
+
[source,terminal]
----
$ export RHCOS_URL=$(openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.ibmcloud.formats["qcow2.gz"].disk.location')
----
+
* Linux on {ibm-z-name} and {ibm-linuxone-name} (s390x):
+
[source,terminal]
----
$ export RHCOS_URL=$(openshift-install coreos print-stream-json | jq -r '.architectures.s390x.artifacts.ibmcloud.formats["qcow2.gz"].disk.location')
----

. Obtain the boot image:

.. Download the image by running the following command:
+
[source,terminal]
----
$ curl -L -o /tmp/rhcos-new.qcow2.gz "${RHCOS_URL}"
----
+
`RHCOS_URL` is the environment variable you created in a previous step.

.. Decompress the downloaded image by running the following command:
+
[source,terminal]
----
$ gunzip /tmp/rhcos-new.qcow2.gz
----

. Upload the boot image to the default {ibm-cloud-title} Cloud Object Storage (COS) bucket:

.. Obtain the CRN for your COS bucket and set the CRN in an environment variable by running the following command:
+
[source,terminal]
----
$ export COS_CRN=$(ibmcloud resource service-instance "${RESOURCE_GROUP}-cos" --output json | jq -r '.[0].crn')
----

.. Optional: Check that the CRN is correct by running the following command:
+
[source,terminal]
----
$ echo ${COS_CRN}
----

.. Configure the default COS bucket with the CRN by running the following command:
+
[source,terminal]
----
$ ibmcloud cos config crn --crn "${COS_CRN}"
----
+
`COS_CRN` is the environment variable you created in a previous step.

.. Upload the boot image to the COS bucket by running the following command:
+
[source,terminal]
----
$ ibmcloud cos object-put --bucket "${RESOURCE_GROUP}-vsi-image" --key "rhcos-new.qcow2" --body /tmp/rhcos-new.qcow2 --region "${REGION}"
----
+
`RESOURCE_GROUP` and `REGION` are environment variables you created in previous steps.

.. Optional: Check that the image was uploaded to the COS bucket by running the following command:
+
[source,terminal]
----
$ ibmcloud cos objects --bucket "${RESOURCE_GROUP}-vsi-image" --region "${REGION}"
----
+
`RESOURCE_GROUP` and `REGION` are environment variables you created in previous steps.
+
.Example output
[source,terminal]
----
OK
Found 2 objects in bucket 'xxxxxx-ibm3h-9pbgg-vsi-image':
----

.. Set an environment variable to create a descriptive name for your boot image by running the following command:
+
[source,terminal]
----
$ export IMAGE_NAME="<descriptive_image_name>"
----
+
Setting a descriptive name for your boot image, such as including the {op-system-first} version number in the image name, makes it easier to track which version is currently deployed if you update the cluster in the future.
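+
For example, you might embed the {op-system} version in the image name. The version value shown here is hypothetical; use the version of the image you downloaded:
+
[source,terminal]
----
$ export IMAGE_NAME="rhcos-9-6-20251212"
----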

.. Create a custom image for your {ibm-cloud-title} Virtual Private Cloud (VPC) from the uploaded boot image by running one of the following commands, based on your cluster architecture:
+
--
* Linux (x86_64, amd64):
+
[source,terminal]
----
$ ibmcloud is image-create "${RESOURCE_GROUP}-${IMAGE_NAME}" --file "cos://${REGION}/${RESOURCE_GROUP}-vsi-image/rhcos-new.qcow2" --os-name rhel-coreos-stable-amd64 --resource-group-name "${RESOURCE_GROUP}"
----
+
You must set the `--os-name` argument to `rhel-coreos-stable-amd64` as shown. This parameter configures several {op-system-first} default values that are required.
+
`RESOURCE_GROUP`, `IMAGE_NAME`, and `REGION` are environment variables you created in previous steps.
+
* Linux on {ibm-z-name} and {ibm-linuxone-name} (s390x):
+
[source,terminal]
----
$ ibmcloud is image-create "${RESOURCE_GROUP}-${IMAGE_NAME}" --file "cos://${REGION}/${RESOURCE_GROUP}-vsi-image/rhcos-new.qcow2" --os-name red-8-s390x-byol --resource-group-name "${RESOURCE_GROUP}"
----
+
You must set the `--os-name` argument to `red-8-s390x-byol` as shown. This parameter configures several {op-system-first} default values that are required.
+
`RESOURCE_GROUP`, `IMAGE_NAME`, and `REGION` are environment variables you created in previous steps.
--

.. Optional: Watch the new image until its status changes from `pending` to `available` by running the following command:
+
[source,terminal]
----
$ watch ibmcloud is image "${RESOURCE_GROUP}-${IMAGE_NAME}"
----
+
`RESOURCE_GROUP` and `IMAGE_NAME` are environment variables you created in previous steps.

. Update each of your compute machine sets to include the new boot image:

.. Obtain the names of your machine sets for use in the following step by running the following command:
+
[source,terminal]
----
$ oc get machineset -n openshift-machine-api
----
+
.Example output
[source,terminal]
----
NAME                                 DESIRED   CURRENT   READY   AVAILABLE   AGE
rhhdrbk-b5564-4pcm9-worker-0         3         3         3       3           123m
ci-ln-xj96skb-72292-48nm5-worker-d   1         1         1       1           27m
----

.. Edit a machine set to update the `image` field in the `providerSpec` stanza to use your new boot image by running the following command:
+
[source,terminal]
----
$ oc patch machineset <machineset_name> -n openshift-machine-api --type merge \
  -p '{"spec":{"template":{"spec":{"providerSpec":{"value":{"image":"'${RESOURCE_GROUP}'-'${IMAGE_NAME}'"}}}}}}'
----
+
Replace `<machineset_name>` with the name of your machine set.
+
`RESOURCE_GROUP` and `IMAGE_NAME` are environment variables you created in previous steps.

. If boot image skew enforcement in your cluster is set to the manual mode, update the version of the new boot image in the `MachineConfiguration` object as described in "Updating the boot image skew enforcement version".

.Verification

. Scale up a machine set to check that the new node is using the new boot image:
+
--
.. Increase the machine set replicas by one to trigger a new machine by running the following command:
+
[source,terminal]
----
$ oc scale --replicas=<count> machineset <machineset_name> -n openshift-machine-api
----

where:

`<count>`:: Specifies the total number of replicas, including any existing replicas, that you want for this machine set.
`<machineset_name>`:: Specifies the name of the machine set to scale.

.. Optional: View the status of the machine set as it provisions by running the following command:
+
[source,terminal]
----
$ oc get machines.machine.openshift.io -n openshift-machine-api -w
----
+
It can take several minutes for the machine set to achieve the `Running` state.

.. Verify that the new node has been created and is in the `Ready` state by running the following command:
+
[source,terminal]
----
$ oc get nodes
----

.. Verify that the new node is using the new boot image by running the following command:
+
[source,terminal]
----
$ oc debug node/<new_node> -- chroot /host cat /sysroot/.coreos-aleph-version.json
----
+
Replace `<new_node>` with the name of your new node.
+
.Example output
[source,terminal]
----
{
# ...
  "ref": "docker://ostree-image-signed:oci-archive:/rhcos-9.6.20251212-1-ostree.x86_64.ociarchive",
  "version": "9.6.20251212-1"
}
----
where:

`version`:: Specifies the boot image version.
--
+
After you migrate all machine sets to the new boot image, the old boot image is no longer needed. You can remove the old boot image from your COS bucket.
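+
For example, assuming `<old_image_key>` is a placeholder for the object key of the previous boot image in the bucket, you could remove that object by running the following command:
+
[source,terminal]
----
$ ibmcloud cos object-delete --bucket "${RESOURCE_GROUP}-vsi-image" --key "<old_image_key>" --region "${REGION}"
----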