diff --git a/machine_configuration/mco-update-boot-images-manual.adoc b/machine_configuration/mco-update-boot-images-manual.adoc
index 6277a056220b..c80369c457cd 100644
--- a/machine_configuration/mco-update-boot-images-manual.adoc
+++ b/machine_configuration/mco-update-boot-images-manual.adoc
@@ -14,6 +14,8 @@ For {product-title} platforms that do not support automatic boot image updating
 Red{nbsp}Hat does not support manually updating the boot image in control plane nodes.
 ====
+include::modules/mco-update-boot-images-ibm-bare-metal.adoc[leveloffset=+1]
+
 include::modules/mco-update-boot-images-ibm-cloud.adoc[leveloffset=+1]

 include::modules/mco-update-boot-images-nutanix.adoc[leveloffset=+1]
diff --git a/modules/mco-update-boot-images-ibm-bare-metal.adoc b/modules/mco-update-boot-images-ibm-bare-metal.adoc
new file mode 100644
index 000000000000..381bae4bcc84
--- /dev/null
+++ b/modules/mco-update-boot-images-ibm-bare-metal.adoc
@@ -0,0 +1,124 @@
+// Module included in the following assemblies:
//
// * machine_configuration/mco-update-boot-images-manual.adoc

:_mod-docs-content-type: PROCEDURE
[id="mco-update-boot-images-ibm-bare-metal_{context}"]
= Manually updating the boot image on a bare-metal cluster

[role="_abstract"]
For a bare-metal cluster that was installed with {product-title} version 4.9 or earlier, you must change how the cluster provisions new nodes so that those nodes use an updated boot image. Using an up-to-date boot image ensures that any new nodes can scale up properly.

[NOTE]
====
The standard boot image management feature is not supported for bare-metal clusters.
====

If your bare-metal cluster was installed with {product-title} version 4.10 or later, boot images are kept current by the Cluster Version Operator (CVO) and are not at risk of skew. Skew enforcement is disabled for the cluster by default. No further action on your part is required to maintain the boot image versioning.
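
If you are not sure which {product-title} version the cluster was originally installed with, you can inspect the `ClusterVersion` history. The history is ordered from most recent to oldest, so the last entry reflects the initial installation. As one possible check, assuming the default `version` object name:

[source,terminal]
----
$ oc get clusterversion version -o jsonpath='{.status.history[-1:].version}'
----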

If your bare-metal cluster was installed with {product-title} version 4.9 or earlier, the cluster uses the legacy qcow2-based provisioning method. Boot images in these clusters are not managed by the CVO and can be significantly out of date. Follow this procedure to migrate the cluster to the `machine-os-images` provisioning method, which was introduced in {product-title} 4.10. This migration ensures that the cluster always uses the boot image from the current release version when scaling up nodes.

The following procedure enables the `install_coreos` deployment method and disables the qcow2 image cache. With these changes, the Cluster Baremetal Operator (CBO) uses the `machine-os-images` container from the release payload to provision new nodes. After the migration, the cluster has no skew risk, the same as a cluster installed at version 4.10 or later. Skew enforcement is automatically disabled after the migration is complete.

[NOTE]
====
Boot image updates are not required for Agent-based Installer clusters. The boot image for Agent-based Installer nodes is generated from the current release payload through the `oc adm node-image create` command and does not have skew issues.
====

.Prerequisites

* You have completed the general boot image prerequisites as described in the "Prerequisites" section of the link:https://access.redhat.com/articles/7053165#prerequisites-2[{product-title} Boot Image Updates knowledgebase article].

* You have the {oc-first} installed.

* A new physical host is registered and in the `available` state, and an associated `BareMetalHost` object is present in the `openshift-machine-api` namespace, so that you can scale up a new machine to verify the procedure.

.Procedure

. 
Check whether your cluster is using the legacy boot image provisioning path by running the following command:
+
[source,terminal]
----
$ oc get provisioning provisioning-configuration \
  -o jsonpath='{.spec.provisioningOSDownloadURL}'
----
+
* If the output is non-empty, your cluster was installed with {product-title} version 4.9 or earlier. Boot images are not managed by the Cluster Version Operator (CVO) and could be significantly out of date. Follow the steps in this procedure to migrate to the current provisioning path.
+
* If the output is empty, your cluster was installed with {product-title} version 4.10 or later. Boot images are kept current by the Cluster Version Operator (CVO) and are not at risk of skew. Skew enforcement is disabled for this cluster. No further action on your part is required to maintain the boot image versioning.

. Clear the legacy image fields and enable the `install_coreos` deployment method:

.. Migrate each machine set to the `machine-os-images` provisioning path by running the following command:
+
[source,terminal]
----
$ oc patch machineset <machine_set_name> -n openshift-machine-api --type merge \
  -p '{"spec":{"template":{"spec":{"providerSpec":{"value":{"customDeploy":{"method":"install_coreos"},"image":{"url":"","checksum":""}}}}}}}'
----
+
Replace `<machine_set_name>` with the name of your machine set.

.. Clear the legacy download URL by running the following command:
+
[source,terminal]
----
$ oc patch provisioning provisioning-configuration --type=merge -p '{"spec":{"provisioningOSDownloadURL":""}}'
----
+
This process migrates the cluster to the `machine-os-images` provisioning method, which ensures that the latest boot image is used for scaling nodes.

.Verification

. Scale up a machine set to check that the new node is using the new boot image:

.. 
Increase the machine set replicas by one to trigger a new machine by running the following command:
+
[source,terminal]
----
$ oc scale --replicas=<num> machineset <machineset> -n openshift-machine-api
----
+
where:

`<num>`:: Specifies the total number of replicas, including any existing replicas, that you want for this machine set.
`<machineset>`:: Specifies the name of the machine set to scale.

.. Optional: View the status of the machine set as it provisions by running the following command:
+
[source,terminal]
----
$ oc get machines.machine.openshift.io -n openshift-machine-api -w
----
+
It can take several minutes for the new machine to reach the `Running` state.

.. Verify that the new node has been created and is in the `Ready` state by running the following command:
+
[source,terminal]
----
$ oc get nodes
----

. Verify that the new node is using the new boot image by running the following command:
+
[source,terminal]
----
$ oc debug node/<node_name> -- chroot /host cat /sysroot/.coreos-aleph-version.json
----
+
Replace `<node_name>` with the name of your new node.
+
.Example output
[source,terminal]
----
{
# ...
  "ref": "docker://ostree-image-signed:oci-archive:/rhcos-9.6.20251212-1-ostree.x86_64.ociarchive",
  "version": "9.6.20251212-1"
}
----
+
where:

`version`:: Specifies the boot image version.
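
As an additional check that the migration is in effect, you can confirm that the legacy download URL is cleared and that a machine set now specifies the `install_coreos` method. For example, assuming a machine set named `<machine_set_name>`:

[source,terminal]
----
$ oc get provisioning provisioning-configuration \
  -o jsonpath='{.spec.provisioningOSDownloadURL}'

$ oc get machineset <machine_set_name> -n openshift-machine-api \
  -o jsonpath='{.spec.template.spec.providerSpec.value.customDeploy.method}'
----

After a successful migration, the first command should return empty output and the second should return `install_coreos`.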