OCPBUGS-84308: fix(cpo): delete terminated MCD pods to retry in-place upgrades #8434
PoornimaSingour wants to merge 4 commits into openshift:main
Conversation
When an in-place MachineConfig daemon pod is prematurely terminated (e.g., by a forced node drain), it may transition to the Succeeded or Failed phase without having completed the configuration update. Previously, reconcileUpgradePods did not check the pod's phase when it already existed, leaving the terminated pod in place and causing the upgrade to stall indefinitely. Now, when an MCD pod exists in a terminal phase (Succeeded or Failed) on a node that still requires upgrading, the controller deletes the pod so it is recreated on the next reconciliation cycle.

Signed-off-by: Poornima Singour <psingour@redhat.com>
Assisted-by: Claude Opus 4.6 <noreply@anthropic.com>
Skipping CI for Draft Pull Request. |
No actionable comments were generated in the recent review. 🎉
📝 Walkthrough

ReconcileInPlaceUpgrade was updated to detect upgrade MCD pods in Succeeded or Failed phases and delete them (tolerating NotFound) so a new upgrade pod can be retried. Running pods are left unchanged; if no pod exists, the controller creates one (with added creation logging). The caller's error return message was updated to reflect reconciling upgrade pods. A unit test, TestReconcileUpgradePods, was added to cover deleting terminated pods, retaining running pods, creating missing pods, and removing idle pods on fully updated nodes.

Sequence Diagram(s)

sequenceDiagram
participant Controller
participant API_Server
participant Pod
Controller->>API_Server: Get upgrade Pod for node
API_Server-->>Controller: Return Pod (Running | Succeeded | Failed | NotFound)
alt Pod is Running
Controller->>Controller: Leave Pod unchanged
else Pod is Succeeded or Failed
Controller->>API_Server: Delete Pod (log termination retry)
API_Server-->>Controller: Delete response (Success / NotFound / Error)
else Pod NotFound
Controller->>API_Server: Create upgrade Pod (log creation result)
API_Server-->>Controller: Create response (Success / Error)
end
🚥 Pre-merge checks: ✅ 11 passed, ❌ 1 failed (1 warning)
Warning: the review ran into problems (timed out fetching pipeline failures after 30000ms).
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: PoornimaSingour. The full list of commands accepted by this bot can be found here. Needs approval from an approver in each of these files.
@PoornimaSingour: This pull request references Jira Issue OCPBUGS-84308, which is invalid:
The bug has been updated to refer to the pull request using the external bug tracker.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
Codecov Report
❌ Patch coverage is
Additional details and impacted files:
@@ Coverage Diff @@
## main #8434 +/- ##
==========================================
+ Coverage 37.39% 37.44% +0.04%
==========================================
Files 751 751
Lines 91806 91978 +172
==========================================
+ Hits 34333 34441 +108
- Misses 54838 54894 +56
- Partials 2635 2643 +8
... and 3 files with indirect coverage changes
Flags with carried forward coverage won't be shown.
🧹 Nitpick comments (1)
control-plane-operator/hostedclusterconfigoperator/controllers/inplaceupgrader/inplaceupgrader_test.go (1)
Lines 736-738: ⚡ Quick win: tighten the deleted-pod assertion to NotFound instead of any error.

HaveOccurred() can pass for unrelated failures. Asserting IsNotFound makes the test intent explicit and failures clearer.

Proposed test hardening:
+import apierrors "k8s.io/apimachinery/pkg/api/errors"
 ...
 if tc.expectPodDeleted {
 	g.Expect(getErr).To(HaveOccurred(), "expected pod to be deleted")
+	g.Expect(apierrors.IsNotFound(getErr)).To(BeTrue(), "expected pod get to return NotFound after deletion")
 }

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@control-plane-operator/hostedclusterconfigoperator/controllers/inplaceupgrader/inplaceupgrader_test.go` around lines 736 - 738, Replace the loose assertion g.Expect(getErr).To(HaveOccurred()) for deleted pods with a NotFound-specific check: import k8s.io/apimachinery/pkg/api/errors as apierrors (or errors alias used elsewhere) and replace the assertion with g.Expect(apierrors.IsNotFound(getErr)).To(BeTrue(), "expected pod to be NotFound") when tc.expectPodDeleted is true, referencing the tc.expectPodDeleted branch and the getErr variable so the test fails only for a NotFound error.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited)
Review profile: CHILL
Plan: Enterprise
Run ID: 21221ab1-79e2-4d7c-8429-c9fb954b5229
📒 Files selected for processing (2)
control-plane-operator/hostedclusterconfigoperator/controllers/inplaceupgrader/inplaceupgrader.go
control-plane-operator/hostedclusterconfigoperator/controllers/inplaceupgrader/inplaceupgrader_test.go
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In
`@control-plane-operator/hostedclusterconfigoperator/controllers/inplaceupgrader/inplaceupgrader_test.go`:
- Around line 762-764: The test currently checks for a deleted pod using a broad
error assertion (g.Expect(getErr).To(HaveOccurred()) when tc.expectPodDeleted is
true); change this to assert specifically that the error is a NotFound error by
using the Kubernetes API errors helper on the getErr variable (e.g., assert
apierrors.IsNotFound(getErr) via the testing framework) so the deleted-pod
branch only passes for NotFound and not for other client/read errors; locate the
check guarded by tc.expectPodDeleted in inplaceupgrader_test.go and replace the
HaveOccurred() assertion with a specific IsNotFound assertion referencing
getErr.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited)
Review profile: CHILL
Plan: Enterprise
Run ID: 8b8c3bd2-d52a-403e-a20b-a5bbb89688c9
📒 Files selected for processing (1)
control-plane-operator/hostedclusterconfigoperator/controllers/inplaceupgrader/inplaceupgrader_test.go
…dePods Add a test case for when a terminated MCD pod already has a DeletionTimestamp set, verifying the controller skips the delete and continues without error. Signed-off-by: Poornima Singour <psingour@redhat.com> Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
b5637a4 to df176c0
Use apierrors.IsNotFound instead of broad HaveOccurred matcher so delete assertions only pass for the expected NotFound error. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
/jira refresh
@PoornimaSingour: This pull request references Jira Issue OCPBUGS-84308, which is valid. The bug has been moved to the POST state. 3 validation(s) were run on this bug
No GitHub users were found matching the public email listed for the QA contact in Jira (dhuynh@redhat.com), skipping review request.
@coderabbitai full review

✅ Actions performed: full review triggered.
Actionable comments posted: 2
🧹 Nitpick comments (1)
control-plane-operator/hostedclusterconfigoperator/controllers/inplaceupgrader/inplaceupgrader.go (1)
Lines 352-363: ⚡ Quick win: deleted-pod retry has no requeue guarantee, so the upgrade may stall.

After the terminated pod is deleted, reconcileUpgradePods returns nil, reconcileInPlaceUpgrade returns nil, and Reconcile returns ctrl.Result{} (no requeue). Because the deletion doesn't mutate any node annotation, no node-watch event fires to trigger a follow-up reconciliation. If no other MachineSet event arrives, the replacement pod is never created and the upgrade stalls indefinitely, which is exactly the problem this PR is fixing. Consider either propagating a boolean "needs requeue" flag back up through reconcileInPlaceUpgrade to Reconcile, or returning ctrl.Result{RequeueAfter: ...} whenever at least one pod was deleted.

💡 Sketch of the fix

-func (r *Reconciler) reconcileUpgradePods(...) error {
+func (r *Reconciler) reconcileUpgradePods(...) (bool, error) {
 	...
+	podDeleted := false
 	...
 	} else if pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed {
 		...
 		if err := hostedClusterClient.Delete(ctx, pod); err != nil {
 			...
-			return fmt.Errorf("error deleting terminated upgrade MCD pod for node %s: %w", node.Name, err)
+			return false, fmt.Errorf("error deleting terminated upgrade MCD pod for node %s: %w", node.Name, err)
 		}
+		podDeleted = true
 	}
 	...
-	return nil
+	return podDeleted, nil
 }

And in reconcileInPlaceUpgrade / Reconcile, propagate the flag to return ctrl.Result{RequeueAfter: 5 * time.Second}.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@control-plane-operator/hostedclusterconfigoperator/controllers/inplaceupgrader/inplaceupgrader.go` around lines 352 - 363, reconcileUpgradePods currently deletes terminated upgrade pods but returns nil which causes reconcileInPlaceUpgrade and Reconcile to not requeue and the replacement pod may never be created; change reconcileUpgradePods to return a (bool, error) or similar indicator (e.g., deletedPod bool) when it deletes at least one pod, update reconcileInPlaceUpgrade to propagate that flag up, and have Reconcile return ctrl.Result{RequeueAfter: 5 * time.Second} (or another short duration) whenever the flag indicates a pod was deleted so the controller will immediately requeue and create the replacement pod.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In
`@control-plane-operator/hostedclusterconfigoperator/controllers/inplaceupgrader/inplaceupgrader_test.go`:
- Around line 692-715: Update the test case that sets existingPod with a
DeletionTimestamp and Finalizers so it actually verifies the "skip" behavior
instead of just checking getErr; in the assertion block that currently checks
getErr (references variables existingPod, expectPodSkipped and the retrieved pod
variable), either assert that the retrieved pod's DeletionTimestamp is non-nil
(e.g., pod.DeletionTimestamp != nil) to prove we hit the skip path, or
replace/add a fake-client interceptor (WithInterceptorFuncs) to spy on Delete
and assert Delete was never called for that pod — do not rely solely on getErr.
In
`@control-plane-operator/hostedclusterconfigoperator/controllers/inplaceupgrader/inplaceupgrader.go`:
- Around line 352-363: reconcileUpgradePods now deletes both idle and terminated
pods but the error wrap at the caller still says "failed to delete idle upgrade
pods", which is misleading; update the error wrapping at the call site that
wraps the error from hostedClusterClient.Delete (the delete call inside
reconcileUpgradePods) to use a neutral message like "failed to delete upgrade
pod for node %s" or include the pod phase/node context so failures deleting
terminated pods are accurately described; adjust the fmt.Errorf wrapper (the
existing "failed to delete idle upgrade pods" message) to reference the upgrade
pod deletion generically (or include pod.Status.Phase) so logs reflect the
actual deletion target.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited)
Review profile: CHILL
Plan: Enterprise
Run ID: 7df03c82-4975-43fe-9170-34a23bcc9534
📒 Files selected for processing (2)
control-plane-operator/hostedclusterconfigoperator/controllers/inplaceupgrader/inplaceupgrader.go
control-plane-operator/hostedclusterconfigoperator/controllers/inplaceupgrader/inplaceupgrader_test.go
…iation Update error message at call site from "failed to delete idle upgrade pods" to "failed to reconcile upgrade pods" to accurately reflect that the function now handles both idle and terminated pod deletion. Strengthen the DeletionTimestamp skip test by asserting that the retrieved pod's DeletionTimestamp is non-nil, proving the skip path was actually taken rather than relying solely on existence checks. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
What this PR does / why we need it:
When an in-place MachineConfig daemon pod is prematurely terminated (e.g., by a forced node drain), it may transition to Succeeded or Failed phase without having completed the configuration update. Previously, reconcileUpgradePods did not check the pod's phase when it already existed, leaving the terminated pod in place and causing the upgrade to stall indefinitely.
Now, when an MCD pod exists in a terminal phase (Succeeded or Failed) on a node that still requires upgrading, the controller deletes the pod so it is recreated on the next reconciliation cycle.
Which issue(s) this PR fixes:
Fixes: https://redhat.atlassian.net/browse/OCPBUGS-84308
Special notes for your reviewer:
Checklist:
Summary by CodeRabbit
Bug Fixes
Tests