
OCPBUGS-84577: clear stale EtcdRecoveryActive failure condition when etcd is healthy#8406

Open
vsolanki12 wants to merge 1 commit into openshift:main from vsolanki12:OCPBUGS-84577-etcd-recovery-stale-condition

Conversation

@vsolanki12

@vsolanki12 vsolanki12 commented May 4, 2026

What this PR does / why we need it:

When the etcd recovery job fails but etcd self-heals, the EtcdRecoveryJobFailed condition was never cleared. This caused the OpenShift Console to display a stale error message ("Error in Etcd Recovery job: the Etcd cluster requires manual intervention.") on the HostedCluster overview page, even when the cluster was fully healthy (Available=True, Degraded=False, EtcdAvailable=True).

This fix adds two checks in reconcileETCDMemberRecovery:

  • When a failed recovery job exists but the etcd StatefulSet is fully available (3/3 replicas), clean up the job and clear the condition.
  • When no failing etcd pods exist and etcd is healthy, clear any stale EtcdRecoveryJobFailed condition.
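The two checks above amount to a small decision: given the recovery Job state and etcd StatefulSet readiness, either keep the failure condition or clear it. Below is a minimal Go sketch of that decision logic; the constants and function names are illustrative stand-ins, not the actual identifiers in reconcileETCDMemberRecovery:

```go
package main

import "fmt"

// Illustrative stand-ins for the condition reasons; the real constants
// live in the hypershift API package.
const (
	reasonJobFailed  = "EtcdRecoveryJobFailed"
	reasonAsExpected = "AsExpected"
)

// etcdFullyAvailable reports whether the etcd StatefulSet has all desired
// replicas ready and available (the PR checks 3/3 for HA clusters).
func etcdFullyAvailable(ready, available, desired int32) bool {
	return ready == desired && available == desired
}

// nextRecoveryReason models the two checks added to reconcileETCDMemberRecovery:
//  1. a failed recovery Job plus a fully available etcd means cleanup and clear;
//  2. no failing pod plus healthy etcd clears a stale job-failure condition.
func nextRecoveryReason(jobFailed, podFailing, fullyAvailable bool, priorReason string) string {
	if jobFailed {
		if fullyAvailable {
			return reasonAsExpected // etcd self-healed: delete job, clear condition
		}
		return reasonJobFailed // still unhealthy: keep signaling manual intervention
	}
	if !podFailing && fullyAvailable && priorReason == reasonJobFailed {
		return reasonAsExpected // clear the stale failure condition
	}
	return priorReason
}

func main() {
	healthy := etcdFullyAvailable(3, 3, 3)
	fmt.Println(nextRecoveryReason(true, false, healthy, reasonJobFailed))
}
```

In the controller itself the "clear" branch also deletes the Job and updates HostedCluster status; this sketch only captures which reason wins.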

Which issue(s) this PR fixes:

Fixes https://issues.redhat.com/browse/OCPBUGS-84577

Special notes for your reviewer:

  • The etcd recovery feature is gated behind ENABLE_ETCD_RECOVERY env var and only applies to managed, highly-available etcd clusters.
  • Both fix paths were verified on a live KubeVirt HCP cluster by simulating the stale condition and confirming it gets cleared.
  • The etcd_manual_intervention_required metric (which reads this condition) will also correctly reset.

Checklist:

  • Subject and description added to both commit and PR.
  • Relevant issues have been referenced.
  • This change includes docs.
  • This change includes unit tests.

Summary by CodeRabbit

  • Bug Fixes

    • Automatically cleans up etcd recovery resources and clears the recovery-active failure state when the etcd cluster reports full readiness, avoiding unnecessary manual-intervention alerts while still signaling failures when the cluster is unhealthy.
    • Clears stale recovery failure conditions when there is no failing etcd pod and the cluster is fully available.
  • Tests

    • Added unit tests covering recovery success/failure paths, condition clearing, and cleanup behavior.

@openshift-merge-bot
Contributor

Pipeline controller notification
This repo is configured to use the pipeline controller. Second-stage tests will be triggered either automatically or after the lgtm label is added, depending on the repository configuration. The pipeline controller automatically detects which contexts are required and uses /test Prow commands to trigger the second stage.

For optional jobs, comment /test ? to see a list of all defined jobs. To manually trigger all second-stage jobs, use the /pipeline required command.

This repository is configured in: LGTM mode

@openshift-ci openshift-ci Bot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label May 4, 2026
@openshift-ci
Contributor

openshift-ci Bot commented May 4, 2026

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

@openshift-ci-robot openshift-ci-robot added jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. jira/invalid-bug Indicates that a referenced Jira bug is invalid for the branch this PR is targeting. labels May 4, 2026
@openshift-ci-robot

@vsolanki12: This pull request references Jira Issue OCPBUGS-84577, which is invalid:

  • expected the bug to target the "5.0.0" version, but no target version was set

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.

The bug has been updated to refer to the pull request using the external bug tracker.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@coderabbitai
Contributor

coderabbitai Bot commented May 4, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

When an existing etcd recovery Job finishes unsuccessfully, the reconciler now fetches the etcd StatefulSet; if the StatefulSet reports ReadyReplicas==3 and AvailableReplicas==3 it deletes the recovery Job and related objects, clears the EtcdRecoveryActive condition (Status=False, Reason=AsExpectedReason) on the HostedCluster, updates HostedCluster status, and returns without marking manual-intervention. If the StatefulSet is not fully ready, the reconciler retains the prior failure handling (EtcdRecoveryJobFailedReason). Separately, when no failing etcd pod is detected and the StatefulSet is fully available, the reconciler clears a stale EtcdRecoveryActive failure condition only if its prior Reason was EtcdRecoveryJobFailedReason, updating status. A unit test verifies these behaviors.
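The guard described above (clear the condition only when its prior Reason was EtcdRecoveryJobFailedReason) can be sketched with a miniature stand-in for metav1.Condition and meta.SetStatusCondition; the types and helpers below are simplified illustrations, not the controller's actual code:

```go
package main

import "fmt"

// Condition is a hypothetical miniature of metav1.Condition, just enough
// to show the clearing step described in the walkthrough.
type Condition struct {
	Type, Reason string
	Status       string // "True" or "False"
}

// setCondition replaces the condition of the same Type or appends it,
// mirroring how meta.SetStatusCondition behaves for this purpose.
func setCondition(conds []Condition, c Condition) []Condition {
	for i := range conds {
		if conds[i].Type == c.Type {
			conds[i] = c
			return conds
		}
	}
	return append(conds, c)
}

// clearStaleRecoveryFailure clears EtcdRecoveryActive only when its prior
// reason was EtcdRecoveryJobFailed; any other reason is left untouched.
func clearStaleRecoveryFailure(conds []Condition) []Condition {
	for _, c := range conds {
		if c.Type == "EtcdRecoveryActive" && c.Reason == "EtcdRecoveryJobFailed" {
			return setCondition(conds, Condition{
				Type: "EtcdRecoveryActive", Reason: "AsExpected", Status: "False",
			})
		}
	}
	return conds
}

func main() {
	conds := []Condition{{Type: "EtcdRecoveryActive", Reason: "EtcdRecoveryJobFailed", Status: "False"}}
	fmt.Println(clearStaleRecoveryFailure(conds)[0].Reason)
}
```

The reason check is what keeps the reconciler from stomping on an EtcdRecoveryActive condition set for some other cause.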

Sequence Diagram

sequenceDiagram
    participant Reconciler
    participant ETCDJob as ETCD Recovery Job
    participant ETCDSet as ETCD StatefulSet
    participant HostedCluster as HostedCluster Status

    Reconciler->>ETCDJob: Check if Job exists and failed
    alt Job Failed
        Reconciler->>ETCDSet: Fetch StatefulSet readiness
        alt ReadyReplicas==3 && AvailableReplicas==3
            Reconciler->>ETCDJob: Delete recovery Job and objects
            Reconciler->>HostedCluster: Set EtcdRecoveryActive=False (Reason: AsExpectedReason)
            Reconciler->>HostedCluster: Update status
        else StatefulSet Not Fully Ready
            Reconciler->>HostedCluster: Set EtcdRecoveryActive=True/Failed (Reason: EtcdRecoveryJobFailedReason)
            Reconciler->>HostedCluster: Update status
        end
    else No Failing Pod Detected
        Reconciler->>ETCDSet: Check if fully available
        alt StatefulSet Fully Available
            Reconciler->>HostedCluster: If prior Reason==EtcdRecoveryJobFailedReason, clear EtcdRecoveryActive (Status=False, Reason=AsExpectedReason)
            Reconciler->>HostedCluster: Update status
        end
    end
🚥 Pre-merge checks | ✅ 10 | ❌ 2

❌ Failed checks (2 warnings)

  • Docstring Coverage (⚠️ Warning): Docstring coverage is 0.00%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.
  • Test Structure And Quality (⚠️ Warning): 5 of 6 key assertions lack failure messages, violating requirement #4 for meaningful diagnostic messages. Resolution: add descriptive failure messages to all assertions (the err check, the client.Get check, the condition existence checks, and the condition reason check), e.g. g.Expect(err).ToNot(HaveOccurred(), "reconcileETCDMemberRecovery failed unexpectedly").
✅ Passed checks (10 passed)

  • Description Check: check skipped; CodeRabbit's high-level summary is enabled.
  • Title check: the title directly and specifically describes the main change, clearing a stale EtcdRecoveryActive failure condition when etcd becomes healthy.
  • Linked Issues check: check skipped; no linked issues were found for this pull request.
  • Out of Scope Changes check: check skipped; no linked issues were found for this pull request.
  • Stable And Deterministic Test Names: all test names in TestReconcileETCDMemberRecovery are stable, descriptive, and static, with no dynamic content such as pod names, timestamps, UUIDs, or IP addresses.
  • Microshift Test Compatibility: not applicable; the new TestReconcileETCDMemberRecovery is a standard Go unit test (testing.T), not a Ginkgo e2e test.
  • Single Node Openshift (Sno) Test Compatibility: not applicable; no Ginkgo e2e tests were added.
  • Topology-Aware Scheduling Compatibility: no scheduling constraints; controller monitoring logic only, gated by the HighlyAvailable topology. No affinity, node selectors, or topology spreads.
  • Ote Binary Stdout Contract: not applicable; the modified files are HyperShift controller code and unit tests, not OTE test extension binaries. No stdout writes found.
  • Ipv6 And Disconnected Network Test Compatibility: not applicable; the new test uses fake clients and is not a Ginkgo e2e test.

Comment @coderabbitai help to get the list of available commands and usage tips.

@openshift-ci openshift-ci Bot added area/control-plane-operator Indicates the PR includes changes for the control plane operator - in an OCP release area/hypershift-operator Indicates the PR includes changes for the hypershift operator and API - outside an OCP release and removed do-not-merge/needs-area labels May 4, 2026
@openshift-ci
Contributor

openshift-ci Bot commented May 4, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: vsolanki12
Once this PR has been reviewed and has the lgtm label, please assign sjenning for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Details

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@vsolanki12 vsolanki12 changed the title fix(etcd-recovery): OCPBUGS-84577: clear stale EtcdRecoveryActive failure condition when etcd is healthy OCPBUGS-84577: clear stale EtcdRecoveryActive failure condition when etcd is healthy May 4, 2026
@codecov

codecov Bot commented May 4, 2026

Codecov Report

❌ Patch coverage is 60.60606% with 13 lines in your changes missing coverage. Please review.
✅ Project coverage is 37.61%. Comparing base (a62eefd) to head (7788d6b).
⚠️ Report is 23 commits behind head on main.

Files with missing lines Patch % Lines
...perator/controllers/hostedcluster/etcd_recovery.go 60.60% 9 Missing and 4 partials ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #8406      +/-   ##
==========================================
+ Coverage   37.44%   37.61%   +0.17%     
==========================================
  Files         751      751              
  Lines       91969    92036      +67     
==========================================
+ Hits        34435    34617     +182     
+ Misses      54894    54759     -135     
- Partials     2640     2660      +20     
Files with missing lines Coverage Δ
...perator/controllers/hostedcluster/etcd_recovery.go 26.13% <60.60%> (+26.13%) ⬆️

... and 4 files with indirect coverage changes

Flag Coverage Δ
cmd-support 32.76% <ø> (+0.12%) ⬆️
cpo-hostedcontrolplane 36.77% <ø> (+0.28%) ⬆️
cpo-other 37.73% <ø> (ø)
hypershift-operator 48.23% <60.60%> (+0.29%) ⬆️
other 27.77% <ø> (ø)

Flags with carried forward coverage won't be shown.


Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
hypershift-operator/controllers/hostedcluster/hostedcluster_controller_test.go (1)

6622-6627: ⚡ Quick win

Assert the failed recovery job is gone in the cleanup case.

The recovered-after-failed-job case only checks the condition reason. If cleanup regresses and the stale Job stays behind, this test still passes even though one of the main behaviors in this PR is broken.

Suggested assertion
 	testCases := []struct {
 		name            string
 		objects         []crclient.Object
 		conditions      []metav1.Condition
 		expectedReason  string
 		conditionExists bool
+		expectJobDeleted bool
 	}{
 		{
 			name:            "When failed job exists but etcd recovered it should cleanup job and clear condition",
 			conditions:      []metav1.Condition{staleCondition},
 			objects:         append(healthyEtcdPods(), healthyStatefulSet, failedJob),
 			expectedReason:  hyperv1.AsExpectedReason,
 			conditionExists: true,
+			expectJobDeleted: true,
 		},
 	}
@@
 			if tc.conditionExists {
 				g.Expect(condition).ToNot(BeNil())
 				g.Expect(condition.Reason).To(Equal(tc.expectedReason))
 			} else {
 				g.Expect(condition).To(BeNil())
 			}
+
+			if tc.expectJobDeleted {
+				job := etcdrecoverymanifests.EtcdRecoveryJob(hcpNS)
+				err := client.Get(t.Context(), crclient.ObjectKeyFromObject(job), job)
+				g.Expect(errors2.IsNotFound(err)).To(BeTrue())
+			}
 		})
 	}
 }

Also applies to: 6677-6686

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@hypershift-operator/controllers/hostedcluster/hostedcluster_controller_test.go`
around lines 6622 - 6627, The test cases that use the failedJob fixture (the
case with name "When failed job exists but etcd recovered it should cleanup job
and clear condition" and the similar case around lines 6677-6686) only assert
condition reasons; add an assertion after reconciliation that the failed Job
(failedJob) has been removed—e.g., attempt to Get the Job by failedJob.Name in
the test namespace and assert the client returns NotFound or that listing Jobs
returns zero matching entries. Locate the assertions around
expectedReason/conditionExists in hostedcluster_controller_test.go and add the
cleanup check for failedJob for both test cases so a lingering Job causes the
test to fail.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited)

Review profile: CHILL

Plan: Enterprise

Run ID: d69011b7-ad73-45d5-aa19-6f7fc30bdb73

📥 Commits

Reviewing files that changed from the base of the PR and between 68106f0 and 7f5e921.

📒 Files selected for processing (4)
  • control-plane-operator/hostedclusterconfigoperator/controllers/inplaceupgrader/inplaceupgrader.go
  • control-plane-operator/hostedclusterconfigoperator/controllers/inplaceupgrader/inplaceupgrader_test.go
  • hypershift-operator/controllers/hostedcluster/etcd_recovery.go
  • hypershift-operator/controllers/hostedcluster/hostedcluster_controller_test.go

@openshift-ci-robot openshift-ci-robot added jira/valid-bug Indicates that a referenced Jira bug is valid for the branch this PR is targeting. and removed jira/invalid-bug Indicates that a referenced Jira bug is invalid for the branch this PR is targeting. labels May 4, 2026
@openshift-ci-robot

@vsolanki12: This pull request references Jira Issue OCPBUGS-84577, which is valid. The bug has been moved to the POST state.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (5.0.0) matches configured target version for branch (5.0.0)
  • bug is in the state ASSIGNED, which is one of the valid states (NEW, ASSIGNED, POST)

The bug has been updated to refer to the pull request using the external bug tracker.


@vsolanki12 vsolanki12 force-pushed the OCPBUGS-84577-etcd-recovery-stale-condition branch from d28aec7 to a9c44fb on May 4, 2026 13:57
@openshift-ci-robot

@vsolanki12: This pull request references Jira Issue OCPBUGS-84577, which is valid.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (5.0.0) matches configured target version for branch (5.0.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, POST)

@vsolanki12 vsolanki12 force-pushed the OCPBUGS-84577-etcd-recovery-stale-condition branch 2 times, most recently from 9462d10 to 5873d33 on May 4, 2026 14:07
@vsolanki12
Author

/auto-cc

@openshift-ci openshift-ci Bot requested review from bryan-cox and sdminonne May 5, 2026 05:34
@vsolanki12 vsolanki12 marked this pull request as ready for review May 5, 2026 09:42
@openshift-ci openshift-ci Bot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label May 5, 2026
@openshift-ci openshift-ci Bot requested a review from Nirshal May 5, 2026 09:42
@vsolanki12
Author

@coderabbitai full review

@coderabbitai
Contributor

coderabbitai Bot commented May 5, 2026

✅ Actions performed

Full review triggered.

Contributor

@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (1)
hypershift-operator/controllers/hostedcluster/hostedcluster_controller_test.go (1)

6661-6666: ⚡ Quick win

Assert recovery Job deletion in the “failed job but etcd recovered” case.

The test validates condition reason changes, but it doesn’t verify that cleanup actually removed the failed Job. Adding that assertion would directly protect the main behavior this PR introduces.

Suggested test assertion
 			condition := meta.FindStatusCondition(updatedHC.Status.Conditions, string(hyperv1.EtcdRecoveryActive))
 			if tc.conditionExists {
 				g.Expect(condition).ToNot(BeNil())
 				g.Expect(condition.Reason).To(Equal(tc.expectedReason))
 			} else {
 				g.Expect(condition).To(BeNil())
 			}
+
+			if tc.name == "When failed job exists but etcd recovered it should cleanup job and clear condition" {
+				job := etcdrecoverymanifests.EtcdRecoveryJob(hcpNS)
+				err := client.Get(t.Context(), crclient.ObjectKeyFromObject(job), job)
+				g.Expect(errors2.IsNotFound(err)).To(BeTrue(), "expected failed recovery job to be deleted")
+			}
 		})
 	}
 }

Also applies to: 6719-6725

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In
`@hypershift-operator/controllers/hostedcluster/hostedcluster_controller_test.go`
around lines 6661 - 6666, Add assertions to the two test cases that simulate
"failed job but etcd recovered" to ensure the failed Job resource was actually
deleted: after invoking the reconcile/test execution for the case where objects
include failedJob (referencing the test case with name "When failed job exists
but etcd recovered it should cleanup job and clear condition" and the analogous
case at the other location), query the fake client for the Job (the same
failedJob name/namespace used in the test) and assert it returns NotFound; use
the test's existing fake client/helper methods used elsewhere in
hostedcluster_controller_test.go so the assertion verifies cleanup of failedJob
in addition to the condition change.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited)

Review profile: CHILL

Plan: Enterprise

Run ID: 1c4bd90e-22a0-4d2f-8a27-8b46bb7a4b92

📥 Commits

Reviewing files that changed from the base of the PR and between 68106f0 and 5873d33.

📒 Files selected for processing (2)
  • hypershift-operator/controllers/hostedcluster/etcd_recovery.go
  • hypershift-operator/controllers/hostedcluster/hostedcluster_controller_test.go

@vsolanki12 vsolanki12 force-pushed the OCPBUGS-84577-etcd-recovery-stale-condition branch from 5873d33 to cafae49 on May 5, 2026 10:24
@vsolanki12
Author

@coderabbitai full review

@coderabbitai
Contributor

coderabbitai Bot commented May 5, 2026

✅ Actions performed

Full review triggered.

Contributor

@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (2)
hypershift-operator/controllers/hostedcluster/etcd_recovery.go (1)

65-66: ⚡ Quick win

Avoid hardcoding the expected etcd replica count to 3.

Both health checks gate behavior on ReadyReplicas == 3 && AvailableReplicas == 3. Using the StatefulSet desired replicas keeps this logic correct if replica sizing changes in the future.

♻️ Suggested change
-} else if etcdStatefulSet.Status.ReadyReplicas == 3 && etcdStatefulSet.Status.AvailableReplicas == 3 {
+} else {
+	expectedReplicas := int32(1)
+	if etcdStatefulSet.Spec.Replicas != nil {
+		expectedReplicas = *etcdStatefulSet.Spec.Replicas
+	}
+	if etcdStatefulSet.Status.ReadyReplicas == expectedReplicas && etcdStatefulSet.Status.AvailableReplicas == expectedReplicas {
 		log.Info("etcd recovered despite failed recovery job, cleaning up")
 		...
 		return nil, nil
+	}
 }
-fullyAvailable := etcdStatefulSet.Status.ReadyReplicas == 3 && etcdStatefulSet.Status.AvailableReplicas == 3
+expectedReplicas := int32(1)
+if etcdStatefulSet.Spec.Replicas != nil {
+	expectedReplicas = *etcdStatefulSet.Spec.Replicas
+}
+fullyAvailable := etcdStatefulSet.Status.ReadyReplicas == expectedReplicas &&
+	etcdStatefulSet.Status.AvailableReplicas == expectedReplicas

Also applies to: 125-126

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@hypershift-operator/controllers/hostedcluster/etcd_recovery.go` around lines
65 - 66, Replace the hardcoded "3" replica checks with the StatefulSet's desired
replica count: read the intended replica value from
etcdStatefulSet.Spec.Replicas (dereference the *int32) and use that for
comparisons instead of 3 in the blocks that check
etcdStatefulSet.Status.ReadyReplicas and
etcdStatefulSet.Status.AvailableReplicas (the checks around etcdStatefulSet). Do
the same replacement for the similar check later in the file (the other
ReadyReplicas/AvailableReplicas condition).
hypershift-operator/controllers/hostedcluster/hostedcluster_controller_test.go (1)

6719-6730: ⚡ Quick win

Harden assertions to avoid false positives and brittle name-coupling.

The test currently keys a cleanup assertion off a case-name string and only checks Reason. Add explicit per-case flags (e.g., expectJobDeleted) and assert condition.Status as well to make failures more precise.

✅ Suggested test tightening
 testCases := []struct {
     name            string
     objects         []crclient.Object
     conditions      []metav1.Condition
     expectedReason  string
     conditionExists bool
+    expectJobDeleted bool
 }{
   {
     name:            "When failed job exists but etcd recovered it should cleanup job and clear condition",
     ...
     conditionExists: true,
+    expectJobDeleted: true,
   },
 }

 ...

 if tc.conditionExists {
     g.Expect(condition).ToNot(BeNil())
+    g.Expect(condition.Status).To(Equal(metav1.ConditionFalse))
     g.Expect(condition.Reason).To(Equal(tc.expectedReason))
 } else {
     g.Expect(condition).To(BeNil())
 }
-if tc.name == "When failed job exists but etcd recovered it should cleanup job and clear condition" {
+if tc.expectJobDeleted {
     job := etcdrecoverymanifests.EtcdRecoveryJob(hcpNS)
     err := client.Get(t.Context(), crclient.ObjectKeyFromObject(job), job)
     g.Expect(errors2.IsNotFound(err)).To(BeTrue(), "expected failed recovery job to be deleted")
 }
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In
`@hypershift-operator/controllers/hostedcluster/hostedcluster_controller_test.go`
around lines 6719 - 6730, The test currently couples a job-deletion assertion to
the case name and only checks condition.Reason; add a boolean test-case field
(e.g., tc.expectJobDeleted) and use that flag instead of comparing tc.name to
decide whether to fetch etcdrecoverymanifests.EtcdRecoveryJob and assert
deletion via client.Get; also tighten the condition check by asserting
condition.Status equals an explicit tc.expectedStatus (in addition to
tc.expectedReason) when tc.conditionExists, referencing the existing
meta.FindStatusCondition lookup for hyperv1.EtcdRecoveryActive and the
tc.conditionExists / tc.expectedReason symbols to locate where to add the new
assertions.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited)

Review profile: CHILL

Plan: Enterprise

Run ID: ff88e339-71fb-4394-b3b5-f412fbfc2743

📥 Commits

Reviewing files that changed from the base of the PR and between 0ca85b1 and cafae49.

📒 Files selected for processing (2)
  • hypershift-operator/controllers/hostedcluster/etcd_recovery.go
  • hypershift-operator/controllers/hostedcluster/hostedcluster_controller_test.go

@vsolanki12 vsolanki12 force-pushed the OCPBUGS-84577-etcd-recovery-stale-condition branch 2 times, most recently from cc510cb to a2a7e34 Compare May 5, 2026 11:54
@vsolanki12
Author

/retest

@vsolanki12
Author

@coderabbitai full review

@coderabbitai
Contributor

coderabbitai Bot commented May 7, 2026

✅ Actions performed

Full review triggered.


@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (1)
hypershift-operator/controllers/hostedcluster/hostedcluster_controller_test.go (1)

6640-6677: ⚡ Quick win

Add a regression case for StatefulSet read failure in failed-job path.

Current cases validate healthy/unhealthy outcomes, but not the new non-NotFound error branch (production Line 61-64 in reconcileETCDMemberRecovery). A dedicated case would lock in that behavior.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In
`@hypershift-operator/controllers/hostedcluster/hostedcluster_controller_test.go`
around lines 6640 - 6677, Add a regression test case in the testCases slice that
covers the non-NotFound StatefulSet read error path in
reconcileETCDMemberRecovery: include failedJob and the etcd pod objects (e.g.,
use healthyEtcdPods()), ensure the test harness simulates a GET error for the
StatefulSet (not a NotFound) instead of returning
unhealthyStatefulSet/healthyStatefulSet, and assert the behavior you expect for
that branch (e.g., expectedReason remains hyperv1.EtcdRecoveryJobFailedReason
and expectJobDeleted is false). Reference the existing identifiers failedJob,
reconcileETCDMemberRecovery, healthyEtcdPods,
unhealthyStatefulSet/healthyStatefulSet and add the new case to the testCases
array so the fake client error path is exercised.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited)

Review profile: CHILL

Plan: Enterprise

Run ID: 6fb89972-970a-4f3f-b1e7-2cadf40e46f3

📥 Commits

Reviewing files that changed from the base of the PR and between 6b39d47 and a2a7e34.

📒 Files selected for processing (2)
  • hypershift-operator/controllers/hostedcluster/etcd_recovery.go
  • hypershift-operator/controllers/hostedcluster/hostedcluster_controller_test.go

@vsolanki12 vsolanki12 force-pushed the OCPBUGS-84577-etcd-recovery-stale-condition branch from a2a7e34 to 01953c1 Compare May 7, 2026 15:21

@sdminonne sdminonne left a comment


Could you follow up?

etcdRecoveryActiveCondition.Status = metav1.ConditionFalse
etcdRecoveryActiveCondition.Reason = hyperv1.AsExpectedReason
etcdRecoveryActiveCondition.LastTransitionTime = r.now()
meta.SetStatusCondition(&hcluster.Status.Conditions, etcdRecoveryActiveCondition)
Contributor

What about etcdRecoveryActiveCondition.Message in this path?
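
One way to address this, sketched with a trimmed stand-in for `metav1.Condition` (field layout and message wording are illustrative, not the PR's actual code):

```go
package main

import "fmt"

// condition is a trimmed, hypothetical stand-in for metav1.Condition.
type condition struct {
	Type, Reason, Message string
	StatusTrue            bool
}

// clearRecoveryCondition flips the condition to False/AsExpected and also
// resets Message, so the stale "manual intervention" text cannot linger
// in the HostedCluster status after etcd recovers.
func clearRecoveryCondition(c *condition) {
	c.StatusTrue = false
	c.Reason = "AsExpected"
	c.Message = "etcd cluster is healthy" // hypothetical wording
}

func main() {
	c := condition{
		Type:       "EtcdRecoveryActive",
		StatusTrue: true,
		Reason:     "EtcdRecoveryJobFailed",
		Message:    "Error in Etcd Recovery job: the Etcd cluster requires manual intervention.",
	}
	clearRecoveryCondition(&c)
	fmt.Println(c.Reason, "|", c.Message)
}
```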

Author


Thanks, I have made all the suggested changes in etcd_recovery.go and hostedcluster_controller_test.go.

Type: string(hyperv1.EtcdRecoveryActive),
Status: metav1.ConditionFalse,
Reason: hyperv1.AsExpectedReason,
ObservedGeneration: hcluster.Generation,
Contributor


What about etcdRecoveryActiveCondition.Message in this path?

Author


Thanks, I have made all the suggested changes in etcd_recovery.go and hostedcluster_controller_test.go.

var pods []crclient.Object
for i := 0; i < 3; i++ {
pods = append(pods, &corev1.Pod{
ObjectMeta: metav1.ObjectMeta{
Contributor


Could you add a case where RestartCount > 0 and the container is in the Running state,
to cover the case where a pod recovers?
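
The distinction such a case covers can be sketched with a simplified stand-in for `corev1.ContainerStatus` (type and field names are illustrative only):

```go
package main

import "fmt"

// containerStatus is a trimmed, hypothetical stand-in for corev1.ContainerStatus.
type containerStatus struct {
	running      bool
	restartCount int32
}

// podRecovered treats a container that restarted but is Running again as
// healthy: RestartCount > 0 on its own is not a failure signal.
func podRecovered(statuses []containerStatus) bool {
	for _, cs := range statuses {
		if !cs.running {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(podRecovered([]containerStatus{{running: true, restartCount: 2}}))  // true: restarted but Running
	fmt.Println(podRecovered([]containerStatus{{running: false, restartCount: 5}})) // false: still not Running
}
```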

Author


Thanks, I have made all the suggested changes in etcd_recovery.go and hostedcluster_controller_test.go.

@sdminonne
Contributor

/cc @sdminonne

@openshift-ci openshift-ci Bot requested a review from sdminonne May 7, 2026 17:13
…when etcd is healthy

When the etcd recovery job fails but etcd self-heals, the
EtcdRecoveryJobFailed condition was never cleared. This caused the
OpenShift Console to display a stale error message even when the
cluster was fully healthy.

This fix adds two checks:
- When a failed recovery job exists but etcd StatefulSet is fully
  available (3/3), clean up the job and clear the condition.
  Transient API errors are propagated instead of silently falling
  through to the failure path.
- When no failing etcd pods exist and etcd is healthy, clear any
  stale EtcdRecoveryJobFailed condition.

Signed-off-by: Vimal Solanki <vsolanki@redhat.com>
@hypershift-jira-solve-ci

I have the complete root cause for both failures. Both are caused by the same upstream infrastructure issue — quay-proxy.ci.openshift.org returning 502 Bad Gateway errors. Here's the report:

Test Failure Analysis Complete

Job Information

Test Failure Analysis

Error

verify-deps: Pod scheduling timeout. The multiarch scheduling gate was never removed because
image inspection failed with: "fetching blob: received unexpected HTTP status: 502 Bad Gateway"
(from quay-proxy.ci.openshift.org). The pod remained gated for 30 minutes until Prow killed it.

images: Pod pending timeout. The test container was stuck in ImagePullBackOff pulling
"quay-proxy.ci.openshift.org/openshift/ci:ci_ci-operator_latest" with error: "received
unexpected HTTP status: 502 Bad Gateway". The pod was pending for ~30 minutes until Prow
killed it.

Summary

Both job failures are caused by a transient infrastructure outage of quay-proxy.ci.openshift.org, which was returning HTTP 502 Bad Gateway errors on 2026-05-07 around 17:35 UTC. The verify-deps job never even got scheduled — the multiarch scheduling gate controller could not inspect the container images to determine architecture affinity (it encountered the 502 error twice), so the pod stayed in SchedulingGated state for 30 minutes until Prow's scheduling timeout killed it. The images job got past the scheduling gate but its test container could not pull the ci-operator image (quay-proxy.ci.openshift.org/openshift/ci:ci_ci-operator_latest), entering ImagePullBackOff until Prow's pending timeout killed it. Neither failure is related to the PR's code changes.

Root Cause

The root cause is a transient outage of the quay-proxy.ci.openshift.org container registry proxy, which serves as the image registry for OpenShift CI. At the time these jobs ran (~17:35 UTC on 2026-05-07), the proxy was returning HTTP 502 Bad Gateway responses.

This caused two distinct but related failure modes:

  1. verify-deps (Pod scheduling timeout): The multiarch.openshift.io scheduling gate controller must inspect container images to determine their supported architectures before allowing pod scheduling. It attempted image inspection twice (image-inspect-error-count: 2) and both attempts failed with "fetching blob: received unexpected HTTP status: 502 Bad Gateway". Because the gate was never removed (scheduling-gate: gated, node-affinity: not-set), the pod remained in SchedulingGated state from 17:35:29Z until Prow's 30-minute scheduling timeout killed it at 18:05:29Z.

  2. images (Pod pending timeout): This job's scheduling gate was successfully removed (scheduling-gate: removed), and the pod was scheduled onto a node at 17:35:55Z. However, the test container could not pull its image quay-proxy.ci.openshift.org/openshift/ci:ci_ci-operator_latest due to the same 502 errors from the registry proxy. The container entered ImagePullBackOff and the pod remained in Pending phase until Prow's pending timeout killed it at ~18:05:55Z.

These failures are entirely infrastructure-related and have nothing to do with the PR's code changes (which modify etcd_recovery.go and hostedcluster_controller_test.go).

Recommendations
  1. Retest the PR — Simply re-trigger both jobs with /retest on the PR. The quay-proxy outage was transient.
  2. No code changes needed — The PR's changes to etcd_recovery.go and hostedcluster_controller_test.go are not related to these failures.
  3. If retests continue to fail with the same 502 errors, escalate to the OpenShift CI infrastructure team (DPTP) as a quay-proxy.ci.openshift.org availability issue.
Evidence

| Evidence | Detail |
| --- | --- |
| verify-deps prowjob status | state: error, description: "Pod scheduling timeout." |
| verify-deps scheduling gate | multiarch.openshift.io/scheduling-gate: gated (never removed) |
| verify-deps node affinity | multiarch.openshift.io/node-affinity: not-set (never determined) |
| verify-deps image inspect error | "fetching blob: received unexpected HTTP status: 502 Bad Gateway" (2 attempts) |
| verify-deps pod lifecycle | Created 17:35:29Z → Deleted 18:05:29Z (30 min timeout, never scheduled) |
| images prowjob status | state: error, description: "Pod pending timeout." |
| images container state | test container in ImagePullBackOff |
| images pull error | "reading manifest sha256:3dbfb86… in quay-proxy.ci.openshift.org/openshift/ci: received unexpected HTTP status: 502 Bad Gateway" |
| images failed image | quay-proxy.ci.openshift.org/openshift/ci:ci_ci-operator_latest |
| images pod lifecycle | Created 17:35:28Z → Deleted 19:20:55Z (pod pending, test container never started) |
| Common failure source | quay-proxy.ci.openshift.org returning HTTP 502 Bad Gateway |
| PR code changes | Only etcd_recovery.go and hostedcluster_controller_test.go — unrelated to CI infra |

@vsolanki12
Author

/retest

@openshift-ci
Contributor

openshift-ci Bot commented May 8, 2026

@vsolanki12: all tests passed!

Full PR test history. Your PR dashboard.

Details



Labels

area/control-plane-operator Indicates the PR includes changes for the control plane operator - in an OCP release area/hypershift-operator Indicates the PR includes changes for the hypershift operator and API - outside an OCP release jira/valid-bug Indicates that a referenced Jira bug is valid for the branch this PR is targeting. jira/valid-reference Indicates that this PR references a valid Jira ticket of any type.
