
NO-JIRA: harden watch loop to prevent thread exhaustion#30956

Open
neisw wants to merge 4 commits into openshift:main from neisw:resource-watch-fixes

Conversation

@neisw
Contributor

@neisw neisw commented Apr 3, 2026

Continuation of #30944

Handle watch error/bookmark events safely, always stop watch streams, and add context-aware retry backoff so reconnects do not spin and accumulate threads.

Made-with: Cursor

Summary by CodeRabbit

  • New Features

    • Added automatic retry with bounded exponential backoff for resource observation to improve recoverability.
  • Bug Fixes

    • Improved error handling for watch events and ensured watches are reliably stopped to prevent leaks.
    • Better classification of terminal vs. retryable failures and more transparent retry logging/timing.
  • Tests

    • Added unit tests covering watch behavior, retry/backoff logic, and cancellation handling.

@openshift-ci-robot

Pipeline controller notification
This repo is configured to use the pipeline controller. Second-stage tests will be triggered either automatically or after lgtm label is added, depending on the repository configuration. The pipeline controller will automatically detect which contexts are required and will utilize /test Prow commands to trigger the second stage.

For optional jobs, comment /test ? to see a list of all defined jobs. To trigger manually all jobs from second stage use /pipeline required command.

This repository is configured in: automatic mode

@openshift-ci-robot openshift-ci-robot added the jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. label Apr 3, 2026
@openshift-ci-robot

@neisw: This pull request explicitly references no jira issue.

Details

In response to this:

Continuation of #30944

Handle watch error/bookmark events safely, always stop watch streams, and add context-aware retry backoff so reconnects do not spin and accumulate threads.

Made-with: Cursor

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@coderabbitai

coderabbitai Bot commented Apr 3, 2026

Walkthrough

Adds sentinel errors, a bounded exponential retry/backoff and wait helpers to ObserveResource, makes listAndWatchResource stricter about watch events (bookmarks ignored, closed channel → error, watch.Error validated as metav1.Status), ensures watch.Stop() via defer, and adds unit tests for watch and retry behaviors.

Changes

Cohort / File(s) Summary
Resource watch implementation
pkg/resourcewatch/observe/observe.go
Introduces sentinel errors (errWatchClosed, errWatchErrorEvent, errUnexpectedObject); adds bounded exponential backoff (newRetryBackoff), waitForRetry, and retryReason; ObserveResource now retries list/watch failures (NotFound → fixed 5s; others → exponential+jitter), resets backoff after a sufficiently long watch; listAndWatchResource now defers resourceWatch.Stop(), returns errWatchClosed on closed result channel, ignores watch.Bookmark, enforces *metav1.Status for watch.Error (wraps status), and returns errUnexpectedObject on unexpected types.
Resource watch tests
pkg/resourcewatch/observe/observe_test.go
Adds test helpers (fakeNamespaceableResource, trackingWatch) and tests: handles watch.Error with Status → returns errWatchErrorEvent and calls Stop(); emits ResourceObservation on Added and stops on context cancel; closed watch channel → errWatchClosed; validates exponential backoff growth/cap and reset; waitForRetry returns false when context canceled.

Sequence Diagram(s)

sequenceDiagram
    participant Caller as Caller / ObserveResource
    participant WatchLoop as listAndWatchResource
    participant K8s as Kubernetes API (List/Watch)
    participant Backoff as Backoff / waitForRetry

    Caller->>WatchLoop: call listAndWatchResource(ctx, resource...)
    WatchLoop->>K8s: List() then Watch()
    K8s-->>WatchLoop: events (Added / Modified / Deleted / Bookmark / Error / closed)
    alt Added / Modified / Deleted
        WatchLoop-->>Caller: emit ResourceObservation
        WatchLoop-->>WatchLoop: continue reading events
    else Bookmark
        WatchLoop-->>WatchLoop: ignore
    else Error event
        WatchLoop-->>WatchLoop: validate *metav1.Status -> return errWatchErrorEvent (with status)
    else closed channel
        WatchLoop-->>WatchLoop: return errWatchClosed
    end
    WatchLoop-->>Caller: return (nil or error)
    alt non-terminal error (not ctx done / not errUnexpectedObject)
        Caller->>Backoff: compute delay (NotFound = 5s | exponential+jitter)
        Backoff->>Caller: waitForRetry(ctx, delay)
        Caller->>WatchLoop: retry
    else terminal (ctx done or errUnexpectedObject)
        Caller-->>Caller: return error
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks | ✅ 8 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)

Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 9.09% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
Test Structure And Quality ❓ Inconclusive Unable to access test file for Ginkgo structure assessment. Verify the file path 'pkg/resourcewatch/observe/observe_test.go' exists and is accessible in the repository.
✅ Passed checks (8 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title accurately describes the main objective of the changeset: hardening the watch loop with retry backoff infrastructure and error handling to prevent thread exhaustion from reconnection spins.
Stable And Deterministic Test Names ✅ Passed All test names are stable and deterministic with no dynamic content. Test names are descriptive and clearly indicate what each test validates. Dynamic test data is appropriately placed within test bodies, not in test names.
Microshift Test Compatibility ✅ Passed Pull request adds standard Go unit tests with Test* naming, not Ginkgo e2e tests. MicroShift compatibility check is not applicable.
Single Node Openshift (Sno) Test Compatibility ✅ Passed The pull request adds only standard Go unit tests using the testing package, not Ginkgo e2e tests. The tests follow Go unit test conventions and are located in an internal package, not in e2e test directories.
Topology-Aware Scheduling Compatibility ✅ Passed Pull request introduces only low-level watch loop retry/backoff utility code with no deployment manifests, scheduling constraints, or topology assumptions. Changes are topology-agnostic and work uniformly across all OpenShift configurations.
Ote Binary Stdout Contract ✅ Passed The PR modifies observe.go and observe_test.go which are standard unit test files using Go's testing package, not OTE suite code. Neither file contains process-level code that writes to stdout.
Ipv6 And Disconnected Network Test Compatibility ✅ Passed Test file contains only standard Go unit tests with mocked interfaces, not Ginkgo e2e tests. No hardcoded IPv4 addresses or external network connectivity assumptions present.



Comment @coderabbitai help to get the list of available commands and usage tips.

@openshift-ci openshift-ci Bot requested review from deads2k and sjenning April 3, 2026 14:27
@openshift-ci openshift-ci Bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Apr 3, 2026

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@pkg/resourcewatch/observe/observe.go`:
- Around line 73-74: The code in ObserveResource is resetting retryAttempt to 0
when the watch returns nil/ends, which causes immediate reconnects if the watch
stream was closed (ResultChan() closed) rather than a clean end; modify the
logic in the watch loop (observe.go, functions handling the watch and the block
that currently sets retryAttempt = 0) to only reset retryAttempt when the watch
truly ended cleanly (e.g., explicit stop signal or terminal condition), and
treat a closed ResultChan() (receive with ok == false) as a retryable error path
that does NOT reset retryAttempt but increments backoff and returns an error so
backoff applies; apply the same change to the analogous code paths around the
other reset occurrences (the block referenced by lines ~150-153) and add a small
regression test that simulates an already-closed ResultChan() to assert backoff
is not reset.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 66e3ac20-b4e0-4ad4-935e-ed3260c92694

📥 Commits

Reviewing files that changed from the base of the PR and between 85dec4b and 3836525.

📒 Files selected for processing (2)
  • pkg/resourcewatch/observe/observe.go
  • pkg/resourcewatch/observe/observe_test.go

Comment thread pkg/resourcewatch/observe/observe.go Outdated
@openshift-ci-robot

Scheduling required tests:
/test e2e-aws-csi
/test e2e-aws-ovn-fips
/test e2e-aws-ovn-microshift
/test e2e-aws-ovn-microshift-serial
/test e2e-aws-ovn-serial-1of2
/test e2e-aws-ovn-serial-2of2
/test e2e-gcp-csi
/test e2e-gcp-ovn
/test e2e-gcp-ovn-upgrade
/test e2e-metal-ipi-ovn-ipv6
/test e2e-vsphere-ovn
/test e2e-vsphere-ovn-upi

@neisw
Contributor Author

neisw commented Apr 5, 2026

/payload-job periodic-ci-openshift-release-main-nightly-4.22-e2e-gcp-ovn-rt

@openshift-ci
Contributor

openshift-ci Bot commented Apr 5, 2026

@neisw: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-main-nightly-4.22-e2e-gcp-ovn-rt

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/3a7af320-30eb-11f1-8069-b05fa5978c62-0

@openshift-ci-robot

Scheduling required tests:
/test e2e-aws-csi
/test e2e-aws-ovn-fips
/test e2e-aws-ovn-microshift
/test e2e-aws-ovn-microshift-serial
/test e2e-aws-ovn-serial-1of2
/test e2e-aws-ovn-serial-2of2
/test e2e-gcp-csi
/test e2e-gcp-ovn
/test e2e-gcp-ovn-upgrade
/test e2e-metal-ipi-ovn-ipv6
/test e2e-vsphere-ovn
/test e2e-vsphere-ovn-upi

@neisw
Contributor Author

neisw commented Apr 13, 2026

/retest-required

3 similar comments
@neisw
Contributor Author

neisw commented Apr 15, 2026

/retest-required

@neisw
Contributor Author

neisw commented Apr 15, 2026

/retest-required

@neisw
Contributor Author

neisw commented Apr 16, 2026

/retest-required

@openshift-trt

openshift-trt Bot commented Apr 17, 2026

Job Failure Risk Analysis for sha: ba94dc5

Job Name Failure Risk
pull-ci-openshift-origin-main-e2e-gcp-ovn Medium
[Jira:Node][sig-node] Node non-cnv swap configuration should reject user override of swap settings via KubeletConfig API [OCP-86395] [Suite:openshift/conformance/parallel]
This test has passed 96.68% of 2412 runs on release 5.0 [Overall] in the last week.
pull-ci-openshift-origin-main-e2e-metal-ipi-ovn-ipv6 Medium
[Jira:Node][sig-node] Node non-cnv swap configuration should reject user override of swap settings via KubeletConfig API [OCP-86395] [Suite:openshift/conformance/parallel]
This test has passed 96.68% of 2412 runs on release 5.0 [Overall] in the last week.
pull-ci-openshift-origin-main-e2e-vsphere-ovn Medium
[Jira:Node][sig-node] Node non-cnv swap configuration should reject user override of swap settings via KubeletConfig API [OCP-86395] [Suite:openshift/conformance/parallel]
This test has passed 96.68% of 2412 runs on release 5.0 [Overall] in the last week.
pull-ci-openshift-origin-main-e2e-vsphere-ovn-upi Low
[sig-instrumentation] Prometheus [apigroup:image.openshift.io] when installed on the cluster shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured [Early][apigroup:config.openshift.io] [Suite:openshift/conformance/parallel]
This test has passed 0.00% of 13 runs on release 5.0 [Architecture:amd64 FeatureSet:default Installer:upi JobTier:standard Network:ovn NetworkStack:ipv4 OS:rhcos9 Owner:eng Platform:vsphere Procedure:none SecurityMode:default Topology:ha Upgrade:none] in the last week.

@neisw
Contributor Author

neisw commented Apr 17, 2026

/payload-job periodic-ci-openshift-release-main-nightly-5.0-e2e-gcp-ovn-rt

@openshift-ci
Contributor

openshift-ci Bot commented Apr 17, 2026

@neisw: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-main-nightly-5.0-e2e-gcp-ovn-rt

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/8a2f3510-3a6a-11f1-9128-c2c1994f1ca9-0

Member

@petr-muller petr-muller left a comment


LGTM but I wonder if we could use k8s.io/apimachinery/pkg/util/wait for the retry/backoff/jitter, see inline but feel free to unhold

/lgtm
/approve
/hold

Comment thread pkg/resourcewatch/observe/observe.go Outdated

if err := listAndWatchResource(ctx, log, resourceClient, gvr, observedResources, resourceC); err != nil {
log.Error(err, "failed to list and watch resource")
if errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
Member


Retry with backoff & jitter felt like something k8s.io/apimachinery/pkg/util/wait could provide so I asked an LLM and it told me the following:

The PR could use wait.Backoff like this:

  func ObserveResource(ctx context.Context, log logr.Logger, client *dynamic.DynamicClient, gvr schema.GroupVersionResource, resourceC chan<- *ResourceObservation) {
      observedResources := make(map[types.UID]*resourceMeta)

      backoff := wait.Backoff{
          Duration: 500 * time.Millisecond, // minRetryDelay
          Factor:   2.0,
          Jitter:   0.25,                   // ±25% jitter built-in
          Steps:    8,                      // ~500ms -> 30s in 8 steps
          Cap:      30 * time.Second,       // maxRetryDelay
      }

      for {
          select {
          case <-ctx.Done():
              return
          default:
          }

          err := listAndWatchResource(ctx, log, resourceClient, gvr, observedResources, resourceC)
          if err == nil {
              backoff = resetBackoff() // Reset on clean cycle
              continue
          }

          // Handle NotFound specially
          delay := notFoundRetryDelay
          if !apierrors.IsNotFound(err) {
              delay = backoff.Step()
          }

        // Wait out the delay, aborting promptly if the context is canceled
        // (a bare <-time.After(delay) would ignore cancellation).
        select {
        case <-ctx.Done():
            return
        case <-time.After(delay):
        }
      }
  }

@openshift-ci openshift-ci Bot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Apr 20, 2026
@openshift-ci openshift-ci Bot added the lgtm Indicates that a PR is ready to be merged. label Apr 20, 2026
@openshift-ci openshift-ci Bot removed the lgtm Indicates that a PR is ready to be merged. label Apr 20, 2026
@openshift-ci-robot

@neisw: This pull request explicitly references no jira issue.

Details

In response to this:

Continuation of #30944

Handle watch error/bookmark events safely, always stop watch streams, and add context-aware retry backoff so reconnects do not spin and accumulate threads.

Made-with: Cursor

Summary by CodeRabbit

Release Notes

  • New Features

  • Added automatic retry mechanism with exponential backoff for resource observation failures

  • Bug Fixes

  • Improved error handling and resilience during resource watch operations

  • Enhanced resource lifecycle management to ensure proper cleanup

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@petr-muller
Member

/lgtm

@openshift-ci openshift-ci Bot added the lgtm Indicates that a PR is ready to be merged. label Apr 20, 2026
@openshift-ci
Contributor

openshift-ci Bot commented Apr 20, 2026

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: neisw, petr-muller

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Details Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@pkg/resourcewatch/observe/observe.go`:
- Around line 85-88: The backoff reset code near backoff = newRetryBackoff() is
unreachable because listAndWatchResource now always returns a non-nil error
(e.g., errWatchClosed), causing the err == nil branch to never run; fix by
implementing a sustained-success reset: add a tracked timestamp or counter
(e.g., lastErrorAt time.Time or successCount int) scoped near the retry loop in
listAndWatchResource and update it on errors and on each successful event; then
when successCount >= N or time.Since(lastErrorAt) >= healthyPeriod, call backoff
= newRetryBackoff() to reset; ensure you update code paths that return
errWatchClosed so they also update lastErrorAt/successCount appropriately and
remove the dead unreachable branch or keep it only as a safeguard.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml

Review profile: CHILL

Plan: Pro Plus

Run ID: 82d89b05-5910-49cf-8c5e-8244a39e82f8

📥 Commits

Reviewing files that changed from the base of the PR and between ba94dc5 and 05d57a6.

📒 Files selected for processing (2)
  • pkg/resourcewatch/observe/observe.go
  • pkg/resourcewatch/observe/observe_test.go
✅ Files skipped from review due to trivial changes (1)
  • pkg/resourcewatch/observe/observe_test.go

Comment thread pkg/resourcewatch/observe/observe.go
@openshift-ci openshift-ci Bot removed the lgtm Indicates that a PR is ready to be merged. label Apr 20, 2026
@openshift-ci
Contributor

openshift-ci Bot commented Apr 20, 2026

New changes are detected. LGTM label has been removed.

Copy link
Copy Markdown

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@pkg/resourcewatch/observe/observe.go`:
- Around line 63-82: The code resets backoff based on watchStart which is set
before calling listAndWatchResource, so slow/failing List() calls can
incorrectly count as a "healthy" watch; change the logic so the healthy-period
timer starts only after a successful Watch() (e.g., move the watchStart
timestamp into listAndWatchResource at the point client.Watch() succeeds) or
modify listAndWatchResource to return an additional boolean (e.g., enteredWatch
bool) indicating it reached the steady watch phase, then in the caller use
enteredWatch and time.Since(watchStart) (or just the boolean plus duration
measured inside listAndWatchResource) to decide whether to reset backoff
(references: listAndWatchResource, watchStart, client.Watch, maxRetryDelay,
backoff, newRetryBackoff).
- Around line 161-176: The type assertions for metav1.Status and
unstructured.Unstructured can succeed for typed-nil values; after asserting
observation.Object to (*metav1.Status) and (*unstructured.Unstructured)
(variables status and object) you must also check that the resulting pointer is
not nil before dereferencing; if nil, return fmt.Errorf("%w: %T",
errUnexpectedObject, observation.Object) (or similar) instead of proceeding to
access status.Reason/status.Message or object fields, ensuring both the
watch.Error branch and the Added/Modified/Deleted handling guard against
typed-nil objects.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml

Review profile: CHILL

Plan: Pro Plus

Run ID: 9b708878-474d-48d6-9c2a-33896f44122c

📥 Commits

Reviewing files that changed from the base of the PR and between 05d57a6 and ddaf89e.

📒 Files selected for processing (1)
  • pkg/resourcewatch/observe/observe.go

Comment on lines +63 to +82
watchStart := time.Now()
err := listAndWatchResource(ctx, log, resourceClient, gvr, observedResources, resourceC)
if err == nil {
continue
}

if errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
return
}
if errors.Is(err, errUnexpectedObject) {
log.Error(err, "terminal resource watch failure")
return
}

// If the watch ran for a healthy period before failing (e.g. a normal
// watch expiration after minutes of successful operation), reset the
// backoff so the next retry starts quickly.
if time.Since(watchStart) >= maxRetryDelay {
backoff = newRetryBackoff()
}

⚠️ Potential issue | 🟠 Major

Reset the backoff only after a healthy watch, not after a slow failing list.

watchStart is captured before listAndWatchResource(), so a List() that stalls for 30s+ and then errors will satisfy the "healthy period" check and drop the next retry back to the minimum delay. That weakens the new backoff exactly when the apiserver/network is unhealthy. Consider starting this timer only after client.Watch() succeeds, or have listAndWatchResource report whether it actually entered a steady watch phase before resetting.


Comment on lines +161 to +176
case watch.Error:
status, ok := observation.Object.(*metav1.Status)
if !ok {
return fmt.Errorf("%w: %T", errUnexpectedObject, observation.Object)
}
return fmt.Errorf("%w: reason=%s message=%s", errWatchErrorEvent, status.Reason, status.Message)
case watch.Added, watch.Modified, watch.Deleted:
// handled below
default:
log.Info("Unhandled watch event", "type", observation.Type)
continue
}

object, ok := observation.Object.(*unstructured.Unstructured)
if !ok {
return fmt.Errorf("%w: %T", errUnexpectedObject, observation.Object)

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🌐 Web query:

In Go, if an interface holds a typed nil pointer like (*metav1.Status)(nil), does a type assertion to *metav1.Status return ok=true with a nil result?

💡 Result:

Yes, a type assertion to *metav1.Status on an interface holding (*metav1.Status)(nil) returns ok=true with a nil result.
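This behavior is easy to demonstrate with any pointer type; metav1.Status is not required:

```go
package main

import "fmt"

type status struct{ Reason string }

func main() {
	var p *status           // typed nil pointer
	var obj interface{} = p // interface holding a typed nil, not a nil interface
	s, ok := obj.(*status)  // the assertion succeeds...
	// ...so ok alone does not prove s is safe to dereference.
	fmt.Println(ok, s == nil) // true true
}
```

Hence the suggested `!ok || status == nil` guard below.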


Guard typed-nil watch objects before dereferencing them.

In Go, a type assertion on an interface holding a typed nil pointer like (*metav1.Status)(nil) will return ok=true with a nil result. The current code checks only ok but not whether the asserted value is nil. If observation.Object contains a typed nil, accessing fields like status.Reason or object properties would panic. Add nil checks after each successful type assertion:

Proposed fix
 			case watch.Error:
 				status, ok := observation.Object.(*metav1.Status)
-				if !ok {
+				if !ok || status == nil {
 					return fmt.Errorf("%w: %T", errUnexpectedObject, observation.Object)
 				}
 				return fmt.Errorf("%w: reason=%s message=%s", errWatchErrorEvent, status.Reason, status.Message)
 			case watch.Added, watch.Modified, watch.Deleted:
 				// handled below
 			default:
 				log.Info("Unhandled watch event", "type", observation.Type)
 				continue
 			}
 
 			object, ok := observation.Object.(*unstructured.Unstructured)
-			if !ok {
+			if !ok || object == nil {
 				return fmt.Errorf("%w: %T", errUnexpectedObject, observation.Object)
 			}

@neisw
Contributor Author

neisw commented Apr 21, 2026

/test images

@openshift-merge-bot
Contributor

Scheduling required tests:
/test e2e-aws-csi
/test e2e-aws-ovn-fips
/test e2e-aws-ovn-microshift
/test e2e-aws-ovn-microshift-serial
/test e2e-aws-ovn-serial-1of2
/test e2e-aws-ovn-serial-2of2
/test e2e-gcp-csi
/test e2e-gcp-ovn
/test e2e-gcp-ovn-upgrade
/test e2e-metal-ipi-ovn-ipv6
/test e2e-vsphere-ovn
/test e2e-vsphere-ovn-upi

@neisw
Contributor Author

neisw commented Apr 21, 2026

/test e2e-aws-ovn-serial-1of2

@openshift-ci
Contributor

openshift-ci Bot commented Apr 21, 2026

@neisw: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name Commit Details Required Rerun command
ci/prow/e2e-aws-ovn-microshift ddaf89e link true /test e2e-aws-ovn-microshift
ci/prow/e2e-aws-ovn-microshift-serial ddaf89e link true /test e2e-aws-ovn-microshift-serial

Full PR test history. Your PR dashboard.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.


Labels

approved Indicates a PR has been approved by an approver from all required OWNERS files. do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. jira/valid-reference Indicates that this PR references a valid Jira ticket of any type.
