Merged
5 changes: 2 additions & 3 deletions safe_message_handler/workflow.go
@@ -292,16 +292,15 @@ func (cm *ClusterManager) run(ctx workflow.Context) (ClusterManagerResult, error
 	cm.logger.Info("Cluster started")
 	for {
 		selector := workflow.NewSelector(ctx)
-		shouldShutdown := false
 		selector.AddReceive(cm.shutdownCh, func(c workflow.ReceiveChannel, _ bool) {
 			c.Receive(ctx, nil)
-			shouldShutdown = true
+			cm.state.ClusterShutdown = true
 		})
 		selector.AddFuture(workflow.NewTimer(ctx, cm.sleepInterval), func(f workflow.Future) {
 			cm.performHealthCheck(ctx)
 		})
 		selector.Select(ctx)
-		if shouldShutdown {
+		if cm.state.ClusterShutdown {
 			break
Comment on lines 295 to 304
Copilot AI Apr 17, 2026


Consider adding a regression test to ensure updates are rejected after ShutdownCluster is signaled. This change makes cm.state.ClusterShutdown the source of truth, but there’s currently no unit test asserting that an AssignNodesToJobs/DeleteJob update attempted after shutdown returns the expected error (especially while the workflow is draining via workflow.AllHandlersFinished).

 		}
 		if cm.shouldContinueAsNew(ctx) {