Improved pod lifecycle management. #27
Conversation
if the pod's replica count is being changed, to avoid excessive requests during processing when the event is pulled off the work queue.
Pull request overview
This PR enhances pod lifecycle management by preventing redundant deployment events during replica set changes. The implementation correlates deployment metadata with pod events to avoid sending create/delete notifications when pods are simply being scaled rather than truly deployed or decommissioned.
Changes:
- Added deployment existence checks to differentiate between pod deletions from scaling vs. actual deployment removal
- Implemented a best-effort cache to prevent duplicate deployment notifications
- Updated RBAC permissions to allow querying deployment resources
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| internal/controller/controller.go | Adds deployment existence validation, caching mechanism, and enhanced pod event filtering to distinguish scaling from deployment lifecycle events |
| deploy/manifest.yaml | Extends RBAC permissions to enable deployment resource queries |
```go
// Check if the parent deployment still exists
// If it does, this is just a scale-down event, skip it
deploymentName := getDeploymentName(pod)
if deploymentName != "" && c.deploymentExists(ctx, pod.Namespace, deploymentName) {
```
Making an API call to check deployment existence for every pod deletion could be expensive at scale. Consider using a deployment informer with a local cache instead of direct API calls, especially since the controller already uses informers for pods.
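The cache-before-API pattern the review suggests could look like the following sketch. The `lister` and `apiClient` interfaces here are stand-ins for client-go's deployment lister and the Kubernetes API client; all names are illustrative, not the controller's real API:

```go
package main

import "fmt"

// lister stands in for an informer-backed local cache.
type lister interface {
	// Get reports whether the deployment is in the cache, and whether
	// the cache has finished its initial sync.
	Get(namespace, name string) (found, synced bool)
}

// apiClient stands in for a direct call to the Kubernetes API server.
type apiClient interface {
	DeploymentExists(namespace, name string) bool
}

// deploymentExists prefers the local cache; only when the cache has not
// yet synced does it fall back to the (expensive) direct API call.
func deploymentExists(l lister, api apiClient, namespace, name string) bool {
	if found, synced := l.Get(namespace, name); synced {
		return found
	}
	return api.DeploymentExists(namespace, name)
}

// cacheLister is a toy in-memory cache keyed by "namespace/name".
type cacheLister struct {
	entries map[string]bool
	synced  bool
}

func (c cacheLister) Get(namespace, name string) (bool, bool) {
	return c.entries[namespace+"/"+name], c.synced
}

// countingAPI counts how often the fallback path is taken.
type countingAPI struct{ calls int }

func (a *countingAPI) DeploymentExists(namespace, name string) bool {
	a.calls++
	return false
}

func main() {
	cache := cacheLister{entries: map[string]bool{"default/web": true}, synced: true}
	api := &countingAPI{}
	fmt.Println(deploymentExists(cache, api, "default", "web")) // true
	fmt.Println(api.calls)                                      // 0: cache answered, no API call
}
```

With a synced cache the API server is never contacted, which is the point of the suggestion: per-pod deletion checks become local map lookups.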
Yeah, this is a good idea. I will do this as a follow-up PR.
This PR improves pod lifecycle management.
The main improvement is that deployment metadata is tied into pod processing to avoid sending create/delete events when a pod's ReplicaSet is being scaled. This is done by correlating deployment status with the pod events.
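One way to correlate a pod with its parent deployment is via naming: pods managed by a Deployment are owned by a ReplicaSet named `<deployment>-<pod-template-hash>`, so stripping the final hash segment yields the deployment name. The sketch below assumes this convention; real controller code would read the pod's `OwnerReferences` to find the ReplicaSet, which is omitted here to keep the example self-contained:

```go
package main

import (
	"fmt"
	"strings"
)

// getDeploymentName derives a Deployment name from its ReplicaSet's
// name by stripping the trailing pod-template-hash segment.
// (Illustrative helper; the PR's getDeploymentName may work differently.)
func getDeploymentName(replicaSetName string) string {
	i := strings.LastIndex(replicaSetName, "-")
	if i <= 0 {
		// No hash segment: not a Deployment-managed ReplicaSet name.
		return ""
	}
	return replicaSetName[:i]
}

func main() {
	fmt.Println(getDeploymentName("web-7d4b9c6f54"))      // "web"
	fmt.Println(getDeploymentName("cache-server-66f8c9")) // "cache-server"
}
```

Note the caveat: deployment names can themselves contain hyphens, which is why only the last segment is stripped.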