
Commit de609c6

pedjak and claude committed
Fix race condition in e2e coverage collection
kubectl scale --replicas=0 is non-blocking: it returns as soon as the API server accepts the change, not when the pods have terminated. The existing wait on the copy pod was a no-op, since that pod was already running. As a result, kubectl cp could run before the manager pods had terminated and flushed their coverage data to the PVC.

Wait for each deployment's .status.replicas to reach 0 before copying, ensuring the Go coverage runtime has written its data.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
1 parent dd57c28 commit de609c6

1 file changed

hack/test/e2e-coverage.sh

Lines changed: 3 additions & 2 deletions
@@ -22,8 +22,9 @@ rm -rf ${COVERAGE_DIR} && mkdir -p ${COVERAGE_DIR}
 kubectl -n "$OPERATOR_CONTROLLER_NAMESPACE" scale deployment/"$OPERATOR_CONTROLLER_MANAGER_DEPLOYMENT_NAME" --replicas=0
 kubectl -n "$CATALOGD_NAMESPACE" scale deployment/"$CATALOGD_MANAGER_DEPLOYMENT_NAME" --replicas=0

-# Wait for the copy pod to be ready
-kubectl -n "$OPERATOR_CONTROLLER_NAMESPACE" wait --for=condition=ready pod "$COPY_POD_NAME"
+# Wait for deployments to scale down so coverage data is flushed to the PVC
+kubectl -n "$OPERATOR_CONTROLLER_NAMESPACE" wait --for=jsonpath='{.status.replicas}'=0 deployment/"$OPERATOR_CONTROLLER_MANAGER_DEPLOYMENT_NAME" --timeout=60s
+kubectl -n "$CATALOGD_NAMESPACE" wait --for=jsonpath='{.status.replicas}'=0 deployment/"$CATALOGD_MANAGER_DEPLOYMENT_NAME" --timeout=60s

 # Copy the coverage data from the temporary pod
 kubectl -n "$OPERATOR_CONTROLLER_NAMESPACE" cp "$COPY_POD_NAME":/e2e-coverage/ "$COVERAGE_DIR"
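The semantics of the fix can be sketched in plain shell: scale-down is asynchronous, so the script must poll the observed replica count until it reaches 0, which is what `kubectl wait --for=jsonpath='{.status.replicas}'=0 --timeout=60s` does against the live API. This is an illustrative sketch only; `get_replicas` and `wait_for_zero_replicas` are hypothetical stand-ins, not functions from the repo.

```shell
#!/bin/sh
# wait_for_zero_replicas NAME TIMEOUT_SECONDS
# Polls get_replicas once per second until it reports 0, or fails
# after TIMEOUT_SECONDS -- mirroring the jsonpath wait in the diff.
wait_for_zero_replicas() {
    name=$1
    timeout=$2
    elapsed=0
    while [ "$(get_replicas "$name")" -ne 0 ]; do
        if [ "$elapsed" -ge "$timeout" ]; then
            echo "timed out waiting for $name to scale to 0" >&2
            return 1
        fi
        sleep 1
        elapsed=$((elapsed + 1))
    done
    return 0
}

# Hypothetical stand-in for querying .status.replicas: each call
# reports the current count, which decays toward 0 between calls,
# like pods terminating after a non-blocking scale-down.
get_replicas() {
    count_file="/tmp/replicas_$1"
    n=$(cat "$count_file" 2>/dev/null || echo 3)
    next=$((n - 1))
    [ "$next" -lt 0 ] && next=0
    echo "$next" > "$count_file"
    echo "$n"
}

rm -f /tmp/replicas_demo
wait_for_zero_replicas demo 10 && echo "scaled down"  # → scaled down
```

Without the polling loop, copying immediately after the scale-down request would read the PVC while the old pods are still shutting down, before the Go coverage runtime has flushed its data.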
