fix: flush *collection* keys on collection cache flush #1845
Signed-off-by: anilb <epipav@gmail.com>
Pull request overview
Updates the production “partial cache flush” GitHub Actions workflow so that running a collection flush also clears Redis keys related to collection caching, helping prevent stale collection-related data after a collection-level invalidation.
Changes:
- After flushing all project-related cache entries for projects in a collection, additionally delete Redis keys matching `*collection*`.
- Adds logging to indicate collection-key flushing is occurring.
```shell
echo "Flushing collection cache keys matching *collection*"
kubectl exec -i redis-client -n insights -- \
  sh -c "redis-cli -h redis-svc -a \"$REDIS_PASS\" --scan --pattern \"*collection*\" 2>/dev/null | xargs -r redis-cli -h redis-svc -a \"$REDIS_PASS\" DEL 2>/dev/null"
```
This `SCAN | xargs | DEL` pipeline can silently succeed while deleting nothing if the SCAN fails (e.g., auth/connection issues), because stderr is redirected to `/dev/null` and the shell won't propagate earlier pipeline failures. Also, if many keys match, a single `xargs` invocation can exceed the maximum command length. Consider enabling `pipefail` / checking exit codes and batching `xargs` (e.g., limit keys per `DEL` call) so failures are surfaced and large invalidations are reliable.
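The hardening suggested above can be sketched as follows. This is a minimal, runnable illustration of the mechanics only: `scan_keys` is a hypothetical stand-in for the real `kubectl exec … redis-cli --scan` invocation, and the batch size of 2 is artificially small so the batching is visible (a real workflow would use something like `-n 500`).

```shell
#!/usr/bin/env bash
# Fail fast and propagate failures through pipelines instead of
# swallowing them with 2>/dev/null.
set -euo pipefail

# Hypothetical stand-in for:
#   kubectl exec -i redis-client -n insights -- \
#     redis-cli -h redis-svc -a "$REDIS_PASS" --scan --pattern "*collection*"
scan_keys() {
  printf 'collection:%s\n' 1 2 3 4 5
}

# Batch matched keys so one DEL call never exceeds the OS argument-length
# limit; `echo DEL` stands in for the real redis-cli DEL invocation.
# -r: skip DEL entirely when SCAN matches nothing.
scan_keys | xargs -r -n 2 echo DEL
# → DEL collection:1 collection:2
# → DEL collection:3 collection:4
# → DEL collection:5
```

With `pipefail` set, a failing `scan_keys` (e.g., a bad password in the real `redis-cli` call) makes the whole pipeline, and therefore the workflow step, exit non-zero instead of reporting success after deleting nothing.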