Currently, multiple instances of our containerized service can attempt to perform operations on the same shared resource simultaneously, leading to potential race conditions and data inconsistencies. This issue becomes critical when scaling the service horizontally in environments like Kubernetes.
We need to implement a distributed locking mechanism using Redis to ensure that only one instance of the service executes critical sections of the code at a given time.
cc: @apgapg