@@ -91,7 +91,7 @@ it is not. Websocket can be disconnected (actually happens on purpose sometimes)
 
 Let's consider the following operator:
  - we have a custom resource `PodPrefix` where the spec contains only one field: `podNamePrefix`,
- - goal of the operator is to create a pod with a name that has the prefix and a random sequence
+ - goal of the operator is to create a pod with a name that has the prefix and a random sequence suffix
  - it should never run two pods at once; if the `podNamePrefix` changes it should delete
    the actual pod and after that create a new one
  - the status of the custom resource should contain the `generatedPodName`
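To make the requirements concrete, the resource's spec and status could be modeled roughly like this (a plain-Java sketch with illustrative class names; the Fabric8 `CustomResource` base class and CRD annotations are omitted):

```java
// Illustrative spec/status POJOs for the PodPrefix custom resource.
// In a real operator these would hang off a Fabric8 CustomResource subclass.
class PodPrefixSpec {
  private String podNamePrefix; // the only field in the spec

  public String getPodNamePrefix() { return podNamePrefix; }
  public void setPodNamePrefix(String podNamePrefix) { this.podNamePrefix = podNamePrefix; }
}

class PodPrefixStatus {
  private String generatedPodName; // name of the Pod created for the current prefix

  public String getGeneratedPodName() { return generatedPodName; }
  public void setGeneratedPodName(String generatedPodName) { this.generatedPodName = generatedPodName; }
}
```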
@@ -117,16 +117,13 @@ public UpdateControl<PodPrefix> reconcile(PodPrefix primary, Context<PodPrefix>
   } else {
     // creates a new pod
     var newPod = context.getClient().resource(createPodWithOwnerReference(primary)).serverSideApply();
-    return UpdateControl.patchStatus(setPodNameToStatus(primary, newPod));
+    return UpdateControl.patchStatus(setGeneratedPodNameToStatus(primary, newPod));
   }
 }
 
 @Override
 public List<EventSource<?, PodPrefix>> prepareEventSources(EventSourceContext<PodPrefix> context) {
-
-  // Code omitted for adding an InformerEventSource for the pod
-
-
+  // Code omitted for adding an InformerEventSource for the Pod
 }
 ```
 
@@ -136,10 +133,35 @@ the reconciliation.
 
 Now consider the following sequence of events:
 
-1. We create a `PodPrefix` with `podNamePrefix`: "first-pod-prefix".
+1. We create a `PodPrefix` with `spec.podNamePrefix`: `first-pod-prefix`.
 2. Concurrently:
    - The reconciliation logic runs and creates a Pod with a name with a generated suffix: "first-pod-prefix-a3j3ka";
      it also sets this name in the status and updates the custom resource status.
    - While the reconciliation is running we update the custom resource to have the value
-     "second-pod-prefix"
+     `second-pod-prefix`
 3. The update of the custom resource triggers the reconciliation.
+
+When the spec change triggers the reconciliation in step 3, there is absolutely no guarantee that:
+ - the created Pod will already be visible; `currentPod` might simply be empty
+ - the `status.generatedPodName` will be visible
+
+Both are backed by informers, and the caches of those informers are only eventually consistent with our updates.
+Therefore, the next reconciliation could create a new Pod, and we would miss the requirement to never have two
+Pods running at the same time. In addition, the controller would override the status. Although in the case of a
+Kubernetes resource we could still find the existing Pods later via owner references, if we managed a
+non-Kubernetes resource we would not notice that we had already created a resource.
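The failure mode can be simulated without any Kubernetes machinery at all. The following toy sketch (not josdk code; plain Java maps standing in for the API server and the informer cache, with a delayed "watcher" thread) shows how a read issued right after a write can miss the just-created object:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy simulation of an eventually consistent informer cache: the "API server"
// map gets the Pod immediately, but the "informer cache" map only sees it
// after a propagation delay. All names here are made up for illustration.
class StaleCacheDemo {

  // returns {visible right after the write, visible after the delay}
  static boolean[] run() throws InterruptedException {
    Map<String, String> apiServer = new ConcurrentHashMap<>();
    Map<String, String> informerCache = new ConcurrentHashMap<>();

    // the reconciler creates the Pod on the API server
    apiServer.put("first-pod-prefix-a3j3ka", "Pod");

    // a watcher thread propagates it to the cache only after 100 ms
    Thread watcher = new Thread(() -> {
      try {
        Thread.sleep(100);
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
      informerCache.putAll(apiServer);
    });
    watcher.start();

    boolean rightAfterWrite = informerCache.containsKey("first-pod-prefix-a3j3ka");
    Thread.sleep(300); // give the watcher time to catch up
    boolean afterDelay = informerCache.containsKey("first-pod-prefix-a3j3ka");
    watcher.join();
    return new boolean[] { rightAfterWrite, afterDelay };
  }
}
```

A reconciliation that fires in the window before the watcher catches up sees the stale view, exactly as in the sequence above.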
+
+So can we have stronger guarantees regarding caches? It turns out we can now...
+
+## Achieving read-cache-after-write consistency
+
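One framework-agnostic way to get this guarantee is to block after a write until the cache has caught up with the version the API server returned for our update, and only then continue (or reschedule on timeout). A minimal sketch, assuming a hypothetical helper and a plain map standing in for the informer cache, with versions as numbers for simplicity:

```java
import java.util.Map;

// Hypothetical helper: after an update, poll the (simulated) informer cache
// until it holds a version at least as new as the one our write produced.
class CacheSync {
  static boolean waitForCache(Map<String, Long> cache, String name,
                              long writtenVersion, long timeoutMillis) throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMillis;
    while (System.currentTimeMillis() < deadline) {
      Long cached = cache.get(name);
      if (cached != null && cached >= writtenVersion) {
        return true; // the cache now reflects (at least) our own write
      }
      Thread.sleep(10); // back off briefly before polling again
    }
    return false; // timed out; the caller may reschedule the reconciliation
  }
}
```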
+## Filtering events for our own updates
+
+TODO:
+ - filter events
+ - reschedule
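Until this section is fleshed out, here is a rough idea of what such filtering could look like (a hypothetical class, not josdk API): remember the `resourceVersion` each of our own updates produced, and drop the watch event that merely echoes it.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: suppress watch events that only reflect updates
// this controller made itself, identified by resourceVersion.
class OwnUpdateFilter {
  private final Set<String> ownVersions = ConcurrentHashMap.newKeySet();

  // call with the resourceVersion returned by our own update/patch
  void recordOwnUpdate(String resourceVersion) {
    ownVersions.add(resourceVersion);
  }

  // returns true if the event should trigger a reconciliation
  boolean accept(String eventResourceVersion) {
    // remove-and-test: each recorded version suppresses at most one event
    return !ownVersions.remove(eventResourceVersion);
  }
}
```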