# Step 1: The Actor Model: Commutative Grid Dispatch

## The Problem We're Solving

Two control centers, one in California and one in New York, each manage a portion of the national grid. Both maintain a copy of the **grid balance**: a signed integer representing net generation in megawatts (positive = excess, negative = deficit).

Operators at either location can issue dispatch commands:

- **Dispatch up**: bring more generation online (+MW)

The squiggly arrows (`~>`) are **physical connections** in Lingua Franca: they use TCP for reliable in-order delivery on each link, but carry **no timestamp coordination** between links. Messages from California and New York may arrive at either grid manager in any order.

## The Code

See [`src/Step1_Actor.lf`](src/Step1_Actor.lf).

The core reactor is `SimpleGridManager`:

```lf
reactor SimpleGridManager {
    input in1: int   // commands arriving from California
    input in2: int   // commands arriving from New York
    output out: int  // current balance reported back to local operator

    state balance: int = 0

    reaction(in1, in2) -> out {=
        if (in1->is_present) {
            self->balance += in1->value;
            lf_print("California command %+d MW -> balance now %d MW",
                     in1->value, self->balance);
        }
        if (in2->is_present) {
            self->balance += in2->value;
            lf_print("New York command %+d MW -> balance now %d MW",
                     in2->value, self->balance);
        }
        lf_set(out, self->balance);
    =}
}
```

The top-level federated program wires everything together. For the first exercise, the operator consoles are scripted with parameters, so you can change the trace without writing new reactors or timers:

```lf
federated reactor {
    gi1 = new ScriptedGridInterface(
        node_name="California",
        command_value=100,
        command_time=0 ms
    )
    gi2 = new ScriptedGridInterface(
        node_name="New York",
        command_value=-100,
        command_time=1 ms
    )
    gm1 = new SimpleGridManager(node_name="California manager")
    gm2 = new SimpleGridManager(node_name="New York manager")

    gi1.command ~> gm1.in1  // California commands -> California manager (local)
    gi2.command ~> gm2.in2  // New York commands -> New York manager (local)
    gi1.command ~> gm2.in1  // California commands -> New York manager (remote)
    gi2.command ~> gm1.in2  // New York commands -> California manager (remote)

    gm1.out ~> gi1.status
    gm2.out ~> gi2.status
}
```

Each grid manager receives commands from **both** operators and keeps its own copy of the balance. The local operator console gets the balance back from its local manager.

---

## Running Step 1

Compile the LF program with `lfc`:

```bash
lfc src/Step1_Actor.lf
```

Because this is a federated LF program, compilation generates a launcher under `bin/` named after the source file:

```bash
./bin/Step1_Actor
```

This launches the runtime infrastructure (RTI) and the four federates:

- `federate__gi1`: California grid interface
- `federate__gi2`: New York grid interface
- `federate__gm1`: California grid manager
- `federate__gm2`: New York grid manager

To see each federate in its own terminal pane, run the launcher with `--tmux`:

```bash
./bin/Step1_Actor --tmux
```

If `tmux` is not installed, install it first, for example with `brew install tmux` on macOS or `sudo apt-get install tmux` on Ubuntu.

Inside the tmux view, the top pane is the RTI and the other panes are the federates. The program has a built-in timeout, so it should finish on its own. To leave and close the entire tmux session after the run, press `Ctrl+B`, then `D`. If you need to stop a still-running federation, press `Ctrl+C` in the RTI pane, then detach with `Ctrl+B`, then `D`.

Example tmux run:

In the screenshot, the managers receive the California and New York commands in different orders, but both managers end with balance `0 MW`.

---

## Why This Works, Sometimes

The operation `balance += value` has a special mathematical property: it is **associative and commutative**. It doesn't matter what order the additions happen; the final sum is always the same.

This means that even though `gm1` and `gm2` may process the same two commands in different orders, they will eventually agree on the same balance. This property is called **eventual consistency**.
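
To make the order-independence concrete, here is a minimal C sketch, separate from the LF program; the function name `apply_commands` is hypothetical. It replays the same two commands in both possible arrival orders:

```c
#include <assert.h>

/* Stand-in for SimpleGridManager's update logic: a replica's balance is
   simply the sum of every command it has received, in arrival order. */
static int apply_commands(const int *commands, int n) {
    int balance = 0;
    for (int i = 0; i < n; i++) {
        balance += commands[i];  /* addition is commutative and associative */
    }
    return balance;
}
```

Applying `{+100, -100}` and `{-100, +100}` both yield a balance of `0`, which is why `gm1` and `gm2` converge no matter which command arrives first.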

More formally, this design satisfies **ACID 2.0** properties (Helland & Campbell):

- **I**dempotent: TCP guarantees exactly-once delivery, so each command is applied exactly once
- **D**istributed: state is maintained at multiple nodes

A datatype with these properties is called a **Conflict-Free Replicated Datatype (CRDT)**. This example is one of the simplest CRDTs in existence.
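
For contrast, here is a sketch of a *state-based* CRDT: a two-node grow-only counter (G-counter). This is illustrative only and not part of the tutorial's code; a signed balance would need a PN-counter, i.e., two G-counters. Its merge takes per-slot maxima, so it is commutative, associative, and idempotent, tolerating reordered and even duplicated merges:

```c
#include <assert.h>

#define NODES 2  /* two control centers in this sketch */

/* Each node increments only its own slot. */
typedef struct { int count[NODES]; } GCounter;

static void increment(GCounter *c, int node) { c->count[node]++; }

/* Merge takes the element-wise maximum: commutative, associative,
   and idempotent, so duplicated or reordered merges are harmless. */
static GCounter merge(GCounter a, GCounter b) {
    GCounter out;
    for (int i = 0; i < NODES; i++) {
        out.count[i] = a.count[i] > b.count[i] ? a.count[i] : b.count[i];
    }
    return out;
}

/* The counter's value is the sum of all slots. */
static int value(GCounter c) {
    int sum = 0;
    for (int i = 0; i < NODES; i++) sum += c.count[i];
    return sum;
}
```

Merging the two replicas in either order gives the same value, and re-merging an already-merged state changes nothing.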

---

## The Catch

This design would allow operators to curtail generation far below zero, creating a dangerous grid imbalance that could trip protective relays and cause a cascading blackout.

Any **business logic** that enforces limits (e.g., "don't curtail if balance is already at its minimum threshold") breaks commutativity. That also breaks the consistency guarantee from this simple CRDT-style design.

That's what we explore next.

---

## Exercises

1. Trace through a scenario: California dispatches +100 MW at time 0 ms, New York curtails −100 MW at time 1 ms. Show that both grid managers reach the same balance regardless of message arrival order.

2. What would happen if TCP delivery were *not* guaranteed? How would the ACID 2.0 / CRDT properties need to change?

3. Now consider the potential effect of network delays in this example. How would such delays affect its consistency?

---

**Next:** [Step 2: When Operations Are Non-Commutative](02-inconsistency.md)

---

# Step 2: When Operations Are Non-Commutative: The Consistency Problem

## Adding Real Business Logic

Real grid operators enforce **safety constraints**. A typical rule:

>
> If a curtailment would push the balance below the threshold, reject it and log an **imbalance event** (which triggers automated protective relays in a real system).

Let's say our minimum safe threshold is **−200 MW**. Here is the updated reactor (see [`src/Step2_Inconsistency.lf`](src/Step2_Inconsistency.lf)).
3. Final balance at `gm1`: **−130 MW**. No imbalance event.

Since `gm1` and `gm2` receive these messages over physical (unordered) connections, they may each experience a different scenario. **They permanently disagree on the balance**, and worse, they may disagree on whether an imbalance event occurred.

This is the fundamental consistency problem in distributed systems.
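
The order-dependence is easy to reproduce in a small C sketch. The command values here are hypothetical, and `apply_guarded` is an illustrative name; only the −200 MW threshold comes from the text:

```c
#include <assert.h>

#define MIN_SAFE_MW (-200)  /* minimum safe threshold from the text */

/* Guarded update: reject any command that would push the balance below
   MIN_SAFE_MW (logging of the imbalance event is omitted here).
   The guard makes the update order-dependent, i.e., non-commutative. */
static int apply_guarded(int balance, const int *commands, int n) {
    for (int i = 0; i < n; i++) {
        int next = balance + commands[i];
        if (next < MIN_SAFE_MW) continue;  /* rejected as unsafe */
        balance = next;
    }
    return balance;
}
```

With a +100 MW dispatch and a −230 MW curtailment, one arrival order ends at −130 MW, while the other rejects the curtailment and ends at +100 MW: the replicas diverge permanently.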

```
New York node: dispatch +100 ───────────
                  ↓
gm2 final: -50 MW ✓ no event

But gm1 (-130) ≠ gm2 (-50): INCONSISTENT STATE!
```

In a real grid, this inconsistency means the two control centers have **contradictory views of grid health**. Automated systems making decisions based on these views could take opposing corrective actions, worsening the situation.

---

## Fixing It: The Options

We'll explore three approaches to fix the inconsistency issue: