
The squiggly arrows (`~>`) are **physical connections** in Lingua Franca: they use TCP for reliable in-order delivery on each link, but carry **no timestamp coordination** between links. Messages from California and New York may arrive at either grid manager in any order.
The core reactor is `SimpleGridManager`:
```lf
reactor SimpleGridManager {
  input in1: int  // commands arriving from California
  input in2: int  // commands arriving from New York
  output out: int // current balance reported back to local operator

  state balance: int = 0

  reaction(in1, in2) -> out {=
    if (in1->is_present) {
      self->balance += in1->value;
      lf_print("California command %+d MW -> balance now %d MW",
               in1->value, self->balance);
    }
    if (in2->is_present) {
      self->balance += in2->value;
      lf_print("New York command %+d MW -> balance now %d MW",
               in2->value, self->balance);
    }
    lf_set(out, self->balance);
  =}
}
```
The top-level federated program wires everything together. For the first exercise, the operator consoles are scripted with parameters, so you can change the trace without writing new reactors or timers:
```lf
federated reactor {
  gi1 = new ScriptedGridInterface(
    node_name="California",
    command_value=100,
    command_time=0 ms
  )
  gi2 = new ScriptedGridInterface(
    node_name="New York",
    command_value=-100,
    command_time=1 ms
  )
  gm1 = new SimpleGridManager(node_name="California manager")
  gm2 = new SimpleGridManager(node_name="New York manager")

  gi1.command ~> gm1.in1 // California commands -> California manager (local)
  gi2.command ~> gm2.in2 // New York commands -> New York manager (local)
  gi1.command ~> gm2.in1 // California commands -> New York manager (remote)
  gi2.command ~> gm1.in2 // New York commands -> California manager (remote)

  gm1.out ~> gi1.status
  gm2.out ~> gi2.status
}
```
Each grid manager receives commands from **both** operators and keeps its own copy of the balance. The local operator console gets the balance back from its local manager.
---

## Running Step 1

Compile the LF program with `lfc`:

```bash
lfc src/Step1_Actor.lf
```

Because this is a federated LF program, compilation generates a launcher under `bin/` named after the source file:

```bash
./bin/Step1_Actor
```

This launches the runtime infrastructure (RTI) and the four federates:

- `federate__gi1`: California grid interface
- `federate__gi2`: New York grid interface
- `federate__gm1`: California grid manager
- `federate__gm2`: New York grid manager

To see each federate in its own terminal pane, run the launcher with `--tmux`:

```bash
./bin/Step1_Actor --tmux
```

If `tmux` is not installed, install it first, for example with `brew install tmux` on macOS or `sudo apt-get install tmux` on Ubuntu.

Inside the tmux view, the top pane is the RTI and the other panes are the federates. The program has a built-in timeout, so it should finish on its own. To leave and close the entire tmux session after the run, press `Ctrl+B`, then `D`. If you need to stop a still-running federation, press `Ctrl+C` in the RTI pane, then detach with `Ctrl+B`, then `D`.

Example tmux run:



In the screenshot, the managers receive the California and New York commands in different orders, but both managers end with balance `0 MW`.

---

## Why This Works, Sometimes
The operation `balance += value` has a special mathematical property: it is **associative and commutative**. It doesn't matter in which order the additions happen; the final sum is always the same.
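This order-independence can be checked with a small sketch (illustrative Python, not part of the tutorial's LF code): folding the same set of commands in two different arrival orders always yields the same final balance.

```python
from functools import reduce

def final_balance(commands):
    # Each grid manager simply folds `balance += value` over whatever
    # arrival order it happens to observe.
    return reduce(lambda balance, value: balance + value, commands, 0)

# Tutorial scenario: California dispatches +100 MW, New York curtails -100 MW.
# gm1 may see California first, gm2 may see New York first; the sum is the same.
assert final_balance([+100, -100]) == final_balance([-100, +100]) == 0
```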
That's what we explore next.
## Exercises
1. Trace through a scenario: California dispatches +100 MW at time 0 ms, New York curtails −100 MW at time 1 ms. Show that both grid managers reach the same balance regardless of message arrival order.
2. What would happen if TCP delivery were *not* guaranteed? How would the ACID 2.0 / CRDT properties need to change?

---

**File: `03-timestamps.md`**

The only syntactic change in the Lingua Franca program is replacing physical connections (`~>`) with logical connections (`->`):
```lf
// Before (Step 2): physical, unordered
gi1.command ~> gm1.in1
gi1.command ~> gm2.in1

// After (Step 3): logical, timestamp-ordered
gi1.command -> gm1.in1
gi1.command -> gm2.in1
```
40
40
41
41
This small change has a profound effect: LF now gives both `gm1` and `gm2` the same logical timestamps and a deterministic rule for processing them, provided the coordination assumptions are met.
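As a rough illustration of what timestamp ordering buys (a hypothetical Python model, not the LF runtime), any replica that processes commands sorted by logical timestamp computes the identical state trace, no matter how the network reorders delivery:

```python
def deterministic_trace(commands):
    """commands: iterable of (timestamp_ms, sender, value) tuples,
    possibly listed in network-arrival order."""
    balance = 0
    trace = []
    # Logical connections let every replica process events in timestamp
    # order, breaking ties by a fixed rule (here: sender name).
    for ts, sender, value in sorted(commands):
        balance += value
        trace.append((ts, sender, balance))
    return trace

# Both managers compute the same trace even if New York's message
# physically arrives first:
a = deterministic_trace([(0, "California", 100), (1, "New York", -100)])
b = deterministic_trace([(1, "New York", -100), (0, "California", 100)])
assert a == b == [(0, "California", 100), (1, "New York", 0)]
```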
See [`src/Step3_Timestamps.lf`](src/Step3_Timestamps.lf). And here is what our system looks like:
For two nodes in California and New York (cross-continental latency ~60–80 ms) …
Tardy events, if not handled, result in messages like this:
```
Fed 3 (gm2_main): ERROR: STP violation occurred in a trigger to reaction 1, and there is no handler.
**** Invoking reaction at the wrong tag!
```
To intentionally trigger this error in [`src/Step3_Timestamps.lf`](src/Step3_Timestamps.lf):
1. Temporarily remove the `tardy {= ... =}` block attached to the `GridManager` reaction.
2. Make both `GridManager` `@maxwait` values very small, for example `@maxwait(1 us)` or `@maxwait(0 ms)`.
3. Compile and run:
```bash
lfc src/Step3_Timestamps.lf
./bin/Step3_Timestamps
```
With a very small `maxwait`, a grid manager may advance logical time before an earlier remote message arrives. Without a tardy handler, the runtime reports the STP violation. After observing the error, restore the `tardy` block and return `maxwait` to `100 ms`.
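The arithmetic behind this can be sketched in one line (a hypothetical Python model, not the LF runtime): a remote message timestamped at tag `t` arrives after some network delay, while the receiver only holds `t` open for `maxwait` before advancing, so the message is tardy whenever the delay exceeds `maxwait`.

```python
def is_tardy(network_delay_ms: float, maxwait_ms: float) -> bool:
    # The receiver waits at most `maxwait` past a tag before assuming
    # remote inputs for that tag are absent and advancing logical time.
    return network_delay_ms > maxwait_ms

# With ~70 ms cross-continental latency:
assert not is_tardy(70, 100)  # maxwait = 100 ms absorbs the delay
assert is_tardy(70, 0.001)    # maxwait = 1 us forces an STP violation
```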
Tardy events may be handled with a **tardy handler**.
For example, we can add the following to the `GridManager` reaction:

---

**File: `README.md`**

# LF Tutorial @ CPS-IoT Week 2026 - Hands-on Session: Logical Time in Distributed Systems: A Power Grid Tutorial
> **Based on:** ["Consistency vs. Availability in Distributed Cyber-Physical Systems" by Lee et al. (2023)](https://dl.acm.org/doi/10.1145/3609119)
> **Domain:** Distributed power grid control using Lingua Franca
---
Modern power grids are distributed cyber-physical systems. Generation, transmission, and load are spread across vast geographic areas. Multiple control nodes must coordinate in real time, and they must **agree** on the state of the grid even when separated by hundreds of milliseconds of network latency.
This tutorial takes you through a series of progressively more sophisticated designs for a distributed grid controller, using the [Lingua Franca (LF)](https://lf-lang.org/) coordination language. Each design exposes a new problem and motivates the next solution, ending with a hybrid design that separates fast, low-risk commands from slower, strongly consistent decisions.
The design journey mirrors the consistency-vs-availability tradeoff captured by the **CAL theorem**: stronger consistency requires more waiting, more assumptions about latency, or carefully chosen fault handling. Each design makes an explicit, application-specific compromise.
This follows the paper's focus on shared physical state in real-time systems: …
- **Physical connections** (`~>`) vs **logical connections** (`->`) in LF
- **Eventual consistency** via ACID 2.0 / CRDTs
- **Logical timestamps** and the notion of *logical time*
- **`maxwait`** as a practical safe-to-advance bound
- **Tardy handlers** for messages that arrive too late
- **The CAL theorem**: consistency, availability, and latency tradeoff
- **Fault handlers** for bounded unavailability
---
- Basic familiarity with concurrent programming concepts
- Some exposure to distributed systems (helpful but not required)
- Lingua Franca installed: see [lf-lang.org](https://lf-lang.org/docs/installation)
- A C build toolchain and CMake, since the examples use `target C`
- `tmux` if you want to run federates in separate terminal panes
---
After creation, clone **your** new repository locally and follow [Running the Code](#running-the-code).
## Running the Code
All `.lf` files in this repository are federated LF programs. Compile an example with `lfc`:
```bash
lfc src/<filename>.lf
```
For example:
```bash
lfc src/Step1_Actor.lf
```
Compilation generates a launcher under `bin/` with the same base name as the source file, without the `.lf` extension. For `src/Step1_Actor.lf`, the launcher is:
```bash
./bin/Step1_Actor
```
99
106
100
-
# References
107
+
That launcher starts the runtime infrastructure (RTI) and all federates for the example. There is no `_launch.sh` script in this repository.
To run a different step, replace the filename and launcher name:
```bash
lfc src/Step5_Hybrid.lf
./bin/Step5_Hybrid
```
The launcher also supports tmux panes, which can make federated output easier to read:
```bash
./bin/Step1_Actor --tmux
```
Step 1 exits on its own because it has a short timeout. Other steps may keep running until you stop them with `Ctrl+C` in the launcher or RTI pane.
## References
[1] E. A. Lee, R. Akella, S. Bateni, S. Lin, M. Lohstroh, and C. Menard, "Consistency vs. Availability in Distributed Cyber-Physical Systems," ACM Transactions on Embedded Computing Systems, 2023, doi: 10.1145/3609119. [Online]. Available: https://dl.acm.org/doi/10.1145/3609119
[2] T. Zhao, Z. Li and Z. Ding, "Consensus-Based Distributed Optimal Energy Management With Less Communication in a Microgrid," in IEEE Transactions on Industrial Informatics, vol. 15, no. 6, pp. 3356-3367, June 2019, doi: 10.1109/TII.2018.2871562.