Commit aa042d4 (parent 5ab49f9)

Adding missing documentation of concepts like detailed carriers. Adding guides, howtos, adding more details regarding the algorithms.

15 files changed: 1695 additions & 148 deletions

docs/Project.toml

Lines changed: 3 additions & 0 deletions
```diff
@@ -5,5 +5,8 @@ Graphs = "86223c79-3864-5bf0-83f7-82e725a168b6"
 Logging = "56ddb016-857b-54e1-b83d-db4d58db5568"
 Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"
 
+[sources]
+DistributedResourceOptimization = {path = "/home/rschrage/git/DistributedResourceOptimization.jl"}
+
 [compat]
 Documenter = "~0.27"
```
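The `[sources]` entry above pins the docs environment to a local checkout of DistributedResourceOptimization instead of a registered release (a Pkg feature available from Julia 1.11 onward). The absolute path is machine-specific; a relative-path variant (hypothetical, assuming the docs environment lives in a `docs/` folder inside the package repository) would be the more portable form:

```toml
# Sketch only: resolve the package from the parent directory of docs/
[sources]
DistributedResourceOptimization = {path = ".."}
```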

docs/make.jl

Lines changed: 13 additions & 4 deletions
```diff
@@ -14,12 +14,21 @@ with_logger(logger) do
         "Algorithms" => [
             "ADMM"=>"algorithms/admm.md",
             "COHDA"=>"algorithms/cohda.md",
+            "Averaging Consensus"=>"algorithms/consensus.md",
         ],
-        "Carrier" => [
-            "Simple"=>"carrier/simple.md",
-            "Mango"=>"carrier/mango.md",
+        "Carriers" => [
+            "SimpleCarrier"=>"carrier/simple.md",
+            "MangoCarrier"=>"carrier/mango.md",
         ],
-        "API"=>"api.md"],
+        "Tutorials" => [
+            "Energy Dispatch (ADMM)"=>"tutorials/energy_dispatch.md",
+            "Schedule Coordination (COHDA)"=>"tutorials/schedule_coordination.md",
+        ],
+        "How-To Guides" => [
+            "Custom Algorithm"=>"howtos/custom_algorithm.md",
+            "Custom Carrier"=>"howtos/custom_carrier.md",
+        ],
+        "API Reference"=>"api.md"],
     repo="https://github.com/Digitalized-Energy-Systems/DistributedResourceOptimization.jl",
 )
 end
```

docs/src/algorithms/admm.md

Lines changed: 149 additions & 51 deletions
```diff
@@ -1,91 +1,189 @@
-In DRO.jl every ADMM optimization consists of two components, the ADMM problem form itself, and the local model, which determines local constraints and objectives.
+# ADMM
 
-# Problem Form
+The Alternating Direction Method of Multipliers (ADMM) is a convex optimization algorithm
+that decomposes a large problem into smaller subproblems solved locally, coordinated by a
+central update. DRO implements two ADMM variants: **Sharing** and **Consensus**.
 
-## Consensus
+## Problem Forms
 
-The single global variable consensus form can be written as
+### Sharing
+
+The sharing form distributes a resource across ``N`` agents whose individual contributions
+must sum to a global variable ``z``:
 
 ```math
 \begin{equation}
 \begin{split}
-\min_{\{x_i\},\,z}\;\; \sum_{i=1}^N f_i(x_i) \\
-\quad\text{s.t.}\quad x_i = z,\;\; i=1,\dots,N,
+\min_{\{x_i\},\,z}\;\; \sum_{i=1}^N f_i(x_i) \;+\; g(z) \\
+\quad\text{s.t.}\quad \sum_{i=1}^N x_i = z
 \end{split}
 \end{equation}
 ```
 
-where ``f_i`` is the local objective of agent ``i``, ``x_i`` the decision variable of this agent.
+where ``f_i`` is the local cost of agent ``i``, ``x_i \in \mathbb{R}^m`` its decision variable,
+and ``g`` a global penalty on the aggregate ``z``.
 
-With the dual variable ``u`` and the penalty ``\rho`` the update iteration reads.
+The ADMM iterations with dual variable ``u`` and penalty ``\rho`` are:
 
 ```math
 \begin{align}
-x_i^{k+1}
-&= \arg\min_{x_i} \;
-f_i(x_i)
-+ \frac{\rho}{2}\big\| x_i - \big(z^k - u_i^k \big) \big\|_2^2
-\\
-z^{k+1}
-&= \arg\min_{z} \;
-g(z) + \frac{N \rho}{2}\left\|
-z - \Big( \bar{x}^{k+1} + \bar{u}^k \Big)
-\right\|_2^2 \\
-u_i^{k+1}
-&= u_i^k + x_i^{k+1} - z^{k+1}
+x_i^{k+1}
+&= \arg\min_{x_i}\;
+f_i(x_i) + \tfrac{\rho}{2}\,\big\lVert x_i - x_i^k + \bar{x}^k - z^k + u^k \big\rVert_2^2,
+\quad i=1,\dots,N
+\\[4pt]
+\bar{x}^{\,k+1}
+&= \tfrac{1}{N}\sum_{i=1}^N x_i^{k+1}
+\\[4pt]
+z^{k+1}
+&= \arg\min_{z}\;
+g(N\cdot z) + \tfrac{N\rho}{2}\,\big\lVert z - \bar{x}^{\,k+1} - u^k \big\rVert_2^2
+\\[4pt]
+u^{k+1}
+&= u^k + \bar{x}^{\,k+1} - z^{k+1}
\end{align}
 ```
 
-To instantiate a coordinator for the sharing form, use [`create_consensus_target_reach_admm_coordinator`](@ref). To start the neotiation you need to use [`create_admm_start_consensus`](@ref).
+To create a coordinator for this form use [`create_sharing_target_distance_admm_coordinator`](@ref),
+and start the negotiation with [`create_admm_start`](@ref).
 
+### Consensus
 
-## Sharing
-
-Take the sharing problem:
+The consensus form drives all agents to agree on a single global value ``z``:
 
 ```math
 \begin{equation}
 \begin{split}
-\min_{\{x_i\},\,z}\;\; \sum_{i=1}^N f_i(x_i) \;+\; g(z)\\
-\quad\text{s.t.}\quad \sum_{i=1}^N x_i = z,\;\; i=1,\dots,N,
+\min_{\{x_i\},\,z}\;\; \sum_{i=1}^N f_i(x_i) \\
+\quad\text{s.t.}\quad x_i = z,\;\; i=1,\dots,N
 \end{split}
 \end{equation}
 ```
 
-where ``f_i`` is the local objective of agent ``i``, ``x_i`` the decision variable of this agent, and ``g`` the global objective.
-
-With the dual variable ``u`` and the penalty ``\rho`` the generic update iteration reads.
+The update iterations, with per-agent dual variables ``u_i`` and an optional
+regularizer ``g`` on ``z`` (``g \equiv 0`` for plain consensus), are:
 
 ```math
 \begin{align}
-x_i^{k+1}
+x_i^{k+1}
 &= \arg\min_{x_i}\;
-f_i(x_i) + \tfrac{\rho}{2}\,\big\lVert x_i - (z^k - u^k) \big\rVert_2^2,
-\\
-&i=1,\dots,N,
-\\[6pt]
-z^{k+1}
+f_i(x_i) + \frac{\rho}{2}\big\| x_i - \big(z^k - u_i^k \big) \big\|_2^2
+\\[4pt]
+z^{k+1}
 &= \arg\min_{z}\;
-g(N\cdot z) + \tfrac{N\rho}{2}\,\big\lVert z - \bar{x}^{\,k+1} - u^k \big\rVert_2^2,
-\\
-\bar{x}^{\,k+1}
-&= \tfrac{1}{N}\sum_{i=1}^N x_i^{k+1},
-\\[6pt]
-u^{k+1}
-&= u^k + \bar{x}^{\,k+1} - z^{k+1}.
-
+g(z) + \frac{N \rho}{2}\left\|
+z - \Big( \bar{x}^{k+1} + \bar{u}^k \Big)
+\right\|_2^2 \\[4pt]
+u_i^{k+1}
+&= u_i^k + x_i^{k+1} - z^{k+1}
 \end{align}
 ```
 
-To instantiate a coordinator for the sharing form, use [`create_sharing_admm_coordinator`](@ref). To start the negotiation you can use [`create_admm_start`](@ref).
+To create a coordinator for the consensus form use [`create_consensus_target_reach_admm_coordinator`](@ref),
+and to construct the start message use [`create_admm_start_consensus`](@ref).
+
+## Local Model: Flexibility Actor
+
+Each participant is modelled as a *flexibility actor* — a local resource with bounded and coupled
+decision variables:
+
+| Constraint | Description |
+|-----------|-------------|
+| ``l_i \leq x_i \leq u_i`` | Box constraints (lower/upper bounds per sector) |
+| ``C_i x_i \leq d_i`` | Coupling constraints (e.g., input-output coupling) |
+| ``S_i^\top x_i`` | Linear priority penalty added to the local objective |
+
+At each ADMM iteration the actor solves a small QP (via OSQP) to compute ``x_i^{k+1}``.
+
+### One-to-Many Resource
+
+A common model is a resource that converts a single input into ``m`` outputs with given
+efficiencies ``\eta \in \mathbb{R}^m``. Use [`create_admm_flex_actor_one_to_many`](@ref):
+
+```julia
+# 10 kW input capacity, three outputs with efficiencies [0.1, 0.5, -1.0]
+# Negative efficiency means the resource *consumes* that output type.
+actor = create_admm_flex_actor_one_to_many(10.0, [0.1, 0.5, -1.0])
+```
+
+An optional priorities vector ``P`` biases the solution towards specific sectors:
+
+```julia
+# Prefer sector 1 with priority 5
+actor = create_admm_flex_actor_one_to_many(10.0, [0.1, 0.5, -1.0], [5.0, 0.0, 0.0])
+```
 
-# Local Models
+After the optimization finishes, retrieve the result with:
+
+```julia
+x_opt = result(actor)  # Vector{Float64} of length m
+```
+
+## Coordinator Parameters
+
+The generic ADMM coordinator ([`ADMMGenericCoordinator`](@ref)) exposes several tuning parameters:
+
+| Parameter | Default | Description |
+|-----------|---------|-------------|
+| `ρ` | `1.0` | Penalty parameter — larger values enforce constraints faster but may slow convergence |
+| `max_iters` | `1000` | Maximum number of ADMM iterations |
+| `abs_tol` | `1e-4` | Absolute primal/dual residual tolerance |
+| `rel_tol` | `1e-3` | Relative primal/dual residual tolerance |
+| `μ` | `10` | Residual ratio threshold for ρ adaptation |
+| `τ` | `2` | Multiplicative factor for ρ adaptation |
+| `slack_penalty` | `100` | Penalty for infeasibility slack variables |
+
+## Complete Example — ADMM Sharing
+
+```@example admm-sharing
+using DistributedResourceOptimization
+
+# Three flexible resources (e.g., heat pumps, battery, PV inverter)
+# Each converts 10/15/10 kW input into three output types
+flex1 = create_admm_flex_actor_one_to_many(10.0, [0.1, 0.5, -1.0])
+flex2 = create_admm_flex_actor_one_to_many(15.0, [0.1, 0.5, -1.0])
+flex3 = create_admm_flex_actor_one_to_many(10.0, [-1.0, 0.0, 1.0])
+
+# Coordinator minimises weighted distance of Σxᵢ to target [-4, 0, 6]
+# Priority weights [5, 1, 1] penalise deviations in sector 1 most heavily
+coordinator = create_sharing_target_distance_admm_coordinator()
+start_msg = create_admm_start(create_admm_sharing_data([-4.0, 0.0, 6.0], [5, 1, 1]))
+
+start_coordinated_optimization([flex1, flex2, flex3], coordinator, start_msg)
+
+println(result(flex1))
+println(result(flex2))
+println(result(flex3))
+```
+
+## Complete Example — ADMM Consensus
+
+```@example admm-consensus
+using DistributedResourceOptimization
+
+# Two flex actors converge to a common 2-dimensional target [1.0, 2.0].
+# Each actor has 10 kW input capacity with efficiency vector [0.6, 0.4].
+actor1 = create_admm_flex_actor_one_to_many(10.0, [0.6, 0.4])
+actor2 = create_admm_flex_actor_one_to_many(10.0, [0.6, 0.4])
+
+coordinator = create_consensus_target_reach_admm_coordinator()
+start_msg = create_admm_start_consensus([1.0, 2.0])
+
+start_coordinated_optimization([actor1, actor2], coordinator, start_msg)
+
+println(result(actor1))
+println(result(actor2))
+```
 
-## Flexibility Actor
+!!! tip "Convergence Tips"
+    If ADMM diverges or converges slowly, try:
+    - Reducing `ρ` when primal residuals dominate
+    - Increasing `ρ` when dual residuals dominate
+    - Tightening `abs_tol` / `rel_tol` for higher precision
+    - Increasing `max_iters` for complex problems
 
-Each local actor `ì`` has some flexibility of ``m`` resources and a decision on the provided flexibility ``x_i``. The decision is constrained by
-* lower and upper bounds ``l_i \leq x_i \leq u_i``
-* coupling constraints ``C_i x_i\leq d_i``
-* linear penalites ``S_i`` for priorization
+## See Also
 
-To instantiate a flexibility actor use [`create_admm_flex_actor_one_to_many`](@ref).
+- [`ADMMFlexActor`](@ref), [`create_admm_flex_actor_one_to_many`](@ref), [`result`](@ref)
+- [`create_sharing_target_distance_admm_coordinator`](@ref), [`create_admm_start`](@ref), [`create_admm_sharing_data`](@ref)
+- [`create_consensus_target_reach_admm_coordinator`](@ref), [`create_admm_start_consensus`](@ref)
+- [Tutorial: Energy Dispatch with ADMM](../tutorials/energy_dispatch.md)
```
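The sharing-form update rules documented in the new `admm.md` can be sanity-checked numerically. Below is a minimal Python sketch, not the DRO.jl API: `f_i` and `g` are hypothetical quadratics chosen so each argmin has a closed form, and the iteration follows the standard sharing-form ADMM updates (x-update, average, z-update on the aggregate, dual update).

```python
# Toy sharing ADMM: minimize sum_i 0.5*(x_i - a_i)^2 + 0.5*lam*(sum_i x_i - b)^2.
# All names (a, b, lam) are illustrative, not part of DRO.jl.
a = [1.0, 2.0, 3.0]
b, lam, rho = 3.0, 1.0, 1.0
N = len(a)

x = [0.0] * N   # local decisions x_i
z = 0.0         # "average" global variable (g is evaluated at N*z)
u = 0.0         # scaled dual variable

for _ in range(200):
    xbar_old = sum(x) / N
    # x_i-update: argmin f_i(x_i) + rho/2 * ||x_i - x_i^k + xbar^k - z^k + u^k||^2
    x = [(a_i + rho * (x_i - xbar_old + z - u)) / (1.0 + rho)
         for a_i, x_i in zip(a, x)]
    xbar = sum(x) / N
    # z-update: argmin g(N*z) + N*rho/2 * ||z - xbar - u||^2 (quadratic g -> closed form)
    z = (lam * b + rho * (xbar + u)) / (lam * N + rho)
    u = u + xbar - z  # dual update

# Analytic optimum: x_i = a_i + delta with delta = -lam*(sum(a) - b) / (1 + lam*N)
print([round(v, 4) for v in x], round(sum(x), 4))  # -> [0.25, 1.25, 2.25] 3.75
```

The iterates converge geometrically here, and the aggregate `sum(x)` lands on the analytic optimum 3.75, matching the KKT condition `x_i - a_i = -lam*(sum(x) - b)`.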
