# ADMM

The Alternating Direction Method of Multipliers (ADMM) is a convex optimization algorithm
that decomposes a large problem into smaller subproblems solved locally, coordinated by a
central update. DRO implements two ADMM variants: **Sharing** and **Consensus**. Every
ADMM optimization combines a problem form with a local model that determines each agent's
constraints and objective.

## Problem Forms

### Sharing

The sharing form distributes a resource across ``N`` agents whose individual contributions
must sum to a global variable ``z``:

```math
\begin{equation}
\begin{split}
\min_{\{x_i\},\,z}\;\; \sum_{i=1}^N f_i(x_i) \;+\; g(z) \\
\quad\text{s.t.}\quad \sum_{i=1}^N x_i = z
\end{split}
\end{equation}
```

where ``f_i`` is the local cost of agent ``i``, ``x_i \in \mathbb{R}^m`` its decision variable,
and ``g`` a global penalty on the aggregate ``z``.

The ADMM iterations with dual variable ``u`` and penalty ``\rho`` are:

```math
\begin{align}
x_i^{k+1}
  &= \arg\min_{x_i}\;
  f_i(x_i) + \tfrac{\rho}{2}\,\big\lVert x_i - (z^k - u^k) \big\rVert_2^2,
  \quad i=1,\dots,N
  \\[4pt]
\bar{x}^{\,k+1}
  &= \tfrac{1}{N}\sum_{i=1}^N x_i^{k+1}
  \\[4pt]
z^{k+1}
  &= \arg\min_{z}\;
  g(N\cdot z) + \tfrac{N\rho}{2}\,\big\lVert z - \bar{x}^{\,k+1} - u^k \big\rVert_2^2
  \\[4pt]
u^{k+1}
  &= u^k + \bar{x}^{\,k+1} - z^{k+1}
\end{align}
```

In the iterations ``z`` tracks the per-agent average ``\bar{x}``, which is why the aggregate
entering ``g`` is ``N\cdot z``.

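To make the update order concrete, here is a dependency-free numerical sketch of these iterations in plain Python (deliberately not the DRO API): ``N`` scalar agents with quadratic costs ``f_i(x) = (x - a_i)^2`` and aggregate penalty ``g(t) = (t - T)^2``, so both argmins have closed forms. The names `a` and `T` and the quadratic choices are illustrative assumptions.

```python
# Sharing ADMM for N scalar agents: f_i(x) = (x - a_i)^2, g(t) = (t - T)^2.
# z tracks the per-agent *average* of the x_i, hence g(N*z) in the z-update.
a = [1.0, 2.0, 3.0]   # each agent's preferred operating point (illustrative)
T = 0.0               # target for the aggregate sum (illustrative)
N = len(a)
rho = 1.0

z, u = 0.0, 0.0
for _ in range(200):
    # x-update: argmin_x (x - a_i)^2 + rho/2 * (x - (z - u))^2, in closed form
    x = [(2 * ai + rho * (z - u)) / (2 + rho) for ai in a]
    x_bar = sum(x) / N
    # z-update: argmin_z (N*z - T)^2 + N*rho/2 * (z - x_bar - u)^2, in closed form
    z = (2 * T + rho * (x_bar + u)) / (2 * N + rho)
    # scaled dual update
    u = u + x_bar - z

total = sum(x)
print(round(total, 4))   # -> 1.5, the analytic optimum (sum(a) + N*T) / (N + 1)
```

At convergence ``\bar{x} = z``, so the aggregate settles at the minimizer of ``\sum_i (x_i - a_i)^2 + (\sum_i x_i - T)^2``.
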
To create a coordinator for this form use [`create_sharing_target_distance_admm_coordinator`](@ref),
and start the negotiation with [`create_admm_start`](@ref).

### Consensus

The consensus form drives all agents to agree on a single global value ``z``, shaped by a
global objective ``g``:

```math
\begin{equation}
\begin{split}
\min_{\{x_i\},\,z}\;\; \sum_{i=1}^N f_i(x_i) \;+\; g(z) \\
\quad\text{s.t.}\quad x_i = z,\;\; i=1,\dots,N
\end{split}
\end{equation}
```

With per-agent dual variables ``u_i`` and penalty ``\rho``, the update iterations are:

```math
\begin{align}
x_i^{k+1}
  &= \arg\min_{x_i}\;
  f_i(x_i) + \frac{\rho}{2}\big\| x_i - \big(z^k - u_i^k \big) \big\|_2^2,
  \quad i=1,\dots,N
  \\[4pt]
z^{k+1}
  &= \arg\min_{z}\;
  g(z) + \frac{N \rho}{2}\left\| z - \Big( \bar{x}^{k+1} + \bar{u}^k \Big) \right\|_2^2
  \\[4pt]
u_i^{k+1}
  &= u_i^k + x_i^{k+1} - z^{k+1}
\end{align}
```

where ``\bar{x}`` and ``\bar{u}`` denote the averages over the ``N`` agents.

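As with the sharing form, the consensus iterations admit a dependency-free sketch in plain Python (not the DRO API), using scalar agents ``f_i(x) = (x - a_i)^2`` and target objective ``g(z) = (z - T)^2`` as illustrative assumptions:

```python
# Consensus ADMM: every agent must agree on z; g(z) = (z - T)^2 pulls z toward T.
a = [0.0, 2.0, 4.0]   # each agent's preferred value (illustrative)
T = 2.0               # global target (illustrative)
N = len(a)
rho = 1.0

z = 0.0
u = [0.0] * N          # one scaled dual variable per agent
for _ in range(200):
    # per-agent x-update: argmin_x (x - a_i)^2 + rho/2 * (x - (z - u_i))^2
    x = [(2 * ai + rho * (z - ui)) / (2 + rho) for ai, ui in zip(a, u)]
    x_bar = sum(x) / N
    u_bar = sum(u) / N
    # z-update: argmin_z (z - T)^2 + N*rho/2 * (z - (x_bar + u_bar))^2
    z = (2 * T + N * rho * (x_bar + u_bar)) / (2 + N * rho)
    # per-agent dual updates
    u = [ui + xi - z for ui, xi in zip(u, x)]

print(round(z, 4))   # -> 2.0, the minimizer of sum_i (z - a_i)^2 + (z - T)^2
```
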
To create a coordinator for the consensus form use [`create_consensus_target_reach_admm_coordinator`](@ref),
and to construct the start message use [`create_admm_start_consensus`](@ref).

## Local Model: Flexibility Actor

Each participant is modelled as a *flexibility actor*, a local resource with bounded and coupled
decision variables:

| Constraint | Description |
|-----------|-------------|
| ``l_i \leq x_i \leq u_i`` | Box constraints (lower/upper bounds per sector) |
| ``C_i x_i \leq d_i`` | Coupling constraints (e.g., input-output coupling) |
| ``S_i^\top x_i`` | Linear priority penalty added to the local objective |

At each ADMM iteration the actor solves a small QP (via OSQP) to compute ``x_i^{k+1}``.

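For intuition about this local step: when only the box constraints and the linear priority term are active (no coupling rows ``C_i x_i \leq d_i``), the QP separates per component and reduces to a clip. A plain-Python sketch of that special case (the actual implementation goes through OSQP to handle the general coupled case):

```python
# Special case of the local x-update: min_x  S^T x + rho/2 * ||x - v||^2
# subject to l <= x <= u only. The unconstrained minimizer is v - S/rho,
# and the box constraints simply clip it componentwise.
def local_step(v, S, l, u, rho):
    return [min(max(vi - si / rho, li), ui)
            for vi, si, li, ui in zip(v, S, l, u)]

# Sector 0 carries a priority penalty of 5, pushing it to its lower bound.
x = local_step(v=[1.5, -0.2, 0.8], S=[5.0, 0.0, 0.0],
               l=[0.0, 0.0, 0.0], u=[1.0, 1.0, 1.0], rho=1.0)
print(x)   # -> [0.0, 0.0, 0.8]
```
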
### One-to-Many Resource

A common model is a resource that converts a single input into ``m`` outputs with given
efficiencies ``\eta \in \mathbb{R}^m``. Use [`create_admm_flex_actor_one_to_many`](@ref):

```julia
# 10 kW input capacity, three outputs with efficiencies [0.1, 0.5, -1.0].
# A negative efficiency means the resource *consumes* that output type.
actor = create_admm_flex_actor_one_to_many(10.0, [0.1, 0.5, -1.0])
```

An optional priorities vector ``P`` biases the solution towards specific sectors:

```julia
# Prefer sector 1 with priority 5
actor = create_admm_flex_actor_one_to_many(10.0, [0.1, 0.5, -1.0], [5.0, 0.0, 0.0])
```

After the optimization finishes, retrieve the result with:

```julia
x_opt = result(actor)  # Vector{Float64} of length m
```

## Coordinator Parameters

The generic ADMM coordinator ([`ADMMGenericCoordinator`](@ref)) exposes several tuning parameters:

| Parameter | Default | Description |
|-----------|---------|-------------|
| `ρ` | `1.0` | Penalty parameter; larger values enforce constraints faster but may slow convergence |
| `max_iters` | `1000` | Maximum number of ADMM iterations |
| `abs_tol` | `1e-4` | Absolute primal/dual residual tolerance |
| `rel_tol` | `1e-3` | Relative primal/dual residual tolerance |
| `μ` | `10` | Residual ratio threshold for ρ adaptation |
| `τ` | `2` | Multiplicative factor for ρ adaptation |
| `slack_penalty` | `100` | Penalty for infeasibility slack variables |

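The table leaves the adaptation rule itself implicit. The conventional residual-balancing scheme uses the threshold `μ` and factor `τ` as sketched below; whether DRO follows this rule exactly is an assumption.

```python
# Residual balancing: grow rho while the primal residual dominates,
# shrink it while the dual residual dominates, else leave it alone.
def adapt_rho(rho, r_norm, s_norm, mu=10.0, tau=2.0):
    if r_norm > mu * s_norm:
        return rho * tau      # constraint violation too large -> penalise harder
    if s_norm > mu * r_norm:
        return rho / tau      # iterates over-damped -> relax the penalty
    return rho

print(adapt_rho(1.0, r_norm=50.0, s_norm=1.0))   # -> 2.0
print(adapt_rho(1.0, r_norm=1.0, s_norm=50.0))   # -> 0.5
print(adapt_rho(1.0, r_norm=3.0, s_norm=1.0))    # -> 1.0
```
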
## Complete Example — ADMM Sharing

```@example admm-sharing
using DistributedResourceOptimization

# Three flexible resources (e.g., heat pumps, battery, PV inverter),
# each converting 10/15/10 kW of input into three output types
flex1 = create_admm_flex_actor_one_to_many(10.0, [0.1, 0.5, -1.0])
flex2 = create_admm_flex_actor_one_to_many(15.0, [0.1, 0.5, -1.0])
flex3 = create_admm_flex_actor_one_to_many(10.0, [-1.0, 0.0, 1.0])

# The coordinator minimises the weighted distance of Σxᵢ to the target [-4, 0, 6];
# priority weights [5, 1, 1] penalise deviations in sector 1 most heavily
coordinator = create_sharing_target_distance_admm_coordinator()
start_msg = create_admm_start(create_admm_sharing_data([-4.0, 0.0, 6.0], [5, 1, 1]))

start_coordinated_optimization([flex1, flex2, flex3], coordinator, start_msg)

println(result(flex1))
println(result(flex2))
println(result(flex3))
```

## Complete Example — ADMM Consensus

```@example admm-consensus
using DistributedResourceOptimization

# Two flex actors converge to a common 2-dimensional target [1.0, 2.0].
# Each actor has 10 kW input capacity with efficiency vector [0.6, 0.4].
actor1 = create_admm_flex_actor_one_to_many(10.0, [0.6, 0.4])
actor2 = create_admm_flex_actor_one_to_many(10.0, [0.6, 0.4])

coordinator = create_consensus_target_reach_admm_coordinator()
start_msg = create_admm_start_consensus([1.0, 2.0])

start_coordinated_optimization([actor1, actor2], coordinator, start_msg)

println(result(actor1))
println(result(actor2))
```

!!! tip "Convergence Tips"
    If ADMM diverges or converges slowly, try:
    - Increasing `ρ` when primal residuals dominate
    - Reducing `ρ` when dual residuals dominate
    - Tightening `abs_tol` / `rel_tol` for higher precision
    - Increasing `max_iters` for complex problems

## See Also

- [`ADMMFlexActor`](@ref), [`create_admm_flex_actor_one_to_many`](@ref), [`result`](@ref)
- [`create_sharing_target_distance_admm_coordinator`](@ref), [`create_admm_start`](@ref), [`create_admm_sharing_data`](@ref)
- [`create_consensus_target_reach_admm_coordinator`](@ref), [`create_admm_start_consensus`](@ref)
- [Tutorial: Energy Dispatch with ADMM](../tutorials/energy_dispatch.md)