Commit 9c24fb6 (1 parent: 83b0436)

Adding some examples to the readme.

2 files changed: 81 additions & 3 deletions

README.md: 80 additions & 2 deletions
@@ -1,5 +1,7 @@
+![lifecycle](https://img.shields.io/badge/lifecycle-experimental-blue.svg)
+[![MIT License](https://img.shields.io/badge/license-MIT-green.svg)](https://github.com/Digitalized-Energy-Systems/DistributedOptimization.jl/blob/development/LICENSE)
 
-# Distributed Optimization for Julia
+# Distributed Resource Optimization for Julia
 
 The package DistributedOptimization.jl (DO) aims to provide a collection of distributed optimization algorithms. The algorithms are implemented without assuming any specific communication technique or package. DO provides abstract types and function interfaces to implement so-called carriers, which execute the distributed algorithms asynchronously. All algorithms can also be used without a carrier, using fitting `@spawn` or `@async` statements.
 
@@ -10,4 +12,80 @@ Currently there are two tested algorithms:
 There is one carrier implemented:
 
 * Mango.jl, an agent framework for the simulation of distributed systems; DO provides roles to which the specific algorithms can be assigned
 
-Note that the package is highly work in progress.
+Note that the package is very much a work in progress.
+
+### Using the sharing ADMM with flex actors (e.g. for resource optimization) with Mango.jl
+
+```julia
+using Mango
+using DistributedOptimization
+
+@role struct HandleOptimizationResultRole
+    got_it::Bool = false
+end
+
+function Mango.handle_message(role::HandleOptimizationResultRole, message::OptimizationFinishedMessage, meta::Any)
+    role.got_it = true
+end
+
+container = create_tcp_container("127.0.0.1", 5555)
+
+# create participant models
+flex_actor = create_admm_flex_actor_one_to_many(10, [0.1, 0.5, -1])
+flex_actor2 = create_admm_flex_actor_one_to_many(15, [0.1, 0.5, -1])
+flex_actor3 = create_admm_flex_actor_one_to_many(10, [0.1, 0.5, -1])
+
+# create coordinator with objective
+coordinator = create_sharing_target_distance_admm_coordinator()
+
+# create roles to integrate ADMM in Mango.jl
+dor = DistributedOptimizationRole(flex_actor, tid=:custom)
+dor2 = DistributedOptimizationRole(flex_actor2, tid=:custom)
+dor3 = DistributedOptimizationRole(flex_actor3, tid=:custom)
+coord_role = CoordinatorRole(coordinator, tid=:custom, include_self=true)
+
+# roles to handle the result
+handle = HandleOptimizationResultRole()
+handle2 = HandleOptimizationResultRole()
+handle3 = HandleOptimizationResultRole()
+
+# create agents
+add_agent_composed_of(container, dor, handle)
+c = add_agent_composed_of(container, dor2, handle2)
+ca = add_agent_composed_of(container, coord_role, dor3, handle3)
+
+# create a topology of the agents
+auto_assign!(complete_topology(3, tid=:custom), container)
+
+# run the simulation with a start message and wait for the result
+activate(container) do
+    wait(send_message(c, StartCoordinatedDistributedOptimization(create_admm_start(create_admm_sharing_data([0.2, 1, -2]))), address(ca)))
+    wait(coord_role.task)
+end
+```
+
+### Using COHDA with Mango.jl
+
+```julia
+using Mango
+using DistributedOptimization
+
+container = create_tcp_container("127.0.0.1", 5555)
+
+# create agents with local models wrapped in the general distributed optimization role
+agent_one = add_agent_composed_of(container, DistributedOptimizationRole(
+    create_cohda_participant(1, [[0.0, 1, 2], [1, 2, 3]])))
+agent_two = add_agent_composed_of(container, DistributedOptimizationRole(
+    create_cohda_participant(2, [[0.0, 1, 2], [1, 2, 3]])))
+
+# create the start message
+initial_message = create_cohda_start_message([1.2, 2, 3])
+
+# create topology
+auto_assign!(complete_topology(2), container)
+
+# run simulation
+activate(container) do
+    send_message(agent_one, initial_message, address(agent_two))
+end
+```
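For readers unfamiliar with the sharing formulation of ADMM that the example coordinates, here is a generic, self-contained sketch of the update scheme (after Boyd et al.'s sharing ADMM). This is not the package's implementation: the quadratic local cost `f_i(x) = lam/2 * ||x||^2` and the coordinator objective `g(s) = 1/2 * ||s - target||^2` on the summed schedule are illustrative assumptions, chosen so every update has a closed form.

```julia
# Sharing ADMM sketch: n participants choose x_i to minimize
# sum_i f_i(x_i) + g(sum_i x_i). Not the package's implementation;
# f_i and g are assumed quadratic here so the updates are closed-form.
function sharing_admm(target::Vector{Float64}, n::Int; lam=0.1, rho=1.0, iters=300)
    d = length(target)
    xs = [zeros(d) for _ in 1:n]  # local decision variables
    zbar = zeros(d)               # averaged "shared" variable
    u = zeros(d)                  # scaled dual variable
    for _ in 1:iters
        xbar = sum(xs) / n
        for i in 1:n              # local x-updates (done by the participants)
            v = xs[i] - xbar + zbar - u
            xs[i] = (rho / (lam + rho)) * v
        end
        xbar = sum(xs) / n
        # coordinator update: minimize g(n*zbar) + n*rho/2 * ||zbar - xbar - u||^2
        zbar = (target + rho * (xbar + u)) / (n + rho)
        u = u + xbar - zbar       # dual update
    end
    return xs
end

# with three identical participants, the summed schedule converges
# towards n/(n + lam) * target
xs = sharing_admm([0.2, 1.0, -2.0], 3)
```

In the Mango.jl example above, the per-participant x-update corresponds to the flex actors, while the z- and dual updates correspond to the coordinator role.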

test/cohda_mango_tests.jl: 1 addition & 1 deletion

@@ -11,7 +11,7 @@ using DistributedOptimization
 
 initial_message = create_cohda_start_message([1.2, 2, 3])
 
-auto_assign!(complete_topology(), container)
+auto_assign!(complete_topology(2), container)
 
 activate(container) do
     send_message(agent_one, initial_message, address(agent_two))
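For context on the algorithm under test: conceptually, a COHDA participant, on receiving a peer's message, picks from its set of feasible schedules the one that brings the aggregated cluster schedule closest to the target. A minimal sketch of that local decision step (the `cohda_decide` helper and its signature are hypothetical, not the DistributedOptimization.jl API):

```julia
# Hypothetical sketch of COHDA's local decision heuristic, not the package API.
# Given the agent's feasible schedules, the summed schedules currently chosen
# by the other agents, and the global target, pick the feasible schedule that
# minimizes the distance of the cluster schedule to the target.
function cohda_decide(feasible::Vector{Vector{Float64}},
                      others_sum::Vector{Float64},
                      target::Vector{Float64})
    dist(s) = sum(abs2, others_sum .+ s .- target)  # squared Euclidean distance
    return argmin(dist, feasible)
end

# Mirroring the README example: the second agent, whose peer already chose
# [0.0, 1, 2], decides against the target [1.2, 2, 3]
cohda_decide([[0.0, 1.0, 2.0], [1.0, 2.0, 3.0]], [0.0, 1.0, 2.0], [1.2, 2.0, 3.0])
# → [0.0, 1.0, 2.0]
```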

0 commit comments
