**`src/content/docs/installation.md`** (+2 −2)

````diff
@@ -100,11 +100,11 @@ To install WebODM on a Qnap NAS:
 
 ### Manage Processing Nodes
 
-WebODM can be linked to one or more processing nodes that speak the [NodeODX API](https://github.com/WebODM/NodeODX/blob/master/docs/index.adoc), such as [NodeODX](https://github.com/WebODM/NodeODX), [NodeMICMAC](https://github.com/OpenDroneMap/NodeMICMAC/), [ClusterODM](https://github.com/WebODM/ClusterODM) and [Lightning](https://webodm.net). The default configuration includes a "node-odx-1" processing node which runs on the same machine as WebODM, just to help you get started. As you become more familiar with WebODM, you might want to install processing nodes on separate machines.
+WebODM can be linked to one or more processing nodes that speak the [NodeODX API](https://github.com/WebODM/NodeODX/blob/master/docs/index.adoc), such as [NodeODX](https://github.com/WebODM/NodeODX), [NodeMICMAC](https://github.com/OpenDroneMap/NodeMICMAC/), [ClusterODX](https://github.com/WebODM/ClusterODX) and [Lightning](https://webodm.net). The default configuration includes a "node-odx-1" processing node which runs on the same machine as WebODM, just to help you get started. As you become more familiar with WebODM, you might want to install processing nodes on separate machines.
 
 Adding more processing nodes will allow you to run multiple jobs in parallel.
 
-You can also setup a [ClusterODM](https://github.com/WebODM/ClusterODM) node to run a single task across multiple machines with [distributed split-merge](https://docs.opendronemap.org/large/?highlight=distributed#getting-started-with-distributed-split-merge) and process dozen of thousands of images more quickly, with less memory.
+You can also set up a [ClusterODX](https://github.com/WebODM/ClusterODX) node to run a single task across multiple machines with [distributed split-merge](https://docs.opendronemap.org/large/?highlight=distributed#getting-started-with-distributed-split-merge) and process tens of thousands of images more quickly, with less memory.
 
 If you don't need the default "node-odx-1" node, simply pass the `--default-nodes 0` flag when starting WebODM:
````
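As a minimal sketch of that flag in use (assuming the standard `webodm.sh` launcher shipped in the WebODM repository; the `echo` guard only prints the command so the sketch is safe to run anywhere):

```shell
# Sketch: start WebODM without the bundled "node-odx-1" node.
# Drop the leading `echo` to actually run the launcher.
WEBODM_FLAGS="--default-nodes 0"
echo ./webodm.sh restart $WEBODM_FLAGS
```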
**`src/content/docs/tutorials/large-datasets.md`** (+14 −14)

````diff
@@ -33,16 +33,16 @@ will create 3 submodels. Make sure to pass `--split-overlap 0` if you manually p
 
 ## Distributed Split-Merge
 
-WebODM can also automatically distribute the processing of each submodel to multiple machines via [NodeODX](https://github.com/WebODM/NodeODX) nodes, orchestrated via [ClusterODM](https://github.com/WebODM/ClusterODM).
+WebODM can also automatically distribute the processing of each submodel to multiple machines via [NodeODX](https://github.com/WebODM/NodeODX) nodes, orchestrated via [ClusterODX](https://github.com/WebODM/ClusterODX).
 
 ### Getting Started with Distributed Split-Merge
 
-The first step is start ClusterODM:
+The first step is to start ClusterODX:
 
 ```bash
-docker run -ti -p 3001:3000 -p 8080:8080 webodm/clusterodm
+docker run -ti -p 3001:3000 -p 8080:8080 webodm/clusterodx
 ```
 
 Then on each machine you want to use for processing, launch a NodeODX instance via:
````
````diff
@@ -51,7 +51,7 @@ Then on each machine you want to use for processing, launch a NodeODX instance v
 docker run -ti -p 3000:3000 webodm/nodeodx
 ```
 
-Connect via telnet to ClusterODM and add the IP addresses/port of the machines running NodeODX:
+Connect via telnet to ClusterODX and add the IP addresses/ports of the machines running NodeODX:
 
 ```bash
 $ telnet <cluster-odm-ip> 8080
````
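For illustration, the registration commands for the telnet session could be prepared in a file first and then pasted into the prompt; the IP addresses below are placeholder assumptions, not real machines:

```shell
# Sketch: prepare ClusterODX CLI commands that register two NodeODX
# machines (addresses are hypothetical placeholders).
cat > add-nodes.txt <<'EOF'
NODE ADD 192.168.1.20 3000
NODE ADD 192.168.1.21 3000
NODE LIST
QUIT
EOF
# Review, then paste the lines into the `telnet <cluster-odm-ip> 8080` session.
cat add-nodes.txt
```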
````diff
@@ -93,7 +93,7 @@ ASR VIEWCMD <number of images> - View command used to create a machine
 !! - Repeat last command
 ```
 
-If the NodeODX instance wasn't active when ClusterODM started, you can perform a `NODE UPDATE`:
+If the NodeODX instance wasn't active when ClusterODX started, you can perform a `NODE UPDATE`:
 
 ```
 # NODE UPDATE
````
````diff
@@ -111,25 +111,25 @@ While a process is running, it is also possible to list the tasks and view the t
 # TASK OUTPUT <taskId> [lines]
 ```
 
-### Autoscaling ClusterODM
+### Autoscaling ClusterODX
 
-ClusterODM also includes the option to autoscale on multiple platforms, including Amazon and Digital Ocean. This allows users to reduce costs associated with always-on instances as well as being able to scale processing based on demand.
+ClusterODX also includes the option to autoscale on multiple platforms, including Amazon and DigitalOcean. This allows users to reduce the costs associated with always-on instances and to scale processing based on demand.
 
 To set up autoscaling you must:
 
-- Have a functioning version of NodeJS installed and then install ClusterODM:
+- Have a working installation of Node.js, then install ClusterODX:
 
 ```bash
-git clone https://github.com/WebODM/ClusterODM
-cd ClusterODM
+git clone https://github.com/WebODM/ClusterODX
+cd ClusterODX
 npm install
 ```
 
 - Make sure docker-machine is installed.
 - Set up an S3-compatible bucket for storing results.
-- Create a configuration file for [DigitalOcean](https://github.com/WebODM/ClusterODM/blob/master/docs/digitalocean.md) or [Amazon Web Services](https://github.com/WebODM/ClusterODM/blob/master/docs/aws.md).
+- Create a configuration file for [DigitalOcean](https://github.com/WebODM/ClusterODX/blob/master/docs/digitalocean.md) or [Amazon Web Services](https://github.com/WebODM/ClusterODX/blob/master/docs/aws.md).
 
-You can then launch ClusterODM with:
+You can then launch ClusterODX with:
 
 ```bash
 node index.js --asr configuration.json
````
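For orientation only, a DigitalOcean `configuration.json` might be shaped roughly like the sketch below. Every field name and value here is an illustrative assumption, not the verified schema; consult the `docs/digitalocean.md` file linked above for the actual format:

```json
{
  "provider": "digitalocean",
  "accessToken": "CHANGEME",
  "s3": {
    "accessKey": "CHANGEME",
    "secretKey": "CHANGEME",
    "endpoint": "nyc3.digitaloceanspaces.com",
    "bucket": "my-odm-results"
  },
  "imageSizeMapping": [
    { "maxImages": 40, "slug": "s-2vcpu-4gb" },
    { "maxImages": 250, "slug": "s-4vcpu-8gb" }
  ]
}
```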
````diff
@@ -143,7 +143,7 @@ info: Can write to S3
 info: Found docker-machine executable
 ```
 
-You should always have at least one static NodeODX node attached to ClusterODM, even if you plan to use the autoscaler for all processing. If you setup auto scaling, you can't have zero nodes and rely 100% on the autoscaler. You need to attach a NodeODX node to act as the "reference node" otherwise ClusterODM will not know how to handle certain requests. For this purpose, you should add a "dummy" NodeODX node and lock it:
+You should always have at least one static NodeODX node attached to ClusterODX, even if you plan to use the autoscaler for all processing. With autoscaling you can't have zero nodes and rely 100% on the autoscaler: you need to attach a NodeODX node to act as the "reference node", otherwise ClusterODX will not know how to handle certain requests. For this purpose, you should add a "dummy" NodeODX node and lock it:
````
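The add-and-lock step could be sketched as the following telnet CLI commands (the address is a placeholder, and `NODE LOCK` is assumed to take the node's list index):

```shell
# Sketch: CLI commands to register a reference NodeODX node and lock it
# so the autoscaler never routes tasks to it (address is a placeholder).
cat > lock-dummy-node.txt <<'EOF'
NODE ADD localhost 3000
NODE LOCK 1
NODE LIST
EOF
# Paste these lines into the ClusterODX telnet session.
cat lock-dummy-node.txt
```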
**`src/content/docs/tutorials/using-singularity.md`** (+7 −7)

````diff
@@ -34,13 +34,13 @@ singularity run --bind /my/project:/datasets/code \
   --project-path /datasets
 ```
 
-### ClusterODM, NodeODX, SLURM, with Singularity on HPC
+### ClusterODX, NodeODX, SLURM, with Singularity on HPC
 
-You can write a SLURM script to schedule and set up available nodes with NodeODX for ClusterODM to be wired to if you are on the HPC. Using SLURM will decrease the amount of time and processes needed to set up nodes for ClusterODM each time.
+If you are on an HPC system, you can write a SLURM script to schedule and set up available nodes running NodeODX for ClusterODX to be wired to. Using SLURM reduces the time and the number of steps needed to set up nodes for ClusterODX each time.
 
 To set up HPC with SLURM, you must make sure SLURM is installed.
 
-SLURM script will be different from cluster to cluster, depending on which nodes in the cluster that you have. However, the main idea is to run NodeODX on each node once, and by default, each NodeODX will be running on port 3000. After that, run ClusterODM on the head node and connect the running NodeODXs to the ClusterODM.
+The SLURM script will differ from cluster to cluster, depending on which nodes your cluster has. However, the main idea is to run NodeODX once on each node; by default, each NodeODX instance listens on port 3000. After that, run ClusterODX on the head node and connect the running NodeODX instances to ClusterODX.
 
 Here is an example of a SLURM script assigning nodes 48, 50, 51 to run NodeODX:
````
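A batch script of the kind described above might look roughly like the following sketch; the node names, time limit, image path, and bind mount are all assumptions to adapt to your cluster, and the heredoc only writes the file so the sketch runs without SLURM:

```shell
# Sketch: write a SLURM batch script that starts NodeODX on three named
# nodes (names and paths are illustrative assumptions).
cat > sample.slurm <<'EOF'
#!/bin/bash
#SBATCH --nodes=3
#SBATCH --nodelist=node48,node50,node51
#SBATCH --time=04:00:00
# One NodeODX instance per allocated node, listening on the default port 3000.
srun singularity run --bind "$HOME/nodeodx-data:/var/www/data" nodeodx.sif &
wait
EOF
grep -c '^#SBATCH' sample.slurm
```

Submit it with `sbatch sample.slurm` as described below.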
````diff
@@ -66,7 +66,7 @@ wait
 
 You can check for available nodes using `sinfo`, run the script with `sbatch sample.slurm`, and check running jobs with `squeue -u $USER`.
 
-SLURM does not handle assigning jobs to the head node, so run ClusterODM locally. Then connect to the CLI and wire the NodeODXs to ClusterODM:
+SLURM does not assign jobs to the head node, so run ClusterODX there manually. Then connect to the CLI and wire the NodeODX instances to ClusterODX:
 
 ```bash
 telnet localhost 8080
````
````diff
@@ -76,7 +76,7 @@ telnet localhost 8080
 > NODE LIST
 ```
 
-It is also possible to pre-populate nodes using JSON. If starting ClusterODM from apptainer or docker, the relevant JSON is available at `docker/data/nodes.json`:
+It is also possible to pre-populate nodes using JSON. If starting ClusterODX from Apptainer or Docker, the relevant JSON file is available at `docker/data/nodes.json`:
 
 ```json
 [
````
````diff
@@ -86,13 +86,13 @@ It is also possible to pre-populate nodes using JSON. If starting ClusterODM fro
 ]
 ```
 
-After hosting ClusterODM on the head node and wiring it to NodeODX, you can tunnel to see if ClusterODM works as expected:
+After hosting ClusterODX on the head node and wiring it to NodeODX, you can tunnel in to check that ClusterODX works as expected:
````
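Such a tunnel might be opened with SSH port forwarding; the hostname, user, and the assumption that ClusterODX's web interface listens on port 3000 on the head node are placeholders, and the `echo` guard keeps the sketch from opening a real connection:

```shell
# Sketch: forward the head node's ClusterODX web port to your machine
# (user and hostname are hypothetical). Drop `echo` to open the tunnel,
# then browse http://localhost:3000.
TUNNEL_CMD="ssh -L 3000:localhost:3000 user@hpc-head-node"
echo "$TUNNEL_CMD"
```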