
Commit 5552fa3: Update ClusterODX
1 parent d6e4f4d

4 files changed, 24 additions and 24 deletions

src/content/docs/installation.md (2 additions, 2 deletions)

@@ -100,11 +100,11 @@ To install WebODM on a Qnap NAS:
 
 ### Manage Processing Nodes
 
-WebODM can be linked to one or more processing nodes that speak the [NodeODX API](https://github.com/WebODM/NodeODX/blob/master/docs/index.adoc), such as [NodeODX](https://github.com/WebODM/NodeODX), [NodeMICMAC](https://github.com/OpenDroneMap/NodeMICMAC/), [ClusterODM](https://github.com/WebODM/ClusterODM) and [Lightning](https://webodm.net). The default configuration includes a "node-odx-1" processing node which runs on the same machine as WebODM, just to help you get started. As you become more familiar with WebODM, you might want to install processing nodes on separate machines.
+WebODM can be linked to one or more processing nodes that speak the [NodeODX API](https://github.com/WebODM/NodeODX/blob/master/docs/index.adoc), such as [NodeODX](https://github.com/WebODM/NodeODX), [NodeMICMAC](https://github.com/OpenDroneMap/NodeMICMAC/), [ClusterODX](https://github.com/WebODM/ClusterODX) and [Lightning](https://webodm.net). The default configuration includes a "node-odx-1" processing node which runs on the same machine as WebODM, just to help you get started. As you become more familiar with WebODM, you might want to install processing nodes on separate machines.
 
 Adding more processing nodes will allow you to run multiple jobs in parallel.
 
-You can also setup a [ClusterODM](https://github.com/WebODM/ClusterODM) node to run a single task across multiple machines with [distributed split-merge](https://docs.opendronemap.org/large/?highlight=distributed#getting-started-with-distributed-split-merge) and process dozen of thousands of images more quickly, with less memory.
+You can also setup a [ClusterODX](https://github.com/WebODM/ClusterODX) node to run a single task across multiple machines with [distributed split-merge](https://docs.opendronemap.org/large/?highlight=distributed#getting-started-with-distributed-split-merge) and process dozen of thousands of images more quickly, with less memory.
 
 If you don't need the default "node-odx-1" node, simply pass `--default-nodes 0` flag when starting WebODM:

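The `--default-nodes 0` flag in this hunk is passed to WebODM's launcher script. A minimal sketch, assuming the standard `webodm.sh` launcher at the root of a WebODM checkout (the path is an assumption, not part of this commit):

```shell
# Start WebODM without the bundled "node-odx-1" processing node.
# Adjust the launcher path for your own installation.
./webodm.sh restart --default-nodes 0
```

Processing nodes can then be added manually from WebODM's Processing Nodes menu.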
src/content/docs/options-flags.md (1 addition, 1 deletion)

@@ -508,7 +508,7 @@ Automatically compute image masks using AI to remove the sky. Experimental.
 
 ## sm-cluster
 
-URL to a ClusterODM instance for distributing a split-merge workflow on multiple nodes in parallel.
+URL to a ClusterODX instance for distributing a split-merge workflow on multiple nodes in parallel.
 
 **Options:** `<string>`
 
src/content/docs/tutorials/large-datasets.md (14 additions, 14 deletions)

@@ -33,16 +33,16 @@ will create 3 submodels. Make sure to pass `--split-overlap 0` if you manually p
 
 ## Distributed Split-Merge
 
-WebODM can also automatically distribute the processing of each submodel to multiple machines via [NodeODX](https://github.com/WebODM/NodeODX) nodes, orchestrated via [ClusterODM](https://github.com/WebODM/ClusterODM).
+WebODM can also automatically distribute the processing of each submodel to multiple machines via [NodeODX](https://github.com/WebODM/NodeODX) nodes, orchestrated via [ClusterODX](https://github.com/WebODM/ClusterODX).
 
-![ClusterODM](/images/clusterodm.webp)
+![ClusterODX](/images/ClusterODX.webp)
 
 ### Getting Started with Distributed Split-Merge
 
-The first step is start ClusterODM:
+The first step is start ClusterODX:
 
 ```bash
-docker run -ti -p 3001:3000 -p 8080:8080 webodm/clusterodm
+docker run -ti -p 3001:3000 -p 8080:8080 webodm/clusterodx
 ```
 
 Then on each machine you want to use for processing, launch a NodeODX instance via:

@@ -51,7 +51,7 @@ Then on each machine you want to use for processing, launch a NodeODX instance v
 docker run -ti -p 3000:3000 webodm/nodeodx
 ```
 
-Connect via telnet to ClusterODM and add the IP addresses/port of the machines running NodeODX:
+Connect via telnet to ClusterODX and add the IP addresses/port of the machines running NodeODX:
 
 ```bash
 $ telnet <cluster-odm-ip> 8080

@@ -93,7 +93,7 @@ ASR VIEWCMD <number of images> - View command used to create a machine
 !! - Repeat last command
 ```
 
-If the NodeODX instance wasn't active when ClusterODM started, you can perform a `NODE UPDATE`:
+If the NodeODX instance wasn't active when ClusterODX started, you can perform a `NODE UPDATE`:
 
 ```
 # NODE UPDATE

@@ -111,25 +111,25 @@ While a process is running, it is also possible to list the tasks and view the t
 # TASK OUTPUT <taskId> [lines]
 ```
 
-### Autoscaling ClusterODM
+### Autoscaling ClusterODX
 
-ClusterODM also includes the option to autoscale on multiple platforms, including Amazon and Digital Ocean. This allows users to reduce costs associated with always-on instances as well as being able to scale processing based on demand.
+ClusterODX also includes the option to autoscale on multiple platforms, including Amazon and Digital Ocean. This allows users to reduce costs associated with always-on instances as well as being able to scale processing based on demand.
 
 To setup autoscaling you must:
 
-- Have a functioning version of NodeJS installed and then install ClusterODM:
+- Have a functioning version of NodeJS installed and then install ClusterODX:
 
 ```bash
-git clone https://github.com/WebODM/ClusterODM
-cd ClusterODM
+git clone https://github.com/WebODM/ClusterODX
+cd ClusterODX
 npm install
 ```
 
 - Make sure docker-machine is installed.
 - Setup a S3-compatible bucket for storing results.
-- Create a configuration file for [DigitalOcean](https://github.com/WebODM/ClusterODM/blob/master/docs/digitalocean.md) or [Amazon Web Services](https://github.com/WebODM/ClusterODM/blob/master/docs/aws.md).
+- Create a configuration file for [DigitalOcean](https://github.com/WebODM/ClusterODX/blob/master/docs/digitalocean.md) or [Amazon Web Services](https://github.com/WebODM/ClusterODX/blob/master/docs/aws.md).
 
-You can then launch ClusterODM with:
+You can then launch ClusterODX with:
 
 ```bash
 node index.js --asr configuration.json

@@ -143,7 +143,7 @@ info: Can write to S3
 info: Found docker-machine executable
 ```
 
-You should always have at least one static NodeODX node attached to ClusterODM, even if you plan to use the autoscaler for all processing. If you setup auto scaling, you can't have zero nodes and rely 100% on the autoscaler. You need to attach a NodeODX node to act as the "reference node" otherwise ClusterODM will not know how to handle certain requests. For this purpose, you should add a "dummy" NodeODX node and lock it:
+You should always have at least one static NodeODX node attached to ClusterODX, even if you plan to use the autoscaler for all processing. If you setup auto scaling, you can't have zero nodes and rely 100% on the autoscaler. You need to attach a NodeODX node to act as the "reference node" otherwise ClusterODX will not know how to handle certain requests. For this purpose, you should add a "dummy" NodeODX node and lock it:
 
 ```bash
 telnet localhost 8080

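The telnet administration shown in these hunks can also be scripted rather than typed interactively. A minimal sketch using netcat, assuming the admin console listens on port 8080 and a NodeODX instance runs at 192.168.1.20:3000 (both placeholders; the `NODE ADD`/`NODE LIST` command names are assumed from ClusterODM's admin console, not taken from this commit):

```shell
# Register a NodeODX instance and list nodes, non-interactively.
# -w 2 closes the connection after two seconds of idle time.
printf 'NODE ADD 192.168.1.20 3000\nNODE LIST\n' | nc -w 2 localhost 8080
```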
src/content/docs/tutorials/using-singularity.md (7 additions, 7 deletions)

@@ -34,13 +34,13 @@ singularity run --bind /my/project:/datasets/code \
 --project-path /datasets
 ```
 
-### ClusterODM, NodeODX, SLURM, with Singularity on HPC
+### ClusterODX, NodeODX, SLURM, with Singularity on HPC
 
-You can write a SLURM script to schedule and set up available nodes with NodeODX for ClusterODM to be wired to if you are on the HPC. Using SLURM will decrease the amount of time and processes needed to set up nodes for ClusterODM each time.
+You can write a SLURM script to schedule and set up available nodes with NodeODX for ClusterODX to be wired to if you are on the HPC. Using SLURM will decrease the amount of time and processes needed to set up nodes for ClusterODX each time.
 
 To setup HPC with SLURM, you must make sure SLURM is installed.
 
-SLURM script will be different from cluster to cluster, depending on which nodes in the cluster that you have. However, the main idea is to run NodeODX on each node once, and by default, each NodeODX will be running on port 3000. After that, run ClusterODM on the head node and connect the running NodeODXs to the ClusterODM.
+SLURM script will be different from cluster to cluster, depending on which nodes in the cluster that you have. However, the main idea is to run NodeODX on each node once, and by default, each NodeODX will be running on port 3000. After that, run ClusterODX on the head node and connect the running NodeODXs to the ClusterODX.
 
 Here is an example of a SLURM script assigning nodes 48, 50, 51 to run NodeODX:
 

@@ -66,7 +66,7 @@ wait
 
 You can check for available nodes using `sinfo`, run the script with `sbatch sample.slurm`, and check running jobs with `squeue -u $USER`.
 
-SLURM does not handle assigning jobs to the head node, so run ClusterODM locally. Then connect to the CLI and wire the NodeODXs to ClusterODM:
+SLURM does not handle assigning jobs to the head node, so run ClusterODX locally. Then connect to the CLI and wire the NodeODXs to ClusterODX:
 
 ```bash
 telnet localhost 8080

@@ -76,7 +76,7 @@ telnet localhost 8080
 > NODE LIST
 ```
 
-It is also possible to pre-populate nodes using JSON. If starting ClusterODM from apptainer or docker, the relevant JSON is available at `docker/data/nodes.json`:
+It is also possible to pre-populate nodes using JSON. If starting ClusterODX from apptainer or docker, the relevant JSON is available at `docker/data/nodes.json`:
 
 ```json
 [

@@ -86,13 +86,13 @@ It is also possible to pre-populate nodes using JSON. If starting ClusterODM fro
 ]
 ```
 
-After hosting ClusterODM on the head node and wiring it to NodeODX, you can tunnel to see if ClusterODM works as expected:
+After hosting ClusterODX on the head node and wiring it to NodeODX, you can tunnel to see if ClusterODX works as expected:
 
 ```bash
 ssh -L localhost:10000:localhost:10000 user@hostname
 ```
 
-Open a browser and connect to `http://localhost:10000` (port 10000 is where ClusterODM's administrative web interface is hosted).
+Open a browser and connect to `http://localhost:10000` (port 10000 is where ClusterODX's administrative web interface is hosted).
 
 Then tunnel port 3000 for task assignment:

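The body of `docker/data/nodes.json` is elided from the hunk above. As an illustration only (the field names are assumed from ClusterODM-style node entries and the host is a placeholder, neither taken from this commit), a pre-populated file could be generated and validated like this:

```shell
# Write a sample nodes.json for pre-populating processing nodes.
# Field names and values are illustrative assumptions.
cat > nodes.json <<'EOF'
[
  {"hostname": "192.168.1.20", "port": "3000", "token": ""}
]
EOF
# Sanity-check that the file parses as valid JSON.
python3 -m json.tool nodes.json
```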