Commit b7c05b4

docs: adding self hosted executor docs (#3816)

1 parent 6898aaf commit b7c05b4

2 files changed

Lines changed: 45 additions & 2 deletions

File: docs/cloud/features/scheduler/scheduler.md

@@ -163,8 +163,51 @@ This gives you complete control over data security and network access while stil
### How it works

Self-hosted executors are, as the name indicates, workers hosted in your own infrastructure that take on the responsibility of "executing" changes to the data warehouse. While Tobiko Cloud schedules and plans changes, the executors are responsible for carrying them out. Executors are Docker containers configured to connect to both Tobiko Cloud and your data warehouse. With both connections in place, an executor pulls work from the cloud, whether that is a plan or scheduled background work from a run, and executes it on your data warehouse.
### Configuration

Exact configuration is left to the user and will vary with your infrastructure: for example, the executors could run on a Kubernetes cluster or as a standalone pair of Docker containers. The following walkthrough shows how this is done for a Postgres data warehouse with a pair of local containers running in Docker.
Tobiko Cloud requires two Docker instances to be running: one to pick up runs and one to pick up plans, with the entrypoints `executor run` and `executor plan` respectively. The executor container can be found on [Docker Hub](https://hub.docker.com/r/tobikodata/tcloud). In addition to running the containers, you will need to configure each executor with environment variables that point to Tobiko Cloud as well as to the data warehouse.
To connect to Tobiko Cloud, provide the following environment variables, replacing the values with your own:

```env
TCLOUD_URL=https://cloud.tobikodata.com/sqlmesh/acme/analytics_project
TCLOUD_TOKEN=your_token
```
In addition to the variables above, a gateway is needed to provide a connection to the data warehouse. The following example shows how this is done for a Postgres data warehouse where the gateway is named `GATEWAY_A`. For more details on configuring a gateway, see the documentation on other [engines](../../../integrations/overview.md#execution-engines) and on how to [override variables](../../../guides/configuration.md#overrides) as done below.

```env
SQLMESH__DEFAULT_GATEWAY=GATEWAY_A
SQLMESH__GATEWAYS__GATEWAY_A__CONNECTION__TYPE=postgres
SQLMESH__GATEWAYS__GATEWAY_A__CONNECTION__HOST=10.10.10.10
SQLMESH__GATEWAYS__GATEWAY_A__CONNECTION__PORT=5432
SQLMESH__GATEWAYS__GATEWAY_A__CONNECTION__DATABASE=example_db
SQLMESH__GATEWAYS__GATEWAY_A__CONNECTION__USER=example_user
SQLMESH__GATEWAYS__GATEWAY_A__CONNECTION__PASSWORD=example_password
```
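For orientation, the double-underscore variables above follow SQLMesh's configuration override convention, where each `__` descends one level into the configuration hierarchy. As an illustration (values mirror the example above), the overrides correspond to a `config.yaml` fragment along these lines:

```yaml
default_gateway: GATEWAY_A

gateways:
  GATEWAY_A:
    connection:
      type: postgres
      host: 10.10.10.10
      port: 5432
      database: example_db
      user: example_user
      password: example_password
```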
**Note**: If there are multiple gateways, each gateway needs its own set of environment variables. For example, with two gateways, `GATEWAY_A` and `GATEWAY_B`, the variables must be set for both:

```env
SQLMESH__GATEWAYS__GATEWAY_A__CONNECTION__TYPE=<connection type>
# <Gateway A connection settings>
SQLMESH__GATEWAYS__GATEWAY_B__CONNECTION__TYPE=<connection type>
# <Gateway B connection settings>
```
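As a concrete, hypothetical illustration: a second gateway `GATEWAY_B` backed by DuckDB alongside the Postgres gateway could be configured as below. The DuckDB `database` path is an assumption made for this example.

```env
SQLMESH__DEFAULT_GATEWAY=GATEWAY_A
SQLMESH__GATEWAYS__GATEWAY_A__CONNECTION__TYPE=postgres
# remaining GATEWAY_A connection settings as shown above
SQLMESH__GATEWAYS__GATEWAY_B__CONNECTION__TYPE=duckdb
SQLMESH__GATEWAYS__GATEWAY_B__CONNECTION__DATABASE=/data/example.duckdb
```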
Once you have set up both sets of environment variables in a file named `local.env`, run the following commands to start the executors:

```shell
docker run -d --env-file local.env tobikodata/tcloud:latest -- executor run
docker run -d --env-file local.env tobikodata/tcloud:latest -- executor plan
```
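The same pair of containers can also be managed declaratively. A minimal Docker Compose sketch, assuming the same `local.env` file and using hypothetical service names, might look like the following; the `command` entries mirror the `docker run` arguments above, and `restart: unless-stopped` keeps the executors running across failures and host reboots:

```yaml
services:
  executor-run:
    image: tobikodata/tcloud:latest
    command: ["--", "executor", "run"]
    env_file: local.env
    restart: unless-stopped

  executor-plan:
    image: tobikodata/tcloud:latest
    command: ["--", "executor", "plan"]
    env_file: local.env
    restart: unless-stopped
```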
After the executors are properly configured, they will appear in the cloud UI, where they can be used to execute plans and scheduled tasks.

![executors](../scheduler/executors.png)

We recommend setting up monitoring for the executors to ensure they run smoothly and to help troubleshoot issues. This monitoring should include logs as well as system metrics like memory and CPU usage.