We gather [Telemetry data](telemetry.md) in the Percona packages and Docker images.

--8<-- "get-help-snip.md"

This guide shows you how to deploy a three-node Percona XtraDB Cluster 8.4 using Docker Compose. You generate SSL certificates on the first node and copy them to the other two nodes to enable secure communication.
The following procedure is for evaluation and testing only. Do not use these instructions in a production environment because the MySQL certificates generated here are self-signed. In production, generate and store proper certificates and configure storage, security, backup, and monitoring.

## Prerequisites

* Docker and Docker Compose installed (or Podman 4.1 or later and Podman Compose; see [Appendix: Podman Alternative](#appendix-podman-alternative))

* At least 3 GB of memory per container

* Familiarity with Docker volumes and networks
## Directory Structure

You must create a separate directory structure to organize your configuration, certificate files, and Docker Compose setup. This keeps your deployment clean and easy to manage.

Run the following commands to create the directory structure:
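For example, assuming the layout used in the rest of this guide (a `pxc-cluster/` project directory containing `certs/`, `conf.d/`, and `init/`):

```shell
# Create the project directory with subdirectories for certificates,
# MySQL configuration, and the certificate-generation script.
mkdir -p pxc-cluster/{certs,conf.d,init}
cd pxc-cluster
```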

The directory structure helps manage configuration files, TLS/SSL certificates, and setup scripts, ensuring a tidy and easy-to-manage deployment.
{.power-number}
## Configuration Files
1. Create `conf.d/custom.cnf` with minimal SSL settings:
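For example, a minimal configuration that points MySQL at the mounted certificates (the paths and file names match those used later in this guide):

```text
[mysqld]
ssl-ca=/etc/mysql/certs/ca.pem
ssl-cert=/etc/mysql/certs/server-cert.pem
ssl-key=/etc/mysql/certs/server-key.pem
```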
Add `.env` to your `.gitignore` file to prevent committing secrets to version control.
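The Compose file expands two secrets from `.env`; a minimal sketch with placeholder values:

```text
MYSQL_ROOT_PASSWORD=choose-a-strong-root-password
XTRABACKUP_PASSWORD=choose-a-strong-backup-password
```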
## SSL Certificate Generation
3. Copy the SSL certificate generation script. Save the script as `init/create-ssl-certs.sh`:
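A typical version of such a script uses `openssl` to create a self-signed CA and a CA-signed server certificate; the sketch below matches the file names and ten-year validity mentioned later in this guide, but your script may differ:

```shell
#!/bin/bash
# Sketch: generate a self-signed CA and a CA-signed server certificate
# in certs/ with 10-year (3650-day) validity.
set -e
mkdir -p certs
cd certs
openssl genrsa 2048 > ca-key.pem
openssl req -new -x509 -nodes -days 3650 -key ca-key.pem -out ca.pem -subj "/CN=pxc-ca"
openssl req -newkey rsa:2048 -nodes -keyout server-key.pem -out server-req.pem -subj "/CN=pxc-server"
openssl x509 -req -in server-req.pem -days 3650 -CA ca.pem -CAkey ca-key.pem -set_serial 01 -out server-cert.pem
```

Certificates generated this way are self-signed and suitable for testing only, as noted above.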

4. Make the script executable:

```shell
chmod +x init/create-ssl-certs.sh
```

Expected result: No output.

5. Run the script to create the certificates:

```shell
./init/create-ssl-certs.sh
```

Expected result: No output. The script creates `ca-key.pem`, `ca.pem`, `server-key.pem`, `server-req.pem`, and `server-cert.pem` in `certs/`.
Note: The certificates expire in 10 years. For production environments, implement certificate rotation and expiration monitoring.
6. Copy certificates to all nodes (multi-host only)
All three nodes in the cluster must use the same set of SSL certificates. The `docker-compose.yml` in the guide mounts the same `./certs` directory for all three services (pxc1, pxc2, pxc3). For a single-host deployment, you do not need to create separate certificate directories; the `certs/` you created in step 5 is used by all nodes.
If you deploy on separate machines instead of one host, copy the certificates to each machine so each has its own `certs/` (or equivalent path) for its Compose project. From node 1, run:
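For example, with `scp`, using hypothetical hostnames (`node2`, `node3`) and a hypothetical project path:

```shell
# Placeholder hosts and path; adjust to your environment.
scp -r certs/ user@node2:/path/to/pxc-cluster/
scp -r certs/ user@node3:/path/to/pxc-cluster/
```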
Expected result: progress or confirmation output for each file copied to each host; the command returns without errors when the transfer completes.
Ensure each host's Compose file mounts that host's certificate directory. For multi-host, see also [Multi-host deployment](#multi-host-deployment) below for firewall, name resolution, and time synchronization.
## Multi-host deployment
If you run each node on a separate machine, configure the following in addition to copying certificates.
Firewall: Allow cluster and client traffic between the hosts. Open these ports on each host for the other hosts' IPs (or subnet):
* 3306 (MySQL client)
* 4567 (Galera replication)
* 4568 (Galera incremental state transfer)
* 4444 (Percona XtraBackup snapshot transfer)
Name resolution: Containers on one host must reach containers on other hosts by hostname or IP. The Compose file uses the service names pxc1, pxc2, and pxc3, which resolve only within a single host's Compose stack. In `CLUSTER_JOIN`, use the other hosts' FQDNs or IP addresses, and add DNS records or `/etc/hosts` entries on each host so that each name resolves to the correct machine.
Time synchronization: PXC and Galera require consistent time across nodes. Synchronize every host's clock with an NTP service such as chrony or systemd-timesyncd; clock skew between hosts can cause replication issues and cluster instability. Ensure time synchronization is enabled and running before starting the cluster.
## Docker Compose Setup
7. Create `docker-compose.yml`:

```yaml
services:
  pxc1:
    # ... image and other settings ...
    environment:
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
      - CLUSTER_NAME=pxc-cluster
      - XTRABACKUP_PASSWORD=${XTRABACKUP_PASSWORD}
    volumes:
      - ./certs:/etc/mysql/certs:ro
      - ./conf.d:/etc/percona-xtradb-cluster.conf.d:ro
      - ./data-pxc1:/var/lib/mysql
    networks:
      - pxcnet
    ports:
      # ...

  pxc2:
    # ... image and other settings ...
    environment:
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
      - CLUSTER_NAME=pxc-cluster
      - CLUSTER_JOIN=pxc1
      - XTRABACKUP_PASSWORD=${XTRABACKUP_PASSWORD}
    volumes:
      - ./certs:/etc/mysql/certs:ro
      - ./conf.d:/etc/percona-xtradb-cluster.conf.d:ro
      - ./data-pxc2:/var/lib/mysql
    networks:
      - pxcnet
    healthcheck:
      # ...

  pxc3:
    # ... image and other settings ...
    environment:
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
      - CLUSTER_NAME=pxc-cluster
      - CLUSTER_JOIN=pxc1
      - XTRABACKUP_PASSWORD=${XTRABACKUP_PASSWORD}
    volumes:
      - ./certs:/etc/mysql/certs:ro
      - ./conf.d:/etc/percona-xtradb-cluster.conf.d:ro
      - ./data-pxc3:/var/lib/mysql
    networks:
      - pxcnet
    healthcheck:
      # ...

networks:
  pxcnet:
    driver: bridge
```

Note that pxc1, the bootstrap node, does not set `CLUSTER_JOIN`, while pxc2 and pxc3 join the cluster through pxc1.
SELinux: On systems with SELinux enabled (Docker or Podman), the container may not be able to read the mounted `certs/`, `conf.d/`, or data directories. Add `:Z` (private) or `:z` (shared) to each volume mount so the runtime relabels the mount for the container. For example, use `./certs:/etc/mysql/certs:ro,Z`, `./conf.d:/etc/percona-xtradb-cluster.conf.d:ro,Z`, and `./data-pxc1:/var/lib/mysql:Z` (and the same for pxc2 and pxc3). Apply the same suffix to every volume in every service.
## Deployment
8. Start the cluster
Start node 1 to initialize the cluster:
```shell
docker compose up -d pxc1
```

Expected result: A line such as `[+] Running 1/1 - Container pxc1 Started` (or similar). The container pxc1 is running.
(With Podman: `podman compose up -d pxc1` or `podman-compose up -d pxc1`.)
Wait for pxc1 to be fully healthy before starting pxc2 and pxc3. The bootstrap node must be ready and accepting connections, or the other nodes may fail to join. You can use a wait loop:
```shell
until docker exec pxc1 mysqladmin ping -h localhost &>/dev/null; do
  echo "Waiting for pxc1..."
  sleep 2
done
```
Expected result: The loop prints "Waiting for pxc1..." until the node responds, then exits with no further output.
Then, start the remaining nodes:
```shell
docker compose up -d pxc2 pxc3
```

Expected result: Lines such as `[+] Running 2/2 - Container pxc2 Started - Container pxc3 Started` (or similar). Both containers are running.
(With Podman: `podman compose up -d pxc2 pxc3` or `podman-compose up -d pxc2 pxc3`.)
## Validation
9. Validate the cluster
Check the status of each node. Run the following commands from the host, using the root password from your `.env`. First, verify the cluster size and node status:
```shell
docker exec pxc1 mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e "SHOW STATUS LIKE 'wsrep_cluster_size';"
docker exec pxc2 mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e "SHOW STATUS LIKE 'wsrep_cluster_status';"
```

Expected result: For `wsrep_cluster_size`, a row with value `3`. For `wsrep_cluster_status`, a row with value `Primary`.
Then verify additional cluster health indicators on any node (for example, pxc1). Expect `wsrep_ready` and `wsrep_connected` to be `ON`, and `wsrep_local_state_comment` to be `Synced`:

```shell
docker exec pxc1 mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e "SHOW STATUS LIKE 'wsrep_ready';"
docker exec pxc1 mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e "SHOW STATUS LIKE 'wsrep_connected';"
docker exec pxc1 mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e "SHOW STATUS LIKE 'wsrep_local_state_comment';"
```
If you also run a logical backup (for example, `mysqldump --all-databases` from one node), the expected result is a `backup.sql` file in the current directory containing SQL statements for all databases. With Podman, use `podman exec` instead of `docker exec`.
## Troubleshooting
Viewing logs for debugging:
```shell
docker compose logs -f pxc1
docker compose logs -f pxc2
docker compose logs -f pxc3
```

Run one of the commands above to stream that container's logs (stdout and stderr). The `-f` option follows the log output; omit `-f` for a one-off dump. Expected result: Log lines from MySQL and PXC until you press Ctrl+C. With Podman, use `podman compose logs -f pxc1` (and so on).
* Containers exit or fail to start: Check logs with `docker compose logs pxc1` (and pxc2, pxc3). Ensure the bootstrap node (pxc1) is healthy before starting pxc2 and pxc3.
* Cluster size stays at 1: Start pxc1 first, wait until its healthcheck passes, then start pxc2 and pxc3. If the joiners start too soon, they may fail to join; restart pxc2 and pxc3 after pxc1 is up.
* Permission denied on certs or config: Ensure `certs/` and `conf.d/` are readable by the container. On systems with SELinux enabled (Docker or Podman), add `:Z` or `:z` to all volume mounts (see the SELinux note in [Docker Compose Setup](#docker-compose-setup) and [Appendix: Podman Alternative](#appendix-podman-alternative)).
* Nodes cannot reach each other: On a single host, verify all containers are on the same network (`docker network inspect pxc-cluster_pxcnet` or equivalent). On multiple hosts, see [Multi-host deployment](#multi-host-deployment) for firewall rules, name resolution (DNS or `/etc/hosts`), and NTP.
## Cleanup and Shutdown
To stop the cluster and remove containers (data in `data-pxc1/`, `data-pxc2/`, `data-pxc3/` is preserved):
```shell
docker compose down
```

Expected result: Containers pxc1, pxc2, and pxc3 are stopped and removed. Project network is removed. Data in `data-pxc1/`, `data-pxc2/`, and `data-pxc3/` remains on the host.
To stop and remove the containers and any named volumes:
```shell
docker compose down -v
```

Expected result: Containers and any named volumes are removed. Bind-mounted data directories (`data-pxc1/`, etc.) are not removed by this command.
Note: The Compose file in this guide uses bind mounts (`./data-pxc1`, and so on), not named volumes, so `-v` removes nothing extra here. To fully reset the cluster, stop the containers and then remove the data directories manually: `rm -rf data-pxc1 data-pxc2 data-pxc3`.
## Appendix: Podman Alternative
You can use Podman as an alternative to Docker because it runs the same container images. However, Podman is not fully compatible with Docker Compose.
To run the deployment with Podman, you may need to use tools such as podman compose, podman-compose, or a configured Docker Compose compatibility layer, depending on your environment.
Podman uses a different architecture (for example, pods and rootless containers), so networking, volume mounts, and service behavior may differ from Docker Compose.
The deployment is designed and tested with Docker Compose. Podman support is not guaranteed. If you use Podman, verify that the cluster operates correctly before using the cluster beyond testing or experimentation.
You can use the same Compose file and workflow with [Podman](https://podman.io/). The built-in `podman compose` subcommand requires Podman 4.1 or later; with older Podman, use the separate podman-compose tool. Then:
* Prerequisites: Podman and Podman Compose (for example, `pip install podman-compose` or your distro's package) if you are not using built-in `podman compose`. For rootless Podman, ensure the user has enough resources (for example, `sysctl user.max_user_namespaces` and subuid/subgid ranges).
* Commands: Replace `docker compose` with `podman compose` or `podman-compose`, and `docker exec` with `podman exec`. Examples:
* Start node 1: `podman compose up -d pxc1` (or `podman-compose up -d pxc1`)
* Start other nodes: `podman compose up -d pxc2 pxc3`
* Validate: `podman exec -it pxc1 mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e "SHOW STATUS LIKE 'wsrep_cluster_size';"`
* Directory structure: The same layout (`pxc-cluster/` with `certs/`, `conf.d/`, and `init/`) works with Podman. Run `podman compose` from the project directory (for example, `pxc-cluster/`) so the relative volume paths in the Compose file resolve correctly.
* Compose file: The `docker-compose.yml` in the guide works as-is with Podman; the `bridge` network and volume mounts are supported. For rootless Podman, see the volume mount options below.
### Handling rootless permissions (most important for Podman)
In Docker, the daemon runs as root and can override file permissions. In Podman, if you run as a normal user, the MySQL process inside the container (usually UID 1001 or 999) may not have permission to read the files you created on the host. Ensure the files are readable by the container. On systems with SELinux, use the `:Z` (private) or `:z` (shared) mount option so Podman relabels the volume for the container. In your `docker-compose.yml`, use:
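For example, the volume list for one service with the relabeling option applied (the same paths as in the Compose file above; repeat for pxc2 and pxc3 with their own data directories):

```yaml
    volumes:
      - ./certs:/etc/mysql/certs:ro,Z
      - ./conf.d:/etc/percona-xtradb-cluster.conf.d:ro,Z
      - ./data-pxc1:/var/lib/mysql:Z
```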

Apply the same volume options to every service (pxc1, pxc2, pxc3). Without `:Z` or `:z`, the container may fail to read certs or config when running rootless on SELinux.
### podman-compose vs docker-compose
* If you use podman-compose (the Python tool): podman-compose reads the directory structure and `.env` file the same way Docker does.
* If you use docker-compose with the Podman socket: Using docker-compose with the Podman socket is often more stable for PXC. Set `DOCKER_HOST` to point at the Podman socket (for example, `unix:///run/user/$(id -u)/podman/podman.sock`) so `docker compose` talks to Podman.
### Network considerations
PXC uses specific ports for cluster communication (4567, 4568, 4444). Podman uses different networking (netavark or CNI). If nodes cannot find each other:
* Keep using a named network in the Compose file (for example, `pxcnet`) so Podman can resolve container names (pxc1, pxc2, pxc3).
* If problems persist, try setting `network_mode: slirp4netns` on the services, or run the stack rootful to use the default bridge.
0 commit comments