Commit ca7cc38 (parent a94d125)

PXC-5141 - [DOCS] - [feedback] It doesn't work with podman compose 8.4

    modified: docs/docker-compose.md

1 file changed: docs/docker-compose.md (187 additions & 40 deletions)
@@ -8,26 +8,21 @@ We gather [Telemetry data](telemetry.md) in the Percona packages and Docker imag
 
 --8<--- "get-help-snip.md"
 
-This guide shows you how to deploy a three-node Percona XtraDB Cluster 8.4 using Docker Compose. You generate SSL certificates on the first node and copy them to the other two nodes to enable secure communication.
+The guide shows you how to deploy a three-node Percona XtraDB Cluster 8.4 using Docker Compose. You generate SSL certificates on the first node and copy them to the other two nodes to enable secure communication.
 
-The following procedure describes setting up a simple 3-node cluster
-for evaluation and testing purposes. Do not use these instructions in a
-production environment because the MySQL certificates generated in this
-procedure are self-signed.
-
-In a production environment, you should generate and store the certificates to be used by Docker and configure proper storage, security, backup, and monitoring systems.
+The following procedure is for evaluation and testing only. Do not use these instructions in a production environment because the MySQL certificates generated here are self-signed. In production, generate and store proper certificates and configure storage, security, backup, and monitoring.
 
 ## Prerequisites
 
-* Docker and Docker Compose installed
+* Docker and Docker Compose installed (or Podman 4.1 or later and Podman Compose; see [Appendix: Podman Alternative](#appendix-podman-alternative))
 
 * At least 3 GB of memory per container
-
+
 * Familiarity with Docker volumes and networks
 
 ## Directory Structure
 
-You must create a separate directory structure to organize your configuration, certificate files, and Docker Compose setup. This step keeps your deployment clean and easy to manage.
+You must create a separate directory structure to organize your configuration, certificate files, and Docker Compose setup. The directory structure keeps your deployment clean and easy to manage.
 
 Run the following commands to create the directory structure:
 
@@ -36,13 +31,17 @@ mkdir -p pxc-cluster/{certs,conf.d,init}
 cd pxc-cluster
 ```
 
+Expected result: No output. The current directory is `pxc-cluster` with subdirectories `certs`, `conf.d`, and `init`.
+
 After running these commands, your working directory (pxc-cluster/) will contain:
 
 ![PXC cluster directories](_static/pxc-cluster-dirs.png)
 
-This structure helps manage configuration files, TLS/SSL certificates, and setup scripts, ensuring a tidy and easy-to-manage deployment.
+The directory structure helps manage configuration files, TLS/SSL certificates, and setup scripts, ensuring a tidy and easy-to-manage deployment.
 {.power-number}
 
+## Configuration Files
+
 1. Create conf.d/custom.cnf with minimal SSL settings:
 
 
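The hunk above ends before the contents of `custom.cnf`, which this commit leaves unchanged. For orientation only, a minimal illustrative fragment might look like the following; the file in the repository is authoritative, and the paths here simply match the `./certs:/etc/mysql/certs:ro` mount and the file names the generation script produces:

```text
[mysqld]
ssl-ca=/etc/mysql/certs/ca.pem
ssl-cert=/etc/mysql/certs/server-cert.pem
ssl-key=/etc/mysql/certs/server-key.pem
```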
@@ -62,6 +61,8 @@ This structure helps manage configuration files, TLS/SSL certificates, and setup
 
 Add `.env` to your .gitignore file to prevent committing secrets to version control.
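The step that creates `.env` falls outside this hunk. For orientation, a hypothetical minimal `.env` covering the two variables the Compose file reads (`MYSQL_ROOT_PASSWORD` and `XTRABACKUP_PASSWORD`; placeholder values, choose your own) could be:

```text
MYSQL_ROOT_PASSWORD=change-me-strong-password
XTRABACKUP_PASSWORD=change-me-too
```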
 
+## SSL Certificate Generation
+
 3. Copy the SSL certificate generation script. Save the script as `init/create-ssl-certs.sh`:
 
 ```text
@@ -95,43 +96,50 @@ This structure helps manage configuration files, TLS/SSL certificates, and setup
 chmod +x init/create-ssl-certs.sh
 ```
 
+Expected result: No output.
+
 5. Run the script to create the certs:
 
 ```shell
 ./init/create-ssl-certs.sh
 ```
 
-4. Copy Certificates to All Nodes
+Expected result: No output. The script creates `ca-key.pem`, `ca.pem`, `server-key.pem`, `server-req.pem`, and `server-cert.pem` in `certs/`.
 
-All three nodes in the cluster must use the same set of SSL certificates. After generating the certificates in the certs/ directory on node 1, you need to copy them to the directories for node 2 and node 3.
-
-If you run all containers from a single project directory (like with Docker Compose on one host), you can reuse the same certs/ directory for all nodes. However, you must explicitly copy the certificates if you’re organizing them into separate directories or deploying on separate hosts.
-
-To create the directories for node 2 and node 3:
-
-
-```shell
-mkdir -p certs-node2
-mkdir -p certs-node3
-```
+Note: The certificates expire in 10 years. For production environments, implement certificate rotation and expiration monitoring.
 
-Then copy the certificates:
+4. Copy Certificates to All Nodes (only for multi-host)
 
-
-```shell
-cp -r certs/* certs-node2/
-cp -r certs/* certs-node3/
-```
+All three nodes in the cluster must use the same set of SSL certificates. The `docker-compose.yml` in the guide mounts the same `./certs` directory for all three services (pxc1, pxc2, pxc3). For a single-host deployment, you do not need to create separate certificate directories; the `certs/` you created in step 5 is used by all nodes.
 
-If you’re deploying on separate machines, run the following from node 1:
+If you deploy on separate machines instead of one host, copy the certificates to each machine so each has its own `certs/` (or equivalent path) for its Compose project. From node 1, run:
 
 
 ```shell
 scp -r ./certs/ user@node2-host:/path/to/pxc-cluster/certs
 scp -r ./certs/ user@node3-host:/path/to/pxc-cluster/certs
 ```
 
-Ensure each container mounts its own copy of the certs/ directory.
+Expected result: scp prints progress for each file copied to each host; a clean exit with no error output indicates success.
+
+Ensure each host's Compose file mounts that host's certificate directory. For multi-host, see also [Multi-host deployment](#multi-host-deployment) below for firewall, name resolution, and time synchronization.
+
+## Multi-host deployment
+
+If you run each node on a separate machine, configure the following in addition to copying certificates.
+
+Firewall: Allow cluster and client traffic between the hosts. Open these ports on each host for the other hosts' IPs (or subnet):
+
+* 3306 (MySQL client)
+* 4567 (Galera replication)
+* 4568 (Galera incremental state transfer)
+* 4444 (Percona XtraBackup snapshot transfer)
+
+Name resolution: Containers on one host must reach containers on other hosts by hostname or IP. The Compose file uses service names (pxc1, pxc2, pxc3). When each host runs its own Compose stack, those names resolve only within that host. On each host, ensure the other nodes are reachable by a name or IP that you use in `CLUSTER_JOIN` (for example, the other hosts' FQDNs or IPs). Add DNS records or `/etc/hosts` entries on each host so that the hostname or IP used for each node resolves to the correct machine.
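The name-resolution advice above can be sketched concretely. Hypothetical `/etc/hosts` entries to add on each machine (example addresses from the documentation range; replace them with your hosts' real IPs):

```text
192.0.2.11  pxc1
192.0.2.12  pxc2
192.0.2.13  pxc3
```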
+
+Time synchronization: PXC and Galera require consistent time across nodes. Synchronize the clock on every host with NTP (or chrony, systemd-timesyncd). Skew between hosts can cause replication issues and cluster instability. Ensure NTP is enabled and running before starting the cluster.
+
+## Docker Compose Setup
 
 5. Create docker-compose.yml:
 
@@ -143,11 +151,11 @@ This structure helps manage configuration files, TLS/SSL certificates, and setup
     environment:
       - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
       - CLUSTER_NAME=pxc-cluster
-      - CLUSTER_JOIN=pxc2,pxc3
       - XTRABACKUP_PASSWORD=${XTRABACKUP_PASSWORD}
     volumes:
       - ./certs:/etc/mysql/certs:ro
       - ./conf.d:/etc/percona-xtradb-cluster.conf.d:ro
+      - ./data-pxc1:/var/lib/mysql
     networks:
       - pxcnet
     ports:
@@ -164,11 +172,12 @@ This structure helps manage configuration files, TLS/SSL certificates, and setup
     environment:
       - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
       - CLUSTER_NAME=pxc-cluster
-      - CLUSTER_JOIN=pxc1,pxc3
+      - CLUSTER_JOIN=pxc1
       - XTRABACKUP_PASSWORD=${XTRABACKUP_PASSWORD}
     volumes:
       - ./certs:/etc/mysql/certs:ro
       - ./conf.d:/etc/percona-xtradb-cluster.conf.d:ro
+      - ./data-pxc2:/var/lib/mysql
     networks:
       - pxcnet
     healthcheck:
@@ -182,11 +191,12 @@ This structure helps manage configuration files, TLS/SSL certificates, and setup
     environment:
       - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
       - CLUSTER_NAME=pxc-cluster
-      - CLUSTER_JOIN=pxc1,pxc2
+      - CLUSTER_JOIN=pxc1
       - XTRABACKUP_PASSWORD=${XTRABACKUP_PASSWORD}
     volumes:
       - ./certs:/etc/mysql/certs:ro
       - ./conf.d:/etc/percona-xtradb-cluster.conf.d:ro
+      - ./data-pxc3:/var/lib/mysql
     networks:
       - pxcnet
     healthcheck:
@@ -199,6 +209,10 @@ This structure helps manage configuration files, TLS/SSL certificates, and setup
     driver: bridge
 ```
 
+SELinux: On systems with SELinux enabled (Docker or Podman), the container may not be able to read the mounted `certs/`, `conf.d/`, or data directories. Add `:Z` (private) or `:z` (shared) to each volume mount so the runtime relabels the mount for the container. For example, use `./certs:/etc/mysql/certs:ro,Z`, `./conf.d:/etc/percona-xtradb-cluster.conf.d:ro,Z`, and `./data-pxc1:/var/lib/mysql:Z` (and the same for pxc2 and pxc3). Apply the same suffix to every volume in every service.
+
+## Deployment
+
 6. Start the Cluster
 
 Start node 1 to initialize the cluster:
@@ -207,23 +221,156 @@ This structure helps manage configuration files, TLS/SSL certificates, and setup
 ```shell
 docker compose up -d pxc1
 ```
+Expected result: A line such as `[+] Running 1/1 - Container pxc1 Started` (or similar). The container pxc1 is running.
+
+(With Podman: `podman compose up -d pxc1` or `podman-compose up -d pxc1`.)
+
+Wait for pxc1 to be fully healthy before starting pxc2 and pxc3. The bootstrap node must be ready and accepting connections, or the other nodes may fail to join. You can use a wait loop:
+
+```shell
+until docker exec pxc1 mysqladmin ping -h localhost &>/dev/null; do
+  echo "Waiting for pxc1..."
+  sleep 2
+done
+```
+
+Expected result: The loop prints "Waiting for pxc1..." until the node responds, then exits with no further output.
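The loop above waits forever if pxc1 never comes up. A small helper with a timeout keeps the wait bounded; `wait_until` is a hypothetical name, not part of the guide:

```shell
# Hypothetical helper: retry a probe command until it succeeds or a
# timeout (in seconds) elapses; returns non-zero on timeout.
wait_until() {
  local timeout=$1; shift
  local waited=0
  until "$@"; do
    sleep 1
    waited=$((waited + 1))
    if [ "$waited" -ge "$timeout" ]; then
      echo "timed out after ${timeout}s" >&2
      return 1
    fi
  done
}

# Self-contained demo with a probe that succeeds immediately:
wait_until 5 true && echo "ready"    # prints "ready"
```

For the guide's probe you would run, for example, `wait_until 120 docker exec pxc1 mysqladmin ping -h localhost >/dev/null 2>&1`.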
 
 Then, start the remaining nodes:
 
-
 ```shell
 docker compose up -d pxc2 pxc3
 ```
+Expected result: Lines such as `[+] Running 2/2 - Container pxc2 Started - Container pxc3 Started` (or similar). Both containers are running.
+
+(With Podman: `podman compose up -d pxc2 pxc3` or `podman-compose up -d pxc2 pxc3`.)
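As an alternative to ordering the `up` commands by hand, Compose can encode the startup dependency itself. A sketch, assuming pxc1 also defines a healthcheck like the one shown for pxc2 and pxc3 in `docker-compose.yml`, added to the pxc2 and pxc3 services:

```text
    depends_on:
      pxc1:
        condition: service_healthy
```

With this in place, a single `docker compose up -d` starts pxc1 first and delays pxc2 and pxc3 until the pxc1 healthcheck passes.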
+
+## Validation
 
 7. Validate the Cluster
 
-Check the status of each node:
+Check the status of each node. Run the commands from the host and use the same password as in your `.env`. First verify the cluster size and node status:
+
+```shell
+docker exec pxc1 mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e "SHOW STATUS LIKE 'wsrep_cluster_size';"
+docker exec pxc2 mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e "SHOW STATUS LIKE 'wsrep_cluster_status';"
+```
+
+Expected result: For `wsrep_cluster_size`, a row with value `3`. For `wsrep_cluster_status`, a row with value `Primary`.
+
+Then verify additional cluster health indicators on any node (for example, pxc1). Expect `wsrep_ready` and `wsrep_connected` to be `ON`, and `wsrep_local_state_comment` to be `Synced`:
 
-
 ```shell
-docker exec -it pxc1 mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e "SHOW STATUS LIKE 'wsrep_cluster_size';"
-docker exec -it pxc2 mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e "SHOW STATUS LIKE 'wsrep_cluster_status';"
+docker exec pxc1 mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e "SHOW STATUS LIKE 'wsrep_ready';"
+docker exec pxc1 mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e "SHOW STATUS LIKE 'wsrep_connected';"
+docker exec pxc1 mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e "SHOW STATUS LIKE 'wsrep_local_state_comment';"
 ```
 
-You should see all three nodes joined and synchronized.
+Expected result: `wsrep_ready` = `ON`, `wsrep_connected` = `ON`, `wsrep_local_state_comment` = `Synced`.
+
+(With Podman: use `podman exec` instead of `docker exec`.)
+
+You should see all three nodes joined and synchronized, with the health indicators above showing the expected values.
+
+## Backup
+
+Even for testing, back up data before major changes or shutdown. Example: dump all databases from one node (pxc1) to a file on the host:
+
+```shell
+docker exec pxc1 mysqldump -uroot -p${MYSQL_ROOT_PASSWORD} --all-databases > backup.sql
+```
+
+Expected result: The file `backup.sql` is created in the current directory with SQL statements for all databases. With Podman, use `podman exec` instead of `docker exec`.
+
+## Troubleshooting
+
+Viewing logs for debugging:
+
+```shell
+docker compose logs -f pxc1
+docker compose logs -f pxc2
+docker compose logs -f pxc3
+```
+
+Run one of the commands above to stream that container's logs (stdout and stderr). The `-f` option follows the log output; omit `-f` for a one-off dump. Expected result: Log lines from MySQL and PXC until you press Ctrl+C. With Podman, use `podman compose logs -f pxc1` (and so on).
+
+* Containers exit or fail to start: Check logs with `docker compose logs pxc1` (and pxc2, pxc3). Ensure the bootstrap node (pxc1) is healthy before starting pxc2 and pxc3.
+
+* Cluster size stays at 1: Start pxc1 first, wait for pxc1 to be ready (healthcheck passing), then start pxc2 and pxc3. If the nodes start too soon, they may not join; restart pxc2 and pxc3 after pxc1 is up.
+
+* Permission denied on certs or config: Ensure `certs/` and `conf.d/` are readable by the container. On systems with SELinux enabled (Docker or Podman), add `:Z` or `:z` to all volume mounts (see the SELinux note in [Docker Compose Setup](#docker-compose-setup) and [Appendix: Podman Alternative](#appendix-podman-alternative)).
+
+* Nodes cannot reach each other: On a single host, verify all containers are on the same network (`docker network inspect pxc-cluster_pxcnet` or equivalent). On multiple hosts, see [Multi-host deployment](#multi-host-deployment) for firewall rules, name resolution (DNS or `/etc/hosts`), and NTP.
+
+## Cleanup and Shutdown
+
+To stop the cluster and remove containers (data in `data-pxc1/`, `data-pxc2/`, `data-pxc3/` is preserved):
+
+```shell
+docker compose down
+```
+
+Expected result: Containers pxc1, pxc2, and pxc3 are stopped and removed. The project network is removed. Data in `data-pxc1/`, `data-pxc2/`, and `data-pxc3/` remains on the host.
+
+To stop and remove containers and volumes (the `-v` option deletes all database data):
+
+```shell
+docker compose down -v
+```
+
+Expected result: Containers and any named volumes are removed. Bind-mounted data directories (`data-pxc1/`, etc.) are not removed by this command.
+
+Note: The Compose file in the guide uses bind mounts (`./data-pxc1`, etc.), not named volumes, so `-v` removes only any named volumes if present. To fully reset, remove the data directories manually: `rm -rf data-pxc1 data-pxc2 data-pxc3` (only when the containers are stopped).
+
+## Appendix: Podman Alternative
+
+Podman can be used as an alternative to Docker because Podman supports the same container images. However, Podman is not fully compatible with Docker Compose.
+
+To run the deployment with Podman, you may need to use tools such as podman compose, podman-compose, or a configured Docker Compose compatibility layer, depending on your environment.
+
+Podman uses a different architecture (for example, pods and rootless containers), so networking, volume mounts, and service behavior may differ from Docker Compose.
+
+The deployment is designed and tested with Docker Compose. Podman support is not guaranteed. If you use Podman, verify that the cluster operates correctly before using the cluster beyond testing or experimentation.
+
+You can use the same Compose file and workflow with [Podman](https://podman.io/). Use Podman 4.1 or later; the built-in `podman compose` subcommand requires 4.1 or newer. With older Podman, use the separate podman-compose tool. Then:
+
+* Prerequisites: Podman and Podman Compose (for example, `pip install podman-compose` or your distro's package) if you are not using the built-in `podman compose`. For rootless Podman, ensure the user has enough resources (for example, `sysctl user.max_user_namespaces` and subuid/subgid ranges).
+
+* Commands: Replace `docker compose` with `podman compose` or `podman-compose`, and `docker exec` with `podman exec`. Examples:
+
+    * Start node 1: `podman compose up -d pxc1` (or `podman-compose up -d pxc1`)
+
+    * Start other nodes: `podman compose up -d pxc2 pxc3`
+
+    * Validate: `podman exec -it pxc1 mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e "SHOW STATUS LIKE 'wsrep_cluster_size';"`
+
+* Directory structure: The same layout (`pxc-cluster/` with `certs/`, `conf.d/`, and `init/`) works with Podman. Run `podman compose` from the project directory (for example, `pxc-cluster/`) so the relative volume paths in the Compose file resolve correctly.
+
+* Compose file: The `docker-compose.yml` in the guide works as-is with Podman; the `bridge` network and volume mounts are supported. For rootless Podman, see the volume mount options below.
+
+### Handling rootless permissions (most important for Podman)
+
+In Docker, the daemon runs as root and can override file permissions. In Podman, if you run as a normal user, the MySQL process inside the container (usually UID 1001 or 999) may not have permission to read the files you created on the host. Ensure the files are readable by the container. On systems with SELinux, use the `:Z` (private) or `:z` (shared) mount option so Podman relabels the volume for the container. In your `docker-compose.yml`, use:
+
+```text
+    volumes:
+      - ./certs:/etc/mysql/certs:ro,Z
+      - ./conf.d:/etc/percona-xtradb-cluster.conf.d:ro,Z
+```
+
+Apply the same volume options to every service (pxc1, pxc2, pxc3). Without `:Z` or `:z`, the container may fail to read certs or config when running rootless on SELinux.
+
+### podman-compose vs docker-compose
+
+* If you use podman-compose (the Python tool): podman-compose reads the directory structure and `.env` file the same way Docker does.
+
+* If you use docker-compose with the Podman socket: Using docker-compose with the Podman socket is often more stable for PXC. Set `DOCKER_HOST` to point at the Podman socket (for example, `unix:///run/user/$(id -u)/podman/podman.sock`) so `docker compose` talks to Podman.
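The socket setup in the second bullet can be sketched as shell commands. The socket path is the one given above; the `systemctl` step assumes a systemd-based host and is shown as a comment only:

```shell
# First enable the rootless Podman API socket (on a systemd host):
#   systemctl --user enable --now podman.socket
# Then point docker / docker compose at that socket:
export DOCKER_HOST="unix:///run/user/$(id -u)/podman/podman.sock"
echo "$DOCKER_HOST"
```

After exporting `DOCKER_HOST`, `docker compose` commands in the same shell session are served by Podman instead of the Docker daemon.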
+
+### Network considerations
+
+PXC uses specific ports for cluster communication (4567, 4568, 4444). Podman uses different networking (netavark or CNI). If nodes cannot find each other:
+
+* Keep using a named network in the Compose file (for example, `pxcnet`) so Podman can resolve container names (pxc1, pxc2, pxc3).
+
+* If problems persist, try setting `network_mode: slirp4netns` on the services, or run the stack rootful to use the default bridge.
 