Status: ✅ Resolved (2026-03-06) — two root causes identified and fixed:

- `ufw` was blocking IPv6 UDP 6969 (primary blocker — packets never reached the container)
- Asymmetric routing needed policy routing tables so replies leave via the correct floating IP

✅ All fixes are now persistent: `ufw` stores rules in `/etc/ufw/`; policy routing rules are persisted via netplan (`/etc/netplan/60-floating-ip.yaml`).
During issue #407 (submitting the UDP1 tracker to newTrackon), the tracker was rejected with a
"UDP timeout" error. The newTrackon probe used the AAAA record
(`2a01:4f8:1c0c:828e::1`) to reach the tracker via IPv6. IPv4 probes (tested locally) worked fine.
This document records the investigation, the root causes, and the fixes applied.
- `udp://udp1.torrust-tracker-demo.com:6969/announce` submitted to newTrackon
- newTrackon probed via IPv6: `2a01:4f8:1c0c:828e::1`
- Result: ❌ Rejected — UDP timeout
- Local test via IPv4 (`116.202.177.184`): ✅ Works
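Independent of newTrackon, the IPv6 path can be probed by hand with a raw BEP 15 connect request, the same 16-byte packet seen in the tcpdump capture later in this document. A sketch, assuming OpenBSD `nc` is installed; the protocol magic is fixed by BEP 15, the transaction ID is arbitrary:

```shell
# BEP 15 connect request: 8-byte protocol magic 0x41727101980,
# 4-byte action (0 = connect), 4-byte arbitrary transaction ID (0x3039 here).
# A reachable tracker answers with a 16-byte connect response; silence
# reproduces newTrackon's "UDP timeout".
printf '\000\000\004\027\047\020\031\200\000\000\000\000\000\000\060\071' \
  | nc -6 -u -w 3 udp1.torrust-tracker-demo.com 6969 | wc -c
```

Running the same command with `-4` instead of `-6` compares the IPv4 path.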
| Hypothesis | Evidence | Verdict |
|---|---|---|
| Docker IPv6 disabled | `ss -ulnp` shows `[::]:6969` — container binds to both IPv4 and IPv6 | ❌ Ruled out |
| Wrong IP in DNS | `dig AAAA` returns `2a01:4f8:1c0c:828e::1` ✅ | ❌ Ruled out |
| Floating IP not on interface | `ip addr show eth0` shows all four IPs with `valid_lft forever` | ❌ Ruled out |
| BEP 34 TXT record missing | `dig TXT udp1.torrust-tracker-demo.com` returns correct value | ❌ Ruled out |
| Caddy proxy intercepting UDP | UDP tracker bypasses reverse proxy entirely | ❌ Ruled out |
| Asymmetric routing (reply source) | Without policy routing, replies left via primary IP, not the floating IP | ✅ Secondary issue — fixed |
First hypothesis: Docker IPv6 disabled. Ran on the server:

```bash
sudo ss -ulnp | grep 6969
```

Output:

```text
UNCONN 0 0 0.0.0.0:6969 0.0.0.0:* users:(("docker-proxy",pid=1533796,fd=7))
UNCONN 0 0    [::]:6969    [::]:* users:(("docker-proxy",pid=1533806,fd=7))
```

✅ The tracker binds to both `0.0.0.0:6969` (IPv4) and `[::]:6969` (IPv6), so Docker IPv6 is
enabled and the listening socket is not the problem. At this point it looked like packets
arriving on the IPv6 floating IP would reach the container (the tcpdump capture later disproved this).
The working theory at this stage: the packet arrives and the container processes it, but the reply leaves via the wrong source IP.
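The kernel's source-address choice can be inspected without capturing traffic. A sketch (the destination addresses are arbitrary external hosts, not part of this deployment):

```shell
# "src" in the output is the source address the kernel stamps on
# locally originated packets to that destination; before the fix it showed
# the primary IP (46.225.234.201 / 2a01:4f8:1c19:620b::1) rather than the
# floating IP.
ip route get 1.1.1.1
ip -6 route get 2001:4860:4860::8888
```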
Policy-based routing forces replies to leave via the same floating IP the probe arrived on. Checked whether the IPv6 rule was already in place:

```bash
ip -6 rule list
```

Output:

```text
0:     from all lookup local
32765: from 2a01:4f8:1c0c:828e::1 lookup 200
32766: from all lookup main
```

A rule was already present from a previous attempt. Verified the corresponding route table:

```bash
ip -6 route show table 200
```

Output:

```text
default via fe80::1 dev eth0
```

✅ IPv6 replies from `2a01:4f8:1c0c:828e::1` route via `fe80::1` (Hetzner's IPv6 gateway on `eth0`).
The IPv4 floating IP (`116.202.177.184`) also needed symmetric routing. Found the IPv4 default gateway:

```bash
ip route show default
```

Output:

```text
default via 172.31.1.1 dev eth0 proto dhcp src 46.225.234.201 metric 100
```

Added the IPv4 policy routing rules:

```bash
ip route add default via 172.31.1.1 dev eth0 table 100
ip rule add from 116.202.177.184 table 100
```

Verified:

```bash
ip rule list
```

Output:

```text
0:     from all lookup local
32765: from 116.202.177.184 lookup 100
32766: from all lookup main
32767: from all lookup default
```

```bash
ip route show table 100
```

Output:

```text
default via 172.31.1.1 dev eth0
```

✅ IPv4 replies from `116.202.177.184` now route via `172.31.1.1` (Hetzner's IPv4 gateway).
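To confirm a policy rule actually matches, `ip route get` accepts an explicit source address: with `from <floating IP>` the lookup should land in table 100 or 200 and report the floating-IP gateway. A sketch using the addresses above (destinations are arbitrary external hosts):

```shell
# With the source pinned to a floating IP, the kernel must consult the
# per-IP table (100 for IPv4, 200 for IPv6) instead of the main table,
# so the reported gateway should be 172.31.1.1 / fe80::1 respectively.
ip route get 1.1.1.1 from 116.202.177.184
ip -6 route get 2001:4860:4860::8888 from 2a01:4f8:1c0c:828e::1
```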
After applying both policy routing rules, resubmitted to newTrackon. While the probe was running, captured traffic on the server:

```bash
sudo tcpdump -i eth0 -n udp port 6969 -v
```

Output:

```text
tcpdump: listening on eth0, link-type EN10MB (Ethernet), snapshot length 262144 bytes
13:18:22.128306 IP6 (flowlabel 0xb28c9, hlim 56, next-header UDP (17) payload length: 24) 2a01:4f8:1c1a:715::1.37318 > 2a01:4f8:1c0c:828e::1.6969: [udp sum ok] UDP, length 16
13:18:32.129210 IP6 (flowlabel 0xdf835, hlim 56, next-header UDP (17) payload length: 24) 2a01:4f8:1c1a:715::1.34285 > 2a01:4f8:1c0c:828e::1.6969: [udp sum ok] UDP, length 16
```

Key observation: Only incoming packets appear (`2a01:4f8:1c1a:715::1` → `2a01:4f8:1c0c:828e::1`).
There are no outgoing reply lines from `2a01:4f8:1c0c:828e::1`. This means:

- ✅ Packets arrive at `eth0` — no Hetzner cloud firewall blocking upstream
- ❌ Packets are silently dropped before the container ever processes them
- The container logs showed zero hits on `:6969` — confirming packets never reached docker-proxy

This rules out asymmetric routing as the primary cause: the packets don't reach the
container at all. Something between `eth0` ingress and Docker is dropping them.
Inspected the `ufw` rules on the server:

```bash
sudo ufw status verbose
```

Output:

```text
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), deny (routed)
New profiles: skip

To                         Action      From
--                         ------      ----
22/tcp                     ALLOW IN    Anywhere        # SSH access (configured port 22)
22/tcp (v6)                ALLOW IN    Anywhere (v6)   # SSH access (configured port 22)
```

❌ `6969/udp` is absent from the allow list.
`Default: deny (incoming)` means all inbound traffic not explicitly allowed is dropped.
Docker bypasses the iptables INPUT chain for IPv4 published ports (writing its own DNAT
rules), but it does not manage ip6tables by default. As a result:

- IPv4 UDP 6969 — Docker DNAT bypasses the ufw INPUT chain → works ✅
- IPv6 UDP 6969 — no Docker ip6tables DNAT rule exists → hits the ufw INPUT chain → default deny → dropped ❌

This explains why IPv4 tests always worked and IPv6 failed.
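For completeness: newer Docker releases can manage ip6tables themselves, publishing ports over IPv6 the same way as IPv4. This was not used here (the ufw rule was the chosen fix). The sketch below assumes Docker 20.10+, where the `ipv6`, `fixed-cidr-v6`, and `ip6tables` daemon keys exist; the subnet is an illustrative placeholder and behavior varies by Docker version:

```shell
# NOT applied in this deployment, illustrative alternative only.
# Lets dockerd write ip6tables DNAT rules for published ports, mirroring
# what it already does for IPv4. fd00:db8::/64 is a placeholder ULA subnet.
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "ipv6": true,
  "fixed-cidr-v6": "fd00:db8::/64",
  "ip6tables": true
}
EOF
sudo systemctl restart docker
```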
Verified Docker's FORWARD rules were correctly in place for both IPv4 and IPv6:

```bash
sudo iptables -L FORWARD --line-numbers -n
sudo ip6tables -L FORWARD --line-numbers -n
```

Output:

```text
Chain FORWARD (policy DROP)
num  target          prot opt source     destination
1    DOCKER-USER     0    --  0.0.0.0/0  0.0.0.0/0
2    DOCKER-FORWARD  0    --  0.0.0.0/0  0.0.0.0/0
...

Chain FORWARD (policy DROP)
num  target          prot opt source  destination
1    DOCKER-USER     0    --  ::/0    ::/0
2    DOCKER-FORWARD  0    --  ::/0    ::/0
...
```

✅ `DOCKER-FORWARD` is present at position 2 for both IPv4 and IPv6. Once packets pass the INPUT
chain, forwarding to the container is handled correctly.
Two independent issues both had to be fixed:
The ufw firewall default policy is deny (incoming), and port 6969/udp was never added to the
allow list. For IPv6, Docker does not write ip6tables INPUT rules, so packets hit ufw's default
deny policy and are silently dropped before reaching docker-proxy.
For IPv4, Docker writes DNAT rules directly into iptables, which bypass the ufw INPUT chain —
that is why IPv4 probes always worked.

IPv6 path (broken):

```text
probe → eth0 → ip6tables INPUT chain → ufw default deny → dropped ❌
```

IPv4 path (always worked):

```text
probe → eth0 → iptables DNAT (Docker) → bypasses INPUT → container ✅
```
When a UDP probe arrives on a floating IP, the Linux kernel routes the reply using the default
route — which sends the packet out via the primary server IP (`2a01:4f8:1c19:620b::1` on IPv6,
`46.225.234.201` on IPv4). newTrackon discards the response because it comes from an unexpected
source address.

Without policy routing:

```text
probe → arrives on floating IP → container processes → reply leaves via primary IP
newTrackon sees reply from wrong source → discards → "UDP timeout"
```

With policy routing:

```text
probe → arrives on floating IP → container processes
→ kernel matches "from <floating IP>" rule → routes via table 100/200
→ reply leaves via correct floating IP → newTrackon accepts
```
The previous Torrust demo tracker was deployed on Digital Ocean with a reserved IPv4
(144.126.245.19). That deployment only served IPv4 — no IPv6 floating IPs were configured.
This means the asymmetric routing / IPv6 Docker issue was never encountered.
This is the first Torrust deployment routing UDP tracker traffic over IPv6 floating IPs. The combination of:
- Multiple floating IPs (both IPv4 and IPv6)
- Docker with default network settings
- UDP tracker on port 6969
…is new territory. Both ufw and asymmetric routing needed to be addressed (see above).
The old demo used Nginx as a reverse proxy; this deployment uses Caddy. This is irrelevant for UDP tracker traffic — UDP does not go through the reverse proxy (HTTP only). Both setups are equivalent from the UDP tracker's perspective.
Fix 1 (`ufw`) was immediately persistent — `ufw` stores rules in `/etc/ufw/` and they survive a reboot. Fixes 2 and 3 (policy routing) were runtime only when first applied; they were persisted via netplan in Step 4 below.
This was the critical fix that allowed IPv6 UDP packets to reach the container.

```bash
sudo ufw allow 6969/udp
```

Verified:

```bash
sudo ufw status verbose
```

Output:

```text
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), deny (routed)
New profiles: skip

To                         Action      From
--                         ------      ----
22/tcp                     ALLOW IN    Anywhere        # SSH access (configured port 22)
6969/udp                   ALLOW IN    Anywhere
22/tcp (v6)                ALLOW IN    Anywhere (v6)   # SSH access (configured port 22)
6969/udp (v6)              ALLOW IN    Anywhere (v6)
```

✅ Both IPv4 and IPv6 UDP port 6969 are now allowed in.
Already present from an earlier investigation step (see Check 2 above).

| Item | Value |
|---|---|
| Rule | `from 2a01:4f8:1c0c:828e::1 lookup 200` |
| Route | `default via fe80::1 dev eth0` (table 200) |
Added to ensure IPv4 replies also leave via the correct floating IP (see Check 3 above).

```bash
ip route add default via 172.31.1.1 dev eth0 table 100
ip rule add from 116.202.177.184 table 100
```

The `ip rule` and `ip route` commands from Fixes 2 and 3 are runtime only — they are held in
kernel memory and are lost on reboot. systemd-networkd (the network renderer used here)
manages persistent network state via netplan. The policy routing rules were added to
`/etc/netplan/60-floating-ip.yaml`.
Two netplan files exist on this server. The numeric prefix controls load order — networkd processes them in ascending order:

| File | Who manages it | Purpose |
|---|---|---|
| `50-cloud-init.yaml` | cloud-init (automatic) | Primary interface: DHCP4, primary IPv6 address, default IPv6 route |
| `60-floating-ip.yaml` | manually managed | Floating IPs and (after this fix) policy routing rules |

⚠️ Never edit `50-cloud-init.yaml` — cloud-init may regenerate it on the next run and overwrite your changes.
The cloud-init file that was already present:

```bash
sudo cat /etc/netplan/50-cloud-init.yaml
```

Output:

```yaml
network:
  version: 2
  ethernets:
    eth0:
      match:
        macaddress: "92:00:07:4f:b3:4f"
      addresses:
        - "2a01:4f8:1c19:620b::1/64"
      nameservers:
        addresses:
          - 2a01:4ff:ff00::add:2
          - 2a01:4ff:ff00::add:1
      dhcp4: true
      set-name: "eth0"
      routes:
        - on-link: true
          to: "default"
          via: "fe80::1"
```

Key observations:

- `dhcp4: true` — the primary IPv4 address (46.225.234.201) and default IPv4 route are assigned by DHCP; the gateway (172.31.1.1) discovered in Check 3 came from DHCP.
- The primary IPv6 address `2a01:4f8:1c19:620b::1/64` and its default route via `fe80::1` are statically configured here.
- `fe80::1` is Hetzner's link-local router address on `eth0`. This is why we reuse it as the gateway in table 200 for the UDP1 floating IP — it is the only IPv6 gateway available on this interface.
The file only contained the static IP address assignments:

```yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      addresses:
        # Existing floating IPs (HTTP1)
        - 116.202.176.169/32
        - 2a01:4f8:1c0c:9aae::1/64
        # New floating IPs (UDP1)
        - 116.202.177.184/32
        - 2a01:4f8:1c0c:828e::1/64
```

Two new blocks were added under `eth0`:

- `routing-policy:` — one entry per floating IP, mapping the source address to a routing table number. When a packet's source IP matches `from`, the kernel consults that table instead of the main routing table.
- `routes:` — one default route per table, pointing outbound traffic to the correct gateway. Table 100 uses the IPv4 gateway (172.31.1.1); table 200 uses the IPv6 link-local gateway (`fe80::1`).
```bash
sudo cat /etc/netplan/60-floating-ip.yaml
```

Output:

```yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      addresses:
        # Existing floating IPs (HTTP1)
        - 116.202.176.169/32
        - 2a01:4f8:1c0c:9aae::1/64
        # New floating IPs (UDP1)
        - 116.202.177.184/32
        - 2a01:4f8:1c0c:828e::1/64
      routing-policy:
        - from: 116.202.177.184
          table: 100
        - from: 2a01:4f8:1c0c:828e::1
          table: 200
      routes:
        - to: default
          via: 172.31.1.1
          table: 100
        - to: default
          via: fe80::1
          table: 200
```

Requirement: `routing-policy` and per-table `routes` require `renderer: networkd`. Verify the renderer is active with `systemctl status systemd-networkd`.
```bash
sudo netplan apply
```

No output on success. `netplan apply` instructs systemd-networkd to recalculate and apply the
network configuration — including the new routing tables and policy rules — without taking the
interface down.
```bash
ip rule list
ip route show table 100
ip -6 rule list
ip -6 route show table 200
```

Output:

```text
0:     from all lookup local
32764: from 116.202.177.184 lookup 100 proto static
32766: from all lookup main
32767: from all lookup default

default via 172.31.1.1 dev eth0 proto static

0:     from all lookup local
32764: from 2a01:4f8:1c0c:828e::1 lookup 200 proto static
32766: from all lookup main

default via fe80::1 dev eth0 proto static metric 1024 pref medium
```
Two differences from the manually added rules (as in Check 2 and Check 3):

- `proto static` — netplan/networkd marks its routes and rules as `proto static`, whereas `ip rule add` / `ip route add` without a `proto` flag produce untagged entries. This is cosmetic only; both work identically.
- Priority `32764` instead of `32765` — networkd assigns its own priority numbers. Again, functionally equivalent.
✅ Both routing tables are active and will survive a server reboot — they are now managed by
systemd-networkd via netplan.
The manual `ip rule add` / `ip route add` commands work for a running system but are not
persistent. On Ubuntu/Debian systems with netplan, options for persistence include:

- netplan (used here) — cleanest approach when using `renderer: networkd`
- `/etc/rc.local` — works on older systems but is not idiomatic on modern Ubuntu
- A systemd one-shot service — explicit but verbose

Netplan is the right choice here because it already manages the floating IP addresses on this interface. Keeping routing policy alongside address configuration in the same file ensures both are always applied together.
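For reference, the systemd one-shot variant would look roughly like this (not used here; the unit name and `ip` binary path are illustrative assumptions):

```ini
# /etc/systemd/system/floating-ip-policy-routing.service (illustrative)
[Unit]
Description=Policy routing for floating IPs (tables 100/200)
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/sbin/ip route add default via 172.31.1.1 dev eth0 table 100
ExecStart=/usr/sbin/ip rule add from 116.202.177.184 table 100
ExecStart=/usr/sbin/ip -6 route add default via fe80::1 dev eth0 table 200
ExecStart=/usr/sbin/ip -6 rule add from 2a01:4f8:1c0c:828e::1 table 200

[Install]
WantedBy=multi-user.target
```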
After applying both fixes (ufw + policy routing):

```text
URL: udp://udp1.torrust-tracker-demo.com:6969/announce
IP: 2a01:4f8:1c0c:828e::1
Result: ✅ Accepted
Response: {'interval': 300, 'leechers': 0, 'peers': [], 'seeds': 1}
```
This issue no longer blocks the UDP1 tracker. All tracker functionality is operational:

- HTTP tracker — Caddy → Docker on IPv4 ✅
- IPv4 UDP tracker ✅
- IPv6 UDP tracker via floating IP `2a01:4f8:1c0c:828e::1` ✅
- HTTP1 and UDP1 trackers listed on newTrackon ✅
This issue should also be documented in the torrust-tracker repository, as it involves the tracker's network configuration requirements when running with multiple IPv6 floating IPs. Any future deployment guide covering IPv6 should mention:

- Open the firewall port for the UDP tracker: `sudo ufw allow <port>/udp` — Docker does not manage ip6tables INPUT rules, so ufw's default deny blocks all IPv6 inbound unless explicitly allowed
- Verify with `sudo ss -ulnp | grep <port>` that the tracker binds to both `0.0.0.0` and `[::]`
- Policy-based routing is required for each floating IP to ensure replies leave via the correct source address (both IPv4 and IPv6)