diff --git a/src/routes/blog/nf-conntrack-overflow-docker-udp-tracker/+page.server.ts b/src/routes/blog/nf-conntrack-overflow-docker-udp-tracker/+page.server.ts new file mode 100644 index 0000000..5cf0731 --- /dev/null +++ b/src/routes/blog/nf-conntrack-overflow-docker-udp-tracker/+page.server.ts @@ -0,0 +1,14 @@ +import { getMetadata } from '$lib/data/metadata'; +import type { PageServerLoad } from './$types'; + +export const load: PageServerLoad = async ({ url }) => { + const slug = url.pathname.split('/').filter(Boolean).pop(); + if (!slug) throw new Error('Slug could not be determined.'); + + const metadata = await getMetadata(); + const currentPost = metadata.find((post) => post.slug === slug); + + if (!currentPost) throw new Error(`Post not found: ${slug}`); + + return { currentPost, allPosts: metadata }; +}; diff --git a/src/routes/blog/nf-conntrack-overflow-docker-udp-tracker/+page.svelte b/src/routes/blog/nf-conntrack-overflow-docker-udp-tracker/+page.svelte new file mode 100644 index 0000000..6ce7e45 --- /dev/null +++ b/src/routes/blog/nf-conntrack-overflow-docker-udp-tracker/+page.svelte @@ -0,0 +1,968 @@ + + + + +
+ +
+

Introduction

+

+ When you run a UDP BitTorrent tracker behind Docker bridge networking, the Linux kernel + creates conntrack (connection tracking) entries for UDP flows that pass through Docker's + NAT layer. Under sustained tracker load those entries accumulate faster than they expire, + the conntrack table fills up, and the kernel starts silently dropping packets. +

+

+ The result is intermittent UDP timeouts with a characteristic self-recovery + cycle: the table fills, a probe gets dropped, entries expire, the table drains, the next probe + succeeds, and the cycle repeats. The application log is completely silent. No error, no counter, + no warning — just unexplained timeout spikes on your uptime monitor. +

+

+ This post documents the mechanism behind the problem, how to diagnose it, the fix, and a + reboot-persistence trap that trips many operators. +

+ +

Our Experience: Repeated Incidents Across Two Demos

+ +

First Demo — DigitalOcean (2024–2025)

+

+ The first occurrence was on the original + torrust/torrust-demo hosted on + DigitalOcean. UDP uptime on + newTrackon had been fluctuating and eventually + dropped to around 60 % at peak. The investigation is documented in + torrust/torrust-demo#26. +

+

+ The kernel journal confirmed: nf_conntrack: table full, dropping packet with + 20 million+ early_drop events on CPU 3. After increasing + nf_conntrack_max, UDP uptime on + newTrackon recovered to + 99.2 %. +

+

+ A few months later, in June 2025, the same DigitalOcean server filled the conntrack table + again (uptime back down to about 90 %, with fresh + nf_conntrack: table full, dropping packet messages and tens of millions of + early_drop events on CPU 3). The follow-up investigation in + torrust/torrust-demo#72 + tried to go further than just raising the ceiling and disable conntrack for the tracker port + altogether using NOTRACK rules. As the + Alternative Approaches section below describes in detail, that + attempt failed in our Docker setup — even after switching the tracker to + --network=host mode — and ultimately required restoring a server backup. We kept + the sysctl tuning and migrated the demo to Hetzner shortly afterwards. +

+ +

New Tracker Demo — Hetzner (2026)

+

+ In April 2026 we migrated the + Torrust Tracker Demo to + Hetzner and resized the server from a CCX23 (4 vCPU, 16 GB RAM) to a CCX33 (8 vCPU, 32 GB + RAM) to improve performance. The opposite happened: UDP uptime the day after the resize + was + 83.9 %, down from 92.2 % before the resize. +

+

+ As we explain in the symptom section below, a larger server can make things + worse: more processing power means more requests per second, which fills the + conntrack table faster and increases the drop rate. +

+

+ Investigation (tracked in + torrust/torrust-tracker-demo#21) found nf_conntrack_count = nf_conntrack_max = 262144 — the table + completely full — with 2478 "table full" messages in dmesg. +

+

+
 The fix was applied on 2026-04-20 (see
 torrust/torrust-tracker-demo PR #22) with all three parameters and the module pre-load. We then monitored
 newTrackon for seven days to confirm recovery.

+ + + Confirmed outcome (2026-04-27): the 7-day post-fix observation window is + complete. newTrackon reports UDP uptime at 99.9 % — above the 99.0 % + target. The conntrack table stabilised at roughly 32–34 % utilisation (≈ 340 000 of 1 048 + 576 entries) with no table-full events in dmesg and zero IPv4 + UdpRcvbufErrors. The fix held across a server reboot and at peak load (~750 + UDP req/s, ~2 000 HTTP req/s). Before the fix, UDP uptime had been as low as 83.9 % on the + day the conntrack table first filled (262 144 / 262 144 entries). + + +

The Symptom

+

+ If you run a UDP tracker and observe any of the following on an uptime monitor such as + newTrackon, you may be hitting conntrack exhaustion: +

+
    +
  • + UDP availability drops intermittently to 60–90 % while the HTTP tracker stays healthy. +
  • +
  • + Outages are self-recovering — they resolve without any operator intervention, typically + within seconds to a few minutes. +
  • +
  • + You cannot reproduce the problem by sending a single announce manually; it only appears + under sustained load. +
  • +
  • + There is nothing relevant in the tracker application log, the Docker logs, or + netstat / ss socket counters. +
  • +
  • + Restarting the tracker or Docker has no lasting effect — the problem returns once load + resumes. +
  • +
  • + Upgrading the server to a larger instance (more CPU, more RAM) makes things + worse because the tracker can now handle more requests per second, which fills the + conntrack table faster. +
  • +
+ + + A counter-intuitive signal: if your UDP uptime drops after you resize + to a larger server, conntrack exhaustion is the likely explanation. More processing power increases + request throughput, which exhausts the table sooner. + + +

Why It's Hard to Diagnose

+

The standard places you look for dropped packets do not show this problem:

+
    +
  • + Application log: the tracker process never sees the dropped packet. The kernel + drops it before it reaches the socket. +
  • +
  • + Socket receive-buffer drops: + ss -u -s and netstat -su show socket-level drops, not kernel-level + conntrack drops. They will not increment. +
  • +
  • + Firewall logs: iptables / ufw log rules fire on + packets that reach the firewall. A packet dropped by the conntrack subsystem before the firewall + never appears in those logs. +
  • +
  • + Docker logs: Docker has no visibility into kernel packet drops. +
  • +
+

+ The primary evidence is in dmesg and conntrack counters in + /proc/sys/net/netfilter/. +

+ + + +
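A quick check for that evidence on a live host looks like this (assuming a modern Ubuntu/Debian system with the nf_conntrack module loaded):

```bash
# Kernel log: the telltale drop message
sudo dmesg | grep "nf_conntrack: table full"

# Current entry count vs the configured ceiling
cat /proc/sys/net/netfilter/nf_conntrack_count
cat /proc/sys/net/netfilter/nf_conntrack_max
```

If the count sits at (or near) the maximum, the table is full and new UDP flows are being dropped.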

+ When we investigated the second occurrence on Hetzner, we found + nf_conntrack_count = nf_conntrack_max = 262144 — the table was completely + full at the moment of inspection — and 2478 "table full" drop messages in + dmesg. +

+ +

The Mechanism: Docker DNAT and Conntrack

+ +

How Docker Publishes UDP Ports

+

+ When you publish a UDP port in Docker (-p 6969:6969/udp), Docker installs a + DNAT (Destination Network Address Translation) rule in iptables. This rule + rewrites the destination address of every inbound packet from the host's public IP to the + container's private bridge IP. +

+

+ NAT requires connection tracking. The kernel must remember which packets were rewritten so + it can apply the reverse translation to outbound replies. For each new UDP "flow" (unique + source IP + source port combination), the kernel creates a conntrack entry. +
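Both halves of this are visible on a running host: the DNAT rule Docker installed for the published port, and the conntrack entries the kernel keeps for the rewritten flows (commands assume the iptables and conntrack CLIs are available):

```bash
# One DNAT rule per published port lives in the DOCKER chain of the nat table
sudo iptables -t nat -nL DOCKER

# A sample of the UDP conntrack entries those rewrites create
sudo conntrack -L -p udp | head
```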

+ +

How Entries Accumulate Under UDP Tracker Load

+

+ Unlike TCP, UDP has no handshake. The kernel cannot know when a UDP exchange is + "finished", so each entry persists until a configurable timeout expires: +

+
    +
  • + One-way (unreplied) UDP: default timeout is + 30 seconds. +
  • +
  • + Bidirectional (replied) UDP: default timeout is + 120 seconds. +
  • +
+

+ A BitTorrent tracker announce is a request–response exchange, so entries are classified as + bidirectional with the 120-second timeout. Each unique client IP/port pair that sends an + announce holds a conntrack entry for two full minutes. +

+ + + Not the same as the announce interval. The conntrack timeout is a + kernel-level timer — it controls how long the NAT translation entry survives after the + last packet. It is completely independent of the tracker's announce interval, + which is the time the tracker tells BitTorrent clients to wait before re-announcing the + same torrent. The Torrust Tracker Demo sets an announce interval of 300 seconds (5 + minutes); newTrackon requires announce intervals + between 5 minutes and 3 hours. Each re-announce typically arrives on a new ephemeral source + port, creating a fresh conntrack entry regardless of whether the previous entry has expired. + + +

The Calculation

+

+ The minimum conntrack table size needed to handle your request rate without dropping + packets is: +

+

+ minimum_table_size = requests_per_second × udp_stream_timeout_seconds +

+

+ With default settings (udp_timeout_stream = 120 s) and a table size of 262 + 144 entries: +

+
    +
  • Maximum safe request rate = 262 144 ÷ 120 ≈ 2 184 requests/s
  • +
+

+ That sounds large, but BitTorrent clients re-announce every 30–60 minutes from a rotating + pool of ports. A tracker with tens of thousands of active torrents, each with dozens of + peers, easily exceeds this rate at peak times. +

+

+ Reducing the stream timeout to 15 seconds multiplies the effective capacity by 8× without + changing the table size: +

+
    +
  • 262 144 ÷ 15 ≈ 17 476 requests/s at the default table size
  • +
+
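The arithmetic above can be checked directly in the shell (integer division, so the results are floored):

```bash
table_size=262144

# Default 120 s stream timeout
echo $(( table_size / 120 ))  # 2184 requests/s

# Reduced 15 s stream timeout: 8x the effective capacity
echo $(( table_size / 15 ))   # 17476 requests/s
```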

+ Combining a larger table with a shorter timeout gives significant headroom even on a busy + public tracker. +

+ +

The Fix: Three Kernel Parameters

+

+ Create (or edit) /etc/sysctl.d/99-conntrack.conf with the following content + (the deployed version for the Torrust Tracker Demo is at + server/etc/sysctl.d/99-conntrack.conf): +

+ + + +
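A sketch of that file using the values reported in this post (a 1 048 576-entry ceiling and a 15-second stream timeout); the 10-second unreplied timeout is an assumption — check the linked repository file for the exact deployed values:

```ini
# /etc/sysctl.d/99-conntrack.conf

# Raise the conntrack table ceiling (the default on our server was 262144)
net.netfilter.nf_conntrack_max = 1048576

# Unreplied UDP flows (kernel default: 30 s; 10 s here is an assumed value)
net.netfilter.nf_conntrack_udp_timeout = 10

# Replied, bidirectional UDP flows such as tracker announces (kernel default: 120 s)
net.netfilter.nf_conntrack_udp_timeout_stream = 15
```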

Apply the settings immediately without rebooting:
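On a systemd-based distribution:

```bash
# Re-reads every file under /etc/sysctl.d/ (and the /run, /usr/lib equivalents)
sudo sysctl --system
```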

+ + + +

Verify that the new values are active:
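Each key should echo back the value you configured:

```bash
sudo sysctl net.netfilter.nf_conntrack_max
sudo sysctl net.netfilter.nf_conntrack_udp_timeout
sudo sysctl net.netfilter.nf_conntrack_udp_timeout_stream
```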

+ + + + + The three values above are a conservative starting point. You can calculate a more precise + nf_conntrack_max from your actual request rate using the formula in the + previous section. Raising the table ceiling increases kernel memory usage (roughly 300–400 + bytes per entry). At nf_conntrack_max = 1 048 576 that is ≈ 384 MB of kernel memory + reserved for the conntrack table — trivial on a 32 GB server, but worth budgeting for on a 1–2 + GB VPS. + + +

Don't Forget the Hash Table

+

+ When you raise nf_conntrack_max by an order of magnitude, the + hash bucket count does not auto-scale. The default is around 65 536 + buckets; if you keep that while raising the ceiling to 1 048 576, every lookup walks long + collision chains and table operations degrade from O(1) toward O(n). The recommended ratio + is roughly + nf_conntrack_max / 4 to nf_conntrack_max / 8. +

+

+ You can tune buckets with the nf_conntrack_buckets sysctl (writeable in the + initial network namespace) or set the module parameter hashsize for early-boot + consistency. +

+ + + +
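Both options, assuming the 1 048 576-entry ceiling from the previous section and a max/4 ratio (the /etc/modprobe.d filename is illustrative):

```bash
# Runtime: resize the hash table immediately (initial network namespace only)
sudo sysctl -w net.netfilter.nf_conntrack_buckets=262144

# Early boot: set the module parameter so the size is right from module load
echo "options nf_conntrack hashsize=262144" | sudo tee /etc/modprobe.d/nf_conntrack.conf
```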

Reduced Timeouts Are Global

+

+ The nf_conntrack_udp_timeout* values are kernel-wide — they apply to every + UDP flow on the host, not only to tracker traffic. A 15-second stream timeout is + appropriate for request–response protocols like a BitTorrent tracker, DNS resolver, or + QUIC server, but it can be aggressive for long-lived UDP services such as WireGuard, + IPsec, VoIP/SIP gateways, or long-running game servers. If you co-host such services, + either keep the default 120 s or use + NOTRACK rules (see the + Alternative Approaches section) to exempt them from connection tracking + entirely. +

+ +

The Reboot Persistence Trap

+

+ This is where many operators get burned: you apply the fix, it works perfectly, you reboot + the server, and the problem silently comes back. +

+

+ The net.netfilter.nf_conntrack_* sysctl keys only exist after the + nf_conntrack kernel module has been loaded. The module is loaded by Docker + when Docker starts. However, systemd applies sysctl configuration at boot + before Docker runs — so when systemd reads + /etc/sysctl.d/99-conntrack.conf, the keys do not exist yet and the settings + are silently skipped. +

+

The fix is to instruct the kernel to pre-load the module during boot:

+ + + +
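The file contains a single module name:

```bash
# systemd-modules-load reads this directory at boot, before sysctl.d is applied
echo "nf_conntrack" | sudo tee /etc/modules-load.d/conntrack.conf
```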

+ With this in place, the module is loaded early in the boot sequence, the sysctl keys exist + when systemd applies sysctl.d, and the settings take effect before Docker + starts. +

+ + + Always pair the sysctl config with the module pre-load. Without + /etc/modules-load.d/conntrack.conf, the settings will not survive a reboot + even though sysctl --system confirms they are active on the running system. + + +

+ After the next reboot, verify both that the module is loaded and that the values are + correct: +

+ + + +
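For example (the expected values match the configuration from the fix section):

```bash
# Is the module loaded?
lsmod | grep nf_conntrack

# Did the sysctl values survive the reboot? (expected: 1048576 and 15)
sudo sysctl net.netfilter.nf_conntrack_max
sudo sysctl net.netfilter.nf_conntrack_udp_timeout_stream
```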

Alternative Approaches: Avoid the Problem Entirely

+

+ Tuning conntrack raises the ceiling, but the most fundamental fix is to stop creating + conntrack entries for tracker traffic in the first place. There are three approaches worth + knowing about, in order of how invasive they are. +

+ +

1. Host Networking (--network=host)

+

+ Running the tracker container with --network=host bypasses Docker's bridge + and DNAT layer entirely. The tracker binds directly to the host network namespace, so no + NAT rewrite happens and no conntrack entry is created for incoming UDP packets. +

+

+ This is what many high-volume public trackers do. Trade-offs: you lose Docker's network + isolation between containers, port mappings (-p host:container) are ignored, + and the container can collide with any other process listening on the same port on the + host. +

+ + + +
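A minimal Docker Compose sketch (the service name and image reference are illustrative):

```yaml
services:
  tracker:
    image: torrust/tracker:latest # illustrative image reference
    # Bind directly in the host network namespace: no bridge, no DNAT,
    # no conntrack entries for incoming tracker traffic.
    network_mode: host
    # NOTE: any `ports:` mapping would be silently ignored in this mode.
```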

2. NOTRACK on the Tracker Port

+

+ If you want to keep bridge networking for isolation, you can tell the kernel to skip + connection tracking for traffic on the tracker port using a rule in the + raw table. Modern Ubuntu / Debian uses iptables-nft under the + hood, so the cleanest way to express these rules is directly in nftables. Add + the following to /etc/nftables.conf: +

+ + + +
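A sketch of such a ruleset for the 6969/udp tracker port (the table and chain names are a matter of convention; `priority raw` hooks the chains before conntrack runs):

```nft
table inet raw {
    chain prerouting {
        type filter hook prerouting priority raw; policy accept;
        udp dport 6969 notrack
    }
    chain output {
        type filter hook output priority raw; policy accept;
        udp sport 6969 notrack
    }
}
```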

Apply and persist across reboots:

+ + + +
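For example:

```bash
# Load the ruleset now
sudo nft -f /etc/nftables.conf

# Confirm the notrack rules are present
sudo nft list ruleset | grep notrack

# Make sure the ruleset is re-applied at boot
sudo systemctl enable nftables
```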

For comparison, the equivalent classic iptables form is:

+ + + +
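Assuming the same 6969/udp port:

```bash
# Rules in the raw table run before conntrack, so matched packets are never tracked
sudo iptables -t raw -A PREROUTING -p udp --dport 6969 -j NOTRACK
sudo iptables -t raw -A OUTPUT -p udp --sport 6969 -j NOTRACK
```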

+ With NOTRACK, packets bypass conntrack and the table never grows from tracker + traffic. The catch is significant: NAT requires conntrack, so once you + stop tracking these packets, Docker's automatic DNAT for the published port no longer + works. +

+ + + We tried this and it broke the tracker. In + torrust/torrust-demo#72 + we added the nftables rules above, confirmed they were active (conntrack -S + showed early_drop = 0), and immediately UDP announces from + newTrackon and from our own + tracker_checker client started timing out. HTTP kept working. Switching the + tracker container to network_mode: host (per + torrust/torrust-demo#27) + did not fix it either, and we eventually had to + restore a server backup. A + secondary problem we observed: even with port-level NOTRACK, internal Docker + traffic to the tracker (statsd on 8125, healthchecks, the index calling the tracker over + 127.0.0.1) was still being tracked because those flows go through the + loopback / bridge interfaces, not through the public DNAT path. + + +

+ The takeaway is that NOTRACK is most useful with macvlan or with a bare-metal + install that does not rely on Docker's DNAT/iptables rules. With host networking, many + setups do not need NOTRACK at all. In a typical multi-container Docker Compose + setup it is fragile and hard to get right. +

+ + + Reboot trap, again. If you do go down the nftables route, + run sudo systemctl enable nftables. We hit a case where the rules in + /etc/nftables.conf were syntactically valid and present on disk, but + nft list ruleset came back empty after a reboot because the + nftables service was not enabled. + + +

3. macvlan Network Driver

+

+ The macvlan driver + gives the container its own MAC address and IP on the physical LAN. Packets reach the container + without NAT, so no conntrack entries are created on the host for tracker traffic. This preserves + container isolation but requires more involved network setup (a parent interface in promiscuous + mode, an IP plan, and a host that is allowed to claim multiple MACs — which rules out most cloud + providers that filter on the upstream switch). +

+ + + Why we kept Docker bridge networking on the demo. The Torrust Tracker + Demo uses Docker Compose with bridge networking because the same stack also runs HTTP + services behind a reverse proxy and benefits from Docker's built-in DNS service discovery + between containers. For us, sysctl tuning is the right balance. For a single-purpose, + high-throughput public UDP tracker, --network=host is usually the simplest and + most efficient choice. + + +

Monitoring and Verification

+

+
 After applying the fix, use these commands to confirm that the table is no longer
 being exhausted. The conntrack CLI is not installed by default on most
 distributions; install it first:
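On Debian-based systems the CLI ships in the conntrack package (other distributions may package it under a different name):

```bash
sudo apt-get install conntrack
```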

+
```bash
# Current vs maximum conntrack entries
cat /proc/sys/net/netfilter/nf_conntrack_count
cat /proc/sys/net/netfilter/nf_conntrack_max

# Drop messages since boot
dmesg | grep -c "table full"

# Conntrack statistics per CPU (early_drop column indicates table pressure)
sudo conntrack -S

# One-liner for the drop count across all CPUs
sudo conntrack -S | awk '{for (i=1;i<=NF;i++) if ($i ~ /^early_drop=/) { split($i,a,"="); sum += a[2] } } END {print "total early_drop:", sum+0}'
```

+ The conntrack -S output includes an early_drop counter per CPU. A + non-zero value means the kernel had to evict entries early to make room — a leading indicator + of exhaustion before packets start dropping. If this counter is growing, you need a larger table + or shorter timeouts. +

+

+ On the first Torrust demo, we observed 20 million+ early_drop events on CPU 3 + before the fix. After increasing nf_conntrack_max and adjusting the timeouts, the + counter stabilized at zero. +

+ + + Consider adding a simple alerting rule that fires when + nf_conntrack_count / nf_conntrack_max > 0.8. At 80 % fill, entries are + still being accepted; at 100 % they are being dropped. Catching it at 80 % gives you time + to react without customer-facing impact. + + +
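A minimal sketch of such a check, written as a shell function so the threshold logic is easy to test; in production you would feed it the live values from /proc/sys/net/netfilter/ and page an operator when the result crosses 80:

```bash
# Returns the conntrack fill level as an integer percentage.
# $1 = current entry count, $2 = table ceiling
conntrack_fill_pct() {
  echo $(( $1 * 100 / $2 ))
}

# Values from this post:
conntrack_fill_pct 262144 262144   # table completely full -> 100
conntrack_fill_pct 340000 1048576  # post-fix steady state -> 32
```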

Independent Documentation

+

+ This is not unique to Torrust. The + ftorrent/open README + — a comprehensive guide to running the + Aquatic tracker in Docker — covers + the same problem in its "Kernel tuning for bridge networking" section. That guide + documents the same + nf_conntrack_max, nf_conntrack_udp_timeout, and + nf_conntrack_udp_timeout_stream fixes, and extends them with two additional + parameters: net.core.rmem_max / rmem_default to size UDP socket + receive buffers, and net.core.netdev_max_backlog to prevent softirq drops + when Docker's veth pair adds per-packet overhead. It also covers the same + reboot-persistence trap (pre-loading the nf_conntrack module) and provides matching + monitoring commands. +

+

+ Any UDP service that receives sustained traffic through Docker bridge networking and + Docker's DNAT layer is susceptible. BitTorrent trackers happen to be a high-frequency case + because every peer re-announces periodically, generating a constant stream of short + request–response exchanges. +

+ +

Further Reading

+

+ The resources below independently document the same conntrack problem and cover related + topics for anyone running a public tracker with Docker. +

+
    +
  • + Running Aquatic in Docker: A Complete Guide to Public BitTorrent and WebTorrent + Trackers + — A detailed guide to deploying the + Aquatic tracker (a Rust implementation + of all three BitTorrent tracker protocols) in hardened Docker containers. Covers conntrack + tuning, UDP socket buffers, NIC backlog, Docker bridge networking security, container hardening + with dropped capabilities and custom seccomp profiles, IPv6 dual-stack, and reverse proxy + setup. The "Kernel tuning for bridge networking" section is directly relevant to this post. +
  • +
  • + torrust/torrust-demo#26 + — The GitHub issue tracking our first encounter with this problem on the DigitalOcean demo. + Includes the kernel journal output showing + nf_conntrack: table full, dropping packet and the initial fix. +
  • +
  • + torrust/torrust-demo#72 + — The follow-up issue from June 2025 documenting the second occurrence on the same DigitalOcean + droplet, the failed attempt to disable conntrack with + nftables NOTRACK rules (with and without + --network=host), and the localhost-tracking gotcha that affects + multi-container Docker setups. Closely related to + torrust/torrust-demo#27 + (Docker network configuration) and + torrust/torrust-demo#78 (the + backup restore that followed). +
  • +
  • + torrust/torrust-tracker-demo#21 + — The issue tracking the second occurrence on the Hetzner tracker demo, along with + PR #22 + which added the sysctl settings and the conntrack module pre-load to the deployer. +
  • +
+ +

Related Posts on This Blog

+ + +

Official Documentation

+
    +
  • + Linux kernel: Netfilter Conntrack Sysfs variables + — The authoritative reference for every nf_conntrack_* sysctl parameter, + including the default values for nf_conntrack_udp_timeout (30 s), + nf_conntrack_udp_timeout_stream (120 s), and + nf_conntrack_max. +
  • +
  • + Docker Engine: Port publishing and mapping + — Explains how Docker uses NAT, PAT, and masquerading to forward traffic to published container + ports, and the role of iptables firewall rules in that process. +
  • +
  • + Docker Engine: Docker with iptables + — Documents the custom iptables chains Docker creates (including the + DOCKER nat table for port-mapping) and notes that packets in + the DOCKER-USER chain have already been DNAT-rewritten — confirming why the + conntrack extension is required to match original IP/port. +
  • +
  • + Docker Engine: Packet filtering and firewalls + — Overview of Docker's firewall rule model for bridge networks, including masquerading and + the interaction with external firewall tools. +
  • +
  • + Docker Engine: Bridge network driver + — Covers how Docker's default bridge network works, including IP masquerading and port publishing + to host addresses. +
  • +
+ +

Lessons

+
    +
  • + The application log is not enough. For kernel-level drops, check + dmesg and /proc/sys/net/netfilter/. +
  • +
  • + A larger server can make conntrack exhaustion worse, not better. + More throughput fills the table faster if the table size is unchanged. +
  • +
  • + Always pre-load the module. Without + /etc/modules-load.d/conntrack.conf, the sysctl settings will not survive a + reboot. +
  • +
  • + This affects any UDP service behind Docker bridge networking at non-trivial + request rates — not just BitTorrent trackers. DNS resolvers, game servers, VoIP services, + and QUIC-based applications are equally vulnerable. +
  • +
  • + Reducing UDP timeouts is safe for request–response protocols. A BitTorrent + announce completes in milliseconds. The default 120-second stream timeout exists for stateful + protocols; for stateless UDP services, shorter timeouts are appropriate and dramatically increase + effective table capacity. +
  • +
  • + Monitor conntrack fill level proactively. An alert at 80 % gives you time + to respond before packets start dropping. +
  • +
  • + Resize the hash table when you raise the ceiling. + nf_conntrack_max and the bucket count (hashsize) are + independent. Raising one without the other turns O(1) lookups into O(n) chain walks. +
  • +
  • + Consider eliminating the problem instead of tuning around it. + --network=host, NOTRACK rules, and the macvlan driver + all remove conntrack from the path entirely. Sysctl tuning is the right call when you need + bridge networking; otherwise it is treating a symptom. +
  • +
  • + + NOTRACK is harder than it looks in a multi-container Docker setup. + + A port-level rule does not catch flows that traverse loopback or the Docker bridge (statsd, + healthchecks, container-to-container traffic), and disabling tracking on a NAT-published port + breaks Docker's DNAT. We tried it twice on the DigitalOcean demo and reverted both times — + see + torrust/torrust-demo#72. +
  • +
+
+
+
+ + +
+ + diff --git a/src/routes/blog/nf-conntrack-overflow-docker-udp-tracker/metadata.ts b/src/routes/blog/nf-conntrack-overflow-docker-udp-tracker/metadata.ts new file mode 100644 index 0000000..7888fb0 --- /dev/null +++ b/src/routes/blog/nf-conntrack-overflow-docker-udp-tracker/metadata.ts @@ -0,0 +1,12 @@ +export const metadata = { + title: 'How nf_conntrack Overflow Causes Intermittent UDP Tracker Downtime with Docker', + slug: 'nf-conntrack-overflow-docker-udp-tracker', + contributor: 'Jose Celano', + contributorSlug: 'jose-celano', + date: '2026-04-27T12:00:00.000Z', + coverImage: + '/images/posts/nf-conntrack-overflow-docker-udp-tracker/nf-conntrack-overflow-docker-udp-tracker.webp', + excerpt: + 'A subtle Linux kernel resource exhaustion silently drops UDP packets when running a BitTorrent tracker behind Docker bridge networking. No application error, no socket counter — just intermittent timeouts and a self-recovery cycle. Here is how to diagnose it, fix it, and make sure the fix survives a reboot.', + tags: ['BitTorrent', 'Tracker', 'Networking', 'Docker', 'Linux', 'Deployment'] +}; diff --git a/static/blogMetadata.json b/static/blogMetadata.json index 69e7463..901668b 100644 --- a/static/blogMetadata.json +++ b/static/blogMetadata.json @@ -141,17 +141,17 @@ ] }, { - "title": "How To Setup The Dev Env", - "slug": "how-to-setup-the-development-environment", + "title": "How To Run A Local Demo", + "slug": "how-to-run-a-local-demo", "contributor": "Jose Celano", "contributorSlug": "jose-celano", - "date": "2023-07-11T12:29:04.295Z", - "coverImage": "/images/posts/development-environment.png", - "excerpt": "If you want to contribute to the Torrust Index, this article explains how to setup a development environment with the latest versions for all services.", + "date": "2023-07-11T15:28:28.769Z", + "coverImage": "/images/posts/mandelbrot-set-periods-torrent-screenshot.png", + "excerpt": "You can easily run the Torrust Index demo on your computer easily with Git 
and Docker.", "tags": [ - "Torrent", - "Tracker", - "BitTorrent" + "Demo", + "Tutorial", + "Guide" ] }, { @@ -168,31 +168,17 @@ ] }, { - "title": "Introducing the Torrust Tracker Deployer", - "slug": "introducing-the-torrust-tracker-deployer", - "contributor": "Jose Celano", - "contributorSlug": "jose-celano", - "date": "2025-12-22T00:00:00.000Z", - "coverImage": "/images/posts/introducing-the-torrust-tracker-deployer/torrust-tracker-deployer.png", - "excerpt": "We're excited to announce a new tool we've been working on: the Torrust Tracker Deployer. This tool aims to simplify the deployment process of the Torrust Tracker to virtual machines, making it as easy as running a single command.", - "tags": [ - "Deployment", - "Automation", - "Announcement" - ] - }, - { - "title": "How To Run A Local Demo", - "slug": "how-to-run-a-local-demo", + "title": "How To Setup The Dev Env", + "slug": "how-to-setup-the-development-environment", "contributor": "Jose Celano", "contributorSlug": "jose-celano", - "date": "2023-07-11T15:28:28.769Z", - "coverImage": "/images/posts/mandelbrot-set-periods-torrent-screenshot.png", - "excerpt": "You can easily run the Torrust Index demo on your computer easily with Git and Docker.", + "date": "2023-07-11T12:29:04.295Z", + "coverImage": "/images/posts/development-environment.png", + "excerpt": "If you want to contribute to the Torrust Index, this article explains how to setup a development environment with the latest versions for all services.", "tags": [ - "Demo", - "Tutorial", - "Guide" + "Torrent", + "Tracker", + "BitTorrent" ] }, { @@ -209,6 +195,20 @@ "Documentation" ] }, + { + "title": "Introducing the Torrust Tracker Deployer", + "slug": "introducing-the-torrust-tracker-deployer", + "contributor": "Jose Celano", + "contributorSlug": "jose-celano", + "date": "2025-12-22T00:00:00.000Z", + "coverImage": "/images/posts/introducing-the-torrust-tracker-deployer/torrust-tracker-deployer.png", + "excerpt": "We're excited to announce a new tool 
we've been working on: the Torrust Tracker Deployer. This tool aims to simplify the deployment process of the Torrust Tracker to virtual machines, making it as easy as running a single command.", + "tags": [ + "Deployment", + "Automation", + "Announcement" + ] + }, { "title": "The New Torrust Tracker Demo Is Live", "slug": "new-torrust-tracker-demo", @@ -243,16 +243,20 @@ ] }, { - "title": "Released version v3.0.0-beta", - "slug": "released-v3-0-0", + "title": "How nf_conntrack Overflow Causes Intermittent UDP Tracker Downtime with Docker", + "slug": "nf-conntrack-overflow-docker-udp-tracker", "contributor": "Jose Celano", "contributorSlug": "jose-celano", - "date": "2024-09-03T14:30:38.554Z", - "coverImage": "/images/posts/released-v3-0-0-beta/team.png", - "excerpt": "We're excited to announce the release of v3.0.0-beta, marking a significant step towards our upcoming major release, v3.0.0. This release solidifies the features and prepares us for the beta phase.", + "date": "2026-04-27T12:00:00.000Z", + "coverImage": "/images/posts/nf-conntrack-overflow-docker-udp-tracker/nf-conntrack-overflow-docker-udp-tracker.webp", + "excerpt": "A subtle Linux kernel resource exhaustion silently drops UDP packets when running a BitTorrent tracker behind Docker bridge networking. No application error, no socket counter — just intermittent timeouts and a self-recovery cycle. 
Here is how to diagnose it, fix it, and make sure the fix survives a reboot.", "tags": [ - "Announcement", - "Release" + "BitTorrent", + "Tracker", + "Networking", + "Docker", + "Linux", + "Deployment" ] }, { @@ -270,18 +274,16 @@ ] }, { - "title": "Review and Setup Guide for UNIT3D", - "slug": "review-and-setup-guide-for-unit3d", + "title": "Released version v3.0.0-beta", + "slug": "released-v3-0-0", "contributor": "Jose Celano", "contributorSlug": "jose-celano", - "date": "2024-08-16T09:36:17.990Z", - "coverImage": "/images/posts/review-and-setup-guide-for-unit3d/unit3d-user-profile-screenshot.png", - "excerpt": "UNIT3D is one of the fully featured BitTorrent Indexes that promises a robust, customizable, and community-driven experience. In this first post of our review series at Torrust, we’ll dive into UNIT3D, evaluating its features, installation process, and overall usability. Whether you're an open-source advocate or a BitTorrent expert, this guide will help you understand the ins and outs of UNIT3D, including a step-by-step tutorial to set it up on a Digital Ocean droplet.", + "date": "2024-09-03T14:30:38.554Z", + "coverImage": "/images/posts/released-v3-0-0-beta/team.png", + "excerpt": "We're excited to announce the release of v3.0.0-beta, marking a significant step towards our upcoming major release, v3.0.0. 
This release solidifies the features and prepares us for the beta phase.", "tags": [ - "Tutorial", - "Review", - "Index", - "Third-party" + "Announcement", + "Release" ] }, { @@ -298,17 +300,34 @@ ] }, { - "title": "Setting Up Torrust with Claude Code", - "slug": "setting-up-torrust-with-claude-code", - "contributor": "Graeme Byrne", - "contributorSlug": "graeme-byrne", - "date": "2025-07-09T19:53:17.990Z", - "coverImage": "/images/posts/setting-up-torrust-with-claude-code/setting-up-torrust-with-claude-code.png", - "excerpt": "Based on a real terminal session, this guide documents the complete process of setting up the Torrust BitTorrent index development environment using Claude Code, including all the challenges encountered and solutions applied.", + "title": "Review and Setup Guide for UNIT3D", + "slug": "review-and-setup-guide-for-unit3d", + "contributor": "Jose Celano", + "contributorSlug": "jose-celano", + "date": "2024-08-16T09:36:17.990Z", + "coverImage": "/images/posts/review-and-setup-guide-for-unit3d/unit3d-user-profile-screenshot.png", + "excerpt": "UNIT3D is one of the fully featured BitTorrent Indexes that promises a robust, customizable, and community-driven experience. In this first post of our review series at Torrust, we’ll dive into UNIT3D, evaluating its features, installation process, and overall usability. 
Whether you're an open-source advocate or a BitTorrent expert, this guide will help you understand the ins and outs of UNIT3D, including a step-by-step tutorial to set it up on a Digital Ocean droplet.", "tags": [ "Tutorial", - "AI", - "Claude" + "Review", + "Index", + "Third-party" + ] + }, + { + "title": "How to Run a UDP Tracker Behind a Floating IP on Ubuntu", + "slug": "setup-udp-tracker-behind-floating-ip", + "contributor": "Jose Celano", + "contributorSlug": "jose-celano", + "date": "2026-04-14T00:00:00.000Z", + "coverImage": "/images/posts/setup-udp-tracker-behind-floating-ip/udp-tracker-floating-ip-ipv6-docker-configuration.webp", + "excerpt": "A practical guide to running a UDP BitTorrent tracker behind floating IPs (also known as static, reserved, or elastic IPs) on Ubuntu, including policy routing, Docker IPv6 networking, and SNAT for correct reply paths.", + "tags": [ + "BitTorrent", + "Tracker", + "Networking", + "IPv6", + "Deployment" ] }, { @@ -328,19 +347,17 @@ ] }, { - "title": "How to Run a UDP Tracker Behind a Floating IP on Ubuntu", - "slug": "setup-udp-tracker-behind-floating-ip", - "contributor": "Jose Celano", - "contributorSlug": "jose-celano", - "date": "2026-04-14T00:00:00.000Z", - "coverImage": "/images/posts/setup-udp-tracker-behind-floating-ip/udp-tracker-floating-ip-ipv6-docker-configuration.webp", - "excerpt": "A practical guide to running a UDP BitTorrent tracker behind floating IPs (also known as static, reserved, or elastic IPs) on Ubuntu, including policy routing, Docker IPv6 networking, and SNAT for correct reply paths.", + "title": "Setting Up Torrust with Claude Code", + "slug": "setting-up-torrust-with-claude-code", + "contributor": "Graeme Byrne", + "contributorSlug": "graeme-byrne", + "date": "2025-07-09T19:53:17.990Z", + "coverImage": "/images/posts/setting-up-torrust-with-claude-code/setting-up-torrust-with-claude-code.png", + "excerpt": "Based on a real terminal session, this guide documents the complete process of 
setting up the Torrust BitTorrent index development environment using Claude Code, including all the challenges encountered and solutions applied.", "tags": [ - "BitTorrent", - "Tracker", - "Networking", - "IPv6", - "Deployment" + "Tutorial", + "AI", + "Claude" ] }, { @@ -357,6 +374,19 @@ "BitTorrent" ] }, + { + "title": "Torrust - Enhancing the BitTorrent Ecosystem", + "slug": "torrust-enhancing-the-bittorrent-ecosystem", + "contributor": "Jose Celano", + "contributorSlug": "jose-celano", + "date": "2024-05-31T09:33:14.163Z", + "coverImage": "/images/posts/deploying-torrust-to-production/deploy-torrust-to-a-digital-ocean-droplet.png", + "excerpt": "Torrust, an open-source organization, is making significant contributions to the BitTorrent ecosystem by developing robust tools, improving documentation, and fostering community collaboration.", + "tags": [ + "Introduction", + "Torrust" + ] + }, { "title": "BitTorrent Trackers Implemented in Rust", "slug": "trackers-implemented-in-rust", @@ -372,19 +402,6 @@ "Open Source" ] }, - { - "title": "Torrust - Enhancing the BitTorrent Ecosystem", - "slug": "torrust-enhancing-the-bittorrent-ecosystem", - "contributor": "Jose Celano", - "contributorSlug": "jose-celano", - "date": "2024-05-31T09:33:14.163Z", - "coverImage": "/images/posts/deploying-torrust-to-production/deploy-torrust-to-a-digital-ocean-droplet.png", - "excerpt": "Torrust, an open-source organization, is making significant contributions to the BitTorrent ecosystem by developing robust tools, improving documentation, and fostering community collaboration.", - "tags": [ - "Introduction", - "Torrust" - ] - }, { "title": "Visualize Tracker Metrics with Prometheus and Grafana", "slug": "visualize-tracker-metrics-prometheus-grafana", @@ -401,6 +418,20 @@ "Community" ] }, + { + "title": "What is a BitTorrent tracker and types of trackers", + "slug": "what-is-a-bittorent-tracker", + "contributor": "", + "contributorSlug": "", + "date": "2023-10-05T13:42:25.671Z", + 
"coverImage": "/images/posts/tracker.jpg", + "excerpt": "Basic explanation of what a BitTorrent tracker is and the two types of trackers, public and private.", + "tags": [ + "Torrent", + "Tracker", + "BitTorrent" + ] + }, { "title": "Vortex: A High-Performance Rust BitTorrent Client for Linux", "slug": "vortex-rust-bittorrent-client-review", @@ -416,19 +447,5 @@ "Performance", "Third-party" ] - }, - { - "title": "What is a BitTorrent tracker and types of trackers", - "slug": "what-is-a-bittorent-tracker", - "contributor": "", - "contributorSlug": "", - "date": "2023-10-05T13:42:25.671Z", - "coverImage": "/images/posts/tracker.jpg", - "excerpt": "Basic explanation of what a BitTorrent tracker is and the two types of trackers, public and private.", - "tags": [ - "Torrent", - "Tracker", - "BitTorrent" - ] } ] \ No newline at end of file diff --git a/static/images/posts/nf-conntrack-overflow-docker-udp-tracker/nf-conntrack-overflow-docker-udp-tracker.webp b/static/images/posts/nf-conntrack-overflow-docker-udp-tracker/nf-conntrack-overflow-docker-udp-tracker.webp new file mode 100644 index 0000000..84d07a3 Binary files /dev/null and b/static/images/posts/nf-conntrack-overflow-docker-udp-tracker/nf-conntrack-overflow-docker-udp-tracker.webp differ