+++
title = "Building ezli.me, a link shortener in Rust"
date = 2026-04-17
[extra]
tags = ["rust","web","infra"]
hidden = true
custom_summary = "A small Rust link shortener with PostgreSQL, in-memory caching and batched click writes."
+++

In this short post we introduce [ezlime](https://github.com/rustunit/ezlime), the small Rust service behind [ezli.me](https://ezli.me).

# Why we built this

We built it for a pretty boring reason. On [Live-Ask.com](https://www.live-ask.com) we used [TinyURL](https://tinyurl.com) to shorten event links, and after they tightened their free tier we were about to outgrow it.

Paying for another SaaS plan would have solved that. But we already run a small K3s cluster with spare capacity, so at that point the question became: how much software do we actually need to redirect a URL?

Not much, as it turns out. We wanted:

* instant redirects
* a simple link creation API
* minimal stats: click count and last used
* a setup small enough that self-hosting stays worth it

# How it works

A request comes in, we check an in-memory cache, fall back to Postgres on a miss, and bump a click counter in memory. A background task batches those counters to the database every few seconds. That is the entire request lifecycle.

# What actually runs

The service uses `axum`, `tokio` and PostgreSQL. Database access goes through `diesel`, and migrations are baked into the binary.

For redirects we keep hot links in an in-memory [`quick_cache`](https://crates.io/crates/quick_cache), so popular IDs usually do not hit the database at all. Public link creation goes through [Cloudflare Turnstile](https://developers.cloudflare.com/turnstile/). Authenticated clients can use API keys. There is no account system and no tracking.

In production this runs as two replicas in our K3s cluster with `10m` CPU / `32Mi` memory requests and `100m` CPU / `128Mi` memory limits. The landing page at [ezli.me](https://ezli.me) is plain static HTML/CSS.

# The hot path

The redirect handler is intentionally tiny:

```rust
pub async fn redirect(&self, id: &str) -> Result<String, anyhow::Error> {
    if id.contains('.') || id.is_empty() {
        anyhow::bail!("invalid id");
    }

    if let Some(link) = self.cache.get(id) {
        self.click_counter.increment(id).await;
        return Ok(link.clone());
    }

    let Some(link) = self.db.get(id).await? else {
        anyhow::bail!("unknown link")
    };

    self.cache.insert(id.to_string(), link.url.clone());
    self.click_counter.increment(id).await;

    Ok(link.url)
}
```

There are only two real cases here. On a cache hit we return immediately and only bump an in-memory counter. On a miss we do one database lookup, put the result into the cache, and return that.

The important part is what does not happen on the request path: a click does not become a Postgres write.

# Keeping writes off the request path

`ClickCounter` accumulates click counts and `last_used` timestamps in memory. A background task drains that state and flushes it to Postgres every few seconds:

```rust
use std::sync::Arc;
use std::time::Duration;
use tokio::time::interval;

pub async fn start_counter_flusher(
    counter: Arc<ClickCounter>,
    db: DbPool,
    interval_duration: Duration,
) {
    let mut ticker = interval(interval_duration);
    loop {
        ticker.tick().await;

        let counts = counter.drain().await;
        if counts.is_empty() {
            continue;
        }

        if let Err(e) = flush_counts_to_db(db.clone(), counts).await {
            tracing::error!("failed to flush click counts: {e}");
        }
    }
}
```
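
A minimal sketch of what `ClickCounter` itself could look like (a std-only guess; field names are assumptions, and the real type is async where this one just takes a lock):

```rust
use std::collections::HashMap;
use std::sync::Mutex;
use std::time::SystemTime;

// Hypothetical accumulator: per-link click deltas plus the most
// recent use timestamp, drained atomically by the flusher task.
#[derive(Default)]
pub struct ClickCounter {
    counts: Mutex<HashMap<String, (u64, SystemTime)>>,
}

impl ClickCounter {
    pub fn increment(&self, id: &str) {
        let mut counts = self.counts.lock().unwrap();
        let entry = counts
            .entry(id.to_string())
            .or_insert((0, SystemTime::UNIX_EPOCH));
        entry.0 += 1;
        entry.1 = SystemTime::now();
    }

    // Swap the whole map out under the lock, so draining never
    // makes a request wait on the database flush.
    pub fn drain(&self) -> HashMap<String, (u64, SystemTime)> {
        std::mem::take(&mut *self.counts.lock().unwrap())
    }
}
```

Draining hands the flusher a private copy of the counts, while new clicks keep accumulating in a fresh map.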

Flushing means one bulk `UPDATE`, with the per-link IDs, increments and timestamps passed in as three parallel array parameters:

```sql
UPDATE links AS l
SET
    click_count = l.click_count + v.inc,
    last_used = GREATEST(l.last_used, v.ts)
FROM (
    SELECT
        unnest($1::text[]) AS link_id,
        unnest($2::bigint[]) AS inc,
        unnest($3::timestamptz[]) AS ts
) AS v
WHERE l.id = v.link_id;
```

If one link gets clicked a thousand times in 3 seconds, that still becomes one row update in the next flush, not a thousand separate writes.

> `GREATEST` on `last_used` also makes concurrent flushing across replicas safe. A slower replica cannot clobber a newer timestamp from another one.

There is nothing fancy here, which is exactly the point. The hot path stays small, the write path stays cheap, and there is very little to reason about in production.

# Short IDs

Short IDs are deterministic. We hash the original URL and base62-encode it, so the same URL gets the same short code. If two different URLs collide, we retry with an offset in the hash input until they no longer do.
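
That scheme can be sketched in a few lines (the hash function, alphabet and code length here are assumptions for illustration, not the actual implementation):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

const ALPHABET: &[u8] = b"0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";

// Base62-encode a number into a short code.
fn base62(mut n: u64) -> String {
    let mut out = Vec::new();
    loop {
        out.push(ALPHABET[(n % 62) as usize]);
        n /= 62;
        if n == 0 {
            break;
        }
    }
    out.reverse();
    String::from_utf8(out).unwrap()
}

// Deterministic short ID: same URL, same code. `offset` is bumped
// on a collision with a different URL, changing the hash input.
fn short_id(url: &str, offset: u64) -> String {
    let mut hasher = DefaultHasher::new();
    url.hash(&mut hasher);
    offset.hash(&mut hasher);
    // Truncate to keep codes short; 62^7 still gives ~3.5e12 codes.
    base62(hasher.finish() % 62u64.pow(7))
}

fn main() {
    // Deterministic: hashing the same URL twice yields the same code.
    assert_eq!(short_id("https://example.com", 0), short_id("https://example.com", 0));
    // A collision would be retried with a bumped offset.
    assert_ne!(short_id("https://example.com", 0), short_id("https://example.com", 1));
}
```

Because the offset is part of the hash input, bumping it on a collision produces a fresh candidate code while the scheme stays deterministic overall.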

# Small footprint

This is the part that made it worth building for us. The running system is:

* one Rust binary
* one Postgres database
* one small K3s deployment in infrastructure we already have

There is no Redis, no queue, no analytics pipeline and no admin backend. We get tracing logs, Prometheus metrics and a health endpoint and call it a day.

That is when self-hosting starts to make sense: when the service just slots into the platform you already run instead of dragging more infrastructure behind it.

# Conclusion

ezli.me only makes sense for us because the rest of the platform is already there. Once that box is checked, a link shortener really is just Rust, Postgres, an in-memory cache and a tiny K3s deployment.

The project is on [GitHub](https://github.com/rustunit/ezlime), dual-licensed MIT / Apache-2.0. If you want to run your own, give it a try.

---

Want to use the hosted ezli.me API? Join our [Discord](https://discord.gg/MHzmYHnnsE) and reach out to get an API key.