Add Gleam + Mist: first Gleam and BEAM VM entry! 🌟 (#24)
Adds the Mist HTTP server for Gleam, running on the BEAM (Erlang VM). This is the first Gleam framework and the first BEAM VM entry in HttpArena.

- Language: Gleam (compiles to Erlang bytecode)
- Runtime: BEAM (Erlang VM / OTP)
- HTTP: Mist (one OTP process per connection)
- JSON: gleam_json
- SQLite: sqlight (NIF bindings)
- Compression: Erlang's built-in zlib

The BEAM's preemptive scheduling and per-process GC represent a fundamentally different concurrency model than async/await or OS threads.
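For the pre-cached JSON path, the gleam_json side is roughly the following sketch. The payload shape and function name here are placeholders, not the PR's actual schema:

```gleam
import gleam/json

// A sketch of building the /json payload with gleam_json.
// The PR serializes once at startup and caches the resulting string;
// the field name and value here are placeholders.
pub fn json_body() -> String {
  json.object([#("message", json.string("Hello, world!"))])
  |> json.to_string
}
```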
gleam_stdlib requires Gleam >= 1.14.0, but the build was using v1.8.0.
Build fix: bumped Gleam from v1.8.0 to v1.14.0 in the Dockerfile. The latest decode.at returns a Decoder(a) directly (two arguments) and can't be used with `use` callbacks. decode.subfield takes a path, a decoder, and a callback (three arguments), which is what we need for the `use` pattern in the dataset decoder. This fixes the Gleam 1.14 build error.
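Concretely, the `use`-based decoder looks something like this sketch. The Dataset type and its field names are placeholders for whatever the real dataset decoder uses:

```gleam
import gleam/dynamic/decode

// Placeholder record; the PR's actual dataset type will differ.
pub type Dataset {
  Dataset(name: String, rows: Int)
}

// decode.subfield(path, decoder, callback) fits the `use` pattern:
// each `use` binds one field, then decode.success builds the record.
pub fn dataset_decoder() -> decode.Decoder(Dataset) {
  use name <- decode.subfield(["name"], decode.string)
  use rows <- decode.subfield(["rows"], decode.int)
  decode.success(Dataset(name:, rows:))
}
```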
Build fix: Gleam 1.14's build image ships OTP 28, so BEAM files compiled there won't load on the OTP 27 runtime. Bumped the runtime image to erlang:28-alpine.
Also added mist.bind("0.0.0.0") so the server listens on all
interfaces — required for Docker networking (was binding 127.0.0.1).
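In Mist's builder API the fix is one extra call in the server setup, something like the sketch below. The handler name and port are placeholders, and the exact builder function names vary between mist versions:

```gleam
import gleam/erlang/process
import mist

pub fn main() {
  // `handler` is a placeholder for the PR's actual request handler.
  let assert Ok(_) =
    mist.new(handler)
    |> mist.bind("0.0.0.0")  // listen on all interfaces (needed in Docker)
    |> mist.port(8080)
    |> mist.start_http
  process.sleep_forever()
}
```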
Found the startup issue — two problems: the OTP version mismatch and the loopback-only bind, both fixed above. Should start up cleanly now. 🤞
Benchmark Results
Full log
Benchmarks are in! Here's the breakdown for Gleam + Mist on the BEAM VM:

🚀 Baseline: 741K req/s at 4096c — really solid for an Erlang VM framework. The BEAM's scheduler is doing great work distributing across cores.

📡 Pipelined: 847K req/s at 16384c — interesting that it peaks at higher concurrency rather than at 512c. The BEAM handles massive connection counts gracefully, which is exactly what you'd expect from Erlang's heritage.

📊 JSON: 308K req/s at 4096c — reasonable; serialization is always going to be a heavier workload.

⏱️ Limited-conn: 110K req/s — consistent across connection counts, showing stable per-connection throughput.

📁 Upload: 322 req/s at 64c — in line with other frameworks on this test.

🗜️ Compression: 3.9K req/s — this is CPU-bound work, so throughput drops sharply here.

🔀 Noisy: 529K req/s at 16384c — handles mixed traffic well. One note: the 16384c run showed 28% unexpected status codes, which might be worth investigating. Could be the server getting overwhelmed at that concurrency level.

🎭 Mixed: 16.3K req/s at 4096c — the reconnect-heavy profile; a decent showing.

Overall: really impressive for a BEAM language! Gleam brings type safety and a modern DX to the Erlang ecosystem, and Mist is putting up competitive numbers. The baseline throughput especially — 741K on the BEAM is no joke 🌟
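Worth noting on the compression number: the gzip itself is a thin FFI into Erlang's built-in zlib module, which can be bound from Gleam roughly like this (a sketch; the PR's actual binding may be named differently):

```gleam
// Bind Erlang's built-in zlib:gzip/1 via Gleam's external FFI.
// zlib:gzip/1 accepts iodata and returns a gzip-compressed binary.
@external(erlang, "zlib", "gzip")
pub fn gzip(data: BitArray) -> BitArray
```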
What's this?
Adds Mist — a Gleam HTTP server running on the BEAM (Erlang VM).
This is the first Gleam framework AND the first BEAM VM entry in HttpArena!
Why it's interesting
The BEAM VM is a completely different runtime model than anything currently in HttpArena. While most entries use async/await (Tokio, libuv, etc.) or OS threads, the BEAM uses lightweight processes with preemptive scheduling and per-process garbage collection. Each connection gets its own isolated process — if one crashes, nothing else is affected.
Gleam itself is a really cool language — type-safe, functional, compiles to Erlang bytecode, and the pattern-matching-based routing is clean and fast.
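A pattern-matching router in Mist is roughly this shape. The paths are taken from the endpoint list below; the `*_response` handler bodies are placeholders:

```gleam
import gleam/http/request.{type Request}
import gleam/http/response.{type Response}
import mist

// Dispatch on path segments with a single case expression.
// The *_response functions are placeholders for the real handlers.
fn handle(req: Request(mist.Connection)) -> Response(mist.ResponseData) {
  case request.path_segments(req) {
    ["json"] -> json_response()
    ["db"] -> db_response()
    ["static", ..rest] -> static_response(rest)
    _ -> not_found()
  }
}
```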
Stack
- Language: Gleam (compiles to Erlang bytecode)
- Runtime: BEAM (Erlang VM / OTP)
- HTTP: Mist (one OTP process per connection)
- JSON: gleam_json
- SQLite: sqlight (NIF bindings)
- Compression: Erlang's built-in zlib
Endpoints implemented
- /pipeline — plain text
- /baseline1 — GET/POST with query param sum
- /baseline2 — GET with query param sum
- /json — JSON serialization (pre-cached)
- /compression — gzip via Erlang zlib
- /upload — POST body size
- /db — SQLite query
- /static/(unknown) — static file serving

cc @rawhat — thought it'd be cool to see how Mist stacks up in HttpArena! The BEAM's concurrency model is so different from everything else in here, should make for really interesting benchmark comparisons.