
Use Ruby's x* allocators uniformly for internal selector allocations#176

Merged
ioquatix merged 1 commit into main from fix/use-xmalloc-for-internal-allocations
May 12, 2026

Conversation

@samuel-williams-shopify
Contributor

@samuel-williams-shopify samuel-williams-shopify commented May 12, 2026

Summary

The selector internals (queue entries in `IO_Event_Selector_ready_push`, and the backing array + per-element slots in `IO_Event_Array`) were using raw `malloc` / `calloc` / `realloc` / `free`. Two of those were checked only with `assert(...)`, which in release builds (`NDEBUG`) silently compiles to a NULL pointer dereference on out-of-memory. The rest checked for `NULL` and propagated `-1` through `IO_Event_Array_initialize` / `_resize` / `_lookup` to their callers in `epoll.c` / `kqueue.c` / `uring.c`.

This PR switches the lot to Ruby's `xmalloc` / `xcalloc` / `xrealloc2` / `xfree`:

  • Each allocator triggers a GC sweep on memory pressure before failing, increasing the chance of success.
  • Failures raise `NoMemoryError` (or `RangeError` for the array-size-exceeds-maximum branch) instead of returning `NULL`, so the `assert` is no longer needed and the `-1` returns become dead code.
  • Ruby's allocation accounting stays in sync with the actual heap state, which matters for GC pressure heuristics.

With the -1 paths gone:

  • `IO_Event_Array_initialize` and `IO_Event_Array_resize` now return `void`.
  • `IO_Event_Array_lookup` is guaranteed to return a non-NULL pointer.
  • The corresponding `if (result < 0) rb_sys_fail(...)` / `if (!descriptor) rb_sys_fail(...)` checks in each selector are removed.

GC / state-handling audit

Each allocator call site has the invariant:

1. Allocate (the only step that can raise).
2. Initialise the new storage in place (no allocation, can't raise).
3. Publish it into a GC-traceable structure (no allocation, can't raise).

At step 1 the new object doesn't exist yet, so there's nothing to roll back. Between 1 and 3 no allocation can happen, so GC can't see half-initialised state. No mutex is ever held across an allocation. No call site requires rb_ensure.

Files changed

  • ext/io/event/array.h — `calloc` / `realloc` / `malloc` / `free` → `xcalloc` / `xrealloc2` / `xmalloc` / `xfree`; `_initialize` and `_resize` lose their `int` return.
  • ext/io/event/selector/selector.c — `IO_Event_Selector_ready_push` allocation pair → `xmalloc` / `xfree`.
  • ext/io/event/selector/epoll.c, kqueue.c, uring.c — drop dead `< 0` / `NULL` checks at the call sites of `_initialize` and `_lookup`.
  • releases.md — add a bullet under `## Unreleased`.

Diff is net −35 LOC.

Relationship to #175

Supersedes #175 (which only fixed half of the selector.c allocation pair and didn't touch array.h). Close #175 once this lands.

Verification

  • make -C ext builds clean (modulo a pre-existing warning from ruby/internal/core/rstring.h).


Co-authored-by: Cursor <cursoragent@cursor.com>
@samuel-williams-shopify samuel-williams-shopify force-pushed the fix/use-xmalloc-for-internal-allocations branch from b91adb8 to 6fd1bff Compare May 12, 2026 06:07
@samuel-williams-shopify samuel-williams-shopify changed the title Use Ruby's xmalloc / xfree for internal selector allocations Use Ruby's x* allocators uniformly for internal selector allocations May 12, 2026
@ioquatix ioquatix merged commit 3b6c2a8 into main May 12, 2026
54 of 60 checks passed
@ioquatix ioquatix deleted the fix/use-xmalloc-for-internal-allocations branch May 12, 2026 06:22
@samuel-williams-shopify samuel-williams-shopify added this to the v1.16.0 milestone May 12, 2026
