Commit b91adb8

Use Ruby's xmalloc / xfree for internal selector allocations.
`IO_Event_Selector_ready_push` (selector.c) and `IO_Event_Array_lookup` (array.h) both allocated internal bookkeeping structs via raw `malloc` paired with a debug-build-only `assert(...)`. In a release build (`NDEBUG`) the `assert` compiles to nothing, so if `malloc` returned `NULL` under memory pressure, the next line dereferenced a null pointer and crashed the process.

Switch those allocations (and the paired `free` calls on the same objects) to Ruby's `xmalloc` / `xfree`:

- Triggers a GC sweep on memory pressure before failing, increasing the chance of success.
- Raises `NoMemoryError` (a Ruby exception) instead of returning `NULL`, so the `assert` is no longer needed.
- Keeps Ruby's allocation accounting in sync with the actual heap state.

The remaining raw `calloc` / `realloc` / `free(base)` calls in `array.h` already check for `NULL` and propagate `-1` via the C-style API, so they are left as-is.

Supersedes #175.

Co-authored-by: Cursor <cursoragent@cursor.com>
1 parent a0c57a1 commit b91adb8

3 files changed

Lines changed: 8 additions & 7 deletions

ext/io/event/array.h

Lines changed: 4 additions & 4 deletions
@@ -72,7 +72,7 @@ inline static void IO_Event_Array_free(struct IO_Event_Array *array)
 		if (element) {
 			array->element_free(element);
 			
-			free(element);
+			xfree(element);
 		}
 	}
 
@@ -139,8 +139,8 @@ inline static void* IO_Event_Array_lookup(struct IO_Event_Array *array, size_t i
 	
 	// Allocate the element if it doesn't exist:
 	if (*element == NULL) {
-		*element = malloc(array->element_size);
-		assert(*element);
+		// Ruby's allocator triggers GC on memory pressure and raises `NoMemoryError` on failure, so no NULL check is required.
+		*element = xmalloc(array->element_size);
 		
 		if (array->element_initialize) {
 			array->element_initialize(*element);
@@ -166,7 +166,7 @@ inline static void IO_Event_Array_truncate(struct IO_Event_Array *array, size_t
 		void **element = array->base + i;
 		if (*element) {
 			array->element_free(*element);
-			free(*element);
+			xfree(*element);
 			*element = NULL;
 		}
 	}

ext/io/event/selector/selector.c

Lines changed: 3 additions & 3 deletions
@@ -246,8 +246,8 @@ VALUE IO_Event_Selector_raise(struct IO_Event_Selector *backend, int argc, VALUE
 
 void IO_Event_Selector_ready_push(struct IO_Event_Selector *backend, VALUE fiber)
 {
-	struct IO_Event_Selector_Queue *waiting = malloc(sizeof(struct IO_Event_Selector_Queue));
-	assert(waiting);
+	// Ruby's allocator triggers GC on memory pressure and raises `NoMemoryError` on failure, so no NULL check is required.
+	struct IO_Event_Selector_Queue *waiting = xmalloc(sizeof(struct IO_Event_Selector_Queue));
 	
 	waiting->head = NULL;
 	waiting->tail = NULL;
@@ -268,7 +268,7 @@ void IO_Event_Selector_ready_pop(struct IO_Event_Selector *backend, struct IO_Ev
 	if (ready->flags & IO_EVENT_SELECTOR_QUEUE_INTERNAL) {
 		// This means that the fiber was added to the ready queue by the selector itself, and we need to transfer control to it, but before we do that, we need to remove it from the queue, as there is no expectation that returning from `transfer` will remove it.
 		queue_pop(backend, ready);
-		free(ready);
+		xfree(ready);
 	} else if (ready->flags & IO_EVENT_SELECTOR_QUEUE_FIBER) {
 		// This means the fiber added itself to the ready queue, and we need to transfer control back to it. Transferring control back to the fiber will call `queue_pop` and remove it from the queue.
 	} else {

releases.md

Lines changed: 1 addition & 0 deletions
@@ -6,6 +6,7 @@
 - Add support for the `io_close` fiber-scheduler hook (Ruby 4.0+). The `URing` selector performs the close asynchronously via the ring; the `Debug::Selector` and `TestScheduler` wrappers forward to the underlying selector when supported.
 - Improve `WorkerPool` GC compaction support and add proper write barriers, fixing potential use-after-free under compacting GC.
 - Keep blocked scheduler fibers alive during GC by registering them as roots in `TestScheduler#block`, preventing premature collection and the resulting use-after-free crash on resume.
+- Use Ruby's `xmalloc` / `xfree` for internal queue and element allocations in the selector backend. Previously a raw `malloc` paired with a debug-build-only `assert(...)` would silently dereference `NULL` and crash in release builds under memory pressure; switching to `xmalloc` triggers a GC sweep on pressure and raises `NoMemoryError` on real failure.
 - Correctly handle short `io_uring_submit()` results in the `URing` selector. `io_uring_submit()` returns the number of SQEs actually accepted by the kernel and can be short (SQE prep errors, `ENOMEM`, transient `EAGAIN`); the old accounting reset `pending = 0` on any success and silently lost track of unsubmitted SQEs.
 - Enable `IORING_SETUP_SUBMIT_ALL` (kernel 5.18+) on the `URing` selector so the kernel keeps processing the rest of an SQE batch past individual errors, reducing the frequency of short submits in practice.
