All notable changes to the Async extension for PHP will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
- `pdo_sqlite` honours `PDO::ATTR_POOL_STMT_CACHE_SIZE`. Pool slots now carry a per-connection LRU cache of compiled `sqlite3_stmt*`. On `$pdo->prepare()` the driver looks up the SQL in the cache and reuses an already-compiled statement when present, skipping `sqlite3_prepare_v2` entirely. On `PDOStatement` destruction the stmt is `sqlite3_reset`'d and inserted back into the cache; LRU eviction calls `sqlite3_finalize` via the entry's `driver_data_dtor`. Statements that errored during execute are marked `do_not_cache` and finalized normally, so a poisoned vdbe never re-enters the cache. Driver coverage now: pdo_pgsql, pdo_mysql, pdo_sqlite. Measured on a tight prepare+execute+fetch loop against a 100k-row table: pool-without-cache 111k ops/s → pool-with-cache 270k ops/s (2.4×). In an HTTP handler that prepares every request the gain is ~9% RPS (33.2k → 36.1k on a 16-core box) — the prepare step is no longer the bottleneck, leaving the remaining gap to the native `Sqlite3` ext (~16%) squarely in PDO core overhead (`PDOStatement` object init, fetch wrapping).
- `Async\signal()` no longer dies in worker threads (#109). Each call to `php_request_startup()` in a worker thread ran `zend_signal_activate()`, which unconditionally re-installed `zend_signal_handler_defer` via `sigaction()` — clobbering the libuv handler the reactor had installed in the main thread. The next process-directed signal then hit Zend's defer path in a worker whose `SIGG(handlers)` was empty, fell through to `SIG_DFL`, and killed the process. Fixed in `Zend/zend_signal.c`: `zend_signal_activate()` and `zend_signal_deactivate()` now early-return when `zend_async_reactor_is_enabled()` (reactor module registered at MINIT) — the reactor owns the OS-level sigaction process-wide, and the per-thread libuv signal callback already dispatches via TLS `SIGG(handlers)`. `zend_sigaction` is unchanged so pcntl-only flows keep working. Tests `tests/signal/008-009` cover both `ThreadPool` and `spawn_thread` variants.
- PDO MySQL: `010-pdo_resource_cleanup` no longer false-fails under parallel test workers (#114). The test counted leaks against `SHOW STATUS LIKE 'Threads_connected'` — a server-global counter that also sees connections held by other run-tests.php workers under `-jN`. Replaced with a process-local check: collect the connection IDs we created in coroutines, then poll `information_schema.PROCESSLIST` until those specific IDs disappear (or report whichever ones leaked).
- PDO PgSQL pool no longer leaks a killed-but-idle connection (#114). When `pg_terminate_backend` (or any other server-side close) hits a connection while it is sitting idle in the pool, the slot stayed in the pool until somebody reused it — `tests/pdo_pgsql/029-pdo_pgsql_pool_killed_concurrent.phpt` saw `pool->count()` stuck at 2 instead of dropping to 1. Two driver-level fixes: (a) `_pdo_pgsql_error` now treats `sqlstate == NULL && errcode == PGRES_FATAL_ERROR` as a connection-level failure and marks the slot broken (covers the case where libpq returned NULL with no result, e.g. EOF mid-flush); (b) new `pdo_pgsql_pool_before_acquire` runs a non-blocking `PQconsumeInput` + `PQstatus` probe each time the pool hands out an idle slot — a slot whose backend died is destroyed instead of returned. Test 029's polling loop now also drives a probing `SELECT 1` so the scrub fires before the test samples `pool->count()`.
- `Channel(0)` `send()` no longer returns without a waiting receiver (#108). Previously the unbuffered slot acted as a 1-message buffer: the first `send` deposited into `rendezvous_value` and returned immediately, breaking the documented Go-style rendezvous and the happens-before guarantee. Now `send` on cap=0 blocks in `waiting_senders` until a `recv` takes the value. When the slot-owner wakes after consumption, if both queues still have waiters it wakes the next sender to refill the slot — keeping the chain moving for N senders / M receivers without deadlock. `sendAsync` is unchanged (still non-blocking, deposits into the slot). Tests `channel/003` and `channel/011` updated to reflect proper rendezvous ordering.
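The cap=0 rendezvous contract can be sketched in plain PHP with Fibers. This is a simplified userland model (the class name `Rendezvous` is hypothetical), not ext/async's scheduler-integrated implementation:

```php
<?php
// Minimal rendezvous sketch: send() does not return until a receiver takes
// the value, which is the happens-before guarantee described above.
final class Rendezvous
{
    private array $senders = [];    // queued [Fiber, value] pairs
    private array $receivers = [];  // queued Fibers

    public function send(mixed $v): void
    {
        if ($this->receivers) {
            array_shift($this->receivers)->resume($v);  // hand off directly
            return;
        }
        $this->senders[] = [Fiber::getCurrent(), $v];
        Fiber::suspend();  // block until a recv() consumes the value
    }

    public function recv(): mixed
    {
        if ($this->senders) {
            [$sender, $v] = array_shift($this->senders);
            $sender->resume();  // unblock the sender only now that the value is taken
            return $v;
        }
        $this->receivers[] = Fiber::getCurrent();
        return Fiber::suspend();  // block until a send() arrives with a value
    }
}
```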
- `fuzzy_tests/` directory renamed to `fuzzy-tests/` for consistency with other dash-separated paths. All harness, generated tests, feature files, CI workflows, and docs updated.
- Closures with class/function declarations are rejected at thread transfer. `spawn_thread()`, `ThreadPool::submit()` and any path that snapshots a closure now scan the op_array for `ZEND_DECLARE_CLASS{,_DELAYED}`, `ZEND_DECLARE_ANON_CLASS` and `ZEND_DECLARE_FUNCTION`; the first match throws `Cannot transfer closure to another thread: illegal <kind> declaration at <file>:<line>`. The previous behaviour replayed the opcode in the worker, where the compile-time `EG(class_table)` registration under `rtd_key` is missing — `do_bind_class` then tripped `ZEND_ASSERT(ce)` (`Zend/zend_compile.c:1372`). Validation is memoised in a private `fn_flags2` bit (`ASYNC_FN_FLAG_THREAD_TRANSFER_OK`) so repeated transfers (ThreadPool resubmits, channel resends) skip the rescan; invalid closures stay unflagged and re-throw with the exact location every time. Recurses into `dynamic_func_defs` so an invalid nested closure is caught at the outer transfer. Mirrors parallel's `php_parallel_check_function` policy. Tests `tests/thread/047–049`.
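A rough userland analogue of this validation can be built on the tokenizer. This is a hypothetical approximation for illustration only — ext/async scans compiled opcodes, not source text:

```php
<?php
// Hypothetical userland approximation of the transfer check. ext/async scans
// the compiled op_array for ZEND_DECLARE_CLASS / ZEND_DECLARE_FUNCTION opcodes;
// this sketch scans source tokens instead (naive: `Foo::class` would also
// match T_CLASS, a false positive the opcode-level scan does not have).
function closureSourceDeclares(string $body): ?string
{
    $tokens = token_get_all('<?php ' . $body);
    foreach ($tokens as $i => $tok) {
        if (!is_array($tok)) {
            continue;
        }
        if ($tok[0] === T_CLASS) {
            return 'class';   // covers named and anonymous classes
        }
        if ($tok[0] === T_FUNCTION) {
            // A named function declaration has a T_STRING after `function`;
            // a closure has `(` (or `&`) instead.
            for ($j = $i + 1, $n = count($tokens); $j < $n; $j++) {
                if (is_array($tokens[$j]) && $tokens[$j][0] === T_WHITESPACE) {
                    continue;
                }
                if (is_array($tokens[$j]) && $tokens[$j][0] === T_STRING) {
                    return 'function';
                }
                break;
            }
        }
    }
    return null;  // nothing declared: safe to transfer under this model
}
```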
- Request-level scope on Scope (#105) — new `request_scope` field on `zend_async_scope_t`, inherited from `parent_scope` in `async_new_scope`. Gives O(1) access to a user-designated request Scope from any descendant scope via `ZEND_ASYNC_REQUEST_SCOPE` (resolves through `CURRENT_SCOPE`). Borrowed pointer — no refcount, no free; the embedding C host sets it (typically `scope->request_scope = scope` to mark a scope as the request). PHP: new `Async\request_context(): ?Context` returns the inherited request scope's Context, or `null`. No PHP-side setter — assignment is internal machinery owned by the embedding host.
- Channel deadlock protection — three layers of defence against blocked coroutines, exposed through a typed `Async\ChannelCloseReason` enum on `ChannelException::$reason`:
  - Per-channel timer — new constructor parameters `noProducerTimeout` / `noConsumerTimeout` (ms, default 5000, `0` disables) close the channel after the configured wait. `hardTimeouts` (default `false`) controls whether the timer is hidden from the loop (soft) or keeps the loop alive (hard, contractual).
  - Global resolver — soft-timer channels register in a per-request table and are bulk-closed by `async_channel_resolve_deadlocks()` before the scheduler raises a generic `DeadlockError`. The scheduler skips the blocking `uv_run(UV_RUN_ONCE)` when only hidden events are alive AND a soft channel is registered, so resolution is immediate.
  - Owner-scope binding — every channel subscribes to its owner scope's event via an extended callback (`channel_scope_callback_t` embedding the scope back-pointer). When the scope dies for any reason (dispose / cancel / parent-cascade) the channel auto-closes with reason `SCOPE_DISPOSED`. The channel never pins the scope and never holds the scope's refcount; lifecycle is symmetric in both directions and verified under ASAN across 18 stress tests covering cross-scope producers/consumers, TaskGroup-managed scopes, blocked senders/receivers, channels that outlive their scope, channels that die before their scope, parent/child cascades, idempotent closes, and OOM bailout.
- `ThreadPool::submit_internal` (C-only) — new public C-level method on `zend_async_thread_pool_t` for submitting a C-handler task to an existing pool without going through the closure-snapshot pipeline. Handler signature: `void (*)(zend_async_event_t *, void *ctx)`. The pool treats `ctx` as opaque (never reads, never frees); the caller owns the lifecycle. Returns an awaitable `zend_async_event_t *` whose complete callbacks fire after the handler returns. Closes the API gap that previously forced C-level pool consumers through the PHP-level `submit(callable)` path — internal methods have no op_array, so the snapshot serialiser segfaulted.
- Top-level transfer/load helpers — `zend_async_thread_transfer_zval_toplevel_fn`, `zend_async_thread_load_zval_toplevel_fn`, `zend_async_thread_release_transferred_zval_fn` (with matching `ZEND_ASYNC_THREAD_*_TOPLEVEL` macros). Convenience wrappers that allocate and tear down the cross-thread transfer ctx internally — for callers shipping a single zval per worker.
- PDO Pool: opt-in prepared-statement cache — per-physical-connection LRU cache of server-side prepared statements, transparent to user code. Enabled by passing `PDO::ATTR_POOL_STMT_CACHE_SIZE => N` to the PDO constructor (alongside `ATTR_POOL_ENABLED`). Default `0` disables the cache and preserves current behaviour. On a cache hit `prepare()` reuses the existing server-side prepared statement on that physical connection with zero wire traffic — no `PQprepare`, no Parse round-trip. This collapses the canonical "prepare-on-every-request" pool pattern to "prepare once per physical connection, execute many times", which is the same shape jackc/pgx, sqlx, pgjdbc and Npgsql converged on years ago.
  - Driver coverage: `pdo_pgsql` and `pdo_mysql`. For pdo_mysql the cache only kicks in with `ATTR_EMULATE_PREPARES => false` (the default emulate path doesn't speak `COM_STMT_PREPARE` and is left untouched); eviction sends `COM_STMT_CLOSE`, the dtor returns the `MYSQL_STMT*` to the cache, and plan-invalidation errors (1243/1615/2057) drop the stale stmt.
  - Key: the canonical SQL (the `zend_string` returned by `pdo_parse_params`, with `?` rewritten to `$1, $2, …`). Two PHP-level SQLs that rewrite to the same wire form share a slot.
  - Storage: a standard Zend `HashTable`, insertion-order-as-LRU. On hit the entry is moved to MRU via `del + add_new`. On overflow the oldest entry is evicted and best-effort `DEALLOCATE`d on the wire.
  - Bypassed automatically for `PDO_CURSOR_SCROLL`, `ATTR_EMULATE_PREPARES => true`, `PDO_PGSQL_ATTR_DISABLE_PREPARES`, and non-pool PDO handles. No semantic change for existing code.
  - Memory bounded by the configured capacity per physical conn. On connection close the cache is freed without `DEALLOCATE` (server-side state goes away with the session).
  - Concurrency: the cache lives on the physical `pdo_dbh_t` (pool slot). A slot is held by at most one coroutine at a time, so all cache mutation is single-owner — no locking. Each thread in the `ThreadPool` case has its own pool, so the same invariant holds.
  - Plan invalidation is handled transparently. When PostgreSQL retires a cached plan after DDL (e.g. `ALTER TABLE` changing a column type) the next `EXECUTE` fails with SQLSTATE `0A000` (feature_not_supported, "cached plan must not change result type") or `26000` (invalid_sql_statement_name). The driver detects these classes, evicts the cache entry, best-effort `DEALLOCATE`s the stale server-side stmt, re-issues `PQprepare` with the same name, re-inserts into the cache and re-executes — all in a single retry, invisible to user code.
  - Known limitation: pgbouncer transaction mode requires `STMT_CACHE_SIZE => 0` because named prepared statements break across pooled checkouts at the bouncer layer. Unbuffered + plan-invalidation retry is best-effort (the buffered path is the primary target).
  - New constant: `PDO::ATTR_POOL_STMT_CACHE_SIZE`. New API in `ext/pdo/pdo_pool.{h,c}`: `pdo_pool_stmt_cache_create`, `_destroy`, `_lookup`, `_insert`, `_take`, `_entry_free`, `_size`, `_capacity`. Driver integration in `ext/pdo_pgsql/pgsql_driver.c::pgsql_handle_preparer`.
  - Performance: measured ~2.9× throughput on a tight `prepare+execute+fetch` loop against local Postgres (debug build, ZTS, `-O0`); `strace -c` confirms a 3:1 reduction in wire syscalls (`sendto` 3008→1007, `recvfrom` 6010→2008 per 1000 iterations); callgrind shows a 25% drop in user-space instruction count, with the entire pool+cache layer accounting for ~0.5% of CPU. Full methodology, raw numbers and reproduction recipe in `docs/pdo-pool-stmt-cache-perf.md`.
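The insertion-order-as-LRU scheme above can be sketched in plain PHP, where an array's key order stands in for the Zend HashTable's (unset + re-insert mirrors `del + add_new`; the class name `StmtCache` is hypothetical):

```php
<?php
// Sketch of the insertion-order-as-LRU cache: PHP arrays preserve insertion
// order, so "delete + re-insert" moves an entry to the most-recently-used end.
final class StmtCache
{
    private array $entries = [];  // canonical SQL => prepared-statement handle

    public function __construct(private int $capacity) {}

    public function lookup(string $sql): mixed
    {
        if (!array_key_exists($sql, $this->entries)) {
            return null;
        }
        $stmt = $this->entries[$sql];
        unset($this->entries[$sql]);   // del ...
        $this->entries[$sql] = $stmt;  // ... + add_new -> entry is now MRU
        return $stmt;
    }

    public function insert(string $sql, mixed $stmt): void
    {
        unset($this->entries[$sql]);
        $this->entries[$sql] = $stmt;
        if (count($this->entries) > $this->capacity) {
            $oldest = array_key_first($this->entries); // LRU = oldest insertion
            unset($this->entries[$oldest]);            // evict (the driver would DEALLOCATE here)
        }
    }
}
```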
- CPU usage probes — cross-platform process and host CPU monitoring, with identical fields and semantics on Linux and Windows. Suitable for backpressure decisions in long-running coroutines and for emitting telemetry from PHP-level metrics loops.
  - `Async\CpuSnapshot::now(): CpuSnapshot` — immutable point-in-time snapshot. Final, readonly, private constructor, no dynamic properties. Exposes raw monotonic counters: `wallNs`, `processUserNs`, `processSystemNs`, `systemIdleNs`, `systemBusyNs`, `cpuCount`. Single values are not directly meaningful — callers compute deltas between two snapshots themselves.
  - `Async\cpu_usage(): array` — telemetry-friendly wrapper that maintains an internal "previous" snapshot per process and returns ready-to-use percentages: `process_cores`, `process_percent`, `system_percent`, `cpu_count`, `interval_sec`, `loadavg`. The first call seeds the internal state and returns zeros; subsequent calls return the delta against the previously stored snapshot. State is reset in `RSHUTDOWN`.
  - `Async\loadavg(): ?array` — POSIX 1/5/15-minute system load averages. Returns `null` on Windows (no native equivalent; emulating CPU% as loadavg has different semantics and would mislead callers).
  - Linux uses `clock_gettime(CLOCK_MONOTONIC)`, `getrusage(RUSAGE_SELF)`, `/proc/stat` and `getloadavg()`. Windows uses `QueryPerformanceCounter`, `GetProcessTimes`, `GetSystemTimes` and `GetActiveProcessorCount(ALL_PROCESSOR_GROUPS)`. ZTS-safe via `tsrm_mutex`. Inside containers, `system*` fields reflect the host rather than the cgroup; for per-process backpressure prefer the `process*` fields, which automatically account for affinity and cgroup CPU throttling. No `zend_async_API` changes.
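The delta arithmetic callers are expected to do between two snapshots can be sketched as follows (field names follow the changelog; plain arrays stand in for `CpuSnapshot` objects, and the function name is hypothetical):

```php
<?php
// Sketch: compute process CPU utilisation from two raw snapshots.
// Each snapshot is an array of monotonic counters as exposed by CpuSnapshot.
function processCpu(array $prev, array $next): array
{
    $wallNs = $next['wallNs'] - $prev['wallNs'];
    $cpuNs  = ($next['processUserNs'] - $prev['processUserNs'])
            + ($next['processSystemNs'] - $prev['processSystemNs']);

    // "Cores busy" over the interval; may exceed 1.0 on multi-threaded processes.
    $cores = $wallNs > 0 ? $cpuNs / (float) $wallNs : 0.0;

    return [
        'process_cores'   => $cores,
        'process_percent' => 100.0 * $cores / max(1, $next['cpuCount']),
        'interval_sec'    => $wallNs / 1e9,
    ];
}
```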
- `Async\available_parallelism(): int` — returns the number of CPUs usable by the current process (cgroup quotas, `sched_setaffinity`, etc.), i.e. the value libuv recommends for thread-pool / worker sizing. Backed by `uv_available_parallelism()` (libuv ≥ 1.44) with a `uv_cpu_info()` fallback on older libuv. Always returns ≥ 1. Exposed at the API level via `zend_async_available_parallelism_fn` / the `ZEND_ASYNC_AVAILABLE_PARALLELISM()` macro, registered as a new parameter on `zend_async_reactor_register` — third-party reactors must thread the new function pointer through. ABI bump v0.9.1 → v0.10.0.
- Timer rearm API (`zend_async_timer_rearm_fn` / `ZEND_ASYNC_TIMER_REARM`). Reschedules an existing timer event without the `new_timer_event` + `uv_close` + dispose cycle, dropping three per-cycle allocations on hot paths that constantly reset a timer (e.g. QUIC retransmission timers, idle reapers, exponential backoff loops). Opt-in via the new private timer flag `ZEND_ASYNC_TIMER_F_MULTISHOT` (bit 13) — set after construction with `ZEND_ASYNC_TIMER_SET_MULTISHOT(ev)`. A multishot timer does not self-close on a one-shot fire; the owner is responsible for an explicit `dispose()` at teardown. Existing one-shot timers are unaffected (the default path still self-closes). The libuv reactor implements rearm via a second `uv_timer_start` on the same handle (libuv-native). Registered as a new parameter on `zend_async_reactor_register` — third-party reactors must thread the new function pointer through. Note: NULL is not rejected at register time — the current implementation tolerates it, so callers must check `zend_async_timer_rearm_fn != NULL` before use.
- PDO_SQLite connection pool support (`PDO::ATTR_POOL_ENABLED`). A pooled `Pdo\Sqlite` template hands out a private `sqlite3*` per coroutine, with the same `PDO::ATTR_POOL_MIN` / `POOL_MAX` / `POOL_HEALTHCHECK_INTERVAL` controls as the other PDO drivers. UDFs, aggregates and collations registered on the template via `createFunction` / `createAggregate` / `createCollation` are applied to every slot. The registry freezes on the first acquire — any further registration throws `PDOException` so that all coroutines see the same set of UDFs. Single-connection methods that bind to a specific `sqlite3*` (`setAuthorizer`, `openBlob`, `loadExtension`) throw on a pool template. Unshareable in-memory DSNs (`:memory:`, `file:?mode=memory` without `cache=shared`) are rejected at construction. Two new PDO-level driver hooks (`pool_before_acquire`, `pool_before_release` on `pdo_dbh_methods`) let other drivers plug into the slot hand-off without leaking pool internals into `ext/pdo/pdo_pool.c`. Tests: `ext/async/tests/pdo_sqlite/001..020`, `ext/pdo_sqlite/tests/pool_001..005`. Known limitation: per-coroutine personal UDFs (registered after the registry has frozen) are intentionally out of scope — the per-release `sqlite3_create_function(NULL, …)` cleanup cost is a poor fit for the hot pool path; bootstrap-time registration on the template covers the realistic use case.
- `TaskGroup` / `TaskSet` constructor gains a `queueLimit` parameter (bounded pending queue, backpressure). `new TaskGroup(concurrency: N, queueLimit: M)`. When the pending queue reaches `M` entries, `spawn()` / `spawnWithKey()` suspend the calling coroutine until a queue slot frees instead of allocating an unbounded pending entry. A slot frees whenever a pending task transitions to RUNNING (i.e. when a running task finishes and `task_group_drain()` promotes the next pending one). Waiters are resumed in FIFO order, one per freed slot. On `seal()` / `cancel()` / dtor all waiters are woken at once — they rejoin `do_spawn()`, observe the terminal state, and throw "Cannot spawn tasks on a sealed TaskGroup". Defaults: `queueLimit = null` resolves to `2 × concurrency` (a modest backpressure window); `queueLimit = 0` explicitly selects the legacy unbounded queue; with `concurrency = 0` (unlimited) `queueLimit` is ignored because tasks always spawn immediately. Motivation: the previous behaviour allocated a `task_entry_t` + `zend_fcall_t` for every over-concurrency `spawn()` call with no upper bound — a worker thread running `while (true) { $job = $channel->recv(); $group->spawn($fn); }` would grow `group->tasks` at ~500 MB/s when its own `$group->spawn()` outpaced the 100-slot concurrency, and starving the main thread prevented the results collector from ever running (see the `bench_ta.php` 1×100 IO scenario that hit 7 GB RSS with `completed=0` before OOM). New regression tests: `tests/task_group/041-task_group_queue_limit.phpt`, `tests/task_group/042-task_group_queue_limit_defaults.phpt`. BC note: the ABI signature of `zend_async_new_group_fn` / `ZEND_ASYNC_NEW_GROUP()` now takes `uint32_t queue_limit` between `concurrency` and `scope`. C callers of `async_new_group()` must pass the new argument (there are none outside of `task_group.c` and the `new_group_stub` fallback in `Zend/zend_async_API.c`). PHP-level positional callers of `new TaskGroup($concurrency, $scope)` now pick up `null` → default queueLimit; callers relying on positional `$scope` must use the named argument `scope:`.
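The defaulting rules above amount to the following (the function name is hypothetical; `0` means unbounded):

```php
<?php
// Sketch of the queueLimit defaulting rules described above.
// Returns the effective pending-queue bound; 0 means unbounded.
function resolveQueueLimit(?int $queueLimit, int $concurrency): int
{
    if ($concurrency === 0) {     // unlimited concurrency: tasks spawn immediately,
        return 0;                 // so queueLimit is ignored
    }
    if ($queueLimit === null) {
        return 2 * $concurrency;  // default: a modest backpressure window
    }
    return $queueLimit;           // 0 = explicit legacy unbounded queue
}
```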
- `Async\ThreadPool` (new class): pool of OS threads for executing PHP closures. `submit($callable, ...$args): Future`, `map(array $items, $callable): array`, `close()` (graceful), `cancel()` (rejects the backlog with `Async\CancellationException`; running tasks still finish), `isClosed()`, `getWorkerCount()`, `getPendingCount()`, `getRunningCount()`. Implements `Countable`. Constructor `new ThreadPool(int $workers, int $queue_size = 0)`; the queue is a thread-safe channel that suspends the submitting coroutine when full (backpressure).
- `Async\ThreadPoolException` (new class): thrown from `submit()` / `map()` when the pool is closed.
- `Async\ThreadChannel` (new class): thread-safe channel for transferring zvals between threads via deep-copy snapshot. `send()` / `receive()` suspend the calling coroutine instead of blocking the OS thread. Closures, including those with bound variables, transfer correctly through the snapshot machinery.
- `Async\ThreadChannelException` (new class).
- Coverage phase 2 — targeted tests for `future.c`, `async.c`, `task_group.c`, `channel.c`, `thread.c`, `thread_pool.c`, `context.c`, `pool.c`. Aggregate ext/async coverage went from 77.45% to 78.34% of lines (+104 lines) and from 88% to 89.1% of functions (+10 functions). New tests cover Future status/cancel/getAwaitingInfo methods, FutureState double-resolve errors, `finally()` exception-chain propagation, non-callable argument rejection on map/catch/finally, TaskGroup synchronous-settled paths for `all()` / `race()` / `any()`, Channel unbuffered-iterator and foreach-by-ref branches, `Async\timeout(0)` ValueError, `Async\delay(0)` fast path, `Async\current_coroutine()` out-of-coroutine error, and `Context::get()` missing-key fallback. See `COVERAGE_PROGRESS.md` for the per-target breakdown.
- Async file → socket zero-copy transfer — new `zend_async_io_sendfile_t` hook plus the `ZEND_ASYNC_IO_SENDFILE(out_io, in_io, offset, length)` convenience macro. The libuv backend implements it via `uv_fs_sendfile` (`sendfile(2)` on Linux/BSD, `TransmitFile` on Windows) with an internal partial-send loop, so a single submitted request completes only when the full byte count has landed on the wire. Pure zero-copy: bytes go straight from the source fd into the destination socket buffer in the kernel and never touch user space — callers MUST therefore use a different write path (e.g. read + send through their TLS layer) on user-space-encrypted transports. There is no in-API fallback because the alternative `read + uv_write` would also bypass the user-space encryption layer the same way sendfile does, defeating the point of having the fallback. The first consumer is the built-in static file handler in `true-async/php-http-server` (issue #13).
- Asynchronous `open(2)` — new `zend_async_fs_open_t` hook plus the `ZEND_ASYNC_FS_OPEN(path, flags, mode)` macro. Returns a pending `zend_async_io_t *` of `ZEND_ASYNC_IO_TYPE_FILE` immediately; the thread-pool worker fills in the fd when `uv_fs_open` completes, and the libuv backend flips `ZEND_ASYNC_IO_READABLE` and notifies the io's event with a NULL exception. Errors set `ZEND_ASYNC_IO_CLOSED` and notify with an `IOException`. Symmetric with `zend_async_socket_listen` and `zend_async_io_create` (both also return `io_t` directly). Closes the last sync syscall on the file-IO hot path — previously every static-asset GET blocked the loop on a synchronous `open()` while the kernel pulled an inode off cold cache.
- `zend_async_io_register` extended signature — picks up the two new function-pointer slots (`sendfile_fn`, `fs_open_fn`) between `seek_fn` and `udp_sendto_fn`. The single in-tree caller (the libuv reactor) is updated; out-of-tree reactors must mirror the change.
- `Async\Signal` enum values broken on Darwin/FreeBSD — the enum bakes Linux signal numbers (SIGUSR1=10, SIGUSR2=12) at compile time, but BSD-derived kernels use 30/31. Without translation `Async\signal(Signal::SIGUSR1)` on macOS/FreeBSD armed the libuv watcher on signum 10 (== SIGBUS on Darwin), and a real SIGUSR1 (30) slipped past to PHP's `zend_signal_handler_defer`, which restored SIG_DFL and re-raised — terminating the process. Caught while running new multi-signum tests on macOS CI; the bug pre-dates this commit but was masked because existing single-signum tests passed `Signal::SIGUSR1->value` to both sides (registration and `posix_kill`), so both agreed on the wrong number. Fixed via two `static zend_always_inline` shims in `zend_common.h` — `async_signum_enum_to_native()` (called in `Async_signal()` before passing to libuv) and `async_signum_native_to_enum()` (called in the signal-fired callback before `zend_enum_get_case_by_value`). On Linux both are the identity. `tests/signal/002` updated to use the PHP-level `SIGUSR1` constant (which is platform-correct via PHP's MINIT) instead of `Signal::SIGUSR1->value`.
- `Async\signal()` leaked its libuv signal handle when the returned Future was dropped without resolving — e.g. `await_any_or_fail([signal(SIGUSR1), signal(SIGUSR2)])` left the second `signal_event` armed in the reactor after the first signal arrived, so the script never exited. The `signal_cb` held a raw pointer to the future but the future had no back-reference, so `zend_future_dispose()` freed the future while the signal_event kept running — a later signal would then write into freed memory (UAF). Fixed by reserving extra space on the future via `async_new_future(false, sizeof(async_signal_future_extra_t))` and overriding `event->dispose` with `async_signal_future_dispose()`, which stops and disposes the still-armed `signal_event` before chaining to the original `zend_future_dispose`. Both completion paths (signal fired, cancellation fired) NULL out `extra->signal_event` first so the override doesn't double-dispose. Same `prev_dispose`-chaining pattern as `async_timeout_create()`. Tests `tests/signal/005-007`.
- `Channel::recvAsync()` corrupted the heap when the returned Future was dropped before the channel completed it — the same class of bug as above. The channel's `channel_waiter_t` held a raw `waiter->future` pointer; if the user dropped the Future (`unset()`, `await_any_or_fail()` picking another future, etc.), the future was freed while the waiter remained queued in `channel->waiting_receivers`. The next `send()` ran `ZEND_FUTURE_COMPLETE(waiter->future, …)` against freed memory, corrupting the zend_mm heap and crashing on shutdown (`zend_mm_panic`). Fixed by allocating `channel_recv_future_extra_t` (channel back-pointer, waiter, prev_dispose) on the future and overriding dispose with `channel_recv_future_dispose()`, which removes the waiter from `waiting_receivers` and releases its callback ref before chaining. Channel lifetime is safe because `channel_close()` rejects every queued future-waiter before destruction, setting `ZEND_ASYNC_EVENT_F_CLOSED`; the override checks `!IS_CLOSED` before dereferencing `extra->channel`. Tests `tests/channel/066-068`.
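The enum→native translation behind the Darwin/FreeBSD fix can be modelled in plain PHP (illustrative only — the table covers just the two signals discussed, and ext/async does this in C in `zend_common.h`):

```php
<?php
// Sketch of async_signum_enum_to_native(): the Signal enum carries Linux
// numbers; on BSD-derived kernels SIGUSR1/SIGUSR2 must be remapped.
function signumEnumToNative(int $linuxSignum, string $os): int
{
    if ($os === 'Linux') {
        return $linuxSignum;  // identity on Linux
    }
    // Darwin/FreeBSD values for the signals whose numbers differ.
    $bsdMap = [
        10 => 30,  // SIGUSR1: Linux 10 -> BSD 30 (10 is SIGBUS on Darwin!)
        12 => 31,  // SIGUSR2: Linux 12 -> BSD 31
    ];
    return $bsdMap[$linuxSignum] ?? $linuxSignum;
}
```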
- API version bumped to v0.11.0 (was v0.10.0) — reflects the breaking change in `zend_async_io_register`'s signature. The macros `ZEND_ASYNC_API_VERSION_MAJOR` / `_MINOR` / `_PATCH` and the string `ZEND_ASYNC_API "TrueAsync ABI v0.11.0"` are updated accordingly.
- Static TSRMLS cache for ext/async sources: the extension was being built without `-DZEND_ENABLE_STATIC_TSRMLS_CACHE=1`, so every `EG()` / `ASYNC_G()` / `ZEND_ASYNC_G()` macro expansion in scheduler.c, coroutine.c, libuv_reactor.c and the rest of `ext/async/` routed through `pthread_getspecific` (the slow TSRM fallback). The PHP CLI sapi already passes this flag for its own files — `ext/async/` did not. On the bench profile this category was the largest single TLS overhead: `pthread_getspecific` at 4.03% of total CPU and `tsrm_get_ls_cache` at 1.19%, mostly under `async_scheduler_coroutine_suspend`, `fiber_entry`, `async_coroutine_execute` and `async_coroutine_finalize`. Fixed by adding `-DZEND_ENABLE_STATIC_TSRMLS_CACHE=1` as the per-extension extra-cflags argument to `PHP_NEW_EXTENSION` in `config.m4`. After the change the same macros compile to a single `__thread` load (`%fs:offset` on x86_64) instead of a libpthread call. The `_tsrm_ls_cache` symbol is already provided by the PHP binary's `TSRMLS_MAIN_CACHE_DEFINE()`, so the change is link-clean. Measured on a single-thread minimal HTTP handler (median of 5 `wrk -t4 -c64 -d6s` runs): throughput rose from ~44k to ~58k req/s (+32%); `pthread_getspecific` dropped to 0.64% and `tsrm_get_ls_cache` to 0.50% in perf top-N; 102/102 phpt regression tests still pass.
- Scheduler asserted on graceful shutdown when libuv had pending close-callbacks: in PHP_DEBUG, the `do-while` escape valve in `fiber_entry()` could exit while libuv still had handles in `closing` state and `active_event_count > 0` (typical after an uncaught Error with a live PDO connection — the timer cancel-path `uv_close()`'d the timer but its completion callback hadn't run yet). The post-loop `ZEND_ASSERT(REACTOR_LOOP_ALIVE() == false)` then aborted with "The event loop must be stopped", masking the original Fatal error. Fixed by draining the reactor with up to 8 `UV_RUN_NOWAIT` ticks before the assert so closing-handle callbacks complete and `active_event_count` reaches 0. Restores `ext/pdo_mysql/tests/{bug_37445, pdo_mysql_prepare_native_clear_error, pdo_mysql_prepare_native_mixed_style, pdo_mysql_stmt_errorcode, pdo_mysql_stmt_multiquery}`.
- `op_array_to_emalloc` did not deep-copy `arg_info[i].type` to the worker's emalloc heap, leaving the class-name `zend_string` of every class typehint pointing at the parent's persistent arena. Once `async_thread_snapshot_destroy()` freed that arena, the first `zend_call_function` in the worker that hit a class type-check (e.g. `zend_lookup_class_ex` → `zend_string_tolower_ex` on `arg_info[i].type`'s name) read garbage as `len` and asked `_emalloc()` for multi-exabyte allocations (e.g. "tried to allocate 8242266043723114880 bytes"), tripping `memory_limit` and `zend_bailout()`. Each bailout reached `async_coroutine_execute`'s outer `zend_catch`, which sets `should_start_graceful_shutdown = true` and quietly retired the worker — under sustained HTTP/2 load throughput collapsed from ~175k req/s warm to ~12k req/s as workers died one by one. The persistent-arena copy path (`thread_persist_copy_xlat` at thread.c:598-606) already handled this via `thread_copy_type(ctx, &arg_info[i].type)`; the emalloc path simply forgot to mirror it. Fixed by adding `op_array_emalloc_copy_type()` (mirrors `thread_copy_type` but uses `emalloc` + `zend_string_init` instead of arena allocations + xlat) and calling it for every `arg_info[i]` slot inside `op_array_to_emalloc()`. The function recurses into `ZEND_TYPE_HAS_LIST` for union/intersection types and rewrites both the `zend_type_list` and each list entry's class-name `zend_string` into the worker's heap. Regression tests: `tests/thread_pool/029-submit_closure_class_typehint.phpt` (single class typehint on a parameter), `tests/thread_pool/030-submit_closure_class_return_type.phpt` (return-type slot at `arg_info[-1]` when `ZEND_ACC_HAS_RETURN_TYPE` is set), `tests/thread_pool/031-submit_closure_union_typehint.phpt` (`zend_type_list` path). The bug was missed for two reasons: (1) the persistent-arena path was correct, so the emalloc path looked correct by analogy when reviewed; (2) all 28 pre-existing `tests/thread_pool/` tests use scalar typehints (`int`, `string`) or no typehints at all — scalar types are encoded in the bitmask without a `zend_string` payload, so there was no UAF surface to expose.
- `bailout_all_coroutines` left popped coroutines flagged `WAKER_QUEUED`, which then tripped `async_coroutine_finalize`'s "Attempt to finalize a coroutine that is still in the queue" warning during graceful shutdown of multi-threaded servers (the worker thread's main coroutine got enqueued by `cancel_queued_coroutines`, popped by `bailout_all_coroutines` without a status transition, then finalized via `async_thread_run`'s `request_shutdown` path). The fix mirrors the standard dispatch transition (QUEUED → RESULT) right after `next_coroutine()`.
- `bailout_all_coroutines` also missed `WAKER_IGNORED`, so the same false-positive warning fired through a second path: `cancel_queued_coroutines()` flips not-started coroutines to `IGNORED` (their semantic state is "logically out of the queue, physically still in the circular buffer, scheduler will skip on next visit"). When `bailout_all_coroutines()` then popped them via `next_coroutine()`, the status-normalisation block only matched `== ZEND_ASYNC_WAKER_QUEUED` and let `IGNORED` slip through; `async_coroutine_finalize` saw `IS_IN_QUEUE` (which covers both flags) and emitted the warning. Reproduced with `Worker(threads: 1, concurrency: 2)` running `Async\spawn(...)` jobs while a producer closes the queue under cgroup-bounded memory — three coroutines were marked `IGNORED` during graceful shutdown and each fired one warning. Widened the predicate to the `ZEND_ASYNC_WAKER_IN_QUEUE()` macro so popped coroutines are normalised regardless of which path enqueued them.
- `active_event_count` underflow on double-stop in `EVENT_STOP_PROLOGUE`: the prologue had a guard for the double-start case (`loop_ref_count > 1` → decrement and return without `DECREASE_EVENT_COUNT`) but no guard for the symmetric double-stop case where `loop_ref_count` was already `0`. When something stopped an event through the normal cancel/resolve path (`loop_ref_count` 1→0, `DECREASE_EVENT_COUNT` ran once), and `waker_events_dtor` later called `event->stop()` again during waker cleanup at `Zend/zend_async_API.c:775`, the second call fell through the prologue and ran `DECREASE_EVENT_COUNT` a second time. The macro's underflow protection clamped the global counter at zero, but the lost decrement effectively "stole a count" from another live event — the global `active_event_count` reached zero while wakers still held triggers on actually-running libuv handles. The deadlock detector then dumped `Coroutines waiting: N, active_events: 0` and force-cancelled coroutines whose I/O was still in flight; in the mysqli cancellation path this surfaced as a phantom `mysqli_sql_exception` ("MySQL server has gone away") thrown after `{main}` and a corresponding 152-byte exception leak. Fixed by adding an early `return true;` at the top of `EVENT_STOP_PROLOGUE` when `loop_ref_count == 0`, making every `*_stop` operation idempotent for the global counter. The same condition was previously hand-rolled inside `libuv_io_event_stop` only; promoting it into the prologue covers `timer_stop`, `poll_stop`, `poll_proxy_stop`, `signal_stop`, `listen_stop`, `process_stop`, `filesystem_stop` etc. uniformly. Regression test: `tests/mysqli/009-mysqli_cancellation.phpt`.
- Windows: TCP accept broken in
`libuv_io_create()` — every accepted connection failed: The TCP branch ran the incoming `io_fd` through `_get_osfhandle()` before handing it to `uv_tcp_open()`. For sockets that came straight from `WSASocketW()`/`accept()` (i.e. the value already is a native `SOCKET`, not a CRT fd), `_get_osfhandle()` returned `INVALID_HANDLE_VALUE`, `uv_tcp_open` failed with "Failed to open TCP handle", and the exception propagated through `on_connection_event` → `IF_EXCEPTION_STOP_REACTOR`. The reactor stopped, `start_graceful_shutdown()` fired, every live coroutine was cancelled with "Graceful shutdown", and the scheduler's finalisation assert (`scheduler.c:1793` — "The event loop must be stopped") tripped because the listen event was still armed (the user's `stop()` never got to run). The symptom was immediate: any HTTP server built on `ZEND_ASYNC_SOCKET_LISTEN` would accept the TCP three-way handshake, then crash before dispatching a single request. Linux was unaffected because its branch (`const uv_os_sock_t sock = (uv_os_sock_t) io_fd;`) already passed the socket through as-is. Fixed by dropping `_get_osfhandle()` from the Windows TCP path: for `ZEND_ASYNC_IO_TYPE_TCP`/`ZEND_ASYNC_IO_TYPE_UDP` the caller passes the native `zend_socket_t` value, matching both the type that gets stored at `io->base.descriptor.socket = (zend_socket_t) io_fd` a few lines up and the POSIX side. Discovered while bringing up `php-http-server` on Windows for the first time — the canonical `010-server-e2e-simple.phpt` could not return a response because of this.
- TaskGroup owned-scope UAF on worker-thread shutdown:
`TaskGroup(concurrency: N)` without an explicit scope creates a child `async_new_scope(..., with_zend_object=false)` and bumps its `ref_count` to pin it. But `scope_dispose()` unconditionally consumes one ref when called directly (e.g. from a parent scope's cascade disposal at `scope.c:1161`), so the TaskGroup's +1 was eaten by the first parent dispose. A second dispose (thread shutdown, nested scope teardown) then dropped the count to 0 and `efree`d the scope, leaving `group->scope` dangling. When `task_group_dtor_object()` ran during `zend_call_destructors` in the worker thread's `php_request_shutdown()`, it dereferenced the freed `scope->event` and SIGSEGV'd. Reproducible with 12 `spawn_thread` workers, a `ThreadChannel`-based job queue, and a closing producer. Fixed by introducing `ZEND_ASYNC_SCOPE_F_OWNER_PINNED`: a scope marked with this flag refuses disposal via `scope_can_be_disposed()`, so neither parent cascade nor `try_to_dispose` can consume its ref. TaskGroup sets the flag in `__construct`/`async_new_group` and clears it in `task_group_dtor_object` before `ZEND_ASYNC_SCOPE_RELEASE`. `curl_async_get_scope()` uses the same pattern for the lazily-created callback scope in `curl_event` — previously it relied on manual `ref_count--`/`try_to_dispose` arithmetic that had the same latent UAF surface. Regression test: `tests/task_group/040-task_group_thread_shutdown_uaf.phpt`.
- `Async\Timeout::cancel()` double-released the backing object: Calling `$t->cancel()` disposed the backing timer event, whose `async_timeout_event_dispose()` unconditionally ran `OBJ_RELEASE(object)`, assuming the event held a counted reference. In the current architecture the event only stores a raw pointer (`async_timeout_ext_t::std`) without a matching `GC_ADDREF` at creation time, so the release actually decremented the caller's live refcount. The backing object was freed while the userland `$t` variable still pointed to it, and shutdown tripped `IS_OBJ_VALID(object_buckets[handle])` in `zend_objects_store_del()`. Fixed by mirroring `async_timeout_destroy_object()`: `cancel()` now clears `timeout_ext->std` before dispatching the dispose, so `async_timeout_event_dispose()` sees a NULL `std` and skips the stray release.
- `pool_strategy_report_failure()` captured a dangling exception pointer: When no caller-provided error was available, the helper created a fresh `Exception` via `zend_throw_exception(NULL, "Resource validation failed", 0)` followed immediately by `zend_clear_exception()`. The throw set `EG(exception)` to a refcount-1 object; `clear_exception()` dropped that reference, freeing the exception. The subsequent `ZVAL_OBJ(&error_zval, ex)` captured a dangling pointer that was then handed to the userland `reportFailure()` handler, producing `zend_mm_heap corrupted` and a SIGSEGV at shutdown on ZTS DEBUG. Fixed by constructing the exception directly via `object_init_ex(zend_ce_exception)` + `zend_update_property_ex(MESSAGE)`, which never touches `EG(exception)`, and managing the zval lifetime with an `owns_error` flag and an explicit `zval_ptr_dtor()` after the `reportFailure()` call.
- `Async\Scope::disposeAfterTimeout()` leaked the scope refcount: The timer callback bumped `callback->scope->scope.event.ref_count` once, but nothing in `scope_timeout_callback()` or `scope_timeout_coroutine_entry()` ever released it, so the scope was always held above its natural lifetime — 4 `zend_mm` leaks per invocation in DEBUG. The raw `ref_count++` was replaced with `ZEND_ASYNC_EVENT_ADD_REF`, and a custom `scope_timeout_callback_dispose` handler now releases the ref when the callback is freed without firing. On the fire path, `scope_timeout_callback()` transfers ownership to the spawned cancellation coroutine (via `extended_data`); `scope_timeout_coroutine_entry()` calls `ZEND_ASYNC_EVENT_RELEASE` after `SCOPE_CANCEL`.
The previously-silent `add_callback` failure path also now releases the ref and frees the unclaimed callback.
- `Async\CompositeException` wrote to a hard-coded `properties_table[7]`: `async_composite_exception_add_exception()` assumed the `private array $exceptions` typed property lived at slot 7 of the typed-property layout. The real offset for `CompositeException extends \Exception` did not match, so the helper was clobbering an unrelated typed slot: `getExceptions()` on an empty composite hit the "Typed property must not be accessed before initialization" fatal because it was reading the actual (uninitialized) `$exceptions` slot via `zend_read_property`; multiple `addException()` calls produced `var_dump` output with garbage pointer fields and implausible string lengths. Fixed by reading and writing `$exceptions` through `zend_read_property`/`zend_update_property` with the property name, so the engine resolves the correct typed-property slot regardless of inherited layout. `getExceptions()` switched from `silent=0` to `silent=1` (`BP_VAR_IS`) so an empty composite reads back as `[]` rather than triggering the typed-uninit fatal. A second latent bug surfaced while verifying: the PHP method `addException` was passing `transfer=true` to the C helper even though `Z_PARAM_OBJECT_OF_CLASS` only lends a borrowed reference, which left the stored zval refcount one short and made repeated adds alias the last-inserted object once the slot-7 corruption stopped masking it. Fixed by switching the method call site to `transfer=false` so the helper performs the `GC_ADDREF`.
- `Async\Timeout::cancel()` assertion at shutdown (`IS_OBJ_VALID`): See the entry above — this is the same bug; it stays listed because `tests/common/timeout_class_methods.phpt` from coverage phase 2 is the test that exposed it.
- `TaskGroup::all()`/`race()`/`any()` use-after-free in the synchronous-settled path: The synchronous fast paths created a waiter via `task_group_waiter_future_new()` (which pushes it into `group->waiter_events[]`), resolved it synchronously, wrapped it in a Future wrapper and returned — but never removed it from the `waiter_events[]` vector. The drain path in `task_group_try_complete()` always calls `task_group_waiter_event_remove()` after resolving; the sync path forgot to mirror that. At shutdown, `task_group_free_object()` force-disposed everything still in `waiter_events[]`, which `efree`'d the waiter. When the Future wrapper was then destroyed and released the waiter it had wrapped, it touched freed memory — a "Future was never used" warning from a stale `zend_future_t` followed by a segfault whenever user code kept an intermediate `$future = $group->all()` variable across a `try`/`catch`. Fixed by calling `task_group_waiter_event_remove(waiter)` at the end of each synchronous-resolve branch, matching what `task_group_try_complete()` does.
- `Thread::finally()` on a still-running thread NULL-scope crash: `thread_object_dtor()` dispatches registered finally handlers via `async_call_finally_handlers()`, which unconditionally dereferences `context->scope` through `ZEND_ASYNC_NEW_SCOPE(context->scope)` and `ZEND_ASYNC_EVENT_ADD_REF(&context->scope->event)`. `thread.c` was passing `context->scope = NULL` because the Thread object had no PHP-side scope of its own, and registering a finally handler on a still-running thread then destroying the thread would segfault at dtor time. Fixed by capturing `ZEND_ASYNC_CURRENT_SCOPE` at spawn time (`async_thread_object_t::parent_scope`) and holding a refcount on the scope event so it outlives the Thread. `thread_object_dtor()` now passes `thread->parent_scope` to the finally dispatcher, so handlers inherit the caller's async context hierarchy (exception handlers, context values) just like `coroutine`/`task_group`/`scope` finally do. Released in `thread_object_free()`. Added `thread_finally_handlers_dtor()` to pair the `GC_ADDREF` that keeps the Thread alive during handler execution with an `OBJ_RELEASE` — previously `context->dtor` was `NULL` and the Thread object leaked 72 bytes every time dtor-time finally ran. A `ZEND_ASYNC_IS_OFF` safety net is kept for the edge case where a Thread object outlives the async subsystem (late `zend_call_destructors` after RSHUTDOWN).
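The `ZEND_ASYNC_SCOPE_F_OWNER_PINNED` fix described in the TaskGroup entry above boils down to a flag that vetoes every disposal path until the owner clears it. A minimal sketch of that idea, with invented struct layout and names (only the flag name comes from the changelog):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model: a scope whose owner pinned it refuses disposal, so
 * neither parent-cascade nor try_to_dispose can consume the owner's ref. */
#define SCOPE_F_OWNER_PINNED (1u << 0)

typedef struct {
    unsigned flags;
    int      ref_count;
    bool     disposed;     /* stands in for efree(scope) */
} scope_t;

static bool scope_can_be_disposed(const scope_t *s) {
    return (s->flags & SCOPE_F_OWNER_PINNED) == 0;
}

/* Returns true if a reference was actually consumed. */
bool scope_try_dispose(scope_t *s) {
    if (!scope_can_be_disposed(s)) {
        return false;          /* pinned: owner must unpin first */
    }
    if (--s->ref_count == 0) {
        s->disposed = true;
    }
    return true;
}

/* Owner clears the pin right before its final release (as TaskGroup does
 * in its dtor before ZEND_ASYNC_SCOPE_RELEASE). */
void scope_unpin(scope_t *s) { s->flags &= ~SCOPE_F_OWNER_PINNED; }
```

The point of the pattern is that stray double-disposes become no-ops instead of eating the owner's pin and leaving a dangling pointer.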
- PDO Pool: `getAttribute()` support for pool attributes: `$pdo->getAttribute(PDO::ATTR_POOL_ENABLED)` now returns `true`/`false` depending on whether the connection pool is active. `PDO::ATTR_POOL_MIN` and `PDO::ATTR_POOL_MAX` return the configured pool size limits (or `false` when pooling is disabled). `PDO::ATTR_POOL_HEALTHCHECK_INTERVAL` is a construction-only attribute and raises an error if read at runtime.
- Heap-use-after-free in `await_all()`/`await_*()` with string keys: When any `await_*` function received an array with non-interned string keys (e.g. from `json_decode()` or `str_repeat()`), the returned results/errors arrays had incorrect refcounts on those keys. The root cause: `async_waiting_callback_dispose` was called twice per callback (once from `zend_async_callbacks_remove` during `del_callback`, once from `ZEND_ASYNC_EVENT_CALLBACK_RELEASE`) but did not check `ref_count` — it unconditionally called `zval_ptr_dtor` on the key each time, decrementing the string refcount twice instead of once. When the calling function's local variables were freed (`i_free_compiled_variables`), the already-freed string was accessed again — heap use-after-free. Fixed by adding a `ref_count` guard to `async_waiting_callback_dispose`: when `ref_count > 1`, decrement and return without touching resources; only perform cleanup on the final dispose (`ref_count == 1`).
- PDO Pool: broken connection detection: Pooled connections that lose server contact or get interrupted (e.g. cancelled coroutine, server restart, DBA kill) are now automatically detected and destroyed instead of being returned to the pool. This prevents the next coroutine from receiving a broken connection ("MySQL server has gone away", "another command is already in progress"). Works for both MySQL and PostgreSQL.
- PDO Pool: transparent reconnect after a broken connection: When a coroutine catches an error from a broken connection and retries a query on the same `$pdo`, the pool automatically discards the broken connection and acquires a fresh one. No manual reconnection needed.
- PDO Pool: error state isolation between coroutines: `$pdo->errorCode()` and `$pdo->errorInfo()` no longer leak error state from one coroutine to another. Each coroutine sees only its own errors.
- PDO Pool: `errorCode()` returns `"00000"` on the first query: Previously it could return `NULL` when multiple coroutines ran their first query concurrently on fresh connections.
- Heap-use-after-free in DNS resolve on cancellation: When a coroutine was cancelled while a DNS resolve (`gethostbyname`, database connect) was in flight, the DNS event memory was freed immediately in `dispose()` while the libuv thread-pool callback was still pending. When libuv later invoked the callback, it accessed freed memory — crash or corruption. Fixed by deferring the free to the libuv callback itself: `dispose()` sets a `DISPOSE_PENDING` flag and the callback checks it on completion, taking ownership of the memory cleanup.
- Pool `max_size` not enforced during concurrent connection creation: When multiple coroutines tried to open connections simultaneously (e.g. on application startup), the pool could create more connections than `max_size` allowed. Now the limit is strictly enforced — excess coroutines wait until a connection becomes available.
- `Scope::awaitCompletion()` not marking the cancellation Future as used: The cancellation token passed to `awaitCompletion()` was never marked with `RESULT_USED`/`EXC_CAUGHT`, causing a spurious "Future was never used" warning when the Future was destroyed. Additionally, early return paths (scope already finished, closed, or cancelled) skipped the marking entirely. Fixed by setting the flags immediately after parameter parsing, before any early returns.
- `Scope::awaitAfterCancellation()` not marking the cancellation Future as used: Same issue as `awaitCompletion()` — the optional cancellation Future was only marked when the method reached `resume_when`, but early returns bypassed it. Fixed identically.
- Heap-use-after-free in `stream_socket_accept()` during coroutine cancellation: When a coroutine blocked in `stream_socket_accept()` was cancelled during graceful shutdown, `network_async_accept_incoming()` extracted the exception's message string into `*error_string` without incrementing its refcount (`*error_string = Z_STR_P(message)`). The caller then called `zend_string_release_ex()`, freeing the string while the exception object still referenced it. On exception destruction, `zend_object_std_dtor` accessed the freed string — heap use-after-free. Fixed by using `zend_string_copy()` to properly addref the borrowed string. The same bug existed in the synchronous path `php_network_accept_incoming_ex()` in `main/network.c` — fixed there too.
- `ZEND_ASYNC_SUSPEND`: no longer throws an error when called with an empty array of events.
- Waker inline storage optimization: Embedded 2 trigger slots and 2 callback slots directly into the Waker struct, eliminating heap allocations for the most common case (1–2 events per await). Uses `capacity == 0` to mark inline triggers and `base.callback == NULL` to mark free inline callback slots. When more than 1 callback per event is needed, the inline trigger automatically promotes to a heap-allocated one. Benchmarks show a ~3× speedup across all hot paths (await: 2.13 → 0.67 μs, `await_all` ×2: 3.88 → 1.38 μs, Channel: 1.48 → 0.50 μs) with zero memory overhead.
- Adaptive fiber pool sizing: The fiber context pool now grows dynamically based on coroutine queue pressure instead of being limited to a fixed size of 4. When demand exceeds the pool (queue size > pool count), the pool grows via `circular_buffer_push_ptr_with_resize`. When demand is low, excess fibers are destroyed instead of being returned to the pool. A minimum of 4 fibers (`ASYNC_FIBER_POOL_SIZE`) is always retained. This eliminates costly fiber create/destroy cycles under bursty workloads, yielding a 10–15% improvement in context-switch throughput (10k coroutines × 10 suspends: 490 → 566 switches/ms).
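The adaptive sizing rule above reduces to one decision made at fiber-release time. A simplified sketch under the changelog's stated constants (minimum of 4 fibers); the bookkeeping struct is invented — the real pool stores fiber contexts in a circular buffer:

```c
#include <assert.h>

#define FIBER_POOL_MIN 4   /* ASYNC_FIBER_POOL_SIZE per the changelog */

typedef struct {
    int pooled;       /* idle fiber contexts currently in the pool */
    int queue_size;   /* coroutine queue pressure observed at release */
} fiber_pool;

/* On release: keep the fiber when demand justifies it, destroy otherwise.
 * Returns 1 if the fiber was returned to the pool, 0 if destroyed. */
int fiber_pool_release(fiber_pool *p) {
    if (p->pooled < FIBER_POOL_MIN || p->queue_size > p->pooled) {
        p->pooled++;      /* grow (circular_buffer_push_ptr_with_resize) */
        return 1;
    }
    return 0;             /* demand is low: destroy the excess fiber */
}
```

The interesting property is that the pool never shrinks below the minimum, and under bursty queues (`queue_size > pooled`) every released fiber is retained, avoiding the create/destroy churn the entry describes.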
- SIGSEGV in pool healthcheck callback: The healthcheck timer callback was registered by casting the pool pointer directly to `zend_async_event_callback_t`, corrupting the pool's event structure fields and leaving the `dispose` function pointer uninitialized. When the pool was closed, `zend_async_callbacks_free` called the garbage dispose pointer, causing a segfault. Fixed by embedding a proper `zend_async_event_callback_t` inside `async_pool_t` and using `offsetof` to recover the pool pointer in the callback.
- `proc_close()` crash when the child process was already reaped: When a child process was killed by a signal and its zombie was reaped externally (e.g. by a host runtime calling `waitpid(-1)`), `async_wait_process()` fell through to `libuv_process_event_start()`, which threw `AsyncException: Failed to monitor process N: No child processes`. Fixed by handling `ECHILD` in both `async_wait_process()` (early return) and `libuv_process_event_start()` (treat as exited with unknown status).
- Pool acquire with a failed factory caused a use-after-free: When `pool_create_resource()` threw an exception, `zend_async_pool_acquire()` fell through to `pool_wait_for_resource()` with a live `EG(exception)`, registering a coroutine callback on the pool event. At shutdown, the coroutine was freed first, leaving a dangling pointer that `pool_dispose` tried to dereference. Fixed by checking `EG(exception)` after factory failure and returning immediately.
- Missing exception checks in pool error paths: `pool_destroy_resource()` and `pool_create_resource()` exceptions were not checked in the healthcheck loop, the `beforeAcquire` failure path, and `try_acquire`. Added `EG(exception)` checks to break/return on error instead of continuing with live exceptions.
- Pool close now chains destructor exceptions via `previous`: When multiple resource destructors throw during `pool->close()`, all resources are still destroyed and the exceptions are chained using `zend_exception_set_previous()` so no error is silently lost.
- Pool destructor exceptions now propagate: Resource destructor exceptions were silently discarded by `zend_clear_exception()`. Removed the suppression so exceptions propagate normally to the caller.
- NULL `driver_data` crash in PDO PgSQL pool mode: `pgsql_stmt_execute()` called `in_transaction()` on `stmt->dbh`, which in pool mode is the template PDO object with `driver_data == NULL`. This caused a segfault when dereferencing `H->server` via `PQtransactionStatus()`. Fixed by using `stmt->pooled_conn` (the actual pooled connection) when available.
- `Scope::awaitCompletion()` ignoring completion: `async_scope_notify_coroutine_finished()` was missing the call to `scope_check_completion_and_notify()`, so `awaitCompletion()` never woke up when all coroutines finished and always waited until the timeout expired.
- `Scope::awaitAfterCancellation()` cleanup: Replaced `zend_async_waker_clean()` with `ZEND_ASYNC_WAKER_DESTROY()` on error paths, and switched to checking the return value of `zend_async_resume_when()` instead of `EG(exception)`.
- Negative stream timeout causing a poll event leak: When a stream context timeout was negative (e.g. `PHP_INT_MIN`), the signed `tv_sec` overflowed to a huge positive value when cast to `zend_ulong` milliseconds. This created an async waker with a timer event that held an extra reference to the poll event (refcount 3 instead of 2), causing it to leak. Fixed by checking `tv_sec < 0` before the conversion and falling back to synchronous `php_pollfd_for()`.
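The negative-timeout bug is a pure sign/cast ordering issue: the check must happen before the signed seconds value is converted to unsigned milliseconds. A sketch with illustrative types and an invented helper name (a return of -1 stands for "fall back to the synchronous `php_pollfd_for()` path", per the changelog):

```c
#include <assert.h>
#include <limits.h>

/* Convert a stream timeout to milliseconds. The sign check must come
 * BEFORE any unsigned cast: casting a negative tv_sec (e.g. PHP_INT_MIN)
 * to an unsigned type first would wrap into a huge positive timeout. */
long long stream_timeout_to_ms(long tv_sec, long tv_usec) {
    if (tv_sec < 0) {
        return -1;      /* negative timeout: caller uses the synchronous path */
    }
    /* safe: tv_sec is non-negative here, so no wraparound is possible */
    return (long long)tv_sec * 1000 + tv_usec / 1000;
}
```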
- Non-blocking `flock()`: `flock()` no longer blocks the event loop. The lock operation is offloaded to the libuv thread pool via `zend_async_task_t`, allowing other coroutines to continue executing while waiting for a file lock.
- `zend_async_task_new()` API: New factory function for creating thread-pool tasks, registered through the reactor like timer and IO events. Replaces manual `pecalloc` + field initialization.
- `await_*()` deadlock with already-completed awaitables: When a coroutine or Future passed to `await_all()`, `await_any_or_fail()`, or other `await_*()` functions had already completed, it was skipped entirely (`ZEND_ASYNC_EVENT_IS_CLOSED` → `continue`), but `resolved_count` was never incremented. Since `total` still counted the skipped awaitable, `resolved_count` could never reach `total`, causing a deadlock. Fixed by using `ZEND_ASYNC_EVENT_REPLAY` to synchronously replay the stored result/exception through the normal callback path, correctly updating all counters. Additionally, when replay satisfies the waiting condition early (e.g. `await_any_or_fail` needs only one result), the loop now breaks immediately instead of subscribing to the remaining awaitables and suspending unnecessarily.
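The counting logic behind that fix can be shown in isolation: an already-closed awaitable must contribute to `resolved_count` via replay, and the subscribe loop can stop as soon as the condition is met. All structures here are invented for illustration:

```c
#include <assert.h>
#include <stdbool.h>

typedef struct { bool closed; } awaitable;

typedef struct {
    int total;            /* how many awaitables were passed in */
    int resolved_count;   /* bumped by callbacks — or by replay */
    int need;             /* 1 for await_any-style, total for await_all */
} wait_context;

/* Subscribe one awaitable. Returns true when the waiting condition is
 * already satisfied, so the caller can break out instead of suspending. */
bool await_subscribe(wait_context *ctx, const awaitable *a) {
    if (a->closed) {
        /* REPLAY: count the stored result instead of skipping it —
         * the old code's `continue` here is what caused the deadlock. */
        ctx->resolved_count++;
    }
    /* (a live awaitable would register a callback that bumps the
     *  counter later; omitted in this sketch) */
    return ctx->resolved_count >= ctx->need;
}
```

With the old skip behavior, `resolved_count` could never reach `total` when any input was pre-completed; with replay it converges, and `await_any`-style waits return without suspending at all.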
- `feof()` on sockets unreliable on Windows: `WSAPoll(timeout=0)` fails to detect FIN packets on Windows, causing `feof()` to return false on closed sockets. Fixed by skipping poll for liveness checks (`value == 0`) and going directly to `recv(MSG_PEEK)`. On Windows, `MSG_DONTWAIT` is unavailable, so non-blocking mode is temporarily toggled via `ioctlsocket`. The errno is saved immediately after `recv` because `ioctlsocket` clears `WSAGetLastError()`. Shared logic was extracted into `php_socket_check_liveness()` in `network_async.c` to eliminate duplication between `xp_socket.c` and `xp_ssl.c`.
- Pipe close error on Windows: `php_select()` incorrectly skipped signaled pipe handles when `num_read_pipes >= n_handles`, causing pipe-close events to be missed and `proc_open` reads to hang. Fixed by removing the `num_read_pipes < n_handles` guard so `PeekNamedPipe` is always called for signaled handles.
- Async file IO position tracking: Replaced bare `lseek`/`_lseeki64` with `zend_lseek` across the reactor. Rewrote `libuv_io_seek` to accept `whence` and return the position, eliminating the double lseek in `php_stdiop_seek`. Fixed append-mode offset init and `fseek` behavior. On Windows, append writes now query the real EOF via `lseek(SEEK_END)` before dispatch to avoid stale cached offsets.
- Windows concurrent append (XFAIL): On Windows, `WriteFile` via libuv ignores the CRT `_O_APPEND` because `FILE_WRITE_DATA` coexists with `FILE_APPEND_DATA` on the HANDLE. Removing `FILE_WRITE_DATA` would fix atomic append but breaks `ftruncate`/`SetEndOfFile`. Concurrent append from multiple coroutines remains a known limitation (test 069 marked XFAIL).
- Reactor deadlock on pending file I/O requests: `uv_fs_read`, `uv_fs_write`, `uv_fs_fsync`, and `uv_fs_fstat` are libuv requests (not handles) that keep `uv_loop_alive()` true but were invisible to `ZEND_ASYNC_ACTIVE_EVENT_COUNT`. The reactor loop exited prematurely (`has_handles && active_event_count > 0` → false) while file I/O callbacks were still pending, causing deadlocks in async file writes (e.g. `CURLOPT_FILE` with async I/O). Fixed by adding `ZEND_ASYNC_INCREASE_EVENT_COUNT` after successful `uv_fs_*` submission and `ZEND_ASYNC_DECREASE_EVENT_COUNT` in their completion callbacks (`io_file_read_cb`, `io_file_write_cb`, `io_file_flush_cb`, `io_file_stat_cb`).
- Generator segfault in fiber-coroutine mode: Generators running inside fiber coroutines were not marked with `ZEND_GENERATOR_IN_FIBER` because `EG(active_fiber)` is not set in coroutine mode. This caused shutdown destructors to close generators while the coroutine was still suspended, leading to a NULL `execute_data` dereference in `zend_generator_resume`. Fixed by also checking `ZEND_ASYNC_CURRENT_COROUTINE` with `ZEND_COROUTINE_IS_FIBER` when setting the `IN_FIBER` flag on generators.
- `Async\OperationCanceledException`: New exception class extending `AsyncCancellation`, thrown when an awaited operation is interrupted by a cancellation token. The original exception from the token is always available via `$previous`. This allows distinguishing token-triggered cancellations from exceptions thrown by the awaitable itself. Affects all cancellable APIs: `await()`, the `await_*()` family, `Future::await()`, `Channel::send()`/`recv()`, `Scope::awaitCompletion()`/`awaitAfterCancellation()`, and `signal()`.
- TaskGroup (`Async\TaskGroup`): Task pool with a queue, concurrency control, and structured completion via `all()`, `race()`, `any()`, `awaitCompletion()`, `cancel()`, `seal()`, `finally()`, and `foreach` iteration.
- TaskSet (`Async\TaskSet`): Mutable task collection with automatic cleanup semantics. Completed entries are removed after results are consumed. Provides `joinNext()`, `joinAny()`, `joinAll()` methods (replacing `race()`/`any()`/`all()` with join semantics), plus `foreach` iteration with per-entry cleanup.
- Deadlock diagnostics (
`async.debug_deadlock` INI option): When enabled (default: on), prints detailed diagnostic info on deadlock detection — the coroutine list with spawn/suspend locations and the events each coroutine is waiting for. All event types now implement an `info` method for human-readable descriptions.
- TCP/UDP Socket I/O: Efficient non-blocking TCP/UDP socket functions without poll overhead via libuv handles. Includes `sendto`/`recvfrom` for UDP, a socket options API (`broadcast`, `multicast`, TCP `nodelay`/`keepalive`), and a unified close callback for all I/O handle types.
- Async File and Pipe I/O: Non-blocking I/O for plain files and pipes via the `php_stdiop_read`/`php_stdiop_write` async path. Supported functions: `fread`, `fwrite`, `fseek`, `ftell`, `rewind`, `fgets`, `fgetc`, `fgetcsv`, `fputcsv`, `ftruncate`, `fflush`, `fscanf`, `file_get_contents`, `file_put_contents`, `file()`, `copy`, `tmpfile`, `readfile`, `fpassthru`, `stream_get_contents`, `stream_copy_to_stream`.
- Pipe/Stream Read Timeout: `stream_set_timeout()` now works for pipe streams (`proc_open` pipes, TTY). In async mode, the timeout is enforced via a waker timer competing with the IO event — whichever fires first wins. `stream_get_meta_data()['timed_out']` correctly reports the timeout state. The pipe handle remains usable after a timeout. Also fixed `libuv_io_event_stop` to properly cancel pending reads via `uv_read_stop` without destroying the handle.
- Async IO Seek API: `ZEND_ASYNC_IO_SEEK` for syncing the libuv file offset after `fseek`/`rewind`.
- Async IO Append Flag: `ZEND_ASYNC_IO_APPEND` flag for correct append-mode file offset initialization.
- Future Support: Full Future/FutureState implementation with `map()`, `catch()`, `finally()` chains and proper flag propagation.
- Channel: CSP-style message passing between coroutines with buffered/unbuffered modes, timeout support, and an iterator interface.
- Pool: Resource pool implementation with CircuitBreaker pattern support
  - `Async\Pool` class for managing reusable resources (connections, handles, etc.)
  - Configurable min/max pool size with automatic pre-warming
  - `acquire()`/`tryAcquire()`/`release()` methods for resource management
  - Blocking acquire with timeout support in coroutine context
  - Callbacks: `factory`, `destructor`, `healthcheck`, `beforeAcquire`, `beforeRelease`
  - `CircuitBreakerInterface` implementation with state management (ACTIVE/INACTIVE/RECOVERING)
  - `CircuitBreakerStrategyInterface` for custom recovery strategies
  - `ServiceUnavailableException` when the circuit breaker is INACTIVE
  - C API: `ZEND_ASYNC_NEW_POOL()`, `ZEND_ASYNC_POOL_ACQUIRE()`, etc. macros for internal use
- TrueAsync ABI: Extended `zend_async_API.h` with Pool support
  - Added `zend_async_pool_t` structure with CircuitBreaker state
  - Added `zend_async_circuit_state_t` enum and strategy types
  - Added Pool API function pointers and registration mechanism
  - Added `ZEND_ASYNC_CLASS_POOL` and `ZEND_ASYNC_EXCEPTION_SERVICE_UNAVAILABLE` to the class enum
- PDO Connection Pooling: Transparent connection pooling for PDO with per-coroutine dispatch and automatic lifecycle management
- PDO PgSQL: Non-blocking query execution for PostgreSQL PDO driver
- PostgreSQL: Concurrent `pg_*` query execution with separate connections per async context.
- `Async\iterate()` function: Iterates over an iterable, calling the callback for each element with an optional concurrency limit. Supports a `cancelPending` parameter (default: `true`) that controls whether coroutines spawned inside the callback are cancelled or awaited after iteration completes.
- `Async\FileSystemWatcher` class: Persistent filesystem watcher with `foreach` iteration support, suspend/resume on new events, two storage modes (coalesce with HashTable deduplication, raw with a circular buffer), `close()`/`isClosed()` lifecycle, and the `Awaitable` interface via the `ZEND_ASYNC_EVENT_REF_FIELDS` pattern. Replaces the one-shot `Async\watch_filesystem()` function.
- `Async\signal()` function: One-shot signal handler that returns a `Future` resolved when the specified signal is received. Supports an optional `Cancellation` for early cancellation.
- Acting coroutine for error context (`zend_async_globals_t.acting_coroutine`): New field in async globals that allows scheduler-context code to attribute errors to a suspended coroutine. When set, `zend_get_executed_filename_ex()`, `zend_get_executed_lineno()`, and `get_active_function_name()` in `Zend/zend_execute_API.c` fall back to the coroutine's suspended `execute_data` for file, line, and function name. Zero-cost: the `execute_data` is only read when an error actually occurs. Macros: `ZEND_ASYNC_ACTING_COROUTINE`, `ZEND_ASYNC_ACT_AS_START(coroutine)`, `ZEND_ASYNC_ACT_AS_END()`.
- Bailout handling: Added `ZEND_ASYNC_EVENT_F_BAILOUT` flag (bit 11) on `zend_async_event_t`. During bailout (e.g. OOM), PHP-level handlers are no longer called — finally handlers on coroutines and scopes are destroyed without execution, and scope exception handlers (`try_to_handle_exception`) are skipped. C-level callbacks (`ZEND_ASYNC_CALLBACKS_NOTIFY`) continue to work normally. Convenience macros: `ZEND_COROUTINE_SET_BAILOUT`/`ZEND_COROUTINE_IS_BAILOUT`, `ZEND_ASYNC_SCOPE_SET_BAILOUT`/`ZEND_ASYNC_SCOPE_IS_BAILOUT`.
- Removed the "Graceful shutdown mode" warning: The
`Warning: Graceful shutdown mode was started` message is no longer emitted during bailout (OOM/stack overflow). The graceful shutdown still happens, but without the warning output.
- Breaking Change: `onFinally()` renamed to `finally()` on both the `Async\Coroutine` and `Async\Scope` classes, aligning with the Promise/A+ convention (`.then()`, `.catch()`, `.finally()`).
  - Migration: Replace `->onFinally(function() { ... })` with `->finally(function() { ... })`.
- Breaking Change: `Async\CancellationError` renamed to `Async\AsyncCancellation` and now extends `\Cancellation` instead of `\Error`. `\Cancellation` is a new PHP core root class implementing `\Throwable` (alongside `\Exception` and `\Error`), added per the True Async RFC. This prevents cancellation exceptions from being accidentally caught by `catch (\Exception)` or `catch (\Error)` blocks.
  - Migration: Replace `catch (Async\CancellationError $e)` with `catch (Async\AsyncCancellation $e)`, or `catch (\Cancellation $e)` for broader matching.
- Hidden Events: Added `ZEND_ASYNC_EVENT_F_HIDDEN` flag for events excluded from deadlock detection.
- Scope `can_be_disposed` API: Exposed `scope_can_be_disposed` as a virtual method on `zend_async_scope_t`, enabling scope completion checks from the Zend API via the `ZEND_ASYNC_SCOPE_IS_COMPLETED`, `ZEND_ASYNC_SCOPE_IS_COMPLETELY_DONE`, and `ZEND_ASYNC_SCOPE_CAN_BE_DISPOSED` macros.
- TaskGroup completion semantics: The `ASYNC_TASK_GROUP_F_COMPLETED` flag is now set only when the group is both sealed and all tasks are settled. `finally()` handlers fire only in this terminal state. Calling `finally()` on an already-completed group invokes the callback synchronously.
- `exec()` output not split into lines in the async path: The libuv read callback delivered raw byte chunks to the output array instead of splitting them by newlines and stripping trailing whitespace like the POPEN path does. Implemented an on-the-fly line parser with a zero-copy optimization and an 8 KB reusable buffer (doubling strategy). Uses `memchr()` for SIMD-accelerated newline scanning. Fully matches the POPEN path's behavior, including `isspace()` trailing-whitespace stripping.
- `exec()` exit code race condition: The pipe EOF notification (`exec_read_cb`) often arrived before `exec_on_exit`, waking the coroutine with `exit_code` still 0. Fixed by making `exec_on_exit` the sole notification point.
- `exec()` not routed through the async path: Changed the routing condition from
`ZEND_ASYNC_IS_ACTIVE` to `ZEND_ASYNC_ON` + `ZEND_ASYNC_SCHEDULER_INIT()` so exec functions use the async path whenever the scheduler is available.
- Deadlock in `proc_close()` when spawning many concurrent processes on Windows: Windows Job Objects send `JOB_OBJECT_MSG_ACTIVE_PROCESS_ZERO` in addition to `JOB_OBJECT_MSG_EXIT_PROCESS` for every single-process job that exits. The IOCP watcher thread was treating both messages as process-exit events, pushing the same `process_event` to `pid_queue` twice and decrementing `countWaitingDescriptors` an extra time per process. With enough concurrent processes, the counter reached zero prematurely, triggering `libuv_stop_process_watcher()` too early and destroying `pid_queue` — leaving coroutines suspended in `proc_close()` with no event to wake them. Fixed by ignoring `JOB_OBJECT_MSG_ACTIVE_PROCESS_ZERO` in the switch statement, since it always accompanies `EXIT_PROCESS` for single-process jobs.
- Use-after-free in `zend_exception_set_previous` calls: When `exception == add_previous` (the same object), `zend_exception_set_previous` calls `OBJ_RELEASE`, which frees the object while other pointers (e.g. `EG(exception)`) still reference it. Added identity checks before all `zend_exception_set_previous` calls where the two arguments could alias the same object. Affected files: `scheduler.c`, `exceptions.c`, `zend_common.c`, `future.c`.
- Memory leak of
`Async\DeadlockError` in the scheduler fiber exit path: In `fiber_entry`, when the scheduler fiber finalized, the `exit_exception` from `ZEND_ASYNC_EXIT_EXCEPTION` was not propagated when `EG(exception) == NULL` — the exception was silently lost. Added `async_rethrow_exception(exit_exception)` for this case.
- `stream_select()` ignoring PHP-buffered data in async context: When `fgets()`/`fread()` pulled more data into PHP's internal stream buffer than was returned, a subsequent `stream_select()` would not detect the buffered data because the async path (libuv poll) only checks OS-level file descriptors. This caused hangs in `run-tests.php -j` parallel workers on macOS, where TCP delivered multiple messages in a single segment. Fixed by checking `stream_array_emulate_read_fd_set()` before entering the async poll path.
- Waker events not cleaned when a coroutine is resumed outside scheduler context: When a coroutine was resumed directly (not from the scheduler), its waker events were not automatically cleaned up, which could lead to stale event references. Now `ZEND_ASYNC_WAKER_CLEAN_EVENTS` is called on resume outside the scheduler.
- False deadlock detection after coroutine execution: The `has_handles` flag from `ZEND_ASYNC_REACTOR_EXECUTE` was evaluated before coroutines ran but checked after, causing a false deadlock when coroutines created new I/O handles between those points. Added a `ZEND_ASYNC_REACTOR_LOOP_ALIVE()` check to the deadlock conditions for an accurate state at decision time.
- TaskSet auto-cleanup race condition: Completed task entries were removed unconditionally in `task_group_try_complete()`, even when no consumer had requested results. This caused `joinAll()`/`joinNext()`/`joinAny()` to return empty results when called after tasks had already completed. Fixed by deferring cleanup to the point of actual result delivery — per-entry removal in `race()`/`any()`/iterator callbacks, and bulk cleanup in `all()` after results are collected.
- Fiber Support: Full integration of PHP Fibers with the TrueAsync coroutine system
  - `Fiber::suspend()` and `Fiber::resume()` work in async scheduler context
  - `Fiber::getCoroutine()` method to access a fiber's coroutine
  - Fiber status methods (`isStarted`, `isSuspended`, `isRunning`, `isTerminated`)
  - Support for nested fibers and fiber-coroutine interactions
  - Comprehensive test coverage for all fiber scenarios
- TrueAsync API: Added `ZEND_ASYNC_SCHEDULER_LAUNCH()` macro for scheduler initialization
- TrueAsync API: Updated to version 0.8.0 with fiber support
- TrueAsync API: Added customizable scheduler heartbeat handler mechanism with the `zend_async_set_heartbeat_handler()` API
- Critical GC Bug: Fixed a garbage collection crash during coroutine cancellation when an exception occurs in the main coroutine while GC is running
- Fixed double free in `zend_fiber_object_destroy()`
- Fixed `stream_select()` for the `timeout == NULL` case in async context
- Fixed fiber memory leaks and improved GC logic
- Deadlock Detection: Replaced warnings with structured exception handling
  - Deadlock detection now throws an `Async\DeadlockError` exception instead of multiple warnings
  - Breaking Change: Applications relying on deadlock warnings will need to be updated to catch `Async\DeadlockError` exceptions
- Breaking Change: PHP Coding Standards Compliance - Function names updated to follow official PHP naming conventions:
  - `spawnWith()` → `spawn_with()`
  - `awaitAnyOrFail()` → `await_any_or_fail()`
  - `awaitFirstSuccess()` → `await_first_success()`
  - `awaitAllOrFail()` → `await_all_or_fail()`
  - `awaitAll()` → `await_all()`
  - `awaitAnyOfOrFail()` → `await_any_of_or_fail()`
  - `awaitAnyOf()` → `await_any_of()`
  - `currentContext()` → `current_context()`
  - `coroutineContext()` → `coroutine_context()`
  - `currentCoroutine()` → `current_coroutine()`
  - `rootContext()` → `root_context()`
  - `getCoroutines()` → `get_coroutines()`
  - `gracefulShutdown()` → `graceful_shutdown()`
  - Rationale: Compliance with PHP Coding Standards - functions must use lowercase with underscores
- UDP socket stream support for TrueAsync
- SSL support for socket streams
- Poll Proxy: New `zend_async_poll_proxy_t` structure for optimized file descriptor management
  - Efficient caching of event handlers to reduce EventLoop creation overhead
  - Poll proxy event aggregation and improved lifecycle management
- Fixed `ref_count` logic for the `zend_async_event_callback_t` structure:
  - The add/dispose methods correctly increment the counter
  - Memory leaks fixed
- Fixed await iterator logic for `awaitXXX` functions
- Fixed process waiting logic for UNIX-like systems
- Memory Optimization: Enhanced memory allocation for async structures
- Optimized waker trigger structures with improved memory layout
- Enhanced memory management for poll proxy events
- Better resource cleanup and lifecycle management
- Event Loop Performance: Major scheduler optimizations
  - Automatic Event Cleanup: Added automatic waker event cleanup when coroutines resume (see `ZEND_ASYNC_WAKER_CLEAN_EVENTS`)
  - Separate queue implementation for resumed coroutines to improve stability
  - Reduced unnecessary LibUV calls in scheduler tick processing
- Socket Performance:
  - Event handler caching for sockets to avoid constant EventLoop recreation
  - Optimized `network_async_accept_incoming` to try `accept()` before waiting
  - Enhanced `stream_select` functionality with event-driven architecture
  - Improved blocking operation handling with boolean return values
- TrueAsync API Performance: Optimized execution paths by replacing expensive `EG(exception)` checks with direct `bool` return values across all async functions
- Upgraded `LibUV` to version `1.45` due to a timer bug that caused the application to hang
- Docker support with multi-stage build (Ubuntu 24.04, libuv 1.49, curl 8.10)
- PDO MySQL and MySQLi async support
- TrueAsync API Extensions: Enhanced async API with new object creation and coroutine grouping capabilities
  - Added `ZEND_ASYNC_NEW_GROUP()` API for creating CoroutineGroup objects to manage multiple coroutines
  - Added `ZEND_ASYNC_NEW_FUTURE_OBJ()` and `ZEND_ASYNC_NEW_CHANNEL_OBJ()` APIs for creating Zend objects from async primitives
  - Extended the `zend_async_task_t` structure with a `run` method for thread pool task execution
  - Enhanced the `zend_async_scheduler_register()` function with new API function pointers
- Multiple Callbacks Per Event Support: Complete redesign of the waker trigger system to support multiple callbacks on a single event
  - Modified the `zend_async_waker_trigger_s` structure to use a flexible array member with dynamic capacity
  - Added `waker_trigger_create()` and `waker_trigger_add_callback()` helper functions for efficient memory management
  - Implemented single-block memory allocation for better performance (trigger + callback array in one allocation)
  - Default capacity starts at 1 and doubles as needed (1 → 2 → 4 → 8...)
  - Fixed `coroutine_event_callback_dispose()` to remove only specific callbacks instead of entire events
  - Breaking Change: Events now persist until all associated callbacks are removed
- Bailout Tests: Added 15 tests covering memory exhaustion and stack overflow scenarios in async operations
- Garbage Collection Support: Implemented comprehensive GC handlers for async objects
  - Added `async_coroutine_object_gc()` function to track all ZVALs in coroutine structures
  - Added `async_scope_object_gc()` function to track ZVALs in scope structures
  - Proper GC tracking for context HashTables (values and keys)
  - GC support for finally handlers, exception handlers, and function call parameters
  - GC tracking for waker events, internal context, and nested async structures
  - Prevents memory leaks in complex async applications with circular references
- Key Order Preservation: Added `preserveKeyOrder` parameter to async await functions
  - Added `preserve_key_order` parameter to the `async_await_futures()` API function
  - Added `preserve_key_order` field to the `async_await_context_t` structure
  - Enhanced `awaitAll()`, `awaitAllWithErrors()`, `awaitAnyOf()`, and `awaitAnyOfWithErrors()` functions with a `preserveKeyOrder` parameter (defaults to `true`)
  - Allows controlling whether the original key order is maintained in result arrays
- Memory management improvements for long-running async applications
- Proper cleanup of coroutine and scope objects during garbage collection cycles
- Async Iterator API:
- Fixed iterator state management to prevent memory leaks
- Fixed the `spawnWith()` function for interaction with the `ScopeProvider` and `SpawnStrategy` interfaces
- Build System Fixes:
  - Fixed macOS compilation error with a missing field initializer in the `uv_stdio_container_t` structure (libuv_reactor.c:1956)
  - Fixed Windows build script PowerShell syntax error (missing `shell: cmd` directive)
  - Fixed race condition issues in 10 async test files for deterministic test execution on all platforms
- Breaking Change: Function Renaming - Major API reorganization for better consistency:
  - `awaitAllFailFirst()` → `awaitAllOrFail()`
  - `awaitAllWithErrors()` → `awaitAll()`
  - `awaitAnyOfFailFirst()` → `awaitAnyOfOrFail()`
  - `awaitAnyOfWithErrors()` → `awaitAnyOf()`
- Breaking Change: `awaitAll()` Return Format - the new `awaitAll()` (formerly `awaitAllWithErrors()`) now returns a `[results, exceptions]` tuple:
  - First element `[0]` contains an array of successful results
  - Second element `[1]` contains an array of exceptions from failed coroutines
  - Migration: Update from `$results = awaitAll($coroutines)` to `[$results, $exceptions] = awaitAll($coroutines)`
- LibUV requirement increased to ≥ 1.44.0 - requires libuv version 1.44.0 or later to ensure proper `UV_RUN_ONCE` behavior and prevent busy-loop issues that could cause high CPU usage
- Async Iterator API:
  - Proper handling of `REWIND`/`NEXT` states in a concurrent environment. The iterator code now stops iteration in coroutines if the iterator is in the process of changing its position.
  - Added functionality for proper handling of exceptions from Zend iterators (`\Iterator` and generators). An exception that occurs in the iterator can now be handled by the iterator's owner.
- Async-aware destructor handling (PHP Core): Implemented `async_shutdown_destructors()` function to properly handle destructors that may suspend execution in async context
- CompositeException: New exception class for handling multiple exceptions that occur in finally handlers
  - Automatically collects multiple exceptions from `onFinally` handlers in both Scope and Coroutine
  - Provides an `addException()` method to add exceptions to the composite
  - Provides a `getExceptions()` method to retrieve all collected exceptions
  - Ensures all finally handlers are executed even when exceptions occur
- Complete implementation of the `onFinally()` method for the `Async\Scope` class
- Cross-thread trigger event API
- Priority support for the async iterator system
- Coroutine priority support in the TrueAsync API
- Iterator API integration: Added `zend_async_iterator_t` structure to the TrueAsync API with `run()` and `run_in_coroutine()` methods
- `disposeAfterTimeout()` method for Scope
- `awaitAfterCancellation()` method for Scope
- Complete Scope API implementation
- `Async\protect()` function
- Signal handler support (UNIX)
- Coroutine class with full lifecycle management
- `onFinally()` logic for the Coroutine class
- Enhanced ZEND_ASYNC_NEW_SCOPE API to create Scope without Zend object for internal use
- Refactored catch_or_cancel logic according to RFC scope behavior
- Refactored async_scheduler_coroutine_suspend to support non-zero exception context
- Optimized iterator module
- Iterator structure refactoring: Made `async_iterator_t` compatible with the `zend_async_iterator_t` API by adding function pointer methods
- Improved exception handling and cancellation logic
- Enhanced Context API behavior for Scope
- Multiple fixes for Scope dispose operations
- Fixed `scope_try_to_dispose` logic
- Spawn tests fixes
- Build issues for Scope
- Context logic with NULL scope
- Iterator bugs and coroutine issues
- Stream tests and DNS tests
- `ZEND_ASYNC_IS_OFF` issues
- Race condition in process waiting (libuv)
- Memory cleanup for reactor shutdown
- Future logic for coroutine class - coroutines can now behave like real Future objects
- Support for `ob_start` with coroutines
- Global main coroutine switch handlers API for context isolation
- Socket Listening API
- Support for `proc_open` in async context
- CURL async support and comprehensive tests
- Sleep functions (`usleep`, `sleep`) with async support
- Exec functions with async support
- Async DNS resolution support including IPv6
- Comprehensive DNS test suite (13 test files)
- Nanosecond support for async timer events
- Stream socket tests and functionality
- PHP_POLL2 implementation and tests
- `cancel_on_exit` option for `async_await_futures`
- Context API with HashTable optimization
- Coroutine Internal context support
- Refactored sockets extension to use new TrueAsync API
- Refactored timeout object implementation with proper memory separation
- Refactored Internal Context API
- Refactored Zend DNS API
- Moved async extension memory initialization to RINIT
- Changed allocator to `erealloc2`
- Improved circular buffer behavior during relocation and resizing
- Multiple memory leaks
- DNS API bugs and errors
- Stream tests fixes
- CURL function fixes
- Socket extension test fixes
- Exception propagation bugs
- Poll2 logic fixes
- Double free issues in `awaitAll`
- Coroutine cancellation and completion logic
- Scheduler graceful shutdown logic
- Initial TrueAsync extension architecture
- Basic coroutine support
- Event loop integration with libuv
- Core async/await functionality
- Basic suspend/resume operations
- Initial test framework
- Context switching mechanisms
- Basic scheduler implementation