@@ -92,13 +92,6 @@ with ease, as the picker will only ever render and decorate a small subset of th
   - [Using lazy.nvim](#using-lazynvim)
   - [Using vim-plug](#using-vim-plug)
 - [Configuration](#configuration)
-  - [Tracing](#tracing)
-  - [Workers](#workers)
-  - [Pool](#pool)
-  - [Lifecycle](#lifecycle)
-  - [Normalization](#normalization)
-  - [Pruning](#pruning)
-  - [Misc](#misc)
 - [Quickstart](#quickstart)
   - [User command](#user-command)
   - [Static table](#static-table)
@@ -137,7 +130,17 @@ with ease, as the picker will only ever render and decorate a small subset of th
   - [Context aware](#context-aware)
   - [Static tables](#static-tables)
   - [Basic ui.select](#basic-uiselect)
-  - [Advanced ui.select](#advanced-uiselect)
+  - [Advanced ui.select](#advanced-uiselect)
+  - [Tracing](#tracing)
+  - [Scheduler](#scheduler)
+  - [Registry](#registry)
+  - [Async](#async)
+  - [Workers](#workers)
+  - [Pool](#pool)
+  - [Lifecycle](#lifecycle)
+  - [Normalization](#normalization)
+  - [Pruning](#pruning)
+  - [Others](#others)
 - [Requirements](#requirements)
   - [Mandatory](#mandatory)
   - [Optional](#optional)
@@ -234,97 +237,6 @@ Configuration details:
 - `registry.prune_interval`: Interval in milliseconds for registry cleanup.
 - `registry.trace`: Optional debug hook `function(event, data)` for registry lifecycle tracing.
 
-## Tracing
-
-Several singleton modules expose a `trace` hook for lightweight diagnostics. Each hook receives `function(event, data)` where `event` is a
-string and `data` is a small table of relevant fields.
-
-Supported modules:
-
-- `pool.trace`: Allocation/reuse, normalization, return, and prune events.
-- `scheduler.trace`: Scheduler start, idle, and setup events.
-- `async.trace`: Async creation and completion events.
-- `registry.trace`: Registry registration, touch, removal, and prune events.
-
-## Workers
-
-Workers are internal coordination helpers that sit on top of the async scheduler. They are not user-configurable, but they define how
-different subsystems sequence work so ordering is deterministic even under heavy async load.
-
-There are two worker modes used in the codebase:
-
-- **Coalescer**: Keeps only the latest request while work is in flight. This is used for UI rendering in `Select`, where multiple list updates
-can arrive rapidly. The coalescer guarantees that only the most recent render request runs, so stale intermediate renders are dropped.
-
-- **Queue**: Runs every request in strict FIFO order. This is used for stream processing, where chunk order is semantically important. The
-queue executes tasks one after the other, and each task can yield without allowing another queued task to start early.
-
-Workers are cooperative: each task runs inside an `Async` coroutine and may call `Async.yield()`. Yielding allows other unrelated async tasks
-to run, but does not violate the ordering guarantees within a worker.
-
-## Pool
-
-The pool is a table-reuse subsystem that keeps allocation pressure low for hot paths (streaming, matching, selection). It is a single global
-pool created during `setup()` and reused by the rest of the runtime. The pool is not a cache of results and does not preserve semantic data;
-it only manages table instances and their sizes.
-
-This implementation is **bucketed and deterministic**: tables are normalized into size buckets on return, and future `obtain()` calls return
-the smallest idle table that fits the requested size. There is no adaptive priming or background allocation. Normalization keeps the pool
-stable and prevents it from filling with many distinct odd sizes.
-
-### Lifecycle
-
-**Obtain**
-
-`Pool.obtain(size)` returns a reusable table for scratch work.
-
-- If `size` is provided and greater than 0, the pool returns the **smallest idle table whose size is >= size**.
-- If no idle table is large enough, the pool falls back to the largest available idle table.
-- If the pool is empty, it allocates a fresh table sized to the normalized bucket for the requested size.
-- The returned table is tracked as “in use” and not eligible for pruning.
-
-**Return**
-
-`Pool._return(tbl)` releases a table back to the pool:
-
-- The table is normalized to a bucket size (power-of-two) within `[prime_min, prime_max]`.
-- If `max_tables == 0`, the table is discarded immediately (no pooling).
-- Otherwise, the table is inserted into the idle pool and becomes eligible for reuse.
-
-For code paths that create tables outside of `obtain()` but still need tracking, `Pool.attach(tbl)` and `Pool.detach(tbl)` mark tables as
-in-use without putting them into the idle pool.
-
-### Normalization
-
-Normalization keeps reuse stable across runs:
-
-- Sizes below `prime_min` are kept as-is.
-- Sizes in `[prime_min, prime_max]` are rounded up to the next power-of-two bucket.
-- Sizes above `prime_max` are clamped down to `prime_max` before pooling.
-
-This means a `125000`-element table becomes `131072`, and a `600000`-element table becomes `524288` (default `prime_max`). The pool prefers
-these normalized buckets to avoid fragmentation.
-
-### Pruning
-
-The pool runs a background prune timer:
-
-- Tables idle longer than `max_idle` are discarded.
-- If `max_tables` is set, the pool removes the least-recently-used idle tables until the limit is satisfied.
-- Tables marked “in use” are never pruned.
-
-### Misc
-
-Beyond `obtain()`/`_return()`, the pool exposes a few utility helpers:
-
-- `Pool.attach(tbl)`: mark a table as “in use” without placing it into the idle pool. Use this when you create a table outside
-`obtain()` but still want the pool to track it.
-- `Pool.detach(tbl)`: remove a table from pool tracking without returning it to the idle pool.
-- `Pool.is_pooled(tbl)`: returns `true` if the table is tracked by the pool (in use or idle).
-- `Pool.fill(tbl, value)`: in-place fill of a table with a single value.
-- `Pool.resize(tbl, size, default)`: resize a table to `size`, filling new slots with `default` when expanding.
-- `Pool.remove(tbl, value)`: remove all entries that match `value` from `tbl`.
-
 ## Quickstart
 
 The examples below each create a unique instance of a Picker; it is important for users to note that `it is recommended that you
@@ -1873,6 +1785,131 @@ vim.ui.select = function(items, opts, on_choice)
 )
 ```
 
+## Tracing
+
+Several singleton modules expose a `trace` hook for lightweight diagnostics. Each hook receives `function(event, data)` where `event` is a
+string and `data` is a small table of relevant fields.
+
+Supported modules:
+
+- `pool.trace`: Allocation/reuse, normalization, return, and prune events.
+- `scheduler.trace`: Scheduler start, idle, and setup events.
+- `worker.trace`: Worker cooperative job management events.
+- `async.trace`: Async creation and completion events.
+- `registry.trace`: Registry registration, touch, removal, and prune events.
+
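The hook contract above can be sketched in Python for illustration (the plugin itself is Lua, and `make_module`/`emit`/`obtain` here are hypothetical stand-ins, not plugin API): a module invokes its optional hook with a string event name and a small dict of fields, and skips the call entirely when no hook is configured.

```python
# Hypothetical sketch of the trace-hook contract: an optional
# function(event, data) that a module calls at interesting points.
def make_module(trace=None):
    """Build a stand-in module whose operations emit trace events."""
    def emit(event, data):
        if trace:               # the hook is optional; do nothing when unset
            trace(event, data)

    def obtain(size):
        emit("obtain", {"size": size})   # event name + small table of fields
        return [None] * size

    return obtain

log = []
obtain = make_module(trace=lambda event, data: log.append((event, data)))
obtain(4)
print(log)  # [('obtain', {'size': 4})]
```

Because the hook is a plain function, a user can route events to `print`, a log file, or a test collector without the module knowing the difference.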
+## Scheduler
+
+The scheduler is a **singleton cooperative runtime** that drives `Async` coroutines. It owns a single libuv check handle and an in-memory
+queue of runnable async jobs. Any module that needs background work schedules an `Async` instance through `Scheduler.add()`.
+
+Key properties:
+
+- **Budgeted execution**: The scheduler runs a batch of async steps up to a time budget (`async_budget`, in microseconds).
+- **Cooperative**: Each async job yields explicitly; the scheduler never preempts.
+- **Singleton**: There is only one scheduler, initialized during `setup()`.
+
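A minimal sketch of budgeted cooperative scheduling, in Python rather than Lua and with Python generators standing in for `Async` coroutines (names like `run_tick` and `job` are hypothetical): each tick steps runnable jobs round-robin until the queue drains or the time budget is spent.

```python
import time
from collections import deque

def run_tick(queue, budget_us=2000):
    """One scheduler tick: step jobs until the time budget is spent or the
    queue drains. Jobs are generators that yield to hand control back."""
    deadline = time.monotonic() + budget_us / 1_000_000
    while queue and time.monotonic() < deadline:
        job = queue.popleft()
        try:
            next(job)           # run until the job's next explicit yield
            queue.append(job)   # unfinished: requeue for a later step
        except StopIteration:
            pass                # job completed; drop it

def job(name, steps, out):
    for i in range(steps):
        out.append((name, i))
        yield                   # cooperative yield point

out = []
q = deque([job("a", 2, out), job("b", 1, out)])
while q:                        # keep ticking until all jobs finish
    run_tick(q)
print(out)  # [('a', 0), ('b', 0), ('a', 1)]
```

The budget bounds how long one tick can block the UI loop; jobs that overrun it simply resume on the next tick, which is the essence of "budgeted, never preemptive".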
+## Registry
+
+The registry is a **singleton picker manager** that tracks active pickers and prunes stale hidden ones. It is used to prevent long-lived
+pickers from piling up after they are closed or hidden.
+
+Key properties:
+
+- **Idle tracking**: Each picker receives a `last_used` timestamp.
+- **Pruning**: A periodic timer checks for idle pickers and closes them if they are not in use.
+- **Safety checks**: A picker is never pruned while it is open, running a stream, or running a match.
+
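The idle-tracking and safety-check rules above can be sketched as follows (Python for illustration only; the real registry is Lua, and this `Registry` class with its `busy` flag is a hypothetical simplification of "open, streaming, or matching"):

```python
import time

class Registry:
    """Sketch of idle-picker pruning: entries carry a last_used timestamp
    and are removed only when idle past max_idle AND not busy."""
    def __init__(self, max_idle):
        self.max_idle = max_idle
        self.entries = {}   # id -> {"last_used": float, "busy": bool}

    def touch(self, pid):
        """Refresh the idle timer whenever a picker is used."""
        self.entries[pid]["last_used"] = time.monotonic()

    def prune(self, now=None):
        now = time.monotonic() if now is None else now
        for pid in list(self.entries):
            e = self.entries[pid]
            # Safety check: busy pickers are never pruned, however old.
            if not e["busy"] and now - e["last_used"] > self.max_idle:
                del self.entries[pid]

reg = Registry(max_idle=10.0)
reg.entries = {
    "stale": {"last_used": 0.0,   "busy": False},  # idle too long: pruned
    "busy":  {"last_used": 0.0,   "busy": True},   # old but busy: kept
    "fresh": {"last_used": 100.0, "busy": False},  # recently used: kept
}
reg.prune(now=105.0)
print(sorted(reg.entries))  # ['busy', 'fresh']
```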
+## Async
+
+`Async` is the low-level coroutine wrapper that powers all cooperative work in the plugin. It is not tied to any picker or UI state; it is
+used by the scheduler, worker helpers, streaming, and matching.
+
+Key properties:
+
+- **Explicit yielding**: Work continues only when `Async.yield()` is called.
+- **Cancelable**: Async jobs can be canceled, which is used to prevent stale work from completing.
+- **Traceable**: `async.trace` hooks allow diagnosing lifecycle events for unit tests and performance analysis.
+
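A toy version of the explicit-yield and cancel semantics, using a Python generator in place of a Lua coroutine (this `Async` class is a sketch of the idea, not the plugin's actual implementation):

```python
class Async:
    """Toy cancelable coroutine wrapper: step() advances to the next
    explicit yield; cancel() stops stale work from ever completing."""
    def __init__(self, gen):
        self.gen = gen
        self.done = False
        self.canceled = False

    def cancel(self):
        self.canceled = True

    def step(self):
        if self.done or self.canceled:
            self.done = True
            return False        # nothing left to run
        try:
            next(self.gen)      # run until the next explicit yield
            return True
        except StopIteration:
            self.done = True
            return False

results = []
def work():
    results.append("first chunk")
    yield                       # cooperative yield point
    results.append("second chunk")

a = Async(work())
a.step()      # runs up to the yield
a.cancel()    # job is now stale: the second chunk must never run
a.step()
print(results)  # ['first chunk']
```

Cancellation checked at yield boundaries is what lets a new query invalidate an in-flight match without tearing down the runtime.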
+## Workers
+
+Workers are internal coordination helpers that sit on top of the async scheduler. They are not user-configurable, but they define how
+different subsystems sequence work so ordering is deterministic even under heavy async load.
+
+There are two worker modes used in the codebase:
+
+- **Coalesce**: Keeps only the latest request while work is in flight. This mode is used for UI rendering in `Select`, where multiple list updates
+can arrive rapidly. It guarantees that only the most recent render request runs, so stale intermediate renders are dropped.
+
+- **Queue**: Runs every request in strict FIFO order. This mode is used for stream processing, where chunk order is semantically important. The
+queue executes tasks one after the other, and each task can yield without allowing another queued task to start early.
+
+Workers are cooperative: each task runs inside an `Async` coroutine and may call `Async.yield()`. Yielding allows other unrelated async tasks
+to run, but does not violate the ordering guarantees within a worker.
+
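The difference between the two modes can be sketched in a few lines of Python (illustrative only; the plugin's workers are Lua, and the `Coalescer`/`Queue` classes here are hypothetical reductions of the idea to its ordering policy):

```python
from collections import deque

class Coalescer:
    """Keep only the latest pending request; intermediates are dropped."""
    def __init__(self):
        self.pending = None
    def push(self, req):
        self.pending = req          # overwrite: only the newest survives
    def drain(self, run):
        if self.pending is not None:
            run(self.pending)
            self.pending = None

class Queue:
    """Run every request in strict FIFO order."""
    def __init__(self):
        self.items = deque()
    def push(self, req):
        self.items.append(req)
    def drain(self, run):
        while self.items:
            run(self.items.popleft())

rendered, chunks = [], []
c, q = Coalescer(), Queue()
for i in range(3):
    c.push(f"render {i}")   # rapid UI updates: only the last one matters
    q.push(f"chunk {i}")    # stream chunks: every one, in order
c.drain(rendered.append)
q.drain(chunks.append)
print(rendered)  # ['render 2']
print(chunks)    # ['chunk 0', 'chunk 1', 'chunk 2']
```

The choice between the two is purely about whether intermediate requests carry meaning: renders are idempotent snapshots (coalesce), stream chunks are not (queue).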
+## Pool
+
+The pool is a table-reuse subsystem that keeps allocation pressure low for hot paths (streaming, matching, selection). It is a single global
+pool created during `setup()` and reused by the rest of the runtime. The pool is not a cache of results and does not preserve semantic data;
+it only manages table instances and their sizes.
+
+This implementation is **bucketed and deterministic**: tables are normalized into size buckets on return, and future `obtain()` calls return
+the smallest idle table that fits the requested size. There is no adaptive priming or background allocation. Normalization keeps the pool
+stable and prevents it from filling with many distinct odd sizes.
+
+### Lifecycle
+
+**Obtain**
+
+`Pool.obtain(size)` returns a reusable table for scratch work.
+
+- If `size` is provided and greater than 0, the pool returns the **smallest idle table whose size is >= size**.
+- If no idle table is large enough, the pool falls back to the largest available idle table.
+- If the pool is empty, it allocates a fresh table sized to the normalized bucket for the requested size.
+- The returned table is tracked as “in use” and not eligible for pruning.
+
+**Return**
+
+`Pool._return(tbl)` releases a table back to the pool:
+
+- The table is normalized to a bucket size (power-of-two) within `[prime_min, prime_max]`.
+- If `max_tables == 0`, the table is discarded immediately (no pooling).
+- Otherwise, the table is inserted into the idle pool and becomes eligible for reuse.
+
+For code paths that create tables outside of `obtain()` but still need tracking, `Pool.attach(tbl)` and `Pool.detach(tbl)` mark tables as
+in-use without putting them into the idle pool.
+
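The obtain/return selection rules above can be sketched in Python (for illustration; the real pool is Lua, lists stand in for Lua tables, and this `Pool` class is a hypothetical reduction that omits normalization and pruning):

```python
class Pool:
    """Sketch of obtain/return: obtain(size) picks the smallest idle table
    that fits, falls back to the largest idle one, else allocates fresh."""
    def __init__(self):
        self.idle = []      # idle tables (lists stand in for Lua tables)
        self.in_use = set() # ids of tables currently handed out

    def obtain(self, size):
        fits = [i for i, t in enumerate(self.idle) if len(t) >= size]
        if fits:
            # Smallest idle table whose size is >= size.
            tbl = self.idle.pop(min(fits, key=lambda i: len(self.idle[i])))
        elif self.idle:
            # Fallback: largest available idle table.
            tbl = self.idle.pop(
                max(range(len(self.idle)), key=lambda i: len(self.idle[i])))
        else:
            tbl = [None] * size     # empty pool: allocate fresh
        self.in_use.add(id(tbl))    # tracked as in use, not prunable
        return tbl

    def _return(self, tbl):
        self.in_use.discard(id(tbl))
        self.idle.append(tbl)       # idle again, eligible for reuse

pool = Pool()
a = pool.obtain(8)      # fresh allocation
pool._return(a)
b = pool.obtain(4)      # reuses the idle 8-slot table: smallest that fits
print(b is a, len(b))   # True 8
```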
+### Normalization
+
+Normalization keeps reuse stable across runs:
+
+- Sizes below `prime_min` are kept as-is.
+- Sizes in `[prime_min, prime_max]` are rounded up to the next power-of-two bucket.
+- Sizes above `prime_max` are clamped down to `prime_max` before pooling.
+
+This means a `125000`-element table becomes `131072`, and a `600000`-element table becomes `524288` (default `prime_max`). The pool prefers
+these normalized buckets to avoid fragmentation.
+
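The bucketing rules reduce to a small function. The Python sketch below reproduces the document's worked examples; the `prime_max` default of `524288` comes from the text, while `prime_min = 1024` is an assumed placeholder (and the doubling loop assumes `prime_min` is itself a power of two):

```python
def normalize(size, prime_min=1024, prime_max=524288):
    """Round a size to its pool bucket: below prime_min keep as-is,
    within [prime_min, prime_max] round up to the next power of two,
    above prime_max clamp down to prime_max."""
    if size < prime_min:
        return size                 # small sizes are kept as-is
    if size > prime_max:
        return prime_max            # oversized tables are clamped down
    bucket = prime_min
    while bucket < size:
        bucket *= 2                 # next power-of-two bucket
    return bucket

print(normalize(125000))   # 131072 (next power of two)
print(normalize(600000))   # 524288 (clamped to prime_max)
print(normalize(100))      # 100    (below prime_min, kept as-is)
```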
+### Pruning
+
+The pool runs a background prune timer:
+
+- Tables idle longer than `max_idle` are discarded.
+- If `max_tables` is set, the pool removes the least-recently-used idle tables until the limit is satisfied.
+- Tables marked “in use” are never pruned.
+
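One prune pass over the idle set can be sketched like this (Python illustration with hypothetical names; since tables marked "in use" never enter the idle set in this sketch, they are untouched by construction):

```python
def prune(idle, now, max_idle, max_tables):
    """One prune pass. `idle` is a list of (last_used, table) pairs.
    Drops tables idle past max_idle, then evicts least-recently-used
    entries until at most max_tables remain."""
    kept = [(ts, t) for ts, t in idle if now - ts <= max_idle]
    kept.sort(key=lambda e: e[0])        # oldest (LRU) first
    while max_tables is not None and len(kept) > max_tables:
        kept.pop(0)                      # evict the LRU entry
    return kept

idle = [(0.0, "old"), (9.0, "recent"), (8.0, "mid")]
pruned = prune(idle, now=10.0, max_idle=5.0, max_tables=1)
print([t for _, t in pruned])  # ['recent']
```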
+### Others
+
+Beyond `obtain()`/`_return()`, the pool exposes a few utility helpers:
+
+- `Pool.attach(tbl)`: mark a table as “in use” without placing it into the idle pool. Use this when you create a table outside
+`obtain()` but still want the pool to track it.
+- `Pool.detach(tbl)`: remove a table from pool tracking without returning it to the idle pool.
+- `Pool.is_pooled(tbl)`: returns `true` if the table is tracked by the pool (in use or idle).
+- `Pool.fill(tbl, value)`: in-place fill of a table with a single value.
+- `Pool.resize(tbl, size, default)`: resize a table to `size`, filling new slots with `default` when expanding.
+- `Pool.remove(tbl, value)`: remove all entries that match `value` from `tbl`.
+
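The semantics of the last three helpers are simple in-place list operations; a Python sketch (illustrative only, these free functions are not the plugin's Lua API):

```python
def fill(tbl, value):
    """In-place fill with a single value (cf. Pool.fill)."""
    for i in range(len(tbl)):
        tbl[i] = value

def resize(tbl, size, default=None):
    """Resize in place; new slots get `default` when expanding (cf. Pool.resize)."""
    if len(tbl) > size:
        del tbl[size:]                        # shrink: drop the tail
    else:
        tbl.extend([default] * (size - len(tbl)))

def remove(tbl, value):
    """Remove all entries equal to `value` (cf. Pool.remove)."""
    tbl[:] = [v for v in tbl if v != value]

s = [0, 0, 0]
fill(s, 7)
print(s)        # [7, 7, 7]

t = [1, 2, 1]
resize(t, 5, default=0)   # -> [1, 2, 1, 0, 0]
remove(t, 1)
print(t)        # [2, 0, 0]
```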
 ## Requirements
 
 ### Mandatory