
Commit e35a109

fix(timing): matching and timing were overwhelming main loop

1 parent 9176fba

13 files changed: 366 additions & 129 deletions

README.md

Lines changed: 136 additions & 99 deletions
@@ -92,13 +92,6 @@ with ease, as the picker will only ever render and decorate a small subset of th
 - [Using lazy.nvim](#using-lazynvim)
 - [Using vim-plug](#using-vim-plug)
 - [Configuration](#configuration)
-- [Tracing](#tracing)
-- [Workers](#workers)
-- [Pool](#pool)
-- [Lifecycle](#lifecycle)
-- [Normalization](#normalization)
-- [Pruning](#pruning)
-- [Misc](#misc)
 - [Quickstart](#quickstart)
 - [User command](#user-command)
 - [Static table](#static-table)
@@ -137,7 +130,17 @@ with ease, as the picker will only ever render and decorate a small subset of th
 - [Context aware](#context-aware)
 - [Static tables](#static-tables)
 - [Basic ui.select](#basic-uiselect)
-- [Advanced ui.select](#advanced-uiselect)
+- [Advanced ui.select](#advanced-uiselect)
+- [Tracing](#tracing)
+- [Scheduler](#scheduler)
+- [Registry](#registry)
+- [Async](#async)
+- [Workers](#workers)
+- [Pool](#pool)
+- [Lifecycle](#lifecycle)
+- [Normalization](#normalization)
+- [Pruning](#pruning)
+- [Others](#others)
 - [Requirements](#requirements)
 - [Mandatory](#mandatory)
 - [Optional](#optional)
@@ -234,97 +237,6 @@ Configuration details:
 - `registry.prune_interval`: Interval in milliseconds for registry cleanup.
 - `registry.trace`: Optional debug hook `function(event, data)` for registry lifecycle tracing.
 
-## Tracing
-
-Several singleton modules expose a `trace` hook for lightweight diagnostics. Each hook receives `function(event, data)` where `event` is a
-string and `data` is a small table of relevant fields.
-
-Supported modules:
-
-- `pool.trace`: Allocation/reuse, normalization, return, and prune events.
-- `scheduler.trace`: Scheduler start, idle, and setup events.
-- `async.trace`: Async creation and completion events.
-- `registry.trace`: Registry registration, touch, removal, and prune events.
-
-## Workers
-
-Workers are internal coordination helpers that sit on top of the async scheduler. They are not user-configurable, but they define how
-different subsystems sequence work so ordering is deterministic even under heavy async load.
-
-There are two worker modes used in the codebase:
-
-- **Coalescer**: Keeps only the latest request while work is in flight. This is used for UI rendering in `Select`, where multiple list updates
-  can arrive rapidly. The coalescer guarantees that only the most recent render request runs, so stale intermediate renders are dropped.
-
-- **Queue**: Runs every request in strict FIFO order. This is used for stream processing, where chunk order is semantically important. The
-  queue executes tasks one after the other, and each task can yield without allowing another queued task to start early.
-
-Workers are cooperative: each task runs inside an `Async` coroutine and may call `Async.yield()`. Yielding allows other unrelated async tasks
-to run, but does not violate the ordering guarantees within a worker.
-
-## Pool
-
-The pool is a table-reuse subsystem that keeps allocation pressure low for hot paths (streaming, matching, selection). It is a single global
-pool created during `setup()` and reused by the rest of the runtime. The pool is not a cache of results and does not preserve semantic data;
-it only manages table instances and their sizes.
-
-This implementation is **bucketed and deterministic**: tables are normalized into size buckets on return, and future `obtain()` calls return
-the smallest idle table that fits the requested size. There is no adaptive priming or background allocation. Normalization keeps the pool
-stable and prevents it from filling with many distinct odd sizes.
-
-### Lifecycle
-
-**Obtain**
-
-`Pool.obtain(size)` returns a reusable table for scratch work.
-
-- If `size` is provided and greater than 0, the pool returns the **smallest idle table whose size is >= size**.
-- If no idle table is large enough, the pool falls back to the largest available idle table.
-- If the pool is empty, it allocates a fresh table sized to the normalized bucket for the requested size.
-- The returned table is tracked as “in use” and not eligible for pruning.
-
-**Return**
-
-`Pool._return(tbl)` releases a table back to the pool:
-
-- The table is normalized to a bucket size (power-of-two) within `[prime_min, prime_max]`.
-- If `max_tables == 0`, the table is discarded immediately (no pooling).
-- Otherwise, the table is inserted into the idle pool and becomes eligible for reuse.
-
-For code paths that create tables outside of `obtain()` but still need tracking, `Pool.attach(tbl)` and `Pool.detach(tbl)` mark tables as
-in-use without putting them into the idle pool.
-
-### Normalization
-
-Normalization keeps reuse stable across runs:
-
-- Sizes below `prime_min` are kept as-is.
-- Sizes in `[prime_min, prime_max]` are rounded up to the next power-of-two bucket.
-- Sizes above `prime_max` are clamped down to `prime_max` before pooling.
-
-This means a `125000`-element table becomes `131072`, and a `600000`-element table becomes `524288` (default `prime_max`). The pool prefers
-these normalized buckets to avoid fragmentation.
-
-### Pruning
-
-The pool runs a background prune timer:
-
-- Tables idle longer than `max_idle` are discarded.
-- If `max_tables` is set, the pool removes the least-recently-used idle tables until the limit is satisfied.
-- Tables marked “in use” are never pruned.
-
-### Misc
-
-Beyond `obtain()`/`_return()`, the pool exposes a few utility helpers:
-
-- `Pool.attach(tbl)`: mark a table as “in use” without placing it into the idle pool. Use this when you create a table outside
-  `obtain()` but still want the pool to track it.
-- `Pool.detach(tbl)`: remove a table from pool tracking without returning it to the idle pool.
-- `Pool.is_pooled(tbl)`: returns `true` if the table is tracked by the pool (in use or idle).
-- `Pool.fill(tbl, value)`: in-place fill of a table with a single value.
-- `Pool.resize(tbl, size, default)`: resize a table to `size`, filling new slots with `default` when expanding.
-- `Pool.remove(tbl, value)`: remove all entries that match `value` from `tbl`.
-
 ## Quickstart
 
 The examples below create unique instance of a Picker, it is important for users to take note of the fact `it is recommended that you
@@ -1873,6 +1785,131 @@ vim.ui.select = function(items, opts, on_choice)
 )
 ```
 
+## Tracing
+
+Several singleton modules expose a `trace` hook for lightweight diagnostics. Each hook receives `function(event, data)` where `event` is a
+string and `data` is a small table of relevant fields.
+
+Supported modules:
+
+- `pool.trace`: Allocation/reuse, normalization, return, and prune events.
+- `scheduler.trace`: Scheduler start, idle, and setup events.
+- `worker.trace`: Worker cooperative job management events.
+- `async.trace`: Async creation and completion events.
+- `registry.trace`: Registry registration, touch, removal, and prune events.
+
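The hook shape described here is a plain observer callback. A minimal sketch of the pattern (in Python for illustration; the plugin itself is Lua, and `Pool`, `obtain`, and the event names below are hypothetical stand-ins, not the plugin's API):

```python
class Pool:
    """A module exposing an optional trace hook, as described above."""

    def __init__(self):
        self.trace = None  # user-settable: function(event, data)

    def _emit(self, event, data):
        # tracing is best-effort and must never break the host module
        if self.trace:
            try:
                self.trace(event, data)
            except Exception:
                pass

    def obtain(self, size):
        self._emit("obtain", {"size": size})
        return [None] * size

pool = Pool()
events = []
pool.trace = lambda event, data: events.append((event, data))
pool.obtain(4)
# events == [("obtain", {"size": 4})]
```

The try/except around the hook reflects the "lightweight diagnostics" intent: a buggy trace function should not take down the hot path it observes.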
+## Scheduler
+
+The scheduler is a **singleton cooperative runtime** that drives `Async` coroutines. It owns a single libuv check handle and an in-memory
+queue of runnable async jobs. Any module that needs background work schedules an `Async` instance through `Scheduler.add()`.
+
+Key properties:
+
+- **Budgeted execution**: The scheduler runs a batch of async steps up to a time budget (`async_budget`, in microseconds).
+- **Cooperative**: Each async job yields explicitly; the scheduler never preempts.
+- **Singleton**: There is only one scheduler, initialized during `setup()`.
+
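Budgeted cooperative execution can be sketched with generator tasks standing in for `Async` coroutines (Python, illustrative only; `async_budget` is the parameter name from the text, everything else is hypothetical):

```python
import time
from collections import deque

class Scheduler:
    """Sketch: run queued cooperative jobs in batches bounded by a time budget."""

    def __init__(self, async_budget=1000):
        self.budget_s = async_budget / 1_000_000  # microseconds -> seconds
        self.queue = deque()

    def add(self, job):
        self.queue.append(job)  # job is a generator; each yield is one step

    def run_batch(self):
        # run cooperative steps until the batch's time budget is spent
        deadline = time.monotonic() + self.budget_s
        while self.queue and time.monotonic() < deadline:
            job = self.queue.popleft()
            try:
                next(job)               # one step; the job yields, it is never preempted
                self.queue.append(job)  # re-queue until finished
            except StopIteration:
                pass                    # job finished, drop it

def count_to(results, n):
    for i in range(n):
        results.append(i)
        yield  # explicit cooperative yield point

sched = Scheduler()
out = []
sched.add(count_to(out, 3))
while sched.queue:
    sched.run_batch()
# out == [0, 1, 2]
```

In the real plugin a libuv check handle re-enters `run_batch` on each loop iteration; here the `while` loop plays that role.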
+## Registry
+
+The registry is a **singleton picker manager** that tracks active pickers and prunes stale hidden ones. It is used to prevent long-lived
+pickers from piling up after they are closed or hidden.
+
+Key properties:
+
+- **Idle tracking**: Each picker receives a `last_used` timestamp.
+- **Pruning**: A periodic timer checks for idle pickers and closes them if they are not in use.
+- **Safety checks**: A picker is never pruned while it is open, running a stream, or running a match.
+
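The idle-tracking and safety-check rules reduce to a timestamp map plus a busy predicate. A Python sketch (all names hypothetical; the real registry closes pickers rather than merely forgetting them):

```python
import time

class Picker:
    """Stand-in for a picker with the states the registry checks."""
    def __init__(self):
        self.open = False
        self.streaming = False
        self.matching = False

class Registry:
    """Sketch of idle tracking and safe pruning as described above."""

    def __init__(self, max_idle=60.0):
        self.max_idle = max_idle
        self.last_used = {}  # picker -> timestamp

    def touch(self, picker, now=None):
        self.last_used[picker] = now if now is not None else time.monotonic()

    def prune(self, now=None):
        now = now if now is not None else time.monotonic()
        for picker, t in list(self.last_used.items()):
            # never prune a picker that is open, streaming, or matching
            busy = picker.open or picker.streaming or picker.matching
            if not busy and now - t > self.max_idle:
                del self.last_used[picker]

reg = Registry(max_idle=60.0)
idle, active = Picker(), Picker()
active.open = True
reg.touch(idle, now=0.0)
reg.touch(active, now=0.0)
reg.prune(now=120.0)
# idle is gone; active survives because it is open
```

Passing `now` explicitly makes the prune decision deterministic, which is also handy for unit-testing this kind of timer-driven cleanup.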
+## Async
+
+`Async` is the low-level coroutine wrapper that powers all cooperative work in the plugin. It is not tied to any picker or UI state; it is
+used by the scheduler, worker helpers, streaming, and matching.
+
+Key properties:
+
+- **Explicit yielding**: Work continues only when `Async.yield()` is called.
+- **Cancelable**: Async jobs can be canceled, which is used to prevent stale work from completing.
+- **Traceable**: `async.trace` hooks allow diagnosing lifecycle events for unit tests and performance analysis.
+
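The explicit-yield and cancel semantics can be sketched as a thin wrapper over a generator (Python stand-in for a Lua coroutine; names are illustrative):

```python
class Async:
    """Sketch: cancelable wrapper around a generator-based task."""

    def __init__(self, gen):
        self.gen = gen
        self.done = False
        self.canceled = False

    def step(self):
        # advance to the task's next explicit yield; canceled jobs never resume
        if self.done or self.canceled:
            return False
        try:
            next(self.gen)
            return True
        except StopIteration:
            self.done = True
            return False

    def cancel(self):
        self.canceled = True  # stale work is prevented from completing

out = []
def work():
    for i in range(3):
        out.append(i)
        yield

job = Async(work())
job.step()    # out == [0]
job.cancel()
job.step()    # no-op: the canceled job never completes
# out == [0]
```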
+## Workers
+
+Workers are internal coordination helpers that sit on top of the async scheduler. They are not user-configurable, but they define how
+different subsystems sequence work so ordering is deterministic even under heavy async load.
+
+There are two worker modes used in the codebase:
+
+- **Coalesce**: Keeps only the latest request while work is in flight. This is used for UI rendering in `Select`, where multiple list updates
+  can arrive rapidly. This mode guarantees that only the most recent render request runs, so stale intermediate renders are dropped.
+
+- **Queue**: Runs every request in strict FIFO order. This is used for stream processing, where chunk order is semantically important. The
+  queue executes tasks one after the other, and each task can yield without allowing another queued task to start early.
+
+Workers are cooperative: each task runs inside an `Async` coroutine and may call `Async.yield()`. Yielding allows other unrelated async tasks
+to run, but does not violate the ordering guarantees within a worker.
+
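The two worker modes can be contrasted with a small Python sketch, with generator tasks standing in for `Async` coroutines (all names illustrative):

```python
from collections import deque

class QueueWorker:
    """FIFO mode: every request runs, strictly in submission order."""

    def __init__(self):
        self.tasks = deque()

    def submit(self, task):
        self.tasks.append(task)

    def step(self):
        # only the head task advances; later tasks never start early
        if self.tasks:
            try:
                next(self.tasks[0])
            except StopIteration:
                self.tasks.popleft()

class CoalesceWorker:
    """Coalesce mode: only the newest pending request survives."""

    def __init__(self, make_task):
        self.make_task = make_task  # request -> generator task
        self.current = None
        self.latest = None

    def submit(self, request):
        self.latest = request  # overwrites any older pending request

    def step(self):
        if self.current is None and self.latest is not None:
            self.current = self.make_task(self.latest)
            self.latest = None
        if self.current is not None:
            try:
                next(self.current)
            except StopIteration:
                self.current = None

log = []
def render(tag):
    log.append("render:" + tag)
    yield

ui = CoalesceWorker(render)
ui.submit("v1")
ui.submit("v2")   # v1 is superseded before any work ran
for _ in range(4):
    ui.step()
# log == ["render:v2"]: the stale v1 request was dropped
```

The same `render` task submitted to a `QueueWorker` would run once per request, in order, which is the behavior stream processing relies on.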
+## Pool
+
+The pool is a table-reuse subsystem that keeps allocation pressure low for hot paths (streaming, matching, selection). It is a single global
+pool created during `setup()` and reused by the rest of the runtime. The pool is not a cache of results and does not preserve semantic data;
+it only manages table instances and their sizes.
+
+This implementation is **bucketed and deterministic**: tables are normalized into size buckets on return, and future `obtain()` calls return
+the smallest idle table that fits the requested size. There is no adaptive priming or background allocation. Normalization keeps the pool
+stable and prevents it from filling with many distinct odd sizes.
+
+### Lifecycle
+
+**Obtain**
+
+`Pool.obtain(size)` returns a reusable table for scratch work.
+
+- If `size` is provided and greater than 0, the pool returns the **smallest idle table whose size is >= size**.
+- If no idle table is large enough, the pool falls back to the largest available idle table.
+- If the pool is empty, it allocates a fresh table sized to the normalized bucket for the requested size.
+- The returned table is tracked as “in use” and not eligible for pruning.
+
+**Return**
+
+`Pool._return(tbl)` releases a table back to the pool:
+
+- The table is normalized to a bucket size (power-of-two) within `[prime_min, prime_max]`.
+- If `max_tables == 0`, the table is discarded immediately (no pooling).
+- Otherwise, the table is inserted into the idle pool and becomes eligible for reuse.
+
+For code paths that create tables outside of `obtain()` but still need tracking, `Pool.attach(tbl)` and `Pool.detach(tbl)` mark tables as
+in-use without putting them into the idle pool.
+
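The obtain/return contract above can be sketched with plain sizes standing in for tables (Python, illustrative only; the real pool hands out actual Lua tables and normalizes sizes on return):

```python
class Pool:
    """Sketch of the obtain()/return selection rules described above."""

    def __init__(self):
        self.idle = []  # sizes of idle tables; the real pool stores the tables

    def obtain(self, size):
        fits = [s for s in self.idle if s >= size]
        if fits:
            pick = min(fits)       # smallest idle table that fits
        elif self.idle:
            pick = max(self.idle)  # fallback: largest available idle table
        else:
            return size            # empty pool: fresh allocation
        self.idle.remove(pick)
        return pick

    def give_back(self, size):
        # the real pool normalizes `size` to a power-of-two bucket first
        self.idle.append(size)

pool = Pool()
for s in (128, 512, 2048):
    pool.give_back(s)
pool.obtain(300)   # -> 512  (smallest idle table >= 300)
pool.obtain(4096)  # -> 2048 (nothing fits, largest available)
```

Preferring the smallest fitting table keeps large buckets free for large requests; the largest-available fallback trades a reallocation for reuse when nothing fits.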
+### Normalization
+
+Normalization keeps reuse stable across runs:
+
+- Sizes below `prime_min` are kept as-is.
+- Sizes in `[prime_min, prime_max]` are rounded up to the next power-of-two bucket.
+- Sizes above `prime_max` are clamped down to `prime_max` before pooling.
+
+This means a `125000`-element table becomes `131072`, and a `600000`-element table becomes `524288` (default `prime_max`). The pool prefers
+these normalized buckets to avoid fragmentation.
+
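The bucketing rules reduce to a few lines. The sketch below uses the stated default `prime_max = 524288`; the `prime_min` value is an assumption for illustration:

```python
PRIME_MIN = 1024    # assumed default, for illustration only
PRIME_MAX = 524288  # default prime_max per the text

def normalize(size):
    """Round a table size to its pool bucket, per the rules above."""
    if size < PRIME_MIN:
        return size       # small sizes kept as-is
    if size > PRIME_MAX:
        return PRIME_MAX  # clamped down before pooling
    bucket = PRIME_MIN
    while bucket < size:
        bucket *= 2       # round up to the next power-of-two bucket
    return bucket

normalize(125000)  # -> 131072
normalize(600000)  # -> 524288
```

Both example values match the ones quoted in the text: 131072 is the first power-of-two bucket at or above 125000, and 600000 exceeds `prime_max` so it is clamped.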
+### Pruning
+
+The pool runs a background prune timer:
+
+- Tables idle longer than `max_idle` are discarded.
+- If `max_tables` is set, the pool removes the least-recently-used idle tables until the limit is satisfied.
+- Tables marked “in use” are never pruned.
+
+### Others
+
+Beyond `obtain()`/`_return()`, the pool exposes a few utility helpers:
+
+- `Pool.attach(tbl)`: mark a table as “in use” without placing it into the idle pool. Use this when you create a table outside
+  `obtain()` but still want the pool to track it.
+- `Pool.detach(tbl)`: remove a table from pool tracking without returning it to the idle pool.
+- `Pool.is_pooled(tbl)`: returns `true` if the table is tracked by the pool (in use or idle).
+- `Pool.fill(tbl, value)`: in-place fill of a table with a single value.
+- `Pool.resize(tbl, size, default)`: resize a table to `size`, filling new slots with `default` when expanding.
+- `Pool.remove(tbl, value)`: remove all entries that match `value` from `tbl`.
+
 ## Requirements
 
 ### Mandatory

lua/fuzzy/init.lua

Lines changed: 15 additions & 0 deletions
@@ -1,11 +1,24 @@
 local Pool = require("fuzzy.pool")
 local Registry = require("fuzzy.registry")
 local Scheduler = require("fuzzy.scheduler")
+local group = vim.api.nvim_create_augroup("FUZZYMATCH", { clear = true })
 
 local M = {
   config = {},
 }
 
+function M.teardown()
+  if Scheduler and Scheduler.close then
+    Scheduler.close()
+  end
+  if Registry and Registry.close then
+    Registry.close()
+  end
+  if Pool and Pool.close then
+    Pool.close()
+  end
+end
+
 function M.setup(opts)
   M.config = vim.tbl_deep_extend("keep", opts or {}, {
     general = {
@@ -63,6 +76,8 @@ function M.setup(opts)
 
   vim.api.nvim_set_hl(0, "SelectLineHighlight", { link = "Normal", default = false })
   vim.api.nvim_set_hl(0, "SelectDecoratorDefault", { link = "Normal", default = false })
+
+  vim.api.nvim_create_autocmd("VimLeavePre", { group = group, callback = M.teardown })
 end
 
 return M

lua/fuzzy/match.lua

Lines changed: 22 additions & 18 deletions
@@ -103,7 +103,8 @@ function Match:_stop_processing()
   -- kill the timer if it is still active, this will stop any further processing, we do not wait for the current processing to finish,
   -- since it is expected that the callback will handle nil results as a signal that processing was aborted
   if vim.loop.is_closing(self._state.timer) == false then
-    pcall(vim.loop.stop, self._state.timer)
+    pcall(self._state.timer.stop, self._state.timer)
+    pcall(self._state.timer.close, self._state.timer)
   end
   self._state.timer = nil
 end
@@ -179,9 +180,11 @@ function Match:_clean_context()
   self.transform = nil
 end
 
-function Match:_bind_method(method)
+function Match:_bind_guarded(method, token)
   return function(...)
-    return method(self, ...)
+    if self._state.token == token then
+      return method(self, ...)
+    end
   end
 end
 
@@ -334,15 +337,12 @@ function Match:match(list, pattern, callback, transform)
     end
   end
 
-  -- ensure we drop the old results first, these will be re-referenced only when the matching process finishes after this matching process
-  -- that is currently being started now
-  if self.results then
-    self.results = nil
-  end
-
-  -- init core match context
+  -- version the current run with an incrementing token; this ensures older matching never operates on new
+  -- state or newer runs, and that stale ticks get terminated safely
   self._state.token = (self._state.token or 0) + 1
   local token = self._state.token
+
+  -- init core match context
   self.list = assert(list)
   self.pattern = assert(pattern)
   self.callback = assert(callback)
@@ -370,15 +370,22 @@ function Match:match(list, pattern, callback, transform)
   -- ensure offset restored
   self._state.offset = 0
 
+  if self.results then
+    -- ensure we drop the old results first, these will be re-referenced only when the matching process finishes after this matching process
+    -- that is currently being started now
+    self.results = nil
+  end
+
   if not self._state.accum then
-    -- prepare accumulator for results
+    -- the accumulator is an array of 3 sub-arrays: the first holds the matching elements, the second the matching
+    -- positions, and the third the scores; the 3 sub-arrays are always guaranteed to be of the same size
     self._state.accum = {}
   end
 
   if not self._state.chunks then
     -- chunks are reused to avoid frequent allocations, they represent the part of the whole source list currently being processed
     -- for matches, the chunks are first filled from the source list and then used in the matchfuzzy call
-    local size = self._options.step
+    local size = self._options.step or 0
     self._state.chunks = Pool.obtain(size)
   end
 
@@ -400,12 +407,9 @@ function Match:match(list, pattern, callback, transform)
   self._state.timer = vim.loop.new_timer()
   self._state.timer:start(0,
     self._options.timer,
-    vim.schedule_wrap(function()
-      if self._state.token ~= token then
-        return
-      end
-      Match._match_worker(self)
-    end)
+    vim.schedule_wrap(self:_bind_guarded(
+      Match._match_worker, token
+    ))
   )
 
   -- run one cycle immediately now
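The `_bind_guarded` change in this file is the heart of the fix: each match run gets an incrementing token, and timer callbacks are wrapped so that ticks belonging to a superseded run become no-ops. A Python sketch of the same pattern (names mirror the Lua but are illustrative):

```python
class Match:
    """Sketch of token-guarded callbacks: only the newest run's ticks execute."""

    def __init__(self):
        self.token = 0

    def bind_guarded(self, method, token):
        def guarded(*args):
            if self.token == token:  # stale callbacks are silently dropped
                return method(self, *args)
        return guarded

    def start(self, worker):
        self.token += 1  # version this run
        return self.bind_guarded(worker, self.token)

runs = []
m = Match()
tick_a = m.start(lambda self: runs.append("a"))
tick_b = m.start(lambda self: runs.append("b"))  # supersedes run "a"
tick_a()  # stale: the token has advanced, so this is a no-op
tick_b()  # current run still executes
# runs == ["b"]
```

Compared with checking the token inside an inline closure (the deleted code), binding the guard once keeps the timer callback allocation-free per tick and makes the guard reusable across callbacks.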

0 commit comments
