Commit 76965e6
Add io_context_options for runtime scheduler and service tuning
Introduce io_context_options with seven configurable knobs: max_events_per_poll, inline_budget_initial/max, unassisted_budget, gqcs_timeout_ms, thread_pool_size, and single_threaded.

The single_threaded option disables all scheduler and descriptor mutex/condvar operations via conditionally_enabled_mutex/event wrappers, following Asio's model. Cross-thread post is UB when enabled; DNS and file I/O return operation_not_supported. Benchmarks show 2x throughput on the single-threaded post path with zero regression on multi-threaded paths.

Add lockless benchmark variants across all single-threaded suites: io_context, socket_throughput, socket_latency, http_server, timer, accept_churn, and fan_out. Add Asio lockless benchmarks for comparison (concurrency_hint=1).
1 parent f34a1ba commit 76965e6

26 files changed: +1887 −99 lines

doc/modules/ROOT/nav.adoc

Lines changed: 1 addition & 0 deletions

@@ -23,6 +23,7 @@
 ** xref:4.guide/4a.tcp-networking.adoc[TCP/IP Networking]
 ** xref:4.guide/4b.concurrent-programming.adoc[Concurrent Programming]
 ** xref:4.guide/4c.io-context.adoc[I/O Context]
+*** xref:4.guide/4c2.configuration.adoc[Configuration]
 ** xref:4.guide/4d.sockets.adoc[Sockets]
 ** xref:4.guide/4e.tcp-acceptor.adoc[Acceptors]
 ** xref:4.guide/4f.endpoints.adoc[Endpoints]
Lines changed: 157 additions & 0 deletions
@@ -0,0 +1,157 @@
= Configuration
:navtitle: Configuration

The `io_context_options` struct provides runtime tuning knobs for the I/O context and its backend scheduler. All defaults match the library's built-in values, so an unconfigured context behaves identically to previous releases.

[source,cpp]
----
#include <boost/corosio/io_context.hpp>

corosio::io_context_options opts;
opts.max_events_per_poll = 256;
opts.inline_budget_max = 32;

corosio::io_context ioc(opts);
----

Both `io_context` and `native_io_context` accept options:

[source,cpp]
----
#include <boost/corosio/native/native_io_context.hpp>

corosio::io_context_options opts;
opts.max_events_per_poll = 512;

corosio::native_io_context<corosio::epoll> ioc(opts);
----
== Available Options

[cols="1,1,1,3"]
|===
| Option | Default | Backends | Description

| `max_events_per_poll`
| 128
| epoll, kqueue
| Number of events fetched per reactor poll call. Larger values reduce syscall frequency under high load; smaller values improve fairness between connections.

| `inline_budget_initial`
| 2
| epoll, kqueue, select
| Starting inline completion budget per handler chain. After a posted handler executes, the reactor grants this many speculative inline completions before forcing a re-queue.

| `inline_budget_max`
| 16
| epoll, kqueue, select
| Hard ceiling on adaptive inline budget ramp-up. The budget doubles each cycle it is fully consumed, up to this limit.

| `unassisted_budget`
| 4
| epoll, kqueue, select
| Inline budget when no other thread is running the event loop. Prevents a single-threaded context from starving connections.

| `gqcs_timeout_ms`
| 500
| IOCP
| Maximum `GetQueuedCompletionStatus` blocking time in milliseconds. Lower values improve timer responsiveness at the cost of more syscalls.

| `thread_pool_size`
| 1
| POSIX (epoll, kqueue, select)
| Number of worker threads in the shared thread pool used for blocking file I/O and DNS resolution. Ignored on IOCP, where file I/O uses native overlapped I/O.

| `single_threaded`
| false
| all
| Disables all scheduler mutex and condition variable operations. Eliminates synchronization overhead when only one thread calls `run()`. See <<single-threaded-mode>> for restrictions.
|===

Options that do not apply to the active backend are silently ignored.
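Taken together, the table above maps onto a single options object. The values below are illustrative, not recommendations:

[source,cpp]
----
corosio::io_context_options opts;
opts.max_events_per_poll   = 256;   // epoll, kqueue
opts.inline_budget_initial = 2;     // epoll, kqueue, select
opts.inline_budget_max     = 16;    // epoll, kqueue, select
opts.unassisted_budget     = 4;     // epoll, kqueue, select
opts.gqcs_timeout_ms       = 250;   // IOCP
opts.thread_pool_size      = 2;     // POSIX
opts.single_threaded       = false; // all

corosio::io_context ioc(opts);
----

Because inapplicable options are silently ignored, the same configuration can be used unchanged across platforms.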
== Tuning Guidelines

=== Event Buffer Size (`max_events_per_poll`)

The event buffer controls how many I/O events are fetched in a single `epoll_wait()` or `kevent()` call.

* *High-throughput streaming* (few connections, high bandwidth): increase to 256-512 to reduce syscall overhead.
* *Many idle connections* (chat servers, WebSocket hubs): keep at 128 or lower for better fairness.
=== Inline Completion Budget

The inline budget controls how many I/O completions the reactor completes speculatively within a single handler chain before forcing a re-queue through the scheduler.

* *Streaming workloads* (file transfer, video): `inline_budget_max = 32` or higher reduces context switches.
* *Request-response workloads* (HTTP, RPC): keep at 16 to prevent one connection from monopolizing a thread.
* *Single-threaded contexts*: `unassisted_budget` caps the budget when only one thread is running the event loop, preserving fairness.
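Following this guidance, a streaming-oriented context might raise only the ceiling and leave the other budgets at their defaults (the value is illustrative):

[source,cpp]
----
corosio::io_context_options opts;
opts.inline_budget_max = 32; // allow longer inline completion chains for bulk transfers

corosio::io_context ioc(opts);
----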
=== IOCP Timeout (`gqcs_timeout_ms`)

On Windows, the IOCP scheduler periodically wakes to recheck timers. The default 500ms balances responsiveness with efficiency.

* *Sub-second timer precision*: reduce to 50-100ms.
* *Minimal syscall overhead*: increase to 1000ms or higher.
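For example, a Windows service that needs tighter timer precision can shorten the wake-up interval (the value is illustrative):

[source,cpp]
----
corosio::io_context_options opts;
opts.gqcs_timeout_ms = 100; // recheck timers at least every 100ms, at the cost of more syscalls

corosio::io_context ioc(opts);
----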
=== Thread Pool Size (`thread_pool_size`)

On POSIX platforms, file I/O (`stream_file`, `random_access_file`) and DNS resolution use a shared thread pool.

* *Concurrent file operations*: increase to match expected parallelism (e.g. 4 for four concurrent file reads).
* *No file I/O*: leave at 1 (the pool is created lazily).
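A context expected to issue several blocking file reads concurrently might be configured as follows (the value is illustrative):

[source,cpp]
----
corosio::io_context_options opts;
opts.thread_pool_size = 4; // up to four concurrent file/DNS operations on POSIX

corosio::io_context ioc(opts);
----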
[#single-threaded-mode]
=== Single-Threaded Mode (`single_threaded`)

Disables all mutex and condition variable operations inside the scheduler and per-socket descriptor states. This eliminates 15-25% of overhead on the post-and-dispatch hot path.

[source,cpp]
----
corosio::io_context_options opts;
opts.single_threaded = true;

corosio::io_context ioc(opts);
ioc.run(); // only one thread may call this
----

WARNING: Single-threaded mode imposes hard restrictions. Violating them is undefined behavior.

* Only **one thread** may call `run()` (or any run/poll variant).
* **Posting work from another thread** is undefined behavior.
* **DNS resolution** returns `operation_not_supported`.
* **POSIX file I/O** (`stream_file`, `random_access_file`) returns `operation_not_supported` on `open()`.
* **Signal sets** should not be shared across contexts.
* **Timer cancellation via `stop_token`** from another thread remains safe (the timer service retains its own mutex).
Lines changed: 77 additions & 0 deletions
@@ -0,0 +1,77 @@
//
// Copyright (c) 2026 Michael Vandeberg
//
// Distributed under the Boost Software License, Version 1.0. (See accompanying
// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
//
// Official repository: https://github.com/cppalliance/corosio
//

#ifndef BOOST_COROSIO_DETAIL_CONDITIONALLY_ENABLED_EVENT_HPP
#define BOOST_COROSIO_DETAIL_CONDITIONALLY_ENABLED_EVENT_HPP

#include <boost/corosio/detail/conditionally_enabled_mutex.hpp>

#include <chrono>
#include <condition_variable>

namespace boost::corosio::detail {

/* Condition variable wrapper that becomes a no-op when disabled.

   When enabled, notify/wait delegate to an underlying
   std::condition_variable. When disabled, all operations
   are no-ops. The wait paths are unreachable in
   single-threaded mode because the task sentinel prevents
   the empty-queue state in do_one().
*/
class conditionally_enabled_event
{
    std::condition_variable cond_;
    bool enabled_;

public:
    explicit conditionally_enabled_event(bool enabled = true) noexcept
        : enabled_(enabled)
    {
    }

    conditionally_enabled_event(conditionally_enabled_event const&) = delete;
    conditionally_enabled_event& operator=(conditionally_enabled_event const&) = delete;

    void set_enabled(bool v) noexcept
    {
        enabled_ = v;
    }

    void notify_one()
    {
        if (enabled_)
            cond_.notify_one();
    }

    void notify_all()
    {
        if (enabled_)
            cond_.notify_all();
    }

    void wait(conditionally_enabled_mutex::scoped_lock& lock)
    {
        if (enabled_)
            cond_.wait(lock.underlying());
    }

    template<class Rep, class Period>
    void wait_for(
        conditionally_enabled_mutex::scoped_lock& lock,
        std::chrono::duration<Rep, Period> const& d)
    {
        if (enabled_)
            cond_.wait_for(lock.underlying(), d);
    }
};

} // namespace boost::corosio::detail

#endif // BOOST_COROSIO_DETAIL_CONDITIONALLY_ENABLED_EVENT_HPP
Lines changed: 101 additions & 0 deletions
@@ -0,0 +1,101 @@
//
// Copyright (c) 2026 Michael Vandeberg
//
// Distributed under the Boost Software License, Version 1.0. (See accompanying
// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
//
// Official repository: https://github.com/cppalliance/corosio
//

#ifndef BOOST_COROSIO_DETAIL_CONDITIONALLY_ENABLED_MUTEX_HPP
#define BOOST_COROSIO_DETAIL_CONDITIONALLY_ENABLED_MUTEX_HPP

#include <mutex>

namespace boost::corosio::detail {

/* Mutex wrapper that becomes a no-op when disabled.

   When enabled (the default), lock/unlock delegate to an
   underlying std::mutex. When disabled, all operations are
   no-ops. The enabled flag is normally fixed after
   construction; set_enabled() allows adjusting it before
   concurrent use begins.

   scoped_lock wraps std::unique_lock<std::mutex> internally
   so that condvar wait paths (which require the real lock
   type) compile and work in multi-threaded mode.
*/
class conditionally_enabled_mutex
{
    std::mutex mutex_;
    bool enabled_;

public:
    explicit conditionally_enabled_mutex(bool enabled = true) noexcept
        : enabled_(enabled)
    {
    }

    conditionally_enabled_mutex(conditionally_enabled_mutex const&) = delete;
    conditionally_enabled_mutex& operator=(conditionally_enabled_mutex const&) = delete;

    bool enabled() const noexcept
    {
        return enabled_;
    }

    void set_enabled(bool v) noexcept
    {
        enabled_ = v;
    }

    // Lockable interface: allows std::lock_guard<conditionally_enabled_mutex>
    void lock() { if (enabled_) mutex_.lock(); }
    void unlock() { if (enabled_) mutex_.unlock(); }
    bool try_lock() { return !enabled_ || mutex_.try_lock(); }

    class scoped_lock
    {
        std::unique_lock<std::mutex> lock_;
        bool enabled_;

    public:
        explicit scoped_lock(conditionally_enabled_mutex& m)
            : lock_(m.mutex_, std::defer_lock)
            , enabled_(m.enabled_)
        {
            if (enabled_)
                lock_.lock();
        }

        scoped_lock(scoped_lock const&) = delete;
        scoped_lock& operator=(scoped_lock const&) = delete;

        void lock()
        {
            if (enabled_)
                lock_.lock();
        }

        void unlock()
        {
            if (enabled_)
                lock_.unlock();
        }

        bool owns_lock() const noexcept
        {
            return enabled_ && lock_.owns_lock();
        }

        // Access the underlying unique_lock for condvar wait().
        // Only called when locking is enabled.
        std::unique_lock<std::mutex>& underlying() noexcept
        {
            return lock_;
        }
    };
};

} // namespace boost::corosio::detail

#endif // BOOST_COROSIO_DETAIL_CONDITIONALLY_ENABLED_MUTEX_HPP
