This page explains how to configure SummonerClient.run(...) without requiring readers to study the implementation. Each setting is described by its purpose, behavior, default value, and the practical consequences of changing it.
A Summoner client is an async runtime that connects to a TCP relay, receives newline-delimited messages, runs your registered handlers (@receive, @send, @hook), and optionally uses flow-aware routing (parsed routes, a state tape, reactive senders, timed senders, and Event.data delivery). Most configuration exists to answer three operational questions:
- Where does the client connect (host, port), and how does it reconnect (hyper_parameters.reconnection)?
- What is recorded for observability (logger)?
- How does the client stay stable under load (hyper_parameters.receiver and hyper_parameters.sender)?
Note

Loading & precedence

- You can pass a configuration dictionary directly to client.run(...) as config_dict, or provide a JSON file path via config_path.
- config_dict takes precedence over config_path.

Host/Port precedence

- If host/port are present in the config, they override the arguments you pass to run(host, port).
- If they are omitted (or null), the client uses the run(host, port) arguments.
- Internally, a session connects to current_host = self.host or run_host and current_port = self.port or run_port.
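The precedence rule above can be sketched as a small helper (hypothetical name; the client performs the equivalent of self.host or run_host internally):

```python
def resolve_target(cfg_host, cfg_port, run_host="127.0.0.1", run_port=8888):
    """Mirror the client's fallback: config values win; run() arguments fill gaps.

    cfg_host/cfg_port are the (possibly null/None) values loaded from the
    config; run_host/run_port are the arguments passed to run(...), shown
    here with their documented defaults.
    """
    return (cfg_host or run_host, cfg_port or run_port)

# Config values override run() arguments:
assert resolve_target("10.0.0.5", 9000) == ("10.0.0.5", 9000)
# Omitted (None) config values fall back to the run() arguments:
assert resolve_target(None, None) == ("127.0.0.1", 8888)
```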
Purpose: The parameter host is the network address the client connects to.
- "127.0.0.1" connects to a server on the same machine (development).
- A LAN or public IP (or DNS name) connects to a server reachable over the network.
If host is not provided (or set to null), the client uses the host argument passed to SummonerClient.run(host=...). The default run() value is "127.0.0.1".
- Setting host in the config makes the target stable across runs, independent of run(host=...).
- Leaving it unset makes run(host=...) the single place to control the target.
Purpose: The parameter port is the TCP port number the client connects to.
If port is not provided (or set to null), the client uses the port argument passed to SummonerClient.run(port=...). The default run() value is 8888.
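As a minimal sketch, a config that pins both values (passed to run(...) as config_dict; field names follow the full example at the end of this page):

```python
# Minimal config_dict pinning the connection target; any omitted (or null/None)
# field falls back to the corresponding run(host=..., port=...) argument.
config_dict = {
    "host": "127.0.0.1",  # matches the default run() host
    "port": 8888,         # matches the default run() port
}
```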
Logging is how the client answers basic operational questions such as: "Did I connect?", "Why am I reconnecting?", "Which hook failed?", "Are my senders crashing?", and "Is backpressure building inside the client runtime?"
The client commonly logs:
- connection success and clean disconnects
- retry attempts and fallback transitions (primary target to default target)
- failures in hooks (receive/send hook exceptions are logged and the payload is preserved)
- sender worker crashes (with a configurable "consecutive crash" threshold)
- queue pressure warnings (for example, when the send queue is close to full)
- parsing warnings when flow is enabled and a route fails to parse at registration time
Purpose: The logger object configures the client logger behavior (level, handlers, formatting).
Config path: logger.
The client forwards this dictionary directly to:
configure_logger(self.logger, logger_cfg)

This means the accepted keys and their behavior are defined by summoner.logger.configure_logger, not by the client itself.
If omitted, {} is used. You still get a logger instance, but you get whatever default handler and formatting behavior your logger implementation chooses.
- Use a stable logger config for repeatable output across environments.
- Keep the log level at "INFO" for normal usage and "DEBUG" for short investigations. Under load, debug logging can be noisy.
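A logger section sketch. Only "log_level" appears in this page's own example; any other keys depend on what summoner.logger.configure_logger accepts, so treat this fragment as an assumption to verify against that function:

```python
# Logger section; the accepted keys and their behavior are defined by
# summoner.logger.configure_logger, not by the client itself.
config_dict = {
    "logger": {
        "log_level": "INFO",  # switch to "DEBUG" for short investigations
    },
}
```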
The hyper_parameters section contains runtime limits and timing constants that control:
- Reconnection behavior (how long to wait between attempts, when to fail over)
- Receive-side safety (line size limits, optional read timeouts)
- Send-side throughput and backpressure (worker concurrency, queue sizes, and failure thresholds)
All hyper parameters are optional. If omitted, defaults apply.
Purpose: The parameter retry_delay_seconds sets how long the client sleeps between connection attempts.
Config path: hyper_parameters.reconnection.retry_delay_seconds.
This is the backoff delay used after connection failures such as refused connections, disconnects, or OS-level connection errors.
If omitted, it defaults to 3.
- Lower values reconnect faster but can spam logs and create tight retry loops in failure scenarios.
- Higher values reduce noise and load on the target server during outages.
Purpose: The parameter primary_retry_limit sets how many retries are allowed before failing over to the default target.
Config path: hyper_parameters.reconnection.primary_retry_limit.
The client starts in a "Primary" stage. If it cannot maintain a session, it retries up to this limit.
If omitted, it defaults to 3.
- Higher values keep trying the primary target longer.
- Lower values fail over faster.
Purpose: The parameter default_host is the host used when the client falls back after primary retries are exhausted.
Config path: hyper_parameters.reconnection.default_host.
If not set, the client uses the top-level host from the config as the fallback host.
If omitted, it defaults to the value of top-level host (which may itself be null).
- Set default_host if you want an explicit fallback independent of the primary host.
- If both default_host and top-level host are null, the fallback stage will still use the run(host=...) argument.
Purpose: The parameter default_port is the port used when the client falls back after primary retries are exhausted.
If omitted, it defaults to the value of top-level port (which may itself be null).
Purpose: The parameter default_retry_limit sets how many retries are allowed in the fallback stage.
Config path: hyper_parameters.reconnection.default_retry_limit.
After the primary stage fails, the client enters a "Default" stage and retries up to this limit.
If omitted, it defaults to 2.
If set to null, the fallback stage retries indefinitely.
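Taken together, the reconnection settings bound how long the client sleeps in each stage. A rough sketch of the worst-case wait before failover (ignoring per-attempt connect time):

```python
def worst_case_stage_wait(retry_limit, retry_delay_seconds):
    """Approximate total sleep time spent in one reconnection stage.

    retry_limit=None models default_retry_limit: null (retry indefinitely).
    """
    if retry_limit is None:
        return float("inf")
    return retry_limit * retry_delay_seconds

# With the documented defaults (3 retries, 3 s apart), the primary stage
# sleeps roughly 9 s before failing over to the default target:
assert worst_case_stage_wait(3, 3) == 9
# default_retry_limit: null means the fallback stage never gives up:
assert worst_case_stage_wait(None, 3) == float("inf")
```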
Purpose: The parameter max_bytes_per_line caps the size of a single incoming line.
Config path: hyper_parameters.receiver.max_bytes_per_line.
Incoming messages are read using a line-based protocol. If a received line exceeds this size, it is dropped and the client continues.
If omitted, it defaults to 65536 (64 KiB).
- Increase this only if you expect legitimately large single-line payloads.
- Keeping it bounded reduces memory exposure to a single oversized message.
Purpose: The parameter read_timeout_seconds controls whether reads block indefinitely or time out and retry.
Config path: hyper_parameters.receiver.read_timeout_seconds.
- If null, the client blocks waiting for data.
- If set to a number, the client waits up to that duration for a line. If no data arrives, it sleeps briefly and retries.

If omitted, it defaults to null (wait indefinitely).

- null is simplest and avoids periodic wakeups.
- A small timeout can be useful if you want the receive loop to frequently re-check internal stop conditions, but it adds wakeups.
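A receiver section sketch with the documented defaults made explicit (in a Python config_dict, JSON null becomes None):

```python
# Receiver section; both values shown are the documented defaults.
config_dict = {
    "hyper_parameters": {
        "receiver": {
            "max_bytes_per_line": 65536,   # 64 KiB cap; oversized lines are dropped
            "read_timeout_seconds": None,  # null in JSON: block until data arrives
        },
    },
}
```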
Purpose: The parameter concurrency_limit sets how many sender workers run concurrently.
Config path: hyper_parameters.sender.concurrency_limit.
The client starts this many worker tasks per session. Workers pull sender jobs from an internal queue.
If omitted, it defaults to 50.
Must be an integer >= 1, otherwise the client raises:
ValueError("sender.concurrency_limit must be an integer ≥ 1")
- Higher values increase parallelism but can increase load on your runtime and on the server.
- Lower values reduce throughput but can make behavior easier to reason about.
Purpose: The parameter queue_maxsize sets the capacity of the internal send queue used for backpressure.
Config path: hyper_parameters.sender.queue_maxsize.
When the queue is full, the sender loop blocks while trying to enqueue new sender jobs. This is the main client-side backpressure mechanism.
If omitted, it defaults to concurrency_limit.
Must be an integer >= 1, otherwise:
ValueError("sender.queue_maxsize must be an integer ≥ 1")
- Larger queues absorb bursts but increase buffering and can hide sustained overload.
- Smaller queues apply backpressure earlier and keep memory usage tighter.
The client also warns if:
queue_maxsize < concurrency_limit
because producers will be throttled by the smaller queue capacity.
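The documented sender constraints can be summarized in a small validation sketch (a hypothetical helper mirroring the stated checks, not the client's real code):

```python
def check_sender_config(sender_cfg):
    """Sketch of the documented sender-config constraints."""
    limit = sender_cfg.get("concurrency_limit", 50)
    if not isinstance(limit, int) or limit < 1:
        raise ValueError("sender.concurrency_limit must be an integer ≥ 1")
    maxsize = sender_cfg.get("queue_maxsize", limit)  # defaults to concurrency_limit
    if not isinstance(maxsize, int) or maxsize < 1:
        raise ValueError("sender.queue_maxsize must be an integer ≥ 1")
    warnings = []
    if maxsize < limit:
        warnings.append("queue_maxsize < concurrency_limit: workers will be throttled")
    return limit, maxsize, warnings

# Omitting everything yields the documented defaults and no warnings:
assert check_sender_config({}) == (50, 50, [])
# An undersized queue triggers the documented warning:
_, _, warns = check_sender_config({"concurrency_limit": 50, "queue_maxsize": 10})
assert warns
```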
Purpose: The parameter batch_drain controls how often the client flushes writes to the socket.
Config path: hyper_parameters.sender.batch_drain.
- If true, the sender loop drains once per batch.
- If false, each worker drains after writing its payload(s).

If omitted, it defaults to true.

- true reduces the number of drain() calls and usually improves throughput.
- false can reduce latency per message at the cost of more frequent draining.
Purpose: The parameter event_bridge_maxsize sets the capacity of the internal bridge from receivers to reactive and timed senders when flow is enabled.
Config path: hyper_parameters.sender.event_bridge_maxsize.
When flow-aware routing is enabled, receiver batches can produce events that trigger reactive senders and arm timed sender runtimes. Those events are passed through an internal queue (the "event bridge"). If the bridge is full, the receiver side blocks, which slows input processing and applies backpressure.
If omitted, it defaults to 1000.
- Larger values absorb bursts of receiver events.
- Smaller values apply backpressure earlier if receivers are producing events faster than senders can consume them.
Purpose: The parameter max_worker_errors sets how many consecutive sender worker failures abort the session.
Config path: hyper_parameters.sender.max_worker_errors.
A sender worker can fail if a sender function raises unexpectedly. After this many consecutive failures in a worker, the client shuts down the sender loop for the session.
If omitted, it defaults to 3.
Must be an integer >= 1, otherwise:
ValueError("sender.max_worker_errors must be an integer ≥ 1")
- Lower values fail fast on buggy sender code.
- Higher values tolerate intermittent sender failures but can hide persistent errors.
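A complete sender section sketch with the documented defaults spelled out (Python config_dict form):

```python
# Sender section; values shown are the documented defaults, with
# queue_maxsize written out explicitly (it defaults to concurrency_limit).
config_dict = {
    "hyper_parameters": {
        "sender": {
            "concurrency_limit": 50,       # worker tasks started per session
            "queue_maxsize": 50,           # send-queue capacity (backpressure)
            "batch_drain": True,           # drain once per batch
            "event_bridge_maxsize": 1000,  # receivers -> reactive/timed senders
            "max_worker_errors": 3,        # consecutive failures before shutdown
        },
    },
}
```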
These recipes provide starting points. They do not replace measurement and load tests.
Goal: quick reconnects and readable logs.
- retry_delay_seconds: 1
- primary_retry_limit: 5
- sender.concurrency_limit: 10 to 20
- sender.batch_drain: false if you care about per-message latency while debugging
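The same recipe, written out as a config_dict sketch (values are starting points, not tuned numbers):

```python
# Debug-friendly recipe: quick reconnects, modest concurrency, per-message drains.
debug_config = {
    "hyper_parameters": {
        "reconnection": {
            "retry_delay_seconds": 1,   # reconnect quickly while iterating
            "primary_retry_limit": 5,   # keep trying the primary target longer
        },
        "sender": {
            "concurrency_limit": 10,    # easier to reason about than the default 50
            "batch_drain": False,       # lower per-message latency while debugging
        },
    },
}
```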
Goal: high send throughput without frequent flushes.
- sender.batch_drain: true
- Increase sender.concurrency_limit gradually
- Increase sender.queue_maxsize so short bursts do not stall the producer immediately
Goal: avoid blocking receiver-side event production.
- Increase sender.event_bridge_maxsize
- Keep sender.queue_maxsize comfortably above the typical number of senders emitted per cycle
Goal: stop quickly when handler code is unhealthy.
- Keep sender.max_worker_errors low (default is already conservative)
- Use hook logging at "INFO" or "DEBUG" during investigations to spot failing hooks or invalid payloads
{
  "host": "127.0.0.1",
  "port": 8888,
  "logger": {
    "log_level": "INFO"
  },
  "hyper_parameters": {
    "reconnection": {
      "retry_delay_seconds": 2,
      "primary_retry_limit": 3,
      "default_host": "127.0.0.1",
      "default_port": 8888,
      "default_retry_limit": 2
    },
    "receiver": {
      "max_bytes_per_line": 65536,
      "read_timeout_seconds": null
    },
    "sender": {
      "concurrency_limit": 50,
      "queue_maxsize": 200,
      "batch_drain": true,
      "event_bridge_maxsize": 1000,
      "max_worker_errors": 3
    }
  }
}