
Commit 2bbcd52

Fix/streaming timestamps (#105)
* fix: Fix streaming output timestamps showing same second for all messages

  Pass the send timestamp to MessageLatencyRecord instead of capturing it when the record is created. This makes the timestamp represent when the message was sent, so gaps between timestamps now match the actual latency.

  Changes:
  - MessageLatencyRecord::new() and new_combined() now require a send_timestamp_ns parameter instead of capturing SystemTime::now() internally
  - Added MessageLatencyRecord::current_timestamp_ns() helper for call sites that need to capture the current wall-clock time
  - Updated all streaming record creation sites in benchmark.rs and benchmark_blocking.rs to capture and pass send timestamps

  Cherry-picked from the container-to-container-ipc branch (6cb8f9b). Host-container-specific changes excluded (host_container.rs not present).

  AI-assisted-by: Claude claude-4.6-opus-high-thinking (Anthropic)
  Made-with: Cursor

* test/docs: Add streaming timestamp tests and sequential test execution documentation

  - Add 3 new unit tests for MessageLatencyRecord timestamp handling: test_new_uses_provided_send_timestamp, test_new_combined_uses_provided_send_timestamp, test_current_timestamp_ns_returns_recent_value
  - Add 2 end-to-end streaming tests in benchmark.rs: test_one_way_streaming_captures_send_timestamp, test_round_trip_streaming_captures_send_timestamp
  - Document sequential one-way/round-trip test execution in README.md (new "Test Execution Order" section), CONFIG.md, the dashboard README, and the run() doc comments in benchmark.rs and benchmark_blocking.rs
  - All 332 tests pass (331 passed, 1 ignored), zero clippy warnings

  AI-assisted-by: Claude claude-4.6-opus-high-thinking
  Made-with: Cursor

* fix: Capture streaming timestamps at message-send time for all code paths

  The previous commit (68afe07) added a send_timestamp_ns parameter to MessageLatencyRecord, but only the blocking round-trip path was actually capturing timestamps at send time. The async round-trip, async combined, and all one-way paths were still using current_timestamp_ns() at record-creation time (post-test), causing all timestamps within a run to cluster into the same second.

  Changes:
  - Async round-trip: capture the wall-clock timestamp inside the spawned client future before each send(); return Vec<(Duration, u64)>
  - Async combined: same pattern for the one-way latency vector
  - One-way (async + blocking): the server now writes "wall_send_ns,latency_ns" per line (wall_send_ns = wall_clock_now - latency); client readers parse and use the server-computed send timestamp
  - Add parse_latency_file_line() with 7 unit tests covering valid input, missing commas, empty lines, non-numeric values, and extra commas
  - Enhance existing end-to-end streaming tests to validate that timestamps fall within the test execution window and are not all identical
  - Document streaming output column definitions in README.md, including timestamp_ns semantics and an accuracy note for one-way clock mixing

  All 265+ unit tests pass, clippy clean, no scope creep.

  AI-assisted-by: Claude claude-4.6-opus-high-thinking (Anthropic)
  Made-with: Cursor

* fix: Capture wall-clock timestamp immediately and trim latency lines

  - Move the SystemTime::now() capture to immediately after get_monotonic_time_ns() in both the blocking and async server paths, eliminating minor drift from the should_buffer_latency branch
  - Add .trim() to the parse_latency_file_line() input for cross-platform robustness against Windows \r\n line endings
  - All tests passing

  AI-assisted-by: Claude claude-4.6-opus-high-thinking
  Made-with: Cursor
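The API change in the first commit can be sketched as follows. This is a minimal reconstruction from the commit message only: the real record also carries fields such as the message id, mechanism, and size, and the actual signatures may differ.

```rust
use std::time::{SystemTime, UNIX_EPOCH};

/// Sketch of a per-message latency record; fields beyond the three
/// named in the commit message are omitted here.
pub struct MessageLatencyRecord {
    pub send_timestamp_ns: u64,
    pub one_way_latency_ns: Option<u64>,
    pub round_trip_latency_ns: Option<u64>,
}

impl MessageLatencyRecord {
    /// The caller now supplies the send timestamp; the constructor
    /// no longer captures `SystemTime::now()` internally.
    pub fn new(send_timestamp_ns: u64, round_trip_latency_ns: u64) -> Self {
        Self {
            send_timestamp_ns,
            one_way_latency_ns: None,
            round_trip_latency_ns: Some(round_trip_latency_ns),
        }
    }

    /// Combined records carry both latencies for the same message.
    pub fn new_combined(send_timestamp_ns: u64, one_way_ns: u64, round_trip_ns: u64) -> Self {
        Self {
            send_timestamp_ns,
            one_way_latency_ns: Some(one_way_ns),
            round_trip_latency_ns: Some(round_trip_ns),
        }
    }

    /// Helper for call sites that need the current wall-clock time
    /// as nanoseconds since the Unix epoch.
    pub fn current_timestamp_ns() -> u64 {
        SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .expect("system clock is before the Unix epoch")
            .as_nanos() as u64
    }
}

fn main() {
    // Capture the timestamp at send time, construct the record later;
    // the gap between records' timestamps now reflects real latency.
    let send_ts = MessageLatencyRecord::current_timestamp_ns();
    let record = MessageLatencyRecord::new(send_ts, 12_345);
    assert_eq!(record.send_timestamp_ns, send_ts);
    println!("record sent at {} ns since epoch", record.send_timestamp_ns);
}
```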
1 parent ad0979e commit 2bbcd52

7 files changed

Lines changed: 500 additions & 69 deletions


CONFIG.md

Lines changed: 6 additions & 0 deletions

```diff
@@ -27,6 +27,12 @@ This document provides detailed information about configuring the IPC Benchmark
 
 ### Test Configuration
 
+One-way and round-trip tests run **sequentially**, not
+simultaneously. Each test spawns its own server process, runs to
+completion, and tears down before the next test starts. By default
+both are enabled. For duration-based tests (`-d`), each test gets
+its own full time window.
+
 | Option | Type | Default | Description |
 |--------|------|---------|-------------|
 | `--one-way` | Boolean | `true` | Enable one-way latency tests |
```
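The sequential execution model documented above can be sketched as follows. `run_test` is a hypothetical stand-in for a full spawn-server/run/tear-down cycle; the busy-wait merely simulates message traffic for the duration window.

```rust
use std::time::{Duration, Instant};

// Hypothetical stand-in for one benchmark test: spawn the server,
// run until the duration window elapses, then tear down.
fn run_test(window: Duration) -> u64 {
    let start = Instant::now();
    let mut records = 0u64;
    while start.elapsed() < window {
        records += 1; // stand-in for one sent message
    }
    records
}

fn main() {
    // With `-d`, each enabled test gets its own full time window, so
    // total wall time is roughly twice the window when both tests run.
    let window = Duration::from_millis(5);
    let overall = Instant::now();
    let one_way_records = run_test(window); // runs to completion first
    let round_trip_records = run_test(window); // starts only afterwards
    assert!(overall.elapsed() >= 2 * window);
    println!("one-way: {one_way_records}, round-trip: {round_trip_records}");
}
```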

README.md

Lines changed: 52 additions & 3 deletions

```diff
@@ -31,8 +31,16 @@ This benchmark suite provides a systematic way to evaluate the performance of va
 ### Output Formats
 
 - **JSON**: Optional, machine-readable structured output for final, aggregated results. Generated only when the `--output-file` flag is used.
-- **Streaming JSON**: Real-time, per-message latency data written to a file in a columnar JSON format. This allows for efficient, live monitoring of long-running tests. The format consists of a `headings` array and a `data` array containing value arrays for each message.
-- **Streaming CSV**: Real-time, per-message latency data written to a standard CSV file. This format is ideal for easy import into spreadsheets and data analysis tools.
+- **Streaming JSON**: Real-time, per-message latency data written
+  to a file in a columnar JSON format. This allows for efficient,
+  live monitoring of long-running tests. The format consists of a
+  `headings` array and a `data` array containing value arrays for
+  each message. See [Streaming Output Columns](#streaming-output-columns)
+  for the column definitions.
+- **Streaming CSV**: Real-time, per-message latency data written
+  to a standard CSV file. The columns match the streaming JSON
+  headings. This format is ideal for easy import into spreadsheets
+  and data analysis tools.
 - **Console Output**: User-friendly, color-coded summaries on `stdout`. Includes a configuration summary at startup and a detailed results summary upon completion.
 - **Detailed Logs**: Structured, timestamped logs written to a file or `stderr` for diagnostics.
 
```
```diff
@@ -166,6 +174,24 @@ This benchmark suite uses **high-precision monotonic clocks** to measure true IP
 - **Windows**: Falls back to system time (less precise)
 - **Characteristics**: Monotonic clocks measure time from system boot and are unaffected by NTP adjustments, daylight saving time, or manual clock changes
 
+#### Test Execution Order
+
+One-way and round-trip tests run **sequentially**, never
+simultaneously. Each test spawns its own server process, runs to
+completion, and tears down before the next test starts:
+
+1. **Warmup** (if configured)
+2. **One-way test** — client sends messages; server measures
+   receive latency
+3. **Round-trip test** — client sends a message, waits for the
+   server's response, then sends the next
+
+By default both tests are enabled. Use `--one-way` or
+`--round-trip` to run only one. For duration-based tests (`-d`),
+each test gets its own full time window, so one-way typically
+produces far more records than round-trip (fire-and-forget vs
+send-wait-receive per message).
+
 #### Timestamp Capture Points
 
 **For One-Way Latency Tests:**
```
```diff
@@ -179,6 +205,28 @@ This benchmark suite uses **high-precision monotonic clocks** to measure true IP
 3. **Client Side**: Timestamp captured after receiving response
 4. **Latency Calculation**: Total elapsed time from send to receive
 
+#### Streaming Output Columns
+
+The per-message streaming output (JSON and CSV) contains the
+following columns:
+
+| Column | Type | Description |
+|--------|------|-------------|
+| `timestamp_ns` | `u64` | Approximate wall-clock time (nanoseconds since Unix epoch) when the message was sent. For round-trip and combined tests this is captured on the client immediately before `send()`. For one-way tests the server approximates it as `wall_clock_now - latency`. |
+| `message_id` | `u64` | Zero-based sequential message identifier. |
+| `mechanism` | `string` | IPC mechanism name (e.g. `"TcpSocket"`, `"UnixDomainSocket"`). |
+| `message_size` | `u64` | Payload size in bytes. |
+| `one_way_latency_ns` | `u64` or `null` | One-way latency in nanoseconds, or `null` if this record is round-trip only. |
+| `round_trip_latency_ns` | `u64` or `null` | Round-trip latency in nanoseconds, or `null` if this record is one-way only. |
+
+> **Note on `timestamp_ns` accuracy:** For one-way tests the
+> server computes the send timestamp by subtracting the measured
+> monotonic latency from its current wall-clock time. This mixes
+> two clock domains (wall-clock and monotonic) and is subject to
+> minor drift from NTP adjustments, but it is the best
+> approximation available without clock synchronization between
+> the client and server processes.
+
 ### What's Measured
 
 The latency measurements include:
```
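The one-way timestamp approximation and the `parse_latency_file_line()` helper described in the commit message can be sketched like this. The function names follow the commit message, but the bodies are reconstructions and the real signatures and error handling may differ.

```rust
use std::time::{SystemTime, UNIX_EPOCH};

/// Server side: approximate the client's send time by subtracting the
/// measured monotonic latency from the server's current wall clock.
/// This mixes two clock domains, as the README note above explains.
fn approximate_send_ns(latency_ns: u64) -> u64 {
    let wall_now_ns = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock is before the Unix epoch")
        .as_nanos() as u64;
    wall_now_ns.saturating_sub(latency_ns)
}

/// Client side: parse one "wall_send_ns,latency_ns" line written by
/// the server. `trim()` tolerates Windows `\r\n` line endings.
fn parse_latency_file_line(line: &str) -> Option<(u64, u64)> {
    let mut parts = line.trim().split(',');
    let send_ns: u64 = parts.next()?.parse().ok()?;
    let latency_ns: u64 = parts.next()?.parse().ok()?;
    if parts.next().is_some() {
        return None; // extra commas: malformed line
    }
    Some((send_ns, latency_ns))
}

fn main() {
    let line = "1700000000000000000,42000\r\n";
    assert_eq!(
        parse_latency_file_line(line),
        Some((1_700_000_000_000_000_000, 42_000))
    );
    assert_eq!(parse_latency_file_line(""), None); // empty line
    assert_eq!(parse_latency_file_line("1,2,3"), None); // extra comma
    assert!(approximate_send_ns(1_000) > 0);
    println!("parsed: {:?}", parse_latency_file_line(line));
}
```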
```diff
@@ -390,7 +438,8 @@ ipc-benchmark --log-file stderr
 # Continue running tests even if one mechanism fails
 ipc-benchmark -m all --continue-on-error
 
-# Run only round-trip tests
+# Run only round-trip tests (one-way and round-trip run
+# sequentially by default; use these flags to select one)
 ipc-benchmark --round-trip --no-one-way
 
 # Custom percentiles for latency analysis
```
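To make the streaming-column table concrete, here is a hypothetical formatter for one streaming CSV row. It is not the project's actual code; in particular, rendering a missing latency as an empty CSV field (versus `null` in the JSON output) is an assumption.

```rust
/// Format one streaming CSV row matching the documented column order:
/// timestamp_ns, message_id, mechanism, message_size,
/// one_way_latency_ns, round_trip_latency_ns.
fn format_csv_row(
    timestamp_ns: u64,
    message_id: u64,
    mechanism: &str,
    message_size: u64,
    one_way_latency_ns: Option<u64>,
    round_trip_latency_ns: Option<u64>,
) -> String {
    // Assumption: a latency that does not apply to this record
    // (JSON `null`) is written as an empty CSV field.
    let fmt = |v: Option<u64>| v.map(|n| n.to_string()).unwrap_or_default();
    format!(
        "{timestamp_ns},{message_id},{mechanism},{message_size},{},{}",
        fmt(one_way_latency_ns),
        fmt(round_trip_latency_ns)
    )
}

fn main() {
    // A round-trip-only record: the one-way field stays empty.
    let row = format_csv_row(
        1_700_000_000_000_000_000,
        0,
        "TcpSocket",
        1024,
        None,
        Some(42_000),
    );
    assert_eq!(row, "1700000000000000000,0,TcpSocket,1024,,42000");
    println!("{row}");
}
```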
