This document provides detailed information about configuring the IPC Benchmark Suite for various testing scenarios and environments.
- Command Line Options
- Configuration File
- Environment Variables
- IPC Mechanism Settings
- Performance Tuning
- Test Scenarios
- System Requirements
| Option | Short | Type | Default | Description |
|---|---|---|---|---|
| --mechanisms | -m | String[] | [uds] | IPC mechanisms to benchmark (uds, shm, tcp, pmq, all) |
| --message-size | -s | Number | 1024 | Message size in bytes |
| --msg-count | -i | Number | 10000 | Number of messages to send |
| --duration | -d | String | - | Duration to run (e.g., "30s", "5m") |
| --concurrency | -c | Number | 1 | Number of concurrent workers |
| --output-file | -o | String | benchmark_results.json | Output file path |
One-way and round-trip tests run sequentially, not
simultaneously. Each test spawns its own server process, runs to
completion, and tears down before the next test starts. By default
both are enabled. For duration-based tests (-d), each test gets
its own full time window.
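The sequential flow described above can be sketched as follows. This is an illustration of the scheduling behaviour only; the function and event names are hypothetical, not the suite's actual API:

```python
# Sketch of the sequential test flow: each enabled test type, per mechanism,
# gets its own server process and runs to completion before the next starts.
def run_benchmarks(mechanisms, one_way=True, round_trip=True):
    events = []
    tests = [name for name, enabled in (("one_way", one_way), ("round_trip", round_trip)) if enabled]
    for mech in mechanisms:
        for test in tests:
            events.append(f"start {mech}/{test}")     # spawn server, run test to completion
            events.append(f"teardown {mech}/{test}")  # tear down before the next test begins
    return events

print(run_benchmarks(["uds", "shm"]))
```

With a duration flag, each `start`/`teardown` pair above would span a full time window of its own rather than sharing one.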
| Option | Type | Default | Description |
|---|---|---|---|
| --one-way | Boolean | true | Enable one-way latency tests |
| --round-trip | Boolean | true | Enable round-trip latency tests |
| --warmup-iterations (-w) | Number | 1000 | Number of warmup iterations before measurement |
| --percentiles | Number[] | [50.0, 95.0, 99.0, 99.9] | Percentiles to calculate |
| --buffer-size | Number | auto | Override buffer size (auto-sized per mechanism when omitted) |
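The --percentiles option drives a calculation of this general shape. The sketch below uses the nearest-rank method; the suite's actual interpolation method may differ:

```python
import math

# Nearest-rank percentile over recorded latency samples (sketch only; the
# suite's exact interpolation method may differ).
def percentile(sorted_samples, p):
    rank = math.ceil(p / 100 * len(sorted_samples))
    return sorted_samples[max(rank - 1, 0)]

latencies_us = sorted([12, 15, 11, 14, 90, 13, 16, 12, 14, 250])  # microseconds
for p in (50.0, 95.0, 99.0, 99.9):
    print(f"p{p}: {percentile(latencies_us, p)} us")
```

Note how the high percentiles are dominated by the few slowest samples, which is why a large --msg-count matters for stable p99.9 figures.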
| Option | Type | Default | Description |
|---|---|---|---|
| --streaming-output | String | - | File for streaming results during execution |
| --continue-on-error | Boolean | false | Continue running if one test fails |
| --verbose (-v) | Boolean | false | Enable verbose output |
| --host | String | "127.0.0.1" | Host address for TCP sockets |
| --port | Number | 8080 | Port for TCP sockets |
| --server-affinity | Number | - | Pin the server process (message receiver) to a CPU core (best effort) |
| --client-affinity | Number | - | Pin the client workload (message sender) to a CPU core (best effort) |
```bash
# Basic usage
ipc-benchmark -m uds shm pmq --message-size 4096 --msg-count 50000

# Test all mechanisms (including PMQ)
ipc-benchmark -m all --message-size 1024 --msg-count 10000

# Test POSIX message queues specifically
ipc-benchmark -m pmq --message-size 2048 --msg-count 5000

# Duration-based testing
ipc-benchmark --duration 60s --concurrency 8

# Comprehensive latency analysis
ipc-benchmark --percentiles 50 90 95 99 99.9 99.99 --warmup-iterations 10000

# High-throughput testing
ipc-benchmark -m shm --message-size 65536 --buffer-size 1048576
```

You can create a configuration file to avoid repeating command-line arguments:
```json
{
  "mechanisms": ["uds", "shm", "tcp", "pmq"],
  "message_size": 1024,
  "msg_count": 10000,
  "concurrency": 4,
  "one_way": true,
  "round_trip": true,
  "warmup_iterations": 1000,
  "percentiles": [50.0, 95.0, 99.0, 99.9],
  "buffer_size": 8192,
  "output_file": "results.json",
  "streaming_output": "streaming.json",
  "host": "127.0.0.1",
  "port": 8080
}
```

To test all available mechanisms, set "mechanisms" to ["all"] (JSON does not support comments, so this alternative cannot be noted inline).

The same configuration in TOML:

```toml
mechanisms = ["uds", "shm", "tcp", "pmq"]
# Alternative: mechanisms = ["all"] to test all available mechanisms
message_size = 1024
msg_count = 10000
concurrency = 4
one_way = true
round_trip = true
warmup_iterations = 1000
percentiles = [50.0, 95.0, 99.0, 99.9]
buffer_size = 8192
output_file = "results.json"
streaming_output = "streaming.json"
host = "127.0.0.1"
port = 8080
```

```bash
# JSON configuration
ipc-benchmark --config config.json

# TOML configuration
ipc-benchmark --config config.toml

# Override specific options
ipc-benchmark --config config.json --concurrency 8
```

| Variable | Description | Default |
|---|---|---|
| RUST_LOG | Logging level (trace, debug, info, warn, error) | info |
| IPC_BENCHMARK_TEMP_DIR | Temporary directory for IPC files | /tmp |
| IPC_BENCHMARK_OUTPUT_DIR | Default output directory | Current directory |
| IPC_BENCHMARK_CONFIG | Default configuration file path | - |
| CARGO_BIN_EXE_ipc-benchmark | Path hint for the test runner to spawn the server binary | Auto-detected |
| CARGO_BIN_EXE_ipc_benchmark | Alternate env var name used in some setups | Auto-detected |
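A plausible resolution order for these sources is: CLI flags override the config file, which overrides built-in defaults, with IPC_BENCHMARK_CONFIG supplying only a default config *path*. The sketch below illustrates that precedence with a few hypothetical keys; it is not the suite's actual implementation:

```python
import json
import os

# Built-in defaults (illustrative subset of the real option set).
DEFAULTS = {"message_size": 1024, "msg_count": 10000, "concurrency": 1}

def effective_config(cli_args, config_path=None, env=None):
    """Merge defaults < config file < CLI flags (sketch of plausible precedence)."""
    env = os.environ if env is None else env
    path = config_path or env.get("IPC_BENCHMARK_CONFIG")
    file_cfg = {}
    if path and os.path.exists(path):
        with open(path) as f:
            file_cfg = json.load(f)
    return {**DEFAULTS, **file_cfg, **cli_args}

# "--concurrency 8" on the command line wins over any file or default value:
print(effective_config({"concurrency": 8}, env={}))
```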
```bash
# Enable debug logging
RUST_LOG=debug ipc-benchmark

# Use custom temporary directory
IPC_BENCHMARK_TEMP_DIR=/var/tmp ipc-benchmark

# Set default configuration
IPC_BENCHMARK_CONFIG=./default.json ipc-benchmark
```

| Setting | Description | Default | Range |
|---|---|---|---|
| socket_path | Path to socket file | /tmp/ipc_benchmark_<uuid>.sock | Valid file path |
| buffer_size | Socket buffer size | 8192 | 1024 - 1MB |
Optimal Settings:
- Message size: 64 - 4096 bytes for lowest latency
- Concurrency: 1-4 for latency testing, up to CPU cores for throughput
- Buffer size: 8192 - 65536 bytes
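As a sanity check outside the suite, a minimal Unix-domain-socket round trip can be timed directly in Python. This is a single-process sketch (no separate server, no contention), so it only indicates the order of magnitude to expect from the latency figures above:

```python
import socket
import time

# socketpair() returns two connected AF_UNIX stream sockets in one process.
a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
msg = b"x" * 64  # small message, per the latency guidance above

start = time.perf_counter_ns()
a.sendall(msg)        # "client" sends
echo = b.recv(4096)   # "server" receives
b.sendall(echo)       # ...and echoes back
reply = a.recv(4096)  # client completes the round trip
elapsed_ns = time.perf_counter_ns() - start

print(f"one round trip: {elapsed_ns} ns")
a.close()
b.close()
```

A real benchmark averages many such trips after warmup; a single measurement like this is noisy.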
| Setting | Description | Default | Range |
|---|---|---|---|
| shared_memory_name | Shared memory segment name | ipc_benchmark_<uuid> | Valid identifier |
| buffer_size | Ring buffer size | 64 KB (auto) | 4096 - 1GB |
Automatic Buffer Sizing: When --buffer-size is omitted,
SHM uses a fixed 64 KB ring buffer (or 2x message size for
large messages). This enables proper streaming where the writer
blocks when the buffer is full, rather than dumping all data at
once into an oversized buffer.
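The sizing rule described above reduces to a one-liner. This is an illustrative restatement; the suite's exact thresholds may differ:

```python
# Automatic SHM ring buffer sizing: a fixed 64 KB buffer, grown to 2x the
# message size when a single message would otherwise dominate the ring.
DEFAULT_RING = 64 * 1024

def auto_buffer_size(message_size):
    return max(DEFAULT_RING, 2 * message_size)

print(auto_buffer_size(1024))   # small message: stays at 65536
print(auto_buffer_size(65536))  # large message: grows to 131072
```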
Optimal Settings:
- Message size: 1024 - 65536 bytes for highest throughput
- Concurrency: 1 (single producer, single consumer)
- Buffer size: Leave at automatic (64 KB) for streaming; override with --buffer-size only to test specific backpressure scenarios
| Setting | Description | Default | Range |
|---|---|---|---|
| host | Server host address | 127.0.0.1 | Valid IP address |
| port | Server port | 8080 | 1024 - 65535 |
| buffer_size | TCP buffer size | 8192 | 1024 - 1MB |
Optimal Settings:
- Message size: 1024 - 8192 bytes for balanced performance
- Concurrency: 1-16 depending on system capacity
- Buffer size: 32KB - 256KB
The Rusty-Comms Dashboard requires specific output formats and parameters to provide full functionality. Missing required outputs will result in limited dashboard features.
For full dashboard compatibility, you must use both output parameters:
| Parameter | Purpose | Dashboard Impact |
|---|---|---|
| -o <directory> | Generate summary JSON files | Enables Summary Analysis tab |
| --streaming-output-json | Generate streaming data files | Enables Time Series Analysis tab |
```bash
# Basic dashboard-compatible benchmark
ipc-benchmark --mechanism SharedMemory --message-size 1024 \
  -o ./dashboard_data/ \
  --streaming-output-json

# Comprehensive comparison for dashboard
ipc-benchmark -m uds shm tcp pmq \
  --message-size 1024 \
  --msg-count 50000 \
  -o ./results/ \
  --streaming-output-json \
  --continue-on-error

# Multi-size analysis
for size in 64 256 1024 4096; do
  ipc-benchmark --mechanism SharedMemory \
    --message-size $size \
    -o ./dashboard_data/ \
    --streaming-output-json \
    --duration 10s
done
```

Dashboard-compatible runs will generate:

```
results/
├── sharedmemory_1024_summary.json    # Summary statistics and throughput
└── sharedmemory_1024_streaming.json  # Per-message latency data
```
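The layout above follows a `{mechanism}_{message_size}_{kind}.json` naming pattern, so the files a run will produce can be anticipated with a small helper like this (illustrative, not part of the suite):

```python
# Predict the output files a dashboard-compatible run will produce,
# based on the {mechanism}_{message_size}_{kind}.json pattern shown above.
def expected_outputs(mechanism, message_size):
    stem = f"{mechanism.lower()}_{message_size}"
    return [f"{stem}_summary.json", f"{stem}_streaming.json"]

print(expected_outputs("SharedMemory", 1024))
```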
| ipc-benchmark Parameters | Summary Tab | Time Series Tab | Impact |
|---|---|---|---|
| -o only | ✅ Available | ❌ Missing | Limited functionality |
| --streaming-output-json only | ❌ Missing | ✅ Available | No statistical summaries |
| Both -o and --streaming-output-json | ✅ Available | ✅ Available | Full functionality |
| Neither parameter | ❌ Missing | ❌ Missing | Dashboard incompatible |
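The compatibility matrix reduces to a simple rule, shown here as a tiny illustrative check (not part of the suite or the dashboard code):

```python
# Which dashboard tabs light up for a given combination of output flags:
# -o enables the Summary tab, --streaming-output-json the Time Series tab,
# and full functionality requires both.
def dashboard_tabs(summary_output, streaming_output):
    return {
        "summary_tab": summary_output,
        "time_series_tab": streaming_output,
        "full_functionality": summary_output and streaming_output,
    }

print(dashboard_tabs(summary_output=True, streaming_output=False))
```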
```bash
# Problem: Only used -o parameter
ipc-benchmark -m shm -o results/

# Solution: Add streaming output
ipc-benchmark -m shm -o results/ --streaming-output-json

# Problem: Only used streaming output
ipc-benchmark -m shm --streaming-output-json

# Solution: Add summary output
ipc-benchmark -m shm -o results/ --streaming-output-json

# Problem: No output parameters specified
ipc-benchmark -m shm

# Solution: Use both required parameters
ipc-benchmark -m shm -o results/ --streaming-output-json
```

For dashboard setup and usage instructions, see utils/dashboard/README.md.
```bash
# Disable frequency scaling
echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

# Set CPU affinity
taskset -c 0-3 ipc-benchmark --concurrency 4

# Disable turbo boost for consistency
echo 1 | sudo tee /sys/devices/system/cpu/intel_pstate/no_turbo
```

```bash
# Increase shared memory limits
echo 2147483648 | sudo tee /proc/sys/kernel/shmmax  # 2GB
echo 4096 | sudo tee /proc/sys/kernel/shmmni        # 4096 segments

# Optimize virtual memory
echo 1 | sudo tee /proc/sys/vm/overcommit_memory
echo 90 | sudo tee /proc/sys/vm/dirty_ratio
```

```bash
# Increase TCP buffer sizes
echo 'net.core.rmem_max = 16777216' | sudo tee -a /etc/sysctl.conf
echo 'net.core.wmem_max = 16777216' | sudo tee -a /etc/sysctl.conf
echo 'net.ipv4.tcp_rmem = 4096 87380 16777216' | sudo tee -a /etc/sysctl.conf
echo 'net.ipv4.tcp_wmem = 4096 65536 16777216' | sudo tee -a /etc/sysctl.conf

# Apply settings
sudo sysctl -p
```

```bash
# Maximum optimization
RUSTFLAGS="-C target-cpu=native -C opt-level=3" cargo build --release

# Link-time optimization
RUSTFLAGS="-C lto=fat" cargo build --release

# Profile-guided optimization
RUSTFLAGS="-C profile-generate=/tmp/pgo-data" cargo build --release
# Run benchmark to generate profile data, then rebuild using it
RUSTFLAGS="-C profile-use=/tmp/pgo-data" cargo build --release
```

```bash
# Increase stack size
ulimit -s 16384

# Set process priority
nice -n -20 ipc-benchmark

# Use huge pages
echo always | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
```

Goal: Measure minimum latency with high precision
```bash
ipc-benchmark \
  -m uds \
  --message-size 64 \
  --msg-count 100000 \
  --concurrency 1 \
  --warmup-iterations 10000 \
  --percentiles 50 90 95 99 99.9 99.99 \
  --round-trip \
  --no-one-way
```

Configuration:
- Single-threaded to eliminate contention
- Small message sizes to minimize serialization overhead
- Many messages for statistical significance
- Focus on round-trip measurements
Goal: Measure maximum data transfer rate
```bash
ipc-benchmark \
  -m shm \
  --message-size 65536 \
  --duration 60s \
  --concurrency 8 \
  --buffer-size 1048576 \
  --one-way \
  --no-round-trip
```

Configuration:
- Multiple concurrent workers
- Large message sizes
- Large buffer sizes
- One-way communication for maximum throughput
Goal: Evaluate performance under increasing load
```bash
# Test with increasing concurrency
for concurrency in 1 2 4 8 16; do
  ipc-benchmark \
    -m uds shm tcp pmq \
    --concurrency $concurrency \
    --message-size 1024 \
    --msg-count 10000 \
    --output-file "results_c${concurrency}.json"
done

# Test all mechanisms with scalability
for concurrency in 1 2 4 8; do
  ipc-benchmark \
    -m all \
    --concurrency $concurrency \
    --message-size 1024 \
    --msg-count 10000 \
    --output-file "results_all_c${concurrency}.json"
done

# PMQ-specific scalability testing (Note: PMQ has limited multi-connection support)
for msg_size in 64 512 2048 8192; do
  ipc-benchmark \
    -m pmq \
    --message-size $msg_size \
    --msg-count 5000 \
    --concurrency 1 \
    --output-file "results_pmq_${msg_size}.json"
done
```

Goal: Compare different IPC mechanisms
```bash
ipc-benchmark \
  -m uds shm tcp \
  --message-size 1024 \
  --msg-count 50000 \
  --concurrency 4 \
  --one-way \
  --round-trip \
  --output-file comparison.json

# Or simply test all available mechanisms
ipc-benchmark \
  -m all \
  --message-size 1024 \
  --msg-count 50000 \
  --concurrency 4 \
  --one-way \
  --round-trip \
  --output-file complete_comparison.json
```

Minimum Requirements:
- OS: Linux kernel 3.10+ (RHEL 7+, Ubuntu 16.04+)
- CPU: x86_64 architecture, 2 cores
- RAM: 4GB
- Disk: 1GB free space
- Rust: 1.70.0+
Recommended Configuration:
- OS: Linux kernel 5.0+ (RHEL 9, Ubuntu 20.04+)
- CPU: x86_64, 8+ cores, 3.0+ GHz
- RAM: 16GB+
- Disk: SSD with 10GB+ free space
- Network: Gigabit Ethernet (for TCP tests)
For Latency Testing:
- CPU: 2-4 cores, high frequency
- RAM: 4GB minimum
- Disk: Any (minimal I/O)
For Throughput Testing:
- CPU: 8+ cores
- RAM: 16GB+ (for shared memory)
- Disk: SSD recommended
For Scalability Testing:
- CPU: 16+ cores
- RAM: 32GB+
- Disk: High-performance SSD
```dockerfile
FROM rust:1.75-slim

RUN apt-get update && apt-get install -y \
    build-essential \
    pkg-config \
    libc6-dev

WORKDIR /app
COPY . .
RUN cargo build --release

# Note: /dev/shm size cannot be raised from inside the image; set it at
# run time instead, e.g. docker run --shm-size=2g
CMD ["./target/release/ipc-benchmark"]
```

```yaml
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: ipc-benchmark
    image: ipc-benchmark:latest
    resources:
      requests:
        memory: "4Gi"
        cpu: "2"
      limits:
        memory: "16Gi"
        cpu: "8"
    volumeMounts:
    - name: shm
      mountPath: /dev/shm
  volumes:
  - name: shm
    emptyDir:
      medium: Memory
      sizeLimit: 2Gi
```

- Buffer Size Too Small: Increase buffer_size for large messages
- Insufficient Shared Memory: Adjust kernel limits
- Port Conflicts: Use different ports for multiple TCP tests
- Permission Errors: Ensure write access to temp directories
```bash
# Check PMQ limits
cat /proc/sys/fs/mqueue/msg_max       # Max messages per queue
cat /proc/sys/fs/mqueue/msgsize_max   # Max message size

# Mount message queue filesystem (if not mounted)
sudo mkdir -p /dev/mqueue
sudo mount -t mqueue none /dev/mqueue

# Increase PMQ limits if needed
echo 100 | sudo tee /proc/sys/fs/mqueue/msg_max
echo 16384 | sudo tee /proc/sys/fs/mqueue/msgsize_max
```

PMQ Limitations:
- Limited multi-connection support compared to other mechanisms
- Message size and queue depth are system-limited
- Queue names must start with '/' and follow POSIX naming rules
- Persistent queues may need manual cleanup after crashes
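The naming rule above can be checked before a run. The sketch below approximates the POSIX constraint (a leading '/', no further slashes, bounded length); mq_open() is the real authority:

```python
import re

# Validate a POSIX message queue name: must start with '/', contain no
# further slashes, and stay within a length limit (sketch of the rule only).
def valid_pmq_name(name, name_max=255):
    return bool(re.fullmatch(r"/[^/]+", name)) and len(name) <= name_max

print(valid_pmq_name("/ipc_benchmark_queue"))  # True
print(valid_pmq_name("no_leading_slash"))      # False
print(valid_pmq_name("/nested/name"))          # False
```

Stale queues left by crashed runs appear under /dev/mqueue and can be removed like ordinary files.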
```bash
# Check current limits
ipcs -lm

# Increase shared memory limits
echo 2147483648 | sudo tee /proc/sys/kernel/shmmax  # 2GB max segment
echo 4096 | sudo tee /proc/sys/kernel/shmmni        # 4096 segments
```

```bash
# Increase socket buffer sizes
echo 'net.core.rmem_max = 16777216' | sudo tee -a /etc/sysctl.conf
echo 'net.core.wmem_max = 16777216' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
```

```bash
# Check shared memory limits
ipcs -lm

# Check available ports
netstat -tuln | grep :8080

# Check file permissions
ls -la /tmp/ipc_benchmark_*

# Validate configuration
ipc-benchmark --config config.json --dry-run
```

```bash
# Enable detailed logging
RUST_LOG=debug ipc-benchmark --verbose

# Profile with perf
perf record -g ipc-benchmark
perf report

# Memory usage analysis
valgrind --tool=massif ipc-benchmark
```

For additional configuration assistance, see the troubleshooting section in the main README.