Commit dea73ef

docs: Add comprehensive command-line options documentation (#2187)

This commit adds detailed documentation for all benchmark command-line options to the user guide. Each option is documented with:

- Description of what the option does
- Default value (where applicable)
- Valid values (where applicable)
- Example usage

The documentation is organized into logical categories:

- Benchmark Selection and Execution
- Timing and Repetition Control
- Output Formatting
- Reporting Options
- Performance Counters and Context
- Miscellaneous

This addresses issue #2156, where users requested public documentation of the command-line options instead of having to run `--help`.

Closes #2156

1 parent: ff773f8

1 file changed: docs/user_guide.md (205 additions, 0 deletions)
## Command Line

[Command Line Options](#command-line-options)

[Output Formats](#output-formats)

[Output Files](#output-files)
[Disabling CPU Frequency Scaling](#disabling-cpu-frequency-scaling)

[Reducing Variance in Benchmarks](reducing_variance.md)
<a name="command-line-options" />

## Command Line Options

Benchmarks accept options that can be specified either on the command line or through environment variables set before execution. For every `--option_flag=<value>` command-line switch there is a corresponding environment variable `OPTION_FLAG=<value>`; if set, it supplies the default value, but the command-line switch always takes precedence.
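The precedence rule can be sketched as follows. This is an illustration of the rule only, not the library's actual flag-parsing code; `resolve_option` is a hypothetical helper:

```python
import os

def resolve_option(argv, name, default):
    """Illustrates the precedence rule: CLI flag > environment variable > built-in default."""
    prefix = f"--{name}="
    for arg in argv:
        if arg.startswith(prefix):
            return arg[len(prefix):]              # CLI switch always wins
    return os.environ.get(name.upper(), default)  # env var is the fallback default

os.environ["BENCHMARK_FORMAT"] = "json"
print(resolve_option([], "benchmark_format", "console"))                          # json (from env)
print(resolve_option(["--benchmark_format=csv"], "benchmark_format", "console"))  # csv (CLI wins)
```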
### Benchmark Selection and Execution

#### `--benchmark_list_tests` (BENCHMARK_LIST_TESTS)

Print a list of all benchmark names and exit. This option overrides all other options.

**Example:**

```bash
$ ./benchmark --benchmark_list_tests
BM_SomeFunction
BM_AnotherFunction
```

#### `--benchmark_filter=<regex>` (BENCHMARK_FILTER)

A regular expression that selects the set of benchmarks to execute. If this flag is empty or set to the string "all", every benchmark linked into the binary is run.

**Example:**

```bash
$ ./benchmark --benchmark_filter=BM_memcpy/32
```

#### `--benchmark_dry_run` (BENCHMARK_DRY_RUN)

If enabled, forces each benchmark to execute exactly one iteration and one repetition, bypassing any configured `MinTime()`, `MinWarmUpTime()`, `Iterations()`, or `Repetitions()`. This is useful for quickly verifying that benchmarks run successfully without waiting for full execution.

**Example:**

```bash
$ ./benchmark --benchmark_dry_run
```

#### `--benchmark_enable_random_interleaving` (BENCHMARK_ENABLE_RANDOM_INTERLEAVING)

If set, enables random interleaving of the repetitions of all benchmarks. This can help reduce the impact of system state changes on benchmark results. See [GitHub issue #1051](https://github.com/google/benchmark/issues/1051) for details.

**Example:**

```bash
$ ./benchmark --benchmark_enable_random_interleaving
```

### Timing and Repetition Control

#### `--benchmark_min_time=<seconds>` (BENCHMARK_MIN_TIME)

Specifies the minimum amount of time (in seconds) that each benchmark should run. For CPU-time-based tests, this is the lower bound on the total CPU time used by all threads that make up the test. For real-time-based tests, this is the lower bound on the elapsed time of the benchmark execution, regardless of the number of threads.

**Default:** `0.5` seconds

**Example:**

```bash
$ ./benchmark --benchmark_min_time=1.0
```

#### `--benchmark_min_warmup_time=<seconds>` (BENCHMARK_MIN_WARMUP_TIME)

The minimum number of seconds a benchmark should run before its results are taken into account. This can be necessary for benchmarks of code that needs to fill some form of cache before its performance is of interest. Results gathered within this period are discarded and not used for the reported result.

**Default:** `0.0` seconds

**Example:**

```bash
$ ./benchmark --benchmark_min_warmup_time=0.5
```

#### `--benchmark_repetitions=<count>` (BENCHMARK_REPETITIONS)

The number of runs of each benchmark. If greater than 1, the mean and standard deviation of the runs are reported.

**Default:** `1`

**Example:**

```bash
$ ./benchmark --benchmark_repetitions=5
```
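The mean and standard deviation reported for repeated runs are ordinary statistics computed over the per-repetition results. A minimal illustration with made-up timings (the aggregate row names in real output are version-dependent and omitted here):

```python
import statistics

# Hypothetical per-repetition real times (ns) from --benchmark_repetitions=5
repetitions = [102.0, 98.0, 101.0, 99.0, 100.0]

mean = statistics.mean(repetitions)     # shown as the benchmark's "..._mean" aggregate
stddev = statistics.stdev(repetitions)  # shown as "..._stddev" (sample standard deviation)
print(mean, stddev)
```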
### Output Formatting

#### `--benchmark_format=<console|json|csv>` (BENCHMARK_FORMAT)

The format to use for console output. Valid values are 'console', 'json', or 'csv'. See [Output Formats](#output-formats) for more details.

**Default:** `console`

**Example:**

```bash
$ ./benchmark --benchmark_format=json
```

#### `--benchmark_out=<filename>` (BENCHMARK_OUT)

The file to write additional output to. The output format is controlled by `--benchmark_out_format`. Specifying this option does not suppress console output.

**Example:**

```bash
$ ./benchmark --benchmark_out=results.json
```

#### `--benchmark_out_format=<console|json|csv>` (BENCHMARK_OUT_FORMAT)

The format to use for the file output specified by `--benchmark_out`. Valid values are 'console', 'json', or 'csv'.

**Default:** `json`

**Example:**

```bash
$ ./benchmark --benchmark_out=results.csv --benchmark_out_format=csv
```

#### `--benchmark_color=<auto|true|false>` (BENCHMARK_COLOR)

Whether to use colors in the output. Valid values are 'true'/'yes'/1, 'false'/'no'/0, and 'auto'. 'auto' means colors are used if the output is being sent to a terminal and the `TERM` environment variable is set to a terminal type that supports colors.

**Default:** `auto`

**Example:**

```bash
$ ./benchmark --benchmark_color=false
```
#### `--benchmark_time_unit=<ns|us|ms|s>` (BENCHMARK_TIME_UNIT)

Sets the default time unit to use for reports. Valid values are 'ns' (nanoseconds), 'us' (microseconds), 'ms' (milliseconds), or 's' (seconds).

**Default:** (empty; the unit is selected automatically)

**Example:**

```bash
$ ./benchmark --benchmark_time_unit=us
```

### Reporting Options

#### `--benchmark_report_aggregates_only` (BENCHMARK_REPORT_AGGREGATES_ONLY)

When enabled, only the mean, standard deviation, and other statistics are reported for repeated benchmarks. This affects all reporters (both console and file output).

**Default:** `false`

**Example:**

```bash
$ ./benchmark --benchmark_repetitions=5 --benchmark_report_aggregates_only
```

#### `--benchmark_display_aggregates_only` (BENCHMARK_DISPLAY_AGGREGATES_ONLY)

When enabled, only the mean, standard deviation, and other statistics are displayed for repeated benchmarks. Unlike `--benchmark_report_aggregates_only`, this affects only the display (console) reporter; the file reporter still contains all output.

**Default:** `false`

**Example:**

```bash
$ ./benchmark --benchmark_repetitions=5 --benchmark_display_aggregates_only
```

#### `--benchmark_counters_tabular` (BENCHMARK_COUNTERS_TABULAR)

Whether to use a tabular format when printing user counters to the console. Valid values are 'true'/'yes'/1 and 'false'/'no'/0.

**Default:** `false`

**Example:**

```bash
$ ./benchmark --benchmark_counters_tabular=true
```

### Performance Counters and Context

#### `--benchmark_perf_counters=<list>` (BENCHMARK_PERF_COUNTERS)

A list of additional performance counters to collect, in libpfm format. For more information about libpfm, see the [libpfm documentation](https://man7.org/linux/man-pages/man3/libpfm.3.html).

**Example:**

```bash
$ ./benchmark --benchmark_perf_counters=cycles,instructions,cache-misses
```

#### `--benchmark_context=<key=value,...>` (BENCHMARK_CONTEXT)

Extra context to include in the output, formatted as comma-separated key-value pairs. This context is included in the JSON output's `context` object.

**Example:**

```bash
$ ./benchmark --benchmark_context=compiler=clang,version=13
```
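With the flag above, the key-value pairs land in the JSON output's `context` object alongside the standard fields. The shape below is illustrative only; the surrounding context fields (date, host name, CPU info, and so on) vary by machine and library version and are omitted:

```python
import json

# Illustrative fragment of the JSON emitted with
# --benchmark_context=compiler=clang,version=13
output = """
{
  "context": {
    "compiler": "clang",
    "version": "13"
  }
}
"""
context = json.loads(output)["context"]
print(context["compiler"], context["version"])  # clang 13
```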
### Miscellaneous

#### `-v` (V)

The level of verbose logging to output. Higher values produce more verbose output.

**Default:** `0`

**Example:**

```bash
$ ./benchmark -v
```

<a name="output-formats" />
