Comparison of Gungraun with Iai
This is a comparison with Iai. There is no known downside to using Gungraun instead of Iai. Although the original idea of Iai will always be remembered, Gungraun has surpassed Iai over the years in functionality, stability and flexibility.
Gungraun Pros:
- Gungraun is actively maintained.
- The user interface and benchmarking API of Gungraun are simple and intuitive, and allow for a much more concise and clearer structure of benchmarks.
- Gungraun natively excludes setup code from the metrics of interest. The metrics are more stable because the benchmark function is virtually encapsulated by Callgrind, which separates the benchmarked code from the surrounding code.
- Full support for benchmarking multi-threaded/multi-process functions and binaries.
- Can still run Cachegrind, but with a real one-shot implementation using client requests instead of a calibration run.
- Support for memory profiling with `DHAT` and `Massif`.
- Running error-checking Valgrind tools is a few keystrokes away if you really need them.
- The Callgrind output files are much more focused on the benchmark function and the function under test than the Cachegrind output files that Iai produces. The calibration run of Iai only sanitized the visible summary output, not the metrics in the output files themselves, so the output of `cg_annotate` was still cluttered with initialization code, setup functions and their metrics.
- Changes to the Gungraun library almost never influence the benchmark metrics, since the actual runner (`gungraun-runner`), and thus 99% of the code needed to run the benchmarks, is isolated from the benchmarks in an independent binary. In contrast, the Iai library is compiled together with the benchmarks.
- Gungraun has functionality in place that provides a constant and reproducible benchmarking environment, such as the `Sandbox` and the clearing of environment variables.
- Customizable output format, so you can show all Callgrind/Cachegrind/DHAT/... metrics or only the set of metrics you're interested in.
- Comparison of benchmark functions by id.
- Gungraun can be configured to check for performance regressions.
- Ships with a complete implementation of Valgrind Client Requests.
- Comparison of benchmarks to baselines instead of only to `.old` files.
- Natively supports benchmarking binaries.
- Gungraun can print and/or save machine-readable output in `.json` format.
- Fixed the wrong labeling of `L1 Accesses`, ... to `L1 Hits`, ...
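To illustrate the concise benchmark structure mentioned above, here is a minimal sketch of a library benchmark. It assumes Gungraun exposes the attribute-macro style API known from its predecessor (`#[library_benchmark]`, `library_benchmark_group!` and `main!`); the function and group names are illustrative, and running it requires the `gungraun` crate plus an installed Valgrind, so this is not a standalone program.

```rust
// Sketch of a Gungraun library benchmark file (e.g. benches/my_bench.rs).
// Assumes the attribute-macro API exported by the `gungraun` crate.
use gungraun::{library_benchmark, library_benchmark_group, main};
use std::hint::black_box;

// The function under test.
fn fibonacci(n: u64) -> u64 {
    match n {
        0 => 0,
        1 => 1,
        n => fibonacci(n - 1) + fibonacci(n - 2),
    }
}

#[library_benchmark]
#[bench::short(10)] // the argument is prepared as setup and excluded from the metrics
fn bench_fibonacci(n: u64) -> u64 {
    // `black_box` keeps the compiler from optimizing the call away.
    black_box(fibonacci(n))
}

library_benchmark_group!(name = fib_group; benchmarks = bench_fibonacci);
main!(library_benchmark_groups = fib_group);
```

The split into a thin benchmark file and the separate `gungraun-runner` binary is what keeps library changes from influencing the measured metrics.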