For benchmarking with `dawn_perf_tests`, it's best to build inside a Chromium checkout using the following GN args:

```
is_official_build = true  # Enables highest optimization level, using LTO on some platforms
use_dawn = true           # Required to build Dawn
use_cfi_icall = false     # Required because Dawn dynamically loads function pointers, and we don't sanitize them yet.
```
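For example, from the root of a Chromium checkout (the `out/Release` directory name is just a convention; adjust to your setup):

```
gn gen out/Release --args='is_official_build=true use_dawn=true use_cfi_icall=false'
autoninja -C out/Release dawn_perf_tests
```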
A Chromium checkout is required for the highest optimization flags. It is possible to build and run `dawn_perf_tests` from a standalone Dawn checkout as well, using only the GN arg `is_debug=false`. For more information on building, please see `building.md`.
`dawn_perf_tests` metrics are reported as time per iteration. The number of iterations per Step, `iterationsPerStep`, is provided to the constructor of `DawnPerfTestBase`. `kNumTrials` are run for each test; a Step in a Trial is run repetitively for approximately `kCalibrationRunTimeSeconds`. Metrics are accumulated per-trial and reported as the total time divided by `numSteps * iterationsPerStep`. `maxStepsInFlight` is passed to the `DawnPerfTestBase` constructor to limit the number of Steps pipelined.
(See `//src/dawn/tests/perf_tests/DawnPerfTest.h` for the values of the constants.)
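To make the reporting formula concrete, here is a minimal standalone sketch; the numbers are made up, and only the `numSteps * iterationsPerStep` division mirrors what the harness reports:

```
#include <cstdint>
#include <iostream>

int main() {
    // Hypothetical results for a single Trial:
    const double trialTimeSeconds = 1.2;    // total time accumulated over the Trial
    const uint32_t numSteps = 600;          // Steps completed in ~kCalibrationRunTimeSeconds
    const uint32_t iterationsPerStep = 10;  // as passed to the DawnPerfTestBase constructor

    // Reported metric: total time divided by numSteps * iterationsPerStep.
    const double timePerIteration = trialTimeSeconds / (numSteps * iterationsPerStep);
    std::cout << timePerIteration * 1e6 << " us per iteration\n";  // prints 200 us
}
```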
`dawn_perf_tests` measures the following metrics:

 - `wall_time`: The time per iteration, including time waiting for the GPU between Steps in a Trial.
 - `cpu_time`: The time per iteration, not including time waiting for the GPU between Steps in a Trial.
 - `validation_time`: The time for CommandBuffer / RenderBundle validation.
 - `recording_time`: The time to convert Dawn commands to native commands.
Metrics are reported according to the format specified in `[chromium]//build/scripts/slave/performance_log_processor.py`.
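For reference, a line in that format looks roughly like the following (the graph and trace names here are hypothetical, not captured output):

```
RESULT BufferUploadPerf.Run/Vulkan: wall_time= 152.3 ns
```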
The test harness supports a `--trace-file=path/to/trace.json` argument where Dawn trace events can be dumped. The traces can be viewed in Chrome's `about://tracing` viewer.
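For example (the gtest filter and output path are arbitrary):

```
out/Release/dawn_perf_tests --gtest_filter=DrawCallPerf.* --trace-file=trace.json
```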
`//scripts/perf_test_runner.py` can be used to run a test repeatedly and report mean times and variances. Currently, the script looks in the `out/Release` build directory and measures the `wall_time` metric; both are hardcoded into the script and should eventually become arguments.
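Example usage (the test name argument here is hypothetical):

```
scripts/perf_test_runner.py DrawCallPerf.Run/Vulkan
```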
BufferUploadPerf tests repetitively uploading data to the GPU using either `WriteBuffer` or `CreateBuffer` with `mappedAtCreation = true`.
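For context, the two upload paths look roughly like the following with the WebGPU C++ API. This is a sketch assuming an already-initialized `device` and `queue`, not the test's actual code:

```
#include <webgpu/webgpu_cpp.h>

#include <cstring>
#include <vector>

// Path 1: WriteBuffer copies data into an existing buffer via the queue.
void UploadWithWriteBuffer(const wgpu::Queue& queue, const wgpu::Buffer& dst,
                           const std::vector<uint8_t>& data) {
    queue.WriteBuffer(dst, 0, data.data(), data.size());
}

// Path 2: create the buffer already mapped and write into it directly.
wgpu::Buffer UploadWithMappedAtCreation(const wgpu::Device& device,
                                        const std::vector<uint8_t>& data) {
    wgpu::BufferDescriptor desc;
    desc.size = data.size();
    desc.usage = wgpu::BufferUsage::CopySrc;
    desc.mappedAtCreation = true;
    wgpu::Buffer buffer = device.CreateBuffer(&desc);
    std::memcpy(buffer.GetMappedRange(), data.data(), data.size());
    buffer.Unmap();
    return buffer;
}
```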
DrawCallPerf tests drawing a simple triangle with many ways of encoding commands, binding, and uploading data to the GPU. The rationale for this is the following: