(TODO)
For benchmarking with `dawn_perf_tests`, it's best to build inside a Chromium checkout using the following GN args:

```
is_official_build = true  # Enables highest optimization level, using LTO on some platforms.
use_dawn = true           # Required to build Dawn.
use_cfi_icall = false     # Required because Dawn dynamically loads function pointers, and we don't sanitize them yet.
```
A Chromium checkout is required for the highest optimization flags. It is also possible to build and run `dawn_perf_tests` from a standalone Dawn checkout, using only the GN arg `is_debug=false`. For more information on building, please see building.md.
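As a concrete sketch of the standalone path (assuming a depot_tools-style setup and default output directory; the exact workflow for your environment is covered in building.md):

```
gn gen out/Release --args="is_debug=false"
autoninja -C out/Release dawn_perf_tests
out/Release/dawn_perf_tests
```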
`dawn_perf_tests` metrics are reported as time per iteration. `iterationsPerStep` is provided to the constructor of `DawnPerfTestBase`. `kNumTrials` are run for each test. A Step in a Trial is run repetitively for approximately `kCalibrationRunTimeSeconds`. Metrics are accumulated per-trial and reported as the total time divided by `numSteps * iterationsPerStep`. `maxStepsInFlight` is passed to the `DawnPerfTestBase` constructor to limit the number of Steps pipelined. (See `//src/dawn/tests/perf_tests/DawnPerfTest.h` for the values of the constants.)
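To make the accounting concrete, here is a minimal, self-contained C++ sketch of the reporting formula only. It is not Dawn's harness; the constant values and the `Step` body are made up for illustration.

```
#include <chrono>
#include <cstdio>

namespace {
// Assumed values for illustration; the real constants live in DawnPerfTest.h.
constexpr double kCalibrationRunTimeSeconds = 1.0;
constexpr unsigned int kIterationsPerStep = 10;

void Step() {
    // Stand-in for the real work: each Step performs kIterationsPerStep iterations.
    volatile double x = 0;
    for (unsigned int i = 0; i < kIterationsPerStep; ++i) {
        x = x + i * 0.5;
    }
}
}  // namespace

int main() {
    using Clock = std::chrono::steady_clock;

    // Run Steps repetitively for approximately kCalibrationRunTimeSeconds.
    unsigned int numSteps = 0;
    auto start = Clock::now();
    double elapsed = 0;
    while (elapsed < kCalibrationRunTimeSeconds) {
        Step();
        ++numSteps;
        elapsed = std::chrono::duration<double>(Clock::now() - start).count();
    }

    // Report the accumulated time divided by numSteps * iterationsPerStep.
    double timePerIteration = elapsed / (numSteps * kIterationsPerStep);
    std::printf("steps: %u, time per iteration: %g s\n", numSteps, timePerIteration);
    return 0;
}
```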
`dawn_perf_tests` measures the following metrics:

 - `wall_time`: The time per iteration, including time waiting for the GPU between Steps in a Trial.
 - `cpu_time`: The time per iteration, not including time waiting for the GPU between Steps in a Trial.
 - `validation_time`: The time for CommandBuffer / RenderBundle validation.
 - `recording_time`: The time to convert Dawn commands to native commands.

Metrics are reported according to the format specified at [chromium]//build/recipes/performance_log_processor.py.
The test harness supports a `--trace-file=path/to/trace.json` argument where Dawn trace events can be dumped. The traces can be viewed in Chrome's about://tracing viewer.
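For example, assuming the binary was built into `out/Release` as above:

```
out/Release/dawn_perf_tests --trace-file=trace.json
```

The resulting `trace.json` can then be loaded into about://tracing.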
`//scripts/perf_test_runner.py` may be run to continuously run a test and report mean times and variances. Currently the script looks in the `out/Release` build directory and measures the `wall_time` metric (both hardcoded into the script). These should eventually become arguments.
Example usage:

```
scripts/perf_test_runner.py DrawCallPerf.Run/Vulkan__e_skip_validation
```
### BufferUploadPerf

Tests repetitively uploading data to the GPU using either `WriteBuffer` or `CreateBuffer` with `mappedAtCreation = true`.
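A rough sketch of the two upload paths being compared, written against the `webgpu_cpp.h` API. This is not the actual test code; `device`, `buffer`, `data`, and `size` are assumed to come from the caller, and the buffer usage flags are placeholders.

```
#include <cstring>

#include <webgpu/webgpu_cpp.h>

// Path 1: upload through Queue::WriteBuffer into an existing buffer.
void UploadViaWriteBuffer(const wgpu::Device& device,
                          const wgpu::Buffer& buffer,
                          const void* data,
                          size_t size) {
    device.GetQueue().WriteBuffer(buffer, 0, data, size);
}

// Path 2: create a new buffer with mappedAtCreation = true and memcpy into it.
wgpu::Buffer UploadViaMappedAtCreation(const wgpu::Device& device,
                                       const void* data,
                                       size_t size) {
    wgpu::BufferDescriptor desc;
    desc.size = size;
    desc.usage = wgpu::BufferUsage::CopySrc;  // placeholder usage for the sketch
    desc.mappedAtCreation = true;

    wgpu::Buffer buffer = device.CreateBuffer(&desc);
    std::memcpy(buffer.GetMappedRange(), data, size);
    buffer.Unmap();
    return buffer;
}
```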
### DrawCallPerf

`DrawCallPerf` tests drawing a simple triangle with many ways of encoding commands, binding, and uploading data to the GPU. The rationale for this is the following: