# Assembly Tests

The Benchmark library provides a number of functions whose primary purpose is to affect assembly generation, including `DoNotOptimize` and `ClobberMemory`. In addition, there are other functions, such as `KeepRunning`, for which generating good assembly is paramount.
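
As a point of reference, here is a minimal sketch of how these functions are typically used together in a benchmark (the benchmark itself, `BM_vector_push_back`, is illustrative rather than part of the library):

```c++
#include <benchmark/benchmark.h>

#include <vector>

// Illustrative benchmark: DoNotOptimize forces the compiler to materialize
// the value, and ClobberMemory forces pending writes to memory to complete
// before the loop iteration ends.
static void BM_vector_push_back(benchmark::State& state) {
  for (auto _ : state) {
    std::vector<int> v;
    v.reserve(1);
    benchmark::DoNotOptimize(v.data());  // prevent the allocation being optimized away
    v.push_back(42);
    benchmark::ClobberMemory();          // force the write to v to "happen"
  }
}
BENCHMARK(BM_vector_push_back);
BENCHMARK_MAIN();
```

The assembly tests verify that such functions compile down to the minimal expected instruction sequences.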

For these functions it's important to have tests that verify the correctness and quality of the implementation. This requires testing the code generated by the compiler.

This document describes how the Benchmark library tests compiler output, as well as how to properly write new tests.

## Anatomy of a Test

Writing a test has two steps:

* Write the code you want to generate assembly for.
* Add `// CHECK` lines to match against the verified assembly.

Example:


```c++
// CHECK-LABEL: test_add:
extern "C" int test_add() {
    extern int ExternInt;
    return ExternInt + 1;

    // CHECK: movl ExternInt(%rip), %eax
    // CHECK: addl $1, %eax
    // CHECK: ret
}
```

## LLVM FileCheck

LLVM's [FileCheck](https://llvm.org/docs/CommandGuide/FileCheck.html) is used to test the generated assembly against the `// CHECK` lines specified in the test's source file. Please see the linked documentation for information on how to write `CHECK` directives.

Tips and Tricks:

* Tests should match the minimal amount of output required to establish correctness. `CHECK` directives don't have to match on the exact next line after the previous match, so tests should omit checks for unimportant bits of assembly. (`CHECK-NEXT` can be used to ensure a match occurs exactly after the previous match.)

* The tests are compiled with `-O3 -g0`, so we're only testing the optimized output.

* The assembly output is further cleaned up using `tools/strip_asm.py`. This removes comments, assembler directives, and unused labels before the test is run.

* The generated and stripped assembly file for a test is output under `<build-directory>/test/<test-name>.s`.

* FileCheck supports using `CHECK` prefixes to specify lines that should only match in certain situations. The Benchmark tests use `CHECK-CLANG` and `CHECK-GNU` for lines that are only expected to match Clang's or GCC's output respectively; see the sketch after this list. Normal `CHECK` lines match against all compilers. (Note: `CHECK-NOT` and `CHECK-LABEL` are NOT prefixes. They are versions of non-prefixed `CHECK` lines.)

* Use `extern "C"` to disable name mangling for specific functions. This makes them easier to name in the `CHECK` lines.
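
For example, a test that tolerates Clang and GCC choosing different instruction sequences could use the compiler-specific prefixes like this (a hypothetical sketch; the exact instructions are illustrative, not taken from a real test):

```c++
// CHECK-LABEL: test_increment:
extern "C" int test_increment(int x) {
    return x + 1;

    // Hypothetical: suppose Clang emits a lea while GCC emits an add.
    // Each prefixed line is only checked against the matching compiler.
    // CHECK-CLANG: leal 1(%rdi), %eax
    // CHECK-GNU: addl $1, %eax
    // CHECK: ret
}
```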

## Problems Writing Portable Tests

Tests which check the code generated by a compiler are inherently non-portable. Different compilers and even different compiler versions may generate entirely different code. The Benchmark tests must tolerate this.

LLVM FileCheck provides a number of mechanisms to help write "more portable" tests, including matching using regular expressions, allowing the creation of named variables for later matching, and checking non-sequential matches.
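
For non-sequential matches in particular, FileCheck provides the `CHECK-DAG` directive, which allows a group of matches to occur in any order. A hypothetical sketch of where this helps, when a compiler is free to reorder independent stores:

```c++
extern int A, B;

// CHECK-LABEL: test_store_two:
extern "C" void test_store_two() {
    A = 1;
    B = 2;

    // The two stores are independent, so a compiler may emit them in
    // either order; CHECK-DAG accepts both orderings.
    // CHECK-DAG: movl $1, A(%rip)
    // CHECK-DAG: movl $2, B(%rip)
    // CHECK: ret
}
```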

### Capturing Variables

For example, say GCC stores a variable in a register but Clang stores it in memory. To write a test that tolerates both cases we "capture" the destination of the store, and then use the captured expression to write the remainder of the test.

```c++
// CHECK-LABEL: test_div_no_op_into_shr:
extern "C" int test_div_no_op_into_shr(int value) {
    int divisor = 2;
    benchmark::DoNotOptimize(divisor); // hide the value from the optimizer
    return value / divisor;

    // CHECK: movl $2, [[DEST:.*]]
    // CHECK: idivl [[DEST]]
    // CHECK: ret
}
```

### Using Regular Expressions to Match Differing Output

Tests often need to match assembly lines which may subtly differ between compilers or compiler versions. A common example of this is matching stack frame addresses. In this case regular expressions can be used to match the differing bits of output. For example:

```c++
int ExternInt;
struct Point { int x, y, z; };

// CHECK-LABEL: test_store_point:
extern "C" void test_store_point() {
    Point p{ExternInt, ExternInt, ExternInt};
    benchmark::DoNotOptimize(p);

    // CHECK: movl ExternInt(%rip), %eax
    // CHECK: movl %eax, -{{[0-9]+}}(%rsp)
    // CHECK: movl %eax, -{{[0-9]+}}(%rsp)
    // CHECK: movl %eax, -{{[0-9]+}}(%rsp)
    // CHECK: ret
}
```

## Current Requirements and Limitations

The tests require FileCheck to be installed somewhere on the `PATH` of the build machine; otherwise the tests will be disabled.

Additionally, as mentioned in the previous section, codegen tests are inherently non-portable. Currently the tests are limited to:

* x86_64 targets.
* Code compiled with GCC or Clang.

Further work could be done, at least on a limited basis, to extend the tests to other architectures and compilers (using `CHECK` prefixes).

Furthermore, the tests fail for builds which specify additional flags that modify code generation, including `--coverage` or `-fsanitize=`.