Revamp test-suite documentation
- Remove duplication: Both TestingGuide and TestSuiteMakefileGuide gave a
  similar overview of the test-suite.
- Present cmake/lit as the default/normal way of running the test-suite:
  - Move information about the cmake/lit test-suite into the new
    TestSuiteGuide.rst file. Mark the remaining information in
    TestSuiteMakefileGuide.rst as deprecated.
- General simplification and shortening of language.
- Remove paragraphs about tests known to fail, as everything should pass
  nowadays.
- Remove paragraph about the zlib requirement; it is not required anymore
  since we copied a zlib source snapshot into the test-suite.
- Remove paragraph about comparison with the "native compiler". Correctness
  is always checked against reference outputs nowadays.
- Change the cmake/lit quickstart section to recommend `pip` for installing
  lit and use `CMAKE_C_COMPILER` and a cache file in the example, as that is
  what most people will end up doing anyway. Also add a section about
  compare.py to the quickstart.
- Document the `Bitcode` and `MicroBenchmarks` directories.
- Add a section with commonly used cmake configuration options.
- Add a section about showing and comparing result files via compare.py.
- Add a section about using external benchmark suites.
- Add a section about using custom benchmark suites.
- Add a section about profile guided optimization.
- Add a section about cross-compilation and running on external devices.

Differential Revision: https://reviews.llvm.org/D51465

llvm-svn: 341260
@@ -609,8 +609,8 @@ A few notes about CMake Caches:

For more information about some of the advanced build configurations supported
via Cache files see :doc:`AdvancedBuilds`.

Executing the test suite
========================
Executing the Tests
===================

Testing is performed when the *check-all* target is built. For instance, if you are
using Makefiles, execute this command in the root of your build directory:

@@ -115,8 +115,9 @@ elimination and inlining), but you might lose the ability to modify the program
and call functions which were optimized out of the program, or inlined away
completely.

The :ref:`LLVM test suite <test-suite-quickstart>` provides a framework to test
optimizer's handling of debugging information. It can be run like this:
The :doc:`LLVM test-suite <TestSuiteMakefileGuide>` provides a framework to
test the optimizer's handling of debugging information. It can be run like
this:

.. code-block:: bash

docs/TestSuiteGuide.md (new file, 403 lines)
@@ -0,0 +1,403 @@
test-suite Guide
================

Quickstart
----------

1. The lit test runner is required to run the tests. You can either use one
   from an LLVM build:

   ```bash
   % <path to llvm build>/bin/llvm-lit --version
   lit 0.8.0dev
   ```

   An alternative is installing it as a python package in a python virtual
   environment:

   ```bash
   % mkdir venv
   % virtualenv -p python2.7 venv
   % . venv/bin/activate
   % pip install svn+http://llvm.org/svn/llvm-project/llvm/trunk/utils/lit
   % lit --version
   lit 0.8.0dev
   ```

2. Check out the `test-suite` module with:

   ```bash
   % svn co http://llvm.org/svn/llvm-project/test-suite/trunk test-suite
   ```

3. Create a build directory and use CMake to configure the suite. Use the
   `CMAKE_C_COMPILER` option to specify the compiler to test. Use a cache file
   to choose a typical build configuration:

   ```bash
   % mkdir test-suite-build
   % cd test-suite-build
   % cmake -DCMAKE_C_COMPILER=<path to llvm build>/bin/clang \
           -C../test-suite/cmake/caches/O3.cmake \
           ../test-suite
   ```

4. Build the benchmarks:

   ```text
   % make
   Scanning dependencies of target timeit-target
   [ 0%] Building C object tools/CMakeFiles/timeit-target.dir/timeit.c.o
   [ 0%] Linking C executable timeit-target
   ...
   ```

5. Run the tests with lit:

   ```text
   % llvm-lit -v -j 1 -o results.json .
   -- Testing: 474 tests, 1 threads --
   PASS: test-suite :: MultiSource/Applications/ALAC/decode/alacconvert-decode.test (1 of 474)
   ********** TEST 'test-suite :: MultiSource/Applications/ALAC/decode/alacconvert-decode.test' RESULTS **********
   compile_time: 0.2192
   exec_time: 0.0462
   hash: "59620e187c6ac38b36382685ccd2b63b"
   size: 83348
   **********
   PASS: test-suite :: MultiSource/Applications/ALAC/encode/alacconvert-encode.test (2 of 474)
   ...
   ```

6. Show and compare result files (optional):

   ```bash
   # Make sure pandas is installed. Prepend `sudo` if necessary.
   % pip install pandas
   # Show a single result file:
   % test-suite/utils/compare.py results.json
   # Compare two result files:
   % test-suite/utils/compare.py results_a.json results_b.json
   ```

Structure
---------

The test-suite contains benchmark and test programs. The programs come with
reference outputs so that their correctness can be checked. The suite comes
with tools to collect metrics such as benchmark runtime, compilation time and
code size.

The test-suite is divided into several directories:

- `SingleSource/`

  Contains test programs that are only a single source file in size. A
  subdirectory may contain several programs.

- `MultiSource/`

  Contains subdirectories which contain entire programs with multiple source
  files. Large benchmarks and whole applications go here.

- `MicroBenchmarks/`

  Programs using the [google-benchmark](https://github.com/google/benchmark)
  library. The programs define functions that are run multiple times until the
  measurement results are statistically significant.

- `External/`

  Contains descriptions and test data for code that cannot be directly
  distributed with the test-suite. The most prominent members of this
  directory are the SPEC CPU benchmark suites.
  See [External Suites](#external-suites).

- `Bitcode/`

  These tests are mostly written in LLVM bitcode.

- `CTMark/`

  Contains symbolic links to other benchmarks forming a representative sample
  for compilation performance measurements.

### Benchmarks

Every program can work as a correctness test. Some programs are unsuitable for
performance measurements. Setting the `TEST_SUITE_BENCHMARKING_ONLY` CMake
option to `ON` will disable them.

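For example, a benchmarking-only configuration can be requested at configure
time. This is a minimal sketch; the build directory setup is the same as in
the Quickstart:

```bash
% cmake -DTEST_SUITE_BENCHMARKING_ONLY=ON \
        -DCMAKE_C_COMPILER=<path to llvm build>/bin/clang \
        ../test-suite
```
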
Configuration
-------------

The test-suite has configuration options to customize building and running the
benchmarks. CMake can print a list of them:

```bash
% cd test-suite-build
# Print basic options:
% cmake -LH
# Print all options:
% cmake -LAH
```

### Common Configuration Options

A combined example invocation is shown after this list.

- `CMAKE_C_FLAGS`

  Specify extra flags to be passed to C compiler invocations. The flags are
  also passed to the C++ compiler and linker invocations. See
  [https://cmake.org/cmake/help/latest/variable/CMAKE_LANG_FLAGS.html](https://cmake.org/cmake/help/latest/variable/CMAKE_LANG_FLAGS.html)

- `CMAKE_C_COMPILER`

  Select the C compiler executable to be used. Note that the C++ compiler is
  inferred automatically, i.e. when specifying `path/to/clang` CMake will
  automatically use `path/to/clang++` as the C++ compiler. See
  [https://cmake.org/cmake/help/latest/variable/CMAKE_LANG_COMPILER.html](https://cmake.org/cmake/help/latest/variable/CMAKE_LANG_COMPILER.html)

- `CMAKE_BUILD_TYPE`

  Select a build type such as `OPTIMIZE` or `DEBUG`, which selects a set of
  predefined compiler flags. These flags are applied regardless of the
  `CMAKE_C_FLAGS` option and may be changed by modifying
  `CMAKE_C_FLAGS_OPTIMIZE` etc. See
  [https://cmake.org/cmake/help/latest/variable/CMAKE_BUILD_TYPE.html](https://cmake.org/cmake/help/latest/variable/CMAKE_BUILD_TYPE.html)

- `TEST_SUITE_RUN_UNDER`

  Prefix test invocations with the given tool. This is typically used to run
  cross-compiled tests within a simulator tool.

- `TEST_SUITE_BENCHMARKING_ONLY`

  Disable tests that are unsuitable for performance measurements. The disabled
  tests either run for a very short time or are dominated by I/O performance,
  making them unsuitable as compiler performance tests.

- `TEST_SUITE_SUBDIRS`

  Semicolon-separated list of directories to include. This can be used to only
  build parts of the test-suite or to include external suites. This option
  does not work reliably with deeper subdirectories as it skips intermediate
  `CMakeLists.txt` files which may be required.

- `TEST_SUITE_COLLECT_STATS`

  Collect internal LLVM statistics. Appends `-save-stats=obj` when invoking the
  compiler and makes the lit runner collect and merge the statistic files.

- `TEST_SUITE_RUN_BENCHMARKS`

  If this is set to `OFF` then lit will not actually run the tests but just
  collect build statistics like compile time and code size.

- `TEST_SUITE_USE_PERF`

  Use the `perf` tool for time measurement instead of the `timeit` tool that
  comes with the test-suite. The `perf` tool is usually available on Linux
  systems.

- `TEST_SUITE_SPEC2000_ROOT`, `TEST_SUITE_SPEC2006_ROOT`, `TEST_SUITE_SPEC2017_ROOT`, ...

  Specify installation directories of external benchmark suites. You can find
  more information about expected versions or usage in the README files in the
  `External` directory (such as `External/SPEC/README`).

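Several of the options above can be combined in a single configure invocation.
The following sketch is illustrative only; the compiler path, the extra flags
and the directory selection are placeholders, not part of the original
documentation:

```bash
% cmake -DCMAKE_C_COMPILER=/path/to/clang \
        -DCMAKE_C_FLAGS="-O3 -march=native" \
        -DTEST_SUITE_BENCHMARKING_ONLY=ON \
        -DTEST_SUITE_SUBDIRS="MicroBenchmarks;MultiSource" \
        -DTEST_SUITE_COLLECT_STATS=ON \
        ../test-suite
```
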
### Common CMake Flags

- `-GNinja`

  Generate build files for the ninja build tool.

- `-Ctest-suite/cmake/caches/<cachefile.cmake>`

  Use a CMake cache. The test-suite comes with several CMake caches which
  predefine common or tricky build configurations; see the example below.

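For instance, the two flags can be combined with the compiler selection from
the Quickstart. This is a sketch using the `O3.cmake` cache shipped with the
test-suite:

```bash
% cmake -GNinja \
        -C../test-suite/cmake/caches/O3.cmake \
        -DCMAKE_C_COMPILER=<path to llvm build>/bin/clang \
        ../test-suite
% ninja
```
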
Displaying and Analyzing Results
--------------------------------

The `compare.py` script displays and compares result files. A result file is
produced when invoking lit with the `-o filename.json` flag.

Example usage:

- Basic usage:

  ```text
  % test-suite/utils/compare.py baseline.json
  Warning: 'test-suite :: External/SPEC/CINT2006/403.gcc/403.gcc.test' has No metrics!
  Tests: 508
  Metric: exec_time

  Program                                      baseline

  INT2006/456.hmmer/456.hmmer                  1222.90
  INT2006/464.h264ref/464.h264ref               928.70
  ...
             baseline
  count   506.000000
  mean     20.563098
  std     111.423325
  min       0.003400
  25%       0.011200
  50%       0.339450
  75%       4.067200
  max    1222.896800
  ```

- Show compile_time or text segment size metrics:

  ```bash
  % test-suite/utils/compare.py -m compile_time baseline.json
  % test-suite/utils/compare.py -m size.__text baseline.json
  ```

- Compare two result files and filter short running tests:

  ```bash
  % test-suite/utils/compare.py --filter-short baseline.json experiment.json
  ...
  Program                                      baseline  experiment  diff

  SingleSour.../Benchmarks/Linpack/linpack-pc      5.16        4.30  -16.5%
  MultiSourc...erolling-dbl/LoopRerolling-dbl      7.01        7.86   12.2%
  SingleSour...UnitTests/Vectorizer/gcc-loops      3.89        3.54   -9.0%
  ...
  ```

- Merge multiple baseline and experiment result files by taking the minimum
  runtime for each test:

  ```bash
  % test-suite/utils/compare.py base0.json base1.json base2.json vs exp0.json exp1.json exp2.json
  ```

### Continuous Tracking with LNT

LNT is a set of client and server tools for continuously monitoring
performance. You can find more information at
[http://llvm.org/docs/lnt](http://llvm.org/docs/lnt). The official LNT instance
of the LLVM project is hosted at [http://lnt.llvm.org](http://lnt.llvm.org).

External Suites
---------------

External suites such as SPEC can be enabled by either

- placing (or linking) them into the `test-suite/test-suite-externals/xxx` directory (example: `test-suite/test-suite-externals/speccpu2000`)
- using a configuration option such as `-D TEST_SUITE_SPEC2000_ROOT=path/to/speccpu2000`

You can find further information in the respective README files such as
`test-suite/External/SPEC/README`.

For the SPEC benchmarks you can switch between the `test`, `train` and
`ref` input datasets via the `TEST_SUITE_RUN_TYPE` configuration option.
The `train` dataset is used by default.

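As an illustration, SPEC CPU 2000 could be made available in either of the two
ways. The installation path here is a placeholder, and the exact directory
name expected under `test-suite-externals` is documented in
`External/SPEC/README`:

```bash
# Option 1: link the installation into the externals directory:
% ln -s /software/SPEC/speccpu2000 test-suite/test-suite-externals/speccpu2000
# Option 2: point the configuration at the installation and select the
# `ref` dataset instead of the default `train` one:
% cmake -DTEST_SUITE_SPEC2000_ROOT=/software/SPEC/speccpu2000 \
        -DTEST_SUITE_RUN_TYPE=ref \
        ../test-suite
```
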
Custom Suites
-------------

You can build custom suites using the test-suite infrastructure. A custom suite
has a `CMakeLists.txt` file at the top directory. The `CMakeLists.txt` will be
picked up automatically if placed into a subdirectory of the test-suite or when
setting the `TEST_SUITE_SUBDIRS` variable:

```bash
% cmake -DTEST_SUITE_SUBDIRS=path/to/my/benchmark-suite ../test-suite
```

Profile Guided Optimization
---------------------------

Profile guided optimization requires compiling and running the benchmarks
twice. First, the benchmarks should be compiled with profile generation
instrumentation enabled and set up to use the training data. The lit runner
will merge the profile files using `llvm-profdata` so they can be used by the
second compilation run.

Example:
```bash
# Profile generation run:
% cmake -DTEST_SUITE_PROFILE_GENERATE=ON \
        -DTEST_SUITE_RUN_TYPE=train \
        ../test-suite
% make
% llvm-lit .
# Use the profile data for compilation and actual benchmark run:
% cmake -DTEST_SUITE_PROFILE_GENERATE=OFF \
        -DTEST_SUITE_PROFILE_USE=ON \
        -DTEST_SUITE_RUN_TYPE=ref \
        .
% make
% llvm-lit -o result.json .
```

The `TEST_SUITE_RUN_TYPE` setting only affects the SPEC benchmark suites.

Cross Compilation and External Devices
--------------------------------------

### Compilation

CMake allows cross compilation to a different target via toolchain files. More
information can be found here:

- [http://llvm.org/docs/lnt/tests.html#cross-compiling](http://llvm.org/docs/lnt/tests.html#cross-compiling)

- [https://cmake.org/cmake/help/latest/manual/cmake-toolchains.7.html](https://cmake.org/cmake/help/latest/manual/cmake-toolchains.7.html)

Cross compilation from macOS to iOS is possible with the
`test-suite/cmake/caches/target-*-iphoneos-internal.cmake` CMake cache
files; this requires an internal iOS SDK.

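As a sketch (all target details below are placeholders, not an officially
supported configuration), a toolchain file typically sets the target system,
compiler and sysroot, and is passed via `CMAKE_TOOLCHAIN_FILE`:

```bash
# Write a minimal toolchain file for a hypothetical AArch64 Linux target:
% cat > aarch64-linux.cmake <<'EOF'
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR aarch64)
set(CMAKE_C_COMPILER /path/to/clang)
set(CMAKE_C_COMPILER_TARGET aarch64-linux-gnu)
set(CMAKE_SYSROOT /path/to/aarch64/sysroot)
EOF
# Configure the test-suite with it:
% cmake -DCMAKE_TOOLCHAIN_FILE=$PWD/aarch64-linux.cmake ../test-suite
```
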
### Running

There are two ways to run the tests in a cross compilation setting:

- Via SSH connection to an external device: The `TEST_SUITE_REMOTE_HOST` option
  should be set to the SSH hostname. The executables and data files need to be
  transferred to the device after compilation. This is typically done via the
  `rsync` make target. After this, the lit runner can be used on the host
  machine. It will prefix the benchmark and verification command lines with an
  `ssh` command.

  Example:

  ```bash
  % cmake -G Ninja -D CMAKE_C_COMPILER=path/to/clang \
          -C ../test-suite/cmake/caches/target-arm64-iphoneos-internal.cmake \
          -D TEST_SUITE_REMOTE_HOST=mydevice \
          ../test-suite
  % ninja
  % ninja rsync
  % llvm-lit -j1 -o result.json .
  ```

- You can specify a simulator for the target machine with the
  `TEST_SUITE_RUN_UNDER` setting. The lit runner will prefix all benchmark
  invocations with it, as shown in the example below.

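For the second approach, a hypothetical configuration using the qemu user-mode
emulator might look like this; the emulator binary and library path are
placeholders for your target:

```bash
% cmake -DTEST_SUITE_RUN_UNDER="qemu-aarch64 -L /usr/aarch64-linux-gnu" \
        ../test-suite
% make
% llvm-lit -j1 -o results.json .
```
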
Running the test-suite via LNT
------------------------------

The LNT tool can run the test-suite. Use this when submitting test results to
an LNT instance. See
[http://llvm.org/docs/lnt/tests.html#llvm-cmake-test-suite](http://llvm.org/docs/lnt/tests.html#llvm-cmake-test-suite)
for details.

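A minimal invocation might look like the following sketch; the option names
are taken from the LNT documentation linked above, and all paths are
placeholders:

```bash
% lnt runtest test-suite \
      --sandbox /tmp/sandbox \
      --cc /path/to/clang \
      --cxx /path/to/clang++ \
      --test-suite /path/to/test-suite \
      --use-lit /path/to/llvm-lit
```
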
Running the test-suite via Makefiles (deprecated)
-------------------------------------------------

**Note**: The test-suite comes with a set of Makefiles that are considered
deprecated. They do not support newer testing modes like `Bitcode` or
`MicroBenchmarks` and are harder to use.

Old documentation is available in the
[test-suite Makefile Guide](TestSuiteMakefileGuide).

@@ -1,6 +1,6 @@
=====================
LLVM test-suite Guide
=====================
======================================
test-suite Makefile Guide (deprecated)
======================================

.. contents::
   :local:

@@ -8,154 +8,6 @@ LLVM test-suite Guide
Overview
========

This document describes the features of the Makefile-based LLVM
test-suite as well as the cmake based replacement. This way of interacting
with the test-suite is deprecated in favor of running the test-suite using LNT,
but may continue to prove useful for some users. See the Testing
Guide's :ref:`test-suite Quickstart <test-suite-quickstart>` section for more
information.

Test suite Structure
====================

The ``test-suite`` module contains a number of programs that can be
compiled with LLVM and executed. These programs are compiled using the
native compiler and various LLVM backends. The output from the program
compiled with the native compiler is assumed correct; the results from
the other programs are compared to the native program output and pass if
they match.

When executing tests, it is usually a good idea to start out with a
subset of the available tests or programs. This makes test run times
smaller at first and later on this is useful to investigate individual
test failures. To run some test only on a subset of programs, simply
change directory to the programs you want tested and run ``gmake``
there. Alternatively, you can run a different test using the ``TEST``
variable to change what tests or run on the selected programs (see below
for more info).

In addition for testing correctness, the ``test-suite`` directory also
performs timing tests of various LLVM optimizations. It also records
compilation times for the compilers and the JIT. This information can be
used to compare the effectiveness of LLVM's optimizations and code
generation.

``test-suite`` tests are divided into three types of tests: MultiSource,
SingleSource, and External.

- ``test-suite/SingleSource``

  The SingleSource directory contains test programs that are only a
  single source file in size. These are usually small benchmark
  programs or small programs that calculate a particular value. Several
  such programs are grouped together in each directory.

- ``test-suite/MultiSource``

  The MultiSource directory contains subdirectories which contain
  entire programs with multiple source files. Large benchmarks and
  whole applications go here.

- ``test-suite/External``

  The External directory contains Makefiles for building code that is
  external to (i.e., not distributed with) LLVM. The most prominent
  members of this directory are the SPEC 95 and SPEC 2000 benchmark
  suites. The ``External`` directory does not contain these actual
  tests, but only the Makefiles that know how to properly compile these
  programs from somewhere else. The presence and location of these
  external programs is configured by the test-suite ``configure``
  script.

Each tree is then subdivided into several categories, including
applications, benchmarks, regression tests, code that is strange
grammatically, etc. These organizations should be relatively self
explanatory.

Some tests are known to fail. Some are bugs that we have not fixed yet;
others are features that we haven't added yet (or may never add). In the
regression tests, the result for such tests will be XFAIL (eXpected
FAILure). In this way, you can tell the difference between an expected
and unexpected failure.

The tests in the test suite have no such feature at this time. If the
test passes, only warnings and other miscellaneous output will be
generated. If a test fails, a large <program> FAILED message will be
displayed. This will help you separate benign warnings from actual test
failures.

Running the test suite via CMake
================================

To run the test suite, you need to use the following steps:

#. The test suite uses the lit test runner to run the test-suite,
   you need to have lit installed first. Check out LLVM and install lit:

   .. code-block:: bash

      % svn co http://llvm.org/svn/llvm-project/llvm/trunk llvm
      % cd llvm/utils/lit
      % sudo python setup.py install # Or without sudo, install in virtual-env.
      running install
      running bdist_egg
      running egg_info
      writing lit.egg-info/PKG-INFO
      ...
      % lit --version
      lit 0.5.0dev

#. Check out the ``test-suite`` module with:

   .. code-block:: bash

      % svn co http://llvm.org/svn/llvm-project/test-suite/trunk test-suite

#. Use CMake to configure the test suite in a new directory. You cannot build
   the test suite in the source tree.

   .. code-block:: bash

      % mkdir test-suite-build
      % cd test-suite-build
      % cmake ../test-suite

#. Build the benchmarks, using the makefiles CMake generated.

   .. code-block:: bash

      % make
      Scanning dependencies of target timeit-target
      [ 0%] Building C object tools/CMakeFiles/timeit-target.dir/timeit.c.o
      [ 0%] Linking C executable timeit-target
      [ 0%] Built target timeit-target
      Scanning dependencies of target fpcmp-host
      [ 0%] [TEST_SUITE_HOST_CC] Building host executable fpcmp
      [ 0%] Built target fpcmp-host
      Scanning dependencies of target timeit-host
      [ 0%] [TEST_SUITE_HOST_CC] Building host executable timeit
      [ 0%] Built target timeit-host

#. Run the tests with lit:

   .. code-block:: bash

      % lit -v -j 1 . -o results.json
      -- Testing: 474 tests, 1 threads --
      PASS: test-suite :: MultiSource/Applications/ALAC/decode/alacconvert-decode.test (1 of 474)
      ********** TEST 'test-suite :: MultiSource/Applications/ALAC/decode/alacconvert-decode.test' RESULTS **********
      compile_time: 0.2192
      exec_time: 0.0462
      hash: "59620e187c6ac38b36382685ccd2b63b"
      size: 83348
      **********
      PASS: test-suite :: MultiSource/Applications/ALAC/encode/alacconvert-encode.test (2 of 474)

Running the test suite via Makefiles (deprecated)
=================================================

First, all tests are executed within the LLVM object directory tree.
They *are not* executed inside of the LLVM source tree. This is because
the test suite creates temporary files during execution.

@@ -208,7 +60,7 @@ you have the suite checked out and configured, you don't need to do it
again (unless the test code or configure script changes).

Configuring External Tests
--------------------------
==========================

In order to run the External tests in the ``test-suite`` module, you
must specify *--with-externals*. This must be done during the

@@ -237,8 +89,8 @@ names known to LLVM include:
Others are added from time to time, and can be determined from
``configure``.

Running different tests
-----------------------
Running Different Tests
=======================

In addition to the regular "whole program" tests, the ``test-suite``
module also provides a mechanism for compiling the programs in different

@@ -257,8 +109,8 @@ LLVM research group. They may still be valuable, however, as a guide to
writing your own TEST Makefile for any optimization or analysis passes
that you develop with LLVM.

Generating test output
----------------------
Generating Test Output
======================

There are a number of ways to run the tests and generate output. The
most simple one is simply running ``gmake`` with no arguments. This will

@@ -283,8 +135,8 @@ running with ``TEST=<type>``). The ``report`` also generate a file
called ``report.<type>.raw.out`` containing the output of the entire
test run.

Writing custom tests for the test suite
---------------------------------------
Writing Custom Tests for the test-suite
=======================================

Assuming you can run the test suite, (e.g.
"``gmake TEST=nightly report``" should work), it is really easy to run

@@ -8,6 +8,7 @@ LLVM Testing Infrastructure Guide
.. toctree::
   :hidden:

   TestSuiteGuide
   TestSuiteMakefileGuide

Overview

@@ -25,11 +26,7 @@ In order to use the LLVM testing infrastructure, you will need all of the
software required to build LLVM, as well as `Python <http://python.org>`_ 2.7 or
later.

If you intend to run the :ref:`test-suite <test-suite-overview>`, you will also
need a development version of zlib (zlib1g-dev is known to work on several Linux
distributions).

LLVM testing infrastructure organization
LLVM Testing Infrastructure Organization
========================================

The LLVM testing infrastructure contains two major categories of tests:

@@ -77,6 +74,8 @@ LLVM compiles, optimizes, and generates code.

The test-suite is located in the ``test-suite`` Subversion module.

See the :doc:`TestSuiteGuide` for details.

Debugging Information tests
---------------------------

@@ -96,9 +95,8 @@ regressions tests are in the main "llvm" module under the directory
``llvm/test`` (so you get these tests for free with the main LLVM tree).
Use ``make check-all`` to run the regression tests after building LLVM.

The more comprehensive test suite that includes whole programs in C and C++
is in the ``test-suite`` module. See :ref:`test-suite Quickstart
<test-suite-quickstart>` for more information on running these tests.
The ``test-suite`` module contains more comprehensive tests including whole C
and C++ programs. See the :doc:`TestSuiteGuide` for details.

Regression tests
----------------

@@ -585,65 +583,3 @@ the last RUN: line. This has two side effects:

(b) it speeds things up for really big test cases by avoiding
    interpretation of the remainder of the file.

.. _test-suite-overview:

``test-suite`` Overview
=======================

The ``test-suite`` module contains a number of programs that can be
compiled and executed. The ``test-suite`` includes reference outputs for
all of the programs, so that the output of the executed program can be
checked for correctness.

``test-suite`` tests are divided into three types of tests: MultiSource,
SingleSource, and External.

- ``test-suite/SingleSource``

  The SingleSource directory contains test programs that are only a
  single source file in size. These are usually small benchmark
  programs or small programs that calculate a particular value. Several
  such programs are grouped together in each directory.

- ``test-suite/MultiSource``

  The MultiSource directory contains subdirectories which contain
  entire programs with multiple source files. Large benchmarks and
  whole applications go here.

- ``test-suite/External``

  The External directory contains Makefiles for building code that is
  external to (i.e., not distributed with) LLVM. The most prominent
  members of this directory are the SPEC 95 and SPEC 2000 benchmark
  suites. The ``External`` directory does not contain these actual
  tests, but only the Makefiles that know how to properly compile these
  programs from somewhere else. When using ``LNT``, use the
  ``--test-externals`` option to include these tests in the results.

.. _test-suite-quickstart:

``test-suite`` Quickstart
-------------------------

The modern way of running the ``test-suite`` is focused on testing and
benchmarking complete compilers using the
`LNT <http://llvm.org/docs/lnt>`_ testing infrastructure.

For more information on using LNT to execute the ``test-suite``, please
see the `LNT Quickstart <http://llvm.org/docs/lnt/quickstart.html>`_
documentation.

``test-suite`` Makefiles
------------------------

Historically, the ``test-suite`` was executed using a complicated setup
of Makefiles. The LNT based approach above is recommended for most
users, but there are some testing scenarios which are not supported by
the LNT approach. In addition, LNT currently uses the Makefile setup
under the covers and so developers who are interested in how LNT works
under the hood may want to understand the Makefile based setup.

For more information on the ``test-suite`` Makefile setup, please see
the :doc:`Test Suite Makefile Guide <TestSuiteMakefileGuide>`.

@@ -145,6 +145,9 @@ representation.
:doc:`LLVM Testing Infrastructure Guide <TestingGuide>`
   A reference manual for using the LLVM testing infrastructure.

:doc:`TestSuiteGuide`
   Describes how to compile and run the test-suite benchmarks.

`How to build the C, C++, ObjC, and ObjC++ front end`__
   Instructions for building the clang front-end from source.