Make sure all print statements are compatible with Python 2 and Python3 using
the `from __future__ import print_function` statement.
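For reference, a minimal sketch of the compatible form (not taken from the patch itself):
```
from __future__ import print_function
import sys

# With the future import, print is a function on Python 2 as well, so the
# same statement works under both interpreters.
print("lit: discovered", 42, "tests", file=sys.stderr)
```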
Differential Revision: https://reviews.llvm.org/D56249
llvm-svn: 350307
Summary:
This is the only test that is still failing on Windows - or rather, it is expected to fail on the bots, but it passes on the new bot we're preparing, which causes a failure, so I'm going to disable it. Since the test has rarely, if ever, passed on the bots, this should have the same effect, and it will unblock the creation of the new bot.
Reviewers: asmith, delcypher, zturner
Subscribers: stella.stamenova, llvm-commits
Differential Revision: https://reviews.llvm.org/D51871
llvm-svn: 341856
Summary:
Right now this test is failing on the buildbots on Windows, but we have a very similar setup where the test passes. The test is meant to verify that specifying a timeout works correctly by running an infinite loop and having it time out - on the buildbot, the infinite loop doesn't actually execute. This change runs all of the tests in the set using an internal shell rather than an external shell. I expect this will make the test pass, which means that either the way the external shell is invoked or the external shell setup on the buildbots is not correct. Regardless of whether the test passes with this change, we'll need to undo this change and have a real fix.
@gkistanova was able to get logs from the buildbot to rule out a number of theories as to why this test is failing, but they didn't have enough information to confirm exactly what the issue is. The purpose of this change is to narrow it down, but if someone has a local repro and can aid in debugging, that would make it much speedier (and less prone to making the bots fail).
Reviewers: gkistanova, asmith, zturner, modocache, rnk, delcypher
Reviewed By: rnk
Subscribers: delcypher, llvm-commits, gkistanova
Differential Revision: https://reviews.llvm.org/D51326
llvm-svn: 340840
This test passes on Windows when using Python 3 but fails when using Python 2, so it needs more investigation before it can be enabled as the bots use Python 2.
llvm-svn: 339184
Summary:
In Python 2, 'unicode' is a distinct type from 'str', but in Python 3 'unicode' does not exist and all 'str' objects are Unicode strings. This change updates the logic in the test logging for lit to correctly process each of the types and, more importantly, to not just fail on Python 3.
This change also reverses the use of quotes in several of the cfg files: using '""' guarantees that the resulting path will work correctly on Windows, while "''" only works correctly sometimes. This also fixes one of the failing tests.
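For illustration, one way to normalize both kinds of string to text is a small helper like the following (a sketch, not lit's actual code; the helper name is made up):
```
def to_text(value, encoding='utf-8'):
    # On Python 2, 'str' is bytes and 'unicode' is text; on Python 3 there
    # is only 'str' (text). 'bytes' also covers Python 2 'str', so decoding
    # here normalizes both versions to a text string.
    if isinstance(value, bytes):
        return value.decode(encoding, 'replace')
    return value
```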
Reviewers: asmith, zturner
Subscribers: stella.stamenova, delcypher, llvm-commits
Differential Revision: https://reviews.llvm.org/D50397
llvm-svn: 339179
Summary:
The problem here is that on Windows double quotes are (usually) used for paths while single quotes are not. This is not generally a problem for the tests because the lit infrastructure tends to treat both the same. One (and possibly the only) exception is when some tests are run in an external shell, such as some of the shtest-format tests. In this case, on Windows, the path to python was not created correctly because it had single quotes, and the test failed.
This same test is already failing with Python 3, which is why our testing missed the new failure. This patch will take care of the immediate failure with Python 2 and I'll send a follow-up for the Python 3 failure.
Reviewers: asmith, zturner
Subscribers: delcypher, llvm-commits
Differential Revision: https://reviews.llvm.org/D50373
llvm-svn: 339091
Summary:
The issue with the python path is that the path to python on Windows can contain spaces. To make the tests always work, the path to python needs to be surrounded by quotes.
This change updates several configuration files which specify the path to python as a substitution and also removes quotes from existing tests.
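As an illustration, a lit.cfg-style sketch of the quoting (the substitution name and helper are examples, not necessarily what these configs use):
```
import sys

def add_python_substitution(config):
    # 'config' is the object lit hands to a lit.cfg; quoting the path lets
    # e.g. "C:\Program Files\Python27\python.exe" survive word splitting.
    config.substitutions.append(('%{python}', '"%s"' % sys.executable))
```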
Reviewers: asmith, zturner, alexshap, jakehehrlich
Reviewed By: zturner, alexshap, jakehehrlich
Subscribers: mehdi_amini, nemanjai, eraman, kbarton, jakehehrlich, steven_wu, dexonsmith, stella.stamenova, delcypher, llvm-commits
Differential Revision: https://reviews.llvm.org/D50206
llvm-svn: 339073
These two tests are operating on the same test suite, which causes
them to be racy about writing temporary files and can cause spurious
failures. Merge them into one test to avoid the issue.
llvm-svn: 337718
The test was failing on Windows machines which had bash.exe on PATH (but
not in the so called lit tools dir, containing cmp.exe, grep.exe etc.).
The problem was that the outer lit invocation would load LLVMConfig
from utils/lit/lit/llvm/config.py, which looks up the tools path with
getToolsPath(). That has a surprising side effect of also setting
bashPath, in our case setting it to empty.
The outer lit invocation would thus configure the pdbg0 and pdbg1
substitutions based on not running with bash.
But the inner lit invocation would not load LLVMConfig, so bash
would be found on PATH, that would be used as external shell,
and so the output wouldn't match pdbg0 and pdbg1.
It seems weird to me that getBashPath() will return different results
depending on whether getToolsPath() has been called before, but I
also don't know how to fix it properly.
This commit just relaxes the test case, because there doesn't seem
to be much point in testing for the exact syntax of the run file
as long as it works.
(See https://crbug.com/850023)
llvm-svn: 334100
This fixes projects/compiler-rt/test/fuzzer/sigusr.test, which was
broken by r333614. The trouble was that "&&" changes the command for
which "$!" gives the pid.
llvm-svn: 333620
(Relands r333584, reverted in 333592.)
When debugging test failures with -vv (or -v in the case of the
internal shell), this makes it easier to locate the RUN line that
failed. For example, clang's test/Driver/linux-ld.c has 892 total RUN
lines, and clang's test/Driver/arm-cortex-cpus.c has 424 RUN lines
after concatenation for line continuations.
When reading the generated shell script, this also makes it easier to
locate the RUN line that produced each command.
To support reporting RUN line numbers in the case of the internal
shell, this patch extends the internal shell to support the null
command, ":", except pipelines are not supported.
To support reporting RUN line numbers in the case of windows cmd.exe
as the external shell, this patch extends -vv to set "echo on" instead
of "echo off" in bat files. (Support for windows cmd.exe as a lit
external shell will likely be dropped later, but I found out too
late.)
Reviewed By: delcypher, asmith, stella.stamenova, jmorse, lebedev.ri, rnk
Differential Revision: https://reviews.llvm.org/D44598
llvm-svn: 333614
(Relands r330755 (reverted in r330848) with fix for PR37239.)
When debugging test failures with -vv (or -v in the case of the
internal shell), this makes it easier to locate the RUN line that
failed. For example, clang's test/Driver/linux-ld.c has 892 total RUN
lines, and clang's test/Driver/arm-cortex-cpus.c has 424 RUN lines
after concatenation for line continuations.
When reading the generated shell script, this also makes it easier to
locate the RUN line that produced each command.
To support reporting RUN line numbers in the case of the internal
shell, this patch extends the internal shell to support the null
command, ":", except pipelines are not supported.
To support reporting RUN line numbers in the case of windows cmd.exe
as the external shell, this patch extends -vv to set "echo on" instead
of "echo off" in bat files. (Support for windows cmd.exe as a lit
external shell will likely be dropped later, but I found out too
late.)
Reviewed By: delcypher, asmith, stella.stamenova, jmorse, lebedev.ri, rnk
Differential Revision: https://reviews.llvm.org/D44598
llvm-svn: 333584
We were making malformed XML on tests with ' in the name. Switch to
using saxutils to set all of our attributes, so it can handle quotes
etc correctly.
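For example, attribute values can be produced with `quoteattr`, which handles quotes and ampersands (illustrative only, not lit's exact writer):
```
from xml.sax import saxutils

test_name = "shtest-shell :: check-with-'quote'.txt"
# quoteattr() chooses a safe quoting style and escapes special characters
# so the attribute value is always well-formed XML.
print('<testcase name=%s time="0.01"/>' % saxutils.quoteattr(test_name))
```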
llvm-svn: 333249
larger timeout value. This really isn't very good because it will
still be susceptible to machine performance.
While we are here, also fix a bug in the validation of
`maxIndividualTestTime`: previously it was not checked whether the
value was an int.
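A sketch of the kind of check described (the helper name is made up; `lit_config.fatal` is the usual way a lit config aborts):
```
def set_max_individual_test_time(lit_config, value):
    # Reject non-integer and negative values up front instead of failing
    # later with a confusing error.
    if not isinstance(value, int):
        lit_config.fatal('maxIndividualTestTime must be an integer')
    if value < 0:
        lit_config.fatal('maxIndividualTestTime must be >= 0')
    lit_config.maxIndividualTestTime = value
```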
rdar://problem/40221572
llvm-svn: 332987
The program used to be used in `quick_then_slow.py` but that was
removed in r328702. The tests always run `slow.py` on its own but
this doesn't really test additional code so we'll just drop running
`slow.py` so the tests run faster.
rdar://problem/40221572
llvm-svn: 332986
If the system is under heavy load 1 second might not be long enough
for it to produce output which could lead to spurious test failures.
What matters is that the right test cases reach a timeout.
rdar://problem/40221572
llvm-svn: 332985
Summary:
This sequence ends the CDATA block so any characters after that are no
longer escaped. This can be fixed by replacing "]]>" with "]]]]><![CDATA[>".
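In other words (a minimal illustration of the replacement):
```
def cdata(text):
    # "]]>" inside the payload would terminate the CDATA section early, so
    # split the sequence across two CDATA sections.
    return '<![CDATA[%s]]>' % text.replace(']]>', ']]]]><![CDATA[>')

print(cdata('stdout containing ]]> in the middle'))
```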
Reviewers: cmatthews
Reviewed By: cmatthews
Differential Revision: https://reviews.llvm.org/D46886
llvm-svn: 332440
We were reporting "Unsupported" tests in xunit as passes; however, since
they are not run, it makes more sense to mark them as skipped. The JUnit
XML standard has support for that, so let's use it.
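The resulting shape is roughly the following (hand-written example, not lit's exact output):
```
# An unsupported test becomes a <testcase> containing a <skipped/> element
# instead of being counted as a plain pass.
xml = (
    '<testcase classname="lit.example" name="unsupported.txt" time="0.0">'
    '<skipped message="Skipping because of configuration."/>'
    '</testcase>'
)
print(xml)
```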
llvm-svn: 332065
Lit creates malformed XML when the test case has an & in the name.
Escape those correctly.
This also adds a test case to which I will add other nasty encoding issues in some follow-up commits.
llvm-svn: 331942
When debugging test failures with -vv (or -v in the case of the
internal shell), this makes it easier to locate the RUN line that
failed. For example, clang's test/Driver/linux-ld.c has 892 total RUN
lines, and clang's test/Driver/arm-cortex-cpus.c has 424 RUN lines
after concatenation for line continuations.
When reading the generated shell script, this also makes it easier to
locate the RUN line that produced each command.
To support reporting RUN line numbers in the case of the internal
shell, this patch extends the internal shell to support the null
command, ":", except pipelines are not supported.
Reviewed By: asmith, delcypher
Differential Revision: https://reviews.llvm.org/D44598
llvm-svn: 330755
`shtest-xunit-output.py` test.
Although there is no `-` file, Jeremy Morse has reported that it
causes problems in their setup because lit tries to find it and ends up
loading an out-of-tree lit configuration file.
llvm-svn: 330728
XML printer.
A test has been added that tries to comprehensively test emitting
XUnit XML output for shell tests.
Differential Revision: https://reviews.llvm.org/D45567
llvm-svn: 330409
This reverts commit 771829b640a5494ab65c810dd6b4330522bf3a33 (r328598)
Hopefully the test will now pass on the bots.
rdar://problem/38774530
llvm-svn: 328703
The `shtest-timeout.py` test was failing intermittently. It looks like
the issue is that on a resource-constrained system lit is unable to run
`quick_then_slow.py` twice and print out the messages the test expects
within the one-second timeout.
The underlying issue is that the test depends on the performance of
the host machine in a rather fragile way. This is due to hardcoding
timeout values and assuming that the host machine is able to
perform a certain amount of work within the hardcoded timeout values.
We could increase the timeout values but that doesn't really fix the
underlying issue. Instead this patch removes one of fragile assumptions
in the hope that this will be enough to fix the bots.
There are other fragile assumptions in this test (e.g. that `quick.py` can be
executed in less than 1 second). If the bots continue to fail, we'll have
to revisit this.
rdar://problem/38774530
llvm-svn: 328702
Summary:
This reverts commit r328596.
Checking if the arguments are strings before testing if they contain "/dev/null".
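Roughly the guard being described (a sketch, not the exact diff):
```
def mentions_dev_null(args):
    # Arguments are not guaranteed to be strings, so check the type before
    # doing a substring test for "/dev/null".
    return any(isinstance(arg, str) and '/dev/null' in arg for arg in args)
```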
Reviewers: rnk
Reviewed By: rnk
Subscribers: delcypher, llvm-commits
Differential Revision: https://reviews.llvm.org/D44914
llvm-svn: 328603
Summary:
These changes allow a Result object to have nested Result objects in
order to support microbenchmarks. Currently lit is restricted to reporting one
result object for one test; this change provides support for tests that want to
report individual timings for individual kernels.
This revision is the result of the discussions in
https://reviews.llvm.org/D32272#794759,
https://reviews.llvm.org/D37421#f8003b27 and https://reviews.llvm.org/D38496.
It is a separation of the changes purposed in https://reviews.llvm.org/D40077.
This change will enable adding the LCALS (Livermore Compiler Analysis Loop Suite)
collection of loop kernels to the LLVM test suite using the Google Benchmark
library (https://reviews.llvm.org/D43319), with tracking of individual kernel
timings.
Previously, microbenchmarks had been handled by using macros to section groups
of microbenchmarks together and build many executables while still getting a
grouped timing (MultiSource/TSVC). Recently the google benchmark library was
added to the test suite and utilized with a litsupport plugin. However, the
one-test-one-result limitation restricted its use to passing a runtime option
to run only one microbenchmark, with several hand-written tests
(MicroBenchmarks/XRay) running the same executable many times. I will update
the litsupport plugin to utilize the new functionality
(https://reviews.llvm.org/D43316).
These changes allow lit to report micro test results if desired, in order to get
many precise timing results from a single run of one test executable.
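Conceptually, a top-level result carries nested per-kernel results, something like this purely illustrative sketch (not lit's API; all names are made up):
```
class MicroResult(object):
    """Illustrative container for one kernel's timing."""
    def __init__(self, name, code, elapsed):
        self.name, self.code, self.elapsed = name, code, elapsed

# One lit test, one top-level result, many nested kernel timings.
top_level = {
    'code': 'PASS',
    'elapsed': 1.87,
    'micro_results': [
        MicroResult('LCALS/pressure_calc', 'PASS', 0.021),
        MicroResult('LCALS/energy_calc', 'PASS', 0.034),
    ],
}
for micro in top_level['micro_results']:
    print('%s: %s in %.3fs' % (micro.name, micro.code, micro.elapsed))
```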
Reviewers: MatzeB, hfinkel, rengolin, delcypher
Differential Revision: https://reviews.llvm.org/D43314
llvm-svn: 327422
Summary:
That would allow recursively comparing directories in tests using
"diff -r" on Windows, in a similar way as can be done on Linux or Mac.
Reviewers: zturner, morehouse, vsk
Reviewed By: zturner
Subscribers: kcc, llvm-commits
Differential Revision: https://reviews.llvm.org/D41776
llvm-svn: 322102
"No such file or directory: C:\\...\\tests\\Output\\shared-output.py.tmp/Output/Shared/SHARED.tmp"
And yet other forward-slashes don't seem to be causing the same
problem. I'll see if I can get ahold of a Windows machine to poke at
this directly later.
llvm-svn: 315792
This refers to a temporary path that can be shared across all tests,
identified by a particular label. This can be used for things like
caches.
At the moment, the character set for the LABEL is limited to C
identifier characters, plus '-', '+', '=', and '.'. This is the same
set of characters currently allowed in REQUIRES clause identifiers.
llvm-svn: 315697
Fix llvm_tools_dir attribute access not to fail when the variable is not
present. This directory is not really necessary to run lit tests,
and the code already accounts for it being None.
The reference was added in r313407, and it breaks the stand-alone lit
package in Gentoo.
Differential Revision: https://reviews.llvm.org/D38442
llvm-svn: 314620
This has gone back and forth, but it seems this is necessary
after all. realpath is not sufficient because if you have a
file named 'C:\foo.txt', then both realpath('c:\foo.txt') and
realpath('C:\foo.txt') return the string that was passed to them
exactly as is, meaning the case of the drive letter won't match.
The problem before was not that we were normalizing the case of
items going into the config map, but rather that we were
normalizing the case of something we needed to print. The value
that is used to key on the config map should never be printed.
llvm-svn: 313918
This makes all paths lowercase on Windows, which seemed like a
good idea at the time, but it means that tests can't properly
use FileCheck to match expected path names.
llvm-svn: 313889
Config map is not exposed through the command line, so testing this
is somewhat tricky. But basically we need a test that if a custom
driver builds a config map and passes it to main, it gets respected.
A config map allows config files in the source tree to be mapped
to alternate config files in the build tree. This particular test
works by having two config files in separate directories and
setting up a config map that redirects A/lit.site.cfg
to B/altconfig. Then, we print a message in A/lit.site.cfg
and B/altconfig and check that we do see the output from B
but don't see the output from A. Additionally we test that
the test suite specified by A's config map is properly discovered.
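A hedged sketch of such a custom driver, modeled on the generated llvm-lit wrapper (the paths, the key normalization, and the exact entry point are assumptions, not verified against this revision):
```
import os

config_map = {}

def map_config(source_path, site_config):
    # Key on a canonical form of the in-tree config path and point it at
    # the alternate config in the build tree.
    config_map[os.path.normcase(os.path.realpath(source_path))] = site_config

# Redirect A/lit.site.cfg to B/altconfig, as in the test described above.
map_config('/src/A/lit.site.cfg', '/build/B/altconfig')

builtin_parameters = {'config_map': config_map}

if __name__ == '__main__':
    from lit.main import main
    main(builtin_parameters)
```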
Differential Revision: https://reviews.llvm.org/D38105
llvm-svn: 313887
Many editors and Python-related diagnostics tools such as
debuggers break or fail in mysterious ways when python files
don't end in .py. This is especially true on Windows, but
still exists on other platforms. I don't want to be too heavy-handed
in changing everything across the board, but I do want
to at least *allow* lit configs to have .py extensions. This
patch makes the discovery process first look for a config file
with a .py extension, and if one is not found, then looks for
a config file using the old method. So for existing users, there
should be no functional change.
Differential Revision: https://reviews.llvm.org/D37838
llvm-svn: 313849
A few tests were manually constructing a LitConfig object, since
I added a new argument to it this was triggering some failures
I didn't detect. `ninja check-lit` passes now.
llvm-svn: 313461
This is a resubmission of r313270. It broke standalone builds of
compiler-rt because we were not correctly generating the llvm-lit
script in the standalone build directory.
The fixes incorporated here attempt to find llvm/utils/llvm-lit
from the source tree returned by llvm-config. If present, it
will generate llvm-lit into the output directory. Regardless,
the user can specify -DLLVM_EXTERNAL_LIT to point to a specific
lit.py on their file system. This supports the use case of
someone installing lit via a package manager. If it cannot find
a source tree, and -DLLVM_EXTERNAL_LIT is either unspecified or
invalid, then we print a warning that tests will not be able
to run.
Differential Revision: https://reviews.llvm.org/D37756
llvm-svn: 313407
This patch is still breaking several multi-stage compiler-rt bots.
I already know what the fix is, but I want to get the bots green
for now and then try re-applying in the morning.
llvm-svn: 313335
This patch simplifies LLVM's lit infrastructure by enforcing an ordering
that a site config is always run before a source-tree config.
A significant amount of the complexity from lit config files arises from
the fact that inside of a source-tree config file, we don't yet know if
the site config has been run. However it is *always* required to run
a site config first, because it passes various variables down through
CMake that the main config depends on. As a result, every config
file has to do a bunch of magic to try to reverse-engineer the location
of the site config file if it detects (heuristically) that the site
config file has not yet been run.
This patch solves the problem by emitting a mapping from source tree
config file to binary tree site config file in llvm-lit.py. Then, during
discovery when we find a config file, we check to see if we have a
target mapping for it, and if so we use that instead.
This mechanism is generic enough that it does not affect external users
of lit. They will just not have a config mapping defined, and everything
will work as normal.
On the other hand, for us it allows us to make many simplifications:
* We are guaranteed that a site config will be executed first
* Inside of a main config, we no longer have to assume that attributes
might not be present and use getattr everywhere.
* We no longer have to pass parameters such as --param llvm_site_config=<path>
on the command line.
* It is future-proof, meaning you don't have to edit llvm-lit.in to add
support for new projects.
* All of the duplicated logic of trying various fallback mechanisms of
finding a site config from the main config are now gone.
One potentially noteworthy thing that was required to implement this
change is that whereas the ninja check targets previously used the first
method to spawn lit, they now use the second. In particular, you can no
longer run lit.py against the source tree while specifying the various
`foo_site_config=<path>` parameters. Instead, you need to run
llvm-lit.py.
Differential Revision: https://reviews.llvm.org/D37756
llvm-svn: 313270
It was marked as unsupported on Windows in r311230 because on some Win10
machines it failed or caused a hang. The problem was that on these machines
the system bash (C:\Windows\System32\bash.exe) was used, which requires paths
to be passed like '/mnt/c/path/to/my/script' instead of 'C:\path\to\my\script'.
TODO: we should make lit detect whether system bash is used instead of msys and
set the appropriate path format.
llvm-svn: 311558
This is an updated version of https://reviews.llvm.org/D22144 by @jlpeyton.
The patch was accepted but not landed.
This is useful functionality and I would like to use this to enable lit tests for environment variable behaviour.
Differential Revision: https://reviews.llvm.org/D36403
llvm-svn: 311180
Multi-configuration CMake generators such as those for Visual Studio or Xcode do not
specify a build config at configure time, but let the user choose at build
time. In these cases binaries go into build/${Configuration}/bin rather than
build/bin. Prior to this commit, check-lit would fail when using multi-configuration
generators as it did not know how to resolve ${Configuration} in order
to find tools such as FileCheck. This commit teaches it to resolve
llvm_tools_dir within lit using the value specified with --param
build_mode.
Differential Revision: https://reviews.llvm.org/D36263
llvm-svn: 309967
Summary:
This is an alternative solution to running the lit test suite on bots
without polluting the source directory. Each input test suite gets an
auto-generated site config in the build directory that points back to
the test input source directory.
This adds some CMake complexity, but now we don't need to remove and
re-copy the test input directory before every test.
Reviewers: delcypher, modocache
Subscribers: mgorny, llvm-commits
Differential Revision: https://reviews.llvm.org/D36026
llvm-svn: 309602
Traceback (most recent call last):
File "llvm/utils/lit/tests/Inputs/shtest-format/external_shell/write-bad-encoding.py", line 5, in <module>
sys.stdout.write(b"a line with bad encoding: \xc2.")
On Python 3, sys.stdout.write doesn't accept bytes, but sys.stdout.buffer.write does.
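The fix amounts to this pattern (shown here as a standalone sketch):
```
import sys

payload = b"a line with bad encoding: \xc2."
if hasattr(sys.stdout, 'buffer'):
    # Python 3: write raw bytes through the underlying binary buffer.
    sys.stdout.buffer.write(payload)
else:
    # Python 2: sys.stdout accepts bytes directly.
    sys.stdout.write(payload)
```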
llvm-svn: 309473
When using win32 cmd.exe, turn off command echoing at the beginning of
the script (@echo off).
Replace a bash shell script with a python script for the
fail_with_bad_encoding test.
llvm-svn: 309399
Summary:
The technique of directly calling subprocess.Popen on a python script
doesn't work on Windows. The executable path of the command must refer
to a valid win32 executable.
Instead, rename all the python scripts masquerading as gtest executables
to have .py extensions, so we can easily detect them and call the python
executable for them. Do this on Linux as well as Windows for
consistency.
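Illustrative sketch of the dispatch (not the exact change; the gtest flag and paths are just examples):
```
import sys

def gtest_command(path, *gtest_args):
    # Scripts renamed to *.py cannot be exec'd directly on Windows, so run
    # them through the Python interpreter; real binaries are run as-is.
    if path.endswith('.py'):
        return [sys.executable, path] + list(gtest_args)
    return [path] + list(gtest_args)

print(gtest_command('DummySubDir/OneTest.py', '--gtest_list_tests'))
```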
The test suite directory names also come out in lower-case on Windows.
We can consider removing that in a later patch. This change just updates
the FileCheck lines to match on Windows.
Fixes PR33933
Reviewers: modocache, mgorny
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D35909
llvm-svn: 309347
Summary:
Depends on https://reviews.llvm.org/D35879.
This reverts rL257268, which in turn was a revert of rL257221.
https://reviews.llvm.org/D35879 marks the tests in the lit test suite
that fail on Windows as XFAIL, which should allow these tests to pass
on Windows-based buildbots.
Reviewers: delcypher, beanz, mgorny, jroelofs, rnk
Reviewed By: mgorny
Subscribers: rnk, ddunbar, george.karpenkov, llvm-commits
Differential Revision: https://reviews.llvm.org/D35880
llvm-svn: 309310
Summary:
An expectation in `utils/lit/tests/Inputs/shtest-shell/redirects.txt`
requires that a string printed to stdout is seen first, and then a
string printed to stderr. Add `flush()` calls to ensure that stdout is
printed before stderr, as expected.
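The pattern, in isolation:
```
from __future__ import print_function
import sys

print("a line on stdout")
# Without an explicit flush, stdout may be buffered and appear after the
# stderr line when both streams go to the same pipe.
sys.stdout.flush()
print("a line on stderr", file=sys.stderr)
sys.stderr.flush()
```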
Reviewers: rnk, mgorny, jroelofs
Reviewed By: rnk
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D35947
llvm-svn: 309292
Rewrite the write-to-stderr.sh and write-to-stdout-and-stderr.sh shell
scripts as python scripts and call python on them.
Fixes PR33940
llvm-svn: 309200
This passes locally for me, which fails the overall lit test suite. I
can't debug a passing test, but I will try to help debug the test when
we get some failing logs.
llvm-svn: 309190
Summary:
rL257221 attempted to run lit's own test suite continuously, but that
commit was reverted because lit's test suite does not pass on Windows.
Because lit's tests do not run continuously, they often regress.
In order to un-revert rL257221, mark lit tests that fail as XFAIL for
Windows platforms.
Test Plan:
On a Windows development environment, follow the instructions in
utils/lit/README.txt to run lit's test suite:
```
utils/lit/lit.py \
--path /path/to/your/llvm/build/bin \
utils/lit/tests
```
Verify that the test suite is run and a successful exit code is
returned.
Reviewers: mgorny, rnk, delcypher, beanz
Reviewed By: rnk
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D35879
llvm-svn: 309123
This is especially useful when lit is invoked indirectly by the build
system, and additional arguments can not be easily specified.
Differential Revision: https://reviews.llvm.org/D35091
llvm-svn: 307339
This is necessary to pass the lit test suite at llvm/utils/lit/tests.
There are some pre-existing failures here, but now switching to pools
doesn't regress any tests.
I had to change test-data/lit.cfg to import DummyConfig from a module to
fix pickling problems, but I think it'll be OK if we require test
formats to be written in real .py modules outside lit.cfg files.
I also discovered that in some circumstances AsyncResult.wait() will not
raise KeyboardInterrupt in a timely manner, but you can pass a non-zero
timeout to work around this. This makes threading.Condition.wait use a
polling loop that runs through the interpreter, so it's capable of
asynchronously raising KeyboardInterrupt.
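For illustration, the workaround looks roughly like this (a standalone sketch, not lit's code):
```
import multiprocessing

def work(item):
    return item * item

if __name__ == '__main__':
    pool = multiprocessing.Pool(2)
    async_result = pool.map_async(work, range(8))
    # A finite (if generous) timeout makes wait() poll inside the
    # interpreter, so Ctrl-C is delivered promptly instead of blocking.
    async_result.wait(3600)
    print(async_result.get())
    pool.close()
    pool.join()
```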
llvm-svn: 299605
Summary:
`assertItemsEqual` went away in Python 3. Since the lists
are ordered, comparing them directly should work just
as well.
Patch by @jbergstroem (Johan Bergström).
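For example, an order-insensitive comparison that works on both versions (an illustrative test, not the one in lit):
```
import unittest

class ExampleTest(unittest.TestCase):
    def test_discovered_names(self):
        found = ['suite :: b.txt', 'suite :: a.txt']
        expected = ['suite :: a.txt', 'suite :: b.txt']
        # assertItemsEqual is gone in Python 3; comparing sorted lists is a
        # portable order-insensitive equivalent.
        self.assertEqual(sorted(found), sorted(expected))

if __name__ == '__main__':
    unittest.main()
```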
Reviewers: modocache, gparker42
Reviewed By: modocache
Differential Revision: https://reviews.llvm.org/D31229
llvm-svn: 298479
and UNSUPPORTED"
After r292904 llvm-lit fails to emit the test results in the XML format for
Apple's internal buildbots.
rdar://30164800
llvm-svn: 292942
A `lit` condition line is now a comma-separated list of boolean expressions.
Comma-separated expressions act as if each expression were on its own
condition line:
For REQUIRES, if every expression is true then the test will run.
For UNSUPPORTED, if every expression is false then the test will run.
For XFAIL, if every expression is false then the test is expected to succeed.
As a special case "XFAIL: *" expects the test to fail.
Examples:
# Test is expected to fail on 64-bit Apple simulators and pass everywhere else
XFAIL: x86_64 && apple && !macosx
# Test is unsupported on Windows and on non-Ubuntu Linux
# and supported everywhere else
UNSUPPORTED: linux && !ubuntu, system-windows
Syntax:
* '&&', '||', '!', '(', ')'. 'true' is true. 'false' is false.
* Each test feature is a true identifier.
* Substrings of the target triple are true identifiers for UNSUPPORTED
and XFAIL, but not for REQUIRES. (This matches the current behavior.)
* All other identifiers are false.
* Identifiers are [-+=._a-zA-Z0-9]+
Differential Revision: https://reviews.llvm.org/D18185
llvm-svn: 292904
A `lit` condition line is now a comma-separated list of boolean expressions.
Comma-separated expressions act as if each expression were on its own
condition line:
For REQUIRES, if every expression is true then the test will run.
For UNSUPPORTED, if every expression is false then the test will run.
For XFAIL, if every expression is false then the test is expected to succeed.
As a special case "XFAIL: *" expects the test to fail.
Examples:
# Test is expected to fail on 64-bit Apple simulators and pass everywhere else
XFAIL: x86_64 && apple && !macosx
# Test is unsupported on Windows and on non-Ubuntu Linux
# and supported everywhere else
UNSUPPORTED: linux && !ubuntu, system-windows
Syntax:
* '&&', '||', '!', '(', ')'. 'true' is true. 'false' is false.
* Each test feature is a true identifier.
* Substrings of the target triple are true identifiers for UNSUPPORTED
and XFAIL, but not for REQUIRES. (This matches the current behavior.)
* All other identifiers are false.
* Identifiers are [-+=._a-zA-Z0-9]+
Differential Revision: https://reviews.llvm.org/D18185
llvm-svn: 292896
Summary:
This change equips lit.py with two new options, --num-shards=M and
--run-shard=N (set by default from env vars LIT_NUM_SHARDS and LIT_RUN_SHARD).
The options must be used together, and N must be in 1..M.
Together these options affect only test selection: they partition the testsuite
into M equal-sized "shards", then select only the Nth shard. They can be used
in a cluster of test machines to achieve a very crude (static) form of
parallelism, with minimal configuration work.
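One simple way to realize such a partition (a sketch; lit's exact selection scheme may differ):
```
def select_shard(tests, run_shard, num_shards):
    # Deterministically keep every num_shards-th test, offset by the
    # 1-based shard index, so the M shards cover the suite exactly once.
    assert 1 <= run_shard <= num_shards
    return [t for i, t in enumerate(tests) if i % num_shards == run_shard - 1]

# Example: shard 2 of 3 over twelve tests.
print(select_shard(['test%d' % i for i in range(12)], run_shard=2, num_shards=3))
```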
Reviewers: modocache, ddunbar
Reviewed By: ddunbar
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D28789
llvm-svn: 292417
Summary:
Libc++ frequently needs to parse more than just the builtin *test keywords* (`RUN`, `REQUIRES`, `XFAIL`, etc.). For example, libc++ currently needs a new keyword `MODULES-DEFINES: macro list...`. Instead of re-implementing the script parsing in libc++, this patch allows `parseIntegratedTestScript` to take custom parsers.
This patch introduces a new class `IntegratedTestKeywordParser` which implements the logic to parse/process a test keyword. Parsing of various keyword "kinds" are supported out of the box, including 'TAG', 'COMMAND', and 'LIST', which parse keywords such as `END.`, `RUN:` and `XFAIL:` respectively.
As an example after this change libc++ can implement the `MODULES-DEFINES` simply using:
```
mparser = IntegratedTestKeywordParser('MODULES-DEFINES:', ParserKind.LIST)
parseIntegratedTestScript(test, additional_parsers=[mparser])
macro_list = mparser.getValue()
```
Reviewers: ddunbar, modocache, rnk, danalbert, jroelofs
Subscribers: mgrang, llvm-commits, cfe-commits
Differential Revision: https://reviews.llvm.org/D27005
llvm-svn: 288694
Update the CHECK lines in the shtest-timeout.py lit test to account for
the current output. The output has been changed in r271610 without
adjusting the tests.
Differential Revision: https://reviews.llvm.org/D25236
llvm-svn: 284057
Summary:
The Python file `utils/lit/lit/ShUtil.py` contains:
1. Logic used by lit itself
2. A set of unit tests for that logic, which can be run by invoking
`python utils/lit/lit/ShUtil.py`
Move these unit tests to a `tests/unit` subdirectory of lit, and run
the tests as part of lit's test suite. This ensures that, should the
lit test suite be included in LLVM's own regression test suite, these
unit tests will also be run.
(Instructions on how to run lit's test suite can be found in
`utils/lit/README.txt`.)
Reviewers: ddunbar, echristo, delcypher, beanz
Subscribers: mehdi_amini, llvm-commits
Differential Revision: https://reviews.llvm.org/D25411
llvm-svn: 283968
Summary:
optparse is deprecated in Python 2.7, which is the minimum version of
Python required to run the LLVM test suite. Replace its usage in lit
with argparse, optparse's 2.7 replacement module.
argparse has several benefits over optparse, but this commit does not
make use of those benefits yet. Instead, it simply uses the new API,
and attempts to keep the number of changes to a minimum.
Confirmed that lit's test suite, as well as LLVM's regression test suite,
still pass with these changes.
Patch By Brian Gesiak!
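A minimal argparse sketch in the spirit of the migration (option names are illustrative, not lit's full interface):
```
import argparse

parser = argparse.ArgumentParser(prog='lit')
parser.add_argument('test_paths', nargs='+',
                    help='files or directories to include in the test run')
parser.add_argument('-j', '--threads', type=int, default=None,
                    help='number of parallel test workers')
parser.add_argument('-v', '--verbose', action='store_true',
                    help='show output from failing tests')
opts = parser.parse_args(['-j', '4', '-v', 'utils/lit/tests'])
print(opts.test_paths, opts.threads, opts.verbose)
```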
Reviewers: ddunbar, echristo, beanz, delcypher
Subscribers: llvm-commits, mehdi_amini
Differential Revision: https://reviews.llvm.org/D25173
llvm-svn: 283152
- This is primarily useful as a "fail fast" mode for lit, where it will stop
running tests after the first failure.
- Patch by Max Moiseev.
llvm-svn: 282452
- This will cause lit to automatically include the first 1K of data in
redirected output files when a command fails (previously if the command
failed, but the main point of the test was, say, a `FileCheck` later on, then
the log wasn't helpful in showing why the command failed).
llvm-svn: 272021
- This only applies to scripts executed by the _internal_ shell script
interpreter.
- This patch reworks the log to look more like a shell transcript, and be less
verbose (but in the interest of calling attention to the important parts).
Here is an example of the new format, for commands with/without failures and
with/without output:
```
$ true
$ echo hi
hi
$ false
note: command had no output on stdout or stderr
error: command failed with exit status 1
```
llvm-svn: 271610
Summary:
This patch adds a "REQUIRES-ANY" feature test that is disjunctive. This marks a test as `UNSUPPORTED` if none of the specified features are available.
Libc++ has the need to write feature tests such as `// REQUIRES-ANY: c++98, c++03` when testing behavior that is specific to older dialects but has since changed.
Reviewers: rnk, ddunbar
Subscribers: ddunbar, probinson, llvm-commits, cfe-commits
Differential Revision: http://reviews.llvm.org/D20757
llvm-svn: 271468
Summary:
Upstream googletest prints "Running main() from gtest_main.cc" to stdout prior
to running tests. LLVM removed that print statement in r61540. If a user were
to use lit to run tests that use upstream googletest, however, lit
reports "Running main()" as an invalid test name.
To avoid such a failure, add an extra conditional to `formats/googletest.py`.
Also add tests to demonstrate the modified behavior.
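The extra conditional amounts to skipping the banner line while enumerating tests, roughly (a sketch, not the exact patch):
```
def test_names_from_listing(output):
    # Ignore googletest's startup banner so it is not misread as a
    # test-suite or test-case name.
    names = []
    for line in output.splitlines():
        if 'Running main() from gtest_main.cc' in line:
            continue
        names.append(line.rstrip())
    return names

print(test_names_from_listing(
    'Running main() from gtest_main.cc\nFirstTest.\n  subTestA\n'))
```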
Reviewers: abdulras, ddunbar
Subscribers: ddunbar, llvm-commits, kastiglione
Differential Revision: http://reviews.llvm.org/D18606
llvm-svn: 265034
This reverts r257221.
This caused several buildbot failures:
* It looks like some of the tests don't work correctly under Windows
* It looks like the lit per test timeout tests fail
So I'm reverting for now. Once the above failures are fixed running
lit's tests can be enabled again.
llvm-svn: 257268
directly with ``make check-lit`` and are run as part of
``make check-all``.
In principle we should run lit's test suite before testing LLVM with lit,
so that any problems with lit are discovered first and we can bail out
early. However, this implementation (``check-all`` runs all
tests together) seemed simpler and will still report failing lit tests.
Note that the tests and the configured ``lit.site.cfg`` have to be
copied into the build directory to avoid polluting the source tree.
llvm-svn: 257221
This should work with ShTest (executed externally or internally) and GTest
test formats.
To set the timeout, a new option ``--timeout=`` has
been added which specifies the maximum run time of an individual test
in seconds. By default this is 0, which causes no timeout to be enforced.
The timeout can also be set from a lit configuration file by modifying
the ``lit_config.maxIndividualTestTime`` property.
To implement a timeout we now require the psutil Python module if a
timeout is requested. This dependency is confined to the newly added
``lit.util.killProcessAndChildren()``. A note has been added to the
TODO document describing how we can remove the dependency on the
``psutil`` module in the future. It would be nice to remove this
immediately, but that is a lot more work, and Daniel Dunbar believes it is
better that we get a working implementation first and then improve it.
To avoid breaking the existing behaviour the psutil module will not be
imported if no timeout is requested.
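For reference, a hedged sketch of what a `killProcessAndChildren()`-style helper does with psutil (not necessarily lit's exact implementation):
```
import psutil

def kill_process_and_children(pid):
    # Kill the whole process tree rooted at pid: children first, then the
    # parent itself.
    try:
        parent = psutil.Process(pid)
    except psutil.NoSuchProcess:
        return
    for child in parent.children(recursive=True):
        try:
            child.kill()
        except psutil.NoSuchProcess:
            pass
    parent.kill()
```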
The included testcases are derived from test cases provided by
Jonathan Roelofs, which were part of a previous attempt to add a per-test
timeout to lit (http://reviews.llvm.org/D6584). Thanks Jonathan!
Reviewers: ddunbar, jroelofs, cmatthews, MatzeB
Subscribers: cmatthews, llvm-commits
Differential Revision: http://reviews.llvm.org/D14706
llvm-svn: 256471
Summary:
I spent some time trying to get the lit test suite passing. Here are the changes I needed to make on my machine, and the reasons for each.
1. google-test.py: The Google test format now checks for "[ PASSED ] 1 test." to check if a test passes.
2. discovery.py: The output appears in a different order on my machine than it did in the test.
3. unittest-adaptor.py: The output appears in a different order on my machine than it did in the test.
4. The classname is now formed differently in `getJUnitXML(...)`.
I'm not sure what is causing the output order to differ in discovery.py and unittest-adaptor.py. Does anybody have any thoughts?
Reviewers: ddunbar, danalbert, jroelofs
Reviewed By: jroelofs
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D9864
llvm-svn: 239663
We were already requiring 2.5, which meant that people on old Linux distros
had to upgrade anyway.
Requiring Python 2.6 will make supporting 3.X easier, as we can use the 3.X
exception syntax.
According to the discussion on llvmdev, there is not much value in requiring
just 2.6; we may as well require 2.7.
llvm-svn: 224129
- This is a work-in-progress and all details are subject to change, but I am
trying to build up support for allowing lit to be used as a driver for
performance tests (or other tests which might want to record information
beyond simple PASS/FAIL).
llvm-svn: 190535
- This aligns with how existing test suites end up wanting to use the local
config files, conceptually it makes sense to consider them to be inherited.
llvm-svn: 189885
- At least on OS X, it is important for correct behavior of /bin/[ that argv[0]
is passed as written, and not as the full executable path.
llvm-svn: 189559
- For whatever reason, we have a lot of test files with bogus unicode
characters. This patch allows those scripts to still be parsed on Python 3 by
changing the parsing logic to work on binary files, requiring only that the
actual script commands be convertible to ASCII.
- This patch has been tweaked to now ensure that the command strings are not of
unicode type on Python 2.6-7.
llvm-svn: 188398
- For whatever reason, we have a lot of test files with bogus unicode
characters. This patch allows those scripts to still be parsed on Python 3 by
changing the parsing logic to work on binary files, requiring only that the
actual script commands be convertible to ASCII.
llvm-svn: 188376