Summary:
Remove the redundant config-time call to cmake when
building host tools for cross compiles or optimized tablegen.
The config-time call to cmake is redundant because it will always get
called again when the CONFIGURE_LLVM_${target_name} target fires at
build-time. This speeds up initial configuration, but has no effect
on build behavior.
Reviewers: beanz
Reviewed By: beanz
Subscribers: mgorny, llvm-commits
Differential Revision: https://reviews.llvm.org/D40229
llvm-svn: 319176
Atom's FABS/FCHS/FSQRT latencies taken from Agner.
Note: I just added FSIN and FCOS to the existing IIC_FSINCOS itinerary, which actually models a more costly instruction.
llvm-svn: 319175
This is needed for cases when the memory access is not as big as the width of
the data type. For instance, storing i1 (1 bit) would be done in a byte (8
bits).
Using 'BitSize >> 3' (or '/ 8') would give the memory access of an i1 a
size of 0, which, for instance, makes alias analysis return NoAlias even
when it shouldn't.
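For illustration, a minimal sketch of the rounding issue (the helper names are made up; the actual patch touches target-specific code):

```cpp
#include <cstdint>

// Illustrative only: dividing the bit width by 8 truncates, so sub-byte types
// such as i1 end up with a memory-access size of 0 bytes, while rounding up
// yields the size of the access that is actually performed.
static uint64_t bytesTruncating(uint64_t BitSize) { return BitSize >> 3; }
static uint64_t bytesRoundedUp(uint64_t BitSize) { return (BitSize + 7) / 8; }

int main() {
  // For an i1 store: truncation gives 0, rounding up gives the 1 byte touched.
  return (bytesTruncating(1) == 0 && bytesRoundedUp(1) == 1) ? 0 : 1;
}
```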
There are no tests, as this was done as a follow-up to the bugfix for the case
where this was discovered (r318824). This change handles more such cases.
Review: Björn Petterson
https://reviews.llvm.org/D40339
llvm-svn: 319173
LLVM Coding Standards:
Function names should be verb phrases (as they represent actions), and
command-like functions should be imperative. The name should be camel
case, and start with a lower case letter (e.g. openFile() or isFoo()).
Differential Revision: https://reviews.llvm.org/D40416
llvm-svn: 319168
Certain ARM implementations treat the icache clear instruction as a memory
read, and the CPU segfaults when trying to clear the cache on a page without
PROT_READ.
We work around this in Memory::protectMappedMemory by adding
PROT_READ to the affected pages, clearing the cache, and then setting the
desired protection.
This fixes "AllocationTests/MappedMemoryTest.***/3" unit-tests on
affected hardware.
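For reference, a minimal sketch of the sequence described above (illustrative only, not the actual Memory::protectMappedMemory code):

```cpp
#include <cstddef>
#include <sys/mman.h>

// If the requested protection lacks PROT_READ, temporarily add it so that
// clearing the icache (treated as a memory read on the affected ARM
// implementations) cannot fault, then drop to the protection that was
// actually requested.
static int protectAndClearCache(void *Addr, size_t Size, int Flags) {
  int InterimFlags = (Flags & PROT_READ) ? Flags : (Flags | PROT_READ);
  if (mprotect(Addr, Size, InterimFlags) != 0)
    return -1;
  __builtin___clear_cache(static_cast<char *>(Addr),
                          static_cast<char *>(Addr) + Size);
  // Apply the originally requested protection if we had to widen it.
  return (InterimFlags != Flags) ? mprotect(Addr, Size, Flags) : 0;
}
```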
Reviewers: psmith, zatrazz, kristof.beyls, lhames
Reviewed By: lhames
Subscribers: llvm-commits, krytarowski, peter.smith, jgreenhalgh, aemerson,
rengolin
Patch by maxim-kuvrykov!
Differential Revision: https://reviews.llvm.org/D40423
llvm-svn: 319166
The core idea is to (re-)introduce some redundancies where their cost is
hidden by the cost of materializing immediates for constant operands of
PHI nodes. When the cost of the redundancies is covered by this,
avoiding materializing the immediate has numerous benefits:
1) Less register pressure
2) Potential for further folding / combining
3) Potential for more efficient instructions due to an immediate operand
As a motivating example, consider the remarkably different cost on x86
of a SHL instruction with an immediate operand versus a register
operand.
This pattern turns up surprisingly frequently, but is somewhat rarely
obvious as a significant performance problem.
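As a rough source-level illustration of the pattern (a made-up example, not taken from the test suite):

```cpp
// The shift amount reaches the shift through a PHI of the constants 7 and 3,
// so without speculation the backend must materialize the selected constant
// in a register and emit a register-operand SHL. Speculating the shift into
// both predecessors lets each copy use an immediate operand instead.
unsigned scale(unsigned X, bool UseLargeBuckets) {
  unsigned ShiftAmount;
  if (UseLargeBuckets)
    ShiftAmount = 7;       // one incoming constant of the PHI
  else
    ShiftAmount = 3;       // the other incoming constant
  return X << ShiftAmount; // SHL whose amount is a PHI of constants
}
```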
The pass is entirely target independent, but it does rely on the target
cost model in TTI to decide when to speculate things around the PHI
node. I've included x86-focused tests, but any target that sets up its
immediate cost model should benefit from this pass.
There is probably more that can be done in this space, but the pass
as-is is enough to get some important performance wins on our internal
benchmarks, and it should be generally performance neutral; help with
more extensive benchmarking is always welcome.
One awkward part is that this pass has to be scheduled after
*everything* that can eliminate these kinds of redundancies. This
includes SimplifyCFG, GVN, etc. I'm open to suggestions about better
places to put this. We could in theory make it part of the codegen pass
pipeline, but there doesn't really seem to be a good reason for that --
it isn't "lowering" in any sense and only relies on pretty standard cost
model based TTI queries, so it seems to fit well with the "optimization"
pipeline model. Still, further thoughts on the pipeline position are
welcome.
I've also only implemented this in the new pass manager. If folks are
very interested, I can try to add it to the old PM as well, but I didn't
really see much point (my use case is already switched over to the new
PM).
I've tested this pretty heavily without issue. A wide range of
benchmarks internally show no change outside the noise, and I don't see
any significant changes in SPEC either. However, the size class
computation in tcmalloc is substantially improved by this, which turns
into a 2% to 4% win on the hottest path through tcmalloc for us, so
there are definitely important cases where this is going to make
a substantial difference.
Differential Revision: https://reviews.llvm.org/D37467
llvm-svn: 319164
Summary:
I think we do not need to analyze debug intrinsics here, as they should
not impact codegen. This has 2 benefits: 1) slightly less work to do and
2) we avoid generating optimization remarks about converting calls to
debug intrinsics into tail calls, which are not really helpful for users.
Based on work by Sander de Smalen.
Reviewers: davide, trentxintong, aprantl
Reviewed By: aprantl
Subscribers: llvm-commits, JDevlieghere
Tags: #debug-info
Differential Revision: https://reviews.llvm.org/D40440
llvm-svn: 319158
Summary:
The entire algorithm operates per basic-block, so for cache locality
it should be better to re-optimize a basic-block immediately rather than
in a separate loop.
I don't have performance measurements.
Change-Id: I85106570bd623c4ff277faaa50ee43258e1ddcc5
Reviewers: arsenm, rampitec
Subscribers: kzhuravl, wdng, yaxunl, dstuttard, tpr, llvm-commits, t-tye
Differential Revision: https://reviews.llvm.org/D40344
llvm-svn: 319156
Summary:
The PeepholeOptimizer pass calls this function solely based on checking
DefMI->isMoveImmediate(), which only checks the MoveImm bit of the
instruction description. So it's up to FoldImmediate itself to properly
check that DefMI *actually* moves from an immediate.
I don't have a separate test case for this, but the next patch introduces
a test case which happens to crash without this change.
This error is caught by the assertion in MachineOperand::getImm().
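A minimal sketch of the kind of guard this implies (illustrative; the actual change is in the target's FoldImmediate, and treating operand 1 as the source is an assumption):

```cpp
#include "llvm/CodeGen/MachineInstr.h"

using namespace llvm;

// isMoveImmediate() only reflects the MoveImm bit in the MCInstrDesc, so the
// defining instruction's source operand must still be checked before calling
// MachineOperand::getImm(), which asserts on non-immediate operands.
static bool definesFoldableImmediate(const MachineInstr &DefMI) {
  return DefMI.getDesc().isMoveImmediate() && DefMI.getNumOperands() > 1 &&
         DefMI.getOperand(1).isImm();
}
```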
Change-Id: I88e7cdbcf54d75e1a296822e6fe5f9a5f095bbf8
Reviewers: arsenm, rampitec
Subscribers: kzhuravl, wdng, yaxunl, dstuttard, tpr, t-tye, llvm-commits
Differential Revision: https://reviews.llvm.org/D40342
llvm-svn: 319155
Currently, we use a set of pairs to cache responses like `CompareValueComplexity(X, Y) == 0`. Even if we
have proved that `CompareValueComplexity(S1, S2) == 0` and `CompareValueComplexity(S2, S3) == 0`,
this cache does not allow us to prove that `CompareValueComplexity(S1, S3)` is also `0`.
This patch replaces this set with `EquivalenceClasses`, which merges Values into equivalence sets so that
any two values from the same set are equal from the point of view of `CompareValueComplexity`. This, in
particular, allows us to prove the fact from the example above.
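A small, hedged illustration of the difference (simplified to integers; the real cache holds llvm::Value pointers):

```cpp
#include "llvm/ADT/EquivalenceClasses.h"

using namespace llvm;

// With a set of pairs, recording "S1 == S2" and "S2 == S3" says nothing about
// the pair (S1, S3). Merging equivalence classes makes the transitive equality
// visible without re-running the comparison.
static bool transitiveEqualityIsKnown() {
  EquivalenceClasses<int> EC;
  EC.unionSets(1, 2); // proved CompareValueComplexity(S1, S2) == 0
  EC.unionSets(2, 3); // proved CompareValueComplexity(S2, S3) == 0
  // S1 and S3 now share a leader, so their equality comes for free.
  return EC.getLeaderValue(1) == EC.getLeaderValue(3);
}
```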
Differential Revision: https://reviews.llvm.org/D40429
llvm-svn: 319153
The priorities in the section name suffixes are zero padded,
allowing the linker to just do a lexical sort.
Add zero padding for .ctors sections in ELF as well.
Differential Revision: https://reviews.llvm.org/D40407
llvm-svn: 319150
Currently, we use a set of pairs to cache responses like `CompareSCEVComplexity(X, Y) == 0`. Even if we
have proved that `CompareSCEVComplexity(S1, S2) == 0` and `CompareSCEVComplexity(S2, S3) == 0`,
this cache does not allow us to prove that `CompareSCEVComplexity(S1, S3)` is also `0`.
This patch replaces this set with `EquivalenceClasses`, so that any two values from the same set are equal
from the point of view of `CompareSCEVComplexity`. This, in particular, allows us to prove the fact from
the example above.
Differential Revision: https://reviews.llvm.org/D40428
llvm-svn: 319149
This is to address a problem similar to those in D37460 for Scalar PRE. We should not
PRE across an instruction that may not transfer execution to its successor unless it is
safe to speculatively execute it.
Differential Revision: https://reviews.llvm.org/D38619
llvm-svn: 319147
Unoptimized IR can have linear sequences of stores to an array, where the
initial GEP for the first store is formed from the pointer to the array, and the
GEP for each store after the first is formed from the previous GEP with some
offset in an inductive fashion.
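A hedged illustration of the kind of input meant here (not the actual test case from the review):

```cpp
// At O0 each store keeps its own GEP; the first is formed from the array
// pointer and every later one from the previous GEP plus an offset, giving
// DAGCombine a long chain of stores to re-examine repeatedly.
void initTable(int *Table) {
  int *P = Table; // initial GEP from the pointer to the array
  *P = 0;
  P = P + 1;      // next GEP formed from the previous one
  *P = 1;
  P = P + 1;
  *P = 2;
  // ...continued for thousands of stores in the problematic inputs.
}
```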
The (large) resulting DAG, when analyzed by DAGCombine, undergoes an excessive
number of combines, as each store node is examined every time its offset node
is combined with any child of the offset. One of the transformations is
findBetterNeighborChains, which assists MergeConsecutiveStores. The former
relies on repeated chain walking to do its work; however, MergeConsecutiveStores
is disabled at O0, which makes the transformation redundant.
Any optimization level other than O0 would invoke InstCombine, which would
resolve the chain of GEPs into a flat base + offset GEP for each store and thus
avoid the repeated examination of each store to the array.
Disabling this optimization fixes an excessive compile-time issue (~30 minutes
for the provided test case) at O0.
Reviewers: niravd, craig.topper, t.p.northover
Differential Revision: https://reviews.llvm.org/D40193
llvm-svn: 319142
This fixes cases where we wouldn't perform various register operand
checks just because we didn't happen to have a definition in the
MCInstrDesc. This changes the code to only skip the tests that actually
depend on the MCInstrDesc definition.
This makes the machine verifier spot the problem from
https://llvm.org/PR33071 after the pass that actually caused it.
llvm-svn: 319141
Additional checks for phi operands:
- The first operand should be a virtual register def. It should not be
tied, implicit, internalread, earlyclobber or a read.
- The other operands should be register/mbb operands next to each other
- The register operands should not be implicit, internalread,
earlyclobber, debug or tied.
- We can perform most of the PHI checks even for unreachable blocks.
llvm-svn: 319140
If /debug was not specified, readSection will return a null
pointer for debug sections. If the debug section is associative with
another section, we need to make sure that the section returned from
readSection is not a null pointer before adding it as an associative
section.
Differential Revision: https://reviews.llvm.org/D40533
llvm-svn: 319133
Revert "[SROA] Propagate !range metadata when moving loads."
Revert "[Mem2Reg] Clang-format unformatted parts of this file. NFCI."
Davide says they broke a bot.
llvm-svn: 319131
This adds code to protect WebAssembly's `trunc_s` family of opcodes
from values outside their domain. Even though such conversions have
full undefined behavior in C/C++, LLVM IR's `fptosi` and `fptoui` do
not, and only return undef.
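For context, a small made-up example of such an out-of-domain conversion (not from the patch):

```cpp
#include <cstdio>

int main() {
  volatile float Huge = 1e20f; // far outside the range of int
  // Undefined behavior in C/C++; the corresponding LLVM IR fptosi merely
  // yields undef, while an unprotected WebAssembly trunc_s would trap.
  int Converted = static_cast<int>(Huge);
  std::printf("%d\n", Converted);
  return 0;
}
```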
This also implements the proposed non-trapping float-to-int conversion
feature and uses that instead when available.
llvm-svn: 319128
These lines all exist identically under SSE2, AVX2, or AVX512. Given that VLX implies all of those, they aren't providing anything new.
llvm-svn: 319124
With AVX512 vXi1 types are legal so we shouldn't be extending them.
This change is similar to existing code in the zext(setcc) combine.
llvm-svn: 319120
Which VTs are considered simple is determined by the superset of the legal types of all targets in LLVM. If we're looking at VTs that are going to be split down to 512 bits, we should allow any VT, not just simple ones, since the simple list changes over time as new targets are added.
llvm-svn: 319110
LLVM runtimes rely on LLVM_HOST_TRIPLE being set in their builds
and tests, so make sure it's being passed down.
Differential Revision: https://reviews.llvm.org/D40515
llvm-svn: 319109
We introduce a new variable LLVM_ENABLE_RUNTIMES which works
similarly to LLVM_ENABLE_PROJECTS and allows specifying runtimes
that will be enabled in the runtimes build.
Differential Revision: https://reviews.llvm.org/D40233
llvm-svn: 319107
Prevent unloading shared libraries on Linux when dlclose() is called.
This is necessary since the command-line option parsing API relies on
registering the global option instances in the option parser instance,
which can be loaded in a different shared library.
Given that we can't reliably remove those options when a library is
unloaded, the parser ends up containing dangling references. Since glibc
has relatively complex library unloading rules, some of the LLVM
libraries can be unloaded while others (including the Support library)
stay loaded, causing quite a bit of mayhem. To reliably prevent that, just
forbid unloading all libraries -- it's a very bad idea anyway.
While the issue arguably happens only with BUILD_SHARED_LIBS, it may
affect any library reusing the llvm::cl interface.
Based on a patch provided by Ross Hayward on https://bugs.gentoo.org/617154.
Previously hit by Fedora back in Feb 2016:
https://lists.freedesktop.org/archives/mesa-dev/2016-February/107242.html
Differential Revision: https://reviews.llvm.org/D40459
llvm-svn: 319105
The previous implementation would only look one DW_AT_specification or DW_AT_abstract_origin reference deep. This means DWARFDie::getName() would fail in certain cases. I ran into such a case while creating a tool that used the LLVM DWARF parser to generate a symbolication format, so I have seen this in the wild.
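A hedged sketch of the deeper lookup (illustrative; not the actual DWARFDie::getName() implementation):

```cpp
#include "llvm/DebugInfo/DWARF/DWARFDie.h"

using namespace llvm;

// Keep following DW_AT_specification / DW_AT_abstract_origin references until
// a DW_AT_name is found or the chain ends, instead of looking only one deep.
// (A production version would also guard against malformed, cyclic chains.)
static const char *findNameSketch(DWARFDie Die) {
  while (Die.isValid()) {
    if (auto NameForm = Die.find(dwarf::DW_AT_name))
      if (auto Name = dwarf::toString(NameForm))
        return *Name;
    DWARFDie Next =
        Die.getAttributeValueAsReferencedDie(dwarf::DW_AT_specification);
    if (!Next.isValid())
      Next = Die.getAttributeValueAsReferencedDie(dwarf::DW_AT_abstract_origin);
    Die = Next;
  }
  return nullptr;
}
```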
Differential Revision: https://reviews.llvm.org/D40156
llvm-svn: 319104
We don't do this for narrow vectors under AVX or SSE features. We also don't set them to Expand like we do for many vector ops. Nor does TargetLoweringBase.cpp. This leads me to believe these default to Legal.
llvm-svn: 319103
This tries to propagate !range metadata to a pre-existing load
when a load is optimized out. This is done instead of adding an
assume because converting loads to and from assumes creates a
lot of IR.
Patch by Ariel Ben-Yehuda.
Differential Revision: https://reviews.llvm.org/D37216
llvm-svn: 319096