According to Joseph Myers, a libm maintainer:
> They were only ever an ABI (selected by use of -ffinite-math-only or
> options implying it, which resulted in the headers using "asm" to redirect
> calls to some libm functions), not an API. The change means that ABI has
> turned into compat symbols (only available for existing binaries, not for
> anything newly linked, not included in static libm at all, not included in
> shared libm for future glibc ports such as RV32), so, yes, in any case
> where tools generate direct calls to those functions (rather than just
> following the "asm" annotations on function declarations in the headers),
> they need to stop doing so.
As a consequence, we should no longer assume these symbols are available on the
target system.
Still keep the TargetLibraryInfo for constant folding.
Differential Revision: https://reviews.llvm.org/D74712
llvm-ar uses CompareStringOrdinal, which is only available starting
with Windows Vista (WINVER 0x0600).
Fix this by hoisting WindowsSupport.h, which sets _WIN32_WINNT to
0x0601, up to llvm/include/llvm/Support, and use it in llvm-ar.
Patch by Cristian Adam!
Differential revision: https://reviews.llvm.org/D74599
The integrity checks for index entries in DWARFUnitHeader::extract()
might cause the function to return before checking the state of an
Error object, which leads to a crash at runtime. The patch fixes the
issue by moving the checks to a safe place.
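For context, an llvm::Error must be checked before it is destroyed; with
ABI-breaking checks enabled, destroying an unchecked Error aborts. The buggy
shape was roughly this (a hedged illustration, not the exact code; the helper
names are placeholders):
```
llvm::Error E = extractSomething();  // placeholder producer of an Error
if (!indexEntryIsValid())            // integrity check that returns early...
  return false;                      // ...destroys E unchecked -> abort
if (E)
  return reportAndFail(std::move(E));
```
Moving the integrity checks after the Error has been consumed avoids the
unchecked-Error path.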
Differential Revision: https://reviews.llvm.org/D75177
Summary: Include the offset at which this happened.
Reviewers: dblaikie, jhenderson
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D75265
Process the path for libxml2 before embedding it into the command line
generated by `llvm-config`. Each element in the path is unconditionally
given a `-l` prefix, which is wrong for absolute paths. Since the
library path may or may not be absolute, apply some CMake
pre-processing when generating the path.
Before:
```
/usr/lib/x86_64-linux-gnu/libz.so -lrt -ldl -ltinfo -lpthread -lm /usr/lib/x86_64-linux-gnu/libxml2.so
```
After:
```
/usr/lib/x86_64-linux-gnu/libz.so -lrt -ldl -ltinfo -lpthread -lm -lxml2
```
Resolves PR44179!
MathExtras.h was just wrapping SwapByteOrder.h functionality, so have
the callers use it directly. Use the MathExtras.h name (ByteSwap_NN) as
the standard naming, since it appears to be the most popular.
Summary:
For now just insert the callback for stores, similar to how MSan tracks
origins. In the future we may want to add callbacks for loads, memcpy,
function calls, CMPs, etc.
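Conceptually, the instrumentation does something like this (a rough sketch;
the callback name, signature, and helpers here are assumptions, not the exact
code from this patch):
```
// Before each store, pass the shadow of the stored value to a runtime
// callback so a tool can observe it (mirroring MSan's origin tracking).
llvm::IRBuilder<> IRB(&SI);                             // SI: the StoreInst
llvm::Value *Shadow = getShadow(SI.getValueOperand());  // assumed helper
IRB.CreateCall(StoreCallbackFn, {Shadow});              // assumed callee
```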
Reviewers: pcc, vitalybuka, kcc, eugenis
Reviewed By: vitalybuka, kcc, eugenis
Subscribers: eugenis, hiraditya, #sanitizers, llvm-commits, kcc
Tags: #sanitizers, #llvm
Differential Revision: https://reviews.llvm.org/D75312
DSE would mistakenly remove store (2):
```
a = calloc(n+1)
for (int i = 0; i < n; i++) {
  store 1, a[i+1] // (1)
  store 0, a[i]   // (2)
}
```
Store (2) looks redundant with calloc's zero-initialization, but a[i]
was already overwritten with 1 by store (1) in the previous iteration,
so removing it is wrong. The fix is to do PHI translation while looking
for clobbering instructions between the store and the calloc.
Reviewed By: efriedma, bjope
Differential Revision: https://reviews.llvm.org/D68006
Hopefully fixes compile errors on some bots, like:
http://lab.llvm.org:8011/builders/clang-cmake-x86_64-avx2-linux/builds/13383/steps/ninja%20check%201/logs/stdio
```
/home/ssglocal/clang-cmake-x86_64-avx2-linux/clang-cmake-x86_64-avx2-linux/llvm/llvm/unittests/ADT/CoalescingBitVectorTest.cpp:452:3: required from here
/home/ssglocal/clang-cmake-x86_64-avx2-linux/clang-cmake-x86_64-avx2-linux/llvm/llvm/utils/unittest/googletest/include/gtest/gtest-printers.h:377:56: error: ‘const class llvm::CoalescingBitVector<long unsigned int>::const_iterator’ has no member named ‘begin’
for (typename C::const_iterator it = container.begin();
^
/home/ssglocal/clang-cmake-x86_64-avx2-linux/clang-cmake-x86_64-avx2-linux/llvm/llvm/utils/unittest/googletest/include/gtest/gtest-printers.h:378:11: error: ‘const class llvm::CoalescingBitVector<long unsigned int>::const_iterator’ has no member named ‘end’
it != container.end(); ++it, ++count) {
^
```
On some bots, using gtest asserts to compare iterators does not compile,
and I'm not sure why (this certainly compiles with clang). Disable the
checks for now :/.
```
C:\buildbot\as-builder-3\llvm-clang-x86_64-win-fast\llvm-project\llvm\utils\unittest\googletest\include\gtest/gtest-printers.h(377): error C2039: 'begin': is not a member of 'llvm::CoalescingBitVector<unsigned int,16>::const_iterator'
C:\buildbot\as-builder-3\llvm-clang-x86_64-win-fast\llvm-project\llvm\include\llvm/ADT/CoalescingBitVector.h(243): note: see declaration of 'llvm::CoalescingBitVector<unsigned int,16>::const_iterator'
C:\buildbot\as-builder-3\llvm-clang-x86_64-win-fast\llvm-project\llvm\utils\unittest\googletest\include\gtest/gtest-printers.h(478): note: see reference to function template instantiation 'void testing::internal::DefaultPrintTo<T>(testing::internal::IsContainer,testing::internal::false_type,const C &,std::ostream *)' being compiled
with
[
T=T1,
C=T1
]
```
http://lab.llvm.org:8011/builders/llvm-clang-x86_64-win-fast/builds/12006/steps/test-check-llvm-unit/logs/stdio
http://lab.llvm.org:8011/builders/clang-cmake-x86_64-sde-avx512-linux/builds/34521/steps/ninja%20check%201/logs/stdio
This is part 3 of a 3-part series to address a compile-time explosion
issue in LiveDebugValues.
---
Start encoding register locations within VarLoc IDs, and take advantage
of this encoding to speed up transferRegisterDef.
There is no fundamental algorithmic change: this patch simply swaps out
SparseBitVector in favor of CoalescingBitVector. That changes iteration
order (hence the test updates), but otherwise this patch is NFCI.
The only interesting change is in transferRegisterDef. Instead of doing:
```
KillSet = {}
for (ID : OpenRanges.getVarLocs())
  if (DeadRegs.count(ID))
    KillSet.add(ID)
```
We now do:
```
KillSet = {}
for (Reg : DeadRegs)
  for (ID : intervalsReservedForReg(Reg, OpenRanges.getVarLocs()))
    KillSet.add(ID)
```
By not visiting each open location every time we visit an instruction,
this eliminates some potentially quadratic behavior. The new
implementation basically does a constant amount of work per instruction
because the interval map lookups are very fast.
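To make the shape of that lookup concrete, here is a self-contained sketch of
the underlying trick (illustrative only; the real code uses LocIndex and
CoalescingBitVector rather than std::set):
```
#include <cstdint>
#include <set>
#include <vector>

// Pack the register into the high bits of each ID, so all IDs for one
// register occupy one contiguous numeric range.
uint64_t makeID(uint32_t Reg, uint32_t Index) {
  return (uint64_t(Reg) << 32) | Index;
}

// Collecting the open locations for a dead register is then one ordered
// range query instead of a scan over every open location.
std::vector<uint64_t> idsForReg(const std::set<uint64_t> &Open,
                                uint32_t Reg) {
  auto Lo = Open.lower_bound(makeID(Reg, 0));
  auto Hi = Open.lower_bound(makeID(Reg + 1, 0));
  return std::vector<uint64_t>(Lo, Hi);
}
```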
For a file in WebKit, this brings the time spent in LiveDebugValues down
from ~2.5 minutes to 4 seconds, reducing compile time spent in that pass
from 28% of the total to just over 1%.
Before:
```
2.49 min 27.8% 0 s LiveDebugValues::process
2.41 min 27.0% 5.40 s LiveDebugValues::transferRegisterDef
1.51 min 16.9% 1.51 min LiveDebugValues::VarLoc::isDescribedByReg() const
32.73 s 6.1% 8.70 s llvm::SparseBitVector<128u>::SparseBitVectorIterator::operator++()
```
After:
```
4.53 s 1.1% 0 s LiveDebugValues::process
3.00 s 0.7% 107.00 ms LiveDebugValues::transferRegisterCopy
892.00 ms 0.2% 406.00 ms LiveDebugValues::transferSpillOrRestoreInst
404.00 ms 0.1% 32.00 ms LiveDebugValues::transferRegisterDef
110.00 ms 0.0% 2.00 ms LiveDebugValues::getUsedRegs
57.00 ms 0.0% 1.00 ms std::__1::vector<>::push_back
40.00 ms 0.0% 1.00 ms llvm::CoalescingBitVector<>::find(unsigned long long)
```
FWIW, I tried the same approach using SparseBitVector, but got bad
results. To do that, I had to extend SparseBitVector to support 64-bit
indices and expose its lower bound operation. The problem with this is
that the performance is very hard to predict: SparseBitVector's lower
bound operation falls back to O(n) linear scans in a std::list if you're
not /very/ careful about managing iteration order. When I profiled this
the performance looked worse than the baseline.
You can see the full CoalescingBitVector-based implementation here:
https://github.com/vedantk/llvm-project/commits/try-coalescing
You can see the full SparseBitVector-based implementation here:
https://github.com/vedantk/llvm-project/commits/try-sparsebitvec-find
Depends on D74984 and D74985.
Differential Revision: https://reviews.llvm.org/D74986
This is part 2 of a 3-part series to address a compile-time explosion
issue in LiveDebugValues.
---
Each VarLoc has a unique ID: this ID is used to look up a VarLoc in the
VarLocMap, and to virtually insert a VarLoc into a VarLocSet. Instead of
inserting the VarLoc /itself/ into the VarLocSet, we insert just the ID,
because this can be represented efficiently with a SparseBitVector.
This change introduces LocIndex, a layer of abstraction on top of VarLoc
IDs. Prior to this change, an ID was just an index into a vector. With
this change, an ID encodes both an index /and/ a register location. The
type-checker ensures that conversions to and from LocIndex are correct.
For the moment the register location is always 0 (undef). We have plenty
of bits left over to encode physregs, stack slots, and other locations
in the future.
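A minimal sketch of the encoding (field widths and helper names approximate
the patch; they are not its exact code):
```
#include <cstdint>

// A strong type packing a register location into the high bits and a
// VarLocMap index into the low bits; going through explicit conversion
// helpers keeps raw integers and encoded IDs from being mixed up.
struct LocIndex {
  uint32_t Location; // 0 (undef) for now; physregs, spill slots, etc. later
  uint32_t Index;    // index into the VarLocMap

  uint64_t getAsRawInteger() const {
    return (uint64_t(Location) << 32) | Index;
  }
  static LocIndex fromRawInteger(uint64_t ID) {
    return {uint32_t(ID >> 32), uint32_t(ID)};
  }
};
```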
Differential Revision: https://reviews.llvm.org/D74985
Add CoalescingBitVector to ADT. This is part 1 of a 3-part series to
address a compile-time explosion issue in LiveDebugValues.
---
CoalescingBitVector is a bitvector that, under the hood, relies on an
IntervalMap to coalesce elements into intervals.
CoalescingBitVector efficiently represents sets which predominantly
contain contiguous ranges (e.g. the VarLocSets in LiveDebugValues,
which are very long sequences that look like {1, 2, 3, ...}). OTOH,
CoalescingBitVector isn't good at representing sets with lots of gaps
between elements. The first N coalesced intervals of set bits are stored
in-place (in the initial heap allocation).
Compared to SparseBitVector, CoalescingBitVector offers more predictable
performance for non-sequential find() operations. This provides a
crucial speedup in LiveDebugValues.
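A small usage sketch (assuming only the basic set/test/find interface; see the
header for the full API):
```
#include "llvm/ADT/CoalescingBitVector.h"

void example() {
  // The IntervalMap allocator must outlive the vector.
  llvm::CoalescingBitVector<uint64_t>::Allocator Alloc;
  llvm::CoalescingBitVector<uint64_t> BV(Alloc);

  BV.set(1);
  BV.set(2);
  BV.set(3);             // coalesced into the single interval [1, 3]
  BV.set(100);           // stored as a second interval, [100, 100]

  bool B = BV.test(2);   // true, found via an interval-map lookup
  auto It = BV.find(50); // iterator at the first set bit >= 50, i.e. 100
  (void)B;
  (void)It;
}
```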
Differential Revision: https://reviews.llvm.org/D74984
The code changes here are hopefully straightforward:
1. Use MachineInstr flags to decide if FP ops can be reassociated
   (use both "reassoc" and "nsz" to be consistent with IR transforms;
   we probably don't need "nsz", but that's a safer interpretation of
   the FMF).
2. Check that both nodes allow reassociation to change instructions.
   This is a stronger requirement than we've usually implemented in
   IR/DAG, but it is needed to solve the motivating bug (see below),
   and it seems unlikely to impede optimization at this late stage
   (a sketch of this check follows the list).
3. Intersect/propagate MachineIR flags to enable further reassociation
in MachineCombiner.
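The check in #1/#2 boils down to something like this (a sketch using
MachineInstr's MIFlag API; the hook placement in the target is not shown):
```
#include "llvm/CodeGen/MachineInstr.h"

// Both the root and its feeding node must carry reassoc and nsz before
// the combiner may reassociate them (the "both nodes" requirement in #2).
static bool hasReassocFlags(const llvm::MachineInstr &MI) {
  return MI.getFlag(llvm::MachineInstr::MIFlag::FmReassoc) &&
         MI.getFlag(llvm::MachineInstr::MIFlag::FmNsz);
}

static bool canReassociate(const llvm::MachineInstr &Root,
                           const llvm::MachineInstr &Prev) {
  return hasReassocFlags(Root) && hasReassocFlags(Prev);
}
```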
We managed to make MachineCombiner flexible enough that no changes are
needed to that pass itself. So this patch should only affect x86
(assuming no other targets have implemented the hooks using MachineIR
flags yet).
The motivating example in PR43609 is another case of fast-math transforms
interacting badly with special FP ops created during lowering:
https://bugs.llvm.org/show_bug.cgi?id=43609
The special fadd ops used for converting int to FP assume that they will
not be altered, so those are created without FMF.
However, the MachineCombiner pass was being enabled for FP ops using the
global/function-level TargetOption for "UnsafeFPMath". We managed to run
instruction/node-level FMF all the way down to MachineIR sometime in the
last 1-2 years though, so we can do better now.
The test diffs require some explanation:
1. llvm/test/CodeGen/X86/fmf-flags.ll - no target option for unsafe math was
specified here, so MachineCombiner kicks in where it did not previously;
to make it behave consistently, we need to specify a CPU schedule model,
so use the default model, and there are no code diffs.
2. llvm/test/CodeGen/X86/machine-combiner.ll - replace the target option for
unsafe math with the equivalent IR-level flags, and there are no code diffs;
we can't remove the NaN/nsz options because those are still used to drive
x86 fmin/fmax codegen (special SDAG opcodes).
3. llvm/test/CodeGen/X86/pow.ll - similar to #1
4. llvm/test/CodeGen/X86/sqrt-fastmath.ll - similar to #1, but MachineCombiner
does some reassociation of the estimate sequence ops; presumably these are
perf wins based on latency/throughput (and we get some reduction of move
instructions too); I'm not sure how it affects numerical accuracy, but the
test reflects reality better now because we would expect MachineCombiner to
be enabled if the IR was generated via something like "-ffast-math" with clang.
5. llvm/test/CodeGen/X86/vec_int_to_fp.ll - this is the test added to model PR43609;
the fadds are not reassociated now, so we should get the expected results.
6. llvm/test/CodeGen/X86/vector-reduce-fadd-fast.ll - similar to #1
7. llvm/test/CodeGen/X86/vector-reduce-fmul-fast.ll - similar to #1
Differential Revision: https://reviews.llvm.org/D74851
Summary:
We need to handle the MCSA_LGlobal case in emitSymbolAttribute for
functions marked internal in the IR, so that the appropriate storage
class is emitted on the function descriptor csect. As part of this we
need to make sure that external labels are not emitted into the symbol
table, so we don't emit the descriptor label in the object writing
path.
Reviewers: jasonliu, DiggerLin, hubert.reinterpretcast
Reviewed By: jasonliu
Subscribers: Xiangling_L, wuzish, nemanjai, hiraditya, jsji, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D74968
Summary:
The affected LIT test intends to test the correct use of divergence
analysis to detect a divergent branch with a uniform predicate. The
passes involved are LLVM IR passes, but the test runs llc and tries to
match against generated ISA, which makes it hard to demonstrate that
the intended behavior was really tested. Replaced this with a test
that invokes opt on the required passes and then checks for the
appropriate changes in the LLVM IR.
Reviewed By: arsenm
Differential Revision: https://reviews.llvm.org/D75267
When InstCombine initially populates the worklist, it already
performs constant folding and DCE. However, as the instructions
are initially visited in program order, this DCE can only pick up
the last instruction of a dead chain; the rest would only get
picked up in the main InstCombine run.
To avoid this, we instead perform the DCE in a separate pass over the
collected instructions in reverse order, which allows us to pick up
full dead instruction chains. We already need to do this reverse
iteration anyway to populate the worklist, so this shouldn't add extra
cost.
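A sketch of that reverse sweep (hedged; the container and variable names here
are illustrative, not the patch's exact code):
```
// Visiting users before their operands lets a whole dead chain be erased
// in one pass. isInstructionTriviallyDead is declared in
// llvm/Transforms/Utils/Local.h.
for (llvm::Instruction *I : llvm::reverse(CollectedInstrs)) {
  if (llvm::isInstructionTriviallyDead(I, TLI)) {
    I->eraseFromParent();
    continue; // dead instructions never reach the worklist
  }
  Worklist.push(I);
}
```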
This by itself only fixes a small part of the problem though:
The same basic issue also applies during the main InstCombine loop.
We generally always want DCE to occur as early as possible,
because it will allow one-use folds to happen. Address this by also
performing DCE while adding deferred instructions to the main worklist.
This drops the number of tests that perform more than 2 InstCombine
iterations from ~80 to ~40. There are some spurious test changes due
to operand order / icmp toggling.
Differential Revision: https://reviews.llvm.org/D75008
Use UnaryOperator::CreateFNeg instead.
Summary:
With the introduction of the native fneg instruction, the
fsub -0.0, %x idiom is obsolete. This patch makes LLVM
emit fneg instead of the idiom in all places.
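At the API level the change is essentially this (a minimal sketch; X and
InsertPt are placeholders):
```
// Previously: emit the idiom 'fsub float -0.0, %x' via CreateFSub.
// Now: emit the dedicated instruction 'fneg float %x'.
llvm::Value *Neg = llvm::UnaryOperator::CreateFNeg(X, "neg", InsertPt);
```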
Reviewed By: cameron.mcinally
Differential Revision: https://reviews.llvm.org/D75130
This tries to improve the accuracy of extract/insert element costs by
accounting for subvector extraction/insertion for >128-bit vectors and
the shuffling of elements to/from the 0th index.
It also adds INSERTPS for f32 types and PINSR/PEXTR costs for integer
types (at the moment we assume the same cost as MOVD/MOVQ, which isn't
always true).
Differential Revision: https://reviews.llvm.org/D74976
Summary:
Error reporting in the DebugInfoDWARF library is currently done in
three ways:
1. Direct calls to WithColor::error()/WithColor::warning()
2. ErrorPolicy defaultErrorHandler(Error E);
3. void dumpWarning(Error Warning);
Additionally, other locations have more variations:
lld/ELF/SyntheticSection.cpp:
```
if (Error e = cu->tryExtractDIEsIfNeeded(false))
  error(toString(sec) + ": " + toString(std::move(e)));
```
DebugInfo/DWARF/DWARFUnit.cpp:
```
if (Error e = tryExtractDIEsIfNeeded(CUDieOnly))
  WithColor::error() << toString(std::move(e));
```
Thus error reporting can look inconsistent. To make error messages
consistent, it must be possible to redefine the error reporting
functions. This patch creates two handlers and allows them to be
redefined; it also patches all places inside DebugInfoDWARF to use
these handlers.
The intention is to always use the following handlers for error
reporting inside DebugInfoDWARF (DebugInfo/DWARF/DWARFContext.h):
```
std::function<void(Error E)> RecoverableErrorHandler = WithColor::defaultErrorHandler;
std::function<void(Error E)> WarningHandler = WithColor::defaultWarningHandler;
```
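For example, a client wanting its own formatting could supply a handler like
this (a sketch; wiring it into a particular DWARFContext instance is omitted):
```
// A replacement matching std::function<void(Error E)> that routes DWARF
// diagnostics through the client's own reporting.
auto MyHandler = [](llvm::Error E) {
  llvm::handleAllErrors(std::move(E), [](const llvm::ErrorInfoBase &EI) {
    llvm::errs() << "my-tool: DWARF error: " << EI.message() << '\n';
  });
};
```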
This is the last patch in a series: D74481, D74635, D75118.
Reviewers: jhenderson, dblaikie, probinson, aprantl, JDevlieghere
Reviewed By: jhenderson
Subscribers: grimar, hiraditya, llvm-commits
Tags: #llvm, #debug-info
Differential Revision: https://reviews.llvm.org/D74308
Summary:
Currently the dependence analysis in LLVM is unable to compute accurate
dependence vectors for multi-dimensional fixed size arrays.
This is mainly because the delinearization algorithm in scalar evolution
relies on parametric terms to be present in the access functions. In the
case of fixed size arrays such parametric terms are not present, but we
can use the indexes from GEP instructions to recover the subscripts for
each dimension of the arrays. This patch adds this ability under the
existing option `-da-disable-delinearization-checks`.
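For illustration, this is the kind of access the change enables analysis for
(a hedged example, not a test from the patch):
```
// A multi-dimensional fixed-size array: the access functions contain no
// parametric (size-dependent) terms, but the GEP for A[i][j] still
// records each subscript explicitly, roughly:
//   getelementptr inbounds [8 x [32 x double]], [8 x [32 x double]]* @A,
//                 i64 0, i64 %i, i64 %j
// so the subscripts for each dimension can be recovered from the GEP.
double A[8][32];

void touch() {
  for (int i = 0; i < 8; ++i)
    for (int j = 0; j < 32; ++j)
      A[i][j] += 1.0;
}
```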
Authored By: bmahjour
Reviewer: Meinersbur, sebpop, fhahn, dmgreen, grosser, etiotto, bollu
Reviewed By: Meinersbur
Subscribers: hiraditya, arphaman, Whitney, ppc-slack, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D72178
This addresses the issues P8198 and P8199 (from D73534):
the methods did not handle bundles properly.
Differential Revision: https://reviews.llvm.org/D74904
Summary:
The following intrinsics are added:
* @llvm.aarch64.sve.ldff1.gather
* @llvm.aarch64.sve.ldff1.gather.index
* @llvm.aarch64.sve.ldff1.gather.sxtw
* @llvm.aarch64.sve.ldff1.gather.uxtw
* @llvm.aarch64.sve.ldff1.gather.sxtw.index
* @llvm.aarch64.sve.ldff1.gather.uxtw.index
* @llvm.aarch64.sve.ldff1.gather.scalar.offset
Although this patch is quite substantial, the vast majority of the
implementation is just a 'copy & paste' of the implementation of regular
gather loads, including tests. There's only a handful of new
definitions:
* AArch64ISD nodes defined in AArch64ISelLowering.h (e.g. GLDFF1)
* Selection DAG types in AArch64SVEInstrInfo.td (e.g.
  AArch64ldff1_gather)
* intrinsics in IntrinsicsAArch64.td (e.g. aarch64_sve_ldff1_gather)
* Pseudo instructions in SVEInstrFormats.td to work around the issue of
  use-before-def for the FFR register.
Reviewed By: sdesmalen
Differential Revision: https://reviews.llvm.org/D75128
-debug-only=inline-cost does not exist in optimized builds without
asserts and therefore the test fails for such configurations.
Related revision: c965fd942f1d2de6179cd1a2f78c78fa4bd74626