We already avoid writing to WZR just about everywhere else.
Also update the code to use MachineIRBuilder for the sake of consistency.
We also didn't have a GlobalISel testcase for this path, so add a simple one
now.
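For reference, a minimal sketch of the MachineIRBuilder style this moves to;
MI, DstReg, and SrcReg stand in for values from the surrounding selector code
and are illustrative, not from the actual patch:

  // Build a COPY through MachineIRBuilder instead of constructing the
  // MachineInstr by hand.
  MachineIRBuilder MIB(MI);      // insertion point at MI
  MIB.buildCopy(DstReg, SrcReg); // emits DstReg = COPY SrcReg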
Differential Revision: https://reviews.llvm.org/D90626
This allows instructions like `lla a5, (0xFF + end) - 4` (already supported
by GNU as) to be parsed.
Also add a missing test checking that an operand like `foo + foo` is rejected.
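For illustration (symbol names hypothetical):

  lla a5, (0xFF + end) - 4   # accepted: one symbol plus constant offsets
  lla a5, foo + foo          # rejected: a sum of two symbols is not relocatable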
Reviewed By: jrtc27
Differential Revision: https://reviews.llvm.org/D92293
When handling a DSOLocalEquivalent operand change:
- Remove the assertion checking that the `To` type and the current type
are the same; this is not always a requirement.
- Add a missing bitcast from an old DSOLocalEquivalent to the type of
the new one (see the sketch below).
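A rough IR sketch of the second point (names and types illustrative, using
pre-opaque-pointer bitcasts):

  declare void @f()
  ; If this dso_local_equivalent is replaced by one of a differently-typed
  ; function, a bitcast back to the original i8* type must be introduced:
  @tab = constant i8* bitcast (void ()* dso_local_equivalent @f to i8*)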
When we fail to select an optimized compare against 0, fall back to
selecting Bcc rather than TB(N)Z.
Also simplify selectCompareBranch a little while we're here, because the logic
was kind of hard to follow.
At -O0, this is a 0.1% geomean code size improvement for CTMark.
A simple example of where this can kick in is here:
https://godbolt.org/z/4rra6P
In the example above, GlobalISel currently produces a subs, cset, and tbnz.
SelectionDAG, on the other hand, just emits a compare and b.le.
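Roughly, for a signed x > y guard (registers and labels illustrative):

  // before (GlobalISel)        // after (matching SelectionDAG)
  subs w8, w0, w1               cmp  w0, w1
  cset w8, gt                   b.le .LBB0_2
  tbnz w8, #0, .LBB0_1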
Differential Revision: https://reviews.llvm.org/D92358
This reverts commit cf1c774d6ace59c5adc9ab71b31e762c1be695b1.
This change caused several regressions in the gdb test suite - at least
a sample of which were due to line-zero instructions leaving breakpoints
without an associated source line. I think they're worth
investigating/understanding more (and possibly addressing) before moving
forward with this change.
Revert "[FastISel] NFC: Clean up unnecessary bookkeeping"
This reverts commit 3fd39d3694d32efa44242c099e923a7f4d982095.
Revert "[FastISel] NFC: Remove obsolete -fast-isel-sink-local-values option"
This reverts commit a474657e30edccd9e175d92bddeefcfa544751b2.
Revert "Remove static function unused after cf1c774."
This reverts commit dc35368ccf17a7dca0874ace7490cc3836fb063f.
Revert "[lldb] Fix TestThreadStepOut.py after "Flush local value map on every instruction""
This reverts commit 53a14a47ee89dadb8798ca8ed19848f33f4551d5.
This allows us to use its value everywhere, rather than just in clang. Some
other places, like opt and lld, will use its value soon.
Rename it internally to LLVM_ENABLE_NEW_PASS_MANAGER.
The #define for it is now in llvm-config.h.
The initial land accidentally set the value of
LLVM_ENABLE_NEW_PASS_MANAGER to the literal string
ENABLE_EXPERIMENTAL_NEW_PASS_MANAGER instead of that option's value.
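Roughly, the bad and fixed CMake look like this (a sketch; exact file and
context assumed):

  # before: assigns the literal string, not the option's value
  set(LLVM_ENABLE_NEW_PASS_MANAGER ENABLE_EXPERIMENTAL_NEW_PASS_MANAGER)
  # after: dereference the cache variable
  set(LLVM_ENABLE_NEW_PASS_MANAGER ${ENABLE_EXPERIMENTAL_NEW_PASS_MANAGER})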
Reviewed By: rnk, hans
Differential Revision: https://reviews.llvm.org/D92072
This allows us to use its value everywhere, rather than just in clang. Some
other places, like opt and lld, will use its value soon.
The #define for it is now in llvm-config.h.
Reviewed By: rnk, hans
Differential Revision: https://reviews.llvm.org/D92072
Currently, `llvm_bb_addr_map` sections are generated per section name because
we use the `LinkedToSymbol` argument of getELFSection. This causes the address
map tables of functions to be grouped into the same section when
`-function-sections=true -unique-section-names=false`, which is not the
intended behaviour. This patch lets the unique id of every `.text` section
propagate to the associated `.llvm_bb_addr_map` section.
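For example, the affected configuration can be reproduced with something like
(flags illustrative):

  llc -function-sections -unique-section-names=false \
      -basic-block-sections=labels -filetype=obj test.ll -o test.o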
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D92113
This is yet another attempt at providing support for epilogue
vectorization following discussions raised in RFC http://llvm.1065342.n5.nabble.com/llvm-dev-Proposal-RFC-Epilog-loop-vectorization-tt106322.html#none
and reviews D30247 and D88819.
Similar to D88819, this patch achieves epilogue vectorization by
executing a single vplan twice: once on the main loop and a second
time on the epilogue loop (using a different VF). However, it is able
to handle more loops and generates better control flow for cases
where the trip count is too small to execute any code in vector
form.
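Conceptually, the emitted control flow looks something like this sketch (VFs
illustrative):

  i = 0;
  for (; i + 8 <= n; i += 8) { /* main vector loop, e.g. VF = 8 */ }
  for (; i + 4 <= n; i += 4) { /* epilogue vector loop, e.g. VF = 4 */ }
  for (; i < n; ++i)         { /* scalar remainder */ }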
Reviewed By: SjoerdMeijer
Differential Revision: https://reviews.llvm.org/D89566
This is a straightforward port of MemCpyOpt to MemorySSA following
the approach of D26739. MemDep queries are replaced with MSSA queries
without changing the overall structure of the pass. Some care has
to be taken to account for differences between these APIs
(MemDep also returns reads, MSSA doesn't).
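For instance, a dependency query translates roughly as follows (a sketch, not
the literal diff; MI is the memory instruction being queried):

  // MemDep: MemDepResult Dep = MD->getDependency(&MI);
  // MemorySSA equivalent:
  MemoryUseOrDef *Access = MSSA->getMemoryAccess(&MI);
  MemoryAccess *Clobber =
      MSSA->getWalker()->getClobberingMemoryAccess(Access);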
Differential Revision: https://reviews.llvm.org/D89207
We were not correctly splitting blocks for chains of length 1.
Previously, additional instructions in blocks belonging to chains of
length 1 were not split off from the block before removing it (this was
done correctly for longer chains). If such a block contained an
instruction referenced elsewhere, deleting the block would invalidate the
produced value. This caused a miscompile which motivated D92297 (before
D17993, the nonnull and dereferenceable attributes were not added, so
MergeICmps was not triggered). The new test gep-references-bb.ll
demonstrates the issue.
The regression was introduced in
rG0efadbbcdeb82f5c14f38fbc2826107063ca48b2.
This supersedes D92364.
Test case by MaskRay (Fangrui Song).
Differential Revision: https://reviews.llvm.org/D92375
Without FMF, we lower these intrinsics into something like this:
vmaxsd %xmm0, %xmm1, %xmm2
vcmpunordsd %xmm0, %xmm0, %xmm0
vblendvpd %xmm0, %xmm1, %xmm2, %xmm0
But if we can ignore NaNs, the single min/max instruction is enough
because there is no need to fix up the x86 logic that corresponds to
X > Y ? X : Y.
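For example (a sketch, using llvm.maxnum as a stand-in for the affected
intrinsics):

  ; with a no-NaNs flag, this can lower to a single vmaxsd:
  %r = call nnan double @llvm.maxnum.f64(double %x, double %y)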
We probably want to make other adjustments for FP intrinsics with FMF
to account for specialized codegen (for example, FSQRT).
Differential Revision: https://reviews.llvm.org/D92337
We already expand select and select_cc in codegenprepare, but they can
still be generated in some situations. Explicitly mark them as Expand to
ensure they are not produced; otherwise we fail to select those nodes.
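A sketch of the marking (target and types illustrative):

  // In the target's TargetLowering constructor:
  setOperationAction(ISD::SELECT,    MVT::i32, Expand);
  setOperationAction(ISD::SELECT_CC, MVT::i32, Expand);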
Differential Revision: https://reviews.llvm.org/D92373
icmp is the preferred spelling in IR because icmp analysis is
expected to be better than any other analysis. This should
lead to more follow-on folding potential.
It's difficult to say exactly what we should do in codegen to
compensate. For example, on AArch64, which of these is preferred:
sub w8, w0, w1
lsr w0, w8, #31
vs:
cmp w0, w1
cset w0, lt
If there are perf regressions, then we should deal with those in
codegen on a case-by-case basis.
A possible motivating example for better optimization is shown in:
https://llvm.org/PR43198 but that will require other transforms
before anything changes there.
Alive proof:
https://rise4fun.com/Alive/o4E
Name: sign-bit splat
Pre: C1 == (width(%x) - 1)
%s = sub nsw %x, %y
%r = ashr %s, C1
=>
%c = icmp slt %x, %y
%r = sext %c
Name: sign-bit LSB
Pre: C1 == (width(%x) - 1)
%s = sub nsw %x, %y
%r = lshr %s, C1
=>
%c = icmp slt %x, %y
%r = zext %c
If the shift amount was undef for some lane, the shift amount in the
opposite shift is irrelevant for that lane, and the new shift amount for
that lane can be undef.