By default LLVM doesn't save any regs for GHC on arm64.
This means we'll clobber LR on arm64 if we make non-tail calls (e.g. a syscall).
So we should save LR on non-tail calls, and not assume we won't make any
non-tail calls.
In combineMulToPMADDWD, the optimization can sometimes also be applied when
the top 17 bits are sign bits rather than just zero bits.
For now, detect and replace SRA pairs with SRL.
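For intuition, here is a small standalone check (plain C++, not the combineMulToPMADDWD code itself) of why 17 sign bits are enough: such a value fits in a signed 16-bit lane, so the 16x16->32 signed multiply that PMADDWD performs per pair matches the low 32 bits of the full product.

```cpp
#include <cassert>
#include <cstdint>

// A 32-bit value whose top 17 bits are all sign bits fits in int16_t.
static bool has17SignBits(int32_t v) { return v >= INT16_MIN && v <= INT16_MAX; }

int main() {
  int32_t a = -1234, b = 567;
  assert(has17SignBits(a) && has17SignBits(b));
  int32_t full = a * b;                                       // 32x32 multiply
  int32_t viaI16 = int32_t(int16_t(a)) * int32_t(int16_t(b)); // what PMADDWD does per pair
  assert(full == viaI16);
  return 0;
}
```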
Prefer vector-vector shifts when available (AVX2+). This improves the code
generated for rotate and funnel shifts; otherwise we would generate a shuffle
plus a slower vector-scalar shift.
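As an illustration only (not taken from the patch), a per-lane variable rotate can be built directly from the AVX2 vector-vector shifts vpsllvd/vpsrlvd, which is the kind of sequence this change prefers over a shuffle plus vector-scalar shift:

```cpp
#include <immintrin.h>  // requires AVX2 (compile with -mavx2)

// Rotate each 32-bit lane of v left by the per-lane amount in amt.
static __m256i rotl_epi32(__m256i v, __m256i amt) {
  amt = _mm256_and_si256(amt, _mm256_set1_epi32(31));
  __m256i lo = _mm256_sllv_epi32(v, amt);               // vector-vector shift left
  __m256i hi = _mm256_srlv_epi32(                       // vector-vector shift right
      v, _mm256_sub_epi32(_mm256_set1_epi32(32), amt)); // counts >= 32 give 0
  return _mm256_or_si256(lo, hi);
}
```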
Currently, we use SExtValue to decide whether to invert tbz or tbnz.
However, for the case zext (xor x, c), we should use ZExt rather than
SExt; otherwise we will generate completely opposite branches.
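A minimal standalone sketch of the polarity problem (plain C++, not the actual AArch64 ISel code), assuming the tested bit lies above the original width of the xor mask:

```cpp
#include <cassert>
#include <cstdint>

int main() {
  uint8_t c = 0x80;                                // xor mask before the zext
  uint64_t zextC = c;                              // 0x0000000000000080
  uint64_t sextC = uint64_t(int64_t(int8_t(c)));   // 0xffffffffffffff80
  unsigned testedBit = 31;                         // a bit above the mask's i8 width
  bool invertUsingZExt = (zextC >> testedBit) & 1; // false -> keep tbz
  bool invertUsingSExt = (sextC >> testedBit) & 1; // true  -> would emit tbnz
  assert(invertUsingZExt != invertUsingSExt);      // i.e. opposite branches
  return 0;
}
```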
Reviewed By: paquette
Differential Revision: https://reviews.llvm.org/D108755
(cherry picked from commit 5f48c144c58f6d23e850a1978a6fe05887103b17)
Add a release note for the renaming of the debuginfo-test to
cross-project-tests, performed in commit
1364750dadbb56032ef73b4d0d8cbc88a51392da and follow-ons.
Reviewed by: sylvestre.ledru
Differential Revision: https://reviews.llvm.org/D110134
When searching for hidden identity shuffles (added at rG41146bfe82aecc79961c3de898cda02998172e4b), only peek through bitcasts to the source operand if it is a vector type as well.
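A minimal sketch of the added guard, written as a hypothetical helper against the SelectionDAG API (the real change lives in the X86 shuffle combining code):

```cpp
#include "llvm/CodeGen/SelectionDAGNodes.h"
using namespace llvm;

// Only look through a bitcast when its source operand is itself a vector.
static SDValue peekThroughVectorBitcast(SDValue V) {
  if (V.getOpcode() == ISD::BITCAST &&
      V.getOperand(0).getValueType().isVector())
    return V.getOperand(0);
  return V;
}
```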
(cherry picked from commit dcba99418438ec1d624ad207674234bd2e9e3394)
Users of VPValues are managed in a vector, so we need to be more
careful when iterating over users while updating them. For now, just
copy them.
Fixes 51798.
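The hazard and the fix, sketched with plain containers rather than the real VPlan classes (all names below are illustrative only):

```cpp
#include <algorithm>
#include <vector>

struct Value;                    // stand-in for VPValue
struct User { Value *Op; };      // stand-in for a VPValue user
struct Value { std::vector<User *> Users; };

// Replace all uses of From with To. Iterate over a copy of the users vector,
// because the loop body mutates From.Users.
static void replaceAllUsesWith(Value &From, Value &To) {
  std::vector<User *> UsersCopy = From.Users;
  for (User *U : UsersCopy) {
    U->Op = &To;
    From.Users.erase(std::remove(From.Users.begin(), From.Users.end(), U),
                     From.Users.end());
    To.Users.push_back(U);
  }
}
```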
(cherry picked from commit 368af7558e55039e4e93c3eed68ce00da86e5e35)
On X86, the stackprobe emission code chooses the `R11D` register, which
is illegal on i686. This ends up wrapping around to `EBX`, which does
not get properly callee-saved within the stack probing prologue,
clobbering the register for the callers.
We fix this by explicitly using `EAX` as the stack probe register.
Reviewed By: pengfei
Differential Revision: https://reviews.llvm.org/D109203
(cherry picked from commit ae8507b0df738205a6b9e3795ad34672b7499381)
If the constant is a constant expression, then getAggregateElement()
will return null. Guard against this before calling HasFn().
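A sketch of the guard pattern, assuming a hypothetical helper and treating HasFn as a caller-supplied predicate:

```cpp
#include "llvm/ADT/STLExtras.h"  // llvm::function_ref
#include "llvm/IR/Constants.h"
using namespace llvm;

static bool elementHasFn(const Constant &C, unsigned Idx,
                         function_ref<bool(Constant &)> HasFn) {
  Constant *Elt = C.getAggregateElement(Idx);
  if (!Elt) // e.g. C is a constant expression: no aggregate elements
    return false;
  return HasFn(*Elt);
}
```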
(cherry picked from commit af382b93831ae6a58bce8bc075458cfd056e3976)
I can't seem to wrap my head around the proper fix here.
We should be fine without this requirement, iff we can form this form,
but the naive attempt (https://reviews.llvm.org/D106317) has failed.
So, just to unblock the release, add a restriction.
Fixes https://bugs.llvm.org/show_bug.cgi?id=51125
(cherry picked from commit 909cba969981032c5740774ca84a34b7f76b909b)
This patch fixes a variety of crashes resulting from the `MemCpyOptPass`
casting `TypeSize` to a constant integer, whether implicitly or
explicitly.
Since `MemsetRanges` requires a constant size to work, all but one
of the fixes in this patch simply involve skipping the various
optimizations for scalable types as cleanly as possible.
The optimization of `byval` parameters, however, has been updated to
work on scalable types in theory. In practice, this optimization is only
valid when the length of the `memcpy` is known to be larger than the
scalable type size, which is currently never the case. This could
perhaps be done in the future using the `vscale_range` attribute.
Some implicit casts have been left as they were, on the understanding that
they are only called on aggregate types, which should never be
scalably-sized.
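A minimal sketch of the bail-out used for the skipped cases, written as a hypothetical helper around the TypeSize API (not the exact MemCpyOpt code):

```cpp
#include <cstdint>

#include "llvm/IR/DataLayout.h"
#include "llvm/IR/Type.h"
using namespace llvm;

// Returns false (skip the optimization) when the store size is scalable.
static bool getFixedStoreSize(const DataLayout &DL, Type *Ty, uint64_t &Size) {
  TypeSize TS = DL.getTypeStoreSize(Ty);
  if (TS.isScalable()) // e.g. <vscale x 4 x i32>
    return false;
  Size = TS.getFixedSize();
  return true;
}
```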
Reviewed By: nikic, tra
Differential Revision: https://reviews.llvm.org/D109329
(cherry-picked from commit 7fb66d4)
When lowering a fixed length gather/scatter the index type is assumed to
be the same as the memory type; this is incorrect in cases where the
extension of the index has been folded into the addressing mode.
For now, add a temporary workaround that fixes the resulting codegen faults
by preventing the removal of this extension. At a later date the
lowering for SVE gather/scatters will be redesigned to improve the way
addressing modes are handled.
As a short term side effect of this change, the addressing modes
generated for fixed length gather/scatters will not be optimal.
Differential Revision: https://reviews.llvm.org/D109145
(cherry picked from commit 14e1a4a6eef2fb95ec852c9ddfc597f80bba3226)
As part of the nontrivial unswitching we could end up removing child
loops. This patch adds a notification to the pass manager when
that happens (using the markLoopAsDeleted callback).
Without this there could be stale LoopAccessAnalysis results cached
in the analysis manager. Those analysis results are cached using
a Loop* as the key. Since the BumpPtrAllocator used to allocate
Loop objects could be reset between different runs of, for
example, the loop-distribute pass (running on different functions),
a new Loop object could be created at the same Loop pointer.
Then, when requiring LoopAccessAnalysis for the new loop, we would
get the stale (corrupt) result from the destroyed loop.
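A simplified sketch of the notification, assuming it happens inside a loop pass that has access to the LPMUpdater (this is not the exact unswitch code):

```cpp
#include "llvm/Analysis/LoopInfo.h"
#include "llvm/Transforms/Scalar/LoopPassManager.h"
using namespace llvm;

// Tell the pass manager about the deletion *before* the Loop object goes
// away, so analyses cached under that Loop* cannot be picked up by a later
// loop allocated at the same address.
static void eraseChildLoop(Loop &Child, LoopInfo &LI, LPMUpdater &Updater) {
  Updater.markLoopAsDeleted(Child, Child.getName());
  LI.erase(&Child);
}
```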
Reviewed By: aeubanks
Differential Revision: https://reviews.llvm.org/D109257
(fixes PR51754)
(cherry-picked from commit 0f0344dd1e3b53387bb396070916e67f4c426da6)
Setting -fvisibility=hidden when compiling Target libs has the advantage of
not being intrusive on the codebase, but it also sets the visibility of all
functions within header-only components like ADT. In the end, we end up with
some symbols with hidden visibility within the llvm dylib (through the target
libs) and some with external visibility (through other libs). This paves the way for
subtle bugs like https://reviews.llvm.org/D101972
This patch explicitly sets the visibility of some classes to `default` so that
`llvm::Any` related symbols keep `default` visibility. Indeed, a template
function with `default` visibility parametrized by a type with `hidden`
visibility is granted `hidden` visibility, and we don't want this for the
uniqueness of `llvm::Any::TypeId`.
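The rule being relied on can be shown with a small standalone example (hypothetical names, not the actual `llvm::Any` internals): a `default`-visibility template instantiated with a `hidden` type is itself given `hidden` visibility, so the per-type address acting as the type id is no longer unique across DSO boundaries.

```cpp
struct __attribute__((visibility("hidden"))) HiddenType {};
struct __attribute__((visibility("default"))) VisibleType {};

// One address per instantiation; the uniqueness of that address is the "id".
template <typename T> char TypeIdToken = 0;

// &TypeIdToken<VisibleType> is exported and therefore unique program-wide.
void *visibleId() { return &TypeIdToken<VisibleType>; }
// &TypeIdToken<HiddenType> inherits hidden visibility: each DSO may end up
// with its own copy, and hence a different address.
void *hiddenId() { return &TypeIdToken<HiddenType>; }
```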
Differential Revision: https://reviews.llvm.org/D108943