In combineMulToPMADDWD, if 17 bits are sign bits, not just zero bits,
the optimization can sometimes still be applied.
For now, detect and replace SRA pairs with SRL.
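For intuition, here is a small standalone C++ check (not the LLVM code itself; the helper below only models the per-lane behaviour of PMADDWD) showing why 17 sign bits are enough: such a value survives truncation to a signed 16-bit lane, so the narrow multiply reproduces the full 32-bit product.

    // Models the per-lane behaviour of PMADDWD: truncate to 16 bits, multiply signed.
    #include <cassert>
    #include <cstdint>

    static int32_t mul_like_pmaddwd(int32_t a, int32_t b) {
      return int32_t(int16_t(a)) * int32_t(int16_t(b));
    }

    int main() {
      // Both operands have at least 17 sign bits (they fit in int16_t),
      // so the narrow multiply matches the full 32-bit product.
      int32_t a = -20000, b = 123;
      assert(mul_like_pmaddwd(a, b) == a * b);

      // An operand with fewer sign bits is changed by the truncation
      // (two's complement wrap), so the trick no longer applies.
      int32_t c = 40000;
      assert(mul_like_pmaddwd(c, 3) != c * 3);
      return 0;
    }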
Prefer vector-vector shifts if available (AVX2+).
This improves the code generated for rotates and funnel shifts, which
would otherwise use a shuffle plus a slower vector-scalar shift.
Currently, we use SExtValue to decide whether to invert tbz or tbnz.
However, for the case zext (xor x, c), we should use ZExt rather
than SExt; otherwise we will generate the exact opposite branch.
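As a standalone illustration of the failure mode (plain integers below, not the backend code): for a narrow constant with its top bit set, sign extension and zero extension disagree on every bit above the original width, so a bit test on the widened value makes the exact opposite tbz/tbnz decision if the wrong extension is used.

    #include <cassert>
    #include <cstdint>

    int main() {
      int8_t c = int8_t(0x80);                 // narrow xor mask with the top bit set

      uint32_t as_sext = uint32_t(int32_t(c)); // 0xFFFFFF80
      uint32_t as_zext = uint32_t(uint8_t(c)); // 0x00000080

      unsigned bit = 31;                       // any bit >= 8 shows the disagreement
      bool invert_if_sext = (as_sext >> bit) & 1;  // says "invert the branch"
      bool invert_if_zext = (as_zext >> bit) & 1;  // says "keep the branch"
      assert(invert_if_sext != invert_if_zext);
      return 0;
    }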
Reviewed By: paquette
Differential Revision: https://reviews.llvm.org/D108755
(cherry picked from commit 5f48c144c58f6d23e850a1978a6fe05887103b17)
Add a release note for the renaming of the debuginfo-test to
cross-project-tests, performed in commit
1364750dadbb56032ef73b4d0d8cbc88a51392da and follow-ons.
Reviewed by: sylvestre.ledru
Differential Revision: https://reviews.llvm.org/D110134
When searching for hidden identity shuffles (added at rG41146bfe82aecc79961c3de898cda02998172e4b), only peek through bitcasts to the source operand if it is a vector type as well.
(cherry picked from commit dcba99418438ec1d624ad207674234bd2e9e3394)
Users of VPValues are managed in a vector, so we need to be more
careful when iterating over users while updating them. For now, just
copy them.
Fixes 51798.
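The hazard and the chosen fix can be sketched with a plain std::vector standing in for the VPValue user list (an illustration, not the VPlan code): appending to or erasing from the vector while a range-for walks it invalidates the loop's iterators, whereas iterating over a copy leaves the original free to change.

    #include <cstdio>
    #include <vector>

    int main() {
      std::vector<int> users = {1, 2, 3};

      // Unsafe pattern: push_back during the range-for may reallocate and
      // invalidate the iterators driving the loop (undefined behaviour).
      //   for (int u : users)
      //     if (u == 2) users.push_back(42);

      // Safe pattern: take a copy, then update the original freely.
      std::vector<int> copy = users;
      for (int u : copy)
        if (u == 2)
          users.push_back(42);

      for (int u : users)
        std::printf("%d\n", u);   // 1 2 3 42
      return 0;
    }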
(cherry picked from commit 368af7558e55039e4e93c3eed68ce00da86e5e35)
On X86, the stackprobe emission code chooses the `R11D` register, which
is illegal on i686. This ends up wrapping around to `EBX`, which does
not get properly callee-saved within the stack probing prologue,
clobbering the register for the callers.
We fix this by explicitly using `EAX` as the stack probe register.
Reviewed By: pengfei
Differential Revision: https://reviews.llvm.org/D109203
(cherry picked from commit ae8507b0df738205a6b9e3795ad34672b7499381)
If the constant is a constant expression, then getAggregateElement()
will return null. Guard against this before calling HasFn().
(cherry picked from commit af382b93831ae6a58bce8bc075458cfd056e3976)
I can't seem to wrap my head around the proper fix here.
We should be fine without this requirement, iff we can form this form,
but the naive attempt (https://reviews.llvm.org/D106317) has failed.
So, just to unblock the release, put up a restriction.
Fixes https://bugs.llvm.org/show_bug.cgi?id=51125
(cherry picked from commit 909cba969981032c5740774ca84a34b7f76b909b)
This patch fixes a variety of crashes resulting from the `MemCpyOptPass`
casting `TypeSize` to a constant integer, whether implicitly or
explicitly.
Since `MemsetRanges` requires a constant size to work, all but one
of the fixes in this patch simply involve skipping the various
optimizations for scalable types as cleanly as possible.
The optimization of `byval` parameters, however, has been updated to
work on scalable types in theory. In practice, this optimization is only
valid when the length of the `memcpy` is known to be larger than the
scalable type size, which is currently never the case. This could
perhaps be done in the future using the `vscale_range` attribute.
Some implicit casts have been left as they were, on the understanding
that they only occur for aggregate types, which should never be
scalably sized.
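The class of bug can be shown with a simplified stand-in for TypeSize (not LLVM's actual class or API): once a size may be scalable, the safe option is to check for scalability and bail out, because an implicit cast to a plain integer silently drops the vscale factor.

    #include <cassert>
    #include <cstdint>

    struct SizeModel {
      uint64_t MinSize;   // known minimum in bytes
      bool Scalable;      // true => real size is MinSize * vscale (unknown here)
    };

    // The pattern adopted by the fix: bail out for scalable sizes instead of
    // casting, because MemsetRanges-style bookkeeping needs a constant count.
    static bool tryOptimize(const SizeModel &Size) {
      if (Size.Scalable)
        return false;                 // skip the optimization cleanly
      uint64_t Bytes = Size.MinSize;  // safe: this really is the full size
      return Bytes > 0;
    }

    int main() {
      assert(tryOptimize({16, /*Scalable=*/false}));
      assert(!tryOptimize({16, /*Scalable=*/true}));
      return 0;
    }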
Reviewed By: nikic, tra
Differential Revision: https://reviews.llvm.org/D109329
(cherry-picked from commit 7fb66d4)
When lowering a fixed-length gather/scatter, the index type is assumed to
be the same as the memory type; this is incorrect in cases where the
extension of the index has been folded into the addressing mode.
For now, add a temporary workaround that fixes the resulting codegen
faults by preventing the removal of this extension. At a later date the
lowering for SVE gather/scatters will be redesigned to improve the way
addressing modes are handled.
As a short term side effect of this change, the addressing modes
generated for fixed length gather/scatters will not be optimal.
Differential Revision: https://reviews.llvm.org/D109145
(cherry picked from commit 14e1a4a6eef2fb95ec852c9ddfc597f80bba3226)
As part of the nontrivial unswitching we could end up removing child
loops. This patch adds a notification to the pass manager when
that happens (using the markLoopAsDeleted callback).
Without this, there could be stale LoopAccessAnalysis results cached
in the analysis manager. Those analysis results are cached based on
a Loop* as the key. Since the BumpPtrAllocator used to allocate
Loop objects could be reset between different runs of, for
example, the loop-distribute pass (running on different functions),
a new Loop object could be created at the same address as a destroyed
one. When the LoopAccessAnalysis was then requested for that loop,
we got the stale (corrupt) result from the destroyed loop.
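The failure mode can be reproduced in miniature with a pointer-keyed map and a placement-new "arena" standing in for the analysis manager and the BumpPtrAllocator (illustration only, not the pass manager code):

    #include <cassert>
    #include <map>
    #include <new>

    struct Loop { int id; };

    int main() {
      alignas(Loop) unsigned char arena[sizeof(Loop)];   // toy bump-allocator slab
      std::map<const Loop *, int> cache;                 // "analysis results" keyed by Loop*

      Loop *l1 = new (arena) Loop{1};
      cache[l1] = 111;          // cache a result for the first loop
      l1->~Loop();              // loop deleted, but the cache is never told
                                // (the role of markLoopAsDeleted)

      Loop *l2 = new (arena) Loop{2};                    // same address reused
      assert(l2 == reinterpret_cast<Loop *>(arena));
      assert(cache.count(l2) && cache[l2] == 111);       // stale hit for a different loop
      return 0;
    }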
Reviewed By: aeubanks
Differential Revision: https://reviews.llvm.org/D109257
(fixes PR51754)
(cherry-picked from commit 0f0344dd1e3b53387bb396070916e67f4c426da6)
Setting -fvisibility=hidden when compiling Target libs has the advantage of
not being intrusive on the codebase, but it also sets the visibility of all
functions within header-only components like ADT. We thus end up with
some symbols with hidden visibility within the LLVM dylib (through the target
libs) and some with external visibility (through other libs). This paves the
way for subtle bugs like https://reviews.llvm.org/D101972.
This patch explicitly sets the visibility of some classes to `default` so that
`llvm::Any`-related symbols keep a `default` visibility. Indeed, a template
function with `default` visibility parametrized by a type with `hidden`
visibility is granted `hidden` visibility, and we don't want that for the
uniqueness of `llvm::Any::TypeId`.
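A minimal sketch of the underlying toolchain rule, with illustrative names rather than the actual patch: an instantiation is given the most restrictive visibility among the template and its arguments, so a `hidden` argument type demotes the instantiation to `hidden`, and its local static can end up duplicated per shared object instead of having one program-wide address (the property `llvm::Any::TypeId` relies on). The effect is only observable across shared-library boundaries, so the snippet is meant to be compiled into a DSO rather than run directly.

    // Illustrative only; build e.g. with:
    //   clang++ -fPIC -fvisibility=hidden -shared sketch.cpp -o libsketch.so
    struct __attribute__((visibility("hidden"))) HiddenTag {};
    struct __attribute__((visibility("default"))) DefaultTag {};

    template <typename T>
    __attribute__((visibility("default"))) const void *typeId() {
      static const char Id = 0;  // identity depends on one address per T, program-wide
      return &Id;
    }

    // typeId<DefaultTag> keeps default visibility: one Id shared across the dylib.
    // typeId<HiddenTag> is demoted to hidden: each DSO may get its own copy of Id.
    const void *defaultId() { return typeId<DefaultTag>(); }
    const void *hiddenId() { return typeId<HiddenTag>(); }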
Differential Revision: https://reviews.llvm.org/D108943
Also fixes a warning mentioned in D109359.
Reviewed By: sdesmalen
Differential Revision: https://reviews.llvm.org/D109363
(cherry picked from commit 89786c2b992c3cb4c4a230542d2af34ec2915a08)
This causes https://bugs.llvm.org/show_bug.cgi?id=51714 and
is not the right patch according to the comments in D91724.
This reverts commit 42eaf4fe0adef3344adfd9fbccd49f325cb549ef.
(cherry picked from commit 34badc409cc452575c538c4b6449546adc38f121)
In the case of a virtual register tied to a phys-def, the register class
needs to be computed. Make sure that this also works with fast regalloc by
using TLI.getRegClassFor() whenever possible, and only fall back to
getMinimalPhysRegClass() for the 'Untyped' case.
Fixes https://bugs.llvm.org/show_bug.cgi?id=51699.
Review: Ulrich Weigand
Differential Revision: https://reviews.llvm.org/D109291
(cherry picked from commit 118997d8e931dcb4c6e972611a7e4febcc33a061)
Print relocations interleaved with disassembled instructions for
executables with relocatable sections, e.g. those built with "-Wl,-q".
Differential Revision: https://reviews.llvm.org/D109016
(cherry picked from commit 6300e4ac5806c9255c68c6fada37b2ce70efc524)
so that it gets installed in LLVM_INSTALL_TOOLCHAIN_ONLY builds,
such as the ones used by the Windows installer.
Differential revision: https://reviews.llvm.org/D109358
(cherry picked from commit c364dcbf1fd81c6291e935564fce2d9ebb97a3d0)
The isEssentiallyExtractHighSubvector function currently calls
getVectorNumElements on a type that in specific cases might be scalable.
Since this function only has correct behaviour at the moment on fixed-width
types anyway, the function can just return false when given a scalable type.
Differential Revision: https://reviews.llvm.org/D109163
(cherry picked from commit b297531ece896fb9ec36f001a74aef144082602b)
Due to a typo, this replaced %x with umax(C1, umin(C2, %x + C3))
rather than umax(C1, umin(C2, %x)). This didn't make a difference
for the existing tests, because the result is only used for range
calculation, and %x will usually have an unknown starting range,
and the additional offset keeps it unknown. However, if %x already
has a known range, we may compute a result range that is too
small.
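A small arithmetic check makes the difference concrete (plain integers standing in for the SCEV expressions; the values of C1, C2, C3 and the known %x are made up for the example):

    #include <algorithm>
    #include <cassert>
    #include <cstdint>

    int main() {
      uint64_t C1 = 10, C2 = 20, C3 = 5, x = 18;

      uint64_t intended = std::max(C1, std::min(C2, x));        // 18: x itself survives
      uint64_t typoed   = std::max(C1, std::min(C2, x + C3));   // 20: the offset leaks in

      // A range derived from the typoed form can exclude values (here, 18)
      // that the original expression can actually take.
      assert(intended == 18 && typoed == 20);
      return 0;
    }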
(cherry picked from commit 8d54c8a0c3d7d4a50186ae7087780c6082e5bb46)