Summary: This is a follow-up to D81196, fixing one more call site.
Reviewers: courbet
Subscribers: llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D81268
After pseudo-expansion, we may end up with ADDI (add immediate)
instructions where the operand is not an immediate but a relocation.
For such instructions, attempting to read the operand as an immediate
results in assertion failures, for obvious reasons.
Fixes: https://bugs.llvm.org/show_bug.cgi?id=45432
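A minimal sketch of the kind of guard this implies (hypothetical helper,
not the actual call site fixed here, assuming MachineOperand-style
accessors):

  #include "llvm/CodeGen/MachineInstr.h"
  #include "llvm/CodeGen/MachineOperand.h"
  #include <cstdint>
  using namespace llvm;

  // Only read the addend when it really is an immediate; a relocation or
  // other symbolic operand would otherwise trip the assertion inside
  // MachineOperand::getImm().
  static bool tryGetAddImm(const MachineInstr &MI, unsigned OpIdx,
                           int64_t &Imm) {
    const MachineOperand &MO = MI.getOperand(OpIdx);
    if (!MO.isImm())
      return false;
    Imm = MO.getImm();
    return true;
  }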
This combine tries to shrink a vzmovl if its input is an
insert_subvector. This patch improves it to turn
(vzmovl (bitcast (insert_subvector))) into
(insert_subvector (vzmovl (bitcast))) potentially allowing the
bitcast to be folded with a load.
The addi instruction is usually used to post-increment the loop indvar, which looks like this:
label_X:
load x, base(i)
...
y = op x
...
i = addi i, 1
goto label_X
However, for PowerPC, if there are too many vsx instructions between
y = op x and i = addi i, 1, they will use up all the hardware resources
and block the execution of i = addi i, 1, which results in a stall of
the load instruction in the next iteration. So, a heuristic is added to
move the addi as early as possible, so that the load can hide the
latency of the vsx instructions; it only kicks in when no other
heuristic has already applied, to avoid the starvation.
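For illustration, the intended schedule after the heuristic looks
roughly like this (same notation as above):

label_X:
load x, base(i)
i = addi i, 1
...
y = op x
...
goto label_X

with the addi hoisted above the vsx instructions, so the next
iteration's load can issue without stalling.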
Reviewed By: jji
Differential Revision: https://reviews.llvm.org/D80269
We can pad the v2f32 with 0s up to v8f32 and use a v8f32->v8i64
operation. This is what we end up with on non-strict nodes except
we don't pad with 0s since we don't care about exceptions.
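Expressed with AVX512DQ intrinsics purely for illustration (the helper
below is hypothetical and not code from this patch), the strict lowering
has roughly this shape:

  #include <immintrin.h>
  #include <stdint.h>

  // Convert two floats to two 64-bit ints via a full v8f32 -> v8i64
  // convert. The unused lanes are padded with 0.0f so the widened
  // conversion cannot raise spurious FP exceptions, which matters for
  // the strict variant.
  static inline __m128i cvt_2xf32_to_2xi64(float a, float b) {
    __m256 padded =
        _mm256_setr_ps(a, b, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f);
    __m512i wide = _mm512_cvttps_epi64(padded); // v8f32 -> v8i64
    return _mm512_castsi512_si128(wide);        // keep the low two i64s
  }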
In the sign splat case, we can fold PMOVMSKB(PACKSSBW(LO(X), HI(X))) -> PMOVMSKB(BITCAST_v32i8(X)) without introducing a signmask + comparison (which, unlike the any_of case, won't fold into a single TEST).
It is quite common to get multiple instances of optimization flags while building.
The following optimization options do not have cl::ZeroOrMore, which causes errors during the build.
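A sketch of the kind of change involved (the option name is
hypothetical): with cl::ZeroOrMore the flag may appear any number of
times on the command line without the parser reporting an error about
it being specified more than once.

  #include "llvm/Support/CommandLine.h"
  using namespace llvm;

  // Hypothetical example option; the cl::ZeroOrMore modifier is the
  // relevant part.
  static cl::opt<bool> EnableSomeOpt(
      "enable-some-opt", cl::init(false), cl::Hidden, cl::ZeroOrMore,
      cl::desc("Enable some optimization (illustrative example)"));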
Reviewers: alexbdv, spop
Differential Revision: https://reviews.llvm.org/D81187
Handle MOVMSK 'allof' comparisons (X86ISD::SUB X, AllBitsMask) as well as 'anyof' patterns.
This allows us to handle these patterns in the MOVMSK(BITCAST(X)) pattern to fix PR37087.
As shown on PR37087, if we have a MOVMSK(BITCAST(X)) from a wider vector, then by using MOVMSK from the wider type (32/64-bit elements) we can improve the chances of further combines with SimplifyDemandedBits/Elts, and on some targets (Skylake) it can be more efficient.
This patch implements the * binary operator for values of
MatrixType. It adds support for matrix * matrix, scalar * matrix and
matrix * scalar.
For the matrix * matrix case, the number of columns of the first operand
must match the number of rows of the second. For the scalar * matrix and
matrix * scalar variants, the element type of the matrix must match the
scalar type.
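For illustration, with -fenable-matrix this makes code like the
following well-formed (the typedefs are hypothetical examples;
matrix_type takes (rows, columns)):

  typedef float m4x3_t __attribute__((matrix_type(4, 3)));
  typedef float m3x2_t __attribute__((matrix_type(3, 2)));
  typedef float m4x2_t __attribute__((matrix_type(4, 2)));

  m4x2_t multiply(m4x3_t a, m3x2_t b) {
    // matrix * matrix: columns of 'a' (3) match rows of 'b' (3).
    return a * b;
  }

  m4x3_t scale(m4x3_t a, float s) {
    // matrix * scalar (and scalar * matrix): float matches the element type.
    return a * s;
  }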
Reviewers: rjmccall, anemet, Bigcheese, rsmith, martong
Reviewed By: rjmccall
Differential Revision: https://reviews.llvm.org/D76794
global-ctor.ll no longer checks what it intended to check
(@_GLOBAL__sub_I_global-ctor.ll needs a !dbg to work).
Rewrite it.
gcov 3.4 and gcov 4.2 use the same format, thus we can lower the version
requirement to 3.4.
There are two properties we want to verify:
1. That the successors returned by analyzeBranch are in the CFG
successor list, and
2. That there are no extraneous successors in the CFG successor
list.
The previous implementation mostly accomplished this, but in a very
convoluted manner.
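A rough sketch of the two checks (hypothetical, verifier-style code; the
real implementation also has to account for successors that do not come
from the terminators):

  #include "llvm/ADT/SmallVector.h"
  #include "llvm/CodeGen/MachineBasicBlock.h"
  #include "llvm/CodeGen/MachineOperand.h"
  #include "llvm/CodeGen/TargetInstrInfo.h"
  #include <cassert>
  using namespace llvm;

  static void checkBranchSuccessors(const TargetInstrInfo &TII,
                                    MachineBasicBlock &MBB,
                                    MachineBasicBlock *Fallthrough) {
    MachineBasicBlock *TBB = nullptr, *FBB = nullptr;
    SmallVector<MachineOperand, 4> Cond;
    if (TII.analyzeBranch(MBB, TBB, FBB, Cond))
      return; // Block could not be analyzed.
    // Property 1: blocks reported by analyzeBranch must be CFG successors.
    assert((!TBB || MBB.isSuccessor(TBB)) && "missing CFG edge to TBB");
    assert((!FBB || MBB.isSuccessor(FBB)) && "missing CFG edge to FBB");
    // Property 2: every CFG successor must be explained by TBB, FBB, the
    // fall-through block, or an exceptional (EH pad) edge.
    for (const MachineBasicBlock *Succ : MBB.successors())
      assert((Succ == TBB || Succ == FBB || Succ == Fallthrough ||
              Succ->isEHPad()) &&
             "extraneous CFG successor");
  }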
Differential Revision: https://reviews.llvm.org/D79793
Previously, it tried to infer the correct destination block from the
successor list, but this is a rather tricky prospect, given the
existence of successors that occur mid-block, such as invoke, and
potentially in the future, callbr/INLINEASM_BR. (INLINEASM_BR, in
particular, would be problematic, because its successor blocks are not
distinct from "normal" successors, as EHPads are.)
Instead, require the caller to pass in the expected fallthrough
successor explicitly. In most callers, the correct block is
immediately clear. But, in MachineBlockPlacement, we do need to record
the original ordering, before starting to reorder blocks.
Unfortunately, the goal of decoupling the behavior of end-of-block
jumps from the successor list has not been fully accomplished in this
patch, as there is currently no other way to determine whether a block
is intended to fall-through, or end as unreachable. Further work is
needed there.
Differential Revision: https://reviews.llvm.org/D79605
The annoying behavior where the output is different due to the
legality check struck again, plus the subtarget predicate wasn't
really correctly set for DS FP atomics.
Some of the FP min/max instructions seem to be in the gfx6/gfx7
manuals; IIRC this might have been one of the cases where the manual
got ahead of the actual hardware support. I've left these as-is for
now since the assembler tests seem to expect them.