An incorrect mask type when lowering an SVE gather/scatter was causing
a codegen fault that manifested as the wrong predicate size being used
(e.g. p0.b rather than p0.d).
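For context, here is a minimal hand-written IR sketch (not the test
from the patch) of an affected gather; its `<vscale x 2 x i1>` mask
covers 64-bit elements and so must lower to a .d-sized predicate:
```
; One mask bit per 64-bit element: the governing predicate of the
; selected SVE gather must be interpreted at .d granularity, not .b.
declare <vscale x 2 x i64> @llvm.masked.gather.nxv2i64.nxv2p0i64(<vscale x 2 x i64*>, i32, <vscale x 2 x i1>, <vscale x 2 x i64>)

define <vscale x 2 x i64> @gather(<vscale x 2 x i64*> %ptrs, <vscale x 2 x i1> %mask) {
  %v = call <vscale x 2 x i64> @llvm.masked.gather.nxv2i64.nxv2p0i64(<vscale x 2 x i64*> %ptrs, i32 8, <vscale x 2 x i1> %mask, <vscale x 2 x i64> undef)
  ret <vscale x 2 x i64> %v
}
```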
Fixes PR51182.
Differential Revision: https://reviews.llvm.org/D106943
(cherry picked from commit 191831e380f317cd2baa5d48abe02d1d11cd44cb)
Don't prefer python2's virtualenv when setting up the test-suite.
Always use python3 instead, since that's what we support everywhere else
anyway.
Differential Revision: https://reviews.llvm.org/D106941
The device runtime contains several calls to `__kmpc_get_hardware_num_threads_in_block`
and `__kmpc_get_hardware_num_blocks`. If the thread_limit and the num_teams are constant,
these calls can be folded to the constant value.
In this patch we use the already introduced `AAFoldRuntimeCall` and the `NumTeams` and
`NumThreads` kernel attributes (to be introduced in a different patch) to fold these functions.
The code checks all the kernels, and if their attributes match, the functions are folded.
In the future we will explore specializing for multiple values of NumThreads and NumTeams.
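As a sketch of the intended folding (the attribute name below is
illustrative, since the kernel attributes are introduced in a separate
patch):
```
declare i32 @__kmpc_get_hardware_num_threads_in_block()

; Illustrative attribute name. If every kernel reaching this call
; carries the same constant thread limit, the Attributor can fold the
; call away.
define void @__omp_offloading_kernel() "omp_target_thread_limit"="128" {
  %n = call i32 @__kmpc_get_hardware_num_threads_in_block()
  ; ... after folding, all uses of %n see the constant i32 128 ...
  ret void
}
```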
Depends on D106390
Reviewed By: jdoerfert, JonChesterfield
Differential Revision: https://reviews.llvm.org/D106033
At the moment, the revert ordering from this tool is unspecified (though
it happens to be in `git log` order, so newest reverts come first).
From the standpoint of tooling and users, this seems to be the opposite
of what we want by default: tools and users will generally try to apply
these reverts as cherry-picks. If two reverts in the list are close
enough to each other and get applied out of order, we'll get a merge
conflict.
Rather than having `reverse`s for all tools (and mental reverses for
manual users), just guarantee an oldest-first output ordering for this
function.
Differential Revision: https://reviews.llvm.org/D106838
This patch adds a peephole optimization `SETCC(FREEZE(x),const)` => `FREEZE(SETCC(x,const))`
if the SETCC is only used by BRCOND.
Combined with `BRCOND(FREEZE(X)) => BRCOND(X)`, this leads to a nice improvement in the generated assembly when x is a value produced by a masked load.
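At the IR level the pattern looks roughly like this (a hand-written
analogy; the combine itself operates on SETCC/BRCOND SelectionDAG
nodes):
```
; Before: the freeze sits between the compare and its operand.
  %x.fr = freeze i32 %x
  %cmp  = icmp eq i32 %x.fr, 0
  br i1 %cmp, label %a, label %b

; After: the compare sees the unfrozen value and the freeze moves to
; the i1 result, where BRCOND(FREEZE(X)) => BRCOND(X) erases it, so
; the compare and branch can fuse during instruction selection.
  %cmp    = icmp eq i32 %x, 0
  %cmp.fr = freeze i1 %cmp
  br i1 %cmp.fr, label %a, label %b
```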
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D105344
This reapplies commit cbb709e25124dc38ee593882051fc88c987fe591 and
includes the use of the lookup method instead of operator[] to avoid
accidentally setting (empty) simplification callbacks.
This reverts commit aa27430a625b2fd059707a87f8ba2df8f480ff11.
AAValueSimplify, AAValueConstantRange, and AAPotentialValues all look at
the IR by default. If queried for an IR position which has a
simplification callback, we should either look at the callback's return
value or give up. We do the latter for now.
It was writing to the source directory (which may not be writeable),
rather than using %t.
Fixes: a5dd6c6cf935 ("[LoopVectorize] Don't interleave scalar ordered reductions for inner loops")
We already have an indication (error) if the desired inline advisor
cannot be enabled, but we don't have a positive indication. Added
LLVM_DEBUG messages for the latter.
The dst/dstt/dstst/dststt instructions are nops on all PowerPC
cores that AIX supports. The AIX assembler also does not accept
these mnemonics. Turn them into nops on AIX (similar to dstall).
`StackAlignment` has only one use: `StackAlignment = std::max(StackAlignment, AI.getAlignment());`, so it is redundant.
Reviewed By: vitalybuka, MTC
Differential Revision: https://reviews.llvm.org/D106741
When an instruction is replaced in optimizeTransposes, RAUW also
replaces it in the ShapeMap (the ShapeMap is a ValueMap, so uses are
updated). In finalizeLowering, however, we skip updating uses if they
are in the ShapeMap, since those are lowered separately, at which point
we pick up the lowered operands.
In the testcase, because we replaced the doubled transpose with the
shuffle, the shuffle ended up in the ShapeMap. When we lowered the
columnwise load, the use in the shuffle was therefore not updated.
Then, when we removed the original columnwise load, we replaced that
use with undef. I.e. we ended up with:
```
%shuf = shufflevector <8 x double> undef, <8 x double> poison,
                                   ^^^^^
                      <6 x i32> <i32 0, i32 1, i32 2, i32 4, i32 5, i32 6>
```
Besides the fix itself, I have fortified this last bit: when we change
uses to undef while removing an instruction, we now track the undef'ed
instructions to make sure we eventually remove those too. This would
have caught the issue at compile time.
Differential Revision: https://reviews.llvm.org/D106714
The current JumpThreading pass does not jump thread loops since it can
result in irreducible control flow that harms other optimizations. This
prevents switch statements inside a loop from being optimized to use
unconditional branches.
This code pattern occurs in the core_state_transition function of
Coremark. The state machine can be implemented manually with goto
statements resulting in a large runtime improvement, and this transform
makes the switch implementation match the goto version in performance.
This patch specifically targets switch statements inside a loop that
have the opportunity to be threaded. Once it identifies an opportunity,
it creates new paths that branch directly to the correct code block.
For example, the left CFG could be transformed to the right CFG:
```
      sw.bb                          sw.bb
    /   |   \                      /   |   \
case1 case2 case3            case1   case2   case3
    \   |   /                     |       |       |
    latch.bb                latch.2 latch.3 latch.1
    br sw.bb                      |       |       |
                            sw.bb.2 sw.bb.3 sw.bb.1
                           br case2 br case3 br case1
```
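A minimal IR sketch of such a loop (hypothetical, with exit edges
elided): the switch condition is a phi of constants, so each
predecessor of the header already determines the next case:
```
define void @state_machine() {
entry:
  br label %sw.bb

sw.bb:
  %state = phi i32 [ 1, %entry ], [ %state.next, %latch.bb ]
  switch i32 %state, label %latch.bb [
    i32 1, label %case1
    i32 2, label %case2
  ]

case1:               ; %state.next is the constant 2 on this path, so
  br label %latch.bb ; threading gives it a latch that jumps to case2

case2:
  br label %latch.bb

latch.bb:
  %state.next = phi i32 [ 2, %case1 ], [ 1, %case2 ], [ 1, %sw.bb ]
  br label %sw.bb
}
```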
Co-author: Justin Kreiner @jkreiner
Co-author: Ehsan Amiri @amehsan
Reviewed By: SjoerdMeijer
Differential Revision: https://reviews.llvm.org/D99205
This expands the cost model test for min/max to many more types,
including floating point minnum/maxnum and minimum/maximum, and FP16
with and without fullfp16. The old llc run lines are removed, as those
are better tested by CodeGen tests.
Patch by Mohammad Fawaz
This patch allows lifetime calls to be ignored (and later erased) if we
know that the copy-constant-to-alloca optimization is going to happen.
The case that is missed is when the global variable is in a different
address space than the alloca (as shown in the example added to the lit
test). This used to work before 6da31fa4a6.
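A reduced sketch of the missed pattern (hand-written, using
addrspace(4) for the constant global to illustrate the address-space
mismatch):
```
@g = addrspace(4) constant [4 x i32] [i32 0, i32 1, i32 2, i32 3]

declare void @llvm.lifetime.start.p0i8(i64, i8* nocapture)
declare void @llvm.lifetime.end.p0i8(i64, i8* nocapture)
declare void @llvm.memcpy.p0i8.p4i8.i64(i8*, i8 addrspace(4)*, i64, i1)

; The alloca is only ever initialized by a copy from a constant global,
; so the load can read @g directly; the lifetime markers around the
; memcpy no longer block that rewrite.
define i32 @f(i64 %idx) {
  %a = alloca [4 x i32]
  %p = bitcast [4 x i32]* %a to i8*
  call void @llvm.lifetime.start.p0i8(i64 16, i8* %p)
  call void @llvm.memcpy.p0i8.p4i8.i64(i8* %p, i8 addrspace(4)* bitcast ([4 x i32] addrspace(4)* @g to i8 addrspace(4)*), i64 16, i1 false)
  %gep = getelementptr [4 x i32], [4 x i32]* %a, i64 0, i64 %idx
  %v = load i32, i32* %gep
  call void @llvm.lifetime.end.p0i8(i64 16, i8* %p)
  ret i32 %v
}
```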
Differential Revision: https://reviews.llvm.org/D106573
Consider the following loop:
```
void foo(float *dst, float *src, int N) {
  for (int i = 0; i < N; i++) {
    dst[i] = 0.0;
    for (int j = 0; j < N; j++) {
      dst[i] += src[(i * N) + j];
    }
  }
}
```
When we are not building with -Ofast we may attempt to vectorise the
inner loop using ordered reductions instead. In addition, we also try
to select an appropriate interleave count for the inner loop. However,
when choosing a VF of 1 the inner loop will be scalar, and there is
existing code in selectInterleaveCount that limits the interleave count
to 2 for reductions, due to concerns about increasing the critical path.
For ordered reductions this problem is even worse due to the additional
data dependency, and so I've added code to simply disable interleaving
for scalar ordered reductions for now.
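To see why (a hand-written sketch of the VF=1 form, not the test
itself): an ordered reduction is a single serial chain of fadds that
must preserve evaluation order, so an interleaved copy of the loop body
could only extend the chain:
```
define float @sum(float* %src, i64 %n) {
entry:
  br label %loop

loop:
  %i = phi i64 [ 0, %entry ], [ %i.next, %loop ]
  %sum = phi float [ 0.000000e+00, %entry ], [ %sum.next, %loop ]
  %p = getelementptr float, float* %src, i64 %i
  %x = load float, float* %p
  ; Strict FP semantics: each fadd depends on the previous one, so a
  ; second interleaved iteration would have to chain onto %sum.next
  ; rather than run in parallel.
  %sum.next = fadd float %sum, %x
  %i.next = add i64 %i, 1
  %done = icmp eq i64 %i.next, %n
  br i1 %done, label %exit, label %loop

exit:
  ret float %sum.next
}
```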
Test added here:
Transforms/LoopVectorize/AArch64/strict-fadd-vf1.ll
Differential Revision: https://reviews.llvm.org/D106646
This is partially a workaround. SILowerI1Copies does not understand
unstructured loops, and would insert instructions to merge a mask
register in the same block where it was defined.
Replace the clang builtins and LLVM intrinsics for the SIMD extmul instructions
with normal codegen patterns.
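For example, the low signed i16x8-to-i32x4 variant now corresponds to
plain IR of roughly this shape (a sketch; the exact patterns live in
the ISel tablegen):
```
; i32x4.extmul_low_i16x8_s: multiply the sign-extended low halves.
define <4 x i32> @extmul_low_s(<8 x i16> %a, <8 x i16> %b) {
  %lowa = shufflevector <8 x i16> %a, <8 x i16> poison, <4 x i32> <i32 0, i32 1, i32 2, i32 3>
  %lowb = shufflevector <8 x i16> %b, <8 x i16> poison, <4 x i32> <i32 0, i32 1, i32 2, i32 3>
  %exta = sext <4 x i16> %lowa to <4 x i32>
  %extb = sext <4 x i16> %lowb to <4 x i32>
  %mul = mul <4 x i32> %exta, %extb
  ret <4 x i32> %mul
}
```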
Differential Revision: https://reviews.llvm.org/D106724
- This patch consists of the bare basic code needed in order to generate some assembly for the z/OS target.
- Only the .text and the .bss sections are added for now.
- The relevant MCSectionGOFF/Symbol interfaces have been added. This enables us to print out the GOFF machine code sections.
- This patch enables us to add simple lit tests wherever possible, and contributes to the testing coverage for the z/OS target.
- Further improvements and additions will be made in future patches.
Reviewed By: tmatheson
Differential Revision: https://reviews.llvm.org/D106380
When hoisting/moving calls to other locations, we strip unknown metadata. Such calls are usually marked `speculatable`, i.e. they are guaranteed to not cause undefined behaviour when run anywhere. However, call-site attributes are only guaranteed to hold at the call's original position, so we should likewise strip attributes that can cause immediate undefined behaviour if those attributes are not valid in the context where the call is moved to.
This patch introduces such an API and uses it in relevant passes. See
updated tests.
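As a hedged illustration (hypothetical functions, not from the tests):
a speculatable call can carry a call-site attribute that only holds
under the branch it is hoisted out of:
```
declare i32 @f(i32*) speculatable readnone nounwind

define i32 @caller(i32* %p, i1 %c) {
entry:
  br i1 %c, label %if.then, label %exit

if.then:
  ; %p is only known nonnull on this path. If the call is hoisted into
  ; %entry, the nonnull attribute no longer holds and must be dropped.
  %v = call i32 @f(i32* nonnull %p)
  br label %exit

exit:
  %r = phi i32 [ %v, %if.then ], [ 0, %entry ]
  ret i32 %r
}
```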
Fix for PR50744.
Reviewed By: nikic, jdoerfert, lebedev.ri
Differential Revision: https://reviews.llvm.org/D104641
This reverts commit 1cfecf4fc4278afb0005923f6dff595cd372da5c.
This commit broke LLVM code generated through XLA by removing a
conditional on Ld->getExtensionType() == ISD::NON_EXTLOAD.
This is not a perfect revert: the new function is left in place, as
other uses of it now exist.
This reverts commit d7bbb1230a94cb239aa4a8cb896c45571444675d.
There were follow-up uses of a deleted method, and I didn't run the
tests. Undo the revert so I can redo it properly.
This reverts commit 1cfecf4fc4278afb0005923f6dff595cd372da5c.
This commit broke LLVM code generated through XLA by removing a
conditional on Ld->getExtensionType() == ISD::NON_EXTLOAD.
Avoid several crashes when DBG_INSTR_REF and DBG_PHI instructions are fed
to the instruction scheduler. DBG_INSTR_REFs should be treated like
DBG_LABELs, and just ignored for the purpose of scheduling [0].
DBG_PHIs however behave much more like DBG_VALUEs: they refer to register
operands, and if some register defs get shuffled around during instruction
scheduling, there's a risk that the debug instr will refer to the wrong
value. There's already a facility for updating DBG_VALUEs to reflect this;
add DBG_PHI to the list of instructions that it will update.
[0] Suboptimal, but it's what instr scheduling does right now.
Differential Revision: https://reviews.llvm.org/D106663