Summary:
We have defined MTSPR/MFSPR and MTSPR8/MFSPR8, but we only defined
mtspr/mfspr InstAlias for some MTSPR/MFSPR.
This patch adds the InstAlias definitions for MTSPR8/MFSPR8,
and adds some new mtspr/mfspr InstAlias definitions we may use.
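For illustration only (a hypothetical example, not taken from the patch): an
extended mnemonic and the numeric-SPR form of mfspr denote the same
instruction, so either spelling below reads the link register (SPR 8).

  static inline unsigned long read_lr(void) {
      unsigned long lr;
      __asm__ volatile("mfspr %0, 8" : "=r"(lr));  /* equivalent to "mflr %0" */
      return lr;
  }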
Reviewed By: steven.zhang
Differential Revision: https://reviews.llvm.org/D77531
Reduce XMM->GPR traffic by performing bitops on the vectors, and using a single MOVMSK call.
This requires us to use vectors of the same size and element width, but we can mix fp/int type equivalents with suitable bitcasting.
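A hedged source-level sketch of the idea (the function names are made up, and
this is not the patch itself): rather than extracting two masks and combining
them in a GPR, combine the vectors first and extract a single mask.

  #include <immintrin.h>
  int lanes_negative_in_both_before(__m128 a, __m128 b) {
      return _mm_movemask_ps(a) & _mm_movemask_ps(b);  /* two XMM->GPR moves */
  }
  int lanes_negative_in_both_after(__m128 a, __m128 b) {
      return _mm_movemask_ps(_mm_and_ps(a, b));        /* one XMM->GPR move */
  }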
This patch adds handling of the constrained FP intrinsics for round,
truncate and extend for the PowerPC target, with the necessary tests.
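A minimal sketch of source that can produce such intrinsics (this assumes the
front end is run with strict FP semantics, e.g. -ffp-model=strict; the flag
and function names are illustrative, not part of the patch):

  /* Under strict FP, the conversions below are emitted as constrained
     intrinsics (llvm.experimental.constrained.fptrunc/fpext) rather than
     plain fptrunc/fpext, and the backend must know how to lower them. */
  float  narrow(double x) { return (float)x; }
  double widen(float x)   { return (double)x; }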
Reviewed By: steven.zhang
Differential Revision: https://reviews.llvm.org/D64193
On PowerPC, we have the vnmsubfp Altivec instruction for the fnmsub operation on
the v4f32 type. The default pattern for this instruction never works since we
don't have a legal fneg for v4f32 when VSX is disabled.
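A hedged source-level sketch of the operation in question (the function name is
made up; whether the multiply and subtract are actually fused depends on the FP
contraction settings):

  typedef float v4f32 __attribute__((vector_size(16)));
  /* Per-lane -(a*b - c); with Altivec but without VSX this is what the
     vnmsubfp pattern needs to cover. */
  v4f32 fnmsub4(v4f32 a, v4f32 b, v4f32 c) {
      return -(a * b - c);
  }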
Reviewed By: steven.zhang
Differential Revision: https://reviews.llvm.org/D80617
We never codegen them so this doesn't matter in practice. But
sometimes someone comes along and tries to use these flags
for something else, like the Load Value Injection inline assembly
handling.
If the input to the bitcast is a sign bit test, it makes sense to
directly use vpmovmskb or vmovmskps/pd. This removes the need to
copy the sign bits to a k-register and then to a GPR.
Fixes PR46200.
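A hedged sketch of the kind of pattern affected (intrinsics chosen for
illustration only; they assume AVX512BW/VL): a less-than-zero compare is a
per-lane sign-bit test, and its mask result is then used as a plain integer.

  #include <immintrin.h>
  unsigned short negative_lanes(__m128i v) {
      __mmask16 m = _mm_cmplt_epi8_mask(v, _mm_setzero_si128());
      return (unsigned short)m;  /* can now be a vpmovmskb of v instead of
                                    kmov k-reg -> GPR */
  }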
Differential Revision: https://reviews.llvm.org/D81327
This pass has no dependencies on other passes so conditionally
including it in the pipeline doesn't do much. Just move the check into the
pass itself to keep it isolated.
A lot of what EVEX->VEX does is equivalent to what the
prioritization in the assembly parser does. When an AVX mnemonic
is used without any EVEX features or XMM16-31, the parser will
pick the VEX encoding.
Since codegen doesn't go through the parser, we should also
use VEX instructions when we can so that the code coming out of the
integrated assembler matches what you'd get from outputting an
assembly listing and parsing it.
The pass early outs if AVX isn't enabled and uses TSFlags to
check for EVEX instructions before doing the more costly table
lookups. Hopefully that's enough to keep this from impacting
-O0 compile times.
relocImm was a ComplexPattern that handled both ConstantSDNode
and X86Wrapper. But it was only applied selectively because using
it would make patterns unimportable into FastISel or
GlobalISel. So it only got applied to flag-setting instructions,
stores, RMW arithmetic instructions, and rotates.
Most of the test changes are a result of making patterns available
to GlobalISel or FastISel. The absolute-cmp.ll change is due to
this fixing a pattern ordering issue to make an absolute symbol
match to an 8-bit immediate before trying a 32-bit immediate.
I tried to use PatFrags to reduce the repetition, but I was getting
errors from TableGen.
Summary:
This commit slightly modifies the MCDisassembler and llvm-objdump to
allow targets to also decode entire symbols.
WebAssembly uses the onSymbolStart hook to decode preludes.
WebAssembly partially disassembles the symbol in its target-specific
way and then falls back to the normal flow of llvm-objdump.
AMDGPU needs it to decode kernel descriptors entirely, and move to the
next symbol.
This commit splits the above task into two:
- Changes to llvm-objdump and MC-layer without breaking WebAssembly code
[ this commit ]
- AMDGPU's implementation of onSymbolStart that decodes kernel
descriptors. [ https://reviews.llvm.org/D80713 ]
Reviewers: scott.linder, t-tye, sunfish, arsenm, jhenderson, MaskRay, aardappel
Reviewed By: scott.linder, jhenderson, aardappel
Subscribers: bcain, dschuff, wdng, tpr, sbc100, jgravelle-google, hiraditya, aheejin, MaskRay, rupprecht, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D80512
Similar to a recent change to the X86 backend, this changes things so
that we always produce reduction intrinsics for all reduction types,
not just the legal ones. This gives a better chance in the backend to
custom lower them to something more suitable for MVE. Especially for
something like fadd, the in-order reduction produced during DAG lowering
is already better than the shuffles produced in the midend, and we can
do even better with a bit of custom lowering.
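For reference, a minimal sketch of the kind of source this applies to (the
function is illustrative, not from the patch): an in-order floating-point
reduction loop that the vectorizer now always turns into a reduction
intrinsic, leaving the final lowering choice to the MVE backend.

  float sum(const float *a, int n) {
      float s = 0.0f;
      for (int i = 0; i < n; i++)
          s += a[i];           /* in-order fadd reduction */
      return s;
  }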
Differential Revision: https://reviews.llvm.org/D81398
We select all of these via patterns now, so there's no reason to disallow this.
Update select-dup.mir to show that we correctly select the smaller types.
Differential Revision: https://reviews.llvm.org/D81322
This was making it so that the instructions weren't eliminated in
select-rev.mir and select-trn.mir despite not being used.
Update the tests accordingly.
Differential Revision: https://reviews.llvm.org/D81492
Here, I am proposing to add a special case for the MASSV powf4/powd2 functions (the SIMD counterparts of powf/pow in the MASSV library) in the MASSV pass, so that later optimizations apply in the vector float case, such as the DAGCombiner conversion of pow(x,0.75) and pow(x,0.25), for double and single precision, into a sequence of sqrt's. My reason for doing this is that the sqrt sequence for pow(x,0.75) and pow(x,0.25) is faster than powf4/powd2 on P8 and P9.
When a MASSV function is called and the exponent of pow is 0.75 or 0.25, we will get the sequence of sqrt's; if the exponent is not 0.75 or 0.25, we will get the appropriate MASSV function.
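For reference, the scalar identities the sqrt expansion relies on (ignoring the
edge cases, such as negative or infinite inputs, that the combiner has to guard
against); the vector versions apply the same identities per lane:

  #include <math.h>
  double pow_025(double x) { return sqrt(sqrt(x)); }            /* x^0.25 */
  double pow_075(double x) { return sqrt(x) * sqrt(sqrt(x)); }  /* x^0.75 */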
Reviewed By: steven.zhang
Tags: #LLVM #PowerPC
Differential Revision: https://reviews.llvm.org/D80744
Noticed while trying to cleanup D66004 - if a shuffle operand came from a scalar, we're better off using INSERTPS vs UNPCKLPS as this is more likely to load fold later on. It also matches our existing BUILD_VECTOR lowering.
We can extend this to other PINSRB/D/Q/W cases in the future as the need arises.
This is an NFC patch to make convertToImmediateForm a light wrapper
for converting xform and imm form instructions on PowerPC.
Reviewed By: steven.zhang
Differential Revision: https://reviews.llvm.org/D80907
To make sure that no barrier gets placed on the architectural execution
path, each

  BLR x<N>

instruction gets transformed to a

  BL __llvm_slsblr_thunk_x<N>

instruction, with __llvm_slsblr_thunk_x<N> a thunk that contains

__llvm_slsblr_thunk_x<N>:
  BR x<N>
  <speculation barrier>
Therefore, the BLR instruction gets split into two: one BL and one BR.
This transformation results in not inserting a speculation barrier on
the architectural execution path.
The mitigation is off by default and can be enabled by the
harden-sls-blr subtarget feature.
As a linker is allowed to clobber X16 and X17 on function calls, the
above code transformation would not be correct in case a linker does so
when N=16 or N=17. Therefore, when the mitigation is enabled, generation
of BLR x16 or BLR x17 is avoided.
As BLRA* indirect calls are not produced by LLVM currently, this does
not aim to implement support for those.
Differential Revision: https://reviews.llvm.org/D81402
In the BPF instruction selection DAGToDAG transformation phase, the
BPF backend had an optimization to turn a load from a readonly data
section into a direct load of the values. This phase was implemented
before libbpf had readonly section support and before alu32
was supported.
This phase, however, may generate an incorrect type when alu32 is
enabled. The following is an example:
-bash-4.4$ cat ~/tmp2/t.c
struct t {
  unsigned char a;
  unsigned char b;
  unsigned char c;
};
extern void foo(void *);
int test() {
  struct t v = {
    .b = 2,
  };
  foo(&v);
  return 0;
}
The compiler will turn the local variable "v" into a readonly section.
During the instruction selection phase, the compiler generates two
loads from the readonly section, one 2-byte load and one 1-byte load; e.g., for the 2-byte load:
t8: i32,ch = load<(dereferenceable load 2 from `i8* getelementptr inbounds
(%struct.t, %struct.t* @__const.test.v, i64 0, i32 0)`, align 1),
anyext from i16> t3, GlobalAddress:i64<%struct.t* @__const.test.v> 0, undef:i64
t9: ch = store<(store 2 into %ir.v1.sub1), trunc to i16> t3, t8,
FrameIndex:i64<0>, undef:i64
The BPF backend changed t8 to i64 = Constant<2>, eventually producing the following machine IR:
t10: i64 = MOV_ri TargetConstant:i64<2>
t40: i32 = SLL_ri_32 t10, TargetConstant:i32<8>
t41: i32 = OR_ri_32 t40, TargetConstant:i64<0>
t9: ch = STH32<Mem:(store 2 into %ir.v1.sub1)> t41, TargetFrameIndex:i64<0>,
TargetConstant:i64<0>, t3
Note that t10 above is not correct. The type should be i32 and the instruction
should be MOV_ri_32. The reason for the incorrect instruction selection is that
BPF instruction selection generated an i64 constant instead of the i32 constant
specified in the original load instruction. Such an incorrect instruction sequence
eventually caused the following fatal error when a COPY tried to copy a 64-bit
register to a 32-bit subregister:
Impossible reg-to-reg copy
UNREACHABLE executed at ../lib/Target/BPF/BPFInstrInfo.cpp:42!
This patch fixes the issue by using the load result type instead of always
using i64 when doing the readonly load optimization.
Differential Revision: https://reviews.llvm.org/D81630
This was implicitly assuming the branch instruction was the next after
the pseudo. It's possible for another non-terminator instruction to be
inserted between the intrinsic and the branch, so adjust the insertion
point. Fixes a non-terminator after terminator verifier error (which
without the verifier, manifested itself as an infinite loop in
analyzeBranch much later on).
Summary:
After their range checks were removed in 7f50c15be5c0, br_tables
started being duplicated into their predecessors by tail
folding. Unfortunately, when the br_tables were in loops this
transformation introduced bad irreducible control flow which was later
expanded into even more br_tables. This commit abuses the
`isNotDuplicable` property to prevent this irreducible control flow
from being introduced. This change saves a few dozen bytes of code
size and has a negligible effect on performance for most of the large
Emscripten benchmarks, but can improve performance significantly on
microbenchmarks of switches in loops.
Reviewers: aheejin, dschuff
Subscribers: sbc100, jgravelle-google, hiraditya, sunfish, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D81628
The spec for these says they need 0xf3 but also mentions REP
before the mnemonic. But I don't think it's fair to users to make
them write REP first. And gas doesn't make them. objdump seems to
disassemble with or without the prefix and just prints any 0xf3
as REP.
'NP' means that the instruction is not recognized with a 66, F2 or F3
prefix. It will either #UD or decode to a different instruction.
All of the cases here should fall into the #UD variety since
we should be detecting the collision with other instructions when
we build the disassembler tables.
Summary:
Since we handle AIX linkage in PPCAIXAsmPrinter::emitLinkage() as of the patch https://reviews.llvm.org/D75866, it no longer goes through AsmPrinter::emitLinkage(), so we clean up some AIX-related code in AsmPrinter::emitLinkage().
Reviewers: Jason Liu
Differential Revision: https://reviews.llvm.org/D81613
Convert shift+or bool vector patterns into CONCAT_VECTORS if we know this will be lowered to KUNPCK (which requires 16+ vector elements).
Fixes PR32547
AVX512 mask types are often bitcasted to scalar integers for various ops before being bitcast back to be used as a predicate. In many cases we can avoid these KMASK<->GPR transfers and perform equivalent operations on the mask unit.
If the destination mask type is legal, and we can confirm that the scalar op originally came from a mask/vector/float/double type then we should try to avoid the scalar entirely.
This avoids some codegen issues noticed while working on PTEST/MOVMSK improvements.
Partially fixes PR32547 - we don't create a KUNPCK yet, but OR(X,KSHIFTL(Y)) can be handled in a separate patch.
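A hedged source-level sketch (not from the patch; it assumes AVX512F/VL
intrinsics) of the kind of KMASK<->GPR round trip being targeted: two compares
produce masks, a scalar bitwise op combines them, and the result is used as a
mask again.

  #include <immintrin.h>
  __m256i select_both(__m256i a, __m256i b, __m256i t, __m256i x, __m256i y) {
      __mmask8 m = (__mmask8)(_mm256_cmpgt_epi32_mask(a, t) &
                              _mm256_cmpgt_epi32_mask(b, t));
      return _mm256_mask_blend_epi32(m, x, y);  /* mask used as predicate */
  }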
Differential Revision: https://reviews.llvm.org/D81548
By moving target-independent code from
llvm/lib/Target/X86/X86IndirectThunks.cpp
to
llvm/include/llvm/CodeGen/IndirectThunks.h
Differential Revision: https://reviews.llvm.org/D81401
This appears to have been added when In64BitMode was added to a
bunch of instructions that don't have register operands. When an
instruction uses a register the parser will prevent a 64-bit
register from being parsed on a 32-bit target. But with only
memory and immediate operands this doesn't happen.
TEST64ri32 does have a register operand so the issue the predicate
was supposed to fix doesn't apply.
Some processors may speculatively execute the instructions immediately
following RET (returns) and BR (indirect jumps), even though
control flow should change unconditionally at these instructions.
To avoid a potential mis-speculatively executed gadget after these
instructions leaking secrets through side channels, this pass places a
speculation barrier immediately after every RET and BR instruction.
Since these barriers are never on the correct, architectural execution
path, performance overhead of this is expected to be low.
On targets that implement the Armv8.0-SB Speculation Barrier extension,
a single SB instruction is emitted that acts as a speculation barrier.
On other targets, a DSB SYS followed by an ISB is emitted to act as a
speculation barrier.
These speculation barriers are implemented as pseudo instructions to
prevent later passes from analyzing them and potentially removing them.
Even though currently LLVM does not produce BRAA/BRAB/BRAAZ/BRABZ
instructions, these are also mitigated by the pass and tested through a
MIR test.
The mitigation is off by default and can be enabled by the
harden-sls-retbr subtarget feature.
Differential Revision: https://reviews.llvm.org/D81400
The fp16 ops are legalized by extending/chopping them as needed.
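A minimal, target-neutral sketch of what "extending/chopping" means at the
source level (the function is hypothetical, not taken from the patch): the
half-precision op is done by extending to float and truncating back.

  _Float16 add_h(_Float16 a, _Float16 b) {
      return (_Float16)((float)a + (float)b);
  }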
The tests are shamelessly stolen from the RISC-V backend.
Recommit with fixed RUN lines for the test.
Differential Revision: https://reviews.llvm.org/D77569
The baffling thing is that this passed the OpenCL conformance test for
32-bit integer divisions, but only failed in the 32-bit path of
BypassSlowDivisions for the 64-bit tests.
@nikic raised an issue on D75936 that the added complexity to the O0 pipeline was causing noticeable slowdowns for `-O0` builds. This patch addresses the issue by adding a pass with equal security properties, but without any optimizations (and more importantly, without the need for expensive analysis dependencies).
Reviewers: nikic, craig.topper, mattdr
Reviewed By: craig.topper, mattdr
Differential Revision: https://reviews.llvm.org/D80964
This is a cheap instruction. It's better to repeat it than to do
two separate operations.
There are probably more cases like this, but this one was reported
as a regression in our internal benchmarking.