Summary:
This stops it from misidentifying unconditional branches as conditional
branches, which fixes a -verify-machineinstrs error about exiting a function
via fall-through.
Reviewers: sdardis
Subscribers: dsanders, sdardis, llvm-commits
Differential Revision: http://reviews.llvm.org/D19864
llvm-svn: 268731
Summary:
Check calling convention in AMDGPUMachineFunction::isKernel
This will be used for AMDGPU_HSA_KERNEL symbol type in output ELF.
Also, in the future unused non-kernels may be optimized.
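As an illustration only (the calling-convention enumerators named here are the
ones current LLVM provides and are assumptions about what the patch checks),
the check amounts to something like:

  #include "llvm/IR/CallingConv.h"
  #include "llvm/IR/Function.h"

  // Sketch: classify a function as a kernel purely from its calling convention.
  static bool isKernelCC(const llvm::Function &F) {
    switch (F.getCallingConv()) {
    case llvm::CallingConv::AMDGPU_KERNEL: // assumed enumerator
    case llvm::CallingConv::SPIR_KERNEL:   // accepted for the HCC frontend
      return true;
    default:
      return false;
    }
  }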
Reviewers: tstellarAMD, arsenm
Subscribers: arsenm, joker.eph, llvm-commits
Differential Revision: http://reviews.llvm.org/D19917
llvm-svn: 268719
Summary:
This will be used for AMDGPU_HSA_KERNEL symbol type in output ELF.
Also, in the future unused non-kernels may be optimized.
For now, also accept SPIR_KERNEL for the HCC frontend.
Also add bitcode compatibility tests for the missing calling conventions,
except AVR_BUILTIN, which has no parsing support.
Reviewers: tstellarAMD, arsenm
Subscribers: arsenm, joker.eph, llvm-commits
llvm-svn: 268717
If we don't, values that aren't precisely representable in f16 could
be used as-is in a promoted f32 operation, which would produce
incorrect results.
AArch64 had the correct behavior; add a focused test.
Fixes http://llvm.org/PR26871
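A small standalone illustration of the hazard (assumes a compiler with
_Float16 support; not taken from the test added here):

  #include <cstdio>

  int main() {
    // 2049.0 is not exactly representable in f16 (11-bit significand),
    // so converting it rounds to 2048.0.
    _Float16 h = (_Float16)2049.0f;
    float right = (float)h + 1.0f; // operand rounded to f16 first: 2049.0
    float wrong = 2049.0f + 1.0f;  // unrounded operand used as-is: 2050.0
    std::printf("rounded-first: %f  unrounded: %f\n", right, wrong);
    return 0;
  }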
llvm-svn: 268700
Given something like:
ldr r0, .LCPI0_0 (== pc-rel var)
add r0, pc
ldr r1, .LCPI0_1 (== pc-rel var)
add r1, pc
we cannot combine the two ldr instructions and their litpool entries, because
each is added to a different pc to form the correct address. I think the
original logic came from a time when we fused the LDRpci/PICADD instructions
into one pseudo-instruction, so the PC was always immediately at hand. That's
no longer the case.
Should fix general-dynamic TLS access on Linux, and quite possibly other -fPIC
code that relies on litpools (e.g. v6m and -Oz compilations), though trivial
tweaks of the .ll test didn't provoke anything.
llvm-svn: 268662
Summary:
Discovered by Dave Airlie, fixes an assertion in Khronos OpenGL CTS
GL43-CTS.shader_storage_buffer_object.advanced-matrix.
In this particular case, the buffer load intrinsic fed into a uniform
conditional branch, and led the brcond lowering down the wrong path.
Reviewers: tstellarAMD, arsenm
Subscribers: arsenm, llvm-commits
Differential Revision: http://reviews.llvm.org/D19931
llvm-svn: 268650
Summary:
Code object version 2 is now the default. If you want to emit version 1, use
the amdgcn--amdhsa-amdcov1 triple.
Reviewers: arsenm, kzhuravl
Subscribers: arsenm, llvm-commits
Differential Revision: http://reviews.llvm.org/D19283
llvm-svn: 268647
This backend was supposed to generate C++ code which will re-construct
the LLVM IR passed as input. This seems to me to have very marginal
usefulness in the first place.
However, the code has never been updated to use IRBuilder, which makes
its current value negative -- people who look at the output may be
steered to use the *wrong* C++ APIs to construct IR.
Furthermore, the code it generates hasn't compiled since at least
2013.
Differential Revision: http://reviews.llvm.org/D19942
llvm-svn: 268631
Both Linux and kFreeBSD use glibc, so follow similar code paths.
Add isTargetGlibc to check for this, and use it instead of isTargetLinux
in a few places.
Fixes PR22248 for kFreeBSD.
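Roughly, the new predicate boils down to something like the following sketch
(the function name is illustrative, not the exact in-tree helper; the Triple
header path is the one from this era of LLVM):

  #include "llvm/ADT/Triple.h"

  // Sketch: both Linux and kFreeBSD use glibc, so treat them the same
  // wherever the old code only asked for Linux.
  static bool isTargetGlibcLike(const llvm::Triple &T) {
    return T.getOS() == llvm::Triple::Linux ||
           T.getOS() == llvm::Triple::KFreeBSD;
  }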
Differential Revision: http://reviews.llvm.org/D19104
llvm-svn: 268624
The result type of setcc is dependent on whether or not AVX512 is
present.
We had an X86-specific DAG-combine which assumed that the result type
should be i8 when it could be i1.
This meant that we would generate illegal setccs which LowerSETCC did
not like.
Instead, use an appropriate type and zero extend to i8.
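In rough terms, the shape of the fix looks like the sketch below (not the
actual combine; header paths and helper names are those of current LLVM):

  #include "llvm/CodeGen/SelectionDAG.h"
  #include "llvm/CodeGen/TargetLowering.h"

  // Sketch: ask the target what type setcc legally produces for these
  // operands, build the setcc with that type, then zero-extend back to i8.
  static llvm::SDValue setccThenZext(llvm::SelectionDAG &DAG,
                                     const llvm::TargetLowering &TLI,
                                     const llvm::SDLoc &DL,
                                     llvm::SDValue LHS, llvm::SDValue RHS) {
    llvm::EVT CCVT = TLI.getSetCCResultType(DAG.getDataLayout(),
                                            *DAG.getContext(),
                                            LHS.getValueType());
    llvm::SDValue CC = DAG.getSetCC(DL, CCVT, LHS, RHS, llvm::ISD::SETLT);
    return DAG.getZExtOrTrunc(CC, DL, llvm::MVT::i8);
  }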
Also, there were some scenarios where the fold should have fired but
didn't because we were overly cautious about the types. This meant that
we generated:
shrl $31, %edi
andl $1, %edi
kmovw %edi, %k0
kxnorw %k0, %k0, %k1
kshiftrw $15, %k1, %k1
kxorw %k1, %k0, %k0
kmovw %k0, %eax
instead of:
testl %edi, %edi
setns %al
This fixes PR27638.
llvm-svn: 268609
This introduces a SystemZ-specific "backchain" attribute on functions, which
enables writing the frame backchain link as specified by the ABI. This will
be used to implement the -mbackchain option in clang.
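For illustration, the query amounts to something like this (assumed sketch,
not the committed frame-lowering code; MachineFunction::getFunction is used
here in its current reference-returning form):

  #include "llvm/CodeGen/MachineFunction.h"
  #include "llvm/IR/Function.h"

  // Sketch: frame lowering only emits the backchain store when the IR
  // function carries the "backchain" string attribute.
  static bool wantsBackchain(const llvm::MachineFunction &MF) {
    return MF.getFunction().hasFnAttribute("backchain");
  }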
Differential Revision: http://reviews.llvm.org/D19889
Fixed in this version: added RegState::Define and RegState::Kill on R1D
in prologue.
llvm-svn: 268581
This introduces a SystemZ-specific "backchain" attribute on functions, which
enables writing the frame backchain link as specified by the ABI. This will
be used to implement the -mbackchain option in clang.
Differential Revision: http://reviews.llvm.org/D19889
llvm-svn: 268571
The new register classes allow us to tell the machine verifier that it is
fine to use RIP for address accesses in x32 mode. Prior to this patch,
we would complain that we are using a GR64 in place of a GR32, whereas it
is actually fine to use GR64 for x32 as long as the high 32 bits are zero.
RIP has this property and is used for RIP-relative addressing.
This partially fixes http://llvm.org/PR27481.
llvm-svn: 268567
Some vector bit operations are promoted instead of having custom lowering. This patch changes the isOperationLegalOrCustom tests for vector AND/OR operations to use a new TLI helper isOperationLegalOrCustomOrPromote instead, allowing the SSE implementations to stay on the SIMD unit.
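The new helper is presumably the existing legal-or-custom query with Promote
added to the accepted actions; a simplified sketch of that shape (not the
exact in-tree code):

  #include "llvm/CodeGen/TargetLowering.h"

  // Sketch: also accept Promote, so a promoted vector AND/OR still counts
  // as "supported" when the combine asks.
  static bool legalCustomOrPromote(const llvm::TargetLowering &TLI,
                                   unsigned Op, llvm::EVT VT) {
    if (VT != llvm::MVT::Other && !TLI.isTypeLegal(VT))
      return false;
    switch (TLI.getOperationAction(Op, VT)) {
    case llvm::TargetLowering::Legal:
    case llvm::TargetLowering::Custom:
    case llvm::TargetLowering::Promote:
      return true;
    default:
      return false;
    }
  }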
Differential Revision: http://reviews.llvm.org/D19805
llvm-svn: 268561
This patch adds support for estimating the square root, its reciprocal, and
division (or reciprocal) using the combiner's generic reciprocal machinery.
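For context, the refinement such machinery wraps around a hardware estimate
is plain Newton-Raphson; a standalone illustration (not backend code):

  #include <cmath>
  #include <cstdio>

  // One Newton-Raphson step for the reciprocal square root:
  //   x1 = x0 * (1.5 - 0.5 * a * x0 * x0)
  static float refineRsqrt(float a, float x0) {
    return x0 * (1.5f - 0.5f * a * x0 * x0);
  }

  int main() {
    float a = 2.0f;
    float est = 0.7f; // stand-in for a coarse hardware estimate of 1/sqrt(2)
    for (int i = 0; i < 2; ++i)
      est = refineRsqrt(a, est);
    std::printf("refined: %f  exact: %f\n", est, 1.0f / std::sqrt(a));
    return 0;
  }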
llvm-svn: 268539
Summary:
Currently, when checking whether a stack is "BigStack" or not, the check does not account for spills and arguments. Therefore, LLVM won't reserve a spill slot for a stack that actually is "BigStack", which may cause a scavenger failure.
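A simplified sketch of the intended accounting, using current API shapes (the
helper name, the spill-size parameter, and the threshold are illustrative,
not lifted from the patch):

  #include "llvm/CodeGen/MachineFrameInfo.h"
  #include "llvm/CodeGen/MachineFunction.h"

  // Sketch: include outgoing-argument space and the estimated spill size,
  // not just the local objects, when deciding whether to reserve an
  // emergency spill slot for the register scavenger.
  static bool isBigStack(const llvm::MachineFunction &MF,
                         unsigned EstimatedSpillSize) {
    const llvm::MachineFrameInfo &MFI = MF.getFrameInfo();
    uint64_t Estimate = MFI.estimateStackSize(MF) + EstimatedSpillSize +
                        MFI.getMaxCallFrameSize();
    return Estimate >= 250; // illustrative threshold
  }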
Reviewers: rengolin
Subscribers: aemerson, rengolin, tberghammer, danalbert, srhines, llvm-commits
Differential Revision: http://reviews.llvm.org/D19896
llvm-svn: 268529
This patch corresponds to review:
http://reviews.llvm.org/D18592
It allows the PPC back end to generate the xxspltw instruction where we
previously only emitted vspltw.
llvm-svn: 268516
Vector bit operations are typically promoted instead of having custom lowering. This patch changes the isOperationLegalOrCustom tests for vector AND/OR operations to use isOperationLegalOrPromote instead, allowing the SSE implementations to stay on the SIMD unit.
Differential Revision: http://reviews.llvm.org/D19805
llvm-svn: 268504
Modification of previously existing code (variable rename only), with a unit test added.
Differential Revision: http://reviews.llvm.org/D19368
llvm-svn: 268493
This code implements builtin_setjmp and builtin_longjmp exception handling intrinsics for 32-bit Sparc back-ends.
The code started as a mash-up of the PowerPC and X86 versions, although sufficient changes to both had to be made for Sparc handling.
Note: I have manual tests running. I'll work on a unit test and add that to the rest of this diff in the next day.
Also, this implementation is only for 32-bit Sparc. I haven't focused on a 64-bit version, although I have left the code in a prepared state for implementing it, including detecting the pointer size and comments indicating where I suspect there may be differences.
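For reference, these are the frontend-level builtins the intrinsics back; a
plain standalone example, nothing Sparc-specific:

  #include <cstdio>

  // __builtin_setjmp/__builtin_longjmp use a five-word buffer and the
  // longjmp value must be 1; these are the constructs the new lowering
  // has to support.
  static void *Buf[5];

  static void jumpBack() {
    __builtin_longjmp(Buf, 1); // never returns
  }

  int main() {
    if (__builtin_setjmp(Buf) == 0) {
      std::printf("first pass\n");
      jumpBack();
    } else {
      std::printf("came back via __builtin_longjmp\n");
    }
    return 0;
  }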
Differential Revision: http://reviews.llvm.org/D19798
llvm-svn: 268483
i1 is now a legal type for X86 with AVX512.
There were some paths in X86FastISel which were not quite ready to see
an i1 value: they were not quite sure how to deal with sign/zero extends
for call arguments.
DTRT by extending to i8 for zeroext and bailing out of FastISel for
signext.
This fixes PR27591.
llvm-svn: 268470
The replaced load may have implicit-defs and those defs may be used
in the block of the original load. Make sure to update the liveness
accordingly.
This is a generalization of r267817.
llvm-svn: 268412
Summary:
It's always zero for SelectionDAG and is never read by the MIPS backend, so
do the same for FastISel.
Reviewers: sdardis
Subscribers: dsanders, llvm-commits, sdardis
Differential Revision: http://reviews.llvm.org/D19863
llvm-svn: 268386
After the layout of the basic blocks is set, the target may be able to get rid
of unconditional branches to fallthrough blocks that the generic code does not
catch. This happens any time TargetInstrInfo::AnalyzeBranch is not able to
analyze all the branches involved in the terminator sequence, while still
understanding a few of them.
In such a situation, AnalyzeBranch can directly modify the branches if it has
been instructed to do so.
This patch takes advantage of that.
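The hook being leveraged is the AllowModify parameter of the branch analysis;
a hedged sketch of how a caller hands the target permission to rewrite
(spelled analyzeBranch in current LLVM, AnalyzeBranch at the time):

  #include "llvm/ADT/SmallVector.h"
  #include "llvm/CodeGen/MachineBasicBlock.h"
  #include "llvm/CodeGen/MachineOperand.h"
  #include "llvm/CodeGen/TargetInstrInfo.h"

  // Sketch: with AllowModify set, the target's branch analysis is allowed
  // to delete branches it can prove redundant (e.g. an unconditional branch
  // to the fallthrough block), even when it cannot analyze the whole
  // terminator sequence.
  static void letTargetSimplifyTerminators(const llvm::TargetInstrInfo &TII,
                                           llvm::MachineBasicBlock &MBB) {
    llvm::MachineBasicBlock *TBB = nullptr, *FBB = nullptr;
    llvm::SmallVector<llvm::MachineOperand, 4> Cond;
    (void)TII.analyzeBranch(MBB, TBB, FBB, Cond, /*AllowModify=*/true);
  }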
llvm-svn: 268328
This operation may branch to the handler block, and we do not want that to
happen from anywhere within the basic block.
Moreover, by marking it "terminator and branch", the machine verifier no
longer wrongly assumes (because AnalyzeBranch does not know better) that the
branch is analyzable. Indeed, the target was seeing only the unconditional
branch and not the faulting load op, and thought the block ended in a simple
unconditional branch.
The machine verifier was complaining because of that, and other optimizations
could have performed wrong transformations!
In the process, simplify the representation of the handler block in
the faulting load op. Now, we directly reference the handler block
instead of using a label. This has the benefits of:
1. MC knows how to issue a label for a BB, so leave that to it.
2. Accessing the target BB from its label is painful, whereas it is
direct from an MBB operand.
Note: The 2-byte offset in implicit-null-check.ll comes from the fact that
the unconditional jumps are not removed anymore, as the whole terminator
sequence is no longer analyzable.
This will be fixed in a subsequent commit.
llvm-svn: 268327
We were using v_readlane_b32 with the lane set to zero, but this won't
work if thread 0 is not active.
Differential Revision: http://reviews.llvm.org/D19745
llvm-svn: 268295