Mirror of https://github.com/RPCS3/llvm-mirror.git (synced 2025-01-31 20:51:52 +01:00)

43082 Commits

Craig Topper
8652178bc5 [IR] Add Type::isIntOrIntVectorTy(unsigned) similar to the existing isIntegerTy(unsigned), but also works for vectors.
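
A minimal sketch of the new helper in use (the wrapper function and include
are illustrative, not part of the commit); `Ty` is an assumed llvm::Type*:

#include "llvm/IR/Type.h"

// true for i32 and for <N x i32>, mirroring the scalar-only isIntegerTy(32)
bool isI32OrI32Vector(const llvm::Type *Ty) {
  return Ty->isIntOrIntVectorTy(32);
}
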
llvm-svn: 307492
2017-07-09 07:04:03 +00:00
Craig Topper
86739c18e2 [IR] Make use of Type::isPtrOrPtrVectorTy/isIntOrIntVectorTy/isFPOrFPVectorTy to shorten code. NFC
llvm-svn: 307491
2017-07-09 07:04:00 +00:00
Hiroshi Inoue
1d578b7e75 fix trivial typos; NFC
sucessor -> successor 

llvm-svn: 307488
2017-07-09 05:54:44 +00:00
Simon Pilgrim
7f479d6ac7 [AMDGPU] Fix -Wimplicit-fallthrough warning. NFCI.
llvm-svn: 307485
2017-07-08 19:50:03 +00:00
Simon Pilgrim
82a8d24e96 [AArch64] Fix -Wimplicit-fallthrough warnings. NFCI.
Add breaks - doesn't affect results, as GPR/FPU both check for 32/64 bit sizes, so will still default to GenericOps in the same way.

llvm-svn: 307484
2017-07-08 19:28:24 +00:00
Simon Pilgrim
5d9ea7a075 [ARM] Fix -Wimplicit-fallthrough warning. NFCI.
llvm-svn: 307480
2017-07-08 18:42:04 +00:00
Simon Pilgrim
1a783539c1 Fix -Wimplicit-fallthrough warning. NFCI.
llvm-svn: 307473
2017-07-08 15:26:26 +00:00
Sanjay Patel
0c17c0063e [x86] add SBB optimization for SETBE (ule) condition code
x86 scalar select-of-constants (Cond ? C1 : C2) combining/lowering is a mess 
with missing optimizations. We handle some patterns, but miss logical variants.

To clean that up, we should convert all select-of-constants to logic/math and 
enhance the combining for the expected patterns from that. Selecting 0 or -1 
needs extra attention to produce the optimal code as shown here.
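
As a hypothetical source-level illustration (not taken from the commit), this
is the shape of select-of-constants that the SETBE/ule path can now lower with
SBB instead of materializing both constants:

// all-ones/all-zeros mask from an unsigned <= compare (ule maps to SETBE)
unsigned maskIfULE(unsigned A, unsigned B) {
  return A <= B ? -1u : 0u;
}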

Attempt to verify that all of these IR forms are logically equivalent:
http://rise4fun.com/Alive/plxs

Earlier steps in this series:
rL306040
rL306072
rL307404 (D34652)

As acknowledged in the earlier review, there's a possibility that some Intel
uarch would prefer to produce an xor to clear the fake register operand with
sbb %eax, %eax. This will likely need to be addressed in a separate pass.

llvm-svn: 307471
2017-07-08 14:04:48 +00:00
Eric Christopher
8fe591d225 Remove a variable that was only used in asserts and duplicated a value we already use anyway.
llvm-svn: 307457
2017-07-08 01:03:29 +00:00
Lei Huang
eab61d9acf [PowerPC] NFC : Common up definitions of isIntS16Immediate and update parameter to int16_t
llvm-svn: 307442
2017-07-07 21:12:35 +00:00
Tony Jiang
e91b1f0847 [PPC CodeGen] Expand the bitreverse.i32 intrinsic.
Differential Revision: https://reviews.llvm.org/D33572
Fix PR: https://bugs.llvm.org/show_bug.cgi?id=33093

llvm-svn: 307413
2017-07-07 16:41:55 +00:00
Simon Pilgrim
faf0b1f574 Fix some more -Wimplicit-fallthrough warnings. NFCI.
llvm-svn: 307411
2017-07-07 16:40:06 +00:00
Matthew Simpson
72db9fecc0 [ARM] Implement interleaved access bug fix from r306334
r306334 fixed a bug in AArch64 dealing with wide interleaved accesses having
pointer types. The bug also exists in ARM, so this patch copies over the fix.

llvm-svn: 307409
2017-07-07 16:15:05 +00:00
Sam Kolton
fa1d4df786 [AMDGPU] Assembler: refactor convert methods (VOP3 and MIMG)
Summary: Simplified converter methods for VOP3 and MIMG.

Reviewers: dp, artem.tamazov

Subscribers: arsenm, kzhuravl, wdng, nhaehnle, yaxunl, dstuttard, tpr, vpykhtin, t-tye

Differential Revision: https://reviews.llvm.org/D35047

llvm-svn: 307407
2017-07-07 15:21:52 +00:00
Sanjay Patel
0a3961a15c [x86] add SBB optimization for SETAE (uge) condition code
x86 scalar select-of-constants (Cond ? C1 : C2) combining/lowering is a mess 
with missing optimizations. We handle some patterns, but miss logical variants.

To clean that up, we should convert all select-of-constants to logic/math and 
enhance the combining for the expected patterns from that. DAGCombiner already 
has the foundation to allow the transforms, so we just need to fill in the holes 
for x86 math op lowering. Selecting 0 or -1 needs extra attention to produce the
optimal code as shown here.

Attempt to verify that all of these IR forms are logically equivalent:
http://rise4fun.com/Alive/plxs

Earlier steps in this series:
rL306040
rL306072

Differential Revision: https://reviews.llvm.org/D34652

llvm-svn: 307404
2017-07-07 14:56:20 +00:00
Dmitry Preobrazhensky
69f65ba292 [AMDGPU][mc][gfx9] Added support of op_sel/op_sel_hi for V_MAD_MIX*
See https://bugs.llvm.org//show_bug.cgi?id=33595

Reviewers: vpykhtin, artem.tamazov, arsenm

Differential Revision: https://reviews.llvm.org/D35021

llvm-svn: 307402
2017-07-07 14:29:06 +00:00
Simon Pilgrim
4901fdecbe [Lanai] Fix -Wimplicit-fallthrough warning. NFCI.
llvm-svn: 307396
2017-07-07 13:22:47 +00:00
Simon Pilgrim
4dc6c0148c [Hexagon] Fix some more -Wimplicit-fallthrough warnings. NFCI.
llvm-svn: 307395
2017-07-07 13:21:43 +00:00
Simon Pilgrim
cbc9dfc291 [AArch64] Fix -Wimplicit-fallthrough warnings. NFCI.
llvm-svn: 307393
2017-07-07 13:03:28 +00:00
Florian Hahn
588c2fb554 [AArch64] Use 16 bytes as preferred function alignment on Cortex-A57.
Summary:
This change gives a 0.89% improvement in execution time, a 0.94% improvement
in benchmark scores and a 0.62% increase in binary size on a Cortex-A57.
These numbers are the geomean results on a wide range of benchmarks from
the test-suite, SPEC2000, SPEC2006 and a range of proprietary suites.

The software optimization guide for the Cortex-A57 recommends 16 byte
branch alignment.
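
A minimal sketch of the idea, assuming the log2-based alignment convention of
this era's subtarget code (names and the non-A57 default are illustrative):

// 2^4 = 16 bytes for Cortex-A57, per its software optimization guide
unsigned prefFunctionAlignmentLog2(bool IsCortexA57) {
  return IsCortexA57 ? 4 : 2; // assumed 4-byte default elsewhere
}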

Reviewers: t.p.northover, mcrosier, javed.absar, kristof.beyls, sbaranga

Reviewed By: kristof.beyls

Subscribers: aemerson, rengolin, llvm-commits

Differential Revision: https://reviews.llvm.org/D34954

llvm-svn: 307389
2017-07-07 10:43:01 +00:00
Simon Pilgrim
23bb64b0ff [PowerPC] Fix -Wimplicit-fallthrough warnings. NFCI.
llvm-svn: 307382
2017-07-07 10:21:44 +00:00
Simon Pilgrim
36101c148d [AMDGPU] Fix -Wimplicit-fallthrough warnings. NFCI.
llvm-svn: 307381
2017-07-07 10:18:57 +00:00
Florian Hahn
3afa7f20f6 [AArch64] Use 16 bytes as preferred function alignment on Cortex-A72.
Summary:
This change gives a 0.34% improvement in execution time, a 0.61% improvement
in benchmark scores and a 0.57% increase in binary size on a Cortex-A72.
These numbers are the geomean results on a wide range of benchmarks from
the test-suite, SPEC2000, SPEC2006 and a range of proprietary suites.

The software optimization guide for the Cortex-A72 recommends 16 byte
branch alignment.

Reviewers: t.p.northover, kristof.beyls, rengolin, sbaranga, mcrosier, javed.absar

Reviewed By: kristof.beyls

Subscribers: llvm-commits, aemerson

Differential Revision: https://reviews.llvm.org/D34961

llvm-svn: 307380
2017-07-07 10:15:49 +00:00
Simon Pilgrim
144dd3c621 [Sparc] Fix -Wimplicit-fallthrough warning. NFCI.
llvm-svn: 307378
2017-07-07 10:14:46 +00:00
Simon Pilgrim
a399a4d551 [SystemZ] Fix -Wimplicit-fallthrough warnings. NFCI.
llvm-svn: 307376
2017-07-07 10:07:09 +00:00
Simon Pilgrim
8cca661544 [Arm] Fix -Wimplicit-fallthrough warnings. NFCI.
llvm-svn: 307375
2017-07-07 10:05:45 +00:00
Simon Pilgrim
211b55102e [Hexagon] Fix -Wimplicit-fallthrough warnings. NFCI.
llvm-svn: 307374
2017-07-07 10:04:12 +00:00
Diana Picus
7a450fe6fd [ARM] GlobalISel: Fixup r307365
Rename member DebugLoc -> DbgLoc (so it doesn't conflict with the class
name).

llvm-svn: 307366
2017-07-07 08:53:27 +00:00
Diana Picus
fc47e4e210 [ARM] GlobalISel: Select hard G_FCMP for s32
We lower to a sequence consisting of:
- MOVi 0 into a register
- VCMPS to do the actual comparison and set the VFP flags
- FMSTAT to move the flags out of the VFP unit
- MOVCCi to either use the "zero register" that we have previously set
  with the MOVi, or move 1 into the result register, based on the values
  of the flags

As was the case with soft-float, for some predicates (one, ueq) we
actually need two comparisons instead of just one. When that happens, we
generate two VCMPS-FMSTAT-MOVCCi sequences and chain them by means of
using the result of the first MOVCCi as the "zero register" for the
second one. This is a bit overkill, since one comparison followed by
two non-flag-setting conditional moves should be enough. In any case,
the backend manages to CSE one of the comparisons away so it doesn't
matter much.

Note that unlike SelectionDAG and FastISel, we always use VCMPS, and not
VCMPES. This makes the code a lot simpler, and it also seems correct
since the LLVM Lang Ref defines simple true/false returns if the
operands are QNaNs. For SNaNs, even VCMPS throws an Invalid Operand
exception, so they won't be slipping through unnoticed.

Implementation-wise, this introduces a template so we can share the same
code that we use for handling integer comparisons, since the only
differences are in the details (exact opcodes to be used etc). Hopefully
this will be easy to extend to s64 G_FCMP.

llvm-svn: 307365
2017-07-07 08:39:04 +00:00
Matthias Braun
3df3235c8d LiveRegUnits: Rename accumulateBackward()->accumulate()
Unlike the stepForward()/stepBackward() methods, accumulate() doesn't have
a direction, as defs, uses and clobbers all have the same effect.

Also improve the documentation comment.

llvm-svn: 307351
2017-07-07 03:02:17 +00:00
Sean Fertile
2de601c7f8 Extend memcpy expansion in Transform/Utils to handle wider operand types.
Adds loop expansions for known-size and unknown-size memcpy calls, allowing the
target to provide the operand types through TTI callbacks. The default values
for the TTI callbacks use int8 operand types and match the existing behaviour
if they aren't overridden by the target.
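
A conceptual C++ model of the emitted expansion (illustrative only), using the
default i8 operand type; a target override simply widens the element type:

#include <cstddef>

// runtime-count loop: the same shape also covers the unknown-size case
void expandedMemCpy(unsigned char *Dst, const unsigned char *Src,
                    std::size_t Len) {
  for (std::size_t I = 0; I != Len; ++I)
    Dst[I] = Src[I];
}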

Differential revision: https://reviews.llvm.org/D32536

llvm-svn: 307346
2017-07-07 02:00:06 +00:00
Michael Kuperstein
edd218dbfd Reverting r307326 because it breaks clang tests.
llvm-svn: 307334
2017-07-06 23:24:39 +00:00
Michael Kuperstein
dc71dcb613 [NVPTX] Add lowering of i128 params.
The patch adds support for lowering i128 params. The changes are quite trivial,
treating i128 as a "special case" of integer type. With this patch, we lower i128
params the same way as aggregates of size 16 bytes: .param .b8 _ [16].

Currently, NVPTX can't deal with 128-bit integers:
* in some cases because of failed assertions like
ValVTs.size() == OutVals.size() && "Bad return value decomposition"
* in other cases by emitting PTX with .i128 or .u128 types (which are not valid [1])
[1] http://docs.nvidia.com/cuda/parallel-thread-execution/index.html#fundamental-types

Differential Revision: https://reviews.llvm.org/D34555
Patch by: Denys Zariaiev (denys.zariaiev@gmail.com)

llvm-svn: 307326
2017-07-06 22:18:54 +00:00
Martin Storsjo
0743dfe73a [COFF, AArch64] Set the private label prefix to .L
This fixes calls to external functions whose names start with a capital L,
avoiding errors like this:
fatal error: error in backend: assembler label 'LocalFree' can not be undefined
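
A minimal sketch of the fix (the exact MCAsmInfo subclass touched is an
assumption); PrivateLabelPrefix controls the assembler-local label prefix:

// was "L", which made the assembler treat symbols like "LocalFree" as locals
PrivateLabelPrefix = ".L";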

Differential Revision: https://reviews.llvm.org/D35079

llvm-svn: 307317
2017-07-06 21:08:34 +00:00
Matt Arsenault
8858f865a6 AMDGPU: Add macro fusion schedule DAG mutation
Try to increase opportunities to shrink vcc uses.

llvm-svn: 307313
2017-07-06 20:57:05 +00:00
Matt Arsenault
67da610b84 AMDGPU: Minor cleanup of shrinking logic
llvm-svn: 307312
2017-07-06 20:56:59 +00:00
Stanislav Mekhanoshin
c39b2b753b [AMDGPU] Always use rcp + mul with fast math
Regardless of relaxation options such as -cl-fast-relaxed-math,
we are producing rather long code for fdiv via the amdgcn_fdiv_fast
intrinsic. This intrinsic is used to replace an fdiv carrying 2.5ulp
metadata; it does not handle denormals and is thus believed to be fast.

An fdiv instruction can also have the fast math flag, either by itself
or together with fpmath metadata. Clang used with a relaxation flag
always produces both the metadata and the fast flag:

%div = fdiv fast float %v, %0, !fpmath !12
!12 = !{float 2.500000e+00}

The current implementation ignores the fast flag and favors the metadata. An
instruction with just the fast flag would be lowered to the fastest rcp +
mul sequence, but that never happens in practice because of the mutual
clang and backend behavior described above.

This change allows an "fdiv fast" to always be lowered as rcp + mul.
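
A conceptual C++ model of the new lowering (names are illustrative only):

float fdivFastLowering(float X, float Y) {
  float Rcp = 1.0f / Y; // stands in for the hardware reciprocal approximation
  return X * Rcp;       // single multiply; denormals unhandled, as noted above
}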

Differential Revision: https://reviews.llvm.org/D34844

llvm-svn: 307308
2017-07-06 20:34:21 +00:00
Aditya Nandakumar
8b33f4001f [GISel]: Enhance the MachineIRBuilder API
Allows the MachineIRBuilder APIs to directly create registers (based on
LLT or TargetRegisterClass) and to accept MachineInstrBuilders, which are
implicitly converted to registers (via getOperand(0).getReg()).

E.g. usage:
LLT s32 = LLT::scalar(32);
auto C32 = Builder.buildConstant(s32, 32);
auto Tmp = Builder.buildInstr(TargetOpcode::G_SUB, s32, C32, OtherReg);
auto Tmp2 = Builder.buildInstr(Opcode, DstReg, Builder.buildConstant(s32, 31));
...

Only a few methods added for now.

Reviewed by Tim

llvm-svn: 307302
2017-07-06 19:40:07 +00:00
Craig Topper
d8ebaac997 [Constants] If we already have a ConstantInt*, prefer to use isZero/isOne/isMinusOne instead of isNullValue/isOneValue/isAllOnesValue inherited from Constant. NFCI
Going through the Constant methods requires redetermining that the Constant is a ConstantInt and then calling isZero/isOne/isMinusOne.
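
A minimal sketch of the preferred form (the wrapper function is illustrative,
not from the commit); `V` is an assumed llvm::Value*:

#include "llvm/IR/Constants.h"
#include "llvm/Support/Casting.h"

bool isZeroConstantInt(const llvm::Value *V) {
  if (const auto *CI = llvm::dyn_cast<llvm::ConstantInt>(V))
    return CI->isZero(); // checks the APInt directly, no Constant-level dispatch
  return false;
}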

llvm-svn: 307292
2017-07-06 18:39:47 +00:00
Simon Pilgrim
c1e4401ffa Fix spelling in comments. NFCI.
llvm-svn: 307288
2017-07-06 18:17:07 +00:00
Simon Pilgrim
7aae3c3a38 [X86][SSE4A] Add support for shuffle combining to INSERTQI.
llvm-svn: 307268
2017-07-06 15:34:17 +00:00
Joel Jones
f459d91094 Doxygen formatting. NFCI
llvm-svn: 307263
2017-07-06 14:17:36 +00:00
Simon Pilgrim
4117f45403 [X86][SSE] combineX86ShuffleChain - merge duplicate creations of integer mask types
llvm-svn: 307257
2017-07-06 13:09:19 +00:00
Simon Pilgrim
8b60a5c670 [X86][SSE] combineX86ShuffleChain - merge duplicate 'Zeroable' element masks
llvm-svn: 307255
2017-07-06 12:40:10 +00:00
Simon Pilgrim
59df197d87 [X86][SSE4A] Add support for shuffle combining to EXTRQ.
llvm-svn: 307254
2017-07-06 12:22:58 +00:00
Simon Pilgrim
14bbbf2fb7 [X86][SSE4A] Split EXTRQ/INSERTQ shuffle matching from lowering. NFCI.
First step toward supporting shuffle combining to EXTRQ/INSERTQ.

llvm-svn: 307250
2017-07-06 11:06:54 +00:00
Diana Picus
54070b2884 [ARM] GlobalISel: Map s32 G_FCMP in reg bank select
Map hard G_FCMP operands to FPR and the result to GPR.

llvm-svn: 307245
2017-07-06 09:57:46 +00:00
Diana Picus
1704a62e0b [ARM] GlobalISel: Legalize G_FCMP for s32
This covers both hard and soft float.

Hard float is easy, since it's just Legal.

Soft float is more involved, because there are several different ways to
handle it based on the predicate: one and ueq need not only one, but two
libcalls to get a result. Furthermore, we have large differences between
the values returned by the AEABI and GNU functions.

AEABI functions return a nice 1 or 0, representing true or false
respectively. GNU functions generally return a value that needs to be compared
against 0 (e.g. for ogt, the value returned by the libcall is > 0 for
true).  We could introduce redundant comparisons for AEABI as well, but
they don't seem easy to remove afterwards, so we do different processing
based on whether or not the result really needs to be compared against
something (and just truncate if it doesn't).
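
Illustrative declarations of the two conventions (signatures assumed from the
AEABI and libgcc documentation, not taken from this patch):

extern "C" int __aeabi_fcmpgt(float A, float B); // returns exactly 1 or 0
extern "C" int __gtsf2(float A, float B);        // compare result against 0

bool isOGT(float A, float B, bool UseAEABI) {
  return UseAEABI ? __aeabi_fcmpgt(A, B) != 0 // already 0/1: just truncate
                  : __gtsf2(A, B) > 0;        // needs the extra comparison
}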

llvm-svn: 307243
2017-07-06 09:09:33 +00:00
Diana Picus
131de8c452 [ARM] GlobalISel: Widen s1, s8, s16 G_CONSTANT
Get the legalizer to widen small constants.
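
A rough sketch, assuming the LegalizerInfo::setAction API of this period (the
exact rules written in ARMLegalizerInfo are an assumption):

// inside the legalizer constructor: widen sub-32-bit constants to s32
for (auto Ty : {LLT::scalar(1), LLT::scalar(8), LLT::scalar(16)})
  setAction({TargetOpcode::G_CONSTANT, Ty}, WidenScalar);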

llvm-svn: 307239
2017-07-06 08:04:16 +00:00
Sam Clegg
d6548e35b2 [WebAssembly] Fix types for address-taken functions
Differential Revision: https://reviews.llvm.org/D34966

llvm-svn: 307198
2017-07-05 20:25:08 +00:00