Commit Graph

16256 Commits

Author SHA1 Message Date
Craig Topper
9666215a51 [X86] Fix a test I failed to re-generate in r272249.
llvm-svn: 272250
2016-06-09 07:10:34 +00:00
Craig Topper
a336600f12 [X86] Bring consistent naming to the SSE/AVX and AVX512 PALIGNR instructions. Then add shuffle decode printing for the EVEX forms, which is made easier by having the naming structure more similar to other instructions.
llvm-svn: 272249
2016-06-09 07:06:38 +00:00
Quentin Colombet
b4d0707b26 [MIR] Check that generic virtual registers get a size.
Without that check it was possible to write test cases where the size
was not specified and we ended up with weird asserts down the road,
because the default value (1) would not make sense.

llvm-svn: 272226
2016-06-08 23:27:46 +00:00
Dehao Chen
521d44df2a Revive http://reviews.llvm.org/D12778 to handle forward-hot-prob and backward-hot-prob consistently.
Summary:
Consider the following diamond CFG:

  A
 / \
B   C
 \ /
  D

Suppose A->B and A->C have probabilities 81% and 19%. In block-placement, A->B is called a hot edge and the final placement should be ABDC. However, the current implementation outputs ABCD. This is because when choosing the next block of B, it checks if Freq(C->D) > Freq(B->D) * 20%, which is true (if Freq(A) = 100, then Freq(B->D) = 81, Freq(C->D) = 19, and 19 > 81*20%=16.2). Actually, we should use 25% instead of 20% as the probability here, so that we have 19 < 81*25%=20.25, and the desired ABDC layout will be generated.
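
A plain-arithmetic sketch of the check being tuned (hypothetical helper; the real code works on BranchProbability/BlockFrequency values):

```
// Decide whether C->D is hot enough that D should not be placed right after B.
// 'threshold' is the hot-prob bound this patch changes from 0.20 to 0.25.
bool delayPlacingD(double freqBD, double freqCD, double threshold) {
  return freqCD > freqBD * threshold;
}

// With Freq(A) = 100: freqBD = 81, freqCD = 19.
//   threshold = 0.20: 19 > 16.2  -> true,  yielding the ABCD layout (wrong).
//   threshold = 0.25: 19 < 20.25 -> false, yielding the ABDC layout (desired).
```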

Reviewers: djasper, davidxl

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D20989

llvm-svn: 272203
2016-06-08 21:30:12 +00:00
Quentin Colombet
c7fca81422 [AArch64][RegisterBankInfo] G_OR are fine on either GPR or FPR.
Teach AArch64RegisterBankInfo that G_OR can be mapped on either GPR or
FPR for 64-bit or 32-bit values.

Add test cases demonstrating how this information is used to coalesce a
computation on a single register bank.

llvm-svn: 272170
2016-06-08 16:53:32 +00:00
Oliver Stannard
e8d6d1e081 [ARM] MSR instructions implicitly set CPSR
The MSR instructions can write to the CPSR, but we did not model this
fact, so we could emit them in the middle of IT blocks, changing the
condition flags for later instructions in the block.

The tests use two calls to llvm.write_register.i32 because it is valid
to use these instructions at the end of an IT block, which if-conversion
does in some cases. With two calls, the first clobbers the flags, so
a branch has to be used to make the second one conditional.

Differential Revision: http://reviews.llvm.org/D21139

llvm-svn: 272154
2016-06-08 15:26:34 +00:00
Matthias Braun
4b4bf8be09 MIR: Fix parsing of stack object references in MachineMemOperands
The MachineMemOperand parser lacked the code to handle %stack.X
references (%fixed-stack.X was working).

llvm-svn: 272082
2016-06-08 00:47:07 +00:00
Nicolai Haehnle
364da50d63 AMDGPU: Add amdgpu-ps-wqm-outputs function attributes
Summary:
The presence of this attribute indicates that VGPR outputs should be computed
in whole quad mode. This will be used by Mesa for prolog pixel shaders, so
that derivatives can be taken of shader inputs computed by the prolog, fixing
a bug.

The generated code could certainly be improved: if a prolog pixel shader is
used (which isn't common in modern OpenGL - they're used for gl_Color, polygon
stipples, and forcing per-sample interpolation), Mesa will use this attribute
unconditionally, because it has to be conservative. So WQM may be used in the
prolog when it isn't really needed, and furthermore a silly back-and-forth
switch is likely to happen at the boundary between prolog and main shader
parts.

Fixing this is a bit involved: we'd first have to add a mechanism by which
LLVM writes the WQM-related input requirements to the main shader part binary,
and then Mesa specializes the prolog part accordingly. At that point, we may
as well just compile a monolithic shader...

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=95130

Reviewers: arsenm, tstellarAMD, mareko

Subscribers: arsenm, llvm-commits, kzhuravl

Differential Revision: http://reviews.llvm.org/D20839

llvm-svn: 272063
2016-06-07 21:37:17 +00:00
Simon Pilgrim
28e8ea3a0d [X86][SSE4A] Regenerated SSE4A intrinsics tests
There are no VEX-encoded versions of the SSE4A instructions, so make sure that AVX targets give the same output

llvm-svn: 272060
2016-06-07 21:15:45 +00:00
Eric Christopher
d5bd140a4f Revert "Differential Revision: http://reviews.llvm.org/D20557"
Author: Wei Ding <wei.ding2@amd.com>
Date:   Tue Jun 7 19:04:44 2016 +0000

    Differential Revision: http://reviews.llvm.org/D20557

    git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@272044
    91177308-0d34-0410-b5e6-96231b3b80d8

as it was breaking the bots.

This reverts commit r272044.

llvm-svn: 272056
2016-06-07 20:27:12 +00:00
Etienne Bergeron
3b57eca787 [stack-protection] Add support for MSVC buffer security check
Summary:
This patch adds support for the MSVC buffer security check implementation.

The buffer security check is turned on with the '/GS' compiler switch.
  * https://msdn.microsoft.com/en-us/library/8dbf701c.aspx
  * To be added to clang here: http://reviews.llvm.org/D20347

Some overview of buffer security check feature and implementation:
  * https://msdn.microsoft.com/en-us/library/aa290051(VS.71).aspx
  * http://www.ksyash.com/2011/01/buffer-overflow-protection-3/
  * http://blog.osom.info/2012/02/understanding-vs-c-compilers-buffer.html


For the following example:
```
int example(int offset, int index) {
  char buffer[10];
  memset(buffer, 0xCC, index);
  return buffer[index];
}
```

The MSVC compiler is adding these instructions to perform stack integrity check:
```
        push        ebp  
        mov         ebp,esp  
        sub         esp,50h  
  [1]   mov         eax,dword ptr [__security_cookie (01068024h)]  
  [2]   xor         eax,ebp  
  [3]   mov         dword ptr [ebp-4],eax  
        push        ebx  
        push        esi  
        push        edi  
        mov         eax,dword ptr [index]  
        push        eax  
        push        0CCh  
        lea         ecx,[buffer]  
        push        ecx  
        call        _memset (010610B9h)  
        add         esp,0Ch  
        mov         eax,dword ptr [index]  
        movsx       eax,byte ptr buffer[eax]  
        pop         edi  
        pop         esi  
        pop         ebx  
  [4]   mov         ecx,dword ptr [ebp-4]  
  [5]   xor         ecx,ebp  
  [6]   call        @__security_check_cookie@4 (01061276h)  
        mov         esp,ebp  
        pop         ebp  
        ret  
```

The instrumentation above is:
  * [1] loads the global security canary,
  * [3] stores the locally computed ([2]) canary to the guard slot,
  * [4] reloads the guard slot and ([5]) recomputes the global canary,
  * [6] validates the resulting canary with '__security_check_cookie' and performs error handling.

Overview of the current stack-protection implementation:
  * lib/CodeGen/StackProtector.cpp
    * There is a default stack-protection implementation applied on intermediate representation.
    * The target can overload 'getIRStackGuard' method if it has a standard location for the stack protector cookie.
    * An intrinsic 'Intrinsic::stackprotector' is added to the prologue. It will be expanded by the instruction selection pass (DAG or Fast).
    * Basic Blocks are added to every instrumented function to receive the code for handling stack guard validation and errors handling.
    * Guard manipulation and comparison are added directly to the intermediate representation.

  * lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp
  * lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
    * There is an implementation that adds instrumentation during instruction selection (for better handling of sibling calls).
      * see long comment above 'class StackProtectorDescriptor' declaration.
    * The target needs to override 'getSDagStackGuard' to activate SDAG stack protection generation. (note: getIRStackGuard must return nullptr).
      * 'getSDagStackGuard' returns the appropriate stack guard (security cookie)
    * The code is generated by 'SelectionDAGBuilder.cpp' and 'SelectionDAGISel.cpp'.

  * include/llvm/Target/TargetLowering.h
    * Contains the function to retrieve the default Guard 'Value'; should be overridden by each target to select which implementation is used and to provide the Guard 'Value'.

  * lib/Target/X86/X86ISelLowering.cpp
    * Contains the x86 specialisation; Guard 'Value' used by the SelectionDAG algorithm.

Function-based Instrumentation:
  * MSVC doesn't inline the stack guard comparison in every function. Instead, a call to '__security_check_cookie' is added to the epilogue before every return instruction.
  * To support function-based instrumentation, this patch is
    * adding a function to get the function-based check (llvm 'Value', see include/llvm/Target/TargetLowering.h),
      * If provided, the stack protection instrumentation won't be inlined and a call to that function will be added to the prologue.
    * modifying (SelectionDAGISel.cpp) to avoid producing the basic blocks used for inline instrumentation,
    * generating the function-based instrumentation during the ISEL pass (SelectionDAGBuilder.cpp),
    * if FastISEL is used (not SelectionDAG), using the fallback, which relies on the same function-based check implemented over the intermediate representation (StackProtector.cpp).

Modifications
  * adding support for MSVC (lib/Target/X86/X86ISelLowering.cpp)
  * adding support for function-based instrumentation (lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp, .h)
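
A heavily simplified sketch of how a target might wire up the hooks described above (the hook shapes, and the name of the function-based check getter, are assumed from this description; 'MyTargetLowering' is hypothetical):

```
// Returning nullptr from getIRStackGuard routes instrumentation to SelectionDAG.
Value *MyTargetLowering::getIRStackGuard(IRBuilder<> &IRB) const {
  return nullptr; // no IR-level guard: activate the SDAG path instead
}

// The stack guard (security cookie) used by the SelectionDAG implementation.
Value *MyTargetLowering::getSDagStackGuard(const Module &M) const {
  return M.getGlobalVariable("__security_cookie");
}

// Function-based check: if non-null, a call to this function replaces the
// inlined compare-and-branch instrumentation.
Value *MyTargetLowering::getSSPStackGuardCheck(const Module &M) const {
  return M.getFunction("__security_check_cookie");
}
```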

Results

  * IR generated instrumentation:
```
clang-cl /GS test.cc /Od /c -mllvm -print-isel-input
```

```
*** Final LLVM Code input to ISel ***

; Function Attrs: nounwind sspstrong
define i32 @"\01?example@@YAHHH@Z"(i32 %offset, i32 %index) #0 {
entry:
  %StackGuardSlot = alloca i8*                                                  <<<-- Allocated guard slot
  %0 = call i8* @llvm.stackguard()                                              <<<-- Loading Stack Guard value
  call void @llvm.stackprotector(i8* %0, i8** %StackGuardSlot)                  <<<-- Prologue intrinsic call (store to Guard slot)
  %index.addr = alloca i32, align 4
  %offset.addr = alloca i32, align 4
  %buffer = alloca [10 x i8], align 1
  store i32 %index, i32* %index.addr, align 4
  store i32 %offset, i32* %offset.addr, align 4
  %arraydecay = getelementptr inbounds [10 x i8], [10 x i8]* %buffer, i32 0, i32 0
  %1 = load i32, i32* %index.addr, align 4
  call void @llvm.memset.p0i8.i32(i8* %arraydecay, i8 -52, i32 %1, i32 1, i1 false)
  %2 = load i32, i32* %index.addr, align 4
  %arrayidx = getelementptr inbounds [10 x i8], [10 x i8]* %buffer, i32 0, i32 %2
  %3 = load i8, i8* %arrayidx, align 1
  %conv = sext i8 %3 to i32
  %4 = load volatile i8*, i8** %StackGuardSlot                                  <<<-- Loading Guard slot
  call void @__security_check_cookie(i8* %4)                                    <<<-- Epilogue function-based check
  ret i32 %conv
}
```

  * SelectionDAG generated instrumentation:

```
clang-cl /GS test.cc /O1 /c /FA
```

```
"?example@@YAHHH@Z":                    # @"\01?example@@YAHHH@Z"
# BB#0:                                 # %entry
        pushl   %esi
        subl    $16, %esp
        movl    ___security_cookie, %eax                                        <<<-- Loading Stack Guard value
        movl    28(%esp), %esi
        movl    %eax, 12(%esp)                                                  <<<-- Store to Guard slot
        leal    2(%esp), %eax
        pushl   %esi
        pushl   $204
        pushl   %eax
        calll   _memset
        addl    $12, %esp
        movsbl  2(%esp,%esi), %esi
        movl    12(%esp), %ecx                                                  <<<-- Loading Guard slot
        calll   @__security_check_cookie@4                                      <<<-- Epilogue function-based check
        movl    %esi, %eax
        addl    $16, %esp
        popl    %esi
        retl
```

Reviewers: kcc, pcc, eugenis, rnk

Subscribers: majnemer, llvm-commits, hans, thakis, rnk

Differential Revision: http://reviews.llvm.org/D20346

llvm-svn: 272053
2016-06-07 20:15:35 +00:00
Wei Ding
f737e3726e Differential Revision: http://reviews.llvm.org/D20557
llvm-svn: 272044
2016-06-07 19:04:44 +00:00
Geoff Berry
26dc05069f Reapply [AArch64] Fix isLegalAddImmediate() to return true for valid negative values.
Originally reviewed here: http://reviews.llvm.org/D17463
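
A sketch of the intended behavior (standalone model under assumed encoding rules; the actual change lives in the AArch64 target's isLegalAddImmediate):

```
#include <cstdint>

// A negative immediate is legal when its magnitude fits an ADD/SUB immediate:
// 12 bits, optionally shifted left by 12 (the ADD simply becomes a SUB).
bool isLegalAddImmediateModel(int64_t Imm) {
  if (Imm == INT64_MIN)
    return false; // negating would overflow
  if (Imm < 0)
    Imm = -Imm;
  return (Imm >> 12) == 0 || ((Imm & 0xfff) == 0 && (Imm >> 24) == 0);
}
```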

llvm-svn: 272023
2016-06-07 16:48:43 +00:00
Haicheng Wu
860b042ccd Revert "[MBP] Reduce code size by running tail merging in MBP."
This reverts commits r271930, r271915, and r271923.  They break a Thumb
self-hosting bot.

llvm-svn: 272017
2016-06-07 15:17:21 +00:00
Simon Pilgrim
7ece42dd08 [X86][AVX512] Added 512-bit integer vector non-temporal load tests
llvm-svn: 272016
2016-06-07 15:12:47 +00:00
Simon Pilgrim
67ca4cba96 [X86][SSE] Add general lowering of nontemporal vector loads
Currently the only way to use the (V)MOVNTDQA nontemporal vector load instructions is through the int_x86_sse41_movntdqa style builtins.

This patch adds support for lowering nontemporal loads from general IR, allowing us to remove the movntdqa builtins in a future patch.

We currently still fold nontemporal loads into suitable instructions; we should probably look at removing this (and nontemporal stores as well), or at least make the target's folding implementation aware that it's dealing with a nontemporal memory transaction.

There is also an issue that VMOVNTDQA only acts on 128-bit vectors on pre-AVX2 hardware - so currently a normal ymm load is still used on AVX1 targets.
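
For reference, the builtin route that this lowering eventually makes redundant, via the real SSE4.1 intrinsic header:

```
#include <smmintrin.h>

// Today's only path to (V)MOVNTDQA: the movntdqa builtin behind this intrinsic.
__m128i loadNT(const __m128i *p) {
  return _mm_stream_load_si128(const_cast<__m128i *>(p));
}
```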

Differential Revision: http://reviews.llvm.org/D20965

llvm-svn: 272010
2016-06-07 13:34:24 +00:00
James Molloy
bab28a1f85 [Thumb-1] Add optimized constant materialization for integers [256..512)
We can materialize these integers using a MOV; ADDi8 pair.

llvm-svn: 272007
2016-06-07 13:10:14 +00:00
Igor Breger
53dd84b6c0 [AVX512] Fix load opcode for fast isel.
Differential Revision: http://reviews.llvm.org/D21067

llvm-svn: 272006
2016-06-07 13:08:45 +00:00
Ulrich Weigand
f4980a0fe0 [PowerPC] Support multiple return values with fast isel
Using an LLVM IR aggregate return value type containing three
or more integer values causes an abort in the fast isel pass.

This patch adds two more registers to RetCC_PPC64_ELF_FIS to
allow returning up to four integers with fast isel, just the
same as is currently supported with regular isel (RetCC_PPC).

This is needed for Swift and (possibly) other non-clang frontends.

Fixes PR26190.

llvm-svn: 272005
2016-06-07 12:48:22 +00:00
Simon Pilgrim
5c22a56177 [X86][SSE] Improved blend+zero target shuffle combining to use combined shuffle mask directly
We currently only combine to blend+zero if the target value type has 8 elements or less, but this was missing a lot of cases where the combined mask had been widened.

This change makes it so we use the combined mask to determine the blend value type, allowing us to catch more widened cases.

llvm-svn: 272003
2016-06-07 12:20:14 +00:00
James Molloy
7a93cbe27f [ARM] Shrink post-indexed LDR and STR to LDM/STM
A Thumb-2 post-indexed LDR instruction such as:

  ldr.w r0, [r1], #4

Can be rewritten as:

  ldm.n r1!, {r0}

LDMs can be more expensive than LDRs on some cores, so this has been enabled only in minsize mode.

llvm-svn: 272002
2016-06-07 12:13:34 +00:00
James Molloy
98866e3fe1 [ARM] Transform LDMs into writeback form to save code size
If we have an LDM that uses only low registers and doesn't write to its base register:

  ldm.w r0, {r1, r2, r3}

And that base register is dead after the LDM, then we can convert it to writeback form and use a narrow encoding:

  ldm.n r0!, {r1, r2, r3}

Obviously, this introduces a new register write and so can cause WAW hazards, so I've enabled it only in minsize mode. This is a code size trick that ARM Compiler 5 ("armcc") does that we don't.

llvm-svn: 272000
2016-06-07 11:47:24 +00:00
Saleem Abdulrasool
def9e129ca ARM: correct TLS access on WoA
TLS access requires an offset from the TLS index.  The index itself is the
section-relative distance of the symbol.  For ARM, the relevant relocation
(IMAGE_REL_ARM_SECREL) is applied as a constant.  This means that the value may
not be an immediate and must be lowered into a constant pool.  This offset will
not be base relocated.  We were previously emitting the actual address of the
symbol, which would be base relocated and would therefore be the value offset
by ImageBase + TLS Offset.

llvm-svn: 271974
2016-06-07 03:15:07 +00:00
Matt Arsenault
2c826f3e0a AMDGPU: Fix constantexpr addrspacecasts
If we had a constant group address space cast, the queue pointer
wasn't enabled for the function, resulting in a crash on noreg
later.

llvm-svn: 271935
2016-06-06 20:03:31 +00:00
Haicheng Wu
f2bdc1a6d4 Fix a test case. NFC.
llvm-svn: 271930
2016-06-06 19:11:53 +00:00
Haicheng Wu
a0f47b2fa1 [MBP] Reduce code size by running tail merging in MBP.
The code layout that TailMerging (inside BranchFolding) works on is not the
final layout optimized based on the branch probability. Generally, after
BlockPlacement, many new merging opportunities emerge.

This patch calls Tail Merging after MBP and calls MBP again if Tail Merging
merges anything.

Differential Revision: http://reviews.llvm.org/D20276

llvm-svn: 271925
2016-06-06 18:36:07 +00:00
Artem Tamazov
b28a41c23e [AMDGPU][llvm-mc] v_cndmask_b32: src2 is mandatory; do not enforce VOP2 when src2 == VCC.
Another step toward unifying the llvm assembler/disassembler with sp3.
Besides, CodeGen output is a bit improved, hence the changes in CodeGen tests.
Assembler/Disassembler tests updated/added.

Differential Revision: http://reviews.llvm.org/D20796

llvm-svn: 271900
2016-06-06 15:23:43 +00:00
Igor Breger
e3844a404a [KNL] Fix UMULO lowering.
Differential Revision: http://reviews.llvm.org/D21013

llvm-svn: 271891
2016-06-06 12:24:52 +00:00
Craig Topper
bb0d5ffb41 [AVX512] Remove masked palignr intrinsics and auto-upgrade them to native IR of vector shuffle and select.
llvm-svn: 271872
2016-06-06 06:12:54 +00:00
Craig Topper
50151618a6 [AVX512] Add PALIGNR shuffle lowering for v32i16 and v16i32.
llvm-svn: 271870
2016-06-06 05:39:10 +00:00
Craig Topper
c4ce25ed3e [AVX512] Update tests to show shuffle decoding for vpshuflw/vpshufhw.
llvm-svn: 271869
2016-06-06 05:39:07 +00:00
Simon Pilgrim
585db6f6c3 [X86][XOP] Added VPERMIL2PD/VPERMIL2PS raw mask decoding for target shuffle combines
llvm-svn: 271834
2016-06-05 15:21:30 +00:00
Simon Pilgrim
94a9893eda [X86][XOP] Added VPERMIL2PD/VPERMIL2PS as a target shuffle type
llvm-svn: 271831
2016-06-05 15:01:45 +00:00
Craig Topper
2249826460 [AVX512] Add support for lowering PALIGNR for v64i8.
Could do this for other types too, but this is what's needed to replace the intrinsic with native IR in clang.

llvm-svn: 271828
2016-06-05 06:29:12 +00:00
Craig Topper
47e5abb616 [AVX512] Split command lines and regenerate a test to prepare for a future commit.
llvm-svn: 271827
2016-06-05 06:29:08 +00:00
Craig Topper
d8c697aad5 [AVX512] Fix PANDN combining for v4i32/v8i32 when VLX is enabled.
v4i32/v8i32 ANDs aren't promoted to v2i64/v4i64 when VLX is enabled.

llvm-svn: 271826
2016-06-05 05:35:11 +00:00
Simon Pilgrim
2edc73fed4 [X86][XOP] Added VPERMIL2PD/VPERMIL2PS shuffle mask comment decoding
llvm-svn: 271809
2016-06-04 21:44:28 +00:00
Saleem Abdulrasool
f999318a81 X86: enable TLS on Windows itanium
Windows itanium is nearly identical to windows-msvc (MS ABI for C, itanium for
C++).  Enable the TLS support for the target similar to the MSVC model.

llvm-svn: 271797
2016-06-04 18:27:22 +00:00
Simon Pilgrim
73aea916e6 [X86][AVX2] Fix v16i16 SHL lowering (PR27730)
The AVX2 v16i16 shift lowering works by unpacking to 2 x v8i32, performing the shift and then truncating the result.

The unpacking is used to place the values in the upper 16-bits so that we can correctly sign-extend for SRA shifts. Unfortunately we weren't ensuring that the lower 16-bits were zero to ensure that SHL correctly shifts in zero bits.
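
A scalar model of one lane, showing why the low half must be zero (illustrative only; the real code operates on v8i32 halves):

```
#include <cstdint>

// One 16-bit lane shifted via a 32-bit container, value held in the upper half.
uint16_t shlViaI32(uint16_t v, unsigned amt) {
  uint32_t widened = uint32_t(v) << 16; // the low 16 bits must be zero...
  return uint16_t((widened << amt) >> 16);
  // ...otherwise SHL would shift whatever sat there up into the result.
}
```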

llvm-svn: 271796
2016-06-04 16:45:33 +00:00
Matthias Braun
64003711e2 MIR: Support MachineMemOperands without associated value
This is allowed (though used rarely) and useful to keep your tests
short.

llvm-svn: 271752
2016-06-04 00:06:31 +00:00
Chad Rosier
5059358611 [AArch64] Move tests from r271677 to a more appropriately named file. NFC.
llvm-svn: 271718
2016-06-03 20:11:09 +00:00
Chad Rosier
a775bf75c7 [AArch64] Spot SBFX-compatible code expressed with sign_extend.
This is very similar to r271677, but for extracts from i32 with the SIGN_EXTEND
acting on an arithmetic shift.
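
A source-level example of the pattern (hypothetical function; with optimization on AArch64 this is expected to select to a single SBFX such as sbfx x0, x0, #5, #27):

```
#include <cstdint>

// The DAG here carries sign_extend(sra x, 5) rather than the shl/asr pair.
int64_t extractTop(int32_t x) {
  return int64_t(x >> 5);
}
```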

llvm-svn: 271717
2016-06-03 20:05:49 +00:00
Derek Schuff
26aab3499f [WebAssembly] Emit type signatures for declared functions
Under emscripten, C code can take the address of a function implemented
in Javascript (which is exposed via an import in wasm). Because imports
do not have a linear memory address in wasm, we need to generate a thunk
to be the target of the indirect call; it calls the import directly.

To make this possible, LLVM needs to emit the type signatures for these
functions, because they may not be called directly or referred to anywhere
other than where the address is taken.

This uses a new .s directive (.functype), which specifies the signature.

Differential Revision: http://reviews.llvm.org/D20891

Re-apply r271599 but instead of bailing with an error when a declared
function has multiple returns, replace it with a pointer argument. Also
add the test case I forgot to 'git add' last time around.

llvm-svn: 271703
2016-06-03 18:34:36 +00:00
Sjoerd Meijer
3d4cf26cf6 Code size optimisation: do not inline memcpy if this expansion results
in more instructions than the library call.

Differential Revision: http://reviews.llvm.org/D20958

llvm-svn: 271678
2016-06-03 15:38:55 +00:00
Chad Rosier
1fa884d6ab [AArch64] Spot SBFX-compatible code expressed with sign_extend_inreg.
We were assuming all SBFX-like operations would have the shl/asr form, but often
when the field being extracted is an i8 or i16, we end up with a
SIGN_EXTEND_INREG acting on a shift instead.

This is a port of r213754 from ARM to AArch64.
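
A source-level example of this shape (hypothetical function; expected, not guaranteed, to select to something like sbfx w0, w0, #3, #8):

```
#include <cstdint>

// Field extraction through an i8 sign extension: the DAG carries
// SIGN_EXTEND_INREG of a shift instead of the shl/asr pair.
int32_t extractByte(int32_t x) {
  return int8_t(x >> 3);
}
```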

llvm-svn: 271677
2016-06-03 15:00:09 +00:00
Simon Pilgrim
d5ca72a493 [X86][AVX512] Fixed 512-bit vector nontemporal load alignment
llvm-svn: 271673
2016-06-03 14:12:43 +00:00
Simon Pilgrim
096c6479fc [X86][AVX512] Added 512-bit vector nontemporal load tests
llvm-svn: 271668
2016-06-03 13:42:49 +00:00
Simon Pilgrim
42c22dd5cc [X86][SSE] Added nontemporal load tests
These currently all lower to regular loads; generic nontemporal load support will be added in a future patch

llvm-svn: 271659
2016-06-03 11:00:55 +00:00
Simon Pilgrim
9ac3c4e1c9 [X86] Added nontemporal scalar store tests
llvm-svn: 271656
2016-06-03 10:30:54 +00:00
Simon Pilgrim
df30ac0194 [X86][SSE] Regenerated nontemporal vector store tests and added extra target types
llvm-svn: 271654
2016-06-03 10:24:24 +00:00
Simon Pilgrim
f30fab0fe4 [X86] Regenerated nontemporal store tests and added tests for all 128-bit vector types
llvm-svn: 271651
2016-06-03 10:15:36 +00:00
Simon Pilgrim
3bafbaa984 [X86][AVX2] Relaxed alignment on nontemporal store tests
llvm-svn: 271646
2016-06-03 10:06:59 +00:00
Simon Pilgrim
c8347b15c6 [X86][AVX2] Regenerated nontemporal store tests and added tests for all 256-bit vector types
llvm-svn: 271645
2016-06-03 09:56:24 +00:00
Daniel Sanders
1a3a82d3cc [mips] Implement 'la' macro in PIC mode for O32.
Summary:
N32 support will follow in a later patch since the symbol version of 'la'
incorrectly believes N32 to have 64-bit pointers and rejects it early.

This fixes the three incorrectly expanded 'la' macros found in bionic.

Reviewers: sdardis

Subscribers: dsanders, llvm-commits, sdardis

Differential Revision: http://reviews.llvm.org/D20820

llvm-svn: 271644
2016-06-03 09:53:06 +00:00
Simon Pilgrim
0c614eb08a [X86][XOP] Support for VPERMIL2PD/VPERMIL2PS 2-input shuffle instructions
This patch begins adding support for lowering to the XOP VPERMIL2PD/VPERMIL2PS shuffle instructions - adding the X86ISD::VPERMIL2 opcode and cleaning up the usage.

The internal llvm intrinsics were assuming the shuffle mask operand was the same type as the float/double input operands (I guess to simplify the intrinsic definitions in X86InstrXOP.td to a single value type). These needed changing to integer types (matching the clang builtin and the AMD intrinsics definitions); an auto-upgrade path is added to convert old calls.

Mask decoding/target shuffle support will be added in future patches.

Differential Revision: http://reviews.llvm.org/D20049

llvm-svn: 271633
2016-06-03 08:06:03 +00:00
Craig Topper
7dae6773ea [AVX512] Ensure EVEX vpshufd, vpshuflw, and vpshufhw have isel priority over the VEX encoded ones.
llvm-svn: 271629
2016-06-03 05:31:04 +00:00
Craig Topper
8371de0a99 [AVX512] Fix shuffle comment printing for EVEX encoded PSHUFD, PSHUFHW, and PSHUFLW.
llvm-svn: 271628
2016-06-03 05:31:00 +00:00
Derek Schuff
b7000e8219 Revert "[WebAssembly] Emit type signatures for declared functions"
This reverts r271599; it broke the integration tests.
More places than I expected had nontrivial return types in imports, or
else the check was wrong.

llvm-svn: 271606
2016-06-02 23:02:44 +00:00
Derek Schuff
f0dc82710c [WebAssembly] Emit type signatures for declared functions
Under emscripten, C code can take the address of a function implemented
in Javascript (which is exposed via an import in wasm). Because imports
do not have a linear memory address in wasm, we need to generate a thunk
to be the target of the indirect call; it calls the import directly.

To make this possible, LLVM needs to emit the type signatures for these
functions, because they may not be called directly or referred to anywhere
other than where the address is taken.

This uses a new .s directive (.functype), which specifies the signature.

Differential Revision: http://reviews.llvm.org/D20891

llvm-svn: 271599
2016-06-02 21:34:18 +00:00
Sanjay Patel
02638731c1 transform obscured FP sign bit ops into a fabs/fneg using TLI hook
This is effectively a revert of:
http://reviews.llvm.org/rL249702 - [InstCombine] transform masking off of an FP sign bit into a fabs() intrinsic call (PR24886)
and:
http://reviews.llvm.org/rL249701 - [ValueTracking] teach computeKnownBits that a fabs() clears sign bits
and a reimplementation as a DAG combine for targets that have IEEE754-compliant fabs/fneg instructions.
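
The kind of obscured sign-bit operation in question, sketched at the source level:

```
#include <cmath>
#include <cstdint>
#include <cstring>

// Clearing bit 63 through the integer domain computes fabs(); the DAG combine
// recognizes the masked bitcast pattern and emits the target's fabs instead.
double fabsViaMask(double x) {
  uint64_t bits;
  std::memcpy(&bits, &x, sizeof bits);
  bits &= ~(1ULL << 63);             // mask off the IEEE-754 sign bit
  std::memcpy(&x, &bits, sizeof bits);
  return x;                          // same value as std::fabs(x)
}
```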

This is intended to resolve the objections raised on the dev list:
http://lists.llvm.org/pipermail/llvm-dev/2016-April/098154.html
and:
https://llvm.org/bugs/show_bug.cgi?id=24886#c4

In the interest of patch minimalism, I've only partly enabled AArch64. PowerPC, MIPS, x86 and others can enable later.

Differential Revision: http://reviews.llvm.org/D19391

llvm-svn: 271573
2016-06-02 20:01:37 +00:00
Matt Arsenault
a9ec8d2f35 AMDGPU: Cleanup load tests
There are a lot of different kinds of loads to test for,
and these were scattered around inconsistently with
some redundancy. Try to comprehensively test all loads
in a consistent way.

llvm-svn: 271571
2016-06-02 19:54:26 +00:00
Matt Arsenault
2119ea8370 AMDGPU: Temporary fix for broken store combine
llvm-svn: 271567
2016-06-02 19:00:55 +00:00
Matt Arsenault
28ae99d5d5 AMDGPU: Fix crashes on unknown processor name
If the processor name failed to parse for amdgcn,
the resulting output would have R600 ISA in it.

If the processor name was missing or invalid for R600,
the wavefront size would not be set and there would be
crashes from missing itinerary data.

Fixes crashes in future commit caused by dividing by the unset/0
wavefront size.

llvm-svn: 271561
2016-06-02 18:37:16 +00:00
Geoff Berry
f48d5c7111 [PowerPC] Run reg2mem on tests to simplify them.
Summary:
Also convert test/CodeGen/PowerPC/vsx-ldst-builtin-le.ll to use
FileCheck instead of two grep and count runs.

This change is needed to avoid spurious diffs in these tests when
EarlyCSE is improved to use MemorySSA and can do more load elimination.

Reviewers: hfinkel

Subscribers: mcrosier, llvm-commits

Differential Revision: http://reviews.llvm.org/D20238

llvm-svn: 271553
2016-06-02 18:02:50 +00:00
Simon Pilgrim
29c4e8764f [X86][SSE] Added SSE41/AVX2 non-temporal tests
Useful for when we add MOVNTDQA support

llvm-svn: 271552
2016-06-02 18:01:21 +00:00
Dimitry Andric
9beb691de0 Only attempt to detect AVG if SSE2 is available
Summary:
In PR29973 Sanjay Patel reported an assertion failure when a certain
loop was optimized, for a target without SSE2 support.  It turned out
this was because of the AVG pattern detection introduced in rL253952.

Prevent the assertion failure by bailing out early in
`detectAVGPattern()`, if the target does not support SSE2.

Also add a minimized test case.
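
The shape of loop that reaches detectAVGPattern() once vectorized (a sketch, not the test case from the report):

```
// A rounding byte average; with SSE2 available this lowers to PAVGB,
// and without SSE2 it must simply be left alone.
void avg(const unsigned char *a, const unsigned char *b,
         unsigned char *out, int n) {
  for (int i = 0; i < n; ++i)
    out[i] = (a[i] + b[i] + 1) >> 1;
}
```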

Reviewers: congh, eli.friedman, spatel

Subscribers: emaste, llvm-commits

Differential Revision: http://reviews.llvm.org/D20905

llvm-svn: 271548
2016-06-02 17:30:49 +00:00
Geoff Berry
9151cce5f4 [PEI, AArch64] Use empty spaces in stack area for local stack slot allocation.
Summary:
If the target requests it, use empty spaces in the fixed and
callee-save stack area to allocate local stack objects.

AArch64: Change last callee-save reg stack object alignment instead of
size to leave a gap to take advantage of above change.

Reviewers: t.p.northover, qcolombet, MatzeB

Subscribers: rengolin, mcrosier, llvm-commits, aemerson

Differential Revision: http://reviews.llvm.org/D20220

llvm-svn: 271527
2016-06-02 16:22:07 +00:00
Sanjay Patel
153b260453 [DAG] use getBitcast() to reduce code
Although this was intended to be NFC, the test case wiggle shows a change in
code scheduling/RA caused by a difference in the SDLoc() generation.

Depending on how you look at it, this is the (dis)advantage of exact checking
in regression tests.

llvm-svn: 271526
2016-06-02 16:01:15 +00:00
Simon Pilgrim
e49fac5565 [X86][SSE] Added non-temporal load tests for vector types
These currently lower to regular loads instead of MOVNTDQA

llvm-svn: 271516
2016-06-02 13:51:50 +00:00
Simon Pilgrim
2e72cbb66e [X86][SSE] Replace (V)CVTTPS2DQ and VCVTTPD2DQ truncating (round to zero) f32/f64 to i32 with generic IR (llvm)
This patch removes the llvm intrinsics (V)CVTTPS2DQ and VCVTTPD2DQ truncation (round to zero) conversions and auto-upgrades to FP_TO_SINT calls instead.
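
At the source level nothing changes; a conversion like this one now reaches the backend as plain fptosi and still selects the same instruction:

```
#include <emmintrin.h>

// Truncating (round-to-zero) conversion; now expressed as fptosi in IR,
// and the backend still selects (V)CVTTPS2DQ for it.
__m128i truncToInt(__m128 x) {
  return _mm_cvttps_epi32(x);
}
```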

Note: I looked at updating CVTTPD2DQ as well but this still requires a lot more work to correctly lower.

Differential Revision: http://reviews.llvm.org/D20860

llvm-svn: 271510
2016-06-02 10:55:21 +00:00
Sjoerd Meijer
067c8106bd This adds support for Cortex-A73 as an available target.
Differential Revision: http://reviews.llvm.org/D20865

llvm-svn: 271508
2016-06-02 10:48:52 +00:00
Craig Topper
f70345a66d [X86] Add AVX 256-bit load and stores to fast isel.
I'm not sure why this was missing for so long.

This also exposed that we were picking floating point 256-bit VMOVNTPS for some integer types in normal isel for AVX1 even though VMOVNTDQ is available. In practice it doesn't matter due to the execution dependency fix pass, but it required extra isel patterns. Fixing that in a follow up commit.

llvm-svn: 271481
2016-06-02 04:19:45 +00:00
Craig Topper
1887664778 [AVX512] Remove masked load intrinsics. Clang now emits generic masked load intrinsics instead.
The intrinsics will be autoupgraded to the same generic masked loads.

llvm-svn: 271478
2016-06-02 04:19:36 +00:00
Rafael Espindola
c82586e4b5 Avoid a load for local functions.
llvm-svn: 271437
2016-06-01 21:57:11 +00:00
Sanjay Patel
287ac2dfec [x86, AVX2] regenerate checks
llvm-svn: 271434
2016-06-01 21:32:56 +00:00
Michael Kuperstein
1e7dd66dfc [DAG] Improve legalization of INSERT_SUBVECTOR
When the index is known to be constant 0, insert directly into the low half,
instead of spilling, performing the insert in-memory, and reloading.

Differential Revision: http://reviews.llvm.org/D20763

llvm-svn: 271428
2016-06-01 20:49:35 +00:00
Keno Fischer
2092f44163 [PPC64] Fix SUBFC8 Defs list
Fix PR27943 "Bad machine code: Using an undefined physical register".
SUBFC8 implicitly defines the CR0 register, but this was omitted in
the instruction definition.

Patch by Jameson Nash <jameson@juliacomputing.com>

Reviewers: hfinkel
Differential Revision: http://reviews.llvm.org/D20802

llvm-svn: 271425
2016-06-01 20:31:07 +00:00
Than McIntosh
a03aeb4a97 Better fix for PR27903.
Summary:
Re-enable lifetime-start-on-first-use for stack coloring,
but explicitly disable it for slots with more than one start
or end lifetime marker.

Bug: 27903

Reviewers: wmi, tejohnson, qcolombet, gbiv

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D20739

llvm-svn: 271412
2016-06-01 17:55:10 +00:00
Simon Pilgrim
c1ef71aea6 [X86][SSE] Added non-temporal store tests for all 512-bit vector types
llvm-svn: 271393
2016-06-01 13:58:00 +00:00
Simon Pilgrim
8864f582d0 [X86][SSE] Added non-temporal store tests for all 256-bit vector types
Also added KNL AVX-512 checks

llvm-svn: 271391
2016-06-01 13:20:25 +00:00
Simon Pilgrim
577c6a3c29 [X86][SSE] Added non-temporal store tests for all 128-bit integer vector types
llvm-svn: 271389
2016-06-01 13:05:00 +00:00
Michael Zuckerman
e5673d8456 Adding back-end support for two bit scanning intrinsics
Adding LLVM back-end support for two intrinsics dealing with bit scan: _bit_scan_forward and _bit_scan_reverse.
Their functionality is as described in Intel intrinsics guide:
https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_bit_scan_forward&expand=371,370
https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_bit_scan_reverse&expand=371,370
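
A small usage example of the two intrinsics as exposed by the x86 intrinsic headers:

```
#include <cstdio>
#include <x86intrin.h>

int main() {
  int v = 0x50;                               // bits 4 and 6 set
  printf("bsf: %d\n", _bit_scan_forward(v));  // 4: index of lowest set bit
  printf("bsr: %d\n", _bit_scan_reverse(v));  // 6: index of highest set bit
  return 0;
}
```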

Commit on behalf of Omer Paparo Bivas


Differential Revision: http://reviews.llvm.org/D19915

llvm-svn: 271386
2016-06-01 12:02:37 +00:00
Oliver Stannard
559228e8d5 [ARM] Add additional matching for UBFX instructions
This adds an additional matcher to select UBFX(..) from SRL(AND(..)) in
ARMISelDAGToDAG to help with code size.
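
A source-level example of the SRL(AND(..)) shape (hypothetical function; expected to select to a single ubfx, e.g. ubfx r0, r0, #4, #8, instead of an and-then-shift pair):

```
unsigned extractBits(unsigned x) {
  return (x & 0xFF0u) >> 4; // srl of an and with a shifted contiguous mask
}
```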

Patch by David Green.

Differential Revision: http://reviews.llvm.org/D20667

llvm-svn: 271384
2016-06-01 12:01:01 +00:00
Chris Dewhurst
0ad8334427 [Sparc] Allow passing of empty structs.
Passing an empty struct as a function call argument is now supported.

Unit tests for various scenarios added.

llvm-svn: 271374
2016-06-01 08:48:56 +00:00
Craig Topper
bc9e4ba942 Revert r271362 "[AVX512] Remove masked load intrinsics. Clang now emits generic masked load intrinsics instead."
Looks like something isn't quite right still. Also forgot to move the test cases to an autoupgrade test.

llvm-svn: 271363
2016-06-01 05:57:55 +00:00
Craig Topper
734c8343a6 [AVX512] Remove masked load intrinsics. Clang now emits generic masked load intrinsics instead.
The intrinsics will be autoupgraded to the same generic masked loads.

llvm-svn: 271362
2016-06-01 05:35:16 +00:00
Matthias Braun
fe00b0505d CodeGen: Refactor renameDisconnectedComponents() as a pass
Refactor LiveIntervals::renameDisconnectedComponents() to be a pass.
Also change the name to "RenameIndependentSubregs":

- renameDisconnectedComponents() worked on a MachineFunction at a time
  so it is a natural candidate for a machine function pass.

- The algorithm is testable with a .mir test now.

- This also fixes a problem where the lazy renaming as part of the
  MachineScheduler introduced IMPLICIT_DEF instructions after the number
  of a nodes in a region were counted leading to a mismatch.

Differential Revision: http://reviews.llvm.org/D20507

llvm-svn: 271345
2016-05-31 22:38:06 +00:00
Kevin B. Smith
21876b39ee [X86]: Add a pattern that uses GR16_ABCD rather than GR32_ABCD to avoid falsely marking the whole 32-bit register as live.
Differential Revision: http://reviews.llvm.org/D20649

llvm-svn: 271341
2016-05-31 22:00:12 +00:00
Matthias Braun
4c14d0b65d ARM: Improve/fix comment in recently added test.
llvm-svn: 271340
2016-05-31 21:59:59 +00:00
Matthias Braun
b873b20639 ARM: Do not attempt to modify register class of physregs.
Physregs have no associated register class; do not attempt to modify it
in Thumb2InstrInfo::storeRegToStackSlot()/loadFromStackSlot().

llvm-svn: 271339
2016-05-31 21:39:12 +00:00
Ahmed Bougacha
104eede8a8 [CodeGen] Promote FMINNAN/FMAXNAN like other binops.
We think it's OK to generate half fminnan because it's legal for the
transform-to type (f32; r245196). However, PromoteFloatRes was missing
the case; simply promote like the other binops, including minnum.

llvm-svn: 271317
2016-05-31 18:50:25 +00:00
Rafael Espindola
11d3302464 Delete AArch64II::MO_CONSTPOOL.
A constant pool holding the address of a variable is equivalent to
a got entry. It produces exactly the same instruction sequence as a
got use, and unlike a got use it is not uniqued by the linker.

llvm-svn: 271311
2016-05-31 18:31:14 +00:00
Rafael Espindola
95d3b4d586 Add a use of shouldAssumeDSOLocal to ARM.
Now this code path knows about position independent executables.

llvm-svn: 271290
2016-05-31 15:31:55 +00:00
Ranjeet Singh
2e252abc9e [ARM] Add backend support for load/store intrinsics.
Added support to map intrinsics
__builtin_arm_{ldc,ldcl,ldc2,ldc2l,stc,stcl,stc2,stc2l}
to their ARM instructions.

Differential Revision: http://reviews.llvm.org/D20564

llvm-svn: 271271
2016-05-31 12:39:30 +00:00
Simon Pilgrim
29a89e51e6 [X86][SSE] Add load-folding patterns for (V)CVTDQ2PD (PR27291)
Added patterns for (V)CVTDQ2PD -> 2f64 loading from a 64-bit source.

llvm-svn: 271269
2016-05-31 12:04:35 +00:00
Simon Dardis
573359788f [mips] bnec/beqc register constraint fix
beqc and bnec cannot have $rs == $rt. Inhibit compact branch creation
if that would occur.

Reviewers: vkalintiris, dsanders

Differential Revision: http://reviews.llvm.org/D20624

llvm-svn: 271260
2016-05-31 09:54:55 +00:00
Igor Breger
dab6232733 [AVX512] Fix intrinsic vcvtps2ph lowering.
Differential Revision: http://reviews.llvm.org/D20788

llvm-svn: 271255
2016-05-31 08:04:21 +00:00
Igor Breger
cc3e7d2dd1 Fix intrinsic vbroadcast{i32|f32}x2 lowering.
Differential Revision: http://reviews.llvm.org/D20780

llvm-svn: 271254
2016-05-31 07:43:39 +00:00
Craig Topper
cb79936a4b [AVX512] Remove masked store intrinsics. Clang now emits generic masked store intrinsics instead.
The intrinsics will be autoupgraded to the same generic masked stores.

llvm-svn: 271245
2016-05-31 01:50:02 +00:00
Saleem Abdulrasool
f85a029a33 X86: permit using SjLj EH on x86 targets as an option
This adds support to the backend to actually support SjLj EH as an exception
model.  This is *NOT* the default model, and requires explicitly opting into it
from the frontend.  GCC supports this model and for MinGW it can still be
enabled via the `--using-sjlj-exceptions` option.

Addresses PR27749!

llvm-svn: 271244
2016-05-31 01:48:07 +00:00
Craig Topper
4f195e8edb [X86] Remove SSE/AVX unaligned store intrinsics as clang no longer uses them. Auto upgrade to native unaligned store instructions.
llvm-svn: 271236
2016-05-30 23:15:56 +00:00
Craig Topper
9d8b6d7b31 [X86] Use update_llc_test_checks.py to re-generate a test in preparation for an upcoming commit. NFC
llvm-svn: 271234
2016-05-30 22:54:14 +00:00
Simon Pilgrim
9226b7680f [X86][XOP] Split off auto-upgraded xop intrinsics
llvm-svn: 271228
2016-05-30 19:50:56 +00:00
Simon Pilgrim
5e4ee0eb80 [X86][SSE] Renamed pmovxrm tests
These aren't intrinsics anymore - as discussed on D20686

llvm-svn: 271226
2016-05-30 19:14:37 +00:00
Simon Pilgrim
15ddf25bef [X86][AVX2] Regenerated AVX2 extension tests
llvm-svn: 271224
2016-05-30 18:49:57 +00:00
Simon Pilgrim
83a5e7cad9 [X86][SSE] Updated storeu fast-isel tests to match clang builtin tests
Since rL271214, the headers no longer use the storeu intrinsic

llvm-svn: 271222
2016-05-30 18:42:51 +00:00
Simon Pilgrim
97b3734c4d [X86][SSE2] Updated _mm_store_pd1/_mm_store1_pd fast-isel tests to match D20617
llvm-svn: 271220
2016-05-30 18:18:44 +00:00
Diana Picus
178ddc2360 [BPF] Remove exit-on-error from tests (PR27768, PR27769)
The exit-on-error flag is necessary to avoid some assertions/unreachables. We
can get past them by creating a few dummy nodes.

Fixes PR27768, PR27769.

Differential Revision: http://reviews.llvm.org/D20726

llvm-svn: 271200
2016-05-30 08:28:34 +00:00
Simon Pilgrim
6ec0f7efbc [X86][SSE] (Reapplied) Replace (V)PMOVSX and (V)PMOVZX integer extension intrinsics with generic IR (llvm)
This patch removes the llvm (V)PMOVSX and (V)PMOVZX sign/zero extension intrinsics and auto-upgrades to SEXT/ZEXT calls instead. We already did this for SSE41 PMOVSX some time ago, so much of that implementation can be reused.

Reapplied now that the companion patch (D20684), which removes/auto-upgrades the clang intrinsics, has been committed.

Differential Revision: http://reviews.llvm.org/D20686

llvm-svn: 271131
2016-05-28 18:03:41 +00:00
Sanjay Patel
fc1048d379 [x86] avoid printing unnecessary sign bits of hex immediates in asm comments (PR20347)
It would be better to check the valid/expected size of the immediate operand, but this is
generally better than what we print right now.

Differential Revision: http://reviews.llvm.org/D20385

llvm-svn: 271114
2016-05-28 14:58:37 +00:00
Ahmed Bougacha
396b16d3af [X86] Try to zero elts when lowering 256-bit shuffle with PSHUFB.
Otherwise we fall back to a blend of PSHUFBs later on.

Differential Revision: http://reviews.llvm.org/D19661

llvm-svn: 271113
2016-05-28 14:38:04 +00:00
Rafael Espindola
e3a90de4ca Fix default reloc model on ARM.
llvm-svn: 271111
2016-05-28 10:41:15 +00:00
Renato Golin
46dd534f95 Revert "Revert "Map DynamicNoPIC to Static on non-darwin.""
This reverts commit r271096, as reverting it broke even more buildbots!

But that also means I'll break on ARM again... :(

llvm-svn: 271099
2016-05-28 04:47:13 +00:00
Renato Golin
df3d92e4b4 Revert "Map DynamicNoPIC to Static on non-darwin."
This reverts commit r271052, as it broke some ARM buildbots.

llvm-svn: 271096
2016-05-28 04:24:26 +00:00
Matt Arsenault
24c4ef4e50 AMDGPU: Cleanup vector insert/extract tests
This mostly makes sure that 3-vector dynamic inserts
and extracts are covered.

llvm-svn: 271082
2016-05-28 00:51:06 +00:00
Matt Arsenault
f5367daffc AMDGPU: Add fract intrinsic
Remove broken patterns matching it. This was matching the
unsafe math pattern and expanding the fix for the buggy instruction
from the pattern. The problems are also on CI. Remove the workarounds
and only use fract with unsafe math or from the intrinsic.

llvm-svn: 271078
2016-05-28 00:19:52 +00:00
Rafael Espindola
dff4498487 Map DynamicNoPIC to Static on non-darwin.
DynamicNoPIC was only ever used on darwin. This maps it to static on
ELF. It matches what is done on X86.

llvm-svn: 271052
2016-05-27 21:44:18 +00:00
Michael Kuperstein
ac24107aac [X86] Detect SAD patterns and emit psadbw instructions.
This recommits r267649 with a fix for PR27539.
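
The kind of reduction the combine recognizes, sketched at the source level (once vectorized, the inner sum of absolute byte differences maps onto PSADBW):

```
#include <cstdlib>

int sad(const unsigned char *a, const unsigned char *b, int n) {
  int sum = 0;
  for (int i = 0; i < n; ++i)
    sum += std::abs(a[i] - b[i]); // absolute difference of unsigned bytes
  return sum;
}
```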

Differential Revision: http://reviews.llvm.org/D20598

llvm-svn: 271033
2016-05-27 18:53:22 +00:00
Simon Pilgrim
67f884d956 [X86][AVX] Removed some remains of old (pre-regeneration) filechecks
llvm-svn: 271007
2016-05-27 15:56:19 +00:00
Than McIntosh
4e1e347d86 Disable lifetime-start-on-first-use analysis.
Summary:
Turn off lifetime-start-on-first-use enhancement for the moment
pending a fix for bug 27903.

Bug: 27903

Reviewers: tejohnson, wmi, qcolombet, gbiv

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D20731

llvm-svn: 271003
2016-05-27 15:27:51 +00:00
Simon Pilgrim
99e3cf65ff Revert: r270973 - [X86][SSE] Replace (V)PMOVSX and (V)PMOVZX integer extension intrinsics with generic IR (llvm)
llvm-svn: 270976
2016-05-27 09:02:25 +00:00
Simon Pilgrim
c8925e270b [X86][SSE] Replace (V)PMOVSX and (V)PMOVZX integer extension intrinsics with generic IR (llvm)
This patch removes the llvm (V)PMOVSX and (V)PMOVZX sign/zero extension intrinsics and auto-upgrades to SEXT/ZEXT calls instead. We already did this for SSE41 PMOVSX some time ago, so much of that implementation can be reused.

A companion patch (D20684) removes/auto-upgrade the clang intrinsics.

Differential Revision: http://reviews.llvm.org/D20686

llvm-svn: 270973
2016-05-27 08:49:15 +00:00
Mitch Bodart
8ebfa4a476 [CodeGen] Fix problem with X86 byte registers in CriticalAntiDepBreaker
CriticalAntiDepBreaker was not correctly tracking defs of the high X86 byte
registers, leading to incorrect use of a busy register to break an
antidependence.

Fixes pr27681, and its duplicates pr27580, pr27804.

Differential Revision: http://reviews.llvm.org/D20456

llvm-svn: 270935
2016-05-26 23:08:52 +00:00
Krzysztof Parzyszek
871650f3b5 [Hexagon] Enable the post-RA scheduler
The aggressive anti-dependency breaker can rename the restored callee-
saved registers. To prevent this, mark these registers as live on all
paths to the return/tail-call instructions, and add implicit use operands
for them to these instructions.

llvm-svn: 270898
2016-05-26 19:44:28 +00:00
Chad Rosier
6d0c828d28 [AArch64] Generate rev16/rev32 from bswap + srl when upper bits are known zero.
Canonicalize (srl (bswap i32 x), 16) to (rotr (bswap i32 x), 16), if the high
16-bits of x are zero. Similarly, canonicalize (srl (bswap i64 x), 32) to
(rotr (bswap i64 x), 32), if the high 32-bits of x are zero.

test_rev_w_srl16:            test_rev_w_srl16:
  and w8, w0, #0xffff          and     w8, w0, #0xffff
  rev w8, w8           --->    rev16   w0, w8
  lsr     w0, w8, #16

test_rev_x_srl32:            test_rev_x_srl32:
  rev x8, x8           --->    rev32   x0, x8
  lsr x0, x8, #32

llvm-svn: 270896
2016-05-26 19:41:33 +00:00
Changpeng Fang
7c77c6e723 AMDGPU/SI: Enable load-store-opt by default.
Summary: Enable load-store-opt by default, and update LIT tests.

Reviewers: arsenm

Differential Revision: http://reviews.llvm.org/D20694

llvm-svn: 270894
2016-05-26 19:35:29 +00:00
Krzysztof Parzyszek
790c43253d Add test/CodeGen/MIR/Hexagon/lit.local.cfg
Require that Hexagon is a registered target.

llvm-svn: 270887
2016-05-26 18:35:45 +00:00
Krzysztof Parzyszek
c5a589044f Do not rename registers that do not start an independent live range
llvm-svn: 270885
2016-05-26 18:22:53 +00:00
Artem Belevich
f14e0f0a96 [NVPTX] Added NVVMIntrRange pass
NVVMIntrRange adds !range metadata to calls of NVVM intrinsics
that return values within known limited range.

This allows LLVM to generate optimal code for indexing arrays
based on tid/ctaid which is a frequently used pattern in CUDA code.

Differential Revision: http://reviews.llvm.org/D20644

llvm-svn: 270872
2016-05-26 17:02:56 +00:00
Simon Pilgrim
3ed5f417f8 [X86][SSE] When lowering a 256-bit shuffle as PMOVZX, reduce the input vector to the lower 128-bit subvector.
More often than not this is what it started out as; the extraction is zero-cost on AVX and the PMOVZX/PMOVSX folding logic is based around 128-bit loads.

llvm-svn: 270858
2016-05-26 15:40:36 +00:00
Diana Picus
e53f2eed3e [AMDGPU] Remove exit-on-error flag from test (PR27762)
Similar to r269948, but for argument lowering.

Fixes PR27762

Differential Revision: http://reviews.llvm.org/D20430

llvm-svn: 270856
2016-05-26 15:24:55 +00:00
Diana Picus
83ac5c0a7d [BPF] Remove exit-on-error flag in test (PR27767)
The exit-on-error flag is needed to avoid an assert where
llvm::SelectionDAGISel::LowerArguments doesn't create enough arguments. Fill up
with zeroes to reach the right number of args.

Fixes PR27767.

Differential Revision: http://reviews.llvm.org/D20571

llvm-svn: 270855
2016-05-26 15:23:50 +00:00
Simon Pilgrim
91342a593e [X86][SSE] Added load_zext_16i8_to_8i32 test
Odd issue with input vector not being folded into pmovzx on AVX2+ targets

llvm-svn: 270852
2016-05-26 14:45:30 +00:00
Chad Rosier
be99101f8a [AArch64] Generate a BFI/BFXIL from 'or (and X, MaskImm), OrImm'.
If and only if the value being inserted sets only known zero bits.

This combine transforms things like

  and w8, w0, #0xfffffff0
  movz w9, #5
  orr w0, w8, w9

into

  movz w8, #5
  bfxil w0, w8, #0, #4

The combine is tuned to make sure we always reduce the number of instructions.
We avoid churning code for what are expected to be performance-neutral changes
(e.g., converting AND+OR to OR+BFI).

Differential Revision: http://reviews.llvm.org/D20387

llvm-svn: 270846
2016-05-26 13:27:56 +00:00
Rafael Espindola
e388c6f6a2 Use shouldAssumeDSOLocal on AArch64.
This reduces code duplication and now AArch64 also handles PIE.

llvm-svn: 270844
2016-05-26 12:42:55 +00:00
Igor Breger
d6da40dfa4 [AVX512] Fix intrinsic cmp{sd|ss} lowering.
Differential Revision: http://reviews.llvm.org/D20615

llvm-svn: 270843
2016-05-26 12:42:25 +00:00
Simon Pilgrim
bc104367db [X86][F16C] Added F16C fast-isel tests to match clang/test/CodeGen/f16c-builtins.c
llvm-svn: 270837
2016-05-26 10:26:56 +00:00
Simon Pilgrim
e303ca3030 [X86][AVX2] Added gather fast-isel tests to match clang/test/CodeGen/avx2-builtins.c
llvm-svn: 270835
2016-05-26 10:07:05 +00:00
Simon Pilgrim
e9a179168b [X86][SSE41] Removed pblendw intrinsics tests - they are auto-upgraded
Equivalent tests included in sse41-intrinsics-x86-upgrade.ll - the i8/i32 immediate diff doesn't matter anymore

llvm-svn: 270767
2016-05-25 21:27:58 +00:00
Simon Pilgrim
0da4a2c2e6 [X86][SSE41] Regenerated intrinsics tests
llvm-svn: 270764
2016-05-25 21:21:51 +00:00
Simon Pilgrim
5142359484 [X86][SSE41] Removed blendpd/blendps intrinsics tests - they are auto-upgraded
Equivalent tests included in sse41-intrinsics-x86-upgrade.ll

llvm-svn: 270761
2016-05-25 21:06:36 +00:00
Simon Pilgrim
0792d32b20 [X86][AVX2] Regenerate avx2 vector shift tests
llvm-svn: 270756
2016-05-25 21:00:40 +00:00
Rafael Espindola
2224908028 Fix shouldAssumeDSOLocal for private linkage.
llvm-svn: 270746
2016-05-25 19:55:16 +00:00
Matt Arsenault
1f6cee6a4f AMDGPU: Fix v2i64/v2f64 bitcasts
These operations tend to get promoted away to v4i32 so
this doesn't happen often.

llvm-svn: 270740
2016-05-25 18:07:36 +00:00
Matt Arsenault
ea952af911 AMDGPU: Fix missing br_cc i1 test coverage
Also un-xfail a test.

llvm-svn: 270739
2016-05-25 17:58:27 +00:00
Chad Rosier
9f1f557fac [SelectionDAG] Add smarts for BSWAP in computeKnownBits.
llvm-svn: 270738
2016-05-25 17:52:38 +00:00
Matt Arsenault
d17f030a97 AMDGPU: Make vectorization-defeating test changes
Simplifies test updates in the future.

llvm-svn: 270736
2016-05-25 17:42:39 +00:00
Matt Arsenault
e21e61958d AMDGPU: Fix inconsistent lowering of select of vectors
f32 vectors would use a sequence of BFI instructions instead
of unrolled cmp + select. This was better in the case of a VALU
select with SGPR inputs, but we don't have a way of dealing with that
in the DAG.

llvm-svn: 270731
2016-05-25 17:34:58 +00:00
Tim Shen
198e5cb8a0 Move and add comments to the top for tailcall-string-rvo.ll
Differential Revision: http://reviews.llvm.org/D20311

llvm-svn: 270722
2016-05-25 17:01:09 +00:00
Hal Finkel
c1f6823ee0 [SDAG] Add a fallback multiplication expansion
LegalizeIntegerTypes does not have a way to expand multiplications for large
integer types (i.e. larger than twice the native bit width). There's no
standard runtime call to use in that case, and so we'd just assert.

Unfortunately, as it turns out, it is possible to hit this case from
standard-ish C code in rare cases. A particular case a user ran into yesterday
involved an __int128 induction variable and a loop with a quadratic (not
linear) recurrence which triggered some backend logic using SCEVExpander. In
this case, the BinomialCoefficient code in SCEV generates some i129 variables,
which get widened to i256. At a high level, this is not actually good (i.e. the
underlying optimization, PPCLoopPreIncPrep, should not be transforming the loop
in question for performance reasons), but regardless, the backend shouldn't
crash because of cost-modeling issues in the optimizer.

This is a straightforward implementation of the multiplication expansion, based
on the algorithm in Hacker's Delight. I validated it against the code for the
mul256b function from http://locklessinc.com/articles/256bit_arithmetic/ using
random inputs. There should be no functional change for previously-working code
(the new expansion code only replaces an assert).
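
A standalone sketch of the expansion idea, as schoolbook multiplication over 64-bit limbs (this models the result, not the actual LegalizeIntegerTypes code):

```
#include <cstdint>

// Truncated 256-bit multiply: four little-endian 64-bit limbs per operand;
// product limbs above the fourth are simply dropped, as an i256 mul would.
void mul256(const uint64_t a[4], const uint64_t b[4], uint64_t out[4]) {
  for (int i = 0; i < 4; ++i)
    out[i] = 0;
  for (int i = 0; i < 4; ++i) {
    __uint128_t carry = 0;
    for (int j = 0; i + j < 4; ++j) {
      __uint128_t cur = (__uint128_t)a[i] * b[j] + out[i + j] + carry;
      out[i + j] = (uint64_t)cur;
      carry = cur >> 64;
    }
  }
}
```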

Fixes PR19797.

llvm-svn: 270720
2016-05-25 16:50:22 +00:00