mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-11-23 19:23:23 +01:00
Commit Graph

8437 Commits

Author SHA1 Message Date
Reed Kotler
9f4f45f079 Make first substantial checkin of my port of ARM constant islands code to Mips.
Previously I had only ported the shell of the pass. I've tried to keep everything
nearly identical to the ARM version. I think it will be very easy to eventually
merge the two and create a new, more general pass that other targets can
use. I have some improvements I would like to make to allow pools to
be shared across functions, and some other things. When I'm all done we
can think about making a more general pass. More remains to be ported, but the
basic mechanism now works almost as well as gcc mips16.

llvm-svn: 193509
2013-10-27 21:57:36 +00:00
Elena Demikhovsky
0e6849495e AVX-512: PMIN/PMAX intrinsics and patterns
Patch by Cameron McInally <cameron.mcinally@nyu.edu>

llvm-svn: 193497
2013-10-27 08:18:37 +00:00
Quentin Colombet
5d88e45af6 [X86][AVX512] Add patterns that match the AVX512 floating point register vbroadcast intrinsics.
Patch by Cameron McInally <cameron.mcinally@nyu.edu>

llvm-svn: 193422
2013-10-25 18:04:12 +00:00
Quentin Colombet
5650890143 [X86][AVX512] Add patterns that match the AVX512 floating point vbroadcast intrinsics.
Patch by Cameron McInally <cameron.mcinally@nyu.edu>

llvm-svn: 193421
2013-10-25 17:47:18 +00:00
Tim Northover
ec8c784fa3 ARM: don't expand atomicrmw inline on Cortex-M0
There's a barrier instruction, so that should still be used, but most actual
atomic operations are going to need a platform decision on the correct
behaviour (either a nop if single-threaded, or OS support otherwise).

rdar://problem/15287210

llvm-svn: 193399
2013-10-25 09:30:24 +00:00
Tim Northover
ee00055f8f LegalizeDAG: allow libcalls for max/min atomic operations
ARM processors without ldrex/strex need to be able to make libcalls for all
atomic operations, including the newer min/max versions.

The alternative would probably be expanding these operations in terms of
cmpxchg (as x86 always does), but in the configurations where this matters
code size tends to be paramount, so the libcall is more desirable.
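
For illustration, a minimal IR sketch (function name hypothetical) of the kind of operation that now falls back to a libcall on ldrex/strex-less targets, presumably a __sync_* runtime helper:

  ; hypothetical example: an atomic max read-modify-write that LegalizeDAG can
  ; now lower to a runtime call instead of requiring inline expansion
  define i32 @atomic_max(i32* %p, i32 %val) {
    %old = atomicrmw max i32* %p, i32 %val seq_cst
    ret i32 %old
  }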

llvm-svn: 193398
2013-10-25 09:30:20 +00:00
Jim Grosbach
8e79ebd235 ARM: Test r193381 a bit more thoroughly.
Make sure we're predicating right based on CPU even if the triple is 'wrong'.

llvm-svn: 193382
2013-10-24 23:11:05 +00:00
Jim Grosbach
1b0b1ab2d5 ARM: Tweak usage of '*vfp' compiler_rt functions.
Only use them if the subtarget has ARM mode, as these routines are implemented
as ARM code.

rdar://15302004

llvm-svn: 193381
2013-10-24 23:07:11 +00:00
Tim Northover
adc6407bf1 ARM: Use non-VFP softcalls on embedded Darwinish targets
The compiler-rt functions __adddf3vfp and so on exist purely to allow Thumb1
code to make use of VFP instructions by switching back to ARM mode; they make
no sense for M-class processors, which don't even have an ARM mode.

Given that justification, in practice this is a platform ABI decision so the
actual check is based on that rather than CPU features.

rdar://problem/15302004

llvm-svn: 193327
2013-10-24 10:37:09 +00:00
Yaron Keren
33ee14bb94 Added a test for the -elf configuration to check that the _alloca call is
properly generated. See:

http://llvm.org/viewvc/llvm-project?view=revision&revision=193289

llvm-svn: 193321
2013-10-24 09:36:08 +00:00
Job Noorman
49c1205681 Make sure SP is always aligned on a 2-byte boundary
llvm-svn: 193320
2013-10-24 09:32:31 +00:00
Amara Emerson
de52a239bd [AArch64] Fix NZCV reg live-in bug in F128CSEL codegen.
When generating the IfTrue basic block during the F128CSEL pseudo-instruction
handling, the NZCV live-in for the newly created BB wasn't being added. This
caused a fault during MI-sched/live range calculation when the predecessor
for the fall-through BB didn't have a live-in for the phys-reg as expected.

llvm-svn: 193316
2013-10-24 08:28:24 +00:00
Elena Demikhovsky
da06b9b278 AVX-512: added VCVTPH2PS, VCVTPS2PH with intrinsics
llvm-svn: 193312
2013-10-24 07:16:35 +00:00
Craig Topper
44b55e7b6f Replace sse41/sse42 with sse4.1/sse4.2 in test command lines to fix bots.
llvm-svn: 193311
2013-10-24 07:00:06 +00:00
Craig Topper
ec249f2467 Add non-AVX tests for AES intrinsics.
llvm-svn: 193310
2013-10-24 06:50:17 +00:00
Craig Topper
5219df38ec Add tests for SSE intrinsics in non-avx mode by copying from the AVX test cases. Some of these may have been tested by other tests, but most weren't. Patch by Cameron McInally.
llvm-svn: 193309
2013-10-24 06:45:13 +00:00
Benjamin Kramer
701e41bb58 X86: Custom lower sext v16i8 to v16i16, and the corresponding truncate.
Also update the cost model.
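
A minimal IR sketch (function name hypothetical) of the pattern this custom lowering targets:

  ; sign-extend sixteen i8 lanes to sixteen i16 lanes; the corresponding
  ; truncate is the inverse operation mentioned above
  define <16 x i16> @sext_v16i8(<16 x i8> %x) {
    %r = sext <16 x i8> %x to <16 x i16>
    ret <16 x i16> %r
  }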

llvm-svn: 193270
2013-10-23 21:06:07 +00:00
Benjamin Kramer
8ed652c269 X86: Custom lower zext v16i8 to v16i16.
On sandy bridge (PR17654) we now get
	vpxor	%xmm1, %xmm1, %xmm1
	vpunpckhbw	%xmm1, %xmm0, %xmm2
	vpunpcklbw	%xmm1, %xmm0, %xmm0
	vinsertf128	$1, %xmm2, %ymm0, %ymm0

On haswell it's a simple
	vpmovzxbw	%xmm0, %ymm0

There is a maze of duplicated and dead transforms and patterns in this
area. Remove the dead custom lowering of zext v8i16 to v8i32, which is
already handled by LowerAVXExtend.
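
For reference, a minimal IR input (function name hypothetical) that exercises the new v16i8 zext lowering and produces the assembly shown above:

  ; zero-extend sixteen i8 lanes to sixteen i16 lanes
  define <16 x i16> @zext_v16i8(<16 x i8> %x) {
    %r = zext <16 x i8> %x to <16 x i16>
    ret <16 x i16> %r
  }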

llvm-svn: 193262
2013-10-23 19:19:04 +00:00
Michael Liao
3b38b22386 Fix PR17631
- Skip instructions added in the prolog. For some targets, the prolog may
  insert helper function calls (e.g. _chkstk is called when more than 4K
  bytes are allocated on the stack). However, these helpers don't use/def
  YMM/XMM registers.

llvm-svn: 193261
2013-10-23 18:32:43 +00:00
Daniel Sanders
9918652e43 [mips][msa] Added support for matching fexp2 from normal IR (i.e. not intrinsics)
llvm-svn: 193239
2013-10-23 10:36:52 +00:00
Tom Stellard
7df5f52e81 R600/SI: fix MIMG writemask adjustment
This fixes piglit:
- shaders/glsl-fs-texture2d-masked
- shaders/glsl-fs-texture2d-masked-4

Patch by: Marek Olšák

Signed-off-by: Marek Olšák <marek.olsak@amd.com>
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
llvm-svn: 193222
2013-10-23 02:53:47 +00:00
Tom Stellard
2b6ff7e802 R600: Fix handling of vector kernel arguments
The SelectionDAGBuilder was promoting vector kernel arguments to legal
types, but this won't work for R600 and SI since kernel arguments are
stored in memory and can't be promoted.  In order to handle vector
arguments correctly we need to look at the original types from the LLVM IR
function.
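
A minimal IR sketch (names hypothetical) of a kernel-style function with a vector argument; without looking at the original IR type, a vector such as <3 x i32> would be widened to a legal type instead of being read from argument memory as written:

  ; the vector argument must be loaded with its original IR type rather than
  ; the promoted/widened legal type
  define void @vec_arg_kernel(<3 x i32> addrspace(1)* %out, <3 x i32> %in) {
    store <3 x i32> %in, <3 x i32> addrspace(1)* %out
    ret void
  }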

llvm-svn: 193215
2013-10-23 00:44:32 +00:00
Tom Stellard
a1dbb396e5 R600/SI: Add support for i64 bitwise or
llvm-svn: 193213
2013-10-23 00:44:19 +00:00
Tom Stellard
914bfa633e R600/SI: Use S_LOAD_DWORD instructions for v8i32 and v16i32
llvm-svn: 193212
2013-10-23 00:44:12 +00:00
Tom Stellard
5908e906e2 R600: Simplify handling of private address space
The AMDGPUIndirectAddressing pass was previously responsible for
lowering private loads and stores to indirect addressing instructions.
However, this pass was buggy and way too complicated.  The only
advantage it had over the new simplified code was that it saved one
instruction per direct write to private memory.  This optimization
likely has a minimal impact on performance, and we may be able
to duplicate it using some other transformation.

For the private address space, we now:
1. Lower private loads/store to Register(Load|Store) instructions
2. Reserve part of the register file as 'private memory'
3. After regalloc lower the Register(Load|Store) instructions to
   MOV instructions that use indirect addressing.

llvm-svn: 193179
2013-10-22 18:19:10 +00:00
Elena Demikhovsky
3136868b1d AVX-512: aligned / unaligned load and store for 512-bit integer vectors.
llvm-svn: 193156
2013-10-22 09:19:28 +00:00
Bill Wendling
0b3ac89081 Add testcase for PR3168. It was fixed over time.
PR3168

llvm-svn: 193152
2013-10-22 08:23:03 +00:00
Eric Christopher
4762bba338 Fix spelling, grammar, and match naming convention for test files.
llvm-svn: 193130
2013-10-21 23:14:06 +00:00
Chad Rosier
838b6065b8 [AArch64] Add the constraint to NEON scalar mla/mls instructions.
llvm-svn: 193117
2013-10-21 20:11:47 +00:00
Matt Arsenault
aad9d96b2b Fix CodeGen for vectors of pointers with address spaces.
llvm-svn: 193112
2013-10-21 20:03:58 +00:00
Matt Arsenault
dad904931b Fix CodeGen for different size address space GEPs
llvm-svn: 193111
2013-10-21 20:03:54 +00:00
Lang Hames
df2443e32e X86 vector element shift-by-immediate instructions take i8 immediates. Make
the instruction definitions and ISel reflect this.

Prior to this patch these instructions took an i32i8imm, and the high bits were
dropped during encoding. This led to incorrect behavior for shifts by
immediates higher than 255. This patch fixes that issue by detecting large
immediate shifts and returning constant zero (for logical shifts) or capping
the shift amount at an encodable value (for arithmetic shifts).
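
A hypothetical IR snippet illustrating the out-of-range case (the intrinsic is the existing llvm.x86.sse2.pslli.w; the function name is made up):

  declare <8 x i16> @llvm.x86.sse2.pslli.w(<8 x i16>, i32)

  ; a logical left shift by 300 does not fit in the i8 immediate field, so it
  ; now lowers to a constant-zero result instead of a truncated shift count
  define <8 x i16> @shift_too_far(<8 x i16> %x) {
    %r = call <8 x i16> @llvm.x86.sse2.pslli.w(<8 x i16> %x, i32 300)
    ret <8 x i16> %r
  }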

Fixes <rdar://problem/14968098>

llvm-svn: 193096
2013-10-21 17:51:24 +00:00
Elena Demikhovsky
dceb9534bf AVX-512: MUL operation lowering for v8i64
llvm-svn: 193083
2013-10-21 13:27:34 +00:00
Matheus Almeida
1760b2c642 [mips][msa] Fix definition of SLD instruction.
The second parameter of the SLD intrinsic is the number of columns (a GPR) by
which to slide the source array left.

llvm-svn: 193076
2013-10-21 11:47:56 +00:00
Peter Collingbourne
657e872155 Emit prefix data after debug and EH directives.
This ensures that the prefix data is treated as part of the function for
the purpose of debug info.  This provides a better debugging experience,
among other things by allowing a debug info client to correctly look up
a function in debug info given a function pointer.
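
A minimal sketch of a function carrying prefix data (the constant is arbitrary); with this change the emitted constant is treated as part of the function for debug-info purposes:

  ; the i32 is emitted immediately before the function's entry point
  define void @f() prefix i32 123 {
    ret void
  }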

llvm-svn: 193042
2013-10-20 02:16:21 +00:00
Andrew Trick
19f2e6075e Update PPC loop tests after SCEV non-unit-stride checkin r193015.
llvm-svn: 193021
2013-10-19 00:14:04 +00:00
David Majnemer
70efa1983f Test case for r192957
Forgot to 'svn add'

llvm-svn: 192978
2013-10-18 14:49:59 +00:00
Bill Schmidt
d6db838787 [PATCH] Fix PR17168 (DAG scheduler inserts DBG_VALUE before PHI with fast-isel)
PR17168 describes a test case that fails when compiling for debug with
fast-isel.  Investigation showed that the test was failing because a DBG_VALUE
machine instruction was placed prior to a PHI.

For this problem to occur requires the following:
 * Compile for debug
 * Compile with fast-isel
 * In a block B, fast-isel must partially succeed before punting to DAG-isel
 * B must start with a PHI
 * The first unhandled node in the DAG must not generate a machine instruction
 * A debug value with an order less than that of the first unhandled node exists

When all of these circumstances apply, the existing test that an instruction
was not inserted won't fire.  Currently it tests whether the block is empty,
or whether the last instruction generated is a phi.  When fast-isel has
partially succeeded, the last instruction generated will not be a phi.
Instead, we need to check whether the current insert position is immediately
following a phi.  This patch adds that check, and adds the test case from the
PR as a regression test.

llvm-svn: 192976
2013-10-18 14:20:11 +00:00
Chad Rosier
163fdd3e73 [AArch64] Add support for NEON scalar extract narrow instructions.
llvm-svn: 192970
2013-10-18 14:03:24 +00:00
Daniel Sanders
794132ee1f [mips][msa] Added a regression test that depended on multiple patches to pass.
llvm-svn: 192961
2013-10-18 09:52:21 +00:00
Hans Wennborg
33576424a9 Revert "Re-commit r192758 - MC: quote tricky symbol names in asm output"
This caused the clang-native-mingw32-win7 buildbot to break.

The assembler was complaining about the following lines that were showing up
in the asm for CrashRecoveryContext.cpp:

  movl  $"__ZL16ExceptionHandlerP19_EXCEPTION_POINTERS@4", 4(%eax)
  calll "_AddVectoredExceptionHandler@8"
  .def   "__ZL16ExceptionHandlerP19_EXCEPTION_POINTERS@4";
  "__ZL16ExceptionHandlerP19_EXCEPTION_POINTERS@4":
  calll "_RemoveVectoredExceptionHandler@4"

Reverting for now.

llvm-svn: 192940
2013-10-18 02:14:40 +00:00
David Peixotto
839f0f98a5 17309 ARM backend incorrectly lowers COPY_STRUCT_BYVAL_I32 for thumb1 targets
This commit implements the correct lowering of the
COPY_STRUCT_BYVAL_I32 pseudo-instruction for thumb1 targets.
Previously, the lowering of COPY_STRUCT_BYVAL_I32 generated the
post-increment forms of ldr/ldrh/ldrb instructions. Thumb1 does not
have the post-increment form of these instructions so the generated
assembly contained invalid instructions.

Passing the generated assembly to gcc caused it to complain with an
error like this:

  Error: cannot honor width suffix -- `ldrb r3,[r0],#1'

and the integrated assembler would generate an object file with an
invalid instruction encoding.

This commit contains a small test case that demonstrates the problem
with thumb1 targets as well as an expanded test case that more
thoroughly tests the lowering of byval struct passing for arm,
thumb1, and thumb2 targets.
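
A minimal IR sketch (type and function names hypothetical) of the kind of byval struct call whose lowering goes through COPY_STRUCT_BYVAL_I32:

  %struct.S = type { [64 x i8] }

  declare void @take_struct(%struct.S* byval)

  ; passing the struct by value forces a struct copy at the call site, which
  ; is what the pseudo-instruction expands into
  define void @caller(%struct.S* %s) {
    call void @take_struct(%struct.S* byval %s)
    ret void
  }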

llvm-svn: 192916
2013-10-17 19:52:05 +00:00
Chad Rosier
9a6d485c7f [AArch64] Add support for NEON scalar three register different instruction
class.  The instruction class includes the signed saturating doubling
multiply-add long, signed saturating doubling multiply-subtract long, and
the signed saturating doubling multiply long instructions.

llvm-svn: 192908
2013-10-17 18:12:29 +00:00
Bill Wendling
c98a0f705a Add testcase to make sure we don't generate a compact unwind section for ELF binaries.
This tests r190354.

llvm-svn: 192903
2013-10-17 17:38:49 +00:00
Daniel Sanders
f588d8546a [mips][msa] Added lsa instruction
llvm-svn: 192895
2013-10-17 13:38:20 +00:00
Benjamin Kramer
302eac4a38 Fix tests not to depend on specific regalloc or instruction order.
They were failing with -mcpu=atom.

llvm-svn: 192890
2013-10-17 12:41:05 +00:00
Daniel Sanders
4adbd7d84f Fix r192888: test/CodeGen/Mips/msa/3r_ld_st.ll should have been deleted
llvm-svn: 192889
2013-10-17 12:36:35 +00:00
Richard Sandiford
d52a8d6d92 Replace sra with srl if a single sign bit is required
E.g. (and (sra (i32 x) 31) 2) -> (and (srl (i32 x) 30) 2).
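
An IR-level equivalent of the DAG example above (function name hypothetical); both the arithmetic and the logical form extract only the sign bit, so the substitution is safe:

  define i32 @sign_bit_and(i32 %x) {
    ; (ashr x, 31) masked with 2 keeps only the sign bit; rewriting it as
    ; (lshr x, 30) with the same mask yields the same value
    %s = ashr i32 %x, 31
    %r = and i32 %s, 2
    ret i32 %r
  }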

llvm-svn: 192884
2013-10-17 11:16:57 +00:00
Andrea Di Biagio
f45ea088f7 Fix edge condition in DAGCombiner to improve codegen of shift sequences.
When canonicalizing dags according to the rule
(shl (zext (shr X, c1) ), c1) ==> (zext (shl (shr X, c1), c1))

remember to add the new shl dag to the DAGCombiner worklist of nodes.
If we don't explicitly add it to the worklist of nodes to visit, we
may never trigger the later rule that folds the shift left + logical
shift right into an AND instruction with a bitmask.
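
A minimal IR sketch (function name hypothetical) of the shift sequence in question; after canonicalization and the follow-up fold it becomes a zero-extend plus an AND with a bitmask:

  define i64 @shift_pair(i32 %x) {
    ; matches (shl (zext (shr X, c1)), c1) with c1 = 5
    %shr = lshr i32 %x, 5
    %ext = zext i32 %shr to i64
    %shl = shl i64 %ext, 5
    ret i64 %shl
  }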

llvm-svn: 192883
2013-10-17 11:02:58 +00:00
Jim Grosbach
504d93cae7 x86: Move bitcasts outside concat_vector.
Consider the following:

typedef unsigned short ushort4U __attribute__((ext_vector_type(4),
aligned(2)));
typedef unsigned short ushort4 __attribute__((ext_vector_type(4)));
typedef unsigned short ushort8 __attribute__((ext_vector_type(8)));
typedef int int4 __attribute__((ext_vector_type(4)));

int4 __bbase_cvt_int(ushort4 v) {
  ushort8 a;
  a.lo = v;
  return _mm_cvtepu16_epi32(a);
}

This generates the, not unreasonable, IR:
define <4 x i32> @foo0(double %v.coerce) nounwind ssp {
  %tmp = bitcast double %v.coerce to <4 x i16>
  %tmp1 = shufflevector <4 x i16> %tmp, <4 x i16> undef, <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 undef, i32 undef, i32 undef, i32 undef>
  %tmp2 = tail call <4 x i32> @llvm.x86.sse41.pmovzxwd(<8 x i16> %tmp1)
  ret <4 x i32> %tmp2
}

The problem is when type legalization gets hold of the v4i16. It
legalizes that by spilling to the stack, then doing a zero-extending
load. Things go even more silly from there, ending up with something
like:
_foo0:
  movsd %xmm0, -8(%rsp)       <== Spill to the stack.
  movq  -8(%rsp), %xmm0       <== Reload it right back out.
  pmovzxwd  %xmm0, %xmm1      <== Here's what we actually asked for.
  pblendw $1, %xmm1, %xmm0    <== We don't need this at all
  pmovzxwd  %xmm0, %xmm0      <== We already did this
  ret

The v8i8 to v8i16 zext intrinsic gives even worse results, with two
table lookups via pshufb instructions (!!).

To avoid all that, we can move the bitcasting until after we've formed
the wider (legal) vector type. Then our normal codegen flows along
nicely and we get the expected:
_foo0:
  pmovzxwd  %xmm0, %xmm0
  ret

rdar://15245794

llvm-svn: 192866
2013-10-17 02:58:06 +00:00