Commit Graph

9851 Commits

Author SHA1 Message Date
Tim Northover
0dd1dfabfd AArch64: add test for NZCV cross-copy save.
llvm-svn: 209665
2014-05-27 16:50:09 +00:00
Tim Northover
13435480e0 AArch64: add AArch64-specific test for 'c' and 'n'.
llvm-svn: 209664
2014-05-27 16:50:03 +00:00
Bill Schmidt
b806d02b5b [PATCH] Correct type used for VADD_SPLAT optimization on PowerPC
In PPCISelLowering.cpp: PPCTargetLowering::LowerBUILD_VECTOR(), there
is an optimization for certain patterns to generate one or two vector
splats followed by a vector add or subtract.  This operation is
represented by a VADD_SPLAT in the selection DAG.  Prior to this
patch, it was possible for the VADD_SPLAT to be assigned the wrong
data type, causing incorrect code generation.  This patch corrects the
problem.

Specifically, the code previously assigned the value type of the
BUILD_VECTOR node to the newly generated VADD_SPLAT node.  This is
correct much of the time, but not always.  The problem is that the
call to isConstantSplat() may return a SplatBitSize that is not the
same as the number of bits in the element type of the original vector.  The
correct type to assign is a vector type with the same element bit size
as SplatBitSize.

The included test case shows an example of this, where the
BUILD_VECTOR node has a type of v16i8.  The vector to be built is {0,
16, 0, 16, 0, 16, 0, 16, 0, 16, 0, 16, 0, 16, 0, 16}.  isConstantSplat
detects that we can generate a splat of 16 for type v8i16, which is
the type we must assign to the VADD_SPLAT node.  If we do not, we
generate a vspltisb of 8 and a vaddubm, which generates the incorrect
result {16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16,
16}.  The correct code generation is a vspltish of 8 and a vadduhm.
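
For illustration, a minimal .ll sketch of the scenario (names invented,
not taken from the actual test case):

define void @vadd_splat(<16 x i8>* %out) {
  ; Each big-endian halfword 0x0010 is the byte pair {0, 16}, so the
  ; expected lowering is "vspltish 8" followed by "vadduhm" (8 + 8 = 16
  ; in every halfword), not a byte splat and a byte add.
  store <16 x i8> <i8 0, i8 16, i8 0, i8 16, i8 0, i8 16, i8 0, i8 16,
                   i8 0, i8 16, i8 0, i8 16, i8 0, i8 16, i8 0, i8 16>,
        <16 x i8>* %out
  ret void
}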

This patch also corrected code generation for
CodeGen/PowerPC/2008-07-10-SplatMiscompile.ll, which had been marked
as an XFAIL, so we can remove the XFAIL from the test case.

llvm-svn: 209662
2014-05-27 15:57:51 +00:00
Amara Emerson
77fc34e95e [ARM] Emit correct build attributes for the relocation models.
Patch by Asiri Rathnayake.

llvm-svn: 209656
2014-05-27 13:30:21 +00:00
Tim Northover
2172cefdfd ARM: teach AAPCS-VFP to deal with Cortex-M4.
Cortex-M4 only has single-precision floating point support, so any LLVM
"double" type will have been split into 2 i32s by now. Fortunately, the
consecutive-register framework turns out to be precisely what's needed to
reconstruct the double and follow AAPCS-VFP correctly!

rdar://problem/17012966

llvm-svn: 209650
2014-05-27 10:43:38 +00:00
Filipe Cabecinhas
7df1221986 Convert some X86 blendv* intrinsics into IR.
Summary:
Implemented an InstCombine transformation that takes a blendv* intrinsic
call and translates it into an IR select, if the mask is constant.

This will eventually get lowered into blends with immediates if possible,
or pblendvb (with an option to further optimize if we can transform the
pblendvb into a blend+immediate instruction, depending on the selector).
It will also enable optimizations by the IR passes, which give up at
the sight of the intrinsic.

Both the transformation and the lowering of its result to asm got shiny
new tests.

The transformation is a bit convoluted because of blendvp[sd]'s
definition:

Its mask is a floating point value! This forces us to convert it and get
the highest bit. I suppose this happened because the mask has type
__m128 in Intel's intrinsic and v4sf (for blendps) in gcc's builtin.

I will send an email to llvm-dev to discuss if we want to change this or
not.
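
A rough before/after sketch of the transformation (the mask constant is
chosen arbitrarily for illustration):

declare <4 x float> @llvm.x86.sse41.blendvps(<4 x float>, <4 x float>, <4 x float>)

define <4 x float> @fold_blendv(<4 x float> %a, <4 x float> %b) {
  ; Only the sign bit of each mask lane matters, hence the bitcast.
  %mask = bitcast <4 x i32> <i32 -1, i32 0, i32 -1, i32 0> to <4 x float>
  %r = call <4 x float> @llvm.x86.sse41.blendvps(<4 x float> %a, <4 x float> %b, <4 x float> %mask)
  ; After the transformation this becomes, roughly (lanes whose sign bit
  ; is set pick %b):
  ;   %r = select <4 x i1> <i1 true, i1 false, i1 true, i1 false>,
  ;               <4 x float> %b, <4 x float> %a
  ret <4 x float> %r
}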

Reviewers: grosbach, delena, nadav

Differential Revision: http://reviews.llvm.org/D3859

llvm-svn: 209643
2014-05-27 03:42:20 +00:00
Rafael Espindola
94cd9a1ed6 [PPC] Use alias symbols in address computation.
This seems to match what gcc does for ppc and what every other llvm
backend does.

llvm-svn: 209638
2014-05-26 19:08:19 +00:00
Tim Northover
10cffb6eef AArch64: force i1 to be zero-extended at an ABI boundary.
This commit is debatable. There are two possible approaches, neither
of which is really satisfactory:

1. Use "@foo(i1 zeroext)" to mean an extension to 32-bits on Darwin,
   and 8 bits otherwise.
2. Redefine "@foo(i1)" to mean that the i1 is extended by the caller
   to 8 bits. This goes against the spirit of "zeroext" I think, but
   it's a bit of a vague construct anyway (by definition you're going
   to extend to the amount required by the ABI, that's why it's the
   ABI!).

This implements option 2. The DAG machinery really isn't set up for the
first (there's a fairly strong assumption that "zeroext" goes to at
least the smallest register size), and even if it were, the resulting
DAG looks like it would be inferior in many cases.

Theoretically we could add AssertZext nodes in the consumers of
ABI-passed values too now, but this actually seems to make the code
worse in practice by making truncation proceed in two steps. The code
produced is equally valid if we continue to assume only the low bit is
defined.
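
A sketch of what option 2 means at the IR level (callee is hypothetical):

declare void @callee(i1)

define void @caller(i1 %flag) {
  ; Under option 2 the caller must zero-extend the single defined bit to
  ; 8 bits before the call; the callee may assume bits 1-7 of the low
  ; byte of the argument register are zero.
  call void @callee(i1 %flag)
  ret void
}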

Should fix PR19850

llvm-svn: 209637
2014-05-26 17:22:07 +00:00
Tim Northover
fc1b1e8952 AArch64: simplify calling conventions slightly.
We can eliminate the custom C++ code in favour of some TableGen to
check the same things. Functionality should be identical, except for a
buffer overrun that was present in the C++ code and meant WebKit
failed if any small argument needed to be passed on the stack.

llvm-svn: 209636
2014-05-26 17:21:53 +00:00
Tilmann Scheller
0d5b6aa201 [AArch64] Add store + add folding regression tests for the load/store optimization pass.
Add tests for the following transform:

 str X, [x0, #32]
  ...
 add x0, x0, #32
  ->
 str X, [x0, #32]!

with X being either w1, x1, s0, d0 or q0.
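
An IR-level sketch that should trigger the transform (a simplified
variant of the patterns the tests use; names invented):

define i32* @store_pre_idx(i32* %p, i32 %v) {
  ; Before the pass: "str w1, [x0, #32]" followed by "add x0, x0, #32";
  ; the pass folds the pair into "str w1, [x0, #32]!".
  %addr = getelementptr i32* %p, i64 8
  store i32 %v, i32* %addr
  ret i32* %addr
}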

llvm-svn: 209627
2014-05-26 13:36:47 +00:00
Tilmann Scheller
bf3f06fd14 [AArch64] Add more regression tests for the load/store optimization pass.
Cover the following cases:

  ldr X, [x0, #32]
   ...
  add x0, x0, #32
   ->
  ldr X, [x0, #32]!

with X being either w1, x1, s0, d0 or q0.

llvm-svn: 209624
2014-05-26 12:15:51 +00:00
Tilmann Scheller
4b78dd1812 Remove accidentally committed whitespace.
llvm-svn: 209619
2014-05-26 09:40:40 +00:00
Tilmann Scheller
f505625e35 [AArch64] Add a regression test for the load store optimizer.
We have a couple of regression tests for load/store pairing, but (to my knowledge) there are no regression tests for the load/store + add/sub folding.

As a first step towards increased test coverage of this area, this commit adds a test for one instance of a load + add to pre-indexed load transformation.

llvm-svn: 209618
2014-05-26 09:37:19 +00:00
Rafael Espindola
93615d7bf8 Just check the entire string.
Thanks to David Blaikie for the suggestion.

llvm-svn: 209610
2014-05-26 04:08:51 +00:00
Rafael Espindola
a40a842933 Emit data or code export directives based on the type.
Currently we look at the Aliasee to decide what type of export
directive to use. It seems better to use the type of the alias
directly. This is similar to how we handle an alias that has the
same address as, but different attributes (linkage, visibility)
from, its aliasee.

With this patch it is now possible to do things like

target datalayout = "e-m:e-i64:64-f80:128-n8:16:32:64-S128"
target triple = "x86_64-pc-windows-msvc"
@foo = global [6 x i8] c"\B8*\00\00\00\C3", section ".text", align 16
@f = dllexport alias i32 (), [6 x i8]* @foo
!llvm.module.flags = !{!0}
!0 = metadata !{i32 6, metadata !"Linker Options", metadata !1}
!1 = metadata !{metadata !2, metadata !3}
!2 = metadata !{metadata !"/DEFAULTLIB:libcmt.lib"}
!3 = metadata !{metadata !"/DEFAULTLIB:oldnames.lib"}

llvm-svn: 209600
2014-05-25 12:49:07 +00:00
Rafael Espindola
885673b0ec Make these CHECKs a bit more strict.
The " at the end of the line makes sure we matched the entire directive.

llvm-svn: 209599
2014-05-25 12:43:13 +00:00
Tim Northover
ca0f4dc4f0 AArch64/ARM64: move ARM64 into AArch64's place
This commit starts with a "git mv ARM64 AArch64" and continues out
from there, renaming the C++ classes, intrinsics, and other
target-local objects for consistency.

"ARM64" test directories are also moved, and tests that began their
life in ARM64 use an arm64 triple, while those from AArch64 use an
aarch64 triple. The two should be equivalent, though.

This finishes the AArch64 merge, and everyone should feel free to
continue committing as normal now.

llvm-svn: 209577
2014-05-24 12:50:23 +00:00
Tim Northover
d7f173214f AArch64/ARM64: remove AArch64 from tree prior to renaming ARM64.
I'm doing this in two phases for a better "git blame" record. This
commit removes the previous AArch64 backend and redirects all
functionality to ARM64. It also deduplicates test-lines and removes
orphaned AArch64 tests.

The next step will be "git mv ARM64 AArch64" and rewire most of the
tests.

Hopefully LLVM is still functional, though it would be even better if
no-one ever had to care because the rename happens straight
afterwards.

llvm-svn: 209576
2014-05-24 12:42:26 +00:00
Tim Northover
8d0c65ea6b ARM64: extract a 32-bit subreg when selecting an inreg extend
After the load/store refactoring, we were sometimes trying to feed a
GPR64 into a 32-bit register offset operand. This failed in
copyPhysReg.

llvm-svn: 209566
2014-05-24 07:05:42 +00:00
Nico Rieck
03369ca3aa Revert part of "Fix broken FileCheck prefixes"
This reverts part of commit r209538.

llvm-svn: 209544
2014-05-23 19:33:49 +00:00
Rafael Espindola
79a001c01e Use alias linkage and visibility to decide tls access mode.
This matches both what we do for the non-thread case and what gcc does.

With this patch clang would match gcc's behaviour in

static __thread int a = 42;
extern __thread int b __attribute__((alias("a")));
int *f(void) { return &a; }
int *g(void) { return &b; }

if not for PR19843. Manually writing the IL does produce the same access modes.
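
A hand-written IR sketch of the same setup (given in present-day alias
syntax, which differs from the syntax used elsewhere in this log):

@a = internal thread_local global i32 42
@b = thread_local alias i32, ptr @a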

It is also a step in the direction of fixing PR19844.

llvm-svn: 209543
2014-05-23 19:16:56 +00:00
Nico Rieck
b7010b8ef7 Remove unused CHECK lines
llvm-svn: 209539
2014-05-23 19:06:44 +00:00
Nico Rieck
e0fe65b49d Fix broken FileCheck prefixes
llvm-svn: 209538
2014-05-23 19:06:24 +00:00
Rafael Espindola
1c383955e0 Convert test to use FileCheck.
llvm-svn: 209528
2014-05-23 16:51:13 +00:00
Daniel Sanders
b781c1f734 [mips][mips64r6] [ls][dw][lr] are not available in MIPS32r6/MIPS64r6
Summary:
Instead, the system is required to provide some means of handling unaligned
load/store without special instructions. Options include full hardware
support, full trap-and-emulate, and hybrids such as hardware support within
a cache line and trap-and-emulate for multi-line accesses.

MipsSETargetLowering::allowsUnalignedMemoryAccesses() has been configured to
assume that unaligned accesses are 'fast' on the basis that I expect few
hardware implementations will opt for pure-software handling of unaligned
accesses. The ones that do handle it purely in software can override this.
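
For illustration, an unaligned access looks like this at the IR level
(sketch only):

define i32 @load_unaligned(i32* %p) {
  ; "align 1" marks the access as potentially unaligned: pre-R6 ISAs
  ; expand it using lwl/lwr, while MIPS32r6/MIPS64r6 emit a plain lw and
  ; leave the handling to the system.
  %v = load i32* %p, align 1
  ret i32 %v
}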

mips64-load-store-left-right.ll has been merged into load-store-left-right.ll

The stricter testing revealed a Bits!=Bytes bug in passByValArg(). This has
been fixed and the variables renamed to clarify the units they hold.

Reviewers: zoran.jovanovic, jkolek, vmedic

Reviewed By: vmedic

Differential Revision: http://reviews.llvm.org/D3872

llvm-svn: 209512
2014-05-23 13:18:02 +00:00
Jiangning Liu
97cbeb5ba8 [ARM64] Fix a bug in shuffle vector lowering to generate correct vext ISD with swapped input vectors.
llvm-svn: 209495
2014-05-23 02:54:50 +00:00
Matt Arsenault
bfc007dbb5 R600: Try to convert BFE back to standard bit ops when possible.
This allows existing DAG combines to work on them, and then
we can re-match to BFE if necessary during instruction selection.

llvm-svn: 209462
2014-05-22 18:09:12 +00:00
Matt Arsenault
90d0fd2ea0 R600: Add dag combine for BFE
llvm-svn: 209461
2014-05-22 18:09:07 +00:00
Matt Arsenault
4ab9246e99 R600: Implement ComputeNumSignBitsForTargetNode for BFE
llvm-svn: 209460
2014-05-22 18:09:03 +00:00
Matt Arsenault
c7d0679684 R600: Expand mul24 for GPUs without it
llvm-svn: 209458
2014-05-22 18:00:24 +00:00
Matt Arsenault
fcb6cf68ee R600: Expand mad24 for GPUs without it
llvm-svn: 209457
2014-05-22 18:00:20 +00:00
Matt Arsenault
e43426533f R600: Add intrinsics for mad24
llvm-svn: 209456
2014-05-22 18:00:15 +00:00
Andrea Di Biagio
98dd66445e [X86] Improve the lowering of BITCAST from MVT::f64 to MVT::v4i16/MVT::v8i8.
This patch teaches the x86 backend how to efficiently lower ISD::BITCAST dag
nodes from MVT::f64 to MVT::v4i16 (and vice versa), and from MVT::f64 to
MVT::v8i8 (and vice versa).

This patch extends the logic from revision 208107 to also handle MVT::v4i16
and MVT::v8i8. Also, this patch correctly propagates Undef values when
performing the widening of a vector (example: when widening from v2i32 to
v4i32, the upper 64 bits of the resulting vector are 'undef').
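
The kind of node this affects, in IR form (trivial sketch):

define <4 x i16> @cast_to_v4i16(double %d) {
  %v = bitcast double %d to <4 x i16>
  ret <4 x i16> %v
}

define double @cast_from_v8i8(<8 x i8> %v) {
  %d = bitcast <8 x i8> %v to double
  ret double %d
}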

llvm-svn: 209451
2014-05-22 16:21:39 +00:00
Tim Northover
6877f3a322 Segmented stacks: omit __morestack call when there's no frame.
Patch by Florian Zeitz

llvm-svn: 209436
2014-05-22 13:03:43 +00:00
Daniel Sanders
e5b7eb47f6 [mips] Make unalignedload.ll test stricter and easier to modify for MIPS32r6/MIPS64r6
Summary:
* Split into two functions, one to test each struct.
* R0 and R2 must be defined by an lw with a %got reference to the correct
  symbol.
* Test for $4 (first argument) where appropriate instead of accepting any
  register.
* Test that the two lbu's are correctly combined into $4

Depends on D3844

Reviewers: jkolek, zoran.jovanovic, vmedic

Reviewed By: vmedic

Differential Revision: http://reviews.llvm.org/D3845

llvm-svn: 209424
2014-05-22 11:55:04 +00:00
Daniel Sanders
c64e291555 [mips] Change lwl and lwr in inlineasm_constraint.ll to lw
Summary:
lwl and lwr are not available in MIPS32r6/MIPS64r6. The purpose of the test
is to check that '$1' expands to '0($x)', rather than to test anything
related to the lwl or lwr instructions, so we can simply switch to lw.
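
Roughly the shape of the inline asm involved (the constraint string is
an approximation, not copied from the actual test):

define i32 @read_word(i32* %p) {
  ; With a memory constraint, "$1" prints as "0($reg)" on MIPS, which is
  ; what the test actually checks for.
  %v = call i32 asm "lw $0, $1", "=r,*m"(i32* %p)
  ret i32 %v
}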

Depends on D3842

Reviewers: jkolek, zoran.jovanovic, vmedic

Reviewed By: vmedic

Differential Revision: http://reviews.llvm.org/D3844

llvm-svn: 209423
2014-05-22 11:51:06 +00:00
Daniel Sanders
a0467a4d71 [mips] Use addiu in inline assembly tests since addi is not available in all ISAs
Summary:
This patch is necessary so that they do not fail on MIPS32r6/MIPS64r6 when
-integrated-as is enabled by default and we correctly detect the host CPU.

No functional change since these tests are testing the behaviour of the
constraint used for the third operand rather than the mnemonic.
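
A sketch of the kind of test in question (assuming the 'I' immediate
constraint; the real tests may use a different one):

define i32 @add_imm(i32 %a) {
  ; The interesting part is the third-operand constraint, not the
  ; mnemonic, so addiu works on every ISA revision.
  %r = call i32 asm sideeffect "addiu $0, $1, $2", "=r,r,I"(i32 %a, i32 1023)
  ret i32 %r
}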

Depends on D3842

Reviewers: zoran.jovanovic, jkolek, vmedic

Reviewed By: vmedic

Differential Revision: http://reviews.llvm.org/D3843

llvm-svn: 209421
2014-05-22 11:46:58 +00:00
Tim Northover
c8bed61f8e AArch64/ARM64: enable more AArch64 tests.
llvm-svn: 209408
2014-05-22 07:40:55 +00:00
Saleem Abdulrasool
fa4b3f6e65 ARM: introduce llvm.arm.undefined intrinsic
This intrinsic permits the emission of platform-specific undefined sequences.
ARM has reserved the 0xde opcode which takes a single integer parameter (ignored
by the CPU).  This permits the operating system to implement custom behaviour on
this trap.  The llvm.arm.undefined intrinsic is meant to provide a means for
generating the target-specific behaviour from the frontend.  This is
particularly useful for Windows on ARM which has made use of a series of these
special opcodes.
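
Usage is straightforward (the immediate value here is arbitrary):

declare void @llvm.arm.undefined(i32)

define void @raise_undefined() {
  call void @llvm.arm.undefined(i32 254)
  ret void
}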

llvm-svn: 209390
2014-05-22 04:46:46 +00:00
Matt Arsenault
cec6ae49e8 R600/SI: Match fp_to_uint / uint_to_fp for f64
llvm-svn: 209388
2014-05-22 03:20:30 +00:00
David Blaikie
f148b7f5aa DebugInfo: Use the SPMap to find the parent CU of inlined functions as they may not be in the current CU
Committed in r209178, then reverted in r209251 due to LTO breakage;
here's a proper fix for the case of the missing subprogram DIE. The DIEs
were there, just in other compile units. Using the SPMap we can find the
right compile unit to search for and produce cross-unit references to
describe this kind of inlining.

One existing test case needed to be updated because it had a function
that wasn't in the CU's subprogram list, so it didn't appear in the
SPMap.

llvm-svn: 209335
2014-05-21 23:14:12 +00:00
Matt Arsenault
094f9f1e9c R600: Partially fix constant initializers for structs and vectors.
This should extend the current workaround to work with structs
that only contain legal, scalar types.
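
The now-supported shapes, roughly (address space 2 is the R600 constant
address space; names invented):

@sinit = addrspace(2) constant { i32, float } { i32 1, float 2.0 }
@vinit = addrspace(2) constant <4 x i32> <i32 1, i32 2, i32 3, i32 4>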

llvm-svn: 209331
2014-05-21 22:42:42 +00:00
Matt Arsenault
dc960fcadc R600: Add failing testcases for constant initializers.
Constant initializers involving illegal types hit an assertion.

Patch by: Jan Vesely <jan.vesely@rutgers.edu>

llvm-svn: 209330
2014-05-21 22:42:38 +00:00
Quentin Colombet
b70fffa971 [X86] Fix a bug in the lowering of BLENDI introduced in r209043.
ISD::VSELECT mask uses 1 to identify the first argument and 0 to identify the
second argument.
On the other hand, BLENDI uses 0 to identify the first argument and 1 to
identify the second argument.
Fix the generation of the blend mask to account for this difference.

The bug did not show up with r209043, because we were not checking for the
actual arguments of the blend instruction!
This commit also fixes the test cases.

Note: The same mask works for the BLENDr variant because the arguments are
swapped during instruction selection (see the BLENDXXrr patterns).
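
Concretely (a hedged sketch):

define <4 x float> @blend(<4 x float> %a, <4 x float> %b) {
  ; VSELECT: a true (1) lane takes the FIRST operand, so this yields
  ; { a0, b1, b2, a3 }.  BLENDPS uses 1 for the SECOND operand, so the
  ; correct immediate here is 0b0110 (6), not 0b1001 (9).
  %r = select <4 x i1> <i1 true, i1 false, i1 false, i1 true>,
              <4 x float> %a, <4 x float> %b
  ret <4 x float> %r
}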

<rdar://problem/16975435>

llvm-svn: 209324
2014-05-21 22:00:39 +00:00
Dave Estes
7d24ae5558 Test comment commit.
llvm-svn: 209306
2014-05-21 16:19:51 +00:00
Saleem Abdulrasool
7f0235499b ARM: correct bundle generation for MOV32T relocations
Although the previous code would construct a bundle and add the correct elements
to it, it would not finalise the bundle.  This resulted in the InternalRead
markers not being added to the MachineOperands nor, more importantly, the
externally visible defs to the bundle itself.  So, although the bundle was not
exposing the def, the generated code would be correct because no
optimisations were being performed.  When optimisations were enabled, the post
register allocator would kick in, and the hazard recognizer would reorder
operations around the load which would define the value being operated upon.

Rather than manually constructing the bundle, simply construct and finalise the
bundle via the finalizeBundle call after both MIs have been emitted.  This
improves the code generation with optimisations where IMAGE_REL_ARM_MOV32T
relocations are emitted.

The changes to the other tests are the result of the bundle generation
preventing the scheduler from hoisting the moves across the loads.  The net
effect of the generated code is equivalent, but, is much more identical to what
is actually being lowered.

llvm-svn: 209267
2014-05-21 01:25:24 +00:00
Alexey Samsonov
e152790b3c Fix test added in r209242: llc shouldn't create files in source tree
llvm-svn: 209252
2014-05-20 22:40:31 +00:00
Adam Nemet
6b3d606a41 [ARM64] PR19792: Fix cycle in DAG after performPostLD1Combine
Povray and dealII currently assert with "Overran sorted position" in
AssignTopologicalOrder.  The problem is that performPostLD1Combine can
introduce cycles.

Consider:

(insert_vector_elt (INSERT_SUBREG undef,
                                  (load (add %vreg0, Constant<8>), undef),  <= A
                                  TargetConstant<2>),
                   (load %vreg0, undef),                                    <= B
                   Constant<1>)

This is turned into an LD1LANEpost node.  However, the address in A is not a
valid user of the post-incremented address of B in LD1LANEpost.

llvm-svn: 209242
2014-05-20 21:47:07 +00:00
Eric Christopher
974cef18f4 Move the function and data section flags into the options struct and
make the functions that set them non-static.
Move and rename the LLVM-specific backend options to avoid conflicting
with the clang option.

Paired with a backend commit to update.

llvm-svn: 209238
2014-05-20 21:25:34 +00:00
Quentin Colombet
c4e9e184e8 [LSR] Canonicalize reg1 + ... + regN into reg1 + ... + 1*regN.
This commit introduces a canonical representation for the formulae.
Basically, as soon as a formula has more than one base register, the scaled
register field is used for one of them. The register put into the scaled
register is preferably a loop variant.
The commit refactors how the formulae are built in order to produce such a
representation.
This yields a more accurate, but still perfectible, cost model.

<rdar://problem/16731508>

llvm-svn: 209230
2014-05-20 19:25:04 +00:00