mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-24 05:23:45 +02:00
Commit Graph

591 Commits

Author SHA1 Message Date
Gerolf Hoflehner
40e09fb7dd MachineCombiner Pass for selecting faster instruction
sequence on AArch64

Re-commit of r214669 without changes to test cases
LLVM::CodeGen/AArch64/arm64-neon-mul-div.ll and
LLVM::CodeGen/AArch64/dp-3source.ll
This resolves the reported compile failures of the original commit.

llvm-svn: 214832
2014-08-05 01:16:13 +00:00
Juergen Ributzka
f39fd7e490 [FastISel][AArch64] Fix shift lowering for i8 and i16 value types.
This fix changes the parameters #r and #s that are passed to the UBFM/SBFM
instruction to get the zero/sign-extension for free.

The original problem was that the shift left would use the 32-bit shift even for
i8/i16 value types, which could leave the upper bits set with "garbage" values.

The arithmetic shift right on the other side would use the wrong MSB as sign-bit
to determine what bits to shift into the value.
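
For illustration only (my sketch, not from the commit): an i8 shift left by 2
whose result must be zero-extended can be encoded as a single UBFM alias with
#r and #s chosen so the high bits come out cleared:

  ubfiz w0, w0, #2, #6    ; alias of ubfm w0, w0, #30, #5: bits [5:0] -> [7:2], rest zeroed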

This fixes <rdar://problem/17907720>.

llvm-svn: 214788
2014-08-04 21:49:51 +00:00
Chad Rosier
d843d13525 [AArch64] Extend the number of scalar instructions supported in the AdvSIMD
scalar integer instruction pass.

This is a patch I had lying around from a few months ago.  The pass is
currently disabled by default, so nothing too interesting.

llvm-svn: 214779
2014-08-04 21:20:25 +00:00
Kevin Qin
348a8dd760 Revert "r214669 - MachineCombiner Pass for selecting faster instruction"
This commit broke "make check" for several hours, so revert it.

llvm-svn: 214697
2014-08-04 05:10:33 +00:00
Gerolf Hoflehner
265dd68643 MachineCombiner Pass for selecting faster instruction
sequence -  AArch64 target support

 This patch turns off madd/msub generation in the DAGCombiner and generates
 them in the MachineCombiner instead. It replaces the original code sequence
 with the combined sequence when it is beneficial to do so.

 When there is no machine model support, it always generates the madd/msub
 instruction. This is also true when the objective is to optimize for code
 size: the combined sequence, being shorter, is always chosen and is not
 evaluated further.

 When there is a machine model the combined instruction sequence
 is evaluated for critical path and resource length using machine
 trace metrics and the original code sequence is replaced when it is
 determined to be faster.
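
 A minimal sketch of the kind of sequence targeted (illustrative, not a test
 from the commit):

   define i64 @madd(i64 %a, i64 %b, i64 %c) {
     %m = mul i64 %a, %b
     %r = add i64 %c, %m    ; selected as madd x0, x0, x1, x2 when judged faster
     ret i64 %r
   }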

 rdar://16319955

llvm-svn: 214669
2014-08-03 22:03:40 +00:00
James Molloy
b4d821ca5b Update test to use a more modern AArch64 triple, as requested by Renato.
llvm-svn: 214637
2014-08-02 17:15:11 +00:00
James Molloy
240087c61a [AArch64] Teach DAGCombiner that converting two consecutive loads into a vector load is not a good transform when paired loads are available.
The combiner was creating Q-register loads and stores, which then had to be spilled because there are no callee-save Q registers!

llvm-svn: 214634
2014-08-02 14:51:24 +00:00
Juergen Ributzka
bb4322fe8b [FastISel][AArch64] Fold offset into the memory operation.
Fold simple offsets into the memory operation:
  add x0, x0, #8
  ldr x0, [x0]
-->
  ldr x0, [x0, #8]

Fixes <rdar://problem/17887945>.

llvm-svn: 214545
2014-08-01 19:40:16 +00:00
Juergen Ributzka
5fad1ddebc [FastISel][AArch64] Add branch weights.
Add branch weights to branch instructions, so that the following passes can
optimize based on it (i.e. basic block ordering).

Fixes <rdar://problem/17887137>.

llvm-svn: 214537
2014-08-01 18:39:24 +00:00
Chad Rosier
f6bd4ee872 [AArch64] Fix test from r214518 in an attempt to appease buildbots.
llvm-svn: 214521
2014-08-01 15:30:41 +00:00
Chad Rosier
69caabf908 [AArch64] Generate tbz/tbnz when comparing against zero.
The tbz/tbnz instruction checks the sign bit, converting

op w1, w1, w10
cmp w1, #0
b.lt .LBB0_0

to

op w1, w1, w10
tbnz w1, #31, .LBB0_0
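
For illustration, a reduced IR sketch (my example, not a test from the commit)
that exercises this combine:

  define i32 @sign_branch(i32 %a, i32 %b) {
    %v = sub i32 %a, %b
    %lt = icmp slt i32 %v, 0          ; compare against zero is just a sign-bit test
    br i1 %lt, label %neg, label %pos
  neg:
    ret i32 1
  pos:
    ret i32 0
  }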

Differential Revision: http://reviews.llvm.org/D4440

llvm-svn: 214518
2014-08-01 14:48:56 +00:00
Juergen Ributzka
1686869897 [FastISel][AArch64] Fix the immediate versions of the {s|u}{add|sub}.with.overflow intrinsics.
ADDS and SUBS cannot encode negative immediates or immediates larger than 12 bits.
This fix checks if the immediate version can be used under these constraints and
if we can convert ADDS to SUBS or vice versa to support negative immediates.

Also update the test cases to test the immediate versions.
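
A sketch of the negative-immediate case (hypothetical example, names mine):

  declare { i32, i1 } @llvm.sadd.with.overflow.i32(i32, i32)

  define i32 @sadd_neg(i32 %a) {
    %t = call { i32, i1 } @llvm.sadd.with.overflow.i32(i32 %a, i32 -1)
    %v = extractvalue { i32, i1 } %t, 0   ; ADDS cannot encode #-1, so emit SUBS w0, w0, #1
    ret i32 %v
  }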

llvm-svn: 214470
2014-08-01 01:25:55 +00:00
Juergen Ributzka
ed5bbc5130 [FastISel][AArch64] Add basic bitcast support for conversion between float and int.
Fixes <rdar://problem/17867078>.

llvm-svn: 214389
2014-07-31 06:25:37 +00:00
Juergen Ributzka
35e07f4cd0 [FastISel][AArch64] Add sqrt intrinsic support.
Fixes <rdar://problem/17867067>.

llvm-svn: 214388
2014-07-31 06:25:33 +00:00
Juergen Ributzka
4e074494e8 [FastISel][AArch64] Update and enable patchpoint and stackmap intrinsic tests for FastISel.
This commit updates the existing SelectionDAG tests for the stackmap and patchpoint
intrinsics and enables FastISel testing. It also splits up the tests into separate
files, due to different codegen between SelectionDAG and FastISel.

llvm-svn: 214382
2014-07-31 04:10:43 +00:00
Juergen Ributzka
622f41919a [FastISel][AArch64] Add MachO large code model support for function calls.
Currently the large code model for MachO uses the GOT to make function calls.
Emit the required adrp and ldr instructions to load the address from the GOT.

Related to <rdar://problem/17733076>.

llvm-svn: 214381
2014-07-31 04:10:40 +00:00
Juergen Ributzka
d1fc9d2924 [FastISel][AArch64] Add select folding support for the XALU intrinsics.
This improves the code generation for the XALU intrinsics when the
condition is feeding a select instruction.

This also updates and enables the XALU unit tests for FastISel.

This fixes <rdar://problem/17831117>.

llvm-svn: 214350
2014-07-30 22:04:37 +00:00
Juergen Ributzka
9ceeaa8bbc [FastISel][AArch64] Add support for shift-immediate.
Currently the shift-immediate versions are not supported by tblgen and
hopefully this can be later removed, once the required support has been
added to tblgen.

llvm-svn: 214345
2014-07-30 22:04:22 +00:00
Jiangning Liu
86fc448354 Implement AArch64 TTI interface isAsCheapAsAMove.
llvm-svn: 214159
2014-07-29 02:09:26 +00:00
Tim Northover
5290b47db1 AArch64: fix conversion of 'J' inline asm constraints.
'J' represents a negative number suitable for an add/sub alias
instruction, but while preparing it to become an int64_t we were
mangling the sign extension. So "i32 -1" became 0xffffffffLL, for
example.

Should fix one half of PR20456.

llvm-svn: 214052
2014-07-27 07:10:29 +00:00
Akira Hatanaka
e9a7fadd46 [stack protector] Fix a potential security bug in stack protector where the
address of the stack guard was being spilled to the stack.

Previously the address of the stack guard would get spilled to the stack if it
was impossible to keep it in a register. This patch introduces a new target
independent node and pseudo instruction which gets expanded post-RA to a
sequence of instructions that load the stack guard value. Register allocator
can now just remat the value when it can't keep it in a register. 

<rdar://problem/12475629>

llvm-svn: 213967
2014-07-25 19:31:34 +00:00
Juergen Ributzka
d43a6ec447 [FastISel][AArch64] Add support for frameaddress intrinsic.
This commit implements the frameaddress intrinsic for the AArch64 architecture
in FastISel.

There were two test cases that pretty much tested the same thing, so I combined
them into a single test case.

Fixes <rdar://problem/17811834>

llvm-svn: 213959
2014-07-25 17:47:14 +00:00
Chandler Carruth
8182e043d9 [SDAG] Introduce a combined set to the DAG combiner which tracks nodes
which have successfully round-tripped through the combine phase, and use
this to ensure all operands to DAG nodes are visited by the combiner,
even if they are only added during the combine phase.

This is critical to have the combiner reach nodes that are *introduced*
during combining. Previously these would sometimes be visited and
sometimes not be visited based on whether they happened to end up on the
worklist or not. Now we always run them through the combiner.

This fixes quite a few bad codegen test cases lurking in the suite while
also being more principled. Among these, the TLS code generation is
particularly exciting for programs that have this in the critical path
like TSan-instrumented binaries (although I think they engineer to use
a different TLS that is faster anyways).

I've tried to check for compile-time regressions here by running llc
over a merged (but not LTO-ed) clang bitcode file and observed at most
a 3% slowdown in llc. Given that this is essentially a worst case (none
of opt or clang are running at this phase) I think this is tolerable.
The actual LTO case should be even less costly, and the cost in normal
compilation should be negligible.

With this combining logic, it is possible to re-legalize as we combine
which is necessary to implement PSHUFB formation on x86 as
a post-legalize DAG combine (my ultimate goal).

Differential Revision: http://reviews.llvm.org/D4638

llvm-svn: 213898
2014-07-24 22:15:28 +00:00
Kevin Qin
a272df5db2 [AArch64] Fix a bug generating incorrect instruction when building small vector.
This bug is introduced by r211144. The element type of the operand may be
smaller than the element type of the result, but the previous commit could
only handle the opposite case. This commit handles this scenario and
generates optimized code like ZIP1.

llvm-svn: 213830
2014-07-24 02:05:42 +00:00
Jiangning Liu
a94c806f8b [AArch64] Disable some optimization cases for type conversion from sint to fp, because those optimization cases are micro-architecture dependent and only make sense for Cyclone. A new predicate Cyclone is introduced in the .td file.
llvm-svn: 213827
2014-07-24 01:29:59 +00:00
Jim Grosbach
98597dd715 Use an explicit triple in testcase.
Make the test work better on non-darwin hosts. Hopefully.

llvm-svn: 213801
2014-07-23 20:46:32 +00:00
Jim Grosbach
f6143714d5 [X86,AArch64] Extend vcmp w/ unary op combine to work w/ more constants.
The transform to constant fold unary operations with an AND across a
vector comparison applies when the constant is not a splat of a scalar
as well.

llvm-svn: 213800
2014-07-23 20:41:43 +00:00
Jim Grosbach
5e39096957 X86: restrict combine to when type sizes are safe.
The folding of unary operations through a vector compare and mask operation
is only safe if the unary operation result is of the same size as its input.
For example, it's not safe for [su]itofp from v4i32 to v4f64.

llvm-svn: 213799
2014-07-23 20:41:38 +00:00
Juergen Ributzka
2a176cc855 [FastISel][AArch64] Fix return type in FastLowerCall.
I used the wrong method to obtain the return type inside FinishCall. This fix
simply uses the return type from FastLowerCall, which we already determined to
be a valid type.

Reduced test case from Chad. Thanks.

llvm-svn: 213788
2014-07-23 20:03:13 +00:00
Chad Rosier
2cb7fd5dae [AArch64] Lower sdiv x, pow2 using add + select + shift.
The target-independent DAGCombiner will generate:
  asr w1, X, #31           ; w1 = splat sign bit
  add X, X, w1, lsr #28    ; X = X + 0 or pow2-1
  asr w0, X, #4            ; w0 = X/pow2

However, the add + shifts is expensive, so generate:
  add w0, X, 15            ; w0 = X + pow2-1
  cmp X, wzr               ; X - 0
  csel X, w0, X, lt        ; X = (X < 0) ? X + pow2-1 : X
  asr w0, X, #4            ; w0 = X/pow2

llvm-svn: 213758
2014-07-23 14:57:52 +00:00
Tim Northover
c357579164 AArch64: remove "arm64_be" support in favour of "aarch64_be".
There really is no arm64_be: it was a useful fiction to test big-endian support
while both backends existed in parallel, but now the only platform that uses
the name (iOS) doesn't have a big-endian variant, let alone one called
"arm64_be".

llvm-svn: 213748
2014-07-23 12:58:11 +00:00
Juergen Ributzka
c2d9ee45f3 [FastIsel][AArch64] Add support for the FastLowerCall and FastLowerIntrinsicCall target-hooks.
This commit modifies the existing call lowering functions to be used as the
FastLowerCall and FastLowerIntrinsicCall target-hooks instead.

This enables patchpoint intrinsic lowering for AArch64.

This fixes <rdar://problem/17733076>

llvm-svn: 213704
2014-07-22 23:14:58 +00:00
Juergen Ributzka
49aab6445a [AArch64] Use CHECK-LABEL in ARM64 ABI unit tests.
llvm-svn: 213703
2014-07-22 23:14:54 +00:00
Tim Northover
86e216695b CodeGen: emit IR-level f16 conversion intrinsics as fptrunc/fpext
This makes the first stage DAG for @llvm.convert.to.fp16 an fptrunc,
and correspondingly @llvm.convert.from.fp16 an fpext. The legalisation
path is now uniform, regardless of the input IR:

  fptrunc -> FP_TO_FP16 (if f16 illegal) -> libcall
  fpext -> FP16_TO_FP (if f16 illegal) -> libcall

Each target should be able to select the version that best matches its
operations and not be required to duplicate patterns for both fptrunc
and FP_TO_FP16 (for example).

As a result we can remove some redundant AArch64 patterns.
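
A sketch of the input (using the overloaded .f32 intrinsic name; example mine):

  declare i16 @llvm.convert.to.fp16.f32(float)

  define i16 @round_to_half(float %f) {
    %h = call i16 @llvm.convert.to.fp16.f32(float %f)  ; first-stage DAG is now an fptrunc to f16
    ret i16 %h
  }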

llvm-svn: 213507
2014-07-21 09:13:56 +00:00
Tim Northover
be8b73df94 AArch64: implement efficient f16 bitcasts
Because i16 is illegal, there's no native DAG method to
represent a bitcast to or from an f16 type. This meant LLVM was
inserting a stack store/load pair which is really not ideal.
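
A minimal sketch of the affected pattern (example mine):

  define half @to_half(i16 %w) {
    %h = bitcast i16 %w to half   ; previously lowered as a stack store/load pair
    ret half %h
  }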

llvm-svn: 213378
2014-07-18 13:07:05 +00:00
Tim Northover
1b266803fb AArch64: support f16 extend/trunc operations.
llvm-svn: 213375
2014-07-18 13:01:31 +00:00
Tim Northover
e4c93c0798 CodeGen: soften f16 type by default instead of marking legal.
Actual support for softening f16 operations is still limited, and can be added
when it's needed.  But Soften is much closer to being a useful thing to try
than keeping it Legal when no registers can actually hold such values.

Longer term, we probably want something between Soften and Promote semantics
for most targets, it'll be more efficient to promote the 4 basic operations to
f32 than libcall them.

llvm-svn: 213372
2014-07-18 12:41:46 +00:00
Jim Grosbach
2f665f1cd7 AArch64: Constant fold converting vector setcc results to float.
Since the result of a SETCC for AArch64 is 0 or -1 in each lane, we can
move unary operations, in this case [su]int_to_fp through the mask
operation and constant fold the operation away. Generally speaking:
  UNARYOP(AND(VECTOR_CMP(x,y), constant))
      --> AND(VECTOR_CMP(x,y), constant2)
where constant2 is UNARYOP(constant).

This implements the transform where UNARYOP is [su]int_to_fp.

For example, consider the simple function:
define <4 x float> @foo(<4 x float> %val, <4 x float> %test) nounwind {
  %cmp = fcmp oeq <4 x float> %val, %test
  %ext = zext <4 x i1> %cmp to <4 x i32>
  %result = sitofp <4 x i32> %ext to <4 x float>
  ret <4 x float> %result
}

Before this change, the code is generated as:
  fcmeq.4s  v0, v0, v1
  movi.4s v1, #0x1        // Integer splat value.
  and.16b v0, v0, v1      // Mask lanes based on the comparison.
  scvtf.4s  v0, v0        // Convert each lane to f32.
  ret

After, the code is improved to:
  fcmeq.4s  v0, v0, v1
  fmov.4s v1, #1.00000000 // f32 splat value.
  and.16b v0, v0, v1      // Mask lanes based on the comparison.
  ret

The scvtf.4s has been constant folded away and the floating point 1.0f
vector lanes are materialized directly via fmov.4s.

Rather than do the folding manually in the target code, teach getNode()
in the generic SelectionDAG to handle folding constant operands of
vector [su]int_to_fp nodes. It is reasonable (as noted in a FIXME) to do
additional constant folding there as well, but I don't have test cases
for those operations, so leaving them for another time when it becomes
appropriate.

rdar://17693791

llvm-svn: 213341
2014-07-18 00:40:52 +00:00
Tim Northover
eae1f1c8cc CodeGen: extend f16 conversions to permit types > float.
This makes the two intrinsics @llvm.convert.from.f16 and
@llvm.convert.to.f16 accept types other than simple "float". This is
only strictly needed for the truncate operation, since otherwise
double rounding occurs and there's no way to represent the strict IEEE
conversion. However, for symmetry we allow larger types in the extend
too.

During legalization, we can expand an "fp16_to_double" operation into
two extends for convenience, but abort when the truncate isn't legal. A new
libcall is probably needed here.

Even after this commit, various target tweaks are needed to actually use the
extended intrinsics. I've put these into separate commits for clarity, so there
are no actual tests of f64 conversion here.

llvm-svn: 213248
2014-07-17 10:51:23 +00:00
Yi Kong
9b1652c5d0 Port memory barriers intrinsics to AArch64
Memory barrier __builtin_arm_[dmb, dsb, isb] intrinsics are required to
implement their corresponding ACLE and MSVC intrinsics.

This patch ports ARM dmb, dsb, isb intrinsic to AArch64.
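
A sketch of the resulting IR (example mine; 0xF is the "sy" full-system option):

  declare void @llvm.aarch64.dmb(i32)

  define void @full_barrier() {
    call void @llvm.aarch64.dmb(i32 15)   ; lowers to "dmb sy"
    ret void
  }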

Differential Revision: http://reviews.llvm.org/D4520

llvm-svn: 213247
2014-07-17 10:50:20 +00:00
Tim Northover
93f17f8aee AArch64: fall back to generic code for out of range extract/insert.
rdar://problem/17624784

llvm-svn: 213059
2014-07-15 10:00:26 +00:00
Tim Northover
4ac35c9d7b AArch64: remove unnecessary pseudo-instruction.
Sufficiently twisted use of TableGen lets us write patterns directly for f16
(as an i16 promoted to i32) -> f32 conversion.

llvm-svn: 212933
2014-07-14 11:16:02 +00:00
Saleem Abdulrasool
00c379d824 AArch64: add support for llvm.aarch64.hint intrinsic
This adds an llvm.aarch64.hint intrinsic to mirror llvm.arm.hint, in order to
support the various hint intrinsic functions in the ACLE.

Add an optional pattern field that permits the subclass to specify the pattern
that matches the selection.  The intrinsic pattern is set as mayLoad, mayStore,
so overload the value for the definition of the hint instruction.
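
A usage sketch (example mine):

  declare void @llvm.aarch64.hint(i32)

  define void @do_yield() {
    call void @llvm.aarch64.hint(i32 1)   ; #1 is "yield" in the hint numbering
    ret void
  }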

llvm-svn: 212883
2014-07-12 21:20:49 +00:00
Oliver Stannard
6090374181 ARM: Allow __fp16 as a function arg or return type for AArch64
ACLE 2.0 allows __fp16 to be used as a function argument or return
type. This enables this for AArch64.

llvm-svn: 212812
2014-07-11 13:33:46 +00:00
Tim Northover
3a26804901 AArch64: correctly fast-isel i8 & i16 multiplies
We were asking for a register for type i8 or i16 which caused an assert.

rdar://problem/17620015

llvm-svn: 212718
2014-07-10 14:18:46 +00:00
Hao Liu
cb23c5a35e [AArch64]Fix an assertion failure in DAG Combiner about concating 2 build_vector.
llvm-svn: 212677
2014-07-10 03:41:50 +00:00
Jim Grosbach
10098b8e10 AArch64: Better codegen for storing to __fp16.
Storing will generally be immediately preceded by rounding from an f32
or f64, so make sure to match those patterns directly to convert into the
FPR16 register class rather than going through the integer GPRs.

This also eliminates an extra step in the convert-from-f64 path
which was first converting to f32 and then to f16 from there.

rdar://17594379

llvm-svn: 212638
2014-07-09 18:55:52 +00:00
Jim Grosbach
4c963f72c5 AArch64: Better codegen for loading from __fp16.
Loading will generally extend to an f32 or an f64, so make sure
to match those patterns directly to load into the FPR16 register
class rather than going through the integer GPRs.

This also eliminates an extra step in the convert-to-f64 path
which was first converting to f32 and then to f64 from there.

rdar://17594379

llvm-svn: 212573
2014-07-08 23:28:48 +00:00
Louis Gerbarg
30a2dcd984 Allow AArch64FastISel to degrade graceully in the presence of an MVT::i128
Currently AArch64FastISel crashes if it tries to extend an integer into an
MVT::i128. This can happen by creating 128-bit integers like so:

  typedef unsigned int uint128_t __attribute__((mode(TI)));
  typedef int sint128_t __attribute__((mode(TI)));

This patch makes EmitIntExt check for their presence and then falls back to
SelectionDAG.

Tests included.

rdar://17516686

llvm-svn: 212492
2014-07-07 21:37:51 +00:00
Tim Northover
96e2bce418 CodeGen: it turns out that NAND is not the same thing as BIC. At all.
We've been performing the wrong operation on ARM for "atomicrmw nand" for
years, since "a NAND b" is "~(a & b)" rather than ARM's very tempting "a & ~b".
This bled over into the generic expansion pass.

So I assume no-one has ever actually tried to do an atomic nand in the real
world. Oh well.
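
A sketch of the intended semantics (example mine):

  define i32 @fetch_nand(i32* %p, i32 %v) {
    ; the value stored back is ~(%old & %v), not the BIC-like %old & ~%v
    %old = atomicrmw nand i32* %p, i32 %v seq_cst
    ret i32 %old
  }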

llvm-svn: 212443
2014-07-07 09:06:35 +00:00
Kevin Qin
c72c5a1639 [AArch64] Normalize all constants to build a vector.
The value of constant operands will be truncated to fit element width.

llvm-svn: 212428
2014-07-07 02:45:40 +00:00
Chandler Carruth
636964a893 [aarch64] Add a test that should have been in r212242 but I forgot to
add it. Sorry about that.

llvm-svn: 212251
2014-07-03 02:12:26 +00:00
Chandler Carruth
fc0fe5064b [codegen,aarch64] Add a target hook to the code generator to control
vector type legalization strategies in a more fine grained manner, and
change the legalization of several v1iN types and v1f32 to be widening
rather than scalarization on AArch64.

This fixes an assertion failure caused by scalarizing nodes like "v1i32
trunc v1i64". As v1i64 is legal it will fail to scalarize v1i32.

This also provides a foundation for other targets to have more granular
control over how vector types are legalized.

Patch by Hao Liu, reviewed by Tim Northover. I'm committing it to allow
some work to start taking place on top of this patch as it adds some
really important hooks to the backend that I'd like to immediately start
using. =]

http://reviews.llvm.org/D4322

llvm-svn: 212242
2014-07-03 00:23:43 +00:00
Duncan P. N. Exon Smith
3ca30c6b7f AArch64: Re-enable AArch64AddressTypePromotion
This reverts commits r212189 and r212190.

While this pass was accidentally disabled (until r212073), r205437
slipped in a use of `auto` that should have been `auto&`.

This fixes PR20188.

llvm-svn: 212201
2014-07-02 18:17:40 +00:00
Duncan P. N. Exon Smith
971c17962b XFAIL the test to go with r212189
llvm-svn: 212190
2014-07-02 17:07:03 +00:00
Chad Rosier
f00a3b6176 Revert "Revert "MachineScheduler: better book-keeping for asserts.""
This reverts commit r212109, which reverted r212088.

However, disable the assert as it's not necessary for correctness.  There are
several corner cases that the assert needed to handle better for in-order
scheduling, but none of them are incorrect scheduler behavior. The assert is
mainly there to collect good unit tests like this and ensure that the
target-independent scheduler is working as expected with the various machine
models.

llvm-svn: 212187
2014-07-02 16:46:08 +00:00
Chad Rosier
92846e46b5 Revert "MachineScheduler: better book-keeping for asserts."
This reverts commit r212088, which is causing a number of spec
failures.  Will provide reduced test cases shortly.
PR20057

llvm-svn: 212109
2014-07-01 17:23:11 +00:00
Andrew Trick
449b2dbbfb MachineScheduler: better book-keeping for asserts.
Fixes another test case under PR20057.

llvm-svn: 212088
2014-07-01 03:23:13 +00:00
Duncan P. N. Exon Smith
8dd9ef0369 AArch64: Actually do address type promotion
AArch64AddressTypePromotion was doing nothing because it was using the
old semantics of `Use` and `uses()`, when it really wanted to get at the
`users()`.

llvm-svn: 212073
2014-06-30 23:42:14 +00:00
Chad Rosier
e739f85e32 [AArch64] Unsized types don't specify an alignment.
PR20109

llvm-svn: 212045
2014-06-30 15:03:00 +00:00
Chad Rosier
2954f936f3 [AArch64] Convert mul x, -(pow2 +/- 1) to shift + add/sub.
The combine for mul x, pow2 +/- 1 is unchanged. Test cases for
both combines as well as mul x, pow2 have been added as well.
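
As a sketch of the new negative case (example mine), x * -7, i.e. -(pow2 - 1):

  lsl w8, w0, #3     ; w8 = x << 3 = 8*x
  sub w0, w0, w8     ; w0 = x - 8*x = -7*x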

llvm-svn: 212044
2014-06-30 14:51:14 +00:00
Chad Rosier
98a58b0e56 [AArch64] Fix memset ICE when memset value is f128.
llvm-svn: 211960
2014-06-27 21:05:09 +00:00
Andrew Trick
a8cb3163c0 MachineScheduler: add some book-keeping to fix an assert.
Fix for Bug 20057 - Assertion failed in llvm::SUnit* llvm::SchedBoundary::pickOnlyChoice(): Assertion `i <= (HazardRec->getMaxLookAhead() + MaxObservedStall) && "permanent hazard"'

Thanks to Chad for the test case.

llvm-svn: 211865
2014-06-27 04:57:05 +00:00
Weiming Zhao
23e9a680c2 Resubmit commit r211533
"Fix PR20056: Implement pseudo LDR <reg>, =<literal/label> for AArch64"
Missing files are added in this commit.

llvm-svn: 211605
2014-06-24 16:21:38 +00:00
Kevin Qin
22b89e0ae0 [AArch64] Fix a build_vector pattern match failure
caused by a defect in isBuildVectorAllZeros().

llvm-svn: 211567
2014-06-24 05:37:27 +00:00
Arnold Schwaighofer
69cfc9ae09 Add a triple so that the right syntax is chosen on Mac OS X systems
llvm-svn: 211188
2014-06-18 17:20:49 +00:00
Kevin Qin
8c13822db4 [AArch64] Fix a pattern match failure caused by creating improper CONCAT_VECTOR.
ReconstructShuffle() may wrongly create a CONCAT_VECTOR trying to
concatenate two v2i32 values into a v4i16. This commit fixes the issue and
tries to generate UZP1 instead of lots of MOV and INS.
Patch initially by Kevin Qin, refactored by Tim Northover.

llvm-svn: 211144
2014-06-18 05:54:42 +00:00
Tim Northover
300d65b2ea AArch64: estimate inline asm length during branch relaxation
To make sure branches are in range, we need to do a better job of estimating
the length of an inline assembly block than "it's probably 1 instruction, who'd
write asm with more than that?".

Fortunately there's already a (highly suspect, see how many ways you can think
of to break it!) callback for this purpose, which is used by the other targets.

rdar://problem/17277590

llvm-svn: 211095
2014-06-17 11:31:42 +00:00
James Molloy
d8293dd333 [AArch64] Fix a fencepost error in lowering for llvm.aarch64.neon.uqshl.
Patch by Jiangning Liu!

llvm-svn: 211014
2014-06-16 10:39:21 +00:00
Tim Northover
9eac1de1e4 AArch64: improve handling & modelling of FP_TO_XINT nodes.
There's probably no actual change in behaviour here, just updating
the LowerFP_TO_INT function to be more similar to the reverse
implementation and updating costs to current CodeGen.

llvm-svn: 210985
2014-06-15 09:27:15 +00:00
Tim Northover
0f6e617e90 AArch64: improve vector [su]itofp handling.
This somehow got missed in the AArch64 merge, so should fix a
performance regression since 3.4.

llvm-svn: 210984
2014-06-15 09:27:06 +00:00
Jiangning Liu
c7a7a14ca1 Move GlobalMerge from Transform to CodeGen.
This patch moves the GlobalMerge pass from Transform/Scalar
to CodeGen, because GlobalMerge depends on TargetMachine.
At the same time, the macro INITIALIZE_TM_PASS is moved
to CodeGen/Passes.h. With this fix we avoid making
libScalarOpts depend on libCodeGen.

llvm-svn: 210951
2014-06-13 22:57:59 +00:00
Tim Northover
b9ec29d7c5 IR: add "cmpxchg weak" variant to support permitted failure.
This commit adds a weak variant of the cmpxchg operation, as described
in C++11. A cmpxchg instruction with this modifier is permitted to
fail to store, even if the comparison indicated it should.

As a result, cmpxchg instructions must return a flag indicating
success in addition to their original iN value loaded. Thus, for
uniformity *all* cmpxchg instructions now return "{ iN, i1 }". The
second flag is 1 when the store succeeded.

At the DAG level, a new ATOMIC_CMP_SWAP_WITH_SUCCESS node has been
added as the natural representation for the new cmpxchg instructions.
It is a strong cmpxchg.

By default this gets Expanded to the existing ATOMIC_CMP_SWAP during
Legalization, so existing backends should see no change in behaviour.
If they wish to deal with the enhanced node instead, they can call
setOperationAction on it. Beware: as a node with 2 results, it cannot
be selected from TableGen.

Currently, no use is made of the extra information provided in this
patch. Test updates are almost entirely adapting the input IR to the
new scheme.
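
A sketch of the new form (example mine):

  %res = cmpxchg weak i32* %addr, i32 %expected, i32 %new seq_cst seq_cst
  %loaded  = extractvalue { i32, i1 } %res, 0   ; the iN value read from memory
  %success = extractvalue { i32, i1 } %res, 1   ; 1 iff the store happened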

Summary for out of tree users:
------------------------------

+ Legacy Bitcode files are upgraded during read.
+ Legacy assembly IR files will be invalid.
+ Front-ends must adapt to different type for "cmpxchg".
+ Backends should be unaffected by default.

llvm-svn: 210903
2014-06-13 14:24:07 +00:00
Chad Rosier
07ce4c0d5f [AArch64] Basic Sched Model for Cortex-A57.
Patch by Dave Estes<cestes@codeaurora.org>
Differential Revision: http://reviews.llvm.org/D4008

llvm-svn: 210705
2014-06-11 21:06:56 +00:00
Jiangning Liu
4291d4247a Global merge for global symbols.
This commit improves the global merge pass and adds support for merging
global symbols. Global symbol merging is not enabled by default. For AArch64,
some more back-end fixes are needed for it to really benefit ADRP CSE.
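
A conceptual sketch of the transformation (symbol names illustrative):

  @x = internal global i32 0
  @y = internal global i32 0
   ->
  @_MergedGlobals = internal global { i32, i32 } zeroinitializer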

llvm-svn: 210640
2014-06-11 06:44:53 +00:00
Chad Rosier
0f6d185fcf [AArch64] Emit .ident compiler version attribute.
Patch by Ana Pazos<apazos@codeaurora.org>!

llvm-svn: 210535
2014-06-10 14:32:08 +00:00
Tim Northover
8d5e97704b AArch64: disallow x30 & x29 as the destination for indirect tail calls
As Ana Pazos pointed out, these have to be restored to their incoming values
before a function returns; i.e. before the tail call. So they can't be used
correctly as the destination register.

llvm-svn: 210525
2014-06-10 10:50:24 +00:00
Tim Northover
bfac8dd607 AArch64: teach FastISel how to handle offset FrameIndices
Previously we were abandoning the attempt, leading to some combination of
extra work (when selection of a load/store fails completely) and inferior code
(when this leads to a real memcpy call instead of inlining).

rdar://problem/17187463

llvm-svn: 210520
2014-06-10 09:52:44 +00:00
Tim Northover
666d07f003 AArch64: make FastISel memcpy emission more robust.
We were hitting an assert if FastISel couldn't create the load or store we
requested. Currently this happens for large frame-local addresses, though
CodeGen could be improved there.

rdar://problem/17187463

llvm-svn: 210519
2014-06-10 09:52:40 +00:00
Alp Toker
03b6e12fae Reduce verbiage of lit.local.cfg files
We can just split targets_to_build in one place and make it immutable.

llvm-svn: 210496
2014-06-09 22:42:55 +00:00
Chad Rosier
22a15b47d4 [AArch64] Fix the ordering of the accumulate operand in SchedRW list.
Patch by Dave Estes <cestes@codeaurora.org>
http://reviews.llvm.org/D4037

llvm-svn: 210446
2014-06-09 01:54:00 +00:00
Chad Rosier
010594577d [AArch64] When combining constant mul of power of 2 plus/minus 1, prefer shift
plus add.  The shift can be folded into the add.  This only effects codegen
when the constant is 3.

llvm-svn: 210445
2014-06-09 01:25:51 +00:00
Tilmann Scheller
b619db1a71 [AArch64] Add regression tests for the load/store optimizer which cover post-index update folding with sub rather than add.
The tests check that the following transform happens:

  (ldr|str) X, [x20]
   ...
  sub x20, x20, #16
   ->
  (ldr|str) X, [x20], #-16

with X being either w0, x0, s0, d0 or q0.

llvm-svn: 210113
2014-06-03 16:03:00 +00:00
Tim Northover
d56609ce6c AArch64: mark small types (i1, i8, i16) as promoted
This means LowerFormalArguments returns a lowered
SDValue with the correct type (as expected by SelectionDAGBuilder).
Without this, an assertion under a DEBUG macro triggers when those
types are passed on the stack.

llvm-svn: 210102
2014-06-03 13:54:53 +00:00
Jiangning Liu
531302fb19 [AArch64] Correctly deal with VPR stack parameter passing.
llvm-svn: 210067
2014-06-03 03:25:09 +00:00
Tilmann Scheller
7206d94ee3 [AArch64] Add some more regression tests for store pre-index update folding in the load/store optimizer.
Add tests for the following transform:

 add x8, x8, #16
  ...
 str X, [x8]
  ->
 str X, [x8, #16]!

with X being either w0, x0, s0, d0 or q0.

llvm-svn: 210021
2014-06-02 12:33:33 +00:00
Tilmann Scheller
d9a7937b47 [AArch64] Add some more regression tests for load pre-index update folding in the load/store optimizer.
Add tests for the following transform:

 add x8, x8, #16
  ...
 ldr X, [x8]
  ->
 ldr X, [x8, #16]!

with X being either w0, x0, s0, d0 or q0.

llvm-svn: 210018
2014-06-02 11:57:09 +00:00
Tim Northover
3bb84c9bcc ARM & AArch64: make use of common cmpxchg idioms after expansion
The C and C++ semantics for compare_exchange require it to return a bool
indicating success. This gets mapped to LLVM IR which follows each cmpxchg with
an icmp of the value loaded against the desired value.

When lowered to ldxr/stxr loops, this extra comparison is redundant: its
results are implicit in the control-flow of the function.

This commit makes two changes: it replaces that icmp with appropriate PHI
nodes, and then makes sure earlyCSE is called after expansion to actually make
use of the opportunities revealed.

I've also added -{arm,aarch64}-enable-atomic-tidy options, so that
existing fragile tests aren't perturbed too much by the change. Many
of them either rely on undef/unreachable too pervasively to be
restored to something well-defined (particularly while making sure
they test the same obscure assert from many years ago), or depend on a
particular CFG shape, which is disrupted by SimplifyCFG.

rdar://problem/16227836

llvm-svn: 209883
2014-05-30 10:09:59 +00:00
Tim Northover
6f54bcc408 AArch64 & ARM: remove undefined behaviour from some tests.
llvm-svn: 209880
2014-05-30 08:59:55 +00:00
Hao Liu
f3d89d6049 Naming test cases with dates is a legacy convention no longer used. Rename several test cases.
llvm-svn: 209877
2014-05-30 05:58:19 +00:00
Hao Liu
60bfd53df1 Rename a test case to contain correct date info.
llvm-svn: 209799
2014-05-29 09:21:23 +00:00
Hao Liu
0e99724daa Fix an assertion failure caused by v1i64 in DAGCombiner Shrink.
llvm-svn: 209798
2014-05-29 09:19:07 +00:00
Hal Finkel
530ec71f07 Revert "[DAGCombiner] Split up an indexed load if only the base pointer value is live"
This reverts r208640 (I've just XFAILed the test) because it broke ppc64/Linux
self-hosting. Because nearly every regression test triggers a segfault, I hope
this will be easy to fix.

llvm-svn: 209747
2014-05-28 15:33:19 +00:00
Tilmann Scheller
99ec622e7d [AArch64] Add store post-index update folding regression tests for the load/store optimizer.
Add regression tests for the following transformation:

  str X, [x20]
   ...
  add x20, x20, #32
   ->
  str X, [x20], #32

with X being either w0, x0, s0, d0 or q0.

llvm-svn: 209715
2014-05-28 06:43:00 +00:00
Tilmann Scheller
371ff7e9ac [AArch64] Add load post-index update folding regression tests for the load/store optimizer.
Add regression tests for the following transformation:

 ldr X, [x20]
  ...
 add x20, x20, #32
  ->
 ldr X, [x20], #32

 with X being either w0, x0, s0, d0 or q0.

llvm-svn: 209711
2014-05-28 05:44:14 +00:00
Tim Northover
0dd1dfabfd AArch64: add test for NZCV cross-copy save.
llvm-svn: 209665
2014-05-27 16:50:09 +00:00
Tim Northover
13435480e0 AArch64: add AArch64-specific test for 'c' and 'n'.
llvm-svn: 209664
2014-05-27 16:50:03 +00:00
Tim Northover
10cffb6eef AArch64: force i1 to be zero-extended at an ABI boundary.
This commit is debatable. There are two possible approaches, neither
of which is really satisfactory:

1. Use "@foo(i1 zeroext)" to mean an extension to 32-bits on Darwin,
   and 8 bits otherwise.
2. Redefine "@foo(i1)" to mean that the i1 is extended by the caller
   to 8 bits. This goes against the spirit of "zeroext" I think, but
   it's a bit of a vague construct anyway (by definition you're going
   to extend to the amount required by the ABI, that's why it's the
   ABI!).

This implements option 2. The DAG machinery really isn't setup for the
first (there's a fairly strong assumption that "zeroext" goes to at
least the smallest register size), and even if it were, the resulting
DAG looks like it would be inferior in many cases.

Theoretically we could add AssertZext nodes in the consumers of
ABI-passed values too now, but this actually seems to make the code
worse in practice by making truncation proceed in two steps. The code
produced is equally valid if we continue to assume only the low bit is
defined.

Should fix PR19850

llvm-svn: 209637
2014-05-26 17:22:07 +00:00
Tim Northover
fc1b1e8952 AArch64: simplify calling conventions slightly.
We can eliminate the custom C++ code in favour of some TableGen to
check the same things. Functionality should be identical, except for a
buffer overrun that was present in the C++ code and meant webkit
failed if any small argument needed to be passed on the stack.

llvm-svn: 209636
2014-05-26 17:21:53 +00:00
Tilmann Scheller
0d5b6aa201 [AArch64] Add store + add folding regression tests for the load/store optimization pass.
Add tests for the following transform:

 str X, [x0, #32]
  ...
 add x0, x0, #32
  ->
 str X, [x0, #32]!

with X being either w1, x1, s0, d0 or q0.

llvm-svn: 209627
2014-05-26 13:36:47 +00:00
Tilmann Scheller
bf3f06fd14 [AArch64] Add more regression tests for the load/store optimization pass.
Cover the following cases:

  ldr X, [x0, #32]
   ...
  add x0, x0, #32
   ->
  ldr X, [x0, #32]!

with X being either w1, x1, s0, d0 or q0.

llvm-svn: 209624
2014-05-26 12:15:51 +00:00
Tilmann Scheller
4b78dd1812 Remove accidentally committed whitespace.
llvm-svn: 209619
2014-05-26 09:40:40 +00:00
Tilmann Scheller
f505625e35 [AArch64] Add a regression test for the load store optimizer.
We have a couple of regression tests for load/store pairing, but (to my knowledge) there are no regression tests for the load/store + add/sub folding.

As a first step towards increased test coverage of this area, this commit adds a test for one instance of a load + add to pre-indexed load transformation.

llvm-svn: 209618
2014-05-26 09:37:19 +00:00
Tim Northover
ca0f4dc4f0 AArch64/ARM64: move ARM64 into AArch64's place
This commit starts with a "git mv ARM64 AArch64" and continues out
from there, renaming the C++ classes, intrinsics, and other
target-local objects for consistency.

"ARM64" test directories are also moved, and tests that began their
life in ARM64 use an arm64 triple, those from AArch64 use an aarch64
triple. Both should be equivalent though.

This finishes the AArch64 merge, and everyone should feel free to
continue committing as normal now.

llvm-svn: 209577
2014-05-24 12:50:23 +00:00
Tim Northover
d7f173214f AArch64/ARM64: remove AArch64 from tree prior to renaming ARM64.
I'm doing this in two phases for a better "git blame" record. This
commit removes the previous AArch64 backend and redirects all
functionality to ARM64. It also deduplicates test-lines and removes
orphaned AArch64 tests.

The next step will be "git mv ARM64 AArch64" and rewire most of the
tests.

Hopefully LLVM is still functional, though it would be even better if
no-one ever had to care because the rename happens straight
afterwards.

llvm-svn: 209576
2014-05-24 12:42:26 +00:00
Tim Northover
c8bed61f8e AArch64/ARM64: enable more AArch64 tests.
llvm-svn: 209408
2014-05-22 07:40:55 +00:00
Rafael Espindola
6d40091c3c Revert "Implement global merge optimization for global variables."
This reverts commit r208934.

The patch depends on aliases to GEPs with non zero offsets. That is not
supported and fairly broken.

The good news is that GlobalAlias is being redesigned and will have support
for offsets, so this patch should be a nice match for it.

llvm-svn: 208978
2014-05-16 13:02:18 +00:00
Tim Northover
31e1362588 TableGen: fix operand counting for aliases
TableGen has a fairly dubious heuristic to decide whether an alias should be
printed: does the alias have fewer operands than the real instruction. This is
bad enough (particularly with no way to override it), but it should at least be
calculated consistently for both strings.

This patch implements that logic: first get the *correct* string for the
variant, in the same way as the Matcher, without guessing; then count the
number of whitespace chars.

There are basically 4 changes this brings about after the previous
commits; all of these appear to be good, so I have changed the tests:

+ ARM64: we print "neg X, Y" instead of "sub X, xzr, Y".
+ ARM64: we skip implicit "uxtx" and "uxtw" modifiers.
+ Sparc: we print "mov A, B" instead of "or %g0, A, B".
+ Sparc: we print "fcmpX A, B" instead of "fcmpX %fcc0, A, B"

llvm-svn: 208969
2014-05-16 09:42:04 +00:00
Jiangning Liu
5366cb42f6 Implement global merge optimization for global variables.
This commit implements two command line switches -global-merge-on-external
and -global-merge-aligned, and both of them are false by default, so this
optimization is disabled by default for all targets.

For ARM64, some back-end behaviors need to be tuned to get this optimization
further enabled.

llvm-svn: 208934
2014-05-15 23:45:42 +00:00
Tim Northover
ed117bb644 ARM64: print correct aliases for NEON mov & mvn instructions
In all cases, if a "mov" alias exists, it is the canonical form of the
instruction. Now that TableGen can support aliases containing syntax variants,
we can enable them and improve the quality of the asm output.

llvm-svn: 208874
2014-05-15 12:11:02 +00:00
Tim Northover
4ba95d4483 TableGen/ARM64: print aliases even if they have syntax variants.
To get at least one use of the change (and some actual tests) in with its
commit, I've enabled the AArch64 & ARM64 NEON mov aliases.

llvm-svn: 208867
2014-05-15 11:16:32 +00:00
Jiangning Liu
ecd097d587 [ARM64] Support aggressive fastcc/tailcallopt breaking ABI by popping out argument stack from callee.
llvm-svn: 208837
2014-05-15 01:33:17 +00:00
Tim Northover
3c2cc7a397 TableGen: use PrintMethods to print more aliases
llvm-svn: 208607
2014-05-12 18:04:06 +00:00
Tim Northover
aed9e378c3 AArch64/ARM64: optimise vector selects & enable test
When performing a scalar comparison that feeds into a vector select,
it's actually better to do the comparison on the vector side: the
scalar route would be "CMP -> CSEL -> DUP", the vector is "CM -> DUP"
since the vector comparisons are all mask based.

llvm-svn: 208210
2014-05-07 14:10:27 +00:00
James Molloy
c6eeb59eb7 [ARM64-BE] Make big endian (scalar) argument passing work correctly.
This completes the port of r204814 (cpirker "AArch64_BE function argument
passing for ARM ABI") from AArch64 to ARM64, and fixes a bunch of issues
found during later development along the way. The biggest of these was
that the alignment fixup logic wasn't replicated into all the places it
should have been.

llvm-svn: 208192
2014-05-07 11:28:36 +00:00
Tim Northover
158a8f8793 AArch64/ARM64: run test on ARM64 too.
llvm-svn: 208188
2014-05-07 10:47:04 +00:00
Tim Northover
2d1729cbfc AArch64/ARM64: put annotation in test
It makes finding already covered tests much easier with "grep -L
arm64".

llvm-svn: 208187
2014-05-07 10:47:00 +00:00
Renato Golin
8a9a382ab2 Implememting named register intrinsics
This patch implements the infrastructure to use named register constructs in
programs that need access to specific registers (bare metal, kernels, etc).

So far, only the stack pointer is supported as a technology preview, but as it
is, the intrinsic can already support all non-allocatable registers from any
architecture.
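
A usage sketch in current IR syntax (example mine):

  declare i64 @llvm.read_register.i64(metadata)

  define i64 @current_sp() nounwind {
    %sp = call i64 @llvm.read_register.i64(metadata !0)
    ret i64 %sp
  }

  !0 = !{!"sp"}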

llvm-svn: 208104
2014-05-06 16:51:25 +00:00
Rafael Espindola
5988eacc23 Special case aliases in GlobalValue::getAlignment.
An alias has the address of what it points to, so it also has the same
alignment.

This allows a few optimizations to see past aliases for free.

llvm-svn: 208103
2014-05-06 16:48:58 +00:00
Tim Northover
56141b7826 AArch64/ARM64: print BFM instructions as BFI or BFXIL
The canonical form of the BFM instruction is always one of the more explicit
extract or insert operations, which makes reading output much easier.

llvm-svn: 207752
2014-05-01 12:29:38 +00:00
Tim Northover
dd37c75601 AArch64/ARM64: use HS instead of CS & LO instead of CC.
On instructions using the NZCV register, a couple of conditions have dual
representations: HS/CS and LO/CC (meaning unsigned-higher-or-same/carry-set and
unsigned-lower/carry-clear). The first of these is more descriptive in most
circumstances, so we should print it.

llvm-svn: 207644
2014-04-30 13:14:03 +00:00
Tim Northover
beb9daa30f ARM64: use hex immediates for movz/movk instructions
Since these are mostly used in "lsl #16", "lsl #32", "lsl #48" combinations to
piece together an immediate in 16-bit chunks, hex is probably the most
appropriate format.

llvm-svn: 207635
2014-04-30 11:19:40 +00:00
Tim Northover
357012cebb ARM64: print canonical syntax for add/sub (imm) instructions.
Since these instructions only accept a 12-bit immediate, possibly shifted left
by 12, the canonical syntax used by the architecture reference manual is "#N {,
lsl #12 }". We should accept an immediate that has already been shifted, (e.g.

Also, print a comment giving the full addend since it can be helpful.

llvm-svn: 207633
2014-04-30 11:19:15 +00:00
James Molloy
b41c043c3c [ARM64] Ensure arm64_be is dealt with when emitting debug info.
This is a partial port of r204816 (cpirker "Elf support for MC-JIT
runtime dynamic linker") from AArch64 to ARM64.

llvm-svn: 207625
2014-04-30 10:15:35 +00:00
Benjamin Kramer
3e3e43e656 AArch64: Mark vector long multiplication as expand.
There are no patterns for this. This was already fixed for ARM64 but I forgot
to apply it to AArch64 too.

llvm-svn: 207515
2014-04-29 09:37:54 +00:00
Bradley Smith
51d6f272a3 [ARM64] Print preferred aliases for SBFM/UBFM in InstPrinter
llvm-svn: 207219
2014-04-25 10:25:29 +00:00
Kevin Qin
6a9c40e76d [ARM64] Add RUN lines for "-target arm64 -mattr=-fp-armv8" on AArch64 no-fp test.
This patch supplements the FP predicate implementation, enabling the AArch64 backend's
no-fp tests on the arm64 target for verification. In the process, one bug was exposed and
is fixed by this patch.

llvm-svn: 207215
2014-04-25 09:44:20 +00:00
Tim Northover
44ef313763 AArch64: print NEON lists with a space.
This matches ARM64 behaviour, which I think is clearer. It also puts all the
churn from that difference into one easily ignored commit.

llvm-svn: 207116
2014-04-24 14:06:20 +00:00
Tim Northover
a47a5090e8 AArch64/ARM64: port bitfield test to ARM64.
llvm-svn: 207103
2014-04-24 12:11:56 +00:00
Tim Northover
586e442827 AArch64/ARM64: implement BFI optimisation
ARM64 was not producing pure BFI instructions for bitfield insertion
operations, unlike AArch64. The approach had to be a little different (in
ISelDAGToDAG rather than ISelLowering), and the outcomes aren't identical but
hopefully this gives it similar power.

This should address PR19424.

llvm-svn: 207102
2014-04-24 12:11:53 +00:00
Tim Northover
5b13e67220 AArch64/ARM64: port more tests
llvm-svn: 207101
2014-04-24 12:11:46 +00:00
Tim Northover
3a2827019a AArch64/ARM64: more testing from AArch64 to ARM64
llvm-svn: 206889
2014-04-22 12:45:47 +00:00
Tim Northover
36d7628659 AArch64/ARM64: make use of ANDS and BICS instructions for comparisons.
llvm-svn: 206888
2014-04-22 12:45:42 +00:00
Tim Northover
f8a9b17bf4 AArch64/ARM64: add extra testing from AArch64 to ARM64
llvm-svn: 206887
2014-04-22 12:45:32 +00:00
Tim Northover
e641292574 AArch64/ARM64: enable various AArch64 tests on ARM64.
llvm-svn: 206877
2014-04-22 10:10:26 +00:00
Tim Northover
99097ae341 AArch64/ARM64: add patterns for scalar_to_vector/extract pairs
llvm-svn: 206876
2014-04-22 10:10:18 +00:00
Tim Northover
8de0d64b13 AArch64/ARM64: mark fmul intrinsic as commutative.
This gives DAG patterns matching indexed patterns where either side is an
indexed vector.

llvm-svn: 206875
2014-04-22 10:10:14 +00:00
Jiangning Liu
fbd9fe9a73 [AArch64] Enable global merge pass.
llvm-svn: 206861
2014-04-22 03:33:26 +00:00
Tim Northover
51c071ce73 AArch64/ARM64: add more NEON tests.
Mostly no testing this time, since they were just wrangling
target-specific intrinsics.

llvm-svn: 206613
2014-04-18 14:54:53 +00:00
Tim Northover
f3e0ceb127 ARM64: add extra NEG pattern.
llvm-svn: 206609
2014-04-18 14:54:35 +00:00
Tim Northover
c327e9b2cf AArch64/ARM64: port more AArch64 tests to ARM64.
llvm-svn: 206592
2014-04-18 13:16:55 +00:00
Tim Northover
de9624364e AArch64/ARM64: improve spotting of EXT instructions from VECTOR_SHUFFLE.
We couldn't cope if the first mask element was UNDEF before, which
isn't ideal.

llvm-svn: 206588
2014-04-18 12:50:58 +00:00
Tim Northover
61643896ea AArch64/ARM64: port atomics test to ARM64.
Covers quite a few extra instructions (like any of the max/min ones
which were broken until recently on ARM64).

llvm-svn: 206575
2014-04-18 09:31:31 +00:00
Tim Northover
23ca911ed1 AArch64/ARM64: spot a greater variety of concat_vector operations.
Code mostly copied from AArch64, just tidied up a trifle and plumbed
into the ARM64 way of doing things.

This also enables the AArch64 tests which inspired the previous
untested commits.

llvm-svn: 206574
2014-04-18 09:31:27 +00:00
Tim Northover
21403d6f09 AArch64/ARM64: emit all vector FP comparisons as such.
ARM64 was scalarizing some vector comparisons which don't quite map to
AArch64's compare and mask instructions. AArch64's approach of sacrificing a
little efficiency to emulate them with the limited set available was better, so
I ported it across.

More "inspired by" than copy/paste since the backend's internal expectations
were a bit different, but the tests were invaluable.

llvm-svn: 206570
2014-04-18 09:31:07 +00:00
Tim Northover
1828862541 AArch64/ARM64: port BSL logic from AArch64 & enable test.
I enhanced it a little in the process. The decision shouldn't really be based
on whether a BUILD_VECTOR is a splat: any set of constants will do the job
provided they're related in the correct way.

Also, the BUILD_VECTOR could be any operand of the incoming AND nodes, so it's
best to check for all 4 possibilities rather than assuming it'll be the RHS.

llvm-svn: 206569
2014-04-18 09:31:01 +00:00
Tim Northover
2b48b866aa AArch64/ARM64: copy byval implementation from AArch64.
It's not actually used to handle C or C++ ABI rules on ARM64, but could well be
emitted by other language front-ends, so it's as well to have a sensible
implementation.

llvm-svn: 206568
2014-04-18 09:30:52 +00:00
Jiangning Liu
6aa9a901c7 This is one of the optimizations ported from ARM64 to AArch64 to address the performance gap between these two back ends. The test case newly added for AArch64 already exists in ARM64.
Patched by Z.Zheng

llvm-svn: 206559
2014-04-18 05:58:09 +00:00
Jiangning Liu
fcc0f2379a This commit enables unaligned memory accesses of vector types on the AArch64 back end. This should boost vectorized code performance.
Patched by Z. Zheng

llvm-svn: 206557
2014-04-18 03:58:38 +00:00
Tim Northover
e2f8ef0ee7 AArch64/ARM64: port some NEON tests to ARM64
These ones used completely different sets of intrinsics, so the only way to do
it is to create a separate ARM64 copy and change them all.

Other than that, CodeGen was straightforward, no deficiencies detected here.

llvm-svn: 206392
2014-04-16 15:28:02 +00:00
Tim Northover
f6c8e8c84a AArch64/ARM64: add another set of tests from AArch64
Another batch with no code changes.

llvm-svn: 206381
2014-04-16 11:53:07 +00:00
Tim Northover
dde1d393b4 AArch64/ARM64: port across stub handling for ELF C++ exceptions.
The most important part here is that we should actually emit the stubs we refer
to in the exception table, but as a side issue this uses more sensible & GCC
compatible representations for some of the bits of information.

llvm-svn: 206380
2014-04-16 11:52:55 +00:00
Tim Northover
9be7d39fd7 ARM64: use 32-bit moves for constants where possible.
If we know that a particular 64-bit constant has all high bits zero, then we
can rely on the fact that 32-bit ARM64 instructions automatically zero out the
high bits of an x-register. This gives the expansion logic fewer constraints to
satisfy and so sometimes allows it to pick better sequences.
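
For instance, a sketch based on the movw-consts.ll case mentioned below:

  movn w0, #2    ; w0 = 0xfffffffd, and bits [63:32] of x0 are implicitly zeroed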

Came up while porting test/CodeGen/AArch64/movw-consts.ll: this will allow a
32-bit MOVN to be used in @test8 soon.

llvm-svn: 206379
2014-04-16 11:52:51 +00:00
Tim Northover
fb109731e4 ARM64: use the integrated assembler on ELF.
llvm-svn: 206378
2014-04-16 11:52:40 +00:00
Tim Northover
dcc9d1cb89 DAGCombiner: don't optimise non-existent litpool load
This particular DAG combine is designed to kick in when both ConstantFPs will
end up being loaded via a litpool, however those nodes have a semi-legal
status, dictated by isFPImmLegal so in some cases there wouldn't have been a
litpool in the first place. Don't try to be clever in those circumstances.

Picked up while merging some AArch64 tests.

llvm-svn: 206365
2014-04-16 09:03:09 +00:00
Quentin Colombet
d23a7863fe [ARM64] Set default CPU to generic instead of cyclone.
llvm-svn: 206313
2014-04-15 19:08:46 +00:00
Tim Northover
f6bd9143a7 AArch64/ARM64: enable more AArch64 tests on ARM64.
No code changes for this bunch, just some test rejigs.

llvm-svn: 206291
2014-04-15 14:00:29 +00:00
Tim Northover
1a72d5035d AArch64/ARM64: add missing pattern for extending load.
llvm-svn: 206290
2014-04-15 14:00:19 +00:00
Tim Northover
2fa51b50b7 AArch64/ARM64: only mangle MOVZ/MOVN during encoding when needed
Sometimes we need to emit the bits that would actually be a MOVN when producing a
relocated MOVZ instruction (don't ask). But not always, a check which ARM64 got
wrong until now.

llvm-svn: 206289
2014-04-15 14:00:15 +00:00
Tim Northover
b80745a8af AArch64/ARM64: add support for large code-model jump tables.
I've left the MachO CodeGen as it is, there's a reasonable chance it should use
the GOT like ConstPools, but I'm not certain.

llvm-svn: 206288
2014-04-15 14:00:11 +00:00
Tim Northover
25bd67bc13 AArch64/ARM64: add patterns for various commutations of FNMADD.
llvm-svn: 206287
2014-04-15 14:00:06 +00:00
Tim Northover
807e4d1956 AArch64/ARM64: add half as a storage type on ARM64.
This brings it into line with the AArch64 behaviour and should open the way for
certain OpenCL features.

llvm-svn: 206286
2014-04-15 14:00:03 +00:00
Tim Northover
d9b311a7db AArch64/ARM64: copy patterns for fixed-point conversions
Code is mostly copied directly across, with a slight extension of the
ISelDAGToDAG function so that it can cope with the floating-point constants
being behind a litpool.

llvm-svn: 206285
2014-04-15 13:59:57 +00:00
Tim Northover
19b6db94a6 ARM64: add constraints to various FastISel operations
llvm-svn: 206284
2014-04-15 13:59:53 +00:00
Tim Northover
21cae43ac0 AArch64/ARM64: add more arm64 lines to AArch64 regression tests
llvm-svn: 206282
2014-04-15 13:59:44 +00:00
Tim Northover
5317490f6a AArch64/ARM64: add dp tests from AArch64
llvm-svn: 206281
2014-04-15 13:59:40 +00:00
Tim Northover
977888c9a2 ARM64: specify full triple in tests to pacify Windows.
llvm-svn: 206175
2014-04-14 13:18:48 +00:00
Tim Northover
67d0c1d00c AArch64: add newline to end of test files.
Should be no other change.

llvm-svn: 206174
2014-04-14 13:18:40 +00:00
Tim Northover
837be1f97d AArch64/ARM64: enable directcond.ll test on ARM64.
Code change is because optimizeCompareInstr didn't know how to pull the
condition code out of FCSEL instructions.

llvm-svn: 206171
2014-04-14 12:51:06 +00:00
Tim Northover
3a59e7c449 ARM64: add patterns for csXYZ with reversed operands.
AArch64 tests for this, and it's obviously a good idea. Have to invert the
condition code, of course.

llvm-svn: 206170
2014-04-14 12:51:02 +00:00
Tim Northover
601b8c24ef ARM64: enable more regression tests from AArch64
llvm-svn: 206169
2014-04-14 12:50:58 +00:00
Tim Northover
0f5179b30d ARM64: add support for AArch64's addsub_ext.ll
There was one definite issue in ARM64 (the off-by-1 check for whether
a shift could be folded in) and one difference that is probably
correct: ARM64 didn't fold nodes with multiple uses into the
arithmetic operations unless optimising for code size.

llvm-svn: 206168
2014-04-14 12:50:50 +00:00
Tim Northover
614708ff8e ARM64: optimise (cmp x, (sub 0, y)) to (cmn x, y).
This transformation is only valid when being used for an EQ or NE
comparison since the flags change otherwise.
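
A sketch of the transform (example mine):

  neg w8, w1
  cmp w0, w8
   ->
  cmn w0, w1    ; only when the flags feed an EQ/NE test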

llvm-svn: 206167
2014-04-14 12:50:47 +00:00
Tim Northover
ce243cb6ce ARM64: start porting regression test suite from AArch64
llvm-svn: 206166
2014-04-14 12:50:41 +00:00
Chad Rosier
0a639ba5a6 [AArch64] Implement the isZExtFree APIs.
llvm-svn: 205926
2014-04-09 20:51:21 +00:00
Chad Rosier
b6bf098390 [AArch64] Implement the isTruncateFree API.
In AArch64, an i64 to i32 truncate operation is a subregister access.

This allows more opportunities for LSR optimization to eliminate
variables of different types (i32 and i64).

llvm-svn: 205925
2014-04-09 20:43:40 +00:00
Logan Chien
00ea0f9c1b [AArch64] Lower SHL_PARTS, SRA_PARTS and SRL_PARTS
Lower SHL_PARTS, SRA_PARTS and SRL_PARTS to perform 128-bit integer shift
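
A minimal sketch (example mine):

  define i128 @shl128(i128 %a, i128 %b) {
    %r = shl i128 %a, %b   ; i128 is illegal, so this expands via SHL_PARTS on the 64-bit halves
    ret i128 %r
  }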

Patch by GuanHong Liu.

llvm-svn: 204940
2014-03-27 16:28:09 +00:00
Christian Pirker
47e115b816 AArch64_BE Elf support for MC-JIT runtime dynamic linker
llvm-svn: 204816
2014-03-26 14:57:32 +00:00
Christian Pirker
a1df68e278 AArch64_BE function argument passing for ARM ABI
llvm-svn: 204814
2014-03-26 14:51:22 +00:00
Manman Ren
e9c89df128 Register Allocator: check other options before using a CSR for the first time.
When register allocator's stage is RS_Spill, we choose spill over using the CSR
for the first time, if the spill cost is lower than CSRCost. 
When register allocator's stage is < RS_Split, we choose pre-splitting over
using the CSR for the first time, if the cost of splitting is lower than
CSRCost.

CSRCost is set with the command-line option "regalloc-csr-first-time-cost". The
default value is 0, which generates the same code as before this commit.

With a value of 15 (1 << 14 is the entry frequency), I measured a performance
gain of 3% on 253.perlbmk and 1.7% on 197.parser, with instrumented PGO,
on an arm device.

rdar://16162005

llvm-svn: 204690
2014-03-25 00:16:25 +00:00
Chad Rosier
9733d5cead [AArch64] Add SchedRW lists to NEON instructions.
Previously, only regular AArch64 instructions were annotated with SchedRW lists.
This patch does the same for NEON, enabling these instructions to be scheduled by
the MIScheduler. Additionally, store operations are now modeled and a few
SchedRW lists were updated for bug fixes (e.g. multiple def operands).

Reviewers: apazos, mcrosier, atrick
Patch by Dave Estes <cestes@codeaurora.org>!

llvm-svn: 204505
2014-03-21 19:34:41 +00:00
Kevin Qin
d7954d3d41 [AArch64] Remove .data_region directive from AArch64.
.data_region is only used on Darwin, so it shouldn't be generated
for other OSes. Currently AArch64 doesn't support Darwin yet, so
I removed it from AArch64. When Darwin is supported someday, we can
add it back and associate it with Darwin.

llvm-svn: 204424
2014-03-21 02:12:48 +00:00
Matt Arsenault
c95c06bda9 Make DAGCombiner work on vector bitshifts with constant splat vectors.
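
A minimal IR sketch (not from the original commit) of a vector shift by a
constant splat that the combiner can now reason about:

  define <4 x i32> @splat_shift(<4 x i32> %v) {
    %r = shl <4 x i32> %v, <i32 3, i32 3, i32 3, i32 3>
    ret <4 x i32> %r
  }
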
llvm-svn: 204071
2014-03-17 18:58:01 +00:00
Tim Northover
68c567a38a IR: add a second ordering operand to cmpxchg for failure
The syntax for "cmpxchg" should now look something like:

	cmpxchg i32* %addr, i32 42, i32 3 acquire monotonic

where the second ordering argument gives the required semantics in the case
that no exchange takes place. It should be no stronger than the first ordering
constraint and cannot be either "release" or "acq_rel" (since no store will
have taken place).

rdar://problem/15996804

llvm-svn: 203559
2014-03-11 10:48:52 +00:00
Matt Arsenault
998df7332f Fix undefined behavior in vector shift tests.
These were all shifting by an amount equal to the bitwidth.
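
For illustration (not from the original commit), the problematic shape:
shifting a 32-bit lane by 32 yields an undefined result in LLVM IR, so
tests built on it could pass or fail arbitrarily.

  define <2 x i32> @ub_shift(<2 x i32> %v) {
    ; undefined: shift amount equals the element bitwidth
    %bad = shl <2 x i32> %v, <i32 32, i32 32>
    ret <2 x i32> %bad
  }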

llvm-svn: 203519
2014-03-11 00:01:41 +00:00
Tim Northover
2f522988cc AArch64: fix LowerCONCAT_VECTORS for new CodeGen.
The function was making too many assumptions about its input:

1. The NEON_VDUP optimisation was far too aggressive, assuming (I
think) that the input would always be BUILD_VECTOR.

2. We were treating most unknown concats as legal (by returning Op
rather than SDValue()). I think only concats of pairs of vectors are
actually legal.

http://llvm.org/PR19094

llvm-svn: 203450
2014-03-10 09:34:07 +00:00
Rafael Espindola
cb9ca86245 Replace PROLOG_LABEL with a new CFI_INSTRUCTION.
The old system was fairly convoluted:
* A temporary label was created.
* A single PROLOG_LABEL was created with it.
* A few MCCFIInstructions were created with the same label.

The semantics were that the cfi instructions were mapped to the PROLOG_LABEL
via the temporary label. The output position was that of the PROLOG_LABEL.
The temporary label itself was used only for doing the mapping.

The new CFI_INSTRUCTION has a 1:1 mapping to MCCFIInstructions and points to
one by holding an index into the CFI instructions of this function.

I did consider removing MMI.getFrameInstructions completely and having
CFI_INSTRUCTION own an MCCFIInstruction, but MCCFIInstructions have
non-trivial constructors and destructors and are somewhat big, so this
setup is probably better.

The net result is that we don't create temporary labels that are never used.

llvm-svn: 203204
2014-03-07 06:08:31 +00:00
Chad Rosier
6c9595d931 [AArch64] This is a work in progress to provide a machine description
for the Cortex-A53 subtarget in the AArch64 backend.

This patch lays the groundwork to annotate each AArch64 instruction
(no NEON yet) with a list of SchedReadWrite types. The patch also
provides the Cortex-A53 processor resources, maps those to the default
SchedReadWrites, and provides basic latency. NEON support will be added
in a subsequent patch with proper forwarding logic.

Verification was done by setting the pre-RA scheduler to linearize to
better gauge the effect of the MIScheduler. Even without modeling the
forwarding logic, the results show a modest improvement for Cortex-A53.

Reviewers: apazos, mcrosier, atrick
Patch by Dave Estes <cestes@codeaurora.org>!

llvm-svn: 203125
2014-03-06 16:04:00 +00:00
Chad Rosier
e60c767814 Revert "[AArch64] This is a work in progress to provide a machine description"
This reverts commit ff717c8fc786a0cfa1602982b91895fa09e514fc.

llvm-svn: 202773
2014-03-04 00:32:07 +00:00
Chad Rosier
ad64e09862 [AArch64] This is a work in progress to provide a machine description
for the Cortex-A53 subtarget in the AArch64 backend.

This patch lays the groundwork to annotate each AArch64 instruction
(no NEON yet) with a list of SchedReadWrite types. The patch also
provides the Cortex-A53 processor resources, maps those to the default
SchedReadWrites, and provides basic latency. NEON support will be added
in a subsequent patch with proper forwarding logic.

Verification was done by setting the pre-RA scheduler to linearize to
better gauge the effect of the MIScheduler. Even without modeling the
forwarding logic, the results show a modest improvement for Cortex-A53.

Reviewers: apazos, mcrosier, atrick
Patch by Dave Estes <cestes@codeaurora.org>!

llvm-svn: 202767
2014-03-03 23:32:47 +00:00
Tim Northover
852c0d63ee AArch64: simplify tbl/tbx polymorphism
The table argument is always 128-bit (and interpreted as <16 x i8>) so the
extra specifier for it is just clutter.

No user-visible behaviour change, so no tests.

llvm-svn: 202258
2014-02-26 11:55:09 +00:00
Kevin Qin
e05e6b31e1 [AArch64] Add register constraints to avoid generating STLXR and STXR with unpredictable behavior.
llvm-svn: 201841
2014-02-21 07:45:48 +00:00
Oliver Stannard
ce7688d8cc AArch64: __va_list.__stack must be 8-byte aligned
The va_start macro for AArch64 must set va_list.__stack to the address
following the last named argument on the stack, rounded up to an alignment
of 8 bytes.
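
A minimal IR sketch (not from the original commit) of the round-up
computation the va_start lowering must perform on the stack address:

  define i64 @round_up_to_8(i64 %addr) {
    ; (addr + 7) & ~7 rounds up to the next 8-byte boundary
    %plus = add i64 %addr, 7
    %aligned = and i64 %plus, -8
    ret i64 %aligned
  }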

llvm-svn: 201797
2014-02-20 17:19:26 +00:00
Ana Pazos
9cdade7a3e [AArch64] Expanded sin, cos, pow with FP vector type inputs
llvm-svn: 201601
2014-02-18 20:31:05 +00:00
Jiangning Liu
9508c695c8 Fix a typo in the lowering of AArch64 va_copy.
llvm-svn: 201541
2014-02-18 02:37:42 +00:00
Nico Rieck
02b2fbee0d Fix broken CHECK lines
llvm-svn: 201479
2014-02-16 07:31:05 +00:00
Kevin Qin
fa58a631ae [AArch64 NEON] Fix a bug to avoid using a floating-point type as the condition type when lowering SELECT_CC.
llvm-svn: 201395
2014-02-14 09:41:15 +00:00
Hao Liu
022a50cb21 [AArch64]Fix the assertion failure caused by "v1i1 SETCC" DAG node.
As v1i1 is illegal, the type legalizer tries to scalarize such a node. But if the operand types of the SETCC are legal, the scalarization algorithm will cause an assertion failure.

llvm-svn: 201381
2014-02-14 02:21:56 +00:00
Rafael Espindola
71737927b7 Specify a triple. MachO AArch64 support is missing.
llvm-svn: 201335
2014-02-13 15:30:06 +00:00
Daniel Sanders
7a3a160940 Re-commit: Demote EmitRawText call in AsmPrinter::EmitInlineAsm() and remove hasRawTextSupport() call
Summary:
AsmPrinter::EmitInlineAsm() will no longer use the EmitRawText() call for
targets with mature MC support. Such targets will always parse the inline
assembly (even when emitting assembly). Targets without mature MC support
continue to use EmitRawText() for assembly output.

The hasRawTextSupport() check in AsmPrinter::EmitInlineAsm() has been replaced
with MCAsmInfo::UseIntegratedAs which when true, causes the integrated assembler
to parse inline assembly (even when emitting assembly output). UseIntegratedAs
is set to true for targets that consider any failure to parse valid assembly
to be a bug. Target specific subclasses generally enable the integrated
assembler in their constructor. The default value can be overridden with
-no-integrated-as.

All tests that rely on inline assembly supporting invalid assembly (for example,
those that use mnemonics such as 'foo' or 'hello world') have been updated to
disable the integrated assembler.

Changes since review (and last commit attempt):
- Fixed test failures that were missed due to configuration of local build.
  (fixes crash.ll and a couple others).
- Fixed tests that happened to pass because the local build was on X86
  (should fix 2007-12-17-InvokeAsm.ll)
- The mature-mc-support.ll tests should no longer require all targets to be compiled.
  (should fix ARM and PPC buildbots)
- Object output (-filetype=obj and similar) now forces the integrated assembler
  to be enabled regardless of default setting or -no-integrated-as.
  (should fix SystemZ buildbots)

Reviewers: rafael

Reviewed By: rafael

CC: llvm-commits

Differential Revision: http://llvm-reviews.chandlerc.com/D2686

llvm-svn: 201333
2014-02-13 14:44:26 +00:00
NAKAMURA Takumi
74e8d4a7db llvm/test/CodeGen/AArch64/cpus.ll: Tweak to use -mtriple=aarch64-unknown-unknown; otherwise this would crash when targeting PECOFF triples like *-mingw32.
llvm-svn: 201315
2014-02-13 11:06:23 +00:00
Oliver Stannard
f7cc40a705 Add Cortex-A53 and Cortex-A57 cores to the AArch64 backend
llvm-svn: 201305
2014-02-13 09:46:11 +00:00
Hao Liu
386fc0d8ae [AArch64]Fix the problem that mul/add/sub of v1i8/v1i16/v1i32 types can't be selected.
As these problems are similar to those for shl/sra/srl, also add patterns for shift nodes.

llvm-svn: 201298
2014-02-13 05:42:33 +00:00
Hao Liu
ee04163cfe [AArch64]Add support for spilling FPR8/FPR16.
llvm-svn: 201287
2014-02-13 02:36:58 +00:00
Daniel Sanders
656c4d360b Revert r201237+r201238: Demote EmitRawText call in AsmPrinter::EmitInlineAsm() and remove hasRawTextSupport() call
It introduced multiple test failures in the buildbots.

llvm-svn: 201241
2014-02-12 15:39:20 +00:00
Daniel Sanders
e647d6441b Demote EmitRawText call in AsmPrinter::EmitInlineAsm() and remove hasRawTextSupport() call
Summary:
AsmPrinter::EmitInlineAsm() will no longer use the EmitRawText() call for targets with mature MC support. Such targets will always parse the inline assembly (even when emitting assembly). Targets without mature MC support continue to use EmitRawText() for assembly output.

The hasRawTextSupport() check in AsmPrinter::EmitInlineAsm() has been replaced with MCAsmInfo::UseIntegratedAs which when true, causes the integrated assembler to parse inline assembly (even when emitting assembly output). UseIntegratedAs is set to true for targets that consider any failure to parse valid assembly to be a bug. Target specific subclasses generally enable the integrated assembler in their constructor. The default value can be overridden with -no-integrated-as.

All tests that rely on inline assembly supporting invalid assembly (for example, those that use mnemonics such as 'foo' or 'hello world') have been updated to disable the integrated assembler.

Reviewers: rafael

Reviewed By: rafael

CC: llvm-commits

Differential Revision: http://llvm-reviews.chandlerc.com/D2686

llvm-svn: 201237
2014-02-12 14:44:54 +00:00
Hao Liu
636db9c0e6 [AArch64]Implement the copy of two FPR8 registers by using FMOVss of two FPR32 registers in copyPhysReg.
llvm-svn: 201061
2014-02-10 03:16:22 +00:00
Tim Northover
cf3928b4f7 ARM & AArch64: merge NEON absolute compare intrinsics
There was an extremely confusing proliferation of LLVM intrinsics to implement
the vacge & vacgt instructions. This combines them all into two polymorphic
intrinsics, shared across both backends.

llvm-svn: 200768
2014-02-04 14:55:42 +00:00
Tim Northover
0b6ea5de72 AArch64 & ARM: refactor crypto intrinsics to take scalars
Some of the SHA instructions take a scalar i32 as one argument (largely because
they work on 160-bit hash fragments). This wasn't reflected in the IR
previously, with ARM and AArch64 choosing different types (<4 x i32> and <1 x
i32> respectively) which was ugly.

This makes all the affected intrinsics take a uniform "i32", allowing them to
become non-polymorphic at the same time.

llvm-svn: 200706
2014-02-03 17:27:49 +00:00
Chad Rosier
156f3a2a96 [AArch64] Custom lower concat_vector patterns with v4i16, v4i32, v8i8, v8i16, v16i8 types.
llvm-svn: 200491
2014-01-30 21:46:54 +00:00
Kevin Qin
379441a4e6 [AArch64 NEON] Lower SELECT_CC with vector operand.
When the scalar compare is between floating-point values and the
operands are vectors, we custom lower SELECT_CC to use NEON SIMD
compares, generating fewer instructions.
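
A minimal IR sketch (not from the original commit; the function name is
made up) of the shape this lowering targets, a scalar floating-point
compare selecting between vector operands:

  define <4 x i32> @fcmp_vec_select(float %x, float %y, <4 x i32> %a, <4 x i32> %b) {
    %cmp = fcmp olt float %x, %y
    %sel = select i1 %cmp, <4 x i32> %a, <4 x i32> %b
    ret <4 x i32> %sel
  }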

llvm-svn: 200365
2014-01-29 01:57:30 +00:00
Kevin Qin
436aae7633 [AArch64 NEON] Try to generate CONCAT_VECTOR when lowering BUILD_VECTOR or SHUFFLE_VECTOR.
Replace r199791.

llvm-svn: 200180
2014-01-27 02:53:54 +00:00
Kevin Qin
d83dee8270 Revert r199791.
It's an old version which has some bugs. I'll commit the latest patch soon.

llvm-svn: 200179
2014-01-27 02:53:41 +00:00
Jiangning Liu
5ac0a5db29 Improve pattern match from v1i8 to v1i32 for AArch64 Neon.
llvm-svn: 200119
2014-01-26 04:55:53 +00:00
Jiangning Liu
8a0b567fb9 Implement pattern match from v1xx to v1xx for AArch64 Neon.
llvm-svn: 200113
2014-01-26 03:27:40 +00:00
Kevin Qin
ef4cd4a730 [AArch64 NEON] Add patterns for concat_vector on v2i32.
llvm-svn: 200111
2014-01-26 02:46:15 +00:00
Kevin Qin
db813b2f82 [AArch64 NEON] Add test case for vector FP_ROUND.
llvm-svn: 200110
2014-01-26 02:23:33 +00:00
Ana Pazos
0a0875b43a [AArch64] Removed unused i8 type from FPR8 register class.
The i8 type is not registered with any register class.
This causes a segmentation fault in MachineLICM::getRegisterClassIDAndCost.

The code selects the first type associated with register class FPR8,
which happens to be i8.
It uses this type (i8) to get the representative class pointer, which is 0.
It then uses this pointer to access a field, resulting in a segmentation fault.

Since the i8 type is not being used for printing any NEON instruction,
we can safely remove it.

llvm-svn: 200046
2014-01-24 22:36:53 +00:00
Kevin Qin
3282007e08 [AArch64 NEON] Fix a bug in implementing register copy between FPR16 registers.
llvm-svn: 199978
2014-01-24 07:53:04 +00:00
Ana Pazos
5fdec23c84 [AArch64] Added vselect patterns with float and double types
llvm-svn: 199925
2014-01-23 19:18:57 +00:00
Hao Liu
80b39e0b02 [AArch64]Add CHECK lines for two test cases testing scalar_to_vector, committed in r199461.
llvm-svn: 199861
2014-01-23 02:09:30 +00:00
Kevin Qin
9a631f3af4 [AArch64 NEON] Try to generate CONCAT_VECTOR when lowering BUILD_VECTOR or SHUFFLE_VECTOR.
llvm-svn: 199791
2014-01-22 06:11:03 +00:00
Kevin Qin
d925e0a953 [AArch64 NEON] Fix a bug caused by undef lane when generating VEXT.
It was committed as r199628 but reverted in r199631 because it caused
regression test failures. That was because an old version of the patch
was committed by mistake. Sorry for the mistake.

llvm-svn: 199704
2014-01-21 01:48:52 +00:00
Chandler Carruth
f3546bc541 Revert r199628: "[AArch64 NEON] Fix a bug caused by undef lane when generating VEXT."
This change fails the newly added regression tests.

llvm-svn: 199631
2014-01-20 08:18:01 +00:00
Kevin Qin
a2c8e30bce [AArch64 NEON] Fix a bug caused by undef lane when generating VEXT.
llvm-svn: 199628
2014-01-20 07:32:26 +00:00
Kevin Qin
e739fc1b8e [AArch64 NEON] Expand vector UDIV/SDIV/UREM/SREM/FREM as NEON doesn't support these operations.
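
For illustration (not from the original commit), an IR division that has
no NEON instruction and must be expanded:

  define <2 x i32> @vec_udiv(<2 x i32> %a, <2 x i32> %b) {
    ; expanded element-by-element since NEON has no integer divide
    %r = udiv <2 x i32> %a, %b
    ret <2 x i32> %r
  }
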
llvm-svn: 199485
2014-01-17 09:54:30 +00:00
Hao Liu
f1036ed220 [AArch64]Fix the problem that f16_to_f32 and f32_to_f16 can't be selected.
Also add copy support for FPR16.
Also add a missing test case file belonging to commit r197361.

llvm-svn: 199463
2014-01-17 06:23:30 +00:00
Kevin Qin
71a9ad96db [AArch64 NEON] Custom lower conversion between vector integer and vector floating point if element bit-width doesn't match.
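
A minimal IR sketch (not from the original commit) where the element
bit-widths differ (i32 vs f64) and the conversion is custom lowered:

  define <2 x double> @widen_convert(<2 x i32> %v) {
    %f = sitofp <2 x i32> %v to <2 x double>
    ret <2 x double> %f
  }
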
llvm-svn: 199462
2014-01-17 05:52:35 +00:00
Hao Liu
96315c1088 [AArch64]Fix the problem that concat_vectors of two v1i32 types can't be selected.
Also fix the problem that scalar_to_vector from f32 to v2f32/v4f32 can't be selected.

llvm-svn: 199461
2014-01-17 05:44:46 +00:00
Jiangning Liu
ff1e0e1ce3 For AArch64, lower sext_inreg and generate optimized code by using SXTL.
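
For illustration (not from the original commit), one vector sign
extension of the kind this lowering targets; SXTL is an alias of SSHLL #0:

  define <4 x i32> @sext_v4i16(<4 x i16> %v) {
    %e = sext <4 x i16> %v to <4 x i32>
    ret <4 x i32> %e
  }
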
llvm-svn: 199296
2014-01-15 05:08:01 +00:00
Tim Northover
3f497bbb76 AArch64: don't try to handle [SU]MUL_LOHI nodes
We should set them to expand for now since there are no patterns
dealing with them. Actually, there are no instructions either so I
doubt they'll ever be acceptable.

llvm-svn: 199265
2014-01-14 22:53:22 +00:00
Rafael Espindola
9ae2f1aa3d Revert "[AArch64] Added vselect patterns with float and double types"
This reverts commit r199242.

It is causing CodeGen/AArch64/neon-bsl.ll to fail.

llvm-svn: 199248
2014-01-14 19:24:08 +00:00
Ana Pazos
51fd756e4b [AArch64] Added vselect patterns with float and double types
llvm-svn: 199242
2014-01-14 18:45:48 +00:00
Andrea Di Biagio
c159ef589c [AArch64] Fix assertion failure caused by an invalid comparison between APInt values.
APInt only knows how to compare values with the same BitWidth and asserts
in all other cases.

With this fix, function PerformORCombine does not use the APInt equality
operator if the APInt values returned by 'isConstantSplat' differ in BitWidth.
In that case they are different and no comparison is needed.

llvm-svn: 199119
2014-01-13 16:51:00 +00:00
Kevin Qin
5aa184711d [AArch64 NEON] Add missing patterns for bitcast from or to v1f64
llvm-svn: 199070
2014-01-13 01:58:38 +00:00
Kevin Qin
9b14d101ea [AArch64 NEON] Add more scenarios to use perm instructions when lowering shuffle_vector
This patch covered 2 more scenarios:

1.  Two operands of shuffle_vector are the same, like
%shuffle.i = shufflevector <8 x i8> %a, <8 x i8> %a, <8 x i32> <i32 0, i32 2, i32 4, i32 6, i32 8, i32 10, i32 12, i32 14>

2. One of the operands is undef, like
%shuffle.i = shufflevector <8 x i8> %a, <8 x i8> undef, <8 x i32> <i32 0, i32 2, i32 4, i32 6, i32 8, i32 10, i32 12, i32 14>

After this patch, perm instructions will have a chance to be emitted instead of lots of INS.

llvm-svn: 199069
2014-01-13 01:56:29 +00:00
Nico Rieck
884ee061f2 Fix non-deterministic SDNodeOrder-dependent codegen
Reset SelectionDAGBuilder's SDNodeOrder to ensure deterministic code
generation.

llvm-svn: 199050
2014-01-12 14:09:17 +00:00
Kristof Beyls
082ab7548c Make sure -use-init-array has the intended effect on all AArch64 ELF targets, not just Linux.
llvm-svn: 198937
2014-01-10 13:41:49 +00:00
Andrea Di Biagio
946f61e227 Teach the DAGCombiner how to fold 'vselect' dag nodes according
to the following two rules:
  1) fold (vselect (build_vector AllOnes), A, B) -> A
  2) fold (vselect (build_vector AllZeros), A, B) -> B
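
A minimal IR sketch (not from the original commit; the function name is
made up) of rule 1; the all-ones condition means the select must yield A:

  define <4 x i32> @vselect_all_ones(<4 x i32> %a, <4 x i32> %b) {
    %r = select <4 x i1> <i1 true, i1 true, i1 true, i1 true>, <4 x i32> %a, <4 x i32> %b
    ret <4 x i32> %r  ; folds to ret <4 x i32> %a
  }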

llvm-svn: 198777
2014-01-08 18:33:04 +00:00
Kevin Qin
fd4df4bd7a [AArch64 NEON] Fix generating an incorrect value type for NEON_VDUPLANE
when lowering build_vector if the result value type mismatches the operand
value type.

llvm-svn: 198743
2014-01-08 08:06:14 +00:00
Hao Liu
2324cab69b [AArch64]Add support to spill/fill D tuples such as DPair/DTriple/DQuad. There are no test cases for D tuples as the original test cases are too large. As the spill/fill of D tuples is similar to that of Q tuples, correctness can be guaranteed.
llvm-svn: 198684
2014-01-07 10:50:43 +00:00
Hao Liu
bbb265cfee [AArch64]Add support to copy D tuples such as DPair/DTriple/DQuad and Q tuples such as QPair/QTriple/QQuad. There is no test case for D tuples as the original test cases are too large. As the copy of D tuples is similar to that of Q tuples, correctness can be guaranteed.
llvm-svn: 198682
2014-01-07 10:00:03 +00:00
Kevin Qin
cb6af368ab [AArch64 NEON] Fixed incorrect immediate used in BIC instruction.
llvm-svn: 198675
2014-01-07 05:10:47 +00:00
Kevin Qin
162c44e325 [AArch64 NEON] Fix invalid constant used in vselect condition.
There was a wrong assumption that the vector element type and the
type of each ConstantSDNode in the build_vector were the same.
However, when promoting the integer operand of a legally typed
build_vector, the operand type and the vector element type do not
need to be the same
(See method 'DAGTypeLegalizer::PromoteIntOp_BUILD_VECTOR' in
LegalizeIntegerTypes.cpp).

  In the AArch64 backend, the following DAG sequence:

  C0: i1 = Constant<0>
  C1: i1 = Constant<-1>
  V: v8i1 = BUILD_VECTOR C1, C1, C0, C0, C0, C0, C0, C0

  is type-legalized into:

  NewC0: i32 = Constant<0>
  NewC1: i32 = Constant<1>
  V: v8i8 = BUILD_VECTOR NewC1, NewC1, NewC0, NewC0, NewC0, NewC0, NewC0, NewC0

Force a getZeroExtend to VTBits to ensure that the new constant
is correct.

llvm-svn: 198582
2014-01-06 02:26:10 +00:00
Jiangning Liu
583b8a7116 For AArch64 NEON, simplify scalar dup by lane 0 for floating point.
llvm-svn: 198194
2013-12-30 02:44:35 +00:00
Hao Liu
ab32d54fad [AArch64]Add code to spill/fill Q register tuples such as QPair/QTriple/QQuad.
llvm-svn: 198193
2013-12-30 02:38:12 +00:00
Hao Liu
8bef865160 [AArch64]Fix inability to select a shift left by 0 of type v1i64
llvm-svn: 198192
2013-12-30 02:12:46 +00:00
Kevin Qin
cbb0be4bee Fix a bug in DAGCombiner about zero-extend after setcc.
For the AArch64 backend, if DAGCombiner sees "sext(setcc)", it will
combine them into a single setcc with an extended value type.
Then if it sees "zext(setcc)", it assumes the setcc is Vxi1 and tries
to create "(and (vsetcc), (1, 1, ...))". When the setcc isn't Vxi1,
DAGCombiner will create a wrong node and wrong code will be emitted.
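
For illustration (not from the original commit), the zext(setcc) shape
involved; masking with 1 per lane is only correct if the setcc really
produces an i1-element vector:

  define <2 x i64> @zext_cmp(<2 x i64> %a, <2 x i64> %b) {
    %c = icmp eq <2 x i64> %a, %b
    %z = zext <2 x i1> %c to <2 x i64>
    ret <2 x i64> %z
  }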

llvm-svn: 198190
2013-12-30 02:05:13 +00:00
Hao Liu
e8d49c2088 [AArch64]Fix the problem that mul of v1i64/v2i64 types can't be selected.
E.g. it can't select IR such as:
     %tmp = mul <2 x i64> %a, %b

llvm-svn: 198188
2013-12-30 01:38:41 +00:00
Hao Liu
8ed49e0c42 [AArch64]Fix a problem that the register order of fmls/fmla by element is incorrect.
E.g. the codegen result is 
     fmls v1.2s, v0.2s, v2.s[3]
which is expected to be
     fmls v0.2s, v1.2s, v2.s[3]

llvm-svn: 198001
2013-12-25 07:12:34 +00:00