mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-19 19:12:56 +02:00
Commit Graph

9190 Commits

Author SHA1 Message Date
Andrew Trick
7350fe079d Move a unit test into the correct dir. Sorry if it broke Mips-only builds.
llvm-svn: 199911
2014-01-23 17:47:57 +00:00
Tom Stellard
6f13c22a7a R600: Recommit 199842: Add work-around for the CF stack entry HW bug
The unit test is now disabled on non-asserts builds.

The CF stack can be corrupted if you use CF_ALU_PUSH_BEFORE,
CF_ALU_ELSE_AFTER, CF_ALU_BREAK, or CF_ALU_CONTINUE when the number of
sub-entries on the stack is greater than or equal to the stack entry
size and sub-entries modulo 4 is either 0 or 3 (on cedar the bug is
present when the number of sub-entries modulo 8 is either 7 or 0)

We choose to be conservative and always apply the work-around when the
number of sub-entries is greater than or equal to the stack entry size,
so that we can safely over-allocate the stack when we are unsure of the
stack allocation rules.

reviewed-by: Vincent Lejeune <vljn at ovi.com>
llvm-svn: 199905
2014-01-23 16:18:02 +00:00
Elena Demikhovsky
6f951ffaa0 AVX-512: added VPERM2D VPERM2Q VPERM2PS VPERM2PD instructions,
they give better sequences than VPERMI

llvm-svn: 199893
2014-01-23 14:27:26 +00:00
Tim Northover
cfdf1357ee ARM: use litpools for normal i32 imms when compiling minsize.
With constant-sharing, litpool loads consume 4 + N*2 bytes of code, but
movw/movt pairs consume 8*N. This means litpools are better than movw/movt even
with just one use. Other materialisation strategies can still be better though,
so the logic is a little odd.
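
A hedged illustration (the function and constant are invented, not from this
commit): a minsize function that materialises a plain i32 immediate, which can
now come from a literal pool instead of a movw/movt pair.

  define i32 @imm() minsize {
    ret i32 305419896   ; 0x12345678, not encodable as a modified immediate
  }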

llvm-svn: 199891
2014-01-23 13:43:47 +00:00
Hao Liu
80b39e0b02 [AArch64]Add CHECK for two test cases testing scalar_to_vector committed in r199461.
llvm-svn: 199861
2014-01-23 02:09:30 +00:00
Owen Anderson
2224c08928 Revert r162101 and replace it with a solution that works for targets where the pointer type is illegal.
This is a horrible bit of code.  We're calling a simplification routine *in the middle* of type legalization.  We tell the
simplification routine that it's running after legalization, but some of the types it will encounter will be illegal!  The
fix is only to invoke the simplification if the types in question were legal, so that none of its invariants will be violated.

llvm-svn: 199847
2014-01-22 22:34:17 +00:00
Tom Stellard
d5181ee67d Revert "R600: Add work-around for the CF stack entry HW bug"
This reverts commit 35b8331cad6eb512a2506adbc394201181da94ba.

The -debug-only flag for llc doesn't appear to be available in
all build configurations.

llvm-svn: 199845
2014-01-22 22:20:54 +00:00
Tom Stellard
cd874ab98c R600: Add work-around for the CF stack entry HW bug
The CF stack can be corrupted if you use CF_ALU_PUSH_BEFORE,
CF_ALU_ELSE_AFTER, CF_ALU_BREAK, or CF_ALU_CONTINUE when the number of
sub-entries on the stack is greater than or equal to the stack entry
size and sub-entries modulo 4 is either 0 or 3 (on cedar the bug is
present when the number of sub-entries modulo 8 is either 7 or 0)

We choose to be conservative and always apply the work-around when the
number of sub-entries is greater than or equal to the stack entry size,
so that we can safely over-allocate the stack when we are unsure of the
stack allocation rules.

reviewed-by: Vincent Lejeune <vljn at ovi.com>
llvm-svn: 199842
2014-01-22 21:55:46 +00:00
Tom Stellard
ae477cc774 R600: Refactor stack size calculation
reviewed-by: Vincent Lejeune <vljn at ovi.com>
llvm-svn: 199840
2014-01-22 21:55:43 +00:00
Quentin Colombet
f6ddfeb084 Add a testcase for r199430.
llvm-svn: 199831
2014-01-22 20:11:50 +00:00
Tom Stellard
19af07fe92 R600: MOVA is vector only
llvm-svn: 199827
2014-01-22 19:24:24 +00:00
Tom Stellard
0971c460b5 R600: Take alignment into account when calculating the stack offset
llvm-svn: 199826
2014-01-22 19:24:23 +00:00
Tom Stellard
d424fe57e4 R600: Add support for global addresses with constant initializers
llvm-svn: 199825
2014-01-22 19:24:21 +00:00
Tom Stellard
452996a15e R600: Begin private memory at the second GPR.
This way private memory does not over-write work group information
stored in GPRs 0 and 1.

llvm-svn: 199824
2014-01-22 19:24:19 +00:00
Tom Stellard
369c33de20 R600/SI: Add support for i8 and i16 private loads/stores
llvm-svn: 199823
2014-01-22 19:24:14 +00:00
Greg Fitzgerald
d54e246d6a Fix inline assembly that switches between ARM and Thumb modes
This patch restores the ARM mode if the user's inline assembly
does not.  In the object streamer, it ensures that instructions
following the inline assembly are encoded correctly and that
correct mapping symbols are emitted.  For the asm streamer, it
emits a .arm or .thumb directive.

This patch does not ensure that the inline assembly contains
the ADR instruction to switch modes at runtime.

The problem we need to solve is code like this:

  int foo(int a, int b) {
    int r = a + b;
    asm volatile(
        ".align 2     \n"
        ".arm         \n"
        "add r0,r0,r0 \n"
    : : "r"(r));
    return r+1;
  }

If we compile this function in thumb mode then the inline assembly
will switch to arm mode. We need to make sure that we switch back to
thumb mode after emitting the inline assembly or we will incorrectly
encode the instructions that follow (i.e. the assembly instructions
for return r+1).

Based on patch by David Peixotto

Change-Id: Ib57f6d2d78a22afad5de8693fba6230ff56ba48b
llvm-svn: 199818
2014-01-22 18:32:35 +00:00
Elena Demikhovsky
45e0f9e6b6 AVX512: combining setcc and zext is wrong on AVX512
because the vector compare instruction puts its result in a mask register.

llvm-svn: 199798
2014-01-22 12:26:19 +00:00
James Molloy
2e64527bc0 MachineCopyPropagation has special logic for removing COPY instructions. It will remove plain COPYs using eraseFromParent(), but if the COPY has imp-defs/imp-uses it will convert it to a KILL, to keep the imp-def around.
This actually totally breaks and causes the machine verifier to cry in several cases, one of which being:

%RAX<def> = COPY %RCX<kill>
%ECX<def> = COPY %EAX<kill>, %RAX<imp-use,kill>

These subregister copies are together identified as noops, so both are removed. However, the second one, as it has an imp-use, gets converted into a KILL:

%ECX<def> = KILL %EAX<kill>, %RAX<imp-use,kill>

As the original COPY has been removed, the verifier goes into tears at the use of undefined EAX and RAX.

There are several hacky solutions to this hacky problem (which is all to do with imp-use/def weirdnesses), but the least hacky I've come up with is to *always* remove COPYs by converting to KILLs. KILLs are no-ops to the code generator so the generated code doesn't change (which is why they were partially used in the first place), but using them also keeps the def/use and imp-def/imp-use chains alive:

%RAX<def> = KILL %RCX<kill>
%ECX<def> = KILL %EAX<kill>, %RAX<imp-use,kill>

The patch passes all test cases including the ones that check the removal of MOVs in this circumstance, along with an extra test I added to check subregister behaviour (which made the machine verifier fall over before my patch).

The patch also adds some DEBUG() statements because the file hadn't got any.

llvm-svn: 199797
2014-01-22 09:12:27 +00:00
Kevin Qin
9a631f3af4 [AArch64 NEON] Try to generate CONCAT_VECTOR when lowering BUILD_VECTOR or SHUFFLE_VECTOR.
llvm-svn: 199791
2014-01-22 06:11:03 +00:00
Venkatraman Govindaraju
2dcfaac1b8 [Sparc] Add support for inline assembly constraints which specify registers by their aliases.
llvm-svn: 199786
2014-01-22 03:18:42 +00:00
Venkatraman Govindaraju
6c498f0e2f [Sparc] Add support for inline assembly constraint 'I'.
llvm-svn: 199781
2014-01-22 01:29:51 +00:00
Venkatraman Govindaraju
0a2da12ffd [Sparc] Do not add PC to _GLOBAL_OFFSET_TABLE_ address to access GOT in absolute code.
Fixes PR#18521

llvm-svn: 199775
2014-01-22 00:13:18 +00:00
Duncan P. N. Exon Smith
6d0186fb05 CodeGen: Stop treating vectors as aggregates
Fix a crash in SjLjEHPrepare::lowerIncomingArguments caused by treating
VectorType like an aggregate.  It's first-class!

<rdar://problem/15854596>

llvm-svn: 199768
2014-01-21 22:46:46 +00:00
Chandler Carruth
a4347f800e Tweak the spelling of the asserts requirement a bit more. This makes it
match the (reasonably prevalent) usage in Clang's test suite and so
seems more "canonical".

llvm-svn: 199767
2014-01-21 22:39:19 +00:00
Andrew Trick
e67db7b7b2 Fix PR18572 - llc crash during GenericScheduler::initPolicy().
Generalized the heuristic that looks at the (very rough) size of the
register file before enabling regpressure tracking.

llvm-svn: 199766
2014-01-21 21:27:37 +00:00
Hal Finkel
ca5b9beeb1 Fix pointer info on PPC byval stores
For PPC64 SVR (and Darwin), the stores that take byval aggregate parameters
from registers into the stack frame had MachinePointerInfo objects with
incorrect offsets. These offsets are relative to the object itself, not to the
stack frame base.

This fixes self hosting on PPC64 when compiling with -enable-aa-sched-mi.

llvm-svn: 199763
2014-01-21 20:15:58 +00:00
Justin Holewinski
dbdc639118 [NVPTX] Add missing patterns for div.approx with immediate denominator
llvm-svn: 199746
2014-01-21 14:40:05 +00:00
Kevin Qin
d925e0a953 [AArch64 NEON] Fix a bug caused by undef lane when generating VEXT.
It was committed as r199628 but reverted in r199631 because it caused
regression test failures. That happened because of an old version of the patch
that I used to commit. Sorry for the mistake.

llvm-svn: 199704
2014-01-21 01:48:52 +00:00
Andrea Di Biagio
5b5e7e3cb5 [X86] Teach how to combine a vselect into a movss/movsd
Add target specific rules for combining vselect dag nodes into movss/movsd
when possible.

If the vector type of the vselect dag node in input is either MVT::v4i32 or
MVT::v4f32, then try to fold according to rules:

  1) fold (vselect (build_vector (0, -1, -1, -1)), A, B) -> (movss A, B)
  2) fold (vselect (build_vector (-1, 0, 0, 0)), A, B) -> (movss B, A)

If the vector type of the vselect dag node in input is either MVT::v2i64 or
MVT::v2f64 (and we have SSE2), then try to fold according to rules:

  3) fold (vselect (build_vector (0, -1)), A, B) -> (movsd A, B)
  4) fold (vselect (build_vector (-1, 0)), A, B) -> (movsd B, A)
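
As a made-up illustration (not from the commit) of rule 1, IR of roughly this
shape takes lane 0 from B and the remaining lanes from A, so it can lower to
(movss A, B):

  define <4 x float> @select_low_lane(<4 x float> %a, <4 x float> %b) {
    %r = select <4 x i1> <i1 false, i1 true, i1 true, i1 true>,
                <4 x float> %a, <4 x float> %b
    ret <4 x float> %r
  }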

llvm-svn: 199683
2014-01-20 19:35:22 +00:00
James Molloy
bd56139ee7 Remove the useless pseudo instructions VDUPfdf and VDUPfqf, replacing them with patterns to match VDUPLN.
llvm-svn: 199675
2014-01-20 17:14:48 +00:00
Hal Finkel
0fab5a5e7b Fix misched-aa-colored.ll to require asserts (trying again)
Perhaps it needs to be in caps.

llvm-svn: 199661
2014-01-20 14:15:28 +00:00
Hal Finkel
a7712939fe Fix misched-aa-colored.ll to require asserts.
-misched=shuffle is NDEBUG only. Maybe we should change that.

llvm-svn: 199659
2014-01-20 14:09:34 +00:00
Hal Finkel
fe3a3ecfad Update IR when merging slots in stack coloring
The way that stack coloring updated MMOs when merging stack slots, while
correct, is suboptimal, and is incompatible with the use of AA during
instruction scheduling. The solution, which involves the use of const_cast (and
more importantly, updating the IR from within an MI-level pass), obviously
requires some explanation:

When the stack coloring pass was originally committed, the code in
ScheduleDAGInstrs::buildSchedGraph tracked possible alias sets by using
GetUnderlyingObject, and all load/store and store/store memory control
dependencies were added between SUs at the object level (where only one
object, that returned by GetUnderlyingObject, was used to identify the object
associated with each MMO). When stack coloring merged stack slots, it would
replace MMOs derived from the remapped alloca with the alloca with which the
remapped alloca was being replaced. Because ScheduleDAGInstrs only used single
objects, and tracked alias sets at the object level, this was a fine solution.

In r169744, (Andy and) I updated the code in ScheduleDAGInstrs to use
GetUnderlyingObjects, and track alias sets using, potentially, multiple
underlying objects for each MMO. This was done, primarily, to provide the
ability to look through PHIs, and provide better scheduling for
induction-variable-dependent loads and stores inside loops. At this point, the
MMO-updating code in stack coloring became suboptimal, because it would clear
the MMOs for (i.e. completely pessimize) all instructions for which r169744
might help in scheduling. Updating the IR directly is the simplest fix for this
(and the one with, by far, the least compile-time impact), but others are
possible (we could give each MMO a small vector of potential values, or make
use of a remapping table, constructed from MFI, inside ScheduleDAGInstrs).

Unfortunately, replacing all MMO values derived from the remapped alloca with
the base replacement alloca fundamentally breaks our ability to use AA during
instruction scheduling (which is critical to performance on some targets). The
reason is that the original MMO might have had an offset (either constant or
dynamic) from the base remapped alloca, and that offset is not present in the
updated MMO. One possible way around this would be to use
GetPointerBaseWithConstantOffset, and update not only the MMO's value, but also
its offset based on the original offset. Unfortunately, this solution would
only handle constant offsets, and for safety (because AA is not completely
restricted to deducing relationships with constant offsets), we would need to
clear all MMOs without constant offsets over the entire function. This would be
an even worse pessimization than the current single-object restriction. Any
other solution would involve passing around a vector of remapped allocas, and
teaching AA to use it, introducing additional complexity and overhead into AA.

Instead, when remapping an alloca, we replace all IR uses of that alloca as
well (optionally inserting a bitcast as necessary). This is even more efficient
than the old MMO-updating code in the stack coloring pass (because it removes
the need to call GetUnderlyingObject on all MMO values), removes the
single-object pessimization in the default configuration, and enables the
correct use of AA during instruction scheduling (all without any additional
overhead).

LLVM now no longer miscompiles itself on x86_64 when using -enable-misched
-enable-aa-sched-mi -misched-bottomup=0 -misched-topdown=0 -misched=shuffle!
Fixed PR18497.

Because the alloca replacement is now done at the IR level, unless the MMO
directly refers to the remapped alloca, the change cannot be seen at the MI
level. As a result, there is no good way to fix test/CodeGen/X86/pr14090.ll.

llvm-svn: 199658
2014-01-20 14:03:16 +00:00
Artyom Skrobov
264da37809 [ARM] Do not generate Tag_DIV_use=AllowDIVExt when hardware div is non-optional: it should have the default value of AllowDIVIfExists
llvm-svn: 199638
2014-01-20 10:18:42 +00:00
Chandler Carruth
f3546bc541 Revert r199628: "[AArch64 NEON] Fix a bug caused by undef lane when generating VEXT."
This test fails the newly added regression tests.

llvm-svn: 199631
2014-01-20 08:18:01 +00:00
Kevin Qin
a2c8e30bce [AArch64 NEON] Fix a bug caused by undef lane when generating VEXT.
llvm-svn: 199628
2014-01-20 07:32:26 +00:00
Saleem Abdulrasool
c5281b558c ARM: update build attributes for ABI r2.09
Update the tag names as per the current ABI errata.  Mark deprecated tags
as such.

llvm-svn: 199576
2014-01-19 08:25:35 +00:00
Juergen Ributzka
4f480d4b83 Add two new calling conventions for runtime calls
This patch adds two new target-independent calling conventions for runtime
calls - PreserveMost and PreserveAll.
The target-specific implementation for X86-64 is defined as following:
  - Arguments are passed as for the default C calling convention
  - The same applies for the return value(s)
  - PreserveMost preserves all GPRs - except R11
  - PreserveAll preserves all GPRs and all XMMs/YMMs - except R11
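
A minimal illustrative sketch (the callee name is invented) of how the new
conventions appear in IR:

  declare preserve_mostcc void @runtime_hook(i64)

  define void @caller(i64 %x) {
    ; Most registers survive this call, so little needs to be spilled around it.
    call preserve_mostcc void @runtime_hook(i64 %x)
    ret void
  }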

Reviewed by Lang and Philip

llvm-svn: 199508
2014-01-17 19:47:03 +00:00
Daniel Sanders
32197355d2 [mips][msa] Correct pattern for LSA
Summary:
$rs and $rt were the wrong way round in the .td and the testcase wasn't
strict enough to detect the mistake.

Reviewers: matheusalmeida

Reviewed By: matheusalmeida

Differential Revision: http://llvm-reviews.chandlerc.com/D2554

llvm-svn: 199498
2014-01-17 15:40:05 +00:00
Kevin Qin
e739fc1b8e [AArch64 NEON] Expand vector for UDIV/SDIV/UREM/SREM/FREM as neon doesn't support these operations.
llvm-svn: 199485
2014-01-17 09:54:30 +00:00
Hao Liu
f1036ed220 [AArch64]Fix the problem that we can't select f16_to_f32 and f32_to_f16.
Also add copy support for FPR16.
Also add a missing test case file belongs to commit r197361.

llvm-svn: 199463
2014-01-17 06:23:30 +00:00
Kevin Qin
71a9ad96db [AArch64 NEON] Custom lower conversion between vector integer and vector floating point if element bit-width doesn't match.
llvm-svn: 199462
2014-01-17 05:52:35 +00:00
Hao Liu
96315c1088 [AArch64]Fix the problem that we can't select concat_vectors of two v1i32 types.
Also fix the problem that we can't select scalar_to_vector from f32 to v2f32/v4f32.

llvm-svn: 199461
2014-01-17 05:44:46 +00:00
Amara Emerson
b634c3f6a8 Move the xscale build attribute test to the proper place and remove the old one.
The encoding of build attributes is already tested in CodeGen/ARM/build-attributes-encoding.s

llvm-svn: 199393
2014-01-16 15:11:54 +00:00
Jiangning Liu
47e6e27d8b For ARM, fix assertion failures for some ld/st 3/4 instructions with writeback.
llvm-svn: 199369
2014-01-16 09:16:13 +00:00
Elena Demikhovsky
36c03b27cc AVX-512: fixed a compare pattern
llvm-svn: 199366
2014-01-16 08:45:54 +00:00
Rafael Espindola
1beef6f079 Convert test to FileCheck.
llvm-svn: 199355
2014-01-16 06:31:20 +00:00
Reed Kotler
c3cb7dce9b Adjust offsets for max load instruction offsets. This is more pessimistic
than it needs to be by 1 bit but I need to finish some other things so 
that all the boundary cases will work in that situation. constpool.c
in test-suite will fail to assemble under our new internal test-suite sync
without this change.

llvm-svn: 199343
2014-01-16 00:47:46 +00:00
Andrea Di Biagio
0af625e231 Update test/CodeGen/X86/vbinop-simplify-bug.ll.
Redirect the output of llc to /dev/null.

llvm-svn: 199329
2014-01-15 20:16:14 +00:00
Andrea Di Biagio
aa6085dde3 [DAGCombiner] Fix a wrong check in method SimplifyVBinOp.
This fixes a regression introduced by r199135.

Revision 199135 tried to simplify part of the logic in method
DAGCombiner::SimplifyVBinOp introducing calls to method BuildVectorSDNode::isConstant().

However, that revision wrongly changed the check performed by method
SimplifyVBinOp to identify dag nodes that can be folded.
Before revision 199135, that method only tried to simplify vector binary operations
if both operands were build_vector of Constant/ConstantFP/Undef only.

After revision 199135, method SimplifyVBinOp also tried to
simplify vector binary operations with only one constant operand.

This fixes the problem by restoring the old behavior of SimplifyVBinOp.
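
For illustration only (a made-up example), the case that is still folded has
both operands as build_vectors of constants:

  define <4 x i32> @fold_both_constant() {
    %r = mul <4 x i32> <i32 1, i32 2, i32 3, i32 4>, <i32 5, i32 5, i32 5, i32 5>
    ret <4 x i32> %r
  }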

llvm-svn: 199328
2014-01-15 19:51:32 +00:00
Jiangning Liu
ff1e0e1ce3 For AArch64, lower sext_inreg and generate optimized code by using SXTL.
llvm-svn: 199296
2014-01-15 05:08:01 +00:00
Weiming Zhao
7e22e3a1ea PR 18466: Fix ARM Pseudo Expansion
When expanding neon pseudo stores, we may miss the implicit uses of sub
regs, which may cause the post-RA scheduler to reorder instructions in a way
that breaks an anti-dependency.

For example:
  VST1d64QPseudo %R0<kill>, 16, %Q9_Q10, pred:14, pred:%noreg
  will be expanded to
    VST1d64Q %R0<kill>, 16, %D18, pred:14, pred:%noreg;

An instruction that defines %D20 may be scheduled before the store by
mistake.

This patch adds implicit uses for such cases. For the example above, it
emits:
  VST1d64Q %R0<kill>, 8, %D18, pred:14, pred:%noreg, %Q9_Q10<imp-use>

llvm-svn: 199282
2014-01-15 01:32:12 +00:00
Tim Northover
fae3207c66 ARM: correctly determine final tBX_LR in Thumb1 functions
The changes caused by folding an sp-adjustment into a "pop" previously
disrupted the forward search for the final real instruction in a
terminating block. This switches to a backward search (skipping debug
instrs).

This fixes PR18399.

Patch by Zhaoshi.

llvm-svn: 199266
2014-01-14 22:53:28 +00:00
Tim Northover
3f497bbb76 AArch64: don't try to handle [SU]MUL_LOHI nodes
We should set them to expand for now since there are no patterns
dealing with them. Actually, there are no instructions either so I
doubt they'll ever be acceptable.

llvm-svn: 199265
2014-01-14 22:53:22 +00:00
Rafael Espindola
9ae2f1aa3d Revert "[AArch64] Added vselect patterns with float and double types"
This reverts commit r199242.

It is causing CodeGen/AArch64/neon-bsl.ll to fail.

llvm-svn: 199248
2014-01-14 19:24:08 +00:00
Rafael Espindola
16efb0d1a0 Fix a low hanging use of hasRawTextSupport.
This also fixes the placement of the function label comment. It was being
placed next to the mips16 directive instead of next to the label.

llvm-svn: 199245
2014-01-14 18:57:12 +00:00
Ana Pazos
51fd756e4b [AArch64] Added vselect patterns with float and double types
llvm-svn: 199242
2014-01-14 18:45:48 +00:00
Zoran Jovanovic
618c7b08b2 Test case micromips-load-effective-address.s renamed to micromips-load-effective-address.ll and moved to test/CodeGen/Mips.
llvm-svn: 199221
2014-01-14 16:26:47 +00:00
Nico Rieck
2c6e6a7b2a Handle dllexport for global aliases
llvm-svn: 199219
2014-01-14 15:23:25 +00:00
Nico Rieck
964a13bb4e Decouple dllexport/dllimport from linkage
Representing dllexport/dllimport as distinct linkage types prevents using
these attributes on templates and inline functions.

Instead of introducing further mixed linkage types to include linkonce and
weak ODR, the old import/export linkage types are replaced with a new
separate visibility-like specifier:

  define available_externally dllimport void @f() {}
  @Var = dllexport global i32 1, align 4

Linkage for dllexported globals and functions is now equal to their linkage
without dllexport. Imported globals and functions must be either
declarations with external linkage, or definitions with
AvailableExternallyLinkage.

llvm-svn: 199218
2014-01-14 15:22:47 +00:00
Elena Demikhovsky
4e0fae8031 AVX-512: optimized scalar compare patterns
removed AVX512SI format, since it is similar to AVX512BI.

llvm-svn: 199217
2014-01-14 15:10:08 +00:00
Andrea Di Biagio
1b98eb21b1 [X86] Fix assertion failure caused by a wrong folding of vector shifts by immediate count.
This fixes a regression introduced by r198113.

Revision r198113 introduced an algorithm that tries to fold a vector shift
by immediate count into a build_vector if the input vector is a known vector
of constants.

However the algorithm only worked under the assumption that the input vector
type and the shift type are exactly the same.

This patch disables the folding of vector shift by immediate count if the
input vector type and the shift value type are not the same.

llvm-svn: 199213
2014-01-14 13:17:12 +00:00
Tim Northover
5e4dddf9d6 ARM: add constraint that RdLo != Rn != RdHi for v5 MLA insts.
llvm-svn: 199212
2014-01-14 13:05:47 +00:00
Nico Rieck
e8a579c6bc Revert "Decouple dllexport/dllimport from linkage"
Revert this for now until I fix an issue in Clang with it.

This reverts commit r199204.

llvm-svn: 199207
2014-01-14 12:38:32 +00:00
Nico Rieck
669650bf0c Revert "Handle dllexport for global aliases"
This reverts commit r199205.

llvm-svn: 199206
2014-01-14 12:36:54 +00:00
Nico Rieck
9bff21adae Handle dllexport for global aliases
llvm-svn: 199205
2014-01-14 11:55:40 +00:00
Nico Rieck
6203d44313 Decouple dllexport/dllimport from linkage
Representing dllexport/dllimport as distinct linkage types prevents using
these attributes on templates and inline functions.

Instead of introducing further mixed linkage types to include linkonce and
weak ODR, the old import/export linkage types are replaced with a new
separate visibility-like specifier:

  define available_externally dllimport void @f() {}
  @Var = dllexport global i32 1, align 4

Linkage for dllexported globals and functions is now equal to their linkage
without dllexport. Imported globals and functions must be either
declarations with external linkage, or definitions with
AvailableExternallyLinkage.

llvm-svn: 199204
2014-01-14 11:55:03 +00:00
Jakob Stoklund Olesen
45496cea60 Always let value types influence register classes.
When creating a virtual register for a def, the value type should be
used to pick the register class. If we only use the register class
constraint on the instruction, we might pick a too large register class.

Some registers can store values of different sizes. For example, the x86
xmm registers can hold f32, f64, and 128-bit vectors. The three
different value sizes are represented by register classes with identical
register sets: FR32, FR64, and VR128. These register classes have
different spill slot sizes, so it is important to use the right one.

The register class constraint on an instruction doesn't necessarily care
about the size of the value it's defining. The value type determines
that.

This fixes a problem where InstrEmitter was picking 32-bit register
classes for 64-bit values on SPARC.

llvm-svn: 199187
2014-01-14 06:18:38 +00:00
Mark Seaborn
76b70ff14a Fix llc to not reuse spill slots in functions that invoke setjmp()
We need to ensure that StackSlotColoring.cpp does not reuse stack
spill slots in functions that call "returns_twice" functions such as
setjmp(), otherwise this can lead to miscompiled code, because a stack
slot would be clobbered when it's still live.

This was already handled correctly for functions that call setjmp()
(though this wasn't covered by a test), but not for functions that
invoke setjmp().

We fix this by changing callsFunctionThatReturnsTwice() to check for
invoke instructions.

This fixes PR18244.
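
A rough sketch (invented names, written in later IR syntax than this commit) of
a function that invokes, rather than calls, a returns_twice function:

  declare i32 @setjmp(i8*) returns_twice
  declare i32 @__gxx_personality_v0(...)
  declare void @may_throw()

  define void @f(i8* %buf) personality i32 (...)* @__gxx_personality_v0 {
  entry:
    ; Spill slots live across the invoke must not be reused.
    %r = invoke i32 @setjmp(i8* %buf)
            to label %cont unwind label %lpad
  cont:
    call void @may_throw()
    ret void
  lpad:
    %lp = landingpad { i8*, i32 }
            cleanup
    ret void
  }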

llvm-svn: 199180
2014-01-14 04:20:01 +00:00
Juergen Ributzka
52e4b4d675 [DAG] Teach DAG to also reassociate vector operations
This commit teaches DAG to reassociate vector ops, which in turn enables
constant folding of vector op chains that appear later on during custom lowering
and DAG combine.

Reviewed by Andrea Di Biagio

llvm-svn: 199135
2014-01-13 20:51:35 +00:00
Weiming Zhao
04e2261e54 Fix PR 18369: [Thumbv8] asserts due to inconsistent CPSR liveness of IT blocks
The issue is caused when the Post-RA scheduler reorders a bundle instruction
(IT block). However, it only flips the CPSR liveness of the bundle instruction
and leaves the instructions inside the bundle unchanged, which causes inconsistency and crashes
Thumb2SizeReduction.cpp::ReduceMBB().

llvm-svn: 199127
2014-01-13 18:47:54 +00:00
Andrea Di Biagio
c159ef589c [AArch64] Fix assertion failure caused by an invalid comparison between APInt values.
APInt only knows how to compare values with the same BitWidth and asserts
in all other cases.

With this fix, function PerformORCombine does not use the APInt equality
operator if the APInt values returned by 'isConstantSplat' differ in BitWidth.
In that case they are different and no comparison is needed.

llvm-svn: 199119
2014-01-13 16:51:00 +00:00
Richard Sandiford
c891615944 [SystemZ] Flesh out stackrestore test (frame-11.ll)
...so that it does something vaguely sensible.

llvm-svn: 199117
2014-01-13 15:44:44 +00:00
Richard Sandiford
70c8cbd696 [SystemZ] Improve risbg-01.ll test
The old mask in f24 wasn't well chosen because the lshr would always be zero.
CodeGen didn't detect this but InstCombine would.  The new mask ensures
that both shifts are needed.

f26 is specifically testing for a wrap-around mask.  The AND can be applied
to just the shift left, either before or after the shift.  Again, CodeGen
kept it in the original form but InstCombine would mask after the shift
instead.  The exact choice of NILF isn't important for the test so I just
dropped it and kept the rotate.

llvm-svn: 199115
2014-01-13 15:40:25 +00:00
Richard Sandiford
add53b9fe0 [SystemZ] Optimize (sext (ashr (shl ...), ...))
...into (ashr (shl (anyext X), ...), ...), which requires one fewer
instruction.  The (anyext X) can sometimes be simplified too.

I didn't do this in DAGCombiner because widening shifts isn't a win
on all targets.
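
For illustration (not part of the commit), the pattern looks roughly like this
at the IR level, keeping the low 24 bits of %x sign-extended to 64 bits:

  define i64 @sext_of_ashr_shl(i32 %x) {
    %shl  = shl i32 %x, 8
    %ashr = ashr i32 %shl, 8
    %ext  = sext i32 %ashr to i64
    ret i64 %ext
  }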

llvm-svn: 199114
2014-01-13 15:17:53 +00:00
Tim Northover
e33f96dc43 ARM: add test for r199108. Oops.
rdar://problem/15800156

llvm-svn: 199109
2014-01-13 14:20:25 +00:00
Elena Demikhovsky
e635ade802 AVX-512: Embedded Rounding Control - encoding and printing
Changed intrinsics for vrcp14/vrcp28 vrsqrt14/vrsqrt28 - aligned with GCC.

llvm-svn: 199102
2014-01-13 12:55:03 +00:00
Kevin Qin
5aa184711d [AArch64 NEON] Add missing patterns for bitcast from or to v1f64
llvm-svn: 199070
2014-01-13 01:58:38 +00:00
Kevin Qin
9b14d101ea [AArch64 NEON] Add more scenarios to use perm instructions when lowering shuffle_vector
This patch covered 2 more scenarios:

1.  Two operands of shuffle_vector are the same, like
%shuffle.i = shufflevector <8 x i8> %a, <8 x i8> %a, <8 x i32> <i32 0, i32 2, i32 4, i32 6, i32 8, i32 10, i32 12, i32 14>

2. One of operands is undef, like
%shuffle.i = shufflevector <8 x i8> %a, <8 x i8> undef, <8 x i32> <i32 0, i32 2, i32 4, i32 6, i32 8, i32 10, i32 12, i32 14>

After this patch, perm instructions will have chance to be emitted instead of lots of INS.

llvm-svn: 199069
2014-01-13 01:56:29 +00:00
Jakob Stoklund Olesen
026885712c Handle bundled terminators in isBlockOnlyReachableByFallthrough.
Targets like SPARC and MIPS have delay slots and normally bundle the
delay slot instruction with the corresponding terminator.

Teach isBlockOnlyReachableByFallthrough to find any MBB operands on
bundled terminators so SPARC doesn't need to specialize this function.

llvm-svn: 199061
2014-01-12 19:24:08 +00:00
Nico Rieck
ddf7787f91 Make test independent of scheduling
llvm-svn: 199055
2014-01-12 15:57:38 +00:00
NAKAMURA Takumi
75b448c67e llvm/test/CodeGen/X86/shl_undef.ll: Tweak to satisfy r199050.
Use intel syntax, or "shl" might hit "pushl".

llvm-svn: 199051
2014-01-12 14:41:41 +00:00
Nico Rieck
884ee061f2 Fix non-deterministic SDNodeOrder-dependent codegen
Reset SelectionDAGBuilder's SDNodeOrder to ensure deterministic code
generation.

llvm-svn: 199050
2014-01-12 14:09:17 +00:00
Jakob Stoklund Olesen
3dd52f1fbd The SPARCv9 ABI returns a float in %f0.
This is different from the argument passing convention which puts the
first float argument in %f1.

With this patch, all returned floats are treated as if the 'inreg' flag
were set. This means multiple float return values get packed in %f0,
%f1, %f2, ...

Note that when returning a struct in registers, clang will set the
'inreg' flag on the return value, so that behavior is unchanged. This
also happens when returning a float _Complex.
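
A made-up example of multiple float return values that are now packed into
%f0, %f1:

  define { float, float } @two_floats(float %a, float %b) {
    %t0 = insertvalue { float, float } undef, float %a, 0
    %t1 = insertvalue { float, float } %t0, float %b, 1
    ret { float, float } %t1
  }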

llvm-svn: 199028
2014-01-12 04:13:17 +00:00
Venkatraman Govindaraju
406e85c8e3 [Sparc] Add missing processor types: v7 and niagara
llvm-svn: 199024
2014-01-11 23:56:13 +00:00
Benjamin Kramer
002aed9cb3 Fix broken CHECK lines.
llvm-svn: 199016
2014-01-11 21:06:00 +00:00
Venkatraman Govindaraju
db6b1cbac8 [Sparc] Bundle instruction with delay slot and its filler. Now, we can use -verify-machineinstrs with the SPARC backend.
llvm-svn: 199014
2014-01-11 19:38:03 +00:00
NAKAMURA Takumi
a62eb4ba36 llvm/test/CodeGen/X86/anyregcc.ll: Add explicit -mtriple=x86_64-unknown-unknown.
XMM registers are actually spilled when targeting Win64.

llvm-svn: 198999
2014-01-11 09:23:44 +00:00
Juergen Ributzka
3673ce83a4 [anyregcc] Fix callee-save mask for anyregcc
Use separate callee-save masks for XMM and YMM registers for anyregcc on X86 and
select the proper mask depending on the target cpu we compile for.

llvm-svn: 198985
2014-01-11 01:00:27 +00:00
Artyom Skrobov
759f6384e9 Must not produce Tag_CPU_arch_profile for pre-ARMv7 cores (e.g. cortex-m0)
llvm-svn: 198945
2014-01-10 16:42:55 +00:00
Kristof Beyls
082ab7548c Make sure -use-init-array has the intended effect on all AArch64 ELF targets, not just linux.
llvm-svn: 198937
2014-01-10 13:41:49 +00:00
Venkatraman Govindaraju
b0841deda2 [Sparc] Emit retl/ret instead of jmp instruction. It improves the readability of the assembly generated.
llvm-svn: 198910
2014-01-10 02:55:27 +00:00
Richard Sandiford
9a2e030d71 [SystemZ] Fix RNSBG bug introduced by r197802
The zext handling added in r197802 wasn't right for RNSBG.  This patch
restricts it to ROSBG, RXSBG and RISBG.  (The tests for RISBG were added
in r197802 since RISBG was the motivating example.)

llvm-svn: 198862
2014-01-09 11:28:53 +00:00
Richard Sandiford
62f3dcdad8 Handle masked rotate amounts
At the moment we expect rotates to have the form:

   (or (shl X, Y), (shr X, Z))

where Y == bitsize(X) - Z or Z == bitsize(X) - Y.  This form means that
the (or ...) is undefined for Y == 0 or Z == 0.  This undefinedness can
be avoided by using Y == (C * bitsize(X) - Z) & (bitsize(X) - 1) or
Z == (C * bitsize(X) - Y) & (bitsize(X) - 1) for any integer C
(including 0, the most natural choice).
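
As an invented illustration of the masked form, with C = 1 and a 32-bit X:

  define i32 @rotl_masked(i32 %x, i32 %y) {
    %sub = sub i32 32, %y
    %z   = and i32 %sub, 31       ; Z == (32 - Y) & 31, defined even for Y == 0
    %shl = shl i32 %x, %y
    %shr = lshr i32 %x, %z
    %or  = or i32 %shl, %shr
    ret i32 %or
  }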

llvm-svn: 198861
2014-01-09 10:56:42 +00:00
Richard Sandiford
e9b9f0fa27 Match the InstCombine form of rotates by X+C
InstCombine converts (sub 32, (add X, C)) into (sub 32-C, X),
so a rotate left of a 32-bit Y by X+C could appear as either:

   (or (shl Y, (add X, C)), (shr Y, (sub 32, (add X, C))))

without InstCombine or:

   (or (shl Y, (add X, C)), (shr Y, (sub 32-C, X)))

with it.

We already matched the first form.  This patch handles the second too.

llvm-svn: 198860
2014-01-09 10:49:40 +00:00
Andrew Trick
304bd525cb llvm.experimental.stackmap: fix encoding of large constants.
In the stackmap format we advertise the constant field as signed.
However, we were determining whether to promote to a 64-bit constant
pool based on an unsigned comparison.

This fix allows -1 to be encoded as a small constant.

llvm-svn: 198816
2014-01-09 00:22:31 +00:00
Hal Finkel
c5d78d63ff Conservatively handle multiple MMOs in MIsNeedChainEdge
MIsNeedChainEdge, which is used by -enable-aa-sched-mi (AA in misched), had an
llvm_unreachable when -enable-aa-sched-mi is enabled and we reach an
instruction with multiple MMOs. Instead, return a conservative answer. This
allows testing -enable-aa-sched-mi on x86.

Also, this moves the check above the isUnsafeMemoryObject checks.
isUnsafeMemoryObject is currently correct only for instructions with one MMO
(as noted in the comment in isUnsafeMemoryObject):

  // We purposefully do no check for hasOneMemOperand() here
  // in hope to trigger an assert downstream in order to
  // finish implementation.

The problem with this is that, had the candidate edge passed the
"!MIa->mayStore() && !MIb->mayStore()" check, the hoped-for assert would never
happen (which could, in theory, lead to incorrect behavior if one of these
secondary MMOs was volatile, for example).

llvm-svn: 198795
2014-01-08 21:52:02 +00:00
Andrea Di Biagio
946f61e227 Teach the DAGCombiner how to fold 'vselect' dag nodes according
to the following two rules:
  1) fold (vselect (build_vector AllOnes), A, B) -> A
  2) fold (vselect (build_vector AllZeros), A, B) -> B
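
A trivial made-up example of rule 1 (an all-ones condition selects A outright):

  define <4 x i32> @select_all_true(<4 x i32> %a, <4 x i32> %b) {
    %r = select <4 x i1> <i1 true, i1 true, i1 true, i1 true>,
                <4 x i32> %a, <4 x i32> %b
    ret <4 x i32> %r   ; folds to %a
  }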

llvm-svn: 198777
2014-01-08 18:33:04 +00:00
David Woodhouse
e757b998ec [x86] Disambiguate RET[QL] and fix aliases for 16-bit mode
I couldn't see how to do this sanely without splitting RETQ from RETL.

Eric says: "sad about the inability to roundtrip them now, but...".
I have no idea what that means, but perhaps it wants preserving in the
commit comment.

llvm-svn: 198756
2014-01-08 12:58:07 +00:00
Elena Demikhovsky
1ecccf9364 AVX-512: Added more intrinsics for pmin/pmax, pabs, blend, pmuldq.
llvm-svn: 198745
2014-01-08 10:54:22 +00:00
Iain Sandoe
99b3d76928 [patch] Adjust behavior of FDE cross-section relocs for targets that don't support abs-differences.
Modern versions of OSX/Darwin's ld (ld64 > 97.17) have an optimisation present that allows the back end to omit relocations (and replace them with an absolute difference) for some FDE text section refs.

This patch allows a backend to opt-in to this behaviour by setting "DwarfFDESymbolsUseAbsDiff".  At present, this is only enabled for modern x86 OSX ports.

test changes by David Fang.

llvm-svn: 198744
2014-01-08 10:22:54 +00:00
Kevin Qin
fd4df4bd7a [AArch64 NEON] Fix generating incorrect value type of NEON_VDUPLANE
when lowering build_vector if the result value type mismatches the operand
value type.

llvm-svn: 198743
2014-01-08 08:06:14 +00:00
Rafael Espindola
492c4c1202 Don't assert with private type info variables.
With the gnu objc runtime private strings are used. Since we only need to
produce a unique label, the fix is to just drop the asserts.

llvm-svn: 198701
2014-01-07 19:38:47 +00:00
Hao Liu
2324cab69b [AArch64]Add support to spill/fill D tuples such as DPair/DTriple/DQuad. There are no test cases for the D tuple as the original test cases are too large. As the spill/fill of the D tuple is similar to the Q tuple, the correctness can be guaranteed.
llvm-svn: 198684
2014-01-07 10:50:43 +00:00
Hao Liu
bbb265cfee [AArch64]Add support to copy D tuples such as DPair/DTriple/DQuad and Q tuples such as QPair/QTriple/QQuad. There is no test case for D tuple as the original test cases are too large. As the copy of the D tuple is similar to the Q tuple, the correctness can be guaranteed.
llvm-svn: 198682
2014-01-07 10:00:03 +00:00
Andrew Trick
d008071ce2 Fix for PR18396: Assertion: MO->isDead "Cannot fold physreg def".
InlineSpiller::foldMemoryOperand needs to handle undef call operands.

llvm-svn: 198679
2014-01-07 07:31:10 +00:00
Kevin Qin
cb6af368ab [AArch64 NEON] Fixed incorrect immediate used in BIC instruction.
llvm-svn: 198675
2014-01-07 05:10:47 +00:00
Saleem Abdulrasool
9005119e13 ARM IAS: improve .eabi_attribute handling
Parse tag names as well as expressions.  The former is part of the
specification, the latter is for improved compatibility with the GNU assembler.
Fix attribute value handling to be conformant to the specification.

llvm-svn: 198662
2014-01-07 02:28:42 +00:00
Tim Northover
e0e3fee19b ARM MachO: sort out isTargetDarwin/isTargetIOS/... checks.
The ARM backend has been using most of the MachO related subtarget
checks almost interchangeably, and since the only target it's had to
run on has been IOS (which is all three of MachO, Darwin and IOS) it's
worked out OK so far.

But we'd like to support embedded targets under the "*-*-none-macho"
triple, which means everything starts falling apart and inconsistent
behaviours emerge.

This patch should pick a reasonably sensible set of behaviours for the
new triple (and any others that come along, with luck). Some choices
were debatable (notably FP == r7 or r11), but we can revisit those
later when deficiencies become apparent.

llvm-svn: 198617
2014-01-06 14:28:05 +00:00
Robert Lytton
3d4bb0d4e4 XCore Target: correct callee save register spilling when callsUnwindInit is true.
llvm-svn: 198616
2014-01-06 14:21:12 +00:00
Robert Lytton
69e4de31bf XCore target: Lower EH_RETURN
llvm-svn: 198615
2014-01-06 14:21:07 +00:00
Robert Lytton
2c10e542b0 XCore target: Lower FRAME_TO_ARGS_OFFSET
This requires knowledge of the stack size, which is not known until
the frame is complete, hence the need for the XCoreFTAOElim pass
which lowers the XCoreISD::FRAME_TO_ARGS_OFFSET instruction into its
final form.

llvm-svn: 198614
2014-01-06 14:21:00 +00:00
Robert Lytton
9059c1d570 XCore target: Lower RETURNADDR
Only handles a depth of zero (the same as FRAMEADDR)

llvm-svn: 198613
2014-01-06 14:20:53 +00:00
Robert Lytton
33b5209ddc XCore target: Optimise entsp / retsp selection
llvm-svn: 198612
2014-01-06 14:20:47 +00:00
Robert Lytton
6e7ff61390 XCore target: fix handling of unsized global arrays in large code model
llvm-svn: 198609
2014-01-06 14:20:32 +00:00
Tim Northover
3756e37a97 ARM: keep special non-AEABIness of "-darwin-eabi" triples for now
Longer term, we want to move users to "*-*-*-macho" for embedded work, but for
now people are relying on the last thing we told them, which is unfortunately
"*-*-darwin-eabi".

rdar://problem/15703934

llvm-svn: 198602
2014-01-06 12:00:44 +00:00
Elena Demikhovsky
591c25725f AVX-512: added intrinsic vcvtpd2ps (with rounding mode and without)
llvm-svn: 198593
2014-01-06 08:45:54 +00:00
Kevin Qin
162c44e325 [AArch64 NEON] Fix invalid constant used in vselect condition.
There is a wrong assumption that the vector element type and the
type of each ConstantSDNode in the build_vector were the same.
However, when promoting the integer operand of a legally typed
build_vector, the operand type and the vector element type do not
need to be the same
(See method 'DAGTypeLegalizer::PromoteIntOp_BUILD_VECTOR' in
LegalizeIntegerTypes.cpp).

  In the AArch64 backend, the following dag sequence:

  C0: i1 = Constant<0>
  C1: i1 = Constant<-1>
  V: v8i1 = BUILD_VECTOR C1, C1, C0, C0, C0, C0, C0, C0

  is type-legalized into:

  NewC0: i32 = Constant<0>
  NewC1: i32 = Constant<1>
  V: v8i8 = BUILD_VECTOR NewC1, NewC1, NewC0, NewC0, NewC0, NewC0, NewC0, NewC0

Force a getZeroExtend to VTBits to ensure that the new constant
is correct.

llvm-svn: 198582
2014-01-06 02:26:10 +00:00
Bill Wendling
5cc1d930e7 Remove a failing test to get the buildbots back to green.
llvm-svn: 198578
2014-01-06 00:43:09 +00:00
Bill Wendling
6635bd4679 Try to fix s390x build bot.
llvm-svn: 198577
2014-01-06 00:43:04 +00:00
Elena Demikhovsky
935f81172d AVX-512: changed property name from "neverHasSideEffects=1" to "hasSideEffects=0", added this property to VMOVSS/VMOVSD;
Optimized a truncate pattern.

llvm-svn: 198562
2014-01-05 14:21:07 +00:00
Elena Demikhovsky
034a667c24 AVX-512: Added more intrinsics for convert and min/max.
Removed vzeroupper from AVX-512 mode - our optimization guide does not recommend inserting vzeroupper at all.

llvm-svn: 198557
2014-01-05 10:46:09 +00:00
Bill Wendling
e236f0d36e Attempt to fix buildbots by XFAILing some architectures.
llvm-svn: 198537
2014-01-05 03:10:56 +00:00
Bill Wendling
be9af41475 Emit an error message if the value passed to __builtin_returnaddress isn't a constant
__builtin_returnaddress requires that the value passed in be a constant.
However, at -O0 even a constant expression may not be converted to a constant.
Emit an error message instead of crashing.

llvm-svn: 198531
2014-01-05 01:47:20 +00:00
Venkatraman Govindaraju
bfcef24b91 [SparcV9]: Implement RETURNADDR and FRAMEADDR lowering in SPARC64.
Fixes PR18356.

llvm-svn: 198480
2014-01-04 07:17:21 +00:00
Quentin Colombet
23080225fa [RegAlloc] Make tryInstructionSplit less aggressive.
The greedy register allocator tries to split a live-range around each
instruction where it is used or defined to relax the constraints on the entire
live-range (this is a last chance split before falling back to spill).
The goal is to have a big live-range that is unconstrained (i.e., that can use
the largest legal register class) and several small local live-range that carry
the constraints implied by each instruction.
E.g.,
Let csti be the constraints on operation i.

V1=
op1 V1(cst1)
op2 V1(cst2)

V1 live-range is constrained on the intersection of cst1 and cst2.

tryInstructionSplit relaxes those constraints by aggressively splitting each
def/use point:
V1=
V2 = V1
V3 = V2
op1 V3(cst1)
V4 = V2
op2 V4(cst2)

Because of how the coalescer infrastructure works, each new variable (V3, V4)
that is alive at the same time as V1 (or its copy, here V2) interfere with V1.
Thus, we end up with an uncoalescable copy for each split point.

To make tryInstructionSplit less aggressive, we check if the split point
actually relaxes the constraints on the whole live-range. If it does not, we do
not insert it.
Indeed, it will not help the global allocation problem:
- V1 will have the same constraints.
- V1 will have the same interference + possibly the newly added split variable
  VS.
- VS will produce an uncoalesceable copy if alive at the same time as V1.

<rdar://problem/15570057>

llvm-svn: 198369
2014-01-02 22:47:22 +00:00
Rafael Espindola
4b38156778 Make the ARM ABI selectable via SubtargetFeature.
This patch makes it possible to select the ABI with -mattr. It will be used to
forward clang's -target-abi option to llvm's CodeGen.

llvm-svn: 198304
2014-01-02 13:40:08 +00:00
Venkatraman Govindaraju
cb22135b23 [Sparc] Handle atomic loads/stores in sparc backend.
llvm-svn: 198286
2014-01-01 22:11:54 +00:00
Venkatraman Govindaraju
e8745ffca1 [SparcV9]: Custom lower UMULO/SMULO so that the arguments are sent to __multi3() in the correct order.
llvm-svn: 198281
2014-01-01 20:22:45 +00:00
Venkatraman Govindaraju
2fc6090f42 [SparcV9]: Use SRL instead of SLL to clear the top 32 bits in ctpop:i32. SLL does not clear the top 32 bits, only SRL does.
llvm-svn: 198280
2014-01-01 19:00:10 +00:00
Elena Demikhovsky
7174584583 AVX-512: Added intrinsics for vcvt, vcvtt, vrndscale, vcmp
Printing rounding control.
Encoding for EVEX_RC (rounding control).

llvm-svn: 198277
2014-01-01 15:12:34 +00:00
Jiangning Liu
583b8a7116 For AArch64 Neon, simplify scalar dup by lane0 for fp.
llvm-svn: 198194
2013-12-30 02:44:35 +00:00
Hao Liu
ab32d54fad [AArch64]Add code to spill/fill Q register tuples such as QPair/QTriple/QQuad.
llvm-svn: 198193
2013-12-30 02:38:12 +00:00
Hao Liu
8bef865160 [AArch64]Can't select shift left 0 of type v1i64
llvm-svn: 198192
2013-12-30 02:12:46 +00:00
Kevin Qin
cbb0be4bee Fix a bug in DAGcombiner about zero-extend after setcc.
For the AArch64 backend, if DAGCombiner sees "sext(setcc)", it will
combine them together into a single setcc with an extended value type.
Then if it sees "zext(setcc)", it assumes the setcc is Vxi1 and tries to
create "(and (vsetcc), (1, 1, ...))". When the setcc isn't Vxi1,
DAGCombiner will create a wrong node and emit wrong code.
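
As a rough, invented illustration, IR of this shape (the same compare feeding
both a sext and a zext) is the kind of input that could hit the bad combine:

  define void @f(<4 x i16> %a, <4 x i16> %b, <4 x i16>* %ps, <4 x i16>* %pz) {
    %cmp = icmp eq <4 x i16> %a, %b
    %s = sext <4 x i1> %cmp to <4 x i16>
    %z = zext <4 x i1> %cmp to <4 x i16>
    store <4 x i16> %s, <4 x i16>* %ps
    store <4 x i16> %z, <4 x i16>* %pz
    ret void
  }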

llvm-svn: 198190
2013-12-30 02:05:13 +00:00
Hao Liu
e8d49c2088 [AArch64]Fix the problem that we can't select mul of v1i64/v2i64 types.
E.g. Can't select such IR:
     %tmp = mul <2 x i64> %a, %b

llvm-svn: 198188
2013-12-30 01:38:41 +00:00
Bill Wendling
984fb2bf17 Un-XFAILify some tests which are now passing.
llvm-svn: 198184
2013-12-29 23:09:14 +00:00
Venkatraman Govindaraju
451c278cbc [SparcV9] Use separate instruction patterns for 64 bit arithmetic instructions instead of reusing 32 bit instruction patterns.
This is done to avoid spilling the result of the 64-bit instructions to a 4-byte slot.

llvm-svn: 198157
2013-12-29 07:15:09 +00:00
Venkatraman Govindaraju
d46a491054 [SparcV9] For codegen generated library calls that return float, set inreg flag manually in LowerCall().
This makes the sparc backend generate Sparc64 ABI compliant code.

llvm-svn: 198149
2013-12-29 04:27:21 +00:00
Venkatraman Govindaraju
05510dd426 [SparcV9]: Implement lowering of long double (fp128) arguments in Sparc64 ABI.
Also, pass fp128 arguments to varargs through integer registers if necessary.

llvm-svn: 198145
2013-12-29 01:20:36 +00:00
Andrew Trick
ed2d925c84 New machine model for cortex-a9. Schedule for resources and latency.
Schedule more conservatively to account for stalls on floating point
resources and latency. Use the AGU resource to model latency stalls
since it's shared between FP and LD/ST instructions. This might not be
completely accurate but should work well in practice.

llvm-svn: 198125
2013-12-28 21:57:05 +00:00
NAKAMURA Takumi
a213dd2dcc llvm/test/CodeGen/X86/vselect.ll: Unbreak Windows x64 targets to add -mtriple=x86_64-unknown-unknown.
llvm-svn: 198114
2013-12-28 13:04:29 +00:00
Andrea Di Biagio
b2f4969e98 [X86] Teach the backend how to fold target specific dag node for packed
vector shift by immedate count (VSHLI/VSRLI/VSRAI) into a build_vector when
the vector in input to the shift is a build_vector of all constants or UNDEFs.

Target specific nodes for packed shifts by immediate count are in
general introduced by function 'getTargetVShiftByConstNode' (in
X86ISelLowering.cpp) when lowering shift operations, SSE/AVX immediate
shift intrinsics and (only in very few cases) SIGN_EXTEND_INREG dag
nodes.

This patch adds extra rules for simplifying vector shifts inside
function 'getTargetVShiftByConstNode'.

Added file test/CodeGen/X86/vec_shift5.ll to verify that packed
shifts by immediate are correctly folded into a build_vector when the
input vector to the shift dag node is a vector of constants or undefs.
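
For illustration (an invented test, not the actual vec_shift5.ll content), a
packed shift-by-immediate whose input is all constants folds completely:

  declare <4 x i32> @llvm.x86.sse2.pslli.d(<4 x i32>, i32)

  define <4 x i32> @fold_pslli() {
    ; Folds to the constant vector <8, 16, 24, 32>.
    %r = call <4 x i32> @llvm.x86.sse2.pslli.d(<4 x i32> <i32 1, i32 2, i32 3, i32 4>, i32 3)
    ret <4 x i32> %r
  }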

llvm-svn: 198113
2013-12-28 11:11:52 +00:00
Andrea Di Biagio
86fc6e8bd5 Teach DAGCombiner how to fold a SIGN_EXTEND_INREG of a BUILD_VECTOR of
ConstantSDNodes (or UNDEFs) into a simple BUILD_VECTOR.

For example, given the following sequence of dag nodes:

  i32 C = Constant<1>
  v4i32 V = BUILD_VECTOR C, C, C, C
  v4i32 Result = SIGN_EXTEND_INREG V, ValueType:v4i1

The SIGN_EXTEND_INREG node can be folded into a build_vector since
the vector in input is a BUILD_VECTOR of constants.

The optimized sequence is:

  i32 C = Constant<-1>
  v4i32 Result = BUILD_VECTOR C, C, C, C

llvm-svn: 198084
2013-12-27 20:20:28 +00:00
Venkatraman Govindaraju
8c2d10768d [Sparc] Lower MachineInstr to MC and print assembly using MCInstPrinter.
llvm-svn: 198030
2013-12-26 01:49:59 +00:00
Simon Atanasyan
f306a50db4 [Mips] Do not take into account the 'use-soft-float' attribute's value when
considering whether to generate stubs for mips16 hard-float mode.

The patch reviewed by Reed Kotler.

llvm-svn: 198019
2013-12-25 17:00:27 +00:00
Hao Liu
8ed49e0c42 [AArch64]Fix a problem that the register order of fmls/fmla by element is incorrect.
E.g. the codegen result is 
     fmls v1.2s, v0.2s, v2.s[3]
which is expected to be
     fmls v0.2s, v1.2s, v2.s[3]

llvm-svn: 198001
2013-12-25 07:12:34 +00:00
Jiangning Liu
ca1d69d4c2 Add missing pattern matches to support ACLE intrinsics of AArch64 NEON.
llvm-svn: 197993
2013-12-25 01:22:51 +00:00
Richard Sandiford
99ae48f5bb [SystemZ] Use interlocked-access 1 instructions for CodeGen
...namely LOAD AND ADD, LOAD AND AND, LOAD AND OR and LOAD AND EXCLUSIVE OR.
LOAD AND ADD LOGICAL isn't really separately useful for LLVM.

I'll look at reusing the CC results in the new year.
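
A minimal made-up example of an operation that can now use LOAD AND ADD (laa)
directly:

  define i32 @atomic_add(i32* %p, i32 %v) {
    %old = atomicrmw add i32* %p, i32 %v seq_cst
    ret i32 %old
  }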

llvm-svn: 197985
2013-12-24 15:18:04 +00:00
Elena Demikhovsky
2d23dc9650 AVX-512: fixed some patterns for MVT::i1
llvm-svn: 197981
2013-12-24 14:24:07 +00:00
Hao Liu
8ef969c4a0 [AArch64]Add patterns to match normal shift nodes: shl, sra and srl.
llvm-svn: 197969
2013-12-24 09:00:21 +00:00
Kevin Qin
3993f1cd71 [AArch64 NEON] Fix a bug when lowering BUILD_VECTOR.
DAG.getVectorShuffle() doesn't always return a vector_shuffle node.
If the mask is the exact sequence of its operand (for example, operand_0
is v8i8, and the mask is 0, 1, 2, 3, 4, 5, 6, 7), it will directly
return that operand. So a check is added here.

llvm-svn: 197967
2013-12-24 08:16:06 +00:00
Kevin Qin
8f86911897 [AArch64 NEON] Fix a pattern match failure with NEON_VDUP.
This failure was caused by an improper condition when lowering shuffle_vector
to scalar_to_vector. After this patch, NEON_VDUP with v1i64 will not
be generated.

llvm-svn: 197966
2013-12-24 08:11:47 +00:00
Ana Pazos
85f191fc73 [AArch64] Check fmul node single use in fused multiply patterns
Check for single use of fmul node in fused multiply patterns
to allow generation of fused multiply add/sub instructions.
Otherwise the fmul operation ends up being repeated more than
once, which does not help performance on targets with
only one MAC unit, as for example cortex-a53.

llvm-svn: 197929
2013-12-24 00:47:29 +00:00
Ana Pazos
8821a9ef6b [AArch64 NEON] Fixed fused multiply negate add/sub patterns
The correct pattern matching should be:

- fnmadd is (-Ra) + (-Rn)*Rm  which should be matched as:

  fma (fneg node:$Rn),  node:$Rm, (fneg node:$Ra) and as

  (f32 (fsub (f32 (fneg FPR32:$Ra)), (f32 (fmul FPR32:$Rn, FPR32:$Rm))))

- fnmsub is (-Ra) + Rn*Rm which should be matched as

  fma node:$Rn,  node:$Rm, (fneg node:$Ra) and as

  (f32 (fsub (f32 (fmul FPR32:$Rn, FPR32:$Rm)), FPR32:$Ra))))

llvm-svn: 197928
2013-12-24 00:40:10 +00:00
Hao Liu
3ae1e13884 [AArch64]The compare to zero intrinsics should be implemented by 'icmp/fcmp' and 'sext' not 'zext'. Modify the test cases.
llvm-svn: 197897
2013-12-23 02:42:10 +00:00
Elena Demikhovsky
39275c48ca AVX512: SETCC returns i1 for AVX-512 and i8 for all others
llvm-svn: 197876
2013-12-22 10:13:18 +00:00
Roman Divacky
513296cd04 Implement initial-exec TLS for PPC32.
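
An illustrative (made-up) example of a thread-local access using the
initial-exec model:

  @tls_var = external thread_local(initialexec) global i32

  define i32 @get_tls_var() {
    %v = load i32, i32* @tls_var
    ret i32 %v
  }
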
llvm-svn: 197824
2013-12-20 18:08:54 +00:00
Richard Sandiford
8daaabe4c3 [SystemZ] Optimize comparisons with truncated extended loads
If the extension of a loaded value is compared against zero and used in
other arithmetic, InstCombine will change the comparison to use the
unextended load.  It's also possible that the comparison could be against
the unextended load from the outset.

In DAG form this becomes a truncation of an extending load.  We want to
strip the truncation if possible so that we can use load-and-test instructions.

llvm-svn: 197804
2013-12-20 11:56:02 +00:00
Richard Sandiford
48a0b2f8e3 [SystemZ] Extend RISBG optimization
The handling of ANY_EXTEND and ZERO_EXTEND was too strict.  In this context
we can treat ZERO_EXTEND in much the same way as an AND and then also handle
outermost ZERO_EXTENDs.

I couldn't find a test that benefited from the ANY_EXTEND change, but it's
more obvious to write it this way once SIGN_EXTEND and ZERO_EXTEND are
handled differently.

llvm-svn: 197802
2013-12-20 11:49:48 +00:00
Tom Stellard
b39ac07c09 R600: Allow ftrunc
v2: Add ftrunc->TRUNC pattern instead of replacing int_AMDGPU_trunc
v3: move ftrunc pattern next to TRUNC definition, it's available since R600

Patch By: Jan Vesely

Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
llvm-svn: 197783
2013-12-20 05:11:55 +00:00
Quentin Colombet
884367d931 [X86][fast-isel] Fix select lowering.
The condition in selects is supposed to be i1.
Make sure we are just reading the least significant bit
of the 8-bit-wide value to match this constraint.

<rdar://problem/15651765>

llvm-svn: 197712
2013-12-19 18:32:04 +00:00
Josh Magee
f3c5790260 Unbreak ARM buildbots after r197653 by forcing the target triple on this test.
llvm-svn: 197709
2013-12-19 18:14:42 +00:00
Rafael Espindola
64a77ceb5f Add a triple so that this passes on OS X.
I am surprised I am the first one to notice this.

llvm-svn: 197689
2013-12-19 16:06:33 +00:00
NAKAMURA Takumi
e67a3fe0ef Add REQUIRES:asserts to 3 tests in llvm/test/CodeGen/R600 added in r192212.
They are failing in assertions.

llvm-svn: 197669
2013-12-19 10:41:12 +00:00
Matt Arsenault
e64331a159 R600/SI: Make private pointers be 32-bit.
Different sized address spaces should theoretically work
most of the time now, and since 64-bit add is currently
disabled, using more 32-bit pointers fixes some cases.

llvm-svn: 197659
2013-12-19 05:32:55 +00:00
Josh Magee
86d29cffa7 [stackprotector] Use analysis from the StackProtector pass for stack layout in PEI and LocalStackSlot passes.
This changes the MachineFrameInfo API to use the new SSPLayoutKind information
produced by the StackProtector pass (instead of a boolean flag) and updates a
few pass dependencies (to preserve the SSP analysis).

The stack layout follows the same approach used prior to this change - i.e.,
only LargeArray stack objects will be placed near the canary and everything
else will be laid out normally.  After this change, structures containing large
arrays will also be placed near the canary - a case previously missed by the
old implementation.

Out of tree targets will need to update their usage of
MachineFrameInfo::CreateStackObject to remove the MayNeedSP argument. 

The next patch will implement the rules for sspstrong and sspreq.  The end goal
is to support ssp-strong stack layout rules.

WIP.

Differential Revision: http://llvm-reviews.chandlerc.com/D2158

llvm-svn: 197653
2013-12-19 03:17:11 +00:00
Reid Kleckner
f795c3e4a9 Begin adding docs and IR-level support for the inalloca attribute
The inalloca attribute is designed to support passing C++ objects by
value in the Microsoft C++ ABI.  It behaves the same as byval, except
that it always implies that the argument is in memory and that the bytes
are never copied.  This attribute allows the caller to take the address
of an outgoing argument's memory and execute arbitrary code to store
into it.

This patch adds basic IR support, docs, and verification.  It does not
attempt to implement any lowering or fix any possibly broken transforms.

When this patch lands, a complete description of this feature should
appear at http://llvm.org/docs/InAlloca.html .

Differential Revision: http://llvm-reviews.chandlerc.com/D2173

llvm-svn: 197645
2013-12-19 02:14:12 +00:00
Reed Kotler
012c0a0f79 Fix a problem with mips16 stubs when calls are transformed during
tail call optimization. Some more work may be needed for indirect
calls but this patch fixes the current regression in Prolangc++/trees.
S2 optimization as part of the general cleanup and optimization
of prolog and epilog was not saving S2 in this case and needed to.

llvm-svn: 197630
2013-12-18 23:57:48 +00:00
Andrew Trick
e73fd60399 Revert "Add -mcpu=z10 to SystemZ tests."
This reverts commit r197466.

The MachineCSE fix that required the -mcpu flag has been disabled
until more work can be done to fix downstream issues. Adding -mcpu
wasn't the right workaround anyway.

llvm-svn: 197624
2013-12-18 23:04:37 +00:00
Weiming Zhao
628bf03d65 [aarch32] fix bug 18268: Incorrect condition of vsel
Given vsel_cc, op1, op2: since vsel has no LE/LT forms, generating vsel for
such a selection requires inverting cc and swapping op1 and op2. To invert
cc, both the L/G and E bits should be flipped.
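
A standalone sketch of the equivalence being used (made-up helper names;
it holds for ordered inputs, and unordered (NaN) operands are handled by
the full condition-code inversion described above):

  #include <cassert>

  // vsel has no LT/LE form, so "x if a < b else y" is emitted as the GE
  // form with the two select operands swapped.
  float select_lt(float a, float b, float x, float y) { return (a < b) ? x : y; }
  float select_ge_swapped(float a, float b, float x, float y) { return (a >= b) ? y : x; }

  int main() {
    assert(select_lt(1.0f, 2.0f, 10.0f, 20.0f) == select_ge_swapped(1.0f, 2.0f, 10.0f, 20.0f));
    assert(select_lt(3.0f, 2.0f, 10.0f, 20.0f) == select_ge_swapped(3.0f, 2.0f, 10.0f, 20.0f));
    return 0;
  }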

llvm-svn: 197615
2013-12-18 22:25:17 +00:00
Rafael Espindola
6dc5fe883a Correctly handle the degenerated triple "thumb".
Fixes a crash in llc where some parts think the target is thumb and others think
it is ARM.

llvm-svn: 197607
2013-12-18 21:29:44 +00:00
Rafael Espindola
e1792e72e1 On ppc32-darwin, an i64 inside a structure can have 32-bit alignment.
Thanks to Iain Sandoe for testing this with the original gcc.

Clang was already getting this right.

llvm-svn: 197572
2013-12-18 14:35:37 +00:00
Tim Northover
d61d401a25 ARM: force soft-float ABI for tests depending on it.
This should fix the ARM bots.

llvm-svn: 197555
2013-12-18 09:58:06 +00:00
Tim Northover
fe4e45a5b0 ARM: set default float ABI based on triple.
Clang sets the float-abi target option manually, but no longer
annotates each function with its ABI. This can lead to confusing
mistmatch between "clang -emit-llvm | llc" and normal clang
invocations.

Besides which, gnueabihf actually *is* hard-float. Defaulting to soft
was just perverse.

llvm-svn: 197554
2013-12-18 09:27:33 +00:00
Kevin Qin
99ae282f19 [AArch64 NEON]Implement loading vector constants from the constant pool.
llvm-svn: 197551
2013-12-18 06:26:04 +00:00
Andrew Trick
68ff5cf488 Disabled subregister copy coalescing during MachineCSE.
This effectively backs out r197465 but leaves some of the general
fixes in place. Not all targets are ready to handle this feature. To
enable it, some infrastructure work is needed to better handle
register class constraints.

llvm-svn: 197514
2013-12-17 19:29:36 +00:00
Quentin Colombet
67a68c0b99 Add warning capabilities in LLVM.
This reapplies r197438 and fixes the link-time circular dependency between
IR and Support. The fix consists in moving the diagnostic support into IR.

The patch adds a new LLVMContext::diagnose that can be used to communicate to
the front-end, if any, that something of interest happened.
The diagnostics are supported by a new abstraction, the DiagnosticInfo class.
The base class contains the following information:
- The kind of the report: What this is about.
- The severity of the report: How bad this is.

This patch also adds 2 classes:
- DiagnosticInfoInlineAsm: For inline asm reporting. Basically, this diagnostic
will be used to switch to the new diagnostic API for LLVMContext::emitError.
- DiagnosticStackSize: For stack size reporting. Comes as a replacement of the
hard coded warning in PEI.

This patch also features dynamic diagnostic identifiers. In other words plugins
can use this infrastructure for their own diagnostics (for more details, see
getNextAvailablePluginDiagnosticKind).

This patch introduces a new DiagnosticHandlerTy and a new DiagnosticContext in
the LLVMContext that should be set by the front-end to be able to map these
diagnostics in its own system.

http://llvm-reviews.chandlerc.com/D2376
<rdar://problem/15515174>

llvm-svn: 197508
2013-12-17 17:47:22 +00:00
Duncan P. N. Exon Smith
8e76a18c61 Setting the CPU in the new vaargs test
Trying to fix buildbots after r197503 (test passes locally).

<rdar://problem/15627766>

llvm-svn: 197505
2013-12-17 16:20:37 +00:00
Duncan P. N. Exon Smith
85e7983ab3 Revert "Revert "Mark vastart_save_xmm_regs as changing EFLAGS""
This reverts commit r197481, recommiting r197469 with an extra fix.

The vastart_save_xmm_regs pseudo-instruction expands to a test and a
branch, so it modifies EFLAGS.  Mark it so, or else the scheduler might
place it in the middle of another test+branch.

This fixes a bug exposed by r192750, which changed the initial scheduler
to source-order as part of enabling the MI Scheduler for X86.

This re-commit changes the VASTART_SAVE_XMM_REGS custom inserter not to
try to save %flags, and adds a test that catches the bad behavior of
r197469.

<rdar://problem/15627766>

llvm-svn: 197503
2013-12-17 15:54:45 +00:00
Stepan Dyatkovskiy
ea2c7b6742 Fix for PR18045:
http://llvm.org/bugs/show_bug.cgi?id=18045

Short issue description:
For X86 machines with sse < sse4.1 we got failures for some
particular load/store vector sequences:

$ clang-trunk -m32 -O2 test-case.c
fatal error: error in backend: Cannot select: 0x4200920: v4i32,ch = load 0x41d6ab0, 0x4205850,
      0x41dcb10<LD16[getelementptr inbounds ([4 x i32]* @e, i32 0, i32 0)](align=4)> [ORD=82]
      [ID=58]
  0x4205850: i32 = X86ISD::Wrapper 0x41d5490 [ORD=26] [ID=43]
    0x41d5490: i32 = TargetGlobalAddress<[4 x i32]* @e> 0 [ORD=26] [ID=23]
  0x41dcb10: i32 = undef [ID=2]

The reason is that EltsFromConsecutiveLoads could emit such a load instruction
both before and after the legalize stage, even though this instruction is not
legal for machines with SSSE3 and lower.

The fix: in EltsFromConsecutiveLoads, if we have passed the legalize stage, we
check whether the nodes it emits are legal.

P.S.: If you get a failure between 12:00 and 22:00 (UTC-8), I may be
slow to respond, so feel free to revert this commit. Thanks!

llvm-svn: 197492
2013-12-17 12:07:33 +00:00
Elena Demikhovsky
241694a7bc AVX-512: Added implementation of CONCAT_VECTORS for v8i1 vectors (by Alexey Bader).
Added implementation of "truncate" from integer type (i64/i32/i16/i8) to i1.

llvm-svn: 197482
2013-12-17 08:33:15 +00:00
Duncan P. N. Exon Smith
3f31d678ca Revert "Mark vastart_save_xmm_regs as changing EFLAGS"
This reverts commit r197469.

The sanitizer and dragonegg buildbots are failing, I think because of
this change.  Reverting until I figure out why.

llvm-svn: 197481
2013-12-17 07:13:58 +00:00
Duncan P. N. Exon Smith
2cc99f0e39 Mark vastart_save_xmm_regs as changing EFLAGS
The vastart_save_xmm_regs pseudo-instruction expands to a test and a
branch, so it modifies EFLAGS.  Mark it so, or else the scheduler might
place it in the middle of another test+branch.

This fixes a bug exposed by r192750, which turned on the MI Scheduler
for X86.

<rdar://problem/15627766>

llvm-svn: 197469
2013-12-17 06:12:05 +00:00
Andrew Trick
42fe1fadc6 Add -mcpu=z10 to SystemZ tests.
llvm-svn: 197466
2013-12-17 05:27:16 +00:00
Andrew Trick
a3aa2ba174 Allow MachineCSE to coalesce trivial subregister copies the same way that it coalesces normal copies.
Without this, MachineCSE is powerless to handle redundant operations with truncated source operands.

This required fixing the 2-addr pass to handle tied subregisters. It isn't clear what combinations of subregisters can legally be tied, but the simple case of truncated source operands is now safely handled:

     %vreg11<def> = COPY %vreg1:sub_32bit; GR32:%vreg11 GR64:%vreg1
     %vreg12<def> = COPY %vreg2:sub_32bit; GR32:%vreg12 GR64:%vreg2
     %vreg13<def,tied1> = ADD32rr %vreg11<tied0>, %vreg12<kill>, %EFLAGS<imp-def>

Test case: cse-add-with-overflow.ll.
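
A hypothetical source-level view of the redundancy (not the committed test;
in a simple case like this the IR-level optimizer would already clean it up,
and the interesting cases only materialize after instruction selection, e.g.
from overflow-checked adds):

  #include <cstdint>

  // Both adds use only the low 32 bits of the 64-bit inputs. Once the
  // truncating sub-register copies are coalesced, the two 32-bit adds are
  // identical machine instructions and MachineCSE can drop one of them.
  uint32_t sumTwice(uint64_t a, uint64_t b) {
    uint32_t first  = static_cast<uint32_t>(a) + static_cast<uint32_t>(b);
    uint32_t second = static_cast<uint32_t>(a) + static_cast<uint32_t>(b);
    return first ^ second;
  }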

This exposed an existing bug in
PPCInstrInfo::commuteInstruction. Thanks to Rafael for the test case:
PowerPC/crash.ll.

llvm-svn: 197465
2013-12-17 04:50:45 +00:00
Quentin Colombet
71b4c4cbe8 Revert r197438 and r197447 until we figure out how to avoid circular dependency at link time
llvm-svn: 197451
2013-12-17 01:19:59 +00:00
Quentin Colombet
6369ce9a04 Add warning capabilities in LLVM.
The patch adds a new LLVMContext::diagnose that can be used to communicate to
the front-end, if any, that something of interest happened.
The diagnostics are supported by a new abstraction, the DiagnosticInfo class.
The base class contains the following information:
- The kind of the report: What this is about.
- The severity of the report: How bad this is.

This patch also adds 2 classes:
- DiagnosticInfoInlineAsm: For inline asm reporting. Basically, this diagnostic
will be used to switch to the new diagnostic API for LLVMContext::emitError.
- DiagnosticStackSize: For stack size reporting. Comes as a replacement of the
hard coded warning in PEI.

This patch also features dynamic diagnostic identifiers. In other words plugins
can use this infrastructure for their own diagnostics (for more details, see
getNextAvailablePluginDiagnosticKind).

This patch introduces a new DiagnosticHandlerTy and a new DiagnosticContext in
the LLVMContext that should be set by the front-end to be able to map these
diagnostics in its own system.

http://llvm-reviews.chandlerc.com/D2376
<rdar://problem/15515174>

llvm-svn: 197438
2013-12-16 23:22:51 +00:00
Juergen Ributzka
d14ed59350 [Stackmap] Allow WebKit_JS calling convention to store 4 byte sized and aligned arguments.
This allows the WebKit_JS calling convention to perform partial writes
at a 4-byte granularity to stack slots.

llvm-svn: 197431
2013-12-16 22:05:32 +00:00
Rafael Espindola
cc981c4385 Add a reduced testcase from the recent bootstrap crash.
llvm-svn: 197426
2013-12-16 21:24:00 +00:00
Rafael Espindola
53e4a36322 Revert "Allow MachineCSE to coalesce trivial subregister copies the same way that it coalesces normal copies."
This reverts commit r197414.

It broke the ppc64 bootstrap. I will post a testcase in a sec.

llvm-svn: 197424
2013-12-16 20:57:09 +00:00
Juergen Ributzka
3569d25c1a [Stackmap] The first integer argument is passed in register for the WebKit_JS calling convention.
Pass the first integer argument (callee) in register to optimize inline caches.

llvm-svn: 197416
2013-12-16 19:53:31 +00:00
Andrew Trick
45152b22b3 Allow MachineCSE to coalesce trivial subregister copies the same way
that it coalesces normal copies.

Without this, MachineCSE is powerless to handle redundant operations
with truncated source operands.

This required fixing the 2-addr pass to handle tied subregisters. It
isn't clear what combinations of subregisters can legally be tied, but
the simple case of truncated source operands is now safely handled:

     %vreg11<def> = COPY %vreg1:sub_32bit; GR32:%vreg11 GR64:%vreg1
     %vreg12<def> = COPY %vreg2:sub_32bit; GR32:%vreg12 GR64:%vreg2
     %vreg13<def,tied1> = ADD32rr %vreg11<tied0>, %vreg12<kill>, %EFLAGS<imp-def>

llvm-svn: 197414
2013-12-16 19:36:21 +00:00
Joerg Sonnenberger
dea3c5aab3 Recognize EABIHF as environment and use it for RTAPI + VFP.
llvm-svn: 197405
2013-12-16 18:51:28 +00:00
Elena Demikhovsky
7faa770ce1 fixed one more line
llvm-svn: 197387
2013-12-16 14:36:50 +00:00
Elena Demikhovsky
d1bc2f2399 Fixed the test - added -mcpu=penryn flag to avoid ambiguity in code generation.
llvm-svn: 197385
2013-12-16 14:24:08 +00:00
Elena Demikhovsky
b43ccbc3f7 AVX-512: Added legal type MVT::i1 and VK1 register for it.
Added scalar compare VCMPSS, VCMPSD.
Implemented LowerSELECT for scalar FP operations.
I replaced FSETCCss, FSETCCsd with one node type FSETCCs.
Node extract_vector_elt(v16i1/v8i1, idx) returns an element of type i1.

llvm-svn: 197384
2013-12-16 13:52:35 +00:00
Hao Liu
81b69b5ce1 [AArch64]Fix the pattern match failure for v1i8/v1i16/v1i32 types.
Currently we have such types as legal vector types, so the DAG combiner may generate DAG nodes with these types, but we don't have patterns to match them.
E.g. a load i32 and a bitcast i32 to v1i32 will be combined into a load v1i32:
     bitcast (load i32) to v1i32 -> load v1i32.
So this patch fixes such problems for load/dup instructions.
If v1i8/v1i16/v1i32 are no longer legal, the code in this patch can be deleted, so I also add some FIXMEs.

llvm-svn: 197361
2013-12-16 02:51:28 +00:00
Reed Kotler
5bb816aed3 Last change for mips16 prolog/epilog cleanup and optimization.
Some tiny cosmetic code changes will follow. Because of the
wide-ranging nature of the patch, a full 24 test cycle was needed to
check against regressions. This was the smallest patch I could
make to progress from the earlier ones in the series.

llvm-svn: 197350
2013-12-15 20:49:30 +00:00
Iain Sandoe
d78fe2e004 [Powerpc darwin] AsmParser Base implementation.
This is a base implementation of the powerpc-apple-darwin asm parser dialect.

* Enables infrastructure (essentially isDarwin()) and fixes up the parsing of asm directives to separate out ELF and MachO/Darwin additions.
* Enables parsing of {r,f,v}XX as register identifiers.
* Enables parsing of lo16() hi16() and ha16() as modifiers.

The changes to the test case are from David Fang (fangism).

llvm-svn: 197324
2013-12-14 13:34:02 +00:00
Juergen Ributzka
d7df87c066 [Stackmap] Liveness Analysis Pass
This optional register liveness analysis pass can be enabled with either
-enable-stackmap-liveness, -enable-patchpoint-liveness, or both. The pass
traverses each basic block in a machine function. For each basic block, the
instructions are processed in reverse order, and if a patchpoint or stackmap
instruction is encountered, the current live-out register set is encoded as a
register mask and attached to the instruction.
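
A minimal, self-contained sketch of the backward walk on a toy instruction
model (made-up types; this is not the pass or LLVM's API):

  #include <set>
  #include <vector>

  struct ToyInst {
    std::vector<int> defs, uses;     // abstract register numbers
    bool isStackmap = false;
    std::set<int> recordedLiveOuts;  // filled in only for stackmap/patchpoint
  };

  void recordStackmapLiveness(std::vector<ToyInst> &block, std::set<int> liveOut) {
    // Walk the block in reverse, starting from the block's live-out set.
    for (auto it = block.rbegin(); it != block.rend(); ++it) {
      if (it->isStackmap)
        it->recordedLiveOuts = liveOut;         // attach the current live set
      for (int d : it->defs) liveOut.erase(d);  // a def ends liveness above it
      for (int u : it->uses) liveOut.insert(u); // a use makes the register live
    }
  }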

Later on during stackmap generation the live-out register mask is processed and
also emitted as part of the stackmap.

This information is optional and intended for optimization purposes only. This
will enable a client of the stackmap to reason about the registers it can use
and which registers need to be preserved.

Reviewed by Andy

llvm-svn: 197317
2013-12-14 06:53:06 +00:00
Matt Arsenault
329326f031 R600/SI: Minor improvements to test.
Use CHECK-LABEL, add an i64 version, check store instructions.

llvm-svn: 197293
2013-12-14 00:38:04 +00:00
Andrew Trick
1157632f3d Revert "Liveness Analysis Pass"
This reverts commit r197254.

This was an accidental merge of Juergen's patch. It will be checked in
shortly, but wasn't meant to go in quite yet.

Conflicts:
	include/llvm/CodeGen/StackMaps.h
	lib/CodeGen/StackMaps.cpp
	test/CodeGen/X86/stackmap-liveness.ll

llvm-svn: 197260
2013-12-13 18:57:20 +00:00
Andrew Trick
e726cc0278 Grow the stackmap/patchpoint format to hold 64-bit IDs.
llvm-svn: 197255
2013-12-13 18:37:10 +00:00
Andrew Trick
3b62606852 Liveness Analysis Pass
llvm-svn: 197254
2013-12-13 18:37:03 +00:00
Rafael Espindola
77962b5146 Fix pr18235.
The cpp backend is not a reasonable fallback for a missing target. It is a
very special backend, so it is reasonable to use it only if explicitly
requested.

While at it, simplify the interface a bit.

llvm-svn: 197241
2013-12-13 16:05:32 +00:00
Richard Sandiford
5a37a68afd [SystemZ] Optimize X [!=]= Y in cases where X - Y or Y - X is also computed
In those cases it's better to compare the result of the subtraction
against zero.
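
For example (a hedged source-level sketch; the CC reuse itself happens in
the backend):

  #include <cstdint>

  // When x - y is computed anyway, testing the difference against zero lets
  // the subtraction's condition-code result be reused instead of emitting a
  // separate compare of x and y.
  int64_t diffOrOne(int64_t x, int64_t y) {
    int64_t d = x - y;
    return (d != 0) ? d : 1;  // "x != y" expressed as "d != 0"
  }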

llvm-svn: 197239
2013-12-13 15:50:30 +00:00
Richard Sandiford
7ae30a86de [SystemZ] Make more use of TMHH
This originally came about after noticing that InstCombine turns
some of the TMHH (icmp (and...), ...) tests into plain comparisons.
Since there is no instruction to compare with a 64-bit immediate,
TMHH is generally better than an ordered comparison for the cases
that it can handle.
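
For instance (a hedged sketch of the shape of code involved; the mask
value is made up):

  #include <cstdint>

  // There is no single instruction that compares a register against a
  // 64-bit immediate, but TMHH can test selected bits of the high halfword
  // directly, so the (and x, mask) == 0 form is preferable here.
  bool highHalfwordClear(uint64_t x) {
    return (x & 0xFFFF000000000000ULL) == 0;
  }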

llvm-svn: 197238
2013-12-13 15:46:55 +00:00
Richard Sandiford
50a6f85c7a [SystemZ] Extend integer absolute selection
This patch makes more use of LPGFR and LNGFR.  It builds on top of
the LTGFR selection from r197234.  Most of the tests are motivated
by what InstCombine would produce.

llvm-svn: 197236
2013-12-13 15:35:00 +00:00
Richard Sandiford
3cb2e57bb5 [SystemZ] Make more use of LTGFR
InstCombine turns (sext (trunc)) into (ashr (shl)), then converts any
comparison of the ashr against zero into a comparison of the shl against zero.
This makes sense in itself, but we want to undo it for z, since the sign-
extension instruction has a CC-setting form.
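
A source-level view of the two forms (hedged illustration):

  #include <cstdint>

  // Both functions test whether bit 31 of x is set. InstCombine prefers the
  // shift-based form, but the sign-extending move (LTGFR) sets CC itself, so
  // undoing the rewrite saves the separate comparison against zero.
  bool lowWordNegativeViaSext(int64_t x) {
    return static_cast<int32_t>(x) < 0;        // (sext (trunc x)) < 0
  }
  bool lowWordNegativeViaShift(uint64_t x) {
    return static_cast<int64_t>(x << 32) < 0;  // (shl x, 32) < 0
  }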

I've included tests for both the original and InstCombined variants,
but the former already worked.  The patch fixes the latter.

llvm-svn: 197234
2013-12-13 15:07:39 +00:00
Benjamin Kramer
123ebfd785 X86: When lowering shl_parts, don't emit shift amounts larger than the bit width.
While it's safe for the X86-specific shift nodes, dag combining will
kill generic nodes. Insert an AND to make it safe; isel will remove it
again, since x86's shift instructions have an implicit AND.
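
A trivial standalone illustration of why the AND is free on x86:

  #include <cstdint>

  // x86's 32-bit shift instructions only look at the low 5 bits of the
  // count, so an explicit "amt & 31" costs nothing: isel folds it into the
  // shift's implicit masking.
  uint32_t shlSafe(uint32_t value, uint32_t amt) {
    return value << (amt & 31);  // never shifts by the bit width or more
  }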

Fixes PR16108, which contains a contraption to hit this case in between
constant folders.

llvm-svn: 197228
2013-12-13 13:40:24 +00:00
Joerg Sonnenberger
2826f5b278 Enabling thumb2 mode used to force support for armv6t2. Replace this
with a temporary assertion and adjust the various test cases.

llvm-svn: 197224
2013-12-13 11:16:00 +00:00
Kai Nacke
384eb1e0f0 Change stack probing code for MingW.
Since gcc 4.6 the compiler uses ___chkstk_ms which has the same semantics as the
MS CRT function __chkstk. This simplifies the prologue generation a bit.

Reviewed by Rafael Espíndola. 

llvm-svn: 197205
2013-12-13 05:37:05 +00:00
Rafael Espindola
52eb6b7c6e Switch to the new MingW ABI.
GCC 4.7 changed the MingW ABI. On the LLVM side it means that sret functions
don't pop the stack.

llvm-svn: 197163
2013-12-12 16:06:58 +00:00
Chad Rosier
7ff770d07b [AArch64] Removed unnecessary copy patterns with v1fx types.
- Copy patterns with float/double types are enough.
- Fix typos in test case names that were using v1fx.
- There is no ACLE intrinsic that uses v1f32 type.  And there is no conflict of
  neon and non-neon overlapped operations with this type, so there is no need to
  support operations with this type.
- Remove v1f32 from FPR32 register and disallow v1f32 as a legal type for
  operations.

Patch by Ana Pazos!

llvm-svn: 197159
2013-12-12 15:46:29 +00:00
Tim Northover
cb71ef244e PowerPC: add Linux triple to TLS tests
The tests were failing on OS X.

llvm-svn: 197146
2013-12-12 11:51:23 +00:00
Andrea Di Biagio
b414478458 Added new X86 patterns to select SSE scalar fp arithmetic instructions from
a vector packed single/double fp operation followed by a vector insert.

The effect is that the backend converts the packed fp instruction
followed by a vector insert into an SSE or AVX scalar fp instruction.

For example, given the following code:
   __m128 foo(__m128 A, __m128 B) {
     __m128 C = A + B;
     return (__m128) {c[0], a[1], a[2], a[3]};
   }

 previously we generated:
   addps %xmm0, %xmm1
   movss %xmm1, %xmm0
 
 we now generate:
   addss %xmm1, %xmm0

llvm-svn: 197145
2013-12-12 11:50:47 +00:00
Hao Liu
f67a904939 [AArch64]Fix the problem that AArch64 backend fails to select scalar_to_vector of vector types having more than one element.
llvm-svn: 197135
2013-12-12 07:36:26 +00:00
Kevin Qin
18b9563375 Fix Incorrect CHECK message [0-31]+ in test case.
In a regular expression, [0-31]+ is equivalent to [0-3]+; it does not
match the numbers from 0 to 31. So change it to [0-9]+.
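
The character-class semantics can be checked with any ERE-style engine; a
quick standalone check (std::regex here, not FileCheck itself):

  #include <cassert>
  #include <regex>

  int main() {
    // Inside a character class, "0-3" is a range and the trailing "1" is
    // redundant, so [0-31]+ only matches runs of the digits 0, 1, 2 and 3.
    std::regex narrow("[0-31]+"), digits("[0-9]+");
    assert(std::regex_match("3120", narrow));
    assert(!std::regex_match("47", narrow));  // 4 and 7 fall outside [0-3]
    assert(std::regex_match("47", digits));   // what the CHECK line wanted
    return 0;
  }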

llvm-svn: 197113
2013-12-12 02:19:13 +00:00
Hal Finkel
e8c7fef754 Improve instruction scheduling for the PPC POWER7
Aside from a few minor latency corrections, the major change here is a new
hazard recognizer which focuses on better dispatch-group formation on the
POWER7. As with the PPC970's hazard recognizer, the most important thing it
does is avoid load-after-store hazards within the same dispatch group. It uses
the POWER7's special dispatch-group-terminating nop instruction (instead of
inserting multiple regular nop instructions). This new hazard recognizer makes
use of the scheduling dependency graph itself, built using AA information, to
robustly detect the possibility of load-after-store hazards.

Significant test-suite performance changes (the error bars are 99.5% confidence
intervals based on 5 test-suite runs both with and without the change --
speedups are negative):

speedups:

MultiSource/Benchmarks/FreeBench/pcompress2/pcompress2
	-0.55171% +/- 0.333168%

MultiSource/Benchmarks/TSVC/CrossingThresholds-dbl/CrossingThresholds-dbl
	-17.5576% +/- 14.598%

MultiSource/Benchmarks/TSVC/Reductions-dbl/Reductions-dbl
	-29.5708% +/- 7.09058%

MultiSource/Benchmarks/TSVC/Reductions-flt/Reductions-flt
	-34.9471% +/- 11.4391%

SingleSource/Benchmarks/BenchmarkGame/puzzle
	-25.1347% +/- 11.0104%

SingleSource/Benchmarks/Misc/flops-8
	-17.7297% +/- 9.79061%

SingleSource/Benchmarks/Shootout-C++/ary3
	-35.5018% +/- 23.9458%

SingleSource/Regression/C/uint64_to_float
	-56.3165% +/- 25.4234%

SingleSource/UnitTests/Vectorizer/gcc-loops
	-18.5309% +/- 6.8496%

regressions:

MultiSource/Benchmarks/ASCI_Purple/SMG2000/smg2000
	18.351% +/- 12.156%

SingleSource/Benchmarks/Shootout-C++/methcall
	27.3086% +/- 14.4733%

llvm-svn: 197099
2013-12-12 00:19:11 +00:00
Quentin Colombet
a4f824b0d6 Fix an over-constrained assertion in MachineFunction::addLiveIn.
The assertion was checking that the virtual register VReg used to represent the
physical register PReg uses the same register class as the one passed to
MachineFunction::addLiveIn.
This is over-constraining because it is sufficient to check that the register
class of VReg (VRegRC) is a subclass of the register class of PReg (PRegRC) and
that VRegRC contains PReg.
Indeed, if VReg gets constrained because of some operation constraints
between two calls of MachineFunction::addLiveIn, the original assertion
no longer holds.
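
A toy model of the relaxed condition (std::set standing in for a register
class; this is not LLVM's API):

  #include <algorithm>
  #include <cassert>
  #include <set>

  using ToyRegClass = std::set<int>;  // a register class as a set of registers

  // Old check: VRegRC == PRegRC.  Relaxed check: VRegRC is a subclass of
  // PRegRC (a subset here) and VRegRC still contains the physical register.
  bool liveInClassesCompatible(const ToyRegClass &VRegRC,
                               const ToyRegClass &PRegRC, int PReg) {
    bool isSubclass = std::includes(PRegRC.begin(), PRegRC.end(),
                                    VRegRC.begin(), VRegRC.end());
    return isSubclass && VRegRC.count(PReg) != 0;
  }

  int main() {
    ToyRegClass GPR{0, 1, 2, 3}, Constrained{0, 1, 2};
    assert(liveInClassesCompatible(Constrained, GPR, 1));   // still compatible
    assert(!liveInClassesCompatible(Constrained, GPR, 3));  // PReg not in VRegRC
    return 0;
  }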

This fixes <rdar://problem/15633429>. 

llvm-svn: 197097
2013-12-12 00:15:47 +00:00
Chad Rosier
551789d294 [AArch64] Refactor NEON floating-point Max/Min/Maxnm/Minnm across vector AArch64
intrinsics to use f32 types, rather than their vector equivalents.

llvm-svn: 197090
2013-12-11 23:21:25 +00:00
Hal Finkel
4a76fae817 Fix the PPC subsumes-predicate check
For one predicate to subsume another, they must both check the same condition
register. Failure to check this prerequisite was causing miscompiles.

Fixes PR18003.

llvm-svn: 197089
2013-12-11 23:12:25 +00:00
Roman Divacky
d980459c16 Merge all tls tests into two files: one for normal codegen (initial and local
exec) and one for PIC codegen (local and general dynamic).

llvm-svn: 197081
2013-12-11 22:25:39 +00:00
Rafael Espindola
e5eb9f253d On ELF and COFF treat linker_private like private.
The linkers on these systems don't have anything special to do with these
symbols. Since the intent is for them to be absent from the final object,
just treat them as private.

llvm-svn: 197080
2013-12-11 22:18:44 +00:00
Roman Divacky
70398ec7f7 Remove test thats testing the same thing as tls.ll.
llvm-svn: 197074
2013-12-11 21:37:04 +00:00
Chad Rosier
c251a82254 [AArch64] Add NEON scalar floating-point compare LLVM AArch64 intrinsics that
use f32/f64 types, rather than their vector equivalents.

llvm-svn: 197068
2013-12-11 21:03:46 +00:00
Chad Rosier
0b1fef12e8 [AArch64] Refactor the NEON scalar floating-point reciprocal step and
floating-point reciprocal square root step LLVM AArch64 intrinsics to
use f32/f64 types, rather than their vector equivalents.

llvm-svn: 197067
2013-12-11 21:03:43 +00:00
Chad Rosier
43daaa765b [AArch64] Refactor the NEON scalar floating-point reciprocal estimate, floating-
point reciprocal exponent, and floating-point reciprocal square root estimate
LLVM AArch64 intrinsics to use f32/f64 types, rather than their vector
equivalents.

llvm-svn: 197066
2013-12-11 21:03:40 +00:00
Tim Northover
d66403a4e1 ARM: constrain register-class in fast-isel
The tests were no longer using fast-isel at all (MachO needs an "ios" rather
than "darwin" triple at the moment and Linux needs ARM mode). Once that was
corrected, the verifier complained about a t2ADDri created for the alloca.

llvm-svn: 197046
2013-12-11 16:04:57 +00:00
Elena Demikhovsky
154413adc2 AVX-512: Removed "z" suffix from AVX-512 instructions, since it is incompatible with GCC.
I moved a test from avx512-vbroadcast-crash.ll to avx512-vbroadcast.ll.
I defined the HasAVX512 predicate as an AssemblerPredicate. This means that you should invoke llvm-mc with "-mcpu=knl" to get encodings for AVX-512 instructions. I need this to let the AsmMatcher set different encodings for AVX and AVX-512 instructions that have the same mnemonic and operands (all scalar instructions).

llvm-svn: 197041
2013-12-11 14:31:04 +00:00
Richard Sandiford
533c977c68 [SystemZ] Optimize fcmp X, 0 in cases where X is also negated
In such cases it's often better to test the result of the negation instead,
since the negation also sets CC.

llvm-svn: 197032
2013-12-11 11:45:08 +00:00
Richard Sandiford
ffff4e9e61 Extend (truncate (load)) folding
DAGCombiner could fold (truncate (load)) -> smaller load if the original
load was the width of the truncation result or wider.  This patch extends
it to handle cases where the original load was narrower (and so the
extension type stays the same).
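
A hedged source-level example of the newly handled case:

  #include <cstdint>

  // The load is only 8 bits wide, narrower than the i32 value being
  // truncated to i16; the fold keeps the same zero-extension but retargets
  // it at i16, so no separate truncate is needed.
  uint16_t widenByteToHalf(const uint8_t *p) {
    uint32_t wide = *p;                  // zero-extending 8-bit load to i32
    return static_cast<uint16_t>(wide);  // truncate i32 -> i16
  }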

llvm-svn: 197030
2013-12-11 11:37:27 +00:00
Manuel Klimek
e3472acb45 Fix XFAIL rules.
llvm-svn: 197017
2013-12-11 08:38:42 +00:00
Rafael Espindola
857e3cbd88 Make this test a bit stricter.
The extra CHECK and CHECK-NEXT are there to show that we don't print a
linker_private symbol on Linux.

llvm-svn: 197003
2013-12-11 04:10:41 +00:00
Reed Kotler
ebfdeaa94e Distinguish and choose 16 or 32 bit forms of save/restore for Mips16.
llvm-svn: 196999
2013-12-11 03:32:44 +00:00
Reid Kleckner
9e1ba62c63 Revert the backend fatal error from r196939
The combination of inline asm, stack realignment, and dynamic allocas
turns out to be too common to reject out of hand.

ASan inserts empty inline asm fragments and uses aligned allocas.
Compiling any trivial function containing a dynamic alloca with ASan is
enough to trigger the check.

XFAIL the test cases that would be miscompiled and add one that uses the
relevant functionality.

llvm-svn: 196986
2013-12-10 23:23:52 +00:00
David Fang
7d43532c99 on darwin<10, fall back to .weak_definition (PPC,X86)
.weak_def_can_be_hidden was not yet supported by the system assembler

llvm-svn: 196970
2013-12-10 21:37:41 +00:00
Chad Rosier
29ed5c4552 [AArch64] Refactor the NEON floating-point absolute difference LLVM AArch64
intrinsic to use f32/f64 types, rather than their vector equivalents.

llvm-svn: 196965
2013-12-10 21:33:59 +00:00
Chad Rosier
5394f9c916 [AArch64] Refactor the NEON signed/unsigned floating-point convert to fixed-point
LLVM AArch64 intrinsics to use f32/f64, rather than their vector equivalents.

llvm-svn: 196964
2013-12-10 21:33:56 +00:00
Chad Rosier
b2112dc6c3 [AArch64] Overload NEON signed/unsigned floating-point convert to fixed-point
and fixed-point convert to floating-point LLVM AArch64 intrinsics.

llvm-svn: 196963
2013-12-10 21:33:53 +00:00
Chad Rosier
3d7979609e [AArch64] Overload NEON signed/unsigned integer convert to floating-point
LLVM AArch64 intrinsics.

llvm-svn: 196962
2013-12-10 21:33:50 +00:00
Matt Arsenault
0a974aca94 R600/SI: Add i64 cmp tests
llvm-svn: 196960
2013-12-10 21:11:55 +00:00
Reid Kleckner
b6a72325f3 Reland "Fix miscompile of MS inline assembly with stack realignment"
This re-lands commit r196876, which was reverted in r196879.

The tests have been fixed to pass on platforms with a stack alignment
larger than 4.

An update to the clang-side tests will land shortly.

llvm-svn: 196939
2013-12-10 18:27:32 +00:00
Chad Rosier
7e9f19f92d [AArch64] Refactor the Neon vector/scalar floating-point convert intrinsics so
that they use float/double rather than the vector equivalents when appropriate.

llvm-svn: 196930
2013-12-10 16:11:39 +00:00
Chad Rosier
0b6c7be6f7 [AArch64] Refactor the Neon vector/scalar floating-point convert implementation.
Specifically, reuse the ARM intrinsics when possible.

llvm-svn: 196926
2013-12-10 15:35:33 +00:00
Andrea Di Biagio
3107b3cbaa Ensure that the backend no longer emits unnecessary vector insert instructions
immediately after SSE scalar fp instructions like addss or mulss.

Added patterns to select SSE scalar fp arithmetic instructions from a scalar
fp operation followed by a blend.

For example, given the following code:
  __m128 foo(__m128 A, __m128 B) {
    A[0] += B[0];
    return A;
  }

previously we generated:
  addss %xmm0, %xmm1
  movss %xmm1, %xmm0

now we generate:
  addss %xmm1, %xmm0

llvm-svn: 196925
2013-12-10 15:22:48 +00:00
Vincent Lejeune
73c6e97c15 R600: Fix an infinite loop when trying to reorganize export/tex vector input
llvm-svn: 196923
2013-12-10 14:43:31 +00:00
Vincent Lejeune
cd5d9a9849 R600: Fix input modifiers lost for Cayman
llvm-svn: 196922
2013-12-10 14:43:27 +00:00
Reed Kotler
1f7ad447b7 Next step in Mips16 prologue/epilogue cleanup.
Save S2 (reg 18) only when we are calling floating point stubs that
have a return value of float or complex. Some more work remains to make
this better, but this is the first step.

llvm-svn: 196921
2013-12-10 14:29:38 +00:00