mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-21 20:12:56 +02:00
Commit Graph

27748 Commits

Author SHA1 Message Date
Jozef Kolek
814723a8ed [mips][microMIPS] Implement LWSP and SWSP instructions
Differential Revision: http://reviews.llvm.org/D6416

llvm-svn: 224771
2014-12-23 16:16:33 +00:00
Michael Kuperstein
180649bb38 [ValueTracking] Move GlobalAlias handling to be after the max depth check in computeKnownBits()
GlobalAlias handling used to be after GlobalValue handling, which meant it was, in practice, dead code. r220165 moved GlobalAlias handling to be before GlobalValue handling, but also moved it to be before the max depth check, causing an assert due to a recursion depth limit violation. 

This moves GlobalAlias handling forward to where it's safe, and changes the GlobalValue handling to only look at GlobalObjects.

Differential Revision: http://reviews.llvm.org/D6758

llvm-svn: 224765
2014-12-23 11:33:41 +00:00
Elena Demikhovsky
bb8ca1f551 AVX-512: Added FMA instructions, intrinsics and tests for KNL and SKX targets
by Asaf Badouh

http://reviews.llvm.org/D6456

llvm-svn: 224764
2014-12-23 10:30:39 +00:00
Hal Finkel
f0195675c6 [PowerPC] Don't mark the return-address slot as immutable
It is tempting to mark the fixed stack slot used to store the return address as
immutable when lowering @llvm.returnaddress(i32 0). Unfortunately, within the
function, it is not completely immutable: it is written during the function
prologue. When using post-RA instruction scheduling, the prologue instructions
are available for scheduling, and we're not free to interchange the order of a
particular store in the prologue with loads from that stack location.

Fixes PR21976.

llvm-svn: 224761
2014-12-23 09:45:06 +00:00
Elena Demikhovsky
cfbcf5995c AVX-512: BLENDM - fixed encoding of the broadcast version
Added more intrinsics and encoding tests.

llvm-svn: 224760
2014-12-23 09:36:28 +00:00
Michael Kuperstein
81a40a5cbe [DagCombine] Improve DAGCombiner BUILD_VECTOR when it has two sources of elements
This partially fixes PR21943.

For AVX, we go from:

vmovq   (%rsi), %xmm0
vmovq   (%rdi), %xmm1
vpermilps       $-27, %xmm1, %xmm2 ## xmm2 = xmm1[1,1,2,3]
vinsertps       $16, %xmm2, %xmm1, %xmm1 ## xmm1 = xmm1[0],xmm2[0],xmm1[2,3]
vinsertps       $32, %xmm0, %xmm1, %xmm1 ## xmm1 = xmm1[0,1],xmm0[0],xmm1[3]
vpermilps       $-27, %xmm0, %xmm0 ## xmm0 = xmm0[1,1,2,3]
vinsertps       $48, %xmm0, %xmm1, %xmm0 ## xmm0 = xmm1[0,1,2],xmm0[0]

To the expected:

vmovq   (%rdi), %xmm0
vmovhpd (%rsi), %xmm0, %xmm0
retq

Fixing this for AVX2 is still open.

Differential Revision: http://reviews.llvm.org/D6749

llvm-svn: 224759
2014-12-23 08:59:45 +00:00
Hal Finkel
867d174942 [PowerPC] Don't attempt a 64-bit pow2 division on PPC32
In r224033, in moving the signed power-of-2 division expansion into
BuildSDIVPow2, I accidentally made it possible to attempt the lowering for a
64-bit division on PPC32. This later asserts.

Fixes PR21928.

llvm-svn: 224758
2014-12-23 08:38:50 +00:00
Michael Liao
e7760832e1 [SimplifyCFG] Revise common code sinking
- Fix the case where more than one common instruction derived from the same
  operand cannot be sunk. When a pair of values has more than one derived value
  in both branches, only one derived value could be sunk.
- Replace the BB1 -> (BB2, PN) map with a joint value map, i.e. a map of
  (BB1, BB2) -> PN, which tracks common ops more accurately.

llvm-svn: 224757
2014-12-23 08:26:55 +00:00
Ahmed Bougacha
ac752e2bc3 [ARM] Don't break alignment when combining base updates into load/stores.
r223862/r224203 tried to also combine base-updating load/stores.
There was a mistake there: the alignment was added as-is as an operand to
the ARMISD::VLD/VST node.  However, the VLD/VST selection logic doesn't care
about less-than-standard alignment attributes.
For example, no matter the alignment of a v2i64 load (say 1), SelectVLD picks
VLD1q64 (because of the memory type).  But VLD1q64 ("vld1.64 {dXX, dYY}") is
8-aligned, per ARMARMv7a 3.2.1.
For the 1-aligned load, what we really want is VLD1q8.

This commit introduces bitcasts if necessary, and changes the vld/vst type to
one whose standard alignment matches the original load/store alignment.

Differential Revision: http://reviews.llvm.org/D6759

llvm-svn: 224754
2014-12-23 06:07:31 +00:00
Chandler Carruth
5fb4297ebd Revert r224739: Debug info: Teach SROA how to update debug info for
fragmented variables.

This caused codegen to start crashing when we built somewhat large
programs with debug info and optimizations. 'check-msan' hit it, and
I suspect a bootstrap would as well. I mailed a test case to the
review thread.

llvm-svn: 224750
2014-12-23 02:58:14 +00:00
Jim Grosbach
19c4fa899d X86: Don't over-align combined loads.
When combining consecutive loads+inserts into a single vector load,
we should keep the alignment of the base load. Doing otherwise can, and does,
lead to using overly aligned instructions. In the included test case, for
example, using a 32-byte vmovaps on a 16-byte aligned value. Oops.

rdar://19190968

llvm-svn: 224746
2014-12-23 00:35:23 +00:00
Reid Kleckner
04fe8002a0 Make musttail more robust for vector types on x86
Previously I tried to plug musttail into the existing vararg lowering
code. That turned out to be a mistake, because non-vararg calls use
significantly different register lowering, even on x86. For example, AVX
vectors are usually passed in registers to normal functions and memory
to vararg functions.  Now musttail uses a completely separate lowering.

Hopefully this can be used as the basis for non-x86 perfect forwarding.

Reviewers: majnemer

Differential Revision: http://reviews.llvm.org/D6156

llvm-svn: 224745
2014-12-22 23:58:37 +00:00
Adrian Prantl
566772c5d4 Thumb1 frame lowering: Mark CFI instructions with the FrameSetup flag.
Followup to r224294:

ARM/AArch64: Attach the FrameSetup MIFlag to CFI instructions.
Debug info marks the first instruction without the FrameSetup flag
as being the end of the function prologue. Any CFI instructions in the
middle of the function prologue would cause debug info to end the prologue
too early and worse, attach the line number of the CFI instruction, which
incidentally is often 0.

llvm-svn: 224743
2014-12-22 23:09:14 +00:00
Bruno Cardoso Lopes
c8d20ce475 [LCSSA] Handle PHI insertion in disjoint loops
Take two disjoint Loops L1 and L2.

LoopSimplify fails to simplify some loops (e.g. when indirect branches
are involved). In such situations, it can happen that an exit for L1 is
the header of L2. Thus, when we create PHIs in one such exit we are
also inserting PHIs in the L2 header.

This could break LCSSA form for L2 because these inserted PHIs can also
have uses in L2 exits, which are never handled in the current
implementation. Provide a fix for this corner case and test that we
don't assert/crash on that.

Differential Revision: http://reviews.llvm.org/D6624

rdar://problem/19166231

llvm-svn: 224740
2014-12-22 22:35:46 +00:00
Adrian Prantl
85354b18d3 Debug info: Teach SROA how to update debug info for fragmented variables.
This allows us to generate debug info for extremely advanced code such as

  typedef struct { long int a; int b;} S;

  int foo(S s) {
    return s.b;
  }

which at -O1 on x86_64 is codegen'd into

  define i32 @foo(i64 %s.coerce0, i32 %s.coerce1) #0 {
    ret i32 %s.coerce1, !dbg !24
  }

with this patch we emit the following debug info for this

  TAG_formal_parameter [3]
    AT_location( 0x00000000
                 0x0000000000000000 - 0x0000000000000006: rdi, piece 0x00000008, rsi, piece 0x00000004
                 0x0000000000000006 - 0x0000000000000008: rdi, piece 0x00000008, rax, piece 0x00000004 )
                 AT_name( "s" )
                 AT_decl_file( "/Volumes/Data/llvm/_build.ninja.release/test.c" )

Thanks to chandlerc, dblaikie, and echristo for their feedback on all
previous iterations of this patch!

llvm-svn: 224739
2014-12-22 22:26:00 +00:00
Reid Kleckner
eecdfef0b6 Fix Windows unwind info for functions in sections other than .text
Previously we assumed the section name had the form .text$foo, which is
what we used to do for inline functions. If the dollar wasn't present,
we'd put unwind data in the .pdata and .xdata sections for the main
.text section, which is incorrect.

Fixes PR22001.

llvm-svn: 224738
2014-12-22 22:10:08 +00:00
Colin LeMahieu
b1f14d473d [Hexagon] Adding memb instruction. Fixing whitespace in test from 224730.
llvm-svn: 224735
2014-12-22 21:40:43 +00:00
Colin LeMahieu
c88fff49c9 [Hexagon] Adding classes and load unsigned byte instruction, updating usages.
llvm-svn: 224730
2014-12-22 21:20:03 +00:00
Bruno Cardoso Lopes
351f228624 [x86] Add vector @llvm.ctpop intrinsic custom lowering
Currently, when ctpop is supported for scalar types, the expansion of
@llvm.ctpop.vXiY uses vector element extractions, insertions and individual
calls to @llvm.ctpop.iY. When not, expansion with bit-math operations is used
for the scalar calls.

Local Haswell measurements show that we can improve vector @llvm.ctpop.vXiY
expansion in some cases by using a vector parallel bit-twiddling approach,
based on:

v = v - ((v >> 1) & 0x55555555);
v = (v & 0x33333333) + ((v >> 2) & 0x33333333);
v = (v + (v >> 4)) & 0x0F0F0F0F;
v = v + (v >> 8);
v = v + (v >> 16);
v = v & 0x0000003F;
(from http://graphics.stanford.edu/~seander/bithacks.html#CountBitsSetParallel)
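
For reference, a standalone C++ sketch of the same parallel bit count in
scalar form (editor's illustration; the lowering applies the identical
mask/shift steps lane-wise with vector operations, and popcount32 is a
made-up name):

  #include <cassert>
  #include <cstdint>

  // Parallel bit count for one 32-bit lane.
  static uint32_t popcount32(uint32_t v) {
    v = v - ((v >> 1) & 0x55555555u);                 // 2-bit partial sums
    v = (v & 0x33333333u) + ((v >> 2) & 0x33333333u); // 4-bit partial sums
    v = (v + (v >> 4)) & 0x0F0F0F0Fu;                 // 8-bit partial sums
    v = v + (v >> 8);
    v = v + (v >> 16);
    return v & 0x3Fu;                                 // total fits in 6 bits
  }

  int main() {
    assert(popcount32(0u) == 0);
    assert(popcount32(0xFFFFFFFFu) == 32);
    assert(popcount32(0x80000001u) == 2);
    return 0;
  }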

When scalar ctpop isn't supported, the approach above performs better for
v2i64, v4i32, v4i64 and v8i32 (see numbers below). And even when scalar ctpop
is supported, this approach performs ~2x better for v8i32.

Here, x86_64 implies -march=corei7-avx without ctpop and x86_64h includes ctpop
support with -march=core-avx2.

== [x86_64h - new]
v8i32: 0.661685
v4i32: 0.514678
v4i64: 0.652009
v2i64: 0.324289
== [x86_64h - old]
v8i32: 1.29578
v4i32: 0.528807
v4i64: 0.65981
v2i64: 0.330707

== [x86_64 - new]
v8i32: 1.003
v4i32: 0.656273
v4i64: 1.11711
v2i64: 0.754064
== [x86_64 - old]
v8i32: 2.34886
v4i32: 1.72053
v4i64: 1.41086
v2i64: 1.0244

More work for other vector types will come next.

llvm-svn: 224725
2014-12-22 19:45:43 +00:00
Quentin Colombet
c245321b0e [CodeGenPrepare] Handle properly the promotion of operands when this does not
generate instructions.

Fixes PR21978.
Related to <rdar://problem/18310086>

llvm-svn: 224717
2014-12-22 18:11:52 +00:00
Elena Demikhovsky
aeb7ff5f14 AVX-512: Added all forms of BLENDM instructions,
intrinsics, encoding tests for AVX-512F and skx instructions.

llvm-svn: 224707
2014-12-22 13:52:48 +00:00
Karthik Bhat
a80b575047 Lower multiply-negate operation to mneg on AArch64
This patch pattern-matches code such as:
neg	 w8, w8
mul	 w8, w9, w8
to
mneg	 w8, w8, w9
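
At the C level this corresponds to multiplying by a negated operand, e.g.
(editor's illustration; the function name is made up, not a test from the
patch):

  // On AArch64 this can now select to a single mneg.
  int mulneg(int a, int b) { return a * -b; }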

Review: http://reviews.llvm.org/D6754
llvm-svn: 224706
2014-12-22 13:38:58 +00:00
Rafael Espindola
6ca1e5a091 Convert a few tests to FileCheck. NFC.
llvm-svn: 224705
2014-12-22 13:29:46 +00:00
Matt Arsenault
92400a8e42 Enable (sext x) == C --> x == (trunc C) combine
Extend the existing code which handles this for zext. This makes this
more useful for targets with ZeroOrNegativeOne BooleanContent and
obsoletes a custom combine SI uses for i1 setcc (sext(i1), 0, setne)
since the constant will now be shrunk to i1.

llvm-svn: 224691
2014-12-21 16:48:42 +00:00
Saleem Abdulrasool
5b457858a7 ARM: further improve deprecated diagnosis (LDM)
The ARM ARM states:
  LDM/LDMIA/LDMFD:
    The SP can be in the list. However, ARM deprecates using these instructions
    with SP in the list.

    ARM deprecates using these instructions with both the LR and the PC in the
    list.

  LDMDA/LDMFA/LDMDB/LDMEA/LDMIB/LDMED:
    The SP can be in the list. However, instructions that include the SP in the
    list are deprecated.

    Instructions that include both the LR and the PC in the list are deprecated.

  POP:
    The SP can only be in the list before ARMv7. ARM deprecates any use of ARM
    instructions that include the SP, and the value of the SP after such an
    instruction is UNKNOWN.

    ARM deprecates the use of this instruction with both the LR and the PC in
    the list.

Attempt to diagnose use of deprecated forms of these instructions.  This mirrors
the previous changes to diagnose use of the deprecated forms of STM in ARM mode.

llvm-svn: 224682
2014-12-20 20:25:36 +00:00
David Majnemer
ee1d92da94 This should have been part of r224676.
llvm-svn: 224677
2014-12-20 04:48:34 +00:00
David Majnemer
3da9d34415 InstCombine: Squash an icmp+select into bitwise arithmetic
(X & INT_MIN) == 0 ? X ^ INT_MIN : X  into  X | INT_MIN
(X & INT_MIN) != 0 ? X ^ INT_MIN : X  into  X & INT_MAX
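
A quick spot-check of the two identities in plain C++ (editor's sketch, not
part of the patch):

  #include <cassert>
  #include <cstdint>
  #include <limits>

  int main() {
    const int32_t Min = std::numeric_limits<int32_t>::min(); // INT_MIN
    const int32_t Max = std::numeric_limits<int32_t>::max(); // INT_MAX
    for (int64_t i = -70000; i <= 70000; ++i) {
      int32_t X = static_cast<int32_t>(i);
      // (X & INT_MIN) == 0 ? X ^ INT_MIN : X  ==  X | INT_MIN
      assert((((X & Min) == 0) ? (X ^ Min) : X) == (X | Min));
      // (X & INT_MIN) != 0 ? X ^ INT_MIN : X  ==  X & INT_MAX
      assert((((X & Min) != 0) ? (X ^ Min) : X) == (X & Max));
    }
    return 0;
  }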

This fixes PR21993.

llvm-svn: 224676
2014-12-20 04:45:35 +00:00
David Majnemer
e2cf8f21e0 InstSimplify: Optimize away pointless comparisons
(X & INT_MIN) ? X & INT_MAX : X  into  X & INT_MAX
(X & INT_MIN) ? X : X & INT_MAX  into  X
(X & INT_MIN) ? X | INT_MIN : X  into  X
(X & INT_MIN) ? X : X | INT_MIN  into  X | INT_MIN

llvm-svn: 224669
2014-12-20 03:04:38 +00:00
Chandler Carruth
dbb21c0267 [x86] Change the test added in r223774 to first check the spelling of
the error message for a bogus processor, and then look specifically for
that error message using FileCheck.

I actually tried to write the test this way at first, but drew a blank
on how to ensure the error message stayed in sync (oops). Now that I've
recalled how to do that, this is clearly better.

It also fixes an issue with a malloc implementation that actually prints
to stderr in all cases, which was causing problems for some builders it
seems.

llvm-svn: 224665
2014-12-20 02:19:22 +00:00
Elena Demikhovsky
744da8554e Masked load and store codegen - fixed 128-bit vectors
The codegen failed on 128-bit types on AVX2.
I added patterns in the .td files and tests.

llvm-svn: 224647
2014-12-19 23:27:57 +00:00
Matt Arsenault
a3a9080dce R600/SI: Only form min/max with 1 use.
If the condition is used for something else, this increases
the number of instructions.

llvm-svn: 224646
2014-12-19 23:15:30 +00:00
Kevin Enderby
7664feebc5 Add printing the LC_ROUTINES load commands with llvm-objdump’s -private-headers.
llvm-svn: 224627
2014-12-19 22:25:22 +00:00
Reid Kleckner
be2f2ecb15 Add the ExceptionHandling::MSVC enumeration
It is intended to be used for a family of personality functions that
have similar IR preparation requirements. Typically when interoperating
with MSVC personality functions, bits of functionality need to be
outlined from the main function into helper functions. There is also
usually more than one landing pad per invoke, which does not match the
LLVM IR landingpad representation.

None of this is implemented yet. This change just adds a new enum that
is active for *-windows-msvc and delegates to the EH removal preparation
pass.  No functionality change for other targets.

llvm-svn: 224625
2014-12-19 22:19:48 +00:00
Sanjay Patel
50aab0bca2 Model sqrtss as a binary operation with one source operand tied to the destination (PR14221)
This is a continuation of r167064 ( http://llvm.org/viewvc/llvm-project?view=revision&revision=167064 ).
That patch started to fix PR14221 ( http://llvm.org/bugs/show_bug.cgi?id=14221 ), but it was not completed. 

Differential Revision: http://reviews.llvm.org/D6330

llvm-svn: 224624
2014-12-19 22:16:28 +00:00
Tom Stellard
d0128b6cee R600/SI: Make sure non-inline constants aren't folded into mubuf soffset operand
mubuf instructions now define the soffset field using the SCSrc_32
register class which indicates that only SGPRs and inline constants
are allowed.

llvm-svn: 224622
2014-12-19 22:15:30 +00:00
Kevin Enderby
213e0f76f1 Add printing the LC_SUB_CLIENT load command with llvm-objdump’s -private-headers.
llvm-svn: 224616
2014-12-19 21:06:24 +00:00
Peter Collingbourne
cebc34e511 CodeGen: do not attempt to invalidate virtual registers for zero-sized phis.
llvm-svn: 224615
2014-12-19 20:50:07 +00:00
Colin LeMahieu
4c325f2cef [Hexagon] Removing old variants of instructions and updating references.
llvm-svn: 224612
2014-12-19 20:29:29 +00:00
Sanjay Patel
7cfa420ae4 merge consecutive stores of extracted vector elements
Add a path to DAGCombiner::MergeConsecutiveStores() 
to combine multiple scalar stores when the store operands
are extracted vector elements. This is a partial fix for
PR21711 ( http://llvm.org/bugs/show_bug.cgi?id=21711 ).

For the new test case, codegen improves from:

   vmovss  %xmm0, (%rdi)
   vextractps      $1, %xmm0, 4(%rdi)
   vextractps      $2, %xmm0, 8(%rdi)
   vextractps      $3, %xmm0, 12(%rdi)
   vextractf128    $1, %ymm0, %xmm0
   vmovss  %xmm0, 16(%rdi)
   vextractps      $1, %xmm0, 20(%rdi)
   vextractps      $2, %xmm0, 24(%rdi)
   vextractps      $3, %xmm0, 28(%rdi)
   vzeroupper
   retq

To:

   vmovups	%ymm0, (%rdi)
   vzeroupper
   retq

Patch reviewed by Nadav Rotem.

Differential Revision: http://reviews.llvm.org/D6698

llvm-svn: 224611
2014-12-19 20:23:41 +00:00
Colin LeMahieu
b1509fe128 [Hexagon] Adding bit extraction and table indexing instructions.
llvm-svn: 224610
2014-12-19 20:01:08 +00:00
Colin LeMahieu
7e0cce8462 [Hexagon] Adding bit insertion instructions.
llvm-svn: 224609
2014-12-19 19:54:38 +00:00
Colin LeMahieu
11b6034e2b [Hexagon] Adding more xtype shift instructions.
llvm-svn: 224608
2014-12-19 19:51:35 +00:00
Kevin Enderby
eb0052136f Add printing the LC_SUB_LIBRARY load command with llvm-objdump’s -private-headers.
llvm-svn: 224607
2014-12-19 19:48:16 +00:00
Colin LeMahieu
b6c1e97753 [Hexagon] Adding xtype shift instructions.
llvm-svn: 224604
2014-12-19 19:34:50 +00:00
Colin LeMahieu
16013f08b8 [Hexagon] Adding transfers to and from control registers.
llvm-svn: 224599
2014-12-19 19:06:32 +00:00
Bruno Cardoso Lopes
b25873afd1 Reapply: [InstCombine] Fix visitSwitchInst to use right operand types for sub cstexpr
The visitSwitchInst generates SUB constant expressions to recompute the
switch condition. When truncating the condition to a smaller type, SUB
expressions should use the previous type (before trunc) for both
operands. Also, fix code to also return the modified switch when only
the truncation is performed.

This fixes an assertion crash.

Differential Revision: http://reviews.llvm.org/D6644

rdar://problem/19191835

llvm-svn: 224588
2014-12-19 17:12:35 +00:00
Sanjay Patel
b1bb7100db use -0.0 when creating an fneg instruction
Backends recognize (-0.0 - X) as the canonical form for fneg
and produce better code. Eg, ppc64 with 0.0:

   lis r2, ha16(LCPI0_0)
   lfs f0, lo16(LCPI0_0)(r2)
   fsubs f1, f0, f1
   blr

vs. -0.0:

   fneg f1, f1
   blr
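
A minimal IRBuilder sketch of producing the canonical form (editor's
illustration; emitFNeg is a made-up helper, and an existing builder B and
floating-point value X are assumed):

  #include "llvm/IR/Constants.h"
  #include "llvm/IR/IRBuilder.h"

  using namespace llvm;

  // Build fneg as 'fsub -0.0, X' rather than 'fsub 0.0, X'.
  static Value *emitFNeg(IRBuilder<> &B, Value *X) {
    Constant *NegZero = ConstantFP::getNegativeZero(X->getType());
    return B.CreateFSub(NegZero, X, "fneg");
  }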

Differential Revision: http://reviews.llvm.org/D6723

llvm-svn: 224583
2014-12-19 16:44:08 +00:00
Bruno Cardoso Lopes
ba090bc5c2 Revert "[InstCombine] Fix visitSwitchInst to use right operand types for sub cstexpr"
Reverts commit r224574 to appease buildbots:

The visitSwitchInst generates SUB constant expressions to recompute the
switch condition. When truncating the condition to a smaller type, SUB
expressions should use the previous type (before trunc) for both
operands. This fixes an assertion crash.

llvm-svn: 224576
2014-12-19 14:36:24 +00:00
Bruno Cardoso Lopes
afb4b15ab3 [InstCombine] Fix visitSwitchInst to use right operand types for sub cstexpr
The visitSwitchInst generates SUB constant expressions to recompute the
switch condition. When truncating the condition to a smaller type, SUB
expressions should use the previous type (before trunc) for both
operands. This fixes an assertion crash.

Differential Revision: http://reviews.llvm.org/D6644

rdar://problem/19191835

llvm-svn: 224574
2014-12-19 14:23:15 +00:00
Juergen Ributzka
87fbd21d74 [Object] Don't crash on empty export lists.
Summary: This fixes the exports iterator if the export list is empty.

Reviewers: Bigcheese, kledzik

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D6732

llvm-svn: 224563
2014-12-19 02:31:01 +00:00
Colin LeMahieu
ab28d2b2e6 [Hexagon] Adding loop0/1 sp0/1/2loop0 instructions.
llvm-svn: 224556
2014-12-19 00:06:53 +00:00
David Majnemer
273acc7601 ConstantFold: Shifting undef by zero results in undef
llvm-svn: 224553
2014-12-18 23:54:43 +00:00
Colin LeMahieu
069ff897d0 Reverting 224550, was not ready for commit.
llvm-svn: 224552
2014-12-18 23:36:15 +00:00
Colin LeMahieu
260ab2afbd [Hexagon] Adding loop0/1 sp0/1/2loop0 instructions.
llvm-svn: 224550
2014-12-18 23:27:51 +00:00
Kevin Enderby
61ee8e90b3 Add printing the LC_SUB_UMBRELLA load command with llvm-objdump’s -private-headers.
llvm-svn: 224548
2014-12-18 23:13:26 +00:00
Kevin Enderby
5c57c31db7 Add printing the LC_SUB_FRAMEWORK load command with llvm-objdump’s -private-headers.
llvm-svn: 224534
2014-12-18 19:24:35 +00:00
Jozef Kolek
cd3fff8406 [mips][microMIPS] Fix bugs related to atomic SC/LL instructions
Fix bugs related to atomic microMIPS SC/LL instructions: while expanding atomic
operations, the mips32r2 encoding was emitted instead of the microMIPS one.

Differential Revision: http://reviews.llvm.org/D6659

llvm-svn: 224524
2014-12-18 16:39:29 +00:00
Saleem Abdulrasool
82b43a5896 ARM: fix an off-by-one in the register list access
Fix an off-by-one access introduced in 224502 for push.w and pop.w with single
register operands.  Add test cases for both scenarios.

Thanks to Asiri Rathnayake for pointing out the failure!

llvm-svn: 224521
2014-12-18 16:16:53 +00:00
Toma Tabacu
9813b67bea [mips] Clean up the CodeGen/Mips/inlineasmmemop.ll test. NFC.
Summary:
Improve comments and remove a redundant attribute list.
There are no functional changes (to the CHECK's or to the code).

Part of these changes were suggested in http://reviews.llvm.org/D6637.

Reviewers: dsanders

Reviewed By: dsanders

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D6705

llvm-svn: 224517
2014-12-18 13:03:51 +00:00
Robert Khasanov
0bf2db97cb [AVX512] Enable FP arithmetic lowering for AVX512VL subsets.
Added RegOp2MemOpTable4 to transform the 4th operand from register to memory in merge-masked versions of instructions.
Added lowering tests.

llvm-svn: 224516
2014-12-18 12:28:22 +00:00
Saleem Abdulrasool
0b2c7d3514 ARM: improve instruction validation for thumb mode
The ARM Architecture Reference Manual states the following:
  LDM{,IA,DB}:
    The SP cannot be in the list.
    The PC can be in the list.
    If the PC is in the list:
      • the LR must not be in the list
      • the instruction must be either outside any IT block, or the last
        instruction in an IT block.
  POP:
    The PC can be in the list.
    If the PC is in the list:
      • the LR must not be in the list
      • the instruction must be either outside any IT block, or the last
        instruction in an IT block.
  PUSH:
    The SP and PC can be in the list in ARM instructions, but not in Thumb
    instructions.
  STM{,IA,DB}:
    The SP and PC can be in the list in ARM instructions, but not in Thumb
    instructions.

llvm-svn: 224502
2014-12-18 05:24:38 +00:00
Saleem Abdulrasool
f6fc6efa8f test: avoid unnecessary temporary files
Use pipes and redirect the error output to FileCheck directly.  NFC.

llvm-svn: 224501
2014-12-18 05:24:32 +00:00
Eric Christopher
31f514defb Add a new string member to the TargetOptions struct for the name
of the ABI we should be using. For targets that don't use the
option there's no change; otherwise this allows external users
to set the ABI via string and avoid some of the -backend-option
pain in clang.

Use this option to move the ABI for the ARM port from the
Subtarget to the TargetMachine and update the testcases
accordingly since it's no longer valid to set via -mattr.

llvm-svn: 224492
2014-12-18 02:20:58 +00:00
Eric Christopher
aa3a65c590 Model ARM backend ABI selection after the front end code doing the
same. This will change the "bare metal" ABI from APCS to AAPCS.

The only difference between the front and back end code is that
the code for Triple::GNU was added for environment. That will migrate
to the front end shortly.

Tests updated with the ABI they were originally testing in the case
of bare metal (e.g. -mtriple armv7) or with a -gnu for arm-linux
triples.

llvm-svn: 224489
2014-12-18 02:08:45 +00:00
Duncan P. N. Exon Smith
4c1c83ec6c Reapply "Linker: Drop superseded subprograms"
This reverts commit r224416, reapplying r224389.  The buildbots hadn't
recovered after my revert, waiting until David reverted a couple of his
commits.  It looks like it was just bad timing (where we were both
modifying code related to the same assertion).  Trying again...

Here's the original text:

    When a function gets replaced by `ModuleLinker`, drop superseded
    subprograms.  This ensures that the "first" subprogram pointing at a
    function is the same one that `!dbg` references point at.

    This is a stop-gap fix for PR21910.  Notably, this fixes Release+Asserts
    bootstraps that are currently asserting out in
    `LexicalScopes::initialize()` due to the explicit instantiations in
    `lib/IR/Dominators.cpp` eventually getting replaced by -argpromotion.

llvm-svn: 224487
2014-12-18 01:05:33 +00:00
Kevin Enderby
7c66bebcdf Add printing the LC_LINKER_OPTION load command with llvm-objdump’s -private-headers.
Also corrected the name of the load command to not end in an ’S’, as well as
the name of the MachO::linker_option_command struct and other places that had
the word option as plural, which did not match the Mac OS X headers.

llvm-svn: 224485
2014-12-18 00:53:40 +00:00
Matt Arsenault
7f117abc99 R600/SI: Fix f64 inline immediates
llvm-svn: 224458
2014-12-17 21:04:08 +00:00
Jingyue Wu
17566f954b [NVPTX] Fix bugs related to isSingleValueType
Summary:
With isSingleValueType starting to treat vector types as single-value types,
code that uses this interface needs to be updated.

Test Plan:
vector-global.ll
nvcl-param-align.ll

Reviewers: jholewinski

Reviewed By: jholewinski

Subscribers: llvm-commits, meheff, eliben, jholewinski

Differential Revision: http://reviews.llvm.org/D6573

llvm-svn: 224440
2014-12-17 17:59:04 +00:00
Timur Iskhodzhanov
2f4f744301 Fix CR/LF line endings in test case
llvm-svn: 224437
2014-12-17 17:52:12 +00:00
Saleem Abdulrasool
3a1f685ef5 ARM: correct an off-by-one in an assert
The assert was off-by-one, resulting in failures for valid input.

Thanks to Asiri Rathnayake for pointing out the failure!

llvm-svn: 224432
2014-12-17 16:17:44 +00:00
Michael Kuperstein
3790301d73 [DAGCombine] Slightly improve lowering of BUILD_VECTOR into a shuffle.
This handles the case of a BUILD_VECTOR being constructed out of elements extracted from a vector twice the size of the result vector. Previously this was always scalarized. Now, we try to construct a shuffle node that feeds on extract_subvectors.

This fixes PR15872 and provides a partial fix for PR21711.

Differential Revision: http://reviews.llvm.org/D6678

llvm-svn: 224429
2014-12-17 12:32:17 +00:00
Toma Tabacu
311b69b658 [mips] Set GCC-compatible MIPS assembler options before inline asm blocks.
Summary:
When generating MIPS assembly, LLVM always overrides the default assembler options by emitting the '.set noreorder', '.set nomacro' and '.set noat' directives,
while GCC uses the default options if an assembly-level function contains inline assembly code.

This becomes a problem when the code generated by LLVM is interleaved with inline assembly which assumes GCC-like assembler options (from Linux, for example).

This patch fixes these conflicts by setting the appropriate assembler options at the beginning of an inline asm block and popping them at the end.

Reviewers: dsanders

Reviewed By: dsanders

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D6637

llvm-svn: 224425
2014-12-17 10:56:16 +00:00
Suyog Sarda
ea83428380 Revert 224119 "This patch recognizes (+ (+ v0, v1) (+ v2, v3)), reorders them for bundling into vector of loads,
and vectorizes it." 

This was re-ordering floating-point data types, resulting in a mismatch in the output.

llvm-svn: 224424
2014-12-17 10:34:27 +00:00
Yaron Keren
48c54e0ea2 Teach lit.cfg to recognize -windows-gnu in addition to -mingw32.
llvm-svn: 224421
2014-12-17 09:55:15 +00:00
Elena Demikhovsky
537a8ee728 Added 5 more tests related to sink store revision 224247
- by Ella Bolshinsky

http://reviews.llvm.org/D6420

llvm-svn: 224418
2014-12-17 08:12:59 +00:00
Erik Eckstein
042c032147 Strength reduce intrinsics with overflow into regular arithmetic operations if possible.
Some intrinsics, like s/uadd.with.overflow and umul.with.overflow, are already strength reduced.
This change adds other arithmetic intrinsics: s/usub.with.overflow, smul.with.overflow.
It completes the work on PR20194.
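
For instance, when the operand ranges already rule out overflow, a checked
subtraction can collapse to a plain sub with a constant-false overflow flag
(editor's illustration using the generic Clang/GCC builtin, which lowers to
the corresponding with.overflow intrinsic; the function name is made up):

  #include <cassert>

  // Both operands are widened from [0, 255], so the i32 subtraction can
  // never overflow and the overflow check folds away.
  bool sub_never_overflows(unsigned char a, unsigned char b, int &out) {
    return __builtin_sub_overflow(static_cast<int>(a),
                                  static_cast<int>(b), &out);
  }

  int main() {
    int r;
    assert(!sub_never_overflows(5, 200, r) && r == -195);
    return 0;
  }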

llvm-svn: 224417
2014-12-17 07:29:19 +00:00
Duncan P. N. Exon Smith
759b4ed45d Revert "Linker: Drop superseded subprograms"
This reverts commit r224389.  Based on feedback from the bots, the
assertion seems to be going off *more* often, not less (previously I was
just seeing it in an internal bootstrap, now it's happening in public
builds too).

http://lab.llvm.org:8080/green/job/clang-stage2-configure-Rlto_build/936/
http://lab.llvm.org:8011/builders/sanitizer-x86_64-linux-bootstrap/builds/5325

Reverting in order to investigate.

llvm-svn: 224416
2014-12-17 07:27:31 +00:00
Justin Hibbits
9dd5e8fee1 Add parsing of 'foo@local'.
Summary:
Currently, it supports generating, but not parsing, this expression.
Test added as well.

Test Plan: New test added, no regressions due to this.

Reviewers: hfinkel

Reviewed By: hfinkel

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D6672

llvm-svn: 224415
2014-12-17 06:23:35 +00:00
Duncan P. N. Exon Smith
1a57e7746d llvm-lto: Add testing coverage for local contexts
Add coverage in `llvm-lto` for the API exposed by libLTO to create
modules in local contexts.

The goal here isn't to test the symbol-related API extensively, just to
confirm that these modules work at all.  (I'll be shifting code around
soon that should be NFC and I realized there was no test coverage.)

llvm-svn: 224408
2014-12-17 02:00:38 +00:00
David Majnemer
fe299df41a InstSimplify: shl nsw/nuw undef, %V -> undef
We can always choose a value for undef which might cause %V to shift
out an important bit except for one case, when %V is zero.

However, shl behaves like an identity function when the right hand side
is zero.

llvm-svn: 224405
2014-12-17 01:54:33 +00:00
Quentin Colombet
5896cdb9ff [CodeGenPrepare] Reapply r224351 with a fix for the assertion failure:
The type promotion helper does not support vector types, so it does not
kick in for such cases.

Original commit message:
[CodeGenPrepare] Move sign/zero extensions near loads using type promotion.

This patch extends the optimization in CodeGenPrepare that moves a sign/zero
extension near a load when the target can combine them. The optimization may
promote any operations between the extension and the load to make that possible.

Although this optimization may be beneficial for all targets, in particular
AArch64, this is enabled for X86 only as I have not benchmarked it for other
targets yet.


** Context **

Most targets feature extended loads, i.e., loads that perform a zero or sign
extension for free. In that context it is interesting to expose such pattern in
CodeGenPrepare so that the instruction selection pass can form such loads.
Sometimes, this pattern is blocked because of instructions between the load and
the extension. When those instructions are promotable to the extended type, we
can expose this pattern.


** Motivating Example **

Let us consider an example:
define void @foo(i8* %addr1, i32* %addr2, i8 %a, i32 %b) {
  %ld = load i8* %addr1
  %zextld = zext i8 %ld to i32
  %ld2 = load i32* %addr2
  %add = add nsw i32 %ld2, %zextld
  %sextadd = sext i32 %add to i64
  %zexta = zext i8 %a to i32
  %addza = add nsw i32 %zexta, %zextld
  %sextaddza = sext i32 %addza to i64
  %addb = add nsw i32 %b, %zextld
  %sextaddb = sext i32 %addb to i64
  call void @dummy(i64 %sextadd, i64 %sextaddza, i64 %sextaddb)
  ret void
}

As it is, this IR generates the following assembly on x86_64:
[...]
  movzbl  (%rdi), %eax   # zero-extended load
  movl  (%rsi), %esi     # plain load
  addl  %eax, %esi       # 32-bit add
  movslq  %esi, %rdi     # sign extend the result of add
  movzbl  %dl, %edx      # zero extend the first argument
  addl  %eax, %edx       # 32-bit add
  movslq  %edx, %rsi     # sign extend the result of add
  addl  %eax, %ecx       # 32-bit add
  movslq  %ecx, %rdx     # sign extend the result of add
[...]
The throughput of this sequence is 7.45 cycles on Ivy Bridge according to IACA.

Now, by promoting the additions to form more extended loads we would generate:
[...]
  movzbl  (%rdi), %eax   # zero-extended load
  movslq  (%rsi), %rdi   # sign-extended load
  addq  %rax, %rdi       # 64-bit add
  movzbl  %dl, %esi      # zero extend the first argument
  addq  %rax, %rsi       # 64-bit add
  movslq  %ecx, %rdx     # sign extend the second argument
  addq  %rax, %rdx       # 64-bit add
[...]
The throughput of this sequence is 6.15 cycles on Ivy Bridge according to IACA.

This kind of sequence happens a lot in code using 32-bit indexes on 64-bit
architectures.

Note: The throughput numbers are similar on Sandy Bridge and Haswell.


** Proposed Solution **

To avoid the penalty of all these sign/zero extensions, we merge them in the
loads at the beginning of the chain of computation by promoting all the chain of
computation on the extended type. The promotion is done if and only if we do not
introduce new extensions, i.e., if we do not degrade the code quality.
To achieve this, we extend the existing “move ext to load” optimization with the
promotion mechanism introduced to match larger patterns for addressing mode
(r200947).
The idea of this extension is to perform the following transformation:
ext(promotableInst1(...(promotableInstN(load))))
=>
promotedInst1(...(promotedInstN(ext(load))))

The promotion mechanism in that optimization is enabled by a new TargetLowering
switch, which is off by default. In other words, by default, the optimization
performs the “move ext to load” optimization as it was before this patch.


** Performance **

Configuration: x86_64: Ivy Bridge fixed at 2900MHz running OS X 10.10.
Tested Optimization Levels: O3/Os
Tests: llvm-testsuite + externals.
Results:
- No regression beside noise.
- Improvements:
CINT2006/473.astar:  ~2%
Benchmarks/PAQ8p: ~2%
Misc/perlin: ~3%

The results are consistent for both O3 and Os.

<rdar://problem/18310086>

llvm-svn: 224402
2014-12-17 01:36:17 +00:00
Kevin Enderby
dc6f805541 Add printing the LC_ENCRYPTION_INFO_64 load command with llvm-objdump’s -private-headers
and add tests for the two AArch64 binaries.

llvm-svn: 224400
2014-12-17 01:01:30 +00:00
David Blaikie
93e50409ac PR21875: codegen for non-type template parameters of nullptr_t type
llvm-svn: 224399
2014-12-17 00:43:22 +00:00
Reid Kleckner
b4ee65bf9b Revert "[CodeGenPrepare] Move sign/zero extensions near loads using type promotion."
This reverts commit r224351. It causes assertion failures when building
ICU.

llvm-svn: 224397
2014-12-17 00:29:23 +00:00
Hans Wennborg
37a572f581 SelectionDAG switch lowering: use 'unsigned' to count destination popularity
SwitchInst::getNumCases() returns unsigned, so using uint64_t to count cases
seems unnecessary.

Also fix a missing CHECK in the test case.

llvm-svn: 224393
2014-12-16 23:41:59 +00:00
Colin LeMahieu
55e21b6e4f [Hexagon] Updating doubleword shift usages to new versions.
llvm-svn: 224391
2014-12-16 23:36:15 +00:00
Kevin Enderby
c4d55fc277 Add printing the LC_ENCRYPTION_INFO load command with llvm-objdump’s -private-headers.
llvm-svn: 224390
2014-12-16 23:25:52 +00:00
Duncan P. N. Exon Smith
4efb9deca4 Linker: Drop superseded subprograms
When a function gets replaced by `ModuleLinker`, drop superseded
subprograms.  This ensures that the "first" subprogram pointing at a
function is the same one that `!dbg` references point at.

This is a stop-gap fix for PR21910.  Notably, this fixes Release+Asserts
bootstraps that are currently asserting out in
`LexicalScopes::initialize()` due to the explicit instantiations in
`lib/IR/Dominators.cpp` eventually getting replaced by -argpromotion.

llvm-svn: 224389
2014-12-16 23:23:41 +00:00
Sanjay Patel
8f620fe4b0 fix typo, add spaces; NFC
llvm-svn: 224384
2014-12-16 22:48:42 +00:00
Simon Pilgrim
f9bdd6a092 [X86][SSE] Vector double -> float conversion memory folding (cvtpd2ps)
Added a missing memory folding relationship for the (V)CVTPD2PS instruction - we can safely fold these for stack reloads.

Differential Revision: http://reviews.llvm.org/D6663

llvm-svn: 224383
2014-12-16 22:30:10 +00:00
Sanjay Patel
af93d5f15c merge consecutive loads that are offset from a base address
SelectionDAG::isConsecutiveLoad() was not detecting consecutive loads
when the first load was offset from a base address. 

This patch recognizes that pattern and subtracts the offset before comparing
the second load to see if it is consecutive.

The codegen change in the new test case improves from:

vmovsd	32(%rdi), %xmm0
vmovsd	48(%rdi), %xmm1 
vmovhpd	56(%rdi), %xmm1, %xmm1
vmovhpd	40(%rdi), %xmm0, %xmm0
vinsertf128	$1, %xmm1, %ymm0, %ymm0

To:

vmovups	32(%rdi), %ymm0

An existing test case is also improved from:

vmovsd	(%rdi), %xmm0
vmovsd	16(%rdi), %xmm1
vmovsd	24(%rdi), %xmm2
vunpcklpd	%xmm2, %xmm0, %xmm0 ## xmm0 = xmm0[0],xmm2[0]
vmovhpd	8(%rdi), %xmm1, %xmm3

To:

vmovsd	(%rdi), %xmm0
vmovsd	16(%rdi), %xmm1
vmovhpd	24(%rdi), %xmm0, %xmm0
vmovhpd	8(%rdi), %xmm1, %xmm1

This patch fixes PR21771 ( http://llvm.org/bugs/show_bug.cgi?id=21771 ).

Differential Revision: http://reviews.llvm.org/D6642

llvm-svn: 224379
2014-12-16 21:57:18 +00:00
Kevin Enderby
601ffaa6fb Fix a bug in llvm-objdump’s -private-headers for the LC_VERSION_MIN_IPHONEOS
load command not getting printed.

llvm-svn: 224376
2014-12-16 21:48:27 +00:00
Colin LeMahieu
5cbdca29ae [Hexagon] Adding tstbit/bitclr/bitset instructions.
llvm-svn: 224374
2014-12-16 21:28:58 +00:00
Kostya Serebryany
7693e4617d [sanitizer] prevent function call merging for sanitizer-coverage callbacks
llvm-svn: 224372
2014-12-16 21:24:15 +00:00
Colin LeMahieu
bb5c698516 [Hexagon] Adding bit count and twiddling instructions.
llvm-svn: 224367
2014-12-16 20:57:56 +00:00
Colin LeMahieu
28f6e273c4 [Hexagon] Adding asr/lsr/asl reg/imm, asl with saturation, asr with rounding. Doubleword abs/neg/not. Interleave and deinterleave instructions.
llvm-svn: 224365
2014-12-16 20:40:23 +00:00
JF Bastien
02501293ba x86-32: PUSHF/POPF use/def EFLAGS
Summary: As a side-quest for D6629 jvoung pointed out that I should use -verify-machineinstrs and this found a bug in x86-32's handling of EFLAGS for PUSHF/POPF. This patch fixes the use/def, and adds -verify-machineinstrs to all x86 tests which contain 'EFLAGS'. One exception: this patch leaves inline-asm-fpstack.ll as-is because it fails -verify-machineinstrs in a way unrelated to EFLAGS. This patch also modifies cmpxchg-clobber-flags.ll along the lines of what D6629 already does by also testing i386.

Test Plan: ninja check

Reviewers: t.p.northover, jvoung

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D6687

llvm-svn: 224359
2014-12-16 20:15:45 +00:00
Quentin Colombet
d31121348b [CodeGenPrepare] Move sign/zero extensions near loads using type promotion.
This patch extends the optimization in CodeGenPrepare that moves a sign/zero
extension near a load when the target can combine them. The optimization may
promote any operations between the extension and the load to make that possible.

Although this optimization may be beneficial for all targets, in particular
AArch64, this is enabled for X86 only as I have not benchmarked it for other
targets yet.


** Context **

Most targets feature extended loads, i.e., loads that perform a zero or sign
extension for free. In that context it is interesting to expose such pattern in
CodeGenPrepare so that the instruction selection pass can form such loads.
Sometimes, this pattern is blocked because of instructions between the load and
the extension. When those instructions are promotable to the extended type, we
can expose this pattern.


** Motivating Example **

Let us consider an example:
define void @foo(i8* %addr1, i32* %addr2, i8 %a, i32 %b) {
  %ld = load i8* %addr1
  %zextld = zext i8 %ld to i32
  %ld2 = load i32* %addr2
  %add = add nsw i32 %ld2, %zextld
  %sextadd = sext i32 %add to i64
  %zexta = zext i8 %a to i32
  %addza = add nsw i32 %zexta, %zextld
  %sextaddza = sext i32 %addza to i64
  %addb = add nsw i32 %b, %zextld
  %sextaddb = sext i32 %addb to i64
  call void @dummy(i64 %sextadd, i64 %sextaddza, i64 %sextaddb)
  ret void
}

As it is, this IR generates the following assembly on x86_64:
[...]
  movzbl  (%rdi), %eax   # zero-extended load
  movl  (%rsi), %esi     # plain load
  addl  %eax, %esi       # 32-bit add
  movslq  %esi, %rdi     # sign extend the result of add
  movzbl  %dl, %edx      # zero extend the first argument
  addl  %eax, %edx       # 32-bit add
  movslq  %edx, %rsi     # sign extend the result of add
  addl  %eax, %ecx       # 32-bit add
  movslq  %ecx, %rdx     # sign extend the result of add
[...]
The throughput of this sequence is 7.45 cycles on Ivy Bridge according to IACA.

Now, by promoting the additions to form more extended loads we would generate:
[...]
  movzbl  (%rdi), %eax   # zero-extended load
  movslq  (%rsi), %rdi   # sign-extended load
  addq  %rax, %rdi       # 64-bit add
  movzbl  %dl, %esi      # zero extend the first argument
  addq  %rax, %rsi       # 64-bit add
  movslq  %ecx, %rdx     # sign extend the second argument
  addq  %rax, %rdx       # 64-bit add
[...]
The throughput of this sequence is 6.15 cycles on Ivy Bridge according to IACA.

This kind of sequence happens a lot in code using 32-bit indexes on 64-bit
architectures.

Note: The throughput numbers are similar on Sandy Bridge and Haswell.


** Proposed Solution **

To avoid the penalty of all these sign/zero extensions, we merge them in the
loads at the beginning of the chain of computation by promoting all the chain of
computation on the extended type. The promotion is done if and only if we do not
introduce new extensions, i.e., if we do not degrade the code quality.
To achieve this, we extend the existing “move ext to load” optimization with the
promotion mechanism introduced to match larger patterns for addressing mode
(r200947).
The idea of this extension is to perform the following transformation:
ext(promotableInst1(...(promotableInstN(load))))
=>
promotedInst1(...(promotedInstN(ext(load))))

The promotion mechanism in that optimization is enabled by a new TargetLowering
switch, which is off by default. In other words, by default, the optimization
performs the “move ext to load” optimization as it was before this patch.


** Performance **

Configuration: x86_64: Ivy Bridge fixed at 2900MHz running OS X 10.10.
Tested Optimization Levels: O3/Os
Tests: llvm-testsuite + externals.
Results:
- No regression beside noise.
- Improvements:
CINT2006/473.astar:  ~2%
Benchmarks/PAQ8p: ~2%
Misc/perlin: ~3%

The results are consistent for both O3 and Os.

<rdar://problem/18310086>

llvm-svn: 224351
2014-12-16 19:09:03 +00:00
Robert Khasanov
104b98b388 [AVX512] Enable integer arithmetic lowering for AVX512BW/VL subsets.
Added lowering tests.

llvm-svn: 224349
2014-12-16 18:24:07 +00:00
Ahmed Bougacha
6c3e1c0f56 [MC] Reset the MCInst in the matcher function before adding opcode/operands.
On X86, the Intel asm parser tries to match all memory operand sizes when
none is explicitly specified.  For LEA, which doesn't really have a memory
operand (just a pointer one), this results in multiple successful matches,
one for each memory size.  There's no error because it's the same opcode, so
really, it's just one match.  However, the tablegen'd matcher function
adds opcode/operands to the passed MCInst, and this results in multiple
duplicated operands.

This commit clears the MCInst in the tablegen'd matcher function.
We sometimes clear it when the match failed, so there's no expectation of
keeping the previous content anyway.

Differential Revision: http://reviews.llvm.org/D6670

llvm-svn: 224347
2014-12-16 18:05:28 +00:00
Colin LeMahieu
4932546e48 [Hexagon] Adding absolute value, and negate with saturation
llvm-svn: 224346
2014-12-16 17:44:49 +00:00
Sanjay Patel
8363dd3b42 combine consecutive subvector 16-byte loads into one 32-byte load
This is a fix for PR21709 ( http://llvm.org/bugs/show_bug.cgi?id=21709 ).
When we have 2 consecutive 16-byte loads that are merged into one 32-byte vector,
we can use a single 32-byte load instead. 
But we don't do this for SandyBridge / IvyBridge because they have slower 32-byte memops.
We also don't bother using 32-byte *integer* loads on a machine that only has AVX1 (btver2)
because those operands would have to be split in half anyway since there is no support for
32-byte integer math ops.
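
The pattern in question looks roughly like this in intrinsics form (editor's
illustration; the combine works on the DAG, not on source code like this):

  #include <immintrin.h>

  // Two adjacent 16-byte loads concatenated into one 256-bit value; on
  // targets with fast 32-byte memops this can now become one 32-byte load.
  __m256 load_both_halves(const float *p) {
    __m128 lo = _mm_loadu_ps(p);
    __m128 hi = _mm_loadu_ps(p + 4);
    return _mm256_insertf128_ps(_mm256_castps128_ps256(lo), hi, 1);
  }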

Differential Revision: http://reviews.llvm.org/D6492

llvm-svn: 224344
2014-12-16 16:30:01 +00:00
Colin LeMahieu
585f29d985 [Hexagon] Adding saturate and swizzle instructions.
llvm-svn: 224343
2014-12-16 16:27:17 +00:00
Zoran Jovanovic
d72dae73a8 [mips][microMIPS] Implement SWP and LWP instructions
Differential Revision: http://reviews.llvm.org/D5667

llvm-svn: 224338
2014-12-16 14:59:10 +00:00
Vladimir Medic
3860fde4a8 Add disassembler tests for mips4 platform. There are no functional changes.
llvm-svn: 224335
2014-12-16 13:02:25 +00:00
Elena Demikhovsky
fe73fcc29b Masked Load and Store Intrinsics in loop vectorizer.
The loop vectorizer optimizes loops containing conditional memory
accesses by generating masked load and store intrinsics.
This decision is target dependent.
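
The kind of loop this targets looks roughly like the following (editor's
illustration in C++; with masked intrinsics the conditional access becomes a
masked load/store per vector iteration instead of being scalarized):

  // Conditional store guarded by a per-element predicate.
  void conditional_add(int *a, const int *b, const int *trigger, int n) {
    for (int i = 0; i < n; ++i)
      if (trigger[i] < 100)
        a[i] = b[i] + trigger[i];
  }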

http://reviews.llvm.org/D6527

llvm-svn: 224334
2014-12-16 11:50:42 +00:00
Daniel Sanders
018d1acab3 [mips] Fix arguments-struct.ll for Windows and OSX hosts.
llvm-svn: 224333
2014-12-16 11:21:58 +00:00
Bradley Smith
5d5a40a0f8 [ARM] Prevent PerformVCVTCombine from combining a vmul/vcvt with 8 lanes
This would result in a crash since the vcvt used does not support v8i32 types.

llvm-svn: 224332
2014-12-16 10:59:27 +00:00
Duncan P. N. Exon Smith
58ed764767 IR: Stop printing 'metadata' in Metadata::print()
Stop printing `metadata` in `Metadata::print()` and
`Metadata::printAsOperand()`.

llvm-svn: 224327
2014-12-16 07:40:31 +00:00
Duncan P. N. Exon Smith
c03e29935e DebugInfo: Update testcase to actually check something
This test was missing a `Debug Info Version`, so its `not grep` was
passing vacuously.  Update it to CHECK for something useful at the same
time so it doesn't bitrot quite so easily in the future.

llvm-svn: 224324
2014-12-16 07:08:19 +00:00
Saleem Abdulrasool
c163948b80 ARM: diagnose deprecated syntax
The use of SP and PC in the register list for stores is deprecated on ARM
(ARM ARM A.8.8.199):

  ARM deprecates the use of ARM instructions that include the SP or the PC in
  the list.

Provide a deprecation warning from the assembler in the case that the syntax is
ever seen.

llvm-svn: 224319
2014-12-16 05:53:25 +00:00
Hal Finkel
04ae4c36c5 [PowerPC] Improve instruction selection bit-permuting operations (32-bit)
The PowerPC backend, somewhat embarrassingly, did not generate an
optimal-length sequence of instructions for a 32-bit bswap. While adding a
pattern for the bswap intrinsic to fix this would not have been terribly
difficult, doing so would not have addressed the real problem: we had been
generating poor code for many bit-permuting operations (by which I mean things
like byte-swap that permute the bits of one or more inputs around in various
ways). Here are some initial steps toward solving this deficiency.

Bit-permuting operations are represented, at the SDAG level, using ISD::ROTL,
SHL, SRL, AND and OR (mostly with constant second operands). Looking back
through these operations, we can build up a description of the bits in the
resulting value in terms of bits of one or more input values (and constant
zeros). For each bit, we compute the rotation amount from the original value,
and then group consecutive (value, rotation factor) bits into groups. Groups
sharing these attributes are then collected and sorted, and we can then
instruction select the entire permutation using a combination of masked
rotations (rlwinm), imm ands (andi/andis), and masked rotation inserts
(rlwimi).

The result is that instead of lowering an i32 bswap as:

	rlwinm 5, 3, 24, 16, 23
	rlwinm 4, 3, 24, 0, 7
	rlwimi 4, 3, 8, 8, 15
	rlwimi 5, 3, 8, 24, 31
	rlwimi 4, 5, 0, 16, 31

we now produce:

	rlwinm 4, 3, 8, 0, 31
	rlwimi 4, 3, 24, 16, 23
	rlwimi 4, 3, 24, 0, 7

and for the 'test6' example in the PowerPC/README.txt file:

 unsigned test6(unsigned x) {
   return ((x & 0x00FF0000) >> 16) | ((x & 0x000000FF) << 16);
 }

we used to produce:

	lis 4, 255
	rlwinm 3, 3, 16, 0, 31
	ori 4, 4, 255
	and 3, 3, 4

and now we produce:

	rlwinm 4, 3, 16, 24, 31
	rlwimi 4, 3, 16, 8, 15

and, as a nice bonus, this fixes the FIXME in
test/CodeGen/PowerPC/rlwimi-and.ll.

This commit does not include instruction-selection for i64 operations, those
will come later.

llvm-svn: 224318
2014-12-16 05:51:41 +00:00
Rafael Espindola
f367e92d33 Start adding thin archive support.
This is just sufficient for 'ar t' to work.

llvm-svn: 224307
2014-12-16 01:43:41 +00:00
Kevin Enderby
db5408dea6 Fix a bug in llvm-objdump’s -private-headers for 32-bit Mach-O files
printing the section header.  And add some tests for this for 32-bit files.

llvm-svn: 224302
2014-12-16 01:14:45 +00:00
Adrian Prantl
33921ffabc ARM/AArch64: Attach the FrameSetup MIFlag to CFI instructions.
Debug info marks the first instruction without the FrameSetup flag
as being the end of the function prologue. Any CFI instructions in the
middle of the function prologue would cause debug info to end the prologue
too early and worse, attach the line number of the CFI instruction, which
incidentally is often 0.

llvm-svn: 224294
2014-12-16 00:20:49 +00:00
Colin LeMahieu
4c0e2a35a6 [Hexagon] Adding doubleword multiplies with and without accumulation.
llvm-svn: 224293
2014-12-16 00:07:24 +00:00
Colin LeMahieu
0a4e0a7b23 [Hexagon] Adding halfword to doubleword multiplies.
llvm-svn: 224289
2014-12-15 23:29:37 +00:00
Colin LeMahieu
b56764d577 [Hexagon] Adding logical-logical accumulation instructions and tests.
llvm-svn: 224288
2014-12-15 23:19:07 +00:00
Sanjoy Das
0cdde3ea1f Teach ScalarEvolution to exploit min and max expressions when proving
isKnownPredicate.

The motivation for this change is to optimize away checks in loops
like this:

    limit = min(t, len)
    for (i = 0 to limit)
      if (i >= len || i < 0) throw_array_out_of_bounds();
      a[i] = ...
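
In C++ terms (editor's illustration; the out-of-range throw stands in for
whatever the runtime does on a failed bounds check):

  #include <algorithm>
  #include <stdexcept>

  void fill(int *a, int len, int t) {
    int limit = std::min(t, len);
    for (int i = 0; i < limit; ++i) {
      // Given limit = min(t, len) and 0 <= i < limit, ScalarEvolution can
      // now prove i < len and i >= 0, so this check is provably dead.
      if (i >= len || i < 0)
        throw std::out_of_range("array index");
      a[i] = i;
    }
  }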

Differential Revision: http://reviews.llvm.org/D6635

llvm-svn: 224285
2014-12-15 22:50:15 +00:00
Simon Pilgrim
1fd72b137f Added missing tests for X86vzmovl folding. NFC.
llvm-svn: 224284
2014-12-15 22:45:48 +00:00
JF Bastien
27a63b4d77 x86: Emit LOCK prefix after DATA16
Summary: x86 allows either ordering for the LOCK and DATA16 prefixes, but using GCC+GAS leads to different code generation than using LLVM. This change matches the order that GAS emits the x86 prefixes when a semicolon isn't used in inline assembly (see tc-i386.c comment before define LOCK_PREFIX), and helps simplify tooling that operates on the instruction's byte sequence (such as NaCl's validator). This change shouldn't have any performance impact.

Test Plan: ninja check

Reviewers: craig.topper, jvoung

Subscribers: jfb, llvm-commits

Differential Revision: http://reviews.llvm.org/D6630

llvm-svn: 224283
2014-12-15 22:34:58 +00:00
Colin LeMahieu
a6e921963f [Hexagon] Adding a number of additional multiply forms with tests.
llvm-svn: 224282
2014-12-15 22:10:37 +00:00
Colin LeMahieu
cb4ac18de9 [Hexagon] Adding misc multiply encodings and tests.
llvm-svn: 224273
2014-12-15 21:17:03 +00:00
Colin LeMahieu
cfd931a5a2 [Hexagon] Adding doubleworld accumulating multiplies of halfwords.
llvm-svn: 224267
2014-12-15 20:17:46 +00:00
Colin LeMahieu
410a9d158e [Hexagon] Adding accumulating half word multiplies.
llvm-svn: 224266
2014-12-15 20:10:28 +00:00
Colin LeMahieu
5b550fb31a [Hexagon] Adding multiply with rnd/sat/rndsat
llvm-svn: 224265
2014-12-15 20:01:59 +00:00
Ahmed Bougacha
9d970e1dc6 [X86] And also test INSERTPS shuffle mask pretty-printing.
For r224260.

llvm-svn: 224264
2014-12-15 19:47:35 +00:00
Colin LeMahieu
4225ddfd4f [Hexagon] Adding encoding bits for halfword multiplies.
llvm-svn: 224261
2014-12-15 19:22:07 +00:00
Duncan P. N. Exon Smith
9c5542c040 IR: Make metadata typeless in assembly
Now that `Metadata` is typeless, reflect that in the assembly.  These
are the matching assembly changes for the metadata/value split in
r223802.

  - Only use the `metadata` type when referencing metadata from a call
    intrinsic -- i.e., only when it's used as a `Value`.

  - Stop pretending that `ValueAsMetadata` is wrapped in an `MDNode`
    when referencing it from call intrinsics.

So, assembly like this:

    define void @foo(i32 %v) {
      call void @llvm.foo(metadata !{i32 %v}, metadata !0)
      call void @llvm.foo(metadata !{i32 7}, metadata !0)
      call void @llvm.foo(metadata !1, metadata !0)
      call void @llvm.foo(metadata !3, metadata !0)
      call void @llvm.foo(metadata !{metadata !3}, metadata !0)
      ret void, !bar !2
    }
    !0 = metadata !{metadata !2}
    !1 = metadata !{i32* @global}
    !2 = metadata !{metadata !3}
    !3 = metadata !{}

turns into this:

    define void @foo(i32 %v) {
      call void @llvm.foo(metadata i32 %v, metadata !0)
      call void @llvm.foo(metadata i32 7, metadata !0)
      call void @llvm.foo(metadata i32* @global, metadata !0)
      call void @llvm.foo(metadata !3, metadata !0)
      call void @llvm.foo(metadata !{!3}, metadata !0)
      ret void, !bar !2
    }
    !0 = !{!2}
    !1 = !{i32* @global}
    !2 = !{!3}
    !3 = !{}

I wrote an upgrade script that handled almost all of the tests in llvm
and many of the tests in cfe (even handling many `CHECK` lines).  I've
attached it (or will attach it in a moment if you're speedy) to PR21532
to help everyone update their out-of-tree testcases.

This is part of PR21532.

llvm-svn: 224257
2014-12-15 19:07:53 +00:00
Reid Kleckner
af63007e28 Move mips1 tests to test/MC/Disassembler/Mips/mips1
This matches the pattern of the mips2 and 3 tests, as well as our normal
conventions.

llvm-svn: 224254
2014-12-15 17:56:02 +00:00
Vladimir Medic
ab769bd4d1 Add disassembler tests for mips3 platform. There are no functional changes.
llvm-svn: 224253
2014-12-15 16:19:34 +00:00
Vladimir Medic
e80ff03b83 Add disassembler tests for mips2 platform. There are no functional changes.
llvm-svn: 224252
2014-12-15 15:58:20 +00:00
Vladimir Medic
f681904c24 This is the first in a series of patches that add missing disassembler tests for the mips platform. The patches are divided per version of mips CPU to keep the patches smaller and ease the review. There are no functional changes; code is changed only if new tests reveal a bug. This patch adds disassembler tests for the mips1 CPU.
llvm-svn: 224251
2014-12-15 15:22:33 +00:00
Elena Demikhovsky
003ce257e6 Added a test related to 224247 revision
llvm-svn: 224248
2014-12-15 14:14:10 +00:00
Michael Kuperstein
cc87d705cb [X86] Break false dependencies before partial register updates when the source operand is in memory
Adds the various "rm" instruction variants into the list of instructions that have a partial register update. Also adds all variants of SQRTSD that were missing in the original list.

Differential Revision: http://reviews.llvm.org/D6620

llvm-svn: 224246
2014-12-15 13:18:21 +00:00
Suyog Sarda
c0c2a062e1 Typo Correction in Test Case. NFC.
llvm-svn: 224244
2014-12-15 12:19:46 +00:00
Elena Demikhovsky
51c511a201 AVX-512: Added EXPAND instructions and intrinsics.
llvm-svn: 224241
2014-12-15 10:03:52 +00:00
Hal Finkel
acf8e7a584 [PowerPC] Handle cmp op promotion for SELECT[_CC] nodes in PPCTL::DAGCombineExtBoolTrunc
PPCTargetLowering::DAGCombineExtBoolTrunc contains logic to remove unwanted
truncations and extensions when dealing with nodes of the form:
  zext(binary-ops(binary-ops(trunc(x), trunc(y)), ...))

There was a FIXME in the implementation (now removed) regarding the fact that
the function would abort the transformations if any of the non-output operands
of a SELECT or SELECT_CC node would need to be promoted (because they were
also output operands, for example). As a result, we continued to generate
unnecessary zero-extends for code such as this:

  unsigned foo(unsigned a, unsigned b) {
    return  (a <= b) ? a : b;
  }

which would produce:

  cmplw 0, 3, 4
  isel 3, 4, 3, 1
  rldicl 3, 3, 0, 32
  blr

and now we produce:

  cmplw 0, 3, 4
  isel 3, 4, 3, 1
  blr

which is better in the obvious way.

llvm-svn: 224213
2014-12-14 05:53:19 +00:00
Ahmed Bougacha
88111b0889 Reapply "[ARM] Combine base-updating/post-incrementing vector load/stores."
r223862 tried to also combine base-updating load/stores.
r224198 reverted it, as "it created a regression on the test-suite
on test MultiSource/Benchmarks/Ptrdist/anagram by scrambling the order
in which the words are shown."
Reapply, with a fix to ignore non-normal load/stores.
Truncstores are handled elsewhere (you can actually write a pattern for
those, whereas for postinc loads you can't, since they return two values),
but it should be possible to also combine extloads base updates, by checking
that the memory (rather than result) type is of the same size as the addend.

Original commit message:
We used to only combine intrinsics, and turn them into VLD1_UPD/VST1_UPD
when the base pointer is incremented after the load/store.

We can do the same thing for generic load/stores.

Note that we can only combine the first load/store+adds pair in
a sequence (as might be generated for a v16f32 load for instance),
because other combines turn the base pointer addition chain (each
computing the address of the next load, from the address of the last
load) into independent additions (common base pointer + this load's
offset).

Differential Revision: http://reviews.llvm.org/D6585

llvm-svn: 224203
2014-12-13 23:22:12 +00:00
Renato Golin
3418b50014 Revert "[ARM] Combine base-updating/post-incrementing vector load/stores."
This reverts commit r223862, as it created a regression on the test-suite
on test MultiSource/Benchmarks/Ptrdist/anagram by scrambling the order
in which the words are shown. We'll investigate the issue and re-apply
when safe.

llvm-svn: 224198
2014-12-13 20:23:18 +00:00
Akira Hatanaka
e6d6f49584 Rename argument strings of codegen passes to avoid collisions with command line
options.

This commit changes the command line arguments (PassInfo::PassArgument) of two
passes, MachineFunctionPrinter and MachineScheduler, to avoid collisions with
command line options that have the same argument strings.

This bug manifests when the PassList construct (defined in opt.cpp) is used
in a tool that links with codegen passes. To reproduce the bug, paste the
following lines into llc.cpp and run llc.

#include "llvm/IR/LegacyPassNameParser.h"
static llvm::cl::list<const llvm::PassInfo*, bool, llvm::PassNameParser>
PassList(llvm::cl::desc("Optimizations available:"));

rdar://problem/19212448

llvm-svn: 224186
2014-12-13 04:52:04 +00:00
Hal Finkel
30da0a42c8 [PowerPC] Add a DAGToDAG peephole to remove unnecessary zero-exts
On PPC64, we end up with lots of i32 -> i64 zero extensions, not only from all
of the usual places, but also from the ABI, which specifies that values passed
are zero extended. Almost all 32-bit PPC instructions in PPC64 mode are defined
to do *something* to the higher-order bits, and for some instructions, that
action clears those bits (thus providing a zero-extended result). This is
especially common after rotate-and-mask instructions. Adding an additional
instruction to zero-extend the results of these instructions is unnecessary.

This PPCISelDAGToDAG peephole optimization examines these zero-extensions, and
looks back through their operands to see if all instructions will implicitly
zero extend their results. If so, we convert these instructions to their 64-bit
variants (which is an internal change only, the actual encoding of these
instructions is the same as the original 32-bit ones) and remove the
unnecessary zero-extension (changing where the INSERT_SUBREG instructions are
to make everything internally consistent).
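
A small C++ example of code that used to pick up a redundant zero-extend
(editor's illustration, not a test from the patch):

  // The 32-bit mask is selected to a rotate-and-mask (rlwinm-style)
  // instruction that already clears the high 32 bits, so a separate
  // zero-extension of the result to 64 bits is unnecessary.
  unsigned long long low_byte(unsigned int x) {
    return x & 0xFFu;
  }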

llvm-svn: 224169
2014-12-12 23:59:36 +00:00
David Majnemer
70ded5026c ValueTracking: Don't recurse too deeply in computeKnownBitsFromAssume
Respect the MaxDepth recursion limit, doing otherwise will trigger an
assert in computeKnownBits.

This fixes PR21891.

llvm-svn: 224168
2014-12-12 23:59:29 +00:00
Chad Rosier
b5c6a89ee6 [ARMConstantIsland] Insert tbb/tbh optimization where previous jump table resided.
llvm-svn: 224165
2014-12-12 23:27:40 +00:00
Colin LeMahieu
e750275948 [Hexagon] Adding double word add/min/minu/max/maxu instructions and tests.
llvm-svn: 224153
2014-12-12 21:29:25 +00:00
Nico Weber
b1b6a861e5 Revert r224149, llvm-dsymutil was already here.
I saw a failure on an internal bot, opened this file, saw it was missing,
thought "aha!", tried to land, got a "file is out of date", synced, didn't see
the file listed right above the line I added (cause I didn't add it in the
right place) and landed. Apologies!

llvm-svn: 224152
2014-12-12 21:25:07 +00:00
Colin LeMahieu
980129091a [Hexagon] Adding J class call instructions.
llvm-svn: 224150
2014-12-12 21:12:27 +00:00
Nico Weber
5b22824a0f Add llvm-dsymutil to test/CMakeLists.txt
r224134 added this and runs it from a test, but doesn't build it with test
binaries.

llvm-svn: 224149
2014-12-12 20:56:49 +00:00
Reid Kleckner
516fcdc230 Relax debug-map-parsing.test error message check for Windows
On Windows we get the string "no such file or directory".

llvm-svn: 224141
2014-12-12 18:52:07 +00:00
Frederic Riss
3654a51c98 Initial dsymutil tool commit.
The goal of this tool is to replicate Darwin's dsymutil functionality
based on LLVM. dsymutil is a DWARF linker. Darwin's linker (ld64) does
not link the debug information; it leaves it in the object files in
relocatable form, but embeds a `debug map` into the executable that
describes where to find the debug information and how to relocate it.
When releasing/archiving a binary, dsymutil is called to link all the DWARF
information into a `dsym bundle` that can be distributed/stored along with
the binary.

With this commit, the LLVM based dsymutil is just able to parse the STABS
debug maps embedded by ld64 in linked binaries (and not all of them, for
example archives aren't supported yet).

Note that the tool directory is called dsymutil, but the executable is
currently called llvm-dsymutil. This discrepancy will disappear once the
tool will be feature complete. At this point the executable will be renamed
to dsymutil, but until then you do not want it to override the system one.

    Differential Revision: http://reviews.llvm.org/D6242

llvm-svn: 224134
2014-12-12 17:31:24 +00:00