Commit Graph

12153 Commits

Kalle Raiskila
070fb5e54d Allow sign-extending of i8 and i16 to i128 on SPU.
llvm-svn: 123912
2011-01-20 15:49:06 +00:00
Duncan Sands
1faa8712c9 At -O1/-O2/-O3 the early-cse pass runs before instcombine. According to my
auto-simplifier the transform most missed by early-cse is (zext X) != 0 -> X != 0.
This patch adds this transform and some related logic to InstructionSimplify
and removes some of the logic from instcombine (unfortunately not all because
there are several situations in which instcombine can improve things by making
new instructions, whereas instsimplify is not allowed to do this).  At -O2 this
often results in more than 15% more simplifications by early-cse and eliminates
hundreds of lines of bitcode from the testsuite.  I did see some
small negative effects in the testsuite, for example a few additional instructions
in three programs.  One program, 483.xalancbmk, got an additional 35 instructions,
which seems to be due to a function getting an additional instruction and then
being inlined all over the place.

llvm-svn: 123911
2011-01-20 13:21:55 +00:00
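A minimal IR sketch of the transform (hypothetical function, not the committed testcase):

define i1 @zext_ne_zero(i8 %X) {
  %ext = zext i8 %X to i32
  %cmp = icmp ne i32 %ext, 0      ; (zext X) != 0
  ret i1 %cmp
}

; With this patch instsimplify folds the compare onto the narrow value:
;   %cmp = icmp ne i8 %X, 0
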
Eric Christopher
f7579ff174 Expand invalid return values for umulo and smulo. Handle these similarly
to add/sub by doing the normal operation and then checking for overflow
afterwards. This generally relies on the DAG handling the later invalid
operations as well.

Fixes the 64-bit part of rdar://8622122 and rdar://8774702.

llvm-svn: 123908
2011-01-20 08:54:28 +00:00
Evan Cheng
5c5e42a878 Add test.
llvm-svn: 123906
2011-01-20 08:38:21 +00:00
Evan Cheng
6dc21c7358 Sorry, several patches in one.
TargetInstrInfo:
Change produceSameValue() to take MachineRegisterInfo as an optional argument.
When in SSA form, targets can use it to perform more aggressive equality analysis.

Machine LICM:
1. Eliminate isLoadFromConstantMemory, use MI.isInvariantLoad instead.
2. Fix a bug which prevented CSE of instructions which are not re-materializable.
3. Use improved form of produceSameValue.

ARM:
1. Teach ARM produceSameValue to look past some PIC labels.
2. Look for operands from different loads of different constant pool entries
   which have same values.
3. Re-implement PIC GA materialization using movw + movt. Combine the pair with
   a "add pc" or "ldr [pc]" to form pseudo instructions. This makes it possible
   to re-materialize the instruction, allows machine LICM to hoist the set of
   instructions out of the loop, and makes it possible to CSE them. It's a bit
   hacky, but it significantly improves code quality.
4. Some minor bug fixes as well.

With the fixes, using movw + movt to materialize GAs significantly outperforms the
load-from-constant-pool method. 186.crafty and 255.vortex improved by > 20%, 254.gap
and 176.gcc by ~10%.

llvm-svn: 123905
2011-01-20 08:34:58 +00:00
Michael J. Spencer
c3f075e648 Object: Add some tests!
llvm-svn: 123899
2011-01-20 06:39:15 +00:00
Venkatraman Govindaraju
5280b2876f Sparc backend: Implements a delay slot filler that attempts to fill delay slots
with useful instructions.

llvm-svn: 123884
2011-01-20 05:08:26 +00:00
Eric Christopher
1b0e5debb4 If we can, lower the multiply part of a umulo/smulo call to a libcall
with an invalid type, then split the result and perform the overflow check
normally.

Fixes the 32-bit parts of rdar://8622122 and rdar://8774702.

llvm-svn: 123864
2011-01-20 00:29:24 +00:00
Devang Patel
729c5e59af Fix debug info for merged global.
llvm-svn: 123862
2011-01-20 00:02:16 +00:00
Nick Lewycky
51c13384f5 Similarly, analyze truncate through multiply.
llvm-svn: 123842
2011-01-19 18:56:00 +00:00
Nick Lewycky
9867e58096 Add a missed SCEV fold that is required to continue analyzing the IR produced
by indvars through the scev expander.

trunc(add x, y) --> add(trunc x, y). Currently SCEV largely folds the other way,
which is probably wrong but preserved to minimize churn. Instcombine doesn't
do this fold either, demonstrating a missed optimization opportunity on code doing
add+trunc+add.

llvm-svn: 123838
2011-01-19 16:59:46 +00:00
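As a hedged sketch (hypothetical function), the fold lets SCEV push the truncate through the add:

define i32 @trunc_of_add(i64 %x, i64 %y) {
  %s = add i64 %x, %y
  %t = trunc i64 %s to i32
  ret i32 %t
}

; SCEV for %t: (trunc i64 (%x + %y) to i32)
;   --> ((trunc i64 %x to i32) + (trunc i64 %y to i32))
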
Bruno Cardoso Lopes
0f7a30b1cb Fix the encoding of mrrc and mcrr family of instructions. Also add testcases for mcr and mrc
llvm-svn: 123837
2011-01-19 16:56:52 +00:00
Rafael Espindola
ce499efe1d Add unnamed_addr when we can show that address of a global is not used.
llvm-svn: 123834
2011-01-19 16:32:21 +00:00
Nick Lewycky
5a538b62ca Add a missing SCEV simplification sext(zext x) --> zext x.
llvm-svn: 123832
2011-01-19 15:56:12 +00:00
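A small sketch of the simplification (hypothetical function): the operand of the sext is already zero-extended, so its sign bit is known to be clear.

define i64 @sext_of_zext(i8 %x) {
  %z = zext i8 %x to i16
  %s = sext i16 %z to i64      ; sext(zext x) --> zext x
  ret i64 %s
}

; SCEV for %s: (sext i16 (zext i8 %x to i16) to i64)
;   --> (zext i8 %x to i64)
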
Owen Anderson
ed4acd59cb When matching asm operands, always try to match the most restricted type first.
Unfortunately, while this is the "right" thing to do, it breaks some ARM
asm parsing tests because MemMode5 and ThumbMemModeReg are ambiguous.  This
is tricky to resolve since neither is a subset of the other.

XFAIL the test for now.  The old way was broken in other ways, just ways
we didn't happen to be testing, and our ARM asm parsing is going to require
significant revisiting at a later point anyway.

llvm-svn: 123786
2011-01-18 23:01:21 +00:00
Bruno Cardoso Lopes
e0f8fee637 Create two new generic classes to represent the following VMRS/VMSR variations:
vmrs  reg, fpexc
vmrs  reg, fpsid
vmsr  fpexc, reg
vmsr  fpsid, reg

llvm-svn: 123783
2011-01-18 21:58:20 +00:00
Bruno Cardoso Lopes
82c6fe3dfe Fix MRS encoding for arm and thumb.
llvm-svn: 123778
2011-01-18 21:31:35 +00:00
Bruno Cardoso Lopes
6e4c5af01e Fix the encoding of t2ISB by using the right class and also parse it correctly
llvm-svn: 123776
2011-01-18 21:17:09 +00:00
Dan Gohman
df668227fb Teach BasicAA to return PartialAlias in cases where both pointers
are pointing to the same object, one pointer is accessing the entire
object, and the other access has a non-zero size. This prevents
TBAA from kicking in and saying NoAlias in such cases.

llvm-svn: 123775
2011-01-18 21:16:06 +00:00
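A hedged sketch of the kind of pointer pair this affects (hypothetical snippet, not the committed testcase):

  %obj = alloca i64
  %p8  = bitcast i64* %obj to i8*
  store i64 0, i64* %obj       ; accesses the entire object
  store i8  1, i8* %p8         ; non-zero-size access inside the same object

; The two accesses must overlap, so BasicAA now answers PartialAlias,
; and TBAA no longer overrides the result with NoAlias.
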
Bruno Cardoso Lopes
c1e21b06b9 Follow the current hack set and enable the correct parsing of bkpt while in thumb mode.
llvm-svn: 123772
2011-01-18 20:55:11 +00:00
Chris Lattner
4832a9d32c fix rdar://8878965, a regression I introduced with the recent
llvm.objectsize changes.

llvm-svn: 123771
2011-01-18 20:53:04 +00:00
Bruno Cardoso Lopes
94247155c4 Add support for parsing and encoding ARM's official syntax for the BFI instruction
llvm-svn: 123770
2011-01-18 20:45:56 +00:00
Bruno Cardoso Lopes
6c5db0236a Add support for mips32 madd and msub instructions. Patch by Akira Hatanaka
llvm-svn: 123760
2011-01-18 19:29:17 +00:00
Duncan Sands
732cb58b61 For completeness, generalize the (X + Y) - Y -> X transform and add X - (X + 1) -> -1.
These were not recommended by my auto-simplifier since they don't fire often enough.
However they do fire from time to time, for example they remove one subtraction from
the final bitcode for 483.xalancbmk.

llvm-svn: 123755
2011-01-18 11:50:19 +00:00
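Minimal IR sketches of the two transforms (hypothetical functions):

define i32 @sub_of_add(i32 %X, i32 %Y) {
  %a = add i32 %X, %Y
  %r = sub i32 %a, %Y          ; (X + Y) - Y  -->  X
  ret i32 %r
}

define i32 @sub_of_succ(i32 %X) {
  %a = add i32 %X, 1
  %r = sub i32 %X, %a          ; X - (X + 1)  -->  -1
  ret i32 %r
}
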
Duncan Sands
2abe6f500f Simplify (X<<1)-X into X. According to my auto-simplifier this is the most common missed
simplification in fully optimized code.  It occurs sporadically in the testsuite, and
many times in 403.gcc: the final bitcode has 131 fewer subtractions after this change.
The reason that the multiplies are not eliminated is the same reason that instcombine
did not catch this: they are used by other instructions (instcombine catches this with
a more general transform which in general is only profitable if the operands have only
one use).

llvm-svn: 123754
2011-01-18 09:24:58 +00:00
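A minimal sketch (hypothetical function); the shift itself stays when it has other uses, only the subtraction is simplified:

define i32 @shl1_minus_x(i32 %X) {
  %t = shl i32 %X, 1           ; X << 1 is 2*X
  %r = sub i32 %t, %X          ; (X << 1) - X  -->  X
  ret i32 %r
}
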
Daniel Dunbar
ba39b2fdc1 McARM: Start marking T2 address operands as such, for the benefit of the parser.
llvm-svn: 123722
2011-01-18 03:06:03 +00:00
Benjamin Kramer
869dc645f1 Fix an off-by-one error in ctpop combining.
llvm-svn: 123664
2011-01-17 18:00:28 +00:00
Devang Patel
ec7c842bfa Update tests to accommodate unnamed_addr introduction.
llvm-svn: 123663
2011-01-17 17:54:17 +00:00
Benjamin Kramer
e9488ed8eb Add a DAGCombine to turn (ctpop x) u< 2 into (x & x-1) == 0.
This shaves off 4 popcounts from the hacked 186.crafty source.

This is enabled even when a native popcount instruction is available. The
combined code is one operation longer but it should be faster nevertheless.

llvm-svn: 123621
2011-01-17 12:04:57 +00:00
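The combine runs on the DAG; expressed at the IR level for readability (hypothetical function):

declare i32 @llvm.ctpop.i32(i32)

define i1 @popcount_lt_2(i32 %x) {
  %c = call i32 @llvm.ctpop.i32(i32 %x)
  %r = icmp ult i32 %c, 2      ; at most one bit set?
  ret i1 %r
}

; becomes the classic "power of two or zero" test:
;   %m = add i32 %x, -1
;   %a = and i32 %x, %m
;   %r = icmp eq i32 %a, 0
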
Kalle Raiskila
8eaf0e83d5 Don't crash SPU BE on memory accesses with big alignment.
llvm-svn: 123620
2011-01-17 11:59:20 +00:00
Evan Cheng
53ec6fc591 Materialize GA addresses with movw + movt pairs for Darwin in PIC mode. e.g.
movw    r0, :lower16:(L_foo$non_lazy_ptr-(LPC0_0+4))
        movt    r0, :upper16:(L_foo$non_lazy_ptr-(LPC0_0+4))
LPC0_0:
        add     r0, pc, r0

It's not yet enabled by default as some tests are failing. I suspect bugs in
downstream tools.

llvm-svn: 123619
2011-01-17 08:03:18 +00:00
Nick Lewycky
8f0b243661 Test for lazy value info's ability to prove the absence of NULLs in pointers.
llvm-svn: 123601
2011-01-16 21:57:20 +00:00
Michael J. Spencer
3ea4ed2e6b Make everyone happy this time.
llvm-svn: 123599
2011-01-16 21:34:34 +00:00
Anders Carlsson
c9781e5764 Teach DAE to look for functions whose arguments are unused, and change all callers to pass in an undef value instead.
llvm-svn: 123596
2011-01-16 21:25:33 +00:00
Michael J. Spencer
b0510b04d5 Try and fix this test. For some reason llvm-ar thinks that the file exists when
it shouldn't, but I have no way to verify that it doesn't actually exist on the
buildbot.

llvm-svn: 123594
2011-01-16 20:52:58 +00:00
Rafael Espindola
9afb7af08a Update tests.
llvm-svn: 123591
2011-01-16 18:02:57 +00:00
Rafael Espindola
41852873f7 Don't merge two constants if we care about the address of both.
This fixes the original testcase in PR8927. It also causes a clang
binary built with a patched clang to increase in size by 0.21%.

We can probably get some of the size back by writing a pass that
detects that a global never has its pointer compared and adds
unnamed_addr to it (maybe extend global opt). It is also possible that
there are some other cases clang could add unnamed_addr to.

I will investigate extending globalopt next.

llvm-svn: 123584
2011-01-16 17:05:09 +00:00
Owen Anderson
b86db71ad0 Reduce and merge testcases.
llvm-svn: 123579
2011-01-16 09:13:31 +00:00
Chris Lattner
dde85de90f fix PR8514, a bug where the "heroic" transformation of shift/and
into and/shift could cause nodes to move around, leaving a dangling
pointer.  The code tried to avoid this with a HandleSDNode, but 
got the details wrong.

llvm-svn: 123578
2011-01-16 08:48:11 +00:00
Chris Lattner
a4454efc85 fix PR8932, a case where arg promotion could infinitely promote.
llvm-svn: 123574
2011-01-16 08:09:24 +00:00
Chris Lattner
29f339f87c if an alloca is only ever accessed as a unit, and is accessed with load/store instructions,
then don't try to decimate it into its individual pieces.  This will just make a mess of the
IR and is pointless if none of the elements are individually accessed.  This was generating
really terrible code for std::bitset (PR8980) because it happens to be lowered by clang
as an {[8 x i8]} structure instead of {i64}.

The testcase now is optimized to:

define i64 @test2(i64 %X) {
  br label %L2

L2:                                               ; preds = %0
  ret i64 %X
}

before we generated:

define i64 @test2(i64 %X) {
  %sroa.store.elt = lshr i64 %X, 56
  %1 = trunc i64 %sroa.store.elt to i8
  %sroa.store.elt8 = lshr i64 %X, 48
  %2 = trunc i64 %sroa.store.elt8 to i8
  %sroa.store.elt9 = lshr i64 %X, 40
  %3 = trunc i64 %sroa.store.elt9 to i8
  %sroa.store.elt10 = lshr i64 %X, 32
  %4 = trunc i64 %sroa.store.elt10 to i8
  %sroa.store.elt11 = lshr i64 %X, 24
  %5 = trunc i64 %sroa.store.elt11 to i8
  %sroa.store.elt12 = lshr i64 %X, 16
  %6 = trunc i64 %sroa.store.elt12 to i8
  %sroa.store.elt13 = lshr i64 %X, 8
  %7 = trunc i64 %sroa.store.elt13 to i8
  %8 = trunc i64 %X to i8
  br label %L2

L2:                                               ; preds = %0
  %9 = zext i8 %1 to i64
  %10 = shl i64 %9, 56
  %11 = zext i8 %2 to i64
  %12 = shl i64 %11, 48
  %13 = or i64 %12, %10
  %14 = zext i8 %3 to i64
  %15 = shl i64 %14, 40
  %16 = or i64 %15, %13
  %17 = zext i8 %4 to i64
  %18 = shl i64 %17, 32
  %19 = or i64 %18, %16
  %20 = zext i8 %5 to i64
  %21 = shl i64 %20, 24
  %22 = or i64 %21, %19
  %23 = zext i8 %6 to i64
  %24 = shl i64 %23, 16
  %25 = or i64 %24, %22
  %26 = zext i8 %7 to i64
  %27 = shl i64 %26, 8
  %28 = or i64 %27, %25
  %29 = zext i8 %8 to i64
  %30 = or i64 %29, %28
  ret i64 %30
}

In this case, instcombine was able to eliminate the nonsense, but in PR8980 enough
PHIs are in play that instcombine backs off.  It's better to not generate this stuff
in the first place.

llvm-svn: 123571
2011-01-16 06:18:28 +00:00
Chris Lattner
2067fb2a93 enhance FoldOpIntoPhi in instcombine to try harder when a phi has
multiple uses.  In some cases, all the uses are the same operation,
so instcombine can go ahead and promote the phi.  In the testcase
this pushes an add out of the loop.

llvm-svn: 123568
2011-01-16 05:28:59 +00:00
Evan Cheng
144b435a15 Spill R4 if it's going to be used to restore SP from FP.
llvm-svn: 123567
2011-01-16 05:14:33 +00:00
Owen Anderson
6e0fa67f91 Improve the safety of my globalopt enhancement by ensuring that the bitcast
of the stored value to the new store type is always done.  Also, add a testcase.

llvm-svn: 123563
2011-01-16 04:33:33 +00:00
Chris Lattner
aba06ce448 fix PR8983, a broken assertion.
llvm-svn: 123562
2011-01-16 03:43:53 +00:00
Venkatraman Govindaraju
fe346f6cba Implement AnalyzeBranch in Sparc Backend.
llvm-svn: 123561
2011-01-16 03:15:11 +00:00
Chris Lattner
24ea7f696e fix PR8981, a crash trying to form a conditional inc with a floating point compare.
llvm-svn: 123560
2011-01-16 02:56:53 +00:00
Chris Lattner
c4d1d86d3e reapply my fix for PR8961 with a tweak to properly handle
multi-instruction sequences like calls.  Many thanks to Jakob for
finding a testcase.

llvm-svn: 123559
2011-01-16 02:27:38 +00:00
Michael J. Spencer
76f1706025 Revert "Archive: Replace all internal uses of PathV1 with PathV2. The external API still uses PathV1."
llvm-svn: 123557
2011-01-16 01:43:22 +00:00
Chris Lattner
44bcf63348 One of Michael's recent patches broke this; temporarily disable
it so the bots go green.

llvm-svn: 123555
2011-01-16 01:04:49 +00:00
Chris Lattner
75599bb566 remove the partial specialization pass. It is unmaintained and has bugs.
llvm-svn: 123554
2011-01-16 00:27:10 +00:00
Nick Lewycky
1d57e867a4 Make constmerge a two-pass algorithm so that it won't miss merging
opportunities. Fixes PR8978.

llvm-svn: 123541
2011-01-15 18:14:21 +00:00
Rafael Espindola
3b43f22391 Allow unnamed_addr on declarations.
llvm-svn: 123529
2011-01-15 08:15:00 +00:00
Chris Lattner
55c2150f36 temporarily revert r123526. While working on a follow-on patch I
realized that ConstantFoldTerminator doesn't preserve dominfo.

llvm-svn: 123527
2011-01-15 07:51:19 +00:00
Chris Lattner
68a47147ba fix rdar://8785296 - -fcatch-undefined-behavior generates inefficient code
The basic issue is that isel (very reasonably!) expects conditional branches
to be folded, so CGP leaving around a bunch of dead computation feeding
conditional branches isn't such a good idea.  Just fold branches on constants
into unconditional branches.

llvm-svn: 123526
2011-01-15 07:36:13 +00:00
Chris Lattner
d4eaf6eba8 Now that instruction optimizations can update the iterator as they go, we can
have objectsize folding recursively simplify away their result when it
folds.  It is important to catch this here, because otherwise we won't
eliminate the cross-block values at isel and other times.

llvm-svn: 123524
2011-01-15 07:25:29 +00:00
Chris Lattner
74ed5d30ca implement an instcombine xform that canonicalizes casts outside of and-with-constant operations.
This fixes rdar://8808586 which observed that we used to compile:


union xy {
        struct x { _Bool b[15]; } x;
        __attribute__((packed))
        struct y {
                __attribute__((packed)) unsigned long b0to7;
                __attribute__((packed)) unsigned int b8to11;
                __attribute__((packed)) unsigned short b12to13;
                __attribute__((packed)) unsigned char b14;
        } y;
};

struct x
foo(union xy *xy)
{
        return xy->x;
}

into:

_foo:                                   ## @foo
	movq	(%rdi), %rax
	movabsq	$1095216660480, %rcx    ## imm = 0xFF00000000
	andq	%rax, %rcx
	movabsq	$-72057594037927936, %rdx ## imm = 0xFF00000000000000
	andq	%rax, %rdx
	movzbl	%al, %esi
	orq	%rdx, %rsi
	movq	%rax, %rdx
	andq	$65280, %rdx            ## imm = 0xFF00
	orq	%rsi, %rdx
	movq	%rax, %rsi
	andq	$16711680, %rsi         ## imm = 0xFF0000
	orq	%rdx, %rsi
	movl	%eax, %edx
	andl	$-16777216, %edx        ## imm = 0xFFFFFFFFFF000000
	orq	%rsi, %rdx
	orq	%rcx, %rdx
	movabsq	$280375465082880, %rcx  ## imm = 0xFF0000000000
	movq	%rax, %rsi
	andq	%rcx, %rsi
	orq	%rdx, %rsi
	movabsq	$71776119061217280, %r8 ## imm = 0xFF000000000000
	andq	%r8, %rax
	orq	%rsi, %rax
	movzwl	12(%rdi), %edx
	movzbl	14(%rdi), %esi
	shlq	$16, %rsi
	orl	%edx, %esi
	movq	%rsi, %r9
	shlq	$32, %r9
	movl	8(%rdi), %edx
	orq	%r9, %rdx
	andq	%rdx, %rcx
	movzbl	%sil, %esi
	shlq	$32, %rsi
	orq	%rcx, %rsi
	movl	%edx, %ecx
	andl	$-16777216, %ecx        ## imm = 0xFFFFFFFFFF000000
	orq	%rsi, %rcx
	movq	%rdx, %rsi
	andq	$16711680, %rsi         ## imm = 0xFF0000
	orq	%rcx, %rsi
	movq	%rdx, %rcx
	andq	$65280, %rcx            ## imm = 0xFF00
	orq	%rsi, %rcx
	movzbl	%dl, %esi
	orq	%rcx, %rsi
	andq	%r8, %rdx
	orq	%rsi, %rdx
	ret

We now compile this into:

_foo:                                   ## @foo
## BB#0:                                ## %entry
	movzwl	12(%rdi), %eax
	movzbl	14(%rdi), %ecx
	shlq	$16, %rcx
	orl	%eax, %ecx
	shlq	$32, %rcx
	movl	8(%rdi), %edx
	orq	%rcx, %rdx
	movq	(%rdi), %rax
	ret

A small improvement :-)

llvm-svn: 123520
2011-01-15 06:32:33 +00:00
Rafael Espindola
70a277119f Update llvm-gcc's tests.
llvm-svn: 123447
2011-01-14 17:01:20 +00:00
Duncan Sands
dc51b0ee48 Turn X-(X-Y) into Y. According to my auto-simplifier this is the most common
simplification present in fully optimized code (I think instcombine fails to
transform some of these when "X-Y" has more than one use).  Fires here and
there all over the test-suite, for example it eliminates 8 subtractions in
the final IR for 445.gobmk, 2 subs in 447.dealII, 2 in paq8p etc.

llvm-svn: 123442
2011-01-14 15:26:10 +00:00
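A minimal sketch (hypothetical function):

define i32 @sub_of_sub(i32 %X, i32 %Y) {
  %d = sub i32 %X, %Y
  %r = sub i32 %X, %d          ; X - (X - Y)  -->  Y
  ret i32 %r
}
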
Duncan Sands
4757061c47 Factorize common code out of the InstructionSimplify shift logic. Add in
threading of shifts over selects and phis while there.  This fires here and
there in the testsuite, to not much effect.  For example when compiling spirit
it fires 5 times, during early-cse, resulting in 6 more cse simplifications,
and 3 more terminators being folded by jump threading, but the final bitcode
doesn't change in any interesting way: other optimizations would have caught
the opportunity anyway, only later.

llvm-svn: 123441
2011-01-14 14:44:12 +00:00
Duncan Sands
01be7e406d Rename this test.
llvm-svn: 123440
2011-01-14 14:16:33 +00:00
Chris Lattner
de9ec03027 relax testcase a bit.
llvm-svn: 123433
2011-01-14 07:46:33 +00:00
Chris Lattner
eba719204c revert my fastisel patch again which apparently still gives the
llvm-gcc-i386-linux-selfhost buildbot heartburn...

llvm-svn: 123431
2011-01-14 06:14:33 +00:00
Chris Lattner
ee950eeb24 reapply r123414 now that the botz are calmed down and the fix is already in.
llvm-svn: 123427
2011-01-14 04:24:28 +00:00
Evan Cheng
0cdd5547f1 Completed :lower16: / :upper16: support for movw / movt pairs on Darwin.
- Fixed the :upper16: fixup routine. It should shift down the top 16 bits first.
- Added support for Thumb2 :lower16: and :upper16: fixups.
- Added :upper16: and :lower16: relocation support to the Mach-O object writer.

llvm-svn: 123424
2011-01-14 02:38:49 +00:00
Chris Lattner
349735530b r123414 broke llvm-gcc bootstrap apparently, revert
llvm-svn: 123422
2011-01-14 02:07:32 +00:00
Duncan Sands
44c273d907 Move some shift transforms out of instcombine and into InstructionSimplify.
While there, I noticed that the transform "undef >>a X -> undef" was wrong.
For example if X is 2 then the top two bits of the result must be equal, so it
cannot be folded to undef.  I fixed this in the constant folder as well.  Also, I made
the transform for "X << undef" stronger: it now folds to undef always, even
though X might be zero.  This is in accordance with the LangRef, but I must
admit that it is fairly aggressive.  Also, I added "i32 X << 32 -> undef"
following the LangRef and the constant folder, likewise fairly aggressive.

llvm-svn: 123417
2011-01-14 00:37:45 +00:00
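Hedged examples of the folds discussed (hypothetical snippets):

  %a = ashr i32 undef, 2       ; wrong to fold to undef: the top three bits
                               ; of any actual result must all be equal
  %b = shl i32 %X, undef       ; folds to undef, even though %X might be 0
  %c = shl i32 %X, 32          ; shift amount >= bit width: folds to undef
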
Chris Lattner
5baec05809 fix PR8961 - a fast isel miscompilation where we'd insert a new instruction
after sext's generated for addressing that got folded.  Previously we compiled
test5 into:

_test5:                                 ## @test5
## BB#0:
        movq    -8(%rsp), %rax          ## 8-byte Reload
        movq    (%rdi,%rax), %rdi
        addq    %rdx, %rdi
        movslq  %esi, %rax
        movq    %rax, -8(%rsp)          ## 8-byte Spill
        movq    %rdi, %rax
        ret

which is insane and wrong.  Now we produce:

_test5:                                 ## @test5
## BB#0:
	movslq	%esi, %rax
	movq	(%rdi,%rax), %rax
	addq	%rdx, %rax
	ret

llvm-svn: 123414
2011-01-14 00:01:01 +00:00
Owen Anderson
4f5dac3541 As far as I can tell, unified syntax uses c0-c15 instead of cr0-cr15 for mcr and friends.
llvm-svn: 123407
2011-01-13 22:38:16 +00:00
Bob Wilson
3b0197489e Extend SROA to handle arrays accessed as homogeneous structs and vice versa.
This is a minor extension of SROA to handle a special case that is
important for some ARM NEON operations.  Some of the NEON intrinsics
return multiple values, which are handled as struct types containing
multiple elements of the same vector type.  The corresponding return
types declared in the arm_neon.h header have equivalent arrays.  We
need SROA to recognize that it can split up those arrays and structs
into separate vectors, even though they are not always accessed with
the same type.  SROA already handles loads and stores of an entire
alloca by using insertvalue/extractvalue to access the individual
pieces, and that code works the same regardless of whether the type
is a struct or an array.  So, all that needs to be done is to check
for compatible arrays and homogeneous structs.

llvm-svn: 123381
2011-01-13 17:45:11 +00:00
Bob Wilson
9f8d730f9b Make SROA more aggressive with allocas containing padding.
SROA only split up structs and arrays one level at a time, so padding can
only cause trouble if it is located in between the struct or array elements.

llvm-svn: 123380
2011-01-13 17:45:08 +00:00
Duncan Sands
36b007d63b The most common simplification missed by instsimplify in unoptimized bitcode
is "X != 0 -> X" when X is a boolean.  This occurs a lot because of the way
llvm-gcc converts gcc's conditional expressions.  Add this, and a few other
similar transforms for completeness.

llvm-svn: 123372
2011-01-13 08:56:29 +00:00
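A minimal sketch of the boolean case (hypothetical function):

define i1 @bool_ne_false(i1 %X) {
  %r = icmp ne i1 %X, false    ; X != 0  -->  X when X is i1
  ret i1 %r
}
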
Evan Cheng
cc474b4864 Model :upper16: and :lower16: as ARM-specific MCTargetExpr. This is a step
in the right direction. It eliminated some hacks and will unblock codegen
work. But it's far from being done. It doesn't reject illegal expressions,
e.g. (FOO - :lower16:BAR). It also doesn't work in Thumb2 mode at all.

llvm-svn: 123369
2011-01-13 07:58:56 +00:00
Eric Christopher
3821f63f4b Experiment with changing the default 32-bit linux stack alignment to
16 bytes for PR8969. Update all testcases accordingly.

llvm-svn: 123367
2011-01-13 06:47:10 +00:00
Rafael Espindola
0272c002ae Keep unnamed_addr when linking.
llvm-svn: 123364
2011-01-13 05:12:34 +00:00
Rafael Espindola
f6cae95276 Reject uses of unnamed_addr in declarations.
llvm-svn: 123358
2011-01-13 01:30:30 +00:00
Jakob Stoklund Olesen
3987889b61 Try again enabling LiveDebugVariables.
llvm-svn: 123342
2011-01-12 23:36:21 +00:00
Bill Wendling
e82361731d Sort the register list based on the *actual* register numbers rather than the
enum values we give to them. <rdar://problem/8823730>

llvm-svn: 123321
2011-01-12 21:20:59 +00:00
Venkatraman Govindaraju
2d89fea217 Implement RETURNADDR and FRAMEADDR lowering in SPARC backend.
llvm-svn: 123310
2011-01-12 05:08:36 +00:00
Chris Lattner
fdef60bef7 revert 123144, reenabling the rest of memset formation.
llvm-svn: 123302
2011-01-12 03:25:15 +00:00
Venkatraman Govindaraju
816f7dfed0 Fix SPARC backend call instruction so that arguments passed through registers
are correctly marked as used instead of passing all possible argument registers
as used.  

llvm-svn: 123301
2011-01-12 03:18:21 +00:00
Chris Lattner
e288204194 revert r123146 which disabled code that wasn't the root cause
of the bootstrap miscompare issue.

llvm-svn: 123299
2011-01-12 01:52:23 +00:00
Jason W Kim
ae183f9862 1. Support ELF pcrel relocations for movw/movt:
R_ARM_MOVT_PREL and R_ARM_MOVW_PREL_NC.
2. Fix a minor bug in ARMAsmPrinter - treat the bitfield flag as a bitfield, not an enum.
3. Add support for 3 new ELF section types (no-ops)

llvm-svn: 123294
2011-01-12 00:19:25 +00:00
Jason W Kim
db6eddeea3 Workaround for bug 8721.
Added a .s test.

llvm-svn: 123292
2011-01-11 23:53:41 +00:00
Jakob Stoklund Olesen
1f7052b53b The world is not ready for LiveDebugVariables yet.
llvm-svn: 123290
2011-01-11 23:20:33 +00:00
Jakob Stoklund Olesen
d7a523358c Enable LiveDebugVariables by default.
llvm-svn: 123282
2011-01-11 22:45:28 +00:00
Venkatraman Govindaraju
f681d4e782 SPARC backend: correct ICC/FCC uses for ADDX and SELECT_CC
llvm-svn: 123281
2011-01-11 22:38:28 +00:00
Chris Lattner
586e7af07d Fix PR8946, a missing reg/reg form of movdqu.
llvm-svn: 123242
2011-01-11 17:04:55 +00:00
Daniel Dunbar
fbc0b96c34 McARM: Add more hard-coded logic to SplitMnemonicAndCC to also split out the
carry-setting flag from the mnemonic.

Note that this currently involves me disabling a number of working cases in
arm_instructions.s; this is a hopefully short-term evil which will be rapidly
fixed (and greatly surpassed), assuming my current approach flies.

llvm-svn: 123238
2011-01-11 15:59:50 +00:00
Eric Christopher
c6db56a31e Revert the testcase from the previous reverted commit.
llvm-svn: 123227
2011-01-11 09:20:44 +00:00
Chris Lattner
feade29ab8 merge tests into one crash.ll test.
llvm-svn: 123220
2011-01-11 07:50:07 +00:00
Chris Lattner
5731a92f5b remove a bogus assertion: the latch block of a loop is not
necessarily an unconditional branch to the header.  This fixes 
PR8955 (the assertion tripping).

llvm-svn: 123219
2011-01-11 07:47:59 +00:00
Chandler Carruth
250dce460c Teach constant folding to perform conversions from constant floating
point values to their integer representation through the SSE intrinsic
calls. This is the last part of a README.txt entry for which I have real
world examples.

llvm-svn: 123206
2011-01-11 01:07:24 +00:00
Chandler Carruth
acb82e863d FileCheck-ize a test, and move a no-longer calling test case to another
file and make it actually test something...

llvm-svn: 123205
2011-01-11 01:07:20 +00:00
Owen Anderson
4479341626 Fix a random missed optimization by making InstCombine more aggressive when determining which bits are demanded by
a comparison against a constant.

llvm-svn: 123203
2011-01-11 00:36:45 +00:00
Eric Christopher
68263285d5 Even if we don't have 7 bytes of stack space we may need to save and
restore the stack pointer from the frame pointer on thumbv6.

Fixes rdar://8819685

llvm-svn: 123196
2011-01-11 00:16:04 +00:00
Dale Johannesen
cd78621861 Fix PR 8916 (qv for analysis), at least the immediate problem.
There's an inherent tension in DAGCombine between assuming
that things will be put in canonical form, and the Depth
mechanism that disables transformations when recursion gets
too deep.  It would not surprise me if there's a lot of little
bugs like this one waiting to be discovered.  The mechanism
seems fragile and I'd suggest looking at it from a design viewpoint.

llvm-svn: 123191
2011-01-10 21:53:07 +00:00
Daniel Dunbar
0e9ece99bb McARM: Flesh out the hard-coded list of known non-predicated mnemonics.
llvm-svn: 123189
2011-01-10 21:01:03 +00:00
Chandler Carruth
772e26df36 Teach instcombine about the rest of the SSE and SSE2 conversion
intrinsics' element dependencies. Reviewed by Nick.

llvm-svn: 123161
2011-01-10 07:19:37 +00:00
Chandler Carruth
7f854ac9a9 Fold two related tests into the newly FileCheck-ized test, migrating
them to FileCheck as well.

llvm-svn: 123154
2011-01-10 02:53:58 +00:00
Chandler Carruth
7c332e5abd Clean up and FileCheck-ize a test.
llvm-svn: 123153
2011-01-10 02:53:54 +00:00
Chris Lattner
867dbe1329 fix typo
llvm-svn: 123148
2011-01-10 02:33:34 +00:00
Chris Lattner
b5562212e2 another (more) aggressive attempt to bring llvm-gcc-i386-linux-selfhost
back to life.

llvm-svn: 123146
2011-01-10 00:47:34 +00:00
Chris Lattner
e8e9ec58bf temporarily disable memset formation from memsets in an effort to restore buildbot stability.
llvm-svn: 123144
2011-01-09 23:52:48 +00:00
Chris Lattner
749f1eff13 add a testcase I missed in previous commit.
llvm-svn: 123143
2011-01-09 23:52:31 +00:00
Tobias Grosser
9899845dd3 Instcombine: Fix pattern where the sext did not dominate the icmp using it
llvm-svn: 123121
2011-01-09 16:00:11 +00:00
Chris Lattner
57e9b35653 teach SCEV analysis of PHI nodes that PHI recurrences formed
with GEP instructions are always NUW, because PHIs cannot wrap
the end of the address space.

llvm-svn: 123105
2011-01-09 02:28:48 +00:00
Chris Lattner
fa37cac39c reduce indentation. Print <nuw> and <nsw> when dumping SCEV AddRec's
that have the bit set.

llvm-svn: 123104
2011-01-09 02:16:18 +00:00
Chris Lattner
98136397bd Merge memsets followed by neighboring memsets and other stores into
larger memsets.  Among other things, this fixes rdar://8760394 and
allows us to handle "Example 2" from http://blog.regehr.org/archives/320,
compiling it into a single 4096-byte memset:

_mad_synth_mute:                        ## @mad_synth_mute
## BB#0:                                ## %entry
	pushq	%rax
	movl	$4096, %esi             ## imm = 0x1000
	callq	___bzero
	popq	%rax
	ret

llvm-svn: 123089
2011-01-08 21:19:19 +00:00
Chris Lattner
e09439ed9d fix an issue in IsPointerOffset that prevented us from recognizing that
P and P+1 are relative to the same base pointer.

llvm-svn: 123087
2011-01-08 21:07:56 +00:00
Chris Lattner
20bf2d50b8 enhance memcpyopt to merge a store and a subsequent
memset into a single larger memset.

llvm-svn: 123086
2011-01-08 20:54:51 +00:00
Chris Lattner
756416d4c0 merge two tests and filecheckify
llvm-svn: 123082
2011-01-08 20:27:22 +00:00
Chris Lattner
7d3c4712e9 When loop rotation happens, it is *very* common for the duplicated condbr
to be foldable into an uncond branch.  When this happens, we can make a
much simpler CFG for the loop, which is important for nested loop cases
where we want the outer loop to be aggressively optimized.

Handle this case more aggressively.  For example, previously on
phi-duplicate.ll we would get this:


define void @test(i32 %N, double* %G) nounwind ssp {
entry:
  %cmp1 = icmp slt i64 1, 1000
  br i1 %cmp1, label %bb.nph, label %for.end

bb.nph:                                           ; preds = %entry
  br label %for.body

for.body:                                         ; preds = %bb.nph, %for.cond
  %j.02 = phi i64 [ 1, %bb.nph ], [ %inc, %for.cond ]
  %arrayidx = getelementptr inbounds double* %G, i64 %j.02
  %tmp3 = load double* %arrayidx
  %sub = sub i64 %j.02, 1
  %arrayidx6 = getelementptr inbounds double* %G, i64 %sub
  %tmp7 = load double* %arrayidx6
  %add = fadd double %tmp3, %tmp7
  %arrayidx10 = getelementptr inbounds double* %G, i64 %j.02
  store double %add, double* %arrayidx10
  %inc = add nsw i64 %j.02, 1
  br label %for.cond

for.cond:                                         ; preds = %for.body
  %cmp = icmp slt i64 %inc, 1000
  br i1 %cmp, label %for.body, label %for.cond.for.end_crit_edge

for.cond.for.end_crit_edge:                       ; preds = %for.cond
  br label %for.end

for.end:                                          ; preds = %for.cond.for.end_crit_edge, %entry
  ret void
}

Now we get the much nicer:

define void @test(i32 %N, double* %G) nounwind ssp {
entry:
  br label %for.body

for.body:                                         ; preds = %entry, %for.body
  %j.01 = phi i64 [ 1, %entry ], [ %inc, %for.body ]
  %arrayidx = getelementptr inbounds double* %G, i64 %j.01
  %tmp3 = load double* %arrayidx
  %sub = sub i64 %j.01, 1
  %arrayidx6 = getelementptr inbounds double* %G, i64 %sub
  %tmp7 = load double* %arrayidx6
  %add = fadd double %tmp3, %tmp7
  %arrayidx10 = getelementptr inbounds double* %G, i64 %j.01
  store double %add, double* %arrayidx10
  %inc = add nsw i64 %j.01, 1
  %cmp = icmp slt i64 %inc, 1000
  br i1 %cmp, label %for.body, label %for.end

for.end:                                          ; preds = %for.body
  ret void
}

With all of these recent changes, we are now able to compile:

void foo(char *X) {
 for (int i = 0; i != 100; ++i) 
   for (int j = 0; j != 100; ++j)
     X[j+i*100] = 0;
}

into a single memset of 10000 bytes.  This series of changes
should also be helpful for other nested loop scenarios as well.

llvm-svn: 123079
2011-01-08 19:59:06 +00:00
Chris Lattner
db05334c7f Three major changes:
1. Rip out LoopRotate's domfrontier updating code.  It isn't
   needed now that LICM doesn't use DF and it is super complex
   and gross.
2. Make DomTree updating code a lot simpler and faster.  The 
   old loop over all the blocks was just to find a block??
3. Change the code that inserts the new preheader to just use
   SplitCriticalEdge instead of doing an overcomplex 
   reimplementation of it.

No behavior change, except for the name of the inserted preheader.

llvm-svn: 123072
2011-01-08 18:52:51 +00:00
Rafael Espindola
9f526bcf4d First step in fixing PR8927:
Add an unnamed_addr bit to global variables and functions. This will be used
to indicate that the address is not significant and therefore the constant
or function can be merged with others.

If an optimization pass can show that an address is not used, it can set this.

Examples of things that can have this set by the FE are globals created to
hold string literals and C++ constructors.

Adding unnamed_addr to a non-const global should have no effect unless
an optimization can transform that global into a constant.

Aliases are not allowed to have unnamed_addr since I couldn't figure
out any use for it.

llvm-svn: 123063
2011-01-08 16:42:36 +00:00
Frits van Bommel
966cc00809 Fix a bug in r123034 (trying to sext/zext non-integers) and clean up a little.
llvm-svn: 123061
2011-01-08 10:51:36 +00:00
Chris Lattner
6729ce1c33 Have loop-rotate simplify instructions (yay instsimplify!) as it clones
them into the loop preheader, eliminating silly instructions like
"icmp i32 0, 100" in fixed tripcount loops.  This also better exposes the 
bigger problem with loop rotate that I'd like to fix: once this has been
folded, the duplicated conditional branch *often* turns into an uncond branch.

Not aggressively handling this is pessimizing later loop optimizations 
somethin' fierce by making "dominates all exit blocks" checks fail.

llvm-svn: 123060
2011-01-08 08:24:46 +00:00
Evan Cheng
1afd04fc59 Recognize inline asm 'rev $0, $1' as a bswap intrinsic call.
llvm-svn: 123048
2011-01-08 01:24:27 +00:00
Evan Cheng
aa16fd02ad Do not model all INLINEASM instructions as having unmodelled side effects.
Instead, encode the llvm IR level property "HasSideEffects" in an operand (shared
with IsAlignStack). Added MachineInstr::hasUnmodeledSideEffects() to check
the operand when the instruction is an INLINEASM.

This allows memory instructions to be moved around INLINEASM instructions.

llvm-svn: 123044
2011-01-07 23:50:32 +00:00
Devang Patel
d3ba97949a Speculatively revert r123032.
llvm-svn: 123039
2011-01-07 22:33:41 +00:00
Bob Wilson
c485ff3ced Lower some BUILD_VECTORS using VEXT+shuffle.
Patch by Tim Northover.

llvm-svn: 123035
2011-01-07 21:37:30 +00:00
Tobias Grosser
48469b566a InstCombine: Match min/max hidden by sext/zext
X = sext x; x >s c ? X : C+1 --> X = sext x; X <s C+1 ? C+1 : X
X = sext x; x <s c ? X : C-1 --> X = sext x; X >s C-1 ? C-1 : X
X = zext x; x >u c ? X : C+1 --> X = zext x; X <u C+1 ? C+1 : X
X = zext x; x <u c ? X : C-1 --> X = zext x; X >u C-1 ? C-1 : X
X = sext x; x >u c ? X : C+1 --> X = sext x; X <u C+1 ? C+1 : X
X = sext x; x <u c ? X : C-1 --> X = sext x; X >u C-1 ? C-1 : X

Instead of calculating this with mixed types, promote all to the
larger type. This enables scalar evolution to analyze this
expression. PR8866

llvm-svn: 123034
2011-01-07 21:33:14 +00:00
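A hedged sketch of the first pattern with c = 5 (hypothetical function):

define i32 @smax_sext(i8 %x) {
  %X = sext i8 %x to i32
  %cmp = icmp sgt i8 %x, 5             ; compare in the narrow type
  %r = select i1 %cmp, i32 %X, i32 6   ; x >s 5 ? X : 6
  ret i32 %r
}

; After the combine the compare is done in the wide type, exposing a
; canonical smax form that scalar evolution can analyze:
;   %cmp = icmp slt i32 %X, 6
;   %r = select i1 %cmp, i32 6, i32 %X
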
Devang Patel
a52d6c216d Appropriately truncate debug info range in DWARF output.
Enable live debug variables pass.

llvm-svn: 123032
2011-01-07 21:30:41 +00:00
Benjamin Kramer
62b5a4d14c Revert 122959, it needs more thought. Add it back to README.txt with additional notes.
llvm-svn: 123030
2011-01-07 20:42:20 +00:00
Evan Cheng
ae26b91353 Revert r122955. It seems using movups to lower memcpy can cause massive regressions (even on Nehalem) in edge cases. I also didn't see any real performance benefit.
llvm-svn: 123015
2011-01-07 19:35:30 +00:00
David Greene
e9b2fb7e0d Rename lisp-like functions as suggested by Gabor Greif a loooong time
ago.  This is both easier to learn and easier to read.

llvm-svn: 123001
2011-01-07 17:05:37 +00:00
Benjamin Kramer
a842a10fc1 Try to unbreak the arm buildbot.
llvm-svn: 122999
2011-01-07 11:35:21 +00:00
Bob Wilson
8341c8971d Add testcases for PR8411 (vget_low and vget_high implemented as shuffles).
llvm-svn: 122997
2011-01-07 06:44:14 +00:00
Bob Wilson
22f18a7e94 Add ARM patterns to match EXTRACT_SUBVECTOR nodes.
Also fix an off-by-one in SelectionDAGBuilder that was preventing shuffle
vectors from being translated to EXTRACT_SUBVECTOR.
Patch by Tim Northover.

The test changes are needed to keep those spill-q tests from testing aligned
spills and restores.  If the only aligned stack objects are spill slots, we
no longer realign the stack frame.  Prior to this patch, an EXTRACT_SUBVECTOR
was legalized by loading from the stack, which created an aligned frame index.
Now, however, there is nothing except the spill slot in the stack frame, so
I added an aligned alloca.

llvm-svn: 122995
2011-01-07 04:59:04 +00:00
Duncan Sands
06444485ee Fix the other problem reported in PR8582. Testcase and patch by
Nadav Rotem.

llvm-svn: 122983
2011-01-06 23:45:22 +00:00
Duncan Sands
883391f3f2 Add a testcase for PR8582, which mysteriously fixed itself, in case the problem
comes back some day.

llvm-svn: 122982
2011-01-06 23:04:29 +00:00
Bob Wilson
461eb28678 PR8921: LDM/POP do not support interworking prior to v5t.
llvm-svn: 122970
2011-01-06 19:24:41 +00:00
Rafael Espindola
64814fff0b Correctly disassemble truncated asm.
Patch by Richard Smith.

llvm-svn: 122962
2011-01-06 16:48:42 +00:00
Benjamin Kramer
fb2bb22b6f InstCombine: Turn _chk functions into the "unsafe" variant if length and max length are equal.
This happens when we take the (non-constant) length from a malloc.

llvm-svn: 122961
2011-01-06 14:22:52 +00:00
Benjamin Kramer
5834b2bab8 InstCombine: If we call llvm.objectsize on a malloc call we can replace it with the size passed to malloc.
llvm-svn: 122959
2011-01-06 13:11:05 +00:00
Benjamin Kramer
d5e1c24646 InstCombine: Teach llvm.objectsize folding to look through GEPs.
llvm-svn: 122958
2011-01-06 13:07:49 +00:00
Evan Cheng
1a1771584e Use movups to lower memcpy and memset even if it's not fast (like corei7).
The theory is it's still faster than a pair of movq / a quad of movl. This
will probably hurt older chips like P4 but should run faster on current
and future Intel processors. rdar://8817010

llvm-svn: 122955
2011-01-06 07:58:36 +00:00
Evan Cheng
cb39cc2164 Re-implement r122936 with proper target hooks. Now getMaxStoresPerMemcpy
etc. take an OptSize argument. If OptSize is true, they return
the inline limit for functions with the OptSize attribute.

llvm-svn: 122952
2011-01-06 06:52:41 +00:00
Chris Lattner
83067bc3e7 implement constant folding support for an exotic constant expr:
ret i64 ptrtoint (i8* getelementptr ([1000 x i8]* @X, i64 1, i64 sub (i64 0, i64 ptrtoint ([1000 x i8]* @X to i64))) to i64)

to "ret i64 1000".  This allows us to correctly compute the trip count
on a loop in PR8883, which occurs with std::fill on a char array.  This
allows us to transform it into a memset with a constant size.

llvm-svn: 122950
2011-01-06 06:19:46 +00:00
Evan Cheng
70711ea54d Revert r122936. I'll re-implement the change.
llvm-svn: 122949
2011-01-06 06:17:53 +00:00
Bill Wendling
b3bf7cd562 Fix test to coincide with r122934 change from PR8919.
llvm-svn: 122937
2011-01-06 01:09:35 +00:00
Evan Cheng
d425aa5d2a r105228 reduced the memcpy / memset inline limit to 4 with -Os to avoid blowing
up the FreeBSD bootloader. However, this doesn't make much sense for Darwin, whose
-Os is meant to optimize for size only if it doesn't hurt performance.
rdar://8821501

llvm-svn: 122936
2011-01-06 01:04:47 +00:00
Evan Cheng
2af40ae781 Avoid zero extending bit test operands to pointer type if all the masks fit in
the original type of the switch statement key.
rdar://8781238

llvm-svn: 122935
2011-01-06 01:02:44 +00:00
Evan Cheng
bf92316fab Optimize:
  r1025 = s/zext r1024, 4
  r1026 = extract_subreg r1025, 4
to:
  r1026 = copy r1024

llvm-svn: 122925
2011-01-05 23:06:49 +00:00
Chris Lattner
3ef9db5cd4 fix PR8900, a shuffle miscompilation. Patch by Nadav Rotem!
llvm-svn: 122921
2011-01-05 22:28:46 +00:00
Frits van Bommel
4c9ed7d055 Fix lit for people whose LLVM path contains 'opt', which is a common directory name on Unix-like systems.
llvm-svn: 122873
2011-01-05 15:10:24 +00:00
Chris Lattner
dbb1b09731 fix an off-by-one bug that caused a crash analyzing
ashr's with huge shift amounts, PR8896

llvm-svn: 122814
2011-01-04 18:19:15 +00:00
Tobias Grosser
e8df8a7c39 Include llvm-gcc dir before llvm_tools_dir
This ensures that the recently compiled tools are always picked for testing.

llvm-svn: 122810
2011-01-04 16:01:17 +00:00
Chris Lattner
1f58120bfe Teach loop-idiom to turn a loop containing a memset into a larger memset
when safe.

The testcase is basically this nested loop:
void foo(char *X) {
  for (int i = 0; i != 100; ++i) 
    for (int j = 0; j != 100; ++j)
      X[j+i*100] = 0;
}

which gets turned into a single memset now.  clang -O3 doesn't optimize
this yet though due to a phase ordering issue I haven't analyzed yet.

llvm-svn: 122806
2011-01-04 07:46:33 +00:00
David Greene
3926d82c53 Don't pattern match "/clang" so we don't mangle directory names. Some
tests use absolute paths to clang.

llvm-svn: 122796
2011-01-04 01:05:30 +00:00