Chris Lattner
d4bf588b85
move some tests from libcall optimizer suite.
...
llvm-svn: 50516
2008-05-01 06:13:48 +00:00
Owen Anderson
f8c80ca156
Move this test to LoopDeletion, where it now passes.
...
llvm-svn: 50474
2008-04-30 07:17:22 +00:00
Chris Lattner
15195e00ee
move lowering of llvm.memset -> store from simplify libcalls to instcombine.
llvm-svn: 50472
2008-04-30 06:39:11 +00:00
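A sketch of the lowering described in the commit above (illustrative IR, not taken from the commit; the era's type-suffixed llvm.memset intrinsic and the names %p and %p32 are assumed): a small, constant-length memset can become a single wide store.
; before: fill 4 bytes with the constant 0
call void @llvm.memset.i32( i8* %p, i8 0, i32 4, i32 4 )
; after instcombine: one aligned store of the splatted value
%p32 = bitcast i8* %p to i32*
store i32 0, i32* %p32, align 4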
Chris Lattner
ce01263bff
no reason for simplifylibcalls to simplify intrinsics, instcombine does a fine job.
llvm-svn: 50470
2008-04-30 06:12:15 +00:00
Chris Lattner
a62a4d407a
remove redundant check.
...
llvm-svn: 50469
2008-04-30 06:06:37 +00:00
Owen Anderson
2caa79ae70
Fix a bug in memcpyopt where the memcpy-memcpy transform was never being applied because
we were checking for it in the wrong order. This caused a miscompilation because the
return slot optimization assumes that the call it is dealing with is NOT a memcpy.
llvm-svn: 50444
2008-04-29 21:26:06 +00:00
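To illustrate the memcpy-memcpy transform mentioned in the commit above (a hypothetical sketch; the buffers %tmp, %src and %dst are assumed), when the source of a memcpy was itself just filled by another memcpy, the second copy can read from the original source:
call void @llvm.memcpy.i32( i8* %tmp, i8* %src, i32 64, i32 4 )
call void @llvm.memcpy.i32( i8* %dst, i8* %tmp, i32 64, i32 4 )
; can become, when %tmp and %src are not modified in between:
call void @llvm.memcpy.i32( i8* %tmp, i8* %src, i32 64, i32 4 )
call void @llvm.memcpy.i32( i8* %dst, i8* %src, i32 64, i32 4 )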
Chris Lattner
5bd55b0885
don't eliminate load from volatile value on paths where the load is dead.
This fixes the second half of PR2262
llvm-svn: 50430
2008-04-29 17:28:22 +00:00
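A hypothetical shape of the bug fixed above (IR assumed, written in the volatile syntax of the time): even on the path where %v is never used, the volatile load has a side effect and must not be deleted.
entry:
  %v = volatile load i32* %p        ; must stay even where %v is dead
  br i1 %c, label %use, label %done
use:
  store i32 %v, i32* %q
  br label %done
done:
  ret void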
Chris Lattner
4b5d48a3f0
make this test reduced and *valid*
...
llvm-svn: 50429
2008-04-29 17:25:32 +00:00
Chris Lattner
7099f3c400
fix a subtle volatile handling bug.
...
llvm-svn: 50428
2008-04-29 17:13:43 +00:00
Chris Lattner
51fe8415da
don't delete the last store to an alloca if the store is volatile.
...
llvm-svn: 50390
2008-04-29 04:58:38 +00:00
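A minimal assumed example of the case guarded above: the alloca never escapes and is never reloaded, so the store would normally be dead, but volatile forces it to stay.
define void @f() {
  %buf = alloca i32
  volatile store i32 42, i32* %buf   ; last store to the alloca, but volatile: keep it
  ret void
}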
Dan Gohman
9e4db7f0bd
Fix DSE to not eliminate volatile loads with no uses.
...
llvm-svn: 50370
2008-04-28 19:51:27 +00:00
Dan Gohman
1b7238e6e4
Teach InstCombine's ComputeMaskedBits what SelectionDAG's
ComputeMaskedBits knows about cttz, ctlz, and ctpop. Teach
SelectionDAG's ComputeMaskedBits what InstCombine's knows
about SRem. And teach them both some things about high bits
in Mul, UDiv, URem, and Sub. This allows instcombine and
dagcombine to eliminate sign-extension operations in
several new cases.
llvm-svn: 50358
2008-04-28 17:02:21 +00:00
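A sketch of the kind of simplification this enables (example IR assumed, not from the commit): ctpop of an i32 is at most 32, so the high bits of %n are known zero and the sign extension can be rewritten as a zero extension and often folded away entirely.
%n = call i32 @llvm.ctpop.i32( i32 %x )   ; result is in [0, 32]
%w = sext i32 %n to i64                   ; sign bit known zero -> same as zext i32 %n to i64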
Chris Lattner
ede7e89144
Fix PR2256, yet another miscompilation in simplifycfg of
multiple return values.
Bill, please pull this into Tak.
llvm-svn: 50332
2008-04-28 00:19:07 +00:00
Chris Lattner
2798e42a9f
When SRoA'ing a global variable, make sure the new globals get the
appropriate alignment. This fixes a miscompilation of 252.eon on
x86-64 (rdar://5891920).
Bill, please pull this into Tak.
llvm-svn: 50308
2008-04-26 07:40:11 +00:00
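A hypothetical before/after for the global case above (names and layout invented for illustration): when the aggregate global is split, each new global must carry the alignment its field had inside the original.
@G = internal global { double, i32 } zeroinitializer, align 8
; after the split, each piece keeps an appropriate alignment:
@G.f0 = internal global double 0.000000e+00, align 8
@G.f1 = internal global i32 0, align 4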
Nick Lewycky
1f831c0f57
Remove 'unwinds to' support from mainline. This patch undoes r47802 r47989
r48047 r48084 r48085 r48086 r48088 r48096 r48099 r48109 and r48123.
llvm-svn: 50265
2008-04-25 16:53:59 +00:00
Chris Lattner
1a6268f776
Don't infinitely thread branches when a threaded edge
goes back to the block, e.g.:
Threading edge through bool from 'bb37.us.thread3829' to 'bb37.us' with cost: 1, across block:
bb37.us: ; preds = %bb37.us.thread3829, %bb37.us, %bb33
%D1361.1.us = phi i32 [ %tmp36, %bb33 ], [ %D1361.1.us, %bb37.us ], [ 0, %bb37.us.thread3829 ] ; <i32> [#uses=2]
%tmp39.us = icmp eq i32 %D1361.1.us, 0 ; <i1> [#uses=1]
br i1 %tmp39.us, label %bb37.us, label %bb42.us
llvm-svn: 50251
2008-04-25 04:12:29 +00:00
Chris Lattner
be35a0c224
Split some code out of the main SimplifyCFG loop into its own function.
Fix said code to handle merging return instructions together correctly
when handling multiple return values.
llvm-svn: 50199
2008-04-24 00:01:19 +00:00
Chris Lattner
721ea7ca10
Rewrite multiple return value handling in SCCP. Before, the -sccp pass
would turn every getresult instruction into undef. This helps with
rdar://5778210
llvm-svn: 50140
2008-04-23 05:38:20 +00:00
Chris Lattner
0dd624d232
remove this testcase. It isn't testing loop rotate; it is testing all
of -std-compile-opts and is now failing because other passes are generating
IR that looks different by the time it reaches loop rotate. Devang, please
introduce a testcase that only runs loop rotate.
llvm-svn: 50136
2008-04-23 05:36:04 +00:00
Chris Lattner
be858fc296
make this test more interesting.
...
llvm-svn: 50128
2008-04-23 03:49:32 +00:00
Chris Lattner
d059ac2e32
distill down the essence of this test.
...
llvm-svn: 50125
2008-04-23 03:03:42 +00:00
Dale Johannesen
547a55caf1
new test
...
llvm-svn: 50123
2008-04-23 01:22:22 +00:00
Evan Cheng
680839e258
Don't do: "(X & 4) >> 1 == 2 --> (X & 4) == 4" if there are more than one uses of the shift result.
...
llvm-svn: 50118
2008-04-23 00:38:06 +00:00
Chris Lattner
e304ae5621
Start doing the significantly useful part of jump threading: handle cases
where a comparison has a phi input and that phi is a constant. For example,
stuff like:
Threading edge through bool from 'bb2149' to 'bb2231' with cost: 1, across block:
bb2237: ; preds = %bb2231, %bb2149
%tmp2328.rle = phi i32 [ %tmp2232, %bb2231 ], [ %tmp2232439, %bb2149 ] ; <i32> [#uses=2]
%done.0 = phi i32 [ %done.2, %bb2231 ], [ 0, %bb2149 ] ; <i32> [#uses=1]
%tmp2239 = icmp eq i32 %done.0, 0 ; <i1> [#uses=1]
br i1 %tmp2239, label %bb2231, label %bb2327
or
bb38.i298: ; preds = %bb33.i295, %bb1693
%tmp39.i296.rle = phi %struct.ibox* [ null, %bb1693 ], [ %tmp39.i296.rle1109, %bb33.i295 ] ; <%struct.ibox*> [#uses=2]
%minspan.1.i291.reg2mem.1 = phi i32 [ 32000, %bb1693 ], [ %minspan.0.i288, %bb33.i295 ] ; <i32> [#uses=1]
%tmp40.i297 = icmp eq %struct.ibox* %tmp39.i296.rle, null ; <i1> [#uses=1]
br i1 %tmp40.i297, label %implfeeds.exit311, label %bb43.i301
This triggers thousands of times in spec.
llvm-svn: 50110
2008-04-22 21:40:39 +00:00
Chris Lattner
c59cf9c8da
Dig through multiple levels of AND to thread jumps if needed.
...
llvm-svn: 50106
2008-04-22 20:46:09 +00:00
Chris Lattner
dcbc6443ae
Teach jump threading to thread through blocks like:
br (and X, phi(Y, Z, false)), label L1, label L2
This triggers once on 252.eon and 6 times on 176.gcc. Blocks
in question often look like this:
bb262: ; preds = %bb261, %bb248
%iftmp.251.0 = phi i1 [ true, %bb261 ], [ false, %bb248 ] ; <i1> [#uses=4]
%tmp270 = icmp eq %struct.rtx_def* %tmp.0.i, null ; <i1> [#uses=1]
%bothcond = or i1 %iftmp.251.0, %tmp270 ; <i1> [#uses=1]
br i1 %bothcond, label %bb288, label %bb273
In this case, it is clear that it doesn't matter if tmp.0.i is null when coming from bb261. When coming from bb248, it is all that matters.
Another random example:
check_asm_operands.exit: ; preds = %check_asm_operands.exit.thr_comm, %bb30.i, %bb12.i, %bb6.i413
%tmp.0.i420 = phi i1 [ true, %bb6.i413 ], [ true, %bb12.i ], [ true, %bb30.i ], [ false, %check_asm_operands.exit.thr_comm ; <i1> [#uses=1]
call void @llvm.stackrestore( i8* %savedstack ) nounwind
%tmp4389 = icmp eq i32 %added_sets_1.0, 0 ; <i1> [#uses=1]
%tmp4394 = icmp eq i32 %added_sets_2.0, 0 ; <i1> [#uses=1]
%bothcond80 = and i1 %tmp4389, %tmp4394 ; <i1> [#uses=1]
%bothcond81 = and i1 %bothcond80, %tmp.0.i420 ; <i1> [#uses=1]
br i1 %bothcond81, label %bb4398, label %bb4397
Here is the case from 252.eon:
bb290.i.i: ; preds = %bb23.i57.i.i, %bb8.i39.i.i, %bb100.i.i, %bb100.i.i, %bb85.i.i110
%myEOF.1.i.i = phi i1 [ true, %bb100.i.i ], [ true, %bb100.i.i ], [ true, %bb85.i.i110 ], [ true, %bb8.i39.i.i ], [ false, %bb23.i57.i.i ] ; <i1> [#uses=2]
%i.4.i.i = phi i32 [ %i.1.i.i, %bb85.i.i110 ], [ %i.0.i.i, %bb100.i.i ], [ %i.0.i.i, %bb100.i.i ], [ %i.3.i.i, %bb8.i39.i.i ], [ %i.3.i.i, %bb23.i57.i.i ] ; <i32> [#uses=3]
%tmp292.i.i = load i8* %tmp16.i.i100, align 1 ; <i8> [#uses=1]
%tmp293.not.i.i = icmp ne i8 %tmp292.i.i, 0 ; <i1> [#uses=1]
%bothcond.i.i = and i1 %tmp293.not.i.i, %myEOF.1.i.i ; <i1> [#uses=1]
br i1 %bothcond.i.i, label %bb202.i.i, label %bb301.i.i
Factoring out 3 common predecessors.
On the path from any block other than bb23.i57.i.i, the load and compare
are dead.
llvm-svn: 50096
2008-04-22 07:05:46 +00:00
Chris Lattner
4638234905
add a basic testcase.
...
llvm-svn: 50093
2008-04-22 06:35:14 +00:00
Chris Lattner
14be19cf1e
optimize "p != gep p, ..." better. This allows us to compile
...
getelementptr-seteq.ll into:
define i1 @test(i64 %X, %S* %P) {
%C = icmp eq i64 %X, -1 ; <i1> [#uses=1]
ret i1 %C
}
instead of:
define i1 @test(i64 %X, %S* %P) {
%A.idx.mask = and i64 %X, 4611686018427387903 ; <i64> [#uses=1]
%C = icmp eq i64 %A.idx.mask, 4611686018427387903 ; <i1> [#uses=1]
ret i1 %C
}
And fixes the second half of PR2235. This speeds up the insertion sort
case by 45%, from 1.12s to 0.77s. In practice, this will significantly
speed up for loops structured like:
for (double *P = Base + N; P != Base; --P)
...
Which happens frequently for C++ iterators.
llvm-svn: 50079
2008-04-22 02:53:33 +00:00
Owen Anderson
bc6046416f
Refactor memcpyopt based on Chris' suggestions. Consolidate several functions
and simplify code that was fallout from the separation of memcpyopt and gvn.
llvm-svn: 50034
2008-04-21 07:45:10 +00:00
Chris Lattner
a9d8d647ca
rename *.llx -> *.ll, last batch.
...
llvm-svn: 49971
2008-04-19 22:32:52 +00:00
Owen Anderson
64fc7a4268
XFAIL this test for the moment. The real solution is to prevent ADCE
from transforming loops and to add a separate loop pass for removing
loops with known trip counts. Until that happens, ADCE is miscompiling this code.
llvm-svn: 49769
2008-04-16 04:25:42 +00:00
Owen Anderson
15e930588a
Add testcase for PR2213.
...
llvm-svn: 49517
2008-04-11 05:13:32 +00:00
Dan Gohman
318d9a6605
Teach InstCombine's ComputeMaskedBits to handle pointer expressions
in addition to integer expressions. Rewrite GetOrEnforceKnownAlignment
as a ComputeMaskedBits problem, moving all of its special alignment
knowledge to ComputeMaskedBits as low-zero-bits knowledge.
Also, teach ComputeMaskedBits a few basic things about Mul and PHI
instructions.
This improves ComputeMaskedBits-based simplifications in a few cases,
but more noticeably it significantly improves instcombine's alignment
detection for loads, stores, and memory intrinsics.
llvm-svn: 49492
2008-04-10 18:43:06 +00:00
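A sketch of the alignment-as-known-low-zero-bits idea (the specific IR is assumed): the base is 16-byte aligned and the gep adds a fixed offset of 8 bytes, so the low three bits of the address are known zero and the load can be treated as align 8.
%buf = alloca [4 x i32], align 16
%p = getelementptr [4 x i32]* %buf, i32 0, i32 2   ; base plus 8 bytes
%v = load i32* %p                                  ; align 8 can be inferred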
Chris Lattner
be01a5f699
Generalize getUnaryFloatFunction to handle any FP unary function, automatically
figuring out the suffix to use. Implement pow(2,x) -> exp2(x).
llvm-svn: 49437
2008-04-09 17:48:11 +00:00
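For reference, an assumed minimal instance of the pow(2,x) -> exp2(x) rewrite mentioned above:
%r = call double @pow( double 2.000000e+00, double %x )
; becomes
%r = call double @exp2( double %x )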
Chris Lattner
5d0cbe7d22
remove capital letter from test name.
...
llvm-svn: 49436
2008-04-09 17:46:36 +00:00
Owen Anderson
ca7e0e21f3
Factor a bunch of functionality related to memcpy and memset transforms out of
GVN and into its own pass.
llvm-svn: 49419
2008-04-09 08:23:16 +00:00
Chris Lattner
976ea8990e
many cleanups to the pow optimizer. Allow it to handle powf, and
add support for pow(x, 2.0) -> x*x.
llvm-svn: 49411
2008-04-09 00:07:45 +00:00
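Similarly, an assumed illustration of the pow(x, 2.0) -> x*x fold added above (mul is the era's floating-point multiply):
%r = call double @pow( double %x, double 2.000000e+00 )
; becomes
%r = mul double %x, %x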
Gabor Greif
80acb912a9
merge r48768 from branches/ggreif/parallelized-test
...
llvm-svn: 49382
2008-04-08 15:22:41 +00:00
Chris Lattner
12cecbbb25
add a testcase for forming memset from noncontiguous stores.
...
llvm-svn: 48938
2008-03-29 04:51:35 +00:00
Evan Cheng
563b265f37
Handle a special case xor undef, undef -> 0. Technically this should be transformed to undef. But this is such a common idiom (misuse) that we are going to handle it.
...
llvm-svn: 48791
2008-03-25 20:07:13 +00:00
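A one-line illustration of the special case above (IR assumed):
%t = xor i32 undef, undef   ; folded to the constant 0 rather than to undef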
Tanya Lattner
b6a27ed83f
Byebye llvm-upgrade!
...
llvm-svn: 48762
2008-03-25 04:26:08 +00:00
Devang Patel
425514c509
Add incoming value from header only if phi node has any use inside the loop.
...
llvm-svn: 48738
2008-03-24 20:16:14 +00:00
Chris Lattner
97e4d98c2d
apparently tclsh doesn't lex like bash. Weird.
...
llvm-svn: 48732
2008-03-24 17:41:57 +00:00
Chris Lattner
3a6d3372f5
pass the option so this test tests the right thing.
...
llvm-svn: 48731
2008-03-24 17:36:38 +00:00
Evan Cheng
1d63708523
Transform (zext (or (icmp), (icmp))) to (or (zext (icmp)), (zext (icmp))) if at least one of the (zext icmp) can be transformed to eliminate an icmp.
...
llvm-svn: 48715
2008-03-24 00:21:34 +00:00
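A hypothetical instance of the transform described above (the values %x and %y are assumed): distributing the zext over the or lets the first zext(icmp) pair fold down to the tested bit itself, eliminating that icmp.
%m = and i32 %x, 1
%a = icmp ne i32 %m, 0
%b = icmp eq i32 %y, 0
%o = or i1 %a, %b
%z = zext i1 %o to i32
; becomes, after the first zext(icmp) folds to the bit itself:
%b.z = zext i1 %b to i32
%z   = or i32 %m, %b.z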
Owen Anderson
2f91173e40
Use normal naming convention for test.
...
llvm-svn: 48693
2008-03-22 21:08:33 +00:00
Chris Lattner
16f62d36e8
implement an initial hack at a straight-line store -> memset optimization.
This fires dozens of times across spec and multisource, but I don't know
if it actually speeds stuff up. Hopefully the testers will show something
nice :)
llvm-svn: 48680
2008-03-22 05:37:16 +00:00
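An assumed example of the pattern the new optimization targets (pointer names invented; era-style memset intrinsic): a run of adjacent byte stores of the same value collapses into one memset.
store i8 0, i8* %p
%p1 = getelementptr i8* %p, i32 1
store i8 0, i8* %p1
%p2 = getelementptr i8* %p, i32 2
store i8 0, i8* %p2
%p3 = getelementptr i8* %p, i32 3
store i8 0, i8* %p3
; can become:
call void @llvm.memset.i32( i8* %p, i8 0, i32 4, i32 1 )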
Chris Lattner
96cdf21ed4
Teach masked value is zero about add and sub, and use MVIZ to
simplify things like (X & 4) >> 1 == 2 --> (X & 4) == 4,
since it is obvious that the shift doesn't remove any bits.
llvm-svn: 48631
2008-03-21 05:19:58 +00:00
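In IR terms the simplification above looks roughly like this (a sketch with assumed names):
%a = and i32 %x, 4
%s = lshr i32 %a, 1
%c = icmp eq i32 %s, 2
; since the shift cannot drop the only possible set bit of %a, this is:
%a = and i32 %x, 4
%c = icmp eq i32 %a, 4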
Tanya Lattner
52e5896b3f
Upgrade tests.
...
llvm-svn: 48538
2008-03-19 07:28:33 +00:00
Tanya Lattner
f0dc625b4f
Upgrade tests.
...
llvm-svn: 48536
2008-03-19 05:39:35 +00:00