mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-23 04:52:54 +02:00
Commit Graph

1698 Commits

Author SHA1 Message Date
Eric Christopher
2e0201ee18 Make sure that ConstantExpr offsets also aren't off of extern
symbols.

Thanks to Duncan Sands for the testcase!

llvm-svn: 95877
2010-02-11 17:44:04 +00:00
Chris Lattner
a59eb7c09c Rename ValueRequiresCast to ShouldOptimizeCast, to better reflect
what it does.  Enhance it to return false for vector sign
extensions from vector comparisons, which is the idiom used
to get a splatted vector for a vector comparison.
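
Illustrative IR for the idiom being preserved (a sketch, not from the original commit):

  %c = icmp slt <4 x i32> %a, %b        ; vector comparison
  %m = sext <4 x i1> %c to <4 x i32>    ; splatted all-ones/all-zeros mask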

Doing this breaks vector-casts.ll, so add some compensating
transformations to handle the important cases it covers without
depending on this canonicalization.

This fixes rdar://7434900, a serious pessimization of vector compares.

llvm-svn: 95855
2010-02-11 06:26:33 +00:00
Chris Lattner
ef91e752a6 convert to filecheck.
llvm-svn: 95854
2010-02-11 06:24:37 +00:00
Chris Lattner
a087e6e82f Make DSE only scan blocks that are reachable from the entry
block.  Other blocks may have pointer cycles that will crash
basicaa and other alias analyses.  In any case, there is no
point wasting cycles optimizing dead blocks.  This fixes 
rdar://7635088

llvm-svn: 95852
2010-02-11 05:11:54 +00:00
Chris Lattner
199f4187b6 a testcase that doesn't crash GVN but could someday.
llvm-svn: 95851
2010-02-11 05:08:05 +00:00
Chris Lattner
733ffcdb1f Make jump threading honor x|undef -> true and x&undef -> false,
instead of considering x|undef -> x, which may not be true.
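
A sketch of the distinction (illustrative IR, not from the original commit):

  %r = or i1 %x, undef    ; jump threading may treat this as true,
                          ; but assuming it equals %x would be wrong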

llvm-svn: 95850
2010-02-11 04:40:44 +00:00
Eric Christopher
9516309f55 Add ConstantExpr handling to Intrinsic::objectsize lowering.
Update testcase accordingly now that we can optimize another
section.

llvm-svn: 95846
2010-02-11 01:48:54 +00:00
Eric Christopher
6691a59247 Move Intrinsic::objectsize lowering back to InstCombineCalls and
enable constant 0 offset lowering.

llvm-svn: 95691
2010-02-09 21:24:27 +00:00
Eric Christopher
871cf7bce2 Pull these back out; they're a little too aggressive and
time-consuming for a simple optimization.

llvm-svn: 95671
2010-02-09 17:29:18 +00:00
Chris Lattner
26b712379f fix PR6193, only considering sign extensions *from i1* for this
xform.

llvm-svn: 95642
2010-02-09 01:12:41 +00:00
Eric Christopher
428b385575 Add a new pass to do llvm.objsize lowering using SCEV.
Initial skeleton and SCEVUnknown lowering implemented,
the rest should come relatively quickly.  Move testcase
to new directory.

Move pass to right before SimplifyLibCalls - which is
moved down a bit so we can take advantage of a few opts.

llvm-svn: 95628
2010-02-09 00:35:38 +00:00
Bob Wilson
60fb5a2446 Add a test for my change to disable reassociation for i1 types.
llvm-svn: 95465
2010-02-06 01:16:25 +00:00
Jakob Stoklund Olesen
670458b3be Teach SimplifyCFG about magic pointer constants.
Weird code sometimes uses pointer constants other than null. This patch
teaches SimplifyCFG to build switch instructions in those cases.

Code like this:

void f(const char *x) {
  if (!x)
    puts("null");
  else if ((uintptr_t)x == 1)
    puts("one");
  else if (x == (char*)2 || x == (char*)3)
    puts("two");
  else if ((intptr_t)x == 4)
    puts("four");
  else
    puts(x);
}

Now becomes a switch:

define void @f(i8* %x) nounwind ssp {
entry:
  %magicptr23 = ptrtoint i8* %x to i64            ; <i64> [#uses=1]
  switch i64 %magicptr23, label %if.else16 [
    i64 0, label %if.then
    i64 1, label %if.then2
    i64 2, label %if.then9
    i64 3, label %if.then9
    i64 4, label %if.then14
  ]

Note that LLVM's own DenseMap uses magic pointers.

llvm-svn: 95439
2010-02-05 22:03:18 +00:00
Chris Lattner
44965f1107 fix logical-select to invoke filecheck right, and fix the instcombine
xform it is checking to actually pass.  There is no need to match
m_SelectCst<0, -1> since instcombine canonicalizes that into not(sext).

Add matches for sext(not(x)) in addition to not(sext(x)).

llvm-svn: 95420
2010-02-05 19:53:02 +00:00
Eric Christopher
f89979ce6a Remove this code for now. I have a better idea and will rewrite with
that in mind.

llvm-svn: 95402
2010-02-05 19:04:06 +00:00
Eric Christopher
ee4a176739 Temporarily revert this since it appears to have caused a build
failure.

llvm-svn: 95294
2010-02-04 06:41:27 +00:00
Eric Christopher
9b3e42f09e Rework constant expr and array handling for objectsize instcombining.
Fix bugs where we would compute out of bounds as in bounds, and where
we failed to account for the linker being able to override the size of an array.

Add a few new testcases, change existing testcase to use a private
global array instead of extern.

llvm-svn: 95283
2010-02-04 02:55:34 +00:00
Eric Christopher
fe6ab1518e If we're dealing with a zero-length array, don't lower to any
particular size, we just don't know what the length is yet.

llvm-svn: 95266
2010-02-03 23:56:07 +00:00
Evan Cheng
e273e42195 Revert 94937 and move the noreturn check to codegen.
llvm-svn: 95198
2010-02-03 03:55:59 +00:00
Eric Christopher
ac28e14b77 Recommit this; it looks like it wasn't the cause.
llvm-svn: 95165
2010-02-03 00:21:58 +00:00
Eric Christopher
f070aae6f7 Hopefully temporarily revert this.
llvm-svn: 95154
2010-02-02 23:01:31 +00:00
Eric Christopher
575fe8690d Re-add strcmp and known size object size checking optimization.
Passed bootstrap and nightly test run here.

llvm-svn: 95145
2010-02-02 22:10:43 +00:00
Chris Lattner
9f50341a96 don't turn (A & (C0?-1:0)) | (B & ~(C0?-1:0)) -> C0 ? A : B
for vectors.  Codegen is generating awful code or segfaulting
in various cases (e.g. PR6204).

llvm-svn: 95058
2010-02-02 02:43:51 +00:00
Chris Lattner
e471d94f91 fix a crash in loop unswitch on a loop invariant vector condition.
llvm-svn: 95055
2010-02-02 02:26:54 +00:00
Chris Lattner
5371fc3f06 remove an unreduced testcase, rename another.
llvm-svn: 95054
2010-02-02 02:23:37 +00:00
Chris Lattner
18e6b4eb6b fix PR6195, a bug constant folding scalar -> vector compares.
llvm-svn: 94997
2010-02-01 20:04:40 +00:00
Chris Lattner
8e4042108e fix PR6197 - infinite recursion in ipsccp due to block addresses
evaluateICmpRelation wasn't handling blockaddress.

llvm-svn: 94993
2010-02-01 19:35:08 +00:00
Dan Gohman
0b2c2769ba Generalize target-independent folding rules for sizeof to handle more
cases, and implement target-independent folding rules for alignof and
offsetof. Also, reassociate reassociative operators when it leads to
more folding.

Generalize ScalarEvolution's isOffsetOf to recognize offsetof on
arrays. Rename getAllocSizeExpr to getSizeOfExpr, and getFieldOffsetExpr
to getOffsetOfExpr, for consistency with analogous ConstantExpr routines.

Make the target-dependent folder promote GEP array indices to
pointer-sized integers, to make implicit casting explicit and exposed
to subsequent folding.

And add a bunch of testcases for this new functionality, and a bunch
of related existing functionality.

llvm-svn: 94987
2010-02-01 18:27:38 +00:00
Chris Lattner
5f10919836 fix rdar://7590304, a miscompilation of objc apps on arm. The caller
of objc message send was getting marked arm_apcscc, but the prototype
wasn't.  This is fine at runtime because objc_msgSend is implemented in
assembly.  Only turn a mismatched caller and callee into 'unreachable'
if the callee is a definition.

llvm-svn: 94986
2010-02-01 18:11:34 +00:00
Chris Lattner
a336497d3f fix rdar://7590304, an infinite loop in instcombine. In the invoke
case, instcombine can't zap the invoke for fear of changing the CFG.
However, we have to do something to prevent the next iteration of
instcombine from inserting another store -> undef before the invoke
thereby getting into infinite iteration between dead store elim and
store insertion.

Just zap the callee to null, which will prevent the next iteration
from doing anything.

llvm-svn: 94985
2010-02-01 18:04:58 +00:00
Eli Friedman
0babc63336 Remove test which is no longer relevant.
llvm-svn: 94944
2010-01-31 04:40:45 +00:00
Eli Friedman
19c5c57885 Simplify/generalize the xor+add->sign-extend instcombine.
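
One classic instance of the idiom (illustrative IR, not from the original commit):

  %z = zext i8 %a to i32
  %f = xor i32 %z, 128
  %r = add i32 %f, -128   ; equivalent to: %r = sext i8 %a to i32
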
llvm-svn: 94943
2010-01-31 04:29:12 +00:00
Eli Friedman
58c7936637 Add a small transform: transform -(X<<Y) to (-X<<Y) when the shift has a single
use and X is free to negate.
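
Illustrative IR for the transform (a sketch, not from the original commit):

  %t = shl i32 %X, %Y
  %r = sub i32 0, %t      ; -(X<<Y); %t has one use and %X is free to negate

becomes:

  %n = sub i32 0, %X
  %r = shl i32 %n, %Y     ; (-X)<<Y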

llvm-svn: 94941
2010-01-31 02:30:23 +00:00
Evan Cheng
c2f3c20680 Do not mark no-return calls tail calls. It'll screw up special calls like longjmp and it doesn't make much sense for performance reasons. If my logic is faulty, please let me know.
llvm-svn: 94937
2010-01-31 00:59:31 +00:00
Bob Wilson
0f04082970 Check alignment of loads when deciding whether it is safe to execute them
unconditionally.  Besides checking the offset, also check that the underlying
object is aligned as much as the load itself.

llvm-svn: 94875
2010-01-30 04:42:39 +00:00
Bob Wilson
ccd1585ba8 Remove ARM-specific calling convention from this test. Target data is
needed for this test, but otherwise, there's nothing ARM-specific about
it and no need to specify the calling convention.

llvm-svn: 94862
2010-01-30 00:40:23 +00:00
Eric Christopher
47d90f7adb Revert my last couple of patches. They appear to have broken bison.
llvm-svn: 94841
2010-01-29 21:16:24 +00:00
Bob Wilson
f897b7b37e Improve isSafeToLoadUnconditionally to recognize that GEPs with constant
indices are safe if the result is known to be within the bounds of the
underlying object.

llvm-svn: 94829
2010-01-29 19:19:08 +00:00
Eric Christopher
f01379e6c2 Make strcpy_chk lower to strcpy if we have a safe size.
llvm-svn: 94783
2010-01-29 01:37:11 +00:00
Eric Christopher
7d74af1824 Add constant support to object size handling and remove default
lowering. We'll either figure it out here, or fail and let it be
lowered by SelectionDAGBuild.

Add test.

llvm-svn: 94775
2010-01-29 01:09:57 +00:00
Duncan Sands
a3395c61b5 Fix PR6165. The bug was that LHSKnownZero was being and'd with DemandedMask
when it should have been and'd with LowBits.  Fix that and, while there, beef
up the logic in the case of a negative LHS.

llvm-svn: 94745
2010-01-28 17:22:42 +00:00
Bob Wilson
2e1a609654 Avoid creating redundant PHIs in SSAUpdater::GetValueInMiddleOfBlock.
This was already being done in SSAUpdater::GetValueAtEndOfBlock so I've
just changed SSAUpdater to check for existing PHIs in both places.

llvm-svn: 94690
2010-01-27 22:01:02 +00:00
Victor Hernandez
e6321dc910 When converting dbg.declare to dbg.value, attach promoted store's debug metadata to dbg.value
llvm-svn: 94634
2010-01-27 00:44:36 +00:00
Dan Gohman
71fc5e8fce -disable-output is no longer needed with -analyze.
llvm-svn: 94574
2010-01-26 19:25:59 +00:00
Victor Hernandez
f12e8a120f In mem2reg, for all alloca/stores that get promoted where the alloca has an associated llvm.dbg.declare intrinsic, insert an llvm.dbg.value intrinsic before each store.
llvm-svn: 94493
2010-01-26 02:42:15 +00:00
Victor Hernandez
253c09eb8f Revert r94260 until findDbgDeclare() is made more efficient
llvm-svn: 94432
2010-01-25 17:52:13 +00:00
Chris Lattner
91fafbd4e8 change the canonical form of "cond ? -1 : 0" to be
"sext cond" instead of a select.  This simplifies some instcombine
code, matches the policy for zext (cond ? 1 : 0 -> zext), and allows
us to generate better code for a testcase on ppc.
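
Illustrative IR (a sketch, not from the original commit):

  %r = select i1 %cond, i32 -1, i32 0

now canonicalizes to:

  %r = sext i1 %cond to i32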

llvm-svn: 94339
2010-01-24 00:09:49 +00:00
Nick Lewycky
f9a681fb61 Speculatively revert r94322 to see if it fixes darwin selfhost buildbot.
llvm-svn: 94331
2010-01-23 20:32:12 +00:00
Chris Lattner
b444bc0234 third bug from PR6119: the xor dupe extension allows
for arbitrary terminators in predecessors, so don't assume
the terminator is a conditional or unconditional branch.  The testcase
shows an example where this can happen with switches.

llvm-svn: 94323
2010-01-23 19:21:31 +00:00
Nick Lewycky
8bbb754c7c Teach DAE that even though it can't modify the function signature of an
externally visible function, it can still find all callers of it and replace
the arguments passed for a dead parameter with undef.

llvm-svn: 94322
2010-01-23 19:19:34 +00:00
Chris Lattner
7130788ea2 add an early out to ProcessBranchOnXOR to speed it up,
and handle the case when we can infer an input to the xor
from all inputs that agree, instead of going into an
infinite loop.  This is another part of PR6199.

llvm-svn: 94321
2010-01-23 19:16:25 +00:00
Chris Lattner
e4391a1adb fix a crash in jump threading, PR6119
llvm-svn: 94319
2010-01-23 18:56:07 +00:00
Chris Lattner
8909d5aca5 implement a simple instcombine xform that has been in the
readme forever.

llvm-svn: 94318
2010-01-23 18:49:30 +00:00
Mon P Wang
b7fce13b78 InstCombine should not fold sext/zext of a vector and a bitcast to a scalar to a sext/zext
llvm-svn: 94280
2010-01-23 04:35:57 +00:00
Victor Hernandez
7ce9006ba0 In mem2reg, for all alloca/stores that get promoted where the alloca has an associated llvm.dbg.declare intrinsic, insert an llvm.dbg.value intrinsic before each store.
llvm-svn: 94260
2010-01-23 00:17:34 +00:00
Dan Gohman
525f7d7833 Revert LoopStrengthReduce.cpp to pre-r94061 for now.
llvm-svn: 94123
2010-01-22 00:46:49 +00:00
Nick Lewycky
938b8b195c Fix a crasher trying to fold each element in a comparison between two vectors
if one of the vectors didn't have elements (such as undef). Fixes PR6096.

Fix an issue in the constant folder where fcmp (<2 x %ty>, <2 x %ty>) would
have <2 x i1> type if constant folding was successful and i1 type if it wasn't.
This exposed a related issue in the bitcode reader.

llvm-svn: 94069
2010-01-21 07:03:21 +00:00
Dan Gohman
be34c35f32 Re-implement the main strength-reduction portion of LoopStrengthReduction.
This new version is much more aggressive about doing "full" reduction in
cases where it reduces register pressure, and also more aggressive about
rewriting induction variables to count down (or up) to zero when doing so
reduces register pressure.

It currently uses fairly simplistic algorithms for finding reuse
opportunities, but it introduces a new framework that allows it to combine
multiple strategies at once to form hybrid solutions, instead of doing
all full-reduction or all base+index.

llvm-svn: 94061
2010-01-21 02:09:26 +00:00
Dan Gohman
190fee462e Add nounwinds.
llvm-svn: 93919
2010-01-19 21:51:51 +00:00
Chris Lattner
e0124f19f9 optimize ~(~X >>s Y) --> (X >>s Y), patch by Edmund Grimley Evans!
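
Illustrative IR (a sketch, not from the original commit):

  %nx = xor i32 %X, -1      ; ~X
  %s  = ashr i32 %nx, %Y    ; ~X >>s Y
  %r  = xor i32 %s, -1      ; ~(~X >>s Y), folds to: ashr i32 %X, %Y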

llvm-svn: 93884
2010-01-19 18:16:19 +00:00
Bob Wilson
e94d77854f Fix a crash in scalarrepl for memcpy/memmove where the source and destination
are the same.  I had already fixed a similar problem where the source and
destination were different bitcasts derived from the same alloca, but the
previous fix still did not handle the case where both operands are exactly
the same value.  Radar 7552893.

llvm-svn: 93848
2010-01-19 04:32:48 +00:00
Chris Lattner
6cd7a81f86 my instcombine transformations to make extension elimination more
aggressive changed the canonical form from sext(trunc(x)) to ashr(lshr(x)).
Make sure to transform a couple more things into that canonical form,
and catch a case where we missed turning zext/shl/ashr into a single sext.

llvm-svn: 93787
2010-01-18 22:19:16 +00:00
Chris Lattner
6020407d78 filecheckize this.
llvm-svn: 93776
2010-01-18 22:00:46 +00:00
Chris Lattner
a01771c6db filecheckize
llvm-svn: 93775
2010-01-18 21:58:32 +00:00
Chris Lattner
882bcda7ca remove a redundant test, filecheckize another.
llvm-svn: 93774
2010-01-18 21:55:43 +00:00
Bill Wendling
615508be92 Reduce fsub-fadd.ll and merge it into fsub-fsub.ll. Rename fsub-fsub.ll to
fsub.ll and FileCheckify it.

llvm-svn: 93669
2010-01-17 00:21:21 +00:00
Bill Wendling
488a7187b4 When the visitSub method was split into visitSub and visitFSub, this xform was
added to the FSub version. However, the original version of this xform guarded
against doing this for floating point (!Op0->getType()->isFPOrFPVector()).

This is causing LLVM to perform incorrect xforms for code like:

void func(double *rhi, double *rlo, double xh, double xl, double yh, double yl){
  double mh, ml;
  double c = 134217729.0;
  double up, u1, u2, vp, v1, v2;
        
  up = xh*c;
  u1 = (xh - up) + up;
  u2 = xh - u1;
        
  vp = yh*c;
  v1 = (yh - vp) + vp;
  v2 = yh - v1;
        
  mh = xh*yh;
  ml = (((u1*v1 - mh) + (u1*v2)) + (u2*v1)) + (u2*v2);
  ml += xh*yl + xl*yh;
        
  *rhi = mh + ml;
  *rlo = (mh - (*rhi)) + ml;
}

The last line was optimized away, but *rlo is intended to be the difference
between the infinitely precise result of mh + ml and that result after it
has been rounded to double precision.

llvm-svn: 93369
2010-01-13 23:23:17 +00:00
Chris Lattner
21bbaf49d5 1) Use the new SimplifyInstructionsInBlock routine instead of the copy
in JT.

2) When cloning blocks for PHI or xor conditions, use
instsimplify to simplify the code as we go.  This allows us to 
squish common cases early in JT which opens up opportunities for
subsequent iterations, and allows it to completely simplify the
testcase.

llvm-svn: 93253
2010-01-12 20:41:47 +00:00
Dan Gohman
a48d524fbc Make several tests less fragile.
llvm-svn: 93230
2010-01-12 04:52:47 +00:00
Chris Lattner
774e3967ad Teach jump threading to duplicate small blocks when the branch
condition is a xor with a phi node.  This eliminates nonsense
like this from 176.gcc in several places:

 LBB166_84:
        testl   %eax, %eax
-       setne   %al
-       xorb    %cl, %al
-       notb    %al
-       testb   $1, %al
-       je      LBB166_85
+       je      LBB166_69
+       jmp     LBB166_85

This is rdar://7391699

llvm-svn: 93221
2010-01-12 02:07:17 +00:00
Chris Lattner
1549da6af4 disable this testcase, PR5997
llvm-svn: 93206
2010-01-11 23:18:33 +00:00
Chris Lattner
85a6f02b94 add one more bitfield optimization, allowing clang to generate
good code on PR4216:

_test_bitfield:                                             ## @test_bitfield
	orl	$32962, %edi
	movl	$4294941946, %eax
	andq	%rdi, %rax
	ret

instead of:

_test_bitfield:
        movl    $4294941696, %ecx
        movl    %edi, %eax
        orl     $194, %edi
        orl     $32768, %eax
        andq    $250, %rdi
        andq    %rax, %rcx
        movq    %rdi, %rax
        orq     %rcx, %rax
        ret

Evan is looking into the remaining andq+imm -> andl optimization.

llvm-svn: 93147
2010-01-11 06:55:24 +00:00
Chris Lattner
16e36659f5 Extend CanEvaluateZExtd to handle and/or/xor more aggressively in the
BitsToClear case.  This allows it to promote expressions which have an
and/or/xor after the lshr, promoting cases like test2 (from PR4216)
and test3 (a random example extracted from a spec benchmark).

clang now compiles the code in PR4216 into:

_test_bitfield:                                             ## @test_bitfield
	movl	%edi, %eax
	orl	$194, %eax
	movl	$4294902010, %ecx
	andq	%rax, %rcx
	orl	$32768, %edi
	andq	$39936, %rdi
	movq	%rdi, %rax
	orq	%rcx, %rax
	ret

instead of:

_test_bitfield:                                             ## @test_bitfield
	movl	%edi, %eax
	orl	$194, %eax
	movl	$4294902010, %ecx
	andq	%rax, %rcx
	shrl	$8, %edi
	orl	$128, %edi
	shlq	$8, %rdi
	andq	$39936, %rdi
	movq	%rdi, %rax
	orq	%rcx, %rax
	ret

which is still not great, but is progress.

llvm-svn: 93145
2010-01-11 04:05:13 +00:00
Chris Lattner
f2ba85eedc Remove the dead TD argument to CanEvaluateZExtd, and add a
new BitsToClear result which allows us to start promoting
expressions that end with a lshr-by-constant.  This is
conservatively correct and better than what we had before
(see testcases) but still needs to be extended further.

llvm-svn: 93144
2010-01-11 03:32:00 +00:00
Chris Lattner
18d753e05f teach sext optimization to handle truncs from types that are not
the dest of the sext.
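
Illustrative IR (a sketch, not from the original commit), where the trunc's
source type (i64) differs from the sext's destination type (i32):

  %t = trunc i64 %x to i8
  %r = sext i8 %t to i32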

llvm-svn: 93128
2010-01-10 20:30:41 +00:00
Chris Lattner
ca53de1ab7 teach zext optimization how to deal with truncs that don't come from
the zext dest type.  This allows us to handle test52/53 in cast.ll,
and allows llvm-gcc to generate much better code for PR4216 in -m64
mode:

_test_bitfield:                                             ## @test_bitfield
	orl	$32962, %edi
	movl	%edi, %eax
	andl	$-25350, %eax
	ret

This also fixes a bug handling vector extends, ensuring that the
mask produced is a vector constant, not an integer constant.

llvm-svn: 93127
2010-01-10 20:25:54 +00:00
Chris Lattner
e68d6e61b1 now that the cost model has changed, we can always consider
elimination of a sign extend to be a win, which simplifies 
the client of CanEvaluateSExtd, and allows us to eliminate
more casts (examples taken from real code).

llvm-svn: 93109
2010-01-10 07:40:50 +00:00
Chris Lattner
e86619ca01 change the preferred canonical form for a sign extension to be
lshr+ashr instead of trunc+sext.  We want to avoid type
conversions whenever possible; it is easier to codegen expressions
without truncates and extensions.

llvm-svn: 93107
2010-01-10 07:08:30 +00:00
Chris Lattner
1106f03886 two changes:
1) don't try to optimize a sext or zext that is only used by a trunc, let
   the trunc get optimized first.  This avoids some pointless effort in
   some common cases since instcombine scans down a block in the first pass.
2) Change the cost model for zext elimination to consider an 'and' cheaper
   than a zext.  This allows us to do it more aggressively, and for the next
   patch to simplify the code quite a bit.

llvm-svn: 93097
2010-01-10 02:39:31 +00:00
Chris Lattner
a04ed0659f enhance CanEvaluateZExtd to handle shift left and sext, allowing
more expressions to be promoted and casts eliminated.

llvm-svn: 93096
2010-01-10 02:22:12 +00:00
Dan Gohman
771144e807 Use WriteAsOperand instead of getName() to print loop header names,
so that unnamed blocks are handled.

llvm-svn: 93059
2010-01-09 18:17:45 +00:00
Chris Lattner
b142e13f84 only factor from expressions whose uses are empty and whose
base is the right expression type.  This fixes PR5981.

llvm-svn: 93045
2010-01-09 06:01:36 +00:00
Chris Lattner
05ae88cc8f teach instcombine to delete sign extending shift pairs (sra(shl X, C), C) when
the input is already sign extended.
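
Illustrative IR (a sketch, not from the original commit):

  %e = sext i8 %v to i32    ; input is already sign extended
  %a = shl i32 %e, 24
  %b = ashr i32 %a, 24      ; the shift pair is a no-op: %b == %e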

llvm-svn: 93019
2010-01-08 19:04:21 +00:00
Chris Lattner
0dc48180de fix PR5978 by peeling the loop so that we avoid shifting the
result int by 8 for the first byte.  While normally harmless,
if the result is smaller than a byte, this shift is invalid.

llvm-svn: 93018
2010-01-08 19:02:23 +00:00
Chris Lattner
944f9c4ac1 teach ComputeNumSignBits to look through PHI nodes.
llvm-svn: 92964
2010-01-07 23:44:37 +00:00
Chris Lattner
b8ec2bccaf filecheckize
llvm-svn: 92963
2010-01-07 23:42:23 +00:00
Chris Lattner
db8fa82914 Enhance instcombine to reason more strongly about promoting computation
that feeds into a zext, similar to the patch I did yesterday for sext.
There is a lot of room for extension beyond this patch.

llvm-svn: 92962
2010-01-07 23:41:00 +00:00
Chris Lattner
0b0caf0877 fix a globalopt crash on 'bullet' (handling evaluation of a store
to an element of a vector in a static ctor) which occurs with an 
unrelated patch I'm testing.  Annoyingly, EvaluateStoreInto basically 
does exactly the same stuff as InsertElement constant folding, but it
now handles vectors, and you can't insertelement into a vector.  It
would be 'really nice' if GEP into a vector were not legal.

llvm-svn: 92889
2010-01-07 01:16:21 +00:00
Duncan Sands
de0adbdf25 Fix a README item: have functionattrs look through selects and
phi nodes when deciding which pointers point to local memory.
I actually checked long ago how useful this is, and it isn't
very: it hardly ever fires in the testsuite, but since Chris
wants it, here it is!

llvm-svn: 92836
2010-01-06 15:37:47 +00:00
Duncan Sands
4ef1119d94 Partially address a README by having functionattrs consider calls to
memcpy, memset and other intrinsics that only access their arguments
to be readnone if the intrinsic's arguments all point to local memory.
This improves the testcase in the README to readonly, but it could in
theory be made readnone, however this would involve more sophisticated
analysis that looks through the memcpy.

llvm-svn: 92829
2010-01-06 08:45:52 +00:00
Chris Lattner
0b73344d8a Teach instcombine's sext elimination logic to be more aggressive.
Previously, instcombine would only promote an expression tree to
the larger type if doing so eliminated two casts.  This is because
it would otherwise need to manually do the sign extend after the promoted
expression tree with two shifts.  Now, we keep track of whether the result of
the computation is going to be properly sign extended already.  If
so, we can unconditionally promote the expression, which allows us
to zap more sext's.

This implements rdar://6598839 (aka gcc pr38751)

llvm-svn: 92815
2010-01-06 01:56:21 +00:00
Dan Gohman
93a28a6ce9 Move this test from test/Transforms/IndVarSimplify to
test/CodeGen/X86, as it doesn't use -indvars, and it does use
llc -march=x86-64.

llvm-svn: 92799
2010-01-05 22:52:54 +00:00
Chris Lattner
53b9ed70ee more rearrangement and cleanup, fix my test failure.
llvm-svn: 92792
2010-01-05 22:21:18 +00:00
Chris Lattner
2f69f6a822 remove two trunc xforms that are subsumed by EvaluateInDifferentType.
The only difference is that EvaluateInDifferentType checks to ensure
they are profitable before doing them :)

llvm-svn: 92788
2010-01-05 22:01:41 +00:00
Chris Lattner
96e30cb44f merge some tests.
llvm-svn: 92786
2010-01-05 21:54:09 +00:00
Chris Lattner
fd23a9b6dd merge cast2 into cast.ll
llvm-svn: 92784
2010-01-05 21:48:13 +00:00
Chris Lattner
24e500eb45 remove useless test.
llvm-svn: 92782
2010-01-05 21:46:22 +00:00
Chris Lattner
61de3ae41a another example.
llvm-svn: 92781
2010-01-05 21:43:08 +00:00
Chris Lattner
65d5ec781a remove a useless negative test, add an rdar # to an xfail that I'm working on.
llvm-svn: 92777
2010-01-05 21:37:44 +00:00
Chris Lattner
3e7dbaf22d clean up tests.
llvm-svn: 92776
2010-01-05 21:32:59 +00:00
Chris Lattner
13293b9738 just remove this xform which is subsumed by others.
llvm-svn: 92775
2010-01-05 21:16:30 +00:00
Chris Lattner
f457542506 optimize comparisons against cttz/ctlz/ctpop, patch by Alastair Lynn!
llvm-svn: 92745
2010-01-05 18:09:56 +00:00
Dan Gohman
5fa04f2707 Delete useless trailing semicolons.
llvm-svn: 92740
2010-01-05 17:55:26 +00:00
Chris Lattner
491e03b6ef optimize cttz and ctlz when we can prove something about the
leading/trailing bits.  Patch by Alastair Lynn!

llvm-svn: 92706
2010-01-05 07:23:56 +00:00
Chris Lattner
2ef4ba7cf5 fix an infinite loop in reassociate building emacs.
llvm-svn: 92679
2010-01-05 04:55:35 +00:00
Devang Patel
3b08c33f33 Remove dead debug info intrinsics.
  Intrinsic::dbg_stoppoint
  Intrinsic::dbg_region_start
  Intrinsic::dbg_region_end
  Intrinsic::dbg_func_start
AutoUpgrade simply ignores these intrinsics now.

llvm-svn: 92557
2010-01-05 01:10:40 +00:00
Chris Lattner
3b060b2d41 Truncate GEP indexes larger than the pointer size down to pointer size
when doing this transform if the GEP is not inbounds.  No testcase because
it is very difficult to trigger this: instcombine already canonicalizes
GEP indices to pointer size, so it relies on specific permutations of the
instcombine worklist.

Thanks to Duncan for pointing this possible problem out.

llvm-svn: 92495
2010-01-04 18:57:15 +00:00
Chris Lattner
ce3f5f3448 implement an instcombine xform needed by clang's codegen
on the example in PR4216.  This doesn't trigger in the testsuite,
so I'd really appreciate someone scrutinizing the logic for
correctness.

llvm-svn: 92458
2010-01-04 06:03:59 +00:00
Chris Lattner
647c629ee4 generalize the previous transformation to handle indexing into
arrays of structs and other arrays, so long as all the subsequent
indexes are constants.  This triggers frequently for stuff like:

@divisions = internal constant [29 x [2 x i32]] [[2 x i32] zeroinitializer, [2 x i32] [i32 0, i32 1], [2 x i32] [i32 0, i32 2], [2 x i32] [i32 0, i32 1], [2 x i32] zeroinitializer, [2 x i32] [i32 0, i32 1], [2 x i32] [i32 0, i32 1], [2 x i32] [i32 0, i32 2], [2 x i32] [i32 0, i32 2], [2 x i32] zeroinitializer, [2 x i32] zeroinitializer, [2 x i32] zeroinitializer, [2 x i32] [i32 0, i32 2], [2 x i32] [i32 0, i32 1], [2 x i32] zeroinitializer, [2 x i32] [i32 1, i32 0], [2 x i32] [i32 1, i32 1], [2 x i32] [i32 1, i32 1], [2 x i32] [i32 1, i32 2], [2 x i32] [i32 1, i32 1], [2 x i32] [i32 1, i32 0], [2 x i32] [i32 1, i32 2], [2 x i32] [i32 1, i32 2], [2 x i32] [i32 1, i32 0], [2 x i32] [i32 1, i32 0], [2 x i32] [i32 1, i32 0], [2 x i32] [i32 1, i32 1], [2 x i32] [i32 1, i32 2], [2 x i32] [i32 1, i32 2]], align 32 ; <[29 x [2 x i32]]*> [#uses=50]

	  %623 = getelementptr inbounds [29 x [2 x i32]]* @divisions, i64 0, i64 %619, i64 0 ; <i32*> [#uses=1]
	   %684 = icmp eq i32 %683, 999 

also for the "my_defs" table in 'gs', etc.

llvm-svn: 92444
2010-01-03 03:03:27 +00:00
Chris Lattner
acb0c133ec teach instcombine to optimize idioms like A[i]&42 == 0. This
occurs in 403.gcc in mode_mask_array, in safe-ctype.c (which
is copied in multiple apps) in _sch_istable, etc.

llvm-svn: 92427
2010-01-02 22:08:28 +00:00
Chris Lattner
4af67af013 Teach the table lookup optimization to generate range compares
when a consecutive sequence of elements all satisfies the
predicate.  Like the double compare case, this generates better
code than the magic constant case and generalizes to more than
32/64 element array lookups.

Here are some examples where it triggers.  From 403.gcc, most
accesses to the rtx_class array are handled, e.g.:

@rtx_class = constant [153 x i8] c"xxxxxmmmmmmmmxxxxxxxxxxxxmxxxxxxiiixxxxxxxxxxxxxxxxxxxooxooooooxxoooooox3x2c21c2222ccc122222ccccaaaaaa<<<<<<<<<<<<<<<<<<111111111111bbooxxxxxxxxxxcc2211x", align 32 ; <[153 x i8]*> [#uses=547]
   %142 = icmp eq i8 %141, 105
@rtx_class = constant [153 x i8] c"xxxxxmmmmmmmmxxxxxxxxxxxxmxxxxxxiiixxxxxxxxxxxxxxxxxxxooxooooooxxoooooox3x2c21c2222ccc122222ccccaaaaaa<<<<<<<<<<<<<<<<<<111111111111bbooxxxxxxxxxxcc2211x", align 32 ; <[153 x i8]*> [#uses=543]
	   %165 = icmp eq i8 %164, 60      

Also, most of the 59-element arrays (mode_class/rid_to_yy, etc) 
optimized before are actually range compares.  This lets 32-bit
machines optimize them.

400.perlbmk has stuff like this:

400.perlbmk: PL_regkind, even for 32-bit:
@PL_regkind = constant [62 x i8] c"\00\00\02\02\02\06\06\06\06\09\09\0B\0B\0D\0E\0E\0E\11\12\12\14\14\16\16\18\18\1A\1A\1C\1C\1E\1F !!!$$&'((((,-.///88886789:;8$", align 32 ; <[62 x i8]*> [#uses=4]
	   %811 = icmp ne i8 %810, 33 

@PL_utf8skip = constant [256 x i8] c"\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\03\03\03\03\03\03\03\03\03\03\03\03\03\03\03\03\04\04\04\04\04\04\04\04\05\05\05\05\06\06\07\0D", align 32 ; <[256 x i8]*> [#uses=94]
	   %12 = icmp ult i8 %10, 2
           
etc.

llvm-svn: 92426
2010-01-02 21:50:18 +00:00
Nick Lewycky
cda0109ec5 Fix logic error in previous commit. The != case needs to become an or, not an
and.

llvm-svn: 92419
2010-01-02 16:14:56 +00:00
Nick Lewycky
3cc8fe073a Optimize pointer comparison into the typesafe form, now that the backends will
handle them efficiently. This is the opposite direction of the transformation
we used to have here.

llvm-svn: 92418
2010-01-02 15:25:44 +00:00
Chris Lattner
e1a2489017 Generalize the previous xform to handle cases where exactly
two elements match or don't match with two comparisons.  For
example, the testcase compiles into:

define i1 @test5(i32 %X) {
  %1 = icmp eq i32 %X, 2                          ; <i1> [#uses=1]
  %2 = icmp eq i32 %X, 7                          ; <i1> [#uses=1]
  %R = or i1 %1, %2                               ; <i1> [#uses=1]
  ret i1 %R
}

This generalizes the previous xforms when the array is larger than
64 elements (and this case matches) and generates better code for
cases where it overlaps with the magic bitshift case.

This generalizes more cases than you might expect.  For example,
400.perlbmk has:

@PL_utf8skip = constant [256 x i8] c"\01\01\01\...
%15 = icmp ult i8 %7, 7

403.gcc has:
@rid_to_yy = internal constant [114 x i16] [i16 259, i16 260, ...
%18 = icmp eq i16 %16, 295 

and xalancbmk has a bunch of examples, such as 
_ZN11xercesc_2_5L15gCombiningCharsE and _ZN11xercesc_2_5L10gBaseCharsE.

llvm-svn: 92417
2010-01-02 09:35:17 +00:00
Chris Lattner
1cdc77b8da enhance the compare/load/index optimization to work on *any* load
from a global with 32/64 elements or less (depending on whether
i64 is native on the target), generating a bitshift idiom to 
determine the result.  For example, on test4 we produce:

define i1 @test4(i32 %X) {
  %1 = lshr i32 933, %X                           ; <i32> [#uses=1]
  %2 = and i32 %1, 1                              ; <i32> [#uses=1]
  %R = icmp ne i32 %2, 0                          ; <i1> [#uses=1]
  ret i1 %R
}

This triggers in a number of interesting cases, for example, here's an
fp case:
@A.3255 = internal constant [4 x double] [double 4.100000e+00, double -3.900000e+00, double -1.000000e+00, double 1.000000e+00], align 32 ; <[4 x double]*> [#uses=7]
...
	   %7 = fcmp olt double %3, 0.000000e+00

In this case we make the slen2_tab global dead, which is nice:
@slen2_tab = internal constant [16 x i32] [i32 0, i32 1, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3, i32 1, i32 2, i32 3, i32 1, i32 2, i32 3, i32 2, i32 3], align 32 ; <[16 x i32]*> [#uses=1]
...
	   %204 = icmp eq i32 %46, 0     

Perl has a bunch of these, also on the 'Perl_regkind' array:
@Perl_yygindex = internal constant [51 x i16] [i16 0, i16 0, i16 0, i16 0, i16 374, i16 351, i16 0, i16 -12, i16 0, i16 946, i16 413, i16 -83, i16 0, i16 0, i16 0, i16 -311, i16 -13, i16 4007, i16 2893, i16 0, i16 0, i16 0, i16 0, i16 0, i16 372, i16 -8, i16 0, i16 0, i16 246, i16 -131, i16 43, i16 86, i16 208, i16 -45, i16 -169, i16 987, i16 0, i16 0, i16 0, i16 0, i16 308, i16 0, i16 -271, i16 0, i16 0, i16 0, i16 0, i16 0, i16 0, i16 0, i16 0], align 32 ; <[51 x i16]*> [#uses=1]
...
  %1364 = icmp eq i16 %1361, 0

186.crafty really likes this on 64-bit machines, because it triggers on a bunch of globals like this:
@white_outpost = internal constant [64 x i8] c"\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\02\02\00\00\00\00\00\04\05\05\04\00\00\00\00\03\06\06\03\00\00\00\00\00\01\01\00\00\00\00\00\00\00\00\00\00\00", align 32 ; <[64 x i8]*> [#uses=2]

However the big winner is 403.gcc, which triggers hundreds of times, eliminating all the accesses to the 57-element arrays 'mode_class', mode_unit_size, mode_bitsize, regclass_map, etc.

go 64-bit machines :)

llvm-svn: 92415
2010-01-02 08:56:52 +00:00
Chris Lattner
59136ba5ad enhance the previous optimization to work with fcmp in addition
to icmp.

llvm-svn: 92412
2010-01-02 08:20:51 +00:00
Chris Lattner
f3f6c10218 Teach instcombine to fold compares of loads from constant
arrays with variable indices into a comparison of the index
with a constant.  The most common occurrence of this that
I see by far is stuff like:

if ("foobar"[i] == '\0') ...

which we compile into: if (i == 6), saving a load and 
materialization of the global address.  This also exposes 
loop trip count information to later passes in many cases.

This triggers hundreds of times in xalancbmk, which is where I first
noticed it, but it also triggers in many other apps.  Here are a few 
interesting ones from various apps:

@must_be_connected_without = internal constant [8 x i8*] [i8* getelementptr inbounds ([3 x i8]* @.str64320, i64 0, i64 0), i8* getelementptr inbounds ([3 x i8]* @.str27283, i64 0, i64 0), i8* getelementptr inbounds ([4 x i8]* @.str71327, i64 0, i64 0), i8* getelementptr inbounds ([4 x i8]* @.str72328, i64 0, i64 0), i8* getelementptr inbounds ([3 x i8]* @.str18274, i64 0, i64 0), i8* getelementptr inbounds ([6 x i8]* @.str11267, i64 0, i64 0), i8* getelementptr inbounds ([3 x i8]* @.str32288, i64 0, i64 0), i8* null], align 32 ; <[8 x i8*]*> [#uses=2]
  %scevgep.i = getelementptr [8 x i8*]* @must_be_connected_without, i64 0, i64 %indvar.i ; <i8**> [#uses=1]
  %17 = load ...
  %18 = icmp eq i8* %17, null                     ; <i1> [#uses=1]
-> icmp eq i64 %indvar.i, 7 


@yytable1095 = internal constant [84 x i8] c"\12\01(\05\06\07\08\09\0A\0B\0C\0D\0E1\0F\10\11266\1D: \10\11,-,0\03'\10\11B6\04\17&\18\1945\05\06\07\08\09\0A\0B\0C\0D\0E\1E\0F\10\11*\1A\1B\1C$3+>#%;<IJ=ADFEGH9KL\00\00\00C", align 32 ; <[84 x i8]*> [#uses=2]
  %57 = getelementptr inbounds [84 x i8]* @yytable1095, i64 0, i64 %56 ; <i8*> [#uses=1]
   %mode.0.in = getelementptr inbounds [9 x i32]* @mb_mode_table, i64 0, i64 %.pn ; <i32*> [#uses=1]
load ...
   %64 = icmp eq i8 %58, 4                         ; <i1> [#uses=1]
-> icmp eq i64 %.pn, 35             ; <i1> [#uses=0]


@gsm_DLB = internal constant [4 x i16] [i16 6554, i16 16384, i16 26214, i16 32767]
%scevgep.i = getelementptr [4 x i16]* @gsm_DLB, i64 0, i64 %indvar.i ; <i16*> [#uses=1]
%425 = load %scevgep.i
%426 = icmp eq i16 %425, -32768                 ; <i1> [#uses=0]
-> false

llvm-svn: 92411
2010-01-02 08:12:04 +00:00
Chris Lattner
cf784992da remove the instcombine transformations that are inserting nasty
pointer to int casts that confuse later optimizations.  See PR3351
for details.

This improves but doesn't completely fix 483.xalancbmk because llvm-gcc
does this xform in GCC's "fold" routine as well.  Clang++ will do
better I guess.

llvm-svn: 92408
2010-01-02 00:31:05 +00:00
Chris Lattner
ef4fba933d add a simple instcombine xform, simplify another one to use hasAllZeroIndices()
instead of hand rolling a loop.

llvm-svn: 92403
2010-01-01 23:09:08 +00:00
Chris Lattner
feb7b1af69 generalize the pointer difference optimization to handle
a constantexpr gep on the 'base' side of the expression.
This completes comment #4 in PR3351, which comes from
483.xalancbmk.

llvm-svn: 92402
2010-01-01 22:42:29 +00:00
Chris Lattner
89b1b63bdf teach instcombine to optimize pointer difference idioms involving constant
expressions.  This is a step towards comment #4 in PR3351.

llvm-svn: 92401
2010-01-01 22:29:12 +00:00
Chris Lattner
ce7717e168 implement the transform requested in PR5284
llvm-svn: 92398
2010-01-01 18:34:40 +00:00
Chris Lattner
e5f5e4b151 add a few trivial instcombines for llvm.powi.
llvm-svn: 92383
2010-01-01 01:52:15 +00:00
Chris Lattner
662a872e15 When factoring multiply expressions across adds, factor both
positive and negative forms of constants together.  This 
allows us to compile:

int foo(int x, int y) {
    return (x-y) + (x-y) + (x-y);
}

into:

_foo:                                                       ## @foo
	subl	%esi, %edi
	leal	(%rdi,%rdi,2), %eax
	ret

instead of (where the 3 and -3 were not factored):

_foo:
        imull   $-3, 8(%esp), %ecx
        imull   $3, 4(%esp), %eax
        addl    %ecx, %eax
        ret

this started out as:
    movl    12(%ebp), %ecx
    imull   $3, 8(%ebp), %eax
    subl    %ecx, %eax
    subl    %ecx, %eax
    subl    %ecx, %eax
    ret

This comes from PR5359.

llvm-svn: 92381
2010-01-01 01:13:15 +00:00
Chris Lattner
dd837a069b test case we already get right.
llvm-svn: 92380
2010-01-01 00:50:00 +00:00
Chris Lattner
ebe5932016 reuse negates where possible instead of always creating them from scratch.
This allows us to optimize test12 into:

define i32 @test12(i32 %X) {
  %factor = mul i32 %X, -3                        ; <i32> [#uses=1]
  %Z = add i32 %factor, 6                         ; <i32> [#uses=1]
  ret i32 %Z
}

instead of:

define i32 @test12(i32 %X) {
  %Y = sub i32 6, %X                              ; <i32> [#uses=1]
  %C = sub i32 %Y, %X                             ; <i32> [#uses=1]
  %Z = sub i32 %C, %X                             ; <i32> [#uses=1]
  ret i32 %Z
}

llvm-svn: 92373
2009-12-31 20:34:32 +00:00
Chris Lattner
59379f8bb0 teach reassociate to factor x+x+x -> x*3. While I'm at it,
fix RemoveDeadBinaryOp to actually do something.
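
Illustrative IR (a sketch, not from the original commit):

  %a = add i32 %x, %x
  %b = add i32 %a, %x       ; x+x+x, folds to: mul i32 %x, 3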

llvm-svn: 92368
2009-12-31 19:24:52 +00:00
Chris Lattner
4870f6a384 simple fix for an incorrect factoring which causes a
miscompilation, PR5458.

llvm-svn: 92354
2009-12-31 08:33:49 +00:00
Chris Lattner
d9ed31f6d0 merge some more tests in.
llvm-svn: 92353
2009-12-31 08:32:22 +00:00
Chris Lattner
67d96ee04c filecheckize
llvm-svn: 92352
2009-12-31 08:29:56 +00:00
Chris Lattner
83d5230121 fix two bogus tests that the asmparser now rejects.
llvm-svn: 92303
2009-12-30 05:54:51 +00:00
Chris Lattner
5d3919d5f9 move an optimization for memcmp out of simplifylibcalls and into
SDISel.  This optimization was causing simplifylibcalls to 
introduce type-unsafe nastiness.  This is the first step, I'll be 
expanding the memcmp optimizations shortly, covering things that
we really really wouldn't want simplifylibcalls to do.

llvm-svn: 92098
2009-12-24 00:37:38 +00:00
Bob Wilson
0dc93264b1 Generalize SROA to allow the first index of a GEP to be non-zero. Add a
missing check that an array reference doesn't go past the end of the array,
and remove some redundant checks for in-bound array and vector references
that are no longer needed.

llvm-svn: 91897
2009-12-22 06:57:14 +00:00
Chris Lattner
4e30207029 Implement PR5795 by merging duplicated return blocks. This could go further
by merging all returns in a function into a single one, but simplifycfg 
currently likes to duplicate the return (an unfortunate choice!)

llvm-svn: 91890
2009-12-22 06:07:30 +00:00
Chris Lattner
3f9fc699e0 convert to filecheck
llvm-svn: 91889
2009-12-22 06:04:26 +00:00
Chris Lattner
c54fd1e777 fix PR5837 by having SSAUpdate reuse phi nodes for the
'GetValueInMiddleOfBlock' case, instead of inserting 
duplicates.

A similar fix is almost certainly needed by the machine-level
SSAUpdate implementation.

llvm-svn: 91820
2009-12-21 07:16:11 +00:00
Chris Lattner
ae95fddf98 add check lines for min/max tests.
llvm-svn: 91816
2009-12-21 06:08:50 +00:00
Chris Lattner
013b88ee59 really convert this to filecheck.
llvm-svn: 91815
2009-12-21 06:06:10 +00:00
Chris Lattner
c9bfe8679e give instcombine some helper functions for matching MIN and MAX, and
implement some optimizations for MIN(MIN()) and MAX(MAX()) and 
MIN(MAX()) etc.  This substantially improves the code in PR5822 but
doesn't kick in much elsewhere.  2 max's were optimized in 
pairlocalalign and one in smg2000.

llvm-svn: 91814
2009-12-21 06:03:05 +00:00
Chris Lattner
d5bdc7876d filecheckize
llvm-svn: 91813
2009-12-21 05:53:13 +00:00
Chris Lattner
f1474e1761 enhance x-(-A) -> x+A to preserve NUW/NSW.
Use the presence of NSW/NUW to fold "icmp (x+cst), x" to a constant in
cases where it would otherwise be undefined behavior.
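
Illustrative IR (a sketch, not from the original commit):

  %a = add nsw i32 %x, 1
  %c = icmp slt i32 %a, %x  ; with nsw, x+1 <s x is impossible: folds to false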

Surprisingly (to me at least), this triggers hundreds of times in
a few benchmarks: lencode, ldecode, and 466.h264ref seem to *really*
like this.

llvm-svn: 91812
2009-12-21 04:04:05 +00:00
Chris Lattner
d34eb29977 Optimize all cases of "icmp (X+Cst), X" to something simpler. This triggers
a bunch in lencode, ldecod, spass, 176.gcc, 252.eon, among others.  It is 
also the first part of PR5822

llvm-svn: 91811
2009-12-21 03:19:28 +00:00
Chris Lattner
072f39fd3a convert to filecheck
llvm-svn: 91810
2009-12-21 03:11:05 +00:00
Chris Lattner
07f0e8ec8a fix an overly conservative caching issue that caused memdep to
cache a pointer as being unavailable due to phi trans in the
wrong place.  This would cause later queries to fail even when
they didn't involve phi trans.

llvm-svn: 91787
2009-12-19 21:29:22 +00:00
Chris Lattner
4f562e5f14 fix inconsistent use of tabs
llvm-svn: 91783
2009-12-19 20:44:43 +00:00
Chris Lattner
d9bf69f1a5 fix PR5827 by disabling the phi slicing transformation in a case
where instcombine would have to split a critical edge due to a
phi node of an invoke.  Since instcombine can't change the CFG,
it has to bail out from doing the transformation.

llvm-svn: 91763
2009-12-19 07:01:15 +00:00
Bob Wilson
03b6955c7f Reapply 91459 with a simple fix for the problem that broke the x86_64-darwin
bootstrap.  This also replaces the WeakVH references that Chris objected to
with normal Value references.

llvm-svn: 91711
2009-12-18 20:14:40 +00:00
Eli Friedman
c8ab298dbd Optimize icmp of null and select of two constants even if the select has
multiple uses.  (The construct in question was found in gcc.)

llvm-svn: 91675
2009-12-18 08:22:35 +00:00
Eli Friedman
9543d05079 Allow instcombine to combine "sext(a) >u const" to "a >u trunc(const)".
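
Illustrative IR (a sketch, not from the original commit; valid here because
100 fits in the signed-positive range of i8):

  %e = sext i8 %a to i32
  %c = icmp ugt i32 %e, 100   ; becomes: icmp ugt i8 %a, 100
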
llvm-svn: 91631
2009-12-17 22:42:29 +00:00
Eli Friedman
8afea2d095 Make the ptrtoint comparison simplification work if one side is a global.
llvm-svn: 91624
2009-12-17 21:27:47 +00:00