Commit Graph

1198 Commits

Author SHA1 Message Date
Dan Gohman
d13f1a3b59 Return null instead of false, as appropriate.
llvm-svn: 70054
2009-04-25 17:28:45 +00:00
Dan Gohman
a7fae1f865 Add several more icmp simplifications. Transform signed comparisons
into unsigned ones when the operands are known to have the same
sign bit value.

llvm-svn: 70053
2009-04-25 17:12:48 +00:00
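
For illustration only (hypothetical IR, not from the commit): one case covered by this kind of simplification is when both operands are known to have a zero sign bit, so the signed compare can become unsigned.

%a = and i32 %x, 127          ; sign bit of %a known zero
%b = and i32 %y, 127          ; sign bit of %b known zero
%c = icmp slt i32 %a, %b

=>

%c = icmp ult i32 %a, %b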
Sanjiv Gupta
f1177e1be7 Allow i16 type indices to gep.
llvm-svn: 69946
2009-04-24 02:37:54 +00:00
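
For illustration only (hypothetical function, written in the typed-pointer IR syntax of that era): a gep whose indices are i16 values rather than i32/i64.

define i32 @f([10 x i32]* %p, i16 %i) {
        %a = getelementptr [10 x i32]* %p, i16 0, i16 %i
        %v = load i32* %a
        ret i32 %v
}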
Sanjiv Gupta
0cb9d67bcc Before trying to introduce/eliminate cast/ext/trunc to make the index type
the same as the pointer type, make sure that the pointer size is a valid sequential
index type.

llvm-svn: 69574
2009-04-20 06:05:54 +00:00
Chris Lattner
7d75f78b92 Instcombine should not promote whole computation trees to "strange"
integer types, unless they are already strange.  This prevents it from
turning the code produced by SROA into crazy libcalls and stuff that 
the code generator can't handle.  In the attached example, the result
was an i96 multiply that caused the x86 backend to assert.

Note that if TargetData had an idea of what the legal types are for
a target, this could be used to stop instcombine from introducing
i64 muls, as Scott wanted.

llvm-svn: 68598
2009-04-08 05:41:03 +00:00
Chris Lattner
2f520929d4 fix rdar://6762290, a crash compiling cxx filt with clang.
llvm-svn: 68500
2009-04-07 05:03:34 +00:00
Evan Cheng
c419350132 Throttle back "fold select into operand" transformation. InstCombine should not generate selects of two constants unless they are selects of 0 and 1.
e.g.
define i32 @t1(i32 %c, i32 %x) nounwind {
       %t1 = icmp eq i32 %c, 0
       %t2 = lshr i32 %x, 18
       %t3 = select i1 %t1, i32 %t2, i32 %x
       ret i32 %t3
}

was turned into

define i32 @t2(i32 %c, i32 %x) nounwind {
       %t1 = icmp eq i32 %c, 0
       %t2 = select i1 %t1, i32 18, i32 0
       %t3 = lshr i32 %x, %t2
       ret i32 %t3
}

For most targets, that means materializing two constants and then a select. e.g. On x86-64

movl    %esi, %eax
shrl    $18, %eax
testl   %edi, %edi
cmovne  %esi, %eax
ret

=>

xorl    %eax, %eax
testl   %edi, %edi
movl    $18, %ecx
cmovne  %eax, %ecx
movl    %esi, %eax
shrl    %cl, %eax
ret

Also, the optimizer and codegen can reason about shl / and / add, etc. by a constant. This optimization will hinder optimizations using ComputeMaskedBits.

llvm-svn: 68142
2009-03-31 20:42:45 +00:00
Chris Lattner
c055403764 Fix PR3874 by restoring a condition I removed, but making it more
precise than it used to be.

llvm-svn: 67662
2009-03-25 00:28:58 +00:00
Chris Lattner
be6ee56fb2 oops, I intended to remove this, not comment it out. Thanks Duncan!
llvm-svn: 67657
2009-03-24 23:48:25 +00:00
Chris Lattner
aabd3eeeff canonicalize inttoptr and ptrtoint instructions which cast pointers
to/from integer types that are not intptr_t to convert to intptr_t
then do an integer conversion to the dest type.  This exposes the
cast to the optimizer.

llvm-svn: 67638
2009-03-24 18:35:40 +00:00
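
For illustration only (not from the commit), assuming a target whose intptr_t is i64:

%i = ptrtoint i8* %p to i32

=>

%t = ptrtoint i8* %p to i64
%i = trunc i64 %t to i32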
Chris Lattner
51a4134e1c two changes:
1. Make instcombine always canonicalize trunc x to i1 into an icmp(x&1).  This 
   exposes the AND to other instcombine xforms and is more of what the code
   generator expects.
2. Rewrite the remaining trunc pattern match to use 'match', which 
   simplifies it a lot.
   

llvm-svn: 67635
2009-03-24 18:15:30 +00:00
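
For illustration only (hypothetical value, not from the commit), the canonicalization in point 1:

%b = trunc i32 %x to i1

=>

%t = and i32 %x, 1
%b = icmp ne i32 %t, 0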
Duncan Sands
3c115770e7 Factorize out a concept - no functionality change.
llvm-svn: 67454
2009-03-21 21:27:31 +00:00
Chris Lattner
623662e8e1 Fix instcombine to not introduce undefined shifts when merging two
shifts together.  This fixes PR3851.

llvm-svn: 67411
2009-03-20 22:41:15 +00:00
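
For illustration only (a hypothetical example, not the PR3851 testcase): blindly adding the shift amounts can produce a shift that is >= the bit width, which is undefined.

%t = lshr i32 %x, 20
%r = lshr i32 %t, 20

; naively combining these into  %r = lshr i32 %x, 40  would be an undefined
; shift; the pair always yields zero, so the fold must produce 0 instead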
Duncan Sands
926d062a48 Don't load values out of global constants with weak
linkage: the value may be replaced with something
different at link time.  (Frontends that want to
allow values to be loaded out of weak constants can
give their constants weak_odr linkage).

llvm-svn: 67407
2009-03-20 21:53:29 +00:00
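
For illustration only (hypothetical global, not from the commit):

@g = weak constant i32 7

define i32 @f() {
        %v = load i32* @g    ; must not be folded to 7: with weak linkage, @g
        ret i32 %v           ; may be replaced by another definition at link time
}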
Chris Lattner
0542f9f1ba Fix PR3826 - InstComb assert with vector shift, by not calling ComputeNumSignBits on a vector.
llvm-svn: 67211
2009-03-18 16:32:19 +00:00
Chris Lattner
43ae27a75e Remove a condition which is always true.
llvm-svn: 67089
2009-03-17 17:55:15 +00:00
Dale Johannesen
a4bb3e6d14 One more place where debug info affects codegen.
llvm-svn: 66930
2009-03-13 19:23:20 +00:00
Bill Wendling
5499163a0a Oops...I committed too much.
llvm-svn: 66867
2009-03-13 04:39:26 +00:00
Bill Wendling
02a239b837 Temporarily XFAIL this test.
llvm-svn: 66866
2009-03-13 04:37:11 +00:00
Dale Johannesen
bc9067e872 Skip interleaved debug info when fast-forwarding through
allocations.  Apparently the assumption is that there is an
instruction (terminator?) following the allocation, so I
am making the same assumption.

llvm-svn: 66716
2009-03-11 22:19:43 +00:00
Dale Johannesen
f650b9b7da Removing a dead debug intrinsic shouldn't trigger
another instcombine pass if we weren't going to make
one without debug info.

llvm-svn: 66576
2009-03-10 21:19:49 +00:00
Chris Lattner
e367477979 change the MemIntrinsic get/setAlignment method to take an unsigned
instead of a Constant*, which is what the clients of it really want.

llvm-svn: 66364
2009-03-08 03:59:00 +00:00
Chris Lattner
f827ae4fa5 Introduce a new MemTransferInst pseudo class, which is a common
parent of MemCpyInst and MemMoveInst, and simplify some code to
use it.

llvm-svn: 66361
2009-03-08 03:37:16 +00:00
Dale Johannesen
a73a4ee680 Fix another case where debug info was affecting
codegen.  I convinced myself it was OK to skip all
pointer bitcasts here too.

llvm-svn: 66122
2009-03-05 02:06:48 +00:00
Dale Johannesen
428972ecad Fix another case where a dbg.declare meant something
had 2 uses instead of 1.

llvm-svn: 66112
2009-03-05 00:39:02 +00:00
Dale Johannesen
e184480072 Always skip ptr-to-ptr bitcasts when counting,
per Chris' suggestion.  Slightly faster.

llvm-svn: 65999
2009-03-04 01:53:05 +00:00
Dale Johannesen
a6f7a45366 Make my earlier patch to skip debug intrinsics
when counting work; it was only off by 1.

llvm-svn: 65993
2009-03-04 01:20:34 +00:00
Dale Johannesen
81b6cd8ce5 Instruction counters must skip the bitcasts that
feed into llvm.dbg.declare nodes, as well as
the debug directives themselves.

llvm-svn: 65976
2009-03-03 22:36:47 +00:00
Dale Johannesen
ceed180d4c When removing a store to an alloca that has only one
use, check also for the case where it has two uses,
the other being a llvm.dbg.declare.  This is needed so
debug info doesn't affect codegen.

llvm-svn: 65970
2009-03-03 21:26:39 +00:00
Dan Gohman
51d4e8db6a Fix a bunch of Doxygen syntax issues. Escape special characters,
and put @file directives on their own comment line.

llvm-svn: 65920
2009-03-03 02:55:14 +00:00
Dale Johannesen
d4a205b300 Don't count DebugInfo instructions in another limit
(lest they affect codegen).

llvm-svn: 65915
2009-03-03 01:43:03 +00:00
Dale Johannesen
33fa9dc8a9 When sinking an insn in InstCombine bring its debug
info with it.
Don't count debug info insns against the scan maximum
in FindAvailableLoadedValue (lest they affect codegen).

llvm-svn: 65910
2009-03-03 01:09:07 +00:00
Duncan Sands
51ce06c788 Fix PR3694: add an instcombine micro-optimization that helps
clean up when using variable length arrays in llvm-gcc.

llvm-svn: 65832
2009-03-02 09:18:21 +00:00
Nick Lewycky
44b8675102 Silence compiler warning about use of uninitialized variables (in reality these
are always set by reference on the path that uses them). No functional change.

llvm-svn: 65621
2009-02-27 06:37:39 +00:00
Chris Lattner
1443cb8f77 Fix PR3667
llvm-svn: 65464
2009-02-25 18:20:01 +00:00
Dan Gohman
1197d46ccf Fix a ValueTracking rule: RHS means operand 1, not 0. Add a simple
ashr instcombine to help expose this code. And apply the fix to
SelectionDAG's copy of this code too.

llvm-svn: 65364
2009-02-24 02:00:40 +00:00
Zhou Sheng
d3008c8b1c Should reset DBI_Prev if DBI_Next == 0.
llvm-svn: 65314
2009-02-23 10:14:11 +00:00
Chris Lattner
29437eb4c3 fix some typos that Duncan noticed
llvm-svn: 65306
2009-02-23 05:56:17 +00:00
Dan Gohman
b105ab4e42 Revert the part of 64623 that attempted to align the source in a
memcpy to match the alignment of the destination. It isn't necessary
for getting loads and stores handled like the SSE loadu/storeu
intrinsics, and it was causing a performance regression in
MultiSource/Applications/JM/lencod.

The problem appears to have been a memcpy that copies from some
highly aligned array into an alloca; the alloca was then being
assigned a large alignment, which required codegen to perform
dynamic stack-pointer re-alignment, which forced the enclosing
function to have a frame pointer, which led to increased spilling.

llvm-svn: 65289
2009-02-22 18:06:32 +00:00
Nick Lewycky
2c8f0fd57f Don't sign extend the char when expanding char -> int during
load(bitcast(char[4] to i32*)) evaluation.

llvm-svn: 65246
2009-02-21 20:50:42 +00:00
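
For illustration only (hypothetical global, assuming a little-endian target):

@bytes = constant [4 x i8] c"\80\01\02\03"

%v = load i32* bitcast ([4 x i8]* @bytes to i32*)

; when evaluating this load, each i8 must be zero-extended while the i32 is
; assembled: the result is 0x03020180, not a value corrupted by sign-extending
; the 0x80 byte to 0xFFFFFF80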
Chris Lattner
3adae91c70 rename a function to indicate that it checks for profitability as well
as legality.  Make load sinking and gep sinking more careful: we only
do it when it won't pessimize loads from the stack.  This has the added
benefit of not producing code that is unanalyzable by SROA.

llvm-svn: 65209
2009-02-21 00:46:50 +00:00
Chris Lattner
0837686a2a commit a tweaked version of Daniel's patch for PR3599. We now
eliminate all the extensions and all but the one required truncate
from the testcase, but the or/and/shift stuff still isn't zapped.

llvm-svn: 64809
2009-02-17 20:47:23 +00:00
Dan Gohman
e06ea828a2 Fix EnforceKnownAlignment so that it doesn't ever reduce the alignment
of an alloca or global variable.

llvm-svn: 64693
2009-02-16 23:02:21 +00:00
Dan Gohman
3d93bc5654 Change these tests to use regular loads instead of llvm.x86.sse2.loadu.dq.
Enhance instcombine to use the preferred field of
GetOrEnforceKnownAlignment in more cases, so that regular IR operations are
optimized in the same way that the intrinsics currently are.

llvm-svn: 64623
2009-02-16 00:44:23 +00:00
Nate Begeman
9b68eff12e the two non-mask arguments to a shufflevector must be the same width, but they do not have to be the same
width as the result value.

llvm-svn: 64335
2009-02-11 22:36:25 +00:00
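
For illustration only (hypothetical values): the two inputs are both <2 x i32>, while the result width comes from the mask.

%r = shufflevector <2 x i32> %a, <2 x i32> %b, <4 x i32> <i32 0, i32 1, i32 2, i32 3>
; %r is <4 x i32>: wider than either input, which is allowed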
Mon P Wang
028d995112 Instcombine should not change load(cast p) to cast(load p) if the cast
changes the address space of the pointer.

llvm-svn: 64035
2009-02-07 22:19:29 +00:00
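
For illustration only (hypothetical pointers): when the cast changes the address space, the transform must be skipped.

%q = bitcast float addrspace(1)* %p to i32*
%v = load i32* %q

; rewriting this as a load of %p followed by a cast of the loaded value
; would read from address space 1 instead of address space 0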
Evan Cheng
b3da5fb3a4 APInt'fy SimplifyDemandedVectorElts so it can analyze vectors with more than 64 elements.
llvm-svn: 63631
2009-02-03 10:05:09 +00:00
Chris Lattner
6402178a04 reduce indentation, (~XorCST->getValue()).isSignBit() -> isMaxSignedValue()
llvm-svn: 63500
2009-02-02 07:15:30 +00:00
Nick Lewycky
e25b96473e Reinstate this optimization to fold icmp of xor when possible. Don't try to
turn icmp eq a+x, b+x into icmp eq a, b if a+x or b+x has other uses. This
may have been increasing register pressure leading to the bzip2 slowdown.

llvm-svn: 63487
2009-01-31 21:30:05 +00:00
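
For illustration only (hypothetical values): the add-stripping case that is now restricted to single-use operands.

%a1 = add i32 %a, %x
%b1 = add i32 %b, %x
%c  = icmp eq i32 %a1, %b1

=>

%c  = icmp eq i32 %a, %b     ; only when %a1 and %b1 have no other uses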
Chris Lattner
26698a600e Fix PR3452 (an infinite loop bootstrapping) by disabling the recent
improvements to the EvaluateInDifferentType code.  This code works 
by just inserting a bunch of new code and then seeing if it is
useful.  Instcombine is not allowed to do this: it can only insert
new code if it is useful, and only when it is converging to a more
canonical fixed point.  Now that we iterate when DCE makes progress,
this causes an infinite loop when the code ends up not being used.

llvm-svn: 63483
2009-01-31 19:05:27 +00:00