mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-28 06:22:51 +01:00
Commit Graph

1046 Commits

Author SHA1 Message Date
Nadav Rotem
8564ccca8b Revert r164763 because it introduces new shuffles.
Thanks Nick Lewycky for pointing this out.

llvm-svn: 181177
2013-05-06 02:39:09 +00:00
Dmitri Gribenko
82c92dc3dd Add ArrayRef constructor from None, and do the cleanups that this constructor enables
Patch by Robert Wilhelm.

llvm-svn: 181138
2013-05-05 00:40:33 +00:00
Nick Lewycky
d0db52e3ac Tabs to spaces. No functionality change.
llvm-svn: 181082
2013-05-04 01:08:15 +00:00
Filip Pizlo
dd62846c56 This patch breaks up Wrap.h so that it does not have to include all of
the things, and renames it to CBindingWrapping.h.  I also moved 
CBindingWrapping.h into Support/.

This new file just contains the macros for defining different wrap/unwrap 
methods.

The calls to those macros, as well as any custom wrap/unwrap definitions 
(like for array of Values for example), are put into corresponding C++ 
headers.

Doing this required some #include surgery, since some .cpp files relied 
on the fact that including Wrap.h implicitly caused the inclusion of a 
bunch of other things.

This also now means that the C++ headers will include their corresponding 
C API headers; for example Value.h must include llvm-c/Core.h.  I think 
this is harmless, since the C API headers contain just external function 
declarations and some C types, so I don't believe there should be any 
nasty dependency issues here.

llvm-svn: 180881
2013-05-01 20:59:00 +00:00
Jim Grosbach
5002c3f17d Revert "InstCombine: Fold more shuffles of shuffles."
This reverts commit r180802

There's ongoing discussion about whether this is the right place to make
this transformation. Reverting for now while we figure it out.

llvm-svn: 180834
2013-05-01 00:25:27 +00:00
Jim Grosbach
940f9dc094 InstCombine: Fold more shuffles of shuffles.
Always fold a shuffle-of-shuffle into a single shuffle when there's only one
input vector in the first place. Continue to be more conservative when there's
multiple inputs.

rdar://13402653
PR15866

llvm-svn: 180802
2013-04-30 20:43:52 +00:00
David Majnemer
5f05aaa765 Fix a bug in foldSelectICmpAndOr.
Differences in bitwidth between X and Y could exist even if C1 and C2 have
the same Log2 representation.

llvm-svn: 180779
2013-04-30 10:36:33 +00:00
David Majnemer
4b346f9a6e Fix "Combine bit test + conditional or into simple math"
This fixes the optimization introduced in r179748 and reverted in r179750.

While the optimization was sound, it did not properly respect differences in
bit-width.

llvm-svn: 180777
2013-04-30 08:57:58 +00:00
Eric Christopher
beec5d09da Move C++ code out of the C headers and into either C++ headers
or the C++ files themselves. This enables people to use
just a C compiler to interoperate with LLVM.

llvm-svn: 180063
2013-04-22 22:47:22 +00:00
Anat Shemer
0d94c56dac Changed back (relative to commit 179786) the operations executed when extract(cast) is transformed to cast(extract). It uses the Builder class as before. In addition the result node is added to the Worklist, so all the previous extract users will become the new scalar cast users.
llvm-svn: 180045
2013-04-22 20:51:10 +00:00
Jakub Staszak
068c94ea78 Keep coding standard. Don't use "else if" after "return".
llvm-svn: 179826
2013-04-19 01:18:04 +00:00
Anat Shemer
ca5036302e In the function InstCombiner::visitExtractElementInst() removed the limitation that extract is promoted over a cast only if the cast has only one use.
llvm-svn: 179786
2013-04-18 19:56:44 +00:00
Anat Shemer
2d789b4b53 Added a function scalarizePHI() that scalarizes a vector phi instruction if it has only 2 uses: one to promote the vector phi in a loop and the other use is an extract operation of one element at a constant location.
llvm-svn: 179783
2013-04-18 19:35:39 +00:00
David Majnemer
72034bc02f Revert "Combine bit test + conditional or into simple math"
It is causing stage2 builds to fail, let's get them running again.

llvm-svn: 179750
2013-04-18 08:42:33 +00:00
David Majnemer
7dd2b94d65 Combine bit test + conditional or into simple math
Simplify:
(select (icmp eq (and X, C1), 0), Y, (or Y, C2))

Into:
(or (shl (and X, C1), C3), y)

Where:
C3 = Log(C2) - Log(C1)

If:
C1 and C2 are both powers of two

llvm-svn: 179748
2013-04-18 07:30:07 +00:00
David Majnemer
1dc3d3f7a0 Reorders two transforms that collide with each other
One performs: (X == 13 | X == 14) -> X-13 <u 2
The other: (A == C1 || A == C2) -> (A & ~(C1 ^ C2)) == C1

The problem is that there are certain values of C1 and C2 that
trigger both transforms but the first one blocks out the second,
this generates suboptimal code.

Reordering the transforms should be better in every case and
allows us to do interesting stuff like turn:
  %shr = lshr i32 %X, 4
  %and = and i32 %shr, 15
  %add = add i32 %and, -14
  %tobool = icmp ne i32 %add, 0

into:
  %and = and i32 %X, 240
  %tobool = icmp ne i32 %and, 224

llvm-svn: 179493
2013-04-14 21:15:43 +00:00
Benjamin Kramer
9f30ffb4a7 InstCombine: Check the operand types before merging fcmp ord & fcmp ord.
Fixes PR15737.

llvm-svn: 179417
2013-04-12 21:56:23 +00:00
David Majnemer
eec2fe2c55 Simplify (A & ~B) in icmp if A is a power of 2
The transform will execute like so:
(A & ~B) == 0 --> (A & B) != 0
(A & ~B) != 0 --> (A & B) == 0
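A hypothetical instance with A = 8 (a power of 2; values chosen only for illustration):

  %notb = xor i32 %b, -1
  %and  = and i32 %notb, 8
  %cmp  = icmp eq i32 %and, 0

would become:

  %and = and i32 %b, 8
  %cmp = icmp ne i32 %and, 0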

llvm-svn: 179386
2013-04-12 17:25:07 +00:00
David Majnemer
82ec1d080e Optimize icmp involving addition better
Allows LLVM to optimize sequences like the following:

%add = add nsw i32 %x, 1
%cmp = icmp sgt i32 %add, %y

into:

%cmp = icmp sge i32 %x, %y

as well as:

%add1 = add nsw i32 %x, 20
%add2 = add nsw i32 %y, 57
%cmp = icmp sge i32 %add1, %add2

into:

%add = add nsw i32 %y, 37
%cmp = icmp sle i32 %add, %x

llvm-svn: 179316
2013-04-11 20:05:46 +00:00
Benjamin Kramer
3b38288ea2 Fix for wrong instcombine on vector insert/extract
When trying to collapse sequences of insertelement/extractelement
instructions into single shuffle instructions, there is one specific
case where the Instruction Combiner wrongly updates the resulting
Mask of shuffle indexes.

The problem is in function CollectShuffleElements.

If we have a sequence of insert/extract element instructions
like the one below:

  %tmp1 = extractelement <4 x float> %LHS, i32 0
  %tmp2 = insertelement <4 x float> %RHS, float %tmp1, i32 1
  %tmp3 = extractelement <4 x float> %RHS, i32 2
  %tmp4 = insertelement <4 x float> %tmp2, float %tmp3, i32 3

Where:
  . %RHS will have a mask of [4,5,6,7]
  . %LHS will have a mask of [0,1,2,3]

The Mask of shuffle indexes is wrongly computed to [4,1,6,7]
instead of [4,0,6,7].
When analyzing %tmp2 in order to compute the Mask for the
resulting shuffle instruction, the algorithm forgets to update
the mask index at position 1 with the index associated to the
element extracted from %LHS by instruction %tmp1.

Patch by Andrea DiBiagio!

llvm-svn: 179291
2013-04-11 15:10:09 +00:00
Jim Grosbach
e7766f7108 Tidy up a bit. No functional change.
llvm-svn: 178915
2013-04-05 21:20:12 +00:00
Akira Hatanaka
724132bda3 Check if Type is a vector before calling function Type::getVectorNumElements.
llvm-svn: 178208
2013-03-28 01:28:02 +00:00
Ulrich Weigand
ae3d13ab3f Make InstCombineCasts.cpp:OptimizeIntToFloatBitCast endian safe.
The OptimizeIntToFloatBitCast converts shift-truncate sequences
into extractelement operations.  The computation of the element
index to be used in the resulting operation is currently only
correct for little-endian targets.

This commit fixes the element index computation to be correct
for big-endian targets as well.  If the target byte order is
unknown, the optimization cannot be performed at all.

llvm-svn: 178031
2013-03-26 15:36:14 +00:00
Shuxin Yang
9f502ba0a0 Fix a bug in fast-math fadd/fsub simplification.
The problem is that the code mistakenly took for granted that the following constructor
is able to create an APFloat from a *SIGNED* integer:
   
  APFloat::APFloat(const fltSemantics &ourSemantics, integerPart value)

rdar://13486998

llvm-svn: 177906
2013-03-25 20:43:41 +00:00
Arnaud A. de Grandmaison
019bd576ab Address issues found by Duncan during post-commit review of r177856.
llvm-svn: 177863
2013-03-25 11:47:38 +00:00
Arnaud A. de Grandmaison
1fdfeaba38 InstCombine: simplify comparisons to zero of (shl %x, Cst) or (mul %x, Cst)
This simplification happens at 2 places :
 - using the nsw attribute when the shl / mul is used by a sign test
 - when the shl / mul is compared for (in)equality to zero
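A sketch of the sign-test case (the constants and the nsw flag here are illustrative assumptions, not taken from the commit):

  %shl = shl nsw i32 %x, 3
  %cmp = icmp slt i32 %shl, 0

can be simplified to:

  %cmp = icmp slt i32 %x, 0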

llvm-svn: 177856
2013-03-25 09:48:49 +00:00
Arnaud A. de Grandmaison
7a4226244b InstCombine: Improve the result bitvect type when folding (cmp pred (load (gep GV, i)) C) to a bit test.
The original code used i32, and i64 if legal. This introduced unneeded
casts when they aren't legal, or when the index variable i has another
type. In order of preference: try to use i's type; use the smallest
fitting legal type (using an added DataLayout method); default to i32.
A testcase checks that this works when the index gep operand is i16.

Patch by : Ahmed Bougacha <ahmed.bougacha@gmail.com>
Reviewed by : Duncan

llvm-svn: 177712
2013-03-22 08:25:01 +00:00
Shuxin Yang
55038cc0b2 Perform factorization as a last resort of unsafe fadd/fsub simplification.
Rules include:
  1) x*y +/- x*z => x*(y +/- z)
     (the order of operands doesn't matter)

  2) y/x +/- z/x => (y +/- z)/x

 The transformation is disabled if the new add/sub expr "y +/- z" is a
 denormal, NaN or infinity.
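A minimal sketch of rule 1, assuming fast-math flags on all the instructions (operand names are placeholders):

  %m1 = fmul fast double %x, %y
  %m2 = fmul fast double %x, %z
  %s  = fadd fast double %m1, %m2

  =>

  %yz = fadd fast double %y, %z
  %s  = fmul fast double %x, %yz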

rdar://12911472

llvm-svn: 177088
2013-03-14 18:08:26 +00:00
Arnaud A. de Grandmaison
0810447275 Fix a performance regression when combining to smaller types in icmp (shl %v, C1), C2 :
Only combine when the shl is only used by the icmp

llvm-svn: 176950
2013-03-13 14:40:37 +00:00
Jakub Staszak
cf6be75b52 Simplify code. No functionality change.
llvm-svn: 176765
2013-03-09 11:18:59 +00:00
Jim Grosbach
8dd9a160c8 InstCombine: Don't shrink allocas when combining with a bitcast.
When considering folding a bitcast of an alloca into the alloca itself,
make sure we don't shrink the amount of memory being allocated, or
things rapidly go sideways.

rdar://13324424

llvm-svn: 176547
2013-03-06 05:44:53 +00:00
Quentin Colombet
5b2d4c19c9 Fix a bug in instcombine for fmul in fast math mode.
The instcombine recognized pattern looks like:
a = b * c
d = a +/- Cst
or
a = b * c
d = Cst +/- a

When creating the new operands for fadd or fsub instruction following the related fmul, the first operand was created with the second original operand (M0 was created with C1) and the second with the first (M1 with Opnd0).

The fix consists in creating the new operands with the appropriate original operand, i.e., M0 with Opnd0 and M1 with C1.

llvm-svn: 176300
2013-02-28 21:12:40 +00:00
Bill Wendling
cac4751d0b The transform is:
(or (bool?A:B),(bool?C:D)) --> (bool?(or A,C):(or B,D))

By the time the OR is visited, both the SELECTs have been visited and not
optimized and the OR itself hasn't been transformed so we do this transform in
the hopes that the new ORs will be optimized.

The transform is explicitly disabled for vector-selects until "codegen matures
to handle them better".
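A scalar sketch of the transform (the names are mine; both selects share the same i1 condition):

  %s1 = select i1 %c, i32 %a, i32 %b
  %s2 = select i1 %c, i32 %x, i32 %y
  %r  = or i32 %s1, %s2

  -->

  %ac = or i32 %a, %x
  %bd = or i32 %b, %y
  %r  = select i1 %c, i32 %ac, i32 %bd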

Patch by Muhammad Tauqir!

llvm-svn: 175380
2013-02-16 23:41:36 +00:00
Arnaud A. de Grandmaison
e63e196b5d Fix refactoring mistake in "Teach InstCombine to work with smaller legal types..."
llvm-svn: 175273
2013-02-15 15:18:17 +00:00
Arnaud A. de Grandmaison
6b2f7add7e Teach InstCombine to work with smaller legal types in icmp (shl %v, C1), C2
It enables working with a smaller constant, which is target friendly for those targets which can compare to immediates.
It also avoids inserting a shift in favor of a trunc, which can be free on some targets.

This used to work until LLVM-3.1, but regressed with the 3.2 release.

llvm-svn: 175270
2013-02-15 14:35:47 +00:00
Arnaud A. de Grandmaison
2c730cd330 Fix comment
visitSExt is an adapted copy of the related visitZExt method, so adapt the comment accordingly.

llvm-svn: 175019
2013-02-13 00:19:19 +00:00
Michael Ilseman
61bd4eabc9 Optimization: bitcast (<1 x ...> insertelement ..., X, ...) to ... ==> bitcast X to ...
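A concrete sketch of the pattern (types chosen for illustration):

  %v = insertelement <1 x i32> undef, i32 %x, i32 0
  %f = bitcast <1 x i32> %v to float

  ==>

  %f = bitcast i32 %x to float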
llvm-svn: 174905
2013-02-11 21:41:44 +00:00
Andrew Trick
b8764716fa Revert "Have InstCombine call SipmlifyCall when handling calls. Test case included."
This reverts commit 3854a5d90fee52af1065edbed34521fff6cdc18d.

This causes a clang unit test to hang: vtable-available-externally.cpp.

llvm-svn: 174692
2013-02-08 01:55:39 +00:00
Michael Ilseman
63dd0ecb1e Have InstCombine call SimplifyCall when handling calls. Test case included.
llvm-svn: 174675
2013-02-07 23:01:35 +00:00
Michael Ilseman
27c81fc400 Preserve fast-math flags after reassociation and commutation. Update test cases
llvm-svn: 174571
2013-02-07 01:40:15 +00:00
Benjamin Kramer
7d9ae0ca14 InstCombine: Fix and simplify the inttoptr side too.
llvm-svn: 174438
2013-02-05 20:22:40 +00:00
Benjamin Kramer
e615fc610e InstCombine: Harden code to work with vectors of pointers and simplify it a bit.
Found by running instcombine on a fabricated test case for the constant folder.

llvm-svn: 174430
2013-02-05 19:21:56 +00:00
Nadav Rotem
696003f5f2 Revert r174152. The shift amount may overflow and in that case this transformation is illegal.
llvm-svn: 174156
2013-02-01 07:59:33 +00:00
Nadav Rotem
8e40171f27 Optimize shift lefts of a constant by a value plus constant into a single shift.
llvm-svn: 174152
2013-02-01 06:45:40 +00:00
Bill Wendling
c926e2a691 Convert typeIncompatible to return an AttributeSet.
There are still places which treat the Attribute object as a collection of
attributes. I'm systematically removing them.

llvm-svn: 173990
2013-01-30 23:07:40 +00:00
Nadav Rotem
7b1c05a9b7 InstCombine: canonicalize sext-and --> select
sext-not-and --> select.

Patch by Muhammad Tauqir Ahmad.

llvm-svn: 173901
2013-01-30 06:35:22 +00:00
Bill Wendling
e901c4cf0a Use the AttributeSet instead of AttributeWithIndex.
In the future, AttributeWithIndex won't be used anymore. Besides, it exposes the
internals of the AttributeSet to outside users, which isn't goodness.

llvm-svn: 173602
2013-01-27 02:08:22 +00:00
Bill Wendling
47efd8b988 Remove some introspection functions.
The 'getSlot' function and its ilk allow introspection into the AttributeSet
class. However, that class should be opaque. Allow access through accessor
methods instead.

llvm-svn: 173522
2013-01-25 23:09:36 +00:00
Bill Wendling
c5e0fb9b77 Use the new 'getSlotIndex' method to retrieve the attribute's slot index.
llvm-svn: 173499
2013-01-25 21:46:52 +00:00
Craig Topper
f358937aff Remove trailing whitespace.
llvm-svn: 173322
2013-01-24 05:22:40 +00:00
Benjamin Kramer
90ac6c1a88 Revert "InstCombine: Clean up weird code that talks about a modulus that's long gone."
This causes crashes during the build of compiler-rt during selfhost. Add a
testcase for coverage.

llvm-svn: 173279
2013-01-23 17:52:29 +00:00
Benjamin Kramer
cb0126b7dc InstCombine: Clean up weird code that talks about a modulus that's long gone.
This does the right thing unless the multiplication overflows, but the old code
didn't handle that case either.

llvm-svn: 173276
2013-01-23 17:16:22 +00:00
Bill Wendling
c31f99d129 Remove the last of uses that use the Attribute object as a collection of attributes.
Collections of attributes are handled via the AttributeSet class now. This
finally frees us up to make significant changes to how attributes are structured.

llvm-svn: 173228
2013-01-23 06:14:59 +00:00
Bill Wendling
f8c0d4f9cd Have AttributeSet::getRetAttributes() return an AttributeSet instead of Attribute.
This further restricts the use of the Attribute class to the Attribute family of
classes.

llvm-svn: 173098
2013-01-21 22:44:49 +00:00
Bill Wendling
991cef6573 Make AttributeSet::getFnAttributes() return an AttributeSet instead of an Attribute.
This is more code to isolate the use of the Attribute class to that of just
holding one attribute instead of a collection of attributes.

llvm-svn: 173094
2013-01-21 21:57:28 +00:00
Paul Redmond
3d611dcda9 Transform (sub 0, (zext bool to A)) to (sext bool to A) and
(sub 0, (sext bool to A)) to (zext bool to A).
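For example (a sketch using i32 for the type A):

  %z   = zext i1 %b to i32
  %neg = sub i32 0, %z

becomes:

  %neg = sext i1 %b to i32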

Patch by Muhammad Ahmad
Reviewed by Duncan Sands

llvm-svn: 173093
2013-01-21 21:57:20 +00:00
Bill Wendling
7777bbbbf3 Use AttributeSet accessor methods instead of Attribute accessor methods.
Further encapsulation of the Attribute object. Don't allow direct access to the
Attribute object as an aggregate.

llvm-svn: 172853
2013-01-18 21:53:16 +00:00
Bill Wendling
b5ddc9a5f8 Push some more methods down to hide the use of the Attribute class.
Because the Attribute class is going to stop representing a collection of
attributes, limit the use of it as an aggregate in favor of using AttributeSet.
This replaces some of the uses for querying the function attributes.

llvm-svn: 172844
2013-01-18 21:11:39 +00:00
Craig Topper
215a93298b Check for less than 0 in shuffle mask instead of -1. It's more consistent with other code related to shuffles and easier to implement in compiled code.
llvm-svn: 172788
2013-01-18 05:30:07 +00:00
Craig Topper
bf40e524fd Remove trailing whitespace. Remove new lines between closing brace and 'else'
llvm-svn: 172784
2013-01-18 05:09:16 +00:00
Nadav Rotem
24c96fa80c Teach InstCombine to optimize extract of a value from a vector add operation with a constant zero.
llvm-svn: 172576
2013-01-15 23:43:14 +00:00
Shuxin Yang
36c3cb1b3b 1. Hoist minus sign as high as possible in an attempt to reveal
some optimization opportunities (in the enclosing super-expressions).

   rule 1. (-0.0 - X ) * Y => -0.0 - (X * Y)
     if expression "-0.0 - X" has only one reference.

   rule 2. (0.0 - X ) * Y => -0.0 - (X * Y)
     if expression "0.0 - X" has only one reference, and
        the instruction is marked "noSignedZero".

2. Eliminate negation (the compiler was already able to handle these
    optimizations if the 0.0s are replaced with -0.0.)

   rule 3: (0.0 - X) * (0.0 - Y) => X * Y
   rule 4: (0.0 - X) * C => X * -C
   if the expr is flagged "noSignedZero".

3. Rule 5: (X*Y) * X => (X*X) * Y
   if X!=Y and the expression is flagged with "UnsafeAlgebra".

   The purpose of this transformation is two-fold:
    a) to form a power expression (of X).
    b) potentially shorten the critical path: After transformation, the
       latency of the instruction Y is amortized by the expression of X*X,
       and therefore Y is in a "less critical" position compared to what it
      was before the transformation. 

4. Remove the InstCombine code about simplifying "X * select".
   
   The reasons are following:
    a) The "select" is somewhat architecture-dependent, therefore the
       higher level optimizers are not able to precisely predict if
       the simplification really yields any performance improvement
       or not.

    b) The "select" operator is bit complicate, and tends to obscure
       optimization opportunities. It is btter to keep it as low as
       possible in expr tree, and let CodeGen to tackle the optimization.

llvm-svn: 172551
2013-01-15 21:09:32 +00:00
Jakub Staszak
24a461467e Remove trailing spaces.
llvm-svn: 172489
2013-01-14 23:16:36 +00:00
Shuxin Yang
974390c4aa This change is to implement the following rules under the condition C_A and/or C_R
---------------------------------------------------------------------------
 C_A: reassociation is allowed
 C_R: reciprocal of a constant C is appropriate, which means 
    - 1/C is exact, or 
    - reciprocal is allowed and 1/C is neither a special value nor a denormal.
 -----------------------------------------------------------------------------

 rule 1: (X/C1) / C2 => X / (C2*C1)  (if C_A)
                     => X * (1/(C2*C1))  (if C_A && C_R)
 rule 2: X*C1 / C2 => X * (C1/C2)  (if C_A)
 rule 3: (X/Y)/Z => X/(Y*Z)  (if C_A && at least one of Y and Z is a symbolic value)
 rule 4: Z/(X/Y) => (Z*Y)/X  (similar to rule 3)

 rule 5: C1/(X*C2) => (C1/C2) / X  (if C_A)
 rule 6: C1/(X/C2) => (C1*C2) / X  (if C_A)
 rule 7: C1/(C2/X) => (C1/C2) * X  (if C_A)
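As an illustration of rule 3 (placeholder operands, fast-math flags assumed):

  %t = fdiv fast float %x, %y
  %r = fdiv fast float %t, %z

  =>

  %m = fmul fast float %y, %z
  %r = fdiv fast float %x, %m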

llvm-svn: 172488
2013-01-14 22:48:41 +00:00
David Greene
a7b1d4d895 Fix Casting Bug
Add a const version of getFpValPtr to avoid a cast-away-const warning.

llvm-svn: 172467
2013-01-14 21:04:40 +00:00
Nick Lewycky
8ca069daaa Fix typo in comment.
llvm-svn: 172460
2013-01-14 20:56:10 +00:00
Owen Anderson
9e81fe53cf Teach InstCombine to hoist FABS and FNEG through FPTRUNC instructions. The application of these operations commutes with the truncation, so we should prefer to do them in the smallest size we can, to save register space, use smaller constant pool entries, etc.
llvm-svn: 172117
2013-01-10 22:06:52 +00:00
Shuxin Yang
2dc1fb7889 Consider expression "0.0 - X" as the negation of X if
- this expression is explicitly marked no-signed-zero, or
  - no-signed-zero of this expression can be derived from some context.

llvm-svn: 171922
2013-01-09 00:13:41 +00:00
Shuxin Yang
1c3b5e4f89 Cosmetic change in order to conform to coding standards.
Thanks to Eric Christopher for figuring out these problems!

llvm-svn: 171805
2013-01-07 22:41:28 +00:00
Shuxin Yang
9c476471f2 This change is to implement the following rules:
o. X/C1 * C2 => X * (C2/C1) (if C2/C1 is neither special FP nor denormal)
  o. X/C1 * C2 => X/(C1/C2)   (if C2/C1 is either special FP or denormal, but C1/C2 is a normal FP)

     Let MDC denote multiplication or division with one & only one operand being a constant
  o. (MDC ± C1) * C2 => (MDC * C2) ± (C1 * C2)
     (so long as the constant-folding doesn't yield any denormal or special value)
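A possible instance of the first rule with made-up constants (C1 = 2.0, C2 = 10.0, so C2/C1 = 5.0 is a normal FP value):

  %t = fdiv fast float %x, 2.0
  %r = fmul fast float %t, 10.0

  =>

  %r = fmul fast float %x, 5.0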

llvm-svn: 171793
2013-01-07 21:39:23 +00:00
Quentin Colombet
994ef807c2 When code size is the priority (Oz, MinSize attribute), help llvm
turn code like this:

if (foo)
   free(foo)

into that:
free(foo)

Move a call to free from basic block FB into FB's predecessor, P,
when the path from P to FB is taken only if the argument of free is
not equal to NULL.

Some restrictions apply on P and FB to be sure that this code motion
is profitable. Namely:
1. FB must have only one predecessor P.
2. FB must contain only the call to free plus an unconditional
   branch to S.
3. P's successors are FB and S.

Because of 1., we will not increase the code size when moving the call
to free from FB to P.
Because of 2., FB will be empty after the move.
Because of 2. and 3., P's branch instruction becomes useless, as does FB
(simplifycfg will do the job).

llvm-svn: 171762
2013-01-07 18:37:41 +00:00
Chris Lattner
561e5f6442 switch from pointer equality comparison to MDNode::getMostGenericTBAA
when merging two TBAA tags, pointed out by Nuno.

llvm-svn: 171627
2013-01-05 16:44:07 +00:00
Chandler Carruth
4c1f3c24db Move all of the header files which are involved in modelling the LLVM IR
into their new header subdirectory: include/llvm/IR. This matches the
directory structure of lib, and begins to correct a long standing point
of file layout clutter in LLVM.

There are still more header files to move here, but I wanted to handle
them in separate commits to make tracking what files make sense at each
layer easier.

The only really questionable files here are the target intrinsic
tablegen files. But that's a battle I'd rather not fight today.

I've updated both CMake and Makefile build systems (I think, and my
tests think, but I may have missed something).

I've also re-sorted the includes throughout the project. I'll be
committing updates to Clang, DragonEgg, and Polly momentarily.

llvm-svn: 171366
2013-01-02 11:36:10 +00:00
Jakub Staszak
eb568744b7 Add extra CHECK to make sure that 'or' instruction was replaced.
Also add an assert to avoid confusion in the code where it is known that C1 <= C2.

llvm-svn: 171310
2012-12-31 18:26:42 +00:00
Chris Lattner
aea9f04075 teach instcombine to preserve TBAA tag when merging two stores, part of
PR14753

llvm-svn: 171279
2012-12-31 08:10:58 +00:00
Jakub Staszak
24f562a34b Grammo.
llvm-svn: 171272
2012-12-31 01:40:44 +00:00
Bill Wendling
eaa222d341 Remove the getAttributesAtIndex and getNumAttrs methods in favor of using the getAttrSomewhere predicate. This prevents the uses of 'Attribute' as a collection of attributes.
llvm-svn: 171271
2012-12-31 00:49:59 +00:00
Jakub Staszak
000956e03b Transform (A == C1 || A == C2) into (A & ~(C1 ^ C2)) == C1
if C1 and C2 differ only in one bit.
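With illustrative values C1 = 8 and C2 = 12 (they differ only in bit 2, i.e. C1 ^ C2 = 4):

  %c1 = icmp eq i32 %a, 8
  %c2 = icmp eq i32 %a, 12
  %r  = or i1 %c1, %c2

  -->

  %m = and i32 %a, -5        ; -5 is ~(8 ^ 12)
  %r = icmp eq i32 %m, 8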
Fixes PR14708.

llvm-svn: 171270
2012-12-31 00:34:55 +00:00
Nuno Lopes
0873c9d511 convert a bunch of callers from DataLayout::getIndexedOffset() to GEP::accumulateConstantOffset().
The latter API is nicer than the former, and is correct regarding wrap-around offsets (if anyone cares).
There are a few more places left with duplicated code, which I'll remove soon.

llvm-svn: 171259
2012-12-30 16:25:48 +00:00
Nick Lewycky
3aba2e21ea Remove mid-optimizer warning. This situation should be handled differently,
such as by a compiler warning, a check in clang -fsanitizer=undefined, being
optimized to unreachable, or a combination of the above. PR14722.

llvm-svn: 171119
2012-12-26 22:00:35 +00:00
Bob Wilson
94a94e9500 Add LLVMContext::emitWarning methods and use them. <rdar://problem/12867368>
When the backend is used from clang, it should produce proper diagnostics
instead of just printing messages to errs(). Other clients may also want to
register their own error handlers with the LLVMContext, and the same handler
should work for warnings in the same way as the existing emitError methods.

llvm-svn: 171041
2012-12-24 18:15:21 +00:00
Craig Topper
0855d67e26 Remove trailing whitespace
llvm-svn: 170990
2012-12-22 18:09:02 +00:00
Craig Topper
92c3efe257 Formatting fixes. Remove some unnecessary 'else' after 'return'. No functional change.
llvm-svn: 170676
2012-12-20 07:15:54 +00:00
Craig Topper
39d7615441 Removing trailing whitespace
llvm-svn: 170675
2012-12-20 07:09:41 +00:00
Paul Redmond
b0b73c2c50 Transform (x&C)>V into (x&C)!=0 where possible
When the least bit of C is greater than V, (x&C) must be greater than V
if it is not zero, so the comparison can be simplified.
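For example (hypothetical constants C = 8 and V = 3: x & 8 is either 0 or 8, so "> 3" is the same as "!= 0"):

  %and = and i32 %x, 8
  %cmp = icmp ugt i32 %and, 3

  =>

  %and = and i32 %x, 8
  %cmp = icmp ne i32 %and, 0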

Although this was suggested in Target/X86/README.txt, it benefits any
architecture with a directly testable form of AND.

Patch by Kevin Schoedel

llvm-svn: 170576
2012-12-19 19:47:13 +00:00
Bill Wendling
620587ed25 Inline the 'hasIncompatibleWithVarArgsAttrs' method into its only uses. And some minor comment reformatting.
llvm-svn: 170516
2012-12-19 08:57:40 +00:00
Bill Wendling
56d9c4b832 Rename the 'Attributes' class to 'Attribute'. It's going to represent a single attribute in the future.
llvm-svn: 170502
2012-12-19 07:18:57 +00:00
Shuxin Yang
6f5cd3ba7f Make sure the buffer, which contains an instance of APFloat, has proper alignment.
llvm-svn: 170486
2012-12-19 01:10:17 +00:00
Shuxin Yang
712da2e5c6 rdar://12801297
InstCombine for unsafe floating-point add/sub.

llvm-svn: 170471
2012-12-18 23:10:12 +00:00
Michael Ilseman
5cd248cc09 Add back FoldOpIntoPhi optimizations with fix. Included test cases to help catch these errors and to test the presence of the optimization itself
llvm-svn: 170248
2012-12-14 22:08:26 +00:00
Shuxin Yang
fc24901300 rdar://12753946
Implement rule : "x * (select cond 1.0, 0.0) -> select cond x, 0.0"

llvm-svn: 170226
2012-12-14 18:46:06 +00:00
NAKAMURA Takumi
86c489b530 Revert r170020, "Simplify negated bit test", for now.
This assumes (1 << n) is never zero. Consider when n is greater than the word size.
Although I know it is undefined, this transform hides the undefined behavior.

This led to unexpected behavior in clang, with some failures. I will investigate fixing the undefined shl in clang.

llvm-svn: 170128
2012-12-13 14:28:16 +00:00
Eric Christopher
dcebe062a7 Revert "Restore the PHI optimization I accidently removed" temporarily since
it seems to be breaking self-host for a few people and is PR14592.

This reverts commit r170024.

llvm-svn: 170106
2012-12-13 06:48:05 +00:00
Rafael Espindola
89cd622454 Missed these calls from the previous rename somehow.
llvm-svn: 170094
2012-12-13 03:42:31 +00:00
Rafael Espindola
997fcdb78b Rename isPowerOfTwo to isKnownToBeAPowerOfTwo.
In a previous thread it was pointed out that isPowerOfTwo is not a very precise
name since it can return false for powers of two if it is unable to show that
they are powers of two.

llvm-svn: 170093
2012-12-13 03:37:24 +00:00
Michael Ilseman
1654daa5a0 Pattern matching code for intrinsics.
Provides m_Argument that allows matching against a CallSite's specified argument. Provides m_Intrinsic pattern that can be templatized over the intrinsic id and bind/match arguments similarly to other pattern matchers. Implementations provided for 0 to 4 arguments, though it's very simple to extend for more. Also provides example template specialization for bswap (m_BSwap) and example of code cleanup for its use.

llvm-svn: 170091
2012-12-13 03:13:36 +00:00
Chad Rosier
ae6e0d99a8 Typo.
llvm-svn: 170050
2012-12-13 00:18:46 +00:00
Michael Ilseman
6db2e9667a Restore the PHI optimization I accidently removed
llvm-svn: 170024
2012-12-12 20:59:36 +00:00
Michael Ilseman
2641a31f93 Remove trailing whitespace
llvm-svn: 170022
2012-12-12 20:57:53 +00:00
David Majnemer
3abca00876 Simplify negated bit test
llvm-svn: 170020
2012-12-12 20:48:54 +00:00
Rafael Espindola
a77a0b105f The TargetData is not used for the isPowerOfTwo determination. It has never
been used in the first place.  It simply was passed to the function and to the
recursive invocations.  Simply drop the parameter and update the callers for the
new signature.

Patch by Saleem Abdulrasool!

llvm-svn: 169988
2012-12-12 16:52:40 +00:00
Shuxin Yang
0b24765e3f - Fix a problematic way of creating an all-ones APInt.
- Propagate "exact" bit of [l|a]shr instruction.

llvm-svn: 169942
2012-12-12 00:29:03 +00:00
Michael Ilseman
8e297c097a Remove redundant optimizations from InstCombine; instead call the appropriate functions from SimplifyInstruction
llvm-svn: 169941
2012-12-12 00:28:32 +00:00
Jakub Staszak
3375cd11b4 Use m_OneUse pattern instead of hasOneUse() method.
No functionality change.

llvm-svn: 169703
2012-12-09 16:06:44 +00:00
Jakub Staszak
30bae3f07e Remove trailing spaces.
llvm-svn: 169701
2012-12-09 15:37:46 +00:00
Bill Wendling
3f153ce37b s/AttrListPtr/AttributeSet/g to better label what this class is going to be in the near future.
llvm-svn: 169651
2012-12-07 23:16:57 +00:00
Shuxin Yang
c390be6a5d For rdar://12329730, last piece.
This change attempts to simplify (X^Y) -> X or Y in the user's context if we know that
only bits from X or Y are demanded.

  A minimized case is provided below. This change will simplify "t>>16" into "val1 >> 16".

  =============================================================
  unsigned foo (unsigned val1, unsigned val2) {
    unsigned t = val1 ^ 1234;
    return (t >> 16) | t; // NOTE: t is used more than once.
  }
  =============================================================

  Note that if the "t" were used only once, the expression would be finally optimized as well.
However, with with this change, the optimization will take place earlier.

  Reviewed by Nadav, Thanks a lot!

llvm-svn: 169317
2012-12-04 22:15:32 +00:00
Chandler Carruth
a98c778194 Sort includes for all of the .h files under the 'lib' tree. These were
missed in the first pass because the script didn't yet handle include
guards.

Note that the script is now able to handle all of these headers without
manual edits. =]

llvm-svn: 169224
2012-12-04 07:12:27 +00:00
Shuxin Yang
ac685f44b0 rdar://12329730 (2nd part, revised)
The type of shift-right (logical or arithmetic) should remain unchanged
when transforming "X << C1 >> C2" into "X << (C1-C2)"

llvm-svn: 169209
2012-12-04 03:28:32 +00:00
Shuxin Yang
f6948fd368 rdar://12329730 (2nd part)
This change tries to simplify E1 = "X >> C1 << C2" into:
  - E2 = "X << (C2 - C1)" if C2 > C1, or
  - E2 = "X >> (C1 - C2)" if C1 > C2, or
  - E2 = X if C1 == C2.

 Reviewed by Nadav. Thanks!

llvm-svn: 169182
2012-12-04 00:04:54 +00:00
Chandler Carruth
a490793037 Use the new script to sort the includes of every file under lib.
Sooooo many of these had incorrect or strange main module includes.
I have manually inspected all of these, and fixed the main module
include to be the nearest plausible thing I could find. If you own or
care about any of these source files, I encourage you to take some time
and check that these edits were sensible. I can't have broken anything
(I strictly added headers, and reordered them, never removed), but they
may not be the headers you'd really like to identify as containing the
API being implemented.

Many forward declarations and missing includes were added to header
files to allow them to parse cleanly when included first. The main
module rule does in fact have its merits. =]

llvm-svn: 169131
2012-12-03 16:50:05 +00:00
Pedro Artigas
11b0b5d7b3 reversed the logic of the log2 detection routine to reduce the number of nested ifs
llvm-svn: 169049
2012-11-30 22:47:15 +00:00
Pedro Artigas
ba70a53103 Addresses many style issues with prior checkin (r169025)
llvm-svn: 169043
2012-11-30 22:07:05 +00:00
Pedro Artigas
76e7f353e3 Add fast math inst combine X*log2(Y*0.5)-->X*log2(Y)-X
reviewed by Michael Ilseman <milseman@apple.com>

llvm-svn: 169025
2012-11-30 19:09:41 +00:00
Meador Inge
6a05d05854 Move library call simplification statistic to instcombine
The simplify-libcalls pass maintained a statistic to count the number
of library calls that have been simplified.  Now that library call
simplification is being carried out in instcombine the statistic should
be moved to there.

llvm-svn: 168975
2012-11-30 04:05:06 +00:00
Chandler Carruth
15fed97f3e Move the InstVisitor utility into VMCore where it belongs. It heavily
depends on the IR infrastructure, there is no sense in it being off in
Support land.

This is in preparation to start working to expand InstVisitor into more
special-purpose visitors that are still generic and can be re-used
across different passes. The expansion will go into the Analysis tree
though as nothing in VMCore needs it.

llvm-svn: 168972
2012-11-30 03:08:41 +00:00
Meador Inge
4275530cf4 instcombine: Don't replace all uses for instructions with no uses
My commit to migrate the printf simplifiers from the simplify-libcalls
in r168604 introduced a regression reported by Duncan [1].  The problem
is that in some cases the library call simplifier can return a new value
that has no uses and the new value's type is different than the old value's
type (which is fine because there are no uses).  The specific case that
triggered the bug looked something like:

   declare void @printf(i8*, ...)
   ...
   call void (i8*, ...)* @printf(i8* %fmt)

Which we want to optimized into:

   call i32 @putchar(i32 104)

However, the code was attempting to replace all uses of the printf with
the putchar and the types differ, hence a crash.  This is fixed by *just*
deleting the original instruction when there are no uses.  The old
simplify-libcalls pass is already doing something similar.

[1] http://lists.cs.uiuc.edu/pipermail/llvmdev/2012-November/056338.html

llvm-svn: 168716
2012-11-27 18:52:49 +00:00
Eli Friedman
dd1df015f4 Get rid of the getPointeeAlignment helper function from
InstCombineLoadStoreAlloca.cpp, which had many issues.
(At least two bugs were noted on llvm-commits, and it was overly conservative.)
Instead, use getOrEnforceKnownAlignment.

llvm-svn: 168629
2012-11-26 23:04:53 +00:00
Shuxin Yang
38196f1ebf rdar://12329730 (defect 2)
Enhancement to InstCombine. Try to catch this opportunity:
  
 ---------------------------------------------------------------
 ((X^C1) >> C2) ^ C3  => (X>>C2) ^ ((C1>>C2)^C3)
  where the subexpression "X ^ C1" has more than one use, and
  "(X^C1) >> C2" has single use. 
 ---------------------------------------------------------------- 

 Reviewed by Nadav (with minor change per his request).

llvm-svn: 168615
2012-11-26 21:44:25 +00:00
Bill Wendling
62a846033f Make the AttrListPtr object a part of the LLVMContext.
When code deletes the context, the AttributeImpls that the AttrListPtr points to
are now invalid. Therefore, instead of keeping a separate managed static for the
AttrListPtrs that's reference counted, move it into the LLVMContext and delete
it when deleting the AttributeImpls.

llvm-svn: 168354
2012-11-20 05:09:20 +00:00
Nick Lewycky
570e765264 Don't try to calculate the alignment of an unsized type. Fixes PR14371!
llvm-svn: 168280
2012-11-18 05:39:39 +00:00
Duncan Sands
e52f0de8bb Make this easier to understand, as suggested by Chandler.
llvm-svn: 168196
2012-11-16 20:53:08 +00:00
Duncan Sands
86ca3afe2e Fix PR14361: wrong simplification of A+B==B+A. You may think that the old logic
replaced by this patch is equivalent to the new logic, but you'd be wrong, and
that's exactly where the bug was.  There's a similar bug in instsimplify which
manifests itself as instsimplify failing to simplify this, rather than doing it
wrong, see next commit.

llvm-svn: 168181
2012-11-16 18:55:49 +00:00
NAKAMURA Takumi
349861530f InstCombineAndOrXor.cpp: Escape bracket in doxygen description. [-Wdocumentation]
llvm-svn: 168013
2012-11-15 00:35:50 +00:00
Meador Inge
a191db7d99 instcombine: Migrate math library call simplifications
This patch migrates the math library call simplifications from the
simplify-libcalls pass into the instcombine library call simplifier.

I have typically migrated just one simplifier at a time, but the math
simplifiers are interdependent because:

   1. CosOpt, PowOpt, and Exp2Opt all depend on UnaryDoubleFPOpt.
   2. CosOpt, PowOpt, Exp2Opt, and UnaryDoubleFPOpt all depend on
      the option -enable-double-float-shrink.

These two factors made migrating each of these simplifiers individually
more of a pain than it would be worth.  So, I migrated them all together.

llvm-svn: 167815
2012-11-13 04:16:17 +00:00
Meador Inge
9ad3990ff0 Add method for replacing instructions to LibCallSimplifier
In some cases the library call simplifier may need to replace instructions
other than the library call being simplified.  In those cases it may be
necessary for clients of the simplifier to override how the replacements
are actually done.  As such, a new overrideable method for replacing
instructions was added to LibCallSimplifier.

A new subclass of LibCallSimplifier is also defined which overrides
the instruction replacement method.  This is because the instruction
combiner defines its own replacement method which updates the worklist
when instructions are replaced.

llvm-svn: 167681
2012-11-11 03:51:43 +00:00
Duncan Sands
626552af21 Generalize the transform that boosts GEP indices to the size of a pointer to
also do it for vectors of pointers.

llvm-svn: 167354
2012-11-03 11:44:17 +00:00
Chandler Carruth
0a6b99ee2b Revert the majority of the next patch in the address space series:
r165941: Resubmit the changes to llvm core to update the functions to
         support different pointer sizes on a per address space basis.

Despite this commit log, this change primarily changed stuff outside of
VMCore, and those changes do not carry any tests for correctness (or
even plausibility), and we have consistently found questionable or flat
out incorrect cases in these changes. Most of them are probably correct,
but we need to devise a system that makes it more clear when we have
handled the address space concerns correctly, and ideally each pass that
gets updated would receive an accompanying test case that exercises that
pass specifically w.r.t. alternate address spaces.

However, from this commit, I have retained the new C API entry points.
Those were an orthogonal change that probably should have been split
apart, but they seem entirely good.

In several places the changes were very obvious cleanups with no actual
multiple address space code added; these I have not reverted when
I spotted them.

In a few other places there were merge conflicts due to a cleaner
solution being implemented later, often not using address spaces at all.
In those cases, I've preserved the new code which isn't address space
dependent.

This is part of my ongoing effort to clean out the partial address space
code which carries high risk and low test coverage, and not likely to be
finished before the 3.2 release looms closer. Duncan and I would both
like to see the above issues addressed before we return to these
changes.

llvm-svn: 167222
2012-11-01 09:14:31 +00:00
Chandler Carruth
76f7f4a33e Revert the series of commits starting with r166578 which introduced the
getIntPtrType support for multiple address spaces via a pointer type,
and also introduced a crasher bug in the constant folder reported in
PR14233.

These commits also contained several problems that should really be
addressed before they are re-committed. I have avoided reverting various
cleanups to the DataLayout APIs that are reasonable to have moving
forward in order to reduce the amount of churn, and minimize the number
of commits that were reverted. I've also manually updated merge
conflicts and manually arranged for the getIntPtrType function to stay
in DataLayout and to be defined in a plausible way after this revert.

Thanks to Duncan for working through this exact strategy with me, and
Nick Lewycky for tracking down the really annoying crasher this
triggered. (Test case to follow in its own commit.)

After discussing with Duncan extensively, and based on a note from
Micah, I'm going to continue to back out some more of the more
problematic patches in this series in order to ensure we go into the
LLVM 3.2 branch with a reasonable story here. I'll send a note to
llvmdev explaining what's going on and why.

Summary of reverted revisions:

r166634: Fix a compiler warning with an unused variable.
r166607: Add some cleanup to the DataLayout changes requested by
         Chandler.
r166596: Revert "Back out r166591, not sure why this made it through
         since I cancelled the command. Bleh, sorry about this!
r166591: Delete a directory that wasn't supposed to be checked in yet.
r166578: Add in support for getIntPtrType to get the pointer type based
         on the address space.
llvm-svn: 167221
2012-11-01 08:07:29 +00:00
Duncan Sands
bce56286fb Fix isEliminableCastPair to work correctly in the presence of pointers
with different sizes.

llvm-svn: 167018
2012-10-30 16:03:32 +00:00
Ulrich Weigand
2df331332d Enable some additional constant folding for PPCDoubleDouble.
This fixes Clang :: CodeGen/complex-builtins.c on PowerPC.

llvm-svn: 167013
2012-10-30 12:33:18 +00:00
Micah Villmow
7c7b8259bc Add some cleanup to the DataLayout changes requested by Chandler.
llvm-svn: 166607
2012-10-24 18:36:13 +00:00
Micah Villmow
521311700f Add in support for getIntPtrType to get the pointer type based on the address space.
This checkin also adds in some tests that utilize these paths and updates some of the
clients.

llvm-svn: 166578
2012-10-24 15:52:52 +00:00
Duncan Sands
0021b8d8fb Fix typo that somehow escaped both testing and code inspection.
llvm-svn: 166475
2012-10-23 09:07:02 +00:00
Duncan Sands
6ce2ce7ed1 Transform code like this
%V = mul i64 %N, 4
 %t = getelementptr i8* bitcast (i32* %arr to i8*), i32 %V
into
 %t1 = getelementptr i32* %arr, i32 %N
 %t = bitcast i32* %t1 to i8*
incorporating the multiplication into the getelementptr.
This happens all the time in dragonegg, for example for
  int foo(int *A, int N) {
    return A[N];
  }
because gcc turns this into byte pointer arithmetic before it hits the plugin:
  D.1590_2 = (long unsigned int) N_1(D);
  D.1591_3 = D.1590_2 * 4;
  D.1592_5 = A_4(D) + D.1591_3;
  D.1589_6 = *D.1592_5;
  return D.1589_6;
The D.1592_5 line is a POINTER_PLUS_EXPR, which is turned into a getelementptr
on a bitcast of A_4 to i8*, so this becomes exactly the kind of IR that the
transform fires on.

An analogous transform (with no testcases!) already existed for bitcasts of
arrays, so I rewrote it to share code with this one.

llvm-svn: 166474
2012-10-23 08:28:26 +00:00
Benjamin Kramer
f3ac3fa22b InstCombine: Fix an edge case where constant icmps could sneak into ConstantFoldInstOperands and crash.
Have to refactor the ConstantFolder interface one day to define bugs like this away. Fixes PR14131.

llvm-svn: 166374
2012-10-20 08:43:52 +00:00
Michael Gottesman
b9205da3c6 [InstCombine] Teach InstCombine how to handle an obfuscated splat.
An obfuscated splat is where the frontend poorly generates code for a splat
using several different shuffles to create the splat, i.e.,

  %A = load <4 x float>* %in_ptr, align 16
  %B = shufflevector <4 x float> %A, <4 x float> undef, <4 x i32> <i32 0, i32 0, i32 undef, i32 undef>
  %C = shufflevector <4 x float> %B, <4 x float> %A, <4 x i32> <i32 0, i32 1, i32 4, i32 undef>
  %D = shufflevector <4 x float> %C, <4 x float> %A, <4 x i32> <i32 0, i32 1, i32 2, i32 4>

llvm-svn: 166061
2012-10-16 21:29:38 +00:00
Bill Wendling
7a89835ee4 Move the Attributes::Builder outside of the Attributes class and into its own class named AttrBuilder. No functionality change.
llvm-svn: 165960
2012-10-15 20:35:56 +00:00
Micah Villmow
272663afc2 Resubmit the changes to llvm core to update the functions to support different pointer sizes on a per address space basis.
llvm-svn: 165941
2012-10-15 16:24:29 +00:00
Bill Wendling
f2fff93263 Add an enum for the return and function indexes into the AttrListPtr object. This gets rid of some magic numbers.
llvm-svn: 165924
2012-10-15 07:29:08 +00:00
Bill Wendling
a77399599d Attributes Rewrite
Convert the internal representation of the Attributes class into a pointer to an
opaque object that's uniqued by and stored in the LLVMContext object. The
Attributes class then becomes a thin wrapper around this opaque
object. Eventually, the internal representation will be expanded to include
attributes that represent code generation options, etc.

llvm-svn: 165917
2012-10-15 04:46:55 +00:00
Bill Wendling
ea777202df Remove operator cast method in favor of querying with the correct method.
llvm-svn: 165899
2012-10-14 08:54:26 +00:00
Bill Wendling
7005388897 Remove the bitwise AND operators from the Attributes class. Replace it with the equivalent from the builder class.
llvm-svn: 165896
2012-10-14 07:52:48 +00:00
Meador Inge
0d4d368820 Implement new LibCallSimplifier class
This patch implements the new LibCallSimplifier class as outlined in [1].
In addition to providing the new base library simplification infrastructure,
all the fortified library call simplifications were moved over to the new
infrastructure.  The rest of the library simplification optimizations will
be moved over with follow up patches.

NOTE: The original fortified library call simplifier located in the
SimplifyFortifiedLibCalls class was not removed because it is still
used by CodeGenPrepare.  This class will eventually go away too.

[1] http://lists.cs.uiuc.edu/pipermail/llvmdev/2012-August/052283.html

llvm-svn: 165873
2012-10-13 16:45:24 +00:00
Micah Villmow
4eb108750d Revert 165732 for further review.
llvm-svn: 165747
2012-10-11 21:27:41 +00:00
Micah Villmow
d8b76fdc50 Add in the first iteration of support for llvm/clang/lldb to allow variable per address space pointer sizes to be optimized correctly.
llvm-svn: 165726
2012-10-11 17:21:41 +00:00
Nick Lewycky
211cb06fc3 Don't crash when !tbaa.struct contents is invalid.
llvm-svn: 165693
2012-10-11 02:05:23 +00:00
Bill Wendling
b53357de39 Create enums for the different attributes.
We use the enums to query whether an Attributes object has that attribute. The
opaque layer is responsible for knowing where that specific attribute is stored.

llvm-svn: 165488
2012-10-09 07:45:08 +00:00
Bill Wendling
c00f4a514d Convert to using the Attributes::Builder interface.
llvm-svn: 165465
2012-10-09 00:01:21 +00:00
Micah Villmow
bb1a25cd67 Move TargetData to DataLayout.
llvm-svn: 165402
2012-10-08 16:38:25 +00:00
Nick Lewycky
689f61680b Surprisingly, we missed a trivial case here. Fix that!
llvm-svn: 164814
2012-09-28 09:33:53 +00:00
Sylvestre Ledru
b77340e506 Revert 'Fix a typo 'iff' => 'if''. iff is an abbreviation of if and only if. See: http://en.wikipedia.org/wiki/If_and_only_if Commit 164767
llvm-svn: 164768
2012-09-27 10:14:43 +00:00
Sylvestre Ledru
1c5e7904de Fix a typo 'iff' => 'if'
llvm-svn: 164767
2012-09-27 09:59:43 +00:00
Nick Lewycky
9ae46f5c91 Prefer shuffles to selects. Backends love shuffles!
llvm-svn: 164763
2012-09-27 08:33:56 +00:00
Bill Wendling
14c8072a79 Move Attribute::typeIncompatible inside of the Attributes class.
llvm-svn: 164629
2012-09-25 20:38:59 +00:00
Richard Osborne
9be8a6db61 Add missing check for presence of target data.
This avoids a crash in visitAllocaInst when target data isn't available.

llvm-svn: 164539
2012-09-24 17:10:03 +00:00
Benjamin Kramer
3083ebd4de InstCombine: Make sure we use the pre-zext type when creating a constant of a value that is zext'd.
Fixes PR13250.

llvm-svn: 164377
2012-09-21 16:26:41 +00:00
Richard Osborne
552f578018 Fix instcombine to obey requested alignment when merging allocas.
llvm-svn: 164117
2012-09-18 09:31:44 +00:00
Craig Topper
95869a202b Use LLVM_DELETED_FUNCTION in place of 'DO NOT IMPLEMENT' comments.
llvm-svn: 163974
2012-09-15 17:09:36 +00:00
Dan Gohman
3abf8239e6 Handle the new !tbaa.struct metadata tags when converting a memcpy into scalar
loads and stores.

llvm-svn: 163844
2012-09-13 21:51:01 +00:00
Dan Gohman
06b29049c2 Extract code for reducing a type to a single value type into a helper function.
llvm-svn: 163817
2012-09-13 18:19:06 +00:00
Benjamin Kramer
e78ea6f9cb InstCombine: Fix comment to reflect the code.
llvm-svn: 162911
2012-08-30 15:07:40 +00:00
Nadav Rotem
5848719c42 It is illegal to transform (sdiv (ashr X c1) c2) -> (sdiv x (2^c1 * c2)),
because C always rounds towards zero.

Thanks Dirk and Ben.

llvm-svn: 162899
2012-08-30 11:23:20 +00:00
Benjamin Kramer
b92d13cc42 Make MemoryBuiltins aware of TargetLibraryInfo.
This disables malloc-specific optimization when -fno-builtin (or -ffreestanding)
is specified. This has been a problem for a long time but became more severe
with the recent memory builtin improvements.

Since the memory builtin functions are used everywhere, this required passing
TLI in many places. This means that functions that now have an optional TLI
argument, like RecursivelyDeleteTriviallyDeadFunctions, won't remove dead
mallocs anymore if the TLI argument is missing. I've updated most passes to do
the right thing.

Fixes PR13694 and probably others.

llvm-svn: 162841
2012-08-29 15:32:21 +00:00
Benjamin Kramer
52e6ecf819 InstCombine: Defensively avoid undefined shifts by limiting the amount to the bit width.
No test case, undefined shifts get folded early, but can occur when other
transforms generate a constant. Thanks to Duncan for bringing this up.

llvm-svn: 162755
2012-08-28 13:59:23 +00:00
Benjamin Kramer
bc139a63fc InstCombine: Guard the transform introduced in r162743 against large ints and non-const shifts.
llvm-svn: 162751
2012-08-28 13:08:13 +00:00
Nadav Rotem
d0cd39e7c9 Make sure that we don't call getZExtValue on values > 64 bits.
Thanks Benjamin for noticing this.

llvm-svn: 162749
2012-08-28 12:23:22 +00:00
Nadav Rotem
9582c96aef Teach InstCombine to canonicalize [SU]div+[AL]shl patterns.
For example:
  %1 = lshr i32 %x, 2
  %2 = udiv i32 %1, 100
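which, on my reading of the canonicalization (the message does not spell out the result), would presumably become a single division:

  %2 = udiv i32 %x, 400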

rdar://12182093

llvm-svn: 162743
2012-08-28 10:01:43 +00:00
Chandler Carruth
b82f8d4af5 Port the global copy optimization from the SROA pass to InstCombine.
This optimization is really just replacing allocas wholesale with
globals, there is no scalarization.

The underlying motivation for this patch is to simplify the SROA pass
and focus it on splitting and promoting allocas.

llvm-svn: 162271
2012-08-21 08:39:44 +00:00
Benjamin Kramer
54d9d1a993 InstCombine: Fix a crasher when encountering a function pointer.
llvm-svn: 162180
2012-08-18 22:04:34 +00:00
Benjamin Kramer
f79b68f912 Remove overly conservative hasOneUse check, this always expands into a single IR instruction.
llvm-svn: 162175
2012-08-18 20:24:19 +00:00
Benjamin Kramer
1a05d12328 InstCombine: Add a couple of fabs identities for comparing with 0.0.
llvm-svn: 162174
2012-08-18 20:06:47 +00:00
Michael Liao
cd290ba4fd fix infinite loop in instcombine with more than 4GB memcpy
- memcpy size is wrongly truncated to 32 bits, treating an 8GB memcpy as a
  0-sized memcpy
- as 0-sized memcpy/memset is already removed before SimplifyMemTransfer
  and SimplifyMemSet in visitCallInst, replace 0 checking with
  assertions.
- replace getZExtValue() with getLimitedValue() according to
  Eli Friedman

llvm-svn: 161923
2012-08-15 03:49:59 +00:00
Bob Wilson
51c50d44b7 Fix a serious typo in InstCombine's optimization of comparisons.
An unsigned value converted to floating-point will always be greater than
a negative constant.  Unfortunately InstCombine reversed the check so that
unsigned values were being optimized to always be greater than all positive
floating-point constants.  <rdar://problem/12029145>

llvm-svn: 161452
2012-08-07 22:35:16 +00:00
Nuno Lopes
7ec9936cb2 fix infinite loop in instcombine in the presence of a (malformed) self-referencing select inst.
This can happen as long as the instruction is not reachable. Instcombine does generate these unreachable, malformed selects when doing RAUW.

llvm-svn: 160874
2012-07-27 18:03:57 +00:00
Pete Cooper
ddb89a91ca Simplify demanded bits of select sources where the condition is a constant vector
llvm-svn: 160835
2012-07-26 23:10:24 +00:00
Pete Cooper
8d971d19cb Teach SimplifyDemandedBits how to look through fpext and fptrunc to simplify their operand
llvm-svn: 160823
2012-07-26 22:37:04 +00:00
Nuno Lopes
537a3395e5 make all Emit*() functions consult the TargetLibraryInfo information before creating a call to a library function.
Update all clients to pass the TLI information around.
Previous draft reviewed by Eli.

llvm-svn: 160733
2012-07-25 16:46:31 +00:00
Bill Wendling
17b12b72bc Remove tabs.
llvm-svn: 160477
2012-07-19 00:11:40 +00:00
Evan Cheng
5e82ad04d5 Back out r160101 and instead implement a dag combine to recover from instcombine transformation.
llvm-svn: 160387
2012-07-17 18:54:11 +00:00
Evan Cheng
e6c5349fcd Instcombine was transforming:
%shr = lshr i64 %key, 3
  %0 = load i64* %val, align 8
  %sub = add i64 %0, -1
  %and = and i64 %sub, %shr
  ret i64 %and

to:
  %shr = lshr i64 %key, 3
  %0 = load i64* %val, align 8
  %sub = add i64 %0, 2305843009213693951
  %and = and i64 %sub, %shr
  ret i64 %and

The demanded bit optimization is actually a pessimization because add -1 would
be codegen'ed as a sub 1. Teach the demanded constant shrinking optimization
to check for negated constant to make sure it is actually reducing the width
of the constant.

rdar://11793464

llvm-svn: 160101
2012-07-12 01:45:35 +00:00
Nuno Lopes
c676931bb9 instcombine: merge the functions that remove dead allocas and dead mallocs/callocs/...
This patch removes ~70 lines in InstCombineLoadStoreAlloca.cpp and makes both functions a bit more aggressive than before :)
In theory, we can be more aggressive when removing an alloca than a malloc, because an alloca pointer should never escape, but we are not taking advantage of this anyway

llvm-svn: 159952
2012-07-09 18:38:20 +00:00
Nuno Lopes
f3ba9a4d21 teach instcombine to remove allocated buffers even if there are stores, memcpy/memmove/memset, and objectsize users.
This means we can do cheap DSE for heap memory.
Nothing is done if the pointer escapes or has a load.

The churn in the tests is mostly due to objectsize, since we want to make sure we
don't delete the malloc call before evaluating the objectsize (otherwise it becomes -1/0)

llvm-svn: 159876
2012-07-06 23:09:25 +00:00
Chandler Carruth
4b51f99c87 Move llvm/Support/IRBuilder.h -> llvm/IRBuilder.h
This was always part of the VMCore library out of necessity -- it deals
entirely in the IR. The .cpp file in fact was already part of the VMCore
library. This is just a mechanical move.

I've tried to go through and re-apply the coding standard's preferred
header sort, but at 40-ish files, I may have gotten some wrong. Please
let me know if so.

I'll be committing the corresponding updates to Clang and Polly, and
Duncan has DragonEgg.

Thanks to Bill and Eric for giving the green light for this bit of cleanup.

llvm-svn: 159421
2012-06-29 12:38:19 +00:00
Nuno Lopes
b0d4abe297 make instcombine produce calls to llvm.donothing instead of a random intrinsic
llvm-svn: 159384
2012-06-28 22:31:24 +00:00
Evan Cheng
9132bcf0e3 Remove an instcombine transform that (no longer?) makes sense:
// C - zext(bool) -> bool ? C - 1 : C
    if (ZExtInst *ZI = dyn_cast<ZExtInst>(Op1))
      if (ZI->getSrcTy()->isIntegerTy(1))
        return SelectInst::Create(ZI->getOperand(0), SubOne(C), C);

This ends up forming sext i1 instructions that codegen to terrible code. e.g.
int blah(_Bool x, _Bool y) {
  return (x - y) + 1;
}
=>
        movzbl  %dil, %eax
        movzbl  %sil, %ecx
        shll    $31, %ecx
        sarl    $31, %ecx
        leal    1(%rax,%rcx), %eax
        ret


Without the rule, llvm now generates:
        movzbl  %sil, %ecx
        movzbl  %dil, %eax
        incl    %eax
        subl    %ecx, %eax
        ret

It also helps with ARM (and pretty much any target that doesn't have a sext i1 :-).

The transformation was done as part of Eli's r75531. He has given the ok to
remove it.

rdar://11748024

llvm-svn: 159230
2012-06-26 22:03:13 +00:00
Duncan Sands
1770ae1ae4 Replacing zero-sized alloca's with a null pointer is too aggressive, instead
merge all zero-sized alloca's into one, fixing c43204g from the Ada ACATS
conformance testsuite.  What happened there was that a variable sized object
was being allocated on the stack, "alloca i8, i32 %size".  It was then being
passed to another function, which tested that the address was not null (raising
an exception if it was) then manipulated %size bytes in it (load and/or store).
The optimizers cleverly managed to deduce that %size was zero (congratulations
to them, as it isn't at all obvious), which made the alloca zero size, causing
the optimizers to replace it with null, which then caused the check mentioned
above to fail, and the exception to be raised, wrongly.  Note that no loads
and stores were actually being done to the alloca (the loop that does them is
executed %size times, i.e. is not executed), only the not-null address check.

llvm-svn: 159202
2012-06-26 13:39:21 +00:00
Nuno Lopes
165c99b53d improve optimization of invoke instructions:
- simplifycfg:  invoke undef/null -> unreachable
 - instcombine:  invoke new  -> invoke expect(0, 0)  (an arbitrary NOOP intrinsic;  only done if the allocated memory is unused, of course)
 - verifier:  allow invoke of intrinsics  (to make the previous step work)

llvm-svn: 159146
2012-06-25 17:11:47 +00:00
NAKAMURA Takumi
4599dee67a llvm/lib: [CMake] Add explicit dependency on intrinsics_gen.
llvm-svn: 159112
2012-06-24 13:32:01 +00:00
Jakob Stoklund Olesen
c970d61f6d Revert remaining part of r93200: "Disable folding sext(trunc(x)) -> x"
This fixes PR5997.

These transforms were disabled because codegen couldn't deal with other
uses of trunc(x). This is now handled by the peephole pass.

This causes no regressions on x86-64.

llvm-svn: 159003
2012-06-22 16:36:43 +00:00
Nuno Lopes
009e7f08aa instcombine: disable optimization of 'invoke null/undef'. I'll move this functionality to SimplifyCFG (since we cannot make changes to the CFG here).
Fixes the crashes with the attached test case

llvm-svn: 158951
2012-06-21 23:52:14 +00:00
Evan Cheng
404624ee4d Look past zext to strength-reduce a udiv. Patch by David Majnemer. rdar://11721329
llvm-svn: 158946
2012-06-21 22:52:49 +00:00
Nuno Lopes
8baf9fdf84 Add support for invoke to the MemoryBuiltin analysis.
Update comments accordingly.

Make instcombine remove useless invokes to C++'s 'new' allocation function (test attached).

llvm-svn: 158937
2012-06-21 21:25:05 +00:00
Nuno Lopes
c9edab11db refactor the MemoryBuiltin analysis:
- provide a more extensive set of functions to detect library allocation functions (e.g., malloc, calloc, strdup, etc.)
 - provide an API to compute the size and offset of an object pointed to by a pointer

Move a few clients (GVN, AA, instcombine, ...) to the new API.
This implementation is a lot more aggressive than each of the custom implementations being replaced.

Patch reviewed by Nick Lewycky and Chandler Carruth, thanks.

llvm-svn: 158919
2012-06-21 15:45:28 +00:00
Nuno Lopes
af699605ac replace usage of EmitGEPOffset() with TargetData::getIndexedOffset() when the GEP offset is known to be constant.
With this change, we avoid relying on the IR Builder to constant fold the operations.

No functionality change intended.

llvm-svn: 158829
2012-06-20 17:30:51 +00:00
Manman Ren
e3471c0bdf InstCombine: fix a bug when combining (fcmp cc0 x, y) && (fcmp cc1 x, y).
uno && ueq was converted to ueq, it should be converted to uno.

llvm-svn: 158441
2012-06-14 05:57:42 +00:00
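A quick illustrative C++ check of the fixed folding, using hypothetical helpers that mirror the fcmp predicates: "uno && ueq" must simplify to "uno", not "ueq":

#include <cassert>
#include <cmath>

static bool uno(double a, double b) { return std::isnan(a) || std::isnan(b); }
static bool ueq(double a, double b) { return uno(a, b) || a == b; }

int main() {
  const double vals[] = {0.0, 1.0, std::nan("")};
  for (double a : vals)
    for (double b : vals)
      assert((uno(a, b) && ueq(a, b)) == uno(a, b)); // conjunction folds to uno
  return 0;
}
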
Benjamin Kramer
f350a319b9 InstCombine: factor code better.
No functionality change.

llvm-svn: 158301
2012-06-11 08:01:25 +00:00
Benjamin Kramer
14e8b5eac3 InstCombine: Turn (zext A) == (B & (1<<X)-1) into A == (trunc B), narrowing the compare.
This saves a cast, and zext is more expensive on platforms with subreg support
than trunc is. This occurs in the BSD implementation of memchr(3), see PR12750.
On the synthetic benchmark from that bug stupid_memchr and bsd_memchr have the
same performance now when not inlining either function.

stupid_memchr: 323.0us
bsd_memchr: 321.0us
memchr: 479.0us

where memchr is the llvm-gcc compiled bsd_memchr from osx lion's libc. When
inlining is enabled bsd_memchr still regresses down to llvm-gcc memchr time,
I haven't fully understood the issue yet, something is grossly mangling the
loop after inlining.

llvm-svn: 158297
2012-06-10 20:35:00 +00:00
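A source-level illustration of the transform, assuming an 8-bit A and a 32-bit B (function names made up); both forms compute the same result, but the second avoids the zext and compares in the narrow type:

#include <cstdint>

bool before(uint8_t A, uint32_t B) {
  return static_cast<uint32_t>(A) == (B & 0xFFu); // (zext A) == (B & 0xFF)
}

bool after(uint8_t A, uint32_t B) {
  return A == static_cast<uint8_t>(B);            // A == (trunc B)
}
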
Nuno Lopes
4485a55890 canonicalize:
-%a + 42
into
42 - %a

previously we were emitting:
-(%a + 42)

This fixes the infinite loop in PR12338. The generated code is still not perfect, though.
Will work on that next

llvm-svn: 158237
2012-06-08 22:30:05 +00:00
Nadav Rotem
e3db9cf2fd Fix a bug in FoldSelectOpOp. Bitcast ops may change the number of vector elements, which may disagree with the select condition type.
llvm-svn: 158166
2012-06-07 20:28:57 +00:00
Chad Rosier
fb2fc059af Fix combine of uno && ord -> false so that the ordering of the fcmps doesn't
matter.
rdar://11579835

llvm-svn: 158084
2012-06-06 17:22:40 +00:00
Benjamin Kramer
790d8456b5 Fix suspicious hasOneUse() check, found by PVS-Studio (PR12357).
llvm-svn: 157592
2012-05-28 20:52:48 +00:00
Benjamin Kramer
f5a7a0dcf1 InstCombine: Fix infinite loop when encountering switch on trivial icmp.
The test case feeds the following into InstCombine's visitSelect:
%tobool8 = icmp ne i32 0, 0
%phitmp = select i1 %tobool8, i32 3, i32 0
Then instcombine replaces the right side of the switch with 0, doesn't notice
that nothing changes and tries again indefinitely.

This fixes PR12897.

llvm-svn: 157587
2012-05-28 19:18:16 +00:00
Chris Lattner
afebb9fced switch AttrListPtr::get to take an ArrayRef, simplifying a lot of clients.
llvm-svn: 157556
2012-05-28 01:47:44 +00:00
Benjamin Kramer
1e46062a2b PR12967: Don't crash when trying to fold a shift that's larger than the type's size.
llvm-svn: 157548
2012-05-27 22:03:32 +00:00
Nuno Lopes
114b8eaa9c add a new pass to instrument loads and stores for run-time bounds checking
move EmitGEPOffset from InstCombine to Transforms/Utils/Local.h

(a draft of this) patch reviewed by Andrew, thanks.

llvm-svn: 157261
2012-05-22 17:19:09 +00:00
Nuno Lopes
944814b41a revert my previous patches that introduced an additional parameter to the objectsize intrinsic.
After a lot of discussion, we realized it's not the best option for run-time bounds checking

llvm-svn: 157255
2012-05-22 15:25:31 +00:00
Nuno Lopes
11d6ecb6db objectsize: add a few more tests and fix a bug
llvm-svn: 156625
2012-05-11 18:25:29 +00:00
Eli Friedman
1746bfc50e Fix a minor logic mistake transforming compares in instcombine. PR12514.
llvm-svn: 156600
2012-05-11 01:32:59 +00:00
Nuno Lopes
415911a5c7 objectsize: add support for GEPs with non-constant indexes
add an additional parameter to InstCombiner::EmitGEPOffset() to force it to *not* emit operations with NUW flag

llvm-svn: 156585
2012-05-10 23:17:35 +00:00
Nuno Lopes
3d7a8137ee objectsize:
refactor code a bit to enable future changes to support run-time information
add support to compute allocation sizes at run-time if penalty > 1 (e.g., malloc(x), calloc(x, y), and VLAs)

llvm-svn: 156515
2012-05-09 21:30:57 +00:00
Jakub Staszak
b3bddb41cb Remove trailing spaces.
llvm-svn: 156257
2012-05-06 13:52:31 +00:00
Stepan Dyatkovskiy
469935e0ae Small fix in InstCombineCasts.cpp. Restored the "alloca + bitcast" reduction for the case when the alloca's size is calculated with "add/sub/... nsw".
Also added a fix to the 2011-06-13-nsw-alloca.ll test.

llvm-svn: 156231
2012-05-05 07:09:40 +00:00
Nuno Lopes
2762496a1a remove calls to calloc if the allocated memory is not used (it was already being done for malloc)
fix a few typos found by Chad in my previous commit

llvm-svn: 156110
2012-05-03 22:08:19 +00:00
Nuno Lopes
26239aeb99 add support for calloc to objectsize lowering
llvm-svn: 156102
2012-05-03 21:19:58 +00:00
Nuno Lopes
fe63eee05b replace 'break's with 'return 0' in visitCallInst code for objectsize, since there is no need to fall back to visitCallSite.
This gives a 0.9% improvement in a test case

llvm-svn: 156069
2012-05-03 16:06:07 +00:00
Lang Hames
26d71c9d0a Add support for llvm.arm.neon.vmull* intrinsics to InstCombine. Fixes
<rdar://problem/11291436>.

This is a second attempt at a fix for this, the first was r155468. Thanks
to Chandler, Bob and others for the feedback that helped me improve this.

llvm-svn: 155866
2012-05-01 00:20:38 +00:00
Chad Rosier
f3d4646377 Add instcombine patterns for the following transformations:
(x & y) | (x ^ y) -> x | y 
 (x & y) + (x ^ y) -> x | y 

Patch by Manman Ren.
rdar://10770603

llvm-svn: 155674
2012-04-26 23:29:14 +00:00
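A small, self-contained C++ sanity check of the two identities above: the AND and XOR of the same operands never share a set bit, so their OR (or their sum) equals x | y:

#include <cassert>
#include <cstdint>

int main() {
  for (uint32_t x = 0; x < 256; ++x)
    for (uint32_t y = 0; y < 256; ++y) {
      assert(((x & y) | (x ^ y)) == (x | y)); // (x & y) | (x ^ y) -> x | y
      assert(((x & y) + (x ^ y)) == (x | y)); // (x & y) + (x ^ y) -> x | y
    }
  return 0;
}
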
Lang Hames
7f69fbca29 Reverting r155468. Chris and Chandler have convinced me that it's dangerous and
in poor taste.

Talking through some alternate solutions with Chandler.

llvm-svn: 155530
2012-04-25 02:16:54 +00:00
Lang Hames
08eb5f2340 Add support for llvm.arm.neon.vmull* intrinsics to InstCombine. This fixes
<rdar://problem/11291436>.

llvm-svn: 155468
2012-04-24 18:58:36 +00:00
Jakob Stoklund Olesen
6c1440cf27 Reapply r155136 after fixing PR12599.
Original commit message:

Defer some shl transforms to DAGCombine.

The shl instruction is used to represent multiplication by a constant
power of two as well as bitwise left shifts. Some InstCombine
transformations would turn an shl instruction into a bit mask operation,
making it difficult for later analysis passes to recognize the
constant multiplication.

Disable those shl transformations, deferring them to DAGCombine time.
An 'shl X, C' instruction is now treated mostly the same way as 'mul X, C'.

These transformations are deferred:

  (X >>? C) << C   --> X & (-1 << C)  (When X >> C has multiple uses)
  (X >>? C1) << C2 --> X << (C2-C1) & (-1 << C2)   (When C2 > C1)
  (X >>? C1) << C2 --> X >>? (C1-C2) & (-1 << C2)  (When C1 > C2)

The corresponding exact transformations are preserved, just like
div-exact + mul:

  (X >>?,exact C) << C   --> X
  (X >>?,exact C1) << C2 --> X << (C2-C1)
  (X >>?,exact C1) << C2 --> X >>?,exact (C1-C2)

The disabled transformations could also prevent the instruction selector
from recognizing rotate patterns in hash functions and cryptographic
primitives. I have a test case for that, but it is too fragile.

llvm-svn: 155362
2012-04-23 17:39:52 +00:00
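For illustration, a small C++ check of the first deferred equivalence, (X >> C) << C == X & (-1 << C), written with the unsigned mask ~0u << C:

#include <cassert>
#include <cstdint>

int main() {
  const unsigned C = 3;
  for (uint32_t X = 0; X < 1024; ++X)
    // Shifting right then left clears the low C bits, which is exactly the
    // masking form the transform would have produced.
    assert(((X >> C) << C) == (X & (~0u << C)));
  return 0;
}
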
Jakob Stoklund Olesen
3d22f26e88 Revert r155136 "Defer some shl transforms to DAGCombine."
While the patch was perfect and defect free, it exposed a really nasty
bug in X86 SelectionDAG that caused an llc crash when compiling lencod.

I'll put the patch back in after fixing the SelectionDAG problem.

llvm-svn: 155181
2012-04-20 00:38:45 +00:00
Jakob Stoklund Olesen
1507d20c57 Defer some shl transforms to DAGCombine.
The shl instruction is used to represent multiplication by a constant
power of two as well as bitwise left shifts. Some InstCombine
transformations would turn an shl instruction into a bit mask operation,
making it difficult for later analysis passes to recognize the
constant multiplication.

Disable those shl transformations, deferring them to DAGCombine time.
An 'shl X, C' instruction is now treated mostly the same way as 'mul X, C'.

These transformations are deferred:

  (X >>? C) << C   --> X & (-1 << C)  (When X >> C has multiple uses)
  (X >>? C1) << C2 --> X << (C2-C1) & (-1 << C2)   (When C2 > C1)
  (X >>? C1) << C2 --> X >>? (C1-C2) & (-1 << C2)  (When C1 > C2)

The corresponding exact transformations are preserved, just like
div-exact + mul:

  (X >>?,exact C) << C   --> X
  (X >>?,exact C1) << C2 --> X << (C2-C1)
  (X >>?,exact C1) << C2 --> X >>?,exact (C1-C2)

The disabled transformations could also prevent the instruction selector
from recognizing rotate patterns in hash functions and cryptographic
primitives. I have a test case for that, but it is too fragile.

llvm-svn: 155136
2012-04-19 16:46:26 +00:00
Chandler Carruth
b3fb4be360 Teach InstCombine to nuke a common alloca pattern -- an alloca which has
GEPs, bit casts, and stores reaching it but no other instructions. These
often show up during the iterative processing of the inliner, SROA, and
DCE. Once we hit this point, we can completely remove the alloca. These
were actually showing up in the final, fully optimized code in a bunch
of inliner tests I've been working on, and notably they show up after
LLVM finishes optimizing away all function calls involved in
hash_combine(a, b).

llvm-svn: 154285
2012-04-08 14:36:56 +00:00
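A hypothetical C++ function (names invented) whose IR, once the inliner and SROA have run, exhibits the pattern described above: an alloca reached only by GEPs, bit casts, and stores, with no loads, so the whole thing can be deleted:

void example(int a, int b, int *out) {
  int tmp[2];    // becomes an alloca
  tmp[0] = a;    // store through a GEP
  tmp[1] = b;    // store through a GEP
  *out = a + b;  // tmp is never read, so the alloca and its stores are dead
}
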
Rafael Espindola
88a1aeb123 Always compute all the bits in ComputeMaskedBits.
This allows us to keep passing reduced masks to SimplifyDemandedBits, but
know about all the bits if SimplifyDemandedBits fails. This allows instcombine
to simplify cases like the one in the included testcase.

llvm-svn: 154011
2012-04-04 12:51:34 +00:00
Nadav Rotem
bcd74e695a r153465 was incorrect. In this code we wanted to check that the pointer operand is of pointer type (and not vector type).
llvm-svn: 153468
2012-03-26 21:00:53 +00:00
Nadav Rotem
165c8a3432 PR12357: The pointer was used before it was checked.
llvm-svn: 153465
2012-03-26 20:39:18 +00:00
Chris Lattner
5a5f3badd6 eliminate an unneeded branch, part of PR12357
llvm-svn: 153458
2012-03-26 19:13:57 +00:00
Bill Wendling
9343ed10c6 Revert r152907.
llvm-svn: 152935
2012-03-16 18:20:54 +00:00
Bill Wendling
3c44ed8385 The pointer operand of the store instruction may have an alignment. If
that's the case, then we want to make sure that we don't increase the
alignment of the store instruction, because if we increase it to be "more
aligned" than the pointer, code-gen may use instructions which require a greater
alignment than the pointer guarantees.
<rdar://problem/11043589>

llvm-svn: 152907
2012-03-16 07:40:08 +00:00
Eli Friedman
0763584d78 In InstCombiner::visitOr, make sure we reverse the operand swap used for checking for or-of-xor operations after those checks; a later check expects that any constant will be in Op1. PR12234.
llvm-svn: 152884
2012-03-16 00:52:42 +00:00
Bill Wendling
92dc871aad Use an iterator instead of calling .size() on the worklist every time, which is wasteful.
llvm-svn: 152794
2012-03-15 11:19:41 +00:00
Stepan Dyatkovskiy
72fdcabd4d llvm::SwitchInst
Renamed the methods caseBegin, caseEnd, and caseDefault to case_begin, case_end, and case_default.
Added some notes about the case iterators.

llvm-svn: 152532
2012-03-11 06:09:17 +00:00
Stepan Dyatkovskiy
79f3dd93b7 Took into account Duncan's comments on r149481 from 2nd Feb 2012:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20120130/136146.html

Implemented CaseIterator, and it solves almost all of the described issues: we no longer need to mix operand/case/successor indexing. The base iterator class is implemented as a template since it may be initialized either from a "const SwitchInst*" or from a "SwitchInst*".

ConstCaseIt is just a read-only iterator.
CaseIt is a read-write iterator; it allows changing the case successor and the case value.

Using the iterators lets us remove the resolveXXXX methods entirely. All indexing conversions are done automatically inside the iterator's getters.

Typical iterator usage looks like this:
SwitchInst *SI = ... // initialize it somehow

for (SwitchInst::CaseIt i = SI->caseBegin(), e = SI->caseEnd(); i != e; ++i) {
  BasicBlock *BB = i.getCaseSuccessor();
  ConstantInt *V = i.getCaseValue();
  // Do something.
}

If you want to convert a case number to a TerminatorInst successor index, use the iterator's getSuccessorIndex method.
If you want to initialize an iterator from a TerminatorInst successor index, use the CaseIt::fromSuccessorIndex(...) method.

There are also related changes in llvm-clients: klee and clang.

llvm-svn: 152297
2012-03-08 07:06:20 +00:00
Bill Wendling
7786b1f9dc Restrict this transformation to equality conditions.
This transformation is not correct for not-equal conditions:

(trunc x) != C1 & (and x, CA) != C2 -> (and x, CA|CMAX) != C1|C2

Let
  C1 == 0
  C2 == 0
  CA == 0xFF0000
  CMAX == 0xFF
and truncating to i8.

The original truth table:

    x   | A: trunc x != 0 | B: x & 0xFF0000 != 0 | A & B != 0
--------------------------------------------------------------
0x00000 |        0        |          0           |     0
0x00001 |        1        |          0           |     0
0x10000 |        0        |          1           |     0
0x10001 |        1        |          1           |     1

The truth table of the replacement:

    x   | x & 0xFF00FF != 0
----------------------------
0x00000 |        0
0x00001 |        1
0x10000 |        1
0x10001 |        1

So they are different.

llvm-svn: 151691
2012-02-29 01:46:50 +00:00
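A quick illustrative C++ reproduction of the counterexample above (C1 == C2 == 0, CA == 0xFF0000, CMAX == 0xFF, truncating to i8):

#include <cassert>
#include <cstdint>

int main() {
  const uint32_t xs[] = {0x00000, 0x00001, 0x10000, 0x10001};
  for (uint32_t x : xs) {
    bool a = (uint8_t)x != 0;             // trunc x != C1
    bool b = (x & 0xFF0000u) != 0;        // (and x, CA) != C2
    bool combined = a && b;               // original condition
    bool replaced = (x & 0xFF00FFu) != 0; // (and x, CA|CMAX) != (C1|C2)
    // The two differ for x == 0x00001 and x == 0x10000, so the transform is
    // only valid for equality conditions.
    if (x == 0x00001 || x == 0x10000)
      assert(combined != replaced);
    else
      assert(combined == replaced);
  }
  return 0;
}
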
Benjamin Kramer
bc60ae1e71 Fix unsigned off-by-one in comment.
llvm-svn: 151056
2012-02-21 13:40:06 +00:00
Benjamin Kramer
dacc2e8edb InstCombine: Don't transform a signed icmp of two GEPs into a signed compare of the indices.
This transformation is not safe in some pathological cases (signed icmp of pointers should be an
extremely rare thing, but it's valid IR!). Add an explanatory comment.

Kudos to Duncan for pointing out this edge case (and not giving up explaining it until I finally got it).

llvm-svn: 151055
2012-02-21 13:31:09 +00:00
Benjamin Kramer
0dac66d9a2 InstCombine: Removing the base from the address calculation is only safe when the GEPs are inbounds.
llvm-svn: 150978
2012-02-20 18:45:10 +00:00
Benjamin Kramer
9ade8e4d79 InstCombine: When comparing two GEPs that were derived from the same base pointer but use different types, expand the offset calculation and do the compare on the offset if profitable.
This came up in SmallVector code.

llvm-svn: 150962
2012-02-20 15:07:47 +00:00
Benjamin Kramer
3d87f26b44 InstCombine: Make OptimizePointerDifference more aggressive.
- Ignore pointer casts.
- Also expand GEPs that aren't constantexprs when they have one use or only constant indices.

- We now compile "&foo[i] - &foo[j]" into "i - j".

llvm-svn: 150961
2012-02-20 14:34:57 +00:00
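An illustrative C++ source fragment for the case mentioned above; with this change the pointer subtraction folds to simply i - j (assuming i and j are valid indices):

#include <cstddef>

int foo[64];

std::ptrdiff_t diff(std::size_t i, std::size_t j) {
  return &foo[i] - &foo[j]; // InstCombine can now reduce this to i - j
}
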
Devang Patel
7f07d60411 Check against umin while converting fcmp into an icmp.
llvm-svn: 150425
2012-02-13 23:05:18 +00:00
Craig Topper
639b152ca5 Convert assert(0) to llvm_unreachable
llvm-svn: 149967
2012-02-07 05:05:23 +00:00
Chris Lattner
7a6bd0185e Remove some dead code and tidy things up now that vectors use ConstantDataVector
instead of always using ConstantVector.

llvm-svn: 149912
2012-02-06 21:56:39 +00:00
Bill Wendling
4e92f798ff [unwind removal] We no longer have 'unwind' instructions being generated, so
remove the code that handles them.

llvm-svn: 149901
2012-02-06 21:16:41 +00:00
Benjamin Kramer
8f25434574 Make helper static.
llvm-svn: 149865
2012-02-06 11:28:19 +00:00
Jim Grosbach
68e59a434c Narrow test further. Make bot and test happy.
llvm-svn: 149650
2012-02-03 00:26:07 +00:00
Jim Grosbach
6c70b7e9a5 Tidy up. Trailing whitespace.
llvm-svn: 149649
2012-02-03 00:07:04 +00:00
Jim Grosbach
12f2a6322e Restrict InstCombine from converting varargs to or from fixed args.
More targeted fix, replacing d0e277d272d517ca1cda368267d199f0da7cad95.

llvm-svn: 149648
2012-02-03 00:00:55 +00:00
Jim Grosbach
bc7e9b3c96 Revert "Disable InstCombine unsafe folding bitcasts of calls w/ varargs."
This reverts commit d0e277d272d517ca1cda368267d199f0da7cad95.

llvm-svn: 149647
2012-02-03 00:00:50 +00:00
Stepan Dyatkovskiy
856ca370cc SwitchInst refactoring.
The purpose of this refactoring is to hide the operand roles from the SwitchInst user (programmer). If you want to work with operands directly, you will probably need lower-level methods than the SwitchInst ones (TerminatorInst or maybe User). After this patch we can reorganize SwitchInst operands and successors however we want.

What was done:

1. Changed the semantics of the index inside the getCaseValue method:
getCaseValue(0) means "get the first case", not the condition. Use getCondition() if you want to resolve the condition. I propose not mixing SwitchInst case indexing with low-level indexing (TI successor indexing, User operand indexing), since it may be dangerous.
2. For the same reason, findCaseValue(ConstantInt*) returns the actual case number. 0 means the first case, not the default. If there is no case with the given value, ErrorIndex is returned.
3. Added the getCaseSuccessor method. I propose avoiding TerminatorInst::getSuccessor if you want to resolve a case successor BB. Use getCaseSuccessor instead, since the internal SwitchInst organization of operands/successors is hidden and may change at any moment.
4. Added resolveSuccessorIndex and resolveCaseIndex. The main purpose of these methods is to show how case successors are really mapped in the TerminatorInst.
4.1 "resolveSuccessorIndex" is for when you need to go down from SwitchInst to TerminatorInst. It returns the TerminatorInst successor index for a given case successor.
4.2 "resolveCaseIndex" converts a low-level successor index to the case index that corresponds to the given successor.

Note: There are also related compatibility fix patches for dragonegg, klee, llvm-gcc-4.0, llvm-gcc-4.2, safecode, clang.
llvm-svn: 149481
2012-02-01 07:49:51 +00:00