mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-11-25 04:02:41 +01:00
Commit Graph

1940 Commits

Author SHA1 Message Date
Chris Lattner
ba1cc33676 Implement PR8644: forwarding a memcpy value to a byval,
allowing the memcpy to be eliminated.

Unfortunately, the requirements on byval arguments without explicit
alignment are really weak and impossible to predict in the 
mid-level optimizer, so this doesn't kick in much with current
frontends.  The fix is to change clang to set alignment on all
byval arguments.
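
A minimal C sketch of the pattern this enables (the struct and function
names here are illustrative, not taken from the commit):

  struct S { int data[16]; };
  void callee(struct S s);   /* lowered with a byval pointer argument */

  void caller(struct S *src) {
    struct S tmp = *src;     /* the frontend emits a memcpy into an alloca */
    callee(tmp);             /* the byval copy can read *src directly,
                                letting the memcpy be eliminated */
  }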

llvm-svn: 119916
2010-11-21 00:28:59 +00:00
Owen Anderson
5ee547b9d5 Add a test for CodeGenPrepare's ability to look through PHI nodes when performing
addressing mode folding, introduced in r119853.

llvm-svn: 119857
2010-11-19 22:34:53 +00:00
Duncan Sands
4562d3b919 Factor code for testing whether replacing one value with another
preserves LCSSA form out of ScalarEvolution and into the LoopInfo
class.  Use it to check that SimplifyInstruction simplifications
are not breaking LCSSA form.  Fixes PR8622.

llvm-svn: 119727
2010-11-18 19:59:41 +00:00
Owen Anderson
c2db966e5e Completely rework the data structure GVN uses to represent the value number to leader mapping. Previously,
this was a tree of hashtables, and a query recursed into the table for the immediate dominator ad infinitum
if the initial lookup failed.  This led to really bad performance on tall, narrow CFGs.

We can instead replace it with what is conceptually a multimap of value numbers to leaders (actually
represented by a hashtable with a list of Value*'s as the value type), and then
determine which leader from that set to use very cheaply thanks to the DFS numberings maintained by
DominatorTree.  Because there are typically few duplicates of a given value, this scan tends to be
quite fast.  Additionally, we use a custom linked list and BumpPtr allocation to avoid any unnecessary
allocation in representing the value-side of the multimap.

This change brings with it a 15% (!) improvement in the total running time of GVN on 403.gcc, which I
think is pretty good considering that includes all the "real work" being done by MemDep as well.

The one downside to this approach is that we can no longer use GVN to perform simple conditional propagation,
but that seems like an acceptable loss since we now have LVI and CorrelatedValuePropagation to pick up
the slack.  If you see conditional propagation that's not happening, please file bugs against LVI or CVP.
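
A rough C sketch of the new scheme (types are illustrative; the real
code uses lists of Value*'s, a custom linked list and BumpPtr allocation):

  struct LeaderEntry {
    void *Leader;              /* stand-in for the leading Value*        */
    int DFSIn, DFSOut;         /* DFS interval of the defining block     */
    struct LeaderEntry *Next;  /* custom singly linked list node         */
  };

  /* With the DFS numbering maintained by DominatorTree, block A
     dominates block B iff A's [in, out] interval encloses B's.          */
  static int dominates(const struct LeaderEntry *E, int QIn, int QOut) {
    return E->DFSIn <= QIn && QOut <= E->DFSOut;
  }

  /* There are typically few duplicates of a given value number, so a
     linear scan of the list is cheap.                                   */
  void *findLeader(struct LeaderEntry *Head, int QIn, int QOut) {
    for (struct LeaderEntry *E = Head; E; E = E->Next)
      if (dominates(E, QIn, QOut))
        return E->Leader;
    return 0;
  }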

llvm-svn: 119714
2010-11-18 18:32:40 +00:00
Dan Gohman
ec75e876ab Add support for PHI-translating sext, zext, and trunc instructions,
enabling more PRE. PR8586.

llvm-svn: 119704
2010-11-18 17:05:13 +00:00
Chris Lattner
c752718881 remove a pointless restriction from memcpyopt. It was
refusing to optimize two memcpy's like this:

copy A <- B
copy C <- A

if it couldn't prove that noalias(B,C).  We can eliminate
the copy by producing a memmove instead of memcpy.
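
In C terms (a sketch; not the testcase from the commit):

  #include <string.h>

  void f(char *a, const char *b, char *c, size_t n) {
    memcpy(a, b, n);   /* copy A <- B */
    memcpy(c, a, n);   /* copy C <- A reads exactly the bytes of B */
  }

The second call can now be rewritten to copy from b directly; since
noalias(b, c) is unknown, it becomes memmove(c, b, n) rather than
memcpy.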

llvm-svn: 119694
2010-11-18 08:00:57 +00:00
Chris Lattner
1000d06bee filecheckize, this is still not optimal, see PR8643
llvm-svn: 119693
2010-11-18 07:49:32 +00:00
Chris Lattner
6048697a30 allow eliminating an alloca that is just copied from a constant global
if it is passed as a byval argument.  The byval argument will just be a
read, so it is safe to read from the original global instead.  This allows
us to promote away the %agg.tmp alloca in PR8582
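
Sketched in C (illustrative names only):

  struct T { int data[32]; };
  static const struct T G;   /* constant global */
  void use(struct T t);      /* lowered with a byval pointer argument */

  void caller(void) {
    struct T tmp = G;        /* memcpy from the constant global */
    use(tmp);                /* byval only reads, so the call can take G
                                directly and the alloca goes away */
  }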

llvm-svn: 119686
2010-11-18 06:41:51 +00:00
Chris Lattner
791e914b1b enhance the "alloca is just a memcpy from constant global"
to ignore calls that obviously can't modify the alloca
because they are readonly/readnone.

llvm-svn: 119683
2010-11-18 06:26:49 +00:00
Chris Lattner
44ccd4643d fix a small oversight in the "eliminate memcpy from constant global"
optimization.  If the alloca that is "memcpy'd from constant" also has
a memcpy from *it*, ignore it: it is a load.  We now optimize the testcase to:

define void @test2() {
  %B = alloca %T
  %a = bitcast %T* @G to i8*
  %b = bitcast %T* %B to i8*
  call void @llvm.memcpy.p0i8.p0i8.i64(i8* %b, i8* %a, i64 124, i32 4, i1 false)
  call void @bar(i8* %b)
  ret void
}

previously we would generate:

define void @test() {
  %B = alloca %T
  %b = bitcast %T* %B to i8*
  %G.0 = getelementptr inbounds %T* @G, i32 0, i32 0
  %tmp3 = load i8* %G.0, align 4
  %G.1 = getelementptr inbounds %T* @G, i32 0, i32 1
  %G.15 = bitcast [123 x i8]* %G.1 to i8*
  %1 = bitcast [123 x i8]* %G.1 to i984*
  %srcval = load i984* %1, align 1
  %B.0 = getelementptr inbounds %T* %B, i32 0, i32 0
  store i8 %tmp3, i8* %B.0, align 4
  %B.1 = getelementptr inbounds %T* %B, i32 0, i32 1
  %B.12 = bitcast [123 x i8]* %B.1 to i8*
  %2 = bitcast [123 x i8]* %B.1 to i984*
  store i984 %srcval, i984* %2, align 1
  call void @bar(i8* %b)
  ret void
}

llvm-svn: 119682
2010-11-18 06:20:47 +00:00
Chris Lattner
c1e63bb987 filecheckize
llvm-svn: 119681
2010-11-18 06:16:43 +00:00
Benjamin Kramer
1b330efb46 InstCombine: Add a missing irem identity (X % X -> 0).
llvm-svn: 119538
2010-11-17 19:11:46 +00:00
Duncan Sands
825c7d7f79 In which I discover the existence of loops. Threading an operation
over a phi node by applying it to each operand may be wrong if the
operation and the phi node are mutually interdependent (the testcase
has a simple example of this).  So only do this transform if it would
be correct to perform the operation in each predecessor of the block
containing the phi, i.e. if the other operands all dominate the phi.
This should fix the FFMPEG snow.c regression reported by İsmail Dönmez.

llvm-svn: 119347
2010-11-16 12:16:38 +00:00
Duncan Sands
ccdccb2776 Teach InstructionSimplify the trick of skipping incoming phi
values that are equal to the phi itself.

llvm-svn: 119161
2010-11-15 17:52:45 +00:00
Duncan Sands
658bdcf094 Move PHI tests to phi.ll, out of select.ll.
llvm-svn: 119153
2010-11-15 16:43:28 +00:00
Duncan Sands
617030ad18 Teach InstructionSimplify about phi nodes. I chose to have it simply
offload the work to hasConstantValue rather than do something more
complicated (such as handling mutually recursive phis) because (1) it is
not clear it is worth it; and (2) if it is worth it, maybe such logic
would be better placed in hasConstantValue.  Adjust some GVN tests
which are now cleaned up much further (eg: all phi nodes are removed).

llvm-svn: 119043
2010-11-14 13:30:18 +00:00
Chris Lattner
7ade54ed02 rename test.
llvm-svn: 119033
2010-11-14 07:03:49 +00:00
Chris Lattner
9b45357c09 filecheckize, remove an old and useless test
llvm-svn: 119032
2010-11-14 07:03:38 +00:00
Chris Lattner
449c3bd7b5 this test is pretty pointless and "propogation" isn't a word (or so Misha claims).
llvm-svn: 119031
2010-11-14 07:02:03 +00:00
Duncan Sands
47dddbe925 Testcase to go along with commit 118923 ("Have GVN simplify instructions
as it goes").  Before -std-compile-opts only got it down to
  %a = tail call i32 @foo(i32 0) readnone
  %x = tail call i32 @foo(i32 %a) readnone
  %y = tail call i32 @foo(i32 %a) readnone
  %z = icmp eq i32 %x, %y
  ret i1 %z
while now -basicaa -gvn alone reduce it to
  %a = call i32 @foo(i32 0) readnone
  %x = call i32 @foo(i32 %a) readnone
  ret i1 true

llvm-svn: 119009
2010-11-13 21:33:19 +00:00
Duncan Sands
88fc6cd7fe Generalize the reassociation transform in SimplifyCommutative (now renamed to
SimplifyAssociativeOrCommutative) "(A op C1) op C2" -> "A op (C1 op C2)",
which previously was only done if C1 and C2 were constants, to occur whenever
"C1 op C2" simplifies (a la InstructionSimplify).  Since the simplifying operand
combination can no longer be assumed to be the right-hand terms, consider all of
the possible permutations.  When compiling "gcc as one big file", transform 2
(i.e. using right-hand operands) fires about 4000 times but it has to be said
that most of the time the simplifying operands are both constants.  Transforms
3, 4 and 5 each fired once.  Transform 6, which is an existing transform that
I didn't change, never fired.  With this change, the testcase is now optimized
perfectly with one run of instcombine (previously it required instcombine +
reassociate + instcombine, and it may just have been luck that this worked).
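
An illustrative instance with non-constant operands (not from the
commit): "(A ^ B) ^ B" reassociates to "A ^ (B ^ B)", and since
"B ^ B" simplifies to 0, the whole expression simplifies to "A".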

llvm-svn: 119002
2010-11-13 15:10:37 +00:00
Dan Gohman
bc5a716f10 Enhance DSE to handle the case where a free call makes more than
one store dead. This is especially noticeable in
SingleSource/Benchmarks/Shootout/objinst.
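
A minimal C illustration (not the benchmark itself):

  #include <stdlib.h>

  void f(int *p) {
    p[0] = 1;   /* dead: p is freed before any read */
    p[1] = 2;   /* also dead; both stores are now removed */
    free(p);
  }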

llvm-svn: 118875
2010-11-12 02:19:17 +00:00
Dan Gohman
2cf0e523cb Filecheckize.
llvm-svn: 118874
2010-11-12 02:02:39 +00:00
Dan Gohman
b20b7d2ec2 Factor out Instruction::isSafeToSpeculativelyExecute's code for
testing for dereferenceable pointers into a helper function,
isDereferenceablePointer.  Teach it how to reason about GEPs
with simple non-zero indices.

Also eliminate ArgumentPromotion's IsAlwaysValidPointer,
which didn't check for weak externals or out of range gep
indices.
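
For instance (an illustrative case, not from the commit): if a pointer
is known dereferenceable as a pointer to a struct with two i32 fields,
the inbounds GEP to the second field is dereferenceable as well, so a
load of it may be speculated.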

llvm-svn: 118840
2010-11-11 21:23:25 +00:00
Dan Gohman
2b4e8302a6 Enhance GVN to do more precise alias queries for non-local memory
references. For example, this allows gvn to eliminate the load in
this example:

  void foo(int n, int* p, int *q) {
    p[0] = 0;
    p[1] = 1;
    if (n) {
      *q = p[0];
    }
  }

llvm-svn: 118714
2010-11-10 20:37:15 +00:00
Duncan Sands
8531f14f5b Teach InstructionSimplify how to look through PHI nodes. Since PHI
nodes can be used in loops, this could result in infinite looping
if there is no recursion limit, so add such a limit.  It is also
used for the SelectInst case because in theory there could be an
infinite loop there too if the basic block is unreachable.
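
Schematically (hypothetical names; the idea is just a depth counter
threaded through the recursive simplification calls):

  void *simplify(void *V, unsigned MaxRecurse) {
    if (MaxRecurse == 0)
      return 0;   /* give up rather than loop through the phi cycle */
    /* ... otherwise look through phi/select incoming values,
       recursing with MaxRecurse - 1 ... */
    return 0;
  }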

llvm-svn: 118694
2010-11-10 18:23:01 +00:00
Dale Johannesen
24a82bdec2 When checking that the necessary bits are zero in
order to reduce ((x<<30)>>24) to x<<6, check the
correct bits.  PR 8547.
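
Concretely, for an i32 value x, ((x << 30) >> 24) keeps only bits 0-1
of x, placing them at bits 6-7 of the result, while (x << 6) also
exposes bits 2-25 of x at bits 8-31; the fold is therefore only valid
when those bits of x are known to be zero.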

llvm-svn: 118665
2010-11-10 01:30:56 +00:00
Dan Gohman
1571dfc883 Make ModRefBehavior a lattice. Use this to clean up AliasAnalysis
chaining and simplify FunctionAttrs' GetModRefBehavior logic.

llvm-svn: 118660
2010-11-10 01:02:18 +00:00
Duncan Sands
3ced912bf8 Add an additional test for icmp of select folding.
llvm-svn: 118441
2010-11-08 20:56:28 +00:00
Dan Gohman
6909ecf66e Extend the AliasAnalysis::pointsToConstantMemory interface to allow it
to optionally look for constant or local (alloca) memory.

Teach BasicAliasAnalysis::pointsToConstantMemory to look through Select
and Phi nodes, and to support looking for local memory.

Remove FunctionAttrs' PointsToLocalOrConstantMemory function, now that
AliasAnalysis knows all the tricks it knew.

llvm-svn: 118412
2010-11-08 16:45:26 +00:00
Dan Gohman
4e65d5ff92 Make FunctionAttrs use AliasAnalysis::getModRefBehavior, now that it
knows about intrinsic functions.

llvm-svn: 118410
2010-11-08 16:10:15 +00:00
Duncan Sands
0a5bcce250 Add simplification of floating point comparisons with the result
of a select instruction, the same as already exists for integer
comparisons.

llvm-svn: 118379
2010-11-07 16:46:25 +00:00
Duncan Sands
c9c7d54930 Fix a README item: when doing a comparison with the result
of a select instruction, see if doing the compare with the
true and false values of the select gives the same result.
If so, that can be used as the value of the comparison.
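
In C terms (a sketch of the idea):

  int f(int c) {
    int s = c ? 0 : 1;
    return s < 2;   /* true for both arms of the select, so folds to 1 */
  }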

llvm-svn: 118378
2010-11-07 16:12:23 +00:00
Owen Anderson
47f0efad86 When folding away a (shl (shr)) pair, we need to check that the bits that will BECOME the low
bits are zero, not that the current low bits are zero.  Fixes <rdar://problem/8606771>.

llvm-svn: 117953
2010-11-01 21:08:20 +00:00
Duncan Sands
a7198342e7 If a function does a volatile load from a global constant, do not
consider it to be readonly.  In fact, don't even consider it to be
readonly if it does a volatile load from an AllocaInst either (it
is debatable as to whether readonly would be correct or not in this
case; play safe for the moment).  This fixes PR8279.

llvm-svn: 117783
2010-10-30 12:59:44 +00:00
Bob Wilson
d7f24e831f Change instcombine's getShuffleMask to represent undef with negative values.
This code had previously used 2*N, where N is the mask length, to represent
undef.  That is not safe because the shufflevector operands may have more
than N elements -- they don't have to match the result type.

llvm-svn: 117721
2010-10-29 22:03:05 +00:00
Bob Wilson
996353fb5d Make instcombine a little more aggressive in combining vector shuffles.
Allow splats even if they don't match either of the original shuffles,
possibly due to undef entries in the shuffle masks.  Radar 8597790.
Also fix some 80-column violations.

llvm-svn: 117719
2010-10-29 22:02:50 +00:00
Owen Anderson
14cf6bfa0f Update testcase since we're no longer doing the constant forwarding inline with correlated value propagation.
llvm-svn: 117712
2010-10-29 21:18:23 +00:00
NAKAMURA Takumi
b89aaebdde test/Transforms/SimplifyLibCalls/floor.ll: Mark as XFAIL:win32 due to lack of nearbyintf on MSVC. [PR8466]
llvm-svn: 117529
2010-10-28 06:46:04 +00:00
Dale Johannesen
454b9243bd Teach InstCombine not to use Add and Neg on FP. PR 8490.
llvm-svn: 117510
2010-10-27 23:45:18 +00:00
Dan Gohman
96e34e87ca Fix a case where instcombine was stripping metadata (and alignment)
from stores when folding in bitcasts.

llvm-svn: 117265
2010-10-25 16:16:27 +00:00
Duncan Sands
5b25503aab Fix PR8445: a block with no predecessors may be the entry block, in which case
it isn't unreachable and should not be zapped.  The check for the entry block
was missing in one case: a block containing an unwind instruction.  While there,
do some small cleanups: "M" is not a great name for a Function* (it would be
more appropriate for a Module*), change it to "Fn"; use Fn in more places.

llvm-svn: 117224
2010-10-24 12:23:30 +00:00
Bob Wilson
0290dbe7d4 Teach instcombine to set the alignment arguments for NEON load/store intrinsics.
llvm-svn: 117154
2010-10-22 21:41:48 +00:00
Mikhail Glushenkov
0c09a4b97f GlobalOpt: EvaluateFunction() must not evaluate stores to weak_odr globals.
Fixes PR8389.

llvm-svn: 116812
2010-10-19 16:47:23 +00:00
Dan Gohman
6aff5b94ff Make BasicAliasAnalysis a normal AliasAnalysis implementation which
does normal initialization and normal chaining. Change the default
AliasAnalysis implementation to NoAlias.

Update StandardCompileOpts.h and friends to explicitly request
BasicAliasAnalysis.

Update tests to explicitly request -basicaa.

llvm-svn: 116720
2010-10-18 18:04:47 +00:00
Owen Anderson
4373b4516b Generalize MemCpyOpt's handling of call slot forwarding to function properly when the call slot
forwarding is implemented with a load/store pair rather than a memcpy.

llvm-svn: 116637
2010-10-15 22:52:12 +00:00
Chris Lattner
27d8b68afa fix a bug I introduced, no idea how this didn't repro right.
llvm-svn: 116462
2010-10-14 00:30:00 +00:00
Chris Lattner
7c5912d186 hack to unbreak buildbots
llvm-svn: 116461
2010-10-14 00:26:10 +00:00
Chris Lattner
451a0accb5 add uadd_ov/usub_ov to apint, consolidate constant folding
logic to use the new APInt methods.  Among other things this
implements rdar://8501501 - llvm.smul.with.overflow.i32 should constant fold

which comes from "clang -ftrapv", originally brought to my attention from PR8221.

llvm-svn: 116457
2010-10-14 00:05:07 +00:00
Kenneth Uildriks
e9771f15f7 Now using a variant of the existing inlining heuristics to decide whether to create a given specialization of a function in PartialSpecialization.
If the total performance bonus across all callsites passing the same constant exceeds the specialization cost, we create the specialization.
llvm-svn: 116158
2010-10-09 22:06:36 +00:00