mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-27 14:02:50 +01:00
Commit Graph

430 Commits

Author SHA1 Message Date
Chandler Carruth
4b51f99c87 Move llvm/Support/IRBuilder.h -> llvm/IRBuilder.h
This was always part of the VMCore library out of necessity -- it deals
entirely in the IR. The .cpp file in fact was already part of the VMCore
library. This is just a mechanical move.

I've tried to go through and re-apply the coding standard's preferred
header sort, but at 40-ish files, I may have gotten some wrong. Please
let me know if so.

I'll be committing the corresponding updates to Clang and Polly, and
Duncan has DragonEgg.

Thanks to Bill and Eric for giving the green light for this bit of cleanup.

llvm-svn: 159421
2012-06-29 12:38:19 +00:00
Bill Wendling
74b96ac7b8 The DIBuilder class is just a wrapper around debug info creation
(a.k.a. MDNodes). The module doesn't belong in Analysis. Move it to the VMCore
instead.

llvm-svn: 159414
2012-06-29 08:32:07 +00:00
Bill Wendling
e8949ecfa6 Move lib/Analysis/DebugInfo.cpp to lib/VMCore/DebugInfo.cpp and
include/llvm/Analysis/DebugInfo.h to include/llvm/DebugInfo.h.

The reasoning is because the DebugInfo module is simply an interface to the
debug info MDNodes and has nothing to do with analysis.

llvm-svn: 159312
2012-06-28 00:05:13 +00:00
Nadav Rotem
313b090606 Add a number of threshold arguments to the SRA pass.
A patch by Tom Stellard with minor changes.

llvm-svn: 158918
2012-06-21 13:44:31 +00:00
Pete Cooper
5e72f7e4f9 Now that SROA can form allocas for dynamic vector accesses, further improve it to be able to replace operations on these vector allocas with insert/extract element insts
llvm-svn: 158623
2012-06-17 03:58:26 +00:00
Pete Cooper
f0846e363a Fix crash from r158529 on Bullet.
Dynamic GEPs created by SROA needed to insert extra "i32 0"
operands to index through structs and arrays to get to the
vector being indexed.

llvm-svn: 158590
2012-06-16 01:43:26 +00:00
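
For context, a rough sketch of the kind of dynamic GEP involved (illustrative only, not taken from the commit; the function and value names are invented). Reaching the vector nested inside a struct-of-array takes leading "i32 0" operands before the dynamic vector index:

define float @dyn_index_sketch(i32 %idx) {
  %a = alloca { [2 x <4 x float>] }
  ; the leading "i32 0" operands step through the struct and the array;
  ; only the final index selects the vector element dynamically
  %p = getelementptr { [2 x <4 x float>] }* %a, i32 0, i32 0, i32 1, i32 %idx
  %v = load float* %p
  ret float %v
}
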
Pete Cooper
d64dbc9162 Allow SROA to split up an array of vectors into multiple vectors, even when the vectors are dynamically indexed
llvm-svn: 158529
2012-06-15 18:07:29 +00:00
Pete Cooper
f7d46afa61 Recommit r158407: Allow SROA to look at a vector type and see if the offset is out of range to be replaced with a scalar access. Now with additional fix and test for indexing into a vector inside a struct
llvm-svn: 158479
2012-06-14 23:53:53 +00:00
Pete Cooper
75c1521e67 Revert r158454: Allow SROA to look at a vector type... It's breaking the vectorise buildbot
This reverts commit 12c1f86ffa731e2952c80d2cc577000c96b8962c.

llvm-svn: 158462
2012-06-14 18:32:52 +00:00
Pete Cooper
8bba872141 Recommit r158407: Allow SROA to look at a vector type and see if the offset is out of range to be replaced with a scalar access. Now with additional fix and test for indexing into a vector inside a struct
llvm-svn: 158454
2012-06-14 16:38:13 +00:00
Pete Cooper
ce49530fba Revert "Allow SROA to look at a vector type and see if the offset is out of range to be replaced with a scalar access"
This reverts commit 51786e0aaec76b973205066bd44f7f427b21969f.

llvm-svn: 158408
2012-06-13 17:55:22 +00:00
Pete Cooper
efba533f47 Allow SROA to look at a vector type and see if the offset is out of range to be replaced with a scalar access
llvm-svn: 158407
2012-06-13 17:30:34 +00:00
Peter Collingbourne
0baed83df2 Do not eliminate allocas whose alignment exceeds that of the
copied-in constant, as a subsequent user may rely on the over-alignment.
Fixes PR12885.

llvm-svn: 157134
2012-05-19 22:52:10 +00:00
Chad Rosier
b41586c8e1 Typo.
llvm-svn: 154522
2012-04-11 19:21:58 +00:00
Duncan Sands
1ac68ded1a Indentation.
llvm-svn: 153322
2012-03-23 08:29:04 +00:00
Chris Lattner
9bb176f16d don't use "signed", just something I noticed in patches flying by.
llvm-svn: 153237
2012-03-22 03:46:58 +00:00
Aaron Ballman
bf6eebde21 Fixed a transform crash when setting a negative size value for memset. Fixes PR12202.
llvm-svn: 152756
2012-03-15 00:05:31 +00:00
Benjamin Kramer
4cd3b0e4e6 Reflow code, no functionality change.
llvm-svn: 151262
2012-02-23 17:42:19 +00:00
Chris Lattner
473bdbaabc use ConstantVector::getSplat in a few places.
llvm-svn: 148929
2012-01-25 06:02:56 +00:00
Rafael Espindola
d448dfaa25 Fix warning.
llvm-svn: 147284
2011-12-26 23:12:42 +00:00
Nadav Rotem
1a91e4381d Add support for vectors of pointers.
llvm-svn: 145801
2011-12-05 06:29:09 +00:00
Nick Lewycky
39c6f0a5d5 Refactor code to use new attribute getters on CallSite for NoCapture and ByVal.
Suggested in code review by Eli.

That code in InstCombine looks kinda suspicious.

llvm-svn: 145013
2011-11-20 19:09:04 +00:00
Eli Friedman
a83fbaff5f Make sure scalarrepl picks the correct alloca when it rewrites a bitcast. Fixes PR11353.
llvm-svn: 144442
2011-11-12 02:07:50 +00:00
Cameron Zwarich
2dd06afcf5 The element insertion code in scalar replacement doesn't handle incorrect
element types, even though the element extraction code does. It is surprising
that this bug has been here for so long. Fixes <rdar://problem/10318778>.

llvm-svn: 142740
2011-10-23 07:02:10 +00:00
Cameron Zwarich
fac176ac51 Fix PR11106 by correcting a typo that has been in the code for over a year. This
would have never worked, since the element type of a vector type is never a
vector type. Also fix the conditional to be more direct in checking whether
EltTy is a vector type.

llvm-svn: 141713
2011-10-11 21:26:40 +00:00
Cameron Zwarich
a34d748f83 Remove a lot of the fancy scalar replacement code for dealing with llvm-gcc's
lowering of NEON code. It provides little-to-no benefit now and only introduces
additional complexity.

llvm-svn: 141646
2011-10-11 06:10:30 +00:00
Benjamin Kramer
355b353595 Stop emitting instructions with the name "tmp"; they eat up memory and have to be uniqued, without any benefit.
If someone prefers %tmp42 to %42, run instnamer.

llvm-svn: 140634
2011-09-27 20:39:19 +00:00
Eli Friedman
6e15091fc6 PR10987: add a missed safety check to isSafePHIToSpeculate in scalarrepl.
llvm-svn: 140327
2011-09-22 18:56:30 +00:00
Eli Friedman
047d3de417 Change a bunch of isVolatile() checks to check for atomic load/store as well.
No tests; these changes aren't really interesting in the sense that the logic is the same for volatile and atomic.

I believe this completes all of the changes necessary for the optimizer to handle loads and stores correctly.  I'm going to try and come up with some additional testing, though.

llvm-svn: 139533
2011-09-12 20:23:13 +00:00
Nick Lewycky
271216a67b Finish adding support for lifetime intrinsics to SROA. Fixes PR10121!
llvm-svn: 136008
2011-07-25 23:14:22 +00:00
Jay Foad
6513dac6e2 Convert GetElementPtrInst to use ArrayRef.
llvm-svn: 135904
2011-07-25 09:48:08 +00:00
Dan Gohman
b320950f65 Fix MergeInVectorType to check for vector types with the same alloc
size but different element types, so that it filters out the cases
that CreateShuffleVectorCast doesn't handle. This fixes rdar://9786827.

llvm-svn: 135721
2011-07-21 23:30:09 +00:00
Jay Foad
0974b71f17 Convert TargetData::getIndexedOffset to use ArrayRef.
llvm-svn: 135478
2011-07-19 14:01:37 +00:00
Chris Lattner
e1fe7061ce land David Blaikie's patch to de-constify Type, with a few tweaks.
llvm-svn: 135375
2011-07-18 04:54:35 +00:00
Devang Patel
4011cc12b8 Use DBG_VALUE location while inserting DBG_VALUE during alloca promotion.
llvm-svn: 134568
2011-07-07 00:05:58 +00:00
Devang Patel
47d505f9ef Handle cases where multiple dbg.declare and dbg.value intrinsics are tied to one alloca.
llvm-svn: 134549
2011-07-06 22:06:11 +00:00
Devang Patel
214fa1739f Simplify. Consolidate dbg.declare handling in AllocaPromoter.
llvm-svn: 134538
2011-07-06 21:09:55 +00:00
Nick Lewycky
465651fe6e Fix likely typo, reduce number of instruction name collisions.
llvm-svn: 134235
2011-07-01 06:27:03 +00:00
Nick Lewycky
0a59da4c5f Teach one piece of scalarrepl to handle lifetime markers. When transforming an
alloca that only holds a copy of a global and we're going to replace the users
of the alloca with that global, just nuke the lifetime intrinsics. Part of
PR10121.

llvm-svn: 133905
2011-06-27 05:40:02 +00:00
Cameron Zwarich
09c312acad When scalar replacement returns a vector type, only accept it if the vector
type's bitwidth matches the (allocated) size of the alloca. This severely
pessimizes vector scalar replacement when the only vector type being used is
something like <3 x float> on x86 or ARM whose allocated size matches a
<4 x float>.

I hope to fix some of the flawed assumptions about allocated size throughout
scalar replacement and reenable this in most cases.

llvm-svn: 133338
2011-06-18 06:17:51 +00:00
Cameron Zwarich
00255ad84e Fix an invalid bitcast crash that occurs when doing a partial memset of a vector
alloca. Fixes part of <rdar://problem/9580800>.

llvm-svn: 133336
2011-06-18 05:47:49 +00:00
Cameron Zwarich
d6e767ca48 Remove a pointless assignment. Nothing checks the value of VectorTy anymore now
unless ScalarKind is Vector.

llvm-svn: 133335
2011-06-18 05:47:45 +00:00
Cameron Zwarich
77f6a2d09e Be more obvious about what is being tested.
llvm-svn: 132982
2011-06-14 06:33:51 +00:00
Cameron Zwarich
5b03baf002 Fix grammar.
llvm-svn: 132952
2011-06-13 23:39:23 +00:00
Cameron Zwarich
c122dc4050 Rename MergeInType to MergeInTypeForLoadOrStore.
llvm-svn: 132940
2011-06-13 21:44:43 +00:00
Cameron Zwarich
e2e3fc3fe2 Remove the HadAVector instance variable and replace it with a use of ScalarKind.
llvm-svn: 132939
2011-06-13 21:44:40 +00:00
Cameron Zwarich
3614d9bec8 Remove a vacuous check.
llvm-svn: 132938
2011-06-13 21:44:38 +00:00
Cameron Zwarich
934fdc5aba Have SRoA explicitly track the kind of scalar it is promoting. This is pretty
spartan right now, but I plan to encode more information in this enum to improve
the correctness and reliability of SRoA. At least this first pass makes it
possible to make VectorTy an actual VectorType.

llvm-svn: 132937
2011-06-13 21:44:35 +00:00
Cameron Zwarich
ab21bb23b9 Remove an argument that is always true.
llvm-svn: 132936
2011-06-13 21:44:31 +00:00
Cameron Zwarich
ca3f5d4844 Remove a vacuous condition.
llvm-svn: 132767
2011-06-09 01:52:44 +00:00
Cameron Zwarich
72cd0d1b5b Fix PR10104 by adding a bounds check on a vector element access check. It was
assuming that all offsets are legal vector accesses, and thus trying to access
the float member of { <2 x float>, float } as the 3rd element of the first
member.

llvm-svn: 132766
2011-06-09 01:45:33 +00:00
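
A minimal illustration of the layout in question (hypothetical IR, not from the commit): the <2 x float> member covers offsets 0-7 and the scalar float sits at offset 8, so an access at offset 8 must be treated as the second struct member, not as a nonexistent third vector element:

define float @struct_offset_sketch(float %x) {
  %a = alloca { <2 x float>, float }
  ; offset 8 is the standalone float member, not element 2 of the vector
  %p = getelementptr { <2 x float>, float }* %a, i32 0, i32 1
  store float %x, float* %p
  %r = load float* %p
  ret float %r
}
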
Cameron Zwarich
68f8e98b8e Fix an asymmetry between ConvertScalar_ExtractValue and ConvertScalar_InsertValue. The
former was using the size of the entire alloca, whereas the latter was correctly using
the allocated size of the immediate type being converted (which may differ from the size
of the alloca). This fixes PR10082.

llvm-svn: 132759
2011-06-08 22:08:31 +00:00
Devang Patel
ec747bb76b Use IRBuilder, preserve line numbers.
llvm-svn: 132578
2011-06-03 19:46:19 +00:00
Cameron Zwarich
e425894eaf Clean up the lazy initialization of DIBuilder a bit.
llvm-svn: 131956
2011-05-24 06:00:08 +00:00
Cameron Zwarich
462b5db500 Make LoadAndStorePromoter preserve debug info and create llvm.dbg.values when
promoting allocas to SSA variables. Fixes <rdar://problem/9479036>.

llvm-svn: 131953
2011-05-24 03:10:43 +00:00
Duncan Sands
c4d27a63a4 Fix PR9820: a read-only call differs from a load in that a load doesn't
return the pointer being dereferenced, it returns the pointee, but a call
might return the pointer itself.

llvm-svn: 130979
2011-05-06 10:30:37 +00:00
Cameron Zwarich
fee060175e Fix another case of <rdar://problem/9184212> that only occurs with code
generated by llvm-gcc, since llvm-gcc uses 2 i64s for passing a 4 x float
vector on ARM rather than an i64 array like Clang.

llvm-svn: 129878
2011-04-20 21:48:38 +00:00
Cameron Zwarich
139f811a9b The bitcast case here is actually handled uniformly earlier in the function, so
delete it.

llvm-svn: 129877
2011-04-20 21:48:34 +00:00
Cameron Zwarich
13cd49fd54 Cleanup some code to better use an early return style in preparation for adding
more cases.

llvm-svn: 129876
2011-04-20 21:48:16 +00:00
Mon P Wang
f9b26f115c Cleanup r129509 based on comments by Chris
llvm-svn: 129532
2011-04-14 19:20:42 +00:00
Mon P Wang
8b09087f46 Cleanup r129472 by using a utility routine as suggested by Eli.
llvm-svn: 129509
2011-04-14 08:04:01 +00:00
Mon P Wang
4b667cd995 Vectors with different number of elements of the same element type can have
the same allocation size but different primitive sizes (e.g., <3xi32> and
<4xi32>).  When ScalarRepl promotes them, it can't use a bit cast but
should use a shuffle vector instead.

llvm-svn: 129472
2011-04-13 21:40:02 +00:00
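
A hedged sketch of the conversion being described (not from the commit): <3xi32> and <4xi32> typically share a 16-byte allocation, but their primitive sizes differ (96 vs. 128 bits), so the value conversion has to be a shuffle rather than a bitcast:

define <4 x i32> @widen_sketch(<3 x i32> %v) {
  ; widen with a shuffle; a bitcast between <3 x i32> and <4 x i32>
  ; would be rejected because the bit widths differ
  %w = shufflevector <3 x i32> %v, <3 x i32> undef, <4 x i32> <i32 0, i32 1, i32 2, i32 undef>
  ret <4 x i32> %w
}
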
Jay Foad
53632b7c03 Remove PHINode::reserveOperandSpace(). Instead, add a parameter to
PHINode::Create() giving the (known or expected) number of operands.

llvm-svn: 128537
2011-03-30 11:28:46 +00:00
Jay Foad
dc5a008237 (Almost) always call reserveOperandSpace() on newly created PHINodes.
llvm-svn: 128535
2011-03-30 11:19:20 +00:00
Cameron Zwarich
d49e32233c Do some simple copy propagation through integer loads and stores when promoting
vector types. This helps a lot with inlined functions when using the ARM soft
float ABI. Fixes <rdar://problem/9184212>.

llvm-svn: 128453
2011-03-29 05:19:52 +00:00
Cameron Zwarich
09bd1deda3 Fix a typo and add a test.
llvm-svn: 128331
2011-03-26 04:58:50 +00:00
Cameron Zwarich
9f72ea0a80 Fix PR9464 by correcting some math that just happened to be right in most cases
that were hit in practice.

llvm-svn: 128146
2011-03-23 05:25:55 +00:00
Cameron Zwarich
c60b47a7e2 Fix a comment.
llvm-svn: 127728
2011-03-16 08:13:42 +00:00
Cameron Zwarich
7fd94ea393 Only convert allocas to scalars if it is profitable. The profitability metric I
chose is having a non-memcpy/memset use and being larger than any native integer
type. Originally I chose having an access of a size smaller than the total size
of the alloca, but this caused some minor issues on the spirit benchmark where
SRoA runs again after some inlining.

This fixes <rdar://problem/8613163>.

llvm-svn: 127718
2011-03-16 00:13:44 +00:00
Cameron Zwarich
88790f3d4d Better use initializer lists.
llvm-svn: 127716
2011-03-16 00:13:37 +00:00
Cameron Zwarich
f09bb5f2f5 Add a clarifying comment.
llvm-svn: 127715
2011-03-16 00:13:35 +00:00
Cameron Zwarich
90efa03e8b Fix a crasher introduced by r127317 that is seen on the bots when using an
alloca as both integer and floating-point vectors of the same size. Bugpoint is
not cooperating with me, but I'll try to find a manual testcase tomorrow.

llvm-svn: 127320
2011-03-09 07:34:11 +00:00
Cameron Zwarich
e0b4705b03 Add support to scalar replacement for partial vector accesses of an alloca, e.g.
a union of a float, <2 x float>, and <4 x float>. This mostly comes up with the
use of vector intrinsics, especially in NEON when programmers know the layout of
the register file. This enables codegen to eliminate a lot of the subregister
traffic it would otherwise generate.

This commit only enables this for a small number of floating-point cases, but a
lot more integer cases. I assume this is okay for all ports, but I did not do
extensive testing of the quality of code involving i512 vectors and the like. If
there is a use case where this generates worse code than before, let me know and
we can scale it back.

This fixes <rdar://problem/9036264>.

llvm-svn: 127317
2011-03-09 05:43:05 +00:00
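
A simplified example of the access pattern described above (names invented, not from the commit): a <4 x float> alloca that is also read through a narrower <2 x float> view, which this change lets scalar replacement rewrite into element operations on a single <4 x float> value:

define float @union_sketch(<4 x float> %in) {
  %u = alloca <4 x float>
  store <4 x float> %in, <4 x float>* %u
  ; a partial access: read only the low <2 x float> of the alloca
  %lo.ptr = bitcast <4 x float>* %u to <2 x float>*
  %lo = load <2 x float>* %lo.ptr
  %e = extractelement <2 x float> %lo, i32 1
  ret float %e
}
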
Cameron Zwarich
e80c47f295 Move vector type merging to a separate function in preparation for it getting
more complicated.

llvm-svn: 127316
2011-03-09 05:43:01 +00:00
Chris Lattner
db204cbe42 convert ConstantVector::get to use ArrayRef.
llvm-svn: 125537
2011-02-15 00:14:00 +00:00
Chris Lattner
ee7f7c2494 revert my ConstantVector patch, it seems to have made the llvm-gcc
builders unhappy.

llvm-svn: 125504
2011-02-14 18:15:46 +00:00
Chris Lattner
34f32cb4c2 Switch ConstantVector::get to use ArrayRef instead of a pointer+size
idiom.  Change various clients to simplify their code.

llvm-svn: 125487
2011-02-14 07:55:32 +00:00
Dan Gohman
db0dc19c04 Give GetUnderlyingObject a TargetData, to keep it in sync
with BasicAA's DecomposeGEPExpression, which recently began
using a TargetData. This fixes PR8968, though the testcase
is awkward to reduce.

Also, update several of GetUnderlyingObject's users
which happen to have a TargetData handy to pass it in.

llvm-svn: 124134
2011-01-24 18:53:32 +00:00
Chris Lattner
3609e5afb4 enhance SRoA to promote allocas that are used by PHI nodes. This often
occurs because instcombine sinks loads and inserts phis.  This kicks in 
on such apps as 175.vpr, eon, 403.gcc, xalancbmk and a bunch of times in
spec2006 in some app that uses std::deque.

This resolves the last of rdar://7339113.

llvm-svn: 124090
2011-01-24 01:07:11 +00:00
Chris Lattner
50288f8e54 Enhance SRoA to promote allocas that are used by selects in some
common cases.  This triggers a surprising number of times in SPEC2K6
because min/max idioms end up doing this.  For example, code from the
STL ends up looking like this to SRoA:

  %202 = load i64* %__old_size, align 8, !tbaa !3
  %203 = load i64* %__old_size, align 8, !tbaa !3
  %204 = load i64* %__n, align 8, !tbaa !3
  %205 = icmp ult i64 %203, %204
  %storemerge.i = select i1 %205, i64* %__n, i64* %__old_size
  %206 = load i64* %storemerge.i, align 8, !tbaa !3

We can now promote both the __n and the __old_size allocas.

This addresses another chunk of rdar://7339113, poor codegen on
stringswitch.

llvm-svn: 124088
2011-01-23 22:04:55 +00:00
Chris Lattner
2ef605b499 Enhance SRoA to be more aggressive about scalarization of aggregate allocas
that have PHI or select uses of their element pointers.  This can often happen
when instcombine sinks two loads into a successor, inserting a phi or select.

With this patch, we can scalarize the alloca, but the pinned elements are not
yet promoted.  This is still a win for large aggregates where only one element
is used.  This fixes rdar://8904039 and part of rdar://7339113 (poor codegen
on stringswitch).

llvm-svn: 124070
2011-01-23 08:27:54 +00:00
Chris Lattner
2e57722d9d have AllocaInfo store the alloca being inspected, simplifying callers.
No functionality change.

llvm-svn: 124067
2011-01-23 07:29:29 +00:00
Chris Lattner
2064a3e213 Rearrange some code a bit. Change MarkUnsafe to
handle the "Transformation preventing inst" printing, 
so that -scalarrepl -debug will always print the rejected
instruction.  No functionality change.

llvm-svn: 124066
2011-01-23 07:05:44 +00:00
Chris Lattner
ba66871643 remove an old hack that avoided creating MMX datatypes. The
X86 backend has been fixed.

llvm-svn: 124064
2011-01-23 06:40:33 +00:00
Cameron Zwarich
e39e476305 Remove outdated references to dominance frontiers.
llvm-svn: 123724
2011-01-18 03:53:26 +00:00
Cameron Zwarich
0a1975802e Roll r123609 back in with two changes that fix test failures with expensive
checks enabled:

1) Use '<' to compare integers in a comparison function rather than '<='.

2) Use the uniqued set DefBlocks rather than Info.DefiningBlocks to initialize
the priority queue.

The speedup of scalarrepl on test-suite + SPEC2000 + SPEC2006 is a bit less, at
just under 16% rather than 17%.

llvm-svn: 123662
2011-01-17 17:38:41 +00:00
Cameron Zwarich
77e381e302 Roll out r123609 due to failures on the llvm-x86_64-linux-checks bot.
llvm-svn: 123618
2011-01-17 07:26:51 +00:00
Cameron Zwarich
e35642342b Eliminate the use of dominance frontiers in PromoteMemToReg. In addition to
eliminating a potentially quadratic data structure, this also gives a 17%
speedup when running -scalarrepl on test-suite + SPEC2000 + SPEC2006. My initial
experiment gave a greater speedup around 25%, but I moved the dominator tree
level computation from dominator tree construction to PromoteMemToReg.

Since this approach to computing IDFs has a much lower overhead than the old
code using precomputed DFs, it is worth looking at using this new code for the
second scalarrepl pass as well.

llvm-svn: 123609
2011-01-17 01:08:59 +00:00
Chris Lattner
2f902bc3b1 tidy up a comment, as suggested by duncan
llvm-svn: 123590
2011-01-16 17:46:19 +00:00
Chris Lattner
29f339f87c if an alloca is only ever accessed as a unit, and is accessed with load/store instructions,
then don't try to decimate it into its individual pieces.  This will just make a mess of the
IR and is pointless if none of the elements are individually accessed.  This was generating
really terrible code for std::bitset (PR8980) because it happens to be lowered by clang
as an {[8 x i8]} structure instead of {i64}.

The testcase now is optimized to:

define i64 @test2(i64 %X) {
  br label %L2

L2:                                               ; preds = %0
  ret i64 %X
}

before we generated:

define i64 @test2(i64 %X) {
  %sroa.store.elt = lshr i64 %X, 56
  %1 = trunc i64 %sroa.store.elt to i8
  %sroa.store.elt8 = lshr i64 %X, 48
  %2 = trunc i64 %sroa.store.elt8 to i8
  %sroa.store.elt9 = lshr i64 %X, 40
  %3 = trunc i64 %sroa.store.elt9 to i8
  %sroa.store.elt10 = lshr i64 %X, 32
  %4 = trunc i64 %sroa.store.elt10 to i8
  %sroa.store.elt11 = lshr i64 %X, 24
  %5 = trunc i64 %sroa.store.elt11 to i8
  %sroa.store.elt12 = lshr i64 %X, 16
  %6 = trunc i64 %sroa.store.elt12 to i8
  %sroa.store.elt13 = lshr i64 %X, 8
  %7 = trunc i64 %sroa.store.elt13 to i8
  %8 = trunc i64 %X to i8
  br label %L2

L2:                                               ; preds = %0
  %9 = zext i8 %1 to i64
  %10 = shl i64 %9, 56
  %11 = zext i8 %2 to i64
  %12 = shl i64 %11, 48
  %13 = or i64 %12, %10
  %14 = zext i8 %3 to i64
  %15 = shl i64 %14, 40
  %16 = or i64 %15, %13
  %17 = zext i8 %4 to i64
  %18 = shl i64 %17, 32
  %19 = or i64 %18, %16
  %20 = zext i8 %5 to i64
  %21 = shl i64 %20, 24
  %22 = or i64 %21, %19
  %23 = zext i8 %6 to i64
  %24 = shl i64 %23, 16
  %25 = or i64 %24, %22
  %26 = zext i8 %7 to i64
  %27 = shl i64 %26, 8
  %28 = or i64 %27, %25
  %29 = zext i8 %8 to i64
  %30 = or i64 %29, %28
  ret i64 %30
}

In this case, instcombine was able to eliminate the nonsense, but in PR8980 enough
PHIs are in play that instcombine backs off.  It's better to not generate this stuff
in the first place.

llvm-svn: 123571
2011-01-16 06:18:28 +00:00
Chris Lattner
b43fce09a9 Use an irbuilder to get some trivial constant folding when doing a store
of a constant.

llvm-svn: 123570
2011-01-16 05:58:24 +00:00
Chris Lattner
2067fb2a93 enhance FoldOpIntoPhi in instcombine to try harder when a phi has
multiple uses.  In some cases, all the uses are the same operation,
so instcombine can go ahead and promote the phi.  In the testcase
this pushes an add out of the loop.

llvm-svn: 123568
2011-01-16 05:28:59 +00:00
Chris Lattner
e6d5b3c4ce Generalize LoadAndStorePromoter a bit and switch LICM
to use it.

llvm-svn: 123501
2011-01-15 00:12:35 +00:00
Chris Lattner
1ce35a0362 switch SRoA to use LoadAndStorePromoter instead of its own copy of the code.
llvm-svn: 123457
2011-01-14 19:50:47 +00:00
Chris Lattner
8e171470d3 split SROA into two passes: one that uses DomFrontiers (-scalarrepl)
and one that uses SSAUpdater (-scalarrepl-ssa)

llvm-svn: 123436
2011-01-14 08:13:00 +00:00
Chris Lattner
b5c39352d8 Implement full support for promoting allocas to registers using SSAUpdater
instead of DomTree/DomFrontier.  This may be interesting for reducing compile 
time.  This is currently disabled, but seems to work just fine.

When this is enabled, we eliminate two runs of dominator frontier, one in the
"early per-function" optimizations and one in the "interlaced with inliner"
function passes.

llvm-svn: 123434
2011-01-14 07:50:47 +00:00
Bob Wilson
1238f872da Fix whitespace.
llvm-svn: 123396
2011-01-13 20:59:44 +00:00
Bob Wilson
fbab825516 Check for empty structs, and for consistency, zero-element arrays.
llvm-svn: 123383
2011-01-13 18:26:59 +00:00
Bob Wilson
3b0197489e Extend SROA to handle arrays accessed as homogeneous structs and vice versa.
This is a minor extension of SROA to handle a special case that is
important for some ARM NEON operations.  Some of the NEON intrinsics
return multiple values, which are handled as struct types containing
multiple elements of the same vector type.  The corresponding return
types declared in the arm_neon.h header have equivalent arrays.  We
need SROA to recognize that it can split up those arrays and structs
into separate vectors, even though they are not always accessed with
the same type.  SROA already handles loads and stores of an entire
alloca by using insertvalue/extractvalue to access the individual
pieces, and that code works the same regardless of whether the type
is a struct or an array.  So, all that needs to be done is to check
for compatible arrays and homogeneous structs.

llvm-svn: 123381
2011-01-13 17:45:11 +00:00
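
A rough sketch of the NEON-style pattern the commit describes (illustrative only; names invented): a struct of identical vectors written field by field and then read back through an equivalent array view, which SROA can now still split into the individual vectors:

define <4 x i16> @neon_sketch(<4 x i16> %a, <4 x i16> %b) {
  %agg = alloca { <4 x i16>, <4 x i16> }
  ; written as a homogeneous struct...
  %f0 = getelementptr { <4 x i16>, <4 x i16> }* %agg, i32 0, i32 0
  store <4 x i16> %a, <4 x i16>* %f0
  %f1 = getelementptr { <4 x i16>, <4 x i16> }* %agg, i32 0, i32 1
  store <4 x i16> %b, <4 x i16>* %f1
  ; ...but read back through an equivalent array view
  %arr = bitcast { <4 x i16>, <4 x i16> }* %agg to [2 x <4 x i16>]*
  %p1 = getelementptr [2 x <4 x i16>]* %arr, i32 0, i32 1
  %v = load <4 x i16>* %p1
  ret <4 x i16> %v
}
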
Bob Wilson
9f8d730f9b Make SROA more aggressive with allocas containing padding.
SROA only splits up structs and arrays one level at a time, so padding can
only cause trouble if it is located in between the struct or array elements.

llvm-svn: 123380
2011-01-13 17:45:08 +00:00