Commit Graph

45 Commits

Richard Osborne
9be8a6db61 Add missing check for presence of target data.
This avoids a crash in visitAllocaInst when target data isn't available.

llvm-svn: 164539
2012-09-24 17:10:03 +00:00
Richard Osborne
552f578018 Fix instcombine to obey requested alignment when merging allocas.
llvm-svn: 164117
2012-09-18 09:31:44 +00:00
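A minimal sketch of the situation being fixed, with invented names and alignments, assuming instcombine decides to merge the second alloca into the first:

  ; before merging
  %a = alloca [16 x i8], align 4
  %b = alloca [16 x i8], align 16
  ; after merging, the surviving alloca must carry the stricter
  ; requested alignment
  %a = alloca [16 x i8], align 16
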
Chandler Carruth
b82f8d4af5 Port the global copy optimization from the SROA pass to InstCombine.
This optimization is really just replacing allocas wholesale with
globals; there is no scalarization.

The underlying motivation for this patch is to simplify the SROA pass
and focus it on splitting and promoting allocas.

llvm-svn: 162271
2012-08-21 08:39:44 +00:00
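A rough before/after illustration of the rewrite (invented names; the five-argument memcpy intrinsic form of that era is assumed):

  @init = internal constant [4 x i32] [i32 1, i32 2, i32 3, i32 4], align 4

  ; before: a stack copy of a constant global that is only ever read
  %buf = alloca [4 x i32], align 4
  %dst = bitcast [4 x i32]* %buf to i8*
  call void @llvm.memcpy.p0i8.p0i8.i64(i8* %dst,
        i8* bitcast ([4 x i32]* @init to i8*), i64 16, i32 4, i1 false)
  ; ... only loads of %buf follow ...

  ; after: the alloca and the memcpy are gone and the loads read @init directly
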
Nuno Lopes
c676931bb9 instcombine: merge the functions that remove dead allocas and dead mallocs/callocs/...
This patch removes ~70 lines in InstCombineLoadStoreAlloca.cpp and makes both functions a bit more aggressive than before :)
In theory, we can be more aggressive when removing an alloca than a malloc, because an alloca pointer should never escape, but we are not taking advantage of this anyway

llvm-svn: 159952
2012-07-09 18:38:20 +00:00
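A hypothetical sketch of the two shapes the merged function now handles; in both, the memory is never read back, so the whole chain is dead:

  %a = alloca i32, align 4
  store i32 7, i32* %a, align 4      ; never loaded back: removable

  %m = call i8* @malloc(i64 16)
  store i8 0, i8* %m
  call void @free(i8* %m)            ; never loaded back: removable
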
Duncan Sands
1770ae1ae4 Replacing zero-sized alloca's with a null pointer is too aggressive, instead
merge all zero-sized alloca's into one, fixing c43204g from the Ada ACATS
conformance testsuite.  What happened there was that a variable sized object
was being allocated on the stack, "alloca i8, i32 %size".  It was then being
passed to another function, which tested that the address was not null (raising
an exception if it was) then manipulated %size bytes in it (load and/or store).
The optimizers cleverly managed to deduce that %size was zero (congratulations
to them, as it isn't at all obvious), which made the alloca zero size, causing
the optimizers to replace it with null, which then caused the check mentioned
above to fail, and the exception to be raised, wrongly.  Note that no loads
and stores were actually being done to the alloca (the loop that does them is
executed %size times, i.e. is not executed), only the not-null address check.

llvm-svn: 159202
2012-06-26 13:39:21 +00:00
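A small sketch of the failure mode described above, with invented names, assuming %size has been proven to be zero:

  %obj = alloca i8, i32 %size        ; %size turns out to be 0
  %isnull = icmp eq i8* %obj, null   ; the callee's not-null check
  ; folding %obj to null makes the check fire and raises the exception;
  ; merging all zero-sized allocas into a single one keeps %obj a valid,
  ; non-null stack address
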
Chandler Carruth
b3fb4be360 Teach InstCombine to nuke a common alloca pattern -- an alloca which has
GEPs, bit casts, and stores reaching it but no other instructions. These
often show up during the iterative processing of the inliner, SROA, and
DCE. Once we hit this point, we can completely remove the alloca. These
were actually showing up in the final, fully optimized code in a bunch
of inliner tests I've been working on, and notably they show up after
LLVM finishes optimizing away all function calls involved in
hash_combine(a, b).

llvm-svn: 154285
2012-04-08 14:36:56 +00:00
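A minimal illustration of the pattern, with invented names and types; note that nothing ever loads from the alloca:

  %tmp = alloca { i32, i32 }, align 4
  %f0 = getelementptr inbounds { i32, i32 }* %tmp, i32 0, i32 0
  store i32 1, i32* %f0
  %raw = bitcast { i32, i32 }* %tmp to i8*
  store i8 0, i8* %raw
  ; no loads of %tmp exist, so the alloca, the GEP, the bitcast and
  ; both stores can all be deleted
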
Bill Wendling
9343ed10c6 Revert r152907.
llvm-svn: 152935
2012-03-16 18:20:54 +00:00
Bill Wendling
3c44ed8385 The alignment of the pointer part of the store instruction may have an
alignment. If that's the case, then we want to make sure that we don't increase
the alignment of the store instruction. Because if we increase it to be "more
aligned" than the pointer, code-gen may use instructions which require a greater
alignment than the pointer guarantees.
<rdar://problem/11043589>

llvm-svn: 152907
2012-03-16 07:40:08 +00:00
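A sketch of why raising the store's alignment would be wrong (hypothetical values):

  ; %p is only guaranteed to be 4-byte aligned
  store <4 x float> %v, <4 x float>* %p, align 4
  ; rewriting this as align 16 would let code-gen select an aligned
  ; vector store instruction that faults on a merely 4-byte-aligned
  ; address
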
Bill Wendling
3d7b8eaa78 Use the getFirstInsertionPt() method instead of getFirstNonPHI + an 'isa<>'
check for a LandingPadInst.

llvm-svn: 137745
2011-08-16 20:45:24 +00:00
Bill Wendling
3e159bd43d A few places where we want to skip the landingpad instruction for insertion.
llvm-svn: 137712
2011-08-16 04:52:55 +00:00
Eli Friedman
36ef5fd140 Update instcombine for atomic load/store.
llvm-svn: 137664
2011-08-15 22:09:40 +00:00
Jay Foad
6513dac6e2 Convert GetElementPtrInst to use ArrayRef.
llvm-svn: 135904
2011-07-25 09:48:08 +00:00
Jay Foad
42463ed852 Convert IRBuilder::CreateGEP and IRBuilder::CreateInBoundsGEP to use
ArrayRef.

llvm-svn: 135761
2011-07-22 08:16:57 +00:00
Jay Foad
e12d8629a8 Fix an MSVC warning, caused by a case I missed when converting
ConstantExpr::getGetElementPtr to use ArrayRef.

llvm-svn: 135758
2011-07-22 07:54:01 +00:00
Chris Lattner
e1fe7061ce land David Blaikie's patch to de-constify Type, with a few tweaks.
llvm-svn: 135375
2011-07-18 04:54:35 +00:00
Eli Friedman
6937c422a0 Final step of instcombine debuginfo; switch a couple more places over to InsertNewInstWith, and use setDebugLoc for the cases which can't be easily handled by the automated mechanisms.
llvm-svn: 132167
2011-05-27 00:19:40 +00:00
Eli Friedman
40a0353b96 More instcombine cleanup, towards improving debug line info.
llvm-svn: 131604
2011-05-18 23:58:37 +00:00
Jay Foad
53632b7c03 Remove PHINode::reserveOperandSpace(). Instead, add a parameter to
PHINode::Create() giving the (known or expected) number of operands.

llvm-svn: 128537
2011-03-30 11:28:46 +00:00
Jin-Gu Kang
9d52ff5473 This case is solved by Scalar Replacement of Aggregates (DT) and
Early CSE pass so this patch reverts it to original source code.

llvm-svn: 127574
2011-03-14 01:21:00 +00:00
Jin-Gu Kang
5000ba8961 Add comment as following:
load and store reference same memory location, the memory location
is represented by getelementptr with two uses (load and store) and
the getelementptr's base is alloca with single use. At this point,
instructions from alloca to store can be removed.
(this pattern is generated when bitfield is accessed.)
For example,
%u = alloca %struct.test, align 4               ; [#uses=1]
%0 = getelementptr inbounds %struct.test* %u, i32 0, i32 0;[#uses=2]
%1 = load i8* %0, align 4                       ; [#uses=1]
%2 = and i8 %1, -16                             ; [#uses=1]
%3 = or i8 %2, 5                                ; [#uses=1]
store i8 %3, i8* %0, align 4

llvm-svn: 127565
2011-03-13 14:05:51 +00:00
Jin-Gu Kang
5e537a9449 This patch removes some of useless instructions generated by bitfield access.
llvm-svn: 127539
2011-03-12 12:18:44 +00:00
Devang Patel
2f204229ef llvm.dbg.declare intrinsic does not use any llvm::Values. It's magic!
llvm-svn: 127282
2011-03-08 22:12:11 +00:00
Duncan Sands
061150ac1b Spelling fix: consequtive -> consecutive.
llvm-svn: 125563
2011-02-15 09:23:02 +00:00
Chris Lattner
c4cb20b9bf Move getOrEnforceKnownAlignment out of instcombine into Transforms/Utils.
llvm-svn: 122554
2010-12-25 20:37:57 +00:00
Dan Gohman
96e34e87ca Fix a case where instcombine was stripping metadata (and alignment)
from stores when folding in bitcasts.

llvm-svn: 117265
2010-10-25 16:16:27 +00:00
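A rough sketch of the fold in question, assuming the usual store-through-bitcast rewrite; the fix is that the alignment (and any metadata) must be carried over to the new store:

  ; before
  %q = bitcast float* %p to i32*
  store i32 %v, i32* %q, align 8, !nontemporal !0

  ; after: alignment and metadata preserved
  %vf = bitcast i32 %v to float
  store float %vf, float* %p, align 8, !nontemporal !0
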
Owen Anderson
bd9edea8a3 Remove r111665, which implemented store-narrowing in InstCombine. Chris discovered a miscompilation in it, and it's not easily
fixable at the optimizer level. I'll investigate reimplementing it in DAGCombine.

llvm-svn: 112575
2010-08-31 04:41:06 +00:00
Owen Anderson
678fd04aa5 Re-apply r111568 with a fix for the clang self-host.
llvm-svn: 111665
2010-08-20 18:24:43 +00:00
Owen Anderson
0e57acb623 Revert r111568 to unbreak clang self-host.
llvm-svn: 111571
2010-08-19 23:25:16 +00:00
Owen Anderson
7f2852ba2d When a set of bitmask operations, typically from a bitfield initialization, only modifies the low bytes of a value,
we can narrow the store to only over-write the affected bytes.

llvm-svn: 111568
2010-08-19 22:15:40 +00:00
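A hypothetical little-endian example of the narrowing (this is the change that r112575 later reverted):

  ; before: read-modify-write of a whole i32 that only changes the low byte
  %old = load i32* %p, align 4
  %clr = and i32 %old, -256
  %set = or i32 %clr, 5
  store i32 %set, i32* %p, align 4

  ; after: store only the affected byte (little-endian layout assumed)
  %lo = bitcast i32* %p to i8*
  store i8 5, i8* %lo, align 4
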
Dan Gohman
a80f89dbc7 Make instcombine set explicit alignments on load or store
instructions with alignment 0, so that subsequent passes don't
need to bother checking the TargetData ABI size manually.

llvm-svn: 110128
2010-08-03 18:20:32 +00:00
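A minimal sketch, assuming i32 has a 4-byte ABI alignment on the target:

  ; before: no explicit alignment on the load
  %x = load i32* %p

  ; after: instcombine fills in the ABI alignment from TargetData
  %x = load i32* %p, align 4
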
Gabor Greif
96a9f8c7c6 mass elimination of reliance on automatic iterator dereferencing
llvm-svn: 109103
2010-07-22 13:36:47 +00:00
Gabor Greif
9f6b0059e4 cache result of operator*
llvm-svn: 108150
2010-07-12 15:48:26 +00:00
Gabor Greif
9e2aa5d5d7 do not repeatedly dereference use_iterator
llvm-svn: 107962
2010-07-09 12:23:50 +00:00
Dan Gohman
0d7d3faf8e Move FindAvailableLoadedValue isSafeToLoadUnconditionally out of
lib/Transforms/Utils and into lib/Analysis so that Analysis passes
can use them.

llvm-svn: 104949
2010-05-28 16:19:17 +00:00
Dan Gohman
22d22caaed Teach instcombine to promote alloca array sizes.
llvm-svn: 104945
2010-05-28 15:09:00 +00:00
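A rough sketch of the promotion, with invented names, assuming the array size is a known constant:

  ; before: an alloca with an explicit array size
  %buf = alloca i32, i32 4, align 4

  ; after: an array-typed alloca plus a GEP to the first element,
  ; which later passes such as SROA handle much better
  %buf1 = alloca [4 x i32], align 4
  %buf = getelementptr inbounds [4 x i32]* %buf1, i32 0, i32 0
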
Duncan Sands
1b33dd3c83 There are two ways of checking for a given type, for example isa<PointerType>(T)
and T->isPointerTy().  Convert most instances of the first form to the second form.
Requested by Chris.

llvm-svn: 96344
2010-02-16 11:11:14 +00:00
Duncan Sands
2acaf3609c Uniformize the names of type predicates: rather than having isFloatTy and
isInteger, we now have isFloatTy and isIntegerTy.  Requested by Chris!

llvm-svn: 96223
2010-02-15 16:12:20 +00:00
Bob Wilson
0f04082970 Check alignment of loads when deciding whether it is safe to execute them
unconditionally.  Besides checking the offset, also check that the underlying
object is aligned as much as the load itself.

llvm-svn: 94875
2010-01-30 04:42:39 +00:00
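A sketch of the extra condition, with invented names:

  @g = global [4 x <4 x i32>] zeroinitializer, align 16
  %p = getelementptr inbounds [4 x <4 x i32>]* @g, i32 0, i32 1
  %v = load <4 x i32>* %p, align 16
  ; executing this load unconditionally is only considered safe because
  ; the offset is in bounds and the underlying object @g is itself
  ; 16-byte aligned; checking the offset alone could let a misaligned
  ; access be speculated
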
Bob Wilson
e979c978e0 Use more specific types to avoid casts. No functionality change.
llvm-svn: 94863
2010-01-30 00:41:10 +00:00
Bob Wilson
71bc5f0787 Preserve load alignment in instcombine transformations. I've been unable to
create a testcase where this matters.  The select+load transformation only
occurs when isSafeToLoadUnconditionally is true, and in those situations,
instcombine also changes the underlying objects to be aligned.  This seems
like a good idea regardless, and I've verified that it doesn't pessimize
the subsequent realignment.

llvm-svn: 94850
2010-01-29 22:39:21 +00:00
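A small before/after sketch of the select+load transformation, with the original alignment carried onto the new loads (invented names):

  ; before
  %addr = select i1 %c, i32* %p, i32* %q
  %v = load i32* %addr, align 2

  ; after (only when both loads are safe to execute unconditionally)
  %vp = load i32* %p, align 2
  %vq = load i32* %q, align 2
  %v = select i1 %c, i32 %vp, i32 %vq
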
Bob Wilson
f897b7b37e Improve isSafeToLoadUnconditionally to recognize that GEPs with constant
indices are safe if the result is known to be within the bounds of the
underlying object.

llvm-svn: 94829
2010-01-29 19:19:08 +00:00
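A minimal example of the case now recognized (invented names):

  %arr = alloca [8 x i32], align 4
  %elt = getelementptr inbounds [8 x i32]* %arr, i32 0, i32 6
  %v = load i32* %elt, align 4
  ; the constant index 6 is known to stay within the 8 x i32 object,
  ; so the load is safe to execute unconditionally
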
Victor Hernandez
e4d8ef8fc4 Keep ignoring pointer-to-pointer bitcasts
llvm-svn: 94194
2010-01-22 19:05:05 +00:00
Victor Hernandez
1cc721ee56 No need to look through bitcasts for DbgInfoIntrinsic
llvm-svn: 94112
2010-01-21 23:07:15 +00:00
Eric Christopher
5b602f9bf7 Fix comment.
llvm-svn: 93831
2010-01-19 01:20:15 +00:00
Chris Lattner
b55724e825 split out load/store/alloca.
llvm-svn: 92685
2010-01-05 05:57:49 +00:00