cannot be folded into target cmp instruction.
- Avoid a phase ordering issue where early cmp optimization would prevent the
  later count-to-zero optimization (sketched below).
- Add missing checks; without them, LSR could reuse a stride that has no
  users.
- Fix a bug in count-to-zero optimization code which failed to find the pre-inc
iv's phi node.
- Remove, tighten, or loosen some incorrect checks that disabled valid
  transformations.
- Quite a bit of code cleanup.
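In source terms, the count-to-zero transformation looks roughly like this (a
hedged sketch; the pass itself rewrites the IR-level induction variable):

  void body();

  void before(unsigned n) {
    for (unsigned i = 0; i != n; ++i) // needs a separate cmp against n
      body();
  }

  void after(unsigned n) {
    // When i has no other uses, count down instead: the decrement sets the
    // flags the loop branch tests, so the target cmp instruction folds away.
    for (unsigned i = n; i != 0; --i)
      body();
  }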
llvm-svn: 86969
This allows StringRef to skip the controversial if(str) check in its constructor.
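A minimal sketch of the change's shape, assuming the check guarded against a
null pointer (illustrative, not the verbatim header):

  #include <cstring>

  class StringRef {
    const char *Data;
    size_t Length;
  public:
    // Callers now guarantee str is non-null, so no branch is needed.
    /*implicit*/ StringRef(const char *str)
      : Data(str), Length(std::strlen(str)) {} // was: str ? strlen(str) : 0
  };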
Buildbots, wait for corresponding clang and llvm-gcc FE check-ins!
llvm-svn: 86914
start using them in a trivial way when -enable-jump-threading-lvi
is passed. enable-jump-threading-lvi will be my playground for
a while.
llvm-svn: 86789
except that the result may not be a constant. Switch jump threading to
use it so that it gets things like (X & 0) -> 0, which occur when phi preds
are deleted and the remaining phi pred was a zero.
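Client usage looks roughly like this (hedged; the exact signature of the
simplify call has varied across LLVM revisions, and TD is the era's optional
TargetData):

  // If the instruction simplifies to an existing value -- constant or not --
  // use it. E.g. 'and X, %phi', where the phi's only remaining incoming
  // value is 0, simplifies to 0 even though X is unknown.
  if (Value *V = SimplifyInstruction(I, TD)) {
    I->replaceAllUsesWith(V);
    I->eraseFromParent();
  }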
llvm-svn: 86637
This patch forbids implicit conversion of DenseMap::const_iterator to
DenseMap::iterator which was possible because DenseMapIterator inherited
(publicly) from DenseMapConstIterator. Conversion the other way around is now
allowed, as one would expect.
The template DenseMapConstIterator is removed and the template parameter
IsConst which specifies whether the iterator is constant is added to
DenseMapIterator.
Strictly speaking, the IsConst parameter is not necessary, since the constness
can be determined from KeyT, but this is not relevant to the fix and can be
addressed later.
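The shape of the fix, as a hedged sketch (std::conditional stands in here for
the equivalent in-tree trait):

  #include <type_traits>
  #include <utility>

  template <typename KeyT, typename ValueT, bool IsConst = false>
  class DenseMapIterator {
    typedef std::pair<KeyT, ValueT> Bucket;
    // Const iterators point at const buckets; mutable iterators do not.
    typedef typename std::conditional<IsConst, const Bucket, Bucket>::type
        value_type;
    value_type *Ptr;
  public:
    explicit DenseMapIterator(value_type *P) : Ptr(P) {}
    // Conversion is defined only from mutable to const, so the reverse
    // (const_iterator -> iterator) no longer compiles.
    operator DenseMapIterator<KeyT, ValueT, true>() const {
      return DenseMapIterator<KeyT, ValueT, true>(Ptr);
    }
    value_type &operator*() const { return *Ptr; }
  };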
Patch by Victor Zverovich!
llvm-svn: 86636
takes decimated instructions and applies identities to them. This
is pretty minimal at this point, but I plan to pull some instcombine
logic out into these and similar routines.
llvm-svn: 86613
Here is the original commit message:
This commit updates malloc optimizations to operate on malloc calls that have constant int size arguments.
Update CreateMalloc so that its callers specify the size to allocate:
MallocInst-autoupgrade users use non-TargetData-computed allocation sizes.
Optimizations use TargetData to compute the allocation size.
Now that malloc calls can have constant sizes, update isArrayMallocHelper() to use TargetData to determine the size of the malloced type and the size of malloced arrays.
Extend getMallocType() to support malloc calls that have non-bitcast uses.
Update OptimizeGlobalAddressOfMalloc() to optimize malloc calls that have non-bitcast uses. The bitcast use of a malloc call has to be treated specially here because the uses of the bitcast need to be replaced and the bitcast needs to be erased (just like the malloc call) for OptimizeGlobalAddressOfMalloc() to work correctly.
Update PerformHeapAllocSRoA() to optimize malloc calls that have non-bitcast uses. The bitcast use of the malloc is not handled specially here because ReplaceUsesOfMallocWithGlobal replaces through the bitcast use.
Update OptimizeOnceStoredGlobal() to not care about the malloc calls' bitcast use.
Update all globalopt malloc tests to not rely on autoupgraded-MallocInsts, but instead use explicit malloc calls with correct allocation sizes.
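The caller-side pattern now looks roughly like this (hedged; names follow the
era's API, and TD, Context, AllocTy, ArraySize, and InsertPt are assumed from
the surrounding code):

  // The caller computes the per-element allocation size itself -- here with
  // TargetData -- and passes it in; CreateMalloc multiplies in ArraySize.
  const Type *IntPtrTy = TD->getIntPtrType(Context);
  uint64_t ElemSize = TD->getTypeAllocSize(AllocTy);
  Value *AllocSize = ConstantInt::get(IntPtrTy, ElemSize);
  Instruction *Malloc =
      CallInst::CreateMalloc(InsertPt, IntPtrTy, AllocTy, AllocSize, ArraySize);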
llvm-svn: 86311
MallocInst-autoupgrade users use non-TargetData-computed allocation sizes.
Optimizations use TargetData to compute the allocation size.
Now that malloc calls can have constant sizes, update isArrayMallocHelper() to use TargetData to determine the size of the malloced type and the size of malloced arrays.
Extend getMallocType() to support malloc calls that have non-bitcast uses.
Update OptimizeGlobalAddressOfMalloc() to optimize malloc calls that have non-bitcast uses. The bitcast use of a malloc call has to be treated specially here because the uses of the bitcast need to be replaced and the bitcast needs to be erased (just like the malloc call) for OptimizeGlobalAddressOfMalloc() to work correctly.
Update PerformHeapAllocSRoA() to optimize malloc calls that have non-bitcast uses. The bitcast use of the malloc is not handled specially here because ReplaceUsesOfMallocWithGlobal replaces through the bitcast use.
Update OptimizeOnceStoredGlobal() to not care about the malloc calls' bitcast use.
Update all globalopt malloc tests to not rely on autoupgraded-MallocInsts, but instead use explicit malloc calls with correct allocation sizes.
llvm-svn: 86077
ArraySize * ElementSize
ElementSize * ArraySize
ArraySize << log2(ElementSize)
ElementSize << log2(ArraySize)
Refactor isArrayMallocHelper and delete isSafeToGetMallocArraySize, so that there is only 1 copy of the malloc array determining logic.
Update users of getMallocArraySize() to not bother calling isArrayMalloc() as well.
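A hedged sketch of matching those forms with LLVM's PatternMatch helpers (the
helper name and exact checks are illustrative, not the in-tree code):

  #include "llvm/IR/PatternMatch.h"
  #include "llvm/Support/MathExtras.h"
  using namespace llvm;
  using namespace llvm::PatternMatch;

  // Try to peel the element size off the malloc's size argument and return
  // the array size, or null if the shape isn't recognized.
  static Value *getArraySizeFor(Value *SizeArg, uint64_t ElemSize) {
    Value *X;
    ConstantInt *C;
    // ArraySize * ElementSize  /  ElementSize * ArraySize
    if ((match(SizeArg, m_Mul(m_Value(X), m_ConstantInt(C))) ||
         match(SizeArg, m_Mul(m_ConstantInt(C), m_Value(X)))) &&
        C->getZExtValue() == ElemSize)
      return X;
    // ArraySize << log2(ElementSize); the shifted-ElementSize form is
    // handled analogously.
    if (match(SizeArg, m_Shl(m_Value(X), m_ConstantInt(C))) &&
        isPowerOf2_64(ElemSize) && C->getZExtValue() == Log2_64(ElemSize))
      return X;
    return 0;
  }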
llvm-svn: 85421
used elsewhere - an exit block is a block outside the loop branched to
from within the loop. An exiting block is a block inside the loop that
branches out.
llvm-svn: 85019
Update all analysis passes and transforms to treat free calls just like FreeInst.
Remove RaiseAllocations and all its tests since FreeInst no longer needs to be raised.
llvm-svn: 84987
Analysis/ConstantFolding.cpp. This doesn't change the behavior of
instcombine but makes other clients of ConstantFoldInstruction
able to handle loads. This was partially extracted from Eli's patch
in PR3152.
llvm-svn: 84836
identifying the malloc as a non-array malloc. This broke GlobalOpt's optimization of stores of mallocs
to global variables.
The fix is to classify malloc's into 3 categories:
1. non-array mallocs
2. array mallocs whose array size can be determined
3. mallocs that cannot be determined to be of type 1 or 2 and cannot be optimized
getMallocArraySize() returns NULL for category 3, and all users of this function must avoid their
malloc optimization if this function returns NULL.
Eventually, currently-unrecognized ways of computing the malloc's size argument will be supported in
isArrayMalloc() and getMallocArraySize(), extending malloc optimizations to those cases.
llvm-svn: 84199
question, can we get rid of the BasicBlock versions of all inserters
and use Head == 0 to indicate the old case when GetInsertBlock == 0?
llvm-svn: 83216
information. This allows arbitrary code involving DW_OP_plus_uconst
and DW_OP_deref. The scheme allows for easy extension to include any,
or all, of the DW_OP_ opcodes. I thought about just exposing all
of them, but wasn't sure if people wanted the dwarf opcodes exposed
in the api. Is that a layering violation?
With this scheme, the entire existing block scheme used by llvm-gcc
can be switched over to the new scheme. I think that would be
cleaner, as then the compiler specific bits are not present in llvm
proper. Before the old code can be yanked however, similar code in
clang would have to be removed.
Next up, more testing.
llvm-svn: 83120
the PassManager code into a regular verifyAnalysis method.
Also, reorganize loop verification. Make the LoopPass infrastructure
call verifyLoop as needed instead of having LoopInfo::verifyAnalysis
check every loop in the function after each loop pass. Add a new
command-line argument, -verify-loop-info, to enable the expensive
full checking.
llvm-svn: 82952
to. This can be combined with LCSSA or SSI form to store more information on a
PHINode than can be computed by looking at its incoming values.
llvm-svn: 82317
In getMallocArraySize(), fix a bug in the case where the array size is the product of 2 constants.
Extend isArrayMalloc() and getMallocArraySize() to handle the case where the malloc is used as a char array.
Ensure that ArraySize in LowerAllocations::runOnBasicBlock() has the correct type.
Extend Instruction::isSafeToSpeculativelyExecute() to handle malloc calls.
Add verification for malloc calls.
Reviewed by Dan Gohman.
llvm-svn: 82257
where the induction variable has a non-unit stride, such as {0,+,2}, and
there are expressions such as {1,+,2} inside the loop formed with
'or' or 'add nsw' operators.
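In source terms the pattern looks like this (illustrative example):

  void zero_pairs(unsigned char *a, unsigned n) {
    // i is {0,+,2}: always even, so its low bit is clear and 'i | 1' is
    // exactly 'i + 1', i.e. the recurrence {1,+,2} formed with 'or'.
    for (unsigned i = 0; i != n; i += 2) {
      a[i] = 0;      // {0,+,2}
      a[i | 1] = 0;  // {1,+,2}
    }
  }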
llvm-svn: 82151
argpromote to avoid invalidating an iterator. This fixes PR4977.
All clang tests now pass with expensive checking (on my system
at least).
llvm-svn: 81843
that get created during loop unswitching, and fix SplitBlockPredecessors'
LCSSA updating code to create new PHIs instead of trying to just move
existing ones.
Also, optimize Loop::verifyLoop, since it gets called a lot. Use
searches on a sorted list of blocks instead of calling the "contains"
function, as is done in other places in the Loop class, since "contains"
does a linear search. Also, don't call verifyLoop from LoopSimplify or
LCSSA, as the PassManager is already calling verifyLoop as part of
LoopInfo's verifyAnalysis.
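The verifyLoop speedup amounts to the following (hedged sketch):

  #include <algorithm>
  #include <vector>

  // contains() is a linear scan per query; instead, sort the loop's block
  // list once and answer each membership query with a binary search.
  std::vector<BasicBlock *> LoopBBs(L->block_begin(), L->block_end());
  std::sort(LoopBBs.begin(), LoopBBs.end());
  bool InLoop = std::binary_search(LoopBBs.begin(), LoopBBs.end(), BB);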
llvm-svn: 81221
Optimal edge profiling is only possible when blocks with no successors get a
virtual edge (BB,0) that counts the execution frequencies of these
function-exiting blocks.
This patch makes the necessary changes before actually enabling optimal edge profiling.
llvm-svn: 80667
This adds a pass to verify the current profile against the flow conditions.
This is very helpful when later trying to preserve the profiling information
during all passes.
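The flow conditions are conservation constraints: per block, incoming edge
frequency, outgoing edge frequency, and the block's own frequency must agree.
A hedged sketch, where edgeFreq() and blockFreq() stand in for the ProfileInfo
queries and entry/exit blocks use the virtual edges described above:

  bool flowConditionHolds(BasicBlock *BB) {
    double In = 0.0, Out = 0.0;
    for (pred_iterator PI = pred_begin(BB), E = pred_end(BB); PI != E; ++PI)
      In += edgeFreq(*PI, BB);
    for (succ_iterator SI = succ_begin(BB), E = succ_end(BB); SI != E; ++SI)
      Out += edgeFreq(BB, *SI);
    // Everything that enters BB must leave it, and both sums must equal
    // BB's own execution frequency.
    return In == Out && In == blockFreq(BB);
  }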
llvm-svn: 80666
for sanity. This didn't turn up any bugs.
Change CallGraphNode to maintain its "callsite" information in the
call edges list as a WeakVH instead of as an instruction*. This fixes
a broad class of dangling pointer bugs, and makes CallGraph have a number
of useful invariants again. This fixes the class of problem indicated
by PR4029 and PR3601.
llvm-svn: 80663
SCEVUnknowns, as the non-SCEVUnknown cases in the getSCEVAtScope code
can also end up repeatedly climbing through the same expression trees,
which can be unusably slow when the trees are very tall.
Also, add a quick check for SCEV pointer equality to the main
SCEV comparison routine, as the full comparison code can be expensive
in the case of large expression trees.
These fix compile-time problems in some pathological cases.
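The pointer-equality fast path is possible because ScalarEvolution uniques
SCEV objects, so identity implies structural equality (hedged sketch; the
deep-comparison call is hypothetical):

  bool SCEVComplexityCompare::operator()(const SCEV *LHS,
                                         const SCEV *RHS) const {
    // Uniqued SCEVs are identical iff they are the same object, so skip
    // the potentially deep structural comparison on pointer equality.
    if (LHS == RHS)
      return false; // equal, hence not strictly less-than
    return compareStructure(LHS, RHS); // hypothetical full comparison
  }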
llvm-svn: 80623
stem from the fact that we have two types of passes that need to update it:
1. callgraphscc and module passes that are explicitly aware of it
2. Functionpasses (and loop passes etc) that are interlaced with CGSCC passes
by the CGSCC Passmgr.
In the case of #1, we can reasonably expect the passes to update the call
graph just like any analysis. However, functionpasses are not and generally
should not be CG aware. This has caused us no end of problems, so this takes
a new approach. Logically, the CGSCC Pass manager can rescan every function
after it runs a function pass over it to see if the functionpass made any
updates to the IR that affect the callgraph. This allows it to catch new calls
introduced by the functionpass.
In practice, doing this would be slow. This implementation keeps track of
whether or not the current scc is dirtied by a function pass, and, if so,
delays updating the callgraph until it is actually needed again. This way
we avoid extraneous rescans, but we still have good invariants when the
callgraph is needed.
Step #2 of the "give Callgraph some sane invariants" is to change CallGraphNode
to use a CallBackVH for the callsite entry of the CallGraphNode. This way
we can immediately remove entries from the callgraph when a FunctionPass is
active instead of having dangling pointers. The current pass tries to tolerate
these dangling pointers, but it is just an evil hack.
This is related to PR3601/4835/4029. This also reverts r80541, a hack working
around the sad lack of invariants.
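A hedged sketch of the lazy-refresh bookkeeping (member and function names
hypothetical):

  struct CGPassManagerState {
    bool CurrentSCCDirty;
    // After a FunctionPass runs over a function in the current SCC, only
    // record that the IR may now disagree with the call graph.
    void noteFunctionPassRan(bool ChangedIR) {
      if (ChangedIR) CurrentSCCDirty = true;
    }
    // Rescan only when the call graph is actually consumed again.
    void refreshIfNeeded() {
      if (!CurrentSCCDirty) return;
      rescanSCCForNewCalls(); // walk the IR, add/remove call edges
      CurrentSCCDirty = false;
    }
    void rescanSCCForNewCalls();
  };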
llvm-svn: 80566
indirect function pointer, inline it, then go to delete the body.
The problem is that the callgraph had other references to the function,
though the inliner had no way to know it, so we got a dangling pointer
and an invalid iterator out of the deal.
The fix to this is pretty simple: stop the inliner from deleting the
function by knowing that there are references to it. Do this by making
CallGraphNodes contain a refcount. This requires moving deletion of
available_externally functions to the module-level cleanup sweep where
it belongs.
llvm-svn: 80533
argpromotion and structretpromote. Basically, when replacing
a function, they used the 'changeFunction' api which changes
the entry in the function map (and steals/reuses the callgraph
node).
This has some interesting effects: first, the problem is that it doesn't
update the "callee" edges in any callees of the function in the call graph.
Second, this covers for a major problem in all the CGSCC pass stuff, which
is that it is completely broken when functions are deleted if they *don't*
reuse a CGN. (there is a cute little fixme about this though :).
This patch changes the protocol that CGSCC passes must obey: now the CGSCC
pass manager copies the SCC and preincrements its iterator to avoid passes
invalidating it. This allows CGSCC passes to mutate the current SCC. However
multiple passes may be run on that SCC, so if passes do this, they are now
required to *update* the SCC to be current when they return.
Other less interesting parts of this patch are that it makes passes update
the CG more directly, eliminates changeFunction, and requires clients of
replaceCallSite to specify the new callee CGN if they are changing it.
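In outline, the manager now does the following (hedged sketch; the pass-running
helper is hypothetical):

  for (scc_iterator<CallGraph *> SCCI = scc_begin(&CG); !SCCI.isAtEnd();) {
    // Copy the SCC's node list and pre-increment the iterator before any
    // pass runs, so call graph mutations can't invalidate either one.
    std::vector<CallGraphNode *> CurSCC(*SCCI);
    ++SCCI;
    runSCCPasses(CurSCC); // passes may mutate CurSCC, but must keep it
                          // up to date for the passes that follow
  }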
llvm-svn: 80527
This is a simple AliasAnalysis implementation which works by making
ScalarEvolution queries. ScalarEvolution has a more complete understanding
of arithmetic than BasicAA's collection of ad-hoc checks, so it handles
some cases that BasicAA misses, for example p[i] and p[i+1] within the
same iteration of a loop.
This is currently experimental. It may be that the main use for this pass
will be to help find cases where BasicAA can be profitably extended, or
to help in the development of the overall AliasAnalysis infrastructure,
however it's also possible that it could grow up to become a directly
useful pass.
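The motivating case, in source form (illustrative):

  // SCEV(&p[i+1]) - SCEV(&p[i]) is the constant sizeof(int) != 0, so a
  // SCEV-based AA can prove the load and store NoAlias within an iteration,
  // where BasicAA's ad-hoc checks may give up.
  void shift_down(int *p, int n) {
    for (int i = 0; i < n - 1; ++i)
      p[i] = p[i + 1];
  }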
llvm-svn: 80098
*) introducing new data type and export function of edge info for whole function (preparation for next patch).
*) renaming variables to make clear distinction between data and containers that contain this data.
*) updated comments and whitespaces.
*) made ProfileInfo::MissingValue a double (as it should be...).
(Discussed at http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20090817/084955.html.)
llvm-svn: 79940
TargetData is not present. It still uses TargetData when available.
This generalization also fixed some limitations in the TargetData
case; the attached testcase covers this.
llvm-svn: 79344
- Part of optimal static profiling patch sequence by Andreas Neustifter.
- Store edge, block, and function information separately for each function
(instead of in one giant map).
- Return frequencies as double instead of int, and use a sentinel value for
missing information.
llvm-svn: 78477
LoopDependenceAnalysis::getLoops is currently O(N*M) for a loop-nest of
depth N and a compound SCEV of M atomic SCEVs. As both N and M will
typically be very small, this should not be a problem. If it turns out
to be one, rewriting getLoops as SCEVVisitor will reduce complexity to
O(M).
llvm-svn: 78394
affected after a PHI node has been analyzed, just remove affected
SCEVs from the Scalars map, so that they'll be (lazily) recreated as
needed. This avoids creating SCEV objects that aren't actually needed.
Also, rewrite the associated def-use walking code to be non-recursive
and to continue traversing past Instructions that don't have an
entry in the Scalars map.
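The rewritten walk, as a hedged sketch (iterator details vary across LLVM
revisions):

  SmallVector<Instruction *, 16> Worklist;
  SmallPtrSet<Instruction *, 16> Visited;
  Worklist.push_back(PN);
  while (!Worklist.empty()) {
    Instruction *I = Worklist.pop_back_val();
    Scalars.erase(I); // recreated lazily on the next getSCEV() query
    // Keep traversing even through users with no Scalars entry: *their*
    // users may still have one.
    for (Value::use_iterator UI = I->use_begin(), E = I->use_end();
         UI != E; ++UI)
      if (Instruction *U = dyn_cast<Instruction>(*UI))
        if (Visited.insert(U).second)
          Worklist.push_back(U);
  }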
llvm-svn: 77032
This introduces an LDA-internal DependencePair class. The intention is that
this is a place where dependence testers can store various results
such as SCEVs describing conflicting iterations, breaking conditions,
distance/direction vectors, etc.
llvm-svn: 76877
(x pred y) with more thorough code that does more complete canonicalization
before resorting to range checks. This helps it find more cases where
the canonicalized expressions match.
llvm-svn: 76671
For now this only computes the allocated size of the memory pointed to by a
pointer, and the offset of a pointer from the allocated pointer.
The actual checkLimits part will come later, after another round of review.
llvm-svn: 75657
This adds location info for all llvm_unreachable calls (which is now a macro) in
!NDEBUG builds.
In NDEBUG builds, location info and the message are omitted (it only prints
"UNREACHABLE executed").
llvm-svn: 75640
works similar to isLoopInvariant, except that it will do trivial
hoisting to try to make the value loop invariant if it isn't already.
This makes it easier for transformation passes to clear trivial
instructions out of the way (the regular LICM pass doesn't run
until relatively late). This is code factored out of LoopSimplify
and other places.
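In rough form (hedged; simplified from the utility's actual shape):

  // Try to make V invariant in L, hoisting trivially safe instructions to
  // the preheader; returns true if V is (or becomes) loop-invariant.
  bool makeLoopInvariant(Value *V, Loop *L) {
    Instruction *I = dyn_cast<Instruction>(V);
    if (!I) return true;                           // constants, arguments
    if (!L->contains(I->getParent())) return true; // already outside L
    if (!I->isSafeToSpeculativelyExecute()) return false;
    BasicBlock *Preheader = L->getLoopPreheader();
    if (!Preheader) return false;
    for (unsigned i = 0, e = I->getNumOperands(); i != e; ++i)
      if (!makeLoopInvariant(I->getOperand(i), L))
        return false;                              // operands first
    I->moveBefore(Preheader->getTerminator());
    return true;
  }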
llvm-svn: 75578
and related functions out of LoopBase and into Loop, since they
are specific to BasicBlock-based loops. This also allows the code
to be moved out-of-line.
llvm-svn: 75523
using the Curiously Recurring Template Pattern with LoopBase.
This will help further refactoring, and future functionality for
Loop. Also, headers can now forward-declare Loop, instead of pulling
in LoopInfo.h or doing tricks.
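The shape of the change (hedged sketch):

  #include <vector>

  class BasicBlock;

  // LoopBase is parameterized over the block type and, via CRTP, over the
  // derived loop type, so its accessors traffic in LoopT* without casts.
  template <class BlockT, class LoopT>
  class LoopBase {
    LoopT *ParentLoop;
    std::vector<LoopT *> SubLoops;
    std::vector<BlockT *> Blocks;
  public:
    const std::vector<LoopT *> &getSubLoops() const { return SubLoops; }
  };

  // BasicBlock-specific members live on Loop itself, and other headers can
  // forward-declare Loop instead of including LoopInfo.h.
  class Loop : public LoopBase<BasicBlock, Loop> { /* ... */ };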
llvm-svn: 75519
Make llvm_unreachable take an optional string, thus moving the cerr <<
out of line.
LLVM_UNREACHABLE is now a simple wrapper that makes the message go away for
NDEBUG builds.
llvm-svn: 75379
to a loop deletion more thorough. Don't prune the def-use tree search at
instructions that don't have SCEVs computed, because an instruction with
a user that has a computed SCEV may itself lack a computed SCEV. Also,
remove loop-related values from the ValuesAtScopes and
ConstantEvolutionLoopExitValues maps as well.
This fixes a regression in 483.xalancbmk.
llvm-svn: 75030
This helps it avoid reusing an instruction that doesn't dominate all
of the users, in cases where the original instruction was inserted
before all of the users were known. This may result in redundant
expansions of sub-expressions that depend on loop-unpredictable values
in some cases, however this isn't very common, and it primarily impacts
IndVarSimplify, so GVN can be expected to clean these up.
This eliminates the need for IndVarSimplify's FixUsesBeforeDefs,
which fixes several bugs.
llvm-svn: 74352
trip counts in more cases.
Generalize ScalarEvolution's isLoopGuardedByCond code to recognize
And and Or conditions, splitting the code out into an
isNecessaryCond helper function so that it can evaluate Ands and Ors
recursively, and make SCEVExpander be much more aggressive about
hoisting instructions out of loops.
test/CodeGen/X86/pr3495.ll has an additional instruction now, but
it appears to be due to an arbitrary register allocation difference.
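A hedged sketch of the recursion on 'and' (the 'or' case is handled
analogously on a branch's false edge, and implies() stands in for the leaf
comparison):

  // Does Cond, known true on entry, imply "LHS Pred RHS"?
  bool isNecessaryCond(Value *Cond, ICmpInst::Predicate Pred,
                       const SCEV *LHS, const SCEV *RHS) {
    if (BinaryOperator *BO = dyn_cast<BinaryOperator>(Cond))
      if (BO->getOpcode() == Instruction::And)
        // Both operands of the 'and' hold, so either one implying the
        // test is sufficient.
        return isNecessaryCond(BO->getOperand(0), Pred, LHS, RHS) ||
               isNecessaryCond(BO->getOperand(1), Pred, LHS, RHS);
    ICmpInst *ICI = dyn_cast<ICmpInst>(Cond);
    if (!ICI) return false;
    return implies(ICI, Pred, LHS, RHS); // hypothetical leaf check
  }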
llvm-svn: 74048
createSCEV. Also, recognize UndefValue in createSCEV.
Change getIntegerSCEV's comment to avoid mentioning FP types,
and re-implement it in terms of getConstant instead of getUnknown.
llvm-svn: 74041
This also throws out the SCEV reference counting scheme, as the SCEVs now have a lifetime controlled by the
ScalarEvolution pass.
Note that SCEVHandle is now a no-op, and will be removed in a future commit.
llvm-svn: 73892
blocks, and also exit blocks with multiple conditions (combined
with (bitwise) ands and ors). It's often infeasible to compute an
exact trip count in such cases, but a useful upper bound can often
be found.
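A small example of the kind of loop this helps (illustrative):

  unsigned sum_prefix(const int *a, unsigned n) {
    unsigned sum = 0;
    // The exact trip count depends on the contents of a[], so it can't be
    // computed -- but the loop runs at most n iterations, and that maximum
    // is now reported.
    for (unsigned i = 0; i != n && a[i] != 0; ++i)
      sum += a[i];
    return sum;
  }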
llvm-svn: 73866
so that it can access the TargetData member (when available) and
use ValueTracking.h information to compute information for
SCEVUnknown Values.
Also add GetMinLeadingZeros and GetMinSignBits functions,
with minimal implementations.
llvm-svn: 73794
casted induction variables in cases where the cast
isn't foldable. It ended up being a pessimization in
many cases. This could be fixed, but it would require
a bunch of complicated code in IVUsers' clients. The
advantages of this approach aren't visible enough to
justify it at this time.
llvm-svn: 73706
failures.
To support this, add some utility functions to Type to help support
vector/scalar-independent code. Change ConstantInt::get and
ConstantFP::get to support vector types, and add an overload to
ConstantInt::get that uses a static IntegerType type, for
convenience.
Introduce a new getConstant method for ScalarEvolution, to simplify
common use cases.
llvm-svn: 73431