Generalize target-independent folding rules for sizeof to handle more
cases, and implement target-independent folding rules for alignof and
offsetof. Also, reassociate reassociative operators when it leads to
more folding.
Generalize ScalarEvolution's isOffsetOf to recognize offsetof on
arrays. Rename getAllocSizeExpr to getSizeOfExpr, and getFieldOffsetExpr
to getOffsetOfExpr, for consistency with analogous ConstantExpr routines.
Make the target-dependent folder promote GEP array indices to
pointer-sized integers, to make implicit casting explicit and exposed
to subsequent folding.
And add a bunch of testcases for this new functionality, and a bunch
of related existing functionality.
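For reference, sizeof, alignof, and offsetof have no first-class IR
representation; they are modeled as constant GEPs off a null pointer,
which is the form these folding rules operate on. A minimal sketch of
the idiom, assuming the current C++ API (the struct type here is just
an example):

  #include "llvm/Constants.h"
  #include "llvm/DerivedTypes.h"
  #include "llvm/LLVMContext.h"
  #include <vector>
  using namespace llvm;

  void buildIdioms(LLVMContext &Context) {
    std::vector<const Type*> Fields;
    Fields.push_back(Type::getInt32Ty(Context));
    Fields.push_back(Type::getInt64Ty(Context));
    const StructType *STy = StructType::get(Context, Fields);

    // sizeof(STy): ptrtoint(gep(STy* null, i32 1)).
    Constant *Size = ConstantExpr::getSizeOf(STy);
    // offsetof(STy, field 1): ptrtoint(gep(STy* null, i32 0, i32 1)).
    Constant *Offset = ConstantExpr::getOffsetOf(STy, 1);
    // alignof(STy): modeled as the offset of STy within a padded pair struct.
    Constant *Align = ConstantExpr::getAlignOf(STy);
    (void)Size; (void)Offset; (void)Align;
  }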
llvm-svn: 94987
use plain SCEVUnknowns with ConstantExpr::getSizeOf and
ConstantExpr::getOffsetOf constants. This eliminates a bunch of
special-case code.
Also add code for pattern-matching these expressions, for clients that
want to recognize them.
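A hedged sketch of what recognizing the sizeof idiom looks like for a
client, matching the GEP-off-null form directly (illustrative only,
not the exact helper added here):

  #include "llvm/Constants.h"
  #include "llvm/DerivedTypes.h"
  #include "llvm/Instructions.h"
  using namespace llvm;

  // Match ptrtoint(gep(T* null, 1)) and report T.
  static bool isSizeOfIdiom(const ConstantExpr *CE, const Type *&AllocTy) {
    if (CE->getOpcode() != Instruction::PtrToInt)
      return false;
    const ConstantExpr *GEP = dyn_cast<ConstantExpr>(CE->getOperand(0));
    if (!GEP || GEP->getOpcode() != Instruction::GetElementPtr ||
        GEP->getNumOperands() != 2 ||
        !isa<ConstantPointerNull>(GEP->getOperand(0)))
      return false;
    const ConstantInt *Idx = dyn_cast<ConstantInt>(GEP->getOperand(1));
    if (!Idx || !Idx->isOne())
      return false;
    AllocTy = cast<PointerType>(GEP->getOperand(0)->getType())->getElementType();
    return true;
  }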
Move the logic for expanding an array or vector sizeof into an element
count times the element size (e.g. sizeof of [10 x double] becomes
10 * sizeof(double)) out of ScalarEvolution and into the regular
constant folder, where the multiplication is exposed to subsequent
folding.
llvm-svn: 94737
After running a batch of measurements, it is clear that the inliner metrics
need some adjustments:
Own argument bonus: 20 -> 5
Outgoing argument penalty: 0 -> 5
Alloca bonus: 10 -> 5
Constant instr bonus: 7 -> 5
Dead successor bonus: 40 -> 5*(avg instrs/block)
The new cost metrics are generally 25 points higher than before, so we may need
to move thresholds.
With this change, InlineConstants::CallPenalty becomes a political correction:
if (!isa<IntrinsicInst>(II) && !callIsSmall(CS.getCalledFunction()))
    NumInsts += InlineConstants::CallPenalty + CS.arg_size();
The code size is accurately modelled by CS.arg_size(). CallPenalty is added
because calls tend to take a long time, so it may not be worth it to inline a
function with lots of calls.
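For example, under this formula a non-intrinsic call taking three
arguments adds CallPenalty + 3 to NumInsts.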
All of the political corrections are in the InlineConstants namespace:
IndirectCallBonus, CallPenalty, LastCallToStaticBonus, ColdccPenalty,
NoreturnPenalty.
llvm-svn: 94615
A GEP with all constant indices is already considered free by
analyzeBasicBlock(), so don't give it an extra bonus in
CountCodeReductionForAlloca().
This patch should remove a small positive bias toward inlining functions with
variable-index GEPs, and remove a smaller negative bias from functions with
all-constant index GEPs.
llvm-svn: 94591
Functions containing indirectbr are marked NeverInline by analyzeBasicBlock(),
so there is no point in giving indirectbr special treatment in
CountCodeReductionForConstant; that code path is never reached.
No functional change intended.
llvm-svn: 94590
have trouble with an intermediate add overflowing. Also, be more conservative
about the case where the induction variable in an SLT loop exit can step past
the RHS of the SLT and overflow in a single step.
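As an illustration of the single-step overflow (numbers chosen for the
example): with an i8 induction variable at 126 and a stride of 2, the
exit test i slt 127 admits another iteration, but the next step
nominally yields 128, which wraps to -128, so a trip count computed as
if the IV stopped at the RHS would be wrong.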
Make getSignedRange more aggressive, to recover some common cases that
the above fixes pessimized.
This addresses rdar://7561161.
llvm-svn: 94512
missing ones are libsupport, libsystem and libvmcore. libvmcore is
currently blocked on bugpoint, which uses EH. Once it stops using
EH, we can switch it off.
This #if 0's out 3 unit tests, because gtest requires RTTI information.
Suggestions welcome on how to fix this.
llvm-svn: 94164
This new version is much more aggressive about doing "full" reduction in
cases where it reduces register pressure, and also more aggressive about
rewriting induction variables to count down (or up) to zero when doing so
reduces register pressure.
It currently uses fairly simplistic algorithms for finding reuse
opportunities, but it introduces a new framework that allows it to
combine multiple strategies at once to form hybrid solutions, instead
of doing all full-reduction or all base+index.
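A hypothetical before/after showing the count-down rewrite in C++
terms (the transformation itself happens on the IR; this is not code
from the patch):

  // Before: i counts up and is compared against n, so n stays live
  // in a register for the whole loop.
  int sum_up(const int *p, int n) {
    int sum = 0;
    for (int i = 0; i != n; ++i)
      sum += *p++;
    return sum;
  }

  // After: i counts down to zero; n's register is freed inside the
  // loop, and the compare against zero typically folds into the
  // decrement-and-branch on targets that have one.
  int sum_down(const int *p, int n) {
    int sum = 0;
    for (int i = n; i != 0; --i)
      sum += *p++;
    return sum;
  }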
llvm-svn: 94061
form of an expression. This is the expression without the
post-increment adjustment made, which is useful in determining
which registers will be used by the expansion.
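For example, if an induction variable has the SCEV {0,+,4}, a use that
occurs after the increment has the post-incremented expression {4,+,4};
the unadjusted form is {0,+,4}, which better reflects that the
expansion can reuse the induction variable's existing register.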
llvm-svn: 93921
Move the DOTGraphTraits dotty printer/viewer templates, which were developed
for the dominance tree, into their own header file. This will allow reuse in
future passes.
llvm-svn: 93632
This patch also cleans up code that expected a bitcast in the first
argument, and updates testcases that call llvm.dbg.declare. It also
strips old llvm.dbg.declare intrinsics that did not pass metadata as
the first argument.
llvm-svn: 93531