mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-31 16:02:52 +01:00
Commit Graph

137 Commits

Author SHA1 Message Date
Chandler Carruth
80c3e3bbba Add two statistics to help track how we are computing the inline cost.
Yea, 'NumCallerCallersAnalyzed' isn't a great name, suggestions welcome.

llvm-svn: 154492
2012-04-11 10:15:10 +00:00
Chandler Carruth
8b30ff4d0f Belatedly address some code review from Chris.
As a side note, I really dislike array_pod_sort... Do we really still
care about any STL implementations that get this so wrong? Does libc++?

llvm-svn: 153834
2012-04-01 10:41:24 +00:00
Chandler Carruth
61d0735f00 Remove a bunch of empty, dead, and no-op methods from all of these
interfaces. These methods were used in the old inline cost system where
there was a persistent cache that had to be updated, invalidated, and
cleared. We're now doing more direct computations that don't require
this intricate dance. Even if we resume some level of caching, it would
almost certainly have a simpler and more narrow interface than this.

llvm-svn: 153813
2012-03-31 12:48:08 +00:00
Chandler Carruth
8cacff57bf Initial commit for the rewrite of the inline cost analysis to operate
on a per-callsite walk of the called function's instructions, in
breadth-first order over the potentially reachable set of basic blocks.

This is a major shift in how inline cost analysis works to improve the
accuracy and rationality of inlining decisions. A brief outline of the
algorithm this moves to:

- Build a simplification mapping based on the callsite arguments to the
  function arguments.
- Push the entry block onto a worklist of potentially-live basic blocks.
- Pop the first block off of the *front* of the worklist (for
  breadth-first ordering) and walk its instructions using a custom
  InstVisitor.
- For each instruction's operands, re-map them based on the
  simplification mappings available for the given callsite.
- Compute any possible simplification of the instruction after
  re-mapping, and store that back into the simplification mapping.
- Compute any bonuses, costs, or other impacts of the instruction on the
  cost metric.
- When the terminator is reached, replace any conditional value in the
  terminator with any simplifications from the mapping we have, and add
  any successors which are not proven to be dead from these
  simplifications to the worklist.
- Pop the next block off of the front of the worklist, and repeat.
- As soon as the cost of inlining exceeds the threshold for the
  callsite, stop analyzing the function in order to bound cost.
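
In code terms, the walk looks roughly like the following standalone sketch.
All types here (Block, the integer cost model, and the SimplifiedCond map
from branch-condition ids to the single live successor index) are
illustrative stand-ins, not LLVM's real analysis classes:

#include <deque>
#include <map>
#include <vector>

// Toy basic block: per-instruction costs plus a terminator whose condition
// may have been simplified to a constant at this callsite.
struct Block {
  std::vector<int> InstCosts;   // cost of each non-terminator instruction
  std::vector<Block *> Succs;   // terminator successors
  int CondId = -1;              // branch condition id, -1 if unconditional
};

// Breadth-first, threshold-bounded walk over the potentially-live blocks.
// SimplifiedCond maps a condition id to the index of the only live successor.
bool underThreshold(Block *Entry, int Threshold,
                    const std::map<int, int> &SimplifiedCond) {
  int Cost = 0;
  std::deque<Block *> Worklist = {Entry};
  std::map<Block *, bool> Visited = {{Entry, true}};
  while (!Worklist.empty()) {
    Block *BB = Worklist.front(); // pop from the *front*: breadth-first
    Worklist.pop_front();
    for (int C : BB->InstCosts) {
      Cost += C;
      if (Cost > Threshold)       // stop analyzing as soon as we exceed it
        return false;
    }
    auto It = SimplifiedCond.find(BB->CondId);
    for (unsigned I = 0; I != BB->Succs.size(); ++I) {
      // A simplified condition proves all but one successor dead.
      if (It != SimplifiedCond.end() && (int)I != It->second)
        continue;
      if (!Visited[BB->Succs[I]]) {
        Visited[BB->Succs[I]] = true;
        Worklist.push_back(BB->Succs[I]);
      }
    }
  }
  return true;
}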

The primary goal of this algorithm is to perfectly handle dead code
paths. We do not want any code in trivially dead code paths to impact
inlining decisions. The previous metric was *extremely* flawed here, and
would always subtract the average cost of two successors of
a conditional branch when it was proven to become an unconditional
branch at the callsite. There was no handling of wildly different costs
between the two successors, which would cause inlining when the path
actually taken was too large, and no inlining when the path actually
taken was trivially simple. There was also no handling of the code
*path*, only the immediate successors. These problems vanish completely
now. See the added regression tests for the shiny new features -- we
skip recursive function calls, SROA-killing instructions, and high-cost
complex CFG structures when dead at the callsite being analyzed.

Switching to this algorithm required refactoring the inline cost
interface to accept the actual threshold rather than simply returning
a single cost. The resulting interface is pretty bad, and I'm planning
to do lots of interface cleanup after this patch.

Several other refactorings fell out of this, but I've tried to minimize
them for this patch. =/ There is still more cleanup that can be done
here. Please point out anything that you see in review.

I've worked really hard to try to mirror at least the spirit of all of
the previous heuristics in the new model. It's not clear that they are
all correct any more, but I wanted to minimize the change in this single
patch, it's already a bit ridiculous. One heuristic that is *not* yet
mirrored is to allow inlining of functions with a dynamic alloca *if*
the caller has a dynamic alloca. I will add this back, but I think the
most reasonable way requires changes to the inliner itself rather than
just the cost metric, and so I've deferred this for a subsequent patch.
The test case is XFAIL-ed until then.

As mentioned in the review mail, this seems to make Clang run about 1%
to 2% faster in -O0, but makes its binary size grow by just under 4%.
I've looked into the 4% growth, and it can be fixed, but requires
changes to other parts of the inliner.

llvm-svn: 153812
2012-03-31 12:42:41 +00:00
Chandler Carruth
cd6d2689a0 Make a seemingly tiny change to the inliner and fix the generated code
size bloat. Unfortunately, I expect this to disable the majority of the
benefit from r152737. I'm hopeful at least that it will fix PR12345. To
explain this requires... quite a bit of backstory I'm afraid.

TL;DR: The change in r152737 actually did The Wrong Thing for
linkonce-odr functions. This change makes it do the right thing. The
benefits we saw were simple luck, not any actual strategy. Benchmark
numbers after a mini-blog-post so that I've written down my thoughts on
why all of this works and doesn't work...

To understand what's going on here, you have to understand how the
"bottom-up" inliner actually works. There are two fundamental modes to
the inliner:

1) Standard fixed-cost bottom-up inlining. This is the mode we usually
   think about. It walks from the bottom of the CFG up to the top,
   looking at callsites, taking information about the callsite and the
   called function and computing the expected cost of inlining into that
   callsite. If the cost is under a fixed threshold, it inlines. It's
   a touch more complicated than that due to all the bonuses, weights,
   etc. Inlining the last callsite to an internal function gets higher
   weight, etc. But essentially, this is the mode of operation.

2) Deferred bottom-up inlining (a term I just made up). This is the
   interesting mode for this patch and r152737. Initially, this works
   just like mode #1, but once we have the cost of inlining into the
   callsite, we don't just compare it with a fixed threshold. First, we
   check something else. Let's give some names to the entities at this
   point, or we'll end up hopelessly confused. We're considering
   inlining a function 'A' into its callsite within a function 'B'. We
   want to check whether 'B' has any callers, and whether it might be
   inlined into those callers. If so, we also check whether inlining 'A'
   into 'B' would block any of the opportunities for inlining 'B' into
   its callers. We take the sum of the costs of inlining 'B' into its
   callers where that inlining would be blocked by inlining 'A' into
   'B', and if that cost is less than the cost of inlining 'A' into 'B',
   then we skip inlining 'A' into 'B'.
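
To make the mode-2 comparison concrete, here is a hedged standalone sketch;
OuterCall and its precomputed fields are stand-ins for information the real
inliner derives on the fly:

#include <vector>

// One callsite of 'B' in some caller of 'B': the cost of inlining 'B'
// there, and whether inlining 'A' into 'B' would block that opportunity.
struct OuterCall {
  int Cost;
  bool BlockedByAIntoB;
};

// Skip inlining 'A' into 'B' when the blocked outer inlinings are
// collectively cheaper than inlining 'A' into 'B' now.
bool shouldDeferInliningAIntoB(int CostAIntoB,
                               const std::vector<OuterCall> &CallersOfB) {
  int BlockedCost = 0;
  for (const OuterCall &C : CallersOfB)
    if (C.BlockedByAIntoB)
      BlockedCost += C.Cost;
  return BlockedCost < CostAIntoB;
}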

Now, in order for #2 to make sense, we have to have some confidence that
we will actually have the opportunity to inline 'B' into its callers
when cheaper, *and* that we'll be able to revisit the decision and
inline 'A' into 'B' if that ever becomes the correct tradeoff. This
often isn't true for external functions -- we can see very few of their
callers, and we won't be able to re-consider inlining 'A' into 'B' if
'B' is external when we finally see more callers of 'B'. There are two
cases where we believe this to be true for C/C++ code: functions local
to a translation unit, and functions with an inline definition in every
translation unit which uses them. These are represented as internal
linkage and linkonce-odr (resp.) in LLVM. I enabled this logic for
linkonce-odr in r152737.

Unfortunately, when I did that, I also introduced a subtle bug. There
was an implicit assumption that the last caller of the function within
the TU was the last caller of the function in the program. We want to
bonus the last caller of the function in the program by a huge amount
for inlining because inlining that callsite has very little cost.
Unfortunately, the last caller in the TU of a linkonce-odr function is
*not* the last caller in the program, and so we don't want to apply this
bonus. If we do, we can apply it to one callsite *per-TU*. Because of
the way deferred inlining works, when it sees this bonus applied to one
callsite in the TU for 'B', it decides that inlining 'B' is of the
*utmost* importance just so we can get that final bonus. It then
proceeds to essentially force deferred inlining regardless of the actual
cost tradeoff.
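
Sketched as a guard, the fix amounts to making the bonus linkage-aware.
Everything below is a stand-in (the Function fields and the bonus value
are made up, not the real constants):

struct Function {
  bool HasInternalLinkage;
  int NumCallersInTU;
};

// The "last caller in the program" bonus is only sound when this TU sees
// every caller, i.e. for internal linkage; for linkonce-odr, the last
// call in the TU is not the last call in the program.
int lastCallBonus(const Function &Callee) {
  bool LastCallInTU = Callee.NumCallersInTU == 1;
  // Applying this bonus once per TU for linkonce-odr is exactly what
  // caused the PR12345 bloat, so only internal linkage qualifies.
  if (LastCallInTU && Callee.HasInternalLinkage)
    return 15000; // illustrative "huge amount", not the real constant
  return 0;
}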

The result? PR12345: code bloat, code bloat, code bloat. Another result
is getting *damn* lucky on a few benchmarks, and the over-inlining
exposing critically important optimizations. I would very much like
a list of benchmarks that regress after this change goes in, with
bitcode before and after. This will help me greatly understand what
opportunities the current cost analysis is missing.

Initial benchmark numbers look very good. WebKit files that exhibited
the worst of PR12345 went from growing to shrinking compared to Clang
with r152737 reverted.

- Bootstrapped Clang is 3% smaller with this change.
- Bootstrapped Clang -O0 over a single-source-file of lib/Lex is 4%
  faster with this change.

Please let me know about any other performance impact you see. Thanks to
Nico for reporting and urging me to actually fix this, and to Richard
Smith, Duncan Sands, Manuel Klimek, and Benjamin Kramer for talking
through the issues today.

llvm-svn: 153506
2012-03-27 10:48:28 +00:00
Chandler Carruth
2a69d3eac5 Move the instruction simplification of callsite arguments in the inliner
to instead rely on much more generic and powerful instruction
simplification in the function cloner (and thus inliner).

This teaches the pruning function cloner to use instsimplify rather than
just the constant folder to fold values during cloning. This can
simplify a large number of things that constant folding alone cannot
begin to touch. For example, it will realize that 'or' and 'and'
instructions with certain constant operands actually become constants
regardless of what their other operand is. It also can thread back
through the caller to perform simplifications that are only possible by
looking up a few levels. In particular, GEPs and pointer testing tend to
fold much more heavily with this change.
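
The gap between constant folding and instsimplify is easy to see in
miniature. The toy model below captures just the 'and'/'or' identities
the message mentions; it is nothing like the breadth of the real
InstSimplify:

#include <optional>

// Toy operand: either a known integer constant or an unknown value.
struct Operand {
  bool Known = false;
  int Val = 0;
};

// Constant folding needs *both* operands known.
std::optional<int> constantFold(char Op, Operand L, Operand R) {
  if (!L.Known || !R.Known)
    return std::nullopt;
  return Op == '&' ? (L.Val & R.Val) : (L.Val | R.Val);
}

// Simplification also handles one unknown operand:
// x & 0 == 0 and x | -1 == -1 regardless of x.
std::optional<int> simplify(char Op, Operand L, Operand R) {
  if (auto C = constantFold(Op, L, R))
    return C;
  if (Op == '&' && ((L.Known && L.Val == 0) || (R.Known && R.Val == 0)))
    return 0;
  if (Op == '|' && ((L.Known && L.Val == -1) || (R.Known && R.Val == -1)))
    return -1;
  return std::nullopt;
}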

This should (in some cases) have a positive impact on compile times with
optimizations on because the inliner itself will simply avoid cloning
a great deal of code. It already attempted to prune proven-dead code,
but now it will use the stronger simplifications to prove more code
dead.

llvm-svn: 153403
2012-03-25 04:03:40 +00:00
Chandler Carruth
1d0b0955da Start removing the use of an ad-hoc 'never inline' set and instead
directly query the function information which this set was representing.
This simplifies the interface of the inline cost analysis, and makes the
always-inline pass significantly more efficient.

Previously, always-inline would first build a single set of every
function in the module *except* those marked with the always-inline
attribute. It would then query this set at every call site to see if the
function was a member of the set, and if so, refuse to inline it. This
is quite wasteful. Instead, simply check the function attribute directly
when looking at the callsite.
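
The before/after shape of that change, sketched with stand-in types (the
real code queries LLVM function attributes, not a bool field):

#include <set>

struct Function {
  bool AlwaysInline;
};

// Before: build a module-wide set of every non-always-inline function up
// front, then test membership at each callsite.
bool refusedViaSet(const std::set<const Function *> &NeverInline,
                   const Function *Callee) {
  return NeverInline.count(Callee) != 0;
}

// After: just ask the callee directly; there is no set to build or update.
bool shouldAlwaysInline(const Function *Callee) {
  return Callee->AlwaysInline;
}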

The normal inliner also had similar redundancy. It added every function
in the module with the noinline attribute to its set to ignore, even
though inside the cost analysis function we *already tested* the
noinline attribute and produced the same result.

The only tricky part of removing this is that we have to be able to
correctly remove only the functions inlined by the always-inline pass
when finalizing, which requires a bit of a hack. Still, much less of
a hack than the set of all non-always-inline functions was. While I was
touching this function, I switched a heavy-weight set to a vector with
sort+unique. The algorithm already had a two-phase insert and removal
pattern, we were just needlessly paying the uniquing cost on every
insert.
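
The sort+unique pattern referred to above is the standard C++ idiom; this
generic sketch is not the exact code from the patch:

#include <algorithm>
#include <vector>

// Two-phase pattern: push_back freely during the insert phase, then pay
// the sorting/uniquing cost exactly once before the removal phase.
void sortAndUnique(std::vector<int> &V) {
  std::sort(V.begin(), V.end());
  V.erase(std::unique(V.begin(), V.end()), V.end());
}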

This probably speeds up some compiles by a small amount (-O0 compiles
with lots of always-inline, so potentially heavy libc++ users), but I've
not tried to measure it.

I believe there is no functional change here, but yell if you spot one.
None are intended.

Finally, the direction this is going in is to greatly simplify the
inline cost query interface so that we can replace its implementation
with a much more clever one. Along the way, all the APIs get simplified,
so it seems incrementally good.

llvm-svn: 152903
2012-03-16 06:10:13 +00:00
Chandler Carruth
5caf391dc5 Change where we enable the heuristic that delays inlining into functions
which are small enough to themselves be inlined. Delaying in this manner
can be harmful if the function is ineligible for inlining in some (or
many) contexts as it pessimizes the code of the function itself in the
event that inlining does not eventually happen.

Previously the check was written to only do this delaying of inlining
for static functions in the hope that they could be entirely deleted and
in the knowledge that all callers of static functions will have the
opportunity to inline if it is in fact profitable. However, with C++ we
get two other important sources of functions where the definition is
always available for inlining: inline functions and templated functions.
This patch generalizes the inliner to allow linkonce-ODR (the linkage
such C++ routines receive) to also qualify for this delay-based
inlining.

Benchmarking across a range of large real-world applications shows
roughly 2% size increase across the board, but an average speedup of
about 0.5%. Some benchmarks improved over 2%, and the 'clang' binary
itself (when bootstrapped with this feature) shows a 1% -O0 performance
improvement when run over all Sema, Lex, and Parse source code smashed
into a single file. A clean re-build of Clang+LLVM with a bootstrapped
Clang shows approximately 2% improvement, but that measurement is often
noisy.

llvm-svn: 152737
2012-03-14 20:16:41 +00:00
Chandler Carruth
015ff468c2 When inlining a function and adding its inner call sites to the
candidate set for subsequent inlining, try to simplify the arguments to
the inner call site now that inlining has been performed.

The goal here is to propagate and fold constants through deeply nested
call chains. Without doing this, we lose the inliner bonus that should
be applied because the arguments don't match the exact pattern the cost
estimator uses.

Reviewed on IRC by Benjamin Kramer.

llvm-svn: 152556
2012-03-12 11:19:33 +00:00
Chad Rosier
096b8c0365 Add support for disabling llvm.lifetime intrinsics in the AlwaysInliner. These
are optimization hints, but at -O0 we're not optimizing.  This becomes a problem
when the alwaysinline attribute is abused.
rdar://10921594

llvm-svn: 151429
2012-02-25 02:56:01 +00:00
Eli Friedman
e8f8cf1f33 Refactor code from inlining and globalopt that checks whether
a function definition is unused, and enhance it so it can tell that
functions which are only used by a blockaddress are in fact dead. This
probably doesn't happen much on most code, but the Linux kernel's
_THIS_IP_ can trigger this issue with blockaddress. (GlobalDCE can also
handle the given testcase, but we only run that at -O3.) Found while
looking at PR11180.
llvm-svn: 142572
2011-10-20 05:23:42 +00:00
Chris Lattner
e1fe7061ce land David Blaikie's patch to de-constify Type, with a few tweaks.
llvm-svn: 135375
2011-07-18 04:54:35 +00:00
Jay Foad
c146569beb Remove unused STL header includes.
llvm-svn: 130068
2011-04-23 19:53:52 +00:00
Dale Johannesen
de70d69dff Improve the accuracy of the inlining heuristic looking for the
case where a static caller is itself inlined everywhere else, and
thus may go away if it doesn't get too big due to inlining other
things into it.  If there are references to the caller other than
calls, it will not be removed; account for this.
This results in same-day completion of the case in PR8853.
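
A sketch of that accounting, under stand-in types (the real check walks
LLVM use lists rather than a prebuilt vector):

#include <vector>

struct Use {
  bool IsDirectCall;
};

// The "caller may disappear entirely" credit only applies to a static
// function whose every use is a direct call; any other reference (e.g.
// its address being taken) keeps the body alive.
bool callerMayGoAway(bool HasInternalLinkage, const std::vector<Use> &Uses) {
  if (!HasInternalLinkage)
    return false;
  for (const Use &U : Uses)
    if (!U.IsDirectCall)
      return false;
  return true;
}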

llvm-svn: 122821
2011-01-04 19:01:54 +00:00
Chris Lattner
48a7310e08 Fix PR8735, a really terrible problem in the inliner's "alloca merging"
optimization.

Consider:
static void foo() {
  A = alloca
  ...
}

static void bar() {
  B = alloca
  ...
  call foo();
}

void main() {
  bar();
}

The inliner proceeds bottom up, but let's pretend it decides not to inline foo
into bar.  When it gets to main, it inlines bar into main(), and says "hey, I
just inlined an alloca "B" into main, let's remember that."  Then it keeps going
and finds that it now contains a call to foo.  It decides to inline foo into
main, and says "hey, foo has an alloca A, and I have an alloca B from another
inlined call site, let's reuse it".  The problem with this, of course, is that
the lifetimes of A and B are nested, not disjoint.

Unfortunately I can't create a reasonable testcase for this: the one in the
PR is both huge and extremely sensitive, because minor tweaks end up
causing foo to get inlined into bar too early.  We already have tests for the
basic alloca merging optimization and this does not break them.

llvm-svn: 120995
2010-12-06 07:52:42 +00:00
Chris Lattner
71a4c43942 improve -debug output and comments a little.
llvm-svn: 120993
2010-12-06 07:38:40 +00:00
Jakob Stoklund Olesen
ea31f5aadd Let the -inline-threshold command line argument take precedence over the
threshold given to createFunctionInliningPass().

Both opt -O3 and clang would silently ignore the -inline-threshold option.

llvm-svn: 118117
2010-11-02 23:40:26 +00:00
Owen Anderson
f2fea95f2f Reapply r110396, with fixes to appease the Linux buildbot gods.
llvm-svn: 110460
2010-08-06 18:33:48 +00:00
Owen Anderson
aadd8a89ca Revert r110396 to fix buildbots.
llvm-svn: 110410
2010-08-06 00:23:35 +00:00
Owen Anderson
b9762c07cb Don't use PassInfo* as a type identifier for passes. Instead, use the address of the static
ID member as the sole unique type identifier.  Clean up APIs related to this change.

llvm-svn: 110396
2010-08-05 23:42:04 +00:00
Gabor Greif
0899f01287 simplify by using CallSite constructors; virtually eliminates CallSite::get from the tree
llvm-svn: 109687
2010-07-28 22:50:26 +00:00
Eric Christopher
f57bd9c94d Grammar.
llvm-svn: 108252
2010-07-13 18:27:13 +00:00
Benjamin Kramer
860be640dd Avoid swap when a copy suffices.
llvm-svn: 105220
2010-05-31 12:50:41 +00:00
Chris Lattner
afaee8e110 revert r102831. We already delete dead readonly calls in
other places, killing a valid transformation is not the right
answer.

llvm-svn: 102850
2010-05-01 17:19:38 +00:00
Owen Anderson
443d813b45 Disable the call-deletion transformation introduced in r86975. Without
halting analysis, it is illegal to delete a call to a read-only function.
The correct solution is almost certainly to add a "must halt" attribute and
only allow deletions in its presence.

XFAIL the relevant testcase for now.

llvm-svn: 102831
2010-05-01 08:34:28 +00:00
Chris Lattner
66d1a6ad69 rename InlineInfo.DevirtualizedCalls -> InlinedCalls to
reflect that it includes all inlined calls now, not just
devirtualized ones.

llvm-svn: 102824
2010-05-01 01:26:13 +00:00
Chris Lattner
b893cedad2 The inliner has traditionally not considered call sites
that appear due to inlining a callee as candidates for
futher inlining, but a recent patch made it do this if
those call sites were indirect and became direct.

Unfortunately, in bizarre cases (see testcase) doing this
can cause us to infinitely inline mutually recursive
functions into callers not in the cycle.  Fix this by
keeping track of the inline history: the chain of callsites
from which each inline candidate was inlined.

This shouldn't affect any "real world" code, but is required
for a follow-on patch that is coming up next.
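
A sketch of the history mechanism described above (stand-in types; the
shape follows the message, not the exact patch): each candidate callsite
carries an id into a shared history table, and a callee already present
on its chain is skipped.

#include <utility>
#include <vector>

struct Function;

// (callee that was inlined, index of the inlining that exposed it;
//  -1 terminates the chain for an original callsite)
using HistoryEntry = std::pair<Function *, int>;

bool alreadyInHistory(const std::vector<HistoryEntry> &InlineHistory,
                      int HistoryID, Function *Callee) {
  for (int I = HistoryID; I != -1; I = InlineHistory[I].second)
    if (InlineHistory[I].first == Callee)
      return true; // inlining again would re-enter a recursive cycle
  return false;
}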

llvm-svn: 102822
2010-05-01 01:05:10 +00:00
Chris Lattner
c4f93f2cbf remove #if 1's.
llvm-svn: 102296
2010-04-25 04:43:02 +00:00
Chris Lattner
3309e63080 enable my inliner change: add newly devirtualized call sites to
the worklist, making them inline candidates.

llvm-svn: 102213
2010-04-23 21:16:07 +00:00
Chris Lattner
a61306d68f switch InlineInfo.DevirtualizedCalls's list to be of WeakVH.
This fixes a bug where calls inlined into an invoke would get
changed into an invoke but the array would keep pointing to
the (now dead) call.  The improved inliner behavior is still
disabled for now.

llvm-svn: 102196
2010-04-23 18:37:01 +00:00
Chris Lattner
85dd1e42b6 disable my previous inliner patch, it appears to be busting self-host.
llvm-svn: 102153
2010-04-23 00:41:03 +00:00
Chris Lattner
5d87e1be44 The inliner was choosing to not consider call sites
that appear in the SCC as a result of inlining as candidates
for inlining.  Change this so that it *does* consider call 
sites that change from being indirect to being direct as a
result of inlining.  This allows it to completely 
"devirtualize" the testcase.

llvm-svn: 102146
2010-04-22 23:37:35 +00:00
Chris Lattner
5edbd34cff refactor the interface to InlineFunction so that most of the in/out
arguments are handled with a new InlineFunctionInfo class.  This 
makes it easier to extend InlineFunction to return more info in the
future.

llvm-svn: 102137
2010-04-22 23:07:58 +00:00
Chris Lattner
92df9f46f0 make the inliner do less work for leaf functions.
llvm-svn: 101846
2010-04-20 00:47:08 +00:00
Chris Lattner
6a038be777 introduce a new CallGraphSCC class, and pass it around
to CallGraphSCCPass's instead of passing around a
std::vector<CallGraphNode*>.  No functionality change,
but now we have a much tidier interface.

llvm-svn: 101558
2010-04-16 22:42:17 +00:00
Jakob Stoklund Olesen
189a55cc16 Try to keep the cached inliner costs around for a bit longer for big functions.
The Caller cost info would be reset every time a callee was inlined. If the
caller has lots of calls and there is some mutual recursion going on, the
caller cost info could be calculated many times.

This patch reduces inliner runtime from 240s to 0.5s for a function with 20000
small function calls.

This is a more conservative version of r98089 that doesn't break the clang
test CodeGenCXX/temp-order.cpp. That test relies on rather extreme inlining
for constant folding.
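
One way to picture the fix (a sketch under stand-in types, not the actual
r98099 diff): key cached cost info by function and evict only entries
whose body actually changed, instead of resetting the caller's
information after every single inlined callee.

#include <map>

struct Function;
struct CostInfo {
  int Cost = 0;
};

struct InlineCostCache {
  std::map<const Function *, CostInfo> Cached;

  // Called only when F's body really changed.
  void invalidate(const Function *F) { Cached.erase(F); }

  const CostInfo *lookup(const Function *F) const {
    auto It = Cached.find(F);
    return It == Cached.end() ? nullptr : &It->second;
  }
};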

llvm-svn: 98099
2010-03-09 23:02:17 +00:00
Jakob Stoklund Olesen
24bdfeee51 Revert r98089, it was breaking a clang test.
llvm-svn: 98094
2010-03-09 22:43:37 +00:00
Jakob Stoklund Olesen
0e8f00292c Try to keep the cached inliner costs around for a bit longer for big functions.
The Caller cost info would be reset every time a callee was inlined. If the
caller has lots of calls and there is some mutual recursion going on, the
caller cost info could be calculated many times.

This patch reduces inliner runtime from 240s to 0.5s for a function with 20000
small function calls.

llvm-svn: 98089
2010-03-09 22:17:11 +00:00
Jakob Stoklund Olesen
5b150ee899 Add inlining threshold to log output.
llvm-svn: 98024
2010-03-09 00:59:53 +00:00
Jakob Stoklund Olesen
8ce1b3d280 Enable the inlinehint attribute in the Inliner.
Functions explicitly marked inline will get an inlining threshold slightly
more aggressive than the default for -O3. This means that -O3 builds are
mostly unaffected while -Os builds will be a bit bigger and faster.

The difference depends entirely on how many 'inline's are sprinkled on the
source.

In the CINT2006 suite, only these tests are significantly affected under -Os:

               Size   Time
471.omnetpp   +1.63% -1.85%
473.astar     +4.01% -6.02%
483.xalancbmk +4.60%  0.00%

Note that 483.xalancbmk runs too quickly to give useful timing results.
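
A hedged sketch of a per-callsite threshold computation in the spirit of
this patch, together with the flag precedence from r118117 (above). All
constants below are illustrative, not the exact values from these commits:

// An explicit -inline-threshold flag wins outright (ThresholdFlagGiven
// models cl::opt's getNumOccurrences() != 0); otherwise optimizing for
// size lowers the base threshold and inlinehint raises it.
int selectThreshold(bool ThresholdFlagGiven, int ThresholdFlagValue,
                    bool OptimizeForSize, bool CalleeHasInlineHint) {
  if (ThresholdFlagGiven)
    return ThresholdFlagValue;
  int Threshold = 225;             // illustrative -O3-ish base
  if (OptimizeForSize)
    Threshold = 75;                // smaller budget when optimizing for size
  if (CalleeHasInlineHint)
    Threshold = Threshold * 3 / 2; // slightly more aggressive than default
  return Threshold;
}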

llvm-svn: 96066
2010-02-13 01:51:53 +00:00
Jakob Stoklund Olesen
83ebc265b3 Reintroduce the InlineHint function attribute.
This time it's for real! I am going to hook this up in the frontends as well.

The inliner has some experimental heuristics for dealing with the inline hint.
When given a -respect-inlinehint option, functions marked with the inline
keyword are given a threshold just above the default for -O3.

We need some experiments to determine if that is the right thing to do.

llvm-svn: 95466
2010-02-06 01:16:28 +00:00
Jakob Stoklund Olesen
54b09bc819 Increase inliner thresholds by 25.
This makes the inliner about as aggressive as it was before my changes to the
inliner cost calculations. These levels give the same performance and slightly
smaller code than before.

llvm-svn: 95320
2010-02-04 18:48:20 +00:00
Jakob Stoklund Olesen
ac14f3bf31 Move per-function inline threshold calculation to a method.
No functional change except the forgotten test for
InlineLimit.getNumOccurrences() == 0 in the CurrentThreshold2 calculation.

llvm-svn: 94007
2010-01-20 17:51:28 +00:00
David Greene
0d082a1e9d Change errs() to dbgs().
llvm-svn: 92625
2010-01-05 01:27:51 +00:00
Chris Lattner
9a1f2aaff0 use isInstructionTriviallyDead, as pointed out by Duncan
llvm-svn: 87035
2009-11-12 21:58:18 +00:00
Chris Lattner
00a9240c9c implement a nice little efficiency hack in the inliner. Since we're now
running IPSCCP early, and we run functionattrs interlaced with the inliner,
we often (particularly for small or noop functions) completely propagate
all of the information about a call to its call site in IPSCCP (making a call
dead), and functionattrs is smart enough to realize that the function is
readonly (because it is interlaced with the inliner).

To improve compile time and make the inliner threshold more accurate, realize
that we don't have to inline dead readonly function calls.  Instead, just 
delete the call.  This happens all the time for C++ code; here are some
counters from opt/llvm-ld counting the number of times calls were deleted vs
inlined on various apps:

Tramp3d opt:
  5033 inline                - Number of call sites deleted, not inlined
 24596 inline                - Number of functions inlined
llvm-ld:
  667 inline           - Number of functions deleted because all callers found
  699 inline           - Number of functions inlined

483.xalancbmk opt:
  8096 inline                - Number of call sites deleted, not inlined
 62528 inline                - Number of functions inlined
llvm-ld:
   217 inline           - Number of allocas merged together
  2158 inline           - Number of functions inlined

471.omnetpp:
  331 inline                - Number of call sites deleted, not inlined
 8981 inline                - Number of functions inlined
llvm-ld:
  171 inline           - Number of functions deleted because all callers found
  629 inline           - Number of functions inlined


Deleting a call is much faster than inlining it, and is insensitive to the
size of the callee. :)
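
The dispatch being described, as a stand-in-typed sketch (note the later
caveat: without a halting guarantee this deletion is not sound for merely
readonly callees, which is what r102831/r102850 above wrestle with):

struct CallSiteInfo {
  bool ResultUnused;
  bool CalleeIsReadOnly;
};

enum class Action { DeleteCall, ConsiderInlining };

// Deleting a dead readonly call is much faster than inlining it and is
// insensitive to callee size.
Action dispatch(const CallSiteInfo &CS) {
  if (CS.ResultUnused && CS.CalleeIsReadOnly)
    return Action::DeleteCall;
  return Action::ConsiderInlining;
}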

llvm-svn: 86975
2009-11-12 07:56:08 +00:00
Dan Gohman
18cdd7ef1b Move the InlineCost code from Transforms/Utils to Analysis.
llvm-svn: 83998
2009-10-13 18:30:07 +00:00
Dale Johannesen
7691b73da0 Use names instead of numbers for some of the magic
constants used in inlining heuristics (especially
those used in more than one file).  No functional change.

llvm-svn: 83675
2009-10-09 21:42:02 +00:00
Dale Johannesen
10c870b46f When considering whether to inline Callee into Caller,
and doing so would make Caller too big to inline, see if it
might be better to inline Caller into its callers instead.
This situation is described in PR 2973, although I haven't
tried the specific case in SPASS.

llvm-svn: 83602
2009-10-09 00:11:32 +00:00
Evan Cheng
eae1bb9779 Allow -inline-threshold to override the default threshold even when compiling to optimize for size.
llvm-svn: 83274
2009-10-04 06:13:54 +00:00