There were cases where a value could be used both across an invoke and
not across an invoke; this could happen in landing pads. In that case,
we demote the value to the stack as we did before.
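For illustration only (this is not the pass code; it uses modern header
paths and approximates "crossing an invoke" as "used in a landing-pad
block"), the rule is roughly:

  #include "llvm/IR/Function.h"
  #include "llvm/IR/InstIterator.h"
  #include "llvm/IR/Instructions.h"
  #include "llvm/Transforms/Utils/Local.h"
  using namespace llvm;

  // Spill any value whose uses appear both inside and outside landing pads.
  static void demoteMixedUses(Function &F) {
    for (Instruction &I : instructions(F)) {
      bool InLandingPad = false, Elsewhere = false;
      for (User *U : I.users()) {
        Instruction *UI = cast<Instruction>(U);
        if (UI->getParent()->isLandingPad())
          InLandingPad = true;
        else
          Elsewhere = true;
      }
      if (InLandingPad && Elsewhere)
        DemoteRegToStack(I); // reload from a stack slot at each use
    }
  }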
<rdar://problem/10609139>
llvm-svn: 152705
New flags: -misched-topdown, -misched-bottomup. They can be used with
the default scheduler or with -misched=shuffle. Without either flag,
-misched=shuffle now alternates scheduling direction.
LiveIntervals update is unimplemented with bottom-up scheduling, so
only -misched-topdown currently works.
Capped the ScheduleDAG hierarchy with a concrete ScheduleDAGMI class.
ScheduleDAGMI is aware of the top and bottom of the unscheduled zone
within the current region. Scheduling policy can be plugged into
the ScheduleDAGMI driver by implementing MachineSchedStrategy.
ConvergingScheduler is now the default scheduling algorithm.
It exercises the new driver but still does no reordering.
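As a rough sketch of how a policy plugs in (method names follow the
MachineSchedStrategy interface and may not match this revision exactly),
a do-nothing top-down strategy would look something like:

  #include "llvm/CodeGen/MachineScheduler.h"
  #include <vector>
  using namespace llvm;

  // Schedules strictly top-down, in whatever order nodes become ready.
  struct TrivialStrategy : MachineSchedStrategy {
    std::vector<SUnit *> Available;
    void initialize(ScheduleDAGMI *DAG) override { Available.clear(); }
    SUnit *pickNode(bool &IsTopNode) override {
      IsTopNode = true;                      // only top-down works for now
      if (Available.empty())
        return nullptr;                      // region fully scheduled
      SUnit *SU = Available.back();
      Available.pop_back();
      return SU;
    }
    void schedNode(SUnit *SU, bool IsTopNode) override {}
    void releaseTopNode(SUnit *SU) override { Available.push_back(SU); }
    void releaseBottomNode(SUnit *SU) override {}
  };

The driver hands ready nodes to releaseTopNode/releaseBottomNode and asks
pickNode for the next instruction to schedule within the region.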
llvm-svn: 152700
output (we're emitting a specification already and the information
isn't changing).
Saves 1% on the debug information for a build of llvm.
Fixes rdar://11043421
llvm-svn: 152697
(i16 load $addr+c*sizeof(i16)) and replace uses of (i32 vextract) with the
i16 load. It should issue an extload instead: (i32 extload $addr+c*sizeof(i16)).
rdar://11035895
llvm-svn: 152675
take a TargetLibraryInfo parameter. Internally, rather than passing TD, TLI
and DT parameters around all over the place, introduce a struct for holding
them.
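The struct is roughly of this shape (names illustrative, not necessarily
the ones in the patch):

  namespace llvm {
  class TargetData;
  class TargetLibraryInfo;
  class DominatorTree;
  }
  using namespace llvm;

  // One bundle of the analyses that used to be threaded through every helper.
  struct Query {
    const TargetData *TD;          // target data layout, may be null
    const TargetLibraryInfo *TLI;  // library call info, may be null
    const DominatorTree *DT;       // dominator tree, may be null
    Query(const TargetData *TD, const TargetLibraryInfo *TLI,
          const DominatorTree *DT)
        : TD(TD), TLI(TLI), DT(DT) {}
  };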
llvm-svn: 152623
Also refactor the existing OProfile profiling code to reuse the same interfaces with the VTune profiling code.
In addition, unit tests for the profiling interfaces were added.
This patch was prepared by Andrew Kaylor and Daniel Malea, and reviewed on
the llvm-commits list by Jim Grosbach.
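A small usage sketch, assuming the JITEventListener factory functions
(their availability depends on how LLVM was configured):

  #include "llvm/ExecutionEngine/ExecutionEngine.h"
  #include "llvm/ExecutionEngine/JITEventListener.h"
  using namespace llvm;

  // Report JIT-compiled functions to both profilers through one interface.
  void registerProfilers(ExecutionEngine &EE) {
    if (JITEventListener *L = JITEventListener::createIntelJITEventListener())
      EE.RegisterJITEventListener(L);   // VTune, via the Intel JIT API
    if (JITEventListener *L = JITEventListener::createOProfileJITEventListener())
      EE.RegisterJITEventListener(L);
  }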
llvm-svn: 152620
offset accumulation to use a boring APInt instead of ConstantExprs.
I didn't go all the way to an 'int64_t' because I wanted APInt to handle
any magic required to properly wrap the arithmetic when the pointer
width is <64 bits. If there is a significant penalty from using APInt
here, first off WTF, and secondly let me know and I'll do the math by
hand.
I've left one layer still operating with ConstantExpr because it makes the
interface quite a bit simpler, and that layer isn't iterative, so it has a
much lower cost.
I suppose this may speed up some strange compilation
situations, but I don't really expect much. It should have no functional
impact either way.
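As a sketch of why APInt fits here (hypothetical helper, not the patched
code): sized to the pointer width, the additions wrap exactly like the
pointer arithmetic does.

  #include "llvm/ADT/APInt.h"
  #include "llvm/ADT/ArrayRef.h"
  using namespace llvm;

  // Sum byte offsets (indices already scaled by element size) in an APInt
  // exactly as wide as a pointer, so overflow wraps correctly even when
  // the pointer width is < 64 bits.
  APInt accumulateOffset(unsigned PointerBits, ArrayRef<int64_t> ByteOffsets) {
    APInt Offset(PointerBits, 0);
    for (int64_t O : ByteOffsets)
      Offset += APInt(PointerBits, O, /*isSigned=*/true);
    return Offset;
  }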
llvm-svn: 152590
candidate set for subsequent inlining, try to simplify the arguments to
the inner call site now that inlining has been performed.
The goal here is to propagate and fold constants through deeply nested
call chains. Without doing this, we lose the inliner bonus that should
be applied because the arguments don't match the exact pattern the cost
estimator uses.
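A made-up source-level example of the pattern this targets:

  int expensive(int);  // stand-in for a call too costly to inline blindly
  static int inner(int x) { return x == 0 ? 1 : expensive(x); }
  static int outer(int x) { return inner(x + 1); }
  int caller() { return outer(-1); }

Once outer is inlined into caller, simplifying the inner call's argument
turns it into inner(0), which is exactly the cheap constant pattern the
cost estimator rewards.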
Reviewed on IRC by Benjamin Kramer.
llvm-svn: 152556
Typically instcombine has handled this, but pointer differences show up
in several contexts where we would like to get constant folding, and
cannot afford to run instcombine. Specifically, I'm working on improving
the constant folding of arguments used in inline cost analysis with
instsimplify.
Doing this in instsimplify implies some algorithm changes. We have to
handle multiple layers of all-constant GEPs because instsimplify cannot
fold them into a single GEP the way instcombine can. Also, we're only
interested in all-constant GEPs. The result is that this doesn't really
replace the instcombine logic; it's complementary and focused on
constant folding.
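A made-up example of the kind of pointer difference this lets
instsimplify fold (both operands are all-constant GEPs off the same
base):

  #include <cstddef>

  int buf[16];
  // Both &buf[11] and &buf[4] are constant offsets from the same base, so
  // the difference folds to the constant 7 without running instcombine.
  std::ptrdiff_t diff() { return &buf[11] - &buf[4]; }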
Reviewed on IRC by Benjamin Kramer.
llvm-svn: 152555