Allows the new machine model to be used for NumMicroOps and OutputLatency.
Allows the HazardRecognizer to be disabled along with itineraries.
llvm-svn: 165603
This wasn't contributing anything significant to postRA heuristics except compile time (by my measurements). It will be replaced by a more general heuristic for cross-region dependencies within the scheduler itself.
llvm-svn: 165563
- Update the maximum stack alignment when stack arguments are prepared before a
call.
- Test cases are enhanced to show it's not a Win32-specific issue but a generic
one.
llvm-svn: 164946
Add LIS::pruneValue() and extendToIndices(). These two functions are
used by the register coalescer when merging two live ranges requires
more than a trivial value mapping as supported by LiveInterval::join().
The pruneValue() function can remove the part of a value number that is
going to conflict in join(). Afterwards, extendToIndices can restore the
live range, using any new dominating value numbers and updating the SSA
form.
Use this complex value mapping to support merging a register into a
vector lane that has a conflicting value, but the clobbered lane is
undef.
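A rough sketch of the intended calling sequence in the coalescer (the
interval and SlotIndex names below are placeholders, and the signatures
are approximated from the description above):
  // Cut the conflicting value back so join() sees no overlap.
  SmallVector<SlotIndex, 8> EndPoints;
  LIS->pruneValue(&LI, ConflictIdx, &EndPoints);
  // ... perform the merge ...
  // Regrow the live range to the remembered end points; this inserts
  // new dominating value numbers where needed and updates SSA form.
  LIS->extendToIndices(&LI, EndPoints);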
llvm-svn: 164074
A value that is live in to a basic block should be returned by valueIn()
in LiveRangeQuery(getMBBStartIdx(MBB)), unless it is a PHI-def which
should be returned by valueDefined() instead.
Current code isn't using this functionality. Future code will.
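For illustration, a block-boundary query would be disambiguated like this
(a sketch using the names above):
  LiveRangeQuery LRQ(LI, LIS->getMBBStartIdx(MBB));
  if (VNInfo *VNI = LRQ.valueIn()) {
    // Value is live-in to MBB from a predecessor.
  } else if (VNInfo *VNI = LRQ.valueDefined()) {
    // Value is a PHI-def defined at the block entry itself.
  }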
llvm-svn: 163990
If a PHI value happens to be live out from the layout predecessor of its
def block, the def slot index will be in the middle of the segment:
%vreg11 = [192r,240B:0)[352r,416B:2)[416B,496r:1) 0@192r 1@480B-phi %2@352r
A LiveRangeQuery for 480 should return NULL from valueIn() since the
PHI value is defined at the block entry, not live in to the block.
No test case; future code depends on this functionality.
llvm-svn: 163971
- BlockAddress has no support for the BA + offset form, and there is no way to
propagate that offset into a machine operand;
- Add BA + offset support and a new interface, 'getTargetBlockAddress', to
simplify forming target block addresses;
- All targets are modified to use the new interface, and the X86 backend is
enhanced to support BA + offset addressing.
llvm-svn: 163743
The search for liveness is clipped to a specific number of instructions around the target MachineInstr to avoid degenerating into an O(N^2) algorithm. It tries to use various clues about how the instructions around a given MachineInstr (both before and after it) use that register, in order to determine its state at the MachineInstr.
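Assuming this is exposed as a MachineBasicBlock query (names per the
current header; treat them as illustrative), a caller might write:
  // Scan at most 10 instructions on either side of MI, then give up
  // and report LQR_Unknown.
  MachineBasicBlock::LivenessQueryResult LQR =
    MBB->computeRegisterLiveness(TRI, Reg, MI, 10);
  if (LQR == MachineBasicBlock::LQR_Dead) {
    // Reg is known dead at MI; safe to clobber.
  }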
llvm-svn: 163695
The Hexagon target decided to use a lot of functionality from the
target-independent scheduler. That's fine, and other targets should be
able to do the same. This reorg and API update makes that easy.
For the record, ScheduleDAGMI was not meant to be subclassed. Instead,
new scheduling algorithms should be able to implement
MachineSchedStrategy and be done. But if need be, it's nice to be
able to extend ScheduleDAGMI, so I also made that easier. The target
scheduler is somewhat more apt to break that way though.
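With the reorg, a new algorithm only needs to fill in the
MachineSchedStrategy callbacks. A minimal LIFO sketch (the method set is
approximated from the interface described above):
  struct LIFOStrategy : public MachineSchedStrategy {
    std::vector<SUnit*> ReadyQ;
    virtual void initialize(ScheduleDAGMI *DAG) { ReadyQ.clear(); }
    virtual SUnit *pickNode(bool &IsTopNode) {
      if (ReadyQ.empty()) return NULL;  // NULL means we're done.
      IsTopNode = true;                 // Always schedule top-down.
      SUnit *SU = ReadyQ.back();
      ReadyQ.pop_back();
      return SU;
    }
    virtual void schedNode(SUnit *SU, bool IsTopNode) {}
    virtual void releaseTopNode(SUnit *SU) { ReadyQ.push_back(SU); }
    virtual void releaseBottomNode(SUnit *SU) {} // top-down only
  };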
llvm-svn: 163580
The RegisterCoalescer understands overlapping live ranges where one
register is defined as a copy of the other. With this change, register
allocators using LiveRegMatrix can do the same, at least for copies
between physical and virtual registers.
When a physreg is defined by a copy from a virtreg, allow those live
ranges to overlap:
%CL<def> = COPY %vreg11:sub_8bit; GR32_ABCD:%vreg11
%vreg13<def,tied1> = SAR32rCL %vreg13<tied0>, %CL<imp-use,kill>
We can assign %vreg11 to %ECX, overlapping the live range of %CL.
llvm-svn: 163336
We will soon allow virtual register live ranges to overlap regunit live
ranges when the physreg is defined as a copy of the virtreg:
%EAX = COPY %vreg5
FOO %vreg5
BAR %EAX<kill>
There is no real interference since %vreg5 and %EAX have the same value
where they overlap.
This patch prevents addKillFlags from adding virtreg kill flags to FOO
where the assigned physreg is overlapping the virtual register live
range.
llvm-svn: 163335
The MachineOperand::TiedTo field was maintained, but not used.
This patch enables it in isRegTiedToDefOperand() and
isRegTiedToUseOperand(), which are the functions actually used by the
register allocator.
llvm-svn: 163153
After much agonizing, use a full 4 bits of precious MachineOperand space
to encode this. This uses existing padding, and doesn't grow
MachineOperand beyond its current 32 bytes.
This allows tied defs among the first 15 operands on a normal
instruction, just like the current MCInstrDesc constraint encoding.
Inline assembly needs to be able to tie more than the first 15 operands,
and gets special treatment.
Tied uses can appear beyond 15 operands, as long as they are tied to a
def that's in range.
llvm-svn: 163151
Manage tied operands entirely internally to MachineInstr. This makes it
possible to change the representation of tied operands, as I will do
shortly.
The constraint that tied uses and defs must be in the same order was too
restrictive.
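Clients now go through MachineInstr to create and inspect ties; roughly
(method names as the interface eventually looks):
  // Record that the use at UseIdx must get the same register as the
  // def at DefIdx. The representation is private to MachineInstr.
  MI->tieOperands(DefIdx, UseIdx);
  assert(MI->getOperand(UseIdx).isTied() && "tie was not recorded");
  unsigned TiedDefIdx = MI->findTiedOperandIdx(UseIdx);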
llvm-svn: 163021
Ordered memory operations are more constrained than volatile loads and
stores because they must be ordered with respect to all other memory
operations.
llvm-svn: 162861
This means the same as LoadInst/StoreInst::isUnordered(), and implies
!isVolatile().
Atomic loads and stores are also ordered, and this is the right method
to check if it is safe to reorder memory operations. Ordered atomics
can't be reordered wrt normal loads and stores, which is a stronger
constraint than volatile.
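A sketch of the intended check when swapping two memory operations
(assuming the predicate lives on MachineMemOperand, as the text
suggests):
  // Ordering allows the swap only if both operations are unordered:
  // not volatile and not atomic-with-ordering. Aliasing must still be
  // checked separately.
  static bool ordersAllowSwap(const MachineMemOperand *A,
                              const MachineMemOperand *B) {
    return A->isUnordered() && B->isUnordered();
  }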
llvm-svn: 162859
The isTied bit is set automatically when a tied use is added and
MCInstrDesc indicates a tied operand. The tie is broken when one of the
tied operands is removed.
llvm-svn: 162814
While in SSA form, a MachineInstr can have pairs of tied defs and uses.
The tied operands are used to represent read-modify-write operands that
must be assigned the same physical register.
Previously, tied operand pairs were computed from fixed MCInstrDesc
fields, or by using black magic on inline assembly instructions.
The isTied flag makes it possible to add tied operands to any
instruction while getting rid of (some of) the inlineasm magic.
Tied operands on normal instructions are needed to represent individual
predicated instructions in SSA form. An extra <tied,imp-use> operand is
required to represent the output value when the instruction predicate is
false.
Adding a predicate to:
%vreg0<def> = ADD %vreg1, %vreg2
will look like:
%vreg0<tied,def> = ADD %vreg1, %vreg2, pred:3, %vreg7<tied,imp-use>
The virtual register %vreg7 is the value given to %vreg0 when the
predicate is false. It will be assigned the same physreg as %vreg0.
This commit adds the isTied flag and sets it based on MCInstrDesc when
building an instruction. The flag is not used for anything yet.
llvm-svn: 162774
Register operands are manipulated by a lot of target-independent code,
and it is not always possible to preserve target flags. That means it is
not safe to use target flags on register operands.
None of the targets in the tree are using register operand target flags.
External targets should be using immediate operands to annotate
instructions with operand modifiers.
llvm-svn: 162770
These extra flags are not required to properly order the atomic
load/store instructions. SelectionDAGBuilder chains atomics as if they
were volatile, and SelectionDAG::getAtomic() sets the isVolatile bit on
the memory operands of all atomic operations.
The volatile bit is enough to order atomic loads and stores during and
after SelectionDAG.
This means we set mayLoad on atomic_load, mayStore on atomic_store, and
mayLoad+mayStore on the remaining atomic read-modify-write operations.
llvm-svn: 162733
The logic for recomputing latency based on a ScheduleDAG edge was
shady. This bypasses the problem by requiring the client to provide
operand indices. This ensures consistent use of the machine model's
API.
llvm-svn: 162420
The getSumForBlock function was quadratic in the number of successors
because getSuccWeight would perform a linear search for an already known
iterator.
This patch was originally committed as r161460, but reverted again
because of assertion failures. Now that duplicate Machine CFG edges have
been eliminated, this works properly.
llvm-svn: 162233
LLVM IR has labeled duplicate CFG edges, but since Machine CFG edges
don't have labels, it doesn't make sense to allow duplicates. There is
no way of telling what the edges mean.
Duplicate CFG edges cause confusion when dealing with edge weights. It
seems that code producing duplicate CFG edges usually does the wrong
thing with edge weights.
llvm-svn: 162227
Select instructions pick one of two virtual registers based on a
condition, like x86 cmov. On targets like ARM that support predication,
selects can sometimes be eliminated by predicating the instruction
defining one of the operands.
Teach PeepholeOptimizer to recognize select instructions, and ask the
target to optimize them.
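The hooks pair up roughly like this (analyzeSelect/optimizeSelect per
TargetInstrInfo; the surrounding variables are illustrative):
  SmallVector<MachineOperand, 4> Cond;
  unsigned TrueOp = 0, FalseOp = 0;
  bool Optimizable = false;
  // analyzeSelect() returns false on success, like analyzeBranch().
  if (!TII->analyzeSelect(MI, Cond, TrueOp, FalseOp, Optimizable) &&
      Optimizable)
    // The target replaces the select, e.g. by predicating the
    // instruction defining one of its operands.
    if (MachineInstr *NewMI = TII->optimizeSelect(MI, LocalMIs))
      MI = NewMI;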
llvm-svn: 162059
This makes it possible to speed up def_iterator by stopping at the first
use, which makes def_empty() and getUniqueVRegDef() much faster when
there are many uses.
In a +Asserts build, LiveVariables is 100x faster in one case because
getVRegDef() has an assertion that would scan to the end of a
def_iterator chain.
Spill weight calculation is significantly faster (300x in one case)
because isTriviallyReMaterializable() calls MRI->isConstantPhysReg(%RIP)
which calls def_empty(%RIP).
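The queries that benefit are the common ones; for example:
  // Defs come before uses in the list, so both of these can now stop
  // scanning at the first use operand.
  if (MRI->def_empty(Reg))
    return true;                      // no definitions at all
  if (MachineInstr *DefMI = MRI->getUniqueVRegDef(Reg)) {
    // Exactly one def; cheap SSA-style reasoning is possible.
  }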
llvm-svn: 161634
Use a more conventional doubly linked list where the Prev pointers form
a cycle. This means it is no longer necessary to adjust the Prev
pointers when reallocating the VRegInfo array.
The test changes are required because the register allocation hint is
using the use-list order to break ties.
llvm-svn: 161633
Register MachineOperands are kept in linked lists accessible via MRI's
reg_iterator interfaces. The linked list management was handled partly
by MachineOperand methods, partly by MRI methods.
Move all of the list management into MRI, delete
MO::AddRegOperandToRegInfo() and MO::RemoveRegOperandFromRegInfo().
Be more explicit about handling the cases where an MRI pointer isn't
available.
llvm-svn: 161632
The getSumForBlock function was quadratic in the number of successors
because getSuccWeight would perform a linear search for an already known
iterator.
llvm-svn: 161460
This adds support for TargetIndex operands during isel. The meaning of
these (index, offset, flags) operands is entirely defined by the target.
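During lowering, a target can wrap such a value with something like this
(getTargetIndex per the SelectionDAG interface; the index and offset
values are made up):
  // 8-bit flags, 32-bit index, 64-bit offset, all target-defined.
  SDValue TI = DAG.getTargetIndex(/*Index=*/42, PtrVT,
                                  /*Offset=*/0, /*TargetFlags=*/0);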
llvm-svn: 161453
A target index operand looks a lot like a constant pool reference, but
it is completely target-defined. It contains the 8-bit TargetFlags, a
32-bit index, and a 64-bit offset. It is preserved by all code generator
passes.
TargetIndex operands can be used to carry target-specific information in
cases where immediate operands won't suffice.
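On the MI side, the operand can be attached while building an
instruction; a sketch (addTargetIndex as in MachineInstrBuilder; the
opcode and index are placeholders):
  // Flags, index, and offset survive all later codegen passes.
  BuildMI(MBB, InsertPt, DL, TII->get(TargetOpc), DstReg)
    .addTargetIndex(/*Index=*/42, /*Offset=*/0);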
llvm-svn: 161441
This option runs LiveIntervals before TwoAddressInstructionPass, which
will eventually learn to exploit and update the analysis.
Eventually, LiveIntervals will run before PHIElimination, and we can get
rid of LiveVariables.
llvm-svn: 161270
The 'unused' state of a value number can be represented as an invalid
def SlotIndex. This also exposed code that shouldn't have been looking
at unused value VNInfos.
llvm-svn: 161258
The only real user of the flag was removeCopyByCommutingDef(), and it
has been switched to LiveIntervals::hasPHIKill().
All the code changed by this patch was only concerned with computing and
propagating the flag.
llvm-svn: 161255
The VNInfo::HAS_PHI_KILL flag is only half supported. We precompute it in
LiveIntervalAnalysis, but it isn't properly updated by live range
splitting and functions like shrinkToUses().
It is only used in one place: RegisterCoalescer::removeCopyByCommutingDef().
This patch changes that function to use a new LiveIntervals::hasPHIKill()
function that computes the flag for a given value number.
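The call site in removeCopyByCommutingDef() then becomes a per-value
query; roughly:
  // Abort if the value number feeds a PHI in a successor block; the
  // flag is now computed on demand instead of being stored in VNInfo.
  if (LIS->hasPHIKill(IntA, AValNo))
    return false;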
llvm-svn: 161254
Fast isel doesn't currently have support for translating builtin function
calls to target instructions. For embedded environments where the library
functions are not available, this is a matter of correctness and not
just optimization. Most of this patch is just arranging to make the
TargetLibraryInfo available in fast isel. <rdar://problem/12008746>
llvm-svn: 161232
This is a cleaned up version of the isFree() function in
MachineTraceMetrics.cpp.
Transient instructions are very unlikely to produce any code in the
final output, either because they are eliminated by the register
coalescer or because they are pseudo-instructions like labels and debug
values.
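A sketch of the resulting predicate in use, e.g. when counting 'real'
instructions in a block:
  unsigned RealInstrs = 0;
  for (MachineBasicBlock::const_iterator I = MBB->begin(),
       E = MBB->end(); I != E; ++I)
    if (!I->isTransient())  // skips COPYs, PHIs, labels, DBG_VALUEs
      ++RealInstrs;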
llvm-svn: 160977
A->isPredecessor(B) is the same as B->isSuccessor(A), but it can
tolerate a B that is null or dangling. This shouldn't happen normally,
but it is useful for verification code.
llvm-svn: 160968
A value number is a PHI def if and only if it begins at a block
boundary. This can be derived from the def slot; a separate flag is not
necessary.
llvm-svn: 160893
This option replaces the existing live interval computation with one
based on LiveRangeCalc.cpp. The new algorithm does not depend on
LiveVariables, and it can be run at any time, before or after leaving
SSA form.
llvm-svn: 160892
This is still a work in progress.
Out-of-order CPUs usually execute instructions from multiple basic
blocks simultaneously, so it is necessary to look at longer traces when
estimating the performance effects of code transformations.
The MachineTraceMetrics analysis will pick a typical trace through a
given basic block and provide performance metrics for the trace. Metrics
will include:
- Instruction count through the trace.
- Issue count per functional unit.
- Critical path length, and per-instruction 'slack'.
These metrics can be used to determine the performance limiting factor
when executing the trace, and how it will be affected by a code
transformation.
Initially, this will be used by the early if-conversion pass.
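The expected usage pattern from a pass such as early if-conversion looks
roughly like this (ensemble and trace names per MachineTraceMetrics.h;
the metrics themselves are still being implemented):
  MachineTraceMetrics &MTM = getAnalysis<MachineTraceMetrics>();
  // An ensemble fixes the trace selection strategy for every block.
  MachineTraceMetrics::Ensemble *MinInstr =
    MTM.getEnsemble(MachineTraceMetrics::TS_MinInstrCount);
  // Pick the typical trace through MBB and read its metrics.
  MachineTraceMetrics::Trace Trace = MinInstr->getTrace(MBB);
  unsigned Instrs = Trace.getInstrCount();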
llvm-svn: 160796
Add a micro-optimization to getNode() for CONCAT_VECTORS when both operands are undef.
Can't find a testcase for this because VECTOR_SHUFFLE already handles undef operands, but Duncan suggested that we add this.
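The change amounts to a couple of lines in getNode(); roughly:
  case ISD::CONCAT_VECTORS:
    // Concatenating two undef vectors is just a wider undef vector.
    if (N1.getOpcode() == ISD::UNDEF && N2.getOpcode() == ISD::UNDEF)
      return getUNDEF(VT);
    break;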
Together with Michael Kuperstein <michael.m.kuperstein@intel.com>
llvm-svn: 160229
generalizing its implementation sufficiently to support this value
number scenario as well.
This cuts out another significant performance hit in large functions
(over 10k basic blocks, etc), especially those with "natural" CFG
structures.
llvm-svn: 160026
X86MachineFunctionInfo as this is currently only used by X86. If this ever
becomes an issue on another arch (e.g., ARM) then we can hoist it back out.
llvm-svn: 160009
X86. Basically, this is a reapplication of r158087 with a few fixes.
Specifically, (1) the stack pointer is restored from the base pointer before
popping callee-saved registers and (2) in obscure cases (see comments in patch)
we must cache the value of the original stack adjustment in the prologue and
apply it in the epilogue.
rdar://11496434
llvm-svn: 160002