This improves compile time when the greedy register allocator splits live
ranges in giant functions. Without the bias, we would try to grow
regions through the giant edge bundles, usually only to discover that the
region had become too big and expensive.
If a live range has many uses in blocks near the giant bundle, the small
negative bias doesn't make a big difference, and we still consider
regions including the giant edge bundle.
Giant edge bundles are usually connected to landing pads or indirect
branches.
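A rough sketch of the idea, with made-up names and a made-up threshold
(the real RAGreedy code is more involved):

  // Toy model of the bias, not the actual implementation. A bundle touching
  // very many blocks gets a small negative weight, so region growth stops
  // there unless the use frequency of nearby blocks outweighs it.
  float biasedGrowthBenefit(unsigned BundleLinks, float NearbyUseFreq) {
    const unsigned HugeBundle = 256;       // assumed "giant" cutoff
    float Bias = BundleLinks > HugeBundle ? -0.01f : 0.0f;
    return NearbyUseFreq + Bias;           // grow only while this stays positive
  }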
llvm-svn: 157174
With physreg joining out of the way, it is easy to recognize the
instructions that need their kill flags cleared while testing for
interference.
This allows us to skip the final scan of all instructions for an 11%
speedup of the coalescer pass.
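A sketch of the combined scan (hasInterferenceAt is a made-up stand-in for
the real test, and the iterator API shown is the modern one); the point is
that kill flags are cleared in the same loop that checks interference:

  // Hypothetical sketch: fold kill-flag clearing into the interference scan.
  for (MachineOperand &MO : MRI->reg_operands(Reg)) {
    MachineInstr *MI = MO.getParent();
    if (hasInterferenceAt(MI))   // made-up query standing in for the real test
      return false;              // interference: abandon this join
    if (MO.isKill())
      MO.setIsKill(false);       // cleared here, so no final scan is needed
  }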
llvm-svn: 157169
may be RAUW'd by the recursive call to LegalizeOps; instead, retrieve
the other operands when calling UpdateNodeOperands. Fixes PR12889.
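The pattern, sketched in simplified form (not the exact code touched here):
re-read operands from the node after the recursive legalization instead of
using values cached beforehand.

  // Simplified sketch of the pattern. LegalizeOp may RAUW nodes, so an
  // operand cached before the call could point at a replaced node.
  SDValue Op0 = LegalizeOp(Node->getOperand(0));
  // Retrieve the other operand from the node now, after any replacement:
  DAG.UpdateNodeOperands(Node, Op0, Node->getOperand(1));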
llvm-svn: 157162
There should be no difference in the resulting binary, given a sufficiently
smart compiler. However, we have already had compiler timeouts on the
generated code in Intrinsics.gen, so this hopefully makes the lives of slow
buildbots a little easier.
llvm-svn: 157161
This class is meant to be the primary interface for examining a live
range in the vicinity of a given instruction. It avoids all the messy
dealings with iterators and early clobbers.
This is a more abstract interface to live ranges, hiding the
implementation as a vector of segments.
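Usage looks roughly like this (a reconstructed sketch; the exact method
names may differ):

  // Ask what a live interval is doing at one instruction, without touching
  // segment iterators directly:
  LiveRangeQuery LRQ(LI, LIS->getInstructionIndex(MI));
  if (VNInfo *VNI = LRQ.valueIn()) {
    // LI is live-in at MI; VNI is the value it carries.
  }
  if (LRQ.isKill()) {
    // MI reads the last use of this live range.
  }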
llvm-svn: 157141
Dead code elimination during coalescing could cause a virtual register
to be split into connected components. The subsequent rewriting would then
be confused by already-joined copies that were still present in the code
but had no corresponding value number in the live range.
Erase all joined copies immediately when joining intervals, so that the
MI and LiveInterval representations are always in sync.
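Sketched (a hedged outline with details elided, not the committed code):

  // Outline of the idea: delete the copy the moment its intervals are
  // joined, so no later step can see a copy without a matching value number.
  if (joinIntervals(CP)) {
    LIS->RemoveMachineInstrFromMaps(CopyMI);
    CopyMI->eraseFromParent();   // MI and LiveInterval now agree
  }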
llvm-svn: 157135
The current code will generate a prologue which starts with something like:
  mflr 0
  stw 31, -4(1)
  stw 0, 4(1)
  stwu 1, -16(1)
But under the PPC32 SVR4 ABI, access to negative offsets from R1 is not allowed.
This was pointed out by Peter Bergner.
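One ABI-compliant ordering, with illustrative offsets (a sketch, not
necessarily the exact emitted sequence): update R1 first with stwu, then
store only at non-negative offsets.

  stwu 1, -16(1)   # allocate the frame first; R1 is updated atomically
  mflr 0
  stw 31, 12(1)    # save R31 inside the new frame, at a positive offset
  stw 0, 20(1)     # LR slot is in the caller's frame: old 4(1) == new 20(1)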
llvm-svn: 157133
Dead code and joined copies are now eliminated on the fly, and there is
no need for a post pass.
This makes the coalescer work like other modern register allocator
passes: code is changed on the fly, and there is no pending list of
changes waiting to be committed.
llvm-svn: 157132
The late dead code elimination is no longer necessary.
The test changes are caused by a register hint that can be either %rdi or
%rax. The choice depends on the use list order, which this patch changes.
llvm-svn: 157131
Before rewriting uses of one value in A to register B, check that there
are no tied uses. A tied use would require multiple A values to be
rewritten.
This bug can't bite in the current version of the code for a fairly
subtle reason: A tied use would have caused 2-addr to insert a copy
before the use. If the copy has been coalesced, it will be found by the
same loop changed by this patch, and the optimization is aborted.
This was exposed by 400.perlbench and lua after applying a patch that
deletes joined copies aggressively.
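The guard itself is small; a hedged sketch (using today's MachineOperand
API, which may differ from the code as committed):

  // Sketch: refuse the rewrite if a use is tied, since a tied use would
  // drag other values of A into the rewrite. (The real check restricts
  // this to uses of the one value being rewritten.)
  for (MachineOperand &MO : MRI->use_operands(RegA)) {
    if (MO.isTied())
      return false;   // bail out; see the 2-addr copy argument above
  }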
llvm-svn: 157130
Remaining virtreg->physreg copies were rematerialized during
updateRegDefsUses(), but we already do the same thing in joinCopy() when
visiting the physreg copy instruction.
Eliminate the preserveSrcInt argument to reMaterializeTrivialDef(). It
is now always true.
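The cleanup amounts to dropping an always-true flag (sketched; the actual
parameter list is abridged here):

  // Before: reMaterializeTrivialDef(SrcInt, DstReg, CopyMI, /*preserveSrcInt=*/true);
  // After:  reMaterializeTrivialDef(SrcInt, DstReg, CopyMI);  // always preserves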
llvm-svn: 157103
Dead copies cause problems because they are trivial to coalesce, but
removing them leaves the live range with a dangling end point. This patch
enables full dead code elimination, which trims live ranges to their uses
so end points don't dangle.
DCE may erase multiple instructions. Put the pointers in an ErasedInstrs
set so we never risk visiting erased instructions in the work list.
There aren't supposed to be any dead copies entering RegisterCoalescer,
but some do slip through, as evidenced by test/CodeGen/X86/coalescer-dce.ll.
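A sketch of the bookkeeping (simplified from the description above;
ErasedInstrs is the set named in this commit, joinCopy stands in for the
real per-copy processing):

  // DCE may erase instructions still on the work list, so record every
  // erased pointer and skip it when popped.
  SmallPtrSet<MachineInstr *, 8> ErasedInstrs;

  while (!WorkList.empty()) {
    MachineInstr *MI = WorkList.pop_back_val();
    if (ErasedInstrs.count(MI))
      continue;               // erased by DCE; never dereference it
    joinCopy(MI);             // hypothetical per-copy processing
  }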
llvm-svn: 157101