//===- LiveIntervals.h - Live Interval Analysis -----------------*- C++ -*-===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
//
/// \file This file implements the LiveInterval analysis pass. Given some
/// numbering of each of the machine instructions (in this implementation,
/// depth-first order), an interval [i, j) is said to be a live interval for
/// register v if there is no instruction with number j' > j such that v is
/// live at j' and there is no instruction with number i' < i such that v is
/// live at i'. In
/// this implementation intervals can have holes, i.e. an interval might look
/// like [1,20), [50,65), [1000,1001).
//
//===----------------------------------------------------------------------===//
#ifndef LLVM_CODEGEN_LIVEINTERVALS_H
#define LLVM_CODEGEN_LIVEINTERVALS_H
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/IndexedMap.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/Analysis/AliasAnalysis.h"
#include "llvm/CodeGen/LiveInterval.h"
#include "llvm/CodeGen/MachineBasicBlock.h"
#include "llvm/CodeGen/MachineFunctionPass.h"
#include "llvm/CodeGen/SlotIndexes.h"
#include "llvm/CodeGen/TargetRegisterInfo.h"
#include "llvm/MC/LaneBitmask.h"
#include "llvm/Support/CommandLine.h"
#include "llvm/Support/Compiler.h"
#include "llvm/Support/ErrorHandling.h"
#include <cassert>
#include <cstdint>
#include <utility>
namespace llvm {
extern cl::opt<bool> UseSegmentSetForPhysRegs;
class BitVector;
class LiveRangeCalc;
class MachineBlockFrequencyInfo;
class MachineDominatorTree;
class MachineFunction;
class MachineInstr;
class MachineRegisterInfo;
class raw_ostream;
class TargetInstrInfo;
class VirtRegMap;
class LiveIntervals : public MachineFunctionPass {
MachineFunction* MF;
MachineRegisterInfo* MRI;
const TargetRegisterInfo* TRI;
const TargetInstrInfo* TII;
AliasAnalysis *AA;
SlotIndexes* Indexes;
MachineDominatorTree *DomTree = nullptr;
LiveRangeCalc *LRCalc = nullptr;
/// Special pool allocator for VNInfo's (LiveInterval val#).
VNInfo::Allocator VNInfoAllocator;
/// Live interval pointers for all the virtual registers.
IndexedMap<LiveInterval*, VirtReg2IndexFunctor> VirtRegIntervals;
/// Sorted list of instructions with register mask operands. Always use the
/// 'r' slot; RegMasks are normal clobbers, not early clobbers.
SmallVector<SlotIndex, 8> RegMaskSlots;
/// This vector is parallel to RegMaskSlots, it holds a pointer to the
/// corresponding register mask. This pointer can be recomputed as:
///
/// MI = Indexes->getInstructionFromIndex(RegMaskSlot[N]);
/// unsigned OpNum = findRegMaskOperand(MI);
/// RegMaskBits[N] = MI->getOperand(OpNum).getRegMask();
///
/// This is kept in a separate vector partly because some standard
/// libraries don't support lower_bound() with mixed objects, partly to
/// improve locality when searching in RegMaskSlots.
/// Also see the comment in LiveInterval::find().
SmallVector<const uint32_t*, 8> RegMaskBits;
/// For each basic block number, keep (begin, size) pairs indexing into the
/// RegMaskSlots and RegMaskBits arrays.
/// Note that basic block numbers may not be layout contiguous; that's why
/// we can't just keep track of the first register mask in each basic
/// block.
SmallVector<std::pair<unsigned, unsigned>, 8> RegMaskBlocks;
/// Keeps a live range set for each register unit to track fixed physreg
/// interference.
SmallVector<LiveRange*, 0> RegUnitRanges;
public:
static char ID;
LiveIntervals();
~LiveIntervals() override;
/// Calculate the spill weight to assign to a single instruction.
static float getSpillWeight(bool isDef, bool isUse,
const MachineBlockFrequencyInfo *MBFI,
const MachineInstr &MI);
/// Calculate the spill weight to assign to a single instruction.
static float getSpillWeight(bool isDef, bool isUse,
const MachineBlockFrequencyInfo *MBFI,
const MachineBasicBlock *MBB);
LiveInterval &getInterval(unsigned Reg) {
if (hasInterval(Reg))
return *VirtRegIntervals[Reg];
else
return createAndComputeVirtRegInterval(Reg);
}
const LiveInterval &getInterval(unsigned Reg) const {
return const_cast<LiveIntervals*>(this)->getInterval(Reg);
}
bool hasInterval(unsigned Reg) const {
return VirtRegIntervals.inBounds(Reg) && VirtRegIntervals[Reg];
}
/// Interval creation.
LiveInterval &createEmptyInterval(unsigned Reg) {
assert(!hasInterval(Reg) && "Interval already exists!");
VirtRegIntervals.grow(Reg);
VirtRegIntervals[Reg] = createInterval(Reg);
return *VirtRegIntervals[Reg];
}
LiveInterval &createAndComputeVirtRegInterval(unsigned Reg) {
LiveInterval &LI = createEmptyInterval(Reg);
computeVirtRegInterval(LI);
return LI;
}
/// Interval removal.
void removeInterval(unsigned Reg) {
delete VirtRegIntervals[Reg];
VirtRegIntervals[Reg] = nullptr;
}
/// Given a register and an instruction, adds a live segment from that
/// instruction to the end of its MBB.
LiveInterval::Segment addSegmentToEndOfBlock(unsigned reg,
MachineInstr &startInst);
/// After removing some uses of a register, shrink its live range to just
/// the remaining uses. This method does not compute reaching defs for new
/// uses, and it doesn't remove dead defs.
/// Dead PHIDef values are marked as unused. New dead machine instructions
/// are added to the dead vector. Returns true if the interval may have been
/// separated into multiple connected components.
bool shrinkToUses(LiveInterval *li,
SmallVectorImpl<MachineInstr*> *dead = nullptr);
/// Specialized version of
/// shrinkToUses(LiveInterval *li, SmallVectorImpl<MachineInstr*> *dead)
/// that works on a subregister live range and only looks at uses matching
/// the lane mask of the subregister range.
/// This may leave the subrange empty which needs to be cleaned up with
/// LiveInterval::removeEmptySubranges() afterwards.
void shrinkToUses(LiveInterval::SubRange &SR, unsigned Reg);
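// Illustrative sketch, not part of the original header: a typical cleanup
// after deleting uses of a virtual register. LIS, Reg, and Dead are
// hypothetical names.
//
//   SmallVector<MachineInstr *, 8> Dead;
//   bool MaySplit = LIS.shrinkToUses(&LIS.getInterval(Reg), &Dead);
//   for (MachineInstr *DeadMI : Dead) {
//     LIS.RemoveMachineInstrFromMaps(*DeadMI);
//     DeadMI->eraseFromParent();
//   }
//   // If MaySplit is true, the interval may now consist of multiple
//   // connected components; see splitSeparateComponents() below.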
/// Extend the live range \p LR to reach all points in \p Indices. The
/// points in the \p Indices array must be jointly dominated by the union
/// of the existing defs in \p LR and points in \p Undefs.
///
/// PHI-defs are added as needed to maintain SSA form.
///
/// If a SlotIndex in \p Indices is the end index of a basic block, \p LR
/// will be extended to be live out of the basic block.
/// If a SlotIndex in \p Indices is jointly dominated only by points in
/// \p Undefs, the live range will not be extended to that point.
///
/// See also LiveRangeCalc::extend().
void extendToIndices(LiveRange &LR, ArrayRef<SlotIndex> Indices,
ArrayRef<SlotIndex> Undefs);
void extendToIndices(LiveRange &LR, ArrayRef<SlotIndex> Indices) {
extendToIndices(LR, Indices, /*Undefs=*/{});
}
/// If \p LR has a live value at \p Kill, prune its live range by removing
/// any liveness reachable from Kill. Add live range end points to
/// EndPoints such that extendToIndices(LI, EndPoints) will reconstruct the
/// value's live range.
///
/// Calling pruneValue() and extendToIndices() can be used to reconstruct
/// SSA form after adding defs to a virtual register.
void pruneValue(LiveRange &LR, SlotIndex Kill,
SmallVectorImpl<SlotIndex> *EndPoints);
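// Illustrative sketch, not from the original header, of the prune/extend
// reconstruction pattern described above; LIS, LI, and DefIdx are
// hypothetical names.
//
//   SmallVector<SlotIndex, 8> EndPoints;
//   LiveRange &LR = LI;                     // prune the main range, not the
//                                           // interval (see note below)
//   LIS.pruneValue(LR, DefIdx, &EndPoints);
//   // ... insert the new def at DefIdx ...
//   LIS.extendToIndices(LR, EndPoints);     // PHI-defs are added as needed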
/// This function should not be used. Its intent is to tell you that you are
/// doing something wrong if you call pruneValue directly on a
/// LiveInterval. Instead, call pruneValue on the main LiveRange and on each
/// of the subranges' LiveRanges, if any.
LLVM_ATTRIBUTE_UNUSED void pruneValue(LiveInterval &, SlotIndex,
SmallVectorImpl<SlotIndex> *) {
llvm_unreachable(
"Use pruneValue on the main LiveRange and on each subrange");
}
SlotIndexes *getSlotIndexes() const {
return Indexes;
}
AliasAnalysis *getAliasAnalysis() const {
return AA;
}
/// Returns true if the specified machine instr has been removed or was
/// never entered in the map.
bool isNotInMIMap(const MachineInstr &Instr) const {
return !Indexes->hasIndex(Instr);
}
/// Returns the base index of the given instruction.
SlotIndex getInstructionIndex(const MachineInstr &Instr) const {
return Indexes->getInstructionIndex(Instr);
}
/// Returns the instruction associated with the given index.
MachineInstr* getInstructionFromIndex(SlotIndex index) const {
return Indexes->getInstructionFromIndex(index);
}
/// Return the first index in the given basic block.
SlotIndex getMBBStartIdx(const MachineBasicBlock *mbb) const {
return Indexes->getMBBStartIdx(mbb);
}
/// Return the last index in the given basic block.
SlotIndex getMBBEndIdx(const MachineBasicBlock *mbb) const {
return Indexes->getMBBEndIdx(mbb);
}
bool isLiveInToMBB(const LiveRange &LR,
const MachineBasicBlock *mbb) const {
return LR.liveAt(getMBBStartIdx(mbb));
}
bool isLiveOutOfMBB(const LiveRange &LR,
const MachineBasicBlock *mbb) const {
return LR.liveAt(getMBBEndIdx(mbb).getPrevSlot());
}
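// Illustrative sketch, not part of the original header: combining these
// helpers with SlotIndexes to test liveness at an instruction. LIS, LR, and
// MI are hypothetical names.
//
//   SlotIndex Idx = LIS.getInstructionIndex(*MI).getRegSlot();
//   if (LR.liveAt(Idx)) {
//     // The range is live at MI's register-def slot.
//   }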
MachineBasicBlock* getMBBFromIndex(SlotIndex index) const {
return Indexes->getMBBFromIndex(index);
}
void insertMBBInMaps(MachineBasicBlock *MBB) {
Indexes->insertMBBInMaps(MBB);
assert(unsigned(MBB->getNumber()) == RegMaskBlocks.size() &&
"Blocks must be added in order.");
RegMaskBlocks.push_back(std::make_pair(RegMaskSlots.size(), 0));
}
SlotIndex InsertMachineInstrInMaps(MachineInstr &MI) {
return Indexes->insertMachineInstrInMaps(MI);
}
void InsertMachineInstrRangeInMaps(MachineBasicBlock::iterator B,
MachineBasicBlock::iterator E) {
for (MachineBasicBlock::iterator I = B; I != E; ++I)
Indexes->insertMachineInstrInMaps(*I);
}
void RemoveMachineInstrFromMaps(MachineInstr &MI) {
Indexes->removeMachineInstrFromMaps(MI);
}
SlotIndex ReplaceMachineInstrInMaps(MachineInstr &MI, MachineInstr &NewMI) {
return Indexes->replaceMachineInstrInMaps(MI, NewMI);
}
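// Illustrative sketch, not from the original header: the maps must be kept
// in sync with the instruction stream. For example, when erasing a
// hypothetical instruction MI:
//
//   LIS.RemoveMachineInstrFromMaps(*MI);
//   MI->eraseFromParent();
//
// Likewise, a newly created instruction must be registered with
// InsertMachineInstrInMaps() before its index can be queried.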
VNInfo::Allocator& getVNInfoAllocator() { return VNInfoAllocator; }
void getAnalysisUsage(AnalysisUsage &AU) const override;
void releaseMemory() override;
/// Pass entry point; calculates LiveIntervals.
bool runOnMachineFunction(MachineFunction&) override;
/// Implement the dump method.
void print(raw_ostream &O, const Module* = nullptr) const override;
/// If LI is confined to a single basic block, return a pointer to that
/// block. If LI is live in to or out of any block, return NULL.
MachineBasicBlock *intervalIsInOneMBB(const LiveInterval &LI) const;
/// Returns true if VNI is killed by any PHI-def values in LI.
/// This may conservatively return true to avoid expensive computations.
bool hasPHIKill(const LiveInterval &LI, const VNInfo *VNI) const;
/// Add kill flags to any instruction that kills a virtual register.
void addKillFlags(const VirtRegMap*);
/// Call this method to notify LiveIntervals that instruction \p MI has been
/// moved within a basic block. This will update the live intervals for all
/// operands of \p MI. Moves between basic blocks are not supported.
///
/// \param UpdateFlags Update live intervals for nonallocatable physregs.
void handleMove(MachineInstr &MI, bool UpdateFlags = false);
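// Illustrative sketch, not part of the original header: repositioning an
// instruction inside its block and notifying LiveIntervals. MI, InsertPos,
// and LIS are hypothetical names.
//
//   MachineBasicBlock &MBB = *MI->getParent();
//   MBB.splice(InsertPos, &MBB, MI->getIterator()); // same-block move only
//   LIS.handleMove(*MI);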
/// Update intervals for operands of \p MI so that they begin/end on the
/// SlotIndex for \p BundleStart.
///
/// \param UpdateFlags Update live intervals for nonallocatable physregs.
///
/// Requires MI and BundleStart to have SlotIndexes, and assumes
/// existing liveness is accurate. BundleStart should be the first
/// instruction in the Bundle.
void handleMoveIntoBundle(MachineInstr &MI, MachineInstr &BundleStart,
bool UpdateFlags = false);
/// Update live intervals for instructions in a range of iterators. It is
/// intended for use after target hooks that may insert or remove
/// instructions, and is only efficient for a small number of instructions.
///
/// OrigRegs is a vector of registers that were originally used by the
/// instructions in the range between the two iterators.
///
/// Currently, the only changes that are supported are simple removal
/// and addition of uses.
void repairIntervalsInRange(MachineBasicBlock *MBB,
MachineBasicBlock::iterator Begin,
MachineBasicBlock::iterator End,
ArrayRef<unsigned> OrigRegs);
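// Illustrative sketch, not from the original header: recording the affected
// registers before a target hook edits the code, then repairing. MI, MBB,
// Begin, End, and LIS are hypothetical names.
//
//   SmallVector<unsigned, 8> OrigRegs;
//   for (const MachineOperand &MO : MI->operands())
//     if (MO.isReg() && TargetRegisterInfo::isVirtualRegister(MO.getReg()))
//       OrigRegs.push_back(MO.getReg());
//   // ... hook inserts/removes instructions between Begin and End ...
//   LIS.repairIntervalsInRange(MBB, Begin, End, OrigRegs);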
// Register mask functions.
//
// Machine instructions may use a register mask operand to indicate that a
// large number of registers are clobbered by the instruction. This is
// typically used for calls.
//
// For compile time performance reasons, these clobbers are not recorded in
// the live intervals for individual physical registers. Instead,
// LiveIntervalAnalysis maintains a sorted list of instructions with
// register mask operands.
/// Returns a sorted array of slot indices of all instructions with
/// register mask operands.
ArrayRef<SlotIndex> getRegMaskSlots() const { return RegMaskSlots; }
/// Returns a sorted array of slot indices of all instructions with register
/// mask operands in the basic block numbered \p MBBNum.
ArrayRef<SlotIndex> getRegMaskSlotsInBlock(unsigned MBBNum) const {
std::pair<unsigned, unsigned> P = RegMaskBlocks[MBBNum];
return getRegMaskSlots().slice(P.first, P.second);
}
/// Returns an array of register mask pointers corresponding to
/// getRegMaskSlots().
ArrayRef<const uint32_t*> getRegMaskBits() const { return RegMaskBits; }
/// Returns an array of mask pointers corresponding to
/// getRegMaskSlotsInBlock(MBBNum).
ArrayRef<const uint32_t*> getRegMaskBitsInBlock(unsigned MBBNum) const {
std::pair<unsigned, unsigned> P = RegMaskBlocks[MBBNum];
return getRegMaskBits().slice(P.first, P.second);
}
/// Test if \p LI is live across any register mask instructions, and
/// compute a bit mask of physical registers that are not clobbered by any
/// of them.
///
/// Returns false if \p LI doesn't cross any register mask instructions. In
/// that case, the bit vector is not filled in.
bool checkRegMaskInterference(LiveInterval &LI,
BitVector &UsableRegs);
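// Illustrative sketch, not part of the original header: using the regmask
// summary to filter allocation candidates. LIS, LI, and PhysReg are
// hypothetical names.
//
//   BitVector UsableRegs;
//   if (LIS.checkRegMaskInterference(LI, UsableRegs)) {
//     // LI crosses at least one regmask: a candidate PhysReg is only safe
//     // if UsableRegs.test(PhysReg).
//   }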
// Register unit functions.
//
// Fixed interference occurs when MachineInstrs use physregs directly
// instead of virtual registers. This typically happens when passing
// arguments to a function call, or when instructions require operands in
// fixed registers.
//
// Each physreg has one or more register units, see MCRegisterInfo. We
// track liveness per register unit to handle aliasing registers more
// efficiently.
/// Return the live range for register unit \p Unit. It will be computed if
/// it doesn't exist.
LiveRange &getRegUnit(unsigned Unit) {
LiveRange *LR = RegUnitRanges[Unit];
if (!LR) {
// Compute missing ranges on demand.
// Use segment set to speed up initial computation of the live range.
RegUnitRanges[Unit] = LR = new LiveRange(UseSegmentSetForPhysRegs);
computeRegUnitRange(*LR, Unit);
}
return *LR;
}
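// Illustrative sketch, not from the original header: checking for fixed
// interference from a physical register at an index. LIS, PhysReg, TRI, and
// Idx are hypothetical names.
//
//   for (MCRegUnitIterator Units(PhysReg, TRI); Units.isValid(); ++Units)
//     if (LIS.getRegUnit(*Units).liveAt(Idx)) {
//       // PhysReg, or a register aliasing it, is live at Idx.
//     }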
/// Return the live range for register unit \p Unit if it has already been
/// computed, or nullptr if it hasn't been computed yet.
LiveRange *getCachedRegUnit(unsigned Unit) {
return RegUnitRanges[Unit];
}
const LiveRange *getCachedRegUnit(unsigned Unit) const {
return RegUnitRanges[Unit];
}
/// Remove computed live range for register unit \p Unit. Subsequent uses
/// should rely on on-demand recomputation.
void removeRegUnit(unsigned Unit) {
delete RegUnitRanges[Unit];
RegUnitRanges[Unit] = nullptr;
}
/// Remove associated live ranges for the register units associated with \p
/// Reg. Subsequent uses should rely on on-demand recomputation. \note This
/// method can result in inconsistent liveness tracking if multiple physical
/// registers share a regunit, and should be used cautiously.
void removeAllRegUnitsForPhysReg(unsigned Reg) {
for (MCRegUnitIterator Units(Reg, TRI); Units.isValid(); ++Units)
removeRegUnit(*Units);
}
/// Remove value numbers and related live segments starting at position
/// \p Pos that are part of any liverange of physical register \p Reg or one
/// of its subregisters.
void removePhysRegDefAt(unsigned Reg, SlotIndex Pos);
/// Remove value number and related live segments of \p LI and its subranges
/// that start at position \p Pos.
void removeVRegDefAt(LiveInterval &LI, SlotIndex Pos);
/// Split the connected components of LiveInterval \p LI into separate
/// intervals.
void splitSeparateComponents(LiveInterval &LI,
SmallVectorImpl<LiveInterval*> &SplitLIs);
/// For live interval \p LI with correct SubRanges construct matching
/// information for the main live range. Expects the main live range to not
/// have any segments or value numbers.
void constructMainRangeFromSubranges(LiveInterval &LI);
private:
/// Compute live intervals for all virtual registers.
void computeVirtRegs();
/// Compute RegMaskSlots and RegMaskBits.
void computeRegMasks();
/// Walk the values in \p LI and check for dead values:
/// - Dead PHIDef values are marked as unused.
/// - Dead operands are marked as such.
/// - Completely dead machine instructions are added to the \p dead vector
/// if it is not nullptr.
/// Returns true if any PHI value numbers have been removed which may
/// have separated the interval into multiple connected components.
bool computeDeadValues(LiveInterval &LI,
SmallVectorImpl<MachineInstr*> *dead);
static LiveInterval* createInterval(unsigned Reg);
void printInstrs(raw_ostream &O) const;
void dumpInstrs() const;
void computeLiveInRegUnits();
void computeRegUnitRange(LiveRange&, unsigned Unit);
void computeVirtRegInterval(LiveInterval&);
using ShrinkToUsesWorkList = SmallVector<std::pair<SlotIndex, VNInfo*>, 16>;
void extendSegmentsToUses(LiveRange &Segments,
ShrinkToUsesWorkList &WorkList, unsigned Reg,
LaneBitmask LaneMask);
/// Helper function for repairIntervalsInRange(), walks backwards and
/// creates/modifies live segments in \p LR to match the operands found.
/// Only full operands or operands with subregisters matching \p LaneMask
/// are considered.
void repairOldRegInRange(MachineBasicBlock::iterator Begin,
MachineBasicBlock::iterator End,
const SlotIndex endIdx, LiveRange &LR,
unsigned Reg,
LaneBitmask LaneMask = LaneBitmask::getAll());
class HMEditor;
};
} // end namespace llvm
#endif