mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-21 12:02:58 +02:00
Commit Graph

198 Commits

Author SHA1 Message Date
Dmitri Gribenko
8982c8a34d Fix a couple of Doxygen comment issues pointed out by -Wdocumentation.
llvm-svn: 163721
2012-09-12 16:59:47 +00:00
David Blaikie
a00a3562bd Tidy up a few more uses of MF.getFunction()->getName().
Based on CR feedback from r162301 and Craig Topper's refactoring in r162347,
here are a few other places that could use the same API (& in one instance drop
a Function.h dependency).

llvm-svn: 162367
2012-08-22 17:18:53 +00:00
Craig Topper
d66ff79b2c Add a getName function to MachineFunction. Use it in places that previously did getFunction()->getName(). Remove includes of Function.h that are no longer needed.
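A minimal sketch of the new accessor in use (printMFName is an illustrative helper, not part of the commit):

#include "llvm/ADT/StringRef.h"
#include "llvm/CodeGen/MachineFunction.h"
#include "llvm/Support/Debug.h"
#include "llvm/Support/raw_ostream.h"

static void printMFName(const llvm::MachineFunction &MF) {
  // Previously: MF.getFunction()->getName(), which pulled in Function.h.
  llvm::StringRef Name = MF.getName();
  llvm::dbgs() << "machine function: " << Name << "\n";
}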
llvm-svn: 162347
2012-08-22 06:07:19 +00:00
David Blaikie
d7ccf61630 Remove unnecessary cast that was also unnecessarily casting away constness.
Even looking at the revision history, I couldn't quite piece together why this
cast was ever written in the first place. I assume it was because of some
change in the inheritance: perhaps this function was reimplemented in a
derived type & this caller was meant to get the base version (& it wasn't
virtual)?

llvm-svn: 162301
2012-08-21 18:54:23 +00:00
Jakob Stoklund Olesen
684fbc6e89 Remove LiveIntervalUnions from RegAllocBase.
They are living in LiveRegMatrix now.

llvm-svn: 158868
2012-06-20 22:52:29 +00:00
Jakob Stoklund Olesen
8976daa3cc Convert RAGreedy to LiveRegMatrix interference checking.
Stop depending on the LiveIntervalUnions in RegAllocBase; they are about
to be removed.

The changes are mostly replacing register alias iterators with regunit
iterators, and querying LiveRegMatrix instead of RegAllocBase.

InterferenceCache is converted to work with per-regunit
LiveIntervalUnions, and it checks fixed regunit interference separately,
using the fixed live intervals provided by LiveIntervalAnalysis.

The local splitting helper calcGapWeights() is also considering fixed
regunit interference which is kept on the side now.
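A hedged sketch of the new query style (checkPhysReg is an illustrative helper; the exact RAGreedy code differs):

#include "llvm/CodeGen/LiveInterval.h"
#include "llvm/CodeGen/LiveRegMatrix.h"

static bool checkPhysReg(llvm::LiveRegMatrix &Matrix,
                         llvm::LiveInterval &VirtReg, unsigned PhysReg) {
  // Ask LiveRegMatrix instead of RegAllocBase's LiveIntervalUnions.
  // IK_Free means no virtual-register and no fixed regunit interference.
  // Regunit walks replace the old alias iterators, e.g.:
  //   for (MCRegUnitIterator U(PhysReg, TRI); U.isValid(); ++U) ...
  return Matrix.checkInterference(VirtReg, PhysReg) ==
         llvm::LiveRegMatrix::IK_Free;
}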

llvm-svn: 158867
2012-06-20 22:52:26 +00:00
Jakob Stoklund Olesen
b5a23c5c71 Also compute MBB live-in lists in the new rewriter pass.
This deduplicates some code from the optimizing register allocators, and
it means that it is now possible to change the register allocators'
solutions simply by editing the VirtRegMap between the register
allocator pass and the rewriter.
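A hedged sketch of what such an edit could look like (remapIfAssigned is hypothetical, not LLVM API):

#include "llvm/CodeGen/VirtRegMap.h"

static void remapIfAssigned(llvm::VirtRegMap &VRM, unsigned VirtReg,
                            unsigned NewPhysReg) {
  if (VRM.hasPhys(VirtReg)) {
    VRM.clearVirt(VirtReg);                   // drop the allocator's choice
    VRM.assignVirt2Phys(VirtReg, NewPhysReg); // substitute another physreg
  }
}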

llvm-svn: 158249
2012-06-09 00:14:47 +00:00
Jakob Stoklund Olesen
c0bb0e899d Reintroduce VirtRegRewriter.
OK, not really. We don't want to reintroduce the old rewriter hacks.

This patch extracts virtual register rewriting as a separate pass that
runs after the register allocator. This is possible now that
CodeGen/Passes.cpp can configure the full optimizing register allocator
pipeline.

The rewriter pass uses register assignments in VirtRegMap to rewrite
virtual registers to physical registers, and it inserts kill flags based
on live intervals.

These finalization steps are the same for the optimizing register
allocators: RABasic, RAGreedy, and PBQP.
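A hedged sketch of the rewriting step described above (rewriteVirtRegs is illustrative; the real pass also handles subregister indices and kill flags):

#include "llvm/CodeGen/MachineBasicBlock.h"
#include "llvm/CodeGen/MachineFunction.h"
#include "llvm/CodeGen/MachineInstr.h"
#include "llvm/CodeGen/VirtRegMap.h"
#include "llvm/Target/TargetRegisterInfo.h"

static void rewriteVirtRegs(llvm::MachineFunction &MF, llvm::VirtRegMap &VRM) {
  for (llvm::MachineBasicBlock &MBB : MF)
    for (llvm::MachineInstr &MI : MBB)
      for (unsigned i = 0, e = MI.getNumOperands(); i != e; ++i) {
        llvm::MachineOperand &MO = MI.getOperand(i);
        if (!MO.isReg() ||
            !llvm::TargetRegisterInfo::isVirtualRegister(MO.getReg()))
          continue;
        MO.setReg(VRM.getPhys(MO.getReg())); // assignment recorded by the RA
      }
}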

llvm-svn: 158244
2012-06-08 23:44:45 +00:00
Benjamin Kramer
58b98297ac Round 2 of dead private variable removal.
LLVM is now -Wunused-private-field clean except for
- lib/MC/MCDisassembler/Disassembler.h. Not sure why it keeps all those inaccessible fields.
- gtest.

llvm-svn: 158096
2012-06-06 19:47:08 +00:00
Jakob Stoklund Olesen
be0b8939c0 Switch all register list clients to the new MC*Iterator interface.
No functional change intended.

Sorry for the churn. The iterator classes are supposed to help avoid
giant commits like this one in the future. The TableGen-produced
register lists are getting quite large, and it may be necessary to
change the table representation.

This makes it possible to do so without changing all clients (again).
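A hedged sketch of the iterator style clients are switched to (visitAliases is an illustrative helper):

#include "llvm/MC/MCRegisterInfo.h"

static void visitAliases(const llvm::MCRegisterInfo &MCRI, unsigned Reg) {
  // Walk Reg and every register overlapping it without touching the
  // underlying TableGen tables directly.
  for (llvm::MCRegAliasIterator AI(Reg, &MCRI, /*IncludeSelf=*/true);
       AI.isValid(); ++AI) {
    unsigned Alias = *AI;
    (void)Alias; // client code would query interference on Alias here
  }
}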

llvm-svn: 157854
2012-06-01 23:28:30 +00:00
Jakob Stoklund Olesen
8a558631c0 Prioritize smaller register classes for urgent evictions.
It helps compile exotic inline asm. In the test case, normal GR32
virtual registers use up eax-edx so the final GR32_ABCD live range has
no registers left. Since all the live ranges were tiny, we had no way of
prioritizing the smaller register class.

This patch allows tiny unspillable live ranges to be evicted by tiny
unspillable live ranges from a smaller register class.
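A hedged sketch of the tie-breaker being described (smallerClassMayEvict is illustrative; the real check lives in RAGreedy's eviction logic):

#include "llvm/Target/TargetRegisterInfo.h"

static bool smallerClassMayEvict(const llvm::TargetRegisterClass *Urgent,
                                 const llvm::TargetRegisterClass *Existing) {
  // A class with fewer registers (e.g. GR32_ABCD vs. GR32) is more
  // constrained, so its tiny unspillable ranges get to evict.
  return Urgent->getNumRegs() < Existing->getNumRegs();
}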

<rdar://problem/11542429>

llvm-svn: 157715
2012-05-30 21:46:58 +00:00
Jakob Stoklund Olesen
2313afcd00 Add a last resort tryInstructionSplit() to RAGreedy.
Live ranges with a constrained register class may benefit from splitting
around individual uses. It allows the remaining live range to use a
larger register class where it may allocate. This is like spilling to a
different register class.

This is only attempted on constrained register classes.

<rdar://problem/11438902>

llvm-svn: 157354
2012-05-23 22:37:27 +00:00
Jakob Stoklund Olesen
6a5bbcc25c Allow LiveRangeEdit to be created with a NULL parent.
The dead code elimination with callbacks is still useful.

llvm-svn: 157100
2012-05-19 05:25:46 +00:00
Pete Cooper
426b167bc5 Moved LiveRangeEdit.h so that it can be called from other parts of the backend, not just libCodeGen
llvm-svn: 153906
2012-04-02 22:44:18 +00:00
Jakob Stoklund Olesen
97f47c37b6 Allocate virtual registers in ascending order.
This is just the fallback tie-breaker ordering; the main allocation
order is still descending size.

Patch by Shamil Kurmangaleev!

llvm-svn: 153904
2012-04-02 22:30:39 +00:00
Pete Cooper
a76a82ef6f Refactored the LiveRangeEdit interface so that MachineFunction, TargetInstrInfo, MachineRegisterInfo, LiveIntervals, and VirtRegMap are all passed into the constructor and stored as members instead of passed in to each method.
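A generic, hedged illustration of that refactoring pattern (EditHelper and its members are hypothetical, not LiveRangeEdit's real signatures): dependencies move from every call site into the constructor.

namespace llvm {
class MachineFunction;
class LiveIntervals;
class VirtRegMap;
} // namespace llvm

class EditHelper {
  llvm::MachineFunction &MF;
  llvm::LiveIntervals &LIS;
  llvm::VirtRegMap &VRM;

public:
  EditHelper(llvm::MachineFunction &MF, llvm::LiveIntervals &LIS,
             llvm::VirtRegMap &VRM)
      : MF(MF), LIS(LIS), VRM(VRM) {}

  // Methods no longer take MF/LIS/VRM as parameters on every call.
  void eliminateDeadDefs();
};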
llvm-svn: 153903
2012-04-02 22:22:53 +00:00
Craig Topper
8cc9d75c6a Use uint16_t to store register overlaps to reduce static data.
llvm-svn: 152001
2012-03-04 10:43:23 +00:00
Andrew Trick
25ec43e9fe Clear virtual registers after they are no longer referenced.
Passes after RegAlloc should be able to rely on MRI->getNumVirtRegs() == 0.
This makes sharing code for pre/postRA passes more robust.
Now, to check if a pass is running before the RA pipeline begins, use MRI->isSSA().
To check if a pass is running after the RA pipeline ends, use !MRI->getNumVirtRegs().

PEI resets virtual regs when it's done scavenging.

PTX will either have to provide its own PEI pass or assign physregs.
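A hedged sketch of the recommended checks (the two helpers are illustrative, not LLVM API):

#include "llvm/CodeGen/MachineRegisterInfo.h"

// Before the register-allocation pipeline, machine code is still in SSA form.
static bool runsBeforeRegAlloc(const llvm::MachineRegisterInfo &MRI) {
  return MRI.isSSA();
}

// After the pipeline, every virtual register has been cleared.
static bool runsAfterRegAlloc(const llvm::MachineRegisterInfo &MRI) {
  return MRI.getNumVirtRegs() == 0;
}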

llvm-svn: 151032
2012-02-21 04:51:23 +00:00
Jakob Stoklund Olesen
c1054e87e4 Fix details in local live range splitting with regmasks.
Perform all comparisons at instruction granularity, and make sure
register masks on uses count in both gaps.

llvm-svn: 150530
2012-02-14 23:51:27 +00:00
Jakob Stoklund Olesen
cea998ba92 Handle register masks in local live range splitting.
Again the goal is to produce identical assembly with register mask
operands enabled.

llvm-svn: 150287
2012-02-11 00:42:18 +00:00
Jakob Stoklund Olesen
4fe2a13535 Add register mask support to InterferenceCache.
This makes global live range splitting behave identically with and
without register mask operands.

This is not necessarily the best way of using register masks for live
range splitting.  It would be more efficient to first split global live
ranges around calls (i.e., register masks), and reserve the fine grained
per-physreg interference guidance for global live ranges that do not
cross calls.

For now the goal is to produce identical assembly when enabling register
masks.
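A hedged sketch of the basic regmask interference test (maskClobbers is an illustrative helper; InterferenceCache's bookkeeping is more involved):

#include "llvm/CodeGen/MachineInstr.h"
#include "llvm/CodeGen/MachineOperand.h"

static bool maskClobbers(const llvm::MachineInstr &MI, unsigned PhysReg) {
  // A call carries a regmask operand; any physreg clobbered by the mask
  // interferes with live ranges crossing the call.
  for (unsigned i = 0, e = MI.getNumOperands(); i != e; ++i) {
    const llvm::MachineOperand &MO = MI.getOperand(i);
    if (MO.isRegMask() &&
        llvm::MachineOperand::clobbersPhysReg(MO.getRegMask(), PhysReg))
      return true;
  }
  return false;
}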

llvm-svn: 150259
2012-02-10 18:58:34 +00:00
Andrew Trick
c3cc8fa604 RegAlloc superpass: includes phi elimination, coalescing, and scheduling.
Creates a configurable regalloc pipeline.

Ensure specific llc options do what they say and nothing more: -regalloc=... has no effect other than selecting the allocator pass itself. This patch introduces a new umbrella flag, "-optimize-regalloc", to enable/disable the optimizing regalloc "superpass". This allows, for example, testing coalescing and scheduling under -O0, or vice versa.

When a CodeGen pass requires the MachineFunction to have a particular property, we need to explicitly define that property so it can be directly queried rather than naming a specific Pass. For example, to check for SSA, use MRI->isSSA, not addRequired<PHIElimination>.

CodeGen transformation passes are never "required" as an analysis.

ProcessImplicitDefs does not require LiveVariables.

We have a plan to massively simplify some of the early passes within the regalloc superpass.

llvm-svn: 150226
2012-02-10 04:10:36 +00:00
Jakob Stoklund Olesen
56d323e88d Add register mask support to RAGreedy.
This only adds the interference checks required for correctness.
We still need to take advantage of register masks for the
interference driven live range splitting.

llvm-svn: 150191
2012-02-09 18:25:05 +00:00
Andrew Trick
cbb72886ec Renamed MachineScheduler to ScheduleTopDownLive.
Responding to code review.

llvm-svn: 148290
2012-01-17 06:55:03 +00:00
Andrew Trick
8cee8a6cb3 Moving options declarations around.
More short term hackery until we have a way to configure passes that work on LiveIntervals.

llvm-svn: 148289
2012-01-17 06:54:59 +00:00
Andrew Trick
85c44d1485 Added the MachineSchedulerPass skeleton.
llvm-svn: 148105
2012-01-13 06:30:30 +00:00
Jakob Stoklund Olesen
73a72e7241 Make SplitAnalysis::UseSlots private.
llvm-svn: 148031
2012-01-12 17:53:44 +00:00
Jakob Stoklund Olesen
e65885114c Make data structures private.
llvm-svn: 147979
2012-01-11 23:19:08 +00:00
Jakob Stoklund Olesen
eb77ff7f7d Stop tracking spill slot uses in VirtRegMap.
Nobody cared; StackSlotColoring scans the instructions to find used stack
slots.

llvm-svn: 144485
2011-11-13 01:23:30 +00:00
Jakob Stoklund Olesen
7b1107fe0e Update split candidate correctly when interference cache is full.
No test case, spotted by inspection.

llvm-svn: 143407
2011-11-01 00:02:31 +00:00
Jakob Stoklund Olesen
44b1ea8bae Ignore the cloning of unknown registers.
The LRE_DidCloneVirtReg callback may be called with virtual registers
that RAGreedy doesn't even know about yet.  In that case, there are no
data structures to update.
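A hedged sketch of the guard (the side table and helper below are hypothetical, not the exact RAGreedy data structures):

#include "llvm/ADT/DenseMap.h"

// Hypothetical per-vreg bookkeeping, keyed by virtual register number.
static llvm::DenseMap<unsigned, unsigned> Stage;

static void onDidCloneVirtReg(unsigned New, unsigned Old) {
  auto It = Stage.find(Old);
  if (It == Stage.end())
    return;                // unknown register: nothing to update yet
  unsigned OldStage = It->second;
  Stage[New] = OldStage;   // clones inherit the original's bookkeeping
}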

llvm-svn: 139702
2011-09-14 17:34:37 +00:00
Jakob Stoklund Olesen
a147642106 Remove the -compact-regions flag.
It has been enabled by default for a while; it was only there to allow
performance comparisons.

llvm-svn: 139501
2011-09-12 16:54:42 +00:00
Jakob Stoklund Olesen
e43edf1b4a Add an interface for SplitKit complement spill modes.
SplitKit always computes a complement live range to cover the places
where the original live range was live, but no explicit region has been
allocated.

Currently, the complement live range is created to be as small as
possible - it never overlaps any of the regions.  This minimizes
register pressure, but if the complement is going to be spilled anyway,
that is not very important.  The spiller will eliminate redundant
spills, and hoist others by making the spill slot live range overlap
some of the regions created by splitting.  Stack slots are cheap.

This patch adds the interface to enable spill modes in SplitKit.  In
spill mode, SplitKit will assume that the complement is going to spill,
so it will allow it to overlap regions in order to avoid back-copies.
By doing some of the spiller's work early, the complement live range
becomes simpler.  In some cases, it can become much simpler because no
extra PHI-defs are required.  This will speed up both splitting and
spilling.

This is only the interface to enable spill modes; no implementation yet.
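A purely illustrative sketch of the shape such an interface could take (the enum and reset() below are assumptions, not a quote of SplitKit.h):

namespace sketch {

enum ComplementSpillMode {
  SM_Partition, // default: the complement never overlaps split regions
  SM_Size,      // complement may overlap regions; optimize for code size
  SM_Speed      // complement may overlap regions; avoid back-copies
};

struct SplitEditorLike {
  ComplementSpillMode Mode = SM_Partition;
  void reset(ComplementSpillMode SM) { Mode = SM; }
};

} // namespace sketch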

llvm-svn: 139500
2011-09-12 16:49:21 +00:00
Benjamin Kramer
0d60a5573c Make a bunch of symbols private.
llvm-svn: 138025
2011-08-19 01:42:18 +00:00
Jakob Stoklund Olesen
2f58336f2f Refer to the RegisterCoalescer pass by ID.
A public interface is no longer needed since RegisterCoalescer is not an
analysis any more.

llvm-svn: 137082
2011-08-09 00:29:53 +00:00
Jakob Stoklund Olesen
32649cbc65 Fix typo. Thanks, Andy!
llvm-svn: 137023
2011-08-06 18:20:24 +00:00
Jakob Stoklund Olesen
a8eabeb2b3 Reject RS_Spill ranges from local splitting as well.
All new local ranges are marked as RS_New now, so there is no need to
attempt splitting of RS_Spill ranges any more.

llvm-svn: 137002
2011-08-05 23:50:33 +00:00
Jakob Stoklund Olesen
af4cd7e67f Only mark remainder intervals as RS_Spill after per-block splitting.
The local ranges created get to stay in the RS_New stage, just like for
local and region splitting.

This gives tryLocalSplit a bit more freedom the first time it sees one
of these new local ranges.

llvm-svn: 137001
2011-08-05 23:50:31 +00:00
Jakob Stoklund Olesen
3f2f3056e5 Remember to update LiveDebugVariables after per-block splitting.
llvm-svn: 136996
2011-08-05 23:10:40 +00:00
Jakob Stoklund Olesen
6225dc24a6 Extract per-block splitting into its own method.
No functional change.

llvm-svn: 136994
2011-08-05 23:04:18 +00:00
Jakob Stoklund Olesen
d352ade356 Also use shouldSplitSingleBlock() in the fallback splitting mode.
Drop the use of SplitAnalysis::getMultiUseBlocks; there is no need to go
through a SmallPtrSet any more.

llvm-svn: 136992
2011-08-05 22:43:23 +00:00
Jakob Stoklund Olesen
83a365a416 Split around single instructions to enable register class inflation.
Normally, we don't create a live range for a single instruction in a
basic block; the spiller does that anyway. However, when splitting a
live range that belongs to a proper register sub-class, inserting these
extra COPY instructions completely removes the constraints from the
remainder interval, and it may be allocated from the larger super-class.

The spiller will mop up these small live ranges if we end up spilling
anyway. It calls them snippets.

llvm-svn: 136989
2011-08-05 22:20:45 +00:00
Jakob Stoklund Olesen
5c1e079053 Enable compact region splitting by default.
This helps generate better code in functions with high register
pressure.

The previous version of compact region splitting caused regressions
because the regions were a bit too large. A stronger negative bias
applied in r136832 fixed this problem.

llvm-svn: 136836
2011-08-03 23:16:09 +00:00
Jakob Stoklund Olesen
faae878e65 Be more conservative when forming compact regions.
Apply twice the negative bias on transparent blocks when computing the
compact regions. This excludes loop backedges from the region when only
one of the loop blocks uses the register.

Previously, we would include the backedge in the region if the loop
preheader and the loop latch both used the register, but the loop header
didn't.

When both the header and latch blocks use the register, we still keep it
live on the backedge.

llvm-svn: 136832
2011-08-03 23:09:38 +00:00
Chandler Carruth
b2c1fe2ccc Fix some warnings from Clang in release builds:
lib/CodeGen/RegAllocGreedy.cpp:1176:18: warning: unused variable 'B' [-Wunused-variable]
    if (unsigned B = Cand.getBundles(BundleCand, BestCand)) {
                 ^
lib/CodeGen/RegAllocGreedy.cpp:1188:18: warning: unused variable 'B' [-Wunused-variable]
    if (unsigned B = Cand.getBundles(BundleCand, 0)) {
                 ^

llvm-svn: 136831
2011-08-03 23:07:27 +00:00
Jakob Stoklund Olesen
ec971376f9 Use the precomputed def presence in RAGreedy::calcSpillCost.
llvm-svn: 136742
2011-08-02 23:04:08 +00:00
Jakob Stoklund Olesen
ffdbb7cc16 Inform SpillPlacement about blocks with defs.
This information is not used for anything yet.

llvm-svn: 136741
2011-08-02 23:04:06 +00:00
Jakob Stoklund Olesen
4b62c6ea69 Rename {First,Last}Use to {First,Last}Instr.
With a 'FirstDef' field right there, it is very confusing that FirstUse
refers to an instruction that may be a def.

llvm-svn: 136739
2011-08-02 22:54:14 +00:00
Jakob Stoklund Olesen
81247d41e4 Time the emission of debug values.
llvm-svn: 136584
2011-07-31 03:53:42 +00:00
Jakob Stoklund Olesen
0caedc8db8 Revert r136528 "Enable compact region splitting by default."
While this generally helped x86-64, there were some large regressions
for i386.

llvm-svn: 136571
2011-07-30 17:19:14 +00:00