mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-27 22:12:47 +01:00
Commit Graph

644 Commits

David Blaikie
a00a3562bd Tidy up a few more uses of MF.getFunction()->getName().
Based on CR feedback from r162301 and Craig Topper's refactoring in r162347,
here are a few other places that could use the same API (& in one instance drop
a Function.h dependency).

llvm-svn: 162367
2012-08-22 17:18:53 +00:00
Jakob Stoklund Olesen
14c88af5f2 Add an experimental -early-live-intervals option.
This option runs LiveIntervals before TwoAddressInstructionPass, which
will eventually learn to exploit and update the analysis.

Eventually, LiveIntervals will run before PHIElimination, and we can get
rid of LiveVariables.

llvm-svn: 161270
2012-08-03 22:12:54 +00:00
Jakob Stoklund Olesen
bf7a000191 Completely eliminate VNInfo flags.
The 'unused' state of a value number can be represented as an invalid
def SlotIndex. This also exposed code that shouldn't have been looking
at unused value VNInfos.

llvm-svn: 161258
2012-08-03 20:59:32 +00:00
Jakob Stoklund Olesen
38aec46f18 Eliminate the VNInfo::hasPHIKill() flag.
The only real user of the flag was removeCopyByCommutingDef(), and it
has been switched to LiveIntervals::hasPHIKill().

All the code changed by this patch was only concerned with computing and
propagating the flag.

llvm-svn: 161255
2012-08-03 20:19:44 +00:00
Jakob Stoklund Olesen
74dfaec8d7 Make the hasPHIKills flag a computed property.
The VNInfo::HAS_PHI_KILL is only half supported. We precompute it in
LiveIntervalAnalysis, but it isn't properly updated by live range
splitting and functions like shrinkToUses().

It is only used in one place: RegisterCoalescer::removeCopyByCommutingDef().

This patch changes that function to use a new LiveIntervals::hasPHIKill()
function that computes the flag for a given value number.
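
A minimal sketch of such a per-value query, assuming the usual SlotIndexes and
LiveInterval accessors (the exact signature and helper names are assumptions):

  // Does any PHI value in LI take VNI as an incoming value?
  bool hasPHIKill(const LiveInterval &LI, const VNInfo *VNI) const {
    for (const VNInfo *PHI : LI.valnos) {
      if (PHI->isUnused() || !PHI->isPHIDef())
        continue;
      const MachineBasicBlock *MBB = getMBBFromIndex(PHI->def);
      // VNI is killed by this PHI if it is live out of any predecessor.
      for (const MachineBasicBlock *Pred : MBB->predecessors())
        if (LI.getVNInfoBefore(getMBBEndIdx(Pred)) == VNI)
          return true;
    }
    return false;
  }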

llvm-svn: 161254
2012-08-03 20:10:24 +00:00
Jakob Stoklund Olesen
88319a3e66 Also compute register mask lists under -new-live-intervals.
llvm-svn: 160898
2012-07-27 21:56:39 +00:00
Jakob Stoklund Olesen
8e957f3c0b Eliminate the IS_PHI_DEF flag and VNInfo::setIsPHIDef().
A value number is a PHI def if and only if it begins at a block
boundary. This can be derived from the def slot; a separate flag is not
necessary.
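
With the flag gone, the predicate reduces to a test on the def slot (a sketch,
assuming SlotIndex::isBlock()):

  // VNInfo::isPHIDef() needs no stored state; the def slot says it all.
  bool isPHIDef() const { return def.isBlock(); }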

llvm-svn: 160893
2012-07-27 21:11:14 +00:00
Jakob Stoklund Olesen
d60f4942e6 Add a -new-live-intervals experimental option.
This option replaces the existing live interval computation with one
based on LiveRangeCalc.cpp. The new algorithm does not depend on
LiveVariables, and it can be run at any time, before or after leaving
SSA form.

llvm-svn: 160892
2012-07-27 20:58:46 +00:00
Jakob Stoklund Olesen
3a972a4f8d Delete a boring statistic.
llvm-svn: 159030
2012-06-22 20:40:15 +00:00
Jakob Stoklund Olesen
5b5a4305f1 Store live intervals in an IndexedMap.
It is both smaller and faster than DenseMap.
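
Roughly what the container switch looks like; VirtReg2IndexFunctor and the
member name are assumptions:

  #include "llvm/ADT/IndexedMap.h"

  // IndexedMap is a vector indexed by the dense virtual register index, so a
  // lookup is a plain array access instead of a DenseMap probe.
  IndexedMap<LiveInterval*, VirtReg2IndexFunctor> VirtRegIntervals;

  LiveInterval *getOrNull(unsigned Reg) {
    VirtRegIntervals.grow(Reg);     // make sure the slot exists
    return VirtRegIntervals[Reg];   // may still be null if never computed
  }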

llvm-svn: 159029
2012-06-22 20:37:52 +00:00
Jakob Stoklund Olesen
8e2cfdc8f4 Simplify handleMove() a bit.
There is no need to check for physreg live ranges. They don't exist any
more.

llvm-svn: 159019
2012-06-22 18:38:57 +00:00
Jakob Stoklund Olesen
a925ef2596 Stop computing physreg live ranges.
Everyone is using on-demand regunit ranges now.

llvm-svn: 159018
2012-06-22 18:20:50 +00:00
Jakob Stoklund Olesen
0d48b013fb Remove LiveIntervals::trackingRegUnits().
With regunit liveness permanently enabled, this function would always
return true.

Also remove now obsolete code for checking physreg interference.

llvm-svn: 159006
2012-06-22 16:46:44 +00:00
Jakob Stoklund Olesen
914857b29a Remove the -live-regunits command line option.
Register allocators depend on it being permanently enabled now.

llvm-svn: 158873
2012-06-20 23:31:34 +00:00
Jakob Stoklund Olesen
f3deae3d2f Fix some more LiveInterval enumerations.
Deterministically enumerate the virtual registers instead.
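
A sketch of the deterministic walk, using the standard MachineRegisterInfo and
TargetRegisterInfo helpers (the loop body is hypothetical):

  // Visit virtual registers in index order rather than DenseMap hash order.
  for (unsigned i = 0, e = MRI->getNumVirtRegs(); i != e; ++i) {
    unsigned Reg = TargetRegisterInfo::index2VirtReg(i);
    if (!LIS->hasInterval(Reg))
      continue;
    LiveInterval &LI = LIS->getInterval(Reg);
    // ... process LI in a stable order ...
  }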

llvm-svn: 158872
2012-06-20 23:23:59 +00:00
Jakob Stoklund Olesen
6bd9aa4463 Enable register unit liveness by default.
Soon we won't need to compute live intervals for physical registers.

llvm-svn: 158865
2012-06-20 22:52:22 +00:00
Jakob Stoklund Olesen
6d2db5c3d9 Only update regunit live ranges that have been precomputed.
Regunit live ranges are computed on demand, so when mi-sched calls
handleMove, some regunits may not have live ranges yet.

That makes updating them easier: Just skip the non-existing ranges. They
will be computed correctly from the rescheduled machine code when they
are needed.
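
A sketch of the skip; getCachedRegUnit and the update helper are assumptions:

  // Only touch regunit ranges that already exist.  Missing ones will be
  // recomputed on demand from the rescheduled code.
  for (MCRegUnitIterator Units(Reg, TRI); Units.isValid(); ++Units)
    if (LiveInterval *RUI = LIS->getCachedRegUnit(*Units))
      updateRange(*RUI);   // hypothetical helper that repairs one range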

llvm-svn: 158831
2012-06-20 18:00:57 +00:00
Jakob Stoklund Olesen
b2d8a68462 Delete dead code.
llvm-svn: 158827
2012-06-20 16:38:50 +00:00
Jakob Stoklund Olesen
355b171711 Add regunit liveness support to LiveIntervals::handleMove().
When LiveIntervals is tracking fixed interference in regunits, make sure
to update those intervals as well. Currently guarded by -live-regunits.

llvm-svn: 158766
2012-06-19 23:50:18 +00:00
Jakob Stoklund Olesen
55d7590707 80 col.
llvm-svn: 158755
2012-06-19 22:50:53 +00:00
Jakob Stoklund Olesen
cf188ced7f Remove dead debug option -disable-rematerialization.
Remat has been stable for years, and it isn't done by
LiveIntervalAnalysis any longer. (See LiveRangeEdit).

llvm-svn: 158079
2012-06-06 16:22:41 +00:00
Matt Beaumont-Gay
b6f7234429 Suppress -Wunused-variable in -Asserts build
llvm-svn: 158037
2012-06-05 23:00:03 +00:00
Jakob Stoklund Olesen
12dac91467 Simplify LiveInterval::print().
Don't print out the register number and spill weight, making the TRI
argument unnecessary.

This allows callers to interpret the reg field. It can currently be a
virtual register, a physical register, a spill slot, or a register unit.

llvm-svn: 158031
2012-06-05 22:51:54 +00:00
Jakob Stoklund Olesen
f12252632a Add experimental support for register unit liveness.
Instead of computing a live interval per physreg, LiveIntervals can
compute live intervals per register unit. This eliminates the confusing
situation where aliasing registers could have overlapping live
intervals. It should also make fixed interference checking cheaper,
since registers have fewer register units than aliases.

Live intervals for regunits are computed on demand, using MRI use-def
chains and the new LiveRangeCalc class. Only regunits live in to ABI
blocks are precomputed during LiveIntervals::runOnMachineFunction().

The regunit liveness computations don't depend on LiveVariables.
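
A sketch of a fixed-interference check against regunits instead of alias lists
(getRegUnit computes the range on demand; treat the exact interfaces as
assumptions):

  // Check a virtual register's live interval against a physreg's regunits.
  bool overlapsPhysReg(const LiveInterval &VirtLI, unsigned PhysReg,
                       LiveIntervals &LIS, const TargetRegisterInfo &TRI) {
    for (MCRegUnitIterator Units(PhysReg, &TRI); Units.isValid(); ++Units)
      if (VirtLI.overlaps(LIS.getRegUnit(*Units)))   // computed lazily
        return true;
    return false;
  }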

llvm-svn: 158029
2012-06-05 22:02:15 +00:00
Jakob Stoklund Olesen
4e37f536bf Remove the last remat-related code from LiveIntervalAnalysis.
Rematerialization is handled by LiveRangeEdit now.

llvm-svn: 157974
2012-06-05 01:06:15 +00:00
Jakob Stoklund Olesen
04457a5cc7 Delete dead code.
llvm-svn: 157963
2012-06-04 23:01:41 +00:00
Jakob Stoklund Olesen
17fcb85c11 Switch LiveIntervals member variable to LLVM naming standards.
No functional change.

llvm-svn: 157957
2012-06-04 22:39:14 +00:00
Lang Hames
953fa2637f Clear the entering, exiting and internal ranges of a bundle before collecting
ranges for the instruction about to be bundled. This fixes a bug in an external
project where an assertion was triggered due to spurious 'multiple defs' within
the bundle.

Patch by Ivan Llopard. Thanks Ivan!

llvm-svn: 157632
2012-05-29 18:19:54 +00:00
Jakob Stoklund Olesen
9d172f4b64 LiveRangeQuery simplifies shrinkToUses().
llvm-svn: 157145
2012-05-20 02:54:52 +00:00
Pete Cooper
d839376c4b LiveIntervalUpdate validators weren't recorded after the calls to std::for_each. Turns out std::for_each doesn't update the functor passed in, but instead copy-constructs a new one.
llvm-svn: 155041
2012-04-18 20:29:17 +00:00
Andrew Trick
6663a6f2ef misched: fix LiveInterval update for bottom-up scheduling
llvm-svn: 153162
2012-03-21 04:12:16 +00:00
Andrew Trick
24064c0f39 misched: fix LI update for bottom-up.
llvm-svn: 153158
2012-03-21 04:12:01 +00:00
Jakob Stoklund Olesen
d184d231a8 Stop fixing bad machine code in LiveIntervalAnalysis.
The first def of a virtual register cannot also read the register.
Assert on such bad machine code instead of trying to fix it.
TwoAddressInstructionPass should never create code like that.

llvm-svn: 152010
2012-03-04 19:19:10 +00:00
Jakob Stoklund Olesen
1885bf0ac7 Move getBundleStart() into MachineInstrBundle.h.
This allows the function to be inlined, and makes it suitable for use in
getInstructionIndex().

Also provide a const version. C++ is great for touch typing practice.

llvm-svn: 151782
2012-03-01 01:26:01 +00:00
Lang Hames
15c7539a46 Add API "handleMoveIntoBundle" for updating liveness when moving instructions into
bundles. This method takes a bundle start and an MI being bundled, and makes
the intervals for the MI's operands appear to start/end on the bundle start.

Also fixes some minor cosmetic issues (whitespace, naming convention) in the
HMEditor code.

llvm-svn: 151099
2012-02-21 22:29:38 +00:00
Lang Hames
1b774db571 Fix some bugs in HMEditor's moveAllOperandsInto logic.
llvm-svn: 151006
2012-02-21 00:00:36 +00:00
Benjamin Kramer
576a9ea6ca Silence operator precedence warning.
llvm-svn: 150921
2012-02-19 12:25:07 +00:00
Lang Hames
88e5e4d72e Add machinery for pushing live ranges onto bundle starts while bundling.
llvm-svn: 150915
2012-02-19 07:13:05 +00:00
Lang Hames
bdb4efcb20 Simplify moveEnteringDownFrom rules.
llvm-svn: 150914
2012-02-19 06:13:56 +00:00
Lang Hames
831e129c9d Skip through instructions rather than operands when looking for last use slot.
llvm-svn: 150912
2012-02-19 04:38:25 +00:00
Lang Hames
8b2e08187a Fix TODO and trailing whitespace.
llvm-svn: 150910
2012-02-19 03:09:55 +00:00
Lang Hames
b946cb5e75 Defer sanity checks on live intervals until after all have been updated. Hold (LiveInterval, LiveRange) pairs to update, rather than vregs.
llvm-svn: 150909
2012-02-19 03:00:30 +00:00
Lang Hames
095e9964bd Bring HMEditor into line with LLVM coding standards.
llvm-svn: 150851
2012-02-17 23:43:40 +00:00
Matt Beaumont-Gay
a45b6e23d0 Sink variable into assert
llvm-svn: 150841
2012-02-17 21:40:48 +00:00
Lang Hames
27171ecf20 Add support for regmask slots to HMEditor. Also fixes a comment error.
llvm-svn: 150840
2012-02-17 21:29:41 +00:00
Lang Hames
ed9553242f Refactor 'handleMove' code in live intervals. Clients of LiveIntervals won't see
any changes.

Internally this adds a private inner class, HMEditor, to LiveIntervals. HMEditor provides
an API for updating live intervals when code is moved or bundled.

llvm-svn: 150826
2012-02-17 18:44:18 +00:00
Lang Hames
680ee0f7e0 Oops - isRegLiveIntoSuccessor is used in non-assert builds now. Remove NDEBUG guards.
llvm-svn: 150771
2012-02-17 00:51:32 +00:00
Lang Hames
89b5263016 Turn off assertion; conservatively compute liveness for live-in unallocatable registers.
llvm-svn: 150768
2012-02-17 00:18:18 +00:00
Lang Hames
0e954f92c1 Make LiveIntervals::handleMove() bundle aware.
llvm-svn: 150630
2012-02-15 23:21:33 +00:00
Lang Hames
5edc051415 Fix assertion condition.
llvm-svn: 150627
2012-02-15 22:45:51 +00:00
Lang Hames
641eeb6959 Remove overly conservative assert.
llvm-svn: 150608
2012-02-15 19:04:53 +00:00
Lang Hames
5c8cc9c7f0 Don't emit live ranges for physregs live-ins that are dead.
llvm-svn: 150553
2012-02-15 01:31:10 +00:00
Lang Hames
5c5532d32d Disentangle moving a machine instr from updating LiveIntervals.
llvm-svn: 150552
2012-02-15 01:23:52 +00:00
Jakob Stoklund Olesen
248b6c4556 Use the proper clobber check in handleLiveInRegister().
When a physreg is live in to a basic block, look for any instruction in
the block that clobbers the physreg.

The instruction doesn't have to properly redefine the register, any
overlapping clobber is OK.

This slightly changes live ranges when compiling with register masks.

llvm-svn: 150528
2012-02-14 23:46:24 +00:00
Jakob Stoklund Olesen
bf8c36fea9 Dump live intervals in numerical order.
The old DenseMap hashed order was very confusing.

llvm-svn: 150527
2012-02-14 23:46:21 +00:00
Lang Hames
3a181593ec Don't create a new copy of reserved regs - we already have one handy.
llvm-svn: 150525
2012-02-14 23:06:12 +00:00
Lang Hames
11ccc79191 Tighten physical register invariants: Allocatable physical registers can
only be live in to a block if it is the function entry point or a landing pad.

llvm-svn: 150494
2012-02-14 18:51:53 +00:00
Lang Hames
724a5e8fe1 Use convenience function for consistency.
llvm-svn: 150457
2012-02-14 03:04:29 +00:00
Andrew Trick
b94e7e93b2 LiveIntervalAnalysis does not depend on MachineLoopInfo.
llvm-svn: 150411
2012-02-13 20:44:42 +00:00
Andrew Trick
c3cc8fa604 RegAlloc superpass: includes phi elimination, coalescing, and scheduling.
Creates a configurable regalloc pipeline.

Ensure specific llc options do what they say and nothing more: -regalloc=... has no effect other than selecting the allocator pass itself. This patch introduces a new umbrella flag, "-optimize-regalloc", to enable/disable the optimizing regalloc "superpass". This allows, for example, testing coalescing and scheduling under -O0 or vice-versa.

When a CodeGen pass requires the MachineFunction to have a particular property, we need to explicitly define that property so it can be directly queried rather than naming a specific Pass. For example, to check for SSA, use MRI->isSSA, not addRequired<PHIElimination>.

CodeGen transformation passes are never "required" as an analysis.

ProcessImplicitDefs does not require LiveVariables.

We have a plan to massively simplify some of the early passes within the regalloc superpass.

llvm-svn: 150226
2012-02-10 04:10:36 +00:00
Lang Hames
d211d8e431 Remove unused 'isAlias' parameter.
llvm-svn: 150224
2012-02-10 03:19:36 +00:00
Jakob Stoklund Olesen
e60cd3cc02 Constrain the regmask search space for local live ranges.
When checking a local live range for interference, restrict the binary
search to the single block.

llvm-svn: 150220
2012-02-10 01:31:31 +00:00
Jakob Stoklund Olesen
4fc4d8d8ab Cache basic block boundaries for faster RegMaskSlots access.
Provide API to get a list of register mask slots and bits in a basic
block.
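
A usage sketch of the per-block accessors (names and signatures here are
assumptions):

  // Regmask slots and clobber bits restricted to a single basic block.
  ArrayRef<SlotIndex> Slots = LIS->getRegMaskSlotsInBlock(MBB->getNumber());
  ArrayRef<const uint32_t*> Bits = LIS->getRegMaskBitsInBlock(MBB->getNumber());
  for (unsigned i = 0, e = Slots.size(); i != e; ++i)
    if (MachineOperand::clobbersPhysReg(Bits[i], PhysReg)) {
      // PhysReg is clobbered by a call at Slots[i] inside this block.
    }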

llvm-svn: 150219
2012-02-10 01:26:29 +00:00
Jakob Stoklund Olesen
ac14d7774a Optimize LiveIntervals::intervalIsInOneMBB().
No looping and binary searches necessary.

Return a pointer to the containing block instead of just a bool.

llvm-svn: 150218
2012-02-10 01:23:55 +00:00
Lang Hames
4defdead69 Fix kill flags when moving instructions using LiveIntervals::moveInstr(...).
llvm-svn: 150150
2012-02-09 04:45:38 +00:00
Lang Hames
4147d04e10 Remove assertion. Not all use operands are reads.
llvm-svn: 150149
2012-02-09 04:39:48 +00:00
Jakob Stoklund Olesen
74f9e359dd Keep track of register masks in LiveIntervalAnalysis.
Build an ordered vector of register mask operands (i.e., calls) when
computing live intervals. Provide a checkRegMaskInterference() function
that computes a bit mask of usable registers for a live range.

This is a quick way of determining whether a live range crosses any calls,
and of restricting it to the callee-saved registers if it does.
Previously, we had to discover call clobbers for each candidate register
independently.
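
A usage sketch (the helper and its surrounding names are assumptions):

  #include "llvm/ADT/BitVector.h"

  // One query answers both "does LI cross a call?" and "which physregs
  // survive every crossed call?".
  bool isUsableAcrossCalls(LiveIntervals &LIS, LiveInterval &LI,
                           unsigned PhysReg) {
    BitVector UsableRegs;
    if (!LIS.checkRegMaskInterference(LI, UsableRegs))
      return true;                     // LI crosses no calls at all
    return UsableRegs.test(PhysReg);   // set bit = not clobbered by any call
  }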

llvm-svn: 150077
2012-02-08 17:33:45 +00:00
Andrew Trick
bbd036d602 Added MachineInstr::isBundled() to check if an instruction is part of a bundle.
llvm-svn: 150044
2012-02-08 02:17:25 +00:00
Jakob Stoklund Olesen
af71c23a28 Drop the REDEF_BY_EC VNInfo flag.
A live range that has an early clobber tied redef now looks like a
normal tied redef, except the early clobber def uses the early clobber
slot.

This is enough to handle any strange interference problems.

llvm-svn: 149769
2012-02-04 05:51:25 +00:00
Jakob Stoklund Olesen
8492adf8ed Correctly terminate a physreg redefined by an early clobber.
I don't have a test that fails because of this, but a test case like
CodeGen/X86/2009-12-01-EarlyClobberBug.ll exposes the problem.  EAX is
redefined by a tied early clobber operand on inline asm, and the live
range should look like this:

  %EAX,inf = [48r,64e:0)[64e,80r:1)  0@48r 1@64e

Previously, the two values got merged:

  %EAX,inf = [48r,80r:0)  0@48r

With this bug fixed, the REDEF_BY_EC VNInfo flag is no longer needed.

llvm-svn: 149768
2012-02-04 05:41:20 +00:00
Jakob Stoklund Olesen
44c5cece20 Don't store COPY pointers in VNInfo.
If a value is defined by a COPY, that instruction can easily and cheaply
be found by getInstructionFromIndex(VNI->def).

This reduces the size of VNInfo from 24 to 16 bytes, and improves
llc compile time by 3%.
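
A sketch of the lookup that replaces the stored pointer:

  // Recover the defining COPY from the def index only when it is needed.
  if (MachineInstr *DefMI = LIS->getInstructionFromIndex(VNI->def))
    if (DefMI->isCopy()) {
      // ... same information the old VNInfo copy pointer used to carry ...
    }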

llvm-svn: 149763
2012-02-04 05:20:49 +00:00
Jakob Stoklund Olesen
6fa2bf030b Trim headers.
llvm-svn: 149722
2012-02-03 23:51:15 +00:00
Jakob Stoklund Olesen
d29f6d792b Delete some dead code.
llvm-svn: 149717
2012-02-03 21:32:06 +00:00
Matt Beaumont-Gay
b94c5f7413 Here's a new one: GCC was complaining about an only-used-in-asserts
*function*. Wrap the function in #ifndef NDEBUG.

llvm-svn: 149259
2012-01-30 19:26:20 +00:00
Lang Hames
bbf83feb7f Silence warning about parens for && within ||
llvm-svn: 149152
2012-01-27 23:52:25 +00:00
Lang Hames
e0f0352889 Add a "moveInstr" method to LiveIntervals. This can be used to move instructions
around within a basic block while maintaining live-intervals.

Updated ScheduleTopDownLive in MachineScheduler.cpp to use the moveInstr API
when reordering MIs.

llvm-svn: 149147
2012-01-27 22:36:19 +00:00
Lang Hames
ffd4eb644b Don't add live ranges for aliases of physregs that are live in to the
function. They don't appear to be used, and are inconsistent with handling of
other physreg intervals (i.e. intervals that are not live-in) where ranges are
not inserted for aliases.

llvm-svn: 148986
2012-01-25 22:11:06 +00:00
Lang Hames
cd34c5aa54 Always break upon finding a vreg operand (in Release as well as +Asserts). Remove assertion which can no longer trigger.
llvm-svn: 148984
2012-01-25 21:53:23 +00:00
Lang Hames
0958707826 Fixed macro condition.
llvm-svn: 148408
2012-01-18 19:48:31 +00:00
Jakob Stoklund Olesen
63258fcd99 Exclusively use SplitAnalysis::getLastSplitPoint().
Delete the alternative implementation in LiveIntervalAnalysis.

These functions computed the same thing, but SplitAnalysis caches the
result.

llvm-svn: 147911
2012-01-11 02:07:00 +00:00
Jakob Stoklund Olesen
3221d31706 Use the 'regalloc' debug tag for most register allocator tracing.
llvm-svn: 147725
2012-01-07 07:39:47 +00:00
Lang Hames
57893457a4 Clarified assert text.
llvm-svn: 147471
2012-01-03 20:05:57 +00:00
Evan Cheng
1acd685d87 Add bundle aware API for querying instruction properties and switch the code
generator to it. For non-bundle instructions, these behave exactly the same
as the MC layer API.

For properties like mayLoad / mayStore, the query looks into the bundle and
returns true if any of the bundled instructions has the property.
For properties like isPredicable, only return true if *all* of the bundled
instructions have the property.
For properties like canFoldAsLoad, isCompare, conservatively return false for
bundles.
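
A usage sketch; the QueryType argument names follow this scheme and should be
treated as assumptions:

  // "Any" semantics: true if any instruction in the bundle may load.
  if (MI->mayLoad(MachineInstr::AnyInBundle)) { /* ... */ }

  // "All" semantics: true only if every bundled instruction is predicable.
  if (MI->isPredicable(MachineInstr::AllInBundle)) { /* ... */ }

  // Ignore the bundle and query just this MI, as the MC layer API would.
  if (MI->mayStore(MachineInstr::IgnoreBundle)) { /* ... */ }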

llvm-svn: 146026
2011-12-07 07:15:52 +00:00
Jakob Stoklund Olesen
6035535c96 Fix early-clobber handling in shrinkToUses.
I broke this in r144515, it affected most ARM testers.

<rdar://problem/10441389>

llvm-svn: 144547
2011-11-14 18:45:38 +00:00
Jakob Stoklund Olesen
44da32bc7f Use kill slots instead of the previous slot in shrinkToUses.
It's more natural to use the actual end points.

llvm-svn: 144515
2011-11-13 23:53:25 +00:00
Jakob Stoklund Olesen
4255046343 Terminate all dead defs at the dead slot instead of the 'next' slot.
This makes no difference for normal defs, but early clobber dead defs
now look like:

  [Slot_EarlyClobber; Slot_Dead)

instead of:

  [Slot_EarlyClobber; Slot_Register).

Live ranges for normal dead defs look like:

  [Slot_Register; Slot_Dead)

as before.

llvm-svn: 144512
2011-11-13 22:42:13 +00:00
Jakob Stoklund Olesen
170f021d76 Simplify early clobber slots a bit.
llvm-svn: 144507
2011-11-13 22:05:42 +00:00
Jakob Stoklund Olesen
9b34607bdf Rename SlotIndexes to match how they are used.
The old naming scheme (load/use/def/store) can be traced back to an old
linear scan article, but the names don't match how slots are actually
used.

The load and store slots are not needed after the deferred spill code
insertion framework was deleted.

The use and def slots don't make any sense because we are using
half-open intervals as is customary in C code, but the names suggest
closed intervals.  In reality, these slots were used to distinguish
early-clobber defs from normal defs.

The new naming scheme also has 4 slots, but the names match how the
slots are really used.  This is a purely mechanical renaming, but some
of the code makes a lot more sense now.
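
The renamed slots in use (the accessor names are assumptions based on the new
scheme):

  SlotIndex Idx   = LIS->getInstructionIndex(MI);
  SlotIndex Block = Idx.getBaseIndex();           // block slot: live-in/PHI defs
  SlotIndex EC    = Idx.getRegSlot(/*EC=*/true);  // early-clobber def slot
  SlotIndex Reg   = Idx.getRegSlot();             // normal use/def position
  SlotIndex Dead  = Idx.getDeadSlot();            // end point of a dead def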

llvm-svn: 144503
2011-11-13 20:45:27 +00:00
Jakob Stoklund Olesen
5a265aeb70 Delete the old spilling framework from LiveIntervalAnalysis.
This is dead code, all register allocators use InlineSpiller.

llvm-svn: 144478
2011-11-12 23:57:05 +00:00
Jakob Stoklund Olesen
6be9354316 Add a FIXME.
TwoAddressInstructionPass should annotate instructions with <undef>
flags when it lowers REG_SEQUENCE instructions.  LiveIntervals should not
be in the business of modifying code (except for kill flags, perhaps).

llvm-svn: 141187
2011-10-05 16:51:21 +00:00
Jakob Stoklund Olesen
a6045464c8 Allow <undef> flags on def operands as well as uses.
The <undef> flag says that a MachineOperand doesn't read its register,
or doesn't depend on the previous value of its register.

A full register def never depends on the previous register value.  A
partial register def may depend on the previous value if it is intended
to update part of a register.

For example:

  %vreg10:dsub_0<def,undef> = COPY %vreg1
  %vreg10:dsub_1<def> = COPY %vreg2

The first copy instruction defines the full %vreg10 register with the
bits not covered by dsub_0 defined as <undef>.  It is not considered a
read of %vreg10.

The second copy modifies part of %vreg10 while preserving the rest.  It
has an implicit read of %vreg10.

This patch adds a MachineOperand::readsReg() method to determine if an
operand reads its register.

Previously, this was modelled by adding a full-register <imp-def>
operand to the instruction.  This approach makes it possible to
determine directly from a MachineOperand if it reads its register.  No
scanning of MI operands is required.
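
A sketch of the predicate implied by these rules (not necessarily the exact
implementation):

  bool MachineOperand::readsReg() const {
    // Uses always read.  A def reads the old value only when it is a partial
    // (subregister) def that is not marked <undef>.
    return !isUndef() && (isUse() || getSubReg());
  }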

llvm-svn: 141124
2011-10-04 21:49:33 +00:00
Jakob Stoklund Olesen
1c42989edf Speed up LiveIntervals::shrinkToUse with some caching.
Blocks with multiple PHI successors only need to go on the worklist
once.  Use a SmallPtrSet to track the live-out blocks that have already
been handled.  This is a lot faster than the two live range checks we
would otherwise do.
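
A sketch of the dedup, assuming a plain SmallPtrSet guard around the worklist:

  #include "llvm/ADT/SmallPtrSet.h"

  // Each live-out block is queued at most once, even when it feeds several
  // PHI successors.
  SmallPtrSet<const MachineBasicBlock*, 16> LiveOut;
  for (const MachineBasicBlock *Pred : MBB->predecessors()) {
    if (!LiveOut.insert(Pred).second)
      continue;                       // already handled this block
    // ... queue Pred's live-out value for the shrink worklist ...
  }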

Also stop recomputing hasPHIKill flags.  Like RenumberValues(), it is
conservatively correct to leave them in, and they are not used for
anything important.

llvm-svn: 139792
2011-09-15 15:24:16 +00:00
Jakob Stoklund Olesen
8e739db8a2 Switch extendInBlock() to take a kill slot instead of the last use slot.
Three out of four clients prefer this interface which is consistent with
extendIntervalEndTo() and LiveRangeCalc::extend().

llvm-svn: 139604
2011-09-13 16:47:56 +00:00
Jakob Stoklund Olesen
ffe1dbc840 When a physreg is live-in and live through a basic block, make sure its live
range covers the entire block.

The live range can't be terminated at a random instruction.

llvm-svn: 130619
2011-04-30 19:12:33 +00:00
Chris Lattner
0304b82f80 Fix a ton of comment typos found by codespell. Patch by
Luis Felipe Strano Moraes!

llvm-svn: 129558
2011-04-15 05:18:47 +00:00
Jakob Stoklund Olesen
d224a3530a Don't add live ranges for sub-registers when clobbering a physical register.
Both coalescing and register allocation already check aliases for interference,
so these extra segments are only slowing us down.

This speeds up both linear scan and the greedy register allocator.

llvm-svn: 129283
2011-04-11 18:08:10 +00:00
Jakob Stoklund Olesen
3e349f2950 Recompute hasPHIKill flags when shrinking live intervals.
PHI values may be deleted, causing the flags to be wrong. This fixes PR9616.

llvm-svn: 129092
2011-04-07 18:43:14 +00:00
Jakob Stoklund Olesen
1454095d5e Allow coalescing with reserved physregs in certain cases:
When a virtual register has a single value that is defined as a copy of a
reserved register, permit that copy to be joined. These virtual registers are
usually copies of the stack pointer:

  %vreg75<def> = COPY %ESP; GR32:%vreg75
  MOV32mr %vreg75, 1, %noreg, 0, %noreg, %vreg74<kill>
  MOV32mi %vreg75, 1, %noreg, 8, %noreg, 0
  MOV32mi %vreg75<kill>, 1, %noreg, 4, %noreg, 0
  CALLpcrel32 ...

Coalescing these virtual registers early decreases register pressure.
Previously, they were coalesced by RALinScan::attemptTrivialCoalescing after
register allocation was completed.

The lower register pressure causes the mcinst-lowering-cmp0.ll test case to fail
because it depends on linear scan spilling a particular register.

I am deleting 2008-08-05-SpillerBug.ll because it is counting the number of
instructions emitted, and its revision history shows the 'correct' count being
edited many times.

llvm-svn: 128845
2011-04-04 21:00:03 +00:00
NAKAMURA Takumi
e0a71fb3e0 lib/CodeGen/LiveIntervalAnalysis.cpp: [PR9590] Don't use std::pow(float,float) here.
A real "powf()" is not available on every host (though it is on others).
For consistency, call std::pow(double,double) instead.
Otherwise, precision differences could lead to unstable regalloc and stack coloring.
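
A sketch of the portable form (the variable names are placeholders):

  #include <cmath>

  // Call the double overload explicitly: powf() is not available on every
  // host, and mixing float/double overloads gives host-dependent precision.
  float Weight = std::pow(double(Base), double(Exp));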

llvm-svn: 128629
2011-03-31 12:11:33 +00:00
Jakob Stoklund Olesen
2956265983 Accept instructions that read undefined values.
This is not supposed to happen, but I have seen the x86 rematerializer getting
confused when rematerializing partial redefs.

llvm-svn: 127857
2011-03-18 03:06:04 +00:00