mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-19 19:12:56 +02:00
Commit Graph

322 Commits

Author SHA1 Message Date
Evan Cheng
b3c82db63d Change TargetInstrInfo::isMoveInstr to return source and destination sub-register indices as well.
llvm-svn: 62600
2009-01-20 19:12:24 +00:00
Evan Cheng
a70ecc2f51 The coalescer does not coalesce a virtual register to a physical register if any of the physical register's sub-register live intervals overlaps with the virtual register. This is overly conservative. It prevents an extract_subreg from being coalesced away:
v1024 = EDI  // not killed
      =
      = EDI

One possible solution is for the coalescer to examine the sub-register live intervals in the same manner as the physical register. Another possibility is to examine defs and uses (when needed) of sub-registers. Both solutions are too expensive. For now, look for "short virtual intervals" and scan instructions to look for conflicts instead.

This is a small win on x86-64. e.g. It shaves 403.gcc by ~80 instructions.

llvm-svn: 61847
2009-01-07 02:08:57 +00:00
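
A minimal C++ sketch of the "short virtual interval" check described above; Instr and conflictsInRange are made-up names, not the real coalescer data structures:

#include <cstdio>
#include <vector>

// Simplified model: each instruction records the physical registers it defines.
struct Instr {
  std::vector<unsigned> Defs;
};

// Return true if PhysReg or one of its sub-registers (SubRegs) is clobbered by
// an instruction other than the copy itself inside the short half-open index
// range [Start, End) covered by the virtual interval.
bool conflictsInRange(const std::vector<Instr> &Code, unsigned PhysReg,
                      const std::vector<unsigned> &SubRegs,
                      unsigned Start, unsigned End, unsigned CopyIdx) {
  for (unsigned i = Start; i < End; ++i) {
    if (i == CopyIdx)
      continue;
    for (unsigned D : Code[i].Defs) {
      if (D == PhysReg)
        return true;
      for (unsigned S : SubRegs)
        if (D == S)
          return true;
    }
  }
  return false;
}

int main() {
  // Pretend EDI is register 1 with sub-register DI = 2.  The copy v1024 = EDI
  // sits at index 0 and nothing in [0, 3) redefines EDI or DI, so the scan
  // finds no conflict and the copy could be coalesced.
  std::vector<Instr> Code = {{{1}}, {{}}, {{5}}};
  std::printf("conflict: %s\n",
              conflictsInRange(Code, 1, {2}, 0, 3, 0) ? "yes" : "no");
}
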
Evan Cheng
da55c4ffb7 Fix PR3149. If an early clobber def is a physical register and it is tied to an input operand, it effectively extends the live range of the physical register. Currently we do not have a good way to represent this.
172     %ECX<def> = MOV32rr %reg1039<kill>
180     INLINEASM <es:subl $5,$1
        sbbl $3,$0>, 10, %EAX<def>, 14, %ECX<earlyclobber,def>, 9, %EAX<kill>,
36, <fi#0>, 1, %reg0, 0, 9, %ECX<kill>, 36, <fi#1>, 1, %reg0, 0
188     %EAX<def> = MOV32rr %EAX<kill>
196     %ECX<def> = MOV32rr %ECX<kill>
204     %ECX<def> = MOV32rr %ECX<kill>
212     %EAX<def> = MOV32rr %EAX<kill>
220     %EAX<def> = MOV32rr %EAX
228     %reg1039<def> = MOV32rr %ECX<kill>

The early clobber operand ties ECX input to the ECX def.

The live interval of ECX is represented as this:
%reg20,inf = [46,47:1)[174,230:0)  0@174-(230) 1@46-(47)

The right way to represent this is something like
%reg20,inf = [46,47:2)[174,182:1)[181,230:0)  0@174-(182) 1@181-(230) 2@46-(47)

Of course that won't work since that means overlapping live ranges defined by two val#.

The workaround for now is to add a bit to val# which says the val# is redefined by an early clobber def somewhere. This prevents the move at 228 from being optimized away by SimpleRegisterCoalescing::AdjustCopiesBackFrom.

llvm-svn: 61259
2008-12-19 20:58:01 +00:00
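
A toy illustration of that workaround; ValNo and its RedefinedByEC flag are stand-ins, not the actual VNInfo API:

#include <cstdio>

// Simplified stand-in for a value number: the index where it is defined plus
// the workaround bit described above.
struct ValNo {
  unsigned DefIdx;
  bool RedefinedByEC;   // set when an earlyclobber def redefines this val#
};

// A copy like the MOV32rr at 228 may only be removed if the value it copies
// is not pinned by an earlyclobber redefinition.
bool canRemoveCopy(const ValNo &SrcVal) {
  return !SrcVal.RedefinedByEC;
}

int main() {
  ValNo Plain = {46, false};
  ValNo Tied  = {174, true};   // ECX redefined by the earlyclobber INLINEASM def
  std::printf("plain: %d  tied: %d\n", canRemoveCopy(Plain), canRemoveCopy(Tied));
}
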
Dan Gohman
7954facae5 Clarify some comments.
llvm-svn: 60683
2008-12-08 04:53:23 +00:00
Evan Cheng
03ef7cf749 Reason #3 from 60595 doesn't hold true. If we can fold a PIC load from constpool into a use, the rewrite happens at the time of spill (not in VirtRegMap). Later on, if the GlobalBaseReg is spilled, the spiller can see that the use reads GlobalBaseReg and do the right thing.
llvm-svn: 60596
2008-12-05 17:41:31 +00:00
Evan Cheng
6879b66c9e Fix comment.
llvm-svn: 60592
2008-12-05 17:00:16 +00:00
Dan Gohman
1e7dff35a6 Drop the reg argument to isRegReDefinedByTwoAddr, which was redundant.
llvm-svn: 60586
2008-12-05 05:45:42 +00:00
Dan Gohman
5dad0993a9 Rename isSimpleLoad to canFoldAsLoad, to better reflect its meaning.
llvm-svn: 60487
2008-12-03 18:15:48 +00:00
Dan Gohman
7b56f8686f LiveRanges are represented as half-open ranges. Fix the findLiveInMBBs code
and the LiveInterval.h top-level comment accordingly. This fixes blocks
having spurious live-in registers in boundary cases.

llvm-svn: 60092
2008-11-26 05:50:31 +00:00
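
A small self-contained example of what half-open means here; LiveRange and contains are illustrative only, not the types from LiveInterval.h:

#include <cstdio>

// A live range over instruction indices, half-open: [Start, End).
struct LiveRange {
  unsigned Start, End;
};

// Correct membership test for a half-open range: the index End itself is NOT
// covered.  Testing `Idx <= LR.End` instead is exactly the kind of off-by-one
// that makes a register look live into the following block.
bool contains(const LiveRange &LR, unsigned Idx) {
  return Idx >= LR.Start && Idx < LR.End;
}

int main() {
  LiveRange LR = {10, 20};
  std::printf("covers 19: %d  covers 20: %d\n", contains(LR, 19), contains(LR, 20));
}
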
Devang Patel
a2ccbea45a Silence unused variable warnings.
llvm-svn: 59841
2008-11-21 20:00:59 +00:00
Dan Gohman
f63a02ae6c Use find_first/find_next to iterate through all the set bits in a
BitVector, instead of manually testing each bit.

llvm-svn: 59246
2008-11-13 16:31:27 +00:00
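
The iteration pattern in question, shown against a tiny stand-in class that mimics BitVector's find_first/find_next convention (returning -1 when no further bit is set):

#include <cstdio>
#include <vector>

// Tiny stand-in mirroring the find_first()/find_next() interface.
struct TinyBitVector {
  std::vector<bool> Bits;
  int find_first() const { return find_next(-1); }
  int find_next(int Prev) const {
    for (unsigned i = Prev + 1; i < Bits.size(); ++i)
      if (Bits[i])
        return (int)i;
    return -1;
  }
};

int main() {
  TinyBitVector BV{{false, true, false, false, true, true}};
  // The commit's pattern: jump from set bit to set bit instead of testing
  // every index individually.
  for (int i = BV.find_first(); i != -1; i = BV.find_next(i))
    std::printf("set bit at %d\n", i);
}
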
Dan Gohman
ceb5b7477b Remove some debugging code made redundant by the change to do
coalescing as a separate pass rather than inside of
LiveIntervalAnalysis.

llvm-svn: 59146
2008-11-12 17:09:23 +00:00
Evan Cheng
cd21d433bb - Rewrite the code that updates register live intervals that are split.
- Create and update spill slot live intervals.
- Lots of bug fixes.

llvm-svn: 58367
2008-10-29 05:06:14 +00:00
David Greene
78744a795a Fix PR2634. Create new virtual registers from spills early so that we
can give them the same stack slot as the spilled interval if it is folded.
This prevents the fold/unfold code from pointing to the wrong register.

llvm-svn: 58255
2008-10-27 17:38:59 +00:00
Evan Cheng
a7a0aabf99 Avoid splitting an interval multiple times; avoid splitting re-materializable val# (for now).
llvm-svn: 58068
2008-10-24 02:05:00 +00:00
Evan Cheng
e742ce1a58 By min, I mean max.
llvm-svn: 57766
2008-10-18 05:21:37 +00:00
Evan Cheng
cf0977d7b1 When creating intervals, leave min(1, numdefs) holes after each instruction.
llvm-svn: 57765
2008-10-18 05:18:55 +00:00
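
Putting the two commits above together (min was meant to be max), a toy numbering scheme that leaves max(1, numdefs) holes after each instruction; this is not the real LiveIntervals index assignment:

#include <algorithm>
#include <cstdio>
#include <vector>

// Assign instruction numbers so that max(1, numdefs) unused indices ("holes")
// follow each instruction, giving every def room for its own slot.
std::vector<unsigned> numberInstructions(const std::vector<unsigned> &NumDefs) {
  std::vector<unsigned> Index;
  unsigned Cur = 0;
  for (unsigned D : NumDefs) {
    Index.push_back(Cur);
    Cur += 1 + std::max(1u, D);   // the instruction itself plus its holes
  }
  return Index;
}

int main() {
  // Three instructions defining 1, 0 and 2 registers respectively.
  for (unsigned I : numberInstructions({1, 0, 2}))
    std::printf("%u ", I);        // prints: 0 2 4
  std::printf("\n");
}
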
Owen Anderson
c9a628af26 Add an option to enable StrongPHIElimination, for ease of testing.
llvm-svn: 57259
2008-10-07 20:22:28 +00:00
Dan Gohman
30c5ce1b7d Switch the MachineOperand accessors back to the short names like
isReg, etc., from isRegister, etc.

llvm-svn: 57006
2008-10-03 15:45:36 +00:00
Owen Anderson
b89aa47081 Fix a simple error in renumbering kill markers that took an inordinate amount of time to track down.
llvm-svn: 56889
2008-09-30 22:51:54 +00:00
Evan Cheng
1c8ff02eeb Re-apply 56835 along with header file changes.
llvm-svn: 56848
2008-09-30 15:44:16 +00:00
Duncan Sands
a2c8482495 Revert commit 56835 since it breaks the build.
"If a re-materializable instruction has a register
operand, the spiller will change the register operand's
spill weight to HUGE_VAL to avoid it being spilled.
However, if the operand is already in the queue ready
to be spilled, avoid re-materializing it".

llvm-svn: 56837
2008-09-30 10:00:30 +00:00
Evan Cheng
4eee17f4fb If a re-materializable instruction has a register operand, the spiller will change the register operand's spill weight to HUGE_VAL to avoid it being spilled. However, if the operand is already in the queue ready to be spilled, avoid re-materializing it.
llvm-svn: 56835
2008-09-30 06:36:58 +00:00
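
A rough sketch of that decision, with invented names (Interval, tryRematerialize) and FLT_MAX standing in for the HUGE_VAL weight bump; the real spiller logic is more involved:

#include <cfloat>
#include <cstdio>
#include <set>

// Toy interval: just a register number and a spill weight.
struct Interval {
  unsigned Reg;
  float Weight;
};

// Re-materializing the def would pin OperandLI's weight so it is never
// spilled, so skip remat if that operand is already queued for spilling.
bool tryRematerialize(Interval &OperandLI, const std::set<unsigned> &SpillQueue) {
  if (SpillQueue.count(OperandLI.Reg))
    return false;                // operand is about to be spilled anyway
  OperandLI.Weight = FLT_MAX;    // stand-in for the HUGE_VAL weight bump
  return true;
}

int main() {
  Interval OperandLI = {1024, 2.5f};
  std::set<unsigned> SpillQueue = {1024};   // the operand is already queued
  std::printf("remat: %d\n", tryRematerialize(OperandLI, SpillQueue));
}
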
Dale Johannesen
bc29bec7f8 Next round of earlyclobber handling. Approach the
RA problem by expanding the live interval of an
earlyclobber def back one slot.  Remove
overlap-earlyclobber throughout.  Remove
earlyclobber bits and their handling from
live intervals.

llvm-svn: 56539
2008-09-24 01:07:17 +00:00
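
A toy version of the back-one-slot trick, using a simplified half-open range type rather than the real live-interval representation:

#include <cstdio>

// An earlyclobber def written at slot S is treated as if it started at S-1,
// so the allocator sees it overlapping the inputs that are still live at S
// and will not hand it the same register.
struct Range { unsigned Start, End; };

Range earlyClobberRange(unsigned DefSlot, unsigned EndSlot) {
  return {DefSlot > 0 ? DefSlot - 1 : 0, EndSlot};
}

bool overlaps(const Range &A, const Range &B) {
  return A.Start < B.End && B.Start < A.End;   // half-open overlap test
}

int main() {
  Range Input = {10, 20};                    // input operand dies at the asm (20)
  Range ECDef = earlyClobberRange(20, 30);   // earlyclobber output of the asm
  // Without the one-slot expansion these ranges would not overlap, and the
  // output could be assigned the input's register.
  std::printf("overlap: %s\n", overlaps(Input, ECDef) ? "yes" : "no");
}
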
Owen Anderson
981113c401 Fetch the starting index of the block when assigning intervals. This gets live-in indices
correct in the presence of things like EH labels.

llvm-svn: 56410
2008-09-21 20:43:24 +00:00
Dale Johannesen
214ddc92d0 Remove AsmThatEarlyClobber etc. from LiveIntervalAnalysis
and redo as linked list walk.  Logic moved into RA.
Per review feedback.

llvm-svn: 56326
2008-09-19 01:02:35 +00:00
Dale Johannesen
99091ed94f Add a bit to mark operands of asm's that conflict
with an earlyclobber operand elsewhere.  Propagate
this bit and the earlyclobber bit through SDISel.
Change linear-scan RA not to allocate regs in a way 
that conflicts with an earlyclobber.  See also comments.

llvm-svn: 56290
2008-09-17 21:13:11 +00:00
Owen Anderson
553caa6ed4 Live intervals for live-in registers should begin at the beginning of a basic block, not at the first
instruction.  Also, their valno's should have an unknown def.  This has no effect currently, but was
causing issues when StrongPHIElimination was enabled.

llvm-svn: 56231
2008-09-15 22:00:38 +00:00
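
A small illustration of that rule, with made-up Range/ValNo types; the unknown def is modeled here as a sentinel index:

#include <cstdio>

struct ValNo { int DefIdx; bool IsUnknownDef; };
struct Range { unsigned Start, End; ValNo VN; };

// A live-in register's first range starts at the block's first index (not the
// first instruction's index), and its value number has no defining instruction.
Range makeLiveInRange(unsigned BlockStart, unsigned KillIdx) {
  return {BlockStart, KillIdx, ValNo{/*DefIdx=*/-1, /*IsUnknownDef=*/true}};
}

int main() {
  // Block begins at index 100, the first instruction is at 104, the value
  // dies at 116: the range must start at 100, not 104.
  Range R = makeLiveInRange(100, 116);
  std::printf("[%u,%u) unknown-def=%d\n", R.Start, R.End, R.VN.IsUnknownDef);
}
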
Dan Gohman
fa32c7c6d9 Remove isImm(), isReg(), and friends, in favor of
isImmediate(), isRegister(), and friends, to avoid confusion
about having two different names with the same meaning. I'm
not attached to the longer names, and would be ok with
changing to the shorter names if others prefer it.

llvm-svn: 56189
2008-09-13 17:58:21 +00:00
Owen Anderson
5325218d4b Allow the fast-path spilling code to attempt folding, while still leaving out remat and splitting.
llvm-svn: 55012
2008-08-19 22:12:11 +00:00
Owen Anderson
b110ebd37d The fast-path still needs to set kill markers and spill/restore points as appropriate.
With this patch, all of MultiSource/Applications and all of SPEC2000/2006 pass with
the SimpleSpiller and this fast-path enabled.

llvm-svn: 55000
2008-08-19 20:09:52 +00:00
Owen Anderson
94b85d88da Add a flag to enable the fast spilling path.
llvm-svn: 54958
2008-08-19 00:17:30 +00:00
Owen Anderson
f7f555763d Fix a few more bugs:
1) Assign stack slots to new temporaries.
2) Don't insert an interval into the return vector more than once.

llvm-svn: 54956
2008-08-18 23:41:04 +00:00
Owen Anderson
0c22b6d22c Fix several bugs in the new fast-path:
1) Remove an incorrect assertion.
2) Set the stack slot weight properly.
3) Resize the VirtRegMap when needed.

llvm-svn: 54949
2008-08-18 21:20:32 +00:00
Owen Anderson
c25fc6fc35 Clients of addIntervalForSpills expect the added intervals to be returned sorted by starting index.
llvm-svn: 54939
2008-08-18 19:52:22 +00:00
Owen Anderson
751ff13625 Simplify the fast-path interval spilling by using MachineRegisterInfo::reg_iterator.
llvm-svn: 54930
2008-08-18 18:38:12 +00:00
Owen Anderson
39bb486001 Resurrect some ancient code to add spill ranges without attempting folding, remat, or splitting. This code has been updated to current APIs
insofar as it compiles and, in theory, works, but does not take advantage of recent advancements.  For instance, it could be improved by using
MachineRegisterInfo::use_iterator.

llvm-svn: 54924
2008-08-18 18:05:32 +00:00
Owen Anderson
de29970fc6 Expunge the last uses of std::map from LiveIntervals.
llvm-svn: 54766
2008-08-13 22:28:50 +00:00
Owen Anderson
550fc15832 Move r2iMap_ over to DenseMap from std::map.
llvm-svn: 54765
2008-08-13 22:08:30 +00:00
Owen Anderson
28f2e658f1 Make the allocation of LiveIntervals explicit, rather than holding them in the r2iMap_ by value. This will prevent references to them from being invalidated
if the map is changed.

llvm-svn: 54763
2008-08-13 21:49:13 +00:00
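
The pattern this change moves to, sketched with std::unordered_map standing in for DenseMap; the point is that heap-allocated intervals stay at fixed addresses however the map grows:

#include <cstdio>
#include <memory>
#include <unordered_map>

struct Interval { unsigned Reg; };

int main() {
  // Because the intervals are heap-allocated and the map only stores
  // pointers, each Interval stays at a fixed address no matter how the map's
  // table is reorganized.  (An open-addressed map such as DenseMap moves
  // by-value entries when it grows, which is what invalidated references
  // before this change.)
  std::unordered_map<unsigned, std::unique_ptr<Interval>> Map;
  Map[7] = std::make_unique<Interval>(Interval{7});
  Interval *Stable = Map[7].get();      // stays valid...
  for (unsigned i = 0; i < 1000; ++i)   // ...across many further insertions
    Map[i + 100] = std::make_unique<Interval>(Interval{i + 100});
  std::printf("still reg %u\n", Stable->Reg);
}
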
Owen Anderson
33cf898dff Oops, didn't mean to commit this.
llvm-svn: 54425
2008-08-06 20:58:38 +00:00
Owen Anderson
e95b75bed2 Only remap each VNInfo once when doing renumbering.
llvm-svn: 54420
2008-08-06 18:35:45 +00:00
Owen Anderson
e6dca3efcb Value numbers whose def index is a special sentinel value should not be remapped.
llvm-svn: 54218
2008-07-30 17:42:47 +00:00
Owen Anderson
377ad6cb31 More fixes for corner cases when remapping live range indices.
llvm-svn: 54186
2008-07-30 00:22:56 +00:00
Owen Anderson
bebb8e5d20 Don't decrement the BB remap when we don't need to.
llvm-svn: 54173
2008-07-29 21:15:44 +00:00
Dan Gohman
9653b21dc2 Fold the useful features of alist and alist_node into ilist and
a new ilist_node class, and remove them. Unlike alist_node,
ilist_node doesn't attempt to manage storage itself, so it avoids
the associated problems, including being opaque in gdb.

Adjust the Recycler class so that it doesn't depend on alist_node.
Also, change it to use explicit Size and Align parameters, allowing
it to work when the largest-sized node doesn't have the greatest
alignment requirement.

Change MachineInstr's MachineMemOperand list from a pool-backed
alist to a std::list for now.

llvm-svn: 54146
2008-07-28 21:51:04 +00:00
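
A minimal intrusive-list node in the spirit of ilist_node, showing what "doesn't attempt to manage storage itself" means; this is not the real ilist implementation:

#include <cstdio>

// The node only supplies the links and never owns or allocates the object's
// storage, which stays the caller's job (e.g. a pool or recycler).
template <typename T>
struct IListNode {
  T *Prev = nullptr;
  T *Next = nullptr;
};

struct MyInstr : IListNode<MyInstr> {
  int Opcode;
  explicit MyInstr(int Op) : Opcode(Op) {}
};

int main() {
  MyInstr A(1), B(2);   // storage lives on the stack here, not in the list
  A.Next = &B;
  B.Prev = &A;
  for (MyInstr *I = &A; I; I = I->Next)
    std::printf("opcode %d\n", I->Opcode);
}
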
Dan Gohman
f6391108c6 Fix a typo in a comment.
llvm-svn: 54136
2008-07-28 18:43:51 +00:00
Owen Anderson
8f9ab550a2 Revert my previous patch. In retrospect, this is completely the wrong way to fix this problem.
llvm-svn: 54072
2008-07-25 23:06:59 +00:00
Owen Anderson
9408721750 Special cases are needed when renumbering after a PHI has been removed. The interval previously defined
by the PHI needs to be extended to the beginning of its basic block, and the intervals that were inputs need to be trimmed to the end 
of their basic blocks.

llvm-svn: 54070
2008-07-25 22:32:01 +00:00
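
A toy model of the two fix-ups described above (extend the PHI-defined interval to its block start, trim the inputs to the ends of their blocks); the names are illustrative, not the actual StrongPHIElimination code:

#include <algorithm>
#include <cstdio>

struct Range { unsigned Start, End; };   // half-open over instruction indices
struct Block { unsigned Start, End; };

// The value formerly defined by the PHI must now be live from the top of the
// PHI's block.
void extendToBlockStart(Range &R, const Block &B) {
  R.Start = std::min(R.Start, B.Start);
}

// Each former PHI input only needs to reach the end of its own block.
void trimToBlockEnd(Range &R, const Block &B) {
  R.End = std::min(R.End, B.End);
}

int main() {
  Range PhiVal = {104, 132};          // was defined by the PHI at 104
  Range Input  = {60, 112};           // reached the PHI from a predecessor
  extendToBlockStart(PhiVal, Block{100, 140});
  trimToBlockEnd(Input, Block{40, 96});
  std::printf("phi value [%u,%u)  input [%u,%u)\n",
              PhiVal.Start, PhiVal.End, Input.Start, Input.End);
}
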
Owen Anderson
fe97082ddd Properly remap live ranges whose end indices are the end of the function.
llvm-svn: 54061
2008-07-25 21:07:13 +00:00