mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-21 12:02:58 +02:00
Commit Graph

11689 Commits

Author SHA1 Message Date
Bill Wendling
a8db395dc1 Revamp the SjLj "dispatch setup" intrinsic.
It needed to be moved closer to the setjmp statement, because the code directly
after the setjmp needs to know about values that are on the stack. Also, the
'bitcast' of the function context was causing a dead load. This wouldn't be too
horrible, except that at -O0 it wasn't optimized out, and because it wasn't
using the correct base pointer (if there is a VLA), it would try to access a
value from a garbage address.
<rdar://problem/9130540>

llvm-svn: 128873
2011-04-05 01:37:43 +00:00
Stuart Hastings
1635b37415 Revert 123704; it broke threaded LLVM.
llvm-svn: 128868
2011-04-05 00:37:28 +00:00
Jakob Stoklund Olesen
1454095d5e Allow coalescing with reserved physregs in certain cases:
When a virtual register has a single value that is defined as a copy of a
reserved register, permit that copy to be joined. These virtual registers are
usually copies of the stack pointer:

  %vreg75<def> = COPY %ESP; GR32:%vreg75
  MOV32mr %vreg75, 1, %noreg, 0, %noreg, %vreg74<kill>
  MOV32mi %vreg75, 1, %noreg, 8, %noreg, 0
  MOV32mi %vreg75<kill>, 1, %noreg, 4, %noreg, 0
  CALLpcrel32 ...

Coalescing these virtual registers early decreases register pressure.
Previously, they were coalesced by RALinScan::attemptTrivialCoalescing after
register allocation was completed.

The lower register pressure causes the mcinst-lowering-cmp0.ll test case to fail
because it depends on linear scan spilling a particular register.

I am deleting 2008-08-05-SpillerBug.ll because it is counting the number of
instructions emitted, and its revision history shows the 'correct' count being
edited many times.

llvm-svn: 128845
2011-04-04 21:00:03 +00:00
Jakob Stoklund Olesen
d5ddbadc69 Extract physreg joining policy to a separate method.
llvm-svn: 128844
2011-04-04 20:59:59 +00:00
Jakob Stoklund Olesen
78d65c6632 Stop caching basic block index ranges now that SlotIndexes can keep up.
llvm-svn: 128821
2011-04-04 15:32:15 +00:00
Jakob Stoklund Olesen
6092c3d81f Delete leftover data members.
llvm-svn: 128820
2011-04-04 15:32:11 +00:00
Jakob Stoklund Olesen
e5f6956148 Use InterferenceCache in RegAllocGreedy.
llvm-svn: 128765
2011-04-02 06:03:38 +00:00
Jakob Stoklund Olesen
f881310607 Add an InterferenceCache class for caching per-block interference ranges.
When the greedy register allocator is splitting multiple global live ranges, it
tends to look at the same interference data many times. The InterferenceCache
class caches queries for unaltered LiveIntervalUnions.

llvm-svn: 128764
2011-04-02 06:03:35 +00:00
Jakob Stoklund Olesen
024a1de4ae Use basic block numbers as indexes when mapping slot index ranges.
This is more compact and faster than using DenseMap.

llvm-svn: 128763
2011-04-02 06:03:31 +00:00
Cameron Zwarich
2748634089 Add a RemoveFromWorklist method to DCI. This is needed to do some complicated
transformations in target-specific DAG combines without causing DAGCombiner to
delete the same node twice. If you know of a better way to avoid this (see my
next patch for an example), please let me know.

llvm-svn: 128758
2011-04-02 02:40:26 +00:00
Evan Cheng
28382f9178 Add comments.
llvm-svn: 128730
2011-04-01 19:57:01 +00:00
Evan Cheng
13c73e4836 Assign node order numbers to results of call instruction lowering. This should improve src line debug info when sdisel is used. rdar://9199118
llvm-svn: 128728
2011-04-01 19:42:22 +00:00
Evan Cheng
39574b2766 Issue libcalls __udivmod*i4 / __divmod*i4 for div / rem pairs.
rdar://8911343

llvm-svn: 128696
2011-04-01 00:42:02 +00:00
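A minimal C-level sketch of the pattern this change targets (illustrative function and variable names, not from the patch; __udivmoddi4's signature is the standard compiler-rt/libgcc one):

    // On a 32-bit target, the division and the remainder below would each
    // lower to a libcall (__udivdi3 and __umoddi3). With this change the
    // backend can instead emit a single __udivmoddi4 call, which returns
    // the quotient and stores the remainder through its third argument.
    unsigned long long quot_rem(unsigned long long a, unsigned long long b,
                                unsigned long long *rem) {
      *rem = a % b;   // remainder
      return a / b;   // quotient
    }
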
Jakob Stoklund Olesen
203727c92e The basic register allocator must also use the inline spiller.
It is using a trivial rewriter that doesn't know how to insert spill code
requested by the standard spiller.

llvm-svn: 128688
2011-03-31 23:02:17 +00:00
Jakob Stoklund Olesen
a935319339 Don't completely eliminate identity copies that also modify super register liveness.
Turn them into noop KILL instructions instead. This lets the scavenger know when
super-registers are killed and defined.

llvm-svn: 128645
2011-03-31 17:55:25 +00:00
Jakob Stoklund Olesen
c0874a65a0 Allow kill flags on two-address instructions. They are harmless.
llvm-svn: 128643
2011-03-31 17:52:41 +00:00
Jakob Stoklund Olesen
84bb8092b6 Mark all uses as <undef> when joining a copy.
This way, shrinkToUses() will ignore the instruction that is about to be
deleted, and we avoid leaving invalid live ranges that SplitKit doesn't like.

Fix a misunderstanding in MachineVerifier about <def,undef> operands. The
<undef> flag is valid on def operands where it has the same meaning as <undef>
on a use operand. It only applies to sub-register defines which also read the
full register.

llvm-svn: 128642
2011-03-31 17:23:25 +00:00
Devang Patel
eb032aede2 Remove dead code.
llvm-svn: 128639
2011-03-31 16:53:49 +00:00
Jakob Stoklund Olesen
03a6cd0433 Fix bug found by valgrind.
llvm-svn: 128634
2011-03-31 15:14:11 +00:00
NAKAMURA Takumi
e0a71fb3e0 lib/CodeGen/LiveIntervalAnalysis.cpp: [PR9590] Don't use std::pow(float,float) here.
The real "powf()" may not be available on some hosts (while it is on others).
For consistency, std::pow(double,double) may be called instead.
Otherwise, precision differences could bite us, showing up as unstable regalloc and stack coloring.

llvm-svn: 128629
2011-03-31 12:11:33 +00:00
Jakob Stoklund Olesen
e72dfb1c45 Pick a conservative register class when creating a small live range for remat.
The rematerialized instruction may require a more constrained register class
than the register being spilled. In the test case, the spilled register has been
inflated to the DPR register class, but we are rematerializing a load of the
ssub_0 sub-register which only exists for DPR_VFP2 registers.

The register class is reinflated after spilling, so the conservative choice is
only temporary.

llvm-svn: 128610
2011-03-31 03:54:44 +00:00
Jakob Stoklund Olesen
30de09d279 Fix evil VirtRegRewriter bug.
The rewriter can keep track of multiple stack slots in the same register if they
happen to have the same value. When an instruction modifies a stack slot by
defining a register that is mapped to a stack slot, other stack slots in that
register are no longer valid.

This is a very rare problem, and I don't have a simple test case. I get the
impression that VirtRegRewriter knows it is about to be deleted, inventing a
last opaque problem.

<rdar://problem/9204040>

llvm-svn: 128562
2011-03-30 18:14:07 +00:00
Jakob Stoklund Olesen
41a7b0951b Teach VirtRegRewriter about the new virtual register numbers. No functional change.
llvm-svn: 128561
2011-03-30 18:14:04 +00:00
Jay Foad
53632b7c03 Remove PHINode::reserveOperandSpace(). Instead, add a parameter to
PHINode::Create() giving the (known or expected) number of operands.

llvm-svn: 128537
2011-03-30 11:28:46 +00:00
Jay Foad
dc5a008237 (Almost) always call reserveOperandSpace() on newly created PHINodes.
llvm-svn: 128535
2011-03-30 11:19:20 +00:00
Jakob Stoklund Olesen
8ce46ee438 Treat clones the same as their origin.
When DCE clones a live range because it separates into connected components,
make sure that the clones enter the same register allocator stage as the
register they were cloned from.

For instance, clones may be split even when they were created during spilling.
Other registers created during spilling are not candidates for splitting or even
(re-)spilling.

llvm-svn: 128524
2011-03-30 02:52:39 +00:00
Jim Grosbach
47b87dbc29 Tidy up. 80 columns and trailing whitespace.
llvm-svn: 128504
2011-03-29 23:20:22 +00:00
Jakob Stoklund Olesen
a292fa3d1e Recompute register class and hint for registers created during spilling.
The spill weight is not recomputed for an unspillable register - it stays infinite.

llvm-svn: 128490
2011-03-29 21:20:19 +00:00
Jakob Stoklund Olesen
229e589bd1 Remember to use the correct register when rematerializing for snippets.
llvm-svn: 128469
2011-03-29 17:47:02 +00:00
Jakob Stoklund Olesen
4676323ac8 Run dead code elimination immediately after rematerialization.
This may eliminate some uses of the spilled registers, and we don't want to
insert reloads for that.

llvm-svn: 128468
2011-03-29 17:47:00 +00:00
Bill Wendling
7469ccb3bd Inline check that's used only once.
llvm-svn: 128465
2011-03-29 17:12:55 +00:00
Bill Wendling
47b8e67328 Rework the logic (and remove the bad check for an unreachable block) so that
the FailBB dominator is correctly calculated. Believe it or not, there isn't a
functionality change here.

llvm-svn: 128455
2011-03-29 07:28:52 +00:00
Bill Wendling
ded022ad8b Don't try to add stack protector logic to a dead basic block. It messes up
dominator information.

llvm-svn: 128452
2011-03-29 05:15:48 +00:00
Jakob Stoklund Olesen
a92d74e8cb Handle the special case when all uses follow the last split point.
llvm-svn: 128450
2011-03-29 03:12:04 +00:00
Jakob Stoklund Olesen
c209e050dd Properly enable rematerialization when spilling after live range splitting.
The instruction to be rematerialized may not be the one defining the register
that is being spilled. The traceSiblingValue() function sees through sibling
copies to find the remat candidate.

llvm-svn: 128449
2011-03-29 03:12:02 +00:00
Bill Wendling
cb8447ad52 In some cases, the "fail BB dominator" may be null after the BB was split (and
becomes reachable when it wasn't before). Check to make sure that it's not null
before trying to use it.

llvm-svn: 128434
2011-03-28 23:02:18 +00:00
Daniel Dunbar
cec6959c23 Integrated-As: Add support for setting the AllowTemporaryLabels flag via
integrated-as.

llvm-svn: 128431
2011-03-28 22:49:19 +00:00
Jakob Stoklund Olesen
18eaae730c Amend debug output.
llvm-svn: 128398
2011-03-27 22:49:23 +00:00
Jakob Stoklund Olesen
9b9cae35db Drop interference reassignment in favor of eviction.
The reassignment phase was able to move interference with a higher spill weight,
but it didn't happen very often and it was fairly expensive.

The existing interference eviction picks up the slack.

llvm-svn: 128397
2011-03-27 22:49:21 +00:00
Jakob Stoklund Olesen
25ff895ebe Use individual register classes when spilling snippets.
The main register class may have been inflated by live range splitting, so that
register class is not necessarily valid for the snippet instructions.

Use the original register class for the stack slot interval.

llvm-svn: 128351
2011-03-26 22:16:41 +00:00
Benjamin Kramer
f9e1ba7398 Turn SelectionDAGBuilder::GetRegistersForValue into a local function.
It couldn't be used outside of the file because SDISelAsmOperandInfo
is local to SelectionDAGBuilder.cpp. Making it a static function avoids
a weird linkage dance.

llvm-svn: 128342
2011-03-26 16:35:10 +00:00
Jakob Stoklund Olesen
446412de55 Collect and coalesce DBG_VALUE instructions before emitting the function.
Correctly terminate the range of register DBG_VALUEs when the register is
clobbered or when the basic block ends.

The code is now ready to deal with variables that are sometimes in a register
and sometimes on the stack. We just need to teach emitDebugLoc to say 'stack
slot'.

llvm-svn: 128327
2011-03-26 02:19:36 +00:00
Jakob Stoklund Olesen
ab0501221b Emit less labels for debug info and stop emitting .loc directives for DBG_VALUEs.
The .loc directives don't need labels; that is a leftover from when we created
line number info manually.

Instructions following a DBG_VALUE can share its label since the DBG_VALUE
doesn't produce any code.

llvm-svn: 128284
2011-03-25 17:20:59 +00:00
Andrew Trick
651a3701f9 Fix for -pre-RA-sched=source.
Yet another case of unchecked NULL node (for physreg copy).
May fix PR9509.

llvm-svn: 128266
2011-03-25 06:40:55 +00:00
Nick Lewycky
66eaeb513a No functionality change. Fix up some whitespace and switch out "" for '' when
printing a single character.

llvm-svn: 128256
2011-03-25 06:04:26 +00:00
Jakob Stoklund Olesen
e20f22be07 Ignore special ARM allocation hints for unexpected register classes.
Add an assertion to linear scan to prevent it from allocating registers outside
the register class.

<rdar://problem/9183021>

llvm-svn: 128254
2011-03-25 01:48:18 +00:00
Devang Patel
4909f41ec5 Keep track of directory name and fix regression caused by Rafael's patch r119613.
A better approach would be to move source id handling inside MC.

llvm-svn: 128233
2011-03-24 20:30:50 +00:00
Eli Friedman
76fcfaab12 PR9535: add support for splitting and scalarizing vector ISD::FP_ROUND.
Also cleaning up some duplicated code while I'm here.

llvm-svn: 128176
2011-03-23 22:18:48 +00:00
Andrew Trick
b702dae9b2 Ensure that def-side physreg copies are scheduled above any other uses
so the scheduler can't create new interferences on the copies
themselves. Prior to this fix the scheduler could get stuck in a loop
creating copies.
Fixes PR9509.

llvm-svn: 128164
2011-03-23 20:42:39 +00:00
Andrew Trick
ca42e62048 whitespace
llvm-svn: 128163
2011-03-23 20:40:18 +00:00
Jakob Stoklund Olesen
c62f168ec5 Don't coalesce identical DBG_VALUE instructions prematurely.
Each of these instructions may have a RegsClobberInsn entry that can't be
ignored. Consecutive ranges are coalesced later when DwarfDebug::emitDebugLoc
merges entries.

llvm-svn: 128155
2011-03-23 18:37:30 +00:00
Jakob Stoklund Olesen
6570595e4c Notify the delegate before removing dead values from a live interval.
The register allocator needs to know when the range shrinks.

llvm-svn: 128145
2011-03-23 04:43:16 +00:00
Jakob Stoklund Olesen
d75298c7cd Allow the allocation of empty live ranges that have uses.
Empty ranges may represent undef values.

llvm-svn: 128144
2011-03-23 04:32:51 +00:00
Jakob Stoklund Olesen
660147b1d8 Dump the register map before rewriting.
llvm-svn: 128143
2011-03-23 04:32:49 +00:00
Andrew Trick
d9c599d01c Added block number and name to isel debug output.
I'm tired of doing this manually for each checkout.
If anyone knows a better way to debug isel for non-trivial tests feel
free to revert and let me know how to do it.

llvm-svn: 128132
2011-03-23 01:38:28 +00:00
Jakob Stoklund Olesen
28ebc380f6 Reapply r128045 and r128051 with fixes.
This will extend the ranges of debug info variables in registers until they are
clobbered.

Fix 1: Don't mistake DBG_VALUE instructions referring to incoming arguments on
the stack with DBG_VALUE instructions referring to variables in the frame
pointer. This fixes the gdb test-suite failure.

Fix 2: Don't trace through copies to physical registers setting up call
arguments. These registers are call clobbered, and the source register is more
likely to be a callee-saved register that can be extended through the call
instruction.

llvm-svn: 128114
2011-03-22 22:33:08 +00:00
Andrew Trick
63dc418ea3 Revert r128045 and r128051, debug info enhancements.
Temporarily reverting these to see if we can get llvm-objdump to link. Hopefully this is not the problem.

llvm-svn: 128097
2011-03-22 19:18:42 +00:00
Jakob Stoklund Olesen
ac3cdb2811 Clear map after use.
This is likely to fix the segfault in llvm-gcc-x86_64-darwin10-cross-mingw32.

llvm-svn: 128051
2011-03-22 01:03:24 +00:00
Jakob Stoklund Olesen
fc9e8a04c3 Don't emit 'DBG_VALUE %noreg, ...' to terminate user variable ranges.
These ranges get completely jumbled by the post-ra scheduler, and it is not
really reasonable to expect it to make sense of them.

Instead, teach DwarfDebug to notice when user variables in registers are
clobbered, and terminate the ranges there.

llvm-svn: 128045
2011-03-22 00:21:41 +00:00
Eric Christopher
44e3d3c26e Grammar-o.
llvm-svn: 128004
2011-03-21 18:06:21 +00:00
Bill Wendling
a2eec46242 We need to pass the TargetMachine object to the InstPrinter if we are printing
the alias of an InstAlias instead of the thing being aliased, because we need to
know the features that are valid for an InstAlias.

This is part of a work-in-progress.

llvm-svn: 127986
2011-03-21 04:13:46 +00:00
Jakob Stoklund Olesen
7f9de06cc4 Process all dead defs after rematerializing during splitting.
llvm-svn: 127973
2011-03-20 19:46:23 +00:00
Jakob Stoklund Olesen
6bc47435a9 Also eliminate redundant spills downstream of inserted reloads.
This can happen when multiple sibling registers are spilled after live range
splitting.

llvm-svn: 127965
2011-03-20 05:44:58 +00:00
Jakob Stoklund Olesen
911619d9e2 Change an argument to a LiveInterval instead of a register number to save some redundant lookups.
llvm-svn: 127964
2011-03-20 05:44:55 +00:00
Jakob Stoklund Olesen
916b11e88b Replace a broken LiveInterval::MergeValueInAsValue() with something simpler.
llvm-svn: 127960
2011-03-19 23:02:49 +00:00
Jakob Stoklund Olesen
bf1a7cb32d Add debug output.
llvm-svn: 127959
2011-03-19 23:02:47 +00:00
Evan Cheng
adde7c1aae Minor code re-structuring.
llvm-svn: 127952
2011-03-19 17:03:16 +00:00
Nadav Rotem
92561196b7 Add support for legalizing UINT_TO_FP of vectors on platforms which do
not have native support for this operation (such as X86).
The legalized code uses two vector INT_TO_FP operations and is faster
than scalarizing.

llvm-svn: 127951
2011-03-19 13:09:10 +00:00
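A sketch of the general idea behind lowering an unsigned-to-FP conversion with signed conversions, shown on a scalar for readability (an assumed simplification; the patch itself operates on vector nodes in the DAG, applying the same trick lane-wise with two vector INT_TO_FP operations):

    #include <stdint.h>

    // Split the unsigned value into two halves that are non-negative when
    // viewed as signed integers, convert each half with a signed int-to-fp,
    // and recombine. Only the final add rounds, so the result matches a
    // direct unsigned conversion.
    float uint32_to_float(uint32_t x) {
      int32_t hi = (int32_t)(x >> 16);     // upper 16 bits, non-negative
      int32_t lo = (int32_t)(x & 0xFFFF);  // lower 16 bits, non-negative
      return (float)hi * 65536.0f + (float)lo;
    }
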
Stuart Hastings
0c1298989f Reapply 127939 since Daniel fixed the breakage. <rdar://problem/9012638>
llvm-svn: 127944
2011-03-19 02:42:31 +00:00
Stuart Hastings
0b337d0fe8 Revert 127939. <rdar://problem/9012638>
llvm-svn: 127943
2011-03-19 02:33:56 +00:00
Stuart Hastings
4a2b1ca9c1 Revise r126127 to address Daniel's comments. <rdar://problem/9012638>
llvm-svn: 127939
2011-03-19 01:32:01 +00:00
Jim Grosbach
75deb766b9 Beginnings of MC-JIT code generation.
Proof-of-concept code that code-gens a module to an in-memory MachO object.
This will be hooked up to a run-time dynamic linker library (see: llvm-rtdyld
for similarly conceptual work for that part) which will take the compiled
object and link it together with the rest of the system, providing back to the
JIT a table of available symbols which will be used to respond to the
getPointerTo*() queries.

llvm-svn: 127916
2011-03-18 22:48:41 +00:00
Jakob Stoklund Olesen
9e6d3b0a42 Extend live debug values down the dominator tree by following copies.
The llvm.dbg.value intrinsic refers to SSA values, not virtual registers, so we
should be able to extend the range of a value by tracking that value through
register copies. This greatly improves the debug value tracking for function
arguments that for some reason are copied to a second virtual register at the
end of the entry block.

We only extend the debug value range where its register is killed. All original
llvm.dbg.value locations are still respected.

Copies from physical registers are ignored. That should not be a problem since
the entry block already adds DBG_VALUE instructions for the virtual registers
holding the function arguments.

llvm-svn: 127912
2011-03-18 21:42:19 +00:00
Jakob Stoklund Olesen
dbc283787d Hoist spills when the same value is known to be in less loopy sibling registers.
Stack slot real estate is virtually free compared to registers, so it is
advantageous to spill earlier even though the same value is now kept in both a
register and a stack slot.

Also eliminate redundant spills by extending the stack slot live range
underneath reloaded registers.

This can trigger a dead code elimination, removing copies and even reloads that
were only feeding spills.

llvm-svn: 127868
2011-03-18 04:23:06 +00:00
Jakob Stoklund Olesen
2956265983 Accept instructions that read undefined values.
This is not supposed to happen, but I have seen the x86 rematter getting
confused when rematerializing partial redefs.

llvm-svn: 127857
2011-03-18 03:06:04 +00:00
Jakob Stoklund Olesen
08bfed9973 Be more accurate about the slot index reading a register when dealing with defs
and early clobbers.

Assert when trying to find an undefined value.

llvm-svn: 127856
2011-03-18 03:06:02 +00:00
Benjamin Kramer
52ffb6ea96 BuildUDIV: If the divisor is even we can simplify the fixup of the multiplied value by introducing an early shift.
This allows us to compile "unsigned foo(unsigned x) { return x/28; }" into
	shrl	$2, %edi
	imulq	$613566757, %rdi, %rax
	shrq	$32, %rax
	ret

instead of
	movl    %edi, %eax
	imulq   $613566757, %rax, %rcx
	shrq    $32, %rcx
	subl    %ecx, %eax
	shrl    %eax
	addl    %ecx, %eax
	shrl    $4, %eax

on x86_64

llvm-svn: 127829
2011-03-17 20:39:14 +00:00
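Why the early shift helps, restated (an assumed explanation consistent with the code above, not text from the patch): 28 = 4 * 7, so the power of two can be shifted out first, and the remaining division by the odd factor 7 fits a single magic-number multiply that needs no add/sub fixup.

    #include <stdint.h>

    // C equivalent of the shorter sequence above: x/28 == (x >> 2) / 7.
    // 613566757 is ceil(2^32 / 7); because x >> 2 is below 2^30, the
    // multiply-and-shift is exact and no fixup is needed.
    unsigned div28(unsigned x) {
      return (unsigned)(((uint64_t)(x >> 2) * 613566757u) >> 32);
    }
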
Jakob Stoklund Olesen
047a25b0b0 Dead code elimination may separate the live interval into multiple connected components.
I have convinced myself that it can only happen when a phi value dies. When it
happens, allocate new virtual registers for the components.

llvm-svn: 127827
2011-03-17 20:37:07 +00:00
Cameron Zwarich
cea63dc052 Move more logic into getTypeForExtArgOrReturn.
llvm-svn: 127809
2011-03-17 14:53:37 +00:00
Cameron Zwarich
a5746339cc Rename getTypeForExtendedInteger() to getTypeForExtArgOrReturn().
llvm-svn: 127807
2011-03-17 14:21:56 +00:00
Jakob Stoklund Olesen
2786187b43 Rewrite instructions as part of ConnectedVNInfoEqClasses::Distribute.
llvm-svn: 127779
2011-03-17 00:23:45 +00:00
Jakob Stoklund Olesen
5c0d2aecc5 Add a LiveRangeEdit delegate callback before shrinking a live range.
The register allocator needs to adjust its live interval unions when that happens.

llvm-svn: 127774
2011-03-16 22:56:16 +00:00
Jakob Stoklund Olesen
8751b4e276 Erase virtual registers that are unused after DCE.
llvm-svn: 127773
2011-03-16 22:56:13 +00:00
Jakob Stoklund Olesen
940b7d46d3 Tag cached interference with a user-provided tag instead of the virtual register number.
The live range of a virtual register may change which invalidates the cached
interference information.

llvm-svn: 127772
2011-03-16 22:56:11 +00:00
Jakob Stoklund Olesen
7b60f4161a Clarify debugging output.
llvm-svn: 127771
2011-03-16 22:56:08 +00:00
Cameron Zwarich
2bb1e45ea3 The x86-64 ABI says that a bool is only guaranteed to be sign-extended to a byte
rather than an int. Thankfully, this only causes LLVM to miss optimizations, not
generate incorrect code.

This just fixes the zext at the return. We still insert an i32 ZextAssert when
reading a function's arguments, but it is followed by a truncate and another i8
ZextAssert so it is not optimized.

llvm-svn: 127766
2011-03-16 22:20:18 +00:00
Cameron Zwarich
860d06739b Don't recompute something that we already have in a local variable.
llvm-svn: 127764
2011-03-16 22:20:07 +00:00
Daniel Dunbar
8757b8c000 Revert r127757, "Patch to fix a dwarf relocation problem on ARM. One-line fix
plus the test where it used to break.", which broke Clang self-host of a
Debug+Asserts compiler, on OS X.

llvm-svn: 127763
2011-03-16 22:16:39 +00:00
Renato Golin
bf788a5626 Patch to fix a dwarf relocation problem on ARM. One-line fix plus the test where it used to break.
llvm-svn: 127757
2011-03-16 21:05:52 +00:00
Jakob Stoklund Olesen
26ac368165 Trace back through sibling copies to hoist spills and find rematerializable defs.
After live range splitting, an original value may be available in multiple
registers. Tracing back through the registers containing the same value, find
the best place to insert a spill, determine if the value has already been
spilled, or discover a reaching def that may be rematerialized.

This is only the analysis part. The information is not used for anything yet.

llvm-svn: 127698
2011-03-15 21:13:25 +00:00
Jakob Stoklund Olesen
992adc7152 Preserve both isPHIDef and isDefByCopy bits when copying parent values.
llvm-svn: 127697
2011-03-15 21:13:22 +00:00
Evan Cheng
29faaebae9 Add a peephole optimization to optimize pairs of bitcasts. e.g.
v2 = bitcast v1
...
v3 = bitcast v2
...
   = v3
=>
v2 = bitcast v1
...
   = v1
if v1 and v3 are in the same register class.

Bitcasts between i32 and fp (and others) are often not nops since they
are in different register classes. These bitcast instructions are often
left because they are in different basic blocks and cannot be
eliminated by dag combine.

rdar://9104514

llvm-svn: 127668
2011-03-15 05:13:13 +00:00
Evan Cheng
bac3e87eaa sext(undef) = 0, because the top bits will all be the same.
zext(undef) = 0, because the top bits will be zero.

llvm-svn: 127649
2011-03-15 02:22:10 +00:00
Bill Wendling
af19decfc9 There are some situations which can cause the URoR hack to infinitely recurse
and then go kablooie. The problem was that it was tracking the PHI nodes anew
each time into this function. But it didn't need to. And because the recursion
didn't know that a PHINode was visited before, it would go ahead and call
itself.

There is a testcase, but unfortunately it's too big to add. This problem will go
away with the EH rewrite.
<rdar://problem/8856298>

llvm-svn: 127640
2011-03-15 01:03:17 +00:00
Jakob Stoklund Olesen
29a9539e7f Place context in member variables instead of passing around pointers.
Use the opportunity to get rid of the trailing underscore variable names.

llvm-svn: 127618
2011-03-14 20:57:14 +00:00
Jakob Stoklund Olesen
da1afc2d80 Rename members to match LLVM naming conventions more closely.
Remove the unused reserved_ bit vector, no functional change intended.

This doesn't break 'svn blame', this file really is all my fault.

llvm-svn: 127607
2011-03-14 19:56:43 +00:00
Evan Cheng
50f2d406ec BIT_CONVERT has been renamed to BITCAST.
llvm-svn: 127600
2011-03-14 18:19:52 +00:00
Evan Cheng
cb70b9e80b Minor optimization. sign-ext/anyext of undef is still undef.
llvm-svn: 127598
2011-03-14 18:15:55 +00:00
Jakob Stoklund Olesen
7d23be25ab Now that we are deleting unused live intervals during allocation, pointers may be reused.
Use the virtual register number as a cache tag instead. They are not reused.

llvm-svn: 127561
2011-03-13 01:29:32 +00:00
Jakob Stoklund Olesen
2d87d5139b Tell the register allocator about new unused virtual registers.
This allows the allocator to free any resources used by the virtual register,
including physical register assignments.

llvm-svn: 127560
2011-03-13 01:23:11 +00:00
Duncan Sands
0514e10276 Speculatively revert commit 127478 (jsjodin) in an attempt to fix the
llvm-gcc-i386-linux-selfhost and llvm-x86_64-linux-checks buildbots.
The original log entry:
Remove optimization emitting a reference instead of label difference, since
it can create more relocations. Removed isBaseAddressKnownZero method,
because it is no longer used.

llvm-svn: 127540
2011-03-12 13:07:37 +00:00
Jakob Stoklund Olesen
6d02ddbbc3 Include snippets in the live stack interval.
llvm-svn: 127530
2011-03-12 04:25:36 +00:00
Jakob Stoklund Olesen
1f9f236b8a Spill multiple registers at once.
Live range splitting can create a number of small live ranges containing only a
single real use. Spill these small live ranges along with the large range they
are connected to with copies. This enables memory operand folding and maximizes
the spill to fill distance.

Work in progress with known bugs.

llvm-svn: 127529
2011-03-12 04:17:20 +00:00
Jakob Stoklund Olesen
925b25d53d That's it, I am declaring this a failure of the C++03 STL.
There are too many compatibility problems with using mixed types in
std::upper_bound, and I don't want to spend 110 lines of boilerplate setting up
a call to a 10-line function. Binary search is not /that/ hard to implement
correctly.

I tried terminating the binary search with a linear search, but that actually
made the algorithm slower, contrary to my expectation. Most live intervals have
fewer than 4 segments. The early test against endIndex() does pay, and this version is
25% faster than plain std::upper_bound().

llvm-svn: 127522
2011-03-12 01:50:35 +00:00
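A sketch of what such a hand-rolled search can look like (assumed, simplified types; the real code works on LiveInterval segments and SlotIndexes):

    #include <vector>

    struct Segment { unsigned start, end; };

    // Find the segment containing Idx, or return 0 if there is none.
    // The early test against the last end index rejects out-of-range
    // queries before the binary search starts.
    const Segment *findSegment(const std::vector<Segment> &Segs, unsigned Idx) {
      size_t Lo = 0, Hi = Segs.size();
      if (Hi == 0 || Idx >= Segs[Hi - 1].end)
        return 0;
      while (Lo < Hi) {
        size_t Mid = Lo + (Hi - Lo) / 2;
        if (Segs[Mid].end <= Idx)
          Lo = Mid + 1;
        else
          Hi = Mid;
      }
      // Segs[Lo] is the first segment whose end is greater than Idx; it
      // contains Idx only if it also starts at or before Idx.
      return Segs[Lo].start <= Idx ? &Segs[Lo] : 0;
    }
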
Cameron Zwarich
39a49276db Fix the GCC test suite issue exposed by r127477, which was caused by stack
protector insertion not working correctly with unreachable code. Since that
revision was rolled out, this test doesn't actually fail before this fix.

llvm-svn: 127497
2011-03-11 21:51:56 +00:00
Owen Anderson
78afadfa5d Teach FastISel to support register-immediate-immediate instructions.
llvm-svn: 127496
2011-03-11 21:33:55 +00:00
Jan Sjödin
b58b9618ce Remove optimization emitting a reference instead of label difference, since it can create more relocations. Removed isBaseAddressKnownZero method, because it is no longer used.
llvm-svn: 127478
2011-03-11 19:37:02 +00:00
Andrew Trick
6aa37a4a2b Replace -dag-chain-limit flag with constant. It has survived a release cycle without being touched, so no longer needs to pollute the hidden-help text.
llvm-svn: 127468
2011-03-11 17:46:59 +00:00
John Wiegley
e1168a569b Fix use of CompEnd predicate to be standards conforming
The existing CompEnd predicate does not define a strict weak order as required
by the C++03 standard; therefore, its use as a predicate to std::upper_bound
is invalid. For a discussion of this issue, see
http://www.open-std.org/jtc1/sc22/wg21/docs/lwg-defects.html#270

This patch replaces the asymmetrical comparison with an iterator adaptor that
achieves the same effect while being strictly standard-conforming by ensuring
an apples-to-apples comparison.

llvm-svn: 127462
2011-03-11 08:54:34 +00:00
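A sketch of the iterator-adaptor idea (assumed, simplified types and names, not the patch itself): dereference to the comparison key so std::upper_bound only ever compares like with like.

    #include <algorithm>
    #include <cstddef>
    #include <iterator>
    #include <vector>

    struct Segment { unsigned start, end; };

    // Forward iterator over a segment array that dereferences to the 'end'
    // slot, so std::upper_bound compares unsigned with unsigned instead of
    // mixing a key type and an element type in one asymmetric predicate.
    struct SegEndIterator {
      typedef std::forward_iterator_tag iterator_category;
      typedef unsigned value_type;
      typedef std::ptrdiff_t difference_type;
      typedef const unsigned *pointer;
      typedef const unsigned &reference;

      const Segment *P;
      SegEndIterator() : P(0) {}
      explicit SegEndIterator(const Segment *p) : P(p) {}
      const unsigned &operator*() const { return P->end; }
      SegEndIterator &operator++() { ++P; return *this; }
      SegEndIterator operator++(int) { SegEndIterator T(*this); ++P; return T; }
      bool operator==(const SegEndIterator &O) const { return P == O.P; }
      bool operator!=(const SegEndIterator &O) const { return P != O.P; }
    };

    // First segment whose end is strictly greater than Idx, or 0 if none.
    const Segment *firstEndingAfter(const std::vector<Segment> &Segs,
                                    unsigned Idx) {
      if (Segs.empty())
        return 0;
      SegEndIterator Begin(&Segs[0]), End(&Segs[0] + Segs.size());
      SegEndIterator It = std::upper_bound(Begin, End, Idx);
      return It != End ? It.P : 0;
    }
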
Evan Cheng
d5d2d4a158 Avoid replacing the value of a directly stored load with the stored value if the load is indexed. rdar://9117613.
llvm-svn: 127440
2011-03-11 00:48:56 +00:00
Cameron Zwarich
206503113e Add an option to disable critical edge splitting in PHIElimination.
llvm-svn: 127398
2011-03-10 05:59:17 +00:00
Jakob Stoklund Olesen
92652a803f Change the Spiller interface to take a LiveRangeEdit reference.
This makes it possible to register delegates and get callbacks when the spiller
edits live ranges.

llvm-svn: 127389
2011-03-10 01:51:42 +00:00
Jakob Stoklund Olesen
70541686bf Make SpillIs an optional pointer. Avoid creating a bunch of temporary SmallVectors.
llvm-svn: 127388
2011-03-10 01:21:58 +00:00
Evan Cheng
a3a7a7e364 Re-commit 127368 and 127371. They are exonerated.
llvm-svn: 127380
2011-03-10 00:16:32 +00:00
Evan Cheng
d7a2008a55 Revert 127368 and 127371 for now.
llvm-svn: 127376
2011-03-09 23:53:17 +00:00
Evan Cheng
b717770dfe Change the definition of TargetRegisterInfo::getCrossCopyRegClass to be more
flexible.

If it returns a register class that's different from the input, then that's the
register class used for cross-register class copies.
If it returns a register class that's the same as the input, then no cross-
register class copies are needed (normal copies would do).
If it returns null, then it's not at all possible to copy registers of the
specified register class.

llvm-svn: 127368
2011-03-09 22:47:38 +00:00
Jakob Stoklund Olesen
4d0c9d0af7 Make physreg coalescing independent of the number of uses of the virtual register.
The damage done by physreg coalescing only depends on the number of instructions
the extended physreg live range covers. This fixes PR9438.

The heuristic is still luck-based, and physreg coalescing really should be
disabled completely. We need a register allocator with better hinting support
before that is possible.

Convert a test to FileCheck and force spilling by inserting an extra call. The
previous spilling behavior was dependent on misguided physreg coalescing
decisions.

llvm-svn: 127351
2011-03-09 19:27:06 +00:00
Andrew Trick
e529ddb2d7 Improve pre-RA-sched register pressure tracking for duplicate operands.
This helps cases like 2008-07-19-movups-spills.ll, but doesn't have an obvious impact on benchmarks

llvm-svn: 127347
2011-03-09 19:12:43 +00:00
Benjamin Kramer
4cf03850a2 Fix typo, make helper static.
llvm-svn: 127335
2011-03-09 16:19:12 +00:00
Benjamin Kramer
782cb6d68d Remove unused virtual dtor.
llvm-svn: 127331
2011-03-09 14:20:28 +00:00
Matt Beaumont-Gay
3e3b6cc819 Add a virtual dtor to Delegate to silence -Wnon-virtual-dtor
llvm-svn: 127311
2011-03-09 04:02:15 +00:00
Jakob Stoklund Olesen
eec325fc2f Add a LiveRangeEdit::Delegate protocol.
This will we used for keeping register allocator data structures up to date
while LiveRangeEdit is trimming live intervals.

llvm-svn: 127300
2011-03-09 00:57:29 +00:00
Jakob Stoklund Olesen
7905ed6549 Delete dead code.
llvm-svn: 127295
2011-03-09 00:07:39 +00:00
Jakob Stoklund Olesen
b8f5f15468 Delete dead code after rematerializing.
LiveRangeEdit::eliminateDeadDefs() will eventually be used by coalescing,
splitting, and spilling for dead code elimination. It can delete chains of dead
instructions as long as there are no dependency loops.

llvm-svn: 127287
2011-03-08 22:46:11 +00:00
Jakob Stoklund Olesen
b7aa589217 Fix the build for MSVC 9 whose upper_bound() wants to compare elements in the sorted array.
Patch by Olaf Krzikalla!

llvm-svn: 127264
2011-03-08 19:37:54 +00:00
Eric Christopher
d3c7e834e8 Fix some latent bugs if the nodes are unschedulable. We'd gotten away
with this before since none of the register tracking or nightly tests
had unschedulable nodes.

This should probably be refixed with a special default Node that just
returns some "don't touch me" values.

Fixes PR9427

llvm-svn: 127263
2011-03-08 19:35:47 +00:00
Oscar Fuentes
3faa2722ef Revert "Make a comparator's argument `const'. This fixes the build for
MSVC 9."

The "fix" was meaningless.

This reverts commit r127245.

llvm-svn: 127260
2011-03-08 19:26:21 +00:00
Benjamin Kramer
872abcc8a9 Reduce vector reallocations.
llvm-svn: 127254
2011-03-08 17:28:36 +00:00
Oscar Fuentes
27fc386e64 Make a comparator's argument `const'. This fixes the build for MSVC 9.
llvm-svn: 127245
2011-03-08 13:52:07 +00:00
Andrew Trick
fd853e8757 Further improvements to pre-RA-sched=list-ilp.
This change uses the MaxReorderWindow for both height and depth, which
tends to limit the negative effects of high register pressure.

llvm-svn: 127203
2011-03-08 01:51:56 +00:00
Jakob Stoklund Olesen
f9401745e0 Let shrinkToUses optionally return a list of now dead machine instructions.
llvm-svn: 127192
2011-03-07 23:29:10 +00:00
Jakob Stoklund Olesen
158af1f7e9 Make the UselessRegs argument optional in the LiveRangeEdit constructor.
llvm-svn: 127181
2011-03-07 22:42:16 +00:00
Cameron Zwarich
a1920d7f51 Move getRegPressureLimit() from TargetLoweringInfo to TargetRegisterInfo.
llvm-svn: 127175
2011-03-07 21:56:36 +00:00
Jakob Stoklund Olesen
68b2c1d239 Handle the special case of registers being redefined by early-clobber defs.
In this case, the value needs to be available at the load index instead of the
normal use index.

llvm-svn: 127167
2011-03-07 18:56:16 +00:00
Owen Anderson
11a49e845a Use the correct LHS type when determining the legalization of a shift's RHS type.
llvm-svn: 127163
2011-03-07 18:29:47 +00:00
Eric Christopher
1da11eb2ae Typo.
llvm-svn: 127131
2011-03-06 21:13:45 +00:00
NAKAMURA Takumi
6aa6938d66 lib/CodeGen/AsmPrinter/CMakeLists.txt: Fix CMake build, following up to r127099.
llvm-svn: 127114
2011-03-06 00:13:15 +00:00
Andrew Trick
ebbe4680ae Disable a couple of experimental heuristics to get the best results from the current implementation of -pre-RA-sched=list-ilp.
llvm-svn: 127113
2011-03-06 00:03:32 +00:00
Anton Korobeynikov
62e48532b9 Some first rudimentary support for ARM EHABI: print exception table in "text mode".
llvm-svn: 127099
2011-03-05 18:43:15 +00:00
Anton Korobeynikov
c746be3dc4 Add FrameSetup MI flags
llvm-svn: 127098
2011-03-05 18:43:04 +00:00
Jakob Stoklund Olesen
1d0ca5680a Work around a coalescer bug.
The coalescer can in very rare cases leave too large live intervals around after
rematerializing cheap-as-a-move instructions.

Linear scan doesn't really care, but live range splitting gets very confused
when a live range is killed by a ghost instruction.

I will fix this properly in the coalescer after 2.9 branches.

llvm-svn: 127096
2011-03-05 18:33:49 +00:00
Andrew Trick
2451bad445 Be explicit with abs(). Visual Studio workaround.
llvm-svn: 127075
2011-03-05 10:29:25 +00:00
Andrew Trick
dd4a20e7d7 Fix for -sched-high-latency-cycles in sched=list-ilp mode.
llvm-svn: 127071
2011-03-05 09:18:16 +00:00
Andrew Trick
d267891a21 Missing comment.
llvm-svn: 127068
2011-03-05 08:04:11 +00:00
Andrew Trick
7db197d209 Increased the register pressure limit on x86_64 from 8 to 12
regs. This is the only change in this checkin that may affect the
default scheduler. With better register tracking and heuristics, it
doesn't make sense to artificially lower the register limit so much.

Added -sched-high-latency-cycles and X86InstrInfo::isHighLatencyDef to
give the scheduler a way to account for div and sqrt on targets that
don't have an itinerary. It currently defaults to 10 (the actual
number doesn't matter much), but only takes effect on non-default
schedulers: list-hybrid and list-ilp.

Added several heuristics that can be individually disabled for the
non-default sched=list-ilp mode. This helps us determine how much
better we can do on a given benchmark than the default
scheduler. Certain compute intensive loops run much faster in this
mode with the right set of heuristics, and it doesn't seem to have
much negative impact elsewhere. Not all of the heuristics are needed,
but we still need to experiment to decide which should be disabled by
default for sched=list-ilp.

llvm-svn: 127067
2011-03-05 08:00:22 +00:00
Jakob Stoklund Olesen
52420a0a1e Rework the global split cost calculation.
The global cost is the sum of block frequencies for spill code that must be
inserted because preferences weren't met.

llvm-svn: 127062
2011-03-05 03:28:51 +00:00
Jakob Stoklund Olesen
79944a508a Compute the constraints for global live range splitting from an interference pattern.
This simplifies the code and makes it faster too.

The interference patterns are saved for each candidate register. It will be
reused for actually executing the split. Work in progress.

llvm-svn: 127054
2011-03-05 01:10:31 +00:00
Jim Grosbach
372394916e Teach the register scavenger to take subregs into account when finding a free register.
llvm-svn: 127049
2011-03-05 00:20:19 +00:00
Eric Christopher
94626fc599 Improve readability with some whitespace!
llvm-svn: 127043
2011-03-04 22:47:12 +00:00
Jakob Stoklund Olesen
87b598dd85 Extract a method. No functional change.
llvm-svn: 127040
2011-03-04 22:11:11 +00:00
Jakob Stoklund Olesen
e3b9a00584 Go back to comparing spill weights when deciding if interference can be evicted.
It gives better results. Sometimes, a live range can be large and still have
high spill weight. Such a range should not be spilled.

llvm-svn: 127036
2011-03-04 21:32:50 +00:00
Jakob Stoklund Olesen
8b66caf12a Renumber slot indexes locally when possible.
Initially, slot indexes are quad-spaced. There is room for inserting up to 3
new instructions between the original instructions.

When we run out of indexes between two instructions, renumber locally using
double-spaced indexes. The original quad-spacing means that we catch up quickly,
and we only have to renumber a handful of instructions to get a monotonic
sequence. This is much faster than renumbering the whole function as we did
before.

llvm-svn: 127023
2011-03-04 19:43:38 +00:00
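A sketch of the local renumbering idea (an assumed simplification over a flat array of indexes; SlotIndexes tracks these per instruction and per block):

    #include <cstddef>
    #include <vector>

    // Indexes start out quad-spaced (0, 4, 8, ...), leaving room for three
    // insertions between neighbours. When a gap runs out, walk forward from
    // Pos handing out double-spaced indexes until the old numbering is
    // strictly larger again; the original quad-spacing means only a handful
    // of elements need to be touched.
    void renumberLocally(std::vector<unsigned> &Idx, size_t Pos) {
      // Pos (> 0) is the first position whose index is no longer strictly
      // greater than its predecessor's.
      unsigned Next = Idx[Pos - 1] + 2;
      for (size_t I = Pos; I != Idx.size() && Idx[I] < Next; ++I, Next += 2)
        Idx[I] = Next;
    }
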
Jakob Stoklund Olesen
d99d8cfdf0 Number SlotIndexes uniformly without looking at the number of defs on each instruction.
You can't really predict how many indexes will be needed from the number of
defs, so let's keep it simple.

Also remove an extra empty index that was inserted after each basic block. It
was intended for live-out ranges, but it was never used that way.

llvm-svn: 127014
2011-03-04 18:51:09 +00:00
Jakob Stoklund Olesen
dfc9b0b15f Add SlotIndex statistics.
llvm-svn: 127007
2011-03-04 18:08:29 +00:00
Jakob Stoklund Olesen
c3193f4aa8 Tweak debug output. No functional changes.
llvm-svn: 127006
2011-03-04 18:08:26 +00:00
Duncan Sands
f6c9de94f8 Revert commit 126684 "Use the correct shift amount type". It is only the correct
type after type legalization has completed.  Before then it may simply not be big
enough to hold the shift amount, particularly on x86 which uses a very small type
for shifts (this issue broke stuff in the past which is why LegalizeTypes carefully
uses a large type for shift amounts).

llvm-svn: 127000
2011-03-04 14:28:59 +00:00
Andrew Trick
1478921dc0 Minor pre-RA-sched fixes and cleanup.
Fix the PendingQueue, then disable it because it's not required for
the current schedulers' heuristics.
Fix the logic for the unused list-ilp scheduler.

llvm-svn: 126981
2011-03-04 02:03:45 +00:00
Jakob Stoklund Olesen
3c64ea5ad2 Precompute block frequencies, pow() isn't free.
llvm-svn: 126975
2011-03-04 00:58:40 +00:00
Jakob Stoklund Olesen
5bc7a96cca Use an IndexedMap instead of a DenseMap for the live-out cache.
This speeds up updateSSA() so it only accounts for 5% of the live range
splitting time.

llvm-svn: 126972
2011-03-04 00:15:36 +00:00
Bill Wendling
6e6a0422eb There are times when the landing pad won't have a call to 'eh.selector' in
it. It's been assumed up until now that it would be in its immediate
successor. However, this isn't necessarily the case. It could be in one of its
successor's successors.

Modify the code to more thoroughly check for an 'eh.selector' call in
successors. It only looks at a successor if we get there as a result of an
unconditional branch.

Testcase ObjC/exceptions-4.m in r126968.

llvm-svn: 126969
2011-03-03 23:14:05 +00:00
Eli Friedman
26f5c96de3 Revert r123908; the code in question is completely untested and wrong.
llvm-svn: 126964
2011-03-03 22:33:23 +00:00
Devang Patel
a7b5a00033 Fix typo.
llvm-svn: 126962
2011-03-03 21:49:41 +00:00
Devang Patel
3c2bd93f9e Fix thinko in previous check-in.
Add comment.

llvm-svn: 126959
2011-03-03 20:08:10 +00:00
Devang Patel
4adbb350be llvm::Function argument count is not a good indicator of how many arguments the function has at source level. If we need more space, just resize the vector conservatively. This vector is only used once per function.
llvm-svn: 126957
2011-03-03 20:02:02 +00:00
Jim Grosbach
e4ef37b298 Allow a target to choose whether to prefer the scavenger emergency spill slot
be next to the frame pointer or the stack pointer.

llvm-svn: 126956
2011-03-03 20:01:52 +00:00
Jakob Stoklund Olesen
3511ad1023 Renumber slot indexes uniformly instead of spacing according to the number of defs.
There are probably much larger speedups to be had by renumbering locally instead
of looping over the whole function. For now, the greedy register allocator is
25% faster.

llvm-svn: 126926
2011-03-03 06:29:01 +00:00
Jakob Stoklund Olesen
d0930e03c1 Represent sentinel slot indexes with a null pointer.
This is much faster than using a pointer to a ManagedStatic object accessed with
a function call. The greedy register allocator is 5% faster overall just from
the SlotIndex default constructor savings.

llvm-svn: 126925
2011-03-03 05:40:04 +00:00
Jakob Stoklund Olesen
2e6407e9aa Avoid comparing invalid slot indexes, and assert that it doesn't happen.
The SlotIndex created by the default construction does not represent a position
in the function, and it doesn't make sense to compare it to other indexes.

llvm-svn: 126924
2011-03-03 05:18:19 +00:00
Jakob Stoklund Olesen
92d724fb69 Avoid comparing invalid slot indexes.
llvm-svn: 126922
2011-03-03 04:23:52 +00:00
Jakob Stoklund Olesen
2759a169ea Cache basic block bounds instead of asking SlotIndexes::getMBBRange all the time.
This speeds up the greedy register allocator by 15%.
DenseMap is not as fast as one might hope.

llvm-svn: 126921
2011-03-03 03:41:29 +00:00
Jakob Stoklund Olesen
09a1939164 Change the SplitEditor interface so a single instance can be shared for multiple splits.
llvm-svn: 126912
2011-03-03 01:29:13 +00:00
Jakob Stoklund Olesen
df789f852e Only run the updateSSA loop when we have actually seen multiple values.
When only a single value has been seen, new PHIDefs are never needed.

llvm-svn: 126911
2011-03-03 01:29:10 +00:00
Jakob Stoklund Olesen
31949ff3f8 Fix PHI handling in LiveIntervals::shrinkToUses().
We need to wait until we meet a PHIDef in its defining block before resurrecting
PHIKills in the predecessors.

This should unbreak the llvm-gcc-build-x86_64-darwin10-x-mingw32-x-armeabi bot.

llvm-svn: 126905
2011-03-03 00:20:51 +00:00
Bob Wilson
42e5d02b10 Avoid exponential blow-up when printing DAGs.
David Greene changed CannotYetSelect() to print the full DAG including multiple
copies of operands reached through different paths in the DAG.  Unfortunately
this blows up exponentially in some cases.  The depth limit of 100 is way too
high to prevent this -- I'm seeing a message string of 150MB with a depth of
only 40 in one particularly bad case, even though the DAG has less than 200
nodes.  Part of the problem is that the printing code is following chain
operands, so if you fail to select an operation with a chain, the printer will
follow all the chained operations back to the entry node.

llvm-svn: 126899
2011-03-02 23:38:06 +00:00
Jakob Stoklund Olesen
c5dc2c61fc Turn the Edit member into a pointer so it can change dynamically.
No functional change.

llvm-svn: 126898
2011-03-02 23:31:50 +00:00
Jakob Stoklund Olesen
9697ef837c Transfer simply defined values directly without recomputing liveness and SSA.
Values that map to a single new value in a new interval after splitting don't
need new PHIDefs, and if the parent value was never rematerialized the live
range will be the same.

llvm-svn: 126894
2011-03-02 23:05:19 +00:00
Jakob Stoklund Olesen
22469169cd Extract a method. No functional change.
llvm-svn: 126893
2011-03-02 23:05:16 +00:00
Stuart Hastings
a40e563e4e Can't introduce floating-point immediate constants after legalization.
Radar 9056407.

llvm-svn: 126864
2011-03-02 19:36:30 +00:00
Cameron Zwarich
bb1d033824 Fix some typos.
llvm-svn: 126829
2011-03-02 04:03:46 +00:00
Jakob Stoklund Olesen
6751d65fd6 Move extendRange() into SplitEditor and delete the LiveRangeMap class.
Extract the updateSSA() method from the too long extendRange().

LiveOutCache can be shared among all the new intervals since there is at most
one of the new ranges live out from each basic block.

llvm-svn: 126818
2011-03-02 01:59:34 +00:00
Nick Lewycky
6ad594dd20 Quiet a compiler warning about unused variable 'ExtVNI'.
llvm-svn: 126815
2011-03-02 01:43:30 +00:00
Evan Cheng
5275ba7f98 Catch more cases where 2-address pass should 3-addressify instructions. rdar://9002648.
llvm-svn: 126811
2011-03-02 01:08:17 +00:00
Jakob Stoklund Olesen
a6fd3b66b0 Rename mapValue to extendRange because that is its function now.
Simplify the signature - The return value and ParentVNI are no longer needed.

llvm-svn: 126809
2011-03-02 00:49:28 +00:00
Jakob Stoklund Olesen
200a74580c Simplify LiveIntervals::shrinkToUses() a bit by using the new extendInBlock().
llvm-svn: 126806
2011-03-02 00:33:03 +00:00
Jakob Stoklund Olesen
be281bb02c Fix typo.
llvm-svn: 126805
2011-03-02 00:33:01 +00:00
Jakob Stoklund Olesen
4d3c996555 Move LiveIntervalMap::extendTo into LiveInterval itself.
This method could probably be used by LiveIntervalAnalysis::shrinkToUses, and
now it can use extendIntervalEndTo() which coalesces ranges.

llvm-svn: 126803
2011-03-02 00:06:15 +00:00
Jakob Stoklund Olesen
4f3da0eab4 Delete dead code.
llvm-svn: 126801
2011-03-01 23:24:19 +00:00
Jakob Stoklund Olesen
15d9bf0424 Move the value map from LiveIntervalMap to SplitEditor.
The value map is currently not used, all values are 'complex mapped' and
LiveIntervalMap::mapValue is used to dig them out.

This is the first step in a series changes leading to the removal of
LiveIntervalMap. Its data structures can be shared among all the live intervals
created by a split, so it is wasteful to create a copy for each.

llvm-svn: 126800
2011-03-01 23:14:53 +00:00
Jakob Stoklund Olesen
13b552560b Delete dead code.
Local live range splitting is better driven by interference. This code was just
guessing.

llvm-svn: 126799
2011-03-01 23:14:50 +00:00
Jakob Stoklund Olesen
36e4edd585 Drop RAGreedy::trySpillInterferences().
This is a waste of time since we already know how to evict all interferences
which is a better approach anyway.

llvm-svn: 126798
2011-03-01 23:14:48 +00:00
Devang Patel
8233c58f5d If argument numbering is encoded in metadata then emit arguments' debug info in that order.
llvm-svn: 126794
2011-03-01 22:58:55 +00:00
Jakob Stoklund Olesen
ef6bec1ff5 Keep track of which stage produced a live range, and bypass earlier stages when revisiting.
This effectively disables the 'turbo' functionality of the greedy register
allocator where all new live ranges created by splitting would be reconsidered
as if they were originals.

There are two reasons for doing this: 1. It guarantees that the algorithm
terminates. Early versions were prone to infinite looping in certain corner
cases. 2. It is a 2x speedup. We can skip a lot of unnecessary interference
checks that won't lead to good splitting anyway.

The problem is that region splitting only gets one shot, so it should probably
be changed to target multiple physical registers at once.

Local live range splitting is still 'turbo' enabled. It only accounts for a
small fraction of compile time, so it is probably not necessary to do anything
about that.

llvm-svn: 126781
2011-03-01 21:10:07 +00:00
Duncan Sands
b0cd2d9a1e Add a few missed unary cases when legalizing vector results. Put some cases
in alphabetical order.

llvm-svn: 126745
2011-03-01 15:15:43 +00:00
Jim Grosbach
3b72823981 trailing whitespace.
llvm-svn: 126733
2011-03-01 01:39:05 +00:00
Jim Grosbach
fbdcd70f4b Generalize the register matching code in DAGISel a bit.
llvm-svn: 126731
2011-03-01 01:37:19 +00:00
Owen Anderson
22d34c260e Use the correct shift amount type.
llvm-svn: 126684
2011-02-28 21:10:10 +00:00
Owen Anderson
1614a324d9 Clean whitespace.
llvm-svn: 126683
2011-02-28 20:57:56 +00:00
Dan Gohman
91de965338 Delete the GEPSplitter experiment.
llvm-svn: 126671
2011-02-28 19:47:47 +00:00
Stuart Hastings
539d4e1460 Support for byval parameters on ARM. Will be enabled by a forthcoming
patch to the front-end.  Radar 7662569.

llvm-svn: 126655
2011-02-28 17:17:53 +00:00
Duncan Sands
b6f7dcb996 Legalize support for fpextend of vector. PR9309.
llvm-svn: 126574
2011-02-27 14:41:27 +00:00