mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-11-01 16:33:37 +01:00
Commit Graph

178 Commits

Author SHA1 Message Date
Chris Lattner
9ce833945e improve portability to avoid conflicting with std::next in C++0x.
Patch by Howard Hinnant!

llvm-svn: 90365
2009-12-03 00:50:42 +00:00
David Greene
58e7c6145b Add a bool flag to StackObjects telling whether they reference spill
slots.  The AsmPrinter will use this information to determine whether to
print a spill/reload comment.

Remove default argument values.  It's too easy to pass a wrong argument
value when multiple arguments have default values.  Make everything
explicit to trap bugs early.

Update all targets to adhere to the new interfaces.

llvm-svn: 87022
2009-11-12 20:49:22 +00:00
Jim Grosbach
220e39263c When the function is doing dynamic stack realignment, the spill slot will be
indexed via the stack pointer, even if a frame pointer is present. Update the
heuristic to place it nearest the stack pointer in that case, rather than
nearest the frame pointer.

llvm-svn: 85474
2009-10-29 02:33:47 +00:00
Jim Grosbach
8b0af4b521 Cleanup of frame index scavenging. Better code flow and more accurately
handles T2 and ARM use cases.

llvm-svn: 84761
2009-10-21 15:26:21 +00:00
Jim Grosbach
7800d00f20 Better handle instructions that re-def a scratch register
llvm-svn: 84657
2009-10-20 19:52:35 +00:00
Jim Grosbach
5ce30430a4 Register re-use for scavenged frame indices must check for re-definition
of the register in the instruction which kills the scavenged value.

llvm-svn: 84641
2009-10-20 16:33:57 +00:00
Jim Grosbach
c4acd85bfd Enable post-pass frame index register scavenging for ARM and Thumb2
llvm-svn: 84585
2009-10-20 01:26:58 +00:00
Evan Cheng
5ab1ccfaee Distinguish stack slots from other stack objects. They (and fixed objects) get FixedStack PseudoSourceValues.
llvm-svn: 84326
2009-10-17 09:20:14 +00:00
Jim Grosbach
8da653b238 Make loop not recalc getNumOperands() each time around
llvm-svn: 84138
2009-10-14 21:22:39 +00:00
Jim Grosbach
9233bebc36 quiet compiler warning
llvm-svn: 84133
2009-10-14 21:07:11 +00:00
Jim Grosbach
809d1bfdd3 When the previous scratch register is killed, flag the value as no longer being tracked
llvm-svn: 83653
2009-10-09 17:33:33 +00:00
Jim Grosbach
a3e1149941 Re-enable register scavenging in Thumb1 by default.
llvm-svn: 83521
2009-10-08 01:46:59 +00:00
Jim Grosbach
a8e8c8a28f bugfix. The target may use virtual registers that aren't tracked for re-use but are allocated by the scavenger. The re-use algorithm needs to watch for that.
llvm-svn: 83519
2009-10-08 01:09:45 +00:00
Jim Grosbach
cb905d28a8 reverting thumb1 scavenging default due to test failure while I figure out what's up.
llvm-svn: 83501
2009-10-07 22:49:41 +00:00
Jim Grosbach
cc952b2ff8 Enable thumb1 register scavenging by default.
llvm-svn: 83496
2009-10-07 22:26:31 +00:00
Jim Grosbach
0609c91ec4 grammar
llvm-svn: 83483
2009-10-07 19:08:36 +00:00
Jim Grosbach
5db951c6a2 add initializers for clarity. Add missing assignment of PrevLastUseOp.
llvm-svn: 83481
2009-10-07 18:44:24 +00:00
Jim Grosbach
61c5ce1bde Add register-reuse to frame-index register scavenging. When a target uses
a virtual register to eliminate a frame index, it can return that register
and the constant stored there to PEI to track. When scavenging to allocate
for those registers, PEI then tracks the last-used register and value, and
if it is still available and matches the value for the next index, reuses the
existing value rather than re-materializing it and removes the now-redundant
re-materialization instructions. Fancier tracking and adjustment of scavenger
allocations to keep more values live for longer is possible, but not yet
implemented and would likely be better done via a different, less
special-purpose approach to the problem.

eliminateFrameIndex() is modified so the target implementations can return
the registers they wish to be tracked for reuse.

ARM Thumb1 implements and utilizes the new mechanism. All other targets are
simply modified to adjust for the changed eliminateFrameIndex() prototype.

llvm-svn: 83467
2009-10-07 17:12:56 +00:00
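
The reuse bookkeeping described above boils down to remembering the last (register, value) pair the scavenger handed out and invalidating it when that register dies. A minimal standalone C++ sketch, with hypothetical names rather than the actual PEI data structures:

    // Illustration only; names and types are hypothetical, not the actual
    // PrologEpilogInserter code.
    #include <cstdint>

    struct ScratchReuseTracker {
      unsigned Reg = 0;     // physical register last assigned by the scavenger
      int64_t Value = 0;    // constant currently materialized in that register
      bool Live = false;    // cleared once the register is killed or redefined

      // If the previously scavenged register still holds the value needed for
      // the next frame index, reuse it and drop that index's
      // re-materialization instructions.
      bool canReuse(int64_t NeededValue) const {
        return Live && Value == NeededValue;
      }

      void record(unsigned NewReg, int64_t NewValue) {
        Reg = NewReg;
        Value = NewValue;
        Live = true;
      }

      void invalidate() { Live = false; }  // tracked register killed/redefined
    };
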
Jim Grosbach
b4fdc9252d Add additional assert() to verify no extraneous use of a scavenged register.
llvm-svn: 83163
2009-09-30 20:35:36 +00:00
Jim Grosbach
d885068ad4 replace TRI->isVirtualRegister() with TargetRegisterInfo::isVirtualRegister()
per customary usage

llvm-svn: 83137
2009-09-30 01:47:59 +00:00
Jim Grosbach
5cc33f7dac fix compiler warning
llvm-svn: 83132
2009-09-30 00:37:40 +00:00
Jim Grosbach
0e6ac9047c Simplify the tracking of virtual frame index registers. Ranges cannot overlap,
so a simple "current register" will suffice. Also add some additional
sanity-checking assertions to make sure things are as we expect.

llvm-svn: 83081
2009-09-29 18:23:15 +00:00
Tilmann Scheller
a23802520a Use explicit structs instead of std::pair to map callee saved regs to spill slots.
llvm-svn: 82909
2009-09-27 17:58:47 +00:00
Bob Wilson
94e29af5ac PR4926: ARM requires the stack pointer to be aligned, even for leaf functions.
For the AAPCS ABI, SP must always be 4-byte aligned, and at any "public
interface" it must be 8-byte aligned.  For the older ARM APCS ABI, the stack
alignment is just always 4 bytes.  For X86, we currently align SP at
entry to a function (e.g., to 16 bytes for Darwin), but no stack alignment
is needed at other times, such as for a leaf function.

After discussing this with Dan, I decided to go with the approach of adding
a new "TransientStackAlignment" field to TargetFrameInfo.  This value
specifies the stack alignment that must be maintained even in between calls.
It defaults to 1 except for ARM, where it is 4.  (Some other targets may
also want to set this if they have similar stack requirements. It's not
currently required for PPC because it sets targetHandlesStackFrameRounding
and handles the alignment in target-specific code.) The existing StackAlignment
value specifies the alignment upon entry to a function, which is how we've
been using it anyway.

llvm-svn: 82767
2009-09-25 14:41:49 +00:00
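
The distinction the commit introduces, alignment required at function entry versus alignment that must hold at all times, can be sketched in a few lines; the struct below is an illustrative stand-in, not the actual TargetFrameInfo interface:

    // Illustration only; not the real TargetFrameInfo class.
    struct FrameAlignment {
      unsigned StackAlignment;          // required on entry to a function
                                        // (e.g. 16 bytes for x86 Darwin)
      unsigned TransientStackAlignment; // must hold even between calls and in
                                        // leaf functions (4 for ARM, 1 default)
    };

    // Round a frame size up to whichever alignment applies in the current
    // context (transient for leaf code, entry alignment around calls).
    inline unsigned alignFrameSize(unsigned Size, unsigned Align) {
      return (Size + Align - 1) / Align * Align;
    }
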
Jim Grosbach
3d527ff081 Start of revamping the register scavenging in PEI. ARM Thumb1 is the driving
interest for this, as it currently reserves a register rather than using
the scavenger for materializing constants as needed.

Instead of scavenging registers on the fly while eliminating frame indices,
new virtual registers are created and then scavenged collectively in a
post-pass over the function. This isolates the bits that need to interact
with the scavenger, and sets the stage for more intelligent use, and reuse,
of scavenged registers.

For the time being, this is disabled by default. Once the bugs are worked out,
the current scavenging calls in replaceFrameIndices() will be removed and
the post-pass scavenging will be the default. Until then,
-enable-frame-index-scavenging enables the new code. Currently, only the
Thumb1 back end is set up to use it.

llvm-svn: 82734
2009-09-24 23:52:18 +00:00
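
The two-phase structure described above can be sketched as follows; the helper and type names are hypothetical, not the actual PEI or scavenger code:

    #include <vector>

    // Hypothetical stand-ins for illustration.
    struct VirtScratch {
      unsigned VReg;    // virtual register created during frame index elimination
      int FrameIndex;   // the index whose address it materializes
    };

    // Phase 1: while eliminating frame indices, record the virtual registers
    // that were created instead of scavenging physical registers on the fly.
    std::vector<VirtScratch> PendingScratches;

    // Phase 2: a post-pass over the function assigns a physical register to
    // each pending virtual register, keeping all scavenger interaction here.
    void scavengeFrameVirtualRegs() {
      for (const VirtScratch &VS : PendingScratches) {
        (void)VS; // ask the scavenger for a register that is free over
                  // VS.VReg's short live range and rewrite its uses
                  // (details omitted in this sketch)
      }
      PendingScratches.clear();
    }
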
Bob Wilson
4a0decbe2e Fix a hypothetical problem for targets with StackGrowsUp and a non-zero
LocalAreaOffset.  (We don't have any of those right now.)
PEI::calculateFrameObjectOffsets includes the absolute value of the
LocalAreaOffset in the cumulative offset value used to calculate the
stack frame size.  It then adds the raw value of the LocalAreaOffset
to the stack size.  For a StackGrowsDown target, that raw value is negative
and has the effect of cancelling out the absolute value that was added
earlier, but that obviously won't work for a StackGrowsUp target.  Change
to subtract the absolute value of the LocalAreaOffset.

llvm-svn: 82693
2009-09-24 16:42:27 +00:00
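
The arithmetic behind the fix fits in a few lines; this is a simplified stand-in for the relevant part of calculateFrameObjectOffsets, assuming Offset already has std::abs(LocalAreaOffset) folded into it:

    #include <cstdlib>  // std::abs

    // Simplified illustration of the stack-size computation described above.
    int finalStackSize(int Offset, int LocalAreaOffset) {
      // Old code: Offset += LocalAreaOffset;
      //   StackGrowsDown: LocalAreaOffset <= 0, so adding it cancels the
      //   absolute value folded in earlier -- correct.
      //   StackGrowsUp:   LocalAreaOffset >= 0, so it is counted twice -- wrong.
      // Fix: always remove the absolute value that was added earlier.
      return Offset - std::abs(LocalAreaOffset);
    }
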
Bob Wilson
e92f61e68b Edit a comment.
llvm-svn: 82641
2009-09-23 18:53:19 +00:00
Bob Wilson
0ad3a28b10 Fix a comment typo and some whitespace.
llvm-svn: 82285
2009-09-18 21:43:11 +00:00
Devang Patel
c071d6c1b4 Record variable debug info at ISel time directly.
llvm-svn: 79742
2009-08-22 17:12:53 +00:00
Jakob Stoklund Olesen
8f6660c417 Don't setCalleeSavedInfoValid() until spills are inserted.
In a naked function, the flag is never set and getPristineRegs() returns an
empty list. That means naked functions are able to clobber callee saved
registers, but that is the whole point of naked functions.

This fixes PR4716.

llvm-svn: 79096
2009-08-15 13:10:46 +00:00
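
The behavior described can be sketched with a hypothetical stand-in for the MachineFrameInfo query; the point is simply that until the callee-saved info is marked valid, nothing is reported as pristine:

    #include <vector>

    // Illustration only; not the real MachineFrameInfo API.
    struct FrameInfoSketch {
      bool CSIValid = false;                 // set only once spills are inserted
      std::vector<unsigned> CalleeSavedRegs; // registers saved by the prologue

      std::vector<unsigned> getPristineRegs() const {
        if (!CSIValid)
          return {}; // naked functions never set the flag, so they remain
                     // free to clobber callee-saved registers
        return CalleeSavedRegs; // simplified: the real query also depends on
                                // the basic block being asked about
      }
    };
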
Jakob Stoklund Olesen
96890fb0cf Add MachineFrameInfo::getPristineRegisters(MBB) method.
llvm-svn: 78911
2009-08-13 16:19:33 +00:00
Dan Gohman
4529d71681 Use setPreservesAll and setPreservesCFG in CodeGen passes.
llvm-svn: 77754
2009-07-31 23:37:33 +00:00
Dan Gohman
f28b3bb262 Reapply r77654 with a fix: MachineFunctionPass's getAnalysisUsage
shouldn't do AU.setPreservesCFG(), because even though CodeGen passes
don't modify the LLVM IR CFG, they may modify the MachineFunction CFG,
and passes like MachineLoop are registered with isCFGOnly set to true.

llvm-svn: 77691
2009-07-31 18:16:33 +00:00
Daniel Dunbar
60d71a790c Revert r77654, it appears to be causing llvm-gcc bootstrap failures, and many
failures when building assorted projects with clang.

--- Reverse-merging r77654 into '.':
U    include/llvm/CodeGen/Passes.h
U    include/llvm/CodeGen/MachineFunctionPass.h
U    include/llvm/CodeGen/MachineFunction.h
U    include/llvm/CodeGen/LazyLiveness.h
U    include/llvm/CodeGen/SelectionDAGISel.h
D    include/llvm/CodeGen/MachineFunctionAnalysis.h
U    include/llvm/Function.h
U    lib/Target/CellSPU/SPUISelDAGToDAG.cpp
U    lib/Target/PowerPC/PPCISelDAGToDAG.cpp
U    lib/CodeGen/LLVMTargetMachine.cpp
U    lib/CodeGen/MachineVerifier.cpp
U    lib/CodeGen/MachineFunction.cpp
U    lib/CodeGen/PrologEpilogInserter.cpp
U    lib/CodeGen/MachineLoopInfo.cpp
U    lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp
D    lib/CodeGen/MachineFunctionAnalysis.cpp
D    lib/CodeGen/MachineFunctionPass.cpp
U    lib/CodeGen/LiveVariables.cpp

llvm-svn: 77661
2009-07-31 03:02:41 +00:00
Dan Gohman
645f1122c0 Manage MachineFunctions with an analysis Pass instead of the Annotable
mechanism. To support this, make MachineFunctionPass a little more
complete.

llvm-svn: 77654
2009-07-31 01:52:50 +00:00
Anton Korobeynikov
60f7bd3562 Add support for naked functions
llvm-svn: 76198
2009-07-17 18:07:26 +00:00
Dale Johannesen
e08bda67c2 Assume an inline asm might be a call, so we get
stack alignment right when it is.  This is not
ideal but conservatively correct.  Adjust a test (gcc.apple/asm-block-57.c)
to compensate for the changed stack offset value.

llvm-svn: 76120
2009-07-16 22:34:45 +00:00
Anton Korobeynikov
64ff9c023a Scan for the presence of calls and determine the max call frame size early, to allow ProcessFunctionBeforeCalleeSaveScan() to use this information
llvm-svn: 75942
2009-07-16 13:50:40 +00:00
Evan Cheng
4f87295872 Targets sometimes assign a fixed stack object for spilling certain callee-saved
registers based on dynamic conditions. For example, X86's EBP/RBP, when used as the
frame register, has to be spilled in the first fixed object. The target should inform
PEI of this so the register doesn't get allocated another stack object. Also, it should
not be spilled like the other callee-saved registers; rather, its spilling and restoring
are handled by emitPrologue and emitEpilogue. Avoid spilling it twice.

llvm-svn: 75116
2009-07-09 06:53:48 +00:00
Bill Wendling
8538bf59a8 Use iterators instead of counters for loops.
llvm-svn: 75046
2009-07-08 20:57:27 +00:00
Jim Grosbach
40d13bf382 Removing the HasBuiltinSetjmp flag and associated bits. Flagging functions
that contain the builtin sjlj exception handling intrinsics turns out not to
be necessary. Marking the intrinsic implementation in the .td file as
defining all registers is sufficient to get the context saved properly by 
the containing function.

llvm-svn: 71743
2009-05-13 23:50:53 +00:00
John Mosby
6b2d45fe66 PEI: rename PEI.h to PrologEpilogInserter.h to adhere to file naming standard
llvm-svn: 71678
2009-05-13 17:52:11 +00:00
Jim Grosbach
4bb5e9d1df Add support for GCC compatible builtin setjmp and longjmp intrinsics. This is
a supporting preliminary patch for GCC-compatible SjLJ exception handling. Note that these intrinsics are not designed to be invoked directly by the user, but
rather used by the front-end as target hooks for exception handling.

llvm-svn: 71610
2009-05-12 23:59:14 +00:00
John Mosby
618e3e6578 Restructure PEI code:
- moved shrink wrapping code from PrologEpilogInserter.cpp to
  new file ShrinkWrapping.cpp.

- moved PEI pass definition into new shared header PEI.h.

llvm-svn: 71588
2009-05-12 20:33:29 +00:00
Evan Cheng
645351d3a8 Apply patch review feedback.
llvm-svn: 71472
2009-05-11 20:53:52 +00:00
Evan Cheng
dad97e1bfc Unbreak non-debug build.
llvm-svn: 71457
2009-05-11 18:40:52 +00:00
John Mosby
366006cfd3 Shrink wrapping in PEI:
- reduces _static_ callee-saved register spills
  and restores, similar to Chow's original algorithm.
- iterative implementation with simple heuristic
  limits to mitigate compile time impact.
- handles placing spills/restores for multi-entry,
  multi-exit regions in the Machine CFG without
  splitting edges.
- passes test-suite in LLCBETA mode.

Added contains() method to ADT/SparseBitVector.

llvm-svn: 71438
2009-05-11 17:04:19 +00:00
John Mosby
67ffa789e8 Shrink wrapping in PEI: initial release. Still finishing development; enable with --shrink-wrap.
llvm-svn: 67828
2009-03-27 06:09:40 +00:00
Evan Cheng
758c95f07a Fix PR3845: Avoid stale MachineInstruction pointer reference.
llvm-svn: 67649
2009-03-24 20:33:17 +00:00
Chris Lattner
b18e8617be Apply the patch requested in PR3846.
llvm-svn: 67364
2009-03-20 05:08:24 +00:00