to TargetFrameLowering, where it belongs. Incidentally, this allows us
to delete some duplicated (and slightly different!) code in TRI.
There are potentially other layering problems that can be cleaned up
as a result, or in a similar manner.
The refactoring was OK'd by Anton Korobeynikov on llvmdev.
Note: this touches the target interfaces, so out-of-tree targets may
be affected.
llvm-svn: 175788
- Besides being used in SjLj exception handling, __builtin_setjmp/__builtin_longjmp is also
used as a light-weight replacement for setjmp/longjmp, which are used to
implement continuations, user-level threading, etc. The support added
in this patch ONLY addresses this usage and is NOT intended to support SjLj
exception handling as zero-cost DWARF exception handling is used by default
in X86.
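For illustration only (this snippet is mine, not part of the patch), the kind of
non-exception usage being supported looks like:

  // Cheap user-level control transfer with the builtins; __builtin_setjmp
  // requires a five-word buffer and __builtin_longjmp's second argument must be 1.
  #include <cstdio>

  static void *jump_buffer[5];

  static void worker() {
    std::puts("in worker; jumping back to the saved context");
    __builtin_longjmp(jump_buffer, 1);
  }

  int main() {
    if (__builtin_setjmp(jump_buffer) == 0)
      worker();                // never returns normally
    else
      std::puts("resumed after __builtin_longjmp");
    return 0;
  }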
llvm-svn: 165989
- Add 'HwEncoding' for X86 registers and call getEncodingValue() to
retrieve their encoding values.
- This is the first step in adopting the new scheme. Further revision is ongoing.
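As a small sketch of the intended call pattern (illustrative, not part of the patch):
with HwEncoding populated in the .td files, encoders read the value from the
generated tables instead of hand-written switches.

  #include "llvm/MC/MCRegisterInfo.h"

  // For X86, e.g., EAX encodes as 0, ECX as 1, EDX as 2, and so on.
  static unsigned encodeReg(const llvm::MCRegisterInfo &MRI, unsigned Reg) {
    return MRI.getEncodingValue(Reg);
  }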
llvm-svn: 165241
X86. Basically, this is a reapplication of r158087 with a few fixes.
Specifically, (1) the stack pointer is restored from the base pointer before
popping callee-saved registers and (2) in obscure cases (see comments in patch)
we must cache the value of the original stack adjustment in the prologue and
apply it in the epilogue.
rdar://11496434
llvm-svn: 160002
This patch causes problems when both dynamic stack realignment and
dynamic allocas combine in the same function. With this patch, we no
longer build the epilog correctly, and silently restore registers from
the wrong position in the stack.
Thanks to Matt for tracking this down, and getting at least an initial
test case to Chad. I'm going to try to check in a variation of that test
case so we can easily track the fixes required.
llvm-svn: 158654
The getPointerRegClass() hook can return register classes that depend on
the calling convention of the current function (ptr_rc_tailcall).
So far, we have been able to infer the calling convention from the
subtarget alone, but as we add support for multiple calling conventions
per target, that no longer works.
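A rough sketch of the direction of the change (illustrative, not the verbatim patch;
the helper name below is hypothetical): the hook now receives the MachineFunction, so
the ptr_rc_tailcall answer can depend on the function's calling convention rather than
on the subtarget alone.

  const TargetRegisterClass *
  X86RegisterInfo::getPointerRegClass(const MachineFunction &MF,
                                      unsigned Kind) const {
    if (Kind == 1 /* ptr_rc_tailcall */)
      return getTailCallPointerRegClass(MF);  // depends on MF's convention
    return Is64Bit ? &X86::GR64RegClass : &X86::GR32RegClass;
  }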
Patch by Yiannis Tsiouris!
llvm-svn: 156328
on X86 Atom. Some of our tests failed because the tail merging part of
the BranchFolding pass was creating new basic blocks which did not
contain live-in information. When the anti-dependency code in the Post-RA
scheduler ran, it would sometimes rename the register containing
the function return value because the fact that the return value was
live-in to the subsequent block had been lost. To fix this, it is necessary
to run the RegisterScavenging code in the BranchFolding pass.
This patch makes sure that the register scavenging code is invoked
in the X86 subtarget only when post-RA scheduling is being done.
Post RA scheduling in the X86 subtarget is only done for Atom.
This patch adds a new function to TargetRegisterInfo to control
whether or not live-ins should be preserved during branch folding.
This is necessary in order for the anti-dependency optimizations done
during the PostRASchedulerList pass to work properly when doing
post-RA scheduling for X86 in general and for the Intel Atom in particular.
The patch adds and invokes the new function trackLivenessAfterRegAlloc()
instead of using the existing requiresRegisterScavenging().
It changes BranchFolding.cpp to call trackLivenessAfterRegAlloc() instead of
requiresRegisterScavenging(). It changes all the targets that
implemented requiresRegisterScavenging() to also implement
trackLivenessAfterRegAlloc().
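A minimal sketch of the new hook (illustrative, not the verbatim patch; the subtarget
accessor name is an assumption): TargetRegisterInfo gains

  virtual bool trackLivenessAfterRegAlloc(const MachineFunction &MF) const;

with a default of false, and the X86 override ties it to the same condition that
enables post-RA scheduling:

  bool X86RegisterInfo::trackLivenessAfterRegAlloc(
      const MachineFunction &MF) const {
    const X86Subtarget &ST = MF.getTarget().getSubtarget<X86Subtarget>();
    return ST.postRAScheduler();  // true only for CPUs such as Atom
  }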
It adds an assertion in the Post RA scheduler to make sure that post RA
liveness information is available when it is needed.
It changes the X86 break-anti-dependencies test to use -mcpu=atom, in order
to avoid running into the added assertion.
Finally, this patch restores the use of anti-dependency checking
(which was turned off temporarily for the 3.1 release) for
Intel Atom in the Post RA scheduler.
Patch by Andy Zhang!
Thanks to Jakob and Anton for their reviews.
llvm-svn: 155395
There are fewer registers with sub_8bit sub-registers in 32-bit mode
than in 64-bit mode. In 32-bit mode, sub_8bit behaves the same as
sub_8bit_hi.
llvm-svn: 141206
to MCRegisterInfo. Also initialize the mapping at construction time.
This patch eliminates TargetRegisterInfo from TargetAsmInfo. It's another step
towards fixing the layering violation.
llvm-svn: 135424
The hook will be used by the register allocator when recomputing register
classes after removing constraints.
Thumb1 code doesn't allow anything larger than tGPR, and x86 needs to ensure
that the spill size doesn't change.
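A hypothetical sketch of the constraint the hook has to respect (the function below is
my illustration, not the patch itself): a wider class is only acceptable if it contains
the original class and does not change the spill size.

  static const llvm::TargetRegisterClass *
  largestSuperClassWithSameSpillSize(const llvm::TargetRegisterClass *RC,
                                     const llvm::TargetRegisterInfo &TRI) {
    const llvm::TargetRegisterClass *Best = RC;
    for (unsigned i = 0, e = TRI.getNumRegClasses(); i != e; ++i) {
      const llvm::TargetRegisterClass *C = TRI.getRegClass(i);
      // C must contain every register in RC, spill the same number of bytes,
      // and actually be larger to be worth returning.
      if (C->hasSubClassEq(RC) && C->getSize() == RC->getSize() &&
          C->getNumRegs() > Best->getNumRegs())
        Best = C;
    }
    return Best;
  }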
llvm-svn: 130228
is preparatory to having PEI's scavenged frame index value reuse logic
properly distinguish types of frame values (e.g., whether the value is
stack-pointer relative or frame-pointer relative).
No functionality change.
llvm-svn: 98086
Extracting the low element of a vector is now done with EXTRACT_SUBREG,
and the zero-extension performed by load movss is now modeled with
SUBREG_TO_REG, and so on.
Register-to-register movss and movsd are no longer considered copies;
they are two-address instructions which insert a scalar into a vector.
llvm-svn: 97354
function can support dynamic stack realignment. That's a much easier question
to answer at instruction selection stage than whether the function actually
will have dynamic alignment prologue. This allows the removal of the
stack alignment heuristic pass, and improves code quality for cases where
the heuristic would result in dynamic alignment code being generated when
it was not strictly necessary.
llvm-svn: 93885
a virtual register to eliminate a frame index, it can return that register
and the constant stored there to PEI to track. When scavenging to allocate
for those registers, PEI then tracks the last-used register and value, and
if it is still available and matches the value for the next index, reuses
the existing value rather than re-materializing it and removes the redundant
re-materialization instructions.
Fancier tracking and adjustment of scavenger allocations to keep more
values live for longer is possible, but not yet implemented and would likely
be better done via a different, less special-purpose, approach to the
problem.
eliminateFrameIndex() is modified so the target implementations can return
the registers they wish to be tracked for reuse.
ARM Thumb1 implements and utilizes the new mechanism. All other targets are
simply modified to adjust for the changed eliminateFrameIndex() prototype.
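A simplified, hypothetical sketch of the reuse policy described above (the real logic
lives in PrologEpilogInserter; all names here are illustrative): remember the last
(register, value) pair that was materialized for a frame index, and if the next index
needs the same value, hand back the register instead of re-materializing.

  struct ScavengedValue {
    unsigned Reg;     // register holding the materialized value
    int64_t Value;    // the constant (e.g. SP + offset) it currently holds
    bool Valid;
  };

  // Called once per frame-index operand as PEI rewrites the function.
  static unsigned reuseOrMaterialize(ScavengedValue &Last, int64_t Needed,
                                     unsigned MaterializeIntoReg) {
    if (Last.Valid && Last.Value == Needed)
      return Last.Reg;               // reuse; the re-materialization is removed
    Last.Reg = MaterializeIntoReg;
    Last.Value = Needed;
    Last.Valid = true;
    return MaterializeIntoReg;       // caller emits the re-materialization
  }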
llvm-svn: 83467
registers based on dynamic conditions. For example, X86 EBP/RBP, when used as
frame register has to be spilled to the first fixed object. It should inform
PEI of this so it doesn't get allocated another stack object. Also, it should not
be spilled like other callee-saved registers; rather, its spilling and restoring
are handled by emitPrologue and emitEpilogue. Avoid spilling it twice.
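A sketch of the kind of hook this describes (the name hasReservedSpillSlot matches my
reading of the change, but treat the body as illustrative): X86 answers "yes" for the
frame register and points PEI at the first fixed object instead of letting it allocate
a fresh slot.

  bool X86RegisterInfo::hasReservedSpillSlot(const MachineFunction &MF,
                                             unsigned Reg, int &FrameIdx) const {
    if (Reg == FramePtr && hasFP(MF)) {
      FrameIdx = MF.getFrameInfo()->getObjectIndexBegin();  // first fixed object
      return true;  // emitPrologue/emitEpilogue handle the save and restore
    }
    return false;
  }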
llvm-svn: 75116
DWARF requires frame moves be specified at specific times. If you have a
prologue like this:
__Z3fooi:
Leh_func_begin1:
LBB1_0: ## entry
pushl %ebp
Llabel1:
movl %esp, %ebp
Llabel2:
pushl %esi
Llabel3:
subl $20, %esp
call "L1$pb"
"L1$pb":
popl %esi
The "pushl %ebp" needs a table entry specifying the offset. The "movl %esp,
%ebp" makes %ebp the new stack frame register, so that needs to be specified in
DWARF. And "pushl %esi" saves the callee-saved %esi register, which also needs
to be specified in DWARF.
Before, all of this logic was in one method. This didn't work too well, because
as you can see there are multiple FDE line entries that need to be created.
This fix creates the "MachineMove" objects directly when they're needed, instead
of waiting until the end and losing information.
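A rough sketch of the "record it where it happens" idea, inside emitPrologue (API names
here are from memory of that era of LLVM and are illustrative rather than the exact code
of this patch): right after emitting "pushl %ebp", drop a label and attach the
corresponding frame moves to it.

  MCSymbol *FrameLabel = MMI.getContext().CreateTempSymbol();
  BuildMI(MBB, MBBI, DL, TII.get(X86::PROLOG_LABEL)).addSym(FrameLabel);

  // The push moved the CFA another pointer-size away from %esp.
  MachineLocation SPDst(MachineLocation::VirtualFP);
  MachineLocation SPSrc(MachineLocation::VirtualFP, -8);
  MMI.getFrameMoves().push_back(MachineMove(FrameLabel, SPDst, SPSrc));

  // %ebp itself was saved in that slot, so record where to find it.
  MachineLocation FPDst(MachineLocation::VirtualFP, -8);
  MachineLocation FPSrc(X86::EBP);
  MMI.getFrameMoves().push_back(MachineMove(FrameLabel, FPDst, FPSrc));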
There is some ugliness where we generate code like this:
LBB22_0: ## entry
pushl %ebp
Llabel280:
movl %esp, %ebp
Llabel281:
Llabel284:
pushl %ebp <----------
pushl %ebx
pushl %edi
pushl %esi
Llabel282:
subl $328, %esp
Notice the extra "pushl %ebp". If we generate a "machine move" instruction in
the FDE for that pushl, the linker may get very confused about what value %ebp
should have when exiting the function. I.e., it'll give it the value %esp
instead of the %ebp value from the first "pushl". Not to mention that, in this
case, %ebp isn't modified in the function (that's a separate bug). I put a small
hack in to get it to work. It might be the only solution, but should be
revisited once the above case is fixed.
llvm-svn: 75047
U lib/Target/X86/X86RegisterInfo.cpp
U lib/Target/X86/X86RegisterInfo.h
Temporarily revert. This was causing an infinite loop in the linker on Leopard.
llvm-svn: 74970
prologue like this:
__Z3fooi:
Leh_func_begin1:
LBB1_0: ## entry
pushl %ebp
Llabel1:
movl %esp, %ebp
Llabel2:
pushl %esi
Llabel3:
subl $20, %esp
call "L1$pb"
"L1$pb":
popl %esi
The "pushl %ebp" needs a table entry specifying the offset. The "movl %esp,
%ebp" makes %ebp the new stack frame register, so that needs to be specified in
DWARF. And "pushl %esi" saves the callee-saved %esi register, which also needs
to be specified in DWARF.
Before, all of this logic was in one method. This didn't work too well, because
as you can see there are multiple FDE line entries that need to be created.
This fix creates the "MachineMove" objects directly when they're needed, instead
of waiting until the end and losing information.
llvm-svn: 74952
- Add patterns for h-register extract, which avoids a shift and mask,
and in some cases a temporary register.
- Add address-mode matching for turning (X>>(8-n))&(255<<n), where
n is a valid address-mode scale value, into an h-register extract
and a scaled-offset address.
- Replace X86's MOV32to32_ and related instructions with the new
target-independent COPY_TO_REGCLASS instruction.
On x86-64 there are complicated constraints on h registers, and
CodeGen doesn't currently provide a high-level way to express all of them,
so they are handled with a bunch of special code. This code currently only
supports extracts where the result is used by a zero-extend or a store,
though these are fairly common.
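For illustration (these are my examples, not from the patch), the source-level patterns
this matching targets look like:

  // Bits 8..15 of a 32-bit value: on x86 this can come straight out of an
  // h register such as %ah, e.g. as a single movzbl.
  unsigned high_byte(unsigned x) {
    return (x >> 8) & 0xff;
  }

  // The scaled-address form (x >> (8-n)) & (255 << n) with n = 2: an
  // h-register extract feeding a scaled-offset addressing mode.
  unsigned table_lookup(const unsigned char *table, unsigned x) {
    return table[(x >> 6) & (0xff << 2)];
  }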
These transformations are not always beneficial; since there are only
4 h registers, they sometimes require extra move instructions, and
this sometimes increases register pressure because it can force out
values that would otherwise be in one of those registers. However,
this appears to be relatively uncommon.
llvm-svn: 68962