Commit Graph

2832 Commits

Author SHA1 Message Date
Evan Cheng
c0dc7b6e61 Oops. Debugging code shouldn't have been checked in.
llvm-svn: 44128
2007-11-14 19:08:32 +00:00
Anton Korobeynikov
58298cb9cc Fix PIC jump table codegen on x86-32/linux. In fact, the same fix should be
applied to all targets that use GOT-relative offsets for PIC (Alpha?).

llvm-svn: 44108
2007-11-14 09:18:41 +00:00
Duncan Sands
e6821dd990 Eliminate the recently introduced CCAssignToStackABISizeAlign
in favour of teaching CCAssignToStack that size 0 and/or align
0 means to use the ABI values.  This seems a neater solution.
It is safe since no legal value type has size 0.

llvm-svn: 44107
2007-11-14 08:29:13 +00:00
Evan Cheng
fd33cb316f Clean up the sub-register implementation by moving subReg information back to
MachineOperand auxInfo. The previous clunky implementation used an external map
to track sub-register uses. That worked because the register allocator used
a new virtual register for each spilled use. With interval splitting (coming
soon), we may have multiple uses of the same register, some of which use
different sub-registers than others. It's too fragile to constantly
update the information.

llvm-svn: 44104
2007-11-14 07:59:08 +00:00
Dale Johannesen
70ca3c1f03 Revert previous; these files aren't ready to go in yet.
llvm-svn: 44057
2007-11-13 19:16:02 +00:00
Dale Johannesen
5fd9e7a615 Add parameter to getDwarfRegNum to permit targets
to use different mappings for EH and debug info;
no functional change yet.
Fix warning in X86CodeEmitter.

llvm-svn: 44056
2007-11-13 19:13:01 +00:00
Evan Cheng
994043f515 Fix x86-64 jit: remove reliance on Dwarf numbers.
llvm-svn: 44048
2007-11-13 17:54:34 +00:00
Bill Wendling
934fcd87e7 Unify the CALLSEQ_{START,END} stuff.
llvm-svn: 44045
2007-11-13 09:19:02 +00:00
Bill Wendling
cc75435ebf Unify CALLSEQ_{START,END}. They take 4 parameters: the chain, two stack
adjustment fields, and an optional flag. If there is a "dynamic_stackalloc" in
the code, make sure that it's bracketed by CALLSEQ_START and CALLSEQ_END. If
not, then the stack could be changed while it is being used by another
instruction (like a call).

This can only result in tears...

llvm-svn: 44037
2007-11-13 00:44:25 +00:00
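For illustration only (not from the commit), a minimal C example with hypothetical names of
code whose alloca lowers to a dynamic_stackalloc node; without the bracketing, the stack
pointer could move while a neighbouring call is setting up its outgoing arguments:

    #include <alloca.h>
    #include <string.h>

    void use(char *buf, unsigned n);   /* hypothetical callee */

    void demo(unsigned n) {
      char *buf = alloca(n);           /* dynamic_stackalloc: adjusts the stack pointer at runtime */
      memset(buf, 0, n);
      use(buf, n);                     /* argument setup must not overlap the stack adjustment */
    }
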
Owen Anderson
aba398a5ce Add a flag for indirect branch instructions.
Target maintainers: please check that the instructions for your target are correctly marked.

llvm-svn: 44012
2007-11-12 07:39:39 +00:00
Anton Korobeynikov
8e8473c783 Use TableGen to emit information for dwarf register numbers.
This makes DwarfRegNum accept a list of numbers instead.
Added three different "flavours", but this is only lightly tested on x86-32/linux.
Please check other subtargets if possible.

llvm-svn: 43997
2007-11-11 19:50:10 +00:00
Dale Johannesen
2e9b020e89 Add CCAssignToStackABISizeAlign for convenience in
dealing with types whose size & alignment are
different on different subtargets.  Use it for x86 f80.

llvm-svn: 43988
2007-11-10 22:07:15 +00:00
Arnold Schwaighofer
64ad6fa1fa Update tailcall code to include inline attribute operand for memcpy.
llvm-svn: 43978
2007-11-10 10:48:01 +00:00
Evan Cheng
946afd2f6c Unbreak x86-64 jumptable.
llvm-svn: 43955
2007-11-09 19:11:23 +00:00
Dale Johannesen
eca19e7eca Revert previous rewrite per chris's comments.
llvm-svn: 43950
2007-11-09 18:07:11 +00:00
Evan Cheng
7d8deec92f Much improved pic jumptable codegen:
Then:
        call    "L1$pb"
"L1$pb":
        popl    %eax
		...
LBB1_1: # entry
        imull   $4, %ecx, %ecx
        leal    LJTI1_0-"L1$pb"(%eax), %edx
        addl    LJTI1_0-"L1$pb"(%ecx,%eax), %edx
        jmpl    *%edx

        .align  2
        .set L1_0_set_3,LBB1_3-LJTI1_0
        .set L1_0_set_2,LBB1_2-LJTI1_0
        .set L1_0_set_5,LBB1_5-LJTI1_0
        .set L1_0_set_4,LBB1_4-LJTI1_0
LJTI1_0:
        .long    L1_0_set_3
        .long    L1_0_set_2

Now:
        call    "L1$pb"
"L1$pb":
        popl    %eax
		...
LBB1_1: # entry
        addl    LJTI1_0-"L1$pb"(%eax,%ecx,4), %eax
        jmpl    *%eax

        .align  2
        .set L1_0_set_3,LBB1_3-"L1$pb"
        .set L1_0_set_2,LBB1_2-"L1$pb"
        .set L1_0_set_5,LBB1_5-"L1$pb"
        .set L1_0_set_4,LBB1_4-"L1$pb"
LJTI1_0:
        .long    L1_0_set_3
        .long    L1_0_set_2

llvm-svn: 43924
2007-11-09 01:32:10 +00:00
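For illustration only (not from the commit), a dense C switch of the kind that typically
lowers to a PIC jump table like the one shown above:

    int classify(int x) {
      switch (x) {
      case 0: return 11;
      case 1: return 22;
      case 2: return 33;
      case 3: return 44;
      default: return -1;
      }
    }
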
Dale Johannesen
8a9ec1582b Rewrite Dwarf number handling per review comments.
llvm-svn: 43918
2007-11-09 00:47:10 +00:00
Dale Johannesen
b11aca8a92 Complete conditionalization of Dwarf reg numbers.
Would somebody not on Darwin please make sure this
doesn't break anything.  Exception handling failures
would be the most likely symptom.

llvm-svn: 43844
2007-11-07 21:48:35 +00:00
Dale Johannesen
a863789700 Interchange Dwarf numbers of ESP and EBP on x86 Darwin.
Much improvement in exception handling.

llvm-svn: 43794
2007-11-07 00:25:05 +00:00
Rafael Espindola
ec025c3042 Move the LowerMEMCPY and LowerMEMCPYCall to a common place.
Thanks for the suggestions Bill :-)

llvm-svn: 43742
2007-11-05 23:12:20 +00:00
Evan Cheng
c49995c027 Use movups to spill / restore SSE registers on targets where stack alignment is
less than 16. This is a temporary solution until dynamic stack alignment is
implemented.

llvm-svn: 43703
2007-11-05 07:30:01 +00:00
Duncan Sands
d1bdbd010b Eliminate the remaining uses of getTypeSize. This
should only affect x86 when using long double.  Now
12/16 bytes are output for long double globals (the
exact amount depends on the alignment).  This brings
globals in line with the rest of LLVM: the space
reserved for an object is now always the ABI size.
One tricky point is that only 10 bytes should be
output for long double if it is a field in a packed
struct, which is the reason for the additional
argument to EmitGlobalConstant.

llvm-svn: 43688
2007-11-05 00:04:43 +00:00
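For illustration only (not from the commit), the situation sketched in C: the global is now
emitted with its ABI size (12 or 16 bytes depending on alignment), while a long double field
inside a packed struct still gets exactly 10 bytes:

    long double g = 1.0L;            /* emitted with ABI size: 12 or 16 bytes on x86 */

    struct packed_ld {
      char tag;
      long double value;             /* only 10 bytes emitted inside a packed struct */
    } __attribute__((packed));
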
Chris Lattner
8fac63c8b5 Fix PR1761 by not printing (rip) suffix when in -static mode.
Evan, please review this.

llvm-svn: 43680
2007-11-04 19:23:28 +00:00
Chris Lattner
67cd357fb8 Fix PR1763 by allowing the 'q' constraint to work with 64-bit
regs on x86-64.

llvm-svn: 43669
2007-11-04 06:51:12 +00:00
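For illustration only (not from the commit), inline asm using the 'q' constraint, which with
this fix may be satisfied by a 64-bit general-purpose register when compiling for x86-64:

    unsigned long byteswap(unsigned long x) {
      unsigned long r = x;
      __asm__("bswap %0" : "+q"(r));   /* 'q' can now pick a 64-bit register such as %rax */
      return r;
    }
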
Evan Cheng
bf8e7c6644 Unbreak tailcall opt.
llvm-svn: 43646
2007-11-02 17:45:40 +00:00
Chris Lattner
679e22d547 add a note
llvm-svn: 43642
2007-11-02 17:04:20 +00:00
Evan Cheng
b50cc64eb0 Missing a getNumOperands check.
llvm-svn: 43630
2007-11-02 01:26:22 +00:00
Bill Wendling
df2eaa8a55 Silence accursed warning
llvm-svn: 43609
2007-11-01 08:51:44 +00:00
Rafael Espindola
27a8907a7c Make ARM and X86 LowerMEMCPY identical by moving the isThumb check into getMaxInlineSizeThreshold
and by restructuring the X86 version.

Now I just have to move this to a common place :-)

llvm-svn: 43554
2007-10-31 14:39:58 +00:00
Rafael Espindola
fae98471a9 Make the ARM and X86 memcpy expansions more similar to each other.
Now both subtargets define getMaxInlineSizeThreshold and the expansion uses it.

This should not change generated code.

llvm-svn: 43552
2007-10-31 11:52:06 +00:00
Dale Johannesen
9bc04ae496 Make i64=expand_vector_elt(v2i64) work in 32-bit mode.
llvm-svn: 43535
2007-10-31 00:32:36 +00:00
Dale Johannesen
7167117945 Add missing SSE builtins: CVTPD2PI, CVTPS2PI,
CVTTPD2PI, CVTTPS2PI, CVTPI2PD, CVTPI2PS.

llvm-svn: 43523
2007-10-30 22:15:38 +00:00
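For illustration only (not from the commit), intrinsics that map onto some of the newly added
conversion builtins (the intrinsic-to-instruction mapping follows the usual GCC headers):

    #include <xmmintrin.h>
    #include <emmintrin.h>

    __m64  ps_to_pi(__m128 v)           { return _mm_cvtps_pi32(v); }     /* CVTPS2PI  */
    __m64  pd_to_pi_trunc(__m128d v)    { return _mm_cvttpd_pi32(v); }    /* CVTTPD2PI */
    __m128 pi_to_ps(__m128 a, __m64 b)  { return _mm_cvtpi32_ps(a, b); }  /* CVTPI2PS  */
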
Duncan Sands
f6837e8634 Fix for visibility warnings generated by gcc-4.2.
llvm-svn: 43500
2007-10-30 13:14:37 +00:00
Dale Johannesen
461a0c47f8 Add missing MMX PSUBQ.
llvm-svn: 43488
2007-10-30 01:18:38 +00:00
Evan Cheng
5fe81cf64e Enable the (sext (load x)) -> (sext (truncate (sextload x))) transformation in
more cases. Previously it was restricted to loads with exactly one use. Now the
restriction is loosened by allowing setcc uses to be "extended"
(e.g. setcc x, c, eq -> setcc sext(x), sext(c), eq).

llvm-svn: 43465
2007-10-29 19:58:20 +00:00
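For illustration only (not from the commit), the shape of code this affects, sketched in C:
a narrow load with both a sign-extending use and a comparison use, where the comparison can
now be rewritten on the extended values so a single sextload feeds everything:

    int f(short *p, int y) {
      short x = *p;       /* one load with two uses */
      if (x == 7)         /* setcc use: can become sext(x) == sext(7) */
        return y;
      return x + y;       /* sign-extending use */
    }
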
Evan Cheng
1113931fd8 Avoid doing something dumb like rewriting using a 64-bit iv in 32-bit mode.
llvm-svn: 43446
2007-10-29 07:57:50 +00:00
Chris Lattner
be8379fac5 add a note.
llvm-svn: 43444
2007-10-29 06:19:48 +00:00
Chris Lattner
1503362624 Add support for the x86-64 'q' register modifier, and add support for the
b/h/w/k/q inline asm memory modifiers, which are just ignored.  This fixes
PR1748 and CodeGen/X86/2007-10-28-inlineasm-q-modifier.ll

llvm-svn: 43430
2007-10-29 03:09:07 +00:00
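For illustration only (not from the commit), GCC-style operand modifiers of the kind now
handled: 'k' prints the 32-bit name of a register and 'q' the 64-bit name, while b/h/w/k/q
on memory operands are simply ignored:

    unsigned long zero_upper(unsigned long v) {
      __asm__("movl %k0, %k0" : "+r"(v));   /* %k0 forces the 32-bit register name, e.g. %eax */
      return v;
    }
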
Evan Cheng
053178440a New entry.
llvm-svn: 43420
2007-10-28 04:01:09 +00:00
Anton Korobeynikov
0d3f43480e Fix off-by-one stack offset computations (dwarf information) for callee-saved
registers in the case when the frame pointer has been eliminated. This should fix
miscellaneous random EH-related crashes when code is compiled with -fomit-frame-pointer.
Thanks Duncan for nailing this bug!

llvm-svn: 43381
2007-10-26 09:13:24 +00:00
Evan Cheng
53696b7e9f Loosen up iv reuse to allow reuse of the same stride but a larger type when truncating from the larger type to the smaller type is free.
e.g.
Turns this loop:
LBB1_1: # entry.bb_crit_edge
        xorl    %ecx, %ecx
        xorw    %dx, %dx
        movw    %dx, %si
LBB1_2: # bb
        movl    L_X$non_lazy_ptr, %edi
        movw    %si, (%edi)
        movl    L_Y$non_lazy_ptr, %edi
        movw    %dx, (%edi)
        addw    $4, %dx
        incw    %si
        incl    %ecx
        cmpl    %eax, %ecx
        jne     LBB1_2  # bb

into

LBB1_1: # entry.bb_crit_edge
        xorl    %ecx, %ecx
        xorw    %dx, %dx
LBB1_2: # bb
        movl    L_X$non_lazy_ptr, %esi
        movw    %cx, (%esi)
        movl    L_Y$non_lazy_ptr, %esi
        movw    %dx, (%esi)
        addw    $4, %dx
        incl    %ecx
        cmpl    %eax, %ecx
        jne     LBB1_2  # bb

llvm-svn: 43375
2007-10-26 01:56:11 +00:00
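For illustration only (not from the commit), with hypothetical globals: roughly the C loop
behind the assembly above; the narrow i16 induction variables can reuse the wider i32 counter
because truncating i32 to i16 is free on x86:

    extern short X, Y;

    void fill(int n) {
      int i;
      for (i = 0; i < n; ++i) {
        X = (short)i;          /* stride-1 narrow IV use */
        Y = (short)(4 * i);    /* stride-4 narrow IV use */
      }
    }
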
Dan Gohman
76e104c8ad Fix the folding of multiplication into addresses on x86, which was broken
by the recent {U,S}MUL_LOHI changes.

llvm-svn: 43230
2007-10-22 20:22:24 +00:00
Evan Cheng
ddeab10144 Fix an unfolding bug.
llvm-svn: 43212
2007-10-22 03:03:20 +00:00
Dale Johannesen
2edd0fb69d Allow for copysign having f80 second argument.
Fixes 5550319.

llvm-svn: 43205
2007-10-21 01:07:44 +00:00
Evan Cheng
b56784f9ea Resolve unfold tables ambiguity.
llvm-svn: 43194
2007-10-19 23:50:58 +00:00
Evan Cheng
ded6550885 Local spiller optimization:
Turn a store folding instruction into a load folding instruction. e.g.
     xorl  %edi, %eax
     movl  %eax, -32(%ebp)
     movl  -36(%ebp), %eax
     orl   %eax, -32(%ebp)
=>
     xorl  %edi, %eax
     orl   -36(%ebp), %eax
     mov   %eax, -32(%ebp)
This enables the unfolding optimization for a subsequent instruction which will
also eliminate the newly introduced store instruction.

llvm-svn: 43192
2007-10-19 21:23:22 +00:00
Rafael Espindola
d8d4372845 Add support for byval functions whose arguments are not 32-bit aligned.
To do this it is necessary to add an "always inline" argument to the
memcpy node. For completeness I have also added this argument to memmove
and memset.  I have also added getMem* functions, because the extra
argument makes it cumbersome to use getNode and because I get confused
by it :-)

llvm-svn: 43172
2007-10-19 10:41:11 +00:00
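For illustration only (not from the commit), a byval aggregate whose size and alignment are
not a multiple of 4; lowering the call copies the argument with the memcpy node, which is why
the node needs the new always-inline flag:

    struct tiny { char bytes[3]; };     /* size 3, align 1: not 32-bit aligned */

    void callee(struct tiny t);         /* hypothetical: aggregate passed byval on x86 */

    void caller(struct tiny *p) {
      callee(*p);                       /* argument copy is lowered via the memcpy node */
    }
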
Evan Cheng
0449186690 - Added getOpcodeAfterMemoryUnfold(). It doesn't unfold an instruction; it only returns the opcode the instruction would have after unfolding.
- Fix some copy+paste bugs.

llvm-svn: 43153
2007-10-18 22:40:57 +00:00
Evan Cheng
c852780685 Use SmallVectorImpl instead of SmallVector with hardcoded size in MRegister public interface.
llvm-svn: 43150
2007-10-18 21:29:24 +00:00
Christopher Lamb
a26b82ea94 Fix a typo
llvm-svn: 43144
2007-10-18 19:28:55 +00:00