Commit Graph
Mirror of https://github.com/RPCS3/llvm-mirror.git, synced 2024-11-25 20:23:11 +01:00

2465 Commits

Bill Wendling
1087888176 Unbreak MMX arithmetic. It was barfing when trying to do v8i8 arithmetic.
llvm-svn: 35392
2007-03-28 00:57:11 +00:00
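
For context, a minimal C sketch of the kind of v8i8 arithmetic (eight bytes packed into an __m64) this commit fixes. The intrinsics are the standard mmintrin.h ones; the scaffolding around them is illustrative only:

#include <mmintrin.h>
#include <stdio.h>

int main(void) {
  /* Two v8i8 values: eight signed bytes packed into an __m64. */
  __m64 a = _mm_setr_pi8(1, 2, 3, 4, 5, 6, 7, 8);
  __m64 b = _mm_setr_pi8(8, 7, 6, 5, 4, 3, 2, 1);

  /* paddb: lane-wise 8-bit add, the v8i8 arithmetic the selector
     previously choked on. */
  __m64 sum = _mm_add_pi8(a, b);

  /* Copy the result out, then leave MMX state (emms) before any
     FP or varargs code runs. */
  union { __m64 v; char b[8]; } out;
  out.v = sum;
  _mm_empty();

  for (int i = 0; i < 8; i++)
    printf("%d ", out.b[i]);   /* prints: 9 9 9 9 9 9 9 9 */
  printf("\n");
  return 0;
}
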
Bill Wendling
6b555c80c0 Add the "unpack low packed data" instructions. This should be the last of
the MMX instructions that are needed...

llvm-svn: 35389
2007-03-27 21:20:36 +00:00
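
A short note on what the new "unpack low" forms compute, using the matching mmintrin.h intrinsic (the wrapper name is made up for illustration):

#include <mmintrin.h>

/* punpcklbw: interleave the low four bytes of a and b.
   For a = a0..a7 and b = b0..b7 the result is
   a0 b0 a1 b1 a2 b2 a3 b3. */
__m64 interleave_low_bytes(__m64 a, __m64 b) {
  return _mm_unpacklo_pi8(a, b);
}
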
Bill Wendling
d43819da2f Fix so that pandn is emitted instead of an xor/and combo. Add integer
comparison operators.

llvm-svn: 35385
2007-03-27 20:22:40 +00:00
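
For reference, pandn computes (~a) & b in one instruction; before this fix the same dataflow was lowered to an explicit xor-with-all-ones plus and. A small sketch using standard mmintrin.h intrinsics (function names are illustrative):

#include <mmintrin.h>

/* Single-instruction form: pandn computes (~a) & b. */
__m64 andn(__m64 a, __m64 b) {
  return _mm_andnot_si64(a, b);
}

/* The two-instruction combo the old lowering produced. */
__m64 andn_expanded(__m64 a, __m64 b) {
  __m64 ones = _mm_set1_pi32(-1);                /* all-ones mask */
  return _mm_and_si64(_mm_xor_si64(a, ones), b);
}
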
Bill Wendling
8065cc3173 Promote to v1i64 type...
llvm-svn: 35353
2007-03-26 08:03:33 +00:00
Bill Wendling
3c4e130b77 Updated.
llvm-svn: 35352
2007-03-26 07:55:58 +00:00
Bill Wendling
a42484728c Add support for the v1i64 type. This makes better code for this:
#include <mmintrin.h>

extern __m64 C;

void baz(__v2si *A, __v2si *B)
{
  *A = C;
  _mm_empty();
}

We get this:

_baz:
        call "L1$pb"
"L1$pb":
        popl %eax
        movl L_C$non_lazy_ptr-"L1$pb"(%eax), %eax
        movq (%eax), %mm0
        movl 4(%esp), %eax
        movq %mm0, (%eax)
        emms
        ret

GCC gives us this:

_baz:
        pushl   %ebx
        call    L3
"L00000000001$pb":
L3:
        popl    %ebx
        subl    $8, %esp
        movl    L_C$non_lazy_ptr-"L00000000001$pb"(%ebx), %eax
        movl    (%eax), %edx
        movl    4(%eax), %ecx
        movl    16(%esp), %eax
        movl    %edx, (%eax)
        movl    %ecx, 4(%eax)
        emms
        addl    $8, %esp
        popl    %ebx
        ret

llvm-svn: 35351
2007-03-26 07:53:08 +00:00
Chris Lattner
b19069959d switch TargetLowering::getConstraintType to take the entire constraint,
not just the first letter.  No functionality change.

llvm-svn: 35322
2007-03-25 02:14:49 +00:00
Chris Lattner
18c3c6a01d Allow the b/h/w/k constraints to be applied to values that have multiple alternatives, and end up not being registers.
llvm-svn: 35320
2007-03-25 02:01:03 +00:00
Chris Lattner
104e73382c enforce the proper range for the i386 N constraint.
llvm-svn: 35319
2007-03-25 01:57:35 +00:00
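
The i386 'N' constraint accepts an unsigned 8-bit immediate (0..255), the range usable by the immediate forms of in/out. A hedged inline-asm sketch; the port number 0x80 is only an example value:

static inline void outb_imm(unsigned char value) {
  /* "N" requires a constant in 0..255; with this change,
     out-of-range constants are rejected rather than accepted. */
  __asm__ volatile ("outb %0, %1" : : "a"(value), "N"(0x80));
}
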
Chris Lattner
92e2ee7b2d Fix test/CodeGen/X86/2007-03-24-InlineAsmPModifier.ll
llvm-svn: 35318
2007-03-25 01:44:57 +00:00
Anton Korobeynikov
63bff8af0c Autodetect MMX & SSE stuff for AMD processors
llvm-svn: 35292
2007-03-23 23:46:48 +00:00
Bill Wendling
124f2c8706 PR1260:
Add final support to get the QT example to compile.

llvm-svn: 35290
2007-03-23 22:35:46 +00:00
Bill Wendling
e6a9c6dfe6 We generate a shufflevector instruction, so we don't need the builtin
intrinsic.

llvm-svn: 35269
2007-03-22 20:29:26 +00:00
Bill Wendling
1bcad4c1cd Support added for shifts and unpacking MMX instructions.
llvm-svn: 35266
2007-03-22 18:42:45 +00:00
Dale Johannesen
44c0a5d545 repair x86 performance, dejagnu problems from previous change
llvm-svn: 35245
2007-03-21 21:51:52 +00:00
Dale Johannesen
fb7b59f5dd add generation of unnecessary push/pop around calls
llvm-svn: 35241
2007-03-21 21:16:39 +00:00
Evan Cheng
00a5cbf9e7 Mark re-materializable instructions.
llvm-svn: 35230
2007-03-21 00:16:56 +00:00
Evan Cheng
41f4f032ee Added MRegisterInfo hook to re-materialize an instruction.
llvm-svn: 35205
2007-03-20 08:09:38 +00:00
Chris Lattner
b9cc0ade43 Two changes:
1) codegen a shift of a register as a shift, not an LEA.
2) teach the RA to convert a shift to an LEA instruction if it wants something
   in three-address form.

This gives us asm diffs like:

-       leal (,%eax,4), %eax
+       shll $2, %eax

which is faster on some processors and smaller on all of them.

and, more interestingly:

-       movl 24(%esi), %eax
-       leal (,%eax,4), %edi
+       movl 24(%esi), %edi
+       shll $2, %edi

Without #2, #1 was a significant pessimization in some cases.

This implements CodeGen/X86/shift-codegen.ll

llvm-svn: 35204
2007-03-20 06:08:29 +00:00
Chris Lattner
59fe2be1c4 fix a warning
llvm-svn: 35152
2007-03-19 00:39:32 +00:00
Devang Patel
2dabb16eac Support 'I' inline asm constraint.
llvm-svn: 35129
2007-03-17 00:13:28 +00:00
Bill Wendling
8ced23ee5a And now support for MMX logical operations.
llvm-svn: 35125
2007-03-16 09:44:46 +00:00
Bill Wendling
feaff80149 Multiplication support for MMX.
llvm-svn: 35118
2007-03-15 21:24:36 +00:00
Evan Cheng
00edaa08b5 Under X86-64 large code model, do not emit 32-bit pc relative calls.
llvm-svn: 35108
2007-03-14 22:11:11 +00:00
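
The reason, for context: a 32-bit pc-relative call only reaches targets within +/-2 GB, which the large code model does not guarantee, so the full 64-bit address must be materialized first. A sketch of the two call sequences (shown in comments; the register choice is illustrative):

extern void foo(void);

void call_foo(void) {
  /* Small code model: callee assumed within +/-2 GB:
         call foo              # E8 rel32
     Large code model: no reachability assumption, so:
         movabsq $foo, %rax
         call *%rax                                    */
  foo();
}
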
Evan Cheng
fc80b5b712 Notes about codegen issues.
llvm-svn: 35107
2007-03-14 21:03:53 +00:00
Evan Cheng
50a0af3b57 Clean up.
llvm-svn: 35105
2007-03-14 20:20:19 +00:00
Evan Cheng
2617c8dd3a Oops.
llvm-svn: 35104
2007-03-14 19:44:58 +00:00
Evan Cheng
371b8e8fa9 X86-64 JIT is in large code model. Need stubs for direct calls.
llvm-svn: 35097
2007-03-14 10:51:55 +00:00
Evan Cheng
1092e481ce x86-64 JIT stub codegen.
llvm-svn: 35096
2007-03-14 10:48:08 +00:00
Evan Cheng
15de6714a4 Preliminary support for X86-64 JIT stub codegen.
llvm-svn: 35095
2007-03-14 10:44:30 +00:00
Evan Cheng
0eeb8b59eb More flexible TargetLowering LSR hooks for testing whether an immediate is
a legal target address immediate or scale.

llvm-svn: 35073
2007-03-12 23:28:50 +00:00
Evan Cheng
4224fa3617 Stupid bug: SSE2 supports v2i64 add / sub.
llvm-svn: 35070
2007-03-12 22:58:52 +00:00
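
For reference, the corresponding emmintrin.h intrinsics map straight to these instructions (the wrapper names are illustrative):

#include <emmintrin.h>

/* SSE2 paddq/psubq operate on the two packed 64-bit lanes of a
   v2i64, so no scalarization is needed. */
__m128i add_v2i64(__m128i a, __m128i b) { return _mm_add_epi64(a, b); }
__m128i sub_v2i64(__m128i a, __m128i b) { return _mm_sub_epi64(a, b); }
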
Bill Wendling
236cfc4344 Adding more arithmetic operators to MMX. This is an almost exact copy of
the addition. Please let me know if you have suggestions.

llvm-svn: 35055
2007-03-10 09:57:05 +00:00
Bill Wendling
5fef3fd7e7 Added "padd*" support for MMX. Added MMX move stuff to X86InstrInfo so that
moves, loads, etc. are recognized.

llvm-svn: 35031
2007-03-08 22:09:11 +00:00
Evan Cheng
7d528d089c Putting more constants which do not contain relocations into .literal{4|8|16}
llvm-svn: 35026
2007-03-08 08:31:54 +00:00
Evan Cheng
14b63d89c1 Put constant data to .const, .const_data, .literal{4|8|16} sections.
llvm-svn: 35016
2007-03-08 01:07:07 +00:00
Bill Wendling
8f49ba1000 Remove useless pattern fragments.
llvm-svn: 35009
2007-03-07 18:23:09 +00:00
Anton Korobeynikov
85d6c1ebad Refactoring of formal parameter flags. Enable proper use of
zext/sext/aext stuff.

llvm-svn: 35008
2007-03-07 16:25:09 +00:00
Bill Wendling
3c201ddd02 Properly support v8i8 and v4i16 types. It now converts them to v2i32 for
loads and stores.

llvm-svn: 35002
2007-03-07 05:43:18 +00:00
Anton Korobeynikov
090c2d50ea Fix DWARF debugging information on x86/Linux and (hopefully)
Mingw32/Cygwin targets. This fixes PR978

llvm-svn: 35000
2007-03-07 02:47:57 +00:00
Bill Wendling
a02d43fbbd Add LOAD/STORE support for MMX.
llvm-svn: 34978
2007-03-06 18:53:42 +00:00
Anton Korobeynikov
6da6c8c48b Use the new SDIselParamAttr enumeration. This removes "magic" constants
from formal attributes' flags processing.

llvm-svn: 34963
2007-03-06 08:12:33 +00:00
Bill Wendling
c52174dee3 Add the emms intrinsic for MMX support.
llvm-svn: 34938
2007-03-05 23:09:45 +00:00
Chris Lattner
6d7701714e add missing braces
llvm-svn: 34905
2007-03-04 06:13:52 +00:00
Evan Cheng
2fb461c1b5 X86-64 VACOPY needs custom expansion. va_list is a struct { i32, i32, i8*, i8* }.
llvm-svn: 34857
2007-03-02 23:16:35 +00:00
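
For context, this layout means va_copy is a 24-byte structure copy, not a pointer copy, which is why the default VACOPY expansion is wrong here. A sketch of the SysV x86-64 layout named in the commit (field names follow the ABI document; the helper is illustrative):

#include <string.h>

/* SysV x86-64 va_list: { i32, i32, i8*, i8* } */
typedef struct {
  unsigned int gp_offset;      /* next unused GP register slot */
  unsigned int fp_offset;      /* next unused FP register slot */
  void *overflow_arg_area;     /* arguments passed on the stack */
  void *reg_save_area;         /* spilled register arguments */
} sysv_va_list;

void copy_valist(sysv_va_list *dst, const sysv_va_list *src) {
  memcpy(dst, src, sizeof *src);  /* all four fields, 24 bytes */
}
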
Anton Korobeynikov
7cec92bcd2 Simplify things
llvm-svn: 34849
2007-03-02 21:50:27 +00:00
Chris Lattner
55dcf58453 argument lowering should copy from the vreg shadows of live-in arguments
passed in registers, not directly from the pregs themselves.

llvm-svn: 34838
2007-03-02 05:12:29 +00:00
Chris Lattner
e29ef5d9cb add a note
llvm-svn: 34837
2007-03-02 05:04:52 +00:00
Anton Korobeynikov
eaf27d276a Ensure that a fastcall'ed function is correctly mangled and the stack is
properly aligned

llvm-svn: 34788
2007-03-01 16:29:22 +00:00
Chris Lattner
bcc44762bc remove dead option
llvm-svn: 34754
2007-02-28 18:39:53 +00:00
Chris Lattner
d8c7e8999e bugfix: fastcall does not require the first two params to be marked 'inreg';
they always get registers.

llvm-svn: 34748
2007-02-28 18:35:11 +00:00
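
A small example of the convention being described: with GCC-style attributes on x86-32, the first two integer-sized parameters always arrive in ECX and EDX without any explicit 'inreg' marking (the function itself is illustrative):

int __attribute__((fastcall)) sum2(int a /* ECX */, int b /* EDX */) {
  return a + b;
}
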
Chris Lattner
a66d550298 use high-level functions in CCState
llvm-svn: 34739
2007-02-28 07:09:55 +00:00
Chris Lattner
3663b6e73a make use of helper functions in CCState for analyzing formals and calls.
llvm-svn: 34737
2007-02-28 07:00:42 +00:00
Chris Lattner
3762b44a0c switch LowerFastCCCallTo over to using the new fastcall description.
llvm-svn: 34734
2007-02-28 06:26:33 +00:00
Chris Lattner
a8dd712470 switch LowerFastCCArguments over to using the autogenerated Fastcall description.
llvm-svn: 34733
2007-02-28 06:21:19 +00:00
Chris Lattner
9a1f1c41b0 add new CC_X86_32_FastCall calling conv, which describes fastcall on win32.
Factor out a CC_X86_32_Common convention, which is the part shared between
ccc, stdcall and fastcall

llvm-svn: 34732
2007-02-28 06:20:01 +00:00
Chris Lattner
3b16744840 rearrange code
llvm-svn: 34731
2007-02-28 06:10:12 +00:00
Chris Lattner
023751c20b remove fastcc (not fastcall) support
llvm-svn: 34730
2007-02-28 06:05:16 +00:00
Chris Lattner
012066f78b switch LowerCCCArguments over to using autogenerated CC.
llvm-svn: 34729
2007-02-28 05:46:49 +00:00
Chris Lattner
6424f8e245 simplify sret handling
llvm-svn: 34728
2007-02-28 05:39:26 +00:00
Chris Lattner
76147834d6 switch LowerCCCCallTo over to using an autogenerated callingconv
llvm-svn: 34727
2007-02-28 05:31:48 +00:00
Chris Lattner
d663281088 rename stuff
llvm-svn: 34726
2007-02-28 05:30:29 +00:00
Chris Lattner
eef57fed6e switch return value passing and the x86-64 calling convention information
over to being autogenerated from the X86CallingConv.td file.

llvm-svn: 34722
2007-02-28 04:55:35 +00:00
Chris Lattner
3eb95551d7 make subtarget references work.
llvm-svn: 34721
2007-02-28 04:51:41 +00:00
Evan Cheng
116f97f2c7 PEI now passes a RegScavenger ptr to eliminateFrameIndex.
llvm-svn: 34707
2007-02-28 00:21:17 +00:00
Chris Lattner
da909a2df7 reenable generation of CC info
llvm-svn: 34699
2007-02-27 22:12:19 +00:00
Evan Cheng
b314459812 Back out previous commit temporarily.
llvm-svn: 34694
2007-02-27 21:47:22 +00:00
Chris Lattner
4d29a90170 build cc info
llvm-svn: 34684
2007-02-27 20:44:31 +00:00
Chris Lattner
da49dee51d a note
llvm-svn: 34670
2007-02-27 17:21:09 +00:00
Chris Lattner
90c768b913 Add calling convention info
llvm-svn: 34661
2007-02-27 06:59:52 +00:00
Chris Lattner
9f0e5d8b03 move target independent calling convention stuff to TargetCallingConv.td
llvm-svn: 34659
2007-02-27 05:57:32 +00:00
Chris Lattner
2b737abea1 fill in some holes
llvm-svn: 34658
2007-02-27 05:51:05 +00:00
Chris Lattner
9117648533 switch x86-64 return value lowering over to using same mechanism as argument
lowering uses.

llvm-svn: 34657
2007-02-27 05:28:59 +00:00
Chris Lattner
11a1c2113c Minor refactoring of CC Lowering interfaces
llvm-svn: 34656
2007-02-27 05:13:54 +00:00
Chris Lattner
e34136f6d5 move CC Lowering stuff to its own public interface
llvm-svn: 34655
2007-02-27 04:43:02 +00:00
Chris Lattner
cac44e283d refactor x86-64 argument lowering yet again, this time eliminating templates,
'clients', etc, and adding CCValAssign instead.

llvm-svn: 34654
2007-02-27 04:18:15 +00:00
Chris Lattner
656996aab8 fix attribution
llvm-svn: 34637
2007-02-26 18:56:07 +00:00
Chris Lattner
948965f809 Add a description of the X86-64 calling convention and the return
conventions.  This doesn't do anything yet, but may in the future.

llvm-svn: 34636
2007-02-26 18:17:14 +00:00
Chris Lattner
7165ee9b6b switch to smallvector
llvm-svn: 34633
2007-02-26 07:59:53 +00:00
Chris Lattner
3fe1132dcd initial hack at splitting the x86-64 calling convention info out from the
mechanics that process it.  I'm still not happy with this, but it's a step
in the right direction.

llvm-svn: 34631
2007-02-26 07:50:02 +00:00
Chris Lattner
d0c941c89e the truncate must always be done; it's only the assert that is conditional.
llvm-svn: 34628
2007-02-26 05:21:05 +00:00
Chris Lattner
decf97fae2 add an accessor.
llvm-svn: 34625
2007-02-26 04:01:25 +00:00
Chris Lattner
2e7125dc74 in X86-64 CCC, i8/i16 arguments are already properly zext/sext'd on input.
Capture this so that downstream zext/sext's are optimized out.  This
compiles:
  int test(short X) { return (int)X; }

to:

_test:
        movl %edi, %eax
        ret

instead of:

_test:
        movswl %di, %eax
        ret


GCC produces this bizarre code:

_test:
        movw    %di, -12(%rsp)
        movswl  -12(%rsp),%eax
        ret

llvm-svn: 34623
2007-02-26 03:18:56 +00:00
Chris Lattner
ad14e21b97 Fix an X86-64 ABI bug. We now compile:
void foo(short);
void bar(unsigned short A) {
  foo(A);
}

into:

_bar:
        subq $8, %rsp
        movswl %di, %edi
        call _foo
        addq $8, %rsp
        ret

instead of:

_bar:
        subq $8, %rsp
        call _foo
        addq $8, %rsp
        ret

Testcase here: test/CodeGen/X86/x86-64-shortint.ll

llvm-svn: 34615
2007-02-25 23:10:46 +00:00
Chris Lattner
15c167cc61 fix CodeGen/X86/2007-02-25-FastCCStack.ll, a regression from my patch last
night:  fastcc returns should only go in XMM0 if we have SSE2 or above.

llvm-svn: 34613
2007-02-25 22:23:46 +00:00
Chris Lattner
65ba08d627 fastcc functions that return double values now return them in xmm0 on x86-32.
This implements CodeGen/X86/fp-stack-ret.ll:test[23]

llvm-svn: 34592
2007-02-25 09:31:16 +00:00
Chris Lattner
e4ba88824d allow vectors to be passed to stdcall/fastcall functions
llvm-svn: 34590
2007-02-25 09:14:25 +00:00
Chris Lattner
fac0b30da0 move LowerRET into the 'Return Value Calling Convention Implementation'
section of the file.

llvm-svn: 34589
2007-02-25 09:12:39 +00:00
Chris Lattner
65d915a3b6 make all Lower*CallTo implementations use LowerCallResult to handle their
result value stuff.  This eliminates a bunch of duplicated code and now
GetRetValueLocs is the sole place that decides where a value is returned.

llvm-svn: 34588
2007-02-25 09:10:05 +00:00
Chris Lattner
423224a7b4 pass the calling convention into Lower*CallTo, instead of using ad-hoc flags.
llvm-svn: 34587
2007-02-25 09:06:15 +00:00
Chris Lattner
8fa75c3ae8 factor a bunch of code out of LowerCCCCallTo into a new LowerCallResult
function.  This function now uses GetRetValueLocs to determine *where*
the result values are located and concerns itself with *how* to pull the
values out.

llvm-svn: 34586
2007-02-25 08:59:22 +00:00
Chris Lattner
3bfbc23ccd move some code around, pass in calling conv, even though it is unused
llvm-svn: 34585
2007-02-25 08:29:00 +00:00
Chris Lattner
f119813ff4 simplify result value lowering by splitting the selection of *where* to return
registers out from the logic of *how* to return them.

This changes X86-64 to mark EAX live out when returning a 32-bit value,
where before it marked RAX liveout.

llvm-svn: 34582
2007-02-25 08:15:11 +00:00
Chris Lattner
bcce79717b make void-return not a special case
llvm-svn: 34579
2007-02-25 07:18:38 +00:00
Chris Lattner
d00fcb3277 eliminate a bunch more temporary vectors from X86 lowering.
llvm-svn: 34578
2007-02-25 07:10:00 +00:00
Chris Lattner
f7eeef816d eliminate temporary vectors created during X86 lowering.
llvm-svn: 34577
2007-02-25 06:40:16 +00:00
Chris Lattner
6f25082e67 remove std::vector's in RET lowering.
llvm-svn: 34576
2007-02-25 06:21:57 +00:00
Evan Cheng
3ddb5d1018 80 col. violation.
llvm-svn: 34520
2007-02-23 03:03:16 +00:00
Anton Korobeynikov
b7350e191e External weak linkage is supported by recent binutils on mingw32.
llvm-svn: 34519
2007-02-23 01:58:50 +00:00
Evan Cheng
da51cf986a By default, spills kill the register being stored.
llvm-svn: 34515
2007-02-23 01:10:04 +00:00