mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-28 06:22:51 +01:00
Commit Graph

75 Commits

Author SHA1 Message Date
Mon P Wang
98c2e01394 Removed the pinsrd and pinsrq intrinsics because the code generator does not
support them; they map to insertelement

llvm-svn: 57564
2008-10-15 06:27:16 +00:00
Dale Johannesen
03d0a7d70c Fix SSE4.1 roundss, roundsd. While the instructions have
the same pattern as roundpd/roundps, the Intel compiler 
builtins do not:  rounds* has an extra operand.  Fixes
gcc.target/i386/sse4_1-rounds[sd]-[1234].c

llvm-svn: 57370
2008-10-10 23:51:03 +00:00
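The extra operand is visible at the C level. A minimal sketch, assuming the
standard SSE4.1 intrinsic names _mm_round_ps and _mm_round_ss (these names are
not from the commit itself):

    #include <smmintrin.h>   /* SSE4.1; build with -msse4.1 */

    /* roundps: one source vector plus a rounding-mode immediate. */
    __m128 round_all(__m128 a) {
        return _mm_round_ps(a, _MM_FROUND_TO_NEAREST_INT | _MM_FROUND_NO_EXC);
    }

    /* roundss: the extra operand -- the upper elements come from the first
       source, and only the low element of the second source is rounded. */
    __m128 round_low(__m128 a, __m128 b) {
        return _mm_round_ss(a, b, _MM_FROUND_TO_NEAREST_INT | _MM_FROUND_NO_EXC);
    }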
Bill Wendling
1c97db4090 "The original bug was a complaint that _mm_srli_si128 mis-compiled when passed
a constant vector ("{0x123, 0x456}" syntax).  The fix is to simplify the
_mm_srli_si128 macro, and  move the "* 8" from the macro into the compiler
back-end.  I can't change the existing __builtins because so many people are
using them :-(."
Patch by Stuart Hastings!

llvm-svn: 56944
2008-10-02 05:56:52 +00:00
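For reference, the usage pattern from the report looks roughly like this
(a minimal sketch; _mm_set_epi64x is merely a convenient way to spell the
constant vector and is not taken from the commit):

    #include <emmintrin.h>   /* SSE2 */

    __m128i shift_const(void) {
        /* the "{0x123, 0x456}" constant vector from the report */
        const __m128i v = _mm_set_epi64x(0x456, 0x123);
        /* the count is in bytes; the "* 8" scaling the commit mentions
           moved from the header macro into the compiler back-end */
        return _mm_srli_si128(v, 4);
    }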
Nate Begeman
af01bfff99 SSE codegen for vsetcc nodes
llvm-svn: 53719
2008-07-17 16:51:19 +00:00
Evan Cheng
4e7b7b21a2 Horizontal-add instructions are not commutative.
llvm-svn: 52363
2008-06-16 21:16:24 +00:00
Evan Cheng
d27948e716 - Add "Commutative" property to intrinsics. This allows tblgen to generate the commuted variants for dagisel matching code.
- Mark lots of X86 intrinsics as "Commutative" to allow load folding.

llvm-svn: 52353
2008-06-16 20:29:38 +00:00
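At the C level the payoff is load folding: for a commutative operation the
operands can be swapped so a memory operand lands in the position the
instruction can fold. A minimal sketch, using pmulhw as an example of a
commutative operation (the specific intrinsic is illustrative, not taken
from the commit):

    #include <emmintrin.h>   /* SSE2 */

    __m128i mulhi_from_mem(const __m128i *p, __m128i x) {
        /* because the multiply is commutative, the compiler may commute
           the operands and fold the load into the instruction */
        return _mm_mulhi_epi16(*p, x);
    }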
Evan Cheng
e5e0b4660d Eliminate x86.sse2.punpckh.qdq and x86.sse2.punpckl.qdq.
llvm-svn: 51533
2008-05-24 02:56:30 +00:00
Evan Cheng
564238c841 Eliminate x86.sse2.movs.d, x86.sse2.shuf.pd, x86.sse2.unpckh.pd, and x86.sse2.unpckl.pd intrinsics. These will be lowered into shuffles.
llvm-svn: 51531
2008-05-24 02:14:05 +00:00
Evan Cheng
47bd4b07a8 Remove x86.sse2.loadh.pd and x86.sse2.loadl.pd. These will be lowered into load and shuffle instructions.
llvm-svn: 51521
2008-05-24 00:07:06 +00:00
Evan Cheng
c1c2adbfc6 Add separate intrinsics for MMX / SSE shifts with i32 integer operands. This allows us to simplify the horribly complicated matching code.
llvm-svn: 50601
2008-05-03 00:52:09 +00:00
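The two count forms the commit separates are both exposed in C. A minimal
sketch, assuming the usual SSE2 intrinsic names (not taken from the commit):

    #include <emmintrin.h>   /* SSE2 */

    /* count supplied as a plain integer */
    __m128i shift_by_imm(__m128i v) {
        return _mm_slli_epi32(v, 3);
    }

    /* count supplied in a vector register */
    __m128i shift_by_vec(__m128i v, __m128i count) {
        return _mm_sll_epi32(v, count);
    }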
Evan Cheng
4ae9fee64c Undo 48570. Correctly match MMX shift instructions with an immediate operand.
llvm-svn: 48627
2008-03-21 00:40:09 +00:00
Evan Cheng
6f729b2820 Add intrinsics to match MMX shift builtins with an immediate operand.
llvm-svn: 48569
2008-03-19 23:38:52 +00:00
Nate Begeman
f50ef51ded __builtin_ia32_movntdqa reads memory
llvm-svn: 48431
2008-03-16 21:15:47 +00:00
Dale Johannesen
bf7691e2e8 Missed one.
llvm-svn: 46733
2008-02-05 01:12:10 +00:00
Dale Johannesen
9b7f4064e3 Do not unconditionally redefine vec_ext_v16qi and
vec_ext_v4si builtins. This is a hack; they should
be defined here, then resolved in the X86 BE.
However, there is enough other stuff missing in the
X86 BE for SSE41 that this will do for now.

llvm-svn: 46727
2008-02-04 23:27:29 +00:00
Nate Begeman
ead8dfeef2 SSE 4.1 Intrinsics and detection
llvm-svn: 46681
2008-02-03 07:18:54 +00:00
Evan Cheng
f91cfb435f Fix sse2.psrl.w and sse2.psrl.q definitions.
llvm-svn: 45772
2008-01-09 02:16:44 +00:00
Chris Lattner
7a9b0bf0eb remove attribution from a variety of miscellaneous files.
llvm-svn: 45425
2007-12-29 22:59:10 +00:00
Evan Cheng
ec3c87a7ef Add a few more missing GCC builtins.
llvm-svn: 45278
2007-12-21 01:30:39 +00:00
Evan Cheng
efdab94d4a Type specification didn't match gcc's.
llvm-svn: 45260
2007-12-20 09:35:28 +00:00
Evan Cheng
8b4947fe98 Remove int_x86_sse2_movl_dq. It's replaced with a string compare.
llvm-svn: 45140
2007-12-18 01:04:25 +00:00
Evan Cheng
21f2afd4df These have matching builtins in GCC 4.2.
llvm-svn: 45139
2007-12-18 00:52:20 +00:00
Evan Cheng
d9df071ead Bring back int_x86_sse2_movl_dq intrinsic for backward compatibility. Make sure
it's auto-upgraded to a shufflevector instruction.

llvm-svn: 45131
2007-12-17 22:33:23 +00:00
Evan Cheng
063e019ff6 __builtin_ia32_movqv4si is now expanded to a shuffle.
llvm-svn: 45057
2007-12-15 02:54:12 +00:00
Anders Carlsson
900a684ae7 All MMX shift instructions took a <2 x i32> vector as the shift amount parameter. Change this to be <1 x i64> instead, which matches the assembler instruction.
llvm-svn: 45027
2007-12-14 06:38:54 +00:00
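At the C level this matches the MMX intrinsics, which take the variable shift
count as an __m64 (a single 64-bit lane), corresponding to <1 x i64>. A
minimal sketch, using _mm_sll_pi32 as an illustrative example (the intrinsic
name is not from the commit):

    #include <mmintrin.h>   /* MMX; build with -mmmx */

    __m64 shift_mmx(__m64 a, __m64 count) {
        return _mm_sll_pi32(a, count);
    }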
Dale Johannesen
7167117945 Add missing SSE builtins: CVTPD2PI, CVTPS2PI,
CVTTPD2PI, CVTTPS2PI, CVTPI2PD, CVTPI2PS.

llvm-svn: 43523
2007-10-30 22:15:38 +00:00
Dale Johannesen
c4bfe4a99e Fix argument types for PSLLQ, PSRLQ.
llvm-svn: 43490
2007-10-30 01:44:33 +00:00
Dan Gohman
230fc11cca There is no {rsqrt,rcp}{p,s}d.
llvm-svn: 42190
2007-09-21 15:24:00 +00:00
Bill Wendling
55c3dc2409 Adding SSSE3 intrinsics.
llvm-svn: 40982
2007-08-10 06:22:27 +00:00
Bill Wendling
97342a9b0c Add missing SSE builtins:
    __builtin_ia32_cvtss2si64
    __builtin_ia32_cvttss2si64
    __builtin_ia32_cvtsi642ss
    __builtin_ia32_cvtsd2si64
    __builtin_ia32_cvttsd2si64
    __builtin_ia32_cvtsi642sd

llvm-svn: 40411
2007-07-23 03:07:27 +00:00
Chris Lattner
db7986458d add missing MMX intrinsic
llvm-svn: 37099
2007-05-16 06:03:49 +00:00
Bill Wendling
498c102df6 Add the final MMX instructions. Correct a few wrong patterns.
llvm-svn: 36405
2007-04-24 21:18:37 +00:00
Bill Wendling
3b1189afbf Add support for our first SSSE3 instruction "pmulhrsw".
llvm-svn: 35869
2007-04-10 22:10:25 +00:00
Bill Wendling
a4aa65bc38 Adding more MMX instructions.
llvm-svn: 35638
2007-04-03 23:48:32 +00:00
Bill Wendling
ca2124e5a9 Add FEMMS and ADDQ. Renamed MMX recipes to prepend MMX_ to them.
llvm-svn: 35616
2007-04-03 06:00:37 +00:00
Bill Wendling
f9c5de3792 Add support for integer comparison builtins.
llvm-svn: 35384
2007-03-27 20:21:31 +00:00
Bill Wendling
ad01036d5c This is dead. DEAD I tells you!!
llvm-svn: 35291
2007-03-23 22:42:04 +00:00
Bill Wendling
124f2c8706 PR1260:
Add final support to get the QT example to compile.

llvm-svn: 35290
2007-03-23 22:35:46 +00:00
Bill Wendling
e6a9c6dfe6 We generate a shufflevector instruction, so we don't need the builtin
intrinsic.

llvm-svn: 35269
2007-03-22 20:29:26 +00:00
Bill Wendling
1bcad4c1cd Added support for MMX shift and unpack instructions.
llvm-svn: 35266
2007-03-22 18:42:45 +00:00
Bill Wendling
feaff80149 Multiplication support for MMX.
llvm-svn: 35118
2007-03-15 21:24:36 +00:00
Bill Wendling
236cfc4344 Adding more arithmetic operators to MMX. This is an almost exact copy of
the addition. Please let me know if you have suggestions.

llvm-svn: 35055
2007-03-10 09:57:05 +00:00
Bill Wendling
5fef3fd7e7 Added "padd*" support for MMX. Added MMX move stuff to X86InstrInfo so that
moves, loads, etc. are recognized.

llvm-svn: 35031
2007-03-08 22:09:11 +00:00
Bill Wendling
c52174dee3 Add the emms intrinsic for MMX support.
llvm-svn: 34938
2007-03-05 23:09:45 +00:00
Reid Spencer
c8ac2ee78c Convert the intrinsic function definitions to use llvm_i32_ty instead of
llvm_uint_ty or llvm_int_ty. Similarly for i8, i16 and i64.

llvm-svn: 32802
2006-12-31 22:24:55 +00:00
Evan Cheng
e521de4e60 Added X86 SSE2 intrinsics which can be represented as vector_shuffles. This is
a temporary workaround for the 2-wide vector_shuffle problem (i.e. its mask
would have type v2i32 which is not legal).

llvm-svn: 27964
2006-04-24 23:34:56 +00:00
Evan Cheng
32c4470374 Last few SSE3 intrinsics.
llvm-svn: 27711
2006-04-14 21:59:03 +00:00
Evan Cheng
184264997e Misc. SSE2 intrinsics: clflush, lfence, mfence
llvm-svn: 27699
2006-04-14 07:43:12 +00:00
Evan Cheng
360a73046f pcmpeq* and pcmpgt* intrinsics.
llvm-svn: 27685
2006-04-14 01:39:53 +00:00
Evan Cheng
18a1a0e199 psll*, psrl*, and psra* intrinsics.
llvm-svn: 27684
2006-04-14 00:14:05 +00:00