Mirror of https://github.com/RPCS3/llvm-mirror.git, synced 2024-10-26 22:42:46 +02:00
Commit Graph

59 Commits

Author SHA1 Message Date
Evan Cheng
3f7a10bee8 PSHUF* encoding bugs.
llvm-svn: 27405
2006-04-04 18:40:36 +00:00
Evan Cheng
f07104b717 cmpps / cmppd encoding bug
llvm-svn: 27393
2006-04-04 03:04:07 +00:00
Evan Cheng
2be8582ddb Compact some intrinsic definitions.
llvm-svn: 27388
2006-04-04 00:10:53 +00:00
Evan Cheng
bbe72d932c Some SSE1 intrinsics: min, max, sqrt, etc.
llvm-svn: 27384
2006-04-03 23:49:17 +00:00
Evan Cheng
7ff32cd571 Use movlpd to store the lower f64 extracted from a v2f64.
Use movhpd to store the upper f64 extracted from a v2f64.

llvm-svn: 27382
2006-04-03 22:30:54 +00:00
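
A minimal C sketch of what these stores do at the intrinsic level (illustrative only, not the patch itself; the helper name is hypothetical):

    #include <emmintrin.h>  /* SSE2 */

    /* Store the two f64 halves of a v2f64 separately, without a shuffle. */
    void store_halves(__m128d v, double *lo, double *hi) {
        _mm_storel_pd(lo, v);  /* movlpd: store the lower element */
        _mm_storeh_pd(hi, v);  /* movhpd: store the upper element */
    }
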
Evan Cheng
169240beb7 - More efficient extract_vector_elt with shuffle and movss, movsd, movd, etc.
- Some bug fixes and naming inconsistency fixes.

llvm-svn: 27377
2006-04-03 20:53:28 +00:00
Evan Cheng
4623ebd3d0 Use an X86 target-specific node X86ISD::PINSRW instead of a malformed
INSERT_VECTOR_ELT to insert a 16-bit value in a 128-bit vector.

llvm-svn: 27314
2006-03-31 21:55:24 +00:00
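
For reference, the pinsrw operation this node selects can be exercised from C via the standard SSE2 intrinsic (a sketch; the helper name is hypothetical):

    #include <emmintrin.h>

    /* pinsrw: replace word element 3 of a v8i16 with a 16-bit value.
       The lane index must be a compile-time constant. */
    __m128i insert_word3(__m128i v, short s) {
        return _mm_insert_epi16(v, s, 3);
    }
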
Evan Cheng
fb980688f1 Added support for SSE3 horizontal ops: haddp{s|d} and hsub{s|d}.
llvm-svn: 27310
2006-03-31 21:29:33 +00:00
Evan Cheng
7b9a0c6d7a Add support to use pextrw and pinsrw to extract and insert a word element
from a 128-bit vector.

llvm-svn: 27304
2006-03-31 19:22:53 +00:00
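
A matching C-level sketch of the extract side (hypothetical helper name):

    #include <emmintrin.h>

    /* pextrw: read word element 5 of a v8i16, zero-extended to int. */
    int extract_word5(__m128i v) {
        return _mm_extract_epi16(v, 5);
    }
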
Evan Cheng
d3c692650f Make sure all possible shuffles are matched.
Use pshufd, pshufhw, and pshuflw to shuffle v4f32 if shufps doesn't match.
Use shufps to shuffle v4i32 if pshufd, pshufhw, and pshuflw don't match.

llvm-svn: 27259
2006-03-30 19:54:57 +00:00
Evan Cheng
4150ec59a3 More logical ops patterns
llvm-svn: 27257
2006-03-30 07:33:32 +00:00
Evan Cheng
57d481a78a Add support for _mm_cmp{cc}_ss and _mm_cmp{cc}_ps intrinsics
llvm-svn: 27256
2006-03-30 06:21:22 +00:00
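
These map onto cmpps/cmpss with a condition-code immediate; a small C sketch of the packed form, assuming the standard intrinsic headers:

    #include <emmintrin.h>

    /* cmpltps: each lane becomes all-ones if a < b, else all-zeros,
       yielding a mask usable with andps/andnps for branchless selects. */
    __m128 less_mask(__m128 a, __m128 b) {
        return _mm_cmplt_ps(a, b);
    }
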
Evan Cheng
82d2a6910f Add 128-bit pmovmskb intrinsic support.
llvm-svn: 27255
2006-03-30 00:33:26 +00:00
Evan Cheng
9ebe75e915 Change SSE pack operation definitions to fit what the intrinsics expect.
For example, packsswb actually creates a v16i8 from a pair of v8i16, but the
intrinsic specification forces the output type to match the operands.

llvm-svn: 27254
2006-03-29 23:53:14 +00:00
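
The pack semantics described above, sketched with the C intrinsic (illustrative; not the TableGen change itself):

    #include <emmintrin.h>

    /* packsswb: narrow two v8i16 into one v16i8 with signed saturation;
       values outside [-128, 127] clamp to the bounds. */
    __m128i pack16to8(__m128i lo, __m128i hi) {
        return _mm_packs_epi16(lo, hi);
    }
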
Evan Cheng
7bc3bc8246 - Added some SSE2 128-bit packed integer ops.
- Added SSE2 128-bit integer pack with signed saturation ops.
- Added pshufhw and pshuflw ops.

llvm-svn: 27252
2006-03-29 23:07:14 +00:00
Evan Cheng
d0d3eade59 Need to special-case splat after all. Make the second operand of a splat
vector_shuffle undef.

llvm-svn: 27250
2006-03-29 19:02:40 +00:00
Evan Cheng
e7701928bb Floating-point logical operation patterns should match bit_convert; otherwise
integer vector logical operations would match andp{s|d} instead of pand.

llvm-svn: 27248
2006-03-29 18:47:40 +00:00
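
The distinction matters because the same bitwise AND exists in both instruction domains; a C sketch (helper names hypothetical):

    #include <emmintrin.h>

    /* Integer vectors should select pand... */
    __m128i and_int(__m128i a, __m128i b) { return _mm_and_si128(a, b); }

    /* ...while floating-point vectors select andps (andpd for doubles). */
    __m128 and_fp(__m128 a, __m128 b) { return _mm_and_ps(a, b); }
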
Evan Cheng
02b5de9b3e - More shuffle related bug fixes.
- Whenever possible use ops of the right packed types for vector shuffles /
  splats.

llvm-svn: 27246
2006-03-29 03:04:49 +00:00
Evan Cheng
5194a37602 - Only use pshufd for v4i32 vector shuffles.
- Other shuffle related fixes.

llvm-svn: 27244
2006-03-29 01:30:51 +00:00
Evan Cheng
e7a50a5851 Added aliases to scalar SSE instructions, e.g. addss, to match x86 intrinsics.
The source operand type is v4f32, with the upper bits passed through.
Added matching code for these.

llvm-svn: 27240
2006-03-28 23:51:43 +00:00
Evan Cheng
7b7954f53f movlps and movlpd should be modeled as two-address code.
llvm-svn: 27221
2006-03-28 07:01:28 +00:00
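
Two-address here because the instruction merges into an existing register rather than defining a fresh one; a C sketch of the merge behavior, assuming the standard intrinsics:

    #include <emmintrin.h>

    /* movlpd as a load: replaces only the low f64; the upper half of the
       destination is preserved, so the old value is also an input. */
    __m128d load_low(__m128d old, const double *p) {
        return _mm_loadl_pd(old, p);
    }
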
Evan Cheng
9f0e244187 Typo
llvm-svn: 27219
2006-03-28 06:53:49 +00:00
Evan Cheng
fb4b2bfc7d * Prefer using operations of matching types, e.g. unpcklpd rather than movlhps.
* Bug fixes.

llvm-svn: 27218
2006-03-28 06:50:32 +00:00
Evan Cheng
4d554dae17 - Clean up / consolidate various shuffle masks.
- Some misc. bug fixes.
- Use MOVHPDrm to load from m64 into the upper half of an XMM register.

llvm-svn: 27210
2006-03-28 02:43:26 +00:00
Evan Cheng
d8d7ec47bd Model unpack lower and interleave as vector_shuffle so we can lower the
intrinsics as such.

llvm-svn: 27200
2006-03-28 00:39:58 +00:00
Chris Lattner
b64eb6593a unbreak the build
llvm-svn: 27174
2006-03-27 16:52:45 +00:00
Evan Cheng
0865274fa5 Use pcmpeq to generate a vector of all ones.
llvm-svn: 27167
2006-03-27 07:00:16 +00:00
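
The idiom: comparing a register for equality with itself sets every bit; a C sketch (seeded with zero for definedness):

    #include <emmintrin.h>

    /* pcmpeqd x, x yields all-ones in every lane, a cheap way to
       materialize the constant without a memory load. */
    __m128i all_ones(void) {
        __m128i x = _mm_setzero_si128();
        return _mm_cmpeq_epi32(x, x);
    }
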
Evan Cheng
1dfbede1d1 Remove X86:isZeroVector, use ISD::isBuildVectorAllZeros instead; some fixes / cleanups
llvm-svn: 27150
2006-03-26 09:53:12 +00:00
Evan Cheng
e5807f6b47 Build an arbitrary vector with more than two distinct scalar elements using a
series of unpack and interleave ops.

llvm-svn: 27119
2006-03-25 09:37:23 +00:00
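
One possible interleave tree for four distinct scalars, sketched in C (illustrative of the strategy, not the lowering code; the helper name is hypothetical):

    #include <emmintrin.h>

    /* Merge pairs with unpcklps, then join the pairs (movlhps pattern). */
    __m128 build4(float a, float b, float c, float d) {
        __m128 ab = _mm_unpacklo_ps(_mm_set_ss(a), _mm_set_ss(b)); /* {a,b,_,_} */
        __m128 cd = _mm_unpacklo_ps(_mm_set_ss(c), _mm_set_ss(d)); /* {c,d,_,_} */
        return _mm_movelh_ps(ab, cd);                              /* {a,b,c,d} */
    }
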
Evan Cheng
8601147e68 Added SSE cacheability ops
llvm-svn: 27103
2006-03-25 06:03:26 +00:00
Evan Cheng
234090b386 Added 128-bit packed integer subtraction.
llvm-svn: 27096
2006-03-25 01:33:37 +00:00
Evan Cheng
041c9c534b Added CVTSS2SI.
llvm-svn: 27094
2006-03-25 01:00:18 +00:00
Evan Cheng
bdb85b387f Support for scalar to vector with zero extension.
llvm-svn: 27091
2006-03-24 23:15:12 +00:00
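
movd from a GPR is the canonical case: the scalar lands in lane 0 and the remaining lanes are zeroed. A C sketch:

    #include <emmintrin.h>

    /* movd: i32 -> v4i32 with the upper three lanes cleared. */
    __m128i scalar_to_vec(int x) {
        return _mm_cvtsi32_si128(x);
    }
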
Evan Cheng
33dad39b46 Added LDMXCSR
llvm-svn: 27087
2006-03-24 22:28:37 +00:00
Chris Lattner
2d08e8aee1 plug the intrinsics into the patterns for movmsk*
llvm-svn: 27083
2006-03-24 21:49:18 +00:00
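
The movmskps intrinsic these patterns now serve, in a one-line C sketch:

    #include <emmintrin.h>

    /* movmskps: collect the sign bit of each f32 lane into bits 0-3. */
    int sign_bits(__m128 v) {
        return _mm_movemask_ps(v);
    }
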
Evan Cheng
d58d54cf3e Handle BUILD_VECTOR with all zero elements.
llvm-svn: 27056
2006-03-24 07:29:27 +00:00
Evan Cheng
3028b04057 More efficient v2f64 shuffle using movlhps, movhlps, unpckhpd, and unpcklpd.
llvm-svn: 27040
2006-03-24 02:58:06 +00:00
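
For v2f64 these four ops cover every two-element permutation; two of them as C sketches (hypothetical helper names):

    #include <emmintrin.h>

    /* unpcklpd v, v: broadcast the low lane. */
    __m128d splat_lo(__m128d v) { return _mm_unpacklo_pd(v, v); }

    /* unpckhpd v, v: broadcast the high lane. */
    __m128d splat_hi(__m128d v) { return _mm_unpackhi_pd(v, v); }
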
Evan Cheng
68410804f0 Handle more shuffle cases with SHUFP* instructions.
llvm-svn: 27024
2006-03-24 01:18:28 +00:00
Evan Cheng
a6dc6e535d Following icc's lead: use movdqa to load / store 128-bit integer vectors
llvm-svn: 26980
2006-03-23 07:44:07 +00:00
Evan Cheng
c1fe87ea8b Add v4i32 <-> v4f32 bitconvert patterns.
llvm-svn: 26969
2006-03-23 02:36:37 +00:00
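
At the source level these bitconverts correspond to the no-op cast intrinsics; a sketch:

    #include <emmintrin.h>

    /* Reinterpret v4f32 as v4i32 and back; no instruction is emitted,
       only the register's type changes. */
    __m128i as_ints(__m128 v)    { return _mm_castps_si128(v); }
    __m128  as_floats(__m128i v) { return _mm_castsi128_ps(v); }
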
Evan Cheng
5f7cf963db Add 128-bit integer vector load and add (for testing).
llvm-svn: 26967
2006-03-23 01:57:24 +00:00
Evan Cheng
56b67b2c4f SHUFP* are two-address code.
llvm-svn: 26959
2006-03-22 20:08:18 +00:00
Evan Cheng
ae6a39ea92 - Supposedly movlhps is faster / better than unpcklpd.
- Don't forget pshufd is only available with SSE2.

llvm-svn: 26956
2006-03-22 19:16:21 +00:00
Evan Cheng
cff38e19c3 - Implement X86ISelLowering::isShuffleMaskLegal(). We currently only support
splat and PSHUFD cases.
- Clean up shuffle / splat matching code.

llvm-svn: 26954
2006-03-22 18:59:22 +00:00
Evan Cheng
f6dc0a7f5e - VECTOR_SHUFFLE of v4i32 / v4f32 with an undef second vector always matches
PSHUFD. We can make mask entries that point into the undef vector point
  anywhere we want.
- Change some names to appease Chris.

llvm-svn: 26951
2006-03-22 08:01:21 +00:00
Evan Cheng
25440d19b1 Fix PSHUF* and SHUF* jit code emission problems
llvm-svn: 26949
2006-03-22 07:10:28 +00:00
Evan Cheng
7aac4350c7 Some splat and shuffle support.
llvm-svn: 26940
2006-03-22 02:53:00 +00:00
Evan Cheng
47dd756c72 - Use movaps to store 128-bit vector integers.
- Each scalar_to_vector of v8i16 and v16i8 is an any_extend followed by a movd.

llvm-svn: 26932
2006-03-21 23:01:21 +00:00
Evan Cheng
6ec225863c - Remove scalar to vector pseudo ops. They are just wrong.
- Handle FR32 to VR128:v4f32 and FR64 to VR128:v2f64 with aliases of MOVAPS
and MOVAPD. Mark them as move instructions and *hope* they will be deleted.

llvm-svn: 26919
2006-03-21 07:09:35 +00:00
Evan Cheng
a4db61ddc1 x86 ISD::SCALAR_TO_VECTOR support.
llvm-svn: 26911
2006-03-21 00:33:35 +00:00