mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-27 22:12:47 +01:00
Commit Graph

580 Commits

Author SHA1 Message Date
Benjamin Kramer
851691ddb2 Add patterns for the x86 popcnt instruction.
- Also adds a new POPCNT subtarget feature that is currently enabled if the target
  supports SSE4.2 (nehalem) or SSE4A (barcelona).

llvm-svn: 120917
2010-12-04 20:32:23 +00:00
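
A minimal TableGen sketch of the kind of subtarget feature this commit adds; the field values are illustrative, not copied from the commit:

  // Assumed shape of a POPCNT feature bit in X86.td; the predicate name
  // and description string are guesses for illustration.
  def FeaturePOPCNT : SubtargetFeature<"popcnt", "HasPOPCNT", "true",
                                       "Support POPCNT instruction">;
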
Nate Begeman
deb26223bd Scalar f32/f64 are also subregs of ymm regs
llvm-svn: 120844
2010-12-03 21:54:39 +00:00
Eric Christopher
6a21ceab5c Implement a PseudoI class and transfer the SSE instructions over to use
it.

llvm-svn: 120412
2010-11-30 08:57:23 +00:00
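
A rough sketch of what such a pseudo-instruction class looks like in TableGen (not the exact upstream definition):

  // Pseudo instructions carry operands and a selection pattern but no
  // encoding; they are lowered or expanded before code emission.
  class PseudoI<dag outs, dag ins, list<dag> pattern> : Instruction {
    let OutOperandList = outs;
    let InOperandList = ins;
    let Pattern = pattern;
    let isPseudo = 1;
  }
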
Eric Christopher
f27f0b5234 Rewrite mwait and monitor support and custom lower arguments.
Fixes PR8573.

llvm-svn: 120404
2010-11-30 07:20:12 +00:00
Bruno Cardoso Lopes
9f9f796756 Fix PR8211
llvm-svn: 118445
2010-11-08 21:24:59 +00:00
Dale Johannesen
b78530f9b0 Fix pastos in handling of AVX cvttsd2si, PR8491.
Bruno, please review, but I'm pretty sure this is right.
Patch by Alex Mac!

llvm-svn: 117514
2010-10-28 00:35:54 +00:00
Chris Lattner
72e7e84c3f simplify some map operations.
llvm-svn: 116014
2010-10-07 23:57:02 +00:00
Evan Cheng
7c89d70f27 Canonicalize X86ISD::MOVDDUP nodes to v2f64 to make sure all cases match. Also eliminate unneeded isel patterns. rdar://8520311
llvm-svn: 115977
2010-10-07 20:50:20 +00:00
Chris Lattner
84846b71af remove the !nameconcat tblgen feature. It is "shorthand" and is only used in 4 places
where !cast is just as short.

llvm-svn: 115722
2010-10-06 00:19:21 +00:00
Chris Lattner
12274b9845 allow !strconcat to take more than two operands, eliminating nested chains like
!strconcat(!strconcat(!strconcat(!strconcat(...))))

Simplify some x86 td files to use it.

llvm-svn: 115719
2010-10-05 23:58:18 +00:00
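
For illustration, a hypothetical record showing what the variadic form replaces:

  // Before: only two operands per call, so concatenations had to nest.
  // string AsmString = !strconcat(!strconcat(!strconcat(prefix, mnem), "\t"), ops);
  // After: one call takes the whole list.
  class AsmTmpl<string prefix, string mnem, string ops> {
    string AsmString = !strconcat(prefix, mnem, "\t", ops);
  }
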
Chris Lattner
5d7d5a81eb distribute the rest of the contents of X86Instr64bit.td out to
the right places.  X86Instr64bit.td now dies, long live x86-64!

llvm-svn: 115669
2010-10-05 20:49:15 +00:00
Chris Lattner
9317bf2ed5 move CMOV_FR32 and friends to InstrCompiler, since they are
pseudo instructions.

Move POPCNT to InstrSSE since they are SSE4 instructions.

llvm-svn: 115603
2010-10-05 06:41:40 +00:00
Chris Lattner
9c58de2dc4 fix rdar://8490728 - llvm-mc rejects gpr64 form of 'movmskpd'
llvm-svn: 115029
2010-09-29 05:05:03 +00:00
Chris Lattner
890c21a20a add assembler support for the cvtsd2sil/cvtsd2siq mnemonics, rdar://8456382
llvm-svn: 115027
2010-09-29 04:55:40 +00:00
Chris Lattner
c14d59589c add basic AVX support to the disassembler; also teach it about ssmem/sdmem
operands.

With this done, we can remove the _Int suffixes from the round instructions
without the disassembler blowing up.  This allows the assembler to support
them, implementing rdar://8456376 - llvm-mc rejects 'roundss'

llvm-svn: 115019
2010-09-29 02:57:56 +00:00
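
A hedged sketch of a scalar-memory operand of this kind; the field values are assumptions for illustration, not the exact definitions:

  // ssmem: a scalar f32 memory reference used by XMM instructions, so the
  // disassembler and printer treat the access as 32-bit.
  def ssmem : Operand<v4f32> {
    let PrintMethod = "printf32mem";
    let MIOperandInfo = (ops ptr_rc, i8imm, ptr_rc, i32imm, i8imm);
  }
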
Chris Lattner
f90296b045 add asmparser support for cvttpd2dq by removing some Int_ prefixes.
Clean up cvttps2dq by removing some redundant implementations of the
same instruction.  rdar://8456382

llvm-svn: 115018
2010-09-29 02:36:32 +00:00
Chris Lattner
e5c5c8dc1f implement rdar://8456382 - cvtsd2si support, by removing some Int_ prefixes.
llvm-svn: 115017
2010-09-29 02:24:57 +00:00
Dale Johannesen
eb807a15a3 Fix typos. 128-bit PSHUFB takes a 128-bit memory op.
v8i16 is not an MMX type; put it where it belongs.

llvm-svn: 113785
2010-09-13 21:15:43 +00:00
Bruno Cardoso Lopes
49efee5c95 Add one more fallback pattern for movddup
llvm-svn: 113522
2010-09-09 18:48:34 +00:00
Dale Johannesen
de53df20d6 Move remaining MMX instructions from SSE to MMX.
llvm-svn: 113501
2010-09-09 17:13:07 +00:00
Dale Johannesen
7469923117 Move most MMX instructions (defined as anything that
uses MMX, even if it also uses other things) from InstrSSE
into InstrMMX.  No (intended) functional change.

llvm-svn: 113462
2010-09-09 01:02:39 +00:00
Bruno Cardoso Lopes
892c337123 x86 vector shuffle lowering now relies only on target-specific
nodes to emit shuffles and no longer does isel mask matching.
- Add the selection of the remaining shuffle opcode (movddup)
- Introduce two new functions to "recognize" where we may get
potential folds and add several comments to them explaining why
they are not yet in the desired shape.
- Add more fallback patterns for the case where we select
a specific shuffle opcode as if it could fold a load, but it
can't, remapping to a valid instruction.
- Add a couple of FIXMEs to address in the following days once
there's a good solution to the current folding problem.

llvm-svn: 113369
2010-09-08 17:43:25 +00:00
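
A sketch of the fallback idea, using movddup with names in the style of X86InstrSSE.td (treat them as illustrative): when isel would pick the load-folding form but the load can't actually fold, a register pattern remaps the node to a valid instruction.

  // Folding case: the memory operand feeds movddup directly.
  def : Pat<(X86Movddup (memopv2f64 addr:$src)),
            (MOVDDUPrm addr:$src)>;
  // Fallback case: same node, register source, no fold.
  def : Pat<(v2f64 (X86Movddup VR128:$src)),
            (MOVDDUPrr VR128:$src)>;
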
Dale Johannesen
8354cab2de Add patterns for MMX that use the new intrinsics.
Enable palignr intrinsic.
These may need adjustment for a new VT in due course.

llvm-svn: 113233
2010-09-07 18:10:56 +00:00
Bruno Cardoso Lopes
92bb02f722 Remove unused target specific node
llvm-svn: 113224
2010-09-07 17:38:55 +00:00
Dale Johannesen
2f4f8f5705 Remove the rest of the nonexistent 64-bit AVX instructions.
Bruno, please review.

llvm-svn: 113014
2010-09-03 21:23:00 +00:00
Bruno Cardoso Lopes
b8ce8b7e9f Reapply the last harmless part of r112934, the pattern fragment to match X86Unpcklpd
llvm-svn: 113009
2010-09-03 20:44:26 +00:00
Daniel Dunbar
26e0e964ab Revert r112934, "- Use specific nodes to match unpckl masks.", which introduced
some infinite loop and select failures.
 - Apologies for the eager revert, but it's branch day.

llvm-svn: 113000
2010-09-03 19:38:11 +00:00
Bruno Cardoso Lopes
f91bd70e9a AVX doesn't support MMX operations, nor their intrinsics.
The AVX versions of PALIGN and PABS* should only exist in
128-bit form. Remove the unnecessary stuff.

llvm-svn: 112944
2010-09-03 02:08:45 +00:00
Bruno Cardoso Lopes
e1ad6555a8 - Use specific nodes to match unpckl masks.
- Teach getShuffleScalarElt how to handle more
target-specific nodes, so DAGCombine can make use of it.
- Add another hack to avoid the node update problem
during legalization; more details in the comments.

llvm-svn: 112934
2010-09-03 01:24:00 +00:00
Bruno Cardoso Lopes
dcdab94661 become more strict about when it's safe to use X86ISD::MOVLPS
llvm-svn: 112799
2010-09-02 02:35:51 +00:00
Bruno Cardoso Lopes
601bf4c6d3 Using target-specific nodes for shuffle nodes makes the mask
check stricter, breaking some cases not covered by the
testsuite, but also exposing some foldings not done before,
as in this example:

  movaps  (%rdi), %xmm0
  movaps  (%rax), %xmm1
  movaps  %xmm0, %xmm2
  movss %xmm1, %xmm2
  shufps  $36, %xmm2, %xmm0

now is generated as:

  movaps  (%rdi), %xmm0
  movaps  %xmm0, %xmm1
  movlps  (%rax), %xmm1
  shufps  $36, %xmm1, %xmm0

llvm-svn: 112753
2010-09-01 22:33:20 +00:00
Bruno Cardoso Lopes
9375b2f67d Use movlps, movlpd, movss and movsd specific nodes instead of pattern matching with the movlp pattern fragment
llvm-svn: 112694
2010-09-01 05:08:25 +00:00
Bruno Cardoso Lopes
80613a070e Use the x86-specific MOVSLDUP node, add more patterns to match it, and remove useless load nodes
llvm-svn: 112661
2010-08-31 22:35:05 +00:00
Bruno Cardoso Lopes
8fc83b1960 Use the x86-specific MOVSHDUP node and add more patterns to match it
llvm-svn: 112657
2010-08-31 22:22:11 +00:00
Bruno Cardoso Lopes
6fbe7b9ddd Use MOVLHPS and MOVHLPS x86 nodes whenever possible. Also remove some useless nodes
llvm-svn: 112642
2010-08-31 21:15:21 +00:00
Bruno Cardoso Lopes
7939025262 Use pshufhw and pshuflw in more cases and fix the number of arguments to getTargetShuffleNode
llvm-svn: 111890
2010-08-24 01:16:15 +00:00
Bruno Cardoso Lopes
28d9071635 This is the first step towards refactoring the x86 vector shuffle code. The
general idea here is to have a group of x86 target specific nodes which are
going to be selected during lowering and then directly matched in isel.

The commit includes the addition of those specific nodes and a *bunch* of
patterns, and incrementally we're going to switch between them and what we
have right now. Both the patterns and target specific nodes can change as
we move forward with this work.

llvm-svn: 111691
2010-08-20 22:55:05 +00:00
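
The shape of that pairing, sketched with assumed names: lowering emits a target-specific node, and isel matches the node directly instead of decoding a vector_shuffle mask.

  // Target-specific shuffle node (opcode string and type profile assumed).
  def X86Pshufhw : SDNode<"X86ISD::PSHUFHW", SDTShuff2OpI>;
  // Isel pattern: match the node, emit the instruction.
  def : Pat<(v8i16 (X86Pshufhw VR128:$src, (i8 imm:$imm))),
            (PSHUFHWri VR128:$src, imm:$imm)>;
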
Dale Johannesen
3f9c148d0e Revert 110491. While not wrong, it was based on a
misanalysis and is undesirable.

llvm-svn: 111028
2010-08-13 18:43:45 +00:00
Bruno Cardoso Lopes
de5f3f5cb6 Improve the comment to make explicit why this code should not be touched before the JIT goes MC
llvm-svn: 111021
2010-08-13 17:44:10 +00:00
Eric Christopher
63c83f19a0 Revert last patch and r110954 as I meant to.
llvm-svn: 111001
2010-08-13 02:37:50 +00:00
Bruno Cardoso Lopes
350d186d69 Some small clean-up: use of pseudo instructions
llvm-svn: 110954
2010-08-12 20:55:18 +00:00
Bruno Cardoso Lopes
7cb26cb8be - Teach SSEDomainFix to switch between different levels of AVX instructions. Here we guess that AVX will have domain issues, so implement them for consistency; if that proves unnecessary, we can remove them in the future.
- Make foldMemoryOperandImpl aware of 256-bit zero-vector folding and support the 128-bit AVX counterparts too.
- Make sure MOV[AU]PS instructions are only selected when SSE1 is enabled, and duplicate the patterns to match AVX.
- Add a testcase for a simple 128-bit zero vector creation.

llvm-svn: 110946
2010-08-12 20:20:53 +00:00
Bruno Cardoso Lopes
99b5298854 Define AVX 128-bit pattern versions of SET0PS/PD.
llvm-svn: 110937
2010-08-12 18:20:59 +00:00
Bruno Cardoso Lopes
bb491bd56c Begin to support some vector operations for AVX 256-bit instructions. The long-term
goal here is to be able to match enough of vector_shuffle and build_vector
so that all AVX intrinsics which aren't mapped to their own built-ins but to
shufflevector calls can be codegen'd. This is the first (baby) step: support
building zeroed vectors.

llvm-svn: 110897
2010-08-12 02:06:36 +00:00
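
One hedged way to express "build a zeroed 256-bit vector" in .td form (opcode, format, and flag choices here are assumptions): a rematerializable xor of a YMM register with itself.

  // All-zeros v8f32 via xorps of a YMM register with itself; cheap and
  // rematerializable, with no real dependency on $dst.
  let isReMaterializable = 1, isAsCheapAsAMove = 1 in
    def AVX_SET0PSY : I<0x57, MRMInitReg, (outs VR256:$dst), (ins), "",
                        [(set VR256:$dst, (v8f32 immAllZerosV))]>, VEX_4V;
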
Bruno Cardoso Lopes
6eb24fd744 Add AVX matching patterns to Packed Bit Test intrinsics.
Apply the same approach as the SSE4.1 ptest intrinsics, but
create a new x86 node "testp", since AVX introduces
vtest{ps}{pd} instructions which set ZF and CF depending
on the sign-bit AND and ANDN of packed floating-point sources.

This is slightly different from what "ptest" does.
Tests are coming with the other 256-bit intrinsics tests.

llvm-svn: 110744
2010-08-10 23:25:42 +00:00
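
A sketch of that new node and one pattern hanging off it (type profile and instruction names are assumptions, not the committed ones):

  // testp: like ptest, but ZF/CF derive from sign-bit AND / ANDN of
  // packed floating-point sources.
  def X86testp : SDNode<"X86ISD::TESTP", SDTX86CmpPTest>;
  def : Pat<(X86testp (v8f32 VR256:$src1), (v8f32 VR256:$src2)),
            (VTESTPSYrr VR256:$src1, VR256:$src2)>;
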
Bruno Cardoso Lopes
f1928b60c0 Add AVX movnt{pd,ps,dq} 256-bit intrinsics
llvm-svn: 110650
2010-08-10 02:49:24 +00:00
Bruno Cardoso Lopes
f5884c6791 Add AVX movmsk 256-bit intrinsics
llvm-svn: 110648
2010-08-10 02:34:56 +00:00
Bruno Cardoso Lopes
2a7ed4b5c9 Support AVX 256-bit load and store intrinsics
llvm-svn: 110645
2010-08-10 01:43:16 +00:00
Bruno Cardoso Lopes
1ea37cfa7b Patterns to match AVX cmp instructions
llvm-svn: 110633
2010-08-10 00:13:20 +00:00
Bruno Cardoso Lopes
4e8d77892c Add matching patterns for vblend AVX intrinsics
llvm-svn: 110630
2010-08-10 00:02:05 +00:00