mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-20 19:42:54 +02:00
Commit Graph

677 Commits

Author SHA1 Message Date
Craig Topper
06ed6cb856 Add TB encoding to VZEROALL, VZEROUPPER, and VCVTPS2PD to allow them to be disassembled. Fixes PR10723.
llvm-svn: 138551
2011-08-25 06:57:46 +00:00
Bruno Cardoso Lopes
5d34219953 Add support for 256-bit versions of VSHUFPD and VSHUFPS.
llvm-svn: 138546
2011-08-25 02:58:26 +00:00
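
As an illustration of what the new 256-bit VSHUFPS/VSHUFPD forms compute, a minimal C++ sketch using the AVX intrinsics (function names are illustrative; assumes a compiler with -mavx):

    #include <immintrin.h>

    // VSHUFPS: the same 8-bit control is applied to both 128-bit lanes;
    // per lane, the first two results come from a, the last two from b.
    __m256 shufps_example(__m256 a, __m256 b) {
        return _mm256_shuffle_ps(a, b, 0x1B);
    }

    // VSHUFPD: one control bit per element selects within each lane.
    __m256d shufpd_example(__m256d a, __m256d b) {
        return _mm256_shuffle_pd(a, b, 0x5);
    }
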
Bruno Cardoso Lopes
dfa5cf4620 Create a section for non-instruction patterns at the beginning of the
file, and move more code around!

llvm-svn: 138521
2011-08-24 23:18:11 +00:00
Bruno Cardoso Lopes
719b357628 Move code around!
llvm-svn: 138520
2011-08-24 23:18:09 +00:00
Bruno Cardoso Lopes
3824b766ac Organize UNPCK* patterns; also add the remaining ones for AVX.
llvm-svn: 138519
2011-08-24 23:18:06 +00:00
Bruno Cardoso Lopes
82c8bc7efd Move remaining MOVDDUP patterns close to the MOVDDUP definition and duplicate
the missing ones for AVX.

llvm-svn: 138518
2011-08-24 23:18:04 +00:00
Bruno Cardoso Lopes
d315a6b6e6 Organize and tidy up MOVDDUP section. Also update comments!
llvm-svn: 138517
2011-08-24 23:18:02 +00:00
Bruno Cardoso Lopes
762fb13cc9 Move MOVHLPS patterns close to MOVHLPS definition, and duplicate the
pattern for 128-bit AVX mode.

llvm-svn: 138516
2011-08-24 23:17:59 +00:00
Bruno Cardoso Lopes
d62766849f Move all PSHUF* patterns close to the PSHUF* definitions. Also be
explicit about which subtarget they refer to, and add AVX versions of
the ones we currently don't have. Remove old and now-wrong comments!

llvm-svn: 138515
2011-08-24 23:17:57 +00:00
Bruno Cardoso Lopes
122f7cfc92 Move all SHUFP* patterns close to the SHUFP* definitions. Also be
explicit about which subtarget they refer to, and add AVX versions of
the ones we currently don't have. Make the mask check stricter, to make
clear it won't be used to match the 256-bit versions!

llvm-svn: 138514
2011-08-24 23:17:55 +00:00
Bruno Cardoso Lopes
734febce18 Mark VZEROALL as clobbering all YMM registers
llvm-svn: 138461
2011-08-24 18:48:33 +00:00
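
The clobber list matters because VZEROALL clears every YMM register; a minimal sketch of how the instruction is reached from C (the function name is illustrative):

    #include <immintrin.h>

    // _mm256_zeroall() maps to VZEROALL, which zeroes ymm0-ymm15 in 64-bit
    // mode, so any value live in a YMM register across this call is lost.
    void clear_all_vector_state(void) {
        _mm256_zeroall();
    }
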
Bruno Cardoso Lopes
8959b54713 Fix a nasty bug where a v4i64 was being wrongly emitted with 32-bit
permutations. Also tidy up some patterns and move them close to their
instruction definitions!

llvm-svn: 138392
2011-08-23 22:06:37 +00:00
Craig Topper
67b22aedb4 Add support for breaking 256-bit v16i16 and v32i8 VSETCC into two 128-bit ones, avoiding scalarization. Add VEX forms of pcmpeqq and pcmpgtq. Fixes more cases of PR10712.
llvm-svn: 138321
2011-08-23 04:36:33 +00:00
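
A hand-written C++ sketch of the same splitting for a v16i16 compare, assuming only AVX1 (no 256-bit integer compares); the function name and the use of vinsertf128 for recombination are illustrative:

    #include <immintrin.h>

    __m256i cmpeq_epi16_avx1(__m256i a, __m256i b) {
        // Compare each 128-bit half separately with PCMPEQW...
        __m128i lo = _mm_cmpeq_epi16(_mm256_castsi256_si128(a),
                                     _mm256_castsi256_si128(b));
        __m128i hi = _mm_cmpeq_epi16(_mm256_extractf128_si256(a, 1),
                                     _mm256_extractf128_si256(b, 1));
        // ...then reassemble the 256-bit result.
        return _mm256_insertf128_si256(_mm256_castsi128_si256(lo), hi, 1);
    }
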
Bruno Cardoso Lopes
23ff325f5b Add 128-bit AVX codegen for PCMP* family of integer instructions
llvm-svn: 138270
2011-08-22 20:31:00 +00:00
Craig Topper
f68d77215d Add TB encoding to VEX versions of SSE fp logical operations to fix disassembler
llvm-svn: 138034
2011-08-19 05:28:50 +00:00
Bruno Cardoso Lopes
0d458d4bb3 Re-encode the 128-bit AVX versions of SQRT, RSQRT and RCP to have 3 operands
instead of 2. They were already defined this way in their regular
versions, but not in the intrinsics versions (*_Int), which would work
for assembly emission but not for object code, since a MachineOperand
would be missing. This commit fixes PR10697.

Also removed the {VSQRT,VRSQRT,VRCP}r_Int forms and matched the intrinsics
via INSERT_SUBREG+EXTRACT_SUBREG patterns. The same couldn't be done for the
memory versions because the sse_load_f32/sse_load_f64 operands need special
handling and don't work like regular "addr" operands.

There are right now 114 "*_Int" and 98 "Int_*" forms! I'm slowly
removing them as I step through, but I hope we can get rid of these
someday; they are really annoying :)

llvm-svn: 138012
2011-08-18 23:59:21 +00:00
Bruno Cardoso Lopes
c174d8ac48 Clean up vector logical ops in AVX, and add and use int versions for
simple v2i64

llvm-svn: 137919
2011-08-18 02:11:34 +00:00
Bruno Cardoso Lopes
98531dfd08 Introduce matching patterns for the vbroadcast AVX instruction. The idea is to
match splats in the form (splat (scalar_to_vector (load ...))) whenever
the load can be folded. All the logic and instruction emission is
working, but because of PR8156 there is no way to match the loads, because
they can never be folded for splats. Thus, the tests are XFAILed, but
I've tested and exercised all the logic using a relaxed version of the
foldable-load check, as if the bug were already fixed. This
should work out of the box once PR8156 gets fixed, since MayFoldLoad will
work as expected.

llvm-svn: 137810
2011-08-17 02:29:19 +00:00
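
For reference, the pattern insists on a foldable load because the AVX1 VBROADCASTSS/VBROADCASTSD only have memory forms; a minimal sketch using the intrinsic (the function name is illustrative):

    #include <immintrin.h>

    // Splat a float from memory into all eight lanes of a ymm register;
    // this is the (splat (scalar_to_vector (load ...))) shape the
    // patterns are meant to match.
    __m256 splat_from_memory(const float *p) {
        return _mm256_broadcast_ss(p);
    }
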
Bruno Cardoso Lopes
f026c60f3d While I'm here, remove the "_alt" hacks from a series of INSERT_SUBREG patterns and
also add the AVX versions of the 128-bit patterns

llvm-svn: 137685
2011-08-15 23:36:51 +00:00
Bruno Cardoso Lopes
1e817d1451 Reorder declarations of vmovmskp* and also add the necessary AVX
predicate and TB encoding fields. This fixes the encoding for the
attached testcase and fixes PR10625.

llvm-svn: 137684
2011-08-15 23:36:45 +00:00
Bruno Cardoso Lopes
2d100ca13c VPERM2F128 is an AVX instruction which permutes between two 256-bit
vectors. It operates on 128-bit elements instead of regular scalar
types. Recognize shuffles that are suitable for VPERM2F128 and teach
the x86 legalizer how to handle them.

llvm-svn: 137519
2011-08-12 21:48:26 +00:00
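
A minimal C++ sketch of the 128-bit-element behaviour described above, using the VPERM2F128 intrinsic (function names and controls are illustrative):

    #include <immintrin.h>

    // Swap the two 128-bit halves of a single ymm: control 0x01 selects
    // a's high lane for the low result and a's low lane for the high result.
    __m256 swap_halves(__m256 a) {
        return _mm256_permute2f128_ps(a, a, 0x01);
    }

    // Concatenate the low halves of a and b: control 0x20.
    __m256 low_halves(__m256 a, __m256 b) {
        return _mm256_permute2f128_ps(a, b, 0x20);
    }
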
Bruno Cardoso Lopes
17ae896095 Move code around and add comments
llvm-svn: 137518
2011-08-12 21:48:22 +00:00
Bruno Cardoso Lopes
4106caa9af Cleanup: Remove Int_ CVTSS2SI* forms
llvm-svn: 137297
2011-08-11 02:52:36 +00:00
Bruno Cardoso Lopes
565ab1542a The following X86 pattern is incorrect:
def : Pat<(X86Movss VR128:$src1,
                   (bc_v4i32 (v2i64 (load addr:$src2)))),
          (MOVLPSrm VR128:$src1, addr:$src2)>;
This matches a MOVSS dag with a MOVLPS instruction. However, MOVSS will replace only the low 32 bits of the register, while the MOVLPS instruction will replace the low 64 bits. A testcase is added to illustrate the bug, and the one that was already present is updated. Patch by Tanya Lattner.

llvm-svn: 137227
2011-08-10 17:45:17 +00:00
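
The semantic difference is easy to see at the intrinsics level; a minimal sketch (function names are illustrative):

    #include <immintrin.h>

    // MOVSS (register form): only the low 32 bits of a are replaced.
    __m128 movss_like(__m128 a, __m128 b) {
        return _mm_move_ss(a, b);          // { b0, a1, a2, a3 }
    }

    // MOVLPS: the low 64 bits are replaced from memory, so it is not a
    // correct substitute for the MOVSS pattern above.
    __m128 movlps_like(__m128 a, const __m64 *p) {
        return _mm_loadl_pi(a, p);         // { p[0], p[1], a2, a3 }
    }
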
Bruno Cardoso Lopes
7461b930f3 Add v16i16 and v32i8 store patterns
llvm-svn: 137166
2011-08-09 22:39:53 +00:00
Bruno Cardoso Lopes
028c6aa951 Use fp unpack instructions to unpack int types. Until we have AVX2, this
is the best we can do for these patterns. This fixes PR10554.

llvm-svn: 137161
2011-08-09 22:18:37 +00:00
Bruno Cardoso Lopes
633400ee00 Reapply a more appropriate solution than in r137114. AVX supports
v4f64 = sitofp v4i32. This fixes PR10559.
Also add support for v4i32 = fptosi v4f64.

llvm-svn: 137128
2011-08-09 17:39:13 +00:00
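
Both conversions map to single AVX instructions; a minimal sketch using the intrinsics (function names are illustrative):

    #include <immintrin.h>

    // v4f64 = sitofp v4i32: VCVTDQ2PD with a ymm destination.
    __m256d widen(__m128i v)  { return _mm256_cvtepi32_pd(v); }

    // v4i32 = fptosi v4f64: truncating VCVTTPD2DQ.
    __m128i narrow(__m256d v) { return _mm256_cvttpd_epi32(v); }
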
Bruno Cardoso Lopes
d521431558 Add support for AVX vector fextend
llvm-svn: 137105
2011-08-09 03:04:29 +00:00
Bruno Cardoso Lopes
09a727298f Add AVX versions of 128-bit sitofp and fptosi
llvm-svn: 137104
2011-08-09 03:04:25 +00:00
Bruno Cardoso Lopes
1025d1eb3b Add two patterns to match special vmovss and vmovsd cases. Also make
the patterns already there stricter regarding the predicate.
This fixes PR10558

llvm-svn: 137100
2011-08-09 01:43:09 +00:00
Bruno Cardoso Lopes
d7eac41193 Make LowerVSETCC aware of AVX types and add patterns to match them.
llvm-svn: 137090
2011-08-09 00:46:57 +00:00
Bruno Cardoso Lopes
771876cade Add v4f64 -> v2f32 fp_round support. Also add a testcase to exercise
the legalizer. This commit together with the two previous ones fixes
PR10495.

llvm-svn: 136654
2011-08-01 21:54:09 +00:00
Bruno Cardoso Lopes
473d982caf Add v8i32 and v4i64 vpermil patterns
llvm-svn: 136451
2011-07-29 01:31:07 +00:00
Bruno Cardoso Lopes
02bbf20b02 Clean up PALIGNR handling and remove the old palign pattern fragment.
Also make PALIGNR masks not match 256-bit vectors, which aren't supported.
It's also a step toward solving PR10489

llvm-svn: 136448
2011-07-29 01:30:59 +00:00
Bruno Cardoso Lopes
e24a043703 Add patterns to generate copies for extract_subvector instead of
using vextractf128. This will reduce the number of issued instructions
for several AVX code sequences.

llvm-svn: 136323
2011-07-28 01:26:50 +00:00
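
The point of the patterns: extracting the low 128 bits needs no instruction at all, and only the upper half needs vextractf128. A minimal sketch with intrinsics (function names are illustrative):

    #include <immintrin.h>

    // Low half: just a register copy (often folded away entirely).
    __m128 low_half(__m256 v)  { return _mm256_castps256_ps128(v); }

    // High half: this one genuinely needs vextractf128.
    __m128 high_half(__m256 v) { return _mm256_extractf128_ps(v, 1); }
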
Bruno Cardoso Lopes
73945bf79a movd/movq write zeros in the high 128-bit part of the vector. Use
them to match 256-bit scalar_to_vector+zext.

llvm-svn: 136322
2011-07-28 01:26:46 +00:00
Bruno Cardoso Lopes
1f63a37172 Add a few patterns to match allzeros without having to use the fp unit.
Take advantage of the fact that the 128-bit vpxor zeroes the upper part, and use it.
This also fixes PR10491

llvm-svn: 136321
2011-07-28 01:26:43 +00:00
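
The trick relies on the VEX rule that any 128-bit VEX-encoded instruction zeroes bits 255:128 of the destination ymm; a minimal sketch (the exact instruction a compiler picks may vary):

    #include <immintrin.h>

    // A 256-bit zero can be produced by a single 128-bit vpxor/vxorps,
    // since the upper 128 bits are implicitly cleared.
    __m256i zero_ymm(void) {
        return _mm256_setzero_si256();
    }
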
Bruno Cardoso Lopes
06d8be564f Add SINT_TO_FP and FP_TO_SINT support for v8i32 types. Also move
a convert pattern close to the instruction definition.

llvm-svn: 136320
2011-07-28 01:26:39 +00:00
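
For v8i32, both directions already have single 256-bit instructions in AVX1; a minimal sketch (function names are illustrative):

    #include <immintrin.h>

    // v8f32 = sint_to_fp v8i32: VCVTDQ2PS.
    __m256  to_float(__m256i v) { return _mm256_cvtepi32_ps(v); }

    // v8i32 = fp_to_sint v8f32: truncating VCVTTPS2DQ.
    __m256i to_int(__m256 v)    { return _mm256_cvttps_epi32(v); }
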
Kevin Enderby
9adbbfffd0 Fix llvm-mc handling of x86 instructions that take 8-bit unsigned immediates.
llvm-mc gives an "invalid operand" error for instructions that take an unsigned
immediate which has the high bit set, such as:
    pblendw $0xc5, %xmm2, %xmm1
llvm-mc treats all x86 immediates as signed values and range-checks them.
A small number of x86 instructions use the imm8 field as a set of bits.
This change affects only those instructions, and only where the high bit is not
ignored.  The others remain unchanged.

llvm-svn: 136287
2011-07-27 23:01:50 +00:00
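
An example of the kind of immediate involved, at the intrinsics level (the function name is illustrative; assumes SSE4.1 is enabled):

    #include <immintrin.h>

    // PBLENDW's immediate is a bit mask, not a signed number: 0xc5
    // selects elements 0, 2, 6 and 7 from b, so values with the high
    // bit set must be accepted by the assembler.
    __m128i blend_example(__m128i a, __m128i b) {
        return _mm_blend_epi16(a, b, 0xc5);
    }
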
Bruno Cardoso Lopes
8830fde434 vpermilps and vpermilpd have different behaviour regarding the
usage of the shuffle bitmask. Both work in 128-bit lanes without
crossing, but in the former the mask of the high part is the same
as the one used by the low part, while in the latter both lanes have
independent masks. Handle this properly and add support for vpermilpd.

llvm-svn: 136200
2011-07-27 00:56:34 +00:00
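
A minimal C++ sketch of the difference, using the immediate forms of the intrinsics (function names and controls are illustrative):

    #include <immintrin.h>

    // vpermilps: 2 bits per element, and the same 8-bit control is
    // reused for both 128-bit lanes (0x1B reverses within each lane).
    __m256 permil_ps(__m256 v) {
        return _mm256_permute_ps(v, 0x1B);
    }

    // vpermilpd: 1 bit per element, 4 bits total, so the two lanes are
    // controlled independently (0x6 keeps the low lane, swaps the high lane).
    __m256d permil_pd(__m256d v) {
        return _mm256_permute_pd(v, 0x6);
    }
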
Bruno Cardoso Lopes
e53bb853ea Recognize unpckh* masks and match 256-bit versions. The new versions are
different from the previous 128-bit ones because they work in lanes.
Update a few comments and add testcases

llvm-svn: 136157
2011-07-26 22:03:40 +00:00
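
For reference, a minimal sketch of the in-lane behaviour the new mask recognition has to model (the function name is illustrative):

    #include <immintrin.h>

    // 256-bit VUNPCKHPS interleaves the high halves of each 128-bit lane
    // separately and never crosses lanes:
    // result = { a2,b2,a3,b3 | a6,b6,a7,b7 }
    __m256 unpackhi_example(__m256 a, __m256 b) {
        return _mm256_unpackhi_ps(a, b);
    }
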
Bruno Cardoso Lopes
a493ad3938 Remove now unused patterns. 0 insertions(+), 98 deletions(-)
llvm-svn: 136109
2011-07-26 18:22:39 +00:00
Bruno Cardoso Lopes
b24e958ffb Cleanup old matching for PUNPCK* variants
llvm-svn: 136108
2011-07-26 18:22:27 +00:00
Bruno Cardoso Lopes
ab40a57cce Add 256-bit isel for movsldup/movshdup
llvm-svn: 136051
2011-07-26 02:39:32 +00:00
Bruno Cardoso Lopes
cde45ac9ca Add 128-bit AVX versions of movshdup/movsldup
llvm-svn: 136048
2011-07-26 02:39:23 +00:00
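
What the two instructions compute, as a minimal sketch with the SSE3 intrinsics (function names are illustrative; the 128-bit AVX forms are the VEX encodings of the same operations):

    #include <immintrin.h>

    // movshdup: duplicate the odd-indexed floats.
    __m128 dup_odd(__m128 v)  { return _mm_movehdup_ps(v); }   // { v1,v1,v3,v3 }

    // movsldup: duplicate the even-indexed floats.
    __m128 dup_even(__m128 v) { return _mm_moveldup_ps(v); }   // { v0,v0,v2,v2 }
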
Bruno Cardoso Lopes
25698b90e9 Cleanup movsldup/movshdup matching.
27 insertions(+), 62 deletions(-)

llvm-svn: 136047
2011-07-26 02:39:13 +00:00
Bruno Cardoso Lopes
c94d6a2d2c Codegen the all-ones vector better when using AVX: vpcmpeqd + vinsertf128
This also fixes PR10452

llvm-svn: 136004
2011-07-25 23:05:32 +00:00
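
For context, the all-ones ymm shows up whenever code asks for a vector of -1s; a minimal sketch of the value the vpcmpeqd + vinsertf128 sequence materializes (the function name is illustrative):

    #include <immintrin.h>

    // An all-ones 256-bit integer vector; without AVX2 there is no
    // 256-bit integer compare, hence the 128-bit vpcmpeqd + vinsertf128
    // lowering described above.
    __m256i all_ones(void) {
        return _mm256_set1_epi32(-1);
    }
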
Bruno Cardoso Lopes
f457bc8120 Add remaining 256-bit vector bitcasts. This also fixes PR10451
llvm-svn: 136003
2011-07-25 23:05:28 +00:00
Bruno Cardoso Lopes
9380919dc5 - Handle the special scalar_to_vector case of splats, using a native 128-bit
shuffle before inserting into a 256-bit vector.
- Add AVX versions of the movd/movq instructions.
- Introduce a few COPY patterns to match insert_subvector instructions.
This turns a trivial insert_subvector instruction into a register copy,
coalescing the xmm into a ymm and avoiding the emission of one more instruction.

llvm-svn: 136002
2011-07-25 23:05:25 +00:00
Bruno Cardoso Lopes
5382a1e65e Add v8f32->v8i32 bitcast. Fixes PR10440
llvm-svn: 135794
2011-07-22 19:51:02 +00:00