mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-11-25 12:12:47 +01:00
Commit Graph

13691 Commits

Author SHA1 Message Date
Reid Spencer
b08854af39 Add the README files to the distribution.
llvm-svn: 27651
2006-04-13 06:39:24 +00:00
Evan Cheng
2de048bc69 psad, pmax, pmin intrinsics.
llvm-svn: 27647
2006-04-13 06:11:45 +00:00
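
A hedged illustration of the new pmax/pmin operations through their usual
emmintrin.h wrappers (example mine, not from the commit):

#include <emmintrin.h>

/* pmaxsw/pminsw: clamp each signed 16-bit lane of v to [lo, hi] */
__m128i clamp_words(__m128i v, __m128i lo, __m128i hi) {
    return _mm_min_epi16(_mm_max_epi16(v, lo), hi);
}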
Evan Cheng
93dcea2b5a Various SSE2 packed integer intrinsics: pmulhuw, pavgw, etc.
llvm-svn: 27645
2006-04-13 05:24:54 +00:00
Evan Cheng
25fcfb9f2d X86 SSE2 supports v8i16 multiplication
llvm-svn: 27644
2006-04-13 05:10:25 +00:00
Evan Cheng
d6cad69ef4 Update
llvm-svn: 27643
2006-04-13 05:09:45 +00:00
Evan Cheng
2f634fac6d padds{b|w}, paddus{b|w}, psubs{b|w}, psubus{b|w} intrinsics.
llvm-svn: 27639
2006-04-13 00:43:35 +00:00
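
A minimal sketch of the saturating-add behavior through the standard
emmintrin.h wrapper (illustrative, not from the commit):

#include <emmintrin.h>

/* paddusb: per-byte unsigned add that clamps at 255 instead of wrapping */
__m128i sat_add_bytes(__m128i a, __m128i b) {
    return _mm_adds_epu8(a, b);
}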
Evan Cheng
537bdb370c Fix a naming inconsistency.
llvm-svn: 27638
2006-04-13 00:00:23 +00:00
Evan Cheng
8768f25c80 SSE / SSE2 conversion intrinsics.
llvm-svn: 27637
2006-04-12 23:42:44 +00:00
Evan Cheng
2c2d734efd All "integer" logical ops (pand, por, pxor) are now promoted to v2i64.
Clean up and fix various logical ops issues.

llvm-svn: 27633
2006-04-12 21:21:57 +00:00
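
A sketch of why the promotion is safe (illustrative example, mine): the SSE2
bitwise instructions ignore element boundaries, so a single v2i64 node type
can stand in for every integer vector width.

#include <emmintrin.h>

/* pand yields the same bits whether the operands are "really" v16i8,
   v8i16, v4i32, or v2i64, so legalizing them all to one v2i64 AND
   loses nothing. */
__m128i mask(__m128i a, __m128i b) {
    return _mm_and_si128(a, b);
}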
Evan Cheng
1477d2d08f Promote vector AND, OR, and XOR
llvm-svn: 27632
2006-04-12 21:20:24 +00:00
Reid Spencer
7f718db335 Make sure CVS versions of yacc and lex files get distributed.
llvm-svn: 27630
2006-04-12 20:57:05 +00:00
Reid Spencer
56aa7c79b7 Get rid of a signed/unsigned compare warning.
llvm-svn: 27625
2006-04-12 19:28:15 +00:00
Chris Lattner
e087b8e321 Add a new way to match vector constants, which makes it easier to bang bits of
different types.

Codegen spltw(0x7FFFFFFF) and spltw(0x80000000) without a constant pool load,
implementing PowerPC/vec_constants.ll:test1.  This compiles:

#include <altivec.h>

typedef float vf __attribute__ ((vector_size (16)));
typedef int vi __attribute__ ((vector_size (16)));
void test(vi *P1, vi *P2, vf *P3) {
  *P1 &= (vi){0x80000000,0x80000000,0x80000000,0x80000000};
  *P2 &= (vi){0x7FFFFFFF,0x7FFFFFFF,0x7FFFFFFF,0x7FFFFFFF};
  *P3 = vec_abs((vector float)*P3);
}

to:

_test:
        mfspr r2, 256
        oris r6, r2, 49152
        mtspr 256, r6
        vspltisw v0, -1
        vslw v0, v0, v0
        lvx v1, 0, r3
        vand v1, v1, v0
        stvx v1, 0, r3
        lvx v1, 0, r4
        vandc v1, v1, v0
        stvx v1, 0, r4
        lvx v1, 0, r5
        vandc v0, v1, v0
        stvx v0, 0, r5
        mtspr 256, r2
        blr

instead of (with two constant pool entries):

_test:
        mfspr r2, 256
        oris r6, r2, 49152
        mtspr 256, r6
        li r6, lo16(LCPI1_0)
        lis r7, ha16(LCPI1_0)
        li r8, lo16(LCPI1_1)
        lis r9, ha16(LCPI1_1)
        lvx v0, r7, r6
        lvx v1, 0, r3
        vand v0, v1, v0
        stvx v0, 0, r3
        lvx v0, r9, r8
        lvx v1, 0, r4
        vand v1, v1, v0
        stvx v1, 0, r4
        lvx v1, 0, r5
        vand v0, v1, v0
        stvx v0, 0, r5
        mtspr 256, r2
        blr

GCC produces (with 2 cp entries):

_test:
        mfspr r0,256
        stw r0,-4(r1)
        oris r0,r0,0xc00c
        mtspr 256,r0
        lis r2,ha16(LC0)
        lis r9,ha16(LC1)
        la r2,lo16(LC0)(r2)
        lvx v0,0,r3
        lvx v1,0,r5
        la r9,lo16(LC1)(r9)
        lwz r12,-4(r1)
        lvx v12,0,r2
        lvx v13,0,r9
        vand v0,v0,v12
        stvx v0,0,r3
        vspltisw v0,-1
        vslw v12,v0,v0
        vandc v1,v1,v12
        stvx v1,0,r5
        lvx v0,0,r4
        vand v0,v0,v13
        stvx v0,0,r4
        mtspr 256,r12
        blr

llvm-svn: 27624
2006-04-12 19:07:14 +00:00
Chris Lattner
7900e6da3b Turn casts into getelementptr's when possible. This enables SROA to be more
aggressive in some cases where LLVMGCC 4 is inserting casts for no reason.

This implements InstCombine/cast.ll:test27/28.

llvm-svn: 27620
2006-04-12 18:09:35 +00:00
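
A hypothetical C reduction of the pattern (names and shape are mine, not the
actual test case): the pointer cast below is equivalent to a getelementptr to
the first field, and once instcombine rewrites it that way, SROA can split
the aggregate.

struct pair { int a; int b; };

int first(void) {
    struct pair p = { 1, 2 };
    int *q = (int *)&p;  /* cast that can be rewritten as a GEP to p.a */
    return *q;           /* after the rewrite, only p.a is ever used */
}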
Reid Spencer
23eae83205 Don't emit useless warning messages.
llvm-svn: 27617
2006-04-12 17:56:16 +00:00
Chris Lattner
ce6e988fa6 Rename get_VSPLI_elt -> get_VSPLTI_elt
Canonicalize BUILD_VECTOR's that match VSPLTI's into a single type for each
form, eliminating a bunch of Pat patterns in the .td file and allowing us to
CSE stuff more aggressively.  This implements
PowerPC/buildvec_canonicalize.ll:VSPLTI

llvm-svn: 27614
2006-04-12 17:37:20 +00:00
Evan Cheng
66fb7beed7 Promote v4i32, v8i16, v16i8 load to v2i64 load.
llvm-svn: 27612
2006-04-12 17:12:36 +00:00
Chris Lattner
602d86f7af Ensure that zero vectors are always v4i32, which forces them to CSE with
each other.  This implements CodeGen/PowerPC/vxor-canonicalize.ll

llvm-svn: 27609
2006-04-12 16:53:28 +00:00
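
A small AltiVec illustration of the CSE win (example mine, assuming
-maltivec): both zeros below are the same 128-bit value, so keeping every
zero vector as v4i32 lets one vxor materialize both.

#include <altivec.h>

void zero_both(vector float *a, vector signed int *b) {
    /* identical bit patterns at two different vector types */
    *a = (vector float){0.0f, 0.0f, 0.0f, 0.0f};
    *b = (vector signed int){0, 0, 0, 0};
}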
Evan Cheng
ce4e1c0068 Vector type promotion for ISD::LOAD and ISD::SELECT
llvm-svn: 27606
2006-04-12 16:33:18 +00:00
Chris Lattner
70d68fcfcb Implement support for the formal_arguments node. To get this, targets should custom legalize it and remove their XXXTargetLowering::LowerArguments overload.
llvm-svn: 27604
2006-04-12 16:20:43 +00:00
Evan Cheng
fbdf6ece4a Various SSE2 conversion intrinsics
llvm-svn: 27603
2006-04-12 05:20:24 +00:00
Chris Lattner
3df4b4ca55 Don't memoize vloads in the load map! Don't memoize them anywhere here; let
getNode do it.  This fixes CodeGen/Generic/2006-04-11-vecload.ll

llvm-svn: 27602
2006-04-12 03:25:41 +00:00
Evan Cheng
68b885f50c Added __builtin_ia32_storelv4si, __builtin_ia32_movqv4si,
__builtin_ia32_loadlv4si, __builtin_ia32_loaddqu, __builtin_ia32_storedqu.

llvm-svn: 27599
2006-04-11 22:28:25 +00:00
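
A hedged sketch of how two of the new builtins surface through the public
header (wrapper mapping assumed, example mine):

#include <emmintrin.h>

/* _mm_loadu_si128 / _mm_storeu_si128 correspond to the unaligned
   __builtin_ia32_loaddqu / __builtin_ia32_storedqu builtins */
void copy16(const void *src, void *dst) {
    __m128i v = _mm_loadu_si128((const __m128i *)src);
    _mm_storeu_si128((__m128i *)dst, v);
}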
Nate Begeman
ccd6ea1913 Fix SingleSource/UnitTests/Vector/sumarray-dbl
llvm-svn: 27594
2006-04-11 19:44:43 +00:00
Nate Begeman
786d44f822 Fix PR727, correctly handling large stack alignments on PPC
llvm-svn: 27593
2006-04-11 19:29:21 +00:00
Chris Lattner
0e63e916b3 We have a shuffle instruction; add an example.
llvm-svn: 27592
2006-04-11 18:47:03 +00:00
Evan Cheng
c0848b1eaf GCC lowers SSE prefetch into the generic prefetch intrinsic. Need to add
support later.

llvm-svn: 27591
2006-04-11 18:04:57 +00:00
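
For reference, the user-level form of the prefetch in question (example
mine):

#include <xmmintrin.h>

/* GCC lowers this SSE prefetch to the generic llvm.prefetch intrinsic
   rather than to an SSE-specific node. */
void touch(const char *p) {
    _mm_prefetch(p, _MM_HINT_T0);
}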
Evan Cheng
7a9fca11b5 Misc. intrinsics.
llvm-svn: 27590
2006-04-11 17:35:57 +00:00
Jim Laskey
1e0cbe4158 Suppress debug label when not debug.
llvm-svn: 27588
2006-04-11 08:11:53 +00:00
Evan Cheng
798acd4094 movnt* and maskmovdqu intrinsics
llvm-svn: 27587
2006-04-11 06:57:30 +00:00
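
One of the newly supported operations through its emmintrin.h wrapper
(illustrative example, mine):

#include <emmintrin.h>

/* movntdq: non-temporal store that writes around the cache */
void stream16(__m128i *dst, __m128i v) {
    _mm_stream_si128(dst, v);
}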
Evan Cheng
be9f313cd8 Only get Tmp2 for cases where the number of operands is > 1. Fixed handling of void returns.
llvm-svn: 27586
2006-04-11 06:33:39 +00:00
Chris Lattner
651f0655d0 add some todos
llvm-svn: 27580
2006-04-11 02:00:08 +00:00
Chris Lattner
e12152a64b Vector function results go into V2 according to GCC. The Darwin ABI doc
doesn't say where they go :-/

llvm-svn: 27579
2006-04-11 01:38:39 +00:00
Chris Lattner
d02e72ffc3 Add basic support for legalizing returns of vectors
llvm-svn: 27578
2006-04-11 01:31:51 +00:00
Chris Lattner
5d1acb831a Move some return-handling code from lowerarguments to the ISD::RET handling stuff.
No functionality change.

llvm-svn: 27577
2006-04-11 01:21:43 +00:00
Evan Cheng
da283be867 Added support for _mm_move_ss and _mm_move_sd.
llvm-svn: 27575
2006-04-11 00:19:04 +00:00
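
The semantics being added, sketched through the public wrapper (example
mine):

#include <xmmintrin.h>

/* movss: lane 0 of the result comes from b, lanes 1-3 from a */
__m128 replace_low(__m128 a, __m128 b) {
    return _mm_move_ss(a, b);
}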
Jim Laskey
54dc261ef6 Use existing information.
llvm-svn: 27574
2006-04-10 23:09:19 +00:00
Chris Lattner
ec4fbd3b41 Implement vec_shuffle.ll:test3
llvm-svn: 27573
2006-04-10 23:06:36 +00:00
Chris Lattner
42be18f65f Implement InstCombine/vec_shuffle.ll:test[12]
llvm-svn: 27571
2006-04-10 22:45:52 +00:00
Evan Cheng
b7ccf3b282 Remove some bogus patterns; clean up.
llvm-svn: 27569
2006-04-10 22:35:16 +00:00
Chris Lattner
2879e2222e add a note
llvm-svn: 27567
2006-04-10 21:51:03 +00:00
Evan Cheng
34dd1c80dd Remove an entry that is now done.
llvm-svn: 27565
2006-04-10 21:42:57 +00:00
Evan Cheng
983d251e3d Added some missing shuffle patterns.
llvm-svn: 27564
2006-04-10 21:42:19 +00:00
Evan Cheng
255a990223 Correct an entry
llvm-svn: 27563
2006-04-10 21:41:39 +00:00
Evan Cheng
352e751a9e movups / movupd
llvm-svn: 27562
2006-04-10 21:11:06 +00:00
Andrew Lenharth
b3f434b83d Add a simple pass to make sure that all (non-library) calls to malloc and free
are visible to analysis as intrinsics.  That is, make sure someone doesn't pass
free around by address in some struct (as happens in, say, 176.gcc).

This doesn't get rid of any indirect calls; it just ensures that calls to free
and malloc are always direct.

llvm-svn: 27560
2006-04-10 19:26:09 +00:00
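
A hypothetical C sketch of the pattern the commit describes (struct and names
are mine): free's address escapes into a struct, and the resulting calls are
indirect even though they always resolve to malloc and free.

#include <stdlib.h>

struct allocator {
    void *(*alloc)(size_t);
    void (*release)(void *);
};

static struct allocator default_alloc = { malloc, free };

int demo(void) {
    void *p = default_alloc.alloc(64);  /* indirect call that is really malloc */
    default_alloc.release(p);           /* indirect call that is really free */
    return 0;
}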
Evan Cheng
d333e6853c Missing break
llvm-svn: 27559
2006-04-10 18:54:36 +00:00
Evan Cheng
2b6c899eb2 Conditional move of vector types.
llvm-svn: 27556
2006-04-10 07:23:14 +00:00
Evan Cheng
5326565791 New entries
llvm-svn: 27555
2006-04-10 07:22:03 +00:00
Evan Cheng
4f357911ad Use movaps to do VR128 reg-to-reg copies for now. It's shorter and available with SSE1.
llvm-svn: 27554
2006-04-10 07:21:31 +00:00