mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-21 03:53:04 +02:00
Commit Graph

23818 Commits

Author SHA1 Message Date
Chris Lattner
4e99e6dfdd Ask legalize to promote all vector shuffles to be v16i8 instead of having to
handle all 4 PPC vector types.   This simplifies the matching code and allows
us to eliminate a bunch of patterns.  This also adds cases we were missing,
such as CodeGen/PowerPC/vec_splat.ll:splat_h.

llvm-svn: 27400
2006-04-04 17:25:31 +00:00
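
To make the "promote every shuffle to v16i8" idea concrete: a shuffle over v4i32 elements widens to a byte-level shuffle, with each 4-byte element expanding to four consecutive byte indices. A minimal sketch of that widening (illustration only, not the legalizer code; byte order within an element is simplified):

#include <array>
#include <cstdio>

// Widen a 4-element (v4i32) shuffle mask into the equivalent 16-element
// (v16i8) byte mask: each i32 element covers four consecutive bytes.
std::array<int, 16> widenShuffleMask(const std::array<int, 4> &Mask) {
  std::array<int, 16> ByteMask{};
  for (int i = 0; i < 4; ++i)
    for (int b = 0; b < 4; ++b)
      ByteMask[i * 4 + b] = Mask[i] * 4 + b;
  return ByteMask;
}

int main() {
  // The v4i32 mask <1,0,3,2> becomes <4..7, 0..3, 12..15, 8..11>.
  for (int b : widenShuffleMask({1, 0, 3, 2}))
    printf("%d ", b);
  printf("\n");
}

With every shuffle in a single type, the PPC matching code only has to recognize v16i8 patterns.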
Chris Lattner
136a27d0d0 * Add support for SCALAR_TO_VECTOR operations where the input needs to be
promoted/expanded (e.g. SCALAR_TO_VECTOR from i8/i16 on PPC).
* Add support for targets to request that VECTOR_SHUFFLE nodes be promoted
  to a canonical type, for example, we only want v16i8 shuffles on PPC.
* Move isShuffleLegal out of TLI into Legalize.
* Teach isShuffleLegal to allow shuffles that need to be promoted.

llvm-svn: 27399
2006-04-04 17:23:26 +00:00
Chris Lattner
1dc3c03ee7 Move isShuffleLegal from TLI to Legalize.
llvm-svn: 27398
2006-04-04 17:21:22 +00:00
Chris Lattner
0e93cb9bc0 new testcase
llvm-svn: 27397
2006-04-04 17:20:45 +00:00
Chris Lattner
020ff34600 Signed shr by a constant is not the same as sdiv by 2^k
llvm-svn: 27395
2006-04-04 06:11:42 +00:00
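
The pitfall behind this fix is rounding direction: an arithmetic right shift rounds toward negative infinity, while sdiv rounds toward zero, so the two disagree for negative operands. A quick demonstration (shifting a negative signed value is implementation-defined in C++, though it is an arithmetic shift on common targets):

#include <cstdio>

int main() {
  int x = -5;
  printf("-5 >> 1 = %d\n", x >> 1); // arithmetic shift rounds down: -3
  printf("-5 / 2  = %d\n", x / 2);  // sdiv rounds toward zero: -2
}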
Evan Cheng
f07104b717 Fix cmpps / cmppd encoding bug
llvm-svn: 27393
2006-04-04 03:04:07 +00:00
Chris Lattner
a1fc0eb694 Fix the types for these intrinsics.
llvm-svn: 27392
2006-04-04 01:40:06 +00:00
Chris Lattner
85dc06c29e Constant fold bitconvert(undef)
llvm-svn: 27391
2006-04-04 01:02:22 +00:00
Chris Lattner
9925c6018f Allow targets to have fine-grained control over which types various ops get
promoted to, if they desire.

llvm-svn: 27389
2006-04-04 00:25:10 +00:00
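
In later LLVM this control surfaces as a pair of TargetLowering calls; assuming this commit introduced that mechanism, a backend's setup plausibly reads like the following sketch (method names taken from later LLVM, so treat the exact spelling at this revision as an assumption):

// Inside a PPCTargetLowering-style constructor (illustration only):
// mark the op as Promote, then pin the type it promotes to.
//
//   setOperationAction(ISD::VECTOR_SHUFFLE, MVT::v4i32, Promote);
//   AddPromotedToType(ISD::VECTOR_SHUFFLE, MVT::v4i32, MVT::v16i8);
//
// Legalize then rewrites a v4i32 shuffle as a v16i8 shuffle plus bitcasts.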
Evan Cheng
2be8582ddb Compact some intrinsic definitions.
llvm-svn: 27388
2006-04-04 00:10:53 +00:00
Chris Lattner
2bf9c8cc18 Plug in the byte and short splats
llvm-svn: 27387
2006-04-04 00:05:13 +00:00
Chris Lattner
0128e4d335 Revert accidentally committed hunks.
llvm-svn: 27386
2006-04-03 23:58:04 +00:00
Chris Lattner
57b9e01b3e Make sure to mark unsupported SCALAR_TO_VECTOR operations as expand.
llvm-svn: 27385
2006-04-03 23:55:43 +00:00
Evan Cheng
bbe72d932c Some SSE1 intrinsics: min, max, sqrt, etc.
llvm-svn: 27384
2006-04-03 23:49:17 +00:00
Chris Lattner
ab96554280 revert previous patch
llvm-svn: 27383
2006-04-03 23:14:49 +00:00
Evan Cheng
7ff32cd571 Use movlpd to store the lower f64 extracted from a v2f64.
Use movhpd to store the upper f64 extracted from a v2f64.

llvm-svn: 27382
2006-04-03 22:30:54 +00:00
Chris Lattner
eb9684f6a4 Force use of a frame pointer if anything on the stack is aligned more
strictly than the OS keeps the stack aligned.

llvm-svn: 27381
2006-04-03 22:03:29 +00:00
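
The trigger is a local whose alignment exceeds the stack alignment the ABI guarantees at function entry (commonly 16 bytes on the Darwin/PPC ABI of the era; treat the exact figure as an assumption). For example:

// A 32-byte-aligned local can exceed the ABI's guaranteed stack alignment,
// so the compiler must realign the stack dynamically and keep a frame
// pointer to locate its own slots afterwards.
void consume(char *p);

void overAligned() {
  char buf[64] __attribute__((aligned(32))); // GCC-style alignment attribute
  consume(buf);
}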
Chris Lattner
4601b86ad0 The stack alignment is now computed dynamically, just verify it is correct.
llvm-svn: 27380
2006-04-03 21:39:57 +00:00
Chris Lattner
264e4a438a Remove unused method
llvm-svn: 27379
2006-04-03 21:39:03 +00:00
Chris Lattner
b46d7d9fc4 Keep track of max stack alignment as objects are added. Remove an obsolete method.
llvm-svn: 27378
2006-04-03 21:38:39 +00:00
Evan Cheng
169240beb7 - More efficient extract_vector_elt with shuffle and movss, movsd, movd, etc.
- Some bug fixes and naming inconsistency fixes.

llvm-svn: 27377
2006-04-03 20:53:28 +00:00
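
The shape of the faster extract: shuffle the wanted lane down to element 0, then a plain scalar move (movss, movsd, movd) reads it out. In SSE intrinsics terms, an illustration of the idea (not the committed code):

#include <xmmintrin.h>

// Extract element 2 of a v4f32: one shufps brings lane 2 down to lane 0,
// and the result is then a simple movss-style scalar read.
float extractElt2(__m128 v) {
  __m128 t = _mm_shuffle_ps(v, v, _MM_SHUFFLE(2, 2, 2, 2));
  return _mm_cvtss_f32(t);
}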
Chris Lattner
758878175b Align vectors to the size in bytes, not bits.
llvm-svn: 27376
2006-04-03 19:28:50 +00:00
Chris Lattner
d13dd8ef5c Add a missing check, this fixes UnitTests/Vector/sumarray.c
llvm-svn: 27375
2006-04-03 17:29:28 +00:00
Chris Lattner
d9902c3de0 Add a missing check, which broke a bunch of vector tests.
llvm-svn: 27374
2006-04-03 17:21:50 +00:00
Chris Lattner
29156e5094 shrinkify intrinsics more by using some local classes
llvm-svn: 27373
2006-04-03 17:20:06 +00:00
Chris Lattner
c65511b05c Add the full set of min/max instructions
llvm-svn: 27372
2006-04-03 15:58:28 +00:00
Chris Lattner
c0cba5e03f Add some classes to make it easier to define intrinsics. Add min/max intrinsics.
llvm-svn: 27371
2006-04-03 15:43:07 +00:00
Andrew Lenharth
4760eaae91 support x * (c1 + c2) where c1 and c2 are pow2s. special case for c2 == 4
llvm-svn: 27370
2006-04-03 04:19:17 +00:00
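
Spelled out: when c1 and c2 are powers of two, x * (c1 + c2) decomposes into two shifts and an add. A scalar sketch:

#include <cstdio>

// x * 20 == x * (16 + 4) == (x << 4) + (x << 2): two shifts and an add
// in place of a multiply.
long mulBy20(long x) { return (x << 4) + (x << 2); }

int main() { printf("%ld\n", mulBy20(7)); } // prints 140

The c2 == 4 special case matters because Alpha can fold a times-4 into a scaled add (see the s4addq note below).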
Andrew Lenharth
7d56275f4f test powers of 2
llvm-svn: 27369
2006-04-03 04:14:39 +00:00
Andrew Lenharth
af4a638eab mul by const conversion sequences. more coming soon
llvm-svn: 27368
2006-04-03 03:18:59 +00:00
Andrew Lenharth
b133e47444 back this out
llvm-svn: 27367
2006-04-03 03:16:50 +00:00
Andrew Lenharth
a6e52528c0 test some more mul by constant removal
llvm-svn: 27366
2006-04-03 03:16:09 +00:00
Andrew Lenharth
6aa31b3b67 Make sure mul by constant 5 is turned into an s4addq
llvm-svn: 27365
2006-04-02 21:47:07 +00:00
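
Alpha's s4addq computes 4*a + b in a single instruction, so x * 5 == 4*x + x is one s4addq of x with itself. A scalar model of the same arithmetic:

// s4addq a, b => 4*a + b; mul-by-5 is the a == b case.
long s4addq(long a, long b) { return (a << 2) + b; }
long mulBy5(long x) { return s4addq(x, x); } // 4*x + x == 5*x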
Andrew Lenharth
91c6f28ad6 This should be a win on every arch
llvm-svn: 27364
2006-04-02 21:42:45 +00:00
Andrew Lenharth
4c950de4c1 This makes McCat/12-IOtest go 8x faster or so
llvm-svn: 27363
2006-04-02 21:08:39 +00:00
Andrew Lenharth
067eaad4d9 This will be needed soon
llvm-svn: 27362
2006-04-02 20:13:57 +00:00
Reid Spencer
8cb87336b8 For PR722:
Change the check for llvm-gcc from using LLVMGCCDIR to LLVMGCC. This checks
for the actual tool rather than the directory in which the tool resides. In
the case of this bug, it is possible that the directory exists but that the
tools in that directory do not. This fix should prevent the makefile from
erroneously proceeding when the actual tools are not available.

llvm-svn: 27361
2006-04-02 14:34:26 +00:00
Chris Lattner
fa82c33ae7 add a note
llvm-svn: 27360
2006-04-02 07:20:00 +00:00
Chris Lattner
8ba4723c74 Inform the dag combiner that the predicate compares only return a low bit.
llvm-svn: 27359
2006-04-02 06:26:07 +00:00
Chris Lattner
0b4f7786d2 relax assertion
llvm-svn: 27358
2006-04-02 06:19:46 +00:00
Chris Lattner
6433904644 Allow targets to compute masked bits for intrinsics.
llvm-svn: 27357
2006-04-02 06:15:09 +00:00
Chris Lattner
dbdc830c83 Add a little dag combine to compile this:
int %AreSecondAndThirdElementsBothNegative(<4 x float>* %in) {
entry:
        %tmp1 = load <4 x float>* %in           ; <<4 x float>> [#uses=1]
        %tmp = tail call int %llvm.ppc.altivec.vcmpgefp.p( int 1, <4 x float> < float 0x7FF8000000000000, float 0.000000e+00, float 0.000000e+00, float 0x7FF8000000000000 >, <4 x float> %tmp1 )           ; <int> [#uses=1]
        %tmp = seteq int %tmp, 0                ; <bool> [#uses=1]
        %tmp3 = cast bool %tmp to int           ; <int> [#uses=1]
        ret int %tmp3
}

into this:

_AreSecondAndThirdElementsBothNegative:
        mfspr r2, 256
        oris r4, r2, 49152
        mtspr 256, r4
        li r4, lo16(LCPI1_0)
        lis r5, ha16(LCPI1_0)
        lvx v0, 0, r3
        lvx v1, r5, r4
        vcmpgefp. v0, v1, v0
        mfcr r3, 2
        rlwinm r3, r3, 27, 31, 31
        mtspr 256, r2
        blr

instead of this:

_AreSecondAndThirdElementsBothNegative:
        mfspr r2, 256
        oris r4, r2, 49152
        mtspr 256, r4
        li r4, lo16(LCPI1_0)
        lis r5, ha16(LCPI1_0)
        lvx v0, 0, r3
        lvx v1, r5, r4
        vcmpgefp. v0, v1, v0
        mfcr r3, 2
        rlwinm r3, r3, 27, 31, 31
        xori r3, r3, 1
        cntlzw r3, r3
        srwi r3, r3, 5
        mtspr 256, r2
        blr

llvm-svn: 27356
2006-04-02 06:11:11 +00:00
Chris Lattner
42a1e621f1 vector casts of casts are eliminable. Transform this:
%tmp = cast <4 x uint> %tmp to <4 x int>                ; <<4 x int>> [#uses=1]
        %tmp = cast <4 x int> %tmp to <4 x float>               ; <<4 x float>> [#uses=1]

into:

        %tmp = cast <4 x uint> %tmp to <4 x float>              ; <<4 x float>> [#uses=1]

llvm-svn: 27355
2006-04-02 05:43:13 +00:00
Chris Lattner
d9ca6b5bfc vector casts never reinterpret bits
llvm-svn: 27354
2006-04-02 05:40:28 +00:00
Chris Lattner
3c994295fe Allow transforming this:
%tmp = cast <4 x uint>* %testData to <4 x int>*         ; <<4 x int>*> [#uses=1]
        %tmp = load <4 x int>* %tmp             ; <<4 x int>> [#uses=1]

to this:

        %tmp = load <4 x uint>* %testData               ; <<4 x uint>> [#uses=1]
        %tmp = cast <4 x uint> %tmp to <4 x int>                ; <<4 x int>> [#uses=1]

llvm-svn: 27353
2006-04-02 05:37:12 +00:00
Chris Lattner
cb26b2dfe8 Turn altivec lvx/stvx intrinsics into loads and stores. This allows the
elimination of one load from this:

int AreSecondAndThirdElementsBothNegative( vector float *in ) {
#define QNaN 0x7FC00000
const vector unsigned int testData = (vector unsigned int)( QNaN, 0, 0, QNaN );
vector float test = vec_ld( 0, (float*) &testData );
return ! vec_any_ge( test, *in );
}

Now generating:

_AreSecondAndThirdElementsBothNegative:
        mfspr r2, 256
        oris r4, r2, 49152
        mtspr 256, r4
        li r4, lo16(LCPI1_0)
        lis r5, ha16(LCPI1_0)
        addi r6, r1, -16
        lvx v0, r5, r4
        stvx v0, 0, r6
        lvx v1, 0, r3
        vcmpgefp. v0, v0, v1
        mfcr r3, 2
        rlwinm r3, r3, 27, 31, 31
        xori r3, r3, 1
        cntlzw r3, r3
        srwi r3, r3, 5
        mtspr 256, r2
        blr

llvm-svn: 27352
2006-04-02 05:30:25 +00:00
Chris Lattner
8967316b8c Remove done item
llvm-svn: 27351
2006-04-02 05:28:54 +00:00
Jeff Cohen
66d2c4d350 Fix tablegen related dependencies in Visual Studio.
llvm-svn: 27350
2006-04-02 05:20:53 +00:00
Chris Lattner
c76f9e8691 Implement promotion for EXTRACT_VECTOR_ELT, allowing v16i8 multiplies to work with PowerPC.
llvm-svn: 27349
2006-04-02 05:06:04 +00:00
Chris Lattner
9c24ec6de5 add a note
llvm-svn: 27348
2006-04-02 03:59:11 +00:00