mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-20 03:23:01 +02:00
Commit Graph

116 Commits

Author SHA1 Message Date
Chris Lattner
26f2bd4d4b Implement 64-bit undef, sub, shl/shr, srem/urem
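
For illustration (not from the original commit), C source of this shape exercises the newly selectable 64-bit operations:

/* Hypothetical example: 64-bit sub, shl/shr, and urem are now
   selected directly for PPC64. */
unsigned long long ops(unsigned long long a, unsigned long long b) {
  return ((a - b) >> 3) + ((a << 2) % (b | 1));
}
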
llvm-svn: 28929
2006-06-27 18:18:41 +00:00
Chris Lattner
494f476ca7 Implement a bunch of 64-bit cleanliness work. With this, treeadd builds
(though it does not yet run correctly).

llvm-svn: 28921
2006-06-27 00:04:13 +00:00
Chris Lattner
5d0654b832 Remove two more definitions
llvm-svn: 28918
2006-06-26 22:47:37 +00:00
Chris Lattner
209c2db6b9 remove two unused instructions.
llvm-svn: 28917
2006-06-26 22:44:13 +00:00
Chris Lattner
10d22c274e Make these predicates correct in 64-bit mode too.
llvm-svn: 28890
2006-06-20 23:21:20 +00:00
Chris Lattner
75e6449a0f Rename OR4 -> OR. Move some PPC64-specific stuff to the 64-bit file
llvm-svn: 28889
2006-06-20 23:18:58 +00:00
Chris Lattner
2e1d3158f1 remove unused flag
llvm-svn: 28888
2006-06-20 23:15:07 +00:00
Chris Lattner
19df1fcd72 remove some unused patterns
llvm-svn: 28886
2006-06-20 23:11:36 +00:00
Chris Lattner
eede1e2c00 Add some 64-bit logical ops.
Split imm16Shifted into a sext/zext form for 64-bit support.
Add some patterns for immediate formation.  For example, we now compile this:

static unsigned long long Y;
void test3() {
  Y = 0xF0F00F00;
}

into:

_test3:
        li r2, 3840
        lis r3, ha16(_Y)
        xoris r2, r2, 61680
        std r2, lo16(_Y)(r3)
        blr

GCC produces:

_test3:
        li r0,0
        lis r2,ha16(_Y)
        ori r0,r0,61680
        sldi r0,r0,16
        ori r0,r0,3840
        std r0,lo16(_Y)(r2)
        blr
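
(For reference, a note not in the original message: li materializes 0x00000F00, and xoris flips the upper halfword with 61680 << 16 = 0xF0F00000, so r2 = 0x00000F00 ^ 0xF0F00000 = 0xF0F00F00; the constant takes two instructions versus GCC's four.)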

llvm-svn: 28883
2006-06-20 22:34:10 +00:00
Chris Lattner
4ff5f3d852 64-bit bugfix: 0xFFFF0000 cannot be formed with a single lis, since lis sign-extends its 16-bit immediate; in 64-bit mode, lis with 0xFFFF yields 0xFFFFFFFFFFFF0000 rather than 0x00000000FFFF0000.
llvm-svn: 28880
2006-06-20 21:39:30 +00:00
Chris Lattner
3ae4156dd7 Remove some now-unneeded casts from instruction patterns. With the casts
removed, tblgen produces output identical to what it produced with them in.

llvm-svn: 28867
2006-06-20 00:39:56 +00:00
Chris Lattner
163da7cdcb In 64-bit mode, addr mode operands use G8RC instead of GPRC.
llvm-svn: 28840
2006-06-16 21:29:03 +00:00
Chris Lattner
81845946ff fix some assumptions that pointers are always 32 bits wide. With this, we can
now compile:

static unsigned long X;
void test1() {
  X = 0;
}

into:

_test1:
        lis r2, ha16(_X)
        li r3, 0
        stw r3, lo16(_X)(r2)
        blr

Totally amazing :)

llvm-svn: 28839
2006-06-16 21:01:35 +00:00
Chris Lattner
cb294464e7 Split 64-bit instructions out into a separate .td file
llvm-svn: 28838
2006-06-16 20:22:01 +00:00
Chris Lattner
b231c3d11c Fix a problem exposed by the local allocator: CALL instructions were not marked
as using the incoming argument registers, so the local allocator could clobber them
between their definition and their use.  To fix this, we give the call instructions a
variable number of uses on the CALL MachineInstr itself, so the live-variables analysis
understands the live ranges of these register arguments.

llvm-svn: 28744
2006-06-10 01:14:28 +00:00
Chris Lattner
bfbee64ecf Add PowerPC intrinsics to support dcbz[l]
llvm-svn: 28696
2006-06-06 21:29:23 +00:00
Chris Lattner
2208c3214c Make PPC call lowering more aggressive, so the isel matching code becomes simple
enough to be autogenerated.

llvm-svn: 28354
2006-05-17 19:00:46 +00:00
Chris Lattner
03c70b7f27 Switch PPC over to a call-selection model where the lowering code creates
the copyto/fromregs instead of making the PPCISD::CALL selection code create
them.  This vastly simplifies the selection code, and moves the ABI handling
parts into one place.

llvm-svn: 28346
2006-05-17 06:01:33 +00:00
Nate Begeman
7ed816f900 JumpTable support! What this represents is working asm and JIT support on
x86 and PPC for 100% dense switch statements when relocations are non-PIC.
This support will be extended and enhanced in the coming days to support
PIC and less dense forms of jump tables.
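
For illustration (not from the commit), a switch of this shape is the 100% dense case that now becomes a jump table:

/* Cases 0..3 with no gaps: codegen can branch through a table of
   block addresses instead of emitting a compare-and-branch chain. */
int classify(int x) {
  switch (x) {
  case 0: return 10;
  case 1: return 42;
  case 2: return 7;
  case 3: return 99;
  default: return -1;
  }
}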

llvm-svn: 27947
2006-04-22 18:53:45 +00:00
Chris Lattner
f58f727be6 These are correctly encoded by the JIT. I checked :)
llvm-svn: 27810
2006-04-18 19:03:38 +00:00
Chris Lattner
44ea12c5f8 Implement an important entry from README_ALTIVEC:
If an AltiVec predicate compare is used immediately by a branch, don't
use a (serializing) MFCR instruction to read the CR6 register, which then
requires a compare to recover the condition.  Instead, just branch on CR6 directly. :)

For example, for:
void foo2(vector float *A, vector float *B) {
  if (!vec_any_eq(*A, *B))
    *B = (vector float){0,0,0,0};
}

We now generate:

_foo2:
        mfspr r2, 256
        oris r5, r2, 12288
        mtspr 256, r5
        lvx v2, 0, r4
        lvx v3, 0, r3
        vcmpeqfp. v2, v3, v2
        bne cr6, LBB1_2 ; UnifiedReturnBlock
LBB1_1: ; cond_true
        vxor v2, v2, v2
        stvx v2, 0, r4
        mtspr 256, r2
        blr
LBB1_2: ; UnifiedReturnBlock
        mtspr 256, r2
        blr

instead of:

_foo2:
        mfspr r2, 256
        oris r5, r2, 12288
        mtspr 256, r5
        lvx v2, 0, r4
        lvx v3, 0, r3
        vcmpeqfp. v2, v3, v2
        mfcr r3, 2
        rlwinm r3, r3, 27, 31, 31
        cmpwi cr0, r3, 0
        beq cr0, LBB1_2 ; UnifiedReturnBlock
LBB1_1: ; cond_true
        vxor v2, v2, v2
        stvx v2, 0, r4
        mtspr 256, r2
        blr
LBB1_2: ; UnifiedReturnBlock
        mtspr 256, r2
        blr

This implements CodeGen/PowerPC/vec_br_cmp.ll.

llvm-svn: 27804
2006-04-18 17:59:36 +00:00
Chris Lattner
2ffa288a23 Add VRRC select support
llvm-svn: 27543
2006-04-08 22:45:08 +00:00
Chris Lattner
e330741a6c Lower vector compares to VCMP nodes, just like we lower vector comparison
predicates to VCMPo nodes.

llvm-svn: 27285
2006-03-31 05:13:27 +00:00
Chris Lattner
ac98e20cc9 Use a normal lvx for scalar_to_vector instead of lve*x. They do exactly the
same thing, and we have a DAG node for the former.

llvm-svn: 27205
2006-03-28 01:43:22 +00:00
Chris Lattner
65a455b060 Codegen vector predicate compares.
llvm-svn: 27151
2006-03-26 10:06:40 +00:00
Chris Lattner
cb5f9269a9 Move all Altivec stuff out into a new PPCInstrAltivec.td file.
Add a bunch of patterns for different datatypes, e.g. bit_convert, undef and
zero vector support.

llvm-svn: 27117
2006-03-25 07:51:43 +00:00
Chris Lattner
57064915a6 Add some basic patterns for other datatypes
llvm-svn: 27116
2006-03-25 07:39:07 +00:00
Chris Lattner
2fa3a6c436 Add support for __builtin_altivec_vnmsubfp /vmaddfp
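
For reference, a small C snippet (mine, not from the commit) exercising both builtins; it assumes an AltiVec-enabled compile (e.g. -maltivec):

#include <altivec.h>

/* Assumed operand convention: vmaddfp(a, b, c) = a*b + c and
   vnmsubfp(a, b, c) = -(a*b - c). */
vector float fma_pair(vector float a, vector float b, vector float c) {
  vector float x = __builtin_altivec_vmaddfp(a, b, c);
  return __builtin_altivec_vnmsubfp(x, b, c);
}
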
llvm-svn: 27112
2006-03-25 07:05:55 +00:00
Chris Lattner
0899b16b2d Codegen things like:
<int -1, int -1, int -1, int -1>
and
<int 65537, int 65537, int 65537, int 65537>

Using things like:
  vspltisb v0, -1
and:
  vspltish v0, 1

instead of using constant pool loads.

This implements CodeGen/PowerPC/vec_splat.ll:splat_imm_i{32|16}.
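
(Why this works: splatting the byte -1 fills the register with 0xFF, which reads back as <-1,-1,-1,-1> in words; splatting the halfword 1 makes every 32-bit word 0x00010001, which is 65537.)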

llvm-svn: 27106
2006-03-25 06:12:06 +00:00
Chris Lattner
21abff3712 Fix a bad JIT encoding of VPERM. Why is VPERM D,A,B,C but vfmadd is D,A,C,B ??
llvm-svn: 27069
2006-03-24 18:24:43 +00:00
Chris Lattner
ba4966c16c add support for using vxor to build zero vectors. This implements
Regression/CodeGen/PowerPC/vec_zero.ll
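
(vxor v, v, v produces zero for any v, since x ^ x = 0, so no constant-pool load is needed.)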

llvm-svn: 27059
2006-03-24 07:48:08 +00:00
Chris Lattner
ace2d0d227 Gabor points out that we can't spell. :)
llvm-svn: 27049
2006-03-24 07:12:19 +00:00
Chris Lattner
974982c89c Add PPC vector bit-convert support
llvm-svn: 26995
2006-03-23 19:54:27 +00:00
Chris Lattner
f84f3bf95b When possible, custom lower 32-bit SINT_TO_FP to this:
_foo2:
        extsw r2, r3
        std r2, -8(r1)
        lfd f0, -8(r1)
        fcfid f0, f0
        frsp f1, f0
        blr

instead of this:

_foo2:
        lis r2, ha16(LCPI2_0)
        lis r4, 17200
        xoris r3, r3, 32768
        stw r3, -4(r1)
        stw r4, -8(r1)
        lfs f0, lo16(LCPI2_0)(r2)
        lfd f1, -8(r1)
        fsub f0, f1, f0
        frsp f1, f0
        blr

This speeds up Misc/pi from 2.44s->2.09s with LLC and from 3.01s->2.18s
with llcbeta (16.7% and 38.1% respectively).
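
(How the fast sequence works: extsw sign-extends the i32 to an i64 in a GPR, the std/lfd pair moves the bits to an FPR through memory, fcfid converts the 64-bit integer to double, and frsp rounds to single precision; this avoids the magic-constant xoris/fsub sequence entirely.)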

llvm-svn: 26943
2006-03-22 05:30:33 +00:00
Chris Lattner
2e606dc60f Fix the JIT encoding of the VAForm_1 instructions, including vmaddfp
llvm-svn: 26935
2006-03-22 01:44:36 +00:00
Chris Lattner
acb2506622 When codegen'ing vector MUL using VFMADD, *add* the 0, don't *mul* the 0.
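
(That is, a plain vector multiply must be synthesized as vmaddfp(a, b, 0) = a*b + 0; feeding the zero into a multiplicand position would yield 0 rather than the product.)
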
llvm-svn: 26913
2006-03-21 00:51:38 +00:00
Chris Lattner
978628896b Fix a couple of bugs in permute/splat generation, thanks to Nate for actually
figuring these out! :)

llvm-svn: 26904
2006-03-20 18:26:51 +00:00
Chris Lattner
fb0e160aa5 Fix the pattern for VADDUWM, add i32 splat
llvm-svn: 26901
2006-03-20 17:51:58 +00:00
Evan Cheng
57da1afbc8 Use tblgen'd VECTOR_SHUFFLE selection code.
llvm-svn: 26900
2006-03-20 08:14:16 +00:00
Chris Lattner
dc3605efdb Add support for generating vspltw instead of a vperm instruction with a
constant pool load.  This generates significantly nicer code for splats.

Once tblgen is fixed, we can remove the custom selection code.

llvm-svn: 26898
2006-03-20 06:51:10 +00:00
Chris Lattner
1cdeda1c5a Check in some intermediate code that adds a skeleton for matching vsplt*
instructions

llvm-svn: 26894
2006-03-20 06:15:45 +00:00
Chris Lattner
c230af9810 fix typo
llvm-svn: 26889
2006-03-20 05:05:55 +00:00
Chris Lattner
bea056ecf2 add vsplat instructions, fix sched description for vperm
llvm-svn: 26888
2006-03-20 04:47:33 +00:00
Chris Lattner
0e56cf0d94 Custom lower arbitrary VECTOR_SHUFFLE's to VPERM.
TODO: leave specific ones as VECTOR_SHUFFLE's and turn them into specialized
operations like vsplt*
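
For illustration (mine, not from the commit), an arbitrary shuffle of the kind that now lowers to vperm, written with <altivec.h>'s vec_perm:

#include <altivec.h>

/* Reversing the 16 bytes is an arbitrary permutation with no
   specialized AltiVec instruction, so it selects to vperm. */
vector unsigned char rev_bytes(vector unsigned char v) {
  const vector unsigned char mask =
      {15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0};
  return vec_perm(v, v, mask);
}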

llvm-svn: 26887
2006-03-20 01:53:53 +00:00
Chris Lattner
6f502da274 add the vperm instruction
llvm-svn: 26883
2006-03-20 01:00:56 +00:00
Chris Lattner
789570bafb Custom lower SCALAR_TO_VECTOR into lve*x.
llvm-svn: 26868
2006-03-19 06:55:52 +00:00
Chris Lattner
89bc332152 add support for vector undef
llvm-svn: 26863
2006-03-19 06:10:09 +00:00
Chris Lattner
a9b4a2ab99 minor fixes
llvm-svn: 26857
2006-03-19 05:43:01 +00:00
Chris Lattner
b46a4c28ad we don't use lmw/stmw; when we want them, they are easy enough to add.
llvm-svn: 26853
2006-03-19 04:33:37 +00:00
Nate Begeman
793c8136ae Fix subfic to match subc by default instead of sub, so that it is correctly
cost-modeled as producing a flag.  This fixes the test I just added for neg.

llvm-svn: 26835
2006-03-17 22:41:37 +00:00