Commit Graph

1489 Commits

Chris Lattner
6ec8bb9e8d Legalize FSQRT, FSIN, FCOS nodes, patch contributed by Morten Ofstad
llvm-svn: 21606
2005-04-28 21:44:33 +00:00
Chris Lattner
4678a790e6 Add FSQRT, FSIN, FCOS nodes, patch contributed by Morten Ofstad
llvm-svn: 21605
2005-04-28 21:44:03 +00:00
Andrew Lenharth
2a00530fa7 Implement Value* tracking for loads and stores in the selection DAG. This enables one to use alias analysis in the backends.
(TRUNC)Stores and (EXT|ZEXT|SEXT)Loads have an extra SDOperand which is a SrcValueSDNode containing the Value*.  Note that if the operation is introduced by the backend, it will still have the operand, but the Value* will be null.

llvm-svn: 21599
2005-04-27 20:10:01 +00:00
Chris Lattner
15bcc5273b Fold (X > -1) | (Y > -1) --> (X&Y > -1)
llvm-svn: 21552
2005-04-26 01:18:33 +00:00
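
A quick C check of the identity behind this fold (my sketch, assuming 32-bit two's-complement int; not code from the commit): X > -1 just tests that the sign bit is clear, and the sign bit of X & Y is the AND of the two sign bits.

#include <assert.h>
#include <limits.h>

/* (X > -1) is !sign(X); sign(X & Y) == sign(X) & sign(Y),
   so (X > -1) | (Y > -1) == !(sign(X) & sign(Y)) == ((X & Y) > -1). */
static int unfolded(int x, int y) { return (x > -1) | (y > -1); }
static int folded(int x, int y)   { return (x & y) > -1; }

int main(void) {
  int vals[] = { INT_MIN, -2, -1, 0, 1, INT_MAX };
  for (int i = 0; i < 6; i++)
    for (int j = 0; j < 6; j++)
      assert(unfolded(vals[i], vals[j]) == folded(vals[i], vals[j]));
  return 0;
}
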
Chris Lattner
d8ac4da793 Implement some more logical compares with constants, so that:
int foo1(int x, int y) {
  int t1 = x >= 0;
  int t2 = y >= 0;
  return t1 & t2;
}
int foo2(int x, int y) {
  int t1 = x == -1;
  int t2 = y == -1;
  return t1 & t2;
}

produces:

_foo1:
        or r2, r4, r3
        srwi r2, r2, 31
        xori r3, r2, 1
        blr
_foo2:
        and r2, r4, r3
        addic r2, r2, 1
        li r2, 0
        addze r3, r2
        blr

instead of:

_foo1:
        srwi r2, r4, 31
        xori r2, r2, 1
        srwi r3, r3, 31
        xori r3, r3, 1
        and r3, r2, r3
        blr
_foo2:
        addic r2, r4, 1
        li r2, 0
        addze r2, r2
        addic r3, r3, 1
        li r3, 0
        addze r3, r3
        and r3, r2, r3
        blr

llvm-svn: 21547
2005-04-25 21:20:28 +00:00
Chris Lattner
7931b75a81 Codegen x < 0 | y < 0 as (x|y) < 0. This allows us to compile this to:
_foo:
        or r2, r4, r3
        srwi r3, r2, 31
        blr

instead of:

_foo:
        srwi r2, r4, 31
        srwi r3, r3, 31
        or r3, r2, r3
        blr

llvm-svn: 21544
2005-04-25 21:03:25 +00:00
Misha Brukman
a9a1982a44 Convert tabs to spaces
llvm-svn: 21439
2005-04-22 04:01:18 +00:00
Misha Brukman
774e55c446 Remove trailing whitespace
llvm-svn: 21420
2005-04-21 22:36:52 +00:00
Chris Lattner
87fbc1c554 Improve and elimination. On PPC, for:
bool %test(int %X) {
        %Y = and int %X, 8
        %Z = setne int %Y, 0
        ret bool %Z
}

we now generate this:

        rlwinm r2, r3, 0, 28, 28
        srwi r3, r2, 3

instead of this:

        rlwinm r2, r3, 0, 28, 28
        srwi r2, r2, 3
        rlwinm r3, r2, 0, 31, 31

I'll leave it to Nate to get it down to one instruction. :)

llvm-svn: 21391
2005-04-21 06:28:15 +00:00
Chris Lattner
d0a2fda2c6 Fold (x & 8) != 0 and (x & 8) == 8 into (x & 8) >> 3.
This turns this PPC code:

        rlwinm r2, r3, 0, 28, 28
        cmpwi cr7, r2, 8
        mfcr r2
        rlwinm r3, r2, 31, 31, 31

into this:

        rlwinm r2, r3, 0, 28, 28
        srwi r2, r2, 3
        rlwinm r3, r2, 0, 31, 31

Next up, nuking the extra and.

llvm-svn: 21390
2005-04-21 06:12:41 +00:00
Chris Lattner
188ecaab1d Fold setcc of MVT::i1 operands into logical operations
llvm-svn: 21319
2005-04-18 04:48:12 +00:00
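
For one-bit operands every setcc degenerates to a logical op; a C sketch of the correspondences (an illustration, not the commit's code):

#include <assert.h>

int main(void) {
  for (unsigned a = 0; a <= 1; a++)
    for (unsigned b = 0; b <= 1; b++) {
      assert((a == b) == ((a ^ b) ^ 1));   /* eq  -> xnor   */
      assert((a != b) == (a ^ b));         /* ne  -> xor    */
      assert((a <  b) == ((a ^ 1) & b));   /* ult -> ~a & b */
      assert((a >  b) == (a & (b ^ 1)));   /* ugt -> a & ~b */
    }
  return 0;
}
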
Chris Lattner
72aca1b758 Another minor simplification: handle setcc (zero_extend x), c -> setcc(x, c')
llvm-svn: 21318
2005-04-18 04:30:45 +00:00
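
In C terms (a hypothetical illustration): comparing a zero-extended narrow value against a constant can be narrowed to the source type when the constant fits, and folds to a constant when it doesn't.

#include <assert.h>

int main(void) {
  unsigned char x = 0xAB;
  unsigned int wide = x;              /* zero_extend x */

  /* c fits in the narrow type: setcc (zext x), c -> setcc x, c' */
  assert((wide == 0xABu) == (x == 0xAB));

  /* c is out of the narrow range: the setcc is a constant */
  assert((wide == 0x1FFu) == 0);      /* always false */
  assert((wide <  0x1FFu) == 1);      /* always true  */
  return 0;
}
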
Chris Lattner
e6117e5d4f Another simple xform
llvm-svn: 21317
2005-04-18 04:11:19 +00:00
Chris Lattner
f6f5b23a00 Fold:
// (X != 0) | (Y != 0) -> (X|Y != 0)
// (X == 0) & (Y == 0) -> (X|Y == 0)

Compiling this:

int %bar(int %a, int %b) {
        entry:
        %tmp.1 = setne int %a, 0
        %tmp.2 = setne int %b, 0
        %tmp.3 = or bool %tmp.1, %tmp.2
        %retval = cast bool %tmp.3 to int
        ret int %retval
        }

to this:

_bar:
        or r2, r3, r4
        addic r3, r2, -1
        subfe r3, r3, r2
        blr

instead of:

_bar:
        addic r2, r3, -1
        subfe r2, r2, r3
        addic r3, r4, -1
        subfe r3, r3, r4
        or r3, r2, r3
        blr

llvm-svn: 21316
2005-04-18 03:59:53 +00:00
Chris Lattner
a32c50520c Make the AND elimination operation recursive and significantly more powerful,
eliminating an and for Nate's testcase:

int %bar(int %a, int %b) {
        entry:
        %tmp.1 = setne int %a, 0
        %tmp.2 = setne int %b, 0
        %tmp.3 = or bool %tmp.1, %tmp.2
        %retval = cast bool %tmp.3 to int
        ret int %retval
        }

generating:

_bar:
        addic r2, r3, -1
        subfe r2, r2, r3
        addic r3, r4, -1
        subfe r3, r3, r4
        or r3, r2, r3
        blr

instead of:

_bar:
        addic r2, r3, -1
        subfe r2, r2, r3
        addic r3, r4, -1
        subfe r3, r3, r4
        or r2, r2, r3
        rlwinm r3, r2, 0, 31, 31
        blr

llvm-svn: 21315
2005-04-18 03:48:41 +00:00
Nate Begeman
ce63e383b8 Add a couple of missing transforms in getSetCC that were triggering assertions
in the PPC Pattern ISel.

llvm-svn: 21297
2005-04-14 08:56:52 +00:00
Nate Begeman
20b3399465 Disable the broken fold of shift + [sz]ext for now
Move the transform for select (a < 0) ? b : 0 into the dag from ppc isel
Enable the dag to fold and (setcc, 1) -> setcc for targets where setcc
  always produces zero or one.

llvm-svn: 21291
2005-04-13 21:23:31 +00:00
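
The select transform mentioned above, sketched in C (assuming arithmetic right shift of signed int, as PPC provides; my illustration of the idea, not the patch):

#include <assert.h>

/* (a < 0) ? b : 0  ->  (a >> 31) & b: the shift smears the sign bit
   into an all-ones or all-zeros mask. */
static int select_neg(int a, int b) { return (a >> 31) & b; }

int main(void) {
  assert(select_neg(-5, 42) == 42);
  assert(select_neg( 5, 42) == 0);
  assert(select_neg( 0, 42) == 0);
  return 0;
}
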
Chris Lattner
89f7e115a4 Fix an infinite loop
llvm-svn: 21289
2005-04-13 20:06:29 +00:00
Chris Lattner
475fe85ddf Fix some serious miscompiles on ia64, alpha, and ppc
llvm-svn: 21288
2005-04-13 19:53:40 +00:00
Chris Lattner
03d675414e Avoid work when possible; perhaps this fixes the problem Nate and Andrew are seeing
with != 0 comparisons vanishing.

llvm-svn: 21287
2005-04-13 19:41:05 +00:00
Chris Lattner
9540cf8c7e Implement expansion of unsigned i64 -> FP.
Note that this probably only works for little-endian targets, but it is enough
to get siod working :)

llvm-svn: 21280
2005-04-13 05:09:42 +00:00
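
One plausible shape of the expansion, as a C sketch (an assumption; the actual legalizer sequence may differ): split the u64 into 32-bit halves, convert each, and recombine as hi * 2^32 + lo.

#include <assert.h>
#include <stdint.h>

static double u64_to_f64(uint64_t x) {
  double hi = (double)(uint32_t)(x >> 32);   /* high half      */
  double lo = (double)(uint32_t)x;           /* low half       */
  return hi * 4294967296.0 + lo;             /* hi * 2^32 + lo */
}

int main(void) {
  assert(u64_to_f64(0) == 0.0);
  assert(u64_to_f64(1) == 1.0);
  assert(u64_to_f64(1ull << 32) == 4294967296.0);
  return 0;
}
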
Chris Lattner
1a6247ff51 Make expansion of uint->fp cast assert out instead of infinitely recurse.
llvm-svn: 21275
2005-04-13 03:42:14 +00:00
Chris Lattner
63450e87d9 Add back the optimization that Nate added for shl X, (zext_inreg y)
llvm-svn: 21273
2005-04-13 02:58:13 +00:00
Chris Lattner
759afe07d7 Oops, remove these too.
llvm-svn: 21272
2005-04-13 02:47:57 +00:00
Chris Lattner
4f188f949c Instead of making ZERO_EXTEND_INREG nodes, use the helper method in
SelectionDAG to do the job with AND.  Don't legalize Z_E_I anymore, as
it is gone.

llvm-svn: 21266
2005-04-13 02:38:47 +00:00
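
The helper's job is plain masking; in C (a sketch): zero-extending the low k bits in-register is an AND with (1 << k) - 1.

#include <assert.h>
#include <stdint.h>

int main(void) {
  uint32_t x = 0xDEADBEEF;
  assert((x & 0xFFu)   == 0xEFu);    /* zero_extend_inreg(x, i8)  */
  assert((x & 0xFFFFu) == 0xBEEFu);  /* zero_extend_inreg(x, i16) */
  return 0;
}
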
Chris Lattner
bce0030a88 Remove all foldings of ZERO_EXTEND_INREG, moving them to work for AND nodes
instead.  Overall, this increases the amount of folding we can do.

llvm-svn: 21265
2005-04-13 02:38:18 +00:00
Nate Begeman
38d8248a9e Fold shift x, [sz]ext(y) -> shift x, y
llvm-svn: 21262
2005-04-12 23:32:28 +00:00
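
The intuition, in C (a sketch; note the 2005-04-13 commit above disables this fold for the time being): extending the shift amount doesn't change its value, so the shift can consume y directly.

#include <assert.h>
#include <stdint.h>

int main(void) {
  uint32_t x = 0x80000001u;
  uint8_t  y = 5;
  uint32_t yext = y;                 /* [sz]ext(y) */
  assert((x << yext) == (x << y));   /* shift x, ext(y) == shift x, y */
  return 0;
}
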
Nate Begeman
a56527ea5f Fold shift by size larger than type size to undef
Make llvm undef values generate ISD::UNDEF nodes

llvm-svn: 21261
2005-04-12 23:12:17 +00:00
Chris Lattner
58f72ab722 Promote extload i1 -> extload i8
llvm-svn: 21258
2005-04-12 20:30:10 +00:00
Chris Lattner
cfc7093ca6 Remove some redundant checks, add a couple of new ones. This allows us to
compile this:

int foo (unsigned long a, unsigned long long g) {
  return a >= g;
}

To:

foo:
        movl 8(%esp), %eax
        cmpl %eax, 4(%esp)
        setae %al
        cmpl $0, 12(%esp)
        sete %cl
        andb %al, %cl
        movzbl %cl, %eax
        ret

instead of:

foo:
        movl 8(%esp), %eax
        cmpl %eax, 4(%esp)
        setae %al
        movzbw %al, %cx
        movl 12(%esp), %edx
        cmpl $0, %edx
        sete %al
        movzbw %al, %ax
        cmpl $0, %edx
        cmove %cx, %ax
        movzbl %al, %eax
        ret

llvm-svn: 21244
2005-04-12 02:54:39 +00:00
Chris Lattner
61f353dbdc Emit comparisons against the sign bit better. Codegen this:
bool %test1(long %X) {
        %A = setlt long %X, 0
        ret bool %A
}

like this:

test1:
        cmpl $0, 8(%esp)
        setl %al
        movzbl %al, %eax
        ret

instead of:

test1:
        movl 8(%esp), %ecx
        cmpl $0, %ecx
        setl %al
        movzbw %al, %ax
        cmpl $0, 4(%esp)
        setb %dl
        movzbw %dl, %dx
        cmpl $0, %ecx
        cmove %dx, %ax
        movzbl %al, %eax
        ret

llvm-svn: 21243
2005-04-12 02:19:10 +00:00
Chris Lattner
6cbbb55967 Emit long comparison against -1 better. Instead of this (x86):
test2:
        movl 8(%esp), %eax
        notl %eax
        movl 4(%esp), %ecx
        notl %ecx
        orl %eax, %ecx
        cmpl $0, %ecx
        sete %al
        movzbl %al, %eax
        ret

or this (PPC):

_test2:
        nor r2, r4, r4
        nor r3, r3, r3
        or r2, r2, r3
        cntlzw r2, r2
        srwi r3, r2, 5
        blr

Emit this:

test2:
        movl 8(%esp), %eax
        andl 4(%esp), %eax
        cmpl $-1, %eax
        sete %al
        movzbl %al, %eax
        ret

or this:

_test2:
.LBB_test2_0:   ;
        and r2, r4, r3
        cmpwi cr0, r2, -1
        li r3, 1
        li r2, 0
        beq .LBB_test2_2        ;
.LBB_test2_1:   ;
        or r3, r2, r2
.LBB_test2_2:   ;
        blr

It seems like the PPC isel could do better for the R32 == -1 case.

llvm-svn: 21242
2005-04-12 01:46:05 +00:00
Chris Lattner
37534d43d0 canonicalize x <u 1 -> x == 0. On this testcase:
unsigned long long g;
unsigned long foo (unsigned long a) {
  return (a >= g) ? 1 : 0;
}

It changes the ppc code from:

_foo:
.LBB_foo_0:     ; entry
        mflr r11
        stw r11, 8(r1)
        bl "L00000$pb"
"L00000$pb":
        mflr r2
        addis r2, r2, ha16(L_g$non_lazy_ptr-"L00000$pb")
        lwz r2, lo16(L_g$non_lazy_ptr-"L00000$pb")(r2)
        lwz r4, 0(r2)
        lwz r2, 4(r2)
        cmplw cr0, r3, r2
        li r2, 1
        li r3, 0
        bge .LBB_foo_2  ; entry
.LBB_foo_1:     ; entry
        or r2, r3, r3
.LBB_foo_2:     ; entry
        cmplwi cr0, r4, 1
        li r3, 1
        li r5, 0
        blt .LBB_foo_4  ; entry
.LBB_foo_3:     ; entry
        or r3, r5, r5
.LBB_foo_4:     ; entry
        cmpwi cr0, r4, 0
        beq .LBB_foo_6  ; entry
.LBB_foo_5:     ; entry
        or r2, r3, r3
.LBB_foo_6:     ; entry
        rlwinm r3, r2, 0, 31, 31
        lwz r11, 8(r1)
        mtlr r11
        blr


to:

_foo:
.LBB_foo_0:     ; entry
        mflr r11
        stw r11, 8(r1)
        bl "L00000$pb"
"L00000$pb":
        mflr r2
        addis r2, r2, ha16(L_g$non_lazy_ptr-"L00000$pb")
        lwz r2, lo16(L_g$non_lazy_ptr-"L00000$pb")(r2)
        lwz r4, 0(r2)
        lwz r2, 4(r2)
        cmplw cr0, r3, r2
        li r2, 1
        li r3, 0
        bge .LBB_foo_2  ; entry
.LBB_foo_1:     ; entry
        or r2, r3, r3
.LBB_foo_2:     ; entry
        cntlzw r3, r4
        srwi r3, r3, 5
        cmpwi cr0, r4, 0
        beq .LBB_foo_4  ; entry
.LBB_foo_3:     ; entry
        or r2, r3, r3
.LBB_foo_4:     ; entry
        rlwinm r3, r2, 0, 31, 31
        lwz r11, 8(r1)
        mtlr r11
        blr

llvm-svn: 21241
2005-04-12 00:28:49 +00:00
Chris Lattner
7f0f0854fa Teach the dag mechanism that this:
long long test2(unsigned A, unsigned B) {
        return ((unsigned long long)A << 32) + B;
}

is equivalent to this:

long long test1(unsigned A, unsigned B) {
        return ((unsigned long long)A << 32) | B;
}

Since B is zero-extended, its high 32 bits are known zero, so the add can never
carry into the high word; ADD and OR therefore agree.  Now they are both
codegen'd to this on ppc:

_test2:
        blr

or this on x86:

test2:
        movl 4(%esp), %edx
        movl 8(%esp), %eax
        ret

llvm-svn: 21231
2005-04-11 20:29:59 +00:00
Chris Lattner
71f3d4ce57 Fix expansion of shifts by exactly NVT bits on architectures (like X86) that have
masking shifts: such targets reduce the shift amount modulo the register width, so a
shift by exactly NVT bits must be special-cased rather than emitted directly.

This fixes the miscompilation of this:

long long test1(unsigned A, unsigned B) {
        return ((unsigned long long)A << 32) | B;
}

into this:

test1:
        movl 4(%esp), %edx
        movl %edx, %eax
        orl 8(%esp), %eax
        ret

allowing us to generate this instead:

test1:
        movl 4(%esp), %edx
        movl 8(%esp), %eax
        ret

llvm-svn: 21230
2005-04-11 20:08:52 +00:00
Nate Begeman
32163963cb Fix libcall code to not pass a NULL Chain to LowerCallTo
Fix libcall code to not crash or assert looking for an ADJCALLSTACKUP node
  when it is known that there is no ADJCALLSTACKDOWN to match.
Expand i64 multiply when ISD::MULHU is legal for the target.

llvm-svn: 21214
2005-04-11 03:01:51 +00:00
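
A C sketch of the classic MULHU-based expansion (my reconstruction, not the patch itself): the full product mod 2^64 is built from three 32-bit multiplies plus the high half of the low product.

#include <assert.h>
#include <stdint.h>

/* Stand-in for ISD::MULHU: high 32 bits of an unsigned 32x32 multiply. */
static uint32_t mulhu32(uint32_t a, uint32_t b) {
  return (uint32_t)(((uint64_t)a * b) >> 32);
}

static uint64_t mul64(uint64_t a, uint64_t b) {
  uint32_t a_lo = (uint32_t)a, a_hi = (uint32_t)(a >> 32);
  uint32_t b_lo = (uint32_t)b, b_hi = (uint32_t)(b >> 32);
  uint32_t lo = a_lo * b_lo;
  uint32_t hi = mulhu32(a_lo, b_lo) + a_hi * b_lo + a_lo * b_hi;
  return ((uint64_t)hi << 32) | lo;
}

int main(void) {
  assert(mul64(0xDEADBEEFCAFEBABEull, 12345) ==
         0xDEADBEEFCAFEBABEull * 12345);
  assert(mul64(~0ull, ~0ull) == ~0ull * ~0ull);
  return 0;
}
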
Chris Lattner
4f26677dc9 Don't bother sign/zext_inreg'ing the result of an and operation if we know
the result doesn't change as a result of the extend.

This improves codegen for Alpha on this testcase:

int %a(ushort* %i) {
        %tmp.1 = load ushort* %i
        %tmp.2 = cast ushort %tmp.1 to int
        %tmp.4 = and int %tmp.2, 1
        ret int %tmp.4
}

Generating:

a:
        ldgp $29, 0($27)
        ldwu $0,0($16)
        and $0,1,$0
        ret $31,($26),1

instead of:

a:
        ldgp $29, 0($27)
        ldwu $0,0($16)
        and $0,1,$0
        addl $0,0,$0
        ret $31,($26),1

btw, Alpha really should switch to livein/outs for args :)

llvm-svn: 21213
2005-04-10 23:37:16 +00:00
Chris Lattner
c730ea00e2 Teach legalize to deal with targets that don't support some SEXTLOAD/ZEXTLOADs
llvm-svn: 21212
2005-04-10 22:54:25 +00:00
Chris Lattner
1b9e1e26cb Don't zextload FP values!
llvm-svn: 21209
2005-04-10 17:40:35 +00:00
Chris Lattner
0c089eae41 Until we have a dag combiner, promote using zextloads instead of extloads.
This gives the optimizer a bit of information about the top-part of the
value.

llvm-svn: 21205
2005-04-10 04:33:47 +00:00
Chris Lattner
9d13d0b958 Fold zext_inreg(zextload), likewise for sext's
llvm-svn: 21204
2005-04-10 04:33:08 +00:00
Chris Lattner
9c8fe594e5 Add a simple xform
llvm-svn: 21203
2005-04-10 04:04:49 +00:00
Chris Lattner
b3518a838c Fix a thinko. If the operand is promoted, pass the promoted value into
the new zero extend, not the original operand.  This fixes cast bool -> long
on ppc.

Add an unrelated FIXME.

llvm-svn: 21196
2005-04-10 01:13:15 +00:00
Chris Lattner
034716de24 Add a little peephole optimization. This allows us to codegen:
int a(short i) {
        return i & 1;
}

as

_a:
        andi. r3, r3, 1
        blr

instead of:

_a:
        rlwinm r2, r3, 0, 16, 31
        andi. r3, r2, 1
        blr

on ppc.  It should also help the other risc targets.

llvm-svn: 21189
2005-04-09 21:43:54 +00:00
Chris Lattner
77ab286605 There is no need to remove this instruction; linscan already does it as it
removes noop moves.

llvm-svn: 21183
2005-04-09 16:24:20 +00:00
Chris Lattner
f408e9a07b Adjust live intervals to support a livein set
llvm-svn: 21182
2005-04-09 16:17:50 +00:00
Chris Lattner
1a9c8fc64a Consider the livein/out set for a function, allowing targets to not have to
use ugly imp_def/imp_uses for arguments and return values.

llvm-svn: 21180
2005-04-09 15:23:25 +00:00
Chris Lattner
afa0001d54 Recognize some patterns as fabs operations, so that fabs at the source level
is deconstructed and then reconstructed here.  This catches 19 fabs's in 177.mesa,
9 in 168.wupwise, 5 in 171.swim, 3 in 172.mgrid, and 14 in 173.applu out of
SPECfp2000.

This allows the X86 code generator to make MUCH better code than before for
each of these and saves one instr on ppc.

This depends on the previous CFE patch to expose these correctly.

llvm-svn: 21171
2005-04-09 05:15:53 +00:00
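
The source-level shape presumably being matched, in C (a sketch; the precise DAG patterns live in the patch): the compare-and-select a front end emits for fabs is stitched back into a single fabs.

#include <assert.h>
#include <math.h>

/* fabs reaches the DAG as select(x < 0, -x, x); the new code
   re-recognizes that pattern as a single FABS node. */
static double fabs_pattern(double x) { return x < 0.0 ? -x : x; }

int main(void) {
  assert(fabs_pattern(-3.5) == fabs(-3.5));
  assert(fabs_pattern( 3.5) == fabs( 3.5));
  return 0;
}
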
Chris Lattner
8e6eafa8e1 Emit BRCONDTWOWAY when possible.
llvm-svn: 21167
2005-04-09 03:30:29 +00:00
Chris Lattner
55b73bda6c Legalize BRCONDTWOWAY into a BRCOND/BR pair if a target doesn't support it.
llvm-svn: 21166
2005-04-09 03:30:19 +00:00