Commit Graph

Duraid Madina
8ad9786fcd expand count-leading/trailing-zeros; the test 2005-05-11-Popcount-ffs-fls.c
should now pass (the "LLVM" and "REF" results should be identical)

llvm-svn: 21866
2005-05-11 08:45:08 +00:00
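
The expansion referenced here reduces both operations to a population count: smear the leading one bit rightward for ctlz, isolate the trailing-zero mask for cttz. A minimal C sketch of the idea (helper names are illustrative, not the legalizer's; the loop popcount stands in for the real ctpop expansion shown further down):

static unsigned popcount32(unsigned x) {
  unsigned n = 0;
  while (x) { n += x & 1u; x >>= 1; }   /* stand-in for the ctpop expansion */
  return n;
}

unsigned ctlz32(unsigned x) {
  /* Smear the highest set bit into every lower position... */
  x |= x >> 1;  x |= x >> 2;  x |= x >> 4;
  x |= x >> 8;  x |= x >> 16;
  /* ...so the leading zeros are exactly the bits still clear. */
  return popcount32(~x);
}

unsigned cttz32(unsigned x) {
  /* ~x & (x - 1) sets exactly the bits below the lowest set bit. */
  return popcount32(~x & (x - 1u));
}
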
Chris Lattner
b452b5aa42 Add some notes for expanding clz/ctz
llvm-svn: 21862
2005-05-11 05:27:09 +00:00
Chris Lattner
4f05136f61 Simplify this code, use the proper shift amount
llvm-svn: 21861
2005-05-11 05:21:31 +00:00
Chris Lattner
3edc8ecb53 Legalize this correctly
llvm-svn: 21859
2005-05-11 05:09:47 +00:00
Chris Lattner
457996c4a6 implement expansion of ctpop nodes, implementing CodeGen/Generic/llvm-ct-intrinsics.ll
llvm-svn: 21856
2005-05-11 04:51:16 +00:00
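
The ctpop expansion is the classic parallel bit-summing ladder of masks, shifts, and adds; a C sketch for 32 bits (the masks are the standard constants, not copied from the patch):

unsigned ctpop32(unsigned x) {
  x = (x & 0x55555555u) + ((x >> 1) & 0x55555555u);   /* sums of bit pairs  */
  x = (x & 0x33333333u) + ((x >> 2) & 0x33333333u);   /* sums per nibble    */
  x = (x & 0x0F0F0F0Fu) + ((x >> 4) & 0x0F0F0F0Fu);   /* sums per byte      */
  x = (x & 0x00FF00FFu) + ((x >> 8) & 0x00FF00FFu);   /* sums per half-word */
  return (x & 0x0000FFFFu) + (x >> 16);               /* total              */
}
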
Chris Lattner
ce84b90a3d Print bit count nodes correctly
llvm-svn: 21855
2005-05-11 04:50:30 +00:00
Jeff Cohen
afc58006b7 Silence some VC++ warnings
llvm-svn: 21838
2005-05-10 02:22:38 +00:00
Chris Lattner
5edb4c4af6 The semantics of cast X to bool are a comparison against zero, not a truncation!
llvm-svn: 21833
2005-05-09 22:17:13 +00:00
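
Concretely: in the old IR, cast-to-bool must behave like x != 0, and truncating to the low bit disagrees for every even nonzero value. A small C illustration:

#include <stdio.h>

int main(void) {
  int x = 2;
  printf("%d\n", x != 0);   /* comparison against zero: 1 (correct) */
  printf("%d\n", x & 1);    /* truncation to the low bit: 0 (wrong) */
  return 0;
}
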
Chris Lattner
95c836384b legalize readio/writeio into a load/store if requested
llvm-svn: 21827
2005-05-09 20:36:57 +00:00
Chris Lattner
7cc8edfc30 legalize READPORT, WRITEPORT, READIO, WRITEIO, at least in the basic cases
where they are directly supported by the architecture.  Wrap a bunch of
long lines :(

llvm-svn: 21826
2005-05-09 20:23:03 +00:00
Chris Lattner
af6bde0db6 Add support for matching the READPORT, WRITEPORT, READIO, WRITEIO intrinsics
llvm-svn: 21825
2005-05-09 20:22:36 +00:00
Chris Lattner
eee649df34 Add support for READPORT, WRITEPORT, READIO, WRITEIO
llvm-svn: 21824
2005-05-09 20:22:17 +00:00
Chris Lattner
c3fa88e7c8 Fold shifts into subsequent SHL's. These shifts often arise due to address
arithmetic lowering.

llvm-svn: 21818
2005-05-09 17:06:45 +00:00
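
Address arithmetic often produces a right shift feeding a left shift; when the amounts match, the pair is just a mask of the low bits. A quick C check of that identity (c = 2 chosen arbitrarily):

#include <assert.h>

int main(void) {
  for (unsigned x = 0; x < 1000u; x++)
    assert(((x >> 2) << 2) == (x & ~3u));
  return 0;
}
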
Chris Lattner
a1e633ef7a Don't use the load/store instruction as the source pointer, use the pointer
being stored/loaded through!

llvm-svn: 21806
2005-05-09 04:28:51 +00:00
Chris Lattner
bfbefe0837 memoize all nodes, even null Value* nodes. Do not add two token chain outputs
llvm-svn: 21805
2005-05-09 04:14:13 +00:00
Chris Lattner
b85030373d wrap long lines
llvm-svn: 21804
2005-05-09 04:08:33 +00:00
Chris Lattner
6ffae1a3ec Print SrcValue nodes correctly
llvm-svn: 21803
2005-05-09 04:08:27 +00:00
Chris Lattner
6d85b91b24 Wrap long lines. Fix "warning: conflicting types for built-in function 'memset'"
warning from the CBE+GCC.

llvm-svn: 21779
2005-05-08 19:46:29 +00:00
Misha Brukman
1996bf6ea5 * Order #includes alphabetically
* Remove commented-out debug printouts

llvm-svn: 21707
2005-05-05 23:45:17 +00:00
Chris Lattner
6e8167d1c2 When hitting an unsupported intrinsic, actually print it.
Lower debug info to no-ops.

llvm-svn: 21698
2005-05-05 17:55:17 +00:00
Andrew Lenharth
09c3c4add4 ctpop lowering in legalize
llvm-svn: 21697
2005-05-05 15:55:21 +00:00
Andrew Lenharth
9282d00d4f Make promoteOp work for CT*
Proof?

ubyte %bar(ubyte %x) {
entry:
        %tmp.1 = call ubyte %llvm.ctlz( ubyte %x )
        ret ubyte %tmp.1
}

==>

zapnot $16,1,$0
CTLZ $0,$0
subq $0,56,$0
zapnot $0,1,$0
ret $31,($26),1

llvm-svn: 21691
2005-05-04 19:11:05 +00:00
Andrew Lenharth
8b64bd0fd5 Implement count leading zeros (ctlz), count trailing zeros (cttz), and count
population (ctpop).  Generic lowering is implemented; however, only promotion
is implemented for SelectionDAG at the moment.

More coming soon.

llvm-svn: 21676
2005-05-03 17:19:30 +00:00
Alkis Evlogimenos
66f1632de8 Do not use deprecated APIs
llvm-svn: 21639
2005-04-30 07:13:31 +00:00
Chris Lattner
fe72cdf838 Codegen and legalize sin/cos/llvm.sqrt as FSIN/FCOS/FSQRT calls. This patch
was contributed by Morten Ofstad, with some minor tweaks and bug fixes added
by me.

llvm-svn: 21636
2005-04-30 04:43:14 +00:00
Chris Lattner
0366e4c0d3 Lower llvm.sqrt -> fsqrt/sqrt
llvm-svn: 21629
2005-04-30 04:07:50 +00:00
Chris Lattner
6ec8bb9e8d Legalize FSQRT, FSIN, FCOS nodes, patch contributed by Morten Ofstad
llvm-svn: 21606
2005-04-28 21:44:33 +00:00
Chris Lattner
4678a790e6 Add FSQRT, FSIN, FCOS nodes, patch contributed by Morten Ofstad
llvm-svn: 21605
2005-04-28 21:44:03 +00:00
Andrew Lenharth
2a00530fa7 Implement Value* tracking for loads and stores in the selection DAG. This enables one to use alias analysis in the backends.
(TRUNC)Stores and (EXT|ZEXT|SEXT)Loads have an extra SDOperand, a SrcValueSDNode, which contains the Value*.  Note that if the operation is introduced by the backend, it will still have the operand, but the Value* will be null.

llvm-svn: 21599
2005-04-27 20:10:01 +00:00
Chris Lattner
15bcc5273b Fold (X > -1) | (Y > -1) --> (X&Y > -1)
llvm-svn: 21552
2005-04-26 01:18:33 +00:00
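
X > -1 says exactly "sign bit clear", and at least one of two sign bits is clear precisely when the sign bit of X & Y is clear. A brute-force C check over some boundary values:

#include <assert.h>

int main(void) {
  int t[] = { -2, -1, 0, 1, 2147483647, -2147483647 - 1 };
  for (int i = 0; i < 6; i++)
    for (int j = 0; j < 6; j++)
      assert(((t[i] > -1) | (t[j] > -1)) == ((t[i] & t[j]) > -1));
  return 0;
}
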
Chris Lattner
d8ac4da793 implement some more logical compares with constants, so that:
int foo1(int x, int y) {
  int t1 = x >= 0;
  int t2 = y >= 0;
  return t1 & t2;
}
int foo2(int x, int y) {
  int t1 = x == -1;
  int t2 = y == -1;
  return t1 & t2;
}

produces:

_foo1:
        or r2, r4, r3
        srwi r2, r2, 31
        xori r3, r2, 1
        blr
_foo2:
        and r2, r4, r3
        addic r2, r2, 1
        li r2, 0
        addze r3, r2
        blr

instead of:

_foo1:
        srwi r2, r4, 31
        xori r2, r2, 1
        srwi r3, r3, 31
        xori r3, r3, 1
        and r3, r2, r3
        blr
_foo2:
        addic r2, r4, 1
        li r2, 0
        addze r2, r2
        addic r3, r3, 1
        li r3, 0
        addze r3, r3
        and r3, r2, r3
        blr

llvm-svn: 21547
2005-04-25 21:20:28 +00:00
Chris Lattner
7931b75a81 Codegen x < 0 | y < 0 as (x|y) < 0. This allows us to compile this to:
_foo:
        or r2, r4, r3
        srwi r3, r2, 31
        blr

instead of:

_foo:
        srwi r2, r4, 31
        srwi r3, r3, 31
        or r3, r2, r3
        blr

llvm-svn: 21544
2005-04-25 21:03:25 +00:00
Misha Brukman
a9a1982a44 Convert tabs to spaces
llvm-svn: 21439
2005-04-22 04:01:18 +00:00
Misha Brukman
774e55c446 Remove trailing whitespace
llvm-svn: 21420
2005-04-21 22:36:52 +00:00
Chris Lattner
87fbc1c554 Improve AND elimination. On PPC, for:
bool %test(int %X) {
        %Y = and int %X, 8
        %Z = setne int %Y, 0
        ret bool %Z
}

we now generate this:

        rlwinm r2, r3, 0, 28, 28
        srwi r3, r2, 3

instead of this:

        rlwinm r2, r3, 0, 28, 28
        srwi r2, r2, 3
        rlwinm r3, r2, 0, 31, 31

I'll leave it to Nate to get it down to one instruction. :)

llvm-svn: 21391
2005-04-21 06:28:15 +00:00
Chris Lattner
d0a2fda2c6 Fold (x & 8) != 0 and (x & 8) == 8 into (x & 8) >> 3.
This turns this PPC code:

        rlwinm r2, r3, 0, 28, 28
        cmpwi cr7, r2, 8
        mfcr r2
        rlwinm r3, r2, 31, 31, 31

into this:

        rlwinm r2, r3, 0, 28, 28
        srwi r2, r2, 3
        rlwinm r3, r2, 0, 31, 31

Next up, nuking the extra and.

llvm-svn: 21390
2005-04-21 06:12:41 +00:00
Chris Lattner
188ecaab1d Fold setcc of MVT::i1 operands into logical operations
llvm-svn: 21319
2005-04-18 04:48:12 +00:00
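
On one-bit values every comparison collapses to boolean logic: equality is XNOR, inequality is XOR, unsigned less-than is !a & b. An exhaustive C check:

#include <assert.h>

int main(void) {
  for (unsigned a = 0; a <= 1u; a++)
    for (unsigned b = 0; b <= 1u; b++) {
      assert((a == b) == !(a ^ b));   /* seteq -> xnor */
      assert((a != b) == (a ^ b));    /* setne -> xor  */
      assert((a <  b) == (!a & b));   /* setult        */
      assert((a >  b) == (a & !b));   /* setugt        */
    }
  return 0;
}
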
Chris Lattner
72aca1b758 Another minor simplification: handle setcc (zero_extend x), c -> setcc(x, c')
llvm-svn: 21318
2005-04-18 04:30:45 +00:00
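
The idea: if the constant fits in the pre-extension type, compare in that type; if it does not, the result is a known constant. A C illustration with an 8-bit source value (the constants are made up for the example):

#include <assert.h>

int main(void) {
  for (unsigned v = 0; v <= 255u; v++) {
    unsigned char x = (unsigned char)v;
    unsigned wide = x;                   /* zero_extend i8 -> i32 */
    assert((wide == 65u) == (x == 65));  /* constant fits: compare narrow */
    assert((wide == 300u) == 0);         /* constant can't fit: always false */
  }
  return 0;
}
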
Chris Lattner
e6117e5d4f Another simple xform
llvm-svn: 21317
2005-04-18 04:11:19 +00:00
Chris Lattner
f6f5b23a00 Fold:
  (X != 0) | (Y != 0) -> (X|Y != 0)
  (X == 0) & (Y == 0) -> (X|Y == 0)

Compiling this:

int %bar(int %a, int %b) {
        entry:
        %tmp.1 = setne int %a, 0
        %tmp.2 = setne int %b, 0
        %tmp.3 = or bool %tmp.1, %tmp.2
        %retval = cast bool %tmp.3 to int
        ret int %retval
        }

to this:

_bar:
        or r2, r3, r4
        addic r3, r2, -1
        subfe r3, r3, r2
        blr

instead of:

_bar:
        addic r2, r3, -1
        subfe r2, r2, r3
        addic r3, r4, -1
        subfe r3, r3, r4
        or r3, r2, r3
        blr

llvm-svn: 21316
2005-04-18 03:59:53 +00:00
Chris Lattner
a32c50520c Make the AND elimination operation recursive and significantly more powerful,
eliminating an and for Nate's testcase:

int %bar(int %a, int %b) {
        entry:
        %tmp.1 = setne int %a, 0
        %tmp.2 = setne int %b, 0
        %tmp.3 = or bool %tmp.1, %tmp.2
        %retval = cast bool %tmp.3 to int
        ret int %retval
        }

generating:

_bar:
        addic r2, r3, -1
        subfe r2, r2, r3
        addic r3, r4, -1
        subfe r3, r3, r4
        or r3, r2, r3
        blr

instead of:

_bar:
        addic r2, r3, -1
        subfe r2, r2, r3
        addic r3, r4, -1
        subfe r3, r3, r4
        or r2, r2, r3
        rlwinm r3, r2, 0, 31, 31
        blr

llvm-svn: 21315
2005-04-18 03:48:41 +00:00
Nate Begeman
ce63e383b8 Add a couple missing transforms in getSetCC that were triggering assertions
in the PPC Pattern ISel

llvm-svn: 21297
2005-04-14 08:56:52 +00:00
Nate Begeman
20b3399465 Disable the broken fold of shift + sz[ext] for now
Move the transform for select (a < 0) ? b : 0 into the dag from ppc isel
Enable the dag to fold and (setcc, 1) -> setcc for targets where setcc
  always produces zero or one.

llvm-svn: 21291
2005-04-13 21:23:31 +00:00
Chris Lattner
89f7e115a4 fix an infinite loop
llvm-svn: 21289
2005-04-13 20:06:29 +00:00
Chris Lattner
475fe85ddf fix some serious miscompiles on ia64, alpha, and ppc
llvm-svn: 21288
2005-04-13 19:53:40 +00:00
Chris Lattner
03d675414e avoid work when possible; perhaps fix the problem Nate and Andrew are seeing
with != 0 comparisons vanishing.

llvm-svn: 21287
2005-04-13 19:41:05 +00:00
Chris Lattner
9540cf8c7e Implement expansion of unsigned i64 -> FP.
Note that this probably only works for little endian targets, but is enough
to get siod working :)

llvm-svn: 21280
2005-04-13 05:09:42 +00:00
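
One standard way to expand unsigned i64 -> FP splits the value into 32-bit halves and recombines them in the FP domain; the high half times 2^32 is exact, so only the final add rounds. A C sketch of the arithmetic (not the patch's store/reload trick, which is what carries the endianness caveat):

double uint64_to_double(unsigned long long v) {
  unsigned hi = (unsigned)(v >> 32);
  unsigned lo = (unsigned)(v & 0xFFFFFFFFu);
  return (double)hi * 4294967296.0 + (double)lo;   /* hi * 2^32 + lo */
}
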
Chris Lattner
1a6247ff51 Make expansion of uint->fp cast assert out instead of infinitely recurse.
llvm-svn: 21275
2005-04-13 03:42:14 +00:00
Chris Lattner
63450e87d9 add back the optimization that Nate added for shl X, (zext_inreg y)
llvm-svn: 21273
2005-04-13 02:58:13 +00:00
Chris Lattner
759afe07d7 Oops, remove these too.
llvm-svn: 21272
2005-04-13 02:47:57 +00:00
Chris Lattner
4f188f949c Instead of making ZERO_EXTEND_INREG nodes, use the helper method in
SelectionDAG to do the job with AND.  Don't legalize Z_E_I anymore, as
it is gone.

llvm-svn: 21266
2005-04-13 02:38:47 +00:00
Chris Lattner
bce0030a88 Remove all foldings of ZERO_EXTEND_INREG, moving them to work for AND nodes
instead.  Overall, this increases the amount of folding we can do.

llvm-svn: 21265
2005-04-13 02:38:18 +00:00
Nate Begeman
38d8248a9e Fold shift x, [sz]ext(y) -> shift x, y
llvm-svn: 21262
2005-04-12 23:32:28 +00:00
Nate Begeman
a56527ea5f Fold shift by size larger than type size to undef
Make llvm undef values generate ISD::UNDEF nodes

llvm-svn: 21261
2005-04-12 23:12:17 +00:00
Chris Lattner
58f72ab722 promote extload i1 -> extload i8
llvm-svn: 21258
2005-04-12 20:30:10 +00:00
Chris Lattner
cfc7093ca6 Remove some redundant checks, add a couple of new ones. This allows us to
compile this:

int foo (unsigned long a, unsigned long long g) {
  return a >= g;
}

To:

foo:
        movl 8(%esp), %eax
        cmpl %eax, 4(%esp)
        setae %al
        cmpl $0, 12(%esp)
        sete %cl
        andb %al, %cl
        movzbl %cl, %eax
        ret

instead of:

foo:
        movl 8(%esp), %eax
        cmpl %eax, 4(%esp)
        setae %al
        movzbw %al, %cx
        movl 12(%esp), %edx
        cmpl $0, %edx
        sete %al
        movzbw %al, %ax
        cmpl $0, %edx
        cmove %cx, %ax
        movzbl %al, %eax
        ret

llvm-svn: 21244
2005-04-12 02:54:39 +00:00
Chris Lattner
61f353dbdc Emit comparisons against the sign bit better. Codegen this:
bool %test1(long %X) {
        %A = setlt long %X, 0
        ret bool %A
}

like this:

test1:
        cmpl $0, 8(%esp)
        setl %al
        movzbl %al, %eax
        ret

instead of:

test1:
        movl 8(%esp), %ecx
        cmpl $0, %ecx
        setl %al
        movzbw %al, %ax
        cmpl $0, 4(%esp)
        setb %dl
        movzbw %dl, %dx
        cmpl $0, %ecx
        cmove %dx, %ax
        movzbl %al, %eax
        ret

llvm-svn: 21243
2005-04-12 02:19:10 +00:00
Chris Lattner
6cbbb55967 Emit long comparison against -1 better. Instead of this (x86):
test2:
        movl 8(%esp), %eax
        notl %eax
        movl 4(%esp), %ecx
        notl %ecx
        orl %eax, %ecx
        cmpl $0, %ecx
        sete %al
        movzbl %al, %eax
        ret

or this (PPC):

_test2:
        nor r2, r4, r4
        nor r3, r3, r3
        or r2, r2, r3
        cntlzw r2, r2
        srwi r3, r2, 5
        blr

Emit this:

test2:
        movl 8(%esp), %eax
        andl 4(%esp), %eax
        cmpl $-1, %eax
        sete %al
        movzbl %al, %eax
        ret

or this:

_test2:
.LBB_test2_0:   ;
        and r2, r4, r3
        cmpwi cr0, r2, -1
        li r3, 1
        li r2, 0
        beq .LBB_test2_2        ;
.LBB_test2_1:   ;
        or r3, r2, r2
.LBB_test2_2:   ;
        blr

it seems like the PPC isel could do better for the R32 == -1 case.

llvm-svn: 21242
2005-04-12 01:46:05 +00:00
Chris Lattner
37534d43d0 canonicalize x <u 1 -> x == 0. On this testcase:
unsigned long long g;
unsigned long foo (unsigned long a) {
  return (a >= g) ? 1 : 0;
}

It changes the ppc code from:

_foo:
.LBB_foo_0:     ; entry
        mflr r11
        stw r11, 8(r1)
        bl "L00000$pb"
"L00000$pb":
        mflr r2
        addis r2, r2, ha16(L_g$non_lazy_ptr-"L00000$pb")
        lwz r2, lo16(L_g$non_lazy_ptr-"L00000$pb")(r2)
        lwz r4, 0(r2)
        lwz r2, 4(r2)
        cmplw cr0, r3, r2
        li r2, 1
        li r3, 0
        bge .LBB_foo_2  ; entry
.LBB_foo_1:     ; entry
        or r2, r3, r3
.LBB_foo_2:     ; entry
        cmplwi cr0, r4, 1
        li r3, 1
        li r5, 0
        blt .LBB_foo_4  ; entry
.LBB_foo_3:     ; entry
        or r3, r5, r5
.LBB_foo_4:     ; entry
        cmpwi cr0, r4, 0
        beq .LBB_foo_6  ; entry
.LBB_foo_5:     ; entry
        or r2, r3, r3
.LBB_foo_6:     ; entry
        rlwinm r3, r2, 0, 31, 31
        lwz r11, 8(r1)
        mtlr r11
        blr


to:

_foo:
.LBB_foo_0:     ; entry
        mflr r11
        stw r11, 8(r1)
        bl "L00000$pb"
"L00000$pb":
        mflr r2
        addis r2, r2, ha16(L_g$non_lazy_ptr-"L00000$pb")
        lwz r2, lo16(L_g$non_lazy_ptr-"L00000$pb")(r2)
        lwz r4, 0(r2)
        lwz r2, 4(r2)
        cmplw cr0, r3, r2
        li r2, 1
        li r3, 0
        bge .LBB_foo_2  ; entry
.LBB_foo_1:     ; entry
        or r2, r3, r3
.LBB_foo_2:     ; entry
        cntlzw r3, r4
        srwi r3, r3, 5
        cmpwi cr0, r4, 0
        beq .LBB_foo_4  ; entry
.LBB_foo_3:     ; entry
        or r2, r3, r3
.LBB_foo_4:     ; entry
        rlwinm r3, r2, 0, 31, 31
        lwz r11, 8(r1)
        mtlr r11
        blr

llvm-svn: 21241
2005-04-12 00:28:49 +00:00
Chris Lattner
7f0f0854fa Teach the dag mechanism that this:
long long test2(unsigned A, unsigned B) {
        return ((unsigned long long)A << 32) + B;
}

is equivalent to this:

long long test1(unsigned A, unsigned B) {
        return ((unsigned long long)A << 32) | B;
}

Now they are both codegen'd to this on ppc:

_test2:
        blr

or this on x86:

test2:
        movl 4(%esp), %edx
        movl 8(%esp), %eax
        ret

llvm-svn: 21231
2005-04-11 20:29:59 +00:00
Chris Lattner
71f3d4ce57 Fix expansion of shifts by exactly NVT bits on arch's (like X86) that have
masking shifts.

This fixes the miscompilation of this:

long long test1(unsigned A, unsigned B) {
        return ((unsigned long long)A << 32) | B;
}

into this:

test1:
        movl 4(%esp), %edx
        movl %edx, %eax
        orl 8(%esp), %eax
        ret

allowing us to generate this instead:

test1:
        movl 4(%esp), %edx
        movl 8(%esp), %eax
        ret

llvm-svn: 21230
2005-04-11 20:08:52 +00:00
Nate Begeman
32163963cb Fix libcall code to not pass a NULL Chain to LowerCallTo
Fix libcall code to not crash or assert looking for an ADJCALLSTACKUP node
  when it is known that there is no ADJCALLSTACKDOWN to match.
Expand i64 multiply when ISD::MULHU is legal for the target.

llvm-svn: 21214
2005-04-11 03:01:51 +00:00
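
For the multiply expansion, MULHU supplies the carry out of the low 32x32 product, and the cross products land in the high word. A C model of the split (mulhu32 models ISD::MULHU; the helper names are made up):

static unsigned mulhu32(unsigned a, unsigned b) {   /* models ISD::MULHU */
  return (unsigned)(((unsigned long long)a * b) >> 32);
}

unsigned long long expanded_mul64(unsigned long long a, unsigned long long b) {
  unsigned alo = (unsigned)a, ahi = (unsigned)(a >> 32);
  unsigned blo = (unsigned)b, bhi = (unsigned)(b >> 32);
  unsigned lo = alo * blo;              /* low word of the product    */
  unsigned hi = mulhu32(alo, blo)       /* carry from the low product */
              + alo * bhi + ahi * blo;  /* cross products, mod 2^32   */
  return ((unsigned long long)hi << 32) | lo;
}
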
Chris Lattner
4f26677dc9 Don't bother sign/zext_inreg'ing the result of an and operation if we know
the result doesn't change as a result of the extend.

This improves codegen for Alpha on this testcase:

int %a(ushort* %i) {
        %tmp.1 = load ushort* %i
        %tmp.2 = cast ushort %tmp.1 to int
        %tmp.4 = and int %tmp.2, 1
        ret int %tmp.4
}

Generating:

a:
        ldgp $29, 0($27)
        ldwu $0,0($16)
        and $0,1,$0
        ret $31,($26),1

instead of:

a:
        ldgp $29, 0($27)
        ldwu $0,0($16)
        and $0,1,$0
        addl $0,0,$0
        ret $31,($26),1

btw, alpha really should switch to livein/outs for args :)

llvm-svn: 21213
2005-04-10 23:37:16 +00:00
Chris Lattner
c730ea00e2 Teach legalize to deal with targets that don't support some SEXTLOAD/ZEXTLOADs
llvm-svn: 21212
2005-04-10 22:54:25 +00:00
Chris Lattner
1b9e1e26cb don't zextload fp values!
llvm-svn: 21209
2005-04-10 17:40:35 +00:00
Chris Lattner
0c089eae41 Until we have a dag combiner, promote using zextloads instead of extloads.
This gives the optimizer a bit of information about the top-part of the
value.

llvm-svn: 21205
2005-04-10 04:33:47 +00:00
Chris Lattner
9d13d0b958 Fold zext_inreg(zextload), likewise for sext's
llvm-svn: 21204
2005-04-10 04:33:08 +00:00
Chris Lattner
9c8fe594e5 add a simple xform
llvm-svn: 21203
2005-04-10 04:04:49 +00:00
Chris Lattner
b3518a838c Fix a thinko. If the operand is promoted, pass the promoted value into
the new zero extend, not the original operand.  This fixes cast bool -> long
on ppc.

Add an unrelated fixme

llvm-svn: 21196
2005-04-10 01:13:15 +00:00
Chris Lattner
034716de24 add a little peephole optimization. This allows us to codegen:
int a(short i) {
        return i & 1;
}

as

_a:
        andi. r3, r3, 1
        blr

instead of:

_a:
        rlwinm r2, r3, 0, 16, 31
        andi. r3, r2, 1
        blr

on ppc.  It should also help the other risc targets.

llvm-svn: 21189
2005-04-09 21:43:54 +00:00
Chris Lattner
77ab286605 there is no need to remove this instruction; linscan already does it as it
removes noop moves.

llvm-svn: 21183
2005-04-09 16:24:20 +00:00
Chris Lattner
f408e9a07b Adjust live intervals to support a livein set
llvm-svn: 21182
2005-04-09 16:17:50 +00:00
Chris Lattner
1a9c8fc64a Consider the livein/out set for a function, allowing targets to not have to
use ugly imp_def/imp_uses for arguments and return values.

llvm-svn: 21180
2005-04-09 15:23:25 +00:00
Chris Lattner
afa0001d54 recognize some patterns as fabs operations, so that fabs at the source level
is deconstructed then reconstructed here.  This catches 19 fabs's in 177.mesa,
9 in 168.wupwise, 5 in 171.swim, 3 in 172.mgrid, and 14 in 173.applu out of
specfp2000.

This allows the X86 code generator to make MUCH better code than before for
each of these and saves one instr on ppc.

This depends on the previous CFE patch to expose these correctly.

llvm-svn: 21171
2005-04-09 05:15:53 +00:00
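
The shapes being recognized are source-level selects on the sign; once the CFE exposes them, the matcher can rebuild one FABS node. The C patterns in question look like this (note both differ from fabs on -0.0; the commit doesn't say how signed zero is handled):

double fabs_pattern1(double x) { return x < 0.0 ? -x : x; }
double fabs_pattern2(double x) { return x > 0.0 ? x : -x; }
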
Chris Lattner
8e6eafa8e1 Emit BRCONDTWOWAY when possible.
llvm-svn: 21167
2005-04-09 03:30:29 +00:00
Chris Lattner
55b73bda6c Legalize BRCONDTWOWAY into a BRCOND/BR pair if a target doesn't support it.
llvm-svn: 21166
2005-04-09 03:30:19 +00:00
Chris Lattner
da902bdf1b print and fold BRCONDTWOWAY correctly
llvm-svn: 21165
2005-04-09 03:27:28 +00:00
Chris Lattner
31170cd2ec canonicalize a bunch of operations involving fneg
llvm-svn: 21160
2005-04-09 03:02:46 +00:00
Chris Lattner
9a56ef5693 If a target zero or sign extends the result of its setcc, allow folding of
this into sign/zero extension instructions later.

On PPC, for example, this testcase:

%G = external global sbyte
implementation
void %test(int %X, int %Y) {
  %C = setlt int %X, %Y
  %D = cast bool %C to sbyte
  store sbyte %D, sbyte* %G
  ret void
}

Now codegens to:

        cmpw cr0, r3, r4
        li r3, 1
        li r4, 0
        blt .LBB_test_2 ;
.LBB_test_1:    ;
        or r3, r4, r4
.LBB_test_2:    ;
        addis r2, r2, ha16(L_G$non_lazy_ptr-"L00000$pb")
        lwz r2, lo16(L_G$non_lazy_ptr-"L00000$pb")(r2)
        stb r3, 0(r2)

instead of:

        cmpw cr0, r3, r4
        li r3, 1
        li r4, 0
        blt .LBB_test_2 ;
.LBB_test_1:    ;
        or r3, r4, r4
.LBB_test_2:    ;
***     rlwinm r3, r3, 0, 31, 31
        addis r2, r2, ha16(L_G$non_lazy_ptr-"L00000$pb")
        lwz r2, lo16(L_G$non_lazy_ptr-"L00000$pb")(r2)
        stb r3, 0(r2)

llvm-svn: 21148
2005-04-07 19:43:53 +00:00
Chris Lattner
bbe0e9e9db Remove something I had for testing
llvm-svn: 21144
2005-04-07 18:58:54 +00:00
Chris Lattner
ee836c7b32 This patch does two things. First, it canonicalizes 'X >= C' -> 'X > C-1'
(likewise for <=, >=u, <=u).

Second, it implements a special case hack to turn 'X gtu SINTMAX' -> 'X lt 0'

On powerpc, for example, this changes this:

        lis r2, 32767
        ori r2, r2, 65535
        cmplw cr0, r3, r2
        bgt .LBB_test_2

into:

        cmpwi cr0, r3, 0
        blt .LBB_test_2

llvm-svn: 21142
2005-04-07 18:14:58 +00:00
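
Both rewrites are easy to sanity-check: X >= C and X > C-1 agree whenever C-1 doesn't wrap, and an unsigned value exceeds SINTMAX exactly when its sign bit is set. A C spot check (the cast assumes the usual two's complement reinterpretation):

#include <assert.h>

int main(void) {
  unsigned t[] = { 0u, 1u, 99u, 100u, 0x7fffffffu, 0x80000000u, 0xffffffffu };
  for (int i = 0; i < 7; i++) {
    assert((t[i] >= 100u) == (t[i] > 99u));          /* X >= C  ->  X > C-1     */
    assert((t[i] > 0x7fffffffu) == ((int)t[i] < 0)); /* X gtu SINTMAX -> X lt 0 */
  }
  return 0;
}
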
Chris Lattner
22bbc2351e Fix a really scary bug that Nate found where we weren't deleting the right
elements out of the autoCSE maps.

llvm-svn: 21128
2005-04-07 00:30:13 +00:00
Nate Begeman
7898fc8cc8 Teach ExpandShift how to handle shifts by a constant. This allows targets
like PowerPC to codegen long shifts in many fewer instructions.

llvm-svn: 21122
2005-04-06 21:13:14 +00:00
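
With a constant amount, an expanded 64-bit shift needs no runtime dispatch: either bits cross between the 32-bit halves (c < 32) or one half is a shifted copy of the other. A C sketch for shift-left (the legalizer picks one arm at compile time; 0 < c < 64 is assumed to keep the sketch short):

void shl64(unsigned lo, unsigned hi, unsigned c,   /* assumes 0 < c < 64 */
           unsigned *outlo, unsigned *outhi) {
  if (c < 32) {
    *outhi = (hi << c) | (lo >> (32 - c));   /* bits cross the halves */
    *outlo = lo << c;
  } else {
    *outhi = lo << (c - 32);                 /* low half becomes high */
    *outlo = 0;
  }
}
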
Nate Begeman
4457b4994c Expand SREM and UREM for targets that claim not to have them, like PowerPC
llvm-svn: 21103
2005-04-06 00:23:54 +00:00
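
The rem expansion is a divide, a multiply, and a subtract, so any target with division gets remainder for free. In C terms:

unsigned urem_expanded(unsigned a, unsigned b) {
  return a - (a / b) * b;   /* a % b, expanded */
}
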
Nate Begeman
12af81407b Add MULHU and MULHS nodes for the high part of an (un)signed 32x32=64b
multiply.

llvm-svn: 21102
2005-04-05 22:36:56 +00:00
Chris Lattner
f81edb57b6 Make sure to notice that explicit physregs are used in the function
llvm-svn: 21084
2005-04-04 21:35:34 +00:00
Nate Begeman
a8be5b976f Handle expanding arguments to ISD::TRUNCATE. This happens on PowerPC when
you have something like i16 = truncate i64.  This fixes Regression/C/casts.

llvm-svn: 21073
2005-04-04 00:57:08 +00:00
Chris Lattner
a8bccb73cd Fix sign_extend and zero_extend of promoted value types to expanded value
types.  This occurs when casting short to long on PPC for example.

llvm-svn: 21072
2005-04-03 23:41:52 +00:00
Duraid Madina
3a10f491f0 add support for prefix/suffix strings to go around GlobalValue(s)
(which may or may not be function pointers) in the asmprinter. For the moment,
this changes nothing, except the IA64 backend which can use this to write:

  data8.ua  @fptr(blah__blah__mangled_function_name)

  (by setting FunctionAddrPrefix/Suffix to "@fptr(" / ")")

llvm-svn: 21024
2005-04-02 12:21:51 +00:00
Chris Lattner
1a15f58a92 transform fabs/fabsf calls into FABS nodes.
llvm-svn: 21014
2005-04-02 05:26:53 +00:00
Chris Lattner
206a694a7b Expand fabs into fneg
llvm-svn: 21013
2005-04-02 05:26:37 +00:00
Chris Lattner
fcf6ee0a8b Turn -0.0 - X -> fneg
llvm-svn: 21011
2005-04-02 05:04:50 +00:00
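
The -0.0 (rather than 0.0) matters only at zero: 0.0 - 0.0 is +0.0, while fneg(0.0) and -0.0 - 0.0 are both -0.0. A C check with signbit:

#include <assert.h>
#include <math.h>

int main(void) {
  double z = 0.0;
  assert(signbit(-0.0 - z));   /* -0.0 - X matches fneg at X = 0.0 */
  assert(!signbit(0.0 - z));   /*  0.0 - X does not                */
  return 0;
}
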
Chris Lattner
8644181cd6 Several changes mixed up here. First, when legalizing a DAG with pcmarker,
don't regenerate the whole dag if unnecessary.  Second, fix an ugly bug with
the _PARTS nodes that caused legalize to produce multiples of them.
Finally, implement initial support for FABS and FNEG.  Currently FNEG is
the only one to be trusted, though.

llvm-svn: 21009
2005-04-02 05:00:07 +00:00
Chris Lattner
c8f36868e6 print fneg/fabs
llvm-svn: 21008
2005-04-02 04:58:41 +00:00
Chris Lattner
8be5696874 fix some bugs in the implementation of SHL_PARTS and friends.
llvm-svn: 21004
2005-04-02 04:00:59 +00:00
Chris Lattner
964ab5d408 Turn expanded shift operations into (e.g.) SHL_PARTS if the target supports it.
llvm-svn: 21002
2005-04-02 03:38:53 +00:00
Chris Lattner
33ca1ce8e0 Print some new nodes
llvm-svn: 21001
2005-04-02 03:30:42 +00:00
Chris Lattner
20027c6b30 Fix a bug when inserting a libcall into a function with no other calls.
llvm-svn: 20999
2005-04-02 03:22:40 +00:00
Nate Begeman
893f5729ce Fix a warning about an unhandled switch case
llvm-svn: 20994
2005-04-02 00:41:14 +00:00
Nate Begeman
4034852ba9 Add ISD::UNDEF node
Teach the SelectionDAG code how to expand and promote it
Have PPC32 LowerCallTo generate ISD::UNDEF for int arg regs used up by fp
  arguments, but not shadowing their value.  This allows us to do the right
  thing with both fixed and vararg floating point arguments.

llvm-svn: 20988
2005-04-01 22:34:39 +00:00