Chris Lattner
89f7e115a4
fix an infinite loop
llvm-svn: 21289
2005-04-13 20:06:29 +00:00
Chris Lattner
475fe85ddf
fix some serious miscompiles on ia64, alpha, and ppc
llvm-svn: 21288
2005-04-13 19:53:40 +00:00
Chris Lattner
03d675414e
avoid work when possible; perhaps fix the problem Nate and Andrew are seeing
with != 0 comparisons vanishing.
llvm-svn: 21287
2005-04-13 19:41:05 +00:00
Andrew Lenharth
cbf7a52768
WOW, function calls still seem to work after this.
llvm-svn: 21286
2005-04-13 17:17:28 +00:00
Andrew Lenharth
d1fee6d24a
prepare for func call optimization
llvm-svn: 21285
2005-04-13 16:19:50 +00:00
Andrew Lenharth
69541f896f
regression case for faster call sequence
llvm-svn: 21284
2005-04-13 16:16:01 +00:00
Andrew Lenharth
18f93d4455
check that casts still use zap
llvm-svn: 21283
2005-04-13 13:00:16 +00:00
Duraid Madina
b9d2d9ac63
* add the shladd instruction
* fold left shifts of 1, 2, 3 or 4 bits into adds
This doesn't save much now, but should get a serious workout once
multiplies by constants get converted to shift/add/sub sequences.
Hold on! :)
llvm-svn: 21282
2005-04-13 06:12:04 +00:00
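For illustration, the kind of source pattern this targets, as a minimal C sketch (the function names are hypothetical, not from the tree):

  /* IA-64's shladd computes (a << count) + b for count = 1..4, so a
     left shift of 1-4 bits feeding an add collapses to one instruction. */
  long shift_add(long a, long b) {
      return (a << 3) + b;   /* one shladd */
  }
  /* after multiply-by-constant decomposition, e.g. a * 9: */
  long mul9(long a) {
      return (a << 3) + a;   /* also one shladd */
  }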
Andrew Lenharth
ec33ab6a2f
add matches for SxADDL and company, as well as simplify the SxADDQ code
llvm-svn: 21281
2005-04-13 05:19:55 +00:00
Chris Lattner
9540cf8c7e
Implement expansion of unsigned i64 -> FP.
Note that this probably only works for little endian targets, but is enough
to get siod working :)
llvm-svn: 21280
2005-04-13 05:09:42 +00:00
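The commit doesn't spell out the exact sequence, but a common way to expand unsigned i64 -> FP when only a signed conversion is available looks like this C sketch (function name hypothetical):

  #include <stdint.h>
  double u64_to_f64(uint64_t x) {
      /* convert as signed, then compensate by 2^64 when the
         reinterpreted value was negative */
      double d = (double)(int64_t)x;
      if ((int64_t)x < 0)
          d += 18446744073709551616.0;   /* 2^64 */
      return d;
  }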
Duraid Madina
67e553e521
* if ANDing with a constant of the form:
  0x00000..00FFF..FF
  (any number of 0's followed by some number of 1's)
then we use dep.z to just paste zeros over the input. For the special
cases where this is zxt1/zxt2/zxt4, we use those instructions instead,
because we're all about readability!!!
that's what it's about!! readability!
*twitch* ;D
llvm-svn: 21279
2005-04-13 04:50:54 +00:00
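In C terms, the ANDs this matches are plain low-bit masks; the widths below are illustrative:

  unsigned long low8(unsigned long x)  { return x & 0xFFul;       }  /* zxt1 */
  unsigned long low16(unsigned long x) { return x & 0xFFFFul;     }  /* zxt2 */
  unsigned long low32(unsigned long x) { return x & 0xFFFFFFFFul; }  /* zxt4 */
  unsigned long low20(unsigned long x) { return x & 0xFFFFFul;    }  /* dep.z */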
Andrew Lenharth
9be54f8a6a
added s4addl matching test
llvm-svn: 21277
2005-04-13 04:41:06 +00:00
Andrew Lenharth
510db15268
added all flavors of zap for anding
llvm-svn: 21276
2005-04-13 03:47:03 +00:00
Chris Lattner
1a6247ff51
Make expansion of uint->fp cast assert out instead of infinitely recurse.
llvm-svn: 21275
2005-04-13 03:42:14 +00:00
Chris Lattner
85c6a7bed0
Fix some mysteriously missing {}'s which cause the miscompilation of
Olden/mst, Ptrdist/bc, Obsequi, etc.
llvm-svn: 21274
2005-04-13 03:29:53 +00:00
Chris Lattner
63450e87d9
add back the optimization that Nate added for shl X, (zext_inreg y)
llvm-svn: 21273
2005-04-13 02:58:13 +00:00
Chris Lattner
759afe07d7
Oops, remove these too.
llvm-svn: 21272
2005-04-13 02:47:57 +00:00
Chris Lattner
8489ac991d
remove one more occurrence of this that snuck in
llvm-svn: 21271
2005-04-13 02:46:17 +00:00
Chris Lattner
5fdb103328
Remove support for ZERO_EXTEND_INREG. This pessimizes code, generating stuff
like this:
ldah $1,1($31)
lda $1,-1($1)
and $0,$1,$24
instead of this:
zap $0,252,$24
To get this back, the selector should recognize the ISD::AND case where this
happens and emit the appropriate ZAP instruction.
llvm-svn: 21270
2005-04-13 02:43:40 +00:00
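For reference, a software model of the zap semantics (written from the Alpha ISA description, not taken from the backend):

  #include <stdint.h>
  /* zap clears byte i of x when bit i of the mask is set;
     zap $0,252,$24 (mask 0b11111100) clears bytes 2..7, keeping the
     low 16 bits -- exactly the AND shown above. */
  uint64_t zap(uint64_t x, uint8_t mask) {
      for (int i = 0; i < 8; i++)
          if (mask & (1u << i))
              x &= ~(0xFFull << (8 * i));
      return x;
  }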
Chris Lattner
a2e92e69da
Remove special handling of ZERO_EXTEND_INREG. This pessimizes code, causing
things like this:
mov r9 = 65535;;
and r8 = r8, r9;;
To be emitted instead of:
zxt2 r8 = r8;;
To get this back, the selector for ISD::AND should recognize this case.
llvm-svn: 21269
2005-04-13 02:41:52 +00:00
Chris Lattner
26c7c9150a
Eliminate handling of ZERO_EXTEND_INREG. This causes the PPC backend to emit
andi instructions instead of rlwinm instructions for zero extend, but they
seem like they would take the same time.
llvm-svn: 21268
2005-04-13 02:40:26 +00:00
Chris Lattner
f25fefd9cf
Z_E_I is gone
llvm-svn: 21267
2005-04-13 02:39:05 +00:00
Chris Lattner
4f188f949c
Instead of making ZERO_EXTEND_INREG nodes, use the helper method in
SelectionDAG to do the job with AND. Don't legalize Z_E_I anymore as
it is gone
llvm-svn: 21266
2005-04-13 02:38:47 +00:00
Chris Lattner
bce0030a88
Remove all foldings of ZERO_EXTEND_INREG, moving them to work for AND nodes
instead. Overall, this increases the amount of folding we can do.
llvm-svn: 21265
2005-04-13 02:38:18 +00:00
Chris Lattner
41aabb9427
Add a new helper method which returns the AND that is equivalent to what
ZERO_EXTEND_INREG was.
llvm-svn: 21264
2005-04-13 02:37:19 +00:00
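The equivalence the new helper captures, sketched in C (the helper itself lives in SelectionDAG; this mask construction is only an illustration):

  #include <stdint.h>
  /* ZERO_EXTEND_INREG from a width-bit subregister is an AND with
     width low 1-bits, which is why the separate node is redundant */
  uint64_t zext_in_reg(uint64_t x, unsigned width) {
      uint64_t mask = (width >= 64) ? ~0ull : ((1ull << width) - 1);
      return x & mask;
  }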
Chris Lattner
f5fe51581b
Remove the ZERO_EXTEND_INREG node which is redundant with AND
llvm-svn: 21263
2005-04-13 02:36:41 +00:00
Nate Begeman
38d8248a9e
Fold shift x, [sz]ext(y) -> shift x, y
llvm-svn: 21262
2005-04-12 23:32:28 +00:00
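The fold is sound because extending the shift amount never changes its value; in C terms (illustrative):

  /* y is non-negative and in range, so shifting by the widened
     value is identical to shifting by y itself */
  unsigned long long shl_by_ext(unsigned long long x, unsigned char y) {
      return x << (unsigned long long)y;   /* same as x << y */
  }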
Nate Begeman
a56527ea5f
Fold shift by size larger than type size to undef
Make llvm undef values generate ISD::UNDEF nodes
llvm-svn: 21261
2005-04-12 23:12:17 +00:00
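A minimal instance of the pattern (the constant is illustrative):

  /* a 32-bit shift by 40 exceeds the type width: undefined in C and
     undef at the DAG level, so the whole node folds to ISD::UNDEF */
  unsigned oversized(unsigned x) {
      return x << 40;
  }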
Nate Begeman
79c8b8fd1c
Implement setcc op, -1 sequences
Remove dead setcc op, 0 sequences
Coming later: generalization of op, imm
llvm-svn: 21260
2005-04-12 21:22:28 +00:00
Chris Lattner
58f72ab722
promote extload i1 -> extload i8
llvm-svn: 21258
2005-04-12 20:30:10 +00:00
Chris Lattner
7c88662870
add an argument to allow callers to avoid deleting phi nodes.
llvm-svn: 21255
2005-04-12 18:52:14 +00:00
Chris Lattner
ff1eca851a
add an argument.
llvm-svn: 21254
2005-04-12 18:51:53 +00:00
Chris Lattner
ee06161a63
Get rid of this for_each loop
llvm-svn: 21253
2005-04-12 18:51:33 +00:00
Duraid Madina
39fcec1541
* OK, after changing to use liveIn/liveOut instead of IDEFs,
to avoid redundant mov out3=r44 type instructions, we need to
tell the register allocator the truth about out? registers.
FIXME: unfortunately, since the list of allocatable registers is immutable,
we can't simply 'delete r127' from the allocation order, say, if 'out0' is
used. The only correct thing we can do is have a linear order of regs:
out7, out6 ... out2, out1, out0, r32, r33, r34 ... r126, r127
and slide a 'window' of 96 registers along this line, depending on how many
of the out? regs a function actually uses. The only downside of this is
that the out? registers will be allocated _first_, which makes the
resulting assembly ugly. :( Note this in the README. Hope this gets fixed
soon. :) (note the 3rd person speech there)
llvm-svn: 21252
2005-04-12 18:42:59 +00:00
Andrew Lenharth
174d44f223
Get rid of idefs for arguments (oops)
llvm-svn: 21251
2005-04-12 17:47:57 +00:00
Andrew Lenharth
1b8a8331c9
Get rid of idefs for arguments
llvm-svn: 21250
2005-04-12 17:35:16 +00:00
Chris Lattner
be2dceff48
Put out* into the allocation order, allowing the register allocator to
coalesce moves into outgoing args.
llvm-svn: 21249
2005-04-12 15:12:51 +00:00
Chris Lattner
37712352c0
Make sure to realize that calls use their argument regs
llvm-svn: 21248
2005-04-12 15:12:19 +00:00
Duraid Madina
2821f99f19
stop emitting IDEFs for args - change to using liveIn/liveOut
llvm-svn: 21247
2005-04-12 14:54:44 +00:00
Nate Begeman
f96b42f1b6
Initial support for allocating condition registers
llvm-svn: 21246
2005-04-12 07:04:16 +00:00
Chris Lattner
f8d9224d8c
Fix a crash analyzing MultiSource/Benchmarks/MallocBench/gs
llvm-svn: 21245
2005-04-12 03:59:27 +00:00
Chris Lattner
cfc7093ca6
Remove some redundant checks, add a couple of new ones. This allows us to
compile this:
int foo (unsigned long a, unsigned long long g) {
return a >= g;
}
To:
foo:
movl 8(%esp), %eax
cmpl %eax, 4(%esp)
setae %al
cmpl $0, 12(%esp)
sete %cl
andb %al, %cl
movzbl %cl, %eax
ret
instead of:
foo:
movl 8(%esp), %eax
cmpl %eax, 4(%esp)
setae %al
movzbw %al, %cx
movl 12(%esp), %edx
cmpl $0, %edx
sete %al
movzbw %al, %ax
cmpl $0, %edx
cmove %cx, %ax
movzbl %al, %eax
ret
llvm-svn: 21244
2005-04-12 02:54:39 +00:00
Chris Lattner
61f353dbdc
Emit comparisons against the sign bit better. Codegen this:
bool %test1(long %X) {
%A = setlt long %X, 0
ret bool %A
}
like this:
test1:
cmpl $0, 8(%esp)
setl %al
movzbl %al, %eax
ret
instead of:
test1:
movl 8(%esp), %ecx
cmpl $0, %ecx
setl %al
movzbw %al, %ax
cmpl $0, 4(%esp)
setb %dl
movzbw %dl, %dx
cmpl $0, %ecx
cmove %dx, %ax
movzbl %al, %eax
ret
llvm-svn: 21243
2005-04-12 02:19:10 +00:00
Chris Lattner
6cbbb55967
Emit long comparison against -1 better. Instead of this (x86):
test2:
movl 8(%esp), %eax
notl %eax
movl 4(%esp), %ecx
notl %ecx
orl %eax, %ecx
cmpl $0, %ecx
sete %al
movzbl %al, %eax
ret
or this (PPC):
_test2:
nor r2, r4, r4
nor r3, r3, r3
or r2, r2, r3
cntlzw r2, r2
srwi r3, r2, 5
blr
Emit this:
test2:
movl 8(%esp), %eax
andl 4(%esp), %eax
cmpl $-1, %eax
sete %al
movzbl %al, %eax
ret
or this:
_test2:
.LBB_test2_0: ;
and r2, r4, r3
cmpwi cr0, r2, -1
li r3, 1
li r2, 0
beq .LBB_test2_2 ;
.LBB_test2_1: ;
or r3, r2, r2
.LBB_test2_2: ;
blr
it seems like the PPC isel could do better for the R32 == -1 case.
llvm-svn: 21242
2005-04-12 01:46:05 +00:00
Chris Lattner
37534d43d0
canonicalize x <u 1 -> x == 0. On this testcase:
unsigned long long g;
unsigned long foo (unsigned long a) {
return (a >= g) ? 1 : 0;
}
It changes the ppc code from:
_foo:
.LBB_foo_0: ; entry
mflr r11
stw r11, 8(r1)
bl "L00000$pb"
"L00000$pb":
mflr r2
addis r2, r2, ha16(L_g$non_lazy_ptr-"L00000$pb")
lwz r2, lo16(L_g$non_lazy_ptr-"L00000$pb")(r2)
lwz r4, 0(r2)
lwz r2, 4(r2)
cmplw cr0, r3, r2
li r2, 1
li r3, 0
bge .LBB_foo_2 ; entry
.LBB_foo_1: ; entry
or r2, r3, r3
.LBB_foo_2: ; entry
cmplwi cr0, r4, 1
li r3, 1
li r5, 0
blt .LBB_foo_4 ; entry
.LBB_foo_3: ; entry
or r3, r5, r5
.LBB_foo_4: ; entry
cmpwi cr0, r4, 0
beq .LBB_foo_6 ; entry
.LBB_foo_5: ; entry
or r2, r3, r3
.LBB_foo_6: ; entry
rlwinm r3, r2, 0, 31, 31
lwz r11, 8(r1)
mtlr r11
blr
to:
_foo:
.LBB_foo_0: ; entry
mflr r11
stw r11, 8(r1)
bl "L00000$pb"
"L00000$pb":
mflr r2
addis r2, r2, ha16(L_g$non_lazy_ptr-"L00000$pb")
lwz r2, lo16(L_g$non_lazy_ptr-"L00000$pb")(r2)
lwz r4, 0(r2)
lwz r2, 4(r2)
cmplw cr0, r3, r2
li r2, 1
li r3, 0
bge .LBB_foo_2 ; entry
.LBB_foo_1: ; entry
or r2, r3, r3
.LBB_foo_2: ; entry
cntlzw r3, r4
srwi r3, r3, 5
cmpwi cr0, r4, 0
beq .LBB_foo_4 ; entry
.LBB_foo_3: ; entry
or r2, r3, r3
.LBB_foo_4: ; entry
rlwinm r3, r2, 0, 31, 31
lwz r11, 8(r1)
mtlr r11
blr
llvm-svn: 21241
2005-04-12 00:28:49 +00:00
Nate Begeman
a154deaaff
Implement bitfield clears
Implement divide by negative power of two
llvm-svn: 21240
2005-04-12 00:10:02 +00:00
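In C terms, the two new patterns look roughly like this (constants illustrative):

  /* bitfield clear: AND with a mask of contiguous zeros; PPC can do
     this in a single rlwinm since the kept bits form a wrapped run */
  unsigned clear_field(unsigned x) { return x & ~0x00FFFF00u; }
  /* divide by a negative power of two: the power-of-two sequence
     plus a negate */
  int div_m8(int x) { return x / -8; }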
Nate Begeman
f31b58f145
Update PPC readme. Remove things that are done or aren't PPC-specific
llvm-svn: 21232
2005-04-11 20:48:57 +00:00
Chris Lattner
7f0f0854fa
Teach the dag mechanism that this:
long long test2(unsigned A, unsigned B) {
return ((unsigned long long)A << 32) + B;
}
is equivalent to this:
long long test1(unsigned A, unsigned B) {
return ((unsigned long long)A << 32) | B;
}
Now they are both codegen'd to this on ppc:
_test2:
blr
or this on x86:
test2:
movl 4(%esp), %edx
movl 8(%esp), %eax
ret
llvm-svn: 21231
2005-04-11 20:29:59 +00:00
Chris Lattner
71f3d4ce57
Fix expansion of shifts by exactly NVT bits on arches (like X86) that have
masking shifts.
This fixes the miscompilation of this:
long long test1(unsigned A, unsigned B) {
return ((unsigned long long)A << 32) | B;
}
into this:
test1:
movl 4(%esp), %edx
movl %edx, %eax
orl 8(%esp), %eax
ret
allowing us to generate this instead:
test1:
movl 4(%esp), %edx
movl 8(%esp), %eax
ret
llvm-svn: 21230
2005-04-11 20:08:52 +00:00
Chris Lattner
55e620f08d
IA64 supports this operation.
llvm-svn: 21228
2005-04-11 18:55:36 +00:00