Chris Lattner
68e050feda
testcase that crashes lsr, distilled from 175.vpr
...
llvm-svn: 22796
2005-08-16 00:36:12 +00:00
Chris Lattner
543dd2e01c
Turn loop strength reduction on by default.
...
Only run createLowerConstantExpressionsPass for the simple isel. The DAG
isel has no need for it.
llvm-svn: 22794
2005-08-15 23:47:04 +00:00
Chris Lattner
c2cbe96e25
Teach LLVM to know how many times a loop executes when constructed with
...
a < expression, e.g.: for (i = m; i < n; ++i)
llvm-svn: 22793
2005-08-15 23:33:51 +00:00
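For illustration, the trip count of a loop of that form is just the difference of the bounds, clamped at zero; a minimal C++ sketch, assuming the subtraction n - m does not overflow:
// Trip count of: for (i = m; i < n; ++i)
// Illustrative only; assumes n - m does not overflow.
long tripCount(long m, long n) {
  return n > m ? n - m : 0;
}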
Jim Laskey
c501f51bd2
Broke 80 column rule.
...
llvm-svn: 22792
2005-08-15 17:35:26 +00:00
Jim Laskey
92eccb9901
Changed code gen for int to f32 to use rounding. This makes FP results
...
consistent with gcc.
llvm-svn: 22791
2005-08-15 17:14:19 +00:00
Andrew Lenharth
b9c77079dc
isIntImmediate is a good idea. Add a flavor that checks bounds while it is at it.
...
llvm-svn: 22790
2005-08-15 14:31:37 +00:00
Nate Begeman
54423e60c6
Fix last night's PPC32 regressions by
...
1. Not selecting the false value of a select_cc in the false arm, which
isn't legal for nested selects.
2. Actually returning the node we created and Legalized in the FP_TO_UINT
Expander.
llvm-svn: 22789
2005-08-14 18:38:32 +00:00
Nate Begeman
6e0168fe5f
Fix last night's X86 regressions by putting code for SSE in the if(SSE)
...
block.
llvm-svn: 22788
2005-08-14 18:37:02 +00:00
Andrew Lenharth
972c05771c
only build .a on alpha
...
llvm-svn: 22787
2005-08-14 15:14:34 +00:00
Nate Begeman
89f12b7721
Fix FP_TO_UINT with Scalar SSE2 now that the legalizer can handle it. We
...
now generate the relatively good code sequences:
unsigned short foo(float a) { return a; }
_foo:
movss 4(%esp), %xmm0
cvttss2si %xmm0, %eax
movzwl %ax, %eax
ret
and
unsigned bar(float a) { return a; }
_bar:
movss .CPI_bar_0, %xmm0
movss 4(%esp), %xmm1
movapd %xmm1, %xmm2
subss %xmm0, %xmm2
cvttss2si %xmm2, %eax
xorl $-2147483648, %eax
cvttss2si %xmm1, %ecx
ucomiss %xmm0, %xmm1
cmovb %ecx, %eax
ret
llvm-svn: 22786
2005-08-14 04:36:51 +00:00
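The _bar sequence above is the usual bias trick for float-to-unsigned conversion; a rough scalar C++ equivalent, assuming the constant pool entry .CPI_bar_0 holds 2^31 as a float:
#include <cstdint>
// Sketch of what the _bar sequence computes, assuming .CPI_bar_0 is 2^31.
// Values below 2^31 convert directly; larger values are biased down,
// converted as signed, then have the high bit restored with the xor.
uint32_t floatToUint32(float a) {
  const float two31 = 2147483648.0f;                    // movss .CPI_bar_0, %xmm0
  if (a < two31)                                        // ucomiss + cmovb
    return (uint32_t)(int32_t)a;                        // cvttss2si %xmm1, %ecx
  return (uint32_t)(int32_t)(a - two31) ^ 0x80000000u;  // subss, cvttss2si, xorl
}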
Nate Begeman
9be6a214ff
Teach the legalizer how to legalize FP_TO_UINT.
...
Teach the legalizer to promote FP_TO_UINT to FP_TO_SINT if the wider
FP_TO_UINT is also illegal. This allows us on PPC to codegen
unsigned short foo(float a) { return a; }
as:
_foo:
.LBB_foo_0: ; entry
fctiwz f0, f1
stfd f0, -8(r1)
lwz r2, -4(r1)
rlwinm r3, r2, 0, 16, 31
blr
instead of:
_foo:
.LBB_foo_0: ; entry
fctiwz f0, f1
stfd f0, -8(r1)
lwz r2, -4(r1)
lis r3, ha16(.CPI_foo_0)
lfs f0, lo16(.CPI_foo_0)(r3)
fcmpu cr0, f1, f0
blt .LBB_foo_2 ; entry
.LBB_foo_1: ; entry
fsubs f0, f1, f0
fctiwz f0, f0
stfd f0, -16(r1)
lwz r2, -12(r1)
xoris r2, r2, 32768
.LBB_foo_2: ; entry
rlwinm r3, r2, 0, 16, 31
blr
llvm-svn: 22785
2005-08-14 01:20:53 +00:00
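Spelling out why the promotion is safe (a restatement, not text from the commit): every unsigned 16-bit result lies in the positive range of a 32-bit signed conversion, so the narrow FP_TO_UINT can be computed as a wider FP_TO_SINT followed by a mask of the low 16 bits; roughly:
#include <cstdint>
// Why promoting FP_TO_UINT(i16) to FP_TO_SINT(i32) works: the u16 range
// [0, 65535] fits in the positive half of i32, so a signed conversion
// (fctiwz) plus a low-16-bit mask (rlwinm r3, r2, 0, 16, 31) suffices.
uint16_t floatToUint16(float a) {
  int32_t wide = (int32_t)a;
  return (uint16_t)(wide & 0xFFFF);
}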
Nate Begeman
2426bb6589
Make FP_TO_UINT Illegal. This allows us to generate significantly better
...
code for FP_TO_UINT by using the legalizer's SELECT variant.
Implement a codegen improvement for SELECT_CC, selecting the false node in
the MBB that feeds the phi node. This allows us to codegen:
void foo(int *a, int b, int c) { int d = (a < b) ? 5 : 9; *a = d; }
as:
_foo:
li r2, 5
cmpw cr0, r4, r3
bgt .LBB_foo_2 ; entry
.LBB_foo_1: ; entry
li r2, 9
.LBB_foo_2: ; entry
stw r2, 0(r3)
blr
instead of:
_foo:
li r2, 5
li r5, 9
cmpw cr0, r4, r3
bgt .LBB_foo_2 ; entry
.LBB_foo_1: ; entry
or r2, r5, r5
.LBB_foo_2: ; entry
stw r2, 0(r3)
blr
llvm-svn: 22784
2005-08-14 01:17:16 +00:00
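In source-level terms, the SELECT_CC improvement amounts to initializing the result to the true value and overwriting it only on the path that feeds the phi, rather than materializing both values before the branch. An illustrative C++ sketch, not the generated code itself:
// Shape of the improved lowering: the false value is produced only in
// the block that feeds the phi (li r2, 9 after the bgt), instead of
// being staged in a second register up front.
int selectShape(int cond) {
  int d = 5;   // true value, set in the entry block
  if (!cond)   // the bgt skips this when the condition holds
    d = 9;     // false value, only on the false path
  return d;
}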
Andrew Lenharth
5f4a72a18e
Testing a variable before it is defined doesn't work so well. It is a fairly small thing, so just let everyone build the .a file
...
llvm-svn: 22783
2005-08-13 14:58:23 +00:00
Chris Lattner
57a2a74e99
Ooops, don't forget to clear this. The real inner loop is now:
...
.LBB_foo_3: ; no_exit.1
lfd f2, 0(r9)
lfd f3, 8(r9)
fmul f4, f1, f2
fmadd f4, f0, f3, f4
stfd f4, 8(r9)
fmul f3, f1, f3
fmsub f2, f0, f2, f3
stfd f2, 0(r9)
addi r9, r9, 16
addi r8, r8, 1
cmpw cr0, r8, r4
ble .LBB_foo_3 ; no_exit.1
llvm-svn: 22782
2005-08-13 07:42:01 +00:00
Chris Lattner
f59e855dbc
Recursively scan scev expressions for common subexpressions. This allows us
...
to handle nested loops much better, for example, by being able to tell that
these two expressions:
{( 8 + ( 16 * ( 1 + %Tmp11 + %Tmp12)) + %c_),+,( 16 * %Tmp12)}<loopentry.1>
{(( 16 * ( 1 + %Tmp11 + %Tmp12)) + %c_),+,( 16 * %Tmp12)}<loopentry.1>
Have the following common part that can be shared:
{(( 16 * ( 1 + %Tmp11 + %Tmp12)) + %c_),+,( 16 * %Tmp12)}<loopentry.1>
This allows us to codegen an important inner loop in 168.wupwise as:
.LBB_foo_4: ; no_exit.1
lfd f2, 16(r9)
fmul f3, f0, f2
fmul f2, f1, f2
fadd f4, f3, f2
stfd f4, 8(r9)
fsub f2, f3, f2
stfd f2, 16(r9)
addi r8, r8, 1
addi r9, r9, 16
cmpw cr0, r8, r4
ble .LBB_foo_4 ; no_exit.1
instead of:
.LBB_foo_3: ; no_exit.1
lfdx f2, r6, r9
add r10, r6, r9
lfd f3, 8(r10)
fmul f4, f1, f2
fmadd f4, f0, f3, f4
stfd f4, 8(r10)
fmul f3, f1, f3
fmsub f2, f0, f2, f3
stfdx f2, r6, r9
addi r9, r9, 16
addi r8, r8, 1
cmpw cr0, r8, r4
ble .LBB_foo_3 ; no_exit.1
llvm-svn: 22781
2005-08-13 07:27:18 +00:00
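Concretely (an illustrative sketch, not the 168.wupwise source), sharing the common recurrence lets both memory references in the inner loop run off one pointer that advances by 16 bytes per iteration, which is the 8(r9)/16(r9) addressing in the improved loop:
// Rough shape of the strength-reduced inner loop: a single pointer p is
// advanced by two doubles per iteration, and both stores address off it,
// instead of re-deriving each address from the loop counter.
void inner(double *p, double c0, double c1, int n) {
  for (int i = 0; i < n; ++i, p += 2) {
    double x = p[2];          // lfd f2, 16(r9)
    p[1] = c0 * x + c1 * x;   // stfd f4, 8(r9)
    p[2] = c0 * x - c1 * x;   // stfd f2, 16(r9)
  }
}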
Nate Begeman
021a5b3fe1
Remove an unnecessary argument to SimplifySelectCC and add an additional
...
assert when creating a select_cc node.
llvm-svn: 22780
2005-08-13 06:14:17 +00:00
Nate Begeman
4e8f777256
Fix the fabs regression on x86 by abstracting the select_cc optimization
...
out into SimplifySelectCC. This allows both ISD::SELECT and ISD::SELECT_CC
to use the same set of simplifying folds.
llvm-svn: 22779
2005-08-13 06:00:21 +00:00
Nate Begeman
55d6c917c5
Remove support for 64b PPC; it's been broken for a long time. It'll be
...
back once a DAG->DAG ISel exists.
llvm-svn: 22778
2005-08-13 05:59:16 +00:00
Andrew Lenharth
d9a0c60da4
Fix oversized GOT problem with gcc-4 on alpha
...
llvm-svn: 22777
2005-08-13 05:09:50 +00:00
Chris Lattner
2681a83c43
Teach SplitCriticalEdge to update LoopInfo if it is alive. This fixes
...
a problem in LoopStrengthReduction, where it would split critical edges
and then confuse itself with outdated loop information.
llvm-svn: 22776
2005-08-13 01:38:43 +00:00
Chris Lattner
87bcd2794b
remove dead code. The exit block list is computed on demand, thus does not
...
need to be updated. This code is a relic from when it did.
llvm-svn: 22775
2005-08-13 01:30:36 +00:00
Chris Lattner
e06d2c3760
implement a couple of simple shift foldings.
...
e.g. (X & 7) >> 3 -> 0
llvm-svn: 22774
2005-08-12 23:54:58 +00:00
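The reasoning behind that fold: the mask clears every bit at or above the shift amount, so nothing survives the shift. A minimal sketch of the folding condition, with a hypothetical helper name:
#include <cstdint>
// (X & mask) >> shamt is identically zero whenever the mask has no bits
// set at or above the shift amount, e.g. (X & 7) >> 3 -> 0.
bool shiftOfMaskIsZero(uint64_t mask, unsigned shamt) {
  return shamt < 64 && (mask >> shamt) == 0;
}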
Jim Laskey
6b95280f3c
Fix for 2005-08-12-rlwimi-crash.ll. Make allowance for masks being shifted to
...
zero.
llvm-svn: 22773
2005-08-12 23:52:46 +00:00
Jim Laskey
a6aec27dd8
Added test cases to guarantee use of ORC and ANDC.
...
llvm-svn: 22772
2005-08-12 23:40:14 +00:00
Jim Laskey
7b32c9131d
1. This change handles the cases of (~x)&y and x&(~y) yielding ANDC, and
...
(~x)|y and x|(~y) yielding ORC.
llvm-svn: 22771
2005-08-12 23:38:02 +00:00
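Stated as scalar identities: an AND or OR with one complemented operand maps directly onto the PowerPC andc/orc instructions, which fold the NOT away. A small illustrative sketch:
#include <cstdint>
// The patterns now selected: (~x)&y, x&(~y) -> ANDC; (~x)|y, x|(~y) -> ORC.
// andc computes "and with complement", orc computes "or with complement".
uint32_t andc(uint32_t x, uint32_t y) { return x & ~y; }
uint32_t orc(uint32_t x, uint32_t y)  { return x | ~y; }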
Chris Lattner
bdc747bc46
testcase that crashed the ppc backend, distilled from crafty
...
llvm-svn: 22770
2005-08-12 23:34:03 +00:00
Chris Lattner
811ef4cce0
When splitting critical edges, make sure not to leave the new block in the
...
middle of the loop. This turns a critical loop in gzip into this:
.LBB_test_1: ; loopentry
or r27, r28, r28
add r28, r3, r27
lhz r28, 3(r28)
add r26, r4, r27
lhz r26, 3(r26)
cmpw cr0, r28, r26
bne .LBB_test_8 ; loopentry.loopexit_crit_edge
.LBB_test_2: ; shortcirc_next.0
add r28, r3, r27
lhz r28, 5(r28)
add r26, r4, r27
lhz r26, 5(r26)
cmpw cr0, r28, r26
bne .LBB_test_7 ; shortcirc_next.0.loopexit_crit_edge
.LBB_test_3: ; shortcirc_next.1
add r28, r3, r27
lhz r28, 7(r28)
add r26, r4, r27
lhz r26, 7(r26)
cmpw cr0, r28, r26
bne .LBB_test_6 ; shortcirc_next.1.loopexit_crit_edge
.LBB_test_4: ; shortcirc_next.2
add r28, r3, r27
lhz r26, 9(r28)
add r28, r4, r27
lhz r25, 9(r28)
addi r28, r27, 8
cmpw cr7, r26, r25
mfcr r26, 1
rlwinm r26, r26, 31, 31, 31
add r25, r8, r27
cmpw cr7, r25, r7
mfcr r25, 1
rlwinm r25, r25, 29, 31, 31
and. r26, r26, r25
bne .LBB_test_1 ; loopentry
instead of this:
.LBB_test_1: ; loopentry
or r27, r28, r28
add r28, r3, r27
lhz r28, 3(r28)
add r26, r4, r27
lhz r26, 3(r26)
cmpw cr0, r28, r26
beq .LBB_test_3 ; shortcirc_next.0
.LBB_test_2: ; loopentry.loopexit_crit_edge
add r2, r30, r27
add r8, r29, r27
b .LBB_test_9 ; loopexit
.LBB_test_3: ; shortcirc_next.0
add r28, r3, r27
lhz r28, 5(r28)
add r26, r4, r27
lhz r26, 5(r26)
cmpw cr0, r28, r26
beq .LBB_test_5 ; shortcirc_next.1
.LBB_test_4: ; shortcirc_next.0.loopexit_crit_edge
add r2, r11, r27
add r8, r12, r27
b .LBB_test_9 ; loopexit
.LBB_test_5: ; shortcirc_next.1
add r28, r3, r27
lhz r28, 7(r28)
add r26, r4, r27
lhz r26, 7(r26)
cmpw cr0, r28, r26
beq .LBB_test_7 ; shortcirc_next.2
.LBB_test_6: ; shortcirc_next.1.loopexit_crit_edge
add r2, r9, r27
add r8, r10, r27
b .LBB_test_9 ; loopexit
.LBB_test_7: ; shortcirc_next.2
add r28, r3, r27
lhz r26, 9(r28)
add r28, r4, r27
lhz r25, 9(r28)
addi r28, r27, 8
cmpw cr7, r26, r25
mfcr r26, 1
rlwinm r26, r26, 31, 31, 31
add r25, r8, r27
cmpw cr7, r25, r7
mfcr r25, 1
rlwinm r25, r25, 29, 31, 31
and. r26, r26, r25
bne .LBB_test_1 ; loopentry
Next up, improve the code for the loop.
llvm-svn: 22769
2005-08-12 22:22:17 +00:00
Chris Lattner
4e2d62ba5d
Add a helper method
...
llvm-svn: 22768
2005-08-12 22:14:06 +00:00
Chris Lattner
879c4db070
add a helper method
...
llvm-svn: 22767
2005-08-12 22:13:27 +00:00
Chris Lattner
a5c0038c25
Fix a FIXME: if we are inserting code for a PHI argument, split the critical
...
edge so that the code is not always executed for both operands. This
prevents LSR from inserting code into loops whose exit blocks contain
PHI uses of IV expressions (which are outside of loops). On gzip, for
example, we turn this ugly code:
.LBB_test_1: ; loopentry
add r27, r3, r28
lhz r27, 3(r27)
add r26, r4, r28
lhz r26, 3(r26)
add r25, r30, r28 ;; Only live if exiting the loop
add r24, r29, r28 ;; Only live if exiting the loop
cmpw cr0, r27, r26
bne .LBB_test_5 ; loopexit
into this:
.LBB_test_1: ; loopentry
or r27, r28, r28
add r28, r3, r27
lhz r28, 3(r28)
add r26, r4, r27
lhz r26, 3(r26)
cmpw cr0, r28, r26
beq .LBB_test_3 ; shortcirc_next.0
.LBB_test_2: ; loopentry.loopexit_crit_edge
add r2, r30, r27
add r8, r29, r27
b .LBB_test_9 ; loopexit
.LBB_test_3: ; shortcirc_next.0
...
blt .LBB_test_1
Next step: get the block out of the loop so that the loop is all
fall-throughs again.
llvm-svn: 22766
2005-08-12 22:06:11 +00:00
Chris Lattner
7b5d0e1463
Change break critical edges to not remove, then insert, PHI node entries.
...
Instead, just update the BB in-place. This is both faster, and it prevents
split-critical-edges from shuffling the PHI argument list unnecessarily.
llvm-svn: 22765
2005-08-12 21:58:07 +00:00
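The idea, sketched with generic data structures rather than the actual API of the time: with the incoming entries held as (value, block) pairs, retargeting an edge to the new block is a single in-place write, so the entry keeps its position instead of being erased and re-appended:
#include <utility>
#include <vector>
struct Value;
struct BasicBlock;
using Incoming = std::pair<Value *, BasicBlock *>;  // (incoming value, predecessor)
// Illustrative in-place update: point PHI entries for oldPred at the new
// block created on the split edge, without removing and re-inserting them.
void retargetIncoming(std::vector<Incoming> &entries,
                      BasicBlock *oldPred, BasicBlock *newPred) {
  for (Incoming &e : entries)
    if (e.second == oldPred)
      e.second = newPred;
}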
Andrew Lenharth
b9b598d55f
match gcc's use of tabs, makes diffs easier
...
llvm-svn: 22764
2005-08-12 16:14:08 +00:00
Andrew Lenharth
57c0f5da32
.section cleanup, patch from Nicholas Riley
...
llvm-svn: 22763
2005-08-12 16:13:43 +00:00
Chris Lattner
3d791df426
First rev of Xcode 2.1 project
...
llvm-svn: 22762
2005-08-11 22:19:26 +00:00
Jim Laskey
c79447e1ec
1. Added the function isOpcWithIntImmediate to simplify testing of an operand with a
...
specified opcode and an integer constant right operand.
2. Modified ISD::SHL, ISD::SRL, ISD::SRA to use rlwinm when applied after a mask.
llvm-svn: 22761
2005-08-11 21:59:23 +00:00
Chris Lattner
4d353459a3
Tidied up the use of dyn_cast<ConstantSDNode> by using isIntImmediate more.
...
Patch by Jim Laskey.
llvm-svn: 22760
2005-08-11 17:56:50 +00:00
Chris Lattner
5cba0bb5bb
Use a more efficient method of creating integer and float virtual registers
...
(avoids an extra level of indirection in MakeReg).
defined MakeIntReg using RegMap->createVirtualRegister(PPC32::GPRCRegisterClass)
defined MakeFPReg using RegMap->createVirtualRegister(PPC32::FPRCRegisterClass)
s/MakeReg(MVT::i32)/MakeIntReg/
s/MakeReg(MVT::f64)/MakeFPReg/
Patch by Jim Laskey!
llvm-svn: 22759
2005-08-11 17:15:31 +00:00
Nate Begeman
09c56e0432
Add a select_cc optimization for recognizing abs(int). This speeds up an
...
integer MPEG encoding loop by a factor of two.
llvm-svn: 22758
2005-08-11 02:18:13 +00:00
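The pattern being recognized is the source-level integer absolute value, (a < 0) ? -a : a; once it is visible as a single select_cc it can be emitted branch-free. A hedged sketch of the classic shift/xor form, assuming an arithmetic right shift and not necessarily the exact PPC sequence:
// Branch-free abs(int): m is 0 for non-negative a and all-ones for
// negative a (assuming arithmetic right shift), so (a ^ m) - m negates
// a exactly when it is negative.
int absInt(int a) {
  int m = a >> 31;
  return (a ^ m) - m;
}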
Nate Begeman
206e850add
Some SELECT_CC cleanups:
...
1. move assertions for node creation to getNode()
2. legalize the values returned in ExpandOp immediately
3. Move select_cc optimizations from SELECT's getNode() to SELECT_CC's,
allowing them to be cleaned up significantly.
This paves the way to pick up additional optimizations on SELECT_CC, such
as sum-of-absolute-differences.
llvm-svn: 22757
2005-08-11 01:12:20 +00:00
Nate Begeman
23479935cc
Make SELECT illegal on PPC32, switch to using SELECT_CC, which more closely
...
reflects what the hardware is capable of. This significantly simplifies
the CC handling logic throughout the ISel.
llvm-svn: 22756
2005-08-10 20:52:09 +00:00
Nate Begeman
eddc9d4856
Add a new node, SELECT_CC. This node is for targets that don't natively
...
implement SELECT.
llvm-svn: 22755
2005-08-10 20:51:12 +00:00
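For orientation, the semantics of the new node, restated: SELECT_CC bundles the comparison and the selection together, so a target with no native select can pattern-match the whole operation at once. Roughly, for the less-than condition code:
// SELECT_CC(lhs, rhs, tval, fval, cc) evaluates the comparison under cc
// and yields tval or fval; shown here for cc == SETLT.
int selectCCLess(int lhs, int rhs, int tval, int fval) {
  return (lhs < rhs) ? tval : fval;
}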
Chris Lattner
512a5e507e
Changes for PPC32ISelPattern.cpp
...
1. Clean up how SelectIntImmediateExpr handles use counts.
2. "Subtract from" was not clearing hi 16 bits.
Patch by Jim Laskey
llvm-svn: 22754
2005-08-10 18:11:33 +00:00
Chris Lattner
51cf9fd316
Fix an oversight that may be causing PR617.
...
llvm-svn: 22753
2005-08-10 17:37:53 +00:00
Chris Lattner
6df80b5d49
now that we handle non-constant strides, this testcase passes
...
llvm-svn: 22752
2005-08-10 17:17:45 +00:00
Chris Lattner
67cef1a1d8
remove some trickiness that broke yacr2 and some other programs last night
...
llvm-svn: 22751
2005-08-10 17:15:20 +00:00
Chris Lattner
91f83576d8
Changed the XOR case to use the isOprNot predicate.
...
Patch by Jim Laskey!
llvm-svn: 22750
2005-08-10 16:35:46 +00:00
Chris Lattner
ad6d368eee
1. Refactored handling of integer immediate values for add, or, xor and sub.
...
New routine: ISel::SelectIntImmediateExpr
2. Now checking use counts of large constants. If use count is > 2 then drop
thru so that the constant gets loaded into a register.
Source:
int %test1(int %a) {
entry:
%tmp.1 = add int %a, 123456789 ; <int> [#uses=1]
%tmp.2 = or int %tmp.1, 123456789 ; <int> [#uses=1]
%tmp.3 = xor int %tmp.2, 123456789 ; <int> [#uses=1]
%tmp.4 = sub int %tmp.3, -123456789 ; <int> [#uses=1]
ret int %tmp.4
}
Did Emit:
.machine ppc970
.text
.align 2
.globl _test1
_test1:
.LBB_test1_0: ; entry
addi r2, r3, -13035
addis r2, r2, 1884
ori r2, r2, 52501
oris r2, r2, 1883
xori r2, r2, 52501
xoris r2, r2, 1883
addi r2, r2, 52501
addis r3, r2, 1883
blr
Now Emits:
.machine ppc970
.text
.align 2
.globl _test1
_test1:
.LBB_test1_0: ; entry
lis r2, 1883
ori r2, r2, 52501
add r3, r3, r2
or r3, r3, r2
xor r3, r3, r2
add r3, r3, r2
blr
Patch by Jim Laskey!
llvm-svn: 22749
2005-08-10 16:34:52 +00:00
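The use-count heuristic in item 2, restated: a 32-bit constant too large for one immediate costs two instructions (addi/addis, ori/oris, ...) every time it is folded into a user, so beyond two uses it is cheaper to materialize it once (lis + ori) and reuse the register, which is the before/after shown above. A hedged sketch of the comparison, with hypothetical costs:
// Hypothetical cost model for the use-count check: folding the large
// constant costs two immediate instructions per use, while materializing
// it costs two instructions once (lis + ori) plus one reg-reg op per use.
bool shouldLoadConstantIntoRegister(unsigned numUses) {
  unsigned foldedCost = 2 * numUses;
  unsigned materializedCost = 2 + numUses;
  return numUses > 2 && materializedCost < foldedCost;
}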
Duraid Madina
6325af5006
sorry!! this is temporary; for some reason the nasty constmul code seems to
...
be stuck in an infinite loop when using g++-4.0.1*, which kills the ia64 nightly
tester. A proper fix shall be forthcoming!!! thanks for not killing me. :)
llvm-svn: 22748
2005-08-10 12:38:57 +00:00
Chris Lattner
74acf5edc8
Fix a bug compiling: select (i32 < i32), f32, f32
...
llvm-svn: 22747
2005-08-10 03:40:09 +00:00
Chris Lattner
179fc33e59
Make loop-simplify produce better loops by turning PHI nodes like X = phi [X, Y]
...
into just Y. This often occurs when it separates loops that have collapsed loop
headers. This implements LoopSimplify/phi-node-simplify.ll
llvm-svn: 22746
2005-08-10 02:07:32 +00:00
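The PHI simplification, restated: a node of the form X = phi [X, Y] can only ever hold Y, so X can be replaced by Y everywhere. A minimal sketch over a list of incoming values, with hypothetical types:
#include <vector>
struct Value;
// If a PHI's incoming values are only the PHI itself plus one other
// value Y, the PHI is equivalent to Y; otherwise return null.
Value *simplifySelfPhi(Value *phi, const std::vector<Value *> &incoming) {
  Value *other = nullptr;
  for (Value *v : incoming) {
    if (v == phi)
      continue;                  // ignore the self-reference
    if (other && other != v)
      return nullptr;            // two distinct non-self values: keep the PHI
    other = v;
  }
  return other;
}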