Reid Spencer
8d035f492c
Fix a doxygen problem and break lines at 80 columns
...
llvm-svn: 28395
2006-05-19 19:09:46 +00:00
Chris Lattner
cc9a99f371
Declare that lowerinvoke doesn't interact with other lowering passes.
...
Patch written by Domagoj Babic!
llvm-svn: 28367
2006-05-17 21:05:27 +00:00
Chris Lattner
9a0d02f8f7
Add a CloneModule call that exposes the mapping of values from the old module
...
to the new module. Patch provided by Nick Lewycky!
llvm-svn: 28349
2006-05-17 18:05:35 +00:00
Chris Lattner
5aa7f78065
remove some dead code identified by Coverity
...
llvm-svn: 28289
2006-05-14 18:45:44 +00:00
Chris Lattner
4ad747c469
remove dead variables
...
llvm-svn: 28286
2006-05-14 18:33:57 +00:00
Evan Cheng
111642322d
Backing out last check-in for now. It's causing an infinite loop in gccas on lencode.
...
llvm-svn: 28284
2006-05-14 06:46:03 +00:00
Chris Lattner
c927dced9e
Add/Sub/Mul are safe to promote here as well. Incrementing a single-bit
...
bitfield now gives this code:
_plus:
lwz r2, 0(r3)
rlwimi r2, r2, 0, 1, 31
xoris r2, r2, 32768
stw r2, 0(r3)
blr
instead of this:
_plus:
lwz r2, 0(r3)
srwi r4, r2, 31
slwi r4, r4, 31
addis r4, r4, -32768
rlwimi r2, r4, 0, 0, 0
stw r2, 0(r3)
blr
This can obviously still be improved.
llvm-svn: 28275
2006-05-13 02:16:08 +00:00
Chris Lattner
eea864472d
Implement simple promotion for cast elimination in instcombine. This is
...
currently very limited, but can be extended in the future. For example,
we now compile:
uint %test30(uint %c1) {
%c2 = cast uint %c1 to ubyte
%c3 = xor ubyte %c2, 1
%c4 = cast ubyte %c3 to uint
ret uint %c4
}
to:
_xor:
movzbl 4(%esp), %eax
xorl $1, %eax
ret
instead of:
_xor:
movb $1, %al
xorb 4(%esp), %al
movzbl %al, %eax
ret
More impressively, we now compile:
struct B { unsigned bit : 1; };
void xor(struct B *b) { b->bit = b->bit ^ 1; }
To (X86/PPC):
_xor:
movl 4(%esp), %eax
xorl $-2147483648, (%eax)
ret
_xor:
lwz r2, 0(r3)
xoris r2, r2, 32768
stw r2, 0(r3)
blr
instead of (X86/PPC):
_xor:
movl 4(%esp), %eax
movl (%eax), %ecx
movl %ecx, %edx
shrl $31, %edx
# TRUNCATE movb %dl, %dl
xorb $1, %dl
movzbl %dl, %edx
andl $2147483647, %ecx
shll $31, %edx
orl %ecx, %edx
movl %edx, (%eax)
ret
_xor:
lwz r2, 0(r3)
srwi r4, r2, 31
xori r4, r4, 1
rlwimi r2, r4, 31, 0, 0
stw r2, 0(r3)
blr
This implements InstCombine/cast.ll:test30.
llvm-svn: 28273
2006-05-13 02:06:03 +00:00
Chris Lattner
4bbc1d8e95
Remove some dead variables.
...
Fix a nasty bug in the memcmp optimizer where we used the wrong variable!
llvm-svn: 28269
2006-05-12 23:35:26 +00:00
Chris Lattner
08efc01479
Remove dead stuff
...
llvm-svn: 28268
2006-05-12 23:32:01 +00:00
Chris Lattner
3fad520c62
Refactor some code, making it simpler.
...
When doing the initial pass of constant folding, if we get a constantexpr,
simplify the constant expr like we would do if the constant is folded in the
normal loop.
This fixes the missed-optimization regression in
Transforms/InstCombine/getelementptr.ll last night.
llvm-svn: 28224
2006-05-11 17:11:52 +00:00
Chris Lattner
e8fe3f2a08
Two changes:
...
1. Implement InstCombine/deadcode.ll by not adding instructions in unreachable
blocks (due to constants in conditional branches/switches) to the worklist.
This causes them to be deleted before instcombine starts up, leading to
better optimization.
2. In the prepass over instructions, do trivial constprop/dce as we go. This
has the effect of improving the effectiveness of #1. In addition, it
*significantly* speeds up instcombine on test cases with large amounts of
constant folding code (for example, that produced by code specialization
or partial evaluation). In one example, it speeds up instcombine from
0.0589s to 0.0224s with a release build (a 2.6x speedup).
llvm-svn: 28215
2006-05-10 19:00:36 +00:00
Chris Lattner
f49a22d601
Patch to make some xforms preserve each other. Patch contributed by
...
Domagoj Babic!
llvm-svn: 28181
2006-05-09 04:13:41 +00:00
Chris Lattner
7661770087
Move some code around.
...
Make the "fold (and (cast A), (cast B)) -> (cast (and A, B))" transformation
only apply when both casts really will cause code to be generated. If one or
both doesn't, then this xform doesn't remove a cast.
This fixes Transforms/InstCombine/2006-05-06-Infloop.ll
llvm-svn: 28141
2006-05-06 09:00:16 +00:00
Chris Lattner
73bfc4c2ea
Fix an infinite loop compiling oggenc last night.
...
llvm-svn: 28128
2006-05-05 20:51:30 +00:00
Chris Lattner
95637c4889
Implement InstCombine/cast.ll:test29
...
llvm-svn: 28126
2006-05-05 06:39:07 +00:00
Chris Lattner
55938f67ae
Fix Transforms/InstCombine/2006-05-04-DemandedBitCrash.ll
...
llvm-svn: 28101
2006-05-04 17:33:35 +00:00
Chris Lattner
1275941193
Add pass IDs for various passes, so they can be used with AddRequiredID. Patch by
...
Domagoj Babic!
llvm-svn: 28048
2006-05-02 04:24:36 +00:00
Chris Lattner
4c79d3b238
Fix InstCombine/2006-04-28-ShiftShiftLongLong.ll
...
llvm-svn: 28019
2006-04-28 22:21:41 +00:00
Chris Lattner
2118062b9c
Fix Transforms/Reassociate/2006-04-27-ReassociateVector.ll
...
llvm-svn: 28007
2006-04-28 04:14:49 +00:00
Chris Lattner
3f6a151e2d
Add support for inserting undef into a vector. This implements
...
Transforms/InstCombine/vec_insert_to_shuffle.ll
llvm-svn: 27997
2006-04-27 21:14:21 +00:00
Chris Lattner
be67e21327
Fix some nondeterministic behavior in the mem2reg pass that (in addition to
...
nondeterminism being bad) could cause some trivial missed optimizations (dead
phi nodes being left around for later passes to clean up).
With this, llvm-gcc4 now bootstraps and correctly compares. I don't know
why I never tried to do it before... :)
llvm-svn: 27984
2006-04-27 01:14:43 +00:00
Chris Lattner
01afcd337a
Fix Transforms/ScalarRepl/2006-04-20-PromoteCrash.ll
...
llvm-svn: 27912
2006-04-20 20:48:50 +00:00
Andrew Lenharth
e2b150550a
Make code match cvs commit message :)
...
llvm-svn: 27881
2006-04-20 15:41:37 +00:00
Andrew Lenharth
8f08647a6d
If we can convert the return pointer type into an integer that IntPtrType
...
can be converted to losslessly, we can continue the conversion to a direct call.
llvm-svn: 27880
2006-04-20 14:56:47 +00:00
Chris Lattner
76beeb373d
Turn x86 unaligned load/store intrinsics into aligned load/store instructions
...
if the pointer is known aligned.
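For illustration only (a hedged sketch, not code from this change), C with the usual <xmmintrin.h> SSE wrappers, where the pointer is provably 16-byte aligned:
#include <stdio.h>
#include <xmmintrin.h>
int main(void) {
  /* buf is 16-byte aligned, so the "unaligned" load/store intrinsics on it
     are candidates for plain aligned vector loads/stores. */
  __attribute__((aligned(16))) float buf[4] = { 1.0f, 2.0f, 3.0f, 4.0f };
  __m128 v = _mm_loadu_ps(buf);
  _mm_storeu_ps(buf, _mm_add_ps(v, v));
  printf("%f %f %f %f\n", buf[0], buf[1], buf[2], buf[3]);
  return 0;
}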
llvm-svn: 27781
2006-04-17 22:26:56 +00:00
Chris Lattner
4422d3de1b
Fix a bug in the 'shuffle(undef,x,mask) -> shuffle(x, undef,mask')' xform
...
Make the insert/extract elt -> shuffle code more aggressive.
This fixes CodeGen/PowerPC/vec_shuffle.ll
llvm-svn: 27728
2006-04-16 00:51:47 +00:00
Chris Lattner
da260db137
Canonicalize shuffle(undef,x,mask) -> shuffle(x, undef,mask').
...
llvm-svn: 27727
2006-04-16 00:03:56 +00:00
Chris Lattner
055889cfb9
significant cleanups to code that uses insert/extractelt heavily. This builds
...
maximal shuffles out of them where possible.
llvm-svn: 27717
2006-04-15 01:39:45 +00:00
Chris Lattner
b3cae60d0b
Teach scalarrepl to promote unions of vectors and floats, producing
...
insert/extractelement operations. This implements
Transforms/ScalarRepl/vector_promote.ll
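A hedged C sketch (using GCC-style vector types; not code from the change) of the kind of union this promotion targets:
#include <stdio.h>
typedef float v4sf __attribute__((vector_size(16)));
union VecOrFloats {
  v4sf  v;
  float f[4];
};
/* Stores through 'f' plus a vector-typed use of 'v': once scalarrepl
   promotes the union, these become insert/extractelement operations
   instead of memory traffic through a stack slot. */
static float splat_and_double(float x) {
  union VecOrFloats u;
  u.f[0] = x;
  u.f[1] = x;
  u.f[2] = x;
  u.f[3] = x;
  u.v = u.v + u.v;   /* vector add forces a <4 x float>-typed value */
  return u.f[0] + u.f[1] + u.f[2] + u.f[3];
}
int main(void) {
  printf("%f\n", splat_and_double(1.25f));
  return 0;
}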
llvm-svn: 27710
2006-04-14 21:42:41 +00:00
Andrew Lenharth
bffed48656
linear -> constant time
...
llvm-svn: 27652
2006-04-13 13:43:31 +00:00
Reid Spencer
56aa7c79b7
Get rid of a signed/unsigned compare warning.
...
llvm-svn: 27625
2006-04-12 19:28:15 +00:00
Chris Lattner
7900e6da3b
Turn casts into getelementptr's when possible. This enables SROA to be more
...
aggressive in some cases where LLVMGCC 4 is inserting casts for no reason.
This implements InstCombine/cast.ll:test27/28.
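A hedged C sketch of the kind of front-end cast this covers (a pointer to an aggregate cast to a pointer to its first element):
#include <stdio.h>
struct Pair { int first; int second; };
int main(void) {
  struct Pair p = { 1, 2 };
  /* A front end may lower this as a plain pointer cast; expressing it as a
     getelementptr to the first field instead lets passes like SROA see
     which element is really being addressed. */
  int *first = (int *)&p;
  printf("%d\n", *first);
  return 0;
}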
llvm-svn: 27620
2006-04-12 18:09:35 +00:00
Chris Lattner
ec4fbd3b41
Implement vec_shuffle.ll:test3
...
llvm-svn: 27573
2006-04-10 23:06:36 +00:00
Chris Lattner
42be18f65f
Implement InstCombine/vec_shuffle.ll:test[12]
...
llvm-svn: 27571
2006-04-10 22:45:52 +00:00
Andrew Lenharth
b3f434b83d
Add a simple pass to make sure that all (non-library) calls to malloc and free
...
are visible to analysis as intrinsics. That is, make sure someone doesn't pass
free around by address in some struct (as happens in, say, 176.gcc).
This doesn't get rid of any indirect calls, just ensures calls to free and malloc
are always direct.
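A hedged C sketch of the indirect-call pattern in question (hypothetical, not taken from 176.gcc):
#include <stdlib.h>
/* Some programs stash malloc/free behind function pointers in a struct,
   so the calls the optimizer sees are indirect. */
struct Allocator {
  void *(*alloc)(size_t);
  void  (*release)(void *);
};
static struct Allocator hooks = { malloc, free };
int main(void) {
  int *p = hooks.alloc(sizeof *p);   /* indirect call to malloc */
  if (p) {
    *p = 42;
    hooks.release(p);                /* indirect call to free   */
  }
  return 0;
}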
llvm-svn: 27560
2006-04-10 19:26:09 +00:00
Chris Lattner
a0a718c0cc
Add support for shufflevector
...
llvm-svn: 27513
2006-04-08 01:19:12 +00:00
Chris Lattner
32b65613d9
Fix inlining of insert/extract element constantexprs
...
llvm-svn: 27478
2006-04-07 04:41:03 +00:00
Chris Lattner
bc0489232b
Lower vperm(x,y, mask) -> shuffle(x,y,mask) if mask is constant. This allows
...
us to compile oh-so-realistic stuff like this:
vec_vperm(A, B, (vector unsigned char){14});
to:
vspltb v0, v0, 14
instead of:
vspltisb v0, 14
vperm v0, v2, v1, v0
llvm-svn: 27452
2006-04-06 19:19:17 +00:00
Chris Lattner
42a1e621f1
vector casts of casts are eliminable. Transform this:
...
%tmp = cast <4 x uint> %tmp to <4 x int> ; <<4 x int>> [#uses=1]
%tmp = cast <4 x int> %tmp to <4 x float> ; <<4 x float>> [#uses=1]
into:
%tmp = cast <4 x uint> %tmp to <4 x float> ; <<4 x float>> [#uses=1]
llvm-svn: 27355
2006-04-02 05:43:13 +00:00
Chris Lattner
3c994295fe
Allow transforming this:
...
%tmp = cast <4 x uint>* %testData to <4 x int>* ; <<4 x int>*> [#uses=1]
%tmp = load <4 x int>* %tmp ; <<4 x int>> [#uses=1]
to this:
%tmp = load <4 x uint>* %testData ; <<4 x uint>> [#uses=1]
%tmp = cast <4 x uint> %tmp to <4 x int> ; <<4 x int>> [#uses=1]
llvm-svn: 27353
2006-04-02 05:37:12 +00:00
Chris Lattner
cb26b2dfe8
Turn altivec lvx/stvx intrinsics into loads and stores. This allows the
...
elimination of one load from this:
int AreSecondAndThirdElementsBothNegative( vector float *in ) {
#define QNaN 0x7FC00000
const vector unsigned int testData = (vector unsigned int)( QNaN, 0, 0, QNaN );
vector float test = vec_ld( 0, (float*) &testData );
return ! vec_any_ge( test, *in );
}
Now generating:
_AreSecondAndThirdElementsBothNegative:
mfspr r2, 256
oris r4, r2, 49152
mtspr 256, r4
li r4, lo16(LCPI1_0)
lis r5, ha16(LCPI1_0)
addi r6, r1, -16
lvx v0, r5, r4
stvx v0, 0, r6
lvx v1, 0, r3
vcmpgefp. v0, v0, v1
mfcr r3, 2
rlwinm r3, r3, 27, 31, 31
xori r3, r3, 1
cntlzw r3, r3
srwi r3, r3, 5
mtspr 256, r2
blr
llvm-svn: 27352
2006-04-02 05:30:25 +00:00
Chris Lattner
e314cf19ba
Adjust to change in Intrinsics.gen interface.
...
llvm-svn: 27344
2006-04-02 03:35:01 +00:00
Chris Lattner
704770bfe7
add valuemapper support for inline asm
...
llvm-svn: 27332
2006-04-01 23:17:11 +00:00
Chris Lattner
c2e9b030da
Fix InstCombine/2006-04-01-InfLoop.ll
...
llvm-svn: 27330
2006-04-01 22:05:01 +00:00
Chris Lattner
497bbd4650
Fold A^(B&A) -> (B&A)^A
...
Fold (B&A)^A == ~B & A
This implements InstCombine/xor.ll:test2[56]
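For illustration, a minimal C check of the underlying identity (an added sketch, not part of the change):
#include <assert.h>
#include <stdio.h>
int main(void) {
  /* Exhaustively verify a ^ (b & a) == ~b & a over 8-bit values. */
  for (unsigned a = 0; a < 256; ++a)
    for (unsigned b = 0; b < 256; ++b) {
      unsigned lhs = (a ^ (b & a)) & 0xffu;
      unsigned rhs = (~b & a) & 0xffu;
      assert(lhs == rhs);
    }
  printf("identity a^(b&a) == ~b&a holds for all 8-bit values\n");
  return 0;
}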
llvm-svn: 27328
2006-04-01 08:03:55 +00:00
Chris Lattner
79819f52dc
If we can look through vector operations to find the scalar version of an
...
extract_element'd value, do so.
llvm-svn: 27323
2006-03-31 23:01:56 +00:00
Chris Lattner
0af2e8be73
extractelement(undef,x) -> undef
...
llvm-svn: 27300
2006-03-31 18:25:14 +00:00
Chris Lattner
e57e873543
Fix Transforms/InstCombine/2006-03-30-ExtractElement.ll
...
llvm-svn: 27261
2006-03-30 22:02:40 +00:00
Chris Lattner
dc5d97a341
teach the inliner to work with packed constants
...
llvm-svn: 27161
2006-03-27 05:50:18 +00:00