Owen Anderson
a857559867
Major think-o. Iterate over all live out-of-loop values, and perform the
other calculations on each individually, rather than trying to delay it and do
them all at the end.
llvm-svn: 28527
2006-05-28 19:33:28 +00:00
Owen Anderson
703f6baab0
Make LCSSA insert proper Phi nodes throughout the rest of the CFG by computing
the iterated Dominance Frontier of the loop-closure Phi's. This is the
second phase of the LCSSA pass. The third phase (coming soon) will be to
update all uses of loop variables to use the loop-closure Phi's instead.
llvm-svn: 28524
2006-05-27 18:47:11 +00:00
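For illustration only (not part of the commit), a minimal C++ sketch of the
iterated-dominance-frontier computation described above, using a toy CFG of
integer block IDs and a precomputed dominance-frontier map rather than the
real LLVM BasicBlock/DominanceFrontier types:
#include <map>
#include <set>
#include <vector>
// Given each block's dominance frontier DF and a seed set S (here, the blocks
// containing loop-closure Phi's), compute the iterated dominance frontier
// IDF(S) = DF(S) U DF(DF(S)) U ... until a fixed point is reached.
std::set<int> iteratedDominanceFrontier(const std::map<int, std::set<int>> &DF,
                                        const std::set<int> &S) {
  std::set<int> IDF;
  std::vector<int> Worklist(S.begin(), S.end());
  while (!Worklist.empty()) {
    int N = Worklist.back();
    Worklist.pop_back();
    auto It = DF.find(N);
    if (It == DF.end())
      continue;
    for (int F : It->second)
      if (IDF.insert(F).second)   // newly added frontier blocks get re-processed
        Worklist.push_back(F);
  }
  return IDF;
}
The blocks in the returned set are where the additional Phi nodes for the
loop-closed values would be placed.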
Chris Lattner
0189e09b89
Fix some regressions from the inliner patch I committed last night. This fixes
ldecod, lencod, and SPASS.
llvm-svn: 28523
2006-05-27 17:28:13 +00:00
Chris Lattner
04d52ee9a2
Switch the inliner over to using CloneAndPruneFunctionInto. This effectively
makes it so that it constant folds instructions on the fly. This is good
for several reasons:
0. Many instructions are constant foldable after inlining, particularly if
inlining a call with constant arguments.
1. Without this, the inliner has to allocate memory for all of the instructions
that can be constant folded, then a subsequent pass has to delete them. This
gets the job done without this extra work.
2. This makes the inliner *pass* a bit more aggressive: in particular, it
partially solves a phase order issue where the inliner would inline lots
of code that folds away to nothing, but think that the resultant function
is big because of this code that will be gone. Now the code never exists.
This is the first part of a 2-step process. The second part will be smart
enough to see when this implicit constant folding propagates a constant into
a branch or switch instruction, making CFG edges dead.
This implements Transforms/Inline/inline_constprop.ll
llvm-svn: 28521
2006-05-27 01:28:04 +00:00
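For illustration only (this example is not from the commit), here is roughly
what point 2 means at the source level: inlining a call with a constant
argument lets the branch fold as the body is cloned, so the dead code is never
materialized.
// Hypothetical example: after inlining scale(x, 0), the comparison k == 0 is a
// constant, the multiply is never emitted, and f() reduces to "return 0".
static inline int scale(int x, int k) {
  if (k == 0)
    return 0;
  return x * k;
}
int f(int x) { return scale(x, 0); }
The planned second step described above is what would then also delete the
dead branch arm once the condition has folded to a constant.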
Chris Lattner
12c9d54f79
Implement a new method, CloneAndPruneFunctionInto, as documented.
llvm-svn: 28519
2006-05-27 01:22:24 +00:00
Chris Lattner
1835cfb302
Refactor some code to expose an interface to constant fold an instruction given its opcode, type, and operands.
llvm-svn: 28517
2006-05-27 01:18:04 +00:00
Owen Anderson
1843c1ee17
A few small clean-ups, and the addition of an LCSSA statistic.
llvm-svn: 28512
2006-05-27 00:31:37 +00:00
Owen Anderson
d706fc78b2
Fix a copy-and-paste-o that would break some compilers.
llvm-svn: 28507
2006-05-26 21:19:17 +00:00
Owen Anderson
2414055ac6
Clean up and refactor LCSSA a bunch. It should also run faster now, though
there's still a lot of work to be done on it.
llvm-svn: 28506
2006-05-26 21:11:53 +00:00
Chris Lattner
0043931185
Implement Transforms/InstCombine/store.ll:test2.
llvm-svn: 28503
2006-05-26 19:19:20 +00:00
Owen Anderson
93098cfc4c
Skeletal LCSSA pass. This is currently non-functional. Expect functionality
and documentation updates soon.
llvm-svn: 28495
2006-05-26 13:58:26 +00:00
Chris Lattner
c6c2770e08
Transform things like (splat(splat)) -> splat
llvm-svn: 28490
2006-05-26 00:29:06 +00:00
Chris Lattner
261299e3f5
Introduce a helper function that simplifies interpretation of shuffle masks.
No functionality change.
llvm-svn: 28489
2006-05-25 23:48:38 +00:00
Chris Lattner
c678c720a7
Turn (cast (shuffle (cast))) -> shuffle (cast) if it reduces the # casts in
the program. This exposes more opportunities for the instcombiner, and implements
vec_shuffle.ll:test6
llvm-svn: 28487
2006-05-25 23:24:33 +00:00
Chris Lattner
5df88a112b
An extractelement from a shufflevector can be trivially turned into an
extractelement from the SV's source. This implements vec_shuffle.ll:test[45]
llvm-svn: 28485
2006-05-25 22:53:38 +00:00
Chris Lattner
d3eff919d8
Revert a patch that is unsafe, due to out of range array accesses in inner
array scopes possibly accessing valid memory in outer subscripts.
llvm-svn: 28478
2006-05-25 21:25:12 +00:00
Chris Lattner
0b38bc2a99
Patch for a new instcombine xform, patch contributed by Nick Lewycky!
This implements Transforms/InstCombine/2006-05-10-InvalidIndexUndef.ll
llvm-svn: 28450
2006-05-24 17:34:30 +00:00
Chris Lattner
f604017e47
Patches to make the LLVM sources more -pedantic clean. Patch provided
by Anton Korobeynikov! This is a step towards closing PR786.
llvm-svn: 28447
2006-05-24 17:04:05 +00:00
Chris Lattner
1ddd46999b
Silence a bogus gcc warning
llvm-svn: 28422
2006-05-20 23:14:03 +00:00
Reid Spencer
8d035f492c
Fix a doxygen problem and break lines at 80 columns
llvm-svn: 28395
2006-05-19 19:09:46 +00:00
Chris Lattner
cc9a99f371
Declare that lowerinvoke doesn't interact with other lowering passes.
Patch written by Domagoj Babic!
llvm-svn: 28367
2006-05-17 21:05:27 +00:00
Chris Lattner
9a0d02f8f7
Add a CloneModule call that exposes the mapping of values from the old module
to the new module. Patch provided by Nick Lewycky!
llvm-svn: 28349
2006-05-17 18:05:35 +00:00
Chris Lattner
5aa7f78065
remove some dead code identified by coverity
llvm-svn: 28289
2006-05-14 18:45:44 +00:00
Chris Lattner
4ad747c469
remove dead variables
llvm-svn: 28286
2006-05-14 18:33:57 +00:00
Evan Cheng
111642322d
Backing out last check-in for now. It's causing an infinite loop when running gccas on lencode.
llvm-svn: 28284
2006-05-14 06:46:03 +00:00
Chris Lattner
c927dced9e
Add/Sub/Mul are safe to promote here as well. Incrementing a single-bit
bitfield now gives this code:
_plus:
lwz r2, 0(r3)
rlwimi r2, r2, 0, 1, 31
xoris r2, r2, 32768
stw r2, 0(r3)
blr
instead of this:
_plus:
lwz r2, 0(r3)
srwi r4, r2, 31
slwi r4, r4, 31
addis r4, r4, -32768
rlwimi r2, r4, 0, 0, 0
stw r2, 0(r3)
blr
this can obviously still be improved.
llvm-svn: 28275
2006-05-13 02:16:08 +00:00
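The C source for the code above is not included in the message; presumably it
is a one-bit bitfield increment along the lines of this reconstructed
(unverified) snippet, analogous to the xor example in r28273 below:
struct B { unsigned bit : 1; };
// Reconstructed guess at the test: incrementing a one-bit field, which the
// promoted Add/Sub/Mul xform can now lower without the shift/add sequence.
void plus(struct B *b) { b->bit = b->bit + 1; }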
Chris Lattner
eea864472d
Implement simple promotion for cast elimination in instcombine. This is
currently very limited, but can be extended in the future. For example,
we now compile:
uint %test30(uint %c1) {
%c2 = cast uint %c1 to ubyte
%c3 = xor ubyte %c2, 1
%c4 = cast ubyte %c3 to uint
ret uint %c4
}
to:
_xor:
movzbl 4(%esp), %eax
xorl $1, %eax
ret
instead of:
_xor:
movb $1, %al
xorb 4(%esp), %al
movzbl %al, %eax
ret
More impressively, we now compile:
struct B { unsigned bit : 1; };
void xor(struct B *b) { b->bit = b->bit ^ 1; }
To (X86/PPC):
_xor:
movl 4(%esp), %eax
xorl $-2147483648, (%eax)
ret
_xor:
lwz r2, 0(r3)
xoris r2, r2, 32768
stw r2, 0(r3)
blr
instead of (X86/PPC):
_xor:
movl 4(%esp), %eax
movl (%eax), %ecx
movl %ecx, %edx
shrl $31, %edx
# TRUNCATE movb %dl, %dl
xorb $1, %dl
movzbl %dl, %edx
andl $2147483647, %ecx
shll $31, %edx
orl %ecx, %edx
movl %edx, (%eax)
ret
_xor:
lwz r2, 0(r3)
srwi r4, r2, 31
xori r4, r4, 1
rlwimi r2, r4, 31, 0, 0
stw r2, 0(r3)
blr
This implements InstCombine/cast.ll:test30.
llvm-svn: 28273
2006-05-13 02:06:03 +00:00
Chris Lattner
4bbc1d8e95
Remove some dead variables.
Fix a nasty bug in the memcmp optimizer where we used the wrong variable!
llvm-svn: 28269
2006-05-12 23:35:26 +00:00
Chris Lattner
08efc01479
Remove dead stuff
llvm-svn: 28268
2006-05-12 23:32:01 +00:00
Chris Lattner
3fad520c62
Refactor some code, making it simpler.
When doing the initial pass of constant folding, if we get a constantexpr,
simplify the constant expr like we would do if the constant is folded in the
normal loop.
This fixes the missed-optimization regression in
Transforms/InstCombine/getelementptr.ll last night.
llvm-svn: 28224
2006-05-11 17:11:52 +00:00
Chris Lattner
e8fe3f2a08
Two changes:
1. Implement InstCombine/deadcode.ll by not adding instructions in unreachable
blocks (due to constants in conditional branches/switches) to the worklist.
This causes them to be deleted before instcombine starts up, leading to
better optimization.
2. In the prepass over instructions, do trivial constprop/dce as we go. This
has the effect of improving the effectiveness of #1. In addition, it
*significantly* speeds up instcombine on test cases with large amounts of
constant folding code (for example, that produced by code specialization
or partial evaluation). In one example, it speeds up instcombine from
0.0589s to 0.0224s with a release build (a 2.6x speedup).
llvm-svn: 28215
2006-05-10 19:00:36 +00:00
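For illustration only, a self-contained C++ sketch of the two ideas above
using hypothetical stand-in types (not the real InstCombine worklist code):
skip unreachable blocks while seeding the worklist, and do trivial
folding/DCE during the same prepass.
#include <set>
#include <vector>
struct Block;                     // hypothetical stand-ins for BasicBlock and
struct Inst { Block *Parent; };   // Instruction; the real pass uses LLVM IR.
// Assumed helpers for the sketch only.
bool isReachable(const Block *B, const std::set<const Block *> &Reachable) {
  return Reachable.count(B) != 0;
}
bool tryTrivialConstFoldOrDCE(Inst *) { return false; }  // placeholder
void seedWorklist(const std::vector<Inst *> &AllInsts,
                  const std::set<const Block *> &Reachable,
                  std::vector<Inst *> &Worklist) {
  for (Inst *I : AllInsts) {
    if (!isReachable(I->Parent, Reachable))
      continue;                        // (1) never queue unreachable code
    if (tryTrivialConstFoldOrDCE(I))
      continue;                        // (2) cheap folds handled in the prepass
    Worklist.push_back(I);
  }
}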
Chris Lattner
f49a22d601
Patch to make some xforms preserve each other. Patch contributed by
Domagoj Babic!
llvm-svn: 28181
2006-05-09 04:13:41 +00:00
Chris Lattner
7661770087
Move some code around.
Make the "fold (and (cast A), (cast B)) -> (cast (and A, B))" transformation
only apply when both casts really will cause code to be generated. If one or
both doesn't, then this xform doesn't remove a cast.
This fixes Transforms/InstCombine/2006-05-06-Infloop.ll
llvm-svn: 28141
2006-05-06 09:00:16 +00:00
Chris Lattner
73bfc4c2ea
Fix an infinite loop compiling oggenc last night.
llvm-svn: 28128
2006-05-05 20:51:30 +00:00
Chris Lattner
95637c4889
Implement InstCombine/cast.ll:test29
llvm-svn: 28126
2006-05-05 06:39:07 +00:00
Chris Lattner
55938f67ae
Fix Transforms/InstCombine/2006-05-04-DemandedBitCrash.ll
llvm-svn: 28101
2006-05-04 17:33:35 +00:00
Chris Lattner
1275941193
Add pass IDs for various passes, so they can be used with AddRequiredID. Patch by
Domagoj Babic!
llvm-svn: 28048
2006-05-02 04:24:36 +00:00
Chris Lattner
4c79d3b238
Fix InstCombine/2006-04-28-ShiftShiftLongLong.ll
llvm-svn: 28019
2006-04-28 22:21:41 +00:00
Chris Lattner
2118062b9c
Fix Transforms/Reassociate/2006-04-27-ReassociateVector.ll
llvm-svn: 28007
2006-04-28 04:14:49 +00:00
Chris Lattner
3f6a151e2d
Add support for inserting undef into a vector. This implements
Transforms/InstCombine/vec_insert_to_shuffle.ll
llvm-svn: 27997
2006-04-27 21:14:21 +00:00
Chris Lattner
be67e21327
Fix some nondeterministic behavior in the mem2reg pass that (in addition to
nondeterminism being bad) could cause some trivial missed optimizations (dead
phi nodes being left around for later passes to clean up).
With this, llvm-gcc4 now bootstraps and correctly compares. I don't know
why I never tried to do it before... :)
llvm-svn: 27984
2006-04-27 01:14:43 +00:00
Chris Lattner
01afcd337a
Fix Transforms/ScalarRepl/2006-04-20-PromoteCrash.ll
llvm-svn: 27912
2006-04-20 20:48:50 +00:00
Andrew Lenharth
e2b150550a
Make code match cvs commit message :)
llvm-svn: 27881
2006-04-20 15:41:37 +00:00
Andrew Lenharth
8f08647a6d
If we can convert the return pointer type into an integer that IntPtrType
can be converted to losslessly, we can continue the conversion to a direct call.
llvm-svn: 27880
2006-04-20 14:56:47 +00:00
Chris Lattner
76beeb373d
Turn x86 unaligned load/store intrinsics into aligned load/store instructions
if the pointer is known aligned.
llvm-svn: 27781
2006-04-17 22:26:56 +00:00
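For illustration only (not from the commit), the effect in terms of the SSE
intrinsics: when the pointer is provably 16-byte aligned, the unaligned-load
intrinsic can be treated as a plain aligned vector load.
#include <xmmintrin.h>
alignas(16) static float buf[4] = {1.0f, 2.0f, 3.0f, 4.0f};
__m128 load_it() {
  // buf is known 16-byte aligned, so this unaligned load can be lowered as if
  // it were _mm_load_ps(buf), i.e. an ordinary aligned vector load.
  return _mm_loadu_ps(buf);
}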
Chris Lattner
4422d3de1b
Fix a bug in the 'shuffle(undef,x,mask) -> shuffle(x, undef,mask')' xform
Make the insert/extract elt -> shuffle code more aggressive.
This fixes CodeGen/PowerPC/vec_shuffle.ll
llvm-svn: 27728
2006-04-16 00:51:47 +00:00
Chris Lattner
da260db137
Canonicalize shuffle(undef,x,mask) -> shuffle(x, undef,mask').
llvm-svn: 27727
2006-04-16 00:03:56 +00:00
Chris Lattner
055889cfb9
significant cleanups to code that uses insert/extractelt heavily. This builds
maximal shuffles out of them where possible.
llvm-svn: 27717
2006-04-15 01:39:45 +00:00
Chris Lattner
b3cae60d0b
Teach scalarrepl to promote unions of vectors and floats, producing
insert/extractelement operations. This implements
Transforms/ScalarRepl/vector_promote.ll
llvm-svn: 27710
2006-04-14 21:42:41 +00:00
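For illustration only, a reconstructed example (not the actual
vector_promote.ll test) of the kind of union this handles, written with the
GCC vector extension; scalarrepl can now keep it in SSA registers using
insertelement/extractelement instead of leaving the union in memory.
typedef float v4sf __attribute__((vector_size(16)));
union VecUnion {
  v4sf vec;
  float elts[4];
};
// Writing one lane through the float view and reading the vector back relies
// on the usual GCC/Clang union-punning behavior; after promotion it becomes
// an insertelement on an SSA vector value rather than stores to a stack slot.
v4sf set_lane0(v4sf v, float x) {
  union VecUnion u;
  u.vec = v;
  u.elts[0] = x;
  return u.vec;
}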
Andrew Lenharth
bffed48656
linear -> constant time
llvm-svn: 27652
2006-04-13 13:43:31 +00:00