Commit Graph

841 Commits

Author SHA1 Message Date
Chris Lattner
318f80ae39 Get rid of getSizeOf, using ConstantExpr::getSizeOf instead.
Do not insert a prototype for malloc of: void* malloc(uint): on 64-bit
targets this is not correct.  Instead, prototype it as void *malloc(...),
and pass the correct intptr_t through the "...".

Finally, fix Regression/CodeGen/SparcV9/2004-12-13-MallocCrash.ll, by not
forming constantexpr casts from pointer to uint.

llvm-svn: 18908
2004-12-13 20:00:02 +00:00
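
An illustrative C sketch of the width issue described in the commit above (my own example, not part of the commit): on an LP64 target the size argument reaching malloc is pointer-sized, so a 32-bit count has to be widened to an intptr_t-sized integer first, which is what passing intptr_t through the "..." prototype achieves.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Sketch only: size_t is 64 bits on an LP64 target, so widen a
     * 32-bit count to the pointer-sized integer type before the call,
     * mirroring the intptr_t argument the commit describes. */
    void *alloc_n(uint32_t n) {
        return malloc((uintptr_t)n);   /* widen first, then call */
    }

    int main(void) {
        void *p = alloc_n(16u);
        printf("%s\n", p ? "allocated" : "failed");
        free(p);
        return 0;
    }
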
Chris Lattner
bc889ab3c7 Change indentation of a whole bunch of code, no real changes here.
llvm-svn: 18843
2004-12-12 23:49:37 +00:00
Chris Lattner
85776b0e99 More substantial simplifications and speedups. This makes ADCE about 20% faster
in some cases.

llvm-svn: 18842
2004-12-12 23:40:17 +00:00
Chris Lattner
deda69d473 More minor microoptimizations
llvm-svn: 18841
2004-12-12 22:44:30 +00:00
Chris Lattner
1629a92f18 Remove some more set operations
llvm-svn: 18840
2004-12-12 22:22:18 +00:00
Chris Lattner
a47e7e4093 Reduce number of set operations.
llvm-svn: 18839
2004-12-12 22:16:13 +00:00
Chris Lattner
2be8c4a870 Optimize div/rem + select combinations more.
In particular, implement div.ll:test10 and rem.ll:test4.

llvm-svn: 18838
2004-12-12 21:48:58 +00:00
Chris Lattner
9e7ce53b82 Simplify code and do not invalidate iterators.
This fixes a crash compiling TimberWolfMC that was exposed due to recent
optimizer changes.

llvm-svn: 18831
2004-12-12 18:23:20 +00:00
Chris Lattner
02c04bbf45 If one side of and/or is known to be 0/-1, it doesn't matter
if the other side is overdefined.

This allows us to fold conditions like:  if (X < Y || Y > Z) in some cases.

llvm-svn: 18807
2004-12-11 23:15:19 +00:00
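
A minimal C sketch of the lattice rule described in the commit above (my own example, not the commit's test case): and-with-zero and or-with-all-ones are constant regardless of the other operand.

    #include <stdio.h>

    /* `b & 0` is always 0 and `b | -1` is always -1, so a constant
     * propagator can fold both expressions even while `b` itself stays
     * unknown ("overdefined" in SCCP terms). */
    int and_with_zero(int b) { return b & 0; }
    int or_with_ones(int b)  { return b | -1; }

    int main(void) {
        printf("%d %d\n", and_with_zero(42), or_with_ones(42));  /* 0 -1 */
        return 0;
    }
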
Chris Lattner
06bfa390f6 Two bug fixes:
 1. Actually increment the Statistic for the GV elim optzn
 2. When resolving undef branches, only resolve branches in executable blocks,
    avoiding marking a bunch of completely dead blocks live.  This has a big
    impact on the quality of the generated code.

With this patch, we positively rip up vortex, compiling Ut_MoveBytes to a
single memcpy call. In vortex we get this:

     12 ipsccp           - Number of globals found to be constant
    986 ipsccp           - Number of arguments constant propagated
   1378 ipsccp           - Number of basic blocks unreachable
   8919 ipsccp           - Number of instructions removed

llvm-svn: 18796
2004-12-11 06:05:53 +00:00
Chris Lattner
5af0ef5c44 Do not delete the entry block to a function.
llvm-svn: 18795
2004-12-11 05:32:19 +00:00
Chris Lattner
943f94d2b3 Implement Transforms/SCCP/ipsccp-gvar.ll, by tracking values stored to
non-address-taken global variables.

llvm-svn: 18790
2004-12-11 05:15:59 +00:00
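
Roughly the kind of source pattern the commit above targets (an illustrative sketch, not the ipsccp-gvar.ll test itself): an internal global whose address never escapes and which only ever holds one value.

    #include <stdio.h>

    /* `flag` is internal, its address is never taken, and every store
     * writes the same value as its initializer, so loads of it can be
     * treated as the constant 0. */
    static int flag = 0;

    static void reset(void)  { flag = 0; }
    static int  is_set(void) { return flag; }

    int main(void) {
        reset();
        printf("%d\n", is_set());  /* 0 */
        return 0;
    }
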
Chris Lattner
beeab3a124 Fix a bug where we could delete dead invoke instructions with uses.
In functions where we fully constant prop the return value, replace all
ret instructions with 'ret undef'.

llvm-svn: 18786
2004-12-11 02:53:57 +00:00
Chris Lattner
d1d00e017b Implement SCCP/ipsccp-conditional.ll, by totally deleting dead blocks.
llvm-svn: 18781
2004-12-10 22:29:08 +00:00
Chris Lattner
5be2c2e299 Fix SCCP/2004-12-10-UndefBranchBug.ll
llvm-svn: 18776
2004-12-10 20:41:50 +00:00
Chris Lattner
ead42a768e This is the initial implementation of IPSCCP, as requested by Brian.
This implements SCCP/ipsccp-basic.ll, rips apart Olden/mst (as described in
PR415), and does other nice things.

There is still more to come with this, but it's a start.

llvm-svn: 18752
2004-12-10 08:02:06 +00:00
Chris Lattner
f5962a7d6c note to self: Do not check in debugging code!
llvm-svn: 18693
2004-12-09 07:15:52 +00:00
Chris Lattner
1e2b3fd343 Implement trivial sinking for load instructions. This causes us to sink 567 loads in spec
llvm-svn: 18692
2004-12-09 07:14:34 +00:00
Chris Lattner
8529319568 Do extremely simple sinking of instructions when they are only used in a
successor block.  This turns cases like this:

x = a op b
if (c) {
  use x
}

into:

if (c) {
  x = a op b
  use x
}

This triggers 3965 times in spec, and is tested by
Regression/Transforms/InstCombine/sink_instruction.ll

This appears to expose a bug in the X86 backend for 177.mesa, which I'm
looking in to.

llvm-svn: 18677
2004-12-08 23:43:58 +00:00
Alkis Evlogimenos
fe549f46e1 Fix this regression and remove the XFAIL from this test.
llvm-svn: 18674
2004-12-08 23:10:30 +00:00
Chris Lattner
62e3f533f3 Fix Transforms/InstCombine/2004-12-08-RemInfiniteLoop.ll
llvm-svn: 18670
2004-12-08 22:20:34 +00:00
Reid Spencer
f929fb661d For PR387:
Add doInitialization method to avoid overloaded virtuals

llvm-svn: 18602
2004-12-07 08:11:36 +00:00
Chris Lattner
d742c10964 This pass is moving to lib IPO
llvm-svn: 18439
2004-12-02 21:24:40 +00:00
Chris Lattner
23c9c1bb62 This pass is completely broken.
llvm-svn: 18387
2004-11-30 17:09:06 +00:00
Chris Lattner
ba782edfb7 Allow hoisting loads of globals and alloca's in conditionals.
llvm-svn: 18363
2004-11-29 21:26:12 +00:00
Reid Spencer
dd287f7758 Fix for PR454:
* Make sure we handle signed to unsigned conversion correctly
* Move this visitSetCondInst case to its own method.

llvm-svn: 18312
2004-11-28 21:31:15 +00:00
Chris Lattner
7301399232 Make DSE potentially more aggressive by being more specific about alloca sizes.
llvm-svn: 18309
2004-11-28 20:44:37 +00:00
Chris Lattner
2c4161fc57 Implement Regression/Transforms/InstCombine/getelementptr_cast.ll, which
occurs many times in crafty

llvm-svn: 18273
2004-11-27 17:55:46 +00:00
Chris Lattner
8715ef738f Provide size information when checking to see if we can LICM a load, this
allows us to hoist more loads in some cases.

llvm-svn: 18265
2004-11-26 21:20:09 +00:00
Chris Lattner
ee2f552b0f Do not count debugger intrinsics in size estimation.
llvm-svn: 18110
2004-11-22 17:23:57 +00:00
Chris Lattner
4b58b75683 Do not consider debug intrinsics in the size computations for loop unrolling.
Patch contributed by Michael McCracken!

llvm-svn: 18108
2004-11-22 17:18:36 +00:00
Chris Lattner
73b5c56fc1 Fix the exposed prototype for the lower packed pass, thanks to
Morten Ofstad.

llvm-svn: 17996
2004-11-19 16:49:34 +00:00
Chris Lattner
ce8249f570 Delete stoppoints that occur for the same source line.
llvm-svn: 17970
2004-11-18 21:41:39 +00:00
Chris Lattner
7fb7c81ebf Check in hook that I forgot
llvm-svn: 17956
2004-11-18 17:24:20 +00:00
Chris Lattner
92e712b00f Do not delete dead invoke instructions!
llvm-svn: 17897
2004-11-16 16:32:28 +00:00
Reid Spencer
e986ef23d7 Remove unused variable for compilation by VC++.
Patch contributed by Morten Ofstad.

llvm-svn: 17830
2004-11-15 17:29:41 +00:00
Chris Lattner
f95f7e05a5 Minor cleanups. There is no reason for SCCP to derive from instvisitor anymore.
llvm-svn: 17825
2004-11-15 07:15:04 +00:00
Chris Lattner
4aa7dc02bf Count more accurately
llvm-svn: 17824
2004-11-15 07:02:42 +00:00
Chris Lattner
20a9efa189 Quiet warnings on the persephone tester
llvm-svn: 17821
2004-11-15 05:54:07 +00:00
Chris Lattner
e87a1360b3 Two minor improvements:
 1. Speedup getValueState by having it not consider Arguments.  It's better
    to just add them before we start SCCP'ing.
 2. SCCP can delete the contents of dead blocks.  No really, it's ok!  This
    reduces the size of the IR for subsequent passes, even though
    simplifycfg would do the same job.  In practice, simplifycfg does not
    run until much later than sccp in gccas.

llvm-svn: 17820
2004-11-15 05:45:33 +00:00
Chris Lattner
4ad574191b rename InstValue to LatticeValue, as it holds for more than instructions.
llvm-svn: 17818
2004-11-15 05:03:30 +00:00
Chris Lattner
bde8da9e43 Substantially refactor the SCCP class into an SCCP pass and an SCCPSolver
class.  The only changes are minor:

 * Do not try to SCCP instructions that return void in the rewrite loop.
   This is silly and foolhardy, wasting a map lookup and adding an entry
   to the map which is never used.
 * If we decide something has an undefined value, rewrite it to undef,
   potentially leading to further simplifications.

llvm-svn: 17816
2004-11-15 04:44:20 +00:00
Chris Lattner
3d61b688a9 This optimization makes MANY phi nodes that all have the same incoming value.
If this happens, detect it early instead of relying on instcombine to notice
it later.  This can be a big speedup, because PHI nodes can have many
incoming values.

llvm-svn: 17741
2004-11-14 19:29:34 +00:00
Chris Lattner
1e4cad9176 Implement instcombine/phi.ll:test6 - pulling operations through PHI nodes.
This exposes subsequent optimization possibilities and reduces code size.
This triggers 1423 times in spec.

llvm-svn: 17740
2004-11-14 19:13:23 +00:00
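
A rough C-level picture of the pattern the commit above helps with (my own illustration; phi.ll:test6 is not reproduced here): when both arms of a branch apply the same operation, the operation can be pulled below the join and done once.

    #include <stdio.h>

    /* Before: each arm computes its own `+ 1`, so the join (phi) merges
     * two adds.  After pulling the add through the phi, the join merges
     * only `a` and `b`, followed by a single add. */
    int before(int c, int a, int b) {
        int x;
        if (c) x = a + 1;
        else   x = b + 1;
        return x;
    }

    int after(int c, int a, int b) {
        int x = c ? a : b;   /* the phi */
        return x + 1;        /* one add, below the join */
    }

    int main(void) {
        printf("%d %d\n", before(1, 3, 7), after(1, 3, 7));  /* 4 4 */
        return 0;
    }
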
Chris Lattner
fdd41995d8 Transform this:
  %X = alloca ...
  %Y = alloca ...
  X == Y

into false.  This allows us to simplify some stuff in eon (and probably
many other C++ programs) where operator= was checking for self assignment.
Folding this allows us to SROA several additional structs.

llvm-svn: 17735
2004-11-14 07:33:16 +00:00
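
In source terms, the operator= case mentioned above looks roughly like the guard below (an illustrative sketch, not code from eon): two distinct stack objects can never compare equal by address, so the self-copy check folds to false once the call is inlined.

    #include <stdio.h>

    /* Two distinct locals never share an address, so after inlining the
     * guard `dst == src` is provably false and can be removed. */
    static void copy(int *dst, const int *src) {
        if (dst == src)   /* self-copy guard */
            return;
        *dst = *src;
    }

    int main(void) {
        int a = 1, b = 2;
        copy(&a, &b);      /* &a == &b can never be true */
        printf("%d\n", a); /* 2 */
        return 0;
    }
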
Chris Lattner
d0a0af0818 Teach SROA how to promote an array index that is variable, if the dimension
of the array is just two.  This occurs 8 times in gcc, 6 times in crafty, and
12 times in 099.go.

This implements ScalarRepl/sroa_two.ll

llvm-svn: 17727
2004-11-14 05:00:19 +00:00
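
A C-level sketch of the case the commit above handles (my own example, not sroa_two.ll): a two-element array indexed by a variable known to be 0 or 1 becomes a choice between two scalars, which SROA can then keep in registers.

    #include <stdio.h>

    /* With only two elements, `a[i]` (for i in {0, 1}) is just a select
     * between the two scalar values, so the array itself can go away. */
    static int pick(int i, int x, int y) {
        int a[2];
        a[0] = x;
        a[1] = y;
        return a[i];
    }

    static int pick_promoted(int i, int x, int y) {
        return i ? y : x;   /* no array, no memory traffic */
    }

    int main(void) {
        printf("%d %d\n", pick(1, 10, 20), pick_promoted(1, 10, 20));  /* 20 20 */
        return 0;
    }
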
Chris Lattner
bc35272f86 Rearrange some code, no functionality changes.
llvm-svn: 17724
2004-11-14 04:24:28 +00:00
Chris Lattner
70e351fb1c Simplify handling of shifts to be the same as we do for adds. Add support
for (X * C1) + (X * C2) (where * can be mul or shl), allowing us to fold:

   Y+Y+Y+Y+Y+Y+Y+Y

into
         %tmp.8 = shl long %Y, ubyte 3           ; <long> [#uses=1]

instead of

        %tmp.4 = shl long %Y, ubyte 2           ; <long> [#uses=1]
        %tmp.12 = shl long %Y, ubyte 2          ; <long> [#uses=1]
        %tmp.8 = add long %tmp.4, %tmp.12               ; <long> [#uses=1]

This implements add.ll:test25

Also add support for (X*C1)-(X*C2) -> X*(C1-C2), implementing sub.ll:test18

llvm-svn: 17704
2004-11-13 19:50:12 +00:00
Chris Lattner
7a8d26a581 Fold:
(X + (X << C2)) --> X * ((1 << C2) + 1)
   ((X << C2) + X) --> X * ((1 << C2) + 1)

This means that we now canonicalize "Y+Y+Y" into:

        %tmp.2 = mul long %Y, 3         ; <long> [#uses=1]

instead of:

        %tmp.10 = shl long %Y, ubyte 1          ; <long> [#uses=1]
        %tmp.6 = add long %Y, %tmp.10               ; <long> [#uses=1]

llvm-svn: 17701
2004-11-13 19:31:40 +00:00
Chris Lattner
d348f5b9fb Lazily create the abort message, so only translation units that use unwind
will actually get it.

llvm-svn: 17700
2004-11-13 19:07:32 +00:00