1. Instead of copying Local graphs to the BU graphs to start with, use
spliceFrom to do the job (which is constant time in this case). On
176.gcc, this chops off .17s from the BU pass.
2. When building SCC graphs, simplify the logic and use spliceFrom to
do the heavy lifting, instead of cloneInto/delete (see the sketch
below). This slices another .14s off 176.gcc.
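For illustration, a minimal standalone sketch of the clone-vs-splice
distinction, using hypothetical stand-in types rather than the real DSGraph
interface:

    #include <list>
    #include <memory>

    // Hypothetical stand-in for a graph node; not the actual DSA type.
    struct Node { int Data = 0; };

    struct Graph {
      std::list<std::unique_ptr<Node>> Nodes;

      // cloneInto-style: O(n) deep copy, and the source graph still has
      // to be deleted separately afterwards.
      void cloneInto(const Graph &Src) {
        for (const auto &N : Src.Nodes)
          Nodes.push_back(std::make_unique<Node>(*N));
      }

      // spliceFrom-style: O(1) ownership transfer; Src is left empty.
      void spliceFrom(Graph &Src) {
        Nodes.splice(Nodes.end(), Src.Nodes);
      }
    };

The O(1) transfer is safe because in both cases here (seeding BU graphs from
Local graphs, and building SCC graphs) the source graph is dead afterwards,
so there is no reason to pay for a copy.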
llvm-svn: 20826
based approach to find globals and call sites that need to be copied. This
speeds up the BU pass on 176.gcc from 22s back down to 2.3s. Not as good
as 1.5s, but at least it's correct :)
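The exact scan is not spelled out here, but a reachability-based version of
it might look like the following sketch (hypothetical types, not the actual
DSA code):

    #include <set>
    #include <vector>

    // Hypothetical stand-in for a points-to graph node.
    struct Node {
      bool IsGlobal = false;
      std::vector<Node *> Links;
    };

    // Walk outward from a root (e.g. a call site's nodes) and record every
    // reachable global node, so that only those need to be copied.
    static void findReachableGlobals(Node *N, std::set<Node *> &Visited,
                                     std::set<Node *> &GlobalsToCopy) {
      if (!N || !Visited.insert(N).second)
        return; // null or already visited
      if (N->IsGlobal)
        GlobalsToCopy.insert(N);
      for (Node *Link : N->Links)
        findReachableGlobals(Link, Visited, GlobalsToCopy);
    }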
llvm-svn: 20820
something correct. Unfortunately this takes 176.gcc's BU phase back
up to 29s from 1.5s. This fixes DSGraph/2005-03-24-Global-Arg-Alias.ll
llvm-svn: 20817
global roots in from callees to callers. The BU graphs do not have accurate
globals information and all of the clients know it. Instead, just make sure
the GG is up-to-date, and they will be perfectly satiated.
This speeds up the BU pass on 176.gcc from 5.5s to 1.5s, and Loc+BU+TD
from 7s to 2.7s.
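A minimal sketch of the single-GlobalsGraph idea, assuming hypothetical
stand-in types (the real GG operates on DSNodes, not strings):

    #include <map>
    #include <set>
    #include <string>

    // Hypothetical stand-in for the GlobalsGraph (GG): one shared,
    // up-to-date record of what is known about each global.
    struct GlobalsGraph {
      std::map<std::string, std::set<std::string>> Info;

      // Each function merges its global info into the GG exactly once;
      // clients that need accurate globals read the GG instead of
      // expecting every BU graph to carry complete global roots.
      void mergeFrom(
          const std::map<std::string, std::set<std::string>> &FnGlobals) {
        for (const auto &Entry : FnGlobals)
          Info[Entry.first].insert(Entry.second.begin(), Entry.second.end());
      }
    };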
llvm-svn: 20786
1. Increase max node size from 64->256 to avoid collapsing an important
structure in 181.mcf
2. If we have multiple calls to an indirect call node with an indirect
callee, fold these call nodes together (see the sketch below), to avoid
DSA turning into a flaming fireball of death when analyzing 176.gcc.
With this change, 176.gcc now takes ~7s to analyze for Loc+BU+TD, with
5.7s of that in the BU pass.
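A rough sketch of the call-node folding in change 2, with hypothetical
stand-in types rather than the real DSA call-site machinery:

    #include <map>
    #include <utility>
    #include <vector>

    // Hypothetical stand-in for an indirect call node.
    struct CallNode {
      int CalleeNodeID = 0;        // node of the indirect callee
      std::vector<int> ArgNodeIDs; // argument nodes, merged on fold
    };

    // Keep one call node per indirect-callee node: every call through the
    // same function-pointer node folds into a single representative.
    static std::vector<CallNode> foldCalls(const std::vector<CallNode> &Calls) {
      std::map<int, CallNode> Rep;
      for (const CallNode &C : Calls) {
        auto It = Rep.find(C.CalleeNodeID);
        if (It == Rep.end())
          Rep.emplace(C.CalleeNodeID, C);
        else // fold: merge this call's arguments into the representative
          It->second.ArgNodeIDs.insert(It->second.ArgNodeIDs.end(),
                                       C.ArgNodeIDs.begin(),
                                       C.ArgNodeIDs.end());
      }
      std::vector<CallNode> Folded;
      for (auto &Entry : Rep)
        Folded.push_back(std::move(Entry.second));
      return Folded;
    }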
llvm-svn: 20775
this clone is supposed to be used for *ALL* of the functions in the SCC.
This fixes the memory explosion problem the TD pass was having, reducing the
memory growth from 24MB -> 3.5MB on povray and 270MB -> 8.3MB on perlbmk!
This obviously also speeds up the TD pass *a lot*.
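The fix, roughly: clone once and point every member of the SCC at the same
graph. A minimal sketch with hypothetical stand-in types:

    #include <map>
    #include <memory>
    #include <string>
    #include <vector>

    struct Graph { /* node data elided */ };

    struct SCCGraphs {
      std::map<std::string, std::shared_ptr<Graph>> FnToGraph;

      // One clone for the whole SCC, shared by all of its functions,
      // instead of one fresh clone per function.
      void cloneForSCC(const std::vector<std::string> &SCC, const Graph &Src) {
        auto Clone = std::make_shared<Graph>(Src); // single copy
        for (const std::string &Fn : SCC)
          FnToGraph[Fn] = Clone;                   // shared, not re-cloned
      }
    };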
llvm-svn: 20763
up the TD pass about 30% for povray and perlbmk. It's still not clear why
copying a 5MB set of graphs turns into a 25MB set of graphs though :(
llvm-svn: 20762
sites that target multiple callees. If we have a function table with N
callees, for example, and M callers call through it, we used to have
to perform O(M*N) graph inlinings. Now we perform O(M+N) inlinings.
This speeds up the TD pass on perlbmk from 36.26s to 25.75s.
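A minimal sketch of the counting argument, with inlineGraph standing in for
the expensive graph-inlining operation:

    #include <vector>

    struct Graph { int Size = 0; };

    // Stand-in for a real DSA graph inlining.
    static void inlineGraph(Graph &Dst, const Graph &Src) {
      Dst.Size += Src.Size;
    }

    // O(M+N): merge the N callees into one shared graph (N inlinings),
    // then inline that one graph into each of the M callers (M inlinings),
    // instead of inlining every callee into every caller (M*N).
    static void inlineThroughTable(std::vector<Graph> &Callers,
                                   const std::vector<Graph> &Callees) {
      Graph Shared;
      for (const Graph &Callee : Callees)
        inlineGraph(Shared, Callee);   // N inlinings
      for (Graph &Caller : Callers)
        inlineGraph(Caller, Shared);   // M inlinings
    }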
llvm-svn: 20743
graph into all of the functions it calls when we visit a graph, change it so
that the graph visitor inlines all of the callers of a graph into the current
graph when it visits it.
While we're at it, inline global information from the GG instead of from each
of the callers. The GG contains a superset of the info that the callers do
anyway, and this way we only need to do it one time (not once per caller).
This speeds up the TD pass substantially on several programs, and there is
still room for improvement. For example, the TD pass used to take 147s
on perlbmk; it now takes 36s. On povray, we went from about 5s to 1.97s.
134.perl is down from ~1s for Loc+BU+TD to .6s.
The TD pass needs a lot of improvement though, which will occur with later
patches.
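Roughly, the inversion looks like this (hypothetical stand-in types; the
real pass presumably visits graphs in an order where callers are processed
first):

    #include <vector>

    struct Graph {
      std::vector<int> Nodes; // stand-in for real graph contents
    };

    static void inlineInto(Graph &Dst, const Graph &Src) {
      Dst.Nodes.insert(Dst.Nodes.end(), Src.Nodes.begin(), Src.Nodes.end());
    }

    // Pull model: when the visitor reaches function F, it inlines the GG
    // and each (already-visited) caller into F's graph, once, instead of
    // pushing F's graph out into every function F calls.
    static void visitFunction(Graph &FG,
                              const std::vector<const Graph *> &Callers,
                              const Graph &GlobalsGraph) {
      inlineInto(FG, GlobalsGraph);  // globals come from the GG, one time
      for (const Graph *Caller : Callers)
        inlineInto(FG, *Caller);     // pull callers into the current graph
    }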
llvm-svn: 20723