mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-21 20:12:56 +02:00
Commit Graph

15 Commits

Author SHA1 Message Date
Chandler Carruth
b55b69947a [LCG] Switch the Callee sets to be DenseMaps pointing to the index into
the Callee list. This is going to be quite important to prevent removal
from going quadratic. No functionality changed at this point, this is
one of the refactoring patches I've broken out of my initial work toward
mutation updates of the call graph.

llvm-svn: 206938
2014-04-23 04:00:17 +00:00
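For illustration, here is a minimal C++ sketch of the indexing pattern the commit above describes: a vector of callees paired with a DenseMap from each callee to its position, so a removal takes constant time instead of a linear erase. The names (CalleeList, Node) are made up, and the swap-and-pop removal (which does not preserve callee order) is an assumption of this sketch, not necessarily what the real code does.

```cpp
// Hypothetical sketch -- not the actual LazyCallGraph code.
#include "llvm/ADT/DenseMap.h"
#include "llvm/ADT/SmallVector.h"
#include <cstddef>

struct Node {}; // Stand-in for a call graph node.

class CalleeList {
  // Callees kept in a vector for cheap, ordered iteration.
  llvm::SmallVector<Node *, 4> Callees;
  // Map from callee to its index in the vector: O(1) membership tests and
  // O(1) location of the element to remove.
  llvm::DenseMap<Node *, size_t> CalleeIndexMap;

public:
  bool insert(Node *N) {
    auto Inserted = CalleeIndexMap.insert({N, Callees.size()});
    if (!Inserted.second)
      return false; // Already present.
    Callees.push_back(N);
    return true;
  }

  // Swap-and-pop removal: constant time, but it does not preserve the
  // relative order of the remaining callees (a simplification here).
  bool remove(Node *N) {
    auto It = CalleeIndexMap.find(N);
    if (It == CalleeIndexMap.end())
      return false;
    size_t Index = It->second;
    CalleeIndexMap.erase(It);
    if (Index != Callees.size() - 1) {
      Node *Last = Callees.back();
      Callees[Index] = Last;
      CalleeIndexMap[Last] = Index;
    }
    Callees.pop_back();
    return true;
  }
};
```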
Chandler Carruth
3104562756 [LCG] Remove all of the complexity stemming from supporting copying.
Reality is that we're never going to copy one of these. Supporting this
was becoming a nightmare because nothing even causes it to compile most
of the time. Lots of subtle errors built up that wouldn't have been
caught by any "normal" testing.

Also, make the move assignment actually work rather than the bogus swap
implementation that would simply loop forever if used. As part of that, factor
out the graph pointer updates into a helper to share between move
construction and move assignment.

llvm-svn: 206583
2014-04-18 11:02:33 +00:00
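A minimal sketch of the move-support shape described above, with made-up names (Graph, Node, updateGraphPtrs): copying is deleted, and move construction and move assignment both funnel the node back-pointer fix-up through one shared helper rather than relying on a swap.

```cpp
// Hypothetical sketch of the pattern described above; the names are
// illustrative, not the real API.
#include <memory>
#include <utility>
#include <vector>

class Graph {
  struct Node {
    Graph *G; // Back-pointer that must be retargeted when the graph moves.
    explicit Node(Graph &Owner) : G(&Owner) {}
  };

  std::vector<std::unique_ptr<Node>> Nodes;

  // Shared between move construction and move assignment: after the node
  // storage has been taken over, point every node back at *this*.
  void updateGraphPtrs() {
    for (auto &N : Nodes)
      N->G = this;
  }

public:
  Graph() = default;

  // Copying is not supported; deleting it removes the rarely compiled,
  // error-prone code paths the commit message describes.
  Graph(const Graph &) = delete;
  Graph &operator=(const Graph &) = delete;

  Graph(Graph &&Other) noexcept : Nodes(std::move(Other.Nodes)) {
    updateGraphPtrs();
  }

  // A genuine move assignment rather than a swap-based one.
  Graph &operator=(Graph &&Other) noexcept {
    Nodes = std::move(Other.Nodes);
    updateGraphPtrs();
    return *this;
  }
};
```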
Chandler Carruth
d0707ed2ba [LCG] Add support for building persistent and connected SCCs to the
LazyCallGraph. This is the start of the whole point of this different
abstraction, but it is just the initial bits. Here is a run-down of
what's going on here. I'm planning to incorporate some (or all) of this
into comments going forward, hopefully with better editing and wording.
=]

The crux of the problem with the traditional way of building SCCs is
that they are ephemeral. The new pass manager however really needs the
ability to associate analysis passes and results of analysis passes with
SCCs in order to expose these analysis passes to the SCC passes. Making
this work is kind-of the whole point of the new pass manager. =]

So, when we're building SCCs for the call graph, we actually want to
build persistent nodes that stick around and can be reasoned about
later. We'd also like the ability to walk the SCC graph in more complex
ways than just the traditional postorder traversal of the current CGSCC
walk. That means that in addition to being persistent, the SCCs need to
be connected into a useful graph structure.

However, we still want the SCCs to be formed lazily where possible.

These constraints are quite hard to satisfy with the SCC iterator. Also,
using that would bypass our ability to actually add data to the nodes of
the call graph to facilitate implementing the Tarjan walk. So I've
re-implemented things in a more direct and embedded way. This
immediately makes it easy to get the persistence and connectivity
correct, and it also allows leveraging the existing nodes to simplify
the algorithm. I've worked somewhat to make this implementation more
closely follow the traditional paper's nomenclature and strategy,
although it is still a bit obtuse because it isn't recursive, using
an explicit stack and a tail call instead, and it is interruptible,
resuming each time we need another SCC.

The other tricky bit here, and what actually took almost all the time
and trials and errors I spent building this, is exactly *what* graph
structure to build for the SCCs. The naive thing to build is the call
graph in its newly acyclic form. I wrote about 4 versions of this which
did precisely this. Inevitably, when I experimented with them across
various use cases, they became incredibly awkward. It was all
implementable, but it felt like a completely wrong fit. Square peg, round
hole. There were two overriding aspects that pushed me in a different
direction:

1) We want to discover the SCC graph in a postorder fashion. That means
   the root node will be the *last* node we find. Using the call-SCC DAG
   as the graph structure of the SCCs results in an orphaned graph until
   we discover a root.

2) We will eventually want to walk the SCC graph in parallel, exploring
   distinct sub-graphs independently, and synchronizing at merge points.
   This again is not helped by the call-SCC DAG structure.

The structure which, quite surprisingly, ended up being completely
natural to use is the *inverse* of the call-SCC DAG. We add the leaf
SCCs to the graph as "roots", and have edges to the caller SCCs. Once
I switched to building this structure, everything just fell into place
elegantly.

Aside from general cleanups (there are FIXMEs and too few comments
overall) that are still needed, the other missing piece of this is
support for iterating across levels of the SCC graph. These will become
useful for implementing #2, but they aren't an immediate priority.

Once SCCs are in good shape, I'll be working on adding mutation support
for incremental updates and adding the pass manager that this analysis
enables.

llvm-svn: 206581
2014-04-18 10:50:32 +00:00
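For reference, here is a generic, non-recursive Tarjan-style SCC walk over an integer adjacency list, sketching the explicit-stack formulation the commit describes. It uses plain standard-library containers and made-up names rather than the LazyCallGraph types, and it omits the "interruptible, one SCC at a time" aspect for brevity. With caller-to-callee edges, the SCCs come out leaf-first, matching the postorder discussed above.

```cpp
// Generic sketch of an iterative Tarjan SCC walk -- not the LCG code.
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

std::vector<std::vector<int>>
findSCCs(const std::vector<std::vector<int>> &Adj) {
  const int N = static_cast<int>(Adj.size());
  std::vector<int> Index(N, -1), LowLink(N, 0);
  std::vector<bool> OnStack(N, false);
  std::vector<int> Stack;                 // Tarjan's node stack.
  std::vector<std::vector<int>> SCCs;     // Result: leaf SCCs come first.
  int NextIndex = 0;

  // Explicit DFS frames: (node, position in its adjacency list).
  std::vector<std::pair<int, size_t>> DFSStack;

  for (int Root = 0; Root < N; ++Root) {
    if (Index[Root] != -1)
      continue;
    Index[Root] = LowLink[Root] = NextIndex++;
    Stack.push_back(Root);
    OnStack[Root] = true;
    DFSStack.push_back({Root, 0});

    while (!DFSStack.empty()) {
      auto &[U, EdgeIdx] = DFSStack.back();
      if (EdgeIdx < Adj[U].size()) {
        int V = Adj[U][EdgeIdx++];
        if (Index[V] == -1) {
          // Unvisited successor: "recurse" by pushing a new frame.
          Index[V] = LowLink[V] = NextIndex++;
          Stack.push_back(V);
          OnStack[V] = true;
          DFSStack.push_back({V, 0});
        } else if (OnStack[V]) {
          LowLink[U] = std::min(LowLink[U], Index[V]);
        }
      } else {
        // All successors handled: maybe form an SCC, then pop the frame.
        if (LowLink[U] == Index[U]) {
          std::vector<int> SCC;
          int V;
          do {
            V = Stack.back();
            Stack.pop_back();
            OnStack[V] = false;
            SCC.push_back(V);
          } while (V != U);
          SCCs.push_back(std::move(SCC));
        }
        int Finished = U;
        DFSStack.pop_back();
        if (!DFSStack.empty())
          LowLink[DFSStack.back().first] =
              std::min(LowLink[DFSStack.back().first], LowLink[Finished]);
      }
    }
  }
  return SCCs;
}
```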
Chandler Carruth
fff2acfa93 [LCG] Remove a dead declaration. This stopped being used when I switched
to a more normal move operation on the graph itself. The definition
already got removed, but I missed the declaration.

llvm-svn: 206455
2014-04-17 09:41:54 +00:00
Chandler Carruth
ffe8931733 [LCG] Move the call graph node class into the graph class's definition.
This will become necessary to build up the SCC iterators and SCC
definitions. Moving it now so that subsequent diffs are incremental.

llvm-svn: 206454
2014-04-17 09:40:13 +00:00
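A tiny illustrative sketch (hypothetical names, not the real header layout) of why nesting the node class inside the graph class helps: the iterator, SCC, and node types all need to refer to one another, which is simplest when they share the enclosing class's scope.

```cpp
// Illustrative only -- hypothetical shape, not the actual LazyCallGraph.
class LazyCallGraphSketch {
public:
  class Node;     // Declared inside the graph class...
  class SCC;      // ...so the SCC and iterator types below can name it
  class iterator; // without forward declarations scattered across headers.

  class Node {
    friend class LazyCallGraphSketch;
    LazyCallGraphSketch *G = nullptr;
    // Callee edges, SCC membership, etc. would live here.
  };
};
```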
Chandler Carruth
b1022ecf9e [LCG] Just move the allocator (now that we can) when moving a call
graph. This simplifies the custom move constructor operation to one of
walking the graph and updating the 'up' pointers to point to the new
location of the graph. Switch the nodes from a reference to a pointer
for the 'up' edge to facilitate this.

llvm-svn: 206450
2014-04-17 07:25:59 +00:00
Chandler Carruth
3ea4c82c5f [LCG] Remove the Module reference member which we weren't using for
anything and which doesn't make sense when assigning.

llvm-svn: 206449
2014-04-17 07:22:19 +00:00
Chandler Carruth
ac39998430 [LCG] Stop playing fast and loose with reference members and assignment.
It doesn't work. I'm still cleaning up all the places where I blindly
followed this pattern. There are more to come in this code too.

As a benefit, this lets the default copy and move operations Just Work.

llvm-svn: 206375
2014-04-16 11:14:28 +00:00
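A small standalone example of the C++ issue behind this commit, with made-up names: a reference member implicitly deletes copy assignment, while a pointer member lets the defaulted copy, move, and assignment operations Just Work.

```cpp
// Illustrative sketch of the reference-member problem, with made-up names.
struct Graph {};

struct NodeWithRef {
  Graph &G;     // Reference member: copy assignment is implicitly deleted,
  int Data = 0; // so code that assigns these objects cannot compile.
};

struct NodeWithPtr {
  Graph *G = nullptr; // Pointer member: the defaulted copy, move, and
  int Data = 0;       // assignment operations all work as expected.
};

void example(Graph &G) {
  NodeWithPtr A{&G, 1}, B{&G, 2};
  A = B;          // Fine: memberwise assignment.

  NodeWithRef X{G, 1}, Y{G, 2};
  // X = Y;       // Error: operator= is implicitly deleted because of G.
  (void)X; (void)Y;
}
```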
Chandler Carruth
a92c6a2ed3 [LCG] Make this call graph a fully regular type by giving it assignment
as well. I don't see any particular need but it imposes no cost to
support it and it makes the API cleaner.

llvm-svn: 203448
2014-03-10 08:08:59 +00:00
Chandler Carruth
ee0de313d2 [LCG] Make the iterator move constructable (and thus movable in general)
now that there is essentially no cost to doing so. Yay C++11.

llvm-svn: 203447
2014-03-10 08:08:47 +00:00
Chandler Carruth
8c51636a52 [LCG] One more formatting fix that I failed to get into the prior
commit. Sorry for the churn; I'm just trying to keep it separate from any
functionality changes.

llvm-svn: 203438
2014-03-10 02:50:21 +00:00
Chandler Carruth
cd48c56575 [cleanup] Re-sort all the includes with utils/sort_includes.py.
llvm-svn: 202811
2014-03-04 10:07:28 +00:00
Chandler Carruth
435d84ee45 [C++11] Remove the use of LLVM_HAS_RVALUE_REFERENCES from the rest of
the core LLVM libraries.

llvm-svn: 202582
2014-03-01 09:32:03 +00:00
Chandler Carruth
a10bbb470e [PM] Fix horrible typos that somehow didn't cause a failure in a C++11
build but spectacularly changed behavior of the C++98 build. =]

This shows my one problem with not having unittests -- basic API
expectations aren't well exercised by the integration tests because they
*happen* to not come up, even though they might later. I'll probably add
a basic unittest to complement the integration testing later, but
I wanted to revive the bots.

llvm-svn: 200905
2014-02-06 05:17:02 +00:00
Chandler Carruth
039285cbbf [PM] Add a new "lazy" call graph analysis pass for the new pass manager.
The primary motivation for this pass is to separate the call graph
analysis used by the new pass manager's CGSCC pass management from the
existing call graph analysis pass. That analysis pass is (somewhat
unfortunately) over-constrained by the existing CallGraphSCCPassManager
requirements. Those requirements make it *really* hard to cleanly layer
the needed functionality for the new pass manager on top of the existing
analysis.

However, there are also a bunch of things that the pass manager would
specifically benefit from doing differently from the existing call graph
analysis, and this new implementation tries to address several of them:

- Be lazy about scanning function definitions. The existing pass eagerly
  scans the entire module to build the initial graph. This new pass is
  significantly more lazy, and I plan to push this even further to
  maximize locality during CGSCC walks.
- Don't use a single synthetic node to partition functions with an
  indirect call from functions whose address is taken. This node creates
  a huge choke-point which would preclude good parallelization across
  the fanout of the SCC graph when we got to the point of looking at
  such changes to LLVM.
- Use a memory dense and lightweight representation of the call graph
  rather than value handles and tracking call instructions. This will
  require explicit update calls instead of some updates working
  transparently, but should end up being significantly more efficient.
  The explicit update calls ended up being needed in many cases for the
  existing call graph so we don't really lose anything.
- Doesn't explicitly model SCCs and thus doesn't provide an "identity"
  for an SCC which is stable across updates. This is essential for the
  new pass manager to work correctly.
- Only form the graph necessary for traversing all of the functions in
  an SCC friendly order. This is a much simpler graph structure and
  should be more memory dense. It does limit the ways in which it is
  appropriate to use this analysis. I wish I had a better name than
  "call graph". I've commented extensively this aspect.

This is still very much a WIP, in fact it is really just the initial
bits. But it is about the fourth version of the initial bits that I've
implemented, with each of the others running into really frustrating
problems. This looks like it will actually work, and I'd like to split the
actual complexity across commits for the sake of my reviewers. =] The
rest of the implementation along with lots of wiring will follow
somewhat more rapidly now that there is a good path forward.

Naturally, this doesn't impact any of the existing optimizer. This code
is specific to the new pass manager.

A bunch of thanks are deserved for the various folks that have helped
with the design of this, especially Nick Lewycky who actually sat with
me to go through the fundamentals of the final version here.

llvm-svn: 200903
2014-02-06 04:37:03 +00:00
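To make the laziness and dense-representation points above concrete, here is a hypothetical sketch; every name in it (LazyGraphSketch, findCallees, and so on) is invented for illustration and is not the actual LazyCallGraph interface.

```cpp
// Hypothetical sketch of a lazily populated call graph node -- made-up
// names and structure, not the real LazyCallGraph API.
#include <memory>
#include <unordered_map>
#include <vector>

struct Function {}; // Stand-in for llvm::Function.

// Stub for "scan the body of F and report the functions it calls";
// a real version would walk the call instructions.
std::vector<Function *> findCallees(Function &F) { (void)F; return {}; }

class LazyGraphSketch {
public:
  class Node {
    friend class LazyGraphSketch;
    LazyGraphSketch *G;
    Function *F;
    bool Populated = false;
    // Dense edge storage: plain pointers to callee nodes, no value handles.
    std::vector<Node *> Callees;

    Node(LazyGraphSketch &Graph, Function &Fn) : G(&Graph), F(&Fn) {}

    // The function body is scanned only the first time its edges are
    // requested, so graph construction stays proportional to what a
    // CGSCC walk actually touches.
    void populate() {
      for (Function *Callee : findCallees(*F))
        Callees.push_back(&G->get(*Callee));
      Populated = true;
    }

  public:
    const std::vector<Node *> &callees() {
      if (!Populated)
        populate();
      return Callees;
    }
  };

  // Get-or-create the node for a function without scanning its body.
  Node &get(Function &F) {
    auto It = NodeMap.find(&F);
    if (It != NodeMap.end())
      return *It->second;
    auto N = std::unique_ptr<Node>(new Node(*this, F));
    Node &Ref = *N;
    NodeMap[&F] = std::move(N);
    return Ref;
  }

private:
  std::unordered_map<Function *, std::unique_ptr<Node>> NodeMap;
};
```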