//===- llvm/LinkAllPasses.h ------------ Reference All Passes ---*- C++ -*-===//
//
//                     The LLVM Compiler Infrastructure
//
// This file is distributed under the University of Illinois Open Source
// License. See LICENSE.TXT for details.
//
//===----------------------------------------------------------------------===//
//
// This header file pulls in all transformation and analysis passes for tools
// like opt and bugpoint that need this functionality.
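//
// As a rough sketch (the tool name is hypothetical, not something in the LLVM
// tree), a driver only has to include this header; the static
// ForcePassLinking object defined below then references every pass, so the
// linker cannot strip them:
//
//   // mytool.cpp
//   #include "llvm/LinkAllPasses.h"   // force all passes to be linked in
//
//   int main(int argc, char **argv) {
//     // ... build a legacy::PassManager and add whichever of the linked-in
//     // passes the user asked for ...
//   }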
//
//===----------------------------------------------------------------------===//

#ifndef LLVM_LINKALLPASSES_H
#define LLVM_LINKALLPASSES_H

#include "llvm/ADT/Statistic.h"
#include "llvm/Analysis/AliasAnalysisEvaluator.h"
#include "llvm/Analysis/AliasSetTracker.h"
#include "llvm/Analysis/BasicAliasAnalysis.h"
#include "llvm/Analysis/CFLAndersAliasAnalysis.h"
#include "llvm/Analysis/CFLSteensAliasAnalysis.h"
#include "llvm/Analysis/CallPrinter.h"
#include "llvm/Analysis/DomPrinter.h"
#include "llvm/Analysis/GlobalsModRef.h"
#include "llvm/Analysis/IntervalPartition.h"
#include "llvm/Analysis/Lint.h"
#include "llvm/Analysis/Passes.h"
#include "llvm/Analysis/PostDominators.h"
#include "llvm/Analysis/RegionPass.h"
#include "llvm/Analysis/RegionPrinter.h"
#include "llvm/Analysis/ScalarEvolution.h"
#include "llvm/Analysis/ScalarEvolutionAliasAnalysis.h"
#include "llvm/Analysis/ScopedNoAliasAA.h"
#include "llvm/Analysis/TargetLibraryInfo.h"
#include "llvm/Analysis/TypeBasedAliasAnalysis.h"
#include "llvm/CodeGen/Passes.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/IRPrintingPasses.h"
#include "llvm/Support/Valgrind.h"
#include "llvm/Transforms/IPO.h"
#include "llvm/Transforms/IPO/AlwaysInliner.h"
#include "llvm/Transforms/IPO/FunctionAttrs.h"
#include "llvm/Transforms/Instrumentation.h"
#include "llvm/Transforms/ObjCARC.h"
#include "llvm/Transforms/Scalar.h"
#include "llvm/Transforms/Scalar/GVN.h"
#include "llvm/Transforms/Utils/SymbolRewriter.h"
#include "llvm/Transforms/Utils/UnifyFunctionExitNodes.h"
#include "llvm/Transforms/Vectorize.h"
#include <cstdlib>

namespace {
  struct ForcePassLinking {
    ForcePassLinking() {
      // We must reference the passes in such a way that compilers will not
      // delete them as dead code, even with whole program optimization,
      // yet the references are effectively a NO-OP. As the compiler isn't
      // smart enough to know that getenv() never returns -1, this will do
      // the job.
      if (std::getenv("bar") != (char*) -1)
        return;
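
      // Nothing below ever runs: getenv() can never return -1, so the early
      // return above always fires. The calls exist only so the linker sees a
      // reference to each pass and keeps it in the final binary.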
      (void) llvm::createAAEvalPass();
      (void) llvm::createAggressiveDCEPass();
      (void) llvm::createBitTrackingDCEPass();
      (void) llvm::createArgumentPromotionPass();
      (void) llvm::createAlignmentFromAssumptionsPass();
      (void) llvm::createBasicAAWrapperPass();
      (void) llvm::createSCEVAAWrapperPass();
      (void) llvm::createTypeBasedAAWrapperPass();
      (void) llvm::createScopedNoAliasAAWrapperPass();
      (void) llvm::createBoundsCheckingPass();
      (void) llvm::createBreakCriticalEdgesPass();
      (void) llvm::createCallGraphDOTPrinterPass();
      (void) llvm::createCallGraphViewerPass();
      (void) llvm::createCFGSimplificationPass();
      (void) llvm::createLateCFGSimplificationPass();
      (void) llvm::createCFLAndersAAWrapperPass();
      (void) llvm::createCFLSteensAAWrapperPass();
      (void) llvm::createStructurizeCFGPass();
      (void) llvm::createLibCallsShrinkWrapPass();
      (void) llvm::createConstantMergePass();
      (void) llvm::createConstantPropagationPass();
      (void) llvm::createCostModelAnalysisPass();
      (void) llvm::createDeadArgEliminationPass();
      (void) llvm::createDeadCodeEliminationPass();
      (void) llvm::createDeadInstEliminationPass();
      (void) llvm::createDeadStoreEliminationPass();
      (void) llvm::createDependenceAnalysisWrapperPass();
      (void) llvm::createDivergenceAnalysisPass();
      (void) llvm::createDomOnlyPrinterPass();
      (void) llvm::createDomPrinterPass();
      (void) llvm::createDomOnlyViewerPass();
      (void) llvm::createDomViewerPass();
      (void) llvm::createGCOVProfilerPass();
      (void) llvm::createPGOInstrumentationGenLegacyPass();
      (void) llvm::createPGOInstrumentationUseLegacyPass();
      (void) llvm::createPGOIndirectCallPromotionLegacyPass();
      (void) llvm::createPGOMemOPSizeOptLegacyPass();
      (void) llvm::createInstrProfilingLegacyPass();
      (void) llvm::createFunctionImportPass();
      (void) llvm::createFunctionInliningPass();
      (void) llvm::createAlwaysInlinerLegacyPass();
      (void) llvm::createGlobalDCEPass();
      (void) llvm::createGlobalOptimizerPass();
      (void) llvm::createGlobalsAAWrapperPass();
      (void) llvm::createGuardWideningPass();
      (void) llvm::createIPConstantPropagationPass();
      (void) llvm::createIPSCCPPass();
      (void) llvm::createInductiveRangeCheckEliminationPass();
      (void) llvm::createIndVarSimplifyPass();
      (void) llvm::createInstructionCombiningPass();
      (void) llvm::createInternalizePass();
      (void) llvm::createLCSSAPass();
      (void) llvm::createLICMPass();
      (void) llvm::createLoopSinkPass();
      (void) llvm::createLazyValueInfoPass();
      (void) llvm::createLoopExtractorPass();
      (void) llvm::createLoopInterchangePass();
      (void) llvm::createLoopPredicationPass();
      (void) llvm::createLoopSimplifyPass();
      (void) llvm::createLoopSimplifyCFGPass();
      (void) llvm::createLoopStrengthReducePass();
      (void) llvm::createLoopRerollPass();
      (void) llvm::createLoopUnrollPass();
      (void) llvm::createLoopUnswitchPass();
      (void) llvm::createLoopVersioningLICMPass();
      (void) llvm::createLoopIdiomPass();
      (void) llvm::createLoopRotatePass();
      (void) llvm::createLowerExpectIntrinsicPass();
      (void) llvm::createLowerInvokePass();
      (void) llvm::createLowerSwitchPass();
      (void) llvm::createNaryReassociatePass();
      (void) llvm::createObjCARCAAWrapperPass();
      (void) llvm::createObjCARCAPElimPass();
      (void) llvm::createObjCARCExpandPass();
      (void) llvm::createObjCARCContractPass();
      (void) llvm::createObjCARCOptPass();
      (void) llvm::createPAEvalPass();
      (void) llvm::createPromoteMemoryToRegisterPass();
      (void) llvm::createDemoteRegisterToMemoryPass();
      (void) llvm::createPruneEHPass();
      (void) llvm::createPostDomOnlyPrinterPass();
      (void) llvm::createPostDomPrinterPass();
      (void) llvm::createPostDomOnlyViewerPass();
      (void) llvm::createPostDomViewerPass();
      (void) llvm::createReassociatePass();
      (void) llvm::createRegionInfoPass();
      (void) llvm::createRegionOnlyPrinterPass();
      (void) llvm::createRegionOnlyViewerPass();
      (void) llvm::createRegionPrinterPass();
      (void) llvm::createRegionViewerPass();
      (void) llvm::createSCCPPass();
      (void) llvm::createSafeStackPass();
      (void) llvm::createSROAPass();
      (void) llvm::createSingleLoopExtractorPass();
      (void) llvm::createStripSymbolsPass();
      (void) llvm::createStripNonDebugSymbolsPass();
      (void) llvm::createStripDeadDebugInfoPass();
      (void) llvm::createStripDeadPrototypesPass();
      (void) llvm::createTailCallEliminationPass();
      (void) llvm::createJumpThreadingPass();
      (void) llvm::createUnifyFunctionExitNodesPass();
      (void) llvm::createInstCountPass();
      (void) llvm::createConstantHoistingPass();
      (void) llvm::createCodeGenPreparePass();
      (void) llvm::createCountingFunctionInserterPass();
      (void) llvm::createEarlyCSEPass();
      (void) llvm::createGVNHoistPass();
      (void) llvm::createMergedLoadStoreMotionPass();
      (void) llvm::createGVNPass();
      (void) llvm::createNewGVNPass();
      (void) llvm::createMemCpyOptPass();
      (void) llvm::createLoopDeletionPass();
      (void) llvm::createPostDomTree();
      (void) llvm::createInstructionNamerPass();
      (void) llvm::createMetaRenamerPass();
      (void) llvm::createPostOrderFunctionAttrsLegacyPass();
      (void) llvm::createReversePostOrderFunctionAttrsPass();
      (void) llvm::createMergeFunctionsPass();
      (void) llvm::createMergeICmpsPass();
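
      // Give the printing passes a real stream to write to; constructing
      // these objects on the stack avoids the undefined behavior of
      // referencing them through null pointers.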
      std::string buf;
      llvm::raw_string_ostream os(buf);
      (void) llvm::createPrintModulePass(os);
      (void) llvm::createPrintFunctionPass(os);
      (void) llvm::createPrintBasicBlockPass(os);
      (void) llvm::createModuleDebugInfoPrinterPass();
      (void) llvm::createPartialInliningPass();
      (void) llvm::createLintPass();
      (void) llvm::createSinkingPass();
      (void) llvm::createLowerAtomicPass();
      (void) llvm::createCorrelatedValuePropagationPass();
      (void) llvm::createMemDepPrinter();
      (void) llvm::createInstructionSimplifierPass();
      (void) llvm::createLoopVectorizePass();
      (void) llvm::createSLPVectorizerPass();
      (void) llvm::createLoadStoreVectorizerPass();
      (void) llvm::createPartiallyInlineLibCallsPass();
      (void) llvm::createScalarizerPass();
      (void) llvm::createSeparateConstOffsetFromGEPPass();
      (void) llvm::createSpeculativeExecutionPass();
      (void) llvm::createSpeculativeExecutionIfHasBranchDivergencePass();
      (void) llvm::createRewriteSymbolsPass();
      (void) llvm::createStraightLineStrengthReducePass();
      (void) llvm::createMemDerefPrinter();
      (void) llvm::createFloat2IntPass();
      (void) llvm::createEliminateAvailableExternallyPass();
      (void) llvm::createScalarizeMaskedMemIntrinPass();

      (void)new llvm::IntervalPartition();
      (void)new llvm::ScalarEvolutionWrapperPass();
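
      // Likewise, reference the remaining analyses through stack objects and
      // ordinary calls rather than through null pointers, so the compiler
      // cannot legally optimize the references away.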
      llvm::Function::Create(nullptr, llvm::GlobalValue::ExternalLinkage)->viewCFGOnly();
      llvm::RGPassManager RGM;
      llvm::TargetLibraryInfoImpl TLII;
      llvm::TargetLibraryInfo TLI(TLII);
      llvm::AliasAnalysis AA(TLI);
      llvm::AliasSetTracker X(AA);
      X.add(nullptr, 0, llvm::AAMDNodes()); // for -print-alias-sets
      (void) llvm::AreStatisticsEnabled();
      (void) llvm::sys::RunningOnValgrind();
    }
  } ForcePassLinking; // Force link by creating a global definition.
}

#endif