//===- PassRegistry.def - Registry of passes --------------------*- C++ -*-===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
//
// This file is used as the registry of passes that are part of the core LLVM
// libraries. This file describes both transformation passes and analyses.
// Analyses are registered, while transformation passes have names registered
// that can be used when providing a textual pass pipeline.
//
//===----------------------------------------------------------------------===//

// NOTE: NO INCLUDE GUARD DESIRED!
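
// The blocks below follow an X-macro pattern: each macro gets a no-op default
// definition (guarded by #ifndef) so a consumer only needs to define the
// macros it cares about before including this file, and each block ends with
// an #undef. As an illustrative sketch only (not the literal PassBuilder
// code; the `Name` string and `MPM` pass manager are assumed to exist in the
// including context), registering every module pass by its textual name could
// look like:
//
//   #define MODULE_PASS(NAME, CREATE_PASS)                                    \
//     if (Name == NAME) {                                                     \
//       MPM.addPass(CREATE_PASS);                                             \
//       return true;                                                          \
//     }
//   #include "PassRegistry.def"
//
// The registered names are what textual pipelines accept, e.g.
// `opt -passes=instcombine`.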

#ifndef MODULE_ANALYSIS
#define MODULE_ANALYSIS(NAME, CREATE_PASS)
#endif
MODULE_ANALYSIS("callgraph", CallGraphAnalysis())
MODULE_ANALYSIS("lcg", LazyCallGraphAnalysis())
MODULE_ANALYSIS("module-summary", ModuleSummaryIndexAnalysis())
MODULE_ANALYSIS("no-op-module", NoOpModuleAnalysis())
MODULE_ANALYSIS("profile-summary", ProfileSummaryAnalysis())
MODULE_ANALYSIS("stack-safety", StackSafetyGlobalAnalysis())
MODULE_ANALYSIS("verify", VerifierAnalysis())
MODULE_ANALYSIS("pass-instrumentation", PassInstrumentationAnalysis(PIC))
MODULE_ANALYSIS("asan-globals-md", ASanGlobalsMetadataAnalysis())
MODULE_ANALYSIS("inline-advisor", InlineAdvisorAnalysis())
MODULE_ANALYSIS("ir-similarity", IRSimilarityAnalysis())

#ifndef MODULE_ALIAS_ANALYSIS
#define MODULE_ALIAS_ANALYSIS(NAME, CREATE_PASS) \
  MODULE_ANALYSIS(NAME, CREATE_PASS)
#endif
MODULE_ALIAS_ANALYSIS("globals-aa", GlobalsAA())
#undef MODULE_ALIAS_ANALYSIS
#undef MODULE_ANALYSIS
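
// The *_ALIAS_ANALYSIS entries forward to plain analysis registration by
// default, but a consumer can define them separately to assemble an alias
// analysis pipeline. An illustrative sketch (the `Name` string and the `AA`
// AAManager are assumptions of the including context, not defined here; the
// FUNCTION_ALIAS_ANALYSIS block further down works the same way):
//
//   #define MODULE_ALIAS_ANALYSIS(NAME, CREATE_PASS)                          \
//     if (Name == NAME) {                                                     \
//       AA.registerModuleAnalysis<                                            \
//           std::remove_reference_t<decltype(CREATE_PASS)>>();                \
//       return true;                                                          \
//     }
//   #include "PassRegistry.def"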

#ifndef MODULE_PASS
#define MODULE_PASS(NAME, CREATE_PASS)
#endif
MODULE_PASS("always-inline", AlwaysInlinerPass())
MODULE_PASS("attributor", AttributorPass())
MODULE_PASS("annotation2metadata", Annotation2MetadataPass())
MODULE_PASS("openmp-opt", OpenMPOptPass())
MODULE_PASS("called-value-propagation", CalledValuePropagationPass())
MODULE_PASS("canonicalize-aliases", CanonicalizeAliasesPass())
MODULE_PASS("cg-profile", CGProfilePass())
MODULE_PASS("constmerge", ConstantMergePass())
MODULE_PASS("cross-dso-cfi", CrossDSOCFIPass())
MODULE_PASS("deadargelim", DeadArgumentEliminationPass())
MODULE_PASS("elim-avail-extern", EliminateAvailableExternallyPass())
MODULE_PASS("extract-blocks", BlockExtractorPass())
MODULE_PASS("forceattrs", ForceFunctionAttrsPass())
MODULE_PASS("function-import", FunctionImportPass())
MODULE_PASS("function-specialization", FunctionSpecializationPass())
MODULE_PASS("globaldce", GlobalDCEPass())
MODULE_PASS("globalopt", GlobalOptPass())
MODULE_PASS("globalsplit", GlobalSplitPass())
MODULE_PASS("hotcoldsplit", HotColdSplittingPass())
MODULE_PASS("hwasan", HWAddressSanitizerPass(false, false))
MODULE_PASS("khwasan", HWAddressSanitizerPass(true, true))
MODULE_PASS("inferattrs", InferFunctionAttrsPass())
MODULE_PASS("inliner-wrapper", ModuleInlinerWrapperPass())
MODULE_PASS("inliner-wrapper-no-mandatory-first", ModuleInlinerWrapperPass(
    getInlineParams(),
    false))
MODULE_PASS("insert-gcov-profiling", GCOVProfilerPass())
MODULE_PASS("instrorderfile", InstrOrderFilePass())
MODULE_PASS("instrprof", InstrProfiling())
MODULE_PASS("internalize", InternalizePass())
MODULE_PASS("invalidate<all>", InvalidateAllAnalysesPass())
MODULE_PASS("ipsccp", IPSCCPPass())
MODULE_PASS("iroutliner", IROutlinerPass())
MODULE_PASS("print-ir-similarity", IRSimilarityAnalysisPrinterPass(dbgs()))
MODULE_PASS("loop-extract", LoopExtractorPass())
MODULE_PASS("lowertypetests", LowerTypeTestsPass())
MODULE_PASS("metarenamer", MetaRenamerPass())
MODULE_PASS("mergefunc", MergeFunctionsPass())
MODULE_PASS("name-anon-globals", NameAnonGlobalPass())
MODULE_PASS("no-op-module", NoOpModulePass())
MODULE_PASS("objc-arc-apelim", ObjCARCAPElimPass())
MODULE_PASS("partial-inliner", PartialInlinerPass())
MODULE_PASS("pgo-icall-prom", PGOIndirectCallPromotion())
MODULE_PASS("pgo-instr-gen", PGOInstrumentationGen())
MODULE_PASS("pgo-instr-use", PGOInstrumentationUse())
MODULE_PASS("print-profile-summary", ProfileSummaryPrinterPass(dbgs()))
MODULE_PASS("print-callgraph", CallGraphPrinterPass(dbgs()))
MODULE_PASS("print", PrintModulePass(dbgs()))
MODULE_PASS("print-lcg", LazyCallGraphPrinterPass(dbgs()))
MODULE_PASS("print-lcg-dot", LazyCallGraphDOTPrinterPass(dbgs()))
MODULE_PASS("print-must-be-executed-contexts", MustBeExecutedContextPrinterPass(dbgs()))
MODULE_PASS("print-stack-safety", StackSafetyGlobalPrinterPass(dbgs()))
MODULE_PASS("print<module-debuginfo>", ModuleDebugInfoPrinterPass(dbgs()))
MODULE_PASS("rel-lookup-table-converter", RelLookupTableConverterPass())
MODULE_PASS("rewrite-statepoints-for-gc", RewriteStatepointsForGC())
MODULE_PASS("rewrite-symbols", RewriteSymbolPass())
MODULE_PASS("rpo-function-attrs", ReversePostOrderFunctionAttrsPass())
MODULE_PASS("sample-profile", SampleProfileLoaderPass())
MODULE_PASS("scc-oz-module-inliner",
    buildInlinerPipeline(OptimizationLevel::Oz, ThinOrFullLTOPhase::None))
MODULE_PASS("loop-extract-single", LoopExtractorPass(1))
MODULE_PASS("strip", StripSymbolsPass())
MODULE_PASS("strip-dead-debug-info", StripDeadDebugInfoPass())
MODULE_PASS("pseudo-probe", SampleProfileProbePass(TM))
MODULE_PASS("strip-dead-prototypes", StripDeadPrototypesPass())
MODULE_PASS("strip-debug-declare", StripDebugDeclarePass())
MODULE_PASS("strip-nondebug", StripNonDebugSymbolsPass())
MODULE_PASS("strip-nonlinetable-debuginfo", StripNonLineTableDebugInfoPass())
MODULE_PASS("synthetic-counts-propagation", SyntheticCountsPropagation())
MODULE_PASS("verify", VerifierPass())
MODULE_PASS("wholeprogramdevirt", WholeProgramDevirtPass())
MODULE_PASS("dfsan", DataFlowSanitizerPass())
MODULE_PASS("asan-module", ModuleAddressSanitizerPass(/*CompileKernel=*/false, false, true, false))
MODULE_PASS("msan-module", MemorySanitizerPass({}))
MODULE_PASS("tsan-module", ThreadSanitizerPass())
MODULE_PASS("kasan-module", ModuleAddressSanitizerPass(/*CompileKernel=*/true, false, true, false))
MODULE_PASS("sancov-module", ModuleSanitizerCoveragePass())
MODULE_PASS("memprof-module", ModuleMemProfilerPass())
MODULE_PASS("poison-checking", PoisonCheckingPass())
MODULE_PASS("pseudo-probe-update", PseudoProbeUpdatePass())
#undef MODULE_PASS

#ifndef CGSCC_ANALYSIS
#define CGSCC_ANALYSIS(NAME, CREATE_PASS)
#endif
CGSCC_ANALYSIS("no-op-cgscc", NoOpCGSCCAnalysis())
CGSCC_ANALYSIS("fam-proxy", FunctionAnalysisManagerCGSCCProxy())
CGSCC_ANALYSIS("pass-instrumentation", PassInstrumentationAnalysis(PIC))
#undef CGSCC_ANALYSIS

#ifndef CGSCC_PASS
#define CGSCC_PASS(NAME, CREATE_PASS)
#endif
CGSCC_PASS("argpromotion", ArgumentPromotionPass())
CGSCC_PASS("invalidate<all>", InvalidateAllAnalysesPass())
CGSCC_PASS("function-attrs", PostOrderFunctionAttrsPass())
CGSCC_PASS("attributor-cgscc", AttributorCGSCCPass())
CGSCC_PASS("inline", InlinerPass())
CGSCC_PASS("openmp-opt-cgscc", OpenMPOptCGSCCPass())
CGSCC_PASS("coro-split", CoroSplitPass())
CGSCC_PASS("no-op-cgscc", NoOpCGSCCPass())
#undef CGSCC_PASS

#ifndef FUNCTION_ANALYSIS
#define FUNCTION_ANALYSIS(NAME, CREATE_PASS)
#endif
FUNCTION_ANALYSIS("aa", AAManager())
FUNCTION_ANALYSIS("assumptions", AssumptionAnalysis())
FUNCTION_ANALYSIS("block-freq", BlockFrequencyAnalysis())
FUNCTION_ANALYSIS("branch-prob", BranchProbabilityAnalysis())
FUNCTION_ANALYSIS("domtree", DominatorTreeAnalysis())
FUNCTION_ANALYSIS("postdomtree", PostDominatorTreeAnalysis())
FUNCTION_ANALYSIS("demanded-bits", DemandedBitsAnalysis())
FUNCTION_ANALYSIS("domfrontier", DominanceFrontierAnalysis())
FUNCTION_ANALYSIS("func-properties", FunctionPropertiesAnalysis())
FUNCTION_ANALYSIS("loops", LoopAnalysis())
FUNCTION_ANALYSIS("lazy-value-info", LazyValueAnalysis())
FUNCTION_ANALYSIS("da", DependenceAnalysis())
FUNCTION_ANALYSIS("inliner-size-estimator", InlineSizeEstimatorAnalysis())
FUNCTION_ANALYSIS("memdep", MemoryDependenceAnalysis())
FUNCTION_ANALYSIS("memoryssa", MemorySSAAnalysis())
FUNCTION_ANALYSIS("phi-values", PhiValuesAnalysis())
FUNCTION_ANALYSIS("regions", RegionInfoAnalysis())
FUNCTION_ANALYSIS("no-op-function", NoOpFunctionAnalysis())
FUNCTION_ANALYSIS("opt-remark-emit", OptimizationRemarkEmitterAnalysis())
FUNCTION_ANALYSIS("scalar-evolution", ScalarEvolutionAnalysis())
FUNCTION_ANALYSIS("stack-safety-local", StackSafetyAnalysis())
FUNCTION_ANALYSIS("targetlibinfo", TargetLibraryAnalysis())
FUNCTION_ANALYSIS("targetir",
                  TM ? TM->getTargetIRAnalysis() : TargetIRAnalysis())
FUNCTION_ANALYSIS("verify", VerifierAnalysis())
FUNCTION_ANALYSIS("pass-instrumentation", PassInstrumentationAnalysis(PIC))
FUNCTION_ANALYSIS("divergence", DivergenceAnalysis())

#ifndef FUNCTION_ALIAS_ANALYSIS
#define FUNCTION_ALIAS_ANALYSIS(NAME, CREATE_PASS) \
  FUNCTION_ANALYSIS(NAME, CREATE_PASS)
#endif
FUNCTION_ALIAS_ANALYSIS("basic-aa", BasicAA())
FUNCTION_ALIAS_ANALYSIS("cfl-anders-aa", CFLAndersAA())
FUNCTION_ALIAS_ANALYSIS("cfl-steens-aa", CFLSteensAA())
FUNCTION_ALIAS_ANALYSIS("objc-arc-aa", objcarc::ObjCARCAA())
FUNCTION_ALIAS_ANALYSIS("scev-aa", SCEVAA())
FUNCTION_ALIAS_ANALYSIS("scoped-noalias-aa", ScopedNoAliasAA())
FUNCTION_ALIAS_ANALYSIS("tbaa", TypeBasedAA())
#undef FUNCTION_ALIAS_ANALYSIS
#undef FUNCTION_ANALYSIS
|
|
|
|
|
2014-04-21 10:08:50 +02:00
|
|
|
#ifndef FUNCTION_PASS
|
|
|
|
#define FUNCTION_PASS(NAME, CREATE_PASS)
|
|
|
|
#endif
|
2016-02-20 04:46:03 +01:00
|
|
|
FUNCTION_PASS("aa-eval", AAEvaluator())
|
2015-10-31 00:13:18 +01:00
|
|
|
FUNCTION_PASS("adce", ADCEPass())
|
2016-06-15 23:51:30 +02:00
|
|
|
FUNCTION_PASS("add-discriminators", AddDiscriminatorsPass())
|
2018-01-25 13:06:32 +01:00
|
|
|
FUNCTION_PASS("aggressive-instcombine", AggressiveInstCombinePass())
|
2020-02-02 14:46:59 +01:00
|
|
|
FUNCTION_PASS("assume-builder", AssumeBuilderPass())
|
2020-05-07 13:41:20 +02:00
|
|
|
FUNCTION_PASS("assume-simplify", AssumeSimplifyPass())
|
2016-06-15 08:18:01 +02:00
|
|
|
FUNCTION_PASS("alignment-from-assumptions", AlignmentFromAssumptionsPass())
|
2020-11-13 10:46:55 +01:00
|
|
|
FUNCTION_PASS("annotation-remarks", AnnotationRemarksPass())
|
2016-05-25 03:57:04 +02:00
|
|
|
FUNCTION_PASS("bdce", BDCEPass())
|
2017-11-14 02:30:04 +01:00
|
|
|
FUNCTION_PASS("bounds-checking", BoundsCheckingPass())
|
2016-07-22 20:04:25 +02:00
|
|
|
FUNCTION_PASS("break-crit-edges", BreakCriticalEdgesPass())
|
Recommit r317351 : Add CallSiteSplitting pass
This recommit r317351 after fixing a buildbot failure.
Original commit message:
Summary:
This change add a pass which tries to split a call-site to pass
more constrained arguments if its argument is predicated in the control flow
so that we can expose better context to the later passes (e.g, inliner, jump
threading, or IPA-CP based function cloning, etc.).
As of now we support two cases :
1) If a call site is dominated by an OR condition and if any of its arguments
are predicated on this OR condition, try to split the condition with more
constrained arguments. For example, in the code below, we try to split the
call site since we can predicate the argument (ptr) based on the OR condition.
Split from :
if (!ptr || c)
callee(ptr);
to :
if (!ptr)
callee(null ptr) // set the known constant value
else if (c)
callee(nonnull ptr) // set non-null attribute in the argument
2) We can also split a call-site based on constant incoming values of a PHI
For example,
from :
BB0:
%c = icmp eq i32 %i1, %i2
br i1 %c, label %BB2, label %BB1
BB1:
br label %BB2
BB2:
%p = phi i32 [ 0, %BB0 ], [ 1, %BB1 ]
call void @bar(i32 %p)
to
BB0:
%c = icmp eq i32 %i1, %i2
br i1 %c, label %BB2-split0, label %BB1
BB1:
br label %BB2-split1
BB2-split0:
call void @bar(i32 0)
br label %BB2
BB2-split1:
call void @bar(i32 1)
br label %BB2
BB2:
%p = phi i32 [ 0, %BB2-split0 ], [ 1, %BB2-split1 ]
llvm-svn: 317362
2017-11-03 21:41:16 +01:00
|
|
|
FUNCTION_PASS("callsite-splitting", CallSiteSplittingPass())
|
2016-07-02 02:16:47 +02:00
|
|
|
FUNCTION_PASS("consthoist", ConstantHoistingPass())
|
2020-09-27 20:12:13 +02:00
|
|
|
FUNCTION_PASS("constraint-elimination", ConstraintEliminationPass())
|
2018-09-04 19:19:13 +02:00
|
|
|
FUNCTION_PASS("chr", ControlHeightReductionPass())
|
[Coroutines][1/6] New pass manager: coro-early
Summary:
The first in a series of patches that ports the LLVM coroutines passes
to the new pass manager infrastructure. This patch implements
'coro-early'.
NB: All coroutines passes begin by checking that coroutine intrinsics are
declared within the LLVM IR module they're operating on. To do so, they call
`coro::declaresIntrinsics`. The next 3 patches in this series, which add new
pass manager implementations of the 'coro-split', 'coro-elide', and
'coro-cleanup' passes, use a similar pattern as the one used here: a static
function is shared across both old and new passes to detect if relevant
coroutine intrinsics are delcared. To make this pattern easier to read, this
patch adds `const` keywords to the parameters of `coro::declaresIntrinsics`.
Reviewers: GorNishanov, lewissbaker, junparser, chandlerc, deadalnix, wenlei
Reviewed By: wenlei
Subscribers: ychen, wenlei, EricWF, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D71898
2019-12-26 14:00:00 +01:00
|
|
|
FUNCTION_PASS("coro-early", CoroEarlyPass())
|
2020-02-18 22:29:13 +01:00
|
|
|
FUNCTION_PASS("coro-elide", CoroElidePass())
|
[Coroutines][4/6] New pass manager: coro-cleanup
Summary:
Depends on https://reviews.llvm.org/D71900.
The fourth in a series of patches that ports the LLVM coroutines passes
to the new pass manager infrastructure. This patch implements
'coro-cleanup'.
No existing regression tests check the behavior of coro-cleanup on its
own, so this patch adds one. (A test named 'coro-cleanup.ll' exists, but
it relies on the entire coroutines pipeline being run. It's updated to
test the new pass manager in the 5th patch of this series.)
Reviewers: GorNishanov, lewissbaker, chandlerc, junparser, deadalnix, wenlei
Reviewed By: wenlei
Subscribers: wenlei, EricWF, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D71901
2019-12-26 14:00:00 +01:00
|
|
|
FUNCTION_PASS("coro-cleanup", CoroCleanupPass())
|
2016-07-07 01:26:29 +02:00
|
|
|
FUNCTION_PASS("correlated-propagation", CorrelatedValuePropagationPass())
|
2016-04-22 21:40:41 +02:00
|
|
|
FUNCTION_PASS("dce", DCEPass())
|
2021-07-27 20:26:49 +02:00
|
|
|
FUNCTION_PASS("dfa-jump-threading", DFAJumpThreadingPass())
|
2017-09-09 15:38:18 +02:00
|
|
|
FUNCTION_PASS("div-rem-pairs", DivRemPairsPass())
|
2016-05-17 23:38:13 +02:00
|
|
|
FUNCTION_PASS("dse", DSEPass())
|
2018-06-29 19:48:58 +02:00
|
|
|
FUNCTION_PASS("dot-cfg", CFGPrinterPass())
|
|
|
|
FUNCTION_PASS("dot-cfg-only", CFGOnlyPrinterPass())
|
2016-08-31 21:24:10 +02:00
|
|
|
FUNCTION_PASS("early-cse", EarlyCSEPass(/*UseMemorySSA=*/false))
|
|
|
|
FUNCTION_PASS("early-cse-memssa", EarlyCSEPass(/*UseMemorySSA=*/true))
|
2017-11-14 22:09:45 +01:00
|
|
|
FUNCTION_PASS("ee-instrument", EntryExitInstrumenterPass(/*PostInlining=*/false))
|
2020-10-08 17:53:00 +02:00
|
|
|
FUNCTION_PASS("fix-irreducible", FixIrreduciblePass())
|
Introduce llvm.experimental.widenable_condition intrinsic
This patch introduces a new instinsic `@llvm.experimental.widenable_condition`
that allows explicit representation for guards. It is an alternative to using
`@llvm.experimental.guard` intrinsic that does not contain implicit control flow.
We keep finding places where `@llvm.experimental.guard` is not supported or
treated too conservatively, and there are 2 reasons to that:
- `@llvm.experimental.guard` has memory write side effect to model implicit control flow,
and this sometimes confuses passes and analyzes that work with memory;
- Not all passes and analysis are aware of the semantics of guards. These passes treat them
as regular throwing call and have no idea that the condition of guard may be used to prove
something. One well-known place which had caused us troubles in the past is explicit loop
iteration count calculation in SCEV. Another example is new loop unswitching which is not
aware of guards. Whenever a new pass appears, we potentially have this problem there.
Rather than go and fix all these places (and commit to keep track of them and add support
in future), it seems more reasonable to leverage the existing optimizer's logic as much as possible.
The only significant difference between guards and regular explicit branches is that guard's condition
can be widened. It means that a guard contains (explicitly or implicitly) a `deopt` block successor,
and it is always legal to go there no matter what the guard condition is. The other successor is
a guarded block, and it is only legal to go there if the condition is true.
This patch introduces a new explicit form of guards alternative to `@llvm.experimental.guard`
intrinsic. Now a widenable guard can be represented in the CFG explicitly like this:
%widenable_condition = call i1 @llvm.experimental.widenable.condition()
%new_condition = and i1 %cond, %widenable_condition
br i1 %new_condition, label %guarded, label %deopt
guarded:
; Guarded instructions
deopt:
call type @llvm.experimental.deoptimize(<args...>) [ "deopt"(<deopt_args...>) ]
The new intrinsic `@llvm.experimental.widenable.condition` has semantics of an
`undef`, but the intrinsic prevents the optimizer from folding it early. This form
should exploit all optimization boons provided to `br` instuction, and it still can be
widened by replacing the result of `@llvm.experimental.widenable.condition()`
with `and` with any arbitrary boolean value (as long as the branch that is taken when
it is `false` has a deopt and has no side-effects).
For more motivation, please check llvm-dev discussion "[llvm-dev] Giving up using
implicit control flow in guards".
This patch introduces this new intrinsic with respective LangRef changes and a pass
that converts old-style guards (expressed as intrinsics) into the new form.
The naming discussion is still ungoing. Merging this to unblock further items. We can
later change the name of this intrinsic.
Reviewed By: reames, fedor.sergeev, sanjoy
Differential Revision: https://reviews.llvm.org/D51207
llvm-svn: 348593
2018-12-07 15:39:46 +01:00
|
|
|
FUNCTION_PASS("make-guards-explicit", MakeGuardsExplicitPass())
|
2017-11-14 22:09:45 +01:00
|
|
|
FUNCTION_PASS("post-inline-ee-instrument", EntryExitInstrumenterPass(/*PostInlining=*/true))
|
2016-07-15 15:45:20 +02:00
|
|
|
FUNCTION_PASS("gvn-hoist", GVNHoistPass())
|
2020-09-22 17:20:11 +02:00
|
|
|
FUNCTION_PASS("gvn-sink", GVNSinkPass())
|
2020-09-01 03:36:11 +02:00
|
|
|
FUNCTION_PASS("helloworld", HelloWorldPass())
|
2020-12-28 22:07:46 +01:00
|
|
|
FUNCTION_PASS("infer-address-spaces", InferAddressSpacesPass())
|
2015-01-24 05:19:17 +01:00
|
|
|
FUNCTION_PASS("instcombine", InstCombinePass())
|
2020-07-17 13:30:51 +02:00
|
|
|
FUNCTION_PASS("instcount", InstCountPass())
|
2018-06-30 01:36:03 +02:00
|
|
|
FUNCTION_PASS("instsimplify", InstSimplifyPass())
|
2015-01-06 10:06:35 +01:00
|
|
|
FUNCTION_PASS("invalidate<all>", InvalidateAllAnalysesPass())
|
2020-01-27 22:33:34 +01:00
|
|
|
FUNCTION_PASS("irce", IRCEPass())
|
2016-06-25 01:32:02 +02:00
|
|
|
FUNCTION_PASS("float2int", Float2IntPass())
|
2015-01-06 03:37:55 +01:00
|
|
|
FUNCTION_PASS("no-op-function", NoOpFunctionPass())
|
Conditionally eliminate library calls where the result value is not used
Summary:
This pass shrink-wraps a condition to some library calls where the call
result is not used. For example:
sqrt(val);
is transformed to
if (val < 0)
sqrt(val);
Even if the result of library call is not being used, the compiler cannot
safely delete the call because the function can set errno on error
conditions.
Note in many functions, the error condition solely depends on the incoming
parameter. In this optimization, we can generate the condition can lead to
the errno to shrink-wrap the call. Since the chances of hitting the error
condition is low, the runtime call is effectively eliminated.
These partially dead calls are usually results of C++ abstraction penalty
exposed by inlining. This optimization hits 108 times in 19 C/C++ programs
in SPEC2006.
Reviewers: hfinkel, mehdi_amini, davidxl
Subscribers: modocache, mgorny, mehdi_amini, xur, llvm-commits, beanz
Differential Revision: https://reviews.llvm.org/D24414
llvm-svn: 284542
2016-10-18 23:36:27 +02:00
|
|
|
FUNCTION_PASS("libcalls-shrinkwrap", LibCallsShrinkWrapPass())
|
2020-09-03 06:54:27 +02:00
|
|
|
FUNCTION_PASS("lint", LintPass())
|
2019-11-11 20:42:18 +01:00
|
|
|
FUNCTION_PASS("inject-tli-mappings", InjectTLIMappings())
|
2020-10-22 07:55:34 +02:00
|
|
|
FUNCTION_PASS("instnamer", InstructionNamerPass())
|
2016-05-14 00:52:35 +02:00
|
|
|
FUNCTION_PASS("loweratomic", LowerAtomicPass())
|
2015-01-24 12:13:02 +01:00
|
|
|
FUNCTION_PASS("lower-expect", LowerExpectIntrinsicPass())
|
2016-07-29 00:08:41 +02:00
|
|
|
FUNCTION_PASS("lower-guard-intrinsic", LowerGuardIntrinsicPass())
|
2019-10-14 18:15:14 +02:00
|
|
|
FUNCTION_PASS("lower-constant-intrinsics", LowerConstantIntrinsicsPass())
|
[Matrix] Add first set of matrix intrinsics and initial lowering pass.
This is the first patch adding an initial set of matrix intrinsics and a
corresponding lowering pass. This has been discussed on llvm-dev:
http://lists.llvm.org/pipermail/llvm-dev/2019-October/136240.html
The first patch introduces four new intrinsics (transpose, multiply,
columnwise load and store) and a LowerMatrixIntrinsics pass, that
lowers those intrinsics to vector operations.
Matrixes are embedded in a 'flat' vector (e.g. a 4 x 4 float matrix
embedded in a <16 x float> vector) and the intrinsics take the dimension
information as parameters. Those parameters need to be ConstantInt.
For the memory layout, we initially assume column-major, but in the RFC
we also described how to extend the intrinsics to support row-major as
well.
For the initial lowering, we split the input of the intrinsics into a
set of column vectors, transform those column vectors and concatenate
the result columns to a flat result vector.
This allows us to lower the intrinsics without any shape propagation, as
mentioned in the RFC. In follow-up patches, we plan to submit the
following improvements:
* Shape propagation to eliminate the embedding/splitting for each
intrinsic.
* Fused & tiled lowering of multiply and other operations.
* Optimization remarks highlighting matrix expressions and costs.
* Generate loops for operations on large matrixes.
* More general block processing for operation on large vectors,
exploiting shape information.
We would like to add dedicated transpose, columnwise load and store
intrinsics, even though they are not strictly necessary. For example, we
could instead emit a large shufflevector instruction instead of the
transpose. But we expect that to
(1) become unwieldy for larger matrixes (even for 16x16 matrixes,
the resulting shufflevector masks would be huge),
(2) risk instcombine making small changes, causing us to fail to
detect the transpose, preventing better lowerings
For the load/store, we are additionally planning on exploiting the
intrinsics for better alias analysis.
Reviewers: anemet, Gerolf, reames, hfinkel, andrew.w.kaylor, efriedma, rengolin
Reviewed By: anemet
Differential Revision: https://reviews.llvm.org/D70456
2019-12-12 16:27:28 +01:00
|
|
|
FUNCTION_PASS("lower-matrix-intrinsics", LowerMatrixIntrinsicsPass())
|
2020-11-18 08:09:20 +01:00
|
|
|
FUNCTION_PASS("lower-matrix-intrinsics-minimal", LowerMatrixIntrinsicsPass(true))
|
2019-01-31 10:10:17 +01:00
|
|
|
FUNCTION_PASS("lower-widenable-condition", LowerWidenableConditionPass())
|
2016-05-19 00:55:34 +02:00
|
|
|
FUNCTION_PASS("guard-widening", GuardWideningPass())
|
2018-12-07 09:23:37 +01:00
|
|
|
FUNCTION_PASS("load-store-vectorizer", LoadStoreVectorizerPass())
|
2016-07-09 05:03:01 +02:00
|
|
|
FUNCTION_PASS("loop-simplify", LoopSimplifyPass())
|
2017-01-20 09:42:19 +01:00
|
|
|
FUNCTION_PASS("loop-sink", LoopSinkPass())
|
2016-08-12 19:28:27 +02:00
|
|
|
FUNCTION_PASS("lowerinvoke", LowerInvokePass())
|
2020-09-16 00:02:23 +02:00
|
|
|
FUNCTION_PASS("lowerswitch", LowerSwitchPass())
|
2016-06-14 05:22:22 +02:00
|
|
|
FUNCTION_PASS("mem2reg", PromotePass())
|
2016-06-14 04:44:55 +02:00
|
|
|
FUNCTION_PASS("memcpyopt", MemCpyOptPass())
|
2019-05-23 14:35:26 +02:00
|
|
|
FUNCTION_PASS("mergeicmps", MergeICmpsPass())
|
2020-10-20 19:32:28 +02:00
|
|
|
FUNCTION_PASS("mergereturn", UnifyFunctionExitNodesPass())
|
2016-07-22 00:28:52 +02:00
|
|
|
FUNCTION_PASS("nary-reassociate", NaryReassociatePass())
|
2016-12-22 17:35:02 +01:00
|
|
|
FUNCTION_PASS("newgvn", NewGVNPass())
|
2016-06-14 02:51:09 +02:00
|
|
|
FUNCTION_PASS("jump-threading", JumpThreadingPass())
|
2016-05-26 01:38:53 +02:00
|
|
|
FUNCTION_PASS("partially-inline-libcalls", PartiallyInlineLibCallsPass())
|
2016-06-09 21:44:46 +02:00
|
|
|
FUNCTION_PASS("lcssa", LCSSAPass())
|
2016-08-13 06:11:27 +02:00
|
|
|
FUNCTION_PASS("loop-data-prefetch", LoopDataPrefetchPass())
|
2017-01-27 02:32:26 +01:00
|
|
|
FUNCTION_PASS("loop-load-elim", LoopLoadEliminationPass())
|
2020-07-07 19:42:33 +02:00
|
|
|
FUNCTION_PASS("loop-fusion", LoopFusePass())
|
2016-07-18 18:29:27 +02:00
|
|
|
FUNCTION_PASS("loop-distribute", LoopDistributePass())
|
2020-08-01 02:30:30 +02:00
|
|
|
FUNCTION_PASS("loop-versioning", LoopVersioningPass())
|
2020-12-23 06:41:25 +01:00
|
|
|
FUNCTION_PASS("objc-arc", ObjCARCOptPass())
|
|
|
|
FUNCTION_PASS("objc-arc-contract", ObjCARCContractPass())
|
2020-10-26 19:59:23 +01:00
|
|
|
FUNCTION_PASS("objc-arc-expand", ObjCARCExpandPass())
|
2017-04-04 18:42:20 +02:00
|
|
|
FUNCTION_PASS("pgo-memop-opt", PGOMemOPSizeOpt())
|
2014-04-21 10:08:50 +02:00
|
|
|
FUNCTION_PASS("print", PrintFunctionPass(dbgs()))
|
2016-12-19 09:22:17 +01:00
|
|
|
FUNCTION_PASS("print<assumptions>", AssumptionPrinterPass(dbgs()))
|
2016-05-05 23:13:27 +02:00
|
|
|
FUNCTION_PASS("print<block-freq>", BlockFrequencyPrinterPass(dbgs()))
|
2016-05-05 04:59:57 +02:00
|
|
|
FUNCTION_PASS("print<branch-prob>", BranchProbabilityPrinterPass(dbgs()))
|
2019-01-08 15:06:58 +01:00
|
|
|
FUNCTION_PASS("print<da>", DependenceAnalysisPrinterPass(dbgs()))
|
2021-02-16 05:56:45 +01:00
|
|
|
FUNCTION_PASS("print<divergence>", DivergenceAnalysisPrinterPass(dbgs()))
|
2015-01-14 11:19:28 +01:00
|
|
|
FUNCTION_PASS("print<domtree>", DominatorTreePrinterPass(dbgs()))
|
2016-02-25 18:54:07 +01:00
|
|
|
FUNCTION_PASS("print<postdomtree>", PostDominatorTreePrinterPass(dbgs()))
|
2020-09-16 06:10:22 +02:00
|
|
|
FUNCTION_PASS("print<delinearization>", DelinearizationPrinterPass(dbgs()))
|
2016-04-19 01:55:01 +02:00
|
|
|
FUNCTION_PASS("print<demanded-bits>", DemandedBitsPrinterPass(dbgs()))
|
2016-02-25 18:54:15 +01:00
|
|
|
FUNCTION_PASS("print<domfrontier>", DominanceFrontierPrinterPass(dbgs()))
|
2020-07-23 20:56:56 +02:00
|
|
|
FUNCTION_PASS("print<func-properties>", FunctionPropertiesPrinterPass(dbgs()))
|
2020-06-12 00:24:10 +02:00
|
|
|
FUNCTION_PASS("print<inline-cost>", InlineCostAnnotationPrinterPass(dbgs()))
|
2020-12-29 22:32:13 +01:00
|
|
|
FUNCTION_PASS("print<inliner-size-estimator>",
|
2020-07-16 01:02:15 +02:00
|
|
|
InlineSizeEstimatorAnalysisPrinterPass(dbgs()))
|
2015-01-20 11:58:50 +01:00
|
|
|
FUNCTION_PASS("print<loops>", LoopPrinterPass(dbgs()))
|
2016-06-01 23:30:40 +02:00
|
|
|
FUNCTION_PASS("print<memoryssa>", MemorySSAPrinterPass(dbgs()))
|
2018-06-28 16:13:06 +02:00
|
|
|
FUNCTION_PASS("print<phi-values>", PhiValuesPrinterPass(dbgs()))
|
2016-02-25 18:54:25 +01:00
|
|
|
FUNCTION_PASS("print<regions>", RegionInfoPrinterPass(dbgs()))
|
[PM] Port ScalarEvolution to the new pass manager.
This change makes ScalarEvolution a stand-alone object and just produces
one from a pass as needed. Making this work well requires making the
object movable, using references instead of overwritten pointers in
a number of places, and other refactorings.
I've also wired it up to the new pass manager and added a RUN line to
a test to exercise it under the new pass manager. This includes basic
printing support much like with other analyses.
But there is a big and somewhat scary change here. Prior to this patch
ScalarEvolution was never *actually* invalidated!!! Re-running the pass
just re-wired up the various other analyses and didn't remove any of the
existing entries in the SCEV caches or clear out anything at all. This
might seem OK as everything in SCEV that can uses ValueHandles to track
updates to the values that serve as SCEV keys. However, this still means
that as we ran SCEV over each function in the module, we kept
accumulating more and more SCEVs into the cache. At the end, we would
have a SCEV cache with every value that we ever needed a SCEV for in the
entire module!!! Yowzers. The releaseMemory routine would dump all of
this, but that isn't realy called during normal runs of the pipeline as
far as I can see.
To make matters worse, there *is* actually a key that we don't update
with value handles -- there is a map keyed off of Loop*s. Because
LoopInfo *does* release its memory from run to run, it is entirely
possible to run SCEV over one function, then over another function, and
then lookup a Loop* from the second function but find an entry inserted
for the first function! Ouch.
To make matters still worse, there are plenty of updates that *don't*
trip a value handle. It seems incredibly unlikely that today GVN or
another pass that invalidates SCEV can update values in *just* such
a way that a subsequent run of SCEV will incorrectly find lookups in
a cache, but it is theoretically possible and would be a nightmare to
debug.
With this refactoring, I've fixed all this by actually destroying and
recreating the ScalarEvolution object from run to run. Technically, this
could increase the amount of malloc traffic we see, but then again it is
also technically correct. ;] I don't actually think we're suffering from
tons of malloc traffic from SCEV because if we were, the fact that we
never clear the memory would seem more likely to have come up as an
actual problem before now. So, I've made the simple fix here. If in fact
there are serious issues with too much allocation and deallocation,
I can work on a clever fix that preserves the allocations (while
clearing the data) between each run, but I'd prefer to do that kind of
optimization with a test case / benchmark that shows why we need such
cleverness (and that can test that we actually make it faster). It's
possible that this will make some things faster by making the SCEV
caches have higher locality (due to being significantly smaller) so
until there is a clear benchmark, I think the simple change is best.
Differential Revision: http://reviews.llvm.org/D12063
llvm-svn: 245193
2015-08-17 04:08:17 +02:00
|
|
|
FUNCTION_PASS("print<scalar-evolution>", ScalarEvolutionPrinterPass(dbgs()))
|
2018-11-26 22:57:47 +01:00
|
|
|
FUNCTION_PASS("print<stack-safety-local>", StackSafetyPrinterPass(dbgs()))
|
2020-10-27 03:57:39 +01:00
|
|
|
// TODO: rename to print<foo> after NPM switch
|
2020-09-15 04:01:38 +02:00
|
|
|
FUNCTION_PASS("print-alias-sets", AliasSetsPrinterPass(dbgs()))
|
2020-07-08 18:27:57 +02:00
|
|
|
FUNCTION_PASS("print-predicateinfo", PredicateInfoPrinterPass(dbgs()))
|
2020-10-27 03:57:39 +01:00
|
|
|
FUNCTION_PASS("print-mustexecute", MustExecutePrinterPass(dbgs()))
|
2020-11-18 08:42:18 +01:00
|
|
|
FUNCTION_PASS("print-memderefs", MemDerefPrinterPass(dbgs()))
|
2016-04-27 01:39:29 +02:00
|
|
|
FUNCTION_PASS("reassociate", ReassociatePass())
|
2020-11-14 08:40:33 +01:00
|
|
|
FUNCTION_PASS("redundant-dbg-inst-elim", RedundantDbgInstEliminationPass())
|
2020-11-05 20:03:08 +01:00
|
|
|
FUNCTION_PASS("reg2mem", RegToMemPass())
|
2020-12-07 04:51:23 +01:00
|
|
|
FUNCTION_PASS("scalarize-masked-mem-intrin", ScalarizeMaskedMemIntrinPass())
|
2018-11-21 15:00:17 +01:00
|
|
|
FUNCTION_PASS("scalarizer", ScalarizerPass())
|
2020-11-09 20:40:49 +01:00
|
|
|
FUNCTION_PASS("separate-const-offset-from-gep", SeparateConstOffsetFromGEPPass())
|
2016-05-18 17:18:25 +02:00
|
|
|
FUNCTION_PASS("sccp", SCCPPass())
|
2016-04-22 21:54:10 +02:00
|
|
|
FUNCTION_PASS("sink", SinkingPass())
|
2016-06-15 10:43:40 +02:00
|
|
|
FUNCTION_PASS("slp-vectorizer", SLPVectorizerPass())
|
2020-10-27 02:21:07 +01:00
|
|
|
FUNCTION_PASS("slsr", StraightLineStrengthReducePass())
|
2016-08-01 23:48:33 +02:00
|
|
|
FUNCTION_PASS("speculative-execution", SpeculativeExecutionPass())
|
2015-09-12 11:09:14 +02:00
|
|
|
FUNCTION_PASS("sroa", SROA())
|
2020-10-03 01:31:57 +02:00
|
|
|
FUNCTION_PASS("strip-gc-relocates", StripGCRelocates())
|
2020-10-08 07:07:30 +02:00
|
|
|
FUNCTION_PASS("structurizecfg", StructurizeCFGPass())
|
2016-07-07 01:48:41 +02:00
|
|
|
FUNCTION_PASS("tailcallelim", TailCallElimPass())
|
2020-10-20 19:41:38 +02:00
|
|
|
FUNCTION_PASS("unify-loop-exits", UnifyLoopExitsPass())
|
[VectorCombine] new IR transform pass for partial vector ops
We have several bug reports that could be characterized as "reducing scalarization",
and this topic was also raised on llvm-dev recently:
http://lists.llvm.org/pipermail/llvm-dev/2020-January/138157.html
...so I'm proposing that we deal with these patterns in a new, lightweight IR vector
pass that runs before/after other vectorization passes.
There are 4 alternate options that I can think of to deal with this kind of problem
(and we've seen various attempts at all of these), but they all have flaws:
InstCombine - can't happen without TTI, but we don't want target-specific
folds there.
SDAG - too late to assist other vectorization passes; TLI is not equipped
for this kind of cost query; limited to a single basic block.
CGP - too late to assist other vectorization passes; would need to re-implement
basic cleanups like CSE/instcombine.
SLP - doesn't fit with existing transforms; limited to a single basic block.
This initial patch/transform is based on existing code in AggressiveInstCombine:
we walk backwards through the function looking for a pattern match. But we diverge
from that cost-independent IR canonicalization pass by using TTI to decide if the
vector alternative is profitable.
We probably have at least 10 similar bug reports/patterns (binops, constants,
inserts, cheap shuffles, etc) that would fit in this pass as follow-up enhancements.
It's possible that we could iterate on a worklist to fix-point like InstCombine does,
but it's safer to start with a most basic case and evolve from there, so I didn't
try to do anything fancy with this initial implementation.
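To make the TTI-driven profitability idea concrete, here is a rough sketch of the kind
of comparison involved, assuming the long-standing TTI cost hooks and the
InstructionCost-returning APIs of roughly this era; this is not VectorCombine's actual
code and exact signatures vary between releases. The scalar form pays for the extracts,
the scalar op, and an insert, while the vector form is just the vector op.
// Rough sketch only; TTI hook signatures assumed, not quoted.
#include "llvm/Analysis/TargetTransformInfo.h"
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/Instruction.h"

using namespace llvm;

static bool vectorFormLooksCheaper(const TargetTransformInfo &TTI,
                                   unsigned Opcode, Type *ScalarTy,
                                   VectorType *VecTy) {
  // Cost of: extract two lanes, do the scalar op, insert the result back.
  InstructionCost ScalarCost =
      TTI.getVectorInstrCost(Instruction::ExtractElement, VecTy, 0) +
      TTI.getVectorInstrCost(Instruction::ExtractElement, VecTy, 1) +
      TTI.getArithmeticInstrCost(Opcode, ScalarTy) +
      TTI.getVectorInstrCost(Instruction::InsertElement, VecTy, 0);
  // Cost of: do the op directly on the vector.
  InstructionCost VectorCost = TTI.getArithmeticInstrCost(Opcode, VecTy);
  return VectorCost < ScalarCost;
}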
Differential Revision: https://reviews.llvm.org/D73480
2020-02-09 16:04:41 +01:00
|
|
|
FUNCTION_PASS("vector-combine", VectorCombinePass())
|
2015-01-05 01:08:53 +01:00
|
|
|
FUNCTION_PASS("verify", VerifierPass())
|
2015-01-14 11:19:28 +01:00
|
|
|
FUNCTION_PASS("verify<domtree>", DominatorTreeVerifierPass())
|
2016-07-20 01:54:23 +02:00
|
|
|
FUNCTION_PASS("verify<loops>", LoopVerifierPass())
|
2016-06-01 23:30:40 +02:00
|
|
|
FUNCTION_PASS("verify<memoryssa>", MemorySSAVerifierPass())
|
2016-02-25 18:54:25 +01:00
|
|
|
FUNCTION_PASS("verify<regions>", RegionInfoVerifierPass())
|
2019-03-31 12:15:39 +02:00
|
|
|
FUNCTION_PASS("verify<safepoint-ir>", SafepointIRVerifierPass())
|
2019-11-19 08:16:39 +01:00
|
|
|
FUNCTION_PASS("verify<scalar-evolution>", ScalarEvolutionVerifierPass())
|
2018-06-29 19:48:58 +02:00
|
|
|
FUNCTION_PASS("view-cfg", CFGViewerPass())
|
|
|
|
FUNCTION_PASS("view-cfg-only", CFGOnlyViewerPass())
|
[Unroll/UnrollAndJam/Vectorizer/Distribute] Add followup loop attributes.
When multiple loop transformations are defined in a loop's metadata, their order of execution is determined by the order of their respective passes in the pass pipeline. For instance,
#pragma clang loop unroll_and_jam(enable)
#pragma clang loop distribute(enable)
is the same as
#pragma clang loop distribute(enable)
#pragma clang loop unroll_and_jam(enable)
and will try to loop-distribute before Unroll-And-Jam because the LoopDistribute pass is scheduled before the UnrollAndJam pass. UnrollAndJamPass only supports one inner loop, i.e. it will necessarily fail after loop distribution. It is not possible to specify another execution order. Also, the order of passes in the pipeline is subject to change between versions of LLVM, optimization options, and which pass manager is used.
This patch adds 'followup' attributes to various loop transformation passes. These attributes define which attributes the resulting loop of a transformation should have. For instance,
!0 = !{!0, !1, !2}
!1 = !{!"llvm.loop.unroll_and_jam.enable"}
!2 = !{!"llvm.loop.unroll_and_jam.followup_inner", !3}
!3 = !{!"llvm.loop.distribute.enable"}
defines a loop ID (!0) to be unrolled-and-jammed (!1), with attribute !3 then added to the jammed inner loop; !3 in turn carries the directive to distribute that inner loop.
Currently, in both pass managers, passes execute in a fixed order and UnrollAndJamPass will not execute again after LoopDistribute. We hope to fix this in the future by allowing pass managers to run passes until a fixpoint is reached, by using Polly to perform these transformations, or by adding a loop transformation pass which takes the order issue into account.
For mandatory/forced transformations (e.g. those declared by #pragma omp simd), the user must be notified when a transformation could not be performed. The responsible pass cannot emit such a warning itself, because the transformation might be 'hidden' in a followup attribute when it is executed, or the pass might not be present in the pipeline at all. For this reason, this patch introduces a WarnMissedTransformations pass to warn about orphaned transformations.
Since this changes the user-visible diagnostic message when a transformation is applied, two test cases in the clang repository need to be updated.
To ensure that no other transformation is executed before the intended one, the attribute `llvm.loop.disable_nonforced` can be added which should disable transformation heuristics before the intended transformation is applied. E.g. it would be surprising if a loop is distributed before a #pragma unroll_and_jam is applied.
With more supported code transformations (loop fusion, interchange, stripmining, offloading, etc.), transformations can be used as building blocks for more complex transformations (e.g. stripmining+stripmining+interchange -> tiling).
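As a hedged illustration of how a transformation might consume these attributes, the
sketch below uses a helper along the lines of makeFollowupLoopID from LoopUtils; treat
the helper's exact name and signature, and the attachFollowupToInner wrapper, as
assumptions rather than quoted code. The followup string itself comes from the metadata
example above.
// Sketch only; helper name/signature assumed.
#include "llvm/Analysis/LoopInfo.h"
#include "llvm/IR/Metadata.h"
#include "llvm/Transforms/Utils/LoopUtils.h"

using namespace llvm;

// After unroll-and-jam produces a new inner loop, copy the attributes listed
// under the followup_inner node onto it so a later pass (here: distribution)
// sees them.
static void attachFollowupToInner(Loop &OrigOuter, Loop &NewInner) {
  if (Optional<MDNode *> NewID = makeFollowupLoopID(
          OrigOuter.getLoopID(),
          {"llvm.loop.unroll_and_jam.followup_inner"}))
    NewInner.setLoopID(*NewID);
}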
Reviewed By: hfinkel, dmgreen
Differential Revision: https://reviews.llvm.org/D49281
Differential Revision: https://reviews.llvm.org/D55288
llvm-svn: 348944
2018-12-12 18:32:52 +01:00
|
|
|
FUNCTION_PASS("transform-warning", WarnMissedTransformationsPass())
|
2019-02-13 23:22:48 +01:00
|
|
|
FUNCTION_PASS("asan", AddressSanitizerPass(false, false, false))
|
2019-05-09 08:09:35 +02:00
|
|
|
FUNCTION_PASS("kasan", AddressSanitizerPass(true, false, false))
|
2019-02-04 22:02:49 +01:00
|
|
|
FUNCTION_PASS("msan", MemorySanitizerPass({}))
|
2019-05-09 08:09:35 +02:00
|
|
|
FUNCTION_PASS("kmsan", MemorySanitizerPass({0, false, /*Kernel=*/true}))
|
2019-01-16 10:28:01 +01:00
|
|
|
FUNCTION_PASS("tsan", ThreadSanitizerPass())
|
2020-09-14 18:12:13 +02:00
|
|
|
FUNCTION_PASS("memprof", MemProfilerPass())
|
2014-04-21 10:08:50 +02:00
|
|
|
#undef FUNCTION_PASS
|
2016-02-25 08:23:08 +01:00
|
|
|
|
2019-01-10 11:01:53 +01:00
|
|
|
#ifndef FUNCTION_PASS_WITH_PARAMS
|
2021-06-28 00:22:11 +02:00
|
|
|
#define FUNCTION_PASS_WITH_PARAMS(NAME, CLASS, CREATE_PASS, PARSER, PARAMS)
|
2019-01-10 11:01:53 +01:00
|
|
|
#endif
|
2020-06-26 18:28:32 +02:00
|
|
|
FUNCTION_PASS_WITH_PARAMS("loop-unroll",
|
2021-06-28 00:22:11 +02:00
|
|
|
"LoopUnrollPass",
|
2019-02-04 22:02:49 +01:00
|
|
|
[](LoopUnrollOptions Opts) {
|
|
|
|
return LoopUnrollPass(Opts);
|
|
|
|
},
|
2021-06-21 11:22:14 +02:00
|
|
|
parseLoopUnrollOptions,
|
|
|
|
"O0;O1;O2;O3;full-unroll-max=N;"
|
|
|
|
"no-partial;partial;"
|
|
|
|
"no-peeling;peeling;"
|
|
|
|
"no-profile-peeling;profile-peeling;"
|
|
|
|
"no-runtime;runtime;"
|
|
|
|
"no-upperbound;upperbound")
|
2019-02-04 22:02:49 +01:00
|
|
|
FUNCTION_PASS_WITH_PARAMS("msan",
|
2021-06-28 00:22:11 +02:00
|
|
|
"MemorySanitizerPass",
|
2019-02-04 22:02:49 +01:00
|
|
|
[](MemorySanitizerOptions Opts) {
|
|
|
|
return MemorySanitizerPass(Opts);
|
|
|
|
},
|
2021-06-21 11:22:14 +02:00
|
|
|
parseMSanPassOptions,
|
|
|
|
"recover;kernel;track-origins=N")
|
2021-07-08 14:24:03 +02:00
|
|
|
FUNCTION_PASS_WITH_PARAMS("simplifycfg",
|
2021-06-28 00:22:11 +02:00
|
|
|
"SimplifyCFGPass",
|
2019-04-15 10:57:53 +02:00
|
|
|
[](SimplifyCFGOptions Opts) {
|
|
|
|
return SimplifyCFGPass(Opts);
|
|
|
|
},
|
2021-06-21 11:22:14 +02:00
|
|
|
parseSimplifyCFGOptions,
|
|
|
|
"no-forward-switch-cond;forward-switch-cond;"
|
|
|
|
"no-switch-to-lookup;switch-to-lookup;"
|
|
|
|
"no-keep-loops;keep-loops;"
|
|
|
|
"no-hoist-common-insts;hoist-common-insts;"
|
|
|
|
"no-sink-common-insts;sink-common-insts;"
|
|
|
|
"bonus-inst-threshold=N"
|
|
|
|
)
|
2019-04-18 10:46:11 +02:00
|
|
|
FUNCTION_PASS_WITH_PARAMS("loop-vectorize",
|
2021-06-28 00:22:11 +02:00
|
|
|
"LoopVectorizePass",
|
2019-04-18 10:46:11 +02:00
|
|
|
[](LoopVectorizeOptions Opts) {
|
|
|
|
return LoopVectorizePass(Opts);
|
|
|
|
},
|
2021-06-21 11:22:14 +02:00
|
|
|
parseLoopVectorizeOptions,
|
|
|
|
"no-interleave-forced-only;interleave-forced-only;"
|
|
|
|
"no-vectorize-forced-only;vectorize-forced-only")
|
[MergedLoadStoreMotion] Sink stores to BB with more than 2 predecessors
If we have:
bb5:
br i1 %arg3, label %bb6, label %bb7
bb6:
%tmp = getelementptr inbounds i32, i32* %arg1, i64 2
store i32 3, i32* %tmp, align 4
br label %bb9
bb7:
%tmp8 = getelementptr inbounds i32, i32* %arg1, i64 2
store i32 3, i32* %tmp8, align 4
br label %bb9
bb9: ; preds = %bb4, %bb6, %bb7
...
We can't sink stores directly into bb9.
This patch creates a new BB that is a successor of %bb6 and %bb7
and sinks the stores into that block.
SplitFooterBB is the parameter to the pass that controls
that behavior.
Change-Id: I7fdf50a772b84633e4b1b860e905bf7e3e29940f
Differential: https://reviews.llvm.org/D66234
llvm-svn: 371089
2019-09-05 19:00:32 +02:00
|
|
|
FUNCTION_PASS_WITH_PARAMS("mldst-motion",
|
2021-06-28 00:22:11 +02:00
|
|
|
"MergedLoadStoreMotionPass",
|
2019-09-05 19:00:32 +02:00
|
|
|
[](MergedLoadStoreMotionOptions Opts) {
|
|
|
|
return MergedLoadStoreMotionPass(Opts);
|
|
|
|
},
|
2021-06-21 11:22:14 +02:00
|
|
|
parseMergedLoadStoreMotionOptions,
|
|
|
|
"no-split-footer-bb;split-footer-bb")
|
2020-01-16 18:31:24 +01:00
|
|
|
FUNCTION_PASS_WITH_PARAMS("gvn",
|
2021-06-28 00:22:11 +02:00
|
|
|
"GVN",
|
2020-01-16 18:31:24 +01:00
|
|
|
[](GVNOptions Opts) {
|
|
|
|
return GVN(Opts);
|
|
|
|
},
|
2021-06-21 11:22:14 +02:00
|
|
|
parseGVNOptions,
|
|
|
|
"no-pre;pre;"
|
|
|
|
"no-load-pre;load-pre;"
|
|
|
|
"no-split-backedge-load-pre;split-backedge-load-pre;"
|
|
|
|
"no-memdep;memdep")
|
2020-06-18 11:24:00 +02:00
|
|
|
FUNCTION_PASS_WITH_PARAMS("print<stack-lifetime>",
|
2021-06-28 00:22:11 +02:00
|
|
|
"StackLifetimePrinterPass",
|
2020-06-18 11:24:00 +02:00
|
|
|
[](StackLifetime::LivenessType Type) {
|
|
|
|
return StackLifetimePrinterPass(dbgs(), Type);
|
|
|
|
},
|
2021-06-21 11:22:14 +02:00
|
|
|
parseStackLifetimeOptions,
|
|
|
|
"may;must")
|
2019-01-10 11:01:53 +01:00
|
|
|
#undef FUNCTION_PASS_WITH_PARAMS
|
|
|
|
|
2016-02-25 08:23:08 +01:00
|
|
|
#ifndef LOOP_ANALYSIS
|
|
|
|
#define LOOP_ANALYSIS(NAME, CREATE_PASS)
|
|
|
|
#endif
|
|
|
|
LOOP_ANALYSIS("no-op-loop", NoOpLoopAnalysis())
|
2016-07-08 23:21:44 +02:00
|
|
|
LOOP_ANALYSIS("access-info", LoopAccessAnalysis())
|
Data Dependence Graph Basics
Summary:
This is the first patch in a series of patches that will implement a data dependence graph in LLVM. Many of the ideas used in this implementation are based on the following paper:
D. J. Kuck, R. H. Kuhn, D. A. Padua, B. Leasure, and M. Wolfe (1981). DEPENDENCE GRAPHS AND COMPILER OPTIMIZATIONS.
This patch contains support for a basic DDG containing only atomic nodes (one node for each instruction). The edges are twofold: def-use edges and memory-dependence edges.
The implementation takes a list of basic-blocks and only considers dependencies among instructions in those basic blocks. Any dependencies coming into or going out of instructions that do not belong to those basic blocks are ignored.
The algorithm for building the graph involves the following steps in order:
1. For each instruction in the range of basic blocks to consider, create an atomic node in the resulting graph.
2. For each node in the graph establish def-use edges to/from other nodes in the graph.
3. For each pair of nodes containing memory instruction(s) create memory edges between them. This part of the algorithm goes through the instructions in lexicographical order and creates edges in reverse order if the sink of the dependence occurs before the source of it.
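A hedged sketch of what consuming this analysis from a loop pass could look like follows;
the DDG class and iteration details are recalled from the public DDG header and should be
double-checked against llvm/Analysis/DDG.h, and WalkDDGPass itself is hypothetical.
// Sketch only; DDG iteration details assumed from the public header.
#include "llvm/Analysis/DDG.h"
#include "llvm/IR/PassManager.h"
#include "llvm/Transforms/Scalar/LoopPassManager.h"

using namespace llvm;

struct WalkDDGPass : PassInfoMixin<WalkDDGPass> {
  PreservedAnalyses run(Loop &L, LoopAnalysisManager &AM,
                        LoopStandardAnalysisResults &AR, LPMUpdater &) {
    // "ddg" above maps to DDGAnalysis; its result owns the graph.
    auto &Graph = AM.getResult<DDGAnalysis>(L, AR);
    for (DDGNode *N : *Graph)          // atomic nodes, one per instruction
      for (DDGEdge *E : N->getEdges()) // def-use or memory-dependence edges
        (void)E;                       // inspect the dependence here
    return PreservedAnalyses::all();
  }
};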
Authored By: bmahjour
Reviewer: Meinersbur, fhahn, myhsu, xtian, dmgreen, kbarton, jdoerfert
Reviewed By: Meinersbur, fhahn, myhsu
Subscribers: ychen, arphaman, simoll, a.elovikov, mgorny, hiraditya, jfb, wuzish, llvm-commits, jsji, Whitney, etiotto
Tag: #llvm
Differential Revision: https://reviews.llvm.org/D65350
llvm-svn: 372238
2019-09-18 19:43:45 +02:00
|
|
|
LOOP_ANALYSIS("ddg", DDGAnalysis())
|
2020-07-15 18:34:44 +02:00
|
|
|
LOOP_ANALYSIS("iv-users", IVUsersAnalysis())
|
2018-09-20 19:08:45 +02:00
|
|
|
LOOP_ANALYSIS("pass-instrumentation", PassInstrumentationAnalysis(PIC))
|
2016-02-25 08:23:08 +01:00
|
|
|
#undef LOOP_ANALYSIS
|
|
|
|
|
|
|
|
#ifndef LOOP_PASS
|
|
|
|
#define LOOP_PASS(NAME, CREATE_PASS)
|
|
|
|
#endif
|
Add CanonicalizeFreezeInLoops pass
Summary:
If an induction variable is frozen and used, SCEV yields imprecise results
because it doesn't say anything about frozen variables.
For this reason, a performance degradation appeared after
https://reviews.llvm.org/D76483 was merged: SCEV yielded imprecise results,
preventing LSR from optimizing a loop.
The suggested solution here is to add a pass which canonicalizes frozen
variables inside a loop. To be specific, it pushes freezes out of the loop by
freezing the initial and step values instead and dropping nsw/nuw flags from
instructions used by freeze.
This solution was also mentioned at https://reviews.llvm.org/D70623.
Reviewers: spatel, efriedma, lebedev.ri, fhahn, jdoerfert
Reviewed By: fhahn
Subscribers: nikic, mgorny, hiraditya, javed.absar, llvm-commits, sanwou01, nlopes
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D77523
2020-05-07 22:28:42 +02:00
|
|
|
LOOP_PASS("canon-freeze", CanonicalizeFreezeInLoopsPass())
|
2020-12-16 18:34:38 +01:00
|
|
|
LOOP_PASS("dot-ddg", DDGDotPrinterPass())
|
2016-02-25 08:23:08 +01:00
|
|
|
LOOP_PASS("invalidate<all>", InvalidateAllAnalysesPass())
|
2016-07-13 00:42:24 +02:00
|
|
|
LOOP_PASS("licm", LICMPass())
|
2021-07-19 17:31:18 +02:00
|
|
|
LOOP_PASS("lnicm", LNICMPass())
|
2021-05-28 08:58:10 +02:00
|
|
|
LOOP_PASS("loop-flatten", LoopFlattenPass())
|
2016-07-12 20:45:51 +02:00
|
|
|
LOOP_PASS("loop-idiom", LoopIdiomRecognizePass())
|
2018-05-25 03:32:36 +02:00
|
|
|
LOOP_PASS("loop-instsimplify", LoopInstSimplifyPass())
|
2020-09-18 01:19:04 +02:00
|
|
|
LOOP_PASS("loop-interchange", LoopInterchangePass())
|
2020-08-05 21:22:07 +02:00
|
|
|
LOOP_PASS("loop-rotate", LoopRotatePass())
|
2016-02-25 08:23:08 +01:00
|
|
|
LOOP_PASS("no-op-loop", NoOpLoopPass())
|
|
|
|
LOOP_PASS("print", PrintLoopPass(dbgs()))
|
2016-07-14 20:28:29 +02:00
|
|
|
LOOP_PASS("loop-deletion", LoopDeletionPass())
|
2020-09-18 23:43:36 +02:00
|
|
|
LOOP_PASS("loop-simplifycfg", LoopSimplifyCFGPass())
|
2020-07-02 20:15:29 +02:00
|
|
|
LOOP_PASS("loop-reduce", LoopStrengthReducePass())
|
2016-06-05 20:01:19 +02:00
|
|
|
LOOP_PASS("indvars", IndVarSimplifyPass())
|
2021-06-08 13:29:48 +02:00
|
|
|
LOOP_PASS("loop-unroll-and-jam", LoopUnrollAndJamPass())
|
2020-06-26 18:28:32 +02:00
|
|
|
LOOP_PASS("loop-unroll-full", LoopFullUnrollPass())
|
2016-07-02 23:18:40 +02:00
|
|
|
LOOP_PASS("print-access-info", LoopAccessInfoPrinterPass(dbgs()))
|
2019-09-18 19:43:45 +02:00
|
|
|
LOOP_PASS("print<ddg>", DDGAnalysisPrinterPass(dbgs()))
|
2020-07-15 18:34:44 +02:00
|
|
|
LOOP_PASS("print<iv-users>", IVUsersPrinterPass(dbgs()))
|
[LoopNest]: Analysis to discover properties of a loop nest.
Summary: This patch adds an analysis pass to collect loop nests and
summarize properties of the nest (e.g. the nest depth, whether the nest
is perfect, which loop is the innermost, etc.).
The motivation for this patch came out of the latest meeting of the
LLVM loop group (https://ibm.box.com/v/llvm-loop-nest-analysis), where
we discussed the unimodular loop transformation framework (“A Loop Transformation
Theory and an Algorithm to Maximize Parallelism”, Michael E. Wolf and
Monica S. Lam, IEEE TPDS, October 1991). The unimodular framework
provides a convenient way to unify legality checking and code generation
for several loop nest transformations (e.g. loop reversal, loop
interchange, loop skewing) and their compositions. Given that the
unimodular framework is applicable to perfect loop nests this is one
property of interest we expose in this analysis. Several other utility
functions are also provided. In the future other properties of interest
can be added in a centralized place.
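A hedged sketch of querying the nest from a loop pass follows; the LoopNest member names
used here (getLoopNest, getNestDepth, getMaxPerfectDepth) are recalled from memory and
should be treated as assumptions to verify against LoopNestAnalysis.h, and NestQueryPass
is hypothetical.
// Sketch only; LoopNest API names assumed, not quoted.
#include <memory>
#include "llvm/Analysis/LoopNestAnalysis.h"
#include "llvm/IR/PassManager.h"
#include "llvm/Transforms/Scalar/LoopPassManager.h"

using namespace llvm;

struct NestQueryPass : PassInfoMixin<NestQueryPass> {
  PreservedAnalyses run(Loop &L, LoopAnalysisManager &AM,
                        LoopStandardAnalysisResults &AR, LPMUpdater &) {
    if (!L.getParentLoop()) { // only look at outermost loops
      std::unique_ptr<LoopNest> LN = LoopNest::getLoopNest(L, AR.SE);
      // A nest that is perfect all the way down is the shape the unimodular
      // framework (interchange, reversal, skewing) can reason about.
      if (LN && LN->getMaxPerfectDepth() == LN->getNestDepth())
        ; // candidate for unimodular transformations
    }
    return PreservedAnalyses::all();
  }
};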
Authored By: etiotto
Reviewer: Meinersbur, bmahjour, kbarton, Whitney, dmgreen, fhahn,
reames, hfinkel, jdoerfert, ppc-slack
Reviewed By: Meinersbur
Subscribers: bryanpkc, ppc-slack, mgorny, hiraditya, llvm-commits
Tag: LLVM
Differential Revision: https://reviews.llvm.org/D68789
2020-03-03 18:38:19 +01:00
|
|
|
LOOP_PASS("print<loopnest>", LoopNestPrinterPass(dbgs()))
|
Title: Loop Cache Analysis
Summary: Implement a new analysis to estimate the number of cache lines
required by a loop nest.
The analysis is largely based on the following paper:
Compiler Optimizations for Improving Data Locality
By: Steve Carr, Katherine S. McKinley, Chau-Wen Tseng
http://www.cs.utexas.edu/users/mckinley/papers/asplos-1994.pdf
The analysis considers temporal reuse (accesses to the same memory
location) and spatial reuse (accesses to memory locations within a cache
line). For simplicity the analysis considers memory accesses in the
innermost loop in a loop nest, and thus determines the number of cache
lines used when the loop L in loop nest LN is placed in the innermost
position.
The result of the analysis can be used to drive several transformations.
As an example, loop interchange could use it to determine which loops in a
perfect loop nest should be interchanged to maximize cache reuse.
Similarly, loop distribution could be enhanced to take into
consideration cache reuse between arrays when distributing a loop to
eliminate vectorization inhibiting dependencies.
The general approach taken to estimate the number of cache lines used by
the memory references in the inner loop of a loop nest is:
Partition memory references that exhibit temporal or spatial reuse into
reference groups.
For each loop L in a loop nest LN:
a. Compute the cost of each reference group.
b. Compute the 'cache cost' of the loop nest by summing up the reference group costs.
For further details of the algorithm please refer to the paper.
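A hedged sketch of querying the cache cost from a loop pass follows, roughly mirroring
what the print<loop-cache-cost> printer registered below does; the CacheCost and
DependenceInfo constructor details are recalled from memory and should be treated as
assumptions, and CacheCostQueryPass is hypothetical.
// Sketch only; CacheCost/DependenceInfo usage assumed from the printer pass.
#include <memory>
#include "llvm/Analysis/DependenceAnalysis.h"
#include "llvm/Analysis/LoopCacheAnalysis.h"
#include "llvm/IR/PassManager.h"
#include "llvm/Support/Debug.h"
#include "llvm/Transforms/Scalar/LoopPassManager.h"

using namespace llvm;

struct CacheCostQueryPass : PassInfoMixin<CacheCostQueryPass> {
  PreservedAnalyses run(Loop &L, LoopAnalysisManager &AM,
                        LoopStandardAnalysisResults &AR, LPMUpdater &) {
    Function *F = L.getHeader()->getParent();
    DependenceInfo DI(F, &AR.AA, &AR.SE, &AR.LI);
    // Estimate the cache lines used with each loop of the nest rooted at L in
    // the innermost position; lower cost suggests a better permutation.
    if (std::unique_ptr<CacheCost> CC = CacheCost::getCacheCost(L, AR, DI))
      dbgs() << *CC; // streams per-loop costs, like print<loop-cache-cost>
    return PreservedAnalyses::all();
  }
};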
Authored By: etiotto
Reviewers: hfinkel, Meinersbur, jdoerfert, kbarton, bmahjour, anemet,
fhahn
Reviewed By: Meinersbur
Subscribers: reames, nemanjai, MaskRay, wuzish, Hahnfeld, xusx595,
venkataramanan.kumar.llvm, greened, dmgreen, steleman, fhahn, xblvaOO,
Whitney, mgorny, hiraditya, mgrang, jsji, llvm-commits
Tag: LLVM
Differential Revision: https://reviews.llvm.org/D63459
llvm-svn: 368439
2019-08-09 15:56:29 +02:00
|
|
|
LOOP_PASS("print<loop-cache-cost>", LoopCachePrinterPass(dbgs()))
|
2017-01-25 17:00:44 +01:00
|
|
|
LOOP_PASS("loop-predication", LoopPredicationPass())
|
2019-04-18 21:17:14 +02:00
|
|
|
LOOP_PASS("guard-widening", GuardWideningPass())
|
2021-05-06 16:53:00 +02:00
|
|
|
LOOP_PASS("loop-bound-split", LoopBoundSplitPass())
|
2020-09-19 02:25:40 +02:00
|
|
|
LOOP_PASS("loop-reroll", LoopRerollPass())
|
2020-10-24 15:50:33 +02:00
|
|
|
LOOP_PASS("loop-versioning-licm", LoopVersioningLICMPass())
|
2016-02-25 08:23:08 +01:00
|
|
|
#undef LOOP_PASS
|
2019-04-22 12:35:07 +02:00
|
|
|
|
|
|
|
#ifndef LOOP_PASS_WITH_PARAMS
|
2021-06-28 00:22:11 +02:00
|
|
|
#define LOOP_PASS_WITH_PARAMS(NAME, CLASS, CREATE_PASS, PARSER, PARAMS)
|
2019-04-22 12:35:07 +02:00
|
|
|
#endif
|
2021-07-08 14:12:19 +02:00
|
|
|
LOOP_PASS_WITH_PARAMS("simple-loop-unswitch",
|
2021-06-28 00:22:11 +02:00
|
|
|
"SimpleLoopUnswitchPass",
|
2021-07-13 21:50:34 +02:00
|
|
|
[](std::pair<bool, bool> Params) {
|
|
|
|
return SimpleLoopUnswitchPass(Params.first, Params.second);
|
2021-06-21 11:22:14 +02:00
|
|
|
},
|
|
|
|
parseLoopUnswitchOptions,
|
2021-07-13 21:50:34 +02:00
|
|
|
"nontrivial;no-nontrivial;trivial;no-trivial")
|
2019-04-22 12:35:07 +02:00
|
|
|
#undef LOOP_PASS_WITH_PARAMS
|