//===- CFLAndersAliasAnalysis.cpp - Inclusion-based Alias Analysis --------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
//
// This file implements a CFL-based, summary-based alias analysis algorithm. It
// differs from CFLSteensAliasAnalysis in its inclusion-based nature while
// CFLSteensAliasAnalysis is unification-based. This pass has worse performance
// than CFLSteensAliasAnalysis (the worst-case complexity of
// CFLAndersAliasAnalysis is cubic, while the worst-case complexity of
// CFLSteensAliasAnalysis is almost linear), but it is able to yield more
// precise analysis results. The precision of this analysis is roughly the same
// as that of a one-level context-sensitive Andersen's algorithm.
//
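As a toy illustration of the precision gap (a sketch for this comment only, not LLVM code; the maps and variable names are invented), consider the assignments `p = &a; q = &b; p = q;`:

```cpp
#include <map>
#include <set>
#include <string>

using PtsMap = std::map<std::string, std::set<std::string>>;

// Constraints modeled: p = &a; q = &b; p = q;
// Inclusion-based (Andersen-style): the copy "p = q" only adds the directed
// subset constraint pts(q) <= pts(p), so q keeps its smaller set.
PtsMap andersenStyle() {
  PtsMap Pts;
  Pts["p"].insert("a");
  Pts["q"].insert("b");
  Pts["p"].insert(Pts["q"].begin(), Pts["q"].end()); // propagate q -> p
  return Pts; // p -> {a, b}, q -> {b}
}

// Unification-based (Steensgaard-style): the copy "p = q" merges the two
// variables into one equivalence class, so both ends get the union.
PtsMap steensgaardStyle() {
  PtsMap Pts;
  Pts["p"].insert("a");
  Pts["q"].insert("b");
  std::set<std::string> Merged;
  Merged.insert(Pts["p"].begin(), Pts["p"].end());
  Merged.insert(Pts["q"].begin(), Pts["q"].end());
  Pts["p"] = Merged;
  Pts["q"] = Merged;
  return Pts; // p -> {a, b}, q -> {a, b}
}
```

The inclusion-based answer can still conclude that `q` does not point to `a`; the unification-based one cannot.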
// The algorithm used here is based on the recursive state machine matching
// scheme proposed in "Demand-driven alias analysis for C" by Xin Zheng and
// Radu Rugina. The general idea is to extend the traditional transitive
// closure algorithm to perform CFL matching along the way: instead of
// recording "whether X is reachable from Y", we keep track of "whether X is
// reachable from Y at state Z", where the "state" field indicates where we are
// in the CFL matching process. To understand the matching better, it is
// advisable to have the state machine shown in Figure 3 of the paper available
// when reading the code: all we do here is selectively expand the transitive
// closure by discarding edges that are not recognized by the state machine.
//
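The state-annotated closure described above can be sketched with a toy worklist. This is a hypothetical miniature, not the pass itself: integer nodes, a two-label edge alphabet, and an invented two-state machine stand in for the CFLGraph and the paper's S1..S4 machine.

```cpp
#include <map>
#include <queue>
#include <set>
#include <tuple>
#include <utility>

enum class Label { Assign, ReverseAssign };
enum class State { S1, S2 };

// Hypothetical transition function: in S1 we may follow ReverseAssign edges
// (staying in S1); any Assign edge moves us to S2, where only Assign edges
// remain legal.
bool transition(State From, Label L, State &To) {
  if (From == State::S1 && L == Label::ReverseAssign) {
    To = State::S1;
    return true;
  }
  if (L == Label::Assign) {
    To = State::S2;
    return true;
  }
  return false;
}

// Selective transitive closure: track (Src, Dst, State) triples instead of
// plain (Src, Dst) pairs, and only expand through edges the machine accepts.
std::set<std::tuple<int, int, State>>
reach(int Src, const std::multimap<int, std::pair<int, Label>> &Graph) {
  std::set<std::tuple<int, int, State>> Seen;
  std::queue<std::pair<int, State>> WorkList;
  WorkList.push({Src, State::S1});
  Seen.insert({Src, Src, State::S1});
  while (!WorkList.empty()) {
    auto [Node, St] = WorkList.front();
    WorkList.pop();
    auto Range = Graph.equal_range(Node);
    for (auto It = Range.first; It != Range.second; ++It) {
      State Next;
      if (!transition(St, It->second.second, Next))
        continue; // Edge not recognized at this state: discard it.
      if (Seen.insert({Src, It->second.first, Next}).second)
        WorkList.push({It->second.first, Next});
    }
  }
  return Seen;
}
```

With edges 0 -ReverseAssign-> 1, 1 -Assign-> 2, 2 -ReverseAssign-> 3, node 2 is reached at S2, but node 3 is pruned because S2 does not accept ReverseAssign.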
// There are two differences between our current implementation and the one
// described in the paper:
// - Our algorithm eagerly computes all alias pairs after the CFLGraph is built,
// while in the paper the authors did the computation in a demand-driven
// fashion. We did not implement the demand-driven algorithm due to the
// additional coding complexity and higher memory profile, but if we find it
// necessary we may switch to it eventually.
// - In the paper the authors use a state machine that does not distinguish
// value reads from value writes. For example, if Y is reachable from X at state
// S3, it may be the case that X is written into Y, or it may be the case that
// there's a third value Z that writes into both X and Y. To make that
// distinction (which is crucial in building the function summary as well as
// retrieving mod-ref info), we choose to duplicate some of the states in the
// paper's proposed state machine. The duplication does not change the set the
// machine accepts. Given a pair of reachable values, it only provides more
// detailed information on which value is being written into and which is being
// read from.
//
//===----------------------------------------------------------------------===//

// N.B. AliasAnalysis as a whole is phrased as a FunctionPass at the moment, and
// CFLAndersAA is interprocedural. This is *technically* A Bad Thing, because
// FunctionPasses are only allowed to inspect the Function that they're being
// run on. Realistically, this likely isn't a problem until we allow
// FunctionPasses to run concurrently.

#include "llvm/Analysis/CFLAndersAliasAnalysis.h"
#include "AliasAnalysisSummary.h"
#include "CFLGraph.h"
#include "llvm/ADT/DenseMap.h"
#include "llvm/ADT/DenseMapInfo.h"
#include "llvm/ADT/DenseSet.h"
#include "llvm/ADT/None.h"
#include "llvm/ADT/Optional.h"
#include "llvm/ADT/STLExtras.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/ADT/iterator_range.h"
#include "llvm/Analysis/AliasAnalysis.h"
#include "llvm/Analysis/MemoryLocation.h"
#include "llvm/IR/Argument.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/PassManager.h"
#include "llvm/IR/Type.h"
#include "llvm/InitializePasses.h"
#include "llvm/Pass.h"
#include "llvm/Support/Casting.h"
#include "llvm/Support/Compiler.h"
#include "llvm/Support/Debug.h"
#include "llvm/Support/raw_ostream.h"
#include <algorithm>
#include <bitset>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <utility>
#include <vector>

using namespace llvm;
using namespace llvm::cflaa;
#define DEBUG_TYPE "cfl-anders-aa"
CFLAndersAAResult::CFLAndersAAResult(
    std::function<const TargetLibraryInfo &(Function &F)> GetTLI)
    : GetTLI(std::move(GetTLI)) {}

CFLAndersAAResult::CFLAndersAAResult(CFLAndersAAResult &&RHS)
    : AAResultBase(std::move(RHS)), GetTLI(std::move(RHS.GetTLI)) {}

CFLAndersAAResult::~CFLAndersAAResult() = default;

namespace {
enum class MatchState : uint8_t {
  // The following state represents S1 in the paper.
  FlowFromReadOnly = 0,

  // The following two states together represent S2 in the paper.
  // The 'NoReadWrite' suffix indicates that there exists an alias path that
  // does not contain assignment and reverse assignment edges.
  // The 'ReadOnly' suffix indicates that there exists an alias path that
  // contains reverse assignment edges only.
  FlowFromMemAliasNoReadWrite,
  FlowFromMemAliasReadOnly,

  // The following two states together represent S3 in the paper.
  // The 'WriteOnly' suffix indicates that there exists an alias path that
  // contains assignment edges only.
  // The 'ReadWrite' suffix indicates that there exists an alias path that
  // contains both assignment and reverse assignment edges. Note that if X and Y
  // are reachable at 'ReadWrite' state, it does NOT mean X is both read from
  // and written to Y. Instead, it means that a third value Z is written to both
  // X and Y.
  FlowToWriteOnly,
  FlowToReadWrite,

  // The following two states together represent S4 in the paper.
  FlowToMemAliasWriteOnly,
  FlowToMemAliasReadWrite,
};
using StateSet = std::bitset<7>;

const unsigned ReadOnlyStateMask =
    (1U << static_cast<uint8_t>(MatchState::FlowFromReadOnly)) |
    (1U << static_cast<uint8_t>(MatchState::FlowFromMemAliasReadOnly));
const unsigned WriteOnlyStateMask =
    (1U << static_cast<uint8_t>(MatchState::FlowToWriteOnly)) |
    (1U << static_cast<uint8_t>(MatchState::FlowToMemAliasWriteOnly));

// A pair that consists of a value and an offset
struct OffsetValue {
  const Value *Val;
  int64_t Offset;
};

bool operator==(OffsetValue LHS, OffsetValue RHS) {
  return LHS.Val == RHS.Val && LHS.Offset == RHS.Offset;
}
bool operator<(OffsetValue LHS, OffsetValue RHS) {
  return std::less<const Value *>()(LHS.Val, RHS.Val) ||
         (LHS.Val == RHS.Val && LHS.Offset < RHS.Offset);
}

// A pair that consists of an InstantiatedValue and an offset
struct OffsetInstantiatedValue {
  InstantiatedValue IVal;
  int64_t Offset;
};

bool operator==(OffsetInstantiatedValue LHS, OffsetInstantiatedValue RHS) {
  return LHS.IVal == RHS.IVal && LHS.Offset == RHS.Offset;
}

// We use ReachabilitySet to keep track of value aliases (the nonterminal "V"
// in the paper) during the analysis.
class ReachabilitySet {
  using ValueStateMap = DenseMap<InstantiatedValue, StateSet>;
  using ValueReachMap = DenseMap<InstantiatedValue, ValueStateMap>;

  ValueReachMap ReachMap;

public:
  using const_valuestate_iterator = ValueStateMap::const_iterator;
  using const_value_iterator = ValueReachMap::const_iterator;

  // Insert edge 'From->To' at state 'State'
  bool insert(InstantiatedValue From, InstantiatedValue To, MatchState State) {
    assert(From != To);
    auto &States = ReachMap[To][From];
    auto Idx = static_cast<size_t>(State);
    if (!States.test(Idx)) {
      States.set(Idx);
      return true;
    }
    return false;
  }

  // Return the set of all ('From', 'State') pairs for a given node 'To'
  iterator_range<const_valuestate_iterator>
  reachableValueAliases(InstantiatedValue V) const {
    auto Itr = ReachMap.find(V);
    if (Itr == ReachMap.end())
      return make_range<const_valuestate_iterator>(const_valuestate_iterator(),
                                                   const_valuestate_iterator());
    return make_range<const_valuestate_iterator>(Itr->second.begin(),
                                                 Itr->second.end());
  }

  iterator_range<const_value_iterator> value_mappings() const {
    return make_range<const_value_iterator>(ReachMap.begin(), ReachMap.end());
  }
};

// We use AliasMemSet to keep track of all memory aliases (the nonterminal "M"
// in the paper) during the analysis.
class AliasMemSet {
  using MemSet = DenseSet<InstantiatedValue>;
  using MemMapType = DenseMap<InstantiatedValue, MemSet>;

  MemMapType MemMap;

public:
  using const_mem_iterator = MemSet::const_iterator;

  bool insert(InstantiatedValue LHS, InstantiatedValue RHS) {
    // Top-level values can never be memory aliases because one cannot take
    // their addresses
    assert(LHS.DerefLevel > 0 && RHS.DerefLevel > 0);
    return MemMap[LHS].insert(RHS).second;
  }

  const MemSet *getMemoryAliases(InstantiatedValue V) const {
    auto Itr = MemMap.find(V);
    if (Itr == MemMap.end())
      return nullptr;
    return &Itr->second;
  }
};

// We use AliasAttrMap to keep track of the AliasAttrs of each node.
class AliasAttrMap {
  using MapType = DenseMap<InstantiatedValue, AliasAttrs>;

  MapType AttrMap;

public:
  using const_iterator = MapType::const_iterator;

  bool add(InstantiatedValue V, AliasAttrs Attr) {
    auto &OldAttr = AttrMap[V];
    auto NewAttr = OldAttr | Attr;
    if (OldAttr == NewAttr)
      return false;
    OldAttr = NewAttr;
    return true;
  }

  AliasAttrs getAttrs(InstantiatedValue V) const {
    AliasAttrs Attr;
    auto Itr = AttrMap.find(V);
    if (Itr != AttrMap.end())
      Attr = Itr->second;
    return Attr;
  }

  iterator_range<const_iterator> mappings() const {
    return make_range<const_iterator>(AttrMap.begin(), AttrMap.end());
  }
};

struct WorkListItem {
  InstantiatedValue From;
  InstantiatedValue To;
  MatchState State;
};

struct ValueSummary {
  struct Record {
    InterfaceValue IValue;
    unsigned DerefLevel;
  };
  SmallVector<Record, 4> FromRecords, ToRecords;
};
} // end anonymous namespace
namespace llvm {

// Specialize DenseMapInfo for OffsetValue.
template <> struct DenseMapInfo<OffsetValue> {
  static OffsetValue getEmptyKey() {
    return OffsetValue{DenseMapInfo<const Value *>::getEmptyKey(),
                       DenseMapInfo<int64_t>::getEmptyKey()};
  }

  static OffsetValue getTombstoneKey() {
    return OffsetValue{DenseMapInfo<const Value *>::getTombstoneKey(),
                       DenseMapInfo<int64_t>::getEmptyKey()};
  }

  static unsigned getHashValue(const OffsetValue &OVal) {
    return DenseMapInfo<std::pair<const Value *, int64_t>>::getHashValue(
        std::make_pair(OVal.Val, OVal.Offset));
  }

  static bool isEqual(const OffsetValue &LHS, const OffsetValue &RHS) {
    return LHS == RHS;
  }
};

// Specialize DenseMapInfo for OffsetInstantiatedValue.
template <> struct DenseMapInfo<OffsetInstantiatedValue> {
  static OffsetInstantiatedValue getEmptyKey() {
    return OffsetInstantiatedValue{
        DenseMapInfo<InstantiatedValue>::getEmptyKey(),
        DenseMapInfo<int64_t>::getEmptyKey()};
  }

  static OffsetInstantiatedValue getTombstoneKey() {
    return OffsetInstantiatedValue{
        DenseMapInfo<InstantiatedValue>::getTombstoneKey(),
        DenseMapInfo<int64_t>::getEmptyKey()};
  }

  static unsigned getHashValue(const OffsetInstantiatedValue &OVal) {
    return DenseMapInfo<std::pair<InstantiatedValue, int64_t>>::getHashValue(
        std::make_pair(OVal.IVal, OVal.Offset));
  }

  static bool isEqual(const OffsetInstantiatedValue &LHS,
                      const OffsetInstantiatedValue &RHS) {
    return LHS == RHS;
  }
};

} // end namespace llvm

class CFLAndersAAResult::FunctionInfo {
  /// Map a value to other values that may alias it
  /// Since the alias relation is symmetric, to save some space we assume values
  /// are properly ordered: if a and b alias each other, and a < b, then b is in
  /// AliasMap[a] but not vice versa.
  DenseMap<const Value *, std::vector<OffsetValue>> AliasMap;

  /// Map a value to its corresponding AliasAttrs
  DenseMap<const Value *, AliasAttrs> AttrMap;

  /// Summary of externally visible effects.
  AliasSummary Summary;

  Optional<AliasAttrs> getAttrs(const Value *) const;

public:
  FunctionInfo(const Function &, const SmallVectorImpl<Value *> &,
               const ReachabilitySet &, const AliasAttrMap &);

  bool mayAlias(const Value *, LocationSize, const Value *, LocationSize) const;
  const AliasSummary &getAliasSummary() const { return Summary; }
};

static bool hasReadOnlyState(StateSet Set) {
  return (Set & StateSet(ReadOnlyStateMask)).any();
}

static bool hasWriteOnlyState(StateSet Set) {
  return (Set & StateSet(WriteOnlyStateMask)).any();
}

static Optional<InterfaceValue>
getInterfaceValue(InstantiatedValue IValue,
                  const SmallVectorImpl<Value *> &RetVals) {
  auto Val = IValue.Val;

  Optional<unsigned> Index;
  if (auto Arg = dyn_cast<Argument>(Val))
    Index = Arg->getArgNo() + 1;
  else if (is_contained(RetVals, Val))
    Index = 0;

  if (Index)
    return InterfaceValue{*Index, IValue.DerefLevel};
  return None;
}

static void populateAttrMap(DenseMap<const Value *, AliasAttrs> &AttrMap,
                            const AliasAttrMap &AMap) {
  for (const auto &Mapping : AMap.mappings()) {
    auto IVal = Mapping.first;

    // Insert IVal into the map
    auto &Attr = AttrMap[IVal.Val];
    // AttrMap only cares about top-level values
    if (IVal.DerefLevel == 0)
      Attr |= Mapping.second;
  }
}

static void
populateAliasMap(DenseMap<const Value *, std::vector<OffsetValue>> &AliasMap,
                 const ReachabilitySet &ReachSet) {
  for (const auto &OuterMapping : ReachSet.value_mappings()) {
    // AliasMap only cares about top-level values
    if (OuterMapping.first.DerefLevel > 0)
      continue;

    auto Val = OuterMapping.first.Val;
    auto &AliasList = AliasMap[Val];
    for (const auto &InnerMapping : OuterMapping.second) {
      // Again, AliasMap only cares about top-level values
      if (InnerMapping.first.DerefLevel == 0)
        AliasList.push_back(OffsetValue{InnerMapping.first.Val, UnknownOffset});
    }

    // Sort AliasList for faster lookup
    llvm::sort(AliasList);
  }
}

static void populateExternalRelations(
    SmallVectorImpl<ExternalRelation> &ExtRelations, const Function &Fn,
    const SmallVectorImpl<Value *> &RetVals, const ReachabilitySet &ReachSet) {
  // If a function only returns one of its arguments X, then X will be both an
  // argument and a return value at the same time. This is an edge case that
  // needs special handling here.
  for (const auto &Arg : Fn.args()) {
    if (is_contained(RetVals, &Arg)) {
      auto ArgVal = InterfaceValue{Arg.getArgNo() + 1, 0};
      auto RetVal = InterfaceValue{0, 0};
      ExtRelations.push_back(ExternalRelation{ArgVal, RetVal, 0});
    }
  }

  // Below is the core summary construction logic.
  // A naive solution of adding only the value aliases that are parameters or
  // return values in ReachSet to the summary won't work: It is possible that a
  // parameter P is written into an intermediate value I, and the function
  // subsequently returns *I. In that case, *I does not value-alias anything in
  // ReachSet, and the naive solution will miss a summary edge from (P, 1) to
  // (I, 1).
  // To account for the aforementioned case, we need to check each non-parameter
  // and non-return value for the possibility of acting as an intermediate.
  // 'ValueMap' here records, for each value, which InterfaceValues read from or
  // write into it. If both the read list and the write list of a given value
  // are non-empty, we know that the value is an intermediate and we need to
  // add summary edges from the writes to the reads.
  DenseMap<Value *, ValueSummary> ValueMap;
  for (const auto &OuterMapping : ReachSet.value_mappings()) {
    if (auto Dst = getInterfaceValue(OuterMapping.first, RetVals)) {
      for (const auto &InnerMapping : OuterMapping.second) {
        // If Src is a param/return value, we get a same-level assignment.
        if (auto Src = getInterfaceValue(InnerMapping.first, RetVals)) {
          // This may happen if both Dst and Src are return values
          if (*Dst == *Src)
            continue;

          if (hasReadOnlyState(InnerMapping.second))
            ExtRelations.push_back(ExternalRelation{*Dst, *Src, UnknownOffset});
          // No need to check for WriteOnly state, since ReachSet is symmetric
        } else {
          // If Src is not a param/return, add it to ValueMap
          auto SrcIVal = InnerMapping.first;
          if (hasReadOnlyState(InnerMapping.second))
            ValueMap[SrcIVal.Val].FromRecords.push_back(
                ValueSummary::Record{*Dst, SrcIVal.DerefLevel});
          if (hasWriteOnlyState(InnerMapping.second))
            ValueMap[SrcIVal.Val].ToRecords.push_back(
                ValueSummary::Record{*Dst, SrcIVal.DerefLevel});
        }
      }
    }
  }

  for (const auto &Mapping : ValueMap) {
    for (const auto &FromRecord : Mapping.second.FromRecords) {
      for (const auto &ToRecord : Mapping.second.ToRecords) {
        auto ToLevel = ToRecord.DerefLevel;
        auto FromLevel = FromRecord.DerefLevel;
        // Same-level assignments should have already been processed by now
        if (ToLevel == FromLevel)
          continue;

        auto SrcIndex = FromRecord.IValue.Index;
        auto SrcLevel = FromRecord.IValue.DerefLevel;
        auto DstIndex = ToRecord.IValue.Index;
        auto DstLevel = ToRecord.IValue.DerefLevel;
        if (ToLevel > FromLevel)
          SrcLevel += ToLevel - FromLevel;
        else
          DstLevel += FromLevel - ToLevel;

        ExtRelations.push_back(ExternalRelation{
            InterfaceValue{SrcIndex, SrcLevel},
            InterfaceValue{DstIndex, DstLevel}, UnknownOffset});
      }
    }
  }

  // Remove duplicates in ExtRelations
  llvm::sort(ExtRelations);
  ExtRelations.erase(std::unique(ExtRelations.begin(), ExtRelations.end()),
                     ExtRelations.end());
}

static void populateExternalAttributes(
    SmallVectorImpl<ExternalAttribute> &ExtAttributes, const Function &Fn,
    const SmallVectorImpl<Value *> &RetVals, const AliasAttrMap &AMap) {
  for (const auto &Mapping : AMap.mappings()) {
    if (auto IVal = getInterfaceValue(Mapping.first, RetVals)) {
      auto Attr = getExternallyVisibleAttrs(Mapping.second);
      if (Attr.any())
        ExtAttributes.push_back(ExternalAttribute{*IVal, Attr});
    }
  }
}

CFLAndersAAResult::FunctionInfo::FunctionInfo(
    const Function &Fn, const SmallVectorImpl<Value *> &RetVals,
    const ReachabilitySet &ReachSet, const AliasAttrMap &AMap) {
  populateAttrMap(AttrMap, AMap);
  populateExternalAttributes(Summary.RetParamAttributes, Fn, RetVals, AMap);
  populateAliasMap(AliasMap, ReachSet);
  populateExternalRelations(Summary.RetParamRelations, Fn, RetVals, ReachSet);
}

Optional<AliasAttrs>
CFLAndersAAResult::FunctionInfo::getAttrs(const Value *V) const {
  assert(V != nullptr);

  auto Itr = AttrMap.find(V);
  if (Itr != AttrMap.end())
    return Itr->second;
  return None;
}
|
|
|
|
|
2018-10-09 04:14:33 +02:00
|
|
|
bool CFLAndersAAResult::FunctionInfo::mayAlias(
|
|
|
|
const Value *LHS, LocationSize MaybeLHSSize, const Value *RHS,
|
|
|
|
LocationSize MaybeRHSSize) const {
|
2016-07-15 21:53:25 +02:00
|
|
|
assert(LHS && RHS);

  // Check if we've seen LHS and RHS before. Sometimes LHS or RHS can be created
  // after the analysis gets executed, and we want to be conservative in those
  // cases.
  auto MaybeAttrsA = getAttrs(LHS);
  auto MaybeAttrsB = getAttrs(RHS);
  if (!MaybeAttrsA || !MaybeAttrsB)
    return true;

  // Check AliasAttrs before AliasMap lookup since it's cheaper
  auto AttrsA = *MaybeAttrsA;
  auto AttrsB = *MaybeAttrsB;
  if (hasUnknownOrCallerAttr(AttrsA))
    return AttrsB.any();
  if (hasUnknownOrCallerAttr(AttrsB))
    return AttrsA.any();
  if (isGlobalOrArgAttr(AttrsA))
    return isGlobalOrArgAttr(AttrsB);
  if (isGlobalOrArgAttr(AttrsB))
    return isGlobalOrArgAttr(AttrsA);

  // At this point both LHS and RHS should point to locally allocated objects

  auto Itr = AliasMap.find(LHS);
  if (Itr != AliasMap.end()) {

    // Find out all (X, Offset) where X == RHS
    auto Comparator = [](OffsetValue LHS, OffsetValue RHS) {
      return std::less<const Value *>()(LHS.Val, RHS.Val);
    };
#ifdef EXPENSIVE_CHECKS
    assert(llvm::is_sorted(Itr->second, Comparator));
#endif
    auto RangePair = std::equal_range(Itr->second.begin(), Itr->second.end(),
                                      OffsetValue{RHS, 0}, Comparator);

    if (RangePair.first != RangePair.second) {
      // Be conservative about unknown sizes
      if (!MaybeLHSSize.hasValue() || !MaybeRHSSize.hasValue())
        return true;

      const uint64_t LHSSize = MaybeLHSSize.getValue();
      const uint64_t RHSSize = MaybeRHSSize.getValue();

      for (const auto &OVal : make_range(RangePair)) {
        // Be conservative about UnknownOffset
        if (OVal.Offset == UnknownOffset)
          return true;

        // We know that LHS aliases (RHS + OVal.Offset) if the control flow
        // reaches here. The may-alias query essentially becomes integer
        // range-overlap queries over two ranges [OVal.Offset, OVal.Offset +
        // LHSSize) and [0, RHSSize).
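        // Worked example (illustrative, not an upstream comment): with
        // OVal.Offset == 8, LHSSize == 4 and RHSSize == 16, LHS covers
        // [8, 12) and RHS covers [0, 16); the two half-open ranges overlap,
        // so we conservatively report a may-alias. With OVal.Offset == 16,
        // the LHS range [16, 20) starts exactly where RHS ends, so there is
        // no overlap.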

        // Try to be conservative on super large offsets
        if (LLVM_UNLIKELY(LHSSize > INT64_MAX || RHSSize > INT64_MAX))
          return true;

        auto LHSStart = OVal.Offset;
        // FIXME: Do we need to guard against integer overflow?
        auto LHSEnd = OVal.Offset + static_cast<int64_t>(LHSSize);
        auto RHSStart = 0;
        auto RHSEnd = static_cast<int64_t>(RHSSize);
        if (LHSEnd > RHSStart && LHSStart < RHSEnd)
          return true;
      }
    }
  }

  return false;
}

static void propagate(InstantiatedValue From, InstantiatedValue To,
                      MatchState State, ReachabilitySet &ReachSet,
                      std::vector<WorkListItem> &WorkList) {
  if (From == To)
    return;
  if (ReachSet.insert(From, To, State))
    WorkList.push_back(WorkListItem{From, To, State});
}

static void initializeWorkList(std::vector<WorkListItem> &WorkList,
                               ReachabilitySet &ReachSet,
                               const CFLGraph &Graph) {
  for (const auto &Mapping : Graph.value_mappings()) {
    auto Val = Mapping.first;
    auto &ValueInfo = Mapping.second;
    assert(ValueInfo.getNumLevels() > 0);

    // Insert all immediate assignment neighbors to the worklist
    for (unsigned I = 0, E = ValueInfo.getNumLevels(); I < E; ++I) {
      auto Src = InstantiatedValue{Val, I};
      // If there's an assignment edge from X to Y, it means Y is reachable from
      // X at S3 and X is reachable from Y at S1
      for (auto &Edge : ValueInfo.getNodeInfoAtLevel(I).Edges) {
        propagate(Edge.Other, Src, MatchState::FlowFromReadOnly, ReachSet,
                  WorkList);
        propagate(Src, Edge.Other, MatchState::FlowToWriteOnly, ReachSet,
                  WorkList);
      }
    }
  }
}

static Optional<InstantiatedValue> getNodeBelow(const CFLGraph &Graph,
                                                InstantiatedValue V) {
  auto NodeBelow = InstantiatedValue{V.Val, V.DerefLevel + 1};
  if (Graph.getNode(NodeBelow))
    return NodeBelow;
  return None;
}
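
// Illustrative note (not upstream documentation): for a value p at
// dereference level 0, the node below is *p at level 1, below that **p at
// level 2, and so on. getNodeBelow returns None as soon as the CFLGraph has
// not materialized the next dereference level for that value.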

static void processWorkListItem(const WorkListItem &Item, const CFLGraph &Graph,
                                ReachabilitySet &ReachSet, AliasMemSet &MemSet,
                                std::vector<WorkListItem> &WorkList) {
  auto FromNode = Item.From;
  auto ToNode = Item.To;

  auto NodeInfo = Graph.getNode(ToNode);
  assert(NodeInfo != nullptr);

  // TODO: propagate field offsets

  // FIXME: Here is a neat trick we can do: since both ReachSet and MemSet hold
  // relations that are symmetric, we could actually cut the storage by half by
  // sorting FromNode and ToNode before insertion happens.

  // The newly added value alias pair may potentially generate more memory
  // alias pairs. Check for them here.
  auto FromNodeBelow = getNodeBelow(Graph, FromNode);
  auto ToNodeBelow = getNodeBelow(Graph, ToNode);
  if (FromNodeBelow && ToNodeBelow &&
      MemSet.insert(*FromNodeBelow, *ToNodeBelow)) {
    propagate(*FromNodeBelow, *ToNodeBelow,
              MatchState::FlowFromMemAliasNoReadWrite, ReachSet, WorkList);
    for (const auto &Mapping : ReachSet.reachableValueAliases(*FromNodeBelow)) {
      auto Src = Mapping.first;
      auto MemAliasPropagate = [&](MatchState FromState, MatchState ToState) {
        if (Mapping.second.test(static_cast<size_t>(FromState)))
          propagate(Src, *ToNodeBelow, ToState, ReachSet, WorkList);
      };

      MemAliasPropagate(MatchState::FlowFromReadOnly,
                        MatchState::FlowFromMemAliasReadOnly);
      MemAliasPropagate(MatchState::FlowToWriteOnly,
                        MatchState::FlowToMemAliasWriteOnly);
      MemAliasPropagate(MatchState::FlowToReadWrite,
                        MatchState::FlowToMemAliasReadWrite);
    }
  }

  // This is the core of the state machine walking algorithm. We expand
  // ReachSet based on which state we are at (which in turn dictates what
  // edges we should examine).
  // From a high-level point of view, the state machine here guarantees two
  // properties:
  // - If *X and *Y are memory aliases, then X and Y are value aliases
  // - If Y is an alias of X, then reverse assignment edges (if there are any)
  //   should precede any assignment edges on the path from X to Y.
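  // For example (illustrative sketch, not an upstream comment), given the
  // assignments "a = c; b = c;" there is a path a <- c -> b in the assignment
  // graph: the reverse edge out of a is taken before the forward edge into b,
  // which is what allows the matching to conclude that a and b are value
  // aliases of each other through c.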

  auto NextAssignState = [&](MatchState State) {
    for (const auto &AssignEdge : NodeInfo->Edges)
      propagate(FromNode, AssignEdge.Other, State, ReachSet, WorkList);
  };
  auto NextRevAssignState = [&](MatchState State) {
    for (const auto &RevAssignEdge : NodeInfo->ReverseEdges)
      propagate(FromNode, RevAssignEdge.Other, State, ReachSet, WorkList);
  };
  auto NextMemState = [&](MatchState State) {
    if (auto AliasSet = MemSet.getMemoryAliases(ToNode)) {
      for (const auto &MemAlias : *AliasSet)
        propagate(FromNode, MemAlias, State, ReachSet, WorkList);
    }
  };

  switch (Item.State) {
  case MatchState::FlowFromReadOnly:
    NextRevAssignState(MatchState::FlowFromReadOnly);
    NextAssignState(MatchState::FlowToReadWrite);
    NextMemState(MatchState::FlowFromMemAliasReadOnly);
    break;

  case MatchState::FlowFromMemAliasNoReadWrite:
    NextRevAssignState(MatchState::FlowFromReadOnly);
    NextAssignState(MatchState::FlowToWriteOnly);
    break;

  case MatchState::FlowFromMemAliasReadOnly:
    NextRevAssignState(MatchState::FlowFromReadOnly);
    NextAssignState(MatchState::FlowToReadWrite);
    break;

  case MatchState::FlowToWriteOnly:
    NextAssignState(MatchState::FlowToWriteOnly);
    NextMemState(MatchState::FlowToMemAliasWriteOnly);
    break;

  case MatchState::FlowToReadWrite:
    NextAssignState(MatchState::FlowToReadWrite);
    NextMemState(MatchState::FlowToMemAliasReadWrite);
    break;

  case MatchState::FlowToMemAliasWriteOnly:
    NextAssignState(MatchState::FlowToWriteOnly);
    break;

  case MatchState::FlowToMemAliasReadWrite:
    NextAssignState(MatchState::FlowToReadWrite);
    break;
  }
}

static AliasAttrMap buildAttrMap(const CFLGraph &Graph,
                                 const ReachabilitySet &ReachSet) {
  AliasAttrMap AttrMap;
  std::vector<InstantiatedValue> WorkList, NextList;

  // Initialize each node with its original AliasAttrs in CFLGraph
  for (const auto &Mapping : Graph.value_mappings()) {
    auto Val = Mapping.first;
    auto &ValueInfo = Mapping.second;
    for (unsigned I = 0, E = ValueInfo.getNumLevels(); I < E; ++I) {
      auto Node = InstantiatedValue{Val, I};
      AttrMap.add(Node, ValueInfo.getNodeInfoAtLevel(I).Attr);
      WorkList.push_back(Node);
    }
  }

  while (!WorkList.empty()) {
    for (const auto &Dst : WorkList) {
      auto DstAttr = AttrMap.getAttrs(Dst);
      if (DstAttr.none())
        continue;

      // Propagate attr on the same level
      for (const auto &Mapping : ReachSet.reachableValueAliases(Dst)) {
        auto Src = Mapping.first;
        if (AttrMap.add(Src, DstAttr))
          NextList.push_back(Src);
      }

      // Propagate attr to the levels below
      auto DstBelow = getNodeBelow(Graph, Dst);
      while (DstBelow) {
        if (AttrMap.add(*DstBelow, DstAttr)) {
          NextList.push_back(*DstBelow);
          break;
        }
        DstBelow = getNodeBelow(Graph, *DstBelow);
      }
    }
    WorkList.swap(NextList);
    NextList.clear();
  }

  return AttrMap;
}

CFLAndersAAResult::FunctionInfo
CFLAndersAAResult::buildInfoFrom(const Function &Fn) {
  CFLGraphBuilder<CFLAndersAAResult> GraphBuilder(
      *this, GetTLI(const_cast<Function &>(Fn)),
      // Cast away the constness here due to GraphBuilder's API requirement
      const_cast<Function &>(Fn));
  auto &Graph = GraphBuilder.getCFLGraph();

  ReachabilitySet ReachSet;
  AliasMemSet MemSet;

  std::vector<WorkListItem> WorkList, NextList;
  initializeWorkList(WorkList, ReachSet, Graph);
  // TODO: make sure we don't stop before the fix point is reached
  while (!WorkList.empty()) {
    for (const auto &Item : WorkList)
      processWorkListItem(Item, Graph, ReachSet, MemSet, NextList);

    NextList.swap(WorkList);
    NextList.clear();
  }

  // Now that we have all the reachability info, propagate AliasAttrs according
  // to it
  auto IValueAttrMap = buildAttrMap(Graph, ReachSet);

  return FunctionInfo(Fn, GraphBuilder.getReturnValues(), ReachSet,
                      std::move(IValueAttrMap));
}

void CFLAndersAAResult::scan(const Function &Fn) {
  auto InsertPair = Cache.insert(std::make_pair(&Fn, Optional<FunctionInfo>()));
  (void)InsertPair;
  assert(InsertPair.second &&
         "Trying to scan a function that has already been cached");

  // Note that we can't do Cache[Fn] = buildInfoFrom(Fn) here: the function
  // call may get evaluated after operator[], potentially triggering a DenseMap
  // resize and invalidating the reference returned by operator[]
  auto FunInfo = buildInfoFrom(Fn);
  Cache[&Fn] = std::move(FunInfo);
  Handles.emplace_front(const_cast<Function *>(&Fn), this);
}

void CFLAndersAAResult::evict(const Function *Fn) { Cache.erase(Fn); }

const Optional<CFLAndersAAResult::FunctionInfo> &
CFLAndersAAResult::ensureCached(const Function &Fn) {
  auto Iter = Cache.find(&Fn);
  if (Iter == Cache.end()) {
    scan(Fn);
    Iter = Cache.find(&Fn);
    assert(Iter != Cache.end());
    assert(Iter->second.hasValue());
  }
  return Iter->second;
}

const AliasSummary *CFLAndersAAResult::getAliasSummary(const Function &Fn) {
  auto &FunInfo = ensureCached(Fn);
  if (FunInfo.hasValue())
    return &FunInfo->getAliasSummary();
  else
    return nullptr;
}

AliasResult CFLAndersAAResult::query(const MemoryLocation &LocA,
                                     const MemoryLocation &LocB) {
  auto *ValA = LocA.Ptr;
  auto *ValB = LocB.Ptr;

  if (!ValA->getType()->isPointerTy() || !ValB->getType()->isPointerTy())
    return AliasResult::NoAlias;

  auto *Fn = parentFunctionOfValue(ValA);
  if (!Fn) {
    Fn = parentFunctionOfValue(ValB);
    if (!Fn) {
      // The only times this is known to happen are when globals + InlineAsm
      // are involved
      LLVM_DEBUG(
          dbgs()
          << "CFLAndersAA: could not extract parent function information.\n");
      return AliasResult::MayAlias;
    }
  } else {
    assert(!parentFunctionOfValue(ValB) || parentFunctionOfValue(ValB) == Fn);
  }

  assert(Fn != nullptr);
  auto &FunInfo = ensureCached(*Fn);

  // AliasMap lookup
  if (FunInfo->mayAlias(ValA, LocA.Size, ValB, LocB.Size))
    return AliasResult::MayAlias;
  return AliasResult::NoAlias;
}

AliasResult CFLAndersAAResult::alias(const MemoryLocation &LocA,
                                     const MemoryLocation &LocB,
                                     AAQueryInfo &AAQI) {
  if (LocA.Ptr == LocB.Ptr)
    return AliasResult::MustAlias;

  // Comparisons between global variables and other constants should be
  // handled by BasicAA.
  // CFLAndersAA may report NoAlias when comparing a GlobalValue and
  // ConstantExpr, but every query needs to have at least one Value tied to a
  // Function, and neither GlobalValues nor ConstantExprs are.
  if (isa<Constant>(LocA.Ptr) && isa<Constant>(LocB.Ptr))
    return AAResultBase::alias(LocA, LocB, AAQI);

  AliasResult QueryResult = query(LocA, LocB);
  if (QueryResult == AliasResult::MayAlias)
    return AAResultBase::alias(LocA, LocB, AAQI);

  return QueryResult;
}

AnalysisKey CFLAndersAA::Key;

CFLAndersAAResult CFLAndersAA::run(Function &F, FunctionAnalysisManager &AM) {
  auto GetTLI = [&AM](Function &F) -> TargetLibraryInfo & {
    return AM.getResult<TargetLibraryAnalysis>(F);
  };
  return CFLAndersAAResult(GetTLI);
}

char CFLAndersAAWrapperPass::ID = 0;
INITIALIZE_PASS(CFLAndersAAWrapperPass, "cfl-anders-aa",
                "Inclusion-Based CFL Alias Analysis", false, true)

ImmutablePass *llvm::createCFLAndersAAWrapperPass() {
  return new CFLAndersAAWrapperPass();
}

CFLAndersAAWrapperPass::CFLAndersAAWrapperPass() : ImmutablePass(ID) {
  initializeCFLAndersAAWrapperPassPass(*PassRegistry::getPassRegistry());
}

void CFLAndersAAWrapperPass::initializePass() {
  auto GetTLI = [this](Function &F) -> TargetLibraryInfo & {
    return this->getAnalysis<TargetLibraryInfoWrapperPass>().getTLI(F);
  };
  Result.reset(new CFLAndersAAResult(GetTLI));
}

void CFLAndersAAWrapperPass::getAnalysisUsage(AnalysisUsage &AU) const {
  AU.setPreservesAll();
  AU.addRequired<TargetLibraryInfoWrapperPass>();
}