//===- AliasSetTracker.cpp - Alias Sets Tracker implementation ------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
//
// This file implements the AliasSetTracker and AliasSet classes.
//
//===----------------------------------------------------------------------===//

#include "llvm/Analysis/AliasSetTracker.h"
#include "llvm/Analysis/AliasAnalysis.h"
#include "llvm/Analysis/GuardUtils.h"
#include "llvm/Analysis/LoopInfo.h"
#include "llvm/Analysis/MemoryLocation.h"
#include "llvm/Config/llvm-config.h"
#include "llvm/IR/Constants.h"
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/InstIterator.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/IntrinsicInst.h"
#include "llvm/IR/Module.h"
#include "llvm/IR/PassManager.h"
#include "llvm/IR/PatternMatch.h"
#include "llvm/IR/Value.h"
#include "llvm/InitializePasses.h"
#include "llvm/Pass.h"
#include "llvm/Support/AtomicOrdering.h"
#include "llvm/Support/CommandLine.h"
#include "llvm/Support/Compiler.h"
#include "llvm/Support/Debug.h"
#include "llvm/Support/ErrorHandling.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

static cl::opt<unsigned>
    SaturationThreshold("alias-set-saturation-threshold", cl::Hidden,
                        cl::init(250),
                        cl::desc("The maximum number of pointers may-alias "
                                 "sets may contain before degradation"));

/// mergeSetIn - Merge the specified alias set into this alias set.
///
void AliasSet::mergeSetIn(AliasSet &AS, AliasSetTracker &AST) {
  assert(!AS.Forward && "Alias set is already forwarding!");
  assert(!Forward && "This set is a forwarding set!!");

  bool WasMustAlias = (Alias == SetMustAlias);
  // Update the alias and access types of this set...
  Access |= AS.Access;
  Alias |= AS.Alias;

  if (Alias == SetMustAlias) {
    // Check that these two merged sets really are must aliases. Since both
    // used to be must-alias sets, we can just check any pointer from each set
    // for aliasing.
    AliasAnalysis &AA = AST.getAliasAnalysis();
    PointerRec *L = getSomePointer();
    PointerRec *R = AS.getSomePointer();

    // If the pointers are not a must-alias pair, this set becomes a may alias.
    if (AA.alias(MemoryLocation(L->getValue(), L->getSize(), L->getAAInfo()),
                 MemoryLocation(R->getValue(), R->getSize(), R->getAAInfo())) !=
        MustAlias)
      Alias = SetMayAlias;
  }

  if (Alias == SetMayAlias) {
    if (WasMustAlias)
      AST.TotalMayAliasSetSize += size();
    if (AS.Alias == SetMustAlias)
      AST.TotalMayAliasSetSize += AS.size();
  }

  bool ASHadUnknownInsts = !AS.UnknownInsts.empty();
  if (UnknownInsts.empty()) { // Merge call sites...
    if (ASHadUnknownInsts) {
      std::swap(UnknownInsts, AS.UnknownInsts);
      addRef();
    }
  } else if (ASHadUnknownInsts) {
    llvm::append_range(UnknownInsts, AS.UnknownInsts);
    AS.UnknownInsts.clear();
  }

  AS.Forward = this; // Forward across AS now...
  addRef();          // AS is now pointing to us...

  // Merge the list of constituent pointers...
  if (AS.PtrList) {
    SetSize += AS.size();
    AS.SetSize = 0;
    *PtrListEnd = AS.PtrList;
    AS.PtrList->setPrevInList(PtrListEnd);
    PtrListEnd = AS.PtrListEnd;

    AS.PtrList = nullptr;
    AS.PtrListEnd = &AS.PtrList;
    assert(*AS.PtrListEnd == nullptr && "End of list is not null?");
  }
  if (ASHadUnknownInsts)
    AS.dropRef(AST);
}

void AliasSetTracker::removeAliasSet(AliasSet *AS) {
  if (AliasSet *Fwd = AS->Forward) {
    Fwd->dropRef(*this);
    AS->Forward = nullptr;
  } else // Update TotalMayAliasSetSize only if not forwarding.
    if (AS->Alias == AliasSet::SetMayAlias)
      TotalMayAliasSetSize -= AS->size();

  AliasSets.erase(AS);
  // If we've removed the saturated alias set, set saturated marker back to
  // nullptr and ensure this tracker is empty.
  if (AS == AliasAnyAS) {
    AliasAnyAS = nullptr;
    assert(AliasSets.empty() && "Tracker not empty");
  }
}

void AliasSet::removeFromTracker(AliasSetTracker &AST) {
  assert(RefCount == 0 && "Cannot remove non-dead alias set from tracker!");
  AST.removeAliasSet(this);
}

void AliasSet::addPointer(AliasSetTracker &AST, PointerRec &Entry,
                          LocationSize Size, const AAMDNodes &AAInfo,
                          bool KnownMustAlias, bool SkipSizeUpdate) {
  assert(!Entry.hasAliasSet() && "Entry already in set!");

  // Check to see if we have to downgrade to _may_ alias.
  if (isMustAlias())
    if (PointerRec *P = getSomePointer()) {
      if (!KnownMustAlias) {
        AliasAnalysis &AA = AST.getAliasAnalysis();
        AliasResult Result = AA.alias(
            MemoryLocation(P->getValue(), P->getSize(), P->getAAInfo()),
            MemoryLocation(Entry.getValue(), Size, AAInfo));
        if (Result != MustAlias) {
          Alias = SetMayAlias;
          AST.TotalMayAliasSetSize += size();
        }
        assert(Result != NoAlias && "Cannot be part of must set!");
      } else if (!SkipSizeUpdate)
        P->updateSizeAndAAInfo(Size, AAInfo);
    }

  Entry.setAliasSet(this);
  Entry.updateSizeAndAAInfo(Size, AAInfo);

  // Add it to the end of the list...
  ++SetSize;
  assert(*PtrListEnd == nullptr && "End of list is not null?");
  *PtrListEnd = &Entry;
  PtrListEnd = Entry.setPrevInList(PtrListEnd);
  assert(*PtrListEnd == nullptr && "End of list is not null?");
  // Entry points to alias set.
  addRef();

  if (Alias == SetMayAlias)
    AST.TotalMayAliasSetSize++;
}

void AliasSet::addUnknownInst(Instruction *I, AliasAnalysis &AA) {
  if (UnknownInsts.empty())
    addRef();
  UnknownInsts.emplace_back(I);

  // Guards are marked as modifying memory for control flow modelling purposes,
  // but don't actually modify any specific memory location.
  using namespace PatternMatch;
  bool MayWriteMemory = I->mayWriteToMemory() && !isGuard(I) &&
    !(I->use_empty() && match(I, m_Intrinsic<Intrinsic::invariant_start>()));
  if (!MayWriteMemory) {
    Alias = SetMayAlias;
    Access |= RefAccess;
    return;
  }

  // FIXME: This should use mod/ref information to make this not suck so bad
  Alias = SetMayAlias;
  Access = ModRefAccess;
}

/// aliasesPointer - If the specified pointer "may" (or must) alias one of the
/// members in the set return the appropriate AliasResult. Otherwise return
/// NoAlias.
///
AliasResult AliasSet::aliasesPointer(const Value *Ptr, LocationSize Size,
                                     const AAMDNodes &AAInfo,
                                     AliasAnalysis &AA) const {
  if (AliasAny)
    return MayAlias;

  if (Alias == SetMustAlias) {
    assert(UnknownInsts.empty() && "Illegal must alias set!");

    // If this is a set of MustAliases, only check to see if the pointer aliases
    // SOME value in the set.
    PointerRec *SomePtr = getSomePointer();
    assert(SomePtr && "Empty must-alias set??");
    return AA.alias(MemoryLocation(SomePtr->getValue(), SomePtr->getSize(),
                                   SomePtr->getAAInfo()),
                    MemoryLocation(Ptr, Size, AAInfo));
  }

  // If this is a may-alias set, we have to check all of the pointers in the set
  // to be sure it doesn't alias the set...
  for (iterator I = begin(), E = end(); I != E; ++I)
    if (AliasResult AR = AA.alias(
            MemoryLocation(Ptr, Size, AAInfo),
            MemoryLocation(I.getPointer(), I.getSize(), I.getAAInfo())))
      return AR;

  // Check the unknown instructions...
  if (!UnknownInsts.empty()) {
    for (unsigned i = 0, e = UnknownInsts.size(); i != e; ++i)
      if (auto *Inst = getUnknownInst(i))
        if (isModOrRefSet(
                AA.getModRefInfo(Inst, MemoryLocation(Ptr, Size, AAInfo))))
          return MayAlias;
  }

  return NoAlias;
}

bool AliasSet::aliasesUnknownInst(const Instruction *Inst,
                                  AliasAnalysis &AA) const {

  if (AliasAny)
    return true;

  assert(Inst->mayReadOrWriteMemory() &&
         "Instruction must either read or write memory.");

  for (unsigned i = 0, e = UnknownInsts.size(); i != e; ++i) {
    if (auto *UnknownInst = getUnknownInst(i)) {
      const auto *C1 = dyn_cast<CallBase>(UnknownInst);
      const auto *C2 = dyn_cast<CallBase>(Inst);
      if (!C1 || !C2 || isModOrRefSet(AA.getModRefInfo(C1, C2)) ||
          isModOrRefSet(AA.getModRefInfo(C2, C1)))
        return true;
    }
  }

  for (iterator I = begin(), E = end(); I != E; ++I)
    if (isModOrRefSet(AA.getModRefInfo(
            Inst, MemoryLocation(I.getPointer(), I.getSize(), I.getAAInfo()))))
      return true;

  return false;
}

Instruction* AliasSet::getUniqueInstruction() {
  if (AliasAny)
    // May have collapsed alias set
    return nullptr;
  if (begin() != end()) {
    if (!UnknownInsts.empty())
      // Another instruction found
      return nullptr;
    if (std::next(begin()) != end())
      // Another instruction found
      return nullptr;
    Value *Addr = begin()->getValue();
    assert(!Addr->user_empty() &&
           "where's the instruction which added this pointer?");
    if (std::next(Addr->user_begin()) != Addr->user_end())
      // Another instruction found -- this is really restrictive
      // TODO: generalize!
      return nullptr;
    return cast<Instruction>(*(Addr->user_begin()));
  }
  if (1 != UnknownInsts.size())
    return nullptr;
  return cast<Instruction>(UnknownInsts[0]);
}

void AliasSetTracker::clear() {
  // Delete all the PointerRec entries.
  for (auto &I : PointerMap)
    I.second->eraseFromList();

  PointerMap.clear();

  // The alias sets should all be clear now.
  AliasSets.clear();
}

/// mergeAliasSetsForPointer - Given a pointer, merge all alias sets that may
/// alias the pointer. Return the unified set, or nullptr if no set that aliases
/// the pointer was found. MustAliasAll is updated to true/false if the pointer
/// is found to MustAlias all the sets it merged.
AliasSet *AliasSetTracker::mergeAliasSetsForPointer(const Value *Ptr,
                                                    LocationSize Size,
                                                    const AAMDNodes &AAInfo,
                                                    bool &MustAliasAll) {
  AliasSet *FoundSet = nullptr;
  MustAliasAll = true;
  for (AliasSet &AS : llvm::make_early_inc_range(*this)) {
    if (AS.Forward)
      continue;

    AliasResult AR = AS.aliasesPointer(Ptr, Size, AAInfo, AA);
    if (AR == NoAlias)
      continue;

    if (AR != MustAlias)
      MustAliasAll = false;

    if (!FoundSet) {
      // If this is the first alias set ptr can go into, remember it.
      FoundSet = &AS;
    } else {
      // Otherwise, we must merge the sets.
      FoundSet->mergeSetIn(AS, *this);
    }
  }

  return FoundSet;
}

AliasSet *AliasSetTracker::findAliasSetForUnknownInst(Instruction *Inst) {
  AliasSet *FoundSet = nullptr;
  for (AliasSet &AS : llvm::make_early_inc_range(*this)) {
    if (AS.Forward || !AS.aliasesUnknownInst(Inst, AA))
      continue;
    if (!FoundSet) {
      // If this is the first alias set ptr can go into, remember it.
      FoundSet = &AS;
    } else {
      // Otherwise, we must merge the sets.
      FoundSet->mergeSetIn(AS, *this);
    }
  }
  return FoundSet;
}

AliasSet &AliasSetTracker::getAliasSetFor(const MemoryLocation &MemLoc) {

  Value * const Pointer = const_cast<Value*>(MemLoc.Ptr);
  const LocationSize Size = MemLoc.Size;
  const AAMDNodes &AAInfo = MemLoc.AATags;

  AliasSet::PointerRec &Entry = getEntryFor(Pointer);

  if (AliasAnyAS) {
    // At this point, the AST is saturated, so we only have one active alias
    // set. That means we already know which alias set we want to return, and
    // just need to add the pointer to that set to keep the data structure
    // consistent.
    // This, of course, means that we will never need a merge here.
    if (Entry.hasAliasSet()) {
      Entry.updateSizeAndAAInfo(Size, AAInfo);
      assert(Entry.getAliasSet(*this) == AliasAnyAS &&
             "Entry in saturated AST must belong to only alias set");
    } else {
      AliasAnyAS->addPointer(*this, Entry, Size, AAInfo);
    }
    return *AliasAnyAS;
  }

  bool MustAliasAll = false;
  // Check to see if the pointer is already known.
  if (Entry.hasAliasSet()) {
    // If the size changed, we may need to merge several alias sets.
    // Note that we can *not* return the result of mergeAliasSetsForPointer
    // due to a quirk of alias analysis behavior. Since alias(undef, undef)
    // is NoAlias, mergeAliasSetsForPointer(undef, ...) will not find the
    // right set for undef, even if it exists.
    if (Entry.updateSizeAndAAInfo(Size, AAInfo))
      mergeAliasSetsForPointer(Pointer, Size, AAInfo, MustAliasAll);
    // Return the set!
    return *Entry.getAliasSet(*this)->getForwardedTarget(*this);
  }

  if (AliasSet *AS =
          mergeAliasSetsForPointer(Pointer, Size, AAInfo, MustAliasAll)) {
    // Add it to the alias set it aliases.
    AS->addPointer(*this, Entry, Size, AAInfo, MustAliasAll);
    return *AS;
  }

  // Otherwise create a new alias set to hold the loaded pointer.
  AliasSets.push_back(new AliasSet());
  AliasSets.back().addPointer(*this, Entry, Size, AAInfo, true);
  return AliasSets.back();
}

void AliasSetTracker::add(Value *Ptr, LocationSize Size,
                          const AAMDNodes &AAInfo) {
  addPointer(MemoryLocation(Ptr, Size, AAInfo), AliasSet::NoAccess);
}

void AliasSetTracker::add(LoadInst *LI) {
  if (isStrongerThanMonotonic(LI->getOrdering()))
    return addUnknown(LI);
  addPointer(MemoryLocation::get(LI), AliasSet::RefAccess);
}

void AliasSetTracker::add(StoreInst *SI) {
  if (isStrongerThanMonotonic(SI->getOrdering()))
    return addUnknown(SI);
  addPointer(MemoryLocation::get(SI), AliasSet::ModAccess);
}

void AliasSetTracker::add(VAArgInst *VAAI) {
  addPointer(MemoryLocation::get(VAAI), AliasSet::ModRefAccess);
}

void AliasSetTracker::add(AnyMemSetInst *MSI) {
  addPointer(MemoryLocation::getForDest(MSI), AliasSet::ModAccess);
}

void AliasSetTracker::add(AnyMemTransferInst *MTI) {
  addPointer(MemoryLocation::getForDest(MTI), AliasSet::ModAccess);
  addPointer(MemoryLocation::getForSource(MTI), AliasSet::RefAccess);
}
2016-10-19 20:50:32 +02:00
|
|
|
void AliasSetTracker::addUnknown(Instruction *Inst) {
|
|
|
|
if (isa<DbgInfoIntrinsic>(Inst))
|
|
|
|
return; // Ignore DbgInfo Intrinsics.
|
2016-11-07 15:11:45 +01:00
|
|
|
|
|
|
|
if (auto *II = dyn_cast<IntrinsicInst>(Inst)) {
|
|
|
|
// These intrinsics will show up as affecting memory, but they are just
|
|
|
|
// markers.
|
|
|
|
switch (II->getIntrinsicID()) {
|
|
|
|
default:
|
|
|
|
break;
|
|
|
|
// FIXME: Add lifetime/invariant intrinsics (See: PR30807).
|
|
|
|
case Intrinsic::assume:
|
2021-01-19 20:04:52 +01:00
|
|
|
case Intrinsic::experimental_noalias_scope_decl:
|
Add an @llvm.sideeffect intrinsic
This patch implements Chandler's idea [0] for supporting languages that
require support for infinite loops with side effects, such as Rust, providing
part of a solution to bug 965 [1].
Specifically, it adds an `llvm.sideeffect()` intrinsic, which has no actual
effect, but which appears to optimization passes to have obscure side effects,
such that they don't optimize away loops containing it. It also teaches
several optimization passes to ignore this intrinsic, so that it doesn't
significantly impact optimization in most cases.
As discussed on llvm-dev [2], this patch is the first of two major parts.
The second part, to change LLVM's semantics to have defined behavior
on infinite loops by default, with a function attribute for opting into
potential-undefined-behavior, will be implemented and posted for review in
a separate patch.
[0] http://lists.llvm.org/pipermail/llvm-dev/2015-July/088103.html
[1] https://bugs.llvm.org/show_bug.cgi?id=965
[2] http://lists.llvm.org/pipermail/llvm-dev/2017-October/118632.html
Differential Revision: https://reviews.llvm.org/D38336
llvm-svn: 317729
2017-11-08 22:59:51 +01:00
|
|
|
case Intrinsic::sideeffect:
|
[CSSPGO] IR intrinsic for pseudo-probe block instrumentation
This change introduces a new IR intrinsic named `llvm.pseudoprobe` for pseudo-probe block instrumentation. Please refer to https://reviews.llvm.org/D86193 for the whole story.
A pseudo probe is used to collect the execution count of the block where the probe is instrumented. This requires a pseudo probe to be persisting. The LLVM PGO instrumentation also instruments in similar places by placing a counter in the form of atomic read/write operations or runtime helper calls. While these operations are very persisting or optimization-resilient, in theory we can borrow the atomic read/write implementation from PGO counters and cut it off at the end of compilation with all the atomics converted into binary data. This was our initial design and we’ve seen promising sample correlation quality with it. However, the atomics approach has a couple issues:
1. IR Optimizations are blocked unexpectedly. Those atomic instructions are not going to be physically present in the binary code, but since they are on the IR till very end of compilation, they can still prevent certain IR optimizations and result in lower code quality.
2. The counter atomics may not be fully cleaned up from the code stream eventually.
3. Extra work is needed for re-targeting.
We choose to implement pseudo probes based on a special LLVM intrinsic, which is expected to have most of the semantics that comes with an atomic operation but does not block desired optimizations as much as possible. More specifically the semantics associated with the new intrinsic enforces a pseudo probe to be virtually executed exactly the same number of times before and after an IR optimization. The intrinsic also comes with certain flags that are carefully chosen so that the places they are probing are not going to be messed up by the optimizer while most of the IR optimizations still work. The core flags given to the special intrinsic is `IntrInaccessibleMemOnly`, which means the intrinsic accesses memory and does have a side effect so that it is not removable, but is does not access memory locations that are accessible by any original instructions. This way the intrinsic does not alias with any original instruction and thus it does not block optimizations as much as an atomic operation does. We also assign a function GUID and a block index to an intrinsic so that they are uniquely identified and not merged in order to achieve good correlation quality.
Let's now look at an example. Given the following LLVM IR:
```
define internal void @foo2(i32 %x, void (i32)* %f) !dbg !4 {
bb0:
%cmp = icmp eq i32 %x, 0
br i1 %cmp, label %bb1, label %bb2
bb1:
br label %bb3
bb2:
br label %bb3
bb3:
ret void
}
```
The instrumented IR will look like below. Note that each `llvm.pseudoprobe` intrinsic call represents a pseudo probe at a block, of which the first parameter is the GUID of the probe’s owner function and the second parameter is the probe’s ID.
```
define internal void @foo2(i32 %x, void (i32)* %f) !dbg !4 {
bb0:
%cmp = icmp eq i32 %x, 0
call void @llvm.pseudoprobe(i64 837061429793323041, i64 1)
br i1 %cmp, label %bb1, label %bb2
bb1:
call void @llvm.pseudoprobe(i64 837061429793323041, i64 2)
br label %bb3
bb2:
call void @llvm.pseudoprobe(i64 837061429793323041, i64 3)
br label %bb3
bb3:
call void @llvm.pseudoprobe(i64 837061429793323041, i64 4)
ret void
}
```
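The declaration implied by these calls would look something like the following sketch. The parameter names and the exact attribute set are illustrative assumptions, and `inaccessiblememonly` is the IR-level counterpart of the `IntrInaccessibleMemOnly` flag discussed above:
```
declare void @llvm.pseudoprobe(i64 %guid, i64 %index) #0

; The probe has a side effect (so it is not removable) but touches no
; memory visible to the original instructions, so it does not alias
; with them and does not block their optimization.
attributes #0 = { inaccessiblememonly nounwind }
```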
Reviewed By: wmi
Differential Revision: https://reviews.llvm.org/D86490
2020-11-18 21:42:51 +01:00
|
|
|
case Intrinsic::pseudoprobe:
|
2016-11-07 15:11:45 +01:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
}
|
2011-07-27 02:46:46 +02:00
|
|
|
if (!Inst->mayReadOrWriteMemory())
|
2016-10-19 20:50:32 +02:00
|
|
|
return; // doesn't alias anything
|
2004-03-15 07:28:07 +01:00
|
|
|
|
2019-02-06 04:46:40 +01:00
|
|
|
if (AliasSet *AS = findAliasSetForUnknownInst(Inst)) {
|
2015-10-28 23:13:41 +01:00
|
|
|
AS->addUnknownInst(Inst, AA);
|
2016-10-19 20:50:32 +02:00
|
|
|
return;
|
2002-09-26 23:49:07 +02:00
|
|
|
}
|
2010-08-29 06:06:55 +02:00
|
|
|
AliasSets.push_back(new AliasSet());
|
2019-02-06 04:46:40 +01:00
|
|
|
AliasSets.back().addUnknownInst(Inst, AA);
|
2002-09-26 23:49:07 +02:00
|
|
|
}
|
|
|
|
|
2016-10-19 20:50:32 +02:00
|
|
|
void AliasSetTracker::add(Instruction *I) {
|
2010-08-29 06:06:55 +02:00
|
|
|
// Dispatch to one of the other add methods.
|
2003-02-24 21:37:56 +01:00
|
|
|
if (LoadInst *LI = dyn_cast<LoadInst>(I))
|
2004-07-21 07:18:04 +02:00
|
|
|
return add(LI);
|
2010-08-29 06:06:55 +02:00
|
|
|
if (StoreInst *SI = dyn_cast<StoreInst>(I))
|
2004-07-21 07:18:04 +02:00
|
|
|
return add(SI);
|
2010-08-29 06:06:55 +02:00
|
|
|
if (VAArgInst *VAAI = dyn_cast<VAArgInst>(I))
|
2008-04-14 20:34:50 +02:00
|
|
|
return add(VAAI);
|
2018-05-30 16:43:39 +02:00
|
|
|
if (AnyMemSetInst *MSI = dyn_cast<AnyMemSetInst>(I))
|
2016-02-17 03:01:50 +01:00
|
|
|
return add(MSI);
|
2018-05-30 16:43:39 +02:00
|
|
|
if (AnyMemTransferInst *MTI = dyn_cast<AnyMemTransferInst>(I))
|
2016-10-19 21:09:03 +02:00
|
|
|
return add(MTI);
|
2018-09-07 23:36:11 +02:00
|
|
|
|
|
|
|
// Handle all calls with known mod/ref sets generically.
|
2019-01-07 06:42:51 +01:00
|
|
|
if (auto *Call = dyn_cast<CallBase>(I))
|
|
|
|
if (Call->onlyAccessesArgMemory()) {
|
|
|
|
auto getAccessFromModRef = [](ModRefInfo MRI) {
|
|
|
|
if (isRefSet(MRI) && isModSet(MRI))
|
|
|
|
return AliasSet::ModRefAccess;
|
|
|
|
else if (isModSet(MRI))
|
|
|
|
return AliasSet::ModAccess;
|
|
|
|
else if (isRefSet(MRI))
|
|
|
|
return AliasSet::RefAccess;
|
|
|
|
else
|
|
|
|
return AliasSet::NoAccess;
|
|
|
|
};
|
|
|
|
|
|
|
|
ModRefInfo CallMask = createModRefInfo(AA.getModRefBehavior(Call));
|
|
|
|
|
|
|
|
// Some intrinsics are marked as modifying memory for control flow
|
|
|
|
// modelling purposes, but don't actually modify any specific memory
|
|
|
|
// location.
|
|
|
|
using namespace PatternMatch;
|
|
|
|
if (Call->use_empty() &&
|
|
|
|
match(Call, m_Intrinsic<Intrinsic::invariant_start>()))
|
|
|
|
CallMask = clearMod(CallMask);
|
|
|
|
|
|
|
|
for (auto IdxArgPair : enumerate(Call->args())) {
|
|
|
|
int ArgIdx = IdxArgPair.index();
|
|
|
|
const Value *Arg = IdxArgPair.value();
|
|
|
|
if (!Arg->getType()->isPointerTy())
|
|
|
|
continue;
|
|
|
|
MemoryLocation ArgLoc =
|
|
|
|
MemoryLocation::getForArgument(Call, ArgIdx, nullptr);
|
|
|
|
ModRefInfo ArgMask = AA.getArgModRefInfo(Call, ArgIdx);
|
|
|
|
ArgMask = intersectModRef(CallMask, ArgMask);
|
|
|
|
if (!isNoModRef(ArgMask))
|
|
|
|
addPointer(ArgLoc, getAccessFromModRef(ArgMask));
|
|
|
|
}
|
|
|
|
return;
|
2018-09-07 23:36:11 +02:00
|
|
|
}
|
2019-01-07 06:42:51 +01:00
|
|
|
|
2011-07-27 02:46:46 +02:00
|
|
|
return addUnknown(I);
|
2003-02-24 21:37:56 +01:00
|
|
|
}
|
2002-09-26 23:49:07 +02:00
|
|
|
|
2003-03-04 00:28:05 +01:00
|
|
|
void AliasSetTracker::add(BasicBlock &BB) {
|
Analysis: Remove implicit ilist iterator conversions
Remove implicit ilist iterator conversions from LLVMAnalysis.
I came across something really scary in `llvm::isKnownNotFullPoison()`
which relied on `Instruction::getNextNode()` being completely broken
(not surprising, but scary nevertheless). This function is documented
(and coded to) return `nullptr` when it gets to the sentinel, but with
an `ilist_half_node` as a sentinel, the sentinel check looks into some
other memory and we don't recognize we've hit the end.
Rooting out these scary cases is the reason I'm removing the implicit
conversions before doing anything else with `ilist`; I'm not at all
surprised that clients rely on badness.
I found another scary case -- this time, not relying on badness, just
bad (but I guess getting lucky so far) -- in
`ObjectSizeOffsetEvaluator::compute_()`. Here, we save out the
insertion point, do some things, and then restore it. Previously, we
let the iterator auto-convert to `Instruction*`, and then set it back
using the `Instruction*` version:
    Instruction *PrevInsertPoint = Builder.GetInsertPoint();
    /* Logic that may change insert point */
    if (PrevInsertPoint)
      Builder.SetInsertPoint(PrevInsertPoint);
The check for `PrevInsertPoint` doesn't protect correctly against bad
accesses. If the insertion point has been set to the end of a basic
block (i.e., `SetInsertPoint(SomeBB)`), then `GetInsertPoint()` returns
an iterator pointing at the list sentinel. The version of
`SetInsertPoint()` that's getting called will then call
`PrevInsertPoint->getParent()`, which explodes horribly. The only
reason this hasn't blown up is that it's fairly unlikely the builder is
adding to the end of the block; usually, we're adding instructions
somewhere before the terminator.
llvm-svn: 249925
2015-10-10 02:53:03 +02:00
|
|
|
for (auto &I : BB)
|
|
|
|
add(&I);
|
2003-03-04 00:28:05 +01:00
|
|
|
}
|
|
|
|
|
|
|
|
void AliasSetTracker::add(const AliasSetTracker &AST) {
|
|
|
|
assert(&AA == &AST.AA &&
|
|
|
|
"Merging AliasSetTracker objects with different Alias Analyses!");
|
|
|
|
|
|
|
|
// Loop over all of the alias sets in AST, adding the pointers contained
|
|
|
|
// therein into the current alias sets. This can cause alias sets to be
|
|
|
|
// merged together in the current AST.
|
2016-06-26 19:27:42 +02:00
|
|
|
for (const AliasSet &AS : AST) {
|
|
|
|
if (AS.Forward)
|
|
|
|
continue; // Ignore forwarding alias sets
|
2010-08-29 06:06:55 +02:00
|
|
|
|
|
|
|
// If there are any call sites in the alias set, add them to this AST.
|
2011-07-27 02:46:46 +02:00
|
|
|
for (unsigned i = 0, e = AS.UnknownInsts.size(); i != e; ++i)
|
2017-03-11 02:15:48 +01:00
|
|
|
if (auto *Inst = AS.getUnknownInst(i))
|
|
|
|
add(Inst);
|
2010-08-29 06:06:55 +02:00
|
|
|
|
|
|
|
// Loop over all of the pointers in this alias set.
|
2018-08-21 19:59:11 +02:00
|
|
|
for (AliasSet::iterator ASI = AS.begin(), E = AS.end(); ASI != E; ++ASI)
|
2018-10-29 23:25:59 +01:00
|
|
|
addPointer(
|
|
|
|
MemoryLocation(ASI.getPointer(), ASI.getSize(), ASI.getAAInfo()),
|
|
|
|
(AliasSet::AccessLattice)AS.Access);
|
2010-08-29 06:06:55 +02:00
|
|
|
}
|
2003-03-04 00:28:05 +01:00
|
|
|
}
|
|
|
|
|
2004-05-23 23:10:58 +02:00
|
|
|
// deleteValue method - This method is used to remove a pointer value from the
|
2003-12-18 09:11:56 +01:00
|
|
|
// AliasSetTracker entirely. It should be used when an instruction is deleted
|
|
|
|
// from the program to update the AST. If you don't use this, you would have
|
|
|
|
// dangling pointers to deleted instructions.
|
|
|
|
//
|
2004-05-23 23:10:58 +02:00
|
|
|
void AliasSetTracker::deleteValue(Value *PtrVal) {
|
2004-09-14 21:15:32 +02:00
|
|
|
// First, look up the PointerRec for this pointer.
|
2012-07-01 00:37:15 +02:00
|
|
|
PointerMapType::iterator I = PointerMap.find_as(PtrVal);
|
2003-12-18 09:11:56 +01:00
|
|
|
if (I == PointerMap.end()) return; // Noop
|
|
|
|
|
|
|
|
// If we found one, remove the pointer from the alias set it is in.
|
2009-03-09 06:11:09 +01:00
|
|
|
AliasSet::PointerRec *PtrValEnt = I->second;
|
|
|
|
AliasSet *AS = PtrValEnt->getAliasSet(*this);
|
2003-12-18 09:11:56 +01:00
|
|
|
|
2009-03-09 06:11:09 +01:00
|
|
|
// Unlink and delete from the list of values.
|
|
|
|
PtrValEnt->eraseFromList();
|
2016-08-19 19:05:22 +02:00
|
|
|
|
|
|
|
if (AS->Alias == AliasSet::SetMayAlias) {
|
|
|
|
AS->SetSize--;
|
|
|
|
TotalMayAliasSetSize--;
|
|
|
|
}
|
2018-07-30 21:41:25 +02:00
|
|
|
|
2009-03-09 06:11:09 +01:00
|
|
|
// Stop using the alias set.
|
2004-07-22 09:58:18 +02:00
|
|
|
AS->dropRef(*this);
|
2018-07-30 21:41:25 +02:00
|
|
|
|
2003-12-18 09:11:56 +01:00
|
|
|
PointerMap.erase(I);
|
|
|
|
}
|
|
|
|
|
2004-09-14 21:15:32 +02:00
|
|
|
// copyValue - This method should be used whenever a preexisting value in the
|
|
|
|
// program is copied or cloned, introducing a new value. Note that it is ok for
|
|
|
|
// clients that use this method to introduce the same value multiple times: if
|
|
|
|
// the tracker already knows about a value, it will ignore the request.
|
|
|
|
//
|
|
|
|
void AliasSetTracker::copyValue(Value *From, Value *To) {
|
|
|
|
// First, look up the PointerRec for this pointer.
|
2012-07-01 00:37:15 +02:00
|
|
|
PointerMapType::iterator I = PointerMap.find_as(From);
|
2009-03-09 06:11:09 +01:00
|
|
|
if (I == PointerMap.end())
|
2004-09-14 21:15:32 +02:00
|
|
|
return; // Noop
|
2009-03-09 06:11:09 +01:00
|
|
|
assert(I->second->hasAliasSet() && "Dead entry?");
|
2004-09-14 21:15:32 +02:00
|
|
|
|
2009-03-09 06:11:09 +01:00
|
|
|
AliasSet::PointerRec &Entry = getEntryFor(To);
|
|
|
|
if (Entry.hasAliasSet()) return; // Already in the tracker!
|
2004-09-14 21:15:32 +02:00
|
|
|
|
2016-08-12 01:09:56 +02:00
|
|
|
// getEntryFor above may invalidate iterator \c I, so reinitialize it.
|
2012-07-01 00:37:15 +02:00
|
|
|
I = PointerMap.find_as(From);
|
2016-08-12 01:09:56 +02:00
|
|
|
// Add it to the alias set it aliases...
|
2009-03-09 06:11:09 +01:00
|
|
|
AliasSet *AS = I->second->getAliasSet(*this);
|
2019-02-06 20:55:12 +01:00
|
|
|
AS->addPointer(*this, Entry, I->second->getSize(), I->second->getAAInfo(),
|
|
|
|
true, true);
|
2004-09-14 21:15:32 +02:00
|
|
|
}
|
|
|
|
|
2016-08-19 19:05:22 +02:00
|
|
|
AliasSet &AliasSetTracker::mergeAllAliasSets() {
|
|
|
|
assert(!AliasAnyAS && (TotalMayAliasSetSize > SaturationThreshold) &&
|
|
|
|
"Full merge should happen once, when the saturation threshold is "
|
|
|
|
"reached");
|
|
|
|
|
|
|
|
// Collect all alias sets, so that we can drop references with impunity
|
|
|
|
// without worrying about iterator invalidation.
|
|
|
|
std::vector<AliasSet *> ASVector;
|
|
|
|
ASVector.reserve(SaturationThreshold);
|
2021-02-06 20:17:09 +01:00
|
|
|
for (AliasSet &AS : *this)
|
|
|
|
ASVector.push_back(&AS);
|
2016-08-19 19:05:22 +02:00
|
|
|
|
|
|
|
// Copy all instructions and pointers into a new set, and forward all other
|
|
|
|
// sets to it.
|
|
|
|
AliasSets.push_back(new AliasSet());
|
|
|
|
AliasAnyAS = &AliasSets.back();
|
|
|
|
AliasAnyAS->Alias = AliasSet::SetMayAlias;
|
|
|
|
AliasAnyAS->Access = AliasSet::ModRefAccess;
|
|
|
|
AliasAnyAS->AliasAny = true;
|
|
|
|
|
|
|
|
for (auto Cur : ASVector) {
|
|
|
|
// If Cur was already forwarding, just forward to the new AS instead.
|
|
|
|
AliasSet *FwdTo = Cur->Forward;
|
|
|
|
if (FwdTo) {
|
|
|
|
Cur->Forward = AliasAnyAS;
|
2017-12-05 21:12:23 +01:00
|
|
|
AliasAnyAS->addRef();
|
2016-08-19 19:05:22 +02:00
|
|
|
FwdTo->dropRef(*this);
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
|
|
|
// Otherwise, perform the actual merge.
|
|
|
|
AliasAnyAS->mergeSetIn(*Cur, *this);
|
|
|
|
}
|
|
|
|
|
|
|
|
return *AliasAnyAS;
|
|
|
|
}
|
|
|
|
|
2018-10-29 23:25:59 +01:00
|
|
|
AliasSet &AliasSetTracker::addPointer(MemoryLocation Loc,
|
2016-10-19 20:50:32 +02:00
|
|
|
AliasSet::AccessLattice E) {
|
2018-10-29 23:25:59 +01:00
|
|
|
AliasSet &AS = getAliasSetFor(Loc);
|
2016-08-19 19:05:22 +02:00
|
|
|
AS.Access |= E;
|
|
|
|
|
|
|
|
if (!AliasAnyAS && (TotalMayAliasSetSize > SaturationThreshold)) {
|
|
|
|
// The AST is now saturated. From here on, we conservatively consider all
|
|
|
|
    // pointers to alias each other.
|
|
|
|
return mergeAllAliasSets();
|
|
|
|
}
|
|
|
|
|
|
|
|
return AS;
|
|
|
|
}
|
2003-12-18 09:11:56 +01:00
|
|
|
|
2003-02-24 21:37:56 +01:00
|
|
|
//===----------------------------------------------------------------------===//
|
|
|
|
// AliasSet/AliasSetTracker Printing Support
|
|
|
|
//===----------------------------------------------------------------------===//
|
2002-09-26 23:49:07 +02:00
|
|
|
|
2009-08-23 07:17:37 +02:00
|
|
|
void AliasSet::print(raw_ostream &OS) const {
|
2012-09-06 00:26:57 +02:00
|
|
|
OS << " AliasSet[" << (const void*)this << ", " << RefCount << "] ";
|
2015-06-22 04:12:52 +02:00
|
|
|
OS << (Alias == SetMustAlias ? "must" : "may") << " alias, ";
|
|
|
|
switch (Access) {
|
2015-10-28 23:13:41 +01:00
|
|
|
case NoAccess: OS << "No access "; break;
|
|
|
|
case RefAccess: OS << "Ref "; break;
|
|
|
|
case ModAccess: OS << "Mod "; break;
|
|
|
|
case ModRefAccess: OS << "Mod/Ref "; break;
|
2015-06-22 04:12:52 +02:00
|
|
|
default: llvm_unreachable("Bad value for Access!");
|
2003-02-24 21:37:56 +01:00
|
|
|
}
|
|
|
|
if (Forward)
|
|
|
|
OS << " forwarding to " << (void*)Forward;
|
2002-09-26 23:49:07 +02:00
|
|
|
|
2007-10-03 21:26:29 +02:00
|
|
|
if (!empty()) {
|
2003-02-24 21:37:56 +01:00
|
|
|
OS << "Pointers: ";
|
|
|
|
for (iterator I = begin(), E = end(); I != E; ++I) {
|
|
|
|
if (I != begin()) OS << ", ";
|
2014-01-09 03:29:41 +01:00
|
|
|
I.getPointer()->printAsOperand(OS << "(");
|
2020-11-17 20:11:09 +01:00
|
|
|
if (I.getSize() == LocationSize::afterPointer())
|
|
|
|
OS << ", unknown after)";
|
|
|
|
else if (I.getSize() == LocationSize::beforeOrAfterPointer())
|
|
|
|
OS << ", unknown before-or-after)";
|
2020-02-18 03:48:38 +01:00
|
|
|
else
|
2018-08-18 01:17:31 +02:00
|
|
|
OS << ", " << I.getSize() << ")";
|
2003-02-24 21:37:56 +01:00
|
|
|
}
|
2002-09-26 23:49:07 +02:00
|
|
|
}
|
2011-07-27 02:46:46 +02:00
|
|
|
if (!UnknownInsts.empty()) {
|
|
|
|
OS << "\n " << UnknownInsts.size() << " Unknown instructions: ";
|
|
|
|
for (unsigned i = 0, e = UnknownInsts.size(); i != e; ++i) {
|
2003-02-24 21:37:56 +01:00
|
|
|
if (i) OS << ", ";
|
2018-06-27 18:34:30 +02:00
|
|
|
if (auto *I = getUnknownInst(i)) {
|
|
|
|
if (I->hasName())
|
|
|
|
I->printAsOperand(OS);
|
|
|
|
else
|
|
|
|
I->print(OS);
|
|
|
|
}
|
2005-04-21 23:13:18 +02:00
|
|
|
}
|
2003-02-24 21:37:56 +01:00
|
|
|
}
|
|
|
|
OS << "\n";
|
2002-09-26 23:49:07 +02:00
|
|
|
}
|
|
|
|
|
2009-08-23 07:17:37 +02:00
|
|
|
void AliasSetTracker::print(raw_ostream &OS) const {
|
2019-09-12 20:09:47 +02:00
|
|
|
OS << "Alias Set Tracker: " << AliasSets.size();
|
|
|
|
if (AliasAnyAS)
|
|
|
|
OS << " (Saturated)";
|
|
|
|
OS << " alias sets for " << PointerMap.size() << " pointer values.\n";
|
2016-06-26 19:27:42 +02:00
|
|
|
for (const AliasSet &AS : *this)
|
|
|
|
AS.print(OS);
|
2003-02-24 21:37:56 +01:00
|
|
|
OS << "\n";
|
|
|
|
}
|
|
|
|
|
2017-10-15 16:32:27 +02:00
|
|
|
#if !defined(NDEBUG) || defined(LLVM_ENABLE_DUMP)
|
2016-01-29 21:50:44 +01:00
|
|
|
LLVM_DUMP_METHOD void AliasSet::dump() const { print(dbgs()); }
|
|
|
|
LLVM_DUMP_METHOD void AliasSetTracker::dump() const { print(dbgs()); }
|
2012-09-06 21:55:56 +02:00
|
|
|
#endif
|
2003-02-24 21:37:56 +01:00
|
|
|
|
2009-07-30 22:21:41 +02:00
|
|
|
//===----------------------------------------------------------------------===//
|
|
|
|
// ASTCallbackVH Class Implementation
|
|
|
|
//===----------------------------------------------------------------------===//
|
|
|
|
|
|
|
|
void AliasSetTracker::ASTCallbackVH::deleted() {
|
|
|
|
assert(AST && "ASTCallbackVH called with a null AliasSetTracker!");
|
|
|
|
AST->deleteValue(getValPtr());
|
|
|
|
// this now dangles!
|
|
|
|
}
|
|
|
|
|
2011-04-09 08:55:46 +02:00
|
|
|
void AliasSetTracker::ASTCallbackVH::allUsesReplacedWith(Value *V) {
|
|
|
|
AST->copyValue(getValPtr(), V);
|
|
|
|
}
|
|
|
|
|
2009-07-30 22:21:41 +02:00
|
|
|
AliasSetTracker::ASTCallbackVH::ASTCallbackVH(Value *V, AliasSetTracker *ast)
|
2009-07-31 20:21:48 +02:00
|
|
|
: CallbackVH(V), AST(ast) {}
|
|
|
|
|
|
|
|
AliasSetTracker::ASTCallbackVH &
|
|
|
|
AliasSetTracker::ASTCallbackVH::operator=(Value *V) {
|
|
|
|
return *this = ASTCallbackVH(V, AST);
|
|
|
|
}
|
2009-07-30 22:21:41 +02:00
|
|
|
|
2003-02-24 21:37:56 +01:00
|
|
|
//===----------------------------------------------------------------------===//
|
|
|
|
// AliasSetPrinter Pass
|
|
|
|
//===----------------------------------------------------------------------===//
|
|
|
|
|
|
|
|
namespace {
|
2017-07-25 01:16:33 +02:00
|
|
|
|
2009-10-25 07:33:48 +01:00
|
|
|
class AliasSetPrinter : public FunctionPass {
|
2003-02-24 21:37:56 +01:00
|
|
|
public:
|
2007-05-06 15:37:16 +02:00
|
|
|
static char ID; // Pass identification, replacement for typeid
|
2017-07-25 01:16:33 +02:00
|
|
|
|
2010-10-19 19:21:58 +02:00
|
|
|
AliasSetPrinter() : FunctionPass(ID) {
|
|
|
|
initializeAliasSetPrinterPass(*PassRegistry::getPassRegistry());
|
|
|
|
}
|
2007-05-01 23:15:47 +02:00
|
|
|
|
2014-03-05 08:30:04 +01:00
|
|
|
void getAnalysisUsage(AnalysisUsage &AU) const override {
|
2003-02-24 21:37:56 +01:00
|
|
|
AU.setPreservesAll();
|
[PM/AA] Rebuild LLVM's alias analysis infrastructure in a way compatible
with the new pass manager, and no longer relying on analysis groups.
This builds essentially a ground-up new AA infrastructure stack for
LLVM. The core ideas are the same that are used throughout the new pass
manager: type erased polymorphism and direct composition. The design is
as follows:
- FunctionAAResults is a type-erasing alias analysis results aggregation
interface to walk a single query across a range of results from
different alias analyses. Currently this is function-specific as we
always assume that aliasing queries are *within* a function.
- AAResultBase is a CRTP utility providing stub implementations of
various parts of the alias analysis result concept, notably in several
cases in terms of other more general parts of the interface. This can
be used to implement only a narrow part of the interface rather than
the entire interface. This isn't really ideal, this logic should be
hoisted into FunctionAAResults as currently it will cause
a significant amount of redundant work, but it faithfully models the
behavior of the prior infrastructure.
- All the alias analysis passes are ported to be wrapper passes for the
legacy PM and new-style analysis passes for the new PM with a shared
result object. In some cases (most notably CFL), this is an extremely
naive approach that we should revisit when we can specialize for the
new pass manager.
- BasicAA has been restructured to reflect that it is much more
fundamentally a function analysis because it uses dominator trees and
loop info that need to be constructed for each function.
All of the references to getting alias analysis results have been
updated to use the new aggregation interface. All the preservation and
other pass management code has been updated accordingly.
The way the FunctionAAResultsWrapperPass works is to detect the
available alias analyses when run, and add them to the results object.
This means that we should be able to continue to respect when various
passes are added to the pipeline, for example adding CFL or adding TBAA
passes should just cause their results to be available and to get folded
into this. The exception to this rule is BasicAA which really needs to
be a function pass due to using dominator trees and loop info. As
a consequence, the FunctionAAResultsWrapperPass directly depends on
BasicAA and always includes it in the aggregation.
This has significant implications for preserving analyses. Generally,
most passes shouldn't bother preserving FunctionAAResultsWrapperPass
because rebuilding the results just updates the set of known AA passes.
The exception to this rule are LoopPass instances which need to preserve
all the function analyses that the loop pass manager will end up
needing. This means preserving both BasicAAWrapperPass and the
aggregating FunctionAAResultsWrapperPass.
Now, when preserving an alias analysis, you do so by directly preserving
that analysis. This is only necessary for non-immutable-pass-provided
alias analyses though, and there are only three of interest: BasicAA,
GlobalsAA (formerly GlobalsModRef), and SCEVAA. Usually BasicAA is
preserved when needed because it (like DominatorTree and LoopInfo) is
marked as a CFG-only pass. I've expanded GlobalsAA into the preserved
set everywhere we previously were preserving all of AliasAnalysis, and
I've added SCEVAA in the intersection of that with where we preserve
SCEV itself.
One significant challenge to all of this is that the CGSCC passes were
actually using the alias analysis implementations by taking advantage of
a pretty amazing set of loop holes in the old pass manager's analysis
management code which allowed analysis groups to slide through in many
cases. Moving away from analysis groups makes this problem much more
obvious. To fix it, I've leveraged the flexibility the design of the new
PM components provides to just directly construct the relevant alias
analyses for the relevant functions in the IPO passes that need them.
This is a bit hacky, but should go away with the new pass manager, and
is already in many ways cleaner than the prior state.
Another significant challenge is that various facilities of the old
alias analysis infrastructure just don't fit any more. The most
significant of these is the alias analysis 'counter' pass. That pass
relied on the ability to snoop on AA queries at different points in the
analysis group chain. Instead, I'm planning to build printing
functionality directly into the aggregation layer. I've not included
that in this patch merely to keep it smaller.
Note that all of this needs a nearly complete rewrite of the AA
documentation. I'm planning to do that, but I'd like to make sure the
new design settles, and to flesh out a bit more of what it looks like in
the new pass manager first.
Differential Revision: http://reviews.llvm.org/D12080
llvm-svn: 247167
2015-09-09 19:55:00 +02:00
|
|
|
AU.addRequired<AAResultsWrapperPass>();
|
2003-02-24 21:37:56 +01:00
|
|
|
}
|
|
|
|
|
2014-03-05 08:30:04 +01:00
|
|
|
bool runOnFunction(Function &F) override {
|
2015-09-09 19:55:00 +02:00
|
|
|
auto &AAWP = getAnalysis<AAResultsWrapperPass>();
|
2020-09-15 04:01:38 +02:00
|
|
|
AliasSetTracker Tracker(AAWP.getAAResults());
|
2016-08-19 19:05:22 +02:00
|
|
|
errs() << "Alias sets for function '" << F.getName() << "':\n";
|
2021-02-06 20:17:09 +01:00
|
|
|
for (Instruction &I : instructions(F))
|
|
|
|
Tracker.add(&I);
|
2020-09-15 04:01:38 +02:00
|
|
|
Tracker.print(errs());
|
2006-01-03 07:05:22 +01:00
|
|
|
return false;
|
2003-02-24 21:37:56 +01:00
|
|
|
}
|
|
|
|
};
|
2017-07-25 01:16:33 +02:00
|
|
|
|
|
|
|
} // end anonymous namespace
|
2008-05-13 02:00:25 +02:00
|
|
|
|
|
|
|
char AliasSetPrinter::ID = 0;
|
2017-07-25 01:16:33 +02:00
|
|
|
|
2010-10-12 21:48:12 +02:00
|
|
|
INITIALIZE_PASS_BEGIN(AliasSetPrinter, "print-alias-sets",
|
|
|
|
"Alias Set Printer", false, true)
|
2015-09-09 19:55:00 +02:00
|
|
|
INITIALIZE_PASS_DEPENDENCY(AAResultsWrapperPass)
|
2010-10-12 21:48:12 +02:00
|
|
|
INITIALIZE_PASS_END(AliasSetPrinter, "print-alias-sets",
|
2010-10-08 00:25:06 +02:00
|
|
|
"Alias Set Printer", false, true)
|
2020-09-15 04:01:38 +02:00
|
|
|
|
|
|
|
AliasSetsPrinterPass::AliasSetsPrinterPass(raw_ostream &OS) : OS(OS) {}
|
|
|
|
|
|
|
|
PreservedAnalyses AliasSetsPrinterPass::run(Function &F,
|
|
|
|
FunctionAnalysisManager &AM) {
|
|
|
|
auto &AA = AM.getResult<AAManager>(F);
|
|
|
|
AliasSetTracker Tracker(AA);
|
|
|
|
OS << "Alias sets for function '" << F.getName() << "':\n";
|
2021-02-06 20:17:09 +01:00
|
|
|
for (Instruction &I : instructions(F))
|
|
|
|
Tracker.add(&I);
|
2020-09-15 04:01:38 +02:00
|
|
|
Tracker.print(OS);
|
|
|
|
return PreservedAnalyses::all();
|
|
|
|
}
|