//===- llvm/Analysis/TargetTransformInfo.cpp ------------------------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//

#include "llvm/Analysis/TargetTransformInfo.h"
#include "llvm/Analysis/CFG.h"
#include "llvm/Analysis/LoopIterator.h"
#include "llvm/Analysis/TargetTransformInfoImpl.h"
#include "llvm/IR/CFG.h"
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/Dominators.h"
#include "llvm/IR/Instruction.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/IntrinsicInst.h"
#include "llvm/IR/Module.h"
#include "llvm/IR/Operator.h"
#include "llvm/IR/PatternMatch.h"
#include "llvm/InitializePasses.h"
#include "llvm/Support/CommandLine.h"
#include "llvm/Support/ErrorHandling.h"
#include <utility>

using namespace llvm;
using namespace PatternMatch;
#define DEBUG_TYPE "tti"
static cl::opt<bool> EnableReduxCost("costmodel-reduxcost", cl::init(false),
                                     cl::Hidden,
                                     cl::desc("Recognize reduction patterns."));
namespace {
/// No-op implementation of the TTI interface using the utility base
/// classes.
///
/// This is used when no target specific information is available.
struct NoTTIImpl : TargetTransformInfoImplCRTPBase<NoTTIImpl> {
  explicit NoTTIImpl(const DataLayout &DL)
      : TargetTransformInfoImplCRTPBase<NoTTIImpl>(DL) {}
};
} // namespace
bool HardwareLoopInfo::canAnalyze(LoopInfo &LI) {
  // If the loop has irreducible control flow, it cannot be converted to a
  // hardware loop.
  LoopBlocksRPO RPOT(L);
  RPOT.perform(&LI);
  if (containsIrreducibleCFG<const BasicBlock *>(RPOT, LI))
    return false;
  return true;
}
IntrinsicCostAttributes::IntrinsicCostAttributes(
    Intrinsic::ID Id, const CallBase &CI, InstructionCost ScalarizationCost)
    : II(dyn_cast<IntrinsicInst>(&CI)), RetTy(CI.getType()), IID(Id),
      ScalarizationCost(ScalarizationCost) {

  if (const auto *FPMO = dyn_cast<FPMathOperator>(&CI))
    FMF = FPMO->getFastMathFlags();

  Arguments.insert(Arguments.begin(), CI.arg_begin(), CI.arg_end());
  FunctionType *FTy = CI.getCalledFunction()->getFunctionType();
  ParamTys.insert(ParamTys.begin(), FTy->param_begin(), FTy->param_end());
}
IntrinsicCostAttributes::IntrinsicCostAttributes(Intrinsic::ID Id, Type *RTy,
                                                 ArrayRef<Type *> Tys,
                                                 FastMathFlags Flags,
                                                 const IntrinsicInst *I,
                                                 InstructionCost ScalarCost)
    : II(I), RetTy(RTy), IID(Id), FMF(Flags), ScalarizationCost(ScalarCost) {
  ParamTys.insert(ParamTys.begin(), Tys.begin(), Tys.end());
}
IntrinsicCostAttributes::IntrinsicCostAttributes(Intrinsic::ID Id, Type *Ty,
                                                 ArrayRef<const Value *> Args)
    : RetTy(Ty), IID(Id) {

  Arguments.insert(Arguments.begin(), Args.begin(), Args.end());
  ParamTys.reserve(Arguments.size());
  for (unsigned Idx = 0, Size = Arguments.size(); Idx != Size; ++Idx)
    ParamTys.push_back(Arguments[Idx]->getType());
}
IntrinsicCostAttributes::IntrinsicCostAttributes(Intrinsic::ID Id, Type *RTy,
                                                 ArrayRef<const Value *> Args,
                                                 ArrayRef<Type *> Tys,
                                                 FastMathFlags Flags,
                                                 const IntrinsicInst *I,
                                                 InstructionCost ScalarCost)
    : II(I), RetTy(RTy), IID(Id), FMF(Flags), ScalarizationCost(ScalarCost) {
  ParamTys.insert(ParamTys.begin(), Tys.begin(), Tys.end());
  Arguments.insert(Arguments.begin(), Args.begin(), Args.end());
}
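
// Illustrative sketch (hypothetical call site, not code in this file; `TTI`,
// `RetTy`, and `I1Ty` are assumed to be in scope): a client builds one of the
// attribute bundles above and hands it to TTI::getIntrinsicInstrCost, e.g.
// for a ctlz call taking a value and an i1 "is-zero-poison" flag:
//
//   IntrinsicCostAttributes Attrs(Intrinsic::ctlz, RetTy, {RetTy, I1Ty});
//   InstructionCost Cost =
//       TTI.getIntrinsicInstrCost(Attrs, TTI::TCK_RecipThroughput);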
bool HardwareLoopInfo::isHardwareLoopCandidate(ScalarEvolution &SE,
                                               LoopInfo &LI, DominatorTree &DT,
                                               bool ForceNestedLoop,
                                               bool ForceHardwareLoopPHI) {
  SmallVector<BasicBlock *, 4> ExitingBlocks;
  L->getExitingBlocks(ExitingBlocks);

  for (BasicBlock *BB : ExitingBlocks) {
    // If we pass the updated counter back through a phi, we need to know
    // which latch the updated value will be coming from.
    if (!L->isLoopLatch(BB)) {
      if (ForceHardwareLoopPHI || CounterInReg)
        continue;
    }

    const SCEV *EC = SE.getExitCount(L, BB);
    if (isa<SCEVCouldNotCompute>(EC))
      continue;
    if (const SCEVConstant *ConstEC = dyn_cast<SCEVConstant>(EC)) {
      if (ConstEC->getValue()->isZero())
        continue;
    } else if (!SE.isLoopInvariant(EC, L))
      continue;

    if (SE.getTypeSizeInBits(EC->getType()) > CountType->getBitWidth())
      continue;

    // If this exiting block is contained in a nested loop, it is not eligible
    // for insertion of the branch-and-decrement since the inner loop would
    // end up messing up the value in the CTR.
    if (!IsNestingLegal && LI.getLoopFor(BB) != L && !ForceNestedLoop)
      continue;

    // We now have a loop-invariant count of loop iterations (which is not the
    // constant zero) for which we know that this loop will not exit via this
    // exiting block.

    // We need to make sure that this block will run on every loop iteration.
    // For this to be true, we must dominate all blocks with backedges. Such
    // blocks are in-loop predecessors to the header block.
    bool NotAlways = false;
    for (BasicBlock *Pred : predecessors(L->getHeader())) {
      if (!L->contains(Pred))
        continue;

      if (!DT.dominates(BB, Pred)) {
        NotAlways = true;
        break;
      }
    }

    if (NotAlways)
      continue;

    // Make sure this block ends with a conditional branch.
    Instruction *TI = BB->getTerminator();
    if (!TI)
      continue;

    if (BranchInst *BI = dyn_cast<BranchInst>(TI)) {
      if (!BI->isConditional())
        continue;

      ExitBranch = BI;
    } else
      continue;

    // Note that this block may not be the loop latch block, even if the loop
    // has a latch block.
    ExitBlock = BB;
    ExitCount = EC;
    break;
  }

  if (!ExitBlock)
    return false;
  return true;
}
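
// Illustrative sketch (hypothetical call site; `L`, `LI`, `SE`, and `DT` are
// assumed to be in scope): a transform that forms hardware loops would
// typically chain the two helpers above, first checking analyzability and
// then candidacy:
//
//   HardwareLoopInfo HWLoopInfo(L);
//   if (!HWLoopInfo.canAnalyze(LI) ||
//       !HWLoopInfo.isHardwareLoopCandidate(SE, LI, DT))
//     return false; // Leave the loop as a normal counted loop.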
TargetTransformInfo::TargetTransformInfo(const DataLayout &DL)
    : TTIImpl(new Model<NoTTIImpl>(NoTTIImpl(DL))) {}
TargetTransformInfo::~TargetTransformInfo() {}
TargetTransformInfo::TargetTransformInfo(TargetTransformInfo &&Arg)
    : TTIImpl(std::move(Arg.TTIImpl)) {}
|
Switch TargetTransformInfo from an immutable analysis pass that requires
a TargetMachine to construct (and thus isn't always available), to an
analysis group that supports layered implementations much like
AliasAnalysis does. This is a pretty massive change, with a few parts
that I was unable to easily separate (sorry), so I'll walk through it.
The first step of this conversion was to make TargetTransformInfo an
analysis group, and to sink the nonce implementations in
ScalarTargetTransformInfo and VectorTargetTranformInfo into
a NoTargetTransformInfo pass. This allows other passes to add a hard
requirement on TTI, and assume they will always get at least on
implementation.
The TargetTransformInfo analysis group leverages the delegation chaining
trick that AliasAnalysis uses, where the base class for the analysis
group delegates to the previous analysis *pass*, allowing all but tho
NoFoo analysis passes to only implement the parts of the interfaces they
support. It also introduces a new trick where each pass in the group
retains a pointer to the top-most pass that has been initialized. This
allows passes to implement one API in terms of another API and benefit
when some other pass above them in the stack has more precise results
for the second API.
The second step of this conversion is to create a pass that implements
the TargetTransformInfo analysis using the target-independent
abstractions in the code generator. This replaces the
ScalarTargetTransformImpl and VectorTargetTransformImpl classes in
lib/Target with a single pass in lib/CodeGen called
BasicTargetTransformInfo. This class actually provides most of the TTI
functionality, basing it upon the TargetLowering abstraction and other
information in the target independent code generator.
The third step of the conversion adds support to all TargetMachines to
register custom analysis passes. This allows building those passes with
access to TargetLowering or other target-specific classes, and it also
allows each target to customize the set of analysis passes desired in
the pass manager. The baseline LLVMTargetMachine implements this
interface to add the BasicTTI pass to the pass manager, and all of the
tools that want to support target-aware TTI passes call this routine on
whatever target machine they end up with to add the appropriate passes.
The fourth step of the conversion created target-specific TTI analysis
passes for the X86 and ARM backends. These passes contain the custom
logic that was previously in their extensions of the
ScalarTargetTransformInfo and VectorTargetTransformInfo interfaces.
I separated them into their own file, as now all of the interface bits
are private and they just expose a function to create the pass itself.
Then I extended these target machines to set up a custom set of analysis
passes, first adding BasicTTI as a fallback, and then adding their
customized TTI implementations.
The fourth step required logic that was shared between the target
independent layer and the specific targets to move to a different
interface, as they no longer derive from each other. As a consequence,
a helper functions were added to TargetLowering representing the common
logic needed both in the target implementation and the codegen
implementation of the TTI pass. While technically this is the only
change that could have been committed separately, it would have been
a nightmare to extract.
The final step of the conversion was just to delete all the old
boilerplate. This got rid of the ScalarTargetTransformInfo and
VectorTargetTransformInfo classes, all of the support in all of the
targets for producing instances of them, and all of the support in the
tools for manually constructing a pass based around them.
Now that TTI is a relatively normal analysis group, two things become
straightforward. First, we can sink it into lib/Analysis which is a more
natural layer for it to live. Second, clients of this interface can
depend on it *always* being available which will simplify their code and
behavior. These (and other) simplifications will follow in subsequent
commits, this one is clearly big enough.
Finally, I'm very aware that much of the comments and documentation
needs to be updated. As soon as I had this working, and plausibly well
commented, I wanted to get it committed and in front of the build bots.
I'll be doing a few passes over documentation later if it sticks.
Commits to update DragonEgg and Clang will be made presently.
llvm-svn: 171681
2013-01-07 02:37:14 +01:00
|
|
|
|
[PM] Change the core design of the TTI analysis to use a polymorphic
type erased interface and a single analysis pass rather than an
extremely complex analysis group.
The end result is that the TTI analysis can contain a type erased
implementation that supports the polymorphic TTI interface. We can build
one from a target-specific implementation or from a dummy one in the IR.
I've also factored all of the code into "mix-in"-able base classes,
including CRTP base classes to facilitate calling back up to the most
specialized form when delegating horizontally across the surface. These
aren't as clean as I would like and I'm planning to work on cleaning
some of this up, but I wanted to start by putting into the right form.
There are a number of reasons for this change, and this particular
design. The first and foremost reason is that an analysis group is
complete overkill, and the chaining delegation strategy was so opaque,
confusing, and high overhead that TTI was suffering greatly for it.
Several of the TTI functions had failed to be implemented in all places
because of the chaining-based delegation making there be no checking of
this. A few other functions were implemented with incorrect delegation.
The message to me was very clear working on this -- the delegation and
analysis group structure was too confusing to be useful here.
The other reason of course is that this is *much* more natural fit for
the new pass manager. This will lay the ground work for a type-erased
per-function info object that can look up the correct subtarget and even
cache it.
Yet another benefit is that this will significantly simplify the
interaction of the pass managers and the TargetMachine. See the future
work below.
The downside of this change is that it is very, very verbose. I'm going
to work to improve that, but it is somewhat an implementation necessity
in C++ to do type erasure. =/ I discussed this design really extensively
with Eric and Hal prior to going down this path, and afterward showed
them the result. No one was really thrilled with it, but there doesn't
seem to be a substantially better alternative. Using a base class and
virtual method dispatch would make the code much shorter, but as
discussed in the update to the programmer's manual and elsewhere,
a polymorphic interface feels like the more principled approach even if
this is perhaps the least compelling example of it. ;]
Ultimately, there is still a lot more to be done here, but this was the
huge chunk that I couldn't really split things out of because this was
the interface change to TTI. I've tried to minimize all the other parts
of this. The follow up work should include at least:
1) Improving the TargetMachine interface by having it directly return
a TTI object. Because we have a non-pass object with value semantics
and an internal type erasure mechanism, we can narrow the interface
of the TargetMachine to *just* do what we need: build and return
a TTI object that we can then insert into the pass pipeline.
2) Make the TTI object be fully specialized for a particular function.
This will include splitting off a minimal form of it which is
sufficient for the inliner and the old pass manager.
3) Add a new pass manager analysis which produces TTI objects from the
target machine for each function. This may actually be done as part
of #2 in order to use the new analysis to implement #2.
4) Work on narrowing the API between TTI and the targets so that it is
easier to understand and less verbose to type erase.
5) Work on narrowing the API between TTI and its clients so that it is
easier to understand and less verbose to forward.
6) Try to improve the CRTP-based delegation. I feel like this code is
just a bit messy and exacerbating the complexity of implementing
the TTI in each target.
Many thanks to Eric and Hal for their help here. I ended up blocked on
this somewhat more abruptly than I expected, and so I appreciate getting
it sorted out very quickly.
Differential Revision: http://reviews.llvm.org/D7293
llvm-svn: 227669
2015-01-31 04:43:40 +01:00
TargetTransformInfo &TargetTransformInfo::operator=(TargetTransformInfo &&RHS) {
  TTIImpl = std::move(RHS.TTIImpl);
  return *this;
}

unsigned TargetTransformInfo::getInliningThresholdMultiplier() const {
  return TTIImpl->getInliningThresholdMultiplier();
}

unsigned
TargetTransformInfo::adjustInliningThreshold(const CallBase *CB) const {
  return TTIImpl->adjustInliningThreshold(CB);
}

[AMDGPU] Tune inlining parameters for AMDGPU target
Summary:
Since the target gains no significant advantage from vectorization,
the vector-instruction threshold bonus should be optional.
The amdgpu-inline-arg-alloca-cost parameter's default value and the
target's InliningThresholdMultiplier value were tuned accordingly.
Reviewers: arsenm, rampitec
Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, eraman, hiraditya, haicheng, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64642
llvm-svn: 366348
2019-07-17 18:51:29 +02:00
int TargetTransformInfo::getInlinerVectorBonusPercent() const {
  return TTIImpl->getInlinerVectorBonusPercent();
}

InstructionCost
TargetTransformInfo::getGEPCost(Type *PointeeType, const Value *Ptr,
                                ArrayRef<const Value *> Operands,
                                TTI::TargetCostKind CostKind) const {
  return TTIImpl->getGEPCost(PointeeType, Ptr, Operands, CostKind);
}

unsigned TargetTransformInfo::getEstimatedNumberOfCaseClusters(
    const SwitchInst &SI, unsigned &JTSize, ProfileSummaryInfo *PSI,
    BlockFrequencyInfo *BFI) const {
  return TTIImpl->getEstimatedNumberOfCaseClusters(SI, JTSize, PSI, BFI);
[InlineCost] Improve the cost heuristic for Switch
Summary:
The motivation example is like below which has 13 cases but only 2 distinct targets
```
lor.lhs.false2: ; preds = %if.then
switch i32 %Status, label %if.then27 [
i32 -7012, label %if.end35
i32 -10008, label %if.end35
i32 -10016, label %if.end35
i32 15000, label %if.end35
i32 14013, label %if.end35
i32 10114, label %if.end35
i32 10107, label %if.end35
i32 10105, label %if.end35
i32 10013, label %if.end35
i32 10011, label %if.end35
i32 7008, label %if.end35
i32 7007, label %if.end35
i32 5002, label %if.end35
]
```
which is compiled into a balanced binary tree like this on AArch64 (similar on X86)
```
.LBB853_9: // %lor.lhs.false2
mov w8, #10012
cmp w19, w8
b.gt .LBB853_14
// BB#10: // %lor.lhs.false2
mov w8, #5001
cmp w19, w8
b.gt .LBB853_18
// BB#11: // %lor.lhs.false2
mov w8, #-10016
cmp w19, w8
b.eq .LBB853_23
// BB#12: // %lor.lhs.false2
mov w8, #-10008
cmp w19, w8
b.eq .LBB853_23
// BB#13: // %lor.lhs.false2
mov w8, #-7012
cmp w19, w8
b.eq .LBB853_23
b .LBB853_3
.LBB853_14: // %lor.lhs.false2
mov w8, #14012
cmp w19, w8
b.gt .LBB853_21
// BB#15: // %lor.lhs.false2
mov w8, #-10105
add w8, w19, w8
cmp w8, #9 // =9
b.hi .LBB853_17
// BB#16: // %lor.lhs.false2
orr w9, wzr, #0x1
lsl w8, w9, w8
mov w9, #517
and w8, w8, w9
cbnz w8, .LBB853_23
.LBB853_17: // %lor.lhs.false2
mov w8, #10013
cmp w19, w8
b.eq .LBB853_23
b .LBB853_3
.LBB853_18: // %lor.lhs.false2
mov w8, #-7007
add w8, w19, w8
cmp w8, #2 // =2
b.lo .LBB853_23
// BB#19: // %lor.lhs.false2
mov w8, #5002
cmp w19, w8
b.eq .LBB853_23
// BB#20: // %lor.lhs.false2
mov w8, #10011
cmp w19, w8
b.eq .LBB853_23
b .LBB853_3
.LBB853_21: // %lor.lhs.false2
mov w8, #14013
cmp w19, w8
b.eq .LBB853_23
// BB#22: // %lor.lhs.false2
mov w8, #15000
cmp w19, w8
b.ne .LBB853_3
```
However, the inline cost model estimates the cost to be linear with the number
of distinct targets and the cost of the above switch is just 2 InstrCosts.
The function containing this switch is then inlined about 900 times.
This change uses the general switch-lowering logic for the inline
heuristic: it estimates the number of case clusters, applying the
suitability checks for a jump table or bit test. Considering the binary
search tree built for the clusters, this change modifies the model to be
linear with the size of the balanced binary tree. The model is off by
default for now:
-inline-generic-switch-cost=false
This change was originally proposed by Haicheng in D29870.
Reviewers: hans, bmakam, chandlerc, eraman, haicheng, mcrosier
Reviewed By: hans
Subscribers: joerg, aemerson, llvm-commits, rengolin
Differential Revision: https://reviews.llvm.org/D31085
llvm-svn: 301649
2017-04-28 18:04:03 +02:00
}

InstructionCost
TargetTransformInfo::getUserCost(const User *U,
                                 ArrayRef<const Value *> Operands,
                                 enum TargetCostKind CostKind) const {
  InstructionCost Cost = TTIImpl->getUserCost(U, Operands, CostKind);
  assert((CostKind == TTI::TCK_RecipThroughput || Cost >= 0) &&
         "TTI should not produce negative costs!");
  return Cost;
}

BranchProbability TargetTransformInfo::getPredictableBranchThreshold() const {
  return TTIImpl->getPredictableBranchThreshold();
}

bool TargetTransformInfo::hasBranchDivergence() const {
  return TTIImpl->hasBranchDivergence();
}

Resubmit: [DA][TTI][AMDGPU] Add option to select GPUDA with TTI
Summary:
Enable the new divergence analysis by default for AMDGPU.
Resubmit with test updates since GPUDA was causing failures on Windows.
Reviewers: rampitec, nhaehnle, arsenm, thakis
Subscribers: kzhuravl, jvesely, wdng, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D73315
2020-01-20 16:25:20 +01:00
bool TargetTransformInfo::useGPUDivergenceAnalysis() const {
  return TTIImpl->useGPUDivergenceAnalysis();
}

Divergence analysis for GPU programs
Summary:
Some optimizations such as jump threading and loop unswitching can negatively
affect performance when applied to divergent branches. The divergence analysis
added in this patch conservatively estimates which branches in a GPU program
can diverge. This information can then help LLVM to run certain optimizations
selectively.
Test Plan: test/Analysis/DivergenceAnalysis/NVPTX/diverge.ll
Reviewers: resistor, hfinkel, eliben, meheff, jholewinski
Subscribers: broune, bjarke.roune, madhur13490, tstellarAMD, dberlin, echristo, jholewinski, llvm-commits
Differential Revision: http://reviews.llvm.org/D8576
llvm-svn: 234567
2015-04-10 07:03:50 +02:00
bool TargetTransformInfo::isSourceOfDivergence(const Value *V) const {
  return TTIImpl->isSourceOfDivergence(V);
}

bool llvm::TargetTransformInfo::isAlwaysUniform(const Value *V) const {
  return TTIImpl->isAlwaysUniform(V);
}

unsigned TargetTransformInfo::getFlatAddressSpace() const {
  return TTIImpl->getFlatAddressSpace();
}

bool TargetTransformInfo::collectFlatAddressOperands(
    SmallVectorImpl<int> &OpIndexes, Intrinsic::ID IID) const {
  return TTIImpl->collectFlatAddressOperands(OpIndexes, IID);
}

bool TargetTransformInfo::isNoopAddrSpaceCast(unsigned FromAS,
                                              unsigned ToAS) const {
  return TTIImpl->isNoopAddrSpaceCast(FromAS, ToAS);
}

unsigned TargetTransformInfo::getAssumedAddrSpace(const Value *V) const {
  return TTIImpl->getAssumedAddrSpace(V);
}

Value *TargetTransformInfo::rewriteIntrinsicWithAddressSpace(
    IntrinsicInst *II, Value *OldV, Value *NewV) const {
  return TTIImpl->rewriteIntrinsicWithAddressSpace(II, OldV, NewV);
}

bool TargetTransformInfo::isLoweredToCall(const Function *F) const {
  return TTIImpl->isLoweredToCall(F);
}

bool TargetTransformInfo::isHardwareLoopProfitable(
    Loop *L, ScalarEvolution &SE, AssumptionCache &AC,
    TargetLibraryInfo *LibInfo, HardwareLoopInfo &HWLoopInfo) const {
  return TTIImpl->isHardwareLoopProfitable(L, SE, AC, LibInfo, HWLoopInfo);
}

bool TargetTransformInfo::preferPredicateOverEpilogue(
    Loop *L, LoopInfo *LI, ScalarEvolution &SE, AssumptionCache &AC,
    TargetLibraryInfo *TLI, DominatorTree *DT,
    const LoopAccessInfo *LAI) const {
  return TTIImpl->preferPredicateOverEpilogue(L, LI, SE, AC, TLI, DT, LAI);
}

bool TargetTransformInfo::emitGetActiveLaneMask() const {
  return TTIImpl->emitGetActiveLaneMask();
}

Optional<Instruction *>
TargetTransformInfo::instCombineIntrinsic(InstCombiner &IC,
                                          IntrinsicInst &II) const {
  return TTIImpl->instCombineIntrinsic(IC, II);
}

Optional<Value *> TargetTransformInfo::simplifyDemandedUseBitsIntrinsic(
    InstCombiner &IC, IntrinsicInst &II, APInt DemandedMask, KnownBits &Known,
    bool &KnownBitsComputed) const {
  return TTIImpl->simplifyDemandedUseBitsIntrinsic(IC, II, DemandedMask, Known,
                                                   KnownBitsComputed);
}

Optional<Value *> TargetTransformInfo::simplifyDemandedVectorEltsIntrinsic(
    InstCombiner &IC, IntrinsicInst &II, APInt DemandedElts, APInt &UndefElts,
    APInt &UndefElts2, APInt &UndefElts3,
    std::function<void(Instruction *, unsigned, APInt, APInt &)>
        SimplifyAndSetOp) const {
  return TTIImpl->simplifyDemandedVectorEltsIntrinsic(
      IC, II, DemandedElts, UndefElts, UndefElts2, UndefElts3,
      SimplifyAndSetOp);
}

[LoopUnroll] Pass SCEV to getUnrollingPreferences hook. NFCI.
Reviewers: sanjoy, anna, reames, apilipenko, igor-laevsky, mkuper
Subscribers: jholewinski, arsenm, mzolotukhin, nemanjai, nhaehnle, javed.absar, mcrosier, llvm-commits
Differential Revision: https://reviews.llvm.org/D34531
llvm-svn: 306554
2017-06-28 17:53:17 +02:00

void TargetTransformInfo::getUnrollingPreferences(
    Loop *L, ScalarEvolution &SE, UnrollingPreferences &UP) const {
  return TTIImpl->getUnrollingPreferences(L, SE, UP);
}

[NFC] Separate Peeling Properties into its own struct (re-land after minor fix)
Summary:
This patch separates the peeling specific parameters from the UnrollingPreferences,
and creates a new struct called PeelingPreferences. Functions which used the
UnrollingPreferences struct for peeling have been updated to use the PeelingPreferences struct.
Author: sidbav (Sidharth Baveja)
Reviewers: Whitney (Whitney Tsang), Meinersbur (Michael Kruse), skatkov (Serguei Katkov), ashlykov (Arkady Shlykov), bogner (Justin Bogner), hfinkel (Hal Finkel), anhtuyen (Anh Tuyen Tran), nikic (Nikita Popov)
Reviewed By: Meinersbur (Michael Kruse)
Subscribers: fhahn (Florian Hahn), hiraditya (Aditya Kumar), llvm-commits, LLVM
Tag: LLVM
Differential Revision: https://reviews.llvm.org/D80580
2020-07-10 20:38:08 +02:00
void TargetTransformInfo::getPeelingPreferences(Loop *L, ScalarEvolution &SE,
                                                PeelingPreferences &PP) const {
  return TTIImpl->getPeelingPreferences(L, SE, PP);
}

bool TargetTransformInfo::isLegalAddImmediate(int64_t Imm) const {
  return TTIImpl->isLegalAddImmediate(Imm);
}

bool TargetTransformInfo::isLegalICmpImmediate(int64_t Imm) const {
|
[PM] Change the core design of the TTI analysis to use a polymorphic
type erased interface and a single analysis pass rather than an
extremely complex analysis group.
The end result is that the TTI analysis can contain a type erased
implementation that supports the polymorphic TTI interface. We can build
one from a target-specific implementation or from a dummy one in the IR.
I've also factored all of the code into "mix-in"-able base classes,
including CRTP base classes to facilitate calling back up to the most
specialized form when delegating horizontally across the surface. These
aren't as clean as I would like and I'm planning to work on cleaning
some of this up, but I wanted to start by putting into the right form.
There are a number of reasons for this change, and this particular
design. The first and foremost reason is that an analysis group is
complete overkill, and the chaining delegation strategy was so opaque,
confusing, and high overhead that TTI was suffering greatly for it.
Several of the TTI functions had failed to be implemented in all places
because of the chaining-based delegation making there be no checking of
this. A few other functions were implemented with incorrect delegation.
The message to me was very clear working on this -- the delegation and
analysis group structure was too confusing to be useful here.
The other reason of course is that this is *much* more natural fit for
the new pass manager. This will lay the ground work for a type-erased
per-function info object that can look up the correct subtarget and even
cache it.
Yet another benefit is that this will significantly simplify the
interaction of the pass managers and the TargetMachine. See the future
work below.
The downside of this change is that it is very, very verbose. I'm going
to work to improve that, but it is somewhat an implementation necessity
in C++ to do type erasure. =/ I discussed this design really extensively
with Eric and Hal prior to going down this path, and afterward showed
them the result. No one was really thrilled with it, but there doesn't
seem to be a substantially better alternative. Using a base class and
virtual method dispatch would make the code much shorter, but as
discussed in the update to the programmer's manual and elsewhere,
a polymorphic interface feels like the more principled approach even if
this is perhaps the least compelling example of it. ;]
Ultimately, there is still a lot more to be done here, but this was the
huge chunk that I couldn't really split things out of because this was
the interface change to TTI. I've tried to minimize all the other parts
of this. The follow up work should include at least:
1) Improving the TargetMachine interface by having it directly return
a TTI object. Because we have a non-pass object with value semantics
and an internal type erasure mechanism, we can narrow the interface
of the TargetMachine to *just* do what we need: build and return
a TTI object that we can then insert into the pass pipeline.
2) Make the TTI object be fully specialized for a particular function.
This will include splitting off a minimal form of it which is
sufficient for the inliner and the old pass manager.
3) Add a new pass manager analysis which produces TTI objects from the
target machine for each function. This may actually be done as part
of #2 in order to use the new analysis to implement #2.
4) Work on narrowing the API between TTI and the targets so that it is
easier to understand and less verbose to type erase.
5) Work on narrowing the API between TTI and its clients so that it is
easier to understand and less verbose to forward.
6) Try to improve the CRTP-based delegation. I feel like this code is
just a bit messy and exacerbating the complexity of implementing
the TTI in each target.
Many thanks to Eric and Hal for their help here. I ended up blocked on
this somewhat more abruptly than I expected, and so I appreciate getting
it sorted out very quickly.
Differential Revision: http://reviews.llvm.org/D7293
llvm-svn: 227669
2015-01-31 04:43:40 +01:00
  return TTIImpl->isLegalICmpImmediate(Imm);
}
bool TargetTransformInfo::isLegalAddressingMode(Type *Ty, GlobalValue *BaseGV,
                                                int64_t BaseOffset,
                                                bool HasBaseReg, int64_t Scale,
                                                unsigned AddrSpace,
                                                Instruction *I) const {
  return TTIImpl->isLegalAddressingMode(Ty, BaseGV, BaseOffset, HasBaseReg,
                                        Scale, AddrSpace, I);
}

bool TargetTransformInfo::isLSRCostLess(LSRCost &C1, LSRCost &C2) const {
  return TTIImpl->isLSRCostLess(C1, C2);
}

bool TargetTransformInfo::isNumRegsMajorCostOfLSR() const {
  return TTIImpl->isNumRegsMajorCostOfLSR();
}

bool TargetTransformInfo::isProfitableLSRChainElement(Instruction *I) const {
  return TTIImpl->isProfitableLSRChainElement(I);
}

bool TargetTransformInfo::canMacroFuseCmp() const {
  return TTIImpl->canMacroFuseCmp();
}

bool TargetTransformInfo::canSaveCmp(Loop *L, BranchInst **BI,
                                     ScalarEvolution *SE, LoopInfo *LI,
                                     DominatorTree *DT, AssumptionCache *AC,
                                     TargetLibraryInfo *LibInfo) const {
  return TTIImpl->canSaveCmp(L, BI, SE, LI, DT, AC, LibInfo);
}

TTI::AddressingModeKind
TargetTransformInfo::getPreferredAddressingMode(const Loop *L,
                                                ScalarEvolution *SE) const {
  return TTIImpl->getPreferredAddressingMode(L, SE);
}

bool TargetTransformInfo::isLegalMaskedStore(Type *DataType,
                                             Align Alignment) const {
  return TTIImpl->isLegalMaskedStore(DataType, Alignment);
}

bool TargetTransformInfo::isLegalMaskedLoad(Type *DataType,
                                            Align Alignment) const {
  return TTIImpl->isLegalMaskedLoad(DataType, Alignment);
}

bool TargetTransformInfo::isLegalNTStore(Type *DataType,
                                         Align Alignment) const {
  return TTIImpl->isLegalNTStore(DataType, Alignment);
}

bool TargetTransformInfo::isLegalNTLoad(Type *DataType, Align Alignment) const {
  return TTIImpl->isLegalNTLoad(DataType, Alignment);
}

bool TargetTransformInfo::isLegalMaskedGather(Type *DataType,
                                              Align Alignment) const {
  return TTIImpl->isLegalMaskedGather(DataType, Alignment);
}

bool TargetTransformInfo::isLegalMaskedScatter(Type *DataType,
                                               Align Alignment) const {
  return TTIImpl->isLegalMaskedScatter(DataType, Alignment);
}

bool TargetTransformInfo::isLegalMaskedCompressStore(Type *DataType) const {
  return TTIImpl->isLegalMaskedCompressStore(DataType);
}

bool TargetTransformInfo::isLegalMaskedExpandLoad(Type *DataType) const {
  return TTIImpl->isLegalMaskedExpandLoad(DataType);
}

bool TargetTransformInfo::hasDivRemOp(Type *DataType, bool IsSigned) const {
  return TTIImpl->hasDivRemOp(DataType, IsSigned);
}

bool TargetTransformInfo::hasVolatileVariant(Instruction *I,
                                             unsigned AddrSpace) const {
  return TTIImpl->hasVolatileVariant(I, AddrSpace);
}

bool TargetTransformInfo::prefersVectorizedAddressing() const {
  return TTIImpl->prefersVectorizedAddressing();
}

InstructionCost TargetTransformInfo::getScalingFactorCost(
    Type *Ty, GlobalValue *BaseGV, int64_t BaseOffset, bool HasBaseReg,
    int64_t Scale, unsigned AddrSpace) const {
  InstructionCost Cost = TTIImpl->getScalingFactorCost(
      Ty, BaseGV, BaseOffset, HasBaseReg, Scale, AddrSpace);
  assert(Cost >= 0 && "TTI should not produce negative costs!");
  return Cost;
}

bool TargetTransformInfo::LSRWithInstrQueries() const {
  return TTIImpl->LSRWithInstrQueries();
}

bool TargetTransformInfo::isTruncateFree(Type *Ty1, Type *Ty2) const {
  return TTIImpl->isTruncateFree(Ty1, Ty2);
}

bool TargetTransformInfo::isProfitableToHoist(Instruction *I) const {
  return TTIImpl->isProfitableToHoist(I);
}

bool TargetTransformInfo::useAA() const { return TTIImpl->useAA(); }

bool TargetTransformInfo::isTypeLegal(Type *Ty) const {
  return TTIImpl->isTypeLegal(Ty);
}

InstructionCost TargetTransformInfo::getRegUsageForType(Type *Ty) const {
  return TTIImpl->getRegUsageForType(Ty);
}

bool TargetTransformInfo::shouldBuildLookupTables() const {
  return TTIImpl->shouldBuildLookupTables();
}

bool TargetTransformInfo::shouldBuildLookupTablesForConstant(
    Constant *C) const {
  return TTIImpl->shouldBuildLookupTablesForConstant(C);
}

bool TargetTransformInfo::shouldBuildRelLookupTables() const {
  return TTIImpl->shouldBuildRelLookupTables();
}

bool TargetTransformInfo::useColdCCForColdCall(Function &F) const {
  return TTIImpl->useColdCCForColdCall(F);
}

InstructionCost
TargetTransformInfo::getScalarizationOverhead(VectorType *Ty,
                                              const APInt &DemandedElts,
                                              bool Insert, bool Extract) const {
  return TTIImpl->getScalarizationOverhead(Ty, DemandedElts, Insert, Extract);
}

InstructionCost TargetTransformInfo::getOperandsScalarizationOverhead(
    ArrayRef<const Value *> Args, ArrayRef<Type *> Tys) const {
  return TTIImpl->getOperandsScalarizationOverhead(Args, Tys);
}

bool TargetTransformInfo::supportsEfficientVectorElementLoadStore() const {
  return TTIImpl->supportsEfficientVectorElementLoadStore();
}

bool TargetTransformInfo::enableAggressiveInterleaving(
    bool LoopHasReductions) const {
  return TTIImpl->enableAggressiveInterleaving(LoopHasReductions);
}

TargetTransformInfo::MemCmpExpansionOptions
TargetTransformInfo::enableMemCmpExpansion(bool OptSize, bool IsZeroCmp) const {
  return TTIImpl->enableMemCmpExpansion(OptSize, IsZeroCmp);
}

bool TargetTransformInfo::enableInterleavedAccessVectorization() const {
  return TTIImpl->enableInterleavedAccessVectorization();
}

bool TargetTransformInfo::enableMaskedInterleavedAccessVectorization() const {
  return TTIImpl->enableMaskedInterleavedAccessVectorization();
}

bool TargetTransformInfo::isFPVectorizationPotentiallyUnsafe() const {
  return TTIImpl->isFPVectorizationPotentiallyUnsafe();
}

bool TargetTransformInfo::allowsMisalignedMemoryAccesses(LLVMContext &Context,
                                                         unsigned BitWidth,
                                                         unsigned AddressSpace,
                                                         Align Alignment,
                                                         bool *Fast) const {
  return TTIImpl->allowsMisalignedMemoryAccesses(Context, BitWidth,
                                                 AddressSpace, Alignment, Fast);
}

TargetTransformInfo::PopcntSupportKind
TargetTransformInfo::getPopcntSupport(unsigned IntTyWidthInBit) const {
easier to understand and less verbose to forward.
6) Try to improve the CRTP-based delegation. I feel like this code is
just a bit messy and exacerbating the complexity of implementing
the TTI in each target.
Many thanks to Eric and Hal for their help here. I ended up blocked on
this somewhat more abruptly than I expected, and so I appreciate getting
it sorted out very quickly.
Differential Revision: http://reviews.llvm.org/D7293
llvm-svn: 227669
2015-01-31 04:43:40 +01:00
|
|
|
return TTIImpl->getPopcntSupport(IntTyWidthInBit);
|
2013-01-05 12:43:11 +01:00
|
|
|
}
|
|
|
|
|
bool TargetTransformInfo::haveFastSqrt(Type *Ty) const {
  return TTIImpl->haveFastSqrt(Ty);
}

bool TargetTransformInfo::isFCmpOrdCheaperThanFCmpZero(Type *Ty) const {
  return TTIImpl->isFCmpOrdCheaperThanFCmpZero(Ty);
}

InstructionCost TargetTransformInfo::getFPOpCost(Type *Ty) const {
  InstructionCost Cost = TTIImpl->getFPOpCost(Ty);
  assert(Cost >= 0 && "TTI should not produce negative costs!");
  return Cost;
}

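Nearly every cost accessor in this file has the same shape: forward the query to the type-erased `TTIImpl` object and assert that the returned cost is non-negative before handing it back. A minimal standalone sketch of that facade-over-type-erasure pattern follows; `Concept`, `SimpleModel`, and `Facade` are illustrative stand-ins, not LLVM types.

```cpp
#include <cassert>
#include <memory>

// Stand-in for the abstract, type-erased TTI concept.
struct Concept {
  virtual ~Concept() = default;
  virtual int getFPOpCost() const = 0;
};

// Stand-in for one concrete (e.g. target-specific) model.
struct SimpleModel : Concept {
  int getFPOpCost() const override { return 4; } // hypothetical cost
};

// The facade forwards to the erased implementation and sanity-checks
// the result, mirroring the wrappers in TargetTransformInfo.
struct Facade {
  std::unique_ptr<Concept> Impl = std::make_unique<SimpleModel>();
  int getFPOpCost() const {
    int Cost = Impl->getFPOpCost();
    assert(Cost >= 0 && "should not produce negative costs");
    return Cost;
  }
};
```

The assertion centralizes an invariant (costs are non-negative) that every backing implementation must uphold, which is why it appears in the facade rather than in each model.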
InstructionCost TargetTransformInfo::getIntImmCodeSizeCost(unsigned Opcode,
                                                           unsigned Idx,
                                                           const APInt &Imm,
                                                           Type *Ty) const {
  InstructionCost Cost = TTIImpl->getIntImmCodeSizeCost(Opcode, Idx, Imm, Ty);
  assert(Cost >= 0 && "TTI should not produce negative costs!");
  return Cost;
}

InstructionCost
TargetTransformInfo::getIntImmCost(const APInt &Imm, Type *Ty,
                                   TTI::TargetCostKind CostKind) const {
  InstructionCost Cost = TTIImpl->getIntImmCost(Imm, Ty, CostKind);
  assert(Cost >= 0 && "TTI should not produce negative costs!");
  return Cost;
}

InstructionCost TargetTransformInfo::getIntImmCostInst(
    unsigned Opcode, unsigned Idx, const APInt &Imm, Type *Ty,
    TTI::TargetCostKind CostKind, Instruction *Inst) const {
  InstructionCost Cost =
      TTIImpl->getIntImmCostInst(Opcode, Idx, Imm, Ty, CostKind, Inst);
  assert(Cost >= 0 && "TTI should not produce negative costs!");
  return Cost;
}

InstructionCost
TargetTransformInfo::getIntImmCostIntrin(Intrinsic::ID IID, unsigned Idx,
                                         const APInt &Imm, Type *Ty,
                                         TTI::TargetCostKind CostKind) const {
  InstructionCost Cost =
      TTIImpl->getIntImmCostIntrin(IID, Idx, Imm, Ty, CostKind);
  assert(Cost >= 0 && "TTI should not produce negative costs!");
  return Cost;
}

unsigned TargetTransformInfo::getNumberOfRegisters(unsigned ClassID) const {
  return TTIImpl->getNumberOfRegisters(ClassID);
}

unsigned TargetTransformInfo::getRegisterClassForType(bool Vector,
                                                      Type *Ty) const {
  return TTIImpl->getRegisterClassForType(Vector, Ty);
}

const char *TargetTransformInfo::getRegisterClassName(unsigned ClassID) const {
  return TTIImpl->getRegisterClassName(ClassID);
}

TypeSize TargetTransformInfo::getRegisterBitWidth(
    TargetTransformInfo::RegisterKind K) const {
  return TTIImpl->getRegisterBitWidth(K);
}

unsigned TargetTransformInfo::getMinVectorRegisterBitWidth() const {
  return TTIImpl->getMinVectorRegisterBitWidth();
}

Optional<unsigned> TargetTransformInfo::getMaxVScale() const {
  return TTIImpl->getMaxVScale();
}

bool TargetTransformInfo::shouldMaximizeVectorBandwidth() const {
  return TTIImpl->shouldMaximizeVectorBandwidth();
}

ElementCount TargetTransformInfo::getMinimumVF(unsigned ElemWidth,
                                               bool IsScalable) const {
  return TTIImpl->getMinimumVF(ElemWidth, IsScalable);
}

unsigned TargetTransformInfo::getMaximumVF(unsigned ElemWidth,
                                           unsigned Opcode) const {
  return TTIImpl->getMaximumVF(ElemWidth, Opcode);
}

bool TargetTransformInfo::shouldConsiderAddressTypePromotion(
    const Instruction &I, bool &AllowPromotionWithoutCommonHeader) const {
  return TTIImpl->shouldConsiderAddressTypePromotion(
      I, AllowPromotionWithoutCommonHeader);
}

unsigned TargetTransformInfo::getCacheLineSize() const {
  return TTIImpl->getCacheLineSize();
}

llvm::Optional<unsigned>
TargetTransformInfo::getCacheSize(CacheLevel Level) const {
  return TTIImpl->getCacheSize(Level);
}

llvm::Optional<unsigned>
TargetTransformInfo::getCacheAssociativity(CacheLevel Level) const {
  return TTIImpl->getCacheAssociativity(Level);
}

unsigned TargetTransformInfo::getPrefetchDistance() const {
  return TTIImpl->getPrefetchDistance();
}

unsigned TargetTransformInfo::getMinPrefetchStride(
    unsigned NumMemAccesses, unsigned NumStridedMemAccesses,
    unsigned NumPrefetches, bool HasCall) const {
  return TTIImpl->getMinPrefetchStride(NumMemAccesses, NumStridedMemAccesses,
                                       NumPrefetches, HasCall);
}

unsigned TargetTransformInfo::getMaxPrefetchIterationsAhead() const {
  return TTIImpl->getMaxPrefetchIterationsAhead();
}

bool TargetTransformInfo::enableWritePrefetching() const {
  return TTIImpl->enableWritePrefetching();
}

unsigned TargetTransformInfo::getMaxInterleaveFactor(unsigned VF) const {
  return TTIImpl->getMaxInterleaveFactor(VF);
}

TargetTransformInfo::OperandValueKind
TargetTransformInfo::getOperandInfo(const Value *V,
                                    OperandValueProperties &OpProps) {
  OperandValueKind OpInfo = OK_AnyValue;
  OpProps = OP_None;

  if (const auto *CI = dyn_cast<ConstantInt>(V)) {
    if (CI->getValue().isPowerOf2())
      OpProps = OP_PowerOf2;
    return OK_UniformConstantValue;
  }

  // A broadcast shuffle creates a uniform value.
  // TODO: Add support for non-zero index broadcasts.
  // TODO: Add support for different source vector width.
  if (const auto *ShuffleInst = dyn_cast<ShuffleVectorInst>(V))
    if (ShuffleInst->isZeroEltSplat())
      OpInfo = OK_UniformValue;

  const Value *Splat = getSplatValue(V);

  // Check for a splat of a constant or for a non uniform vector of constants
  // and check if the constant(s) are all powers of two.
  if (isa<ConstantVector>(V) || isa<ConstantDataVector>(V)) {
    OpInfo = OK_NonUniformConstantValue;
    if (Splat) {
      OpInfo = OK_UniformConstantValue;
      if (auto *CI = dyn_cast<ConstantInt>(Splat))
        if (CI->getValue().isPowerOf2())
          OpProps = OP_PowerOf2;
    } else if (const auto *CDS = dyn_cast<ConstantDataSequential>(V)) {
      OpProps = OP_PowerOf2;
      for (unsigned I = 0, E = CDS->getNumElements(); I != E; ++I) {
        if (auto *CI = dyn_cast<ConstantInt>(CDS->getElementAsConstant(I)))
          if (CI->getValue().isPowerOf2())
            continue;
        OpProps = OP_None;
        break;
      }
    }
  }

  // Check for a splat of a uniform value. This is not loop aware, so return
  // true only for the obviously uniform cases (argument, globalvalue)
  if (Splat && (isa<Argument>(Splat) || isa<GlobalValue>(Splat)))
    OpInfo = OK_UniformValue;

  return OpInfo;
}

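The key per-element property that getOperandInfo tracks is whether a constant operand is a power of two (LLVM uses `APInt::isPowerOf2` for this). A self-contained sketch of that classification over plain 64-bit values — `isPowerOf2` and `classifyConstant` are illustrative helpers, not LLVM API:

```cpp
#include <cassert>
#include <cstdint>

// Classic bit trick: a nonzero value is a power of two exactly when
// clearing its lowest set bit yields zero.
bool isPowerOf2(uint64_t V) { return V != 0 && (V & (V - 1)) == 0; }

// Mirrors the OP_PowerOf2 / OP_None property set on constant operands.
enum OperandProp { OP_None, OP_PowerOf2 };

OperandProp classifyConstant(uint64_t V) {
  return isPowerOf2(V) ? OP_PowerOf2 : OP_None;
}
```

Targets care about this property because multiplies and divides by a power-of-two constant can often be strength-reduced to shifts, which changes the cost the model should report.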
InstructionCost TargetTransformInfo::getArithmeticInstrCost(
    unsigned Opcode, Type *Ty, TTI::TargetCostKind CostKind,
    OperandValueKind Opd1Info, OperandValueKind Opd2Info,
    OperandValueProperties Opd1PropInfo, OperandValueProperties Opd2PropInfo,
    ArrayRef<const Value *> Args, const Instruction *CxtI) const {
  InstructionCost Cost =
      TTIImpl->getArithmeticInstrCost(Opcode, Ty, CostKind, Opd1Info, Opd2Info,
                                      Opd1PropInfo, Opd2PropInfo, Args, CxtI);
  assert(Cost >= 0 && "TTI should not produce negative costs!");
  return Cost;
}

InstructionCost TargetTransformInfo::getShuffleCost(ShuffleKind Kind,
                                                    VectorType *Ty,
                                                    ArrayRef<int> Mask,
                                                    int Index,
                                                    VectorType *SubTp) const {
  InstructionCost Cost = TTIImpl->getShuffleCost(Kind, Ty, Mask, Index, SubTp);
  assert(Cost >= 0 && "TTI should not produce negative costs!");
  return Cost;
}

TTI::CastContextHint
TargetTransformInfo::getCastContextHint(const Instruction *I) {
  if (!I)
    return CastContextHint::None;

  auto getLoadStoreKind = [](const Value *V, unsigned LdStOp, unsigned MaskedOp,
                             unsigned GatScatOp) {
    const Instruction *I = dyn_cast<Instruction>(V);
    if (!I)
      return CastContextHint::None;

    if (I->getOpcode() == LdStOp)
      return CastContextHint::Normal;

    if (const IntrinsicInst *II = dyn_cast<IntrinsicInst>(I)) {
      if (II->getIntrinsicID() == MaskedOp)
        return TTI::CastContextHint::Masked;
      if (II->getIntrinsicID() == GatScatOp)
        return TTI::CastContextHint::GatherScatter;
    }

    return TTI::CastContextHint::None;
  };

  switch (I->getOpcode()) {
  case Instruction::ZExt:
  case Instruction::SExt:
  case Instruction::FPExt:
    return getLoadStoreKind(I->getOperand(0), Instruction::Load,
                            Intrinsic::masked_load, Intrinsic::masked_gather);
  case Instruction::Trunc:
  case Instruction::FPTrunc:
    if (I->hasOneUse())
      return getLoadStoreKind(*I->user_begin(), Instruction::Store,
                              Intrinsic::masked_store,
                              Intrinsic::masked_scatter);
    break;
  default:
    return CastContextHint::None;
  }

  return TTI::CastContextHint::None;
}

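The dispatch in getCastContextHint reduces to a small mapping: an extend is classified by the kind of load feeding it, and a truncate by the kind of store consuming it. A standalone sketch of that mapping, with illustrative enums in place of LLVM's opcodes and intrinsic IDs:

```cpp
#include <cassert>

// Illustrative stand-ins for TTI::CastContextHint and the memory-op kinds
// that getLoadStoreKind distinguishes in the function above.
enum class Hint { None, Normal, Masked, GatherScatter };
enum class MemOp { PlainLoadStore, Masked, GatherScatter, Other };

// Classify a cast by the memory operation it is fused with.
Hint classifyCastContext(MemOp Src) {
  switch (Src) {
  case MemOp::PlainLoadStore:
    return Hint::Normal;
  case MemOp::Masked:
    return Hint::Masked;
  case MemOp::GatherScatter:
    return Hint::GatherScatter;
  default:
    return Hint::None; // cast not paired with a memory access
  }
}
```

The hint matters because, for example, a sext of a plain load may be free on targets with extending loads, while the same sext of a gather result is not.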
InstructionCost TargetTransformInfo::getCastInstrCost(
    unsigned Opcode, Type *Dst, Type *Src, CastContextHint CCH,
    TTI::TargetCostKind CostKind, const Instruction *I) const {
  assert((I == nullptr || I->getOpcode() == Opcode) &&
         "Opcode should reflect passed instruction.");
  InstructionCost Cost =
      TTIImpl->getCastInstrCost(Opcode, Dst, Src, CCH, CostKind, I);
  assert(Cost >= 0 && "TTI should not produce negative costs!");
  return Cost;
}

InstructionCost TargetTransformInfo::getExtractWithExtendCost(
    unsigned Opcode, Type *Dst, VectorType *VecTy, unsigned Index) const {
  InstructionCost Cost =
      TTIImpl->getExtractWithExtendCost(Opcode, Dst, VecTy, Index);
  assert(Cost >= 0 && "TTI should not produce negative costs!");
  return Cost;
}

InstructionCost TargetTransformInfo::getCFInstrCost(
    unsigned Opcode, TTI::TargetCostKind CostKind, const Instruction *I) const {
  assert((I == nullptr || I->getOpcode() == Opcode) &&
         "Opcode should reflect passed instruction.");
  InstructionCost Cost = TTIImpl->getCFInstrCost(Opcode, CostKind, I);
  assert(Cost >= 0 && "TTI should not produce negative costs!");
  return Cost;
}

InstructionCost TargetTransformInfo::getCmpSelInstrCost(
    unsigned Opcode, Type *ValTy, Type *CondTy, CmpInst::Predicate VecPred,
    TTI::TargetCostKind CostKind, const Instruction *I) const {
  assert((I == nullptr || I->getOpcode() == Opcode) &&
         "Opcode should reflect passed instruction.");
  InstructionCost Cost =
      TTIImpl->getCmpSelInstrCost(Opcode, ValTy, CondTy, VecPred, CostKind, I);
  assert(Cost >= 0 && "TTI should not produce negative costs!");
  return Cost;
}

InstructionCost TargetTransformInfo::getVectorInstrCost(unsigned Opcode,
                                                        Type *Val,
                                                        unsigned Index) const {
  InstructionCost Cost = TTIImpl->getVectorInstrCost(Opcode, Val, Index);
  assert(Cost >= 0 && "TTI should not produce negative costs!");
  return Cost;
}

InstructionCost TargetTransformInfo::getMemoryOpCost(
    unsigned Opcode, Type *Src, Align Alignment, unsigned AddressSpace,
    TTI::TargetCostKind CostKind, const Instruction *I) const {
  assert((I == nullptr || I->getOpcode() == Opcode) &&
         "Opcode should reflect passed instruction.");
  InstructionCost Cost = TTIImpl->getMemoryOpCost(Opcode, Src, Alignment,
                                                  AddressSpace, CostKind, I);
  assert(Cost >= 0 && "TTI should not produce negative costs!");
  return Cost;
}

InstructionCost TargetTransformInfo::getMaskedMemoryOpCost(
    unsigned Opcode, Type *Src, Align Alignment, unsigned AddressSpace,
    TTI::TargetCostKind CostKind) const {
  InstructionCost Cost = TTIImpl->getMaskedMemoryOpCost(Opcode, Src, Alignment,
                                                        AddressSpace, CostKind);
  assert(Cost >= 0 && "TTI should not produce negative costs!");
  return Cost;
}

InstructionCost TargetTransformInfo::getGatherScatterOpCost(
    unsigned Opcode, Type *DataTy, const Value *Ptr, bool VariableMask,
    Align Alignment, TTI::TargetCostKind CostKind, const Instruction *I) const {
  InstructionCost Cost = TTIImpl->getGatherScatterOpCost(
      Opcode, DataTy, Ptr, VariableMask, Alignment, CostKind, I);
  assert(Cost >= 0 && "TTI should not produce negative costs!");
  return Cost;
}

InstructionCost TargetTransformInfo::getInterleavedMemoryOpCost(
    unsigned Opcode, Type *VecTy, unsigned Factor, ArrayRef<unsigned> Indices,
    Align Alignment, unsigned AddressSpace, TTI::TargetCostKind CostKind,
    bool UseMaskForCond, bool UseMaskForGaps) const {
  InstructionCost Cost = TTIImpl->getInterleavedMemoryOpCost(
      Opcode, VecTy, Factor, Indices, Alignment, AddressSpace, CostKind,
      UseMaskForCond, UseMaskForGaps);
  assert(Cost >= 0 && "TTI should not produce negative costs!");
  return Cost;
store <8 x i32> %interleaved.vec, <8 x i32>* %ptr
This optimization is currently disabled by defaut. To try it by adding '-enable-interleaved-mem-accesses=true'.
llvm-svn: 239291
2015-06-08 08:39:56 +02:00
|
|
|
}

InstructionCost
TargetTransformInfo::getIntrinsicInstrCost(const IntrinsicCostAttributes &ICA,
                                           TTI::TargetCostKind CostKind) const {
  InstructionCost Cost = TTIImpl->getIntrinsicInstrCost(ICA, CostKind);
  assert(Cost >= 0 && "TTI should not produce negative costs!");
  return Cost;
}

InstructionCost
TargetTransformInfo::getCallInstrCost(Function *F, Type *RetTy,
                                      ArrayRef<Type *> Tys,
                                      TTI::TargetCostKind CostKind) const {
  InstructionCost Cost = TTIImpl->getCallInstrCost(F, RetTy, Tys, CostKind);
  assert(Cost >= 0 && "TTI should not produce negative costs!");
  return Cost;
}

unsigned TargetTransformInfo::getNumberOfParts(Type *Tp) const {
  return TTIImpl->getNumberOfParts(Tp);
}

InstructionCost
TargetTransformInfo::getAddressComputationCost(Type *Tp, ScalarEvolution *SE,
                                               const SCEV *Ptr) const {
  InstructionCost Cost = TTIImpl->getAddressComputationCost(Tp, SE, Ptr);
  assert(Cost >= 0 && "TTI should not produce negative costs!");
  return Cost;
}

InstructionCost TargetTransformInfo::getMemcpyCost(const Instruction *I) const {
  InstructionCost Cost = TTIImpl->getMemcpyCost(I);
  assert(Cost >= 0 && "TTI should not produce negative costs!");
  return Cost;
}

InstructionCost TargetTransformInfo::getArithmeticReductionCost(
    unsigned Opcode, VectorType *Ty, Optional<FastMathFlags> FMF,
    TTI::TargetCostKind CostKind) const {
  InstructionCost Cost =
      TTIImpl->getArithmeticReductionCost(Opcode, Ty, FMF, CostKind);
  assert(Cost >= 0 && "TTI should not produce negative costs!");
  return Cost;
}

InstructionCost TargetTransformInfo::getMinMaxReductionCost(
    VectorType *Ty, VectorType *CondTy, bool IsUnsigned,
    TTI::TargetCostKind CostKind) const {
  InstructionCost Cost =
      TTIImpl->getMinMaxReductionCost(Ty, CondTy, IsUnsigned, CostKind);
  assert(Cost >= 0 && "TTI should not produce negative costs!");
  return Cost;
}

InstructionCost TargetTransformInfo::getExtendedAddReductionCost(
    bool IsMLA, bool IsUnsigned, Type *ResTy, VectorType *Ty,
    TTI::TargetCostKind CostKind) const {
  return TTIImpl->getExtendedAddReductionCost(IsMLA, IsUnsigned, ResTy, Ty,
                                              CostKind);
}

InstructionCost
TargetTransformInfo::getCostOfKeepingLiveOverCall(ArrayRef<Type *> Tys) const {
  return TTIImpl->getCostOfKeepingLiveOverCall(Tys);
}

bool TargetTransformInfo::getTgtMemIntrinsic(IntrinsicInst *Inst,
                                             MemIntrinsicInfo &Info) const {
  return TTIImpl->getTgtMemIntrinsic(Inst, Info);
}

unsigned TargetTransformInfo::getAtomicMemIntrinsicMaxElementSize() const {
  return TTIImpl->getAtomicMemIntrinsicMaxElementSize();
}

Value *TargetTransformInfo::getOrCreateResultFromMemIntrinsic(
    IntrinsicInst *Inst, Type *ExpectedType) const {
  return TTIImpl->getOrCreateResultFromMemIntrinsic(Inst, ExpectedType);
}
|
|
|
|
|
2020-04-15 14:43:26 +02:00
|
|
|
Type *TargetTransformInfo::getMemcpyLoopLoweringType(
|
|
|
|
LLVMContext &Context, Value *Length, unsigned SrcAddrSpace,
|
|
|
|
unsigned DestAddrSpace, unsigned SrcAlign, unsigned DestAlign) const {
|
2020-02-14 19:22:53 +01:00
|
|
|
return TTIImpl->getMemcpyLoopLoweringType(Context, Length, SrcAddrSpace,
|
2020-04-15 14:43:26 +02:00
|
|
|
DestAddrSpace, SrcAlign, DestAlign);
|
2017-07-07 04:00:06 +02:00
|
|
|
}

void TargetTransformInfo::getMemcpyLoopResidualLoweringType(
    SmallVectorImpl<Type *> &OpsOut, LLVMContext &Context,
    unsigned RemainingBytes, unsigned SrcAddrSpace, unsigned DestAddrSpace,
    unsigned SrcAlign, unsigned DestAlign) const {
  TTIImpl->getMemcpyLoopResidualLoweringType(OpsOut, Context, RemainingBytes,
                                             SrcAddrSpace, DestAddrSpace,
                                             SrcAlign, DestAlign);
}

bool TargetTransformInfo::areInlineCompatible(const Function *Caller,
                                              const Function *Callee) const {
  return TTIImpl->areInlineCompatible(Caller, Callee);
}

bool TargetTransformInfo::areFunctionArgsABICompatible(
    const Function *Caller, const Function *Callee,
    SmallPtrSetImpl<Argument *> &Args) const {
  return TTIImpl->areFunctionArgsABICompatible(Caller, Callee, Args);
}

bool TargetTransformInfo::isIndexedLoadLegal(MemIndexedMode Mode,
                                             Type *Ty) const {
  return TTIImpl->isIndexedLoadLegal(Mode, Ty);
}

bool TargetTransformInfo::isIndexedStoreLegal(MemIndexedMode Mode,
                                              Type *Ty) const {
  return TTIImpl->isIndexedStoreLegal(Mode, Ty);
}

unsigned TargetTransformInfo::getLoadStoreVecRegBitWidth(unsigned AS) const {
  return TTIImpl->getLoadStoreVecRegBitWidth(AS);
}

bool TargetTransformInfo::isLegalToVectorizeLoad(LoadInst *LI) const {
  return TTIImpl->isLegalToVectorizeLoad(LI);
}

bool TargetTransformInfo::isLegalToVectorizeStore(StoreInst *SI) const {
  return TTIImpl->isLegalToVectorizeStore(SI);
}

bool TargetTransformInfo::isLegalToVectorizeLoadChain(
    unsigned ChainSizeInBytes, Align Alignment, unsigned AddrSpace) const {
  return TTIImpl->isLegalToVectorizeLoadChain(ChainSizeInBytes, Alignment,
                                              AddrSpace);
}

bool TargetTransformInfo::isLegalToVectorizeStoreChain(
    unsigned ChainSizeInBytes, Align Alignment, unsigned AddrSpace) const {
  return TTIImpl->isLegalToVectorizeStoreChain(ChainSizeInBytes, Alignment,
                                               AddrSpace);
}

bool TargetTransformInfo::isLegalToVectorizeReduction(
    const RecurrenceDescriptor &RdxDesc, ElementCount VF) const {
  return TTIImpl->isLegalToVectorizeReduction(RdxDesc, VF);
}

bool TargetTransformInfo::isElementTypeLegalForScalableVector(Type *Ty) const {
  return TTIImpl->isElementTypeLegalForScalableVector(Ty);
}

unsigned TargetTransformInfo::getLoadVectorFactor(unsigned VF,
                                                  unsigned LoadSize,
                                                  unsigned ChainSizeInBytes,
                                                  VectorType *VecTy) const {
  return TTIImpl->getLoadVectorFactor(VF, LoadSize, ChainSizeInBytes, VecTy);
}

unsigned TargetTransformInfo::getStoreVectorFactor(unsigned VF,
                                                   unsigned StoreSize,
                                                   unsigned ChainSizeInBytes,
                                                   VectorType *VecTy) const {
  return TTIImpl->getStoreVectorFactor(VF, StoreSize, ChainSizeInBytes, VecTy);
}

bool TargetTransformInfo::preferInLoopReduction(unsigned Opcode, Type *Ty,
                                                ReductionFlags Flags) const {
  return TTIImpl->preferInLoopReduction(Opcode, Ty, Flags);
}

bool TargetTransformInfo::preferPredicatedReductionSelect(
    unsigned Opcode, Type *Ty, ReductionFlags Flags) const {
  return TTIImpl->preferPredicatedReductionSelect(Opcode, Ty, Flags);
}

TargetTransformInfo::VPLegalization
TargetTransformInfo::getVPLegalizationStrategy(const VPIntrinsic &VPI) const {
  return TTIImpl->getVPLegalizationStrategy(VPI);
}

bool TargetTransformInfo::shouldExpandReduction(const IntrinsicInst *II) const {
  return TTIImpl->shouldExpandReduction(II);
}

unsigned TargetTransformInfo::getGISelRematGlobalCost() const {
  return TTIImpl->getGISelRematGlobalCost();
}

bool TargetTransformInfo::supportsScalableVectors() const {
  return TTIImpl->supportsScalableVectors();
}

bool TargetTransformInfo::hasActiveVectorLength() const {
  return TTIImpl->hasActiveVectorLength();
}

InstructionCost
TargetTransformInfo::getInstructionLatency(const Instruction *I) const {
  return TTIImpl->getInstructionLatency(I);
}

InstructionCost
TargetTransformInfo::getInstructionThroughput(const Instruction *I) const {
  TTI::TargetCostKind CostKind = TTI::TCK_RecipThroughput;

  switch (I->getOpcode()) {
  case Instruction::GetElementPtr:
  case Instruction::Ret:
  case Instruction::PHI:
  case Instruction::Br:
  case Instruction::Add:
  case Instruction::FAdd:
  case Instruction::Sub:
  case Instruction::FSub:
  case Instruction::Mul:
  case Instruction::FMul:
  case Instruction::UDiv:
  case Instruction::SDiv:
  case Instruction::FDiv:
  case Instruction::URem:
  case Instruction::SRem:
  case Instruction::FRem:
  case Instruction::Shl:
  case Instruction::LShr:
  case Instruction::AShr:
  case Instruction::And:
  case Instruction::Or:
  case Instruction::Xor:
  case Instruction::FNeg:
  case Instruction::Select:
  case Instruction::ICmp:
  case Instruction::FCmp:
  case Instruction::Store:
  case Instruction::Load:
  case Instruction::ZExt:
  case Instruction::SExt:
  case Instruction::FPToUI:
  case Instruction::FPToSI:
  case Instruction::FPExt:
  case Instruction::PtrToInt:
  case Instruction::IntToPtr:
  case Instruction::SIToFP:
  case Instruction::UIToFP:
  case Instruction::Trunc:
  case Instruction::FPTrunc:
  case Instruction::BitCast:
  case Instruction::AddrSpaceCast:
  case Instruction::ExtractElement:
  case Instruction::InsertElement:
  case Instruction::ExtractValue:
  case Instruction::ShuffleVector:
  case Instruction::Call:
  case Instruction::Switch:
    return getUserCost(I, CostKind);
  default:
    // We don't have any information on this instruction.
    return -1;
  }
}

TargetTransformInfo::Concept::~Concept() {}

TargetIRAnalysis::TargetIRAnalysis() : TTICallback(&getDefaultTTI) {}

TargetIRAnalysis::TargetIRAnalysis(
    std::function<Result(const Function &)> TTICallback)
    : TTICallback(std::move(TTICallback)) {}

TargetIRAnalysis::Result TargetIRAnalysis::run(const Function &F,
                                               FunctionAnalysisManager &) {
  return TTICallback(F);
}

AnalysisKey TargetIRAnalysis::Key;

TargetIRAnalysis::Result TargetIRAnalysis::getDefaultTTI(const Function &F) {
  return Result(F.getParent()->getDataLayout());
}

// Register the basic pass.
INITIALIZE_PASS(TargetTransformInfoWrapperPass, "tti",
                "Target Transform Information", false, true)
char TargetTransformInfoWrapperPass::ID = 0;

void TargetTransformInfoWrapperPass::anchor() {}

TargetTransformInfoWrapperPass::TargetTransformInfoWrapperPass()
    : ImmutablePass(ID) {
  initializeTargetTransformInfoWrapperPassPass(
      *PassRegistry::getPassRegistry());
}

TargetTransformInfoWrapperPass::TargetTransformInfoWrapperPass(
    TargetIRAnalysis TIRA)
    : ImmutablePass(ID), TIRA(std::move(TIRA)) {
  initializeTargetTransformInfoWrapperPassPass(
      *PassRegistry::getPassRegistry());
}

TargetTransformInfo &TargetTransformInfoWrapperPass::getTTI(const Function &F) {
  FunctionAnalysisManager DummyFAM;
  TTI = TIRA.run(F, DummyFAM);
  return *TTI;
}

ImmutablePass *
llvm::createTargetTransformInfoWrapperPass(TargetIRAnalysis TIRA) {
  return new TargetTransformInfoWrapperPass(std::move(TIRA));
}