mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-18 10:32:48 +02:00
llvm-mirror/tools/llvm-profgen/CSPreInliner.h
Wenlei He 1b193b8bb3 [CSSPGO][llvm-profgen] Context-sensitive global pre-inliner
This change sets up a framework in llvm-profgen to estimate inline decision and adjust context-sensitive profile based on that. We call it a global pre-inliner in llvm-profgen.

It will serve two purposes:
  1) Since context profile for not inlined context will be merged into base profile, if we estimate a context will not be inlined, we can merge the context profile in the output to save profile size.
  2) For thinLTO, when a context involving functions from different modules is not inlined, we can't merge function profiles across modules, leading to suboptimal post-inline count quality. By estimating some inline decisions, we would be able to adjust/merge context profiles beforehand as a mitigation.

The compiler's inline heuristic uses inline cost, which is not available in llvm-profgen. But since inline cost is closely related to size, we can get an estimate through function size from debug info. Because the size we have in llvm-profgen is the final size, it could also be more accurate than the inline cost estimation in the compiler.

This change only has the framework, with a few TODOs left for follow up patches for a complete implementation:
  1) We need to retrieve size for function/inlinee from debug info for inlining estimation. Currently we use the number of samples in a profile as a placeholder for size estimation.
  2) Currently the thresholds are using the values used by sample loader inliner. But they need to be tuned since the size here is fully optimized machine code size, instead of inline cost based on not yet fully optimized IR.

Differential Revision: https://reviews.llvm.org/D99146
2021-03-29 09:46:14 -07:00


//===-- CSPreInliner.h - Profile guided preinliner ---------------- C++ -*-===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
#ifndef LLVM_TOOLS_LLVM_PROFGEN_PGOINLINEADVISOR_H
#define LLVM_TOOLS_LLVM_PROFGEN_PGOINLINEADVISOR_H
#include "llvm/ADT/PriorityQueue.h"
#include "llvm/ProfileData/ProfileCommon.h"
#include "llvm/ProfileData/SampleProf.h"
#include "llvm/Transforms/IPO/ProfiledCallGraph.h"
#include "llvm/Transforms/IPO/SampleContextTracker.h"
using namespace llvm;
using namespace sampleprof;
namespace llvm {
namespace sampleprof {
// Inline candidate seen from profile
struct ProfiledInlineCandidate {
  ProfiledInlineCandidate(const FunctionSamples *Samples, uint64_t Count)
      : CalleeSamples(Samples), CallsiteCount(Count),
        SizeCost(Samples->getBodySamples().size()) {}
  // Context-sensitive function profile for inline candidate
  const FunctionSamples *CalleeSamples;
  // Call site count for an inline candidate
  // TODO: make sure entry count for context profile and call site
  // target count for corresponding call are consistent.
  uint64_t CallsiteCount;
  // Size proxy for function under particular call context.
  // TODO: use post-inline callee size from debug info.
  uint64_t SizeCost;
};
// Inline candidate comparer using call site weight
struct ProfiledCandidateComparer {
  bool operator()(const ProfiledInlineCandidate &LHS,
                  const ProfiledInlineCandidate &RHS) {
    if (LHS.CallsiteCount != RHS.CallsiteCount)
      return LHS.CallsiteCount < RHS.CallsiteCount;
    if (LHS.SizeCost != RHS.SizeCost)
      return LHS.SizeCost > RHS.SizeCost;
    // Tie breaker using GUID so we have stable/deterministic inlining order
    assert(LHS.CalleeSamples && RHS.CalleeSamples &&
           "Expect non-null FunctionSamples");
    return LHS.CalleeSamples->getGUID(LHS.CalleeSamples->getName()) <
           RHS.CalleeSamples->getGUID(RHS.CalleeSamples->getName());
  }
};
using ProfiledCandidateQueue =
    PriorityQueue<ProfiledInlineCandidate, std::vector<ProfiledInlineCandidate>,
                  ProfiledCandidateComparer>;
// Pre-compilation inliner based on context-sensitive profile.
// The PreInliner estimates inline decisions using hotness from the profile
// and cost estimation from machine code size. It helps merge context
// profiles globally and achieves better post-inline profile quality, which
// otherwise wouldn't be possible for ThinLTO. It also reduces context profile
// size by only keeping contexts that are estimated to be inlined.
class CSPreInliner {
public:
  CSPreInliner(StringMap<FunctionSamples> &Profiles, uint64_t HotThreshold,
               uint64_t ColdThreshold);
  void run();

private:
  bool getInlineCandidates(ProfiledCandidateQueue &CQueue,
                           const FunctionSamples *FCallerContextSamples);
  std::vector<StringRef> buildTopDownOrder();
  void processFunction(StringRef Name);
  bool shouldInline(ProfiledInlineCandidate &Candidate);
  SampleContextTracker ContextTracker;
  StringMap<FunctionSamples> &ProfileMap;
  // Count thresholds to answer isHotCount and isColdCount queries.
  // Mirrors the threshold in ProfileSummaryInfo.
  uint64_t HotCountThreshold;
  uint64_t ColdCountThreshold;
};
} // end namespace sampleprof
} // end namespace llvm
#endif