mirror of https://github.com/RPCS3/llvm-mirror.git (synced 2024-10-20 03:23:01 +02:00)

commit 5c3f34f10b
Summary:
First, we need to explain the core of the vulnerability. Note that this is a
very incomplete description; please see the Project Zero blog post for
details:
https://googleprojectzero.blogspot.com/2018/01/reading-privileged-memory-with-side.html

The basis for branch target injection is to direct speculative execution of
the processor to some "gadget" of executable code by poisoning the prediction
of indirect branches with the address of that gadget. The gadget in turn
contains an operation that provides a side channel for reading data. Most
commonly, this will look like a load of secret data followed by a branch on
the loaded value and then a load of some predictable cache line. The attacker
then uses timing of the processor's cache to determine which direction the
branch took *in the speculative execution*, and in turn what one bit of the
loaded value was. Due to the nature of these timing side channels and the
branch predictor on Intel processors, this allows an attacker to leak data
only accessible to a privileged domain (like the kernel) back into an
unprivileged domain.
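To make the gadget shape concrete, here is a minimal, purely illustrative C++
sketch (the function and variable names are invented for this example, not
taken from any real attack):

```cpp
#include <cstdint>

// Hypothetical gadget shape: a load of secret data, a branch on the loaded
// value, then a load of a predictable cache line. If a poisoned indirect
// branch prediction steers speculative execution here, Probe[0] ends up
// cached (or not) depending on one bit of the secret, and timing a later
// access to Probe[0] recovers that bit.
void gadget(const std::uint8_t *Secret, volatile const std::uint8_t *Probe) {
  std::uint8_t Bit = Secret[0] & 1; // load of secret data
  if (Bit)                          // speculated branch on the loaded value
    (void)Probe[0];                 // load of a predictable cache line
}
```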
The goal is simple: avoid generating code which contains an indirect branch
that could have its prediction poisoned by an attacker. In many cases, the
compiler can simply use directed conditional branches and a small search
tree. LLVM already has support for lowering switches in this way and the
first step of this patch is to disable jump-table lowering of switches and
introduce a pass to rewrite explicit indirectbr sequences into a switch over
integers.

However, there is no fully general alternative to indirect calls. We
introduce a new construct we call a "retpoline" to implement indirect calls
in a non-speculatable way. It can be thought of loosely as a trampoline for
indirect calls which uses the RET instruction on x86. Further, we arrange for
a specific call->ret sequence which ensures the processor predicts the return
to go to a controlled, known location. The retpoline then "smashes" the
return address pushed onto the stack by the call with the desired target of
the original indirect call. The result is a predicted return to the next
instruction after a call (which can be used to trap speculative execution
within an infinite loop) and an actual indirect branch to an arbitrary
address.

On 64-bit x86 ABIs, this is especially easily done in the compiler by using a
guaranteed scratch register to pass the target into this device. For 32-bit
ABIs there isn't a guaranteed scratch register, so several different
retpoline variants are introduced to use a scratch register if one is
available in the calling convention and to otherwise use direct stack
push/pop sequences to pass the target address.

This "retpoline" mitigation is fully described in the following blog post:
https://support.google.com/faqs/answer/7625886

We also support a target feature that disables emission of the retpoline
thunk by the compiler to allow for custom thunks if users want them. These
are particularly useful in environments like kernels that routinely do
hot-patching on boot and want to hot-patch their thunk to different code
sequences. They can write this custom thunk and use
`-mretpoline-external-thunk` *in addition* to `-mretpoline`. In this case, on
x86-64 the thunk name must be:

```
__llvm_external_retpoline_r11
```

or on 32-bit:

```
__llvm_external_retpoline_eax
__llvm_external_retpoline_ecx
__llvm_external_retpoline_edx
__llvm_external_retpoline_push
```

The target of the retpoline is passed in the named register, or in the case
of the `push` suffix, on the top of the stack via a `pushl` instruction.
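For orientation, here is a hypothetical C++ call site of the kind this
transformation targets (nothing changes at the source level; the names are
invented for illustration):

```cpp
// An ordinary indirect call through a function pointer. Under -mretpoline on
// x86-64 this is no longer emitted as a bare indirect call: the target is
// moved into the guaranteed scratch register and the call is redirected
// through the retpoline thunk, whose call->ret sequence traps speculation in
// a benign infinite loop while the architectural execution proceeds to the
// real callee.
using Handler = int (*)(int);

int dispatch(Handler H, int Value) {
  return H(Value); // indirect call -> routed through the retpoline thunk
}
```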
There is one other important source of indirect branches in x86 ELF binaries:
the PLT. These patches also include support for LLD to generate PLT entries
that perform a retpoline-style indirection.

The only other indirect branches remaining that we are aware of are from
precompiled runtimes (such as crt0.o and similar). The ones we have found are
not really attackable, and so we have not focused on them here, but
eventually these runtimes should also be replicated for retpoline-ed
configurations for completeness.

For kernels or other freestanding or fully static executables, the compiler
switch `-mretpoline` is sufficient to fully mitigate this particular attack.
For dynamic executables, you must compile *all* libraries with `-mretpoline`
and additionally link the dynamic executable and all shared libraries with
LLD and pass `-z retpolineplt` (or use similar functionality from some other
linker). We strongly recommend also using `-z now` as non-lazy binding allows
the retpoline-mitigated PLT to be substantially smaller.

When manually applying transformations similar to `-mretpoline` to the Linux
kernel we observed very small performance hits to applications running
typical workloads, and relatively minor hits (approximately 2%) even for
extremely syscall-heavy applications. This is largely due to the small number
of indirect branches that occur in performance-sensitive paths of the kernel.

When using these patches on statically linked applications, especially C++
applications, you should expect to see a much more dramatic performance hit.
For microbenchmarks that are switch-, indirect-, or virtual-call heavy we
have seen overheads ranging from 10% to 50%.

However, real-world workloads exhibit substantially lower performance impact.
Notably, techniques such as PGO and ThinLTO dramatically reduce the impact of
hot indirect calls (by speculatively promoting them to direct calls) and
allow optimized search trees to be used to lower switches. If you need to
deploy these techniques in C++ applications, we *strongly* recommend that you
ensure all hot call targets are statically linked (avoiding PLT indirection)
and use both PGO and ThinLTO. Well-tuned servers using all of these
techniques saw 5% - 10% overhead from the use of retpoline.

We will add detailed documentation covering these components in subsequent
patches, but wanted to make the core functionality available as soon as
possible. Happy for more code review, but we'd really like to get these
patches landed and backported ASAP for obvious reasons. We're planning to
backport this to both 6.0 and 5.0 release streams and get a 5.0 release with
just this cherry picked ASAP for distros and vendors.

This patch is the work of a number of people over the past month: Eric, Reid,
Rui, and myself. I'm mailing it out as a single commit due to the time
sensitive nature of landing this and the need to backport it. Huge thanks to
everyone who helped out here, and everyone at Intel who helped out in
discussions about how to craft this. Also, credit goes to Paul Turner (at
Google, but not an LLVM contributor) for much of the underlying retpoline
design.

Reviewers: echristo, rnk, ruiu, craig.topper, DavidKreitzer

Subscribers: sanjoy, emaste, mcrosier, mgorny, mehdi_amini, hiraditya, llvm-commits

Differential Revision: https://reviews.llvm.org/D41723

llvm-svn: 323155
474 lines
16 KiB
C++
//===-- X86TargetMachine.cpp - Define TargetMachine for the X86 -----------===//
//
//                     The LLVM Compiler Infrastructure
//
// This file is distributed under the University of Illinois Open Source
// License. See LICENSE.TXT for details.
//
//===----------------------------------------------------------------------===//
//
// This file defines the X86 specific subclass of TargetMachine.
//
//===----------------------------------------------------------------------===//

#include "X86TargetMachine.h"
#include "MCTargetDesc/X86MCTargetDesc.h"
#include "X86.h"
#include "X86CallLowering.h"
#include "X86LegalizerInfo.h"
#include "X86MacroFusion.h"
#include "X86Subtarget.h"
#include "X86TargetObjectFile.h"
#include "X86TargetTransformInfo.h"
#include "llvm/ADT/Optional.h"
#include "llvm/ADT/STLExtras.h"
#include "llvm/ADT/SmallString.h"
#include "llvm/ADT/StringRef.h"
#include "llvm/ADT/Triple.h"
#include "llvm/Analysis/TargetTransformInfo.h"
#include "llvm/CodeGen/ExecutionDomainFix.h"
#include "llvm/CodeGen/GlobalISel/CallLowering.h"
#include "llvm/CodeGen/GlobalISel/IRTranslator.h"
#include "llvm/CodeGen/GlobalISel/InstructionSelect.h"
#include "llvm/CodeGen/GlobalISel/Legalizer.h"
#include "llvm/CodeGen/GlobalISel/RegBankSelect.h"
#include "llvm/CodeGen/MachineScheduler.h"
#include "llvm/CodeGen/Passes.h"
#include "llvm/CodeGen/TargetLoweringObjectFile.h"
#include "llvm/CodeGen/TargetPassConfig.h"
#include "llvm/IR/Attributes.h"
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/Function.h"
#include "llvm/Pass.h"
#include "llvm/Support/CodeGen.h"
#include "llvm/Support/CommandLine.h"
#include "llvm/Support/ErrorHandling.h"
#include "llvm/Support/TargetRegistry.h"
#include "llvm/Target/TargetOptions.h"
#include <memory>
#include <string>

using namespace llvm;

static cl::opt<bool> EnableMachineCombinerPass("x86-machine-combiner",
                               cl::desc("Enable the machine combiner pass"),
                               cl::init(true), cl::Hidden);

namespace llvm {

void initializeWinEHStatePassPass(PassRegistry &);
void initializeFixupLEAPassPass(PassRegistry &);
void initializeX86CallFrameOptimizationPass(PassRegistry &);
void initializeX86CmovConverterPassPass(PassRegistry &);
void initializeX86ExecutionDomainFixPass(PassRegistry &);
void initializeX86DomainReassignmentPass(PassRegistry &);

} // end namespace llvm

extern "C" void LLVMInitializeX86Target() {
  // Register the target.
  RegisterTargetMachine<X86TargetMachine> X(getTheX86_32Target());
  RegisterTargetMachine<X86TargetMachine> Y(getTheX86_64Target());

  PassRegistry &PR = *PassRegistry::getPassRegistry();
  initializeGlobalISel(PR);
  initializeWinEHStatePassPass(PR);
  initializeFixupBWInstPassPass(PR);
  initializeEvexToVexInstPassPass(PR);
  initializeFixupLEAPassPass(PR);
  initializeX86CallFrameOptimizationPass(PR);
  initializeX86CmovConverterPassPass(PR);
  initializeX86ExecutionDomainFixPass(PR);
  initializeX86DomainReassignmentPass(PR);
}

static std::unique_ptr<TargetLoweringObjectFile> createTLOF(const Triple &TT) {
  if (TT.isOSBinFormatMachO()) {
    if (TT.getArch() == Triple::x86_64)
      return llvm::make_unique<X86_64MachoTargetObjectFile>();
    return llvm::make_unique<TargetLoweringObjectFileMachO>();
  }

  if (TT.isOSFreeBSD())
    return llvm::make_unique<X86FreeBSDTargetObjectFile>();
  if (TT.isOSLinux() || TT.isOSNaCl() || TT.isOSIAMCU())
    return llvm::make_unique<X86LinuxNaClTargetObjectFile>();
  if (TT.isOSSolaris())
    return llvm::make_unique<X86SolarisTargetObjectFile>();
  if (TT.isOSFuchsia())
    return llvm::make_unique<X86FuchsiaTargetObjectFile>();
  if (TT.isOSBinFormatELF())
    return llvm::make_unique<X86ELFTargetObjectFile>();
  if (TT.isKnownWindowsMSVCEnvironment() || TT.isWindowsCoreCLREnvironment())
    return llvm::make_unique<X86WindowsTargetObjectFile>();
  if (TT.isOSBinFormatCOFF())
    return llvm::make_unique<TargetLoweringObjectFileCOFF>();
  llvm_unreachable("unknown subtarget type");
}

static std::string computeDataLayout(const Triple &TT) {
  // X86 is little endian
  std::string Ret = "e";

  Ret += DataLayout::getManglingComponent(TT);
  // X86 and x32 have 32 bit pointers.
  if ((TT.isArch64Bit() &&
       (TT.getEnvironment() == Triple::GNUX32 || TT.isOSNaCl())) ||
      !TT.isArch64Bit())
    Ret += "-p:32:32";

  // Some ABIs align 64 bit integers and doubles to 64 bits, others to 32.
  if (TT.isArch64Bit() || TT.isOSWindows() || TT.isOSNaCl())
    Ret += "-i64:64";
  else if (TT.isOSIAMCU())
    Ret += "-i64:32-f64:32";
  else
    Ret += "-f64:32:64";

  // Some ABIs align long double to 128 bits, others to 32.
  if (TT.isOSNaCl() || TT.isOSIAMCU())
    ; // No f80
  else if (TT.isArch64Bit() || TT.isOSDarwin())
    Ret += "-f80:128";
  else
    Ret += "-f80:32";

  if (TT.isOSIAMCU())
    Ret += "-f128:32";

  // The registers can hold 8, 16, 32 or, in x86-64, 64 bits.
  if (TT.isArch64Bit())
    Ret += "-n8:16:32:64";
  else
    Ret += "-n8:16:32";

  // The stack is aligned to 32 bits on some ABIs and 128 bits on others.
  if ((!TT.isArch64Bit() && TT.isOSWindows()) || TT.isOSIAMCU())
    Ret += "-a:0:32-S32";
  else
    Ret += "-S128";

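  // For example, under these rules a 64-bit Linux triple produces
  // "e-m:e-i64:64-f80:128-n8:16:32:64-S128".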
  return Ret;
}

static Reloc::Model getEffectiveRelocModel(const Triple &TT,
                                           Optional<Reloc::Model> RM) {
  bool is64Bit = TT.getArch() == Triple::x86_64;
  if (!RM.hasValue()) {
    // Darwin defaults to PIC in 64 bit mode and dynamic-no-pic in 32 bit mode.
    // Win64 requires rip-rel addressing, thus we force it to PIC. Otherwise we
    // use static relocation model by default.
    if (TT.isOSDarwin()) {
      if (is64Bit)
        return Reloc::PIC_;
      return Reloc::DynamicNoPIC;
    }
    if (TT.isOSWindows() && is64Bit)
      return Reloc::PIC_;
    return Reloc::Static;
  }

  // ELF and X86-64 don't have a distinct DynamicNoPIC model. DynamicNoPIC
  // is defined as a model for code which may be used in static or dynamic
  // executables but not necessarily a shared library. On X86-32 we just
  // compile in -static mode, in x86-64 we use PIC.
  if (*RM == Reloc::DynamicNoPIC) {
    if (is64Bit)
      return Reloc::PIC_;
    if (!TT.isOSDarwin())
      return Reloc::Static;
  }

  // If we are on Darwin, disallow static relocation model in X86-64 mode,
  // since the Mach-O file format doesn't support it.
  if (*RM == Reloc::Static && TT.isOSDarwin() && is64Bit)
    return Reloc::PIC_;

  return *RM;
}

static CodeModel::Model getEffectiveCodeModel(Optional<CodeModel::Model> CM,
                                              bool JIT, bool Is64Bit) {
  if (CM)
    return *CM;
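  // A JITed image may be allocated far from the host binary in the 64-bit
  // address space, so conservatively default to the large code model there.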
  if (JIT)
    return Is64Bit ? CodeModel::Large : CodeModel::Small;
  return CodeModel::Small;
}

/// Create an X86 target.
///
X86TargetMachine::X86TargetMachine(const Target &T, const Triple &TT,
                                   StringRef CPU, StringRef FS,
                                   const TargetOptions &Options,
                                   Optional<Reloc::Model> RM,
                                   Optional<CodeModel::Model> CM,
                                   CodeGenOpt::Level OL, bool JIT)
    : LLVMTargetMachine(
          T, computeDataLayout(TT), TT, CPU, FS, Options,
          getEffectiveRelocModel(TT, RM),
          getEffectiveCodeModel(CM, JIT, TT.getArch() == Triple::x86_64), OL),
      TLOF(createTLOF(getTargetTriple())) {
  // The Windows stack unwinder gets confused when execution flow "falls
  // through" after a call to a 'noreturn' function. To prevent that, we emit
  // a trap for 'unreachable' IR instructions (which on X86 happens to be the
  // 'ud2' instruction).
  // On PS4, the "return address" of a 'noreturn' call must still be within
  // the calling function, and TrapUnreachable is an easy way to get that.
  // The check here for 64-bit Windows is a bit icky, but as we're unlikely
  // to ever want to mix 32- and 64-bit Windows code in a single module
  // this should be fine.
  if ((TT.isOSWindows() && TT.getArch() == Triple::x86_64) || TT.isPS4())
    this->Options.TrapUnreachable = true;

  initAsmInfo();
}

X86TargetMachine::~X86TargetMachine() = default;

const X86Subtarget *
X86TargetMachine::getSubtargetImpl(const Function &F) const {
  Attribute CPUAttr = F.getFnAttribute("target-cpu");
  Attribute FSAttr = F.getFnAttribute("target-features");

  StringRef CPU = !CPUAttr.hasAttribute(Attribute::None)
                      ? CPUAttr.getValueAsString()
                      : (StringRef)TargetCPU;
  StringRef FS = !FSAttr.hasAttribute(Attribute::None)
                     ? FSAttr.getValueAsString()
                     : (StringRef)TargetFS;

  SmallString<512> Key;
  Key.reserve(CPU.size() + FS.size());
  Key += CPU;
  Key += FS;

  // FIXME: This is related to the code below to reset the target options:
  // we need to know whether or not the soft float flag is set on the
  // function before we can generate a subtarget. We also need to use
  // it as a key for the subtarget since that can be the only difference
  // between two functions.
  bool SoftFloat =
      F.getFnAttribute("use-soft-float").getValueAsString() == "true";
  // If the soft float attribute is set on the function turn on the soft float
  // subtarget feature.
  if (SoftFloat)
    Key += FS.empty() ? "+soft-float" : ",+soft-float";

  // Keep track of the key width after all features are added so we can extract
  // the feature string out later.
  unsigned CPUFSWidth = Key.size();

  // Translate the vector width function attribute into subtarget features.
  // This overrides any CPU-specific tuning parameter.
  unsigned PreferVectorWidthOverride = 0;
  if (F.hasFnAttribute("prefer-vector-width")) {
    StringRef Val = F.getFnAttribute("prefer-vector-width").getValueAsString();
    unsigned Width;
    if (!Val.getAsInteger(0, Width)) {
      Key += ",prefer-vector-width=";
      Key += Val;
      PreferVectorWidthOverride = Width;
    }
  }

  FS = Key.slice(CPU.size(), CPUFSWidth);

  auto &I = SubtargetMap[Key];
  if (!I) {
    // This needs to be done before we create a new subtarget since any
    // creation will depend on the TM and the code generation flags on the
    // function that reside in TargetOptions.
    resetTargetOptions(F);
    I = llvm::make_unique<X86Subtarget>(TargetTriple, CPU, FS, *this,
                                        Options.StackAlignmentOverride,
                                        PreferVectorWidthOverride);
  }
  return I.get();
}

//===----------------------------------------------------------------------===//
// Command line options for x86
//===----------------------------------------------------------------------===//
static cl::opt<bool>
UseVZeroUpper("x86-use-vzeroupper", cl::Hidden,
              cl::desc("Minimize AVX to SSE transition penalty"),
              cl::init(true));

//===----------------------------------------------------------------------===//
// X86 TTI query.
//===----------------------------------------------------------------------===//

TargetTransformInfo
X86TargetMachine::getTargetTransformInfo(const Function &F) {
  return TargetTransformInfo(X86TTIImpl(this, F));
}

//===----------------------------------------------------------------------===//
// Pass Pipeline Configuration
//===----------------------------------------------------------------------===//

namespace {

/// X86 Code Generator Pass Configuration Options.
class X86PassConfig : public TargetPassConfig {
public:
  X86PassConfig(X86TargetMachine &TM, PassManagerBase &PM)
      : TargetPassConfig(TM, PM) {}

  X86TargetMachine &getX86TargetMachine() const {
    return getTM<X86TargetMachine>();
  }

  ScheduleDAGInstrs *
  createMachineScheduler(MachineSchedContext *C) const override {
    ScheduleDAGMILive *DAG = createGenericSchedLive(C);
    DAG->addMutation(createX86MacroFusionDAGMutation());
    return DAG;
  }

  void addIRPasses() override;
  bool addInstSelector() override;
  bool addIRTranslator() override;
  bool addLegalizeMachineIR() override;
  bool addRegBankSelect() override;
  bool addGlobalInstructionSelect() override;
  bool addILPOpts() override;
  bool addPreISel() override;
  void addMachineSSAOptimization() override;
  void addPreRegAlloc() override;
  void addPostRegAlloc() override;
  void addPreEmitPass() override;
  void addPreEmitPass2() override;
  void addPreSched2() override;
};

class X86ExecutionDomainFix : public ExecutionDomainFix {
public:
  static char ID;
  X86ExecutionDomainFix() : ExecutionDomainFix(ID, X86::VR128XRegClass) {}
  StringRef getPassName() const override {
    return "X86 Execution Dependency Fix";
  }
};
char X86ExecutionDomainFix::ID;

} // end anonymous namespace

INITIALIZE_PASS_BEGIN(X86ExecutionDomainFix, "x86-execution-domain-fix",
                      "X86 Execution Domain Fix", false, false)
INITIALIZE_PASS_DEPENDENCY(ReachingDefAnalysis)
INITIALIZE_PASS_END(X86ExecutionDomainFix, "x86-execution-domain-fix",
                    "X86 Execution Domain Fix", false, false)

TargetPassConfig *X86TargetMachine::createPassConfig(PassManagerBase &PM) {
  return new X86PassConfig(*this, PM);
}

void X86PassConfig::addIRPasses() {
  addPass(createAtomicExpandPass());

  TargetPassConfig::addIRPasses();

  if (TM->getOptLevel() != CodeGenOpt::None)
    addPass(createInterleavedAccessPass());

  // Add passes that handle indirect branch removal and insertion of a
  // retpoline thunk. These will be no-ops unless a function subtarget has
  // the retpoline feature enabled.
  addPass(createIndirectBrExpandPass());
}

bool X86PassConfig::addInstSelector() {
  // Install an instruction selector.
  addPass(createX86ISelDag(getX86TargetMachine(), getOptLevel()));

  // For ELF, clean up any local-dynamic TLS accesses.
  if (TM->getTargetTriple().isOSBinFormatELF() &&
      getOptLevel() != CodeGenOpt::None)
    addPass(createCleanupLocalDynamicTLSPass());

  addPass(createX86GlobalBaseRegPass());
  return false;
}

bool X86PassConfig::addIRTranslator() {
  addPass(new IRTranslator());
  return false;
}

bool X86PassConfig::addLegalizeMachineIR() {
  addPass(new Legalizer());
  return false;
}

bool X86PassConfig::addRegBankSelect() {
  addPass(new RegBankSelect());
  return false;
}

bool X86PassConfig::addGlobalInstructionSelect() {
  addPass(new InstructionSelect());
  return false;
}

bool X86PassConfig::addILPOpts() {
  addPass(&EarlyIfConverterID);
  if (EnableMachineCombinerPass)
    addPass(&MachineCombinerID);
  addPass(createX86CmovConverterPass());
  return true;
}

bool X86PassConfig::addPreISel() {
  // Only add this pass for 32-bit x86 Windows.
  const Triple &TT = TM->getTargetTriple();
  if (TT.isOSWindows() && TT.getArch() == Triple::x86)
    addPass(createX86WinEHStatePass());
  return true;
}

void X86PassConfig::addPreRegAlloc() {
  if (getOptLevel() != CodeGenOpt::None) {
    addPass(&LiveRangeShrinkID);
    addPass(createX86FixupSetCC());
    addPass(createX86OptimizeLEAs());
    addPass(createX86CallFrameOptimization());
  }

  addPass(createX86WinAllocaExpander());
}

void X86PassConfig::addMachineSSAOptimization() {
  addPass(createX86DomainReassignmentPass());
  TargetPassConfig::addMachineSSAOptimization();
}

void X86PassConfig::addPostRegAlloc() {
  addPass(createX86FloatingPointStackifierPass());
}

void X86PassConfig::addPreSched2() { addPass(createX86ExpandPseudoPass()); }

void X86PassConfig::addPreEmitPass() {
  if (getOptLevel() != CodeGenOpt::None) {
    addPass(new X86ExecutionDomainFix());
    addPass(createBreakFalseDeps());
  }

  addPass(createX86IndirectBranchTrackingPass());

  if (UseVZeroUpper)
    addPass(createX86IssueVZeroUpperPass());

  if (getOptLevel() != CodeGenOpt::None) {
    addPass(createX86FixupBWInsts());
    addPass(createX86PadShortFunctions());
    addPass(createX86FixupLEAs());
    addPass(createX86EvexToVexInsts());
  }
}

void X86PassConfig::addPreEmitPass2() {
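  // Create and emit the retpoline thunk functions after every other machine
  // pass has run, so nothing later can alter their exact instruction
  // sequence.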
  addPass(createX86RetpolineThunksPass());
}