# llvm-mirror/lib/IR/CMakeLists.txt

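# Generate AttributesCompatFunc.inc from the TableGen definitions in
# AttributesCompatFunc.td; the generated file backs the attribute
# compatibility checks in Attributes.cpp.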
set(LLVM_TARGET_DEFINITIONS AttributesCompatFunc.td)
tablegen(LLVM AttributesCompatFunc.inc -gen-attrs)
add_public_tablegen_target(AttributeCompatFuncTableGen)
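
# Sources of the LLVMCore library: the in-memory LLVM IR (types, values,
# instructions, metadata) and the machinery built on top of it.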
add_llvm_library(LLVMCore
  AbstractCallSite.cpp
  AsmWriter.cpp
  Attributes.cpp
  AutoUpgrade.cpp
  BasicBlock.cpp
  Comdat.cpp
  ConstantFold.cpp
  ConstantRange.cpp
  Constants.cpp
  Core.cpp
  DIBuilder.cpp
  DataLayout.cpp
  DebugInfo.cpp
  DebugInfoMetadata.cpp
  DebugLoc.cpp
  DiagnosticHandler.cpp
  DiagnosticInfo.cpp
  DiagnosticPrinter.cpp
  Dominators.cpp
  Function.cpp
  GVMaterializer.cpp
  Globals.cpp
  IRBuilder.cpp
  IRPrintingPasses.cpp
  InlineAsm.cpp
  Instruction.cpp
  Instructions.cpp
  IntrinsicInst.cpp
  LLVMContext.cpp
  LLVMContextImpl.cpp
  LegacyPassManager.cpp
  MDBuilder.cpp
  Mangler.cpp
  Metadata.cpp
  Module.cpp
  ModuleSummaryIndex.cpp
  Operator.cpp
  OptBisect.cpp
  Pass.cpp
  PassInstrumentation.cpp
  PassManager.cpp
  PassRegistry.cpp
  PassTimingInfo.cpp
  RemarkStreamer.cpp
  SafepointIRVerifier.cpp
  ProfileSummary.cpp
  Statepoint.cpp
  Type.cpp
  TypeFinder.cpp
  Use.cpp
  User.cpp
  Value.cpp
  ValueSymbolTable.cpp
  Verifier.cpp
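
  # Headers for this library live under include/llvm/IR.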
  ADDITIONAL_HEADER_DIRS
  ${LLVM_MAIN_INCLUDE_DIR}/llvm/IR
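
  # Several of these sources include TableGen-generated intrinsics headers,
  # so make sure the intrinsics_gen target has run first.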
  DEPENDS
  intrinsics_gen
  )