See r331124 for how I made a list of files missing the include.
I then ran this Python script:
for f in open('filelist.txt'):
    f = f.strip()
    fl = open(f).readlines()
    found = False
    for i in xrange(len(fl)):
        p = '#include "llvm/'
        if not fl[i].startswith(p):
            continue
        if fl[i][len(p):] > 'Config':
            fl.insert(i, '#include "llvm/Config/llvm-config.h"\n')
            found = True
            break
    if not found:
        print 'not found', f
    else:
        open(f, 'w').write(''.join(fl))
and then looked through everything with `svn diff | diffstat -l | xargs -n 1000 gvim -p`
and tried to fix include ordering and whatnot.
No intended behavior change.
llvm-svn: 331184
materializing function definitions.
MaterializationUnit instances are responsible for resolving and finalizing
symbol definitions when their materialize method is called. By contract, the
MaterializationUnit must materialize all definitions it is responsible for and
no others. If it cannot materialize all definitions (because of some error)
then it must notify the associated VSO about each definition that could not be
materialized. The MaterializationResponsibility class tracks this
responsibility, asserting that all required symbols are resolved and finalized,
and that no extraneous symbols are resolved or finalized. In the event of an
error it provides a convenience method for notifying the VSO about each
definition that could not be materialized.
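Roughly, the contract looks like this (a standalone sketch with simplified,
hypothetical stand-ins for the symbol, VSO and MaterializationUnit types, not
the actual class definitions):
#include <cstdint>
#include <iostream>
#include <map>
#include <set>
#include <string>

// Hypothetical stand-ins for the real ORC types, just to illustrate the contract.
using SymbolName = std::string;
using SymbolMap  = std::map<SymbolName, std::uint64_t>; // name -> resolved address

struct VSO {
  // Called with every definition the unit successfully materialized.
  void resolve(const SymbolMap &Symbols) { /* record the addresses */ }
  void finalize(const std::set<SymbolName> &Symbols) { /* mark them ready */ }
  // Called for each definition the unit could NOT materialize.
  void notifyMaterializationFailed(const SymbolName &Name) {
    std::cerr << "failed to materialize " << Name << "\n";
  }
};

// A unit is responsible for a fixed set of symbols: it must resolve and
// finalize all of them, or report every one it could not produce.
struct MaterializationUnit {
  std::set<SymbolName> Responsibility;

  void materialize(VSO &V) {
    SymbolMap Produced;
    for (const auto &Name : Responsibility) {
      // Pretend compilation: the real JIT would emit code here and discover
      // the symbol's address.
      std::uint64_t Addr = 0x1000 + Produced.size() * 0x10;
      Produced[Name] = Addr;
    }
    if (Produced.size() == Responsibility.size()) {
      V.resolve(Produced);
      V.finalize(Responsibility);
    } else {
      // On error, every definition we failed to provide must be reported.
      for (const auto &Name : Responsibility)
        if (!Produced.count(Name))
          V.notifyMaterializationFailed(Name);
    }
  }
};

int main() {
  VSO V;
  MaterializationUnit MU{{"foo", "bar"}};
  MU.materialize(V); // resolves and finalizes exactly "foo" and "bar"
}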
llvm-svn: 330142
notifyMaterializationFailed.
The notifyMaterializationFailed method can determine which error to raise by
looking at which queue the pending queries are in (resolution or finalization).
llvm-svn: 330141
Previously this crashed because a nullptr (returned by
createLocalIndirectStubsManagerBuilder() on platforms without
indirection support) functor was unconditionally invoked.
Patch by Andres Freund. Thanks Andres!
llvm-svn: 328687
There's a race between this thread and the destructors of the test ORC
components on the main thread. I saw flaky failures there in about 4%
of the runs of this unit test.
llvm-svn: 328300
operation all-or-nothing, rather than allowing materialization on a per-symbol
basis.
This addresses a shortcoming of per-symbol materialization: If a
MaterializationUnit (/SymbolSource) wants to materialize more symbols than
requested (which is likely: most materializers will want to materialize whole
modules) then it needs a way to notify the symbol table about the extra symbols
being materialized. This process (checking what has been requested against what
is being provided and notifying the symbol table about the difference) has to
be repeated at every level of the JIT stack. Making materialization
all-or-nothing eliminates this issue, simplifying both materializer
implementations and the symbol table (VSO class) API. The cost is that
per-symbol materialization (e.g. for individual symbols in a module) now
requires multiple MaterializationUnits.
llvm-svn: 327946
This reverts commit r327566, it breaks
test/ExecutionEngine/OrcMCJIT/test-global-ctors.ll.
The test doesn't crash with a stack trace, unfortunately. It merely
returns 1 as the exit code.
ASan didn't produce a report, and I reproduced this on my Linux machine
and Windows box.
llvm-svn: 327576
Layer implementations typically mutate module state, and this is better
reflected by having layers own the Module they are operating on.
llvm-svn: 327566
The lookup function takes a list of VSOs, a set of symbol names (or just one
symbol name) and a materialization function object. It returns an
Expected<SymbolMap> (if given a set of names) or an Expected<JITEvaluatedSymbol>
(if given just one name). The lookup method constructs an
AsynchronousSymbolQuery for the given names, applies that query to each VSO in
the list in turn, and then blocks waiting for the query to complete. If
threading is enabled then the materialization function object can be used to
execute the materialization on different threads. If threading is disabled the
MaterializeOnCurrentThread utility must be used.
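A rough sketch of how such a blocking lookup could be structured (all types
below are hypothetical simplifications rather than the real ORC signatures,
and for brevity the sketch assumes the materialization functor runs its work
before returning, as a MaterializeOnCurrentThread-style functor does):
#include <cstdint>
#include <functional>
#include <map>
#include <set>
#include <string>
#include <vector>

using SymbolName = std::string;
using SymbolMap  = std::map<SymbolName, std::uint64_t>;

// Hypothetical VSO stand-in holding ready definitions plus lazy ones that
// still need to be materialized (e.g. compiled) before they can be returned.
struct VSO {
  SymbolMap Defs;
  SymbolMap Lazy;

  // Moves any requested symbols this VSO defines from Names into Result,
  // using the supplied functor to run outstanding materialization work.
  void lookupIn(std::set<SymbolName> &Names, SymbolMap &Result,
                const std::function<void(std::function<void()>)> &Materialize) {
    for (auto I = Names.begin(); I != Names.end();) {
      if (Defs.count(*I)) {
        Result[*I] = Defs[*I];
        I = Names.erase(I);
      } else if (Lazy.count(*I)) {
        SymbolName Name = *I;
        // The functor decides where this work runs. The sketch assumes it has
        // completed by the time the call returns (the non-threaded case).
        Materialize([this, Name] { Defs[Name] = Lazy[Name]; });
        Result[Name] = Defs[Name];
        I = Names.erase(I);
      } else {
        ++I;
      }
    }
  }
};

// Runs materialization work immediately on the calling thread.
inline void MaterializeOnCurrentThread(std::function<void()> Work) { Work(); }

// Simplified blocking lookup: apply the query to each VSO in turn and return
// the addresses found; unresolved names are simply absent from the result.
SymbolMap lookup(const std::vector<VSO *> &VSOs, std::set<SymbolName> Names,
                 std::function<void(std::function<void()>)> Materialize) {
  SymbolMap Result;
  for (VSO *V : VSOs) {
    if (Names.empty())
      break;
    V->lookupIn(Names, Result, Materialize);
  }
  return Result;
}

int main() {
  VSO V1{{{"foo", 0x1000}}, {}};
  VSO V2{{}, {{"bar", 0x2000}}}; // "bar" requires materialization first
  auto Syms = lookup({&V1, &V2}, {"foo", "bar"}, MaterializeOnCurrentThread);
  return Syms.size() == 2 ? 0 : 1;
}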
llvm-svn: 327474
than a shared ObjectFile/MemoryBuffer pair.
There's no need to pre-parse the buffer into an ObjectFile before passing it
down to the linking layer, and moving the parsing into the linking layer allows
us to remove the parsing code at each call site.
llvm-svn: 325725
Handles were returned by addModule and used as keys for removeModule,
findSymbolIn, and emitAndFinalize. Their job is now subsumed by VModuleKeys,
which simplify resource management by providing a consistent handle across all
layers.
llvm-svn: 324700
In particular this patch switches RTDyldObjectLinkingLayer to use
orc::SymbolResolver and threads the required changes (ExecutionSession
references and VModuleKeys) through the existing layer APIs.
The purpose of the new resolver interface is to improve query performance and
better support parallelism, both in JIT'd code and within the compiler itself.
The most visible change is the switch of the <Layer>::addModule signatures from:
Expected<Handle> addModule(std::shared_ptr<ModuleType> Mod,
                           std::shared_ptr<JITSymbolResolver> Resolver)
to:
Expected<Handle> addModule(VModuleKey K, std::shared_ptr<ModuleType> Mod);
Typical usage of addModule will now look like:
auto K = ES.allocateVModuleKey();
Resolvers[K] = createSymbolResolver(...);
Layer.addModule(K, std::move(Mod));
See the BuildingAJIT tutorial code for example usage.
llvm-svn: 324405
This resolver conforms to the LegacyJITSymbolResolver interface, and will be
replaced with a null-returning resolver conforming to the newer
orc::SymbolResolver interface in the near future. This patch renames the class
to avoid a clash.
llvm-svn: 324175
first argument.
This makes lookupFlags more consistent with lookup (which takes the query as the
first argument) and composes better in practice, since lookups are usually
linearly chained: Each lookupFlags can populate the result map based on the
symbols not found in the previous lookup. (If the maps were returned rather than
passed by reference there would have to be a merge step at the end).
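For example, chained lookups could look roughly like this (a sketch using
simplified stand-in types, not the real VSO API):
#include <map>
#include <set>
#include <string>

using SymbolName     = std::string;
using SymbolFlags    = unsigned; // stand-in for JITSymbolFlags
using SymbolFlagsMap = std::map<SymbolName, SymbolFlags>;
using SymbolNameSet  = std::set<SymbolName>;

// Hypothetical VSO stand-in. lookupFlags takes the result map as its first
// argument, fills in flags for the symbols it knows, and returns the names
// it did not find.
struct VSO {
  SymbolFlagsMap Defs;

  SymbolNameSet lookupFlags(SymbolFlagsMap &Flags, const SymbolNameSet &Names) {
    SymbolNameSet NotFound;
    for (const auto &Name : Names) {
      auto I = Defs.find(Name);
      if (I != Defs.end())
        Flags[Name] = I->second; // populate the shared result map in place
      else
        NotFound.insert(Name);
    }
    return NotFound;
  }
};

int main() {
  VSO MainVSO{{{"foo", 1}}};
  VSO LibVSO{{{"bar", 2}}};

  // Because the result map is passed by reference, each lookup only has to
  // search for the symbols the previous one failed to find, and no merge
  // step is needed at the end.
  SymbolFlagsMap Flags;
  SymbolNameSet Remaining = MainVSO.lookupFlags(Flags, {"foo", "bar", "baz"});
  Remaining = LibVSO.lookupFlags(Flags, Remaining);
  return Remaining.size(); // only "baz" is still unresolved
}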
llvm-svn: 323398
functions/methods that return JITSymbols.
lookupFlagsWithLegacyFn takes a SymbolNameSet and a legacy lookup function and
returns a LookupFlagsResult. It uses the legacy lookup function to search for
each symbol. If found, getFlags is called on the symbol and the flags added to
the SymbolFlags map. If not found, the symbol is added to the SymbolsNotFound
set.
lookupWithLegacyFn takes an AsynchronousSymbolQuery, a SymbolNameSet and a
legacy lookup function. Each symbol in the SymbolNameSet is searched for via the
legacy lookup function. If it is found, its getAddress function is called
(triggering materialization if it has not happened already) and the resulting
mapping stored in the query. If it is not found the symbol is added to the
unresolved symbols set which is returned at the end of the function. If an
error occurs during legacy lookup or materialization it is passed to the
query via setFailed and the function returns immediately.
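The lookupFlagsWithLegacyFn pattern, roughly sketched with standalone stand-in
types (the real functions operate on JITSymbols and JITSymbolFlags rather than
the simplified types used here, and lookupWithLegacyFn additionally threads
errors through the query):
#include <functional>
#include <map>
#include <optional>
#include <set>
#include <string>

using SymbolName     = std::string;
using SymbolFlagsMap = std::map<SymbolName, unsigned>;
using SymbolNameSet  = std::set<SymbolName>;

// Stand-in for a legacy symbol: it can report its flags (and, in the real
// API, its address, which may trigger materialization).
struct LegacySymbol {
  unsigned Flags;
  unsigned getFlags() const { return Flags; }
};

// A "legacy lookup function" maps one name to a symbol, or to nothing.
using LegacyLookupFn =
    std::function<std::optional<LegacySymbol>(const SymbolName &)>;

struct LookupFlagsResult {
  SymbolFlagsMap SymbolFlags;    // flags for every symbol the legacy fn found
  SymbolNameSet SymbolsNotFound; // names the legacy fn could not resolve
};

// Searches for each requested symbol via the legacy lookup function,
// recording flags for hits and collecting misses.
LookupFlagsResult lookupFlagsWithLegacyFn(const SymbolNameSet &Names,
                                          LegacyLookupFn Lookup) {
  LookupFlagsResult R;
  for (const auto &Name : Names) {
    if (auto Sym = Lookup(Name))
      R.SymbolFlags[Name] = Sym->getFlags();
    else
      R.SymbolsNotFound.insert(Name);
  }
  return R;
}

int main() {
  auto Legacy = [](const SymbolName &Name) -> std::optional<LegacySymbol> {
    if (Name == "foo")
      return LegacySymbol{1};
    return std::nullopt;
  };
  auto R = lookupFlagsWithLegacyFn({"foo", "bar"}, Legacy);
  return R.SymbolsNotFound.size(); // "bar" was not found
}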
llvm-svn: 323388
This patch adds a LambdaSymbolResolver convenience utility that can create an
orc::SymbolResolver from a pair of function objects that supply the behavior for
the lookupFlags and lookup methods.
This class plays the same role for orc::SymbolResolver as the legacy
LambdaResolver class plays for LegacyJITSymbolResolver, and will replace the
latter class once all ORC APIs are migrated to orc::SymbolResolver.
This patch also adds some documentation for the orc::SymbolResolver class as
this was left out of the original commit.
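In spirit, the utility looks something like this (a standalone sketch; the
stand-in SymbolResolver interface below is synchronous and simplified, unlike
the real asynchronous one):
#include <cstdint>
#include <map>
#include <memory>
#include <set>
#include <string>

using SymbolName     = std::string;
using SymbolNameSet  = std::set<SymbolName>;
using SymbolFlagsMap = std::map<SymbolName, unsigned>;
using SymbolMap      = std::map<SymbolName, std::uint64_t>;

// Hypothetical, synchronous stand-in for the resolver interface.
struct SymbolResolver {
  virtual ~SymbolResolver() = default;
  virtual SymbolNameSet lookupFlags(SymbolFlagsMap &Flags,
                                    const SymbolNameSet &Symbols) = 0;
  virtual SymbolNameSet lookup(SymbolMap &Result,
                               const SymbolNameSet &Symbols) = 0;
};

// Wraps two function objects as a resolver: one functor supplies the
// lookupFlags behavior, the other supplies the lookup behavior.
template <typename LookupFlagsFn, typename LookupFn>
class LambdaSymbolResolver : public SymbolResolver {
public:
  LambdaSymbolResolver(LookupFlagsFn LookupFlags, LookupFn Lookup)
      : LookupFlags(std::move(LookupFlags)), Lookup(std::move(Lookup)) {}

  SymbolNameSet lookupFlags(SymbolFlagsMap &Flags,
                            const SymbolNameSet &Symbols) override {
    return LookupFlags(Flags, Symbols);
  }
  SymbolNameSet lookup(SymbolMap &Result, const SymbolNameSet &Symbols) override {
    return Lookup(Result, Symbols);
  }

private:
  LookupFlagsFn LookupFlags;
  LookupFn Lookup;
};

// Deduces the functor types so callers can just pass two lambdas.
template <typename LookupFlagsFn, typename LookupFn>
std::unique_ptr<SymbolResolver> createLambdaSymbolResolver(LookupFlagsFn LF,
                                                           LookupFn L) {
  return std::make_unique<LambdaSymbolResolver<LookupFlagsFn, LookupFn>>(
      std::move(LF), std::move(L));
}

int main() {
  auto R = createLambdaSymbolResolver(
      [](SymbolFlagsMap &Flags, const SymbolNameSet &Symbols) {
        return Symbols; // pretend nothing is known: every name is unresolved
      },
      [](SymbolMap &Result, const SymbolNameSet &Symbols) { return Symbols; });

  SymbolFlagsMap Flags;
  return R->lookupFlags(Flags, {"foo"}).size() == 1 ? 0 : 1;
}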
llvm-svn: 323375
orc::SymbolResolver to JITSymbolResolver adapter.
The new orc::SymbolResolver interface uses asynchronous queries for better
performance. (Asynchronous queries with bulk lookup minimize RPC/IPC overhead,
support parallel incoming queries, and expose more available work for
distribution). Existing ORC layers will soon be updated to use the
orc::SymbolResolver API rather than the legacy llvm::JITSymbolResolver API.
Because RuntimeDyld still uses JITSymbolResolver, this patch also includes an
adapter that wraps an orc::SymbolResolver with a JITSymbolResolver API.
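The shape of such a blocking-over-asynchronous adapter, sketched with
hypothetical stand-in types (the real adapter works in terms of
orc::SymbolResolver and JITSymbolResolver and must also handle flags and
errors):
#include <cstdint>
#include <functional>
#include <future>
#include <map>
#include <set>
#include <string>

using SymbolName = std::string;
using SymbolMap  = std::map<SymbolName, std::uint64_t>;

// Hypothetical asynchronous resolver: results are delivered via a callback
// once all requested symbols are ready (possibly on another thread).
struct AsyncResolver {
  void lookup(const std::set<SymbolName> &Names,
              std::function<void(SymbolMap)> OnResolved) {
    SymbolMap Result;
    std::uint64_t Addr = 0x1000;
    for (const auto &N : Names)
      Result[N] = Addr += 0x10; // pretend every symbol resolves
    OnResolved(std::move(Result));
  }
};

// Adapter exposing a blocking, legacy-style interface on top of the
// asynchronous one: it issues the query and waits for the callback to fire.
class BlockingResolverAdapter {
public:
  explicit BlockingResolverAdapter(AsyncResolver &R) : R(R) {}

  SymbolMap lookup(const std::set<SymbolName> &Names) {
    std::promise<SymbolMap> P;
    auto F = P.get_future();
    R.lookup(Names, [&P](SymbolMap Result) { P.set_value(std::move(Result)); });
    return F.get(); // block until the asynchronous query completes
  }

private:
  AsyncResolver &R;
};

int main() {
  AsyncResolver R;
  BlockingResolverAdapter Adapter(R);
  auto Syms = Adapter.lookup({"foo", "bar"});
  return Syms.size() == 2 ? 0 : 1;
}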
llvm-svn: 323073
lookupFlags returns a SymbolFlagsMap for the requested symbols, along with a
set containing the SymbolStringPtr for any symbol not found in the VSO.
The JITSymbolFlags for each symbol will have been stripped of its transient
JIT-state flags (i.e. NotMaterialized, Materializing).
Calling lookupFlags does not trigger symbol materialization.
llvm-svn: 323060
version being used on some of the green dragon builders (plus a clang-format).
Workaround: AsynchronousSymbolQuery and VSO want to work with
JITEvaluatedSymbols anyway, so just use them (instead of JITSymbol, which
happens to tickle the bug).
The libcxx bug being worked around was fixed in r276003, and there are plans to
update the offending builders.
llvm-svn: 322140
The original commit broke the builders due to a think-o in an assertion:
AsynchronousSymbolQuery's constructor needs to check the callback member
variables, not the constructor arguments.
llvm-svn: 321853
SymbolSource.
These new APIs are a first stab at tackling some current shortcomings of ORC,
especially in performance and threading support.
VSO (Virtual Shared Object) is a symbol table representing the symbol
definitions of a set of modules that behave as if they had been statically
linked together into a shared object or dylib. Symbol definitions, either
pre-defined addresses or lazy definitions, can be added and queries for symbol
addresses made. The table applies the same linkage strength rules that static
linkers do when constructing a dylib or shared object: duplicate definitions
result in errors, strong definitions override weak or common ones. This class
should improve symbol lookup speed by providing centralized symbol tables (as
compared to the findSymbol implementation in the in-tree ORC layers, which
maintain one symbol table per object file / module added).
AsynchronousSymbolQuery is a query for the addresses of a set of symbols.
Query results are returned via a callback once they become available. By querying
for a set of symbols, rather than one symbol at a time (as the current lookup
scheme does), the JIT has the opportunity to make better use of available
resources (e.g. by spawning multiple jobs to materialize the requested symbols
if possible). Returning results via a callback makes queries asynchronous, so
queries from multiple threads of JIT'd code can proceed simultaneously.
SymbolSource represents a source of symbol definitions. It is used when
adding lazy symbol definitions to a VSO. Symbol definitions can be materialized
when needed or discarded if a stronger definition is found. Materializing on
demand via SymbolSources should (eventually) allow us to remove the lazy
materializers from JITSymbol, which will in turn allow the removal of many
current error checks and reduce the number of RPC round-trips involved in
materializing remote symbols. Adding a discard function allows sources to
discard symbol definitions (or mark them as available_externally), reducing the
amount of redundant code generated by the JIT for ODR symbols.
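A standalone sketch of the query-with-callback idea (simplified,
single-threaded stand-ins; the real AsynchronousSymbolQuery also tracks
finalization and error states):
#include <cstdint>
#include <functional>
#include <map>
#include <set>
#include <string>

using SymbolName = std::string;
using SymbolMap  = std::map<SymbolName, std::uint64_t>;

// Hypothetical stand-in for AsynchronousSymbolQuery: it collects resolved
// addresses and fires the callback once every requested symbol has arrived.
class AsynchronousSymbolQuery {
public:
  AsynchronousSymbolQuery(std::set<SymbolName> Symbols,
                          std::function<void(SymbolMap)> OnComplete)
      : Outstanding(std::move(Symbols)), OnComplete(std::move(OnComplete)) {}

  // Called as individual definitions become available, e.g. by separate
  // materialization jobs working on different parts of the query.
  void resolve(const SymbolName &Name, std::uint64_t Addr) {
    if (Outstanding.erase(Name)) {
      Resolved[Name] = Addr;
      if (Outstanding.empty())
        OnComplete(std::move(Resolved)); // all symbols ready: deliver results
    }
  }

private:
  std::set<SymbolName> Outstanding;
  SymbolMap Resolved;
  std::function<void(SymbolMap)> OnComplete;
};

int main() {
  bool Done = false;
  AsynchronousSymbolQuery Q({"foo", "bar"},
                            [&Done](SymbolMap Syms) { Done = Syms.size() == 2; });
  Q.resolve("foo", 0x1000); // callback not fired yet: "bar" is still pending
  Q.resolve("bar", 0x2000); // last symbol arrives, callback fires now
  return Done ? 0 : 1;
}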
llvm-svn: 321838
rL319838 introduced SymbolStringPool which uses 8 byte atomics for
reference counters. On systems which do not support such atomics
natively such as MIPS32, explicitly add libatomic as one of the
libraries for SymbolStringPool's unittest.
Reviewers: lhames, beanz
Differential Revision: https://reviews.llvm.org/D41010
llvm-svn: 321225
We currently use target_link_libraries without an explicit scope
specifier (INTERFACE, PRIVATE or PUBLIC) when linking executables.
Dependencies added in this way apply to both the target and its
dependencies, i.e. they become part of the executable's link interface
and are transitive.
Transitive dependencies generally don't make sense for executables,
since you wouldn't normally be linking against an executable. This also
causes issues for generating install export files when using
LLVM_DISTRIBUTION_COMPONENTS. For example, clang has a lot of LLVM
library dependencies, which are currently added as interface
dependencies. If clang is in the distribution components but the LLVM
libraries it depends on aren't (which is a perfectly legitimate use case
if the LLVM libraries are being built static and there are therefore no
run-time dependencies on them), CMake will complain about the LLVM
libraries not being in the export set when attempting to generate the
install export file for clang. This is reasonable behavior on CMake's
part, and the right thing is for LLVM's build system to explicitly use
PRIVATE dependencies for executables.
Unfortunately, CMake doesn't allow you to mix and match the keyword and
non-keyword target_link_libraries signatures for a single target; i.e.,
if a single call to target_link_libraries for a particular target uses
one of the INTERFACE, PRIVATE, or PUBLIC keywords, all other calls must
also be updated to use those keywords. This means we must do this change
in a single shot. I also fully expect to have missed some instances; I
tested by enabling all the projects in the monorepo (except dragonegg),
and configuring both with and without shared libraries, on both Darwin
and Linux, but I'm planning to rely on the buildbots for other
configurations (since it should be pretty easy to fix those).
Even after this change, we still have a lot of target_link_libraries
calls that don't specify a scope keyword, mostly for shared libraries.
I'm thinking about addressing those in a follow-up, but that's a
separate change IMO.
Differential Revision: https://reviews.llvm.org/D40823
llvm-svn: 319840
comparison of symbol names.
SymbolStringPool is a thread-safe string pool that will be used in upcoming Orc
APIs to facilitate efficient storage and fast comparison of symbol name strings.
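The idea, sketched with standard library types only (this is not the actual
SymbolStringPool implementation, which manages its own reference counts):
#include <map>
#include <memory>
#include <mutex>
#include <string>

// Hypothetical sketch of an interned-string pool: each distinct name is
// stored once, handles are reference-counted (std::shared_ptr's atomic
// counters stand in for the pool's own), and equality of handles is a cheap
// pointer comparison.
class StringPool {
public:
  using PooledString = std::shared_ptr<const std::string>;

  PooledString intern(const std::string &S) {
    std::lock_guard<std::mutex> Lock(PoolMutex); // thread-safe interning
    auto &Entry = Pool[S];
    if (!Entry)
      Entry = std::make_shared<const std::string>(S);
    return Entry;
  }

private:
  std::mutex PoolMutex;
  std::map<std::string, PooledString> Pool;
};

int main() {
  StringPool Pool;
  auto A = Pool.intern("main");
  auto B = Pool.intern("main");
  auto C = Pool.intern("foo");
  // Same name -> same pooled entry, so comparing handles compares pointers
  // rather than character data.
  return (A == B && A != C) ? 0 : 1;
}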
llvm-svn: 319839
/code/llvm-project/llvm/unittests/ExecutionEngine/Orc/RTDyldObjectLinkingLayerTest.cpp:260:38: error: lambda capture 'this' is not used [-Werror,-Wunused-lambda-capture]
[this](decltype(ObjLayer)::ObjHandleT,
llvm-svn: 314454
concept.
Add a unit-test to make sure we don't backslide, and tweak the MockBaseLayer
utility to make it easier to test this kind of thing in the future.
llvm-svn: 314374
This will allow async handlers to be added that return void or Error::success().
Such handlers are expected to be common, since one of the primary uses of
addAsyncHandler is to run the body of the handler in a detached thread, in which
case the main handler returns immediately and does not need to provide an Error
value.
llvm-svn: 312746
The existing code created a JITSymbol with an invalid materializer instead,
guaranteeing a 'missing symbol' error when someone tried to materialize the
symbol.
llvm-svn: 312584
This patch introduces RemoteObjectClientLayer and RemoteObjectServerLayer,
which can be used to forward ORC object-layer operations from a JIT stack in
the client to a JIT stack (consisting only of object-layers) in the server.
This is a new way to support remote-JITing in LLVM. The previous approach
(supported by OrcRemoteTargetClient and OrcRemoteTargetServer) used a
remote-mapping memory manager that sat "beneath" the JIT stack and sent
fully-relocated binary blobs to the server. The main advantage of the new
approach is that relocatable objects can be cached on the server and re-used
(if the code that they represent hasn't changed), whereas fully-relocated blobs
can not (since the addresses they have been permanently bound to will change
from run to run).
llvm-svn: 312511