Doing this gets functions' low_pc and global variables' locations right
in the output debug info. It could also get right any other attributes
that need to be relocated (in linker terms), but I don't know of any
other than the address attributes.
This doesn't fix up low_pc attributes in compile_unit, lexical_block
or inlined_subroutine DIEs, nor does it get high_pc attributes right
for functions. That will come in a subsequent commit.
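A minimal sketch of the idea, with purely illustrative names and types (this
is not dsymutil's actual code): an address attribute covered by one of the
previously gathered valid relocations gets the computed address delta
applied; everything else is copied through unchanged.

  #include <cstdint>
  #include <vector>

  struct ValidReloc { uint64_t Offset; int64_t AddressDelta; };

  // Hypothetical helper: patch an address-class attribute (e.g. DW_AT_low_pc)
  // if a valid relocation covers its offset in the input debug info.
  uint64_t relocateAddressAttr(uint64_t AttrValue, uint64_t AttrOffset,
                               const std::vector<ValidReloc> &Relocs) {
    for (const ValidReloc &R : Relocs)
      if (R.Offset == AttrOffset)
        return AttrValue + R.AddressDelta;
    return AttrValue; // attributes without a covering relocation are unchanged
  }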
llvm-svn: 231544
Reference attributes are mainly handled by just creating DIEEntry
attributes for them. There is a special case for DW_FORM_ref_addr
attributes though, because the DIEEntry code needs a DwarfDebug
instance to emit them (and we don't have one as we do no CodeGen).
In that case, just use DIEInteger attributes with the right form.
llvm-svn: 231531
The start offset of a linked unit is known before starting to clone
its DIEs. Handling DW_FORM_ref_addr attributes requires that this
offset is set while cloning the unit. Split CompileUnit::computeOffsets()
into setStartOffset() and computeNextUnitOffset() and call them
respectively before cloning the DIEs and right after.
llvm-svn: 231530
This commit adds code to emit DIE trees that have been pruned of the
parts that haven't been marked as kept in the previous pass.
It works by 'cloning' the input DIE tree (as read by libDebugInfoDwarf)
into a tree of DIE objects. Cloning the DIEs means essentially cloning
their attributes. The code in this commit only handles scalar and
block attributes (scalars because they are trivial, blocks because they
can't easily be replaced by a scalar placeholder); all the other ones
are replaced by placeholder zero values and will be handled in
subsequent commits.
The added tests mostly check that the DIE tree has the correct layout and
also verify that a few chosen scalar and block attributes correctly make
their way into the output.
llvm-svn: 231300
The issue was that we were always printing the remarks. Fix that and add a test
showing that it prints nothing if -pass-remarks is not given.
Original message:
Correctly handle -pass-remarks in the gold plugin.
llvm-svn: 231273
Summary:
DataLayout keeps the string used for its creation.
As a side effect it is no longer needed in the Module.
This is "almost" NFC, the string is no longer
canonicalized, you can't rely on two "equals" DataLayout
having the same string returned by getStringRepresentation().
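For illustration, a hedged example using the public DataLayout API (whether
these two particular strings end up structurally equal depends on the
defaulting rules, so treat the concrete strings as an assumption; the point
is only that equality is field-wise while the creation string is kept
verbatim):

  #include "llvm/IR/DataLayout.h"
  #include <cassert>

  void example() {
    // Preferred alignment defaults to the ABI alignment when omitted, so
    // these two descriptions can describe the same layout...
    llvm::DataLayout A("e-p:64:64");
    llvm::DataLayout B("e-p:64:64:64");
    if (A == B) {
      // ...yet the strings used to create them are preserved and may differ.
      assert(A.getStringRepresentation() != B.getStringRepresentation());
    }
  }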
Get rid of DataLayoutPass: the DataLayout is in the Module
The DataLayout is "per-module", let's enforce this by not
duplicating it more than necessary.
One more step toward non-optionality of the DataLayout in the
module.
Make DataLayout Non-Optional in the Module
Module->getDataLayout() never returns nullptr anymore.
Reviewers: echristo
Subscribers: resistor, llvm-commits, jholewinski
Differential Revision: http://reviews.llvm.org/D7992
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 231270
This will now display enum definitions both at the global
scope as well as nested inside of classes. Additionally,
it will no longer display enums at the global scope if the
enum is nested. Instead, it will omit the definition of
the enum globally and instead emit it in the corresponding
class definition.
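For example, given input like the following (illustrative source, not the
dumper's own code), Color is now emitted only as part of Foo's class
definition, while Global is still printed at the global scope:

  class Foo {
  public:
    enum Color { Red, Green, Blue };  // nested: printed inside Foo only
  };

  enum Global { A, B };               // non-nested: printed at global scope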
llvm-svn: 231215
Accidentally committed a few more of these cleanup changes than
intended. Still breaking these out & tidying them up.
This reverts commit r231135.
llvm-svn: 231136
There doesn't seem to be any need to assert that iterator assignment is
between iterators over the same node - if you want to reuse an iterator
variable to iterate another node, that's perfectly acceptable. Just
don't mix comparisons between iterators into disjoint sequences, as
usual.
llvm-svn: 231135
The issue is that now we have a diag handler during optimizations
and get forwarded every optimization remark, flooding stdout.
The same filtering should probably be done with or without a
custom handler, but for now just ignore remarks.
Original message:
gold-plugin: "Upgrade" debug info and handle its warnings.
The gold plugin never calls MaterializeModule, so any old debug info
was not deleted and could cause crashes.
Now that it is being "upgraded", the plugin also has to handle warnings
and create Modules with a nice id (it shows in the warning).
llvm-svn: 230991
A short list of some of the improvements:
1) Now supports -all command line argument, which implies many
other command line arguments to simplify usage.
2) Now supports -no-compiler-generated command line argument to
exclude compiler generated types.
3) Prints base class list.
4) -class-definitions implies -types.
5) Proper display of bitfields.
6) Can now distinguish between struct/class/interface/union.
And a few other minor tweaks.
llvm-svn: 230933
Previously it was impossible to distinguish between "There is
no PDB implementation for this platform" and "I tried to load
the PDB, but couldn't find the file", making it hard to figure
out if you built llvm-pdbdump incorrectly or if you just mistyped
a file name.
This patch adds proper error handling so that we can know exactly
what went wrong.
llvm-svn: 230868
This class is responsible for getting the linked data to the
disk in the appropriate form. Today it is an empty shell that
just instantiates an MC layer.
As we do not put anything in the resulting file yet, we just
check it has the right architecture (and check that -o does
the right thing).
To be able to create all the components, this commit adds a
few dependencies to llvm-dsymutil, namely all-targets, MC and
AsmPrinter.
Also add a -no-output option, so that tests that do not need
the binary result can continue to run even if they do not have
the required target linked in.
llvm-svn: 230824
...and reimplement DwarfLinker::reportWarning in terms of it. Components
other than the DwarfLinker will need to report warnings, and I'm
about to add a similar "error()" helper at the same global level, so
make that consistent.
llvm-svn: 230820
Function pointers were not correctly handled by the dumper, and
they would print as "* name". They now print as
"int (__cdecl *name)(int arg1, int arg2)" as they should.
Also, doubles were being printed as floats. This fixes that bug
as well, and adds tests for all builtin types, as well as a test
for function pointers.
llvm-svn: 230703
The gold plugin never calls MaterializeModule, so any old debug info
was not deleted and could cause crashes.
Now that it is being "upgraded", the plugin also has to handle warnings
and create Modules with a nice id (it shows in the warning).
llvm-svn: 230655
Since r199356, we've printed a warning when dropping debug info.
r225562 started crashing on that, since it registered a diagnostic
handler that only expected errors. This fixes the handler to expect
other severities. As a side effect, it now prints "error: " at the
start of error messages, similar to `llvm-as`.
There was a testcase for r199356, but it only really checked the
assembler. Move `test/Bitcode/drop-debug-info.ll` to `test/Assembler`,
and introduce `test/Bitcode/drop-debug-info.3.5.ll` (and companion
`.bc`) to test the bitcode reader.
Note: tools/gold/gold-plugin.cpp has an equivalent bug, but I'm not sure
what the best fix is there. I'll file a PR.
llvm-svn: 230416
Like r230414, add bitcode support, including backwards compatibility, for
an explicit type parameter to GEP.
At the suggestion of Duncan I tried coalescing the two older bitcodes into a
single new bitcode, though I did hit a wrinkle: I couldn't figure out how to
create an explicit abbreviation for a record with a variable number of
arguments (the indices to the gep). This means the discriminator between
inbounds and non-inbounds gep is a full variable-length field, I believe? Is my
understanding correct? Is there a way to create such an abbreviation? Should I
just use two bitcodes as before?
Reviewers: dexonsmith
Differential Revision: http://reviews.llvm.org/D7736
llvm-svn: 230415
This reverts commit r230062.
Debian stable (wheezy) ships still with cmake 2.8.9.
The commit broke my LLVM/Polly buildbot, to my knowledge our only Linux+cmake
buildbot.
llvm-svn: 230343
When debugging LTO issues with ld64, we use -save-temps to save the merged
optimized bitcode file, then invoke ld64 again on the single bitcode file to
speed up debugging code generation passes and ld64 stuff after code generation.
llvm linking a single bitcode file via lto_codegen_add_module will generate a
different bitcode file from the single input. With the newly-added
lto_codegen_set_module, we can make sure the destination module is the same as
the input.
lto_codegen_set_module will transfer the ownership of the module to the code
generator.
rdar://19024554
llvm-svn: 230290
When multiple regions start on the same line, llvm-cov was just
showing the count of the last one as the line count. This can be
confusing and misleading for things like one-liner loops, where the
count at the end isn't very interesting, or even "if" statements with
an opening brace at the end of the line.
Instead, use the maximum of all of the region start counts.
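A minimal sketch of the new policy, with illustrative types rather than
llvm-cov's internal ones:

  #include <algorithm>
  #include <cstdint>
  #include <vector>

  struct RegionStart { uint64_t ExecutionCount; };

  // A line's count is the maximum count over all regions starting on it,
  // rather than the count of whichever region happened to start last.
  uint64_t lineCount(const std::vector<RegionStart> &StartsOnLine) {
    uint64_t Count = 0;
    for (const RegionStart &R : StartsOnLine)
      Count = std::max(Count, R.ExecutionCount);
    return Count;
  }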
llvm-svn: 230263
This adds the --class-definitions flag. If specified, when dumping
types, instead of "class Foo" you will see the full class definition,
with member functions, constructors, access specifiers.
NOTE: Using this option can be very slow, as generating a full class
definition requires accessing many different parts of the PDB.
llvm-svn: 230203
This increases the flexibility of how to dump different
symbol types -- necessary for context-sensitive formatting of
symbol types -- and also improves the modularity by allowing
the dumping to be implemented in the actual dumper, as opposed
to in the PDB library.
llvm-svn: 230184
This removes a wealth of options, and instead now only provides
three options. -symbols, -types, and -compilands. This greatly
simplifies use of the tool, and makes it easier to understand
what you're going to see when you run the tool.
llvm-svn: 230182
This will help us study the format of individual symbol
records more closely.
Differential Revision: http://reviews.llvm.org/D7664
Reviewed by: Timur Iskhodzhanov
llvm-svn: 229730
with the Mach-O S_LITERAL_POINTERS section type.
Also fix the printing of the leading addresses for literal sections to be consistent and
not print the 0x prefix. Updated test cases to match.
llvm-svn: 229548
a gold binary explicitly. Substitute this binary into the tests rather
than just directly executing the 'ld' binary.
This should allow folks to inject a cross compiling gold binary, or in
my case to use a gold binary built and installed somewhere other than
/usr/bin/ld. It should also allow the tests to find 'ld.gold' so that
things work even if gold isn't the default on the system.
I've only stubbed out support in the makefile to preserve the existing
behavior with none of the fancy logic. If someone else wants to add
logic here, they're welcome to do so.
llvm-svn: 229251
This code didn't really make sense as is. If a filename is passed in,
the user obviously wants the coverage *for that file*, not *for
everything*.
llvm-svn: 229217
PR22575 occurred because we were unsafely storing references into a
std::vector. If the vector moved because it grew, we'd be left
iterating through garbage memory. This avoids the issue by simplifying
the logic to gather coverage information as we go, rather than storing
it and iterating over it.
I'm relying on the existing tests showing that this is semantically
NFC, since it's difficult to hit the issue this fixes without
relatively large covered programs.
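The underlying hazard, reduced to a self-contained illustration (not the
actual llvm-cov code):

  #include <vector>

  void hazard() {
    std::vector<int> Counts;
    Counts.push_back(1);
    int *First = &Counts[0];   // pointer into the vector's storage
    for (int I = 0; I < 1000; ++I)
      Counts.push_back(I);     // growth may reallocate and move the elements
    // 'First' may now dangle; dereferencing it is undefined behavior.
  }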
llvm-svn: 229215
Now that llgo ships its own go command we can rely on it having support for $GCCGO.
Differential Revision: http://reviews.llvm.org/D7628
llvm-svn: 229210
With this commit, llvm-dsymutil learns how to choose which DIEs
it will link in the final output and which ones it won't. This
is based on the 'valid relocation' information that has been
built in the previous commits.
The test only tests that we choose the right 'root DIEs'. The
selection algorithm (and especially the part that walks the
dependencies of a root DIE) lacks a bit of test coverage. This
will be much easier to cover when we output actual Dwarf and
thus can use llvm-dwarfdump to verify the structure of the
emitted DIE trees. I'll add more tests then.
llvm-svn: 229183
These 'valid relocations' in the debug_info section will be how
dsymutil identifies the DIEs it needs to keep in the linked debug
information.
llvm-svn: 229178
because I didn't have binutils set up properly to build the gold plugin.
Fixes PR22581 which was filed because this broke the build for folks
relying on the plugin. Very sorry! =]
I've gotten the plugin stuff building now as well so it shouldn't keep
happening.
llvm-svn: 229156
LLVM's include tree and the use of using declarations to hide the
'legacy' namespace for the old pass manager.
This undoes the primary modules-hostile change I made to keep
out-of-tree targets building. I sent an email inquiring about whether
this would be reasonable to do at this phase and people seemed fine with
it, so making it a reality. This should allow us to start bootstrapping
with modules to a certain extent along with making it easier to mix and
match headers in general.
The updates to any code for users of LLVM are very mechanical. Switch
from including "llvm/PassManager.h" to "llvm/IR/LegacyPassManager.h".
Qualify the types which now produce compile errors with "legacy::". The
most common ones are "PassManager", "PassManagerBase", and
"FunctionPassManager".
llvm-svn: 229094
In particular this patch adds the ability to dump complete
function signature information including argument types as
correctly formatted strings. A side effect of this is that
almost all symbol and meta types are now formatted.
llvm-svn: 229076
Frequently you only want to iterate over children of a specific
type (e.g. functions). Previously you would get back a generic
interface that allowed iteration over the base symbol type,
which you would have to dyn_cast<> each one of. With this patch,
we allow the user to specify the concrete type as a template
parameter, and it will return an iterator which returns instances
of the concrete type directly.
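A self-contained sketch of the idea, with made-up types and dynamic_cast
standing in for the PDB symbol casting machinery:

  #include <memory>
  #include <vector>

  struct PDBSymbol { virtual ~PDBSymbol() = default; };
  struct PDBSymbolFunc : PDBSymbol {};
  struct PDBSymbolData : PDBSymbol {};

  // findAllChildren<T>() hands back only children of the requested concrete
  // type, so callers no longer have to cast each generic child themselves.
  template <typename T>
  std::vector<T *>
  findAllChildren(const std::vector<std::unique_ptr<PDBSymbol>> &Children) {
    std::vector<T *> Result;
    for (const auto &Child : Children)
      if (auto *Typed = dynamic_cast<T *>(Child.get()))
        Result.push_back(Typed);
    return Result;
  }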
llvm-svn: 228960
bfd creates the output file early, so calling exit(0) is not enough; the
file needs to be explicitly deleted.
Patch by: H.J. Lu <hjl.tools@gmail.com>
llvm-svn: 228946
Summary:
Move calls to get_input_file and release_input_file out of
getModuleForFile(). Otherwise release_input_file may end up
unmapping a view of the file while the view is still being
used by the Module (on 32-bit hosts).
Fix for PR22482.
Test Plan: Add test using --no-map-whole-files.
Reviewers: rafael, nlewycky
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D7539
llvm-svn: 228842
This makes llvm-pdbdump available on all platforms, although it
will currently fail to create a dumper if there is no PDB reader
implementation for the current platform.
It implements dumping of compilands and children, which is less
information than was previously available, but it has to be
rewritten from scratch using the new set of interfaces, so the
rest of the functionality will be added back in subsequent commits.
llvm-svn: 228755
lto_codegen_compile_optimized. Also add lto_api_version.
Before this commit, we can only dump the optimized bitcode after running
lto_codegen_compile, but that includes the effects of running codegen
passes; one example is the StackProtector pass. We get an assertion
failure when running
llc on the optimized bitcode, because StackProtector is effectively run twice.
After splitting lto_codegen_compile, the linker can choose to dump the bitcode
before running lto_codegen_compile_optimized.
lto_api_version is added so ld64 can check for runtime-availability of the new
API.
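A hedged usage sketch from a linker's perspective, using the C API entry
points this commit adds to llvm-c/lto.h (error handling is elided and the
output file name is made up); lto_api_version() lets a client such as ld64
check at runtime that these entry points are present:

  #include "llvm-c/lto.h"

  void dumpThenCodegen(lto_code_gen_t CG) {
    // Run IR-level optimizations only, without the codegen-time passes
    // (e.g. StackProtector) that used to leak into the -save-temps output.
    lto_codegen_optimize(CG);
    lto_codegen_write_merged_modules(CG, "merged.opt.bc");

    // Now generate code from the already-optimized module.
    size_t Length = 0;
    const void *Object = lto_codegen_compile_optimized(CG, &Length);
    (void)Object;
  }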
rdar://19565500
llvm-svn: 228000
terms of the new pass manager's TargetIRAnalysis.
Yep, this is one of the nicer bits of the new pass manager's design.
Passes can in many cases operate in a vacuum and so we can just nest
things when convenient. This is particularly convenient here as I can
now consolidate all of the TargetMachine logic on this analysis.
The most important change here is that this pushes the function we need
TTI for all the way into the TargetMachine, and re-creates the TTI
object for each function rather than re-using a single one across functions.
We're now prepared to teach the targets to produce function-specific TTI
objects with specific subtargets cached, etc.
One piece of feedback I'd love here is whether it's worth renaming any of
this stuff. None of the names really seem that awesome to me at this
point, but TargetTransformInfoWrapperPass is particularly ... odd.
TargetIRAnalysisWrapper might make more sense. I would want to do that
rename separately anyways, but let me know what you think.
llvm-svn: 227731
This should be sufficient to replace the initial (minor) function pass
pipeline in Clang with the new pass manager. I'll probably add an (off
by default) flag to do that just to ensure we can get extra testing.
llvm-svn: 227726
I've added RUN lines both to the basic test for EarlyCSE and the
target-specific test, as this serves as a nice test that the TTI layer
in the new pass manager is in fact working well.
llvm-svn: 227725
produce it.
This adds a function to the TargetMachine that produces this analysis
via a callback for each function. This in turn paves the way to produce
a *different* TTI per-function with the correct subtarget cached.
I've also done the necessary wiring in the opt tool to thread the target
machine down and make it available to the pass registry so that we can
construct this analysis from a target machine when available.
llvm-svn: 227721
live in a class.
While this isn't really significant right now, I need to expose some
state to the pass construction expressions, and making them get
evaluated within a class context is a nice way to collect members that
they may need to access.
llvm-svn: 227715
base which it adds a single analysis pass to, to instead return the type
erased TargetTransformInfo object constructed for that TargetMachine.
This removes all of the pass variants for TTI. There is now a single TTI
*pass* in the Analysis layer. All of the Analysis <-> Target
communication is through the TTI's type erased interface itself. While
the diff is large here, it is nothing more than code motion to make
types available in a header file for use in a different source file
within each target.
I've tried to keep all the doxygen comments and file boilerplate in line
with this move, but let me know if I missed anything.
With this in place, the next step to making TTI work with the new pass
manager is to introduce a really simple new-style analysis that produces
a TTI object via a callback into this routine on the target machine.
Once we have that, we'll have the building blocks necessary to accept
a function argument as well.
llvm-svn: 227685
type erased interface and a single analysis pass rather than an
extremely complex analysis group.
The end result is that the TTI analysis can contain a type erased
implementation that supports the polymorphic TTI interface. We can build
one from a target-specific implementation or from a dummy one in the IR.
I've also factored all of the code into "mix-in"-able base classes,
including CRTP base classes to facilitate calling back up to the most
specialized form when delegating horizontally across the surface. These
aren't as clean as I would like and I'm planning to work on cleaning
some of this up, but I wanted to start by putting it into the right form.
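To make the shape of this concrete, here is a small self-contained sketch of
the pattern (illustrative names and a single toy method, not the real TTI
surface): a value-semantic wrapper erases the concrete implementation behind
a Concept/Model pair, and a CRTP base lets default implementations delegate
back to the most-derived class.

  #include <memory>
  #include <utility>

  // CRTP mix-in: default implementations call back into the most-derived
  // implementation ("horizontal" delegation) instead of a fixed base class.
  template <typename T> struct TTIImplCRTPBase {
    const T &self() const { return static_cast<const T &>(*this); }
    int getCallCost(int NumArgs) const {
      return self().getOperationCost() * NumArgs;
    }
  };

  // Dummy IR-level implementation; a target would provide its own instead.
  struct NoTTIImpl : TTIImplCRTPBase<NoTTIImpl> {
    int getOperationCost() const { return 1; }
  };

  // Type-erased, value-semantic wrapper exposed to the Analysis layer.
  class TargetTransformInfo {
    struct Concept {
      virtual ~Concept() {}
      virtual int getCallCost(int NumArgs) const = 0;
    };
    template <typename T> struct Model : Concept {
      T Impl;
      explicit Model(T I) : Impl(std::move(I)) {}
      int getCallCost(int NumArgs) const override {
        return Impl.getCallCost(NumArgs);
      }
    };
    std::unique_ptr<Concept> Impl;

  public:
    template <typename T>
    explicit TargetTransformInfo(T I) : Impl(new Model<T>(std::move(I))) {}
    int getCallCost(int NumArgs) const { return Impl->getCallCost(NumArgs); }
  };

A pass then just holds a TargetTransformInfo by value, e.g.
TargetTransformInfo TTI{NoTTIImpl{}};, regardless of which implementation
sits behind it.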
There are a number of reasons for this change, and this particular
design. The first and foremost reason is that an analysis group is
complete overkill, and the chaining delegation strategy was so opaque,
confusing, and high overhead that TTI was suffering greatly for it.
Several of the TTI functions had failed to be implemented in all places
because the chaining-based delegation meant there was no checking for
this. A few other functions were implemented with incorrect delegation.
The message to me was very clear working on this -- the delegation and
analysis group structure was too confusing to be useful here.
The other reason of course is that this is a *much* more natural fit for
the new pass manager. This will lay the groundwork for a type-erased
per-function info object that can look up the correct subtarget and even
cache it.
Yet another benefit is that this will significantly simplify the
interaction of the pass managers and the TargetMachine. See the future
work below.
The downside of this change is that it is very, very verbose. I'm going
to work to improve that, but it is somewhat an implementation necessity
in C++ to do type erasure. =/ I discussed this design really extensively
with Eric and Hal prior to going down this path, and afterward showed
them the result. No one was really thrilled with it, but there doesn't
seem to be a substantially better alternative. Using a base class and
virtual method dispatch would make the code much shorter, but as
discussed in the update to the programmer's manual and elsewhere,
a polymorphic interface feels like the more principled approach even if
this is perhaps the least compelling example of it. ;]
Ultimately, there is still a lot more to be done here, but this was the
huge chunk that I couldn't really split things out of because this was
the interface change to TTI. I've tried to minimize all the other parts
of this. The follow up work should include at least:
1) Improving the TargetMachine interface by having it directly return
a TTI object. Because we have a non-pass object with value semantics
and an internal type erasure mechanism, we can narrow the interface
of the TargetMachine to *just* do what we need: build and return
a TTI object that we can then insert into the pass pipeline.
2) Make the TTI object be fully specialized for a particular function.
This will include splitting off a minimal form of it which is
sufficient for the inliner and the old pass manager.
3) Add a new pass manager analysis which produces TTI objects from the
target machine for each function. This may actually be done as part
of #2 in order to use the new analysis to implement #2.
4) Work on narrowing the API between TTI and the targets so that it is
easier to understand and less verbose to type erase.
5) Work on narrowing the API between TTI and its clients so that it is
easier to understand and less verbose to forward.
6) Try to improve the CRTP-based delegation. I feel like this code is
just a bit messy and exacerbating the complexity of implementing
the TTI in each target.
Many thanks to Eric and Hal for their help here. I ended up blocked on
this somewhat more abruptly than I expected, and so I appreciate getting
it sorted out very quickly.
Differential Revision: http://reviews.llvm.org/D7293
llvm-svn: 227669
segname,sectname to specify a Mach-O section to print. The printing is based on
the section type or section attributes.
Printing of the module initialization and termination section types is added
with this change. Printing of other section types will be added next.
llvm-svn: 227649
In preparation for adding PDB support to LLVM, this moves the
DWARF parsing code to its own subdirectory under DebugInfo, and
renames LLVMDebugInfo to LLVMDebugInfoDWARF.
This is purely a mechanical / build system change.
Differential Revision: http://reviews.llvm.org/D7269
Reviewed by: Eric Christopher
llvm-svn: 227586
Certain aspects of llvm-pdbdump require language support only present in
MSVC 2013 and higher. Since this is strictly a utility, and since we hope
to drop support for MSVC 2012 soon, don't build this unless MSVC 2013 or
higher.
llvm-svn: 227479
If the personality is not a recognized MSVC personality function, this
pass delegates to the dwarf EH preparation pass. This chaining supports
people on *-windows-itanium or *-windows-gnu targets.
Currently this recognizes some personalities used by MSVC and turns
resume instructions into traps to avoid link errors. Even if cleanups
are not used in the source program, LLVM requires the frontend to emit a
code path that resumes unwinding after an exception. Clang does this,
and we get unreachable resume instructions. PR20300 covers cleaning up
these unreachable calls to resume.
Reviewers: majnemer
Differential Revision: http://reviews.llvm.org/D7216
llvm-svn: 227405
The libDebugInfo DIE parsing doesn't store these relationships, so we have to
recompute them. This commit introduces the CompileUnit bookkeeping class to
store this data. It will be expanded with more fields in the future.
No tests as this produces no visible output.
llvm-svn: 227382
It's an empty shell for now. Its main method just opens the debug
map objects and parses their Dwarf info. Test that we at least do
that correctly.
llvm-svn: 227337
Summary:
MetadataAsValue uses a canonical format that strips the MDNode if it
contains only a single constant value. This triggers an assertion when
trying to cast the value to a MDNode.
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D7165
llvm-svn: 227319
This adds two command line options:
--symbols dumps a list of all symbols found in the PDB.
--symbol-details dumps the same list, but with detailed information
for every symbol such as type, attributes, etc.
llvm-svn: 227286
This adds two command line options to llvm-pdbdump.
--source-files prints a flat list of all source files in the PDB.
--compilands prints a list of all compilands (e.g. object files)
that the PDB knows about, and for each one, a list of
source files that the compiland is composed of as well
as a hash of the original source file.
llvm-svn: 227276
PDB stores some of its data in streams and some in tables.
This patch teaches llvm-pdbdump to dump basic summary data
for the debug tables.
In support of this, this patch also adds some DIA helper
classes, such as a wrapper around an IDiaSymbol interface,
as well as helpers for outputting various enumerations to
a raw_ostream.
llvm-svn: 227257
llvm-pdbdump is a tool which can be used to dump the contents
of Microsoft-generated PDB files. It makes use of the Microsoft
DIA SDK, which is a COM based library designed specifically for
this purpose.
The initial commit of this tool dumps the raw bytes from PDB data
streams. Future commits will dump more semantic information such
as types, symbols, source files, etc., similar to the types of
information accessible via llvm-dwarfdump.
Reviewed by: Aaron Ballman, Reid Kleckner, Chandler Carruth
Differential Revision: http://reviews.llvm.org/D7153
llvm-svn: 227241
derived classes.
Since global data alignment, layout, and mangling is often based on the
DataLayout, move it to the TargetMachine. This ensures that global
data is going to be laid out and mangled consistently if the subtarget
changes on a per function basis. Prior to this all targets(*) have
had subtarget dependent code moved out and onto the TargetMachine.
*One target hasn't been migrated as part of this change: R600. The
R600 port has, as a subtarget feature, the size of pointers and
this affects global data layout. I've currently hacked in a FIXME
to enable progress, but the port needs to be updated to either pass
the 64-bitness to the TargetMachine, or fix the DataLayout to
avoid subtarget dependent features.
llvm-svn: 227113
MIPS64 ELF file has a very specific relocation record format. Each
record might specify up to three relocation operations. So the `r_info`
field in fact consists of three relocation type sub-fields and an optional
code for "special" symbols.
http://techpubs.sgi.com/library/manuals/4000/007-4658-001/pdf/007-4658-001.pdf
page 40
The patch implements support of the MIPS64 relocation record format in
yaml2obj/obj2yaml tools by introducing new optional Relocation fields:
Type2, Type3, and SpecSym. These fields are recognized only if the
object/YAML file relates to the MIPS64 target.
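For reference, a rough sketch of how r_info decomposes for MIPS64 per the
document above (field names mirror the YAML keys; the byte-order handling
that makes this awkward in practice is deliberately elided):

  #include <cstdint>

  struct Elf64MipsRInfo {
    uint32_t r_sym;   // symbol index
    uint8_t  r_ssym;  // code of a "special" symbol (YAML key SpecSym)
    uint8_t  r_type3; // third relocation type      (YAML key Type3)
    uint8_t  r_type2; // second relocation type     (YAML key Type2)
    uint8_t  r_type;  // first relocation type      (YAML key Type)
  };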
Differential Revision: http://reviews.llvm.org/D7136
llvm-svn: 227044
This just lifts the logic into a static helper function, sinks the
legacy pass to be a trivial wrapper of that helper function, and adds
a trivial wrapper for the new PM as well. Not much to see here.
I switched a test case to run in both modes, but we have to strip the
dead prototypes separately as that pass isn't in the new pass manager
(yet).
llvm-svn: 226999
This is exciting as this is a much more involved port. This is
a complex, existing transformation pass. All of the core logic is shared
between both old and new pass managers. Only the access to the analyses
is separate because the actual techniques are separate. This also uses
a bunch of different and interesting analyses and is the first time
where we need to use an analysis across an IR layer.
This also paves the way to expose instcombine utility functions. I've
got a static function that implements the core pass logic over
a function which might be mildly interesting, but more interesting is
likely exposing a routine which just uses instructions *already in* the
worklist and combines until empty.
I've switched one of my favorite instcombine tests to run with both as
well to make sure this keeps working.
llvm-svn: 226987
manager to support the actual uses of it. =]
When I ported instcombine to the new pass manager I discovered that it
didn't work because TLI wasn't available in the right places. This is
a somewhat surprising and/or subtle aspect of the new pass manager
design that came up before but I think is useful to be reminded of:
While the new pass manager *allows* a function pass to query a module
analysis, it requires that the module analysis is already run and cached
prior to the function pass manager starting up, possibly with
a 'require<foo>' style utility in the pass pipeline. This is an
intentional hurdle because using a module analysis from a function pass
*requires* that the module analysis is run prior to entering the
function pass manager. Otherwise the other functions in the module could
be in who-knows-what state, etc.
A somewhat surprising consequence of this design decision (at least to
me) is that you have to design a function pass that leverages
a module analysis to do so as an optional feature. Even if that means
your function pass does no work in the absence of the module analysis,
you have to handle that possibility and remain conservatively correct.
This is a natural consequence of things being able to invalidate the
module analysis and us being unable to re-run it. And it's a generally
good thing because it lets us reorder passes arbitrarily without
breaking correctness, etc.
This ends up causing problems in one case. What if we have a module
analysis that is *definitionally* impossible to invalidate? In the
places this might come up, the analysis is usually also definitionally
trivial to run even while other transformation passes run on the module,
regardless of the state of anything. And so, it follows that it is
natural to have a hard requirement on such analyses from a function
pass.
It turns out, that TargetLibraryInfo is just such an analysis, and
InstCombine has a hard requirement on it.
The approach I've taken here is to produce an analysis that models this
flexibility by making it both a module and a function analysis. This
exposes the fact that it is in fact safe to compute at any point. We can
even make it a valid CGSCC analysis at some point if that is useful.
However, we don't want to have a copy of the actual target library info
state for each function! This state is specific to the triple. The
somewhat direct and blunt approach here is to turn TLI into a pimpl,
with the state and mutators in the implementation class and the query
routines primarily in the wrapper. Then the analysis can lazily
construct and cache the implementations, keyed on the triple, and
on-demand produce wrappers of them for each function.
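A rough, self-contained sketch of that split (illustrative code, not the
real TargetLibraryInfo classes): the triple-keyed state lives in an Impl
object the analysis caches, and the object handed to each function's passes
is just a thin wrapper over a pointer to it.

  #include <map>
  #include <memory>
  #include <string>

  // Owns the (triple-specific) state and the mutators.
  class TargetLibraryInfoImpl {
    bool HasMemcpy = true;
  public:
    bool has(const std::string &Func) const {
      return HasMemcpy && Func == "memcpy";
    }
  };

  // Cheap per-function wrapper: just a pointer plus the query routines.
  class TargetLibraryInfo {
    const TargetLibraryInfoImpl *Impl;
  public:
    explicit TargetLibraryInfo(const TargetLibraryInfoImpl &I) : Impl(&I) {}
    bool has(const std::string &Func) const { return Impl->has(Func); }
  };

  // The analysis lazily constructs and caches one Impl per triple and hands
  // out wrappers on demand, whether run as a module or a function analysis.
  class TargetLibraryAnalysis {
    std::map<std::string, std::unique_ptr<TargetLibraryInfoImpl>> Impls;
  public:
    TargetLibraryInfo result(const std::string &Triple) {
      std::unique_ptr<TargetLibraryInfoImpl> &I = Impls[Triple];
      if (!I)
        I.reset(new TargetLibraryInfoImpl());
      return TargetLibraryInfo(*I);
    }
  };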
One minor annoyance is that we will end up with a wrapper for each
function in the module. While this is a bit wasteful (one pointer per
function) it seems tolerable. And it has the advantage of ensuring that
we pay the absolute minimum synchronization cost to access this
information should we end up with a nice parallel function pass manager
in the future. We could look into trying to mark when analysis results
are especially cheap to recompute and more eagerly GC-ing the cached
results, or we could look at supporting a variant of analyses whose
results are specifically *not* cached and expected to just be used and
discarded by the consumer. Either way, these seem like incremental
enhancements that should happen when we start profiling the memory and
CPU usage of the new pass manager and not before.
The other minor annoyance is that if we end up using the TLI in both
a module pass and a function pass, those will be produced by two
separate analyses, and thus will point to separate copies of the
implementation state. While a minor issue, I dislike this and would like
to find a way to cleanly allow a single analysis instance to be used
across multiple IR unit managers. But I don't have a good solution to
this today, and I don't want to hold up all of the work waiting to come
up with one. This too seems like a reasonable thing to incrementally
improve later.
llvm-svn: 226981
This patch adds a new set of JIT APIs to LLVM. The aim of these new APIs is to
cleanly support a wider range of JIT use cases in LLVM, and encourage the
development and contribution of re-usable infrastructure for LLVM JIT use-cases.
These APIs are intended to live alongside the MCJIT APIs, and should not affect
existing clients.
Included in this patch:
1) New headers in include/llvm/ExecutionEngine/Orc that provide a set of
components for building JIT infrastructure.
Implementation code for these headers lives in lib/ExecutionEngine/Orc.
2) A prototype re-implementation of MCJIT (OrcMCJITReplacement) built out of the
new components.
3) Minor changes to RTDyldMemoryManager needed to support the new components.
These changes should not impact existing clients.
4) A new flag for lli, -use-orcmcjit, which will cause lli to use the
OrcMCJITReplacement class as its underlying execution engine, rather than
MCJIT itself.
Tests to follow shortly.
Special thanks to Michael Ilseman, Pete Cooper, David Blaikie, Eric Christopher,
Justin Bogner, and Jim Grosbach for extensive feedback and discussion.
llvm-svn: 226940
I had already factored this analysis specifically to enable doing this,
but hadn't actually committed the necessary wiring to get at this from
the new pass manager. This also nicely shows how the separate cache
object can be directly managed by the new pass manager.
This analysis didn't have any direct tests and so I've added a printer
pass and a boring test case. I chose to print the i1 value which is
being assumed rather than the call to llvm.assume as that seems much
more useful for testing... but suggestions on an even better printing
strategy welcome. My main goal was to make sure things actually work. =]
llvm-svn: 226868
Summary:
The default copy and assignment operators for these objects probably don't
actually do what the clients intend, so they should be deleted.
Places using the assignment operator to set the value of an option should
cast to the option's data type first to call into the override for
operator=. Places using the copy constructor just need to be changed to not
copy (i.e. passing by const reference instead of value).
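For illustration (hedged: the option names are made up, but cl::opt itself
comes from llvm/Support/CommandLine.h), the call-site patterns described
above look like this:

  #include "llvm/Support/CommandLine.h"

  static llvm::cl::opt<unsigned> OptA("opt-a", llvm::cl::init(0));
  static llvm::cl::opt<unsigned> OptB("opt-b", llvm::cl::init(0));

  void updateOptions() {
    // With the copy assignment operator deleted, this no longer compiles:
    //   OptA = OptB;
    // Convert to the option's data type first so the assignment goes through
    // the option's own operator= overload instead:
    OptA = static_cast<unsigned>(OptB);
  }

  // And instead of taking an option by value (which would copy it), take it
  // by const reference:
  unsigned doubled(const llvm::cl::opt<unsigned> &Opt) { return 2 * Opt; }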
Reviewers: dexonsmith, chandlerc
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D7114
llvm-svn: 226762
pass and a LoopPrinterPass with the expected associated wiring.
I've added a RUN line to the only test case (!!!) we have that actually
prints loops. Everything seems to be working.
This is somewhat exciting as this is the first analysis using another
analysis to go in for the new pass manager. =D I also believe it is the
last analysis necessary for porting instcombine, but of course I may yet
discover more.
llvm-svn: 226560
Add an additional based relocation to the enumeration of based relocation names.
The lack of the enumerator value causes issues when inspecting WoA binaries.
llvm-svn: 226314