r267249 removed the dual ARM/Thumb interface from MachOObjectFile,
simplifying llvm-dsymutil's code. This unfortunately also regressed
llvm-dsymutil's ability to select Thumb slices, because the code that was
simplified away also handled the discrepancy between the slice arch
(e.g. armv7m) and the triple arch name (e.g. thumbv7m).
llvm-svn: 268894
This reuses the CVTypeDumper from libcodeview to dump full
information about type records within a PDB file.
Differential Revision: http://reviews.llvm.org/D20022
Reviewed By: rnk
llvm-svn: 268808
Add a check that the cmdsize field is a multiple of 4 for Mach-O
load commands.
The existing test case in test/Object/macho-invalid.test for
macho-invalid-too-small-segment-load-command has a cmdsize of 55, which is
not only too small but also not a multiple of 4. When that check is added,
this test case will produce a different error, so I constructed a new test
case that triggers the intended error.
I also changed the error message to be consistent with the other malformed
Mach-O file error messages, which print the load command index. In addition,
I removed both object_error::macho_load_segment_too_small and
object_error::macho_load_segment_too_many_sections from Object/Error.h, as
they are not needed; object_error::parse_failed can be used instead, with the
error message string distinguishing the specific error.
llvm-svn: 268652
ThinLTO uses the module identifier to find the corresponding entry in the
index. However, when reproducing part of the flow from temporary files
generated by the linker, it is useful to be able to process a file while
forcing llvm-lto to use a module identifier other than the current filename.
The alternative would be to tweak the index, which would be more involved.
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 268643
Summary:
When launching ThinLTO backends in a distributed build (currently
supported in gold via the thinlto-index-only plugin option), emit
an individual index file for each backend process as described here:
http://lists.llvm.org/pipermail/llvm-dev/2016-April/098272.html
The individual index file encodes the summary and module information
required for implementing the importing/exporting decisions made
for a given module in the thin link step.
This is in place of the current mechanism that uses the combined index to
make importing decisions in each backend independently. It is an enabler for
doing global summary-based optimizations in the thin link step (which will be
recorded in the individual index files), reduces the size of the index that
must be sent to each backend process, and reduces the amount of work needed
to scan it in the backends.
Rather than create entirely new ModuleSummaryIndex structures (and all
the included unique_ptrs) for each backend index file, a map is created
to record all of the GUID and summary pointers needed for a particular
index file. The IndexBitcodeWriter walks this map instead of the full
index (hiding the details of managing the appropriate summary iteration
in a new iterator subclass). This is more efficient than walking the
entire combined index and filtering out just the needed summaries during
each backend bitcode index write.
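As a rough sketch of the idea (the type and function names below are
hypothetical simplifications, not the actual IndexBitcodeWriter interface):

  #include <cstdint>
  #include <map>
  #include <string>

  using GUID = uint64_t;
  struct GlobalValueSummary {}; // stand-in for the real summary class

  // For each module, the set of (GUID, summary) pairs that must land in that
  // module's individual index file.
  using ModuleToSummariesForIndex =
      std::map<std::string, std::map<GUID, const GlobalValueSummary *>>;

  // Instead of filtering the whole combined index on every write, the writer
  // only walks the pre-computed entries for the module being compiled.
  void writeIndexForModule(const ModuleToSummariesForIndex &M,
                           const std::string &ModulePath) {
    auto It = M.find(ModulePath);
    if (It == M.end())
      return;
    for (const auto &KV : It->second) {
      GUID G = KV.first;                       // global value GUID
      const GlobalValueSummary *S = KV.second; // summary record to emit
      (void)G; (void)S;                        // ... emit bitcode for (G, S) ...
    }
  }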
Depends on D19481.
Reviewers: joker.eph
Subscribers: llvm-commits, joker.eph
Differential Revision: http://reviews.llvm.org/D19556
llvm-svn: 268627
Summary:
Port the dumper in llvm-readobj over to the new visitor.
I'm planning to use this visitor to power type stream merging.
While we're at it, try to switch from StringRef to ArrayRef<uint8_t> in some
places.
Reviewers: zturner, amccarth
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D19899
llvm-svn: 268535
When printing raw PDB file fields, streams, and records, use the
ScopedPrinter class so we have consistency with llvm-readobj's output
format.
For the most part this is pretty mechanical, but I had to fix up the test
file to conform to the new YAMLesque output format. I added a few
additional helper functions to ScopedPrinter, such as one to print a
dotted version number.
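For reference, a minimal sketch of using ScopedPrinter to get this kind of
scoped output (the field names and values below are made up):

  #include "llvm/Support/ScopedPrinter.h"
  #include "llvm/Support/raw_ostream.h"

  int main() {
    llvm::ScopedPrinter W(llvm::outs());
    // DictScope prints "PDB Header {" on entry and the closing "}" on exit,
    // indenting everything in between.
    llvm::DictScope Header(W, "PDB Header");
    W.printNumber("BlockSize", 4096);    // made-up field
    W.printHex("Signature", 0x4d535043); // made-up field
    return 0;
  }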
Differential Revision: http://reviews.llvm.org/D19897
Reviewed By: rnk
llvm-svn: 268506
The .MIPS.options section specifies miscellaneous options to be applied to
an object file. LLVM, as well as modern versions of the GNU tools, emits only
one type of option, ODK_REGINFO. This patch teaches llvm-readobj to print the
details of ODK_REGINFO options and to skip the contents of other option kinds.
llvm-svn: 268478
We forgot to take the target of ifuncs into account when determining whether
a function was alive or dead.
N.B. Also update a few auxiliary tools like bugpoint and
verify-uselistorder.
This fixes PR27593.
llvm-svn: 268468
The ability to parse codeview type streams is also needed by
DebugInfoPDB for parsing PDBs, so moving this into a library
gives us that option. Since DebugInfoPDB had already hand-rolled
some code to do this, that code is now converted over
to using this common abstraction.
Differential Revision: http://reviews.llvm.org/D19887
Reviewed By: dblaikie, amccarth
llvm-svn: 268454
This parses the TPI stream (stream 2) from the PDB file. This stream
contains some header information followed by a series of codeview records.
There is some additional complexity here in that, alongside this stream of
codeview records, there is a serialized hash table used to query the types
efficiently. We parse the necessary bookkeeping information to allow us to
reconstruct the hash table, but we do not actually construct it yet as
there are still a few things that need to be understood first.
Differential Revision: http://reviews.llvm.org/D19840
Reviewed By: ruiu, rnk
llvm-svn: 268343
We wish to reuse this from llvm-pdbdump, and it provides a nice
way to print structured data in a scoped format that could prove
useful for many other dumping tools as well. Move it to Support
and rename it to ScopedPrinter to better reflect its purpose.
llvm-svn: 268342
This is a small refactoring step toward moving CodeView type stream logic from llvm-readobj to a library. It abstracts the logic of stepping through the stream into an iterator class and updates llvm-readobj to use that iterator. This has no functional change; llvm-readobj produces identical output.
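As an illustration of the iteration pattern, here is a simplified sketch (not
the actual iterator class added here; it follows the usual CodeView
convention of a 16-bit length covering the bytes after the length field, and
omits endianness handling):

  #include <cstddef>
  #include <cstdint>
  #include <cstring>
  #include <vector>

  // Each type record starts with a 16-bit length (counting the bytes that
  // follow the length field, including the kind) and a 16-bit record kind.
  struct RecordPrefix {
    uint16_t Len;
    uint16_t Kind;
  };

  // Hypothetical forward iteration over a buffer of type records; the real
  // iterator wraps roughly this kind of cursor advance.
  void forEachRecord(const std::vector<uint8_t> &TypeStream) {
    size_t Offset = 0;
    while (Offset + sizeof(RecordPrefix) <= TypeStream.size()) {
      RecordPrefix P;
      std::memcpy(&P, TypeStream.data() + Offset, sizeof(P));
      // P.Kind identifies the leaf type; the record body follows the prefix.
      Offset += sizeof(P.Len) + P.Len; // advance past this record
    }
  }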
The next step is to abstract the parsing of the different leaf types and then move that and the iterator into a library.
Since this is my first contribution outside LLDB, please let me know if I'm messing up on any of the LLVM style guidelines, idioms, or patterns.
Differential Revision: http://reviews.llvm.org/D19746
llvm-svn: 268334
llvm-dsymutil used to create the temporary files in the output directory.
This works fine except when the output directory contains a '%' character,
which is then replaced by llvm::sys::fs::createUniqueFile(), generating an
invalid path.
Just use the default temp dir for those files.
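For illustration, a minimal sketch of the failure mode (the paths below are
made up; it assumes createUniqueFile's model-string behavior of replacing '%'
placeholders with random hex digits):

  #include "llvm/ADT/SmallString.h"
  #include "llvm/Support/FileSystem.h"

  int main() {
    // A literal '%' coming from the output directory is treated as a
    // placeholder and rewritten, so the resulting path is no longer the one
    // the user asked for.
    llvm::SmallString<128> ResultPath;
    int FD;
    std::error_code EC = llvm::sys::fs::createUniqueFile(
        "/tmp/50%off/foo.dwarf.tmp%%%%%%", FD, ResultPath);
    (void)EC;
    return 0;
  }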
llvm-svn: 268304
This isolates the state we use for type dumping from the knowledge of
object files. We can use CVTypeDumper to dump types from anywhere in
memory now.
NFC
Reviewers: zturner
Differential Revision: http://reviews.llvm.org/D19824
llvm-svn: 268300
Produce another specific error message for a malformed Mach-O file when a symbol's
section index is greater than the number of sections. The existing test case in test/Object/macho-invalid.test
for macho-invalid-section-index-getSectionRawName now reports the error with a message indicating
that the symbol at a specific index has a bad section index, and gives that bad section index value.
Again, converting interfaces from ErrorOr<> to Expected<> does involve
touching a number of places. Where the existing code reported the error with a
string message or an error code, it was converted to do the same.
Also, there were some bugs in the existing code that did not deal with the
old ErrorOr<> return values. Now, with Expected<>, the values must be checked
and the error handled, so I added a TODO comment
("// TODO: Actually report errors helpfully") and a call along the lines of
consumeError(NameOrErr.takeError()), so the buggy code will not crash now
that it needs to deal with the Error.
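For illustration, the Expected<> handling pattern described above looks
roughly like this (getSymbolName here is a hypothetical accessor standing in
for the real object-file interfaces):

  #include "llvm/ADT/StringRef.h"
  #include "llvm/Support/Error.h"

  // Hypothetical accessor returning Expected<> instead of ErrorOr<>; this
  // stand-in always fails so the error path is exercised.
  llvm::Expected<llvm::StringRef> getSymbolName() {
    return llvm::make_error<llvm::StringError>("malformed symbol table",
                                               llvm::inconvertibleErrorCode());
  }

  void dumpName() {
    llvm::Expected<llvm::StringRef> NameOrErr = getSymbolName();
    if (!NameOrErr) {
      // TODO: Actually report errors helpfully.
      llvm::consumeError(NameOrErr.takeError());
      return; // the Error has been consumed, so nothing asserts on destruction
    }
    llvm::StringRef Name = *NameOrErr; // safe: NameOrErr holds a value
    (void)Name;
  }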
llvm-svn: 268298
PDB has a lot of similar data structures. We already have code
for parsing a Name Map, but PDB seems to have a different but
very similar structure that is a hash table. This is the
beginning of code needed in order to parse the name hash table,
but it is not yet complete. It parses the basic metadata of
the hash table, the bucket array, and the names buffer, but
doesn't use any of these fields yet as the data structure
requires a non-trivial amount of work to understand.
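To make the pieces concrete, an illustrative sketch of what is parsed so far
(the field names and layout are not the actual on-disk format, which is still
being worked out):

  #include <cstdint>
  #include <vector>

  struct NameHashTable {
    uint32_t Signature = 0;        // basic metadata from the header
    uint32_t HashVersion = 0;      // basic metadata from the header
    std::vector<char> NamesBuffer; // null-terminated strings, back to back
    std::vector<uint32_t> Buckets; // bucket array; entries reference names
  };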
llvm-svn: 268268
This adds a new target `install-distribution-toolchain` which will install an Xcode toolchain featuring just the LLVM components specified in LLVM_DISTRIBUTION_COMPONENTS.
llvm-svn: 268125
The motivation for this change is that PDB has the notion of
streams and substreams. Substreams often consist of variable
length structures that are convenient to be able to treat as
guaranteed, contiguous byte arrays, whereas the streams they
are contained in are not necessarily so, as a single stream
could be spread across many discontiguous blocks.
So, when processing data from a substream, we want to be able
to assume that we have a contiguous byte array so that we can
cast pointers to variable length arrays and such.
This leads to the question of how to be able to read the same
data structure from either a stream or a substream using the
same interface, which is where this patch comes in.
We separate out the stream's read state from the underlying
representation, and introduce a `StreamReader` class. Then
we change the name of `PDBStream` to `MappedBlockStream`, and
introduce a second kind of stream called a `ByteStream` which is
simply a sequence of contiguous bytes. Finally, we update all
of the std::vectors in `PDBDbiStream` to use `ByteStream` instead
as a proof of concept.
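A rough sketch of the resulting shape (simplified types, not the actual class
definitions):

  #include <cstdint>
  #include <cstring>
  #include <vector>

  // A stream is anything that can hand back bytes at an offset; the reader
  // keeps the read position separate from the underlying representation.
  struct StreamInterface {
    virtual ~StreamInterface() = default;
    virtual bool readBytes(uint32_t Offset, uint32_t Size,
                           uint8_t *Out) const = 0;
    virtual uint32_t getLength() const = 0;
  };

  // A ByteStream is simply a contiguous byte array.
  struct ByteStream : StreamInterface {
    std::vector<uint8_t> Data;
    bool readBytes(uint32_t Offset, uint32_t Size,
                   uint8_t *Out) const override {
      if (Offset + Size > Data.size())
        return false;
      std::memcpy(Out, Data.data() + Offset, Size);
      return true;
    }
    uint32_t getLength() const override {
      return static_cast<uint32_t>(Data.size());
    }
  };

  // The reader owns the current offset, so the same parsing code can walk
  // either a ByteStream or a discontiguous, block-mapped stream.
  struct StreamReader {
    const StreamInterface &Stream;
    uint32_t Offset = 0;
    explicit StreamReader(const StreamInterface &S) : Stream(S) {}
    template <typename T> bool read(T &Value) {
      if (!Stream.readBytes(Offset, sizeof(T),
                            reinterpret_cast<uint8_t *>(&Value)))
        return false;
      Offset += sizeof(T);
      return true;
    }
  };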
llvm-svn: 268071
This is the second try. This time we disable this feature if no Polly checkout
is available. For this to work we need to check if tools/polly is present
early enough that our decision is known before cmake generates Config/config.h.
With Polly checked into LLVM, it has long been possible to compile
clang/opt/bugpoint with Polly support directly linked in, instead of only
providing Polly as a separate loadable module. This commit switches the
default from providing Polly as a module to linking Polly into the tools,
such that it becomes unnecessary to load the Polly module when playing with
Polly. Such a configuration has proven a lot more convenient for day-to-day
Polly use.
This change does not impact the default behavior of any tool; if Polly is not
explicitly enabled when calling clang/opt/bugpoint, it does not affect
compilation.
This change also does not impact normal LLVM/clang checkouts that do not
contain Polly.
Reviewers: jdoerfert, Meinersbur
Subscribers: pollydev, llvm-commits
Differential Revision: http://reviews.llvm.org/D19711
llvm-svn: 268048
We now read out the rest of the substreams from the DBI stream. One of
these substreams, the FileInfo substream, contains information about which
source files contribute to each module (aka compiland). This patch
additionally parses out the file information from that substream, and
dumps it in llvm-pdbdump.
Differential Revision: http://reviews.llvm.org/D19634
Reviewed by: ruiu
llvm-svn: 267928
There were two problems: 1) the last 4 bytes would be printed as separate
bytes rather than as a word, and 2) the same last byte would be repeated for
the trailing bytes that do not fill a complete word.
rdar://25938224
llvm-svn: 267819
This gets more data out of the DBI stream of the PDB. In
particular it extracts the metadata for the list of modules
(compilands) that this PDB contains info about, and adds support
for dumping these fields to llvm-pdbdump.
Differential Revision: http://reviews.llvm.org/D19570
Reviewed By: ruiu
llvm-svn: 267818
Summary:
This is a hook to allow TargetMachine to install passes at the
EP_EarlyAsPossible PassManagerBuilder extension point.
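For context, this is the same extension-point mechanism exposed by
PassManagerBuilder; a minimal sketch of registering a pass at
EP_EarlyAsPossible (createMyTargetPass is a hypothetical placeholder, and
this shows the builder-side registration rather than the new TargetMachine
hook itself):

  #include "llvm/IR/LegacyPassManager.h"
  #include "llvm/Pass.h"
  #include "llvm/Transforms/IPO/PassManagerBuilder.h"

  using namespace llvm;

  Pass *createMyTargetPass(); // hypothetical target-specific pass factory

  void configureBuilder(PassManagerBuilder &PMB) {
    // The registered callback fires when the builder populates the pipeline,
    // adding the pass at the very start of the optimization pipeline.
    PMB.addExtension(PassManagerBuilder::EP_EarlyAsPossible,
                     [](const PassManagerBuilder &,
                        legacy::PassManagerBase &PM) {
                       PM.add(createMyTargetPass());
                     });
  }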
Reviewers: chandlerc
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D18614
llvm-svn: 267763
Summary:
With the removal of support for lazy parsing of combined index summary
records (e.g. r267344), we no longer need to include the summary record
bitcode offset in the VST entries for definitions. Change the combined
index format to be similar to the per-module index format in using value
ids to cross-reference from the summary record to the VST entry (rather
than the summary record bitcode offset to cross-reference in the other
direction).
The visible changes are:
1) Add the value id to the combined summary records
2) Remove the summary offset from the combined VST records, which has
the following effects:
- No longer need the VST_CODE_COMBINED_GVDEFENTRY record, as all
combined index VST entries now only contain the value id and
corresponding GUID.
- No longer have duplicate VST entries in the case where there are
multiple definitions of a symbol (e.g. weak/linkonce), as they all
have the same value id and GUID.
An implication of #2 above is that in order to hook up an alias to the
correct aliasee based on the value id of the aliasee recorded in the
combined index alias record, we need to scan the entries in the index
for that GUID to find the one from the same module (i.e. the case where
there are multiple entries for the aliasee). But the reader no longer
has to maintain a special map to hook up the alias/aliasee.
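A simplified sketch of that lookup, using stand-in types rather than the real
index classes:

  #include <cstdint>
  #include <map>
  #include <memory>
  #include <string>
  #include <vector>

  using GUID = uint64_t;

  struct GlobalValueSummary {
    std::string ModulePath; // module the definition comes from
  };

  // In the combined index a single GUID may map to several summaries when
  // there are multiple (e.g. weak/linkonce) definitions across modules.
  using CombinedIndex =
      std::map<GUID, std::vector<std::unique_ptr<GlobalValueSummary>>>;

  // Find the aliasee summary that comes from the same module as the alias.
  const GlobalValueSummary *findAliaseeInModule(const CombinedIndex &Index,
                                                GUID AliaseeGUID,
                                                const std::string &Module) {
    auto It = Index.find(AliaseeGUID);
    if (It == Index.end())
      return nullptr;
    for (const auto &S : It->second)
      if (S->ModulePath == Module)
        return S.get();
    return nullptr;
  }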
Reviewers: joker.eph
Subscribers: joker.eph, llvm-commits
Differential Revision: http://reviews.llvm.org/D19481
llvm-svn: 267712