One exception here is LoopInfo which must forward-declare it (because
the typedef is in LoopPassManager.h which depends on LoopInfo).
Also, some includes for LoopPassManager.h were needed since that file
provides the typedef.
Besides a general consistency benefit, the extra layer of indirection
allows the mechanical part of https://reviews.llvm.org/D23256 that
requires touching every transformation and analysis to be factored out
cleanly.
Thanks to David for the suggestion.
llvm-svn: 278079
Besides a general consistency benefit, the extra layer of indirection
allows the mechanical part of https://reviews.llvm.org/D23256 that
requires touching every transformation and analysis to be factored out
cleanly.
Thanks to David for the suggestion.
llvm-svn: 278077
The current approach isn't a long-term viable pattern. Given the set of
architectures A, vendors V, operating systems O, and environments E, it
does |A| * |V| * |O| * |E| * 4! tests. As LLVM grows, this test keeps
getting slower, despite my working very hard to make it get some
"optimizations" even in -O0 builds in order to lower the constant
factors. Fundamentally, we're doing an unreasonable amount of work.
Looking at the specific thing being tested -- the goal seems very
clearly to be testing the *permutations*, not the *combinations*. The
combinations are driving up the complexity much more than anything else.
Instead, test every possible value for a given triple entry in every
permutation of *some* triple. This really seems to cover the core goal
of the test. Every single possible triple component is tested in every
position. But because we keep the rest of the triple constant, it does
so in a dramatically more scalable amount of time. With this model we do
(|A| + |V| + |O| + |E|) * 4! tests.
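As a rough illustration of the new scheme (a sketch only, with placeholder component values; the real test builds llvm::Triple objects and checks the parsed components), testing every candidate value of one component against all 4! orderings of an otherwise fixed triple looks like this:

  #include <algorithm>
  #include <array>
  #include <string>
  #include <vector>

  // For each candidate value of one component, splice it into a fixed baseline
  // triple and exercise every ordering of the four components.
  void testComponentValues(const std::vector<std::string> &Candidates,
                           unsigned SlotToVary) {
    std::array<std::string, 4> Baseline = {"arch", "vendor", "os", "env"};
    for (const std::string &Value : Candidates) {
      std::array<std::string, 4> Components = Baseline;
      Components[SlotToVary] = Value;
      std::array<unsigned, 4> Order = {0, 1, 2, 3};
      do {
        std::string TripleStr;
        for (unsigned Idx : Order)
          TripleStr += (TripleStr.empty() ? "" : "-") + Components[Idx];
        // ...parse TripleStr and check that each component round-trips...
      } while (std::next_permutation(Order.begin(), Order.end()));
    }
  }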
For me on a debug build, this goes from running for 19 seconds to 19
milliseconds, or a 1000x improvement. This makes a world of difference
for the critical path of 'ninja check-llvm' and other extremely common
workflows.
Thanks to Renato, Dean, and David for the helpful review comments and
helping me refine the explanation of the change.
Differential Revision: https://reviews.llvm.org/D23156
llvm-svn: 277912
String pooling is not guaranteed by the standard, so if
you're comparing two different string literals for equality,
you have to use strcmp.
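For example (an illustrative snippet, not code from the affected test):

  #include <cstring>

  bool isFoo(const char *Name) {
    // Wrong: compares pointers, and only "works" if the compiler pools literals.
    //   return Name == "foo";
    // Right: compares the characters.
    return std::strcmp(Name, "foo") == 0;
  }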
llvm-svn: 277831
This is a follow-up to r277637. It teaches MemorySSA that invariant
loads (and loads of provably constant memory) are always liveOnEntry.
llvm-svn: 277640
This is a fix for PR28697.
An MDNode can indirectly refer to a GlobalValue, through a
ConstantAsMetadata. When the GlobalValue is deleted, the MDNode operand
is reset to `nullptr`. If the node is uniqued, this can lead to a
hard-to-detect cache invalidation in a Metadata map that's shared across
an LLVMContext.
Consider:
1. A map from Metadata* to `T` called RemappedMDs.
2. A node that references a global variable, `!{i1* @GV}`.
3. Insert `!{i1* @GV} -> SomeT` in the map.
4. Delete `@GV`, leaving behind `!{null} -> SomeT`.
Looking up the generic and uninteresting `!{null}` gives you `SomeT`,
which is likely related to `@GV`. Worse, `SomeT`'s lifetime may be tied
to the deleted `@GV`.
This occurs in practice in the shared ValueMap used since r266579 in the
IRMover. Other code that handles more than one Module (with different
lifetimes) in the same LLVMContext could hit it too.
The fix here is a partial revert of r225223: in the rare case that an
MDNode operand is a ConstantAsMetadata (i.e., wrapping a node from the
Value hierarchy), drop uniquing if it gets replaced with `nullptr`.
This changes step #4 above to leave behind `distinct !{null} -> SomeT`,
which can't be confused with the generic `!{null}`.
In theory, this can cause some churn in the LLVMContext's MDNode
uniquing map when Values are being deleted. However:
- The number of GlobalValues referenced from uniqued MDNodes is
expected to be quite small. E.g., the debug info metadata schema
only references GlobalValues from distinct nodes.
- Other Constants have the lifetime of the LLVMContext, whose teardown
is careful to drop references before deleting the constants.
As a result, I don't expect a compile time regression from this change.
llvm-svn: 277625
This fixes a bug where we'd sometimes cache overly-conservative results
with our walker. This bug was made more obvious by r277480, which makes
our cache far more spotty than it was. Test case is llvm-unit, because
we're likely going to use CachingWalker only for def optimization in the
future.
The bug stems from the fact that there was a place where the walker assumed that
`DefNode.Last` was a valid target to cache to when failing to optimize
phis. This is sometimes incorrect if we have a cache hit. The fix is to
use the thing we *can* assume is a valid target to cache to. :)
llvm-svn: 277559
Summary: By generalizing the interface, users are able to inject a more flexible Node token into the algorithm, for example a pair of a vector<Node>* and an index integer. Currently I have only migrated SCCIterator to use NodeRef, but more is coming. It's an NFC.
Reviewers: dblaikie, chandlerc
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D22937
llvm-svn: 277399
This patch replaces RuntimeDyld::SymbolInfo with JITSymbol: A symbol class
that is capable of lazy materialization (i.e. the symbol definition needn't be
emitted until the address is requested). This can be used to support common
and weak symbols in the JIT (though this is not implemented in this patch).
For consistency, RuntimeDyld::SymbolResolver is renamed to JITSymbolResolver.
For space efficiency a new class, JITEvaluatedSymbol, is introduced that
behaves like the old RuntimeDyld::SymbolInfo - i.e. it is just a pair of an
address and symbol flags. Instances of JITEvaluatedSymbol can be used in
symbol-tables to avoid paying the space cost of the materializer.
llvm-svn: 277386
These helpers are very handy when parsing text.
They are essentially a combination of startswith and a self-modifying
drop_front, or endswith and drop_back respectively.
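Assuming the new helpers are StringRef::consume_front / consume_back (as they exist in LLVM today), typical parsing code looks like this sketch:

  #include "llvm/ADT/StringRef.h"

  // If Arg starts with "-", drop the dash and report that a flag was seen;
  // otherwise leave Arg untouched.
  bool consumeDash(llvm::StringRef &Arg) {
    // Roughly: if (Arg.startswith("-")) { Arg = Arg.drop_front(1); return true; }
    return Arg.consume_front("-");
  }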
Differential Revision: https://reviews.llvm.org/D22723
llvm-svn: 277288
Summary:
This change fixes issues with `LLVM_CONSTEXPR` functions and
`TrailingObjects::FixedSizeStorage`. In particular, some of the
functions marked `LLVM_CONSTEXPR` used by `FixedSizeStorage` were not
implemented such that they evaluate successfully as part of a constant
expression despite constant arguments.
This change also implements a more traditional template-meta path to
accommodate MSVC, and adds unit tests for `FixedSizeStorage`.
Drive-by fix: the access control for members of `TrailingObjectsImpl` is
tightened.
Reviewers: faisalv, rsmith, aaron.ballman
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D22668
llvm-svn: 277270
Previously this change was submitted from a Windows machine, so
changes made to the case of filenames and directory names did
not survive the commit, and as a result the CMake source file
names and the on-disk file names did not match on case-sensitive
file systems.
I'm resubmitting this patch from a Linux system, which hopefully
allows the case changes to make it through unfettered.
llvm-svn: 277213
In a previous patch, it was suggested to use all caps instead of
rolling caps for initialisms, so this patch changes everything
to do this.
llvm-svn: 277190
When coming from an IR label type, we set a 0 NumElements, but not
when constructing an LLT using unsized(), causing comparisons to fail.
Pick one variant and fix the other.
llvm-svn: 277161
This was a pure virtual base class whose purpose was to abstract
away the notion of how you retrieve the layout of a discontiguous
stream of blocks in an Msf file. This led to too many layers of
abstraction making it difficult to figure out what was going on
and extend things. Ultimately, a stream's layout is decided by
its length and the array of block numbers that it lives on. So
rather than have an abstract base class which can return this in
any number of ways, it's more straightforward to simply store them
as fields of a trivial struct, and also to give a more appropriate
name.
This patch does that. It renames IMsfStreamData to MsfStreamLayout,
and deletes the 2 concrete implementations, DirectoryStreamData
and IndexedStreamData. MsfStreamLayout is a trivial struct
with the necessary data.
llvm-svn: 277018
These loop from 0 to AEK_XSCALE, which is currently defined as 0x80000000, and
thus the tests loop over the entire int range, which is unreasonable
and also too slow in debug builds.
llvm-svn: 276969
Add unit tests for the {ARM | AArch64}TargetParser and, along the way, correct the following problems:
1. Fix an incorrect indexing problem in AArch64TargetParser. The architecture enumeration
is shared across ARM and AArch64 in the original implementation, but the code indexed into
the array using an index offset by the ARM entries, which was incorrect. Give AArch64
its own arch enum, or we will have to do a lot of slow iterating.
2. Correct a spelling error in the parameter of llvm::AArch64::getArchExtName.
3. Correct a writing mistake in llvm::ARM::parseArchISA.
Differential Revision: https://reviews.llvm.org/D21785
llvm-svn: 276957
Change the syntax to use `%0.sub8` to denote a subregister.
This seems like a more natural fit to denote subregisters; I also plan
to introduce a new ":classname" syntax in upcoming patches to denote the
register class of a vreg.
Note that this commit disallows plain identifiers from starting with a '.'
character. This shouldn't affect anything, as external names/IR
references are all prefixed with '$'/'%'; plain identifiers are only
used for instruction names, register mask names, and subreg indexes.
Differential Revision: https://reviews.llvm.org/D22390
llvm-svn: 276815
If we move a last-use register read to a later position we may skip
intermediate segments. This may require us to not only extend the
segment before the NewIdx, but also extend the segment live-in to
OldIdx.
This switches LiveIntervalTest to use AMDGPU so we can test subregister
liveness.
llvm-svn: 276724
This adds versions of operator + and - which are optimized for the LHS/RHS of the
operator being rvalues. When an rvalue is available, we can use its storage space
instead of allocating new space.
On code such as ConstantRange, which makes heavy use of APInts over 64 bits in size,
this results in a significant number of saved allocations.
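A minimal sketch of the idea (illustrative; the in-tree operator overloads differ in detail):

  #include "llvm/ADT/APInt.h"
  #include <utility>

  // When the left operand is an rvalue, add in place and return it, reusing its
  // existing heap allocation instead of allocating storage for the result.
  llvm::APInt addReusingStorage(llvm::APInt &&LHS, const llvm::APInt &RHS) {
    LHS += RHS;
    return std::move(LHS);
  }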
Thanks to David Blaikie for all the review and most of the code here.
llvm-svn: 276470
This provides a better layering of responsibilities among different
aspects of PDB writing code. Some of the MSF related code was
contained in CodeView, and some was in PDB prior to this. Further,
we were often saying PDB when we meant MSF, and the two are
actually independent of each other since in theory you can have
other types of data besides PDB data in an MSF. So, this patch
separates the MSF specific code into its own library, with no
dependencies on anything else, and DebugInfoCodeView and
DebugInfoPDB take dependencies on DebugInfoMsf.
llvm-svn: 276458
This allows ErrorAsOutParameter to work better with "optional" errors. For
example, consider a function where for certain input values it is known that
the function can't fail. This can now be written as:
  Result foo(Arg X, Error *Err) {
    ErrorAsOutParameter EAO(Err);
    if (<Error Condition>) {
      if (Err)
        *Err = <report error>;
      else
        llvm_unreachable("Unexpected failure!");
    }
  }
Rather than having to construct an ErrorAsOutParameter under every conditional
where Err is known to be non-null.
llvm-svn: 276430
This provides an elegant pattern to solve the "construct if not in map
already" problem we have many times in LLVM. Without try_emplace we
either have to rely on a sentinel value (nullptr) or do two lookups.
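For example, with a DenseMap (a sketch; the construction arguments are forwarded to the value type's constructor):

  #include "llvm/ADT/DenseMap.h"
  #include <string>

  llvm::DenseMap<int, std::string> Cache;

  const std::string &getOrCreate(int Key) {
    // Single lookup: constructs the value in place only if Key wasn't present.
    auto Result = Cache.try_emplace(Key, "default");
    return Result.first->second;
  }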
llvm-svn: 276277
Add a "-j" option to llvm-profdata to control the number of threads used.
Auto-detect NumThreads when it isn't specified, and avoid spawning threads when
they wouldn't be beneficial.
I tested this patch using a raw profile produced by clang (147MB). Here is the
time taken to merge 4 copies together on my laptop:
No thread pool: 112.87s user 5.92s system 97% cpu 2:01.08 total
With 2 threads: 134.99s user 26.54s system 164% cpu 1:33.31 total
Changes since the initial commit:
- When handling odd-length inputs, call ThreadPool::wait() before merging the
last profile. Should fix a race/off-by-one (see r275937).
Differential Revision: https://reviews.llvm.org/D22438
llvm-svn: 275938
Add a "-j" option to llvm-profdata to control the number of threads
used. Auto-detect NumThreads when it isn't specified, and avoid spawning
threads when they wouldn't be beneficial.
I tested this patch using a raw profile produced by clang (147MB). Here is the
time taken to merge 4 copies together on my laptop:
No thread pool: 112.87s user 5.92s system 97% cpu 2:01.08 total
With 2 threads: 134.99s user 26.54s system 164% cpu 1:33.31 total
Differential Revision: https://reviews.llvm.org/D22438
llvm-svn: 275921
Summary:
Given that we had a bug on max/minUIntN(64), these should have tests
too.
Reviewers: rnk
Subscribers: dylanmckay, llvm-commits
Differential Revision: https://reviews.llvm.org/D22443
llvm-svn: 275723
Summary:
Previously we were doing 1 << S. "1" is an int, so this doesn't work
when S >= 32.
This patch also adds some static_asserts to these functions to ensure
that we don't hit UB by shifting left too much.
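The pattern being fixed looks roughly like this (a sketch with illustrative names, not the exact functions touched):

  #include <cstdint>

  // Broken: the literal 1 is an int, so the shift is UB once N >= 32.
  bool isUIntNBroken(unsigned N, uint64_t Value) {
    return Value < (1 << N);
  }

  // Better: shift a 64-bit unsigned value instead. (N == 64 still needs special
  // handling, as in the nearby maxUIntN fix.)
  bool isUIntNFixed(unsigned N, uint64_t Value) {
    return Value < (UINT64_C(1) << N);
  }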
Reviewers: rnk
Subscribers: llvm-commits, dylanmckay
Differential Revision: https://reviews.llvm.org/D22441
llvm-svn: 275719
Doing "I++" inside of an EXPECT_* triggers
warning: expression with side effects has no effect in an unevaluated context
because EXPECT_* partially expands to
EqHelper<(sizeof(::testing::internal::IsNullLiteralHelper(i++)) == 1)>
which is an unevaluated context.
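The usual fix is to keep the side effect out of the EXPECT_* argument, e.g. (an illustrative test, not code from the affected file):

  #include "gtest/gtest.h"

  TEST(SideEffectExample, HoistIncrement) {
    int Values[] = {1, 2, 3};
    unsigned I = 0;
    // Instead of EXPECT_EQ(Values[I++], 1); keep the increment outside:
    EXPECT_EQ(Values[I], 1);
    ++I;
    EXPECT_EQ(Values[I], 2);
  }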
llvm-svn: 275717
Summary:
This shift is undefined behavior (and, as compiled by clang, gives the
wrong answer for maxUIntN(64)).
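One way to avoid the undefined shift (a sketch, not necessarily the exact in-tree fix):

  #include <cassert>
  #include <cstdint>

  // Largest value representable in N bits, valid for 1 <= N <= 64.
  // (uint64_t(1) << N) - 1 would be UB for N == 64, so shift down from
  // UINT64_MAX instead.
  uint64_t maxUIntNSketch(uint64_t N) {
    assert(N > 0 && N <= 64 && "integer width out of range");
    return UINT64_MAX >> (64 - N);
  }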
Reviewers: mkuper
Subscribers: llvm-commits, jroelofs, rsmith
Differential Revision: https://reviews.llvm.org/D22430
llvm-svn: 275656
Blocks 1 and 2 of an MSF file are bit vectors that represent the
list of blocks allocated and free in the file. We had been using
these blocks to write stream data and other data, so we mark them
as the free page map now. We don't yet serialize these pages to
the disk, but at least we make a note of what it is, and avoid
writing random data to them.
Doing this also necessitated cleaning up some of the tests to be
more general and hardcode fewer values, which is nice.
llvm-svn: 275629
Previously we would read a PDB, then write some of it back out,
but write the directory, super block, and other pertinent metadata
back out unchanged. This generates incorrect PDBs since the amount
of data written was not always the same as the amount of data read.
This patch changes things to use the newly introduced `MsfBuilder`
class to write out a correct and accurate set of Msf metadata for
the data *actually* written, which opens up the door for adding and
removing type records, symbol records, and other types of data to
an existing PDB.
llvm-svn: 275627
Doing "I++" inside of an EXPECT_* triggers
warning: expression with side effects has no effect in an unevaluated context
because EXPECT_* partially expands to
EqHelper<(sizeof(::testing::internal::IsNullLiteralHelper(MockObjects[I++] + 1)) == 1)>
which is an unevaluated context.
llvm-svn: 275293
Summary: Normally when you do a bitwise operation on an enum value, you
get back an instance of the underlying type (e.g. int). But using this
macro, bitwise ops on your enum will return you back instances of the
enum. This is particularly useful for enums which represent a
combination of flags.
Suppose you have a function which takes an int and a set of flags. One
way to do this would be to take two numeric params:
  enum SomeFlags { F1 = 1, F2 = 2, F3 = 4, ... };
  void Fn(int Num, int Flags);

  void foo() {
    Fn(42, F2 | F3);
  }
But now if you get the order of arguments wrong, you won't get an error.
You might try to fix this by changing the signature of Fn so it accepts
a SomeFlags arg:
  enum SomeFlags { F1 = 1, F2 = 2, F3 = 4, ... };
  void Fn(int Num, SomeFlags Flags);

  void foo() {
    Fn(42, static_cast<SomeFlags>(F2 | F3));
  }
But now we need a static cast after doing "F2 | F3" because the result
of that computation is the enum's underlying type.
This patch adds a mechanism which gives us the safety of the second
approach with the brevity of the first.
  enum SomeFlags {
    F1 = 1, F2 = 2, F3 = 4, ..., F_MAX = 128,
    LLVM_MARK_AS_BITMASK_ENUM(F_MAX)
  };

  void Fn(int Num, SomeFlags Flags);

  void foo() {
    Fn(42, F2 | F3); // No static_cast.
  }
The LLVM_MARK_AS_BITMASK_ENUM macro enables overloads for bitwise
operators on SomeFlags. Critically, these operators return the enum
type, not its underlying type, so you don't need any static_casts.
An advantage of this solution over the previously-proposed BitMask class
[0, 1] is that we don't need any wrapper classes -- we can operate
directly on the enum itself.
The approach here is somewhat similar to OpenOffice's typed_flags_set
[2]. But we skirt the need for a wrapper class (and a good deal of
complexity) by judicious use of enable_if. We SFINAE on the presence of
a particular enumerator (added by the LLVM_MARK_AS_BITMASK_ENUM macro)
instead of using a traits class so that it's impossible to use the enum
before the overloads are present. The solution here also seamlessly
works across multiple namespaces.
[0] http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20150622/283369.html
[1] http://lists.llvm.org/pipermail/llvm-commits/attachments/20150623/073434b6/attachment.obj
[2] https://cgit.freedesktop.org/libreoffice/core/tree/include/o3tl/typed_flags_set.hxx
Reviewers: chandlerc, rsmith
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D22279
llvm-svn: 275292
Because of the goop involved in the EXPECT_EQ macro, we were getting the
following warning
expression with side effects has no effect in an unevaluated context
because the "I++" was being used inside of a template type:
switch (0) case 0: default: if (const ::testing::AssertionResult gtest_ar = (::testing::internal:: EqHelper<(sizeof(::testing::internal::IsNullLiteralHelper(Args[I++])) == 1)>::Compare("Args[I++]", "&A", Args[I++], &A))) ; else ::testing::internal::AssertHelper(::testing::TestPartResult::kNonFatalFailure, "../src/unittests/IR/FunctionTest.cpp", 94, gtest_ar.failure_message()) = ::testing::Message();
llvm-svn: 275291
Summary:
This represents the adjustment applied to the implicit 'this' parameter
in the prologue of a virtual method in the MS C++ ABI. The adjustment is
always zero unless multiple inheritance is involved.
This increases the size of DISubprogram by 8 bytes, unfortunately. The
adjustment really is a signed 32-bit integer. If this size increase is
too much, we could probably win it back by splitting out a subclass with
info specific to virtual methods (virtuality, vindex, thisadjustment,
containingType).
Reviewers: aprantl, dexonsmith
Subscribers: aaboud, amccarth, llvm-commits
Differential Revision: http://reviews.llvm.org/D21614
llvm-svn: 274325
When concatenating two error lists the ErrorList::join method (which is called
by joinErrors) was failing to set the checked bit on the second error, leading
to a 'failure to check error' assertion.
llvm-svn: 274249
This adds a worklist data structure where re-insertion of entries into the worklist moves them to the end.
This is fairly similar to a SetVector, but helps in the case where in
addition to not inserting duplicates you want to adjust the sequence of
a pop-off-the-back worklist.
I'm not at all attached to the name of this data structure if others
have better suggestions, but this is one that David Majnemer brought up
in IRC discussions that seems plausible.
I've trimmed the interface down somewhat from SetVector's interface
because several things make less sense here IMO: iteration primarily.
I'd prefer to add these back as we have users that need them. My use
case doesn't even need all of what is provided here. =]
I've also included a basic unittest to make sure this functions
reasonably.
Differential Revision: http://reviews.llvm.org/D21866
llvm-svn: 274198
This fixes an issue where occurrence counts would be unexpectedly
reset when parsing different parts of a command line multiple
times.
**ORIGINAL COMMIT MESSAGE**
This allows command line tools to use syntaxes like the following:
llvm-foo.exe command1 -o1 -o2
llvm-foo.exe command2 -p1 -p2
Where command1 and command2 contain completely different sets of
valid options. This is backwards compatible with previous uses
of llvm cl which did not support subcommands, as any option
which specifies no optional subcommand (e.g. all existing
code) goes into a special "top level" subcommand that expects
dashed options to appear immediately after the program name.
For example, code which is subcommand unaware would generate
a command line such as the following, where no subcommand
is specified:
llvm-foo.exe -q1 -q2
The top level subcommand can co-exist with actual subcommands,
as it is implemented as an actual subcommand which is searched
if no explicit subcommand is specified. So llvm-foo.exe as
specified above could be written so as to support all three
aforementioned command lines simultaneously.
There is one additional "special" subcommand called AllSubCommands,
which can be used to inject an option into every subcommand.
This is useful to support things like help, so that commands
such as:
llvm-foo.exe --help
llvm-foo.exe command1 --help
llvm-foo.exe command2 --help
All work and display the help for the selected subcommand
without having to explicitly go and write code to handle each
one separately.
This patch is submitted without an example of anything actually
using subcommands, but a followup patch will convert the
llvm-pdbdump tool to use subcommands.
Reviewed By: beanz
llvm-svn: 274171
This allows command line tools to use syntaxes like the following:
llvm-foo.exe command1 -o1 -o2
llvm-foo.exe command2 -p1 -p2
Where command1 and command2 contain completely different sets of
valid options. This is backwards compatible with previous uses
of llvm cl which did not support subcommands, as any option
which specifies no optional subcommand (e.g. all existing
code) goes into a special "top level" subcommand that expects
dashed options to appear immediately after the program name.
For example, code which is subcommand unaware would generate
a command line such as the following, where no subcommand
is specified:
llvm-foo.exe -q1 -q2
The top level subcommand can co-exist with actual subcommands,
as it is implemented as an actual subcommand which is searched
if no explicit subcommand is specified. So llvm-foo.exe as
specified above could be written so as to support all three
aforementioned command lines simultaneously.
There is one additional "special" subcommand called AllSubCommands,
which can be used to inject an option into every subcommand.
This is useful to support things like help, so that commands
such as:
llvm-foo.exe --help
llvm-foo.exe command1 --help
llvm-foo.exe command2 --help
All work and display the help for the selected subcommand
without having to explicitly go and write code to handle each
one separately.
This patch is submitted without an example of anything actually
using subcommands, but a followup patch will convert the
llvm-pdbdump tool to use subcommands.
Reviewed By: beanz
Differential Revision: http://reviews.llvm.org/D21485
llvm-svn: 274054
The bitset metadata currently used in LLVM has a few problems:
1. It has the wrong name. The name "bitset" refers to an implementation
detail of one use of the metadata (i.e. its original use case, CFI).
This makes it harder to understand, as the name makes no sense in the
context of virtual call optimization.
2. It is represented using a global named metadata node, rather than
being directly associated with a global. This makes it harder to
manipulate the metadata when rebuilding global variables, summarise it
as part of ThinLTO and drop unused metadata when associated globals are
dropped. For this reason, CFI does not currently work correctly when
both CFI and vcall opt are enabled, as vcall opt needs to rebuild vtable
globals, and fails to associate metadata with the rebuilt globals. As I
understand it, the same problem could also affect ASan, which rebuilds
globals with a red zone.
This patch solves both of those problems in the following way:
1. Rename the metadata to "type metadata". This new name reflects how
the metadata is currently being used (i.e. to represent type information
for CFI and vtable opt). The new name is reflected in the name for the
associated intrinsic (llvm.type.test) and pass (LowerTypeTests).
2. Attach metadata directly to the globals that it pertains to, rather
than using the "llvm.bitsets" global metadata node as we are doing now.
This is done using the newly introduced capability to attach
metadata to global variables (r271348 and r271358).
See also: http://lists.llvm.org/pipermail/llvm-dev/2016-June/100462.html
Differential Revision: http://reviews.llvm.org/D21053
llvm-svn: 273729
This change is motivated by an upcoming change to the metadata representation
used for CFI. The indirect function call checker needs type information for
external function declarations in order to correctly generate jump table
entries for such declarations. We currently associate such type information
with declarations using a global metadata node, but I plan [1] to move all
such metadata to global object attachments.
In bitcode, metadata attachments for function declarations appear in the
global metadata block. This seems reasonable to me because I expect metadata
attachments on declarations to be uncommon. In the long term I'd also expect
this to be the case for CFI, because we'd want to use some specialized bitcode
format for this metadata that could be read as part of the ThinLTO thin-link
phase, which would mean that it would not appear in the global metadata block.
To solve the lazy loaded metadata issue I was seeing with D20147, I use the
same bitcode representation for metadata attachments for global variables as I
do for function declarations. Since there's a use case for metadata attachments
in the global metadata block, we might as well use that representation for
global variables as well, at least until we have a mechanism for lazy loading
global variables.
In the assembly format, the metadata attachments appear after the "declare"
keyword in order to avoid a parsing ambiguity.
[1] http://lists.llvm.org/pipermail/llvm-dev/2016-June/100462.html
Differential Revision: http://reviews.llvm.org/D21052
llvm-svn: 273336
We recently made MemorySSA own the walker it creates. As a part of this,
the MSSA test fixture was changed to have a `Walker*` instead of a
`unique_ptr<Walker>`. So, we no longer need to do `&*Walker` in order to
get a `Walker*`.
llvm-svn: 273189
This change affects the signatures of pass manager passes' `run` methods.
This removes a bunch of SFINAE goop from the pass manager and just
requires pass authors to accept `AnalysisManager<IRUnitT> &` as a dead
argument. This is a small price to pay for the simplicity of the system
as a whole, despite the noise that changing it causes at this stage.
This will also helpfully allow us to make the signature of the run
methods much more flexible for different kinds of passes, to support
things like intelligently updating the pass's progression over IR units.
While this touches many, many, files, the changes are really boring.
Mostly made with the help of my trusty perl one liners.
Thanks to Sean and Hal for bouncing ideas for this with me in IRC.
llvm-svn: 272978
We should update the results of BranchProbabilityInfo after removing a block in JumpThreading. Otherwise
we will get a dangling pointer inside the BranchProbabilityInfo cache.
Differential Revision: http://reviews.llvm.org/D20957
llvm-svn: 272891
Differential Revision: http://reviews.llvm.org/D19842
Corresponding clang patch: http://reviews.llvm.org/D19843
Re-commit after addressing issues with generating too many warnings for Windows and asan test failures
Patch by Eric Niebler
llvm-svn: 272555
Undef uses are not real uses of a register and must be ignored by
findLastUseBefore() so that handleMove() does not produce invalid live
intervals in some cases.
This fixed http://llvm.org/PR28083
llvm-svn: 272446
This fixes an alignment issue by forcing all cached allocations
to be 8 byte aligned, and also fixes an issue arising on big
endian systems by writing ulittle32_t's instead of uint32_t's
in the test.
llvm-svn: 272437
This adds a method and tests for writing to a PDB stream. With
this, even a PDB stream which is discontiguous can be treated
as a sequential stream of bytes for the purposes of writing.
Reviewed By: ruiu
Differential Revision: http://reviews.llvm.org/D21157
llvm-svn: 272369
Summary:
Now DISubroutineType has a 'cc' field which should be a DW_CC_ enum. If
it is present and non-zero, the backend will emit it as a
DW_AT_calling_convention attribute. On the CodeView side, we translate
it to the appropriate enum for the LF_PROCEDURE record.
I added a new LLVM vendor specific enum to the list of DWARF calling
conventions. DWARF does not appear to attempt to standardize these, so I
assume it's OK to do this until we coordinate with GCC on how to emit
vectorcall convention functions.
Reviewers: dexonsmith, majnemer, aaboud, amccarth
Subscribers: mehdi_amini, llvm-commits
Differential Revision: http://reviews.llvm.org/D21114
llvm-svn: 272197
The architecture enumeration is shared across ARM and AArch64. However, the
data is not. The code incorrectly would index into the array using the
architecture index which was offset by the ARMv7 architecture enumeration. We
do not have a marker for indicating the architectural family to which the
enumeration belongs so we cannot be clever about offsetting the index (at least
it is not immediately apparent to me). Instead, fall back to the tried-and-true
method of slowly iterating the array (it's not a large array, so the impact of
this is not too high).
Because of the incorrect indexing, if we were lucky, we would crash, but usually
we would return an invalid StringRef. We did not have any tests for the AArch64
target parser previously. Extend the previous tests I had added for ARM to
cover AArch64, ensuring that we return the expected StringRefs.
Take the opportunity to change some iterator types to references.
This work is needed to support parsing `.arch name` directives in the AArch64
target asm parser.
llvm-svn: 272145
This allows mapping of any endian-aware type whose underlying
type (e.g. uint32_t) provides a ScalarTraits specialization.
Reviewed by: majnemer
Differential Revision: http://reviews.llvm.org/D21057
llvm-svn: 272049
Arrange to call verify(Function &) on each function, followed by
verify(Module &), whether the verifier is being used from the pass or
from verifyModule(). As a side effect, this fixes an issue that caused
us not to call verify(Function &) on unmaterialized functions from
verifyModule().
Differential Revision: http://reviews.llvm.org/D21042
llvm-svn: 271956
Also fix slice wrappers drop_front and drop_back.
The unittests are pretty awkward, but do the job; alternatives
welcome!
...and yes, I do have ArrayRefs with more than 4 billion elements.
llvm-svn: 271546
We only considered the length of the operation and the length of the
StreamRef without considering what it meant for the offset to be at a
non-zero position.
llvm-svn: 271496
Add support for the new pass manager to MemorySSA pass.
Change MemorySSA to be computed eagerly upon construction.
Change MemorySSAWalker to be owned by the MemorySSA object that creates
it.
Reviewers: dberlin, george.burgess.iv
Subscribers: mcrosier, llvm-commits
Differential Revision: http://reviews.llvm.org/D19664
llvm-svn: 271432
This will be necessary to allow the global merge pass to attach
multiple debug info metadata nodes to global variables once we reverse
the edge from DIGlobalVariable to GlobalVariable.
Differential Revision: http://reviews.llvm.org/D20414
llvm-svn: 271358
This tidies up some code that was manually constructing RuntimeDyld::SymbolInfo
instances from JITSymbols. It will save more mess in the future when
JITSymbol::getAddress is extended to return an Expected<TargetAddress> rather
than just a TargetAddress, since we'll be able to embed the error checking in
the conversion.
llvm-svn: 271350
StringError can be used to represent Errors that aren't recoverable based on
the error type, but that have a useful error message that can be reported to
the user or logged.
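A sketch of typical usage (llvm::StringError and inconvertibleErrorCode() are the existing Support/Error.h API):

  #include "llvm/Support/Error.h"

  llvm::Error openThing(bool Exists) {
    if (!Exists)
      // Carries a human-readable message plus an error code for reporting.
      return llvm::make_error<llvm::StringError>(
          "thing not found", llvm::inconvertibleErrorCode());
    return llvm::Error::success();
  }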
llvm-svn: 270948
APInt::slt was copying the LHS and RHS into temporaries and then making
them unsigned so that it could use an unsigned comparison. It did
this even on the paths which were trivial to give results for, such
as when the sign bit of the LHS was set while that of the RHS was not.
This changes the logic to return out immediately in the trivial cases,
and use an unsigned comparison in the remaining cases. But this time,
just use the unsigned comparison directly without creating any temporaries.
This works because, for example:
true = (-2 slt -1) = (0xFE ult 0xFF)
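In a simplified fixed-width form, the observation looks like this (an illustrative sketch; APInt operates on arbitrary widths):

  #include <cstdint>

  // If the signs differ, the negative operand is smaller; if they match,
  // two's complement ordering coincides with unsigned ordering.
  bool signedLessThan8(uint8_t LHS, uint8_t RHS) {
    bool LHSNeg = (LHS & 0x80) != 0, RHSNeg = (RHS & 0x80) != 0;
    if (LHSNeg != RHSNeg)
      return LHSNeg;
    return LHS < RHS;
  }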
Also added some tests explicitly for slt with APInts larger than 64 bits
so that this new code is tested.
Using the memory for 'opt -O2 verify-uselistorder.lto.opt.bc -o opt.bc'
(see r236629 for details), this reduces the number of allocations from
26.8M to 23.9M.
llvm-svn: 270881
Since r268966 the modern Verifier pass defaults to stripping invalid debug info
in nonasserts builds. This patch ports this behavior back to the legacy
Verifier pass as well. The primary motivation is that the clang frontend
accepts bitcode files as input but is still using the legacy pass pipeline.
Background: The problem I'm trying to solve with this sequence of patches is
that historically we've done a really bad job at verifying debug info. We want
to be able to make the verifier stricter without having to worry about breaking
bitcode compatibility with existing producers. For example, we don't necessarily
want IR produced by an older version of clang to be rejected by an LTO link just
because of malformed debug info, and rather provide an option to strip it. Note
that merely outdated (but well-formed) debug info would continue to be
auto-upgraded in this scenario.
http://reviews.llvm.org/D20629
<rdar://problem/26448800>
llvm-svn: 270768
This removes the subclasses of ProfileSummary and moves the members of the derived classes to the base class.
Differential Revision: http://reviews.llvm.org/D20390
llvm-svn: 270143
Transition InstrProf and Coverage over to the stricter Error/Expected
interface.
Changes since the initial commit:
- Fix error message printing in llvm-profdata.
- Check errors in loadTestingFormat() + annotateAllFunctions().
- Defer error handling in InstrProfIterator to InstrProfReader.
- Remove the base ProfError class to work around an MSVC ICE.
Differential Revision: http://reviews.llvm.org/D19901
llvm-svn: 270020
Having an enum member named Default is quite confusing: Is it distinct
from the others?
This patch removes that member and instead uses Optional<Reloc> in
places where we have a user input that still hasn't been mapped to the
default value, which, it is now clear, has to be one of the remaining 3
options.
llvm-svn: 269988
Summary:
Add support to control where files for a distributed backend (the
individual index files and optional imports files) are created.
This is invoked with a new thinlto-prefix-replace option in the gold
plugin and llvm-lto. If specified, expects a string of the form
"oldprefix:newprefix", and instead of generating these files in the
same directory path as the corresponding bitcode file, will use a path
formed by replacing the bitcode file's path prefix matching oldprefix
with newprefix.
Also add a new replace_path_prefix helper to Path.h in libSupport.
Depends on D19636.
Reviewers: joker.eph
Subscribers: llvm-commits, joker.eph
Differential Revision: http://reviews.llvm.org/D19644
llvm-svn: 269771
Transition InstrProf and Coverage over to the stricter Error/Expected
interface.
Changes since the initial commit:
- Address undefined-var-template warning.
- Fix error message printing in llvm-profdata.
- Check errors in loadTestingFormat() + annotateAllFunctions().
- Defer error handling in InstrProfIterator to InstrProfReader.
Differential Revision: http://reviews.llvm.org/D19901
llvm-svn: 269694
Transition InstrProf and Coverage over to the stricter Error/Expected
interface.
Changes since the initial commit:
- Fix error message printing in llvm-profdata.
- Check errors in loadTestingFormat() + annotateAllFunctions().
- Defer error handling in InstrProfIterator to InstrProfReader.
Differential Revision: http://reviews.llvm.org/D19901
llvm-svn: 269491
Transition InstrProf and Coverage over to the stricter Error/Expected
interface.
Differential Revision: http://reviews.llvm.org/D19901
llvm-svn: 269462
This adds a `seq` utility producing an iterator range over a sequence of values.
It increments through the values in the half-open range: [Begin, End),
producing those values when indirecting the iterator. It should support
integers, iterators, and any other type providing these basic arithmetic
operations.
This came up in the C++ standards committee meeting, and it seemed like
a useful construct that LLVM might want as well, and I wanted to
understand how easily we could solve it. I suspect this can be used to
write simpler counting loops even in LLVM along the lines of:
  for (int i : seq(0, v.size())) {
    ...
  };
As part of this, I had to fix the lack of a proxy object returned from
the operator[] in our iterator facade.
Differential Revision: http://reviews.llvm.org/D17870
llvm-svn: 269390
Summary:
...loop after the last iteration.
This is really hard to do correctly. The core problem is that we need to
model liveness through the induction PHIs from iteration to iteration in
order to get the correct results, and we need to correctly de-duplicate
the common subgraphs of instructions feeding some subset of the
induction PHIs. All of this can be driven either from a side effect at
some iteration or from the loop values used after the loop finishes.
This patch implements this by storing the forward-propagating analysis
of each instruction in a cache to recall whether it was free and whether
it has become live and thus counted toward the total unroll cost. Then,
at each sink for a value in the loop, we recursively walk back through
every value that feeds the sink, including looping back through the
iterations as needed, until we have marked the entire input graph as
live. Because we cache this, we never visit instructions more than twice
-- once when we analyze them and put them into the cache, and once when
we count their cost towards the unrolled loop. Also, because the cache
is only two bits and because we are dealing with relatively small
iteration counts, we can store all of this very densely in memory to
avoid this from becoming an excessively slow analysis.
The code here is still pretty gross. I would appreciate suggestions
about better ways to factor or split this up, I've stared too long at
the algorithmic side to really have a good sense of what the design
should probably look like.
Also, it might seem like we should do all of this bottom-up, but I think
that is a red herring. Specifically, the simplification power is *much*
greater working top-down. We can forward propagate very effectively,
even across strange and interesting recurrences around the backedge.
Because we use data to propagate, this doesn't cause a state space
explosion. Doing this level of constant folding, etc, would be very
expensive to do bottom-up because it wouldn't be until the last moment
that you could collapse everything. The current solution is essentially
a top-down simplification with a bottom-up cost accounting which seems
to get the best of both worlds. It makes the simplification incremental
and powerful while leaving everything dead until we *know* it is needed.
Finally, a core property of this approach is its *monotonicity*. At all
times, the current UnrolledCost is a conservatively low estimate. This
ensures that we will never early-exit from the analysis due to exceeding
a threshold when, had we continued, the cost would have gone back
below the threshold. These kinds of bugs can cause incredibly
hard-to-track-down random changes to behavior.
We could use a similar (but much simpler) technique within the inliner
as well to avoid considering speculated code in the inline cost.
Reviewers: chandlerc
Subscribers: sanjoy, mzolotukhin, llvm-commits
Differential Revision: http://reviews.llvm.org/D11758
llvm-svn: 269388
Remove the ModuleLevelChanges argument, and the ability to create new
subprograms for cloned functions. The latter was added without review in
r203662, but it has no in-tree clients (all non-test callers pass false
for ModuleLevelChanges [1], so it isn't reachable outside of tests). It
also isn't clear that adding a duplicate subprogram to the compile unit is
always the right thing to do when cloning a function within a module. If
this functionality comes back it should be accompanied with a more concrete
use case.
Furthermore, all in-tree clients add the returned function to the module.
Since that's pretty much the only sensible thing you can do with the function,
just do that in CloneFunction.
[1] http://llvm-cs.pcc.me.uk/lib/Transforms/Utils/CloneFunction.cpp/rCloneFunction
Differential Revision: http://reviews.llvm.org/D18628
llvm-svn: 269110
Add a convenience function to create MachineModuleInfo and
MachineFunctionAnalysis passes and add them to a pass manager.
Despite factoring out some shared code in
LiveIntervalTest/LLVMTargetMachine this will be used by my upcoming llc
change.
llvm-svn: 269002
This allows the transformation to strip invalid debug info.
This patch separates the Verifier into an analysis and a transformation
pass, with the transformation pass optionally stripping malformed
debug info.
The problem I'm trying to solve with this sequence of patches is that
historically we've done a really bad job at verifying debug info. We want
to be able to make the verifier stricter without having to worry about
breaking bitcode compatibility with existing producers. For example, we
don't necessarily want IR produced by an older version of clang to be
rejected by an LTO link just because of malformed debug info, and rather
provide an option to strip it. Note that merely outdated (but well-formed)
debug info would continue to be auto-upgraded in this scenario.
http://reviews.llvm.org/D19988
rdar://problem/25818489
This reapplies r268937 without modifications.
llvm-svn: 268966
This allows the transformation to strip invalid debug info.
This patch separates the Verifier into an analysis and a transformation
pass, with the transformation pass optionally stripping malformed
debug info.
The problem I'm trying to solve with this sequence of patches is that
historically we've done a really bad job at verifying debug info. We want
to be able to make the verifier stricter without having to worry about
breaking bitcode compatibility with existing producers. For example, we
don't necessarily want IR produced by an older version of clang to be
rejected by an LTO link just because of malformed debug info, and rather
provide an option to strip it. Note that merely outdated (but well-formed)
debug info would continue to be auto-upgraded in this scenario.
http://reviews.llvm.org/D19988
rdar://problem/25818489
llvm-svn: 268937
toString() consumes an Error and returns a string representation of its
contents. This commit also adds a message() method to ErrorInfoBase for
convenience.
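For example (a sketch of the intended usage):

  #include "llvm/Support/Error.h"
  #include <string>
  #include <utility>

  std::string describeFailure(llvm::Error Err) {
    // toString() consumes Err (marking it checked) and returns its message(s).
    return llvm::toString(std::move(Err));
  }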
Differential Revision: http://reviews.llvm.org/D19883
llvm-svn: 268465
A loop pass that didn't preserve this entire set of analyses wouldn't
play well with other loop passes, since these are generally a basic
requirement to do any interesting transformations to a loop.
Adds a helper to get the set of analyses a loop pass should preserve,
and checks that any loop pass we run satisfies the requirement.
llvm-svn: 268444
We have it for StringRef but not ArrayRef, and ArrayRef has drop_back,
so I see no reason it shouldn't have drop_front. Splitting this out of a
change that I have that will use this funcitonality.
llvm-svn: 268434
Check for success values in the CoverageMappingTest unit test file.
This is part of a series of patches to transition ProfileData over to
the stricter Error/Expected interface.
llvm-svn: 268420
Check for success values in the InstrProfTest unit test file.
This is part of a series of patches to transition ProfileData over to
the stricter Error/Expected interface.
llvm-svn: 268402
This patch fixes two somewhat related bugs in MemorySSA's caching
walker. These bugs were found because D19695 brought up the problem
that we'd have defs cached to themselves, which is incorrect.
The bugs this fixes are:
- We would sometimes skip the nearest clobber of a MemoryAccess, because
we would query our cache for a given potential clobber before
checking if the potential clobber is the clobber we're looking for.
The cache entry for the potential clobber would point to the nearest
clobber *of the potential clobber*, so if that was a cache hit, we'd
ignore the potential clobber entirely.
- There are times (sometimes in DFS, sometimes in the getClobbering...
functions) where we would insert cache entries that say a def
clobbers itself.
There's a bit of common code between the fixes for the bugs, so they
aren't split out into multiple commits.
This patch also adds a few unit tests, and refactors existing tests a
bit to reduce the duplication of setup code.
llvm-svn: 268087
In gcc, \ escapes every character in response files. It is true that this makes
it harder to mention Windows files in rsp files, but not doing this means clang
disagrees with gcc, and also disagrees with the shell (on non-Windows) which
rsp file quoting is supposed to match. clang isn't free to choose what to do
here.
In general, the idea for response files is to take bits of your command line
and write them to a file unchanged, and have things work the same way. Since
the command line would've been interpreted by the shell, things in the rsp file
need to be subject to the same shell quoting rules.
People who want to put Windows-style paths in their response files either need
to do any of:
* escape their backslashes
* or use clang-cl which uses cl.exe/cmd.exe quoting rules
* pass --rsp-quoting=windows to clang to tell it to use
cl.exe/cmd.exe quoting rules for response files.
Fixes PR27464.
http://reviews.llvm.org/D19417
llvm-svn: 267556
This replaces use of std::error_code and ErrorOr in the ORC RPC support library
with Error and Expected. This required updating the OrcRemoteTarget API, Client,
and server code, as well as updating the Orc C API.
This patch also fixes several instances where Errors were dropped.
llvm-svn: 267457
If several regions cover the same area of code, we have to restore
the combined value for that area when returning from a nested region.
This patch achieves that by combining regions before calling buildSegments.
Differential Revision: http://reviews.llvm.org/D18610
llvm-svn: 267390
Eliminate DITypeIdentifierMap and make DITypeRef a thin wrapper around
DIType*. It is no longer legal to refer to a DICompositeType by its
'identifier:', and DIBuilder no longer retains all types with an
'identifier:' automatically.
Aside from the bitcode upgrade, this is mainly removing logic to resolve
an MDString-based reference to an actual DIType. The commits leading
up to this have made the implicit type map in DICompileUnit's
'retainedTypes:' field superfluous.
This does not remove DITypeRef, DIScopeRef, DINodeRef, and
DITypeRefArray, or stop using them in DI-related metadata. Although as
of this commit they aren't serving a useful purpose, there are patches
under review to reuse them for CodeView support.
The tests in LLVM were updated with deref-typerefs.sh, which is attached
to the thread "[RFC] Lazy-loading of debug info metadata":
http://lists.llvm.org/pipermail/llvm-dev/2016-April/098318.html
llvm-svn: 267296
Each reference to an unresolved MDNode is expensive, since the RAUW
support in MDNode uses a separate allocation and side map. Since
a distinct MDNode doesn't require its operands on creation (unlike
uniqued nodes, there's no need to check for structural equivalence),
use nullptr for any of its unresolved operands. Besides reducing the
burden on MDNode maps, this can avoid allocating temporary MDNodes in
the first place.
We need some way to track operands. Invent DistinctMDOperandPlaceholder
for this purpose, which is a Metadata subclass that holds an ID and
points at its single user. DistinctMDOperandPlaceholder::replaceUseWith
is just like RAUW, but its name highlights that there is only ever
exactly one use.
There is no support for moving (or, obviously, copying) these. Move
support would be possible but expensive; leaving it unimplemented
prevents user error. In the BitcodeReader I originally considered
allocating on a BumpPtrAllocator and keeping a vector of pointers to
them, and then I realized that std::deque implements exactly this.
A couple of obvious follow-ups:
- Change ValueEnumerator to emit distinct nodes first to take more
advantage of this optimization. (How convenient... I think I might
have a couple of patches for this.)
- Change DIBuilder and its consumers (like CGDebugInfo in clang) to
use something like this when constructing debug info in the first
place.
llvm-svn: 267270
A DenseMap doesn't store the hashes, so it needs to recompute them when
the table is resized.
In some applications the hashing cost is noticeable. That is the case
for example in lld for symbol names (StringRef).
This patch adds a templated structure that can wrap any value that can
go in a DenseMap and caches the hash.
llvm-svn: 266981
Add a new method, DICompositeType::buildODRType, that will create or
mutate the DICompositeType for a given ODR identifier, and use it in
LLParser and BitcodeReader instead of DICompositeType::getODRType.
The logic is as follows:
- If there's no node, create one with the given arguments.
- Else, if the current node is a forward declaration and the new
arguments would create a definition, mutate the node to match the
new arguments.
- Else, return the old node.
This adds a missing feature supported by the current DITypeIdentifierMap
(which I'm slowly making redundant). The only remaining difference is
that the DITypeIdentifierMap has a "the-last-one-wins" rule, whereas
DICompositeType::buildODRType has a "the-first-one-wins" rule.
For now I'm leaving behind DICompositeType::getODRType since it has
obvious, low-level semantics that are convenient for unit testing.
llvm-svn: 266786
The second test in this file is actually testing DICompositeType API,
not LLVMContext API (after r266742 moved it to a higher level). This
really doesn't make sense in an LLVMContextTest. Rename the tests
before adding more.
llvm-svn: 266764
Lift the API for debug info ODR type uniquing up a layer. Instead of
clients managing the map directly on the LLVMContext, add a static
method to DICompositeType called getODRType and handle the map in the
background. Also adds DICompositeType::getODRTypeIfExists, so far just
for convenience in the unit tests.
This simplifies the logic in LLParser and BitcodeReader. Because of
argument spam there are actually a few more lines of code now; I'll see
if I come up with a reasonable way to clean that up.
llvm-svn: 266742
Tighten up the API for debug info ODR type uniquing in LLVMContext. The
only reason to allow other DIType subclasses is to make the unit tests
prettier :/.
llvm-svn: 266737
As per David's review, rename everything in the new API for ODR type
uniquing of debug info.
ensureDITypeMap => enableDebugTypeODRUniquing
destroyDITypeMap => disableDebugTypeODRUniquing
hasDITypeMap => isODRUniquingDebugTypes
llvm-svn: 266713
The root of the problem was that findMainViewFileID(File, Function)
could return some ID for any given file, even though that file
was not the main file for that function.
This patch ensures that the result of this function is consistent
with the result of findMainViewFileID(Function).
This commit reapplies r266436, which was reverted by r266458,
with the .covmapping file serialized in v1 format.
Differential Revision: http://reviews.llvm.org/D18787
llvm-svn: 266620
This reverts commit r266477.
This commit introduces a cyclic dependency: it has "Analysis" depend on "ProfileData",
while "ProfileData" depends on "Object", which depends on "BitCode", which
depends on "Analysis".
llvm-svn: 266619
Three problems:
1. <future> can't be easily used. If you must use it, see
include/Support/ThreadPool.h for how.
2. constexpr problems, even after 266588.
3. Move assignment operators can't be defaulted in MSVC2013.
llvm-svn: 266615
Removed some unused headers, replaced some headers with forward class declarations.
Found using simple scripts like this one:
clear && ack --cpp -l '#include "llvm/ADT/IndexedMap.h"' | xargs grep -L 'IndexedMap[<]' | xargs grep -n --color=auto 'IndexedMap'
Patch by Eugene Kosov <claprix@yandex.ru>
Differential Revision: http://reviews.llvm.org/D19219
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 266595
This reworks the RPC primitives around asynchronous call/handle. It also
updates the ORC remote JIT API to use the new scheme.
The previous version of the RPC tools only supported void functions, and
required the user to manually call a paired function to return results. This
patch replaces the Procedure typedef (which only supported void functions) with
the Function typedef which supports return values, e.g.:
Function<FooId, int32_t(std::string)> Foo;
The RPC primitives and channel operations are also expanded. RPC channels must
support four new operations: startSendMessage, endSendMessage,
startReceiveMessage and endReceiveMessage, to handle channel locking. In
addition, serialization support for tuples to RPCChannels is added to enable
multiple return values.
The RPC primitives are expanded from callAppend, call, expect and handle, to:
appendCallAsync - Make an asynchronous call to the given function.
callAsync - The same as appendCallAsync, but calls send on the channel when
done.
callSTHandling - Blocking call for single-threaded code. Wraps a call to
callAsync then waits on the result, using a user-supplied
handler to handle any callbacks from the remote.
callST - The same as callSTHandling, except that it doesn't handle
callbacks - it expects the result to be the first return.
expect and handle - as before.
handleResponse - Handle a response from the remote.
waitForResult - Wait for the response with the given sequence number to arrive.
llvm-svn: 266581
Rather than relying on the structural equivalence of DICompositeType to
merge type definitions, use an explicit map on the LLVMContext that
LLParser and BitcodeReader consult when constructing new nodes.
Each non-forward-declaration DICompositeType with a non-empty
'identifier:' field is stored/loaded from the type map, and the first
definition will "win".
This map is opt-in: clients that expect ODR types from different modules
to be merged must call LLVMContext::ensureDITypeMap.
- Clients that just happen to load more than one Module in the same
LLVMContext won't magically merge types.
- Clients (like LTO) that want to continue to merge types based on ODR
identifiers should opt-in immediately.
I have updated LTOCodeGenerator.cpp, the two "linking" spots in
gold-plugin.cpp, and llvm-link (unless -disable-debug-info-type-map) to
set this.
With this in place, it will be straightforward to remove the DITypeRef
concept (i.e., referencing types by their 'identifier:' string rather
than pointing at them directly).
llvm-svn: 266549
Stop memoizing ConstantAsMetadata in ValueMapper::mapMetadata. Now we
have to recompute it, but these metadata aren't particularly common, and
it restricts the lifetime of the Metadata map unnecessarily.
(The motivation is that I have a patch which uses a single Metadata map
for the lifetime of IRMover. Mehdi profiled r266446 with the patch
applied and we saw a pretty big speedup in lib/Linker.)
llvm-svn: 266513
This reverts commit r266507, reapplying r266503 (and r266505
"ValueMapper: Use API from r266503 in unit tests, NFC") completely
unchanged.
I reverted because of a bot failure here:
http://lab.llvm.org:8011/builders/lld-x86_64-freebsd/builds/16810/
However, looking more closely, the failure was from a host-compiler
crash (clang 3.7.1) when building:
lib/CodeGen/AsmPrinter/CMakeFiles/LLVMAsmPrinter.dir/DwarfAccelTable.cpp.o
I didn't modify that file, or anything it includes, with that commit.
The next build (which hadn't picked up my revert) got past it:
http://lab.llvm.org:8011/builders/lld-x86_64-freebsd/builds/16811/
I think this was just unfortunate timing. I suppose the bot must be
flakey.
llvm-svn: 266510
This reverts commit r266503, in case it's the root cause of this bot
failure:
http://lab.llvm.org:8011/builders/lld-x86_64-freebsd/builds/16810
I'm also reverting r266505 -- "ValueMapper: Use API from r266503 in unit
tests, NFC" -- since it's in the way.
llvm-svn: 266507
Adds an interface to get ProfileSummary for a module and makes InlineCost use ProfileSummary to get max function count.
Differential Revision: http://reviews.llvm.org/D18622
llvm-svn: 266477
Currently each Function points to a DISubprogram and DISubprogram has a
scope field. For member functions the scope is a DICompositeType. DIScopes
point to the DICompileUnit to facilitate type uniquing.
Distinct DISubprograms (with isDefinition: true) are not part of the type
hierarchy and cannot be uniqued. This change removes the subprograms
list from DICompileUnit and instead adds a pointer to the owning compile
unit to distinct DISubprograms. This would make it easy for ThinLTO to
strip unneeded DISubprograms and their transitively referenced debug info.
Motivation
----------
Materializing DISubprograms is currently the most expensive operation when
doing a ThinLTO build of clang.
We want the DISubprogram to be stored in a separate Bitcode block (or the
same block as the function body) so we can avoid having to expensively
deserialize all DISubprograms together with the global metadata. If a
function has been inlined into another subprogram we need to store a
reference to the block containing the inlined subprogram.
Attached to https://llvm.org/bugs/show_bug.cgi?id=27284 is a python script
that updates LLVM IR testcases to the new format.
http://reviews.llvm.org/D19034
<rdar://problem/25256815>
llvm-svn: 266446
The root of the problem was that findMainViewFileID(File, Function)
could return some ID for any given file, even though that file
was not the main file for that function.
This patch ensures that the result of this function is consistent
with the result of findMainViewFileID(Function).
Differential Revision: http://reviews.llvm.org/D18787
llvm-svn: 266436