At most, these use the StringRef/Twine wrappers and don't have any implicit uses of std::string.
Move the include down to the .cpp implementations where std::string is actually used.
Revert "Delete llvm::is_trivially_copyable and CMake variable HAVE_STD_IS_TRIVIALLY_COPYABLE"
This reverts commit 4d4bd40b578d77b8c5bc349ded405fb58c333c78.
This reverts commit 557b00e0afb2dc1776f50948094ca8cc62d97be4.
Create the LLVM / CodeView register mappings for the 32-bit ARM Windows targets.
Reviewed By: compnerd
Differential Revision: https://reviews.llvm.org/D89622
Stored Error objects have to be checked, even if they are success
values.
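To illustrate the rule, a minimal sketch of the checking discipline (mayFail is a hypothetical function returning llvm::Error):

  #include "llvm/Support/Error.h"
  #include "llvm/Support/raw_ostream.h"
  using namespace llvm;

  Error mayFail(); // hypothetical

  void demo() {
    // The bool test inside the if marks the Error as checked, whether it
    // holds a failure or is Error::success().
    if (Error E = mayFail()) {
      logAllUnhandledErrors(std::move(E), errs(), "demo: ");
      return;
    }
    // An Error destroyed unchecked, even a success value, aborts in
    // builds with LLVM_ENABLE_ABI_BREAKING_CHECKS.
  }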
This reverts commit 8d250ac3cd48d0f17f9314685a85e77895c05351.
Relands commit 49b3459930655d879b2dc190ff8fe11c38a8be5f.
Original commit message:
-----------------------------------------
This makes type merging much faster (-24% on chrome.dll) when multiple
threads are available, but it slightly increases the time to link (+10%)
when /threads:1 is passed. With only one more thread, the new type
merging is faster (-11%). The output PDB should be identical to what it
was before this change.
To give an idea, here is the /time output placed side by side:
                                  BEFORE   |   AFTER
  Input File Reading:             956 ms   |  968 ms
  Code Layout:                    258 ms   |  190 ms
  Commit Output File:               6 ms   |    7 ms
  PDB Emission (Cumulative):     6691 ms   | 4253 ms
    Add Objects:                 4341 ms   | 2927 ms
      Type Merging:              2814 ms   | 1269 ms   -55%!
    Symbol Merging:              1509 ms   | 1645 ms
    Publics Stream Layout:        111 ms   |  112 ms
    TPI Stream Layout:            764 ms   |   26 ms   trivial
  Commit to Disk:                1322 ms   | 1036 ms   -300 ms
  ----------------------------------------------------------------
  Total Link Time:               8416 ms   | 5882 ms   -30% overall
The main source of the additional overhead in the single-threaded case
is the need to iterate all .debug$T sections up front to check which
type records should go in the IPI stream. See fillIsItemIndexFromDebugT.
With changes to the .debug$H section, we could pre-calculate this info
and eliminate the need to do this walk up front. That should restore
single-threaded performance back to what it was before this change.
This change will cause LLD to be much more parallel than it used to be, and
for users who do multiple links in parallel, it could regress
performance. However, when the user is only doing one link, it's a huge
improvement. In the future, we can use NT worker threads to avoid
oversaturating the machine with work, but for now, this is such an
improvement for the single-link use case that I think we should land
this as is.
Algorithm
----------
Before this change, we essentially used a
DenseMap<GloballyHashedType, TypeIndex> to check if a type has already
been seen, and if it hasn't been seen, insert it now and use the next
available type index for it in the destination type stream. DenseMap
does not support concurrent insertion, and even if it did, the linker
must be deterministic: it cannot produce different PDBs by using
different numbers of threads. The output type stream must be in the same
order regardless of the order of hash table insertions.
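A minimal sketch of that single-threaded scheme (simplified; the names here are illustrative rather than the actual LLD code):

  #include "llvm/ADT/DenseMap.h"
  #include "llvm/DebugInfo/CodeView/TypeHashing.h"
  #include <cstdint>

  llvm::DenseMap<llvm::codeview::GloballyHashedType, uint32_t> Seen;
  uint32_t NextIndex = 0x1000; // first non-simple CodeView type index

  // Return the destination index for Hash, assigning the next free
  // index the first time this hash is seen.
  uint32_t mapType(llvm::codeview::GloballyHashedType Hash) {
    auto Insertion = Seen.try_emplace(Hash, NextIndex);
    if (Insertion.second)
      ++NextIndex; // a new unique record joins the output stream
    return Insertion.first->second;
  }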
In order to create a hash table that supports concurrent insertion, the
table cells must be small enough that they can be updated atomically.
The algorithm I used for updating the table using linear probing is
described in this paper, "Concurrent Hash Tables: Fast and General(?)!":
https://dl.acm.org/doi/10.1145/3309206
The GHashCell in this change is essentially a pair of 32-bit integer
indices: <sourceIndex, typeIndex>. The sourceIndex is the index of the
TpiSource object, and it represents an input type stream. The typeIndex
is the index of the type in the stream. Together, we have something like
a ragged 2D array of ghashes, which can be looked up as:
tpiSources[tpiSrcIndex]->ghashes[typeIndex]
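A simplified model of the cell and the side-table lookup (illustrative names; the real GHashCell also packs an isItem bit, as noted below):

  #include "llvm/DebugInfo/CodeView/TypeHashing.h"
  #include <cstdint>
  #include <vector>

  // One cell: which input stream, and which record within it. The ghash
  // key itself is not stored in the cell; probing recovers it from the
  // per-source side tables below.
  struct GHashCell {
    uint32_t TpiSrcIndex;
    uint32_t TypeIndex;
  };

  struct TpiSource {
    std::vector<llvm::codeview::GloballyHashedType> Ghashes; // one per record
  };
  std::vector<TpiSource *> TpiSources; // indexed by TpiSrcIndex

  llvm::codeview::GloballyHashedType keyOf(const GHashCell &Cell) {
    return TpiSources[Cell.TpiSrcIndex]->Ghashes[Cell.TypeIndex];
  }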
By using these side tables, we can omit the key data from the hash
table, and keep the table cell small. There is a cost to this: resolving
hash table collisions requires many more loads than simply looking at
the key in the same cache line as the insertion position. However, most
supported platforms should have a 64-bit CAS operation to update the
cell atomically.
To make the result of concurrent insertion deterministic, the cell
payloads must have a priority function. Defining one is pretty
straightforward: compare the two 32-bit numbers as a combined 64-bit
number. This means that types coming from inputs earlier on the command
line have a higher priority and are more likely to appear earlier in the
final PDB type stream than types from an input appearing later on the
link line.
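A hedged sketch of the insertion step, with the cell packed into one uint64_t so a 64-bit compare-and-swap suffices. Zero is reserved for an empty cell, and the real code additionally compares ghash keys through the side tables before treating two cells as duplicates of the same type:

  #include <atomic>
  #include <cstdint>
  #include <vector>

  // Lower packed value == earlier on the link line == higher priority.
  uint64_t packCell(uint32_t SrcIdx, uint32_t TypeIdx) {
    return (uint64_t(SrcIdx) << 32) | TypeIdx;
  }

  std::vector<std::atomic<uint64_t>> Table; // sized once, never rehashed

  // Install NewCell in Slot if the slot is empty (0) or currently holds
  // a lower-priority cell; compare_exchange_weak refreshes Cur on
  // failure, so the loop re-evaluates against the latest contents.
  void updateCell(std::atomic<uint64_t> &Slot, uint64_t NewCell) {
    uint64_t Cur = Slot.load(std::memory_order_relaxed);
    while (Cur == 0 || NewCell < Cur)
      if (Slot.compare_exchange_weak(Cur, NewCell))
        return;
  }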
After table insertion, the non-empty cells in the table can be copied
out of the main table and sorted by priority to determine the ordering
of the final type index stream. At this point, item and type records
must be separated, either by sorting or by splitting into two arrays,
and I chose sorting. This is why the GHashCell must contain the isItem
bit.
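Continuing the sketch, one possible packing puts isItem in the high bit so a single sort handles both concerns:

  #include "llvm/ADT/STLExtras.h"
  #include <atomic>
  #include <cstdint>
  #include <vector>

  extern std::vector<std::atomic<uint64_t>> Table; // from the sketch above
  std::vector<uint64_t> Cells;

  void collectAndSort() {
    for (std::atomic<uint64_t> &Slot : Table)
      if (uint64_t Cell = Slot.load(std::memory_order_relaxed))
        Cells.push_back(Cell); // keep only non-empty cells

    // With isItem in the high bit, one sort separates item records from
    // type records and orders each group by insertion priority.
    llvm::sort(Cells);
  }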
Once the final PDB TPI stream ordering is known, we need to compute a
mapping from source type index to PDB type index. To avoid starting over
from scratch and looking up every type again by its ghash, we save the
insertion position of every hash table insertion during the first
insertion phase. Because the table does not support rehashing, the
insertion position is stable. Using the array of insertion positions
indexed by source type index, we can replace the source type indices in
the ghash table cells with the PDB type indices.
Once the table cells have been updated to contain PDB type indices, the
mapping for each type source can be computed in parallel. Simply iterate
the list of cell positions and replace them with the PDB type index,
since the insertion positions are no longer needed.
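One way to realize these two phases, continuing the sketch (SlotOfInput stands for a source's saved insertion positions; the real code must also account for the TPI/IPI split and the 0x1000 base of CodeView type indices):

  #include "llvm/ADT/STLExtras.h"
  #include <atomic>
  #include <cstdint>
  #include <vector>

  extern std::vector<std::atomic<uint64_t>> Table; // from the sketches above
  extern std::vector<uint64_t> Cells;              // sorted non-empty cells

  // Rewrite each surviving cell with its rank in the sorted array: its
  // (simplified) final PDB type index.
  void assignFinalIndices() {
    for (size_t Slot = 0; Slot != Table.size(); ++Slot)
      if (uint64_t Cell = Table[Slot].load(std::memory_order_relaxed)) {
        uint64_t Rank = llvm::lower_bound(Cells, Cell) - Cells.begin();
        Table[Slot].store(Rank, std::memory_order_relaxed);
      }
  }

  // Per source, in parallel: record I maps to whatever now lives in the
  // slot where its ghash landed during the first insertion phase.
  void buildIndexMap(const std::vector<size_t> &SlotOfInput,
                     std::vector<uint32_t> &IndexMap) {
    IndexMap.resize(SlotOfInput.size());
    for (size_t I = 0; I != SlotOfInput.size(); ++I)
      IndexMap[I] =
          uint32_t(Table[SlotOfInput[I]].load(std::memory_order_relaxed));
  }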
Once we have a source to destination type index mapping for every type
source, there are no more data dependencies. We know which type records
are "unique" (not duplicates), and what their final type indices will
be. We can do the remapping in parallel, and accumulate type sizes and
type hashes in parallel by type source.
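LLVM's parallel algorithms make this step direct; a sketch, with remapAndAccumulate standing in for the per-source work:

  #include "llvm/Support/Parallel.h"
  #include <vector>

  struct TpiSource; // as in the sketches above
  extern std::vector<TpiSource *> TpiSources;
  void remapAndAccumulate(TpiSource *Src); // hypothetical per-source worker

  void remapAll() {
    // No cross-source data dependencies remain, so each source can
    // rewrite its records and tally sizes/hashes independently.
    llvm::parallelForEach(TpiSources, remapAndAccumulate);
  }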
Lastly, TPI stream layout must be done serially. Accumulate all the type
records, sizes, and hashes, and add them to the PDB.
Differential Revision: https://reviews.llvm.org/D87805
Most clients only need CVType and CVSymbol, not structs for every type
and symbol. Move CVSymbol and CVType to CVRecord.h to accomplish this.
Update some of the common headers that need CVSymbol and CVType to use
the new location.
We don't need the StringsAndChecksumsRef forward declaration as we have to include StringsAndChecksums.h.
We don't need DebugSubsectionRecord.h and we forward declare all referenced classes.
We don't need to include cstdint as we don't use any stdint types.
The number of public symbols is very large, and each deserialization
does a few heap allocations. The public symbols are serialized by the
linker, so we can assume they have the expected layout and use it
directly.
Saves O(#publics) temporary heap allocations and shrinks some data
structures.
This accounts for a large portion of the memory allocations in LLD.
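A sketch of the idea; the struct below is an assumed layout for a linker-produced S_PUB32 record, not the exact LLD definition:

  #include "llvm/Support/Endian.h"
  #include <cstdint>

  using llvm::support::ulittle16_t;
  using llvm::support::ulittle32_t;

  // Serialized public symbol as the linker lays it out: a record prefix,
  // the fixed-size fields, then an inline null-terminated name.
  struct PublicSym32Layout {
    ulittle16_t RecordLen;  // record length, excluding this field
    ulittle16_t RecordKind; // S_PUB32
    ulittle32_t Flags;
    ulittle32_t Offset;
    ulittle16_t Segment;
    // char Name[]; null-terminated, followed by padding to 4 bytes
  };

  // Reading a field becomes a cast plus a load; no temporary objects.
  uint32_t publicOffset(const uint8_t *RecordData) {
    return reinterpret_cast<const PublicSym32Layout *>(RecordData)->Offset;
  }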
This DebugSubsectionRecordBuilder object can be stored directly in
C13Builders; it mostly wraps other subsections.
Remove the container kind field from the object. It is always the same
for all elements in the vector, and we can pass it in during writing.
When emitting PDBs, the TypeStreamMerger class is used to merge .debug$T records from the input .OBJ files into the output .PDB stream.
Records in .OBJs are not required to be aligned on 4-byte boundaries, and "The Netwide Assembler 2.14" generates non-aligned records.
When compiling with -DLLVM_ENABLE_ASSERTIONS=ON, an assert was triggered in MergingTypeTableBuilder when non-ghash merging was used.
With ghash merging there was no assert.
As a result, LLD could potentially generate a non-aligned TPI stream.
We now align records on 4-byte boundaries when record indices are remapped, in TypeStreamMerger::remapIndices().
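For reference, a sketch of the padding rule (CodeView pads with LF_PAD bytes, each 0xF0 plus the count of bytes remaining to the boundary; Out is a hypothetical output buffer):

  #include "llvm/Support/MathExtras.h"
  #include <cstdint>
  #include <vector>

  // After appending a record of Size bytes, emit (AlignedSize - Size)
  // pad bytes: LF_PAD3 (0xF3), LF_PAD2 (0xF2), LF_PAD1 (0xF1).
  void padRecord(std::vector<uint8_t> &Out, size_t Size) {
    size_t AlignedSize = llvm::alignTo(Size, 4);
    for (size_t Pad = AlignedSize - Size; Pad > 0; --Pad)
      Out.push_back(uint8_t(0xF0 + Pad));
  }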
Differential Revision: https://reviews.llvm.org/D75081
StringMap.h is very popular (4K uses), and it doesn't need to see
BumpPtrAllocator, which is relatively expensive according to
ClangBuildAnalyzer. StringMap only needs MallocAllocator, so split
MallocAllocator out into AllocatorBase.h and use that instead.
Here is the change in header uses:
$ diff -u thedeps-before.txt thedeps-after.txt | \
grep '^[-+] ' | sort | uniq -c | sort -nr
3993 + ../llvm/include/llvm/Support/AllocatorBase.h
758 - ../llvm/include/llvm/Support/Allocator.h
270 - ../llvm/include/llvm/Support/Alignment.h
13 - ../llvm/include/llvm/Support/Host.h
6 - ../llvm/include/llvm/ADT/StringMap.h
4 - ../llvm/include/llvm/Support/SwapByteOrder.h
4 - ../llvm/include/llvm/Support/MathExtras.h
4 - ../llvm/include/llvm/Support/AlignOf.h
4 - ../llvm/include/llvm/ADT/SmallVector.h
1 - ../llvm/include/llvm/Support/PointerLikeTypeTraits.h
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D73392
Summary:
I used this information to motivate splitting up the Intrinsic::ID enum
(5d986953c8b917bacfaa1f800fc1e242559f76be) and adding a key method to
clang::Sema (586f65d31f32ca6bc8cfdb8a4f61bee5057bf6c8) which saved a
fair amount of object file size.
Example output for clang.pdb:
Top 10 types responsible for the most TPI input bytes:
     index     total bytes   count     size
    0x3890:      8,671,220 = 1,805 *  4,804
   0xE13BE:      5,634,720 =   252 * 22,360
   0x6874C:      5,181,600 =   408 * 12,700
    0x2A1F:      4,520,528 = 1,574 *  2,872
   0x64BFF:      4,024,020 =   469 *  8,580
    0x1123:      4,012,020 = 2,157 *  1,860
    0x6952:      3,753,792 =   912 *  4,116
    0xC16F:      3,630,888 =   633 *  5,736
    0x69DD:      3,601,160 =   985 *  3,656
    0x678D:      3,577,904 =   319 * 11,216
In this case, we can see that record 0x3890 is responsible for ~8MB of
total object file size for objects in clang.
The user can then use llvm-pdbutil to find out what the record is:
$ llvm-pdbutil dump -types -type-index 0x3890
Types (TPI Stream)
============================================================
Showing 1 records.
0x3890 | LF_FIELDLIST [size = 4804]
- LF_STMEMBER [name = `WORDTYPE_MAX`, type = 0x1001, attrs = public]
- LF_MEMBER [name = `U`, Type = 0x37F0, offset = 0, attrs = private]
- LF_MEMBER [name = `BitWidth`, Type = 0x0075 (unsigned), offset = 8, attrs = private]
- LF_METHOD [name = `APInt`, # overloads = 8, overload list = 0x3805]
...
In this case, we can see that these are members of the APInt class,
which is emitted in 1805 object files.
The next largest type is ASTContext:
$ llvm-pdbutil dump -types -type-index 0xE13BE bin/clang.pdb
0xE13BE | LF_FIELDLIST [size = 22360]
- LF_BCLASS
type = 0x653EA, offset = 0, attrs = public
- LF_MEMBER [name = `Types`, Type = 0x653EB, offset = 8, attrs = private]
- LF_MEMBER [name = `ExtQualNodes`, Type = 0x653EC, offset = 24, attrs = private]
- LF_MEMBER [name = `ComplexTypes`, Type = 0x653ED, offset = 48, attrs = private]
- LF_MEMBER [name = `PointerTypes`, Type = 0x653EE, offset = 72, attrs = private]
...
ASTContext only appears 252 times, but the list of members is long, and
must be repeated everywhere it is used.
This was the output before I split Intrinsic::ID:
Top 10 types responsible for the most TPI input:
   0x686C:     69,823,920 = 1,070 * 65,256
   0x686D:     69,819,640 = 1,070 * 65,252
   0x686E:     69,819,640 = 1,070 * 65,252
   0x686B:     16,371,000 = 1,070 * 15,300
...
These records were all lists of intrinsic enums.
Reviewers: MaskRay, ruiu
Subscribers: mgrang, zturner, thakis, hans, akhuang, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D71437
This fixes (one aspect of) compilation of LLDB with MSVC for ARM64.
LLDB source files include intrin.h, and the MSVC intrin.h transitively
includes arm64intr.h, which has an ARM64_FPSR define, which clashes
with the enum declaration.
Differential Revision: https://reviews.llvm.org/D67864
llvm-svn: 372481
Add the core registers and NEON registers mapping to the CodeView
register ID. This is sufficient to compile a basic C program with debug
info using CodeView debug info.
llvm-svn: 370423
Now that we've moved to C++14, we no longer need the llvm::make_unique
implementation from STLExtras.h. This patch is a mechanical replacement
of (hopefully) all the llvm::make_unique instances across the monorepo.
llvm-svn: 369013
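The substitution is one-for-one; a sketch with a hypothetical Widget type:

  #include <memory>

  struct Widget { Widget(int, bool) {} }; // hypothetical

  void demo() {
    // Before: auto W = llvm::make_unique<Widget>(42, true);
    auto W = std::make_unique<Widget>(42, true); // after, standard C++14
  }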