AsmPrinter: Document why DIEValueList uses a linked-list, NFC

There are two main reasons why a linked list makes sense for
`DIEValueList`.

 1. We want `DIE` to be on a `BumpPtrAllocator` to improve teardown
    efficiency.  Making `DIEValueList` array-based would make that much
    more complicated.
 2. The singly-linked list is fairly memory efficient.  The histogram
    [1] shows that most DIEs have relatively few values, so we often pay
    less than the two- to three-pointer static overhead of a vector.
    Furthermore, we don't know ahead of time exactly how many values a
    `DIE` needs, so a vector-like scheme will on average over-allocate
    by ~50%.  As it happens, that's about the same memory overhead as
    the linked-list node (see the rough sketch below).

[1]: http://lists.cs.uiuc.edu/pipermail/llvmdev/2015-May/085910.html
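To make point 2 concrete, here is a small, purely illustrative sketch of
the arithmetic.  It assumes 8-byte pointers and a hypothetical 16-byte
`DIEValue`; neither number comes from this commit, but under those
assumptions the vector's ~50% average slack works out to roughly one
pointer per value, i.e. the same as a linked-list node's overhead:

    #include <cstdio>
    #include <initializer_list>

    // Back-of-the-envelope overhead comparison (illustrative numbers only).
    //  * Linked list: one extra 'next' pointer per value.
    //  * Vector: ~3 pointers of static header (begin/end/capacity) plus
    //    ~50% average slack from geometric growth of the allocation.
    int main() {
      const unsigned PtrBytes = 8;    // assumes 64-bit pointers
      const unsigned ValueBytes = 16; // hypothetical sizeof(DIEValue)
      for (unsigned NumValues : {1u, 2u, 5u, 10u}) {
        unsigned ListOverhead = NumValues * PtrBytes;
        unsigned VectorOverhead = 3 * PtrBytes + NumValues * ValueBytes / 2;
        std::printf("%2u values: list %3u bytes, vector ~%3u bytes of overhead\n",
                    NumValues, ListOverhead, VectorOverhead);
      }
      return 0;
    }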

The comment I added to the code is a little more succinct, but I think
it's enough to give the idea.

llvm-svn: 240868
Author: Duncan P. N. Exon Smith
Date: 2015-06-27 01:19:17 +00:00
Parent: 887179f82e
Commit: ca12aec0a1


@@ -546,6 +546,16 @@ public:
/// This is a singly-linked list, but instead of reversing the order of
/// insertion, we keep a pointer to the back of the list so we can push in
/// order.
///
/// There are two main reasons to choose a linked list over a customized
/// vector-like data structure.
///
/// 1. For teardown efficiency, we want DIEs to be BumpPtrAllocated. Using a
///    linked list here makes this way easier to accomplish.
/// 2. Carrying an extra pointer per \a DIEValue isn't expensive. 45% of DIEs
///    have 2 or fewer values, and 90% have 5 or fewer. A vector would be
///    over-allocated by 50% on average anyway, the same cost as the
///    linked-list node.
class DIEValueList {
  struct Node : IntrusiveBackListNode {
    DIEValue V;
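For illustration, here is a minimal standalone sketch of the idea the
comment describes: a singly-linked list that also keeps a pointer to its
back so values can be appended in insertion order.  This is not LLVM's
actual IntrusiveBackList; the `int` payload is a stand-in for `DIEValue`:

    #include <cstdio>

    // Simplified stand-in for the list described above: singly linked, but
    // with a Tail pointer so push_back() appends in insertion order without
    // having to reverse the list afterwards.
    struct Node {
      int Value;            // Stand-in for DIEValue.
      Node *Next = nullptr;
    };

    struct BackList {
      Node *Head = nullptr;
      Node *Tail = nullptr;  // Pointer to the back enables in-order appends.

      void push_back(Node &N) {
        if (!Head) {
          Head = Tail = &N;
        } else {
          Tail->Next = &N;
          Tail = &N;
        }
      }
    };

    int main() {
      // In the real code, nodes live on a BumpPtrAllocator; locals suffice here.
      Node A{1}, B{2}, C{3};
      BackList List;
      List.push_back(A);
      List.push_back(B);
      List.push_back(C);
      for (const Node *N = List.Head; N; N = N->Next)
        std::printf("%d\n", N->Value);  // Prints 1, 2, 3 in insertion order.
      return 0;
    }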