mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-11-24 11:42:57 +01:00
Commit Graph

1353 Commits

Author SHA1 Message Date
Andrew Trick
56264ae675 SparseSet: Add support for key-derived indexes and arbitrary key types.
This nicely handles the most common case of virtual register sets, but
also handles anticipated cases where we will map pointers to IDs.

The goal is not to develop a completely generic SparseSet
template. Instead we want to handle the expected uses within llvm
without any template antics in the client code. I'm adding a bit of
template nastiness here, and some assumptions about expected usage in
order to make the client code very clean.

The expected common use cases I'm designing for:
- integer keys that need to be reindexed, and may map to additional
  data
- densely numbered objects where we want pointer keys because no
  number->object map exists.

llvm-svn: 155227
2012-04-20 20:05:28 +00:00
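
A hedged sketch of the first use case above: values carry their own key and
map it to a dense index. SparseSet, setUniverse, and insert are real ADT
names, but the value type and its getSparseSetIndex hook are illustrative
and may not match this exact revision.

  #include "llvm/ADT/SparseSet.h"

  // Illustrative value type: the key (a virtual register number) plus
  // additional data mapped to that key.
  struct LiveVirtReg {
    unsigned VirtReg;
    unsigned LastUse;
    unsigned getSparseSetIndex() const { return VirtReg; }
  };

  void trackLiveness(unsigned NumVirtRegs) {
    llvm::SparseSet<LiveVirtReg> LiveRegs;
    LiveRegs.setUniverse(NumVirtRegs); // all keys must be < NumVirtRegs
    LiveVirtReg LVR = {5, 0};
    LiveRegs.insert(LVR);              // constant-time, no hashing
    bool Live = LiveRegs.count(5u);    // constant-time lookup by key
    (void)Live;
  }
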
Benjamin Kramer
ffa121d1ea SmallPtrSet: Reuse DenseMapInfo's pointer hash function instead of inventing a bad one ourselves.
DenseMap's hash function uses slightly more entropy and reduces hash collisions
significantly. I also experimented with Hashing.h, but it didn't give a lot of
improvement while being much more expensive to compute.

llvm-svn: 154996
2012-04-18 10:37:32 +00:00
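
For context, a from-memory sketch of the DenseMapInfo-style pointer hash
being reused; the exact shift amounts may differ in this revision.

  #include <cstdint>

  // Shift away the low bits (zero for aligned allocations) and fold in
  // some higher bits, giving noticeably better bucket distribution than
  // a naive truncation of the pointer value.
  static unsigned pointerHash(const void *P) {
    uintptr_t V = reinterpret_cast<uintptr_t>(P);
    return unsigned(V >> 4) ^ unsigned(V >> 9);
  }
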
Benjamin Kramer
b9eb9d651b Make StringMap's copy ctor non-explicit.
Without this, gcc doesn't allow us to put a StringMap into a
std::map. It works with clang, though.

llvm-svn: 154737
2012-04-14 09:04:57 +00:00
Benjamin Kramer
c1e98c85e2 FoldingSet: Push the hash through FoldingSetTraits::Equals, so clients can use it.
llvm-svn: 154496
2012-04-11 14:06:47 +00:00
Duncan Sands
f25460b85f Express the number of ULPs in fpaccuracy metadata as a real rather than a
rational number, e.g. as 2.5 rather than 5, 2. OK'd by Peter Collingbourne.

llvm-svn: 154387
2012-04-10 08:22:43 +00:00
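
A hedged C++ sketch of building the metadata operand in the new form, one
real number of ULPs instead of the old rational pair; the header locations
and builder calls are from memory and may not match this exact revision.

  #include "llvm/Constants.h"
  #include "llvm/LLVMContext.h"
  #include "llvm/Metadata.h"
  #include "llvm/Type.h"

  // Express 2.5 ULPs as a single floating-point constant, rather than
  // the two integers 5 and 2.
  llvm::MDNode *makeFPAccuracy(llvm::LLVMContext &Ctx) {
    llvm::Value *ULPs =
        llvm::ConstantFP::get(llvm::Type::getFloatTy(Ctx), 2.5);
    return llvm::MDNode::get(Ctx, ULPs);
  }
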
Chandler Carruth
95c7cfd717 Perform partial SROA on the helper hashing structure. I really wish the
optimizers could do this for us, but expecting partial SROA of classes
with template methods through cloning is probably expecting too much
heroics. With this change, the begin/end pointer pairs which indicate
the status of each loop iteration are actually passed directly into each
layer of the combine_data calls, and the inliner has a chance to see
when most of the combine_data function could be deleted by inlining.
Similarly for 'length'.

We have to be careful to limit the places where in/out reference
parameters are used as those will also defeat the inliner / optimizers
from properly propagating constants.

With this change, LLVM is able to fully inline and unroll the hash
computation of small sets of values, such as two or three pointers.
These now decompose into essentially straight-line code with no loops or
function calls.

There is still one code quality problem to be solved with the hashing --
LLVM is failing to nuke the alloca. It removes all loads from the
alloca, leaving only lifetime intrinsics and dead(!!) stores to the
alloca. =/ Very unfortunate.

llvm-svn: 154264
2012-04-07 20:01:31 +00:00
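
A generic illustration (not the actual hashing code) of why in/out
reference parameters defeat the optimizer while plain value flow does not:

  #include <cstddef>

  // In/out reference: the value is pinned in memory across the call,
  // so constant propagation through the inlined body is harder.
  static void combineByRef(std::size_t &Length, const char *First,
                           const char *Last) {
    Length += static_cast<std::size_t>(Last - First);
  }

  // Value in, value out: pure data flow that the inliner and constant
  // propagation can collapse into straight-line code.
  static std::size_t combineByValue(std::size_t Length, const char *First,
                                    const char *Last) {
    return Length + static_cast<std::size_t>(Last - First);
  }
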
Benjamin Kramer
47a0e2efe1 DenseMap: Perform the pod-like object optimization when the value type is POD-like, not the DenseMapInfo for it.
Purge now-unused template arguments. This has been broken since r91421. Patch by Lubos Lunak!

llvm-svn: 154170
2012-04-06 10:43:44 +00:00
Hal Finkel
d6e526ae11 Add triple support for the IBM BG/P and BG/Q supercomputers.
llvm-svn: 153882
2012-04-02 18:31:33 +00:00
Benjamin Kramer
01e4003c0f Move ftostr into its last user (cppbackend) and simplify it a bit.
New code should use raw_ostream.

llvm-svn: 153326
2012-03-23 11:26:29 +00:00
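
A small sketch of the suggested replacement: format through
raw_string_ostream instead of calling ftostr.

  #include "llvm/Support/raw_ostream.h"
  #include <string>

  // str() flushes the stream and returns the underlying std::string.
  std::string formatDouble(double V) {
    std::string S;
    llvm::raw_string_ostream OS(S);
    OS << V;
    return OS.str();
  }
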
Anna Zaks
b35d6e77c8 Make sure ImmutableSet never inserts Tombstone/Entry into DenseMap.
ImutAVLTree uses random unsigned values as keys into a DenseMap,
which could possibly happen to be the same value as the Tombstone or
Empty keys in the DenseMap.

Test case is hard to come up with. We randomly get failures on the
internal static analyzer bot, which most likely hits this issue
(hard to be 100% sure without the full stack).

llvm-svn: 153148
2012-03-20 22:56:27 +00:00
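
For context, the two reserved sentinels involved; for unsigned keys
DenseMapInfo reserves the two largest values, so a randomly generated key
can collide with them.

  #include "llvm/ADT/DenseMap.h"

  // Inserting either of these values as a real key corrupts the table,
  // which is exactly the hazard described above.
  unsigned EmptyKey = llvm::DenseMapInfo<unsigned>::getEmptyKey();      // ~0U
  unsigned Tombstone = llvm::DenseMapInfo<unsigned>::getTombstoneKey(); // ~0U - 1
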
Francois Pichet
632c8aec93 Fixes the MSVC build.
Commit r152704 exposed a latent MSVC limitation (aka bug). 
Both ilist and iplist contain the same function:
  template<class InIt> void insert(iterator where, InIt first, InIt last) {
    for (; first != last; ++first) insert(where, *first);
  }

Also, ilist inherits from iplist and contains a "using
iplist<NodeTy>::insert". MSVC doesn't know which one to pick and
complains with an error.

I think it is safe to delete ilist::insert since it is redundant anyway.

llvm-svn: 152746
2012-03-14 22:36:10 +00:00
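
A stripped-down illustration of the ambiguity MSVC reported; the names
here are generic, not the actual ilist code.

  struct Base {
    template <class InIt> void insert(int Where, InIt First, InIt Last) {}
  };

  struct Derived : Base {
    using Base::insert; // re-exposes Base's member template...
    // ...alongside an identical one declared here; MSVC cannot choose
    // between them, so the redundant copy was deleted.
    template <class InIt> void insert(int Where, InIt First, InIt Last) {}
  };
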
Benjamin Kramer
3bb5766d6e Move APInt::operator[] inline.
llvm-svn: 152692
2012-03-14 00:38:15 +00:00
Benjamin Kramer
8a2603ad7e Move APInt::operator! inline, it's small and fuses well with surrounding code when inlined.
llvm-svn: 152688
2012-03-14 00:01:35 +00:00
Benjamin Kramer
88de2712e2 Remove an old hack for pre-2005 MSVC. We don't support ancient microsoft compilers anymore.
llvm-svn: 152659
2012-03-13 20:07:36 +00:00
Douglas Gregor
bb5c9263b1 Add a few missing 'template' keywords
llvm-svn: 152525
2012-03-11 02:22:41 +00:00
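
A generic example of the kind of fix involved: inside a template, a call
to a dependent member template needs the 'template' keyword.

  template <typename T> void callVisit(T &Obj) {
    // Without 'template', the '<' would parse as a less-than operator.
    Obj.template visit<int>();
  }
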
Michael J. Spencer
69772efcb2 Make StringRef::getAsInteger work with all integer types. Before this change
it would fail with {,u}int64_t on x86-64 Linux.

This also removes code duplication.

llvm-svn: 152517
2012-03-10 23:02:54 +00:00
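
A usage sketch: getAsInteger returns true on failure, and after this
change it behaves uniformly for every integer width.

  #include "llvm/ADT/StringRef.h"
  #include <cstdint>

  bool parseExample() {
    llvm::StringRef Str("12345");
    uint64_t Value;
    bool Failed = Str.getAsInteger(10, Value); // radix 10
    return !Failed && Value == 12345;          // parses successfully
  }
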
Anton Korobeynikov
906cbb66d3 Add support for r600 (AMD GPUs HD2XXX - HD6XXX) target triplet.
Patch by Tom Stellard!

llvm-svn: 152400
2012-03-09 10:09:36 +00:00
Chandler Carruth
b716b4a9c9 Fix a silly restriction on the fast-path for hash_combine_range. This
caused several clients to select the slow variation. =[ This is extra
annoying because we don't have any realistic way of testing this -- by
design, these two functions *must* compute the same value.

Found while inspecting the output of some benchmarks I'm working on.

llvm-svn: 152369
2012-03-09 02:49:38 +00:00
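
A usage sketch of the function in question; the fast and slow variations
are internal, and by design both must produce the same hash_code.

  #include "llvm/ADT/Hashing.h"

  llvm::hash_code hashArray() {
    const int Data[] = {1, 2, 3};
    // Contiguous, hashable data can take the fast path; the fixed bug
    // made some such clients take the slow one.
    return llvm::hash_combine_range(Data, Data + 3);
  }
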
Duncan Sands
40d7bf2dbc Revert commit 152300 (ddunbar) since it still seems to be breaking
buildbots.  Original commit message:

[ADT] Change the trivial FoldingSetNodeID::Add* methods to be inline, reapplied
with a fix for the longstanding over-read of 32-bit pointer values.

llvm-svn: 152304
2012-03-08 09:32:21 +00:00
Daniel Dunbar
1f5bb1e781 [ADT] Change the trivial FoldingSetNodeID::Add* methods to be inline, reapplied
with a fix for the longstanding over-read of 32-bit pointer values.

llvm-svn: 152300
2012-03-08 07:42:18 +00:00
Daniel Dunbar
fe15f6be2f Revert r152288, "[ADT] Change the trivial FoldingSetNodeID::Add* methods to be
inline.", which is breaking the bots in a way I don't understand.

llvm-svn: 152295
2012-03-08 04:17:15 +00:00
Daniel Dunbar
b99e53d8c3 [ADT] Change the trivial FoldingSetNodeID::Add* methods to be inline.
llvm-svn: 152288
2012-03-08 02:52:00 +00:00
Chandler Carruth
d731be046b What's better than fixing and simplifying broken hash functions?
Deleting them because they aren't used. =D

Yell if you need these, I'm happy to instead replace them with nice uses
of the new infrastructure.

llvm-svn: 152219
2012-03-07 09:54:06 +00:00
Chandler Carruth
25594f9e13 Add support to the hashing infrastructure for automatically hashing both
integral and enumeration types. This is accomplished with a bit of
template type trait magic. Thanks to Richard Smith for the core idea
here: detect viable types by checking which types can be default
constructed as a template parameter.

This is used (in conjunction with a system for detecting nullptr_t
should it exist) to provide an is_integral_or_enum type trait that
doesn't need a whitelist or direct compiler support.

With this, the hashing is extended to the more general facility. This
will be used in a subsequent commit to hashing more things, but I wanted
to make sure the type trait magic went through the build bots separately
in case other compilers don't like this formulation.

llvm-svn: 152217
2012-03-07 09:32:32 +00:00
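
A hedged sketch of the detection trick (illustrative names, not LLVM's
actual trait): only integral and enumeration types are valid as non-type
template parameters, so a substitution failure on such a parameter filters
out everything else.

  template <typename T> class IsIntegralOrEnumSketch {
    // Probe<U, ...> is only well-formed when U can appear as a
    // non-type template parameter, i.e. integral and enum types.
    template <typename U, U Value> struct Probe {};

    template <typename U> static char test(Probe<U, U(0)> *);
    template <typename U> static int test(...);

  public:
    static const bool value = sizeof(test<T>(0)) == sizeof(char);
  };

  enum Color { Red };
  // IsIntegralOrEnumSketch<int>::value    == true
  // IsIntegralOrEnumSketch<Color>::value  == true
  // IsIntegralOrEnumSketch<double>::value == false
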
Eli Friedman
892a4b0100 Missing change in r152106 for TinyPtrVector.
llvm-svn: 152201
2012-03-07 03:37:32 +00:00
Chandler Carruth
fccdc0f2e7 Remove an accidental cut/paste of a comment into the middle of
a function. Dunno how I missed this when going through code...

llvm-svn: 152196
2012-03-07 02:33:06 +00:00
Benjamin Kramer
0637e87d74 SmallPtrSet: Provide a more efficient implementation of swap than the default triple-copy std::swap.
This currently assumes that both sets have the same SmallSize to keep the implementation simple,
a limitation that can be lifted if someone cares.

llvm-svn: 152143
2012-03-06 20:40:02 +00:00
Benjamin Kramer
2303c94eb1 Remove excess const, a const_iterator shouldn't be const itself.
Fixes 1242 warnings from gcc during clang build.

llvm-svn: 152120
2012-03-06 13:32:36 +00:00
Argyrios Kyrtzidis
c919e757e2 [TinyPtrVector] Add erase method and const-goodness.
llvm-svn: 152107
2012-03-06 07:14:58 +00:00
Argyrios Kyrtzidis
267b14e42c PointerUnion::getAddrOf() does not need to be template since we can only
use the first pointer type for it. Rename it to getAddrOfPtr1().

llvm-svn: 152106
2012-03-06 07:14:54 +00:00
Argyrios Kyrtzidis
e07aa2dee3 Remove UsuallyTinyPtrVector.
It is just a worse version of TinyPtrVector.

llvm-svn: 152097
2012-03-06 03:02:16 +00:00
Argyrios Kyrtzidis
c9658a580e Add include/llvm/ADT/UsuallyTinyPtrVector.h which is a vector that
optimizes the case where there is only one element.

llvm-svn: 152090
2012-03-06 02:08:48 +00:00
Chandler Carruth
fd1948653d Switch to a C-style cast here to silence a brain-dead MSVC warning. It
complains about the truncation of a 64-bit constant to a 32-bit value
when size_t is 32 bits wide, but *only with static_cast*!!! A
static_cast is exactly the signal that should *silence* such a
warning, and in fact it does with both GCC and Clang.

Anyways, this was causing grief for all the MSVC builds, so pointless
change made. Thanks to Nikola on IRC for confirming that this works.

llvm-svn: 152021
2012-03-05 09:56:12 +00:00
Chandler Carruth
a93fbd8fff Replace the hashing functions on APInt and APFloat with overloads of the
new hash_value infrastructure, and replace their implementations using
hash_combine. This removes a complete copy of Jenkin's lookup3 hash
function (which is both significantly slower and lower quality than the
one implemented in hash_combine) along with a somewhat scary xor-only
hash function.

Now that APInt and APFloat can be passed directly to hash_combine,
simplify the rest of the LLVMContextImpl hashing to use the new
infrastructure.

llvm-svn: 152004
2012-03-04 12:02:57 +00:00
Chandler Carruth
b4a6f80d2e Add generic support for hashing StringRef objects using the new hashing library.
llvm-svn: 152003
2012-03-04 10:55:27 +00:00
Chandler Carruth
9b15a1b01a Teach the hashing facilities how to hash std::string objects.
llvm-svn: 152000
2012-03-04 10:23:15 +00:00
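
A usage sketch covering the three commits above: APInt, StringRef, and
std::string now flow through the common hashing entry points.

  #include "llvm/ADT/APInt.h"
  #include "llvm/ADT/Hashing.h"
  #include "llvm/ADT/StringRef.h"
  #include <string>

  llvm::hash_code hashExamples() {
    llvm::hash_code HInt = llvm::hash_value(llvm::APInt(32, 42));
    llvm::hash_code HRef = llvm::hash_value(llvm::StringRef("foo"));
    llvm::hash_code HStr = llvm::hash_value(std::string("foo"));
    // All three types also compose directly with hash_combine.
    return llvm::hash_combine(HInt, HRef, HStr);
  }
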
Daniel Dunbar
5ffbedef13 hash_state: Don't use initialization target during initialization.
llvm-svn: 151959
2012-03-03 00:35:48 +00:00
Benjamin Kramer
dc1ee2e852 Fix indentation.
llvm-svn: 151932
2012-03-02 19:19:34 +00:00
Benjamin Kramer
730f1fb5b3 Hashing: microoptimize a truncate on 64 bit away. This currently blocks dead code eliminating the conditional.
The optimizer should handle this eventually, but currently LVI isn't really designed for this kind of stuff.

llvm-svn: 151918
2012-03-02 15:34:35 +00:00
Chandler Carruth
53ca2f8c9e Make the hashing algorithm Endian neutral. This is a bit annoying, but
folks who know something about PPC tell me that the byte swap is crazy
fast and without this the bit mixture would actually be different. It
might not be worse, but I've not measured it and so I'd rather not trust
it. This way, the algorithm is identical on big- and little-endian hosts. I'll
look into any performance issues etc stemming from this.

llvm-svn: 151892
2012-03-02 11:16:10 +00:00
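
A generic sketch of the normalization (not LLVM's actual code): byte-swap
each word on big-endian hosts so the mixed bits are identical everywhere.

  #include <cstdint>

  // Normalize a 64-bit word to little-endian before mixing; on PPC the
  // byte swap is cheap, which is what makes this affordable.
  static inline uint64_t toLittleEndian64(uint64_t V) {
  #if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
    return __builtin_bswap64(V); // GCC/Clang builtin
  #else
    return V;
  #endif
  }
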
Chandler Carruth
37925e436c Simplify the pair optimization. Rather than using complex type traits,
just ensure that the number of bytes in the pair is the sum of the bytes
in each side of the pair. As long as that's true, there are no extra
bytes that might be padding.

Also add a few tests that previously would have slipped through the
checking. The more accurate checking mechanism catches these and ensures
they are handled conservatively correctly.

Thanks to Duncan for prodding me to do this right and more simply.

llvm-svn: 151891
2012-03-02 10:56:40 +00:00
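
A sketch of the simplified condition: if the pair's size equals the sum of
its members' sizes, there can be no padding bytes. The real trait also
requires both member types to be hashable as raw data.

  #include <utility>

  template <typename T, typename U> struct PairHasNoPadding {
    static const bool value =
        sizeof(std::pair<T, U>) == sizeof(T) + sizeof(U);
  };
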
Chandler Carruth
8ef1184049 We really want to hash pairs of directly-hashable data as directly
hashable data. This matters when we have pair<T*, U*> as a key, which is
quite common in DenseMap, etc. To that end, we need to detect when this
is safe. The requirements on a generic std::pair<T, U> are:

1) Both T and U must satisfy the existing is_hashable_data trait. Note
   that this includes the requirement that T and U have no internal
   padding bits or other bits not contributing directly to equality.
2) The alignment constraints of std::pair<T, U> do not require padding
   between consecutive objects.
3) The alignment constraints of U and the size of T do not conspire to
   require padding between the first and second elements.

Grow two somewhat magical traits to detect this by forming a POD
structure and inspecting offset artifacts on it. Hopefully this won't
cause any compilers to panic.

Added and adjusted tests now that pairs, even nested pairs, are treated
as just sequences of data.

Thanks to Jeffrey Yasskin for helping me sort through this and reviewing
the somewhat subtle traits.

llvm-svn: 151883
2012-03-02 09:26:36 +00:00
Chandler Carruth
09d76cf26d Add support for hashing pairs by delegating to each sub-object. There is
an open question of whether we can do better than this by treating pairs
as boring data containers and directly hashing the two subobjects. This
at least makes the API reasonable.

In order to make this change, I reorganized the header a bit. I lifted
the declarations of the hash_value functions up to the top of the header
with their doxygen comments as these are intended for users to interact
with. They shouldn't have to wade through implementation details. I then
defined them at the very end so that they could be defined in terms of
hash_combine or any other hashing infrastructure.

Added various pair-hashing unittests.

llvm-svn: 151882
2012-03-02 08:32:29 +00:00
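
The delegating approach amounts to combining the hashes of the two
subobjects; a sketch (later superseded for layout-compatible pairs by
hashing them as flat data, per the entries above):

  #include "llvm/ADT/Hashing.h"
  #include <utility>

  template <typename T, typename U>
  llvm::hash_code hashPairSketch(const std::pair<T, U> &P) {
    return llvm::hash_combine(P.first, P.second);
  }
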
Chandler Carruth
e07f473768 Remove the misguided extension here that reserved two special values in
the hash_code. I'm not sure what I was thinking here; the use cases for
special values are in the *keys*, not in the hashes of those keys.

We can always resurrect this if needed, or clients can accomplish the
same goal themselves. This makes the general case somewhat faster (~5
cycles faster on my machine) and smaller with less branching.

llvm-svn: 151865
2012-03-02 00:48:38 +00:00
Chandler Carruth
5bca3bef43 Fix two warnings in this code that I missed.
llvm-svn: 151839
2012-03-01 21:45:51 +00:00
Argyrios Kyrtzidis
791437e5f8 Move include/llvm/ADT/SaveAndRestore.h -> include/llvm/Support/SaveAndRestore.h
llvm-svn: 151828
2012-03-01 19:45:47 +00:00
Chandler Carruth
cc9b4516cb Rewrite LLVM's generalized support library for hashing to follow the API
of the proposed standard hashing interfaces (N3333), and to use
a modified and tuned version of the CityHash algorithm.

Some of the highlights of this change:
 -- Significantly higher quality hashing algorithm with very well
    distributed results, and extremely few collisions. Should be close to
    a checksum for up to 64-bit keys. Very little clustering or clumping of
    hash codes, to better distribute load on probed hash tables.
 -- Built-in support for reserved values.
 -- Simplified API that composes cleanly with other C++ idioms and APIs.
 -- Better scaling performance as keys grow. This is the fastest
    algorithm I've found and measured for moderately sized keys (such as
    those that show up in some of the uniquing and folding use cases).
 -- Support for enabling per-execution seeds to prevent table ordering
    or other artifacts of hashing algorithms from impacting the output of
    LLVM. The seeding would make each run different and highlight these
    problems during bootstrap.

This implementation was tested extensively using the SMHasher test
suite, and passed with flying colors, doing even better than the
original CityHash algorithm.

I've included a unittest, although it is somewhat minimal at the moment.
I've also added (or refactored into the proper location) type traits
necessary to implement this, and converted users of GeneralHash over.

My only immediate concern with this implementation is the performance
of hashing small keys. I've already started working to improve this, and
will continue to do so. Currently, the only faster algorithms produce
lower quality results, but it is likely there is a better compromise
than the current one.

Many thanks to Jeffrey Yasskin who did most of the work on the N3333
paper, pair-programmed some of this code, and reviewed much of it. Many
thanks also go to Geoff Pike and Jyrki Alakuijala, the original
authors of CityHash on which this is heavily based, and Austin Appleby
who created MurmurHash and the SMHasher test suite.

Also thanks to Nadav, Tobias, Howard, Jay, Nick, Ahmed, and Duncan for
all of the review comments! If there are further comments or concerns,
please let me know and I'll jump on 'em.

llvm-svn: 151822
2012-03-01 18:55:25 +00:00
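
A usage sketch of the N3333-style interface the rewrite provides:
hash_value for single values, hash_combine for heterogeneous fields, and
hash_combine_range for sequences. The Symbol type is illustrative.

  #include "llvm/ADT/Hashing.h"
  #include "llvm/ADT/StringRef.h"

  struct Symbol {
    unsigned ID;
    llvm::StringRef Name;
    const void *Scope;
  };

  // Typical client code: combine heterogeneous members into a single
  // hash_code suitable for probed hash tables.
  llvm::hash_code hash_value(const Symbol &S) {
    return llvm::hash_combine(S.ID, S.Name, S.Scope);
  }
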
Argyrios Kyrtzidis
20ba865302 Move "clang/Analysis/Support/SaveAndRestore.h" to "llvm/ADT/SaveAndRestore.h"
to make it more widely available.

llvm-svn: 151564
2012-02-27 21:08:33 +00:00
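
A usage sketch of the utility being moved; the destructor restores the
saved value when the scope exits, even on early return.

  #include "llvm/ADT/SaveAndRestore.h"

  void parseLoopBody(bool &InLoop) {
    llvm::SaveAndRestore<bool> Guard(InLoop, true); // set for this scope
    // ... recursive parsing work ...
  } // InLoop reverts to its previous value here
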
Jay Foad
e019c92c2a Help the compiler to eliminate some dead code when hashing an array of T
where sizeof (T) is a multiple of 4.

llvm-svn: 151523
2012-02-27 11:00:17 +00:00
Jay Foad
6414ccdb33 The implementation of GeneralHash::addBits broke C++ aliasing rules; fix
it with memcpy. This also fixes a problem on big-endian hosts, where
addUnaligned would return different results depending on the alignment
of the data.

llvm-svn: 151247
2012-02-23 09:16:04 +00:00
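
A sketch of the aliasing-safe pattern the fix uses: memcpy the bytes into
a local integer instead of casting and dereferencing, which respects
strict aliasing and reads the same bytes regardless of alignment.

  #include <cstdint>
  #include <cstring>

  // Compilers lower this memcpy to a single load on targets that
  // support unaligned access.
  static inline uint32_t load32(const void *P) {
    uint32_t V;
    std::memcpy(&V, P, sizeof(V));
    return V;
  }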