block that is being visited in the bitstream. Clients can also now
skip blocks before reading them, and query the current abbreviation number
as seen from the perspective of the Deserializer. This allows clients
to be more interactive in the deserialization process (if they so choose).
llvm-svn: 43916
instead of just using "unsigned". This gives us more flexibility in changing
the definition of the handle later, and is more self-documenting.
Added tracking of the block stack in the Deserializer. Now clients can query
if they are still within a block using the methods GetCurrentBlockLocation()
and FinishedBlock().
llvm-svn: 43903
array of pointers to not allocate a second array to contain the pointer IDs.
Fixed a bug in the same member function where deserialized pointers were
not being registered with the backpatcher.
llvm-svn: 43855
to group the pointer IDs together in the bitstream before their referenced
contents (which will lend itself to more efficient encoding).
llvm-svn: 43845
should only affect x86 when using long double. Now
12/16 bytes are output for long double globals (the
exact amount depends on the alignment). This brings
globals in line with the rest of LLVM: the space
reserved for an object is now always the ABI size.
One tricky point is that only 10 bytes should be
output for long double if it is a field in a packed
struct, which is the reason for the additional
argument to EmitGlobalConstant.
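For reference, a standalone C++ illustration (not part of this patch) of the
sizes involved, assuming a GCC/Clang-style x86 target where long double is
the 80-bit type:

#include <cstdio>

struct Unpacked { long double x; };                        // ABI-sized: 12 or 16 bytes on x86
struct __attribute__((packed)) Packed { long double x; };  // packed: only the 10 data bytes

int main() {
  // Typically prints 12 on x86-32 and 16 on x86-64 for the unpacked case,
  // and 10 for the packed case.
  std::printf("unpacked: %zu, packed: %zu\n", sizeof(Unpacked), sizeof(Packed));
  return 0;
}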
llvm-svn: 43688
or getTypeSizeInBits as appropriate in ScalarReplAggregates.
The right change to make was not always obvious, so it would
be good to have an SROA guru review this. While there I noticed
some bugs, and fixed them: (1) arrays of x86 long double have
holes due to alignment padding, but this wasn't being spotted
by HasStructPadding (renamed to HasPadding). The same goes
for arrays of oddly sized ints. Vectors also suffer from this;
in fact, the problem for vectors is much worse because basic
vector assumptions seem to be broken by vectors of a type with
alignment padding. I didn't try to fix any of these vector
problems. (2) The code for extracting smaller integers from
larger ones (in the "int union" case) was wrong on big-endian
machines for integers with size not a multiple of 8, like i1.
Probably this is impossible to hit via llvm-gcc, but I fixed
it anyway while there and added a testcase. I also got rid of
some trailing whitespace and changed a function name which
had an obvious typo in it.
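As a hedged sketch (not the actual HasPadding code), the padding check
described in (1) boils down to comparing the bits a type actually stores with
the space the ABI reserves for each element:

#include <cstdint>

// An array element leaves a hole when it stores fewer bits than the slot the
// ABI reserves for it, e.g. x86 long double (80 bits stored in a 96/128-bit
// slot) or an oddly sized int like i36 (40 bits stored in a 64-bit slot).
bool elementHasPadding(uint64_t StoreSizeInBits, uint64_t ABISizeInBits) {
  return StoreSizeInBits < ABISizeInBits;
}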
llvm-svn: 43672
can be eliminated by the allocator if the destination and source target the
same register. The most common case is when the source and destination registers
are in different classes. For example, on x86 mov32to32_ targets GR32_, which
contains a subset of the registers in GR32.
The allocator can do 2 things:
1. Set the preferred allocation for the destination of a copy to that of its source.
2. After allocation is done, change the allocation of a copy destination (if
legal) so the copy can be eliminated.
This eliminates 443 extra moves from 403.gcc.
llvm-svn: 43662
the target pointer to be passed by reference. This can result in less
typing, as the type of the object to be deserialized can be inferred from the
argument.
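A self-contained illustration of the idea (stand-in types, not the real
Deserializer API): because the pointer is taken by reference, the template
argument is deduced at the call site.

#include <string>

struct Deserializer { /* stand-in for the real class */ };

template <typename T>
void ReadPtr(Deserializer &D, T *&PtrRef) {
  // ... deserialize an object of type T and point PtrRef at it ...
  PtrRef = nullptr;  // placeholder body
}

void client(Deserializer &D) {
  std::string *S;
  ReadPtr(D, S);  // T = std::string is deduced; no explicit template argument needed
}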
llvm-svn: 43647
memory rather than in a copy of the APFloat. This avoids problems
when the destination is wider than our significand, and is cleaner.
Also provide deterministic values in all cases where conversion
fails, namely zero for NaNs and the minimal or maximal value
respectively for underflow or overflow.
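The failure-value policy, sketched as ordinary C++ rather than the APFloat
code itself (double and int32_t here are stand-ins for the source and
destination types):

#include <cmath>
#include <cstdint>
#include <limits>

int32_t toInt32Deterministic(double V) {
  if (std::isnan(V))
    return 0;                                      // NaN converts to zero
  if (V < std::numeric_limits<int32_t>::min())
    return std::numeric_limits<int32_t>::min();    // underflow -> minimal value
  if (V > std::numeric_limits<int32_t>::max())
    return std::numeric_limits<int32_t>::max();    // overflow -> maximal value
  return static_cast<int32_t>(V);
}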
llvm-svn: 43626
Deserializer.
There were issues with Visual C++ barfing when instantiating
SerializeTrait<T> when "T" was an abstract class AND
SerializeTrait<T>::ReadVal was *never* called:
template <typename T>
struct SerializeTrait {
  <SNIP>
  static inline T ReadVal(Deserializer& D) { T::ReadVal(D); }
  <SNIP>
};
Visual C++ would complain about "T" being an abstract class, even
though ReadVal was never instantiated (although one of the other
member functions was).
Removing this from the trait is not a big deal. It was hardly ever
used, and users who want "read-by-value" deserialization can simply
call the appropriate methods directly instead of relying on
trait-based-dispatch. The trait dispatch for
serialization/deserialization is simply sugar in many cases (like this
one).
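Concretely, the workaround is to call the class's own reader directly (types
here are stand-ins for whatever a client actually uses):

struct Deserializer { /* stand-in */ };

struct MyType {
  static MyType ReadVal(Deserializer &) { return MyType(); }
};

MyType readDirectly(Deserializer &D) {
  return MyType::ReadVal(D);  // instead of SerializeTrait<MyType>::ReadVal(D)
}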
llvm-svn: 43624
The meaning of getTypeSize was not clear - clarifying it is important
now that we have x86 long double and arbitrary precision integers.
The issue with long double is that it requires 80 bits, and this is
not a multiple of its alignment. This gives a primitive type for
which getTypeSize differed from getABITypeSize. For arbitrary precision
integers it is even worse: there is the minimum number of bits needed to
hold the type (eg: 36 for an i36), the maximum number of bits that will
be overwritten when storing the type (40 bits for i36) and the ABI size
(i.e. the storage size rounded up to a multiple of the alignment; 64 bits
for i36).
This patch removes getTypeSize (not really - it is still there but
deprecated to allow for a gradual transition). Instead there is:
(1) getTypeSizeInBits - a number of bits that suffices to hold all
values of the type. For a primitive type, this is the minimum number
of bits. For an i36 this is 36 bits. For x86 long double it is 80.
This corresponds to gcc's TYPE_PRECISION.
(2) getTypeStoreSizeInBits - the maximum number of bits that is
written when storing the type (or read when reading it). For an
i36 this is 40 bits, for an x86 long double it is 80 bits. This
is the size alias analysis is interested in (getTypeStoreSize
returns the number of bytes). There doesn't seem to be anything
corresponding to this in gcc.
(3) getABITypeSizeInBits - this is getTypeStoreSizeInBits rounded
up to a multiple of the alignment. For an i36 this is 64, for an
x86 long double this is 96 or 128 depending on the OS. This is the
spacing between consecutive elements when you form an array out of
this type (getABITypeSize returns the number of bytes). This is
TYPE_SIZE in gcc.
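A standalone worked example (not from the patch) of how the three sizes
relate for i36, with the 64-bit ABI alignment taken as an assumption matching
the numbers above:

#include <cstdint>
#include <cstdio>

int main() {
  uint64_t SizeInBits      = 36;                          // getTypeSizeInBits(i36)
  uint64_t StoreSizeInBits = (SizeInBits + 7) / 8 * 8;    // 40 = getTypeStoreSizeInBits(i36)
  uint64_t AlignInBits     = 64;                          // assumed ABI alignment of i36
  uint64_t ABISizeInBits   =
      (StoreSizeInBits + AlignInBits - 1) / AlignInBits * AlignInBits;  // 64
  std::printf("%llu %llu %llu\n", (unsigned long long)SizeInBits,
              (unsigned long long)StoreSizeInBits,
              (unsigned long long)ABISizeInBits);         // prints: 36 40 64
}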
Since successive elements in a SequentialType (arrays, pointers
and vectors) need to be aligned, the spacing between them will be
given by getABITypeSize. This means that the size of an array
is the length times the getABITypeSize. It also means that GEP
computations need to use getABITypeSize when computing offsets.
Furthermore, if an alloca allocates several elements at once then
these too need to be aligned, so the size of the alloca has to be
the number of elements multiplied by getABITypeSize. Logically
speaking this doesn't have to be the case when allocating just
one element, but it is simpler to also use getABITypeSize in this
case. So allocas and mallocs should use getABITypeSize. Finally,
since gcc's only notion of size is that given by getABITypeSize, if
you want to output assembler etc the same as gcc then getABITypeSize
is the size you want.
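In numbers, continuing the i36 example and assuming its ABI size is 8 bytes:
elements are spaced getABITypeSize apart, so a four-element array takes
4 * 8 = 32 bytes, a GEP to element 2 lands at byte offset 2 * 8 = 16, and an
alloca of four elements must likewise reserve the full 32 bytes.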
Since a store will overwrite no more than getTypeStoreSize bytes,
and a read will read no more than that many bytes, this is the
notion of size appropriate for alias analysis calculations.
In this patch I have corrected all type size uses except some of
those in ScalarReplAggregates, lib/Codegen, lib/Target (the hard
cases). I will get around to auditing these too at some point,
but I could do with some help.
Finally, I made one change which I think wise but others might
consider pointless and suboptimal: in an unpacked struct the
amount of space allocated for a field is now given by the ABI
size rather than getTypeStoreSize. I did this because every
other place that reserves memory for a type (eg: alloca) now
uses getABITypeSize, and I didn't want to make an exception
for unpacked structs, i.e. I did it to make things more uniform.
This only affects structs containing long doubles and arbitrary
precision integers. If someone wants to pack these types more
tightly they can always use a packed struct.
llvm-svn: 43620
flag in the **key** of the backpatch map, as opposed to the mapped
value which contains either the final pointer, or a pointer to a chain
of pointers that need to be backpatched. The bit flag was moved to
the key because we were erroneously assuming that the backpatched
pointers would be at an alignment of >= 2 bytes, which obviously
doesn't work for character strings. Now we just steal the bit from the key.
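A hedged sketch of the alignment problem and of the fix (illustrative names,
not the actual backpatcher code):

#include <cstdint>

// Tagging the low bit of a *pointer* assumes the pointee is aligned to at
// least 2 bytes, so the bit is otherwise always zero. A pointer into a
// character string can be odd, which breaks that assumption.
bool lowPointerBitIsFree(const void *P) {
  return (reinterpret_cast<std::uintptr_t>(P) & 1) == 0;
}

// Stealing a bit from the integer key instead is always safe; one simple
// scheme shifts the ID up and keeps the flag in bit 0.
std::uint64_t makeBackpatchKey(std::uint64_t PtrID, bool Flag) {
  return (PtrID << 1) | (Flag ? 1u : 0u);
}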
llvm-svn: 43595
Added method FindAndConstruct() to DenseMap, which does the same thing as
operator[], except that it returns value_type& (a reference to the pair of
key and mapped data). This method is useful for clients that wish
to access the stored key value, as opposed to the key used to do the
actual lookup (these need not always be the same).
Redefined operator[] to use FindAndConstruct() (same logic).
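A usage sketch, assuming the DenseMap interface described above (the key and
value types here are illustrative):

#include "llvm/ADT/DenseMap.h"

void example(llvm::DenseMap<int, const char *> &M) {
  // FindAndConstruct returns a reference to the stored (key, value) pair,
  // default-constructing the value if the key was not present.
  llvm::DenseMap<int, const char *>::value_type &Entry = M.FindAndConstruct(42);
  int StoredKey = Entry.first;   // the key as actually stored in the map
  Entry.second = "hello";        // the mapped data, the same slot operator[] returns
  (void)StoredKey;
}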
llvm-svn: 43594
just like pointers, except that they cannot be backpatched. This
means that references are essentially non-owning pointers where the
referred object must be deserialized prior to the reference being
deserialized. Because of the nature of references, this ordering of
objects is always possible.
Fixed a bug in the backpatching code (returning the backpatched pointer
would accidentally include a bit flag).
llvm-svn: 43570