should only affect x86 when using long double. Now
12/16 bytes are output for long double globals (the
exact amount depends on the alignment). This brings
globals in line with the rest of LLVM: the space
reserved for an object is now always the ABI size.
One tricky point is that only 10 bytes should be
output for long double if it is a field in a packed
struct, which is the reason for the additional
argument to EmitGlobalConstant.
llvm-svn: 43688
or getTypeSizeInBits as appropriate in ScalarReplAggregates.
The right change to make was not always obvious, so it would
be good to have an sroa guru review this. While there I noticed
some bugs, and fixed them: (1) arrays of x86 long double have
holes due to alignment padding, but this wasn't being spotted
by HasStructPadding (renamed to HasPadding). The same goes
for arrays of oddly sized ints. Vectors also suffer from this;
in fact the problem for vectors is much worse, because basic
vector assumptions seem to be broken by vectors whose element
type has alignment padding. I didn't try to fix any of these vector
problems. (2) The code for extracting smaller integers from
larger ones (in the "int union" case) was wrong on big-endian
machines for integers whose size is not a multiple of 8 bits, like i1.
Probably this is impossible to hit via llvm-gcc, but I fixed
it anyway while there and added a testcase. I also got rid of
some trailing whitespace and changed a function name which
had an obvious typo in it.
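For illustration, a minimal sketch of the array check involved (not the
actual HasPadding code; TargetData and its size queries are the ones
introduced later in this series, and the header paths are assumed):

  #include "llvm/DerivedTypes.h"
  #include "llvm/Target/TargetData.h"

  // An array's elements are followed by padding holes whenever the
  // element's ABI size exceeds the bytes a store actually writes,
  // e.g. x86 long double: store size 10 bytes, ABI size 12 (or 16).
  bool arrayElementsHavePadding(const llvm::TargetData &TD,
                                const llvm::ArrayType *ATy) {
    const llvm::Type *EltTy = ATy->getElementType();
    return TD.getABITypeSize(EltTy) != TD.getTypeStoreSize(EltTy);
  }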
llvm-svn: 43672
can be eliminated by the allocator if the destination and source target the
same register. The most common case is when the source and destination registers
are in different classes. For example, on x86, mov32to32_ targets GR32_, which
contains a subset of the registers in GR32.
The allocator can do 2 things:
1. Set the preferred allocation for the destination of a copy to that of its source.
2. After allocation is done, change the allocation of a copy destination (if
legal) so the copy can be eliminated.
This eliminates 443 extra moves from 403.gcc.
llvm-svn: 43662
the target pointer to be passed by reference. This can result in less
typing, as the type of the object to be deserialized can be inferred
from the argument.
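For example (a hypothetical usage sketch; MyObject is a stand-in type and
Deserializer is the class this change touches):

  struct MyObject { int X; };  // stand-in payload type

  void read(Deserializer &D) {
    MyObject *Obj;
    D.ReadPtr(Obj);  // element type deduced from the by-reference
                     // argument, instead of spelling ReadPtr<MyObject>
  }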
llvm-svn: 43647
memory rather than in a copy of the APFloat. This avoids problems
when the destination is wider than our significand, and is cleaner.
Also provide deterministic values in all cases where conversion
fails, namely zero for NaNs and the minimal or maximal value
respectively for underflow or overflow.
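Illustrative only (not the APFloat API itself): the deterministic fallback
values above, shown for conversion to a 32-bit signed integer.

  #include <limits>

  enum ConvertStatus { Exact, NaNInput, Underflow, Overflow };

  int convertedValue(ConvertStatus S, int ExactValue) {
    switch (S) {
    case NaNInput:  return 0;                                // NaN -> zero
    case Underflow: return std::numeric_limits<int>::min();  // minimal value
    case Overflow:  return std::numeric_limits<int>::max();  // maximal value
    default:        return ExactValue;
    }
  }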
llvm-svn: 43626
Deserializer.
There were issues with Visual C++ barfing when instantiating
SerializeTrait<T> when "T" was an abstract class AND
SerializeTrait<T>::ReadVal was *never* called:
  template <typename T>
  struct SerializeTrait {
    <SNIP>
    static inline T ReadVal(Deserializer& D) { return T::ReadVal(D); }
    <SNIP>
  };
Visual C++ would complain about "T" being an abstract class, even
though ReadVal was never instantiated (although one of the other
member functions was).
Removing this from the trait is not a big deal. It was used hardly
ever, and users who want "read-by-value" deserialization can simply
call the appropriate methods directly instead of relying on
trait-based dispatch. The trait dispatch for
serialization/deserialization is simply sugar in many cases (like this
one).
llvm-svn: 43624
The meaning of getTypeSize was not clear - clarifying it is important
now that we have x86 long double and arbitrary precision integers.
The issue with long double is that it requires 80 bits, and this is
not a multiple of its alignment. This gives a primitive type for
which getTypeSize differed from getABITypeSize. For arbitrary precision
integers it is even worse: there is the minimum number of bits needed to
hold the type (e.g. 36 for an i36), the maximum number of bits that will
be overwritten when storing the type (40 bits for an i36), and the ABI size
(i.e. the storage size rounded up to a multiple of the alignment; 64 bits
for i36).
This patch removes getTypeSize (not really - it is still there but
deprecated to allow for a gradual transition). Instead there is:
(1) getTypeSizeInBits - a number of bits that suffices to hold all
values of the type. For a primitive type, this is the minimum number
of bits. For an i36 this is 36 bits. For x86 long double it is 80.
This corresponds to gcc's TYPE_PRECISION.
(2) getTypeStoreSizeInBits - the maximum number of bits that is
written when storing the type (or read when reading it). For an
i36 this is 40 bits, for an x86 long double it is 80 bits. This
is the size alias analysis is interested in (getTypeStoreSize
returns the number of bytes). There doesn't seem to be anything
corresponding to this in gcc.
(3) getABITypeSizeInBits - this is getTypeStoreSizeInBits rounded
up to a multiple of the alignment. For an i36 this is 64, for an
x86 long double this is 96 or 128 depending on the OS. This is the
spacing between consecutive elements when you form an array out of
this type (getABITypeSize returns the number of bytes). This is
TYPE_SIZE in gcc.
Since successive elements in a SequentialType (arrays, pointers
and vectors) need to be aligned, the spacing between them will be
given by getABITypeSize. This means that the size of an array
is the length times getABITypeSize. It also means that GEP
computations need to use getABITypeSize when computing offsets.
Furthermore, if an alloca allocates several elements at once then
these too need to be aligned, so the size of the alloca has to be
the number of elements multiplied by getABITypeSize. Logically
speaking this doesn't have to be the case when allocating just
one element, but it is simpler to also use getABITypeSize in this
case. So allocas and mallocs should use getABITypeSize. Finally,
since gcc's only notion of size is that given by getABITypeSize, if
you want to output assembler etc the same as gcc then getABITypeSize
is the size you want.
Since a store will overwrite no more than getTypeStoreSize bytes,
and a read will read no more than that many bytes, this is the
notion of size appropriate for alias analysis calculations.
In this patch I have corrected all type size uses except some of
those in ScalarReplAggregates, lib/Codegen, lib/Target (the hard
cases). I will get around to auditing these too at some point,
but I could do with some help.
Finally, I made one change which I think wise but others might
consider pointless and suboptimal: in an unpacked struct the
amount of space allocated for a field is now given by the ABI
size rather than getTypeStoreSize. I did this because every
other place that reserves memory for a type (e.g. alloca) now
uses getABITypeSize, and I didn't want to make an exception
for unpacked structs, i.e. I did it to make things more uniform.
This only affects structs containing long doubles and arbitrary
precision integers. If someone wants to pack these types more
tightly they can always use a packed struct.
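To make the three notions concrete, here is a small sketch (hypothetical
helper names, not the TargetData API) for an integer type of N bits with a
byte alignment of AlignBytes:

  unsigned typeSizeInBits(unsigned N) { return N; }            // i36 -> 36
  unsigned storeSizeInBits(unsigned N) {
    return 8 * ((N + 7) / 8);                                  // i36 -> 40
  }
  unsigned abiSizeInBits(unsigned N, unsigned AlignBytes) {
    unsigned AlignBits = 8 * AlignBytes;
    unsigned Store = storeSizeInBits(N);
    return AlignBits * ((Store + AlignBits - 1) / AlignBits);  // i36, align 8 -> 64
  }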
llvm-svn: 43620
flag in the **key** of the backpatch map, as opposed to the mapped
value which contains either the final pointer, or a pointer to a chain
of pointers that need to be backpatched. The bit flag was moved to
the key because we were erroneously assuming that the backpatched
pointers would be at an alignment of >= 2 bytes, which obviously
doesn't work for character strings. Now we just steal the bit from the key.
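The trick is ordinary integer tagging, applied to the key instead of the
mapped value; a hedged sketch with hypothetical names:

  #include <stdint.h>

  // The flag lives in the low bit of the key, so the mapped pointer
  // needs no alignment guarantee (it may point at a character string).
  typedef uintptr_t SerializedPtrID;

  SerializedPtrID makeKey(SerializedPtrID ID, bool HasFinalPtr) {
    return (ID << 1) | (HasFinalPtr ? 1 : 0);
  }
  bool hasFinalPtr(SerializedPtrID Key) { return Key & 1; }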
llvm-svn: 43595
Added method FindAndConstruct() to DenseMap, which does the same thing as
operator[], except that it returns value_type& (a reference to the pair of
the key and the mapped data). This method is useful for clients that wish
to access the stored key value, as opposed to the key used to do the
actual lookup (these need not always be the same).
Redefined operator[] to use FindAndConstruct() (same logic).
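A hedged usage sketch:

  #include "llvm/ADT/DenseMap.h"

  void example(llvm::DenseMap<const char*, unsigned> &M,
               const char *Lookup) {
    llvm::DenseMap<const char*, unsigned>::value_type &Entry =
        M.FindAndConstruct(Lookup);
    const char *StoredKey = Entry.first; // the key as stored in the map
    unsigned &Val = Entry.second;        // default-constructed if just inserted
    (void)StoredKey; (void)Val;
  }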
llvm-svn: 43594
just like pointers, except that they cannot be backpatched. This
means that references are essentially non-owning pointers where the
referred object must be deserialized prior to the reference being
deserialized. Because of the nature of references, this ordering of
objects is always possible.
Fixed a bug in backpatching code (returning the backpatched pointer
would accidentally include a bit flag).
llvm-svn: 43570
transformation. Previously, it was restricted by requiring that the load
have only one use. Now the restriction is loosened by allowing setcc uses to be
"extended" (e.g. setcc x, c, eq -> setcc sext(x), sext(c), eq).
llvm-svn: 43465
eager backpatching instead of waiting until all objects have been
deserialized. This allows us to reduce the memory footprint needed
for backpatching.
llvm-svn: 43422
of offset and the alignment of ptr if these are both powers of
2. While the ptr alignment is guaranteed to be a power of 2,
there is no reason to think that offset is. For example, if
offset is 12 (the size of a long double on x86-32 linux) and
the alignment of ptr is 8, then the alignment of ptr+offset
will in general be 4, not 8. Introduce a function MinAlign,
lifted from gcc, for computing the minimum guaranteed alignment.
I've tried to fix up everywhere under lib/CodeGen/SelectionDAG/.
I also changed some places that weren't wrong (because both values
were a power of 2), as a defensive change against people copying
and pasting the code.
Hopefully someone who cares about alignment will review the rest
of LLVM and fix up the remaining places. Since I'm on x86 I'm
not very motivated to do this myself...
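The gcc trick: the minimum alignment that may be assumed after adding two
values is the lowest set bit of their bitwise OR. A sketch along those
lines (the real helper lives in LLVM's support headers):

  // A and B are alignments or offsets; returns the minimum alignment
  // guaranteed for their sum: the lowest set bit of A|B.
  static inline unsigned MinAlign(unsigned A, unsigned B) {
    return (A | B) & -(A | B);
  }

For the example above: MinAlign(8, 12) == 4.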
llvm-svn: 43421
calling member functions of the target type to perform type-specific
serialization.
Added a version of ReadPtr that allows passing references to uintptr_t
(useful for smart pointers).
llvm-svn: 43396
Turn a store folding instruction into a load folding instruction. e.g.
  xorl %edi, %eax
  movl %eax, -32(%ebp)
  movl -36(%ebp), %eax
  orl  %eax, -32(%ebp)
=>
  xorl %edi, %eax
  orl  -36(%ebp), %eax
  movl %eax, -32(%ebp)
This enables the unfolding optimization for a subsequent instruction which will
also eliminate the newly introduced store instruction.
llvm-svn: 43192
To do this it is necessary to add an "always inline" argument to the
memcpy node. For completeness I have also added this node to memmove
and memset. I have also added getMem* functions, because the extra
argument makes it cumbersome to use getNode and because I get confused
by it :-)
llvm-svn: 43172
void*. This is a hint that we are returning uninitialized memory rather
than a constructed object.
Patched ImutAVLTree to conform to this new interface.
llvm-svn: 43106
BumpPtrAllocator that implement allocations that return a properly
typed pointer. For BumpPtrAllocator, the allocated memory is
automatically aligned to the minimum alignment of the type (as
calculated by llvm::AlignOf::Alignment).
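A hedged usage sketch (MyNode is a stand-in type), combining this with the
placement-new idiom from the previous change:

  #include <new>
  #include "llvm/Support/Allocator.h"

  struct MyNode { int X; };  // stand-in payload type

  MyNode *makeNode(llvm::BumpPtrAllocator &Alloc) {
    MyNode *N = Alloc.Allocate<MyNode>(); // aligned for MyNode, uninitialized
    return new (N) MyNode();              // construct in place
  }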
llvm-svn: 43087
types. This is needed for SIGN_EXTEND_INREG at least.
It is not clear if this is correct for other operations.
On the other hand, for the various load/store actions
it seems correct to return the type action, as is
currently done.
Also, it seems that SelectionDAG::getValueType can be
called for extended value types; introduce a map for
holding these, since we don't really want to extend
the vector to be 2^32 pointers long!
Generalize DAGTypeLegalizer::PromoteResult_TRUNCATE
and DAGTypeLegalizer::PromoteResult_INT_EXTEND to handle
the various funky possibilities that apints introduce,
for example that you can promote to a type that needs
to be expanded.
llvm-svn: 43071
top bit of a ValueType to be zero. Enforce this with an
assertion that fires if someone tries to create a ValueType
without this property. I chose this minimal approach rather
than a more official integration of the notion of reserved
bits into ValueType because I'm hoping that the verifier will
be changed to no longer require this :)
llvm-svn: 43031
codegen support. This should have no effect on codegen
for other types. Debatable bits: (1) the use (abuse?)
of a set in SDNode::getValueTypeList; (2) the length of
getTypeToTransformTo, which maybe should be refactored
with a non-inline part for extended value types.
llvm-svn: 43030
take a deleted nodes vector, instead of requiring it.
One more significant change: Implement the start of a legalizer that
just works on types. This legalizer is designed to run before the
operation legalizer and ensure just that the input dag is transformed
into an output dag whose operand and result types are all legal, even
if the operations on those types are not.
This design/impl has the following advantages:
1. When finished, this will *significantly* reduce the amount of code in
LegalizeDAG.cpp. It will remove all the code related to promotion and
expansion as well as splitting and scalarizing vectors.
2. The new code is very simple, idiomatic, and modular: unlike
LegalizeDAG.cpp, it has no 3000 line long functions. :)
3. The implementation is completely iterative instead of recursive, good
for hacking on large dags without blowing out your stack (a minimal
sketch of this worklist structure follows this list).
4. The implementation updates nodes in place when possible instead of
deallocating and reallocating the entire graph that points to some
mutated node.
5. The code nicely separates out handling of operations with invalid
results from operations with invalid operands, making some cases
simpler and easier to understand.
6. The new -debug-only=legalize-types option is very very handy :),
allowing you to easily understand what legalize types is doing.
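The worklist sketch promised above, as a self-contained toy (the real
DAGTypeLegalizer is of course far richer; Node and the flags here are
purely illustrative):

  #include <vector>

  struct Node { bool ResultLegal, OperandsLegal; };

  void legalizeTypes(std::vector<Node*> Worklist) {
    while (!Worklist.empty()) {    // iterative: no recursion, so no
      Node *N = Worklist.back();   // stack blow-ups on large dags
      Worklist.pop_back();
      if (!N->ResultLegal) {
        N->ResultLegal = true;     // promote/expand the result type here
        Worklist.push_back(N);     // revisit: operands may still be illegal
      } else if (!N->OperandsLegal) {
        N->OperandsLegal = true;   // fix up illegal operands separately
      }
      // mutated or newly created nodes would be pushed here
    }
  }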
This is not yet done. Until the ifdef added to SelectionDAGISel.cpp is
enabled, this does nothing. However, this code is sufficient to legalize
all of the code in 186.crafty, olden and freebench on an x86 machine. The
biggest issues are:
1. Vectors aren't implemented at all yet
2. SoftFP is a mess, I need to talk to Evan about it.
3. No lowering to libcalls is implemented yet.
4. Various operations are missing etc.
5. There are FIXME's for stuff I hax0r'd out, like softfp.
Hey, at least it is a step in the right direction :). If you'd like to help,
just enable the #ifdef in SelectionDAGISel.cpp and compile code with it. If
this explodes it will tell you what needs to be implemented. Help is
certainly appreciated.
Once this goes in, we can do three things:
1. Add a new pass of dag combine between the "type legalizer" and "operation
legalizer" passes. This will let us catch some long-standing isel issues
that we miss because operation legalization often obfuscates the dag with
target-specific nodes.
2. We can rip out all of the type legalization code from LegalizeDAG.cpp,
making it much smaller and simpler. When that happens we can then
reimplement the core functionality left in it in a much more efficient and
non-recursive way.
3. Once the whole legalizer is non-recursive, we can implement whole-function
selectiondags maybe...
llvm-svn: 42981
the source register will be coalesced to the super register of the LHS. Properly
merge the live ranges that came from the original source interval into the
live interval of the super-register.
llvm-svn: 42961
from user input strings.
Such conversions are more intricate and subtle than they may appear;
it is unlikely I have got it completely right the first time. I would
appreciate being informed of any bugs and incorrect roundings you
might discover.
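A hedged usage sketch (signatures assumed from this era of the API; check
APFloat.h):

  #include "llvm/ADT/APFloat.h"

  llvm::APFloat parseOnePointFive() {
    llvm::APFloat F(0.0);  // double-based value to overwrite
    F.convertFromString("1.5", llvm::APFloat::rmNearestTiesToEven);
    return F;
  }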
llvm-svn: 42912
(almost) a register copy. However, it always coalesced to the register of the
RHS (the super-register). All uses of the result of an EXTRACT_SUBREG are
sub-register uses, which adds subtle complications to load folding, spiller
rewrite, etc.
llvm-svn: 42899
enabled by passing -tailcallopt to llc. The optimization is
performed if the following conditions are satisfied:
* caller/callee are fastcc
* elf/pic is disabled, OR
  elf/pic is enabled, the callee is in the module, and the callee has
  protected or hidden visibility
llvm-svn: 42870
No compile-time support for constant operations yet,
just format transformations. Make readers and
writers work. Split constants into 2 doubles in
Legalize.
llvm-svn: 42865
implemented on top of a functional AVL tree. The AVL balancing code
is inspired by the OCaml implementation of Map, which also uses a functional
AVL tree.
Documentation is currently limited and cleanups are planned, but this code
compiles and has been tested.
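A hedged usage sketch, using the ImmutableMap interface that grew out of
this code (class, header, and method names assumed):

  #include "llvm/ADT/ImmutableMap.h"

  void example() {
    llvm::ImmutableMap<int, int>::Factory F;
    llvm::ImmutableMap<int, int> M1 = F.getEmptyMap();
    llvm::ImmutableMap<int, int> M2 = F.add(M1, 1, 10); // M1 is unchanged
    const int *V = M2.lookup(1);  // pointer to 10, or null if absent
    (void)V;
  }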
llvm-svn: 42813
arbitrary range of bits embedded in the middle of another bignum.
This kind of operation is desirable in many cases of software
floating point, e.g. converting bignum integers to floating point
numbers of fixed precision (you want to extract the most significant
bits, up to the target precision).
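A self-contained sketch of the core bit manipulation (not tcExtract
itself), for fields of 1 to 64 bits taken from a little-endian array of
64-bit parts:

  #include <stdint.h>

  uint64_t extractBits(const uint64_t *Parts, unsigned LSB, unsigned N) {
    unsigned Word = LSB / 64, Shift = LSB % 64;
    uint64_t Lo = Parts[Word] >> Shift;
    if (Shift && Shift + N > 64)            // field straddles a word boundary
      Lo |= Parts[Word + 1] << (64 - Shift);
    return N == 64 ? Lo : Lo & ((uint64_t(1) << N) - 1);
  }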
Elsewhere, add an assertion, and exit the shift functions early if
the shift count is zero.
llvm-svn: 42745
It used to modify its argument in-place.
This interface is saner and the implementation more efficient. It will
be needed for decimal->binary conversion.
llvm-svn: 42733
input. APInt unfortunately zero-extends signed integers, so Dale
modified the function to expect zero-extended input. Make this
assumption explicit in the function name.
llvm-svn: 42732
part widths. Also, return the number of parts actually required to
hold the result's value.
Remove an over-cautious condition from rounding of float->hex conversion.
llvm-svn: 42669
basic arithmetic works.
Rename RTLIB long double functions to distinguish
different flavors of long double; the lib functions
have different names, alas.
llvm-svn: 42644
scheduler will try a number of tricks in order to avoid generating the
copies. This may not be possible when the node produces a chain value
that prevents movement. Try unfolding the load from the node beforehand to
allow it to be moved / cloned.
llvm-svn: 42625
address (not just from / to frameindexes).
- Added target hooks to unfold load / store instructions / SDNodes into separate
load, data processing, store instructions / SDNodes.
llvm-svn: 42621
This version enhances the previous patch to add root initialization
as discussed here:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20070910/053455.html
Collector gives its subclasses control over generic algorithms:
    unsigned NeededSafePoints; //< Bitmask of required safe points.
    bool CustomReadBarriers;   //< Default is to insert loads.
    bool CustomWriteBarriers;  //< Default is to insert stores.
    bool CustomRoots;          //< Default is to pass through to backend.
    bool InitRoots;            //< If set, roots are nulled during lowering.
It also has callbacks which collectors can hook:
    /// If any of the actions are set to Custom, this is expected to
    /// be overridden to create a transform to lower those actions to
    /// LLVM IR.
    virtual Pass *createCustomLoweringPass() const;

    /// beginAssembly/finishAssembly - Emit module metadata as
    /// assembly code.
    virtual void beginAssembly(Module &M, std::ostream &OS,
                               AsmPrinter &AP,
                               const TargetAsmInfo &TAI) const;
    virtual void finishAssembly(Module &M,
                                CollectorModuleMetadata &CMM,
                                std::ostream &OS, AsmPrinter &AP,
                                const TargetAsmInfo &TAI) const;
Various other independent algorithms could be implemented, but were
not necessary for the initial two collectors. Some examples are
listed here:
http://llvm.org/docs/GarbageCollection.html#collector-algos
llvm-svn: 42466
other than PPC64. Instead of fixing it, just remove it and fix all the
places that use it to use TargetData::getPointerSize() instead, as there
aren't very many. Most of the references were in DwarfWriter.cpp.
llvm-svn: 42419
It includes:
- location of each safe point in machine code (identified by a
label)
- location of each root within the stack frame (identified by an
offset), including the metadata tag provided to llvm.gcroot in
the user program
- size of the stack frame (for collectors which want to cheat on
stack crawling :)
- and eventually will include liveness
It is to be populated by back-ends during code-generation.
CollectorModuleMetadata aggregates this information across the
entire module.
llvm-svn: 42418
instruction creation. No support yet for instruction introspection.
Also eliminated allocas from the OCaml bindings for portability,
and avoided unnecessary casts.
llvm-svn: 42367
and time usage.
Fix up operator== to make this work, and add a resize method to DenseMap
so we can resize our hashtable once we know how big it should be.
llvm-svn: 42269
change is not useful in and of itself, but it lays the groundwork for combining
the dominator and postdominator implementations.
Also, factor a few methods that are common to DominatorTree and PostDominatorTree
into DominatorTreeBase. Again, this will make merging the two calculation methods
simpler in the future.
llvm-svn: 42248
keep f32 in SSE registers and f64 in x87. This
is effectively a new codegen mode.
Change addLegalFPImmediate to permit float and
double variants to do different things.
Adjust callers.
llvm-svn: 42246
bit width instead of number of words allocated, which
makes it actually work for int->APF conversions.
Adjust callers. Add const to one of the APInt constructors
to prevent a surprising match when called with a const
argument.
llvm-svn: 42210
returned a reference type. This patch allows operator*() to return a
non-reference type while still maintaining the old behavior when it
does return a reference type.
This patch was motivated when I tried to use "df_iterator" (see
llvm/ADT/DepthFirstIterator.h) as a "node_iterator", as df_iterator
does not return a reference type and thus we would get a compilation
error when trying to take the address of a temporary.
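The failure mode in miniature (hypothetical types, not df_iterator itself):

  struct ByValueIter {
    int Cur;
    int operator*() const { return Cur; }  // returns a temporary
  };

  void example() {
    ByValueIter It = {42};
    // const int *P = &*It;  // ill-formed: address of a temporary
    int V = *It;             // copying the value is fine
    (void)V;
  }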
llvm-svn: 42151
function. The information isn't used heavily -- it's only used at the end
of exception handling emission -- so there's no need to cache it.
llvm-svn: 42078
- The naming prefix is LLVM.
- All types are represented using opaque references.
- Functions are not named LLVM{Type}{Method}; the names became
unreadable goop. Instead, they are named LLVM{ImperativeSentence}.
- Where an attribute only appears once in the class hierarchy (e.g.,
linkage only applies to values; parameter types only apply to
function types), the class is omitted from identifiers for
brevity. Tastes like methods.
- Strings are C strings or string/length tuples on a case-by-case
basis.
- APIs which give the caller ownership of an object are not mapped
(removeFromParent, certain constructor overloads). This keeps
memory management as simple as possible.
For each library with bindings:
  llvm-c/<LIB>.h      - Declares the bindings.
  lib/<LIB>/<LIB>.cpp - Implements the bindings.
So just link with the library of your choice and use the C header
instead of the C++ one.
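A hedged usage sketch (these two entry points are from the Core bindings;
exact availability may differ in this revision):

  #include "llvm-c/Core.h"

  int main() {
    LLVMModuleRef M = LLVMModuleCreateWithName("mymodule");
    // ... build IR through the opaque LLVM*Ref handles ...
    LLVMDisposeModule(M);
    return 0;
  }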
llvm-svn: 42077