MSVC doesn't seem to provide an is_error_code_enum specialization for the
Windows errors.
Fortunately very few places in LLVM have to handle raw Windows errors, so
we can just construct the corresponding error_code directly.
llvm-svn: 210631
Unfortunately there's no way to do this elegantly with pre-canned
algorithms. Using a generating iterator doesn't work because you
default-construct each element and then move-construct it into the actual
slot (broken for copyable-but-non-movable types, and a little unneeded
overhead even in the move-only case), so just write it out manually.
This solution isn't exception safe (if one of the elements' ctors throws,
we don't destroy the already-constructed elements and rethrow, as
std::uninitialized_fill does), but SmallVector (and LLVM) isn't
exception safe anyway.
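For illustration, the hand-rolled version amounts to something like this
(a minimal sketch; T, Begin, NumElts, and Elt are illustrative names, not
the actual SmallVector code):

  // Construct NumElts copies of Elt directly in uninitialized memory.
  // Each element is constructed exactly once, in place; there is no
  // default-construct-then-move step as with a generating iterator.
  for (T *I = Begin, *E = Begin + NumElts; I != E; ++I)
    new (I) T(Elt);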
llvm-svn: 210495
To test cases that involve actual repetition (> 1 element), at least
one element before the insertion point, and some elements of the
original range that still fit in the previously allocated space after the
insertion.
Actually we need coverage for the inverse case too (where no elements
after the insertion point fit into the previously allocated space), but
this'll do for now, and I might end up rewriting bits of SmallVector to
avoid that special case anyway.
llvm-svn: 210436
Specifically, this broke inserting an element of a SmallVector into
itself when the insertion would cause a reallocation. We have code
to handle this in the non-reallocating case, but it was not robust
against reallocation.
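For illustration, the failing pattern looked something like this
(hypothetical values):

  #include "llvm/ADT/SmallVector.h"

  llvm::SmallVector<int, 4> V = {0, 1, 2, 3}; // inline capacity already full
  // V[3] refers into V's own storage; this insert must reallocate, which
  // would invalidate that reference before the element was copied in.
  V.insert(V.begin(), V[3]);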
llvm-svn: 210430
(& because it makes this easier to test, this also improves
correctness/performance slightly by moving, rather than copying, the last
element in an insert operation)
llvm-svn: 210429
Because we don't have a separate negate() function, 0 - NaN does double-duty as the IEEE-754 negate() operation, which (unlike most FP ops) *does* attach semantic meaning to the signbit of NaN.
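In C++ terms, where unary minus is IEEE-754 negate, the point is (a small
sketch; the function name is illustrative):

  #include <cassert>
  #include <cmath>
  #include <limits>

  void negate_flips_nan_sign() {
    double N = std::numeric_limits<double>::quiet_NaN();
    // Negation flips the sign bit, even of a NaN; most other FP
    // operations attach no meaning to a NaN's sign bit.
    assert(std::signbit(-N) != std::signbit(N));
  }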
llvm-svn: 210428
This would cause the last element in a range to be in a moved-from state
after an insert at a non-end position, losing that value entirely in the
process.
Side note: move_backward is subtle. It moves [A, B) into the range ending
at C, filling from C-1 downward. (The fact that it decrements both the
second and third iterators before the first move is the subtle part...
kind of surprising, anyway.)
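A sketch of the semantics, equivalent to the standard definition:

  #include <utility>

  template <class BidirIt1, class BidirIt2>
  BidirIt2 move_backward(BidirIt1 First, BidirIt1 Last, BidirIt2 DLast) {
    // Both the source-end and destination iterators are decremented
    // *before* each move: *(Last-1) lands at *(DLast-1), and so on
    // down to First.
    while (First != Last)
      *--DLast = std::move(*--Last);
    return DLast;
  }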
llvm-svn: 210426
Aliases with unnamed_addr were in a strange state: the attribute is stored
in GlobalValue and the language reference talks about "unnamed_addr
aliases", but the verifier was rejecting them.
It seems natural to allow unnamed_addr in aliases:
* It is a property of how it is accessed, not of the data itself.
* It is perfectly possible to write code that depends on the address
of an alias.
This patch therefore makes unnamed_addr legal for aliases. One side effect
is that the syntax changes for a corner case: in globals, unnamed_addr is
now printed before the address space.
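For example (illustrative syntax; the actual printer of this revision may
differ in detail):

  @g = unnamed_addr addrspace(1) global i32 0
  @a = unnamed_addr alias i32* @g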
llvm-svn: 210302
This patch changes GlobalAlias to point to an arbitrary ConstantExpr and it is
up to MC (or the system assembler) to decide if that expression is valid or not.
This reduces our ability to diagnose invalid uses and how early we can spot
them, but it also lets us do things like
@test5 = alias inttoptr(i32 sub (i32 ptrtoint (i32* @test2 to i32),
i32 ptrtoint (i32* @bar to i32)) to i32*)
An important implication of this patch is that the notion of an aliased
global doesn't exist any more. The alias has to encode the information
needed to access it in its metadata (linkage, visibility, type, etc.).
Another consequence to notice is that getSection has to return a
"const char *". It could return a NullTerminatedStringRef if there were
such a thing, but when that was proposed the decision was to just use
"const char *" for this.
llvm-svn: 210062
This patch changes the design of GlobalAlias so that it doesn't take a
ConstantExpr anymore. It now points directly to a GlobalObject, but its type is
independent of the aliasee type.
To avoid changing all alias-related tests in this patch, I kept the common
syntax
@foo = alias i32* @bar
to mean the same as now. The cases that used to use a cast now use the more
general syntax
@foo = alias i16, i32* @bar.
Note that GlobalAlias now behaves a bit more like GlobalVariable. We
know that its type is always a pointer, so we omit the '*'.
For the bitcode, a nice surprise is that we were already writing both
(identical) types, so the format change is minimal. Auto upgrade is handled
by looking through the casts and no new fields are needed for now. New
bitcode will simply have different types for Alias and Aliasee.
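So, for example, an old-style alias through a cast along the lines of

  @foo = alias bitcast (i32* @bar to i16*)

would be upgraded by looking through the cast to

  @foo = alias i16, i32* @bar

(illustrative only; the exact old-style spelling may differ).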
One last interesting point in the patch is that replaceAllUsesWith becomes
smart enough to avoid putting a ConstantExpr in the aliasee. This seems better
than checking and updating every caller.
A followup patch will delete getAliasedGlobal now that it is redundant. Another
patch will add support for an explicit offset.
llvm-svn: 209007
Sometimes an LLVM compilation may take more time than a client would like
to wait for. The problem is that it is not possible to safely suspend the
LLVM thread from the outside. If the timing is bad, the LLVM thread might
hold a global mutex, which would block any progress in any other thread.
This commit adds a new yield callback function that can be registered with a
context. LLVM will try to yield by calling this callback function, but there is
no guaranteed frequency. LLVM will only do so if it can guarantee that
suspending the thread won't block any forward progress in other LLVM contexts
in the same process.
Once the client receives the callback it can suspend the thread safely and
resume it at another time.
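A minimal sketch of the client side, using the new
LLVMContext::setYieldCallback hook (the handler body is up to the client):

  #include "llvm/IR/LLVMContext.h"

  // Called by LLVM at points where suspending is known to be safe.
  // A client could, e.g., block here until it wants compilation to resume.
  static void clientYield(llvm::LLVMContext *Ctx, void *OpaqueHandle) {
    // ... wait on a mutex or condition variable owned by the client ...
  }

  void installYield(llvm::LLVMContext &Ctx, void *ClientData) {
    Ctx.setYieldCallback(clientYield, ClientData);
  }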
Related to <rdar://problem/16728690>
llvm-svn: 208945
We already had an assert for foo->RAUW(foo), but not for something like
foo->RAUW(GEP(foo)), and we would go into an infinite loop trying to apply
the replacement.
llvm-svn: 208663
operations on the call graph. This one forms a cycle, and while not as
complex as removing an internal edge from an SCC, it involves
a reasonable amount of work to find all of the nodes newly connected in
a cycle.
Also somewhat alarming is the worst case complexity here: it might have
to walk roughly the entire SCC inverse DAG to insert a single edge. This
is carefully documented in the API (I hope).
llvm-svn: 207935
This fix simply ensures that both metadata nodes are path-aware before
performing path-aware alias analysis.
This issue isn't normally triggered in LLVM, because we perform an
autoupgrade of the TBAA metadata to the new format when reading LL or BC
files. It only appears when a client creates the IR manually and mixes the
old and new TBAA metadata formats.
This fixes <rdar://problem/16760860>.
llvm-svn: 207923
just connects an SCC to one of its descendants directly. Not much of an
impact. The last one is the hard one -- connecting an SCC to one of its
ancestors, and thereby forming a cycle such that we have to merge all
the SCCs participating in the cycle.
llvm-svn: 207751
of SCCs in the SCC DAG. Exercise them in the big graph test case. These
will be especially useful for establishing invariants in insertion
logic.
llvm-svn: 207749
We already do this for shstrtab, so might as well do it for strtab. This
extracts the string table building code into a separate class. The idea
is to use it for other object formats too.
I mostly wanted to do this for the general principle, but it does save a
little bit on object file size. I tried this on a clang bootstrap and
saved 0.54% on the sum of object file sizes (1.14 MB out of 212 MB for
a release build).
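The core idea, as a rough sketch (hypothetical interface, deduplication
only; a real builder can also share the tail of one string with another):

  #include <map>
  #include <string>

  class StringTableBuilder {
    std::string Data{'\0'};                // offset 0 is the empty string
    std::map<std::string, size_t> Offsets; // string -> offset of first copy
  public:
    // Add S (or reuse an identical earlier string) and return its offset.
    size_t add(const std::string &S) {
      auto R = Offsets.insert({S, Data.size()});
      if (R.second) { // first time we see S: append it, NUL-terminated
        Data += S;
        Data += '\0';
      }
      return R.first->second;
    }
    const std::string &data() const { return Data; }
  };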
Differential Revision: http://reviews.llvm.org/D3533
llvm-svn: 207670
When we were moving from a larger vector to a smaller one but didn't
need to re-allocate, we would move-assign over uninitialized memory in
the target, then move-construct that same data again.
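The corrected pattern looks roughly like this (illustrative names, inside
the move-assignment; the target currently holds CurSize live elements and
the source RHS holds more):

  // Move-assign over the slots that already contain live objects...
  std::move(RHS.begin(), RHS.begin() + CurSize, begin());
  // ...and move-construct into the uninitialized tail beyond them.
  std::uninitialized_copy(std::make_move_iterator(RHS.begin() + CurSize),
                          std::make_move_iterator(RHS.end()),
                          begin() + CurSize);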
llvm-svn: 207663
edge entirely within an existing SCC. Shockingly, making the connected
component more connected is ... a total snooze fest. =]
Anyways, it's wired up, and I even added a test case to make sure it
pretty much sorta works. =D
llvm-svn: 207631
bits), and discover that it's totally broken. Yay tests. Boo bug. Fix
the basic edge removal so that it works by nulling out the removed edges
rather than actually removing them. This leaves the indices valid in the
map from callee to index, and preserves some of the locality for
iterating over edges. The iterator is made bidirectional to reflect that
it now has to skip over null entries, and the skipping logic is layered
onto it.
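The skipping logic itself is simple; roughly (hypothetical member names):

  // Advance, then step over entries nulled out by edge removal.
  iterator &operator++() {
    ++I;
    while (I != E && !*I)
      ++I;
    return *this;
  }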
As future work, I would like to track essentially the "load factor" of
the edge list, and when it falls below a threshold do a compaction.
An alternative I considered (and continue to consider) is storing the
callees in a doubly linked list where each element of the list is in
a set (essentially the classical linked-hash-table data structure). The
problem with that approach is that either you need to heap-allocate the
linked list nodes and use pointers to them, or use a bucket hash table
(with even *more* linked list pointer overhead!), etc. It's pretty easy to
get 5x overhead for values that are just pointers. So far, I think punching
holes in the vector and compacting periodically is likely to be much more
efficient overall in the space/time tradeoff.
llvm-svn: 207619
Change `BlockFrequency` to defer to `BranchProbability::scale()` and
`BranchProbability::scaleByInverse()`.
This removes `BlockFrequency::scale()` from its API (and drops the
ability to see the remainder), but the only user was the unit tests. If
some code in the future needs an API that exposes the remainder, we can
add something to `BranchProbability`, but I find that unlikely.
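A hypothetical use of the surviving API (assuming the BlockFrequency and
BranchProbability interfaces of this era):

  #include "llvm/Support/BlockFrequency.h"
  #include "llvm/Support/BranchProbability.h"

  void example() {
    llvm::BlockFrequency Freq(600);
    llvm::BranchProbability Prob(1, 3); // one third
    // operator*= defers to BranchProbability::scale(); the division
    // remainder is no longer observable through BlockFrequency.
    Freq *= Prob; // Freq.getFrequency() is now ~200
  }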
llvm-svn: 207550