Summary:
Funclet EH tables require that a given funclet have only one unwind
destination for exceptional exits. The verifier will therefore reject
e.g. two cleanuprets with different unwind dests for the same cleanup, or
two invokes exiting the same funclet but to different unwind dests.
Because catchswitch has no 'nounwind' variant, and because IR producers
are not *required* to annotate calls which will not unwind as 'nounwind',
it is legal to nest a call or an "unwind to caller" catchswitch within a
funclet pad that has an unwind destination other than caller; it is
undefined behavior for such a call or catchswitch to unwind.
Normally when inlining an invoke, calls in the inlined sequence are
rewritten to invokes that unwind to the callsite invoke's unwind
destination, and "unwind to caller" catchswitches in the inlined sequence
are rewritten to unwind there as well.
However, if such a call or "unwind to caller" catchswitch is located in a
callee funclet that has another exceptional exit with an unwind
destination within the callee, applying the normal transformation would
give that callee funclet multiple unwind destinations for its exceptional
exits. There would be no way for EH table generation to determine which
is the "true" exit, and the verifier would reject the function
accordingly.
Add logic to the inliner to detect these cases and leave such calls and
"unwind to caller" catchswitches as-is in the inlined sequence, rather
than rewriting them to target the callsite invoke's unwind destination.
This fixes PR26147.
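As an illustration, here is a minimal IR sketch (written in current IR
syntax; the function names are made up) of a callee whose cleanup funclet
already has an exceptional exit unwinding to a destination inside the
callee, so the inliner must not rewrite the contained call into an invoke
of the callsite's unwind destination:

define void @callee() personality ptr @__CxxFrameHandler3 {
entry:
  invoke void @may_throw()
      to label %exit unwind label %cleanup

cleanup:
  %cp = cleanuppad within none []
  ; Not marked 'nounwind', but the funclet's exceptional exit is the
  ; cleanupret below; it would be UB for this call to unwind. Turning it
  ; into an invoke of the caller's unwind destination would give the
  ; funclet a second unwind destination, which the verifier rejects.
  call void @do_cleanup() [ "funclet"(token %cp) ]
  cleanupret from %cp unwind label %dtor.fail

dtor.fail:
  %cp2 = cleanuppad within none []
  call void @terminate() [ "funclet"(token %cp2) ]
  unreachable

exit:
  ret void
}

declare void @may_throw()
declare void @do_cleanup()
declare void @terminate()
declare i32 @__CxxFrameHandler3(...)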
Reviewers: rnk, andrew.w.kaylor, majnemer
Subscribers: alexcrichton, llvm-commits
Differential Revision: http://reviews.llvm.org/D16319
llvm-svn: 258273
Note that this is disabled by default and still requires a patch to
handleMove() which has not been upstreamed yet.
If the TrackLaneMasks policy/strategy is enabled the MachineScheduler
will build a schedule graph where definitions of independent
subregisters are no longer serialised.
Implementation comments:
- Without lane mask tracking, a subregister def also counts as a use of
  the vreg (except for the first def, which carries the read-undef flag);
  with lane mask tracking enabled this is no longer the case.
- Pressure diffs were previously maintained per definition of a vreg with
  the help of the SSA information contained in the LiveIntervals. With
  lane mask tracking enabled we cannot do this anymore and instead change
  the pressure diffs for all uses of the vreg as it becomes live/dead. For
  this changed style to work correctly we ignore uses in instructions that
  also define the same register: they won't affect register pressure.
- With lane mask tracking we remove all read-undef flags from subregister
  defs when building the graph and re-add them later once all lanes of the
  vreg have become dead.
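A small schematic of the first point (informal notation, not actual MIR):

  vreg0:sub0 = ...     ; first def, carries the read-undef flag
  vreg0:sub1 = ...     ; without lane mask tracking this def also counts as
                       ; a use of vreg0 and is serialised after the sub0 def;
                       ; with TrackLaneMasks the two defs are independent
  xxx = op vreg0:sub0
  yyy = op vreg0:sub1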
Differential Revision: http://reviews.llvm.org/D14969
llvm-svn: 258259
This renaming is necessary to avoid a subregister-aware scheduler
accidentally creating liveness "holes", which are rejected by the
MachineVerifier.
Explanation as found in this patch:
Helper class that can divide MachineOperands of a virtual register into
equivalence classes of connected components.
MachineOperands belong to the same equivalence class when they are part of
the same SubRange segment or of adjacent segments (adjacent in control
flow); different subranges affected by the same MachineOperand belong to
the same equivalence class.
Example:
vreg0:sub0 = ...
vreg0:sub1 = ...
vreg0:sub2 = ...
...
xxx = op vreg0:sub1
vreg0:sub1 = ...
store vreg0:sub0_sub1
The example contains 3 different equivalence classes:
- One for the (dead) vreg0:sub2 definition
- One containing the first vreg0:sub1 definition and its use,
but not the second definition!
- The remaining class contains all other operands involving vreg0.
We provide a utility function here to rename disjoint classes to different
virtual registers.
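Applied to the example above, renaming keeps the largest class in vreg0 and
moves the two other classes into fresh virtual registers (the new numbers
vreg1/vreg2 are made up for illustration):

  vreg0:sub0 = ...        ; remaining class, stays in vreg0
  vreg1:sub1 = ...        ; class of the first sub1 def and its use
  vreg2:sub2 = ...        ; class of the (dead) sub2 def
  ...
  xxx = op vreg1:sub1
  vreg0:sub1 = ...
  store vreg0:sub0_sub1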
Differential Revision: http://reviews.llvm.org/D16126
llvm-svn: 258257
Summary:
This teaches MachineSink to not sink instructions that might break the
implicit null check optimization that runs later. This should not
affect frontends that do not use implicit null checks.
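For context, this is roughly the shape of code (an IR-level sketch with
made-up names) that the ImplicitNullChecks pass later rewrites into a
faulting access; the !make.implicit metadata marks the branch as a
candidate:

define i32 @get(ptr %p) {
entry:
  %is.null = icmp eq ptr %p, null
  br i1 %is.null, label %is_null, label %not_null, !make.implicit !0

not_null:
  %v = load i32, ptr %p
  ret i32 %v

is_null:
  call void @llvm.trap()
  unreachable
}

declare void @llvm.trap()

!0 = !{}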
Reviewers: aadg, reames, hfinkel, atrick
Subscribers: majnemer, llvm-commits
Differential Revision: http://reviews.llvm.org/D14632
llvm-svn: 258254
calling convention.
The implementation of the related callbacks in the x86 backend for such
functions is not ready to deal with a prologue block that is not the entry
block of the function.
This fixes PR26107, but the longer-term solution would be to fix those callbacks.
llvm-svn: 258221
I think I fixed all instances of this in the codebase
(r258202, 258200, 258190). Also, the suppression didn't
have an effect on bots using make anyway, and it looks
like many bots still use configure/make.
llvm-svn: 258210
As vector shuffles can only reference two inputs, many (V)INSERTPS patterns end up being split across two target shuffles.
This patch adds combines that attempt to fold (V)INSERTPS nodes together with input/output nodes that do nothing but zero out additional vector elements.
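For example (an illustrative IR pattern, not taken from the patch's tests),
an insertion from a second vector followed by a shuffle that only zeroes one
more lane can be covered by a single insertps whose zero mask clears that
lane:

define <4 x float> @insert_and_zero(<4 x float> %a, <4 x float> %b) {
  ; insert b[1] into lane 1 of a
  %ins = shufflevector <4 x float> %a, <4 x float> %b, <4 x i32> <i32 0, i32 5, i32 2, i32 3>
  ; then zero out lane 3
  %res = shufflevector <4 x float> %ins, <4 x float> zeroinitializer, <4 x i32> <i32 0, i32 1, i32 2, i32 7>
  ret <4 x float> %res
}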
Differential Revision: http://reviews.llvm.org/D16072
llvm-svn: 258205
r100895 landed an llvm-only change to add minix support to googletest.
It did that by putting "defined()" in a macro, which has undefined
behavior. Slightly reshuffle things to remove that undefined behavior.
Also mention in README.LLVM that minix support is a local change.
llvm-svn: 258190
they're needed.
Prior to this patch objects were loaded (via RuntimeDyld::loadObject) when they
were added to the ObjectLinkingLayer, but were not relocated and finalized until
a symbol address was requested. In the interim, another object could be loaded
and finalized with the same memory manager, causing relocation/finalization of
the first object to fail (as the first finalization call may have marked the
allocated memory for the first object read-only).
By deferring the loadObject call (and subsequent memory allocations) until an
object file is needed we can avoid prematurely finalizing memory.
llvm-svn: 258185
In some cases, the max backedge taken count can be more conservative
than the exact backedge taken count (for instance, because
ScalarEvolution::getRange is not control-flow sensitive whereas
computeExitLimitFromICmp can be). In these cases,
computeExitLimitFromCond (specifically the bit that deals with `and` and
`or` instructions) can create an ExitLimit instance with a
`SCEVCouldNotCompute` max backedge count expression, but a computable
exact backedge count expression. This violates an implicit SCEV
assumption: a computable exact BE count should imply a computable max BE
count.
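For reference, this is the kind of loop shape (a schematic, not the exact
case from the bug report) that exercises the `and` handling in
computeExitLimitFromCond: the latch exits on a conjunction of two
comparisons, and each operand contributes its own exit limit, which are
then combined:

define void @f(i32 %n) {
entry:
  br label %loop

loop:
  %i = phi i32 [ 0, %entry ], [ %i.next, %loop ]
  %i.next = add nsw i32 %i, 1
  %c0 = icmp slt i32 %i.next, %n
  %c1 = icmp slt i32 %i.next, 100
  %c = and i1 %c0, %c1
  br i1 %c, label %loop, label %exit

exit:
  ret void
}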
This change
- Makes the above implicit invariant explicit by adding an assert to
ExitLimit's constructor
- Changes `computeExitLimitFromCond` to be more robust around
conservative max backedge counts
llvm-svn: 258184
According to the build bots, clang is using the Registry class somewhere as well. Will reapply with appropriate clang changes at a later point.
llvm-svn: 258159
The Registry class constructs a linked list of nodes whose storage is inside static variables, with nodes added via static initializers. The trick is that those static initializers live both in the LLVM code base and in any plugin that might get loaded in at runtime. The existing code tries to use C++ templates and their ODR rules to get a single definition of the registry for each type, but, experimentally, this doesn't quite work as designed. (Well, the entire structure doesn't. It might not actually be an ODR problem.)
Previously, when I tried moving the GCStrategy class (along with its registry) from CodeGen to IR, I ran into a problem where asking the GCStrategyRegistry a question would return inconsistent results depending on whether you asked from CodeGen (where the static initializers still were) or Transforms. My best guess is that this is a result of either (a) an order-of-initialization error, or (b) our ending up with two copies of the registry. I remember at the time having convinced myself it was probably (b), but I don't have any of my notes around from that investigation any more.
See http://reviews.llvm.org/rL226311 for the original patch in question.
This patch tries to remove the possibility of (b) above. (a) was already fixed in change 258109.
Differential Revision: http://reviews.llvm.org/D16170
llvm-svn: 258157
This patch creates the profile data variable before lowering the profile intrinsics.
Reviewers: davidxl, silvas
Differential Revision: http://reviews.llvm.org/D16015
llvm-svn: 258156
Our loop construct is not a way to identify cycles in the CFG. This wasn't immediately obvious from the header, so clarify that fact.
The motivation for this was that I just fixed an out-of-tree bug due to a mistaken assumption (on my part) about what a Loop actually was. While it was fresh in my mind, I wanted to document the key point.
llvm-svn: 258154
This is a continuation of adding FMF to call instructions:
http://reviews.llvm.org/rL255555
As with D15937, the intent of the patch is to preserve the current behavior of the transform
except that we use the pow call's 'fast' attribute as a trigger rather than a function-level
attribute.
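A minimal sketch of the kind of call that is now eligible; the 'fast' flag
sits on the call itself rather than on the enclosing function (pow(x, 0.5)
is one of the forms the simplifier treats specially under fast-math):

define double @pow_half(double %x) {
  %r = call fast double @pow(double %x, double 5.000000e-01)
  ret double %r
}

declare double @pow(double, double)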
The TODO comment notes a potential follow-on patch that would propagate FMF to the new
instructions.
Differential Revision: http://reviews.llvm.org/D16122
llvm-svn: 258153