This is similar to D94106, but for the
isGuaranteedToTransferExecutionToSuccessor() helper. We should not
assume that readonly functions will return, as this is only true for
mustprogress functions (in which case we already infer willreturn).
As with the DCE change, for now continue assuming that readonly
intrinsics will return, as not all target intrinsics have been
annotated yet.
Differential Revision: https://reviews.llvm.org/D95288
This adds support for the memory routines vec_malloc, vec_calloc, vec_realloc, and vec_free. These routines manage 16-byte-aligned memory and are only available on AIX.
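As a rough sketch (hypothetical IR), once these routines are recognized, presumably via TargetLibraryInfo, the result of a call can be treated as 16-byte aligned:
```llvm
declare i8* @vec_malloc(i64)

define i8* @alloc_vec() {
  ; the returned pointer may be assumed to be 16-byte aligned on AIX
  %p = call i8* @vec_malloc(i64 64)
  ret i8* %p
}
```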
Differential Revision: https://reviews.llvm.org/D94710
Split impliesPoison into two recursive walks, one over V, the
other over ValAssumedPoison. This allows us to reason about poison
implications in a number of additional cases that are important
in practice. This is a generalized form of D94859, which handles
the cmp to cmp implication in particular.
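A hypothetical shape of case the two walks can cover together: the walk over ValAssumedPoison finds that %vp is poison only if %x is, and the walk over V finds that poison on %x propagates through %cmp and the 'and' into %v, so impliesPoison(%vp, %v) may return true.
```llvm
define i1 @example(i32 %x, i32 %y, i1 %c) {
  %vp = icmp ult i32 %x, 10  ; poison only if %x is poison
  %cmp = icmp sgt i32 %x, %y ; poison if %x is poison
  %v = and i1 %cmp, %c       ; poison if %cmp is poison
  ret i1 %v
}
```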
Differential Revision: https://reviews.llvm.org/D94866
When creating pi-blocks we try to avoid creating duplicate edges
between outside nodes and the pi-block when an edge is of the
same kind and direction as another one that has already been
created. We do this by keeping track of the edges in an
enumerated array called EdgeAlreadyCreated. The problem is that
this array is declared local to the loop that iterates over the
nodes in the pi-block, so the information gets lost every time a
new inside-node is iterated over. The fix is to move the
declaration to the outer loop.
Reviewed By: Meinersbur
Differential Revision: https://reviews.llvm.org/D94094
Change the way NoAlias assumptions in BasicAA are handled. Instead of
handling this inside the phi-phi code, always initially insert a
NoAlias result into the map and keep track of whether it is used.
If it is used, then we require that we also get back NoAlias from
the recursive queries. Otherwise, the entry is changed to MayAlias.
Additionally, keep track of all location pairs we inserted that may
still be based on assumptions higher up. If it turns out one of those
assumptions is incorrect, we flush them from the cache.
The compile-time impact for the new implementation is significantly
higher than the previous iteration of this patch:
https://llvm-compile-time-tracker.com/compare.php?from=c0bb9859de6991cc233e2dedb978dd118da8c382&to=c07112373279143e37568b5bcd293daf81a35973&stat=instructions
However, it should avoid the exponential runtime cases we run into
if we don't cache assumption-based results entirely.
This also produces better results in some cases, because NoAlias
assumptions can now start at any root, rather than just phi-phi pairs.
This is not just relevant for analysis quality, but also for BatchAA
consistency: Otherwise, results would once again depend on query order,
though at least they wouldn't be wrong.
This ended up both more complicated and more expensive than I hoped,
but I wasn't able to come up with another solution that satisfies all
the constraints.
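For intuition, a hypothetical query shape: asking whether %p and %q alias recurses into their incoming values, which involve %p and %q again. Under the new scheme, a NoAlias assumption for (%p, %q) is inserted up front and kept only if the recursive queries on %a/%b and %p.next/%q.next also come back NoAlias.
```llvm
define void @walk(i8* %a, i8* %b, i64 %n) {
entry:
  br label %loop
loop:
  %i = phi i64 [ 0, %entry ], [ %i.next, %loop ]
  %p = phi i8* [ %a, %entry ], [ %p.next, %loop ]
  %q = phi i8* [ %b, %entry ], [ %q.next, %loop ]
  %p.next = getelementptr i8, i8* %p, i64 1
  %q.next = getelementptr i8, i8* %q, i64 1
  %i.next = add i64 %i, 1
  %done = icmp eq i64 %i.next, %n
  br i1 %done, label %exit, label %loop
exit:
  ret void
}
```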
Differential Revision: https://reviews.llvm.org/D91936
Previously when trying to support CoroSplit's function splitting, we
added in a hack that simply added the new function's node into the
original function's SCC (https://reviews.llvm.org/D87798). This is
incorrect since it might be in its own SCC.
Now, more similar to the previous design, we have callers explicitly
notify the LazyCallGraph that a function has been split out from another
one.
In order to properly support CoroSplit, there are two ways functions can
be split out.
One is the normal expected "outlining" of one function into a new one.
The new function may only reference functions that the original function
already referenced. The original function must reference the new function. The
new function may reference the original function, which can result in
the new function being in the same SCC as the original function. The
weird case is when the original function indirectly references the new
function, but the new function directly calls the original function,
resulting in the new SCC being a parent of the original function's SCC.
This form of function splitting works with CoroSplit's Switch ABI.
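A minimal hypothetical sketch of this outlining case, with a reference back from the new function placing both in the same SCC:
```llvm
define void @f(i1 %c) {
entry:
  br i1 %c, label %cont, label %done
cont:
  call void @f.split()
  ret void
done:
  ret void
}

define internal void @f.split() {
  call void @f(i1 false) ; reference back: same SCC as @f
  ret void
}
```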
The second way of splitting is more specific to CoroSplit. CoroSplit's
Retcon and Async ABIs split the original function into multiple
functions that all reference each other and are referenced by the
original function. In order to keep the LazyCallGraph in a valid state,
all new functions must be processed together, else some nodes won't be
populated. To keep things simple, this only supports the case where all
new edges are ref edges, and every new function references every other
new function. There can be a reference back from any new function to the
original function, putting all functions in the same RefSCC.
This also adds asserts that all nodes in a (Ref)SCC can reach all other
nodes to prevent future incorrect hacks.
The original hacks in https://reviews.llvm.org/D87798 are no longer
necessary since all new functions should have been registered before
calling updateCGAndAnalysisManagerForPass.
This fixes all coroutine tests when opt's -enable-new-pm is true by
default. This also fixes PR48190, which was likely due to the previous
hack breaking SCC invariants.
Reviewed By: rnk
Differential Revision: https://reviews.llvm.org/D93828
This patch
- Adds containsPoisonElement, which checks for the existence of poison in constant vector elements,
- Renames containsUndefElement to containsUndefOrPoisonElement to clarify its behavior, and updates its uses accordingly.
This patch also adds tests for isGuaranteedNotToBeUndefOrPoison on constant vectors, since its analysis of them is improved.
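For illustration, a constant vector with a poison element (hypothetical example): containsPoisonElement returns true for the returned constant, so isGuaranteedNotToBeUndefOrPoison returns false for it, while a fully defined constant like <2 x i32> <i32 0, i32 1> contains neither.
```llvm
define <2 x i32> @vec_const() {
  ret <2 x i32> <i32 0, i32 poison>
}
```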
Thanks!
Reviewed By: nikic
Differential Revision: https://reviews.llvm.org/D94053
Here we let non-intrinsic calls be considered legal and valid for
similarity only if the call is not indirect and has a name.
For two calls to be considered similar, they must have the same name,
the same function type, and the same parameters, as well as matching
tail-call markers and calling conventions.
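A hypothetical sketch: the two calls below are candidates for similarity, with the same callee name, function type, tail marker, and calling convention, whereas an indirect call through a function pointer would not be considered.
```llvm
declare i32 @work(i32)

define i32 @region1(i32 %x) {
  %r = tail call i32 @work(i32 %x)
  ret i32 %r
}

define i32 @region2(i32 %y) {
  %r = tail call i32 @work(i32 %y)
  ret i32 %r
}
```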
Tests are found in unittests/Analysis/IRSimilarityIdentifierTest.cpp.
Reviewers: jroelofs, paquette
Differential Revision: https://reviews.llvm.org/D87312
GetElementPtr instructions require an extra check: to be considered
similar, all operands after the first must be constants and must match
exactly.
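A hypothetical sketch: %g1 and %g2 can be considered similar, while %g3 differs in a constant index and %g4 uses a non-constant index.
```llvm
define void @geps([8 x i32]* %a, [8 x i32]* %b, i64 %i) {
  %g1 = getelementptr [8 x i32], [8 x i32]* %a, i64 0, i64 2
  %g2 = getelementptr [8 x i32], [8 x i32]* %b, i64 0, i64 2  ; similar to %g1
  %g3 = getelementptr [8 x i32], [8 x i32]* %a, i64 0, i64 3  ; constant differs
  %g4 = getelementptr [8 x i32], [8 x i32]* %a, i64 0, i64 %i ; non-constant index
  ret void
}
```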
Tests are found in unittests/Analysis/IRSimilarityIdentifierTest.cpp.
This PR adds impliesPoison(ValAssumedPoison, V), which returns true if V is
poison under the assumption that ValAssumedPoison is poison.
For example, impliesPoison('icmp X, 10', 'icmp X, Y') returns true because
'icmp X, Y' is poison if 'icmp X, 10' is poison.
impliesPoison can be used for sound optimization of select, as discussed in
D77868.
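The example above in IR form (hypothetical function): since impliesPoison(%c1, %c2) is true, the select below can be folded to 'and i1 %c2, %c1' without introducing poison in the case where %c2 is false.
```llvm
define i1 @fold_select(i32 %x, i32 %y) {
  %c1 = icmp ult i32 %x, 10 ; poison only if %x is poison
  %c2 = icmp ult i32 %x, %y ; poison if %x (or %y) is poison
  %r = select i1 %c2, i1 %c1, i1 false
  ret i1 %r
}
```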
Reviewed By: nikic
Differential Revision: https://reviews.llvm.org/D78152
This patch updates isImpliedCondition/isKnownNonZero to look into the select
form of and/or as well.
See llvm.org/pr48353 and D93065 for more context.
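A hypothetical sketch of the pattern: the select below is the poison-safe form of 'and i1 %a, %b', and branching on it should imply both conditions on the true edge, just as for a plain 'and'.
```llvm
define void @select_and(i1 %a, i1 %b, i32* %p) {
  %both = select i1 %a, i1 %b, i1 false ; select form of 'and i1 %a, %b'
  br i1 %both, label %then, label %else
then:
  ; here both %a and %b are known to be true
  store i32 1, i32* %p
  ret void
else:
  ret void
}
```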
Reviewed By: nikic
Differential Revision: https://reviews.llvm.org/D93845
Some predicates can be considered the same as long as the operands are
flipped. For example, a > b gives the same result as b < a. This maps
instructions in a greater-than form to their appropriate less-than
form, swapping the operands in the IRInstructionData only, allowing for
more flexible matching.
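A hypothetical sketch: %gt is recorded in its less-than form with swapped operands, so it maps to the same form as %lt.
```llvm
define void @preds(i32 %a, i32 %b, i32 %c, i32 %d) {
  %gt = icmp sgt i32 %a, %b ; treated as 'icmp slt i32 %b, %a'
  %lt = icmp slt i32 %c, %d ; same canonical form as %gt
  ret void
}
```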
Tests:
llvm/test/Transforms/IROutliner/outlining-isomorphic-predicates.ll
llvm/unittests/Analysis/IRSimilarityIdentifierTest.cpp
Reviewers: jroelofs, paquette
Recommit of commit 050392660249c70c00e909ae4a7151ba2c766235
Differential Revision: https://reviews.llvm.org/D87310
Some predicates can be considered the same as long as the operands are
flipped. For example, a > b gives the same result as b < a. This maps
instructions in a greater-than form to their appropriate less-than
form, swapping the operands in the IRInstructionData only, allowing for
more flexible matching.
Tests:
llvm/test/Transforms/IROutliner/outlining-isomorphic-predicates.ll
llvm/unittests/Analysis/IRSimilarityIdentifierTest.cpp
Reviewers: jroelofs, paquette
Differential Revision: https://reviews.llvm.org/D87310
Certain instructions, such as adds and multiplies, can have their operands
flipped and still be considered the same. When we are analyzing structure,
this gives slightly more flexibility in creating a mapping from one region
to another. We can map an operand to both operands of the corresponding
instruction rather than just the exact positional match, and then try to
eliminate items from the set until there is only one valid mapping between
the regions of code.
We do this for adds, multiplies, and equality checking. However, this is
not done for floating point instructions, since the order can still
matter in some cases.
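A hypothetical sketch: %s1 and %s2 can be mapped to each other despite the swapped operands, while the fadd pair is not treated as commutative here.
```llvm
define void @commute(i32 %a, i32 %b, float %x, float %y) {
  %s1 = add i32 %a, %b
  %s2 = add i32 %b, %a    ; maps to %s1: integer add is commutative
  %f1 = fadd float %x, %y
  %f2 = fadd float %y, %x ; floating point: operand order still matters
  ret void
}
```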
Tests:
llvm/test/Transforms/IROutliner/outlining-commutative-fp.ll
llvm/test/Transforms/IROutliner/outlining-commutative.ll
llvm/unittests/Analysis/IRSimilarityIdentifierTest.cpp
Reviewers: jroelofs, paquette
Differential Revision: https://reviews.llvm.org/D87311
Don't iterate over the SCC, as we potentially modify it.
Verify modules (and fix some broken ones).
Only run each pass once and make sure that it's actually run.
Rename tests to just end in a number since I'm planning on adding a
bunch more which won't have good individual names. Instead, add comments
on the transformations that each test does.
Reviewed By: rnk
Differential Revision: https://reviews.llvm.org/D93427
1. Removed #include "...AliasAnalysis.h" in other headers and modules.
2. Cleaned up includes in AliasAnalysis.h.
Reviewed By: RKSimon
Differential Revision: https://reviews.llvm.org/D92489
Revert "Delete llvm::is_trivially_copyable and CMake variable HAVE_STD_IS_TRIVIALLY_COPYABLE"
This reverts commit 4d4bd40b578d77b8c5bc349ded405fb58c333c78.
This reverts commit 557b00e0afb2dc1776f50948094ca8cc62d97be4.
There are a small number of users of this function; they have all been updated.
This updates the C API, adding a new method, LLVMGetTypeByName2, that takes a context and a name.
Differential Revision: https://reviews.llvm.org/D78793
This reverts commit 8166ed1a7a26ee8ea8db9005cc8ee5d156adad9b,
as it caused some compilations to hang/loop indefinitely, see
https://reviews.llvm.org/D91936 for details.
Add a flag that disables caching when computing aliasing results
potentially based on a phi-phi NoAlias assumption. We'll still
insert cache entries temporarily to catch infinite recursion,
but will drop them afterwards, so they won't persist in BatchAA.
Differential Revision: https://reviews.llvm.org/D91936
Currently, we have some confusion in the codebase regarding the
meaning of LocationSize::unknown(): Some parts (including most of
BasicAA) assume that LocationSize::unknown() only allows accesses
after the base pointer. Some parts (various callers of AA) assume
that LocationSize::unknown() allows accesses both before and after
the base pointer (but within the underlying object).
This patch splits up LocationSize::unknown() into
LocationSize::afterPointer() and LocationSize::beforeOrAfterPointer()
to make this completely unambiguous. I tried my best to determine
which one is appropriate for all the existing uses.
The test changes in cs-cs.ll in particular illustrate a previously
clearly incorrect AA result: We were effectively assuming that
argmemonly functions were only allowed to access their arguments
after the passed pointer, but not before it. I'm pretty sure that
this was not intentional, and it's certainly not specified by
LangRef that way.
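A hypothetical sketch of why the distinction matters: an argmemonly function may access memory before the passed pointer, as long as it stays within the underlying object, so an unknown-size location for such a call must be modeled as beforeOrAfterPointer().
```llvm
define void @touch_before(i8* %p) #0 {
  %prev = getelementptr i8, i8* %p, i64 -4
  store i8 0, i8* %prev ; access *before* the argument pointer
  ret void
}

attributes #0 = { argmemonly nounwind }
```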
Differential Revision: https://reviews.llvm.org/D91649
Previously the inliner did a bit of a hack by adding ref edges for all
new edges introduced by performing an inline before calling
updateCGAndAnalysisManagerForPass(). This was because
updateCGAndAnalysisManagerForPass() didn't handle new non-trivial call
edges.
This adds handling of non-trivial call edges to
updateCGAndAnalysisManagerForPass(). The inliner previously called
updateCGAndAnalysisManagerForFunctionPass() since it was itself handling the addition of
newly introduced edges (so updateCGAndAnalysisManagerForPass() would
only have to handle promotion), but now it needs to call
updateCGAndAnalysisManagerForCGSCCPass() since
updateCGAndAnalysisManagerForPass() is now handling the new call edges
and function passes cannot add new edges.
We follow the previous path of adding trivial ref edges then letting promotion
handle changing the ref edges to call edges and the CGSCC updates. So
this still does not allow adding call edges that result in an addition
of a non-trivial ref edge.
This is in preparation for better detecting devirtualization. Previously
since the inliner itself would add ref edges,
updateCGAndAnalysisManagerForPass() would think that promotion and thus
devirtualization had happened after any sort of inlining.
Reviewed By: asbirlea
Differential Revision: https://reviews.llvm.org/D91046
If the same value is used multiple times in the same instruction,
CaptureTracking may end up reporting the wrong use as being captured,
and/or report the same use as being captured multiple times.
Make sure that all checks take the use operand number into account,
rather than performing unreliable comparisons against the used value.
I'm not sure whether this can cause any problems in practice, but
at least some capture trackers (ArgUsesTracker, AACaptureUseTracker)
do care about which call argument is captured.
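A hypothetical sketch: %p is used twice by the same call, and the two uses must be distinguished by operand number, for example when only the second parameter is nocapture.
```llvm
declare void @g(i8*, i8* nocapture)

define void @h(i8* %p) {
  call void @g(i8* %p, i8* %p) ; operand 0 may capture, operand 1 may not
  ret void
}
```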
If getClobberingMemoryAccess() is called with an explicit
MemoryLocation, but the starting access happens to be a call, the
provided location is currently ignored, and alias analysis queries
will be performed against the call instruction instead. Something
similar happens if the starting access is a load with a MemoryDef.
Change the implementation to not set Q.Inst in the first place if
we want to perform a MemoryLocation-based query, to make sure it
can't be turned into an Instruction-based query along the way.
Additionally, remove the special handling that lifetime.start
intrinsics currently get. They simply report NoAlias for clobbers
between lifetime.start and other calls, but that's obviously not
right if the other call is something like a memset or memcpy. The
default behavior we get from getModRefInfo() will already do the
right thing here.
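For illustration (hypothetical IR), the memset below writes exactly the memory the lifetime.start refers to, so reporting NoAlias for the pair would let a walk incorrectly skip past the clobber.
```llvm
define void @lifetime_clobber() {
  %buf = alloca [16 x i8]
  %p = bitcast [16 x i8]* %buf to i8*
  call void @llvm.lifetime.start.p0i8(i64 16, i8* %p)
  call void @llvm.memset.p0i8.i64(i8* %p, i8 0, i64 16, i1 false)
  ret void
}

declare void @llvm.lifetime.start.p0i8(i64, i8* nocapture)
declare void @llvm.memset.p0i8.i64(i8* nocapture, i8, i64, i1)
```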
Differential Revision: https://reviews.llvm.org/D88782
```
// The legacy PM CGPassManager discovers SCCs this way:
for function in the source order
tarjanSCC(function)
// While the new PM CGSCCPassManager does:
for function in the reversed source order [1]
discover a reference graph SCC
build call graph SCCs inside the reference graph SCC
```
In the common case, reference graph ~= call graph, so the new PM order is
undesirable: for `a | b | c` (3 independent functions), the new PM will
process them in the reversed order c, b, a. If `a <-> b <-> c`, we can see
that `-print-after-all` will report the sole SCC as `scc: (c, b, a)`.
This patch corrects the iteration order. The discovered SCC order will match
the legacy PM in the common cases.
For some tests (`Transforms/Inline/cgscc-*.ll` and
`unittests/Analysis/CGSCCPassManagerTest.cpp`), the behaviors are dependent on
the SCC discovery order and there are too many check lines for the particular
order. This patch simply reverses the function order to avoid changing too many
check lines.
Differential Revision: https://reviews.llvm.org/D90566
CallInst::updateProfWeight() creates branch_weights with i64 instead of i32.
To be more consistent everywhere and remove lots of casts from uint64_t
to uint32_t, use i64 for branch_weights.
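A sketch of the metadata involved, assuming the change as described (the weights become i64 constants):
```llvm
define void @hot_cold(i1 %c) {
  br i1 %c, label %hot, label %cold, !prof !0
hot:
  ret void
cold:
  ret void
}

!0 = !{!"branch_weights", i64 2000, i64 1}
```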
Reviewed By: davidxl
Differential Revision: https://reviews.llvm.org/D88609
URem operations with a constant power-of-2 second operand are now modeled as
such. This patch on its own has very little impact (e.g. no changes in
CodeGen for MultiSource/SPEC2000/SPEC2006 on X86 -O3 -flto), but I'll
soon post follow-up patches that make use of it to more accurately
determine the trip multiple.
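A hypothetical sketch of the pattern being modeled: with a power-of-2 divisor, the urem below is equivalent to a mask, which is what makes it useful for reasoning about trip multiples.
```llvm
define i64 @mod8(i64 %x) {
  %r = urem i64 %x, 8 ; equivalent to 'and i64 %x, 7'
  ret i64 %r
}
```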
Reviewed By: mkazantsev
Differential Revision: https://reviews.llvm.org/D89821
When we need to prove an implication between expressions of different type
widths, the default strategy is to widen everything to the wider type and
prove the implication in that type. This does not interact well with AddRecs
with negative steps and unsigned predicates: such an AddRec will likely not
have a `nuw` flag, and its `zext` to the wider type will not be an AddRec.
By contrast, the `trunc` of an AddRec can in some cases easily be proved to
be an AddRec too.
This patch introduces an alternative way of handling implications of
different type widths: if we can prove that the values of the wider type
actually fit in the narrow type, we truncate them and prove the implication
in the narrow type.
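A rough sketch of the kind of loop involved (hypothetical): the i64 induction variable decreases, so it has no `nuw` and widening it via `zext` does not yield an AddRec, but since its values start at a zext of an i32 and fit in 32 bits, the implication can be checked on the truncated value instead.
```llvm
define void @countdown(i32 %n) {
entry:
  %n.wide = zext i32 %n to i64
  br label %loop
loop:
  %iv = phi i64 [ %n.wide, %entry ], [ %iv.dec, %loop ]
  %iv.dec = add i64 %iv, -1         ; negative step: no nuw
  %iv.narrow = trunc i64 %iv to i32 ; still an AddRec in i32
  %cmp = icmp ugt i32 %iv.narrow, 1
  br i1 %cmp, label %loop, label %exit
exit:
  ret void
}
```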
This patch is being recommitted; the earlier revert was due to the revert of an underlying patch that this one depends on.
The unit test is temporarily disabled because the required logic in SCEV is
switched off for compile-time reasons.
Differential Revision: https://reviews.llvm.org/D89548
I noticed that alignment was no longer being inferred as well as before after
I last merged from upstream into our CHERI fork. I opened this review before
seeing that D88669
already fixes the same problem, so this commit simply adds the new test that
I added as part of this change.
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D89830
CallInst::updateProfWeight() creates branch_weights with i64 instead of i32.
To be more consistent everywhere and remove lots of casts from uint64_t
to uint32_t, use i64 for branch_weights.
Reviewed By: davidxl
Differential Revision: https://reviews.llvm.org/D88609
Any time we insert a block into VisitedPhiBBs, previously cached
values may no longer be valid for the recursive alias queries. As
such, perform them using an empty AAQueryInfo.
Note that if we recurse to the same phi, the block will already
be inserted, so we reuse the old AAQueryInfo, and thus still
protect against infinite recursion.
This problem can appear both with and without BatchAA, but is more
likely to occur with BatchAA, as more values are cached.
Differential Revision: https://reviews.llvm.org/D90066
Prior to this patch, computeKnownBits would only try to deduce trailing zero
bits for getelementptrs. This patch adds the logic to treat GEPs as a series
of adds of (index * scaling factor) terms.
Thanks to this patch, using a gep or performing an address computation
directly "by hand" (ptrtoint followed by adds and mul followed by inttoptr)
offers the same computeKnownBits information.
Previously, the "by hand" approach would have given more information.
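A hypothetical pair of equivalent computations: both functions compute the address of element %i, and with this patch both should yield the same known bits, e.g. at least two trailing zero bits given the aligned base.
```llvm
define i32* @via_gep(i32* align 16 %base, i64 %i) {
  %p = getelementptr i32, i32* %base, i64 %i
  ret i32* %p
}

define i32* @by_hand(i32* align 16 %base, i64 %i) {
  %base.int = ptrtoint i32* %base to i64
  %off = mul i64 %i, 4 ; scaling factor: sizeof(i32)
  %addr = add i64 %base.int, %off
  %p = inttoptr i64 %addr to i32*
  ret i32* %p
}
```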
This is related to https://llvm.org/PR47241.
Differential Revision: https://reviews.llvm.org/D86364
When we need to prove an implication between expressions of different type
widths, the default strategy is to widen everything to the wider type and
prove the implication in that type. This does not interact well with AddRecs
with negative steps and unsigned predicates: such an AddRec will likely not
have a `nuw` flag, and its `zext` to the wider type will not be an AddRec.
By contrast, the `trunc` of an AddRec can in some cases easily be proved to
be an AddRec too.
This patch introduces an alternative way of handling implications of
different type widths: if we can prove that the values of the wider type
actually fit in the narrow type, we truncate them and prove the implication
in the narrow type.
Reviewed By: fhahn
Differential Revision: https://reviews.llvm.org/D89548
AA computes the correct result for phi/a1 aliasing, while BatchAA
produces an incorrect result depending on which queries have been
performed beforehand.