mirror of https://github.com/RPCS3/llvm-mirror.git synced 2025-01-31 20:51:52 +01:00

370 Commits

Author SHA1 Message Date
Florian Hahn
bd03d58d2a [BasicAA] Use separate scale variable for GCD.
Use separate variable for adjusted scale used for GCD computations. This
fixes an issue where we incorrectly determined that all indices are
non-negative and returned noalias because of that.

Follow up to 91fa3565da16.
2021-06-30 20:04:39 +01:00
Florian Hahn
78eb09629d [BasicAA] Add test for incorrectly inferring noalias due to scale sign.
This patch adds a test where we currently incorrectly determine noalias,
because the sign of Scale is adjusted after 91fa3565da16.
2021-06-30 19:57:29 +01:00
Florian Hahn
8002fe7d67 [BasicAA] Be more careful with modulo ops on VariableGEPIndex.
(V * Scale) % X may not produce the same result for any possible value
of V, e.g. if the multiplication overflows. This means we currently
incorrectly determine NoAlias in some cases.
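
As an illustrative example of the problem (numbers chosen for exposition, not taken from the committed tests): with 4-bit wrapping arithmetic, Scale = 6 and X = 3, V = 1 gives (6 mod 16) % 3 = 0, while V = 3 gives (18 mod 16) % 3 = 2, so the residue of (V * Scale) % X is not independent of V once the multiply can wrap.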

This patch updates LinearExpression to track whether the expression
has NSW and uses that to adjust the scale used for alias checks.

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D99424
2021-06-29 09:22:36 +01:00
Florian Hahn
412987a7f5 [BasicAA] Add test to cover GetIndexDifference change in D99424.
Precommit test case for a change to GetIndexDifference in D99424.
2021-06-28 16:03:05 +01:00
Nicolai Hähnle
5d6415281f [IR] Memory intrinsics are not unconditionally nosync
Remove the `nosync` attribute from the memory intrinsic definitions
(i.e. memset, memcpy, memmove).

Like native memory accesses, memory intrinsics can be volatile. This is
indicated by an immarg in the intrinsic call. All else equal, a volatile
memory intrinsic is `sync`, so we cannot annotate the intrinsic functions
themselves as `nosync`. The attributor and function-attr passes know to
take the volatile bit into account.
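
A minimal IR sketch of the volatile form in question (hypothetical example, not from this patch):

  declare void @llvm.memcpy.p0i8.p0i8.i64(i8*, i8*, i64, i1 immarg)

  define void @copy_volatile(i8* %dst, i8* %src) {
    ; the final immarg operand is the volatile flag; with i1 true the call
    ; may synchronize, so the intrinsic declaration itself cannot be nosync
    call void @llvm.memcpy.p0i8.p0i8.i64(i8* %dst, i8* %src, i64 16, i1 true)
    ret void
  }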

Since `nosync` is a default attribute, this means we have to stop using
the DefaultAttrIntrinsic tablegen class for memory intrinsics, and
specify all default attributes other than `nosync` explicitly.

Most of the test changes are trivial churn, but one test case
(in nosync.ll) was in fact incorrect before this change.

Differential Revision: https://reviews.llvm.org/D102295
2021-05-21 03:40:59 +02:00
Joseph Tremoulet
b501eafd98 BasicAA: Recognize inttoptr as isEscapeSource
Pointers escape when converted to integers, so a pointer produced by
converting an integer to a pointer must not be a local non-escaping
object.
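
A hypothetical sketch of the kind of query this enables (not the committed test):

  define i8 @no_alias_with_local(i64 %addr) {
    %local = alloca i8                  ; never escapes
    %p = inttoptr i64 %addr to i8*      ; escape source: can only point at escaped objects
    store i8 1, i8* %local
    store i8 2, i8* %p                  ; cannot clobber the non-escaping %local
    %v = load i8, i8* %local
    ret i8 %v
  }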

Reviewed By: nikic, nlopes, aqjune

Differential Revision: https://reviews.llvm.org/D101541
2021-05-07 07:48:50 -07:00
dfukalov
7b0a514671 [AA] Updates for D95543.
Addressing later comments in D95543:
- `AliasResult::Result` renamed to `AliasResult::Kind`
- Offset printing added for `PartialAlias` case in `-aa-eval`
- Removed the VisitedPhiBBs check from BasicAA

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D100454
2021-04-15 12:22:03 +03:00
Florian Hahn
4b85eda637 [BasicAA] Add another GEP modulo test with shl with odd op. 2021-04-07 22:31:51 +01:00
Florian Hahn
50a9fd3de9 [BasicAA] Extend test coverage for GEP modulo logic.
Add a few additional test cases which combine multiplies with
powers-of-2, different wrapping flags.
2021-04-07 20:13:35 +01:00
Philip Reames
1695a86b7f Mark unordered memset/memmove/memcpy as nosync
Mostly a means to remove a bit of code from attributor in advance of implementing a FuncAttr inference for nosync.
2021-04-01 10:38:54 -07:00
Nikita Popov
ab6e561d30 [BasicAA] Make sure types match in constant offset heuristic
This can only happen if offset types that are larger than the
pointer size are involved. The previous implementation did not
assert in this case because it initialized the APInts to the
width of one of the variables -- though I strongly suspect it
did not compute correct results in this case.
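
A rough sketch of the situation (hypothetical test, assuming a 64-bit pointer width):

  target datalayout = "p:64:64"

  define void @wide_index(i8* %p, i128 %i, i128 %j) {
    ; the index type is wider than the 64-bit pointer, so the decomposed
    ; offsets must first be truncated to a common, pointer-sized width
    %g1 = getelementptr i8, i8* %p, i128 %i
    %g2 = getelementptr i8, i8* %p, i128 %j
    store i8 0, i8* %g1
    store i8 1, i8* %g2
    ret void
  }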

Fixes https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=32621
reported by fhahn.
2021-03-28 21:38:09 +02:00
Nikita Popov
f2b0645b2f [BasicAA] Refactor linear expression decomposition
The current linear expression decomposition handles zext/sext by
decomposing the casted operand, and then checking NUW/NSW flags
to determine whether the extension can be distributed. This has
some disadvantages:

First, it is not possible to perform a partial decomposition. If
we have zext((x + C1) +<nuw> C2) then we will fail to decompose
the expression entirely, even though it would be safe and
profitable to decompose it to zext(x + C1) +<nuw> zext(C2)

Second, we may end up performing unnecessary decompositions,
which will later be discarded because they lack nowrap flags
necessary for extensions.

Third, correctness of the code is not entirely obvious: At a high
level, we encounter zext(x -<nuw> C) in the form of a zext on the
linear expression x + (-C) with nuw flag set. Notably, this case
must be treated as zext(x) + -zext(C) rather than zext(x) + zext(-C).
The code handles this correctly by speculatively zexting constants
to the final bitwidth, and performing additional fixup if the
actual extension turns out to be an sext. This was not immediately
obvious to me.

This patch inverts the approach: An ExtendedValue represents a
zext(sext(V)), and linear expression decomposition will try to
decompose V further, either by absorbing another sext/zext into the
ExtendedValue, or by distributing zext(sext(x op C)) over a binary
operator with appropriate nsw/nuw flags. At each step we can
determine whether distribution is legal and abort with a partial
decomposition if not. We also know which extensions we need to
apply to constants, and don't need to speculate or fixup.
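
An illustrative IR sketch of the partial decomposition case mentioned above (hypothetical, not one of the committed tests):

  define i64 @partial_decomposition(i8* %p, i32 %x) {
    ; the inner add carries no nuw, but the outer one does, so
    ; zext((x + 7) +<nuw> 16) can still be split into zext(x + 7) + 16
    %a = add i32 %x, 7
    %b = add nuw i32 %a, 16
    %idx = zext i32 %b to i64
    %g = getelementptr i8, i8* %p, i64 %idx
    %r = ptrtoint i8* %g to i64
    ret i64 %r
  }
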
2021-03-27 23:31:58 +01:00
Nikita Popov
4cf39be5a9 [BasicAA] Correct handle implicit sext in decomposition
While explicit sext instructions were handled correctly, the
implicit sext that occurs if the offset is smaller than the
pointer size blindly assumed that sext(X * Scale + Offset) is the
same as sext(X) * Scale + Offset, which is obviously not correct.
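
For a concrete counterexample (illustrative numbers only): with an i8 index X = 100, Scale = 2 and Offset = 0, X * Scale wraps to -56, so sext(X * Scale + Offset) is -56, while sext(X) * Scale + Offset is 200.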

Fix this by extracting the code that handles linear expression
extension and reusing it for the implicit sext as well.
2021-03-27 15:15:47 +01:00
Nikita Popov
13fbfb64c9 [BasicAA] Clarify entry values of GetLinearExpression() (NFC)
A number of variables need to be correctly initialized on entry
to GetLinearExpression() for the implementation to behave reasonably.

The fact that SExtBits can currently be non-zero on entry is a bug,
as demonstrated by the added test: For implicit sexts by the GEP,
we do currently skip legality checks.
2021-03-27 14:50:09 +01:00
Nikita Popov
f99cc32fd4 [BasicAA] Retain shl nowrap flags in GetLinearExpression()
Nowrap flags between mul and shl differ in that mul nsw allows
multiplication of 1 * INT_MIN, while shl nsw does not. This means
that it is always fine to transfer shl nowrap flags to muls, but
not necessarily the other way around. In this case the NUW/NSW
results refer to mul/add operations, so it's fine to retain the
flags from the shl.
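
Concretely (in i8, for illustration): mul nsw i8 1, -128 is well-defined and yields -128, while the corresponding shl nsw i8 1, 7 is poison because the (zero) bits shifted out disagree with the resulting sign bit. The shl nowrap flags are therefore the stricter ones and can safely be carried over to a mul.
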
2021-03-27 12:26:22 +01:00
Florian Hahn
0e377c5502 [BasicAA] Add a few more interesting modulo tests. 2021-03-26 16:56:49 +00:00
Florian Hahn
00978dd33d [BasicAA] Add a few cases with overflows in index computations.
This patch adds a few test cases where currently NoAlias is returned,
but the pointers can alias if the multiply overflows while computing
a GEP index value.
2021-03-26 14:50:03 +00:00
Nikita Popov
57bd8da517 [BasicAA] Handle assumes with operand bundles
This fixes a regression reported on D99022: If a call has operand
bundles, then the inaccessiblememonly attribute on the function
will be ignored, as operand bundles can affect modref behavior in
the general case. However, for assume operand bundles in particular
this is not the case.

Adjust getModRefBehavior() to always report inaccessiblememonly
for assumes, regardless of presence of operand bundles.
2021-03-23 21:21:19 +01:00
Nikita Popov
7f52840c4e [BasicAA] Add test for assume with operand bundles (NFC) 2021-03-23 21:21:19 +01:00
Max Kazantsev
c38d6febb5 [BasicAA] Drop dependency on Loop Info. PR43276
BasicAA stores a reference to LoopInfo inside. This imposes an implicit
requirement of keeping it up to date whenever we modify the IR (in particular,
whenever we modify terminators of blocks that belong to loops). Failing
to do so leads to incorrect state of the LoopInfo.

Because AA in general does not require loop info updates and provides no API to
update it properly, users of AA reasonably assume that there is no need to
update the loop info. This can be a source of bugs, as the example in PR43276 shows.

This patch drops dependence of BasicAA on LoopInfo to avoid this problem.

This may potentially pessimize the result of queries to BasicAA.

Differential Revision: https://reviews.llvm.org/D98627
Reviewed By: nikic
2021-03-17 11:43:44 +07:00
Philip Reames
883f87b472 [basicaa] Recurse through a single phi input
BasicAA knows how to analyze phis, but to control compile time, we're fairly limited in doing so. This patch loosens that restriction just slightly when there is exactly one phi input (after discounting induction variable increments). The result of this is that we can handle more cases around nested and sibling loops with pointer induction variables.

A few points to note.
* This is deliberately extremely restrictive about recursing through at most one input of the phi.  There's a known general problem with BasicAA sometimes hitting exponential compile time already, and this patch makes every effort not to compound the problem.  Once the root issue is fixed, we can probably loosen the restrictions here a bit.
* As seen in the test file, we're still missing cases which aren't *directly* based on phis (e.g. using the indvar increment). I believe this to be a separate problem and am going to explore this in another patch once this one lands.
* As seen in the test file, this results in the unfortunate fact that using phivalues sometimes results in worse quality results. I believe this comes down to an oversight in how recursive phi detection was implemented for phivalues. I'm happy to tackle this in a follow up change.
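
A hypothetical sketch of the pointer-induction pattern this targets (not one of the committed tests):

  define void @copy_loop(i32* noalias %dst, i32* %src, i64 %n) {
  entry:
    br label %loop

  loop:
    ; pointer induction variable: after discounting its own increment,
    ; the phi has exactly one interesting input, %src
    %p = phi i32* [ %src, %entry ], [ %p.next, %loop ]
    %i = phi i64 [ 0, %entry ], [ %i.next, %loop ]
    %v = load i32, i32* %p
    store i32 %v, i32* %dst
    %p.next = getelementptr inbounds i32, i32* %p, i64 1
    %i.next = add nuw i64 %i, 1
    %done = icmp eq i64 %i.next, %n
    br i1 %done, label %exit, label %loop

  exit:
    ret void
  }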

Differential Revision: https://reviews.llvm.org/D97401
2021-03-04 13:07:06 -08:00
Philip Reames
05b6860c6e [basicaa] Rewrite isGEPBaseAtNegativeOffset in terms of index difference [mostly NFC]
This is almost purely NFC; it just fits more obviously in the flow of the code now that we've standardized on the index difference approach.  The non-NFC bit is that, because the VariableOffsets cancel in the subtraction, we can now handle the case where both sides involve a common variable offset.  This isn't an "interesting" improvement; it just happens to fall out of the natural code structure.

One subtle point - the placement of this above the BaseAlias check is important in the original code as this can return NoAlias even when we can't find a relation between the bases otherwise.

Also added some enhancement TODOs noticed while understanding the existing code.

Note: This is slightly different than the LGTMed version.  I fixed the "inbounds" issue Nikita noticed with the original code in e6e5ef4 and rebased this to include the same fix.

Differential Revision: https://reviews.llvm.org/D97520
2021-03-03 09:03:28 -08:00
Philip Reames
a40e17a3fc [basicaa] Fix a latent bug in isGEPBaseAtNegativeOffset
This was pointed out in review of D97520 by Nikita, but existed in the original code as well.

The basic issue is that a decomposed GEP expression describes (potentially) more than one getelementptr.  The "inbounds"-derived UB which justifies this aliasing rule requires that the entire offset be composed of "inbounds" geps.  Otherwise, as can be seen in the test recently added and changed in this patch, we can end up with a large cumulative offset of which only a small sub-offset is actually "inbounds".  If that small sub-offset lies within the object, the result was unsound.

We could potentially be fancier here, but for the moment, simply be conservative when any of the GEPs parsed aren't inbounds.
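
A hypothetical sketch of the mixed-inbounds situation (not the committed test):

  define void @mixed_inbounds(i8* %obj, i64 %big) {
    ; only the inner GEP is inbounds; the decomposed expression still sums
    ; both offsets, so UB-based reasoning about the total offset is invalid
    %inner = getelementptr inbounds i8, i8* %obj, i64 4
    %outer = getelementptr i8, i8* %inner, i64 %big
    store i8 0, i8* %outer
    ret void
  }
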
2021-03-03 08:43:32 -08:00
Philip Reames
a36885fad4 [tests] Add tests for cases brought up during review of D97520 2021-03-03 08:30:54 -08:00
Philip Reames
605f2ea52d [tests] precommit tests for an upcoming AA improvement 2021-02-24 09:51:00 -08:00
Philip Reames
7f1a17b009 Revert "[tests] Mark an autogened test as such"
This reverts commit 43a569faeb332ae8b355fffc33eec1ef6e33052e.

Unhelpfully, the tool just added the header and didn't actually update any of the tests.  I didn't notice until after pushing.
2021-02-24 09:26:26 -08:00
Philip Reames
01ce9f6d68 [tests] Mark an autogened test as such 2021-02-24 09:15:19 -08:00
Dávid Bolvanský
a89bd24670 Reland "[Libcalls, Attrs] Annotate libcalls with noundef"
Fixed Clang tests.
2021-02-20 06:18:48 +01:00
Dávid Bolvanský
2b364cafc0 Revert "[Libcalls, Attrs] Annotate libcalls with noundef"
This reverts commit 33b0c63775ce58014c55e285671e3315104a6076. Bots are failing. Some Clang tests need to be updated too.
2021-02-20 04:18:42 +01:00
Dávid Bolvanský
6e69bbcd66 [Libcalls, Attrs] Annotate libcalls with noundef
I think we can use the same logic here as for nonnull.

strlen(X) - X must be noundef => valid pointer.

For libcalls with a size argument, we add noundef only if the size is known and greater than 0, so the pointers must be noundef (valid ones).
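
A rough sketch of the resulting annotations (attribute lists abbreviated and hypothetical, not the exact sets emitted):

  ; strlen must be given a valid pointer, so its argument can be noundef
  declare i64 @strlen(i8* noundef)

  declare i8* @memset(i8*, i32, i64)

  define void @zero_it(i8* %p) {
    ; for a size-carrying libcall, noundef is only added when the size is a
    ; known constant greater than zero, as with the 16 here
    %r = call i8* @memset(i8* noundef %p, i32 0, i64 16)
    ret void
  }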

Reviewed By: jdoerfert, aqjune

Differential Revision: https://reviews.llvm.org/D95122
2021-02-20 04:10:07 +01:00
Nikita Popov
92fca6922a [MemCopyOpt] Enable MemorySSA by default
This enables use of MemorySSA instead of MemDep in MemCpyOpt. To
allow this without significant compile-time impact, the MemCpyOpt
pass is moved directly before DSE (in the cases where this was not
already the case), which allows us to reuse the existing MemorySSA
analysis.

Unlike the MemDep-based implementation, the MemorySSA-based MemCpyOpt
can also perform simple optimizations across basic blocks.

Differential Revision: https://reviews.llvm.org/D94376
2021-02-19 18:06:25 +01:00
Nikita Popov
9fa93a3e70 [BasicAA] Always strip single-argument phi nodes
We can always look through single-argument (LCSSA) phi nodes when
performing alias analysis. getUnderlyingObject() already does this,
but stripPointerCastsAndInvariantGroups() does not. We still look
through these phi nodes with the usual aliasPhi() logic, but
sometimes get sub-optimal results due to the restrictions on value
equivalence when looking through arbitrary phi nodes. I think it's
generally beneficial to keep the underlying object logic and the
pointer cast stripping logic in sync, insofar as it is possible.
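
A hypothetical example of the single-argument phi in question (not the committed test):

  define void @lcssa_phi(i32* %p, i64 %n) {
  entry:
    br label %loop

  loop:
    %i = phi i64 [ 0, %entry ], [ %i.next, %loop ]
    %q = getelementptr inbounds i32, i32* %p, i64 %i
    store i32 0, i32* %q
    %i.next = add nuw i64 %i, 1
    %c = icmp eq i64 %i.next, %n
    br i1 %c, label %exit, label %loop

  exit:
    ; single-argument (LCSSA) phi: it can only ever be %q, so stripping it
    ; for alias analysis purposes is always safe
    %q.lcssa = phi i32* [ %q, %loop ]
    store i32 1, i32* %q.lcssa
    ret void
  }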

With this patch we get marginally better results:

  aa.NumMayAlias | 5010069 | 5009861
  aa.NumMustAlias | 347518 | 347674
  aa.NumNoAlias | 27201336 | 27201528
  ...
  licm.NumPromoted | 1293 | 1296

I've renamed the relevant strip method to stripPointerCastsForAliasAnalysis(),
as we're past the point where we can explicitly spell out everything
that's getting stripped.

Differential Revision: https://reviews.llvm.org/D96668
2021-02-18 23:07:50 +01:00
Nikita Popov
4566585f18 [BasicAA] Merge aliasGEP code paths
At this point, we can treat the case of GEP/GEP aliasing and
GEP/non-GEP aliasing in essentially the same way. The only
differences are that we need to do an additional negative GEP base
check, and that we perform a bailout on unknown sizes for the
GEP/non-GEP case (the latter exists only to limit compile-time).

This change is not quite NFC due to the peculiar effect that
the DecomposedGEP for V2 can actually be non-trivial even if V2
is not a GEP. The reason for this is that getUnderlyingObject()
can look through LCSSA phi nodes, while stripPointerCasts() doesn't.
This can lead to slightly better results if single-entry phi nodes
occur inside a loop, where looking through the phi node via aliasPhi()
would subject it to phi cycle equivalence restrictions. It would
probably make sense to adjust pointer cast stripping (for AA) to
handle this case, and ensure consistent results.
2021-02-14 19:35:36 +01:00
Nikita Popov
880c8747c1 [BasicAA] Add test for single arg phi in loop (NFC) 2021-02-14 19:35:36 +01:00
Tyker
443904009e reland [InstCombine] convert assumes to operand bundles
InstCombine will convert nonnull and alignment assumptions that use a boolean condition
into assumptions that use operand bundles when knowledge retention is enabled.
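
Roughly, the rewrite looks like this (illustrative sketch):

  declare void @llvm.assume(i1)

  define void @before(i32* %p) {
    %cond = icmp ne i32* %p, null
    call void @llvm.assume(i1 %cond)          ; condition-based form
    ret void
  }

  define void @after(i32* %p) {
    ; operand-bundle form emitted when knowledge retention is enabled
    call void @llvm.assume(i1 true) [ "nonnull"(i32* %p) ]
    ret void
  }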

Differential Revision: https://reviews.llvm.org/D82703
2021-02-13 13:03:11 +01:00
Tyker
919f469bdb Revert "[InstCombine] convert assumes to operand bundles"
This reverts commit 5eb2e994f9b3a5aff0a156d0a1f7e6121342cc11.
2021-02-10 01:32:00 +01:00
Tyker
76d5e82203 [InstCombine] convert assumes to operand bundles
InstCombine will convert nonnull and alignment assumptions that use a boolean condition
into assumptions that use operand bundles when knowledge retention is enabled.

Differential Revision: https://reviews.llvm.org/D82703
2021-02-09 19:33:53 +01:00
Jeroen Dobbelaere
116cd71f2c [noalias.decl] Look through llvm.experimental.noalias.scope.decl
Just like llvm.assume, there are a lot of cases where we can just ignore llvm.experimental.noalias.scope.decl.

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D93042
2021-01-19 20:09:42 +01:00
Nikita Popov
c2d5b85909 [BasicAA] Fix BatchAA results for phi-phi assumptions
Change the way NoAlias assumptions in BasicAA are handled. Instead of
handling this inside the phi-phi code, always initially insert a
NoAlias result into the map and keep track whether it is used.
If it is used, then we require that we also get back NoAlias from
the recursive queries. Otherwise, the entry is changed to MayAlias.

Additionally, keep track of all location pairs we inserted that may
still be based on assumptions higher up. If it turns out one of those
assumptions is incorrect, we flush them from the cache.

The compile-time impact for the new implementation is significantly
higher than the previous iteration of this patch:
https://llvm-compile-time-tracker.com/compare.php?from=c0bb9859de6991cc233e2dedb978dd118da8c382&to=c07112373279143e37568b5bcd293daf81a35973&stat=instructions
However, it should avoid the exponential runtime cases we run into
if we don't cache assumption-based results entirely.

This also produces better results in some cases, because NoAlias
assumptions can now start at any root, rather than just phi-phi pairs.
This is not just relevant for analysis quality, but also for BatchAA
consistency: Otherwise, results would once again depend on query order,
though at least they wouldn't be wrong.

This ended up both more complicated and more expensive than I hoped,
but I wasn't able to come up with another solution that satisfies all
the constraints.

Differential Revision: https://reviews.llvm.org/D91936
2021-01-06 22:15:30 +01:00
Nikita Popov
b9fd8ed0fc [BasicAA] Pass AC/DT to isKnownNonEqual()
This allows us to handle assumes etc in the recursive
isKnownNonZero() checks.
2020-12-25 18:29:20 +01:00
Nikita Popov
29aff8b8aa [BasicAA] Pass context instruction to isKnownNonZero()
This allows us to handle additional cases like assumes.
2020-12-25 12:58:19 +01:00
Nikita Popov
d9de690aaf [BasicAA] Make sure context instruction is symmetric
D71264 started using a context instruction in a computeKnownBits()
call. However, if aliasing between two GEPs is checked, then the
choice of context instruction will be different for alias(GEP1, GEP2)
and alias(GEP2, GEP1), which is not supposed to happen.

Resolve this by remembering which GEP a certain VarIndex belongs to,
and use that as the context instruction. This makes the choice of
context instruction predictable and symmetric.

It should be noted that this choice of context instruction is
non-optimal (just like the previous choice): The AA query result is
only valid at points that are reachable from *both* instructions.
Using either one of them is conservatively correct, but a larger
context may also be valid to use.

Differential Revision: https://reviews.llvm.org/D93183
2020-12-25 11:35:46 +01:00
Nikita Popov
4cddc2f391 [AA] byval argument is identified function local
byval arguments should mostly get the same treatment as noalias
arguments in alias analysis. This was not the case for the
isIdentifiedFunctionLocal() function.

Marking byval arguments as identified function local means that
they cannot alias with other arguments, which I believe is correct.
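
A minimal sketch of what this allows (hypothetical example):

  %struct.S = type { [4 x i32] }

  define i32 @f(%struct.S* byval(%struct.S) %a, i32* noalias %b) {
    ; the byval argument is a fresh, function-local copy, so it cannot
    ; alias the other identified function-local object %b
    %p = getelementptr inbounds %struct.S, %struct.S* %a, i64 0, i32 0, i64 0
    store i32 1, i32* %p
    store i32 2, i32* %b
    %v = load i32, i32* %p
    ret i32 %v
  }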

Differential Revision: https://reviews.llvm.org/D93602
2020-12-21 20:18:23 +01:00
Nikita Popov
3d38cdb1c2 [BasicAA] Add test for byval argument (NFC) 2020-12-20 21:58:22 +01:00
Florian Hahn
b0f7329005 Revert "[BasicAA] Handle two unknown sizes for GEPs"
Temporarily revert commit 8b1c4e310c2f9686cad925ad81d8e2be10a1ef3c.

After 8b1c4e310c2f the compile-time for `MultiSource/Benchmarks/MiBench/consumer-lame`
dramatically increases with -O3 & LTO, causing issues for builders with
that configuration.

I filed PR48553 with a smallish reproducer that shows a 10-100x compile
time increase.
2020-12-18 17:59:12 +00:00
Nikita Popov
5f5687abd0 [BasicAA] Handle known non-zero variable index
BasicAA currently handles cases like Scale*V0 + (-Scale)*V1 where
V0 != V1, but does not handle the simpler case of Scale*V with
V != 0. Add it based on an isKnownNonZero() call.
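
A hypothetical sketch of the pattern (not the committed test):

  declare void @llvm.assume(i1)

  define void @nonzero_index(i32* %p, i64 %v) {
    ; with %v known non-zero, the variable offset (4 * %v) is at least 4
    ; bytes in magnitude, so the two 4-byte stores can be disambiguated
    %nz = icmp ne i64 %v, 0
    call void @llvm.assume(i1 %nz)
    %g = getelementptr inbounds i32, i32* %p, i64 %v
    store i32 0, i32* %p
    store i32 1, i32* %g
    ret void
  }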

I'm not passing a context instruction for now, because the existing
approach of always using GEP1 for context could result in symmetry
issues.

Differential Revision: https://reviews.llvm.org/D93162
2020-12-13 13:20:05 +01:00
Nikita Popov
36c749f8f1 [BasicAA] Add tests for non-zero var index (NFC) 2020-12-12 15:00:46 +01:00
Nikita Popov
022645293f [BasicAA] Add extra check in phi-spec-order.ll (NFC)
The (scevgep, scevgep5) relation regressed with a patch I was
trying, but wasn't tested.
2020-12-11 21:20:51 +01:00
Nikita Popov
b4e8c6bdf0 [BasicAA] Handle two unknown sizes for GEPs
If we have two unknown sizes and one GEP operand and one non-GEP
operand, then we currently simply return MayAlias. The comment says
we can't do anything useful ... but we can! We can still check that
the underlying objects are different (and do so for the GEP-GEP case).

To reduce the compile-time impact, this a) checks this early, before
doing the relatively expensive GEP decomposition that will not be
used and b) doesn't do the check if the other operand is a phi or
select. In that case, the phi/select will already recurse, so this
would just do two slightly different recursive walks that arrive at
the same roots.

Compile-time is still a bit of a mixed bag: https://llvm-compile-time-tracker.com/compare.php?from=624af932a808b363a888139beca49f57313d9a3b&to=845356e14adbe651a553ed11318ddb5e79a24bcd&stat=instructions
On average this is a small improvement, but sqlite with ThinLTO has
a 0.5% regression (lencod has a 1% improvement).

The BasicAA test case checks this by using two memsets with unknown
size. However, the more interesting case where this is useful is
the LoopVectorize test case, as analysis of accesses in loops tends
to always use unknown sizes.

Differential Revision: https://reviews.llvm.org/D92401
2020-12-11 18:45:53 +01:00
Nikita Popov
0f3bf80439 [BasicAA] Migrate "same base pointer" logic to decomposed GEPs
BasicAA has some special bit of logic for "same base pointer" GEPs
that performs a structural comparison: It only looks at two GEPs
with the same base (as opposed to two GEP chains with a MustAlias
base) and compares their indexes in a limited way. I generalized
part of this code in D91027, and this patch merges the remainder
into the normal decomposed GEP logic.

What this code ultimately wants to do is to determine that
gep %base, %idx1 and gep %base, %idx2 don't alias if %idx1 != %idx2,
and the access size fits within the stride.
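
A hypothetical IR sketch of that situation (not the committed test):

  declare void @llvm.assume(i1)

  define void @same_base(i32* %base, i64 %idx1, i64 %idx2) {
    %ne = icmp ne i64 %idx1, %idx2
    call void @llvm.assume(i1 %ne)
    ; offsets decompose to 4*%idx1 and 4*%idx2; with %idx1 != %idx2 and a
    ; 4-byte access fitting within the 4-byte stride, the stores don't overlap
    %g1 = getelementptr inbounds i32, i32* %base, i64 %idx1
    %g2 = getelementptr inbounds i32, i32* %base, i64 %idx2
    store i32 0, i32* %g1
    store i32 1, i32* %g2
    ret void
  }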

We can express this in terms of a decomposed GEP expression with
two indexes scale*%idx1 + -scale*%idx2 where %idx1 != %idx2, and
some appropriate checks for sizes and offsets.

This makes the reasoning slightly more powerful, and more
importantly brings all the GEP logic under a common umbrella.

Differential Revision: https://reviews.llvm.org/D92723
2020-12-06 10:27:35 +01:00