It can be used like this:
-debug-counter=dse-memoryssa-skip=10,dse-memoryssa-counter-count=20
Reviewers: dmgreen, rnk, efriedma, bryant, asbirlea
Reviewed By: asbirlea
Differential Revision: https://reviews.llvm.org/D72147
For tracked globals that are unknown after solving, we expect all
non-store uses to be replaced.
This is a follow-up to f8045b250d80, which removed forcedconstant.
We should not mark unknown loads as overdefined, as they either load
from an unknown pointer or an undef global. Restore the original logic
for loads.
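For illustration, a minimal IR sketch (hypothetical, not one of the patch's tests) of the undef-global case that should not become overdefined:

```
@g = internal global i32 undef

define i32 @read_g() {
  ; Load from a tracked global whose initializer is undef; the load should
  ; not be eagerly marked overdefined.
  %v = load i32, i32* @g
  ret i32 %v
}
```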
Summary:
As mentioned in D71974, it is useful for must-be-executed-context to explore the CFG backwards.
This patch is ported from parts of D64975. We use the dominator tree, if available, to find the
previous context.
Reviewers: jdoerfert, hfinkel, baziotis, sstefan1
Reviewed By: jdoerfert
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D74817
We can look through calls with `returned` argument attributes when we
collect subsuming positions. This allows us to get existing attributes
from more places.
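A hedged sketch (hypothetical functions, not from the patch) of what looking through `returned` arguments enables:

```
declare i8* @passthrough(i8* returned)

define i8* @caller(i8* nonnull dereferenceable(4) %p) {
  ; Because the argument is marked `returned`, attributes known for %p can
  ; also be collected for the call result %r.
  %r = call i8* @passthrough(i8* %p)
  ret i8* %r
}
```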
We are often interested in an assumed constant, and sometimes it has to
be an integer constant. Before, we only looked for the latter; now we can
ask for either.
We will usually ask for the liveness of an argument anyway, so we ended up
lazily creating the attribute regardless. However, that is not always the
case, and even if it is, we should take the eager route here. Various tests
show how this can improve the outcome. One test exposed a problem with
type mismatches between argument and call site argument; a fix is
included. For liveness, various more tests were added as well.
If a function pointer is cast to a different type, the resulting
expression can be a constant. If so, it can be used multiple times which
cannot be handled by the AbstractCallSite constructor alone. Instead, we
follow the cast expression uses now explicitly during the call site
traversal.
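A hedged illustration (hypothetical functions) of the situation: the same cast constant expression is used at multiple call sites, so its uses are now followed explicitly:

```
declare void @callee(i64)

define void @caller() {
  ; The bitcast constant expression wrapping @callee has two users, which a
  ; single AbstractCallSite built from one use cannot cover on its own.
  call void bitcast (void (i64)* @callee to void (i32)*)(i32 0)
  call void bitcast (void (i64)* @callee to void (i32)*)(i32 1)
  ret void
}
```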
In builds with assertions enabled (!NDEBUG), IndVarSimplify does an
additional query to ScalarEvolution which may change future SCEV queries
since it fills the internal cache differently. The result is actually
only used with the -verify-indvars command line option. We fix the issue
by only calling SE->getBackedgeTakenCount(L) if -verify-indvars is
enabled, so that only -verify-indvars exhibits the behavior, not debug
builds themselves. Also add a remark to the description of
-verify-indvars about this behavior.
Fixes llvm.org/PR44815
Differential Revision: https://reviews.llvm.org/D74810
Some IRBuilder methods that were originally defined on
IRBuilderBase do not respect custom IRBuilder inserters/folders,
because those were not accessible prior to D73835. Fix this by
making use of existing (and now accessible) IRBuilder methods,
which will handle inserters/folders correctly.
There are some changes in OpenMP and Instrumentation tests, where
bitcasts now get constant folded. I've also highlighted one
InstCombine test which now finishes in two rather than three
iterations, thanks to new instructions being inserted into the
worklist.
Differential Revision: https://reviews.llvm.org/D74787
Some IRBuilder methods that were originally defined on
IRBuilderBase do not respect custom IRBuilder inserters/folders,
because those were not accessible prior to D73835. Fix this by
making use of existing (and now accessible) IRBuilder methods,
which will handle inserters/folders correctly.
There are some changes in OpenMP tests, where bitcasts now get
constant folded. I've also highlighted one InstCombine test which
now finishes in two rather than three iterations, thanks to new
instructions being inserted into the worklist.
Differential Revision: https://reviews.llvm.org/D74787
InstCombine folds (a + b <u a) to (a ^ -1 <u b), and that does not match
the expected pattern in CodeGenPrepare via UAddWithOverflow.
This causes a regression over Clang 7 on both X86 and AArch64:
https://gcc.godbolt.org/z/juhXYV
This patch extends UAddWithOverflow to also catch the XOR case, if the
XOR is only used in the ICMP. This covers just a single case, but I'd
like to make sure I am not missing anything before tackling the other
cases.
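For reference, a sketch of the InstCombine'd form that is now recognized, assuming the xor feeds only the icmp:

```
define i1 @overflow_check(i64 %a, i64 %b) {
  ; (a + b <u a) after InstCombine: (a ^ -1) <u b
  %not_a = xor i64 %a, -1
  %cmp = icmp ult i64 %not_a, %b
  ret i1 %cmp
}
```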
Reviewers: nikic, RKSimon, lebedev.ri, spatel
Reviewed By: nikic, lebedev.ri
Differential Revision: https://reviews.llvm.org/D74228
On some targets, like SPARC, forming overflow ops is only profitable if
the math result is used: https://godbolt.org/z/DxSmdB
This patch adds a new MathUsed parameter to allow the targets
to make the decision; it defaults to only allowing the transform
if the math result is used, which is the conservative choice.
This patch also updates AArch64ISelLowering, X86ISelLowering,
ARMISelLowering.h, SystemZISelLowering.h to allow forming overflow
ops even if the math result is not used. On those targets, using the
overflow intrinsic generates better code even when only the overflow
check is needed.
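A hedged illustration of the "math result not used" case: %sum feeds only the overflow compare, so under the conservative default described above the overflow op would not be formed, while the targets listed above still opt in:

```
define i1 @check_only(i32 %a, i32 %b) {
  ; The add result is only used by the comparison that checks for overflow.
  %sum = add i32 %a, %b
  %ov = icmp ult i32 %sum, %a
  ret i1 %ov
}
```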
Reviewers: nikic, RKSimon, lebedev.ri, spatel
Reviewed By: lebedev.ri
Differential Revision: https://reviews.llvm.org/D74722
Summary:
Depends on https://reviews.llvm.org/D71901.
The fifth in a series of patches that ports the LLVM coroutines passes
to the new pass manager infrastructure.
The first 4 patches allow users to run coroutine passes by invoking, for
example `opt -passes=coro-early`. However, most of LLVM's tests for
coroutines use an option, `opt -enable-coroutines`, which adds all 4
coroutine passes to the appropriate legacy pass manager extension points.
This patch does the same, but using the new pass manager: when
coroutine features are enabled and the new pass manager is being used,
this adds the new-pass-manager-compliant coroutine passes to the pass
builder's pipeline.
This allows us to run all coroutine tests using the new pass manager
(besides those that use the coroutine retcon ABI used by the Swift
compiler, which is not yet supported in the new pass manager).
Reviewers: GorNishanov, lewissbaker, chandlerc, junparser, wenlei
Subscribers: wenlei, EricWF, Prazek, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D71902
Summary:
Depends on https://reviews.llvm.org/D71900.
The fourth in a series of patches that ports the LLVM coroutines passes
to the new pass manager infrastructure. This patch implements
'coro-cleanup'.
No existing regression tests check the behavior of coro-cleanup on its
own, so this patch adds one. (A test named 'coro-cleanup.ll' exists, but
it relies on the entire coroutines pipeline being run. It's updated to
test the new pass manager in the 5th patch of this series.)
Reviewers: GorNishanov, lewissbaker, chandlerc, junparser, deadalnix, wenlei
Reviewed By: wenlei
Subscribers: wenlei, EricWF, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D71901
Fixes https://bugs.llvm.org/show_bug.cgi?id=44922 (caused by 4698bf145d583e26ed438026ef7fde031ef322b1)
ThreadThroughTwoBasicBlocks assumes PredBBBranch is conditional; if it is
unconditional, the following code can segfault because getSuccessor(1) is
out of bounds:
  AddPHINodeEntriesForMappedBlock(PredBBBranch->getSuccessor(1), PredBB, NewBB,
                                  ValueMapping);
We can also allow unconditional PredBB, but the produced code is not
better.
Reviewed By: kazu
Differential Revision: https://reviews.llvm.org/D74747
The index of an ExtractElementInst is not guaranteed to be a
ConstantInt. It can be any integer value. Check explicitly for
ConstantInts.
The new test cases illustrate scenarios where we crash without
this patch. I've also added another test case to check that the matching
of extractelement vector ops works.
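A minimal sketch (not one of the added tests) of the problematic shape, where the index is a plain SSA value rather than a ConstantInt:

```
define float @extract_variable_index(<4 x float> %v, i32 %idx) {
  ; The extractelement index is an argument, not a ConstantInt.
  %e = extractelement <4 x float> %v, i32 %idx
  ret float %e
}
```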
Reviewers: RKSimon, ABataev, dtemirbulatov, vporpo
Reviewed By: ABataev
Differential Revision: https://reviews.llvm.org/D74758
When simplifying demanded bits, we currently only report the
instruction on which SimplifyDemandedBits was called as changed.
However, this is a recursive call, and the actually modified
instruction will usually be further up the chain. Additionally,
all the intermediate instructions should also be revisited,
as additional combines may be possible after the demanded bits
simplification. We fix this by explicitly adding them back to the
worklist.
Differential Revision: https://reviews.llvm.org/D72944
The select-of-cttz transform can currently duplicate cttz intrinsics
and zext/trunc ops. The cause is that it unnecessarily duplicates the
intrinsic and the zext/trunc when it needs the "undef_on_zero" flag to be
false. However, it's always legal to change the flag from true to false,
so we can make this replacement even if there are extra users.
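A hedged sketch of the guarded-cttz pattern this applies to; flipping the flag on the existing intrinsic avoids the duplication:

```
declare i32 @llvm.cttz.i32(i32, i1)

define i32 @cttz_select(i32 %x) {
  ; The select handles the zero input, so undef_on_zero can be set to false
  ; on %ctz itself instead of duplicating the intrinsic.
  %ctz = call i32 @llvm.cttz.i32(i32 %x, i1 true)
  %iszero = icmp eq i32 %x, 0
  %sel = select i1 %iszero, i32 32, i32 %ctz
  ret i32 %sel
}
```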
Differential Revision: https://reviews.llvm.org/D74685
Fix for https://bugs.llvm.org/show_bug.cgi?id=44754. We already have
a fold that converts icmp (and (ashr X, C3), C2), C1 into
icmp (and X, C2'), C1', but it imposes overly strict requirements on the
transform.
Relax this by checking that both C2 and C1 don't shift out bits
(in a signed sense) when forming the new constants.
Alive proofs (https://rise4fun.com/Alive/PTz0):
Name: ashr_legal
Pre: ((C2 << C3) >> C3) == C2 && ((C1 << C3) >> C3) == C1
%a = ashr i16 %x, C3
%b = and i16 %a, C2
%c = icmp i16 %b, C1
=>
%d = and i16 %x, C2 << C3
%c = icmp i16 %d, C1 << C3
Name: ashr_shiftout_eq
Pre: ((C2 << C3) >> C3) == C2 && ((C1 << C3) >> C3) != C1
%a = ashr i16 %x, C3
%b = and i16 %a, C2
%c = icmp eq i16 %b, C1
=>
%c = false
Note that >> corresponds to ashr here. The case of an equality
comparison has some special handling in this transform, because
it folds to a constant true/false result if the condition on the
comparison constant is violated.
Differential Revision: https://reviews.llvm.org/D74294
This patch adds a simplification for the case where an OR weakens the
overflow condition for umul.with.overflow by also treating any non-zero
result as overflow. In that case, the combined check is true iff both
umul.with.overflow operands are non-zero, because with non-zero operands
the result can only be 0 if the multiplication overflows.
Code like this is generated by code using __builtin_mul_overflow with
negative integer constants, e.g.
bool test(unsigned long long v, unsigned long long *res) {
  return __builtin_mul_overflow(v, -4775807LL, res);
}
```
----------------------------------------
Name: D74141
%res = umul_overflow {i8, i1} %a, %b
%mul = extractvalue {i8, i1} %res, 0
%overflow = extractvalue {i8, i1} %res, 1
%cmp = icmp ne %mul, 0
%ret = or i1 %overflow, %cmp
ret i1 %ret
=>
%t0 = icmp ne i8 %a, 0
%t1 = icmp ne i8 %b, 0
%ret = and i1 %t0, %t1
ret i1 %ret
%res = umul_overflow {i8, i1} %a, %b
%mul = extractvalue {i8, i1} %res, 0
%cmp = icmp ne %mul, 0
%overflow = extractvalue {i8, i1} %res, 1
Done: 1
Optimization is correct!
```
Reviewers: nikic, lebedev.ri, spatel, Bigcheese, dexonsmith, aemerson
Reviewed By: lebedev.ri
Differential Revision: https://reviews.llvm.org/D74141
Summary:
Depends on https://reviews.llvm.org/D71899.
The third in a series of patches that ports the LLVM coroutines passes
to the new pass manager infrastructure. This patch implements 'coro-elide'.
The new pass manager infrastructure does not implicitly repeat CGSCC
pass pipelines when a function is devirtualized, and so the tests
for the new pass manager that rely on that behavior now explicitly
specify `repeat<2>`.
Reviewers: GorNishanov, lewissbaker, chandlerc, jdoerfert, junparser, deadalnix, wenlei
Reviewed By: wenlei
Subscribers: wenlei, EricWF, Prazek, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D71900
Summary:
This patch has four dependencies:
1. The first in this series of patches that implement coroutine passes in the
new pass manager: https://reviews.llvm.org/D71898.
2. A patch that introduces an API for CGSCC passes to add new reference
edges to a `LazyCallGraph`, `updateCGAndAnalysisManagerForCGSCCPass`:
https://reviews.llvm.org/D72025.
3. A patch that introduces a `CallGraphUpdater` helper class that is
capable of mutating internal `LazyCallGraph` state in order to insert
new function nodes into a specific SCC: https://reviews.llvm.org/D70927.
4. And finally, a small edge case fix for updating `LazyCallGraph` that
patch 3 above happens to run into: https://reviews.llvm.org/D72226.
This is the second in a series of patches that ports the LLVM coroutines
passes to the new pass manager infrastructure. This patch implements
'coro-split'.
Some notes:
* Using the new CGSCC pass manager resulted in IR being printed in the
reverse order in some tests. To prevent FileCheck checks from failing due
to these reversed orders, this patch splits up test files that test
multiple different coroutine functions: specifically
coro-alloc-with-param.ll, coro-split-eh.ll, and coro-eh-aware-edge-split.ll.
* CoroSplit.cpp contained 2 overloads of `splitCoroutine`, one of which
dispatched to the other based on the coroutine ABI being used (C++20
switch-based versus Swift returned-continuation-based). I found this
confusing, especially with the additional branching based on `CallGraph`
vs. `LazyCallGraph`, so I removed the ABI-checking overload of
`splitCoroutine`.
Reviewers: GorNishanov, lewissbaker, chandlerc, jdoerfert, junparser, deadalnix, wenlei
Reviewed By: wenlei
Subscribers: wenlei, qcolombet, EricWF, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D71899
Summary:
The first in a series of patches that ports the LLVM coroutines passes
to the new pass manager infrastructure. This patch implements
'coro-early'.
NB: All coroutines passes begin by checking that coroutine intrinsics are
declared within the LLVM IR module they're operating on. To do so, they call
`coro::declaresIntrinsics`. The next 3 patches in this series, which add new
pass manager implementations of the 'coro-split', 'coro-elide', and
'coro-cleanup' passes, use a similar pattern as the one used here: a static
function is shared across both old and new passes to detect if relevant
coroutine intrinsics are declared. To make this pattern easier to read, this
patch adds `const` keywords to the parameters of `coro::declaresIntrinsics`.
Reviewers: GorNishanov, lewissbaker, junparser, chandlerc, deadalnix, wenlei
Reviewed By: wenlei
Subscribers: ychen, wenlei, EricWF, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D71898
getOperationCost() is not the cost we wanted; that's not the
throughput value that the rest of the calculation uses.
We may want to switch everything in this code to use the
getInstructionThroughput() wrapper to avoid these kinds of
problems, but I'll look at that as a follow-up because that
can create other logical diffs by using optional parameters
(we'd need to speculatively create the vector instruction to
make a fair(er) comparison).
This includes a fix for cases where things get marked as overdefined in
ResolvedUndefsIn, but we later discover a constant. To avoid crashing,
we consistently bail out on overdefined values in the visitors. This is
similar to the previous behavior with forcedconstant.
This reverts the revert commit 02b72f564c8be0b4f4337d5c4a3fcf7e8018a818.
In addition to a single bit per memory location, e.g., globals and
arguments, we now collect more information about the actual accesses,
e.g., what instruction caused it, was it a read/write/read+write, and
what the underlying base pointer was. Follow up patches will make
explicit use of this.
Reviewed By: uenoku
Differential Revision: https://reviews.llvm.org/D73527
While the updateImpl for function returns only looked at call sites, the
manifest method looked at return values. If we don't do this during the
updateImpl we might create new abstract attributes during manifest. This
is a problem when it comes to liveness information.