... which requires not removing a DomTree edge if the switch's default
still points at that destination, because it can't be removed;
... and not processing the same predecessor more than once.
Summary: This is to avoid unnecessary analysis since amdgpu.noclobber is only used for globals.
Reviewers: arsenm
Fixes: SWDEV-239161
Differential Revision: https://reviews.llvm.org/D94107
A function is noreturn if all blocks terminating with a ReturnInst
contain a call to a noreturn function. Skip looking at naked functions
since there may be asm that returns.
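A minimal sketch of that rule (hypothetical helpers, not the exact FunctionAttrs code):

```cpp
#include "llvm/ADT/STLExtras.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;

// Sketch of the inference: F is noreturn unless some block ends in a
// ReturnInst without containing a call to a noreturn function. Naked
// functions are skipped because their asm may return.
static bool blockCallsNoReturn(const BasicBlock &BB) {
  return any_of(BB, [](const Instruction &I) {
    const auto *CB = dyn_cast<CallBase>(&I);
    return CB && CB->hasFnAttr(Attribute::NoReturn);
  });
}

static bool canMarkNoReturn(const Function &F) {
  if (F.hasFnAttribute(Attribute::Naked))
    return false;
  for (const BasicBlock &BB : F)
    if (isa<ReturnInst>(BB.getTerminator()) && !blockCallsNoReturn(BB))
      return false;
  return true;
}
```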
This can be further refined in the future by checking unreachable blocks
and taking into account recursion. It looks like the attributor pass
does this, but that is not yet enabled by default.
This seems to help with code size under the new PM: PruneEH does not
run there, so opportunities to mark some functions noreturn are missed,
which in turn prevents simplifycfg from cleaning up dead code.
https://bugs.llvm.org/show_bug.cgi?id=46858.
Reviewed By: rnk
Differential Revision: https://reviews.llvm.org/D93946
This patch teaches the inliner to compute the full cost for a call
site where the newly introduced cost benefit analysis is enabled.
Note that the cost benefit analysis requires the full cost to be
computed. However, without this patch or the -inline-cost-full
option, the early termination logic would kick in when the cost
exceeds the threshold, so we don't get to perform the cost benefit
analysis. For this reason, we would need to specify four clang
options:
-mllvm -inline-cost-full
-mllvm -inline-enable-cost-benefit-analysis
This patch eliminates the need to specify -inline-cost-full.
Differential Revision: https://reviews.llvm.org/D93658
This looks to have been done to save some duplicated code under
two different if statements, but it ends up being harmful to D94073.
The code that speculatively creates this constant can be called on a
scalable vector type with i64 element size when i64 scalars aren't
legal. The code tries and fails to find a vector type with i32
elements that it can use.
So only create the node when we know it will be used.
This should be no-functional-change because the reduction kind
opcodes are 1-for-1 mappings to the instructions we are matching
as reductions. But we want to remove the need for the
`OperationData` opcode field because that does not work when
we start matching intrinsics (e.g., maxnum) as reduction candidates.
From C11 and C++11 onwards, a forward-progress requirement has been
introduced for both languages. In the case of C, loops with non-constant
conditionals that do not have any observable side-effects (as defined by
6.8.5p6) can be assumed by the implementation to terminate, and in the
case of C++, this assumption extends to all functions. The clang
frontend will emit the `mustprogress` function attribute for C++
functions (D86233, D85393, D86841) and emit the loop metadata
`llvm.loop.mustprogress` for every loop in C11 or later that has a
non-constant conditional.
This patch modifies LoopDeletion so that only loops with
the `llvm.loop.mustprogress` metadata or loops contained in functions
that are required to make progress (`mustprogress` or `willreturn`) are
checked for observable side-effects. If these loops do not have an
observable side-effect, then we delete them.
Loops without observable side-effects that do not satisfy the above
conditions will not be deleted.
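For illustration, a hypothetical C++ function (my example, not from the patch) whose loop becomes deletable under this rule:

```cpp
// The loop has a non-constant condition and no observable side effects.
// Since C++ functions are `mustprogress`, the implementation may assume
// the loop terminates, and LoopDeletion can delete it even though it
// does not actually terminate for every input (e.g. n == 0).
unsigned spin(unsigned n) {
  while (n != 1)
    n = (n & 1) ? 3 * n + 1 : n / 2; // dead computation, no side effects
  return 0; // result does not depend on the loop
}
```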
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D86844
ComplexPatterns are kind of weird: they don't call any of the predicates on their operands, and their "complexity", used for tablegen ordering purposes in the matcher table, is hand-specified.
This started as an attempt to just use sext_inreg + SLOIPat to implement SLOIW, simply to have one less Select function. The matching for the or+shl is the same as long as you know the immediate is less than 32 for SLOIW. But that didn't work out, because using uimm5 with SLOIPat has no effect when SLOIPat is a ComplexPattern.
I realized I could just use a PatFrag with the opcodes I wanted to match and an immediate predicate would then evaluate correctly. This also computes the complexity just like any other pattern does. Then I just needed to check the constraints on the immediates in the predicate. Conveniently the predicate is evaluated after the fragment has been matched. So the structure has already been checked, we just need to find the constants.
I'll note that this is unusual; I didn't find any other targets looking through operands in a PatFrag predicate. There is a PredicateCodeUsesOperands feature that can be used to collect the operands into an array, which is used by AMDGPU/VOP3Instructions.td. I believe that feature exists to handle commuted matching, but since the nodes here use constants, they aren't ever commuted.
Differential Revision: https://reviews.llvm.org/D91901
vmsltu.vi v0, v1, 0 is always false since there is no unsigned number
less than 0. vmsleu.vi v0, v1, -1, on the other hand, is always true,
since -1 will be considered unsigned max and all numbers are <=
unsigned max.
A similar problem exists for vmsgeu.vi v0, v1, 0 which is always true,
but becomes vmsgtu.vi v0, v1, -1 which is always false.
To match the GNU assembler we'll emit vmsne.vv and vmseq.vv with
the same register for these cases instead.
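For example (my reading of the rule above), `vmsltu.vi v0, v1, 0` would be printed as `vmsne.vv v0, v1, v1`, which is likewise always false, and `vmsgeu.vi v0, v1, 0` as `vmseq.vv v0, v1, v1`, which is likewise always true.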
I'm using AsmParserOnly pseudo instructions here because we can't
match an explicit immediate in an InstAlias. And we can't use an
AsmOperand for the zero because the output we want doesn't use an
immediate, so there's nowhere to name the AsmOperand we want to use.
To keep the implementations similar I'm also handling signed with
pseudo instructions even though they don't have this issue. This
way we can avoid the special renderMethod that decremented by 1 so
the immediate we see for the pseudo instruction in processInstruction
is 0 and not -1. Another option might have been to have a different
simm5_plus1 operand for the unsigned case or just live with the
immediate being pre-decremented. I felt this way was clearer, but I'm
open to other opinions.
Reviewed By: frasercrmck
Differential Revision: https://reviews.llvm.org/D94035
This alias for andi x, 255 was recently added to the spec. If we
print it, code we output can't be compiled with -fno-integrated-as
unless the GNU assembler is also a version that supports the alias.
Reviewed By: lenary
Differential Revision: https://reviews.llvm.org/D93826
SLP tries to model 2 forms of vector reductions: pairwise and splitting.
From the cost model code comments, those are defined using an example as:
/// Pairwise:
/// (v0, v1, v2, v3)
/// ((v0+v1), (v2+v3), undef, undef)
/// Split:
/// (v0, v1, v2, v3)
/// ((v0+v2), (v1+v3), undef, undef)
I don't know the full history of this functionality, but it was partly
added back in D29402. There are apparently no users at this point (no
regression tests change). X86 might have managed to work around the need
for this through cost model and codegen improvements.
Removing this code makes it easier to continue the work that was started
in D87416 / D88193. The alternative -- if there is some target that is
silently using this option -- is to move this logic into LoopUtils. We
have related/duplicate functionality there via llvm::createTargetReduction().
Differential Revision: https://reviews.llvm.org/D93860
There are vmsle(u).vx and vmsle(u).vi instructions, but there is
only vmslt(u).vx and no vmslt(u).vi. vmslt(u).vi can be emulated
for some immediates by decrementing the immediate and using vmsle(u).vi.
To avoid the user needing to know about this, this patch does this
conversion.
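For example, `vmslt.vi v0, v1, 5` can be handled as `vmsle.vi v0, v1, 4`, since x < 5 and x <= 4 agree for all integers; "some immediates" presumably excludes values whose decrement no longer fits the immediate field.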
The assembler does the same thing for vmslt(u).vi and vmsge(u).vi
pseudoinstructions. There is no vmsge(u).vx intrinsic or
instruction so this patch is limited to vmslt(u).
Reviewed By: frasercrmck
Differential Revision: https://reviews.llvm.org/D94070
This patch fixes linker behavior when an archive is linked with other inputs
as a library (i.e. when the --only-needed option is specified). In this case the
library is expected to be linked normally first into a separate module, and only
after that should the linker import the required symbols from the linked library module.
Reviewed By: tra
Differential Revision: https://reviews.llvm.org/D92535
It was removed in GFX10 GPUs, but LLVM could
generate it.
Reviewed By: rampitec, arsenm
Differential Revision: https://reviews.llvm.org/D94020
In some cases, the RC may have 0 allocatable regs,
e.g. VRSAVERC in PowerPC, which has only 1 reg, but it is also reserved.
The current implementation will keep calling computePSetLimit because
getRegPressureSetLimit assumes computePSetLimit will return a non-zero value.
The fix simply returns the value from TableGen early for such special cases.
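A hedged sketch of the shape of the fix (simplified; names follow RegisterClassInfo but this is not the exact code):

```cpp
// Hypothetical sketch: trust a zero limit coming straight from TableGen
// (e.g. PowerPC's VRSAVERC, whose only register is reserved) and return
// early, since computePSetLimit is assumed to return a non-zero value.
unsigned RegisterClassInfo::getRegPressureSetLimit(unsigned Idx) const {
  unsigned RawLimit = TRI->getRegPressureSetLimit(*MF, Idx); // TableGen value
  if (RawLimit == 0)
    return 0; // zero allocatable registers: nothing to compute
  if (!PSetLimits[Idx])
    PSetLimits[Idx] = computePSetLimit(Idx); // cache the expensive computation
  return PSetLimits[Idx];
}
```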
Reviewed By: RKSimon
Differential Revision: https://reviews.llvm.org/D92907
This is an enhancement to LLVM Source-Based Code Coverage in clang to track how
many times individual branch-generating conditions are taken (evaluate to TRUE)
and not taken (evaluate to FALSE). Individual conditions may be combined
into larger boolean expressions using boolean logical operators. This functionality is
very similar to what is supported by GCOV except that it is very closely
anchored to the ASTs.
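A hypothetical example of what gets counted (my illustration, not from the patch):

```cpp
// With this instrumentation, the two leaf conditions below are tracked
// separately: how often `a > 0` evaluates to true/false, and likewise
// for `b > 0`, rather than only the outcome of the whole `if`.
bool bothPositive(int a, int b) {
  if (a > 0 && b > 0) // two branch-generating conditions
    return true;
  return false;
}
```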
Differential Revision: https://reviews.llvm.org/D84467
Allow loop nests whose different levels contain empty basic blocks (with
no loops in them) to still be considered perfect.
Reviewers: Meinersbur
Differential Revision: https://reviews.llvm.org/D93665
Creating in-loop reductions relies on IR references to map
IR values to VPValues after interleave group creation.
Make sure we re-add the updated member to the plan, so the look-ups
still work as expected.
This fixes a crash reported after D90562.
AVX512 has fast truncation ops, but if the truncation source is a concatenation of subvectors then it's likely that we can use PACK more efficiently.
This is only guaranteed to work for truncations to 128/256-bit vectors as PACK works across 128-bit sub-lanes. For now I've just disabled the 512-bit truncation cases, but we need to get them working eventually for D61129.
Use dyn_cast<StoreInst> instead of the isa_and_nonnull<StoreInst> check, and use the StoreInst::getPointerOperand wrapper instead of a hardcoded Instruction::getOperand.
Looks cleaner and avoids a spurious clang static analyzer null dereference warning.
Convert it to v_fma_legacy_f32 if it is profitable to do so, just like
other mac instructions that are converted to their mad equivalents.
Differential Revision: https://reviews.llvm.org/D94010
Support EH_SJLJ_LONGJMP, EH_SJLJ_SETJMP, and EH_SJLJ_SETUP_DISPATCH
for SjLj exception handling. NC++ uses SjLj exception handling, so
implement it first. Also add regression tests.
Reviewed By: simoll
Differential Revision: https://reviews.llvm.org/D94071
CTLZ and CTPOP are lowered to CLZ and CNT instructions respectively.
CTTZ is not a native SVE operation but is instead lowered to:
CTTZ(V) => CTLZ(BITREVERSE(V))
In the case of fixed-length support using SVE we also lower CTTZ
operating on NEON-sized vectors because of its reliance on
BITREVERSE, which is also lowered to SVE instructions at these lengths.
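A minimal sketch of that expansion in SelectionDAG terms (hypothetical helper, not the exact AArch64 lowering code):

```cpp
#include "llvm/CodeGen/SelectionDAG.h"
using namespace llvm;

// Sketch: cttz(v) == ctlz(bitreverse(v)); reversing the bits turns
// trailing zeros into leading zeros.
static SDValue lowerCTTZ(SDValue Op, SelectionDAG &DAG) {
  SDLoc DL(Op);
  EVT VT = Op.getValueType();
  SDValue Rev = DAG.getNode(ISD::BITREVERSE, DL, VT, Op.getOperand(0));
  return DAG.getNode(ISD::CTLZ, DL, VT, Rev);
}
```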
Differential Revision: https://reviews.llvm.org/D93607
We're immediately dereferencing the result of the cast, so use cast<>, which will assert, instead of dyn_cast<>, which can return null.
Fixes static analyzer warning.
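For reference, the pattern being applied (a generic illustration, not the specific file touched here):

```cpp
#include "llvm/IR/Instructions.h"
using namespace llvm;

// dyn_cast<> returns null on a type mismatch, so unconditionally
// dereferencing its result trips the analyzer; cast<> documents the
// invariant and asserts instead of silently yielding null.
static Value *pointerOperandOf(Instruction *I) {
  auto *SI = cast<StoreInst>(I); // asserts if I is not a StoreInst
  return SI->getPointerOperand();
}
```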
Loop strength reduction tries to recover debug variable values by looking
for simple offsets from PHI values. In really extreme conditions there may
be an offset used that won't fit in an int64_t, hitting an APInt assertion.
This patch adds a regression test and adjusts the equivalent value
collecting code to filter out any values where the offset can't be
represented by an int64_t. This means that for very large integers with
very large offsets, the variable location will become undef, which is the
same behaviour as before 2a6782bb9f1 / D87494.
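A hedged sketch of the filter (hypothetical helper; the actual change lives in LSR's debug-value recovery code):

```cpp
#include "llvm/ADT/APInt.h"
using namespace llvm;

// Skip candidates whose offset is not representable as an int64_t; the
// variable location then becomes undef instead of asserting inside APInt.
static bool offsetFitsInInt64(const APInt &Offset) {
  return Offset.isSignedIntN(64);
}
```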
Differential Revision: https://reviews.llvm.org/D94016
For wasm-ld table linking work to proceed, object files should indicate
if they use an indirect function table. In the future this will be done
by the usual symbols and relocations mechanism, but until that support
lands in the linker, the presence of an `__indirect_function_table` in
the object file's import section shows that the object file needs an
indirect function table.
Prior to https://reviews.llvm.org/D91637, this condition was met by all
object files residualizing an `__indirect_function_table` import.
Since https://reviews.llvm.org/D91637, the intention has been that only
those object files needing an indirect function table would have the
`__indirect_function_table` import. However, we missed the case of
object files which use the table via `call_indirect` but which
themselves do not declare any indirect functions.
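A hypothetical translation unit of the kind that was missed (my example):

```cpp
// This file declares no indirect functions itself, yet the call through
// a function pointer lowers to `call_indirect`, so the resulting object
// file still needs the `__indirect_function_table` import.
int dispatch(int (*fn)(int), int x) {
  return fn(x); // lowers to call_indirect on wasm
}
```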
This changeset ensures that when we lower a call to `call_indirect`,
a `__indirect_function_table` symbol is present and will be propagated
to the linker.
A followup patch will revise this mechanism to make an explicit link
between `call_indirect` and its associated indirect function table; see
https://reviews.llvm.org/D90948.
Differential Revision: https://reviews.llvm.org/D92840
We're immediately dereferencing the result of the cast, so use cast<>, which will assert, instead of dyn_cast<>, which can return null.
Fixes static analyzer warning.
We're immediately dereferencing the result of the cast, so use cast<>, which will assert, instead of dyn_cast<>, which can return null.
Fixes static analyzer warning.