The XCore default subtarget does not support 8-byte stack alignment. These failures
can be seen on the clang-xcore-ubuntu-20-x64 builder on the staging buildbot.
Differential Revision: https://reviews.llvm.org/D99092
This commit adds a full WasmTableType to MCSymbolWasm, differing from
the current situation (just an ElemType) in that it additionally records
a WasmLimits.
We also add support for specifying the limits in .S files, via the
following syntax variations:
.tabletype SYM, ELEMTYPE
.tabletype SYM, ELEMTYPE, MINSIZE
.tabletype SYM, ELEMTYPE, MINSIZE, MAXSIZE
Depends on D99186.
Differential Revision: https://reviews.llvm.org/D99191
This patch renames the "Initial" member of WasmLimits to the name used
in the spec, "Minimum".
In the core WebAssembly specification, the Limits data type has one
required "min" member and one optional "max" member, indicating the
minimum required size of the corresponding table or memory, and the
maximum size, if any.
Although the WebAssembly spec does instantiate locally-defined tables
and memories with the initial size being equal to the minimum size, it
can't impose such a requirement for imports. It doesn't make sense to
require an initial size for a memory import, for example. The compiler
can only sensibly express the minimum and maximum sizes.
See
https://github.com/WebAssembly/js-types/blob/master/proposals/js-types/Overview.md#naming-of-size-limits
for a related discussion that agrees that the right name of "initial" is
"minimum" when querying the type of a table or memory from JavaScript.
(Of course it still makes sense for JS to speak in terms of an initial
size when it explicitly instantiates memories and tables.)
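To make the distinction concrete, the spec's Limits shape is roughly the following
(an illustrative C++ sketch only, not the actual LLVM struct; everything other than
the Minimum/Maximum naming is an assumption):

  #include <cstdint>
  #include <optional>

  // A required minimum and an optional maximum. "Minimum" accurately describes
  // both defined and imported tables/memories, whereas "Initial" only makes
  // sense for definitions that are actually instantiated.
  struct LimitsSketch {
    uint64_t Minimum;                 // required lower bound on the size
    std::optional<uint64_t> Maximum;  // optional upper bound; absent means no max
  };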
Differential Revision: https://reviews.llvm.org/D99186
The copysign-from-double and copysign-to-double patterns lack the HasStdExtD predicate.
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D99234
The original comment says the same thing twice, and does not mention that
edges entering the block are also in the same bundle (which seems true from
what the underlying code is doing).
Differential Revision: https://reviews.llvm.org/D99144
Reviewed By: RKSimon
The statepoint instruction is known to have a variable and potentially large number of operands.
It is possible that the register allocator will split live intervals in such a way that all
physical registers are occupied by "zero-length" live intervals which are marked
as not-spillable.
While intervals are marked as not-spillable at the moment of creation, when they really
are zero-length, it is possible that later, as part of re-materialization, a physical
register will be needed between the def and the use of such a tiny interval (for a use
that is not related to this interval at all).
As all physical registers are assigned to not-spillable intervals, there are no available
registers and RA reports an error.
The idea of the fix is to avoid marking as not-spillable those tiny live intervals that have
a use in the variable-argument section of a statepoint instruction. Such an interval can
perfectly well be spilled and folded into an operand of the statepoint.
Reviewers: reames, dantrushin, qcolombet, dsanders, dmgreen
Reviewed By: reames
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D98766
None of the code in this function was written to handle
vectors. Most of the cases already fail for vectors for one
reason or another. The exception is an optimization that
detects identical operands. This can be triggered by vectors,
but the code always creates a 0 or 1 constant in a scalar
register, which is incorrect for vectors.
Fixes PR49706.
The current implementation keeps buffers generated for each object file
until it completes loading of all files. This approach requires a lot of memory
if there are a lot of huge object files. Thus, make it load coverage records
immediately rather than waiting for other binaries to be loaded.
This reduces the memory usage of llvm-cov from >128GB to 5GB when
loading Chromium binaries on Windows.
Additional testing: check-profile, check-llvm
Differential Revision: https://reviews.llvm.org/D99110
This patch addresses the removal of register size information done in
commit c8b782c.
Without this change, there is no viable option to get register size
information outside libTarget. We need this information to run
analyses that need to know register sizes at the MC layer, such as
those used by BOLT.
See the discussion in D50285 and D47199.
Reviewed By: kparzysz
Differential Revision: https://reviews.llvm.org/D97891
As mentioned in [[ https://reviews.llvm.org/D96979 | D96979 ]], I'm extending the **IsGuaranteedLoopInvariant** check to the `MemorySSA.cpp` file as well.
@fhahn For now I didn't unify the function into `MemorySSA.h` because, as you mentioned, it's not directly MSSA related. I'm open to suggestions to find a better place so we can improve the unification process.
Reviewed By: fhahn
Differential Revision: https://reviews.llvm.org/D97155
Added a getPointersDiff function to LoopAccessAnalysis and used it instead of
direct calculation of the distance between pointers and/or the
isConsecutiveAccess function in the SLP vectorizer, to improve compile time
and the detection of consecutive store chains.
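Conceptually, the new helper boils down to something like this (a simplified
standalone sketch; the real LoopAccessAnalysis function works on SCEV and
DataLayout, and its exact signature is not reproduced here):

  #include <cstdint>
  #include <optional>

  // Sketch: given two accesses at known constant byte offsets from a common
  // base, return their distance in elements; stores form a consecutive chain
  // when adjacent entries differ by exactly one element.
  std::optional<int64_t> pointersDiffInElements(int64_t OffsetA, int64_t OffsetB,
                                                int64_t ElementSize) {
    int64_t ByteDiff = OffsetB - OffsetA;
    if (ByteDiff % ElementSize != 0)
      return std::nullopt;  // not a whole number of elements apart
    return ByteDiff / ElementSize;
  }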
Part of D57059
Differential Revision: https://reviews.llvm.org/D98967
This fixes a regression reported on D99022: If a call has operand
bundles, then the inaccessiblememonly attribute on the function
will be ignored, as operand bundles can affect modref behavior in
the general case. However, for assume operand bundles in particular
this is not the case.
Adjust getModRefBehavior() to always report inaccessiblememonly
for assumes, regardless of the presence of operand bundles.
This adds some missing legalizer tests, which uncovered a v2s64 selection
test that wasn't working since there's no legalization or instruction for that.
This select between ctpop and 0 can get left behind after
loop idiom recognition converts a loop to ctpop. LLVM 10 was able
to optimize this, but LLVM 11 and later are not. The difference
seems to be that some select transforms are now limited based
on canCreateUndefOrPoison.
Teaching canCreateUndefOrPoison about ctpop restores the
LLVM 10 codegen.
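A minimal C++ reproducer of the shape in question (illustrative only; any
popcount-style counting loop guarded by the loop condition produces it):

  // The loop is recognized as a population count; the guarding "x != 0" check
  // becomes a select between 0 and ctpop(x), which should fold to ctpop(x)
  // because ctpop(0) == 0.
  unsigned popcount_loop(unsigned x) {
    unsigned count = 0;
    while (x) {      // guard that turns into the select condition
      x &= x - 1;    // clear the lowest set bit (the ctpop idiom)
      ++count;
    }
    return count;
  }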
Differential Revision: https://reviews.llvm.org/D99207
In https://reviews.llvm.org/D72948 this was enabled for all MSVC versions, but it was reverted as it was determined not to work on some 2017 versions.
The issue is assumed to be fixed in 2019, so enable it for 2019 and newer.
Some testing could be done to determine which versions of MSVC 2017 support this feature, but it's safer right now to leave it at 2019.
Reviewed By: aaron.ballman
Differential Revision: https://reviews.llvm.org/D98809
NFC. Extract IsShrinkable into a helper function, and
make Subtarget a member variable.
Reviewed By: rampitec
Differential Revision: https://reviews.llvm.org/D99099
Change-Id: If4bc97a88a9ae4eb1df47e717345d46a6ed515bf
Previously we used selectImm for RV64 and isel patterns for
RV32. This should be NFC, but will allow RV32 and RV64 to share
improvements in the future. For example, it might be useful to
use BSETI from Zbs to make single bit constants.
Reviewed By: luismarques
Differential Revision: https://reviews.llvm.org/D98877
Copy SchedRW from pseudos to real instructions so that llvm-mca has
access to it. This is NFC for normal compiler codegen, which schedules
pseudos, not real instructions.
Add an llvm-mca test for some high latency double-precision instructions
as a smoke test.
Differential Revision: https://reviews.llvm.org/D99187
This doesn't change anything currently, but as discussed in
D98981 and D98152, some tests may fail to vectorize because
the cost model becomes more accurate as we switch over to
using min/max intrinsics.
`FoldBranchToCommonDest()` has a certain budget (`-bonus-inst-threshold=`)
for bonus instruction duplication, and currently it calculates the cost
as if it will actually duplicate into each predecessor.
But even ignoring the budget, it won't always duplicate into each predecessor;
there are some correctness and profitability checks.
So when calculating the cost, we should first check into which blocks
we will *actually* duplicate, and only then use that block count
for budgeting.
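Roughly, the intended budgeting looks like this (an illustrative sketch only; the
names and the exact threshold formula are assumptions, not SimplifyCFG's real code):

  #include <vector>

  // Count only the predecessors that pass the correctness/profitability checks
  // and will really receive cloned bonus instructions, then budget against that.
  bool fitsDuplicationBudget(const std::vector<bool> &PredWillBeClonedInto,
                             unsigned NumBonusInsts,
                             unsigned BonusInstThreshold) {
    unsigned NumCopies = 0;
    for (bool WillClone : PredWillBeClonedInto)
      if (WillClone)
        ++NumCopies;  // only these blocks actually receive duplicates
    // Budget the total number of cloned instructions against the threshold;
    // the comparison here is an assumption made for illustration.
    return NumBonusInsts * NumCopies <= BonusInstThreshold;
  }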
Before this patch, register writes were always invalidated by the
RegisterFile at the instruction commit stage. As a result,
the RegisterFile often lost knowledge of the `execute
cycle` of writes that had already been committed. While this was not
problematic for non-delayed reads, it sometimes led to inaccurate read
latency computations in the presence of negative read-advance cycles.
This patch fixes the issue by changing how the RegisterFile component
internally keeps track of the `execute cycle` information of each
write. On every instruction executed, the RegisterFile gets notified
by the RetireStage, so that it can internally record the execute
cycle of each executed write.
The `execute cycle` information is stored within WriteRef itself, and
it is not invalidated when the write is committed.
We clone bonus instructions to the end of the predecessor block,
and then use `SSAUpdater::RewriteUseAfterInsertions()`.
But that only deals with the cases where the uses to be rewritten
are either in a different block from the def, or come after the def.
But in some loop cases, the external use may be at the beginning of the
predecessor block, before the newly cloned bonus instruction.
`SSAUpdater::RewriteUseAfterInsertions()` does not deal with that.
Notably, the external use can't happen to be both in the same block
and *after* the newly-cloned instruction, because of the fold preconditions.
To properly handle these cases, when the use is in the same block,
we should instead use `SSAUpdater::RewriteUse()`.
Note that they do the same thing for PHI users.
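A sketch of the resulting selection logic (illustrative only; the helper and its
context are assumptions, not the actual SimplifyCFG change):

  #include "llvm/IR/Instruction.h"
  #include "llvm/Transforms/Utils/SSAUpdater.h"
  using namespace llvm;

  // Uses in the same block as the newly cloned def may precede it, so they must
  // go through RewriteUse(); RewriteUseAfterInsertions() assumes the use comes
  // after the inserted defs (or lives in another block).
  static void rewriteExternalUse(SSAUpdater &Updater,
                                 Instruction *NewBonusInst, Use &U) {
    auto *UserInst = cast<Instruction>(U.getUser());
    if (UserInst->getParent() == NewBonusInst->getParent())
      Updater.RewriteUse(U);
    else
      Updater.RewriteUseAfterInsertions(U);
  }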
Fixes https://bugs.llvm.org/show_bug.cgi?id=49510
Likely Fixes https://bugs.llvm.org/show_bug.cgi?id=49689
This patch builds upon the initial BUILD_VECTOR work introduced in
D98700. It further optimizes the lowering of BUILD_VECTOR by using
VSELECT operations to effectively insert repeated elements into the
vector with relatively few instructions. This allows us to optimize more
BUILD_VECTORs without significantly increasing the size of the generated
code.
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D98969
2nd try (original: 27ae17a6b014) with fix/test for crash. We must make
sure that TTI is available before trying to use it because it is not
required (might be another bug).
Original commit message:
This is one step towards solving:
https://llvm.org/PR49336
In that example, we disregard the recommended usage of builtin_expect,
so an expensive (unpredictable) branch is folded into another branch
that is guarding it.
Here, we read the profile metadata to see if the 1st (predecessor)
condition is likely to cause execution to bypass the 2nd (successor)
condition before merging conditions by using logic ops.
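For illustration, a reduced PR49336-style source pattern (a hypothetical example,
not the exact code from the report):

  // The outer check is annotated as almost always false, so execution rarely
  // reaches the unpredictable comparison. Folding both conditions into a single
  // combined branch makes that branch unpredictable and ignores the hint.
  bool shouldProcess(int kind, int value, int limit) {
    if (__builtin_expect(kind == 0, 0)) // predictable guard, rarely taken
      if (value > limit)                // expensive, unpredictable condition
        return true;
    return false;
  }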
Differential Revision: https://reviews.llvm.org/D98898
Summary:
The IR is saved in its print form before each pass is started and a
signal handler is registered. If the compilation crashes, the signal
handler will print the saved IR to dbgs(). This option
can be modified using -print-module-scope to get the IR for the complete
module. Note that this option only works with the new pass manager.
Author: Jamie Schmeiser <schmeise@ca.ibm.com>
Reviewed By: aeubanks (Arthur Eubanks), yrouban (Yevgeny Rouban)
Differential Revision: https://reviews.llvm.org/D86657
This avoids temporaries and memcpy calls when computing large expressions.
It's basically some kind of poor man's expression template, but it seems easier
to maintain to have a single generic `apply` call instead of the whole
expression template machinery here.
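As a rough illustration of the "single generic apply" idea (a standalone sketch
with made-up types, not this patch's actual code):

  #include <cstddef>
  #include <vector>

  struct BigValue {
    std::vector<double> Words; // stand-in for a large, heap-backed value
  };

  // One generic in-place apply: the elementwise result is written directly into
  // Result, so no intermediate BigValue is materialized and copied around.
  template <typename Op>
  void apply(BigValue &Result, const BigValue &A, const BigValue &B, Op Fn) {
    Result.Words.resize(A.Words.size());
    for (std::size_t I = 0; I < A.Words.size(); ++I)
      Result.Words[I] = Fn(A.Words[I], B.Words[I]);
  }

  // Usage: apply(Out, X, Y, [](double L, double R) { return L + R; });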
Differential Revision: https://reviews.llvm.org/D98176
Exclude AArch64 mapping symbols ($x and $d) from symtab symbolization, as
was done for ARM since D95916, to bring bots back to a green state.
This is implemented by setting SF_FormatSpecific so that
llvm-symbolizer will ignore them, and by using this flag to re-implement
the llvm-nm --special-syms option, which makes it work for both targets.
Differential Revision: https://reviews.llvm.org/D98803
Follow-up from D92955 and D83636. This patch makes the base cpp files
OMP.cpp and ACC.cpp normal files; they now include the XXX.inc file
generated by tablegen. This reduces the number of files generated by the
DirectiveEmitter backend and makes it closer to the proposal in D83636.
Reviewed By: Meinersbur
Differential Revision: https://reviews.llvm.org/D93560
Apply the way createLocalIndirectStubsManagerBuilder() deals with unsupported architectures to createLocalLazyCallThroughManager(). The returned call-through manager is dysfunctional: it runs into an unreachable as soon as a lazy JIT attempts to use it. However, this results in broader platform support for lli in default (greedy) ORC mode, where no lazy materialization is required.