Add the given input and mark it as tied. This doesn't create an
additional copy, compared to matching the input constraint to a
virtual register.
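For context, this is the kind of operand a source-level "+r" constraint
produces; a minimal, hypothetical C++ illustration (the x86 asm string
is arbitrary):

  int Val = 1;
  // "+r" makes Val both an input and an output in the same register,
  // i.e. the input operand is tied to the output operand.
  asm("addl $1, %0" : "+r"(Val));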
Differential Revision: https://reviews.llvm.org/D85122
(cherry picked from commit d893278bba01b0e1209e8b8accbdd5cfa75a0932)
These might occur in seemingly generic assembly. Previously, when
targeting COFF they were silently ignored, which certainly won't
give the right result. Instead, clearly error out, to make it obvious
that the assembly needs to be adjusted for this target.
Also change a preexisting report_fatal_error into a proper error
message, pointing out the offending source instruction. This isn't
strictly an internal error, as it can be triggered by user input.
Differential Revision: https://reviews.llvm.org/D85242
(cherry picked from commit f5e6fbac24f198d075a7c4bc0879426e79040bcf)
In fixupIsDeadOrKill, we assumed that StartMI and EndMI exist in the
same basic block, and we added an assertion in that function. This is
wrong before RA, as before RA the true definition may exist in another
block, reached through copy-like instructions.
Reviewed By: nemanjai
Differential Revision: https://reviews.llvm.org/D83365
(cherry picked from commit 36f9fe2d3493717dbc6866d96b2e989839ce1a4c)
In practice, this can easily be a product of combining strings with
macros in resource files.
This fixes https://github.com/mstorsjo/llvm-mingw/issues/140.
As string literals within llvm-rc are handled as StringRefs, each
referencing an uninterpreted slice of the input file, with actual
interpretation of the input string (codepage handling, unescaping etc.)
done only right before writing them out to disk, it's hard to
concatenate them other than by just bundling them up in a vector,
short of rearchitecting a large part of llvm-rc.
This matches how the same already is supported in VersionInfoValue,
with a std::vector<IntOrString> Values.
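A minimal standalone sketch of the approach, with hypothetical types
standing in for the real StringRef/IntOrString machinery:

  #include <string>
  #include <string_view>
  #include <vector>

  // Each literal stays an uninterpreted slice of the input file.
  struct StringTableEntry {
    std::vector<std::string_view> Pieces;
  };

  // Interpretation (unescaping, codepage handling) happens only right
  // before writing out, so concatenation is just iteration.
  std::string serialize(const StringTableEntry &E) {
    std::string Out;
    for (std::string_view Piece : E.Pieces)
      Out += Piece; // the real code would unescape/convert each piece
    return Out;
  }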
MS rc.exe only supports concatenated string literals in version info
values (already supported), string tables (implemented in this patch)
and user data resources (easily implemented in a separate patch, but
not requested by any end user yet), while GNU windres supports
string immediates split into multiple strings anywhere (e.g.
100 ICON "myicon" ".ico"). It's unclear whether concatenation in other
statements is actually used in the wild, though, in resource files
normally built by GNU windres.
Differential Revision: https://reviews.llvm.org/D85183
(cherry picked from commit b989fcbae6f179ad887d19ceef83ace1c00b87cc)
This fixes the ExecutionEngine/MCJIT/stubs-sm-pic.ll test in no-asserts
builds; the test is set to XFAIL on some platforms like 32-bit x86. More
importantly, we probably don't want to silently error in these cases.
Differential revision: https://reviews.llvm.org/D84390
(cherry picked from commit 6a3b07a4bf14be32569550f2e9814d8797d27d31)
This is in response to the review of https://reviews.llvm.org/D84873:
the expensive check should be reordered last.
Reviewers: arsenm
Differential Revision: https://reviews.llvm.org/D84890
(cherry picked from commit 243376cdc7b719d443f42c8c4667e5d96af53dcc)
BUG_REPORT_URL is currently used both in LLVM and in Clang but declared
only in the latter. This means that it's missing in standalone clang
builds and the driver ends up outputting:
PLEASE submit a bug report to and include [...]
(note the missing URL)
To fix this, include LLVM_PACKAGE_BUGREPORT in LLVMConfig.cmake
(similarly to how we pass PACKAGE_VERSION) and use it to fill
BUG_REPORT_URL when building clang standalone.
Differential Revision: https://reviews.llvm.org/D84987
(cherry picked from commit 21c165de2a1bcca9dceb452f637d9e8959fba113)
Currently we skip alias sets with only reads or a single write and no
reads, but still add the pointers to the list of pointers in RtCheck.
This can lead to cases where we try to access a pointer that does not
exist when grouping checks. In most cases, the way we access
PositionMap masked that, as the value would default to index 0.
But in the example in PR46854 it causes a crash.
This patch updates the logic to avoid adding pointers for alias sets
that do not need any checks. It makes things slightly more verbose, by
first checking the number of reads and writes and bailing out early if
we don't need checks for the alias set.
I think this makes the logic a bit simpler to follow.
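A rough standalone model of the reordered logic (illustrative names,
not the actual LoopAccessAnalysis code):

  #include <vector>

  struct Access { bool IsWrite; };

  // Bail out before registering any pointers: alias sets with only
  // reads, or a single write and no reads, never need runtime checks.
  bool needsRuntimeChecks(const std::vector<Access> &AliasSet) {
    unsigned NumWrites = 0, NumReads = 0;
    for (const Access &A : AliasSet)
      (A.IsWrite ? NumWrites : NumReads) += 1;
    return !(NumWrites == 0 || (NumWrites == 1 && NumReads == 0));
  }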
Reviewed By: anemet
Differential Revision: https://reviews.llvm.org/D84608
(cherry picked from commit 2062b3707c1ef698deaa9abc571b937fdd077168)
In cases where the alignment of the data type is smaller than expected
by the instruction, the address is aligned. The aligned address was
used for the load, but not for the store conditional, which resulted
in a run-time alignment exception.
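The invariant, as a small hypothetical sketch: both halves of the
atomic sequence must use the same rounded-down address:

  #include <cstdint>

  // Round an address down to the alignment the instruction expects
  // (Align must be a power of two).
  uintptr_t alignDown(uintptr_t Addr, uintptr_t Align) {
    return Addr & ~(Align - 1);
  }

The fix makes the store conditional use the rounded-down address that
the paired load already used.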
(cherry picked from commit 7b114446c320de542c50c4c02f566e5d18adee33)
It turned out that D78704 included a private LLVM header, which is
excluded from the LLVM install target. I'm substituting that `#include`
with the public one by moving the necessary `#define` into the public
header. There was a discussion about this at D78704 and on the
cfe-dev mailing list.
I'm also placing a note to remind others of this pitfall.
Reviewed By: mgorny
Differential Revision: https://reviews.llvm.org/D84929
(cherry picked from commit 63d3aeb529a7b0fb95c2092ca38ad21c1f5cfd74)
...with the non-template version, as the template version might
increase the size of the compiler build.
Methods affected:
1. `findAddrModeSVELoadStore`
2. `SelectPredicatedStore`
Also, remove the `const` qualifier from the `unsigned` parameters of
the methods to conform with other similar methods in the class.
(cherry picked from commit dbeb184b7f54db2d3ef20ac153b1c77f81cf0b99)
When building code at -O0, we weren't falling back to DAG ISel correctly
when encountering alloca instructions with scalable vector types. This
is because the alloca has no operands that are scalable. I've fixed this by
adding a check in AArch64ISelLowering::fallBackToDAGISel for alloca
instructions with scalable types.
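A hedged sketch of the added check (approximating, not quoting, the
real fallBackToDAGISel code):

  #include "llvm/IR/DerivedTypes.h"
  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  // Allocas of scalable vector types have no scalable operands, so
  // they slip past operand-based checks; test the allocated type.
  static bool allocaNeedsDAGFallback(const Instruction &Inst) {
    if (const auto *AI = dyn_cast<AllocaInst>(&Inst))
      return isa<ScalableVectorType>(AI->getAllocatedType());
    return false;
  }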
Differential Revision: https://reviews.llvm.org/D84746
(cherry picked from commit 23ad660b5d34930b2b5362f1bba63daee78f6aa4)
In vectorizeChainsInBlock we try to collect chains of PHI nodes
that have the same element type, but the code is relying upon
the implicit conversion from TypeSize -> uint64_t. For now, I have
modified the code to ignore PHI nodes with scalable types.
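The guard amounts to something like this (hypothetical helper; the
real change is inside vectorizeChainsInBlock):

  #include "llvm/IR/DerivedTypes.h"
  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  // TypeSize -> uint64_t implicitly assumes a fixed size, so don't
  // collect PHIs of scalable vector type into the chains.
  static bool shouldSkipPHI(const PHINode &PN) {
    return isa<ScalableVectorType>(PN.getType());
  }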
Differential Revision: https://reviews.llvm.org/D83542
(cherry picked from commit 9ad7c980bb47edd7db8f8db828b487cc7dfc9921)
I have added tests to:
CodeGen/AArch64/sve-intrinsics-int-arith.ll
for doing simple integer add operations on tuple types. Since these
tests introduced new warnings due to incorrect use of
getVectorNumElements(), I have also fixed up these warnings in the
same patch. These fixes are:
1. In narrowExtractedVectorBinOp I have changed the code to bail out
early for scalable vector types, since we've not yet hit a case that
proves the optimisations are profitable for scalable vectors.
2. In DAGTypeLegalizer::WidenVecRes_CONCAT_VECTORS I have replaced
calls to getVectorNumElements with getVectorMinNumElements in cases
that work with scalable vectors. For the other cases I have added
asserts that the vector is not scalable because we should not be
using shuffle vectors and build vectors in such cases.
Differential revision: https://reviews.llvm.org/D84016
(cherry picked from commit 207877175944656bd9b52d36f391a092854572be)
Previous patches fixed up all the warnings in this test:
llvm/test/CodeGen/AArch64/sve-sext-zext.ll
and this change simply checks that no new warnings are added in the
future.
Differential revision: https://reviews.llvm.org/D83205
(cherry picked from commit f43b5c7a76ab83dcc80e6769d41d5c4b761312b1)
In DAGTypeLegalizer::SplitVecOp_EXTRACT_SUBVECTOR I have replaced
calls to getVectorNumElements with getVectorMinNumElements, since
this code path works for both fixed and scalable vector types. For
scalable vectors the index will be multiplied by VSCALE.
Fixes warnings in this test:
sve-sext-zext.ll
Differential revision: https://reviews.llvm.org/D83198
(cherry picked from commit 5d84eafc6b86a42e261af8d753c3a823e0e7c67e)
I have introduced a new TargetFrameLowering query function:
isStackIdSafeForLocalArea
that queries whether or not it is safe for objects of a given stack
id to be bundled into the local area. The default behaviour is to
always bundle regardless of the stack id; however, for AArch64 this is
overridden so that it's only safe for fixed-size stack objects.
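A standalone model of the hook and its AArch64 override (the names
mirror the patch, but the stack id handling shown here is illustrative):

  enum StackID { Default = 0, SVEVector = 1 };

  struct TargetFrameLoweringModel {
    // Default: any object may be bundled into the local area.
    virtual bool isStackIdSafeForLocalArea(unsigned StackId) const {
      return true;
    }
    virtual ~TargetFrameLoweringModel() = default;
  };

  struct AArch64Model : TargetFrameLoweringModel {
    // Only fixed-size objects are safe; scalable (SVE) objects must
    // stay out of the local area.
    bool isStackIdSafeForLocalArea(unsigned StackId) const override {
      return StackId != SVEVector;
    }
  };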
There is future work here to extend this algorithm for multiple local
areas so that SVE stack objects can be bundled together and accessed
from their own virtual base-pointer.
Differential Revision: https://reviews.llvm.org/D83859
(cherry picked from commit 14bc85e0ebb6c00c1672158ab6a692bfbb11e1cc)
While deallocating the stackframe, the offset used to reload the
callee-saved registers was not pointing to the SVE callee-saves,
but rather to the whole SVE area.
+--------------+
| GPR callee   |
|    saves     |
+--------------+ <- FP
| SVE callee   |
|    saves     |
+--------------+ <- Should restore SVE callee saves from here
|  SVE Spills  |
|  and Locals  |
+--------------+ <- instead of from here.
|              |
:              :
|              |
+--------------+ <- SP
Reviewed By: paulwalker-arm
Differential Revision: https://reviews.llvm.org/D84539
(cherry picked from commit cda2eb3ad2bbe923e74d6eb083af196a0622d800)
Instead of aligning the last callee-saved-register slot to the stack
alignment (16 bytes), just align the SVE callee-saved block. This also
simplifies the code that allocates space for the callee-saves.
This change is needed to make sure the offset to which the callee-saved
register is spilled corresponds to the offset used for e.g. unwind call
frame instructions.
Reviewers: efriedma, paulwalker-arm, david-arm, rengolin
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D84042
(cherry picked from commit 26b4ef3694973ea2fa656d3d3a7f67f16f135654)
Fixed stack objects are preallocated and defined to be allocated before
any of the regular stack objects. These are normally used to model stack
arguments.
The AAPCS does not support passing SVE registers on the stack by value
(only by reference). The current layout also doesn't place them before
all stack objects, but rather before all SVE objects. Removing this
simplifies the code that emits the allocation/deallocation
around callee-saved registers (D84042).
This patch also removes all uses of fixedStack from
framelayout-sve.mir, where this was used purely for testing purposes.
Reviewers: paulwalker-arm, efriedma, rengolin
Reviewed By: paulwalker-arm
Differential Revision: https://reviews.llvm.org/D84538
(cherry picked from commit 54492a5843a34684ce21ae201dd8ca3e509288fd)
It's sort of tricky to hit this in practice, but not impossible. I have
a synthetic C testcase if anyone is interested.
The implementation is identical to the equivalent NEON register copies.
Differential Revision: https://reviews.llvm.org/D84373
(cherry picked from commit 993c1a3219a8ae69f1d700183bf174d75f3815d4)
This patch addresses two issues:
* Forces the availability of the base-pointer (x19) when the frame has
both scalable vectors and variable-length arrays. Otherwise it will
be expensive to access non-SVE locals.
* In the presence of SVE stack objects, it will allocate the emergency
  scavenging slot close to the SP, so that it can be accessed from
  the SP or BP if available. If accessed from the frame-pointer, it
  would otherwise need an extra register to access the scavenging slot
  because of mixed scalable/non-scalable addressing modes.
Reviewers: efriedma, ostannard, cameron.mcinally, rengolin, david-arm
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D70174
(cherry picked from commit bef56f7fe2382ed1476aa67a55626b364635b44e)
The default calling convention needs to save/restore the SVE callee
saves according to the SVE PCS when the function takes or returns
scalable types, even when the `aarch64_sve_vector_pcs` CC is not
specified for the function.
Reviewers: efriedma, paulwalker-arm, david-arm, rengolin
Reviewed By: paulwalker-arm
Differential Revision: https://reviews.llvm.org/D84041
(cherry picked from commit 9bacf1588583014538a0217add18f370acb95788)
This isn't a natively supported operation, so convert it to a
mask+compare.
In addition to the operation itself, fix up some surrounding stuff to
make the testcase work: we need concat_vectors on i1 vectors, we need
legalization of i1 vector truncates, and we need to fix up all the
relevant uses of getVectorNumElements().
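The mask+compare idiom for the i1 truncates, modeled on a scalar
(illustrative; the actual code applies this per vector lane):

  // Truncation to i1 isn't natively supported here, so compute it as:
  // keep the low bit, then compare against zero.
  bool truncToI1(unsigned Lane) {
    return (Lane & 1) != 0; // mask, then compare
  }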
Differential Revision: https://reviews.llvm.org/D83811
(cherry picked from commit b8f765a1e17f8d212ab1cd8f630d35adc7495556)
v3i16 and v3f16 currently cannot be legalized and lowered, so they
should not be emitted by instruction combining.
Moved the check down to still allow extracting 1 or 2 elements via the dmask.
Fixes image intrinsics being combined to return v3x16.
Differential Revision: https://reviews.llvm.org/D84223
(cherry picked from commit 2c659082bda6319732118e746fe025d8d5f9bfac)
An initial backend patch towards fixing the various poor HADD combines (PR34724, PR41813, PR45747 etc.).
This extends isHorizontalBinOp to check if we have per-element
horizontal ops (odd+even element pairs) that are not in the expected
serial order. In that case we build a "post shuffle mask" that we can
apply to the HOP result, assuming we have fast-hops/optsize etc.
The next step will be to extend the SHUFFLE(HOP(X,Y)) combines as suggested on PR41813 - accepting more post-shuffle masks even on slow-hop targets if we can fold it into another shuffle.
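A scalar model of the post-shuffle-mask idea above (hypothetical
helpers; the real code builds HADD nodes and DAG shuffle masks):

  #include <array>
  using V4 = std::array<int, 4>;

  // One horizontal op: adjacent pair sums of A, then of B.
  V4 hadd(const V4 &A, const V4 &B) {
    return {A[0] + A[1], A[2] + A[3], B[0] + B[1], B[2] + B[3]};
  }

  V4 shuffle(const V4 &V, const std::array<int, 4> &Mask) {
    return {V[Mask[0]], V[Mask[1]], V[Mask[2]], V[Mask[3]]};
  }

  // The IR computed the right pairs, but in a permuted lane order:
  // emit one hadd and repair the order with the post shuffle mask.
  V4 permutedPairSums(const V4 &A, const V4 &B) {
    return shuffle(hadd(A, B), {1, 0, 3, 2});
  }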
Differential Revision: https://reviews.llvm.org/D83789
(cherry picked from commit 182111777b4ec215eeebe8ab5cc2a324e2f055ff)
XBEGIN causes several basic blocks to be inserted. If flags are live
across it, we need to make EFLAGS live in the new basic blocks to avoid
machine verifier errors.
Fixes PR46827
Reviewed By: ivanbaev
Differential Revision: https://reviews.llvm.org/D84479
(cherry picked from commit 647e861e080382593648b234668ad2f5a376ac5e)
As shown in D82998, the basic-aa-recphi option can cause miscompiles for
GEPs with negative constants. The option checks for recursive phis that
recurse through a constant GEP. If it finds one, it performs aliasing
calculations using the other phi operands with an unknown size, to
specify that an unknown number of elements after the initial value are
potentially accessed. This works fine except where the constant is
negative, as the size is still considered to be positive. So this patch
expands the check to make sure that the constant is also positive.
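A source-level pattern of the kind that could be miscompiled
(illustrative): the pointer phi recurses through a negative-constant
GEP, so elements before the initial value are accessed:

  // Walks backwards from End: the recursive phi steps by -1, so the
  // accesses land before the starting pointer, not after it.
  int sumBackwards(int *End, int *Begin) {
    int S = 0;
    for (int *P = End; P != Begin; --P) // gep ..., i64 -1
      S += P[-1];
    return S;
  }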
Differential Revision: https://reviews.llvm.org/D83576
(cherry picked from commit 311fafd2c90aed5b3fed9566503eebe629f1e979)
SplitBlockPredecessors() cannot split blocks that have such terminators,
and in two other places we already ensure that we don't end up calling
SplitBlockPredecessors() on such blocks. Do so in one more place.
Fixes https://bugs.llvm.org/show_bug.cgi?id=46857
(cherry picked from commit 1da9834557cd4302a5183b8228ce063e69f82602)
I mixed up the precedence of operators in the assert and thought I
had it right since there was no compiler warning. This just
adds the parentheses in the expression as needed.
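For illustration (not the original expression), mixing && and ||
without parentheses changes the meaning silently, and some compiler
versions emit no warning for it:

  #include <cassert>

  void check(bool A, bool B, bool C) {
    // Intended: (A || B) && C. Without parentheses this parses as
    // A || (B && C), because && binds tighter than ||, so the assert
    // passes whenever A is true, even if C is false.
    assert(A || B && C);   // buggy
    assert((A || B) && C); // fixed by adding the parentheses
  }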
(cherry picked from commit cdead4f89c0eecf11f50092bc088e3a9c6511825)
Unfortunately this is another regression from my canonicalization patch
(1fed131660b2). The patch contained two implicit assumptions:
1. That we would have a permuted load only if we are loading a partial vector
2. That a partial vector load would necessarily be as wide as the splat
However, assumption 2 is not correct since it is possible to do a wider
load and only splat a half of it. This patch corrects this assumption by
simply checking if the load is permuted and adjusting the offset if it is.
(cherry picked from commit 7d076e19e31a2a32e357cbdcf0183f88fe1fb0fb)
In the included test case, the align 16 allowed the v23f32 load to be
handled as load v16f32, load v4f32, and load v4f32 (one element not
used). These loads all need to be concatenated together into a final
vector. In this case we tried to concatenate the two v4f32 loads to
match the type of the v16f32 load so we could do a second
concat_vectors, but those loads alone only add up to v8f32. So we need
two v4f32 undefs to pad it.
It appears we've tried to hack around a similar issue in this code
before, by adding undef padding to loads in one of the earlier loops in
this function: originally in r147964, by padding all loads narrower than
previous loads to the same size, and later modified in r293088 to pad
only the last load. This patch removes that earlier code and just
handles it on demand where we know we need it.
Fixes PR46820
Differential Revision: https://reviews.llvm.org/D84463
(cherry picked from commit 8131e190647ac2b5b085b48a6e3b48c1d7520a66)
For comdats (e.g. caused by -ffunction-sections), Section is already
set here; make sure it's null, for the weak external symbol to be undefined.
This fixes PR46779.
Differential Revision: https://reviews.llvm.org/D84507
(cherry picked from commit 9e81d8bbf19d72fca3d87b7334c613d1aa2a5795)
This patch provides optimization of bit manipulation operations by
enabling the +experimental-b target feature.
It adds matching of single block patterns of instructions to specific
bit-manip instructions from the ternary subset (zbt subextension) of the
experimental B extension of RISC-V.
It also adds the corresponding codegen tests.
This patch is based on Claire Wolf's proposal for the bit manipulation
extension of RISC-V:
https://github.com/riscv/riscv-bitmanip/blob/master/bitmanip-0.92.pdf
Differential Revision: https://reviews.llvm.org/D79875
(cherry picked from commit c9c955ada8e65205312f2bc41b46eefa0e98b36c)
This patch provides optimization of bit manipulation operations by
enabling the +experimental-b target feature.
It adds matching of single block patterns of instructions to specific
bit-manip instructions from the single-bit subset (zbs subextension) of
the experimental B extension of RISC-V.
It also adds the corresponding codegen tests.
This patch is based on Claire Wolf's proposal for the bit manipulation
extension of RISC-V:
https://github.com/riscv/riscv-bitmanip/blob/master/bitmanip-0.92.pdf
Differential Revision: https://reviews.llvm.org/D79874
(cherry picked from commit d4be33374c07ea9a9362892876aa76b227298181)
This patch provides optimization of bit manipulation operations by
enabling the +experimental-b target feature.
It adds matching of single block patterns of instructions to specific
bit-manip instructions belonging to both the permutation and the base
subsets of the experimental B extension of RISC-V.
It also adds the corresponding codegen tests.
This patch is based on Claire Wolf's proposal for the bit manipulation
extension of RISC-V:
https://github.com/riscv/riscv-bitmanip/blob/master/bitmanip-0.92.pdf
Differential Revision: https://reviews.llvm.org/D79873
(cherry picked from commit 6144f0a1e52e7f5439a67267ca65f2d72c21aaa6)
This patch provides optimization of bit manipulation operations by
enabling the +experimental-b target feature.
It adds matching of single block patterns of instructions to specific
bit-manip instructions from the permutation subset (zbp subextension) of
the experimental B extension of RISC-V.
It also adds the corresponding codegen tests.
This patch is based on Claire Wolf's proposal for the bit manipulation
extension of RISC-V:
https://github.com/riscv/riscv-bitmanip/blob/master/bitmanip-0.92.pdf
Differential Revision: https://reviews.llvm.org/D79871
(cherry picked from commit 31b52b4345e36b169a2b6a89eac44651f59889dd)
This patch provides optimization of bit manipulation operations by
enabling the +experimental-b target feature.
It adds matching of single block patterns of instructions to specific
bit-manip instructions from the base subset (zbb subextension) of the
experimental B extension of RISC-V.
It also adds the corresponding codegen tests.
This patch is based on Claire Wolf's proposal for the bit manipulation
extension of RISC-V:
https://github.com/riscv/riscv-bitmanip/blob/master/bitmanip-0.92.pdf
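As a hedged illustration of what the base-subset (zbb) matching
enables, source patterns like these may select to single bit-manip
instructions when the feature is enabled (exact flags and selected
mnemonics depend on the experimental patch set and spec version):

  unsigned bitManipDemo(unsigned X, int A, int B) {
    unsigned Pop  = __builtin_popcount(X);     // population count
    unsigned Lead = X ? __builtin_clz(X) : 32; // count leading zeros
    int Min       = A < B ? A : B;             // minimum
    return Pop + Lead + (unsigned)Min;
  }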
Differential Revision: https://reviews.llvm.org/D79870
(cherry picked from commit e2692f0ee7f338fea4fc918669643315cefc7678)
getTargetShuffleMask is used by the various "SimplifyDemanded" folds,
so we can't assume that the bypassed extract_subvector can be safely
simplified. getFauxShuffleMask performs a more general decode that
allows us to catch many of these cases more safely, so the impact is
minimal.
(cherry picked from commit 5b5dc2442ac7a574a3b7d17c15ebeeb9eb3bec26)