mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-11-25 20:23:11 +01:00
Commit Graph

200275 Commits

Author SHA1 Message Date
Roman Lebedev
7b99daf6d1 [InstCombine] Sanitize undef vector constant to 1 in X*(2^C) with X << C (PR47133)
While x*undef is undef, shift-by-undef is poison,
which we must avoid introducing.

Also log2(iN undef) is *NOT* iN undef, because log2(iN undef) u< N.

See https://bugs.llvm.org/show_bug.cgi?id=47133
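A lane-wise Python model of the sanitization described above (a sketch, not LLVM's actual C++ implementation; `UNDEF`, `sanitize_pow2_lane`, and `mul_pow2_to_shift` are invented names):

```python
UNDEF = None  # stand-in for an undef constant lane

def sanitize_pow2_lane(lane):
    """Replace an undef power-of-two lane with 1, so the shift amount
    becomes log2(1) == 0 instead of a potentially-poison value."""
    return 1 if lane is UNDEF else lane

def mul_pow2_to_shift(x, const_lanes, bits=8):
    """Rewrite x * 2^C as x << C per lane, after sanitizing undef lanes."""
    out = []
    for lane in const_lanes:
        c = sanitize_pow2_lane(lane)
        shamt = c.bit_length() - 1          # log2 of a power of two
        assert 0 <= shamt < bits            # a sanitized lane is never poison
        out.append((x << shamt) & ((1 << bits) - 1))
    return out
```

The undef lane turns into a shift by zero, which preserves the "anything is a valid value" semantics of x*undef without introducing an out-of-range shift.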

(cherry picked from commit 12d93a27e7b78d58dd00817cb737f273d2dba8ae)
2020-08-18 12:26:01 +02:00
Chen Zheng
f0d66a02ba [PowerPC] Make StartMI ignore COPY like instructions.
Reviewed By: lkail

Differential Revision: https://reviews.llvm.org/D85659

(cherry picked from commit 4d52ebb9b9c72b656c1ccb6a1424841f246cd791)
2020-08-18 11:49:37 +02:00
David Sherwood
de2e9a1d9d [SVE] Fix bug in SVEIntrinsicOpts::optimizePTest
The code wasn't taking into account that the two operands
passed to ptest could be identical and was trying to erase
them twice.
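The double-erase hazard can be modeled generically (a Python sketch, not the SVEIntrinsicOpts code): erase by identity, so a repeated operand is removed only once.

```python
def erase_dead_ops(ops):
    """Erase each dead operand once, even when the same operand appears
    twice in the list (e.g. both ptest operands being identical)."""
    erased, seen = [], set()
    for op in ops:
        if id(op) not in seen:
            seen.add(id(op))
            erased.append(op)   # erase exactly once per distinct operand
    return erased
```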

Differential Revision: https://reviews.llvm.org/D85892

(cherry picked from commit 6c7957c9901714b7ad0a8d2743a8c431b57fd0c9)
2020-08-18 10:02:50 +02:00
David Blaikie
4fc1aa99f5 Fix -Wconstant-conversion warning with explicit cast
Introduced by fd6584a22043b254a323635c142b28ce80ae5b5b

Following similar use of casts in AsmParser.cpp, for instance - ideally
this type would use unsigned chars as they're more representative of raw
data and don't get confused around implementation defined choices of
char's signedness, but this is what it is & the signed/unsigned
conversions are (so far as I understand) safe/bit preserving in this
usage and what's intended, given the API design here.
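The signedness pitfall can be modeled in Python (a sketch with invented helper names; in real C/C++ the signedness of plain `char` is implementation-defined):

```python
def to_signed_char(byte):
    """What storing a raw byte into a signed 8-bit char yields:
    values >= 0x80 wrap to negatives."""
    return byte - 256 if byte >= 128 else byte

def to_unsigned_char(value):
    """An explicit cast back to unsigned char recovers the raw byte,
    i.e. the conversion is bit-preserving."""
    return value & 0xFF
```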

(cherry picked from commit e31cfc4cd3e393300002e9c519787c96e3b67bab)
2020-08-17 14:01:44 +02:00
David Sherwood
e312733943 [SVE][CodeGen] Fix bug with store of unpacked FP scalable vectors
Fixed an incorrect pattern in lib/Target/AArch64/AArch64SVEInstrInfo.td
for storing out <vscale x 2 x f32> unpacked scalable vectors. Added
a couple of tests to

  test/CodeGen/AArch64/sve-st1-addressing-mode-reg-imm.ll

Differential Revision: https://reviews.llvm.org/D85441

(cherry picked from commit 0905d9f31ead399d054c5d2a2c353e690f5c8daa)
2020-08-17 13:58:11 +02:00
Sander de Smalen
08c4f4dedd [AArch64][SVE] Disable tail calls if callee does not preserve SVE regs.
This fixes an issue triggered by the following code, where emitEpilogue
got confused when trying to restore the SVE registers after the call,
whereas the call to bar() is implemented as a TCReturn:

  int non_sve();
  int sve(svint32_t x) { return non_sve(); }

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D84869

(cherry picked from commit f2916636f83dfeb4808a16045db0025783743471)
2020-08-17 13:58:10 +02:00
Sander de Smalen
b70d8f0926 [AArch64][SVE] Add missing unwind info for SVE registers.
This patch adds a CFI entry for each SVE callee saved register
that needs unwind info at an offset from the CFA. The offset is
a DWARF expression because the offset is partly scalable.

The CFI entries only cover a subset of the SVE callee-saves and
only encodes the lower 64-bits, thus implementing the lowest
common denominator ABI. Existing unwinders may support VG but
only restore the lower 64-bits.

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D84044

(cherry picked from commit bb3344c7d8c2703c910dd481ada43ecaf11536a6)
2020-08-17 13:58:10 +02:00
Sander de Smalen
58d78be3d9 [AArch64][SVE] Fix CFA calculation in presence of SVE objects.
The CFA is calculated as (SP/FP + offset), but when there are
SVE objects on the stack the SP offset is partly scalable and
should instead be expressed as the DWARF expression:

     SP + offset + scalable_offset * VG

where VG is the Vector Granule register, containing the
number of 64-bit 'granules' in a scalable vector.
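Evaluated for a concrete VG, the DWARF expression above is straightforward arithmetic (Python sketch; the numbers are illustrative, e.g. VG = 4 for 256-bit SVE vectors):

```python
def cfa(sp, fixed_offset, scalable_offset, vg):
    """Evaluate CFA = SP + offset + scalable_offset * VG for a concrete
    vector granule count (VG = number of 64-bit granules per vector)."""
    return sp + fixed_offset + scalable_offset * vg
```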

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D84043

(cherry picked from commit fd6584a22043b254a323635c142b28ce80ae5b5b)
2020-08-17 13:58:09 +02:00
Martin Storsjö
4e760d6c21 [docs] Add release notes for the 11.x release 2020-08-17 11:17:15 +03:00
Petar Avramovic
9a16e544d3 [GlobalISel][InlineAsm] Fix matching input constraint to physreg
Add the given input and mark it as tied.
This doesn't create an additional copy, compared to
matching the input constraint to a virtual register.

Differential Revision: https://reviews.llvm.org/D85122

(cherry picked from commit d893278bba01b0e1209e8b8accbdd5cfa75a0932)
2020-08-07 19:48:51 +02:00
Martin Storsjö
ece79ac6e6 [AArch64] [Windows] Error out on unsupported symbol locations
These might occur in seemingly generic assembly. Previously when
targeting COFF, they were silently ignored, which certainly won't
give the right result. Instead clearly error out, to make it clear
that the assembly needs to be adjusted for this target.

Also change a preexisting report_fatal_error into a proper error
message, pointing out the offending source instruction. This isn't
strictly an internal error, as it can be triggered by user input.

Differential Revision: https://reviews.llvm.org/D85242

(cherry picked from commit f5e6fbac24f198d075a7c4bc0879426e79040bcf)
2020-08-06 13:46:50 +02:00
Chen Zheng
888e055a40 [PowerPC] fixupIsDeadOrKill start and end in different block fixing
In fixupIsDeadOrKill, we assumed StartMI and EndMI exist in the same
basic block, and added an assertion to that effect in the function.
This is wrong before RA, as before RA the true definition may exist
in another block, reached through COPY-like instructions.

Reviewed By: nemanjai

Differential Revision: https://reviews.llvm.org/D83365

(cherry picked from commit 36f9fe2d3493717dbc6866d96b2e989839ce1a4c)
2020-08-05 20:12:38 +02:00
Martin Storsjö
872454e51b [llvm-rc] Allow string table values split into multiple string literals
In practice, this can easily be a product of combining strings with
macros in resource files.

This fixes https://github.com/mstorsjo/llvm-mingw/issues/140.

As string literals within llvm-rc are handled as StringRefs, each
referencing an uninterpreted slice of the input file, with actual
interpretation of the input string (codepage handling, unescaping etc)
done only right before writing them out to disk, it's hard to
concatenate them other than just bundling them up in a vector,
without rearchitecting a large part of llvm-rc.

This matches how the same already is supported in VersionInfoValue,
with a std::vector<IntOrString> Values.

MS rc.exe only supports concatenated string literals in version info
values (already supported), string tables (implemented in this patch)
and user data resources (easily implemented in a separate patch, but
hasn't been requested by any end user yet), while GNU windres supports
string immediates split into multiple strings anywhere (e.g.
100 ICON "myicon" ".ico"). It's unclear whether concatenation in other
statements actually is used in the wild, though, in resource files
normally built by GNU windres.

Differential Revision: https://reviews.llvm.org/D85183

(cherry picked from commit b989fcbae6f179ad887d19ceef83ace1c00b87cc)
2020-08-05 19:59:38 +02:00
Hans Wennborg
d8c783670a RuntimeDyldELF: report_fatal_error instead of asserting for unimplemented relocations (PR46816)
This fixes the ExecutionEngine/MCJIT/stubs-sm-pic.ll test in no-asserts
builds which is set to XFAIL on some platforms like 32-bit x86. More
importantly, we probably don't want to silently error in these cases.
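The assert-vs-fatal-error distinction can be sketched in Python (hypothetical helper and relocation names; like C asserts under NDEBUG, Python asserts vanish under `python -O`, while an explicit raise always fires):

```python
def apply_relocation(rel_type, implemented=frozenset({"R_X86_64_PC32"})):
    """Instead of an assert (compiled out in no-asserts builds, silently
    doing the wrong thing), fail loudly and unconditionally for
    unimplemented relocation types."""
    if rel_type not in implemented:
        # analogous to report_fatal_error: fires in release builds too
        raise RuntimeError("Relocation type not implemented: " + rel_type)
    return True
```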

Differential revision: https://reviews.llvm.org/D84390

(cherry picked from commit 6a3b07a4bf14be32569550f2e9814d8797d27d31)
2020-08-05 19:39:11 +02:00
Jonas Devlieghere
1dec893ffe [llvm] Add RISCVTargetParser.def to the module map
This fixes the modules build.

(cherry picked from commit 1b3c25e7b61f44b80788f8758f0d7f0b013135b5)
2020-08-05 17:32:25 +02:00
Hans Wennborg
78f30188b4 Bump forgotten version nbr in llvm/docs/conf.py 2020-08-05 17:12:51 +02:00
Changpeng Fang
5aeae1780f AMDGPU: Put inexpensive ops first in AMDGPUAnnotateUniformValues::visitLoadInst
Summary:
  This is in response to the review of https://reviews.llvm.org/D84873:
The expensive check should be reordered last
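The principle is plain short-circuit ordering (a generic Python sketch, not the actual visitLoadInst code): put the cheap predicate first so the expensive one only runs when needed.

```python
def check_uniform(cheap_result, expensive_fn):
    """With short-circuit evaluation, `expensive_fn` is only invoked
    when the inexpensive check already passed."""
    return cheap_result and expensive_fn()
```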

Reviewers:
  arsenm

Differential Revision:
  https://reviews.llvm.org/D84890

(cherry picked from commit 243376cdc7b719d443f42c8c4667e5d96af53dcc)
2020-08-03 16:01:25 +02:00
Michał Górny
2fc661ffb0 [CMake] Pass bugreport URL to standalone clang build
BUG_REPORT_URL is currently used both in LLVM and in Clang but declared
only in the latter.  This means that it's missing in standalone clang
builds and the driver ends up outputting:

  PLEASE submit a bug report to  and include [...]

(note the missing URL)

To fix this, include LLVM_PACKAGE_BUGREPORT in LLVMConfig.cmake
(similarly to how we pass PACKAGE_VERSION) and use it to fill
BUG_REPORT_URL when building clang standalone.

Differential Revision: https://reviews.llvm.org/D84987

(cherry picked from commit 21c165de2a1bcca9dceb452f637d9e8959fba113)
2020-08-03 15:59:06 +02:00
Florian Hahn
c96add5bee [LAA] Avoid adding pointers to the checks if they are not needed.
Currently we skip alias sets with only reads or a single write and no
reads, but still add the pointers to the list of pointers in RtCheck.

This can lead to cases where we try to access a pointer that does not
exist when grouping checks.  In most cases, the way we access
PositionMap masked that, as the value would default to index 0.

But in the example in PR46854 it causes a crash.

This patch updates the logic to avoid adding pointers for alias sets
that do not need any checks. It makes things slightly more verbose, by
first checking the numbers of reads/writes and bailing out early if we don't
need checks for the alias set.

I think this makes the logic a bit simpler to follow.
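The read/write bail-out described above can be sketched as (hypothetical Python model of the per-alias-set decision):

```python
def alias_set_needs_checks(num_reads, num_writes):
    """An alias set with only reads, or with a single write and no reads,
    needs no runtime pointer checks; every other combination does."""
    if num_writes == 0:
        return False          # only reads
    if num_writes == 1 and num_reads == 0:
        return False          # single write, no reads
    return True
```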

Reviewed By: anemet

Differential Revision: https://reviews.llvm.org/D84608

(cherry picked from commit 2062b3707c1ef698deaa9abc571b937fdd077168)
2020-08-03 15:55:25 +02:00
Brendon Cahoon
585524e839 Align store conditional address
In cases where the alignment of the datatype is smaller than
expected by the instruction, the address is aligned. The aligned
address is used for the load, but wasn't used for the store
conditional, which resulted in a run-time alignment exception.
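Aligning the address down is a simple mask (generic sketch, assuming a power-of-two alignment); the bug was that only the load used the aligned address while the store conditional kept the raw one:

```python
def align_down(addr, align):
    """Round `addr` down to the nearest multiple of `align`
    (align must be a power of two)."""
    return addr & ~(align - 1)
```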

(cherry picked from commit 7b114446c320de542c50c4c02f566e5d18adee33)
2020-08-03 15:52:15 +02:00
Balazs Benics
2cd4771119 [analyzer] Fix out-of-tree only clang build by not relying on private header
It turned out that D78704 included a private LLVM header, which is excluded
from the LLVM install target.
I'm substituting that `#include` with the public one by moving the necessary
`#define` into that. There was a discussion about this at D78704 and on the
cfe-dev mailing list.

I'm also placing a note to remind others of this pitfall.

Reviewed By: mgorny

Differential Revision: https://reviews.llvm.org/D84929

(cherry picked from commit 63d3aeb529a7b0fb95c2092ca38ad21c1f5cfd74)
2020-07-31 20:31:44 +02:00
Francesco Petrogalli
d7924b4be1 [llvm][CodeGen] Addressing modes for SVE ldN.
Reviewers: c-rhodes, efriedma, sdesmalen

Subscribers: huihuiz, tschuett, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D77251

(cherry picked from commit adb28e0fb2b0e97ea9dce422c09b36979cf7cd2f)
2020-07-31 17:27:53 +02:00
Francesco Petrogalli
c3a85d666e [NFC][AArch64] Replace some template methods/invocations...
...with the non-template version, as the template version might
increase the size of the compiler build.

Methods affected:

1.`findAddrModeSVELoadStore`
2. `SelectPredicatedStore`

Also, remove the `const` qualifier from the `unsigned` parameters of
the methods to conform with other similar methods in the class.

(cherry picked from commit dbeb184b7f54db2d3ef20ac153b1c77f81cf0b99)
2020-07-31 17:27:52 +02:00
Francesco Petrogalli
a232ab7037 [llvm][sve] Reg + Imm addressing mode for ld1ro.
Reviewers: kmclaughlin, efriedma, sdesmalen

Subscribers: tschuett, hiraditya, psnobl, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D83357

(cherry picked from commit 809600d6642773f71245f76995dab355effc73af)
2020-07-31 17:27:52 +02:00
David Sherwood
05256451cd [SVE][CodeGen] At -O0 fallback to DAG ISel when translating alloca with scalable types
When building code at -O0 we weren't falling back to DAG ISel correctly
when encountering alloca instructions with scalable vector types. This
is because the alloca has no operands that are scalable. I've fixed this by
adding a check in AArch64ISelLowering::fallBackToDAGISel for alloca
instructions with scalable types.

Differential Revision: https://reviews.llvm.org/D84746

(cherry picked from commit 23ad660b5d34930b2b5362f1bba63daee78f6aa4)
2020-07-31 17:27:51 +02:00
David Sherwood
4f276364b5 [SVE] Don't consider scalable vector types in SLPVectorizerPass::vectorizeChainsInBlock
In vectorizeChainsInBlock we try to collect chains of PHI nodes
that have the same element type, but the code is relying upon
the implicit conversion from TypeSize -> uint64_t. For now, I have
modified the code to ignore PHI nodes with scalable types.

Differential Revision: https://reviews.llvm.org/D83542

(cherry picked from commit 9ad7c980bb47edd7db8f8db828b487cc7dfc9921)
2020-07-31 17:27:50 +02:00
David Sherwood
efb915bb8c [SVE][CodeGen] Add simple integer add tests for SVE tuple types
I have added tests to:

  CodeGen/AArch64/sve-intrinsics-int-arith.ll

for doing simple integer add operations on tuple types. Since these
tests introduced new warnings due to incorrect use of
getVectorNumElements() I have also fixed up these warnings in the
same patch. These fixes are:

1. In narrowExtractedVectorBinOp I have changed the code to bail out
early for scalable vector types, since we've not yet hit a case that
proves the optimisations are profitable for scalable vectors.
2. In DAGTypeLegalizer::WidenVecRes_CONCAT_VECTORS I have replaced
calls to getVectorNumElements with getVectorMinNumElements in cases
that work with scalable vectors. For the other cases I have added
asserts that the vector is not scalable because we should not be
using shuffle vectors and build vectors in such cases.

Differential revision: https://reviews.llvm.org/D84016

(cherry picked from commit 207877175944656bd9b52d36f391a092854572be)
2020-07-31 17:27:50 +02:00
David Sherwood
72e8f4492d [SVE] Add checks for no warnings in CodeGen/AArch64/sve-sext-zext.ll
Previous patches fixed up all the warnings in this test:

  llvm/test/CodeGen/AArch64/sve-sext-zext.ll

and this change simply checks that no new warnings are added in future.

Differential revision: https://reviews.llvm.org/D83205

(cherry picked from commit f43b5c7a76ab83dcc80e6769d41d5c4b761312b1)
2020-07-31 17:27:49 +02:00
David Sherwood
ebd7adf1d6 [CodeGen] Remove calls to getVectorNumElements in DAGTypeLegalizer::SplitVecOp_EXTRACT_SUBVECTOR
In DAGTypeLegalizer::SplitVecOp_EXTRACT_SUBVECTOR I have replaced
calls to getVectorNumElements with getVectorMinNumElements, since
this code path works for both fixed and scalable vector types. For
scalable vectors the index will be multiplied by VSCALE.

Fixes warnings in this test:

  sve-sext-zext.ll

Differential revision: https://reviews.llvm.org/D83198

(cherry picked from commit 5d84eafc6b86a42e261af8d753c3a823e0e7c67e)
2020-07-31 17:27:49 +02:00
David Sherwood
df55a6ed72 [SVE] Don't use LocalStackAllocation for SVE objects
I have introduced a new TargetFrameLowering query function:

  isStackIdSafeForLocalArea

that queries whether or not it is safe for objects of a given stack
id to be bundled into the local area. The default behaviour is to
always bundle regardless of the stack id, however for AArch64 this is
overridden so that it's only safe for fixed-size stack objects.
There is future work here to extend this algorithm for multiple local
areas so that SVE stack objects can be bundled together and accessed
from their own virtual base-pointer.
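The query described above can be sketched as (Python model with invented numeric stack ids; only the hook name comes from the patch):

```python
DEFAULT_STACK_ID = 0
SVE_VECTOR_STACK_ID = 1   # hypothetical id for scalable-vector objects

def is_stack_id_safe_for_local_area(stack_id, target="default"):
    """Default behaviour: bundle every stack id into the local area.
    The AArch64 override only allows fixed-size objects."""
    if target == "aarch64":
        return stack_id != SVE_VECTOR_STACK_ID
    return True
```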

Differential Revision: https://reviews.llvm.org/D83859

(cherry picked from commit 14bc85e0ebb6c00c1672158ab6a692bfbb11e1cc)
2020-07-31 17:27:49 +02:00
Sander de Smalen
b14a3de842 [AArch64][SVE] Fix epilogue for SVE when the stack is realigned.
While deallocating the stackframe, the offset used to reload the
callee-saved registers was not pointing to the SVE callee-saves,
but rather to the whole SVE area.

   +--------------+
   | GPR callee   |
   |     saves    |
   +--------------+ <- FP
   | SVE callee   |
   |     saves    |
   +--------------+ <- Should restore SVE callee saves from here
   |  SVE Spills  |
   |  and Locals  |
   +--------------+ <- instead of from here.
   |              |
   :              :
   |              |
   +--------------+ <- SP

Reviewed By: paulwalker-arm

Differential Revision: https://reviews.llvm.org/D84539

(cherry picked from commit cda2eb3ad2bbe923e74d6eb083af196a0622d800)
2020-07-31 17:27:48 +02:00
Sander de Smalen
4b9a803323 [AArch64][SVE] Don't align the last SVE callee save.
Instead of aligning the last callee-saved-register slot to the stack
alignment (16 bytes), just align the SVE callee-saved block. This also
simplifies the code that allocates space for the callee-saves.

This change is needed to make sure the offset to which the callee-saved
register is spilled, corresponds to the offset used for e.g. unwind call
frame instructions.

Reviewers: efriedma, paulwalker-arm, david-arm, rengolin

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D84042

(cherry picked from commit 26b4ef3694973ea2fa656d3d3a7f67f16f135654)
2020-07-31 17:27:48 +02:00
Sander de Smalen
03811e1752 [AArch64][SVE] Don't support fixedStack for SVE objects.
Fixed stack objects are preallocated and defined to be allocated before
any of the regular stack objects. These are normally used to model stack
arguments.

The AAPCS does not support passing SVE registers on the stack by value
(only by reference). The current layout also doesn't place them before
all stack objects, but rather before all SVE objects. Removing this
simplifies the code that emits the allocation/deallocation
around callee-saved registers (D84042).

This patch also removes all uses of fixedStack from
framelayout-sve.mir, where it was used purely for testing purposes.

Reviewers: paulwalker-arm, efriedma, rengolin

Reviewed By: paulwalker-arm

Differential Revision: https://reviews.llvm.org/D84538

(cherry picked from commit 54492a5843a34684ce21ae201dd8ca3e509288fd)
2020-07-31 17:27:47 +02:00
Eli Friedman
8c995d4317 [AArch64][SVE] Teach copyPhysReg to copy ZPR2/3/4.
It's sort of tricky to hit this in practice, but not impossible. I have
a synthetic C testcase if anyone is interested.

The implementation is identical to the equivalent NEON register copies.

Differential Revision: https://reviews.llvm.org/D84373

(cherry picked from commit 993c1a3219a8ae69f1d700183bf174d75f3815d4)
2020-07-31 17:27:47 +02:00
Sander de Smalen
7c1a8570db [AArch64][SVE] Correctly allocate scavenging slot in presence of SVE.
This patch addresses two issues:

* Forces the availability of the base-pointer (x19) when the frame has
  both scalable vectors and variable-length arrays. Otherwise it will
  be expensive to access non-SVE locals.

* In presence of SVE stack objects, it will allocate the emergency
  scavenging slot close to the SP, so that they can be accessed from
  the SP or BP if available. If accessed from the frame-pointer, it will
  otherwise need an extra register to access the scavenging slot because
  of mixed scalable/non-scalable addressing modes.

Reviewers: efriedma, ostannard, cameron.mcinally, rengolin, david-arm

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D70174

(cherry picked from commit bef56f7fe2382ed1476aa67a55626b364635b44e)
2020-07-31 17:27:46 +02:00
Sander de Smalen
bd7f2f5f58 [AArch64][SVE] Fix PCS for functions taking/returning scalable types.
The default calling convention needs to save/restore the SVE callee
saves according to the SVE PCS when the function takes or returns
scalable types, even when the `aarch64_sve_vector_pcs` CC is not
specified for the function.

Reviewers: efriedma, paulwalker-arm, david-arm, rengolin

Reviewed By: paulwalker-arm

Differential Revision: https://reviews.llvm.org/D84041

(cherry picked from commit 9bacf1588583014538a0217add18f370acb95788)
2020-07-31 17:27:46 +02:00
Eli Friedman
48fbb591c8 [AArch64][SVE] Add support for trunc to <vscale x N x i1>.
This isn't a natively supported operation, so convert it to a
mask+compare.

In addition to the operation itself, fix up some surrounding stuff to
make the testcase work: we need concat_vectors on i1 vectors, we need
legalization of i1 vector truncates, and we need to fix up all the
relevant uses of getVectorNumElements().
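A lane-wise Python model of the mask+compare lowering (a sketch, not the actual DAG nodes): keep bit 0 of each lane and test it against zero.

```python
def trunc_to_i1(lanes):
    """Model of lowering `trunc <N x iM> to <N x i1>`: mask to the low
    bit, then compare against zero."""
    return [(x & 1) != 0 for x in lanes]
```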

Differential Revision: https://reviews.llvm.org/D83811

(cherry picked from commit b8f765a1e17f8d212ab1cd8f630d35adc7495556)
2020-07-31 17:27:45 +02:00
Hans Wennborg
9162532dea Add flang to export.sh so it gets source tarballs in releases
(cherry picked from commit 9853786ce39b9510eeb2688baaef7a364d58e113)
2020-07-31 17:23:43 +02:00
Sebastian Neubauer
a104e45e49 [AMDGPU] Don't combine memory intrs to v3i16
v3i16 and v3f16 currently cannot be legalized and lowered so they should
not be emitted by inst combining.

Moved the check down to still allow extracting 1 or 2 elements via the dmask.

Fixes image intrinsics being combined to return v3x16.

Differential Revision: https://reviews.llvm.org/D84223

(cherry picked from commit 2c659082bda6319732118e746fe025d8d5f9bfac)
2020-07-29 17:31:02 +02:00
Sanjay Patel
14afc00ece [InstCombine] avoid crashing on vector constant expression (PR46872)
(cherry picked from commit f75cf240d6ed528e1ce7770bbe09b417338b40ef)
2020-07-29 17:16:52 +02:00
Simon Pilgrim
07199183e0 [X86][SSE] Attempt to match OP(SHUFFLE(X,Y),SHUFFLE(X,Y)) -> SHUFFLE(HOP(X,Y))
An initial backend patch towards fixing the various poor HADD combines (PR34724, PR41813, PR45747 etc.).

This extends isHorizontalBinOp to check if we have per-element horizontal ops (odd+even element pairs), but not in the expected serial order - in which case we build a "post shuffle mask" that we can apply to the HOP result, assuming we have fast-hops/optsize etc.

The next step will be to extend the SHUFFLE(HOP(X,Y)) combines as suggested on PR41813 - accepting more post-shuffle masks even on slow-hop targets if we can fold it into another shuffle.

Differential Revision: https://reviews.llvm.org/D83789

(cherry picked from commit 182111777b4ec215eeebe8ab5cc2a324e2f055ff)
2020-07-28 13:47:17 +02:00
Simon Pilgrim
4a1983e013 [X86][SSE] Add additional (f)add(shuffle(x,y),shuffle(x,y)) tests for D83789
(cherry picked from commit bfc4294ef61d5cf69fffe6b64287a323c003d90f)
2020-07-28 13:47:17 +02:00
Craig Topper
73a82b1690 [X86] Detect if EFLAGs is live across XBEGIN pseudo instruction. Add it as livein to the basic blocks created when expanding the pseudo
XBEGIN causes several basic blocks to be inserted. If flags are live across it, we need to make EFLAGS live-in to the new basic blocks to avoid machine verifier errors.

Fixes PR46827

Reviewed By: ivanbaev

Differential Revision: https://reviews.llvm.org/D84479

(cherry picked from commit 647e861e080382593648b234668ad2f5a376ac5e)
2020-07-28 13:43:23 +02:00
Hans Wennborg
ca9c579599 Drop the 'git' suffix from various version variables 2020-07-27 17:13:49 +02:00
David Green
95c58994a7 [BasicAA] Fix -basicaa-recphi for geps with negative offsets
As shown in D82998, the basic-aa-recphi option can cause miscompiles for
geps with negative constants. The option checks for recursive phis that
recurse through a constant gep. If it finds one, it performs aliasing
calculations using the other phi operands with an unknown size, to
specify that an unknown number of elements after the initial value are
potentially accessed. This works fine except where the constant is
negative, as the size is still considered to be positive. So this patch
expands the check to make sure that the constant is also positive.
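To see why the sign of the constant matters, here is a small Python model (hypothetical helper names): with a negative per-iteration offset, the recursion touches addresses *before* the initial value, which the "unknown number of elements after" assumption does not cover.

```python
def addresses(start, stride, steps):
    """Addresses touched by a recursive phi that geps by `stride` bytes
    each iteration."""
    return [start + stride * i for i in range(steps)]

def covered_by_unknown_size_after(start, addr):
    """The pre-fix aliasing model: everything at or after `start`."""
    return addr >= start
```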

Differential Revision: https://reviews.llvm.org/D83576

(cherry picked from commit 311fafd2c90aed5b3fed9566503eebe629f1e979)
2020-07-27 17:02:13 +02:00
David Green
afe00fe14e [BasicAA] Add additional negative phi tests. NFC
(cherry picked from commit 30fa57662760e1489cf70cb411c55fbe9fc189fe)
2020-07-27 17:02:13 +02:00
Roman Lebedev
5158667c82 [JumpThreading] ProcessBranchOnXOR(): bailout if any pred ends in indirect branch (PR46857)
SplitBlockPredecessors() can not split blocks that have such terminators,
and in two other places we already ensure that we don't end up calling
SplitBlockPredecessors() on such blocks. Do so in one more place.

Fixes https://bugs.llvm.org/show_bug.cgi?id=46857

(cherry picked from commit 1da9834557cd4302a5183b8228ce063e69f82602)
2020-07-27 16:31:31 +02:00
Nemanja Ivanovic
031096854b [PowerPC][NFC] Fix an assert that cannot trip from 7d076e19e31a
I mixed up the precedence of operators in the assert and thought I
had it right since there was no compiler warning. This just
adds the parentheses in the expression as needed.
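The general pitfall can be illustrated in Python (the commit does not show the actual expression; `x & mask == 0` is a classic instance, since C++ parses `==` before `&`):

```python
def buggy(x, mask):
    """What the C++ expression `x & mask == 0` actually computes:
    `==` binds tighter than `&`, so this is `x & (mask == 0)`."""
    return x & (mask == 0)

def intended(x, mask):
    """What the author meant: test whether the masked bits are clear."""
    return (x & mask) == 0
```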

(cherry picked from commit cdead4f89c0eecf11f50092bc088e3a9c6511825)
2020-07-27 16:26:05 +02:00
Nemanja Ivanovic
a5f9c69e6e [PowerPC] Fix computation of offset for load-and-splat for permuted loads
Unfortunately this is another regression from my canonicalization patch
(1fed131660b2). The patch contained two implicit assumptions:
1. That we would have a permuted load only if we are loading a partial vector
2. That a partial vector load would necessarily be as wide as the splat

However, assumption 2 is not correct since it is possible to do a wider
load and only splat a half of it. This patch corrects this assumption by
simply checking if the load is permuted and adjusting the offset if it is.

(cherry picked from commit 7d076e19e31a2a32e357cbdcf0183f88fe1fb0fb)
2020-07-27 16:25:51 +02:00
Craig Topper
3e4949e540 [LegalizeTypes] Teach DAGTypeLegalizer::GenWidenVectorLoads to pad with undef if needed when concatenating small or loads to match a larger load
In the included test case the align 16 allowed the v23f32 load to be handled as load v16f32, load v4f32, and load v4f32 (one element not used). These loads all need to be concatenated together into a final vector. In this case we tried to concatenate the two v4f32 loads to match the type of the v16f32 load so we could do a second concat_vectors, but those loads alone only add up to v8f32. So we need two v4f32 undefs to pad it.

It appears we've tried to hack around a similar issue in this code before by adding undef padding to loads in one of the earlier loops in this function: originally in r147964, by padding all loads narrower than previous loads to the same size, and later modified in r293088 to pad only the last load. This patch removes that earlier code and just handles it on demand where we know we need it.
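The padding arithmetic can be sketched in Python (hypothetical helper; for the v23f32 case, two v4f32 loads give only 8 of the 16 elements needed to match the v16f32 load, so two undef v4f32 chunks are required):

```python
def undef_pads_needed(wide_elts, small_chunks):
    """Number of undef chunks (each as wide as the small loads) needed so
    the concatenated small loads reach the width of the wide load."""
    chunk = small_chunks[0]          # all small loads share this width
    have = sum(small_chunks)
    return (wide_elts - have) // chunk
```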

Fixes PR46820

Differential Revision: https://reviews.llvm.org/D84463

(cherry picked from commit 8131e190647ac2b5b085b48a6e3b48c1d7520a66)
2020-07-27 16:20:07 +02:00