mirror of https://github.com/RPCS3/llvm-mirror.git synced 2025-01-31 12:41:49 +01:00

3875 Commits

Author SHA1 Message Date
Sanjay Patel
e023c2fe80 [AArch64] add tests for store merge of truncs; NFC 2020-08-22 14:54:40 -04:00
Cameron McInally
5b6246666b [SVE] Lower fixed length UDIV to scalable
Pretty much just a copy of the SDIV patches (D86114 and D85982) with string replacement.

Differential Revision: https://reviews.llvm.org/D86316
2020-08-21 09:01:25 -05:00
Jay Foad
6d725be5b3 [SelectionDAG] Better legalization for FSHL and FSHR
In SelectionDAGBuilder always translate the fshl and fshr intrinsics to
FSHL and FSHR (or ROTL and ROTR) instead of lowering them to shifts and
ORs. Improve the legalization of FSHL and FSHR to avoid code quality
regressions.
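
As an illustration, a minimal IR sketch (function and value names are invented) of a funnel-shift call that now reaches legalization as an FSHL node instead of being expanded to shifts and ORs up front:

```
; Funnel shift left: conceptually concatenate %x:%y and shift left by %z,
; keeping the high half.
declare i32 @llvm.fshl.i32(i32, i32, i32)

define i32 @fshl_example(i32 %x, i32 %y, i32 %z) {
  %r = call i32 @llvm.fshl.i32(i32 %x, i32 %y, i32 %z)
  ret i32 %r
}
```

When %x and %y are the same value this is a rotate, which is why ROTL/ROTR are mentioned above.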

Differential Revision: https://reviews.llvm.org/D77152
2020-08-21 10:32:49 +01:00
Cameron McInally
640a9a840f [NFCI][SVE] Move fixed length i32/i64 SDIV tests
Move fixed length SDIV tests from sve-fixed-length-int-arith.ll to sve-fixed-length-int-div.ll. The former uses CHECK lines that verify legalization decisions. That's overkill for the i32/i64 SDIV tests, since they have a trivial legalization.
2020-08-20 14:46:26 -05:00
Cameron McInally
06340b3cd4 [SVE] Lower fixed length vXi8/vXi16 SDIV to scalable
There are no nxv16i8/nxv8i16 SDIV instructions, so these fixed width operations must be promoted to nxv4i32.
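
For illustration, a minimal IR sketch (vector width and names invented) of the kind of fixed-width divide this covers, which has to be widened to 32-bit elements before an SVE divide can be used:

```
define <16 x i8> @sdiv_v16i8(<16 x i8> %a, <16 x i8> %b) {
  ; No byte-sized SVE SDIV exists, so the operands are sign-extended to
  ; 32-bit elements, divided, and truncated back.
  %r = sdiv <16 x i8> %a, %b
  ret <16 x i8> %r
}
```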

Differential Revision: https://reviews.llvm.org/D86114
2020-08-20 13:47:01 -05:00
Paul Walker
fdacd25874 [SVE] Add ISEL patterns for predicated shifts by an immediate.
For scalable vector shifts the predicate is typically all active,
which gets selected to an unpredicated shift by immediate.  When
generating code for fixed length vectors the predicate is based
on the vector length, so additional patterns are required to
make use of SVE's predicated shift-by-immediate instructions.

Differential Revision: https://reviews.llvm.org/D86204
2020-08-20 11:47:20 +01:00
Konstantin Schwarz
cfcda8d055 [GlobalISel][IRTranslator] Support PHI instructions in landingpad blocks
The check for the landingpad instructions was overly restrictive. In optimized builds PHI nodes can appear
before the landingpad instruction, resulting in a fallback to SelectionDAG.

This change relaxes the check to allow PHI nodes.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D86141
2020-08-20 10:49:31 +02:00
Raul Tambre
90fec87fdc [AArch64][GlobalISel] Handle rtcGPR64RegClassID in AArch64RegisterBankInfo::getRegBankFromRegClass()
TargetRegisterInfo::getMinimalPhysRegClass() returns rtcGPR64RegClassID for X16
and X17, as it's the last matching class. This in turn gets passed to
AArch64RegisterBankInfo::getRegBankFromRegClass(), which hits an unreachable.

It seems sensible to handle this case, so copies from X16 and X17 work.
Copying from X17 is used in inline assembly in libunwind for pointer
authentication.

Differential Revision: https://reviews.llvm.org/D85720
2020-08-19 12:52:30 -07:00
Jessica Paquette
153c17604a [GlobalISel] Add combine for (x & mask) -> x when (x & mask) == x
If we have a mask, and a value x, where (x & mask) == x, we can drop the AND
and just use x.

This is about a 0.4% geomean code size improvement on CTMark at -O3 for AArch64.

In AArch64, this is most useful post-legalization. Patterns like this often
show up when legalizing s1s, which must be extended to larger types.

e.g.

```
%cmp:_(s32) = G_ICMP ...
%and:_(s32) = G_AND %cmp, 1
```

Since G_ICMP only produces a single bit, there's no reason to mask it with the
G_AND.

Differential Revision: https://reviews.llvm.org/D85463
2020-08-19 10:20:57 -07:00
Paul Walker
e3dc616e5a [SVE] Add tests for fixed length vector integer operations with immediate operands. 2020-08-19 11:12:03 +01:00
David Sherwood
f7a1832d69 [SVE][CodeGen] Fix scalable vector issues in DAGTypeLegalizer::GenWidenVectorLoads
In DAGTypeLegalizer::GenWidenVectorLoads the algorithm assumes it only
ever deals with fixed width types, hence the offsets for each individual
store never take 'vscale' into account. I've changed the code in that
function to use TypeSize instead of unsigned for tracking the remaining
load amount. In addition, I've changed the load loop to use the new
IncrementPointer helper function for updating the addresses in each
iteration, since this handles scalable vector types.

Also, I've added report_fatal_errors in GenWidenVectorExtLoads,
TargetLowering::scalarizeVectorLoad and TargetLowering::scalarizeVectorStore,
since these functions currently use a sequence of element-by-element
scalar loads/stores. In a similar vein, I've also added a fatal error
report in FindMemType for the case when we decide to return the element
type for a scalable vector type.

I've added new tests in

  CodeGen/AArch64/sve-split-load.ll
  CodeGen/AArch64/sve-ld-addressing-mode-reg-imm.ll

for the changes in GenWidenVectorLoads.

Differential Revision: https://reviews.llvm.org/D85909
2020-08-19 07:54:32 +01:00
Eli Friedman
610e3a8c86 [AArch64][SVE] Add patterns for integer mla/mls.
We probably want to introduce pseudo-instructions at some point, like
we have for binary operations, but this seems okay for now.

One thing I'm not sure about is whether we should be doing this as a
DAGCombine instead of directly pattern-matching it. I don't see any big
downside to doing it this way, though.
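
A minimal IR sketch of the mul+add shape these patterns match (element type and names are illustrative, not taken from the patch):

```
define <vscale x 4 x i32> @mla(<vscale x 4 x i32> %acc, <vscale x 4 x i32> %b, <vscale x 4 x i32> %c) {
  ; add(acc, mul(b, c)) can now be selected as a single SVE mla.
  %mul = mul <vscale x 4 x i32> %b, %c
  %res = add <vscale x 4 x i32> %acc, %mul
  ret <vscale x 4 x i32> %res
}
```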

Differential Revision: https://reviews.llvm.org/D85681
2020-08-18 12:51:16 -07:00
Eli Friedman
2be753c211 [AArch64][SVE] Allow llvm.aarch64.sve.st2/3/4 with vectors of pointers.
This isn't necessary for ACLE, but could be useful in other situations.
And the change is simple.

Differential Revision: https://reviews.llvm.org/D85251
2020-08-18 12:51:16 -07:00
Jessica Paquette
761fea8dc0 [GlobalISel][CallLowering] Don't tail call with non-forwarded explicit sret
Similar to this commit:

faf8065a99817bcb10e6f09b558fe3e0972c35ce

Testcase is pretty much the same as

test/CodeGen/AArch64/tailcall-explicit-sret.ll

Except it uses i64 (since we don't handle the i1024 return values yet), and
doesn't have indirect tail call testcases (because we can't translate those
yet).
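
A rough sketch of the non-forwarded case (old-style untyped sret and i64 chosen for illustration; not the exact testcase): the sret pointer passed to the callee is a local of the caller rather than the caller's own incoming sret, so the call must not be lowered as a tail call.

```
declare void @callee(i64* sret)

define void @caller() {
  %tmp = alloca i64
  ; %tmp is not a forwarded incoming sret argument, so an actual tail call
  ; would let the callee write to stack that dies with the caller.
  tail call void @callee(i64* sret %tmp)
  ret void
}
```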

Differential Revision: https://reviews.llvm.org/D86148
2020-08-18 11:06:57 -07:00
Amara Emerson
f6bce1ffcd [GlobalISel] Add a combine for sext_inreg(load x), c --> sextload x
This is restricted to single-use loads; if we fold them to sextloads, we can
find more optimal addressing modes on AArch64.

This also fixes an overload of the MachineFunction::getMachineMemOperand() method
which was incorrectly using the MF alignment instead of the MMO alignment.
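
A minimal generic-MIR sketch of the fold (register names and memory-operand syntax are illustrative):

```
%ptr:_(p0) = COPY $x0
%val:_(s32) = G_LOAD %ptr(p0) :: (load 1)
%ext:_(s32) = G_SEXT_INREG %val, 8
; with the load having a single use, this becomes
%ext:_(s32) = G_SEXTLOAD %ptr(p0) :: (load 1)
```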

Differential Revision: https://reviews.llvm.org/D85966
2020-08-18 10:42:15 -07:00
Amara Emerson
d1d273ff1c [GlobalISel] Add a combine for ashr(shl x, c), c --> sext_inreg x, c'
By detecting this sign extend pattern early, we can uncover opportunities for
more optimizations.
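
A minimal generic-MIR sketch (registers illustrative; c' is the bit width minus c):

```
%x:_(s32) = COPY $w0
%c:_(s32) = G_CONSTANT i32 24
%shl:_(s32) = G_SHL %x, %c(s32)
%ashr:_(s32) = G_ASHR %shl, %c(s32)
; recognized as a sign extension of the low 8 bits:
%ashr:_(s32) = G_SEXT_INREG %x, 8
```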

Differential Revision: https://reviews.llvm.org/D85965
2020-08-18 10:42:15 -07:00
Jessica Paquette
7e08e6c7a3 [GlobalISel][CallLowering] Look through call parameters for flags
We weren't looking through the parameters on calls at all.

E.g., say you had

```
declare i32 @zext(i32 zeroext %x)

...
%y = call i32 @zext(i32 %something)
...

```

At the point of the call, we wouldn't know that the %something should have the
zeroext attribute.

This sets flags in about the same way as
TargetLoweringBase::ArgListEntry::setAttributes.

Differential Revision: https://reviews.llvm.org/D86125
2020-08-18 08:48:56 -07:00
Paul Walker
17b7f05023 [SVE] Fix shift-by-imm patterns used by asr, lsl & lsr intrinsics.
Right shift patterns will no longer incorrectly accept a shift
amount of zero.  At the same time they will allow larger shift
amounts that are now saturated to their upper bound.

Patterns have been extended to enable immediate forms for shifts
taking an arbitrary predicate.

This patch also unifies the code path for immediate parsing so the
i64 based shifts are no longer treated specially.

Differential Revision: https://reviews.llvm.org/D86084
2020-08-18 11:41:26 +01:00
Paul Walker
9d2dcdd5f6 [SVE] Lower fixed length vector ISD::SPLAT_VECTOR operations.
This also strengthens the CHECK lines for the scalable vector splat tests.
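
For reference, a minimal IR sketch (names and width invented) of the fixed length splat idiom that now goes through the scalable SPLAT_VECTOR lowering when SVE fixed-length lowering is in use:

```
define <8 x i32> @splat_v8i32(i32 %v) {
  ; classic insertelement + shufflevector splat pattern
  %ins = insertelement <8 x i32> undef, i32 %v, i32 0
  %splat = shufflevector <8 x i32> %ins, <8 x i32> undef, <8 x i32> zeroinitializer
  ret <8 x i32> %splat
}
```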

Differential Revision: https://reviews.llvm.org/D86070
2020-08-18 11:19:43 +01:00
QingShan Zhang
3efdf8c410 [Test][NFC] Add a new test to verify whether the scheduler can cluster two ld/st
even with different preds
2020-08-18 09:42:15 +00:00
Dávid Bolvanský
26599cbe3f Revert "[BPI] Improve static heuristics for integer comparisons"
This reverts commit 50c743fa713002fe4e0c76d23043e6c1f9e9fe6f. The patch will be split into smaller ones.
2020-08-17 20:44:33 +02:00
Vitaly Buka
6f71d99b21 [StackSafety] Skip ambiguous lifetime analysis
If we can't identify the alloca used in a lifetime marker, we
need to assume the worst-case scenario.
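
A minimal IR sketch of an ambiguous marker (illustrative; typed pointers as used at the time): the lifetime intrinsic is applied to a select of two allocas, so the analysis cannot tell which object is marked and must assume the worst case.

```
declare void @llvm.lifetime.start.p0i8(i64, i8*)
declare void @llvm.lifetime.end.p0i8(i64, i8*)

define void @ambiguous(i1 %c) {
  %a = alloca i8
  %b = alloca i8
  ; The marked object depends on %c, so it cannot be identified statically.
  %p = select i1 %c, i8* %a, i8* %b
  call void @llvm.lifetime.start.p0i8(i64 1, i8* %p)
  call void @llvm.lifetime.end.p0i8(i64 1, i8* %p)
  ret void
}
```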

Reviewed By: eugenis

Differential Revision: https://reviews.llvm.org/D84630
2020-08-16 18:05:52 -07:00
Amara Emerson
23bcf50ec2 [GlobalISel] Enable copy-propagation in post-legalizer combiner.
This cleans up copies that the legalizer or other combines leave around. They
can occasionally end up escaping as moves.
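
A minimal generic-MIR sketch of the kind of leftover copy this cleans up (registers are illustrative):

```
%0:_(s64) = COPY $x0
%1:_(s64) = COPY %0(s64)
%2:_(s64) = G_ADD %1, %1
; copy-propagation rewrites the uses and deletes the intermediate COPY:
%2:_(s64) = G_ADD %0, %0
```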

Differential Revision: https://reviews.llvm.org/D85964
2020-08-15 13:44:30 -07:00
Cameron McInally
d19dc4d38b [SVE] Lower fixed length vXi32/vXi64 SDIV to scalable vectors.
Differential Revision: https://reviews.llvm.org/D85982
2020-08-14 18:47:22 -05:00
Matt Arsenault
2348cd927a TableGen/GlobalISel: Partially handle immAllOnesV/immAllZerosV
These should really match either G_BUILD_VECTOR or
G_BUILD_VECTOR_TRUNC, but there doesn't seem to be an existing
mechanism for matching alternative opcodes. There is GIM_SwitchOpcode,
but it seems to assume it's only used for matcher optimization.

I could also omit any opcode check and rely on the matcher directly
checking the opcode, but the table optimizer currently assumes there
has to be an opcode check.

Also doesn't try to handle undef elements like the DAG version.
2020-08-14 13:55:30 -04:00
David Sherwood
6c42bb3937 [SVE] Fix bug in SVEIntrinsicOpts::optimizePTest
The code wasn't taking into account that the two operands
passed to ptest could be identical and was trying to erase
them twice.

Differential Revision: https://reviews.llvm.org/D85892
2020-08-14 07:57:21 +01:00
Ben Dunbobbin
f7bff6ec55 [X86][ELF] Prefer lowering MC_GlobalAddress operands to .Lfoo$local for STV_DEFAULT only
This patch restricts the behaviour of referencing via .Lfoo$local
local aliases, introduced in https://reviews.llvm.org/D73230, to
STV_DEFAULT globals only.

Hidden symbols via -fvisibility=hidden (https://gcc.gnu.org/wiki/Visibility)
are an important scenario.

Benefits:

- Improves the size of object files by using fewer STT_SECTION symbols.

- The code reads a bit better (it was not obvious to me without going
  back to the code reviews why the canBenefitFromLocalAlias function
  currently doesn't consider visibility).

- There is also a side benefit in restoring the effectiveness of the
  --wrap linker option and making the behavior of --wrap consistent
  between LTO and normal builds for references within a translation-unit.
  Note: this --wrap behavior (which is specific to LLD) should not be
  considered reliable. See comments on https://reviews.llvm.org/D73230
  for more.
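
A minimal IR sketch of the distinction (globals invented for illustration, PIC code generation assumed): only the default-visibility dso_local definition may still be referenced through its .Lfoo$local alias; the hidden one is referenced directly.

```
@vis_default = dso_local global i32 0        ; STV_DEFAULT: may use .Lvis_default$local
@vis_hidden = hidden dso_local global i32 0  ; hidden: referenced by name, no local alias

define dso_local i32 @read_both() {
  %a = load i32, i32* @vis_default
  %b = load i32, i32* @vis_hidden
  %r = add i32 %a, %b
  ret i32 %r
}
```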

Differential Revision: https://reviews.llvm.org/D85782
2020-08-14 00:09:15 +01:00
Cameron McInally
5c7414d1a6 [SVE] Lower fixed length vector integer UMIN/UMAX
Differential Revision: https://reviews.llvm.org/D85926
2020-08-13 14:48:36 -05:00
Dávid Bolvanský
7129f2d26c [BPI] Improve static heuristics for integer comparisons
As with pointers, even for integers a == b is usually false.

GCC also uses this heuristic.

Reviewed By: ebrevnov

Differential Revision: https://reviews.llvm.org/D85781
2020-08-13 19:54:27 +02:00
Cameron McInally
53fa7800fd [SVE] Lower fixed length vector integer SMIN/SMAX
Differential Revision: https://reviews.llvm.org/D85855
2020-08-13 11:41:20 -05:00
Sanjay Patel
a0e7db9160 [AArch64][x86] add tests for x/sqrt(x); NFC 2020-08-13 11:34:56 -04:00
Dávid Bolvanský
baa55bd4d6 Revert "[BPI] Improve static heuristics for integer comparisons"
This reverts commit 44587e2f7e732604cd6340061d40ac21e7e188e5. Sanitizer tests need to be updated.
2020-08-13 14:37:40 +02:00
Dávid Bolvanský
f4c1a714d0 [BPI] Improve static heuristics for integer comparisons
As with pointers, even for integers a == b is usually false.

GCC also uses this heuristic.

Reviewed By: ebrevnov

Differential Revision: https://reviews.llvm.org/D85781
2020-08-13 14:23:58 +02:00
Paul Walker
712e02d446 [SVE] Lower fixed length vector integer shifts.
Differential Revision: https://reviews.llvm.org/D85724
2020-08-13 12:35:47 +01:00
Kerry McLaughlin
2e0bb7d648 [SVE][CodeGen] Legalisation of EXTRACT_VECTOR_ELT for scalable vectors
This patch changes SplitVecOp_EXTRACT_VECTOR_ELT to work correctly
for scalable vectors and also fixes a bug in DAGCombiner where
the scalable property is dropped in visitTRUNCATE when attempting
to fold an extract with a truncate.

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D85754
2020-08-13 12:32:59 +01:00
Paul Walker
036bc786e9 [SVE] Lower fixed length vector integer ISD::SETCC operations.
Differential Revision: https://reviews.llvm.org/D85831
2020-08-13 12:01:56 +01:00
Dávid Bolvanský
aecc53e597 Revert "[BPI] Improve static heuristics for integer comparisons"
This reverts commit 385c9d673f217e176b18e7bf6fe055154ac589c6.
2020-08-13 12:59:15 +02:00
Paul Walker
9cfbfc6b59 [SVE] Lower fixed length integer extend operations.
Differential Revision: https://reviews.llvm.org/D85640
2020-08-13 11:54:53 +01:00
Dávid Bolvanský
b38379d5d6 [BPI] Improve static heuristics for integer comparisons
As with pointers, even for integers a == b is usually false.

GCC also uses this heuristic.

Reviewed By: ebrevnov

Differential Revision: https://reviews.llvm.org/D85781
2020-08-13 12:45:40 +02:00
David Sherwood
ac8a8b4ad9 [SVE][CodeGen] Fix scalable vector issues in DAGTypeLegalizer::GenWidenVectorStores
In DAGTypeLegalizer::GenWidenVectorStores the algorithm assumes it only
ever deals with fixed width types, hence the offsets for each individual
store never take 'vscale' into account. I've changed the main loop in
that function to use TypeSize instead of unsigned for tracking the
remaining store amount and offset increment. In addition, I've changed
the loop to use the new IncrementPointer helper function for updating
the addresses in each iteration, since this handles scalable vector
types.

Whilst fixing this function I also fixed a minor issue in
IncrementPointer whereby we were not adding the no-unsigned-wrap flag
for the add instruction in the same way as the fixed width case does.

Also, I've added a report_fatal_error in GenWidenVectorTruncStores,
since this code currently uses a sequence of element-by-element scalar
stores.

I've added new tests in

  CodeGen/AArch64/sve-intrinsics-stores.ll
  CodeGen/AArch64/sve-st1-addressing-mode-reg-imm.ll

for the changes in GenWidenVectorStores.

Differential Revision: https://reviews.llvm.org/D84937
2020-08-13 11:07:17 +01:00
David Sherwood
8fe44dff5f [CodeGen] In narrowExtractedVectorLoad bail out for scalable vectors
In narrowExtractedVectorLoad there is an optimisation that tries to
combine extract_subvector with a narrowing vector load. At the moment
this produces warnings due to the incorrect calls to
getVectorNumElements() for scalable vector types. I've got this
working for scalable vectors too when the extract subvector index
is a multiple of the minimum number of elements. I have added a
new variant of the function:

  MachineFunction::getMachineMemOperand

that copies an existing MachineMemOperand, but replaces the pointer
info with a null version since we cannot currently represent scaled
offsets.

I've added a new test for this particular case in:

  CodeGen/AArch64/sve-extract-subvector.ll

Differential Revision: https://reviews.llvm.org/D83950
2020-08-13 10:46:18 +01:00
Francesco Petrogalli
6fee8f8f2e [SVE][VLS] Don't combine logical AND.
Testing is performed when targeting 128, 256 and 512-bit wide vectors.

For 128-bit vectors, the original behavior of using NEON instructions is
preserved.

Differential Revision: https://reviews.llvm.org/D85479
2020-08-12 20:00:07 +01:00
Amara Emerson
a56d446c69 [GlobalISel] Implement bit-test switch table optimization.
This is mostly a straight port from SelectionDAG. We re-use the actual bit-test
analysis part from SwitchLoweringUtils, which was factored out earlier to
support jump-tables.
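
As a rough illustration (not from the patch), the classic shape that benefits: a switch whose cases fall in a small range, which can be lowered to a single mask-and-test instead of a jump table or compare chain.

```
define i1 @is_vowel(i8 %c) {
entry:
  switch i8 %c, label %no [
    i8 97, label %yes    ; 'a'
    i8 101, label %yes   ; 'e'
    i8 105, label %yes   ; 'i'
    i8 111, label %yes   ; 'o'
    i8 117, label %yes   ; 'u'
  ]
yes:
  ret i1 true
no:
  ret i1 false
}
```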

Differential Revision: https://reviews.llvm.org/D85233
2020-08-12 11:31:39 -07:00
Cameron McInally
f5557910ab [SVE] Lower fixed length FP minnum/maxnum
Lower fixed length MINNUM/MAXNUM to scalable vectors. Cherry-picked from D71767 with added tests.

Differential Revision: https://reviews.llvm.org/D85744
2020-08-12 12:02:52 -05:00
David Sherwood
96c98d5a46 [SVE][CodeGen] Fix issues with EXTRACT_SUBVECTOR when using scalable FP vectors
In this patch I have fixed two issues:

1. Our SVE tuple get/set intrinsics were using the wrong constant type
for the index passed to EXTRACT_SUBVECTOR. I have fixed this by using the
function SelectionDAG::getVectorIdxConstant to create the value. Also, I
have updated the documentation for EXTRACT_SUBVECTOR describing what type
the constant index should be and we now enforce this when creating the
node.
2. The AArch64 backend was missing the appropriate patterns for
extracting certain subvectors (nxv4f16 and nxv2f32) from legal SVE types.
I have added them as part of this patch.

The only way that I could find to test the new patterns was to use the
SVE tuple get intrinsics, although I realise it looks a bit unusual.
Tests added here:

  test/CodeGen/AArch64/sve-extract-subvector.ll

Differential Revision: https://reviews.llvm.org/D85516
2020-08-12 08:35:46 +01:00
Jessica Paquette
8e321c8c5a [GlobalISel] Combine (logic_op (op x...), (op y...)) -> (op (logic_op x, y))
This implements

```
(logic_op (op x...), (op y...)) -> (op (logic_op x, y))
```

when `op` is an extend, a shift, or an and.
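
For example, a minimal generic-MIR sketch with `op` being G_SEXT (registers illustrative):

```
%sx:_(s64) = G_SEXT %x(s32)
%sy:_(s64) = G_SEXT %y(s32)
%logic:_(s64) = G_AND %sx, %sy
; becomes
%narrow:_(s32) = G_AND %x, %y
%logic:_(s64) = G_SEXT %narrow(s32)
```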

This is similar to `DAGCombiner::hoistLogicOpWithSameOpcodeHands`
(with a bunch of missing cases, e.g. G_TRUNC, G_BITCAST, etc.)

This is implemented so it works both pre and post-legalization.

This also adds a general way to add a series of instructions in a combine.
(`applyBuildInstructionSteps`).

Differential Revision: https://reviews.llvm.org/D85050
2020-08-11 10:40:06 -07:00
Jay Foad
928c1dd7ef [GlobalISel] Add G_ABS
This is equivalent to the new llvm.abs intrinsic added by D84125 with
is_int_min_poison=0.
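
Concretely, G_ABS corresponds to the poison-free form of the intrinsic (a minimal sketch; names illustrative):

```
declare i32 @llvm.abs.i32(i32, i1)

define i32 @abs32(i32 %x) {
  ; the second operand is is_int_min_poison; G_ABS matches the 'false' case
  %r = call i32 @llvm.abs.i32(i32 %x, i1 false)
  ret i32 %r
}
```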

Differential Revision: https://reviews.llvm.org/D85718
2020-08-11 16:34:37 +01:00
Matt Arsenault
c9ab6823ab TableGen/GlobalISel: Hack the operand order for atomic_store
ISD::ATOMIC_STORE arbitrarily has the operands in the opposite order
from regular ISD::STORE, which always introduced an annoying
duplication of patterns to handle both cases. Since in GlobalISel
there's just the one G_STORE, we need to swap the operands to
correctly emit the type check for the pointer operand.

Some work started in 20aafa31569b5157e792daa8860d71dd0df8a53a to
migrate SelectionDAG to use ISD::STORE for atomics, but that work
seems to have stalled. Since this is pretty much the last operation
that matters which isn't supported for AMDGPU, use this
compatibility hack to unblock declaring it functionally complete.

Not sure what's going on with the pending_phis AArch64 test. It seems
it didn't always use atomics, and I'm not sure whether what it was
originally testing still matters.
2020-08-11 10:22:44 -04:00
Kerry McLaughlin
a9073b6100 [SVE][CodeGen] Legalisation of INSERT_VECTOR_ELT for scalable vectors
When the result type of insertelement needs to be split,
SplitVecRes_INSERT_VECTOR_ELT will try to store the vector to a
stack temporary, store the element at the location of the stack
temporary plus the index, and reload the Lo/Hi parts.

This patch does the following to ensure this works for scalable vectors:
 - Sets the StackID with getStackIDForScalableVectors() in CreateStackTemporary
 - Adds an IsScalable flag to getMemBasePlusOffset() and scales the
    offset by VScale when this is true
 - Ensures the immediate is clamped correctly by clampDynamicVectorIndex
    so that we don't try to use an out of range index
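
A minimal IR sketch (type chosen for illustration) of an insert whose scalable result type must be split, exercising the path described above:

```
define <vscale x 32 x i8> @insert_nxv32i8(<vscale x 32 x i8> %vec, i8 %elt, i64 %idx) {
  ; nxv32i8 is wider than the largest legal SVE type (nxv16i8), so the
  ; result is split and the variable index goes via a stack temporary.
  %r = insertelement <vscale x 32 x i8> %vec, i8 %elt, i64 %idx
  ret <vscale x 32 x i8> %r
}
```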

Reviewed By: david-arm

Differential Revision: https://reviews.llvm.org/D84874
2020-08-11 12:57:28 +01:00
Paul Walker
04aa37ecb2 [SVE] Lower fixed length vector integer subtract operations.
Differential Revision: https://reviews.llvm.org/D85665
2020-08-11 11:32:12 +01:00