mirror of https://github.com/RPCS3/llvm-mirror.git synced 2025-02-01 05:01:59 +01:00

30028 Commits

Author SHA1 Message Date
Jinsong Ji
4b2d844f4f [PowerPC][NFC][MachinePipeliner] Add some regression testcases
Exposed by refactoring in https://reviews.llvm.org/D64665.

llvm-svn: 367732
2019-08-02 22:27:44 +00:00
Douglas Yung
103ef490da Revert Fix and test inter-procedural register allocation for ARM
This reverts r367669 (git commit f6b00c279a5587a25876752a6ecd8da0bed959dc)

This was breaking a build bot http://lab.llvm.org:8011/builders/netbsd-amd64/builds/21233

llvm-svn: 367731
2019-08-02 22:11:49 +00:00
Yonghong Song
1f886e2f2a [BPF] annotate DIType metadata for builtin preseve_array_access_index()
Previously, debuginfo types were annotated for the
IR builtins preserve_struct_access_index() and
preserve_union_access_index(), but not for
preserve_array_access_index(). The debug info
is useful to identify the root type name, which
will later be used for type comparison.

For user accesses without explicit type conversions,
the previous scheme works, as we can ignore intermediate
compiler-generated type conversions (e.g., from union types to
union members) and still generate a correct access index string.

The issue comes with explicit user type conversions, e.g.,
converting an array to a structure as below:
  struct t { int a; char b[40]; };
  struct p { int c; int d; };
  struct t *var = ...;
  ... __builtin_preserve_access_index(&(((struct p *)&(var->b[0]))->d)) ...
Although the BPF backend can derive the type of &(var->b[0]),
an explicit type annotation makes checking more consistent
and less error-prone.

Another benefit is for multi-dimensional array handling.
For example,
  struct p { int c; int d; } g[8][9][10];
  ... __builtin_preserve_access_index(&g[2][3][4].d) ...
It would be possible to calculate the number of "struct p"'s
before accessing its member "d" if array debug info is
available, as it contains each dimension's range.

This patch enables annotating the IR builtin preserve_array_access_index()
with the proper debuginfo type. The unit test case and language reference
are updated as well.

Signed-off-by: Yonghong Song <yhs@fb.com>

Differential Revision: https://reviews.llvm.org/D65664

llvm-svn: 367724
2019-08-02 21:28:28 +00:00
Amara Emerson
785f4c1ed2 [AArch64][GlobalISel] Eliminate redundant G_ZEXT when the source is implicitly zext-loaded.
These cases can come up when the extending loads combiner doesn't combine a
zext(load) to a zextload op, due to some other operation being in between, which
then gets simplified at a later stage.

Differential Revision: https://reviews.llvm.org/D65360

llvm-svn: 367723
2019-08-02 21:15:36 +00:00
Philip Reames
050c82570a [Statepoints] Fix overalignment of loads in no-realign-stack functions
This really should have been part of r366765. For some reason, I forgot to handle the corresponding load side, and the readable test cases (using deopt vs statepoints) turned out to be overly reduced. Oops.

As seen in the test change, the problem was that we were using a load with alignment expectations rather than the unaligned variant when the stack alignment was less than the preferred type alignment.

llvm-svn: 367718
2019-08-02 20:17:37 +00:00
Craig Topper
6d76a4bcea [ScalarizeMaskedMemIntrin] Add constant mask support to expandload and compressstore scalarization
This adds support for generating all the loads or stores for a constant mask into a single basic block with no conditionals.
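
As a rough C-level sketch of the straight-line form (the 4-element shape and the {1, 0, 1, 1} mask are illustrative assumptions, not taken from the patch):

  #include <stdint.h>

  /* expandload with a constant mask {1, 0, 1, 1}: active lanes consume
     consecutive source elements; inactive lanes keep the pass-through value.
     With the mask known at compile time, no per-lane branches are needed. */
  void expandload_const_mask(int32_t dst[4], const int32_t *src,
                             const int32_t passthru[4]) {
    dst[0] = src[0];      /* mask bit 0 set: consumes src element 0 */
    dst[1] = passthru[1]; /* mask bit 1 clear: no load emitted */
    dst[2] = src[1];      /* mask bit 2 set: consumes src element 1 */
    dst[3] = src[2];      /* mask bit 3 set: consumes src element 2 */
  }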

Differential Revision: https://reviews.llvm.org/D65613

llvm-svn: 367715
2019-08-02 20:04:34 +00:00
Philip Reames
f4079cb7ec [Test] Demonstrate a realignment bug missed in r366765
llvm-svn: 367714
2019-08-02 20:01:43 +00:00
Sanjay Patel
6449e66c97 [DAGCombiner] try to convert opposing shifts to casts
This reverses a questionable IR canonicalization when a truncate
is free:

sra (add (shl X, N1C), AddC), N1C -->
sext (add (trunc X to (width - N1C)), AddC')

https://rise4fun.com/Alive/slRC

More details in PR42644:
https://bugs.llvm.org/show_bug.cgi?id=42644

I limited this to pre-legalization for code simplicity because that
should be enough to reverse the IR patterns. I don't have any
evidence (no regression test diffs) that we need to try this later.
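
As a hypothetical C illustration of the pattern for i32 with N1C = 24 and AddC = 0x01000000 (values chosen here for the example, not from the patch), both functions below compute the same result on typical targets where right-shifting a negative int is an arithmetic shift:

  #include <stdint.h>

  /* sra (add (shl x, 24), 0x01000000), 24 */
  int32_t before(int32_t x) {
    return (int32_t)(((uint32_t)x << 24) + 0x01000000u) >> 24;
  }

  /* sext (add (trunc x to i8), 1), where 1 == AddC >> 24 */
  int32_t after(int32_t x) {
    return (int32_t)(int8_t)((uint8_t)x + 1);
  }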

Differential Revision: https://reviews.llvm.org/D65607

llvm-svn: 367710
2019-08-02 19:33:46 +00:00
Jessica Paquette
7f4d840652 [AArch64][GlobalISel] Support the neg_addsub_shifted_imm32 pattern
Add an equivalent ComplexRendererFns function for SelectNegArithImmed. This
allows us to select immediate adds of -1 by turning them into subtracts.
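
A trivial C shape that hits this (illustrative only):

  int add_minus_one(int x) {
    return x + (-1); /* with this pattern, selectable as a subtract of +1 */
  }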

Update select-binop.mir to show that the pattern works.

Differential Revision: https://reviews.llvm.org/D65460

llvm-svn: 367700
2019-08-02 18:12:53 +00:00
Simon Pilgrim
538c639d53 [AMDGPU] Regenerated saddo.ll test file for D47927
llvm-svn: 367698
2019-08-02 17:52:55 +00:00
Peter Collingbourne
35a0672873 CodeGen: Don't follow aliases when extracting type info.
This fixes a crash in the case where the type info object is an alias
pointing to a non-zero offset within a global or is otherwise unanalyzable
by the stripPointerCasts() function. Looking through the alias is not the
right thing to do anyway for similar reasons as D65118.

Differential Revision: https://reviews.llvm.org/D65314

llvm-svn: 367696
2019-08-02 17:43:45 +00:00
Tim Northover
2dffc88cad GlobalISel: support swiftself attribute
llvm-svn: 367683
2019-08-02 14:09:49 +00:00
Sanjay Patel
3f8e72c714 [x86] add/adjust tests for shift-add-shift; NFC
Goes with D65607.

llvm-svn: 367677
2019-08-02 11:50:03 +00:00
Oliver Stannard
fac519c9d7 [IPRA][ARM] Disable no-CSR optimisation for ARM
This optimisation isn't generally profitable for ARM, because we can
save/restore many registers in the prologue and epilogue using the PUSH
and POP instructions, but mostly use individual LDR/STR instructions for
other spills.

Differential revision: https://reviews.llvm.org/D64910

llvm-svn: 367670
2019-08-02 10:23:17 +00:00
Oliver Stannard
c00402ed88 Fix and test inter-procedural register allocation for ARM
- Avoid a crash when IPRA calls ARMFrameLowering::determineCalleeSaves
  with a null RegScavenger. Simply not updating the register scavenger
  is fine because IPRA only cares about the SavedRegs vector; the actual
  code of the function has already been generated at this point.
- Add a new hook to TargetRegisterInfo to get the set of registers which
  can be clobbered inside a call by linker-generated code, even if the
  compiler can see both sides.

Differential revision: https://reviews.llvm.org/D64908

llvm-svn: 367669
2019-08-02 10:23:05 +00:00
Kai Luo
bb3575b0df [PowerPC][Peephole] Check if extsw's second operand is a virtual register
Summary:
When combining `extsw` and `sldi` in `PPCMIPeephole`, we have to check
if `extsw`'s second operand is a virtual register, otherwise we might
get a miscompile.

Differential Revision: https://reviews.llvm.org/D65315

llvm-svn: 367645
2019-08-02 03:14:17 +00:00
Sanjay Patel
bf90d1ff80 [AArch64][x86] adjust tests with shift-add-shift; NFC
Prevent folding away the math completely.

llvm-svn: 367612
2019-08-01 21:08:08 +00:00
Sanjay Patel
7db7973eed [AArch64][x86] add tests for shift-add-shift; NFC (PR42644)
llvm-svn: 367607
2019-08-01 20:32:27 +00:00
Matt Arsenault
31a388b7dd GlobalISel: Lower scalarizing unmerge of a vector to shifts
AMDGPU sometimes has legal s16 and <2 x s16> operations, but all
registers are really 32-bit. An unmerge destination really should be
widened to a 32-bit register. If widening a scalarizing vector with a
target size that matches the vector size, bitcast to integer and
extract the relevant bits with shifts.

I'm not sure if this is the right place for this. This could arguably
be part of widenScalar for the result. I also have a growing feeling
that we're missing a bitcast legalize action.
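
Conceptually (a hedged C sketch of the bit manipulation, not the MIR the legalizer emits), scalarizing an unmerge of <2 x s16> held in a 32-bit register comes down to:

  #include <stdint.h>

  /* Recover the two 16-bit "unmerge" results from one 32-bit register. */
  void unmerge_2xs16(uint32_t reg, uint16_t *lo, uint16_t *hi) {
    *lo = (uint16_t)reg;         /* element 0: low half */
    *hi = (uint16_t)(reg >> 16); /* element 1: extracted with a shift */
  }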

llvm-svn: 367604
2019-08-01 19:10:05 +00:00
Craig Topper
38dded7a11 [X86] In decomposeMulByConstant, legalize the VT before querying whether the multiply is legal
If a type is larger than a legal type and needs to be split, we would previously allow the multiply to be decomposed even if the split multiply is legal. Since the shift + add/sub code would also need to be split, it's not any better to decompose it.

This patch figures out what type the mul will eventually be legalized to and then uses that type for the query. I tried just returning false for illegal types and letting them get handled after type legalization, but then we can't recognize an i64 constant splat on 32-bit targets since it will be destroyed by type legalization. We could special-case vectors of i64 to avoid that...
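
For context, the decomposition in question rewrites a multiply by a suitable constant into shift + add/sub; an illustrative C sketch (the constant 9 is chosen only for the example):

  #include <stdint.h>

  uint64_t mul9(uint64_t x)            { return x * 9; }
  uint64_t mul9_decomposed(uint64_t x) { return (x << 3) + x; }

On a 32-bit target both forms of the 64-bit operation must be split anyway, which is why the legality query now uses the type the multiply will be legalized to.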

Differential Revision: https://reviews.llvm.org/D65533

llvm-svn: 367601
2019-08-01 18:49:07 +00:00
Craig Topper
e171364306 [X86] Add some test cases for 512-bit truncate to 128-bits with min-legal-vector-width=0 and prefer-vector-width=256.
We currently split the 512-bit type, truncate each half to 128 bits,
concatenate them, and then truncate again. It would probably be better to
truncate each half to 64 bits and then concat the results
using vpunpcklqdq.

llvm-svn: 367600
2019-08-01 18:48:57 +00:00
Matt Arsenault
55161e45ee AMDGPU: Remove v0 workaround for DS_GWS_* instructions
Any register should work for the src field as of r366067, since the
used value is not pulled from the expected encoding field.

llvm-svn: 367598
2019-08-01 18:41:32 +00:00
Matt Arsenault
d319c0742b GlobalISel: Fix widenScalar for G_MERGE_VALUES to pointer
AMDGPU testcase isn't broken now, but will be in a future patch
without this.

llvm-svn: 367591
2019-08-01 18:13:16 +00:00
Wouter van Oortmerssen
83a02771fb [WebAssembly] Assembler/InstPrinter: support call_indirect type index.
A TYPE_INDEX operand (as used by call_indirect) used to be represented
by the InstPrinter as a symbol (e.g. .Ltype_index0@TYPE_INDEX), which
was a bit of a mismatch with the WasmObjectWriter, which expects an
unnamed symbol to receive the signature from and then turn into a
reloc.

There was really no good way to round-trip this information. An earlier
version of this patch tried to attach the signature information using
a .functype, but that ran into trouble when the symbol was re-emitted
without a name. Removing the name was a giant hack also.

The current version changes the assembly syntax to have an inline
signature spec for TYPEINDEX operands that is always unnamed, which
is much more elegant both in syntax and in implementation (as now the
assembler is able to follow the same path as the regular backend).

Reviewers: sbc100, dschuff, aheejin, jgravelle-google, sunfish, tlively

Subscribers: arphaman, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D64758

llvm-svn: 367590
2019-08-01 18:08:26 +00:00
Simon Pilgrim
95bddec610 [TargetLowering] SimplifyMultipleUseDemandedBits - Add ISD::INSERT_VECTOR_ELT handling
Allow us to peek through vector insertions to avoid dependencies on entire insertion chains.

llvm-svn: 367588
2019-08-01 17:46:44 +00:00
Simon Pilgrim
bc7f5d0d92 [X86][SSE] Add PEXTR*(PINSR*(v, s, c), c) -> s combine.
We should probably extend this to cover bitcasts as well to help other cases in promote-vec3.ll.
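
In intrinsics terms (an illustrative sketch; the combine itself operates on the DAG nodes), the redundant pair looks like:

  #include <emmintrin.h>

  int roundtrip_lane2(__m128i v, int s) {
    __m128i t = _mm_insert_epi16(v, s, 2); /* PINSRW into lane 2 */
    return _mm_extract_epi16(t, 2);        /* PEXTRW of lane 2: the 16 bits just inserted */
  }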

llvm-svn: 367582
2019-08-01 16:38:39 +00:00
Simon Pilgrim
8a6da2044e [X86][SSE] SimplifyMultipleUseDemandedBits - Add PEXTR/PINSR B+W handling
This adds SimplifyMultipleUseDemandedBitsForTargetNode X86 support and uses it to allow us to peek through vector insertions to avoid dependencies on entire insertion chains.

llvm-svn: 367570
2019-08-01 14:46:03 +00:00
Simon Pilgrim
ab74e948b3 [X86] EltsFromConsecutiveLoads - don't attempt to merge volatile loads (PR42846)
llvm-svn: 367556
2019-08-01 13:13:18 +00:00
David Green
5bf94cf532 [ARM] Fix for MVE VREV64
The VREV64 instruction is apparently unpredictable if Qd == Qm, due to the
cross-beat nature of the instruction. This adds an earlyclobber to Qd, which
seems to be the same way we deal with this on other instructions like the
write-back on loads and stores.

Differential Revision: https://reviews.llvm.org/D65502

llvm-svn: 367544
2019-08-01 11:22:03 +00:00
Simon Pilgrim
c2f148f4bd [ARM] Regenerate BSWAP16 tests
llvm-svn: 367543
2019-08-01 11:12:10 +00:00
Sander de Smalen
3c4747236e [AArch64] Do not allocate unnecessary emergency slot.
Fix an issue where the compiler still allocates an emergency spill slot even
though it already decided to spill an extra callee-save register to use
as a scratch register.

Reviewers: gberry, thegameg, mstorsjo, t.p.northover

Reviewed By: thegameg

Differential Revision: https://reviews.llvm.org/D65504

llvm-svn: 367540
2019-08-01 10:53:45 +00:00
Petar Avramovic
53344cb357 [MIPS GlobalISel] Fold load/store + G_GEP + G_CONSTANT
Fold load/store + G_GEP + G_CONSTANT when the
immediate in G_CONSTANT fits into a 16-bit signed integer.
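
The range check, as a minimal C sketch (the fold itself operates on MIR; this only shows the signed 16-bit test):

  #include <stdint.h>

  /* An offset can only be folded into the addressing mode if it fits in a
     signed 16-bit immediate. */
  int fits_simm16(int64_t imm) {
    return imm >= -32768 && imm <= 32767;
  }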

Differential Revision: https://reviews.llvm.org/D65507

llvm-svn: 367535
2019-08-01 09:40:13 +00:00
David Zarzycki
eb6539b002 [Testing] Fix tests that break with read-only checkouts
Found with `mount --bind -o ro ...` on Linux.

llvm-svn: 367519
2019-08-01 06:41:40 +00:00
Zi Xuan Wu
a9665b120e recommit:[PowerPC] Eliminate loads/swap feeding swap/store for vector type by using big-endian load/store
On PowerPC, there is an instruction to load a vector in big-endian element order on a little-endian target.
So we can combine vector load + reverse into a big-endian load to eliminate the swap instruction.
Also combine vector reverse + store into a big-endian store.

Differential Revision: https://reviews.llvm.org/D65063

llvm-svn: 367516
2019-08-01 05:26:02 +00:00
Fangrui Song
2f4969c240 AMDGPU/GlobalISel: fix inst-select-load-local.mir in -DLLVM_ENABLE_ASSERTIONS=off builds after r367498
llvm-svn: 367514
2019-08-01 04:03:06 +00:00
Matt Arsenault
efe1680999 AMDGPU/GlobalISel: Fix flat load/store of pointer types
llvm-svn: 367513
2019-08-01 03:57:42 +00:00
Matt Arsenault
f13a8d4c05 AMDGPU/GlobalISel: Remove manual store select code
This regresses the weird types that are newly treated as legal load
types, but fixes incorrectly using flat instructions on SI.

llvm-svn: 367512
2019-08-01 03:52:40 +00:00
Matt Arsenault
b7574e4487 AMDGPU/GlobalISel: Select local atomic cmpxchg
llvm-svn: 367511
2019-08-01 03:41:41 +00:00
Matt Arsenault
90731e897e AMDGPU/GlobalISel: Handle G_ATOMICRMW_FADD
llvm-svn: 367509
2019-08-01 03:33:15 +00:00
Matt Arsenault
1a171c4ef3 AMDGPU/GlobalISel: Allow selection of DS atomicrmw
llvm-svn: 367507
2019-08-01 03:29:01 +00:00
Matt Arsenault
9156809a6f AMDGPU/GlobalISel: Select simple local stores
llvm-svn: 367504
2019-08-01 03:09:15 +00:00
Matt Arsenault
c6739e6709 GlobalISel: moreElementsVector for G_LOAD/G_STORE
The AMDGPU change and test are a placeholder until a future patch with
complete handling.

llvm-svn: 367503
2019-08-01 01:44:22 +00:00
Peter Collingbourne
6a9e39fd00 Create unique, but identically-named ELF sections for explicitly-sectioned functions and globals when using -function-sections and -data-sections.
This allows functions and globals to be reordered later in the linking phase
(using the -symbol-ordering-file) even though reordering will be limited to
the scope of the explicit section.
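
For illustration (the section name here is hypothetical), two explicitly-sectioned functions that previously shared one output section now each get a unique, identically-named section when built with -ffunction-sections:

  /* With -ffunction-sections, f and g each land in their own ELF section,
     both named "hot_text", so a symbol ordering file can reorder them
     within that section's scope. */
  __attribute__((section("hot_text"))) int f(void) { return 1; }
  __attribute__((section("hot_text"))) int g(void) { return 2; }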

Patch by Rahman Lavaee!

Differential Revision: https://reviews.llvm.org/D65478

llvm-svn: 367501
2019-08-01 01:38:53 +00:00
Matt Arsenault
2aa9ce0a05 Reapply "AMDGPU: Split block for si_end_cf"
This reverts commit r359363, reapplying r357634

llvm-svn: 367500
2019-08-01 01:25:27 +00:00
Matt Arsenault
4775479b39 AMDGPU/GlobalISel: Select local loads
llvm-svn: 367498
2019-08-01 00:53:38 +00:00
Amy Huang
4ab9f86e00 Revert "[MS] Emit S_HEAPALLOCSITE debug info in Selection DAG" and
a partial fix.
Causes Windows buildbot errors.

This reverts commit 6e65c34523963094acd0d6c94a5f5c64b32fe6aa and
53da7ca94343166ac68aef81db0398932fc258bb.

llvm-svn: 367496
2019-07-31 23:59:31 +00:00
Eli Friedman
910badf6bd [ARM] Lower "(x<<c) > 0x80000000U" to "lsls" on Thumb1.
This is extremely specific, but saves three instructions when it's
legal.  I don't think the code can be usefully generalized.
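
A C snippet matching the pattern from the title (the shift amount is illustrative):

  #include <stdint.h>

  int f(uint32_t x) {
    return (x << 5) > 0x80000000U; /* the pattern this patch lowers using lsls on Thumb1 */
  }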

Differential Revision: https://reviews.llvm.org/D65351

llvm-svn: 367492
2019-07-31 23:19:21 +00:00
Eli Friedman
c986dd14c4 [ARM] Transform compare of masked value to shift on Thumb1.
Thumb1 has very limited immediate modes, so turning an "and" into a
shift can save multiple instructions.

It's possible to simplify the generated code for test2 and test3 in
cmp-and-fold.ll a little more, but I'll implement that as a followup.
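
An example of the kind of rewrite meant here (the particular mask is illustrative); the two forms are equivalent, and the second avoids materializing the wide mask constant on Thumb1:

  #include <stdint.h>

  int masked_cmp(uint32_t x)  { return (x & 0xFFFFFF00u) == 0; } /* needs the mask in a register */
  int shifted_cmp(uint32_t x) { return (x >> 8) == 0;           } /* a single lsrs feeds the compare */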

Differential Revision: https://reviews.llvm.org/D65175

llvm-svn: 367491
2019-07-31 23:17:34 +00:00
Craig Topper
4b86c5ad65 [ScalarizeMaskedMemIntrin] Bitcast the mask to the scalar domain and use scalar bit tests for the branches.
X86 at least is able to use movmsk or kmov to move the mask to the scalar
domain. Then we can just use test instructions to test individual bits.

This is more efficient than extracting each mask element
individually.

I special cased v1i1 to use the previous behavior. This avoids
poor type legalization of bitcast of v1i1 to i1.

I've skipped expandload/compressstore as I think we need to
handle constant masks for those better first.

Many tests end up with duplicate test instructions due to tail
duplication in the branch folding pass. But the same thing
happens when constructing similar code in C. So it's not unique
to the scalarization.

Not sure if this lowering code will also be good for other targets,
but we're only testing X86 today.
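
A rough C analogue of the new control flow (the lane count and names are assumptions for illustration): the vector mask is first moved into a scalar bitmask (movmsk/kmov), and each lane then branches on one bit of it:

  #include <stdint.h>

  void masked_load_4(int32_t dst[4], const int32_t *src, uint8_t scalar_mask) {
    for (int i = 0; i < 4; ++i) {
      if (scalar_mask & (1u << i)) /* scalar bit test instead of extracting a mask element */
        dst[i] = src[i];
    }
  }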

Differential Revision: https://reviews.llvm.org/D65319

llvm-svn: 367489
2019-07-31 22:58:15 +00:00
Craig Topper
f234df8098 [X86] Add DAG combine to fold any_extend_vector_inreg+truncstore to an extractelement+store
We have custom code that ignores the normal promoting type legalization on less than 128-bit vector types like v4i8 to emit pavgb, paddusb, psubusb since we don't have the equivalent instruction on a larger element type like v4i32. If this operation appears before a store, we can be left with an any_extend_vector_inreg followed by a truncstore after type legalization. When truncstore isn't legal, this will normally be decomposed into shuffles and a non-truncating store. This will then combine away the any_extend_vector_inreg and shuffle leaving just the store. On avx512, truncstore is legal so we don't decompose it and we had no combines to fix it.

This patch adds a new DAG combine to detect this case and emit either an extract_store for 64-bit stores or an extractelement+store for 32- and 16-bit stores. This makes the avx512 codegen match the avx2 codegen for these situations. I'm restricting this to only when -x86-experimental-vector-widening-legalization is false. When we're widening, we're not likely to create this any_extend_inreg+truncstore combination. This means we should be able to remove this code when we flip the default. I would like to flip the default soon, but I need to investigate some performance regressions it's causing in our branch that I wasn't seeing on trunk.

Differential Revision: https://reviews.llvm.org/D65538

llvm-svn: 367488
2019-07-31 22:43:08 +00:00