mirror of https://github.com/RPCS3/llvm-mirror.git synced 2025-01-31 12:41:49 +01:00

30145 Commits

Author SHA1 Message Date
Kang Zhang
56462c7a94 [NFC][PowerPC] Add the test case shrink-wrap.mir and shrink-wrap.ll for PPC
llvm-svn: 368597
2019-08-12 17:50:01 +00:00
Craig Topper
1a2bd102eb [X86] Disable use of zmm registers for varargs musttail calls under prefer-vector-width=256 and min-legal-vector-width=256.
Under this configuration, the v16f32 type we try to use isn't mapped to a
register class, so the getRegClassFor call will fail.

llvm-svn: 368594
2019-08-12 17:43:26 +00:00
Stanislav Mekhanoshin
7306f3fe48 [AMDGPU] Printf runtime binding pass
This pass is a port of the corresponding pass from the HSAIL compiler.
It parses printf calls and sets up the runtime printf buffer.
After that it copies printf arguments to the buffer and fills in
module metadata for the runtime.

Differential Revision: https://reviews.llvm.org/D24035

llvm-svn: 368592
2019-08-12 17:12:29 +00:00
Hans Wennborg
1c46c7087d Revert r368339 "[MBP] Disable aggressive loop rotate in plain mode"
It caused assertions to fire when building Chromium:

  lib/CodeGen/LiveDebugValues.cpp:331: bool
  {anonymous}::LiveDebugValues::OpenRangesSet::empty() const: Assertion
  `Vars.empty() == VarLocs.empty() && "open ranges are inconsistent"' failed.

See https://crbug.com/992871#c3 for how to reproduce.

> Patch https://reviews.llvm.org/D43256 introduced more aggressive loop layout optimization which depends on profile information. If profile information is not available, the statically estimated profile information (generated by BranchProbabilityInfo.cpp) is used. If the user program doesn't behave as BranchProbabilityInfo.cpp expects, the layout may be worse.
>
> To be conservative, this patch restores the original layout algorithm in plain mode. But users can still try the aggressive layout optimization with -force-precise-rotation-cost=true.
>
> Differential Revision: https://reviews.llvm.org/D65673

llvm-svn: 368579
2019-08-12 14:23:13 +00:00
Simon Pilgrim
83efc11093 [X86][SSE] Add test showing missing demanded elts PSADBW handling
llvm-svn: 368575
2019-08-12 14:01:16 +00:00
Kang Zhang
6c18f8cea5 Revert r368565: [CodeGen] Do the Simple Early Return in block-placement pass to optimize the blocks
llvm-svn: 368574
2019-08-12 14:00:31 +00:00
Sam Elliott
8662789349 [RISCV] Fix ICE in isDesirableToCommuteWithShift
Summary:
Ana Pazos reported a bug where we were not checking that an APInt would
fit into 64 bits before calling `getSExtValue()`. This caused asserts when
compiling large constants, such as i128s, as happens when compiling compiler-rt.

This patch adds a testcase and makes the callback less error-prone.

Reviewers: apazos, asb, luismarques

Reviewed By: luismarques

Subscribers: hiraditya, rbar, johnrusso, simoncook, sabuasal, niosHD, kito-cheng, shiva0217, jrtc27, MaskRay, zzheng, edward-jones, rogfer01, MartinMosbeck, brucehoult, the_o, rkruppe, PkmX, jocewei, psnobl, benna, Jim, s.egerton, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D66081

llvm-svn: 368572
2019-08-12 13:51:00 +00:00
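
A minimal sketch of the guard that fix implies, with `__int128` standing in for `APInt` (a hypothetical helper for illustration; the real patch works against LLVM's `APInt` API):

```cpp
#include <cstdint>
#include <optional>

// Only narrow to int64_t when the value is known to fit; calling
// getSExtValue() on a wider APInt (e.g. an i128 constant) asserts.
static std::optional<int64_t> trySExtValue(__int128 V) {
  if (V >= INT64_MIN && V <= INT64_MAX) // the missing 64-bit fit check
    return static_cast<int64_t>(V);
  return std::nullopt; // too wide: decline instead of asserting
}
```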
Kang Zhang
cafaf9bdfb [CodeGen] Do the Simple Early Return in block-placement pass to optimize the blocks
Summary:

The `block-placement` pass can create patterns of unconditional branches for which we can do a simple early return.
But the `early-ret` pass runs before `block-placement`, so we don't want to run it again.
This patch performs the simple early return to optimize the blocks at the end of `block-placement`.

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D63972

llvm-svn: 368565
2019-08-12 13:15:31 +00:00
Hans Wennborg
85ec3cf9ad Revert r368509 "[CodeGen] Do the Simple Early Return in block-placement pass to optimize the blocks"
> The `block-placement` pass can create patterns of unconditional branches for which we can do a simple early return.
> But the `early-ret` pass runs before `block-placement`, so we don't want to run it again.
> This patch performs the simple early return to optimize the blocks at the end of `block-placement`.
>
> Reviewed By: efriedma
>
> Differential Revision: https://reviews.llvm.org/D63972

This also reverts the follow-ups r368514 and r368532.

llvm-svn: 368560
2019-08-12 12:43:51 +00:00
Simon Pilgrim
f6ff10d8c4 [X86][SSE] ComputeKnownBits - add basic PSADBW handling
llvm-svn: 368558
2019-08-12 12:19:19 +00:00
Simon Pilgrim
f01bfd8575 [X86][SSE] Add test showing missing compute known bits PSADBW handling
The upper 48-bits of each i64 element is guaranteed to be zero.

llvm-svn: 368557
2019-08-12 12:13:08 +00:00
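
A small standalone check of that claim using the SSE2 intrinsic (a demo of the psadbw bound, not the commit's code): each 64-bit lane sums eight absolute byte differences, so its maximum is 8 × 255 = 2040, well below 2^16.

```cpp
#include <emmintrin.h>
#include <cassert>
#include <cstdint>

int main() {
  __m128i A = _mm_set1_epi8((char)0xFF); // all bytes 255
  __m128i B = _mm_setzero_si128();       // all bytes 0
  __m128i S = _mm_sad_epu8(A, B);        // psadbw
  uint64_t Lane0 = (uint64_t)_mm_cvtsi128_si64(S);
  assert(Lane0 == 2040);       // worst case: 8 * 255
  assert((Lane0 >> 16) == 0);  // hence bits 16..63 are known zero
  return 0;
}
```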
Simon Pilgrim
70688517d3 [TargetLowering] SimplifyDemandedBits - call SimplifyMultipleUseDemandedBits for ISD::TRUNCATE
llvm-svn: 368553
2019-08-12 10:56:05 +00:00
Craig Topper
4a675ce557 [X86] Add some reduction add test cases that show sub-optimal code on avx2 and later.
For v4i8 and v8i8, when the reduction starts with a load, we end up
shifting the data in the scalar domain and copying to the vector
domain a second time using a broadcast.

We already copied it to the vector domain once. It's better to
just shuffle it there.

llvm-svn: 368544
2019-08-12 06:55:58 +00:00
Pengfei Wang
c35df9a0ee [X86] Support -march=tigerlake
Support -march=tigerlake for x86.
Compared with Icelake Client, it includes four new features:
avx512vp2intersect, movdiri, movdir64b, and shstk.

Patch by Xiang Zhang (xiangzhangllvm)

Differential Revision: https://reviews.llvm.org/D65840

llvm-svn: 368543
2019-08-12 01:29:46 +00:00
Bjorn Pettersson
47fce18f1a [X86] Remove redundant ';' chars ending IR lines in lit tests. NFC
Reviewers: RKSimon, craig.topper

Reviewed By: craig.topper

Subscribers: llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D66053

llvm-svn: 368541
2019-08-11 19:27:14 +00:00
Bjorn Pettersson
729026b2c4 [SelectionDAG] Widen vector results of SMULFIX/UMULFIX/SMULFIXSAT
Summary:
After the commits that changed the x86 backend to widen vectors
instead of using promotion, some of our downstream tests
started to fail. It was noticed that WidenVectorResult had
been missing support for SMULFIX/UMULFIX/SMULFIXSAT. This
patch adds the missing functionality.

Reviewers: craig.topper, RKSimon

Reviewed By: craig.topper

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D66051

llvm-svn: 368540
2019-08-11 19:27:06 +00:00
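
For reference, the per-element scalar semantics those fixed-point nodes carry (a sketch of the `llvm.smul.fix` LangRef definition, not the legalizer code):

```cpp
#include <cstdint>

// llvm.smul.fix(a, b, scale): signed fixed-point multiply. Compute the
// double-wide product, then shift right by `scale` to drop the extra
// fractional bits.
static int32_t smul_fix_i32(int32_t A, int32_t B, unsigned Scale) {
  return (int32_t)(((int64_t)A * B) >> Scale);
}
```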
David Green
15e5321ece [ARM] MVE spill vector test. NFC
llvm-svn: 368531
2019-08-11 09:12:57 +00:00
Heejin Ahn
01f5818544 Fix __clang_call_terminate's argument for foreign exceptions
Summary:
When exceptions are repeatedly thrown in the middle of handling another
exception, we call `__clang_call_terminate` with the exception pointer
(i32) as an argument. But in the case of foreign exceptions, we don't have
the pointer, so we call the function with 0. (This requires that
`__clang_call_terminate` can deal with a 0 argument, which will be done
later.)

But previously the 0 argument was mistakenly added as an immediate rather
than as an `i32.const 0`, causing the `call` instruction to take not an i32
but rather an exnref, because an `exnref` is left on top of the value
stack if `br_on_exn` is not taken.

```
block i32
  br_on_exn 0, __cpp_exception
                               ;; exnref is on top of stack now
  i32.const 0                  ;; This was missing!
  call __clang_call_terminate
  unreachable
end
call __clang_call_terminate    ;; This takes i32 extracted by br_on_exn
```

Reviewers: dschuff

Subscribers: sbc100, jgravelle-google, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65475

llvm-svn: 368527
2019-08-11 06:24:07 +00:00
Craig Topper
09d1f0ad6d [X86] Remove some code from combineShuffle that seems largely unnecessary with widening legalization.
The test case that changed is probably better served by allowing
combineTruncatedArithmetic to create narrow vectors. It also appears
InstCombine would have simplified this test case to remove the zext
and trunc anyway.

llvm-svn: 368522
2019-08-11 02:08:38 +00:00
Simon Pilgrim
41fb8c9c5c [X86][SSE] Lower shuffle as ANY_EXTEND_VECTOR_INREG
On SSE41+ targets we always lower vector shuffles to ZERO_EXTEND_VECTOR_INREG, even if we don't need the extended bits.

This patch relaxes this so that we lower to ANY_EXTEND_VECTOR_INREG if we can, meaning that shuffle combines have a better idea of what elements need to be kept zero. This helps the multiple reduction code as we can now combine away a lot more of the pack+extend code sequences.

Differential Revision: https://reviews.llvm.org/D65741

llvm-svn: 368515
2019-08-10 16:46:07 +00:00
Kang Zhang
b37f16cb07 [CodeGen] Do the Simple Early Return in block-placement pass to optimize the blocks
Summary:

The `block-placement` pass can create patterns of unconditional branches for which we can do a simple early return.
But the `early-ret` pass runs before `block-placement`, so we don't want to run it again.
This patch performs the simple early return to optimize the blocks at the end of `block-placement`.

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D63972

llvm-svn: 368509
2019-08-10 09:58:52 +00:00
Craig Topper
b7845c5bbc [X86] Match the IR pattern for movmsk on SSE1-only targets where v4i32 isn't legal
Summary:
This patch adds a special DAG combine for SSE1 to recognize the IR pattern InstCombine gives us for movmsk. This only does the recognition for a few cases where it's obvious the input won't be scalarized, resulting in building a vector just due to the movmsk. I've made it separate from our existing matching for movmsk since that's called in multiple places and I didn't spend time to see if the other callers would make sense here. Plus the restrictions and additional checks would complicate that.

This fixes the case from PR42870. But it's probably still broken in the presence of logic ops feeding the movmsk pattern, which would further hide the v4f32 type.

Reviewers: spatel, RKSimon, xbolva00

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65689

llvm-svn: 368506
2019-08-10 07:51:13 +00:00
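
What the matched pattern ultimately computes, shown with the SSE intrinsic (an illustration of movmskps semantics, not the combine itself):

```cpp
#include <xmmintrin.h>
#include <cassert>

int main() {
  // movmskps packs the sign bit of each f32 lane into the low 4 bits of a
  // GPR -- the result InstCombine's sign-bit pattern asks for.
  __m128 V = _mm_set_ps(-1.0f, 2.0f, -3.0f, 4.0f); // lanes 3..0
  int Mask = _mm_movemask_ps(V);                   // bit i = sign of lane i
  assert(Mask == 0b1010);                          // lanes 1 and 3 negative
  return 0;
}
```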
Luo, Yuanke
78c709910b [X86] Fix stack probe issue on windows32.
Summary:
On Windows, if the frame size exceeds 4096 bytes, the compiler needs to
generate a call to _alloca_probe. The X86CallFrameOptimization pass
changes the reserved stack size, causing the stack probe function
not to be inserted. This patch fixes the issue by detecting the call
frame size: if the size exceeds 4096 bytes, drop X86CallFrameOptimization.

Reviewers: craig.topper, wxiao3, annita.zhang, rnk, RKSimon

Reviewed By: rnk

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65923

llvm-svn: 368503
2019-08-10 02:49:02 +00:00
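
A hypothetical repro of the shape involved (not the commit's test): once a frame exceeds a 4 KiB page on Windows, the prologue must call the probe so guard pages are touched in order, and that call must not be dropped.

```cpp
// A frame larger than 4096 bytes requires a stack probe on Windows
// (e.g. _alloca_probe / __chkstk, depending on target and CRT).
extern void consume(char *);

void bigFrame() {
  char Buf[8192]; // > 4096 bytes of locals: the prologue must emit the probe
  consume(Buf);   // keep Buf alive so the frame isn't optimized away
}
```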
Sanjay Patel
b53cb9b938 [DAGCombiner] exclude x*2.0 from normal negation profitability rules
This is the codegen part of fixing:
https://bugs.llvm.org/show_bug.cgi?id=32939

Even with the optimal/canonical IR that is ideally created by D65954,
we would reverse that transform in DAGCombiner and end up with the same
asm on AArch64 or x86.

I see 2 options for trying to correct this:

  1. Limit isNegatibleForFree() by special-casing the fmul pattern (this patch).
  2. Avoid creating (fmul X, 2.0) in the 1st place by adding a special-case
     transform to SelectionDAG::getNode() and/or SelectionDAGBuilder::visitFMul()
     that matches the transform done by DAGCombiner.

This seems like the less intrusive patch, but if there's some other reason to
prefer one option over the other, we can change to the other option.

Differential Revision: https://reviews.llvm.org/D66016

llvm-svn: 368490
2019-08-09 21:37:32 +00:00
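
The special case rests on a simple identity: `X * 2.0` is exactly `X + X` (doubling is exact in binary floating point), so the positive form is cheap and shouldn't be negated speculatively. A sketch of the expression shape, assumed from the PR rather than copied from its tests:

```cpp
// X * 2.0 can lower to an fadd. Negating it "for free" into X * -2.0
// inside a larger expression forfeits that cheaper form.
double keep(double X)            { return X * 2.0; }        // ideal: fadd x, x
double mixed(double X, double Y) { return -(X * 2.0) * Y; } // don't fold the
                                                            // neg into the 2.0
```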
Daniel Sanders
233ed83478 [globalisel] Add G_SEXT_INREG
Summary:
Targets often have instructions that can sign-extend certain cases faster
than the equivalent shift-left/arithmetic-shift-right. Such cases can be
identified by matching a shift-left/shift-right pair but there are some
issues with this in the context of combines. For example, suppose you can
sign-extend 8-bit up to 32-bit with a target extend instruction.
  %1:_(s32) = G_SHL %0:_(s32), i32 24 # (I've inlined the G_CONSTANT for brevity)
  %2:_(s32) = G_ASHR %1:_(s32), i32 24
  %3:_(s32) = G_ASHR %2:_(s32), i32 1
would reasonably combine to:
  %1:_(s32) = G_SHL %0:_(s32), i32 24
  %2:_(s32) = G_ASHR %1:_(s32), i32 25
which no longer matches the special case. If your shifts and extend are
equal cost, this would break even as a pair of shifts but if your shift is
more expensive than the extend then it's cheaper as:
  %2:_(s32) = G_SEXT_INREG %0:_(s32), i32 8
  %3:_(s32) = G_ASHR %2:_(s32), i32 1
It's possible to match the shift-pair in ISel and emit an extend and ashr.
However, this is far from the only way to break this shift pair and make
it hard to match the extends. Another example is that with the right
known-zeros, this:
  %1:_(s32) = G_SHL %0:_(s32), i32 24
  %2:_(s32) = G_ASHR %1:_(s32), i32 24
  %3:_(s32) = G_MUL %2:_(s32), i32 2
can become:
  %1:_(s32) = G_SHL %0:_(s32), i32 24
  %2:_(s32) = G_ASHR %1:_(s32), i32 23

All upstream targets have been configured to lower it to the current
G_SHL/G_ASHR pair, but will likely want to make it legal in some cases to
handle their faster cases.

To follow up: Provide a way to legalize based on the constant. At the
moment, I'm thinking that the best way to achieve this is to provide the
MI in LegalityQuery but that opens the door to breaking core principles
of the legalizer (legality is not context sensitive). That said, it's
worth noting that looking at other instructions and acting on that
information doesn't violate this principle in itself. It's only a
violation if, at the end of legalization, a pass that checks legality
without being able to see the context would say an instruction might not be
legal. That's a fairly subtle distinction so to give a concrete example,
saying %2 in:
  %1 = G_CONSTANT 16
  %2 = G_SEXT_INREG %0, %1
is legal is in violation of that principle if the legality of %2 depends
on %1 being constant and/or being 16. However, legalizing to either:
  %2 = G_SEXT_INREG %0, 16
or:
  %1 = G_CONSTANT 16
  %2:_(s32) = G_SHL %0, %1
  %3:_(s32) = G_ASHR %2, %1
depending on whether %1 is constant and 16 does not violate that principle
since both outputs are genuinely legal.

Reviewers: bogner, aditya_nandakumar, volkan, aemerson, paquette, arsenm

Subscribers: sdardis, jvesely, wdng, nhaehnle, rovka, kristof.beyls, javed.absar, hiraditya, jrtc27, atanasyan, Petar.Avramovic, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D61289

llvm-svn: 368487
2019-08-09 21:11:20 +00:00
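
For reference, the new operation's scalar semantics are exactly the shl/ashr pair it replaces (a sketch for the 32-bit case):

```cpp
#include <cstdint>

// G_SEXT_INREG %x, B: sign-extend the low B bits of %x in place,
// i.e. (x << (32 - B)) arithmetically shifted right by (32 - B).
static int32_t sextInReg32(int32_t X, unsigned B) {
  unsigned Shift = 32 - B;
  return (int32_t)((uint32_t)X << Shift) >> Shift;
}
```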
Bill Wendling
480ad615e4 [CodeGen] Require a name for a block addr target
Summary:
A block address may be used in inline assembly, in which case it
requires a name so that the asm parser has something to parse. Creating
a name for every block address is a large hammer, but is necessary
because at the point when a temp symbol is created we don't necessarily
know if it's used in inline asm. This ensures that it exists regardless.

Reviewers: nickdesaulniers, craig.topper

Subscribers: nathanchance, javed.absar, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65352

llvm-svn: 368478
2019-08-09 20:18:30 +00:00
Bill Wendling
5c86e999b1 [MC] Don't recreate a label if it's already used
Summary:
This patch keeps track of MCSymbols created for blocks that were
referenced in inline asm. It prevents creating a new symbol which
doesn't refer to the block.

Inline asm may have a reference to a label. The asm parser however
doesn't recognize it as a label and tries to create a new symbol. The
result is that instead of the original symbol (e.g. ".Ltmp0") the
parser replaces it in the inline asm with the new one (e.g. ".Ltmp00")
without updating it in the symbol table. So the machine basic block
retains the "old" symbol (".Ltmp0"), but the inline asm uses the new one
(".Ltmp00").

Reviewers: nickdesaulniers, craig.topper

Subscribers: nathanchance, javed.absar, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65304

llvm-svn: 368477
2019-08-09 20:16:31 +00:00
Sanjay Patel
366be061fd [AArch64][x86] add tests for pessimization of expression with X*2.0 (PR32939); NFC
llvm-svn: 368445
2019-08-09 14:52:31 +00:00
Jinsong Ji
de0c179102 [MachinePipeliner] Avoid indeterminate order in FuncUnitSorter
Summary:
This is exposed by adding a new testcase in PowerPC in
https://reviews.llvm.org/rL367732

The testcase got different output on different platforms, hence breaking
buildbots.

The problem is that we get a different FuncUnitOrder when calculating ResMII.

The root cause is:
1. Two MachineInstrs might get the SAME priority (MFUsx) from minFuncUnits.
2. The current comparison operator() will return `MFUs1 > MFUs2`.
3. We use iterators for MachineInstr, so the input to FuncUnitSorter
   might be different on different platforms due to the iterator nature.

So for two MIs with the same MFU, their order actually depends on the
iterator order, which is platform (implementation) dependent.

This is risky and may cause cross-compiling problems.

The fix is to make sure we assign a deterministic order when they are
equal.

Reviewers: bcahoon, hfinkel, jmolloy

Subscribers: nemanjai, hiraditya, MaskRay, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65992

llvm-svn: 368441
2019-08-09 14:10:57 +00:00
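
A sketch of the kind of tie-break the fix calls for (field names here are hypothetical; the real code orders MachineInstrs inside FuncUnitSorter):

```cpp
// When two candidates have the same functional-unit priority, fall through
// to a stable key instead of leaving the result to iterator/pointer order,
// which varies across platforms.
struct Cand {
  unsigned MFUs;     // number of usable functional units (the priority)
  unsigned InstrNum; // hypothetical stable ordinal for the instruction
};

static bool funcUnitLess(const Cand &A, const Cand &B) {
  if (A.MFUs != B.MFUs)
    return A.MFUs > B.MFUs;       // original ordering criterion
  return A.InstrNum < B.InstrNum; // deterministic tie-break
}
```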
Simon Pilgrim
21c8bacf4a [X86][SSE] Swap X86ISD::BLENDV inputs with an inverted selection mask (PR42825)
As discussed on PR42825, if we are inverting the selection mask we can just swap the inputs and avoid the inversion.

Differential Revision: https://reviews.llvm.org/D65522

llvm-svn: 368438
2019-08-09 12:44:20 +00:00
Simon Atanasyan
8d51bb5b3b [Mips][Codegen] Fix fast-isel mixing of FGR64 and AFGR64 registers
Fast-isel was picking the AFGR64 register class for processing call
arguments when the +fp64 option was used. We simply check whether the
+fp64 option is used and pick the appropriate register class.

Patch by Mirko Brkusanin.

Differential Revision: https://reviews.llvm.org/D65886

llvm-svn: 368433
2019-08-09 12:02:32 +00:00
Pablo Barrio
96fed620d2 [AArch64] Set pref. func. align to 8 bytes on Neoverse E1 & Cortex-A65
Summary:
The Arm Neoverse E1 and Cortex-A65 Software Optimization Guides [1][2],
Section "4.7 Branch instruction alignment", state:

"It is preferable for branch targets, including subroutine entry points,
to be placed on aligned 64-bit boundaries to maximize instruction fetch
efficiency."

This patch sets the preferred function alignment on Neoverse E1 and
Cortex-A65 to 2^3=8B. This was already the case in some Cortex-A CPUs
such as Cortex-A53.

[1] https://developer.arm.com/docs/swog466751/latest/arm-neoversetm-e1-core-software-optimization-guide
[2] https://developer.arm.com/docs/swog010045/latest/arm-cortex-a65-core-software-optimization-guide

Reviewers: dmgreen, fhahn, samparker

Subscribers: javed.absar, kristof.beyls, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65937

llvm-svn: 368431
2019-08-09 11:05:15 +00:00
Tim Northover
c802973c73 AArch64: support TLS on Darwin platforms in GlobalISel.
All TLS access on Darwin is in the "general dynamic" form where we call
a function to resolve the address, so the implementation is pretty simple.

llvm-svn: 368418
2019-08-09 09:32:38 +00:00
Sam Parker
5f411782e9 [ARM][ParallelDSP] Replace SExt uses
As loads are combined and widened, we replaced the operands of their
sext users, whereas we should have been replacing the uses of the sext.
I've added a load of tests; only a few of them originally
caused assertion failures, the rest improve pattern coverage.

Differential Revision: https://reviews.llvm.org/D65740

llvm-svn: 368404
2019-08-09 07:48:50 +00:00
Evgeniy Stepanov
71027a56d5 Fix llvm.aarch64.irg properties.
Summary:
IRG does not access any memory.
Replace IntrInaccessibleMemOnly with IntrNoMem | IntrHasSideEffects.

Reviewers: chill

Subscribers: javed.absar, kristof.beyls, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D64447

llvm-svn: 368362
2019-08-08 22:42:48 +00:00
Craig Topper
90f7ce4fe5 [X86] Improve codegen of v8i64->v8i16 and v16i32->v16i8 truncate with avx512vl, avx512bw, min-legal-vector-width<=256 and prefer-vector-width=256
Under this configuration we'll want to split the v8i64 or v16i32 into two vectors. The default legalization will try to truncate each of those 256-bit pieces one step to 128-bit, concatenate those, then truncate one more time from the new 256 to 128 bits.

With this patch we now truncate the two splits to 64-bits then concatenate those. We have to do this two different ways depending on whether we have widening legalization enabled. Without widening legalization we have to manually construct X86ISD::VTRUNC to prevent the ISD::TRUNCATE with a narrow result being promoted to 128 bits with a larger element type than what we want followed by something like a pshufb to grab the lower half of each element to finish the job. With widening legalization we just get the right thing. When we switch to widening by default we can just delete the other code path.

Differential Revision: https://reviews.llvm.org/D65626

llvm-svn: 368349
2019-08-08 21:36:47 +00:00
Craig Topper
14499088f1 [SelectionDAG][X86] Move setcc mask splitting for mload/mstore/mgather/mscatter from DAGCombiner to the type legalizer.
We may be able to look to how VSELECT is handled to further
improve this, but this appears to be neutral or an improvement
on the test cases we have.

llvm-svn: 368344
2019-08-08 21:14:08 +00:00
Guozhi Wei
bba33a7e31 [MBP] Disable aggressive loop rotate in plain mode
Patch https://reviews.llvm.org/D43256 introduced more aggressive loop layout optimization which depends on profile information. If profile information is not available, the statically estimated profile information (generated by BranchProbabilityInfo.cpp) is used. If the user program doesn't behave as BranchProbabilityInfo.cpp expects, the layout may be worse.

To be conservative, this patch restores the original layout algorithm in plain mode. But users can still try the aggressive layout optimization with -force-precise-rotation-cost=true.

Differential Revision: https://reviews.llvm.org/D65673

llvm-svn: 368339
2019-08-08 20:25:23 +00:00
Craig Topper
647e2e70ab [X86] Make CMPXCHG16B feature imply CMPXCHG8B feature.
This fixes znver1 so that it properly enables CMPXCHG8B. We can
probably remove the explicit CMPXCHG8B from CPUs that also have
CMPXCHG16B, but this is kept simple to allow cherry-picking to 9.0.

Fixes PR42935.

llvm-svn: 368324
2019-08-08 18:11:17 +00:00
Pirama Arumuga Nainar
e1c7e683da [AArch64] Do not emit '#' before immediates in inline asm
Summary:
The A64 assembly language does not require the '#' character to
introduce constant immediate operands.  Avoid the '#' since the AArch64
asm parser does not accept '#' before the lane specifier and rejects the
following:
  __asm__ ("fmla v2.4s, v0.4s, v1.s[%0]" :: "I"(0x1))

Fix a test to not expect the '#' and add a new test case with the above
asm.

Fixes: https://github.com/android-ndk/ndk/issues/1036

Reviewers: peter.smith, kristof.beyls

Subscribers: javed.absar, hiraditya, llvm-commits, srhines

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65550

llvm-svn: 368320
2019-08-08 17:50:39 +00:00
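
Adapted from the commit's asm into a complete function (the register operands and constraints are my assumptions; the point is the `I`-constrained lane index, which must now print without a `#`):

```cpp
#include <arm_neon.h>

// The "I" immediate is substituted into the lane specifier; the AArch64
// parser rejects "v1.s[#1]", so LLVM must print "1" without the '#'.
float32x4_t fmlaLane(float32x4_t Acc, float32x4_t A, float32x4_t B) {
  __asm__("fmla %0.4s, %1.4s, %2.s[%3]"
          : "+w"(Acc)
          : "w"(A), "w"(B), "I"(1));
  return Acc;
}
```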
Jinsong Ji
0e3e0ac472 Re-commit "[PowerPC][NFC][MachinePipeliner] Add some regression testcases"
Remove sms-cpy1.ll first while I investigate the problem.

llvm-svn: 368318
2019-08-08 17:37:58 +00:00
Simon Pilgrim
eb76021ab0 [X86] XFormVExtractWithShuffleIntoLoad - handle shuffle mask scaling
If the target shuffle mask is from a wider type, attempt to scale the mask so that the extraction can peek through.

Fixes the regression mentioned in rL368307

llvm-svn: 368308
2019-08-08 16:05:23 +00:00
Simon Pilgrim
d43cdb1e90 [X86] SimplifyDemandedVectorElts - attempt to recombine target shuffle using DemandedElts mask
If we don't demand all elements, then attempt to combine to a simpler shuffle.

At the moment we can only do this if Depth == 0 as combineX86ShufflesRecursively uses Depth to track whether the shuffle has really changed or not - we'll need to change this before we can properly start merging combineX86ShufflesRecursively into SimplifyDemandedVectorElts. 

The insertps-combine.ll regression is because XFormVExtractWithShuffleIntoLoad can't see through shuffles of different widths - this will be fixed in a follow-up commit.

llvm-svn: 368307
2019-08-08 15:54:20 +00:00
David Tenty
fb66d80ccb Enable assembly output of local commons for AIX
Summary:
This patch enables assembly output of local commons for AIX using .lcomm
directives. It adds an EmitXCOFFLocalCommonSymbol to MCStreamer so we can emit the
AIX version of .lcomm assembly directives, which include a csect name. It handles the
case of BSS locals in PPCAIXAsmPrinter by using EmitXCOFFLocalCommonSymbol, and adds
a test for generating .lcomm on AIX targets.

Reviewers: cebowleratibm, hubert.reinterpretcast, Xiangling_L, jasonliu, sfertile

Reviewed By: sfertile

Subscribers: wuzish, nemanjai, hiraditya, kbarton, MaskRay, jsji, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D64825

llvm-svn: 368306
2019-08-08 15:40:35 +00:00
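
The C-level shape that takes this path, as I understand it (a sketch; the exact .lcomm operand spelling is whatever the new test checks):

```cpp
// A zero-initialized internal-linkage object (a BSS local common) like this
// is what PPCAIXAsmPrinter now routes through EmitXCOFFLocalCommonSymbol,
// producing a ".lcomm" directive that also names a csect.
static int Counter; // internal linkage + zero-init => BSS local common
int bump() { return ++Counter; }
```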
David Green
b937fefeae [ARM] Add support for MVE pre and post inc loads and stores
This adds pre- and post-increment and decrement for MVE loads and stores. It
uses the built-in pre/post load/store detection, unlike Neon. Loads are
selected with the code in tryT2IndexedLoad; stores are selected with tablegen
patterns. The immediates have a +/-7-bit range, multiplied by the size of the
element.

Differential Revision: https://reviews.llvm.org/D63840

llvm-svn: 368305
2019-08-08 15:27:58 +00:00
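
The pattern that benefits, in source form (a generic sketch, not an MVE test): a strided loop where the pointer bump can fold into the vector memory op.

```cpp
// When vectorized for MVE, each vector load/store here can post-increment
// the address register instead of needing a separate pointer add.
void scaleAll(int *P, int N, int K) {
  for (int I = 0; I < N; ++I)
    P[I] *= K;
}
```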
David Green
3265a2671b [ARM] MVE big endian loads/stores
This adds some missing patterns for big endian loads/stores, allowing unaligned
loads/stores to also be selected with an extra VREV, which produces better code
than aligning through a stack. Also moves VLDR_P0 to not be LE only, and
adjusts some of the tests to show all that working.

Differential Revision: https://reviews.llvm.org/D65583

llvm-svn: 368304
2019-08-08 15:15:19 +00:00
Sam Elliott
db4202796c [RISCV] Allow ABI Names in Inline Assembly Constraints
Summary:
Clang will replace references to registers using ABI names in inline
assembly constraints with references to architecture names, but other
frontends do not. LLVM uses the regular assembly parser to parse inline asm,
so inline assembly strings can contain references to registers using their
ABI names.

This patch adds support for parsing constraints using either the ABI name or
the architectural register name. This means we do not need to implement the
ABI name replacement code in every single frontend, especially those like
Rust which are a very thin shim on top of LLVM IR's inline asm, and that
constraints can more closely match the assembly strings they refer to.

Reviewers: asb, simoncook

Reviewed By: simoncook

Subscribers: hiraditya, rbar, johnrusso, JDevlieghere, apazos, sabuasal, niosHD, kito-cheng, shiva0217, jrtc27, MaskRay, zzheng, edward-jones, rogfer01, MartinMosbeck, brucehoult, the_o, rkruppe, PkmX, jocewei, psnobl, benna, Jim, s.egerton, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65947

llvm-svn: 368303
2019-08-08 14:59:16 +00:00
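
One way such constraints reach LLVM from C/C++ (a sketch under the assumption that register-pinned asm operands arrive as `{a0}`-style constraints, which this patch lets the backend resolve by ABI name as well as by `{x10}`):

```cpp
// Pins the operand to the ABI-named register a0 (architecturally x10).
long passThroughA0(long V) {
  register long A0 __asm__("a0") = V;
  __asm__ volatile("" : "+r"(A0)); // the constraint arrives as "{a0}"
  return A0;
}
```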
Sam Elliott
3d793f3d94 [RISCV] Minimal stack realignment support
Summary:
Currently the RISC-V backend does not realign the stack. This can be an issue even for the RV32I/RV64I ABIs (where the stack is 16-byte aligned), though this is rare. It will be much more common with RV32E (though the alignment requirements for common data types remain under-documented...).

This patch adds minimal support for stack realignment. It should cope with large realignments. It will error out if the stack needs realignment and variable-sized objects are present.

It feels like a lot of the code like getFrameIndexReference and determineFrameLayout could be refactored somehow, as right now it feels fiddly and brittle. We also seem to allocate a lot more memory than GCC does for equivalent C code.

Reviewers: asb

Reviewed By: asb

Subscribers: wwei, jrtc27, s.egerton, MaskRay, Jim, lenary, hiraditya, rbar, johnrusso, simoncook, apazos, sabuasal, niosHD, kito-cheng, shiva0217, zzheng, edward-jones, rogfer01, MartinMosbeck, brucehoult, the_o, rkruppe, PkmX, jocewei, psnobl, benna, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D62007

llvm-svn: 368300
2019-08-08 14:40:54 +00:00
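
The simplest case that now works, as a sketch (assuming the 16-byte RV32I/RV64I stack alignment mentioned above):

```cpp
extern void use(void *);

// A local whose alignment exceeds the ABI's 16-byte stack alignment forces
// the prologue to realign sp -- the minimal support this patch adds.
void overAligned() {
  alignas(64) char Buf[64];
  use(Buf);
}
```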
Simon Pilgrim
aafea54fa1 [X86][SSE] matchBinaryPermuteShuffle - split INSERTPS combines
We need to prefer INSERTPS with zeros over SHUFPS, but fall back to INSERTPS if that fails.

llvm-svn: 368292
2019-08-08 13:23:53 +00:00
Simon Pilgrim
b5a3e12a89 [X86][SSE] Add x64 load use test case
llvm-svn: 368278
2019-08-08 11:24:23 +00:00