mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-11-23 19:23:23 +01:00
Commit Graph

187725 Commits

Author SHA1 Message Date
Simon Atanasyan
002ca95135 [mips] Add tests to check jal sym+offset. NFC 2019-11-13 22:57:54 +03:00
Quentin Colombet
884b86d8a8 [LiveInterval] Allow updating subranges with slightly out-dated IR
During register coalescing, we update the live-intervals on-the-fly.
To do that we are in this strange mode where the live-intervals can
be slightly out-of-sync (more precisely they are forward looking)
compared to what the IR actually represents.
This happens because the register coalescer only updates the IR when
it is done updating the live-intervals, and it has to do it this
way because updating the IR on-the-fly would clobber some of the
information about what the live-ranges being updated look like.

This is problematic for updates that rely on the IR to accurately
represent the state of the live-ranges. Right now, we have only
one of those: stripValuesNotDefiningMask.
To reconcile this need for accurate IR with the out-of-sync state, this
patch introduces a new argument to LiveInterval::refineSubRanges that
allows the code doing the live range updates to reason about what the
code will look like after the coalescer has rewritten the registers.
Essentially, this captures how a subregister index will be offset to
match its position in a new register class.
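
Concretely, the updated entry point looks roughly like this (a sketch
of the post-patch signature; details approximate):

```
void LiveInterval::refineSubRanges(
    BumpPtrAllocator &Allocator, LaneBitmask LaneMask,
    std::function<void(LiveInterval::SubRange &)> Apply,
    const SlotIndexes &Indexes, const TargetRegisterInfo &TRI,
    unsigned ComposeSubRegIdx = 0); // 0: IR and live-ranges are in sync
```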

E.g., let's say we want to merge:
    V1.sub1:<2 x s32> = COPY V2.sub3:<4 x s32>

We do that by choosing a class where sub1:<2 x s32> and sub3:<4 x s32>
overlap, i.e., by choosing a class where we can find "offset + 1 == 3".
Put differently we align V2's sub3 with V1's sub1:
    V2: sub0 sub1 sub2 sub3
    V1: <offset>  sub0 sub1

This offset will look like a composed subregidx in the new class:
     V1.(composed sub2 with sub1):<4 x s32> = COPY V2.sub3:<4 x s32>
 =>  V1.sub3:<4 x s32> = COPY V2.sub3:<4 x s32>

Now, until we rewrite the uses and defs of V1, all the checks for V1
need to account for this offset to match what the live intervals intend
to capture.

Prior to this patch, we would fail to recognize the uses and defs of V1
and would end up with machine verifier errors ("No live segment at def").
This could lead to miscompiles, as we would drop some live-ranges and
thus miss some interferences.

For this problem to trigger, we need to reach stripValuesNotDefiningMask
while having a mismatch between the IR and the live-ranges (i.e.,
we have to apply a subreg offset to the IR.)

This requires the following three conditions:
1. An update of overlapping subreg lanes: e.g., dsub0 == <ssub0, ssub1>
2. An update with Tuple registers with a possibility to coalesce the
   subreg index: e.g., v1.dsub_1 == v2.dsub_3
3. Subreg liveness enabled.

#1 is what requires looking at the IR to decide what is alive and what
is not, i.e., calling stripValuesNotDefiningMask. #2 is what creates the
mismatch between the IR and the live-range information that the
coalescer maintains.

None of the targets that currently use subreg liveness (i.e., the targets
that fulfill #3: Hexagon, AMDGPU, PowerPC, and SystemZ IIRC) expose #1
and #2, so this patch also artificially enables subreg liveness for ARM,
so that a nice test case can be attached.
2019-11-13 11:17:56 -08:00
Michael Liao
7baf13a5a7 [TTI] Fix cast cost on vector types.
- Only split vector types when both src and dst types are splittable.
2019-11-13 13:54:07 -05:00
Francis Visoiu Mistrih
e206b99c28 [llvm-bcanalyzer] Don't dump the contents if -dump is not passed
With all the previous refactorings this slipped through and now we
always dump the contents of the bitcode files, even if -dump is not
passed.
2019-11-13 10:38:57 -08:00
Ahmed Bougacha
7a21472d38 [AArch64][v8.3a] Add missing imp-defs on RETA*.
RETA always implicitly uses LR, unlike RET which merely has an
alias that defaults it to LR.
Additionally, RETA implicitly uses SP as well, which it uses as
a discriminator to authenticate LR.

This isn't usually noticeable, because RET_ReallyLR is used in most
of the backend.  However, the post-RA scheduler, if enabled, will
cause miscompiles if the imp-uses are missing.

While there, fix a typo in the lone affected testcase.
2019-11-13 10:38:11 -08:00
Ahmed Bougacha
66e06d497f [AArch64][v8.3a] Add LDRA '[xN]!' alias.
The instruction definition has been retroactively expanded to
allow for an alias for '[xN, 0]!' as '[xN]!'.
That wouldn't make sense on LDR, but does for LDRA.
2019-11-13 10:38:11 -08:00
Sanjay Patel
d605eb509b [SLP] improve test readability; NFC 2019-11-13 12:59:00 -05:00
David Stenberg
3535d2717c Fix typo in DwarfDebug [NFC] 2019-11-13 18:06:16 +01:00
David Tenty
2882d823c9 Don't set LLVM_NO_DEAD_STRIP on AIX
Summary:
Don't set LLVM_NO_DEAD_STRIP when building plugins, as AIX has symbols
in its standard library that must be garbage collected or we will see
link errors. Export lists will handle this instead on AIX.

Reviewers: stevewan, sfertile, jasonliu, xingxue, DiggerLin

Reviewed By: DiggerLin

Subscribers: mgorny, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70130
2019-11-13 11:59:47 -05:00
Sanjay Patel
6acfc012f6 [SLP] reduce code duplication for min/max vs. other reductions; NFCI 2019-11-13 11:26:08 -05:00
Matthew Malcomson
f73fd90184 Fix comment spelling {addresing -> addressing} (NFC) 2019-11-13 16:14:32 +00:00
Sanjay Patel
fcfe9b03ff [InstCombine] propagate fast-math-flags (FMF) to select when inverting fcmp+select
As noted by the FIXME comment, this is not correct based on our current FMF semantics.
We should be propagating FMF from the final value in a sequence (in this case the
'select'). So the behavior even without this patch is wrong, but we did not allow FMF
on 'select' until recently.

But if we do the correct thing right now in this patch, we'll inevitably introduce
regressions because we have not wired up FMF propagation for 'phi' and 'select' in
other passes (like SimplifyCFG) or other places in InstCombine. I'm not seeing a
better incremental way to make progress.

That said, the potential extra damage over the existing wrong behavior from this
patch is very limited. AFAIK, the only way to have different FMF on IR in the same
function is if we have LTO inlined IR from 2 modules that were compiled using
different fast-math settings.

As seen in the tests, we may actually see some improvements with this patch because
adding the FMF to the 'select' allows matching to min/max intrinsics that were
previously missed (in the common case, the 'fcmp' and 'select' should have identical
FMF to begin with).
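
In the common case the 'fcmp' and 'select' carry the same flags, e.g.
(illustrative IR):

    %cmp = fcmp fast olt float %a, %b
    %min = select fast i1 %cmp, float %a, float %b

With 'fast' now present on the select, this pair can be matched as a
min/max pattern that was previously missed.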

Next steps in the transition:

    Make similar changes in instcombine as needed.
    Enable phi-to-select FMF propagation in SimplifyCFG.
    Remove dependencies on fcmp with FMF.
    Deprecate FMF on fcmp.

Differential Revision: https://reviews.llvm.org/D69720
2019-11-13 10:38:42 -05:00
Pavel Labath
325399186a DWARFDebugLoclists: Add an api to get the location lists of a DWARF unit
Summary:
This avoids the need to duplicate the location-list searching logic in
various users. The "inline location list dumping" code (which is the
only user actually updated to handle DWARF v5 location lists) is
switched to this method. After adding v4 location list support, I'll
switch other users too.

Reviewers: dblaikie, probinson, JDevlieghere, aprantl, SouraVX

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70084
2019-11-13 16:26:16 +01:00
Simon Pilgrim
65d10d3de7 Remove commented out CHECK-NEXT to try and appease llvm-clang-x86_64-expensive-checks-win buildbot 2019-11-13 14:59:12 +00:00
Simon Pilgrim
ae5199ea6a PowerPC - fix uninitialized variable warnings. NFCI. 2019-11-13 14:40:21 +00:00
Simon Pilgrim
fe2040c0e4 Fix uninitialized variable warning. NFCI. 2019-11-13 14:40:21 +00:00
Simon Pilgrim
3ae81a6740 Fix uninitialized variable warning. NFCI. 2019-11-13 14:40:21 +00:00
Simon Pilgrim
fbeaf767dc Fix uninitialized variable warning. NFCI. 2019-11-13 14:40:20 +00:00
Simon Pilgrim
6487f7772b Sparc - fix uninitialized variable warnings. NFCI. 2019-11-13 14:40:20 +00:00
Simon Pilgrim
4d179389c1 PPCReduceCRLogicals - fix static analyzer warnings. NFC
- Fix uninitialized variable warnings.
- Fix null dereference warnings.
2019-11-13 14:40:20 +00:00
Simon Pilgrim
70c464fb40 SLPVectorizer - make comparison operators + isInSchedulingRegion const
Fixes cppcheck warnings.
2019-11-13 14:40:19 +00:00
Florian Hahn
9444e0ce86 [InstCombine] Avoid moving ops that do restrict undef across shuffles.
I think we have to be a bit more careful when it comes to moving
ops across shuffles, if the op does restrict undef. For example, without
this patch, we would move 'and %v, <0, 0, -1, -1>' over a
'shufflevector %a, undef, <undef, undef, 1, 2>'. As a result, the first
2 lanes of the result are undef after the combine, but they really
should be 0, unless I am missing something.
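
Spelling that case out (illustrative IR):

    %s = shufflevector <4 x i32> %a, <4 x i32> undef, <4 x i32> <i32 undef, i32 undef, i32 1, i32 2>
    %r = and <4 x i32> %s, <i32 0, i32 0, i32 -1, i32 -1>

If the 'and' is moved before the shuffle, lanes 0 and 1 of %r come from
the shuffle's undef elements and so become undef, whereas the original
IR guarantees they are 0.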

For ops that do fold to undef on undef operands, the current behavior
should be fine. I've added a conservative check, OpDoesRestrictUndef;
maybe there's a better existing utility?

Reviewers: spatel, RKSimon, lebedev.ri

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D70093
2019-11-13 13:40:34 +00:00
Luís Marques
71ae6c2752 Revert "[RISCV] Fix wrong CFI directives"
test/DebugInfo/RISCV/relax-debug-frame.ll wasn't properly updated.
2019-11-13 13:28:33 +00:00
Florian Hahn
9636cf33df [InstCombine] Precommit shuffle tests for D70093. 2019-11-13 13:25:28 +00:00
Sjoerd Meijer
b18c80b6e6 [ARM][MVE] canTailPredicateLoop
This implements the TTI hook 'preferPredicateOverEpilogue' for MVE. This is a
first version and it operates on single-block loops only. With this change, the
vectoriser will now determine if tail-folding scalar remainder loops is
possible/desired, which is the first step to generate MVE tail-predicated
vector loops.

This is disabled by default for now. I.e., it depends on the option
-disable-mve-tail-predication, which is on by default.

I will follow up on this soon with a patch for the vectoriser to respect loop
hint 'vectorize.predicate.enable'. I.e., with this loop hint set to Disabled,
we don't want to tail-fold and we shouldn't query this TTI hook, which is
done in D70125.

Differential Revision: https://reviews.llvm.org/D69845
2019-11-13 13:24:33 +00:00
Luís Marques
05ce94921e [RISCV] Fix wrong CFI directives
Summary: Removes CFI CFA directives that could incorrectly propagate
beyond the basic block they were intended for. Specifically, it removes
the epilogue CFI directives. See the branch_and_tail_call test for an
example of the issue. This should fix the stack unwinding issues caused
by the incorrect directives.

Reviewers: asb, lenary, shiva0217
Reviewed By: lenary
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D69723
2019-11-13 13:06:15 +00:00
Simon Tatham
d3800be157 [ARM,MVE] Add intrinsics for contiguous load/stores.
This patch adds the ACLE intrinsics for all the MVE load and store
instructions not already handled by D69791. These ones don't need new
IR intrinsics, because they can be implemented in terms of standard
LLVM IR constructions.

Some of the load and store instructions access less than 128 bits of
memory, sign/zero extending each value to a wider vector lane on load
or truncating it on store. These are represented in IR by a load of a
shorter vector followed by a zext/sext, and conversely, a trunc
followed by a short store. Existing ISel patterns already recognize
those combinations and turn them into the right MVE instructions.
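
For example, a sign-extending byte load is represented roughly as
(illustrative IR):

    %narrow = load <8 x i8>, <8 x i8>* %base, align 1
    %wide = sext <8 x i8> %narrow to <8 x i16>

which the existing patterns select to a single widening VLDRB.S16.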

The predicated forms of all these instructions are represented in the
same way, except that the ordinary load/store operation is replaced
with the existing intrinsics @llvm.masked.{load,store}. These are
currently only code-generated as predicated MVE load/store
instructions if you give LLVM the `-enable-arm-maskedldst` option; so
I've done that in the LLVM codegen test. When we make that the
default, that option can be removed.

In the Tablegen backend, I've had to add a handful of extra support
features:

* We need to be able to make clang::Address objects out of a
  pointer and an alignment (previously we only needed these when the
  user passed us an existing one).

* We can now specify vector types that aren't 128 bits wide (for use
  in those intermediate values in IR); the parametrized type system
  can make one starting from two existing vector types (using the lane
  count of one and the element type of the other).

* I've added support for code generation of pointer casts, and for
  specifying LLVM types as operands to IRBuilder operations (for zext
  and sext, though I think they'll come in useful again).

* Now not all IR construction operations need to be specified as
  Builder.CreateFoo; some don't involve a Builder at all, and one
  passes it as a parameter to a tiny static helper function in
  CGBuiltin.cpp.

Reviewers: ostannard, MarkMurrayARM, dmgreen

Subscribers: kristof.beyls, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D70088
2019-11-13 12:47:00 +00:00
Simon Pilgrim
e4137af0ab [X86][AVX] Add plausible schedule classes to MASKPAIR/VP2INTERSECT/VDPBF16PS instructions
These are really just placeholders that use approximately the right resources - once we have CPU scheduler models that support these instructions they will need revisiting.

In the meantime this means that all instructions have a class of some kind, meaning models can be more easily flagged as complete.
2019-11-13 12:02:01 +00:00
Hans Wennborg
34c588643e Revert 57dd4b0 "[ValueTracking] Allow context-sensitive nullness check for non-pointers"
This caused miscompiles of Chromium (https://crbug.com/1023818). The reduced
repro is small enough to fit here:

  $ cat /tmp/a.c
  unsigned char f(unsigned char *p) {
    unsigned char result = 0;
    for (int shift = 0; shift < 1; ++shift)
      result |= p[0] << (shift * 8);
    return result;
  }
  $ bin/clang -O2 -S -o - /tmp/a.c | grep -A4 f:
  f:                                      # @f
          .cfi_startproc
  # %bb.0:                                # %entry
          xorl    %eax, %eax
          retq

That's nicely optimized, but I don't think it's the right result :-) The
loop body runs exactly once, so f should return p[0], not 0.

> Same as D60846 but with a fix for the problem encountered there which
> was a missing context adjustment in the handling of PHI nodes.
>
> The test that caused D60846 to be reverted was added in e15ab8f277c7.
>
> Reviewers: nikic, nlopes, mkazantsev,spatel, dlrobertson, uabelho, hakzsam
>
> Subscribers: hiraditya, bollu, llvm-commits
>
> Tags: #llvm
>
> Differential Revision: https://reviews.llvm.org/D69571

This reverts commit 57dd4b03e4806bbb4760ab6150940150d884df20.
2019-11-13 12:19:02 +01:00
Mirko Brkusanin
b6dfc34cbc [Mips] Add rematerialization support for ldi.fmt
The ldi.fmt instruction can be considered cheap enough to avoid a spill and
restore of the value it produces, since that value is loaded from an immediate.

Differential Revision: https://reviews.llvm.org/D69898
2019-11-13 11:33:52 +01:00
Simon Atanasyan
1a524fda5c [mips] Show an error if 64-bit target triple provided with 32-bit CPU
When a 64-bit triple is used, emit an error if the CPU only supports
32-bit code.

Patch by Miloš Stojanović.

Differential Revision: https://reviews.llvm.org/D70018
2019-11-13 13:32:39 +03:00
Simon Atanasyan
2b253ce1a6 [mips][test] Add Mips CPU tests. NFC
Add tests to check all available CPUs on Mips.

Patch by Miloš Stojanović.

Differential Revision: https://reviews.llvm.org/D70017
2019-11-13 13:32:39 +03:00
Daniil Suchkov
2f23262110 Temporarily revert "[InstCombine] Fold PHIs with equal incoming pointers"
Revert due to sanitizer-windows buildbot failure.

This reverts commit bbb29738b58aaf6f6518269abdcf8f64131665a9.
2019-11-13 17:14:11 +07:00
David Stenberg
09023bdaad [DebugInfo] Avoid creating entry values for clobbered registers
Summary:
Entry values are considered for parameters that have register-described
DBG_VALUEs in the entry block (along with other conditions).

If a parameter's value has been propagated from the caller to the
callee, then the parameter's DBG_VALUE in the entry block may be
described using a register defined by some instruction, in which case an
entry value should not be emitted for the parameter; currently that can
happen.
One such case was seen in the attached test case, in which the second
parameter, which is described by a redefinition of the first parameter's
register, would incorrectly get an entry value using the first
parameter's register. This commit intends to solve such cases by keeping
track of register defines, and ignoring DBG_VALUEs in the entry block
that are described by such registers.

In a RelWithDebInfo build of clang-8, the average size of the set was
27, and in a RelWithDebInfo+ASan build it was 30.

Reviewers: djtodoro, NikolaPrica, aprantl, vsk

Reviewed By: djtodoro, vsk

Subscribers: hiraditya, llvm-commits

Tags: #debug-info, #llvm

Differential Revision: https://reviews.llvm.org/D69889
2019-11-13 11:10:47 +01:00
David Stenberg
d1448e33f6 [DebugInfo] Add helper for finding entry value candidates [NFC]
Summary:
The conditions that are used to determine if entry values should be
emitted for a parameter are quite many, and will grow slightly
in a follow-up commit, so move those to a helper function, as was
suggested in the code review for D69889.

Reviewers: djtodoro, NikolaPrica

Reviewed By: djtodoro

Subscribers: probinson, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D69955
2019-11-13 11:10:47 +01:00
Sander de Smalen
50789d9a90 [AArch64] Extend storeRegToStackSlot to spill SVE registers.
This patch allows the register allocator to spill SVE registers to the stack.

Reviewers: ostannard, efriedma, rengolin, cameron.mcinally

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D70082
2019-11-13 10:09:32 +00:00
Daniil Suchkov
184ccaa44f [InstCombine] Fold PHIs with equal incoming pointers
In the case when all incoming values of a PHI are equal pointers, this
transformation inserts a definition of such a pointer right after the
definition of the base pointer and replaces both the PHI and all of its
incoming pointers with this value. The primary goal of this
transformation is canonicalization of this pattern in order to enable
optimizations that can't handle PHIs. Non-inbounds pointers aren't
currently supported.
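
For example (illustrative IR), in:

    bb1:
      %p1 = getelementptr inbounds i32, i32* %base, i64 4
      br label %merge
    bb2:
      %p2 = getelementptr inbounds i32, i32* %base, i64 4
      br label %merge
    merge:
      %phi = phi i32* [ %p1, %bb1 ], [ %p2, %bb2 ]

a single equivalent GEP is inserted right after the definition of %base,
and %phi, %p1, and %p2 are all replaced by it.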

Reviewers: spatel, RKSimon, lebedev.ri, apilipenko

Reviewed By: apilipenko

Tags: #llvm

Subscribers: hiraditya, llvm-commits

Differential Revision: https://reviews.llvm.org/D68128
2019-11-13 17:00:34 +07:00
Sander de Smalen
5074e446e7 [AArch64][SVE] Allocate locals that are scalable vectors.
This patch adds a target interface to set the StackID for a given type,
which allows scalable vectors (e.g. `<vscale x 16 x i8>`) to be assigned a
'sve-vec' StackID, so it is allocated in the SVE area of the stack frame.
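
For example (illustrative IR), a local such as:

    %mem = alloca <vscale x 16 x i8>, align 16

now gets the 'sve-vec' StackID rather than being laid out as a
fixed-size stack object.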

Reviewers: ostannard, efriedma, rengolin, cameron.mcinally

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D70080
2019-11-13 09:45:24 +00:00
Simon Tatham
6aa69a8001 [ARM,MVE] Use VMOV.{S8,S16} for sign-extended extractelement.
MVE includes instructions that extract an 8- or 16-bit lane from a
vector and sign-extend it into the output 32-bit GPR. `ARMInstrMVE.td`
already included isel patterns to select those instructions in
response to the `ARMISD::VGETLANEs` selection-DAG node type. But
`ARMISD::VGETLANEs` was never actually generated, because the code
that creates it was conditioned on NEON only.

It's an easy fix to enable the same code for integer MVE, and now IR
that sign-extends the result of an extractelement (whether explicitly
or as part of the function call ABI) will use `vmov.s8` instead of
`vmov.u8` followed by `sxtb`.
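
For example (illustrative), for IR such as:

    %lane = extractelement <16 x i8> %v, i32 0
    %res = sext i8 %lane to i32

we now select `vmov.s8 r0, q0[0]` instead of `vmov.u8 r0, q0[0]`
followed by `sxtb r0, r0`.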

Reviewers: SjoerdMeijer, dmgreen, ostannard

Subscribers: kristof.beyls, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70132
2019-11-13 09:08:41 +00:00
joanlluch
a1334aac0e [TargetLowering][DAGCombine][MSP430] Shift Amount Threshold in DAGCombine (4)
Summary:
Replaces
```
unsigned getShiftAmountThreshold(EVT VT)
```
by

```
bool shouldAvoidTransformToShift(EVT VT, unsigned amount)
```
thus giving more flexibility for targets to decide whether particular shift amounts must be considered expensive or not.

Updates the MSP430 target with a custom implementation.
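
A target override then looks roughly like this (a sketch only; the
bounds below are illustrative, not the exact MSP430 heuristics):

```
bool MSP430TargetLowering::shouldAvoidTransformToShift(EVT VT,
                                                       unsigned Amount) const {
  // MSP430 has no barrel shifter: a shift by N expands into N single-bit
  // shifts, so mid-range shift amounts are expensive to materialize.
  unsigned Bits = VT.getSizeInBits(); // illustrative bounds only
  return Amount > 1 && Amount < Bits - 1;
}
```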

This continues D69116, D69120 and D69326, and updates them, so all of them must be committed before this one.

Existing tests apply, a few more have been added.

Reviewers: asl, spatel

Reviewed By: spatel

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70042
2019-11-13 09:23:08 +01:00
Craig Topper
e9ebc959cb [X86] Remove setOperationAction for FP_TO_SINT v8i16.
This is no longer needed after widening legalization as we
custom legalize v8i8 ourselves.

Added entries to the cost model, but bumped the cost slightly
to account for the truncate shuffle that wasn't costed before.
2019-11-12 22:45:52 -08:00
Francesco Petrogalli
d054b2dd05 [VFABI] Add LLVM internal mangling for vector functions.
Summary:
This patch adds a custom ISA for vector functions for internal use
in LLVM. The <isa> token is set to "_LLVM_", and it is not attached
to any specific hardware vector ISA or Vector Function ABI.

The ISA is used as a token for handling Vector Function ABI-style
vectorization for those vector functions that are not directly
associated to any existing Vector Function ABI (for example, some of
the vector functions exposed by TargetLibraryInfo). The demangling
function for this ISA in a Vector Function ABI context is set to be
the same as the common one shared between X86 and AArch64.
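
For example (illustrative; `vector_sin` is a made-up redirection), a
2-lane variant of `sin` could be advertised under this ISA with a
mangled name along the lines of `_ZGV_LLVM_N2v_sin(vector_sin)`.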

Reviewers: jdoerfert, sdesmalen, simoll

Subscribers: kristof.beyls, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70089
2019-11-13 03:26:39 +00:00
Matt Arsenault
ee4c001c6a AMDGPU: Extend add x, (ext setcc) combine to sub
This is the same as the add case, but inverts the operation type.

This avoids regressions in a future patch.
2019-11-13 07:13:58 +05:30
Matt Arsenault
bf9d9a1180 AMDGPU: Switch backend default max workgroup size to 1024
Previously this would default to 256, not the maximum supported size
of 1024. Using a maximum lower than the hardware maximum requires
language runtimes to enforce this limit for correctness, which no
language has correctly done. Switch the default to the conservatively
correct maximum, and force frontends to opt-in to the more optimal 256
default maximum.
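
(For example, a frontend can opt back in to the old 256 limit by
attaching a function attribute such as
"amdgpu-flat-work-group-size"="1,256".)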

I don't really understand why the changes in occupancy-levels.ll
increased the computed occupancy, which I expected to decrease. I'm
not sure if these tests should be forcing the old maximum.
2019-11-13 07:11:02 +05:30
Matt Arsenault
5cfd953988 AMDGPU: Reduce reported maximum group size to 1024
While some targets allow encoding 2048, this was never tested or
supported.
2019-11-13 06:34:28 +05:30
Alina Sbirlea
a76fef4322 [GlobalsAA] Reenable test. 2019-11-12 16:53:28 -08:00
Alina Sbirlea
6f9d9bcfe8 Temporarily disable test. 2019-11-12 15:57:51 -08:00
Eric Christopher
ec6661f8aa Temporarily Revert "Reapply [LVI] Normalize pointer behavior" as it broke Python 3.6.
Reverting for now to figure out whether it's a problem in Python or the compiler.

This reverts commit 885a05f48a5d320946c89590b73a764e5884fe4f.
2019-11-12 15:51:51 -08:00
Craig Topper
335135422b [X86] Don't consider v64i1 as a legal type unless v64i8 is also a legal type.
This avoids some nasty issues with argument passing and lowering of
arbitrary v64i8 shuffles.
2019-11-12 14:56:02 -08:00
Craig Topper
90fb1e427b [X86] Only pass v64i8/v32i16 as v16i32 on non-avx512bw targets if the v16i32 type won't be split by prefer-vector-width=256
Otherwise just let the v64i8/v32i16 types be split to v32i8/v16i16.

In reality this shouldn't happen because it means we have a 512-bit
vector argument, but min-legal-vector-width says a value less than
512. But a 512-bit argument should have been factored into the
preferred vector width.
2019-11-12 14:56:01 -08:00