mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-11-26 12:43:36 +01:00
Commit Graph

188879 Commits

Author SHA1 Message Date
Nico Weber
3a938e8b27 gn build: add deps I failed to add in b2508ce85c1 2019-12-13 22:19:11 -05:00
Fangrui Song
0973858503 [AArch64] Save FP for leaf functions when disabling frame pointer elimination
This change allows clang's -mno-omit-leaf-frame-pointer to disable frame
pointer elimination for leaf functions. This behavior matches X86 and Mips,
and also GCC on AArch64.

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D71168
2019-12-13 18:48:58 -08:00
Sean Fertile
aea1c9f58c [PowerPC] Add Support for indirect calls on AIX.
Extends the descriptor-based indirect call support for 32-bit codegen,
and enables indirect calls for AIX.

In-depth Description:
In a function descriptor based ABI, a function pointer points at a
descriptor structure as opposed to the function's entry point. The
descriptor takes the form of 3 pointers: 1 for the function's entry
point, 1 for the TOC anchor of the module containing the function
definition, and 1 for the environment pointer:

struct FunctionDescriptor {
  void *EntryPoint;
  void *TOCAnchor;
  void *EnvironmentPointer;
};

An indirect call has several steps of loading the information from
the descriptor into the proper registers for setting up the call (a
conceptual sketch follows the caveats below). Namely, it has to:

1) Save the caller's TOC pointer into the TOC save slot in the linkage
   area, and then load the callee's TOC pointer into the TOC register
   (GPR 2 on AIX).

2) Load the function descriptor's entry point into the count register.

3) Load the environment pointer into the environment pointer register
   (GPR 11 on AIX).

4) Perform the call by branching on the count register.

5) Restore the caller's TOC pointer after returning from the indirect call.

A couple of important caveats to the above:

- There is no way to directly load a value from memory into the count register.
  Instead we populate the count register by loading the entry point address into
  a GPR and then moving the GPR to the count register.

- The TOC restore has to come immediately after the branch on count register
  instruction (i.e., the 1st instruction executed after we return from the
  call). This is an implementation limitation. We could, in theory, schedule
  the restore elsewhere as long as no uses of the TOC pointer fall in between
  the call and the restore; however, to keep it simple, we insert a pseudo
  instruction that represents both the indirect branch instruction and the
  load instruction that restores the caller's TOC from the linkage area. As
  they flow through the compiler as a single pseudo instruction, nothing can be
  inserted between them and the caller's TOC is then valid at any use.
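
As a conceptual sketch of the sequence above (illustrative only: the function
and variable names are hypothetical, and the real lowering operates on
registers and the linkage area rather than C variables):

struct FunctionDescriptor {
  void *EntryPoint;          // moved into the count register via a GPR
  void *TOCAnchor;           // loaded into the TOC register (GPR 2 on AIX)
  void *EnvironmentPointer;  // loaded into GPR 11 on AIX
};

typedef int (*CalleeTy)(int);

static int callViaDescriptor(struct FunctionDescriptor *FD, void *CallerTOC,
                             int Arg) {
  void *TOCSaveSlot = CallerTOC;          // 1) save caller's TOC (linkage area)
  void *CalleeTOC = FD->TOCAnchor;        // 1) load the callee's TOC pointer
  void *Env = FD->EnvironmentPointer;     // 3) load the environment pointer
  (void)CalleeTOC; (void)Env;
  CalleeTy Fn = (CalleeTy)FD->EntryPoint; // 2) entry point -> GPR -> count reg
  int Result = Fn(Arg);                   // 4) branch on the count register
  (void)TOCSaveSlot;                      // 5) caller's TOC restored right here
  return Result;
}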

Differential Revision: https://reviews.llvm.org/D70724
2019-12-13 20:07:00 -05:00
Julian Lettner
1be986ee4e [lit] Small cleanups. NFCI
Remove an unnecessary argument (same as the default), clean up imports, use
"pythonic" names for variables, and apply general formatting.
2019-12-13 17:00:04 -08:00
Fangrui Song
ca4058689b [Mips] Fix gcc -Wunused-but-set-variable in -DLLVM_ENABLE_ASSERTIONS=Off builds after D71028 2019-12-13 16:41:08 -08:00
Roman Tereshin
4bdb48ce84 [Legalizer] Making artifact combining order-independent
The legalization algorithm is complicated by two facts:
1) While regular instructions should be possible to legalize in
   an isolated, per-instruction, context-free manner, legalization
   artifacts can only be eliminated in pairs, which could be deeply, and
   ultimately arbitrarily, nested: { [ () ] }, where each parenthesis kind
   depicts an artifact kind, like extend, unmerge, etc. Such a structure
   can only be fully eliminated by simple local combines if they are
   attempted in a particular order (inside out), or alternatively by
   repeated scans, each eliminating only one innermost pair, resulting in
   O(n^2) complexity.
2) Some artifacts might in fact be regular instructions that could (and
   sometimes should) be legalized by the target-specific rules. This means
   that failure to eliminate all artifacts on the first iteration is not a
   failure; they need to be tried as instructions, which may produce more
   artifacts, including ones that are in fact regular instructions,
   resulting in a non-constant number of iterations required to finish the
   process.

I trust the recently introduced termination condition (no new artifacts
were created during the as-a-regular-instruction retrial of artifacts not
eliminated on the previous iteration) to provide termination, but it only
performs the legalization in full if, at each step, such chains of
artifacts are successfully eliminated in full as well.

That is currently not guaranteed, as the artifact combines are applied
only once and in an arbitrary order that depends on the order of creation
or insertion of artifacts into their worklist, which is no particular
order.

In this patch I make a small change to the artifact combiner, making it
re-insert into the worklist the immediate artifact users (looking through
copies) of each vreg whose definition changes due to an artifact combine.

Here the first scan through the artifacts worklist, while not
being done in any guaranteed order, only needs to find the innermost
pair(s) of artifacts that could be immediately combined out. After that
the process follows def-use chains, making them shorter at each step, thus
combining everything that can be combined in O(n) time.
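
As a rough illustration, a minimal, hypothetical worklist sketch of the
re-insertion step (the types and callbacks below are made up and do not
match the actual GlobalISel artifact combiner API):

#include <deque>
#include <functional>
#include <vector>

struct ArtifactInst {
  unsigned DefReg = 0;
  bool IsCopy = false;
};

// Whenever a combine changes the instruction defining a register, re-enqueue
// the immediate artifact users of that register (looking through copies), so
// the scan follows def-use chains and finishes in O(n).
void processArtifacts(
    std::deque<ArtifactInst *> &Worklist,
    const std::function<std::vector<ArtifactInst *>(unsigned)> &UsersOf,
    const std::function<bool(ArtifactInst *, unsigned &)> &TryCombine) {
  while (!Worklist.empty()) {
    ArtifactInst *I = Worklist.front();
    Worklist.pop_front();
    unsigned ChangedReg = 0;
    if (!TryCombine(I, ChangedReg))
      continue; // not (yet) part of an innermost combinable pair
    std::deque<ArtifactInst *> Pending;
    for (ArtifactInst *U : UsersOf(ChangedReg))
      Pending.push_back(U);
    while (!Pending.empty()) {
      ArtifactInst *U = Pending.front();
      Pending.pop_front();
      if (U->IsCopy) {
        for (ArtifactInst *UU : UsersOf(U->DefReg))
          Pending.push_back(UU); // look through the copy
      } else {
        Worklist.push_back(U);   // its artifact pair may now be combinable
      }
    }
  }
}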

Reviewers: volkan, aditya_nandakumar, qcolombet, paquette, aemerson, dsanders

Reviewed By: aditya_nandakumar, paquette

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71448
2019-12-13 15:45:18 -08:00
Roman Tereshin
a7ed9d1b7f [Legalizer] Refactoring out legalizeMachineFunction
and introducing new unittests/CodeGen/GlobalISel/LegalizerTest.cpp
relying on it to unit test the entire legalizer algorithm (including the
top-level main loop).

See also https://reviews.llvm.org/D71448
2019-12-13 15:45:18 -08:00
Roman Tereshin
1e9c992c54 [Legalizer] More detailed debugging printing in main loop 2019-12-13 15:45:18 -08:00
Alex Richardson
b2129c438f [NFC] Use SelectionDAG::getMemBasePlusOffset() instead of getNode(ISD::ADD)
Summary:
To find potential opportunities to use getMemBasePlusOffset() I looked at
all ISD::ADD uses found with the regex getNode\(ISD::ADD,.+,.+Ptr
in lib/CodeGen/SelectionDAG. If this patch is accepted I will convert
the files in the individual backends too.
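
For illustration, the substitution looks roughly like this (a sketch against
the contemporaneous SelectionDAG API; the addOffsetToPointer wrapper is
hypothetical, while getMemBasePlusOffset, getNode and getConstant are the
real calls being discussed):

#include "llvm/CodeGen/SelectionDAG.h"
using namespace llvm;

// Prefer the helper over an explicit ISD::ADD so a backend can later redirect
// it to a pointer-specific add opcode.
static SDValue addOffsetToPointer(SelectionDAG &DAG, SDValue Ptr,
                                  int64_t Offset, const SDLoc &DL) {
  // Before: DAG.getNode(ISD::ADD, DL, Ptr.getValueType(), Ptr,
  //                     DAG.getConstant(Offset, DL, Ptr.getValueType()));
  return DAG.getMemBasePlusOffset(
      Ptr, DAG.getConstant(Offset, DL, Ptr.getValueType()), DL);
}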

The motivation for this change is our out-of-tree CHERI backend
(https://github.com/CTSRD-CHERI/llvm-project). We use a separate register
type to store pointers (128-bit capabilities, which are effectively
unforgeable and monotonic fat pointers). These capabilities permit a
reduced set of operations and therefore use a separate ValueType (iFATPTR)
to represent pointers implemented as capabilities.
Therefore, we need to avoid using ISD::ADD for our patterns that operate
on pointers and need to use a function that chooses ISD::ADD or a new
ISD::PTRADD opcode depending on the value type.

We originally added a new DAG.getPointerAdd() function, but after this
patch series we can modify the implementation of getMemBasePlusOffset()
instead. Avoiding direct uses of ISD::ADD for pointer types will
significantly reduce the amount of assertion/instruction selection
failures for us in future upstream merges.

Reviewers: spatel

Reviewed By: spatel

Subscribers: merge_guards_bot, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71207
2019-12-13 21:40:03 +00:00
Alex Richardson
047493f270 [NFC] Implement SelectionDAG::getObjectPtrOffset() using getMemBasePlusOffset()
Summary:
This change is preparatory work to use this helper function in more places.
In order to make this change, getMemBasePlusOffset() has been extended to
also take a SDNodeFlags parameter.

The motivation for this change is our out-of-tree CHERI backend
(https://github.com/CTSRD-CHERI/llvm-project). We use a separate register
type to store pointers (128-bit capabilities, which are effectively
unforgeable and monotonic fat pointers). These capabilities permit a
reduced set of operations and therefore use a separate ValueType (iFATPTR)
to represent pointers implemented as capabilities.
Therefore, we need to avoid using ISD::ADD for our patterns that operate
on pointers and need to use a function that chooses ISD::ADD or a new
ISD::PTRADD opcode depending on the value type.

We originally added a new DAG.getPointerAdd() function, but after this
patch series we can modify the implementation of getMemBasePlusOffset()
instead. Avoiding direct uses of ISD::ADD for pointer types will
significantly reduce the amount of assertion/instruction selection
failures for us in future upstream merges.

Reviewers: spatel

Reviewed By: spatel

Subscribers: merge_guards_bot, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71206
2019-12-13 21:40:03 +00:00
Alex Richardson
538abea7e0 [NFC] Add a SDValue overload for SelectionDAG::getMemBasePlusOffset()
Summary:
This change is preparatory work to use this helper function in more places.
Currently the function only allows integer constant offsets, but there
are cases where we can use an existing SDValue parameter.

The motivation for this change is our out-of-tree CHERI backend
(https://github.com/CTSRD-CHERI/llvm-project). We use a separate register
type to store pointers (128-bit capabilities, which are effectively
unforgeable and monotonic fat pointers). These capabilities permit a
reduced set of operations and therefore use a separate ValueType (iFATPTR)
to represent pointers implemented as capabilities.
Therefore, we need to avoid using ISD::ADD for our patterns that operate
on pointers and need to use a function that chooses ISD::ADD or a new
ISD::PTRADD opcode depending on the value type.

We originally added a new DAG.getPointerAdd() function, but after this
patch series we can modify the implementation of getMemBasePlusOffset()
instead. Avoiding direct uses of ISD::ADD for pointer types will
significantly reduce the amount of assertion/instruction selection
failures for us in future upstream merges.

Reviewers: spatel, craig.topper

Reviewed By: spatel, craig.topper

Subscribers: craig.topper, merge_guards_bot, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71205
2019-12-13 21:40:03 +00:00
Alex Richardson
cd979516f8 [NFC] Change SelectionDAG::getMemBasePlusOffset() to use int64_t
Summary:
This change is preparatory work to use this helper function in more places.
Currently the function only allows positive offsets, but there are cases
where we want to subtract an offset from an existing pointer.

The motivation for this change is our out-of-tree CHERI backend
(https://github.com/CTSRD-CHERI/llvm-project). We use a separate register
type to store pointers (128-bit capabilities, which are effectively
unforgeable and monotonic fat pointers). These capabilities permit a
reduced set of operations and therefore use a separate ValueType (iFATPTR)
to represent pointers implemented as capabilities.
Therefore, we need to avoid using ISD::ADD for our patterns that operate
on pointers and need to use a function that chooses ISD::ADD or a new
ISD::PTRADD opcode depending on the value type.

We originally added a new DAG.getPointerAdd() function, but after this
patch series we can modify the implementation of getMemBasePlusOffset()
instead. Avoiding direct uses of ISD::ADD for pointer types will
significantly reduce the amount of assertion/instruction selection
failures for us in future upstream merges.

Reviewers: spatel

Reviewed By: spatel

Subscribers: merge_guards_bot, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71204
2019-12-13 21:40:03 +00:00
Sam Elliott
ab5c772c29 [RISCV] Move DebugLoc Copy into CompressInstEmitter
Summary:
This copy ensures that debug location information is kept for
compressed instructions. There are places where both compressInstruction and
uncompressInstruction are called that were not doing this copy, discarding some
debug info.

This change merely moves the copy into the generated file, so you cannot forget
to copy the location over when compressing or uncompressing.

Reviewers: asb, luismarques

Reviewed By: luismarques

Subscribers: sameer.abuasal, aprantl, hiraditya, rbar, johnrusso, simoncook, apazos, sabuasal, niosHD, kito-cheng, shiva0217, jrtc27, MaskRay, zzheng, edward-jones, rogfer01, MartinMosbeck, brucehoult, the_o, rkruppe, PkmX, jocewei, psnobl, benna, Jim, s.egerton, pzheng, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67493
2019-12-13 20:01:04 +00:00
Francesco Petrogalli
16e0e6d447 Revert "[VectorUtils] Introduce the Vector Function Database (VFDatabase)."
This reverts commit 0be81968a283fd4161cb9ac9748d5ed200926292.

The VFDatabase needs some rework to be able to handle vectorization
and subsequent scalarization of intrinsics in out-of-tree versions of
the compiler. For more details, see the discussion in
https://reviews.llvm.org/D67572.
2019-12-13 19:42:04 +00:00
Fangrui Song
7b33e8ad1d [profile] Fix a crash when -fprofile-remapping-file= triggers an error
Reviewed By: wmi

Differential Revision: https://reviews.llvm.org/D71485
2019-12-13 11:38:20 -08:00
Sanjay Patel
0a9bb6cc33 [InstSimplify] improve test coverage for insert+splat; NFC 2019-12-13 14:03:54 -05:00
Sanjay Patel
51a0b587ab [DAGCombiner] fold shift-trunc-shift to shift-mask-trunc (2nd try)
The initial attempt (rG89633320) botched the logic by reversing
the source/dest types. Added x86 tests for additional coverage.
The vector tests show a potential improvement (fold vector load
instead of broadcasting), but that's a known/existing problem.

This fold is done in IR by instcombine, and we have a special
form of it already here in DAGCombiner, but we want the more
general transform too:
https://rise4fun.com/Alive/3jZm

Name: general
Pre: (C1 + zext(C2) < 64)
%s = lshr i64 %x, C1
%t = trunc i64 %s to i16
%r = lshr i16 %t, C2
=>
%s2 = lshr i64 %x, C1 + zext(C2)
%a = and i64 %s2, zext((1 << (16 - C2)) - 1)
%r = trunc %a to i16

Name: special
Pre: C1 == 48
%s = lshr i64 %x, C1
%t = trunc i64 %s to i16
%r = lshr i16 %t, C2
=>
%s2 = lshr i64 %x, C1 + zext(C2)
%r = trunc %s2 to i16

...because D58017 exposes a regression without this fold.
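A small standalone check of the general fold for one concrete choice of
constants (C1 = 8, C2 = 4; illustrative only, not part of the patch):

#include <cassert>
#include <cstdint>

int main() {
  const uint64_t X = 0x123456789abcdef0ULL;
  const unsigned C1 = 8, C2 = 4; // satisfies C1 + zext(C2) < 64
  // Original: lshr i64, trunc to i16, lshr i16.
  uint16_t Orig = static_cast<uint16_t>(static_cast<uint16_t>(X >> C1) >> C2);
  // General fold: one wide shift, mask the surviving bits, then truncate.
  uint64_t Mask = (1ULL << (16 - C2)) - 1;
  uint16_t Folded = static_cast<uint16_t>((X >> (C1 + C2)) & Mask);
  assert(Orig == Folded);
  return 0;
}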
2019-12-13 14:03:54 -05:00
Hiroshi Yamauchi
88791ed5ec [PGO][PGSO] Enable size optimizations in code gen / target passes for cold code.
Summary: Split off of D67120.

Reviewers: davidxl

Subscribers: hiraditya, asb, rbar, johnrusso, simoncook, sabuasal, niosHD, jrtc27, MaskRay, zzheng, edward-jones, rogfer01, MartinMosbeck, brucehoult, the_o, PkmX, jocewei, lenary, s.egerton, pzheng, sameer.abuasal, apazos, luismarques, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71288
2019-12-13 11:01:19 -08:00
Momchil Velikov
c2d94cd107 [ARM] Fix an ICE when retrieving the number of micro-ops for vlldm/vlstm
The big switch in `ARMBaseInstrInfo::getNumMicroOps` is missing cases for
`VLLDM` and `VLSTM`, which are currently defined with itineraries having a
dynamic count of micro-ops.

Assuming an optimistic case in which these instructions do not actually perform
loads or stores, and with the idea that Armv8-m cores are supposed to use the
new style scheduling models, this patch just sets the itinerary for those two
instructions to `NoItinerary`.

Differential Revision: https://reviews.llvm.org/D71266
2019-12-13 18:19:40 +00:00
Nico Weber
a36abb8c80 gn docs: remove obsolete reference to monorepo 2019-12-13 13:03:55 -05:00
Momchil Velikov
ec8f60f452 [AArch64] Emit PAC/BTI .note.gnu.property flags
This patch makes LLVM emit the processor-specific program property types
defined in the AArch64 ELF spec
https://developer.arm.com/docs/ihi0056/f/elf-for-the-arm-64-bit-architecture-aarch64-abi-2019q2-documentation

A file containing no functions gets both property flags.  Otherwise, a property
is set iff all the functions in the file have the corresponding attribute.

Patch by Daniel Kiss and Momchil Velikov.

Differential Revision: https://reviews.llvm.org/D71019
2019-12-13 17:38:20 +00:00
Fangrui Song
c95b1c8cf0 [MC][PowerPC] Fix a crash when redefining a symbol after .set
Fix PR44284. This is probably not valid assembly but we should not crash.

Reviewed By: luporl, #powerpc, steven.zhang

Differential Revision: https://reviews.llvm.org/D71443
2019-12-13 09:31:54 -08:00
Mark Murray
70244eebaf [ARM][MVE][Intrinsics] All vqdmulhq/vqrdmulhq tests should be for signed numbers.
Fix broken tests. I can't yet explain how they worked locally pre-commit.
2019-12-13 17:29:59 +00:00
Fangrui Song
510814e7a8 [ARM][MVE] Fix -Wunused-variable in -DLLVM_ENABLE_ASSERTIONS=Off builds after D71062 2019-12-13 09:26:26 -08:00
LLVM GN Syncbot
beedccd28a gn build: Merge 84728e65e95 2019-12-13 16:20:43 +00:00
Miloš Stojanović
05b9c1cd8b [llvm-exegesis][mips] Add BenchmarkResultTest unit test
Test writing and reading benchmark instructions to and from disk, and
check calculations of min, max and avg values from a list of benchmark
measurements.

Differential Revision: https://reviews.llvm.org/D71265
2019-12-13 17:02:19 +01:00
Sam Parker
a3cacd08b6 [ARM][MVE] Make VPT invalid for tail predication
We've been marking VPT incompatible instructions as invalid for tail
predication too, though this may not strictly be true. VPTs are
incompatible and, unless it's the first predicate def in a loop,
they shouldn't be compatible for tail predication either.

Differential Revision: https://reviews.llvm.org/D71410
2019-12-13 15:01:08 +00:00
Kristina Bessonova
965f2386e0 [llvm-dwarfdump][Statistics] Don't count coverage less than 1% as 0%
Summary:
This is a follow-up to D70548.
Currently, variables with debug info coverage between 0% and 1% are put into
the zero bucket. D70548 changed the way statistics calculate a variable's coverage:
we began to use the enclosing scope rather than a possible variable life range.
Thus, more variables might be moved to the zero bucket despite having some debug
info coverage.
The patch distinguishes between a variable whose location info covers
significantly less than its enclosing scope and a variable that doesn't
have location info at all.
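
A hypothetical sketch of the intended bucketing (the function name and bucket
boundaries are illustrative, not the actual llvm-dwarfdump code):

#include <cstdint>

// Keep the zero bucket strictly for variables with no location coverage at
// all; coverage in (0%, 1%) goes into the first non-zero bucket instead of
// being rounded down to 0%.
unsigned coverageBucket(uint64_t CoveredBytes, uint64_t ScopeBytes) {
  if (ScopeBytes == 0 || CoveredBytes == 0)
    return 0;                                  // genuinely no coverage
  uint64_t Pct = CoveredBytes * 100 / ScopeBytes;
  // Buckets: 0 = none, 1 = (0%, 10%], ..., 10 = (90%, 100%].
  return Pct == 0 ? 1 : static_cast<unsigned>((Pct + 9) / 10);
}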

Reviewers: djtodoro, aprantl, dblaikie, avl

Subscribers: llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71070
2019-12-13 17:34:58 +03:00
Nicola Zaghen
8d0fd71b2b Reland [DataLayout] Fix occurrences that size and range of pointers are assumed to be the same.
GEP index size can be specified in the DataLayout, introduced in D42123. However, there were still places
in which getIndexSizeInBits was used interchangeably with getPointerSizeInBits. This notably caused issues
with InstCombine's visitPtrToInt, but the unit test was incorrect, so this remained undiscovered.
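
A minimal sketch of why the two queries must not be mixed up (the data layout
string is made up for illustration, and building it requires the LLVM headers):

#include "llvm/IR/DataLayout.h"
#include <cassert>
using namespace llvm;

int main() {
  // A layout where pointers are 128 bits wide but GEP indices are 64-bit.
  DataLayout DL("p:128:128:128:64");
  assert(DL.getPointerSizeInBits(0) == 128); // storage size of a pointer
  assert(DL.getIndexSizeInBits(0) == 64);    // width used for GEP arithmetic
  return 0;
}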

This fixes the buildbot failures.

Differential Revision: https://reviews.llvm.org/D68328

Patch by Joseph Faulls!
2019-12-13 14:30:21 +00:00
Sanjay Patel
356915b055 [x86] add tests for shift-trunc-shift; NFC
More coverage for a possible generic transform.
2019-12-13 08:37:06 -05:00
Mikhail Maltsev
c04187c6e9 [ARM][MVE] Add vector reduction intrinsics with two vector operands
Summary:
This patch adds intrinsics for the following MVE instructions:
* VABAV
* VMLADAV, VMLSDAV
* VMLALDAV, VMLSLDAV
* VRMLALDAVH, VRMLSLDAVH

Each of the above 4 groups has a corresponding new LLVM IR intrinsic,
since the instructions cannot be easily represented using
general-purpose IR operations.

Reviewers: simon_tatham, ostannard, dmgreen, MarkMurrayARM

Reviewed By: MarkMurrayARM

Subscribers: merge_guards_bot, kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D71062
2019-12-13 13:17:29 +00:00
Kristina Bessonova
f42c025b44 [llvm-dwarfdump][Statistics] Change the coverage buckets representation. NFC
Summary:
This changes the representation of 'coverage buckets' in llvm-dwarfdump and
llvm-locstats to one that makes it clearer what the buckets contain.

See some related details in D71070.

Reviewers: djtodoro, aprantl, cmtice, jhenderson

Subscribers: llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71366
2019-12-13 16:08:25 +03:00
Simon Tatham
e5e610c551 [ARM][MVE] Add intrinsics for more immediate shifts.
Summary:
This fills in the remaining shift operations that take a single vector
input and an immediate shift count: the `vqshl`, `vqshlu`, `vrshr` and
`vshll[bt]` families.

`vshll[bt]` (which shifts each input lane left into a double-width
output lane) is the most interesting one. There are separate MC
instruction ids for shifting by exactly the input lane width and
shifting by less than that, because the instruction encoding is so
completely different for the lane-width special case. So I had to
write two sets of patterns to match based on the immediate shift
count, which involved adding a ComplexPattern matcher to avoid the
general-case pattern accidentally matching the special case too. For
that family I've made sure to add an llc codegen test for both
versions of each instruction.

I'm experimenting with a new strategy for parametrising the isel
patterns for all these instructions: adding extra fields to the
relevant `Instruction` subclass itself, which are ignored by the
Tablegen backends that generate the MC data, but can be retrieved from
each instance of that instruction subclass when it's passed as a
template parameter to the multiclass that generates its isel patterns.
A nice effect of that is that I can fill in those informational fields
using `let` blocks, rather than having to type them out once per
instruction at `defm` time.

(As a result, quite a lot of existing instruction `def`s are
reindented by this patch, so it's clearer to read with whitespace
changes ignored.)

Reviewers: dmgreen, MarkMurrayARM, miyuki, ostannard

Reviewed By: MarkMurrayARM

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D71458
2019-12-13 13:07:39 +00:00
John Brawn
27b751b877 [ARM] Add custom strict fp conversion lowering when non-strict is custom
We have custom lowering for operations converting to/from floating-point types
when we don't have hardware support for those types, and this doesn't interact
well with the target-independent legalization of the strict versions of these
operations. Fix this by adding similar custom lowering of the strict versions.

This fixes the last of the assertion failures in the CodeGen/ARM/fp-intrinsics
test, with the remaining failures due to poor instruction selection.

Differential Revision: https://reviews.llvm.org/D71127
2019-12-13 13:00:00 +00:00
Tim Renouf
030b9c457d Revert "AMDGPU: Try to commute sub of boolean ext"
This reverts commit 69fcfb7d3597e0cdb5554b4e672e9032b411b167.

As shown in the test I attached to this commit, the change I reverted
causes a problem with "zext(cc1) - zext(cc2)". It commuted
the operands to the sub and used different logic to select the addc/subc
instruction:
   sub zext (setcc), x => addcarry 0, x, setcc
   sub sext (setcc), x => subcarry 0, x, setcc

... but that is bogus. I believe it is not possible to fold those commuted
patterns into any form of addcarry or subcarry. It may have worked as
intended before "AMDGPU: Change boolean content type to 0 or 1" because
the setcc was considered to be -1 rather than 1.
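
A standalone numeric check of that claim, assuming the usual addcarry
semantics of a + b + carry-in (illustrative only):

#include <cassert>
#include <cstdint>

int main() {
  // Take x = 1 and a setcc result cc = 1 (boolean content 0 or 1).
  uint32_t X = 1, CC = 1;
  uint32_t SubZext = CC - X;      // zext(cc) - x == 0
  uint32_t AddCarry = 0 + X + CC; // addcarry 0, x, cc == 2 (carry-out ignored)
  assert(SubZext != AddCarry);    // so the commuted rewrite is not equivalent
  return 0;
}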

Differential Revision: https://reviews.llvm.org/D70978

Change-Id: If2139421aa6c935cbd1d925af58fe4a4aa9e8f43
2019-12-13 12:49:06 +00:00
Djordje Todorovic
7c53d69f00 [llvm-locstats] Avoid the locstats when no scope bytes coverage found
If the total number of PC range bytes in each variable's enclosing scope
('scope bytes total') is 0, we will have division by zero.

Differential Revision: https://reviews.llvm.org/D71415
2019-12-13 13:45:44 +01:00
Alex Richardson
5244511578 [NFC] Use EVT instead of bool for getSetCCInverse()
Summary:
The use of a boolean isInteger flag (generally initialized using
VT.isInteger()) caused errors in our out-of-tree CHERI backend
(https://github.com/CTSRD-CHERI/llvm-project).

In our backend, pointers use a separate ValueType (iFATPTR) and therefore
.isInteger() returns false. This meant that getSetCCInverse() was using the
floating-point variant and generated incorrect code for us:
`(void *)0x12033091e < (void *)0xffffffffffffffff` would return false.

Committing this change will significantly reduce our merge conflicts
for each upstream merge.

Reviewers: spatel, bogner

Reviewed By: bogner

Subscribers: wuzish, arsenm, sdardis, nemanjai, jvesely, nhaehnle, hiraditya, kbarton, jrtc27, atanasyan, jsji, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70917
2019-12-13 12:22:03 +00:00
Sjoerd Meijer
ca4474f01a Revert "[ARM][MVE] findVCMPToFoldIntoVPS. NFC."
This reverts commit 9468e3334ba54fbb1b209aaec662d7375451fa1f.

There's a test that doesn't like this change. The RDA analysis
gets invalidated by changes in the block, which is not taken into
account. Revert while I work on a fix for this.
2019-12-13 11:56:44 +00:00
Mark Murray
2d0e359316 [ARM][MVE][Intrinsics] Add *_x() variants of my *_m() intrinsics.
Summary:
This makes better use of multiclass, which helped find some existing
bugs in the predicated VMULL* intrinsics; those are now fixed.

The refactored VMULL[TB]Q_(INT|POLY)_M() intrinsics were discovered
to have an argument ("inactive") with incorrect type, and this required
a fix that is included in this whole patch. The argument "inactive"
should have been the same width (per vector element) as the return
type of the intrinsic, but was not in the case where the return type
was double the element width of the input types.

To assist in testing the multiclassing, and to thwart further gremlins,
the unit tests are improved in scope.

The *.ll tests are all generated by a small bit of throw-away scripting
from the corresponding *.c tests, and as such the diffs are large and
nasty. Look at the file rather than the diff.

Reviewers: dmgreen, miyuki, ostannard, simon_tatham

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D71421
2019-12-13 11:51:23 +00:00
Kai Nacke
520375e6f3 [Docs] Fix target feature matrix for PowerPC and SystemZ
The target feature matrix in the code generator documentation is
outdated. This PR fixes some entries for PowerPC and SystemZ.

Both have:
- assembly parser
- disassembler
- .o file writing

Reviewers: uweigand

Differential Revision: https://reviews.llvm.org/D71004
2019-12-13 06:18:08 -05:00
Kerry McLaughlin
07c7fe4c54 Recommit "[AArch64][SVE] Implement intrinsics for non-temporal loads & stores"
Updated the pred_load patterns added to AArch64SVEInstrInfo.td by this patch
to use reg + imm non-temporal loads, fixing the previous test failures.

Original commit message:

Adds the following intrinsics:
  - llvm.aarch64.sve.ldnt1
  - llvm.aarch64.sve.stnt1

This patch creates masked loads and stores with the
MONonTemporal flag set when used with the intrinsics above.
2019-12-13 10:08:20 +00:00
David Stenberg
fa448e3ce2 [LiveDebugValues] Omit entry values for DBG_VALUEs with pre-existing expressions
Summary:
This is a quickfix for PR44275. An assertion that checks that the
DIExpression is valid failed due to attempting to create an entry value
for an indirect parameter. This started appearing after D69028, as the
indirect parameter started being represented using a DW_OP_deref,
rather than with the DBG_VALUE's second operand, meaning that the
isIndirectDebugValue() check in LiveDebugValues did not exclude such
parameters. A DIExpression that has an entry value operation can
currently not have any other operation, leading to the failed isValid()
check.

This patch simply makes us stop considering emitting entry values
for such parameters. To support such cases I think we at least need
to do the following changes:

 * In DIExpression::isValid(): Remove the limitation that a
   DW_OP_LLVM_entry_value operation can be the only operation in a
   DIExpression.

 * In LiveDebugValues::emitEntryValues(): Create an entry value of size
   1, so that it only wraps the register operand, and not the whole
   pre-existing expression (the DW_OP_deref).

 * In LiveDebugValues::removeEntryValue(): Check that the new debug
   value has the same debug expression as the original, rather than
   checking that the debug expression is empty.

 * In DwarfExpression::addMachineRegExpression(): Modify the logic so
   that a DW_OP_reg* expression is emitted for the entry value.
   That is how GCC emits entry values for indirect parameters. That will
   currently not happen due to the DW_OP_deref causing the
   !HasComplexExpression check to fail. The LocationKind needs to be changed
   also, rather than always emitting a DW_OP_stack_value for entry values.

There are probably more things I have missed, but that could hopefully
be a good starting point for emitting such entry values.

Reviewers: djtodoro, aprantl, jmorse, vsk

Reviewed By: aprantl, vsk

Subscribers: hiraditya, llvm-commits

Tags: #debug-info, #llvm

Differential Revision: https://reviews.llvm.org/D71416
2019-12-13 10:49:46 +01:00
Georgii Rymar
5ee22c8a68 [yaml2obj] - Add a way to override sh_flags section field.
Currently we have the `Flags` property that allows setting
flags for a section. The problem is that it does not
allow us to set an arbitrary value, because of the bit-field
validation under the hood. Arbitrary values can be used
to test specific broken cases.

We probably do not want to relax the validation, so this
patch adds a `ShFlags` property that allows overriding
the `sh_flags` field. It is in line with the other `Sh*` properties
we already have.

Differential revision: https://reviews.llvm.org/D71411
2019-12-13 11:54:37 +03:00
Georgii Rymar
0c77a265d1 [llvm-readobj] - Fix letters used for dumping section types in GNU style.
I've noticed that when we have all regular flags set, we print "WAEXMSILoGTx"
instead of "WAXMSILOGTCE" printed by GNU readelf.

It happens because:
1) We print SHF_EXCLUDE at the wrong place.
2) We do not recognize SHF_COMPRESSED, we print "x" instead of "C".
3) We print "o" instead of "O" for SHF_OS_NONCONFORMING.

This patch fixes differences and adds test cases.

Differential revision: https://reviews.llvm.org/D71418
2019-12-13 11:31:24 +03:00
Craig Topper
f2c16e8f5e [LegalizeTypes] Remove unnecessary if before calling ReplaceValueWith on the chain in SoftenFloatRes_LOAD.
I believe this is a leftover from when fp128 was softened to fp128
on X86-64. In that case type legalization must have been able to
create a load that was the same as N which would make this
replacement fail or assert. Since we no longer do that, this
check should be unneeded.
2019-12-13 00:14:41 -08:00
Nikita Popov
76a5a184aa Reapply [LVI] Normalize pointer behavior
This is a rebase of the change over D70376, which fixes an LVI cache
invalidation issue that also affected this patch.

-----

Related to D69686. As noted there, LVI currently behaves differently
for integer and pointer values: For integers, the block value is always
valid inside the basic block, while for pointers it is only valid at
the end of the basic block. I believe the integer behavior is the
correct one, and CVP relies on it via its getConstantRange() uses.

The reason for the special pointer behavior is that LVI checks whether
a pointer is dereferenced in a given basic block and marks it as
non-null in that case. Of course, this information is valid only after
the dereferencing instruction, or in conservative approximation,
at the end of the block.

This patch changes the treatment of dereferenceability: Instead of
including it inside the block value, we instead treat it as something
similar to an assume (it essentially is a non-nullness assume) and
incorporate this information in intersectAssumeOrGuardBlockValueConstantRange()
if the context instruction is the terminator of the basic block.
This happens either when determining an edge-value internally in LVI,
or when a terminator was explicitly passed to getValueAt(). The latter
case makes this change not fully NFC, because we can now fold
terminator icmps based on the dereferenceability information in the
same block. This is the reason why I changed one JumpThreading test
(it would optimize the condition away without the change).

Of course, we do not want to recompute dereferencability on each
intersectAssume call, so we need a new cache for this. The
dereferenceability analysis requires walking the entire basic block
and computing underlying objects of all memory operands. This was
previously done separately for each queried pointer value. In the
new implementation (both because this makes the caching simpler,
and because it is faster), I instead only walk the full BB once and
cache all the dereferenced pointers. So the traversal is now performed
only once per BB, instead of once per queried pointer value.
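
A rough sketch of that per-block caching idea (the class and names are
hypothetical, and it uses stripPointerCasts() where the real analysis
computes underlying objects):

#include "llvm/ADT/DenseMap.h"
#include "llvm/ADT/SmallPtrSet.h"
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;

// Walk each block once, cache every pointer it dereferences, and answer later
// non-nullness queries from the cache instead of rescanning per value.
class DereferencedPointersCache {
  DenseMap<const BasicBlock *, SmallPtrSet<const Value *, 8>> Cache;

public:
  bool isDereferencedInBlock(const Value *Ptr, const BasicBlock *BB) {
    auto It = Cache.find(BB);
    if (It == Cache.end()) {
      SmallPtrSet<const Value *, 8> &Ptrs = Cache[BB];
      for (const Instruction &I : *BB) {
        if (auto *LI = dyn_cast<LoadInst>(&I))
          Ptrs.insert(LI->getPointerOperand()->stripPointerCasts());
        else if (auto *SI = dyn_cast<StoreInst>(&I))
          Ptrs.insert(SI->getPointerOperand()->stripPointerCasts());
      }
      It = Cache.find(BB);
    }
    return It->second.count(Ptr->stripPointerCasts());
  }
};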

I think the overall model now makes more sense than before, and there
will be no more pitfalls due to differing integer/pointer behavior.

Differential Revision: https://reviews.llvm.org/D69914
2019-12-13 08:59:58 +01:00
Rui Ueyama
bd7cf0f396 Revert an accidental commit af5ca40b47b3e85c3add81ccdc0b787c4bc355ae 2019-12-13 15:17:40 +09:00
Rui Ueyama
8ba96bda06 temporary 2019-12-13 14:35:03 +09:00
Nate Voorhies
95d42df811 [NFC][AArch64] Fix typo.
Summary: Coaleascer should be coalescer.

Reviewers: qcolombet, Jim

Reviewed By: Jim

Subscribers: Jim, kristof.beyls, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70731
2019-12-13 10:25:36 +08:00
Eric Christopher
a1871cdc3f Temporarily revert "NFC: DebugInfo: Refactor RangeSpanList to be a struct, like DebugLocStream::List"
as it was causing bot and build failures.

This reverts commit 8e04896288d22ed8bef7ac367923374f96b753d6.
2019-12-12 17:55:41 -08:00