mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-19 19:12:56 +02:00
Commit Graph

189765 Commits

Author SHA1 Message Date
Jonas Paulsson
a1f306c9ad Recommit "[MachineVerifier] Improve verification of live-in lists."
MachineVerifier::visitMachineFunctionAfter() is extended to check the
live-through case for live-in lists. This is only done for registers that
have no aliases and are neither allocatable nor reserved, such as the
SystemZ::CC register.

The MachineVerifier previously only caught the case of a live-in use without
an entry in the live-in list (reported as "using an undefined physical register").

A comment in LivePhysRegs.h has been added stating a guarantee that
addLiveOuts() can be trusted for a full register both before and after
register allocation.

Review: Quentin Colombet

Differential Revision: https://reviews.llvm.org/D68267
2020-01-08 16:58:54 -08:00
Jonas Paulsson
ad6988e5af [X86] Remove EFLAGS from live-in lists in X86FlagsCopyLowering.
When EFLAGS is no longer live into a basic block, remove it from the live-in
list.

Fixes https://bugs.llvm.org/show_bug.cgi?id=44462.

Review: Craig Topper

Differential Revision: https://reviews.llvm.org/D71375
2020-01-08 16:36:03 -08:00
Evgenii Stepanov
bc4148c67f Revert "Merge memtag instructions with adjacent stack slots."
*** Bad machine code: Tied use must be a register ***
- function:    stg_alloca17
- basic block: %bb.0 entry (0x20076710580)
- instruction: early-clobber %0:gpr64common, early-clobber %1:gpr64sp = STGloop 272, %stack.0.a :: (store 272 into %ir.a, align 16)
- operand 3:   %stack.0.a

http://lab.llvm.org:8011/builders/llvm-clang-x86_64-expensive-checks-win/builds/21481/steps/test-check-all/logs/stdio

This reverts commit b675a7628ce6a21b1e4a71c079a67badfb8b073d.
2020-01-08 14:36:12 -08:00
Kazu Hirata
e45445b80c Revert "[JumpThreading] Thread jumps through two basic blocks"
It looks like my patch breaks the sanitizer-windows build:

http://lab.llvm.org:8011/builders/sanitizer-windows/builds/56324

This reverts commit ead815924e6ebeaf02c31c37ebf7a560b5fdf67b.
2020-01-08 13:58:39 -08:00
Sanjay Patel
82519af5e2 [InstSimplify] add tests for select of true/false; NFC 2020-01-08 16:22:48 -05:00
Sanjay Patel
af36773fd6 [x86] add test for concat-extract corner case; NFC
See D72361 for discussion.
2020-01-08 14:44:44 -05:00
Evgenii Stepanov
69bd6b331e Merge memtag instructions with adjacent stack slots.
Summary:
Detect a run of memory tagging instructions for adjacent stack frame slots,
and replace them with a shorter instruction sequence
* replace STG + STG with ST2G
* replace STGloop + STGloop with STGloop

This code needs to run when stack slot offsets are already known, but before
FrameIndex operands in STG instructions are eliminated; that's the
reason for the new hook in PrologueEpilogue.

This change modifies STGloop and STZGloop pseudos to take the size as an
immediate integer operand, and base address as a FI operand when
possible. This is needed to simplify recognizing an STGloop instruction
as operating on a stack slot post-regalloc.

This improves memtag code size by ~0.25%, and it looks like an additional ~0.1%
is possible by rearranging the stack frame such that consecutive STG
instructions reference adjacent slots (patch pending).

Reviewers: pcc, ostannard

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70286
2020-01-08 11:02:03 -08:00
Philip Reames
fa60af16c4 [X86] Keep cl::opts at top of file [NFC] 2020-01-08 10:54:02 -08:00
Craig Topper
3bc64083c4 [X86] Custom type legalize v4i64->v4f32 uint_to_fp on sse4.1 targets in 64-bit mode
For v4i64->v4f32 uint_to_fp on pre-AVX targets where v4i64 isn't legal, we create two v2i64->v2f32 uint_to_fp nodes that need to be shuffled together. Our codegen for v2i64->v2f32 involves detecting whether the number is larger than (2^31 - 1); if so, we do a special division by 2 so we can do a signed conversion (which we need to scalarize), then multiply by 2 at the end if we divided earlier.

When v4i64 isn't legal we need to split the range check and the divide-by-2 across two v2i64 vectors. The scalar part can extract the 4 i64 values from those splits, but we can reassemble the 4 scalar f32 results directly into a single v4f32 vector. Then we just need to combine the fixup indications from the 2 halves, and we can do the final multiply-by-2 fixup on all 4 values at once, if needed, using a single v4f32 blend and v4f32 fadd.
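For reference, a scalar sketch of the general "halve, convert signed, double" idea used here (hypothetical helper; the thresholds and shuffling in the actual vector lowering differ):

  #include <cstdint>

  float u64_to_f32(uint64_t x) {
    if (x <= (uint64_t)INT64_MAX)
      return (float)(int64_t)x;           // fits a signed conversion directly
    uint64_t half = (x >> 1) | (x & 1);   // halve, keeping the low bit for rounding
    return 2.0f * (float)(int64_t)half;   // convert signed, then undo the halving
  }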

Differential Revision: https://reviews.llvm.org/D72368
2020-01-08 10:06:01 -08:00
Craig Topper
57efb09264 [X86] Add isel patterns for bitcasting between v32i1/v64i1 and float/double.
We have to do an intermediate jump to a GPR to make the cast.

Fixes PR43750.
2020-01-08 10:06:01 -08:00
Philip Reames
8de39d5e37 [BranchAlign] Compiler support for suppressing branch align
As discussed heavily in the original review (D70157), there's a need for the compiler to be able to selectively suppress padding (either nop or prefix) to respect assumptions about the meaning of labels and instructions in generated code.

Rather than wait for syntax to be finalized - which appears to be a very slow process - this patch focuses on the compiler use case and *only* worries about the integrated assembler. To my knowledge, this covers all cases mentioned to date for clang/JIT support.

For testing purposes, I wired it up so that if the integrated assembler was using autopadding for branch alignment (e.g. enabled at command line) then the textual assembly output would contain a comment for each location where padding was enabled or disabled. This seemed like the least painful choice overall.

Note that this patch effectively disables the jcc errata mitigation for many constructs (statepoints, implicit null checks, xray, etc.), which is not ideal. It is at least *correct* and should allow us to enable the mitigation for the compiler. Once that's done, and a few other items are worked through, we probably want to come back to this and explore a bundling-based approach instead, so that we can pad instructions while keeping labels in the right place.

Differential Revision: https://reviews.llvm.org/D72303
2020-01-08 10:03:30 -08:00
Simon Pilgrim
d0f5096b25 [MC] writeFragment - assert MCFragment::FT_Fill length is legal.
Silence (clang/MSVC) static analyzer warnings that the fragment data may either write out of bounds of the local array or reference uninitialized data.
2020-01-08 17:19:11 +00:00
Simon Pilgrim
046d03c020 Fix "pointer is null" static analyzer warning. NFCI.
Use cast<> instead of dyn_cast<> since we know that the pointer should be valid (and is dereferenced immediately below in the getSignature call).
2020-01-08 17:19:10 +00:00
Michael Liao
ed0da9252a [amdgpu] Remove unused header. NFC. 2020-01-08 11:32:09 -05:00
Simon Pilgrim
45ccc05a7c [SelectionDAG] Use llvm::Optional<APInt> for FoldValue.
Use llvm::Optional<APInt> instead of std::pair<APInt, bool>, where the second (bool) element was used to report success/failure of the fold.
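For illustration, the shape of the change in a standalone sketch (using std::optional here; the patch itself uses llvm::Optional, and FoldValue's real parameters differ):

  #include <optional>

  // Instead of returning {Value, /*Succeeded=*/false}, an empty optional now
  // signals that the fold failed.
  std::optional<int> foldValue(int a, int b) {
    if (b == 0)
      return std::nullopt; // fold failed
    return a / b;          // fold succeeded
  }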
2020-01-08 16:09:24 +00:00
Sanjay Patel
5cb8a60bd4 [InstCombine] Adding testcase for Z / (1.0 / Y) => (Y * Z); NFC
The added testcase shows the current transformation for the operation
Z / (1.0 / Y), which remains unchanged. This will be updated to align
with the transformed code (Y * Z) with D72319.

The existing transformation Z / (X / Y) => (Y * Z) / X does not handle this
case, as there are multiple uses of (1.0 / Y) in this testcase.

Patch by: @raghesh (Raghesh Aloor)

Differential Revision: https://reviews.llvm.org/D72388
2020-01-08 10:33:44 -05:00
Sanjay Patel
aa51f59c1a [DAGCombiner] clean up extract-of-concat fold; NFC
This hopes to improve readability and adds an assert.
The functional change noted by the TODO comment is
proposed in:
D72361
2020-01-08 10:15:33 -05:00
Kazu Hirata
787afd8fb4 [JumpThreading] Thread jumps through two basic blocks
Summary:
This patch teaches JumpThreading.cpp to thread through two basic
blocks like:

  bb3:
    %var = phi i32* [ null, %bb1 ], [ @a, %bb2 ]
    %tobool = icmp eq i32 %cond, 0
    br i1 %tobool, label %bb4, label ...

  bb4:
    %cmp = icmp eq i32* %var, null
    br i1 %cmp, label bb5, label bb6

by duplicating basic blocks like bb3 above.  Once we duplicate bb3 as
bb3.dup and redirect edge bb2->bb3 to bb2->bb3.dup, we have:

  bb3:
    %var = phi i32* [ @a, %bb2 ]
    %tobool = icmp eq i32 %cond, 0
    br i1 %tobool, label %bb4, label ...

  bb3.dup:
    %var = phi i32* [ null, %bb1 ]
    %tobool = icmp eq i32 %cond, 0
    br i1 %tobool, label %bb4, label ...

  bb4:
    %cmp = icmp eq i32* %var, null
    br i1 %cmp, label bb5, label bb6

Then the existing code in JumpThreading.cpp can thread edge
bb3.dup->bb4 through bb4 and eventually create bb3.dup->bb5.

Reviewers: wmi

Subscribers: hiraditya, jfb, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70247
2020-01-08 06:57:36 -08:00
Simon Tatham
0f41bbd87f [ARM,MVE] Intrinsics for variable shift instructions.
This batch of intrinsics fills in all the shift instructions that take
a variable shift distance in a register, instead of an immediate. Some
of these instructions take a single shift distance in a scalar
register and apply it to all lanes; others take a vector of per-lane
distances.

These instructions are all basically one family, varying in whether
they saturate out-of-range values, and whether they round when bits
are shifted off the bottom. I've implemented them at the IR level by a
much smaller family of IR intrinsics, which take flag parameters to
indicate saturating and/or rounding (along with the usual one to
specify signed/unsigned integers).

An oddity is that all of them are //left// shift instructions – but if
you pass a negative shift count, they'll shift right. So the vector
shift distances are always vectors of //signed// integers, regardless
of whether you're considering the other input vector to be of signed
or unsigned. Also, even the simplest `vshlq` instruction in this
family (neither saturating nor rounding) has to be implemented as an
IR intrinsic, because the ordinary LLVM IR `shl` operation would
consider an out-of-range shift count to be undefined behavior.
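As a rough per-lane model of the plain (non-saturating, non-rounding) variable shift described above (hypothetical helper; assumes |count| < 32 and an arithmetic right shift for negative values):

  #include <cstdint>

  // One lane of a 32-bit variable shift: the count is signed, and a negative
  // count shifts right instead of left.
  int32_t vshl_lane(int32_t value, int8_t count) {
    if (count >= 0)
      return (int32_t)((uint32_t)value << count); // left shift on the bit pattern
    return value >> -count;                       // negative count: shift right
  }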

Reviewers: dmgreen, MarkMurrayARM, miyuki, ostannard

Reviewed By: dmgreen

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D72329
2020-01-08 14:42:24 +00:00
Simon Tatham
5371ba4ab1 [ARM,MVE] Intrinsics for partial-overwrite imm shifts.
This batch of intrinsics covers two sets of immediate shift
instructions, which have in common that they only overwrite part of
their output register and so they need an extra input giving its
previous value.

The VSLI and VSRI instructions shift each lane of the input vector
left or right just as if they were normal immediate VSHL/VSHR, but
then they only overwrite the output bits that correspond to actual
shifted bits of the input. So VSLI will leave the low n bits of each
output lane unchanged, and VSRI the same with the top n bits.
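For illustration, a scalar model of one VSLI lane showing the partial overwrite described above (hypothetical helper; assumes a 32-bit lane and 0 <= n < 32):

  #include <cstdint>

  // Shift left and insert: the shifted input provides the high bits, while the
  // low n bits of the previous output value are preserved.
  uint32_t vsli_lane(uint32_t prev, uint32_t in, unsigned n) {
    uint32_t low_mask = (1u << n) - 1;    // bits of 'prev' that survive
    return (in << n) | (prev & low_mask);
  }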

The V[Q][R]SHR[U]N family are all narrowing shifts: they take an input
vector of 2n-bit integers, shift each lane right by a constant, and
then narrow the shifted result to only n bits. So they only
overwrite half of the n-bit lanes in the output register, and the B/T
suffix indicates whether it's the bottom or top half of each 2n-bit
lane.

I've implemented the whole of the latter family using a single IR
intrinsic `vshrn`, which takes a lot of i32 parameters indicating
which instruction it expands to (by specifying signedness of the input
and output types, whether it saturates and/or rounds, etc).

Reviewers: dmgreen, MarkMurrayARM, miyuki, ostannard

Reviewed By: dmgreen

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D72328
2020-01-08 14:42:24 +00:00
Bevin Hansson
21be0de34d [Intrinsic] Add fixed point division intrinsics.
Summary:
This patch adds intrinsics and ISelDAG nodes for
signed and unsigned fixed-point division:

  llvm.sdiv.fix.*
  llvm.udiv.fix.*

These intrinsics perform scaled division on two
integers or vectors of integers. They are required
for the implementation of the Embedded-C fixed-point
arithmetic in Clang.
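As a rough reference model of the scaled division described above (hypothetical helper for the 32-bit signed case; rounds toward zero, no overflow handling):

  #include <cstdint>

  // Interpret a and b as fixed-point values with 'scale' fractional bits and
  // divide them, producing a result with the same scale.
  int32_t sdiv_fix_i32(int32_t a, int32_t b, unsigned scale) {
    int64_t wide = (int64_t)a * ((int64_t)1 << scale); // pre-scale the dividend
    return (int32_t)(wide / b);                        // truncates toward zero
  }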

Patch by: ebevhan

Reviewers: bjope, leonardchan, efriedma, craig.topper

Reviewed By: craig.topper

Subscribers: Ka-Ka, ilya, hiraditya, jdoerfert, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70007
2020-01-08 15:17:46 +01:00
Qiu Chaofan
e9f8f15265 [NFC] Move InPQueue into arguments of releaseNode
This patch moves `InPQueue` from the template arguments of `releaseNode` into
its function arguments, which is a cleaner approach.

Differential Revision: https://reviews.llvm.org/D72125
2020-01-08 22:15:32 +08:00
LLVM GN Syncbot
4861c0838e [gn build] Port 346f6b54bd1 2020-01-08 13:43:29 +00:00
Anna Welker
4b7c61b6c9 [ARM][MVE] Enable masked gathers from vector of pointers
Adds a pass to the ARM backend that takes a v4i32
gather and transforms it into a call to MVE's
masked gather intrinsics.

Differential Revision: https://reviews.llvm.org/D71743
2020-01-08 13:43:12 +00:00
Nico Weber
4710b1b396 [gn build] (manually) merge 1cf11a4c67a15 2020-01-08 07:44:33 -05:00
Alexey Lapshin
076ea9fc78 [Dsymutil][Debuginfo][NFC] Reland: Refactor dsymutil to separate DWARF optimizing part. #2.
Summary:
This patch relands D71271. The problem with D71271 is that it has a cyclic dependency:
CodeGen->AsmPrinter->DebugInfoDWARF->CodeGen. To avoid the cyclic dependency, this patch
puts the implementation of DWARFOptimizer into a separate library: lib/DWARFLinker.

Thus the difference between this patch and D71271 is that DWARFOptimizer is renamed to
DWARFLinker and its files are put into lib/DWARFLinker.

Reviewers: JDevlieghere, friss, dblaikie, aprantl

Reviewed By: JDevlieghere

Subscribers: thegameg, merge_guards_bot, probinson, mgorny, hiraditya, llvm-commits

Tags: #llvm, #debug-info

Differential Revision: https://reviews.llvm.org/D71839
2020-01-08 14:15:31 +03:00
Sam Parker
dbbd596280 [NFC][ARM] Update tests
Run the update_mir_test on some of the low-overhead loop tests.
2020-01-08 06:08:30 -05:00
Xuanda Yang
a5bc6713d8 [llvm-symbolizer] Fix printing of malformed address values not passed via stdin
Summary:
Relates to https://bugs.llvm.org/show_bug.cgi?id=44443.

Add a missing newline when printing bad input values, and fix the testcase.

Reviewers: jhenderson

Reviewed By: jhenderson

Subscribers: rupprecht, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72313
2020-01-08 18:37:41 +08:00
Kadir Cetinkaya
95287dad6e Revert "[InstCombine] fold zext of masked bit set/clear"
This reverts commit a041c4ec6f7aa659b235cb67e9231a05e0a33b7d.

This looks like a non-trivial change, and there has been no code review
(at least there were no phabricator revisions attached to the commit
description). It is also causing a regression in one of our downstream
integration tests; we haven't been able to come up with a minimal
reproducer yet.
2020-01-08 11:21:21 +01:00
Tim Northover
0916132710 AArch64: add missing Apple CPU names and use them by default.
Apple's CPUs are called A7-A13 in official communication, occasionally with
weird suffixes which we probably don't need to care about. This adds each one
and describes its features. It also switches the default CPU to the canonical
name for Cyclone, but leaves legacy support in so that existing bitcode still
compiles.
2020-01-08 09:24:06 +00:00
QingShan Zhang
5b470c2b5a [NFC][Test] Add the option -enable-no-signed-zeros-fp-math for test fma-combine.ll
2020-01-08 06:48:51 +00:00
Wang, Pengfei
424f235504 [X86] Adding fp128 support for strict fcmp
Summary: Adding fp128 support for strict fcmp

Reviewers: craig.topper, LiuChen3, andrew.w.kaylor, RKSimon, uweigand

Subscribers: hiraditya, llvm-commits, LuoYuanke

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71897
2020-01-08 12:59:31 +08:00
James Clarke
0e3b53e64e [RISCV] Fix evaluatePCRelLo for symbols at the end of a fragment
Summary:
This is analogous to D58943, which correctly finds the corresponding
fixup. However, when linker relaxations are disabled and we evaluate the
fixup, we need to also ensure we use an offset of 0 rather than the size
of the previous fragment.

Reviewers: asb, efriedma, lenary

Reviewed By: efriedma

Subscribers: hiraditya, rbar, johnrusso, simoncook, sabuasal, niosHD, kito-cheng, shiva0217, MaskRay, zzheng, edward-jones, rogfer01, MartinMosbeck, brucehoult, the_o, rkruppe, PkmX, jocewei, psnobl, benna, Jim, s.egerton, pzheng, sameer.abuasal, apazos, luismarques, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71978
2020-01-08 04:32:06 +00:00
Matt Arsenault
6aa7378d41 AMDGPU: Annotate EXTRACT_SUBREGs with source register classes
This partially fixes GlobalISel import of the patterns, but removes a
lot of entries from the end of the skipped pattern log.
2020-01-07 21:56:16 -05:00
Jim Lin
d805544c8c [docs] Fix duplicate explicit target name: developer policy 2020-01-08 10:44:44 +08:00
czhengsz
d7d21ed611 [SCEV] get more accurate range for AddExpr with wrap flag.
Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D64869
2020-01-07 20:58:04 -05:00
Jim Lin
f984a8d893 [docs] Improve HowTo commit changes from git
Summary: As a novice here I tried to `git push` my changes for a while before figuring out the correct workflow which is described on other pages. This small change doesn't reduce redundancy between those pages, but at least readers can follow the links now.

Reviewers: Kokan, Jim

Reviewed By: Kokan, Jim

Subscribers: riccibruno, kiszk, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72077
2020-01-08 09:48:01 +08:00
Eric Christopher
c61e6c8784 XFAIL load_extension.ll for all targets currently - it's failing on
more platforms than just darwin.
2020-01-07 17:00:55 -08:00
Philip Reames
dbfbd50aa3 [GVN/FP] Consolidate logic for reasoning about equality vs equivalence for floats
Factor out common logic into some reasonably commented helper functions. In the process, ensure that the in-block and cross-block cases are handled the same way; they previously weren't.
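As a reminder of why equality and equivalence differ for floats (standalone example, not from the patch):

  #include <cstdio>

  int main() {
    double pz = 0.0, nz = -0.0;
    std::printf("%d\n", pz == nz);               // 1: +0.0 and -0.0 compare equal
    std::printf("%g %g\n", 1.0 / pz, 1.0 / nz);  // inf -inf: but they are not equivalent
    return 0;
  }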

Differential Revision: https://reviews.llvm.org/D67126
2020-01-07 16:05:04 -08:00
Daniel Sanders
518303cb4e Fix warnings as errors that occur on sanitizer-x86_64-linux 2020-01-07 16:02:31 -08:00
Amara Emerson
52c3b98b3d [AArch64][GlobalISel] Fold a chain of two G_PTR_ADDs of constant offsets.
E.g.
%addr1 = G_PTR_ADD %base, G_CONSTANT 20
%addr2 = G_PTR_ADD %addr1, G_CONSTANT 8
  -->
%addr2 = G_PTR_ADD %base, G_CONSTANT 28

Differential Revision: https://reviews.llvm.org/D72351
2020-01-07 14:12:42 -08:00
Craig Topper
23e0f7572d [X86] Add SSE4.1 command lines to vec-strict-inttofp-128.ll to cover the v2i64->v2f32 strict_uitofp codegen. NFC 2020-01-07 13:53:38 -08:00
Bill Wendling
77dae6a102 Revert "Allow output constraints on "asm goto""
This reverts commit 52366088a8e42c2f1e96e8430b84b8b65ec3f7bc.

I accidentally pushed this before the supporting changes.
2020-01-07 13:44:08 -08:00
Bill Wendling
1e81c3e696 Allow output constraints on "asm goto"
Summary:
Remove the restrictions that prevent "asm goto" from returning non-void
values. The values returned by "asm goto" are only valid on the "fallthrough"
path.
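A minimal sketch of the now-allowed form (GNU inline-asm extension, x86 AT&T syntax assumed; the function and label names are made up):

  int checked_copy(int x) {
    int value;
    asm goto("movl %1, %0\n\t"
             "testl %0, %0\n\t"
             "js %l[negative]"
             : "=&r"(value)   // output constraint on "asm goto"
             : "r"(x)
             : "cc"
             : negative);
    return value;             // fallthrough path: 'value' is valid here
  negative:
    return 0;                 // branch taken: 'value' must not be used
  }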

Reviewers: jyknight, nickdesaulniers, hfinkel

Reviewed By: jyknight, nickdesaulniers

Subscribers: rsmith, hiraditya, llvm-commits, cfe-commits, craig.topper, rnk

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D69876
2020-01-07 13:40:26 -08:00
Matt Arsenault
bd58701287 AMDGPU/GlobalISel: Fix scalar G_SELECT for arbitrary pointers
4e85ca9562a588eba491e44bcbf73cb2f419780f missed updating the legal
condition type set for pointers with any unrecognized address space.
2020-01-07 16:36:31 -05:00
Matt Arsenault
36122a39f4 AMDGPU/GlobalISel: Add some missing G_SELECT testcases 2020-01-07 16:36:31 -05:00
Matt Arsenault
3553840899 AMDGPU/GlobalISel: Fix missing test for s16 icmp 2020-01-07 16:36:31 -05:00
Matt Arsenault
b591d25615 AMDGPU: Apply i16 add->sub pattern with zext to i32
This was only applying the deeper nested zext pattern, and missing the
special case code size fold.
2020-01-07 16:36:31 -05:00
Craig Topper
c9ee289053 [X86] Enable v2i64->v2f32 uint_to_fp code in ReplaceNodeResults on SSE4.1 target
Now that we generate decent code for (v2i64 (setlt zero, X)) on pre-sse4.2 targets, I think we can use this.

Differential Revision: https://reviews.llvm.org/D72354
2020-01-07 13:25:29 -08:00
Daniel Sanders
1119adcb10 [gicombiner] Correct 64f1bb5cd2c to account for MSVC's %p format 2020-01-07 12:50:05 -08:00