mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-18 18:42:46 +02:00
Commit Graph

191586 Commits

Author SHA1 Message Date
Djordje Todorovic
f63712851c [CSInfo] Use isCandidateForCallSiteEntry() when updating the CSInfo
Use isCandidateForCallSiteEntry().
This should mostly be an NFC, but some parts ensure that
moveCallSiteInfo() and copyCallSiteInfo() operate on call site
entry candidates (both Src and Dest should be call site entry
candidates).

Differential Revision: https://reviews.llvm.org/D74122
2020-02-10 10:03:14 +01:00
Sebastian Neubauer
b1879e506b [AMDGPU] Add a16 feature to gfx10
Based on D72931

This adds a new feature called A16 which is enabled for gfx10.
gfx9 keeps the R128A16 feature so it can share all the instruction encodings
with gfx7/8.

Differential Revision: https://reviews.llvm.org/D73956
2020-02-10 09:04:23 +01:00
Johannes Doerfert
5564f3468f [Attributor][FIX] Make check lines explicit
There is a bug in `update_test_checks.py` that combines check lines it
should not. For now we unbreak the bots by making all possibilities
explicit.
2020-02-10 01:31:20 -06:00
Johannes Doerfert
bdfa790992 [Attributor] Simple casts preserve no-alias property
This is a minimal but important advancement over the existing code. A
cast with an operand that is only used in the cast retains the no-alias
property of the operand.
2020-02-10 01:11:32 -06:00
Johannes Doerfert
dbd63134e3 [Attributor][Tests] Run the CGSCC versions on the range.ll test 2020-02-10 01:11:32 -06:00
Djordje Todorovic
a79d241e28 [llvm-dwarfdump][Stats] Fix the License header
Fix the added License.

Differential Revision: https://reviews.llvm.org/D74207
2020-02-10 08:01:56 +01:00
Amara Emerson
b69eda2e73 [GlobalISel][CallLowering] Tighten constantexpr check for callee.
I'm not sure there's a test case for this, but it's better to be safe.
2020-02-09 22:59:48 -08:00
Johannes Doerfert
bf5760c5ab [Attributor] Allow PHI nodes in AAValueConstantRangeFloating
Traversing PHI nodes is natural with the genericValueTraversal but also
a bit tricky. The problem is similar to the ones we have seen in AAAlign
and AADereferenceable, namely that we continue to increase the range in
each iteration. We use a pessimistic approach here to stop the
iterations. Nevertheless, optimistic information can now be propagated
through a PHI node.
2020-02-10 00:55:10 -06:00
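To illustrate the widening problem mentioned above, here is a minimal sketch (assuming LLVM headers are available; it is not the patch itself): a PHI whose incoming value is derived from the PHI keeps growing the merged ConstantRange on every iteration, so the fixpoint loop needs a pessimistic cut-off.

```cpp
// Sketch only: shows why merging ranges through a self-feeding PHI widens forever.
#include "llvm/ADT/APInt.h"
#include "llvm/IR/ConstantRange.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

int main() {
  ConstantRange PhiRange(APInt(32, 0));          // iteration 0: the PHI is known to be 0
  const ConstantRange Step(APInt(32, 1));        // the back-edge adds 1 each trip
  for (unsigned It = 0; It < 4; ++It) {
    ConstantRange Incoming = PhiRange.add(Step); // value flowing back into the PHI
    ConstantRange Next = PhiRange.unionWith(Incoming);
    if (Next == PhiRange)
      break;                                     // a genuine fixpoint was reached
    PhiRange = Next;                             // otherwise the range keeps widening
  }
  // A pessimistic approach gives up after a bounded number of widenings instead
  // of iterating until the range becomes the full set.
  PhiRange.print(errs());
  return 0;
}
```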
Johannes Doerfert
6ac6a81d33 [Attributor][FIX] Remove FIXME that seems outdated
The change is performed as stated by the FIXME and the tests are
adjusted. All changes look fine to me and values can be inferred as
undef without it being an error.
2020-02-10 00:55:10 -06:00
Johannes Doerfert
6ad6b3038f [Attributor] Allow SelectInst in AAValueConstantRangeFloating
The genericValueTraversal will already handle SelectInst properly and we
just needed to allow them in the initialize method.
2020-02-10 00:55:09 -06:00
Johannes Doerfert
894ac6470f [Attributor] Look through (some) casts in AAValueConstantRangeFloating
Casts can be handled natively by the ConstantRange class. We limit it
to extends for now, as we assume an integer type in different locations.
A TODO and a test case with a FIXME were added so that restriction can
be removed in the future.
2020-02-10 00:38:01 -06:00
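For reference, a hedged sketch (not the patch itself) of the "handled natively by ConstantRange" point: the class already provides extend operations and a generic castOp entry point.

```cpp
// Sketch only, assuming LLVM headers: ConstantRange's built-in cast handling.
#include "llvm/ADT/APInt.h"
#include "llvm/IR/ConstantRange.h"
#include "llvm/IR/Instruction.h"

using namespace llvm;

int main() {
  ConstantRange CR8(APInt(8, 10), APInt(8, 21));         // i8 range [10, 20]
  ConstantRange Z32 = CR8.zeroExtend(32);                 // zext to i32
  ConstantRange S32 = CR8.signExtend(32);                 // sext to i32
  ConstantRange C32 = CR8.castOp(Instruction::ZExt, 32);  // generic cast entry point
  (void)Z32; (void)S32; (void)C32;
  return 0;
}
```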
Johannes Doerfert
47c6de4aa0 [Attributor][FIX] Call right base method in AAValueConstantRangeFloating
We now call the base class method as we should.
2020-02-10 00:38:01 -06:00
Craig Topper
714576a916 [X86] Autogenerate complete checks. NFC 2020-02-09 22:31:30 -08:00
Johannes Doerfert
0f891168d3 [Attributor][Tests][NFC] Add more range tests
Inspired by https://llvm.discourse.group/t/impossible-condition-optimization/461
2020-02-10 00:24:04 -06:00
Johannes Doerfert
7bd22086ea [Attributor][NFC] Use existing constant instead of magic one 2020-02-10 00:24:03 -06:00
Craig Topper
d6b10b296b [X86] Make (insert_vector_elt (v8i16 zerovec), i16 %x, 0) generate the same code as (v8i16 (build_vector %x, 0, 0, 0, 0, 0, 0, 0)).
Instead of using an insrw to element 0, use movzx and movd.

Same for v16i8.
2020-02-09 21:52:11 -08:00
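A small standalone check of the equivalence (illustrative only, assumes a little-endian host such as x86): a v8i16 with %x in element 0 and zeros elsewhere carries exactly the zero-extended value of %x in its low dword, which is what movzx+movd produce.

```cpp
// Illustrative only; not LLVM code.
#include <cassert>
#include <cstdint>
#include <cstring>

int main() {
  uint16_t X = 0xBEEF;
  uint16_t Vec[8] = {X, 0, 0, 0, 0, 0, 0, 0};     // build_vector %x, 0, 0, ..., 0
  uint32_t LowDword;
  std::memcpy(&LowDword, Vec, sizeof(LowDword));  // low 32 bits of the vector
  // movzx zero-extends %x to 32 bits; movd puts it in the low dword and zeroes
  // the rest of the register -- the same value as the build_vector's low dword.
  assert(LowDword == static_cast<uint32_t>(X));
  return 0;
}
```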
Michael Liao
b43dfd2f41 Fix -Wparentheses warning. NFC. 2020-02-10 00:45:02 -05:00
Craig Topper
1ed848a2bd [X86] Autogenerate complete checks. NFC 2020-02-09 20:39:52 -08:00
Craig Topper
9d2a7779c9 [X86] Use MOVZX instead of MOVSX in f16_to_fp isel patterns.
Using sign extend forces the adjacent element to be either all zeros
or all ones. But all ones is a NaN, so that doesn't seem like a
great idea.

Trying to work on supporting this with strict FP, where a NaN would
definitely be bad.
2020-02-09 20:39:52 -08:00
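A quick standalone check of the "all ones is a NaN" observation (illustrative only): a 16-bit all-ones pattern decodes as an IEEE half with an all-ones exponent and a nonzero mantissa, i.e. a NaN.

```cpp
// Illustrative only: decodes 0xFFFF as an IEEE-754 binary16 value.
#include <cstdint>
#include <cstdio>

int main() {
  uint16_t Bits = 0xFFFF;                     // adjacent lane after sign-extending a negative value
  unsigned Exp  = (Bits >> 10) & 0x1F;        // 5 exponent bits
  unsigned Mant =  Bits        & 0x3FF;       // 10 mantissa bits
  bool IsNaN = (Exp == 0x1F) && (Mant != 0);  // all-ones exponent, nonzero mantissa => NaN
  std::printf("0x%04X decodes as %s\n", Bits, IsNaN ? "a NaN half" : "a non-NaN half");
  return 0;
}
```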
Shiva Chen
93b04ecdf3 [RISCV] Fix incorrect FP base CFI offset for variable argument functions
When the FP exists, the FP base CFI directive offset should take the size of variable arguments into account.

Differential Revision: https://reviews.llvm.org/D73862
2020-02-10 11:56:08 +08:00
Fangrui Song
db8bcae104 [DebugInfo] Add a DWARFDataExtractor constructor that takes ArrayRef<uint8_t>
Similar to D67797 (DataExtractor).
2020-02-09 17:45:32 -08:00
Matt Arsenault
fabdf2e6da GlobalISel: Fix narrowScalar for G_{CTLZ|CTTZ}_ZERO_UNDEF
Narrow these for 64-bit VALU for AMDGPU.
2020-02-09 19:02:38 -05:00
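For intuition, a hypothetical sketch (plain C++, not the GlobalISel code) of what narrowing a 64-bit count-leading-zeros into 32-bit operations looks like:

```cpp
// Sketch only: 64-bit ctlz expressed with 32-bit ctlz operations.
#include <cassert>
#include <cstdint>

static unsigned ctlz32(uint32_t X) { return X ? __builtin_clz(X) : 32; }

// If the high half is nonzero its leading zeros are the answer; otherwise all
// 32 high bits are zero and we add the low half's leading zeros.
static unsigned ctlz64Narrowed(uint64_t V) {
  uint32_t Hi = static_cast<uint32_t>(V >> 32);
  uint32_t Lo = static_cast<uint32_t>(V);
  return Hi ? ctlz32(Hi) : 32 + ctlz32(Lo);
}

int main() {
  assert(ctlz64Narrowed(1) == 63);
  assert(ctlz64Narrowed(uint64_t(1) << 40) == 23);
  return 0;
}
```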
Matt Arsenault
4dfb5b8a70 AMDGPU/GlobalISel: Split 64-bit G_CTPOP in RegBankSelect 2020-02-09 18:39:33 -05:00
Matt Arsenault
a025afb406 GlobalISel: Fix narrowing of G_CTLZ/G_CTTZ
The result type is separate from the source type.
2020-02-09 18:11:43 -05:00
Matt Arsenault
49e5ee334c AMDGPU/GlobalISel: Don't mis-select vector index on a constant
Vector indexing with a constant index should be folded out in the
legalizer, but this was accidentally falling through. This would
produce the indexing operation with $noreg. Handle this case as a
dynamic index just in case a bug like this happens again in the
future.
2020-02-09 18:02:37 -05:00
Matt Arsenault
11aefea8e8 AMDGPU/GlobalISel: Look through casts when legalizing vector indexing
We were failing to find constants that were casted. I feel like the
artifact combiner should have folded the constant in the trunc before
the custom lowering, but that doesn't happen.
2020-02-09 18:02:10 -05:00
Matt Arsenault
7f0a98ae35 AMDGPU: Remove dead kill handling
At one point a custom node was used for kill handling, but now the
intrinsic is directly selected. Remove leftover pattern machinery.
2020-02-09 17:59:24 -05:00
Matt Arsenault
780e89dd58 AMDGPU: Fix SI_IF lowering when the save exec reg has terminator uses
Reverts part of 6524a7a2b9ca072bd7f7b4355d1230e70c679d2f. Since that
commit, the expansion was ignoring the actual save exec register
produced by the instruction, and looking at other instructions. I do
not understand why it was looking at other instructions, but relying
on this scan was wrong.

Fixes verifier errors after SI_IF is tail duplicated, which should be
correct to do. The results were fed into a phi, which was lowered to
the S_MOV_B64_term instructions.
2020-02-09 17:59:19 -05:00
Simon Pilgrim
4d7c42b93d [X86] combineConcatVectorOps - combine VROTLI/VROTRI ops
Fix issue mentioned on rGe82e17d4d4ca - non-AVX512BW targets failed to concatenate 256-bit rotations back to 512-bits (split during shuffle lowering as they don't have v32i16/v64i8 types).
2020-02-09 21:50:10 +00:00
Craig Topper
d20adfcf42 [X86] Use custom isel for (X86sbb_flag 0, 0) so we can use 32-bit SBB for i8/i16.
We were using MOV32r0 and an extract_subreg as an input. By using
custom isel we can move the extract_subreg to after the SBB instead
of on the input.
2020-02-09 13:19:35 -08:00
Craig Topper
11d30da96a [X86] Add flag result VT to a MOV32r0 created in X86DAGToDAGISel::Select
The flag isn't used, but I believe this matches the MOV32r0 that
would be created by the table emitter. This should allow this node
to be CSEed with any others created by the table.
2020-02-09 13:19:21 -08:00
Simon Pilgrim
1f239dd6be [X86] Add lowerShuffleAsBitRotate (PR44379)
As noted on PR44379, we didn't attempt to lower vector shuffles using bit rotations on XOP/AVX512F targets.

This patch lowers to uniform ISD::ROTL nodes - ROTR isn't supported by XOP and they are interchangeable for constant values anyway.

There might be cases where targets without ISD::ROTL support would benefit from this (expanding to SRL+SHL+OR), which I'll investigate in a future patch.

Also, non-AVX512BW targets fail to concatenate 256-bit rotations back to 512-bits (split during shuffle lowering as they don't have v32i16/v64i8 types).
2020-02-09 21:15:03 +00:00
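For intuition, a hypothetical, self-contained sketch (not the LLVM implementation) of what matching a byte shuffle as a rotation means: every byte inside each element-sized group must come from the same group at one fixed byte offset. Whether that offset corresponds to ROTL or ROTR in bit terms depends on endianness.

```cpp
// Sketch only: detect whether a byte shuffle mask is a per-element byte rotation.
#include <cstdio>
#include <vector>

// Returns the byte-rotation amount in bits, or -1 if the mask is not a rotation.
static int matchShuffleMaskAsRotate(const std::vector<int> &Mask, int EltSizeInBytes) {
  int Rot = -1;
  for (int I = 0, E = static_cast<int>(Mask.size()); I != E; ++I) {
    int Group = I / EltSizeInBytes;
    int Lane = I % EltSizeInBytes;
    if (Mask[I] / EltSizeInBytes != Group)
      return -1; // source byte must stay within the same element
    int Offset = (Mask[I] % EltSizeInBytes - Lane + EltSizeInBytes) % EltSizeInBytes;
    if (Rot == -1)
      Rot = Offset;
    else if (Rot != Offset)
      return -1; // all lanes must agree on one offset
  }
  return Rot < 0 ? -1 : Rot * 8;
}

int main() {
  // v4i32 viewed as 16 bytes: every 32-bit element rotated by one byte.
  std::vector<int> Mask = {1, 2, 3, 0, 5, 6, 7, 4, 9, 10, 11, 8, 13, 14, 15, 12};
  std::printf("rotation: %d bits\n", matchShuffleMaskAsRotate(Mask, 4));
  return 0;
}
```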
Craig Topper
ac35a8a35e [X86] Use MVT::i32 for the type of a MOV32r0 created in X86DAGToDAGISel::Select.
Not sure if this really matters. The VT isn't really used after
this point. At best it might affect CSE.
2020-02-09 11:57:42 -08:00
Craig Topper
c35ae80199 [X86] Remove isel patterns that include a vselect/X86selects and a strict FP node.
A vselect+strictfp node is not equivalent to a masked operation.
The exceptions raised by the strictfp node are not masked by a vselect
that follows it, so we can't match it to a masked operation.

We already had a hack in IsLegalToFold to prevent these patterns from
matching. This patch removes that hack and removes the patterns.
2020-02-09 11:45:54 -08:00
Simon Pilgrim
574e93990b [X86][XOP] Add XOP target to vXi16/vXi8 shuffle tests
Helps with bit rotation test coverage for PR44379
2020-02-09 18:35:51 +00:00
Simon Pilgrim
ddaedac538 [X86][SSE] Add more tests showing failure to lower shuffles as bit rotations 2020-02-09 18:35:51 +00:00
Simon Pilgrim
9fac692aaa [X86] Rename matchShuffleAsRotate - matchShuffleAsByteRotate. NFCI.
A matchShuffleAsBitRotate variant will be added soon and we need to make the difference more obvious.
2020-02-09 18:35:50 +00:00
LLVM GN Syncbot
286adde9cb [gn build] Port a17f03bd939 2020-02-09 15:41:05 +00:00
Sanjay Patel
35167a94b8 [VectorCombine] new IR transform pass for partial vector ops
We have several bug reports that could be characterized as "reducing scalarization",
and this topic was also raised on llvm-dev recently:
http://lists.llvm.org/pipermail/llvm-dev/2020-January/138157.html
...so I'm proposing that we deal with these patterns in a new, lightweight IR vector
pass that runs before/after other vectorization passes.

There are 4 alternate options that I can think of to deal with this kind of problem
(and we've seen various attempts at all of these), but they all have flaws:

    InstCombine - can't happen without TTI, but we don't want target-specific
                  folds there.
    SDAG - too late to assist other vectorization passes; TLI is not equipped
           for these kinds of cost queries; limited to a single basic block.
    CGP - too late to assist other vectorization passes; would need to re-implement
          basic cleanups like CSE/instcombine.
    SLP - doesn't fit with existing transforms; limited to a single basic block.

This initial patch/transform is based on existing code in AggressiveInstCombine:
we walk backwards through the function looking for a pattern match. But we diverge
from that cost-independent IR canonicalization pass by using TTI to decide if the
vector alternative is profitable.

We probably have at least 10 similar bug reports/patterns (binops, constants,
inserts, cheap shuffles, etc) that would fit in this pass as follow-up enhancements.
It's possible that we could iterate on a worklist to a fixed point like InstCombine does,
but it's safer to start with the most basic case and evolve from there, so I didn't
try to do anything fancy with this initial implementation.

Differential Revision: https://reviews.llvm.org/D73480
2020-02-09 10:04:41 -05:00
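A rough sketch of the cost-driven decision described above (the numbers are hypothetical stand-ins for TargetTransformInfo queries; this is not the pass code): fold two extracts feeding a scalar binop into a vector binop plus one extract only when the vector form is no more expensive.

```cpp
// Hypothetical cost comparison; illustrative only.
#include <cstdio>

struct Costs {
  int ScalarBinOp; // cost of the scalar binary op
  int VectorBinOp; // cost of the same op on the whole vector type
  int ExtractElt;  // cost of one extractelement
};

// Old form: (binop (extractelt A, i), (extractelt B, i)) -> 2 extracts + scalar op
// New form: (extractelt (binop A, B), i)                 -> 1 vector op + 1 extract
static bool vectorFormIsProfitable(const Costs &C) {
  int OldCost = 2 * C.ExtractElt + C.ScalarBinOp;
  int NewCost = C.VectorBinOp + C.ExtractElt;
  return NewCost <= OldCost;
}

int main() {
  Costs Cheap{/*ScalarBinOp=*/1, /*VectorBinOp=*/1, /*ExtractElt=*/1};
  Costs Pricey{/*ScalarBinOp=*/1, /*VectorBinOp=*/4, /*ExtractElt=*/1};
  std::printf("cheap vector op:  %s\n", vectorFormIsProfitable(Cheap) ? "fold" : "keep scalar");
  std::printf("pricey vector op: %s\n", vectorFormIsProfitable(Pricey) ? "fold" : "keep scalar");
  return 0;
}
```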
Simon Pilgrim
c912faf386 Fix signed/unsigned warning. 2020-02-09 13:35:03 +00:00
Simon Pilgrim
d1b038d961 [X86] Recognise ROTLI/ROTRI rotations as faux shuffles
Allows us to combine rotations with shuffles.

One of many things necessary to fix PR44379 (lowering shuffles to rotations)
2020-02-09 12:25:49 +00:00
Ehud Katz
f6fd87fc2b [LoopExtractor] Convert LoopExtractor from LoopPass to ModulePass
The LoopExtractor creates new functions (by definition), which violates
the restrictions of a LoopPass.
The correct implementation of this pass should be as a ModulePass.
This includes reverting the implications of rL82990 on the LoopExtractor.

Fixes PR3082 and PR8929.

Differential Revision: https://reviews.llvm.org/D69069
2020-02-09 12:25:21 +02:00
Ayman Musa
ab4b452855 [AggressiveInstCombine] Add test with baseline CHECKs for aggressive inst combine for SELECT. 2020-02-09 12:07:25 +02:00
serge_sans_paille
35ac7e7659 Support -fstack-clash-protection for x86
Implement protection against the stack clash attack [0] through inline stack
probing.

Probe stack allocation every PAGE_SIZE during frame lowering or dynamic
allocation to make sure the page guard, if any, is touched when touching the
stack, in a similar manner to GCC[1].

This extends the existing `probe-stack' mechanism with a special value `inline-asm'.
Technically the former uses a function call before stack allocation while this
patch provides inlined stack probes and chunk allocation.

Only implemented for x86.

[0] https://www.qualys.com/2017/06/19/stack-clash/stack-clash.txt
[1] https://gcc.gnu.org/ml/gcc-patches/2017-07/msg00556.html

This is a recommit of 39f50da2a357a8f685b3540246c5d762734e035f with proper LiveIn
declaration, better option handling and more portable testing.

Differential Revision: https://reviews.llvm.org/D68720
2020-02-09 10:42:45 +01:00
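A simplified sketch of the probing scheme (illustrative only; the real lowering emits machine instructions during frame lowering, not C): touch the stack at least once per PAGE_SIZE so a guard page cannot be skipped over by a large allocation.

```cpp
// Illustrative only: where probes would land for a multi-page frame.
#include <cstdint>
#include <cstdio>

constexpr uint64_t PageSize = 4096; // probe interval; matches the page-guard granularity

int main() {
  uint64_t FrameSize = 3 * PageSize + 512; // a frame larger than one page
  // One probe per page of the allocation; the remainder below a page needs none.
  for (uint64_t Off = PageSize; Off <= FrameSize; Off += PageSize)
    std::printf("probe [sp - %llu]\n", static_cast<unsigned long long>(Off));
  return 0;
}
```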
serge-sans-paille
85fea1133e Revert "Support -fstack-clash-protection for x86"
This reverts commit 0fd51a4554f5f4f90342f40afd35b077f6d88213.

Failures:

http://lab.llvm.org:8011/builders/llvm-clang-win-x-armv7l/builds/4354
2020-02-09 10:06:31 +01:00
serge_sans_paille
9120fd7c93 Support -fstack-clash-protection for x86
Implement protection against the stack clash attack [0] through inline stack
probing.

Probe stack allocation every PAGE_SIZE during frame lowering or dynamic
allocation to make sure the page guard, if any, is touched when touching the
stack, in a similar manner to GCC[1].

This extends the existing `probe-stack' mechanism with a special value `inline-asm'.
Technically the former uses a function call before stack allocation while this
patch provides inlined stack probes and chunk allocation.

Only implemented for x86.

[0] https://www.qualys.com/2017/06/19/stack-clash/stack-clash.txt
[1] https://gcc.gnu.org/ml/gcc-patches/2017-07/msg00556.html

This is a recommit of 39f50da2a357a8f685b3540246c5d762734e035f with proper LiveIn
declaration, better option handling and more portable testing.

Differential Revision: https://reviews.llvm.org/D68720
2020-02-09 09:35:42 +01:00
Craig Topper
5ed35d6e72 [X86] Add more scalar intrinsic instructions to isNonFoldablePartialRegisterLoad.
I think this covers most if not all of the scalar intrinsic
instructions.
2020-02-08 20:41:36 -08:00
Johannes Doerfert
9354843292 [Attributor] Add an Attributor CGSCC pass and run it
In addition to the module pass, this patch introduces a CGSCC pass that
runs the Attributor on a strongly connected component of the call graph
(both old and new PM). The Attributor was always designed to be used on a
subset of functions, which makes this patch mostly mechanical.

The one change is that we give up `norecurse` deduction in the module
pass in favor of doing it during the CGSCC pass. This makes the
interfaces simpler but can be revisited if needed.

Reviewed By: hfinkel

Differential Revision: https://reviews.llvm.org/D70767
2020-02-08 21:27:34 -06:00
Fangrui Song
4104d396cb Fix -Wunused-lambda-capture for -DLLVM_ENABLE_ASSERTIONS=off builds after 6556c615f3c3aae8af876806777065961ae20024 2020-02-08 19:03:58 -08:00
Johannes Doerfert
d90ff4aa0e [FIX] Ordering problem accidentally introduced with D72304 2020-02-08 20:14:41 -06:00