mirror of https://github.com/RPCS3/llvm-mirror.git synced 2025-02-01 05:01:59 +01:00

140809 Commits

Author SHA1 Message Date
Mircea Trofin
724ebfc377 [NFC] Use Register/MCRegister
Differential Revision: https://reviews.llvm.org/D90724
2020-11-04 12:20:17 -08:00
Craig Topper
3631995968 [RISCV] Remove assertsexti32 from fslw/fsrw isel patterns.
The operations in these patterns shouldn't be affected by sign
bits, and the pattern starts from a sign_extend_inreg, so we
aren't expecting sign bits to be passed through either.

Differential Revision: https://reviews.llvm.org/D90739
2020-11-04 11:37:58 -08:00
Nikita Popov
41412f444d [MemorySSA] Use provided memory location even if instruction is call
If getClobberingMemoryAccess() is called with an explicit
MemoryLocation, but the starting access happens to be a call, the
provided location is currently ignored, and alias analysis queries
will be performed against the call instruction instead. Something
similar happens if the starting access is a load with a MemoryDef.

Change the implementation to not set Q.Inst in the first place
when we want to perform a MemoryLocation-based query, to make sure
it can't be turned into an Instruction-based query along the way.

Additionally, remove the special handling that lifetime.start
intrinsics currently get. They simply report NoAlias for clobbers
between lifetime.start and other calls, but that's obviously not
right if the other call is something like a memset or memcpy. The
default behavior we get from getModRefInfo() will already do the
right thing here.
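
A caller-side sketch of the location-based query path (illustrative
only; `clobberFor` is a made-up wrapper around the existing walker
API):

```
#include "llvm/Analysis/MemoryLocation.h"
#include "llvm/Analysis/MemorySSA.h"
using namespace llvm;

// With an explicit MemoryLocation, the walk should stay
// location-based even when StartAccess happens to be a call,
// per the fix described above.
MemoryAccess *clobberFor(MemorySSA &MSSA, MemoryAccess *StartAccess,
                         const MemoryLocation &Loc) {
  return MSSA.getWalker()->getClobberingMemoryAccess(StartAccess, Loc);
}
```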

Differential Revision: https://reviews.llvm.org/D88782
2020-11-04 20:30:22 +01:00
Craig Topper
1d80c19eed [RISCV] Correct the operand order for fshl/fshr to fsl/fsr instructions.
fsl/fsr take their shift amount in $rs2 or an immediate. The
sources are $rs1 and $rs3.

fshl/fshr ISD opcodes both concatenate operand 0 in the high bits and
operand 1 in the lower bits. fshl returns the high bits after
shifting and fshr returns the low bits. So a shift amount of 0
returns operand 0 for fshl and operand 1 for fshr.

fsl/fsr concatenate their operands in different orders such that
$rs1 will be returned for a shift amount of 0. So $rs1 needs to
come from operand 0 of fshl and operand 1 of fshr.
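
A plain C++ restatement of those semantics (an illustrative sketch,
not the isel code):

```
#include <cstdint>

// Reference semantics on 32-bit values, shift amount taken modulo
// the bit width. A shift of 0 yields operand 0 (Hi) for fshl and
// operand 1 (Lo) for fshr, which is why $rs1 must come from
// different operands of the two nodes.
uint32_t fshl32(uint32_t Hi, uint32_t Lo, unsigned S) {
  S &= 31;
  return S == 0 ? Hi : (Hi << S) | (Lo >> (32 - S));
}
uint32_t fshr32(uint32_t Hi, uint32_t Lo, unsigned S) {
  S &= 31;
  return S == 0 ? Lo : (Hi << (32 - S)) | (Lo >> S);
}
```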

Differential Revision: https://reviews.llvm.org/D90735
2020-11-04 11:13:25 -08:00
Fraser Cormack
5f7fe12cc6 [DAGCombine] Fix bug in load scalarization
Summary:
For vector element types which are not byte-sized, we would generate
incorrect scalar offsets and produce incorrect codegen.

This optimization could potentially be supported in the future, e.g. by
loading in bytes, then shifting and masking out the remaining bits of
the vector element. However, without an upstream target to test against
it's best to avoid the bad codegen in the simplest possible way.
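
For illustration only (a toy computation, not the combine itself),
here is how the naive byte-offset arithmetic truncates for a 4-bit
element type:

```
#include <cstdio>

// For a byte-sized element the byte offset of element I is exact,
// but for a 4-bit element it truncates: elements 0 and 1 would both
// read byte 0 without the shift-and-mask step described above.
int main() {
  unsigned EltBits = 4;
  for (unsigned I = 0; I < 4; ++I)
    printf("elt %u: bit offset %u -> byte offset %u\n",
           I, I * EltBits, (I * EltBits) / 8);
}
```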

Related to this bug:

  https://bugs.llvm.org/show_bug.cgi?id=27600

Reviewed by: foad

Differential Revision: https://reviews.llvm.org/D78568
2020-11-04 19:02:40 +00:00
Craig Topper
4bde11ee29 [RISCV] Remove assertsexti32 from inputs to riscv_sllw/srlw nodes in B extension isel patterns.
riscv_sllw/srlw only read the lower 32 bits of the first operand
and the lower 5 bits of the second operand. Whether the upper
32 bits of the input are sign bits or not doesn't matter.
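
A reference-semantics sketch in plain C++ (assuming the usual
behavior of the RISC-V W-form shifts):

```
#include <cstdint>

// The result depends only on the low 32 bits of Rs1 and the low
// 5 bits of Rs2; the 32-bit result is then sign-extended, so the
// upper input bits never matter.
int64_t sllw(uint64_t Rs1, uint64_t Rs2) {
  return (int64_t)(int32_t)((uint32_t)Rs1 << (Rs2 & 31));
}
```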

Also use ineg and not to shorten the patterns.

Differential Revision: https://reviews.llvm.org/D90668
2020-11-04 10:35:05 -08:00
Arnold Schwaighofer
d90984c1dd Start of an llvm.coro.async implementation
This patch adds the `async` lowering of coroutines.

This will be used by the Swift frontend to lower async functions. In
contrast to the `retcon` lowering the frontend needs to be in control
over control-flow at suspend points as execution might be suspended at
these points.

This is very much a work in progress, and the implementation will
change as it evolves with the frontend. As such, the documentation
is lacking detail, as some of it might change.

rdar://70097093

Reapply with fix for memory sanitizer failure and sphinx failure.

Differential Revision: https://reviews.llvm.org/D90612
2020-11-04 10:29:21 -08:00
Craig Topper
f850868dc3 [RISCV] Check all 64-bits of the mask in SelectRORIW.
We need to ensure the upper 32 bits of the mask are zero,
so that the srl shifts zeroes into the lower 32 bits.
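
A minimal sketch of the implied check (hypothetical helper name,
not the actual SelectRORIW code):

```
#include <cstdint>

// A mask with any of the upper 32 bits set must be rejected: those
// bits would keep values that the rotate assumes the srl zeroed.
bool maskOkForRORIW(uint64_t Mask) {
  return (Mask >> 32) == 0; // check all 64 bits, upper half zero
}
```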

Differential Revision: https://reviews.llvm.org/D90585
2020-11-04 10:15:30 -08:00
Christopher Tetreault
78607d1352 [UBSan] Cannot negate smallest negative signed integer
Silence an Undefined Behavior Sanitizer warning:
runtime error: negation of -9223372036854775808 cannot be represented in type 'int64_t' (aka 'long'); cast to an unsigned type to negate this value to itself
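
The usual shape of such a fix, as a hedged sketch (the exact site in
the patch may differ):

```
#include <cstdint>

// Negate via an unsigned type: -INT64_MIN wraps to itself instead
// of being signed-overflow UB. The conversion back to int64_t is
// two's complement on all supported hosts.
int64_t safeNegate(int64_t X) {
  return (int64_t)(-(uint64_t)X);
}
```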

Reviewed By: paulwalker-arm

Differential Revision: https://reviews.llvm.org/D90710
2020-11-04 10:07:52 -08:00
Craig Topper
7eeba8eadb [RISCV] Remove custom isel for (srl (shl val, 32), imm). Use pattern instead. NFCI
We don't need custom matching; we just need a predicate to check
that the immediate is greater than 32. We can use the existing
ImmSub32 to adjust the immediate.
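
A worked check of the identity behind the pattern (plain C++, not
the isel code):

```
#include <cassert>
#include <cstdint>

// For imm > 32, ((x << 32) >> imm) equals the low 32 bits of x
// shifted right by imm - 32 (the adjustment ImmSub32 performs), and
// the 32-bit result has a clear sign bit, so a W-form shift's sign
// extension is benign.
int main() {
  uint64_t X = 0xDEADBEEFCAFEF00DULL;
  for (unsigned Imm = 33; Imm < 64; ++Imm)
    assert(((X << 32) >> Imm) == (uint64_t)((uint32_t)X >> (Imm - 32)));
}
```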

I've also used the new predicate in the other location that used
ImmSub32. I tried to create a test case where we would break without
the greater than 32 check on that pattern, but DAG combine defeated me.
Still seemed safer to have it.

Differential Revision: https://reviews.llvm.org/D90546
2020-11-04 09:59:14 -08:00
Joe Nash
3dea5f368c [AMDGPU] Resolve pseudo registers at encoding uses
Pseudo-registers allow different register encodings
between GPU generations. Make sure we resolve the
pseudo regs to real regs whenever we get their
hardware encoding.
Using the correct encodings revealed a register
bank conflict and an unnecessary write dependency.
Tests have been updated to match.

Reviewed By: rampitec

Differential Revision: https://reviews.llvm.org/D90721

Change-Id: I73c154cd24aecc820993b50bebaf4df97a5710ca
2020-11-04 12:52:32 -05:00
Fangrui Song
032dc68e5b Revert "[GlobalISel] GISelKnownBits::computeKnownBitsImpl - Replace TargetOpcode::G_MUL handling with the common KnownBits::computeForMul implementation"
This reverts commit 0b8711e1af97d6c82dc9d25c12c5a06af060cc56, which broke GlobalISelTests AArch64GISelMITest.TestKnownBits.
2020-11-04 09:54:04 -08:00
Arthur Eubanks
2e4e41af20 [NewPM] Don't run before pass instrumentation on required passes
This allows that instrumentation to log when it decides to skip a
pass. This provides extra helpful info for optnone functions and
will also help with opt-bisect.

Have OptNoneInstrumentation print when it skips due to seeing optnone.

Reviewed By: asbirlea

Differential Revision: https://reviews.llvm.org/D90545
2020-11-04 09:45:10 -08:00
Sebastian Neubauer
0003aeadad [AMDGPU] Fix iterating in SIFixSGPRCopies
The insertion of waterfall loops splits the current basic block into
three blocks. So the basic block that we iterate over must be updated.

Previously, this triggered assert(!NodePtr->isKnownSentinel()) in
ilist_iterator for divergent calls in branches.
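
The general shape of the fix, as a sketch with hypothetical names
(`processMaySplit` stands in for the waterfall-loop insertion):

```
#include "llvm/CodeGen/MachineBasicBlock.h"
#include "llvm/CodeGen/MachineFunction.h"
using namespace llvm;

// Hypothetical: returns the block containing MI afterwards, which
// differs from the original block when it was split in three.
MachineBasicBlock *processMaySplit(MachineInstr &MI);

void walkFunction(MachineFunction &MF) {
  for (MachineFunction::iterator BI = MF.begin(); BI != MF.end(); ++BI) {
    MachineBasicBlock *MBB = &*BI;
    for (MachineBasicBlock::iterator I = MBB->begin(); I != MBB->end(); ++I) {
      MachineBasicBlock *NewMBB = processMaySplit(*I);
      if (NewMBB && NewMBB != MBB) {
        // Re-anchor the block iterator on the block that now holds
        // the instruction; the instruction iterator itself stays
        // valid because instructions live in an intrusive list.
        MBB = NewMBB;
        BI = NewMBB->getIterator();
      }
    }
  }
}
```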

Differential Revision: https://reviews.llvm.org/D90596
2020-11-04 18:43:19 +01:00
Simon Pilgrim
aff03bd34f [KnownBits] KnownBits::computeForMul - avoid unnecessary APInt copies. NFCI.
Use const references instead.
2020-11-04 17:25:25 +00:00
Simon Pilgrim
0aa6d48258 [GlobalISel] GISelKnownBits::computeKnownBitsImpl - Replace TargetOpcode::G_MUL handling with the common KnownBits::computeForMul implementation
Avoid code duplication
2020-11-04 17:25:24 +00:00
Arnold Schwaighofer
c8e9566a32 Revert "Start of an llvm.coro.async implementation"
This reverts commit ea606cced0583d1dbd4c44680601d1d4e9a56e58.

This patch causes memory sanitizer failures sanitizer-x86_64-linux-fast.
2020-11-04 08:26:20 -08:00
Arnold Schwaighofer
3e8facdd39 Start of an llvm.coro.async implementation
This patch adds the `async` lowering of coroutines.

This will be used by the Swift frontend to lower async functions. In
contrast to the `retcon` lowering the frontend needs to be in control
over control-flow at suspend points as execution might be suspended at
these points.

This is very much a work in progress, and the implementation will
change as it evolves with the frontend. As such, the documentation
is lacking detail, as some of it might change.

rdar://70097093

Differential Revision: https://reviews.llvm.org/D90612
2020-11-04 07:32:29 -08:00
Eric Astor
5e9623a87c [ms] [llvm-ml] Enable support for MASM-style macro procedures
Allows the MACRO directive to define macro procedures with parameters and macro-local symbols.

Supports required and optional parameters (including default values), and matches ml64.exe for its macro-local symbol handling (up to 65536 macro-local symbols in any translation unit).

Reviewed By: thakis

Differential Revision: https://reviews.llvm.org/D89729
2020-11-04 10:29:57 -05:00
Simon Pilgrim
fab109bb67 Use isa<> instead of dyn_cast<> to avoid an unused variable warning. NFCI.
2020-11-04 15:26:32 +00:00
Paul C. Anagnostopoulos
9295b21984 [TableGen] Add !interleave operator to concatenate a list of values with delimiters
Add a test. Use it in some TableGen files.

Differential Revision: https://reviews.llvm.org/D90469
2020-11-04 09:23:54 -05:00
Sanjay Patel
75c976d15a [InstSimplify] allow vector folds for icmp Pred (1 << X), 0x80
2020-11-04 08:12:48 -05:00
Roman Lebedev
a30754006b [Reassociate] Guard add-like or conversion into an add with profitability check
This is slightly better compile-time-wise, since we avoid a
potentially costly known-bits analysis that will ultimately not
allow us to actually do anything with said `add`.
2020-11-04 16:10:34 +03:00
Simon Moll
1caff81b49 [VE] Add +vpu attribute
`+vpu` controls whether VEISelLowering adds any vregs.  This defaults to
`-vpu` to have scalar code generation out of the box.  We bring up
vector isel under the `+vpu` flag. Once vector isel is stable we switch
to `+vpu` and advertise vregs and vops in TTI.

Reviewed By: kaz7

Differential Revision: https://reviews.llvm.org/D90465
2020-11-04 12:42:00 +01:00
Kerry McLaughlin
1de78af5c4 [SVE][CodeGen] Lower scalable integer vector reductions
This patch uses the existing LowerFixedLengthReductionToSVE function to also lower
scalable vector reductions. A separate function has been added to lower VECREDUCE_AND
& VECREDUCE_OR operations with predicate types using ptest.

Lowering scalable floating-point reductions will be addressed in a follow up patch,
for now these will hit the assertion added to expandVecReduce() in TargetLowering.

Reviewed By: paulwalker-arm

Differential Revision: https://reviews.llvm.org/D89382
2020-11-04 11:38:49 +00:00
Simon Pilgrim
ad61fbdaba [DAG] computeKnownBits - Replace ISD::MUL handling with the common KnownBits::computeForMul implementation
2020-11-04 11:32:08 +00:00
Sebastian Neubauer
7ea56efeb5 [AMDGPU] Set rsrc1 flags for graphics shaders
Before, they were only set for compute kernels and compute shaders,
but not for other shaders.

Differential Revision: https://reviews.llvm.org/D89399
2020-11-04 12:25:41 +01:00
Sebastian Neubauer
537305eda7 [AMDGPU] Fix ieee mode default value
Previously, the default value for ieee mode was
- on for compute kernels and compute shaders,
- off for all shaders except compute shaders.

This commit changes the default to be
- on for compute kernels,
- off for shaders.

This aligns the default value with the settings that are actually in
use.  To my knowledge, all users of shader calling conventions (mesa and
llpc) disable the ieee mode by default.

Differential Revision: https://reviews.llvm.org/D89388
2020-11-04 12:25:38 +01:00
David Green
2f8823c9c5 [ARM] Remove unused variable. NFC
2020-11-04 09:00:03 +00:00
Sander de Smalen
ca12e64408 [NFCI] Replace AArch64StackOffset by StackOffset.
This patch replaces the AArch64StackOffset class by the generic one
defined in TypeSize.h.

Reviewed By: david-arm

Differential Revision: https://reviews.llvm.org/D88983
2020-11-04 08:49:00 +00:00
Fangrui Song
878101132b [DebugInfo] Delete unused DwarfUnit::addConstantFPValue & addConstantValue overloads. NFC
These functions appear to have been unused for many years.
2020-11-04 00:05:57 -08:00
Martin Storsjö
3bd74bda76 Revert "[AggressiveInstCombine] Generalize foldGuardedRotateToFunnelShift to generic funnel shifts"
This reverts commit 59b22e495c15d2830f41381a327f5d6bf49ff416.

That commit broke building for ARM and AArch64, reproducible like this:

$ cat apedec-reduced.c
a;
b(e) {
  int c;
  unsigned d = f();
  c = d >> 32 - e;
  return c;
}
g() {
  int h = i();
  if (a)
    h = h << a | b(a);
  return h;
}
$ clang -target aarch64-linux-gnu -w -c -O3 apedec-reduced.c
clang: ../lib/Transforms/InstCombine/InstructionCombining.cpp:3656: bool llvm::InstCombinerImpl::run(): Assertion `DT.dominates(BB, UserParent) && "Dominance relation broken?"' failed.

Same thing for e.g. an armv7-linux-gnueabihf target.
2020-11-04 08:39:32 +02:00
Arthur Eubanks
e22b9f13f5 Port print-must-be-executed-contexts and print-mustexecute to NPM
Reviewed By: asbirlea

Differential Revision: https://reviews.llvm.org/D90207
2020-11-03 21:06:46 -08:00
Xiang1 Zhang
8cbf808a67 [StackColoring] Conservatively merge catch point of V for catch(V)
Reviewed By: thanm, clin1

Differential Revision: https://reviews.llvm.org/D86673
2020-11-04 10:49:56 +08:00
Michael Liao
0014463efc [MachineInstr] Add support for instructions with multiple memory operands.
- Basically, iterate over each pair of memory operands from both
  instructions and return true if any pair may alias (see the sketch
  below).
- The exception is memory instructions without any memory operand:
  they may touch everything and could alias with any memory
  instruction.
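
A sketch of that scheme (`mayOperandsAlias` is an assumed per-pair
helper; the real patch queries alias analysis there):

```
#include "llvm/CodeGen/MachineInstr.h"
#include "llvm/CodeGen/MachineMemOperand.h"
using namespace llvm;

// Assumed helper performing the per-pair AA query.
bool mayOperandsAlias(const MachineMemOperand &A,
                      const MachineMemOperand &B);

bool maybeAlias(const MachineInstr &MIa, const MachineInstr &MIb) {
  // No memory operands: the instruction may touch everything.
  if (MIa.memoperands_empty() || MIb.memoperands_empty())
    return true;
  // Otherwise the instructions alias iff some operand pair may.
  for (const MachineMemOperand *OpA : MIa.memoperands())
    for (const MachineMemOperand *OpB : MIb.memoperands())
      if (mayOperandsAlias(*OpA, *OpB))
        return true;
  return false;
}
```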

Differential Revision: https://reviews.llvm.org/D89447
2020-11-03 20:44:40 -05:00
Gaurav Jain
fe2efec249 [NFC] Use [MC]Register in register allocation
Differential Revision: https://reviews.llvm.org/D90725
2020-11-03 17:34:26 -08:00
Amara Emerson
4126569494 [AArch64][GlobalISel] Add combine for G_EXTRACT_VECTOR_ELT to allow selection of pairwise FADD.
For the <2 x float> case, instead of adding another combine or
legalization to get it into a <4 x float> form, I'm just adding a
GISel-specific selection pattern to cover it.

Differential Revision: https://reviews.llvm.org/D90699
2020-11-03 17:25:14 -08:00
Julien Jorge
8488b68226 [WebAssembly] Don't fold frame offset for global addresses
When machine instructions are in the form of
```
%0 = CONST_I32 @str
%1 = ADD_I32 %stack.0, %0
%2 = LOAD 0, 0, %1
```

The `ADD_I32` instruction can be folded if `%0` is a `CONST_I32`
of an immediate number, but not when it is a global address, as in
this case. However, we had not been checking whether the operand of
the `ADD` is an immediate; this fixes the problem. (The same applies
to the `ADD_I64` and `CONST_I64` instructions.)

Fixes https://bugs.llvm.org/show_bug.cgi?id=47944.

Patch by Julien Jorge (jjorge@quarkslab.com)

Reviewed By: dschuff

Differential Revision: https://reviews.llvm.org/D90577
2020-11-03 14:56:25 -08:00
Sanjay Patel
4b28ca8f6f [ARM] remove cost-kind predicate for most math op costs
This is based on the same idea that I am using for the basic model
implementation and that I have already partly done for x86:
throughput cost is the number of instructions/uops, so size/blended
costs are identical except in special cases (for example, fdiv or
other known-expensive machine instructions, or things like MVE that
may require cracking into >1 uop).

Differential Revision: https://reviews.llvm.org/D90692
2020-11-03 17:23:46 -05:00
Jordan Rupprecht
e4fd7971cd [NFC] Inline wasm assertion-only variable
2020-11-03 13:06:59 -08:00
Xun Li
c93ef2201c [musttail] Unify musttail call preceding return checking
There is already an API in BasicBlock that checks and returns the musttail call if it precedes the return instruction.
Use it instead of manually checking in each place.
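
A usage sketch, assuming the API meant here is
BasicBlock::getTerminatingMustTailCall():

```
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;

// True when the block's ret is immediately preceded by a musttail
// call (possibly through a bitcast), per the existing API.
bool endsInMustTailCall(const BasicBlock &BB) {
  return BB.getTerminatingMustTailCall() != nullptr;
}
```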

Differential Revision: https://reviews.llvm.org/D90693
2020-11-03 11:39:27 -08:00
Roman Lebedev
3e46fb4064 [Reassociate] Convert add-like or's into an add's to allow reassociation
InstCombine is quite aggressive in doing the opposite transform,
folding an `add` of operands with no common bits set into an `or`,
but not many things support that new pattern.

In this case, teaching Reassociate about it is easy,
there's preexisting art for `sub`/`shl`:
just convert such an `or` into an `add`:
https://rise4fun.com/Alive/Xlyv
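
A quick worked check of the underlying identity (compare the Alive
link above):

```
#include <cassert>
#include <cstdint>

// When the operands share no set bits there are no carries, so the
// or and the add produce the same value.
int main() {
  uint32_t A = 0xF0F0F0F0, B = 0x01010101;
  assert((A & B) == 0);
  assert((A | B) == A + B);
}
```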
2020-11-03 22:30:51 +03:00
Sanne Wouda
1cbe0dd7fb Revert "Add loop distribution to the LTO pipeline"
This reverts commit 6e80318eecde2639faa1e72be045c78b8b8aedad.
2020-11-03 19:29:27 +00:00
Sanne Wouda
4428a34963 Add loop distribution to the LTO pipeline
The LoopDistribute pass is missing from the LTO pipeline, so
-enable-loop-distribute has no effect during post-link. The pre-link
loop distribution doesn't seem to survive the LTO pipeline either.

With this patch (and -flto -mllvm -enable-loop-distribute) we see a 43%
uplift on SPEC 2006 hmmer for AArch64. The rest of SPECINT 2006 is
unaffected.

Differential Revision: https://reviews.llvm.org/D89896
2020-11-03 18:54:24 +00:00
Andy Wingo
d85c4c569d [WebAssembly] Implement ref.null
This patch adds a new "heap type" operand kind to the WebAssembly MC
layer, used by ref.null. Currently the possible values are "extern" and
"func"; when typed function references come, though, this operand may be
a type index.

Note that the "heap type" production is still known as "refedtype" in
the draft proposal; changing its name in the spec is
ongoing (https://github.com/WebAssembly/reference-types/issues/123).

The register form of ref.null is still untested.

Differential Revision: https://reviews.llvm.org/D90608
2020-11-03 10:46:23 -08:00
Jameson Nash
b65f8f86ef [GVN] small improvements to comments
2020-11-03 13:21:48 -05:00
Simon Pilgrim
8607b95320 Cleanup namespace comment to fix clang-tidy warning. NFCI.
2020-11-03 18:13:21 +00:00
Simon Pilgrim
977dc8c300 [DAG] computeKnownBits - Move ISD::SRA handling into KnownBits::ashr
As discussed on D90527, we should be trying to move shift handling functionality into KnownBits to avoid code duplication in SelectionDAG/GlobalISel/ValueTracking.
2020-11-03 18:09:33 +00:00
Craig Topper
b3e56d6425 [RISCV] Add missing patterns for rotr with immediate for Zbb/Zbp extensions.
DAGCombine doesn't canonicalize rotl/rotr with immediate so we
need patterns for both.

Remove the custom matcher for rotl to RORI and just use an
SDNodeXForm to convert the immediate instead. Doing this gives
priority to the rev32/rev16 versions of grevi over rori, since an
explicit immediate is more specific than a pattern that accepts any
immediate. I also added rotr patterns for
rev32/rev16. And removed the (or (shl), (shr)) patterns that should be
combined to rotl by DAG combine.
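
The rotate identity the immediate conversion relies on, checked in
plain C++ (illustrative only):

```
#include <cassert>
#include <cstdint>

// A rotate left by C is a rotate right by Width - C, so a single
// RORI immediate can cover both forms.
uint32_t rotl32(uint32_t X, unsigned S) { return (X << S) | (X >> (32 - S)); }
uint32_t rotr32(uint32_t X, unsigned S) { return (X >> S) | (X << (32 - S)); }

int main() {
  for (unsigned C = 1; C < 32; ++C)
    assert(rotl32(0xDEADBEEF, C) == rotr32(0xDEADBEEF, 32 - C));
}
```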

There is at least one other grev pattern that probably needs
another rotr pattern, but we need more test coverage first.

Differential Revision: https://reviews.llvm.org/D90575
2020-11-03 10:04:52 -08:00
Simon Pilgrim
95f20bec93 [DAG] computeKnownBits - Move (most) ISD::SRL handling into KnownBits::lshr
As discussed on D90527, we should be trying to move shift handling functionality into KnownBits to avoid code duplication in SelectionDAG/GlobalISel/ValueTracking.

The refactor to use the KnownBits fixed/min/max constant helpers allows us to hit a couple of cases that we were missing before.

We still need the getValidMinimumShiftAmountConstant case as KnownBits doesn't handle per-element vector cases.
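
A minimal sketch of the shared entry point (assuming the
KnownBits::lshr signature of this era):

```
#include "llvm/Support/KnownBits.h"
using namespace llvm;

// SelectionDAG and GlobalISel can both funnel SRL known-bits
// computation through this one helper instead of duplicating it.
KnownBits knownBitsOfLshr(const KnownBits &LHS, const KnownBits &Shift) {
  return KnownBits::lshr(LHS, Shift);
}
```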
2020-11-03 17:30:36 +00:00