mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-11-24 03:33:20 +01:00
Commit Graph

204714 Commits

Author SHA1 Message Date
Nikita Popov
c845089f3a [MemCpyOpt] Check for throwing calls during call slot optimization
When performing call slot optimization for a non-local destination,
we need to check whether there may be throwing calls between the
call and the copy. Otherwise, the early write to the destination
may be observable by the caller.

This was already done for call slot optimization of load/store,
but not for memcpys. For the sake of clarity, I'm moving this check
into the common optimization function, even if that does need an
additional instruction scan for the load/store case.

As efriedma pointed out, this check is not sufficient due to
potential accesses from another thread. This case is left as a TODO.
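
To illustrate the hazard at the source level, here is a hedged C++ sketch (the helpers `produce` and `mayThrow` are hypothetical, not code from the pass): call slot optimization wants to make the producing call write into `*dst` directly, which is only legal if no throwing call sits between that call and the copy.

```
#include <cstring>

struct Payload { char bytes[64]; };

void produce(Payload *out);   // hypothetical: fills *out
void mayThrow();              // hypothetical: may throw/unwind to the caller

void copyInto(Payload *dst) { // dst is non-local: the caller can see it
  Payload tmp;
  produce(&tmp);              // call whose result lands in the temporary
  mayThrow();                 // if this unwinds, *dst must still be untouched
  std::memcpy(dst, &tmp, sizeof(Payload));
  // Call slot optimization would rewrite produce(&tmp) as produce(dst) and
  // drop the memcpy; the early write to *dst would then be observable by the
  // caller if mayThrow() unwinds, so the pass must check for throwing calls.
}
```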

Differential Revision: https://reviews.llvm.org/D88799
2020-10-06 18:24:40 +02:00
Nikita Popov
c882ea3914 [MemCpyOpt] Add separate statistic for call slot optimization (NFC) 2020-10-06 18:14:10 +02:00
Simon Pilgrim
03d833fcca [APIntTest] Extend extractBits to check 'lshr+trunc' pattern for each case as well.
Noticed while triaging PR47731 that we don't have great coverage for such patterns.
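
A hedged sketch of the equivalence the added coverage exercises (a free-standing check, not the actual unit test): extracting a bit field with `APInt::extractBits` should agree with shifting the field down and truncating to its width.

```
#include "llvm/ADT/APInt.h"
#include <cassert>

using llvm::APInt;

// Compare direct extraction against the lshr+trunc pattern for one case.
void checkExtractBitsAgainstLshrTrunc(const APInt &V, unsigned Pos, unsigned Num) {
  assert(Num > 0 && Num < V.getBitWidth() && Pos + Num <= V.getBitWidth() &&
         "field must fit strictly inside V");
  APInt Direct = V.extractBits(Num, Pos);  // pull Num bits starting at Pos
  APInt Shifted = V.lshr(Pos).trunc(Num);  // same field via lshr + trunc
  assert(Direct == Shifted && "extractBits should match lshr+trunc");
}
```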
2020-10-06 16:32:40 +01:00
Fangrui Song
4b618b1f5f [X86] .code16: temporarily set Mode32Bit when matching an instruction with the data32 prefix
PR47632

This allows MC to match `data32 ...` as one instruction instead of two (data32 without insn + insn).

The compatibility with GNU as improves: `data32 ljmp` will be matched as ljmpl.
`data32 lgdt 4(%eax)` will be matched as `lgdtl` (prefixes: 0x67 0x66, instead
of 0x66 0x67).

GNU as supports many other `data32 *w` forms as `*l`. We currently just hard code
`data32 callw` and `data32 ljmpw`. Generalizing the suffix replacement is
tricky and requires some thought about the "bwlq" suffix-appending rules in MatchAndEmitATTInstruction.

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D88772
2020-10-06 08:32:03 -07:00
Dávid Bolvanský
b959205f6d [SimplifyLibCalls] Optimize mempcpy_chk to mempcpy 2020-10-06 17:08:46 +02:00
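
A hedged C++ sketch of what this fold means at the source level (assuming the GNU `mempcpy` semantics of returning `dst + n`; not the SimplifyLibCalls code itself): when the object-size check can never fail, the fortified call can be replaced by the plain one.

```
#include <cstring>

char dst[32];

void *fortified(const char *src) {
  // __builtin___mempcpy_chk(dst, src, n, objsize) traps at run time when
  // n > objsize; here objsize is known to be 32 >= 16, so the check is dead.
  return __builtin___mempcpy_chk(dst, src, 16, __builtin_object_size(dst, 0));
}

void *simplified(const char *src) {
  // What the fold produces conceptually: a plain mempcpy, i.e. copy n bytes
  // and return dst + n.
  std::memcpy(dst, src, 16);
  return dst + 16;
}
```
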
LLVM GN Syncbot
045458557c [gn build] Port aa2b593f149 2020-10-06 14:49:44 +00:00
Arthur Eubanks
b093181fde [BPF][NewPM] Make BPFTargetMachine properly adjust NPM optimizer pipeline
This involves porting BPFAbstractMemberAccess and BPFPreserveDIType to
NPM, then adding them to BPFTargetMachine::registerPassBuilderCallbacks
(the NPM equivalent of adjustPassManager()).
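
A minimal, hedged sketch of the NPM hook in use (written against the current PassBuilder API, whose callback signatures have shifted over time; `MyBPFLikePass` is a stand-in, not the real BPF passes):

```
#include "llvm/IR/PassManager.h"
#include "llvm/Passes/PassBuilder.h"

using namespace llvm;

struct MyBPFLikePass : PassInfoMixin<MyBPFLikePass> {  // hypothetical pass
  PreservedAnalyses run(Module &M, ModuleAnalysisManager &) {
    return PreservedAnalyses::all();
  }
};

// Rough shape of a target's registerPassBuilderCallbacks override: schedule
// target-specific IR passes at an extension point of the default pipeline,
// instead of the legacy adjustPassManager() mechanism.
void registerCallbacksSketch(PassBuilder &PB) {
  PB.registerPipelineStartEPCallback(
      [](ModulePassManager &MPM, OptimizationLevel Level) {
        MPM.addPass(MyBPFLikePass());
      });
}
```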

Reviewed By: yonghong-song, asbirlea

Differential Revision: https://reviews.llvm.org/D88855
2020-10-06 07:42:32 -07:00
Arthur Eubanks
edc441cd86 [test][InstCombine][NewPM] Fix InstCombine tests under NPM
Some of these depended on analyses being present that aren't provided
automatically in NPM.

early_dce_clobbers_callgraph.ll was previously inlining a noinline function?

cast-call-combine.ll relied on the legacy always-inline pass being a
CGSCC pass and getting rerun.

Reviewed By: asbirlea

Differential Revision: https://reviews.llvm.org/D88187
2020-10-06 07:39:00 -07:00
Arthur Eubanks
846976cef1 [test][NewPM] Make dead-uses.ll work under NPM
This one is weird...

globals-aa needs to already be computed by the time licm runs, because a
function pass can't run a module analysis and so won't have access to globals-aa.
But the globals-aa result is impacted by instcombine in a way that
affects what the test is expecting. If globals-aa is computed before
instcombine, it is cached, and the globals-aa used in licm won't contain the
necessary info provided by instcombine.
Another catch is that if we don't invalidate the AAManager, licm will use the
cached AAManager that instcombine requested, which may not contain
globals-aa. So we have to run invalidate<aa> so that licm can recompute
an AAManager with the globals-aa created by require<globals-aa>.

This is essentially the problem described in https://reviews.llvm.org/D84259.

Reviewed By: asbirlea

Differential Revision: https://reviews.llvm.org/D88118
2020-10-06 07:33:02 -07:00
Johannes Doerfert
3e4d5e877b [Attributor][FIX] Move assertion to make it not trivially fail
The idea of this assertion was to check the simplified value before we
assign it, not after, which caused this to trivially fail all the time.
2020-10-06 09:32:18 -05:00
Johannes Doerfert
c5111678c6 [Attributor][FIX] Dead return values are not noundef
When we assume a return value is dead we might still visit return
instructions via `Attributor::checkForAllReturnedValuesAndReturnInsts(..)`.
When we do so the "returned value" is potentially simplified to `undef`
as it is the assumed "returned value". This is a problem if there was a
preexisting `noundef` attribute that will only be removed as we manifest
the `undef` return value. We should not use this combination to derive
`unreachable` though. Two test cases fixed.
2020-10-06 09:32:18 -05:00
Johannes Doerfert
3d88a1b2b2 [Attributor][NFC] Ignore benign uses in AAMemoryBehaviorFloating
In AAMemoryBehaviorFloating we used to track benign uses in a SetVector.
With this change we look through benign uses eagerly to reduce the
number of elements (=Uses) we look at during an update.

The test does not actually fail prior to this commit, but I already wrote
it so I kept it.
2020-10-06 09:32:18 -05:00
Dmitri Gribenko
6c75e7bd17 Silence -Wunused-variable in NDEBUG mode 2020-10-06 16:02:17 +02:00
Simon Pilgrim
9a98ab03d1 [InstCombine] canRewriteGEPAsOffset - don't dereference a dyn_cast<>. NFCI.
We know V is an IntToPtrInst or PtrToIntInst, so we know it's a CastInst - use cast<> directly.

Prevents a clang static analyzer warning that we could dereference a null pointer.
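
A hedged sketch of the idiom (the helper name is hypothetical): when the dynamic type is already guaranteed, `cast<>` documents and asserts that guarantee instead of producing a pointer that a static analyzer rightly assumes could be null.

```
#include "llvm/IR/InstrTypes.h"
#include "llvm/IR/Value.h"

using namespace llvm;

// Caller guarantees V is an IntToPtrInst or PtrToIntInst, hence a CastInst.
Type *castSourceType(Value *V) {
  return cast<CastInst>(V)->getSrcTy();  // cast<> asserts; no null check needed
}
```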
2020-10-06 14:48:34 +01:00
Simon Pilgrim
1305d04969 [InstCombine] FoldShiftByConstant - consistently use ConstantExpr in logicalshift(trunc(shift(x,c1)),c2) fold. NFCI.
This still only gets used for scalar types but now always uses ConstantExpr in preparation for vector support - it was using APInt methods in some places.
2020-10-06 14:48:34 +01:00
Sam Tebbs
8c5ecf8e27 [ARM] Fold select_cc(vecreduce_[u|s][min|max], x) into VMINV or VMAXV
This folds a select_cc or select(set_cc) of a max or min vector reduction with a scalar value into a VMAXV or VMINV.

Differential Revision: https://reviews.llvm.org/D87836
2020-10-06 14:44:58 +01:00
Dmitry Preobrazhensky
7883b0e0f8 [AMDGPU][MC] Added detection of unsupported instructions
Implemented identification of unsupported instructions; improved error reporting.

See bug 42590.

Reviewers: rampitec

Differential Revision: https://reviews.llvm.org/D88211
2020-10-06 16:44:27 +03:00
Jonas Paulsson
3490945bcb [SystemZAsmParser] Treat VR128 separately in ParseDirectiveInsn().
This patch makes the parser
  - reject higher vector registers (>=16) in operands where they should not
    be accepted.
  - accept higher integers (>=16) in vector register operands.

Review: Ulrich Weigand
Differential Revision: https://reviews.llvm.org/D88888
2020-10-06 14:42:40 +02:00
Simon Pilgrim
9f91f08925 [InstCombine] FoldShiftByConstant - use PatternMatch for logicalshift(trunc(shift(x,c1)),c2) fold. NFCI. 2020-10-06 13:13:08 +01:00
Simon Pilgrim
0a9d5521b3 [InstCombine] FoldShiftByConstant - remove unnecessary cast<>. NFC.
Op1 is already a Constant*
2020-10-06 13:13:08 +01:00
Alexey Lapshin
660205ea0e [llvm-objcopy][NFC] fix style issues reported by clang-format. 2020-10-06 15:06:25 +03:00
LLVM GN Syncbot
f024187bee [gn build] Port d6c9dc3c17e 2020-10-06 12:02:07 +00:00
Alexander Shaposhnikov
56273e756d [llvm-objcopy][MachO] Add support for universal binaries
This diff adds support for universal binaries to llvm-objcopy.
This is a recommit of 32c8435ef70031 with the asan issue fixed.

Test plan: make check-all

Differential revision: https://reviews.llvm.org/D88400
2020-10-06 04:01:40 -07:00
Denis Antrushin
91c614801d [Statepoints] Change statepoint machine instr format to better suit VReg lowering.
Current Statepoint MI format is this:

   STATEPOINT
   <id>, <num patch bytes>, <num call arguments>, <call target>,
   [call arguments...],
   <StackMaps::ConstantOp>, <calling convention>,
   <StackMaps::ConstantOp>, <statepoint flags>,
   <StackMaps::ConstantOp>, <num deopt args>, [deopt args...],
   <gc base/derived pairs...> <gc allocas...>

Note that GC pointers are listed in pairs <base,derived>.
This causes base pointers to appear many times (at least twice) in the
instruction, which is bad for us when VReg lowering is ON.
The problem is that machine operand tiedness is 1-1 relation, so
it might look like this:

  %vr2 = STATEPOINT ... %vr1, %vr1(tied-def0)

Since only one instance of %vr1 is tied, that may lead to incorrect
codegen (see PR46917 for more details), so we always have to spill
base pointers. This mostly defeats the new VReg lowering scheme.

This patch changes the statepoint instruction format so that every
gc pointer appears only once in the operand list. That way they can all
be tied. An additional set of operands is added to preserve the base-derived
relation required to build the stackmap.
The new statepoint has the following format:

  STATEPOINT
  <id>, <num patch bytes>, <num call arguments>, <call target>,
  [call arguments...],
  <StackMaps::ConstantOp>, <calling convention>,
  <StackMaps::ConstantOp>, <statepoint flags>,
  <StackMaps::ConstantOp>, <num deopt args>, [deopt args...],
  <StackMaps::ConstantOp>, <num gc pointers>, [gc pointers...],
  <StackMaps::ConstantOp>, <num gc allocas>,  [gc allocas...]
  <StackMaps::ConstantOp>, <num entries in gc map>, [base/derived indices...]

The changes are:
  - every gc pointer is listed only once in a flat length-prefixed list;
  - the alloca list is prefixed with its length too;
  - following the alloca list is a length-prefixed list of base-derived
    indices of pointers from the gc pointer list. Note that indices are
    logical (the number of the pointer), not absolute (the index of the machine operand).

Differential Revision: https://reviews.llvm.org/D87154
2020-10-06 17:40:29 +07:00
Paul Walker
68de857582 [SVE] Lower fixed length vector fneg and fsqrt operations.
Also updates sve-fp.ll to use fneg directly.

Differential Revision: https://reviews.llvm.org/D88683
2020-10-06 10:48:16 +01:00
Paul Walker
829e52e19e [SVE] Lower fixed length vector floating point rounding operations.
Adds lowering for:
  llvm.ceil
  llvm.floor
  llvm.nearbyint
  llvm.rint
  llvm.round
  llvm.trunc

Differential Revision: https://reviews.llvm.org/D88671
2020-10-06 10:48:16 +01:00
Dmitri Gribenko
0d00b6a497 Revert "[llvm-objcopy][MachO] Add support for universal binaries"
This reverts commit 32c8435ef70031d7bd3dce48e41bdce65747e123. It fails
ASan, details in https://reviews.llvm.org/D88400.
2020-10-06 11:29:24 +02:00
Dmitri Gribenko
5384fe3962 Revert "[llvm-objcopy][MachO] Add missing std::move."
This reverts commit 6e25586990b93e2c9eaaa4f473b6720ccd646c46. It depends
on 32c8435ef70031d7bd3dce48e41bdce65747e123, which I'm reverting due to
ASan failures. Details in https://reviews.llvm.org/D88400.
2020-10-06 11:28:55 +02:00
Mauri Mustonen
f55288ff73 [VPlan] Add vplan native path vectorization test case for inner loop reduction
Regarding this bug I posted earlier: https://bugs.llvm.org/show_bug.cgi?id=47035

After reading through the LLVM source code and getting familiar with VPlan I was able to vectorize the code by enabling the VPlan native path. After talking with @fhahn, he suggested that I contribute this as a test case, so here it is. I tried to follow the available guides on how to do this as best I could. I modified the IR code by hand to use clearer variable names instead of numbers.

One thing I'd like input on: are the current CHECK lines sufficient to verify that the inner loop has been vectorized properly?

Reviewed By: fhahn

Differential Revision: https://reviews.llvm.org/D87564
2020-10-06 10:11:58 +01:00
Georgii Rymar
bd1081f889 [llvm-readobj/elf][test] - Stop using precompiled binaries in mips-got.test
This removes the last 2 precompiled binaries from mips-got.test.
YAML descriptions are used instead.

Differential revision: https://reviews.llvm.org/D88565
2020-10-06 12:04:44 +03:00
Sebastian Neubauer
fa2e771bf8 [AMDGPU] Fix gcc warnings
uint8_t types are implicitly promoted to int, leading to an
unsigned-signed comparison.
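
A hedged standalone illustration of this warning class (not the AMDGPU code itself): the `uint8_t` operand is promoted to a signed `int`, so comparing it against an unsigned value trips gcc's signed/unsigned-comparison warning unless one side is cast.

```
#include <cstdint>

bool fitsBelowLimit(uint8_t Value, unsigned Limit) {
  // Value undergoes integer promotion to (signed) int here; comparing that
  // int against the unsigned Limit is the mixed comparison gcc warns about.
  // An explicit cast keeps both sides unsigned and silences the warning.
  return static_cast<unsigned>(Value) < Limit;
}
```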

Thanks for the heads-up @uabelho.

Differential Revision: https://reviews.llvm.org/D88876
2020-10-06 10:55:08 +02:00
David Sherwood
4bc4e2e524 [SVE][CodeGen] Fix DAGCombiner::ForwardStoreValueToDirectLoad for scalable vectors
In DAGCombiner::ForwardStoreValueToDirectLoad I have fixed up some
implicit casts from TypeSize -> uint64_t and replaced calls to
getVectorNumElements() with getVectorElementCount(). There are some
simple cases of forwarding that we can definitely support for
scalable vectors, i.e. when the store and load are both scalable
vectors and have the same size. I have added tests for the new
code paths here:

  CodeGen/AArch64/sve-forward-st-to-ld.ll
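
A hedged sketch of the scalable-safe querying pattern the fix moves to (a free-standing helper, not the DAGCombiner code): compare sizes as `TypeSize` and shapes as `ElementCount` rather than collapsing either to a plain integer.

```
#include "llvm/CodeGen/ValueTypes.h"

using namespace llvm;

// Forwarding a stored value to a load only makes sense when the two value
// types really have the same size and shape, including the scalable bit.
bool sameSizeAndShape(EVT StoreVT, EVT LoadVT) {
  if (StoreVT.getSizeInBits() != LoadVT.getSizeInBits()) // TypeSize compare
    return false;
  if (StoreVT.isVector() != LoadVT.isVector())
    return false;
  return !StoreVT.isVector() ||
         StoreVT.getVectorElementCount() == LoadVT.getVectorElementCount();
}
```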

Differential Revision: https://reviews.llvm.org/D87098
2020-10-06 08:04:03 +01:00
Johannes Doerfert
88c849a4b2 [AttributeFuncs][FIX] Update new tests (D87304, D87306) after sret changes
Hopefully the last of these, apologies for the noise.
2020-10-06 00:12:18 -05:00
Max Kazantsev
01edf36ec5 Revert "[SCEV] Prove implicaitons via AddRec start"
This reverts commit 69acdfe075fa8eb18781f88f4d0cd1ea40fa6e48.

Need to investigate reported miscompiles.
2020-10-06 11:40:14 +07:00
Johannes Doerfert
21a24161e2 [AttributeFuncs][FIX] Update new tests (D87304) after sret changes 2020-10-05 23:37:15 -05:00
Lang Hames
7368935004 [JITLink][ELF] Handle BSS sections, improve some error messages.
This patch enables basic BSS section handling, and improves a couple of error
messages in the ELF section parsing code.

Patch by Christian Schafmeister. Thanks Christian!

Differential Revision: https://reviews.llvm.org/D88867
2020-10-05 21:35:35 -07:00
Johannes Doerfert
e6334e7ecd [AttributeFuncs] Consider noundef in typeIncompatible
Drop `noundef` for return values that are replaced by void and make it
illegal to put `noundef` on a void value.

Reviewed By: fhahn

Differential Revision: https://reviews.llvm.org/D87306
2020-10-05 23:23:06 -05:00
Johannes Doerfert
5dc5457fbf [AttributeFuncs] Consider align in typeIncompatible
Alignment attributes need to be dropped for non-pointer values.
This also introduces a check into the verifier to ensure you don't use
`align` on anything but a pointer. Test needed to be adjusted
accordingly.

Reviewed By: fhahn

Differential Revision: https://reviews.llvm.org/D87304
2020-10-05 23:23:05 -05:00
Serguei Katkov
83ae2d53d6 [GVN LoadPRE] Extend the scope of optimization by using context to prove safety of speculation
Use the context to prove that the load can be safely executed at the point where it is being hoisted.

Postpone the decision about the safety of speculative load execution until the moment we know
where we will hoist the load, and check safety in that context.
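
A hedged sketch of the idea (a simplified helper, not the GVN code; the exact ValueTracking overload may differ across versions): pass the hoist point as the context instruction so facts established at that program point can justify the speculation.

```
#include "llvm/Analysis/ValueTracking.h"
#include "llvm/IR/Instructions.h"

using namespace llvm;

// Ask whether the load is safe to execute specifically at the hoist point,
// instead of deciding in the abstract before the hoist point is known.
bool safeToHoistLoadTo(LoadInst *Load, Instruction *HoistPt) {
  return isSafeToSpeculativelyExecute(Load, /*CtxI=*/HoistPt);
}
```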

Reviewers: nikic, fhahn, mkazantsev, lebedev.ri, efriedma, reames
Reviewed By: reames, mkazantsev
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D88725
2020-10-06 09:25:16 +07:00
Arthur Eubanks
442f289a62 [NewPM] Set -enable-npm-optnone to true by default
This makes the NPM skip non-required passes on functions marked optnone.

If this causes a pass that should be required but has not been marked
required to be skipped, add
`static bool isRequired() { return true; }`
to the pass class. AlwaysInlinerPass is an example.
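
For reference, a hedged sketch of what that opt-out looks like on a new-PM pass (`MyRequiredPass` is a stand-in name):

```
#include "llvm/IR/PassManager.h"

using namespace llvm;

struct MyRequiredPass : PassInfoMixin<MyRequiredPass> {  // hypothetical pass
  PreservedAnalyses run(Function &F, FunctionAnalysisManager &) {
    // ... work that must run even on optnone functions ...
    return PreservedAnalyses::all();
  }
  // Consulted by pass instrumentation: required passes are never skipped.
  static bool isRequired() { return true; }
};
```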

clang/test/CodeGen/O0-no-skipped-passes.c is useful for checking that
no passes are skipped under -O0.

The -enable-npm-optnone option will be removed once this has been stable
for long enough without issues.

Reviewed By: ychen, asbirlea

Differential Revision: https://reviews.llvm.org/D87869
2020-10-05 18:42:32 -07:00
Mircea Trofin
0d01493a68 [MLInliner] Factor out logging
Factored out the logging facility, to allow its reuse outside the
inliner.

Differential Revision: https://reviews.llvm.org/D88770
2020-10-05 18:09:17 -07:00
Carl Ritson
fcab491a2e [AMDGPU] SIInsertSkips: Refactor early exit block creation
Refactor exit block creation to a single call ensureEarlyExitBlock.
Add support for generating an early exit block which clears the
exec mask, but only add this instruction when required.
These changes are to facilitate adding more forms of early
termination for PS shaders in the near future.

Reviewed By: nhaehnle

Differential Revision: https://reviews.llvm.org/D88775
2020-10-06 09:44:55 +09:00
Carl Ritson
f8693149d1 Fix reordering of instructions during VirtRegRewriter unbundling
When unbundling COPY bundles in VirtRegRewriter the start of the
bundle is not correctly referenced in the unbundling loop.

The effect of this is that unbundled instructions are sometimes
inserted out-of-order, particularly in cases where multiple
reorderings have been applied to avoid clobbering dependencies.
The resulting instruction sequence then clobbers dependencies.

Reviewed By: foad

Differential Revision: https://reviews.llvm.org/D88821
2020-10-06 09:43:02 +09:00
Evandro Menezes
24a1d1faa1 [RISCV] Fix broken test
Fix test for the SiFive E76 core.

This patch fixes the issue introduced by the commit 5d6d8a2769.
2020-10-05 19:28:31 -05:00
Mircea Trofin
506e6e9067 [NFC][regalloc] Separate iteration from AllocationOrder
This separates the two concerns: encapsulation of traversal order, and
iteration.

Differential Revision: https://reviews.llvm.org/D88256
2020-10-05 16:13:18 -07:00
Greg Clayton
b5450e4ab7 Show register names in DWARF unwind info.
Register context information was already being passed into the DWARFDebugFrame code that dumps unwind information, but it wasn't being used. This change adds the ability to dump register names if a valid MC register context was passed in and it knows about the register. Updated the tests to use the newly returned register names.

Differential Revision: https://reviews.llvm.org/D88767
2020-10-05 15:34:33 -07:00
Craig Topper
5b0ea952d4 [X86] Remove X86ISD::LCMPXCHG8_SAVE_EBX_DAG and LCMPXCHG8B_SAVE_EBX pseudo instruction
This and its friend X86ISD::LCMPXCHG8_SAVE_RBX_DAG are used if we need to avoid clobbering the frame pointer in EBX/RBX. EBX/RBX are only used as a frame pointer in 64-bit mode. In 64-bit mode we don't use CMPXCHG8B since we have a GR64 cmpxchg available, so we don't need special handling for LCMPXCHG8B.

Split from D88808

Differential Revision: https://reviews.llvm.org/D88853
2020-10-05 15:03:07 -07:00
Craig Topper
2ef989f13e [SelectionDAG] Make sure FMF are propagated when getSetcc canonicalizes FP constants to RHS.
getNode handling for ISD::SETCC calls FoldSETCC, which can canonicalize
FP constants to the RHS. When this happens we should create the node
with the FMF that was requested. By using FlagInserter we can ensure
any calls to getNode/getSetCC during canonicalization will also get the flags.
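
A hedged sketch of the RAII idiom referred to (helper name hypothetical; assumes SelectionDAG::FlagInserter as in trunk around this change): while the inserter is in scope, nodes built through getNode pick up the requested flags, so re-creation during canonicalization keeps the FMF.

```
#include "llvm/CodeGen/SelectionDAG.h"

using namespace llvm;

SDValue buildSetCCWithFlags(SelectionDAG &DAG, const SDLoc &DL, EVT VT,
                            SDValue LHS, SDValue RHS, ISD::CondCode CC,
                            SDNodeFlags Flags) {
  // Any getNode/getSetCC performed below (including FoldSETCC swapping an FP
  // constant to the RHS) creates nodes carrying Flags.
  SelectionDAG::FlagInserter FlagsInserter(DAG, Flags);
  return DAG.getSetCC(DL, VT, LHS, RHS, CC);
}
```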

Differential Revision: https://reviews.llvm.org/D88063
2020-10-05 14:55:23 -07:00
Fangrui Song
d5e99701a1 Cleanup CodeGen/CallingConvLower.cpp
Patch by pi1024e (email unavailable)

Differential Revision: https://reviews.llvm.org/D82593
2020-10-05 14:47:46 -07:00
Vedant Kumar
891fde51fc Revert "Outline non returning functions unless a longjmp"
This reverts commit 20797989ea190f2ef22d13c5a7a0535fe9afa58b.

This patch (https://reviews.llvm.org/D69257) cannot complete a stage2
build due to the change:

```
CI->getCalledFunction()->getName().contains("longjmp")
```

There are several concrete issues here:

  - The callee may not be a function, so `getCalledFunction` can assert.
  - The called value may not have a name, so `getName` can assert.
  - There's no distinction made between "my_longjmp_test_helper" and the
    actual longjmp libcall.
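
A hedged sketch of the guards the first two bullets call for (a standalone helper, not the reverted patch): check for an indirect call and a nameless callee before string-matching; the third bullet, conflating helpers with the real libcall, remains even with the guards.

```
#include "llvm/IR/Instructions.h"

using namespace llvm;

bool callsSomethingNamedLongjmp(const CallBase &CB) {
  const Function *Callee = CB.getCalledFunction(); // null for indirect calls
  if (!Callee || !Callee->hasName())               // avoid querying a missing name
    return false;
  // Still matches "my_longjmp_test_helper" as well as the libcall itself.
  return Callee->getName().contains("longjmp");
}
```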

At a higher level, there's a serious layering problem here. The
splitting pass makes policy decisions in a general way (e.g. based on
attributes or profile data). Special-casing certain names breaks the
layering. It subverts the work of library maintainers (who may now need
to opt-out of unexpected optimization behavior for any affected
functions) and can lead to inconsistent optimization behavior (as not
all llvm passes special-case ".*longjmp.*" in the same way).

The patch may need significant revision to address these issues.

But the immediate issue is that this crashes while compiling llvm's unit
tests in a stage2 build (due to the `getName` problem).
2020-10-05 14:10:25 -07:00