mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-11-23 03:02:36 +01:00
Commit Graph

217905 Commits

Author SHA1 Message Date
Jeremy Morse
c57016fd83 [LiveDebugValues][InstrRef][2/2] Emit entry value variable locations
This patch adds support to the instruction-referencing LiveDebugValues
implementation for emitting entry values. The instruction-referencing
implementation's tracking of values rather than locations means that we can
get around two of the issues with VarLocs. DBG_VALUE instructions that
re-assign the same value to a variable are no longer a problem, because we
can "see through" to the value being assigned. We also don't need to do
anything special during the dataflow stages: the "variable value problem"
doesn't need to know whether a value is available most of the time, and the
times it does need to know are always when entry values need to be
terminated.

The patch modifies the "TransferTracker" class, adding methods to identify
when a variable is an entry value candidate, and when a machine value is
an entry value. recoverAsEntryValue tests these two things and emits an
entry-value expression if they're true. It's used when we clobber or
otherwise lose a value and can't find a replacement location for the value
it contained.

Differential Revision: https://reviews.llvm.org/D88406
2021-06-30 23:07:39 +01:00
Matt Arsenault
d665475981 GlobalISel: Use LLT in memory legality queries
This enables proper lowering of non-byte sized loads. We still aren't
faithfully preserving memory types everywhere, so the legality checks
still only consider the size.
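With LLT-typed memory descriptions, a legality rule can distinguish the
register type from the type in memory. A minimal sketch, modelled on the
AArch64-style rule-builder API (the opcode and concrete types here are
illustrative assumptions, not taken from this patch):

  #include "llvm/CodeGen/GlobalISel/LegalizerInfo.h"
  #include "llvm/CodeGen/TargetOpcodes.h"
  using namespace llvm;

  void buildLoadRules(LegalizerInfo &LI) {
    const LLT S8 = LLT::scalar(8);
    const LLT S32 = LLT::scalar(32);
    const LLT P0 = LLT::pointer(/*AddressSpace=*/0, /*SizeInBits=*/64);
    LI.getActionDefinitionsBuilder(TargetOpcode::G_LOAD)
        .legalForTypesWithMemDesc({{S32, P0, S8, 8},     // s32 result, s8 in memory
                                   {S32, P0, S32, 8}});  // plain s32 load
  }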
2021-06-30 17:44:13 -04:00
Matt Arsenault
9526df50fa GlobalISel: Lower non-byte loads and stores
Previously we didn't preserve the memory type and had to blindly
interpret a number of bytes. Now that non-byte memory accesses are
representable, we can handle these correctly.

Ported from DAG version (minus some weird special case i1 legality
checking which I don't fully understand, and we don't have a way to
query for)

For now, this is NFC and the test changes are placeholders. Since the
legality queries are still relying on byte-flattened memory sizes, the
legalizer can't actually see these non-byte accesses. This keeps this
change self-contained without merging it with the larger patch to
switch to LLT memory queries.
2021-06-30 17:05:50 -04:00
Matt Arsenault
59f5d684fb GlobalISel: Preserve memory type when reducing load/store width 2021-06-30 17:05:29 -04:00
Matt Arsenault
82a59d63ea AMDGPU/GlobalISel: Remove some problematic testcases
These testcases are a bit nonsensical and won't be handled correctly
for a long time. Remove them to unblock load/store legalization work.
2021-06-30 17:05:29 -04:00
Jonas Paulsson
38b768656f [MCStreamer] Move emission of attributes section into MCELFStreamer
Enable the emission of a GNU attributes section by reusing the code for
emitting the ARM build attributes section.

The GNU attributes follow the exact same section format as the ARM
BuildAttributes section, so this can be factored out and reused for GNU
attributes generally.

The immediate motivation for this is to emit a GNU attributes section for the
vector ABI on SystemZ (https://reviews.llvm.org/D105067).

Review: Logan Chien, Ulrich Weigand

Differential Revision: https://reviews.llvm.org/D102894
2021-06-30 16:00:27 -05:00
Matt Arsenault
cc12b285b6 CodeGen: Print/parse LLTs in MachineMemOperands
This will currently accept the old number of bytes syntax, and convert
it to a scalar. This should be removed in the near future (I think I
converted all of the tests already, but likely missed a few).

Not sure what the exact syntax and policy should be. We can continue
printing the number of bytes for non-generic instructions to avoid
test churn and only allow non-scalar types for generic instructions.

This will currently print the LLT in parentheses, but accept parsing
the existing integers and implicitly converting to scalar. The
parentheses are a bit ugly, but the parser logic seems unable to deal
without either parentheses or some keyword to indicate the start of a
type.
2021-06-30 16:54:13 -04:00
Martin Storsjö
fc25f6691b [CMake] Don't use -Bsymbolic-functions for MinGW targets
This is an ELF specific option which isn't supported for Windows/MinGW
targets, even if the MinGW linker otherwise uses an ld.bfd like linker
interface.

Differential Revision: https://reviews.llvm.org/D105148
2021-06-30 22:54:26 +03:00
Valentin Churavy
eafb5c90da [Orc] Run the examples as part of the tests
Enable the Orc C-Bindings for testing.

Reviewed By: lhames

Differential Revision: https://reviews.llvm.org/D104637
2021-06-30 21:45:16 +02:00
Valentin Churavy
0c3065489c [Orc] Fix name of LLVMOrcIRTransformLayerSetTransform
In https://reviews.llvm.org/D103855 we added access to IRTransformLayer, but I
just noticed that the function name is following the wrong pattern.

Differential Revision: https://reviews.llvm.org/D104840
2021-06-30 21:43:34 +02:00
Jon Roelofs
b3511ee3cf [GISel] Support llvm.memcpy.inline
Differential revision: https://reviews.llvm.org/D105072
2021-06-30 12:39:05 -07:00
Florian Hahn
bd03d58d2a [BasicAA] Use separate scale variable for GCD.
Use a separate variable for the adjusted scale used for GCD computations. This
fixes an issue where we incorrectly determined that all indices are
non-negative and returned noalias because of that.

Follow up to 91fa3565da16.
2021-06-30 20:04:39 +01:00
Florian Hahn
78eb09629d [BasicAA] Add test for incorrectly inferring noalias due to scale sign.
This patch adds a test where we currently incorrectly determine noalias,
because the sign of Scale is adjusted after 91fa3565da16.
2021-06-30 19:57:29 +01:00
LLVM GN Syncbot
a277ada21c [gn build] Port 381ded345bdd 2021-06-30 18:49:16 +00:00
Nico Weber
c59b205c9c [gn build] (manually) port f617ab104451 (DoublerPlugin) 2021-06-30 14:49:06 -04:00
Philip Reames
4133286a10 autogen two tests for ease of update 2021-06-30 11:47:36 -07:00
Stanislav Mekhanoshin
4b0ec23e84 [AMDGPU] Add S_MOV_B64_IMM_PSEUDO for wide constants
This is to allow 64-bit constant rematerialization. If a constant
is split into two separate moves initializing sub0 and sub1, as it is
now, RA cannot rematerialize a 64-bit register.

This gives a 10-20% uplift in a set of huge apps heavily using double
precision math.

Fixes: SWDEV-292645

Differential Revision: https://reviews.llvm.org/D104874
2021-06-30 11:45:38 -07:00
Xun Li
791eaad746 [Coroutines] Add the newly generated SCCs back to the CGSCC work queue after CoroSplit actually happened
Relevant discussion can be found at: https://lists.llvm.org/pipermail/llvm-dev/2021-January/148197.html
In the existing design, an SCC that contains a coroutine will go through the following passes:
Inliner -> CoroSplitPass (fake) -> FunctionSimplificationPipeline -> Inliner -> CoroSplitPass (real) -> FunctionSimplificationPipeline

The first CoroSplitPass doesn't do anything other than putting the SCC back on the queue so that the entire pipeline can repeat.
As you can see, we run the Inliner twice on the SCC consecutively without doing any real split, which is unnecessary and likely unintended.
What we really wanted is this:
Inliner -> FunctionSimplificationPipeline -> CoroSplitPass -> FunctionSimplificationPipeline
(note that we don't really need to run Inliner again on the ramp function after split).

Hence the way we do it here is to move CoroSplitPass to the end of the CGSCC pipeline, run it once for real, insert the newly generated SCCs (the clones) back into the pipeline so that they can be optimized, and also add a function simplification pipeline after CoroSplit to optimize the post-split ramp function.

This approach also conforms to how the new pass manager works instead of relying on an ad-hoc post-split cleanup, making it ready for the eventual full switch to the new pass manager.

By looking at some of the changes to the tests, we can already observe that this change allows more optimizations to be applied to coroutines.

Reviewed By: aeubanks, ChuanqiXu

Differential Revision: https://reviews.llvm.org/D95807
2021-06-30 11:38:14 -07:00
David Green
9d69539db4 [ARM] Set the immediate cost of GEP operands to 0
This prevents constant gep operands from being hoisted by the Constant
Hoisting pass, leaving them to CodeGenPrepare, which can usually do a
better job of splitting large offsets. This can, in general, improve
performance and decrease codesize, especially for v6m where many
constants have a high cost.
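The mechanism is the target's immediate-cost query: if GEP index constants are
reported as free, ConstantHoisting leaves them alone. A free-standing sketch of
the idea (illustrative only, not the exact ARM TTI hook touched by this patch):

  #include "llvm/Analysis/TargetTransformInfo.h"
  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  // Cost of an immediate appearing as operand OperandIdx of an instruction
  // with the given opcode. Operand 0 of a GEP is the pointer; the later
  // operands are the indices we want left in place.
  InstructionCost gepImmediateCost(unsigned Opcode, unsigned OperandIdx) {
    if (Opcode == Instruction::GetElementPtr && OperandIdx != 0)
      return TargetTransformInfo::TCC_Free;
    return TargetTransformInfo::TCC_Basic;
  }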

Differential Revision: https://reviews.llvm.org/D104877
2021-06-30 19:19:03 +01:00
Michael Liao
e142d83be3 Fix shared build. 2021-06-30 14:04:16 -04:00
zhijian
a427135857 [AIX][XCOFF][BUG-Fixed] need to switch back to the text section after emitting a dummy eh structure
Summary:

In the patch https://reviews.llvm.org/D103651 ([AIX][XCOFF] generate eh_info when vector registers are saved according to the traceback table), emitting the eh_info switches to another section; once that is done, we need to switch back to the text section again.

Reviewers: Jason Liu
Differential Revision: https://reviews.llvm.org/D105195
2021-06-30 13:56:37 -04:00
Simon Pilgrim
f6f886a614 [X86] Canonicalize SGT/UGT compares with constants to use SGE/UGE to reduce the number of EFLAGS reads. (PR48760)
This demonstrates a possible fix for PR48760 - for compares with constants, canonicalize the SGT/UGT condition code to use SGE/UGE, which should reduce the number of EFLAGS bits we need to read.

As discussed on PR48760, some EFLAGS bits are treated independently, which can require additional uops to merge together for certain CMOVcc/SETcc/etc. modes.

I've limited this to cases where the constant increment doesn't result in a larger encoding or additional i64 constant materializations.
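As a tiny illustration of the identity being used (constants chosen
arbitrarily): for signed i32, x > 9 and x >= 10 compute the same predicate, and
the GE/UGE-style condition codes depend on fewer independent EFLAGS bits than
their GT/UGT counterparts.

  // Same predicate, different condition code; the second form is what the
  // canonicalization produces when the adjusted constant is still cheap.
  bool sgt(int x) { return x > 9; }
  bool sge(int x) { return x >= 10; }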

Differential Revision: https://reviews.llvm.org/D101074
2021-06-30 18:46:50 +01:00
Sanjay Patel
627b46d2d3 [InstCombine] fold icmp of offset value with constant
There must be a better way to describe this pattern in words?
(X + C2) >u C --> X <s -C2 (if C == C2 + SMAX)

This could be extended to handle the more general (non-constant)
pattern too:
https://alive2.llvm.org/ce/z/rdfNFP

  define i1 @src(i8 %a, i8 %c1) {
    %t = add i8 %a, %c1
    %c2 = add i8 %c1, 127 ; SMAX
    %ov = icmp ugt i8 %t, %c2
    ret i1 %ov
  }

  define i1 @tgt(i8 %a, i8 %c1) {
    %neg_c1 = sub i8 0, %c1
    %ov = icmp slt i8 %a, %neg_c1
    ret i1 %ov
  }

The pattern was noticed as a by-product of D104932.
2021-06-30 13:37:31 -04:00
Sanjay Patel
1978151258 [InstCombine][test] add tests for icmp with constant and offset; NFC 2021-06-30 13:37:31 -04:00
Philip Reames
a8c3d333e5 [instcombine] Precommit tests for umin(a,b) ne/eq 0 fold 2021-06-30 10:29:19 -07:00
Philip Reames
884ef85b3f [instcombine] umin(x, 1) == zext(x != 0)
We already implemented this for the select form, but the intrinsic form was missing.  Note that this doesn't change poison behavior as 1 is non-poison, and the optimized form is still poison exactly when x is.
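A minimal sketch of the intrinsic-form fold using the usual PatternMatch
helpers (the function name and plumbing here are hypothetical, not the actual
InstCombine code):

  #include "llvm/IR/IRBuilder.h"
  #include "llvm/IR/IntrinsicInst.h"
  #include "llvm/IR/PatternMatch.h"
  using namespace llvm;
  using namespace llvm::PatternMatch;

  // umin(X, 1) == zext(icmp ne X, 0): both are 0 iff X is 0, and 1 otherwise.
  static Value *foldUMinOfOne(IntrinsicInst &II, IRBuilder<> &Builder) {
    Value *X;
    if (!match(&II, m_Intrinsic<Intrinsic::umin>(m_Value(X), m_One())))
      return nullptr;
    Value *IsNonZero =
        Builder.CreateICmpNE(X, Constant::getNullValue(X->getType()));
    return Builder.CreateZExt(IsNonZero, II.getType());
  }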
2021-06-30 10:20:01 -07:00
Tomas Matheson
fcb829d489 [NPM] Resolve llvmGetPassPluginInfo to the plugin being loaded
Dynamically loaded plugins for the new pass manager are initialised by
calling llvmGetPassPluginInfo. This is defined as a weak symbol so that
it is continually redefined by each plugin that is loaded. When loading
a plugin from a shared library, the intention is that
llvmGetPassPluginInfo will be resolved to the definition in the most
recent plugin. However, using a global search for this resolution can
fail in situations where multiple plugins are loaded.

Currently:

* If a plugin does not define llvmGetPassPluginInfo, then it will be
  silently resolved to the previous plugin's definition.

* If loading the same plugin twice with another in between, e.g. plugin
  A/plugin B/plugin A, then the second load of plugin A will resolve to
  llvmGetPassPluginInfo in plugin B.

* The previous case can also occur when a dynamic library defines both
  NPM and legacy plugins; the legacy plugins are loaded first and then
  with `-fplugin=A -fpass-plugin=B -fpass-plugin=A`: A will be loaded as
  a legacy plugin and define llvmGetPassPluginInfo; B will be loaded
  and redefine it; and finally when A is loaded as an NPM plugin it will
  be resolved to the definition from B.

Instead of searching globally, restrict the symbol lookup to the library
that is currently being loaded.
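For reference, a minimal NPM plugin defines the entry point roughly like this
(plugin name, version string and callback body are placeholders); it is this
per-library symbol that the loader now resolves within the library being
loaded:

  #include "llvm/Passes/PassBuilder.h"
  #include "llvm/Passes/PassPlugin.h"
  #include "llvm/Support/Compiler.h"

  extern "C" ::llvm::PassPluginLibraryInfo LLVM_ATTRIBUTE_WEAK
  llvmGetPassPluginInfo() {
    return {LLVM_PLUGIN_API_VERSION, "MyPlugin", "v0.1",
            [](llvm::PassBuilder &PB) {
              // Register this plugin's pass-builder callbacks here.
            }};
  }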

Differential Revision: https://reviews.llvm.org/D104916
2021-06-30 18:11:28 +01:00
Nico Weber
f604789ef5 [gn build] add dep needed after b56e5f8a10c1e 2021-06-30 12:58:59 -04:00
LLVM GN Syncbot
4addd52a45 [gn build] Port 0c96a92d8666 2021-06-30 15:57:43 +00:00
Jeremy Morse
12bce03eea [LiveDebugValues][InstrRef][1/2] Recover more clobbered variable locations
In various circumstances, when we clobber a register there may be
alternative locations that the value is live in. The classic example would
be a value loaded from the stack, and then clobbered: the value is still
available on the stack. InstrRefBasedLDV was coping with this at block
starts, where it's forced to pick a location; however, it wasn't searching
for alternative locations when values were clobbered.

This patch notifies the "Transfer Tracker" object when clobbers occur, and
it's able to find alternatives and issue DBG_VALUEs for that location. See
the added test.

Differential Revision: https://reviews.llvm.org/D88405
2021-06-30 16:56:25 +01:00
Joseph Huber
de68609f74 [OpenMP] Change analysis remarks to not emit on cold functions
The remarks will trigger on some functions that are marked cold, such as the
`__muldc3` intrinsic functions. Change the remarks to avoid these functions.

Reviewed By: jdoerfert

Differential Revision: https://reviews.llvm.org/D105196
2021-06-30 11:54:24 -04:00
Philip Reames
0b160e9666 [SCEV] Fold (0 udiv %x) to 0
We have analogous rules in instsimplify, etc., but were missing the same in SCEV. The fold is near-trivial, but came up in the context of a larger change.
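A hypothetical sketch of the shape of such a fold, using the public SCEV API
(the wrapper name is assumed, not taken from the patch):

  #include "llvm/Analysis/ScalarEvolution.h"
  using namespace llvm;

  const SCEV *getUDivExprWithZeroFold(ScalarEvolution &SE, const SCEV *LHS,
                                      const SCEV *RHS) {
    // (0 /u %x) is 0 for every %x that makes the division defined.
    if (LHS->isZero())
      return LHS;
    return SE.getUDivExpr(LHS, RHS);
  }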
2021-06-30 08:31:13 -07:00
Philip Reames
7110dfba97 [test] precommit a test for missing (0 /u %x) SCEV fold 2021-06-30 08:26:34 -07:00
Craig Topper
11353cfb39 [ARM] Fix incorrect assignment of Changed variable in MVEGatherScatterLowering::optimiseOffsets.
I believe this Changed flag should be initialized to false;
otherwise the if (!Changed) check is always dead. This doesn't
manifest as a functional issue because the PHINode checks will
fail if nothing changed. They are identical to the earlier
checks that must have already failed to get into this else block.

While there, remove an else after return to reduce indentation.
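The generic shape of the issue (illustrative code, not the pass itself): with
Changed starting as true, the early-exit below could never fire.

  #include "llvm/ADT/SmallVector.h"
  #include "llvm/IR/Instruction.h"
  using namespace llvm;

  bool tryToOptimise(Instruction *I);   // hypothetical helper

  bool optimiseAll(SmallVectorImpl<Instruction *> &Work) {
    bool Changed = false;               // must start as false
    for (Instruction *I : Work)
      Changed |= tryToOptimise(I);
    if (!Changed)
      return false;                     // dead if Changed started as true
    return true;
  }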

Differential Revision: https://reviews.llvm.org/D105159
2021-06-30 07:52:57 -07:00
Louis Dionne
6766b6d75d [lit] Add the ability to parse regexes in Lit boolean expressions
This patch augments Lit with the ability to parse regular expressions
in boolean expressions. This includes REQUIRES:, XFAIL:, UNSUPPORTED:,
and all other special Lit markup that evaluates to a boolean expression.

Regular expressions can be specified by enclosing them in {{...}},
similarly to how FileCheck handles such regular expressions. The regular
expression can either be on its own, or it can be part of an identifier.
For example, a match expression like {{.+}}-apple-darwin{{.+}} would match
the following variables:

     x86_64-apple-darwin20.0
     arm64-apple-darwin20.0
     arm64-apple-darwin22.0
     etc...

In the long term, this could be used to remove the need to handle the
target triple specially when parsing boolean expressions.

Differential Revision: https://reviews.llvm.org/D104572
2021-06-30 10:52:16 -04:00
Simon Pilgrim
3ab40c9e92 [CostModel][X86] Adjust fp<->int vXi32 AVX1+ costs based on llvm-mca reports
Based on the worst-case numbers generated by D103695, the AVX1/2/512 sitofp/uitofp/fptosi/fptoui costs were higher than necessary (based on instruction counts instead of actual throughput).

The SSE costs still need further fixes, but I hit an issue with the order in which SSE costs are checked - we need to check CUSTOM costs (with non-legal types) first, and then fall back to LEGALIZED types. I'm looking at this now, and this should let us start thinning out a lot of the duplicates in the cost tables.

Then we can finally start work on vXi64 / vXi16 / vXi8 / vXi1 integers, which should let us look at sub-128-bit vectorization (D103925).
2021-06-30 15:23:34 +01:00
Nico Weber
ffed507e0b Revert "[Coroutine] Add statistics for the number of elided coroutine"
This reverts commit 1d9539cf49a585e7c3cd8faa1b8e7291e0ce285c.
Test fails in LLVM_ENABLE_ASSERTIONS=OFF builds (such as regular
release builds).
2021-06-30 10:22:45 -04:00
Joseph Huber
58b4d4e578 [OpenMP] Add additional remarks for OpenMPOpt
This patch adds additional remarks, suggesting the use of `noescape` for failed
globalization and indicating when internalization failed.

Reviewed By: jdoerfert

Differential Revision: https://reviews.llvm.org/D105150
2021-06-30 09:49:25 -04:00
Florian Hahn
f945af4cb8 [Matrix] Add tests for hoisting address computations. 2021-06-30 14:31:00 +01:00
alex-t
a236991319 [AMDGPU] PHI node cost should not be counted for the size and latency.
Details: https://reviews.llvm.org/D96805 changed GCNTTIImpl::getCFInstrCost to return 1 for PHI nodes
  for TTI::TCK_CodeSize and TTI::TCK_SizeAndLatency. This is incorrect because the value moves that are the
  result of the PHI lowering are inserted into the basic block predecessors - not into the block itself.
  As a result of this change, LoopRotate and LoopUnroll were broken because of the incorrect loop header
  and loop body size/cost estimation.

Reviewed By: rampitec

Differential Revision: https://reviews.llvm.org/D105104
2021-06-30 16:11:17 +03:00
Bradley Smith
ef187efe53 [TargetLowering][AArch64][SVE] Take into account accessed type when clamping address
When clamping the index for a memory access to a stacked vector we must
take into account the entire type being accessed, not just assume that
we are accessing only a single element.
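Conceptually, the clamp has to use the number of elements the access actually
touches, not one. A free-standing sketch of the arithmetic (not the
TargetLowering code itself):

  #include <algorithm>
  #include <cstdint>

  // Highest starting index at which an access spanning AccessNumElts elements
  // still fits inside a stack slot of VecNumElts elements.
  uint64_t clampVectorIndex(uint64_t Idx, uint64_t VecNumElts,
                            uint64_t AccessNumElts) {
    uint64_t MaxIdx = VecNumElts - AccessNumElts;  // assumes the access fits
    return std::min(Idx, MaxIdx);
  }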

Differential Revision: https://reviews.llvm.org/D105016
2021-06-30 13:30:18 +01:00
Simon Pilgrim
edbe6b43ea Fix MSVC "32-bit shift implicitly converted to 64 bits" warning. 2021-06-30 13:23:53 +01:00
madhur13490
dc2f2451d3 [AMDGPU] Simplify getReservedNumSGPRs
This is a follow-up patch to D103636, where the
checks on amdgpu-calls and amdgpu-stack-objects
seemed unnecessary. Removing these checks didn't
functionally regress any tests.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D104513
2021-06-30 16:19:39 +05:30
Florian Hahn
c934ce7ceb [ConstantRanges] Use APInt for constant case for urem/srem.
Currently UREM & SREM on constant ranges produce overly pessimistic
results for single-element constant ranges.

Delegate to APInt's implementation if both operands are single element
constant ranges. We already do something similar for other binary
operators, like binary AND.
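As a usage-level illustration of the public ConstantRange API (values chosen
arbitrarily), two single-element inputs now yield the exact single-element
result rather than an over-approximation:

  #include "llvm/ADT/APInt.h"
  #include "llvm/IR/ConstantRange.h"
  using namespace llvm;

  void singleElementURem() {
    ConstantRange LHS(APInt(8, 13));    // exactly {13}
    ConstantRange RHS(APInt(8, 5));     // exactly {5}
    ConstantRange Res = LHS.urem(RHS);  // with this change: exactly {3}
    (void)Res;
  }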

Fixes PR49731.

Reviewed By: lebedev.ri

Differential Revision: https://reviews.llvm.org/D105115
2021-06-30 11:18:20 +01:00
David Sherwood
b307f6c209 [NFC] Rename shadowed variable in InnerLoopVectorizer::createInductionVariable
Avoid creating an IRBuilder stack variable with the same name as the
class member.
2021-06-30 11:11:49 +01:00
Florian Mayer
1510d39166 [MTE] Remove redundant helper function.
Looking at PostDominatorTree::dominates, we can see that it has the same
logic (with the addition of handling PHI nodes, which are not used as inputs
in this pass) as the helper function.
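The reused API, for reference (a trivial wrapper around the instruction-level
overload that replaces the pass-local helper):

  #include "llvm/Analysis/PostDominators.h"
  #include "llvm/IR/Instruction.h"
  using namespace llvm;

  // True if A post-dominates B, i.e. every path from B to the function's exit
  // passes through A.
  bool postDominates(const PostDominatorTree &PDT, const Instruction *A,
                     const Instruction *B) {
    return PDT.dominates(A, B);
  }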

Reviewed By: eugenis

Differential Revision: https://reviews.llvm.org/D105141
2021-06-30 11:11:26 +01:00
Jay Foad
558e354c8d [TableGen] Allow identical MnemonicAliases with no predicate
My use case for this is illustrated in the test case: I want to define
the same instruction twice with different (disjoint) predicates, because
the instruction has different operands on different subtargets. It's
convenient to do this with a multiclass that also defines an alias for
the instruction.

Previously tablegen would complain if this alias was defined twice with
no predicate. One way to fix this would be to add a predicate on each
definition of the alias, matching the predicate on the instruction. But
this (a) is slightly awkward to do in the real world use case I had, and
(b) leads to an inefficient matcher that will do something like this:

  if (Mnemonic == "foo_alias") {
    if (Features.test(Feature_Subtarget1Bit))
      Mnemonic = "foo";
    else if (Features.test(Feature_Subtarget2Bit))
      Mnemonic = "foo";
    return;
  }

It would be more efficient to skip the feature tests and return "foo"
unconditionally.

Overall it seems better to allow multiple definitions of the identical
alias with no predicate.

Differential Revision: https://reviews.llvm.org/D105033
2021-06-30 10:53:39 +01:00
Igor Kudrin
ddd200c2a0 [ARMInstPrinter] Print the target address of a branch instruction
This follows other patches that changed printing immediate values of
branch instructions to target addresses, see D76580 (x86), D76591 (PPC),
D77853 (AArch64).

As observing immediate values might sometimes be useful, they are
printed as comments for branch instructions.

// llvm-objdump -d output (before)
000200b4 <_start>:
   200b4: ff ff ff fa   blx     #-4 <thumb>
000200b8 <thumb>:
   200b8: ff f7 fc ef   blx     #-8 <_start>

// llvm-objdump -d output (after)
000200b4 <_start>:
   200b4: ff ff ff fa   blx     0x200b8 <thumb>         @ imm = #-4
000200b8 <thumb>:
   200b8: ff f7 fc ef   blx     0x200b4 <_start>        @ imm = #-8

// GNU objdump -d.
000200b4 <_start>:
   200b4:       faffffff        blx     200b8 <thumb>
000200b8 <thumb>:
   200b8:       f7ff effc       blx     200b4 <_start>

Differential Revision: https://reviews.llvm.org/D104701
2021-06-30 16:35:28 +07:00
Igor Kudrin
8d74201a0c [ARM][NFC] Remove an unused method
`ARMInstPrinter::printMveAddrModeQOperand()` was added in D62680, but
was never used. It looks like `printT2AddrModeImm8Operand<false>()` is
used instead.

Differential Revision: https://reviews.llvm.org/D105124
2021-06-30 15:55:37 +07:00
Sjoerd Meijer
e1bfac08c1 Recommit "[AArch64] Custom lower <4 x i8> loads"
This recommits D104782 including a fix for adding a wrong operand to the new
load node.

Differential Revision: https://reviews.llvm.org/D105110
2021-06-30 09:18:06 +01:00