mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-18 18:42:46 +02:00
Commit Graph

3590 Commits

Author SHA1 Message Date
Luke Cheeseman
9c155d3206 [AArch64] Update MTE system register encodings
The encodings for the system registers TFSRE0_EL1, TFSR_EL1, TFSR_EL2, TFSR_EL3
and TFSR_EL12 have been changed so that they consistently have CRn=5 and CRm=6,
as per https://developer.arm.com/docs/ddi0487/latest.

Differential Revision: https://reviews.llvm.org/D65442

llvm-svn: 369505
2019-08-21 09:09:56 +00:00
Amara Emerson
a1004684f5 [AArch64][GlobalISel] Add support for narrowScalar of G_ZEXT
We do this by merging the source with the high bits set to 0.
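
A minimal sketch of the idea using MachineIRBuilder, much as LegalizerHelper
would do it (the helper name here is illustrative, not necessarily the exact
upstream code):

    #include "llvm/CodeGen/GlobalISel/MachineIRBuilder.h"
    using namespace llvm;

    // Narrow a wide G_ZEXT by materialising the zero high half explicitly:
    //   %dst:_(s64) = G_ZEXT %src:_(s32)
    // becomes
    //   %zero:_(s32) = G_CONSTANT i32 0
    //   %dst:_(s64)  = G_MERGE_VALUES %src:_(s32), %zero:_(s32)
    static void narrowZExtSketch(MachineIRBuilder &B, Register Dst,
                                 Register Src, LLT NarrowTy) {
      auto Zero = B.buildConstant(NarrowTy, 0);
      B.buildMerge(Dst, {Src, Zero.getReg(0)});
    }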

Differential Revision: https://reviews.llvm.org/D66181

llvm-svn: 369480
2019-08-21 00:12:37 +00:00
Jessica Paquette
aa9ac0aaf9 [AArch64][GlobalISel] Select logical_imm32 and logical_imm64 patterns
Add a GlobalISel equivalent for the logical_imm32_XFORM and logical_imm64_XFORM
SDNodeXForms in AArch64InstrFormats.td.

- Add select-logical-imm.mir, which contains tests for each imported pattern.

- Update select-pr32733.mir and select-scalar-shift-imm.mir, since they now
select instructions of this form.

Differential Revision: https://reviews.llvm.org/D66162

llvm-svn: 369465
2019-08-20 22:31:25 +00:00
Jessica Paquette
bc7846dad0 [AArch64][GlobalISel] Select patterns which use shifted register operands
This adds GlobalISel equivalents for the following from AArch64InstrFormats:

- arith_shifted_reg32
- arith_shifted_reg64

And partial support for

- logical_shifted_reg32
- logical_shifted_reg64

The only thing missing for the logical cases is support for rotates. Other than
that, the transformation is identical for the arithmetic shifted register and
the logical shifted register.

Lots of tests here:

- Add select-arith-shifted-reg.mir to show that we correctly select add and
sub instructions which use this pattern.

- Add select-logical-shifted-reg.mir to cover patterns which are not shared
between the arithmetic and logical cases.

- Update addsub-shifted.ll to show that we correctly fold shifts into
adds/subs.

- Update eon.ll to show that we can select the eon instruction by folding xors.

Differential Revision: https://reviews.llvm.org/D66163

llvm-svn: 369460
2019-08-20 22:18:06 +00:00
Evgeniy Stepanov
fddd9d319f Refactor isPointerOffset (NFC).
Summary:
Simplify the API using Optional<> and address comments in
https://reviews.llvm.org/D66165

Reviewers: vitalybuka

Subscribers: hiraditya, llvm-commits, ostannard, pcc

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D66317

llvm-svn: 369300
2019-08-19 21:08:04 +00:00
Evgeniy Stepanov
768d8b571a MemTag: stack initializer merging.
Summary:
MTE provides instructions to update memory tags and data at the same
time. This change makes use of those to generate more compact code for
stack variable tagging + initialization.

We collect memory store and memset instructions following an alloca or a
lifetime.start call, and replace them with the corresponding MTE
intrinsics. Since the intrinsics work on 16-byte aligned chunks, the
stored values are combined as necessary.

Reviewers: pcc, vitalybuka, ostannard

Subscribers: srhines, javed.absar, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D66167

llvm-svn: 369297
2019-08-19 20:47:09 +00:00
Serge Guelton
d9db87717d [nfc] Silent gcc warning
llvm-svn: 369266
2019-08-19 14:40:33 +00:00
Paul Walker
699197ed9f Revert Revert [AArch64InstrInfo] Stop getInstSizeInBytes returning non-zero for meta instructions.
This reverts r369132 (git commit 19301d75f086caae1a495d267f5d0264b225942d)

llvm-svn: 369186
2019-08-17 09:22:36 +00:00
Paul Walker
6baf63e72c Revert [AArch64InstrInfo] Stop getInstSizeInBytes returning non-zero for meta instructions.
This reverts r369133 (git commit 2632c677f85cba1ac2aef5d68aaf8af0f5b3c944)

llvm-svn: 369185
2019-08-17 09:22:28 +00:00
Amara Emerson
2d9a3e9605 [AArch64][GlobalISel] Fix an assertion during G_UNMERGE selection for s128 types.
llvm-svn: 369172
2019-08-16 23:23:40 +00:00
Paul Walker
e81862697b [AArch64InstrInfo] Stop getInstSizeInBytes returning non-zero for meta instructions.
Recommit with fixes for mac builders.

Summary:
AArch64InstrInfo::getInstSizeInBytes is incorrectly treating meta
instructions (e.g. CFI_INSTRUCTION) as normal instructions and
giving them a size of 4.

This results in branch relaxation calculating block sizes wrong.
Branch relaxation also considers alignment and thus a single
mistake can result in later blocks being incorrectly sized even
when they themselves do not contain meta instructions.

The net result is we might not relax a branch whose destination is
not within range.
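
A minimal sketch of the kind of guard this implies (not necessarily the exact
upstream change):

    #include "llvm/CodeGen/MachineInstr.h"

    // Meta instructions emit no machine code, so they must not contribute to
    // the byte counts that branch relaxation works with.
    unsigned instSizeInBytesSketch(const llvm::MachineInstr &MI) {
      if (MI.isMetaInstruction())
        return 0; // e.g. CFI_INSTRUCTION, DBG_VALUE, IMPLICIT_DEF
      // Ordinary AArch64 instructions are 4 bytes; the real hook also
      // special-cases pseudos and inline asm.
      return 4;
    }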

Reviewers: nickdesaulniers, peter.smith

Reviewed By: peter.smith

Subscribers: javed.absar, kristof.beyls, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D66337

> llvm-svn: 369111

llvm-svn: 369133
2019-08-16 17:29:53 +00:00
Paul Walker
3f769ba981 Revert [AArch64InstrInfo] Stop getInstSizeInBytes returning non-zero for meta instructions.
This reverts r369111 (git commit 3ccee5f7c4087ed119dbeba537f3df1b048a4dff)

llvm-svn: 369132
2019-08-16 17:29:42 +00:00
Sander de Smalen
e877a2fb7c Relanding r368987 [AArch64] Change location of frame-record within callee-save area.
Changes:
There was a condition for `!NeedsFrameRecord` missing in the assert. The
assert in question has changed to:

+    assert((!RPI.isPaired() || !NeedsFrameRecord || RPI.Reg2 != AArch64::FP ||
+            RPI.Reg1 == AArch64::LR) &&
+           "FrameRecord must be allocated together with LR");

This addresses PR43016.

llvm-svn: 369122
2019-08-16 15:42:28 +00:00
Paul Walker
812a84f6cc [AArch64InstrInfo] Stop getInstSizeInBytes returning non-zero for meta instructions.
Summary:
AArch64InstrInfo::getInstSizeInBytes is incorrectly treating meta
instructions (e.g. CFI_INSTRUCTION) as normal instructions and
giving them a size of 4.

This results in branch relaxation calculating block sizes wrong.
Branch relaxation also considers alignment and thus a single
mistake can result in later blocks being incorrectly sized even
when they themselves do not contain meta instructions.

The net result is we might not relax a branch whose destination is
not within range.

Reviewers: nickdesaulniers, peter.smith

Reviewed By: peter.smith

Subscribers: javed.absar, kristof.beyls, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D66337

llvm-svn: 369111
2019-08-16 14:17:52 +00:00
Nico Weber
8e559d6c8a Revert r368987, it caused PR43016.
llvm-svn: 369080
2019-08-16 02:21:21 +00:00
Philip Reames
3fe9376581 [SDAG] Minor code cleanup/standardization of atomic accessors [NFC]
llvm-svn: 369057
2019-08-15 22:21:14 +00:00
Evgeniy Stepanov
ce5f1f9209 Add missing MIR serialization text for AArch64II::MO_TAGGED.
Reviewers: pcc

Subscribers: javed.absar, kristof.beyls, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D66312

llvm-svn: 369053
2019-08-15 22:03:55 +00:00
Jonas Devlieghere
2c693415b7 [llvm] Migrate llvm::make_unique to std::make_unique
Now that we've moved to C++14, we no longer need the llvm::make_unique
implementation from STLExtras.h. This patch is a mechanical replacement
of (hopefully) all the llvm::make_unique instances across the monorepo.
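
An illustrative before/after (my example, not taken from the patch):

    #include <memory>
    #include <string>

    // Before (LLVM's own helper from STLExtras.h, needed only for C++11):
    //   auto S = llvm::make_unique<std::string>("hello");
    // After (standard facility, available since C++14):
    auto S = std::make_unique<std::string>("hello");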

llvm-svn: 369013
2019-08-15 15:54:37 +00:00
Sander de Smalen
b7a99b0894 [AArch64] Change location of frame-record within callee-save area.
This patch changes the location of the frame-record (FP, LR) to the 
bottom of the callee-saved area. According to the AAPCS the location of
the frame-record within the stackframe is unspecified (section 5.2.3 The 
Frame Pointer), so the compiler should be free to choose a different
location.

The reason for changing the location of the frame-record is to prepare
the frame for allocating an SVE area below the callee-saves. This way the 
compiler can use the VL-scaled addressing modes to directly access SVE 
objects from the frame-pointer.

            :                :   
        | stack |        | stack |
        |  args |        |  args |
        +-------+        +-------+
        |  x30  |        |  x19  |
        |  x29  |        |  x20  |
  FP -> |- - - -|        |  x21  |
        |  x19  |   ==>  |  x22  |
        |  x20  |        |- - - -|
        |  x21  |        |  x30  |
        |  x22  |        |  x29  |
        +-------+        +-------+ <- FP
        |///////|        |///////|         // realignment gap 
        |- - - -|        |- - - -|
        |spills/|        |spills/|
        | locals|        | locals|
  SP -> +-------+        +-------+ <- SP

Things to point out:
- The algorithm to find a paired register should be prevented from
  accidentally pairing LR with a callee-saved register other than FP,
  since FP and LR should always be paired together when the frame
  has a frame-record.
- For Darwin platforms the location of the frame-record is unchanged,
  since the unwind encoding does not allow for encoding this position
  dynamically and other tools currently depend on the former layout. 

Reviewers: efriedma, rovka, rengolin, thegameg, greened, t.p.northover

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D65653

llvm-svn: 368987
2019-08-15 10:34:16 +00:00
Amara Emerson
c97843653b [AArch64][GlobalISel] Custom selection for s8 load acquire.
Implement this single atomic load instruction so that we can compile stack
protector code.

Differential Revision: https://reviews.llvm.org/D66245

llvm-svn: 368923
2019-08-14 21:30:30 +00:00
Amara Emerson
8617c6d207 [AArch64][GlobalISel] RBS: Treat s128s like vectors when unmerging.
The destinations should be FPRs (for now).

Differential Revision: https://reviews.llvm.org/D66184

llvm-svn: 368775
2019-08-13 23:51:20 +00:00
Eli Friedman
6b0256e657 [AArch64] Remove incorrect usage of MONonTemporal.
This has no effect at the moment, but might matter if we try to
implement non-temporal loads in the future.

llvm-svn: 368770
2019-08-13 23:12:14 +00:00
Matt Arsenault
284e8e1c63 GlobalISel: Change representation of shuffle masks
Currently shuffle masks get emitted like any other constant, and you end
up with a bunch of G_CONSTANT virtual registers feeding a
G_BUILD_VECTOR. The AArch64 selector then asserts on anything that
doesn't fit this pattern. This isn't an ideal representation; tracking the
mask directly avoids legalization and leaves fewer opportunities for a
representational error.

Rather than invent a new shuffle mask operand type, similar to what
ShuffleVectorSDNode does, just track the original IR Constant mask
operand. I don't completely like the idea of adding another link to
the IR, but MIR is already quite dependent on IR constants, and this
will allow sharing the shuffle mask utility functions with the IR.

llvm-svn: 368704
2019-08-13 15:34:38 +00:00
Amara Emerson
94dad7bb1c [AArch64][GlobalISel] Replace explicit vreg creation with implicit using SrcOp. NFC.
llvm-svn: 368653
2019-08-13 06:55:32 +00:00
Amara Emerson
13cefd7cf8 [GlobalISel] Make the InstructionSelector instance non-const, allowing state to be maintained.
Currently we can't keep any state in the selector object that we get from
the subtarget. As a result we have to plumb all our variables through
multiple functions. This change makes it non-const and adds a virtual init()
method to allow further state to be captured for each target.

AArch64 makes use of this in this patch to cache a call to hasFnAttribute()
which is expensive to call, and is used on each selection of G_BRCOND.
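
A hedged sketch of the pattern this enables; the method and member names below
are illustrative, and which attribute gets cached is an assumption based on the
G_BRCOND mention above:

    #include "llvm/CodeGen/GlobalISel/InstructionSelector.h"
    #include "llvm/CodeGen/MachineFunction.h"
    #include "llvm/IR/Function.h"
    using namespace llvm;

    class ExampleSelector : public InstructionSelector {
      // State that used to be re-queried on every G_BRCOND selection can now
      // be cached once per function, since the selector is no longer const.
      bool ProduceNonFlagSettingCondBr = false;

    public:
      // Hypothetical per-function hook corresponding to the virtual init()
      // described above.
      void init(MachineFunction &MF) {
        ProduceNonFlagSettingCondBr = !MF.getFunction().hasFnAttribute(
            Attribute::SpeculativeLoadHardening);
      }
    };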

Differential Revision: https://reviews.llvm.org/D65984

llvm-svn: 368652
2019-08-13 06:26:59 +00:00
Daniel Sanders
4bbcbcaf46 [aarch64] Apply llvm-prefer-register-over-unsigned from clang-tidy to LLVM
Summary:
This clang-tidy check is looking for unsigned integer variables whose initializer
starts with an implicit cast from llvm::Register and changes the type of the
variable to llvm::Register (dropping the llvm:: where possible).
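
An illustrative before/after for the kind of change the check performs:

    #include "llvm/CodeGen/MachineInstr.h"
    #include "llvm/CodeGen/Register.h"
    using namespace llvm;

    void example(const MachineInstr &MI) {
      // Before: the implicit Register -> unsigned conversion hides the type.
      //   unsigned DstReg = MI.getOperand(0).getReg();
      // After: keep the value in its proper wrapper type.
      Register DstReg = MI.getOperand(0).getReg();
      (void)DstReg;
    }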

Manual fixups in:
AArch64InstrInfo.cpp - genFusedMultiply() now takes a Register* instead of unsigned*
AArch64LoadStoreOptimizer.cpp - Ternary operator was ambiguous between Register/MCRegister. Settled on Register

Depends on D65919

Reviewers: aemerson

Subscribers: jholewinski, MatzeB, qcolombet, dschuff, jyknight, dylanmckay, sdardis, nemanjai, jvesely, wdng, nhaehnle, sbc100, jgravelle-google, kristof.beyls, hiraditya, aheejin, kbarton, fedor.sergeev, javed.absar, asb, rbar, johnrusso, simoncook, apazos, sabuasal, niosHD, jrtc27, MaskRay, zzheng, edward-jones, atanasyan, rogfer01, MartinMosbeck, brucehoult, the_o, tpr, PkmX, jocewei, jsji, Petar.Avramovic, asbirlea, Jim, s.egerton, llvm-commits

Tags: #llvm

Differential Revision for full review was: https://reviews.llvm.org/D65962

llvm-svn: 368628
2019-08-12 22:40:53 +00:00
Daniel Sanders
233ed83478 [globalisel] Add G_SEXT_INREG
Summary:
Targets often have instructions that can sign-extend certain cases faster
than the equivalent shift-left/arithmetic-shift-right. Such cases can be
identified by matching a shift-left/shift-right pair but there are some
issues with this in the context of combines. For example, suppose you can
sign-extend 8-bit up to 32-bit with a target extend instruction.
  %1:_(s32) = G_SHL %0:_(s32), i32 24 # (I've inlined the G_CONSTANT for brevity)
  %2:_(s32) = G_ASHR %1:_(s32), i32 24
  %3:_(s32) = G_ASHR %2:_(s32), i32 1
would reasonably combine to:
  %1:_(s32) = G_SHL %0:_(s32), i32 24
  %2:_(s32) = G_ASHR %1:_(s32), i32 25
which no longer matches the special case. If your shifts and extend are
equal cost, this would break even as a pair of shifts but if your shift is
more expensive than the extend then it's cheaper as:
  %2:_(s32) = G_SEXT_INREG %0:_(s32), i32 8
  %3:_(s32) = G_ASHR %2:_(s32), i32 1
It's possible to match the shift-pair in ISel and emit an extend and ashr.
However, this is far from the only way to break this shift pair and make
it hard to match the extends. Another example is that with the right
known-zeros, this:
  %1:_(s32) = G_SHL %0:_(s32), i32 24
  %2:_(s32) = G_ASHR %1:_(s32), i32 24
  %3:_(s32) = G_MUL %2:_(s32), i32 2
can become:
  %1:_(s32) = G_SHL %0:_(s32), i32 24
  %2:_(s32) = G_ASHR %1:_(s32), i32 23

All upstream targets have been configured to lower it to the current
G_SHL, G_ASHR pair, but will likely want to make it legal in some cases to
handle their faster cases.

To follow-up: Provide a way to legalize based on the constant. At the
moment, I'm thinking that the best way to achieve this is to provide the
MI in LegalityQuery but that opens the door to breaking core principles
of the legalizer (legality is not context sensitive). That said, it's
worth noting that looking at other instructions and acting on that
information doesn't violate this principle in itself. It's only a
violation if, at the end of legalization, a pass that checks legality
without being able to see the context would say an instruction might not be
legal. That's a fairly subtle distinction so to give a concrete example,
saying %2 in:
  %1 = G_CONSTANT 16
  %2 = G_SEXT_INREG %0, %1
is legal is in violation of that principle if the legality of %2 depends
on %1 being constant and/or being 16. However, legalizing to either:
  %2 = G_SEXT_INREG %0, 16
or:
  %1 = G_CONSTANT 16
  %2:_(s32) = G_SHL %0, %1
  %3:_(s32) = G_ASHR %2, %1
depending on whether %1 is constant and 16 does not violate that principle
since both outputs are genuinely legal.

Reviewers: bogner, aditya_nandakumar, volkan, aemerson, paquette, arsenm

Subscribers: sdardis, jvesely, wdng, nhaehnle, rovka, kristof.beyls, javed.absar, hiraditya, jrtc27, atanasyan, Petar.Avramovic, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D61289

llvm-svn: 368487
2019-08-09 21:11:20 +00:00
Pablo Barrio
96fed620d2 [AArch64] Set pref. func. align to 8 bytes on Neoverse E1 & Cortex-A65
Summary:
The Arm Neoverse E1 and Cortex-A65 Software Optimization Guide [1][2],
Section "4.7 Branch instruction alignment" state:

"It is preferable for branch targets, including subroutine entry points,
to be placed on aligned 64-bit boundaries to maximize instruction fetch
efficiency."

This patch sets the preferred function alignment on Neoverse E1 and
Cortex-A65 to 2^3=8B. This was already the case in some Cortex-A CPUs
such as Cortex-A53.

[1] https://developer.arm.com/docs/swog466751/latest/arm-neoversetm-e1-core-software-optimization-guide
[2] https://developer.arm.com/docs/swog010045/latest/arm-cortex-a65-core-software-optimization-guide

Reviewers: dmgreen, fhahn, samparker

Subscribers: javed.absar, kristof.beyls, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65937

llvm-svn: 368431
2019-08-09 11:05:15 +00:00
Tim Northover
c802973c73 AArch64: support TLS on Darwin platforms in GlobalISel.
All TLS access on Darwin is in the "general dynamic" form where we call
a function to resolve the address, so implementation is pretty simple.

llvm-svn: 368418
2019-08-09 09:32:38 +00:00
Tim Northover
ce0a779ded GlobalISel: pack various parameters for lowerCall into a struct.
I've now needed to add an extra parameter to this call twice recently. Not only
is the signature getting extremely unwieldy, but just updating all of the
callsites and implementations is a pain. Putting the parameters in a struct
sidesteps both issues.
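
A rough sketch of the shape such a parameter struct takes; the field names
below are illustrative rather than the exact upstream definition:

    #include "llvm/IR/CallingConv.h"

    // Bundling the call description means adding a new field later does not
    // force a signature change on every target's lowerCall override.
    struct CallLoweringInfoSketch {
      llvm::CallingConv::ID CallConv = llvm::CallingConv::C;
      bool IsVarArg = false;
      bool IsTailCall = false;
      // The callee, the return-value description and the argument
      // descriptions would live here as well in the real struct.
    };

    // bool lowerCall(MachineIRBuilder &MIRBuilder, CallLoweringInfoSketch &Info);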

llvm-svn: 368408
2019-08-09 08:26:38 +00:00
Pirama Arumuga Nainar
e1c7e683da [AArch64] Do not emit '#' before immediates in inline asm
Summary:
The A64 assembly language does not require the '#' character to
introduce constant immediate operands.  Avoid the '#' since the AArch64
asm parser does not accept '#' before the lane specifier and rejects the
following:
  __asm__ ("fmla v2.4s, v0.4s, v1.s[%0]" :: "I"(0x1))

Fix a test to not expect the '#' and add a new test case with the above
asm.

Fixes: https://github.com/android-ndk/ndk/issues/1036

Reviewers: peter.smith, kristof.beyls

Subscribers: javed.absar, hiraditya, llvm-commits, srhines

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65550

llvm-svn: 368320
2019-08-08 17:50:39 +00:00
Sander de Smalen
315ff30388 [AArch64][WinCFI] Do not pair callee-save instructions in LoadStoreOptimizer
Prevent the LoadStoreOptimizer from pairing any load/store instructions with
instructions from the prologue/epilogue if the CFI information has encoded the
operations as separate instructions. This would otherwise lead to a mismatch
between the actual prologue size and the size recorded in the Windows CFI.

Reviewers: efriedma, mstorsjo, ssijaric

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D65817

llvm-svn: 368164
2019-08-07 12:41:38 +00:00
Aditya Nandakumar
c0251c505e [GISel]: Add GISelKnownBits analysis
https://reviews.llvm.org/D65698

This adds a KnownBits analysis pass for GISel. This was done as a
pass (compared to static functions) so that we can add other features
such as caching queries (within a pass and across passes) in the future.
This patch only adds the basic pass boilerplate, and implements a lazy,
non-caching knownbits implementation (ported from SelectionDAG). I've
also hooked up the AArch64PreLegalizerCombiner pass to use this - there
should be no compile-time regression as the analysis is lazy.
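
A hedged usage sketch of the analysis as described (a lazy, query-based port of
SelectionDAG's computeKnownBits); exact accessor names may differ from the
patch:

    #include "llvm/CodeGen/GlobalISel/GISelKnownBits.h"
    #include "llvm/Support/KnownBits.h"
    using namespace llvm;

    // Inside a combiner, ask for the known bits of a virtual register; the
    // result is computed on demand and, in this first version, not cached.
    bool highBitsKnownZero(GISelKnownBits &KB, Register Reg, unsigned N) {
      KnownBits Known = KB.getKnownBits(Reg);
      return Known.countMinLeadingZeros() >= N;
    }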

llvm-svn: 368065
2019-08-06 17:18:29 +00:00
Daniel Sanders
b29aefd602 [globalisel] Allow SrcOp to convert an APInt and render it as an immediate operand (MO.isImm() == true)
Summary:
This is tested by D61289 but has been pulled into a separate patch at
a reviewers request.

Reviewers: bogner, aditya_nandakumar, volkan, aemerson, paquette, arsenm, rovka

Reviewed By: arsenm

Subscribers: javed.absar, hiraditya, wdng, kristof.beyls, Petar.Avramovic, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D61321

llvm-svn: 368063
2019-08-06 17:16:27 +00:00
Sander de Smalen
ea40cc202d [AArch64] NFC: Generalize emitFrameOffset to support more than byte offsets.
Refactor emitFrameOffset to accept a StackOffset struct as its offset argument.
This method currently only supports byte offsets (MVT::i8) but will be extended
in a later patch to support scalable offsets (MVT::nxv1i8) as well.

Reviewers: thegameg, rovka, t.p.northover, efriedma, greened

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D61436

llvm-svn: 368049
2019-08-06 15:06:31 +00:00
Tim Northover
e4e2d95f9d AArch64: bail instead of asserting on unexpected type in G_CONSTANT 0.
llvm-svn: 368031
2019-08-06 13:34:08 +00:00
Sander de Smalen
dad288658b [AArch64] NFC: Add generic StackOffset to describe scalable offsets.
To support spilling/filling of scalable vectors we need a more generic
representation of a stack offset than simply 'int'.

For this we introduce the StackOffset struct, which comprises multiple
offsets sized by their respective MVTs. Byte-offsets will thus be a simple
tuple such as { offset, MVT::i8 }. Adding two byte-offsets will result in a
byte offset { offsetA + offsetB, MVT::i8 }. When two offsets have different
types, we can canonicalise them to use the same MVT, as long as their
runtime sizes are guaranteed to have the same size-ratio as they would have
at compile-time.

When we have both scalable- and fixed-size objects on the stack, we can 
create an offset that is: 

  ({ offset_fixed, MVT::i8 } + { offset_scalable, MVT::nxv1i8 })

The struct also contains a getForFrameOffset() method that is specific to
AArch64 and decomposes the frame-offset to be used directly in instructions
that operate on the stack or index into the stack.
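
A self-contained toy model of the design described above (not the actual
AArch64 class), just to make the byte/scalable-byte split concrete:

    #include <cstdint>

    // Toy StackOffset: a fixed byte part plus a part measured in scalable
    // bytes (MVT::nxv1i8 units, i.e. multiples of the runtime vector length).
    struct ToyStackOffset {
      int64_t Bytes = 0;         // the { offset, MVT::i8 } component
      int64_t ScalableBytes = 0; // the { offset, MVT::nxv1i8 } component

      ToyStackOffset operator+(const ToyStackOffset &O) const {
        return {Bytes + O.Bytes, ScalableBytes + O.ScalableBytes};
      }
    };

    // { 16, MVT::i8 } + { 2, MVT::nxv1i8 }: the two components stay separate,
    // and only the fixed part is known at compile time.
    ToyStackOffset Total = ToyStackOffset{16, 0} + ToyStackOffset{0, 2};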

Note: This patch adds StackOffset as an AArch64-only concept, but we would
like to make this a generic concept/struct that is supported by all 
interfaces that take or return stack offsets (currently as 'int'). Since
that would be a bigger change that is currently pending on D32530 landing,
we thought it makes sense to first show/prove the concept in the AArch64
target before proposing to roll this out further.

Reviewers: thegameg, rovka, t.p.northover, efriedma, greened

Reviewed By: rovka, greened

Differential Revision: https://reviews.llvm.org/D61435

llvm-svn: 368024
2019-08-06 13:06:40 +00:00
Tim Northover
ee608c40a5 AArch64: use xzr/wzr for constant 0 in GlobalISel.
COPYs from xzr and wzr can often be folded away entirely during register
allocation, unlike a movz.

llvm-svn: 368003
2019-08-06 09:18:41 +00:00
Amara Emerson
9cf5e68300 [GlobalISel][CallLowering] Rename isArgumentHandler() -> isIncomingArgumentHandler()
The previous name and comment incorrectly implied it was just for formal
argument handlers.

llvm-svn: 367945
2019-08-05 23:05:28 +00:00
Amara Emerson
71886ba279 [AArch64][GlobalISel] Inline tiny memcpy et al at -O0.
FastISel has done this since the initial arm64 port was upstreamed, so
there seem to be no issues with doing this at -O0 for very small memcpys.

Gives a 0.2% geomean code size improvement on CTMark.

Differential Revision: https://reviews.llvm.org/D65758

llvm-svn: 367919
2019-08-05 20:02:52 +00:00
Evandro Menezes
66a85dd4d4 [AArch64] Expand bcmp() for small block lengths
Patch D56593 by @courbet results in calls to `bcmp()` in some cases, should
the target support it, unless `TTI::MemCmpExpansionOptions()` is overridden
by the target.

In a proprietary benchmark we see a performance drop of about 12% on PNG
compression before this patch, though it passes all tests.

This patch mirrors X86 for AArch64 and initializes
`TTI::MemCmpExpansionOptions()` to then expand calls to `bcmp()` when
appropriate.  No tuning of the parameters was performed, but, at this point,
it's enough to recover the performance drop above.
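
A hedged sketch of what "mirrors X86" amounts to in the TTI hook; the values
below are placeholders, not the tuned parameters from the patch:

    #include "llvm/Analysis/TargetTransformInfo.h"
    using namespace llvm;

    // Returning a populated MemCmpExpansionOptions lets the ExpandMemCmp pass
    // inline small memcmp()/bcmp() calls instead of leaving a libcall.
    TTI::MemCmpExpansionOptions exampleOptions() {
      TTI::MemCmpExpansionOptions Options;
      Options.MaxNumLoads = 4;             // placeholder, not the tuned value
      for (unsigned Size : {8u, 4u, 2u, 1u})
        Options.LoadSizes.push_back(Size); // legal scalar load widths
      return Options;
    }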

This problem also exists on ARM.  Once a consensus is reached for AArch64, we
can work to fix ARM as well.

Authors:
- Evandro Menezes (@evandro) <e.menezes@samsung.com>
- Brian Rzycki (@brzycki) <b.rzycki@samsung.com>

Differential revision: https://reviews.llvm.org/D64805

llvm-svn: 367898
2019-08-05 18:09:14 +00:00
Pablo Barrio
7fb3a65624 [AArch64] Set preferred function alignment to 16 bytes on Neoverse N1
Summary:
The Arm Neoverse N1 Software Optimization Guide [1], Section "4.8 Branch
instruction alignment" states:

"Consider aligning subroutine entry points and branch targets to 32B
boundaries, within the bounds of the code-density requirements of the
program."

This patch sets the preferred function alignment on Neoverse N1 to 2^4=16B.
This was already the case in some of the latest Cortex-A CPUs. Benchmarking
in previous Cortex-A CPUs suggested that 16B alignment is already better
than the default. See commit d04ee305.

The reason we don't set it to 32B right now (as the optimisation guide
suggests) is that this will impact code size and perhaps the instruction
cache performance. Therefore we need benchmark numbers first.

I have also added testing for A75 and A76 that we were missing.

[1] https://developer.arm.com/docs/swog309707/latest

Reviewers: fhahn, greened, samparker, dmgreen

Reviewed By: dmgreen

Subscribers: dmgreen, javed.absar, kristof.beyls, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65654

llvm-svn: 367894
2019-08-05 17:38:58 +00:00
Cullen Rhodes
87883e84e2 [AArch64] Implement initial SVE calling convention support
Summary:

This patch adds initial support for the SVE calling convention such that
SVE types can be passed as arguments and return values to/from a
subroutine.

The SVE AAPCS states [1]:

    z0-z7 are used to pass scalable vector arguments to a subroutine,
    and to return scalable vector results from a function. If a
    subroutine takes arguments in scalable vector or predicate
    registers, or if it is a function that returns results in such
    registers, it must ensure that the entire contents of z8-z23 are
    preserved across the call. In other cases it need only preserve the
    low 64 bits of z8-z15, as described in §5.1.2.

    p0-p3 are used to pass scalable predicate arguments to a subroutine
    and to return scalable predicate results from a function. If a
    subroutine takes arguments in scalable vector or predicate
    registers, or if it is a function that returns results in these
    registers, it must ensure that p4-p15 are preserved across the call.
    In other cases it need not preserve any scalable predicate register
    contents.

SVE predicate and data registers are passed indirectly (i.e. spilled to the
stack and the address passed) if they exceed the registers used for argument
passing defined by the PCS referenced above.  Until SVE stack support is merged
we can't spill SVE registers to the stack, so currently an llvm_unreachable is
used where we will eventually handle this.

[1] https://static.docs.arm.com/100986/0000/100986_0000.pdf

Reviewed By: ostannard

Differential Revision: https://reviews.llvm.org/D65448

llvm-svn: 367859
2019-08-05 13:44:10 +00:00
Florian Hahn
625dcd33f6 [AArch64] Skip isZIPMask check for masks with an odd number of elements.
We process 2 elements at a time and expect the number of elements to be
even. Similar to D60690.

Reviewers: dmgreen, samparker, t.p.northover

Reviewed By: dmgreen

Differential Revision: https://reviews.llvm.org/D65400

llvm-svn: 367831
2019-08-05 11:12:23 +00:00
Guillaume Chatelet
cc3dd960fc [LLVM][Alignment] Introduce Alignment Type
Summary:
This patch is part of a series to introduce an Alignment type.
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
See this patch for the introduction of the type: https://reviews.llvm.org/D64790

Reviewers: courbet, jfb, jakehehrlich

Reviewed By: jfb

Subscribers: wuzish, jholewinski, arsenm, dschuff, nemanjai, jvesely, nhaehnle, javed.absar, sbc100, jgravelle-google, hiraditya, aheejin, kbarton, asb, rbar, johnrusso, simoncook, apazos, sabuasal, niosHD, jrtc27, MaskRay, zzheng, edward-jones, rogfer01, MartinMosbeck, brucehoult, the_o, dexonsmith, PkmX, jocewei, jsji, s.egerton, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65514

llvm-svn: 367828
2019-08-05 11:02:05 +00:00
Bill Wendling
fa94d17420 Emit diagnostic if an inline asm constraint requires an immediate
Summary:
An inline asm operand may only become an immediate after inlining. Therefore
emit a diagnostic here if a constraint requires an immediate but one isn't
supplied.
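
An illustrative source-level example of the situation (hypothetical code, not
taken from the patch):

    // "i" requires the operand to fold to a compile-time immediate. When the
    // argument is constant after inlining this works; when it isn't, the
    // backend now reports a diagnostic instead of crashing.
    static inline void needs_imm(int value) {
      __asm__ volatile("" :: "i"(value));
    }

    void ok()       { needs_imm(42); } // constant after inlining: accepted
    void bad(int x) { needs_imm(x);  } // not constant: diagnostic emitted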

Reviewers: joerg, mgorny, efriedma, rsmith

Reviewed By: joerg

Subscribers: asb, rbar, johnrusso, simoncook, apazos, sabuasal, niosHD, zzheng, edward-jones, rogfer01, MartinMosbeck, brucehoult, the_o, PkmX, jocewei, s.egerton, MaskRay, jyknight, dylanmckay, javed.absar, fedor.sergeev, jrtc27, Jim, krytarowski, eraman, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D60942

llvm-svn: 367750
2019-08-03 05:52:47 +00:00
Amara Emerson
d894b5f8e3 Re-commit "[GlobalISel] Add legalization support for non-power-2 loads and stores"
This is an old commit that exposed a bug in the GISel importer, which caused
non-truncating stores to be selected for truncating store patterns. Now that
the importer bug has been fixed in r367737, this can go back in.

llvm-svn: 367739
2019-08-02 23:44:24 +00:00
Amara Emerson
785f4c1ed2 [AArch64][GlobalISel] Eliminate redundant G_ZEXT when the source is implicitly zext-loaded.
These cases can come up when the extending loads combiner doesn't combine a
zext(load) to a zextload op, due to some other operation being in between, which
then gets simplified at a later stage.

Differential Revision: https://reviews.llvm.org/D65360

llvm-svn: 367723
2019-08-02 21:15:36 +00:00
Jessica Paquette
7f4d840652 [AArch64][GlobalISel] Support the neg_addsub_shifted_imm32 pattern
Add an equivalent ComplexRendererFns function for SelectNegArithImmed. This
allows us to select immediate adds of -1 by turning them into subtracts.
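
A hedged sketch of the immediate test this involves, mirroring what
SelectNegArithImmed does on the SelectionDAG side; the helper name and exact
legality checks are illustrative:

    #include <cstdint>
    #include <optional>

    // If an add's immediate is negative but its negation fits the AArch64
    // 12-bit (optionally LSL #12 shifted) arithmetic immediate, a subtract of
    // the negated value can be selected instead:
    //   add w0, w1, #-1   ==>   sub w0, w1, #1
    std::optional<uint64_t> negArithImmed(int64_t Imm) {
      if (Imm >= 0 || Imm == INT64_MIN)
        return std::nullopt;
      uint64_t Neg = static_cast<uint64_t>(-Imm);
      if (Neg <= 0xfff)                              // plain 12-bit immediate
        return Neg;
      if ((Neg & 0xfff) == 0 && (Neg >> 12) <= 0xfff) // 12-bit imm, LSL #12
        return Neg;
      return std::nullopt;
    }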

Update select-binop.mir to show that the pattern works.

Differential Revision: https://reviews.llvm.org/D65460

llvm-svn: 367700
2019-08-02 18:12:53 +00:00
Tim Northover
2dffc88cad GlobalISel: support swiftself attribute
llvm-svn: 367683
2019-08-02 14:09:49 +00:00