//===---------------------------------------------------------------------===//
// Random notes about and ideas for the SystemZ backend.
//===---------------------------------------------------------------------===//

The initial backend is deliberately restricted to z10.  We should add support
for later architectures at some point.

--

If an inline asm ties an i32 "r" result to an i64 input, the input
will be treated as an i32, leaving the upper bits uninitialised.
For example:

define void @f4(i32 *%dst) {
  %val = call i32 asm "blah $0", "=r,0" (i64 103)
  store i32 %val, i32 *%dst
  ret void
}

from CodeGen/SystemZ/asm-09.ll will use LHI rather than LGHI
to load 103.  This seems to be a general target-independent problem.

--

The tuning of the choice between LOAD ADDRESS (LA) and addition in
SystemZISelDAGToDAG.cpp is suspect.  It should be tweaked based on
performance measurements.
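
As a concrete (made-up) example, an address computation of this shape
can be selected either as an LA or as a plain 64-bit addition, and the
choice is currently made without performance data behind it:

define i8* @f(i8 *%base, i64 %index) {
  %ptr = getelementptr i8, i8 *%base, i64 %index
  ret i8* %ptr
}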

--

There is no scheduling support.

--

We don't use the BRANCH ON INDEX instructions.
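
For reference, a (made-up) counted loop like the one below has the
increment / compare-with-limit / branch pattern that BRANCH ON INDEX
(BXH, BXLE and their 64-bit and relative forms) can cover in a single
instruction:

define void @f(i32 *%dst, i64 %n) {
entry:
  br label %loop
loop:
  %i = phi i64 [ 0, %entry ], [ %next, %loop ]
  %ptr = getelementptr i32, i32 *%dst, i64 %i
  store i32 0, i32 *%ptr
  %next = add i64 %i, 1
  %cond = icmp slt i64 %next, %n
  br i1 %cond, label %loop, label %exit
exit:
  ret void
}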

--

We only use MVC, XC and CLC for constant-length block operations.
We could extend them to variable-length operations too,
using EXECUTE RELATIVE LONG.

MVCIN, MVCLE and CLCLE may be worthwhile too.
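
As a (made-up) illustration of the first point, a memcpy of constant
length is a candidate for a single MVC, while the variable-length form
is where EXECUTE RELATIVE LONG of an MVC would come in:

declare void @llvm.memcpy.p0i8.p0i8.i64(i8*, i8*, i64, i1 immarg)

; Constant length: can be done with one MVC.
define void @copy_const(i8 *%dst, i8 *%src) {
  call void @llvm.memcpy.p0i8.p0i8.i64(i8* %dst, i8* %src, i64 100, i1 false)
  ret void
}

; Variable length: the case EXECUTE RELATIVE LONG could handle.
define void @copy_var(i8 *%dst, i8 *%src, i64 %len) {
  call void @llvm.memcpy.p0i8.p0i8.i64(i8* %dst, i8* %src, i64 %len, i1 false)
  ret void
}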

--

We don't use CUSE or the TRANSLATE family of instructions for string
operations.  The TRANSLATE ones are probably more difficult to exploit.

--

We don't take full advantage of builtins like fabsl because the calling
conventions require f128s to be returned by invisible reference.
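
For example, the operation itself is just @llvm.fabs.f128 at the IR
level (made-up function below), but C code calling fabsl goes through
the indirect f128 return convention rather than this direct form:

declare fp128 @llvm.fabs.f128(fp128)

define fp128 @f(fp128 %x) {
  %abs = call fp128 @llvm.fabs.f128(fp128 %x)
  ret fp128 %abs
}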

--

ADD LOGICAL WITH SIGNED IMMEDIATE could be useful when we need to
produce a carry.  SUBTRACT LOGICAL IMMEDIATE could be useful when we
need to produce a borrow.  (Note that there are no memory forms of
ADD LOGICAL WITH CARRY and SUBTRACT LOGICAL WITH BORROW, so the high
part of 128-bit memory operations would probably need to be done
via a register.)
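
As a (made-up) example of where a carry is needed, adding a small
immediate to an i128 requires the low 64-bit addition to produce a
carry that the high part consumes:

define void @f(i128 *%ptr) {
  %val = load i128, i128 *%ptr
  %add = add i128 %val, 1
  store i128 %add, i128 *%ptr
  ret void
}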

--

We don't use ICM, STCM, or CLM.

--

We don't use ADD (LOGICAL) HIGH, SUBTRACT (LOGICAL) HIGH,
or COMPARE (LOGICAL) HIGH yet.

--

DAGCombiner doesn't yet fold truncations of extended loads.  Functions like:

    unsigned long f (unsigned long x, unsigned short *y)
    {
      return (x << 32) | *y;
    }

therefore end up as:

        sllg    %r2, %r2, 32
        llgh    %r0, 0(%r3)
        lr      %r2, %r0
        br      %r14

but truncating the load would give:

        sllg    %r2, %r2, 32
        llh     %r2, 0(%r3)
        br      %r14

--

Functions like:

define i64 @f1(i64 %a) {
  %and = and i64 %a, 1
  ret i64 %and
}

ought to be implemented as:

        lhi     %r0, 1
        ngr     %r2, %r0
        br      %r14

but two-address optimizations reverse the order of the AND and force:

        lhi     %r0, 1
        ngr     %r0, %r2
        lgr     %r2, %r0
        br      %r14

CodeGen/SystemZ/and-04.ll has several examples of this.

--

Out-of-range displacements are usually handled by loading the full
address into a register.  In many cases it would be better to create
an anchor point instead.  E.g. for:

define void @f4a(i128 *%aptr, i64 %base) {
  %addr = add i64 %base, 524288
  %bptr = inttoptr i64 %addr to i128 *
  %a = load volatile i128, i128 *%aptr
  %b = load i128, i128 *%bptr
  %add = add i128 %a, %b
  store i128 %add, i128 *%aptr
  ret void
}

(from CodeGen/SystemZ/int-add-08.ll) we load %base+524288 and %base+524296
into separate registers, rather than using %base+524288 as a base for both.

--

Dynamic stack allocations round the size up to a multiple of 8 bytes and
then allocate that rounded amount.  It would be simpler to subtract the
unrounded size from the copy of the stack pointer and then align the result.
See CodeGen/SystemZ/alloca-01.ll for an example.
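
A minimal (made-up) example of the kind of allocation in question; the
test referenced above has fuller cases:

declare void @use(i8*)

define void @f(i64 %size) {
  %buf = alloca i8, i64 %size
  call void @use(i8* %buf)
  ret void
}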

--

If needed, we can support 16-byte atomics using LPQ, STPQ and CDSG.
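
A sketch of the IR that such support would have to handle (made-up
function names):

define i128 @atomic_load_16(i128 *%src) {
  %val = load atomic i128, i128 *%src seq_cst, align 16
  ret i128 %val
}

define void @atomic_store_16(i128 *%dst, i128 %val) {
  store atomic i128 %val, i128 *%dst seq_cst, align 16
  ret void
}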

--

We might want to model all access registers and use them to spill
32-bit values.

--

We might want to use the 'overflow' condition of e.g. AR to support
llvm.sadd.with.overflow.i32 and related intrinsics - the generated code
for signed overflow checks is currently quite bad.  This would improve
the results of using -ftrapv.
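
For reference, this is the kind of (made-up) function the note is
about; the overflow flag from the intrinsic is exactly what the
condition code set by AR could provide:

declare {i32, i1} @llvm.sadd.with.overflow.i32(i32, i32)
declare void @llvm.trap()

define i32 @f(i32 %a, i32 %b) {
  %res = call {i32, i1} @llvm.sadd.with.overflow.i32(i32 %a, i32 %b)
  %sum = extractvalue {i32, i1} %res, 0
  %ovf = extractvalue {i32, i1} %res, 1
  br i1 %ovf, label %trap, label %cont

trap:
  call void @llvm.trap()
  unreachable

cont:
  ret i32 %sum
}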