// RUN: llvm-tblgen -gen-global-isel -I %p/../../include %s | FileCheck %s
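
// This test checks the C++ instruction selector that TableGen's GlobalISel
// backend emits for the target and selection rules defined below.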
include "llvm/Target/Target.td"
//===- Define the necessary boilerplate for our test target. --------------===//
def MyTargetISA : InstrInfo;
def MyTarget : Target { let InstructionSet = MyTargetISA; }
def R0 : Register<"r0"> { let Namespace = "MyTarget"; }
def GPR32 : RegisterClass<"MyTarget", [i32], 32, (add R0)>;
class I<dag OOps, dag IOps, list<dag> Pat>
  : Instruction {
  let Namespace = "MyTarget";
  let OutOperandList = OOps;
  let InOperandList = IOps;
  let Pattern = Pat;
}
def complex : Operand<i32>, ComplexPattern<i32, 2, "SelectComplexPattern", []> {
  let MIOperandInfo = (ops i32imm, i32imm);
}
def gi_complex :
    GIComplexOperandMatcher<s32, (ops i32imm, i32imm), "selectComplexPattern">,
    GIComplexPatternEquiv<complex>;
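// GIComplexPatternEquiv links the GlobalISel operand matcher above to the
// SelectionDAG ComplexPattern 'complex', so imported SelectionDAG rules can
// call selectComplexPattern() to match the operand.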
def m1 : OperandWithDefaultOps <i32, (ops (i32 -1))>;
def Z : OperandWithDefaultOps <i32, (ops R0)>;
def m1Z : OperandWithDefaultOps <i32, (ops (i32 -1), R0)>;
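// None of these default operands appear in the source patterns below; the
// emitted renderers materialize their defaults (the immediate -1 and/or the
// register R0) directly into the result instruction.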
//===- Test the function definition boilerplate. --------------------------===//
// CHECK: bool MyTargetInstructionSelector::selectImpl(MachineInstr &I) const {
// CHECK: MachineFunction &MF = *I.getParent()->getParent();
// CHECK: const MachineRegisterInfo &MRI = MF.getRegInfo();
//===- Test a pattern with multiple ComplexPattern operands. --------------===//
//
// CHECK-LABEL: if ([&]() {
// CHECK-NEXT: MachineInstr &MI0 = I;
// CHECK-NEXT: if (MI0.getNumOperands() < 4)
// CHECK-NEXT: return false;
// CHECK-NEXT: if ((MI0.getOpcode() == TargetOpcode::G_SELECT) &&
// CHECK-NEXT: ((/* dst */ (MRI.getType(MI0.getOperand(0).getReg()) == (LLT::scalar(32))) &&
// CHECK-NEXT: ((&RBI.getRegBankFromRegClass(MyTarget::GPR32RegClass) == RBI.getRegBank(MI0.getOperand(0).getReg(), MRI, TRI))))) &&
// CHECK-NEXT: ((/* src1 */ (MRI.getType(MI0.getOperand(1).getReg()) == (LLT::scalar(32))) &&
// CHECK-NEXT: ((&RBI.getRegBankFromRegClass(MyTarget::GPR32RegClass) == RBI.getRegBank(MI0.getOperand(1).getReg(), MRI, TRI))))) &&
// CHECK-NEXT: ((/* src2 */ (MRI.getType(MI0.getOperand(2).getReg()) == (LLT::scalar(32))) &&
// CHECK-NEXT: (selectComplexPattern(MI0.getOperand(2), TempOp0, TempOp1)))) &&
// CHECK-NEXT: ((/* src3 */ (MRI.getType(MI0.getOperand(3).getReg()) == (LLT::scalar(32))) &&
// CHECK-NEXT: (selectComplexPattern(MI0.getOperand(3), TempOp2, TempOp3))))) {
// CHECK-NEXT: // (select:i32 GPR32:i32:$src1, complex:i32:$src2, complex:i32:$src3) => (INSN2:i32 GPR32:i32:$src1, complex:i32:$src3, complex:i32:$src2)
// CHECK-NEXT: MachineInstrBuilder MIB = BuildMI(*I.getParent(), I, I.getDebugLoc(), TII.get(MyTarget::INSN2));
// CHECK-NEXT: MIB.add(MI0.getOperand(0)/*dst*/);
// CHECK-NEXT: MIB.add(MI0.getOperand(1)/*src1*/);
// CHECK-NEXT: MIB.add(TempOp2);
// CHECK-NEXT: MIB.add(TempOp3);
// CHECK-NEXT: MIB.add(TempOp0);
// CHECK-NEXT: MIB.add(TempOp1);
// CHECK-NEXT: for (const auto *FromMI : {&MI0, })
// CHECK-NEXT: for (const auto &MMO : FromMI->memoperands())
// CHECK-NEXT: MIB.addMemOperand(MMO);
// CHECK-NEXT: I.eraseFromParent();
// CHECK-NEXT: MachineInstr &NewI = *MIB;
// CHECK-NEXT: constrainSelectedInstRegOperands(NewI, TII, TRI, RBI);
// CHECK-NEXT: return true;
// CHECK-NEXT: }
def : GINodeEquiv<G_SELECT, select>;
def INSN2 : I<(outs GPR32:$dst), (ins GPR32:$src1, complex:$src2, complex:$src3), []>;
def : Pat<(select GPR32:$src1, complex:$src2, complex:$src3),
          (INSN2 GPR32:$src1, complex:$src3, complex:$src2)>;
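// The result pattern deliberately swaps $src2 and $src3, so the renderers
// above must add the expanded sub-operands out of source order (TempOp2/3
// before TempOp0/1).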
//===- Test a simple pattern with regclass operands. ----------------------===//
// CHECK-LABEL: if ([&]() {
// CHECK-NEXT: MachineInstr &MI0 = I;
// CHECK-NEXT: if (MI0.getNumOperands() < 3)
// CHECK-NEXT: return false;
// CHECK-NEXT: if ((MI0.getOpcode() == TargetOpcode::G_ADD) &&
// CHECK-NEXT: ((/* dst */ (MRI.getType(MI0.getOperand(0).getReg()) == (LLT::scalar(32))) &&
// CHECK-NEXT: ((&RBI.getRegBankFromRegClass(MyTarget::GPR32RegClass) == RBI.getRegBank(MI0.getOperand(0).getReg(), MRI, TRI))))) &&
// CHECK-NEXT: ((/* src1 */ (MRI.getType(MI0.getOperand(1).getReg()) == (LLT::scalar(32))) &&
// CHECK-NEXT: ((&RBI.getRegBankFromRegClass(MyTarget::GPR32RegClass) == RBI.getRegBank(MI0.getOperand(1).getReg(), MRI, TRI))))) &&
// CHECK-NEXT: ((/* src2 */ (MRI.getType(MI0.getOperand(2).getReg()) == (LLT::scalar(32))) &&
// CHECK-NEXT: ((&RBI.getRegBankFromRegClass(MyTarget::GPR32RegClass) == RBI.getRegBank(MI0.getOperand(2).getReg(), MRI, TRI)))))) {
// CHECK-NEXT: // (add:i32 GPR32:i32:$src1, GPR32:i32:$src2) => (ADD:i32 GPR32:i32:$src1, GPR32:i32:$src2)
// CHECK-NEXT: I.setDesc(TII.get(MyTarget::ADD));
// CHECK-NEXT: MachineInstr &NewI = I;
// CHECK-NEXT: constrainSelectedInstRegOperands(NewI, TII, TRI, RBI);
// CHECK-NEXT: return true;
// CHECK-NEXT: }
// CHECK-NEXT: return false;
// CHECK-NEXT: }()) { return true; }
def ADD : I<(outs GPR32:$dst), (ins GPR32:$src1, GPR32:$src2),
            [(set GPR32:$dst, (add GPR32:$src1, GPR32:$src2))]>;
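// Every operand is copied in matcher order, so instead of building a new
// instruction the emitted rule mutates the G_ADD in place with setDesc().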
//===- Test a nested instruction match. -----------------------------------===//
// CHECK-LABEL: if ([&]() {
// CHECK-NEXT: MachineInstr &MI0 = I;
// CHECK-NEXT: if (MI0.getNumOperands() < 3)
// CHECK-NEXT: return false;
// CHECK-NEXT: if (!MI0.getOperand(1).isReg())
// CHECK-NEXT: return false;
// CHECK-NEXT: MachineInstr &MI1 = *MRI.getVRegDef(MI0.getOperand(1).getReg());
// CHECK-NEXT: if (MI1.getNumOperands() < 3)
// CHECK-NEXT: return false;
// CHECK-NEXT: if ((MI0.getOpcode() == TargetOpcode::G_MUL) &&
// CHECK-NEXT: ((/* dst */ (MRI.getType(MI0.getOperand(0).getReg()) == (LLT::scalar(32))) &&
// CHECK-NEXT: ((&RBI.getRegBankFromRegClass(MyTarget::GPR32RegClass) == RBI.getRegBank(MI0.getOperand(0).getReg(), MRI, TRI))))) &&
// CHECK-NEXT: ((/* Operand 1 */ (MRI.getType(MI0.getOperand(1).getReg()) == (LLT::scalar(32))) &&
// CHECK-NEXT: (((MI1.getOpcode() == TargetOpcode::G_ADD) &&
// CHECK-NEXT: ((/* Operand 0 */ (MRI.getType(MI1.getOperand(0).getReg()) == (LLT::scalar(32))))) &&
// CHECK-NEXT: ((/* src1 */ (MRI.getType(MI1.getOperand(1).getReg()) == (LLT::scalar(32))) &&
// CHECK-NEXT: ((&RBI.getRegBankFromRegClass(MyTarget::GPR32RegClass) == RBI.getRegBank(MI1.getOperand(1).getReg(), MRI, TRI))))) &&
// CHECK-NEXT: ((/* src2 */ (MRI.getType(MI1.getOperand(2).getReg()) == (LLT::scalar(32))) &&
// CHECK-NEXT: ((&RBI.getRegBankFromRegClass(MyTarget::GPR32RegClass) == RBI.getRegBank(MI1.getOperand(2).getReg(), MRI, TRI))))))
// CHECK-NEXT: ))) &&
// CHECK-NEXT: ((/* src3 */ (MRI.getType(MI0.getOperand(2).getReg()) == (LLT::scalar(32))) &&
// CHECK-NEXT: ((&RBI.getRegBankFromRegClass(MyTarget::GPR32RegClass) == RBI.getRegBank(MI0.getOperand(2).getReg(), MRI, TRI)))))) {
// CHECK-NEXT: if (!isObviouslySafeToFold(MI1)) return false;
// CHECK-NEXT: // (mul:i32 (add:i32 GPR32:i32:$src1, GPR32:i32:$src2), GPR32:i32:$src3) => (MULADD:i32 GPR32:i32:$src1, GPR32:i32:$src2, GPR32:i32:$src3)
// CHECK-NEXT: MachineInstrBuilder MIB = BuildMI(*I.getParent(), I, I.getDebugLoc(), TII.get(MyTarget::MULADD));
// CHECK-NEXT: MIB.add(MI0.getOperand(0)/*dst*/);
// CHECK-NEXT: MIB.add(MI1.getOperand(1)/*src1*/);
// CHECK-NEXT: MIB.add(MI1.getOperand(2)/*src2*/);
// CHECK-NEXT: MIB.add(MI0.getOperand(2)/*src3*/);
// CHECK-NEXT: for (const auto *FromMI : {&MI0, &MI1, })
// CHECK-NEXT: for (const auto &MMO : FromMI->memoperands())
// CHECK-NEXT: MIB.addMemOperand(MMO);
// CHECK-NEXT: I.eraseFromParent();
// CHECK-NEXT: MachineInstr &NewI = *MIB;
// CHECK-NEXT: constrainSelectedInstRegOperands(NewI, TII, TRI, RBI);
// CHECK-NEXT: return true;
// CHECK-NEXT: }
// We also get a second rule by commutativity.
// CHECK-LABEL: if ([&]() {
// CHECK-NEXT: MachineInstr &MI0 = I;
// CHECK-NEXT: if (MI0.getNumOperands() < 3)
// CHECK-NEXT: return false;
// CHECK-NEXT: if (!MI0.getOperand(2).isReg())
// CHECK-NEXT: return false;
// CHECK-NEXT: MachineInstr &MI1 = *MRI.getVRegDef(MI0.getOperand(2).getReg());
// CHECK-NEXT: if (MI1.getNumOperands() < 3)
// CHECK-NEXT: return false;
// CHECK-NEXT: if ((MI0.getOpcode() == TargetOpcode::G_MUL) &&
// CHECK-NEXT: ((/* dst */ (MRI.getType(MI0.getOperand(0).getReg()) == (LLT::scalar(32))) &&
// CHECK-NEXT: ((&RBI.getRegBankFromRegClass(MyTarget::GPR32RegClass) == RBI.getRegBank(MI0.getOperand(0).getReg(), MRI, TRI))))) &&
// CHECK-NEXT: ((/* src3 */ (MRI.getType(MI0.getOperand(1).getReg()) == (LLT::scalar(32))) &&
// CHECK-NEXT: ((&RBI.getRegBankFromRegClass(MyTarget::GPR32RegClass) == RBI.getRegBank(MI0.getOperand(1).getReg(), MRI, TRI))))) &&
// CHECK-NEXT: ((/* Operand 2 */ (MRI.getType(MI0.getOperand(2).getReg()) == (LLT::scalar(32))) &&
// CHECK-NEXT: (((MI1.getOpcode() == TargetOpcode::G_ADD) &&
// CHECK-NEXT: ((/* Operand 0 */ (MRI.getType(MI1.getOperand(0).getReg()) == (LLT::scalar(32))))) &&
// CHECK-NEXT: ((/* src1 */ (MRI.getType(MI1.getOperand(1).getReg()) == (LLT::scalar(32))) &&
// CHECK-NEXT: ((&RBI.getRegBankFromRegClass(MyTarget::GPR32RegClass) == RBI.getRegBank(MI1.getOperand(1).getReg(), MRI, TRI))))) &&
// CHECK-NEXT: ((/* src2 */ (MRI.getType(MI1.getOperand(2).getReg()) == (LLT::scalar(32))) &&
// CHECK-NEXT: ((&RBI.getRegBankFromRegClass(MyTarget::GPR32RegClass) == RBI.getRegBank(MI1.getOperand(2).getReg(), MRI, TRI))))))
// CHECK-NEXT: )))) {
// CHECK-NEXT: if (!isObviouslySafeToFold(MI1)) return false;
// CHECK-NEXT: // (mul:i32 GPR32:i32:$src3, (add:i32 GPR32:i32:$src1, GPR32:i32:$src2)) => (MULADD:i32 GPR32:i32:$src1, GPR32:i32:$src2, GPR32:i32:$src3)
// CHECK-NEXT: MachineInstrBuilder MIB = BuildMI(*I.getParent(), I, I.getDebugLoc(), TII.get(MyTarget::MULADD));
// CHECK-NEXT: MIB.add(MI0.getOperand(0)/*dst*/);
// CHECK-NEXT: MIB.add(MI1.getOperand(1)/*src1*/);
// CHECK-NEXT: MIB.add(MI1.getOperand(2)/*src2*/);
// CHECK-NEXT: MIB.add(MI0.getOperand(1)/*src3*/);
// CHECK-NEXT: for (const auto *FromMI : {&MI0, &MI1, })
// CHECK-NEXT: for (const auto &MMO : FromMI->memoperands())
// CHECK-NEXT: MIB.addMemOperand(MMO);
// CHECK-NEXT: I.eraseFromParent();
// CHECK-NEXT: MachineInstr &NewI = *MIB;
// CHECK-NEXT: constrainSelectedInstRegOperands(NewI, TII, TRI, RBI);
// CHECK-NEXT: return true;
// CHECK-NEXT: }
def MULADD : I<(outs GPR32:$dst), (ins GPR32:$src1, GPR32:$src2, GPR32:$src3),
               [(set GPR32:$dst,
                     (mul (add GPR32:$src1, GPR32:$src2), GPR32:$src3))]>;
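// Matching the nested G_ADD captures a second instruction (MI1), so the rule
// also emits the isObviouslySafeToFold() guard checked above.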
//===- Test another simple pattern with regclass operands. ----------------===//
// CHECK-LABEL: if ([&]() {
// CHECK-NEXT: MachineInstr &MI0 = I;
// CHECK-NEXT: if (MI0.getNumOperands() < 3)
// CHECK-NEXT: return false;
// CHECK-NEXT: if ((MI0.getOpcode() == TargetOpcode::G_MUL) &&
// CHECK-NEXT: ((/* dst */ (MRI.getType(MI0.getOperand(0).getReg()) == (LLT::scalar(32))) &&
// CHECK-NEXT: ((&RBI.getRegBankFromRegClass(MyTarget::GPR32RegClass) == RBI.getRegBank(MI0.getOperand(0).getReg(), MRI, TRI))))) &&
// CHECK-NEXT: ((/* src1 */ (MRI.getType(MI0.getOperand(1).getReg()) == (LLT::scalar(32))) &&
// CHECK-NEXT: ((&RBI.getRegBankFromRegClass(MyTarget::GPR32RegClass) == RBI.getRegBank(MI0.getOperand(1).getReg(), MRI, TRI))))) &&
// CHECK-NEXT: ((/* src2 */ (MRI.getType(MI0.getOperand(2).getReg()) == (LLT::scalar(32))) &&
// CHECK-NEXT: ((&RBI.getRegBankFromRegClass(MyTarget::GPR32RegClass) == RBI.getRegBank(MI0.getOperand(2).getReg(), MRI, TRI)))))) {
// CHECK-NEXT: // (mul:i32 GPR32:i32:$src1, GPR32:i32:$src2) => (MUL:i32 GPR32:i32:$src2, GPR32:i32:$src1)
// CHECK-NEXT: MachineInstrBuilder MIB = BuildMI(*I.getParent(), I, I.getDebugLoc(), TII.get(MyTarget::MUL));
// CHECK-NEXT: MIB.add(MI0.getOperand(0)/*dst*/);
// CHECK-NEXT: MIB.add(MI0.getOperand(2)/*src2*/);
// CHECK-NEXT: MIB.add(MI0.getOperand(1)/*src1*/);
// CHECK-NEXT: for (const auto *FromMI : {&MI0, })
// CHECK-NEXT: for (const auto &MMO : FromMI->memoperands())
// CHECK-NEXT: MIB.addMemOperand(MMO);
// CHECK-NEXT: I.eraseFromParent();
// CHECK-NEXT: MachineInstr &NewI = *MIB;
// CHECK-NEXT: constrainSelectedInstRegOperands(NewI, TII, TRI, RBI);
// CHECK-NEXT: return true;
// CHECK-NEXT: }
// CHECK-NEXT: return false;
// CHECK-NEXT: }()) { return true; }
def MUL : I<(outs GPR32:$dst), (ins GPR32:$src2, GPR32:$src1),
            [(set GPR32:$dst, (mul GPR32:$src1, GPR32:$src2))]>;
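// MUL takes $src2 before $src1, so the copied operands above appear in the
// swapped order rather than in matcher order.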
//===- Test a pattern with ComplexPattern operands. -----------------------===//
//
// CHECK-LABEL: if ([&]() {
// CHECK-NEXT: MachineInstr &MI0 = I;
// CHECK-NEXT: if (MI0.getNumOperands() < 3)
// CHECK-NEXT: return false;
// CHECK-NEXT: if ((MI0.getOpcode() == TargetOpcode::G_SUB) &&
// CHECK-NEXT: ((/* dst */ (MRI.getType(MI0.getOperand(0).getReg()) == (LLT::scalar(32))) &&
// CHECK-NEXT: ((&RBI.getRegBankFromRegClass(MyTarget::GPR32RegClass) == RBI.getRegBank(MI0.getOperand(0).getReg(), MRI, TRI))))) &&
// CHECK-NEXT: ((/* src1 */ (MRI.getType(MI0.getOperand(1).getReg()) == (LLT::scalar(32))) &&
// CHECK-NEXT: ((&RBI.getRegBankFromRegClass(MyTarget::GPR32RegClass) == RBI.getRegBank(MI0.getOperand(1).getReg(), MRI, TRI))))) &&
// CHECK-NEXT: ((/* src2 */ (MRI.getType(MI0.getOperand(2).getReg()) == (LLT::scalar(32))) &&
// CHECK-NEXT: (selectComplexPattern(MI0.getOperand(2), TempOp0, TempOp1))))) {
// CHECK-NEXT: // (sub:i32 GPR32:i32:$src1, complex:i32:$src2) => (INSN1:i32 GPR32:i32:$src1, complex:i32:$src2)
// CHECK-NEXT: MachineInstrBuilder MIB = BuildMI(*I.getParent(), I, I.getDebugLoc(), TII.get(MyTarget::INSN1));
// CHECK-NEXT: MIB.add(MI0.getOperand(0)/*dst*/);
// CHECK-NEXT: MIB.add(MI0.getOperand(1)/*src1*/);
// CHECK-NEXT: MIB.add(TempOp0);
// CHECK-NEXT: MIB.add(TempOp1);
// CHECK-NEXT: for (const auto *FromMI : {&MI0, })
// CHECK-NEXT: for (const auto &MMO : FromMI->memoperands())
// CHECK-NEXT: MIB.addMemOperand(MMO);
// CHECK-NEXT: I.eraseFromParent();
// CHECK-NEXT: MachineInstr &NewI = *MIB;
// CHECK-NEXT: constrainSelectedInstRegOperands(NewI, TII, TRI, RBI);
// CHECK-NEXT: return true;
// CHECK-NEXT: }
def INSN1 : I<(outs GPR32:$dst), (ins GPR32:$src1, complex:$src2), []>;
def : Pat<(sub GPR32:$src1, complex:$src2), (INSN1 GPR32:$src1, complex:$src2)>;
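// The single complex:$src2 operand expands to two rendered operands (TempOp0
// and TempOp1) because its MIOperandInfo declares two sub-operands.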
//===- Test a simple pattern with a default operand. ----------------------===//
//
// CHECK-LABEL: if ([&]() {
// CHECK-NEXT: MachineInstr &MI0 = I;
// CHECK-NEXT: if (MI0.getNumOperands() < 3)
// CHECK-NEXT: return false;
// CHECK-NEXT: if ((MI0.getOpcode() == TargetOpcode::G_XOR) &&
// CHECK-NEXT: ((/* dst */ (MRI.getType(MI0.getOperand(0).getReg()) == (LLT::scalar(32))) &&
// CHECK-NEXT: ((&RBI.getRegBankFromRegClass(MyTarget::GPR32RegClass) == RBI.getRegBank(MI0.getOperand(0).getReg(), MRI, TRI))))) &&
// CHECK-NEXT: ((/* src1 */ (MRI.getType(MI0.getOperand(1).getReg()) == (LLT::scalar(32))) &&
// CHECK-NEXT: ((&RBI.getRegBankFromRegClass(MyTarget::GPR32RegClass) == RBI.getRegBank(MI0.getOperand(1).getReg(), MRI, TRI))))) &&
// CHECK-NEXT: ((/* Operand 2 */ (MRI.getType(MI0.getOperand(2).getReg()) == (LLT::scalar(32))) &&
// CHECK-NEXT: (isOperandImmEqual(MI0.getOperand(2), -2, MRI))))) {
// CHECK-NEXT: // (xor:i32 GPR32:i32:$src1, -2:i32) => (XORI:i32 GPR32:i32:$src1)
// CHECK-NEXT: MachineInstrBuilder MIB = BuildMI(*I.getParent(), I, I.getDebugLoc(), TII.get(MyTarget::XORI));
// CHECK-NEXT: MIB.add(MI0.getOperand(0)/*dst*/);
// CHECK-NEXT: MIB.addImm(-1);
// CHECK-NEXT: MIB.add(MI0.getOperand(1)/*src1*/);
// CHECK-NEXT: for (const auto *FromMI : {&MI0, })
// CHECK-NEXT: for (const auto &MMO : FromMI->memoperands())
// CHECK-NEXT: MIB.addMemOperand(MMO);
// CHECK-NEXT: I.eraseFromParent();
// CHECK-NEXT: MachineInstr &NewI = *MIB;
// CHECK-NEXT: constrainSelectedInstRegOperands(NewI, TII, TRI, RBI);
// CHECK-NEXT: return true;
// CHECK-NEXT: }
// CHECK-NEXT: return false;
// CHECK-NEXT: }()) { return true; }
// The -2 is just to distinguish it from the 'not' case below.
def XORI : I<(outs GPR32:$dst), (ins m1:$src2, GPR32:$src1),
             [(set GPR32:$dst, (xor GPR32:$src1, -2))]>;
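// m1:$src2 is absent from the source pattern, so the emitted code renders its
// default with MIB.addImm(-1) before copying $src1.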
//===- Test a simple pattern with a default register operand. -------------===//
//
// CHECK-LABEL: if ([&]() {
// CHECK-NEXT: MachineInstr &MI0 = I;
// CHECK-NEXT: if (MI0.getNumOperands() < 3)
// CHECK-NEXT: return false;
// CHECK-NEXT: if ((MI0.getOpcode() == TargetOpcode::G_XOR) &&
// CHECK-NEXT: ((/* dst */ (MRI.getType(MI0.getOperand(0).getReg()) == (LLT::scalar(32))) &&
// CHECK-NEXT: ((&RBI.getRegBankFromRegClass(MyTarget::GPR32RegClass) == RBI.getRegBank(MI0.getOperand(0).getReg(), MRI, TRI))))) &&
// CHECK-NEXT: ((/* src1 */ (MRI.getType(MI0.getOperand(1).getReg()) == (LLT::scalar(32))) &&
// CHECK-NEXT: ((&RBI.getRegBankFromRegClass(MyTarget::GPR32RegClass) == RBI.getRegBank(MI0.getOperand(1).getReg(), MRI, TRI))))) &&
// CHECK-NEXT: ((/* Operand 2 */ (MRI.getType(MI0.getOperand(2).getReg()) == (LLT::scalar(32))) &&
// CHECK-NEXT: (isOperandImmEqual(MI0.getOperand(2), -3, MRI))))) {
// CHECK-NEXT: // (xor:i32 GPR32:i32:$src1, -3:i32) => (XOR:i32 GPR32:i32:$src1)
// CHECK-NEXT: MachineInstrBuilder MIB = BuildMI(*I.getParent(), I, I.getDebugLoc(), TII.get(MyTarget::XOR));
// CHECK-NEXT: MIB.add(MI0.getOperand(0)/*dst*/);
// CHECK-NEXT: MIB.addReg(MyTarget::R0);
// CHECK-NEXT: MIB.add(MI0.getOperand(1)/*src1*/);
// CHECK-NEXT: for (const auto *FromMI : {&MI0, })
// CHECK-NEXT: for (const auto &MMO : FromMI->memoperands())
// CHECK-NEXT: MIB.addMemOperand(MMO);
// CHECK-NEXT: I.eraseFromParent();
// CHECK-NEXT: MachineInstr &NewI = *MIB;
// CHECK-NEXT: constrainSelectedInstRegOperands(NewI, TII, TRI, RBI);
// CHECK-NEXT: return true;
// CHECK-NEXT: }
// CHECK-NEXT: return false;
// CHECK-NEXT: }()) { return true; }
// The -3 is just to distinguish it from the 'not' case below and the other default op case above.
def XOR : I<(outs GPR32:$dst), (ins Z:$src2, GPR32:$src1),
            [(set GPR32:$dst, (xor GPR32:$src1, -3))]>;
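// Z:$src2 defaults to a register rather than an immediate, so here the
// renderer emits MIB.addReg(MyTarget::R0) for the missing operand.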
//===- Test a simple pattern with multiple default operands. --------------===//
//
// CHECK-LABEL: if ([&]() {
// CHECK-NEXT: MachineInstr &MI0 = I;
// CHECK-NEXT: if (MI0.getNumOperands() < 3)
// CHECK-NEXT: return false;
// CHECK-NEXT: if ((MI0.getOpcode() == TargetOpcode::G_XOR) &&
// CHECK-NEXT: ((/* dst */ (MRI.getType(MI0.getOperand(0).getReg()) == (LLT::scalar(32))) &&
// CHECK-NEXT: ((&RBI.getRegBankFromRegClass(MyTarget::GPR32RegClass) == RBI.getRegBank(MI0.getOperand(0).getReg(), MRI, TRI))))) &&
// CHECK-NEXT: ((/* src1 */ (MRI.getType(MI0.getOperand(1).getReg()) == (LLT::scalar(32))) &&
// CHECK-NEXT: ((&RBI.getRegBankFromRegClass(MyTarget::GPR32RegClass) == RBI.getRegBank(MI0.getOperand(1).getReg(), MRI, TRI))))) &&
// CHECK-NEXT: ((/* Operand 2 */ (MRI.getType(MI0.getOperand(2).getReg()) == (LLT::scalar(32))) &&
// CHECK-NEXT: (isOperandImmEqual(MI0.getOperand(2), -4, MRI))))) {
// CHECK-NEXT: // (xor:i32 GPR32:i32:$src1, -4:i32) => (XORlike:i32 GPR32:i32:$src1)
// CHECK-NEXT: MachineInstrBuilder MIB = BuildMI(*I.getParent(), I, I.getDebugLoc(), TII.get(MyTarget::XORlike));
// CHECK-NEXT: MIB.add(MI0.getOperand(0)/*dst*/);
// CHECK-NEXT: MIB.addImm(-1);
// CHECK-NEXT: MIB.addReg(MyTarget::R0);
// CHECK-NEXT: MIB.add(MI0.getOperand(1)/*src1*/);
// CHECK-NEXT: for (const auto *FromMI : {&MI0, })
// CHECK-NEXT: for (const auto &MMO : FromMI->memoperands())
// CHECK-NEXT: MIB.addMemOperand(MMO);
// CHECK-NEXT: I.eraseFromParent();
// CHECK-NEXT: MachineInstr &NewI = *MIB;
// CHECK-NEXT: constrainSelectedInstRegOperands(NewI, TII, TRI, RBI);
// CHECK-NEXT: return true;
// CHECK-NEXT: }
// CHECK-NEXT: return false;
// CHECK-NEXT: }()) { return true; }
// The -4 is just to distinguish it from the other 'not' cases.
def XORlike : I<(outs GPR32:$dst), (ins m1Z:$src2, GPR32:$src1),
                [(set GPR32:$dst, (xor GPR32:$src1, -4))]>;
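// m1Z combines both defaults, so the renderers emit the immediate -1 and the
// register R0 for the single missing $src2 operand.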
//===- Test a simple pattern with constant immediate operands. ------------===//
//
// This must precede the 3-register variants because constant immediates have
// priority over register banks.
// CHECK-LABEL: if ([&]() {
// CHECK-NEXT: MachineInstr &MI0 = I;
// CHECK-NEXT: if (MI0.getNumOperands() < 3)
// CHECK-NEXT: return false;
// CHECK-NEXT: if ((MI0.getOpcode() == TargetOpcode::G_XOR) &&
// CHECK-NEXT: ((/* dst */ (MRI.getType(MI0.getOperand(0).getReg()) == (LLT::scalar(32))) &&
// CHECK-NEXT: ((&RBI.getRegBankFromRegClass(MyTarget::GPR32RegClass) == RBI.getRegBank(MI0.getOperand(0).getReg(), MRI, TRI))))) &&
// CHECK-NEXT: ((/* Wm */ (MRI.getType(MI0.getOperand(1).getReg()) == (LLT::scalar(32))) &&
// CHECK-NEXT: ((&RBI.getRegBankFromRegClass(MyTarget::GPR32RegClass) == RBI.getRegBank(MI0.getOperand(1).getReg(), MRI, TRI))))) &&
// CHECK-NEXT: ((/* Operand 2 */ (MRI.getType(MI0.getOperand(2).getReg()) == (LLT::scalar(32))) &&
// CHECK-NEXT: (isOperandImmEqual(MI0.getOperand(2), -1, MRI))))) {
// CHECK-NEXT: // (xor:i32 GPR32:i32:$Wm, -1:i32) => (ORN:i32 R0:i32, GPR32:i32:$Wm)
// CHECK-NEXT: MachineInstrBuilder MIB = BuildMI(*I.getParent(), I, I.getDebugLoc(), TII.get(MyTarget::ORN));
// CHECK-NEXT: MIB.add(MI0.getOperand(0)/*dst*/);
// CHECK-NEXT: MIB.addReg(MyTarget::R0);
// CHECK-NEXT: MIB.add(MI0.getOperand(1)/*Wm*/);
// CHECK-NEXT: for (const auto *FromMI : {&MI0, })
// CHECK-NEXT: for (const auto &MMO : FromMI->memoperands())
// CHECK-NEXT: MIB.addMemOperand(MMO);
// CHECK-NEXT: I.eraseFromParent();
// CHECK-NEXT: MachineInstr &NewI = *MIB;
// CHECK-NEXT: constrainSelectedInstRegOperands(NewI, TII, TRI, RBI);
// CHECK-NEXT: return true;
// CHECK-NEXT: }
// CHECK-NEXT: return false;
// CHECK-NEXT: }()) { return true; }
def ORN : I<(outs GPR32:$dst), (ins GPR32:$src1, GPR32:$src2), []>;
def : Pat<(not GPR32:$Wm), (ORN R0, GPR32:$Wm)>;
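// 'not' is defined as (xor x, -1), so the imported rule matches G_XOR with a
// -1 immediate and renders the physical register R0 into the result.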
//===- Test a pattern with an MBB operand. --------------------------------===//
// CHECK-LABEL: if ([&]() {
// CHECK-NEXT: MachineInstr &MI0 = I;
// CHECK-NEXT: if (MI0.getNumOperands() < 1)
// CHECK-NEXT: return false;
// CHECK-NEXT: if ((MI0.getOpcode() == TargetOpcode::G_BR) &&
// CHECK-NEXT: ((/* target */ (MI0.getOperand(0).isMBB())))) {
// CHECK-NEXT: // (br (bb:Other):$target) => (BR (bb:Other):$target)
// CHECK-NEXT: I.setDesc(TII.get(MyTarget::BR));
// CHECK-NEXT: MachineInstr &NewI = I;
// CHECK-NEXT: constrainSelectedInstRegOperands(NewI, TII, TRI, RBI);
// CHECK-NEXT: return true;
// CHECK-NEXT: }
// CHECK-NEXT: return false;
// CHECK-NEXT: }()) { return true; }
def BR : I<(outs), (ins unknown:$target),
           [(br bb:$target)]>;