//===-- PPCRegisterInfo.td - The PowerPC Register File -----*- tablegen -*-===//
//
//                     The LLVM Compiler Infrastructure
//
// This file is distributed under the University of Illinois Open Source
// License. See LICENSE.TXT for details.
//
//===----------------------------------------------------------------------===//
//
//
//===----------------------------------------------------------------------===//

let Namespace = "PPC" in {
def sub_lt : SubRegIndex<1>;
def sub_gt : SubRegIndex<1, 1>;
def sub_eq : SubRegIndex<1, 2>;
def sub_un : SubRegIndex<1, 3>;
def sub_32 : SubRegIndex<32>;
def sub_64 : SubRegIndex<64>;
def sub_128 : SubRegIndex<128>;
}

class PPCReg<string n> : Register<n> {
  let Namespace = "PPC";
}

// We identify all our registers with a 5-bit ID, for consistency's sake.

// GPR - One of the 32 32-bit general-purpose registers
class GPR<bits<5> num, string n> : PPCReg<n> {
  let HWEncoding{4-0} = num;
}

// GP8 - One of the 32 64-bit general-purpose registers
class GP8<GPR SubReg, string n> : PPCReg<n> {
  let HWEncoding = SubReg.HWEncoding;
  let SubRegs = [SubReg];
  let SubRegIndices = [sub_32];
}

// SPR - One of the 32-bit special-purpose registers
class SPR<bits<10> num, string n> : PPCReg<n> {
  let HWEncoding{9-0} = num;
}

// FPR - One of the 32 64-bit floating-point registers
class FPR<bits<5> num, string n> : PPCReg<n> {
  let HWEncoding{4-0} = num;
}

// QFPR - One of the 32 256-bit floating-point vector registers (used for QPX)
class QFPR<FPR SubReg, string n> : PPCReg<n> {
  let HWEncoding = SubReg.HWEncoding;
  let SubRegs = [SubReg];
  let SubRegIndices = [sub_64];
}

// VF - One of the 32 64-bit floating-point subregisters of the vector
// registers (used by VSX).
class VF<bits<5> num, string n> : PPCReg<n> {
  let HWEncoding{4-0} = num;
  let HWEncoding{5} = 1;
}

// VR - One of the 32 128-bit vector registers
class VR<VF SubReg, string n> : PPCReg<n> {
  let HWEncoding{4-0} = SubReg.HWEncoding{4-0};
  let HWEncoding{5} = 0;
  let SubRegs = [SubReg];
  let SubRegIndices = [sub_64];
}

// VSRL - One of the 32 128-bit VSX registers that overlap with the scalar
// floating-point registers.
class VSRL<FPR SubReg, string n> : PPCReg<n> {
  let HWEncoding = SubReg.HWEncoding;
  let SubRegs = [SubReg];
  let SubRegIndices = [sub_64];
}

// VSRH - One of the 32 128-bit VSX registers that overlap with the vector
// registers.
class VSRH<VR SubReg, string n> : PPCReg<n> {
  let HWEncoding{4-0} = SubReg.HWEncoding{4-0};
  let HWEncoding{5} = 1;
  let SubRegs = [SubReg];
  let SubRegIndices = [sub_128];
}

// CR - One of the 8 4-bit condition registers
class CR<bits<3> num, string n, list<Register> subregs> : PPCReg<n> {
  let HWEncoding{2-0} = num;
  let SubRegs = subregs;
}

// CRBIT - One of the 32 1-bit condition register fields
class CRBIT<bits<5> num, string n> : PPCReg<n> {
  let HWEncoding{4-0} = num;
}

// General-purpose registers
foreach Index = 0-31 in {
  def R#Index : GPR<Index, "r"#Index>, DwarfRegNum<[-2, Index]>;
}

// 64-bit General-purpose registers
foreach Index = 0-31 in {
  def X#Index : GP8<!cast<GPR>("R"#Index), "r"#Index>,
                DwarfRegNum<[Index, -2]>;
}

// Floating-point registers
foreach Index = 0-31 in {
  def F#Index : FPR<Index, "f"#Index>,
                DwarfRegNum<[!add(Index, 32), !add(Index, 32)]>;
}

// Floating-point vector subregisters (for VSX)
foreach Index = 0-31 in {
  def VF#Index : VF<Index, "vs" # !add(Index, 32)>;
}

// QPX Floating-point registers
foreach Index = 0-31 in {
  def QF#Index : QFPR<!cast<FPR>("F"#Index), "q"#Index>,
                 DwarfRegNum<[!add(Index, 32), !add(Index, 32)]>;
}

// Vector registers
foreach Index = 0-31 in {
  def V#Index : VR<!cast<VF>("VF"#Index), "v"#Index>,
                DwarfRegNum<[!add(Index, 77), !add(Index, 77)]>;
}

// VSX registers
foreach Index = 0-31 in {
  def VSL#Index : VSRL<!cast<FPR>("F"#Index), "vs"#Index>,
                  DwarfRegAlias<!cast<FPR>("F"#Index)>;
}

foreach Index = 0-31 in {
  def VSH#Index : VSRH<!cast<VR>("V"#Index), "vs" # !add(Index, 32)>,
                  DwarfRegAlias<!cast<VR>("V"#Index)>;
}

// The representation of r0 when treated as the constant 0.
def ZERO : GPR<0, "0">, DwarfRegAlias<R0>;
def ZERO8 : GP8<ZERO, "0">, DwarfRegAlias<X0>;

// Representations of the frame pointer used by ISD::FRAMEADDR.
def FP : GPR<0 /* arbitrary */, "**FRAME POINTER**">;
def FP8 : GP8<FP, "**FRAME POINTER**">;

// Representations of the base pointer used by setjmp.
def BP : GPR<0 /* arbitrary */, "**BASE POINTER**">;
def BP8 : GP8<BP, "**BASE POINTER**">;

// Condition register bits
def CR0LT : CRBIT< 0, "0">;
def CR0GT : CRBIT< 1, "1">;
def CR0EQ : CRBIT< 2, "2">;
def CR0UN : CRBIT< 3, "3">;
def CR1LT : CRBIT< 4, "4">;
def CR1GT : CRBIT< 5, "5">;
def CR1EQ : CRBIT< 6, "6">;
def CR1UN : CRBIT< 7, "7">;
def CR2LT : CRBIT< 8, "8">;
def CR2GT : CRBIT< 9, "9">;
def CR2EQ : CRBIT<10, "10">;
def CR2UN : CRBIT<11, "11">;
def CR3LT : CRBIT<12, "12">;
def CR3GT : CRBIT<13, "13">;
def CR3EQ : CRBIT<14, "14">;
def CR3UN : CRBIT<15, "15">;
def CR4LT : CRBIT<16, "16">;
def CR4GT : CRBIT<17, "17">;
def CR4EQ : CRBIT<18, "18">;
def CR4UN : CRBIT<19, "19">;
def CR5LT : CRBIT<20, "20">;
def CR5GT : CRBIT<21, "21">;
def CR5EQ : CRBIT<22, "22">;
def CR5UN : CRBIT<23, "23">;
def CR6LT : CRBIT<24, "24">;
def CR6GT : CRBIT<25, "25">;
def CR6EQ : CRBIT<26, "26">;
def CR6UN : CRBIT<27, "27">;
def CR7LT : CRBIT<28, "28">;
def CR7GT : CRBIT<29, "29">;
def CR7EQ : CRBIT<30, "30">;
def CR7UN : CRBIT<31, "31">;

// Condition registers
let SubRegIndices = [sub_lt, sub_gt, sub_eq, sub_un] in {
def CR0 : CR<0, "cr0", [CR0LT, CR0GT, CR0EQ, CR0UN]>, DwarfRegNum<[68, 68]>;
def CR1 : CR<1, "cr1", [CR1LT, CR1GT, CR1EQ, CR1UN]>, DwarfRegNum<[69, 69]>;
def CR2 : CR<2, "cr2", [CR2LT, CR2GT, CR2EQ, CR2UN]>, DwarfRegNum<[70, 70]>;
def CR3 : CR<3, "cr3", [CR3LT, CR3GT, CR3EQ, CR3UN]>, DwarfRegNum<[71, 71]>;
def CR4 : CR<4, "cr4", [CR4LT, CR4GT, CR4EQ, CR4UN]>, DwarfRegNum<[72, 72]>;
def CR5 : CR<5, "cr5", [CR5LT, CR5GT, CR5EQ, CR5UN]>, DwarfRegNum<[73, 73]>;
def CR6 : CR<6, "cr6", [CR6LT, CR6GT, CR6EQ, CR6UN]>, DwarfRegNum<[74, 74]>;
def CR7 : CR<7, "cr7", [CR7LT, CR7GT, CR7EQ, CR7UN]>, DwarfRegNum<[75, 75]>;
}

// Link register
def LR : SPR<8, "lr">, DwarfRegNum<[-2, 65]>;
//let Aliases = [LR] in
def LR8 : SPR<8, "lr">, DwarfRegNum<[65, -2]>;

// Count register
def CTR : SPR<9, "ctr">, DwarfRegNum<[-2, 66]>;
def CTR8 : SPR<9, "ctr">, DwarfRegNum<[66, -2]>;

// VRsave register
def VRSAVE: SPR<256, "vrsave">, DwarfRegNum<[109]>;

// Carry bit. In the architecture this is really bit 0 of the XER register
// (which really is SPR register 1); this is the only bit interesting to a
// compiler.
def CARRY: SPR<1, "ca">, DwarfRegNum<[76]>;

// FP rounding mode: bits 30 and 31 of the FP status and control register.
// This is not allocated as a normal register; it appears only in
// Uses and Defs. The ABI says it needs to be preserved by a function,
// but this is not achieved by saving and restoring it as with
// most registers; it has to be done in code. To make this work, all the
// return and call instructions are described as Uses of RM, so instructions
// that do nothing but change RM will not get deleted.
def RM: PPCReg<"**ROUNDING MODE**">;

/// Register classes
// Allocate volatiles first,
// then nonvolatiles in reverse order since stmw/lmw save from rN to r31.
def GPRC : RegisterClass<"PPC", [i32], 32, (add (sequence "R%u", 2, 12),
                                                (sequence "R%u", 30, 13),
                                                R31, R0, R1, FP, BP)> {
  // On non-Darwin PPC64 systems, R2 can be allocated, but must be restored, so
  // put it at the end of the list.
  let AltOrders = [(add (sub GPRC, R2), R2)];
  let AltOrderSelect = [{
    const PPCSubtarget &S = MF.getSubtarget<PPCSubtarget>();
    return S.isPPC64() && S.isSVR4ABI();
  }];
}

def G8RC : RegisterClass<"PPC", [i64], 64, (add (sequence "X%u", 2, 12),
                                                (sequence "X%u", 30, 14),
                                                X31, X13, X0, X1, FP8, BP8)> {
  // On non-Darwin PPC64 systems, R2 can be allocated, but must be restored, so
  // put it at the end of the list.
  let AltOrders = [(add (sub G8RC, X2), X2)];
  let AltOrderSelect = [{
    const PPCSubtarget &S = MF.getSubtarget<PPCSubtarget>();
    return S.isPPC64() && S.isSVR4ABI();
  }];
}

// For some instructions r0 is special (representing the value 0 instead of
// the value in the r0 register), and we use these register subclasses to
// prevent r0 from being allocated for use by those instructions.
def GPRC_NOR0 : RegisterClass<"PPC", [i32], 32, (add (sub GPRC, R0), ZERO)> {
  // On non-Darwin PPC64 systems, R2 can be allocated, but must be restored, so
  // put it at the end of the list.
  let AltOrders = [(add (sub GPRC_NOR0, R2), R2)];
  let AltOrderSelect = [{
    const PPCSubtarget &S = MF.getSubtarget<PPCSubtarget>();
    return S.isPPC64() && S.isSVR4ABI();
  }];
}
|
|
|
|
|
|
|
|
def G8RC_NOX0 : RegisterClass<"PPC", [i64], 64, (add (sub G8RC, X0), ZERO8)> {
  // On non-Darwin PPC64 systems, R2 can be allocated, but must be restored, so
  // put it at the end of the list.
  let AltOrders = [(add (sub G8RC_NOX0, X2), X2)];
  let AltOrderSelect = [{
    const PPCSubtarget &S = MF.getSubtarget<PPCSubtarget>();
    return S.isPPC64() && S.isSVR4ABI();
  }];
}
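
// For example, in a D-form memory access such as "lwz r5, 0(r0)" the RA field
// value 0 denotes the constant 0 rather than the contents of r0, so such
// instructions take their base operand from GPRC_NOR0/G8RC_NOX0, with the
// ZERO/ZERO8 pseudo-registers standing in for the r0-as-0 encoding.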
// Allocate volatiles first, then non-volatiles in reverse order. With the SVR4
// ABI the size of the Floating-point register save area is determined by the
// allocated non-volatile register with the lowest register number, as FP
// register N is spilled to offset 8 * (32 - N) below the back chain word of the
// previous stack frame. By allocating non-volatiles in reverse order we make
// sure that the Floating-point register save area is always as small as
// possible because there aren't any unused spill slots.
def F8RC : RegisterClass<"PPC", [f64], 64, (add (sequence "F%u", 0, 13),
                                                (sequence "F%u", 31, 14))>;
def F4RC : RegisterClass<"PPC", [f32], 32, (add F8RC)>;
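
// For example, if F27 is the lowest-numbered non-volatile FP register that
// ends up allocated, it is spilled at offset 8 * (32 - 27) = 40 bytes below
// the back chain word, giving a 40-byte save area (F27-F31); allocating down
// to F14 would instead require 8 * (32 - 14) = 144 bytes.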
def VRRC : RegisterClass<"PPC", [v16i8,v8i16,v4i32,v2i64,v1i128,v4f32], 128,
           (add V2, V3, V4, V5, V0, V1, V6, V7, V8, V9, V10, V11,
                V12, V13, V14, V15, V16, V17, V18, V19, V31, V30,
                V29, V28, V27, V26, V25, V24, V23, V22, V21, V20)>;
// VSX register classes (the allocation order mirrors that of the corresponding
// subregister classes).
def VSLRC : RegisterClass<"PPC", [v4i32,v4f32,v2f64,v2i64], 128,
            (add (sequence "VSL%u", 0, 13),
                 (sequence "VSL%u", 31, 14))>;
def VSHRC : RegisterClass<"PPC", [v4i32,v4f32,v2f64,v2i64], 128,
            (add VSH2, VSH3, VSH4, VSH5, VSH0, VSH1, VSH6, VSH7,
                 VSH8, VSH9, VSH10, VSH11, VSH12, VSH13, VSH14,
                 VSH15, VSH16, VSH17, VSH18, VSH19, VSH31, VSH30,
                 VSH29, VSH28, VSH27, VSH26, VSH25, VSH24, VSH23,
                 VSH22, VSH21, VSH20)>;
def VSRC : RegisterClass<"PPC", [v4i32,v4f32,v2f64,v2i64], 128,
           (add VSLRC, VSHRC)>;
// Register classes for the 64-bit "scalar" VSX subregisters.
def VFRC : RegisterClass<"PPC", [f64], 64,
           (add VF2, VF3, VF4, VF5, VF0, VF1, VF6, VF7,
                VF8, VF9, VF10, VF11, VF12, VF13, VF14,
                VF15, VF16, VF17, VF18, VF19, VF31, VF30,
                VF29, VF28, VF27, VF26, VF25, VF24, VF23,
                VF22, VF21, VF20)>;
def VSFRC : RegisterClass<"PPC", [f64], 64, (add F8RC, VFRC)>;
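
// For example, VSFRC covers both F0-F31 (which overlap the low halves of
// VSX0-VSX31) and VF0-VF31 (the scalar halves of VSX32-VSX63), so a scalar
// f64 may be allocated to any of the 64 VSX registers.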
// Register class for single precision scalars in VSX registers.
def VSSRC : RegisterClass<"PPC", [f32], 32, (add VSFRC)>;
// For QPX
def QFRC : RegisterClass<"PPC", [v4f64], 256, (add (sequence "QF%u", 0, 13),
                                                   (sequence "QF%u", 31, 14))>;
def QSRC : RegisterClass<"PPC", [v4f32], 128, (add QFRC)>;
def QBRC : RegisterClass<"PPC", [v4i1], 256, (add QFRC)> {
  // These are actually stored as floating-point values where a positive
  // number is true and anything else (including NaN) is false.
  let Size = 256;
}
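
// For example, under this representation an all-true <4 x i1> mask occupies a
// full QPX register as the v4f64 value <1.0, 1.0, 1.0, 1.0>, while false
// lanes hold a non-positive value such as -1.0.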
def CRBITRC : RegisterClass<"PPC", [i1], 32,
              (add CR2LT, CR2GT, CR2EQ, CR2UN,
                   CR3LT, CR3GT, CR3EQ, CR3UN,
                   CR4LT, CR4GT, CR4EQ, CR4UN,
                   CR5LT, CR5GT, CR5EQ, CR5UN,
                   CR6LT, CR6GT, CR6EQ, CR6UN,
                   CR7LT, CR7GT, CR7EQ, CR7UN,
                   CR1LT, CR1GT, CR1EQ, CR1UN,
                   CR0LT, CR0GT, CR0EQ, CR0UN)> {
  let Size = 32;
}
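
// For example, CR0 is composed of CR0LT, CR0GT, CR0EQ and CR0UN through the
// sub_lt, sub_gt, sub_eq and sub_un sub-register indices defined above.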
def CRRC : RegisterClass<"PPC", [i32], 32, (add CR0, CR1, CR5, CR6,
                                                CR7, CR2, CR3, CR4)>;
def CRRC0 : RegisterClass<"PPC", [i32], 32, (add CR0)>;
// The CTR registers are not allocatable because they're used by the
// decrement-and-branch instructions, and thus need to stay live across
// multiple basic blocks.
def CTRRC : RegisterClass<"PPC", [i32], 32, (add CTR)> {
  let isAllocatable = 0;
}
def CTRRC8 : RegisterClass<"PPC", [i64], 64, (add CTR8)> {
  let isAllocatable = 0;
}
def VRSAVERC : RegisterClass<"PPC", [i32], 32, (add VRSAVE)>;
def CARRYRC : RegisterClass<"PPC", [i32], 32, (add CARRY)> {
  let CopyCost = -1;
}