//===------ PPCLoopInstrFormPrep.cpp - Loop Instr Form Prep Pass ----------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
//
// This file implements a pass to prepare loops for PPC preferred addressing
// modes, leveraging different instruction forms (e.g. the DS/DQ form, or the
// D/DS form with update).
// Additional PHIs are created for loop induction variables used by load/store
// instructions so that preferred addressing modes can be used.
//
// 1: DS/DQ form preparation, prepare the load/store instructions so that they
// can satisfy the DS/DQ form displacement requirements.
// Generically, this means transforming loops like this:
// for (int i = 0; i < n; ++i) {
// unsigned long x1 = *(unsigned long *)(p + i + 5);
// unsigned long x2 = *(unsigned long *)(p + i + 9);
// }
//
// to look like this:
//
// unsigned NewP = p + 5;
// for (int i = 0; i < n; ++i) {
// unsigned long x1 = *(unsigned long *)(i + NewP);
// unsigned long x2 = *(unsigned long *)(i + NewP + 4);
// }
//
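// As an added illustration (not from the original comment; register names are
// placeholders): after the rewrite both accesses use displacements 0 and 4
// from one common base, both multiples of 4, so the backend can select
// DS-form loads roughly like "ld rX, 0(rBase)" and "ld rY, 4(rBase)" instead
// of materializing each address with separate adds.
//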
// 2: D/DS form with update preparation, prepare the load/store instructions so
// that we can use update form to do pre-increment.
// Generically, this means transforming loops like this:
// for (int i = 0; i < n; ++i)
// array[i] = c;
//
// to look like this:
//
// T *p = array[-1];
// for (int i = 0; i < n; ++i)
// *++p = c;
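//
// As an added illustration (not from the original comment), keeping the
// "pre-decremented" pointer in its own PHI lets the backend select an
// update-form store such as "stdu rC, 8(rP)", which stores and advances the
// base register in one instruction; the register names and the 8-byte element
// size are assumptions for the example.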
//===----------------------------------------------------------------------===//
#include "PPC.h"
#include "PPCSubtarget.h"
#include "PPCTargetMachine.h"
#include "llvm/ADT/DepthFirstIterator.h"
#include "llvm/ADT/SmallPtrSet.h"
#include "llvm/ADT/SmallSet.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/ADT/Statistic.h"
#include "llvm/Analysis/LoopInfo.h"
#include "llvm/Analysis/ScalarEvolution.h"
#include "llvm/Analysis/ScalarEvolutionExpressions.h"
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/CFG.h"
#include "llvm/IR/Dominators.h"
#include "llvm/IR/Instruction.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/IntrinsicInst.h"
#include "llvm/IR/IntrinsicsPowerPC.h"
#include "llvm/IR/Module.h"
#include "llvm/IR/Type.h"
#include "llvm/IR/Value.h"
#include "llvm/InitializePasses.h"
#include "llvm/Pass.h"
#include "llvm/Support/Casting.h"
#include "llvm/Support/CommandLine.h"
#include "llvm/Support/Debug.h"
#include "llvm/Transforms/Scalar.h"
#include "llvm/Transforms/Utils.h"
#include "llvm/Transforms/Utils/BasicBlockUtils.h"
#include "llvm/Transforms/Utils/Local.h"
#include "llvm/Transforms/Utils/LoopUtils.h"
#include "llvm/Transforms/Utils/ScalarEvolutionExpander.h"
#include <cassert>
#include <iterator>
#include <utility>
#define DEBUG_TYPE "ppc-loop-instr-form-prep"
using namespace llvm;
static cl::opt<unsigned> MaxVarsPrep("ppc-formprep-max-vars",
cl::Hidden, cl::init(24),
cl::desc("Potential common base number threshold per function for PPC loop "
"prep"));
static cl::opt<bool> PreferUpdateForm("ppc-formprep-prefer-update",
cl::init(true), cl::Hidden,
cl::desc("Prefer update form when DS form is also an update form"));
// The sum of the following 3 per-loop thresholds for all loops can not be
// larger than MaxVarsPrep.
// Currently, the thresholds for each kind of prep are experimental values on
// Power9.
static cl::opt<unsigned> MaxVarsUpdateForm("ppc-preinc-prep-max-vars",
cl::Hidden, cl::init(3),
cl::desc("Potential PHI threshold per loop for PPC loop prep of update "
"form"));
static cl::opt<unsigned> MaxVarsDSForm("ppc-dsprep-max-vars",
cl::Hidden, cl::init(3),
cl::desc("Potential PHI threshold per loop for PPC loop prep of DS form"));
static cl::opt<unsigned> MaxVarsDQForm("ppc-dqprep-max-vars",
cl::Hidden, cl::init(8),
cl::desc("Potential PHI threshold per loop for PPC loop prep of DQ form"));
// It would not be profitable if the common base has only one load/store;
// ISel should already be able to choose the best load/store form based on
// the offset for a single load/store. Set the minimal profitable value
// default to 2 and make it an option.
static cl::opt<unsigned> DispFormPrepMinThreshold("ppc-dispprep-min-threshold",
cl::Hidden, cl::init(2),
cl::desc("Minimal common base load/store instructions triggering DS/DQ form "
"preparation"));
STATISTIC(PHINodeAlreadyExistsUpdate, "PHI node already in pre-increment form");
STATISTIC(PHINodeAlreadyExistsDS, "PHI node already in DS form");
STATISTIC(PHINodeAlreadyExistsDQ, "PHI node already in DQ form");
STATISTIC(DSFormChainRewritten, "Num of DS form chain rewritten");
STATISTIC(DQFormChainRewritten, "Num of DQ form chain rewritten");
STATISTIC(UpdFormChainRewritten, "Num of update form chain rewritten");
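
// Usage note (illustrative, not from the original source): the cl::opt
// thresholds above can be tuned on the command line when experimenting, e.g.
// "llc -ppc-formprep-max-vars=16 ..." or, through clang,
// "clang -mllvm -ppc-dqprep-max-vars=4 ..."; the values shown are arbitrary
// examples. Whether a preparation actually fired can then be checked via the
// STATISTIC counters above, e.g. by passing "-stats" in a build with
// statistics enabled.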
namespace {
struct BucketElement {
BucketElement(const SCEVConstant *O, Instruction *I) : Offset(O), Instr(I) {}
BucketElement(Instruction *I) : Offset(nullptr), Instr(I) {}
const SCEVConstant *Offset;
Instruction *Instr;
};
struct Bucket {
Bucket(const SCEV *B, Instruction *I) : BaseSCEV(B),
Elements(1, BucketElement(I)) {}
const SCEV *BaseSCEV;
SmallVector<BucketElement, 16> Elements;
};
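
// Illustrative example (assumed, not from the original source): for the two
// loads in the file header comment, *(p + i + 5) and *(p + i + 9), a single
// Bucket would hold a BaseSCEV of roughly {(5 + %p),+,1}<%loop> and two
// BucketElements whose constant Offsets are 0 and 4; the exact SCEV shape
// depends on the types and on how the addresses are written in IR.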
// "UpdateForm" is not a real PPC instruction form; it stands for a D-form
// load/store with update, like ldu/stdu, or the Prefetch intrinsic.
// For DS form instructions, their displacements must be a multiple of 4.
// For DQ form instructions, their displacements must be a multiple of 16.
enum InstrForm { UpdateForm = 1, DSForm = 4, DQForm = 16 };
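
// For example (illustrative): "ld r3, 4(r4)" is encodable as a DS-form load,
// while a displacement of 6 is not, and DQ-form accesses such as lxv/stxv
// need displacements that are multiples of 16. The enum values appear to be
// chosen to match those required multiples, with 1 meaning no extra
// displacement restriction for the update-form case.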
class PPCLoopInstrFormPrep : public FunctionPass {
public:
static char ID; // Pass ID, replacement for typeid
PPCLoopInstrFormPrep() : FunctionPass(ID) {
initializePPCLoopInstrFormPrepPass(*PassRegistry::getPassRegistry());
}
PPCLoopInstrFormPrep(PPCTargetMachine &TM) : FunctionPass(ID), TM(&TM) {
initializePPCLoopInstrFormPrepPass(*PassRegistry::getPassRegistry());
}
void getAnalysisUsage(AnalysisUsage &AU) const override {
AU.addPreserved<DominatorTreeWrapperPass>();
AU.addRequired<LoopInfoWrapperPass>();
AU.addPreserved<LoopInfoWrapperPass>();
AU.addRequired<ScalarEvolutionWrapperPass>();
}
bool runOnFunction(Function &F) override;
private:
PPCTargetMachine *TM = nullptr;
const PPCSubtarget *ST;
DominatorTree *DT;
LoopInfo *LI;
ScalarEvolution *SE;
bool PreserveLCSSA;
/// Successful preparation number for Update/DS/DQ form in all innermost
/// loops. One successful preparation will put one common base out of the
/// loop; this may lead to register pressure, just like LICM does.
/// Make sure the total preparation number can be controlled by an option.
unsigned SuccPrepCount;
bool runOnLoop(Loop *L);
/// Check if the required PHI node already exists in Loop \p L.
bool alreadyPrepared(Loop *L, Instruction* MemI,
const SCEV *BasePtrStartSCEV,
const SCEVConstant *BasePtrIncSCEV,
InstrForm Form);
/// Collect candidates in Loop \p L that satisfy the condition
/// (\p isValidCandidate() returns true).
SmallVector<Bucket, 16> collectCandidates(
Loop *L,
std::function<bool(const Instruction *, const Value *, const Type *)>
isValidCandidate,
unsigned MaxCandidateNum);
/// Add a candidate to candidates \p Buckets.
void addOneCandidate(Instruction *MemI, const SCEV *LSCEV,
SmallVector<Bucket, 16> &Buckets,
unsigned MaxCandidateNum);
/// Prepare all candidates in \p Buckets for update form.
bool updateFormPrep(Loop *L, SmallVector<Bucket, 16> &Buckets);
/// Prepare all candidates in \p Buckets for displacement form, currently
/// DS/DQ form.
bool dispFormPrep(Loop *L, SmallVector<Bucket, 16> &Buckets,
InstrForm Form);
/// Prepare for one chain \p BucketChain, find the best base element and
/// update all other elements in \p BucketChain accordingly.
/// \p Form is used to find the best base element.
/// On success, the best base element must be stored as the first element of
/// \p BucketChain.
/// Return false if no base element is found, otherwise return true.
bool prepareBaseForDispFormChain(Bucket &BucketChain,
InstrForm Form);
/// Prepare for one chain \p BucketChain, find the best base element and
/// update all other elements in \p BucketChain accordingly.
/// On success, the best base element must be stored as the first element of
/// \p BucketChain.
/// Return false if no base element is found, otherwise return true.
bool prepareBaseForUpdateFormChain(Bucket &BucketChain);
/// Rewrite load/store instructions in \p BucketChain according to
/// preparation.
bool rewriteLoadStores(Loop *L, Bucket &BucketChain,
SmallSet<BasicBlock *, 16> &BBChanged,
InstrForm Form);
};
} // end anonymous namespace
char PPCLoopInstrFormPrep::ID = 0;
static const char *name = "Prepare loop for ppc preferred instruction forms";
INITIALIZE_PASS_BEGIN(PPCLoopInstrFormPrep, DEBUG_TYPE, name, false, false)
INITIALIZE_PASS_DEPENDENCY(LoopInfoWrapperPass)
INITIALIZE_PASS_DEPENDENCY(ScalarEvolutionWrapperPass)
INITIALIZE_PASS_END(PPCLoopInstrFormPrep, DEBUG_TYPE, name, false, false)
static constexpr StringRef PHINodeNameSuffix = ".phi";
static constexpr StringRef CastNodeNameSuffix = ".cast";
static constexpr StringRef GEPNodeIncNameSuffix = ".inc";
static constexpr StringRef GEPNodeOffNameSuffix = ".off";

FunctionPass *llvm::createPPCLoopInstrFormPrepPass(PPCTargetMachine &TM) {
  return new PPCLoopInstrFormPrep(TM);
}

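// Check whether the given base pointer, looking through any bitcasts, comes
// from an inbounds GEP; used to decide whether the GEPs created by this pass
// may also be marked inbounds.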
static bool IsPtrInBounds(Value *BasePtr) {
  Value *StrippedBasePtr = BasePtr;
  while (BitCastInst *BC = dyn_cast<BitCastInst>(StrippedBasePtr))
    StrippedBasePtr = BC->getOperand(0);
  if (GetElementPtrInst *GEP = dyn_cast<GetElementPtrInst>(StrippedBasePtr))
    return GEP->isInBounds();

  return false;
}

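// Build a name for a newly created value by appending Suffix to the name of
// the original value I; returns an empty string if I is unnamed.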
static std::string getInstrName(const Value *I, StringRef Suffix) {
  assert(I && "Invalid parameter!");
  if (I->hasName())
    return (I->getName() + Suffix).str();
  else
    return "";
}

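// Return the pointer operand of a candidate memory access: loads, stores,
// and the prefetch/lxvp/stxvp intrinsics are handled; any other instruction
// yields nullptr.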
static Value *GetPointerOperand(Value *MemI) {
  if (LoadInst *LMemI = dyn_cast<LoadInst>(MemI)) {
    return LMemI->getPointerOperand();
  } else if (StoreInst *SMemI = dyn_cast<StoreInst>(MemI)) {
    return SMemI->getPointerOperand();
  } else if (IntrinsicInst *IMemI = dyn_cast<IntrinsicInst>(MemI)) {
    if (IMemI->getIntrinsicID() == Intrinsic::prefetch ||
        IMemI->getIntrinsicID() == Intrinsic::ppc_vsx_lxvp)
      return IMemI->getArgOperand(0);
    if (IMemI->getIntrinsicID() == Intrinsic::ppc_vsx_stxvp)
      return IMemI->getArgOperand(1);
  }

  return nullptr;
}

bool PPCLoopInstrFormPrep::runOnFunction(Function &F) {
  if (skipFunction(F))
    return false;

  LI = &getAnalysis<LoopInfoWrapperPass>().getLoopInfo();
  SE = &getAnalysis<ScalarEvolutionWrapperPass>().getSE();
  auto *DTWP = getAnalysisIfAvailable<DominatorTreeWrapperPass>();
  DT = DTWP ? &DTWP->getDomTree() : nullptr;
  PreserveLCSSA = mustPreserveAnalysisID(LCSSAID);
  ST = TM ? TM->getSubtargetImpl(F) : nullptr;
  SuccPrepCount = 0;

  bool MadeChange = false;

  for (auto I = LI->begin(), IE = LI->end(); I != IE; ++I)
    for (auto L = df_begin(*I), LE = df_end(*I); L != LE; ++L)
      MadeChange |= runOnLoop(*L);

  return MadeChange;
}

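// Place one candidate memory access into a bucket whose base address differs
// from the candidate's address by a compile-time constant; if no such bucket
// exists yet and fewer than MaxCandidateNum buckets are open, start a new one.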
void PPCLoopInstrFormPrep::addOneCandidate(Instruction *MemI, const SCEV *LSCEV,
                                           SmallVector<Bucket, 16> &Buckets,
                                           unsigned MaxCandidateNum) {
  assert((MemI && GetPointerOperand(MemI)) &&
         "Candidate should be a memory instruction.");
  assert(LSCEV && "Invalid SCEV for Ptr value.");
  bool FoundBucket = false;
  for (auto &B : Buckets) {
    const SCEV *Diff = SE->getMinusSCEV(LSCEV, B.BaseSCEV);
    if (const auto *CDiff = dyn_cast<SCEVConstant>(Diff)) {
      B.Elements.push_back(BucketElement(CDiff, MemI));
      FoundBucket = true;
      break;
    }
  }

  if (!FoundBucket) {
    if (Buckets.size() == MaxCandidateNum)
      return;
    Buckets.push_back(Bucket(LSCEV, MemI));
  }
}

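// Walk every instruction in the loop and gather loads, stores, and memory
// intrinsics whose address is an add-recurrence of this loop (address space 0
// only, and not loop-invariant), bucketing the ones accepted by the
// isValidCandidate predicate.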
SmallVector<Bucket, 16> PPCLoopInstrFormPrep::collectCandidates(
    Loop *L,
    std::function<bool(const Instruction *, const Value *, const Type *)>
        isValidCandidate,
    unsigned MaxCandidateNum) {
  SmallVector<Bucket, 16> Buckets;
  for (const auto &BB : L->blocks())
    for (auto &J : *BB) {
      Value *PtrValue;
      Type *PointerElementType;

      if (LoadInst *LMemI = dyn_cast<LoadInst>(&J)) {
        PtrValue = LMemI->getPointerOperand();
        PointerElementType = LMemI->getType();
      } else if (StoreInst *SMemI = dyn_cast<StoreInst>(&J)) {
        PtrValue = SMemI->getPointerOperand();
        PointerElementType = SMemI->getValueOperand()->getType();
      } else if (IntrinsicInst *IMemI = dyn_cast<IntrinsicInst>(&J)) {
        PointerElementType = Type::getInt8Ty(J.getContext());
        if (IMemI->getIntrinsicID() == Intrinsic::prefetch ||
            IMemI->getIntrinsicID() == Intrinsic::ppc_vsx_lxvp) {
          PtrValue = IMemI->getArgOperand(0);
        } else if (IMemI->getIntrinsicID() == Intrinsic::ppc_vsx_stxvp) {
          PtrValue = IMemI->getArgOperand(1);
        } else continue;
      } else continue;

      unsigned PtrAddrSpace = PtrValue->getType()->getPointerAddressSpace();
      if (PtrAddrSpace)
        continue;

      if (L->isLoopInvariant(PtrValue))
        continue;

      const SCEV *LSCEV = SE->getSCEVAtScope(PtrValue, L);
      const SCEVAddRecExpr *LARSCEV = dyn_cast<SCEVAddRecExpr>(LSCEV);
      if (!LARSCEV || LARSCEV->getLoop() != L)
        continue;

      if (isValidCandidate(&J, PtrValue, PointerElementType))
        addOneCandidate(&J, LSCEV, Buckets, MaxCandidateNum);
    }
  return Buckets;
}

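// Pick a new base for a displacement-form (DS/DQ) chain so that as many
// elements as possible end up with an offset that satisfies the form's
// alignment constraint (Form encodes the required multiple, e.g. 4 for DS).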
bool PPCLoopInstrFormPrep::prepareBaseForDispFormChain(Bucket &BucketChain,
                                                       InstrForm Form) {
  // RemainderOffsetInfo details:
  // key: value of (Offset urem DispConstraint). For DSForm, it can
  //      be [0, 4).
  // first of pair: the index of the first BucketElement whose remainder is
  //                equal to key. For key 0, this value must be 0.
  // second of pair: number of load/stores with the same remainder.
  DenseMap<unsigned, std::pair<unsigned, unsigned>> RemainderOffsetInfo;

  for (unsigned j = 0, je = BucketChain.Elements.size(); j != je; ++j) {
    if (!BucketChain.Elements[j].Offset)
      RemainderOffsetInfo[0] = std::make_pair(0, 1);
    else {
      unsigned Remainder =
          BucketChain.Elements[j].Offset->getAPInt().urem(Form);
      if (RemainderOffsetInfo.find(Remainder) == RemainderOffsetInfo.end())
        RemainderOffsetInfo[Remainder] = std::make_pair(j, 1);
      else
        RemainderOffsetInfo[Remainder].second++;
    }
  }
  // Currently we choose the most profitable base as the one with the maximum
  // number of load/stores sharing the same remainder.
  // FIXME: adjust the base selection strategy according to the load/store
  // offset distribution.
  // For example, if we have one candidate chain for DS form preparation that
  // contains the following load/stores with different remainders:
  //   1: 10 load/stores whose remainder is 1;
  //   2: 9 load/stores whose remainder is 2;
  //   3: 1 for remainder 3 and 0 for remainder 0;
  // we will choose the first load/store whose remainder is 1 as the base and
  // adjust all other load/stores according to the new base, so we will get
  // 10 DS form and 10 X form.
  // But we should be more clever: for this case we could use two bases, one
  // for remainder 1 and the other for remainder 2, and thus get 19 DS form
  // and 1 X form.
  unsigned MaxCountRemainder = 0;
  for (unsigned j = 0; j < (unsigned)Form; j++)
    if ((RemainderOffsetInfo.find(j) != RemainderOffsetInfo.end()) &&
        RemainderOffsetInfo[j].second >
            RemainderOffsetInfo[MaxCountRemainder].second)
      MaxCountRemainder = j;

  // Abort when there are too few insts with a common base.
  if (RemainderOffsetInfo[MaxCountRemainder].second < DispFormPrepMinThreshold)
    return false;

  // If the first value is the most profitable, there is no need to adjust the
  // BucketChain elements, as the first value was already subtracted from them
  // when collecting.
  if (MaxCountRemainder == 0)
    return true;

  // Adjust the load/stores to the newly chosen base.
  const SCEV *Offset =
      BucketChain.Elements[RemainderOffsetInfo[MaxCountRemainder].first].Offset;
  BucketChain.BaseSCEV = SE->getAddExpr(BucketChain.BaseSCEV, Offset);
  for (auto &E : BucketChain.Elements) {
    if (E.Offset)
      E.Offset = cast<SCEVConstant>(SE->getMinusSCEV(E.Offset, Offset));
    else
      E.Offset = cast<SCEVConstant>(SE->getNegativeSCEV(Offset));
  }

  std::swap(BucketChain.Elements[RemainderOffsetInfo[MaxCountRemainder].first],
            BucketChain.Elements[0]);
  return true;
}

// FIXME: implement a more clever base choosing policy.
// Currently we always choose an existing load/store offset. This may lead to
// suboptimal code sequences. For example, for one DS chain with offsets
// {-32769, 2003, 2007, 2011}, we choose -32769 as the base offset, and the
// remaining displacements for the load/stores are {0, 34772, 34776, 34780}.
// Though each offset is now a multiple of 4, they cannot be represented as a
// signed 16-bit immediate.
bool PPCLoopInstrFormPrep::prepareBaseForUpdateFormChain(Bucket &BucketChain) {
  // We have a choice now of which instruction's memory operand we use as the
  // base for the generated PHI. Always picking the first instruction in each
  // bucket does not work well, specifically because that instruction might
  // be a prefetch (and there are no pre-increment dcbt variants). Otherwise,
  // the choice is somewhat arbitrary, because the backend will happily
  // generate direct offsets from both the pre-incremented and
  // post-incremented pointer values. Thus, we'll pick the first non-prefetch
  // instruction in each bucket, and adjust the recurrence and other offsets
  // accordingly.
  for (int j = 0, je = BucketChain.Elements.size(); j != je; ++j) {
    if (auto *II = dyn_cast<IntrinsicInst>(BucketChain.Elements[j].Instr))
      if (II->getIntrinsicID() == Intrinsic::prefetch)
        continue;

    // If we'd otherwise pick the first element anyway, there's nothing to do.
    if (j == 0)
      break;

    // If our chosen element has no offset from the base pointer, there's
    // nothing to do.
    if (!BucketChain.Elements[j].Offset ||
        BucketChain.Elements[j].Offset->isZero())
      break;

    const SCEV *Offset = BucketChain.Elements[j].Offset;
    BucketChain.BaseSCEV = SE->getAddExpr(BucketChain.BaseSCEV, Offset);
    for (auto &E : BucketChain.Elements) {
      if (E.Offset)
        E.Offset = cast<SCEVConstant>(SE->getMinusSCEV(E.Offset, Offset));
      else
        E.Offset = cast<SCEVConstant>(SE->getNegativeSCEV(Offset));
    }

    std::swap(BucketChain.Elements[j], BucketChain.Elements[0]);
    break;
  }
  return true;
}

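// Rewrite all accesses in a bucket against one new base pointer: a PHI is
// created in the loop header and seeded in the preheader with the chain's
// start address (pre-decremented by one stride for update form), the stride
// is added back via a GEP inside the loop, and every other element is
// rewritten as a constant offset from the new base.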
bool PPCLoopInstrFormPrep::rewriteLoadStores(Loop *L, Bucket &BucketChain,
                                             SmallSet<BasicBlock *, 16> &BBChanged,
                                             InstrForm Form) {
  bool MadeChange = false;
  const SCEVAddRecExpr *BasePtrSCEV =
      cast<SCEVAddRecExpr>(BucketChain.BaseSCEV);
  if (!BasePtrSCEV->isAffine())
    return MadeChange;

  LLVM_DEBUG(dbgs() << "PIP: Transforming: " << *BasePtrSCEV << "\n");

  assert(BasePtrSCEV->getLoop() == L && "AddRec for the wrong loop?");

  // The instruction corresponding to the Bucket's BaseSCEV must be the first
  // in the vector of elements.
  Instruction *MemI = BucketChain.Elements.begin()->Instr;
  Value *BasePtr = GetPointerOperand(MemI);
  assert(BasePtr && "No pointer operand");

  Type *I8Ty = Type::getInt8Ty(MemI->getParent()->getContext());
  Type *I8PtrTy = Type::getInt8PtrTy(MemI->getParent()->getContext(),
                                     BasePtr->getType()->getPointerAddressSpace());

  if (!SE->isLoopInvariant(BasePtrSCEV->getStart(), L))
    return MadeChange;

  const SCEVConstant *BasePtrIncSCEV =
      dyn_cast<SCEVConstant>(BasePtrSCEV->getStepRecurrence(*SE));
  if (!BasePtrIncSCEV)
    return MadeChange;

  // Some DS form load/store instructions also have an update form when the
  // stride is a multiple of 4. Use the update form if it is preferred.
  bool CanPreInc = (Form == UpdateForm ||
                    ((Form == DSForm) && !BasePtrIncSCEV->getAPInt().urem(4) &&
                     PreferUpdateForm));
  const SCEV *BasePtrStartSCEV = nullptr;
  if (CanPreInc)
    BasePtrStartSCEV =
        SE->getMinusSCEV(BasePtrSCEV->getStart(), BasePtrIncSCEV);
  else
    BasePtrStartSCEV = BasePtrSCEV->getStart();

  if (!isSafeToExpand(BasePtrStartSCEV, *SE))
    return MadeChange;

  if (alreadyPrepared(L, MemI, BasePtrStartSCEV, BasePtrIncSCEV, Form))
    return MadeChange;

  LLVM_DEBUG(dbgs() << "PIP: New start is: " << *BasePtrStartSCEV << "\n");

  BasicBlock *Header = L->getHeader();
  unsigned HeaderLoopPredCount = pred_size(Header);
  BasicBlock *LoopPredecessor = L->getLoopPredecessor();

  PHINode *NewPHI =
      PHINode::Create(I8PtrTy, HeaderLoopPredCount,
                      getInstrName(MemI, PHINodeNameSuffix),
                      Header->getFirstNonPHI());

  SCEVExpander SCEVE(*SE, Header->getModule()->getDataLayout(), "pistart");
  Value *BasePtrStart = SCEVE.expandCodeFor(BasePtrStartSCEV, I8PtrTy,
                                            LoopPredecessor->getTerminator());

  // Note that LoopPredecessor might occur in the predecessor list multiple
  // times, and we need to add it the right number of times.
  for (auto PI : predecessors(Header)) {
    if (PI != LoopPredecessor)
      continue;

    NewPHI->addIncoming(BasePtrStart, LoopPredecessor);
  }

  Instruction *PtrInc = nullptr;
  Instruction *NewBasePtr = nullptr;
  if (CanPreInc) {
    Instruction *InsPoint = &*Header->getFirstInsertionPt();
    PtrInc = GetElementPtrInst::Create(
        I8Ty, NewPHI, BasePtrIncSCEV->getValue(),
        getInstrName(MemI, GEPNodeIncNameSuffix), InsPoint);
    cast<GetElementPtrInst>(PtrInc)->setIsInBounds(IsPtrInBounds(BasePtr));
    for (auto PI : predecessors(Header)) {
      if (PI == LoopPredecessor)
        continue;

      NewPHI->addIncoming(PtrInc, PI);
    }
    if (PtrInc->getType() != BasePtr->getType())
      NewBasePtr = new BitCastInst(
          PtrInc, BasePtr->getType(),
          getInstrName(PtrInc, CastNodeNameSuffix), InsPoint);
    else
      NewBasePtr = PtrInc;
  } else {
    // Note that LoopPredecessor might occur in the predecessor list multiple
    // times; those entries already have their incoming value, so no further
    // incoming values must be added for them to the PHI here.
    for (auto PI : predecessors(Header)) {
      if (PI == LoopPredecessor)
        continue;

      // For the latch predecessor, we need to insert a GEP just before the
      // terminator to increase the address.
      BasicBlock *BB = PI;
      Instruction *InsPoint = BB->getTerminator();
      PtrInc = GetElementPtrInst::Create(
          I8Ty, NewPHI, BasePtrIncSCEV->getValue(),
          getInstrName(MemI, GEPNodeIncNameSuffix), InsPoint);

      cast<GetElementPtrInst>(PtrInc)->setIsInBounds(IsPtrInBounds(BasePtr));

      NewPHI->addIncoming(PtrInc, PI);
    }
    PtrInc = NewPHI;
    if (NewPHI->getType() != BasePtr->getType())
      NewBasePtr =
          new BitCastInst(NewPHI, BasePtr->getType(),
                          getInstrName(NewPHI, CastNodeNameSuffix),
                          &*Header->getFirstInsertionPt());
    else
      NewBasePtr = NewPHI;
  }

  // Clear the rewriter cache, because values that are in the rewriter's cache
  // can be deleted below, causing the AssertingVH in the cache to trigger.
  SCEVE.clear();

  if (Instruction *IDel = dyn_cast<Instruction>(BasePtr))
    BBChanged.insert(IDel->getParent());
  BasePtr->replaceAllUsesWith(NewBasePtr);
  RecursivelyDeleteTriviallyDeadInstructions(BasePtr);

  // Keep track of the replacement pointer values we've inserted so that we
  // don't generate more pointer values than necessary.
  SmallPtrSet<Value *, 16> NewPtrs;
  NewPtrs.insert(NewBasePtr);

  for (auto I = std::next(BucketChain.Elements.begin()),
       IE = BucketChain.Elements.end(); I != IE; ++I) {
    Value *Ptr = GetPointerOperand(I->Instr);
    assert(Ptr && "No pointer operand");
    if (NewPtrs.count(Ptr))
      continue;

    Instruction *RealNewPtr;
    if (!I->Offset || I->Offset->getValue()->isZero()) {
      RealNewPtr = NewBasePtr;
    } else {
      Instruction *PtrIP = dyn_cast<Instruction>(Ptr);
      if (PtrIP && isa<Instruction>(NewBasePtr) &&
          cast<Instruction>(NewBasePtr)->getParent() == PtrIP->getParent())
        PtrIP = nullptr;
      else if (PtrIP && isa<PHINode>(PtrIP))
        PtrIP = &*PtrIP->getParent()->getFirstInsertionPt();
      else if (!PtrIP)
        PtrIP = I->Instr;

      GetElementPtrInst *NewPtr = GetElementPtrInst::Create(
          I8Ty, PtrInc, I->Offset->getValue(),
          getInstrName(I->Instr, GEPNodeOffNameSuffix), PtrIP);
      if (!PtrIP)
        NewPtr->insertAfter(cast<Instruction>(PtrInc));
      NewPtr->setIsInBounds(IsPtrInBounds(Ptr));
      RealNewPtr = NewPtr;
    }

    if (Instruction *IDel = dyn_cast<Instruction>(Ptr))
      BBChanged.insert(IDel->getParent());

    Instruction *ReplNewPtr;
    if (Ptr->getType() != RealNewPtr->getType()) {
      ReplNewPtr = new BitCastInst(RealNewPtr, Ptr->getType(),
                                   getInstrName(Ptr, CastNodeNameSuffix));
      ReplNewPtr->insertAfter(RealNewPtr);
    } else
      ReplNewPtr = RealNewPtr;

    Ptr->replaceAllUsesWith(ReplNewPtr);
    RecursivelyDeleteTriviallyDeadInstructions(Ptr);

    NewPtrs.insert(RealNewPtr);
  }

  MadeChange = true;

  SuccPrepCount++;

  if (Form == DSForm && !CanPreInc)
    DSFormChainRewritten++;
  else if (Form == DQForm)
    DQFormChainRewritten++;
  else if (Form == UpdateForm || (Form == DSForm && CanPreInc))
    UpdFormChainRewritten++;

  return MadeChange;
}

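// Prepare each bucket for update-form (pre-increment) accesses and rewrite
// its loads/stores; dead PHIs in the touched blocks are removed afterwards.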
bool PPCLoopInstrFormPrep::updateFormPrep(Loop *L,
                                          SmallVector<Bucket, 16> &Buckets) {
  bool MadeChange = false;
  if (Buckets.empty())
    return MadeChange;
  SmallSet<BasicBlock *, 16> BBChanged;
  for (auto &Bucket : Buckets)
    // The base address of each bucket is transformed into a phi and the other
    // elements are rewritten based on the new base.
    if (prepareBaseForUpdateFormChain(Bucket))
      MadeChange |= rewriteLoadStores(L, Bucket, BBChanged, UpdateForm);

  if (MadeChange)
    for (auto &BB : L->blocks())
      if (BBChanged.count(BB))
        DeleteDeadPHIs(BB);
  return MadeChange;
}

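// Prepare buckets for a displacement form (DS or DQ) and rewrite them;
// buckets with fewer than DispFormPrepMinThreshold elements are skipped.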
bool PPCLoopInstrFormPrep::dispFormPrep(Loop *L, SmallVector<Bucket, 16> &Buckets,
                                        InstrForm Form) {
  bool MadeChange = false;

  if (Buckets.empty())
    return MadeChange;

  SmallSet<BasicBlock *, 16> BBChanged;
  for (auto &Bucket : Buckets) {
    if (Bucket.Elements.size() < DispFormPrepMinThreshold)
      continue;
    if (prepareBaseForDispFormChain(Bucket, Form))
      MadeChange |= rewriteLoadStores(L, Bucket, BBChanged, Form);
  }

  if (MadeChange)
    for (auto &BB : L->blocks())
      if (BBChanged.count(BB))
        DeleteDeadPHIs(BB);
  return MadeChange;
}

// In order to prepare for the preferred instruction form, a PHI is added.
// This function checks whether that PHI already exists and returns true if
// it finds an existing PHI with the same start and increment as the one we
// wanted to create.
bool PPCLoopInstrFormPrep::alreadyPrepared(Loop *L, Instruction* MemI,
                                           const SCEV *BasePtrStartSCEV,
                                           const SCEVConstant *BasePtrIncSCEV,
                                           InstrForm Form) {
  BasicBlock *BB = MemI->getParent();
  if (!BB)
    return false;

  BasicBlock *PredBB = L->getLoopPredecessor();
  BasicBlock *LatchBB = L->getLoopLatch();

  if (!PredBB || !LatchBB)
    return false;

  // Run through the PHIs and see if we have some that look like a preparation.
  iterator_range<BasicBlock::phi_iterator> PHIIter = BB->phis();
  for (auto &CurrentPHI : PHIIter) {
    PHINode *CurrentPHINode = dyn_cast<PHINode>(&CurrentPHI);
    if (!CurrentPHINode)
      continue;

    if (!SE->isSCEVable(CurrentPHINode->getType()))
      continue;

    const SCEV *PHISCEV = SE->getSCEVAtScope(CurrentPHINode, L);

    const SCEVAddRecExpr *PHIBasePtrSCEV = dyn_cast<SCEVAddRecExpr>(PHISCEV);
    if (!PHIBasePtrSCEV)
      continue;

    const SCEVConstant *PHIBasePtrIncSCEV =
        dyn_cast<SCEVConstant>(PHIBasePtrSCEV->getStepRecurrence(*SE));
    if (!PHIBasePtrIncSCEV)
      continue;

    if (CurrentPHINode->getNumIncomingValues() == 2) {
      if ((CurrentPHINode->getIncomingBlock(0) == LatchBB &&
           CurrentPHINode->getIncomingBlock(1) == PredBB) ||
          (CurrentPHINode->getIncomingBlock(1) == LatchBB &&
           CurrentPHINode->getIncomingBlock(0) == PredBB)) {
        if (PHIBasePtrIncSCEV == BasePtrIncSCEV) {
          // The existing PHI (CurrentPHINode) has the same start and increment
          // as the PHI that we wanted to create.
          if (Form == UpdateForm &&
              PHIBasePtrSCEV->getStart() == BasePtrStartSCEV) {
            ++PHINodeAlreadyExistsUpdate;
            return true;
          }
          if (Form == DSForm || Form == DQForm) {
            const SCEVConstant *Diff = dyn_cast<SCEVConstant>(
                SE->getMinusSCEV(PHIBasePtrSCEV->getStart(), BasePtrStartSCEV));
            if (Diff && !Diff->getAPInt().urem(Form)) {
              if (Form == DSForm)
                ++PHINodeAlreadyExistsDS;
              else
                ++PHINodeAlreadyExistsDQ;
              return true;
            }
          }
        }
      }
    }
  }
  return false;
}

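// Per-loop driver: only innermost loops are prepared, a usable preheader is
// ensured first, and then candidate buckets are collected with form-specific
// predicates before being rewritten.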
bool PPCLoopInstrFormPrep::runOnLoop(Loop *L) {
  bool MadeChange = false;

  // Only prepare the innermost loop.
  if (!L->isInnermost())
    return MadeChange;

  // Return if enough preparation has already been done.
  if (SuccPrepCount >= MaxVarsPrep)
    return MadeChange;

  LLVM_DEBUG(dbgs() << "PIP: Examining: " << *L << "\n");

  BasicBlock *LoopPredecessor = L->getLoopPredecessor();
  // If there is no loop predecessor, or the loop predecessor's terminator
  // returns a value (which might contribute to determining the loop's
  // iteration space), insert a new preheader for the loop.
  if (!LoopPredecessor ||
      !LoopPredecessor->getTerminator()->getType()->isVoidTy()) {
    LoopPredecessor = InsertPreheaderForLoop(L, DT, LI, nullptr, PreserveLCSSA);
    if (LoopPredecessor)
      MadeChange = true;
  }
  if (!LoopPredecessor) {
    LLVM_DEBUG(dbgs() << "PIP fails since no predecessor for current loop.\n");
    return MadeChange;
  }

  // Check if a load/store has update form. This lambda is passed to
  // collectCandidates, which collects the kinds of candidates the lambda
  // accepts.
  auto isUpdateFormCandidate = [&](const Instruction *I, const Value *PtrValue,
                                   const Type *PointerElementType) {
    assert((PtrValue && I) && "Invalid parameter!");
    // There are no update forms for Altivec vector load/stores.
    if (ST && ST->hasAltivec() && PointerElementType->isVectorTy())
      return false;
    // There are no update forms for the P10 lxvp/stxvp intrinsics.
    auto *II = dyn_cast<IntrinsicInst>(I);
    if (II && ((II->getIntrinsicID() == Intrinsic::ppc_vsx_lxvp) ||
               II->getIntrinsicID() == Intrinsic::ppc_vsx_stxvp))
      return false;
    // See getPreIndexedAddressParts: the displacement for LDU/STDU has to be
    // a multiple of 4 (DS form). For i64 loads/stores whose displacement fits
    // in a 16-bit signed field but isn't a multiple of 4, doing this pre-inc
    // preparation is useless and may even break an originally well-formed
    // addressing mode.
    if (PointerElementType->isIntegerTy(64)) {
      const SCEV *LSCEV = SE->getSCEVAtScope(const_cast<Value *>(PtrValue), L);
      const SCEVAddRecExpr *LARSCEV = dyn_cast<SCEVAddRecExpr>(LSCEV);
      if (!LARSCEV || LARSCEV->getLoop() != L)
        return false;
      if (const SCEVConstant *StepConst =
              dyn_cast<SCEVConstant>(LARSCEV->getStepRecurrence(*SE))) {
        const APInt &ConstInt = StepConst->getValue()->getValue();
        if (ConstInt.isSignedIntN(16) && ConstInt.srem(4) != 0)
          return false;
      }
    }
    return true;
  };

  // Check if a load/store has DS form.
  auto isDSFormCandidate = [](const Instruction *I, const Value *PtrValue,
                              const Type *PointerElementType) {
    assert((PtrValue && I) && "Invalid parameter!");
    if (isa<IntrinsicInst>(I))
      return false;
    return (PointerElementType->isIntegerTy(64)) ||
           (PointerElementType->isFloatTy()) ||
           (PointerElementType->isDoubleTy()) ||
           (PointerElementType->isIntegerTy(32) &&
            llvm::any_of(I->users(),
                         [](const User *U) { return isa<SExtInst>(U); }));
  };

  // Check if a load/store has DQ form.
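  // DQ-form loads/stores (e.g. lxv/stxv, and the Power10 paired lxvp/stxvp
  // reached via the ppc_vsx_lxvp/stxvp intrinsics) encode the displacement in
  // a 12-bit field that is implicitly scaled by 16, so the displacement must
  // be a multiple of 16.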
  auto isDQFormCandidate = [&](const Instruction *I, const Value *PtrValue,
                               const Type *PointerElementType) {
    assert((PtrValue && I) && "Invalid parameter!");
    // Check if it is a P10 lxvp/stxvp intrinsic.
    auto *II = dyn_cast<IntrinsicInst>(I);
    if (II)
      return II->getIntrinsicID() == Intrinsic::ppc_vsx_lxvp ||
             II->getIntrinsicID() == Intrinsic::ppc_vsx_stxvp;
    // Check if it is a P9 vector load/store.
    return ST && ST->hasP9Vector() && (PointerElementType->isVectorTy());
  };

  // Collect buckets of comparable addresses used by loads, stores, and
  // intrinsics for update form.
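  // Update-form instructions (e.g. ldu/stdu, lfdu/stfdu) write the computed
  // effective address back into the base register, so strided accesses can
  // step their pointer "for free" instead of needing a separate add.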
  SmallVector<Bucket, 16> UpdateFormBuckets =
      collectCandidates(L, isUpdateFormCandidate, MaxVarsUpdateForm);

  // Prepare for update form.
  if (!UpdateFormBuckets.empty())
    MadeChange |= updateFormPrep(L, UpdateFormBuckets);

  // Collect buckets of comparable addresses used by loads and stores for DS
  // form.
  SmallVector<Bucket, 16> DSFormBuckets =
      collectCandidates(L, isDSFormCandidate, MaxVarsDSForm);

  // Prepare for DS form.
  if (!DSFormBuckets.empty())
    MadeChange |= dispFormPrep(L, DSFormBuckets, DSForm);

  // Collect buckets of comparable addresses used by loads and stores for DQ
  // form.
  SmallVector<Bucket, 16> DQFormBuckets =
      collectCandidates(L, isDQFormCandidate, MaxVarsDQForm);

  // Prepare for DQ form.
  if (!DQFormBuckets.empty())
    MadeChange |= dispFormPrep(L, DQFormBuckets, DQForm);

  return MadeChange;
}