Mirror of https://github.com/RPCS3/llvm-mirror.git (synced 2024-11-25 12:12:47 +01:00)
Fix typos.
Summary: This fixes a variety of typos in docs, code and headers.

Subscribers: jholewinski, sanjoy, arsenm, llvm-commits

Differential Revision: http://reviews.llvm.org/D12626

llvm-svn: 247495

Parent: 2e86d35273
Commit: 9ad7a63fa9
@@ -851,7 +851,7 @@ in the *paramattr* field of module block `FUNCTION`_ records, or within the
 *attr* field of function block ``INST_INVOKE`` and ``INST_CALL`` records.

 Entries within ``PARAMATTR_BLOCK`` are constructed to ensure that each is unique
-(i.e., no two indicies represent equivalent attribute lists).
+(i.e., no two indices represent equivalent attribute lists).

 .. _PARAMATTR_CODE_ENTRY:

@@ -904,7 +904,7 @@ table entry, which may be referenced by 0-based index from instructions,
 constants, metadata, type symbol table entries, or other type operator records.

 Entries within ``TYPE_BLOCK`` are constructed to ensure that each entry is
-unique (i.e., no two indicies represent structurally equivalent types).
+unique (i.e., no two indices represent structurally equivalent types).

 .. _TYPE_CODE_NUMENTRY:
 .. _NUMENTRY:
@@ -27,7 +27,7 @@ Supported Instructions
 ^^^^^^^^^^^^^^

 Metadata is only assigned to the conditional branches. There are two extra
-operarands for the true and the false branch.
+operands for the true and the false branch.

 .. code-block:: llvm

@@ -114,12 +114,12 @@ CFG Modifications

 Branch Weight Metatada is not proof against CFG changes. If terminator operands'
 are changed some action should be taken. In other case some misoptimizations may
-occur due to incorrent branch prediction information.
+occur due to incorrect branch prediction information.

 Function Entry Counts
 =====================

-To allow comparing different functions durint inter-procedural analysis and
+To allow comparing different functions during inter-procedural analysis and
 optimization, ``MD_prof`` nodes can also be assigned to a function definition.
 The first operand is a string indicating the name of the associated counter.

@@ -3875,7 +3875,7 @@ DILexicalBlock
 """"""""""""""

 ``DILexicalBlock`` nodes describe nested blocks within a :ref:`subprogram
-<DISubprogram>`. The line number and column numbers are used to dinstinguish
+<DISubprogram>`. The line number and column numbers are used to distinguish
 two lexical blocks at same depth. They are valid targets for ``scope:``
 fields.

@@ -4060,13 +4060,13 @@ alias.

 The metadata identifying each domain is itself a list containing one or two
 entries. The first entry is the name of the domain. Note that if the name is a
-string then it can be combined accross functions and translation units. A
+string then it can be combined across functions and translation units. A
 self-reference can be used to create globally unique domain names. A
 descriptive string may optionally be provided as a second list entry.

 The metadata identifying each scope is also itself a list containing two or
 three entries. The first entry is the name of the scope. Note that if the name
-is a string then it can be combined accross functions and translation units. A
+is a string then it can be combined across functions and translation units. A
 self-reference can be used to create globally unique scope names. A metadata
 reference to the scope's domain is the second entry. A descriptive string may
 optionally be provided as a third list entry.
@@ -5161,7 +5161,7 @@ is a catch block --- one where a personality routine attempts to transfer
 control to catch an exception.
 The ``args`` correspond to whatever information the personality
 routine requires to know if this is an appropriate place to catch the
-exception. Control is tranfered to the ``exception`` label if the
+exception. Control is transfered to the ``exception`` label if the
 ``catchpad`` is not an appropriate handler for the in-flight exception.
 The ``normal`` label should contain the code found in the ``catch``
 portion of a ``try``/``catch`` sequence. The ``resultval`` has the type
@@ -11311,7 +11311,7 @@ The first operand is a vector value to be written to memory. The second operand
 Semantics:
 """"""""""

-The '``llvm.masked.scatter``' intrinsics is designed for writing selected vector elements to arbitrary memory addresses in a single IR operation. The operation may be conditional, when not all bits in the mask are switched on. It is useful for targets that support vector masked scatter and allows vectorizing basic blocks with data and control divergency. Other targets may support this intrinsic differently, for example by lowering it into a sequence of branches that guard scalar store operations.
+The '``llvm.masked.scatter``' intrinsics is designed for writing selected vector elements to arbitrary memory addresses in a single IR operation. The operation may be conditional, when not all bits in the mask are switched on. It is useful for targets that support vector masked scatter and allows vectorizing basic blocks with data and control divergence. Other targets may support this intrinsic differently, for example by lowering it into a sequence of branches that guard scalar store operations.

 ::

@@ -708,7 +708,7 @@ qualified name. Debugger users tend not to enter their search strings as
 "``a::b::c``". So the name entered in the name table must be demangled in
 order to chop it up appropriately and additional names must be manually entered
 into the table to make it effective as a name lookup table for debuggers to
-se.
+use.

 All debuggers currently ignore the "``.debug_pubnames``" table as a result of
 its inconsistent and useless public-only name content making it a waste of
@@ -53,7 +53,7 @@ load barriers, store barriers, and safepoints.
 loads, merely loads of a particular type (in the original source
 language), or none at all.

-#. Analogously, a store barrier is a code fragement that runs
+#. Analogously, a store barrier is a code fragment that runs
 immediately before the machine store instruction, but after the
 computation of the value stored. The most common use of a store
 barrier is to update a 'card table' in a generational garbage
@@ -160,7 +160,7 @@ of the call, we use the ``gc.result`` intrinsic. To get the relocation
 of each pointer in turn, we use the ``gc.relocate`` intrinsic with the
 appropriate index. Note that both the ``gc.relocate`` and ``gc.result`` are
 tied to the statepoint. The combination forms a "statepoint relocation
-sequence" and represents the entitety of a parseable call or 'statepoint'.
+sequence" and represents the entirety of a parseable call or 'statepoint'.

 When lowered, this example would generate the following x86 assembly:

@@ -271,7 +271,7 @@ statepoint.
 transitions based on the function symbols involved (e.g. a call from a
 function with GC strategy "foo" to a function with GC strategy "bar"),
 indirect calls that are also GC transitions must also be supported. This
-requirement is the driving force behing the decision to require that GC
+requirement is the driving force behind the decision to require that GC
 transitions are explicitly marked.

 Let's revisit the sample given above, this time treating the call to ``@foo``
@@ -155,7 +155,7 @@ namespace llvm {
 "Attempt to construct index with 0 pointer.");
 }

-/// Returns true if this is a valid index. Invalid indicies do
+/// Returns true if this is a valid index. Invalid indices do
 /// not point into an index table, and cannot be compared.
 bool isValid() const {
 return lie.getPointer();
@@ -286,9 +286,9 @@ void AsmPrinter::emitDwarfDIE(const DIE &Die) const {

 void
 AsmPrinter::emitDwarfAbbrevs(const std::vector<DIEAbbrev *>& Abbrevs) const {
-  // For each abbrevation.
+  // For each abbreviation.
 for (const DIEAbbrev *Abbrev : Abbrevs) {
-    // Emit the abbrevations code (base 1 index.)
+    // Emit the abbreviations code (base 1 index.)
 EmitULEB128(Abbrev->getNumber(), "Abbreviation Code");

 // Emit the abbreviations data.
@@ -897,11 +897,11 @@ void PEI::replaceFrameIndices(MachineBasicBlock *BB, MachineFunction &Fn,
 if (!MI->getOperand(i).isFI())
 continue;

-      // Frame indicies in debug values are encoded in a target independent
+      // Frame indices in debug values are encoded in a target independent
 // way with simply the frame index and offset rather than any
 // target-specific addressing mode.
 if (MI->isDebugValue()) {
-        assert(i == 0 && "Frame indicies can only appear as the first "
+        assert(i == 0 && "Frame indices can only appear as the first "
 "operand of a DBG_VALUE machine instruction");
 unsigned Reg;
 MachineOperand &Offset = MI->getOperand(1);
@@ -83,7 +83,7 @@ foldConstantCastPair(
 assert(DstTy && DstTy->isFirstClassType() && "Invalid cast destination type");
 assert(CastInst::isCast(opc) && "Invalid cast opcode");

-  // The the types and opcodes for the two Cast constant expressions
+  // The types and opcodes for the two Cast constant expressions
 Type *SrcTy = Op->getOperand(0)->getType();
 Type *MidTy = Op->getType();
 Instruction::CastOps firstOp = Instruction::CastOps(Op->getOpcode());
@@ -1277,9 +1277,9 @@ static bool isMaybeZeroSizedType(Type *Ty) {
 }

 /// IdxCompare - Compare the two constants as though they were getelementptr
-/// indices. This allows coersion of the types to be the same thing.
+/// indices. This allows coercion of the types to be the same thing.
 ///
-/// If the two constants are the "same" (after coersion), return 0. If the
+/// If the two constants are the "same" (after coercion), return 0. If the
 /// first is less than the second, return -1, if the second is less than the
 /// first, return 1. If the constants are not integral, return -2.
 ///
@@ -1999,7 +1999,7 @@ static bool isInBoundsIndices(ArrayRef<IndexTy> Idxs) {
 /// \brief Test whether a given ConstantInt is in-range for a SequentialType.
 static bool isIndexInRangeOfSequentialType(SequentialType *STy,
 const ConstantInt *CI) {
-  // And indicies are valid when indexing along a pointer
+  // And indices are valid when indexing along a pointer
 if (isa<PointerType>(STy))
 return true;

@@ -1655,7 +1655,7 @@ def : InsertVerticalPat <R600_INSERT_ELT_V4, v4f32, f32>;
 // ISel Patterns
 //===----------------------------------------------------------------------===//

-// CND*_INT Pattterns for f32 True / False values
+// CND*_INT Patterns for f32 True / False values

 class CND_INT_f32 <InstR600 cnd, CondCode cc> : Pat <
 (selectcc i32:$src0, 0, f32:$src1, f32:$src2, cc),
@@ -673,7 +673,7 @@ class shift_rotate_reg<string opstr, RegisterOperand RO, InstrItinClass itin,
 [(set RO:$rd, (OpNode RO:$rt, GPR32Opnd:$rs))], itin, FrmR,
 opstr>;

-// Load Upper Imediate
+// Load Upper Immediate
 class LoadUpper<string opstr, RegisterOperand RO, Operand Imm>:
 InstSE<(outs RO:$rt), (ins Imm:$imm16), !strconcat(opstr, "\t$rt, $imm16"),
 [], II_LUI, FrmI, opstr>, IsAsCheapAsAMove {
@@ -357,8 +357,8 @@ bool llvm::isMemorySpaceTransferIntrinsic(Intrinsic::ID id) {
 }

 // consider several special intrinsics in striping pointer casts, and
-// provide an option to ignore GEP indicies for find out the base address only
-// which could be used in simple alias disambigurate.
+// provide an option to ignore GEP indices for find out the base address only
+// which could be used in simple alias disambiguation.
 const Value *
 llvm::skipPointerTransfer(const Value *V, bool ignore_GEP_indices) {
 V = V->stripPointerCasts();
@@ -379,9 +379,9 @@ llvm::skipPointerTransfer(const Value *V, bool ignore_GEP_indices) {
 }

 // consider several special intrinsics in striping pointer casts, and
-// - ignore GEP indicies for find out the base address only, and
+// - ignore GEP indices for find out the base address only, and
 // - tracking PHINode
-// which could be used in simple alias disambigurate.
+// which could be used in simple alias disambiguation.
 const Value *
 llvm::skipPointerTransfer(const Value *V, std::set<const Value *> &processed) {
 if (processed.find(V) != processed.end())
@@ -428,7 +428,7 @@ llvm::skipPointerTransfer(const Value *V, std::set<const Value *> &processed) {
 return V;
 }

-// The following are some useful utilities for debuggung
+// The following are some useful utilities for debugging

 BasicBlock *llvm::getParentBlock(Value *v) {
 if (BasicBlock *B = dyn_cast<BasicBlock>(v))
@@ -484,7 +484,7 @@ Instruction *llvm::getInst(Value *base, char *instName) {
 return nullptr;
 }

-// Dump an instruction by nane
+// Dump an instruction by name
 void llvm::dumpInst(Value *base, char *instName) {
 Instruction *I = getInst(base, instName);
 if (I)
@@ -511,7 +511,7 @@ static Instruction *unpackLoadToAggregate(InstCombiner &IC, LoadInst &LI) {
 if (!T->isAggregateType())
 return nullptr;

-  assert(LI.getAlignment() && "Alignement must be set at this point");
+  assert(LI.getAlignment() && "Alignment must be set at this point");

 if (auto *ST = dyn_cast<StructType>(T)) {
 // If the struct only have one element, we unpack.
@@ -681,7 +681,7 @@ static bool canReplaceGEPIdxWithZero(InstCombiner &IC, GetElementPtrInst *GEPI,
 // FIXME: If the GEP is not inbounds, and there are extra indices after the
 // one we'll replace, those could cause the address computation to wrap
 // (rendering the IsAllNonNegative() check below insufficient). We can do
-  // better, ignoring zero indicies (and other indicies we can prove small
+  // better, ignoring zero indices (and other indices we can prove small
 // enough not to wrap).
 if (Idx+1 != GEPI->getNumOperands() && !GEPI->isInBounds())
 return false;
@@ -857,7 +857,7 @@ Instruction *InstCombiner::visitLoadInst(LoadInst &LI) {
 ///
 /// \returns true if the store was successfully combined away. This indicates
 /// the caller must erase the store instruction. We have to let the caller erase
-/// the store instruction sas otherwise there is no way to signal whether it was
+/// the store instruction as otherwise there is no way to signal whether it was
 /// combined or not: IC.EraseInstFromFunction returns a null pointer.
 static bool combineStoreToValueType(InstCombiner &IC, StoreInst &SI) {
 // FIXME: We could probably with some care handle both volatile and atomic
@@ -350,7 +350,7 @@ bool EEVT::TypeSet::EnforceVector(TreePattern &TP) {


 /// EnforceSmallerThan - 'this' must be a smaller VT than Other. For vectors
-/// this shoud be based on the element type. Update this and other based on
+/// this should be based on the element type. Update this and other based on
 /// this information.
 bool EEVT::TypeSet::EnforceSmallerThan(EEVT::TypeSet &Other, TreePattern &TP) {
 if (TP.hasError())
@@ -456,7 +456,7 @@ bool EEVT::TypeSet::EnforceSmallerThan(EEVT::TypeSet &Other, TreePattern &TP) {
 return MadeChange;
 }

-/// EnforceVectorEltTypeIs - 'this' is now constrainted to be a vector type
+/// EnforceVectorEltTypeIs - 'this' is now constrained to be a vector type
 /// whose element is specified by VTOperand.
 bool EEVT::TypeSet::EnforceVectorEltTypeIs(MVT::SimpleValueType VT,
 TreePattern &TP) {
@@ -484,7 +484,7 @@ bool EEVT::TypeSet::EnforceVectorEltTypeIs(MVT::SimpleValueType VT,
 return MadeChange;
 }

-/// EnforceVectorEltTypeIs - 'this' is now constrainted to be a vector type
+/// EnforceVectorEltTypeIs - 'this' is now constrained to be a vector type
 /// whose element is specified by VTOperand.
 bool EEVT::TypeSet::EnforceVectorEltTypeIs(EEVT::TypeSet &VTOperand,
 TreePattern &TP) {
@@ -530,7 +530,7 @@ bool EEVT::TypeSet::EnforceVectorEltTypeIs(EEVT::TypeSet &VTOperand,
 return MadeChange;
 }

-/// EnforceVectorSubVectorTypeIs - 'this' is now constrainted to be a
+/// EnforceVectorSubVectorTypeIs - 'this' is now constrained to be a
 /// vector type specified by VTOperand.
 bool EEVT::TypeSet::EnforceVectorSubVectorTypeIs(EEVT::TypeSet &VTOperand,
 TreePattern &TP) {
@@ -611,7 +611,7 @@ bool EEVT::TypeSet::EnforceVectorSubVectorTypeIs(EEVT::TypeSet &VTOperand,
 return MadeChange;
 }

-/// EnforceVectorSameNumElts - 'this' is now constrainted to
+/// EnforceVectorSameNumElts - 'this' is now constrained to
 /// be a vector with same num elements as VTOperand.
 bool EEVT::TypeSet::EnforceVectorSameNumElts(EEVT::TypeSet &VTOperand,
 TreePattern &TP) {
@@ -2815,7 +2815,7 @@ static bool InferFromPattern(CodeGenInstruction &InstInfo,

 if (InstInfo.mayLoad != PatInfo.mayLoad && !InstInfo.mayLoad_Unset) {
 // Allow explicitly setting mayLoad = 1, even when the pattern has no loads.
-    // Some targets translate imediates to loads.
+    // Some targets translate immediates to loads.
 if (!InstInfo.mayLoad) {
 Error = true;
 PrintError(PatDef->getLoc(), "Pattern doesn't match mayLoad = " +
@@ -3347,7 +3347,7 @@ void CodeGenDAGPatterns::VerifyInstructionFlags() {
 if (InstInfo.InferredFrom &&
 InstInfo.InferredFrom != InstInfo.TheDef &&
 InstInfo.InferredFrom != PTM.getSrcRecord())
-        PrintError(InstInfo.InferredFrom->getLoc(), "inferred from patttern");
+        PrintError(InstInfo.InferredFrom->getLoc(), "inferred from pattern");
 }
 }
 if (Errors)
@@ -3573,7 +3573,7 @@ static void CombineChildVariants(TreePatternNode *Orig,
 }

 // Increment indices to the next permutation by incrementing the
-    // indicies from last index backward, e.g., generate the sequence
+    // indices from last index backward, e.g., generate the sequence
 // [0, 0], [0, 1], [1, 0], [1, 1].
 int IdxsIdx;
 for (IdxsIdx = Idxs.size() - 1; IdxsIdx >= 0; --IdxsIdx) {
@@ -3724,7 +3724,7 @@ static void GenerateVariantsOf(TreePatternNode *N,
 // operands are the commutative operands, and there might be more operands
 // after those.
 assert(NC >= 3 &&
-           "Commutative intrinsic should have at least 3 childrean!");
+           "Commutative intrinsic should have at least 3 children!");
 std::vector<std::vector<TreePatternNode*> > Variants;
 Variants.push_back(ChildVariants[0]); // Intrinsic id.
 Variants.push_back(ChildVariants[2]);
@@ -132,19 +132,19 @@ namespace EEVT {
 /// this an other based on this information.
 bool EnforceSmallerThan(EEVT::TypeSet &Other, TreePattern &TP);

-    /// EnforceVectorEltTypeIs - 'this' is now constrainted to be a vector type
+    /// EnforceVectorEltTypeIs - 'this' is now constrained to be a vector type
 /// whose element is VT.
 bool EnforceVectorEltTypeIs(EEVT::TypeSet &VT, TreePattern &TP);

-    /// EnforceVectorEltTypeIs - 'this' is now constrainted to be a vector type
+    /// EnforceVectorEltTypeIs - 'this' is now constrained to be a vector type
 /// whose element is VT.
 bool EnforceVectorEltTypeIs(MVT::SimpleValueType VT, TreePattern &TP);

-    /// EnforceVectorSubVectorTypeIs - 'this' is now constrainted to
+    /// EnforceVectorSubVectorTypeIs - 'this' is now constrained to
 /// be a vector type VT.
 bool EnforceVectorSubVectorTypeIs(EEVT::TypeSet &VT, TreePattern &TP);

-    /// EnforceVectorSameNumElts - 'this' is now constrainted to
+    /// EnforceVectorSameNumElts - 'this' is now constrained to
 /// be a vector with same num elements as VT.
 bool EnforceVectorSameNumElts(EEVT::TypeSet &VT, TreePattern &TP);

|
Loading…
Reference in New Issue
Block a user