For now this is distinct from isCodeGenOnly, as code-gen-only
instructions can (and often do) still have encoding information
associated with them. Once we've migrated all of them over to true
pseudo-instructions that are lowered to real instructions prior to
the printer/emitter, we can remove isCodeGenOnly and just use isPseudo.
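A minimal sketch of what such a true pseudo-instruction might look like (the
Pseudo base class and instruction name here are illustrative, not taken from
an actual target):
  // Carries no encoding information; must be expanded to a real
  // instruction before the printer/emitter runs.
  let isPseudo = 1 in
  def MYLOADADDR : Pseudo<(outs GPR:$dst), (ins i32imm:$addr), []>;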
llvm-svn: 134539
itineraries.
- Refactor TargetSubtarget to be based on MCSubtargetInfo.
- Change tablegen generated subtarget info to initialize MCSubtargetInfo
and hide more details from targets.
llvm-svn: 134257
be the first encoded as the first feature. It then uses the CPU name to look up
features / scheduling itinerary even though clients know full well the CPU name
being used to query these properties.
The fix is to just have the clients explicitly pass the CPU name!
llvm-svn: 134127
Unlike Thumb1, Thumb2 does not have dedicated encodings for adjusting the
stack pointer. It can just use the normal add-register-immediate encoding
since it can use all registers as a source, not just R0-R7. The extra
instruction definitions are just duplicates of the normal instructions with
the (not well enforced) constraint that the source register was SP.
llvm-svn: 134114
The tSpill and tRestore instructions are just copies of the tSTRspi and
tLDRspi instructions, respectively. Just use those directly instead.
llvm-svn: 134092
sink them into MC layer.
- Added MCInstrInfo, which captures the tablegen generated static data. Changed
TargetInstrInfo so it's based off MCInstrInfo.
llvm-svn: 134021
Correctly parse the forms of the Thumb mov-immediate instruction:
1. 8-bit immediate 0-255.
2. 12-bit shifted-immediate.
The 16-bit immediate "movw" form is also legal with just a "mov" mnemonic,
but is not yet supported; more parser logic is necessary there due to fixups.
llvm-svn: 133966
Sorry, this was a bad idea. Within clang these builtins are in a separate
"ARM" namespace, but the actual builtin names should clearly distinguish that
they are target specific.
llvm-svn: 133832
This caused linker errors when linking both libLLVMX86Desc and libLLVMX86CodeGen
into a single binary (for example when building a monolithic libLLVM shared library).
llvm-svn: 133791
target machine from those that are only needed by codegen. The goal is to
sink the essential target description into MC layer so we can start building
MC based tools without needing to link in the entire codegen.
First step is to refactor TargetRegisterInfo. This patch added a base class
MCRegisterInfo which TargetRegisterInfo is derived from. Changed TableGen to
separate register description from the rest of the stuff.
llvm-svn: 133782
A RegisterTuples instance is used to synthesize super-registers by
zipping together lists of sub-registers. This is useful for generating
pseudo-registers representing register sequence constraints like 'two
consecutive GPRs', or 'an even-odd pair of floating point registers'.
The RegisterTuples def can be used in register set operations when
building register classes. That is the only way of accessing the
synthesized super-registers.
For example, the ARM QQ register class of pseudo-registers could have
been formed like this:
// Form pairs Q0_Q1, Q2_Q3, ...
def QQPairs : RegisterTuples<[qsub_0, qsub_1],
                             [(decimate QPR, 2),
                              (decimate (shl QPR, 1), 2)]>;
def QQ : RegisterClass<..., (add QQPairs)>;
Similarly, pseudo-registers representing '3 consecutive D-regs with
wraparound' look like:
// Form D0_D1_D2, D1_D2_D3, ..., D30_D31_D0, D31_D0_D1.
def DSeqTriples : RegisterTuples<[dsub_0, dsub_1, dsub_2],
                                 [(rotl DPR, 0),
                                  (rotl DPR, 1),
                                  (rotl DPR, 2)]>;
TableGen automatically computes aliasing information for the synthesized
registers.
Register tuples are still somewhat experimental. We still need to see
how they interact with MC.
llvm-svn: 133407
Targets that need to change the default allocation order should use the
AltOrders mechanism instead. See the X86 and ARM targets for examples.
The allocation_order_begin() and allocation_order_end() methods have been
replaced with getRawAllocationOrder(), and there are further support
functions in RegisterClassInfo.
It is no longer possible to insert arbitrary code into generated
register classes. This is a feature.
llvm-svn: 133332
A register class can define AltOrders and AltOrderSelect instead of
defining method protos and bodies. The AltOrders lists can be defined
with set operations, and TableGen can verify that the alternative
allocation orders only contain valid registers.
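A hedged sketch of the new syntax (the class contents are abridged and the
subtarget test is illustrative):
  def GR8 : RegisterClass<"X86", [i8], 8,
                          (add AL, CL, DL, AH, CH, DH, BL, BH)> {
    // Alternative order that avoids the high-byte registers.
    let AltOrders = [(sub GR8, AH, BH, CH, DH)];
    let AltOrderSelect = [{
      return MF.getTarget().getSubtarget<X86Subtarget>().is64Bit();
    }];
  }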
This is currently an opt-in feature, and it is still possible to
override allocation_order_begin/end. That will not be true for long.
llvm-svn: 133320
At the time I wrote this code (circa 2007), TargetRegisterInfo was using a std::set to perform these queries. Switching to the static hashtables was an obvious improvement, but in reality there's no reason to do anything other than scan.
With this change, total LLC time on a whole-program 403.gcc is reduced by approximately 1.5%, almost all of which comes from a 15% reduction in LiveVariables time. It also reduces the binary size of LLC by 86KB, thanks to eliminating a bunch of very large static tables.
llvm-svn: 133051
This prepares tablegen to compute register lists from set theoretic dag
expressions. This doesn't really make any difference as long as
Target.td still declares RegisterClass::MemberList as [Register].
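Once MemberList becomes a dag, a member list could be written roughly like
this (target and register names are illustrative):
  def IntRegs : RegisterClass<"MyTarget", [i32], 32,
                              (add (sequence "R%u", 0, 15), SP, LR)>;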
llvm-svn: 133043
Make the Elements vector private and expose an ArrayRef through
getOrder() instead. getOrder will eventually provide multiple
user-specified allocation orders.
Use the sorted member set for member and subclass tests. Clean up a lot
of ad hoc searches.
llvm-svn: 133040
Measure the worst case number of probes for a miss instead of the less
conservative number of probes required for an insertion.
Lower the limit to < 6 probes worst case.
This doubles the size of the ARM and X86 hash tables, other targets are
unaffected. LiveVariables runs 12% faster with this change.
<rdar://problem/9598545>
llvm-svn: 132999
Make the hash tables as small as possible while ensuring that all
lookups can be done in less than 8 probes.
Cut the aliases hash table in half by only storing a < b pairs - it
is a symmetric relation.
Use larger multipliers on the initial hash function to ensure that it
properly covers the whole table, and to resolve some clustering in the
very regular ARM register bank.
This reduces the size of most of these tables by 4x - 8x. For instance,
the ARM tables shrink from 48 KB to 8 KB.
llvm-svn: 132888
The constant hash tables for sub-registers and overlaps are generated
the same way, so extract a function to generate and print the hash
table.
Also use the information computed by CodeGenRegisters.cpp instead of the
locally computed data.
llvm-svn: 132886
Besides moving structural computations to CodeGenRegisters.cpp, this
also well-defines the order of these lists:
- Sub-register lists come from a pre-order traversal of the graph
defined by the SubRegs lists in the .td files.
- Super-register lists are topologically ordered so no register comes
before any of its sub-registers. When the sub-register graph is not a
tree, independent super-registers appear in numerical order.
- Lists of overlapping registers are ordered according to register
number.
This reverses the order of the super-regs lists, but nobody was
depending on that. The previous order of the overlaps lists was odd, and
it may have depended on the precise behavior of std::stable_sort.
The old computations are still there, but will be removed shortly.
llvm-svn: 132881
I'll be moving some more code there to gather all of the
register-specific stuff in one place. Currently it is shared between
CodeGenTarget and RegisterInfoEmitter.
The plan is that CodeGenRegisters can compute the full register bank
structure while RegisterInfoEmitter only will handle the printing part.
llvm-svn: 132788
A TableGen backend can define how certain classes can be expanded into
ordered sets of defs, typically by evaluating a specific field in the
record. The SetTheory class can then evaluate DAG expressions that refer
to these named sets.
A number of standard set and list operations are predefined, and the
backend can add more specialized operators if needed. The -print-sets
backend is used by SetTheory.td to provide examples.
This is intended to simplify how register classes are defined:
def GR32_NOSP : RegisterClass<"X86", [i32], 32, (sub GR32, ESP)>;
llvm-svn: 132621
Some register classes are only used for instruction operand constraints.
They should never be used for virtual registers. Previously, those
register classes were given an empty allocation order, but now you can
say 'let isAllocatable=0' in the register class definition.
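For example, a constraint-only class could now be written like this (a
sketch modeled on the X86 status flags class):
  def CCR : RegisterClass<"X86", [i32], 32, (add EFLAGS)> {
    // Only used to express operand constraints; never allocate from it.
    let isAllocatable = 0;
  }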
TableGen calculates if a register is part of any allocatable register
class, and makes that information available in TargetRegisterDesc::inAllocatableClass.
The goal here is to eliminate use cases for overriding allocation_order_*
methods.
llvm-svn: 132508
same dwarf number. This will be used for creating a dwarf number to register
mapping.
The only case that needs this so far is the XMM/YMM registers that unfortunately
do have the same numbers.
llvm-svn: 132314
switch. With this newfound organization, teach tblgen how not to give
all intrinsics the 'nounwind' attribute. Introduce a new intrinsic,
llvm.eh.resume, which does not have this attribute. Documentation and uses
to follow.
llvm-svn: 132252
There was no way to check if a given register/mode pair was valid. We now return
an error code (-2) instead of asserting. If anyone thinks that an assert
at this point is really needed, we can autogen a hasValidDwarfRegNum instead.
llvm-svn: 132236
Unfortunately, my only testcase for this is fragile, and the ARM AsmParser can't round trip the instruction in question.
<rdar://problem/9345702>
llvm-svn: 130410
On the x86-64 and thumb2 targets, some registers are more expensive to encode
than others in the same register class.
Add a CostPerUse field to the TableGen register description, and make it
available from TRI->getCostPerUse. This represents the cost of a REX prefix or a
32-bit instruction encoding required by choosing a high register.
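A rough sketch of how the field might be set on the expensive registers (the
register class used here is illustrative, not the real X86 definition):
  // Registers needing a REX prefix get a small encoding penalty.
  let CostPerUse = 1 in {
    def R8D : X86Reg32<"r8d">;
    def R9D : X86Reg32<"r9d">;
  }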
Teach the greedy register allocator to prefer cheap registers for busy live
ranges (as indicated by spill weight).
llvm-svn: 129864
the generated FastISel. X86 doesn't need to generate code to match ADD16ri8
since ADD16ri will do just fine. This is a small codesize win in the generated
instruction selector.
llvm-svn: 129692
value constraints on them (when defined as ImmLeaf's). This is particularly important
for X86-64, where almost all reg/imm instructions take a i64immSExt32 immediate operand,
which has a value constraint. Before this patch we ended up iseling the examples into
such amazing code as:
movabsq $7, %rax
imulq %rax, %rdi
movq %rdi, %rax
ret
now we produce:
imulq $7, %rdi, %rax
ret
This dramatically shrinks the generated code at -O0 on x86-64.
llvm-svn: 129691
kind of predicate: one that is specific to imm nodes. The predicate function
specified here just checks an int64_t directly instead of messing around with
SDNode's. The virtue of this is that it means that fastisel and other things
can reason about these predicates.
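For example, a sign-extended 32-bit immediate constraint can be written as an
ImmLeaf roughly like this (a sketch of the new predicate form):
  // The predicate sees the immediate as a plain int64_t, no SDNode needed.
  def i64immSExt32 : ImmLeaf<i64, [{ return isInt<32>(Imm); }]>;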
llvm-svn: 129675
structure and fix some fixmes. We now have a TreePredicateFn class
that handles all of the decoding of these things. This is an internal
cleanup that has no impact on the code generated by tblgen.
llvm-svn: 129670
2. implement rdar://9289501 - fast isel should fold trivial multiplies to shifts
3. teach tblgen to handle shift immediates that are different sizes than the
shifted operands, eliminating some code from the X86 fast isel backend.
4. Have FastISel::SelectBinaryOp use (the poorly named) FastEmit_ri_ function
instead of FastEmit_ri to simplify code.
llvm-svn: 129666
with the newer, cleaner model. It uses the IAPrinter class to hold the
information that is needed to match an instruction with its alias. This also
takes into account the available features of the platform.
There is one bit of ugliness. The way the logic determines if a pattern is
unique is O(N**2), which is gross. But in reality, the number of items it's
checking against isn't large. So while it's N**2, it shouldn't be a massive time
sink.
llvm-svn: 129110
- Also emit a list of packages and groups sorted by name
- Avoid iterating over DenseSet so that the output of the arrays is deterministic.
llvm-svn: 128489
According to A8.6.189 STM/STMIA/STMEA (Encoding T1), there's only tSTMIA_UPD available.
Ignore tSTMIA for the decoder emitter and add a test case for that.
llvm-svn: 128246
Set the encoding bits to {0,?,?,0}, not 0. Plus delegate the disassembly of ADR to
the more generic ADDri/SUBri instructions, and add a test case for that.
llvm-svn: 128234
instruction set. This code adds support for the VEX prefix
and for the YMM registers accessible on AVX-enabled
architectures. Instruction table support that enables AVX
instructions for the disassembler is in an upcoming patch.
llvm-svn: 127644
CodeGenRegister entries. Use this information to more intelligently build
the literal register entries in the DAGISel matcher table. Specifically,
use a single-byte OPC_EmitRegister entry for registers with a value of
less than 256 and an OPC_EmitRegister2 entry for registers with a larger value.
rdar://9066491
llvm-svn: 127456
InstAlias<{alias}, {aliasee}>;
The InstAlias instruction should be able to go from the MCInst to the
{alias}. All of the information is there to match the MCInst with the
{aliasee}. From there, it's a simple matter to emit the {alias}, with the
correct operands from the {aliasee}.
The code this patch generates can be used by the InstPrinter to automatically
print out the alias without having to write special C++ code to handle the
situation.
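As a hedged illustration (this particular alias is only an example), given:
  def : InstAlias<"mov $dst, $src", (MOV32rr GR32:$dst, GR32:$src)>;
the generated code can recognize an MCInst whose opcode and operands match the
{aliasee} and print the "mov" {alias} string instead of the full form.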
This is a WIP, and therefore there are several limitations. For instance, it cannot
handle AsmOperands at the moment. It also doesn't know what to do when two
{alias}es match the same {aliasee}. (Currently, it just ignores those two cases
and allows the printInstruction method to handle them.)
llvm-svn: 126538
A major part of its (eventual) goal is to support a much cleaner separation between disassembly callbacks
provided by the target and the disassembler emitter itself, i.e. not requiring hardcoding of knowledge in tblgen
like the existing disassembly emitters do.
The hope is that some day this will allow us to replace the existing non-Thumb ARM disassembler and remove
some of the hacks the old one introduced to tblgen.
llvm-svn: 125966
- Add custom operand matching for imod and iflags.
- Rename SplitMnemonicAndCC to SplitMnemonic since it splits more than CC
from mnemonic.
- While adding ".w" as an operand, don't change "Head" to avoid passing the
wrong mnemonic to ParseOperand.
- Add asm parser tests.
- Add disassembler tests just to make sure it can catch all cps versions.
llvm-svn: 125489
Teach the AsmMatcher handling to distinguish between an error custom-parsing
an operand and a failure to match. The former should propagate the error
upwards, while the latter should continue attempting to parse with
alternative matchers.
Update the ARM asm parser accordingly.
llvm-svn: 125426
When matching operands for a candidate opcode match in the auto-generated
AsmMatcher, check each operand against the expected operand match class.
Previously, operands were classified independently of the opcode being
handled, which led to difficulties when operand match classes were
more complicated than simple subclass relationships.
llvm-svn: 125245
Motivation: improve the parsing of unusual operand forms (anything other than
registers or immediates).
This commit implements only the generic support. The ARM specific modifications
will come next.
A table like the one below is autogenerated for every instruction
containing a 'ParserMethod' in its AsmOperandClass:
static const OperandMatchEntry OperandMatchTable[20] = {
  /* Mnemonic, Operand List Mask, Operand Class, Features */
  { "cdp", 29 /* 0, 2, 3, 4 */, MCK_Coproc, Feature_IsThumb|Feature_HasV6 },
  { "cdp", 58 /* 1, 3, 4, 5 */, MCK_Coproc, Feature_IsARM },
A matcher function very similar (but a lot more naive) to
MatchInstructionImpl scans the table. After the mnemonic match, the
features are checked and if the "to be parsed" operand index is
present in the mask, there's a real match. Then, a switch like the one
below dispatches the parsing to the custom method provided in
'ParserMethod':
  case MCK_Coproc:
    return TryParseCoprocessorOperandName(Operands);
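The 'ParserMethod' hook that drives this lives on the operand's class,
roughly like so (a sketch; the coprocessor operand is used only as an
illustration):
  def CoprocNumAsmOperand : AsmOperandClass {
    let Name = "Coproc";
    let ParserMethod = "TryParseCoprocessorOperandName";
  }
  def p_imm : Operand<i32> {
    let ParserMatchClass = CoprocNumAsmOperand;
  }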
llvm-svn: 125030
(yes, this is different from R_ARM_CALL)
- Adds a new method getARMBranchTargetOpValue() which handles the
necessary distinction between the conditional and unconditional br/bl
needed for ARM/ELF
At least for ARM mode, the needed fixup for conditional versus unconditional
br/bl is identical, but the ARM docs and existing ARM tools expect this
reloc type...
Added a few FIXME's for future naming fixups in ARMInstrInfo.td
llvm-svn: 124895
library.
Installs tblgen (required by Clang).
Translates handling of user settings and platform-dependent options to
its own file, where it can be included by another project.
Installs the .cmake files required by projects like Clang.
llvm-svn: 124816
The algorithm for identifying which operand is invalid now always points to
some operand rather than sometimes pointing to the mnemonic. ErrorInfo is now
the index of the highest operand that fails to match for any of the records
with a matching mnemonic, and is no longer the ~0U value that was returned
when the mnemonic matched but not every record with a matching mnemonic had
the same mismatching operand index.
llvm-svn: 124734
makes type checking for extract_subvector and insert_subvector more
robust and will allow stricter typechecking of more patterns in the
future.
This change handles int and fp as disjoint sets so that it will
enforce integer types to be smaller than the largest integer type and
fp types to be smaller than the largest fp type. There is no attempt
to check type sizes across the int/fp sets.
llvm-svn: 124672
When an operand class is defined with MIOperandInfo set to a list of
suboperands, the AsmMatcher has so far required that operand to also define
a custom ParserMatchClass, and InstAlias patterns have not been able to
set the individual suboperands separately. This patch removes both of those
restrictions. If a "compound" operand does not override the default
ParserMatchClass, then the AsmMatcher will now parse its suboperands
separately. If an InstAlias operand has the same class as the corresponding
compound operand, then it will be handled as before; but if that check fails,
TableGen will now try to match up a sequence of InstAlias operands with the
corresponding suboperands.
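As a sketch of what a "compound" operand looks like (names are illustrative),
the suboperands come from MIOperandInfo:
  def memri : Operand<i32> {
    let MIOperandInfo = (ops GPR:$base, i32imm:$offset);
    // With this change, leaving ParserMatchClass at its default lets the
    // AsmMatcher parse $base and $offset as separate operands.
  }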
llvm-svn: 124314
This will be used to check patterns referencing a forthcoming
INSERT_SUBVECTOR SDNode. INSERT_SUBVECTOR in turn is very useful for
matching to VINSERTF128 instructions and complements the already
existing EXTRACT_SUBVECTOR SDNode.
llvm-svn: 124145
Unfortunately, while this is the "right" thing to do, it breaks some ARM
asm parsing tests because MemMode5 and ThumbMemModeReg are ambiguous. This
is tricky to resolve since neither is a subset of the other.
XFAIL the test for now. The old way was broken in other ways, just ways
we didn't happen to be testing, and our ARM asm parsing is going to require
significant revisiting at a later point anyways.
llvm-svn: 123786
This is needed to allow an InstAlias for an instruction with an "OptionalDef"
result register (like ARM's cc_out) where you want to set the optional register
to reg0.
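A hedged sketch of such an alias (instruction and mnemonic are illustrative):
  // zero_reg fills the optional cc_out slot with reg0, i.e. "don't set flags".
  def : InstAlias<"foo $Rd, $Rm", (FOOrr GPR:$Rd, GPR:$Rm, zero_reg)>;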
llvm-svn: 123490
the symbolic immediate names used for these instructions, fixing their pretty-printers, and
adding proper encoding information for them.
With this, we can properly pretty-print and encode assembly like:
mrc p15, #0, r3, c13, c0, #3
Fixes <rdar://problem/8857858>.
llvm-svn: 123404
in the right direction. It eliminated some hacks and will unblock codegen
work. But it's far from being done. It doesn't reject illegal expressions,
e.g. (FOO - :lower16:BAR). It also doesn't work in Thumb2 mode at all.
llvm-svn: 123369
Some quad-register intrinsics with lane operands only take a double-register
operand for the vector containing the lane. The valid range of lane numbers
is then half as big as you would expect from the quad-register type.
Note: This currently has no effect because those intrinsics are now handled
entirely in the header file using __builtin_shufflevector, which does its own
range checking, but I want to use this for generating tests.
llvm-svn: 121867
registers that alias Reg, including itself. This is almost the same as the
existing getAliasSet() method, except for the inclusion of Reg.
The name matches the reflexive TRI::regsOverlap(x, y) relation.
It is very common to do stuff to a register and all its aliases:
  stuff(Reg)
  for (const unsigned *Alias = TRI->getAliasSet(Reg); *Alias; ++Alias)
    stuff(*Alias);
That can now be written as the simpler:
  for (const unsigned *Alias = TRI->getOverlaps(Reg); *Alias; ++Alias)
    stuff(*Alias);
This change requires a bit more constant space for the alias lists because Reg
is included and because the empty alias list cannot be shared any longer.
If the getAliasSet method is eventually removed, this space can be reclaimed by
sharing overlap lists. For instance, %rax and %eax have identical overlap sets.
llvm-svn: 121800
instruction based on the t_addrmode_s# mode and what it returned. There is some
obvious badness to this. In particular, it's hard to do MC-encoding when the
instruction may change out from underneath you after the t_addrmode_s# variable
is finally resolved.
The solution is to revert a long-ago change that merged the reg/reg and reg/imm
versions. Several new addressing modes are added; they no longer have
extraneous operands associated with them. I.e., if it's reg/reg we don't
have to have a dummy zero immediate tacked on to the SDNode.
There are some obvious cleanups here, which will happen shortly.
llvm-svn: 121747
Use the same COPY_TO_REGCLASS approach as for the 2-register *_sfp instructions.
This change made a big difference in the code generated for the
CodeGen/Thumb2/cross-rc-coalescing-2.ll test: The coalescer is still doing
a fine job, but some instructions that were previously moved outside the loop
are not moved now. It's using fewer VFP registers now, which is generally
a good thing, so I think the estimates for register pressure changed and that
affected the LICM behavior. Since that isn't obviously wrong, I've just
changed the test file. This completes the work for Radar 8711675.
llvm-svn: 121730
as a "long" direct branch. While the mnemonics are the same, they encode the branch offset differently, and
the Darwin assembler appears to prefer the "long" form for direct branches. Thus, in the name of bitwise
equivalence, provide encoding and fixup support for it.
llvm-svn: 121710
class A<bit a, bits<3> x, bits<3> y> {
  bits<3> z;
  let z = !if(a, x, y);
}
The variable z will get the value of x when 'a' is 1 and the value of y when 'a' is 0.
llvm-svn: 121666
Remove the previous header. I don't think we need to expose to end users
that we use TableGen to produce our version of arm_neon.h, and that header
was also using doubleslash comments which could be a problem when using it
in strict C89 compilations.
llvm-svn: 121390
particular, the immediate has 20 bits of value instead of 21, and bit 0 is
always '0'. Going through the BL fixup encoding was trashing the "bit 0 is '0'"
invariant.
This attempts to get the encoding slightly more correct.
llvm-svn: 121336
An OpReinterpret entry is handled by translating it to OpCast intrinsics for
all combinations of source and destination types with the same total size.
This will be used to generate all the vreinterpret intrinsics.
llvm-svn: 121087
Intrinsics implemented with Clang builtins could already be implemented as
either inline functions or macros, but intrinsics implemented directly
(without builtins) could only be inline functions.
llvm-svn: 120763
Since we're casting them for the calls to the builtins, we need this to
make sure their types get checked in the same way they would if the intrinsics
were implemented as inline functions.
llvm-svn: 120693
Thumb2 encoding to share code with the ARM encoding, which gets us fixup support for free.
It also allows us to fold away at least one codegen-only pattern.
llvm-svn: 120481
The only reasonable way I could find to do this is to provide an alternate
version of the addrmode6 operand with a different encoding function. Use it
for all the VLD-dup instructions for the sake of consistency.
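The alternate operand would look something like this (a sketch; the field
values are illustrative):
  def addrmode6dup : Operand<i32> {
    let PrintMethod = "printAddrMode6Operand";
    let MIOperandInfo = (ops GPR:$addr, i32imm);
    // Only the encoder method differs from the regular addrmode6 operand.
    let EncoderMethod = "getAddrMode6DupAddressOpValue";
  }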
llvm-svn: 120358
This makes it symmetric with the 'u' modifier that forces an unsigned type.
This is needed for unsigned vector shifts, where the shift amount still needs
to be signed. PR8482 (Radar 8603521).
llvm-svn: 119742
and xor. The 32-bit move immediates can be hoisted out of loops by machine
LICM but the isel hacks were preventing them.
Instead, let peephole optimization pass recognize registers that are defined by
immediates and the ARM target hook will fold the immediates in.
Other changes include 1) do not fold and / xor into cmp to isel TST / TEQ
instructions if there are multiple uses. This happens when the 'and' is live
out; machine sink would have sunk the computation, and that ends up pessimizing
code. The peephole pass would recognize situations where the 'and' can be
toggled to define CPSR and eliminate the comparison anyway.
2) Move peephole pass to after machine LICM, sink, and CSE to avoid blocking
important optimizations.
rdar://8663787, rdar://8241368
llvm-svn: 119548
instructions have to distinguish between lists of single- and double-precision
registers in order for the ASM matcher to do a proper job. In all other
respects, a list of single- or double-precision registers is the same as a list
of GPR registers.
llvm-svn: 119460
Stop defining types with "__neon_" prefixes and then using typedefs without
the prefix; there's no reason to do that anymore. Remove types that combine
multiple Neon vectors and treat them as a single long vector; they are not
used.
llvm-svn: 119369
'db', 'ib', 'da') instead of having that mode as a separate field in the
instruction. It's more convenient for the asm parser and much more readable for
humans.
<rdar://problem/8654088>
llvm-svn: 119310
operand list instead of the operand list redundantly declared on the alias
or instruction.
With this change, we finally remove the ins/outs list on the alias. Before:
def : InstAlias<(outs GR16:$dst), (ins GR8 :$src),
                "movsx $src, $dst",
                (MOVSX16rr8W GR16:$dst, GR8:$src)>;
After:
def : InstAlias<"movsx $src, $dst",
                (MOVSX16rr8W GR16:$dst, GR8:$src)>;
This also makes the alias mechanism more general and powerful, which will
be exploited in subsequent patches.
llvm-svn: 118329
(someinst GR16:$foo, GR32:$foo)
Reimplement BuildAliasOperandReference to be correctly
based on the names of operands in the result pattern,
instead of on the instruction operand definitions.
llvm-svn: 118328
Right now the code is partitioned but the behavior is the same.
This should be improved in the near future. This removes some
uses of TheOperandList.
llvm-svn: 118232
now matchables contain an explicit list of how to populate each
operand in the result instruction instead of having them somehow
magically be correlated to the input inst.
llvm-svn: 118217
value type, so there is no point in passing it around using
an EVT. Use the simpler MVT everywhere. Rather than trying
to propagate this information maximally in all the code that
uses the calling convention stuff, I chose to do a mainly
low impact change instead.
llvm-svn: 118167
ins/outs list that isn't specified by their asmstring. Previously
the asmmatcher would just force a 0 register into it, which clearly
isn't right. Mark a bunch of ARM instructions that use this as
isCodeGenOnly. Some of them are clearly pseudo instructions (like
t2TBB) others use a weird hasExtraSrcRegAllocReq thing that will
either need to be removed or the asmmatcher will need to be taught
about it (someday).
llvm-svn: 118119
filling them in one at a time. Previously this iterated over the
asmoperands, which left the problem of "holes". The new approach
simplifies things.
llvm-svn: 118104