rdar://problem/8893967: JM/lencod miscompile at -arch armv7 -mthumb -O3
Added ResurrectKill to remove kill flags after we decide to reuse a
physical register, and (hopefully) ensured that we call it in all the
right places.
Sorry, I'm not checking in a unit test given that it's a miscompile I
can't reproduce easily with a toy example. Failures in the rewriter
depend on a series of heuristic decisions made during one of the many
upstream phases in codegen. This case would require coercing regalloc
to generate a couple of rematerializations in a way that causes the
scavenger to reuse the same register at just the wrong point.
The general way to test this is to implement kill-flag verification.
Then we could have a simple, robust, compile-only unit test. That
would be worth doing if the whole pass were not about to disappear. At
this point we are focusing verification work on the next generation of
regalloc.
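As a rough illustration (not the actual ResurrectKill implementation;
the helper name is made up), clearing stale kill flags with the current
MachineOperand API might look like this:

  #include "llvm/CodeGen/MachineBasicBlock.h"
  #include "llvm/CodeGen/MachineInstr.h"
  #include "llvm/CodeGen/TargetRegisterInfo.h"

  using namespace llvm;

  // Walk [MBB.begin(), ReusePoint) and clear any kill flag on an operand
  // that overlaps the physical register being reused, so later passes do
  // not treat the value as dead at the old "kill".
  static void clearKillFlagsBefore(MachineBasicBlock &MBB,
                                   MachineBasicBlock::iterator ReusePoint,
                                   MCRegister PhysReg,
                                   const TargetRegisterInfo &TRI) {
    for (MachineBasicBlock::iterator I = MBB.begin(); I != ReusePoint; ++I)
      for (MachineOperand &MO : I->operands())
        if (MO.isReg() && MO.getReg() && MO.isKill() &&
            TRI.regsOverlap(MO.getReg(), PhysReg))
          MO.setIsKill(false);
  }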
llvm-svn: 124442
llvm-config --cflags --cxxflags --cppflags
We shouldn't impose those flags on people who use llvm-config for
building their own projects.
llvm-svn: 124399
Linear scan regalloc currently assumes that any register aliased with
a member of a regclass must also be in at least one regclass. That is
not always true: on X86, for example, RIP is in a regclass but IP is
not. If you're unlucky, this can cause a crash by invalidating the
iterator.
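As a rough, present-day sketch (not the 2011 linear scan code; helper
names are made up), the defensive check amounts to verifying that an
alias really belongs to some register class before treating it as one:

  #include "llvm/CodeGen/TargetRegisterInfo.h"
  #include "llvm/MC/MCRegisterInfo.h"

  using namespace llvm;

  static bool isInAnyRegClass(const TargetRegisterInfo &TRI, MCRegister Reg) {
    for (const TargetRegisterClass *RC : TRI.regclasses())
      if (RC->contains(Reg))
        return true;
    return false;
  }

  // Count only the aliases of PhysReg that are members of some class,
  // skipping class-less registers such as X86's IP.
  static unsigned countClassifiedAliases(const TargetRegisterInfo &TRI,
                                         MCRegister PhysReg) {
    unsigned N = 0;
    for (MCRegAliasIterator AI(PhysReg, &TRI, /*IncludeSelf=*/true);
         AI.isValid(); ++AI)
      if (isInAnyRegClass(TRI, *AI))
        ++N;
    return N;
  }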
llvm-svn: 124365
When an operand class is defined with MIOperandInfo set to a list of
suboperands, the AsmMatcher has so far required that operand to also define
a custom ParserMatchClass, and InstAlias patterns have not been able to
set the individual suboperands separately. This patch removes both of those
restrictions. If a "compound" operand does not override the default
ParserMatchClass, then the AsmMatcher will now parse its suboperands
separately. If an InstAlias operand has the same class as the corresponding
compound operand, then it will be handled as before; but if that check fails,
TableGen will now try to match up a sequence of InstAlias operands with the
corresponding suboperands.
llvm-svn: 124314
Provide a default implementation of INSERT_SUBVECTOR for x86, going
through the stack in a similar fashion to how the codegen implements
BUILD_VECTOR. Eventually this will get matched to VINSERTF128 if AVX
is available.
llvm-svn: 124307
Provide a default implementation of EXTRACT_SUBVECTOR for x86, going
through the stack in a similar fashion to how the codegen implements
BUILD_VECTOR. Eventually this will get matched to VEXTRACTF128 if AVX
is available.
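For context, a small source-level illustration (not from either
commit): with AVX enabled, lane insert/extract code like the following
is what should eventually lower to VINSERTF128 and VEXTRACTF128 rather
than a stack round trip. Function names are made up; build with AVX
support (e.g. -mavx).

  #include <immintrin.h>

  // Replace the upper 128-bit lane of an 8 x float vector.
  static __m256 replace_high_lane(__m256 v, __m128 hi) {
    return _mm256_insertf128_ps(v, hi, 1);  // candidate for VINSERTF128
  }

  // Read back the upper 128-bit lane.
  static __m128 read_high_lane(__m256 v) {
    return _mm256_extractf128_ps(v, 1);     // candidate for VEXTRACTF128
  }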
llvm-svn: 124292
The operand being factorized (and erased) could occur several times in Ops,
resulting in freed memory being used when the next occurrence in Ops was
analyzed.
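A hypothetical reduction of the bug class (not the actual Reassociate
code): when the factored operand is deleted, any later duplicate of it
still sitting in Ops points at freed memory, so every occurrence has to
be stripped out of Ops before the next one is analyzed.

  #include <algorithm>
  #include <vector>

  struct Operand { int Id; };

  // Drop *all* occurrences of the operand being deleted in one pass, so
  // no later iteration step can analyze a dangling pointer to it.
  static void removeAllUsesOf(std::vector<Operand *> &Ops,
                              const Operand *Factored) {
    Ops.erase(std::remove(Ops.begin(), Ops.end(), Factored), Ops.end());
  }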
llvm-svn: 124287
We still can't merge vector<intptr_t>::push_back() and
vector<void*>::push_back() because
Enumerate() doesn't realize that "i64* null" and "i8** null" are equivalent.
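As an illustration (not from the commit), the two instantiations below
compile to essentially identical code; at the IR level they differ only
in pointer types such as i64* versus i8** (e.g. for null sentinels),
which is the distinction Enumerate() currently refuses to see through.

  #include <cstdint>
  #include <vector>

  // Structurally identical bodies; only the element/pointer types differ.
  void addInt(std::vector<std::intptr_t> &V) { V.push_back(0); }
  void addPtr(std::vector<void *> &V) { V.push_back(nullptr); }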
llvm-svn: 124285