aggregate types. Don't increment the current index after reaching
the end of a struct, as it will already be pointing at
one-past-the-end. This fixes PR3288.
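
A minimal sketch of the hazard being fixed (the names here are hypothetical, not the code this commit touches): once the last member of a struct has been visited, the index already equals the number of members, i.e. it is one past the end, so bumping it again on the way out skips an element of the enclosing aggregate.

    #include <cstddef>
    #include <vector>

    struct StructTy { std::vector<int> Members; };

    // Advance to the next member, but never step past one-past-the-end.
    void advanceMember(const StructTy &S, std::size_t &Idx) {
      if (Idx < S.Members.size())
        ++Idx;            // normal step to the next member
      // else: Idx is already one past the end; incrementing again would
      // corrupt the caller's position in the outer aggregate.
    }
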
llvm-svn: 61828
AddPseudoTwoAddrDeps. This lets the scheduling infrastructure
avoid recalculating node heights. In very large testcases this
was a major bottleneck. Thanks to Roman Levenstein for finding
this!
As a side effect, fold-pcmpeqd-0.ll is now scheduled better
and it no longer requires spilling on x86-32.
llvm-svn: 61778
instructions to avoid copies, because TwoAddressInstructionPass
also does this optimization. The scheduler's version didn't
account for live-out values, which resulted in spurious commutes
and missed opportunities.
Now, TwoAddressInstructionPass handles all the opportunities,
instead of just those that the scheduler missed. The result is
usually the same, though there are occasional trivial differences
resulting from the avoidance of spurious commutes.
llvm-svn: 61611
promote from i1 all the way up to the canonical SetCC type.
To discover an appropriate type to use, pass
MVT::Other to getSetCCResultType. To make this
possible, change getSetCCResultType to take a type as an
argument rather than a value (this is also more logical).
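
A rough sketch of the new shape of the hook (the surrounding types are mocked here so the example stands alone; the real signature in the tree may differ): the hook now receives a type rather than a value, and MVT::Other can be passed when the caller only wants the target's canonical SetCC result type.

    namespace sketch {
    enum class MVT { Other, i1, i8, i32 };

    struct TargetLowering {
      // Old shape (conceptually): the hook took a value and read its type.
      // New shape: take the type directly; MVT::Other means "no preference,
      // give me the canonical SetCC result type".
      virtual MVT getSetCCResultType(MVT VT) const {
        (void)VT;
        return MVT::i8;   // e.g. a target whose SetCC results are i8
      }
      virtual ~TargetLowering() = default;
    };
    } // namespace sketch
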
llvm-svn: 61542
This removes all the _8, _16, _32, and _64 opcodes and replaces each
group with an unsuffixed opcode. The MemoryVT field of the AtomicSDNode
is now used to carry the size information. In tablegen, the size-specific
opcodes are replaced by size-independent opcodes that utilize the
ability to compose them with predicates.
This shrinks the per-opcode tables and makes the code that handles
atomics much more concise.
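
As an illustration of the predicate-based approach (a sketch against the SelectionDAG API of that era; exact header paths and return types may have shifted since), a size-independent ATOMIC_LOAD_ADD node can be narrowed back to the 32-bit case by inspecting its MemoryVT rather than by having a dedicated _32 opcode:

    #include "llvm/CodeGen/SelectionDAGNodes.h"
    using namespace llvm;

    // The kind of C++ fragment a tablegen predicate would carry: match the
    // generic opcode and let MemoryVT supply the access size.
    static bool isAtomicLoadAdd32(const SDNode *N) {
      const AtomicSDNode *A = dyn_cast<AtomicSDNode>(N);
      return A && A->getOpcode() == ISD::ATOMIC_LOAD_ADD &&
             A->getMemoryVT() == MVT::i32;  // size carried by the node, not the opcode
    }
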
llvm-svn: 61389
temporary workaround for an obscure bug. When node cloning is
used, it is possible that more SUnits will be created, and
if the SUnits std::vector has to reallocate, it will
invalidate all the graph edges.
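
A self-contained illustration of the hazard (not the scheduler's own code): pointers into a std::vector dangle as soon as growth forces a reallocation, which is exactly what happens when node cloning creates extra SUnits.

    #include <vector>

    struct SUnitLike { int Height = 0; };

    int main() {
      std::vector<SUnitLike> Units(1);
      SUnitLike *Edge = &Units[0];    // an "edge" holding a pointer into the vector
      Units.push_back(SUnitLike{});   // may reallocate and move the elements...
      // ...at which point Edge is dangling and must not be dereferenced.
      // Reserving capacity up front (or indexing by position) avoids this.
      (void)Edge;
      return 0;
    }
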
llvm-svn: 61122
computation code. Also, avoid adding output-dependency edges when both
defs are dead, which frequently happens with EFLAGS defs.
Compute Depth and Height lazily, and always in terms of edge latency
values. For the schedulers that don't care about latency, edge latencies
are set to 1.
Eliminate Cycle and CycleBound, and LatencyPriorityQueue's Latencies array.
These are all subsumed by the Depth and Height fields.
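
A minimal sketch of the lazy, latency-based computation (hypothetical node type, not the real SUnit): the height is computed on first query in terms of successor-edge latencies and then cached; schedulers that don't care about latency simply see every edge latency as 1.

    #include <algorithm>
    #include <vector>

    struct Node {
      struct Edge { Node *Succ; unsigned Latency; };
      std::vector<Edge> Succs;
      int Height = -1;                 // -1 means "not yet computed"
    };

    unsigned getHeight(Node &N) {
      if (N.Height >= 0)
        return unsigned(N.Height);     // cached from an earlier query
      unsigned H = 0;
      for (const Node::Edge &E : N.Succs)
        H = std::max(H, getHeight(*E.Succ) + E.Latency);
      N.Height = int(H);
      return H;
    }
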
llvm-svn: 61073
width register load followed by a truncating
store for the copy, since the load will not place
the value in the lower bits. Partial loads/stores
probably can never happen here, but fix it anyway.
llvm-svn: 60972
use of illegal integer types: instead, use a stack slot
and copy via integer registers. The existing code
is still used if the bitconvert is to a legal integer
type.
This fires on the PPC testcases 2007-09-08-unaligned.ll
and vec_misaligned.ll. It looks like equivalent code
is generated with these changes, just permuted, but
it's hard to tell.
With these changes, nothing in LegalizeDAG produces
illegal integer types anymore. This is a prerequisite
for removing the LegalizeDAG type legalization code.
While there, I noticed that the existing code doesn't
handle a trunc store of f64 to f32: it turns this into
an i64 store, which represents a 4-byte stack smash.
I added a FIXME about this. Hopefully someone more
motivated than I am will take care of it.
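
A source-level picture of the stack-slot approach described above (illustrative only, not the LegalizeDAG code itself): the value is stored to a slot and reloaded as the destination type, so no integer of the full, possibly illegal, width is ever needed in a register.

    #include <cstring>

    struct FloatPair { float Lo, Hi; };     // stands in for a v2f32-like type

    FloatPair bitconvertViaStackSlot(double D) {
      FloatPair Result;                     // the "stack slot"
      static_assert(sizeof(Result) == sizeof(D), "bitconvert requires equal sizes");
      std::memcpy(&Result, &D, sizeof(D));  // store the f64, reload as the new type
      return Result;
    }
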
llvm-svn: 60964
ISD::ADD to emit an implicit EFLAGS. This was horribly broken. Instead, replace
the intrinsic with an ISD::SADDO node. Then custom lower that into an
X86ISD::ADD node with an associated SETCC that checks the correct condition code
(overflow or carry). Then that gets lowered into the correct X86::ADDOvf
instruction.
Similar for SUB and MUL instructions.
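
For reference, a source-level equivalent of the shape the new lowering produces (just an illustration of the ADD-plus-SETCC pairing, using the GCC/Clang overflow builtin rather than the intrinsic directly):

    #include <cstdint>

    // On x86 this compiles to a single ADD whose EFLAGS result feeds a
    // SETO/JO, i.e. the X86ISD::ADD + SETCC pairing described above.
    bool addAndCheckOverflow(std::int32_t A, std::int32_t B, std::int32_t &Sum) {
      return __builtin_add_overflow(A, B, &Sum);
    }
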
llvm-svn: 60915