out, where after the first CombineTo() call, the node the second CombineTo
wishes to replace may no longer exist.
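The ordering hazard is easy to see in a toy model; nothing below is LLVM's
real API, just a sketch of why the second replacement has to re-check that
its target still exists:

#include <set>

struct Node { int Id = 0; };

struct ToyDAG {
  std::set<Node *> Live;   // nodes still in the graph

  void replaceAndErase(Node *Old, Node *New) {
    // All uses of Old are rewired to New and Old is destroyed; in the real
    // combiner this can recursively delete other now-dead nodes as well.
    (void)New;
    Live.erase(Old);
    delete Old;
  }
};

void combineBoth(ToyDAG &DAG, Node *N1, Node *New1, Node *N2, Node *New2) {
  DAG.replaceAndErase(N1, New1);
  if (DAG.Live.count(N2))          // guard: N2 may already have been deleted
    DAG.replaceAndErase(N2, New2);
}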
Fix a very real bug with the truncated load optimization on little endian
targets, which do not need a byte offset added to the load.
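A minimal sketch of the offset rule the fix restores; the function name and
parameters are illustrative, not the actual code:

// Little endian: the low-order bytes already sit at offset 0, so the load
// address needs no adjustment. Big endian: they sit at the end of the wide
// value, so skip past the discarded high bytes.
unsigned narrowedLoadOffset(bool IsBigEndian,
                            unsigned WideBytes, unsigned NarrowBytes) {
  return IsBigEndian ? WideBytes - NarrowBytes : 0;
}

For a 4-byte load narrowed to 2 bytes this yields 2 on big-endian targets
(the lhz at -2(r1) in the next message) and 0 on little endian, where the
bug was adding the big-endian offset anyway.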
llvm-svn: 23704
like turning:
_foo:
fctiwz f0, f1
stfd f0, -8(r1)
lwz r2, -4(r1)
rlwinm r3, r2, 0, 16, 31
blr
into
_foo:
fctiwz f0,f1
stfd f0,-8(r1)
lhz r3,-2(r1)
blr
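A C-level model of the equivalence the combine relies on, assuming BigEndian
matches the byte order the 4-byte value at P was stored with:

#include <cstdint>
#include <cstring>

// Masking a 4-byte load down to its low 16 bits ...
uint32_t loadThenMask(const unsigned char *P) {
  uint32_t V;
  std::memcpy(&V, P, 4);                  // lwz
  return V & 0xFFFFu;                     // rlwinm
}

// ... is the same as a 2-byte load of the half holding the low-order bytes
// (offset 2 on big-endian targets, 0 on little-endian ones).
uint32_t narrowLoad(const unsigned char *P, bool BigEndian) {
  uint16_t V;
  std::memcpy(&V, P + (BigEndian ? 2 : 0), 2);  // lhz
  return V;                               // zero-extends for free
}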
Also removed an unnecessary constraint from the sra -> srl conversion, which
should take care of the only reason we would ever need to handle sra in
MaskedValueIsZero, AFAIK.
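A runtime model of that compile-time fact; KnownZeroMask is a hypothetical
stand-in for what MaskedValueIsZero proves about X:

#include <cstdint>

// If the sign bit of X is provably zero, shifting in sign bits and shifting
// in zeros do the same thing, so sra can be replaced by srl.
uint32_t shiftRight(uint32_t X, unsigned Amt, uint32_t KnownZeroMask) {
  if (KnownZeroMask & 0x80000000u)          // sign bit known zero
    return X >> Amt;                        // srl
  return (uint32_t)((int32_t)X >> Amt);     // sra
}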
llvm-svn: 23703
location, replace them with a new store of the last value. This occurs
in the same neighborhood in 197.parser, speeding it up about 1.5%.
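A toy sketch of the rule; the Store struct is illustrative. Back-to-back
stores to the same address with no intervening read collapse to a single
store of the last value:

#include <vector>

struct Store { int Addr; int Value; };

void collapseAdjacentStores(std::vector<Store> &Stores) {
  std::vector<Store> Out;
  for (const Store &S : Stores) {
    if (!Out.empty() && Out.back().Addr == S.Addr)
      Out.back().Value = S.Value;   // the earlier store is dead
    else
      Out.push_back(S);
  }
  Stores = Out;
}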
llvm-svn: 23691
multiple results.
Use this support to implement trivial store->load forwarding, implementing
CodeGen/PowerPC/store-load-fwd.ll. Though this is the simplest case and
can be extended in the future, it is still useful. For example, it speeds
up 197.parser by 6.2% by avoiding an LSU reject in xalloc:
stw r6, lo16(l5_end_of_array)(r2)
addi r2, r5, -4
stwx r5, r4, r2
- lwzx r5, r4, r2
- rlwinm r5, r5, 0, 0, 30
stwx r5, r4, r2
lwz r2, -4(r4)
ori r2, r2, 1
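A self-contained sketch of the forwarding test; the node fields are
hypothetical, not SelectionDAG's real representation:

enum Op { STORE, LOAD, OTHER };

struct Node {
  Op Opcode = OTHER;
  Node *Chain = nullptr;   // token chain this operation depends on
  void *Addr = nullptr;    // memory address operand
  Node *Value = nullptr;   // value operand (STORE nodes only)
  unsigned Bytes = 0;      // access width
};

// If the load's chain is a store to the same address of the same width,
// the reload is redundant: hand back the stored value instead.
Node *forwardStoreToLoad(Node *Load) {
  Node *C = Load->Chain;
  if (C && C->Opcode == STORE && C->Addr == Load->Addr &&
      C->Bytes == Load->Bytes)
    return C->Value;
  return Load;   // no forwarding possible; keep the load
}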
llvm-svn: 23690
creating a new vreg and inserting a copy: just use the input vreg directly.
This speeds up the compile (e.g. about 5% on mesa with a debug build of llc)
by not adding a bunch of copies and vregs to be coalesced away. On mesa,
for example, this reduces the number of intervals from 168601 to 129040
going into the coalescer.
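A sketch of the change with made-up names: the input vreg itself becomes
the result, so no copy instruction and no extra live interval are created.

struct Selector {
  unsigned NextVReg = 1;
  unsigned makeVReg() { return NextVReg++; }

  // Before: R = makeVReg(); emit "R = copy InVReg"; return R;
  // After: the input vreg is the result directly.
  unsigned selectExistingValue(unsigned InVReg) { return InVReg; }
};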
llvm-svn: 23671
previous copy elisions and we discover we need to reload a register, make
sure to use the regclass of the original register for the reload, not the
class of the current register. This avoids using 16-bit loads to reload 32-bit
values.
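A toy sketch of the fix; RegClass and the opcodes are illustrative:

#include <cassert>

enum LoadOpc { LOAD16, LOAD32 };

struct RegClass { unsigned Bytes; };

// Pick the reload instruction from the class the value was originally
// spilled from, never from the class of whatever register currently holds
// it after copy elision.
LoadOpc reloadOpcodeFor(const RegClass &OrigRC) {
  assert((OrigRC.Bytes == 2 || OrigRC.Bytes == 4) && "toy model");
  return OrigRC.Bytes == 4 ? LOAD32 : LOAD16;
}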
llvm-svn: 23645
store R12 -> [ss#2]
R3 = load [ss#1]
use R3
R3 = load [ss#2]
R4 = load [ss#1]
and turn it into this code:
store R12 -> [ss#2]
R3 = load [ss#1]
use R3
R3 = R12
R4 = R3 <- oops!
The problem was that promoting R3 = load [ss#2] to a copy missed the fact that
the instruction invalidated R3 at that point.
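A toy sketch of the missing invalidation step; the map and names are
hypothetical, not the spiller's real data structure:

#include <map>

// slot -> register currently known to hold that slot's value
using AvailMap = std::map<int, int>;

// Promoting "R = load [SrcSlot]" to "R = OtherReg" still redefines R, so
// every slot recorded as live in R must be forgotten first; skipping this
// step is exactly the bug above (R3 stayed "available" for ss#1).
void promoteLoadToCopy(int DestReg, int SrcSlot, AvailMap &Avail) {
  for (auto I = Avail.begin(); I != Avail.end();) {
    if (I->second == DestReg)
      I = Avail.erase(I);      // e.g. drop "ss#1 is in R3"
    else
      ++I;
  }
  Avail[SrcSlot] = DestReg;    // record "ss#2 is in R3"
}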
llvm-svn: 23638
that testcase still does not pass with the dag combiner. This is because
not all forms of br* are folded yet.
Also, when we combine a node into another one, delete the node immediately
instead of waiting for the node to potentially come up in the future.
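A sketch of the worklist discipline, with hypothetical names:

#include <algorithm>
#include <vector>

struct Node { /* ... */ };

struct Combiner {
  std::vector<Node *> Worklist;

  // Once N has been folded into its replacement, erase it right away:
  // pull it off the worklist and destroy it, rather than leaving a dead
  // node around to be popped and examined later.
  void eraseNow(Node *N) {
    Worklist.erase(std::remove(Worklist.begin(), Worklist.end(), N),
                   Worklist.end());
    delete N;
  }
};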
llvm-svn: 23632
Since calls return more than one value, don't bail if one of their uses
happens to be a node whose type is not MVT::Other when following the chain
from CALLSEQ_START to CALLSEQ_END.
Once we've found a CALLSEQ_START, we can just return; there's no need to
tail-recurse further up the graph.
Most importantly, just because something only has one use doesn't mean we
should use its one use to follow from start to end. This faulty logic
caused us to follow a chain of one-use FP operations back to a much earlier
call, putting a cycle in the graph from a later start to an earlier end.
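A sketch of the corrected walk, with hypothetical node fields: only chain
(MVT::Other) edges are followed, a value use such as a call's FP result is
skipped even when it is the node's only use, and the search returns as soon
as it finds what it is looking for.

#include <vector>

enum Opcode { CALLSEQ_START, CALLSEQ_END, OTHER };

struct Node;
struct Use { Node *User; bool ViaChain; };   // ViaChain: an MVT::Other edge

struct Node {
  Opcode Opc = OTHER;
  std::vector<Use> Uses;
};

Node *findCallSeqEnd(Node *N) {
  if (N->Opc == CALLSEQ_END)
    return N;                          // found: stop, no further recursion
  for (const Use &U : N->Uses)
    if (U.ViaChain)                    // follow chain edges only; never a
      return findCallSeqEnd(U.User);   // value use, even if it is the only
  return nullptr;                      // use the node has
}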
This is a better fix than reverting to the workaround committed earlier
today.
llvm-svn: 23620