mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-31 16:02:52 +01:00
Commit Graph

3 Commits

Author SHA1 Message Date
Evan Cheng
5541068ad3 Run the codegen DCE pass for all targets at all optimization levels. Previously it
was only run for x86 with fast-isel. I've found it to be very effective at
eliminating some obvious dead code that results from formal parameter lowering,
especially when tail call optimization eliminated the need for some of the loads
from fixed frame objects. It also shrinks a number of the tests. A couple of
tests no longer make sense and have been eliminated. (A minimal sketch of such a
pass follows this entry.)

llvm-svn: 95493
2010-02-06 09:07:11 +00:00
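
The following is not part of the original log; it is a hedged illustration of the idea behind the pass described above: a dead-instruction elimination loop over a toy machine-instruction representation. The types and names (MInst, eliminateDeadInstrs) are illustrative assumptions, not LLVM's actual MachineDCE API.

// Minimal sketch of dead-machine-instruction elimination over a toy IR.
// All names here are hypothetical, not LLVM's real interfaces.
#include <list>
#include <set>

struct MInst {
  int Def = -1;                // virtual register defined (-1 if none)
  std::set<int> Uses;          // virtual registers read
  bool HasSideEffects = false; // stores, calls, etc. must always be kept
};

void eliminateDeadInstrs(std::list<MInst> &Block) {
  bool Changed = true;
  while (Changed) {            // iterate to a fixpoint: removing one dead
    Changed = false;           // instruction can make its operands dead too
    // Collect every register that is still read somewhere in the block.
    std::set<int> Live;
    for (const MInst &I : Block)
      Live.insert(I.Uses.begin(), I.Uses.end());
    // Drop side-effect-free instructions whose result is never read, e.g. a
    // load of a formal parameter from a fixed frame object whose only use
    // disappeared after tail call optimization.
    for (auto It = Block.begin(); It != Block.end();) {
      if (!It->HasSideEffects && It->Def != -1 && Live.count(It->Def) == 0) {
        It = Block.erase(It);
        Changed = true;
      } else {
        ++It;
      }
    }
  }
}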
Evan Cheng
c3ef5b0802 Follow-up to 81494. When the folded reload is narrowed to a 32-bit load, change the destination register to a 32-bit one or add a sub-register index.
llvm-svn: 81496
2009-09-11 01:01:31 +00:00
Evan Cheng
93831c07de It's not legal to fold a load from a narrower stack slot into a wider instruction. If done, the instruction does a 64-bit load and that's not
safe. This can happen when a subreg_to_reg 0 has been coalesced. One
exception is when the instruction that folds the load is a move; then we
can simply turn it into a 32-bit load from the stack slot. (A sketch of this rule follows this entry.)

rdar://7170444

llvm-svn: 81494
2009-09-11 00:39:26 +00:00
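
The two commits above (81494 and its follow-up 81496) describe the same safety rule. Below is a hedged, simplified model of that rule in isolation, not LLVM's actual folding code; the names (Inst, StackSlot, classifyFold) are illustrative assumptions.

// Hypothetical sketch of the narrow-slot folding check described above.
struct StackSlot {
  unsigned SizeInBytes;     // e.g. 4 for a slot whose upper half is undefined
                            // after a subreg_to_reg 0 has been coalesced
};

struct Inst {
  unsigned MemAccessBytes;  // width the instruction would read if the reload
                            // were folded into it (e.g. 8 for a 64-bit op)
  bool IsRegToRegMove;      // plain moves get the special handling below
};

enum class FoldAction {
  Fold,              // safe: the folded access stays inside the slot
  NarrowToSlotLoad,  // rewrite the move as a 32-bit load from the slot and
                     // retarget its destination to the 32-bit sub-register
                     // (the follow-up in r81496)
  DontFold           // would read past the end of the slot: not safe
};

FoldAction classifyFold(const Inst &MI, const StackSlot &Slot) {
  if (MI.MemAccessBytes <= Slot.SizeInBytes)
    return FoldAction::Fold;
  if (MI.IsRegToRegMove)
    return FoldAction::NarrowToSlotLoad;
  return FoldAction::DontFold;
}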