mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-30 23:42:52 +01:00
Commit Graph

24736 Commits

Author SHA1 Message Date
Evan Cheng
698b0517b5 Typos
llvm-svn: 28158
2006-05-07 10:10:20 +00:00
Jeff Cohen
44ece070c7 Unlike Unix, Windows won't let a file be implicitly replaced via renaming without explicit permission.
llvm-svn: 28157
2006-05-07 02:51:51 +00:00
Nate Begeman
dc94b738d0 New rlwimi implementation, which is superior to the old one.  There are
still a couple of missed optimizations, but we now generate all the possible
rlwimis for multiple inserts into the same bitfield.  More regression tests
to come.

llvm-svn: 28156
2006-05-07 00:23:38 +00:00
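
As a rough, hypothetical illustration (not taken from the commit), this is the kind of
source-level pattern the new lowering targets: two inserts into the same 32-bit word,
each of which can become a single rlwimi (rotate-then-mask-insert) on PowerPC.  The
field layout and values below are made up.

#include <cstdio>

// Hypothetical packed word: bits [0,8) and [8,16) hold two independent fields.
unsigned insert_fields(unsigned word, unsigned lo, unsigned hi) {
  // Each masked insert into the same bitfield is a candidate for one rlwimi.
  word = (word & ~0x000000FFu) | (lo & 0xFFu);
  word = (word & ~0x0000FF00u) | ((hi & 0xFFu) << 8);
  return word;
}

int main() {
  std::printf("%08x\n", insert_fields(0xAABBCCDDu, 0x11, 0x22));
  return 0;
}
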
Chris Lattner
5c9c9f0eb6 Use ComputeMaskedBits to determine the number of sign bits as a fallback.  This
allows us to handle all kinds of cases, including silly things like:
sextinreg(setcc,i16) -> setcc.

llvm-svn: 28155
2006-05-06 23:48:13 +00:00
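
As a hedged, illustrative sketch (the function below is invented, not from the commit):
a comparison result is known to be 0 or 1, so forcing it through a narrower signed type
and back (a sign_extend_inreg node in the DAG) provably changes nothing and can be
folded away once the sign-bit information is available.

#include <cstdio>

int compare_via_short(int a, int b) {
  // (a < b) is 0 or 1; the round trip through 'short' becomes a
  // sign_extend_inreg of a setcc, which the combiner can now remove.
  short flag = (a < b);
  return flag;
}

int main() {
  std::printf("%d %d\n", compare_via_short(1, 2), compare_via_short(2, 1));
  return 0;
}
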
Chris Lattner
8b8093dea2 Add some more sign propagation cases
llvm-svn: 28154
2006-05-06 23:40:29 +00:00
Jeff Cohen
37413e2c29 Apply bug fix supplied by Greg Pettyjohn for a bug he found: '<invalid>' is not a legal path on Windows.
llvm-svn: 28153
2006-05-06 23:25:53 +00:00
Chris Lattner
f76c0b6662 Simplify some code, add a couple minor missed folds
llvm-svn: 28152
2006-05-06 23:06:26 +00:00
Chris Lattner
4a4cbde0bb Constant fold sign_extend_inreg
llvm-svn: 28151
2006-05-06 23:05:41 +00:00
Chris Lattner
3d5d01a74b Remove cases handled elsewhere
llvm-svn: 28150
2006-05-06 22:43:44 +00:00
Chris Lattner
1fce346023 Add some more simple sign bit propagation cases.
llvm-svn: 28149
2006-05-06 22:39:59 +00:00
Jeff Cohen
248e133255 Fix some loose ends in MASM support.
llvm-svn: 28148
2006-05-06 21:27:14 +00:00
Chris Lattner
3da425e496 New testcase that we now handle.
llvm-svn: 28147
2006-05-06 18:15:50 +00:00
Chris Lattner
ca06e2522e Use the new TargetLowering::ComputeNumSignBits method to eliminate
sign_extend_inreg operations.  Though ComputeNumSignBits is still rudimentary,
this is enough to compile this:

short test(short X, short x) {
  int Y = X+x;
  return (Y >> 1);
}
short test2(short X, short x) {
  int Y = (short)(X+x);
  return Y >> 1;
}

into:

_test:
        add r2, r3, r4
        srawi r3, r2, 1
        blr
_test2:
        add r2, r3, r4
        extsh r2, r2
        srawi r3, r2, 1
        blr

instead of:

_test:
        add r2, r3, r4
        srawi r2, r2, 1
        extsh r3, r2
        blr
_test2:
        add r2, r3, r4
        extsh r2, r2
        srawi r2, r2, 1
        extsh r3, r2
        blr

llvm-svn: 28146
2006-05-06 09:30:03 +00:00
Chris Lattner
3a77411d76 Add some really simple code for computing sign-bit propagation.
This will certainly be enhanced in the future.

llvm-svn: 28145
2006-05-06 09:27:13 +00:00
Chris Lattner
06d617846d Add some new methods for computing sign bit information.
llvm-svn: 28144
2006-05-06 09:26:22 +00:00
Chris Lattner
02d3258820 When inserting casts, be careful of where we put them. We cannot insert
a cast immediately before a PHI node.

This fixes Regression/CodeGen/Generic/2006-05-06-GEP-Cast-Sink-Crash.ll

llvm-svn: 28143
2006-05-06 09:10:37 +00:00
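
A minimal, self-contained sketch of the invariant (toy types, not the actual LLVM
interfaces): PHI nodes must stay grouped at the top of a block, so any cast being sunk
into a block has to be inserted at the first non-PHI position.

#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

struct Inst { std::string op; };   // toy stand-in for an instruction
using Block = std::vector<Inst>;   // toy stand-in for a basic block

// Index of the first non-PHI instruction; new casts (and other ordinary
// instructions) must be inserted at or after this point.
std::size_t firstNonPhi(const Block &bb) {
  std::size_t i = 0;
  while (i < bb.size() && bb[i].op == "phi") ++i;
  return i;
}

int main() {
  Block bb = {{"phi"}, {"phi"}, {"add"}, {"ret"}};
  bb.insert(bb.begin() + static_cast<std::ptrdiff_t>(firstNonPhi(bb)), Inst{"cast"});
  for (const Inst &inst : bb) std::cout << inst.op << '\n';  // phi phi cast add ret
  return 0;
}
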
Chris Lattner
1038441de5 New testcase
llvm-svn: 28142
2006-05-06 09:09:47 +00:00
Chris Lattner
7661770087 Move some code around.
Make the "fold (and (cast A), (cast B)) -> (cast (and A, B))" transformation
only apply when both casts will really cause code to be generated.  If one or
both will not, then this transformation does not remove a cast.

This fixes Transforms/InstCombine/2006-05-06-Infloop.ll

llvm-svn: 28141
2006-05-06 09:00:16 +00:00
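
For context, an illustrative guess at the failure mode (the example is invented, not
from the commit): a cast such as int -> unsigned preserves the bit pattern and generates
no code, so rewriting (cast a) & (cast b) into cast(a & b) in that case removes nothing,
and the transformation can keep firing against other rewrites.

#include <cstdio>

// Both casts are bit-pattern-preserving (int <-> unsigned), so hoisting the
// 'and' through them eliminates no real work.
unsigned mask_both(int a, int b) {
  return static_cast<unsigned>(a) & static_cast<unsigned>(b);
}

int main() {
  std::printf("%u\n", mask_both(0x0F0F, 0x00FF));
  return 0;
}
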
Chris Lattner
d689dd7659 New testcase from Ghostscript that sent instcombine into an infinite loop
llvm-svn: 28140
2006-05-06 08:58:06 +00:00
Chris Lattner
89fa42b51e Teach the X86 backend about non-i32 inline asm register classes.
llvm-svn: 28139
2006-05-06 00:29:37 +00:00
Chris Lattner
fd1923bfa6 Fold (trunc (srl x, c)) -> (srl (trunc x), c)
llvm-svn: 28138
2006-05-06 00:11:52 +00:00
Chris Lattner
32e96402c0 Fold trunc(any_ext).  This gives changes like:
27,28c27
<       movzwl %di, %edi
<       movl %edi, %ebx
---
>       movw %di, %bx

llvm-svn: 28137
2006-05-05 22:56:26 +00:00
Chris Lattner
c912cf0b07 Shrink shifts when possible.
llvm-svn: 28136
2006-05-05 22:53:17 +00:00
Chris Lattner
2f0d27a72a Implement ComputeMaskedBits/SimplifyDemandedBits for ISD::TRUNCATE
llvm-svn: 28135
2006-05-05 22:32:12 +00:00
Chris Lattner
daae9ee503 Print a grouping around inline asm blocks so that we can tell when we are
using them.

llvm-svn: 28134
2006-05-05 21:50:04 +00:00
Chris Lattner
12bb901c93 Print *some* grouping around inline asm blocks so we know where they are.
llvm-svn: 28133
2006-05-05 21:48:50 +00:00
Chris Lattner
06cb36074a Indent multiline asm strings more nicely
llvm-svn: 28132
2006-05-05 21:47:05 +00:00
Chris Lattner
a03676690b Teach the code generator to use cvtss2sd as extload f32 -> f64
llvm-svn: 28131
2006-05-05 21:35:18 +00:00
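
A small, hedged illustration (invented example): loading a float and immediately
widening it to double is an extending load of f32 to f64, which on x86 with SSE2 can be
a single cvtss2sd straight from memory.

#include <cstdio>

double widen(const float *p) {
  // Load f32 and extend to f64; a candidate for cvtss2sd from memory.
  return static_cast<double>(*p);
}

int main() {
  float value = 1.5f;
  std::printf("%f\n", widen(&value));
  return 0;
}
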
Chris Lattner
4b581e4167 Fold (fpext (load x)) -> (extload x)
llvm-svn: 28130
2006-05-05 21:34:35 +00:00
Chris Lattner
229d3e3c2d More aggressively sink GEP offsets into loops. For example, before we
generated:

        movl 8(%esp), %eax
        movl %eax, %edx
        addl $4316, %edx
        cmpb $1, %cl
        ja LBB1_2       #cond_false
LBB1_1: #cond_true
        movl L_QuantizationTables720$non_lazy_ptr, %ecx
        movl %ecx, (%edx)
        movl L_QNOtoQuantTableShift720$non_lazy_ptr, %edx
        movl %edx, 4460(%eax)
        ret
...

Now we generate:

        movl 8(%esp), %eax
        cmpb $1, %cl
        ja LBB1_2       #cond_false
LBB1_1: #cond_true
        movl L_QuantizationTables720$non_lazy_ptr, %ecx
        movl %ecx, 4316(%eax)
        movl L_QNOtoQuantTableShift720$non_lazy_ptr, %ecx
        movl %ecx, 4460(%eax)
        ret

... which uses one fewer register.

llvm-svn: 28129
2006-05-05 21:17:49 +00:00
Chris Lattner
73bfc4c2ea Fix an infinite loop hit while compiling oggenc last night.
llvm-svn: 28128
2006-05-05 20:51:30 +00:00
Evan Cheng
0e9ec8d566 Need extload patterns after Chris' DAG combiner changes
llvm-svn: 28127
2006-05-05 08:23:07 +00:00
Chris Lattner
95637c4889 Implement InstCombine/cast.ll:test29
llvm-svn: 28126
2006-05-05 06:39:07 +00:00
Chris Lattner
953b6c92de New testcase
llvm-svn: 28125
2006-05-05 06:38:40 +00:00
Chris Lattner
d7637651b6 Fold some common code.
llvm-svn: 28124
2006-05-05 06:32:04 +00:00
Chris Lattner
584874682a Implement:
  // fold (and (sext x), (sext y)) -> (sext (and x, y))
  // fold (or  (sext x), (sext y)) -> (sext (or  x, y))
  // fold (xor (sext x), (sext y)) -> (sext (xor x, y))
  // fold (and (aext x), (aext y)) -> (aext (and x, y))
  // fold (or  (aext x), (aext y)) -> (aext (or  x, y))
  // fold (xor (aext x), (aext y)) -> (aext (xor x, y))

llvm-svn: 28123
2006-05-05 06:31:05 +00:00
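
A quick, hedged way to see why the sext cases above are safe (an illustrative check,
not part of the commit): every bit produced by sign extension is a copy of the sign bit,
and the sign bit itself distributes over and/or/xor, so the extension and the bitwise
operation commute.  The exhaustive 8-bit check below demonstrates this.

#include <cassert>
#include <cstdint>
#include <cstdio>

int main() {
  for (int x = -128; x <= 127; ++x) {
    for (int y = -128; y <= 127; ++y) {
      int8_t a = static_cast<int8_t>(x);
      int8_t b = static_cast<int8_t>(y);
      // sext(a) op sext(b) == sext(a op b) for and, or, and xor.
      assert((int32_t(a) & int32_t(b)) == int32_t(int8_t(a & b)));
      assert((int32_t(a) | int32_t(b)) == int32_t(int8_t(a | b)));
      assert((int32_t(a) ^ int32_t(b)) == int32_t(int8_t(a ^ b)));
    }
  }
  std::printf("sext distributes over and/or/xor for all 8-bit values\n");
  return 0;
}
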
Chris Lattner
bec98440f4 Pull 'and' through and/or/xor.  This compiles some bitfield code to:

        mov EAX, DWORD PTR [ESP + 4]
        mov ECX, DWORD PTR [EAX]
        mov EDX, ECX
        add EDX, EDX
        or EDX, ECX
        and EDX, -2147483648
        and ECX, 2147483647
        or EDX, ECX
        mov DWORD PTR [EAX], EDX
        ret

instead of:

        sub ESP, 4
        mov DWORD PTR [ESP], ESI
        mov EAX, DWORD PTR [ESP + 8]
        mov ECX, DWORD PTR [EAX]
        mov EDX, ECX
        add EDX, EDX
        mov ESI, ECX
        and ESI, -2147483648
        and EDX, -2147483648
        or EDX, ESI
        and ECX, 2147483647
        or EDX, ECX
        mov DWORD PTR [EAX], EDX
        mov ESI, DWORD PTR [ESP]
        add ESP, 4
        ret

llvm-svn: 28122
2006-05-05 06:10:43 +00:00
Chris Lattner
02bb78abd3 Implement a variety of simplifications for ANY_EXTEND.
llvm-svn: 28121
2006-05-05 05:58:59 +00:00
Chris Lattner
53e8cbbb83 Factor some code, add these transformations:
  // fold (and (trunc x), (trunc y)) -> (trunc (and x, y))
  // fold (or  (trunc x), (trunc y)) -> (trunc (or  x, y))
  // fold (xor (trunc x), (trunc y)) -> (trunc (xor x, y))

llvm-svn: 28120
2006-05-05 05:51:50 +00:00
Evan Cheng
84612a59c2 Better implementation of truncate.  ISel matches it to a pseudo instruction
that gets emitted as a movl (for r32 to i16 or i8) or a movw (for r16 to i8).  If
the destination is allocated a subregister of the source operand, the
instruction is not emitted at all.

llvm-svn: 28119
2006-05-05 05:40:20 +00:00
Chris Lattner
4978a4f2f4 New note, Nate, please check to see if I'm full of it :)
llvm-svn: 28118
2006-05-05 05:36:15 +00:00
Jeff Cohen
953baf0494 Fix VC++ compilation error.
llvm-svn: 28117
2006-05-05 01:47:05 +00:00
Nate Begeman
60c4e49b4e Somehow, I missed this part of the checkin a couple of days ago
llvm-svn: 28116
2006-05-05 01:13:11 +00:00
Chris Lattner
56694d0bb0 Sink no-op copies into the basic block that uses them.  This reduces the number
of cross-block live ranges and allows the bb-at-a-time selector to always
coalesce these away at isel time.

This reduces the load on the coalescer and register allocator.  For example
on a codec on X86, we went from:

   1643 asm-printer           - Number of machine instrs printed
    419 liveintervals         - Number of loads/stores folded into instructions
   1144 liveintervals         - Number of identity moves eliminated after coalescing
   1022 liveintervals         - Number of interval joins performed
    282 liveintervals         - Number of intervals after coalescing
   1304 liveintervals         - Number of original intervals
     86 regalloc              - Number of times we had to backtrack
1.90232 regalloc              - Ratio of intervals processed over total intervals
     40 spiller               - Number of values reused
    182 spiller               - Number of loads added
    121 spiller               - Number of stores added
    132 spiller               - Number of register spills
      6 twoaddressinstruction - Number of instructions commuted to coalesce
    360 twoaddressinstruction - Number of two-address instructions

to:

   1636 asm-printer           - Number of machine instrs printed
    403 liveintervals         - Number of loads/stores folded into instructions
   1155 liveintervals         - Number of identity moves eliminated after coalescing
   1033 liveintervals         - Number of interval joins performed
    279 liveintervals         - Number of intervals after coalescing
   1312 liveintervals         - Number of original intervals
     76 regalloc              - Number of times we had to backtrack
1.88998 regalloc              - Ratio of intervals processed over total intervals
      1 spiller               - Number of copies elided
     41 spiller               - Number of values reused
    191 spiller               - Number of loads added
    114 spiller               - Number of stores added
    128 spiller               - Number of register spills
      4 twoaddressinstruction - Number of instructions commuted to coalesce
    356 twoaddressinstruction - Number of two-address instructions

On this testcase, this change provides a modest reduction in spill code,
regalloc iterations, and total instructions emitted.  It increases the number
of register coalesces.

llvm-svn: 28115
2006-05-05 01:04:50 +00:00
Chris Lattner
56ae04a191 Add a helper method.
llvm-svn: 28114
2006-05-05 00:51:42 +00:00
Chris Lattner
db59f434fd Wrap long line
llvm-svn: 28113
2006-05-04 23:35:31 +00:00
Chris Lattner
bf8f91715e Adjust to use proper TargetData copy ctor
llvm-svn: 28112
2006-05-04 21:18:40 +00:00
Chris Lattner
9f535ef329 Fix this to be a proper copy ctor
llvm-svn: 28111
2006-05-04 21:17:35 +00:00
Chris Lattner
db888e7880 Final pass of minor cleanups for MachineInstr
llvm-svn: 28110
2006-05-04 19:36:09 +00:00
Evan Cheng
ceb7645d40 Initial support for register-pressure-aware scheduling.  The register reduction
scheduler can go into a "vertical mode" (i.e., traversing up the two-address
chain, etc.) when register pressure is low.
This does seem to reduce the number of spills in the cases I've looked at, but
on x86 there is no guarantee that the performance of the code improves.
It can be turned on with the -sched-vertically option.

llvm-svn: 28108
2006-05-04 19:16:39 +00:00