Commit Graph

13368 Commits

Author SHA1 Message Date
Reid Spencer
f35e8fcebe bug 263:
Add target triple and dependent libraries support to this test.

llvm-svn: 15213
2004-07-25 18:09:47 +00:00
Reid Spencer
11fc1457c0 bug 263:
Ensure the list of libraries is cleared.

llvm-svn: 15212
2004-07-25 18:08:57 +00:00
Reid Spencer
9077f12bb0 bug 263:
Add ability to write target triple and dependent libraries information.

llvm-svn: 15211
2004-07-25 18:08:18 +00:00
Reid Spencer
3043e82af5 bug 263:
- encode/decode target triple and dependent libraries
bug 401:
- fix encoding/decoding of FP values to be little-endian only
bug 402:
- initial (compatible) cut at 24-bit types instead of 32-bit
- reduce size of block headers by 50%
Other:
- clean up the Writer by consolidating it into one compilation unit, removing the other files
- use a std::vector instead of std::deque so the buffer can be allocated
  in multiples of 64KByte chunks rather than in multiples of some smaller
  (default) number.

llvm-svn: 15210
2004-07-25 18:07:36 +00:00
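
Editor's note: the little-endian-only FP encoding in the bug 401 item above boils down to serializing the IEEE-754 bit pattern in a fixed byte order instead of whatever the host uses. A minimal sketch of that idea, not the bytecode writer's actual code (the helper name and output buffer are assumptions):

#include <cstdint>
#include <cstring>
#include <vector>

static_assert(sizeof(double) == sizeof(uint64_t), "sketch assumes 64-bit doubles");

// Write a double in a fixed little-endian byte order so the on-disk
// representation does not depend on the host's endianness.
static void writeDoubleLE(std::vector<unsigned char> &Out, double Val) {
  uint64_t Bits;
  std::memcpy(&Bits, &Val, sizeof(Bits));       // grab the IEEE-754 bit pattern
  for (int i = 0; i < 8; ++i)                   // emit least-significant byte first
    Out.push_back(static_cast<unsigned char>((Bits >> (8 * i)) & 0xFF));
}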
Reid Spencer
4b76a409e5 bug 263:
Provide parsing for the target triple and dependent libraries.

llvm-svn: 15209
2004-07-25 17:58:28 +00:00
Reid Spencer
282e14928d bug 263:
Provide new tokens for target triples and dependent libraries.

llvm-svn: 15208
2004-07-25 17:56:00 +00:00
Reid Spencer
2e6721531f bug 263:
The necessary changes to module in order to support both target triples and
a list of dependent libraries.

llvm-svn: 15207
2004-07-25 17:52:27 +00:00
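
Editor's note: for orientation, the Module change above can be pictured as a triple string plus a list of library names with trivial accessors. This is only a sketch with invented names, not LLVM's actual Module interface:

#include <string>
#include <vector>

class ModuleSketch {
  std::string TargetTriple;            // e.g. "i686-pc-linux-gnu"
  std::vector<std::string> DepLibs;    // libraries this module depends on
public:
  void setTargetTriple(const std::string &T) { TargetTriple = T; }
  const std::string &getTargetTriple() const { return TargetTriple; }

  void addLibrary(const std::string &L) { DepLibs.push_back(L); }
  const std::vector<std::string> &getLibraries() const { return DepLibs; }
};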
Reid Spencer
415011884f bug 402:
A new set of block identifiers has been added for version 1.3 so that the
range of values can fit within 5 bits. This aids in halving the size of
block headers.

llvm-svn: 15206
2004-07-25 17:50:00 +00:00
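
Editor's note: the point of squeezing block IDs into 5 bits is that an ID and a size can then share one 32-bit header word instead of needing two. A rough sketch of such packing, with an assumed field layout rather than the real bytecode format:

#include <cstdint>

// 5 bits of block ID (0..31) in the low bits, the remaining 27 bits for size.
static uint32_t packBlockHeader(uint32_t ID, uint32_t Size) {
  return (Size << 5) | (ID & 0x1F);
}

static void unpackBlockHeader(uint32_t Header, uint32_t &ID, uint32_t &Size) {
  ID = Header & 0x1F;
  Size = Header >> 5;
}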
Chris Lattner
f1508c0452 Codify my thoughts on where we want to end up with the target-independent
code generator.  Comments welcome.

llvm-svn: 15205
2004-07-25 12:13:35 +00:00
Chris Lattner
a259f9201c * Substantially simplify how free instructions are handled (potentially fixing
a bug in DSE).
* Delete dead operand uses iteratively instead of recursively, using a
  SetVector.
* Defer deletion of dead operand uses until the end of processing, which means
  we don't have to bother with updating the AliasSetTracker.  This speeds up
  DSE substantially.

llvm-svn: 15204
2004-07-25 11:09:56 +00:00
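
Editor's note: the iterative, deferred-deletion scheme described above can be sketched with a plain worklist. This toy version uses a vector as the worklist and a set to deduplicate, and invents its own Inst type; it is not LLVM's DSE or SetVector interface.

#include <set>
#include <vector>

struct Inst {
  int Id = 0;
  int Uses = 0;                         // how many other instructions read this one
  std::vector<Inst *> Operands;         // instructions this one reads
};

// Walk the worklist iteratively, schedule dead instructions for erasure,
// and only erase at the very end (so no side tables need mid-walk updates).
static void deleteDeadInstructions(std::vector<Inst *> Roots,
                                   std::set<Inst *> &Program) {
  std::vector<Inst *> Worklist(Roots.begin(), Roots.end());
  std::set<Inst *> Dead;                // scheduled for erasure (also dedups)

  while (!Worklist.empty()) {
    Inst *I = Worklist.back();          // back()/pop_back(), like the SetVector
    Worklist.pop_back();                // methods added in the next commit
    if (I->Uses != 0 || !Dead.insert(I).second)
      continue;                         // still live, or already scheduled
    for (Inst *Op : I->Operands)        // dropping I may make its operands dead
      if (--Op->Uses == 0)
        Worklist.push_back(Op);
  }
  for (Inst *I : Dead)                  // deletion deferred to the end
    Program.erase(I);
}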
Chris Lattner
6bc4b6c0dd Add back() and pop_back() methods to SetVector
Move clear to the end of the class
Add assertions

llvm-svn: 15203
2004-07-25 11:07:02 +00:00
Alkis Evlogimenos
3e3fe9ff5a Add some comments to the backtracking code.
llvm-svn: 15200
2004-07-25 08:10:33 +00:00
Chris Lattner
0a9d5e6f14 Free instructions kill values too. This implements DeadStoreElim/free.llx
llvm-svn: 15199
2004-07-25 07:58:38 +00:00
Chris Lattner
7822b1dcc4 New testcase for DSE
llvm-svn: 15198
2004-07-25 07:57:50 +00:00
Chris Lattner
31d9cbf7bb Add support for free instructions
llvm-svn: 15197
2004-07-25 07:57:37 +00:00
Chris Lattner
fd9f044bcf Fix the sense of joinable
llvm-svn: 15196
2004-07-25 07:47:25 +00:00
Chris Lattner
db6b0d2361 Remove Linux/Solaris-specific stuff.
llvm-svn: 15195
2004-07-25 07:34:00 +00:00
Chris Lattner
7630df9aa0 This patch makes use of the infrastructure implemented before to safely and
aggressively coalesce live ranges even if they overlap.  Consider this LLVM
code for example:

int %test(int %X) {
        %Y = mul int %X, 1      ;; Codegens to Y = X
        %Z = add int %X, %Y
        ret int %Z
}

The mul is just there to get a copy into the code stream.  This produces
this machine code:

 (0x869e5a8, LLVM BB @0x869b9a0):
        %reg1024 = mov <fi#-2>, 1, %NOREG, 0    ;; "X"
        %reg1025 = mov %reg1024                 ;; "Y"  (subsumed by X)
        %reg1026 = add %reg1024, %reg1025
        %EAX = mov %reg1026
        ret

Note that the life times of reg1024 and reg1025 overlap, even though they
contain the same value.  This results in this machine code:

test:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %ECX, %EAX
        add %EAX, %ECX
        ret

Another, worse case involves loops and PHI nodes.  Consider this trivial loop:
testcase:

int %test2(int %X) {
entry:
        br label %Loop
Loop:
        %Y = phi int [%X, %entry], [%Z, %Loop]
        %Z = add int %Y, 1
        %cond = seteq int %Z, 100
        br bool %cond, label %Out, label %Loop
Out:
        ret int %Z
}

Because of interactions between the PHI elimination pass and the register
allocator, this got compiled to this code:

test2:
        mov %ECX, DWORD PTR [%ESP + 4]
.LBBtest2_1:
***     mov %EAX, %ECX
        inc %EAX
        cmp %EAX, 100
***     mov %ECX, %EAX
        jne .LBBtest2_1

        ret

Or on powerpc, this code:

_test2:
        mflr r0
        stw r0, 8(r1)
        stwu r1, -60(r1)
.LBB_test2_1:
        addi r2, r3, 1
        cmpwi cr0, r2, 100
***     or r3, r2, r2
        bne cr0, .LBB_test2_1

***     or r3, r2, r2
        lwz r0, 68(r1)
        mtlr r0
        addi r1, r1, 60
        blr 0



With this improvement in place, we now generate this code for these two
testcases, which is what we want:


test:
        mov %EAX, DWORD PTR [%ESP + 4]
        add %EAX, %EAX
        ret

test2:
        mov %EAX, DWORD PTR [%ESP + 4]
.LBBtest2_1:
        inc %EAX
        cmp %EAX, 100
        jne .LBBtest2_1 # Loop
        ret

Or on PPC:

_test2:
        mflr r0
        stw r0, 8(r1)
        stwu r1, -60(r1)
.LBB_test2_1:
        addi r3, r3, 1
        cmpwi cr0, r3, 100
        bne cr0, .LBB_test2_1

        lwz r0, 68(r1)
        mtlr r0
        addi r1, r1, 60
        blr 0


Static numbers for spill code loads/stores/reg-reg copies (smaller is better):

em3d:       before: 47/25/26         after: 44/22/24
164.gzip:   before: 433/245/310      after: 403/231/278
175.vpr:    before: 3721/2189/1581   after: 4144/2081/1423
176.gcc:    before: 26195/8866/9235  after: 25942/8082/8275
186.crafty: before: 4295/2587/3079   after: 4119/2519/2916
252.eon:    before: 12754/7585/5803  after: 12508/7425/5643
256.bzip2:  before: 463/226/315      after: 482/241/309


Runtime perf number samples on X86:

gzip: before: 41.09 after: 39.86
bzip2: before: 56.71s after: 57.07s
gcc: before: 6.16 after: 6.12
eon: before: 2.03s after: 2.00s
llvm-svn: 15194
2004-07-25 07:11:19 +00:00
Chris Lattner
0997e50af5 Make a method const, no functionality changes
llvm-svn: 15193
2004-07-25 06:23:01 +00:00
Chris Lattner
5196d002f9 I think that V8 should coalesce registers, don't you?
llvm-svn: 15192
2004-07-25 06:19:04 +00:00
Alkis Evlogimenos
d4ef731684 Use name.empty() instead of testing against equality with the empty
string.

llvm-svn: 15191
2004-07-25 06:16:52 +00:00
Alkis Evlogimenos
d5f5b018e9 Disallow creation of named values of type void.
llvm-svn: 15190
2004-07-25 06:07:15 +00:00
Chris Lattner
32cab750d4 Fix a bug where we incorrectly value numbered the first PHI definition the
same as the PHI use.  This is not correct as the PHI use value is different
depending on which branch is taken.  This fixes espresso with aggressive
coalescing, and perhaps others.

llvm-svn: 15189
2004-07-25 05:45:18 +00:00
Chris Lattner
e3da94e11f Fix a bug in the range remover
llvm-svn: 15188
2004-07-25 05:43:53 +00:00
Chris Lattner
4ed2b97d51 Add debugging output for joining assignments
llvm-svn: 15187
2004-07-25 03:24:11 +00:00
Alkis Evlogimenos
2faa6cdc75 Remove implementation of operator= and make it private so that it is
not used accidentally.

llvm-svn: 15172
2004-07-24 18:55:15 +00:00
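
Editor's note: declaring operator= private and leaving it unimplemented is the classic pre-C++11 way to forbid assignment. A sketch of the idiom with an invented class name:

class NonAssignable {
  // Private and never defined: outside code gets a compile error, and any
  // accidental use from inside the class fails to link.
  void operator=(const NonAssignable &);
public:
  NonAssignable() {}
};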
Alkis Evlogimenos
0618429149 Change std::map<unsigned, LiveInterval*> into a std::map<unsigned,
LiveInterval>. This saves some space and removes a level of pointer
indirection on lookups.

llvm-svn: 15167
2004-07-24 11:44:15 +00:00
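
Editor's note: the difference described above is value storage versus pointer storage in the map. A tiny sketch with a stubbed-out interval type, not the real LiveInterval:

#include <map>
#include <vector>

struct LiveIntervalStub { std::vector<int> Ranges; };

// Before: one heap allocation per interval plus a pointer chase on every lookup.
std::map<unsigned, LiveIntervalStub *> R2IByPointer;

// After: the interval lives inside the map node itself, so a lookup lands
// directly on the data and there is no separate allocation to manage.
std::map<unsigned, LiveIntervalStub> R2IByValue;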
Chris Lattner
0d3969f3d1 Finally give bugpoint -timeout support!
llvm-svn: 15163
2004-07-24 07:53:26 +00:00
Chris Lattner
29e97f36bf obvious fix
llvm-svn: 15162
2004-07-24 07:51:27 +00:00
Chris Lattner
71a9eec1d3 Get rid of the printout from the low-level system interface
llvm-svn: 15161
2004-07-24 07:50:48 +00:00
Chris Lattner
f162d7dac2 Pass timeouts into the low level "execute program with timeout" function
llvm-svn: 15160
2004-07-24 07:49:11 +00:00
Chris Lattner
30a4ae76ad Provide timeout values to all abstract interpreters
llvm-svn: 15159
2004-07-24 07:48:50 +00:00
Chris Lattner
07dc2875c2 Add support for killing the program if it executes for too long.
llvm-svn: 15158
2004-07-24 07:41:31 +00:00
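
Editor's note: the timeout machinery added in the commits above amounts to running the child and killing it if it outlives a deadline. A rough POSIX sketch of that idea, not bugpoint's actual implementation (the function name and one-second polling are assumptions):

#include <csignal>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

// Returns the child's exit status, or -1 if it failed or ran too long.
int runWithTimeout(const char *Path, char *const Argv[], unsigned TimeoutSecs) {
  pid_t Child = fork();
  if (Child < 0)
    return -1;                    // fork failed
  if (Child == 0) {               // child: run the program
    execv(Path, Argv);
    _exit(127);                   // exec failed
  }
  for (unsigned Elapsed = 0; Elapsed < TimeoutSecs; ++Elapsed) {
    int Status;
    if (waitpid(Child, &Status, WNOHANG) == Child)
      return WIFEXITED(Status) ? WEXITSTATUS(Status) : -1;
    sleep(1);                     // still running; poll once a second
  }
  kill(Child, SIGKILL);           // timed out: kill and reap the child
  waitpid(Child, nullptr, 0);
  return -1;
}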
Chris Lattner
26856ab7c3 whoops, didn't mean to remove this
llvm-svn: 15157
2004-07-24 04:32:22 +00:00
Chris Lattner
de8efecef1 In the joiner, merge the small interval into the large interval. This restores
us back to taking about 10.5s on gcc, instead of taking 15.6s!  The net result
is that my big patches have had no significant effect on compile time or code
quality.  heh.

llvm-svn: 15156
2004-07-24 03:41:50 +00:00
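
Editor's note: merging the small interval into the large one keeps the join cost proportional to the smaller side. A simplified sketch with stand-in types, not the real LiveInterval code:

#include <algorithm>
#include <utility>
#include <vector>

struct RangeStub { unsigned Start, End; };
using IntervalStub = std::vector<RangeStub>;

// Insert R keeping ranges sorted by start point (merging of overlapping
// ranges is omitted to keep the sketch short).
static void addRangeStub(IntervalStub &Dst, const RangeStub &R) {
  auto It = std::lower_bound(Dst.begin(), Dst.end(), R,
                             [](const RangeStub &A, const RangeStub &B) {
                               return A.Start < B.Start;
                             });
  Dst.insert(It, R);
}

static void joinIntervals(IntervalStub &Dst, IntervalStub &Src) {
  if (Src.size() > Dst.size())        // make Src the smaller of the two first
    std::swap(Dst, Src);
  for (const RangeStub &R : Src)      // then fold the small one into the large
    addRangeStub(Dst, R);
  Src.clear();
}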
Chris Lattner
c71a9684f8 Completely eliminate the intervals_ list. Instead, the r2iMap_ maintains
ownership of the intervals.

llvm-svn: 15155
2004-07-24 03:32:06 +00:00
Chris Lattner
b05762ade7 Big change to compute logical value numbers for each LiveRange added to an
Interval.  This generalizes the isDefinedOnce mechanism that we used before
to help us coalesce ranges that overlap.  As part of this, every logical
range with a different value is assigned a different number in the interval.
For example, for code that looks like this:

0  X = ...
4  X += ...
  ...
N    = X

We now generate a live interval that contains two ranges: [2,6:0),[6,?:1)
reflecting the fact that there are two different values in the range at
different positions in the code.

Currently we are not using this information at all, so this just slows down
liveintervals.  In the future, this will change.

Note that this change also substantially refactors the joinIntervalsInMachineBB
method to merge the cases for virt-virt and phys-virt joining into a single
case, adds comments, and makes the code a bit easier to follow.

llvm-svn: 15154
2004-07-24 02:59:07 +00:00
Chris Lattner
5ff4567405 Add a new differingRegisterClasses method
make overlapsAliases take pointers instead of references
fix indentation

llvm-svn: 15153
2004-07-24 02:53:43 +00:00
Chris Lattner
a3f7433b58 Little stuff:
* Fix comment typo
* add dump() methods
* add a few new methods like getLiveRangeContaining, removeRange & joinable
  (which is currently the same as overlaps)
* Remove the unused operator==

Bigger change:

* In LiveInterval, instead of using a boolean isDefinedOnce to keep track of
  if there are > 1 definitions in a particular interval, keep a counter,
  NumValues to keep track of exactly how many there are.
* In LiveRange, add a new ValId element to indicate which of the numbered
  values each LiveRange belongs to.   We now no longer merge LiveRanges if
they are of differing value IDs, even if they are neighbors.

llvm-svn: 15152
2004-07-24 02:52:23 +00:00
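
Editor's note: taken together, the two commits above attach a value id to every range and keep a per-interval count of distinct values. A sketch of that bookkeeping with invented names, not the real LiveIntervals data structures:

#include <vector>

struct LiveRangeSketch {
  unsigned Start, End;  // half-open [Start, End)
  unsigned ValId;       // which numbered value this range carries
};

struct LiveIntervalSketch {
  std::vector<LiveRangeSketch> Ranges;
  unsigned NumValues = 0;             // replaces the old isDefinedOnce boolean

  unsigned getNextValue() { return NumValues++; }
};

// For the "X = ...; X += ...; ... = X" example above, the interval holds one
// range starting at 2 with ValId 0 and a second range starting at 6 with
// ValId 1; ranges with different ValIds are never merged, even when adjacent.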
Misha Brukman
c1e23e1939 Running list of bugs, unimplemented features, currently broken tests, until we
have a nightly tester set up for PowerPC.

llvm-svn: 15147
2004-07-23 22:37:22 +00:00
Misha Brukman
c7291558eb Eliminate spurious empty space; make code easier to page through.
llvm-svn: 15146
2004-07-23 22:35:49 +00:00
Misha Brukman
eaac0bbed7 Simplify boolean test.
llvm-svn: 15145
2004-07-23 21:43:26 +00:00
Chris Lattner
f60cf84fc5 More minor changes:
* Inline some functions
* Eliminate some comparisons from the release build

This is good for another .3 on gcc.

llvm-svn: 15144
2004-07-23 21:24:19 +00:00
Misha Brukman
a899694208 Implement casting a floating-point value to a 32-bit unsigned value
llvm-svn: 15143
2004-07-23 20:32:59 +00:00
Brian Gaeke
1584828633 bug fixed
llvm-svn: 15142
2004-07-23 19:41:13 +00:00
Chris Lattner
0475ecedab Change addRange and join to be a little bit smarter. In particular, we don't
want to insert a new range into the middle of the vector, then delete ranges
one at a time next to the inserted one as they are merged.

Instead, if the inserted interval overlaps, just start merging.  The only time
we insert into the middle of the vector is when we don't overlap at all.  Also
delete blocks of live ranges if we overlap with many of them.

This patch speeds up joining by .7 seconds on a large testcase, but more
importantly gets all of the range adding code into addRangeFrom.

llvm-svn: 15141
2004-07-23 19:38:44 +00:00
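
Editor's note: the strategy described above, grow an existing range on overlap and then erase the whole block of ranges it swallows, can be sketched on a sorted vector of half-open ranges. A simplified stand-in, not the real addRangeFrom:

#include <algorithm>
#include <vector>

struct R { unsigned Start, End; };    // half-open [Start, End), sorted, disjoint

void addRange(std::vector<R> &Ranges, R New) {
  // Find the first existing range whose end reaches New.Start.
  auto It = std::lower_bound(Ranges.begin(), Ranges.end(), New,
                             [](const R &A, const R &B) { return A.End < B.Start; });
  if (It == Ranges.end() || New.End < It->Start) {
    Ranges.insert(It, New);                    // no overlap: plain insert
    return;
  }
  // Overlap: grow the existing range in place...
  It->Start = std::min(It->Start, New.Start);
  It->End = std::max(It->End, New.End);
  // ...then erase the whole block of following ranges it now swallows.
  auto Last = It + 1;
  while (Last != Ranges.end() && Last->Start <= It->End) {
    It->End = std::max(It->End, Last->End);
    ++Last;
  }
  Ranges.erase(It + 1, Last);
}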
Brian Gaeke
bdab8e50cd Fix problem with inserting FunctionPasses that depend on ImmutablePasses
(e.g., LICM) into FunctionPassManagers. The problem is that we were
using a C-style cast to cast required analysis passes to PassClass*, but
if it's a FunctionPassManager, and the required analysis pass is an
ImmutablePass, the types aren't really compatible, so the C-style cast
causes a crash.

llvm-svn: 15140
2004-07-23 19:35:50 +00:00
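
Editor's note: the underlying hazard is that a C-style cast between class types compiles even when the object is really of an unrelated sibling type, whereas dynamic_cast checks at run time. A toy sketch of that distinction; these classes only stand in for the pass hierarchy and are not LLVM's:

struct AnalysisBase { virtual ~AnalysisBase() {} };
struct FunctionStyle : AnalysisBase { int PerFunctionData = 1; };
struct ImmutableStyle : AnalysisBase { int SharedData = 2; };

int readPerFunctionData(AnalysisBase *P) {
  // Wrong: compiles even if P really points at an ImmutableStyle, and then
  // reads the wrong object (or crashes):
  //   FunctionStyle *F = (FunctionStyle *)P;

  // Safer: ask first; dynamic_cast returns null when the object is the other kind.
  if (FunctionStyle *F = dynamic_cast<FunctionStyle *>(P))
    return F->PerFunctionData;
  return 0;
}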
Chris Lattner
de2859251e Search by the start point, not by the whole interval. This saves some
comparisons, reducing linscan by another .1 seconds :)

llvm-svn: 15139
2004-07-23 18:40:00 +00:00
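
Editor's note: searching by start point means a binary search over the Start fields followed by a single containment check, instead of comparing whole intervals. A simplified sketch with stand-in types, not the real LiveIntervals code:

#include <algorithm>
#include <vector>

struct Rng { unsigned Start, End; };   // half-open [Start, End), sorted, disjoint

const Rng *findRangeContaining(const std::vector<Rng> &Ranges, unsigned Pos) {
  // First range whose Start is greater than Pos...
  auto It = std::upper_bound(Ranges.begin(), Ranges.end(), Pos,
                             [](unsigned P, const Rng &R) { return P < R.Start; });
  if (It == Ranges.begin())
    return nullptr;                    // everything starts after Pos
  --It;                                // ...so the one before it is the only candidate
  return Pos < It->End ? &*It : nullptr;
}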
Chris Lattner
56e3526765 New helper method
llvm-svn: 15138
2004-07-23 18:39:12 +00:00
Chris Lattner
b05751f2c7 Speed up debug builds a bit
llvm-svn: 15137
2004-07-23 18:38:52 +00:00