mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-20 19:42:54 +02:00
Commit Graph

17014 Commits

Author SHA1 Message Date
Chris Lattner
2a03fa3a5c Add a new method, described in the comment.
llvm-svn: 19683
2005-01-19 06:53:02 +00:00
Chris Lattner
ceca0b7b62 Ensure that each of these functions generates a sh[rl]d instruction.
llvm-svn: 19682
2005-01-19 06:30:36 +00:00
Chris Lattner
575e912fcf Codegen long >> 2 to this:
foo:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %EDX, DWORD PTR [%ESP + 8]
        shrd %EAX, %EDX, 2
        sar %EDX, 2
        ret

instead of this:

foo:
        mov %ECX, DWORD PTR [%ESP + 4]
        shr %ECX, 2
        mov %EDX, DWORD PTR [%ESP + 8]
        mov %EAX, %EDX
        shl %EAX, 30
        or %EAX, %ECX
        sar %EDX, 2
        ret

and long << 2 to this:

foo:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %ECX, DWORD PTR [%ESP + 8]
***     mov %EDX, %EAX
        shrd %EDX, %ECX, 30
        shl %EAX, 2
        ret

instead of this:

foo:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %ECX, %EAX
        shr %ECX, 30
        mov %EDX, DWORD PTR [%ESP + 8]
        shl %EDX, 2
        or %EDX, %ECX
        shl %EAX, 2
        ret

The extra copy (marked ***) can be eliminated when I teach the code generator
that shrd32rri8 is really commutative.
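
For reference, a minimal C sketch of the two-register decomposition behind the
shrd form above (illustrative names, not the legalizer's code):

/* 64-bit arithmetic shift right by a constant 0 < N < 32 on a 32-bit
 * target: the low word pulls N bits in from the high word (exactly what
 * shrd does in one instruction), then the high word shifts by itself. */
void sra64_by2(unsigned *lo, int *hi) {
    *lo = (*lo >> 2) | ((unsigned)*hi << 30);  /* shrd %lo, %hi, 2 */
    *hi = *hi >> 2;                            /* sar %hi, 2 */
}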

llvm-svn: 19681
2005-01-19 06:18:43 +00:00
Jeff Cohen
a3414ac8c7 Add missing data types for VC++
llvm-svn: 19680
2005-01-19 05:08:31 +00:00
Chris Lattner
743a36c818 Implement a way of expanding shifts.  This applies to targets that offer
select operations, or to shifts by a constant.  This automatically
implements (with no special code) all of the special cases for shift by 32,
shift by < 32, and shift by > 32.
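
A rough C sketch of the select-based expansion for a variable 64-bit left
shift (hypothetical helper, written as an if for readability; assumes
amt < 64):

void shl64(unsigned *lo, unsigned *hi, unsigned amt) {
    unsigned lo_in = *lo, hi_in = *hi;
    if (amt < 32) {                /* becomes a select on real hardware */
        *hi = (hi_in << amt) | (amt ? lo_in >> (32 - amt) : 0);
        *lo = lo_in << amt;
    } else {                       /* amt in [32, 64): low word vanishes */
        *hi = lo_in << (amt - 32);
        *lo = 0;
    }
}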

llvm-svn: 19679
2005-01-19 04:19:40 +00:00
Chris Lattner
419a5d213b X86 shifts mask the amount.
llvm-svn: 19678
2005-01-19 03:36:30 +00:00
Chris Lattner
fbd1f8e4fd Add a hook to find out how the target handles shift amounts that are out of
range.  Either they are undefined (the default), they mask the shift amount
to the size of the register (X86, Alpha, etc.), or they extend the shift (PPC).

This defaults to undefined, which is conservatively correct.
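
Summarized as an enum, the hook's three behaviors look roughly like this
(names invented for illustration, not the actual API):

enum OutOfRangeShiftBehavior {
    ShiftUndefined,   /* default: result is undefined */
    ShiftMasked,      /* amount masked to the register size (X86, Alpha) */
    ShiftExtended     /* shift extends past the register width (PPC) */
};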

llvm-svn: 19677
2005-01-19 03:36:14 +00:00
Chris Lattner
4938a7c8a1 Move all data members to the end of the class.
Add a hook to find out how the target handles shift amounts that are out of
range.  Either they are undefined (the default), they mask the shift amount
to the size of the register (X86, Alpha, etc.), or they extend the shift (PPC).

This defaults to undefined, which is conservatively correct.

llvm-svn: 19676
2005-01-19 03:36:03 +00:00
Chris Lattner
0df1935505 Zero is cheaper than sign extend.
llvm-svn: 19675
2005-01-18 21:57:59 +00:00
Chris Lattner
6dec8cb829 Code to handle FP_EXTEND is dead now: X86 has no data types to
FP_EXTEND from!

llvm-svn: 19674
2005-01-18 20:05:56 +00:00
Chris Lattner
798e9c85d6 Remove more dead code.
llvm-svn: 19673
2005-01-18 19:50:08 +00:00
Chris Lattner
401814508f The selection dag code handles the promotions from F32 to F64 for us, so we
don't need to even think about F32 in the X86 code anymore.

llvm-svn: 19672
2005-01-18 19:46:54 +00:00
Chris Lattner
4360871e16 Fix some FIXMEs (promoting bools for select and brcond) and fix
promotion of zero and sign extends.

llvm-svn: 19671
2005-01-18 19:27:06 +00:00
Chris Lattner
eea485de1f Keep track of the retval type as well.
llvm-svn: 19670
2005-01-18 19:26:36 +00:00
Chris Lattner
0697def39d Keep track of the returned value type as well.
llvm-svn: 19669
2005-01-18 19:26:18 +00:00
Chris Lattner
ff086f3016 Teach legalize to promote copy(from|to)reg, instead of making the isel pass
do it.  This results in better code on X86 for floats (because when strict
precision is not required, we can elide some of the more expensive double ->
float conversions the old isel emitted), and allows other targets to emit
CopyFromRegs that are not legal for arguments.
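
A sketch of the float case (assuming x87-style codegen, where narrowing an
intermediate result to float costs a store/reload round trip):

/* When strict precision is not required, the intermediate product can stay
 * in a register at higher precision rather than being rounded to float
 * through memory before the add. */
float scale(float a, float b) {
    return a * b + b;
}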

llvm-svn: 19668
2005-01-18 17:54:55 +00:00
Chris Lattner
dc09e52b3e Fix 124.m88ksim.
llvm-svn: 19667
2005-01-18 17:35:28 +00:00
Jeff Cohen
d991f0c15f Add project llvm-ld to Visual Studio
llvm-svn: 19665
2005-01-18 05:44:50 +00:00
Jeff Cohen
01ca103f97 Add project llvm-nm to Visual Studio
llvm-svn: 19664
2005-01-18 05:44:25 +00:00
Jeff Cohen
7c05504d8d Add project llvm-ld to Visual Studio
llvm-svn: 19663
2005-01-18 05:39:37 +00:00
Jeff Cohen
d07f37da2e Add llvm-bcanalyzer project to Visual Studio
llvm-svn: 19662
2005-01-18 05:31:34 +00:00
Chris Lattner
a04b1ee7a8 Do not emit loads multiple times, potentially in the wrong places.
llvm-svn: 19661
2005-01-18 04:18:32 +00:00
Tanya Lattner
d3459278f2 Minor changes.
llvm-svn: 19660
2005-01-18 04:15:41 +00:00
Chris Lattner
722ddeb86e Eliminate bad assertions.
llvm-svn: 19659
2005-01-18 04:00:54 +00:00
Chris Lattner
8f3a8d96e2 * Eliminate the TokenSet and just use the ExprMap for both tokens and values.
* Insert some really pedantic assertions that will notice when we emit the
  same loads more than once, exposing bugs.  This turns a miscompilation in
  bzip2 into a compile-fail.  Yaay.

llvm-svn: 19658
2005-01-18 03:51:59 +00:00
Chris Lattner
891aa537f7 Teach legalize to promote SetCC results.
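
Roughly, promotion here means the i1 comparison result is carried in a wider
legal type as exactly 0 or 1, e.g. (illustrative C):

/* The boolean is zero-extended into a full-width register, so users of
 * the value need no extra masking. */
int is_less(int a, int b) {
    return a < b;   /* 0 or 1 in a 32-bit register */
}
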
llvm-svn: 19657
2005-01-18 02:59:52 +00:00
Chris Lattner
95307053ec Allow setcc operations to have non-bool types.
llvm-svn: 19656
2005-01-18 02:52:03 +00:00
Chris Lattner
818e819e43 Allow setcc operations to have non-bool types.
llvm-svn: 19655
2005-01-18 02:51:41 +00:00
Chris Lattner
b3edb09ede Rely on the code in MatchAddress to do this work.  Otherwise we fail to
match (X+Y)+(Z << 1), because we match the X+Y first, consuming the index
register, and then there is no place to put the Z << 1.
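
Illustrative C for the failing pattern (hypothetical function, not the actual
testcase):

/* x86 addresses allow one base, one scaled index, and a displacement.
 * Matching X+Y first burns both the base and index slots, leaving no slot
 * for Z << 1; matching Z << 1 as the scaled index first leaves the base
 * slot free for the materialized X+Y sum. */
int load_it(char *X, long Y, long Z) {
    return *(int *)(X + Y + (Z << 1));
}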

llvm-svn: 19652
2005-01-18 02:25:52 +00:00
Chris Lattner
906541da95 Fix the completely broken FP constant folds for setcc's.
llvm-svn: 19651
2005-01-18 02:11:55 +00:00
Chris Lattner
ce2e0125dc Fix a problem where probing for addressing modes caused expressions to be
emitted too early.  In particular, this fixes
Regression/CodeGen/X86/regpressure.ll:regpressure3.

This also improves the 2nd basic block in 164.gzip:flush_block, which went from

.LBBflush_block_1:      # loopentry.1.i
        movzx %EAX, WORD PTR [dyn_ltree + 20]
        movzx %ECX, WORD PTR [dyn_ltree + 16]
        mov DWORD PTR [%ESP + 32], %ECX
        movzx %ECX, WORD PTR [dyn_ltree + 12]
        movzx %EDX, WORD PTR [dyn_ltree + 8]
        movzx %EBX, WORD PTR [dyn_ltree + 4]
        mov DWORD PTR [%ESP + 36], %EBX
        movzx %EBX, WORD PTR [dyn_ltree]
        add DWORD PTR [%ESP + 36], %EBX
        add %EDX, DWORD PTR [%ESP + 36]
        add %ECX, %EDX
        add DWORD PTR [%ESP + 32], %ECX
        add %EAX, DWORD PTR [%ESP + 32]
        movzx %ECX, WORD PTR [dyn_ltree + 24]
        add %EAX, %ECX
        mov %ECX, 0
        mov %EDX, %ECX

to

.LBBflush_block_1:      # loopentry.1.i
        movzx %EAX, WORD PTR [dyn_ltree]
        movzx %ECX, WORD PTR [dyn_ltree + 4]
        add %ECX, %EAX
        movzx %EAX, WORD PTR [dyn_ltree + 8]
        add %EAX, %ECX
        movzx %ECX, WORD PTR [dyn_ltree + 12]
        add %ECX, %EAX
        movzx %EAX, WORD PTR [dyn_ltree + 16]
        add %EAX, %ECX
        movzx %ECX, WORD PTR [dyn_ltree + 20]
        add %ECX, %EAX
        movzx %EAX, WORD PTR [dyn_ltree + 24]
        add %ECX, %EAX
        mov %EAX, 0
        mov %EDX, %EAX

... which results in less spilling in the function.

This change alone speeds up 164.gzip from 37.23s to 36.24s on apoc.  The
default isel takes 37.31s.
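
The source pattern here is roughly a running sum of 16-bit struct fields; a C
sketch (made-up names, not gzip's actual code):

struct ct { unsigned short freq, dad; };   /* 4-byte stride, as in the asm */

/* With the loads emitted in use order, each partial sum stays in a
 * register instead of spilling to the stack between additions. */
unsigned sum7(const struct ct *t) {
    return t[0].freq + t[1].freq + t[2].freq + t[3].freq
         + t[4].freq + t[5].freq + t[6].freq;
}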

llvm-svn: 19650
2005-01-18 01:06:26 +00:00
Chris Lattner
a78f9ced61 Fix indentation.
llvm-svn: 19649
2005-01-17 23:25:45 +00:00
Chris Lattner
84cb260633 This is a carefully contrived testcase where the X86 ISel is emitting all loads
before other ops, causing it to spill like mad.  This occurs in
164.gzip:flush_block.

llvm-svn: 19648
2005-01-17 23:16:01 +00:00
Chris Lattner
dff1e3e86f Don't bother using max here.
llvm-svn: 19647
2005-01-17 23:02:13 +00:00
Chris Lattner
2d86b43318 Do not give token factor nodes outrageous weights
llvm-svn: 19645
2005-01-17 22:56:09 +00:00
Chris Lattner
c0aca0d13c Non-volatile loads can be freely reordered against each other. This fixes
X86/reg-pressure.ll again, and allows us to do nice things in other cases.
For example, we now codegen this sort of thing:

int %loadload(int *%X, int* %Y) {
  %Z = load int* %Y
  %A = load int* %X      ;; load between %Z and store
  %Q = add int %Z, 1
  store int %Q, int* %Y
  ret int %A
}

Into this:

loadload:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %EAX, DWORD PTR [%EAX]
        mov %ECX, DWORD PTR [%ESP + 8]
        inc DWORD PTR [%ECX]
        ret

where we weren't able to form the 'inc [mem]' before.  This also lets the
instruction selector emit loads in any order it wants to, which can be good
for register pressure as well.

llvm-svn: 19644
2005-01-17 22:19:26 +00:00
Chris Lattner
f2878ce8ba Two changes:
 1. Fold [mem] += (1|-1) into inc [mem]/dec [mem] to save some icache space
    (see the sketch below).
 2. Do not let token factor nodes prevent forming '[mem] op= val' folds.
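
A minimal illustration of fold 1:

/* Compiles to a single "inc DWORD PTR [counter]" on X86 instead of a
 * separate load, add, and store. */
void bump(int *counter) {
    ++*counter;
}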

llvm-svn: 19643
2005-01-17 22:10:42 +00:00
Chris Lattner
49291c4d96 Don't call SelectionDAG.getRoot() directly, go through a forwarding method.
llvm-svn: 19642
2005-01-17 19:43:36 +00:00
Chris Lattner
40c0fca632 Refactor load/op/store folding into its own method; no functionality changes.
llvm-svn: 19641
2005-01-17 19:25:26 +00:00
Chris Lattner
88bbcfc893 Implement a target independent optimization to codegen arguments only into
the basic block that uses them if possible.  This is a big win on X86, as it
lets us fold the argument loads into instructions and reduce register pressure
(by not loading all of the arguments in the entry block).

For this (contrived to show the optimization) testcase:

int %argtest(int %A, int %B) {
        %X = sub int 12345, %A
        br label %L
L:
        %Y = add int %X, %B
        ret int %Y
}

we used to produce:

argtest:
        mov %ECX, DWORD PTR [%ESP + 4]
        mov %EAX, 12345
        sub %EAX, %ECX
        mov %EDX, DWORD PTR [%ESP + 8]
.LBBargtest_1:  # L
        add %EAX, %EDX
        ret


now we produce:

argtest:
        mov %EAX, 12345
        sub %EAX, DWORD PTR [%ESP + 4]
.LBBargtest_1:  # L
        add %EAX, DWORD PTR [%ESP + 8]
        ret

This also fixes the FIXME in the code.

BTW, this occurs in real code.  164.gzip shrinks from 8623 to 8608 lines of
.s file.  The stack frame in huft_build shrinks from 1644->1628 bytes,
inflate_codes shrinks from 116->108 bytes, and inflate_block from 2620->2612,
due to fewer spills.

Take that, Alkis. :-)

llvm-svn: 19639
2005-01-17 17:55:19 +00:00
Chris Lattner
2348abc421 Fix a major regression from last night that prevented us from producing
[mem] op= reg operations.

The body of the if is less indented but unmodified in this patch.

llvm-svn: 19638
2005-01-17 17:49:14 +00:00
Chris Lattner
49a1f3a109 Refactor code into a new method.
llvm-svn: 19635
2005-01-17 17:15:02 +00:00
Chris Lattner
e2fd07b43d Make methods private, add a method.
llvm-svn: 19634
2005-01-17 17:14:43 +00:00
Chris Lattner
adb669ab1f Codegen this:
int %foo(int %X) {
        %T = add int %X, 13
        %S = mul int %T, 3
        ret int %S
}

as this:

        mov %ECX, DWORD PTR [%ESP + 4]
        lea %EAX, DWORD PTR [%ECX + 2*%ECX + 39]
        ret

instead of this:

        mov %ECX, DWORD PTR [%ESP + 4]
        mov %EAX, %ECX
        add %EAX, 13
        imul %EAX, %EAX, 3
        ret
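
The arithmetic behind the lea: (X + 13) * 3 = 3*X + 39 = X + 2*X + 39, which
maps directly onto the [base + scale*index + disp] addressing form with
base = index = X, scale = 2, and disp = 39.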

llvm-svn: 19633
2005-01-17 06:48:02 +00:00
Tanya Lattner
5a10531cf8 Added temporary instructions to preserve SSA form.
llvm-svn: 19632
2005-01-17 06:47:26 +00:00
Chris Lattner
51590b615c Fix test/Regression/CodeGen/X86/2005-01-17-CycleInDAG.ll and 132.ijpeg.
Do not fold a load into an operation if it will induce a cycle in the DAG.

Repeat after me: dAg.

llvm-svn: 19631
2005-01-17 06:26:58 +00:00
Chris Lattner
a5f6a52471 New testcase for a problem that occurred in 132.ijpeg
llvm-svn: 19630
2005-01-17 06:25:59 +00:00
Chris Lattner
3402945d52 Delete PHI nodes that are not dead but are trapped in a cycle of single
uses.

llvm-svn: 19629
2005-01-17 05:10:15 +00:00
Chris Lattner
de6b1ca556 Move code out one level of indentation to make it easier to read.
Disable the xform for the < and > cases.  It turns out that the following is
being miscompiled:

bool %test(sbyte %S) {
        %T = cast sbyte %S to uint
        %V = setgt uint %T, 255
        ret bool %V
}

(The cast from sbyte to uint sign-extends, so a negative %S yields a %T well
above 255 and the setgt really is true; the xform assumed otherwise, hence
the miscompile.)

llvm-svn: 19628
2005-01-17 03:20:02 +00:00
Chris Lattner
783a9d8893 Add methods
llvm-svn: 19627
2005-01-17 02:24:59 +00:00