mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-22 12:33:33 +02:00
Commit Graph

139 Commits

Author SHA1 Message Date
Chris Lattner
2755fb4171 Alpha doesn't have a native f32 extload instruction.
llvm-svn: 19880
2005-01-28 22:58:25 +00:00
Chris Lattner
da7b5277c1 implement legalization of truncates whose results and sources need to be
truncated, e.g. (truncate:i8 something:i16) on a 32 or 64-bit RISC.
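
A minimal sketch (hypothetical helper names) of why this case costs nothing once
both types are promoted on a 32-bit RISC: neither i8 nor i16 is a legal register
type, so both values already sit in full-width registers and the truncate emits
no instruction; any masking is deferred to the uses of the narrow value.

/* Illustration only: both values already live in 32-bit registers. */
unsigned truncate_i16_to_i8(unsigned promoted_i16) {
    return promoted_i16;        /* same register bits; upper bits are don't-care */
}

unsigned use_as_u8(unsigned promoted_i8) {
    return promoted_i8 & 0xFF;  /* masking happens only where the value is used */
}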

llvm-svn: 19879
2005-01-28 22:52:50 +00:00
Chris Lattner
89cac82479 Get alpha working with memset/memcpy/memmove
llvm-svn: 19878
2005-01-28 22:29:18 +00:00
Chris Lattner
4134789c8f CopyFromReg produces two values. Make sure that we remember that both are
legalized, and actually return the correct result when we legalize the chain first.

llvm-svn: 19866
2005-01-28 06:27:38 +00:00
Chris Lattner
849899e193 Silence optimized warnings.
llvm-svn: 19797
2005-01-23 23:19:44 +00:00
Chris Lattner
b3a5fc3ec0 Adjust to changes in SelectionDAG interfaces
The first half of correct chain insertion for libcalls. This is not enough
to fix Fhourstones yet though.

llvm-svn: 19781
2005-01-23 04:42:50 +00:00
Chris Lattner
3165569ba9 Remove the 3 HACK HACK HACKs I put in before, fixing them properly with
the new TLI that is available.

Implement support for handling out of range shifts.  This allows us to
compile this code (a 64-bit rotate):

unsigned long long f3(unsigned long long x) {
  return (x << 32) | (x >> (64-32));
}

into this:

f3:
        mov %EDX, DWORD PTR [%ESP + 4]
        mov %EAX, DWORD PTR [%ESP + 8]
        ret

GCC produces this:

$ gcc t.c -masm=intel -O3 -S -o - -fomit-frame-pointer
..
f3:
        push    %ebx
        mov     %ebx, DWORD PTR [%esp+12]
        mov     %ecx, DWORD PTR [%esp+8]
        mov     %eax, %ebx
        mov     %edx, %ecx
        pop     %ebx
        ret

The Simple ISEL produces (eww gross):

f3:
        sub %ESP, 4
        mov DWORD PTR [%ESP], %ESI
        mov %EDX, DWORD PTR [%ESP + 8]
        mov %ECX, DWORD PTR [%ESP + 12]
        mov %EAX, 0
        mov %ESI, 0
        or %EAX, %ECX
        or %EDX, %ESI
        mov %ESI, DWORD PTR [%ESP]
        add %ESP, 4
        ret

llvm-svn: 19780
2005-01-23 04:39:44 +00:00
Chris Lattner
4c997d281c Adjust to changes in SelectionDAG interface.
llvm-svn: 19779
2005-01-23 04:36:26 +00:00
Chris Lattner
63ec3c402b Get this to work for 64-bit systems.
llvm-svn: 19763
2005-01-22 23:04:37 +00:00
Chris Lattner
97f35a7a07 More bugfixes for IA64 shifts.
llvm-svn: 19739
2005-01-22 00:33:03 +00:00
Chris Lattner
67deea9d05 Fix problems with non-x86 targets.
llvm-svn: 19738
2005-01-22 00:31:52 +00:00
Chris Lattner
42e239ed58 Add a nasty hack to fix Alpha/IA64 multiplies by a power of two.
llvm-svn: 19737
2005-01-22 00:20:42 +00:00
Chris Lattner
e724100870 Remove unneeded line.
llvm-svn: 19736
2005-01-21 23:43:12 +00:00
Chris Lattner
a974e215a5 test commit
llvm-svn: 19735
2005-01-21 23:38:56 +00:00
Chris Lattner
392ddf430b Unary token factor nodes are unneeded.
llvm-svn: 19727
2005-01-21 18:01:22 +00:00
Chris Lattner
07c35617d5 Refactor libcall code a bit. Initial implementation of expanding int -> FP
operations for 64-bit integers.
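
For orientation, a minimal sketch of what the expanded operation computes when
built from 32-bit halves (hypothetical helper; the commit itself routes the
64-bit cases through libcalls rather than open-coding them like this):

/* Unsigned 64-bit -> double from two 32-bit halves.  Both partial
   conversions are exact, so the one rounding in the final add is enough. */
double u64_to_double(unsigned lo, unsigned hi) {
    return (double)hi * 4294967296.0 /* 2^32 */ + (double)lo;
}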

llvm-svn: 19724
2005-01-21 06:05:23 +00:00
Chris Lattner
6258ec2e1d Simplify the shift-expansion code.
llvm-svn: 19721
2005-01-20 20:29:23 +00:00
Chris Lattner
c95c7c90c9 Expand add/sub into ADD_PARTS/SUB_PARTS instead of a non-existent libcall.
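
A minimal sketch of what an ADD_PARTS-style expansion computes on a 32-bit
target (the helper name and signature are illustrative only):

/* 64-bit add expressed as two 32-bit adds plus an explicit carry. */
void add_parts(unsigned lo1, unsigned hi1, unsigned lo2, unsigned hi2,
               unsigned *lo_out, unsigned *hi_out) {
    unsigned lo = lo1 + lo2;
    unsigned carry = lo < lo1;    /* unsigned wrap-around detects the carry */
    *lo_out = lo;
    *hi_out = hi1 + hi2 + carry;
}
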
llvm-svn: 19715
2005-01-20 18:52:28 +00:00
Chris Lattner
4086a7a803 implement add_parts/sub_parts.
llvm-svn: 19714
2005-01-20 18:50:55 +00:00
Chris Lattner
e5212a16a2 Support targets that do not use i8 shift amounts.
llvm-svn: 19707
2005-01-19 22:31:21 +00:00
Chris Lattner
c662697319 Add support for targets that pass args in registers to calls.
llvm-svn: 19703
2005-01-19 20:24:35 +00:00
Chris Lattner
277ac2be70 Fold single use token factor nodes into other token factor nodes.
llvm-svn: 19701
2005-01-19 19:10:54 +00:00
Chris Lattner
85e0771f79 Realize the individual pieces of an expanded copytoreg/store/load are
independent of each other.

llvm-svn: 19700
2005-01-19 18:02:17 +00:00
Chris Lattner
027c97e93e Know some identities about tokenfactor nodes.
llvm-svn: 19699
2005-01-19 18:01:40 +00:00
Chris Lattner
7114e8a527 Know some simple identities. This improves codegen for (1LL << N).
llvm-svn: 19698
2005-01-19 17:29:49 +00:00
Chris Lattner
743a36c818 Implement a way of expanding shifts. This applies to targets that offer
select operations or to shifts that are by a constant.  This automatically
implements (with no special code) all of the special cases for shift by 32,
shift by < 32 and shift by > 32.
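
A minimal sketch of this expansion for a variable 64-bit shift-left on a 32-bit
target, with the selects written as C conditionals (the helper name is
illustrative; a constant shift amount folds the selects away, which is where
the by-32 / under-32 / over-32 special cases come from):

/* Expand a 64-bit shift-left into 32-bit ops plus selects.  The "small"
   results cover amt < 32, the "big" results cover amt >= 32, and a select
   on bit 5 of the amount picks between them. */
unsigned long long shl64(unsigned lo, unsigned hi, unsigned amt) {
    unsigned amt31    = amt & 31;
    unsigned lo_small = lo << amt31;
    unsigned hi_small = (hi << amt31) | (amt31 ? lo >> (32 - amt31) : 0);
    unsigned lo_big   = 0;
    unsigned hi_big   = lo << amt31;            /* low word shifted into high */
    int big = (amt & 32) != 0;                  /* the select condition */
    unsigned lo_res = big ? lo_big : lo_small;
    unsigned hi_res = big ? hi_big : hi_small;
    return ((unsigned long long)hi_res << 32) | lo_res;
}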

llvm-svn: 19679
2005-01-19 04:19:40 +00:00
Chris Lattner
0df1935505 Zero is cheaper than sign extend.
llvm-svn: 19675
2005-01-18 21:57:59 +00:00
Chris Lattner
4360871e16 Fix some fixmes (promoting bools for select and brcond), fix promotion
of zero and sign extends.

llvm-svn: 19671
2005-01-18 19:27:06 +00:00
Chris Lattner
eea485de1f Keep track of the retval type as well.
llvm-svn: 19670
2005-01-18 19:26:36 +00:00
Chris Lattner
ff086f3016 Teach legalize to promote copy(from|to)reg, instead of making the isel pass
do it.  This results in better code on X86 for floats (because if strict
precision is not required, we can elide some more expensive double -> float
conversions like the old isel did), and allows other targets to emit
CopyFromRegs that are not legal for arguments.

llvm-svn: 19668
2005-01-18 17:54:55 +00:00
Chris Lattner
891aa537f7 Teach legalize to promote SetCC results.
llvm-svn: 19657
2005-01-18 02:59:52 +00:00
Chris Lattner
95307053ec Allow setcc operations to have nonbool types.
llvm-svn: 19656
2005-01-18 02:52:03 +00:00
Chris Lattner
906541da95 Fix the completely broken FP constant folds for setcc's.
llvm-svn: 19651
2005-01-18 02:11:55 +00:00
Chris Lattner
c0aca0d13c Non-volatile loads can be freely reordered against each other. This fixes
X86/reg-pressure.ll again, and allows us to do nice things in other cases.
For example, we now codegen this sort of thing:

int %loadload(int *%X, int* %Y) {
  %Z = load int* %Y
  %A = load int* %X      ;; load between %Z and store
  %Q = add int %Z, 1
  store int %Q, int* %Y
  ret int %A
}

Into this:

loadload:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %EAX, DWORD PTR [%EAX]
        mov %ECX, DWORD PTR [%ESP + 8]
        inc DWORD PTR [%ECX]
        ret

where we weren't able to form the 'inc [mem]' before.  This also lets the
instruction selector emit loads in any order it wants to, which can be good
for register pressure as well.

llvm-svn: 19644
2005-01-17 22:19:26 +00:00
Chris Lattner
49291c4d96 Don't call SelectionDAG.getRoot() directly, go through a forwarding method.
llvm-svn: 19642
2005-01-17 19:43:36 +00:00
Chris Lattner
88bbcfc893 Implement a target independent optimization to codegen arguments only into
the basic block that uses them if possible.  This is a big win on X86, as it
lets us fold the argument loads into instructions and reduce register pressure
(by not loading all of the arguments in the entry block).

For this (contrived to show the optimization) testcase:

int %argtest(int %A, int %B) {
        %X = sub int 12345, %A
        br label %L
L:
        %Y = add int %X, %B
        ret int %Y
}

we used to produce:

argtest:
        mov %ECX, DWORD PTR [%ESP + 4]
        mov %EAX, 12345
        sub %EAX, %ECX
        mov %EDX, DWORD PTR [%ESP + 8]
.LBBargtest_1:  # L
        add %EAX, %EDX
        ret


now we produce:

argtest:
        mov %EAX, 12345
        sub %EAX, DWORD PTR [%ESP + 4]
.LBBargtest_1:  # L
        add %EAX, DWORD PTR [%ESP + 8]
        ret

This also fixes the FIXME in the code.

BTW, this occurs in real code.  164.gzip shrinks from 8623 to 8608 lines of
.s file.  The stack frame in huft_build shrinks from 1644->1628 bytes,
inflate_codes shrinks from 116->108 bytes, and inflate_block from 2620->2612,
due to fewer spills.

Take that alkis. :-)

llvm-svn: 19639
2005-01-17 17:55:19 +00:00
Chris Lattner
49a1f3a109 Refactor code into a new method.
llvm-svn: 19635
2005-01-17 17:15:02 +00:00
Chris Lattner
ec55e3e529 Implement legalize of call nodes.
llvm-svn: 19617
2005-01-16 19:46:48 +00:00
Chris Lattner
0eca430af1 Revamp supported ops. Instead of just being supported or not, we now keep
track of how to deal with it, and provide the target with a hook that they
can use to legalize arbitrary operations in arbitrary ways.

Implement custom lowering for a couple of ops, implement promotion for select
operations (which x86 needs).
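
Roughly, the shape of the new scheme, sketched in C for illustration (the
action names, table, and dispatch are stand-ins, not the actual interface):

/* Each target records one action per operation instead of a supported bit;
   the legalizer dispatches on that action. */
enum OpAction { ActionLegal, ActionPromote, ActionExpand, ActionCustom };

/* Filled in by each target: one entry per operation code. */
extern enum OpAction OpActions[];

void legalize_node(int opcode) {
    switch (OpActions[opcode]) {
    case ActionLegal:   /* already legal; leave the node alone */        break;
    case ActionPromote: /* redo the operation in a larger legal type */  break;
    case ActionExpand:  /* break it into simpler nodes or a libcall */   break;
    case ActionCustom:  /* hand the node to the target's own hook */     break;
    }
}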

llvm-svn: 19613
2005-01-16 07:29:19 +00:00
Chris Lattner
835a5efef3 add method stub
llvm-svn: 19612
2005-01-16 07:28:41 +00:00
Chris Lattner
907534af24 Don't mash stuff together.
llvm-svn: 19611
2005-01-16 07:28:31 +00:00
Chris Lattner
0f4f239899 Implement some more missing promotions.
llvm-svn: 19606
2005-01-16 05:06:12 +00:00
Chris Lattner
742b77f9af Clarify assertion.
llvm-svn: 19597
2005-01-16 02:23:34 +00:00
Chris Lattner
4517b8af97 Add assertions.
llvm-svn: 19596
2005-01-16 02:23:22 +00:00
Chris Lattner
9f8589f4b3 Add support for promoted registers being live across blocks.
llvm-svn: 19595
2005-01-16 02:23:07 +00:00
Chris Lattner
01e2ce8a4c Move some information into the TargetLowering object.
llvm-svn: 19583
2005-01-16 01:11:45 +00:00
Chris Lattner
9762070e50 Use the new TLI method to get this.
llvm-svn: 19582
2005-01-16 01:11:19 +00:00
Chris Lattner
0777f84d53 legalize a bunch of operations that I missed.
llvm-svn: 19580
2005-01-16 00:38:00 +00:00
Chris Lattner
1de18d422e Add support for targets that require promotions.
llvm-svn: 19579
2005-01-16 00:37:38 +00:00
Chris Lattner
8c4c81d6b3 Fix some serious bugs in promotion.
llvm-svn: 19578
2005-01-16 00:17:42 +00:00