mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-30 07:22:55 +01:00
Commit Graph

22375 Commits

Author · SHA1 · Message · Date
Evan Cheng
ef93bdc0cf - Use xor to clear integer registers (set R, 0).
- Added a new format for instructions where the source register is implied
  and it is the same as the destination register. Used for pseudo instructions
  that clear the destination register.

llvm-svn: 25872
2006-02-01 06:13:50 +00:00
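
For reference, the change adopts the usual x86 zeroing idiom; a minimal sketch, in the same Intel-style syntax used elsewhere in this log:

        xor %EAX, %EAX          ; 2-byte encoding, clears EAX
        mov %EAX, 0             ; 5-byte encoding, same result

Besides being shorter, the xor form is treated by many x86 implementations as a dependency-breaking zeroing idiom.
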
Evan Cheng
6523b2de76 Remove another entry.
llvm-svn: 25871
2006-02-01 06:08:48 +00:00
Evan Cheng
efdfb534a7 If a pattern's root node is a constant, its size should be 3 rather than 2.
llvm-svn: 25870
2006-02-01 06:06:31 +00:00
Jeff Cohen
8b429890aa Fix VC++ compilation error.
llvm-svn: 25869
2006-02-01 04:37:04 +00:00
Chris Lattner
da8766b370 New testcase for the 'ret double folding with load' optimization
llvm-svn: 25868
2006-02-01 01:45:02 +00:00
Chris Lattner
0e7d439232 Another regression from the pattern isel
llvm-svn: 25867
2006-02-01 01:44:25 +00:00
Chris Lattner
fa15301572 Beef up the interface to inline asm constraint parsing, making it more general, useful, and easier to use.
llvm-svn: 25866
2006-02-01 01:29:47 +00:00
Chris Lattner
5549a10792 adjust to changes in InlineAsm interface. Fix a few minor bugs.
llvm-svn: 25865
2006-02-01 01:28:23 +00:00
Chris Lattner
fa737604c5 Beef up the interface to inline asm constraint parsing, making it more
general, useful, and easier to use.

llvm-svn: 25864
2006-02-01 01:27:37 +00:00
Evan Cheng
27738f635e Return's chain should match either the chain produced by the
value or the chain going into the load.

llvm-svn: 25863
2006-02-01 01:19:32 +00:00
Chris Lattner
c270dd87a9 another testcase.
llvm-svn: 25862
2006-02-01 00:28:12 +00:00
Evan Cheng
329e86ddfa When folding a load into a return of SSE value, check the chain to
ensure the memory location has not been clobbered.

llvm-svn: 25861
2006-02-01 00:20:21 +00:00
Evan Cheng
19b80eebb2 Remove an item. It's done.
llvm-svn: 25860
2006-02-01 00:15:53 +00:00
Evan Cheng
7eb36f4721 Be smarter about whether to store the SSE return value in memory. If
it is already available in memory, do an fld directly from there.

llvm-svn: 25859
2006-01-31 23:19:54 +00:00
Chris Lattner
bb410d0b63 turning these into 'adds' would require extra copies
llvm-svn: 25858
2006-01-31 22:59:46 +00:00
Evan Cheng
45ebd632f2 - Allow XMM load (for scalar use) to be folded into ANDP* and XORP*.
- Use XORP* to implement fneg.

llvm-svn: 25857
2006-01-31 22:28:30 +00:00
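
For reference, a minimal sketch of the technique (the constant-pool labels here are illustrative, not the exact names the compiler emits): scalar SSE fneg can be lowered to an XOR with a sign-bit mask, and fabs to an AND that clears it:

        xorps %XMM0, XMMWORD PTR [.CPI_fneg]    ; mask of 0x80000000 words: flips the sign bit (fneg)
        andps %XMM0, XMMWORD PTR [.CPI_fabs]    ; mask of 0x7fffffff words: clears the sign bit (fabs)

Since the packed op reads a full 16-byte constant, such masks want 16-byte alignment in the constant pool, presumably part of what the explicit constant-pool alignment support (llvm-svn 25855, below) is for.
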
Evan Cheng
6d9aae44f9 Remove entries on fabs and fneg. These are done.
llvm-svn: 25856
2006-01-31 22:26:21 +00:00
Evan Cheng
f115c17f23 Allow the specification of explicit alignments for constant pool entries.
llvm-svn: 25855
2006-01-31 22:23:14 +00:00
Chris Lattner
5587b270e4 * Fix 80-column violations
* Rename hasSSE -> hasSSE1 to avoid my continual confusion with 'has any SSE'.
* Add inline asm constraint specification.

llvm-svn: 25854
2006-01-31 19:43:35 +00:00
Chris Lattner
892fe31362 add info about the inline asm register constraints for PPC
llvm-svn: 25853
2006-01-31 19:20:21 +00:00
Evan Cheng
80944be4e7 Allow custom lowering of fabs. I forgot to check in this change, which
caused several test failures.

llvm-svn: 25852
2006-01-31 18:14:25 +00:00
Chris Lattner
b54e4baa26 add a missing break that caused a lot of failures last night :(
llvm-svn: 25851
2006-01-31 17:20:06 +00:00
Chris Lattner
b79e962b6b Solaris apparently won't clobber an existing symlink with ln -sf
llvm-svn: 25849
2006-01-31 16:10:53 +00:00
Nate Begeman
7a83bb4285 Codegen
bool %test(int %X) {
  %Y = seteq int %X, 13
  ret bool %Y
}

as

_test:
        addi r2, r3, -13
        cntlzw r2, r2
        srwi r3, r2, 5
        blr

rather than

_test:
        cmpwi cr7, r3, 13
        mfcr r2
        rlwinm r3, r2, 31, 31, 31
        blr

This has very little effect on most code, but it speeds up analyzer by 23% and
mason by 11%.

llvm-svn: 25848
2006-01-31 08:17:29 +00:00
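
For reference, the reason the shorter sequence is correct: cntlzw returns the number of leading zero bits in a 32-bit word, which is 32 exactly when its operand is zero. After the addi, r2 holds %X - 13, so cntlzw yields 32 only when %X was 13, and the srwi by 5 maps 32 to 1 and every count from 0 through 31 to 0, producing the bool directly without going through the condition register and mfcr.
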
Chris Lattner
798747b798 okay, one more
llvm-svn: 25847
2006-01-31 07:45:45 +00:00
Chris Lattner
c831039792 another note
llvm-svn: 25846
2006-01-31 07:45:08 +00:00
Chris Lattner
5d718dbc1e More notes
llvm-svn: 25845
2006-01-31 07:43:33 +00:00
Chris Lattner
e0d5ccdecf another one
llvm-svn: 25844
2006-01-31 07:38:32 +00:00
Chris Lattner
66751830ed add a note
llvm-svn: 25843
2006-01-31 07:37:20 +00:00
Chris Lattner
1a0bf14366 add conditional moves of float and double values on int/fp condition codes.
llvm-svn: 25842
2006-01-31 07:26:55 +00:00
Chris Lattner
186b5f0887 Example that Nate pointed out
llvm-svn: 25841
2006-01-31 07:16:34 +00:00
Chris Lattner
769e683663 treat conditional branches the same way as conditional moves (giving them
an operand that contains the condcode), making things significantly simpler.

llvm-svn: 25840
2006-01-31 06:56:30 +00:00
Chris Lattner
a999c688d5 compactify all of the integer conditional moves into one instruction that takes
a CC as an operand.  Much smaller, much happier.

llvm-svn: 25839
2006-01-31 06:49:09 +00:00
Chris Lattner
5aade7535c Add immediate forms of integer cmovs
llvm-svn: 25838
2006-01-31 06:24:29 +00:00
Chris Lattner
c1b80cb2fe Shrinkify
llvm-svn: 25837
2006-01-31 06:18:16 +00:00
Chris Lattner
8d9da62be7 implement test/Regression/TableGen/DagIntSubst.ll
llvm-svn: 25836
2006-01-31 06:02:35 +00:00
Chris Lattner
e6bed389d4 new testcase
llvm-svn: 25835
2006-01-31 06:01:40 +00:00
Chris Lattner
fcd00875a9 Add the full complement of conditional moves of integer registers.
llvm-svn: 25834
2006-01-31 05:26:36 +00:00
Chris Lattner
9d29548015 Compile this:
void %X(int %A) {
        %C = setlt int %A, 123          ; <bool> [#uses=1]
        br bool %C, label %T, label %F

T:              ; preds = %0
        call int %main( int 0 )         ; <int>:0 [#uses=0]
        ret void

F:              ; preds = %0
        ret void
}

to this:

X:
        save -96, %o6, %o6
        subcc %i0, 122, %l0
        bg .LBBX_2      ! F
        nop
...

not this:

X:
        save -96, %o6, %o6
        sethi 0, %l0
        or %g0, 1, %l1
        subcc %i0, 122, %l2
        bg .LBBX_4      !
        nop
.LBBX_3:        !
        or %g0, %l0, %l1
.LBBX_4:        !
        subcc %l1, 0, %l0
        bne .LBBX_2     ! F
        nop

llvm-svn: 25833
2006-01-31 05:05:52 +00:00
Chris Lattner
d33e8efd56 Only insert an AND when converting from BR_COND to BRCC if needed.
llvm-svn: 25832
2006-01-31 05:04:52 +00:00
Evan Cheng
49467b6b5b Added custom lowering of fabs
llvm-svn: 25831
2006-01-31 03:14:29 +00:00
Chris Lattner
830b1ad8cf add the 'lucas' optimization
llvm-svn: 25830
2006-01-31 02:55:28 +00:00
Chris Lattner
213a7b78fb I don't see why this optimization isn't safe, but it isn't, so disable it
llvm-svn: 25829
2006-01-31 02:45:52 +00:00
Chris Lattner
9b89cad951 Another high-prio selection performance bug
llvm-svn: 25828
2006-01-31 02:10:06 +00:00
Chris Lattner
e113238f5c Handle physreg input/outputs. We now compile this:
int %test_cpuid(int %op) {
        %B = alloca int
        %C = alloca int
        %D = alloca int
        %A = call int asm "cpuid", "=eax,==ebx,==ecx,==edx,eax"(int* %B, int* %C, int* %D, int %op)
        %Bv = load int* %B
        %Cv = load int* %C
        %Dv = load int* %D
        %x = add int %A, %Bv
        %y = add int %x, %Cv
        %z = add int %y, %Dv
        ret int %z
}

to this:

_test_cpuid:
        sub %ESP, 16
        mov DWORD PTR [%ESP], %EBX
        mov %EAX, DWORD PTR [%ESP + 20]
        cpuid
        mov DWORD PTR [%ESP + 8], %ECX
        mov DWORD PTR [%ESP + 12], %EBX
        mov DWORD PTR [%ESP + 4], %EDX
        mov %ECX, DWORD PTR [%ESP + 12]
        add %EAX, %ECX
        mov %ECX, DWORD PTR [%ESP + 8]
        add %EAX, %ECX
        mov %ECX, DWORD PTR [%ESP + 4]
        add %EAX, %ECX
        mov %EBX, DWORD PTR [%ESP]
        add %ESP, 16
        ret

... note the proper register allocation.  :)

It is unclear to me why the loads aren't folded into the adds.

llvm-svn: 25827
2006-01-31 02:03:41 +00:00
Chris Lattner
899754747c more mumbling
llvm-svn: 25826
2006-01-31 00:45:37 +00:00
Chris Lattner
76a85bd047 add some notes
llvm-svn: 25825
2006-01-31 00:20:38 +00:00
Evan Cheng
38aacb5f09 Don't generate a complex sequence for SETOLE, SETOLT, SETULT, and SETUGT. Flip
the order of the compare operands and generate SETOGT, SETOGE, SETUGE, and
SETULE instead.

llvm-svn: 25824
2006-01-30 23:41:35 +00:00
Evan Cheng
7bcbf75f7f Don't generate (or setp, setae) for SETUGE. Simply flip the operands around and
generate SETULT instead.

llvm-svn: 25823
2006-01-30 23:39:40 +00:00
Chris Lattner
ce37ba3417 Print the most trivial inline asms.
llvm-svn: 25822
2006-01-30 23:00:08 +00:00