mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-23 13:02:52 +02:00
Commit Graph

719 Commits

Author SHA1 Message Date
Chris Lattner
6331eb6bbe Delete the allocate*TargetMachine function, which is now dead.
The shared command line options are now in a header that makes sense.

llvm-svn: 14756
2004-07-11 04:17:10 +00:00
Chris Lattner
b67e3b01bc Make the formatting of these a bit nicer
llvm-svn: 14747
2004-07-11 03:27:42 +00:00
Chris Lattner
2ada866a78 Auto-register the target
llvm-svn: 14745
2004-07-11 02:48:49 +00:00
Reid Spencer
50ec3f9325 Add #include <iostream> since Value.h does not #include it any more.
llvm-svn: 14622
2004-07-04 12:19:56 +00:00
Chris Lattner
6da0499f4b Remove dead blocks
llvm-svn: 14564
2004-07-02 05:46:41 +00:00
Misha Brukman
9e015dddb8 Fix associativity of parameters to assert(): now it actually makes sense.
llvm-svn: 14483
2004-06-29 19:43:20 +00:00
Misha Brukman
b3e4179f42 Convert tabs to spaces.
llvm-svn: 14482
2004-06-29 19:28:53 +00:00
Chris Lattner
2abf0134d0 I believe that the code generator now properly handles dead basic blocks. If not,
that is a bug and should be fixed.

llvm-svn: 14476
2004-06-29 07:17:12 +00:00
Chris Lattner
cd1a39bbec Fix a regression from r1.224. In particular, codegen a cast from double ->
float as a truncation by going through memory.  This truncation was being
skipped, which caused 175.vpr to fail after aggressive register promotion.
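
The rounding happens because an x87 store to a 32-bit memory slot narrows the
value to float precision, whereas a register-to-register move keeps the full
80-bit value. A sketch of the pattern (the stack slot offset is hypothetical):

        fstp DWORD PTR [%ESP + 4]       # the store rounds to float precision
        fld DWORD PTR [%ESP + 4]        # reload the now-truncated value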

llvm-svn: 14473
2004-06-29 00:14:38 +00:00
Tanya Lattner
da38dc5180 Made a fix so that MachineInstrs belonging to a MachineBasicBlock that is not yet attached to a MachineFunction can still be printed. As part of this change, the third operand (TargetMachine) of the MachineInstr::print function is now a pointer.
llvm-svn: 14389
2004-06-25 00:13:11 +00:00
Misha Brukman
e38f7ed2cc Spell out `NoFramePointerElim' for readability.
llvm-svn: 14299
2004-06-21 21:17:44 +00:00
Misha Brukman
a2ac4e4345 Use the common `NoFPElim' setting instead of our own.
llvm-svn: 14298
2004-06-21 21:10:24 +00:00
Chris Lattner
cc465361d9 Move the IntrinsicLowering header into the CodeGen directory, as per PR346
llvm-svn: 14266
2004-06-20 07:49:54 +00:00
Chris Lattner
9e1bbe86ba Codegen sub C, X a little bit better for register pressure. Instead of
mov REG, C
sub REG, X

generate:

neg X
add X, C

which uses one register fewer

llvm-svn: 14213
2004-06-18 00:50:37 +00:00
Chris Lattner
a5750b975a Fold setcc instructions into select instructions and branches that are not in the same BB as
the setcc.
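
For instance (a hypothetical example in the IR syntax of the day), the compare
below can now feed the branch directly even though it lives in an earlier
block:

bool %example(int %X, int %Y) {
entry:
        %c = setlt int %X, %Y
        br label %next
next:
        br bool %c, label %T, label %F
T:
        ret bool true
F:
        ret bool false
}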

llvm-svn: 14212
2004-06-18 00:29:22 +00:00
Chris Lattner
f815117481 Do not fold a load into an instruction if the load is used more than once. In particular
we do not want to fold the load in cases like this:

  X = load
    = add A, X
    = add B, X

llvm-svn: 14204
2004-06-17 22:15:25 +00:00
Chris Lattner
0cd29ae2cd Rename Type::PrimitiveID to TypeId and ::getPrimitiveID() to ::getTypeID()
llvm-svn: 14201
2004-06-17 18:19:28 +00:00
Chris Lattner
9bb0083d16 Remove support for llvm.isnan. Alkis wins :)
llvm-svn: 14189
2004-06-15 21:48:07 +00:00
Chris Lattner
d11493d8c4 Add basic support for the isunordered intrinsic. The isnan stuff still needs to go.
llvm-svn: 14185
2004-06-15 21:36:44 +00:00
Chris Lattner
3a8e675c03 By far, one of the most common uses of isnan is to make 'isunordered'
comparisons.  In an 'isunordered' predicate, which looks like this at
the LLVM level:

        %a = call bool %llvm.isnan(double %X)
        %b = call bool %llvm.isnan(double %Y)
        %COM = or bool %a, %b

We used to generate this code:

        fxch %ST(1)
        fucomip %ST(0), %ST(0)
        setp %AL
        fucomip %ST(0), %ST(0)
        setp %AH
        or %AL, %AH

With this patch, we generate this code:

        fucomip %ST(0), %ST(1)
        fstp %ST(0)
        setp %AL

This should make Alkis happy.  Tested as X86/compare_folding.llx:test1.

llvm-svn: 14148
2004-06-11 05:33:49 +00:00
Chris Lattner
f78e3e7f63 Fix bug in previous checkin
llvm-svn: 14146
2004-06-11 05:22:44 +00:00
Chris Lattner
7d8093efb1 No really, these are dead now
llvm-svn: 14145
2004-06-11 04:50:14 +00:00
Chris Lattner
a8e603b719 Now that compare instructions aren't lumped in with the other twoargfp instructions,
we can get rid of the FpUCOM/FpUCOMi pseudo instructions, which makes stuff simpler
and faster.

llvm-svn: 14144
2004-06-11 04:49:02 +00:00
Chris Lattner
b050f778ca Introduce a new FP instruction type to separate the compare cases from the
twoarg cases.

llvm-svn: 14143
2004-06-11 04:41:24 +00:00
Chris Lattner
edb06042b9 Add direct support for the isnan intrinsic, implementing the
test/Regression/CodeGen/X86/isnan.llx testcase.

llvm-svn: 14141
2004-06-11 04:31:10 +00:00
Chris Lattner
4c8b57ea31 Add support for the setp instructions
llvm-svn: 14140
2004-06-11 04:30:06 +00:00
Chris Lattner
c66e996765 Split compare instruction handling OUT of handleTwoArgFP into handleCompareFP.
This makes the code much simpler, and the two cases really do belong apart.
Once we do it, it's pretty obvious how flawed the logic was for the A != A case,
so I fixed it (fixing PR369).
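
The A != A case is also the classic NaN test: under IEEE semantics, a value is
unordered with itself exactly when it is NaN, so both operands of the compare
refer to the same value. A sketch in the IR of the day:

        %c = setne double %A, %A        ; true exactly when %A is NaN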

This also uses freeStackSlotAfter instead of inserting an fxch then
popStackAfter'ing in the case where there is a dead result (unlikely, but
possible), producing better code.

llvm-svn: 14139
2004-06-11 04:25:06 +00:00
Chris Lattner
1f0e0d55c4 Fix the fixed stack offset; patch contributed by Vladimir Prus
llvm-svn: 14110
2004-06-10 06:19:25 +00:00
John Criswell
287e3fc88b Fix for PR#366. We use getClassB() so that we can handle cast instructions
that cast to bool.
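
The wrinkle with casts to bool is that they are not plain truncations: the
result must be 1 for any non-zero source value. So a pattern like

        %b = cast int %X to bool

needs a compare against zero, along the lines of (registers hypothetical):

        test %EAX, %EAX
        setne %AL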

llvm-svn: 14096
2004-06-09 15:18:51 +00:00
Chris Lattner
c51b272047 This file is obsolete
llvm-svn: 14005
2004-06-04 00:15:21 +00:00
Chris Lattner
5ad9eaab1a Convert to the new TargetMachine interface.
llvm-svn: 13952
2004-06-02 05:55:25 +00:00
Chris Lattner
1e22b42cb6 Add support for accurate garbage collection to the LLVM code generators
llvm-svn: 13696
2004-05-23 21:23:35 +00:00
Chris Lattner
85f19c7b3f Add some notes to myself, no functional changes
llvm-svn: 13695
2004-05-23 21:23:12 +00:00
Chris Lattner
5862899c44 minor wording change
llvm-svn: 13694
2004-05-23 21:22:55 +00:00
Brian Gaeke
e5736bf986 Don't keep track of references to LLVM BasicBlocks while emitting; use
MachineBasicBlocks instead.

llvm-svn: 13568
2004-05-14 06:54:58 +00:00
Brian Gaeke
a25a10e73b Support MachineBasicBlock operands on RawFrm instructions.
Get rid of separate numbering for LLVM BasicBlocks; use the automatically
generated MachineBasicBlock numbering.

llvm-svn: 13567
2004-05-14 06:54:57 +00:00
Brian Gaeke
a17301ca8b Generate branch machine instructions with MachineBasicBlock operands instead of
LLVM BasicBlock operands.

llvm-svn: 13566
2004-05-14 06:54:56 +00:00
Chris Lattner
269da7901a Two more improvements for null pointer handling: storing a null pointer
and passing a null pointer into a function.

For this testcase:

void %test(int** %X) {
  store int* null, int** %X
  call void %test(int** null)
  ret void
}

we now generate this:

test:
        sub %ESP, 12
        mov %EAX, DWORD PTR [%ESP + 16]
        mov DWORD PTR [%EAX], 0
        mov DWORD PTR [%ESP], 0
        call test
        add %ESP, 12
        ret

instead of this:

test:
        sub %ESP, 12
        mov %EAX, DWORD PTR [%ESP + 16]
        mov %ECX, 0
        mov DWORD PTR [%EAX], %ECX
        mov %EAX, 0
        mov DWORD PTR [%ESP], %EAX
        call test
        add %ESP, 12
        ret

llvm-svn: 13558
2004-05-13 15:26:48 +00:00
Chris Lattner
dc8e8484e5 Second half of my fixed-sized-alloca patch. This folds the LEA that computes
the alloca address into common operations like loads/stores.

In a simple testcase like this (which is just designed to exercise the
alloca A, nothing more):

int %test(int %X, bool %C) {
        %A = alloca int
        store int %X, int* %A
        store int* %A, int** %G
        br bool %C, label %T, label %F
T:
        call int %test(int 1, bool false)
        %V = load int* %A
        ret int %V
F:
        call int %test(int 123, bool true)
        %V2 = load int* %A
        ret int %V2
}

We now generate:

test:
        sub %ESP, 12
        mov %EAX, DWORD PTR [%ESP + 16]
        mov %CL, BYTE PTR [%ESP + 20]
***     mov DWORD PTR [%ESP + 8], %EAX
        mov %EAX, OFFSET G
        lea %EDX, DWORD PTR [%ESP + 8]
        mov DWORD PTR [%EAX], %EDX
        test %CL, %CL
        je .LBB2 # PC rel: F
.LBB1:  # T
        mov DWORD PTR [%ESP], 1
        mov DWORD PTR [%ESP + 4], 0
        call test
***     mov %EAX, DWORD PTR [%ESP + 8]
        add %ESP, 12
        ret
.LBB2:  # F
        mov DWORD PTR [%ESP], 123
        mov DWORD PTR [%ESP + 4], 1
        call test
***     mov %EAX, DWORD PTR [%ESP + 8]
        add %ESP, 12
        ret

Instead of:

test:
        sub %ESP, 20
        mov %EAX, DWORD PTR [%ESP + 24]
        mov %CL, BYTE PTR [%ESP + 28]
***     lea %EDX, DWORD PTR [%ESP + 16]
***     mov DWORD PTR [%EDX], %EAX
        mov %EAX, OFFSET G
        mov DWORD PTR [%EAX], %EDX
        test %CL, %CL
***     mov DWORD PTR [%ESP + 12], %EDX
        je .LBB2 # PC rel: F
.LBB1:  # T
        mov DWORD PTR [%ESP], 1
        mov %EAX, 0
        mov DWORD PTR [%ESP + 4], %EAX
        call test
***     mov %EAX, DWORD PTR [%ESP + 12]
***     mov %EAX, DWORD PTR [%EAX]
        add %ESP, 20
        ret
.LBB2:  # F
        mov DWORD PTR [%ESP], 123
        mov %EAX, 1
        mov DWORD PTR [%ESP + 4], %EAX
        call test
***     mov %EAX, DWORD PTR [%ESP + 12]
***     mov %EAX, DWORD PTR [%EAX]
        add %ESP, 20
        ret

llvm-svn: 13557
2004-05-13 15:12:43 +00:00
Chris Lattner
94de563118 Substantially improve code generation for address-exposed locals (aka fixed-size
allocas in the entry block).  Instead of generating code like this:

entry:
  reg1024 = ESP+1234
... (much later)
  *reg1024 = 17

Generate code that looks like this:
entry:
  (no code generated)
... (much later)
  t = ESP+1234
  *t = 17

The advantage is that we DRAMATICALLY reduce the register pressure for these
silly temporaries (they were all being spilled to the stack, resulting in very
silly code).  This is actually a manual implementation of rematerialization :)

I have a patch to fold the alloca address computation into loads & stores, which
will make this much better still, but just getting this right took way too much time
and I'm sleepy.

llvm-svn: 13554
2004-05-13 07:40:27 +00:00
Chris Lattner
a19bb14155 Pass boolean constants into function calls more efficiently, generating:
        mov DWORD PTR [%ESP + 4], 1

instead of:

        mov %EAX, 1
        mov DWORD PTR [%ESP + 4], %EAX

llvm-svn: 13494
2004-05-12 16:35:04 +00:00
Chris Lattner
a407338e12 Fix a fairly serious pessimization that was preventing us from efficiently
compiling things like 'add long %X, 1'.  The problem is that we were switching
the order of the operands for longs even though we can't fold them yet.

llvm-svn: 13451
2004-05-10 15:15:55 +00:00
Chris Lattner
0962db8f10 Fix some comments; avoid sign-extending booleans when a zero extend works fine
llvm-svn: 13440
2004-05-09 23:16:33 +00:00
Chris Lattner
d18c637a37 Generate more efficient code for casting booleans to integers (no sign extension required)
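
Since a bool is always 0 or 1, widening it only requires a zero extend; there
is no sign bit to propagate. A rough sketch of the two forms (register choice
hypothetical):

        movzx %EAX, %AL         # zero extend: sufficient for a 0/1 value

instead of something like:

        movsx %EAX, %AL         # sign extend: unnecessary for booleans
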
llvm-svn: 13439
2004-05-09 22:28:45 +00:00
Chris Lattner
67c21e74ec Codegen floating point stores of constants into integer instructions. This
allows us to compile:

store float 10.0, float* %P

into:
        mov DWORD PTR [%EAX], 1092616192

instead of:

.CPItest_0:                                     # float 0x4024000000000000
.long   1092616192      # float 10
...
        fld DWORD PTR [.CPItest_0]
        fstp DWORD PTR [%EAX]

llvm-svn: 13409
2004-05-07 21:18:15 +00:00
Chris Lattner
2021030378 Make comparisons against the null pointer as efficient as integer comparisons
against zero.  In particular, don't emit:

        mov %ESI, 0
        cmp %ECX, %ESI

instead, emit:

       test %ECX, %ECX

llvm-svn: 13407
2004-05-07 19:55:55 +00:00
Chris Lattner
42e602b94f Remove unneeded check
llvm-svn: 13355
2004-05-04 19:35:11 +00:00
Chris Lattner
dac54ebbee Improve signed division by a power of 2 *dramatically*, from this:
div:
        mov %EDX, DWORD PTR [%ESP + 4]
        mov %ECX, 64
        mov %EAX, %EDX
        sar %EDX, 31
        idiv %ECX
        ret

to this:

div:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %ECX, %EAX
        sar %ECX, 5
        shr %ECX, 26
        mov %EDX, %EAX
        add %EDX, %ECX
        sar %EAX, 6
        ret

Note that the Intel compiler currently generates this:

div:
        movl      4(%esp), %edx                                 #3.5
        movl      %edx, %eax                                    #4.14
        sarl      $5, %eax                                      #4.14
        shrl      $26, %eax                                     #4.14
        addl      %edx, %eax                                    #4.14
        sarl      $6, %eax                                      #4.14
        ret                                                     #4.14

That version has one fewer register-to-register copy.  (hint hint, Alkis :)

llvm-svn: 13354
2004-05-04 19:33:58 +00:00
Chris Lattner
cb9a614ea4 Improve the code generated for integer multiplications by 2, 3, 5, and 9
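
These particular multipliers fall out of address arithmetic: 3, 5, and 9 are
each one plus a power of two, so a single lea can do the multiply, and 2 is
just an add or a shift. A sketch of the idea for a multiply by 5 (register
choice and exact printer syntax hypothetical):

        lea %EAX, DWORD PTR [%EAX + 4*%EAX]     # %EAX = %EAX * 5
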
llvm-svn: 13342
2004-05-04 15:47:14 +00:00
Chris Lattner
4b5d4eb5b1 Remove unused #include
llvm-svn: 13304
2004-05-01 21:29:16 +00:00