mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-30 07:22:55 +01:00
Commit Graph

2162 Commits

Author SHA1 Message Date
Jeff Cohen
e2f56a56f6 Fix VC++ compilation error caused by using a std::map iterator variable to receive
a std::multimap iterator value.  For some reason, GCC doesn't have a problem with this.
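
A minimal standalone sketch of the mismatch (hypothetical example, not the
LLVM code in question):

    #include <map>

    std::multimap<int, int> MM;
    // find() on a multimap yields a std::multimap iterator; VC++ rejects
    // binding it to a std::map iterator.  GCC's libstdc++ likely accepts
    // it because both containers wrap the same red-black tree iterator.
    // std::map<int, int>::iterator Bad = MM.find(0);     // VC++ error
    std::multimap<int, int>::iterator Good = MM.find(0);  // portable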

llvm-svn: 25927
2006-02-03 03:48:54 +00:00
Chris Lattner
9ca1b98733 Remove move copies and dead stuff by not clobbering the result reg of a noop copy.
llvm-svn: 25926
2006-02-03 03:16:14 +00:00
Chris Lattner
8e4e3207fe Simplify some code
llvm-svn: 25924
2006-02-03 03:06:49 +00:00
Chris Lattner
42c3d5124f Add code that checks for noop copies, which triggers when either:
1. a target doesn't know how to fold load/stores into copies, or
2. the spiller rewrites the input to a copy to the same register as the dest
   instead of to the reloaded reg.

This will be moved/improved in the near future, but allows elimination of
some ancient x86 hacks.  This eliminates 92 copies from SMG2000 on X86 and
163 copies from 252.eon.
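
A standalone sketch of the noop-copy test, with hypothetical names (the
real check lives in the spiller's rewrite loop):

    // A copy is a no-op when its source and destination end up as the
    // same physical register; such an instruction can simply be erased.
    struct CopyInstr { unsigned SrcReg, DstReg; };

    bool isNoopCopy(const CopyInstr &C) {
      return C.SrcReg == C.DstReg;
    }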

llvm-svn: 25922
2006-02-03 02:02:59 +00:00
Evan Cheng
c21098e77c Added case HANDLENODE to getOperationName().
llvm-svn: 25920
2006-02-03 01:33:01 +00:00
Chris Lattner
e5344b1169 Physregs may hold multiple stack slot values at the same time. Keep track
of this, and use it to our advantage (bwahahah).  This allows us to eliminate another
60 instructions from smg2000 on PPC (probably significantly more on X86).  A common
old-new diff looks like this:

        stw r2, 3304(r1)
-       lwz r2, 3192(r1)
        stw r2, 3300(r1)
-       lwz r2, 3192(r1)
        stw r2, 3296(r1)
-       lwz r2, 3192(r1)
        stw r2, 3200(r1)
-       lwz r2, 3192(r1)
        stw r2, 3196(r1)
-       lwz r2, 3192(r1)
+       or r2, r2, r2
        stw r2, 3188(r1)

and

-       lwz r31, 604(r1)
-       lwz r13, 604(r1)
-       lwz r14, 604(r1)
-       lwz r15, 604(r1)
-       lwz r16, 604(r1)
-       lwz r30, 604(r1)
+       or r31, r30, r30
+       or r13, r30, r30
+       or r14, r30, r30
+       or r15, r30, r30
+       or r16, r30, r30
+       or r30, r30, r30

Removal of the R = R copies is coming next...
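
A sketch of the bookkeeping this requires, with hypothetical names: the
physreg-to-slot records become one-to-many.

    #include <map>

    // One physical register may be recorded as holding the value of
    // several stack slots at once; a reload from any recorded slot can
    // then reuse the register instead of touching memory.
    using PhysRegSlotMap = std::multimap<unsigned /*PhysReg*/, int /*Slot*/>;

    void noteSlotValue(PhysRegSlotMap &M, unsigned PhysReg, int Slot) {
      M.insert({PhysReg, Slot});
    }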

llvm-svn: 25919
2006-02-03 00:36:31 +00:00
Chris Lattner
935255c984 Fix a deficiency in the spiller that Evan noticed. In particular, consider
this code:

  store [stack slot #0],  R10
    = add R14, [stack slot #0]

The spiller didn't know that the store made the value of [stack slot #0] available
in R10 *IF* the store came from a copy instruction with the store folded into it.

This patch teaches VirtRegMap to look at these stores and recognize the values
they make available.  In one case Evan provided, this code:

        divsd %XMM0, %XMM1
        movsd %XMM1, QWORD PTR [%ESP + 40]
1)      movsd QWORD PTR [%ESP + 48], %XMM1
2)      movsd %XMM1, QWORD PTR [%ESP + 48]
        addsd %XMM1, %XMM0
3)      movsd QWORD PTR [%ESP + 48], %XMM1
        movsd QWORD PTR [%ESP + 4], %XMM0

turns into:

        divsd %XMM0, %XMM1
        movsd %XMM1, QWORD PTR [%ESP + 40]
        addsd %XMM1, %XMM0
3)      movsd QWORD PTR [%ESP + 48], %XMM1
        movsd QWORD PTR [%ESP + 4], %XMM0

In this case, instruction #2 was removed because of the value made
available by #1, and inst #1 was later deleted because it is now
never used before the stack slot is redefined by #3.

This occurs here and there in a lot of code with high spilling; on PPC,
most of the removed loads/stores are LSU-reject-causing loads, which is
nice.

On X86, things are much better (because it spills more), where we nuke
about 1% of the instructions from SMG2000 and several hundred from eon.

More improvements to come...
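
A sketch of the idea, with hypothetical names: when a store (including one
folded into a copy) writes Reg to a slot, remember that the slot's value is
available in Reg, and delete any later reload that would just recreate it.

    #include <map>

    std::map<int, unsigned> SlotAvailableIn;

    // Record what a store makes available.
    void noteStore(int Slot, unsigned Reg) { SlotAvailableIn[Slot] = Reg; }

    // A reload of Slot into Reg is redundant if Reg already holds the value.
    bool reloadIsRedundant(int Slot, unsigned Reg) {
      auto I = SlotAvailableIn.find(Slot);
      return I != SlotAvailableIn.end() && I->second == Reg;
    }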

llvm-svn: 25917
2006-02-02 23:29:36 +00:00
Chris Lattner
15cb732cd7 Move isLoadFrom/StoreToStackSlot from MRegisterInfo to TargetInstrInfo, a far more logical place. Other methods should also be moved if anyone is interested. :)
llvm-svn: 25913
2006-02-02 20:12:32 +00:00
Chris Lattner
456711ae45 Turn any_extend nodes into zero_extend nodes when it allows us to remove an
and instruction.  This allows us to compile stuff like this:

bool %X(int %X) {
        %Y = add int %X, 14
        %Z = setne int %Y, 12345
        ret bool %Z
}

to this:

_X:
        cmpl $12331, 4(%esp)
        setne %al
        movzbl %al, %eax
        ret

instead of this:

_X:
        cmpl $12331, 4(%esp)
        setne %al
        movzbl %al, %eax
        andl $1, %eax
        ret

This occurs quite a bit with the X86 backend.  For example, 25 times in
lambda, 30 times in 177.mesa, 14 times in galgel, 70 times in fma3d,
25 times in vpr, several hundred times in gcc, ~45 times in crafty,
~60 times in parser, ~140 times in eon, 110 times in perlbmk, 55 times in
gap, 16 times in bzip2, 14 times in twolf, and 1-2 times in many other
SPEC2K programs.
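
A hypothetical C++ illustration of why the and becomes removable: after a
zero_extend, the upper bits are already known to be zero.

    #include <cstdint>

    uint32_t zext_setne(int32_t A) {
      uint32_t R = (A != 12331) ? 1u : 0u;  // setne, then zero-extend
      // R's upper 31 bits are already zero, so a trailing "and $1"
      // (R & 1u) would change nothing and can be dropped.
      return R;
    }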

llvm-svn: 25901
2006-02-02 07:17:31 +00:00
Chris Lattner
8f4d73f3da add two dag combines:
(C1-X) == C2 --> X == C1-C2
(X+C1) == C2 --> X == C2-C1

This allows us to compile this:

bool %X(int %X) {
        %Y = add int %X, 14
        %Z = setne int %Y, 12345
        ret bool %Z
}

into this:

_X:
        cmpl $12331, 4(%esp)
        setne %al
        movzbl %al, %eax
        andl $1, %eax
        ret

not this:

_X:
        movl $14, %eax
        addl 4(%esp), %eax
        cmpl $12345, %eax
        setne %al
        movzbl %al, %eax
        andl $1, %eax
        ret

Testcase here: Regression/CodeGen/X86/compare-add.ll

nukage of the and coming up next.
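
The same fold as a hypothetical C++ illustration; it is sound because
adding a constant is invertible in modular arithmetic:

    #include <cstdint>

    // (X + C1) == C2  -->  X == C2 - C1; here X + 14 == 12345 becomes
    // X == 12331, so the add disappears from the compare.
    bool before(uint32_t X) { return X + 14 != 12345; }
    bool after(uint32_t X)  { return X != 12331; }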

llvm-svn: 25898
2006-02-02 06:36:13 +00:00
Chris Lattner
f3bfec6117 make -debug output less newliney
llvm-svn: 25895
2006-02-02 00:38:08 +00:00
Chris Lattner
e35694c0bf Implement matching constraints. We can now say things like this:
%C = call int asm "xyz $0, $1, $2, $3", "=r,r,r,0"(int %A, int %B, int 4)

and get:

xyz r2, r3, r4, r2

note that the r2's are pinned together.  Yaay for 2-address instructions.

llvm-svn: 25893
2006-02-02 00:25:23 +00:00
Chris Lattner
41a35db65f Implement smart printing of inline asm strings, handling variants and
substituted operands.  For this testcase:

int %test(int %A, int %B) {
  %C = call int asm "xyz $0, $1, $2", "=r,r,r"(int %A, int %B)
  ret int %C
}

we now emit:

_test:
        or r2, r3, r3
        or r3, r4, r4
        xyz r2, r2, r3  ;; look here
        or r3, r2, r2
        blr

... note the substituted operands. :)

llvm-svn: 25886
2006-02-01 22:41:11 +00:00
Nate Begeman
e74a2db0f0 *** empty log message ***
llvm-svn: 25879
2006-02-01 19:05:15 +00:00
Chris Lattner
9665c2d76f Implement simple register assignment for inline asms. This allows us to compile:
int %test(int %A, int %B) {
  %C = call int asm "xyz $0, $1, $2", "=r,r,r"(int %A, int %B)
  ret int %C
}

into:

 (0x8906130, LLVM BB @0x8902220):
        %r2 = OR4 %r3, %r3
        %r3 = OR4 %r4, %r4
        INLINEASM <es:xyz $0, $1, $2>, %r2<def>, %r2, %r3
        %r3 = OR4 %r2, %r2
        BLR

which asmprints as:

_test:
        or r2, r3, r3
        or r3, r4, r4
        xyz $0, $1, $2      ;; need to print the operands now :)
        or r3, r2, r2
        blr

llvm-svn: 25878
2006-02-01 18:59:47 +00:00
Nate Begeman
0be60963bd Fix some of the stuff in the PPC README file, and clean up legalization
of the SELECT_CC, BR_CC, and BRTWOWAY_CC nodes.

llvm-svn: 25875
2006-02-01 07:19:44 +00:00
Chris Lattner
5549a10792 adjust to changes in InlineAsm interface. Fix a few minor bugs.
llvm-svn: 25865
2006-02-01 01:28:23 +00:00
Evan Cheng
f115c17f23 Allow the specification of explicit alignments for constant pool entries.
llvm-svn: 25855
2006-01-31 22:23:14 +00:00
Evan Cheng
80944be4e7 Allow custom lowering of fabs. I forgot to check in this change,
which caused several test failures.

llvm-svn: 25852
2006-01-31 18:14:25 +00:00
Chris Lattner
d33e8efd56 Only insert an AND when converting from BR_COND to BRCC if needed.
llvm-svn: 25832
2006-01-31 05:04:52 +00:00
Chris Lattner
e113238f5c Handle physreg inputs/outputs. We now compile this:
int %test_cpuid(int %op) {
        %B = alloca int
        %C = alloca int
        %D = alloca int
        %A = call int asm "cpuid", "=eax,==ebx,==ecx,==edx,eax"(int* %B, int* %C, int* %D, int %op)
        %Bv = load int* %B
        %Cv = load int* %C
        %Dv = load int* %D
        %x = add int %A, %Bv
        %y = add int %x, %Cv
        %z = add int %y, %Dv
        ret int %z
}

to this:

_test_cpuid:
        sub %ESP, 16
        mov DWORD PTR [%ESP], %EBX
        mov %EAX, DWORD PTR [%ESP + 20]
        cpuid
        mov DWORD PTR [%ESP + 8], %ECX
        mov DWORD PTR [%ESP + 12], %EBX
        mov DWORD PTR [%ESP + 4], %EDX
        mov %ECX, DWORD PTR [%ESP + 12]
        add %EAX, %ECX
        mov %ECX, DWORD PTR [%ESP + 8]
        add %EAX, %ECX
        mov %ECX, DWORD PTR [%ESP + 4]
        add %EAX, %ECX
        mov %EBX, DWORD PTR [%ESP]
        add %ESP, 16
        ret

... note the proper register allocation.  :)

It is unclear to me why the loads aren't folded into the adds.

llvm-svn: 25827
2006-01-31 02:03:41 +00:00
Chris Lattner
ce37ba3417 Print the most trivial inline asms.
llvm-svn: 25822
2006-01-30 23:00:08 +00:00
Chris Lattner
e24c6ef1f9 Fix a bug in my legalizer reworking that caused the X86 backend to not get
a chance to custom legalize setcc, which broke a bunch of C++ code.
Testcase here: CodeGen/X86/2006-01-30-LongSetcc.ll

llvm-svn: 25821
2006-01-30 22:43:50 +00:00
Chris Lattner
4ba0e38a7a don't insert an and node if it isn't needed here; an unneeded and can
prevent folding of lowered target nodes.

llvm-svn: 25804
2006-01-30 04:22:28 +00:00
Chris Lattner
a44182300b Move MaskedValueIsZero from the DAGCombiner to the TargetLowering interface, making isMaskedValueZeroForTargetNode simpler and usable from other parts of the compiler.
llvm-svn: 25803
2006-01-30 04:09:27 +00:00
Chris Lattner
26589d32e6 pass the address of MaskedValueIsZero into isMaskedValueZeroForTargetNode,
to permit recursion
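
A hypothetical C++ sketch of the callback shape: the target hook takes a
pointer to the generic query so each side can recurse into the other.

    #include <cstdint>

    struct Node;
    using MVIZQuery = bool (*)(const Node *, uint64_t Mask);

    bool isMaskedValueZeroForTargetNode(const Node *N, uint64_t Mask,
                                        MVIZQuery Query) {
      // A conservative target answers false; a real one would inspect its
      // custom nodes and may invoke Query on their operands.
      (void)N; (void)Mask; (void)Query;
      return false;
    }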

llvm-svn: 25799
2006-01-30 03:49:37 +00:00
Chris Lattner
414a73d6f6 Fix RET of promoted values on targets that custom expand RET to a target node.
llvm-svn: 25794
2006-01-29 21:02:23 +00:00
Chris Lattner
497157db4d cleanups to the ValueTypeActions interface
llvm-svn: 25785
2006-01-29 08:42:06 +00:00
Chris Lattner
3cddcf86e0 Remove some special case hacks for CALLSEQ_*, using UpdateNodeOperands
instead.

llvm-svn: 25780
2006-01-29 07:58:15 +00:00
Chris Lattner
94e9677374 Allow custom expansion of ConstantVec nodes. PPC will use this in the future.
llvm-svn: 25774
2006-01-29 06:34:16 +00:00
Chris Lattner
3f97b9cfa3 Legalize ConstantFP into TargetConstantFP when the target allows. Implement
custom expansion of ConstantFP nodes.

llvm-svn: 25772
2006-01-29 06:26:56 +00:00
Chris Lattner
82b23c2f5c eliminate uses of SelectionDAG::getBR2Way_CC
llvm-svn: 25767
2006-01-29 06:00:45 +00:00
Chris Lattner
40973347bf Use the new "UpdateNodeOperands" method to simplify LegalizeDAG and make it
faster.  This cuts about 120 lines of code out of the legalizer (mostly code
checking to see if operands have changed).

It also fixes an ugly performance issue, where the legalizer cloned the entire
graph after any change.  Now the "UpdateNodeOperands" method gives it a chance
to reuse nodes if the operands of a node change but not its opcode or value types.

This speeds up instruction selection time on kimwitu++ by about 8.2% with a
release build.
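
A sketch of the reuse idea, with hypothetical types: when only a node's
operands changed, mutate them in place rather than cloning the node (and,
transitively, everything that points at it).

    #include <utility>
    #include <vector>

    struct Node { std::vector<Node *> Ops; /* opcode and types unchanged */ };

    Node *updateNodeOperands(Node *N, std::vector<Node *> NewOps) {
      if (N->Ops == NewOps)
        return N;                 // nothing changed: reuse the node as-is
      N->Ops = std::move(NewOps); // operands changed: update in place,
      return N;                   // so users of N need not be rewritten
    }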

llvm-svn: 25746
2006-01-28 10:58:55 +00:00
Chris Lattner
8768eb7532 add another method variant
llvm-svn: 25744
2006-01-28 10:09:25 +00:00
Chris Lattner
63f7e84632 add some methods for updating nodes
llvm-svn: 25742
2006-01-28 09:32:45 +00:00
Chris Lattner
99bdf26410 minor tweaks
llvm-svn: 25740
2006-01-28 08:31:04 +00:00
Chris Lattner
94ab3cee71 move a bunch of code, no other change.
llvm-svn: 25739
2006-01-28 08:25:58 +00:00
Chris Lattner
9633d61b5b remove a couple more now-extraneous legalizeop's
llvm-svn: 25738
2006-01-28 08:22:56 +00:00
Chris Lattner
4bc3abc3d0 fix a bug
llvm-svn: 25737
2006-01-28 07:42:08 +00:00
Chris Lattner
8eed1b6bda Several major changes:
1. Pull out the expand cases for BSWAP and CT* into a separate function,
   reducing the size of LegalizeOp.
2. Fix a bug where expand(bswap i64) was wrong when i64 is legal (see the
   sketch after this list).
3. Changed LegalizeOp/PromoteOp so that the legalizer never needs to be
   iterative.  It now operates in a single pass over the nodes.
4. Simplify a LOT of code, with a net reduction of ~280 lines.
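
Regarding point 2, a hypothetical C++ sketch of the shift-and-mask pattern
an expanded bswap lowers to when i64 is a legal type:

    #include <cstdint>

    uint64_t bswap64(uint64_t X) {
      X = ((X & 0x00FF00FF00FF00FFULL) << 8) |
          ((X >> 8) & 0x00FF00FF00FF00FFULL);   // swap adjacent bytes
      X = ((X & 0x0000FFFF0000FFFFULL) << 16) |
          ((X >> 16) & 0x0000FFFF0000FFFFULL);  // swap 16-bit pairs
      return (X << 32) | (X >> 32);             // swap 32-bit halves
    }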

llvm-svn: 25736
2006-01-28 07:39:30 +00:00
Chris Lattner
84086fd689 Eliminate the need for ExpandOp to set 'needsanotheriteration', as it already
relegalizes the stuff it returns.

Add the ability to custom expand ADD/SUB, so that targets don't need to deal
with ADD_PARTS/SUB_PARTS if they don't want.

Fix some obscure potential bugs and simplify code.
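
A hypothetical C++ sketch of what the *_PARTS nodes compute: a double-wide
add built from two word-sized halves with explicit carry propagation, which
custom expansion now lets a target replace wholesale.

    #include <cstdint>

    void add64parts(uint32_t AL, uint32_t AH, uint32_t BL, uint32_t BH,
                    uint32_t &ResL, uint32_t &ResH) {
      ResL = AL + BL;                          // low half
      ResH = AH + BH + (ResL < AL ? 1u : 0u);  // high half plus carry-out
    }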

llvm-svn: 25732
2006-01-28 05:07:51 +00:00
Chris Lattner
3b5a984065 Instead of making callers of ExpandLibCall legalize the result, make
ExpandLibCall do it itself.

llvm-svn: 25731
2006-01-28 04:28:26 +00:00
Chris Lattner
045a778e63 Eliminate the need to do another iteration of the legalizer after inserting
a libcall.

llvm-svn: 25730
2006-01-28 04:23:12 +00:00
Chris Lattner
d6d4bcc419 remove method I just added
llvm-svn: 25728
2006-01-28 03:43:09 +00:00
Chris Lattner
063c13029b add a new callback
llvm-svn: 25727
2006-01-28 03:37:03 +00:00
Nate Begeman
87c2c0e66b Implement Promote for VAARG, and allow it to be custom promoted for people
who don't want the default behavior (Alpha).

llvm-svn: 25726
2006-01-28 03:14:31 +00:00
Nate Begeman
dc3fba6b6b Add a missing case to the dag combiner.
llvm-svn: 25723
2006-01-28 01:06:30 +00:00
Chris Lattner
e7428f436a Remove the ISD::CALL and ISD::TAILCALL nodes
llvm-svn: 25721
2006-01-28 00:18:58 +00:00
Nate Begeman
d2c6fbef4a Remove TLI.LowerReturnTo, and just let targets custom lower ISD::RET for
the same functionality.  This addresses another piece of bug 680.  Next,
on to fixing Alpha VAARG, which I broke last time.

llvm-svn: 25696
2006-01-27 21:09:22 +00:00
Jim Laskey
c8759505c4 Using bit size of integers instead of ambiguous "long" et al.
llvm-svn: 25694
2006-01-27 20:31:25 +00:00