mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-19 19:12:56 +02:00
Commit Graph

26749 Commits

Author SHA1 Message Date
Rafael Espindola
b8ce0f8bbd implement fadds, faddd, fmuls and fmuld
llvm-svn: 30801
2006-10-07 13:46:42 +00:00
Chris Lattner
59bf33e5e4 handle pmin/pmax with multiclasses
llvm-svn: 30800
2006-10-07 07:49:33 +00:00
Chris Lattner
b007ac9ecf regenerate
llvm-svn: 30799
2006-10-07 07:15:19 +00:00
Chris Lattner
b5b2e0340e Bugfix: this allows multiclasses to have default arguments.
llvm-svn: 30798
2006-10-07 07:14:48 +00:00
Chris Lattner
2177d324c5 simplify pack and shift intrinsics with multiclasses
llvm-svn: 30797
2006-10-07 07:06:17 +00:00
Chris Lattner
31eb3af1a8 Use a multiclass to simplify 'SSE2 Integer comparison'
llvm-svn: 30796
2006-10-07 06:47:08 +00:00
Chris Lattner
7cde5d8820 move class defns close to uses to make it easier to read
llvm-svn: 30795
2006-10-07 06:33:36 +00:00
Chris Lattner
2842be4e37 simplify horizontal op definitions
llvm-svn: 30794
2006-10-07 06:31:41 +00:00
Chris Lattner
b3b659492b remove more unneeded type info
llvm-svn: 30793
2006-10-07 06:27:03 +00:00
Chris Lattner
8a2d78d3cf remove unneeded definitions and type info
llvm-svn: 30792
2006-10-07 06:19:41 +00:00
Chris Lattner
a75da38d99 remove some unneeded type info
llvm-svn: 30791
2006-10-07 06:17:43 +00:00
Chris Lattner
d704b454b9 simplify patterns by merging in operand info
llvm-svn: 30790
2006-10-07 05:50:25 +00:00
Chris Lattner
bf6419cef6 Factor operands into packed unary classes
llvm-svn: 30789
2006-10-07 05:47:20 +00:00
Chris Lattner
06c9aa41f1 remove dead/duplicate instructions
llvm-svn: 30788
2006-10-07 05:41:52 +00:00
Chris Lattner
72b130720d Pull operand info up into parent class for scalar sse intrinsics.
llvm-svn: 30787
2006-10-07 05:26:13 +00:00
Chris Lattner
cf13d058a3 convert the sole sd unary intrinsic to a multiclass for consistency
llvm-svn: 30786
2006-10-07 05:19:31 +00:00
Chris Lattner
67ea3292d2 pull operand string into the multiclass
llvm-svn: 30785
2006-10-07 05:13:26 +00:00
Chris Lattner
e234302d01 Remove RSQRTSS[rm] and RCPSS[rm], which are dead.
Introduce SS_IntUnary, a multiclass to replace SS_Int[rm].

llvm-svn: 30784
2006-10-07 05:09:48 +00:00
Chris Lattner
22137d1891 eliminate redundancy
llvm-svn: 30783
2006-10-07 04:52:09 +00:00
Chris Lattner
f5758df6cd Fix a bug legalizing zero-extending i64 loads into 32-bit loads. The bottom
part was always forced to be sextload, even when we needed a zextload.

llvm-svn: 30782
2006-10-07 00:58:36 +00:00
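
A hedged illustration of the pattern the fix above is about (assumed example, not code from the commit): on a 32-bit target, widening a narrow unsigned load to a 64-bit value is a zero-extending i64 load. The legalizer splits it into two 32-bit halves; the low half must stay a zero-extending 32-bit load and the high half is simply zero. With the bug, the low half became a sign-extending load, so a value such as 0xFFFF could come back with all upper bits set.

#include <cstdio>

// Zero-extending load: the i16 value is widened to i64, so the upper
// 48 bits must be zero regardless of the loaded value's top bit.
unsigned long long widen(const unsigned short *p) {
  return *p;
}

int main() {
  unsigned short v = 0xFFFF;
  std::printf("%llu\n", widen(&v)); // expect 65535, not 2^64 - 1
  return 0;
}
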
Chris Lattner
f5b9b4a4b2 Set the jt section
llvm-svn: 30781
2006-10-06 22:52:33 +00:00
Chris Lattner
3f92c791b4 initialize ivar
llvm-svn: 30780
2006-10-06 22:52:08 +00:00
Chris Lattner
d5f5a433b2 If a target uses a GOT, put it in the jt data section, not the text
section.  This will fix alpha when Andrew implements
AlphaTargetMachine::getTargetLowering().

llvm-svn: 30779
2006-10-06 22:50:56 +00:00
Chris Lattner
2ca01febcf Alpha uses a GOT
llvm-svn: 30778
2006-10-06 22:46:51 +00:00
Chris Lattner
77545e4a28 Add support for targets to declare that they use a GOT
llvm-svn: 30777
2006-10-06 22:46:34 +00:00
Chris Lattner
b5b96302f2 jump tables handle pic
llvm-svn: 30776
2006-10-06 22:32:29 +00:00
Chris Lattner
ad60994822 print labels even if an MBB doesn't have a corresponding LLVM BB, just don't
print the LLVM BB label.

llvm-svn: 30775
2006-10-06 21:28:17 +00:00
Rafael Espindola
a96c205e12 add optional input flag to FMRRD
llvm-svn: 30774
2006-10-06 20:33:26 +00:00
Rafael Espindola
54301ca490 add support for calling functions that return double
llvm-svn: 30771
2006-10-06 19:10:05 +00:00
Evan Cheng
6d15f83d46 80 col violation.
llvm-svn: 30770
2006-10-06 18:57:51 +00:00
Chris Lattner
399106d8f8 ugly codegen
llvm-svn: 30769
2006-10-06 17:39:34 +00:00
Chris Lattner
0d39b3a4cf Fix a miscompilation of:
long long foo(long long X) {
  return (long long)(signed char)(int)X;
}

Instead of:

_foo:
        extsb r2, r4
        srawi r3, r4, 31
        mr r4, r2
        blr

we now produce:

_foo:
        extsb r4, r4
        srawi r3, r4, 31
        blr

This fixes a miscompilation in ConstantFolding.cpp.

llvm-svn: 30768
2006-10-06 17:34:12 +00:00
Rafael Espindola
d870b158b3 fix some bugs affecting functions with no arguments
llvm-svn: 30767
2006-10-06 17:26:30 +00:00
Rafael Espindola
f35563ff66 fix the stack alignment
llvm-svn: 30766
2006-10-06 14:29:47 +00:00
Rafael Espindola
f679bdf121 add support for calling functions that have double arguments
llvm-svn: 30765
2006-10-06 12:50:22 +00:00
Evan Cheng
9ce3d493f0 Still need to support -mcpu=<> or cross compilation will fail. Doh.
llvm-svn: 30764
2006-10-06 09:17:41 +00:00
Evan Cheng
6fc0ae2136 Do away with CPU feature list. Just use CPUID to detect MMX, SSE, SSE2, SSE3, and 64-bit support.
llvm-svn: 30763
2006-10-06 08:21:07 +00:00
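
As a hedged sketch of the CPUID-based approach described above (using GCC/Clang's <cpuid.h>, not the GetCpuIDAndInfo() helper from the tree), the relevant architecturally documented bits are: leaf 1 EDX bit 23 (MMX), bit 25 (SSE), bit 26 (SSE2); leaf 1 ECX bit 0 (SSE3); extended leaf 0x80000001 EDX bit 29 (long mode, i.e. 64-bit support).

#include <cpuid.h>
#include <cstdio>

int main() {
  unsigned eax, ebx, ecx, edx;
  // Leaf 1: basic feature flags.
  if (__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
    std::printf("MMX:  %u\n", (edx >> 23) & 1);
    std::printf("SSE:  %u\n", (edx >> 25) & 1);
    std::printf("SSE2: %u\n", (edx >> 26) & 1);
    std::printf("SSE3: %u\n", ecx & 1);
  }
  // Extended leaf 0x80000001: long mode (64-bit) support.
  if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx))
    std::printf("64-bit: %u\n", (edx >> 29) & 1);
  return 0;
}
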
Evan Cheng
35a3337e1d It appears the inline asm in GetCpuIDAndInfo() may clobber some registers if it isn't inlined (at < -O3). Force it to be inlined.
llvm-svn: 30762
2006-10-06 07:50:56 +00:00
Chris Lattner
621a182abd add an accessor
llvm-svn: 30761
2006-10-06 01:16:29 +00:00
Chris Lattner
5fc3bb074c MachineBasicBlock::splice was incorrectly updating parent pointers on
instructions.

llvm-svn: 30760
2006-10-06 01:12:44 +00:00
Evan Cheng
275825195a Make use of getStore().
llvm-svn: 30759
2006-10-05 23:01:46 +00:00
Evan Cheng
c9e079d0c1 Add getStore() helper function to create ISD::STORE nodes.
llvm-svn: 30758
2006-10-05 22:57:11 +00:00
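
For orientation, a minimal sketch of how such a helper is used from a target's lowering code; it follows today's SelectionDAG::getStore() signature, which differs from the 2006 one, and emitStore is a hypothetical wrapper, not a function from the commit.

#include "llvm/CodeGen/SelectionDAG.h"
using namespace llvm;

// Build an ISD::STORE node through the getStore() convenience helper instead
// of assembling the node's operand list by hand.
static SDValue emitStore(SelectionDAG &DAG, const SDLoc &DL, SDValue Chain,
                         SDValue Val, SDValue Ptr) {
  return DAG.getStore(Chain, DL, Val, Ptr, MachinePointerInfo());
}
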
Chris Lattner
eca9897bd5 Don't crash if an MBB doesn't have an LLVM BB
llvm-svn: 30757
2006-10-05 21:40:14 +00:00
Rafael Espindola
2e4743b6d1 use a const ref for passing the vector to ArgumentLayout
llvm-svn: 30756
2006-10-05 17:46:48 +00:00
Rafael Espindola
f0e4950ef4 implement an ArgumentLayout class to factor code common to LowerFORMAL_ARGUMENTS and LowerCALL
implement FMDRR
add support for f64 function arguments

llvm-svn: 30754
2006-10-05 16:48:49 +00:00
Jim Laskey
3f9f064fd1 Alias analysis code clean ups.
llvm-svn: 30753
2006-10-05 15:07:25 +00:00
Chris Lattner
513ba43053 add a new SimplifyDemandedVectorElts method, which works similarly to
SimplifyDemandedBits.  The idea is that some operations can be simplified if
not all of the computed elements are needed.  Some targets (like x86) have a
large number of intrinsics that operate on a single element, but pass other
elts through unmodified.  If those other elements are not needed, the
intrinsics can be simplified to scalar operations, and insertelement ops can
be removed.

This turns (for example):

ushort %Convert_sse(float %f) {
        %tmp = insertelement <4 x float> undef, float %f, uint 0                ; <<4 x float>> [#uses=1]
        %tmp10 = insertelement <4 x float> %tmp, float 0.000000e+00, uint 1             ; <<4 x float>> [#uses=1]
        %tmp11 = insertelement <4 x float> %tmp10, float 0.000000e+00, uint 2           ; <<4 x float>> [#uses=1]
        %tmp12 = insertelement <4 x float> %tmp11, float 0.000000e+00, uint 3           ; <<4 x float>> [#uses=1]
        %tmp28 = tail call <4 x float> %llvm.x86.sse.sub.ss( <4 x float> %tmp12, <4 x float> < float 1.000000e+00, float 0.000000e+00, float 0.000000e+00, float 0.000000e+00 > )               ; <<4 x float>> [#uses=1]
        %tmp37 = tail call <4 x float> %llvm.x86.sse.mul.ss( <4 x float> %tmp28, <4 x float> < float 5.000000e-01, float 0.000000e+00, float 0.000000e+00, float 0.000000e+00 > )               ; <<4 x float>> [#uses=1]
        %tmp48 = tail call <4 x float> %llvm.x86.sse.min.ss( <4 x float> %tmp37, <4 x float> < float 6.553500e+04, float 0.000000e+00, float 0.000000e+00, float 0.000000e+00 > )               ; <<4 x float>> [#uses=1]
        %tmp59 = tail call <4 x float> %llvm.x86.sse.max.ss( <4 x float> %tmp48, <4 x float> zeroinitializer )          ; <<4 x float>> [#uses=1]
        %tmp = tail call int %llvm.x86.sse.cvttss2si( <4 x float> %tmp59 )              ; <int> [#uses=1]
        %tmp69 = cast int %tmp to ushort                ; <ushort> [#uses=1]
        ret ushort %tmp69
}

into:

ushort %Convert_sse(float %f) {
entry:
        %tmp28 = sub float %f, 1.000000e+00             ; <float> [#uses=1]
        %tmp37 = mul float %tmp28, 5.000000e-01         ; <float> [#uses=1]
        %tmp375 = insertelement <4 x float> undef, float %tmp37, uint 0         ; <<4 x float>> [#uses=1]
        %tmp48 = tail call <4 x float> %llvm.x86.sse.min.ss( <4 x float> %tmp375, <4 x float> < float 6.553500e+04, float undef, float undef, float undef > )           ; <<4 x float>> [#uses=1]
        %tmp59 = tail call <4 x float> %llvm.x86.sse.max.ss( <4 x float> %tmp48, <4 x float> < float 0.000000e+00, float undef, float undef, float undef > )            ; <<4 x float>> [#uses=1]
        %tmp = tail call int %llvm.x86.sse.cvttss2si( <4 x float> %tmp59 )              ; <int> [#uses=1]
        %tmp69 = cast int %tmp to ushort                ; <ushort> [#uses=1]
        ret ushort %tmp69
}

which improves codegen from:

_Convert_sse:
        movss LCPI1_0, %xmm0
        movss 4(%esp), %xmm1
        subss %xmm0, %xmm1
        movss LCPI1_1, %xmm0
        mulss %xmm0, %xmm1
        movss LCPI1_2, %xmm0
        minss %xmm0, %xmm1
        xorps %xmm0, %xmm0
        maxss %xmm0, %xmm1
        cvttss2si %xmm1, %eax
        andl $65535, %eax
        ret

to:

_Convert_sse:
        movss 4(%esp), %xmm0
        subss LCPI1_0, %xmm0
        mulss LCPI1_1, %xmm0
        movss LCPI1_2, %xmm1
        minss %xmm1, %xmm0
        xorps %xmm1, %xmm1
        maxss %xmm1, %xmm0
        cvttss2si %xmm0, %eax
        andl $65535, %eax
        ret


This is just a first step, it can be extended in many ways.  Testcase here:
Transforms/InstCombine/vec_demanded_elts.ll

llvm-svn: 30752
2006-10-05 06:55:50 +00:00
Chris Lattner
6f645eeac6 new testcase
llvm-svn: 30751
2006-10-05 06:51:54 +00:00
Chris Lattner
14ad447136 Add insertelement/extractelement helper ctors.
llvm-svn: 30750
2006-10-05 06:24:58 +00:00
Chris Lattner
7f98896c02 Lower some min/max idioms to minss/maxss when unsafe fp math is enabled.
llvm-svn: 30748
2006-10-05 04:11:26 +00:00
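
As a hedged illustration of the idiom referred to above (assumed example, not from the commit): with unsafe/fast FP math enabled, a compare-and-select like the ones below can be lowered to a single minss/maxss, since those instructions do not follow IEEE semantics for NaNs and signed zeros and the relaxed mode permits ignoring that difference.

#include <cstdio>

float fmin_idiom(float x, float y) { return x < y ? x : y; } // -> minss
float fmax_idiom(float x, float y) { return x > y ? x : y; } // -> maxss

int main() {
  std::printf("%f %f\n", fmin_idiom(1.0f, 3.0f), fmax_idiom(1.0f, 3.0f));
  return 0;
}
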