Evan Cheng
d282cb8542
Should pass by reference.
llvm-svn: 28357
2006-05-17 19:07:40 +00:00
Chris Lattner
c04371da56
Implement the custom lowering hook correctly, returning values for all of the arguments at once.
llvm-svn: 28327
2006-05-16 17:14:26 +00:00
Chris Lattner
f501a979ec
Fix a bug I introduced yesterday, which broke functions with *no* arguments.
llvm-svn: 28326
2006-05-16 17:08:35 +00:00
Evan Cheng
dc9b5f5fc0
X86 integer register class naming changes: make them consistent with the FP and vector classes.
llvm-svn: 28324
2006-05-16 07:21:53 +00:00
Chris Lattner
ba1dfc1da7
Add a chain to FORMAL_ARGUMENTS. This is a minimal port of the X86 backend; it doesn't currently use or maintain the chain properly. Also, make the X86ISelLowering.cpp file 80-col clean.
llvm-svn: 28320
2006-05-16 06:45:34 +00:00
Chris Lattner
db8caed257
Remove a dead variable.
llvm-svn: 28265
2006-05-12 21:12:22 +00:00
Chris Lattner
89fa42b51e
Teach the X86 backend about non-i32 inline asm register classes.
llvm-svn: 28139
2006-05-06 00:29:37 +00:00
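Note: for illustration only (not from the commit; function name hypothetical), a minimal C snippet whose inline asm needs a non-i32 register class. The "+r" constraint on a 16-bit operand must be satisfied from the 16-bit register class:
unsigned short bswap16(unsigned short x) {
    /* "+r" on an i16 operand must pick a 16-bit register (e.g. %ax) */
    __asm__("rolw $8, %0" : "+r"(x));
    return x;
}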
Chris Lattner
a03676690b
Teach the code generator to use cvtss2sd as extload f32 -> f64
llvm-svn: 28131
2006-05-05 21:35:18 +00:00
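Note: the pattern in question, as a small C example (not from the commit): an f32 load extended to f64, now selectable as a single cvtss2sd with a memory operand instead of a separate load and extend:
double load_extend(const float *p) {
    return (double)*p;  /* extload f32 -> f64; maps to cvtss2sd from memory */
}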
Owen Anderson
71bc529dfa
Refactor TargetMachine, pushing handling of TargetData into the target-specific subclasses. This has one caller-visible change: getTargetData() now returns a pointer instead of a reference.
This fixes PR 759.
llvm-svn: 28074
2006-05-03 01:29:57 +00:00
Evan Cheng
a33feb51db
Initial caller-side support (for CCC only, not FastCC) for passing 128-bit vectors by value.
llvm-svn: 28015
2006-04-28 21:29:37 +00:00
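Note: a sketch of what this enables (function names hypothetical), a C-calling-convention call passing a 128-bit vector by value:
#include <emmintrin.h>

void consume(__m128i v);  /* hypothetical CCC callee taking a 128-bit vector by value */

void caller(void) {
    __m128i v = _mm_set1_epi32(42);
    consume(v);  /* caller-side lowering now passes v by value */
}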
Evan Cheng
d577ce4c4a
Implement four-wide shuffles with two shufps instructions if no more than two elements come from each vector, e.g.
shuffle(G1, G2, 7, 1, 5, 2)
==>
movaps _G2, %xmm0
shufps $151, _G1, %xmm0
shufps $216, %xmm0, %xmm0
llvm-svn: 28011
2006-04-28 07:03:38 +00:00
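Note: decoding the masks above (a sketch using SSE intrinsics, checked against the shufps semantics but not part of the commit): $151 selects [dst[3], dst[1], src[1], src[2]], and $216 then reorders to [t[0], t[2], t[1], t[3]]:
#include <xmmintrin.h>

/* shuffle(G1, G2, 7, 1, 5, 2): indices 0-3 pick from G1, 4-7 from G2. */
__m128 shuffle_7_1_5_2(__m128 g1, __m128 g2) {
    /* 0x97 == 151: t = [g2[3], g2[1], g1[1], g1[2]] */
    __m128 t = _mm_shuffle_ps(g2, g1, 0x97);
    /* 0xD8 == 216: [t[0], t[2], t[1], t[3]] = [g2[3], g1[1], g2[1], g1[2]] */
    return _mm_shuffle_ps(t, t, 0xD8);
}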
Evan Cheng
f843942504
TargetLowering::LowerArguments should return a VBIT_CONVERT of the FORMAL_ARGUMENTS SDOperand in the return result vector.
llvm-svn: 28009
2006-04-28 05:25:15 +00:00
Evan Cheng
11e3cec8bd
Make x86 isel lowering produce tailcall nodes. They are matched to normal calls for now.
Patch contributed by Alexander Friedman.
llvm-svn: 27994
2006-04-27 08:40:39 +00:00
Evan Cheng
24795120e1
Support for passing 128-bit vector arguments via XMM registers.
llvm-svn: 27992
2006-04-27 08:31:10 +00:00
Evan Cheng
1e065ae594
Oops
llvm-svn: 27989
2006-04-27 05:44:50 +00:00
Evan Cheng
a0e0eabc07
Bug fix: not updating NumIntRegs.
llvm-svn: 27988
2006-04-27 05:35:28 +00:00
Evan Cheng
a1f9f34f35
- Cleaned up the formal argument lowering code, preparing for vector pass-by-value work.
- Fixed vararg support.
llvm-svn: 27985
2006-04-27 01:32:22 +00:00
Evan Cheng
3abec16563
Fix fastcc failures.
llvm-svn: 27980
2006-04-26 18:21:31 +00:00
Evan Cheng
58d4133b60
Switch over to the FORMAL_ARGUMENTS mechanism to lower call arguments.
llvm-svn: 27975
2006-04-26 01:20:17 +00:00
Evan Cheng
09112df9d3
Separate LowerOperation() into multiple functions, one per opcode.
llvm-svn: 27972
2006-04-25 20:13:52 +00:00
Evan Cheng
0282b48ec2
Special-case handling of 2-wide build_vector(0, x).
llvm-svn: 27961
2006-04-24 22:58:52 +00:00
Evan Cheng
1eae7398a6
A little bit more build_vector enhancement for v8i16 cases.
llvm-svn: 27959
2006-04-24 18:01:45 +00:00
Evan Cheng
4812ce5035
A MOVL shuffle (i.e. movd or movss / movsd from memory) of (undef, V2) is just V2.
llvm-svn: 27953
2006-04-23 06:35:19 +00:00
Nate Begeman
7ed816f900
JumpTable support! This adds working asm and JIT support for x86 and ppc for 100% dense switch statements when relocations are non-PIC. This support will be extended and enhanced in the coming days to support PIC and less dense forms of jump tables.
llvm-svn: 27947
2006-04-22 18:53:45 +00:00
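Note: the kind of switch this covers, for illustration:
int classify(int x) {
    /* cases 0..4 are 100% dense, so this lowers to one indexed jump
       through a table when relocations are non-PIC */
    switch (x) {
    case 0: return 10;
    case 1: return 20;
    case 2: return 30;
    case 3: return 40;
    case 4: return 50;
    default: return -1;
    }
}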
Evan Cheng
1c33e83af5
Don't do the full lowering path for 2-wide build_vectors. Also, a minor optimization for shuffles of undef.
llvm-svn: 27946
2006-04-22 08:34:05 +00:00
Evan Cheng
ec33bd04fb
Fix a performance regression. Use {p}shuf* when there are only two distinct elements in a build_vector.
llvm-svn: 27945
2006-04-22 06:21:46 +00:00
Evan Cheng
5cb5fdd8eb
Revamp build_vector lowering to take advantage of movss and movd instructions. movd always clears the top 96 bits, and movss does so when it's loading the value from memory. The net result is that codegen for 4-wide shuffles is much improved. It is near optimal if one or more elements is a zero, e.g.
__m128i test(int a, int b) {
return _mm_set_epi32(0, 0, b, a);
}
compiles to
_test:
movd 8(%esp), %xmm1
movd 4(%esp), %xmm0
punpckldq %xmm1, %xmm0
ret
compare to gcc:
_test:
subl $12, %esp
movd 20(%esp), %xmm0
movd 16(%esp), %xmm1
punpckldq %xmm0, %xmm1
movq %xmm1, %xmm0
movhps LC0, %xmm0
addl $12, %esp
ret
or icc:
_test:
movd 4(%esp), %xmm0 #5.10
movd 8(%esp), %xmm3 #5.10
xorl %eax, %eax #5.10
movd %eax, %xmm1 #5.10
punpckldq %xmm1, %xmm0 #5.10
movd %eax, %xmm2 #5.10
punpckldq %xmm2, %xmm3 #5.10
punpckldq %xmm3, %xmm0 #5.10
ret #5.10
There is still room for improvement; for example, the FP variant of the above example:
__m128 test(float a, float b) {
return _mm_set_ps(0.0, 0.0, b, a);
}
_test:
movss 8(%esp), %xmm1
movss 4(%esp), %xmm0
unpcklps %xmm1, %xmm0
xorps %xmm1, %xmm1
movlhps %xmm1, %xmm0
ret
The xorps and movlhps are unnecessary; handling this will require post-legalizer optimization.
llvm-svn: 27939
2006-04-21 23:03:30 +00:00
Evan Cheng
e0289de5ab
Now generating perfect (I think) code for "vector set" with a single non-zero scalar value.
e.g.
_mm_set_epi32(0, a, 0, 0);
==>
movd 4(%esp), %xmm0
pshufd $69, %xmm0, %xmm0
_mm_set_epi8(0, 0, 0, 0, 0, a, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0);
==>
movzbw 4(%esp), %ax
movzwl %ax, %eax
pxor %xmm0, %xmm0
pinsrw $5, %eax, %xmm0
llvm-svn: 27923
2006-04-21 01:05:10 +00:00
Evan Cheng
41f2933444
- Added support to turn "vector clear elements" operations, e.g. pand V, <-1, -1, 0, -1>, into a vector shuffle.
- VECTOR_SHUFFLE lowering change in preparation for more efficient codegen
of vector shuffle with zero (or any splat) vector.
llvm-svn: 27875
2006-04-20 08:58:49 +00:00
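Note: the operation being recognized, sketched with SSE2 intrinsics (function name hypothetical; the mask clears element 2 per the example above):
#include <emmintrin.h>

/* pand v, <-1, -1, 0, -1> zeroes one element; the commit re-expresses
   this as a vector shuffle of v with a zero vector. */
__m128i clear_one_element(__m128i v) {
    const __m128i mask = _mm_set_epi32(-1, 0, -1, -1);  /* args are e3..e0 */
    return _mm_and_si128(v, mask);
}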
Evan Cheng
9dcd046bbd
Handle v2i64 BUILD_VECTOR custom lowering correctly. v2i64 is a legal type, but i64 is not. If possible, change an i64 op to an f64 op (e.g. load, constant) and then cast it back.
llvm-svn: 27849
2006-04-20 00:11:39 +00:00
Evan Cheng
d79f6a9f5a
isSplatMask() bug: the first element can be undef.
llvm-svn: 27847
2006-04-19 23:28:59 +00:00
Evan Cheng
019dea6886
- Added support to do arbitrary 4-wide shuffles with no more than three instructions.
- Fixed a vector_shuffle commute bug.
llvm-svn: 27845
2006-04-19 22:48:17 +00:00
Evan Cheng
265831aa45
Commute vector_shuffle to match more movlhps, movlp{s|d} cases.
llvm-svn: 27840
2006-04-19 20:35:22 +00:00
Evan Cheng
98b1ca65dd
Use movss for insert_vector_elt(v, s, 0).
llvm-svn: 27782
2006-04-17 22:45:49 +00:00
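Note: in intrinsic form, the pattern this selects (a sketch, not from the commit):
#include <xmmintrin.h>

/* insert_vector_elt(v, s, 0): replace element 0 of v with s == movss */
__m128 insert_elt0(__m128 v, float s) {
    return _mm_move_ss(v, _mm_set_ss(s));
}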
Evan Cheng
ecf13c5d79
Use two pinsrw instructions to insert an element into a v4i32 / v4f32 vector.
llvm-svn: 27779
2006-04-17 22:04:06 +00:00
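Note: a sketch of the idea with SSE2 intrinsics (lane 1 chosen arbitrarily): the 32-bit element goes in as two 16-bit halves, since pinsrw only writes 16 bits at a time:
#include <emmintrin.h>

/* Insert the 32-bit value s into element 1 of a v4i32 via two pinsrw. */
__m128i insert32_lane1(__m128i v, int s) {
    v = _mm_insert_epi16(v, s & 0xFFFF, 2);         /* low half -> word 2 */
    v = _mm_insert_epi16(v, (s >> 16) & 0xFFFF, 3); /* high half -> word 3 */
    return v;
}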
Evan Cheng
4de1805c84
Implement v8i16, v16i8 splat using unpckl + pshufd.
llvm-svn: 27768
2006-04-17 20:43:08 +00:00
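Note: the v8i16 case sketched with intrinsics (splatting element 0; function name hypothetical):
#include <emmintrin.h>

/* punpcklwd duplicates word 0 into dword 0, then pshufd $0 broadcasts it. */
__m128i splat_word0(__m128i v) {
    v = _mm_unpacklo_epi16(v, v);       /* [w0,w0, w1,w1, w2,w2, w3,w3] */
    return _mm_shuffle_epi32(v, 0x00);  /* broadcast dword 0 to all lanes */
}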
Chris Lattner
e1d38ad84b
Implement returns of a vector; testcase: CodeGen/X86/vec_return.ll
llvm-svn: 27767
2006-04-17 20:32:50 +00:00
Evan Cheng
eb739d0355
FP SETOLT, SETOLE, SETUGE, SETUGT conditions were implemented incorrectly.
llvm-svn: 27755
2006-04-17 07:24:10 +00:00
Evan Cheng
32e5d4f6bc
Silly bug
llvm-svn: 27719
2006-04-15 05:37:34 +00:00
Evan Cheng
f9a93a1d3f
Do not use movs{h|l}dup for a shuffle with a single non-undef node.
llvm-svn: 27718
2006-04-15 03:13:24 +00:00
Evan Cheng
32c4470374
Last few SSE3 intrinsics.
llvm-svn: 27711
2006-04-14 21:59:03 +00:00
Evan Cheng
25fcfb9f2d
X86 SSE2 supports v8i16 multiplication.
llvm-svn: 27644
2006-04-13 05:10:25 +00:00
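Note: i.e. an element-wise v8i16 multiply now selects a single pmullw, shown here via its intrinsic for illustration:
#include <emmintrin.h>

/* Element-wise 16-bit multiply; a single pmullw under SSE2. */
__m128i mul_v8i16(__m128i a, __m128i b) {
    return _mm_mullo_epi16(a, b);
}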
Evan Cheng
2c2d734efd
All "integer" logical ops (pand, por, pxor) are now promoted to v2i64.
Clean up and fix various logical ops issues.
llvm-svn: 27633
2006-04-12 21:21:57 +00:00
Evan Cheng
66fb7beed7
Promote v4i32, v8i16, v16i8 loads to v2i64 loads.
llvm-svn: 27612
2006-04-12 17:12:36 +00:00
Evan Cheng
da283be867
Added support for _mm_move_ss and _mm_move_sd.
llvm-svn: 27575
2006-04-11 00:19:04 +00:00
Evan Cheng
2b6c899eb2
Conditional move of vector types.
llvm-svn: 27556
2006-04-10 07:23:14 +00:00
Evan Cheng
281a7abddf
Code clean up.
llvm-svn: 27501
2006-04-07 21:53:05 +00:00
Evan Cheng
9f27046dc9
- movlp{s|d} and movhp{s|d} support.
- Normalize shuffle nodes so that the result vector's lower-half elements come from the first vector and the rest come from the second vector. (Except for the exceptions :-).
- Other minor fixes.
llvm-svn: 27474
2006-04-06 23:23:56 +00:00
Evan Cheng
6d470008c8
Support for comi / ucomi intrinsics.
llvm-svn: 27444
2006-04-05 23:38:46 +00:00
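Note: a usage sketch for one of these intrinsics (not from the commit):
#include <xmmintrin.h>

/* comiss-based ordered compare of the low single-precision elements */
int less_than(__m128 a, __m128 b) {
    return _mm_comilt_ss(a, b);  /* 1 if a[0] < b[0] (ordered), else 0 */
}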
Evan Cheng
056e0af55a
Handle the canonical form of, e.g.,
vector_shuffle v1, v1, <0, 4, 1, 5, 2, 6, 3, 7>
which is turned into
vector_shuffle v1, <undef>, <0, 0, 1, 1, 2, 2, 3, 3>
by the DAG combiner. It then matches a {p}unpckl on x86.
llvm-svn: 27437
2006-04-05 07:20:06 +00:00
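Note: the matched operation in intrinsic form (a sketch for the v8i16 flavor; function name hypothetical):
#include <emmintrin.h>

/* vector_shuffle v1, undef, <0, 0, 1, 1, 2, 2, 3, 3> interleaves v1's low
   words with themselves -- exactly punpcklwd v1, v1. */
__m128i self_unpacklo(__m128i v1) {
    return _mm_unpacklo_epi16(v1, v1);
}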