mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-21 03:53:04 +02:00
Commit Graph

472 Commits

Author SHA1 Message Date
Evan Cheng
151726ad30 Do away with addLegalFPImmediate. Add a target hook isFPImmLegal which returns true if the fp immediate can be natively codegened by target.
llvm-svn: 85281
2009-10-27 19:56:55 +00:00
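A minimal sketch of the new query-style hook, using stand-in types rather than the real LLVM headers (the names and the x87-style behavior shown are illustrative assumptions, not the tree's exact API):

    enum class SimpleVT { f32, f64 };

    struct TargetLoweringModel {
      virtual ~TargetLoweringModel() = default;
      // True if the target can materialize this FP immediate natively;
      // replaces pre-registering a fixed list via addLegalFPImmediate.
      virtual bool isFPImmLegal(double Imm, SimpleVT VT) const {
        return false; // conservative default: constant-pool load
      }
    };

    // Hypothetical x87-style override: +0.0 and +1.0 have dedicated
    // instructions (fldz / fld1), so they are always "legal".
    struct X87Model : TargetLoweringModel {
      bool isFPImmLegal(double Imm, SimpleVT) const override {
        return Imm == 0.0 || Imm == 1.0;
      }
    };
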
Nate Begeman
1308a36647 Add support for matching shuffle patterns with palignr.
llvm-svn: 84459
2009-10-19 02:17:23 +00:00
Dan Gohman
0ac693a89e Improve MachineMemOperand handling.
- Allocate MachineMemOperands and MachineMemOperand lists in MachineFunctions.
  This eliminates MachineInstr's std::list member and allows the data to be
  created by isel and live for the remainder of codegen, avoiding a lot of
  copying and unnecessary translation. This also shrinks MemSDNode.
- Delete MemOperandSDNode. Introduce MachineSDNode, which has dedicated
  fields for MachineMemOperands.
- Change MemSDNode to have a MachineMemOperand member instead of its own
  fields with the same information. This introduces some redundancy, but
  it's more consistent with what MachineInstr will eventually want.
- Ignore alignment when searching for redundant loads for CSE, but remember
  the greatest alignment.

Target-specific code which previously used MemOperandSDNodes with generic
SDNodes now use MemIntrinsicSDNodes, with opcodes in a designated range
so that the SelectionDAG framework knows that MachineMemOperand information
is available.

llvm-svn: 82794
2009-09-25 20:36:54 +00:00
Evan Cheng
a82b43fb81 Fix funky comments.
llvm-svn: 82314
2009-09-19 10:09:15 +00:00
Evan Cheng
7714c8412d Fix PR4926. When the target hook EmitInstrWithCustomInserter() inserts new basic blocks and updates the CFG, it should also inform sdisel of the changes so that the phi source operands come from the right basic blocks.
llvm-svn: 82311
2009-09-19 09:51:03 +00:00
Evan Cheng
7cb9c456e5 Enhance EmitInstrWithCustomInserter() so targets can specify CFG changes that sdisel will use to properly complete phi nodes.
No functionality change yet.

llvm-svn: 82273
2009-09-18 21:02:19 +00:00
Dan Gohman
0dcc5f9922 Add support for using the FLAGS result of or, xor, and and instructions
on x86, to avoid explicit test instructions. A few existing tests changed
due to arbitrary register allocation differences.

llvm-svn: 82263
2009-09-18 19:59:53 +00:00
Sandeep Patel
9c4e094e2a Retype from unsigned to CallingConv::ID accordingly. Approved by Bob Wilson.
llvm-svn: 80773
2009-09-02 08:44:58 +00:00
Chris Lattner
52279dbf17 refactor select 'sched insertion' out to its own method.
llvm-svn: 80764
2009-09-02 05:57:00 +00:00
Duncan Sands
66fce0befb Revert commit 80428. It completely broke exception
handling on x86-32 linux.

llvm-svn: 80592
2009-08-31 16:45:16 +00:00
Bill Wendling
9dddea2ec3 - Add target lowering methods to get the preferred format for the FDE and LSDA
encodings.
- Make some of the values emitted by the FDEs dependent upon the pointer
  size. This is in line with how GCC does things. And it has the benefit of
  working for Darwin in 64-bit mode now.

llvm-svn: 80428
2009-08-29 12:20:54 +00:00
Eric Christopher
32d5222c4e Implement sse4.2 string/text processing instructions:
Add patterns and instruction encoding information.
Add custom lowering to deal with hardwired return register of
uncertain type (xmm0).

llvm-svn: 79377
2009-08-18 22:50:32 +00:00
Bill Wendling
962adec4ee Reapply r79127. It was fixed by d0k.
llvm-svn: 79136
2009-08-15 21:21:19 +00:00
Bill Wendling
bfebbb6477 Revert r79127. It was causing compilation errors.
llvm-svn: 79135
2009-08-15 21:14:01 +00:00
Evan Cheng
5d841097a9 Change allowsUnalignedMemoryAccesses to take type argument since some targets
support unaligned mem access only for certain types. (Should it be size
instead?)

ARM v7 supports unaligned access for i16 and i32, some v6 variants support it
as well.

llvm-svn: 79127
2009-08-15 19:23:44 +00:00
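A sketch of the changed interface with stand-in types (the per-type granularity is the point; the ARMv7 behavior mirrors the message above, and all names are simplified assumptions):

    enum class MemVT { i8, i16, i32, i64, v4i32 };

    struct TargetLoweringBase {
      virtual ~TargetLoweringBase() = default;
      // Previously took no arguments; now the answer can vary per type.
      virtual bool allowsUnalignedMemoryAccesses(MemVT VT) const {
        return false;
      }
    };

    // ARMv7-style answer per the commit message: unaligned i16/i32
    // accesses are fine, everything else must be aligned.
    struct ARMv7Lowering : TargetLoweringBase {
      bool allowsUnalignedMemoryAccesses(MemVT VT) const override {
        return VT == MemVT::i16 || VT == MemVT::i32;
      }
    };
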
Dan Gohman
d69323d37a On x86-64, for a varargs function, don't store the xmm registers to
the register save area if %al is 0. This avoids touching xmm
registers when they aren't actually used.

llvm-svn: 79061
2009-08-15 01:38:56 +00:00
Owen Anderson
48f2f0ae72 Split EVT into MVT and EVT, the former representing _just_ a primitive type, while
the latter is capable of representing either a primitive or an extended type.

llvm-svn: 78713
2009-08-11 20:47:22 +00:00
Owen Anderson
b4bce99769 Rename MVT to EVT, in preparation for splitting SimpleValueType out into its own struct type.
llvm-svn: 78610
2009-08-10 22:56:29 +00:00
Owen Anderson
dcb47bda67 Start moving TargetLowering away from using full MVTs and towards SimpleValueType, which will simplify the privatization of IntegerType in the future.
llvm-svn: 78584
2009-08-10 18:56:59 +00:00
Anton Korobeynikov
8e6a142223 Better handle the kernel code model. Also, generalize things and fix one
subtle bug with the small code model.

llvm-svn: 78255
2009-08-05 23:01:26 +00:00
Dan Gohman
5d566d918b Major calling convention code refactoring.
Instead of awkwardly encoding calling-convention information with ISD::CALL,
ISD::FORMAL_ARGUMENTS, ISD::RET, and ISD::ARG_FLAGS nodes, TargetLowering
provides three virtual functions for targets to override:
LowerFormalArguments, LowerCall, and LowerRet, which replace the custom
lowering done on the special nodes. They provide the same information, but
in a more immediately usable format.

This also reworks much of the target-independent tail call logic. The
decision of whether or not to perform a tail call is now cleanly split
between target-independent portions, and the target dependent portion
in IsEligibleForTailCallOptimization.

This also synchronizes all in-tree targets, to help enable future
refactoring and feature work.

llvm-svn: 78142
2009-08-05 01:29:28 +00:00
Dan Gohman
8961d1aaa3 Fix typos in comments.
llvm-svn: 77806
2009-08-01 21:25:00 +00:00
Evan Cheng
148032a1a2 Optimize some common usage patterns of atomic built-ins __sync_add_and_fetch() and __sync_sub_and_fetch.
When the return value is not used (i.e. we only care about the value in memory), x86 does not have to use xadd to implement these. Instead, it can use add, sub, inc, or dec instructions with the "lock" prefix.

This is currently implemented with a bit of an instruction selection trick. The issue is that the target-independent pattern produces one output and a chain, and we want to map it into one that just outputs a chain. The current trick is to select it into a merge_values with the first definition being an implicit_def. The proper solution is to add new ISD opcodes for the no-output variants. The DAG combiner can then transform the node before it gets to target node selection.

Problem #2 is that we are adding a whole bunch of x86 atomic instructions when in fact these instructions are identical to the non-locked versions. We need a way to attach target-specific information to target nodes and have this information carried over to machine instructions. The asm printer (or JIT) can then use this information to add the "lock" prefix.

llvm-svn: 77582
2009-07-30 08:33:02 +00:00
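The source-level pattern being optimized, as a compilable GCC/Clang C++ sketch (the function names are illustrative):

    #include <cstdint>

    // Result unused: eligible for the no-output form (e.g. `lock incq`),
    // since only the value in memory matters.
    void bump(int64_t *counter) {
      __sync_add_and_fetch(counter, 1);
    }

    // Result used: still needs `lock xadd`, because the updated value
    // must come back in a register.
    int64_t bump_and_read(int64_t *counter) {
      return __sync_add_and_fetch(counter, 1);
    }
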
Eric Christopher
c7b97d1f03 Add support for gcc __builtin_ia32_ptest{z,c,nzc} intrinsics. Lower
to ptest instruction plus setcc. Revamp ptest instruction. Add test.

llvm-svn: 77407
2009-07-29 00:28:05 +00:00
Chris Lattner
72b24cbbf6 Copy ExpandInlineAsm to TargetLowering from TargetAsmInfo.
llvm-svn: 76441
2009-07-20 17:51:36 +00:00
Chris Lattner
255e408e78 change a few methods to be static functions.
llvm-svn: 75089
2009-07-09 02:44:11 +00:00
Bill Wendling
fdd5badace Update comments to make it clear that the function alignment is the Log2 of the
bytes and not bytes.

llvm-svn: 74624
2009-07-01 18:50:55 +00:00
Bill Wendling
c0fb316bd3 Add an "alignment" field to the MachineFunction object. It makes more sense to
have the alignment be calculated up front, and have the back-ends obey whatever
alignment is decided upon.

This allows for future work that would allow for precise no-op placement and the
like.

llvm-svn: 74564
2009-06-30 22:38:32 +00:00
Devang Patel
8d170194e8 Add new function attribute - noimplicitfloat
Update code generator to use this attribute and remove NoImplicitFloat target option.
Update llc to set this attribute when -no-implicit-float command line option is used.

llvm-svn: 72959
2009-06-05 21:57:13 +00:00
Dale Johannesen
8b6ee9e312 Revert 72707 and 72709, for the moment.
llvm-svn: 72712
2009-06-02 03:12:52 +00:00
Dale Johannesen
c08669561e Make the implicit inputs and outputs of target-independent
ADDC/ADDE use MVT::i1 (later, whatever it gets legalized to)
instead of MVT::Flag.  Remove CARRY_FALSE in favor of 0; adjust
all target-independent code to use this format.

Most targets will still produce a Flag-setting target-dependent
version when selection is done.  X86 is converted to use i32
instead, which means TableGen needs to produce different code
in xxxGenDAGISel.inc.  This keys off the new supportsHasI1 bit
in xxxInstrInfo, currently set only for X86; in principle this
is temporary and should go away when all other targets have
been converted.  All relevant X86 instruction patterns are
modified to represent setting and using EFLAGS explicitly.  The
same can be done on other targets.

The immediate behavior change is that an ADC/ADD pair are no
longer tightly coupled in the X86 scheduler; they can be
separated by instructions that don't clobber the flags (MOV).
I will soon add some peephole optimizations based on using
other instructions that set the flags to feed into ADC.

llvm-svn: 72707
2009-06-01 23:27:20 +00:00
Evan Cheng
40810c4d1b Added an optimization that narrows load / op / store sequences where the 'op' is a bit-twiddling instruction and its second operand is an immediate. If the bits touched by 'op' can be handled with a narrower instruction, reduce the width of the load and store as well. This happens a lot in bitfield manipulation code.
e.g.
orl     $65536, 8(%rax)
=>
orb     $1, 10(%rax)

Since narrowing is not always a win, e.g. i32 -> i16 is a loss on x86, the DAG combiner consults the target before performing the optimization.

llvm-svn: 72507
2009-05-28 00:35:15 +00:00
Eli Friedman
d877b76d14 Make the x86 backend custom-lower UINT_TO_FP and FP_TO_UINT on 32-bit
systems instead of attempting to promote them to a 64-bit SINT_TO_FP or 
FP_TO_SINT.  This is in preparation for removing the type legalization 
code from LegalizeDAG: once type legalization is gone from LegalizeDAG, 
it won't be able to handle the i64 operand/result correctly.

This isn't quite ideal, but I don't think any other operation for any 
target ends up in this situation, so treating this case specially seems 
reasonable.

llvm-svn: 72324
2009-05-23 09:59:16 +00:00
Nate Begeman
414534b3eb Implement review feedback for vector shuffle work.
llvm-svn: 70372
2009-04-29 05:20:52 +00:00
Nate Begeman
9d121924fd 2nd attempt, fixing SSE4.1 issues and implementing feedback from duncan.
PR2957

ISD::VECTOR_SHUFFLE now stores an array of integers representing the shuffle
mask internal to the node, rather than taking a BUILD_VECTOR of ConstantSDNodes
as the shuffle mask.  A value of -1 represents UNDEF.

In addition to eliminating the creation of illegal BUILD_VECTORS just to 
represent shuffle masks, we are better about canonicalizing the shuffle mask,
resulting in substantially better code for some classes of shuffles.

llvm-svn: 70225
2009-04-27 18:41:29 +00:00
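A stand-in sketch of the representation change (not the real SDNode API): the mask lives on the node as plain integers, so canonicalization becomes simple array manipulation:

    #include <vector>

    struct ShuffleMask {
      static constexpr int Undef = -1; // replaces an UNDEF ConstantSDNode
      std::vector<int> Elts;           // one entry per result element

      // Example canonicalization: if no element selects from the second
      // operand (indices >= NumElts), the shuffle only needs operand 1.
      bool usesOnlyFirstOperand(int NumElts) const {
        for (int M : Elts)
          if (M >= NumElts)
            return false;
        return true;
      }
    };
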
Rafael Espindola
0b1037ad26 Revert 69952. Causes testsuite failures on linux x86-64.
llvm-svn: 69967
2009-04-24 12:40:33 +00:00
Nate Begeman
c1a09c7dfa PR2957
ISD::VECTOR_SHUFFLE now stores an array of integers representing the shuffle
mask internal to the node, rather than taking a BUILD_VECTOR of ConstantSDNodes
as the shuffle mask.  A value of -1 represents UNDEF.

In addition to eliminating the creation of illegal BUILD_VECTORS just to 
represent shuffle masks, we are better about canonicalizing the shuffle mask,
resulting in substantially better code for some classes of shuffles.

A clean up of x86 shuffle code, and some canonicalizing in DAGCombiner is next.

llvm-svn: 69952
2009-04-24 03:42:54 +00:00
Rafael Espindola
7eb72dc5f2 Re-apply 68552.
Tested by bootstrapping llvm-gcc and using that to build llvm.

llvm-svn: 68645
2009-04-08 21:14:34 +00:00
Dan Gohman
c9ce27d6b7 Implement support for modeling implicit zero-extension on x86-64
with SUBREG_TO_REG, teach SimpleRegisterCoalescing to coalesce
SUBREG_TO_REG instructions (which are similar to INSERT_SUBREG
instructions), and teach the DAGCombiner to take advantage of this on
targets which support it. This eliminates many redundant
zero-extension operations on x86-64.

This adds a new TargetLowering hook, isZExtFree. It's similar to
isTruncateFree, except it only applies to actual definitions, and not
no-op truncates which may not zero the high bits.

Also, this adds a new optimization to SimplifyDemandedBits: transform
operations like x+y into (zext (add (trunc x), (trunc y))) on targets
where all the casts are no-ops. In contexts where the high part of the
add is explicitly masked off, this allows the mask operation to be
eliminated. Fix the DAGCombiner to avoid undoing these transformations
to eliminate casts on targets where the casts are no-ops.

Also, this adds a new two-address lowering heuristic. Since
two-address lowering runs before coalescing, it helps to be able to
look through copies when deciding whether commuting and/or
three-address conversion are profitable.

Also, fix a bug in LiveInterval::MergeInClobberRanges. It didn't handle
the case that a clobber range extended both before and beyond an
existing live range. In that case, multiple live ranges need to be
added. This was exposed by the new subreg coalescing code.

Remove 2008-05-06-SpillerBug.ll. It was bugpoint-reduced, and the
spiller behavior it was looking for no longer occurs with the new
instruction selection.

llvm-svn: 68576
2009-04-08 00:15:30 +00:00
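A stand-in sketch of the isZExtFree hook described above (types and names simplified):

    enum class IntVT { i8, i16, i32, i64 };

    struct TargetLoweringBase {
      virtual ~TargetLoweringBase() = default;
      // Unlike isTruncateFree, this only applies to actual definitions:
      // a no-op truncate may leave garbage in the high bits, but a real
      // definition of a narrower value may implicitly zero them.
      virtual bool isZExtFree(IntVT From, IntVT To) const { return false; }
    };

    // x86-64: writing a 32-bit register implicitly zeroes bits 63:32,
    // so i32 -> i64 zero-extension of a definition is free.
    struct X86_64Lowering : TargetLoweringBase {
      bool isZExtFree(IntVT From, IntVT To) const override {
        return From == IntVT::i32 && To == IntVT::i64;
      }
    };
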
Bill Wendling
6e702cf68c Temporarily revert r68552. This was causing a failure in the self-hosting LLVM
builds.

--- Reverse-merging (from foreign repository) r68552 into '.':
U    test/CodeGen/X86/tls8.ll
U    test/CodeGen/X86/tls10.ll
U    test/CodeGen/X86/tls2.ll
U    test/CodeGen/X86/tls6.ll
U    lib/Target/X86/X86Instr64bit.td
U    lib/Target/X86/X86InstrSSE.td
U    lib/Target/X86/X86InstrInfo.td
U    lib/Target/X86/X86RegisterInfo.cpp
U    lib/Target/X86/X86ISelLowering.cpp
U    lib/Target/X86/X86CodeEmitter.cpp
U    lib/Target/X86/X86FastISel.cpp
U    lib/Target/X86/X86InstrInfo.h
U    lib/Target/X86/X86ISelDAGToDAG.cpp
U    lib/Target/X86/AsmPrinter/X86ATTAsmPrinter.cpp
U    lib/Target/X86/AsmPrinter/X86IntelAsmPrinter.cpp
U    lib/Target/X86/AsmPrinter/X86ATTAsmPrinter.h
U    lib/Target/X86/AsmPrinter/X86IntelAsmPrinter.h
U    lib/Target/X86/X86ISelLowering.h
U    lib/Target/X86/X86InstrInfo.cpp
U    lib/Target/X86/X86InstrBuilder.h
U    lib/Target/X86/X86RegisterInfo.td

llvm-svn: 68560
2009-04-07 22:35:25 +00:00
Rafael Espindola
0324937229 Reduce code duplication on the TLS implementation.
This introduces a small regression on the generated code
quality in the case we are just computing addresses, not
loading values.

Will work on it and on X86-64 support.

llvm-svn: 68552
2009-04-07 21:37:46 +00:00
Evan Cheng
3e30bcbd69 When optimizing a mul by immediate into two, the resulting muls should get an x86-specific node to keep the DAG combiner from hacking on them further.
llvm-svn: 68066
2009-03-30 21:36:47 +00:00
Bill Wendling
40ac545f38 Doxygen-ify comments.
llvm-svn: 67727
2009-03-26 01:46:56 +00:00
Dan Gohman
e9cf3083d2 Correct some comments. Operand numbers start at 0.
llvm-svn: 67518
2009-03-23 15:40:10 +00:00
Chris Lattner
b904fac1b8 improve comment.
llvm-svn: 66778
2009-03-12 06:46:02 +00:00
Dan Gohman
b9c32f1aca Arithmetic instructions don't set the EFLAGS OF and CF bits
the same way the "test" instruction does in overflow cases,
so eliminating the test is only safe when those bits aren't
needed, as is the case for COND_E and COND_NE, or if it
can be proven that no overflow will occur. For now, just
restrict the optimization to COND_E and COND_NE and don't
do any overflow analysis.

llvm-svn: 66318
2009-03-07 01:58:32 +00:00
Dan Gohman
31fb085c2e Re-apply 66004, now that the unfoldMemoryOperand bug is fixed.
llvm-svn: 66058
2009-03-04 19:44:21 +00:00
Dan Gohman
6831e2c2a6 Revert r66004 for now; it's causing a variety of test failures.
llvm-svn: 66008
2009-03-04 03:54:19 +00:00
Dan Gohman
c6c669cc1e Teach the x86 backend to eliminate "test" instructions by using the EFLAGS
result from add, sub, inc, and dec instructions in simple cases.

llvm-svn: 66004
2009-03-04 02:33:24 +00:00
Nate Begeman
e0093d2501 Generate better code for v8i16 shuffles on SSE2
Generate better code for v16i8 shuffles on SSE2 (avoids stack)
Generate pshufb for v8i16 and v16i8 shuffles on SSSE3 where it takes fewer uops.
Document the shuffle matching logic and add some FIXMEs for later further
  cleanups.
New tests that test the above.

Examples:

New:
_shuf2:
	pextrw	$7, %xmm0, %eax
	punpcklqdq	%xmm1, %xmm0
	pshuflw	$128, %xmm0, %xmm0
	pinsrw	$2, %eax, %xmm0

Old:
_shuf2:
	pextrw	$2, %xmm0, %eax
	pextrw	$7, %xmm0, %ecx
	pinsrw	$2, %ecx, %xmm0
	pinsrw	$3, %eax, %xmm0
	movd	%xmm1, %eax
	pinsrw	$4, %eax, %xmm0
	ret

=========

New:
_shuf4:
	punpcklqdq	%xmm1, %xmm0
	pshufb	LCPI1_0, %xmm0

Old:
_shuf4:
	pextrw	$3, %xmm0, %eax
	movsd	%xmm1, %xmm0
	pextrw	$3, %xmm1, %ecx
	pinsrw	$4, %ecx, %xmm0
	pinsrw	$5, %eax, %xmm0

========

New:
_shuf1:
	pushl	%ebx
	pushl	%edi
	pushl	%esi
	pextrw	$1, %xmm0, %eax
	rolw	$8, %ax
	movd	%xmm0, %ecx
	rolw	$8, %cx
	pextrw	$5, %xmm0, %edx
	pextrw	$4, %xmm0, %esi
	pextrw	$3, %xmm0, %edi
	pextrw	$2, %xmm0, %ebx
	movaps	%xmm0, %xmm1
	pinsrw	$0, %ecx, %xmm1
	pinsrw	$1, %eax, %xmm1
	rolw	$8, %bx
	pinsrw	$2, %ebx, %xmm1
	rolw	$8, %di
	pinsrw	$3, %edi, %xmm1
	rolw	$8, %si
	pinsrw	$4, %esi, %xmm1
	rolw	$8, %dx
	pinsrw	$5, %edx, %xmm1
	pextrw	$7, %xmm0, %eax
	rolw	$8, %ax
	movaps	%xmm1, %xmm0
	pinsrw	$7, %eax, %xmm0
	popl	%esi
	popl	%edi
	popl	%ebx
	ret

Old:
_shuf1:
	subl	$252, %esp
	movaps	%xmm0, (%esp)
	movaps	%xmm0, 16(%esp)
	movaps	%xmm0, 32(%esp)
	movaps	%xmm0, 48(%esp)
	movaps	%xmm0, 64(%esp)
	movaps	%xmm0, 80(%esp)
	movaps	%xmm0, 96(%esp)
	movaps	%xmm0, 224(%esp)
	movaps	%xmm0, 208(%esp)
	movaps	%xmm0, 192(%esp)
	movaps	%xmm0, 176(%esp)
	movaps	%xmm0, 160(%esp)
	movaps	%xmm0, 144(%esp)
	movaps	%xmm0, 128(%esp)
	movaps	%xmm0, 112(%esp)
	movzbl	14(%esp), %eax
	movd	%eax, %xmm1
	movzbl	22(%esp), %eax
	movd	%eax, %xmm2
	punpcklbw	%xmm1, %xmm2
	movzbl	42(%esp), %eax
	movd	%eax, %xmm1
	movzbl	50(%esp), %eax
	movd	%eax, %xmm3
	punpcklbw	%xmm1, %xmm3
	punpcklbw	%xmm2, %xmm3
	movzbl	77(%esp), %eax
	movd	%eax, %xmm1
	movzbl	84(%esp), %eax
	movd	%eax, %xmm2
	punpcklbw	%xmm1, %xmm2
	movzbl	104(%esp), %eax
	movd	%eax, %xmm1
	punpcklbw	%xmm1, %xmm0
	punpcklbw	%xmm2, %xmm0
	movaps	%xmm0, %xmm1
	punpcklbw	%xmm3, %xmm1
	movzbl	127(%esp), %eax
	movd	%eax, %xmm0
	movzbl	135(%esp), %eax
	movd	%eax, %xmm2
	punpcklbw	%xmm0, %xmm2
	movzbl	155(%esp), %eax
	movd	%eax, %xmm0
	movzbl	163(%esp), %eax
	movd	%eax, %xmm3
	punpcklbw	%xmm0, %xmm3
	punpcklbw	%xmm2, %xmm3
	movzbl	188(%esp), %eax
	movd	%eax, %xmm0
	movzbl	197(%esp), %eax
	movd	%eax, %xmm2
	punpcklbw	%xmm0, %xmm2
	movzbl	217(%esp), %eax
	movd	%eax, %xmm4
	movzbl	225(%esp), %eax
	movd	%eax, %xmm0
	punpcklbw	%xmm4, %xmm0
	punpcklbw	%xmm2, %xmm0
	punpcklbw	%xmm3, %xmm0
	punpcklbw	%xmm1, %xmm0
	addl	$252, %esp
	ret

llvm-svn: 65311
2009-02-23 08:49:38 +00:00
Dan Gohman
4105a38248 Constify TargetInstrInfo::EmitInstrWithCustomInserter, allowing
ScheduleDAG's TLI member to use const.

llvm-svn: 64018
2009-02-07 16:15:20 +00:00
Dale Johannesen
15a801f11d Remove non-DebugLoc versions of getLoad and getStore.
Adjust the many callers of those versions.

llvm-svn: 63767
2009-02-04 20:06:27 +00:00
Dale Johannesen
f9b2746030 Need this file too.
llvm-svn: 63674
2009-02-03 22:26:34 +00:00
Dale Johannesen
6c8c315519 DebugLoc propagation. 2/3 through file.
llvm-svn: 63650
2009-02-03 19:33:06 +00:00
Nate Begeman
48f3fe9199 Fix an indent and a typo.
llvm-svn: 62940
2009-01-24 22:12:48 +00:00
Bill Wendling
d18c38c0f2 Implement a special algorithm for converting uint_to_fp for i32 values on
X86. This code:

void f() {
  uint32_t x;
  float y = (float)x;
}

used to be:

     movl     %eax, -8(%ebp)
     movl     [2^52 double], -4(%ebp)
     movsd    -8(%ebp), %xmm0
     subsd    [2^52 double], %xmm0
     cvtsd2ss %xmm0, %xmm0

Is now:

   movsd        [2^52 double], %xmm0
   movsd        %xmm0, %xmm1
   movd         %ecx, %xmm2
   orps         %xmm2, %xmm1
   subsd        %xmm0, %xmm1
   cvtsd2ss     %xmm1, %xmm0

This is faster on X86. Note that there's an extra load of %xmm0 into %xmm1. That
will be fixed in a later coalescer fix.

llvm-svn: 62404
2009-01-17 03:56:04 +00:00
Dan Gohman
37d7b5be33 Make getWidenVectorType const.
llvm-svn: 62265
2009-01-15 17:34:08 +00:00
Devang Patel
eed0505ed8 Use DebugInfo interface to lower dbg_* intrinsics.
llvm-svn: 62127
2009-01-13 00:35:13 +00:00
Duncan Sands
190d6bc636 Fix PR3274: when promoting the condition of a BRCOND node,
promote from i1 all the way up to the canonical SetCC type.
In order to discover an appropriate type to use, pass
MVT::Other to getSetCCResultType.  In order to be able to
do this, change getSetCCResultType to take a type as an
argument, not a value (this is also more logical).

llvm-svn: 61542
2009-01-01 15:52:00 +00:00
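A stand-in sketch of the interface change (simplified types): the hook now receives a type rather than a value, and an "Other" sentinel asks for the canonical SetCC type when promoting an i1 condition:

    enum class VT { Other, i1, i8, i32, v4i32 };

    struct TargetLoweringBase {
      virtual ~TargetLoweringBase() = default;
      // Old form took a value; passing a type is more logical and lets
      // BRCOND promotion ask "what is the canonical SetCC type here?"
      virtual VT getSetCCResultType(VT OperandVT) const {
        (void)OperandVT;
        return VT::i32; // typical scalar target: compare into a full reg
      }
    };
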
Dan Gohman
1ba93ac6be Add instruction patterns and encodings for the x86 bt instructions.
llvm-svn: 61400
2008-12-23 22:45:23 +00:00
Mon P Wang
9f8945c5b9 Fixed x86 code generation of multiply for v2i64. It was incorrect for SSE4.1.
llvm-svn: 61211
2008-12-18 21:42:19 +00:00
Bill Wendling
13e4a3d0b0 - Use patterns instead of creating completely new instruction matching patterns,
which are identical to the original patterns.

- Change the multiply with overflow so that we distinguish between signed and
  unsigned multiplication. Currently, unsigned multiplication with overflow
  isn't working!

llvm-svn: 60963
2008-12-12 21:15:41 +00:00
Bill Wendling
5d026e47c1 Redo the arithmetic with overflow architecture. I was changing the semantics of
ISD::ADD to emit an implicit EFLAGS. This was horribly broken. Instead, replace
the intrinsic with an ISD::SADDO node. Then custom lower that into an
X86ISD::ADD node with an associated SETCC that checks the correct condition code
(overflow or carry). Then that gets lowered into the correct X86::ADDOvf
instruction.

Similar for SUB and MUL instructions.

llvm-svn: 60915
2008-12-12 00:56:36 +00:00
Bill Wendling
4c8fb3a0cc Add sub/mul overflow intrinsics. This currently doesn't have a
target-independent way of determining overflow on multiplication. It's very
tricky. Patch by Zoltan Varga!

llvm-svn: 60800
2008-12-09 22:08:41 +00:00
Bill Wendling
580f12ae30 Second stab at target-dependent lowering of everyone's favorite nodes: [SU]ADDO
- LowerXADDO lowers [SU]ADDO into an ADD with an implicit EFLAGS define. The
  EFLAGS are fed into a SETCC node which has the conditional COND_O or COND_C,
  depending on the type of ADDO requested.

- LowerBRCOND now recognizes if it's coming from a SETCC node with COND_O or
  COND_C set.

llvm-svn: 60388
2008-12-02 01:06:39 +00:00
Duncan Sands
1fae2ea219 Change the interface to the type legalization method
ReplaceNodeResults: rather than returning a node which
must have the same number of results as the original
node (which means mucking around with MERGE_VALUES,
and which is also easy to get wrong since SelectionDAG
folding may mean you don't get the node you expect),
return the results in a vector.

llvm-svn: 60348
2008-12-01 11:39:25 +00:00
Bill Wendling
0560ba7668 - Make lowering of "add with overflow" customizable by back-ends.
- Mark "add with overflow" as having a custom lowering for X86. Give it a null
  lowering representation for now.

llvm-svn: 59971
2008-11-24 19:21:46 +00:00
Mon P Wang
d7e34cd378 Add initial support for vector widening. Logic is set to widen for X86.
One will only see an effect if LegalizeTypes is not active. Support will move
to LegalizeTypes soon.

llvm-svn: 58426
2008-10-30 08:01:45 +00:00
Dale Johannesen
eb7e2deb1d Add an SSE2 algorithm for uint64->f64 conversion.
The same one Apple gcc uses, faster.  Also gets the
extreme case in gcc.c-torture/execute/ieee/rbug.c
correct which we weren't before; this is not
sufficient to get the test to pass though, there
is another bug.

llvm-svn: 57926
2008-10-21 20:50:01 +00:00
Dan Gohman
15597f07b2 Teach DAGCombine to fold constant offsets into GlobalAddress nodes,
and add a TargetLowering hook for it to use to determine when this
is legal (i.e. not in PIC mode, etc.)

This allows instruction selection to emit folded constant offsets
in more cases, such as the included testcase, eliminating the need
for explicit arithmetic instructions.

This eliminates the need for the C++ code in X86ISelDAGToDAG.cpp
that attempted to achieve the same effect, but wasn't as effective.

Also, fix handling of offsets in GlobalAddressSDNodes in several
places, including changing GlobalAddressSDNode's offset from
int to int64_t.

The Mips, Alpha, Sparc, and CellSPU targets appear to be
unaware of GlobalAddress offsets currently, so set the hook to
false on those targets.

llvm-svn: 57748
2008-10-18 02:06:02 +00:00
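A stand-in sketch of the new hook (names simplified; the real hook takes a GlobalAddressSDNode): DAGCombine asks before folding `GA + constant` into the node's own offset:

    #include <cstdint>

    struct GlobalAddressNodeModel {
      const char *Symbol;
      int64_t Offset; // widened from int to int64_t by this change
    };

    struct TargetLoweringBase {
      bool IsPICMode = false;
      virtual ~TargetLoweringBase() = default;
      // Targets unaware of GlobalAddress offsets (Mips, Alpha, Sparc,
      // CellSPU per the message) simply return false here.
      virtual bool isOffsetFoldingLegal(const GlobalAddressNodeModel &) const {
        return !IsPICMode; // e.g. not representable under PIC relocations
      }
    };
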
Dan Gohman
c070ffc493 FastISel support for exception-handling constructs.
- Move the EH landing-pad code and adjust it so that it works
   with FastISel as well as with SDISel.
 - Add FastISel support for @llvm.eh.exception and
   @llvm.eh.selector.

llvm-svn: 57539
2008-10-14 23:54:11 +00:00
Dale Johannesen
dc83b95ba5 Make atomic Swap work, 64-bit on x86-32.
Make it all work in non-pic mode.

llvm-svn: 57034
2008-10-03 22:25:52 +00:00
Dale Johannesen
dbd7b1bd33 Handle some 64-bit atomics on x86-32, some of the time.
llvm-svn: 56963
2008-10-02 18:53:47 +00:00
Bill Wendling
d7effcf8da Implement the -fno-builtin option in the front-end, not in the back-end.
llvm-svn: 56900
2008-10-01 00:59:58 +00:00
Bill Wendling
9ad453e943 Add the new `-no-builtin' flag. This flag is meant to mimic the GCC
`-fno-builtin' flag. Currently, it's used to replace "memset" with "_bzero"
instead of "__bzero" on Darwin10+. This arguably violates the meaning of this
flag, but is currently sufficient. The meaning of this flag should become more
specific over time.

llvm-svn: 56885
2008-09-30 21:22:07 +00:00
Dale Johannesen
52987eab6e Remove misuse of ReplaceNodeResults for atomics with
valid types.  No functional change.

llvm-svn: 56808
2008-09-29 22:25:26 +00:00
Evan Cheng
d190aeb62d With sse3, and when the source is a load or has multiple uses, favor movddup over shufp*, pshufd, etc. Without sse3, or when the source is in a register, make use of movlhps.
llvm-svn: 56620
2008-09-25 20:50:48 +00:00
Evan Cheng
f942615847 Properly handle 'm' inline asm constraints. If a GV is being selected for the addressing mode, it requires the same logic for PIC relative addressing, etc.
llvm-svn: 56526
2008-09-24 00:05:32 +00:00
Dan Gohman
01a070f9c7 Arrange for FastISel code to have access to the MachineModuleInfo
object. This will be needed to support debug info.

llvm-svn: 56508
2008-09-23 21:53:34 +00:00
Dan Gohman
a712c8d29e Fix these enums' starting values to reflect the way that
instruction opcodes are now numbered. No functionality change.

llvm-svn: 56497
2008-09-23 18:42:32 +00:00
Bill Wendling
932818c75a Reverting r56249. On further investigation, this functionality isn't needed.
Apologies for the thrashing.

llvm-svn: 56251
2008-09-16 21:48:12 +00:00
Bill Wendling
1a240c8033 - Change "ExternalSymbolSDNode" to "SymbolSDNode".
- Add linkage to SymbolSDNode (default to external).
- Change ISD::ExternalSymbol to ISD::Symbol.
- Change ISD::TargetExternalSymbol to ISD::TargetSymbol

These changes pave the way to allowing SymbolSDNodes with non-external linkage.

llvm-svn: 56249
2008-09-16 21:12:30 +00:00
Dan Gohman
3450a8252f Define CallSDNode, an SDNode subclass for use with ISD::CALL.
Currently it just holds the calling convention and flags
for isVarArgs and isTailCall.

And it has several utility methods, which eliminate magic
5+2*i and similar index computations in several places.

CallSDNodes are not CSE'd. Teach UpdateNodeOperands to handle
nodes that are not CSE'd gracefully.

llvm-svn: 56183
2008-09-13 01:54:27 +00:00
Dan Gohman
3ccdde5eef Add X86FastISel support for static allocas, and references
to static allocas. As part of this change, refactor the
address mode code for loads and stores.

llvm-svn: 56066
2008-09-10 20:11:02 +00:00
Anton Korobeynikov
59a67c60db Replace explicit pointer-size constants to TargetData query.
No functionality change.

llvm-svn: 55996
2008-09-09 18:22:57 +00:00
Dan Gohman
18cc2a26df Create HandlePHINodesInSuccessorBlocksFast, a version of
HandlePHINodesInSuccessorBlocks that works FastISel-style. This
allows PHI nodes to be updated correctly while using FastISel.

This also involves some code reorganization; ValueMap and
MBBMap are now members of the FastISel class, so they needn't
be passed around explicitly anymore. Also, SelectInstructions
is changed to SelectInstruction, and only does one instruction
at a time.

llvm-svn: 55746
2008-09-03 23:12:08 +00:00
Ted Kremenek
b7236d215b Fix capitalization in #include of FastISel.h. This unbreaks the build on case-sensitive filesystems.
llvm-svn: 55687
2008-09-03 02:54:11 +00:00
Evan Cheng
43c7084625 Let tblgen only generate the fastisel routines, not the class definition. This makes it easier for targets to define their own fastisel class.
llvm-svn: 55679
2008-09-03 00:03:49 +00:00
Dan Gohman
ddebe95287 Simplify FastISel's constructor argument list, make the FastISel
class hold a MachineRegisterInfo member, and make the
MachineBasicBlock be passed in to SelectInstructions rather
than the FastISel constructor.

llvm-svn: 55076
2008-08-20 21:05:57 +00:00
Dan Gohman
36e732b8fc The X86 target will soon have an implementation of createFastISel.
llvm-svn: 55010
2008-08-19 21:32:53 +00:00
Dale Johannesen
15b76de064 Add support for 8 and 16 bit forms of __sync
builtins on X86.

Change "lock" instructions to be on a separate line.
This is needed to work around a bug in the Darwin
assembler.

llvm-svn: 54999
2008-08-19 18:47:28 +00:00
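The newly supported narrow forms, as compilable GCC/Clang C++ (function names are illustrative):

    #include <cstdint>

    // 8- and 16-bit atomic adds, which previously had to go through
    // the 32-bit builtin forms.
    uint8_t  fetch_add_u8(uint8_t *p)   { return __sync_fetch_and_add(p, 1); }
    uint16_t fetch_add_u16(uint16_t *p) { return __sync_fetch_and_add(p, 1); }
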
Dan Gohman
9742f7772d Rename SDOperand to SDValue.
llvm-svn: 54128
2008-07-27 21:46:04 +00:00
Nate Begeman
af01bfff99 SSE codegen for vsetcc nodes
llvm-svn: 53719
2008-07-17 16:51:19 +00:00
Duncan Sands
3ea6f15708 Rather than having a different custom legalization
hook for each way in which a result type can be
legalized (promotion, expansion, softening etc),
just use one: ReplaceNodeResults, which returns
a node with exactly the same result types as the
node passed to it, but presumably with a bunch of
custom code behind the scenes.  No change if the
new LegalizeTypes infrastructure is not turned on.

llvm-svn: 53137
2008-07-04 11:47:58 +00:00
Mon P Wang
7d89d61387 Added MemOperands to atomic operations, since atomics touch memory.
Added an abstract class MemSDNode for any node that has an associated MemOperand.
Changed atomic.lcs => atomic.cmp.swap, atomic.las => atomic.load.add, and
atomic.lss => atomic.load.sub

llvm-svn: 52706
2008-06-25 08:15:39 +00:00
Andrew Lenharth
327c3e7559 add missing atomic intrinsic from gcc
llvm-svn: 52270
2008-06-14 05:48:15 +00:00
Duncan Sands
d634afe3aa Wrap MVT::ValueType in a struct to get type safety
and better control the abstraction.  Rename the type
to MVT.  To update out-of-tree patches, the main
thing to do is to rename MVT::ValueType to MVT, and
rewrite expressions like MVT::getSizeInBits(VT) in
the form VT.getSizeInBits().  Use VT.getSimpleVT()
to extract a MVT::SimpleValueType for use in switch
statements (you will get an assert failure if VT is
an extended value type - these shouldn't exist after
type legalization).
This results in a small speedup of codegen and no
new testsuite failures (x86-64 linux).

llvm-svn: 52044
2008-06-06 12:08:01 +00:00
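The mechanical rewrite requested of out-of-tree patches, shown with a stand-in wrapper type (a sketch, not the real class):

    struct MVTWrapper {
      enum SimpleValueType { i32, i64, f32, f64 };
      SimpleValueType V;

      // Static helpers on the old enum become methods on the struct.
      unsigned getSizeInBits() const {
        return (V == i64 || V == f64) ? 64 : 32;
      }
      // For switch statements; would assert on extended value types.
      SimpleValueType getSimpleVT() const { return V; }
    };

    unsigned widthOf(MVTWrapper VT) {
      // Before: MVT::getSizeInBits(VT); after:
      return VT.getSizeInBits();
    }
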
Evan Cheng
04c0915a2f Implement vector shift up / down and insert zero with ps{rl}lq / ps{rl}ldq.
llvm-svn: 51667
2008-05-29 08:22:04 +00:00
Evan Cheng
73dadf21ce Fix typos and comments.
llvm-svn: 51165
2008-05-15 22:13:02 +00:00
Evan Cheng
778a5e27b0 Make use of vector load and store operations to implement memcpy, memmove, and memset. Currently only the X86 target takes advantage of these.
llvm-svn: 51140
2008-05-15 08:39:06 +00:00
Dan Gohman
f9d5689496 Change target-specific classes to use more precise static types.
This eliminates the need for several awkward casts, including
the last dynamic_cast under lib/Target.

llvm-svn: 51091
2008-05-14 01:58:56 +00:00
Nate Begeman
2ae55cecc6 Initial X86 codegen support for VSETCC.
llvm-svn: 51000
2008-05-12 20:34:32 +00:00
Evan Cheng
c7e9acfed7 Refactor isConsecutiveLoad from X86 to TargetLowering so DAG combiner can make use of it.
llvm-svn: 50991
2008-05-12 19:56:52 +00:00
Dan Gohman
4b23d9e60a For now, abort when an ISD::VAARG is encountered on x86-64, rather
than silently generating invalid code.

llvm-gcc does not currently use VAArgInst; it lowers va_arg in the
front-end.

llvm-svn: 50930
2008-05-10 01:26:14 +00:00
Evan Cheng
3493e43afd Handle a few more cases of folding load i64 into xmm and zero top bits.
Note, some of the code will be moved into target independent part of DAG combiner in a subsequent patch.

llvm-svn: 50918
2008-05-09 21:53:03 +00:00
Evan Cheng
f97e716511 Handle vector move / load which zero the destination register top bits (i.e. movd, movq, movss (addr), movsd (addr)) with X86 specific dag combine.
llvm-svn: 50838
2008-05-08 00:57:18 +00:00
Mon P Wang
84a269e023 Added additional atomic intrinsics: and, or, xor, min, and max.
llvm-svn: 50663
2008-05-05 19:05:59 +00:00
Arnold Schwaighofer
f58a35e2ec Tail call optimization improvements:
Move platform independent code (lowering of possibly overwritten
arguments, check for tail call optimization eligibility) from
target X86ISelectionLowering.cpp to TargetLowering.h and
SelectionDAGISel.cpp.

Initial PowerPC tail call implementation:

Support ppc32 implemented and tested (passes my tests and
test-suite llvm-test).  
Support ppc64 implemented and half tested (passes my tests).
On ppc tail call optimization is performed if 
  caller and callee are fastcc
  call is a tail call (in tail call position, call followed by ret)
  no variable argument lists or byval arguments
  option -tailcallopt is enabled
Supported:
 * non pic tail calls on linux/darwin
 * module-local tail calls on linux(PIC/GOT)/darwin(PIC)
 * inter-module tail calls on darwin(PIC)
If constraints are not met a normal call will be emitted.

A test checking the argument lowering behaviour on x86-64 was added.

llvm-svn: 50477
2008-04-30 09:16:33 +00:00
Dan Gohman
0285c1e9bb Fix the SVOffset values for loads and stores produced by
memcpy/memset expansion. It was a bug for the SVOffset value
to be used in the actual address calculations.

llvm-svn: 50359
2008-04-28 17:15:20 +00:00
Chris Lattner
b5bd654163 A few inline asm cleanups:
- Make targetlowering.h fit in 80 cols.
- Make LowerAsmOperandForConstraint const.
- Make lowerXConstraint -> LowerXConstraint.
- Make LowerXConstraint return a const char* instead of taking a string byref.

llvm-svn: 50312
2008-04-26 23:02:14 +00:00
Dan Gohman
6f9b55bc7c Remove X86_64SRet; it isn't used anymore.
llvm-svn: 49759
2008-04-16 00:24:30 +00:00
Dan Gohman
8d46278998 Fix const-correctness issues with the SrcValue handling in the
memory intrinsic expansion code.

llvm-svn: 49666
2008-04-14 17:55:48 +00:00
Arnold Schwaighofer
82af0e6a43 This patch corrects the handling of byval arguments for tailcall
optimized x86-64 (and x86) calls so that they work (... at least for
my test cases).

Should fix the following problems:

Problem 1: When I introduced the optimized handling of arguments for
tail-called functions (using a sequence of copyto/copyfrom virtual
registers instead of always lowering to the top of the stack), I did not
handle byval arguments correctly, e.g. they did not work at all :).

Problem 2: On x86-64 after the arguments of the tail called function
are moved to their registers (which include ESI/RSI etc), tail call
optimization performs byval lowering which causes xSI,xDI, xCX
registers to be overwritten. This is handled in this patch by moving
the arguments to virtual registers first and after the byval lowering
the arguments are moved from those virtual registers back to
RSI/RDI/RCX.

llvm-svn: 49584
2008-04-12 18:11:06 +00:00
Dan Gohman
15edbf989f Drop ISD::MEMSET, ISD::MEMMOVE, and ISD::MEMCPY, which are not Legal
on any current target and aren't optimized in DAGCombiner. Instead
of using intermediate nodes, expand the operations, choosing between
simple loads/stores, target-specific code, and library calls,
immediately.

Previously, the code to emit optimized code for these operations
was only used at initial SelectionDAG construction time; now it is
used at all times. This fixes some cases where rep;movs was being
used for small copies where simple loads/stores would be better.

This also cleans up code that checks for alignments less than 4;
let the targets make that decision instead of doing it in
target-independent code. This allows x86 to use rep;movs in
low-alignment cases.

Also, this fixes a bug that resulted in the use of rep;stos for
memsets of 0 with non-constant memory size when the alignment was
at least 4. It's better to use the library in this case, which
can be significantly faster when the size is large.

This also preserves more SourceValue information when memory
intrinsics are lowered into simple loads/stores.

llvm-svn: 49572
2008-04-12 04:36:06 +00:00
Dan Gohman
b3a511b236 Make isVectorClearMaskLegal's operand list const.
llvm-svn: 49446
2008-04-09 20:09:42 +00:00
Chris Lattner
edfc239ced remove Evan's "ugly hack" that sorta attempted to get
x86-64 return conventions correct, but was never enabled.
We can now do the "right thing" with multiple return values.

llvm-svn: 48635
2008-03-21 06:50:21 +00:00
Arnold Schwaighofer
19a78545d9 Don't lose incoming argument registers. Fix documentation style.
llvm-svn: 48545
2008-03-19 16:39:45 +00:00
Chris Lattner
d393772580 Eliminate the FP_GET_ST0/FP_SET_ST0 target-specific dag nodes, just lower to
copyfromreg/copytoreg instead.

llvm-svn: 48174
2008-03-10 21:08:41 +00:00
Scott Michel
bb8e8fca47 Give TargetLowering::getSetCCResultType() a parameter so that ISD::SETCC's
return ValueType can depend on its operands' ValueType.

This is a cosmetic change, no functionality impacted.

llvm-svn: 48145
2008-03-10 15:42:14 +00:00
Chris Lattner
2e7537b60b rename FP_SETRESULT -> FP_SET_ST0
llvm-svn: 48094
2008-03-09 07:08:44 +00:00
Chris Lattner
826402e365 rename FpGETRESULT32 -> FpGET_ST0_32 etc. Add support for
isel'ing value preserving FP roundings from one fp stack reg to another
into a noop, instead of stack traffic.

llvm-svn: 48093
2008-03-09 07:05:32 +00:00
Evan Cheng
e0b3c221ab Add a target lowering hook to control whether it's worthwhile to shrink an fp constant.
For x86, if sse2 is available, it's not a good idea, since cvtss2sd is slower than a movsd load and it prevents load folding. On x87, it's important to shrink fp constants, since fldt is very expensive.

llvm-svn: 47931
2008-03-05 01:30:59 +00:00
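A stand-in sketch of the trade-off the hook encodes (hook and type names are simplified assumptions):

    enum class FPVT { f32, f64, f80 };

    struct TargetLoweringBase {
      bool HasSSE2 = true;
      virtual ~TargetLoweringBase() = default;
      // "Shrinking" stores an f64 constant as f32 plus an extending load.
      virtual bool shouldShrinkFPConstant(FPVT VT) const {
        // With SSE2, cvtss2sd is slower than a movsd load and blocks
        // load folding; on x87, avoiding fldt is worth the conversion.
        if (HasSSE2 && (VT == FPVT::f32 || VT == FPVT::f64))
          return false;
        return true;
      }
    };
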
Andrew Lenharth
95c88272c6 64bit CAS on 32bit x86.
llvm-svn: 47929
2008-03-05 01:15:49 +00:00
Andrew Lenharth
b91c664226 all but CAS working on x86
llvm-svn: 47798
2008-03-01 21:52:34 +00:00
Arnold Schwaighofer
642dc28734 Refactor according to Evan's and Anton's suggestions.
llvm-svn: 47635
2008-02-26 22:21:54 +00:00
Arnold Schwaighofer
6383666085 Change the lowering of arguments for tail call optimized
calls. Before arguments that could overwrite each other were
explicitly lowered to a stack slot, not giving the register allocator
a chance to optimize. Now a sequence of copyto/copyfrom virtual
registers ensures that arguments are loaded in (virtual) registers
before they are lowered to the stack slot (and might overwrite each
other). Also parameter stack slots are marked mutable for
(potentially) tail calling functions.

llvm-svn: 47593
2008-02-26 09:19:59 +00:00
Evan Cheng
bb577266bf - When DAG combiner is folding a bit convert into a BUILD_VECTOR, it should check if it's essentially a SCALAR_TO_VECTOR. Avoid turning (v8i16) <10, u, u, u> to <10, 0, u, u, u, u, u, u>. Instead, simply convert it to a SCALAR_TO_VECTOR of the proper type.
- X86 now normalize SCALAR_TO_VECTOR to (BIT_CONVERT (v4i32 SCALAR_TO_VECTOR)). Get rid of X86ISD::S2VEC.

llvm-svn: 47290
2008-02-18 23:04:32 +00:00
Dan Gohman
99b38405e3 Simplify some logic in ComputeMaskedBits. And change ComputeMaskedBits
to pass the mask APInt by value, not by reference. 

llvm-svn: 47096
2008-02-13 22:28:48 +00:00
Dan Gohman
09023887f8 Convert SelectionDAG::ComputeMaskedBits to use APInt instead of uint64_t.
Add an overload that supports the uint64_t interface for use by clients
that haven't been updated yet.

llvm-svn: 47039
2008-02-13 00:35:47 +00:00
Nate Begeman
5a4e290b70 Enable SSE4 codegen and pattern matching.
Add some notes to the README.

llvm-svn: 46949
2008-02-11 04:19:36 +00:00
Dan Gohman
cabaec582f Rename MRegisterInfo to TargetRegisterInfo.
llvm-svn: 46930
2008-02-10 18:45:23 +00:00
Dan Gohman
3993809a0c Rename ISD::FLT_ROUNDS to ISD::FLT_ROUNDS_ to avoid conflicting
with the real FLT_ROUNDS (defined in <float.h>).

llvm-svn: 46587
2008-01-31 00:41:03 +00:00
Evan Cheng
918b9c9335 Even though InsertAtEndOfBasicBlock is an ugly hack, it still deserves a proper name. Rename it to EmitInstrWithCustomInserter, since it does not necessarily insert
the instruction at the end.

llvm-svn: 46562
2008-01-30 18:18:23 +00:00
Evan Cheng
618761903d Work in progress. This patch *fixes* x86-64 calls which are modelled as StructRet but really should be returned in registers, e.g. _Complex long double and some 128-bit aggregates. This is a short-term solution that is necessary only because llvm, for now, cannot model i128 nor calls with multiple results.
Status: This only works for direct calls, and only the caller side is done. Disabled for now.

llvm-svn: 46527
2008-01-29 19:34:22 +00:00
Dale Johannesen
f12104ce4b Handle 'X' constraint in asm's better.
llvm-svn: 46485
2008-01-29 02:21:21 +00:00
Evan Cheng
91089e6d66 Let each target decide byval alignment. For X86, it's 4 bytes unless the aggregate contains SSE vector(s). For x86-64, it's the max of 8 or the alignment of the type.
llvm-svn: 46286
2008-01-23 23:17:41 +00:00
Chris Lattner
eb7d65d073 make a method public
llvm-svn: 46159
2008-01-18 06:52:41 +00:00
Chris Lattner
2825f8fff0 make it more clear that this predicate only applies to scalar FP types.
llvm-svn: 46058
2008-01-16 06:24:21 +00:00
Chris Lattner
fbb75278b2 introduce a isTypeInSSEReg predicate, which allows us to simplify
some code.  No functionality change.

llvm-svn: 46055
2008-01-16 06:19:45 +00:00
Chris Lattner
0072ce1ff4 no need to expand ISD::TRAP to X86ISD::TRAP, just match ISD::TRAP.
llvm-svn: 46015
2008-01-15 21:58:22 +00:00
Anton Korobeynikov
08ea121968 For PR1839: add initial support for __builtin_trap. The llvm-gcc part is missing,
as is the PPC codegen.

llvm-svn: 46001
2008-01-15 07:02:33 +00:00
Gordon Henriksen
f4e137838b Refactoring the x86 and x86-64 calling convention implementations,
unifying the copied algorithms and saving over 500 LOC. There should
be no functionality change, but please test on your favorite x86
target.

llvm-svn: 45627
2008-01-05 16:56:59 +00:00
Chris Lattner
ad9a6ccb83 Remove attribution from file headers, per discussion on llvmdev.
llvm-svn: 45418
2007-12-29 20:36:04 +00:00
Evan Cheng
51cf86ded0 Implement ctlz and cttz with bsr and bsf.
llvm-svn: 45024
2007-12-14 02:13:44 +00:00
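Source-level forms that map onto the new lowering; compilable GCC/Clang C++ (both builtins are undefined for a zero input, matching bsf/bsr's undefined output in that case):

    #include <cstdint>

    unsigned trailing_zeros(uint32_t x) { return __builtin_ctz(x); } // bsf
    unsigned leading_zeros(uint32_t x)  { return __builtin_clz(x); } // 31 - bsr
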
Chris Lattner
28262fbaf2 Several changes:
1) Change the interface to TargetLowering::ExpandOperationResult to 
   take and return entire NODES that need a result expanded, not just
   the value.  This allows us to handle things like READCYCLECOUNTER,
   which returns two values.
2) Implement (extremely limited) support in LegalizeDAG::ExpandOp for MERGE_VALUES.
3) Reimplement custom lowering in LegalizeDAGTypes in terms of the new
   ExpandOperationResult.  This makes the result simpler and fully 
   general.
4) Implement (fully general) expand support for MERGE_VALUES in LegalizeDAGTypes.
5) Implement ExpandOperationResult support for ARM f64->i64 bitconvert and ARM
   i64 shifts, allowing them to work with LegalizeDAGTypes.
6) Implement ExpandOperationResult support for X86 READCYCLECOUNTER and FP_TO_SINT,
   allowing them to work with LegalizeDAGTypes.

LegalizeDAGTypes now passes several more X86 codegen tests when enabled and when
type legalization in LegalizeDAG is ifdef'd out.

llvm-svn: 44300
2007-11-24 07:07:01 +00:00
Anton Korobeynikov
cd9b16df61 Implement codegen for flt_rounds on x86
llvm-svn: 44183
2007-11-16 01:31:51 +00:00
Evan Cheng
7d8deec92f Much improved pic jumptable codegen:
Then:
        call    "L1$pb"
"L1$pb":
        popl    %eax
		...
LBB1_1: # entry
        imull   $4, %ecx, %ecx
        leal    LJTI1_0-"L1$pb"(%eax), %edx
        addl    LJTI1_0-"L1$pb"(%ecx,%eax), %edx
        jmpl    *%edx

        .align  2
        .set L1_0_set_3,LBB1_3-LJTI1_0
        .set L1_0_set_2,LBB1_2-LJTI1_0
        .set L1_0_set_5,LBB1_5-LJTI1_0
        .set L1_0_set_4,LBB1_4-LJTI1_0
LJTI1_0:
        .long    L1_0_set_3
        .long    L1_0_set_2

Now:
        call    "L1$pb"
"L1$pb":
        popl    %eax
		...
LBB1_1: # entry
        addl    LJTI1_0-"L1$pb"(%eax,%ecx,4), %eax
        jmpl    *%eax

		.align  2
		.set L1_0_set_3,LBB1_3-"L1$pb"
		.set L1_0_set_2,LBB1_2-"L1$pb"
		.set L1_0_set_5,LBB1_5-"L1$pb"
		.set L1_0_set_4,LBB1_4-"L1$pb"
LJTI1_0:
        .long    L1_0_set_3
        .long    L1_0_set_2

llvm-svn: 43924
2007-11-09 01:32:10 +00:00
Rafael Espindola
ec025c3042 Move the LowerMEMCPY and LowerMEMCPYCall to a common place.
Thanks for the suggestions Bill :-)

llvm-svn: 43742
2007-11-05 23:12:20 +00:00
Evan Cheng
5fe81cf64e Enable more of the fold (sext (load x)) -> (sext (truncate (sextload x)))
transformation. Previously, it was restricted by requiring that the load have
only one use. Now the restriction is loosened by allowing setcc uses to be
"extended" (e.g. setcc x, c, eq -> setcc sext(x), sext(c), eq).

llvm-svn: 43465
2007-10-29 19:58:20 +00:00
Evan Cheng
53696b7e9f Loosen up iv reuse to allow reuse of the same stride but a larger type when truncating from the larger type to smaller type is free.
e.g.
Turns this loop:
LBB1_1: # entry.bb_crit_edge
        xorl    %ecx, %ecx
        xorw    %dx, %dx
        movw    %dx, %si
LBB1_2: # bb
        movl    L_X$non_lazy_ptr, %edi
        movw    %si, (%edi)
        movl    L_Y$non_lazy_ptr, %edi
        movw    %dx, (%edi)
		addw    $4, %dx
		incw    %si
		incl    %ecx
		cmpl    %eax, %ecx
		jne     LBB1_2  # bb
	
into

LBB1_1: # entry.bb_crit_edge
        xorl    %ecx, %ecx
        xorw    %dx, %dx
LBB1_2: # bb
        movl    L_X$non_lazy_ptr, %esi
        movw    %cx, (%esi)
        movl    L_Y$non_lazy_ptr, %esi
        movw    %dx, (%esi)
        addw    $4, %dx
		incl    %ecx
        cmpl    %eax, %ecx
        jne     LBB1_2  # bb

llvm-svn: 43375
2007-10-26 01:56:11 +00:00
Arnold Schwaighofer
d47210011e Added tail call optimization to the x86 back end. It can be
enabled by passing -tailcallopt to llc.  The optimization is
performed if the following conditions are satisfied:
* caller/callee are fastcc
* elf/pic is disabled OR
  elf/pic enabled + callee is in module + callee has
  visibility protected or hidden

llvm-svn: 42870
2007-10-11 19:40:01 +00:00
Dan Gohman
6c3e0cdd36 LowerIntegerDivOrRem no longer exists.
llvm-svn: 42787
2007-10-09 15:45:13 +00:00
Dan Gohman
6df332f0cb Migrate X86 and ARM from using X86ISD::{,I}DIV and ARMISD::MULHILO{U,S} to
use ISD::{S,U}DIVREM and ISD::{S,U}MUL_LOHI. Move the lowering code
associated with these operators into target-independent in LegalizeDAG.cpp
and TargetLowering.cpp.

llvm-svn: 42762
2007-10-08 18:33:35 +00:00
Evan Cheng
f3c130a8b6 Enabling new condition code modeling scheme.
llvm-svn: 42459
2007-09-29 00:00:36 +00:00
Rafael Espindola
01b306e575 Refactor the memcpy lowering for the x86 target.
The only generated code difference is that now we call memcpy when
the size of the array is unknown. This matches GCC behavior and is
better since the run time value can be arbitrarily large.

llvm-svn: 42433
2007-09-28 12:53:01 +00:00
Dan Gohman
a01dd49472 Fix a typo in a comment.
llvm-svn: 42313
2007-09-25 19:37:26 +00:00
Dan Gohman
1bb346f9f1 When both x/y and x%y are needed (x and y both scalar integer), compute
both results with a single div or idiv instruction. This uses new X86ISD
nodes for DIV and IDIV which are introduced during the legalize phase
so that the SelectionDAG's CSE can automatically eliminate redundant
computations.

llvm-svn: 42308
2007-09-25 18:23:27 +00:00
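The pattern this targets: because the X86ISD nodes are introduced during legalize, SelectionDAG CSE merges the two operations below into a single idiv (quotient in eax, remainder in edx):

    struct QuotRem { int Quot, Rem; };

    QuotRem divmod(int X, int Y) {
      return { X / Y, X % Y }; // one idiv after CSE, not two
    }
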
Evan Cheng
36b3babfde Added support for new condition code modeling scheme (i.e. physical register dependency). These are a bunch of instructions that are duplicated so the x86 backend can support both the old and new schemes at the same time. They will be deleted after
all the kinks are worked out.

llvm-svn: 42285
2007-09-25 01:57:46 +00:00
Dale Johannesen
ea6ffa0b36 Fix PR 1681. When X86 target uses +sse -sse2,
keep f32 in SSE registers and f64 in x87.  This
is effectively a new codegen mode.
Change addLegalFPImmediate to permit float and
double variants to do different things.
Adjust callers.

llvm-svn: 42246
2007-09-23 14:52:20 +00:00
Evan Cheng
d7742e7939 X86ISD::TEST is dead.
llvm-svn: 42037
2007-09-17 17:42:53 +00:00
Rafael Espindola
5d8b225881 Add support for functions with byval arguments on x86
llvm-svn: 41953
2007-09-14 15:48:13 +00:00
Rafael Espindola
4ddaad4de0 Initial support for calling functions with byval arguments on x86-64
llvm-svn: 41643
2007-08-31 15:06:30 +00:00
Chris Lattner
1e089aac3a rename isOperandValidForConstraint to LowerAsmOperandForConstraint,
changing the interface to allow for future changes.

llvm-svn: 41384
2007-08-25 00:47:38 +00:00
Anton Korobeynikov
3094846993 Move the ReturnAddrIndex variable into the X86MachineFunctionInfo structure. This fixes
hard-to-catch bugs with retaddr lowering.

llvm-svn: 41104
2007-08-15 17:12:32 +00:00
Dan Gohman
352de5aeb9 More explicit keywords.
llvm-svn: 40757
2007-08-02 21:21:54 +00:00
Duncan Sands
35a77d857b Trampoline codegen support for X86-32.
llvm-svn: 40566
2007-07-27 20:02:49 +00:00
Dan Gohman
0252aa07ee Re-apply 40504, but with a fix for the segfault it caused in oggenc:
Make the alignedload and alignedstore patterns always require 16-byte
alignment. This way when they are used in the "Fs" instructions, in which
a vector instruction is used for a scalar purpose, they can still require
the full vector alignment. And add a regression test for this.

llvm-svn: 40555
2007-07-27 17:16:43 +00:00
Evan Cheng
cb8f08ebca Reverting 40504 for now. It's breaking oggenc.
llvm-svn: 40547
2007-07-27 01:37:47 +00:00
Dan Gohman
513dcba4f8 Remove X86ISD::LOAD_PACK and X86ISD::LOAD_UA and associated code from the
x86 target, replacing them with the new alignment attributes on memory
references.

llvm-svn: 40504
2007-07-26 00:31:09 +00:00
Anton Korobeynikov
5635277c36 Long live the exception handling!
This patch fills the last necessary bits to enable exceptions
handling in LLVM. Currently only on x86-32/linux.

In fact, this patch adds necessary intrinsics (and their lowering) which
represent really weird target-specific gcc builtins used inside unwinder.

After corresponding llvm-gcc patch will land (easy) exceptions should be
more or less workable. However, exception handling support should not be
thought of as 'finished': I expect many small and not-so-small glitches
everywhere.

llvm-svn: 39855
2007-07-14 14:06:15 +00:00
Dan Gohman
928144b051 Define non-intrinsic instructions for vector min, max, sqrt, rsqrt, and rcp,
in addition to the intrinsic forms. Add spill-folding entries for these new
instructions, and for the scalar min and max intrinsic instructions which
were missing. And add some preliminary ISelLowering code for using the new
non-intrinsic vector sqrt instruction, and fneg and fabs.

llvm-svn: 38478
2007-07-10 00:05:58 +00:00
Dan Gohman
a62327ea40 Move ComputeMaskedBits, MaskedValueIsZero, and ComputeNumSignBits from
TargetLowering to SelectionDAG so that they have more convenient
access to the current DAG, in preparation for the ValueType routines
being changed from standalone functions to members of SelectionDAG for
the pre-legalize vector type changes.

llvm-svn: 37704
2007-06-22 14:59:07 +00:00
Bill Wendling
6856e741fa Support for the special case of a vector with the canonical form:
vector_shuffle v1, v2, <2, 6, 3, 7>

I.e.

         vector_shuffle v, undef, <2, 2, 3, 3>

MMX only has a shuffle for v4i16 vectors. It needs to use the unpackh for
this type of operation.

llvm-svn: 36403
2007-04-24 21:16:55 +00:00
Lauro Ramos Venancio
bc32d90b46 Implement "general dynamic", "initial exec" and "local exec" TLS models for
X86 32 bits.

llvm-svn: 36283
2007-04-20 21:38:10 +00:00
Anton Korobeynikov
9bc4b792bf Implemented correct stack probing on mingw/cygwin for dynamic allocas.
Also, fixed the static case in the presence of EAX being live-in. This fixes PR331.

PS: Why don't we still have push/pop instructions? :)
llvm-svn: 36195
2007-04-17 09:20:00 +00:00
Chris Lattner
3f9ff05309 remove some dead hooks
llvm-svn: 35845
2007-04-09 23:31:19 +00:00
Chris Lattner
ae6e2c0ee5 remove some dead target hooks, subsumed by isLegalAddressingMode
llvm-svn: 35840
2007-04-09 22:27:04 +00:00
Chris Lattner
c0405a348d implement the new addressing mode description hook.
llvm-svn: 35521
2007-03-30 23:15:24 +00:00
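A stand-in sketch of the addressing-mode description being implemented (field and method names are simplified assumptions): an address of the form BaseGV + BaseOffs + BaseReg + Scale*IndexReg, which the target accepts or rejects as a whole:

    #include <cstdint>

    struct AddrModeModel {
      const char *BaseGV = nullptr; // global symbol, if any
      int64_t BaseOffs = 0;         // constant displacement
      bool HasBaseReg = false;
      int64_t Scale = 0;            // 0 means no scaled index register
    };

    struct X86LoweringModel {
      // x86 addressing allows a 32-bit displacement and scales 1/2/4/8.
      bool isLegalAddressingMode(const AddrModeModel &AM) const {
        if (AM.BaseOffs < INT32_MIN || AM.BaseOffs > INT32_MAX)
          return false;
        switch (AM.Scale) {
        case 0: case 1: case 2: case 4: case 8:
          return true;
        default:
          return false;
        }
      }
    };
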
Chris Lattner
b19069959d switch TargetLowering::getConstraintType to take the entire constraint,
not just the first letter.  No functionality change.

llvm-svn: 35322
2007-03-25 02:14:49 +00:00
Dale Johannesen
44c0a5d545 repair x86 performance and dejagnu problems from the previous change
llvm-svn: 35245
2007-03-21 21:51:52 +00:00
Evan Cheng
0eeb8b59eb More flexible TargetLowering LSR hooks for testing whether an immediate is
a legal target address immediate or scale.

llvm-svn: 35073
2007-03-12 23:28:50 +00:00
Evan Cheng
2fb461c1b5 X86-64 VACOPY needs custom expansion. va_list is a struct { i32, i32, i8*, i8* }.
llvm-svn: 34857
2007-03-02 23:16:35 +00:00
Chris Lattner
023751c20b remove fastcc (not fastcall) support
llvm-svn: 34730
2007-02-28 06:05:16 +00:00
Chris Lattner
decf97fae2 add an accessor.
llvm-svn: 34625
2007-02-26 04:01:25 +00:00
Chris Lattner
423224a7b4 pass the calling convention into Lower*CallTo, instead of using ad-hoc flags.
llvm-svn: 34587
2007-02-25 09:06:15 +00:00
Chris Lattner
8fa75c3ae8 factor a bunch of code out of LowerCCCCallTo into a new LowerCallResult
function.  This function now uses GetRetValueLocs to determine *where*
the result values are located and concerns itself with *how* to pull the
values out.

llvm-svn: 34586
2007-02-25 08:59:22 +00:00
Chris Lattner
f119813ff4 simplify result value lowering by splitting the selection of *where* to return
registers out from the logic of *how* to return them.

This changes X86-64 to mark EAX live out when returning a 32-bit value,
where before it marked RAX liveout.

llvm-svn: 34582
2007-02-25 08:15:11 +00:00
Nate Begeman
dc46021355 Finish off bug 680, allowing targets to custom lower frame and return
address nodes.

llvm-svn: 33636
2007-01-29 22:58:52 +00:00
Anton Korobeynikov
611d5e2eda Propagate changes from my local tree. This patch includes:
1. New parameter attribute called 'inreg'. It has meaning "place this
parameter in registers, if possible". This is some generalization of
gcc's regparm(n) attribute. It's currently used only in X86-32 backend.
2. Completely rewritten CC handling/lowering code inside X86 backend.
Merged stdcall + c CCs and fastcall + fast CC.
3. Dropped CSRET CC. We cannot add struct return variant for each
target-specific CC (e.g. stdcall + csretcc and so on).
4. Instead of the CSRET CC, introduced the 'sret' parameter attribute. Setting it
on the first parameter means 'this is a hidden pointer to the structure
return. Handle it gently'.
5. Fixed small bug in llvm-extract + add new feature to
FunctionExtraction pass, which relinks all internal-linkaged callees
from deleted function to external linkage. This will allow further
linking everything together.

NOTEs: 1. Documentation will be updated soon.
       2. llvm-upgrade should be improved to translate csret => sret.
          Before this, there will be some unexpected test fails.
llvm-svn: 33597
2007-01-28 13:31:35 +00:00
Evan Cheng
df277336b8 - Fixed an FCOPYSIGN custom-lowering bug: clear the sign bit of operand 0
before or'ing in the sign bit of operand 1.
- Tweak: rather than left-shifting the sign bit, fp_extend operand 1 first
  before taking its sign bit when its type is smaller than that of operand 0.

llvm-svn: 32932
2007-01-05 21:37:56 +00:00
Evan Cheng
bcf3d2bd15 With SSE2, expand FCOPYSIGN to a series of SSE bitwise operations.
llvm-svn: 32900
2007-01-05 07:55:56 +00:00
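A minimal intrinsics sketch of the expansion (hypothetical helper, assuming IEEE-754 single precision; the real lowering emits the equivalent andps/orps sequence): clear the sign bit of operand 0, isolate the sign bit of operand 1, then OR the two.

#include <emmintrin.h>

float copysign_sse(float x, float y) {
  const __m128 abs_mask  = _mm_castsi128_ps(_mm_set1_epi32(0x7fffffff));
  const __m128 sign_mask = _mm_castsi128_ps(_mm_set1_epi32((int)0x80000000u));
  __m128 r = _mm_or_ps(_mm_and_ps(_mm_set_ss(x), abs_mask),   /* |x|     */
                       _mm_and_ps(_mm_set_ss(y), sign_mask)); /* sign(y) */
  return _mm_cvtss_f32(r);
}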
Evan Cheng
456101ebb9 - Use a different wrapper node for RIP-relative GVs, etc.
- Proper support for both the small static and PIC code models under X86-64.
- Some (non-optimal) support for the medium code model.

llvm-svn: 32046
2006-11-30 21:55:46 +00:00
Evan Cheng
ae1f3758bd Don't dag combine floating point select to max and min intrinsics. Those
take v4f32 / v2f64 operands and may end up causing larger spills / restores.
Added X86 specific nodes X86ISD::FMAX, X86ISD::FMIN instead.

This fixes PR996.

llvm-svn: 31645
2006-11-10 21:43:37 +00:00
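The scalar pattern the new nodes cover, sketched in C (hypothetical functions; whether maxss/minss is selected depends on operand order and NaN assumptions):

float fmax_scalar(float a, float b) { return a > b ? a : b; }  /* -> maxss */
float fmin_scalar(float a, float b) { return a < b ? a : b; }  /* -> minss */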
Evan Cheng
7ca1f47a96 Fixed a bug which caused x86 to incorrectly match
shuffle v, undef, <2, ?, 3, ?>
to movhlps
It should match to unpckhps instead.

Added proper matching code for
shuffle v, undef, <2, 3, 2, 3>

llvm-svn: 31519
2006-11-07 22:14:24 +00:00
Chris Lattner
def30d3eda allow the address of a global to be used with the "i" constraint when in
-static mode.  This implements PR882.

llvm-svn: 31326
2006-10-31 20:13:11 +00:00
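A hedged x86-32 illustration of what PR882 asked for (hypothetical function): with -static, the absolute address of a global is a link-time constant and can satisfy the immediate constraint.

int g;

int *addr_of_g(void) {
  int *p;
  __asm__ ("movl %1, %0" : "=r"(p) : "i"(&g)); /* emits movl $g, <reg> */
  return p;
}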
Evan Cheng
090e9abaee Fixed a significant bug where unpcklpd was incorrectly used to extract element 1 from a v2f64 value.
llvm-svn: 31228
2006-10-27 21:08:32 +00:00
Chris Lattner
62a0f00312 Implement branch analysis/xform hooks required by the branch folding pass.
llvm-svn: 31065
2006-10-20 17:42:20 +00:00
Chris Lattner
a86fff7583 fit in 80 cols
llvm-svn: 31039
2006-10-18 18:26:48 +00:00
Chris Lattner
04ad43b4de update comments
llvm-svn: 30663
2006-09-28 23:33:12 +00:00
Anton Korobeynikov
59ef7e94eb Adding code generation for the StdCall & FastCall calling conventions
llvm-svn: 30549
2006-09-20 22:03:51 +00:00
Evan Cheng
cfd7b147cf X86ISD::CMP now produces a chain as well as a flag. Make that chain the
chain operand of a conditional branch, allowing loads to fold into CMP / TEST
instructions.

llvm-svn: 30241
2006-09-11 02:19:56 +00:00
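The payoff, sketched in C (hypothetical function): with the compare chained to the load, the load can fold directly into the compare instruction.

int eq_mem(int *p, int x) {
  return *p == x;  /* load of *p folds into the cmp, e.g. cmpl <reg>, (<mem>) */
}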
Evan Cheng
15dd42884e Committing X86-64 support.
llvm-svn: 30177
2006-09-08 06:48:29 +00:00
Chris Lattner
26ff12f7f5 Fix PR850 and CodeGen/X86/2006-07-31-SingleRegClass.ll.
The CFE refers to all single-register constraints (like "A") by their 16-bit
name, even though the 8 or 32-bit version of the register may be needed.
The X86 backend should realize what is going on and redecode the name back
to its proper form.

llvm-svn: 29420
2006-07-31 23:26:50 +00:00
Chris Lattner
b75fe307e1 Implement the inline asm 'A' constraint. This implements PR825 and
CodeGen/X86/2006-07-10-InlineAsmAConstraint.ll

llvm-svn: 29101
2006-07-11 02:54:03 +00:00
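The classic x86-32 use of the 'A' constraint, for reference (real GCC inline-asm syntax; 'A' names the EDX:EAX register pair):

unsigned long long read_tsc(void) {
  unsigned long long t;
  __asm__ volatile ("rdtsc" : "=A"(t));  /* result delivered in EDX:EAX */
  return t;
}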
Evan Cheng
1d48a494a2 X86 target-specific DAG combine: turn build_vector (load x), (load x+4),
(load x+8), (load x+12), <0, 1, 2, 3> into a single 128-bit load (aligned or
unaligned).

e.g.

__m128 test(float a, float b, float c, float d) {
  return _mm_set_ps(d, c, b, a);
}

_test:
        movups 4(%esp), %xmm0
        ret

llvm-svn: 29042
2006-07-07 08:33:52 +00:00
Evan Cheng
db5c7909f5 Simplify X86CompilationCallback: always align to 16-byte boundary; don't save EAX/EDX if unnecessary.
llvm-svn: 28910
2006-06-24 08:36:10 +00:00
Evan Cheng
bb17ad5ffa Switch X86 over to a call-selection model where the lowering code creates
the copyto/fromregs instead of making the X86ISD::CALL selection code create
them.

llvm-svn: 28463
2006-05-25 00:59:30 +00:00
Chris Lattner
f604017e47 Patches to make the LLVM sources more -pedantic clean. Patch provided
by Anton Korobeynikov!  This is a step towards closing PR786.

llvm-svn: 28447
2006-05-24 17:04:05 +00:00
Evan Cheng
65c3f3f26b Remove PreprocessCCCArguments and PreprocessFastCCArguments now that
FORMAL_ARGUMENTS nodes include a token operand.

llvm-svn: 28439
2006-05-23 21:06:34 +00:00
Chris Lattner
9aec97df10 Implement an annoying part of the Darwin/X86 ABI: for a function with a struct-
return argument, the callee pops the hidden struct pointer, not the caller.

For example, in this testcase:

struct X { int D, E, F, G; };
struct X bar() {
  struct X a;
  a.D = 0;
  a.E = 1;
  a.F = 2;
  a.G = 3;
  return a;
}
void foo(struct X *P) {
  *P = bar();
}

We used to emit:

_foo:
        subl $28, %esp
        movl 32(%esp), %eax
        movl %eax, (%esp)
        call _bar
        addl $28, %esp
        ret
_bar:
        movl 4(%esp), %eax
        movl $0, (%eax)
        movl $1, 4(%eax)
        movl $2, 8(%eax)
        movl $3, 12(%eax)
        ret

This is correct on Linux/X86 but not Darwin/X86.  With this patch, we now
emit:

_foo:
        subl $28, %esp
        movl 32(%esp), %eax
        movl %eax, (%esp)
        call _bar
***     addl $24, %esp
        ret
_bar:
        movl 4(%esp), %eax
        movl $0, (%eax)
        movl $1, 4(%eax)
        movl $2, 8(%eax)
        movl $3, 12(%eax)
***     ret $4

For the record, GCC emits (which is functionally equivalent to our new code):

_bar:
        movl    4(%esp), %eax
        movl    $3, 12(%eax)
        movl    $2, 8(%eax)
        movl    $1, 4(%eax)
        movl    $0, (%eax)
        ret     $4
_foo:
        pushl   %esi
        subl    $40, %esp
        movl    48(%esp), %esi
        leal    16(%esp), %eax
        movl    %eax, (%esp)
        call    _bar
        subl    $4, %esp
        movl    16(%esp), %eax
        movl    %eax, (%esi)
        movl    20(%esp), %eax
        movl    %eax, 4(%esi)
        movl    24(%esp), %eax
        movl    %eax, 8(%esi)
        movl    28(%esp), %eax
        movl    %eax, 12(%esi)
        addl    $40, %esp
        popl    %esi
        ret

This fixes SingleSource/Benchmarks/CoyoteBench/fftbench with LLC and the
JIT, and fixes the X86-backend portion of PR729.  The CBE still needs to
be updated.

llvm-svn: 28438
2006-05-23 18:50:38 +00:00
Evan Cheng
d282cb8542 Should pass by reference.
llvm-svn: 28357
2006-05-17 19:07:40 +00:00
Evan Cheng
a1f9f34f35 - Clean up the formal argument lowering code. Prepare for vector pass-by-value work.
- Fixed vararg support.

llvm-svn: 27985
2006-04-27 01:32:22 +00:00
Evan Cheng
58d4133b60 Switching over FORMAL_ARGUMENTS mechanism to lower call arguments.
llvm-svn: 27975
2006-04-26 01:20:17 +00:00
Evan Cheng
09112df9d3 Separate LowerOperation() into multiple functions, one per opcode.
llvm-svn: 27972
2006-04-25 20:13:52 +00:00
Evan Cheng
e0289de5ab Now generating perfect (I think) code for "vector set" with a single non-zero
scalar value.

e.g.
        _mm_set_epi32(0, a, 0, 0);
==>
	movd 4(%esp), %xmm0
	pshufd $69, %xmm0, %xmm0

        _mm_set_epi8(0, 0, 0, 0, 0, a, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0);
==>
	movzbw 4(%esp), %ax
	movzwl %ax, %eax
	pxor %xmm0, %xmm0
	pinsrw $5, %eax, %xmm0

llvm-svn: 27923
2006-04-21 01:05:10 +00:00
Evan Cheng
41f2933444 - Added support to turn "vector clear elements", e.g. pand V, <-1, -1, 0, -1>,
into a vector shuffle.
- VECTOR_SHUFFLE lowering change in preparation for more efficient codegen
of vector shuffle with zero (or any splat) vector.

llvm-svn: 27875
2006-04-20 08:58:49 +00:00
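A sketch of the source pattern with intrinsics (hypothetical function; note _mm_set_epi32 lists elements high-to-low): an AND whose mask lanes are all-ones or all-zeros simply clears elements, which is equivalent to shuffling with a zero vector.

#include <emmintrin.h>

__m128i clear_element_2(__m128i v) {
  /* pand v, <-1, -1, 0, -1>: keep elements 0, 1, 3; zero element 2 */
  return _mm_and_si128(v, _mm_set_epi32(-1, 0, -1, -1));
}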
Evan Cheng
265831aa45 Commute vector_shuffle to match more movlhps, movlp{s|d} cases.
llvm-svn: 27840
2006-04-19 20:35:22 +00:00
Evan Cheng
32c4470374 Last few SSE3 intrinsics.
llvm-svn: 27711
2006-04-14 21:59:03 +00:00
Evan Cheng
da283be867 Added support for _mm_move_ss and _mm_move_sd.
llvm-svn: 27575
2006-04-11 00:19:04 +00:00
Evan Cheng
9f27046dc9 - movlp{s|d} and movhp{s|d} support.
- Normalize shuffle nodes so that the result vector's lower-half elements
  come from the first vector and the rest come from the second. (Except for
  the exceptions :-).
- Other minor fixes.

llvm-svn: 27474
2006-04-06 23:23:56 +00:00
Evan Cheng
6d470008c8 Support for comi / ucomi intrinsics.
llvm-svn: 27444
2006-04-05 23:38:46 +00:00
Evan Cheng
056e0af55a Handle canonical form of e.g.
vector_shuffle v1, v1, <0, 4, 1, 5, 2, 6, 3, 7>

This is turned into
vector_shuffle v1, <undef>, <0, 0, 1, 1, 2, 2, 3, 3>
by the DAG combiner.

It would match a {p}unpckl on x86.

llvm-svn: 27437
2006-04-05 07:20:06 +00:00
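The same canonical form, written with intrinsics (hypothetical wrapper name): interleaving a vector with itself should select a single punpcklwd.

#include <emmintrin.h>

__m128i dup_low_words(__m128i v) {
  return _mm_unpacklo_epi16(v, v);  /* yields <0, 0, 1, 1, 2, 2, 3, 3> of v */
}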
Evan Cheng
4623ebd3d0 Use an X86 target-specific node, X86ISD::PINSRW, instead of a malformed
INSERT_VECTOR_ELT to insert a 16-bit value into a 128-bit vector.

llvm-svn: 27314
2006-03-31 21:55:24 +00:00
Evan Cheng
7b9a0c6d7a Add support to use pextrw and pinsrw to extract and insert a word element
from a 128-bit vector.

llvm-svn: 27304
2006-03-31 19:22:53 +00:00
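The corresponding intrinsics, for reference (hypothetical wrapper names):

#include <emmintrin.h>

int get_word_3(__m128i v)            { return _mm_extract_epi16(v, 3); }   /* pextrw $3 */
__m128i set_word_3(__m128i v, int w) { return _mm_insert_epi16(v, w, 3); } /* pinsrw $3 */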
Evan Cheng
7bc3bc8246 - Added some SSE2 128-bit packed integer ops.
- Added SSE2 128-bit integer pack with signed saturation ops.
- Added pshufhw and pshuflw ops.

llvm-svn: 27252
2006-03-29 23:07:14 +00:00
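Two of the new ops sketched with intrinsics (hypothetical wrapper names):

#include <emmintrin.h>

/* packssdw: pack two v4i32 into one v8i16 with signed saturation */
__m128i pack_sat(__m128i a, __m128i b) { return _mm_packs_epi32(a, b); }

/* pshuflw: permute only the low four words; the high four pass through */
__m128i swap_low_word_pairs(__m128i a) {
  return _mm_shufflelo_epi16(a, _MM_SHUFFLE(2, 3, 0, 1));
}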
Evan Cheng
fb4b2bfc7d * Prefer using operations of matching types, e.g. unpcklpd rather than movlhps.
* Bug fixes.

llvm-svn: 27218
2006-03-28 06:50:32 +00:00
Evan Cheng
4d554dae17 - Clean up / consolidate various shuffle masks.
- Some misc. bug fixes.
- Use MOVHPDrm to load from m64 into the upper half of an XMM register.

llvm-svn: 27210
2006-03-28 02:43:26 +00:00
Evan Cheng
d8d7ec47bd Model unpack lower and interleave as vector_shuffle so we can lower the
intrinsics as such.

llvm-svn: 27200
2006-03-28 00:39:58 +00:00
Evan Cheng
1dfbede1d1 Remove X86::isZeroVector; use ISD::isBuildVectorAllZeros instead. Some fixes / cleanups.
llvm-svn: 27150
2006-03-26 09:53:12 +00:00
Evan Cheng
e5807f6b47 Build an arbitrary vector with more than two distinct scalar elements using a
series of unpack and interleave ops.

llvm-svn: 27119
2006-03-25 09:37:23 +00:00
Evan Cheng
bdb85b387f Support for scalar to vector with zero extension.
llvm-svn: 27091
2006-03-24 23:15:12 +00:00
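The instruction that makes this cheap, sketched (hypothetical wrapper name): movd places the scalar in element 0 and zeros the upper lanes, i.e. scalar_to_vector plus zero extension in one instruction.

#include <emmintrin.h>

__m128i scalar_to_vec(int x) { return _mm_cvtsi32_si128(x); }  /* movd */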
Evan Cheng
d58d54cf3e Handle BUILD_VECTOR with all zero elements.
llvm-svn: 27056
2006-03-24 07:29:27 +00:00
Evan Cheng
3028b04057 More efficient v2f64 shuffle using movlhps, movhlps, unpckhpd, and unpcklpd.
llvm-svn: 27040
2006-03-24 02:58:06 +00:00
Evan Cheng
68410804f0 Handle more shuffle cases with SHUFP* instructions.
llvm-svn: 27024
2006-03-24 01:18:28 +00:00
Evan Cheng
54215cd1ea Added a ValueType operand to isShuffleMaskLegal(). For now, x86 will not do
64-bit vector shuffles.

llvm-svn: 26964
2006-03-22 22:07:06 +00:00
Evan Cheng
cff38e19c3 - Implement X86ISelLowering::isShuffleMaskLegal(). We currently only support
splat and PSHUFD cases.
- Clean up shuffle / splat matching code.

llvm-svn: 26954
2006-03-22 18:59:22 +00:00
Evan Cheng
f6dc0a7f5e - A VECTOR_SHUFFLE of v4i32 / v4f32 with an undef second vector always matches
PSHUFD: we can make the permute entries which point to the undef point
anywhere we want.
- Change some names to appease Chris.

llvm-svn: 26951
2006-03-22 08:01:21 +00:00
Evan Cheng
7aac4350c7 Some splat and shuffle support.
llvm-svn: 26940
2006-03-22 02:53:00 +00:00
Evan Cheng
47dd756c72 - Use movaps to store 128-bit vector integers.
- Each scalar-to-vector of v8i16 and v16i8 is an any_extend followed by a movd.

llvm-svn: 26932
2006-03-21 23:01:21 +00:00
Evan Cheng
7ec94f2ff7 Added getTargetLowering() to TargetMachine. Refactored targets to support this.
llvm-svn: 26742
2006-03-13 23:20:37 +00:00
Evan Cheng
cb9fb051a5 - Clean up the lowering and selection code of ConstantPool, GlobalAddress,
and ExternalSymbol.
- Use C++ code (rather than tblgen'd selection code) to match the
  above-mentioned leaf nodes. Do not mutate the nodes and do not record the
  selection in CodeGenMap; these nodes should be safe to duplicate. This is
  a performance win.

llvm-svn: 26335
2006-02-23 20:41:18 +00:00
Evan Cheng
2977507828 PIC-related bug fixes.
1. Various asm printer bugs.
2. A lowering bug: TargetGlobalAddress is now wrapped in X86ISD::TGAWrapper.

llvm-svn: 26324
2006-02-23 02:43:52 +00:00
Chris Lattner
6bb2c3e9cd split register class handling from explicit physreg handling.
llvm-svn: 26308
2006-02-22 00:56:39 +00:00
Chris Lattner
a124432746 Updates to match change of getRegForInlineAsmConstraint prototype
llvm-svn: 26305
2006-02-21 23:11:00 +00:00
Evan Cheng
bf3558a375 x86 / Darwin PIC support.
llvm-svn: 26273
2006-02-18 00:15:05 +00:00
Nate Begeman
9c0ab71f4a kill ADD_PARTS & SUB_PARTS and replace them with fancy new ADDC, ADDE, SUBC
and SUBE nodes that actually expose what's going on and allow for
significant simplifications in the targets.

llvm-svn: 26255
2006-02-17 05:43:56 +00:00
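What the new nodes expose, in one line of C (hypothetical function): a 64-bit add on a 32-bit target is an add of the low halves that produces a carry (ADDC) plus an add of the high halves that consumes it (ADDE).

unsigned long long add64(unsigned long long a, unsigned long long b) {
  return a + b;  /* x86-32: addl (sets carry) then adcl (consumes carry) */
}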
Nate Begeman
0bc71999b9 Rework the SelectionDAG-based implementations of SimplifyDemandedBits
and ComputeMaskedBits to match the new improved versions in instcombine.
Tested against all of multisource/benchmarks on ppc.

llvm-svn: 26238
2006-02-16 21:11:51 +00:00
Evan Cheng
078962656b Separate FILD and FILD_FLAG; the latter is only used for SSE2. It produces a
flag so it can be flagged to a FST.

llvm-svn: 25953
2006-02-04 02:20:30 +00:00
Evan Cheng
45ebd632f2 - Allow XMM load (for scalar use) to be folded into ANDP* and XORP*.
- Use XORP* to implement fneg.

llvm-svn: 25857
2006-01-31 22:28:30 +00:00
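The fneg expansion sketched with intrinsics (hypothetical helper, assuming IEEE-754 single precision): XOR with the sign-bit mask flips only the sign, and the mask load can fold into the xorps.

#include <emmintrin.h>

float fneg_sse(float x) {
  const __m128 sign = _mm_castsi128_ps(_mm_set1_epi32((int)0x80000000u));
  return _mm_cvtss_f32(_mm_xor_ps(_mm_set_ss(x), sign));
}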
Chris Lattner
5587b270e4 * Fix 80-column violations
* Rename hasSSE -> hasSSE1 to avoid my continual confusion with 'has any SSE'.
* Add inline asm constraint specification.

llvm-svn: 25854
2006-01-31 19:43:35 +00:00