mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-11-24 03:33:20 +01:00
Commit Graph

1703 Commits

Author SHA1 Message Date
Evan Cheng
27be5272b4 Add this test back.
llvm-svn: 66838
2009-03-12 23:01:35 +00:00
Duncan Sands
968939fca2 Revert commit 66140 since it caused several failures
in the Ada testcase.  Reverting this only covers up
the real problem, which is a nasty conceptual difficulty
in the phi elimination pass: when eliminating phi nodes
in landing pads, the register copies need to come before
the invoke, not at the end of the basic block which is
too late...  See PR3784.

llvm-svn: 66826
2009-03-12 21:13:42 +00:00
Evan Cheng
e04a84ed9f Typo.
llvm-svn: 66797
2009-03-12 17:07:39 +00:00
Evan Cheng
45aa89fa9a Fix test after Chris' select changes.
llvm-svn: 66795
2009-03-12 16:10:08 +00:00
Chris Lattner
26a971c4ec Move 3 "(add (select cc, 0, c), x) -> (select cc, x, (add x, c))"
related transformations out of target-specific dag combine into the
ARM backend.  These were added by Evan in r37685 with no testcases
and only seem to help ARM (e.g. test/CodeGen/ARM/select_xform.ll).

Add some simple X86-specific (for now) DAG combines that turn things
like cond ? 8 : 0  -> (zext(cond) << 3).  This happens frequently
with the recently added cp constant select optimization, but is a
very general xform.  For example, we now compile the second example
in const-select.ll to:

_test:
        movsd   LCPI2_0, %xmm0
        ucomisd 8(%esp), %xmm0
        seta    %al
        movzbl  %al, %eax
        movl    4(%esp), %ecx
        movsbl  (%ecx,%eax,4), %eax
        ret

instead of:

_test:
        movl    4(%esp), %eax
        leal    4(%eax), %ecx
        movsd   LCPI2_0, %xmm0
        ucomisd 8(%esp), %xmm0
        cmovbe  %eax, %ecx
        movsbl  (%ecx), %eax
        ret

This passes multisource and dejagnu.
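For illustration, a C-level sketch of the pattern this combine targets (the
function here is hypothetical, not the actual const-select.ll test):

/* cond ? 8 : 0 can be lowered branchlessly, since 8 is a power of two:
   ((unsigned)(cond != 0)) << 3 yields 0 or 8 without a select. */
unsigned select_pow2(int cond) {
  return cond ? 8u : 0u;
}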

llvm-svn: 66779
2009-03-12 06:52:53 +00:00
Evan Cheng
46e903d2f6 On x86, if the only use of an i64 load is an i64 store, generate a pair of double load and store instead.
llvm-svn: 66776
2009-03-12 05:59:15 +00:00
Chris Lattner
a5368fd283 add no-unwind, remove duplicate run line.
llvm-svn: 66775
2009-03-12 05:56:37 +00:00
Chris Lattner
cb52d81eea add nounwinds
llvm-svn: 66773
2009-03-12 05:35:33 +00:00
Dan Gohman
d30e108f0e Revert r66024. The JIT encoding for CALLpcrel32 is wrong -- see PR3773, and the
assembly text output uses an indirect call ("call *") instead of a direct call.

llvm-svn: 66735
2009-03-11 23:01:47 +00:00
Rafael Espindola
a8fe373200 optimize i8 and i16 tls values.
llvm-svn: 66725
2009-03-11 22:40:04 +00:00
Evan Cheng
042e05cb31 My last coalescer fix introduced a subtler one. It was aborting a commuting optimization too late, leaving the live intervals out of sync with the instructions. This fixes 8b10b.
llvm-svn: 66715
2009-03-11 22:18:44 +00:00
Mon P Wang
287e422039 For yonah, fix a vector shuffle case for v16i8 where we didn't properly clear some bits.
llvm-svn: 66684
2009-03-11 18:47:57 +00:00
Mon P Wang
2867737ad2 Fixed a v8i16 shuffle case that should generate a pshufb instead of a pshuflw/hw.
llvm-svn: 66645
2009-03-11 06:35:11 +00:00
Chris Lattner
a0c4a99ec2 reapply my previous patch (r66358) with a tweak to set the
alignment of the generated constant pool entry to the
desired alignment of a type.  If we don't do this, we end up
trying to do movsd from 4-byte alignment memory.  This fixes
450.soplex and 456.hmmer.

llvm-svn: 66641
2009-03-11 05:08:08 +00:00
Evan Cheng
264173da40 Two coalescer fixes in one.
1. Use the same value# to represent unknown values being merged into sub-registers.
2. When the coalescer commutes an instruction and the destination is a physical register, update its sub-registers by merging in the extended ranges.

llvm-svn: 66610
2009-03-11 00:03:21 +00:00
Bill Wendling
5af061a657 Readd test, but XFAIL it.
llvm-svn: 66581
2009-03-10 21:31:00 +00:00
Evan Cheng
535dbf4ffd Revert 66358 for now. It's breaking povray, 450.soplex, and 456.hmmer on x86 / Darwin.
llvm-svn: 66574
2009-03-10 20:47:18 +00:00
Bill Wendling
4ddb112b2f Add radar number.
llvm-svn: 66534
2009-03-10 06:53:54 +00:00
Chris Lattner
03060f6d50 wire up support for emitting "special" values from inline asm
format strings with the standard ${:foo} syntax.

llvm-svn: 66527
2009-03-10 05:37:13 +00:00
Chris Lattner
93662edfb8 Fix PR3763 by using proper APInt methods instead of uint64_t's.
llvm-svn: 66434
2009-03-09 20:22:18 +00:00
Evan Cheng
3c9a084a1b ARM isLegalAddressImmediate should check if type is a simple type now that optimizer can create values of funky scalar types.
llvm-svn: 66429
2009-03-09 19:15:00 +00:00
Evan Cheng
8b47b553f9 Yet another case where the spiller marked two uses of the same register on the same instruction as kill. This fixes PR3706.
llvm-svn: 66428
2009-03-09 19:00:05 +00:00
Evan Cheng
67ccd79e39 Recognize triplets starting with armv5-, armv6- etc. And set the ARM arch version accordingly.
llvm-svn: 66365
2009-03-08 04:02:49 +00:00
Evan Cheng
483ece4d2e If a MI uses the same register more than once, only mark one of them as 'kill'.
llvm-svn: 66363
2009-03-08 03:58:35 +00:00
Chris Lattner
578b634f56 implement an optimization to codegen c ? 1.0 : 2.0 as load { 2.0, 1.0 } + c*4.
For 2009-03-07-FPConstSelect.ll we now produce:

_f:
	xorl	%eax, %eax
	testl	%edi, %edi
	movl	$4, %ecx
	cmovne	%rax, %rcx
	leaq	LCPI1_0(%rip), %rax
	movss	(%rcx,%rax), %xmm0
	ret

previously we produced:

_f:
	subl	$4, %esp
	cmpl	$0, 8(%esp)
	movss	LCPI1_0, %xmm0
	je	LBB1_2	## entry
LBB1_1:	## entry
	movss	LCPI1_1, %xmm0
LBB1_2:	## entry
	movss	%xmm0, (%esp)
	flds	(%esp)
	addl	$4, %esp
	ret

on PPC the code also improves to:

_f:
	cntlzw r2, r3
	srwi r2, r2, 5
	li r3, lo16(LCPI1_0)
	slwi r2, r2, 2
	addis r3, r3, ha16(LCPI1_0)
	lfsx f1, r3, r2
	blr 

from:

_f:
	li r2, lo16(LCPI1_1)
	cmplwi cr0, r3, 0
	addis r2, r2, ha16(LCPI1_1)
	beq cr0, LBB1_2	; entry
LBB1_1:	; entry
	li r2, lo16(LCPI1_0)
	addis r2, r2, ha16(LCPI1_0)
LBB1_2:	; entry
	lfs f1, 0(r2)
	blr 

This also improves the existing pic-cpool case from:

foo:
	subl	$12, %esp
	call	.Lllvm$1.$piclabel
.Lllvm$1.$piclabel:
	popl	%eax
	addl	$_GLOBAL_OFFSET_TABLE_ + [.-.Lllvm$1.$piclabel], %eax
	cmpl	$0, 16(%esp)
	movsd	.LCPI1_0@GOTOFF(%eax), %xmm0
	je	.LBB1_2	# entry
.LBB1_1:	# entry
	movsd	.LCPI1_1@GOTOFF(%eax), %xmm0
.LBB1_2:	# entry
	movsd	%xmm0, (%esp)
	fldl	(%esp)
	addl	$12, %esp
	ret

to:

foo:
	call	.Lllvm$1.$piclabel
.Lllvm$1.$piclabel:
	popl	%eax
	addl	$_GLOBAL_OFFSET_TABLE_ + [.-.Lllvm$1.$piclabel], %eax
	xorl	%ecx, %ecx
	cmpl	$0, 4(%esp)
	movl	$8, %edx
	cmovne	%ecx, %edx
	fldl	.LCPI1_0@GOTOFF(%eax,%edx)
	ret

This triggers a few dozen times in spec FP 2000.
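A rough C-level picture of the new lowering (the array and names below are
illustrative only, not the generated constant pool):

/* c ? 1.0f : 2.0f becomes an indexed load from a two-entry constant pool,
   with the condition materialized as 0 or 1 and scaled by sizeof(float): */
static const float pool[2] = { 2.0f, 1.0f };  /* [0] = false value, [1] = true value */
float select_fp(int c) {
  return pool[c != 0];                        /* load at pool + 4*(c != 0) */
}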

llvm-svn: 66358
2009-03-08 01:51:30 +00:00
Dan Gohman
b9c32f1aca Arithmetic instructions don't set EFLAGS bits OF and CF bits
the same way the "test" instruction does in overflow cases,
so eliminating the test is only safe when those bits aren't
needed, as is the case for COND_E and COND_NE, or if it
can be proven that no overflow will occur. For now, just
restrict the optimization to COND_E and COND_NE and don't
do any overflow analysis.
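A small C illustration of the distinction (hypothetical functions, assuming
the usual x86 lowering):

/* Safe: equality against zero needs only ZF, which dec/sub set the same
   way a following "test" of the result would. */
int count_down(int n) {
  int c = 0;
  while (--n != 0)        /* dec + jne, no separate test required */
    ++c;
  return c;
}

/* Not covered yet: a signed comparison needs SF/OF, and add/sub set OF
   according to overflow, unlike "test". */
int sign_of_sum(int a, int b) {
  return (a + b) < 0;     /* still needs a test, or overflow analysis */
}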

llvm-svn: 66318
2009-03-07 01:58:32 +00:00
Dan Gohman
42049dafd7 Fix ScheduleDAGRRList::CopyAndMoveSuccessors' handling of nodes
with multiple chain operands. This can occur when the scheduler
has added chain operands to a node that already has a chain
operand, in order to handle physical register dependencies.

This fixes an llvm-gcc bootstrap failure on x86-64 introduced
in r66058.

llvm-svn: 66240
2009-03-06 02:23:01 +00:00
Dan Gohman
f6f684b206 Fix the "test" optimization to recognize "dec" as an add of
negative one, as subtracts of immediates are canonicalized
to adds.

llvm-svn: 66180
2009-03-05 19:32:48 +00:00
Dan Gohman
5e88bf4c00 Make this test more thorough. Not only should there be no %esi,
there should be no spilling of anything.

llvm-svn: 66179
2009-03-05 19:31:32 +00:00
Evan Cheng
24c138a1cd Do not split edges to EH landing pads. It will cause code size explosion.
llvm-svn: 66140
2009-03-05 06:31:26 +00:00
Dan Gohman
31fb085c2e Re-apply 66008, now that the unfoldMemoryOperand bug is fixed.
llvm-svn: 66058
2009-03-04 19:44:21 +00:00
Owen Anderson
fb7f64ea0d Add a restore folder, which shaves a dozen or so machineinstrs off oggenc. Update a testcase to check this.
llvm-svn: 66029
2009-03-04 08:52:31 +00:00
Evan Cheng
7d9019d0f3 Fix PR3666: isel calls to constant addresses.
llvm-svn: 66024
2009-03-04 06:48:53 +00:00
Eli Friedman
1bed40b86a PR3686: make the legalizer handle bitcast from i80 to x86 long double.
llvm-svn: 66021
2009-03-04 06:23:34 +00:00
Dan Gohman
6831e2c2a6 Revert r66004 for now; it's causing a variety of test failures.
llvm-svn: 66008
2009-03-04 03:54:19 +00:00
Evan Cheng
32eef2f73f Rename test.
llvm-svn: 66006
2009-03-04 02:47:25 +00:00
Dan Gohman
c6c669cc1e Teach the x86 backend to eliminate "test" instructions by using the EFLAGS
result from add, sub, inc, and dec instructions in simple cases.

llvm-svn: 66004
2009-03-04 02:33:24 +00:00
Evan Cheng
db402a7a49 Fix PR3701. 1. X86 target renamed eflags register to flags. This matches what llvm-gcc generates so codegen knows flags register is being clobbered by inline asm. 2. BURR scheduler should also check if inline asm nodes can clobber "live" physical registers. Previously it was only checking target nodes with implicit defs.
llvm-svn: 65996
2009-03-04 01:41:49 +00:00
Bill Wendling
93eeea0493 The DAG combiner was performing a BT combine. The BT combine had a value of -1,
so it changed it into a 31 via the TLO.ShrinkDemandedConstant() call. Then it
would go through the DAG combiner again. This time it had a value of 31, which
was turned into a -1 by TLI.SimplifyDemandedBits(). This would ping pong
forever.

Teach the TLO.ShrinkDemandedConstant() call not to lower a value if the demanded
value is an XOR of all ones.

llvm-svn: 65985
2009-03-04 00:18:06 +00:00
Nate Begeman
6b41d33726 Fix a problem with DAGCombine on 64b targets where folding
extracts + build_vector into a shuffle would fail, because the
type of the new build_vector would not be legal.  Try harder to
create a legal build_vector type.  Note: this will be totally 
irrelevant once vector_shuffle no longer takes a build_vector for
shuffle mask.

New:
_foo:
	xorps	%xmm0, %xmm0
	xorps	%xmm1, %xmm1
	subps	%xmm1, %xmm1
	mulps	%xmm0, %xmm1
	addps	%xmm0, %xmm1
	movaps	%xmm1, 0

Old:
_foo:
	xorps	%xmm0, %xmm0
	movss	%xmm0, %xmm1
	xorps	%xmm2, %xmm2
	unpcklps	%xmm1, %xmm2
	pshufd	$80, %xmm1, %xmm1
	unpcklps	%xmm1, %xmm2
	pslldq	$16, %xmm2
	pshufd	$57, %xmm2, %xmm1
	subps	%xmm0, %xmm1
	mulps	%xmm0, %xmm1
	addps	%xmm0, %xmm1
	movaps	%xmm1, 0

llvm-svn: 65791
2009-03-01 23:44:07 +00:00
Evan Cheng
276f9b02c5 Minor optimization:
Look for situations like this:
%reg1024<def> = MOV r1
%reg1025<def> = MOV r0
%reg1026<def> = ADD %reg1024, %reg1025
r0            = MOV %reg1026
Commute the ADD to hopefully eliminate an otherwise unavoidable copy.

llvm-svn: 65752
2009-03-01 02:03:43 +00:00
Evan Cheng
9c3ce7905e Last commit accidentially deleted this code.
llvm-svn: 65679
2009-02-28 06:02:14 +00:00
Rafael Espindola
880e63bf01 Refactor TLS code and add some tests. The tests and expected results are:
 pic |  declaration | linkage  | visibility | tests                   | TLS model

!pic |  declaration | external | default    | tls1.ll     tls2.ll     | local exec
 pic |  declaration | external | default    | tls1-pic.ll tls2-pic.ll | general dynamic
!pic | !declaration | external | default    | tls3.ll     tls4.ll     | initial exec
 pic | !declaration | external | default    | tls3-pic.ll tls4-pic.ll | general dynamic

!pic |  declaration | external | hidden     | tls7.ll     tls8.ll     | local exec
 pic |  declaration | external | hidden     | X                       | local dynamic
!pic | !declaration | external | hidden     | tls9.ll     tls10.ll    | local exec
 pic | !declaration | external | hidden     | X                       | local dynamic

!pic |  declaration | internal | default    | tls5.ll     tls6.ll     | local exec
 pic |  declaration | internal | default    | X                       | local dynamic

The ones marked with an X have not been implemented since local dynamic is not implemented.

llvm-svn: 65632
2009-02-27 13:37:18 +00:00
Evan Cheng
257e065df6 Make sure this test passes on linux-ppc.
llvm-svn: 65600
2009-02-27 00:51:50 +00:00
Evan Cheng
0ca6e3dba5 MachineLICM CSE should match destination register classes; avoid hoisting implicit_def's.
llvm-svn: 65592
2009-02-27 00:02:22 +00:00
Evan Cheng
4014a9a5b8 ADDS{D|S}rr_Int and MULS{D|S}rr_Int are not commutable. The users of these intrinsics expect the high bits will not be modified.
llvm-svn: 65499
2009-02-26 03:12:02 +00:00
Evan Cheng
86fc9440db The last commit was overly conservative. It's ok to reuse value that's already marked livein.
llvm-svn: 65498
2009-02-26 03:02:21 +00:00
Evan Cheng
ec34226c2b Revert BuildVectorSDNode related patches: 65426, 65427, and 65296.
llvm-svn: 65482
2009-02-25 22:49:59 +00:00
Dan Gohman
3766eeea36 Fast-isel can't do TLS yet, so it should fall back to SDISel
if it sees TLS addresses.

llvm-svn: 65341
2009-02-23 22:03:08 +00:00
Dan Gohman
c1229a3a86 Use the -stack-alignment option instead of using a target triple
for avoiding dynamic stack realignment.

llvm-svn: 65319
2009-02-23 16:34:46 +00:00
Evan Cheng
dd139e795c Only v1i64 (i.e. _m64) is returned via RAX / RDX.
llvm-svn: 65313
2009-02-23 09:03:22 +00:00
Nate Begeman
7d9e99560f Make this test use the darwin target triple, to avoid stack traffic on linux.
llvm-svn: 65312
2009-02-23 09:03:06 +00:00
Nate Begeman
e0093d2501 Generate better code for v8i16 shuffles on SSE2
Generate better code for v16i8 shuffles on SSE2 (avoids stack)
Generate pshufb for v8i16 and v16i8 shuffles on SSSE3 where it is fewer uops.
Document the shuffle matching logic and add some FIXMEs for later further
  cleanups.
New tests that test the above.

Examples:

New:
_shuf2:
	pextrw	$7, %xmm0, %eax
	punpcklqdq	%xmm1, %xmm0
	pshuflw	$128, %xmm0, %xmm0
	pinsrw	$2, %eax, %xmm0

Old:
_shuf2:
	pextrw	$2, %xmm0, %eax
	pextrw	$7, %xmm0, %ecx
	pinsrw	$2, %ecx, %xmm0
	pinsrw	$3, %eax, %xmm0
	movd	%xmm1, %eax
	pinsrw	$4, %eax, %xmm0
	ret

=========

New:
_shuf4:
	punpcklqdq	%xmm1, %xmm0
	pshufb	LCPI1_0, %xmm0

Old:
_shuf4:
	pextrw	$3, %xmm0, %eax
	movsd	%xmm1, %xmm0
	pextrw	$3, %xmm1, %ecx
	pinsrw	$4, %ecx, %xmm0
	pinsrw	$5, %eax, %xmm0

========

New:
_shuf1:
	pushl	%ebx
	pushl	%edi
	pushl	%esi
	pextrw	$1, %xmm0, %eax
	rolw	$8, %ax
	movd	%xmm0, %ecx
	rolw	$8, %cx
	pextrw	$5, %xmm0, %edx
	pextrw	$4, %xmm0, %esi
	pextrw	$3, %xmm0, %edi
	pextrw	$2, %xmm0, %ebx
	movaps	%xmm0, %xmm1
	pinsrw	$0, %ecx, %xmm1
	pinsrw	$1, %eax, %xmm1
	rolw	$8, %bx
	pinsrw	$2, %ebx, %xmm1
	rolw	$8, %di
	pinsrw	$3, %edi, %xmm1
	rolw	$8, %si
	pinsrw	$4, %esi, %xmm1
	rolw	$8, %dx
	pinsrw	$5, %edx, %xmm1
	pextrw	$7, %xmm0, %eax
	rolw	$8, %ax
	movaps	%xmm1, %xmm0
	pinsrw	$7, %eax, %xmm0
	popl	%esi
	popl	%edi
	popl	%ebx
	ret

Old:
_shuf1:
	subl	$252, %esp
	movaps	%xmm0, (%esp)
	movaps	%xmm0, 16(%esp)
	movaps	%xmm0, 32(%esp)
	movaps	%xmm0, 48(%esp)
	movaps	%xmm0, 64(%esp)
	movaps	%xmm0, 80(%esp)
	movaps	%xmm0, 96(%esp)
	movaps	%xmm0, 224(%esp)
	movaps	%xmm0, 208(%esp)
	movaps	%xmm0, 192(%esp)
	movaps	%xmm0, 176(%esp)
	movaps	%xmm0, 160(%esp)
	movaps	%xmm0, 144(%esp)
	movaps	%xmm0, 128(%esp)
	movaps	%xmm0, 112(%esp)
	movzbl	14(%esp), %eax
	movd	%eax, %xmm1
	movzbl	22(%esp), %eax
	movd	%eax, %xmm2
	punpcklbw	%xmm1, %xmm2
	movzbl	42(%esp), %eax
	movd	%eax, %xmm1
	movzbl	50(%esp), %eax
	movd	%eax, %xmm3
	punpcklbw	%xmm1, %xmm3
	punpcklbw	%xmm2, %xmm3
	movzbl	77(%esp), %eax
	movd	%eax, %xmm1
	movzbl	84(%esp), %eax
	movd	%eax, %xmm2
	punpcklbw	%xmm1, %xmm2
	movzbl	104(%esp), %eax
	movd	%eax, %xmm1
	punpcklbw	%xmm1, %xmm0
	punpcklbw	%xmm2, %xmm0
	movaps	%xmm0, %xmm1
	punpcklbw	%xmm3, %xmm1
	movzbl	127(%esp), %eax
	movd	%eax, %xmm0
	movzbl	135(%esp), %eax
	movd	%eax, %xmm2
	punpcklbw	%xmm0, %xmm2
	movzbl	155(%esp), %eax
	movd	%eax, %xmm0
	movzbl	163(%esp), %eax
	movd	%eax, %xmm3
	punpcklbw	%xmm0, %xmm3
	punpcklbw	%xmm2, %xmm3
	movzbl	188(%esp), %eax
	movd	%eax, %xmm0
	movzbl	197(%esp), %eax
	movd	%eax, %xmm2
	punpcklbw	%xmm0, %xmm2
	movzbl	217(%esp), %eax
	movd	%eax, %xmm4
	movzbl	225(%esp), %eax
	movd	%eax, %xmm0
	punpcklbw	%xmm4, %xmm0
	punpcklbw	%xmm2, %xmm0
	punpcklbw	%xmm3, %xmm0
	punpcklbw	%xmm1, %xmm0
	addl	$252, %esp
	ret

llvm-svn: 65311
2009-02-23 08:49:38 +00:00
Scott Michel
3f8637305f Introduce the BuildVectorSDNode class that encapsulates the ISD::BUILD_VECTOR
instruction. The class also consolidates the code for detecting constant
splats that's shared across PowerPC and the CellSPU backends (and might be
useful for other backends.) Also introduces SelectionDAG::getBUILD_VECTOR() for
generating new BUILD_VECTOR nodes.

llvm-svn: 65296
2009-02-22 23:36:09 +00:00
Richard Pennington
72dc7c621e bug 3610: Test case.
llvm-svn: 65287
2009-02-22 15:54:44 +00:00
Evan Cheng
41687ff389 If a use operand is marked isKill, don't forget to add kill to its live interval as well.
llvm-svn: 65279
2009-02-22 08:35:56 +00:00
Evan Cheng
4385f393f7 Be bug compatible with gcc by returning MMX values in RAX.
llvm-svn: 65274
2009-02-22 08:05:12 +00:00
Anton Korobeynikov
5df82e3e25 Drop a bunch of half-working stuff in the ext_weak linkage support.
Now we're using one gross, but quite robust hack :) (previous ones
did not work, for example, when ext_weak symbol was used deep inside
constant expression in the initializer).

The proper fix of this problem will require some quite huge asmprinter
changes, which is why it was postponed. This fixes PR3629 by the way :)

llvm-svn: 65230
2009-02-21 11:53:32 +00:00
Evan Cheng
d4dc06f55a If two-address def is dead and the instruction does not define other registers, and it doesn't produce side effects, just delete the instruction.
llvm-svn: 65218
2009-02-21 03:14:25 +00:00
Evan Cheng
56b43045f6 Teach LSR sink to sink the immediate portion of the common expression back into uses if they fit in address modes of all the uses.
llvm-svn: 65215
2009-02-21 02:06:47 +00:00
Evan Cheng
c2541a4450 Fix strange logic in CollectIVUsers used to determine whether all uses are
addresses, part 1. This fixes an obvious logic bug. Previously if the only
in-loop use is a PHI, it would return AllUsesAreAddresses as true.

llvm-svn: 65178
2009-02-20 22:16:49 +00:00
Evan Cheng
c40c3e28f7 Support return of MMX values in 64-bit mode.
llvm-svn: 65152
2009-02-20 20:43:02 +00:00
Owen Anderson
8d12de1c25 Fix a crash in the pre-alloc splitter exposed by recent codegen changes.
llvm-svn: 65121
2009-02-20 10:02:23 +00:00
Chris Lattner
c25ea86424 make these tests pass when run on a G5.
llvm-svn: 65117
2009-02-20 07:10:11 +00:00
Dan Gohman
4e8fc41d48 Implement "superhero" strength reduction, or full strength
reduction of address calculations down to basic pointer arithmetic.
This is currently off by default, as it needs a few other features
before it becomes generally useful. And even when enabled, full
strength reduction is only performed when it doesn't increase
register pressure, and when several other conditions are true.

This also factors a bunch of existing LSR code out of
StrengthReduceStridedIVUsers into separate functions, and tidies
up IV insertion. This actually decreases register pressure even
in non-superhero mode. The change in iv-users-in-other-loops.ll
is an example of this; there are two more adds because there are
two fewer leas, and there is less spilling.
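For illustration, a C sketch of what full strength reduction of an address
calculation looks like at the source level (hypothetical example):

/* Before: the address is recomputed from the index every iteration. */
void scale(float *a, int n) {
  for (int i = 0; i < n; ++i)
    a[i] *= 2.0f;                  /* address = a + 4*i */
}

/* After, conceptually: the index disappears and a pointer IV remains. */
void scale_reduced(float *a, int n) {
  for (float *p = a, *e = a + n; p != e; ++p)
    *p *= 2.0f;                    /* plain pointer arithmetic */
}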

llvm-svn: 65108
2009-02-20 04:17:46 +00:00
Evan Cheng
bd63a1f40d GV with null value initializer shouldn't go to BSS if it's meant for a mergeable strings section. Currently it only checks for Darwin. Someone else please check if it should apply to other targets as well.
llvm-svn: 64877
2009-02-18 02:19:52 +00:00
Evan Cheng
2e0cf05ad2 A couple of places where reused use operands should be marked kill. This is exposed by recent availability fallthrough changes.
llvm-svn: 64745
2009-02-17 06:41:03 +00:00
Dan Gohman
3d93bc5654 Change these tests to use regular loads instead of llvm.x86.sse2.loadu.dq.
Enhance instcombine to use the preferred field of
GetOrEnforceKnownAlignment in more cases, so that regular IR operations are
optimized in the same way that the intrinsics currently are.

llvm-svn: 64623
2009-02-16 00:44:23 +00:00
Evan Cheng
9428fd271c Fix PR3522. It's not safe to sink into landing pad BB's.
llvm-svn: 64582
2009-02-15 08:36:12 +00:00
Evan Cheng
9041a71923 Teach x86 target -soft-float.
llvm-svn: 64496
2009-02-13 22:36:38 +00:00
Dan Gohman
aec5be6b01 Fix the code that checked if a SCEVAddRecExpr Start contains an
addrec in a different loop to check the value being added to
the accumulated Start value, not the Start value before it has
the new value added to it. This prevents LSR from going crazy
on the included testcase. Dale, please review.

llvm-svn: 64440
2009-02-13 03:58:31 +00:00
Dan Gohman
3ade7d2346 Fix LSR's IV sorting function to explicitly sort by bitwidth
after sorting by stride value. This prevents it from missing
IV reuse opportunities in a host-sensitive manner.

llvm-svn: 64415
2009-02-13 00:26:43 +00:00
Dale Johannesen
47321cf01f Arrange to print constants that match "n" and "i" constraints
in inline asm as signed (what gcc does).  Add partial support
for x86-specific "e" and "Z" constraints, with appropriate
signedness for printing.

llvm-svn: 64400
2009-02-12 20:58:09 +00:00
Chris Lattner
1174b80823 fix the X86 backend to just drop llvm.declare nodes for VLAs instead of
leaving them in the DAG and then getting selection errors.  This is a 
fix for PR3538.

llvm-svn: 64382
2009-02-12 17:33:11 +00:00
Chris Lattner
2a13562296 add PR
llvm-svn: 64377
2009-02-12 17:04:57 +00:00
Evan Cheng
91c27ec711 It's (currently) not safe to keep certain physical registers live across basic blocks, e.g. x86 fp stack registers.
llvm-svn: 64374
2009-02-12 10:32:17 +00:00
Evan Cheng
19a1ba3680 Replace one of the burr scheduling heuristics with something more sensible. Now calcMaxScratches simply computes the number of true data dependencies. This actually improves a couple of tests in the dejagnu suite, as well as many tests in the llvm nightly test suite.
llvm-svn: 64369
2009-02-12 08:59:45 +00:00
Chris Lattner
e5ec807aaf fix PR3537: if resetting bbi back to the start of a block, we need to
forget about already inserted expressions.

llvm-svn: 64362
2009-02-12 06:56:08 +00:00
Chris Lattner
6b7bab07d3 rename test to avoid messing with tab completion of dates.
llvm-svn: 64361
2009-02-12 06:54:55 +00:00
Evan Cheng
a34255f910 Remove a bogus assertion. It's possible a live-in available value is used by a previous instruction.
llvm-svn: 64339
2009-02-11 23:41:57 +00:00
Dan Gohman
d252329703 Don't use special heuristics for nodes with no data predecessors
unless they actually have data successors, and likewise for nodes
with no data successors unless they actually have data predecessors.

llvm-svn: 64327
2009-02-11 21:29:39 +00:00
Dale Johannesen
f367ef04af Make a transformation added in 63266 a bit less aggressive.
It was transforming (x&y)==y to (x&y)!=0 in the case where
y is variable and known to have at most one bit set (e.g. z&1).
This is not correct; the expressions are not equivalent when y==0.
I believe this patch salvages what can be salvaged, including
all the cases in bt.ll.  Dan, please review.
Fixes gcc.c-torture/execute/20040709-[12].c
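A concrete C example of the problem case (hypothetical functions):

/* When y == 0, (x & y) == y evaluates to 1 but (x & y) != 0 evaluates to 0,
   so the rewrite is only valid when y is known to be nonzero. */
int before(unsigned x, unsigned y) { return (x & y) == y; }
int after(unsigned x, unsigned y)  { return (x & y) != 0; }  /* differs when y == 0 */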

llvm-svn: 64314
2009-02-11 19:19:41 +00:00
Evan Cheng
cfa084930b Implement PR3495: local spiller optimization. The local spiller can now keep availability information over BB boundaries. It visits BB's in depth first order. After visiting a BB, if it finds a successor which has a single predecessor, it visits the successor next without clearing the availability information. This allows the successor to omit reloads or change them into copies.
llvm-svn: 64298
2009-02-11 08:24:21 +00:00
Evan Cheng
cdb35e3f0f Handle llvm.x86.sse2.maskmov.dqu in 64-bit.
llvm-svn: 64240
2009-02-10 22:06:28 +00:00
Evan Cheng
5be1afd928 Fix PR3457: Ignore control successors when looking for closest scheduled successor. A control successor doesn't read result(s) produced by the scheduling unit being evaluated.
llvm-svn: 64210
2009-02-10 08:30:11 +00:00
Evan Cheng
3b84024598 Implement FpSET_ST1_*.
llvm-svn: 64186
2009-02-09 23:32:07 +00:00
Evan Cheng
44e58653af Make sure constant subscript is truncated to ptr size if it may not fit.
llvm-svn: 64163
2009-02-09 20:54:38 +00:00
Evan Cheng
3f98669da2 Re-enable machine sinking pass now that the coalescer bugs and the AnalyzeBranch bug are fixed.
llvm-svn: 64126
2009-02-09 08:45:39 +00:00
Evan Cheng
a7287a61fb Fix PR3486. Fix a bug in code that manually patch physical register live interval after its sub-register is coalesced with a virtual register.
llvm-svn: 64082
2009-02-08 11:04:35 +00:00
Evan Cheng
7d46312873 (no commit message)
llvm-svn: 64073
2009-02-08 07:48:37 +00:00
Bill Wendling
4ed0306d6f Revert r63999. It was breaking self-hosting builds.
llvm-svn: 64062
2009-02-08 00:58:05 +00:00
Evan Cheng
62694a52fe Enable machine sinking pass in non-fast mode.
llvm-svn: 63999
2009-02-07 01:57:46 +00:00
Evan Cheng
74f909616b Fix test. It produces unexpected code if sse4.1 is on.
llvm-svn: 63906
2009-02-06 01:49:19 +00:00
Evan Cheng
6a938dd9e7 isAsCheapAsMove instructions can have register src operands. Check if they are really re-materializable.
This fixes sse.expandfft and sse.stepfft.

llvm-svn: 63890
2009-02-05 22:24:17 +00:00
Evan Cheng
b36e3e34e7 Turn on machine LICM in non-fast mode.
llvm-svn: 63855
2009-02-05 08:46:33 +00:00
Chris Lattner
d8f77bbc07 if we have a large GEP offset on a 32-bit or other target, make
sure to print the value properly sext'd to the right pointer size.
This fixes PR3481.

llvm-svn: 63843
2009-02-05 06:55:21 +00:00
Mon P Wang
3596488f90 Add test case for r63760.
llvm-svn: 63774
2009-02-04 21:10:56 +00:00
Mon P Wang
430525dc4f Fixes a case where we generate an incorrect mask for pshufhw in the presence
of undefs and incorrectly determine whether we have a punpckldq.

llvm-svn: 63702
2009-02-04 01:16:59 +00:00
Duncan Sands
87db26ad2c Fix PR3411. When replacing values, nodes are analyzed
in any old order.  Since analyzing a node analyzes its
operands also, this can mean that when we pop a node
off the list of nodes to be analyzed, it may already
have been analyzed.

llvm-svn: 63632
2009-02-03 10:23:33 +00:00
Chris Lattner
2dae393299 rearrange how SRoA handles promotion of allocas to vectors.
With the new world order, it can handle cases where the first
store into the alloca is an element of the vector, instead of
requiring the first analyzed store to have the vector type 
itself.  This allows us to un-xfail 
test/CodeGen/X86/vec_ins_extract.ll.

llvm-svn: 63590
2009-02-03 01:30:09 +00:00
Dan Gohman
7b176db971 Add explicit -march=x86 to these tests so that they don't
default to -march=x86-64 on 64-bit hosts.

llvm-svn: 63579
2009-02-03 00:20:22 +00:00
Dan Gohman
9bc7148556 Fix another test to not use -mcpu=yonah with 64-bit code.
llvm-svn: 63572
2009-02-02 23:43:59 +00:00
Dan Gohman
8a2fa9ded4 Yonah does not support x86-64. Change the -mcpu value to one that does.
llvm-svn: 63561
2009-02-02 22:50:08 +00:00
Chris Lattner
eeeba2abe9 xfail this for now, will fix shortly.
llvm-svn: 63533
2009-02-02 18:15:33 +00:00
Evan Cheng
483bbd1643 Teach LowerBRCOND to recognize (xor (setcc x), 1). The xor inverts the condition. It's normally transformed by the dag combiner, unless the condition is set by an arithmetic op with overflow.
llvm-svn: 63505
2009-02-02 08:07:36 +00:00
Torok Edwin
110a24e43e add 2 more testcases for -mattr=-sse (r63495).
--This line, and those below, will be ignored--

A    test/CodeGen/X86/nosse-error1.ll
A    test/CodeGen/X86/nosse-error2.ll

llvm-svn: 63496
2009-02-01 18:24:20 +00:00
Torok Edwin
b4c9a6097f Implement -mno-sse: if SSE is disabled on x86-64, don't store XMM on stack for
var-args, and don't allow FP return values

llvm-svn: 63495
2009-02-01 18:15:56 +00:00
Duncan Sands
cac6cf74f9 Fix PR3453 and probably a bunch of other potential
crashes or wrong code with codegen of large integers:
eliminate the legacy getIntegerVTBitMask and
getIntegerVTSignBit methods, which returned their
value as a uint64_t, so couldn't handle huge types.

llvm-svn: 63494
2009-02-01 18:06:53 +00:00
Duncan Sands
74179a9dde Fix PR3401: when using large integers, the type
returned by getShiftAmountTy may be too small
to hold shift values (it is an i8 on x86-32).
Before and during type legalization, use a large
but legal type for shift amounts: getPointerTy;
afterwards use getShiftAmountTy, fixing up any
shift amounts with a big type during operation
legalization.  Thanks to Dan for writing the
original patch (which I shamelessly pillaged).

llvm-svn: 63482
2009-01-31 15:50:11 +00:00
Mon P Wang
1cc0fd54b7 Used "-enable-unsafe-fp-math" to allow this transformation: -(a * b - c) = c - a * b.
llvm-svn: 63475
2009-01-31 06:50:54 +00:00
Mon P Wang
2a0cce6b1c If unsafe FP optimization is not set, don't allow -(A-B) => B-A because
when A==B, -0.0 != +0.0.
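A small C demonstration of why the rewrite is unsafe under IEEE-754
(hypothetical test program):

#include <stdio.h>

int main(void) {
  double a = 0.0, b = 0.0;
  double neg     = -(a - b);   /* -(0.0 - 0.0) == -0.0 */
  double swapped = b - a;      /*   0.0 - 0.0  == +0.0 */
  printf("%g %g\n", neg, swapped);               /* prints: -0 0 */
  printf("%d\n", 1.0 / neg == 1.0 / swapped);    /* prints: 0 (-inf vs +inf) */
  return 0;
}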

llvm-svn: 63474
2009-01-31 06:07:45 +00:00
Owen Anderson
fe46e52119 XFAIL this test. It only worked before because of a bug in the spill point selection code. Not deleting because
it should be possible to enhance the selection code to handle this in the future.

llvm-svn: 63340
2009-01-29 22:27:56 +00:00
Evan Cheng
1ffb8d20e8 Local register allocator shouldn't assume only the entry and landing pad basic blocks have live-ins.
llvm-svn: 63323
2009-01-29 18:37:30 +00:00
Dan Gohman
14959cba72 In the case of an extractelement on an insertelement value,
the element indices may be equal if either one is not a
constant.

llvm-svn: 63311
2009-01-29 16:10:46 +00:00
Evan Cheng
1346d22223 Exit with nice warnings when the register allocator runs out of registers.
llvm-svn: 63267
2009-01-29 02:20:59 +00:00
Dan Gohman
9d120d6d8f Make x86's BT instruction matching more thorough, and add some
dagcombines that help it match in several more cases. Add
several more cases to test/CodeGen/X86/bt.ll. This doesn't
yet include matching for BT with an immediate operand, it
just covers more register+register cases.
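For reference, the classic source pattern the register+register BT form
covers (hypothetical function):

/* Testing bit n of x can be matched to "bt reg, reg" followed by a
   setcc/branch on the carry flag: */
int bit_is_set(unsigned x, unsigned n) {
  return (x & (1u << n)) != 0;
}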

llvm-svn: 63266
2009-01-29 01:59:02 +00:00
Mon P Wang
8abb07a527 Fixed lowering of v8i16 shuffles.
llvm-svn: 63252
2009-01-28 23:11:14 +00:00
Bill Wendling
ab2b1d7629 Make test platform agnostic.
llvm-svn: 63247
2009-01-28 22:20:56 +00:00
Evan Cheng
2a965124b7 The memory alignment requirement on some of the mov{h|l}p{d|s} patterns is 16-byte. That is overly strict. These instructions read / write f64 memory locations without alignment requirement.
llvm-svn: 63195
2009-01-28 08:35:02 +00:00
Mon P Wang
f089df40b5 Added sse test patterns for r62979 and r63193.
llvm-svn: 63194
2009-01-28 08:13:56 +00:00
Bill Wendling
6c7c632a21 Add testcase for r63142.
llvm-svn: 63149
2009-01-27 23:00:53 +00:00
Evan Cheng
a05436f739 Implement multiply with overflow by 2 with an add instruction.
llvm-svn: 63090
2009-01-27 03:30:42 +00:00
Dan Gohman
2c06ee586b Add a regression test for x86-64 red zone usage.
llvm-svn: 63075
2009-01-27 00:40:27 +00:00
Duncan Sands
276b736496 Fix PR3393, which amounts to a bug in the expensive
checking logic.  Rather than make the checking more
complicated, I've tweaked some logic to make things
conform to how the checking thought things ought to
be, since this results in a simpler "mental model".

llvm-svn: 63048
2009-01-26 21:54:18 +00:00
Dan Gohman
4613b5e807 At Nick Lewycky's request, rename this test with a more informative name.
llvm-svn: 63042
2009-01-26 21:36:31 +00:00
Evan Cheng
ec03e0cd3b Enhance logic in X86DAGToDAGISel::PreprocessForRMW which moves a load inside callseq_start to allow it to be folded into a call. It was not considering cases where a token factor is between the load and the callseq_start.
llvm-svn: 63022
2009-01-26 18:43:34 +00:00
Scott Michel
da9360e77e CellSPU:
- Rename fcmp.ll test to fcmp32.ll, start adding new double tests to fcmp64.ll
- Fix select_bits.ll test
- Capitulate to the DAGCombiner and move i64 constant loads to instruction
  selection (SPUISelDAGtoDAG.cpp).

  <rant>DAGCombiner will insert all kinds of 64-bit optimizations after
  operation legalization occurs and now we have to do most of the work that
  instruction selection should be doing twice (once to determine if v2i64
  build_vector can be handled by SelectCode(), which then runs all of the
  predicates a second time to select the necessary instructions.) But,
  CellSPU is a good citizen.</rant>

llvm-svn: 62990
2009-01-26 03:31:40 +00:00
Nate Begeman
92efc4f0ce Map address space 256 to gs; similar mappings could be supported for the
other x86 segments.  Address space 0 is stack/default; 1-255 are reserved for
client use.
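A sketch of how the mapping might be exercised from C (the address_space
attribute spelling follows clang; this example is an assumption, not from
the commit):

/* Pointers in address space 256 are addressed relative to %gs on x86. */
typedef int __attribute__((address_space(256))) gs_int;

int load_from_gs(gs_int *p) {
  return *p;   /* expected to lower to roughly: movl %gs:(%reg), %eax */
}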

llvm-svn: 62980
2009-01-26 01:24:32 +00:00
Torok Edwin
3f54410405 revert this patch for now, because Codegen does still want to generate SSE code,
for example in the case of va-args. XFAIL associated tests.

llvm-svn: 62972
2009-01-25 20:21:24 +00:00
Torok Edwin
49b1d3e3cc If user explicitly asks not to use SSE, don't force it. This fixes LLVM part of PR3402.
llvm-svn: 62967
2009-01-25 17:58:56 +00:00
Evan Cheng
71ca3e2bdb Private linkage support for PPC / Darwin.
llvm-svn: 62955
2009-01-25 06:32:01 +00:00
Evan Cheng
4ebe9b79fa Teach 2addr pass to do more commuting. If both uses of a two-address instruction are killed, but the first operand has a use before and after the def, commute if the second operand does not suffer from the same issue.
%reg1028<def> = EXTRACT_SUBREG %reg1027<kill>, 1
%reg1029<def> = MOV8rr %reg1028
%reg1029<def> = SHR8ri %reg1029, 7, %EFLAGS<imp-def,dead>
insert => %reg1030<def> = MOV8rr %reg1028
%reg1030<def> = ADD8rr %reg1028<kill>, %reg1029<kill>, %EFLAGS<imp-def,dead>

In this case, it might not be possible to coalesce the second MOV8rr
instruction if the first one is coalesced. So it would be profitable to
commute it:
%reg1028<def> = EXTRACT_SUBREG %reg1027<kill>, 1
%reg1029<def> = MOV8rr %reg1028
%reg1029<def> = SHR8ri %reg1029, 7, %EFLAGS<imp-def,dead>
insert => %reg1030<def> = MOV8rr %reg1029
%reg1030<def> = ADD8rr %reg1029<kill>, %reg1028<kill>, %EFLAGS<imp-def,dead>

llvm-svn: 62954
2009-01-25 03:53:59 +00:00
Dan Gohman
14a8caaef9 Add a PR comment to this test.
llvm-svn: 62921
2009-01-24 17:32:54 +00:00
Evan Cheng
4a5296ec5f Update test to reflect command line option name change.
llvm-svn: 62836
2009-01-23 05:45:31 +00:00
Dan Gohman
a6e5948fce Don't create ISD::FNEG nodes after legalize if they aren't legal.
Simplify x+0 to x in unsafe-fp-math mode. This avoids a bunch of
redundant work in many cases, because in unsafe-fp-math mode,
ISD::FADD with a constant is considered free to negate, so the
DAGCombiner often negates x+0 to -0-x thinking it's free, when
in reality the end result is -x, which is more expensive than x.

Also, combine x*0 to 0.

This fixes PR3374.

llvm-svn: 62789
2009-01-22 21:58:43 +00:00
Devang Patel
386023e3f0 Do not use buggy llvm-gcc to generate testcases.
llvm-svn: 62770
2009-01-22 18:28:11 +00:00
Bill Wendling
58a51ca0f9 Now with RUN line.
llvm-svn: 62716
2009-01-21 21:28:03 +00:00
Bill Wendling
1eb2ec148b Run this through -simplifycfg and -mem2reg to test only what we need to test.
llvm-svn: 62714
2009-01-21 21:02:27 +00:00
Dan Gohman
d021a20409 Simplify ReduceLoadWidth's logic: it doesn't need several different
special cases after producing the new reduced-width load, because the
new load already has the needed adjustments built into it. This fixes
several bugs due to the special cases, including PR3317.

llvm-svn: 62692
2009-01-21 15:17:51 +00:00
Dan Gohman
704f0d5879 Fix a recent regression. ClrOpcode is not set for i8; for i8, if
we want to clear %ah to zero before a division, just use a
zero-extending mov to %al. This fixes PR3366.

llvm-svn: 62691
2009-01-21 14:50:16 +00:00
Duncan Sands
07b1beeba8 Let's try to have our cake and eat it too: move
this test into FrontendC to ensure that llvm-gcc
is available; assemble using "llvm-gcc -xassembler"
rather than "as".

llvm-svn: 62683
2009-01-21 11:37:31 +00:00
Duncan Sands
e60dd41a39 Don't rely on grep -w working.
llvm-svn: 62682
2009-01-21 09:41:42 +00:00
Scott Michel
c80e71ac35 CellSPU:
- Ensure that (operation) legalization emits proper FDIV libcall when needed.
- Fix various bugs encountered during llvm-spu-gcc build, along with various
  cleanups.
- Start supporting double precision comparisons for remaining libgcc2 build.
  Discovered interesting DAGCombiner feature, which is currently solved via
  custom lowering (64-bit constants are not legal on CellSPU, but DAGCombiner
  insists on inserting one anyway.)
- Update README.

llvm-svn: 62664
2009-01-21 04:58:48 +00:00
Evan Cheng
0ed6a9d7e0 Favors generating "not" over "xor -1". For example:
unsigned test(unsigned a) {
  return ~a;
}
llvm used to generate:
movl    $4294967295, %eax
xorl    4(%esp), %eax

Now it generates:
movl      4(%esp), %eax
notl      %eax

It's 3 bytes shorter.

llvm-svn: 62661
2009-01-21 02:09:05 +00:00
Owen Anderson
10ad717dc8 Be more aggressive about renumbering vregs after splitting them.
llvm-svn: 62639
2009-01-21 00:13:28 +00:00
Chris Lattner
4f4bbecafb Don't bother running the assembler, we don't know that it will be configured
for whatever llc defaults to.  This fixes PR3363

llvm-svn: 62619
2009-01-20 21:41:53 +00:00
Evan Cheng
5bea79c062 Fix PR3243: a LiveVariables bug. When HandlePhysRegKill is checking whether the last reference is also the last def (i.e. dead def), it should also check if last reference is the current machine instruction being processed. This can happen when it is processing a physical register use and setting the current machine instruction as sub-register's last ref.
llvm-svn: 62617
2009-01-20 21:25:12 +00:00
Evan Cheng
0151af3a61 Add test case for PR3154.
llvm-svn: 62604
2009-01-20 19:29:54 +00:00
Bill Wendling
68171bde8e Testcase for limited precision stuff.
llvm-svn: 62572
2009-01-20 06:23:59 +00:00
Dan Gohman
ff4c4ab39f Fix a dagcombine to not generate loads of non-round integer types,
as its comment says, even in the case where it will be generating
extending loads. This fixes PR3216.

llvm-svn: 62557
2009-01-20 01:06:45 +00:00
Evan Cheng
5ee5ba12be Make linear scan's trivial coalescer slightly more aggressive.
llvm-svn: 62547
2009-01-20 00:16:18 +00:00
Dale Johannesen
5508ead868 Move & restructure test per review.
llvm-svn: 62538
2009-01-19 22:33:12 +00:00
Dan Gohman
af4e583c93 Fix SelectionDAG::ReplaceAllUsesWith to behave correctly when
uses are added to the From node while it is processing From's
use list, because of automatic local CSE. The fix is to avoid
visiting any new uses.

Fix a few places in the DAGCombiner that assumed that after
a RAUW call, the From node has no users and may be deleted.

This fixes PR3018.

llvm-svn: 62533
2009-01-19 21:44:21 +00:00
Dale Johannesen
31f3cac06b compile-time fmod was done incorrectly. PR 3316.
llvm-svn: 62528
2009-01-19 21:17:05 +00:00
Devang Patel
50ac518b6c Verify Intrinsic::dbg_declare.
llvm-svn: 62526
2009-01-19 21:00:48 +00:00
Evan Cheng
06cfade044 DIVREM isel deficiency: If sign bit is known zero, zero out DX/EDX/RDX instead of sign extending the low part (in AX/EAX/RAX) into it.
llvm-svn: 62519
2009-01-19 19:06:11 +00:00
Evan Cheng
53e83a2eb9 Now that UINT_TO_FP is not legal (it's marked custom), the dag combiner won't
optimize it to a SINT_TO_FP when the sign bit is known zero. X86 isel should perform the optimization itself.

llvm-svn: 62504
2009-01-19 08:08:22 +00:00
Chris Lattner
6f03811071 Fix rdar://6505632, an llc crash on 483.xalancbmk
llvm-svn: 62470
2009-01-18 20:35:00 +00:00
Bill Wendling
bca1d8ae5a Testcase for last commit.
llvm-svn: 62418
2009-01-17 07:42:44 +00:00
Evan Cheng
182d9c4c9f Fix MatchAddress bug that's preventing negative displacement from being folded in 64-bit mode.
llvm-svn: 62413
2009-01-17 07:09:27 +00:00
Mon P Wang
563134282c Simplify extract element of a scalar to vector.
llvm-svn: 62383
2009-01-17 00:07:25 +00:00
Evan Cheng
d7cc550900 Fix PPC ISD::Declare isel and eliminate the need for PPCTargetLowering::LowerGlobalAddress to check if isVerifiedDebugInfoDesc() is true. Given the recent changes, it would falsely return true for a lot of GlobalAddressSDNode's.
llvm-svn: 62373
2009-01-16 22:57:32 +00:00
Dan Gohman
cd46de9bdc Disable the post-RA scheduler on this test, since it uses a
simple %prcontext which doesn't find what it's looking for
if the scheduler has rearranged the instructions.

llvm-svn: 62363
2009-01-16 21:40:12 +00:00
Evan Cheng
c4d19d6e8c CreateVirtualRegisters does trivial copy coalescing. If a node def is used by a single CopyToReg, it reuses the virtual register assigned to the CopyToReg. This won't work for SDNode that is a clone or is itself cloned. Disable this optimization for those nodes or it can end up with non-SSA machine instructions.
llvm-svn: 62356
2009-01-16 20:57:18 +00:00
Bill Wendling
c9e856fbfd Add support for non-zero __builtin_return_address values on X86.
llvm-svn: 62338
2009-01-16 19:25:27 +00:00
Mon P Wang
e235ba591f Added missing support to widen an operand from a bit convert.
llvm-svn: 62285
2009-01-15 22:43:38 +00:00
Rafael Espindola
46b374f55b Fix Alpha test and support for private linkage.
llvm-svn: 62282
2009-01-15 21:51:46 +00:00
Mon P Wang
4cfe965df2 Expand insert/extract of a <4 x i32> with a variable index.
llvm-svn: 62281
2009-01-15 21:10:20 +00:00
Rafael Espindola
0aba6c9435 Add the private linkage.
llvm-svn: 62279
2009-01-15 20:18:42 +00:00
Richard Osborne
ce265d8cf9 Don't fold address calculations which use negative offsets into
the ADDRspii addressing mode.

llvm-svn: 62258
2009-01-15 11:32:30 +00:00
Scott Michel
b4699590f0 - Convert remaining i64 custom lowering into custom instruction emission
sequences in SPUDAGToDAGISel.cpp and SPU64InstrInfo.td, killing custom
  DAG node types as needed.
- i64 mul is now a legal instruction, but emits an instruction sequence
  that stretches tblgen and the imagination, as well as violating laws of
  several small countries and most southern US states (just kidding, but
  looking at a function with 80+ parameters is really weird and just plain
  wrong.)
- Update tests as needed.

llvm-svn: 62254
2009-01-15 04:41:47 +00:00
Richard Osborne
12b88f2fae Add pseudo instructions to the XCore for (load|store|load address) of a
frame index. eliminateFrameIndex will replace these instructions with
(LDWSP|STWSP|LDAWSP) or (LDW|STW|LDAWF) if a frame pointer is in use.

This fixes PR 3324. Previously we used LDWSP, STWSP, LDAWSP before frame
pointer elimination. However since they were marked as implicitly using
SP they could not be rematerialised.

llvm-svn: 62238
2009-01-14 18:26:46 +00:00
Dan Gohman
8c835f6285 Disable the register+memory forms of the bt instructions for now. Thanks
to Eli for pointing out that these forms don't ignore the high bits of
their index operands, and as such are not immediately suitable for use
by isel.

llvm-svn: 62194
2009-01-13 23:23:30 +00:00
Dan Gohman
9e9858781c The list-td and list-tdrr schedulers don't yet support physreg
scheduling dependencies. Add assertion checks to help catch
this.

It appears the Mips target defaults to list-td, and it has a
regression test that uses a physreg dependence. Such code was
liable to be miscompiled, and now evokes an assertion failure.

llvm-svn: 62177
2009-01-13 20:24:13 +00:00
Duncan Sands
975f2428ba When replacing uses and the same node is reached
via two paths, process it once not twice, d'oh!
Analysis, testcase and original patch thanks to
Mon Ping Wang.

llvm-svn: 62169
2009-01-13 15:17:14 +00:00
Evan Cheng
a706a020bc FIX llvm-gcc bootstrap on x86_64 linux. If a virtual register is copied to a physical register, it's not necessarily defined by a copy. We have to watch out it doesn't clobber any sub-register that might be live during its live interval. If the live interval crosses a basic block, then it's not safe to check with the less conservative check (by scanning uses and defs) because it's possible a sub-register might be live out of the block.
llvm-svn: 62144
2009-01-13 03:57:45 +00:00
Devang Patel
6d7fd4b913 Use DebugInfo interface to lower dbg_* intrinsics.
llvm-svn: 62126
2009-01-13 00:32:17 +00:00
Evan Cheng
5e17ea36e1 Fix PR3241: Currently EmitCopyFromReg emits a copy from the physical register to a virtual register unless it requires an expensive cross class copy. That means we are only treating "expensive to copy" register dependency as physical register dependency.
Also future proof the scheduler to handle "normal" physical register dependencies. The code is not exercised yet.

llvm-svn: 62074
2009-01-12 03:19:55 +00:00
Evan Cheng
47dfb8c719 This is a dup of pr2659.ll.
llvm-svn: 62029
2009-01-10 19:06:32 +00:00
Evan Cheng
411c48b7d2 Duplicated node may produce a non-physical register def.
llvm-svn: 62015
2009-01-09 22:44:02 +00:00
Evan Cheng
84945aba0b Add test case from PR2659.
llvm-svn: 62006
2009-01-09 21:01:31 +00:00
Dan Gohman
a707c0dafe PR2659 was fixed by r61847. Add the testcase as a regression test.
llvm-svn: 61986
2009-01-09 08:16:12 +00:00
Chris Lattner
4166afffa7 this test should not run opt -std-compile-opts, it should run
just llc.

llvm-svn: 61979
2009-01-09 05:32:00 +00:00
Misha Brukman
6338af14f6 Fix off-by-one error in traversing an array; this fixes a test.
The error was reported by gcc-4.3.0 during compilation.

llvm-svn: 61896
2009-01-07 23:07:29 +00:00
Evan Cheng
a70ecc2f51 The coalescer does not coalesce a virtual register to a physical register if any of the physical register's sub-register live intervals overlap with the virtual register. This is overly conservative. It prevents an extract_subreg from being coalesced away:
v1024 = EDI  // not killed
      =
      = EDI

One possible solution is for the coalescer to examine the sub-register live intervals in the same manner as the physical register. Another possibility is to examine defs and uses (when needed) of sub-registers. Both solutions are too expensive. For now, look for "short virtual intervals" and scan instructions to look for conflict instead.

This is a small win on x86-64. e.g. It shaves 403.gcc by ~80 instructions.

llvm-svn: 61847
2009-01-07 02:08:57 +00:00
Chris Lattner
f6de7aa2c9 add a testcase.
llvm-svn: 61845
2009-01-07 01:48:08 +00:00
Dan Gohman
ca4475dd7b Add patterns to match conditional moves with loads folded
into their left operand, rather than their right. Do this
by commuting the operands and inverting the condition.
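An illustrative C shape of the case the new patterns handle (names are
hypothetical):

/* The loaded value is the left ("true") operand of the select; commuting
   the operands and inverting the condition lets the load fold into the
   memory form of cmov. */
int pick(int cond, const int *p, int fallback) {
  return cond ? *p : fallback;
}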

llvm-svn: 61842
2009-01-07 01:00:24 +00:00
Dan Gohman
2682e8745c X86_COND_C and X86_COND_NC are alternate mnemonics for
X86_COND_B and X86_COND_AE, respectively.

llvm-svn: 61835
2009-01-07 00:15:08 +00:00
Dan Gohman
4edc9d725b Now that fold-pcmpeqd-0.ll is effectively testing that scheduling helps
avoid the need for spilling, add a new testcase that tests that the
pcmpeqd used for V_SETALLONES is changed to a constant-pool load as
needed.

llvm-svn: 61831
2009-01-06 23:48:10 +00:00
Dan Gohman
e033f7c41e Revert r42653 and forward-port the code that lets INC64_32r be
converted to LEA64_32r in x86's convertToThreeAddress. This
replaces code like this:
   movl  %esi, %edi
   inc   %edi
with this:
   lea   1(%rsi), %edi
which appears to be beneficial.

llvm-svn: 61830
2009-01-06 23:34:46 +00:00
Dan Gohman
b19f5073f9 Fix a bug in ComputeLinearIndex computation handling multi-level
aggregate types. Don't increment the current index after reaching
the end of a struct, as it will already be pointing at
one-past-the end. This fixes PR3288.

llvm-svn: 61828
2009-01-06 22:53:52 +00:00
Scott Michel
c30557841b CellSPU:
- Fix bugs 3194, 3195: i128 load/stores produce correct code (although, we
  need to ensure that i128 is 16-byte aligned in real life), and 128 zero-
  extends are supported.
- New td file: SPU128InstrInfo.td: this is where all new i128 support should
  be put in the future.
- Continue to hammer on i64 operations and test cases; ensure that the only
  remaining problem will be i64 mul.

llvm-svn: 61784
2009-01-06 03:36:14 +00:00
Dan Gohman
cf1ac86514 Delete this test; it's a duplicate of 2006-07-03-schedulers.ll.
llvm-svn: 61781
2009-01-06 01:36:23 +00:00
Dan Gohman
1cdb677fc8 Use a latency value of 0 for the artificial edges inserted by
AddPseudoTwoAddrDeps. This lets the scheduling infrastructure
avoid recalculating node heights. In very large testcases this
was a major bottleneck. Thanks to Roman Levenstein for finding
this!

As a side effect, fold-pcmpeqd-0.ll is now scheduled better
and it no longer requires spilling on x86-32.

llvm-svn: 61778
2009-01-06 01:19:04 +00:00
Evan Cheng
36e238a4d3 Find loop back edges only after empty blocks are eliminated.
llvm-svn: 61752
2009-01-05 21:17:27 +00:00
Scott Michel
733d5f71a0 CellSPU:
- Teach SPU64InstrInfo.td about the remaining signed comparisons, update tests
  accordingly.

llvm-svn: 61672
2009-01-05 04:05:53 +00:00
Scott Michel
06c324c6c7 CellSPU:
- Add an 8-bit operation test, which doesn't do much at this point.

llvm-svn: 61665
2009-01-05 01:35:22 +00:00
Scott Michel
0d9d939406 CellSPU:
- Fix (brcond (seteq ...)) bug, where BRNZ should have been used vice BRZ.
- Kill unused/unnecessary nodes in SPUNodes.td
- Beef out the i64operations.c test harness to use a lot of unaligned
  loads, test loops and LLVM loop/basic block optimizations; run the
  test harness successfully on real Cell hardware.

llvm-svn: 61664
2009-01-05 01:34:35 +00:00
Dan Gohman
2a079de3f5 Fix a DAGCombiner abort on an invalid shift count constant. This fixes PR3250.
llvm-svn: 61613
2009-01-03 19:22:06 +00:00
Scott Michel
0309418000 CellSPU:
- Remove custom lowering for BRCOND
- Add remaining functionality for branches in SPUInstrInfo, such as branch
  condition reversal and load/store folding. Updated BrCond test to reflect
  branch reversal.

llvm-svn: 61597
2009-01-03 00:27:53 +00:00