mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-22 12:33:33 +02:00
Commit Graph

5 Commits

Author SHA1 Message Date
Stephen Lin
3ae734a60c Convert CodeGen/*/*.ll tests to use the new CHECK-LABEL for easier debugging. No functionality change and all tests pass after conversion.
This was done with the following sed invocation to catch label lines demarcating function boundaries:
    sed -i '' "s/^;\( *\)\([A-Z0-9_]*\):\( *\)test\([A-Za-z0-9_-]*\):\( *\)$/;\1\2-LABEL:\3test\4:\5/g" test/CodeGen/*/*.ll
which was written conservatively to avoid false positives rather than false negatives. I scanned through all the changes and everything looks correct.
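
For illustration, here is the effect on a hypothetical label line (test_add is a made-up name, not from the commit): CHECK-LABEL makes FileCheck split its input at each labeled function, so a mismatch is reported against the right function instead of cascading into neighboring ones.

    ; Before the conversion:
    ; CHECK: test_add:

    ; After the conversion:
    ; CHECK-LABEL: test_add: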

llvm-svn: 186258
2013-07-13 20:38:47 +00:00
Jakob Stoklund Olesen
ef8527b836 Pseudo CMOV instructions don't clobber EFLAGS.
The explanation about a 0 argument being materialized as an xor is no
longer valid: rematerialization now checks whether EFLAGS is live
before clobbering it.

The code produced by X86TargetLowering::EmitLoweredSelect does not
clobber EFLAGS.

This causes one fewer testb instruction to be generated in the cmov.ll
test case.
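
As a minimal sketch (not the actual contents of cmov.ll), a select on a type with no native cmov illustrates the path through EmitLoweredSelect; the hypothetical IR below is selected to a pseudo CMOV and later expanded into a branch diamond that reads, but does not clobber, EFLAGS:

    define i8 @sel(i1 %cond, i8 %a, i8 %b) nounwind {
      ; x86 has no 8-bit cmov, so isel emits a pseudo CMOV here,
      ; which EmitLoweredSelect expands after instruction selection.
      %r = select i1 %cond, i8 %a, i8 %b
      ret i8 %r
    }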

llvm-svn: 139057
2011-09-02 23:52:55 +00:00
Jakob Stoklund Olesen
32cf19caa6 Fix register-dependent X86 tests.
llvm-svn: 128867
2011-04-05 00:32:44 +00:00
Dan Gohman
570b621976 Add another variant of this test, which found a place where
CodeGen's ComputeMaskedBits was overly conservative when computing
known bits for an ADD.
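
A hedged illustration of the missed fact (this is not the committed test): when the low bits of both addends are known zero, no carries can be generated into those positions, so the same low bits of the sum are known zero as well.

    define i32 @add_known_zero(i32 %x, i32 %y) nounwind {
      %a = and i32 %x, -4    ; bits 0-1 of %a are known zero
      %b = and i32 %y, -4    ; bits 0-1 of %b are known zero
      %s = add i32 %a, %b    ; no carry can reach bits 0-1, so they stay zero
      %r = and i32 %s, 3     ; a precise ComputeMaskedBits folds this to 0
      ret i32 %r
    }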

llvm-svn: 101963
2010-04-21 00:19:28 +00:00
Chris Lattner
6db0f451a7 teach the x86 address matching code to handle
(shl (or x, c), 3) the same as (shl (add x, c), 3)
when x doesn't have any bits from c set.
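
The identity being exploited, shown as a hypothetical reduction (not the PR1135 test itself): when x & c == 0, x | c equals x + c, so (x | c) << 3 equals (x << 3) + (c << 3), and the constant becomes a pure displacement that x86 address matching can fold into the memory operand.

    define void @fold_disp(i32* %p, i32 %x, i32 %v) nounwind {
      %i = shl i32 %x, 2                        ; low two bits of %i are zero
      %j = or i32 %i, 1                         ; %i & 1 == 0, so or acts as add
      %q = getelementptr i32, i32* %p, i32 %j   ; address = %p + 4*%i + 4
      store i32 %v, i32* %q                     ; the +4 folds into the displacement
      ret void
    }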

This finishes off PR1135.  Before this change, we compiled the block to:

LBB0_3:                                 ## %bb
	cmpb	$4, %dl
	sete	%dl
	addb	%dl, %cl
	movb	%cl, %dl
	shlb	$2, %dl
	addb	%r8b, %dl
	shlb	$2, %dl
	movzbl	%dl, %edx
	movl	%esi, (%rdi,%rdx,4)
	leaq	2(%rdx), %r9
	movl	%esi, (%rdi,%r9,4)
	leaq	1(%rdx), %r9
	movl	%esi, (%rdi,%r9,4)
	addq	$3, %rdx
	movl	%esi, (%rdi,%rdx,4)
	incb	%r8b
	decb	%al
	movb	%r8b, %dl
	jne	LBB0_1

Now we produce:

LBB0_3:                                 ## %bb
	cmpb	$4, %dl
	sete	%dl
	addb	%dl, %cl
	movb	%cl, %dl
	shlb	$2, %dl
	addb	%r8b, %dl
	shlb	$2, %dl
	movzbl	%dl, %edx
	movl	%esi, (%rdi,%rdx,4)
	movl	%esi, 8(%rdi,%rdx,4)
	movl	%esi, 4(%rdi,%rdx,4)
	movl	%esi, 12(%rdi,%rdx,4)
	incb	%r8b
	decb	%al
	movb	%r8b, %dl
	jne	LBB0_1

llvm-svn: 101958
2010-04-20 23:18:40 +00:00