mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-11-01 00:12:50 +01:00
Commit Graph

9 Commits

Author SHA1 Message Date
Chris Lattner
01c6e93ea4 fix rdar://8494845 + PR8244 - a miscompile exposed by my patch in r101350
llvm-svn: 115294
2010-10-01 05:36:09 +00:00
Dan Gohman
6824bfc554 Don't narrow the load and store in a load+twiddle+store sequence unless
there are clearly no stores between the load and the store. This fixes
the miscompile reported as PR7833.

This breaks the test/CodeGen/X86/narrow_op-2.ll optimization, which is
safe, but awkward to prove safe. Move it to X86's README.txt.

llvm-svn: 112861
2010-09-02 21:18:42 +00:00
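
A minimal C sketch of the load+twiddle+store shape and the hazard this
change guards against; the function and variable names are hypothetical,
not taken from the commit or from PR7833. The wide store writes back
every byte it loaded, so a store between the load and the store that may
alias the wide location makes the narrowing unsafe:

    #include <stdint.h>

    void twiddle(uint32_t *p, uint8_t v, uint8_t *q) {
        uint32_t wide = *p;            /* load                     */
        wide = (wide & ~0xFF00u)       /* twiddle: replace byte 1  */
             | ((uint32_t)v << 8);
        *q = 0;  /* if q may alias some other byte of *p, the wide
                    store below clobbers that byte with its stale
                    loaded value, while a narrowed one-byte store
                    would leave it alone -- different results      */
        *p = wide;                     /* store                    */
    }
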
Dan Gohman
a08a9b8a0b Reapply r106634, now that the bug it exposed is fixed.
llvm-svn: 106746
2010-06-24 14:30:44 +00:00
Daniel Dunbar
be50ef88bd Revert r106263, "Fold the ShrinkDemandedOps pass into the regular DAGCombiner pass,"... it was causing both 'file' (with clang) and 176.gcc (with llvm-gcc) to be miscompiled.
llvm-svn: 106634
2010-06-23 17:09:26 +00:00
Dan Gohman
8185674354 Fold the ShrinkDemandedOps pass into the regular DAGCombiner pass,
which is faster, simpler, and less surprising.

llvm-svn: 106263
2010-06-18 01:05:21 +00:00
Evan Cheng
d4fe387eb8 Enable i16 to i32 promotion by default.
llvm-svn: 102493
2010-04-28 08:30:49 +00:00
Chris Lattner
1b7ecfdf60 enhance the load/store narrowing optimization to handle a
TokenFactor in between the load and the store. This allows us to
optimize test7 into:

_test7:                                 ## @test7
## BB#0:                                ## %entry
	movl	(%rdx), %eax
                                        ## kill: SIL<def> ESI<kill>
	movb	%sil, 5(%rdi)
	ret

instead of:

_test7:                                 ## @test7
## BB#0:                                ## %entry
	movl	4(%esp), %ecx
	movl	$-65281, %eax           ## imm = 0xFFFFFFFFFFFF00FF
	andl	4(%ecx), %eax
	movzbl	8(%esp), %edx
	shll	$8, %edx
	addl	%eax, %edx
	movl	12(%esp), %eax
	movl	(%eax), %eax
	movl	%edx, 4(%ecx)
	ret

llvm-svn: 101355
2010-04-15 06:10:49 +00:00
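
A rough C reconstruction of what test7 might look like, inferred from
the assembly above; the signature and names are guesses, not the actual
test. The unrelated load between the wide load and the wide store puts
a TokenFactor on the chain, which the narrowing can now look through:

    #include <stdint.h>

    int test7(uint32_t *w, uint8_t b, uint32_t *other) {
        uint32_t x = w[1];        /* wide load of the word at offset 4 */
        int r = (int)*other;      /* unrelated load: a TokenFactor now
                                     sits between the load and store   */
        w[1] = (x & ~0xFF00u)     /* replace byte 1: narrows to the    */
             | ((uint32_t)b << 8);/* single movb %sil, 5(%rdi) above   */
        return r;
    }
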
Chris Lattner
8c5a5c9094 teach codegen to turn trunc(zextload) into load when possible.
This doesn't occur much at all; it only seems to be formed
when the trunc optimization kicks in due to phase ordering. In that
case it saves a few bytes on x86-32.

llvm-svn: 101350
2010-04-15 05:40:59 +00:00
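
A small C example of the trunc(zextload) shape; this is a hypothetical
illustration, not one of the cases the commit found. Loading a byte
into a wider value is selected as a zextload, and truncating the result
back to i8 can reuse the byte load directly:

    #include <stdint.h>

    uint8_t roundtrip(const uint8_t *p) {
        uint32_t wide = *p;      /* selected as a zextload i8 -> i32  */
        return (uint8_t)wide;    /* trunc(zextload) folds to the load */
    }
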
Chris Lattner
3282f3d34f Implement rdar://7860110 (also in target/readme.txt) narrowing
a load/or/and/store sequence into a narrower store when it is
safe.  Daniel tells me that clang will start producing this sort
of thing with bitfields, and this does trigger a few dozen times
on 176.gcc produced by llvm-gcc even now.

This compiles code like CodeGen/X86/2009-05-28-DAGCombineCrash.ll 
into:

        movl    %eax, 36(%rdi)

instead of:

        movl    $4294967295, %eax       ## imm = 0xFFFFFFFF
        andq    32(%rdi), %rax
        shlq    $32, %rcx
        addq    %rax, %rcx
        movq    %rcx, 32(%rdi)

and each of the testcases into a single store.  Each of them used
to compile into craziness like this:

_test4:
	movl	$65535, %eax            ## imm = 0xFFFF
	andl	(%rdi), %eax
	shll	$16, %esi
	addl	%eax, %esi
	movl	%esi, (%rdi)
	ret

llvm-svn: 101343
2010-04-15 04:48:01 +00:00
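
A hypothetical bitfield example of the kind of source clang lowers to
this load/or/and/store shape; the struct and names are invented for
illustration. Writing one aligned 16-bit field produces load word,
mask, shift, or, store word, which this commit narrows to a single
16-bit store, matching the _test4 sequence above:

    #include <stdint.h>

    struct rec {
        uint32_t lo : 16;   /* low half of the word  */
        uint32_t hi : 16;   /* high half of the word */
    };

    void set_hi(struct rec *r, uint16_t v) {
        /* without narrowing: andl $65535, shll $16, addl, movl;
           with it: one 16-bit store into the high half */
        r->hi = v;
    }
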