mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-21 20:12:56 +02:00
Commit Graph

67801 Commits

Author SHA1 Message Date
Chris Lattner
db6c348f31 Fix PR8728, a miscompilation I recently introduced. When optimizing
memcpy's like:
  memcpy(A, B)
  memcpy(A, C)

we cannot delete the first memcpy as dead if A and C might be aliases.
If so, we actually get:

  memcpy(A, B)
  memcpy(A, A)

which is not correct to transform into:

  memcpy(A, A)

This patch was heavily influenced by Jakub Staszak's patch in PR8728, thanks
Jakub!
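
For illustration only (hypothetical function and parameter names, not part of the patch), a small C++ sketch of the shape of code involved; the real check happens on the IR-level memcpy intrinsics, where the optimizer must prove the source of the second copy does not alias the destination before deleting the first copy:

  #include <cstddef>
  #include <cstring>

  // If src2 may alias dst, the second memcpy reads bytes written by the first
  // one, so the first memcpy is not a dead store and must be kept.
  void copy_twice(char *dst, const char *src1, const char *src2, std::size_t n) {
    std::memcpy(dst, src1, n);  // cannot be deleted unless src2 and dst are known not to alias
    std::memcpy(dst, src2, n);
  }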

llvm-svn: 120974
2010-12-06 01:48:06 +00:00
Chris Lattner
83445a6415 add a helper method.
llvm-svn: 120973
2010-12-06 01:01:28 +00:00
Evan Cheng
4d9d54e44e Eliminate unneeded #include's.
llvm-svn: 120971
2010-12-05 23:41:43 +00:00
NAKAMURA Takumi
594d4094ca ARM/CMakeLists.txt: Add missing MLxExpansionPass.cpp since r120960.
llvm-svn: 120966
2010-12-05 23:08:57 +00:00
Evan Cheng
12561e250d Code clean up.
llvm-svn: 120965
2010-12-05 23:03:45 +00:00
Evan Cheng
854ec53564 Remove an unused variable.
llvm-svn: 120964
2010-12-05 23:03:35 +00:00
Cameron Zwarich
f56ba80bb2 Some cleanup before I start committing some incremental progress on
StrongPHIElimination.

llvm-svn: 120961
2010-12-05 22:34:08 +00:00
Evan Cheng
fc78767730 Making use of VFP / NEON floating point multiply-accumulate / subtraction is
difficult on current ARM implementations for a few reasons.
1. Even though a single vmla has a latency that is one cycle shorter than a pair
   of vmul + vadd, a RAW hazard during the first few cycles (4? on Cortex-A8) can
   cause an additional pipeline stall. So it's frequently better to simply codegen
   vmul + vadd.
2. A vmla followed by a vmul, vmadd, or vsub causes the second fp instruction to
   stall for 4 cycles. We need to schedule them apart.
3. A vmla followed by a vmla is a special case. Obviously, issuing back-to-back RAW
   vmla + vmla is very bad. But this isn't ideal either:
     vmul
     vadd
     vmla
   Instead, we want to expand the second vmla:
     vmla
     vmul
     vadd
   Even with the 4 cycle vmul stall, the second sequence is still 2 cycles
   faster.

Up to now, isel has simply avoided codegen'ing fp vmla / vmls. This works well enough,
but it isn't the optimal solution. This patch attempts to make it possible to
use vmla / vmls in cases where it is profitable.

A. Add missing isel predicates which cause vmla to be codegen'ed.
B. Make sure the fmul in (fadd (fmul)) has a single use. We don't want to
   compute a fmul and a fmla.
C. Add additional isel checks for vmla, avoiding cases where vmla is feeding into
   fp instructions (except for the exceptional case described in #3).
D. Add ARM hazard recognizer to model the vmla / vmls hazards.
E. Add a special pre-regalloc case to expand vmla / vmls when it's likely the
   vmla / vmls will trigger one of the special hazards.

Work in progress, only A+B are enabled.
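
As context, a minimal C++ sketch (hypothetical functions, not part of the commit) of the source patterns involved; an fadd or fsub whose fmul operand has a single use is the shape that can now be selected as vmla.f32 / vmls.f32 when the checks above say it is profitable:

  float mac(float acc, float a, float b) {
    return acc + a * b;   // (fadd acc, (fmul a, b)) with a single-use fmul
  }

  float msub(float acc, float a, float b) {
    return acc - a * b;   // (fsub acc, (fmul a, b)), the vmls candidate
  }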

llvm-svn: 120960
2010-12-05 22:04:16 +00:00
Cameron Zwarich
f64c26bb9e Remove the PHIElimination.h header, as it is no longer needed.
llvm-svn: 120959
2010-12-05 21:39:42 +00:00
Frits van Bommel
96efe38470 Clarify some of the differences between indexing with getelementptr and indexing with insertvalue/extractvalue.
llvm-svn: 120957
2010-12-05 20:54:38 +00:00
Frits van Bommel
e390b379ae Fix PR 4170 by having ExtractValueInst::getIndexedType() reject out-of-bounds indexing.
Also add asserts that the indices are valid in InsertValueInst::init(). ExtractValueInst already asserts when constructed with invalid indices.

llvm-svn: 120956
2010-12-05 20:50:26 +00:00
Cameron Zwarich
fbe9e91d97 I forgot to actually remove the FindCopyInsertPoint() declaration from
PHIElimination.h.

llvm-svn: 120953
2010-12-05 19:58:57 +00:00
Cameron Zwarich
cb613dcf69 Remove the SplitCriticalEdge() method declaration from PHIElimination.h. At one
time, this method existed, but now PHIElimination uses the method of the same
name on MachineBasicBlock.

llvm-svn: 120952
2010-12-05 19:54:23 +00:00
Cameron Zwarich
c680f44c1b Move the FindCopyInsertPoint method of PHIElimination to a new standalone
function so that it can be shared with StrongPHIElimination.

llvm-svn: 120951
2010-12-05 19:51:05 +00:00
Frits van Bommel
b95594885e Refactor jump threading.
Should have no functional change other than the order of two transformations that are mutually-exclusive and the exact formatting of debug output.
Internally, it now stores the ConstantInt*s as Constant*s, and actual undef values instead of nulls.

llvm-svn: 120946
2010-12-05 19:06:41 +00:00
Frits van Bommel
4f39797ac2 Remove trailing whitespace.
llvm-svn: 120945
2010-12-05 19:02:47 +00:00
Frits van Bommel
31cf7b99f9 Teach SimplifyCFG to turn
(indirectbr (select cond, blockaddress(@fn, BlockA),
                            blockaddress(@fn, BlockB)))
into
  (br cond, BlockA, BlockB).

llvm-svn: 120943
2010-12-05 18:29:03 +00:00
Chris Lattner
e30adfb732 Teach X86ISelLowering that the second result of X86ISD::UMUL is a flags
result.  This allows us to compile:

void *test12(long count) {
      return new int[count];
}

into:

test12:
	movl	$4, %ecx
	movq	%rdi, %rax
	mulq	%rcx
	movq	$-1, %rdi
	cmovnoq	%rax, %rdi
	jmp	__Znam                  ## TAILCALL

instead of:

test12:
	movl	$4, %ecx
	movq	%rdi, %rax
	mulq	%rcx
	seto	%cl
	testb	%cl, %cl
	movq	$-1, %rdi
	cmoveq	%rax, %rdi
	jmp	__Znam

Of course it would be even better if the regalloc inverted the cmov to 'cmovoq',
which would eliminate the need for the 'movq %rdi, %rax'.

llvm-svn: 120936
2010-12-05 07:49:54 +00:00
Chris Lattner
76601e7a99 it turns out that when ".with.overflow" intrinsics were added to the X86
backend, they were all implemented except umul.  This one fell back
to the default implementation that did a hi/lo multiply and compared the
top.  Fix this to check the overflow flag that the 'mul' instruction
sets, so we can avoid an explicit test.  Now we compile:

void *func(long count) {
      return new int[count];
}

into:

__Z4funcl:                              ## @_Z4funcl
	movl	$4, %ecx                ## encoding: [0xb9,0x04,0x00,0x00,0x00]
	movq	%rdi, %rax              ## encoding: [0x48,0x89,0xf8]
	mulq	%rcx                    ## encoding: [0x48,0xf7,0xe1]
	seto	%cl                     ## encoding: [0x0f,0x90,0xc1]
	testb	%cl, %cl                ## encoding: [0x84,0xc9]
	movq	$-1, %rdi               ## encoding: [0x48,0xc7,0xc7,0xff,0xff,0xff,0xff]
	cmoveq	%rax, %rdi              ## encoding: [0x48,0x0f,0x44,0xf8]
	jmp	__Znam                  ## TAILCALL

instead of:

__Z4funcl:                              ## @_Z4funcl
	movl	$4, %ecx                ## encoding: [0xb9,0x04,0x00,0x00,0x00]
	movq	%rdi, %rax              ## encoding: [0x48,0x89,0xf8]
	mulq	%rcx                    ## encoding: [0x48,0xf7,0xe1]
	testq	%rdx, %rdx              ## encoding: [0x48,0x85,0xd2]
	movq	$-1, %rdi               ## encoding: [0x48,0xc7,0xc7,0xff,0xff,0xff,0xff]
	cmoveq	%rax, %rdi              ## encoding: [0x48,0x0f,0x44,0xf8]
	jmp	__Znam                  ## TAILCALL

Other than the silly seto+test, this is using the o bit directly, so it's going in the right
direction.

llvm-svn: 120935
2010-12-05 07:30:36 +00:00
Chris Lattner
9b4b9e751a fix the rest of the linux miscompares :)
llvm-svn: 120933
2010-12-05 02:08:07 +00:00
Chris Lattner
16bafb2414 generalize the previous check to handle -1 on either side of the
select, inserting a not to compensate.  Add a missing isZero check
that I lost somehow.

This improves codegen of:

void *func(long count) {
      return new int[count];
}

from:

__Z4funcl:                              ## @_Z4funcl
	movl	$4, %ecx                ## encoding: [0xb9,0x04,0x00,0x00,0x00]
	movq	%rdi, %rax              ## encoding: [0x48,0x89,0xf8]
	mulq	%rcx                    ## encoding: [0x48,0xf7,0xe1]
	testq	%rdx, %rdx              ## encoding: [0x48,0x85,0xd2]
	movq	$-1, %rdi               ## encoding: [0x48,0xc7,0xc7,0xff,0xff,0xff,0xff]
	cmoveq	%rax, %rdi              ## encoding: [0x48,0x0f,0x44,0xf8]
	jmp	__Znam                  ## TAILCALL
                                        ## encoding: [0xeb,A]

to:

__Z4funcl:                              ## @_Z4funcl
	movl	$4, %ecx                ## encoding: [0xb9,0x04,0x00,0x00,0x00]
	movq	%rdi, %rax              ## encoding: [0x48,0x89,0xf8]
	mulq	%rcx                    ## encoding: [0x48,0xf7,0xe1]
	cmpq	$1, %rdx                ## encoding: [0x48,0x83,0xfa,0x01]
	sbbq	%rdi, %rdi              ## encoding: [0x48,0x19,0xff]
	notq	%rdi                    ## encoding: [0x48,0xf7,0xd7]
	orq	%rax, %rdi              ## encoding: [0x48,0x09,0xc7]
	jmp	__Znam                  ## TAILCALL
                                        ## encoding: [0xeb,A]

llvm-svn: 120932
2010-12-05 02:00:51 +00:00
Chris Lattner
e1c32a116b relax this to handle linux defaulting to -static.
llvm-svn: 120930
2010-12-05 01:31:13 +00:00
Chris Lattner
474ed0aa9b Improve an integer select optimization in two ways:
1. generalize 
    (select (x == 0), -1, 0) -> (sign_bit (x - 1))
to:
    (select (x == 0), -1, y) -> (sign_bit (x - 1)) | y

2. Handle the identical pattern that happens with !=:
   (select (x != 0), y, -1) -> (sign_bit (x - 1)) | y

cmov is often high latency and can't fold immediates or
memory operands.  For example, for (x == 0) ? -1 : 1, before 
we got:

< 	testb	%sil, %sil
< 	movl	$-1, %ecx
< 	movl	$1, %eax
< 	cmovel	%ecx, %eax

now we get:

> 	cmpb	$1, %sil
> 	sbbl	%eax, %eax
> 	orl	$1, %eax
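
For reference, a hypothetical C++ source (not taken from the commit) that produces the pattern above; a select of -1 against a constant keyed on an x == 0 test is what the cmp/sbb/or sequence now covers:

  int sel(long x) {
    return x == 0 ? -1 : 1;   // previously a test + two movs + cmov
  }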

llvm-svn: 120929
2010-12-05 01:23:24 +00:00
Chris Lattner
c383be807e merge some tests into select.ll and make them more specific.
llvm-svn: 120928
2010-12-05 01:13:58 +00:00
Chris Lattner
7e0a594633 rename test
llvm-svn: 120927
2010-12-05 01:02:23 +00:00
Chris Lattner
1a760f6247 remove two tests that aren't really testing anything.
llvm-svn: 120926
2010-12-05 01:02:13 +00:00
Bill Wendling
2b53c0830d Initialize HasPOPCNT.
llvm-svn: 120923
2010-12-04 23:57:24 +00:00
Rafael Espindola
310b851621 Once the layout is done we don't need to keep updating which fragments are
valid. Addresses will not change.

llvm-svn: 120921
2010-12-04 22:47:22 +00:00
Rafael Espindola
1bf8d261f1 Remember the contents of leb and dwarfline fragments when relaxing. This avoids
having to evaluate the expression again when writing.

llvm-svn: 120920
2010-12-04 21:58:52 +00:00
Cameron Zwarich
5e3c712e67 Remove PHIElimination's private copy of SkipPHIsAndLabels.
llvm-svn: 120918
2010-12-04 20:40:15 +00:00
Benjamin Kramer
851691ddb2 Add patterns for the x86 popcnt instruction.
- Also adds a new POPCNT subtarget feature that is currently enabled if the target
  supports SSE4.2 (nehalem) or SSE4A (barcelona).
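
For illustration (not part of the commit), a tiny C++ example: __builtin_popcount lowers to the llvm.ctpop intrinsic, which the new patterns can select as a single popcnt instruction when the POPCNT subtarget feature is enabled, instead of a bit-twiddling expansion.

  unsigned count_bits(unsigned v) {
    return __builtin_popcount(v);   // single popcnt on POPCNT-capable targets
  }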

llvm-svn: 120917
2010-12-04 20:32:23 +00:00
Bill Wendling
18e834f217 Silence 'may be used uninitialized in this function' warnings. Static analysis
may determine that the variables cannot be used uninitialized, but that might be a bit
too much for the compiler to determine.
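
A minimal sketch (hypothetical code, not from the commit) of the kind of initialization used to quiet the warning:

  // The "= 0" exists only to placate GCC's 'may be used uninitialized' warning;
  // callers are expected to pass a mode the switch covers, but the compiler
  // cannot always prove that.
  int pick(int mode) {
    int V = 0;
    switch (mode) {
    case 0: V = 1; break;
    case 1: V = 2; break;
    }
    return V;
  }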

llvm-svn: 120916
2010-12-04 20:20:34 +00:00
Michael J. Spencer
2cd429339f Support/PathV2: Remove redundant calls to make_error_code.
llvm-svn: 120913
2010-12-04 18:45:32 +00:00
Benjamin Kramer
e2e8053264 APInt: microoptimize a few methods.
llvm-svn: 120912
2010-12-04 18:05:36 +00:00
Benjamin Kramer
ec7938f7af Simplify APInt::getAllOnesValue.
llvm-svn: 120911
2010-12-04 16:37:47 +00:00
Benjamin Kramer
009451fddc Remove unneeded zero arrays.
llvm-svn: 120910
2010-12-04 15:28:22 +00:00
Benjamin Kramer
612c7225ee Apparently APFloat::getZero doesn't like PPCDoubleDoubles.
llvm-svn: 120909
2010-12-04 14:43:08 +00:00
Francois Pichet
2fac2ab94d Disable C++ exception handling on MSVC.
Total size of bin\Release on disk goes from 82.9 MB to 74.2 MB. (~10% saving)

llvm-svn: 120908
2010-12-04 14:30:22 +00:00
Benjamin Kramer
77faee6ba1 Simplify code. No functionality change.
llvm-svn: 120907
2010-12-04 14:22:24 +00:00
Francois Pichet
6aff5fc3be Disable RTTI on Windows.
Total size of bin\Release on disk goes from 83.6 MB to 81.8MB. (~2% saving)

llvm-svn: 120901
2010-12-04 09:42:30 +00:00
Bob Wilson
20c65a9d33 The Thumb tADDrSPi instruction is not valid when the destination is SP.
Check for that and try narrowing it to tADDspi instead.  Radar 8724703.

llvm-svn: 120892
2010-12-04 04:40:19 +00:00
Bob Wilson
5917f1a762 Remove trailing whitespace.
llvm-svn: 120891
2010-12-04 04:40:15 +00:00
Rafael Espindola
9215947c83 There are two reasons why we might want to use
foo = a - b
.long foo
instead of just
.long a - b

First, on 64-bit darwin9 the assembler produces the wrong result. Second,
if "a" is the end of the section, all darwin assemblers (9, 10 and mc) will not
consider a - b to be a constant, but will if the dummy foo is created.

Split how we handle these cases. The first one is something MC should take care
of. The second one has to be handled by the caller.

llvm-svn: 120889
2010-12-04 03:21:47 +00:00
Michael J. Spencer
e541b5fa8c Unittests/Support/PathV2: Add FileSystem tests.
llvm-svn: 120888
2010-12-04 03:18:42 +00:00
Michael J. Spencer
5a2272dcef Support/FileSystem: Add status implementation.
llvm-svn: 120870
2010-12-04 00:32:40 +00:00
Michael J. Spencer
8754a822bb Support/SystemError: Make error_category and error_code auto-bool-conversion-safe.
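
A minimal sketch of the classic "safe bool" idiom, which is assumed here (not verified against the actual change) to be the kind of technique meant: the type converts to a member-function pointer rather than to bool, so it works in boolean contexts without also converting implicitly to integers.

  // Stand-in class name for illustration; the real type is llvm's error_code.
  class error_code_like {
    int Val;
    typedef void (error_code_like::*unspecified_bool_type)() const;
    void true_value() const {}
  public:
    explicit error_code_like(int V) : Val(V) {}
    // "if (ec)" compiles, but "int x = ec;" does not.
    operator unspecified_bool_type() const {
      return Val ? &error_code_like::true_value : 0;
    }
  };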
llvm-svn: 120869
2010-12-04 00:32:24 +00:00
Michael J. Spencer
090a4bd632 Support/Windows/FileSystem: Fix MinGW warnings.
llvm-svn: 120868
2010-12-04 00:32:14 +00:00
Michael J. Spencer
e1360680ce Support/FileSystem: Add file_size implementation.
llvm-svn: 120867
2010-12-04 00:31:48 +00:00
Rafael Espindola
50b6170457 Next step: Only pad debug_line when the target is darwin. Add a FIXME to avoid
doing that if the target is darwin10 or newer.

This fixes
*) Direct object emission was producing objects without the workaround on
   darwin9.
*) Assembly printing was producing objects with the workaround on linux.

llvm-svn: 120866
2010-12-04 00:31:13 +00:00
Jim Grosbach
4c10ffa7c3 Encode condition code for Thumb1 conditional branch instruction.
llvm-svn: 120865
2010-12-04 00:20:40 +00:00