mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-28 06:22:51 +01:00
Commit Graph

16639 Commits

Author SHA1 Message Date
Owen Anderson
d5c17d5981 Improve comment.
llvm-svn: 121272
2010-12-08 19:31:11 +00:00
Jim Grosbach
16cdaf34bc Add initializer.
llvm-svn: 121262
2010-12-08 15:36:45 +00:00
Evan Cheng
5582f058f4 Add comments.
llvm-svn: 121238
2010-12-08 06:29:02 +00:00
Bill Wendling
dac1f24d98 Add support for loading from a constant pool.
llvm-svn: 121226
2010-12-08 01:57:09 +00:00
Jim Grosbach
d4eea7c10d Let target asm backends see assembler flags as they go by. Use that to handle
Thumb vs. ARM mode differences in WriteNopData().

llvm-svn: 121219
2010-12-08 01:16:55 +00:00
Owen Anderson
d00dc39a11 Simplify the byte reordering logic slightly.
llvm-svn: 121216
2010-12-08 00:21:33 +00:00
Owen Anderson
ba5edcfe05 VLDR fixups need special handling under Thumb. While the encoding is the same,
the order of the bytes in the data stream is flipped around.

llvm-svn: 121215
2010-12-08 00:18:36 +00:00
Matt Beaumont-Gay
5e680ad101 Fix a warning about a variable which is only used in an assertion.
llvm-svn: 121206
2010-12-07 23:26:21 +00:00
Bill Wendling
45bdb13970 Cleanup in the Darwin end. No functionality change.
llvm-svn: 121198
2010-12-07 23:11:00 +00:00
Evan Cheng
3bd9b95b4d Fix a bad prologue / epilogue codegen bug where the compiler would emit illegal
vpush instructions to save / restore VFP / NEON registers like this:
vpush {d8,d10,d11}
vpop {d8,d10,d11}

vpush and vpop do not allow gaps in the register list.
rdar://8728956

llvm-svn: 121197
2010-12-07 23:08:38 +00:00
Bill Wendling
4399d09458 A bit of cleanup: early exit ApplyFixup and cache the Fixup offset. No
functionality change.

llvm-svn: 121195
2010-12-07 23:05:20 +00:00
Jim Grosbach
77b631549c Binary encoding for ARM tLDRspi and tSTRspi.
llvm-svn: 121186
2010-12-07 21:50:47 +00:00
Owen Anderson
a23e10f29d Fix Thumb2 encoding of the S bit.
llvm-svn: 121182
2010-12-07 20:50:15 +00:00
Jim Grosbach
1aa6a676cf Refactor the ARM CMPz* patterns to just use the normal CMP instructions when
possible. They were duplicates for everything except the source pattern before.

llvm-svn: 121179
2010-12-07 20:41:06 +00:00
Evan Cheng
9af09ebf8b Code clean up; no functionality change.
llvm-svn: 121176
2010-12-07 20:11:46 +00:00
Evan Cheng
0295c17fbc Code clean up; no functionality change.
llvm-svn: 121172
2010-12-07 19:59:34 +00:00
Bruno Cardoso Lopes
e11d870459 Remove the target-specific node MipsISD::CMov, which is not used because all conditional moves are directly matched using tablegen patterns. If there's a need in the future, we can introduce it again.
llvm-svn: 121164
2010-12-07 19:04:14 +00:00
Bruno Cardoso Lopes
0e14644599 Match a pattern generated by a dag combiner opt where:
(select (load (load tga0)) (load tga1)) => (load (select (load tga0) tga1))

Thanks to Akira for pointing that out.

llvm-svn: 121163
2010-12-07 19:00:20 +00:00
Jim Grosbach
c99517ecc6 Encode the literal field for tCMPzi instruction.
llvm-svn: 121153
2010-12-07 17:48:24 +00:00
Benjamin Kramer
fb17a54866 Add parens to pacify gcc.
llvm-svn: 121142
2010-12-07 15:50:35 +00:00
Jay Foad
79e18ed269 PR5207: Change APInt methods trunc(), sext(), zext(), sextOrTrunc() and
zextOrTrunc(), and APSInt methods extend(), extOrTrunc() and new method
trunc(), to be const and to return a new value instead of modifying the
object in place.

llvm-svn: 121120
2010-12-07 08:25:19 +00:00
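A minimal usage sketch of the value-returning API described here (the variable
names and bit widths are illustrative, not taken from the patch):

	#include "llvm/ADT/APInt.h"
	using llvm::APInt;

	// The widening/narrowing helpers are const and leave the original
	// APInt untouched, returning a new value instead.
	APInt roundTrip() {
	  APInt Wide(64, 0x12345678ULL); // a 64-bit value
	  APInt Narrow = Wide.trunc(32); // fresh 32-bit APInt; Wide is unchanged
	  return Narrow.zext(64);        // zero-extend into a fresh 64-bit APInt
	}
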
NAKAMURA Takumi
7947c9770a lib/Target/X86/X86MCAsmInfo.cpp: [PR8741] On Win64, specify explicit PrivateGlobalPrefix as ".L".
Otherwise, global symbols such as @Lxxxx might be treated as temporary symbols by MCSymbol.

llvm-svn: 121103
2010-12-07 02:43:45 +00:00
Owen Anderson
81f8b084e6 Second attempt at converting Thumb2's LDRpci, including updating the gazillion places that need to know about it.
llvm-svn: 121082
2010-12-07 00:45:21 +00:00
Jim Grosbach
2d361b9318 Add fixup for Thumb1 BL/BLX instructions.
llvm-svn: 121072
2010-12-06 23:57:07 +00:00
Wesley Peck
ffdbf99b57 Adding a bug fix that was supposed to be part of 121044.
patch contributed by Jack Whitham!

llvm-svn: 121049
2010-12-06 22:19:28 +00:00
Wesley Peck
996c76c27e Fixed reversed operands for IDIV and CMP instructions in MBlaze backend.
Use BRAD instead of BRD for indirect branches in MBlaze backend.

patch contributed by Jack Whitham!

llvm-svn: 121044
2010-12-06 22:06:49 +00:00
Wesley Peck
b168ddedaa Fix a 16-bit immediate value detection bug in the MBlaze delay slot filler.
Address more hazards in the MBlaze delay slot filler.

patch contributed by Jack Whitham!

llvm-svn: 121037
2010-12-06 21:11:01 +00:00
Rafael Espindola
65c25aef87 Remove the instruction-fragment-to-data-fragment lowering since it was causing
freed data to be read. I will open a bug to track reenabling it.

llvm-svn: 121028
2010-12-06 19:08:48 +00:00
Owen Anderson
8e9cb84ea2 Revert r121021, which broke the buildbots.
llvm-svn: 121026
2010-12-06 18:57:40 +00:00
Jim Grosbach
f2e0e808ba Trailing whitespace.
llvm-svn: 121024
2010-12-06 18:47:44 +00:00
Owen Anderson
0c51a02230 Improve handling of Thumb2 PC-relative loads by converting LDRpci (and friends) to Pseudos.
llvm-svn: 121021
2010-12-06 18:35:51 +00:00
Jim Grosbach
6c27b4f3cf Encode the register operand of ARM CondCode operands correctly. ARM::CPSR if
the instruction is predicated, reg0 otherwise.

llvm-svn: 121020
2010-12-06 18:30:57 +00:00
Jim Grosbach
c79c6290ee The ARM AsmMatcher needs to know that the CCOut operand is a register value,
not an immediate. It stores either ARM::CPSR or reg0.

llvm-svn: 121018
2010-12-06 18:21:12 +00:00
Rafael Espindola
3e954d16f4 Second try at making direct object emission produce the same results
as llc + llvm-mc. This time ELF is not changed, and I tested that llvm-gcc
bootstraps on darwin10 using darwin9's assembler and linker.

llvm-svn: 121006
2010-12-06 17:27:56 +00:00
Che-Liang Chiou
cd2878d421 ptx: add shift instructions
llvm-svn: 120982
2010-12-06 04:00:03 +00:00
Evan Cheng
4d9d54e44e Eliminate unneeded #include's.
llvm-svn: 120971
2010-12-05 23:41:43 +00:00
NAKAMURA Takumi
594d4094ca ARM/CMakeLists.txt: Add missing MLxExpansionPass.cpp since r120960.
llvm-svn: 120966
2010-12-05 23:08:57 +00:00
Evan Cheng
12561e250d Code clean up.
llvm-svn: 120965
2010-12-05 23:03:45 +00:00
Evan Cheng
854ec53564 Remove an unused variable.
llvm-svn: 120964
2010-12-05 23:03:35 +00:00
Evan Cheng
fc78767730 Making use of VFP / NEON floating point multiply-accumulate / subtraction is
difficult on current ARM implementations for a few reasons.
1. Even though a single vmla has latency that is one cycle shorter than a pair
   of vmul + vadd, a RAW hazard during the first few cycles (4? on Cortex-A8) can
   cause an additional pipeline stall. So it's frequently better to simply
   codegen vmul + vadd.
2. A vmla followed by a vmul, vmadd, or vsub causes the second fp instruction to
   stall for 4 cycles. We need to schedule them apart.
3. A vmla followed by a vmla is a special case. Obviously, issuing back-to-back
   RAW vmla + vmla is very bad. But this isn't ideal either:
     vmul
     vadd
     vmla
   Instead, we want to expand the second vmla:
     vmla
     vmul
     vadd
   Even with the 4 cycle vmul stall, the second sequence is still 2 cycles
   faster.

Up to now, isel has simply avoided codegen'ing fp vmla / vmls. This works well enough
but it isn't the optimal solution. This patch attempts to make it possible to
use vmla / vmls in cases where it is profitable.

A. Add missing isel predicates which cause vmla to be codegen'ed.
B. Make sure the fmul in (fadd (fmul)) has a single use. We don't want to
   compute a fmul and a fmla.
C. Add additional isel checks for vmla, avoid cases where vmla is feeding into
   fp instructions (except for the #3 exceptional case).
D. Add ARM hazard recognizer to model the vmla / vmls hazards.
E. Add a special pre-regalloc case to expand vmla / vmls when it's likely the
   vmla / vmls will trigger one of the special hazards.

Work in progress, only A+B are enabled.

llvm-svn: 120960
2010-12-05 22:04:16 +00:00
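A rough source-level illustration of the trade-off described above (this example
is assumed, not taken from the patch): each statement is a floating-point
multiply-accumulate on the same accumulator; issued back to back as two vmla
instructions they hit the RAW hazard from point 3, while expanding the second
into vmul + vadd keeps the accumulate chain but hides the stall.

	// Hypothetical input: two dependent MACs on one accumulator.
	float mac_chain(float a, float b, float c, float d, float acc) {
	  acc += a * b; // first MAC: fine as a single vmla
	  acc += c * d; // second MAC: profitable to expand into vmul + vadd
	  return acc;
	}
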
Chris Lattner
e30adfb732 Teach X86ISelLowering that the second result of X86ISD::UMUL is a flags
result.  This allows us to compile:

void *test12(long count) {
      return new int[count];
}

into:

test12:
	movl	$4, %ecx
	movq	%rdi, %rax
	mulq	%rcx
	movq	$-1, %rdi
	cmovnoq	%rax, %rdi
	jmp	__Znam                  ## TAILCALL

instead of:

test12:
	movl	$4, %ecx
	movq	%rdi, %rax
	mulq	%rcx
	seto	%cl
	testb	%cl, %cl
	movq	$-1, %rdi
	cmoveq	%rax, %rdi
	jmp	__Znam

Of course it would be even better if the regalloc inverted the cmov to 'cmovoq',
which would eliminate the need for the 'movq %rdi, %rax'.

llvm-svn: 120936
2010-12-05 07:49:54 +00:00
Chris Lattner
76601e7a99 it turns out that when ".with.overflow" intrinsics were added to the X86
backend, they were all implemented except umul.  This one fell back
to the default implementation that did a hi/lo multiply and compared the
top.  Fix this to check the overflow flag that the 'mul' instruction
sets, so we can avoid an explicit test.  Now we compile:

void *func(long count) {
      return new int[count];
}

into:

__Z4funcl:                              ## @_Z4funcl
	movl	$4, %ecx                ## encoding: [0xb9,0x04,0x00,0x00,0x00]
	movq	%rdi, %rax              ## encoding: [0x48,0x89,0xf8]
	mulq	%rcx                    ## encoding: [0x48,0xf7,0xe1]
	seto	%cl                     ## encoding: [0x0f,0x90,0xc1]
	testb	%cl, %cl                ## encoding: [0x84,0xc9]
	movq	$-1, %rdi               ## encoding: [0x48,0xc7,0xc7,0xff,0xff,0xff,0xff]
	cmoveq	%rax, %rdi              ## encoding: [0x48,0x0f,0x44,0xf8]
	jmp	__Znam                  ## TAILCALL

instead of:

__Z4funcl:                              ## @_Z4funcl
	movl	$4, %ecx                ## encoding: [0xb9,0x04,0x00,0x00,0x00]
	movq	%rdi, %rax              ## encoding: [0x48,0x89,0xf8]
	mulq	%rcx                    ## encoding: [0x48,0xf7,0xe1]
	testq	%rdx, %rdx              ## encoding: [0x48,0x85,0xd2]
	movq	$-1, %rdi               ## encoding: [0x48,0xc7,0xc7,0xff,0xff,0xff,0xff]
	cmoveq	%rax, %rdi              ## encoding: [0x48,0x0f,0x44,0xf8]
	jmp	__Znam                  ## TAILCALL

Other than the silly seto+test, this is using the o bit directly, so it's going in the right
direction.

llvm-svn: 120935
2010-12-05 07:30:36 +00:00
Chris Lattner
16bafb2414 generalize the previous check to handle -1 on either side of the
select, inserting a not to compensate.  Add a missing isZero check
that I lost somehow.

This improves codegen of:

void *func(long count) {
      return new int[count];
}

from:

__Z4funcl:                              ## @_Z4funcl
	movl	$4, %ecx                ## encoding: [0xb9,0x04,0x00,0x00,0x00]
	movq	%rdi, %rax              ## encoding: [0x48,0x89,0xf8]
	mulq	%rcx                    ## encoding: [0x48,0xf7,0xe1]
	testq	%rdx, %rdx              ## encoding: [0x48,0x85,0xd2]
	movq	$-1, %rdi               ## encoding: [0x48,0xc7,0xc7,0xff,0xff,0xff,0xff]
	cmoveq	%rax, %rdi              ## encoding: [0x48,0x0f,0x44,0xf8]
	jmp	__Znam                  ## TAILCALL
                                        ## encoding: [0xeb,A]

to:

__Z4funcl:                              ## @_Z4funcl
	movl	$4, %ecx                ## encoding: [0xb9,0x04,0x00,0x00,0x00]
	movq	%rdi, %rax              ## encoding: [0x48,0x89,0xf8]
	mulq	%rcx                    ## encoding: [0x48,0xf7,0xe1]
	cmpq	$1, %rdx                ## encoding: [0x48,0x83,0xfa,0x01]
	sbbq	%rdi, %rdi              ## encoding: [0x48,0x19,0xff]
	notq	%rdi                    ## encoding: [0x48,0xf7,0xd7]
	orq	%rax, %rdi              ## encoding: [0x48,0x09,0xc7]
	jmp	__Znam                  ## TAILCALL
                                        ## encoding: [0xeb,A]

llvm-svn: 120932
2010-12-05 02:00:51 +00:00
Chris Lattner
474ed0aa9b Improve an integer select optimization in two ways:
1. generalize 
    (select (x == 0), -1, 0) -> (sign_bit (x - 1))
to:
    (select (x == 0), -1, y) -> (sign_bit (x - 1)) | y

2. Handle the identical pattern that happens with !=:
   (select (x != 0), y, -1) -> (sign_bit (x - 1)) | y

cmov is often high latency and can't fold immediates or
memory operands.  For example, for (x == 0) ? -1 : 1, before 
we got:

< 	testb	%sil, %sil
< 	movl	$-1, %ecx
< 	movl	$1, %eax
< 	cmovel	%ecx, %eax

now we get:

> 	cmpb	$1, %sil
> 	sbbl	%eax, %eax
> 	orl	$1, %eax

llvm-svn: 120929
2010-12-05 01:23:24 +00:00
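A source-level sketch of the identity being exploited (an assumed illustration,
not code from the patch): the mask is all ones exactly when x == 0, so OR-ing it
with y produces the selected value without a cmov.

	// (x == 0) ? -1 : y  computed branchlessly as  mask | y
	long select_eq_zero(long x, long y) {
	  long mask = -(long)(x == 0); // all ones when x == 0, zero otherwise
	  return mask | y;             // -1 when x == 0, y otherwise
	}
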
Bill Wendling
2b53c0830d Initialize HasPOPCNT.
llvm-svn: 120923
2010-12-04 23:57:24 +00:00
Benjamin Kramer
851691ddb2 Add patterns for the x86 popcnt instruction.
- Also adds a new POPCNT subtarget feature that is currently enabled if the target
  supports SSE4.2 (Nehalem) or SSE4A (Barcelona).

llvm-svn: 120917
2010-12-04 20:32:23 +00:00
Benjamin Kramer
77faee6ba1 Simplify code. No functionality change.
llvm-svn: 120907
2010-12-04 14:22:24 +00:00
Bob Wilson
20c65a9d33 The Thumb tADDrSPi instruction is not valid when the destination is SP.
Check for that and try narrowing it to tADDspi instead.  Radar 8724703.

llvm-svn: 120892
2010-12-04 04:40:19 +00:00
Rafael Espindola
9215947c83 There are two reasons why we might want to use
foo = a - b
.long foo
instead of just
.long a - b

First, on darwin9 64-bit the assembler produces the wrong result. Second,
if "a" is the end of the section, all darwin assemblers (9, 10 and mc) will not
consider a - b to be a constant, but will if the dummy foo is created.

Split how we handle these cases. The first one is something MC should take care
of. The second one has to be handled by the caller.

llvm-svn: 120889
2010-12-04 03:21:47 +00:00
Jim Grosbach
4c10ffa7c3 Encode condition code for Thumb1 conditional branch instruction.
llvm-svn: 120865
2010-12-04 00:20:40 +00:00