mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-19 11:02:59 +02:00
Commit Graph

46574 Commits

Author SHA1 Message Date
Daniel Dunbar
6a52508cb9 Target: Eliminate a use of getDarwinMajorNumber().
llvm-svn: 129803
2011-04-19 20:44:08 +00:00
Daniel Dunbar
140e365c49 CodeGen: Eliminate a use of getDarwinMajorNumber().
- There is a minor semantic change here (evidenced by the test change) for
   Darwin triples that have no version component. I debated changing the default
   behavior of isOSVersionLT, but decided it made more sense for triples to be
   explicit.
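
Illustrative only (not part of the commit): a sketch of how the generalized
Triple version queries are typically used; exact signatures at this revision
may differ slightly.

	// Sketch: querying the OS version encoded in a Darwin triple.
	#include "llvm/ADT/Triple.h"
	using namespace llvm;

	void example() {
	  Triple T("x86_64-apple-darwin10");
	  unsigned Major, Minor, Micro;
	  T.getOSVersion(Major, Minor, Micro);   // darwin10 -> 10.0.0 (assumed)
	  // A triple with no version component reports version 0, so callers
	  // have to be explicit about what they actually want to compare.
	  if (T.isOSVersionLT(9)) {
	    // conservatively target the older runtime
	  }
	}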

llvm-svn: 129802
2011-04-19 20:32:39 +00:00
Daniel Dunbar
e24d3d1bc5 ADT/Triple: Generalize and simplify getDarwinNumber to just be getOSVersion.
llvm-svn: 129799
2011-04-19 20:24:34 +00:00
Daniel Dunbar
ce4e39d010 ADT/Triple: Add support for more explicit "osx" and "ios" OS names.
llvm-svn: 129798
2011-04-19 20:19:27 +00:00
Stuart Hastings
89cb281cf8 Delete unnecessary variable. <rdar://problem/7662569>
llvm-svn: 129796
2011-04-19 20:09:38 +00:00
Eric Christopher
21ad2325df Remove some duplicate op action entries and reorganize.
llvm-svn: 129781
2011-04-19 18:49:19 +00:00
Bob Wilson
3daeb462cb This patch combines several changes from Evan Cheng for rdar://8659675.
Making use of VFP / NEON floating point multiply-accumulate / subtraction is
difficult on current ARM implementations for a few reasons.
1. Even though a single vmla has latency that is one cycle shorter than a pair
   of vmul + vadd, a RAW hazard during the first few cycles (4? on Cortex-A8)
   can cause an additional pipeline stall. So it's frequently better to simply
   codegen vmul + vadd.
2. A vmla followed by a vmul, vadd, or vsub causes the second fp instruction to
   stall for 4 cycles. We need to schedule them apart.
3. A vmla followed by a vmla is a special case. Obviously, issuing back-to-back
   RAW vmla + vmla is very bad. But this isn't ideal either:
     vmul
     vadd
     vmla
   Instead, we want to expand the second vmla:
     vmla
     vmul
     vadd
   Even with the 4 cycle vmul stall, the second sequence is still 2 cycles
   faster.

Up to now, isel simply avoided codegen'ing fp vmla / vmls. This works well enough
but it isn't the optimal solution. This patch attempts to make it possible to
use vmla / vmls in cases where it is profitable.

A. Add missing isel predicates which cause vmla to be codegen'ed.
B. Make sure the fmul in (fadd (fmul)) has a single use. We don't want to
   compute both a fmul and a fmla.
C. Add additional isel checks for vmla, avoiding cases where vmla is feeding into
   fp instructions (except for the #3 exceptional case).
D. Add ARM hazard recognizer to model the vmla / vmls hazards.
E. Add a special pre-regalloc case to expand vmla / vmls when it's likely the
   vmla / vmls will trigger one of the special hazards.

Enable these fp vmlx codegen changes for Cortex-A9.
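
Illustrative only, not part of the patch: the source-level shape involved.  The
fadd-of-fmul pattern below is what isel may now turn into a single vmla, and
point B above is why the fmul must have no other users.

	// a * b + c gives the (fadd (fmul a, b), c) DAG that can become one vmla.
	float mla(float a, float b, float c) {
	  return a * b + c;
	}

	// Here the fmul has a second user, so a vmla would not save the vmul;
	// the multiply-accumulate form is not profitable.
	float mla_and_mul(float a, float b, float c, float *prod) {
	  float p = a * b;
	  *prod = p;
	  return p + c;
	}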

llvm-svn: 129775
2011-04-19 18:11:57 +00:00
Bob Wilson
56f64ab701 Add -mcpu=cortex-a9-mp. It's cortex-a9 with MP extension. rdar://8648637.
llvm-svn: 129774
2011-04-19 18:11:52 +00:00
Bob Wilson
0cbbc50f26 Avoid some 16-bit 's' instructions which partially update CPSR
(and add a false dependency) when they aren't dependent on the last CPSR-defining
instruction. rdar://8928208

llvm-svn: 129773
2011-04-19 18:11:49 +00:00
Bob Wilson
886994b683 Avoid write-after-write issue hazards for Cortex-A9.
Add an avoidWriteAfterWrite() target hook to identify register classes that
suffer from write-after-write hazards. For those register classes, try to avoid
writing the same register in two consecutive instructions.

This is currently disabled by default.  We should not spill to avoid hazards!
The command line flag -avoid-waw-hazard can be used to enable waw avoidance.
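
Toy model, not LLVM code (names and shapes assumed): the idea is that for
hazard-prone register classes, the chooser picks a different destination
register when the previous instruction wrote the preferred one.

	struct RegClass { bool WAWHazard; unsigned NumRegs; };

	// Sketch of the target hook: which classes suffer write-after-write hazards.
	static bool avoidWriteAfterWrite(const RegClass &RC) { return RC.WAWHazard; }

	// Sketch of a consumer: rotate away from the register written last time.
	static unsigned chooseDef(const RegClass &RC, unsigned Preferred,
	                          unsigned LastWritten) {
	  if (avoidWriteAfterWrite(RC) && Preferred == LastWritten)
	    return (Preferred + 1) % RC.NumRegs;
	  return Preferred;
	}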

llvm-svn: 129772
2011-04-19 18:11:45 +00:00
Bob Wilson
48d5451029 Some single-precision VFP instructions can execute in either the VFP or NEON
pipeline, at least on Cortex-A9.

llvm-svn: 129771
2011-04-19 18:11:38 +00:00
Bob Wilson
d83b95fd68 Improvements for the Cortex-A9 scheduling itineraries.
llvm-svn: 129770
2011-04-19 18:11:36 +00:00
Eli Friedman
01f94bd648 Add support for FastISel'ing varargs calls.
llvm-svn: 129765
2011-04-19 17:22:22 +00:00
Jakob Stoklund Olesen
dceb96c62d Force the greedy register allocator to be linked alongside linear scan.
This means that the new register allocator can be used with 'clang -mllvm -regalloc=greedy'.

llvm-svn: 129764
2011-04-19 17:17:58 +00:00
Eli Friedman
bbf7d2ac38 SelectBasicBlock is rather slow even when it doesn't do anything; skip the
unnecessary work where possible.

llvm-svn: 129763
2011-04-19 17:01:08 +00:00
Stuart Hastings
f838ea4959 Support nested CALLSEQ_BEGIN/END; necessary for ARM byval support. <rdar://problem/7662569>
llvm-svn: 129761
2011-04-19 16:16:58 +00:00
Jay Foad
0dcd432074 Trivial simplification.
llvm-svn: 129759
2011-04-19 15:23:29 +00:00
Chris Lattner
f15db6c86f Implement support for x86 fastisel of small fixed-sized memcpys, which are generated
en masse for C++ PODs.  On my c++ test file, this cuts the fast isel rejects by 10x
and shrinks the generated .s file by 5%.
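
Illustrative only: the kind of code that produces these memcpys.  Copying a
small POD aggregate lowers to a fixed-size memcpy intrinsic, which x86 fast
isel can now emit inline instead of falling back to SelectionDAG.

	struct Point { int x, y, z; };          // trivially copyable POD

	void copy(Point &dst, const Point &src) {
	  dst = src;                            // becomes a 12-byte memcpy
	}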

llvm-svn: 129755
2011-04-19 05:52:03 +00:00
Chris Lattner
ec5a480dca tidy up
llvm-svn: 129753
2011-04-19 05:15:59 +00:00
Chris Lattner
7d07af0bf2 Implement support for fast isel of calls of i1 arguments, even though they are illegal,
when they are a truncate from something else.  This eliminates fully half of all the 
fastisel rejections on a test c++ file I'm working with, which should make a substantial
improvement for -O0 compiles of c++ code.

This fixed rdar://9297003 - fast isel bails out on all functions taking bools
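
Illustrative only (hypothetical source): at -O0, a call passing a bool argument
like this previously made fast isel bail out.

	void use(bool b);

	void f(int x) {
	  use(x != 0);   // i1 call argument, now handled by fast isel
	}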

llvm-svn: 129752
2011-04-19 05:09:50 +00:00
Chris Lattner
3c4af7bfee Handle i1/i8/i16 constant integer arguments to calls by prepromoting them.
Before we would bail out on i1 arguments altogether; now we just bail on
non-constant ones.  Also, we used to emit extraneous code.  E.g. test12 was:

	movb	$0, %al
	movzbl	%al, %edi
	callq	_test12

and test13 was:
	movb	$0, %al
	xorl	%edi, %edi
	movb	%al, 7(%rsp)
	callq	_test13f

Now we get:

	movl	$0, %edi
	callq	_test12
and:
	movl	$0, %edi
	callq	_test13f

llvm-svn: 129751
2011-04-19 04:42:38 +00:00
Chris Lattner
87b2a0ab2a be layout aware, to produce:
	testb	$1, %al
	je	LBB0_2
## BB#1:                                ## %if.then
	movb	$0, %al

instead of:

	testb	$1, %al
	jne	LBB0_1
	jmp	LBB0_2
LBB0_1:                                 ## %if.then
	movb	$0, %al

how 'bout that.

llvm-svn: 129749
2011-04-19 04:26:32 +00:00
Chris Lattner
d259570b73 fix rdar://9297006 - fast isel bails out on trunc to i1 -> bools cry,
a common cause of fast isel rejects on c++ code.

llvm-svn: 129748
2011-04-19 04:22:17 +00:00
Evan Cheng
e232ee2466 Change the A9 scheduling itineraries' VLD* / VST* entries to default to "aligned". That
is, assume addresses are 64-bit aligned (which should be the more common
case). If the address is found not to be aligned, then getOperandLatency()
adjusts the operand latency computation by one to compensate for it.
rdar://9294833
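
Toy model of the adjustment, not the actual ARM code: itinerary entries now
assume a 64-bit-aligned access, and a known-unaligned access costs one extra
cycle of operand latency.

	// Sketch: default VLD/VST operand latency vs. the unaligned penalty.
	static unsigned vldOperandLatency(unsigned ItineraryLatency,
	                                  unsigned AlignBytes) {
	  return AlignBytes >= 8 ? ItineraryLatency : ItineraryLatency + 1;
	}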

llvm-svn: 129742
2011-04-19 01:21:49 +00:00
Evan Cheng
56c151cba9 Do not lose mem_operands while lowering VLD / VST intrinsics.
llvm-svn: 129738
2011-04-19 00:04:03 +00:00
Devang Patel
4090ab2ed7 Use ArrayRef variants.
llvm-svn: 129735
2011-04-18 23:51:03 +00:00
Ted Kremenek
7cac5a8369 Add BumpPtrAllocator::getTotalMemory() to allow clients to query how much memory a BumpPtrAllocator allocated.
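
A minimal usage sketch (not from the commit):

	#include "llvm/Support/Allocator.h"

	void example() {
	  llvm::BumpPtrAllocator Alloc;
	  void *P = Alloc.Allocate(128, 16);       // 128 bytes, 16-byte aligned
	  size_t Bytes = Alloc.getTotalMemory();   // total memory backing the slabs
	  (void)P; (void)Bytes;
	}
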
llvm-svn: 129727
2011-04-18 22:44:46 +00:00
Jim Grosbach
0427f5dec9 Trim a few unneeded includes.
llvm-svn: 129723
2011-04-18 21:35:54 +00:00
Eric Christopher
bd8bbe5934 Invert the meaning of printAliasInstr's return value. It now returns
true on success and false on failure. Update callers.
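
Sketch of the updated caller pattern (class and signatures are hypothetical,
mirroring the tablegen'd printer interface):

	#include "llvm/MC/MCInst.h"
	#include "llvm/Support/raw_ostream.h"
	using namespace llvm;

	struct MyInstPrinter {
	  bool printAliasInstr(const MCInst *MI, raw_ostream &OS);  // true on success
	  void printInstruction(const MCInst *MI, raw_ostream &OS);

	  void printInst(const MCInst *MI, raw_ostream &OS) {
	    if (!printAliasInstr(MI, OS))     // inverted: true now means "printed"
	      printInstruction(MI, OS);       // fall back to the default printer
	  }
	};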

llvm-svn: 129722
2011-04-18 21:28:11 +00:00
Eli Friedman
b306371396 Simplify declarations slightly by using typedefs.
llvm-svn: 129720
2011-04-18 21:21:37 +00:00
Eli Friedman
9009047cff malloc elimination: it's a bad idea to use raw_svector_ostream on a
small heap-allocated SmallString because it unconditionally forces a malloc.

(Revised version of r129688, with the necessary flush() call.)
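
A sketch of the pattern involved (assumptions noted in the comments):

	#include "llvm/ADT/SmallString.h"
	#include "llvm/Support/raw_ostream.h"

	void example() {
	  // Keep the SmallString on the stack with a reasonably large inline
	  // buffer; attaching raw_svector_ostream to a tiny heap-allocated
	  // SmallString forces a malloc because the stream reserves scratch space.
	  llvm::SmallString<128> Buf;
	  llvm::raw_svector_ostream OS(Buf);
	  OS << "tmp" << 42;
	  OS.flush();                        // required before reading Buf directly
	  llvm::StringRef S = OS.str();      // str() flushes as well
	  (void)S;
	}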

llvm-svn: 129716
2011-04-18 20:54:46 +00:00
Devang Patel
7220c1a021 Reduce clutter in asm output. Do not emit source location as comment for each instruction.
llvm-svn: 129715
2011-04-18 20:26:49 +00:00
Jakob Stoklund Olesen
c2f25578a4 Handle spilling around an instruction that has an early-clobber re-definition of
the spilled register.

This is quite common on ARM now that some stores have early-clobber defines.

llvm-svn: 129714
2011-04-18 20:23:27 +00:00
Sean Callanan
5e7f364b17 Small fix to the ARM AsmParser to ensure that a
superclass variable is instantiated properly.

llvm-svn: 129713
2011-04-18 20:20:44 +00:00
Eric Christopher
e1103d0a86 Fix, in a different way, a bug where we were counting the alias sets as
completely used registers during fast allocation. Now we update
used registers only when we're using that exact register.

Fixes rdar://9207598

llvm-svn: 129711
2011-04-18 19:26:25 +00:00
Chandler Carruth
af6432924d Mark some functions as used which are used within debug-only code. This
silences Clang's -Wunused-function when building in release mode.

llvm-svn: 129709
2011-04-18 18:49:44 +00:00
Chris Lattner
f8f4d3c30a while we're at it, handle 'sdiv exact' of a power of 2 also;
this fixes a few rejects on c++ iterator loops.
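
Illustrative sources (not from the commit): the divisions fast isel can now
lower with a shift instead of punting.

	unsigned udiv8(unsigned x) { return x / 8; }   // udiv by 8 -> logical shift

	// Pointer subtraction emits an 'sdiv exact' by sizeof(int); exact division
	// by a power of two becomes an arithmetic shift (the iterator-loop case).
	long elems(int *first, int *last) { return last - first; }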

llvm-svn: 129694
2011-04-18 07:00:40 +00:00
Chris Lattner
dd2f1ec77c fix rdar://9297011 - udiv by power of two causing fast-isel rejects
llvm-svn: 129693
2011-04-18 06:55:51 +00:00
Chris Lattner
a473329704 Add a new bit that ImmLeaf's can opt into, which allows them to duck out of
the generated FastISel.  X86 doesn't need to generate code to match ADD16ri8 
since ADD16ri will do just fine.  This is a small codesize win in the generated
instruction selector.

llvm-svn: 129692
2011-04-18 06:36:55 +00:00
Eli Friedman
9654b5718d Revert r129688; it's breaking buildbots.
llvm-svn: 129689
2011-04-18 05:54:54 +00:00
Eli Friedman
fe593ec2d0 More malloc elimination: it's a bad idea to use raw_svector_ostream on a
small heap-allocated SmallString because it unconditionally forces a malloc.

llvm-svn: 129688
2011-04-18 05:38:58 +00:00
Eli Friedman
819c450b25 Make the StringMaps attached to MCContext use the MCContext's allocator;
reduces the number of calls to malloc().
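
A minimal sketch of the pattern (assumed shapes; the real members live in
MCContext):

	#include "llvm/ADT/StringMap.h"
	#include "llvm/Support/Allocator.h"

	void example() {
	  llvm::BumpPtrAllocator Alloc;
	  // Entry allocations now come from the shared allocator, not malloc().
	  llvm::StringMap<int, llvm::BumpPtrAllocator &> Symbols(Alloc);
	  Symbols.GetOrCreateValue("foo", 1);
	}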

llvm-svn: 129687
2011-04-18 05:02:31 +00:00
Chris Lattner
93445e092f switch the rest of the x86 immediate patterns over to ImmLeaf,
simplifying them and exposing more information to tblgen.  It would be nice
if other target authors adopted this as well, particularly arm since it has fastisel.

llvm-svn: 129676
2011-04-17 22:12:55 +00:00
Chris Lattner
2d89feb795 now that predicates have a decent abstraction layer on them, introduce a new
kind of predicate: one that is specific to imm nodes.  The predicate function
specified here just checks an int64_t directly instead of messing around with
SDNode's.  The virtue of this is that it means that fastisel and other things
can reason about these predicates.
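
Sketch of the difference (names hypothetical): an SDNode-based predicate only
SelectionDAG can evaluate vs. an immediate predicate that fast isel can also
reason about.

	#include "llvm/CodeGen/SelectionDAGNodes.h"
	using namespace llvm;

	// Old style: needs a ConstantSDNode, so only the DAG matcher can use it.
	static bool isSExt8_DAG(const SDNode *N) {
	  int64_t V = cast<ConstantSDNode>(N)->getSExtValue();
	  return V >= -128 && V <= 127;
	}

	// New ImmLeaf style: checks the raw integer, usable from fast isel too.
	static bool isSExt8_Imm(int64_t Imm) {
	  return Imm >= -128 && Imm <= 127;
	}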

llvm-svn: 129675
2011-04-17 22:05:17 +00:00
Chris Lattner
d776e76c44 Rework our internal representation of node predicates to expose more
structure and fix some fixmes.  We now have a TreePredicateFn class
that handles all of the decoding of these things.  This is an internal
cleanup that has no impact on the code generated by tblgen.

llvm-svn: 129670
2011-04-17 21:38:24 +00:00
Chris Lattner
28eaf6be7f 1. merge fast-isel-shift-imm.ll into fast-isel-x86-64.ll
2. implement rdar://9289501 - fast isel should fold trivial multiplies to shifts
3. teach tblgen to handle shift immediates that are different sizes than the 
   shifted operands, eliminating some code from the X86 fast isel backend.
4. Have FastISel::SelectBinaryOp use (the poorly named) FastEmit_ri_ function
   instead of FastEmit_ri to simplify code.

llvm-svn: 129666
2011-04-17 20:23:29 +00:00
Chris Lattner
bcc20f62ec fix an x86 fast isel issue where we'd completely give up on folding an address
when we have a global variable base and an index.  Instead, just give up on
folding the global variable.

Before we'd generate:

_test:                                  ## @test
## BB#0:
	movq	_rtx_length@GOTPCREL(%rip), %rax
	leaq	(%rax), %rax
	addq	%rdi, %rax
	movzbl	(%rax), %eax
	ret

now we generate:

_test:                                  ## @test
## BB#0:
	movq	_rtx_length@GOTPCREL(%rip), %rax
	movzbl	(%rax,%rdi), %eax
	ret

The difference is even more significant when there is a scale
involved.

This fixes rdar://9289558 - total fail with addr mode formation at -O0/x86-64
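
Roughly the source behind the asm above (reconstructed, not from the commit):

	extern unsigned char rtx_length[];

	unsigned char test(long i) {
	  return rtx_length[i];   // global base plus a register index
	}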

llvm-svn: 129664
2011-04-17 17:47:38 +00:00
Chris Lattner
f9d9976374 fix an oversight which caused us to compile the testcase (and other
less trivial things) into a dummy lea.  Before we generated:

_test:                                  ## @test
	movq	_G@GOTPCREL(%rip), %rax
	leaq	(%rax), %rax
	ret

now we produce:

_test:                                  ## @test
	movq	_G@GOTPCREL(%rip), %rax
	ret

This is part of rdar://9289558

llvm-svn: 129662
2011-04-17 17:12:08 +00:00
Chris Lattner
1f1f7f2742 tidy up and reduce indentation.
llvm-svn: 129661
2011-04-17 17:05:12 +00:00
Chris Lattner
5e00f501ff Fix rdar://9289512 - not folding load into compare at -O0
The basic issue here is that bottom-up isel is matching the branch
and compare, and was failing to fold the load into the branch/compare
combo.  Fixing this (by allowing folding into any instruction of a
sequence that is selected) allows us to produce things like:


cmpb    $0, 52(%rax)
je      LBB4_2

instead of:

movb    52(%rax), %cl
cmpb    $0, %cl
je      LBB4_2

This makes the generated -O0 code run a bit faster, but also speeds up
compile time by putting less pressure on the register allocator and 
generating less code.

This was one of the biggest classes of missing load folding.  Implementing
this shrinks 176.gcc's c-decl.s (as a random example) by about 4% in (verbose-asm)
line count.
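
Illustrative source (hypothetical) matching the asm above: a byte field tested
against zero, where the load now folds into the compare.

	struct Decl { char pad[52]; char flag; };
	void handle(Decl *d);

	void visit(Decl *d) {
	  if (d->flag)            // cmpb $0, 52(%rax); je ...
	    handle(d);
	}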

llvm-svn: 129656
2011-04-17 06:35:44 +00:00