mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-11-26 12:43:36 +01:00
Commit Graph

16859 Commits

Author SHA1 Message Date
Venkatraman Govindaraju
3c6418f9fc Multiple SPARC backend fixes: added Y register; updated select_cc, subx, subxcc defs/uses;
and fixed CustomInserter.

llvm-svn: 122607
2010-12-28 20:39:17 +00:00
Chris Lattner
8e3ff12790 add a note from llvmdev
llvm-svn: 122603
2010-12-28 18:45:02 +00:00
Rafael Espindola
e7e67fce10 Add support for the same encodings of the personality function that gnu as
supports.

llvm-svn: 122577
2010-12-27 00:36:05 +00:00
Chris Lattner
a6f2b8316f fix some sort of weird pasto
llvm-svn: 122560
2010-12-26 12:05:11 +00:00
Chris Lattner
eec079a470 add a note
llvm-svn: 122559
2010-12-26 03:53:31 +00:00
Andrew Trick
134b2a5907 Various bits of framework needed for precise machine-level selection
DAG scheduling during isel. Most new functionality is currently
guarded by -enable-sched-cycles and -enable-sched-hazard.

Added InstrItineraryData::IssueWidth field, currently derived from
ARM itineraries, but could be initialized differently on other targets.

Added ScheduleHazardRecognizer::MaxLookAhead to indicate whether it is
active, and if so how many cycles of state it holds.

Added SchedulingPriorityQueue::HasReadyFilter to allow gating entry
into the scheduler's available queue.

ScoreboardHazardRecognizer now accesses the ScheduleDAG in order to
get information about its SUnits, provides RecedeCycle for bottom-up
scheduling, correctly computes scoreboard depth, tracks IssueCount, and
considers potential stall cycles when checking for hazards.

ScheduleDAGRRList now models machine cycles and hazards (under
flags). It tracks MinAvailableCycle, drives the hazard recognizer and
priority queue's ready filter, manages a new PendingQueue, properly
accounts for stall cycles, etc.

llvm-svn: 122541
2010-12-24 05:03:26 +00:00
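
The fields named in this change (IssueWidth, MaxLookAhead, IssueCount) amount to a
scoreboard over a fixed window of future cycles. The sketch below is only a conceptual
illustration of that idea, not the LLVM ScheduleHazardRecognizer API; every name in it
is hypothetical.

  #include <vector>

  struct ToyScoreboard {
    unsigned IssueWidth;             // instructions that may issue per cycle
    unsigned MaxLookAhead;           // cycles of state the scoreboard holds
    unsigned IssueCount = 0;         // instructions issued in the current cycle
    std::vector<unsigned> Busy;      // Busy[i]: reservations i cycles ahead

    ToyScoreboard(unsigned W, unsigned LA)
        : IssueWidth(W), MaxLookAhead(LA), Busy(LA, 0) {}

    bool isActive() const { return MaxLookAhead != 0; }

    // Hazard if the current cycle's issue width is exhausted or the
    // requested future cycle is already reserved.
    bool hasHazard(unsigned CyclesAhead) const {
      return IssueCount >= IssueWidth ||
             (CyclesAhead < MaxLookAhead && Busy[CyclesAhead] != 0);
    }

    void emitInstruction(unsigned CyclesAhead) {
      if (CyclesAhead < MaxLookAhead)
        ++Busy[CyclesAhead];
      ++IssueCount;
    }

    void advanceCycle() {            // slide the look-ahead window forward
      if (!Busy.empty()) {
        Busy.erase(Busy.begin());
        Busy.push_back(0);
      }
      IssueCount = 0;
    }
  };
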
Andrew Trick
53f4556c64 whitespace
llvm-svn: 122539
2010-12-24 04:28:06 +00:00
Jim Grosbach
7fc4f99084 Use a StringSwitch<> instead of a manually constructed string matcher.
llvm-svn: 122530
2010-12-24 00:03:39 +00:00
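
For context, StringSwitch<> replaces a chain of hand-written string comparisons with a
fluent lookup table. A minimal sketch of the pattern follows; the alias table is
illustrative (fp/sb/sl are the common ARM aliases for r11/r9/r10), not the one from the
patch.

  #include "llvm/ADT/StringRef.h"
  #include "llvm/ADT/StringSwitch.h"

  // Map a register-name alias to a register number, or -1 if unrecognized.
  static int matchRegisterAlias(llvm::StringRef Name) {
    return llvm::StringSwitch<int>(Name)
        .Case("fp", 11)
        .Case("sb", 9)
        .Case("sl", 10)
        .Default(-1);
  }
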
Evan Cheng
ea28d16e36 Code clean up. No functionality change.
llvm-svn: 122528
2010-12-23 23:54:17 +00:00
Jim Grosbach
03a39130cb Remove dead patterns.
llvm-svn: 122524
2010-12-23 23:20:13 +00:00
Jim Grosbach
14f46d80df Recognize a few more documented register name aliases for ARM in the asm lexer.
llvm-svn: 122523
2010-12-23 23:19:54 +00:00
Bob Wilson
85dbc89f44 Radar 8803471: Fix expansion of ARM BCCi64 pseudo instructions.
If the basic block containing the BCCi64 (or BCCZi64) instruction ends with
an unconditional branch, that branch needs to be deleted before appending
the expansion of the BCCi64 to the end of the block.

llvm-svn: 122521
2010-12-23 22:45:49 +00:00
Chris Lattner
01e8c46349 Flag -> Glue, the ongoing saga
llvm-svn: 122513
2010-12-23 18:28:41 +00:00
Chris Lattner
b607e7deda flags -> glue for selectiondag
llvm-svn: 122509
2010-12-23 17:24:32 +00:00
Benjamin Kramer
0a0e2c55c4 Remove/fix invalid README entries. The well thought out strcpy function doesn't return a pointer to the end of the string.
llvm-svn: 122496
2010-12-23 15:32:07 +00:00
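
A small sketch of the fact being corrected, assuming only standard C library semantics:
strcpy returns its first argument, not a pointer to the end of the copied string (POSIX
stpcpy is the end-pointer variant).

  #include <cassert>
  #include <cstring>

  int main() {
    char buf[16];
    char *r = std::strcpy(buf, "hi");
    assert(r == buf);   // returns dest, not buf + 2; stpcpy would return buf + 2
    return 0;
  }
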
Benjamin Kramer
b21118c91b Remove some obsolete README items, add a new one off the top of my head.
llvm-svn: 122495
2010-12-23 15:07:02 +00:00
Jeffrey Yasskin
a199652a3e Change all self-assignments X=X to (void)X, so that we can turn on a
new gcc warning that complains about self-assignments and
self-initializations.

llvm-svn: 122458
2010-12-23 00:58:24 +00:00
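
A minimal sketch of the shape of this change; the variable name is made up.

  void touch() {
    int Unused = 0;
    // Before: Unused = Unused;  // self-assignment, trips the new gcc warning
    (void)Unused;                // After: same intent, no self-assignment
  }
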
Jim Grosbach
b0e9926c33 Trailing whitespace.
llvm-svn: 122456
2010-12-22 23:26:02 +00:00
Benjamin Kramer
d8387aa9bd X86: Lower a select directly to a setcc_carry if possible.
int test(unsigned long a, unsigned long b) { return -(a < b); }
compiles to
  _test:                              ## @test
    cmpq  %rsi, %rdi                  ## encoding: [0x48,0x39,0xf7]
    sbbl  %eax, %eax                  ## encoding: [0x19,0xc0]
    ret                               ## encoding: [0xc3]
instead of
  _test:                              ## @test
    xorl  %ecx, %ecx                  ## encoding: [0x31,0xc9]
    cmpq  %rsi, %rdi                  ## encoding: [0x48,0x39,0xf7]
    movl  $-1, %eax                   ## encoding: [0xb8,0xff,0xff,0xff,0xff]
    cmovael %ecx, %eax                ## encoding: [0x0f,0x43,0xc1]
    ret                               ## encoding: [0xc3]

llvm-svn: 122451
2010-12-22 23:09:28 +00:00
Che-Liang Chiou
e73ad4387e ptx: add ld instruction and test
llvm-svn: 122398
2010-12-22 10:38:51 +00:00
Wesley Peck
2759b7bc98 Don't generate the carry bit when loading immediate values on the MicroBlaze.
llvm-svn: 122385
2010-12-22 01:29:32 +00:00
Wesley Peck
0d64db9772 Add support for some of the LLVM atomic operations to the MBlaze backend.
llvm-svn: 122384
2010-12-22 01:15:01 +00:00
Wesley Peck
32c95e5ef4 Modeling the carry bit in the MSR register of the MicroBlaze.
llvm-svn: 122381
2010-12-22 00:53:07 +00:00
Wesley Peck
fc577ca406 Fix a regression introduced into the MBlaze delay slot filler.
llvm-svn: 122379
2010-12-22 00:22:59 +00:00
Benjamin Kramer
369872edfc Add some x86 specific dagcombines for conditional increments.
(add Y, (sete  X, 0)) -> cmp X, 1; adc  0, Y
(add Y, (setne X, 0)) -> cmp X, 1; sbb -1, Y
(sub (sete  X, 0), Y) -> cmp X, 1; sbb  0, Y
(sub (setne X, 0), Y) -> cmp X, 1; adc -1, Y

for
  unsigned foo(unsigned a, unsigned b) {
    if (a == 0) b++;
    return b;
  }
we now get:
  foo:
    cmpl  $1, %edi
    movl  %esi, %eax
    adcl  $0, %eax
    ret
instead of:
  foo:
    testl %edi, %edi
    sete  %al
    movzbl  %al, %eax
    addl  %esi, %eax
    ret

llvm-svn: 122364
2010-12-21 21:41:44 +00:00
Bob Wilson
01593c55a2 Add ARM-specific DAG combining to cast i64 vector element load/stores to f64.
Type legalization splits up i64 values into pairs of i32 values, which leads
to poor quality code when inserting or extracting i64 vector elements.
If the vector element is loaded or stored, it can be treated as an f64 value
and loaded or stored directly from a VPR register.  Use the pre-legalization
DAG combiner to cast those vector elements to f64 types so that the type
legalizer won't mess them up.  Radar 8755338.

llvm-svn: 122319
2010-12-21 06:43:19 +00:00
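
A rough illustration of the kind of code affected, written with the Clang/GCC vector
extension; the type and function names are hypothetical.

  typedef long long v2i64 __attribute__((vector_size(16)));

  // Extract a single i64 lane and store it; this is the pattern the combine
  // retypes to f64 so the value can move through a VPR register in one piece.
  void store_lane0(const v2i64 *v, long long *p) {
    *p = (*v)[0];
  }
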
Eric Christopher
72ceef0a74 ARM and Thumb call instructions are also in different orders.
Fixes rdar://8782223

llvm-svn: 122313
2010-12-21 03:50:43 +00:00
Chris Lattner
65c5243bd6 rename MVT::Flag to MVT::Glue. "Flag" is a terrible name for
something that just glues two nodes together, even if it is
sometimes used for flags.

llvm-svn: 122310
2010-12-21 02:38:05 +00:00
Eric Christopher
81ae56b33c If we're not using a reg+reg offset we're using reg+imm; set the opcode
to be the one we want to use. The bugpoint-reduced testcase is a little large,
so I'll see if I can simplify it down more.

Fixes part of rdar://8782207

llvm-svn: 122307
2010-12-21 02:12:07 +00:00
Bill Wendling
856080c8a1 Fix a copy-pasto. When the tBR_JTr instruction was converted to using the
tPseudoInst class, its size was changed from "special" to "2 bytes". This is
incorrect because the jump table will no longer be taken into account when
calculating branch offsets.
<rdar://problem/8782216>

llvm-svn: 122303
2010-12-21 01:57:15 +00:00
Bill Wendling
8c7f90099b Comment cleanups.
llvm-svn: 122302
2010-12-21 01:54:40 +00:00
Nate Begeman
c7dfecb10e Implement feedback from Bruno on making pblendvb an x86-specific ISD node in addition to being an intrinsic, and convert
lowering to use it.  Hopefully the pattern fragment is doing the right thing with XMM0; it looks correct in testing.

llvm-svn: 122277
2010-12-20 22:04:24 +00:00
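
For context, pblendvb is the SSE4.1 byte-wise variable blend, exposed in C/C++ as the
_mm_blendv_epi8 intrinsic; an illustrative use (requires SSE4.1, e.g. -msse4.1) follows.

  #include <smmintrin.h>

  // Each result byte comes from b where the corresponding mask byte has its
  // high bit set, otherwise from a.
  __m128i blend_bytes(__m128i a, __m128i b, __m128i mask) {
    return _mm_blendv_epi8(a, b, mask);
  }
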
Wesley Peck
e8ec7a4d1f Teach the MBlaze disassembler to disassemble special purpose registers.
llvm-svn: 122269
2010-12-20 21:18:04 +00:00
Wesley Peck
af2890a051 Teach the MBlaze asm parser how to parse special purpose register names.
llvm-svn: 122261
2010-12-20 20:43:24 +00:00
Daniel Dunbar
5580eff1f8 Add header...
llvm-svn: 122247
2010-12-20 15:45:51 +00:00
Daniel Dunbar
f1deaf06a9 X86/MC/Mach-O: Split out createX86MachObjectWriter().
llvm-svn: 122246
2010-12-20 15:07:39 +00:00
Chris Lattner
bee7320c3c now that addc/adde are gone, "ADDC" in the X86 backend uses EFLAGS results,
the same as setcc.  Optimize ADDC(0,0,FLAGS) -> SET_CARRY(FLAGS).  This is
a step towards finishing off PR5443.  In the testcase in that bug we now  get:

	movq	%rdi, %rax
	addq	%rsi, %rax
	sbbq	%rcx, %rcx
	testb	$1, %cl
	setne	%dl
	ret

instead of:

	movq	%rdi, %rax
	addq	%rsi, %rax
	movl	$0, %ecx
	adcq	$0, %rcx
	testq	%rcx, %rcx
	setne	%dl
	ret

llvm-svn: 122219
2010-12-20 01:37:09 +00:00
Chris Lattner
2d4e17d195 We lower setb to sbb with the hope that the and will go away; when it
doesn't, match it back to setb.

On a 64-bit version of the testcase before we'd get:

	movq	%rdi, %rax
	addq	%rsi, %rax
	sbbb	%dl, %dl
	andb	$1, %dl
	ret

now we get:

	movq	%rdi, %rax
	addq	%rsi, %rax
	setb	%dl
	ret

llvm-svn: 122217
2010-12-20 01:16:03 +00:00
Chris Lattner
16ea7f257f Use a for loop over types.
llvm-svn: 122214
2010-12-20 01:03:27 +00:00
Chris Lattner
8b1f76cad6 Change the X86 backend to stop using the evil ADDC/ADDE/SUBC/SUBE nodes (which
model their carry dependencies with MVT::Flag operands) and use clean and beautiful
EFLAGS dependencies instead.

We do this by changing the modelling of SBB/ADC to have EFLAGS inputs and outputs
(which is what requires the previous scheduler change) and changing X86 ISelLowering
to custom lower ADDC and friends down to X86ISD::ADD/ADC/SUB/SBB nodes.

With the previous series of changes, this causes no changes in the testsuite, woo.

llvm-svn: 122213
2010-12-20 00:59:46 +00:00
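
An illustrative source pattern whose lowering exercises the ADC path described above;
it is plain standard C++ and is not taken from the commit.

  #include <cstdint>

  struct U128 { uint64_t lo, hi; };

  // Two-limb addition: the carry out of the low half feeds the high half,
  // which on x86-64 ideally becomes an add followed by an adc.
  U128 add128(U128 a, U128 b) {
    U128 r;
    r.lo = a.lo + b.lo;
    uint64_t carry = r.lo < a.lo;   // 1 if the low addition wrapped
    r.hi = a.hi + b.hi + carry;
    return r;
  }
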
Mon P Wang
d3adab7a64 Prevent PerformShuffleCombine from creating a node with an illegal type after type
legalization has run, e.g., prevent creating an i64 node from a v2i64 when i64 is not a legal type.

llvm-svn: 122206
2010-12-19 23:55:53 +00:00
Chris Lattner
297259f6f1 improve the setcc -> setcc_carry optimization to happen more
consistently by moving it out of lowering into dag combine.

Add some missing patterns for matching away extended versions of setcc_c.

llvm-svn: 122201
2010-12-19 22:08:31 +00:00
Chris Lattner
1f31c7fa15 simplify some code to just reuse a setcc if we can instead of
going through the CSE maps to get it.

llvm-svn: 122196
2010-12-19 21:23:48 +00:00
Nick Lewycky
c85935836b Add missing standard headers. Patch by Joerg Sonnenberger!
llvm-svn: 122193
2010-12-19 20:43:38 +00:00
Nick Lewycky
e718af6c63 Add missing std:: prefixes to some calls. C++ doesn't require that <cfoo>
headers provide symbols outside namespace std, and the LLVM coding standards
state that we should prefix all of them.

llvm-svn: 122192
2010-12-19 20:42:43 +00:00
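
A minimal example of the rule being applied, relying only on what the C++ standard
guarantees for the <cfoo> headers.

  #include <cstddef>
  #include <cstring>

  // <cstring> is only guaranteed to declare std::strlen; whether a bare
  // ::strlen is also visible is up to the implementation, so qualify it.
  std::size_t length(const char *s) {
    return std::strlen(s);
  }
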
Chris Lattner
2d59eef5fd now that generic vector types aren't selected onto MMX operations,
we don't need -disable-mmx anymore.

llvm-svn: 122189
2010-12-19 20:19:20 +00:00
Chris Lattner
30438e63c8 reduce copy/paste programming with the power of for loops.
llvm-svn: 122187
2010-12-19 20:07:10 +00:00
Chris Lattner
29475c23d0 X86 supports i8/i16 overflow ops (except i8 multiplies), so we should
generate them.

Now we compile:

define zeroext i8 @X(i8 signext %a, i8 signext %b) nounwind ssp {
entry:
  %0 = tail call %0 @llvm.sadd.with.overflow.i8(i8 %a, i8 %b)
  %cmp = extractvalue %0 %0, 1
  br i1 %cmp, label %if.then, label %if.end

into:

_X:                                     ## @X
## BB#0:                                ## %entry
	subl	$12, %esp
	movb	16(%esp), %al
	addb	20(%esp), %al
	jo	LBB0_2

Before we were generating:

_X:                                     ## @X
## BB#0:                                ## %entry
	pushl	%ebp
	movl	%esp, %ebp
	subl	$8, %esp
	movb	12(%ebp), %al
	testb	%al, %al
	setge	%cl
	movb	8(%ebp), %dl
	testb	%dl, %dl
	setge	%ah
	cmpb	%cl, %ah
	sete	%cl
	addb	%al, %dl
	testb	%dl, %dl
	setge	%al
	cmpb	%al, %ah
	setne	%al
	andb	%cl, %al
	testb	%al, %al
	jne	LBB0_2

llvm-svn: 122186
2010-12-19 20:03:11 +00:00
Chris Lattner
3bc741a0d2 Recognize an unsigned add-with-overflow idiom and turn it into uadd.
This resolves a README entry and technically resolves PR4916,
but we still get poor code for the testcase in that PR because
GVN isn't CSE'ing uadd with add, filed as PR8817.

Previously we got:

_test7:                                 ## @test7
	addq	%rsi, %rdi
	cmpq	%rdi, %rsi
	movl	$42, %eax
	cmovaq	%rsi, %rax
	ret

Now we get:

_test7:                                 ## @test7
	addq	%rsi, %rdi
	movl	$42, %eax
	cmovbq	%rsi, %rax
	ret

llvm-svn: 122182
2010-12-19 19:37:52 +00:00
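
An illustrative reconstruction of the kind of source idiom being recognized; it is not
claimed to be the actual testcase from the PR.

  // The wraparound of an unsigned add is detected by comparing the sum
  // against one of the operands; this is the shape that can be matched to
  // llvm.uadd.with.overflow.
  unsigned long overflow_select(unsigned long a, unsigned long b) {
    unsigned long sum = a + b;
    return sum < b ? b : 42;
  }
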
Anton Korobeynikov
f49c9c02d6 Restore the behavior of frame lowering from before my refactoring.
It turns out that the PPC backend has really weird interdependencies
between different hooks, and everything is fragile with respect to small changes.
This should fix PR8749.

llvm-svn: 122155
2010-12-18 19:53:14 +00:00