mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-26 14:33:02 +02:00
Commit Graph

6862 Commits

Author SHA1 Message Date
Eric Christopher
3821f63f4b Experiment with changing the default 32-bit linux stack alignment to
16 bytes for PR8969. Update all testcases accordingly.

llvm-svn: 123367
2011-01-13 06:47:10 +00:00
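
An illustrative aside (not from the commit): 16-byte stack alignment matters because SSE locals such as __m128 require it, so an aligned stack lets the compiler place them without dynamic realignment. A minimal example of such a local:

  #include <xmmintrin.h>

  int main() {
    __m128 v = _mm_set1_ps(1.0f);  // this local wants a 16-byte-aligned slot
    float out[4];
    _mm_storeu_ps(out, v);         // unaligned store to a plain float array
    return (int)out[0];
  }
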
Chris Lattner
586e7af07d Fix PR8946, a missing reg/reg form of movdqu.
llvm-svn: 123242
2011-01-11 17:04:55 +00:00
Anton Korobeynikov
abd9a868df Update CMake stuff
llvm-svn: 123171
2011-01-10 12:39:23 +00:00
Anton Korobeynikov
cf5967630b Rename TargetFrameInfo into TargetFrameLowering. Also, put couple of FIXMEs and fixes here and there.
llvm-svn: 123170
2011-01-10 12:39:04 +00:00
Jakob Stoklund Olesen
32f1783ca1 Simplify a bunch of isVirtualRegister() and isPhysicalRegister() logic.
These functions no longer assert when passed 0, but simply return false instead.

No functional change intended.

llvm-svn: 123155
2011-01-10 02:58:51 +00:00
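
A standalone sketch of the behavior change above (not the real TargetRegisterInfo API; the boundary constant is an assumption for illustration): register number 0 means "no register" and both predicates now return false for it instead of asserting.

  #include <cassert>

  // Illustrative boundary; the real numbering scheme lives in LLVM itself.
  static const unsigned FirstVirtualRegister = 16384;

  static bool isPhysicalRegister(unsigned Reg) {
    return Reg != 0 && Reg < FirstVirtualRegister;  // 0 = "no register"
  }

  static bool isVirtualRegister(unsigned Reg) {
    return Reg >= FirstVirtualRegister;
  }

  int main() {
    assert(!isPhysicalRegister(0) && !isVirtualRegister(0));  // no assert on 0
    assert(isPhysicalRegister(5));
    assert(isVirtualRegister(FirstVirtualRegister + 7));
  }
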
Jakob Stoklund Olesen
e2e0850651 Fix the last virtual register enumerations.
llvm-svn: 123102
2011-01-08 23:11:11 +00:00
Evan Cheng
1afd04fc59 Recognize inline asm 'rev $0, $1' as a bswap intrinsic call.
llvm-svn: 123048
2011-01-08 01:24:27 +00:00
Evan Cheng
aa16fd02ad Do not model all INLINEASM instructions as having unmodelled side effects.
Instead encode llvm IR level property "HasSideEffects" in an operand (shared
with IsAlignStack). Added MachineInstr::hasUnmodeledSideEffects() to check
the operand when the instruction is an INLINEASM.

This allows memory instructions to be moved around INLINEASM instructions.

llvm-svn: 123044
2011-01-07 23:50:32 +00:00
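
A minimal standalone model of the encoding idea in the commit above (bit positions and names are assumptions, not LLVM's actual layout): HasSideEffects and IsAlignStack share one flags operand, and the query only fires for inline asm.

  #include <cassert>

  enum ExtraFlags {                    // bit positions are illustrative
    kHasSideEffects = 1u << 0,
    kIsAlignStack   = 1u << 1,
  };

  struct Inst {
    bool IsInlineAsm;
    unsigned ExtraInfo;                // the shared flags operand
  };

  static bool hasUnmodeledSideEffects(const Inst &I) {
    return I.IsInlineAsm && (I.ExtraInfo & kHasSideEffects);
  }

  int main() {
    Inst Movable{true, kIsAlignStack};    // no side effects: memory ops may move
    Inst Barrier{true, kHasSideEffects};  // volatile asm: acts as a barrier
    assert(!hasUnmodeledSideEffects(Movable));
    assert(hasUnmodeledSideEffects(Barrier));
  }
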
Evan Cheng
ae26b91353 Revert r122955. It seems using movups to lower memcpy can cause massive regressions (even on Nehalem) in edge cases. I also didn't see any real performance benefit.
llvm-svn: 123015
2011-01-07 19:35:30 +00:00
Rafael Espindola
64814fff0b Correctly disassemble truncated asm.
Patch by Richard Smith.

llvm-svn: 122962
2011-01-06 16:48:42 +00:00
Benjamin Kramer
dcff486813 Remove dead code and silence warnings.
llvm-svn: 122957
2011-01-06 13:01:02 +00:00
Evan Cheng
1a1771584e Use movups to lower memcpy and memset even if it's not fast (like corei7).
The theory is it's still faster than a pair of movq / a quad of movl. This
will probably hurt older chips like P4 but should run faster on current
and future Intel processors. rdar://8817010

llvm-svn: 122955
2011-01-06 07:58:36 +00:00
Evan Cheng
cb39cc2164 Re-implement r122936 with proper target hooks. Now getMaxStoresPerMemcpy
etc. take an OptSize option. If OptSize is true, they return
the inline limit for functions with the OptSize attribute.

llvm-svn: 122952
2011-01-06 06:52:41 +00:00
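
A sketch of the hook shape the commit describes (field names and limits are assumptions): the inline-store limit for memcpy depends on whether the caller is optimizing for size.

  #include <cassert>

  struct LoweringLimits {
    unsigned MaxStoresPerMemcpy = 16;         // illustrative default limit
    unsigned MaxStoresPerMemcpyOptSize = 4;   // illustrative -Os limit

    unsigned getMaxStoresPerMemcpy(bool OptSize) const {
      return OptSize ? MaxStoresPerMemcpyOptSize : MaxStoresPerMemcpy;
    }
  };

  int main() {
    LoweringLimits L;
    assert(L.getMaxStoresPerMemcpy(false) == 16);
    assert(L.getMaxStoresPerMemcpy(true) == 4);  // smaller inline expansion
  }
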
Bill Wendling
a59afdaec5 PR8919 - LLVM incorrectly generates "_alloca" as the stack probing call. That
works only on MinGW32. On 64-bit, the function to call is "__chkstk".
Patch by KS Sreeram!

llvm-svn: 122934
2011-01-06 00:50:34 +00:00
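
A sketch of the selection the fix implies (shape assumed, not the actual backend code): pick the stack-probe symbol by target width.

  #include <cassert>
  #include <cstdio>
  #include <cstring>

  static const char *stackProbeSymbol(bool is64Bit) {
    // Per the commit: MinGW32 probes via _alloca, 64-bit via __chkstk.
    return is64Bit ? "__chkstk" : "_alloca";
  }

  int main() {
    assert(std::strcmp(stackProbeSymbol(true), "__chkstk") == 0);
    std::puts(stackProbeSymbol(false));
  }
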
Bill Wendling
fae0dd1afa PR8918 - When used with MinGW64, LLVM generates a "calll __main" at the
beginning of the "main" function. The assembler complains about the invalid
suffix for the 'call' instruction. The right instruction is "callq __main".
Patch by KS Sreeram!

llvm-svn: 122933
2011-01-06 00:47:10 +00:00
Chris Lattner
3ef9db5cd4 fix PR8900, a shuffle miscompilation. Patch by Nadav Rotem!
llvm-svn: 122921
2011-01-05 22:28:46 +00:00
Chris Lattner
0caa2500c0 silence more self assignment warnings.
llvm-svn: 122920
2011-01-05 22:26:52 +00:00
Jakob Stoklund Olesen
76e782c385 Use the EdgeBundles analysis in X86FloatingPoint instead of recomputing CFG
bundles in the pass.

llvm-svn: 122833
2011-01-04 21:10:11 +00:00
Jakob Stoklund Olesen
abf8941a60 Turn the EdgeBundles class into a stand-alone machine CFG analysis pass.
The analysis will be needed by both the greedy register allocator and the
X86FloatingPoint pass. It only needs to be computed once when the CFG doesn't
change.

This pass is very fast, usually showing up as 0.0% wall time.

llvm-svn: 122832
2011-01-04 21:10:05 +00:00
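
A standalone sketch of the edge-bundle computation (data layout assumed, not LLVM's EdgeBundles internals): each block gets an ingoing and an outgoing boundary, and every CFG edge fuses the source's outgoing boundary with the destination's ingoing one via union-find, which is why the analysis is so cheap.

  #include <cassert>
  #include <numeric>
  #include <utility>
  #include <vector>

  struct UnionFind {
    std::vector<int> parent;
    explicit UnionFind(int n) : parent(n) {
      std::iota(parent.begin(), parent.end(), 0);
    }
    int find(int x) { return parent[x] == x ? x : parent[x] = find(parent[x]); }
    void unite(int a, int b) { parent[find(a)] = find(b); }
  };

  // Bundle ids: 2*BB is the ingoing side of block BB, 2*BB+1 the outgoing side.
  static UnionFind computeBundles(int numBlocks,
                                  const std::vector<std::pair<int, int>> &edges) {
    UnionFind uf(2 * numBlocks);
    for (const auto &e : edges)
      uf.unite(2 * e.first + 1, 2 * e.second);  // outgoing(src) ~ ingoing(dst)
    return uf;
  }

  int main() {
    // Diamond CFG: 0 -> 1, 0 -> 2, 1 -> 3, 2 -> 3.
    UnionFind uf = computeBundles(4, {{0, 1}, {0, 2}, {1, 3}, {2, 3}});
    assert(uf.find(2 * 0 + 1) == uf.find(2 * 2));      // out(0) fused with in(2)
    assert(uf.find(2 * 1 + 1) == uf.find(2 * 2 + 1));  // out(1) ~ out(2) via in(3)
  }
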
Dale Johannesen
c7168aa6fe Eliminate a warning compiling with llvm-gcc. (IMO the
warning is overzealous but gcc is what it is.)

llvm-svn: 122829
2011-01-04 19:31:24 +00:00
Evan Cheng
25f7df1bce Use pushq / popq instead of subq $8, %rsp / addq $8, %rsp to adjust the stack
in the prologue and epilogue if the adjustment is 8. Similarly, use pushl / popl if
the adjustment is 4 in 32-bit mode.

In the epilogue, take care to pop into a caller-saved register that's not live
at the exit (either a return or a tail call instruction).
rdar://8771137

llvm-svn: 122783
2011-01-03 22:53:22 +00:00
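
A sketch of the decision rule above (assumed shape, not the real prologue emitter): an adjustment of exactly one slot can be done with a one-byte push/pop instead of a sub/add with an immediate; per the commit, the epilogue pop must target a dead caller-saved register.

  #include <cstdio>

  static const char *adjustStackDown(int bytes, bool is64Bit) {
    int slot = is64Bit ? 8 : 4;
    if (bytes == slot)                        // one slot: push any scratch reg
      return is64Bit ? "pushq %rax" : "pushl %eax";
    return is64Bit ? "subq $imm, %rsp" : "subl $imm, %esp";  // placeholder imm
  }

  int main() {
    std::puts(adjustStackDown(8, true));   // pushq %rax
    std::puts(adjustStackDown(24, true));  // subq $imm, %rsp
  }
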
Benjamin Kramer
a58b69aa9d Try to reuse the value when lowering memset.
This allows us to compile:
  void test(char *s, int a) {
    __builtin_memset(s, a, 15);
  }
into 1 mul + 3 stores instead of 3 muls + 3 stores.

llvm-svn: 122710
2011-01-02 19:57:05 +00:00
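
What "reuse the value" buys, in standalone form: one multiply by the standard splat constant replicates the byte across a word, and every store then reuses that single product instead of re-multiplying.

  #include <cassert>
  #include <cstdint>

  static uint32_t splat4(uint8_t v) { return uint32_t(v) * 0x01010101u; }
  static uint64_t splat8(uint8_t v) { return uint64_t(v) * 0x0101010101010101ull; }

  int main() {
    assert(splat4(0xAB) == 0xABABABABu);
    assert(splat8(0xAB) == 0xABABABABABABABABull);
    // memset(s, a, 15) then becomes a few word stores of one product.
  }
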
Oscar Fuentes
6514b0ac68 A workaround for a bug in cmake 2.8.3 diagnosed on PR 8885.
llvm-svn: 122706
2011-01-02 19:32:31 +00:00
Chris Lattner
222b24e2de update a bunch of entries.
llvm-svn: 122700
2011-01-02 18:31:38 +00:00
Rafael Espindola
55f7a5057d Add support for the 'H' modifier.
llvm-svn: 122667
2011-01-01 20:58:46 +00:00
Oscar Fuentes
a63d0dbfbf Add to the list of cmake files the object file, not the asm file. This
is necessary for executing the custom command that runs the
assembler. Fixes PR8877.

llvm-svn: 122649
2010-12-31 20:15:37 +00:00
Nick Lewycky
5cb84ee2cf Add another non-commutable instruction that gas accepts commuted forms for.
Fixes PR8861.

llvm-svn: 122641
2010-12-30 22:10:49 +00:00
NAKAMURA Takumi
2160b3953b CMake: Add disabling optimization on MSVC8 and MSVC10 as workaround for some files in Target/ARM and Target/X86.
llvm-svn: 122623
2010-12-29 03:59:27 +00:00
Rafael Espindola
e7e67fce10 Add support for the same encodings of the personality function that gnu as
supports.

llvm-svn: 122577
2010-12-27 00:36:05 +00:00
Chris Lattner
a6f2b8316f fix some sort of weird pasto
llvm-svn: 122560
2010-12-26 12:05:11 +00:00
Chris Lattner
eec079a470 add a note
llvm-svn: 122559
2010-12-26 03:53:31 +00:00
Evan Cheng
ea28d16e36 Code clean up. No functionality change.
llvm-svn: 122528
2010-12-23 23:54:17 +00:00
Chris Lattner
01e8c46349 Flag -> Glue, the ongoing saga
llvm-svn: 122513
2010-12-23 18:28:41 +00:00
Benjamin Kramer
b21118c91b Remove some obsolete README items, add a new one off the top of my head.
llvm-svn: 122495
2010-12-23 15:07:02 +00:00
Benjamin Kramer
d8387aa9bd X86: Lower a select directly to a setcc_carry if possible.
int test(unsigned long a, unsigned long b) { return -(a < b); }
compiles to
  _test:                              ## @test
    cmpq  %rsi, %rdi                  ## encoding: [0x48,0x39,0xf7]
    sbbl  %eax, %eax                  ## encoding: [0x19,0xc0]
    ret                               ## encoding: [0xc3]
instead of
  _test:                              ## @test
    xorl  %ecx, %ecx                  ## encoding: [0x31,0xc9]
    cmpq  %rsi, %rdi                  ## encoding: [0x48,0x39,0xf7]
    movl  $-1, %eax                   ## encoding: [0xb8,0xff,0xff,0xff,0xff]
    cmovael %ecx, %eax                ## encoding: [0x0f,0x43,0xc1]
    ret                               ## encoding: [0xc3]

llvm-svn: 122451
2010-12-22 23:09:28 +00:00
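
The transform rests on the identity -(a < b) == (a < b ? -1 : 0): cmp sets the carry flag exactly when a < b (unsigned), and sbb %eax, %eax turns that carry into an all-ones or all-zeros mask. A quick standalone check:

  #include <cassert>

  static int mask_lt(unsigned long a, unsigned long b) { return -(a < b); }

  int main() {
    assert(mask_lt(1, 2) == -1);  // carry set   -> all-ones mask
    assert(mask_lt(2, 1) == 0);   // carry clear -> zero
    assert(mask_lt(1, 1) == 0);
  }
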
Benjamin Kramer
369872edfc Add some x86 specific dagcombines for conditional increments.
(add Y, (sete  X, 0)) -> cmp X, 1; adc  0, Y
(add Y, (setne X, 0)) -> cmp X, 1; sbb -1, Y
(sub (sete  X, 0), Y) -> cmp X, 1; sbb  0, Y
(sub (setne X, 0), Y) -> cmp X, 1; adc -1, Y

for
  unsigned foo(unsigned a, unsigned b) {
    if (a == 0) b++;
    return b;
  }
we now get:
  foo:
    cmpl  $1, %edi
    movl  %esi, %eax
    adcl  $0, %eax
    ret
instead of:
  foo:
    testl %edi, %edi
    sete  %al
    movzbl  %al, %eax
    addl  %esi, %eax
    ret

llvm-svn: 122364
2010-12-21 21:41:44 +00:00
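
The combines are sound because cmp X, 1 sets the carry flag exactly when X == 0 (unsigned X < 1), so the adc/sbb absorbs the setcc. A standalone check of the source-level behavior:

  #include <cassert>

  static unsigned foo(unsigned a, unsigned b) {  // from the commit message
    if (a == 0) b++;
    return b;
  }

  int main() {
    assert(foo(0, 5) == 6);             // a == 0: carry set, adc adds 1
    assert(foo(3, 5) == 5);             // a != 0: carry clear, adc adds 0
    unsigned x = 0, y = 7;
    assert(y + (x == 0) == foo(x, y));  // the (add Y, (sete X, 0)) form
  }
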
Chris Lattner
65c5243bd6 rename MVT::Flag to MVT::Glue. "Flag" is a terrible name for
something that just glues two nodes together, even if it is
sometimes used for flags.

llvm-svn: 122310
2010-12-21 02:38:05 +00:00
Nate Begeman
c7dfecb10e Implement feedback from Bruno on making pblendvb an x86-specific ISD node in addition to being an intrinsic, and convert
lowering to use it. Hopefully the pattern fragment is doing the right thing with XMM0; it looks correct in testing.

llvm-svn: 122277
2010-12-20 22:04:24 +00:00
Daniel Dunbar
5580eff1f8 Add header...
llvm-svn: 122247
2010-12-20 15:45:51 +00:00
Daniel Dunbar
f1deaf06a9 X86/MC/Mach-O: Split out createX86MachObjectWriter().
llvm-svn: 122246
2010-12-20 15:07:39 +00:00
Chris Lattner
bee7320c3c now that addc/adde are gone, "ADDC" in the X86 backend uses EFLAGS results,
the same as setcc.  Optimize ADDC(0,0,FLAGS) -> SET_CARRY(FLAGS).  This is
a step towards finishing off PR5443. In the testcase in that bug we now get:

	movq	%rdi, %rax
	addq	%rsi, %rax
	sbbq	%rcx, %rcx
	testb	$1, %cl
	setne	%dl
	ret

instead of:

	movq	%rdi, %rax
	addq	%rsi, %rax
	movl	$0, %ecx
	adcq	$0, %rcx
	testq	%rcx, %rcx
	setne	%dl
	ret

llvm-svn: 122219
2010-12-20 01:37:09 +00:00
Chris Lattner
2d4e17d195 We lower setb to sbb with the hope that the and will go away; when it
doesn't, match it back to setb.

On a 64-bit version of the testcase before we'd get:

	movq	%rdi, %rax
	addq	%rsi, %rax
	sbbb	%dl, %dl
	andb	$1, %dl
	ret

now we get:

	movq	%rdi, %rax
	addq	%rsi, %rax
	setb	%dl
	ret

llvm-svn: 122217
2010-12-20 01:16:03 +00:00
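
The peephole is justified by (-c) & 1 == c for c in {0, 1}: the sbb materializes -carry, the and keeps bit 0, and that is exactly the byte setb writes. A two-line check:

  #include <cassert>

  int main() {
    for (int c = 0; c <= 1; ++c)
      assert(((-c) & 1) == c);  // sbb+and and setb agree for both carry values
  }
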
Chris Lattner
16ea7f257f use a for loop over types.
llvm-svn: 122214
2010-12-20 01:03:27 +00:00
Chris Lattner
8b1f76cad6 Change the X86 backend to stop using the evil ADDC/ADDE/SUBC/SUBE nodes (which
model their carry dependencies with MVT::Flag operands) and use clean and beautiful
EFLAGS dependencies instead.

We do this by changing the modelling of SBB/ADC to have EFLAGS inputs and outputs
(which is what requires the previous scheduler change) and change X86 ISelLowering
to custom lower ADDC and friends down to X86ISD::ADD/ADC/SUB/SBB nodes.

With the previous series of changes, this causes no changes in the testsuite, woo.

llvm-svn: 122213
2010-12-20 00:59:46 +00:00
Mon P Wang
d3adab7a64 Prevents PerformShuffleCombine from creating a node with an illegal type after legalize types
has run, e.g., prevent creating an i64 node from a v2i64 when i64 is not a legal type.

llvm-svn: 122206
2010-12-19 23:55:53 +00:00
Chris Lattner
297259f6f1 improve the setcc -> setcc_carry optimization to happen more
consistently by moving it out of lowering into dag combine.

Add some missing patterns for matching away extended versions of setcc_c.

llvm-svn: 122201
2010-12-19 22:08:31 +00:00
Chris Lattner
1f31c7fa15 simplify some code to just reuse a setcc if we can instead of
going through the CSE maps to get it.

llvm-svn: 122196
2010-12-19 21:23:48 +00:00
Chris Lattner
2d59eef5fd now that generic vector types aren't selected onto MMX operations,
we don't need -disable-mmx anymore.

llvm-svn: 122189
2010-12-19 20:19:20 +00:00
Chris Lattner
30438e63c8 reduce copy/paste programming with the power of for loops.
llvm-svn: 122187
2010-12-19 20:07:10 +00:00
Chris Lattner
29475c23d0 X86 supports i8/i16 overflow ops (except i8 multiplies), so we should
generate them.

Now we compile:

define zeroext i8 @X(i8 signext %a, i8 signext %b) nounwind ssp {
entry:
  %0 = tail call { i8, i1 } @llvm.sadd.with.overflow.i8(i8 %a, i8 %b)
  %cmp = extractvalue { i8, i1 } %0, 1
  br i1 %cmp, label %if.then, label %if.end

into:

_X:                                     ## @X
## BB#0:                                ## %entry
	subl	$12, %esp
	movb	16(%esp), %al
	addb	20(%esp), %al
	jo	LBB0_2

Before we were generating:

_X:                                     ## @X
## BB#0:                                ## %entry
	pushl	%ebp
	movl	%esp, %ebp
	subl	$8, %esp
	movb	12(%ebp), %al
	testb	%al, %al
	setge	%cl
	movb	8(%ebp), %dl
	testb	%dl, %dl
	setge	%ah
	cmpb	%cl, %ah
	sete	%cl
	addb	%al, %dl
	testb	%dl, %dl
	setge	%al
	cmpb	%al, %ah
	setne	%al
	andb	%cl, %al
	testb	%al, %al
	jne	LBB0_2

llvm-svn: 122186
2010-12-19 20:03:11 +00:00
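
The same check at the source level, for reference (using the GCC/Clang __builtin_add_overflow builtin, an assumption of this sketch rather than anything in the commit): on x86 it lowers to an addb followed by jo, matching the "after" code above.

  #include <cstdio>

  int main() {
    signed char a = 100, b = 50, r;
    if (__builtin_add_overflow(a, b, &r))  // i8 signed add, overflow checked
      std::puts("overflow");               // 150 > 127, so this prints
    return 0;
  }
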