mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-27 14:02:50 +01:00
Commit Graph

1519 Commits

Author SHA1 Message Date
Owen Anderson
bd26993873 Allow targets to specify the type of the RHS of a shift, parameterized on the type of the LHS.
llvm-svn: 126518
2011-02-25 21:41:48 +00:00
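As a rough standalone sketch of what "parameterized on the type of the LHS" means (this is not the actual LLVM hook; all names below are invented for illustration), the target answers "what type does a shift amount have?" per LHS type instead of with one global type:

  #include <cstdio>

  // Invented stand-in for the target hook: pick the shift-amount type
  // as a function of the type being shifted.
  enum class Ty { I8, I16, I32, I64 };

  Ty getShiftAmountTy(Ty lhsTy) {
    // A hypothetical target that shifts i64 values by an i64 amount but
    // everything narrower by an i8 amount.
    return lhsTy == Ty::I64 ? Ty::I64 : Ty::I8;
  }

  int main() {
    std::printf("%d\n", static_cast<int>(getShiftAmountTy(Ty::I32)));
  }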
Chris Lattner
55119c81aa remove command line option debugging hook.
llvm-svn: 126441
2011-02-24 21:53:03 +00:00
David Greene
7b0539174a [AVX] General VUNPCKL codegen support.
llvm-svn: 126264
2011-02-22 23:31:46 +00:00
Devang Patel
d5c4589795 Revert r124611 - "Keep track of incoming argument's location while emitting LiveIns."
In other words, do not keep track of the argument's location.  The debugger (gdb) is not prepared to see line table entries for arguments; for the debugger, the second line table entry marks the beginning of the function body.
Getting this working requires some coordination with the debugger:
 - The debugger needs to be aware of the prolog_end attribute attached to line table entries.
 - The compiler needs to accurately mark prolog_end in line table entries (at -O0 and at -O1+)

llvm-svn: 126155
2011-02-21 23:21:26 +00:00
Eric Christopher
568548ce13 If both operands are loads from memory, we can't use movlpd/movlps,
since one of them needs to be a register operand. Just use movss instead of
forcing an operand into a register.

Fixes PR9239

llvm-svn: 126072
2011-02-20 05:04:42 +00:00
Eric Christopher
67a5a75e28 Fix typos.
llvm-svn: 126018
2011-02-19 03:19:09 +00:00
David Greene
244920d662 [AVX] Reorganize X86ShuffleDecode into its own library
(LLVMX86Utils.a) to break cyclic library dependencies between
LLVMX86CodeGen.a and LLVMX86AsmParser.a.  Previously this code was in
a header file and marked static but AVX requires some additional
functionality here that won't be used by all clients.  Since including
unused static functions causes a gcc compiler warning, keeping it as a
header would break builds that use -Werror.  Putting this in its own
library solves both problems at once.

llvm-svn: 125765
2011-02-17 19:18:59 +00:00
Stuart Hastings
47e45a32a8 Swap VT and DebugLoc operands of getExtLoad() for consistency with
other getNode() methods.  Radar 9002173.

llvm-svn: 125665
2011-02-16 16:23:55 +00:00
Chris Lattner
bcf2d46d8a Enhance ComputeMaskedBits to know that aligned frameindexes
have their low bits set to zero.  This allows us to optimize
out explicit stack alignment code like in stack-align.ll:test4 when
it is redundant.

Doing this causes the code generator to start turning FI+cst into
FI|cst all over the place, which is general goodness (that is the
canonical form) except that various pieces of the code generator
don't handle OR aggressively.  Fix this by introducing a new
SelectionDAG::isBaseWithConstantOffset predicate, and using it
in places that are looking for ADD(X,CST).  The ARM backend in
particular was missing a lot of addressing mode folding opportunities
around OR.

llvm-svn: 125470
2011-02-13 22:25:43 +00:00
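A standalone sketch of the predicate's core idea (not the LLVM implementation; names are invented): an OR with a constant behaves exactly like an ADD when the constant only sets bits that are known to be zero in the base, which is what an aligned frame index guarantees for its low bits:

  #include <cstdint>

  // knownZeroMask has a 1 for every bit proven zero in the base value
  // (e.g. the low 4 bits of a 16-byte-aligned frame index).
  bool actsLikeBasePlusConstant(uint64_t knownZeroMask, uint64_t cst) {
    // If every set bit of cst falls in a known-zero bit of the base,
    // no carries can occur, so base | cst == base + cst.
    return (cst & ~knownZeroMask) == 0;
  }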
David Greene
ed6f0caa6d [AVX] Implement 256-bit vector lowering for SCALAR_TO_VECTOR. This
largely completes support for 128-bit fallback lowering for code that
is not 256-bit ready.

llvm-svn: 125315
2011-02-10 23:11:29 +00:00
David Greene
bb3702619c [AVX] Implement 256-bit vector lowering for EXTRACT_VECTOR_ELT.
llvm-svn: 125284
2011-02-10 16:57:36 +00:00
David Greene
dafc330bf6 [AVX] Implement 256-bit vector lowering for INSERT_VECTOR_ELT.
llvm-svn: 125187
2011-02-09 15:32:06 +00:00
David Greene
1fc808c066 [AVX] Implement BUILD_VECTOR lowering for 256-bit vectors. For
anything but the simplest of cases, lower a 256-bit BUILD_VECTOR by
splitting it into 128-bit parts and recombining.

llvm-svn: 125105
2011-02-08 19:04:41 +00:00
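A standalone analogy for the split-and-recombine strategy (not LLVM code; plain arrays stand in for 128-bit and 256-bit vector values): build two 4-lane halves, then concatenate them into the 8-lane result:

  #include <array>
  #include <cstdint>

  using Xmm = std::array<int32_t, 4>; // stands in for a 128-bit vector
  using Ymm = std::array<int32_t, 8>; // stands in for a 256-bit vector

  Ymm buildVector256(const std::array<int32_t, 8> &elts) {
    // Build each 128-bit half separately...
    Xmm lo = {elts[0], elts[1], elts[2], elts[3]};
    Xmm hi = {elts[4], elts[5], elts[6], elts[7]};
    // ...then recombine the halves into the 256-bit result.
    Ymm result{};
    for (int i = 0; i < 4; ++i) {
      result[i] = lo[i];
      result[i + 4] = hi[i];
    }
    return result;
  }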
David Greene
597e995e8d [AVX] Insert/extract subvector lowering support. This includes a
couple of utility functions that will be used in other places for more
AVX lowering.

llvm-svn: 125029
2011-02-07 19:36:54 +00:00
NAKAMURA Takumi
07a84f5950 Target/X86: Tweak allocating the shadow area (aka home area) on Win64. It is enough for the caller to allocate one.
llvm-svn: 124949
2011-02-05 15:11:32 +00:00
NAKAMURA Takumi
5ae1b1d643 lib/Target/X86/X86ISelLowering.cpp: Introduce a new variable "IsWin64". No functional changes.
llvm-svn: 124948
2011-02-05 15:11:13 +00:00
NAKAMURA Takumi
c4522ab931 Target/X86: Fix whitespace.
llvm-svn: 124946
2011-02-05 15:10:54 +00:00
David Greene
50efbf730f [AVX] Revert 124910 until clients are ready.
llvm-svn: 124912
2011-02-05 00:24:41 +00:00
David Greene
0b0ec3aed7 [AVX] Add some utilities to insert and extract 128-bit subvectors.
This allows us to easily support 256-bit operations that don't have
native 256-bit support.  This applies to integer operations, certain
types of shuffles and various other things.

llvm-svn: 124910
2011-02-04 23:29:33 +00:00
David Greene
7de7347ee8 [AVX] Support VINSERTF128 with more patterns and appropriate
infrastructure.  This makes lowering 256-bit vectors to 128-bit
vectors simple when 256-bit vector support is not available.

llvm-svn: 124868
2011-02-04 16:08:29 +00:00
David Greene
2753be260c [AVX] VEXTRACTF128 support. This commit includes patterns for
matching EXTRACT_SUBVECTOR to VEXTRACTF128 along with support routines
to examine and translate index values.  VINSERTF128 comes next.  With
these two in place we can begin supporting more AVX operations as
INSERT/EXTRACT can be used as a fallback when 256-bit support is not
available.

llvm-svn: 124797
2011-02-03 15:50:00 +00:00
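A standalone sketch of the index translation mentioned above (invented names, not the actual support routines): VEXTRACTF128 takes a 0/1 lane immediate, while EXTRACT_SUBVECTOR indexes in elements, so the element index has to be converted to a 128-bit-lane number:

  #include <cassert>

  // A 128-bit lane holds 128 / eltBits elements, so the immediate is
  // the element index divided by the number of elements per lane.
  unsigned extractF128Imm(unsigned eltIndex, unsigned eltBits) {
    unsigned eltsPerLane = 128 / eltBits;
    assert(eltIndex % eltsPerLane == 0 && "subvector must start on a lane");
    return eltIndex / eltsPerLane; // 0 = low lane, 1 = high lane
  }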
Rafael Espindola
5bfba89832 Fix PR9127 by reversing the operands even if they have more than one use.
Reversing the operands allows us to fold, but doesn't force us to. Also, at
this point the DAG is still being optimized, so the check for hasOneUse is not
very precise.

llvm-svn: 124773
2011-02-03 03:58:05 +00:00
Evan Cheng
0e8c521bbd Patches to build EFI with Clang/LLVM. By Carl Norum.
llvm-svn: 124639
2011-02-01 01:14:13 +00:00
Devang Patel
97c467ee47 Keep track of incoming argument's location while emitting LiveIns.
llvm-svn: 124611
2011-01-31 21:38:14 +00:00
David Greene
1f8b96494e [AVX] Clean up the code to configure target lowering for AVX. Specify
how to lower more/new operations.  This is a prerequisite for adding
additional AVX lowering.

llvm-svn: 124447
2011-01-27 22:38:56 +00:00
David Greene
5c173a307b [AVX] Add INSERT_SUBVECTOR and support it on x86. This provides a
default implementation for x86, going through the stack in a similar
fashion to how the codegen implements BUILD_VECTOR.  Eventually this
will get matched to VINSERTF128 if AVX is available.

llvm-svn: 124307
2011-01-26 19:13:22 +00:00
David Greene
93b74739e7 [AVX] Support EXTRACT_SUBVECTOR on x86. This provides a default
implementation of EXTRACT_SUBVECTOR for x86, going through the stack
in a similar fashion to how the codegen implements BUILD_VECTOR.
Eventually this will get matched to VEXTRACTF128 if AVX is available.

llvm-svn: 124292
2011-01-26 15:38:49 +00:00
NAKAMURA Takumi
8ace7260cc Target/X86: Tweak win64's tailcall.
llvm-svn: 124272
2011-01-26 02:04:09 +00:00
NAKAMURA Takumi
066378440a Fix whitespace.
llvm-svn: 124270
2011-01-26 02:03:37 +00:00
Chris Lattner
24ea7f696e fix PR8981, a crash trying to form a conditional inc with a floating point compare.
llvm-svn: 123560
2011-01-16 02:56:53 +00:00
Anton Korobeynikov
cf5967630b Rename TargetFrameInfo into TargetFrameLowering. Also, put a couple of FIXMEs and fixes here and there.
llvm-svn: 123170
2011-01-10 12:39:04 +00:00
Jakob Stoklund Olesen
32f1783ca1 Simplify a bunch of isVirtualRegister() and isPhysicalRegister() logic.
These functions no longer assert when passed 0, but simply return false instead.

No functional change intended.

llvm-svn: 123155
2011-01-10 02:58:51 +00:00
Evan Cheng
1afd04fc59 Recognize inline asm 'rev $0, $1' as a bswap intrinsic call.
llvm-svn: 123048
2011-01-08 01:24:27 +00:00
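For context, the kind of source pattern this recognizes (a hedged example; the exact constraint spelling in the commit's tests is assumed, and the asm only compiles when targeting ARM): an inline-asm byte reverse that can now be treated as an @llvm.bswap call and optimized accordingly:

  #include <cstdint>

  uint32_t byteswap(uint32_t x) {
  #if defined(__arm__)
    uint32_t r;
    // ARM's rev instruction reverses the four bytes of a register; the
    // commit teaches the compiler to see this as a bswap intrinsic.
    asm("rev %0, %1" : "=r"(r) : "r"(x));
    return r;
  #else
    return __builtin_bswap32(x); // portable fallback for other targets
  #endif
  }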
Evan Cheng
ae26b91353 Revert r122955. It seems using movups to lower memcpy can cause massive regressions (even on Nehalem) in edge cases. I also didn't see any real performance benefit.
llvm-svn: 123015
2011-01-07 19:35:30 +00:00
Evan Cheng
1a1771584e Use movups to lower memcpy and memset even on chips where it is not fast (i.e. not Core i7-class).
The theory is that it's still faster than a pair of movq / a quad of movl. This
will probably hurt older chips like the P4 but should run faster on current
and future Intel processors. rdar://8817010

llvm-svn: 122955
2011-01-06 07:58:36 +00:00
Evan Cheng
cb39cc2164 Re-implement r122936 with proper target hooks. getMaxStoresPerMemcpy
etc. now take an OptSize flag. If OptSize is true, they return the
inline limit for functions with the OptSize attribute.

llvm-svn: 122952
2011-01-06 06:52:41 +00:00
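A standalone mock of the hook's shape (invented signature and numbers, not the LLVM API): the limit on how many scalar stores may be emitted inline for a memcpy shrinks when the function is being optimized for size:

  // Answers "how many stores may inline this memcpy?"; a smaller limit
  // applies to functions carrying the OptSize attribute.
  unsigned getMaxStoresPerMemcpy(bool optSize) {
    const unsigned DefaultLimit = 16; // illustrative numbers only
    const unsigned OptSizeLimit = 4;
    return optSize ? OptSizeLimit : DefaultLimit;
  }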
Benjamin Kramer
d8387aa9bd X86: Lower a select directly to a setcc_carry if possible.
int test(unsigned long a, unsigned long b) { return -(a < b); }
compiles to
  _test:                              ## @test
    cmpq  %rsi, %rdi                  ## encoding: [0x48,0x39,0xf7]
    sbbl  %eax, %eax                  ## encoding: [0x19,0xc0]
    ret                               ## encoding: [0xc3]
instead of
  _test:                              ## @test
    xorl  %ecx, %ecx                  ## encoding: [0x31,0xc9]
    cmpq  %rsi, %rdi                  ## encoding: [0x48,0x39,0xf7]
    movl  $-1, %eax                   ## encoding: [0xb8,0xff,0xff,0xff,0xff]
    cmovael %ecx, %eax                ## encoding: [0x0f,0x43,0xc1]
    ret                               ## encoding: [0xc3]

llvm-svn: 122451
2010-12-22 23:09:28 +00:00
Benjamin Kramer
369872edfc Add some x86 specific dagcombines for conditional increments.
(add Y, (sete  X, 0)) -> cmp X, 1; adc  0, Y
(add Y, (setne X, 0)) -> cmp X, 1; sbb -1, Y
(sub (sete  X, 0), Y) -> cmp X, 1; sbb  0, Y
(sub (setne X, 0), Y) -> cmp X, 1; adc -1, Y

for
  unsigned foo(unsigned a, unsigned b) {
    if (a == 0) b++;
    return b;
  }
we now get:
  foo:
    cmpl  $1, %edi
    movl  %esi, %eax
    adcl  $0, %eax
    ret
instead of:
  foo:
    testl %edi, %edi
    sete  %al
    movzbl  %al, %eax
    addl  %esi, %eax
    ret

llvm-svn: 122364
2010-12-21 21:41:44 +00:00
Chris Lattner
65c5243bd6 rename MVT::Flag to MVT::Glue. "Flag" is a terrible name for
something that just glues two nodes together, even if it is
sometimes used for flags.

llvm-svn: 122310
2010-12-21 02:38:05 +00:00
Nate Begeman
c7dfecb10e Implement feedback from Bruno on making pblendvb an x86-specific ISD node in addition to being an intrinsic, and convert
lowering to use it.  Hopefully the pattern fragment is doing the right thing with XMM0; it looks correct in testing.

llvm-svn: 122277
2010-12-20 22:04:24 +00:00
Chris Lattner
bee7320c3c now that addc/adde are gone, "ADDC" in the X86 backend uses EFLAGS results,
the same as setcc.  Optimize ADDC(0,0,FLAGS) -> SET_CARRY(FLAGS).  This is
a step towards finishing off PR5443.  In the testcase in that bug we now get:

	movq	%rdi, %rax
	addq	%rsi, %rax
	sbbq	%rcx, %rcx
	testb	$1, %cl
	setne	%dl
	ret

instead of:

	movq	%rdi, %rax
	addq	%rsi, %rax
	movl	$0, %ecx
	adcq	$0, %rcx
	testq	%rcx, %rcx
	setne	%dl
	ret

llvm-svn: 122219
2010-12-20 01:37:09 +00:00
Chris Lattner
16ea7f257f use a for loop over types.
llvm-svn: 122214
2010-12-20 01:03:27 +00:00
Chris Lattner
8b1f76cad6 Change the X86 backend to stop using the evil ADDC/ADDE/SUBC/SUBE nodes (which
model their carry dependencies with MVT::Flag operands) and use clean and beautiful
EFLAGS dependencies instead.

We do this by changing the modelling of SBB/ADC to have EFLAGS input and outputs
(which is what requires the previous scheduler change) and change X86 ISelLowering
to custom lower ADDC and friends down to X86ISD::ADD/ADC/SUB/SBB nodes.

With the previous series of changes, this causes no changes in the testsuite, woo.

llvm-svn: 122213
2010-12-20 00:59:46 +00:00
Mon P Wang
d3adab7a64 Prevent PerformShuffleCombine from creating a node with an illegal type after type legalization
has run, e.g., prevent creating an i64 node from a v2i64 when i64 is not a legal type.

llvm-svn: 122206
2010-12-19 23:55:53 +00:00
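A standalone sketch of the guard this implies (invented names): a DAG combine may only create a node of some type if type legalization has not run yet, or the type is legal for the target:

  // Returns true when it is safe for a combine to synthesize a node of
  // the given type: either types are not legalized yet, or the type is
  // one the target supports natively.
  bool mayCreateNodeOfType(bool beforeLegalizeTypes, bool typeIsLegal) {
    return beforeLegalizeTypes || typeIsLegal;
  }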
Chris Lattner
297259f6f1 improve the setcc -> setcc_carry optimization to happen more
consistently by moving it out of lowering into dag combine.

Add some missing patterns for matching away extended versions of setcc_c.

llvm-svn: 122201
2010-12-19 22:08:31 +00:00
Chris Lattner
1f31c7fa15 simplify some code to just reuse a setcc if we can instead of
going through the CSE maps to get it.

llvm-svn: 122196
2010-12-19 21:23:48 +00:00
Chris Lattner
2d59eef5fd now that generic vector types aren't selected onto MMX operations,
we don't need -disable-mmx anymore.

llvm-svn: 122189
2010-12-19 20:19:20 +00:00
Chris Lattner
30438e63c8 reduce copy/paste programming with the power of for loops.
llvm-svn: 122187
2010-12-19 20:07:10 +00:00
Chris Lattner
29475c23d0 X86 supports i8/i16 overflow ops (except i8 multiplies), so we should
generate them.

Now we compile:

define zeroext i8 @X(i8 signext %a, i8 signext %b) nounwind ssp {
entry:
  %0 = tail call %0 @llvm.sadd.with.overflow.i8(i8 %a, i8 %b)
  %cmp = extractvalue %0 %0, 1
  br i1 %cmp, label %if.then, label %if.end

into:

_X:                                     ## @X
## BB#0:                                ## %entry
	subl	$12, %esp
	movb	16(%esp), %al
	addb	20(%esp), %al
	jo	LBB0_2

Before we were generating:

_X:                                     ## @X
## BB#0:                                ## %entry
	pushl	%ebp
	movl	%esp, %ebp
	subl	$8, %esp
	movb	12(%ebp), %al
	testb	%al, %al
	setge	%cl
	movb	8(%ebp), %dl
	testb	%dl, %dl
	setge	%ah
	cmpb	%cl, %ah
	sete	%cl
	addb	%al, %dl
	testb	%dl, %dl
	setge	%al
	cmpb	%al, %ah
	setne	%al
	andb	%cl, %al
	testb	%al, %al
	jne	LBB0_2

llvm-svn: 122186
2010-12-19 20:03:11 +00:00
Nate Begeman
ef5f3c0fa7 Add support for matching psign & pblendvb on the x86 target
Remove unnecessary pandn patterns; the 'vnot' patfrag looks through bitcasts

llvm-svn: 122098
2010-12-17 22:55:37 +00:00