Commit Graph

28779 Commits

Matt Arsenault
48848ba546 R600/SI: Prettier operand printing for 64-bit ops.
Copy what is already done for 32-bit so the order is about the same.

llvm-svn: 211186
2014-06-18 17:13:51 +00:00
Matheus Almeida
d742950090 [mips] The SYNC $stype instruction was added in Mips32,
but SYNC with an implied operand ($stype = 0) has been valid since Mips2.

llvm-svn: 211185
2014-06-18 17:10:30 +00:00
Matt Arsenault
068d030935 R600: Implement f64 ftrunc, ffloor and fceil.
CI has native instructions for these, so this fixes them for older (pre-CI) hardware.

llvm-svn: 211183
2014-06-18 17:05:30 +00:00
Matt Arsenault
77f7e6fc35 R600: Custom lower f64 frint for pre-CI
llvm-svn: 211182
2014-06-18 17:05:26 +00:00
Matt Arsenault
990ee542e5 R600/SI: Temporary fix for f64 fneg
This should be a source modifier, but this unblocks
most of my math patches.

llvm-svn: 211181
2014-06-18 17:05:22 +00:00
Matt Arsenault
dc16a24358 R600/SI: Comparisons set vcc.
llvm-svn: 211178
2014-06-18 16:53:48 +00:00
Adam Nemet
d36a2c2dba [X86] AVX512: Add non-temporal stores
Note that I followed the AVX2 convention here and didn't add LLVM intrinsics
for stores.  These can be generated with the nontemporal hint on LLVM IR
stores (see new test). The GCC builtins are lowered directly into nontemporal
stores.
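
For illustration, a minimal IR sketch of a store carrying the nontemporal hint
(hypothetical function and value names, not taken from this commit):

  define void @nt_store(<8 x i64>* %p, <8 x i64> %v) {
    ; the !nontemporal metadata is the hint matched during lowering
    store <8 x i64> %v, <8 x i64>* %p, align 64, !nontemporal !0
    ret void
  }
  !0 = !{i32 1}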

<rdar://problem/17082571>

llvm-svn: 211176
2014-06-18 16:51:10 +00:00
Adam Nemet
2673c0a107 [X86] AVX512: Specify compressed displacement for vmovntdqa
Use the max 64-bit element size with EVEX_CD8.  This should work since element
size is ignored for a full-vector access (FVM).

llvm-svn: 211175
2014-06-18 16:51:07 +00:00
Ulrich Weigand
1458f67d3e [PowerPC] Do not use BLA with the 64-bit SVR4 ABI
The PowerPC back-end uses BLA to implement calls to functions at
known-constant addresses, which is apparently used for certain
system routines on Darwin.

However, with the 64-bit SVR4 ABI, this is actually incorrect.
An immediate function pointer value on this platform is not
directly usable as a target address for BLA:
- in the ELFv1 ABI, the function pointer value refers to the
  *function descriptor*, not the code address
- in the ELFv2 ABI, the function pointer value refers to the
  global entry point, but BL(A) would only be correct when
  calling the *local* entry point

This bug didn't show up since using immediate function pointer
values is not usually done in the 64-bit SVR4 ABI in the first
place.  However, I ran into this issue with a certain use case
of LLVM as JIT, where immediate function pointer values were
used to implement callbacks from JITted code to helpers in
statically compiled code.
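
For illustration only (made-up address value), such a call looks roughly like
this in IR; the back-end then lowers the known-constant callee address:

  define void @call_helper() {
    ; hypothetical absolute address of a statically compiled helper
    %fp = inttoptr i64 4660 to void ()*
    call void %fp()
    ret void
  }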

Fixed by simply not using BLA with the 64-bit SVR4 ABI.

llvm-svn: 211174
2014-06-18 16:14:04 +00:00
Ulrich Weigand
cfcf14bd4c [PowerPC] Fix emitting instruction pairs on LE
My patch r204634 to emit instructions in little-endian format failed to
handle those special cases where we emit a pair of instructions from a
single LLVM MC instruction (like the bl; nop pairs used to implement
the call sequence).

In those cases, we still need to emit the "first" instruction (the one
in the more significant word) first, on both big and little endian,
and not swap them.

llvm-svn: 211171
2014-06-18 15:37:07 +00:00
Matheus Almeida
20a0332ec0 [mips] Fix expansion of memory operation if destination register is not a GPR.
Summary:
The assembler tries to reuse the destination register for memory operations whenever
it can but it's not possible to do so if the destination register is not a GPR.

Example:
  ldc1 $f0, sym
should expand to:
  lui $at, %hi(sym)
  ldc1 $f0, %lo(sym)($at)

It's entirely wrong to expand to:
  lui $f0, %hi(sym)
  ldc1 $f0, %lo(sym)($f0)

Reviewers: dsanders

Reviewed By: dsanders

Differential Revision: http://reviews.llvm.org/D4173

llvm-svn: 211169
2014-06-18 14:49:56 +00:00
Matheus Almeida
c796a413df [mips] Report correct location when "erroring" about the use of $at when it's not available.
Summary: This removes the FIXMEs from test/MC/Mips/mips-noat.s.

Reviewers: dsanders

Reviewed By: dsanders

Differential Revision: http://reviews.llvm.org/D4172

llvm-svn: 211168
2014-06-18 14:46:05 +00:00
Zoran Jovanovic
02da9f46d5 [mips][mips64r6] Add BLTC and BLTUC instructions
Differential Revision: http://reviews.llvm.org/D3923

llvm-svn: 211167
2014-06-18 14:36:00 +00:00
Matheus Almeida
97cad15f37 [mips] Access $at only if necessary.
Summary:
This patch doesn't really change the logic behind expandMemInst but it allows
us to assemble .S files that use .set noat with some macros. For example:

.set noat
lw $k0, offset($k1)

Can expand to:
lui	$k0, %hi(offset)
addu	$k0, $k0, $k1
lw	$k0, %lo(offset)($k0)

with no need to access $at.

Reviewers: dsanders, vmedic

Reviewed By: dsanders, vmedic

Differential Revision: http://reviews.llvm.org/D4159

llvm-svn: 211165
2014-06-18 14:15:42 +00:00
Cameron McInally
8c86d371c4 Add pattern for unsigned v4i32->v4f64 convert on AVX512.
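A minimal IR sketch of the operation this pattern covers (illustrative names):

  define <4 x double> @cvt(<4 x i32> %v) {
    ; unsigned v4i32 -> v4f64 conversion
    %r = uitofp <4 x i32> %v to <4 x double>
    ret <4 x double> %r
  }
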
llvm-svn: 211164
2014-06-18 14:04:37 +00:00
Matheus Almeida
2185c77bcc [mips] Update MipsAsmParser so that it's possible to handle immediates that start with the bitwise NOT operator (~).
Reviewers: dsanders

Reviewed By: dsanders

Differential Revision: http://reviews.llvm.org/D4158

llvm-svn: 211163
2014-06-18 13:55:18 +00:00
Matheus Almeida
9d6564dc81 [mips] Implement alias for 'and' and 'or' instructions for all ISAs.
Summary:
Examples: 
and $2, 4 <=> andi $2, $2, 4
or $2, 4 <=> ori $2, $2, 4

Reviewers: dsanders

Reviewed By: dsanders

Differential Revision: http://reviews.llvm.org/D4155

llvm-svn: 211161
2014-06-18 13:30:57 +00:00
Matheus Almeida
6d8543a62d [mips] Remove the last usage of parseRegister from MipsAsmParser.
Summary:
Added negative test case so that we can be sure we handle erroneous situations
while parsing the .cpsetup directive.

Reviewers: dsanders

Reviewed By: dsanders

Differential Revision: http://reviews.llvm.org/D3681

llvm-svn: 211160
2014-06-18 13:08:59 +00:00
Jan Vesely
79097fa008 R600: Implement 64bit SRA
v2: Use capitalized variable name

Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
llvm-svn: 211159
2014-06-18 12:27:17 +00:00
Jan Vesely
e8b0cf21f9 R600: Implement 64bit SRL
v2: use C++ style comment

Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
llvm-svn: 211158
2014-06-18 12:27:15 +00:00
Jan Vesely
945809b390 R600: Implement 64bit SHL
v2: Use c++ style comment

Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
llvm-svn: 211157
2014-06-18 12:27:13 +00:00
Kevin Qin
8c13822db4 [AArch64] Fix a pattern match failure caused by creating improper CONCAT_VECTOR.
ReconstructShuffle() may wrongly create a CONCAT_VECTOR trying to
concat 2 of v2i32 into v4i16. This commit fixes that issue and
tries to generate UZP1 instead of lots of MOV and INS.
Patch was initially written by Kevin Qin and refactored by Tim Northover.

llvm-svn: 211144
2014-06-18 05:54:42 +00:00
Craig Topper
701fca8e8d Replace some assert(0)'s with llvm_unreachable.
llvm-svn: 211141
2014-06-18 05:05:13 +00:00
Louis Gerbarg
c6a8e61725 Allow X86FastIsel to cope with 64 bit absolute relocations
This patch is a follow-up to r211040 & r211052. Rather than bailing out of fast
isel, this patch will generate an alternate instruction (movabsq) instead of the
leaq. While this will always have enough room to handle the 64-bit displacement,
it is generally overkill for internal symbols (most displacements will be within
32 bits), but we have no way of communicating the code model to the assembler
so that it could avoid flagging an absolute leal/leaq as illegal when using a
symbolic displacement.

llvm-svn: 211130
2014-06-17 23:22:41 +00:00
Juergen Ributzka
4e43e935e5 [FastISel][X86] Optimize predicates and fold CMP instructions.
This optimizes predicates for certain compares, such as fcmp oeq %x, %x to
fcmp ord %x, %x. The latter one is more efficient to generate.

The same optimization is applied to conditional branches.
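
A minimal IR sketch of the rewrite (illustrative value names):

  ; before: self-compare for equality, true iff %x is not NaN
  %cmp1 = fcmp oeq double %x, %x
  ; after: the equivalent, cheaper ordered check
  %cmp2 = fcmp ord double %x, %x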

llvm-svn: 211126
2014-06-17 21:55:43 +00:00
Tom Stellard
fa8603abf4 R600/SI: Make sure target flags are set on pseudo VOP3 instructions
llvm-svn: 211120
2014-06-17 19:34:46 +00:00
Matt Arsenault
508925b13a R600/SI: Match cttz_zero_undef
llvm-svn: 211116
2014-06-17 17:36:27 +00:00
Matt Arsenault
71fb43e88e R600/SI: Match ctlz_zero_undef
llvm-svn: 211115
2014-06-17 17:36:24 +00:00
Tom Stellard
a529beed9c R600: Use LDS and vectors for private memory
llvm-svn: 211110
2014-06-17 16:53:14 +00:00
Tom Stellard
4bdc400436 R600/SI: Add a pattern for llvm.AMDGPU.barrier.global
llvm-svn: 211109
2014-06-17 16:53:09 +00:00
Tom Stellard
6987d06184 SelectionDAG: Expand i64 = FP_TO_SINT i32
llvm-svn: 211108
2014-06-17 16:53:07 +00:00
Tom Stellard
1513f160dc R600/SI: Re-initialize the m0 register after using it for indirect addressing
We need to store a value greater than or equal to the number of LDS
bytes allocated by the shader in the m0 register in order for LDS
instructions to work correctly.

We always initialize m0 at the beginning of a shader, but this register
is also used for indirect addressing offsets, so we need to
re-initialize it any time we use indirect addressing.

llvm-svn: 211107
2014-06-17 16:53:04 +00:00
Juergen Ributzka
39c5c57c4a [FastISel][X86] Fix previous refactoring commit (r211077)
Overlooked that fcmp_une uses an "or" instead of an "and" for combining the
flags.

llvm-svn: 211104
2014-06-17 14:47:45 +00:00
James Molloy
730bf7f760 Fix memory leak of RegScavenger accidentally added in r211037.
llvm-svn: 211097
2014-06-17 12:31:41 +00:00
Tim Northover
300d65b2ea AArch64: estimate inline asm length during branch relaxation
To make sure branches are in range, we need to do a better job of estimating
the length of an inline assembly block than "it's probably 1 instruction, who'd
write asm with more than that?".

Fortunately there's already a (highly suspect, see how many ways you can think
of to break it!) callback for this purpose, which is used by the other targets.

rdar://problem/17277590

llvm-svn: 211095
2014-06-17 11:31:42 +00:00
Juergen Ributzka
199456e51e [FastISel][X86] Refactor the code to get the X86 condition from a helper function. NFC.
Make use of helper functions to simplify the branch and compare instruction
selection in FastISel. Also add test cases for compare and conditional branch.

llvm-svn: 211077
2014-06-16 23:58:24 +00:00
Reed Kotler
e3da1c94f1 Add load/store functionality
Summary:
This patch allows non-conversions like i1 = i2, where both are global ints.
In addition, arithmetic and other things start to work since fast-isel will use
existing patterns for non fast-isel from tablegen files where applicable.

In addition, i8 and i16 will work in this limited context for assignment without the need
for extension (zero or sign). It does not matter how i8 or i16 values are loaded (zero- or
sign-extended), since only the relevant 8 or 16 bits are used and clang will ask for sign
extension before using them in arithmetic. This is all made more complete in forthcoming patches.

for example:
  int i, j=1, k=3;
 
  void foo() {
    i = j + k;
  }

Keep in mind that this pass is experimental and not enabled right now.
It can only be enabled with the hidden llvm option -mips-fast-isel.

Test Plan: Run test-suite, loadstore2.ll and I will run some executable tests.

Reviewers: dsanders

Subscribers: mcrosier

Differential Revision: http://reviews.llvm.org/D3856

llvm-svn: 211061
2014-06-16 22:05:47 +00:00
Jim Grosbach
6677259350 AArch64: Add backend intrinsic for rbit.
Define an intrinsic for the frontend to use and pattern match it to
the RBIT instruction.
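
As a rough sketch of the intended use from IR (the intrinsic name and signature
below are assumptions for illustration, not taken from this commit):

  ; assumed intrinsic name; expected to pattern-match to a single RBIT
  declare i32 @llvm.aarch64.rbit.i32(i32)

  define i32 @reverse_bits(i32 %x) {
    %r = call i32 @llvm.aarch64.rbit.i32(i32 %x)
    ret i32 %r
  }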

rdar://9283021

llvm-svn: 211058
2014-06-16 21:55:35 +00:00
Jim Grosbach
0ce3294395 ARM: intrinsic support for rbit.
We already have an ARMISD node. Create an intrinsic to map to it so we can
add support for the frontend __rbit() intrinsic.

rdar://9283021

llvm-svn: 211057
2014-06-16 21:55:30 +00:00
Bill Schmidt
b0bab996e0 [PPC64] Fix PR19893 - improve code generation for local function addresses
Rafael opened http://llvm.org/bugs/show_bug.cgi?id=19893 to track non-optimal
code generation for forming a function address that is local to the compile
unit.  The existing code was treating both local and non-local functions
identically.

This patch fixes the problem by properly identifying local functions and
generating the proper addis/addi code.  I also noticed that Rafael's earlier
changes to correct the surrounding code in PPCISelLowering.cpp were also
needed for fast instruction selection in PPCFastISel.cpp, so this patch
fixes that code as well.

The existing test/CodeGen/PowerPC/func-addr.ll is modified to test the new
code generation.  I've added a -O0 run line to test the fast-isel code as
well.

Tested on powerpc64[le]-unknown-linux-gnu with no regressions.

llvm-svn: 211056
2014-06-16 21:36:02 +00:00
Eric Christopher
0c4bc0dbe3 Since the DataLayout is always found off of the subtarget, go ahead
and query the base target machine implementation for it.

llvm-svn: 211055
2014-06-16 21:18:27 +00:00
Louis Gerbarg
7ea43963d7 Improve comments for r211040
Added a comment to clarify why r211040 chose to bail out of fast isel instead
of generating a more complicated relocation, and fixed a mislabelled register in
the comments of the asan test case.

llvm-svn: 211052
2014-06-16 20:31:50 +00:00
Tim Northover
7f8ca02cf4 ARM: implement correct atomic operations on v7M
ARM v7M has ldrex/strex but not ldrexd/strexd. This means 32-bit
operations should work as normal, but 64-bit ones are almost certainly
doomed.

Patch by Phoebe Buckheister.

llvm-svn: 211042
2014-06-16 18:49:36 +00:00
Louis Gerbarg
55f89e91ff Fix illegal relocations in X86FastISel
On x86_64, the lea instruction can only use a 32-bit immediate value. When
the code is compiled statically, the RIP register is not used, meaning the
immediate is all that can be used for the relocation, which is not sufficient
in the case of targets more than +/- 2GB away. This patch bails out of fast
isel in those cases and reverts to DAG, which does the right thing.

Test case included.

llvm-svn: 211040
2014-06-16 17:35:40 +00:00
James Molloy
26c8f2b1cd Refactor the disabling of Thumb-1 LDM/STM generation
Originally I switched the LD/ST optimizer off in TargetMachine, as was done previously, but Eric has suggested he'd prefer that it be short-circuited in the pass itself.

No functionality change.

llvm-svn: 211037
2014-06-16 16:42:53 +00:00
Tilmann Scheller
c25b867f23 [AArch64] Remove dead code.
Both function declarations lack a caller and an implementation.

llvm-svn: 211029
2014-06-16 15:15:41 +00:00
Cameron McInally
1bfa586059 Hook up vector int_ctlz for AVX512.
llvm-svn: 211024
2014-06-16 14:12:28 +00:00
Daniel Sanders
4895243b92 [mips][mips64r6] ssnop is deprecated on MIPS32r6/MIPS64r6
Summary: Depends on D4120

Reviewers: jkolek, zoran.jovanovic, vmedic

Reviewed By: zoran.jovanovic, vmedic

Differential Revision: http://reviews.llvm.org/D4121

llvm-svn: 211021
2014-06-16 13:25:35 +00:00
Daniel Sanders
495b392e19 [mips][mips64r6] cl[oz] and dcl[oz] are re-encoded in MIPS32r6/MIPS64r6
Summary:
There is no change to the restrictions, just the result register is stored
once in the encoding rather than twice. The rt field is zero in
MIPS32r6/MIPS64r6.

Depends on D4119

Reviewers: zoran.jovanovic, jkolek, vmedic

Reviewed By: vmedic

Differential Revision: http://reviews.llvm.org/D4120

llvm-svn: 211019
2014-06-16 13:18:59 +00:00
Daniel Sanders
2a30e4fcab [mips][mips64r6] ll, sc, lld, and scd are re-encoded on MIPS32r6/MIPS64r6.
Summary:
The linked-load and store-conditional operations have been re-encoded such
that they have a 9-bit offset instead of the 16-bit offset they had prior to
MIPS32r6/MIPS64r6.

While implementing this, I noticed that the atomic load/store pseudos always
emit a sign extension using sll and sra. I have improved this to use seb/seh
when they are available (MIPS32r2/MIPS64r2 and above).

Depends on D4118

Reviewers: jkolek, zoran.jovanovic, vmedic

Reviewed By: vmedic

Differential Revision: http://reviews.llvm.org/D4119

llvm-svn: 211018
2014-06-16 13:13:03 +00:00