mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-26 22:42:46 +02:00
Commit Graph

12508 Commits

Author SHA1 Message Date
Andrea Di Biagio
7008d5a01f [X86][AVX] Fix failure due to a missing ISel pattern to select VBROADCAST nodes (PR23259).
This fixes a regression introduced at revision 218263.

On AVX, if we optimize for size, a splat build_vector of a load
is lowered into a VBROADCAST node. This is done even if the value type of the
splat build_vector node is v2i64.

Since AVX doesn't support v2f64/v2i64 broadcasts, revision 218263 added two
extra tablegen patterns to allow selecting a VMOVDDUPrm from an X86VBroadcast
where the scalar element comes from a loadi64/loadf64.

However, revision 218263 forgot to add an extra fallback pattern for the case
where we have an X86VBroadcast of a loadi64 with multiple uses.

This patch adds the missing tablegen pattern in X86InstrSSE.td.
This patch also adds an extra test to 'splat-for-size.ll' to verify that ISel
doesn't crash with a 'fatal error in the backend' due to a missing AVX pattern
to select v2i64 X86ISD::BROADCAST nodes.

llvm-svn: 235509
2015-04-22 14:53:39 +00:00
Hal Finkel
df195a3bea [DAGCombine] Disable select(c, load,load) for indexed loads
This turned up after r235333, but was a pre-existing bug. The optimization
which transforms select(c, load, load) into a load of a select of the addresses
does not handle indexed loads (pre/post inc/dec). However, it also did not
check for them, leading to a crash when it tried to transform one of them.

llvm-svn: 235497
2015-04-22 11:32:25 +00:00
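At the source level, the combine described above looks roughly like this (an
illustrative C++ sketch with made-up function names, not LLVM's actual code):

  // A select of two loaded values becomes a single load of a select of
  // the two addresses.
  int before(bool c, const int *p, const int *q) { return c ? *p : *q; }
  int after(bool c, const int *p, const int *q)  { return *(c ? p : q); }

An indexed (pre/post inc/dec) load additionally produces an updated base
address as a second result; the combine doesn't model that extra value, so it
now bails out on indexed loads instead of crashing.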
Vasileios Kalintiris
7d80549d5b Revert "[mips][FastISel] Implement shift ops for Mips fast-isel."
This reverts commit r235194. It was causing a failure in FastISel buildbots
due to sign-extension issues.

llvm-svn: 235495
2015-04-22 10:08:46 +00:00
James Molloy
ffea00f649 [AArch64] Disable complex GEP optimization by default.
Enough concerns were raised that this optimization is pessimising some code patterns.

The obvious fix, adding a Reassociate run afterwards, causes even more pessimisation in some cases because fewer complex addressing modes get matched. As there isn't a trivial fix for this, this optimization is being backed out by default until someone gets a chance to fix the addressing-mode matcher.

llvm-svn: 235491
2015-04-22 09:11:38 +00:00
Lang Hames
76014c544d [patchpoint] Add support for symbolic patchpoint targets to SelectionDAG and the
X86 backend.

The code generated for symbolic targets is identical to the code generated for
constant targets, except that a relocation is emitted to fix up the actual
target address at link-time. This allows IR and object files containing
patchpoints to be cached across JIT-invocations where the target address may
change.

llvm-svn: 235483
2015-04-22 06:02:31 +00:00
Sanjay Patel
b365b56ba8 [x86] allow 64-bit extracted vector element integer stores on a 32-bit system
With SSE2, we can generate a 'movq' or other 64-bit store op on a 32-bit system
even though 64-bit integers are not legal types.

So instead of producing this:

  pshufd	$229, %xmm0, %xmm1      ## xmm1 = xmm0[1,1,2,3]
  movd	%xmm0, (%eax)
  movd	%xmm1, 4(%eax)

We can do:

  movq %xmm0, (%eax)

This is a fix for the problem noted in D7296.

Differential Revision: http://reviews.llvm.org/D9134

llvm-svn: 235460
2015-04-22 00:24:30 +00:00
Reid Kleckner
2e3b8976cc [WinEH] Correctly handle inlined __finally blocks with captures
We should also teach the inliner to collapse framerecover of
frameaddress of the current frame down to an alloca, but that can happen
later.

llvm-svn: 235459
2015-04-22 00:07:52 +00:00
Krzysztof Parzyszek
0759b1ad49 [Hexagon] Patterns for frame index with offset for isel
llvm-svn: 235418
2015-04-21 21:28:03 +00:00
Reid Kleckner
3b4014368f Re-land r235154-r235156 under the existing -sehprepare flag
Keep the old SEH fan-in lowering on by default for now, since projects
rely on it.  This will make it easy to test this change with a simple
flag flip.

llvm-svn: 235399
2015-04-21 18:23:57 +00:00
Matthias Braun
d300744bcd X86: Match for X86ISD nodes in LowerBUILD_VECTOR instead of BUILD_VECTORCombine
There doesn't seem to be a reason to perform this target ISD node matching
in a DAGCombine; moving it to lowering fixes PR23296.

Differential Revision: http://reviews.llvm.org/D9137

llvm-svn: 235394
2015-04-21 17:21:36 +00:00
Vasileios Kalintiris
ea11455700 [mips] Optimize code generation for 64-bit variable shift instructions.
Summary:
The 64-bit version of the variable shift instructions uses the
shift_rotate_reg class, which uses a GPR32Opnd to specify the variable
shift amount. With this patch we avoid generating a redundant
SLL instruction for the variable shift instructions on 64-bit targets.

Reviewers: dsanders

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D7413

llvm-svn: 235376
2015-04-21 10:49:03 +00:00
Elena Demikhovsky
abf0138a81 AVX-512: Added logical and arithmetic instructions for SKX
by Asaf Badouh (asaf.badouh@intel.com)

llvm-svn: 235375
2015-04-21 10:27:40 +00:00
Simon Pilgrim
5ad623ef36 [X86][SSE] Provide execution domains for scalar floating point operations
This is an updated version of Chandler's patch D7402 that got accepted but never committed, and has bit-rotted a bit since.

I've updated the execution domain declarations to match the approach of the packed templates and also added some extra scalar unary tests.

Differential Revision: http://reviews.llvm.org/D9095

llvm-svn: 235372
2015-04-21 08:40:22 +00:00
Simon Pilgrim
d5bdbd921b CONCAT_VECTOR of BUILD_VECTOR - minor fix
Fixed issue with the combine of CONCAT_VECTOR of 2 BUILD_VECTOR nodes - the optimisation wasn't ensuring that the scalar operands of both nodes were the same type/size for implicit truncation.

Test case spotted by Patrik Hagglund

llvm-svn: 235371
2015-04-21 08:05:43 +00:00
Pawel Bylica
b046734bb1 Fix generic shift expansion when shift amount is 0
Summary:
This fixes http://llvm.org/bugs/show_bug.cgi?id=16439. 

This is one possible way to approach this. The other would be to split InL>>(nbits-Amt) into (InL>>(nbits-1-Amt))>>1, which is also valid since we only need to care about Amt up to nbits-1. It's hard to tell which one is better: the shift might be expensive if the type at this stage of expansion is not yet a legal machine integer type, whereas comparisons with zero are relatively cheap at all sizes, though more expensive than a shift when the shift is on a legal machine type.

Patch by Keno Fischer!

Test Plan: regression test from http://reviews.llvm.org/D7752

Reviewers: chfast, resistor

Reviewed By: chfast, resistor

Subscribers: sanjoy, resistor, chfast, llvm-commits

Differential Revision: http://reviews.llvm.org/D4978

llvm-svn: 235370
2015-04-21 06:28:36 +00:00
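The Amt == 0 hazard is easy to see in a standalone C++ sketch (illustrative
only: the function name is made up, and Amt is assumed to be in [0, 31] since
the expansion handles the Amt >= nbits case separately):

  #include <cstdint>

  // Expand a 64-bit shift left into 32-bit operations. The naive high
  // half is (InH << Amt) | (InL >> (32 - Amt)), but for Amt == 0 that
  // computes InL >> 32, which is undefined in C and wrong on most
  // hardware. Guarding on Amt == 0 mirrors the fixed expansion.
  uint64_t shl64(uint32_t InL, uint32_t InH, unsigned Amt) {
    uint32_t Hi = (InH << Amt) | (Amt == 0 ? 0 : InL >> (32 - Amt));
    uint32_t Lo = InL << Amt;
    return ((uint64_t)Hi << 32) | Lo;
  }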
Matthias Braun
3df6448424 X86: Do not select X86 custom vector nodes if operand types don't match
X86ISD::ADDSUB, X86ISD::(F)HADD, X86ISD::(F)HSUB should not be selected
if the operand types do not match the result type because vector type
legalization cannot deal with this for custom nodes.

A testcase for X86ISD::ADDSUB is attached. I could not create a testcase for
the FHADD/FHSUB cases because of https://llvm.org/bugs/show_bug.cgi?id=23296

Differential Revision: http://reviews.llvm.org/D9120

llvm-svn: 235367
2015-04-21 01:13:41 +00:00
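A hedged sketch of the kind of guard this implies (not the verbatim fix; the
surrounding combine code is omitted):

  #include "llvm/CodeGen/SelectionDAGNodes.h"
  using namespace llvm;

  // Refuse to form a custom node such as X86ISD::ADDSUB unless both
  // operands already have the result's vector type; type legalization
  // cannot repair a mismatch on a target-specific node afterwards.
  static bool operandTypesMatch(SDValue LHS, SDValue RHS, EVT VT) {
    return LHS.getValueType() == VT && RHS.getValueType() == VT;
  }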
Pirama Arumuga Nainar
c8c950f572 Fix flakiness in fp16-promote.ll
Summary:
In the f16-promote test, make the checks for native conversion instructions
similar to the libcall checks:
- Remove hard coded register names
- Do not check exact instruction sequences.

This fixes test flakiness due to non-determinism in instruction
scheduling and register allocation.  I also fixed a few minor things in
the CHECK-LIBCALL checks.

I'll try to find a way to check that unnecessary loads, stores, or
conversions don't happen.

Reviewers: mzolotukhin, srhines, ab

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D9112

llvm-svn: 235363
2015-04-20 23:54:41 +00:00
Sanjay Patel
3391f7c43c use update_llc_test_checks.py to tighten checking
Also, replace the win and linux runs with a generic run because the OS
makes no difference to what this test is checking.

llvm-svn: 235361
2015-04-20 23:31:53 +00:00
Andrew Kaylor
721c6b75e7 [WinEH] Fix problem with mapping shared empty handler blocks.
Differential Revision: http://reviews.llvm.org/D9125

llvm-svn: 235354
2015-04-20 22:04:09 +00:00
Olivier Sallenave
f748efd9fd Refactoring and enhancement to FMA combine.
llvm-svn: 235344
2015-04-20 20:29:40 +00:00
Andrew Kaylor
7f5330d3c0 Fixing line endings
llvm-svn: 235342
2015-04-20 20:27:28 +00:00
Pirama Arumuga Nainar
a463349e8a [MIPS] OperationAction for FP_TO_FP16, FP16_TO_FP
Summary:
Set operation action for FP16 conversion opcodes, so the Op legalizer
can choose the gnu_* libcalls for Mips.

Set LoadExtAction and TruncStoreAction for f16 scalars and vectors to
prevent (fpext (load)) and (store (fptrunc)) from getting combined into
unsupported operations.

Added test cases to test that these operations are handled correctly
for f16 scalars and vectors.  This patch depends on
http://reviews.llvm.org/D8755.

Reviewers: srhines

Subscribers: llvm-commits, ab

Differential Revision: http://reviews.llvm.org/D8804

llvm-svn: 235341
2015-04-20 20:15:36 +00:00
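In TargetLowering terms, the setup described above amounts to something like
the following sketch (an assumed shape, based on LLVM's TargetLoweringBase API
of this era; not the verbatim Mips code):

  // Expand the conversion opcodes so legalization emits the gnu_* libcalls.
  setOperationAction(ISD::FP16_TO_FP, MVT::f32, Expand);
  setOperationAction(ISD::FP_TO_FP16, MVT::f32, Expand);

  // Keep (fpext (load f16)) and (store (fptrunc ... to f16)) from being
  // combined into extending loads / truncating stores we cannot select.
  setLoadExtAction(ISD::EXTLOAD, MVT::f32, MVT::f16, Expand);
  setTruncStoreAction(MVT::f32, MVT::f16, Expand);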
Tom Stellard
68ab711728 DAGCombine: Remove redundant NaN checks around ISD::FSQRT
This folds:

(select (setcc x, -0.0, *lt), NaN, (fsqrt x)) -> (fsqrt x)

llvm-svn: 235333
2015-04-20 19:38:27 +00:00
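In source terms, the fold says these two functions are equivalent (up to NaN
payload), because sqrt of a strictly negative input already yields NaN; a
small illustrative C++ sketch with made-up names:

  #include <cmath>

  float before(float x) { return x < -0.0f ? NAN : std::sqrt(x); }
  float after(float x)  { return std::sqrt(x); }
  // For x < -0.0, sqrt(x) is NaN anyway; otherwise the compare is false
  // and both functions return sqrt(x), so the NaN select is redundant.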
Andrea Di Biagio
b10889be03 [X86][FastIsel] Fix assertion failure when selecting int-to-double conversion (PR23273).
This fixes a regression introduced at revision 231243.
The target-independent selection algorithm in FastISel knows how to select
a SINT_TO_FP if the target is SSE but not AVX. That is because on X86, the
tablegen'd 'fastEmit' functions know how to select CVTSI2SSrr and CVTSI2SDrr.

Method X86FastISel::X86SelectSIToFP was therefore working under the
wrong assumption that the target was AVX. That assumption was incorrect since
we can have a target that is neither AVX nor SSE.

So, rather than asserting on the presence of AVX, 'X86SelectSIToFP' should
have exited early if the target was not AVX.
This patch fixes the issue by replacing the invalid assertion with an early exit.

Thanks to Dimitry Andric for reporting this problem and for providing a small
reproducible testcase. Added test pr23273.ll.

llvm-svn: 235295
2015-04-20 11:56:59 +00:00
Hal Finkel
40b204ea4b [InlineAsm] Remove EarlyClobber on registers that are also inputs
When an inline asm call has an output register marked as early-clobber, but
that same register is also an input operand, what should we do? GCC accepts
this, and is documented to accept this for read/write operands saying,
"Furthermore, if the earlyclobber operand is also a read/write operand, then
that operand is written only after it's used." For write-only operands, the
situation seems less clear, but I have at least one existing codebase that
assumes this will work, in part because it has syscall macros like this:

({                                                                         \
  register uint64_t r0 __asm__ ("r0") = (__NR_ ## name);                   \
  register uint64_t r3 __asm__ ("r3") = ((uint64_t) (arg0));               \
  register uint64_t r4 __asm__ ("r4") = ((uint64_t) (arg1));               \
  register uint64_t r5 __asm__ ("r5") = ((uint64_t) (arg2));               \
  __asm__ __volatile__                                                     \
  ("sc"                                                                    \
   : "=&r"(r0),"=&r"(r3),"=&r"(r4),"=&r"(r5)                               \
   :   "0"(r0),  "1"(r3),  "2"(r4),  "3"(r5)                               \
   : "r6","r7","r8","r9","r10","r11","r12","cr0","memory");                \
  r3;                                                                      \
})

Furthermore, with register aliases and subregister relationships that only the
backend knows about, rejecting this in the frontend seems like a difficult
proposition (if we wanted to do so). However, keeping the early-clobber flag on
the INLINEASM MI does not work for us, because it will cause the register's
live interval to end too soon (so it will not appear defined when used as an
input).

Fortunately, fixing this does not seem hard: When forming the INLINEASM MI,
check to see if any of the early-clobber outputs are also inputs, and if so,
remove the early-clobber flag.

llvm-svn: 235283
2015-04-20 00:01:30 +00:00
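In spirit, the check looks like the sketch below (a hypothetical helper; the
actual fix applies this while the INLINEASM MI is being formed rather than as
a separate pass over a finished instruction):

  #include "llvm/CodeGen/MachineInstr.h"
  using namespace llvm;

  // Drop the early-clobber flag from any def whose register is also read
  // as an input, so its live interval is not ended prematurely.
  static void clearEarlyClobberOnTiedInputs(MachineInstr &MI) {
    for (MachineOperand &Def : MI.operands()) {
      if (!Def.isReg() || !Def.isDef() || !Def.isEarlyClobber())
        continue;
      for (const MachineOperand &Use : MI.operands())
        if (Use.isReg() && Use.isUse() && Use.getReg() == Def.getReg())
          Def.setIsEarlyClobber(false);
    }
  }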
Simon Pilgrim
292688507a [X86][SSE] Fix for getScalarValueForVectorElement to detect scalar sources requiring truncation.
The fix ensures that scalar sources inserted into a vector are the correct bit size.

Integer scalar sources from BUILD_VECTOR and SCALAR_TO_VECTOR nodes may require truncation that this function doesn't currently support.

llvm-svn: 235281
2015-04-19 22:16:49 +00:00
Simon Pilgrim
1a7066b9c2 [X86][SSE] Extended copysign tests to include llvm intrinsic implementation and constant folding.
llvm-svn: 235279
2015-04-19 21:34:57 +00:00
Simon Pilgrim
cdf694c820 [X86][AVX2] Force execution domain on broadcast folding tests.
llvm-svn: 235260
2015-04-18 21:24:16 +00:00
Simon Pilgrim
048a4e3644 [X86][SSE] Force execution domain on float/double unpack shuffle tests.
llvm-svn: 235259
2015-04-18 18:50:55 +00:00
Ahmed Bougacha
fa8edc9a41 [GlobalMerge] Look at uses to create smaller global sets.
Instead of merging everything together, look at the users of
GlobalVariables, and try to group them by function, to create
sets of globals used "together".

Using that information, a less-aggressive alternative is to keep merging
everything together *except* globals that are only ever used alone, that
is, those for which it's clearly non-profitable to merge with others.

In my testing, grouping by Function is too aggressive, but grouping by
BasicBlock is too conservative.  Anything in-between isn't trivially
available, so stick with Function grouping for now.

cl::opts are added for testing; both enabled by default.

A few of the testcases aren't testing the merging proper, but just
various edge cases when merging does occur.  Update them to use the
previous grouping behavior. Also, one of the tests is unrelated to
GlobalMerge; change it accordingly.
While there, switch to r234666's flags rather than the brutal -O3.

Differential Revision: http://reviews.llvm.org/D8070

llvm-svn: 235249
2015-04-18 01:21:58 +00:00
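The use-set collection at the heart of this could be sketched as follows (an
assumed shape, not the actual GlobalMerge code; uses reached only through
ConstantExprs are ignored for brevity):

  #include "llvm/ADT/DenseMap.h"
  #include "llvm/ADT/SmallPtrSet.h"
  #include "llvm/IR/Instruction.h"
  #include "llvm/IR/Module.h"
  using namespace llvm;

  // For each global, record the functions that use it directly. A global
  // whose users never overlap another global's is "used alone" and can be
  // left out of merging.
  static DenseMap<const GlobalVariable *, SmallPtrSet<const Function *, 4>>
  collectUsers(const Module &M) {
    DenseMap<const GlobalVariable *, SmallPtrSet<const Function *, 4>> UsedBy;
    for (const GlobalVariable &GV : M.globals())
      for (const User *U : GV.users())
        if (auto *I = dyn_cast<Instruction>(U))
          UsedBy[&GV].insert(I->getFunction());
    return UsedBy;
  }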
Ahmed Bougacha
6c08d94243 [AArch64] Don't force MVT::Untyped when selecting LD1LANEpost.
The result is either an Untyped reg sequence, on ldN with N > 1, or
just the type of the input vector, on ld1.  Don't force Untyped.
Instead, just use the type of the reg sequence.

This mirrors the behavior of createTuple, which feeds the LD1*_POST.

The narrow code path wasn't actually covered by tests, because V64
insert_vector_elt nodes are usually widened to V128 before the LD1LANEpost
combine has a chance to run.

The only case where it does run on V64 vectors is if the vector ops
legalizer ran.  So, tickle the code with a ctpop.

Fixes PR23265.

llvm-svn: 235243
2015-04-17 23:43:33 +00:00
Ahmed Bougacha
2286313fee Fix another typo in r235224 testcase. NFC.
Third time's the charm!

llvm-svn: 235242
2015-04-17 23:38:46 +00:00
Andrew Kaylor
dcac5320d4 [WinEH] Fixes for a few cppeh failures.
Differential Revision: http://reviews.llvm.org/D9065

llvm-svn: 235239
2015-04-17 23:05:43 +00:00
Pete Cooper
2a4be5132d AArch64: Add test for returning [2 x i64] in registers. NFC.
llvm-svn: 235228
2015-04-17 21:31:25 +00:00
Ahmed Bougacha
0f72cb9a76 Fix typo in r235224 testcase. NFC.
llvm-svn: 235226
2015-04-17 21:11:58 +00:00
Ahmed Bougacha
ab193cb218 [AArch64] Avoid vector->load dependency cycles when creating LD1*post.
They would break the SelectionDAG.
Note that the opposite load->vector dependency is already obvious in:
  (LD1*post vec, ..)

llvm-svn: 235224
2015-04-17 21:02:30 +00:00
Pirama Arumuga Nainar
f8369b5437 Add support to promote f16 to f32
Summary:
This patch adds legalization support to treat FP16 as a load/store type
and to perform operations on it as floats.

Tests for ARM are added to test/CodeGen/ARM/fp16-promote.ll

Reviewers: srhines, t.p.northover

Differential Revision: http://reviews.llvm.org/D8755

llvm-svn: 235215
2015-04-17 18:36:25 +00:00
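The promote strategy boils down to: widen each f16 to f32, operate in f32,
and narrow back for the store. A self-contained C++ sketch (the conversions
are deliberately simplified, handling normal numbers only and truncating
rather than rounding; all names are made up):

  #include <cstdint>
  #include <cstring>

  static float halfToFloat(uint16_t h) {   // cf. the h2f libcall
    uint32_t bits = ((h & 0x8000u) << 16)                     // sign
                  | ((((h >> 10) & 0x1Fu) - 15 + 127) << 23)  // rebias exp
                  | ((h & 0x3FFu) << 13);                     // mantissa
    float f;
    std::memcpy(&f, &bits, sizeof f);
    return f;
  }

  static uint16_t floatToHalf(float f) {   // cf. the f2h libcall
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);
    return (uint16_t)(((bits >> 16) & 0x8000u)
                    | (((((bits >> 23) & 0xFFu) - 112u) & 0x1Fu) << 10)
                    | ((bits >> 13) & 0x3FFu));
  }

  // An f16 add, legalized by promotion: widen, add as f32, narrow back.
  uint16_t fadd_f16(uint16_t a, uint16_t b) {
    return floatToHalf(halfToFloat(a) + halfToFloat(b));
  }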
Vasileios Kalintiris
029b9d94fb [mips][FastISel] Implement FastMaterializeAlloca in Mips fast-isel.
Summary: Implement the method FastMaterializeAlloca in Mips fast-isel

Based on a patch by Reed Kotler.

Test Plan:
Passes test-suite at O0/O2 for mips32 r1/r2
fastalloca.ll

Reviewers: dsanders, rkotler

Subscribers: rfuhler, llvm-commits

Differential Revision: http://reviews.llvm.org/D6742

llvm-svn: 235213
2015-04-17 17:29:58 +00:00
Sanjay Patel
d7195a62c2 [X86, AVX] add an exedepfix entry for vmovq == vmovlps == vmovlpd
This is the AVX extension of r235014:
http://llvm.org/viewvc/llvm-project?view=revision&revision=235014

Review:
http://reviews.llvm.org/D8691

llvm-svn: 235210
2015-04-17 17:02:37 +00:00
Vasileios Kalintiris
8c6dc71fa1 [mips][FastISel] Implement shift ops for Mips fast-isel.
Summary:
Add an implementation of the shift operators to fast-isel for Mips. These are
shift ops for non-legal types, i.e. i8 and i16.

Based on a patch by Reed Kotler.

Test Plan:

Reviewers: dsanders

Subscribers: echristo, rfuhler, llvm-commits

Differential Revision: http://reviews.llvm.org/D6726

llvm-svn: 235194
2015-04-17 14:29:21 +00:00
James Molloy
8614d8be5f Fix TRUNCATE splitting helper logic.
This is a follow-on to r233681 - I'd misunderstood the semantics of FTRUNC,
and had confused it with (FP_ROUND ..., 0).

Thanks to Ahmed Bougacha for his post-commit review!

llvm-svn: 235191
2015-04-17 13:51:40 +00:00
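For reference, the two operations that were being confused differ like this
(illustrative C++; variable names are made up):

  #include <cmath>
  #include <cstdio>

  int main() {
    double d = 2.75;
    double t = std::trunc(d);         // FTRUNC: round toward zero within
                                      // the same type; yields 2.0
    float r = static_cast<float>(d);  // FP_ROUND: narrow to a smaller FP
                                      // type, keeping the value; 2.75f
    std::printf("ftrunc: %g  fp_round: %g\n", t, (double)r);
  }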
Vasileios Kalintiris
e73d807a5c [mips] Teach the delay slot filler to remove needless KILL instructions.
Summary:
Previously, the presence of KILL instructions would block valid candidates
from filling a specific delay slot. By eliminating the KILL instructions
in the appropriate range, we are able to fill more slots and keep the
information for future def/use analyses consistent.

Reviewers: dsanders

Reviewed By: dsanders

Subscribers: hfinkel, llvm-commits

Differential Revision: http://reviews.llvm.org/D7724

llvm-svn: 235183
2015-04-17 12:01:02 +00:00
Nico Weber
ffd0269e17 Revert r235154-r235156, they cause asserts when building win64 code (http://crbug.com/477988)
llvm-svn: 235170
2015-04-17 09:10:43 +00:00
Reid Kleckner
560ebfc81d Fix test failure due to racing commits
It looks like r235145 changed the .ll syntax for variadic calls. Update
tests to use the new syntax.

llvm-svn: 235156
2015-04-17 01:09:53 +00:00
Reid Kleckner
52ead7bcab [SEH] Reimplement x64 SEH using WinEHPrepare
This now emits simple, unoptimized xdata tables for __C_specific_handler
based on the handlers listed in @llvm.eh.actions calls produced by
WinEHPrepare.

This adds support for running __finally blocks when exceptions are
thrown, and removes the old landingpad fan-in codepath.

I ran some manual execution tests on small basic test cases with and
without optimization, as well as on Chrome base_unittests, which uses a
small amount of SEH.  I'm sure there are bugs, and we may need to
revert.

llvm-svn: 235154
2015-04-17 01:01:27 +00:00
Ahmed Bougacha
99c7f5d42e [AArch64] Don't assert on f16 in DUP PerfectShuffle generator.
Found by code inspection; breaking i16, at least, breaks other tests, but
they aren't checking this in particular, so also add some explicit tests
for the already-working types.

llvm-svn: 235148
2015-04-16 23:57:07 +00:00
David Blaikie
dfadb4e9ee [opaque pointer type] Add textual IR support for explicit type parameter to the call instruction
See r230786 and r230794 for similar changes to gep and load
respectively.

Call is a bit different because it often doesn't have a single explicit
type - usually the type is deduced from the arguments, and just the
return type is explicit. In those cases there's no need to change the
IR.

When that's not the case, the IR usually contains the pointer type of
the first operand - but since typed pointers are going away, that
representation is insufficient so I'm just stripping the "pointerness"
of the explicit type away.

This does make the IR a bit weird - it /sort of/ reads like the type of
the first operand: "call void () %x(" but %x is actually of type "void
()*" and will eventually be just of type "ptr". But this seems not too
bad and I don't think it would benefit from repeating the type
("void (), void () * %x(" and then eventually "void (), ptr %x(") as has
been done with gep and load.

This also has a side benefit: since the explicit type is no longer a
pointer, there's no ambiguity between an explicit type and a function
that returns a function pointer. Previously this case needed an explicit
type (eg: a function returning a void() function was written as
"call void () () * @x(" rather than "call void () * @x(" because of the
ambiguity between a function returning a pointer to a void() function
and a function returning void).

No ambiguity means even function pointer return types can just be
written alone, without writing the whole function's type.

This leaves /only/ the varargs case where the explicit type is required.

Given the special type syntax in call instructions, the regex-fu used
for migration was a bit more involved in its own unique way (as every
one of these is), so here it is. Use it in conjunction with the apply.sh
script and associated find/xargs commands I've provided in r230786 to
migrate your out of tree tests. Do let me know if any of this doesn't
cover your cases & we can iterate on a more general script/regexes to
help others with out of tree tests.

About 9 test cases couldn't be automatically migrated - half of those
were functions returning function pointers, where I just had to manually
delete the function argument types now that we didn't need an explicit
function type there. The other half were typedefs of function types used
in calls - just had to manually drop the * from those.

import sys
import re

# Matches a call up to its explicit type (group 1) and the tail that
# follows it: the callee or an asm/constant expression (group 2).
pat = re.compile(r'((?:=|:|^|\s)call\s(?:[^@]*?))(\s*$|\s*(?:(?:\[\[[a-zA-Z0-9_]+\]\]|[@%](?:(")?[\\\?@a-zA-Z0-9_.]*?(?(3)"|)|{{.*}}))(?:\(|$)|undef|inttoptr|bitcast|null|asm).*$)')
addrspace_end = re.compile(r"addrspace\(\d+\)\s*\*$")
func_end = re.compile(r"(?:void.*|\)\s*)\*$")

def conv(match, line):
  # Only rewrite calls whose explicit type ends in a function pointer
  # (but not an addrspace-qualified pointer): strip the trailing '*'.
  if not match or re.search(addrspace_end, match.group(1)) or not re.search(func_end, match.group(1)):
    return line
  return line[:match.start()] + match.group(1)[:match.group(1).rfind('*')].rstrip() + match.group(2) + line[match.end():]

for line in sys.stdin:
  sys.stdout.write(conv(re.search(pat, line), line))

llvm-svn: 235145
2015-04-16 23:24:18 +00:00
Pete Cooper
7099480464 Disable AArch64 fast-isel on big-endian call vector returns.
A big-endian vector return needs a byte swap, which we aren't doing right now.

For now just bail on these cases to get correctness back.

llvm-svn: 235133
2015-04-16 21:19:36 +00:00
Reid Kleckner
01ceef5d17 [WinEH] Handle a landingpad, resume, and cleanup all rolled into a BB
This happens a lot with simple cleanups after SimplifyCFG.

llvm-svn: 235117
2015-04-16 17:02:23 +00:00
Hans Wennborg
ff837f8fc0 Revert the switch lowering change (r235101, r235103, r235106)
Looks like it broke the sanitizer-ppc64-linux1 build. Reverting for now.

llvm-svn: 235108
2015-04-16 15:43:26 +00:00