mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-26 14:33:02 +02:00
Commit Graph

679 Commits

Author SHA1 Message Date
Quentin Colombet
00a9a98ba4 [AArch64][FastISel] Variants of the logical instructions that use two input
registers cannot write to SP.

rdar://problem/20748715

llvm-svn: 236352
2015-05-01 21:34:57 +00:00
Quentin Colombet
e70bd6613a [AArch64][FastISel] Fix the setting of kill flags for MUL -> UMULH sequences.
rdar://problem/20748715

llvm-svn: 236346
2015-05-01 20:57:11 +00:00
Quentin Colombet
43702a31a2 [AArch64] Fix bad register class constraint in fast-isel for TST instruction.
rdar://problem/20748715

llvm-svn: 236273
2015-04-30 22:27:20 +00:00
Duncan P. N. Exon Smith
09b5c9c24d IR: Give 'DI' prefix to debug info metadata
Finish off PR23080 by renaming the debug info IR constructs from `MD*`
to `DI*`.  The last of the `DIDescriptor` classes were deleted in
r235356, and the last of the related typedefs removed in r235413, so
this has all baked for about a week.

Note: If you have out-of-tree code (like a frontend), I recommend that
you get everything compiling and tests passing with the *previous*
commit before updating to this one.  It'll be easier to keep track of
what code is using the `DIDescriptor` hierarchy and what you've already
updated, and I think you're extremely unlikely to insert bugs.  YMMV of
course.

Back to *this* commit: I did this using the rename-md-di-nodes.sh
upgrade script I've attached to PR23080 (both code and testcases) and
filtered through clang-format-diff.py.  I edited the tests for
test/Assembler/invalid-generic-debug-node-*.ll by hand since the columns
were off-by-three.  It should work on your out-of-tree testcases (and
code, if you've followed the advice in the previous paragraph).
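
For out-of-tree readers, here is a rough before/after of what the rename looks
like in textual IR (illustrative sketch only; the field values are made up):

  ; before this commit (MD* spellings)
  !7 = !MDLocation(line: 5, column: 3, scope: !4)
  ; after this commit (DI* spellings)
  !7 = !DILocation(line: 5, column: 3, scope: !4)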

Some of the tests are in badly named files now (e.g.,
test/Assembler/invalid-mdcompositetype-missing-tag.ll should be
'dicompositetype'); I'll come back and move the files in a follow-up
commit.

llvm-svn: 236120
2015-04-29 16:38:44 +00:00
Ahmed Bougacha
54c4fb4a3f [AArch64] Also combine vector selects fed by non-i1 SETCCs.
After legalization, scalar SETCC has an i32 result type on AArch64.
The i1 requirement seems too conservative; replace it with an assert.

This also means that we now can run after legalization. That should also
be fine, since the ops legalizer runs again after each combine, and
all types created all have the same sizes as the (legal) inputs.

Exposed by r235917; while there, robustize its tests (bsl also uses the
register it defines).

llvm-svn: 235922
2015-04-27 21:43:12 +00:00
Ahmed Bougacha
5f0f3e8528 [AArch64] Don't assert when combining (v3f32 select (setcc f64)).
When the setcc has f64 operands, we can't build a vector setcc mask
to feed a vselect, because f64 doesn't divide v3f32 evenly.
Just bail out when that happens.

llvm-svn: 235917
2015-04-27 21:01:20 +00:00
Yaron Keren
b412c387f7 Teach AArch64/lit.local.cfg the new triple names windows-gnu and windows-msvc.
Tests were failing when built with -DLLVM_DEFAULT_TARGET_TRIPLE=i686-pc-windows-gnu.

llvm-svn: 235733
2015-04-24 17:14:16 +00:00
Pirama Arumuga Nainar
17c8db4460 [AArch64] Add nvcast patterns for v4f16 and v8f16
Summary:
Constant stores of f16 vectors can create NvCast nodes from various
operand types to v4f16 or v8f16 depending on patterns in the stored
constants.  This patch adds nvcast rules with v4f16 and v8f16 values.

AArch64ISelLowering::LowerBUILD_VECTOR has the details on which constant
patterns generate the nvcast nodes.
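
Roughly, the kind of constant store that exercises these patterns looks like
this (hedged sketch, not one of the committed tests; the constant is arbitrary):

  define void @store_const_v4f16(<4 x half>* %p) {
    ; the constant <4 x half> is built in LowerBUILD_VECTOR and may be
    ; materialized through an NvCast from another vector type to v4f16
    store <4 x half> <half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00>, <4 x half>* %p
    ret void
  }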

Reviewers: jmolloy, srhines, ab

Subscribers: rengolin, aemerson, llvm-commits

Differential Revision: http://reviews.llvm.org/D9201

llvm-svn: 235610
2015-04-23 17:32:25 +00:00
Pirama Arumuga Nainar
8f58d5c8e1 [AArch64] Handle vec4, vec8, vec16 *itofp for half
Summary:
Set operation action for SINT_TO_FP and UINT_TO_FP nodes with v4i32,
v8i8, v8i16 inputs to allow promotion of v4f16 results.

Add tests for sitofp and uitofp for vec4, vec8, vec16, and i8, i16, i32,
and i64 vectors.  Only missing tests are for v16i8 and v16i16 as the
shift operations are too complicated to write a proper check sequence.

The conversions from v4i64 to v4f16 do not depend on this patch - v4i64
is split and the conversion gets handled while lowering v2i64.  I am
adding a test here for completeness.
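
As a hedged illustration (not one of the committed tests), a vec4 conversion of
this shape is now handled by promoting the v4f16 result:

  define <4 x half> @sitofp_v4i32(<4 x i32> %a) {
    ; the v4f16 result is promoted, so the conversion is performed in f32
    ; and then rounded down to f16
    %r = sitofp <4 x i32> %a to <4 x half>
    ret <4 x half> %r
  }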

Reviewers: aemerson, rengolin, ab, jmolloy, srhines

Subscribers: rengolin, aemerson, llvm-commits

Differential Revision: http://reviews.llvm.org/D9166

llvm-svn: 235609
2015-04-23 17:16:27 +00:00
James Molloy
ffea00f649 [AArch64] Disable complex GEP optimization by default.
Enough concerns were raised that this optimization is pessimising some code patterns.

The obvious fix, to add a Reassociate run afterwards, causes even more pessimisation in some cases due to fewer complex addressing modes being matched. As there isn't a trivial fix for this, backing this out by default until someone gets a chance to fix the addressing mode matcher.

llvm-svn: 235491
2015-04-22 09:11:38 +00:00
Ahmed Bougacha
fa8edc9a41 [GlobalMerge] Look at uses to create smaller global sets.
Instead of merging everything together, look at the users of
GlobalVariables, and try to group them by function, to create
sets of globals used "together".

Using that information, a less-aggressive alternative is to keep merging
everything together *except* globals that are only ever used alone, that
is, those for which it's clearly non-profitable to merge with others.
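
As a rough sketch of the distinction (hypothetical globals and functions, not
from the testcases): @a and @b below are used together and stay merge
candidates, while @lonely is only ever used alone and is now skipped.

  @a = internal global i32 0
  @b = internal global i32 0
  @lonely = internal global i32 0

  define i32 @uses_both() {
    ; @a and @b are used in the same function: merging them lets the two
    ; accesses share a base address
    %x = load i32, i32* @a
    %y = load i32, i32* @b
    %s = add i32 %x, %y
    ret i32 %s
  }

  define i32 @uses_lonely() {
    ; @lonely is never used with another global, so merging it buys nothing
    %z = load i32, i32* @lonely
    ret i32 %z
  }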

In my testing, grouping by Function is too aggressive, but grouping by
BasicBlock is too conservative.  Anything in-between isn't trivially
available, so stick with Function grouping for now.

cl::opts are added for testing; both enabled by default.

A few of the testcases aren't testing the merging proper, but just
various edge cases when merging does occur.  Update them to use the
previous grouping behavior. Also, one of the tests is unrelated to
GlobalMerge; change it accordingly.
While there, switch to r234666's flags rather than the brutal -O3.

Differential Revision: http://reviews.llvm.org/D8070

llvm-svn: 235249
2015-04-18 01:21:58 +00:00
Ahmed Bougacha
6c08d94243 [AArch64] Don't force MVT::Untyped when selecting LD1LANEpost.
The result is either an Untyped reg sequence, on ldN with N > 1, or
just the type of the input vector, on ld1.  Don't force Untyped.
Instead, just use the type of the reg sequence.

This mirrors the behavior of createTuple, which feeds the LD1*_POST.

The narrow code path wasn't actually covered by tests, because V64
insert_vector_elt are widened to V128 before the LD1LANEpost combine
has the chance to run, usually.

The only case where it does run on V64 vectors is if the vector ops
legalizer ran.  So, tickle the code with a ctpop.

Fixes PR23265.

llvm-svn: 235243
2015-04-17 23:43:33 +00:00
Ahmed Bougacha
2286313fee Fix another typo in r235224 testcase. NFC.
Third time's the charm!

llvm-svn: 235242
2015-04-17 23:38:46 +00:00
Pete Cooper
2a4be5132d AArch64: Add test for returning [2 x i64] in registers. NFC.
llvm-svn: 235228
2015-04-17 21:31:25 +00:00
Ahmed Bougacha
0f72cb9a76 Fix typo in r235224 testcase. NFC.
llvm-svn: 235226
2015-04-17 21:11:58 +00:00
Ahmed Bougacha
ab193cb218 [AArch64] Avoid vector->load dependency cycles when creating LD1*post.
They would break the SelectionDAG.
Note that the opposite load->vector dependency is already obvious in:
  (LD1*post vec, ..)

llvm-svn: 235224
2015-04-17 21:02:30 +00:00
James Molloy
8614d8be5f Fix TRUNCATE splitting helper logic.
This is a follow-on to r233681 - I'd misunderstood the semantics of FTRUNC,
and had confused it with (FP_ROUND ..., 0).

Thanks to Ahmed Bougacha for his post-commit review!

llvm-svn: 235191
2015-04-17 13:51:40 +00:00
Ahmed Bougacha
99c7f5d42e [AArch64] Don't assert on f16 in DUP PerfectShuffle generator.
Found by code inspection, but breaking i16 at least breaks other tests.
They aren't checking this in particular though, so also add some
explicit tests for the already working types.

llvm-svn: 235148
2015-04-16 23:57:07 +00:00
David Blaikie
dfadb4e9ee [opaque pointer type] Add textual IR support for explicit type parameter to the call instruction
See r230786 and r230794 for similar changes to gep and load
respectively.

Call is a bit different because it often doesn't have a single explicit
type - usually the type is deduced from the arguments, and just the
return type is explicit. In those cases there's no need to change the
IR.

When that's not the case, the IR usually contains the pointer type of
the first operand - but since typed pointers are going away, that
representation is insufficient so I'm just stripping the "pointerness"
of the explicit type away.

This does make the IR a bit weird - it /sort of/ reads like the type of
the first operand: "call void () %x(" but %x is actually of type "void
()*" and will eventually be just of type "ptr". But this seems not too
bad and I don't think it would benefit from repeating the type
("void (), void () * %x(" and then eventually "void (), ptr %x(") as has
been done with gep and load.

This also has a side benefit: since the explicit type is no longer a
pointer, there's no ambiguity between an explicit type and a function
that returns a function pointer. Previously this case needed an explicit
type (eg: a function returning a void() function was written as
"call void () () * @x(" rather than "call void () * @x(" because of the
ambiguity between a function returning a pointer to a void() function
and a function returning void).

No ambiguity means even function pointer return types can just be
written alone, without writing the whole function's type.

This leaves /only/ the varargs case where the explicit type is required.
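
For instance (hedged illustration; the callee is hypothetical), a varargs call
reads like this before and after the migration:

  ; before: the explicit type is the pointer-to-function type
  %r1 = call i32 (i8*, ...)* @printf(i8* %fmt, i32 %val)
  ; after: the pointer-ness is stripped, leaving just the function type
  %r2 = call i32 (i8*, ...) @printf(i8* %fmt, i32 %val)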

Given the special type syntax in call instructions, the regex-fu used
for migration was a bit more involved in its own unique way (as every
one of these is) so here it is. Use it in conjunction with the apply.sh
script and associated find/xargs commands I've provided in r230786 to
migrate your out of tree tests. Do let me know if any of this doesn't
cover your cases & we can iterate on a more general script/regexes to
help others with out of tree tests.

About 9 test cases couldn't be automatically migrated - half of those
were functions returning function pointers, where I just had to manually
delete the function argument types now that we didn't need an explicit
function type there. The other half were typedefs of function types used
in calls - just had to manually drop the * from those.

import fileinput
import sys
import re

pat = re.compile(r'((?:=|:|^|\s)call\s(?:[^@]*?))(\s*$|\s*(?:(?:\[\[[a-zA-Z0-9_]+\]\]|[@%](?:(")?[\\\?@a-zA-Z0-9_.]*?(?(3)"|)|{{.*}}))(?:\(|$)|undef|inttoptr|bitcast|null|asm).*$)')
addrspace_end = re.compile(r"addrspace\(\d+\)\s*\*$")
func_end = re.compile("(?:void.*|\)\s*)\*$")

def conv(match, line):
  # Leave the line alone unless the call's explicit type ends in a plain
  # (non-addrspace) function pointer type; otherwise strip the trailing '*'.
  if not match or re.search(addrspace_end, match.group(1)) or not re.search(func_end, match.group(1)):
    return line
  return line[:match.start()] + match.group(1)[:match.group(1).rfind('*')].rstrip() + match.group(2) + line[match.end():]

for line in sys.stdin:
  sys.stdout.write(conv(re.search(pat, line), line))

llvm-svn: 235145
2015-04-16 23:24:18 +00:00
Pete Cooper
7099480464 Disable AArch64 fast-isel on big-endian call vector returns.
A big-endian vector return needs a byte-swap which we aren't doing right now.

For now just bail on these cases to get correctness back.

llvm-svn: 235133
2015-04-16 21:19:36 +00:00
Simon Pilgrim
969718c5af TRUNCATE constant folding - minor fix for rL233224
Fix for test case found by James Molloy - TRUNCATE of constant build vectors can be achieved more simply by replacing with a new build vector node with the truncated value type - no need to touch the scalar operands at all.

llvm-svn: 235079
2015-04-16 08:21:09 +00:00
Ahmed Bougacha
697a3e0440 [CodeGen] Re-apply r234809 (concat of scalars), with an x86_mmx fix.
The only type that isn't an integer, isn't floating point, and isn't
a vector; ladies and gentlemen, the gift that keeps on giving: x86_mmx!

Fixes PR23246.

Original message (reverted in r235062):
[CodeGen] Combine concat_vectors of scalars into build_vector.

Combine something like:
  (v8i8 concat_vectors (v2i8 bitcast (i16)) x4)
into:
  (v8i8 (bitcast (v4i16 BUILD_VECTOR (i16) x4)))

If any of the scalars are floating point, use that throughout.

Differential Revision: http://reviews.llvm.org/D8948

llvm-svn: 235072
2015-04-16 02:39:14 +00:00
Nick Lewycky
a203d8de5f Revert r234809 because it caused PR23246.
llvm-svn: 235062
2015-04-16 00:56:20 +00:00
Daniel Jasper
266ce3788c Re-apply r234898 and fix tests.
This commit makes LLVM not estimate branch probabilities when doing a
single-bit bitmask test.

The code that originally made me discover this is:

  if ((a & 0x1) == 0x1) {
    ..
  }

In this case we don't actually have any branch probability information
and should not assume to have any. LLVM transforms this into:

  %and = and i32 %a, 1
  %tobool = icmp eq i32 %and, 0

So, in this case, the result of a bitwise and is compared against 0,
but nevertheless, we should not assume to have probability
information.

CodeGen/ARM/2013-10-11-select-stalls.ll started failing because the
changed probabilities changed the results of
ARMBaseInstrInfo::isProfitableToIfCvt() and led to an Ifcvt of the
diamond in the test. AFAICT, the test was never meant to test this and
thus changing the test input slightly to not change the probabilities
seems like the best way to preserve the meaning of the test.

llvm-svn: 234979
2015-04-15 06:24:07 +00:00
Rafael Espindola
19e464908f Revert "The code that originally made me discover this is:"
This reverts commit r234898.
CodeGen/ARM/2013-10-11-select-stalls.ll was failing.

llvm-svn: 234903
2015-04-14 15:56:33 +00:00
Daniel Jasper
a598c99e31 The code that originally made me discover this is:
  if ((a & 0x1) == 0x1) {
    ..
  }

In this case we don't actually have any branch probability information and
should not assume to have any. LLVM transforms this into:

  %and = and i32 %a, 1
  %tobool = icmp eq i32 %and, 0

So, in this case, the result of a bitwise and is compared against 0,
but nevertheless, we should not assume to have probability
information.

llvm-svn: 234898
2015-04-14 15:20:37 +00:00
Ahmed Bougacha
5a98dbe174 [CodeGen] Combine concat_vectors of scalars into build_vector.
Combine something like:
  (v8i8 concat_vectors (v2i8 bitcast (i16)) x4)
into:
  (v8i8 (bitcast (v4i16 BUILD_VECTOR (i16) x4)))

If any of the scalars are floating point, use that throughout.

Differential Revision: http://reviews.llvm.org/D8948

llvm-svn: 234809
2015-04-13 22:57:21 +00:00
Krzysztof Parzyszek
b303ebcc9d Settle on a specific triple for the aarch64 testcase
llvm-svn: 234801
2015-04-13 21:55:21 +00:00
Krzysztof Parzyszek
ef47e64c07 Also add mtriple to the aarch64 testcase
llvm-svn: 234797
2015-04-13 20:49:08 +00:00
Krzysztof Parzyszek
3efcf81e03 Allow memory intrinsics to be tail calls
llvm-svn: 234764
2015-04-13 17:16:45 +00:00
Ahmed Bougacha
329fda6f2a [CodeGen] Split -enable-global-merge into ARM and AArch64 options.
Currently, there's a single flag, checked by the pass itself.
It can't force-enable the pass (and is on by default), because it
might not even have been created, as that's the target's decision.
Instead, have separate explicit flags, so that the decision is
consistently made in the target.

Keep the flag as a last-resort "force-disable GlobalMerge" for now,
for backwards compatibility.

llvm-svn: 234666
2015-04-11 00:06:36 +00:00
Ahmed Bougacha
9e6b267c41 [AArch64] Promote f16 operations to f32.
For the most common ones (such as fadd), we already did the promotion.
Do the same thing for all the others.

Currently, we'll just crash/assert on all these operations, as
there's no hardware or libcall support whatsoever.

f16 (half) is specified as an interchange - not arithmetic - format,
and is expected to be promoted to single-precision for arithmetic
operations.

While there, teach the legalizer about promoting some of the (mostly
floating-point) operations that we never needed before.
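
As a hedged example (which exact opcodes were newly covered is in the review
below), a half-precision remainder now goes through the f32 path instead of
asserting:

  define half @rem_h(half %a, half %b) {
    ; no native f16 arithmetic: %a and %b are extended to f32, the frem is
    ; performed there (via the fmodf libcall), and the result is truncated back
    %r = frem half %a, %b
    ret half %r
  }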

Differential Revision: http://reviews.llvm.org/D8648
See related discussion on the thread for: http://reviews.llvm.org/D8755

llvm-svn: 234550
2015-04-10 00:08:48 +00:00
Ahmed Bougacha
d2a7cedeca [CodeGen] Combine concat_vector of trunc'd scalar to scalar_to_vector.
We already do:
  concat_vectors(scalar, undef) -> scalar_to_vector(scalar)
when the scalar is legal.
When it's not, but is a truncated legal scalar, we can also do:
  concat_vectors(trunc(scalar), undef) -> scalar_to_vector(scalar)
Which is equivalent, since the upper lanes are undef anyway.
While there, teach the combine to look at more than 2 operands.

Differential Revision: http://reviews.llvm.org/D8883

llvm-svn: 234530
2015-04-09 20:04:47 +00:00
Juergen Ributzka
6f558fd68f [AArch64][FastISel] Fix integer extend optimization.
The integer extend optimization tries to fold the extend into the load
instruction. This requires us to identify whether the extend has already been
emitted and act on it accordingly.

The check that was originally performed for this was not sufficient. Besides
checking the ValueMap for a mapped register, we also need to check whether the
virtual register already has an associated machine instruction that defines it.

This fixes rdar://problem/20470788.

llvm-svn: 234529
2015-04-09 20:00:46 +00:00
Kristof Beyls
c065d4f7d5 [AArch64] Add support for dynamic stack alignment
Differential Revision: http://reviews.llvm.org/D8876

llvm-svn: 234471
2015-04-09 08:49:47 +00:00
Lang Hames
0020ebeb89 [AArch64] Remove redundant -march option. Also fix a think-o from r234462.
llvm-svn: 234467
2015-04-09 05:34:57 +00:00
Nick Lewycky
e6736038da Not all triples put _ before function names. Specify a triple to make this test pass on Linux.
llvm-svn: 234466
2015-04-09 05:31:32 +00:00
Lang Hames
360efe3451 [AArch64] Teach AArch64TargetLowering::getOptimalMemOpType to consider alignment
restrictions when choosing a type for small-memcpy inlining in
SelectionDAGBuilder.

This ensures that the loads and stores output for the memcpy won't be further
expanded during legalization, which would cause the total number of instructions
for the memcpy to exceed (often significantly) the inlining thresholds.
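
Roughly, the case of interest looks like this (hedged sketch; under alignment
restrictions such as strict-align, the type chosen for the inlined copy has to
respect the byte alignment):

  declare void @llvm.memcpy.p0i8.p0i8.i64(i8* nocapture, i8* nocapture readonly, i64, i32, i1)

  define void @copy15(i8* %dst, i8* %src) {
    ; 15 bytes at align 1: the inlined loads/stores must stay legal at this
    ; alignment, or legalization re-expands them and blows the size budget
    call void @llvm.memcpy.p0i8.p0i8.i64(i8* %dst, i8* %src, i64 15, i32 1, i1 false)
    ret void
  }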

<rdar://problem/17829180>

llvm-svn: 234462
2015-04-09 03:40:33 +00:00
Akira Hatanaka
8e8f03d803 Use option -march instead of -mtriple to avoid overconditionalizing the test.
This fixes r234439, which was committed to fix the test failures caused by
r234430.

llvm-svn: 234451
2015-04-08 23:02:45 +00:00
Akira Hatanaka
a1ad440430 Pass -mtriple to llc to appease buildbot.
This fixes the test case I committed in r234430.

llvm-svn: 234439
2015-04-08 21:30:48 +00:00
Akira Hatanaka
5d29050f58 [DAGCombine] Fix a bug in MergeConsecutiveStores.
The bug manifests when there are two loads and two stores chained as follows in
a DAG,

(ld v3f32) -> (st f32) -> (ld v3f32) -> (st f32)

and the stores' values are extracted from the preceding vector loads.

MergeConsecutiveStores would replace the first store in the chain with the
merged vector store, which would create a cycle between the merged store node
and the last load node that appears in the chain.

This commit fixes the bug by replacing the last store in the chain instead.

rdar://problem/20275084

Differential Revision: http://reviews.llvm.org/D8849

llvm-svn: 234430
2015-04-08 20:34:53 +00:00
Simon Pilgrim
335a565d46 [DAGCombiner] Combine shuffles of BUILD_VECTOR and SCALAR_TO_VECTOR
This patch attempts to fold the shuffling of 'scalar source' inputs - BUILD_VECTOR and SCALAR_TO_VECTOR nodes - if the shuffle node is the only user. This folds away a lot of unnecessary shuffle nodes, and allows quite a bit of constant folding that was being missed.

Differential Revision: http://reviews.llvm.org/D8516

llvm-svn: 234004
2015-04-03 10:02:21 +00:00
Jiangning Liu
0fbb128209 Fix PR23065. Avoid optimizing bitcast of build_vector with constant input to scalar_to_vector.
llvm-svn: 233778
2015-04-01 01:52:38 +00:00
Quentin Colombet
574df40140 [AArch64] Enable the codegenprepare optimization that promotes operations to form
extended loads.
Implement the related target lowering hook so that the optimization has a better
estimation of the cost of an extension.
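
A minimal sketch of the shape this targets (hypothetical example, not from the
commit): promoting the intermediate add lets the sext reach the load, which can
then be emitted as an extending load.

  define i64 @load_ext(i16* %p, i16 %o) {
    %v = load i16, i16* %p
    %a = add i16 %v, %o
    ; promoting %a to i64 moves the extension up next to the load, so the
    ; load can be selected as a sign-extending load (e.g. ldrsh on AArch64)
    %s = sext i16 %a to i64
    ret i64 %s
  }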

rdar://problem/19267165

llvm-svn: 233753
2015-03-31 20:52:32 +00:00
Tim Northover
ecf8beb388 AArch64: fix v8.1 sqrdmlah tests on Darwin platforms
llvm-svn: 233709
2015-03-31 16:41:38 +00:00
Vladimir Sukharev
22589e7b79 [AArch64] Add v8.1a "Rounding Double Multiply Add/Subtract" extension
Reviewers: t.p.northover, jmolloy

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D8502

llvm-svn: 233693
2015-03-31 13:15:48 +00:00
James Molloy
c4bf9e7282 [SDAG] Move TRUNCATE splitting logic into a helper, and use
it more liberally.

SplitVecOp_TRUNCATE has logic for recursively splitting oversize vectors
that need more than one round of splitting to become legal. There are many
other ISD nodes that could benefit from this logic, so factor it out and
use it for FP_TO_UINT, FP_TO_SINT, SINT_TO_FP, UINT_TO_FP and FTRUNC.
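
A hedged example of an op that needs the recursive splitting (not one of the
committed tests):

  define <8 x i16> @fptoui_v8f64(<8 x double> %a) {
    ; the v8f64 operand is split more than once before the pieces are legal;
    ; the partial results are concatenated and truncated back to v8i16
    %r = fptoui <8 x double> %a to <8 x i16>
    ret <8 x i16> %r
  }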

llvm-svn: 233681
2015-03-31 10:20:58 +00:00
Quentin Colombet
280dbbd452 [AArch64] Fix poor codegen for add immediate.
We used to match the register variant before the immediate when the register
argument could be implicitly zero-extended.
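
A rough, hypothetical illustration of the affected shape: the zero-extended
register operand made us pick the extended-register form and materialize the
constant, where the immediate form is the better choice.

  define i64 @add_imm(i32 %a) {
    %z = zext i32 %a to i64
    ; prefer "add x0, x0, #42" here over materializing 42 in a register
    ; and using the extended-register variant
    %r = add i64 %z, 42
    ret i64 %r
  }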

llvm-svn: 233653
2015-03-31 00:31:13 +00:00
Juergen Ributzka
33cfd96d53 Transfer implicit operands when expanding the RET_ReallyLR pseudo instruction.
When we expand the RET_ReallyLR pseudo instruction we also need to transfer the
implicit operands.

The return register is an implicit operand and without it the liveness
calculation generates an incorrect live-out set for the patchpoint.

This fixes rdar://problem/19068476.

llvm-svn: 233635
2015-03-30 22:45:56 +00:00
Akira Hatanaka
bbf66d7ddc [AArch64InstPrinter] Use the feature bits of the subtarget passed to the print
method.

This enables the instprinter to print a different system register name based on
the feature bits of the per-function subtarget. 

Differential Revision: http://reviews.llvm.org/D8668 

llvm-svn: 233412
2015-03-27 20:37:20 +00:00