Commit Graph

2110 Commits

Matt Arsenault
f5834050fa Revert "Revert "InstCombine: Reduce trunc (shl x, K) width.""
Reapply r272987. The condition should be in terms of the destination type,
and the flags should not be copied.
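A sketch of the transform for reference (hypothetical function, assuming an
i64 -> i32 truncation; the fixed condition requires K to be a valid shift
amount for the destination type, here 8 < 32):

define i32 @narrow(i64 %x) {
  %sh = shl i64 %x, 8
  %tr = trunc i64 %sh to i32
  ret i32 %tr
}

; becomes (any nsw/nuw flags on the original shl are dropped, per this fix):

define i32 @narrow(i64 %x) {
  %x.tr = trunc i64 %x to i32
  %sh = shl i32 %x.tr, 8
  ret i32 %sh
}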

llvm-svn: 273045
2016-06-17 20:33:53 +00:00
Sanjay Patel
981e90bafa [InstCombine] allow more than one use for vector bitcast folding with selects
The motivating example for this transform is similar to D20774, where bitcasts interfere
with a single cmp/select sequence, but in this case we have two uses of each bitcast to
produce min and max ops:

define void @minmax_bc_store(<4 x float> %a, <4 x float> %b, <4 x float>* %ptr1, <4 x float>* %ptr2) {
  %cmp = fcmp olt <4 x float> %a, %b
  %bc1 = bitcast <4 x float> %a to <4 x i32>
  %bc2 = bitcast <4 x float> %b to <4 x i32>
  %sel1 = select <4 x i1> %cmp, <4 x i32> %bc1, <4 x i32> %bc2
  %sel2 = select <4 x i1> %cmp, <4 x i32> %bc2, <4 x i32> %bc1
  %bc3 = bitcast <4 x float>* %ptr1 to <4 x i32>*
  store <4 x i32> %sel1, <4 x i32>* %bc3
  %bc4 = bitcast <4 x float>* %ptr2 to <4 x i32>*
  store <4 x i32> %sel2, <4 x i32>* %bc4
  ret void
}

With this patch, we move the selects up to use the input args which allows getting rid of
all of the bitcasts:

define void @minmax_bc_store(<4 x float> %a, <4 x float> %b, <4 x float>* %ptr1, <4 x float>* %ptr2) {
  %cmp = fcmp olt <4 x float> %a, %b
  %sel1.v = select <4 x i1> %cmp, <4 x float> %a, <4 x float> %b
  %sel2.v = select <4 x i1> %cmp, <4 x float> %b, <4 x float> %a
  store <4 x float> %sel1.v, <4 x float>* %ptr1, align 16
  store <4 x float> %sel2.v, <4 x float>* %ptr2, align 16
  ret void
}

The asm for x86 SSE then improves from:

movaps  %xmm0, %xmm2
cmpltps %xmm1, %xmm2
movaps  %xmm2, %xmm3
andnps  %xmm1, %xmm3
movaps  %xmm2, %xmm4
andnps  %xmm0, %xmm4
andps %xmm2, %xmm0
orps  %xmm3, %xmm0
andps %xmm1, %xmm2
orps  %xmm4, %xmm2
movaps  %xmm0, (%rdi)
movaps  %xmm2, (%rsi)

To:

movaps  %xmm0, %xmm2
minps %xmm1, %xmm2
maxps %xmm0, %xmm1
movaps  %xmm2, (%rdi)
movaps  %xmm1, (%rsi)

The TODO comments show that we're limiting this transform only to vectors and only to bitcasts
because we first need to improve other transforms, or we risk creating worse codegen.

Differential Revision: http://reviews.llvm.org/D21190

llvm-svn: 273011
2016-06-17 16:46:50 +00:00
Matt Arsenault
6251ace8d8 Revert "InstCombine: Reduce trunc (shl x, K) width."
This reverts commit r272987.

This might be causing crashes on some bots.

llvm-svn: 272990
2016-06-17 06:28:53 +00:00
Matt Arsenault
123f827fb9 InstCombine: Reduce trunc (shl x, K) width.
llvm-svn: 272987
2016-06-17 04:43:22 +00:00
Eli Friedman
39d15bed50 [InstCombine] Don't widen metadata on store-to-load forwarding
The original check for load CSE or store-to-load forwarding is wrong
when the forwarded stored value happens to be a load.
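A hypothetical reduction of the hazard (invented names; assuming !range is
the metadata in question): when %w below is replaced by the forwarded value
%v, and %v happens to be a load, the old check could treat this as load CSE
and wrongly attach %w's !range to %v, asserting a range for loads from %p:

define i32 @forward(i32* %p, i32* %q) {
  %v = load i32, i32* %p            ; the forwarded stored value is a load
  store i32 %v, i32* %q
  %w = load i32, i32* %q, !range !0 ; replaced by %v during forwarding
  ret i32 %w
}
!0 = !{i32 0, i32 10}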

Ref https://github.com/JuliaLang/julia/issues/16894

Differential Revision: http://reviews.llvm.org/D21271

Patch by Yichao Yu!

llvm-svn: 272868
2016-06-16 02:33:42 +00:00
David Majnemer
89fe9b6eda [TargetLibraryInfo] Teach isValidProtoForLibFunc about tan
We would fail to validate the type of the tan function, which would cause
downstream users of isValidProtoForLibFunc to assert.
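For illustration (a hypothetical reduction, not the exact testcase), a
mis-prototyped declaration such as:

declare i32 @tan(i32)

should simply fail validation rather than trip assertions in downstream
users of isValidProtoForLibFunc.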

This fixes PR28143.

llvm-svn: 272802
2016-06-15 16:47:23 +00:00
David Majnemer
c6df3d773b Remove the ScalarReplAggregates pass
Nearly all the changes to this pass have been done while maintaining and
updating other parts of LLVM.  LLVM has had another pass, SROA, which
has superseded ScalarReplAggregates for quite some time.

Differential Revision: http://reviews.llvm.org/D21316

llvm-svn: 272737
2016-06-15 00:19:09 +00:00
Simon Pilgrim
3b8db9c327 [InstCombine][AVX2] Add support for simplifying AVX2 per-element shifts to native shifts
Unlike native shifts, the AVX2 per-element shift instructions VPSRAV/VPSRLV/VPSLLV handle out of range shift values (logical shifts set the result to zero, arithmetic shifts splat the sign bit).

If the shift amount is constant we can sometimes convert these instructions to native shifts:

1 - if all shift amounts are in range then the conversion is trivial.
2 - out of range arithmetic shifts can be clamped to (bitwidth - 1), a legal shift amount, before conversion.
3 - logical shifts just return zero if all elements have out of range shift amounts.

In addition, UNDEF shift amounts are handled - either as an UNDEF shift amount in a native shift or as an UNDEF in the logical 'all out of range' zero constant special case for logical shifts.
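A sketch of case 1 (constant in-range amounts; the AVX2 arithmetic shift
intrinsic, wrapped in a hypothetical function):

declare <4 x i32> @llvm.x86.avx2.psrav.d(<4 x i32>, <4 x i32>)

define <4 x i32> @sra_var(<4 x i32> %v) {
  %r = call <4 x i32> @llvm.x86.avx2.psrav.d(<4 x i32> %v, <4 x i32> <i32 0, i32 1, i32 2, i32 3>)
  ret <4 x i32> %r
}

; all amounts are < 32, so the call can become a native shift:
;   %r = ashr <4 x i32> %v, <i32 0, i32 1, i32 2, i32 3>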

Differential Revision: http://reviews.llvm.org/D19675

llvm-svn: 271996
2016-06-07 10:27:15 +00:00
Simon Pilgrim
6a9d532b78 [InstCombine][SSE] Add MOVMSK constant folding (PR27982)
This patch adds support for folding undef/zero/constant inputs to MOVMSK instructions.

The SSE/AVX versions can be fully folded, but the MMX version can only handle undef inputs.
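For example (a sketch with a hypothetical function): the sign bits of a
constant vector are known, so the SSE variant folds to a constant:

declare i32 @llvm.x86.sse.movmsk.ps(<4 x float>)

define i32 @fold_movmsk() {
  ; sign bits are set in lanes 0 and 2, so this folds to i32 5
  %r = call i32 @llvm.x86.sse.movmsk.ps(<4 x float> <float -1.0, float 1.0, float -2.0, float 2.0>)
  ret i32 %r
}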

Differential Revision: http://reviews.llvm.org/D20998

llvm-svn: 271990
2016-06-07 08:18:35 +00:00
Michael Kuperstein
247caf8b05 [InstCombine] scalarizePHI should not assume the code it sees has been CSE'd
scalarizePHI only looked for phis that have exactly two uses - the "latch"
use, and an extract. Unfortunately, we cannot assume all equivalent extracts
are CSE'd, since InstCombine itself may create an extract which is a duplicate
of an existing one. This extends it to handle several distinct extracts from
the same index.
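A hypothetical shape of the pattern now handled (invented names): the vector
phi has its latch use plus two duplicate extracts of lane 0 that were never
CSE'd into one:

declare void @use(float)

define float @count(float %start, i64 %n) {
entry:
  %init = insertelement <4 x float> undef, float %start, i32 0
  br label %loop
loop:
  %i = phi i64 [ 0, %entry ], [ %i.next, %loop ]
  %vec = phi <4 x float> [ %init, %entry ], [ %vec.next, %loop ]
  %e1 = extractelement <4 x float> %vec, i32 0
  call void @use(float %e1)
  %vec.next = fadd <4 x float> %vec, <float 1.0, float 1.0, float 1.0, float 1.0>
  %e2 = extractelement <4 x float> %vec, i32 0 ; duplicate of %e1, not CSE'd
  %i.next = add i64 %i, 1
  %cmp = icmp ult i64 %i.next, %n
  br i1 %cmp, label %loop, label %exit
exit:
  ret float %e2
}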

This should fix at least some of the performance regressions from PR27988.

Differential Revision: http://reviews.llvm.org/D20983

llvm-svn: 271961
2016-06-06 23:38:33 +00:00
Sanjay Patel
6ec19288fa [InstCombine] limit icmp transform to ConstantInt (PR28011)
In r271810 ( http://reviews.llvm.org/rL271810 ), I loosened the check
above this to work for any Constant rather than ConstantInt. AFAICT, 
that part makes sense if we can determine that the shrunken/extended 
constant remained equal. But it doesn't make sense for this later 
transform where we assume that the constant DID change. 

This could assert for a ConstantExpr:
https://llvm.org/bugs/show_bug.cgi?id=28011

And it could be wrong for a vector as shown in the added regression test.

llvm-svn: 271908
2016-06-06 16:56:57 +00:00
Sanjay Patel
810d7e88a4 regenerate checks
llvm-svn: 271904
2016-06-06 16:03:06 +00:00
Sanjay Patel
95a34fb288 regenerate checks
llvm-svn: 271903
2016-06-06 15:55:00 +00:00
Sanjoy Das
7a0f514448 Add safety check to InstCombiner::commonIRemTransforms
Since FoldOpIntoPhi speculates the binary operation to potentially each
of the predecessors of the PHI node (pulling it out of arbitrary control
dependence in the process), we can FoldOpIntoPhi only if we know the
operation doesn't have UB.
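A minimal sketch of the hazard (invented names): the srem below never
executes with a zero divisor, but folding it into %join's predecessors would
create srem i32 %x, 0 in %a, introducing UB the original program avoided:

define i32 @f(i1 %c, i32 %x) {
entry:
  br i1 %c, label %a, label %b
a:
  br label %join
b:
  br label %join
join:
  %div = phi i32 [ 0, %a ], [ 7, %b ]
  br i1 %c, label %zero, label %dorem ; %c is true iff we came from %a
dorem:
  %r = srem i32 %x, %div              ; guarded: never divides by zero
  ret i32 %r
zero:
  ret i32 0
}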

This also brings up an interesting profitability question -- the way it
is written today, commonIRemTransforms will hoist out work from
dynamically dead code into code that will execute at runtime.  Perhaps
that isn't the best canonicalization?

Fixes PR27968.

llvm-svn: 271857
2016-06-05 21:17:04 +00:00
Sanjoy Das
feb879a718 Add test case for InstCombiner::commonIRemTransforms; NFC
The PHI case in commonIRemTransforms was untested; add a trivial test
case.

llvm-svn: 271856
2016-06-05 21:17:00 +00:00
Sanjay Patel
1709e61793 fix checks
update_test_checks.py got confused matching the variable names. 

llvm-svn: 271844
2016-06-05 17:54:56 +00:00
Sanjay Patel
57c134b91a [InstCombine] allow vector icmp bool transforms
llvm-svn: 271843
2016-06-05 17:49:45 +00:00
Sanjay Patel
b11050c641 add tests to show missing vector transforms
llvm-svn: 271842
2016-06-05 17:32:58 +00:00
Sanjay Patel
b1c8f8ce55 regenerate checks
llvm-svn: 271841
2016-06-05 17:29:45 +00:00
Sanjay Patel
605061c514 update test to use FileCheck
llvm-svn: 271840
2016-06-05 17:13:09 +00:00
Sanjay Patel
24556cdcec update test to use FileCheck
llvm-svn: 271838
2016-06-05 16:41:20 +00:00
Sanjay Patel
7c1912623f update test to FileCheck
llvm-svn: 271837
2016-06-05 16:29:15 +00:00
Sanjay Patel
dad6c47e6b [InstCombine] allow vector constants for cast+icmp fold
This is step 1 of an unknown number of steps towards fixing PR28001:
https://llvm.org/bugs/show_bug.cgi?id=28001

llvm-svn: 271810
2016-06-04 22:04:05 +00:00
Sanjay Patel
a7b7945972 [InstCombine] add test for missing vector optimization
llvm-svn: 271808
2016-06-04 21:41:25 +00:00
Sanjay Patel
b44cf32fc9 [InstCombine] add test for missing vector optimization
llvm-svn: 271806
2016-06-04 21:20:03 +00:00
Sanjay Patel
9fa8579461 [InstCombine] minimize test case and use FileCheck
llvm-svn: 271805
2016-06-04 21:04:59 +00:00
Simon Pilgrim
c995ee7b75 [InstCombine][MMX] Extend SimplifyDemandedUseBits MOVMSK support to MMX
Add the MMX implementation to the SimplifyDemandedUseBits SSE/AVX MOVMSK support added in D19614

Requires a minor tweak as llvm.x86.mmx.pmovmskb takes an x86_mmx argument, so we have to be explicit about the implied v8i8 vector type.

llvm-svn: 271789
2016-06-04 13:42:46 +00:00
Sanjay Patel
500fe712ac [InstCombine] look through bitcasts to find selects
There was concern that creating bitcasts for the simpler potential select pattern:

define <2 x i64> @vecBitcastOp1(<4 x i1> %cmp, <2 x i64> %a) {
  %a2 = add <2 x i64> %a, %a
  %sext = sext <4 x i1> %cmp to <4 x i32>
  %bc = bitcast <4 x i32> %sext to <2 x i64>
  %and = and <2 x i64> %a2, %bc
  ret <2 x i64> %and
}

might lead to worse code for some targets, so this patch is matching the larger
patterns seen in the test cases.

The motivating example for this patch is this IR produced via SSE intrinsics in C:

define <2 x i64> @gibson(<2 x i64> %a, <2 x i64> %b) {
  %t0 = bitcast <2 x i64> %a to <4 x i32>
  %t1 = bitcast <2 x i64> %b to <4 x i32>
  %cmp = icmp sgt <4 x i32> %t0, %t1
  %sext = sext <4 x i1> %cmp to <4 x i32>
  %t2 = bitcast <4 x i32> %sext to <2 x i64>
  %and = and <2 x i64> %t2, %a
  %neg = xor <4 x i32> %sext, <i32 -1, i32 -1, i32 -1, i32 -1>
  %neg2 = bitcast <4 x i32> %neg to <2 x i64>
  %and2 = and <2 x i64> %neg2, %b
  %or = or <2 x i64> %and, %and2
  ret <2 x i64> %or
}

For an AVX target, this is currently:

vpcmpgtd  %xmm1, %xmm0, %xmm2
vpand     %xmm0, %xmm2, %xmm0
vpandn    %xmm1, %xmm2, %xmm1
vpor      %xmm1, %xmm0, %xmm0
retq

With this patch, it becomes:

vpmaxsd   %xmm1, %xmm0, %xmm0

Differential Revision: http://reviews.llvm.org/D20774

llvm-svn: 271676
2016-06-03 14:42:07 +00:00
Sanjay Patel
009edf13a8 [InstCombine] change tests to show a more obvious transform possibility
The original tests were intended to show a missing transform that would
be solved by D20774:
http://reviews.llvm.org/D20774

But it's not clear that the transform for the simpler tests is a win for
all targets. Make the tests show a larger pattern that should be a win
regardless of the cost of bitcast instructions.

llvm-svn: 271603
2016-06-02 22:45:49 +00:00
Sanjay Patel
02638731c1 transform obscured FP sign bit ops into a fabs/fneg using TLI hook
This is effectively a revert of:
http://reviews.llvm.org/rL249702 - [InstCombine] transform masking off of an FP sign bit into a fabs() intrinsic call (PR24886)
and:
http://reviews.llvm.org/rL249701 - [ValueTracking] teach computeKnownBits that a fabs() clears sign bits
and a reimplementation as a DAG combine for targets that have IEEE754-compliant fabs/fneg instructions.
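For reference, the kind of pattern involved (a sketch): masking off the FP
sign bit, which rL249702 turned into an @llvm.fabs.f32 call at the IR level
and which is now recognized during DAG combining instead:

define float @mask_signbit(float %x) {
  %bc = bitcast float %x to i32
  %and = and i32 %bc, 2147483647 ; 0x7FFFFFFF clears the sign bit
  %res = bitcast i32 %and to float
  ret float %res
}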

This is intended to resolve the objections raised on the dev list:
http://lists.llvm.org/pipermail/llvm-dev/2016-April/098154.html
and:
https://llvm.org/bugs/show_bug.cgi?id=24886#c4

In the interest of patch minimalism, I've only partly enabled AArch64. PowerPC, MIPS, x86 and others can enable it later.

Differential Revision: http://reviews.llvm.org/D19391

llvm-svn: 271573
2016-06-02 20:01:37 +00:00
Simon Pilgrim
6ec0f7efbc [X86][SSE] (Reapplied) Replace (V)PMOVSX and (V)PMOVZX integer extension intrinsics with generic IR (llvm)
This patch removes the (V)PMOVSX and (V)PMOVZX sign/zero extension intrinsics and auto-upgrades them to generic sext/zext IR instead. We already did this for SSE41 PMOVSX some time ago, so much of that implementation can be reused.
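As a sketch (an SSE41-style pmovsxbw shape; hypothetical function name), the
upgraded IR expresses the extension generically as a shuffle plus sext:

define <8 x i16> @pmovsxbw_ir(<16 x i8> %a) {
  %lo = shufflevector <16 x i8> %a, <16 x i8> undef, <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7>
  %ext = sext <8 x i8> %lo to <8 x i16>
  ret <8 x i16> %ext
}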

Reapplied now that the companion patch (D20684), which removes/auto-upgrades the clang intrinsics, has been committed.

Differential Revision: http://reviews.llvm.org/D20686

llvm-svn: 271131
2016-05-28 18:03:41 +00:00
Sanjay Patel
2c70238f80 [InstCombine] add tests to show bitcast interference
llvm-svn: 271125
2016-05-28 16:10:37 +00:00
Sanjay Patel
a0cd62f38a regenerate checks
llvm-svn: 271117
2016-05-28 15:44:28 +00:00
Sanjay Patel
fe05838f17 join RUN lines; NFC
llvm-svn: 271115
2016-05-28 15:34:05 +00:00
Simon Pilgrim
99e3cf65ff Revert: r270973 - [X86][SSE] Replace (V)PMOVSX and (V)PMOVZX integer extension intrinsics with generic IR (llvm)
llvm-svn: 270976
2016-05-27 09:02:25 +00:00
Simon Pilgrim
c8925e270b [X86][SSE] Replace (V)PMOVSX and (V)PMOVZX integer extension intrinsics with generic IR (llvm)
This patch removes the (V)PMOVSX and (V)PMOVZX sign/zero extension intrinsics and auto-upgrades them to generic sext/zext IR instead. We already did this for SSE41 PMOVSX some time ago, so much of that implementation can be reused.

A companion patch (D20684) removes/auto-upgrades the clang intrinsics.

Differential Revision: http://reviews.llvm.org/D20686

llvm-svn: 270973
2016-05-27 08:49:15 +00:00
Chad Rosier
1b8b93b50a [InstCombine] Catch more bswap cases missed due to zext and truncs.
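One shape of the kind now caught (a sketch with an invented name): a 16-bit
byte swap obscured by a zext/trunc pair:

define i16 @swap16(i16 %x) {
  %z = zext i16 %x to i32
  %lo = shl i32 %z, 8
  %hi = lshr i32 %z, 8
  %o = or i32 %lo, %hi
  %r = trunc i32 %o to i16
  ret i16 %r
}

; recognizable as: %r = call i16 @llvm.bswap.i16(i16 %x)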
Fixes PR27824.
Differential Revision: http://reviews.llvm.org/D20591.

llvm-svn: 270853
2016-05-26 14:58:51 +00:00
Adam Nemet
de673be237 [ConstantFold] Fix incorrect index rewrites for GEPs
Summary:
If an index for a vector or array type is out of range, GEP constant
folding tries to factor it into the preceding dimensions.  The code, however,
does not consider addressing of structure field padding, which should not
qualify as an out-of-range index.

As demonstrated by the testcase, this can occur if the indexing is
performed on a vector type and the preceding index is an array type.

SROA, for example, generates GEPs involving padding bytes as it slices
an alloca.

My fix disables this folding if the element type is a vector type.  I
believe that this is the only way we can end up with padding.  (We have
no access to DataLayout, so I am not sure there is a robust way of
actually checking for the presence of padding.)

Reviewers: majnemer

Subscribers: llvm-commits, Gerolf

Differential Revision: http://reviews.llvm.org/D20663

llvm-svn: 270826
2016-05-26 07:08:05 +00:00
Craig Topper
4710ab1424 [X86] Remove the llvm.x86.sse2.storel.dq intrinsic. It hasn't been used in a long time.
llvm-svn: 270677
2016-05-25 06:56:32 +00:00
Chad Rosier
24b2269c8b [InstCombine] Clean up and FileCheckize test case.
llvm-svn: 270586
2016-05-24 17:35:49 +00:00
Simon Pilgrim
98f098a7dc [InstCombine][X86][SSE41] The SSE41 PMOVSX intrinsics are auto upgraded now and aren't handled by InstCombine any more
llvm-svn: 270561
2016-05-24 13:52:44 +00:00
Gerolf Hoflehner
02d0a11d91 [InstCombine] Fix assertion when bitcast is converted to gep
When an aggregate contains an opaque type, its size cannot be
determined. This triggers an "Invalid GetElementPtrInst indices for type" assertion
in checkGEPType. The fix suppresses the conversion in this case.
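A sketch of the situation (hypothetical types): %agg below has no computable
size, so the bitcast must remain a bitcast rather than become a GEP:

%op = type opaque
%agg = type { i32, %op } ; contains an opaque type, so its size is unknown

define i32* @f(%agg* %p) {
  %b = bitcast %agg* %p to i32*
  ret i32* %b
}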

http://reviews.llvm.org/D20319

llvm-svn: 270479
2016-05-23 19:23:17 +00:00
Sanjay Patel
be15a9a346 [ValueTracking, InstCombine] extend isKnownToBeAPowerOfTwo() to handle vector splat constants
We could try harder to handle non-splat vector constants too, 
but that seems much rarer to me.

Note that the div test isn't resolved because there's a check
for isIntegerTy() guarding that transform.
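A sketch of the kind of fold this enables (hypothetical function): with a
splat divisor known to be a power of two, the urem can become a mask:

define <4 x i32> @urem_pow2(<4 x i32> %x) {
  %r = urem <4 x i32> %x, <i32 16, i32 16, i32 16, i32 16>
  ret <4 x i32> %r
}

; -> %r = and <4 x i32> %x, <i32 15, i32 15, i32 15, i32 15>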

Differential Revision: http://reviews.llvm.org/D20497

llvm-svn: 270369
2016-05-22 15:41:53 +00:00
Matt Arsenault
042c06f5a2 Fix constant folding of addrspacecast of null
This should not be making assumptions about the value of
the casted pointer.
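For example (a sketch): null in addrspace(0) need not map to the all-zero
pointer in another address space, so this constant expression must not fold
to the addrspace(3) null:

@g = global i8 addrspace(3)* addrspacecast (i8* null to i8 addrspace(3)*)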

llvm-svn: 270293
2016-05-21 00:14:04 +00:00
Sanjay Patel
df2ee08706 add test vector sdiv
llvm-svn: 270285
2016-05-20 22:08:40 +00:00
Sanjay Patel
38c18fba6e add test for vector shift
llvm-svn: 270284
2016-05-20 22:08:16 +00:00
Sanjay Patel
156965ba77 add tests for vector urem
llvm-svn: 270271
2016-05-20 20:55:17 +00:00
Sanjay Patel
f536e5debc use FileCheck instead of grep for exact checking
llvm-svn: 270265
2016-05-20 20:07:18 +00:00
Guozhi Wei
dc5a37bcd2 [InstCombine] Avoid combining the bitcast of a var that is used as both address and result of load instructions
This patch fixes https://llvm.org/bugs/show_bug.cgi?id=27703.

If there is a sequence of one or more load instructions in which each loaded value is used as the address of a later load, the bitcast is necessary to change the value type, so don't optimize it away.
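A hypothetical shape of such a chain (invented names), where each loaded
value is in turn the address of the next load and the bitcasts are needed
to change the value type:

define i8* @walk(i8* %p) {
  %a0 = bitcast i8* %p to i8**
  %v0 = load i8*, i8** %a0      ; loaded value...
  %a1 = bitcast i8* %v0 to i8** ; ...is used as the next load's address
  %v1 = load i8*, i8** %a1
  ret i8* %v1
}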

llvm-svn: 270135
2016-05-19 21:07:01 +00:00
Sanjay Patel
a9e371ad52 [InstCombine] add another test for wrong icmp constant (PR27792)
It doesn't matter if the comparison is unsigned; the inc/dec is always signed.

llvm-svn: 269831
2016-05-17 20:20:40 +00:00