Commit Graph

6760 Commits

Author SHA1 Message Date
David Majnemer
d785861d7b [MemCpyOpt] Don't perform callslot optimization across may-throw calls
An exception could prevent a store from occurring, but MemCpyOpt's
callslot optimization would fire anyway, causing the store to occur.
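
A minimal sketch of the hazard (function and value names are hypothetical):

```
; if @may_throw unwinds, %dst is never written; rewriting @init to
; store into %dst directly would make the store happen before the throw
declare void @init(i8* nocapture)
declare void @may_throw()
declare void @llvm.memcpy.p0i8.p0i8.i64(i8*, i8*, i64, i32, i1)

define void @f(i8* %dst) {
  %tmp = alloca [16 x i8]
  %src = getelementptr inbounds [16 x i8], [16 x i8]* %tmp, i64 0, i64 0
  call void @init(i8* %src)
  call void @may_throw()
  call void @llvm.memcpy.p0i8.p0i8.i64(i8* %dst, i8* %src, i64 16, i32 1, i1 false)
  ret void
}
```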

This fixes PR27849.

llvm-svn: 270892
2016-05-26 19:24:24 +00:00
Michael Kuperstein
9d7ba27b1f [BBVectorize] Don't vectorize selects with a scalar condition and vector operands.
This fixes PR27879.
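
A sketch of the rejected shape (names hypothetical): the condition is a single
i1, not a vector of i1, so it cannot be treated as a per-lane mask when fusing
two such selects:

```
define <2 x float> @t(i1 %c, <2 x float> %a, <2 x float> %b) {
  ; a scalar condition selects between whole vectors; it is not per-lane
  %s = select i1 %c, <2 x float> %a, <2 x float> %b
  ret <2 x float> %s
}
```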

Differential Revision: http://reviews.llvm.org/D20659

llvm-svn: 270888
2016-05-26 18:43:57 +00:00
David Majnemer
89a8ea9137 [CaptureTracking] Volatile operations capture their memory location
The memory location that corresponds to a volatile operation is very
special: it is observed by the machine in ways which we cannot
reason about.
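
A sketch, assuming this is the kind of case affected:

```
define void @t(i32* %p) {
  ; after this volatile store, %p is treated as captured even though
  ; no copy of the pointer escapes in the IR
  store volatile i32 0, i32* %p
  ret void
}
```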

Differential Revision: http://reviews.llvm.org/D20555

llvm-svn: 270879
2016-05-26 17:36:22 +00:00
Chad Rosier
1b8b93b50a [InstCombine] Catch more bswap cases missed due to zext and truncs.
Fixes PR27824.
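A hypothetical example of one newly caught shape: an i16 byte swap written
through a zext to i32 and a trunc back to i16, foldable to @llvm.bswap.i16:

```
define i16 @swap16(i16 %x) {
  %z = zext i16 %x to i32
  %hi = shl i32 %z, 8
  %lo = lshr i32 %z, 8
  %or = or i32 %hi, %lo
  ; only the low 16 bits survive the trunc: the two bytes of %x, swapped
  %r = trunc i32 %or to i16
  ret i16 %r
}
```
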
Differential Revision: http://reviews.llvm.org/D20591.

llvm-svn: 270853
2016-05-26 14:58:51 +00:00
David Majnemer
fb8ef4867c [MergedLoadStoreMotion] Don't transform across may-throw calls
It is unsafe to hoist a load before a function call which may throw;
the throw might prevent a pointer dereference.

Likewise, it is unsafe to sink a store after a call which may throw.
The caller might be able to observe the difference.
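
A sketch of the unsafe hoist (names hypothetical): merging the two loads into
the entry block would move %va above the call that may throw:

```
declare void @may_throw()

define i32 @f(i32* %p, i1 %c) {
entry:
  br i1 %c, label %a, label %b
a:
  call void @may_throw()   ; may unwind before %p is ever dereferenced
  %va = load i32, i32* %p
  br label %m
b:
  %vb = load i32, i32* %p
  br label %m
m:
  %v = phi i32 [ %va, %a ], [ %vb, %b ]
  ret i32 %v
}
```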

This fixes PR27858.

llvm-svn: 270828
2016-05-26 07:11:09 +00:00
Adam Nemet
de673be237 [ConstantFold] Fix incorrect index rewrites for GEPs
Summary:
If an index for a vector or array type is out-of-range, GEP constant
folding tries to factor it into preceding dimensions.  The code, however,
does not consider addressing of structure field padding, which should not
qualify as an out-of-range index.

As demonstrated by the testcase, this can occur if the indexing is
performed on a vector type and the preceding index is an array type.

SROA generates GEPs for example involving padding bytes as it slices an
alloca.

My fix disables this folding if the element type is a vector type.  I
believe that this is the only way we can end up with padding.  (We have
no access to DataLayout, so I am not sure there is a robust way of
actually checking for the presence of padding.)
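
A hedged sketch of the problem (types chosen for illustration): a <3 x float>
holds 12 bytes of data but typically has an alloc size of 16, so vector index 3
addresses the 4 padding bytes; factoring it into the preceding array index
(element 1, index 0) would move the address from offset 12 to offset 16:

```
@g = global [4 x <3 x float>] zeroinitializer
@p = global float* getelementptr ([4 x <3 x float>], [4 x <3 x float>]* @g, i64 0, i64 0, i64 3)
```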

Reviewers: majnemer

Subscribers: llvm-commits, Gerolf

Differential Revision: http://reviews.llvm.org/D20663

llvm-svn: 270826
2016-05-26 07:08:05 +00:00
Peter Collingbourne
19d18aa6de MemorySSA: Revert r269678 and r268068; replace with special casing in MemorySSA.
It turns out that too many passes are relying on alias analysis results
for control dependencies. Until we fix that by introducing a more accurate
modelling of control dependencies, special-case assume in MemorySSA instead.

Also introduce tests to ensure we don't regress the FunctionAttrs or LICM
passes.

Differential Revision: http://reviews.llvm.org/D20658

llvm-svn: 270823
2016-05-26 04:58:46 +00:00
Sanjoy Das
5255ead22b [IRCE] Optimize conjunctions of range checks
After this change, we do the expected thing for cases like

```
Check0Passed = /* range check IRCE can optimize */
Check1Passed = /* range check IRCE can optimize */
if (!(Check0Passed && Check1Passed))
  throw_Exception();
```

llvm-svn: 270804
2016-05-26 00:09:02 +00:00
Davide Italiano
504e778cb8 [PM] Port PartiallyInlineLibCalls to the new pass manager.
llvm-svn: 270798
2016-05-25 23:38:53 +00:00
Hal Finkel
e4580bceba Look for a loop's starting location in the llvm.loop metadata
Getting accurate locations for loops is important, because those locations are
used by the frontend to generate optimization remarks. Currently, optimization
remarks for loops often appear on the wrong line, frequently the first line of
the loop body instead of the loop itself. This is confusing because that line
might itself be another loop, or might be somewhere else completely if the body
was an inlined function call. This happens because of the way we find the
loop's starting location. First, we look for a preheader, and if we find one
and its terminator has a debug location, then we use that. Otherwise, we look
for a location on an instruction in the loop header.

The fallback heuristic is not bad, but will almost always find the beginning of
the body, and not the loop statement itself. The preheader location search
often fails because there's often not a preheader, and even when there is a
preheader, depending on how it was formed, it sometimes carries the location of
some preceding code.

I don't see any good theoretical way to fix this problem. On the other hand,
this seems like a straightforward solution: Put the debug location in the
loop's llvm.loop metadata. A companion Clang patch will cause Clang to insert
llvm.loop metadata with appropriate locations when generating debugging
information. With these changes, our loop remarks have much more accurate
locations.
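
An abbreviated sketch of the encoding (line and column are made up, and the
surrounding function and debug metadata are omitted): the loop ID metadata
carries a DILocation naming the loop statement itself:

```
loop.latch:
  br i1 %done, label %exit, label %loop.header, !llvm.loop !1

!1 = distinct !{!1, !2}
!2 = !DILocation(line: 42, column: 3, scope: !3)
```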

Differential Revision: http://reviews.llvm.org/D19738

llvm-svn: 270771
2016-05-25 21:42:37 +00:00
Ahmed Bougacha
b3c9ba99bf [TLI] Also cover Linux 64 libfunc (stat64, ...) prototype checking.
My script missed those in r270750.

llvm-svn: 270763
2016-05-25 21:16:33 +00:00
Ahmed Bougacha
15451fb9fc [TLI] Fix NumParams==0 prototype checking typo.
There was a typo in r267758. It caused invalid accesses when
given something like "void @free(...)", as NumParams == 0, and
we would then try to look at the 0th parameter.

Turns out, most of these were untested; add both attribute
and missing-prototype checks for all libc libfuncs.

Differential Revision: http://reviews.llvm.org/D20543

llvm-svn: 270750
2016-05-25 20:22:45 +00:00
Reid Kleckner
31bb5b2278 [IR] Copy comdats in GlobalObject::copyAttributesFrom
This is probably correct for all uses except cross-module IR linking,
where we need to move the comdat from the source module to the
destination module.

Fixes PR27870.

Reviewers: majnemer

Differential Revision: http://reviews.llvm.org/D20631

llvm-svn: 270743
2016-05-25 18:36:22 +00:00
Sanjay Patel
e582594538 [x86] avoid code explosion from LoopVectorizer for gather loop (PR27826)
By making pointer extraction from a vector more expensive in the cost model,
we avoid the vectorization of a loop that is very likely to be memory-bound:
https://llvm.org/bugs/show_bug.cgi?id=27826

There are still bugs related to this, so we may need a more general solution
to avoid vectorizing obviously memory-bound loops when we don't have HW gather
support.

Differential Revision: http://reviews.llvm.org/D20601

llvm-svn: 270729
2016-05-25 17:27:54 +00:00
Craig Topper
4710ab1424 [X86] Remove the llvm.x86.sse2.storel.dq intrinsic. It hasn't been used in a long time.
llvm-svn: 270677
2016-05-25 06:56:32 +00:00
David Majnemer
3c41824d16 [FunctionAttrs] Volatile loads should disable readonly
A volatile load has side effects beyond what callers expect readonly to
signify.  For example, it is not safe to reorder two function calls
which each perform a volatile load to the same memory location.
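
A sketch: if @f below were inferred readonly, two calls to it could be merged
or reordered, dropping one of the volatile loads the machine expects to
observe:

```
@g = global i32 0

define i32 @f() {
  %v = load volatile i32, i32* @g
  ret i32 %v
}
```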

llvm-svn: 270671
2016-05-25 05:53:04 +00:00
Davide Italiano
c01e301072 [PM] Port BDCE to the new pass manager.
llvm-svn: 270647
2016-05-25 01:57:04 +00:00
Michael Zolotukhin
82048571c5 Re-enable "[LoopUnroll] Enable advanced unrolling analysis by default" one more time.
This reverts commit r270577.

llvm-svn: 270630
2016-05-24 23:00:05 +00:00
Michael Zolotukhin
7bf254c21f [LoopUnrollAnalyzer] Fix a crash in UnrolledInstAnalyzer::visitCastInst.
This fixes PR27847. Now for real.

llvm-svn: 270629
2016-05-24 22:59:58 +00:00
Chad Rosier
24b2269c8b [InstCombine] Clean up and FileCheckize test case.
llvm-svn: 270586
2016-05-24 17:35:49 +00:00
Hans Wennborg
1bb7a310ec Revert r270518, which re-enabled "[LoopUnroll] Enable advanced unrolling analysis by default."
Chromium builds are still hitting the assert in PR27874.

llvm-svn: 270577
2016-05-24 16:10:12 +00:00
Sanjay Patel
d1d88e8e3a [ValueTracking, InstSimplify] extend isKnownNonZero() to handle vector constants
Similar in spirit to D20497:
If all elements of a constant vector are known non-zero, then we can say that the
whole vector is known non-zero.

It seems like we could extend this to FP scalar/vector too, but isKnownNonZero()
says it only works for integers and pointers for now.
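
A sketch of what this enables (assumed, not taken from the patch): the 'or'
with the constant <1, 1> makes every lane non-zero, so the compare folds to
all-false:

```
define <2 x i1> @t(<2 x i32> %x) {
  %y = or <2 x i32> %x, <i32 1, i32 1>
  %c = icmp eq <2 x i32> %y, zeroinitializer   ; folds to zeroinitializer
  ret <2 x i1> %c
}
```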

Differential Revision: http://reviews.llvm.org/D20544

llvm-svn: 270562
2016-05-24 14:18:49 +00:00
Simon Pilgrim
98f098a7dc [InstCombine][X86][SSE41] The SSE41 PMOVSX intrinsics are auto upgraded now and aren't handled by InstCombine any more
llvm-svn: 270561
2016-05-24 13:52:44 +00:00
Michael Zolotukhin
5d004fcebe Revert "Revert r270478 "[LoopUnroll] Enable advanced unrolling analysis by default.""
This reverts commit r270512 and reapplies r270478. Originally it caused
PR27847, but it was fixed in r270517.

llvm-svn: 270518
2016-05-24 01:22:20 +00:00
Michael Zolotukhin
32f621fe89 [LoopUnrollAnalyzer] Fix a crash in UnrolledInstAnalyzer::visitCastInst.
This fixes PR27847.

llvm-svn: 270517
2016-05-24 00:51:01 +00:00
Hans Wennborg
794d1b3f11 Revert r270478 "[LoopUnroll] Enable advanced unrolling analysis by default."
This caused PR27847.

llvm-svn: 270512
2016-05-23 23:42:35 +00:00
Sanjoy Das
c46619aba2 [IRCE] Optimize "uses" not branches; NFCI
This changes IRCE to optimize uses, and not branches.  This change is
NFCI since the uses we do inspect are in practice only ever going to be
the condition use in conditional branches; but this flexibility will
later allow us to analyze more complex expressions than just a direct
branch on a range check.

llvm-svn: 270500
2016-05-23 22:16:45 +00:00
Sanjay Patel
782c05a22a [InstSimplify] add vector tests for isKnownNonZero
llvm-svn: 270498
2016-05-23 22:09:04 +00:00
Gerolf Hoflehner
02d0a11d91 [InstCombine] Fix assertion when bitcast is converted to gep
When an aggregate contains an opaque type, its size cannot be
determined. This triggers an "Invalid GetElementPtrInst indices for type" assert
in function checkGEPType. The fix suppresses the conversion in this case.
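
A sketch of the failing shape (names hypothetical): %wrapper is not sized, so
the bitcast must be left alone:

```
%opaque = type opaque
%wrapper = type { i32, %opaque }

define i8* @t(%wrapper* %p) {
  ; the size of %wrapper cannot be computed, so converting this bitcast
  ; into a GEP would trip the checkGEPType assertion
  %q = bitcast %wrapper* %p to i8*
  ret i8* %q
}
```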

http://reviews.llvm.org/D20319

llvm-svn: 270479
2016-05-23 19:23:17 +00:00
Michael Zolotukhin
aaf9408cae [LoopUnroll] Enable advanced unrolling analysis by default.
Summary:
This patch turns on LoopUnrollAnalyzer by default. To mitigate compile
time regressions, I chose very conservative thresholds for now. Later we
can make them more aggressive, but it might require being smarter about
which loops we're optimizing. E.g. currently the biggest issue is that
with more aggressive thresholds we unroll many cold loops, which
increases compile time for no performance benefit (performance of those
loops is improved, but it doesn't matter since they are cold).

Test results for compile time (using 4 samples to reduce noise):
```
MultiSource/Benchmarks/VersaBench/ecbdes/ecbdes 5.19%
SingleSource/Benchmarks/Polybench/medley/reg_detect/reg_detect  4.19%
MultiSource/Benchmarks/FreeBench/fourinarow/fourinarow  3.39%
MultiSource/Applications/JM/lencod/lencod 1.47%
MultiSource/Benchmarks/Fhourstones-3_1/fhourstones3_1 -6.06%
```

I didn't see any performance changes in the testsuite, but it improves
some internal tests.

Reviewers: hfinkel, chandlerc

Subscribers: llvm-commits, mzolotukhin

Differential Revision: http://reviews.llvm.org/D20482

llvm-svn: 270478
2016-05-23 19:10:19 +00:00
Sanjay Patel
be15a9a346 [ValueTracking, InstCombine] extend isKnownToBeAPowerOfTwo() to handle vector splat constants
We could try harder to handle non-splat vector constants too, 
but that seems much rarer to me.

Note that the div test isn't resolved because there's a check
for isIntegerTy() guarding that transform.
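
A sketch of a fold this unlocks (assumed): the splat <8, 8> is a known power of
two, so the urem becomes an 'and':

```
define <2 x i32> @t(<2 x i32> %x) {
  %r = urem <2 x i32> %x, <i32 8, i32 8>   ; -> and <2 x i32> %x, <i32 7, i32 7>
  ret <2 x i32> %r
}
```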

Differential Revision: http://reviews.llvm.org/D20497

llvm-svn: 270369
2016-05-22 15:41:53 +00:00
David Majnemer
cb9616636d [SimplifyCFG] Remove cleanuppads which are empty except for calls to lifetime.end
A cleanuppad is not cheap: it turns into many instructions and results
in additional spills and fills.  It is not worth keeping a cleanuppad
around if all it does is hold a lifetime.end instruction.

N.B.  We first try to merge the cleanuppad with another cleanuppad to
avoid dropping the lifetime and debug info markers.
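
A sketch of the removable pattern (the personality is chosen for
illustration): the cleanup does nothing but end a lifetime, so the unwind
edge can go straight to the caller and the invoke can become a plain call:

```
define void @f(i8* %mem) personality i32 (...)* @__CxxFrameHandler3 {
entry:
  invoke void @g() to label %done unwind label %cleanup
cleanup:
  %cp = cleanuppad within none []
  call void @llvm.lifetime.end(i64 16, i8* %mem) [ "funclet"(token %cp) ]
  cleanupret from %cp unwind to caller
done:
  ret void
}

declare void @g()
declare i32 @__CxxFrameHandler3(...)
declare void @llvm.lifetime.end(i64, i8* nocapture)
```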

llvm-svn: 270314
2016-05-21 05:12:32 +00:00
Sanjoy Das
9b7c91f0ad [GuardWidening] Fix incorrect use of remove_if
I had used `std::remove_if` under the assumption that it moves the
predicate-matching elements to the end, but actually the elements
remaining towards the end (after the iterator returned by
`std::remove_if`) are indeterminate.  Fix the bug (and make the code
more straightforward) by using a temporary SmallVector, and add a test
case demonstrating the issue.

llvm-svn: 270306
2016-05-21 02:24:44 +00:00
Matt Arsenault
042c06f5a2 Fix constant folding of addrspacecast of null
This should not make assumptions about the value of
the casted pointer.
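
A sketch: null in addrspace(0) need not correspond to the null value in
another address space, so this must not constant-fold to 'null':

```
define i8 addrspace(3)* @t() {
  %c = addrspacecast i8* null to i8 addrspace(3)*
  ret i8 addrspace(3)* %c
}
```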

llvm-svn: 270293
2016-05-21 00:14:04 +00:00
Sanjay Patel
df2ee08706 add test vector sdiv
llvm-svn: 270285
2016-05-20 22:08:40 +00:00
Sanjay Patel
38c18fba6e add test for vector shift
llvm-svn: 270284
2016-05-20 22:08:16 +00:00
Sanjay Patel
156965ba77 add tests for vector urem
llvm-svn: 270271
2016-05-20 20:55:17 +00:00
Sanjay Patel
f536e5debc use FileCheck instead of grep for exact checking
llvm-svn: 270265
2016-05-20 20:07:18 +00:00
Mark Lacey
15fb815be7 Functions with differing phis should not be merged.
Check that the incoming blocks of phi nodes are identical, and block
function merging if they are not.

rdar://problem/26255167

Differential Revision: http://reviews.llvm.org/D20462

llvm-svn: 270250
2016-05-20 18:39:11 +00:00
Sanjay Patel
41ae96619d [SimplifyCFG] eliminate switch cases based on known range of switch condition
This was noted in PR24766:
https://llvm.org/bugs/show_bug.cgi?id=24766#c2

We may not know whether the sign bit(s) are zero or one, but we can still
optimize based on knowing that the sign bit is repeated.
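
A sketch (values made up): %x is sign-extended from i8, so its top bits repeat
the sign bit and no value outside [-128, 127] is reachable; the 1000 case and
its destination can be removed:

```
define i32 @t(i8 %a) {
entry:
  %x = sext i8 %a to i32
  switch i32 %x, label %def [
    i32 100, label %live
    i32 1000, label %dead   ; unreachable: not representable in i8
  ]
live:
  ret i32 1
dead:
  ret i32 2
def:
  ret i32 0
}
```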

Differential Revision: http://reviews.llvm.org/D20275

llvm-svn: 270222
2016-05-20 14:53:09 +00:00
Easwaran Raman
21aa6c180e Allow -inline-threshold to override default threshold.
Before r257832, the threshold used by SimpleInliner was explicitly specified or generated from opt levels and passed to the base class Inliner's constructor. There, it was first overridden by explicitly specified -inline-threshold. The refactoring in r257832 did not preserve this behavior for all opt levels. This change brings back the original behavior.

Differential Revision: http://reviews.llvm.org/D20452

llvm-svn: 270153
2016-05-19 23:02:09 +00:00
Sanjoy Das
14b186a828 [GuardWidening] Introduce range check merging
Sequences of range checks expressed using guards, like

  guard((I - 2) u< L)
  guard((I - 1) u< L)
  guard((I + 0) u< L)
  guard((I + 1) u< L)
  guard((I + 2) u< L)

can sometimes be combined into a smaller sequence:

  guard((I - 2) u< L AND (I + 2) u< L)

if we can prove that (I - 2) u< L AND (I + 2) u< L implies all of the
checks expressed in the previous sequence.

This change teaches GuardWidening to do this kind of merging when
feasible.

llvm-svn: 270151
2016-05-19 22:55:46 +00:00
Guozhi Wei
dc5a37bcd2 [InstCombine] Avoid combining the bitcast of a var that is used as both address and result of load instructions
This patch fixes https://llvm.org/bugs/show_bug.cgi?id=27703.

If there is a sequence of one or more load instructions in which each loaded value is used as the address of a later load, and a bitcast is necessary to change the value type, don't optimize it.
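
A sketch of the PR27703 shape (names hypothetical): %v is both the result of
one load and, through the bitcast, the address of the next, so the bitcast must
stay:

```
define i64* @walk(i64** %p) {
  %v = load i64*, i64** %p
  %a = bitcast i64* %v to i64**
  %v2 = load i64*, i64** %a
  ret i64* %v2
}
```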

llvm-svn: 270135
2016-05-19 21:07:01 +00:00
Wei Mi
3cf2cc8254 Recommit r255691 since PR26509 has been fixed.
llvm-svn: 270113
2016-05-19 20:38:03 +00:00
Peter Collingbourne
1d69c09bab CodeGen: Make the global-merge pass independently testable, and add a test.
llvm-svn: 270023
2016-05-19 04:38:56 +00:00
Sanjoy Das
b590ade35e [GuardWidening] Use getEquivalentICmp to fold constant compares
`ConstantRange::getEquivalentICmp` is more general, and better
factored.

llvm-svn: 270019
2016-05-19 03:53:17 +00:00
Sanjoy Das
479a57ca4a New pass: guard widening
Summary:
Implement guard widening in LLVM. Description from GuardWidening.cpp:

The semantics of the `@llvm.experimental.guard` intrinsic lets LLVM
transform it so that it fails more often than it did before the
transform.  This optimization is called "widening" and can be used to
hoist and common runtime checks in situations like these:

```
%cmp0 = 7 u< Length
call @llvm.experimental.guard(i1 %cmp0) [ "deopt"(...) ]
call @unknown_side_effects()
%cmp1 = 9 u< Length
call @llvm.experimental.guard(i1 %cmp1) [ "deopt"(...) ]
...
```

to

```
%cmp0 = 9 u< Length
call @llvm.experimental.guard(i1 %cmp0) [ "deopt"(...) ]
call @unknown_side_effects()
...
```

If `%cmp0` is false, `@llvm.experimental.guard` will "deoptimize" back
to a generic implementation of the same function, which will have the
correct semantics from that point onward.  It is always _legal_ to
deoptimize (so replacing `%cmp0` with false is "correct"), though it may
not always be profitable to do so.

NB! This pass is a work in progress.  It hasn't been tuned to be
"production ready" yet.  It is known to have quadratic running time and
will not scale to large numbers of guards.

Reviewers: reames, atrick, bogner, apilipenko, nlewycky

Subscribers: mcrosier, llvm-commits

Differential Revision: http://reviews.llvm.org/D20143

llvm-svn: 269997
2016-05-18 22:55:34 +00:00
Dehao Chen
7cac814db8 Follow-up patch of http://reviews.llvm.org/D19948 to handle missing profiles when simplifying CFG.
Summary: Set the default branch weight to 1:1 if one of the branches is missing profile data when simplifying the CFG.

Reviewers: spatel, davidxl

Subscribers: danielcdh, llvm-commits

Differential Revision: http://reviews.llvm.org/D20307

llvm-svn: 269995
2016-05-18 22:41:03 +00:00
Michael Zolotukhin
60d4945387 [LoopUnrollAnalyzer] Take into account cost of instructions controlling branches, along with their operands.
Previously, we didn't add their cost or the cost of their operands, which
could have resulted in unrolling loops for no actual benefit.

llvm-svn: 269985
2016-05-18 21:20:12 +00:00
Matt Arsenault
41311e20a0 AMDGPU: Other sizes of popcnt are fast
We can chain bcnt instructions together, so
any width popcnt is pretty fast.

llvm-svn: 269950
2016-05-18 16:10:19 +00:00