mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-21 03:53:04 +02:00
Commit Graph

1682 Commits

Author SHA1 Message Date
Nico Weber
6dca3ae366 Revert r267210, it makes clang assert (PR27490).
llvm-svn: 267232
2016-04-22 22:08:42 +00:00
Andrew Kaylor
653d361880 Re-commit optimization bisect support (r267022) without new pass manager support.
The original commit was reverted because of a buildbot problem with LazyCallGraph::SCC handling (not related to the OptBisect handling).

Differential Revision: http://reviews.llvm.org/D19172

llvm-svn: 267231
2016-04-22 22:06:11 +00:00
Philip Reames
53e3092198 [unordered] sink unordered stores at end of blocks
The existing code turned out to be completely correct when audited.  Thus, only minor code changes and the addition of a couple of tests.
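
For illustration, a minimal IR sketch of the kind of pattern this now covers (hypothetical function, not taken from the added tests):

```
define void @merge_stores(i1 %c, i32 %a, i32 %b, i32* %p) {
entry:
  br i1 %c, label %then, label %else
then:
  store atomic i32 %a, i32* %p unordered, align 4
  br label %end
else:
  store atomic i32 %b, i32* %p unordered, align 4
  br label %end
end:
  ; the two unordered stores can be sunk into %end as a phi feeding one store
  ret void
}
```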

llvm-svn: 267215
2016-04-22 20:53:32 +00:00
Sanjoy Das
cbed8d8247 Fold compares for distinct allocations
Summary:
We can fold compares to false when two distinct allocations within a
function are compared for equality.
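
A minimal sketch of the pattern being folded (hypothetical function; assumes the usual malloc declaration):

```
declare noalias i8* @malloc(i64)

define i1 @distinct_allocs() {
  %a = call i8* @malloc(i64 4)
  %b = call i8* @malloc(i64 4)
  ; neither allocation escapes, so the equality compare can fold to false
  %cmp = icmp eq i8* %a, %b
  ret i1 %cmp
}
```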

Patch by Anna Thomas!

Reviewers: majnemer, reames, sanjoy

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D19390

llvm-svn: 267214
2016-04-22 20:52:25 +00:00
Philip Reames
29f7cc186f [unordered] Extend load/store type canonicalization to handle unordered operations
Extend the type canonicalization logic to work for unordered atomic loads and stores.  Note that while this change itself is fairly simple and low risk, there's a reasonable chance this will expose problems in the backends by suddenly generating IR they wouldn't have seen before.  Anything of this nature will be an existing bug in the backend (you could write an atomic float load), but this will definitely change the frequency with which such cases are encountered.  If you see problems, feel free to revert this change, but please make sure you collect a test case.  
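
A rough before/after sketch of one pattern the canonicalization handles, now also applied to unordered atomics (hypothetical function):

```
; before: the atomic load's only use is a bitcast to float
define float @load_f(i32* %p) {
  %i = load atomic i32, i32* %p unordered, align 4
  %f = bitcast i32 %i to float
  ret float %f
}

; after: the load is rewritten to produce the used type directly
define float @load_f.canon(i32* %p) {
  %pf = bitcast i32* %p to float*
  %f = load atomic float, float* %pf unordered, align 4
  ret float %f
}
```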

llvm-svn: 267210
2016-04-22 20:33:48 +00:00
Silviu Baranga
52b925e4c9 [InstCombine] Preserve fast math flags when combining PHIs
Summary:
When optimizing PHIs whose inputs are floating-point binary
operators, we preserve all IR flags except the fast math
flags.

This change removes the logic which tracked some of the IR flags
(no wrap, exact) and replaces it with an AND of the IR flags of
all inputs to the PHI, which also handles the fast math
flags.
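
A rough sketch of the situation (hypothetical function): when all of a PHI's inputs are the same floating-point opcode, InstCombine can sink them into a single operation past the PHI, and with this change the combined instruction keeps the intersection of the incoming fast-math flags:

```
define float @phi_fmf(i1 %c, float %a, float %b, float %d) {
entry:
  br i1 %c, label %then, label %else
then:
  %x = fadd fast float %a, %b
  br label %merge
else:
  %y = fadd nnan float %a, %d
  br label %merge
merge:
  ; the combined fadd after the phi keeps only the common flags (here nnan)
  %r = phi float [ %x, %then ], [ %y, %else ]
  ret float %r
}
```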

Reviewers: majnemer

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D19370

llvm-svn: 267139
2016-04-22 11:21:36 +00:00
Vedant Kumar
b6cc52b7d8 Revert "Initial implementation of optimization bisect support."
This reverts commit r267022, due to an ASan failure:

  http://lab.llvm.org:8080/green/job/clang-stage2-cmake-RgSan_check/1549

llvm-svn: 267115
2016-04-22 06:51:37 +00:00
JF Bastien
03f28f7dfe NFC: fix copy / paste comment
llvm-svn: 267039
2016-04-21 19:53:39 +00:00
JF Bastien
6b5bcbfa08 NFC: fix nonsensical comment
llvm-svn: 267036
2016-04-21 19:41:48 +00:00
Sanjoy Das
d82c062070 Folding compares with unescaped allocations
Summary:
If we know that the pointer allocated within a function does not escape,
we can fold away comparisons against global pointers.
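
A minimal sketch of the kind of compare this can now fold (hypothetical function and global):

```
@g = global i8 0

declare noalias i8* @malloc(i64)

define i1 @cmp_alloc_with_global() {
  %m = call i8* @malloc(i64 1)
  ; %m does not escape, so comparing it against a global can fold to false
  %cmp = icmp eq i8* %m, @g
  ret i1 %cmp
}
```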

Patch by Anna Thomas!

Reviewers: reames, majnemer, sanjoy

Subscribers: mgrang, mcrosier, majnemer, llvm-commits

Differential Revision: http://reviews.llvm.org/D19276

llvm-svn: 267035
2016-04-21 19:26:45 +00:00
Philip Reames
1c0459c510 [instcombine][unordered] Extend load(select) transform to handle unordered loads
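A minimal sketch of the extended transform (hypothetical function; the select-of-loads form requires both pointers to be safe to load from):

```
define i32 @load_of_select(i1 %c, i32* dereferenceable(4) %p, i32* dereferenceable(4) %q) {
  %addr = select i1 %c, i32* %p, i32* %q
  ; an unordered load of a select can become a select of two unordered loads
  %v = load atomic i32, i32* %addr unordered, align 4
  ret i32 %v
}
```
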
llvm-svn: 267023
2016-04-21 17:59:40 +00:00
Andrew Kaylor
fd49f275f8 Initial implementation of optimization bisect support.
This patch implements an optimization bisect feature, which will allow optimizations to be selectively disabled at compile time in order to track down test failures that are caused by incorrect optimizations.

The bisection is enabled using a new command line option (-opt-bisect-limit).  Individual passes that may be skipped call the OptBisect object (via an LLVMContext) to see if they should be skipped based on the bisect limit.  A finer level of control (disabling individual transformations) can be managed through an additional OptBisect method, but this is not yet used.

The skip checking in this implementation is based on (and replaces) the skipOptnoneFunction check.  Where that check was being called, a new call has been inserted in its place which checks the bisect limit and the optnone attribute.  A new function call has been added for module and SCC passes that behaves in a similar way.

Differential Revision: http://reviews.llvm.org/D19172

llvm-svn: 267022
2016-04-21 17:58:54 +00:00
Philip Reames
7e0ae29aca [unordered] unordered loads from null are still unreachable
llvm-svn: 267019
2016-04-21 17:45:05 +00:00
Philip Reames
e04aebaa98 [instcombine][unordered] Implement *-load forwarding for unordered atomics
This builds on r266999, which made FindAvailableValue do the right thing.  The included tests show the newly enabled transforms and those which remain disabled, either due to conservatism or correctness requirements.
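
A minimal sketch of the newly enabled forwarding (hypothetical function):

```
define i32 @forward(i32* %p, i32 %x) {
  store atomic i32 %x, i32* %p unordered, align 4
  ; the stored value can now be forwarded to this unordered load,
  ; eliminating the load entirely
  %v = load atomic i32, i32* %p unordered, align 4
  ret i32 %v
}
```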

llvm-svn: 267006
2016-04-21 17:03:33 +00:00
David Majnemer
d93b1bf8b1 [ValueTracking, VectorUtils] Refactor getIntrinsicIDForCall
The functionality contained within getIntrinsicIDForCall is two-fold: it
checks if a CallInst's callee is a vectorizable intrinsic.  If it isn't
an intrinsic, it attempts to map the call's target to a suitable
intrinsic.

Move the mapping functionality into getIntrinsicForCallSite and rename
getIntrinsicIDForCall to getVectorIntrinsicIDForCall while
reimplementing it in terms of getIntrinsicForCallSite.

llvm-svn: 266801
2016-04-19 19:10:21 +00:00
Mehdi Amini
9ff867f98c [NFC] Header cleanup
Removed some unused headers, replaced some headers with forward class declarations.

Found using simple scripts like this one:
clear && ack --cpp -l '#include "llvm/ADT/IndexedMap.h"' | xargs grep -L 'IndexedMap[<]' | xargs grep -n --color=auto 'IndexedMap'

Patch by Eugene Kosov <claprix@yandex.ru>

Differential Revision: http://reviews.llvm.org/D19219

From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 266595
2016-04-18 09:17:29 +00:00
Sanjoy Das
157d1c4576 Fix a typo in rL265762
I accidentally replaced `mayBeOverridden` with `!isInterposable`.
Remove the negation and add a test case that would've caught this.

Many thanks to Håkan Hjort for spotting this!

llvm-svn: 266551
2016-04-17 04:30:43 +00:00
David Majnemer
77f3549d90 [InstCombine] Don't transform compares of calls to functions named fabs{f,l,}
InstCombine wants to optimize compares of calls to fabs with zero.
However, we didn't have the necessary legality checking to verify that
the function call had the same behavior as fabs.

llvm-svn: 266452
2016-04-15 17:21:03 +00:00
Sanjay Patel
1d8eaf1ecc [InstCombine] remove constant by inverting compare + logic (PR27105)
https://llvm.org/bugs/show_bug.cgi?id=27105

We can check if all bits outside of a constant mask are set with a 
single constant.

As noted in the bug report, although this form should be considered the
canonical IR, backends may want to transform this into an 'andn' / 'andc' 
comparison against zero because that could be a single machine instruction.

Differential Revision: http://reviews.llvm.org/D18842

llvm-svn: 266362
2016-04-14 20:17:40 +00:00
David Majnemer
9e0160358c [InstCombine] We folded an fcmp to an i1 instead of a vector of i1
Remove an ad-hoc transform in InstCombine and replace it with more
general machinery (ValueTracking, InstructionSimplify and VectorUtils).

This fixes PR27332.

llvm-svn: 266175
2016-04-13 06:55:52 +00:00
Sanjay Patel
ba900a7bb3 [x86, InstCombine] fix masked load pass-through operand to be a zero vector
This bug was introduced with:
http://reviews.llvm.org/rL262269

AVX masked loads are specified to set vector lanes to zero when the high bit of the mask
element for that lane is zero:
"If the mask is 0, the corresponding data element is set to zero in the load form of these
instructions, and unmodified in the store form." --Intel manual

Differential Revision: http://reviews.llvm.org/D19017

llvm-svn: 266148
2016-04-12 23:16:23 +00:00
George Burgess IV
4e2d8dca99 Add the allocsize attribute to LLVM.
`allocsize` is a function attribute that allows users to request that
LLVM treat arbitrary functions as allocation functions.

This patch makes LLVM accept the `allocsize` attribute, and makes
`@llvm.objectsize` recognize said attribute.
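
A minimal sketch of the attribute in use (the allocator @my_malloc is hypothetical):

```
; first argument is the allocation size in bytes
declare i8* @my_malloc(i32) allocsize(0)

declare i64 @llvm.objectsize.i64.p0i8(i8*, i1)

define i64 @size_of_alloc() {
  %p = call i8* @my_malloc(i32 64)
  ; with allocsize, @llvm.objectsize can evaluate this to 64
  %size = call i64 @llvm.objectsize.i64.p0i8(i8* %p, i1 false)
  ret i64 %size
}
```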

The review for this was split into two patches for ease of reviewing:
D18974 and D14933. As promised on the revisions, I'm landing both
patches as a single commit.

Differential Revision: http://reviews.llvm.org/D14933

llvm-svn: 266032
2016-04-12 01:05:35 +00:00
Sanjay Patel
68bd68b7b2 add FIXME comment; NFC
llvm-svn: 265970
2016-04-11 17:35:57 +00:00
Sanjay Patel
86526c47b6 add an assert for safety; NFC
llvm-svn: 265969
2016-04-11 17:27:44 +00:00
Sanjay Patel
3172549776 variable names start with a capital letter; NFC
llvm-svn: 265968
2016-04-11 17:25:23 +00:00
Sanjay Patel
6e95a2af4a [InstCombine] use canEvaluateShiftedShift() to handle the lshr case (NFCI)
We need just a couple of logic tweaks to consolidate the shl and lshr cases.

This is step 5 of refactoring to solve PR26760:
https://llvm.org/bugs/show_bug.cgi?id=26760

llvm-svn: 265965
2016-04-11 17:11:55 +00:00
Sanjay Patel
c3b4340d01 [InstCombine] don't try to shift an illegal amount (PR26760)
This is the straightforward fix for PR26760:
https://llvm.org/bugs/show_bug.cgi?id=26760

But we still need to make some changes to generalize this helper function
and then send the lshr case into here.

llvm-svn: 265960
2016-04-11 16:50:32 +00:00
Sanjay Patel
9d615dcbe9 [InstCombine] rename variables in shifted-shift helper function (NFCI)
This is step 3 of refactoring to solve PR26760:
https://llvm.org/bugs/show_bug.cgi?id=26760

llvm-svn: 265954
2016-04-11 16:11:07 +00:00
Sanjay Patel
2bc03c53fa [InstCombine] add helper function for shift-shift optimization (NFCI)
This is step 2 of refactoring to solve PR26760:
https://llvm.org/bugs/show_bug.cgi?id=26760

llvm-svn: 265951
2016-04-11 15:43:41 +00:00
David Majnemer
08e9ca8730 [InstCombine] Fix miscompile in FoldSPFofSPF
We had a select of a cast of a select but attempted to replace the outer
select with the inner select despite their incompatible types.

Patch by Anton Korobeynikov!

This fixes PR27236.

llvm-svn: 265805
2016-04-08 16:51:49 +00:00
David Majnemer
8f75cadbc7 [InstCombine] Add a peephole for redundant assumes
Two or more identical assumes are occasionally next to each other in a
basic block.
While our generic machinery will turn a redundant assume into a no-op,
it is not super cheap.
We can perform a simpler check to achieve the same result for this case.
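
A minimal sketch of the case the peephole targets (hypothetical function):

```
declare void @llvm.assume(i1)

define void @twice(i1 %cond) {
  call void @llvm.assume(i1 %cond)
  ; an identical, immediately adjacent assume can be dropped by a cheap check
  call void @llvm.assume(i1 %cond)
  ret void
}
```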

llvm-svn: 265801
2016-04-08 16:37:12 +00:00
Sanjoy Das
b20d278ebd Don't IPO over functions that can be de-refined
Summary:
Fixes PR26774.

If you're aware of the issue, feel free to skip the "Motivation"
section and jump directly to "This patch".

Motivation:

I define "refinement" as discarding behaviors from a program that the
optimizer has license to discard.  So transforming:

```
void f(unsigned x) {
  unsigned t = 5 / x;
  (void)t;
}
```

to

```
void f(unsigned x) { }
```

is refinement, since the behavior went from "if x == 0 then undefined
else nothing" to "nothing" (the optimizer has license to discard
undefined behavior).

Refinement is a fundamental aspect of many mid-level optimizations done
by LLVM.  For instance, transforming `x == (x + 1)` to `false` also
involves refinement since the expression's value went from "if x is
`undef` then { `true` or `false` } else { `false` }" to "`false`" (by
definition, the optimizer has license to fold `undef` to any non-`undef`
value).

Unfortunately, refinement implies that the optimizer cannot assume
that the implementation of a function it can see has all of the
behavior an unoptimized or a differently optimized version of the same
function can have.  This is a problem for functions with comdat
linkage, where a function can be replaced by an unoptimized or a
differently optimized version of the same source level function.

For instance, FunctionAttrs cannot assume a comdat function is
actually `readnone` even if it does not have any loads or stores in
it, since there may have been loads and stores in the "original
function" that were refined out in the currently visible variant, and
at the link step the linker may in fact choose an implementation with
a load or a store.  As an example, consider a function that does two
atomic loads from the same memory location, and writes to memory only
if the two values are not equal.  The optimizer is allowed to refine
this function by first CSE'ing the two loads, and then folding the
comparison to always report that the two values are equal.  Such a
refined variant will look like it is `readonly`.  However, the
unoptimized version of the function can still write to memory (since
the two loads //can// result in different values), and selecting the
unoptimized version at link time will retroactively invalidate
transforms we may have done under the assumption that the function
does not write to memory.

Note: this is not just a problem with atomics or with linking
differently optimized object files.  See PR26774 for more realistic
examples that involved neither.

This patch:

This change introduces a new classification of linkage types, captured by
the predicate `GlobalValue::mayBeDerefined`, which returns true if the
linkage type allows a function to be replaced by a differently optimized variant at
link time.  It then changes a set of IPO passes to bail out if they see
such a function.
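
A minimal sketch of the kind of definition involved (hypothetical function):

```
; this visible body happens to look readnone, but linkonce_odr allows the
; linker to substitute a differently optimized body from another translation
; unit, so IPO passes must not refine callers based on this definition alone
define linkonce_odr i32 @may_be_derefined(i32 %x) {
  ret i32 %x
}
```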

Reviewers: chandlerc, hfinkel, dexonsmith, joker.eph, rnk

Subscribers: mcrosier, llvm-commits

Differential Revision: http://reviews.llvm.org/D18634

llvm-svn: 265762
2016-04-08 00:48:30 +00:00
David Majnemer
0cf2c938ab [InstCombine] Don't sink an instr after a catchswitch
A catchswitch is a terminator; instructions cannot be inserted after it.

llvm-svn: 265158
2016-04-01 17:28:17 +00:00
Matt Arsenault
2a2b3fa835 AMDGPU: Add frexp_exp intrinsic
llvm-svn: 264944
2016-03-30 22:28:52 +00:00
Matt Arsenault
c182eae9e4 AMDGPU: Constant folding for frexp_mant
llvm-svn: 264943
2016-03-30 22:28:26 +00:00
Junmo Park
f9c43d451f Minor code cleanup. NFC.
llvm-svn: 264124
2016-03-23 01:38:35 +00:00
David Majnemer
2669789037 [InstCombine] Don't insert instructions before a catch switch
CatchSwitches are not splittable; we cannot insert casts, etc. before
them.

This fixes PR26992.

llvm-svn: 263874
2016-03-19 04:39:52 +00:00
Guozhi Wei
9cfdcc479b [InstCombine] Combine A->B->A BitCast
This patch enhances InstCombine to handle the following case:

        A  ->  B    bitcast
        PHI
        B  ->  A    bitcast
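
A minimal sketch of the pattern (hypothetical function, using vector types for the two forms A and B):

```
define <4 x i32> @bitcast_phi(i1 %c, <4 x i32> %a, <4 x i32> %b) {
entry:
  %a.cast = bitcast <4 x i32> %a to <2 x i64>   ; A -> B
  br i1 %c, label %then, label %merge
then:
  %b.cast = bitcast <4 x i32> %b to <2 x i64>   ; A -> B
  br label %merge
merge:
  %p = phi <2 x i64> [ %a.cast, %entry ], [ %b.cast, %then ]
  %r = bitcast <2 x i64> %p to <4 x i32>        ; B -> A
  ret <4 x i32> %r
}
```

The PHI can be rewritten to operate on <4 x i32> directly, removing the round-trip bitcasts.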

llvm-svn: 263734
2016-03-17 18:47:20 +00:00
Bjorn Steinbrink
a31b1fe444 Also handle the new Rust personality function in isCatchAll()
llvm-svn: 263585
2016-03-15 20:57:07 +00:00
Justin Lebar
7e369d0dca [attrs] Handle convergent CallSites.
Summary:
Previously we had a notion of convergent functions but not of convergent
calls.  This is insufficient to correctly analyze calls where the target
is unknown, e.g. indirect calls.

Now a call is convergent if it targets a known-convergent function, or
if it's explicitly marked as convergent.  As usual, we can remove
convergent where we can prove that no convergent operations are
performed in the call.
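
A minimal sketch of the distinction (hypothetical functions; the attribute is spelled via an attribute group):

```
declare void @known_convergent() #0

define void @wrapper(void ()* %fp) #0 {
  ; convergent because the callee is known to be convergent
  call void @known_convergent()
  ; an indirect call with an unknown target must carry the attribute itself
  call void %fp() #0
  ret void
}

attributes #0 = { convergent }
```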

Originally landed as r261544, then reverted in r261544 for (incidental)
build breakage.  Re-landed here with no changes.

Reviewers: chandlerc, jingyue

Subscribers: llvm-commits, tra, jhen, hfinkel

Differential Revision: http://reviews.llvm.org/D17739

llvm-svn: 263481
2016-03-14 20:18:54 +00:00
Mehdi Amini
12d624a26a Remove PreserveNames template parameter from IRBuilder
This reapplies r263258, which was reverted in r263321 because
of issues on Clang side.

From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 263393
2016-03-13 21:05:13 +00:00
Sanjay Patel
60bcb48851 [x86, InstCombine] delete x86 SSE2 masked store with zero mask
This follows up on the related AVX instruction transforms, but this
one is too strange to do anything more with. Intel's behavioral
description of this instruction in its Software Developer's Manual
is tragi-comic.

llvm-svn: 263340
2016-03-12 15:16:59 +00:00
Eric Christopher
559b0be562 Temporarily revert:
commit ae14bf6488e8441f0f6d74f00455555f6f3943ac
Author: Mehdi Amini <mehdi.amini@apple.com>
Date:   Fri Mar 11 17:15:50 2016 +0000

    Remove PreserveNames template parameter from IRBuilder

    Summary:
    Following r263086, we are now relying on a flag on the Context to
    discard Value names in release builds.

    Reviewers: chandlerc

    Subscribers: mzolotukhin, llvm-commits

    Differential Revision: http://reviews.llvm.org/D18023

    From: Mehdi Amini <mehdi.amini@apple.com>

    git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@263258
    91177308-0d34-0410-b5e6-96231b3b80d8

until we can figure out what to do about clang and Release build testing.

This reverts commit 263258.

llvm-svn: 263321
2016-03-12 01:47:22 +00:00
Mehdi Amini
1f82b794e4 Remove PreserveNames template parameter from IRBuilder
Summary:
Following r263086, we are now relying on a flag on the Context to
discard Value names in release builds.

Reviewers: chandlerc

Subscribers: mzolotukhin, llvm-commits

Differential Revision: http://reviews.llvm.org/D18023

From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 263258
2016-03-11 17:15:50 +00:00
Chandler Carruth
6150530377 [PM] Make the AnalysisManager parameter to run methods a reference.
This was originally a pointer to support pass managers which didn't use
AnalysisManagers. However, that doesn't realistically come up much and
the complexity of supporting it doesn't really make sense.

In fact, *many* parts of the pass manager were just assuming the pointer
was never null already. This at least makes it much more explicit and
clear.

llvm-svn: 263219
2016-03-11 11:05:24 +00:00
Benjamin Kramer
af02270708 [InstCombine] Use Twines to generate names.
Since the names are used in a loop this does more work in debug builds. In
release builds value names are generally discarded so we don't have to do
the concatenation at all. It's also simpler code, no functional change
intended.

llvm-svn: 263215
2016-03-11 10:20:56 +00:00
Philip Reames
64fa288117 [ValueTracking] Extract isKnownPositive [NFCI]
Extract out a generic interface from a recently landed patch and document a TODO in case compile time becomes a problem.

llvm-svn: 263062
2016-03-09 21:31:47 +00:00
Philip Reames
a1cf015229 [InstCombine] (icmp sgt smin(PosA, B) 0) -> (icmp sgt B 0)
When checking whether an smin is positive, we can move the comparison to one of the inputs if the other is known positive. If the known-positive input is the min, then the other input cannot be negative. If the other input is the min, then the min is that input, so the comparison is unchanged.
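
A rough sketch with a select-based smin (hypothetical function; %pos is forced into [1, 255] so it is known positive):

```
define i1 @smin_cmp(i32 %a, i32 %b) {
  %lo = and i32 %a, 254
  %pos = or i32 %lo, 1
  ; smin(%pos, %b) in select form
  %cc = icmp slt i32 %pos, %b
  %min = select i1 %cc, i32 %pos, i32 %b
  ; can be simplified to: icmp sgt i32 %b, 0
  %res = icmp sgt i32 %min, 0
  ret i1 %res
}
```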

Differential Revision: http://reviews.llvm.org/D17873

llvm-svn: 263059
2016-03-09 21:05:07 +00:00
Matthias Braun
3757169d38 InstCombine: Restrict computeKnownBits() on all Values to OptLevel > 2
As part of r251146 InstCombine was extended to call computeKnownBits on
every value in the function to determine whether it happens to be
constant. This increases typical compile time by 1-3% (5% in irgen+opt
time) in my measurements. On the other hand this case did not trigger
once in the whole llvm-testsuite.

This patch introduces the notion of ExpensiveCombines which are only
enabled for OptLevel > 2. I removed the check in InstructionSimplify as
that is called from various places where the OptLevel is not known, but
given the rarity of the situation I think a check in InstCombine is
enough.

Differential Revision: http://reviews.llvm.org/D16835

llvm-svn: 263047
2016-03-09 18:47:11 +00:00
Petar Jovanovic
d649b976d0 Reland r262337 "calculate builtin_object_size if arg is a removable pointer"
Original commit message:
 calculate builtin_object_size if argument is a removable pointer

 This patch fixes calculating the correct value for the builtin_object_size
 function when the pointer is used only in the builtin_object_size call and
 never after that.

 Patch by Strahinja Petrovic.

 Differential Revision: http://reviews.llvm.org/D17337

Reland the original change with a small modification (first do a null check
and then do the cast) to satisfy ubsan.
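
A minimal sketch of the case this covers (hypothetical function; assumes the usual malloc/free declarations):

```
declare noalias i8* @malloc(i64)
declare void @free(i8*)
declare i64 @llvm.objectsize.i64.p0i8(i8*, i1)

define i64 @removable_alloc_size() {
  %p = call i8* @malloc(i64 32)
  ; %p is only used by the objectsize call and the paired free, so the
  ; call can be evaluated to 32 and the allocation removed
  %size = call i64 @llvm.objectsize.i64.p0i8(i8* %p, i1 false)
  call void @free(i8* %p)
  ret i64 %size
}
```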

llvm-svn: 263011
2016-03-09 14:12:47 +00:00