mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-22 12:33:33 +02:00
Commit Graph

14024 Commits

Diego Novillo
9bbd13f9a0 SamplePGO - Reduce memory utilization by 10x.
DenseMap is the wrong data structure to use for sample records and call
sites.  The keys are too large, causing massive core memory growth when
reading profiles.

Before this patch, a 21Mb input profile was causing the compiler to grow
to 3Gb in memory.  By switching to std::map, the compiler now grows to
300Mb in memory.

There still are some opportunities for memory footprint reduction. I'll
be looking at those next.
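
For illustration, a minimal sketch of the trade-off (LineLocation here is a
simplified stand-in, not the actual SampleProfile key types): DenseMap keeps
keys inline in a power-of-two bucket array, so every empty bucket pays for a
full key, while std::map allocates one node per inserted element.

  #include <cstdint>
  #include <map>
  #include <tuple>

  // Simplified stand-in for a sample-profile key; the real keys are larger.
  struct LineLocation {
    uint32_t LineOffset;
    uint32_t Discriminator;
    bool operator<(const LineLocation &O) const {
      return std::tie(LineOffset, Discriminator) <
             std::tie(O.LineOffset, O.Discriminator);
    }
  };

  // A node-based map only pays for elements actually inserted, which keeps
  // the reader's footprint proportional to the profile size.
  std::map<LineLocation, uint64_t> BodySamples;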

llvm-svn: 255389
2015-12-11 23:21:38 +00:00
Chad Rosier
2dd4cf7c08 Revert r255247, r255265, and r255286 due to serious compile-time regressions.
Revert "[DSE] Disable non-local DSE to see if the bots go green."
Revert "[DeadStoreElimination] Use range-based loops. NFC."
Revert "[DeadStoreElimination] Add support for non-local DSE."

llvm-svn: 255354
2015-12-11 18:39:41 +00:00
Hal Finkel
4c089491a5 AlignmentFromAssumptions and SLPVectorizer preserve AA and GlobalsAA
GlobalsAA assumes that passes do not escape globals that were not previously
escaped; AlignmentFromAssumptions and SLPVectorizer do not violate this
assumption. Marking them as such allows GlobalsAA to be preserved until GVN in
the LTO pipeline.

http://lists.llvm.org/pipermail/llvm-dev/2015-December/092972.html

Patch by Vaivaswatha Nagaraj!

llvm-svn: 255348
2015-12-11 17:46:01 +00:00
Artur Pilipenko
8b6635b2f6 PruneEH pass incorrectly reports that a change was made
Reviewed By: reames

Differential Revision: http://reviews.llvm.org/D14097

llvm-svn: 255343
2015-12-11 16:30:26 +00:00
James Molloy
f5a3f4d77c [Mem2Reg] Respect optnone
Mem2Reg shouldn't be optimizing a function that is marked
optnone. There is a test checking this that fails when mem2reg is
explicitly added to the standard pass pipeline.

llvm-svn: 255336
2015-12-11 13:36:59 +00:00
James Molloy
d8003c7bf6 [InstCombine] Make MatchBSwap also match bit reversals
MatchBSwap already has most of the functionality needed to match bit reversals. If we switch it from looking at bytes to individual bits and remove a few early exits, we can extend the main recursive function to match any sequence of ORs, ANDs and shifts that assemble a value from different parts of another (base) value. Once we have this bit->bit mapping, we can very simply detect whether it is appropriate for a bswap or a bitreverse.
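
For instance, a source-level pattern of the following shape (an illustrative
sketch, not a test from the patch) assembles each output bit from a different
bit of the input, which is exactly the bit->bit mapping the extended matcher
can now recognize as a bit reversal:

  #include <cstdint>

  // Every output bit is built from one input bit via AND, shift and OR;
  // the mapping output-bit(i) <- input-bit(7 - i) is a bit reversal.
  uint8_t reverse8(uint8_t x) {
    return (uint8_t)(((x & 0x01) << 7) | ((x & 0x02) << 5) |
                     ((x & 0x04) << 3) | ((x & 0x08) << 1) |
                     ((x & 0x10) >> 1) | ((x & 0x20) >> 3) |
                     ((x & 0x40) >> 5) | ((x & 0x80) >> 7));
  }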

llvm-svn: 255334
2015-12-11 10:04:51 +00:00
Chad Rosier
f118529c0e [DSE] Disable non-local DSE to see if the bots go green.
I see a few bots timing out, so I'm speculatively disabling r255247.

llvm-svn: 255286
2015-12-10 19:23:02 +00:00
Chad Rosier
b8c929dbff [DeadStoreElimination] Use range-based loops. NFC.
llvm-svn: 255265
2015-12-10 17:27:18 +00:00
Sanjay Patel
25a4b4195f [InstCombine] fold bitcasts around an extractelement (3rd try)
This is a redo of r255137 (reverted at r255227) which was a redo of 
r255124 (reverted at r255126) with a fixed check for a scalar source 
type and an added test for the failure that caused the revert.

Original commit message:

Example:
  bitcast (extractelement (bitcast <2 x float> %X to <2 x i32>), 1) to float
    --->
  extractelement <2 x float> %X, i32 1

This is part of fixing PR25543:
https://llvm.org/bugs/show_bug.cgi?id=25543

The next step will be to generalize this fold:
trunc ( lshr ( bitcast X) ) -> extractelement (X)

I.e., I'm hoping to replace the existing transform of:
bitcast ( trunc ( lshr ( bitcast X)))
added by:
http://reviews.llvm.org/rL112232

with 2 less-specific transforms to catch the case in the bug report.

Differential Revision: http://reviews.llvm.org/D14879

llvm-svn: 255261
2015-12-10 17:09:28 +00:00
Teresa Johnson
087bc3b677 [ThinLTO] Debug message cleanup (NFC)
Added some missing spaces between the module identifier and the start of
the debug message. Also added a ":" after the module identifier to make
this look a little nicer.

llvm-svn: 255259
2015-12-10 16:39:07 +00:00
Chad Rosier
4a619fca22 [DeadStoreElimination] Add support for non-local DSE.
We extend the search for redundant stores to predecessor blocks that
unconditionally lead to the block BB with the current store instruction.  That
also includes single-block loops that unconditionally lead to BB, and
if-then-else blocks where then- and else-blocks unconditionally lead to BB.
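
A sketch of the shape of code this targets (illustrative C++, not a test from
the patch), assuming nothing between the stores reads *p:

  void f(int *p, bool c) {
    if (c)
      *p = 1;   // store in the then-block: now removable
    else
      *p = 2;   // store in the else-block: now removable
    *p = 3;     // the store in BB that overwrites both predecessors' stores
  }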

http://reviews.llvm.org/D13363
Patch by Ivan Baev <ibaev@codeaurora.org>!

llvm-svn: 255247
2015-12-10 13:51:43 +00:00
Silviu Baranga
c26660fa6c [LLE] Use the PredicatedScalarEvolution interface to query SCEVs for dependences
Summary:
LAA uses the PredicatedScalarEvolution interface, so it can produce
forward/backward dependences having SCEVs that are AddRecExprs only after being
transformed by PredicatedScalarEvolution.

Use PredicatedScalarEvolution to get the expected expressions.

Reviewers: anemet

Subscribers: llvm-commits, sanjoy

Differential Revision: http://reviews.llvm.org/D15382

llvm-svn: 255241
2015-12-10 11:07:18 +00:00
Akira Hatanaka
a1488717da Revert r255137.
This commit broke Apple's internal bot.

llvm-svn: 255227
2015-12-10 08:00:52 +00:00
Sanjoy Das
d85ded90d0 Add arg_begin() and arg_end() to CallInst and InvokeInst; NFCI
- This simplifies the CallSite class; arg_begin / arg_end are now
  simple wrapper getters.

- In several places, we were creating CallSite instances solely to call
  arg_begin and arg_end.  With this change, that's no longer required.
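
A small usage sketch against the C++ API (countPointerArgs is a made-up
helper): arguments can now be walked straight off the instruction.

  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  // Count pointer-typed call arguments without constructing a CallSite.
  static unsigned countPointerArgs(CallInst *CI) {
    unsigned N = 0;
    for (auto I = CI->arg_begin(), E = CI->arg_end(); I != E; ++I)
      if ((*I)->getType()->isPointerTy())
        ++N;
    return N;
  }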

llvm-svn: 255226
2015-12-10 06:39:02 +00:00
Reid Kleckner
dc26fb037c [Float2Int] Don't operate on vector instructions
This fixes a crash bug. It's also not clear if we'd want to do this
transform for vectors.

llvm-svn: 255155
2015-12-09 21:08:18 +00:00
Rafael Espindola
2e47184a91 Don't assign a temporary string to a StringRef.
Should fix the windows debug and asan bots.
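
The bug pattern looks roughly like this (a hedged sketch; makeName is a
hypothetical helper): binding a StringRef to a temporary std::string leaves
the StringRef dangling as soon as the temporary is destroyed.

  #include "llvm/ADT/StringRef.h"
  #include <string>

  static std::string makeName() { return "temporary"; }  // hypothetical

  static void example() {
    llvm::StringRef Bad = makeName();  // dangles after this statement
    std::string Owned = makeName();    // keep the string alive instead
    llvm::StringRef Good = Owned;      // fine: Owned outlives Good
    (void)Bad; (void)Good;
  }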

llvm-svn: 255149
2015-12-09 20:41:10 +00:00
Sanjoy Das
f3ba629c4d Use WeakVH to keep track of calls with operand bundles in CloneCodeInfo
`CloneAndPruneIntoFromInst` can DCE instructions after cloning them into
the new function, and so an AssertingVH is too strong.  This change
switches CloneCodeInfo to use a std::vector<WeakVH>.
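
Conceptually (a sketch with a hypothetical visit function, not the cloning
code itself): a WeakVH nulls itself out when the tracked instruction is
deleted, where an AssertingVH would have fired an assertion.

  #include "llvm/IR/Instructions.h"
  #include "llvm/IR/ValueHandle.h"
  #include <vector>
  using namespace llvm;

  static void visitSurvivingCalls(std::vector<WeakVH> &OperandBundleCallSites) {
    for (WeakVH &VH : OperandBundleCallSites)
      // Entries for instructions deleted by DCE have become null.
      if (auto *CI = dyn_cast_or_null<CallInst>(static_cast<Value *>(VH)))
        (void)CI;  // inspect the still-live call site here
  }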

llvm-svn: 255148
2015-12-09 20:33:52 +00:00
Sanjoy Das
f1615f5295 Delete trailing whitespace; NFC
llvm-svn: 255147
2015-12-09 20:33:45 +00:00
Teresa Johnson
83a7df21b2 [ThinLTO] FunctionImport pass can take a const index pointer (NFC)
llvm-svn: 255140
2015-12-09 19:39:47 +00:00
Sanjay Patel
de6f59d487 [InstCombine] fold bitcasts around an extractelement (2nd try)
This is a redo of r255124 (reverted at r255126) with an added check for a
scalar destination type and an added test for the failure seen in Clang's
test/CodeGen/vector.c. The extra test shows a different missing optimization.

Original commit message:

Example:
  bitcast (extractelement (bitcast <2 x float> %X to <2 x i32>), 1) to float
    --->
  extractelement <2 x float> %X, i32 1

This is part of fixing PR25543:
https://llvm.org/bugs/show_bug.cgi?id=25543

The next step will be to generalize this fold:
trunc ( lshr ( bitcast X) ) -> extractelement (X)

I.e., I'm hoping to replace the existing transform of:
bitcast ( trunc ( lshr ( bitcast X)))
added by:
http://reviews.llvm.org/rL112232

with 2 less-specific transforms to catch the case in the bug report.

Differential Revision: http://reviews.llvm.org/D14879

llvm-svn: 255137
2015-12-09 18:57:16 +00:00
Michael Zolotukhin
b39f3c2210 Revert "Revert r253253 and r253126: "Don't recompute LCSSA after loop-unrolling when possible.""
The bug in IndVarSimplify was fixed in r254976, r254977, so I'm
reapplying the original patch for avoiding redundant LCSSA recomputation.

This reverts commit ffe3b434e505e403146aff00be0c177bb6d13466.

llvm-svn: 255133
2015-12-09 18:20:28 +00:00
Rong Xu
2f995f2098 [PGO] Resubmit "MST based PGO instrumentation infrastructure" (r254021)
This new patch fixes a few bugs that were exposed by the last submit. It also improves
the test cases.
--Original Commit Message--
This patch implements a minimum spanning tree (MST) based instrumentation for
PGO. The use of an MST guarantees that the minimum number of CFG edges gets
instrumented. An additional optimization is to instrument the less-executed
edges to further reduce the instrumentation overhead. The patch contains both
the instrumentation and the use of the profile to set the branch weights.
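
A generic illustration of the selection step (a standalone sketch with made-up
names, not the pass's code): the heaviest edges are placed on a maximum-weight
spanning tree, and only the remaining edges need counters.

  #include <algorithm>
  #include <cstddef>
  #include <cstdint>
  #include <numeric>
  #include <vector>

  struct CFGEdge { unsigned Src, Dst; uint64_t Weight; };

  // Returns the indices of edges *not* on a maximum-weight spanning tree;
  // those are the edges that would receive instrumentation.
  std::vector<size_t> edgesToInstrument(unsigned NumBlocks,
                                        const std::vector<CFGEdge> &Edges) {
    std::vector<size_t> Order(Edges.size());
    std::iota(Order.begin(), Order.end(), size_t(0));
    std::sort(Order.begin(), Order.end(), [&](size_t A, size_t B) {
      return Edges[A].Weight > Edges[B].Weight;  // heaviest edges first
    });
    std::vector<unsigned> Parent(NumBlocks);
    std::iota(Parent.begin(), Parent.end(), 0u);
    auto Find = [&](unsigned X) {
      while (Parent[X] != X)
        X = Parent[X] = Parent[Parent[X]];  // path halving
      return X;
    };
    std::vector<size_t> Instrumented;
    for (size_t I : Order) {
      unsigned A = Find(Edges[I].Src), B = Find(Edges[I].Dst);
      if (A == B)
        Instrumented.push_back(I);  // would close a cycle: instrument it
      else
        Parent[A] = B;              // joins the spanning tree: no counter
    }
    return Instrumented;
  }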

Differential Revision: http://reviews.llvm.org/D12781

llvm-svn: 255132
2015-12-09 18:08:16 +00:00
Mehdi Amini
de04fa6b68 Revert "[InstCombine] fold bitcasts around an extractelement"
This reverts commit r255124.

Broke http://lab.llvm.org:8011/builders/llvm-clang-lld-x86_64-scei-ps4-ubuntu-fast/builds/4193/steps/test/logs/stdio

From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 255126
2015-12-09 16:31:39 +00:00
Sanjay Patel
8a5018320c [InstCombine] fold bitcasts around an extractelement
Example:
  bitcast (extractelement (bitcast <2 x float> %X to <2 x i32>), 1) to float
    --->
  extractelement <2 x float> %X, i32 1

This is part of fixing PR25543:
https://llvm.org/bugs/show_bug.cgi?id=25543

The next step will be to generalize this fold:
trunc ( lshr ( bitcast X) ) -> extractelement (X)

I.e., I'm hoping to replace the existing transform of:
bitcast ( trunc ( lshr ( bitcast X)))
added by:
http://reviews.llvm.org/rL112232

with 2 less-specific transforms to catch the case in the bug report.

Differential Revision: http://reviews.llvm.org/D14879

llvm-svn: 255124
2015-12-09 16:17:20 +00:00
Silviu Baranga
d19d7b747a Re-commit r255115, with the PredicatedScalarEvolution class moved to
ScalarEvolution.h, in order to avoid cyclic dependencies between the Transform
and Analysis modules:

[LV][LAA] Add a layer over SCEV to apply run-time checked knowledge on SCEV expressions

Summary:
This change creates a layer over ScalarEvolution for LAA and LV, and centralizes the
usage of SCEV predicates. The SCEVPredicatedLayer takes the knowledge statically
deduced by ScalarEvolution and applies the knowledge from the SCEV predicates. The end goal is
that both LAA and LV should use this interface everywhere.

This also solves a problem involving the result of SCEV expression rewriting when
the predicate changes. Suppose we have the expression (sext {a,+,b}) and two predicates
  P1: {a,+,b} has nsw
  P2: b = 1.

Applying P1 and then P2 gives us {a,+,1}, while applying P2 and then P1 gives us
sext({a,+,1}) (the AddRec expression was changed by P2 so P1 no longer applies).
The SCEVPredicatedLayer maintains the order of transformations by feeding back
the results of previous transformations into new transformations, and therefore
avoiding this issue.

The SCEVPredicatedLayer maintains a cache of previous SCEV rewriting results.
This also has the benefit of reducing the overall number
of expression rewrites.

Reviewers: mzolotukhin, anemet

Subscribers: jmolloy, sanjoy, llvm-commits

Differential Revision: http://reviews.llvm.org/D14296

llvm-svn: 255122
2015-12-09 16:06:28 +00:00
Silviu Baranga
ba0669cbca Revert r255115 until we figure out how to fix the bot failures.
llvm-svn: 255117
2015-12-09 15:25:28 +00:00
Silviu Baranga
f6006f41f7 [LV][LAA] Add a layer over SCEV to apply run-time checked knowledge on SCEV expressions
Summary:
This change creates a layer over ScalarEvolution for LAA and LV, and centralizes the
usage of SCEV predicates. The SCEVPredicatedLayer takes the knowledge statically
deduced by ScalarEvolution and applies the knowledge from the SCEV predicates. The end goal is
that both LAA and LV should use this interface everywhere.

This also solves a problem involving the result of SCEV expression rewriting when
the predicate changes. Suppose we have the expression (sext {a,+,b}) and two predicates
  P1: {a,+,b} has nsw
  P2: b = 1.

Applying P1 and then P2 gives us {a,+,1}, while applying P2 and then P1 gives us
sext({a,+,1}) (the AddRec expression was changed by P2 so P1 no longer applies).
The SCEVPredicatedLayer maintains the order of transformations by feeding back
the results of previous transformations into new transformations, and therefore
avoiding this issue.

The SCEVPredicatedLayer maintains a cache of previous SCEV rewriting results.
This also has the benefit of reducing the overall number
of expression rewrites.

Reviewers: mzolotukhin, anemet

Subscribers: jmolloy, sanjoy, llvm-commits

Differential Revision: http://reviews.llvm.org/D14296

llvm-svn: 255115
2015-12-09 15:03:52 +00:00
JF Bastien
8a85d077c5 EarlyCSE: fix typo from rL255054.
llvm-svn: 255102
2015-12-09 09:05:42 +00:00
Mehdi Amini
b282e7bd00 The current importing scheme processes one function at a time:
it loads the source Module, links the function into the destination
module, and destroys the source Module before repeating with the
next function to import (potentially from the same Module).

Ideally we would keep the source Module alive and import the next
Function needed from this Module. Unfortunately this is not possible
because the linker does not leave it in a usable state.

However, we can do better by first computing the list of all candidates
per Module, and only then loading the source Module and importing all
the functions we need from it.

The trick for processing callees is to materialize functions in the
source module while building the list of functions to import, and to
inspect them in their source module, collecting the list of callees for
each callee.

When we move to the actual import, we will import from each source
module exactly once, and each source module is loaded exactly once.
The only drawback is that it requires keeping all the lazy-loaded
source Modules in memory at the same time.
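
In rough C++ (a sketch with hypothetical loadModule/importFunctions helpers,
not the pass's actual code), the batching amounts to grouping candidates by
their defining module first:

  #include "llvm/IR/Module.h"
  #include <map>
  #include <memory>
  #include <string>
  #include <vector>

  struct ImportCandidate { std::string ModulePath, FunctionName; };  // hypothetical

  // Hypothetical stand-ins for the real loading/importing machinery.
  std::unique_ptr<llvm::Module> loadModule(const std::string &Path);
  void importFunctions(llvm::Module &Src, const std::vector<std::string> &Names);

  void batchedImport(const std::vector<ImportCandidate> &Candidates) {
    // Group all wanted functions by the module that defines them ...
    std::map<std::string, std::vector<std::string>> PerModule;
    for (const ImportCandidate &C : Candidates)
      PerModule[C.ModulePath].push_back(C.FunctionName);
    // ... then load and import from each source module exactly once.
    for (const auto &Entry : PerModule) {
      std::unique_ptr<llvm::Module> Src = loadModule(Entry.first);
      importFunctions(*Src, Entry.second);
    }
  }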

This patch already improves the link time considerably;
a multithreaded link of llvm-dis on my laptop was:

  real  1m12.175s  user  6m32.430s sys  0m10.529s

and is now:

  real  0m40.697s  user  2m10.237s sys  0m4.375s

Note: this is the full link time (linker+Import+Optimizer+CodeGen)

Differential Revision: http://reviews.llvm.org/D15178

From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 255100
2015-12-09 08:17:35 +00:00
Vikram TV
75774f3b62 Test commit access - Fix a few missing '.' in comments of LoopInterchange code.
llvm-svn: 255095
2015-12-09 05:16:24 +00:00
Rafael Espindola
f20bc23b7c Return a std::unique_ptr from CloneModule. NFC.
llvm-svn: 255078
2015-12-08 23:57:17 +00:00
Sanjoy Das
e384f13917 [IndVars] Use any_of and foreach instead of explicit for loops; NFC
llvm-svn: 255077
2015-12-08 23:52:58 +00:00
Sanjoy Das
cb770fbcb6 [OperandBundles] Have PruneEH work correct with operand bundles.
For an invoke with operand bundles, the [op_begin(), op_end()-3] range
can contain things other than invoke arguments.  This change teaches
PruneEH to use arg_begin() and arg_end() explicitly.

llvm-svn: 255073
2015-12-08 23:16:52 +00:00
Mehdi Amini
5d4cc87b91 Fix/Improve Debug print in FunctionImport pass
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 255071
2015-12-08 23:04:19 +00:00
Mehdi Amini
ba2c064383 Remove caching in FunctionImport: a Module can't be reused after being linked from
The Linker destroys the source module (an API change is coming to make this explicit).

From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 255064
2015-12-08 22:39:40 +00:00
Sanjoy Das
5a8ebaa29b [OperandBundles] Fix a transform in simplifycfg
Reviewers: pcc, majnemer, reames

Subscribers: reames, llvm-commits

Differential Revision: http://reviews.llvm.org/D15345

llvm-svn: 255062
2015-12-08 22:26:08 +00:00
Philip Reames
041ac7b389 [EarlyCSE] Value forwarding for unordered atomics
This patch teaches the fully redundant load part of EarlyCSE how to forward from atomic and volatile loads and stores, and how to eliminate unordered atomics (only). This patch does not include dead store elimination support for unordered atomics, that will follow in the near future.

The basic idea is that we allow all loads and stores to be tracked by the AvailableLoad table. We store a bit in the table which tracks whether load/store was atomic, and then only replace atomic loads with ones which were also atomic.

No attempt is made to refine our handling of ordered loads or stores. Those are still treated as full fences. We could pretty easily extend the release fence handling to release stores, but that should be a separate patch.
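
A rough source-level analogue of the forwarding (hedged: LLVM's "unordered"
ordering is weaker than C++ "relaxed", so this is only an illustration of
same-location forwarding, not a statement about the C++ mapping):

  #include <atomic>

  std::atomic<int> Flag{0};

  int storeThenLoad() {
    Flag.store(42, std::memory_order_relaxed);
    // Single-thread semantics allow this load to be satisfied directly from
    // the store above; at the IR level the patch permits the analogous
    // forwarding between unordered atomic accesses of the same location.
    return Flag.load(std::memory_order_relaxed);
  }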

Differential Revision: http://reviews.llvm.org/D15337

llvm-svn: 255054
2015-12-08 21:45:41 +00:00
Sanjoy Das
90bb44dfe3 [OperandBundles] Remove unnecessary constructor
The StringRef constructor is unnecessary (since we're converting to
std::string anyway), and having it requires an explicit call to
StringRef's or std::string's constructor.

llvm-svn: 255000
2015-12-08 03:50:32 +00:00
Sanjoy Das
2f7aca1668 [IndVars] Have getInsertPointForUses preserve LCSSA
Summary:
Also add a stricter post-condition for IndVarSimplify.

Fixes PR25578.  Test case by Michael Zolotukhin.

Reviewers: hfinkel, atrick, mzolotukhin

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D15059

llvm-svn: 254977
2015-12-08 00:13:21 +00:00
Philip Reames
b342730bfb Reapply 254950 w/fix
254950 ended up not being NFC.  The previous code was overriding the flags for whether an instruction read or wrote memory using the target-specific flags returned via TTI.  I'd missed this in my refactoring.  Since I mistakenly built only x86 and didn't notice the number of unsupported tests, I didn't catch that before the original checkin.

This raises an interesting issue though.  Given we have function attributes (i.e. readonly, readnone, argmemonly) which describe the aliasing of intrinsics, why does TTI have this information overriding the instruction definition at all?  I see no reason for this, but decided to preserve existing behavior for the moment.  The root issue might be that we don't have a "writeonly" attribute.

Original commit message:
[EarlyCSE] Simplify and invert ParseMemoryInst [NFCI]

Restructure ParseMemoryInst - which was introduced to abstract over target-specific load and store instructions - to just query the underlying instructions. In theory, this could be slightly slower than caching the results, but in practice, it's very unlikely to be measurable.

The simple query scheme makes it far easier to understand, and much easier to extend with new queries. Given I'm about to need to add new query types, doing the cleanup first seemed worthwhile.

Do we still believe the target specific intrinsic handling is worthwhile in EarlyCSE? It adds quite a bit of complexity and makes the code harder to read. Being able to delete the abstraction entirely would be wonderful.

llvm-svn: 254957
2015-12-07 22:41:23 +00:00
Philip Reames
be05ec0211 Revert 254950
It's causing test failures on AArch64.  Due to a bad build config on my part, I apparently wasn't running the tests I thought I was.

llvm-svn: 254954
2015-12-07 21:41:29 +00:00
Philip Reames
db73a336d2 [EarlyCSE] Simplify and invert ParseMemoryInst [NFCI]
Restructure ParseMemoryInst - which was introduced to abstract over target-specific load and store instructions - to just query the underlying instructions. In theory, this could be slightly slower than caching the results, but in practice, it's very unlikely to be measurable.

The simple query scheme makes it far easier to understand, and much easier to extend with new queries. Given I'm about to need to add new query types, doing the cleanup first seemed worthwhile.

Do we still believe the target specific intrinsic handling is worthwhile in EarlyCSE? It adds quite a bit of complexity and makes the code harder to read. Being able to delete the abstraction entirely would be wonderful.

llvm-svn: 254950
2015-12-07 21:27:15 +00:00
Teresa Johnson
1fb89d62fb [ThinLTO] Support for specifying function index from pass manager
Summary:
Add a field on the PassManagerBuilder that clang or gold can use to pass
down a pointer to the function index in memory to use for importing when
the ThinLTO backend is triggered. Add support to supply this to the
function import pass.

Reviewers: joker.eph, dexonsmith

Subscribers: davidxl, llvm-commits, joker.eph

Differential Revision: http://reviews.llvm.org/D15024

llvm-svn: 254926
2015-12-07 19:21:11 +00:00
Rafael Espindola
d6d8f278f8 Create llvm.global_ctors in the new format.
llvm-svn: 254878
2015-12-06 16:18:25 +00:00
Sanjoy Das
16ad4f2471 [InstCombine] Call getCmpPredicateForMinMax only with a valid SPF
Summary:
There are `SelectPatternFlavor`s that don't represent min or max idioms,
and we should not be passing those to `getCmpPredicateForMinMax`.

Fixes PR25745.

Reviewers: majnemer

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D15249

llvm-svn: 254869
2015-12-05 23:44:22 +00:00
Keno Fischer
536fec1abb [ASAN] Add doFinalization to reset state
Summary: If the same pass manager is used for multiple modules, ASAN
complains about GlobalsMD being initialized twice. Fix this by
resetting GlobalsMD in a new doFinalization method to allow this
use case.
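
The shape of the fix, sketched with hypothetical pass and member names (this
is not the actual AddressSanitizer code):

  #include "llvm/IR/Module.h"
  #include "llvm/Pass.h"
  using namespace llvm;

  namespace {
  struct ExampleInstrumentationPass : public FunctionPass {  // hypothetical
    static char ID;
    ExampleInstrumentationPass() : FunctionPass(ID) {}
    bool PerModuleStateInitialized = false;  // stands in for GlobalsMD state

    bool doInitialization(Module &M) override {
      PerModuleStateInitialized = true;  // would trip an assert if still set
      return false;
    }
    bool runOnFunction(Function &F) override { return false; }
    // Reset state so the same pass object can be reused for another module.
    bool doFinalization(Module &M) override {
      PerModuleStateInitialized = false;
      return false;
    }
  };
  char ExampleInstrumentationPass::ID = 0;
  }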

Reviewers: kcc

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D14962

llvm-svn: 254851
2015-12-05 14:42:34 +00:00
Cong Hou
30b1a3deac Fix a typo in LoopVectorize.cpp. NFC.
llvm-svn: 254813
2015-12-05 01:00:22 +00:00
Philip Reames
730b7eb8e3 [EarlyCSE] IsSimple vs IsVolatile naming clarification (NFC)
When the notion of target-specific memory intrinsics was introduced to EarlyCSE, the commit confused the notions of volatile and simple memory access.  Since I'm about to start working on this area, clean up the naming so that patches aren't horribly confusing.  Note that the actual implementation was always bailing out if the load or store wasn't simple.

Reminder:
- "volatile" - C++ volatile, can't remove any memory operations, but in principal unordered
- "ordered" - imposes ordering constraints on other nearby memory operations
- "atomic" - can't be split or sheared.  In LLVM terms, all "ordered" operations are also atomic so the predicate "isAtomic" is often used.
- "simple" - a load which is none of the above.  These are normal loads and what most of the optimizer works with.

llvm-svn: 254805
2015-12-05 00:18:33 +00:00
Weiming Zhao
84bd343622 [SimplifyLibCalls] Optimization for pow(x, n) where n is some constant
Summary:
In order to avoid calling the pow function, we generate repeated fmuls when n
is a positive or negative whole number.

For each exponent we pre-compute addition chains in order to minimize the
number of fmuls.
Refer: http://wwwhomes.uni-bielefeld.de/achim/addition_chain.html

We pre-compute addition chains for exponents up to 32 (which results in a max
of 7 fmuls).

For example:
  4 = 2+2
  5 = 2+3
  6 = 3+3 and so on

Hence,
  pow(x, 4.0) ==> y = fmul x, x
                  x = fmul y, y
                  ret x

For negative exponents, we simply compute the reciprocal of the final result.

Note: This transformation is only enabled under fast-math.
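
At the source level the expansion for, say, n = 5 looks like this
(illustrative only; the addition chain 5 = 2 + 3 gives three fmuls):

  double pow5(double x) {
    double x2 = x * x;   // 2
    double x3 = x2 * x;  // 3
    return x2 * x3;      // 2 + 3 = 5
  }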
    
Patch by Mandeep Singh Grang <mgrang@codeaurora.org>

Reviewers: weimingz, majnemer, escha, davide, scanon, joerg

Subscribers: probinson, escha, llvm-commits

Differential Revision: http://reviews.llvm.org/D13994

llvm-svn: 254776
2015-12-04 22:00:47 +00:00
Yury Gribov
69044c0c7c [asan] Fix dynamic allocas unpoisoning on PowerPC64.
For PowerPC64 we cannot just pass SP extracted from @llvm.stackrestore to
_asan_allocas_unpoison due to specific ABI requirements
(http://refspecs.linuxfoundation.org/ELF/ppc64/PPC-elf64abi.html#DYNAM-STACK).
This patch adds the value returned by @llvm.get.dynamic.area.offset to the
stack pointer extracted from @llvm.stackrestore, so that dynamic alloca
unpoisoning works correctly on PowerPC64.
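
A heavily hedged sketch of the computation (not the pass's actual code, and
the exact intrinsic/helper spellings are assumptions): the helper below only
builds the corrected pointer value and leaves the runtime call out.

  #include "llvm/IR/IRBuilder.h"
  #include "llvm/IR/Intrinsics.h"
  #include "llvm/IR/Module.h"
  using namespace llvm;

  // RestoredSP is the pointer operand of @llvm.stackrestore; the result is
  // the bottom address the unpoisoning runtime call should receive.
  static Value *computeUnpoisonBottom(IRBuilder<> &IRB, Module &M,
                                      Value *RestoredSP, Type *IntptrTy) {
    Value *DynOffset = IRB.CreateCall(Intrinsic::getDeclaration(
        &M, Intrinsic::get_dynamic_area_offset, {IntptrTy}));
    return IRB.CreateAdd(IRB.CreatePtrToInt(RestoredSP, IntptrTy), DynOffset);
  }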

Patch by Max Ostapenko.

Differential Revision: http://reviews.llvm.org/D15108

llvm-svn: 254707
2015-12-04 09:19:14 +00:00