mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-25 05:52:53 +02:00
Commit Graph

3419 Commits

Author SHA1 Message Date
Peter Collingbourne
8facf0faed AssumptionCache: Update documentation comment.
The comment was somewhat misleading in that it implied that passes were not
responsible for adding new assumptions to the assumption cache. This new
wording now explicitly mentions that they are required to do so.

Differential Revision: https://reviews.llvm.org/D29977

llvm-svn: 295148
2017-02-15 03:50:01 +00:00
Adam Nemet
ee6ac75548 [LazyBFI] Fix typos
llvm-svn: 295073
2017-02-14 17:21:12 +00:00
Adam Nemet
337f461009 Add new pass LazyMachineBlockFrequencyInfo
And use it in MachineOptimizationRemarkEmitter.  A test will follow on top of
Justin's changes to enable MachineORE in AsmPrinter.

The approach is similar to the IR-level pass.  It's a bit simpler because BPI
is immutable at the Machine level so we don't need to make that lazy.

Because of this, a new function mapping is introduced (BPIPassTrait::getBPI).
This function extracts BPI from the pass.  In case of the lazy pass, this is
when the calculation of the BFI occurs.  For Machine-level, this is the
identity function.
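
Roughly, the mapping can be pictured with this simplified sketch; BPIPassTrait::getBPI is the name from the patch, but the shapes of the passes below are assumed for illustration only:

    // Simplified sketch, not the in-tree code: the default trait is the
    // identity mapping (used at the Machine level), while the lazy IR-level
    // pass computes BPI at the moment getBPI() extracts it.
    #include <memory>

    struct BranchProbabilityInfo {};   // stand-in for the real analysis result

    template <typename PassT> struct BPIPassTrait {
      static PassT *getBPI(PassT *P) { return P; }   // identity mapping
    };

    struct LazyBPIPass {                // hypothetical lazy wrapper pass
      std::unique_ptr<BranchProbabilityInfo> BPI;
      BranchProbabilityInfo &calculate() {
        if (!BPI)
          BPI = std::make_unique<BranchProbabilityInfo>();  // expensive work
        return *BPI;
      }
    };

    template <> struct BPIPassTrait<LazyBPIPass> {
      static BranchProbabilityInfo *getBPI(LazyBPIPass *P) {
        return &P->calculate();         // extraction triggers the computation
      }
    };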

Differential Revision: https://reviews.llvm.org/D29836

llvm-svn: 295072
2017-02-14 17:21:09 +00:00
Adam Nemet
881f5d613b [LazyBFI] Split out and templatize LazyBlockFrequencyInfo, NFC
This will be used by the LazyMachineBFI pass.

Differential Revision: https://reviews.llvm.org/D29834

llvm-svn: 295071
2017-02-14 17:21:04 +00:00
Igor Laevsky
cf821eac6c [SCEV] Cache results during GetMinTrailingZeros query
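
The change is essentially memoization of the recursive query; a generic sketch of the pattern (stand-in types, not the SCEV code):

    #include <cstdint>
    #include <unordered_map>

    struct Expr { int Id; };                                             // stand-in for a SCEV node
    static uint32_t computeMinTrailingZeros(const Expr *) { return 0; }  // the expensive recursive walk

    class TrailingZerosCache {
      std::unordered_map<const Expr *, uint32_t> Cache;
    public:
      uint32_t get(const Expr *E) {
        auto It = Cache.find(E);
        if (It != Cache.end())
          return It->second;             // shared sub-expressions answered in O(1)
        return Cache[E] = computeMinTrailingZeros(E);
      }
    };
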
Differential Revision: https://reviews.llvm.org/D29759

llvm-svn: 295060
2017-02-14 15:53:12 +00:00
Sanjay Patel
8e7e7e2058 [ValueTracking] use nonnull argument attribute to eliminate null checks
Enhancing value tracking's analysis of null-ness was suggested in D27855, so here's a first attempt at that.

This is part of solving:
https://llvm.org/bugs/show_bug.cgi?id=28430

Differential Revision: https://reviews.llvm.org/D28204

llvm-svn: 294897
2017-02-12 15:35:34 +00:00
Chandler Carruth
042041bdf3 [PM/LCG] Teach the LazyCallGraph how to replace a function without
disturbing the graph or having to update edges.

This is motivated by porting argument promotion to the new pass manager.
Because of how LLVM IR Function objects work, in order to change their
signature a new object needs to be created. This is efficient and
straightforward in the IR but previously was very hard to implement in
LCG. We could easily replace the function a node in the graph
represents. The challenging part is how to handle updating the edges in
the graph.

LCG previously used an edge to a raw function to represent a node that
had not yet been scanned for calls and references. This was the core
of its laziness. However, that model causes this kind of update to be
very hard:
1) The keys to lookup an edge need to be `Function*`s that would all
   need to be updated when we update the node.
2) There will be some unknown number of edges that haven't transitioned
   from `Function*` edges to `Node*` edges.

All of this complexity isn't necessary. Instead, we can always build
a node around any function, always pointing edges at it and always using
it as the key to lookup an edge. To maintain the laziness, we need to
sink the *edges* of a node into a secondary object and explicitly model
transitioning a node from empty to populated by scanning the function.
This design seems much cleaner in a number of ways, but importantly
there is now exactly *one* place where the `Function*` has to be
updated!

Some other cleanups that fall out of this include having something to
model the *entry* edges more accurately. Rather than hand rolling parts
of the node in the graph itself, we have an explicit `EdgeSequence`
object that gives us exactly the functionality needed. We also have
a consistent place to define the edge iterators and can use them for
both the entry edges and the internal edges of the graph.

The API used to model the separation between a node and its edges is
intentionally very thin as most clients are expected to deal with nodes
that have populated edges. We model this exactly as an optional does
with an additional method to populate the edges when that is
a reasonable thing for a client to do. This is based on API design
suggestions from Richard Smith and David Blaikie, credit goes to them
for helping pick how to model this without it being either too explicit
or too implicit.

The patch is somewhat noisy due to shifting around iterator types and
new syntax for walking the edges of a node, but most of the
functionality change is in the `Edge`, `EdgeSequence`, and `Node` types.
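
In outline, the new shape looks something like the following sketch (the Node and EdgeSequence names come from the patch; everything else is schematic):

    #include <optional>
    #include <vector>

    struct Function;                     // opaque IR function

    class Node;
    struct EdgeSequence {                // edges of a *populated* node; also the
      std::vector<Node *> Edges;         // natural home for the edge iterators
    };

    class Node {
      Function *F;                       // the one place the Function* lives
      std::optional<EdgeSequence> ES;    // empty until the function is scanned

    public:
      explicit Node(Function &Fn) : F(&Fn) {}
      bool isPopulated() const { return ES.has_value(); }

      EdgeSequence &populate() {         // scan for calls/refs on first use
        if (!ES) {
          ES.emplace();
          // ... walk *F and append discovered target Nodes to ES->Edges ...
        }
        return *ES;
      }

      // A signature change creates a new Function; updating the graph is now
      // a single pointer assignment.
      void replaceFunction(Function &NewF) { F = &NewF; }
    };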

Differential Revision: https://reviews.llvm.org/D29577

llvm-svn: 294653
2017-02-09 23:24:13 +00:00
Peter Collingbourne
fae9e6514b De-duplicate some code for creating an AARGetter suitable for the legacy PM.
I'm about to use this in a couple more places.

Differential Revision: https://reviews.llvm.org/D29793

llvm-svn: 294648
2017-02-09 23:11:52 +00:00
Chandler Carruth
cefe125ec4 [IR/Analysis] Defend against getting slightly wrong template arguments
passed into CRTP base classes.

This can sometimes happen and not cause an immediate failure when the
derived class is, itself, a template. You can end up essentially calling
methods on the wrong derived type, but one where many things will
appear to "work".

To fail fast and with a clear error message we can use a static_assert,
but we have to stash that static_assert inside a method body or nested
type that won't need to be completed while building the base class. I've
tried to pick a reasonably small number of places that seemed like
reliable places for this to be instantiated.
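
The guard looks roughly like this (a generic sketch, not the exact LLVM mixin): placing the static_assert inside a member function body defers it until the derived type is complete.

    #include <type_traits>

    template <typename DerivedT> class CRTPBase {   // hypothetical CRTP base
    public:
      void run() {
        // Only instantiated when run() is used, at which point DerivedT is a
        // complete type; fails fast if the wrong template argument was passed.
        static_assert(std::is_base_of<CRTPBase, DerivedT>::value,
                      "Must pass the derived type as the template argument!");
        static_cast<DerivedT *>(this)->runImpl();
      }
    };

    struct GoodPass : CRTPBase<GoodPass> {
      void runImpl() {}
    };

    struct Unrelated {};
    // struct BadPass : CRTPBase<Unrelated> {};   // calling BadPass().run() now
    //                                            // trips the assert immediately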

llvm-svn: 294272
2017-02-07 03:17:30 +00:00
Chandler Carruth
b07ec0321e Revert r293017 and fix the actual underlying issue.
The patch committed in r293017, as discussed on the list, doesn't really
make sense but was causing an actual issue to go away.

The issue turns out to be that in one place the extra template arguments
were dropped from the OuterAnalysisManagerProxy. This in turn caused the
types used in one set of places to access the key to be completely
different from the types used in another set of places for both Loop and
CGSCC cases where there are extra arguments.

I have literally no idea how anything seemed to work with this bug in
place. It blows my mind. But it did except for mingw64 in a DLL build.

I've added a really handy static assert that helps ensure we don't break
this in the future. It immediately diagnoses the issue with a compile
failure and a very clear error message. Much better than staring at
backtraces on a build bot. =]

llvm-svn: 294267
2017-02-07 01:50:48 +00:00
Dehao Chen
4fb3035c34 Fix the samplepgo indirect call promotion bug: we should not promote a direct call.
Summary: Checking CS.getCalledFunction() == nullptr does not necessarily indicate an indirect call. We also need to check that CS.getCalledValue() is not a constant.
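
In sketch form (the CallSite API shown existed at this revision; the helper name is made up):

    #include "llvm/IR/CallSite.h"
    #include "llvm/IR/Constants.h"
    #include "llvm/Support/Casting.h"

    using namespace llvm;

    // A call is only a candidate for indirect-call promotion when there is no
    // statically known callee *and* the called value is not a constant (e.g. a
    // bitcast of a known function, which is effectively a direct call).
    static bool isPromotableIndirectCall(CallSite CS) {
      return CS.getCalledFunction() == nullptr &&
             !isa<Constant>(CS.getCalledValue());
    }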

Reviewers: davidxl

Reviewed By: davidxl

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D29570

llvm-svn: 294260
2017-02-06 23:33:15 +00:00
Chandler Carruth
93759a2957 [PM/LCG] Remove the lazy RefSCC formation from the LazyCallGraph during
iteration.

The lazy formation of RefSCCs isn't really the most important part of
the laziness here -- that has to do with walking the functions
themselves -- and isn't essential to maintain. Originally, there were
incremental update algorithms that relied on updates happening
predominantly near the most recent RefSCC formed, but those have been
replaced with ones that have much tighter general case bounds at this
point. We do still perform asserts that only scale well due to this
incrementality, but those are easy to place behind EXPENSIVE_CHECKS.

Removing this simplifies the entire analysis by having a single up-front
step that builds all of the RefSCCs in a direct Tarjan walk. We can even
easily replace this with other or better algorithms at will and with
much less confusion now that there is no iterator-based incremental
logic involved. This removes a lot of complexity from LCG.

Another advantage of moving in this direction is that it simplifies
testing the system substantially as we no longer have to worry about
observing and mutating the graph half-way through the RefSCC formation.

We still need a somewhat special iterator for RefSCCs because we want
the iterator to remain stable in the face of graph updates. However,
this now merely involves relative indexing to the current RefSCC's
position in the sequence which isn't too hard.
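
One self-contained way to picture that kind of iterator (a sketch of the idea, not the LazyCallGraph code): hold an index into the sequence instead of a pointer, so appending new RefSCCs never invalidates it.

    #include <cstddef>
    #include <vector>

    struct RefSCC {};

    class StableRefSCCIterator {
      std::vector<RefSCC *> *Seq;   // the growing post-order sequence
      std::size_t Idx;              // relative position, stable across push_back

    public:
      StableRefSCCIterator(std::vector<RefSCC *> &S, std::size_t I)
          : Seq(&S), Idx(I) {}

      RefSCC *operator*() const { return (*Seq)[Idx]; }
      StableRefSCCIterator &operator++() { ++Idx; return *this; }
      bool operator==(const StableRefSCCIterator &O) const { return Idx == O.Idx; }
      bool operator!=(const StableRefSCCIterator &O) const { return !(*this == O); }
    };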

Differential Revision: https://reviews.llvm.org/D29381

llvm-svn: 294227
2017-02-06 19:38:06 +00:00
Sanjay Patel
00cf3d4d68 [ValueTracking] emit a remark when we detect a conflicting assumption (PR31809)
This is a follow-up to D29395 where we try to be good citizens and let the user know that
we've probably gone off the rails.

This should allow us to resolve:
https://llvm.org/bugs/show_bug.cgi?id=31809

Differential Revision: https://reviews.llvm.org/D29404

llvm-svn: 294208
2017-02-06 18:26:06 +00:00
Daniil Fukalov
fd35b81460 [SCEV] limit recursion depth and operands number in getAddExpr
For a quite big function with source like

%add = add nsw i32 %mul, %conv
%mul1 = mul nsw i32 %add, %conv
%add2 = add nsw i32 %mul1, %add
%mul3 = mul nsw i32 %add2, %add
; repeat a couple of thousand times
which can be produced by loop unrolling, getAddExpr() tries to recursively construct the SCEV and runs for an almost unbounded amount of time.

Added a recursion depth restriction (with a new parameter to set it).
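
The shape of the fix is the usual depth-limiting pattern; a schematic sketch (the limit and names here are illustrative, the in-tree value comes from the new parameter):

    #include <vector>

    struct SCEVLike {};                          // stand-in for a SCEV node
    static const unsigned MaxAddExprDepth = 64;  // illustrative default

    const SCEVLike *getAddExprSketch(std::vector<const SCEVLike *> Ops,
                                     unsigned Depth = 0) {
      static SCEVLike Unfolded;
      if (Depth > MaxAddExprDepth)
        return &Unfolded;            // give up on further folding, stay linear
      // ... attempt pairwise simplifications, each recursive step calling
      //     getAddExprSketch(SmallerOps, Depth + 1) ...
      (void)Ops;
      return &Unfolded;
    }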

Reviewers: sanjoy

Subscribers: hfinkel, llvm-commits, mzolotukhin

Differential Revision: https://reviews.llvm.org/D28158

llvm-svn: 294181
2017-02-06 12:38:06 +00:00
Michael Kuperstein
f2f8127335 [SLP] Make sortMemAccesses explicitly return an error. NFC.
llvm-svn: 294029
2017-02-03 19:32:50 +00:00
Michael Kuperstein
f76d717429 [SLP] Use SCEV to sort memory accesses.
This generalizes memory access sorting to use differences between SCEVs,
instead of relying on constant offsets. That allows us to properly do
SLP vectorization of non-sequentially ordered loads within loop bodies.
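
A hedged sketch of the idea against the public ScalarEvolution API (the helper itself is illustrative, not the patch's code): the sort key for each pointer is the SCEV of its difference from a base pointer, used only when it folds to a constant.

    #include "llvm/Analysis/ScalarEvolution.h"
    #include "llvm/Analysis/ScalarEvolutionExpressions.h"
    #include "llvm/IR/Value.h"
    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <utility>
    #include <vector>

    using namespace llvm;

    // Returns false (and leaves Ptrs untouched) if any distance from the first
    // pointer does not fold to a constant SCEV.
    static bool sortBySCEVDistance(std::vector<Value *> &Ptrs, ScalarEvolution &SE) {
      if (Ptrs.size() < 2)
        return true;
      const SCEV *Base = SE.getSCEV(Ptrs.front());
      std::vector<std::pair<int64_t, Value *>> Keyed;
      for (Value *P : Ptrs) {
        const auto *C = dyn_cast<SCEVConstant>(SE.getMinusSCEV(SE.getSCEV(P), Base));
        if (!C)
          return false;                              // non-constant distance
        Keyed.emplace_back(C->getAPInt().getSExtValue(), P);
      }
      std::stable_sort(Keyed.begin(), Keyed.end(),
                       [](const auto &A, const auto &B) { return A.first < B.first; });
      for (std::size_t I = 0; I != Ptrs.size(); ++I)
        Ptrs[I] = Keyed[I].second;
      return true;
    }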

Differential Revision: https://reviews.llvm.org/D29425

llvm-svn: 294027
2017-02-03 19:09:45 +00:00
Jun Bum Lim
296a001019 [JumpThread] Enhance finding partial redundant loads by continuing scanning single predecessor
Summary: While scanning predecessors to find an available loaded value, if the predecessor has a single predecessor, we can continue scanning through the single predecessor.
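
In sketch form (BasicBlock::getSinglePredecessor is the real accessor; the scan and the bound are schematic):

    #include "llvm/IR/BasicBlock.h"

    using namespace llvm;

    // Walk upward through chains of blocks that each have exactly one
    // predecessor, extending the region the load-scan may look through.
    // The depth bound here is illustrative, just to keep the walk cheap.
    static BasicBlock *walkSinglePredecessors(BasicBlock *BB, unsigned MaxDepth = 8) {
      while (MaxDepth-- > 0) {
        BasicBlock *Pred = BB->getSinglePredecessor();
        if (!Pred)
          break;                // zero or multiple predecessors: stop here
        // ... scan Pred for an available value of the load being threaded ...
        BB = Pred;
      }
      return BB;
    }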

Reviewers: mcrosier, rengolin, reames, davidxl, haicheng

Reviewed By: rengolin

Subscribers: zzheng, llvm-commits

Differential Revision: https://reviews.llvm.org/D29200

llvm-svn: 293896
2017-02-02 15:12:34 +00:00
Matthew Simpson
1981a68b4b [LV] Move interleaved access helper functions to VectorUtils (NFC)
This patch moves some helper functions related to interleaved access
vectorization out of LoopVectorize.cpp and into VectorUtils.cpp. We would like
to use these functions in a follow-on patch that improves interleaved load and
store lowering in (ARM/AArch64)ISelLowering.cpp. One of the functions was
already duplicated there and has been removed.

Differential Revision: https://reviews.llvm.org/D29398

llvm-svn: 293788
2017-02-01 17:45:46 +00:00
Matthew Simpson
24741d447a Fix VectorUtils include guard name (NFC)
VectorUtils was moved to Analysis from Transforms/Utils, but some comments and
the include guard name still reflect its old location.

llvm-svn: 293684
2017-01-31 20:29:10 +00:00
Matt Arsenault
dd128e79e5 NVPTX: Refactor NVPTXInferAddressSpaces to check TTI
Add a new TTI hook for getting the generic address space value.

llvm-svn: 293563
2017-01-30 23:02:12 +00:00
Xinliang David Li
88f1087cc7 Add support to dump dot graph block layout after MBP
Differential Revision: https://reviews.llvm.org/D29141

llvm-svn: 293408
2017-01-29 01:57:02 +00:00
Mohammad Shahid
a2646e67e7 [SLP] Vectorize loads of consecutive memory accesses, accessed in a non-consecutive (jumbled) way.
The jumbled scalar loads will be sorted while building the tree, and these accesses will be marked to generate a shufflevector with the proper mask after the vectorized load.
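
A self-contained sketch of the mask computation this implies (not the patch's code): after the loads are sorted by offset, the mask records, for each original (jumbled) lane, which lane of the sorted vector load holds its value.

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <numeric>
    #include <vector>

    static std::vector<int> jumbledShuffleMask(const std::vector<int64_t> &Offsets) {
      const std::size_t N = Offsets.size();

      // SortedIdx[k] = original lane whose offset is the k-th smallest.
      std::vector<int> SortedIdx(N);
      std::iota(SortedIdx.begin(), SortedIdx.end(), 0);
      std::sort(SortedIdx.begin(), SortedIdx.end(),
                [&](int A, int B) { return Offsets[A] < Offsets[B]; });

      // Mask[original lane] = lane of the sorted/vectorized load holding it.
      std::vector<int> Mask(N);
      for (std::size_t K = 0; K != N; ++K)
        Mask[SortedIdx[K]] = static_cast<int>(K);
      return Mask;
    }

    // Example: offsets {8, 0, 12, 4} load in sorted order {1, 3, 0, 2}, and the
    // mask {2, 0, 3, 1} restores the original program order after the wide load.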

Reviewers: hfinkel, mssimpso, mkuper

Differential Revision: https://reviews.llvm.org/D26905

Change-Id: I9c0c8e6f91a00076a7ee1465440a3f6ae092f7ad
llvm-svn: 293386
2017-01-28 17:59:44 +00:00
Peter Collingbourne
3703d8c3b2 Analysis: Add appropriate const qualification to functions in TypeMetadataUtils.cpp. NFC.
llvm-svn: 293341
2017-01-27 22:55:30 +00:00
Craig Topper
42fe18a2f3 [TargetTransformInfo] Add override keywords to suppress -Winconsistent-missing-override.
llvm-svn: 293158
2017-01-26 08:04:27 +00:00
Jonas Paulsson
1dc6fdc89f [TargetTransformInfo] Refactor and improve getScalarizationOverhead()
Refactoring to remove duplications of this method.

A new method, getOperandsScalarizationOverhead(), looks at the unique operands
that are present and adds extract costs for them. The old behaviour was to
always add extract costs for one operand of the type, which still happens in
getArithmeticInstrCost() if no operands are provided by the caller.

This is a good start on improving this, but there are more places
that can be improved by using getOperandsScalarizationOverhead().

Review: Hal Finkel
https://reviews.llvm.org/D29017

llvm-svn: 293155
2017-01-26 07:03:25 +00:00
Chandler Carruth
c22d160d50 [Loops] Restructure the LoopInfo verify function so that it more
directly walks the current loop structure verifying that a matching
structure can be found in a freshly computed version.

Also pull things out of containers when necessary once an issue is found
and print them directly.

This makes it substantially easier to debug verification failures: the
process stops at the exact point in the loop nest where the structures
diverge, and the loops and other information needed to analyze the failure
are available in easily accessed local variables (or already printed to stderr).

Differential Revision: https://reviews.llvm.org/D29142

llvm-svn: 293133
2017-01-26 02:07:20 +00:00
Adam Nemet
eb46bca148 New OptimizationRemarkEmitter pass for MIR
This allows MIR passes to emit optimization remarks with the same level
of functionality that is available to IR passes.

It also hooks up the greedy register allocator to report spills.  This
allows for interesting use cases like increasing interleaving on a loop
until spilling of registers is observed.

I still need to experiment with whether reporting every spill scales, but this
demonstrates for now that the functionality works from llc
using -pass-remarks*=<pass>.

Differential Revision: https://reviews.llvm.org/D29004

llvm-svn: 293110
2017-01-25 23:20:33 +00:00
Adam Nemet
ab7818e0cc [OptDiag] Split code region out of DiagnosticInfoOptimizationBase
Code region is the only part of this class that is IR-specific.  Code
region is moved down in the inheritance tree to a new derived class,
called DiagnosticInfoIROptimization.

All the existing remarks are derived from this new class now.

This allows the new MIR pass-remark classes to be derived from
DiagnosticInfoOptimizationBase.

Also because we keep the name DiagnosticInfoOptimizationBase, the clang
parts don't need any adjustment.

Differential Revision: https://reviews.llvm.org/D29003

llvm-svn: 293109
2017-01-25 23:20:25 +00:00
Artur Pilipenko
e063c39b33 NFC. Make ScalarEvolution::isMonotonicPredicate public
Will be used by the upcoming LoopPredication optimization.

llvm-svn: 293062
2017-01-25 15:07:55 +00:00
NAKAMURA Takumi
b8624b1643 Rewind instantiations of OuterAnalysisManagerProxy in r289317, r291651, and r291662.
I found that the root class should be instantiated for a variadic template to instantiate its static member explicitly.

This will fix failures in mingw DLL build.

llvm-svn: 293017
2017-01-25 04:26:29 +00:00
Chandler Carruth
d7cc3d1b4a [PH] Replace uses of AssertingVH from members of analysis results with
a lazy-asserting PoisoningVH.

AssertingVH is fundamentally incompatible with cache invalidation of
analysis results. The invalidation happens after the AssertingVH has
already fired. Instead, use a PoisoningVH that will assert if the
dangling handle is ever used rather than merely be assigned or
destroyed.
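
A self-contained caricature of the difference (not LLVM's actual value-handle machinery): deletion merely poisons the handle, and only a later use of the dangling handle asserts.

    #include <cassert>

    struct Value;                    // opaque IR value

    class PoisoningHandle {
      Value *V = nullptr;
      bool Poisoned = false;

    public:
      explicit PoisoningHandle(Value *Val) : V(Val) {}

      // Invoked from the value's deletion/RAUW bookkeeping in the real thing.
      // Unlike an asserting handle, this is not an error by itself: assigning
      // or destroying a poisoned handle stays legal.
      void poison() {
        V = nullptr;
        Poisoned = true;
      }

      Value *get() const {
        assert(!Poisoned && "Using a handle to a deleted value!");
        return V;
      }
    };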

This patch also removes all of the (numerous) doomed attempts to work
around this fundamental incompatibility. It is a pretty significant
simplification IMO.

The most interesting change is in the Inliner where we still do some
clearing because we don't want to rely on the coarse grained
invalidation strategy of the containing pass manager. However, I prefer
the approach that confines this logic to the cleanup phase of the
Inliner, and I think we could enhance the CGSCC analysis management
layer to make this even better in the future if desired.

The rest is straight cleanup.

I've also added a test for one of the harder cases to work around: when
a *module analysis* contains many AssertingVHes pointing at functions.

Differential Revision: https://reviews.llvm.org/D29006

llvm-svn: 292928
2017-01-24 12:55:57 +00:00
David L. Jones
268960185f [Analysis] Add LibFunc_ prefix to enums in TargetLibraryInfo. (NFC)
Summary:
The LibFunc::Func enum holds enumerators named for libc functions.
Unfortunately, there are real situations, including libc implementations, where
function names are actually macros (musl uses "#define fopen64 fopen", for
example; any other transitively visible macro would have similar effects).

Strictly speaking, a conforming C++ Standard Library should provide any such
macros as functions instead (via <cstdio>). However, there are some "library"
functions which are not part of the standard, and thus not subject to this
rule (fopen64, for example). So, in order to be both portable and consistent,
the enum should not use the bare function names.

The old enum naming used a namespace LibFunc and an enum Func, with bare
enumerators. This patch changes LibFunc to be an enum with enumerators prefixed
with "LibFFunc_". (Unfortunately, a scoped enum is not sufficient to override
macros.)
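
A small illustration of why the prefix (rather than a scoped enum) is needed; the musl macro is the one quoted above, the enumerators are examples:

    // A libc header may define function names as object-like macros, e.g. musl:
    //   #define fopen64 fopen
    // Macro expansion happens before the compiler sees any scopes, so a scoped
    // enumerator `LibFunc::fopen64` would still be rewritten to `LibFunc::fopen`
    // and collide. Prefixed enumerator names are different identifiers entirely:
    enum LibFunc {
      LibFunc_fopen,
      LibFunc_fopen64,   // safe: "LibFunc_fopen64" is not the macro's name
      LibFunc_printf,
      NumLibFuncs
    };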

There are additional changes required in clang.

Reviewers: rsmith

Subscribers: mehdi_amini, mzolotukhin, nemanjai, llvm-commits

Differential Revision: https://reviews.llvm.org/D28476

llvm-svn: 292848
2017-01-23 23:16:46 +00:00
Chandler Carruth
1b0f8fde38 [PM] Teach LVI to correctly invalidate itself when its dependencies
become unavailable.

The AssumptionCache is now immutable but it still needs to respond to
DomTree invalidation if it ended up caching one.

This lets us remove one of the explicit invalidates of LVI but the
other one continues to avoid hitting a latent bug.

llvm-svn: 292769
2017-01-23 06:35:12 +00:00
Justin Lebar
20ae0dcecf [ValueTracking] Clarify comments on CannotBeOrderedLessThanZero and SignBitMustBeZero.
Reviewers: hfinkel, efriedma, sanjoy

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D28926

llvm-svn: 292691
2017-01-21 00:59:40 +00:00
Easwaran Raman
9e1b62af44 Improve PGO support for the new inliner
This adds the following to the new PM based inliner in PGO mode:

* Use block frequency analysis to derive the callsite's profile count and use
that to adjust the thresholds of hot and cold callsites.

* Incrementally update the BFI of the caller after a callee gets inlined
into it. This incremental update is only within an invocation of the run
method - BFI is not preserved across calls to run.
Update the function entry count of the callee after inlining it into a
caller.

* I've tuned the thresholds for the hot and cold callsites using a hacked
up version of the old inliner that explicitly computes BFI on a set of
internal benchmarks and spec. Once the new PM based pipeline stabilizes
(IIRC Chandler mentioned there are known issues) I'll benchmark this
again and adjust the thresholds if required.
Inliner PGO support.

Differential revision: https://reviews.llvm.org/D28331

llvm-svn: 292666
2017-01-20 22:44:04 +00:00
Chandler Carruth
aea315a631 [LoopInfo] Add helper methods to compute two useful orderings of the
loops in a function.

These are relatively confusing to talk about and compute correctly so it
seems really good to write down their implementation in one place. I've
replaced one place we needed this in the loop PM infrastructure and
I have another place in a pending patch that wants it.

We can't quite use this for the core loop PM walk because there we're
sometimes working on a sub-forest.

I'll add the expected unittests before committing this but wanted to
make sure folks were happy with these names / comments.

Credit goes to Richard Smith for the idea for naming the order where siblings
are in reverse program order but the tree traversal remains preorder.

Differential Revision: https://reviews.llvm.org/D28932

llvm-svn: 292569
2017-01-20 02:41:20 +00:00
Anna Thomas
aaa1cad9b5 [AliasAnalysis] Fences do not modify constant memory location
Summary:
Fence instructions are currently marked as `ModRef` for all memory locations.

We can improve this for constant memory locations (such as constant globals),
since fence instructions cannot modify these locations.

This helps us to forward constant loads across fences (added test case in GVN).
There were no changes in behaviour for similar test cases in early-cse and licm.

Reviewers: dberlin, sanjoy, reames

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D28914

llvm-svn: 292546
2017-01-20 00:21:33 +00:00
Easwaran Raman
7a2d97369c Add an interface to scale the frequencies of a set of blocks.
The scaling is done with reference to the new frequency of a reference block.
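
The arithmetic is a simple ratio; a sketch assuming plain integer frequencies (the real BlockFrequency type is a scaled fixed-point value and the in-tree code guards against overflow):

    #include <cstdint>
    #include <vector>

    // Scale every block's frequency by NewRefFreq / OldRefFreq, i.e. by how much
    // the reference block's frequency changed.
    static void scaleBlockFrequencies(std::vector<uint64_t> &Freqs,
                                      uint64_t OldRefFreq, uint64_t NewRefFreq) {
      if (OldRefFreq == 0)
        return;                            // nothing meaningful to scale against
      for (uint64_t &F : Freqs)
        F = F * NewRefFreq / OldRefFreq;   // sketch only; may overflow for huge counts
    }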

Differential Revision: https://reviews.llvm.org/D28535

llvm-svn: 292507
2017-01-19 18:53:16 +00:00
Hal Finkel
90c52fd329 Fix use-after-free bug in AffectedValueCallbackVH::allUsesReplacedWith
When transferring affected values in the cache from an old value, identified by
the value of the current callback, to the specified new value we might need to
insert a new entry into the DenseMap which constitutes the cache. Doing so
might delete the current callback object. Move the copying logic into a new
function, a member of the assumption cache itself, so that we don't run into UB
should the callback handle itself be removed mid-copy.

Differential Revision: https://reviews.llvm.org/D28749

llvm-svn: 292133
2017-01-16 15:22:01 +00:00
Xin Tong
a3b9f48421 Fix typos. NFC
llvm-svn: 292092
2017-01-16 03:41:09 +00:00
Chandler Carruth
f5c7ad9276 [PM] Teach the optimization remarks emitter to handle invalidation
events.

This pass sometimes has a pointer to BlockFrequencyInfo so it needs
custom invalidation logic. It is also otherwise immutable so we can
reduce the number of invalidations that happen substantially.

llvm-svn: 292058
2017-01-15 08:20:50 +00:00
Chandler Carruth
531a8d8a72 [PM] Introduce an analysis set used to preserve all analyses over
a function's CFG when that CFG is unchanged.

This allows transformation passes to simply claim they preserve the CFG
and analysis passes to check for the CFG being preserved to remove the
fanout of all analyses being listed in all passes.

I've gone through and removed or cleaned up as many of the comments
reminding us to do this as I could.
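
From a transform's perspective the usage looks roughly like this (CFGAnalyses is the set described above; the pass and its helper are schematic):

    #include "llvm/IR/PassManager.h"

    using namespace llvm;

    struct NoCFGChangePass : PassInfoMixin<NoCFGChangePass> {
      static bool simplifySomething(Function &) { return false; }  // placeholder

      PreservedAnalyses run(Function &F, FunctionAnalysisManager &) {
        if (!simplifySomething(F))
          return PreservedAnalyses::all();
        PreservedAnalyses PA;
        PA.preserveSet<CFGAnalyses>();   // "I changed the IR but not the CFG"
        return PA;
      }
    };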

Differential Revision: https://reviews.llvm.org/D28627

llvm-svn: 292054
2017-01-15 06:32:49 +00:00
Chandler Carruth
cad5492fda [PM] The assumption cache is fundamentally designed to be self-updating,
mark it as never invalidated in the new PM.

The old PM already required this to work, and after a discussion with
Hal this seems to really be the only sensible answer. The cache
gracefully degrades as the IR is mutated, and most things which do this
should already be incrementally updating the cache.

This gets rid of a bunch of logic preserving and testing the
invalidation of this analysis.

llvm-svn: 292039
2017-01-15 00:26:18 +00:00
Xin Tong
b80c7151c1 Delete duplicate word. NFC
llvm-svn: 291999
2017-01-14 05:51:36 +00:00
Easwaran Raman
754f5f3612 Compute summary before calling extractProfTotalWeight
extractProfTotalWeight checks if the profile type is sample profile, but
before that we have to ensure that the summary is available. Also expanded
the unittest to test the case where there is no summary.

Differential Revision: https://reviews.llvm.org/D28708

llvm-svn: 291982
2017-01-14 00:32:37 +00:00
Easwaran Raman
50f9f008cb ProfileSummaryInfo improvements.
* Add is{Hot|Cold}CallSite methods
* Fix a bug in isHotBB where it was looking for MD_prof on a return instruction
* Use MD_prof data only if sample profiling was used to collect profiles.
* Add a unit test to ProfileSummaryInfo

Differential Revision: https://reviews.llvm.org/D28584

llvm-svn: 291878
2017-01-13 01:34:00 +00:00
Chad Rosier
b66b2c6427 TTI: Add comment clarifying the meaning of MemIntrinsicInfo::PtrVal.
Patch by Tom Stellard.
Differential Revision: https://reviews.llvm.org/D27563

llvm-svn: 291772
2017-01-12 16:15:10 +00:00
Piotr Padlewski
9032a3fc4f [Devirtualization] MemDep returns non-local !invariant.group dependencies
Summary:
Memory Dependence Analysis was limited to returning only local dependencies
for invariant.group handling. Now it returns NonLocal when it finds one, and
by then asking getNonLocalPointerDependency we get the found dependency.

Thanks to this we are able to devirtualize loops!

    void indirect(A &a, int n) {
      for (int i = 0; i < n; i++)
        a.foo();
    }

    void test(int n) {
      A a;
      indirect(a, n);
    }

After inlining, a.foo() will be changed to a direct call, even if foo and A::A()
are external (but only if the vtable definition is available).

Reviewers: nlewycky, dberlin, chandlerc, rsmith

Subscribers: mehdi_amini, davide, llvm-commits

Differential Revision: https://reviews.llvm.org/D28137

llvm-svn: 291762
2017-01-12 11:33:58 +00:00
David Blaikie
8f9bdb1c35 Make some operator bools explicit for sanity/safety.
There are a couple left in bool-like containers (BitVector, etc) where
the implicit conversions seem more suitable - though it might be worth
considering explicitifying those too.

llvm-svn: 291694
2017-01-11 19:47:16 +00:00
Hal Finkel
a3dd8b8968 Make processing @llvm.assume more efficient - Add affected values to the assumption cache
Here's my second try at making @llvm.assume processing more efficient. My
previous attempt, which leveraged operand bundles, r289755, didn't end up
working: it did make assume processing more efficient but eliminating the
assumption cache made ephemeral value computation too expensive. This is a
more-targeted change. We'll keep the assumption cache, but extend it to keep a
map of affected values (i.e. values about which an assumption might provide
some information) to the corresponding assumption intrinsics. This allows
ValueTracking and LVI to find assumptions relevant to the value being queried
without scanning all assumptions in the function. The fact that ValueTracking
started doing O(number of assumptions in the function) work, for every
known-bits query, has become prohibitively expensive in some cases.

As discussed during the review, this is a pragmatic fix that, longer term, will
likely be replaced by a more-principled solution (perhaps based on an extended
SSA form).
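
Conceptually, the cache gains an index shaped like the following sketch (plain STL stand-ins; the real cache uses LLVM's value handles and containers):

    #include <unordered_map>
    #include <vector>

    struct Value;       // opaque IR value
    struct CallInst;    // an @llvm.assume call

    class AffectedValueIndex {
      // For each value that some assumption says something about, the assumes
      // mentioning it. A known-bits query for V now scans only this bucket
      // instead of every @llvm.assume in the function.
      std::unordered_map<const Value *, std::vector<CallInst *>> AffectedValues;

    public:
      void registerAssumption(CallInst *Assume,
                              const std::vector<const Value *> &Affected) {
        for (const Value *V : Affected)
          AffectedValues[V].push_back(Assume);
      }

      const std::vector<CallInst *> *assumptionsFor(const Value *V) const {
        auto It = AffectedValues.find(V);
        return It == AffectedValues.end() ? nullptr : &It->second;
      }
    };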

Differential Revision: https://reviews.llvm.org/D28459

llvm-svn: 291671
2017-01-11 13:24:24 +00:00