mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-11-23 19:23:23 +01:00
Commit Graph

180 Commits

Author SHA1 Message Date
Sanjoy Das
c75f995ef0 Teach SCEV normalization to de/normalize non-affine add recs
Summary:
Before this change, SCEV Normalization would incorrectly normalize
non-affine add recurrences.  To work around this there was (still is)
a check in place to make sure we only tried to normalize affine add
recurrences.

We recently found a bug in the aforementioned check to bail out of
normalizing non-affine add recurrences.  However, instead of fixing
the bailout, I have decided to teach SCEV normalization to work
correctly with non-affine add recurrences, making the bailout
unnecessary (I'll remove it in a subsequent change).

I've also added some unit tests (which would have failed before this
change).

Reviewers: atrick, sunfish, efriedma

Reviewed By: atrick

Subscribers: mcrosier, mzolotukhin, llvm-commits

Differential Revision: https://reviews.llvm.org/D32104

llvm-svn: 301281
2017-04-25 00:09:19 +00:00
Wei Mi
a91f9f0c5f [SCEV] Add a local cache for getZeroExtendExpr and getSignExtendExpr to prevent
the exponential behavior.

The patch is to fix PR32043. Functions getZeroExtendExpr and getSignExtendExpr
may call themselves recursively more than once. This is potentially a 2^N
complexity behavior. The exponential behavior was not commonly exposed before
because of existing global caching mechanisms like UniqueSCEVs and early-return
paths taken when the FlagNSW or FlagNUW flags are seen. However, there are still
cases which can expose the exponential behavior, like the one in PR32043, so we
add a local cache in getZeroExtendExpr and getSignExtendExpr. If the input to
these functions -- a SCEV and type pair -- has been seen before, we can find the
extended expression directly in the local cache.
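
As a rough illustration of the pattern (the cache type, helper names, and
signatures here are assumptions for the sketch, not the patch itself):

  // Hypothetical sketch of a (SCEV, Type) -> SCEV memoization wrapper.
  using ExtendCacheTy =
      SmallDenseMap<std::pair<const SCEV *, Type *>, const SCEV *, 8>;

  const SCEV *getZeroExtendExprCached(const SCEV *Op, Type *Ty,
                                      ExtendCacheTy &Cache) {
    auto It = Cache.find({Op, Ty});
    if (It != Cache.end())
      return It->second;                        // reuse the earlier result
    const SCEV *ZExt = computeZeroExtendExpr(Op, Ty, Cache); // may recurse
    Cache.insert({{Op, Ty}, ZExt});
    return ZExt;
  }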

Differential Revision: https://reviews.llvm.org/D30350

llvm-svn: 300494
2017-04-17 20:40:05 +00:00
Sanjoy Das
0bffb56cbe Generalize SCEV's unit testing helper a bit
llvm-svn: 300379
2017-04-14 23:47:53 +00:00
Craig Topper
9f4fbdf31c [ValueTracking] Avoid undefined behavior in unittest by not making a named ArrayRef from a std::initializer_list
One of the ValueTracking unittests creates a named ArrayRef initialized by a std::initializer_list. The underlying array for a std::initializer_list is only guaranteed to have a lifetime as long as the initializer_list object itself. So this can leave the ArrayRef pointing at an array that no longer exists.

This patch fixes the issue by creating an explicit array instead of a named ArrayRef.
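
Roughly, the hazard and the fix look like this (illustrative code, not the
unittest verbatim):

  // Buggy: the initializer_list's backing array only lives until the end of
  // this statement, so Vals dangles afterwards.
  ArrayRef<int> Vals = {10, 20, 30};

  // Fixed: name the storage so it outlives the ArrayRef that views it.
  int Storage[] = {10, 20, 30};
  ArrayRef<int> Vals2(Storage);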

Differential Revision: https://reviews.llvm.org/D32089

llvm-svn: 300354
2017-04-14 17:59:19 +00:00
Sanjoy Das
01c58fa721 Add a unit test for SCEV Normalization
llvm-svn: 300332
2017-04-14 15:50:04 +00:00
Daniel Berlin
e485ce96aa MemorySSA: Move to Analysis, from Transforms/Utils. It's used as
Analysis, it has Analysis passes, and once NewGVN is made an Analysis,
this removes the cross dependency from Analysis to Transform/Utils.
NFC.

llvm-svn: 299980
2017-04-11 20:06:36 +00:00
Matt Arsenault
204d4c1d7b Allow DataLayout to specify addrspace for allocas.
LLVM makes several assumptions about address space 0. However,
alloca is presently constrained to always return this address space.
There's no real way to avoid using alloca, so without this
there is no way to opt out of these assumptions.

The problematic assumptions include:
- That the pointer size used for the stack is the same size as
  the code size pointer, which is also the maximum sized pointer.

- That 0 is an invalid, non-dereferenceable pointer value.

These are problems for AMDGPU because alloca is used to
implement the private address space, which uses a 32-bit
index as the pointer value. Other pointers are 64-bit
and behave more like LLVM's notion of generic address
space. By changing the address space used for allocas,
we can change our generic pointer type to be LLVM's generic
pointer type which does have similar properties.
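
For illustration, creating an alloca in the DataLayout's alloca address space
might look roughly like this (the accessor and constructor overload shown are
assumptions based on this description, and M, Builder, and InsertPt stand in
for surrounding context):

  const DataLayout &DL = M.getDataLayout();
  unsigned AllocaAS = DL.getAllocaAddrSpace();  // address space declared by the target
  auto *AI = new AllocaInst(Builder.getInt32Ty(), AllocaAS, "tmp",
                            /*InsertBefore=*/InsertPt);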

llvm-svn: 299888
2017-04-10 22:27:50 +00:00
Simon Pilgrim
b8aa529569 Fix signed/unsigned comparison warnings
llvm-svn: 297561
2017-03-11 13:02:31 +00:00
Dehao Chen
f131435479 Refactor the PSI to extract getCallSiteCount and remove checks for profile type.
Summary: There is no need to check profile count as only CallInst will have metadata attached.

Reviewers: eraman

Reviewed By: eraman

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D30799

llvm-svn: 297500
2017-03-10 19:45:16 +00:00
Sanjoy Das
58bb799513 [SCEV] Decrease the recursion threshold for CompareValueComplexity
Fixes PR32142.

r287232 accidentally increased the recursion threshold for
CompareValueComplexity from 2 to 32.  This change reverses that change
by introducing a separate flag for CompareValueComplexity's threshold.

llvm-svn: 296992
2017-03-05 23:49:17 +00:00
Sanjoy Das
97ea3c33f1 Fix signed-unsigned comparison warning
llvm-svn: 296274
2017-02-25 22:25:48 +00:00
Sanjoy Das
059733f666 [ValueTracking] Don't do an unchecked shift in ComputeNumSignBits
Summary:
Previously we used to return a bogus result, 0, for IR like `ashr %val,
-1`.

I've also added an assert checking that `ComputeNumSignBits` at least
returns 1.  That assert found an already checked in test case where we
were returning a bad result for `ashr %val, -1`.

Fixes PR32045.
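
A hedged sketch of the kind of guard involved (not the exact ValueTracking
code; variable and query names are illustrative):

  unsigned TyBits = Ty->getScalarSizeInBits();
  unsigned Tmp = ComputeNumSignBits(U->getOperand(0), Depth + 1, Q);
  if (const auto *ShAmt = dyn_cast<ConstantInt>(U->getOperand(1)))
    if (ShAmt->getValue().ult(TyBits))          // skip invalid shifts like -1
      Tmp = std::min(TyBits, Tmp + unsigned(ShAmt->getZExtValue()));
  assert(Tmp >= 1 && "ComputeNumSignBits must return at least 1");
  return Tmp;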

Reviewers: spatel, majnemer

Reviewed By: spatel, majnemer

Subscribers: efriedma, mcrosier, llvm-commits

Differential Revision: https://reviews.llvm.org/D30311

llvm-svn: 296273
2017-02-25 20:30:45 +00:00
Chandler Carruth
e56d7470a4 [PM/LCG] Teach LCG to support spurious reference edges.
Somewhat amazingly, this only requires teaching it to clean them up when
deleting a dead function from the graph. And we already have exactly the
necessary data structures to do that in the parent RefSCCs.

This allows ArgPromote to work in a much simpler way by merely letting
reference edges linger in the graph after the causing IR is deleted. We
will clean up these edges when we run any function pass over the IR, but
don't remove them eagerly.

This avoids all of the quadratic update issues both in the current pass
manager and in my previous attempt with the new pass manager.

Differential Revision: https://reviews.llvm.org/D29579

llvm-svn: 294663
2017-02-09 23:30:14 +00:00
Chandler Carruth
042041bdf3 [PM/LCG] Teach the LazyCallGraph how to replace a function without
disturbing the graph or having to update edges.

This is motivated by porting argument promotion to the new pass manager.
Because of how LLVM IR Function objects work, in order to change their
signature a new object needs to be created. This is efficient and
straightforward in the IR but previously was very hard to implement in
LCG. We could easily replace the function a node in the graph
represents. The challenging part is how to handle updating the edges in
the graph.

LCG previously used an edge to a raw function to represent a node that
had not yet been scanned for calls and references. This was the core
of its laziness. However, that model causes this kind of update to be
very hard:
1) The keys to lookup an edge need to be `Function*`s that would all
   need to be updated when we update the node.
2) There will be some unknown number of edges that haven't transitioned
   from `Function*` edges to `Node*` edges.

All of this complexity isn't necessary. Instead, we can always build
a node around any function, always pointing edges at it and always using
it as the key to lookup an edge. To maintain the laziness, we need to
sink the *edges* of a node into a secondary object and explicitly model
transitioning a node from empty to populated by scanning the function.
This design seems much cleaner in a number of ways, but importantly
there is now exactly *one* place where the `Function*` has to be
updated!

Some other cleanups that fall out of this include having something to
model the *entry* edges more accurately. Rather than hand rolling parts
of the node in the graph itself, we have an explicit `EdgeSequence`
object that gives us exactly the functionality needed. We also have
a consistent place to define the edge iterators and can use them for
both the entry edges and the internal edges of the graph.

The API used to model the separation between a node and its edges is
intentionally very thin as most clients are expected to deal with nodes
that have populated edges. We model this exactly as an optional does
with an additional method to populate the edges when that is
a reasonable thing for a client to do. This is based on API design
suggestions from Richard Smith and David Blaikie, credit goes to them
for helping pick how to model this without it being either too explicit
or too implicit.

The patch is somewhat noisy due to shifting around iterator types and
new syntax for walking the edges of a node, but most of the
functionality change is in the `Edge`, `EdgeSequence`, and `Node` types.

Differential Revision: https://reviews.llvm.org/D29577

llvm-svn: 294653
2017-02-09 23:24:13 +00:00
Chandler Carruth
59e43d5713 [SCEV] Scale back the test added in r294181 as it goes quadratic in
SCEV.

This test was immediately the slowest test in 'check-llvm' even in an
optimized build and was driving up the total test time by 50% for me.

Sanjoy has filed a PR about the quadratic behavior in SCEV but it is
also concerning that the test still passes given that r294181 added
a threshold at 32 to SCEV. I've followed up on the original patch to
figure out how this test should work long-term, but for now I want to
get check-llvm to be fast again.

llvm-svn: 294241
2017-02-06 21:27:12 +00:00
Chandler Carruth
93759a2957 [PM/LCG] Remove the lazy RefSCC formation from the LazyCallGraph during
iteration.

The lazy formation of RefSCCs isn't really the most important part of
the laziness here -- that has to do with walking the functions
themselves -- and isn't essential to maintain. Originally, there were
incremental update algorithms that relied on updates happening
predominantly near the most recent RefSCC formed, but those have been
replaced with ones that have much tighter general case bounds at this
point. We do still perform asserts that only scale well due to this
incrementality, but those are easy to place behind EXPENSIVE_CHECKS.

Removing this simplifies the entire analysis by having a single up-front
step that builds all of the RefSCCs in a direct Tarjan walk. We can even
easily replace this with other or better algorithms at will and with
much less confusion now that there is no iterator-based incremental
logic involved. This removes a lot of complexity from LCG.

Another advantage of moving in this direction is that it simplifies
testing the system substantially as we no longer have to worry about
observing and mutating the graph half-way through the RefSCC formation.

We still need a somewhat special iterator for RefSCCs because we want
the iterator to remain stable in the face of graph updates. However,
this now merely involves relative indexing to the current RefSCC's
position in the sequence which isn't too hard.

Differential Revision: https://reviews.llvm.org/D29381

llvm-svn: 294227
2017-02-06 19:38:06 +00:00
Daniil Fukalov
fd35b81460 [SCEV] limit recursion depth and operands number in getAddExpr
for a quite big function with source like

%add = add nsw i32 %mul, %conv
%mul1 = mul nsw i32 %add, %conv
%add2 = add nsw i32 %mul1, %add
%mul3 = mul nsw i32 %add2, %add
; repeated a couple of thousand times
that can be produced by loop unrolling, getAddExpr() tries to recursively construct the SCEV and runs for an almost unbounded amount of time.

Added a recursion depth restriction (with a new parameter to set it).
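
A sketch of the depth-limiting pattern (the option name, default value, and
getOrCreateAddExpr helper shown here are illustrative assumptions):

  static cl::opt<unsigned> MaxAddExprDepth(
      "scev-max-addexpr-depth", cl::Hidden, cl::init(32),
      cl::desc("Maximum depth of recursive AddExpr construction"));

  const SCEV *ScalarEvolution::getAddExpr(SmallVectorImpl<const SCEV *> &Ops,
                                          SCEV::NoWrapFlags Flags,
                                          unsigned Depth) {
    // Beyond the cutoff, give up on further folding and just build the node.
    if (Depth > MaxAddExprDepth)
      return getOrCreateAddExpr(Ops, Flags);
    // ... existing folding logic, passing Depth + 1 to recursive calls ...
    return getOrCreateAddExpr(Ops, Flags);
  }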

Reviewers: sanjoy

Subscribers: hfinkel, llvm-commits, mzolotukhin

Differential Revision: https://reviews.llvm.org/D28158

llvm-svn: 294181
2017-02-06 12:38:06 +00:00
David L. Jones
268960185f [Analysis] Add LibFunc_ prefix to enums in TargetLibraryInfo. (NFC)
Summary:
The LibFunc::Func enum holds enumerators named for libc functions.
Unfortunately, there are real situations, including libc implementations, where
function names are actually macros (musl uses "#define fopen64 fopen", for
example; any other transitively visible macro would have similar effects).

Strictly speaking, a conforming C++ Standard Library should provide any such
macros as functions instead (via <cstdio>). However, there are some "library"
functions which are not part of the standard, and thus not subject to this
rule (fopen64, for example). So, in order to be both portable and consistent,
the enum should not use the bare function names.

The old enum naming used a namespace LibFunc and an enum Func, with bare
enumerators. This patch changes LibFunc to be an enum with enumerators prefixed
with "LibFFunc_". (Unfortunately, a scoped enum is not sufficient to override
macros.)
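
Schematically (illustrative only, not the full enum):

  // Old scheme: namespace LibFunc { enum Func { fopen, fopen64, ... }; }
  // A macro such as "#define fopen64 fopen" silently collapses those
  // enumerators into one another.

  // New scheme: prefixed enumerators cannot collide with libc macro names.
  enum LibFunc {
    LibFunc_fopen,
    LibFunc_fopen64,
    // ...
    NumLibFuncs
  };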

There are additional changes required in clang.

Reviewers: rsmith

Subscribers: mehdi_amini, mzolotukhin, nemanjai, llvm-commits

Differential Revision: https://reviews.llvm.org/D28476

llvm-svn: 292848
2017-01-23 23:16:46 +00:00
Chandler Carruth
aea315a631 [LoopInfo] Add helper methods to compute two useful orderings of the
loops in a function.

These are relatively confusing to talk about and compute correctly so it
seems really good to write down their implementation in one place. I've
replaced one place we needed this in the loop PM infrastructure and
I have another place in a pending patch that wants it.

We can't quite use this for the core loop PM walk because there we're
sometimes working on a sub-forest.

I'll add the expected unittests before committing this but wanted to
make sure folks were happy with these names / comments.

Credit goes to Richard Smith for the idea for naming the order where siblings
are in reverse program order but the tree traversal remains preorder.
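
Expected usage looks roughly like this (the helper names below are my reading
of the description and may not match the final API exactly; process() is a
hypothetical consumer):

  LoopInfo &LI = AM.getResult<LoopAnalysis>(F);

  // Ordinary preorder over the loop forest.
  for (Loop *L : LI.getLoopsInPreorder())
    process(L);

  // Preorder where siblings are visited in reverse program order.
  for (Loop *L : LI.getLoopsInReverseSiblingPreorder())
    process(L);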

Differential Revision: https://reviews.llvm.org/D28932

llvm-svn: 292569
2017-01-20 02:41:20 +00:00
Easwaran Raman
7a2d97369c Add an interface to scale the frequencies of a set of blocks.
The scaling is done with reference to the new frequency of a reference block.

Differential Revision: https://reviews.llvm.org/D28535

llvm-svn: 292507
2017-01-19 18:53:16 +00:00
Xin Tong
acffedf36e Refactor out LoopInfo computation so that it can be used by
other test cases.

Summary: Refactor out LoopInfo computation so that it can be
used by other test cases.

So I am changing this test proactively for a later commit, which will use
this function.

Reviewers: sanjoy, hfinkel

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D28778

llvm-svn: 292250
2017-01-17 20:24:39 +00:00
Ahmed Bougacha
c5c2e57ec2 [TLI] Add prototype checking for all remaining LibFuncs.
This is another step towards unifying all LibFunc prototype checks.
This work started in r267758 (D19469); this patch adds the remaining checks.

Also add a unittest that checks each libfunc declared with a known-valid
and known-invalid prototype.  New libfuncs added in the future are
required to have prototype checking in place; the known-valid test will
fail otherwise.

Differential Revision: https://reviews.llvm.org/D28030

llvm-svn: 292188
2017-01-17 03:10:02 +00:00
Ahmed Bougacha
bb16a66093 [unittests] Alphabetize cmake file list. NFC.
llvm-svn: 292186
2017-01-17 03:09:55 +00:00
Xin Tong
97363b5028 Empty line. NFC.
llvm-svn: 292081
2017-01-15 23:32:11 +00:00
Xin Tong
9f93956e98 Use getLoopLatch in place of isLoopSimplifyForm
Summary:
Use getLoopLatch in place of isLoopSimplifyForm. We do not need
to know whether the loop has a preheader nor dedicated exits.

Reviewers: hfinkel, sanjoy, atrick, mkuper

Subscribers: mzolotukhin, llvm-commits

Differential Revision: https://reviews.llvm.org/D28724

llvm-svn: 292078
2017-01-15 21:17:52 +00:00
Xin Tong
b8f4943e70 Delete a dead argument. NFC
llvm-svn: 292074
2017-01-15 19:53:59 +00:00
Easwaran Raman
754f5f3612 Compute summary before calling extractProfTotalWeight
extractProfTotalWeight checks if the profile type is sample profile, but
before that we have to ensure that summary is available. Also expanded
the unittest to test the case where there is no summary.

Differential Revision: https://reviews.llvm.org/D28708

llvm-svn: 291982
2017-01-14 00:32:37 +00:00
Easwaran Raman
50f9f008cb ProfileSummaryInfo improvements.
* Add is{Hot|Cold}CallSite methods
* Fix a bug in isHotBB where it was looking for MD_prof on a return instruction
* Use MD_prof data only if sample profiling was used to collect profiles.
* Add a unit test to ProfileSummaryInfo

Differential Revision: https://reviews.llvm.org/D28584

llvm-svn: 291878
2017-01-13 01:34:00 +00:00
Chandler Carruth
e59e4b3dc5 [PM] Separate the LoopAnalysisManager from the LoopPassManager and move
the latter to the Transforms library.

While the loop PM uses an analysis to form the IR units, the current
plan is to have the PM itself establish and enforce both loop simplified
form and LCSSA. This would be a layering violation in the analysis
library.

Fundamentally, the idea behind the loop PM is to *transform* loops in
addition to running passes over them, so it really seemed like the most
natural place to sink this was into the transforms library.

We can't just move *everything* because we also have loop analyses that
rely on a subset of the invariants. So this patch splits the loop
infrastructure into the analysis management that has to be part of the
analysis library, and the transform-aware pass manager.

This also required splitting the loop analyses' printer passes out to
the transforms library, which makes sense to me as running these will
transform the code into LCSSA in theory.

I haven't split the unittest though because testing one component
without the other seems nearly intractable.

Differential Revision: https://reviews.llvm.org/D28452

llvm-svn: 291662
2017-01-11 09:43:56 +00:00
Chandler Carruth
47fe0b8a82 [PM] Take more drastic measures to work around MSVC's failure on this
code. If this doesn't work and I can't find someone to help who has MSVC
installed, I'll back everything out I guess. =[

llvm-svn: 291661
2017-01-11 09:20:24 +00:00
Chandler Carruth
9d614df7e2 [PM] Pull a lambda out of an argument into a named variable to try and
get a little more clarity about the nature of the issue MSVC is having
with this code.

llvm-svn: 291656
2017-01-11 08:23:29 +00:00
Chandler Carruth
c483768d91 [PM] Another attempt to satisfy MSVC.
llvm-svn: 291655
2017-01-11 07:53:12 +00:00
Chandler Carruth
15b99828b8 [PM] Try to appease MSVC by explicitly disambiguating a member name as
a template.

llvm-svn: 291654
2017-01-11 07:37:50 +00:00
Chandler Carruth
4855803b43 [PM] Rewrite the loop pass manager to use a worklist and augmented run
arguments much like the CGSCC pass manager.

This is a major redesign following the pattern establish for the CGSCC layer to
support updates to the set of loops during the traversal of the loop nest and
to support invalidation of analyses.

An additional significant burden in the loop PM is that so many passes require
access to a large number of function analyses. Manually ensuring these are
cached, available, and preserved has been a long-standing burden in LLVM even
with the help of the automatic scheduling in the old pass manager. And it made
the new pass manager extremely unwieldy. With this design, we can package the
common analyses up while in a function pass and make them immediately available
to all the loop passes. While in some cases this is unnecessary, I think the
simplicity afforded is worth it.

This does not (yet) address loop simplified form or LCSSA form, but those are
the next things on my radar and I have a clear plan for them.

While the patch is very large, most of it is either mechanically updating loop
passes to the new API or the new testing for the loop PM. The code for it is
reasonably compact.

I have not yet updated all of the loop passes to correctly leverage the update
mechanisms demonstrated in the unittests. I'll do that in follow-up patches
along with improved FileCheck tests for those passes that ensure things work in
more realistic scenarios. In many cases, there isn't much we can do with these
until the loop simplified form and LCSSA form are in place.

Differential Revision: https://reviews.llvm.org/D28292

llvm-svn: 291651
2017-01-11 06:23:21 +00:00
Sanjoy Das
a2fc5cf9c6 Fix an issue with isGuaranteedToTransferExecutionToSuccessor
I'm not sure if this was intentional, but today
isGuaranteedToTransferExecutionToSuccessor returns true for readonly and
argmemonly calls that may throw.  This commit changes the function to
not implicitly infer nounwind this way.

Even if we eventually specify readonly calls as not throwing,
isGuaranteedToTransferExecutionToSuccessor is not the best place to
infer that.  We should instead teach FunctionAttrs or some other such
pass to tag readonly functions / calls as nounwind instead.

llvm-svn: 290794
2016-12-31 22:12:34 +00:00
Chandler Carruth
90f7376682 [PM] Teach the CGSCC's CG update utility to more carefully invalidate
analyses when we're about to break apart an SCC.

We can't wait until after breaking apart the SCC to invalidate things:
1) Which SCC do we then invalidate? All of them?
2) Even if we invalidate all of them, a newly created SCC may not have
   a proxy that will convey the invalidation to functions!

Previously we only invalidated one of the SCCs and too late. This led to
stale analyses remaining in the cache. And because the caching strategy
actually works, they would get used and chaos would ensue.

Doing invalidation early is somewhat pessimizing though if we *know*
that the SCC structure won't change. So it turns out that the design to
make the mutation API force the caller to know the *kind* of mutation in
advance was indeed 100% correct and we didn't do enough of it. So this
change also splits two cases of switching a call edge to a ref edge into
two separate APIs so that callers can clearly test for this and take the
easy path without invalidating when appropriate. This is particularly
important in this case as we expect most inlines to be between functions
in separate SCCs and so the common case is that we don't have to so
aggressively invalidate analyses.

The LCG API change in turn needed some basic cleanups and better testing
in its unittest. No interesting functionality changed there other than
more coverage of the returned sequence of SCCs.

While this seems like an obvious improvement over the current state, I'd
like to revisit the core concept of invalidating within the CG-update
layer at all. I'm wondering if we would be better served forcing the
callers to handle the invalidation beforehand in the cases that they
can handle it. An interesting example is when we want to teach the
inliner to *update and preserve* analyses. But we can cross that bridge
when we get there.

With this patch, the new pass manager can build all of the LLVM test
suite at -O3 and everything passes. =D I haven't bootstrapped yet and
I'm sure there are still plenty of bugs, but this gives a nice baseline
so I'm going to increasingly focus on fleshing out the missing
functionality, especially the bits that are just turned off right now in
order to let us establish this baseline.

llvm-svn: 290664
2016-12-28 10:34:50 +00:00
Chandler Carruth
ff1b18d787 [LCG] Teach the ref edge removal to handle a ref edge that is trivial
due to a call cycle.

This actually crashed the ref removal before.

I've added a unittest that covers this kind of interesting graph
structure and mutation.

llvm-svn: 290645
2016-12-28 02:24:58 +00:00
Chandler Carruth
99ebb6e7dc [PM] Introduce the facilities for registering cross-IR-unit dependencies
that require deferred invalidation.

This handles the other real-world invalidation scenario that we have
cases of: a function analysis which caches references to a module
analysis. We currently do this in the AA aggregation layer and might
well do this in other places as well.

Since this is relatively rare, the technique is somewhat more cumbersome.
Analyses need to register themselves when accessing the outer analysis
manager's proxy. This proxy is already necessarily present to allow
access to the outer IR unit's analyses. By registering here we can track
and trigger invalidation when that outer analysis goes away.

To make this work we need to enhance the PreservedAnalyses
infrastructure to support a (slightly) more explicit model for "sets" of
analyses, and allow abandoning a single specific analysis even when
a set covering that analysis is preserved. That allows us to describe
the scenario of preserving all Function analyses *except* for the one
where deferred invalidation has triggered.

We also need to teach the invalidator API to support direct ID calls
instead of always going through a template to dispatch so that we can
just record the ID mapping.

I've introduced testing of all of this both for simple module<->function
cases as well as for more complex cases involving a CGSCC layer.

Much like the previous patch I've not tried to fully update the loop
pass management layer because that layer is due to be heavily reworked
to use similar techniques to the CGSCC to handle updates. As that
happens, we'll have a better testing basis for adding support like this.

Many thanks to both Justin and Sean for the extensive reviews on this to
help bring the API design and documentation into a better state.

Differential Revision: https://reviews.llvm.org/D27198

llvm-svn: 290594
2016-12-27 08:40:39 +00:00
Chandler Carruth
c51940f6af [LCG] Teach the LazyCallGraph to handle visiting the blockaddress
constant expression and to correctly form function reference edges
through them without crashing because one of the operands (the
`BasicBlock`) isn't actually a constant despite being an operand of
a constant.

llvm-svn: 290581
2016-12-27 05:00:45 +00:00
George Burgess IV
6a02f0bf77 Don't consider allocsize functions to be allocation functions.
This patch fixes some ASAN unittest failures on FreeBSD. See the
cfe-commits email thread for r290169 for more on those.

According to the LangRef, the allocsize attribute only tells us about
the number of bytes that exist at the memory location pointed to by the
return value of a function. It does not necessarily mean that the
function will only ever allocate. So, we need to be very careful about
treating functions with allocsize as general allocation functions. This
patch makes us fully conservative in this regard, though I suspect that
we have room to be a bit more aggressive if we want.

This has a FIXME that can be fixed by a relatively straightforward
refactor; I just wanted to keep this patch minimal. If this sticks, I'll
come back and fix it in a few days.

llvm-svn: 290397
2016-12-23 01:18:09 +00:00
Chandler Carruth
2ed363a464 [PM] Introduce a reasonable port of the main per-module pass pipeline
from the old pass manager in the new one.

I'm not trying to support (initially) the numerous options that are
currently available to customize the pass pipeline. If we end up really
wanting them, we can add them later, but I suspect many are no longer
interesting. The simplicity of omitting them will help a lot as we sort
out what the pipeline should look like in the new PM.

I've also documented to the best of my ability *why* each pass or group
of passes is used so that reading the pipeline is more helpful. In many
cases I think we have some questionable choices of ordering and I've
left FIXME comments in place so we know what to come back and revisit
going forward. But for now, I've left it as similar to the current
pipeline as I could.

Lastly, I've had to comment out several places where passes are not
ported to the new pass manager or where the loop pass infrastructure is
not yet ready. I did at least fix a few bugs in the loop pass
infrastructure uncovered by running the full pipeline, but I didn't want
to go too far in this patch -- I'll come back and re-enable these as the
infrastructure comes online. But I'd like to keep the comments in place
because I don't want to lose track of which passes need to be enabled
and where they go.

One thing that seemed like a significant API improvement was to require
that we don't build pipelines for O0. It seems to have no real benefit.

I've also switched back to returning pass managers by value as at this
API layer it feels much more natural to me for composition. But if
others disagree, I'm happy to go back to an output parameter.

I'm not 100% happy with the testing strategy currently, but it seems at
least OK. I may come back and try to refactor or otherwise improve this
in subsequent patches but I wanted to at least get a good starting point
in place.

Differential Revision: https://reviews.llvm.org/D28042

llvm-svn: 290325
2016-12-22 06:59:15 +00:00
Daniel Jasper
162ffcacd6 Revert @llvm.assume with operand bundles (r289755-r289757)
This creates non-linear behavior in the inliner (see more details in
r289755's commit thread).

llvm-svn: 290086
2016-12-19 08:22:17 +00:00
Vedant Kumar
896c96fa1e Retry: [BPI] Use a safer constructor to calculate branch probabilities
BPI may trigger signed overflow UB while computing branch probabilities for
cold calls or to unreachables. For example, with our current choice of weights,
we'll crash if there are >= 2^12 branches to an unreachable.

Use a safer BranchProbability constructor which is better at handling fractions
with large denominators.

Changes since the initial commit:
  - Use explicit casts to ensure that multiplication operands are 64-bit
    ints.
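
Illustrative use of the safer path (the surrounding variable names are made
up; the getBranchProbability factory is the one named in the assertion quoted
in the related revert message):

  // Casting before the multiplications keeps the math in 64 bits, per the note above.
  uint64_t Num = uint64_t(EdgeWeight) * Scale;
  uint64_t Denom = uint64_t(TotalWeight) * Scale;
  BranchProbability Prob = BranchProbability::getBranchProbability(Num, Denom);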

rdar://problem/29368161

Differential Revision: https://reviews.llvm.org/D27862

llvm-svn: 290022
2016-12-17 01:02:08 +00:00
Vedant Kumar
2a685e1c7e Revert "[BPI] Use a safer constructor to calculate branch probabilities"
This reverts commit r290016. It breaks this bot, even though the test
passes locally:

  http://bb.pgr.jp/builders/ninja-x64-msvc-RA-centos6/builds/32956/

AnalysisTests: /home/bb/ninja-x64-msvc-RA-centos6/llvm-project/llvm/lib/Support/BranchProbability.cpp:52: static llvm::BranchProbability llvm::BranchProbability::getBranchProbability(uint64_t, uint64_t): Assertion `Numerator <= Denominator && "Probability cannot be bigger than 1!"' failed.
llvm-svn: 290019
2016-12-17 00:19:06 +00:00
Vedant Kumar
0a726263ad [BPI] Use a safer constructor to calculate branch probabilities
BPI may trigger signed overflow UB while computing branch probabilities
for cold calls or to unreachables. For example, with our current choice
of weights, we'll crash if there are >= 2^12 branches to an unreachable.

Use a safer BranchProbability constructor which is better at handling
fractions with large denominators.

rdar://problem/29368161

Differential Revision: https://reviews.llvm.org/D27862

llvm-svn: 290016
2016-12-17 00:09:51 +00:00
Hal Finkel
f224db75d2 Remove the AssumptionCache
After r289755, the AssumptionCache is no longer needed. Variables affected by
assumptions are now found by using the new operand-bundle-based scheme. This
new scheme is more computationally efficient, and also we need much less
code...

llvm-svn: 289756
2016-12-15 03:02:15 +00:00
Reid Kleckner
a3cd1e4795 Revert "[SCEVExpand] do not hoist divisions by zero (PR30935)"
Reverts r289412. It caused an OOB PHI operand access in instcombine when
ASan is enabled. Reduction in progress.

Also reverts "[SCEVExpander] Add a test case related to r289412"

llvm-svn: 289453
2016-12-12 18:52:32 +00:00
Sanjoy Das
1a88a7426f [SCEVExpander] Add a test case related to r289412
llvm-svn: 289435
2016-12-12 14:57:11 +00:00
Sebastian Pop
9402602249 [SCEVExpand] do not hoist divisions by zero (PR30935)
SCEVExpand computes the insertion point for the components of a SCEV to be code
generated.  When it comes to generating code for a division, SCEVExpand would
not be able to check (at compilation time) all the conditions necessary to avoid
a division by zero.  The patch disables hoisting of expressions containing
divisions by anything other than non-zero constants in order to avoid hoisting
these expressions past conditions that should hold before doing the division.
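
A hedged sketch of the check described above (the helper name is invented for
illustration):

  static bool isSafeToHoistDivision(const Instruction *I) {
    if (I->getOpcode() == Instruction::UDiv ||
        I->getOpcode() == Instruction::SDiv) {
      // Only hoist a division whose divisor is a known non-zero constant;
      // anything else could trap if moved above the condition guarding it.
      auto *Divisor = dyn_cast<ConstantInt>(I->getOperand(1));
      return Divisor && !Divisor->isZero();
    }
    return true;
  }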

The patch passes check-all on x86_64-linux.

Differential Revision: https://reviews.llvm.org/D27216

llvm-svn: 289412
2016-12-12 02:52:51 +00:00
Sanjoy Das
fc775b1e49 [TBAA] Don't generate invalid TBAA when merging nodes
Summary:
Fix a corner case in `MDNode::getMostGenericTBAA` where we can sometimes
generate invalid TBAA metadata.

Reviewers: chandlerc, hfinkel, mehdi_amini, manmanren

Subscribers: mcrosier, llvm-commits

Differential Revision: https://reviews.llvm.org/D26635

llvm-svn: 289403
2016-12-11 20:07:25 +00:00