mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-25 14:02:52 +02:00
Commit Graph

5581 Commits

Author SHA1 Message Date
Adam Nemet
45d547aa42 [LAA] Remove unused pointer partition argument from addRuntimeCheck, NFC
This variant of addRuntimeCheck is only used now from the LoopVectorizer
which does not use this parameter.

llvm-svn: 243955
2015-08-04 05:16:20 +00:00
Adam Nemet
9cd964a03a [LAA] Remove unused needsAnyChecking(), NFC
llvm-svn: 243921
2015-08-03 23:33:03 +00:00
David Blaikie
95e59129ca -Wdeprecated-clean: Fix cases of violating the rule of 5 in ways that are deprecated in C++11
Various value handles needed to be copy constructible and copy
assignable (mostly for their use in DenseMap). But to avoid an API that
might allow accidental slicing, make these members protected in the base
class and make derived classes final (the special members become
implicitly public there - but disallowing further derived classes that
might be sliced to the intermediate type).
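
A minimal sketch of the protected-in-base, final-derived pattern being described (illustrative class names, not the actual ValueHandle hierarchy):

```
class HandleBase {
protected:
  // Copy operations exist for derived classes (e.g. for use in DenseMap) but
  // are protected here, so a HandleBase cannot be copied on its own, which
  // would slice off the derived part.
  HandleBase() = default;
  HandleBase(const HandleBase &) = default;
  HandleBase &operator=(const HandleBase &) = default;
  ~HandleBase() = default;
};

class WeakHandle final : public HandleBase {
public:
  WeakHandle() = default;
  // The implicitly-defined copy constructor/assignment are public here, so a
  // DenseMap can copy WeakHandle values; 'final' prevents further derivation
  // that could be sliced down to this type.
};
```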

Might be worth having a warning a bit like -Wnon-virtual-dtor that
catches public move/copy assign/ctors in classes with virtual functions.
(suppressable in the same way - by making them protected in the base,
and making the derived classes final) Could be fancier and only diagnose
them when they're actually called, potentially.

Also allow a few default implementations where custom implementations
(especially with non-standard return types) were implemented.

llvm-svn: 243909
2015-08-03 22:30:24 +00:00
Craig Topper
bbb2ce25cc De-constify pointers to Type since they can't be modified. NFC
This was already done in most places a while ago. This just fixes the ones that crept in over time.

llvm-svn: 243842
2015-08-01 22:20:21 +00:00
David Blaikie
fb8a7d97d2 -Wdeprecated-clean: Fix cases of violating the rule of 5 in ways that are deprecated in C++11
llvm-svn: 243788
2015-07-31 21:37:09 +00:00
David Majnemer
34ee3789f3 New EH representation for MSVC compatibility
This introduces new instructions necessary to implement MSVC-compatible
exception handling support.  Most of the middle-end has not been audited or
updated to take them into account, and none of the back-end has been.

Differential Revision: http://reviews.llvm.org/D11097

llvm-svn: 243766
2015-07-31 17:58:14 +00:00
Bruno Cardoso Lopes
8ce897de05 [CaptureTracker] Provide an ordered basic block to PointerMayBeCapturedBefore
This patch is a follow up from r240560 and is a step further into
mitigating the compile time performance issues in CaptureTracker.

By providing the CaptureTracker with a "cached ordered basic block"
instead of computing it every time, MemDepAnalysis can use this cache
throughout its calls to AA->callCapturesBefore, avoiding recomputing it
for every scanned instruction. In the same testcase used in r240560,
compile time is reduced from 2min to 30s.

This also fixes PR22348.

rdar://problem/19230319
Differential Revision: http://reviews.llvm.org/D11364

llvm-svn: 243750
2015-07-31 14:31:35 +00:00
Eric Christopher
520f118385 Rename hasCompatibleFunctionAttributes->areInlineCompatible based
on suggestions. Currently the function is only used for inlining decisions,
and this name is more descriptive of that use.

llvm-svn: 243578
2015-07-29 22:09:48 +00:00
Jingyue Wu
a6a8a2d2b1 [SCEV] Apply NSW and NUW flags via poison value analysis
Summary:
Make Scalar Evolution able to propagate NSW and NUW flags from instructions to SCEVs in some cases. This is based on reasoning about when poison from instructions with these flags would trigger undefined behavior. This gives a 13% speed-up on some Eigen3-based Google-internal microbenchmarks for NVPTX.

There does not seem to be clear agreement about when poison should be considered to propagate through instructions. In this analysis, poison propagates only in cases where that should be uncontroversial.

This change makes LSR able to create induction variables for expressions like &ptr[i + offset] for loops like this:

  for (int i = 0; i < limit; ++i) {
    sum += ptr[i + offset];
  }

Here ptr is a 64 bit pointer and offset is a 32 bit integer. For NVPTX, LSR currently creates an induction variable for i + offset instead, which is not as fast. Improving this situation is what brings the 13% speed-up on some Eigen3-based Google-internal microbenchmarks for NVPTX.
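
As a rough illustration of that difference (not literal LSR output; the element type and non-negative limit are assumptions, the variable names come from the loop above):

```
// Assumes a non-negative 'limit' and an 'int' element type for illustration.
long sum_with_32bit_iv(const int *ptr, int offset, int limit) {
  long sum = 0;
  // Induction variable for (i + offset): the 64-bit address must be re-derived
  // from the 32-bit IV every iteration (sign-extend, scale, add).
  for (int iv = offset; iv < offset + limit; ++iv)
    sum += ptr[iv];
  return sum;
}

long sum_with_pointer_iv(const int *ptr, int offset, int limit) {
  long sum = 0;
  // Pointer induction variable: the address itself is the IV and is simply
  // bumped by the element size, which is what propagating NSW/NUW into SCEV
  // lets LSR form here.
  for (const int *p = ptr + offset, *e = ptr + offset + limit; p != e; ++p)
    sum += *p;
  return sum;
}
```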


There are more details in this discussion on llvmdev.
June: http://lists.cs.uiuc.edu/pipermail/llvmdev/2015-June/thread.html#87234
July: http://lists.cs.uiuc.edu/pipermail/llvmdev/2015-July/thread.html#87392

Patch by Bjarke Roune

Reviewers: eliben, atrick, sanjoy

Subscribers: majnemer, hfinkel, jingyue, meheff, llvm-commits

Differential Revision: http://reviews.llvm.org/D11212

llvm-svn: 243460
2015-07-28 18:22:40 +00:00
Bruno Cardoso Lopes
cdfacd96b4 [LVI] Cleanup whitespaces. NFC
llvm-svn: 243430
2015-07-28 15:53:21 +00:00
Silviu Baranga
97f890ebca [LAA] Add clarifying comments for the checking pointer grouping algorithm. NFC
llvm-svn: 243416
2015-07-28 13:44:08 +00:00
Chandler Carruth
d51b8c5480 [GMR] Teach GlobalsModRef to distinguish an important and safe case of
no-alias with non-addr-taken globals: they cannot alias a captured
pointer.

If the non-global underlying object would have been a capture were it to
alias the global, we can firmly conclude no-alias. It isn't reasonable
for a transformation to introduce a capture in a way observable by an
alias analysis. Consider: even if it were to temporarily capture one
global's address into another global and then restore the other global
afterward, there would be no way for the load in the alias query to
observe that capture event correctly. If it observes it then the
temporary capturing would have changed the meaning of the program,
making it an invalid transformation. Even instrumentation passes or
a pass which is synthesizing stores to global variables to expose race
conditions in programs could not trigger this unless it queried the
alias analysis infrastructure mid-transform, in which case it seems
reasonable to return results from before the transform started.

See the comments in the change for a more detailed outlining of the
theory here.

This should address the primary performance regression found when the
non-conservatively-correct path of the alias query was disabled.

Differential Revision: http://reviews.llvm.org/D11410

llvm-svn: 243405
2015-07-28 11:11:11 +00:00
Chandler Carruth
e831556fca [GMR] Fix a long-standing bug in GlobalsModRef where it failed to clear
out the per-function modref data structures when functions were deleted
or when globals were deleted.

I don't actually know how the global deletion side of this bug hasn't
been hit before, but for the other it just-so-happens that functions
aren't likely to be deleted in the particular part of the LTO pipeline
where we currently enable GMR, so we got lucky.

With this patch, I can self-host with GMR enabled in the normal pass
pipeline!

I was a bit concerned about the compile-time impact of this change, which
is part of what motivated my prior string of patches to make the
per-function datastructure very dense and fast to walk. With those
changes in place, I can't measure a significant compile time difference
(the difference is around 0.1% which is *way* below the noise) before
and after this patch when building a linked bitcode for all of Clang.

Differential Revision: http://reviews.llvm.org/D11453

llvm-svn: 243385
2015-07-28 06:01:57 +00:00
Adam Nemet
7b14f72b61 [LAA] Split out a helper to print a collection of memchecks
This is effectively an NFC, but we can no longer print the index of the
pointer group, so instead I print its address.  This still lets us
cross-check the section that lists the checks against the section that
lists the groups (see how I modified the test).

E.g. before we printed this:

    Run-time memory checks:
    Check 0:
      Comparing group 0:
        %arrayidxC = getelementptr inbounds i16, i16* %c, i64 %store_ind
        %arrayidxC1 = getelementptr inbounds i16, i16* %c, i64 %store_ind_inc
      Against group 1:
        %arrayidxA = getelementptr i16, i16* %a, i64 %ind
        %arrayidxA1 = getelementptr i16, i16* %a, i64 %add
    ...
    Grouped accesses:
      Group 0:
        (Low: %c High: (78 + %c))
          Member: {%c,+,4}<%for.body>
          Member: {(2 + %c),+,4}<%for.body>

Now we print this (changes are underlined):

    Run-time memory checks:
    Check 0:
      Comparing group (0x7f9c6040c320):
                       ~~~~~~~~~~~~~~
        %arrayidxC1 = getelementptr inbounds i16, i16* %c, i64 %store_ind_inc
        %arrayidxC = getelementptr inbounds i16, i16* %c, i64 %store_ind
      Against group (0x7f9c6040c358):
                     ~~~~~~~~~~~~~~
        %arrayidxA1 = getelementptr i16, i16* %a, i64 %add
        %arrayidxA = getelementptr i16, i16* %a, i64 %ind
    ...
    Grouped accesses:
      Group 0x7f9c6040c320:
            ~~~~~~~~~~~~~~
        (Low: %c High: (78 + %c))
          Member: {(2 + %c),+,4}<%for.body>
          Member: {%c,+,4}<%for.body>

llvm-svn: 243354
2015-07-27 23:54:41 +00:00
Sanjoy Das
7c45e992cb [TargetTransformInfo][NFCI] Add TargetTransformInfo::isZExtFree.
Summary:
This function is not used in this change but will be used in a
subsequent change.

Reviewers: mcrosier, chandlerc

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D9180

llvm-svn: 243347
2015-07-27 23:27:43 +00:00
Sanjoy Das
4c063981c7 [IndVars] Make loop varying predicates loop invariant.
Summary:
Was D9784: "Remove loop variant range check when induction variable is
strictly increasing"

This change re-implements D9784 with two differences:

 1. It does not use SCEVExpander and does not generate new
    instructions.  Instead, it does a quick local search for existing
    `llvm::Value`s that it needs when modifying the `icmp`
    instruction.

 2. It is more general -- it deals with both increasing and decreasing
    induction variables.

I've added all of the tests included with D9784, and two more.

As an example on what this change does (copied from D9784):

Given C code:

```
for (int i = M; i < N; i++) { // i is known not to overflow
  if (i < 0) break;
  a[i] = 0;
}
```

This transformation produces:

```
for (int i = M; i < N; i++) {
  if (M < 0) break;
  a[i] = 0;
}
```

Which can be unswitched into:

```
if (!(M < 0)) {
  for (int i = M; i < N; i++)
    a[i] = 0;
}
```

I went back and forth on whether the top level logic should live in
`SimplifyIndvar::eliminateIVComparison` or be put into its own
routine.  Right now I've put it under `eliminateIVComparison` because
even though the `icmp` is not *eliminated*, it no longer is an IV
comparison.  I'm open to putting it in its own helper routine if you
think that is better.

Reviewers: reames, nicholas, atrick

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D11278

llvm-svn: 243331
2015-07-27 21:42:49 +00:00
Adam Nemet
d3a021d6fe [LAA] Upper-case variable names, NFC
llvm-svn: 243313
2015-07-27 19:38:50 +00:00
Adam Nemet
c7915a9342 [LAA] Split out a helper from addRuntimeCheck to generate the check, NFC
llvm-svn: 243312
2015-07-27 19:38:48 +00:00
Matt Arsenault
ff3dae692f Fix assert when inlining a constantexpr addrspacecast
The pointer size of the addrspacecasted pointer might not have matched,
so this would have hit an assert in accumulateConstantOffset.

I think this was here to allow constant folding of a load of an
addrspacecasted constant. Accumulating the offset through the
addrspacecast doesn't make much sense, so something else is necessary
to allow folding the load through this cast.

llvm-svn: 243300
2015-07-27 18:31:03 +00:00
NAKAMURA Takumi
1c7fecc1fe LoopAccessAnalysis.cpp: Tweak r243239 to avoid side effects. It caused different emissions between gcc and clang.
llvm-svn: 243258
2015-07-27 01:35:30 +00:00
Jingyue Wu
91cf96359e Roll forward r243250
r243250 appeared to break clang/test/Analysis/dead-store.c on one of the build
slaves, but I couldn't reproduce this failure locally. Probably a false
positive, as I saw this test reported as broken by r243246 or r243247 too but
passing later without anyone fixing anything.

llvm-svn: 243253
2015-07-26 19:10:03 +00:00
Jingyue Wu
61ee29a54f Revert r243250
breaks tests

llvm-svn: 243251
2015-07-26 18:30:13 +00:00
Jingyue Wu
f4362fe267 [TTI/CostModel] improve TTI::getGEPCost and use it in CostModel::getInstructionCost
Summary:
This patch updates TargetTransformInfoImplCRTPBase::getGEPCost to consider
addressing modes. It now returns TCC_Free when the GEP can be completely folded
to an addressing mode.

I started this patch as I refactored SLSR. Function isGEPFoldable looks common
and is indeed used by some WIP of mine. So I extracted that logic to getGEPCost.

Furthermore, I noticed getGEPCost wasn't directly tested anywhere. The best
testing bed seems CostModel, but its getInstructionCost method invokes
getAddressComputationCost for GEPs which provides very coarse estimation. So
this patch also makes getInstructionCost call the updated getGEPCost for GEPs.
This change inevitably breaks some tests because the cost model changes, but
nothing looks seriously wrong -- if we believe the new cost model is the right
way to go, these tests should be updated.

This patch is not perfect yet -- the comments in some tests need to be updated.
I want to know whether this is the right approach before fixing those details.

Reviewers: chandlerc, hfinkel

Subscribers: aschwaighofer, llvm-commits, aemerson

Differential Revision: http://reviews.llvm.org/D9819

llvm-svn: 243250
2015-07-26 17:28:13 +00:00
Adam Nemet
1745f79e9b [LAA] Begin moving the logic of generating checks out of addRuntimeCheck
Summary:
The goal is to start moving us closer to the model where
RuntimePointerChecking will compute and store the checks.  Then a client
can filter the check according to its requirements and then use the
filtered list of checks with addRuntimeCheck.

Before the patch, this is all done in addRuntimeCheck.  So the patch
starts to split up addRuntimeCheck while providing the old API under
what's more or less a wrapper now.

The new underlying addRuntimeCheck takes a collection of checks now,
expands the code for the bounds, then generates the code for the checks.

I am not completely happy with making expandBounds static because now it
needs so many explicit arguments but I don't want to make the type
PointerBounds part of LAI.  This should get fixed when addRuntimeCheck
is moved to LoopVersioning where it really belongs, IMO.

Audited the assembly diff of the testsuite (including externals).  There
is a tiny bit of assembly churn that is due to the different order in which
the code for the bounds is now expanded
(MultiSource/Benchmarks/Prolangs-C/bison/conflicts.s and with LoopDist
on 456.hmmer/fast_algorithms.s).

Reviewers: hfinkel

Subscribers: klimek, llvm-commits

Differential Revision: http://reviews.llvm.org/D11205

llvm-svn: 243239
2015-07-26 05:32:14 +00:00
Chandler Carruth
017dde1e72 [GMR] Switch the function info we store for every function to be a much
more dense datastructure. We actually only have 3 bits of information
and an often-null pointer here. This fits very nicely into a
pointer-size value in the DenseMap from Function -> Info. Then we take
one more pointer hop to get to a secondary DenseMap from GlobalValue ->
ModRefInfo when we actually have precise info for particular globals.

This is more code than I would really like just to do this packing, but it
ended up reasonably cleanly laid out. It should ensure we don't hit
scaling limitations with more widespread use of GMR.
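
A hedged sketch of the kind of packing described above (illustrative types and names, not the actual GlobalsModRef code): three mod/ref summary bits ride in the low bits of an often-null, 8-byte-aligned pointer to the per-global map.

```
#include "llvm/ADT/DenseMap.h"
#include "llvm/ADT/PointerIntPair.h"
#include "llvm/IR/GlobalValue.h"

namespace sketch {
// Force 8-byte alignment so pointers to the map reliably have 3 low bits free.
struct alignas(8) AlignedGlobalsMap {
  llvm::DenseMap<const llvm::GlobalValue *, unsigned> Map;
};

// Hypothetical per-function record: three summary mod/ref bits plus an
// often-null pointer to precise per-global info, in one pointer-sized value.
class FunctionRecord {
  llvm::PointerIntPair<AlignedGlobalsMap *, 3, unsigned> Data;

public:
  unsigned summaryBits() const { return Data.getInt(); }
  AlignedGlobalsMap *perGlobalInfo() const { return Data.getPointer(); }
  void setSummaryBits(unsigned Bits) { Data.setInt(Bits & 7); }
};
} // namespace sketch
```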

llvm-svn: 242991
2015-07-23 07:50:52 +00:00
Craig Topper
de990d5486 [ScalarEvolution] Change addRequired to addRequiredTransitive on two passes where ScalarEvolution stores long lived raw pointers to objects those passes own.
This prevents the pointers from dangling when those passes are freed.
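
For context, a minimal legacy-pass sketch of the difference (the pass and the choice of LoopInfo here are illustrative, not the actual ScalarEvolution code):

```
#include "llvm/Analysis/LoopInfo.h"
#include "llvm/Pass.h"

namespace {
using namespace llvm;

// A pass that caches pointers into another analysis' data. With plain
// addRequired<>, LoopInfo could be freed while this pass still holds pointers
// into it; addRequiredTransitive<> keeps it alive as long as this pass lives.
struct CachingSketchPass : public FunctionPass {
  static char ID;
  CachingSketchPass() : FunctionPass(ID) {}

  void getAnalysisUsage(AnalysisUsage &AU) const override {
    AU.addRequiredTransitive<LoopInfoWrapperPass>();
    AU.setPreservesAll();
  }

  bool runOnFunction(Function &F) override { return false; }
};
char CachingSketchPass::ID = 0;
} // namespace
```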

http://reviews.llvm.org/D11236
Patch by Steve King.

llvm-svn: 242989
2015-07-23 07:33:48 +00:00
Chandler Carruth
18045e8fbf [GMR] Further improve the FunctionInfo API inside of GlobalsModRef, NFC.
This takes the operation of merging a callee's information into the
current information and embeds it into the FunctionInfo type itself.
This is much cleaner as now we don't need to expose iteration of the
globals, etc.

Also, switched all the uses of a raw integer to maintain the mod/ref
info during the SCC walk into just directly manipulating it in the
FunctionInfo object.

llvm-svn: 242976
2015-07-23 00:12:32 +00:00
Chandler Carruth
3c531f97b2 [GMR] Wrap all of the per-function information behind a more strongly
typed interface as a precursor to rewriting how it is stored.

This way we know that the access paths are controlled and it should be
easy to store these bits in a different way.

No functionality changed.

llvm-svn: 242974
2015-07-22 23:56:31 +00:00
Chandler Carruth
2e896f4f08 [PM/AA] Extract the ModRef enums from the AliasAnalysis class in
preparation for de-coupling the AA implementations.

In order to do this, they had to become fake-scoped using the
traditional LLVM pattern of a leading initialism. These can't be actual
scoped enumerations because they're bitfields and thus are inherently
used as integers.

I've also renamed the behavior enums that are specific to reasoning
about the mod/ref behavior of functions when called. This makes it more
clear that they have a very narrow domain of applicability.
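
A sketch of the naming pattern this refers to; the exact enumerator names and values below are recalled from the post-refactor header and may not match it precisely:

```
// "Fake-scoped" via the MRI_ prefix: a plain enum (not an enum class) because
// the values are combined and tested as bit masks, i.e. used as integers.
enum ModRefInfo {
  MRI_NoModRef = 0,
  MRI_Ref = 1,
  MRI_Mod = 2,
  MRI_ModRef = MRI_Ref | MRI_Mod
};

// Bitwise use that an enum class would make awkward without operator overloads.
inline ModRefInfo unionModRef(ModRefInfo A, ModRefInfo B) {
  return ModRefInfo(unsigned(A) | unsigned(B));
}
```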

I think there is a significantly cleaner API for all of this, but
I don't want to try to do really substantive changes for now; I just
want to refactor these things away from analysis groups. So I'm preserving
the exact original design and just cleaning up the names and style, and
lifting the enums out of the class.

Differential Revision: http://reviews.llvm.org/D10564

llvm-svn: 242963
2015-07-22 23:15:57 +00:00
Chandler Carruth
3b397f6c67 [GMR] Continue my quest to remove linked datastructures from GMR, NFC.
This replaces the next-to-last std::map with a DenseMap. While DenseMap
doesn't yet make tons of sense (there are 32 bytes or so in the value
type), my next change will reduce the value type to a single pointer --
we only need a pointer and 3 bits, and that is exactly what we can have.

llvm-svn: 242956
2015-07-22 22:32:34 +00:00
David Majnemer
dd0c248e77 [ConstantFolding] Support folding loads from a GlobalAlias
The MSVC ABI requires that we generate an alias for the vtable, which
means that looking through a GlobalAlias that cannot be overridden
improves our ability to devirtualize.

Found while investigating PR20801.

Patch by Andrew Zhogin!

Differential Revision: http://reviews.llvm.org/D11306

llvm-svn: 242955
2015-07-22 22:29:30 +00:00
Chandler Carruth
bba928b0ab [GMR] Make the collection of readers and writers of globals much more
efficient, NFC.

Previously, we built up vectors of function pointers to track readers
and writers. The primary problem here is that we would add the same
function to this vector every time we found an instruction that reads or
writes to the pointer. This could be a *lot* of redundant function
pointers. Instead of doing that, we can use a SmallPtrSet.

This does more than just reduce the size of the list of readers or
writers. We walk both lists in their entirety and do a map lookup for
each entry. By having sets, we will only do one map lookup per reader or writer
function.
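
A hedged sketch of the data-structure change (illustrative struct, not the actual GlobalsModRef code): inserting into a SmallPtrSet deduplicates automatically, so each function appears once and is looked up in the map only once.

```
#include "llvm/ADT/SmallPtrSet.h"
#include "llvm/IR/Function.h"

namespace sketch {
// Readers/writers of a global, collected while scanning its uses.
struct PointerUses {
  llvm::SmallPtrSet<const llvm::Function *, 8> Readers;
  llvm::SmallPtrSet<const llvm::Function *, 8> Writers;

  // Repeated calls for the same function are no-ops, unlike push_back on a
  // vector, which previously accumulated one entry per load/store seen.
  void noteRead(const llvm::Function *F) { Readers.insert(F); }
  void noteWrite(const llvm::Function *F) { Writers.insert(F); }
};
} // namespace sketch
```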

But only one user of the pointer analyzer actually needs this
information, so we can also skip accumulating it (and doing a lot of
heap allocations) for all the other pointer analyses. This is
particularly useful because there are very many more pointers in some of
the other cases.

llvm-svn: 242950
2015-07-22 22:10:05 +00:00
Hans Wennborg
34fee45808 Fix -Wextra-semi warnings.
Patch by Eugene Zelenko!

Differential Revision: http://reviews.llvm.org/D11400

llvm-svn: 242930
2015-07-22 20:46:11 +00:00
Chandler Carruth
a4c7c6c264 [GMR] Switch from std::set to SmallPtrSet. NFC.
This almost certainly doesn't matter in some deep sense, but std::set is
essentially always going to be slower here. Now the alias query should
be essentially constant time instead of having to chase the set tree
each time.

llvm-svn: 242893
2015-07-22 11:47:54 +00:00
Chandler Carruth
41327e670f [GMR] Only look in the associated allocs map for an underlying value if
it wasn't one of the indirect globals (which clearly cannot be an
allocation function call). Also only do a single lookup into this map
instead of two. NFC.

llvm-svn: 242892
2015-07-22 11:43:24 +00:00
Chandler Carruth
c9d371fb74 [GMR] Switch to a DenseMap and clean up the iteration loop. NFC.
Since we have to iterate this map fairly often, we should use a map that
is efficient for iteration. It is also almost certainly much faster for
lookups as well. There is more to do in terms of reducing the wasted
overhead of GMR's runtime, though I'm not sure how much of that is
worthwhile.

The loop improvements should hopefully address the code review that
Duncan gave when he saw this code as I moved it around.

llvm-svn: 242891
2015-07-22 11:36:09 +00:00
Chandler Carruth
18b5a704b2 [PM/AA] Try to fix libc++ build bots which require the type used in
std::list to be complete by hoisting the entire definition into the
class. Ugly, but hopefully works.

llvm-svn: 242888
2015-07-22 11:10:41 +00:00
Chandler Carruth
cdb8301de0 [PM/AA] Remove the last of the legacy update API from AliasAnalysis as
part of simplifying its interface and usage in preparation for porting
to work with the new pass manager.

Note that this will likely expose that we have dead arguments, members,
and maybe even pass requirements for AA. I'll be cleaning those up in
separate patches. This just zaps the actual update API.

Differential Revision: http://reviews.llvm.org/D11325

llvm-svn: 242881
2015-07-22 09:49:59 +00:00
Chandler Carruth
939d6a13a3 [PM/AA] Put the 'final' keyword in the correct place. And actually
succeed at compiling my change before committing it too!

llvm-svn: 242879
2015-07-22 09:34:18 +00:00
Chandler Carruth
5204193caa [PM/AA] Replace the only use of the AliasAnalysis::deleteValue API (in
GlobalsModRef) with CallbackVHs that trigger the same behavior.

This is technically more expensive, but in benchmarking some LTO runs,
it seems unlikely to even be above the noise floor. The only way I was
able to measure the performance of GMR at all was to run nothing else
but this one analysis on a linked clang bitcode file. The call graph
analysis still took 5x more time than GMR, and this change at most made
GMR 2% slower (this is well within the noise, so it's hard for me to be
sure that this is an actual change). However, in a real LTO run over the
same bitcode, the GMR run takes so little time that the pass timers
don't measure it.
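
A hedged sketch of the CallbackVH mechanism being used here (the cache and handle names are hypothetical, not the actual GlobalsModRef code):

```
#include "llvm/ADT/DenseMap.h"
#include "llvm/IR/ValueHandle.h"

namespace sketch {
using CacheMap = llvm::DenseMap<const llvm::Value *, unsigned>;

// Erases the stale cache entry when the tracked value is deleted or RAUW'd,
// replacing the old explicit deleteValue() update calls.
class CacheInvalidatingVH final : public llvm::CallbackVH {
  CacheMap &Cache;

  void deleted() override {
    Cache.erase(getValPtr());     // drop the entry for the dying value
    llvm::CallbackVH::deleted();  // default behavior: clear the handle
  }
  void allUsesReplacedWith(llvm::Value * /*New*/) override {
    Cache.erase(getValPtr());     // the old key is about to disappear
  }

public:
  CacheInvalidatingVH(llvm::Value *V, CacheMap &C)
      : llvm::CallbackVH(V), Cache(C) {}
};
} // namespace sketch
```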

With this, I can remove the last update API from the AliasAnalysis
interface, but I'll actually remove the interface hook point in
a follow-up commit.

Differential Revision: http://reviews.llvm.org/D11324

llvm-svn: 242878
2015-07-22 09:27:58 +00:00
Jingyue Wu
98a3974775 [MDA] change BlockScanLimit into a command line option.
Summary:
In the benchmark (https://github.com/vetter/shoc) we are researching,
the duplicated load is not eliminated because MemoryDependenceAnalysis
hits the BlockScanLimit. This patch changes it into a command line option
instead of a hardcoded value.
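
Roughly what such an option looks like; the flag name here is inferred from the test file and the default value is only an assumption:

```
#include "llvm/Support/CommandLine.h"

using namespace llvm;

// Replaces a hardcoded 'const unsigned BlockScanLimit' with a tunable knob.
static cl::opt<unsigned> BlockScanLimit(
    "memdep-block-scan-limit", cl::Hidden,
    cl::init(100), // assumed default; the in-tree value may differ
    cl::desc("The number of instructions to scan in a block in memory "
             "dependency analysis"));
```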

Patched by Xuetian Weng. 

Test Plan: test/Analysis/MemoryDependenceAnalysis/memdep-block-scan-limit.ll

Reviewers: jingyue, reames

Subscribers: reames, llvm-commits

Differential Revision: http://reviews.llvm.org/D11366

llvm-svn: 242842
2015-07-21 21:50:39 +00:00
Sanjoy Das
24f4212b35 [SCEV][NFC] Fix a typo in a comment.
llvm-svn: 242834
2015-07-21 20:59:22 +00:00
Karthik Bhat
bceb74b45e Constfold trunc,rint,nearbyint,ceil and floor using APFloat
A patch by Chakshu Grover!
This patch allows constant folding of the trunc, rint, nearbyint, ceil and floor intrinsics using the APFloat class.
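
The core of the approach, sketched (not the literal ConstantFolding.cpp code): pick the APFloat rounding mode that matches each intrinsic and call roundToIntegral.

```
#include "llvm/ADT/APFloat.h"

// trunc -> rmTowardZero        floor -> rmTowardNegative
// ceil  -> rmTowardPositive    rint/nearbyint -> rmNearestTiesToEven
static llvm::APFloat foldRounding(llvm::APFloat V,
                                  llvm::APFloat::roundingMode RM) {
  V.roundToIntegral(RM); // rounds in place to an integral value
  return V;
}
// e.g. foldRounding(llvm::APFloat(2.7), llvm::APFloat::rmTowardNegative)
// yields 2.0, matching floor(2.7).
```
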
Differential Revision: http://reviews.llvm.org/D11144

llvm-svn: 242763
2015-07-21 08:52:23 +00:00
Chandler Carruth
c58ca38a3a [PM/AA] Remove the addEscapingUse update API that won't be easy to
directly model in the new PM.

This also was an incredibly brittle and expensive update API that was
never fully utilized by all the passes that claimed to preserve AA, nor
could it reasonably have been extended to all of them. Any number of
places add uses of values. If we ever wanted to reliably instrument
this, we would want a callback hook much like we have with ValueHandles,
but doing this for every use addition seems *extremely* expensive in
terms of compile time.

The only user of this update mechanism is GlobalsModRef. The idea of
using this to keep it up to date doesn't really work anyways as its
analysis requires a symmetric analysis of two different memory
locations. It would be very hard to make updates be sufficiently
rigorous to *guarantee* symmetric analysis in this way, and it pretty
certainly isn't true today.

However, folks have been using GMR with this update for a long time and
seem to not be hitting the issues. The reported issue that the update
hook fixes isn't even a problem any more as other changes to
GetUnderlyingObject worked around it, and that issue stemmed from *many*
years ago. As a consequence, a prior patch provided a flag to control
the unsafe behavior of GMR, and this patch removes the update mechanism
that has questionable compile-time tradeoffs and is causing problems
with moving to the new pass manager. Note the lack of test updates --
not one test in tree actually requires this update, even for a contrived
case.

All of this was extensively discussed on the dev list, this patch will
just enact what that discussion decides on. I'm sending it for review in
part to show what I'm planning, and in part to show the *amazing* amount
of work this avoids. Every call to the AA here is something like three
to six indirect function calls, which in the non-LTO pipeline never do
any work! =[

Differential Revision: http://reviews.llvm.org/D11214

llvm-svn: 242605
2015-07-18 03:26:46 +00:00
Chandler Carruth
27209caf10 [PM/AA] Disable the core unsafe aspect of GlobalsModRef in the face of
basic changes to the IR such as folding pointers through PHIs, Selects,
integer casts, store/load pairs, or outlining.

This leaves the feature available behind a flag. This flag's default
could be flipped if necessary, but the real-world performance impact of
this particular feature of GMR may not be sufficiently significant for
many folks to want to run the risk.

Currently, the risk here is somewhat mitigated by half-hearted attempts
to update GlobalsModRef when the rest of the optimizer changes
something. However, I am currently trying to remove that update
mechanism as it makes migrating the AA infrastructure to a form that can
be readily shared between new and old pass managers very challenging.
Without this update mechanism, it is possible that this still unlikely
failure mode will start to trip people, and so I wanted to try to
proactively avoid that.

There is a lengthy discussion on the mailing list about why the core
approach here is flawed, and likely would need to look totally different
to be both reasonably effective and resilient to basic IR changes
occurring. This patch is essentially the first of two which will enact
the result of that discussion. The next patch will remove the current
update mechanism.

Thanks to lots of folks that helped look at this from different angles.
Special thanks to Michael Zolotukhin for doing some very preliminary
benchmarking of LTO without GlobalsModRef to get a rough idea of the
impact we could be facing here. So far, it looks very small, but there
are some concerns lingering from other benchmarking. The default here
may get flipped if performance results end up pointing at this as a more
significant issue.

Also thanks to Pete and Gerolf for reviewing!

Differential Revision: http://reviews.llvm.org/D11213

llvm-svn: 242512
2015-07-17 06:58:24 +00:00
Cong Hou
9e8d1a2460 Add new constructors for LoopInfo/DominatorTree/BFI/BPI
These new constructors make it more natural to construct an object for a function. For example, previously, to build a LoopInfo for a function, we needed four statements:

DominatorTree DT;
LoopInfo LI;
DT.recalculate(F);
LI.analyze(DT);

Now we only need one statement:

LoopInfo LI(DominatorTree(F));

http://reviews.llvm.org/D11274

llvm-svn: 242486
2015-07-16 23:23:35 +00:00
Cong Hou
bd3b6b651a Rename LoopInfo::Analyze() to LoopInfo::analyze() and turn its parameter type to const&.
The benefit of turning the parameter of LoopInfo::analyze() into a const& is that it can now accept an rvalue.

http://reviews.llvm.org/D11250

llvm-svn: 242426
2015-07-16 18:23:57 +00:00
Silviu Baranga
9659e5c213 Fix memcheck interval ends for pointers with negative strides
Summary:
The checking pointer grouping algorithm assumes that the
starts/ends of the pointers are well formed (start <= end).

The runtime memory checking algorithm also assumes this by doing:

 start0 < end1 && start1 < end0

to detect conflicts. This check only works if start0 <= end0 and
start1 <= end1.

This change correctly orders the interval ends by either checking
the stride (if it is constant) or by using min/max SCEV expressions.
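
A small illustration of why the ordering matters (hedged and simplified to plain integers):

```
// The runtime check only works when each interval satisfies start <= end.
static bool mayConflict(unsigned Start0, unsigned End0,
                        unsigned Start1, unsigned End1) {
  return Start0 < End1 && Start1 < End0;
}
// Negative stride recorded unswapped: A = [10, 2), B = [0, 5).
//   mayConflict(10, 2, 0, 5) == false, yet the real ranges 2..10 and 0..5
//   overlap, so a conflict would be missed.
// With A's ends ordered, A = [2, 10):
//   mayConflict(2, 10, 0, 5) == true, and the overlap is detected.
```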

Reviewers: anemet, rengolin

Subscribers: rengolin, llvm-commits

Differential Revision: http://reviews.llvm.org/D11149

llvm-svn: 242400
2015-07-16 14:02:58 +00:00
Adam Nemet
da5196c5c1 [LAA] Split out a helper to check the pointer partitions, NFC
This is made a static public member function to allow the transition of
this logic from LAA to LoopDistribution.  (Technically, it could be an
implementation-local static function but then it would not be accessible
from LoopDistribution.)

llvm-svn: 242376
2015-07-16 02:48:05 +00:00
Cong Hou
be08cf7ca6 Create a wrapper pass for BranchProbabilityInfo.
This new wrapper pass is useful when we want to do branch probability analysis conditionally (e.g. only in PGO mode) but don't want to add one more pass dependence.

http://reviews.llvm.org/D11241

llvm-svn: 242349
2015-07-15 22:48:29 +00:00