Mirror of https://github.com/RPCS3/llvm-mirror.git (synced 2024-11-23)

Commit c58ca38a3a:
directly model in the new PM.

This also was an incredibly brittle and expensive update API that was never fully utilized by all the passes that claimed to preserve AA, nor could it reasonably have been extended to all of them. Any number of places add uses of values. If we ever wanted to instrument this reliably, we would want a callback hook much like we have with ValueHandles, but doing this for every use addition seems *extremely* expensive in terms of compile time.

The only user of this update mechanism is GlobalsModRef. The idea of using this to keep it up to date doesn't really work anyway, as its analysis requires a symmetric analysis of two different memory locations. It would be very hard to make updates sufficiently rigorous to *guarantee* symmetric analysis in this way, and it almost certainly isn't true today.

However, folks have been using GMR with this update for a long time and seem not to be hitting the issues. The reported issue that the update hook fixes isn't even a problem any more, as other changes to GetUnderlyingObject worked around it, and that issue stemmed from *many* years ago. As a consequence, a prior patch provided a flag to control the unsafe behavior of GMR, and this patch removes the update mechanism, which has questionable compile-time tradeoffs and is causing problems with moving to the new pass manager.

Note the lack of test updates -- not one test in tree actually requires this update, even for a contrived case.

All of this was extensively discussed on the dev list; this patch will just enact what that discussion decides on. I'm sending it for review in part to show what I'm planning, and in part to show the *amazing* amount of work this avoids. Every call to the AA here is something like three to six indirect function calls, which in the non-LTO pipeline never do any work! =[

Differential Revision: http://reviews.llvm.org/D11214

llvm-svn: 242605
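The "callback hook much like we have with ValueHandles" mentioned above can be made concrete with a toy model. The sketch below is illustrative only and is not LLVM's API -- Value, Use, UseAddedListener, and the listener registry are hypothetical stand-ins -- but it shows where the cost comes from: every single use addition pays an indirect call per registered listener, on one of the hottest paths when building or transforming IR.

// Illustrative sketch only (hypothetical names, not LLVM's actual API):
// a use-addition callback registry analogous in spirit to ValueHandle.
#include <iostream>
#include <vector>

struct Value;
struct Use { Value *UsedBy; };

// Hook an analysis (e.g. something GlobalsModRef-like) would register.
struct UseAddedListener {
  virtual ~UseAddedListener() = default;
  virtual void onUseAdded(Value &Def, Use &U) = 0;
};

struct Value {
  std::vector<Use> Uses;
  static std::vector<UseAddedListener *> Listeners; // global hook registry

  void addUse(Value *UsedBy) {
    Uses.push_back(Use{UsedBy});
    // Every use addition walks the listener list: one virtual (indirect)
    // call per listener, even when no analysis needs the update.
    for (UseAddedListener *L : Listeners)
      L->onUseAdded(*this, Uses.back());
  }
};
std::vector<UseAddedListener *> Value::Listeners;

// Stand-in for an analysis that wants to hear about new uses.
struct CountingListener : UseAddedListener {
  unsigned Count = 0;
  void onUseAdded(Value &, Use &) override { ++Count; }
};

int main() {
  CountingListener GMR;
  Value::Listeners.push_back(&GMR);

  Value Def, A, B;
  Def.addUse(&A);
  Def.addUse(&B);
  std::cout << "callback fired " << GMR.Count << " times\n"; // prints 2
}

The commit takes the opposite approach: drop the hook entirely and control GMR's unsafe behavior with an explicit flag instead.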
IPA
AliasAnalysis.cpp
AliasAnalysisCounter.cpp
AliasAnalysisEvaluator.cpp
AliasDebugger.cpp
AliasSetTracker.cpp
Analysis.cpp
AssumptionCache.cpp
BasicAliasAnalysis.cpp
BlockFrequencyInfo.cpp
BlockFrequencyInfoImpl.cpp
BranchProbabilityInfo.cpp
CaptureTracking.cpp
CFG.cpp
CFGPrinter.cpp
CFLAliasAnalysis.cpp
CGSCCPassManager.cpp
CMakeLists.txt
CodeMetrics.cpp
ConstantFolding.cpp
CostModel.cpp
Delinearization.cpp
DependenceAnalysis.cpp
DivergenceAnalysis.cpp
DominanceFrontier.cpp
DomPrinter.cpp
InstCount.cpp
InstructionSimplify.cpp
Interval.cpp
IntervalPartition.cpp
IteratedDominanceFrontier.cpp
IVUsers.cpp
LazyCallGraph.cpp
LazyValueInfo.cpp
LibCallAliasAnalysis.cpp
LibCallSemantics.cpp
Lint.cpp
LLVMBuild.txt
Loads.cpp
LoopAccessAnalysis.cpp
LoopInfo.cpp
LoopPass.cpp
Makefile
MemDepPrinter.cpp
MemDerefPrinter.cpp
MemoryBuiltins.cpp
MemoryDependenceAnalysis.cpp
MemoryLocation.cpp
ModuleDebugInfoPrinter.cpp
NoAliasAnalysis.cpp
PHITransAddr.cpp
PostDominators.cpp
PtrUseVisitor.cpp
README.txt
RegionInfo.cpp
RegionPass.cpp
RegionPrinter.cpp
ScalarEvolution.cpp
ScalarEvolutionAliasAnalysis.cpp
ScalarEvolutionExpander.cpp
ScalarEvolutionNormalization.cpp
ScopedNoAliasAA.cpp
SparsePropagation.cpp
StratifiedSets.h
TargetLibraryInfo.cpp
TargetTransformInfo.cpp
Trace.cpp
TypeBasedAliasAnalysis.cpp
ValueTracking.cpp
VectorUtils.cpp
Analysis Opportunities:

//===---------------------------------------------------------------------===//

In test/Transforms/LoopStrengthReduce/quadradic-exit-value.ll, the
ScalarEvolution expression for %r is this:

  {1,+,3,+,2}<loop>

Outside the loop, this could be evaluated simply as (%n * %n), however
ScalarEvolution currently evaluates it as

  (-2 + (2 * (trunc i65 (((zext i64 (-2 + %n) to i65) * (zext i64 (-1 + %n) to i65)) /u 2) to i64)) + (3 * %n))

In addition to being much more complicated, it involves i65 arithmetic,
which is very inefficient when expanded into code.

//===---------------------------------------------------------------------===//

In formatValue in test/CodeGen/X86/lsr-delayed-fold.ll, ScalarEvolution is
forming this expression:

  ((trunc i64 (-1 * %arg5) to i32) + (trunc i64 %arg5 to i32) + (-1 * (trunc i64 undef to i32)))

This could be folded to

  (-1 * (trunc i64 undef to i32))

//===---------------------------------------------------------------------===//
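A quick algebraic check of both notes above (added here for illustration; it is not part of the README). For the first note it assumes the recurrence is evaluated at the final iteration index i = %n - 1, which is what makes (%n * %n) the expected exit value.

% Worked check under the assumption stated above (i = n - 1 at loop exit).
\documentclass{article}
\usepackage{amsmath}
\begin{document}

The add recurrence $\{1,+,3,+,2\}$ at iteration $i$ expands to
\[
  1 + 3\binom{i}{1} + 2\binom{i}{2} = 1 + 3i + i(i-1) = (i+1)^2,
\]
so at $i = n-1$ the exit value is $n^2$, i.e.\ \verb|(%n * %n)|.
The expression ScalarEvolution currently emits computes the same value,
only through i65 arithmetic and a truncation:
\[
  -2 + 2\cdot\tfrac{(n-2)(n-1)}{2} + 3n = (n^2 - 3n + 2) + 3n - 2 = n^2.
\]

For the second note, truncation commutes with addition and multiplication
modulo $2^{32}$, so
\[
  \operatorname{trunc}(-x) + \operatorname{trunc}(x)
    = \operatorname{trunc}(-x + x) = 0,
\]
leaving only the $-1 \cdot \operatorname{trunc}(\texttt{undef})$ term.
\end{document}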