Commit Graph

9022 Commits

Author SHA1 Message Date
Evgeniy Stepanov
3519d90668 Use zero-based shadow by default on Android.
llvm-svn: 157317
2012-05-23 11:52:12 +00:00
Stepan Dyatkovskiy
49092667b1 PR1255 (case ranges) related changes in Local Transformations.
llvm-svn: 157315
2012-05-23 08:18:26 +00:00
Nuno Lopes
1a989d236d address some of John Criswell's comments
teach computeAllocSize about realloc, reallocf, and valloc

llvm-svn: 157298
2012-05-22 22:02:19 +00:00
Nuno Lopes
73d40438e2 hopefully fix the CMake build; sorry for the breakage
llvm-svn: 157264
2012-05-22 17:40:46 +00:00
Nuno Lopes
114b8eaa9c add a new pass to instrument loads and stores for run-time bounds checking
move EmitGEPOffset from InstCombine to Transforms/Utils/Local.h

A draft of this patch was reviewed by Andrew, thanks.

llvm-svn: 157261
2012-05-22 17:19:09 +00:00
Nuno Lopes
944814b41a revert my previous patches that introduced an additional parameter to the objectsize intrinsic.
After a lot of discussion, we realized it's not the best option for run-time bounds checking

llvm-svn: 157255
2012-05-22 15:25:31 +00:00
Duncan Sands
39edcc75ac Fix PR12858, a crash due to GVN's PRE not fully removing an instruction from the
leader table.  That's because it wasn't expecting instructions to turn up as the
leader for a value number that is not its own, but equality propagation could
create this situation.  One solution is to have the leader table use a WeakVH,
but this slows down GVN by about 5%.  Instead, just have equality propagation not
add instructions to the leader table, only constants and arguments.  In theory
this might cause GVN to run more (each time it changes something it runs again),
but it doesn't seem to occur enough to cause a slowdown.

llvm-svn: 157251
2012-05-22 14:17:53 +00:00
Dan Gohman
992a69b57c Mark an unreachable region of code with llvm_unreachable.
llvm-svn: 157197
2012-05-21 17:41:28 +00:00
Peter Collingbourne
4b4c08e616 Do not pass an invalid domtree to SimplifyInstruction from
LoopUnswitch.  Fixes PR12887.

llvm-svn: 157140
2012-05-20 01:32:09 +00:00
Peter Collingbourne
0baed83df2 Do not eliminate allocas whose alignment exceeds that of the
copied-in constant, as a subsequent user may rely on the over-alignment.
Fixes PR12885.

llvm-svn: 157134
2012-05-19 22:52:10 +00:00
Dan Gohman
a487e2b57e Fix replacing all the users of objc weak runtime routines
when deleting them. rdar://11434915.

llvm-svn: 157080
2012-05-18 22:17:29 +00:00
David Majnemer
ea3e1ea334 Teach SimplifyLibCalls about stpcpy.
llvm-svn: 156815
2012-05-15 11:46:21 +00:00
Chad Rosier
c3a90c47b9 Move the capture analysis from MemoryDependencyAnalysis to a more general place
so that it can be reused in MemCpyOptimizer.  This analysis is needed to remove
an unnecessary memcpy when returning a struct into a local variable.
rdar://11341081
PR12686

llvm-svn: 156776
2012-05-14 20:35:04 +00:00
Jay Foad
65d25fa204 Teach Function::hasAddressTaken that BlockAddress doesn't really take
the address of a function.

llvm-svn: 156703
2012-05-12 08:30:16 +00:00
Nuno Lopes
11d6ecb6db objectsize: add a few more tests and fix a bug
llvm-svn: 156625
2012-05-11 18:25:29 +00:00
Eli Friedman
1746bfc50e Fix a minor logic mistake transforming compares in instcombine. PR12514.
llvm-svn: 156600
2012-05-11 01:32:59 +00:00
Nuno Lopes
415911a5c7 objectsize: add support for GEPs with non-constant indexes
add an additional parameter to InstCombiner::EmitGEPOffset() to force it to *not* emit operations with the NUW flag

llvm-svn: 156585
2012-05-10 23:17:35 +00:00
Dan Gohman
8b1a3cec89 Teach DeadStoreElimination to eliminate exit-block stores with phi addresses.
llvm-svn: 156558
2012-05-10 18:57:38 +00:00
Nuno Lopes
ea7b37e3ae teach DSE and isInstructionTriviallyDead() about calloc
llvm-svn: 156553
2012-05-10 17:14:00 +00:00
Dan Gohman
9e72870dd1 Fix the objc_storeStrong recognizer to stop before walking off the
end of a basic block if there's no store.

llvm-svn: 156520
2012-05-09 23:08:33 +00:00
Nuno Lopes
3d7a8137ee objectsize:
refactor code a bit to enable future changes to support run-time information
add support to compute allocation sizes at run-time if penalty > 1 (e.g., malloc(x), calloc(x, y), and VLAs)

llvm-svn: 156515
2012-05-09 21:30:57 +00:00
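
For illustration, the run-time sizes described above fall directly out of the
allocation call's own arguments; a minimal C++ sketch (not taken from the
patch itself):

  #include <cstdio>
  #include <cstdlib>

  int main() {
    std::size_t x = 10, y = 8;
    void *a = std::malloc(x);     // run-time object size is simply x
    void *b = std::calloc(x, y);  // run-time object size is x * y
    std::printf("malloc: %zu bytes, calloc: %zu bytes\n", x, x * y);
    std::free(a);
    std::free(b);
    return 0;
  }
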
Craig Topper
749c95a942 Remove unused variable to get rid of warning.
llvm-svn: 156466
2012-05-09 07:08:58 +00:00
Dan Gohman
0f60d6f9b0 Miscellaneous accumulated cleanups.
llvm-svn: 156445
2012-05-08 23:39:44 +00:00
Dan Gohman
b47d02f929 Fix objc_storeStrong pattern matching to catch a potential use of the
old value after the store but before it is released.
This fixes rdar://11116986.

llvm-svn: 156442
2012-05-08 23:34:08 +00:00
Duncan Sands
c9f011a85b Calling ReassociateExpression recursively is extremely dangerous since it will
replace the operands of expressions with only one use with undef and generate
a new expression for the original without using RAUW to update the original.
Thus any copies of the original expression held in a vector may end up
referring to some bogus value - and using a ValueHandle won't help since there
is no RAUW.  There is already a mechanism for getting the effect of recursion
non-recursively: adding the value to be recursed on to RedoInsts.  But it wasn't
being used systematically.  Have various places where recursion had snuck in at
some point use the RedoInsts mechanism instead.  Fixes PR12169.

llvm-svn: 156379
2012-05-08 12:16:05 +00:00
Andrew Trick
2da17d5df9 Allow NULL LoopPassManager argument in UnrollLoop. PR12734.
llvm-svn: 156358
2012-05-08 02:52:09 +00:00
Owen Anderson
1e7a4f0f91 Teach reassociate to commute FMul's and FAdd's in order to canonicalize the order of their operands across instructions. This allows for greater CSE opportunities.
llvm-svn: 156323
2012-05-07 20:47:23 +00:00
Benjamin Kramer
786f7671ab Switch the select to branch transformation on by default.
The primitive conservative heuristic seems to give a slight overall
improvement while not regressing stuff. Make it available to wider
testing. If you notice any speed regressions (or significant code
size regressions) let me know!

llvm-svn: 156258
2012-05-06 14:25:16 +00:00
Jakub Staszak
b3bddb41cb Remove trailing spaces.
llvm-svn: 156257
2012-05-06 13:52:31 +00:00
Benjamin Kramer
0463564612 CodeGenPrepare: Add a transform to turn selects into branches in some cases.
This came up when a change in block placement formed a cmov and slowed down a
hot loop by 50%:

	ucomisd	(%rdi), %xmm0
	cmovbel	%edx, %esi

cmov is a really bad choice in this context because it doesn't get branch
prediction. If we emit it as a branch, an out-of-order CPU can do a better job
(if the branch is predicted right) and avoid waiting for the slow load+compare
instruction to finish. Of course it won't help if the branch is unpredictable,
but those are really rare in practice.

This patch uses a dumb, conservative heuristic: it turns all cmovs that have one
use and a direct memory operand into branches. cmovs usually save some code
size, so we disable the transform in -Os mode. In-order architectures are
unlikely to benefit as well; those are covered by the
"predictableSelectIsExpensive" flag.

It would be better to reuse branch probability info here, but BPI doesn't
support select instructions currently. It would make sense to use the same
heuristics as the if-converter pass, which does the opposite direction of this
transform.


The test suite shows a small improvement here and there on corei7-level machines,
but the actual results depend a lot on the microarchitecture used. The
transformation is currently disabled by default and available by passing the
-enable-cgp-select2branch flag to the code generator.

Thanks to Chandler for the initial test case, and to Evan Cheng for providing
me with comments and test-suite numbers that were more stable than mine :)

llvm-svn: 156234
2012-05-05 12:49:22 +00:00
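
A hedged C++ sketch of the source shape being described: a one-use select whose
compare reads directly from memory, the pattern that tends to become a cmov:

  // On x86 this often compiles to a load + ucomisd + cmov; emitting a
  // branch instead lets an out-of-order CPU run ahead of the slow
  // load+compare when the branch predicts well.
  int pick(const double *p, double x, int a, int b) {
    return x > *p ? a : b;  // one-use select, memory operand in the compare
  }
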
Stepan Dyatkovskiy
469935e0ae Small fix in InstCombineCasts.cpp. Restored "alloca + bitcast" reduction for the case when the alloca's size is calculated with "add/sub/... nsw" arithmetic.
Also fixed the 2011-06-13-nsw-alloca.ll test.

llvm-svn: 156231
2012-05-05 07:09:40 +00:00
Chandler Carruth
51819a2bcf Teach the code extractor how to extract a sequence of blocks from
RegionInfo's RegionNode. This mirrors the logic for automating the
extraction from a Loop.

llvm-svn: 156208
2012-05-04 21:33:30 +00:00
Chandler Carruth
8cdf727fc7 Factor the computation of input and output sets into a public interface
of the CodeExtractor utility. This allows speculatively computing input
and output sets to measure the likely size impact of the code
extraction.

These sets cannot be reused sadly -- we mutate the function prior to
forming the final sets used by the actual extraction.

The interface has been revamped slightly to make it easier to use
correctly by making the interface const and sinking the computation of
the number of exit blocks into the full extraction function and away
from the rest of this logic which just computed two output parameters.

llvm-svn: 156168
2012-05-04 11:20:27 +00:00
Chandler Carruth
b6a7c286f3 Rather than trying to gracefully handle input sequences with repeated
blocks, assert that this doesn't happen. We don't want to bother trying
to support this call pattern as it isn't necessary.

llvm-svn: 156167
2012-05-04 11:17:06 +00:00
Chandler Carruth
a74f417e2e Fix a goof with my previous commit by completely returning when we
detect an ineligible block rather than just breaking out of the loop.

llvm-svn: 156166
2012-05-04 11:14:19 +00:00
Chandler Carruth
8ce878e46c Hoist a safety assert from the extraction method into the construction
of the extractor itself.

llvm-svn: 156164
2012-05-04 10:26:45 +00:00
Chandler Carruth
67c334679c Move the CodeExtractor utility to a dedicated header file / source file,
and expose it as a utility class rather than as free function wrappers.

The simple free-function interface works well for the bugpoint-specific
pass's uses of code extraction, but in an upcoming patch for more
advanced code extraction, they simply don't expose a rich enough
interface. I need to expose various stages of the process of doing the
code extraction and query information to decide whether or not to
actually complete the extraction or give up.

Rather than build up a new predicate model and pass that into these
functions, just take the class that was actually implementing the
functions and lift it up into a proper interface that can be used to
perform code extraction. The interface is cleaned up and re-documented
to work better in a header. It also is now setup to accept the blocks to
be extracted in the constructor rather than in a method.

In passing this essentially reverts my previous commit here exposing
a block-level query for eligibility of extraction. That is no longer
necessary with the more rich interface as clients can query the
extraction object for eligibility directly. This will reduce the number
of walks of the input basic block sequence by quite a bit which is
useful if this enters the normal optimization pipeline.

llvm-svn: 156163
2012-05-04 10:18:49 +00:00
Bill Wendling
8661cdc0f4 Add 'landingpad' instructions to the list of instructions to ignore.
Also combine the code in the 'assert' statement.

llvm-svn: 156155
2012-05-04 04:22:32 +00:00
Chandler Carruth
a3a5c6ba2c A pile of long over-due refactorings here. There are some very, *very*
minor behavior changes with this, but nothing I have seen evidence of in
the wild or expect to be meaningful. The real goal is unifying our logic
and simplifying the interfaces. A summary of the changes follows:

- Make 'callIsSmall' actually accept a callsite so it can handle
  intrinsics, and simplify callers appropriately.
- Nuke a completely bogus declaration of 'callIsSmall' that was still
  lurking in InlineCost.h... No idea how this got missed.
- Teach the 'isInstructionFree' about the various more intelligent
  'free' heuristics that got added to the inline cost analysis during
  review and testing. This mostly surrounds int->ptr and ptr->int casts.
- Switch most of the interesting parts of the inline cost analysis that
  were essentially computing 'is this instruction free?' to use the code
  metrics routine instead. This way we won't keep duplicating logic.

All of this is motivated by the desire to allow other passes to compute
a roughly equivalent 'cost' metric for a particular basic block as the
inline cost analysis. Sadly, re-using the same analysis for both is
really messy because only the actual inline cost analysis is ever going
to go to the contortions required for simplification, SROA analysis,
etc.

llvm-svn: 156140
2012-05-04 00:58:03 +00:00
Chandler Carruth
a75274c657 Factor the logic for testing whether a basic block is viable for code
extraction into a public interface. Also clean it up and apply it more
consistently such that we check for landing pads *anywhere* in the
extracted code, not just in single-block extraction.

This will be used to guide decisions in passes that are planning to
eventually perform a round of code extraction.

llvm-svn: 156114
2012-05-03 22:26:53 +00:00
Nuno Lopes
2762496a1a remove calls to calloc if the allocated memory is not used (it was already being done for malloc)
fix a few typos found by Chad in my previous commit

llvm-svn: 156110
2012-05-03 22:08:19 +00:00
Nuno Lopes
26239aeb99 add support for calloc to objectsize lowering
llvm-svn: 156102
2012-05-03 21:19:58 +00:00
Nuno Lopes
fe63eee05b replace 'break's with 'return 0' in visitCallInst code for objectsize, since there is no need to fall back to visitCallSite.
This gives a 0.9% improvement in a test case

llvm-svn: 156069
2012-05-03 16:06:07 +00:00
Bill Wendling
055a725884 Whitespace cleanup.
llvm-svn: 156034
2012-05-02 23:43:23 +00:00
Kostya Serebryany
f224182c86 [tsan] typo and style (thanks to Nick Lewycky)
llvm-svn: 155986
2012-05-02 13:12:19 +00:00
Bill Wendling
4cb38868b9 The value held in the vector may be RAUW'ed by some of the canonicalization
methods. Use a weak value handle to keep up with this.
PR12245

llvm-svn: 155984
2012-05-02 09:59:45 +00:00
Nick Lewycky
fd4342c2f1 An instruction in a loop is not guaranteed to be executed just because the loop
has no exit blocks. Fixes PR12706!

llvm-svn: 155884
2012-05-01 04:03:01 +00:00
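
A hedged C++ illustration of this class of bug; the volatile access keeps the
infinite loop well-defined:

  // The loop below has no exit blocks, yet the division is still only
  // conditionally executed. Treating it as guaranteed-to-execute (and,
  // say, hoisting it) could introduce a divide-by-zero the original
  // program never performs.
  void spin(int n, int d) {
    volatile int acc = 0;
    for (;;) {          // infinite loop: no exit blocks
      if (d != 0)
        acc += n / d;   // reached only when d != 0
    }
  }
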
Lang Hames
26d71c9d0a Add support for llvm.arm.neon.vmull* intrinsics to InstCombine. Fixes
<rdar://problem/11291436>.

This is a second attempt at a fix for this, the first was r155468. Thanks
to Chandler, Bob and others for the feedback that helped me improve this.

llvm-svn: 155866
2012-05-01 00:20:38 +00:00
Bill Wendling
5a1a6421ca Second attempt at PR12573:
Allow the "SplitCriticalEdge" function to split the edge to a landing pad. If
the pass is *sure* that it thinks it knows what it's doing, then it may go ahead
and specify that the landing pad can have its critical edge split. The loop
unswitch pass is one of these passes. It will split the critical edges of all
edges coming from a loop to a landing pad not within the loop. Doing so will
retain important loop analysis information, such as loop simplify.

llvm-svn: 155817
2012-04-30 10:44:54 +00:00
Bill Wendling
b9975200ed Use an ArrayRef instead of explicit vector type.
llvm-svn: 155816
2012-04-30 10:25:51 +00:00
Bill Wendling
16341abe85 Remove hack from r154987. The problem persists even with it, so it's not even a good hack.
llvm-svn: 155813
2012-04-30 09:23:48 +00:00
Rafael Espindola
314a1a477a Make sure HoistInsertPosition finds a position that is dominated by all
inputs.

llvm-svn: 155809
2012-04-30 03:53:06 +00:00
Hal Finkel
a565d03d78 Don't vectorize target-specific types (ppc_fp128, x86_fp80, etc.).
Target-specific types should not be vectorized. As a practical matter,
these types are already register matched (at least in the x86 case),
and codegen does not always work correctly (at least in the ppc case,
and this is not worth fixing because ppc_fp128 is currently broken and
will probably go away soon).

llvm-svn: 155729
2012-04-27 19:34:00 +00:00
David Blaikie
296c942e88 Change the recursion depth limit to uint32 to fix a warning.
llvm-svn: 155727
2012-04-27 19:30:32 +00:00
Dan Gohman
1bc0d2e1bc Miscellaneous accumulated cleanups.
llvm-svn: 155725
2012-04-27 18:56:31 +00:00
Mon P Wang
85af068593 Add an early bailout to IsValueFullyAvailableInBlock from deeply nested blocks.
The limit is set to an arbitrary 1000 recursion depth to avoid stack overflow
issues. <rdar://problem/11286839>.

llvm-svn: 155722
2012-04-27 18:09:28 +00:00
Kostya Serebryany
c855c442c8 [asan] small optimization: do not emit "x+0" instructions
llvm-svn: 155701
2012-04-27 10:04:53 +00:00
Kostya Serebryany
0cd695bd39 [tsan] Atomic support for ThreadSanitizer, patch by Dmitry Vyukov
llvm-svn: 155698
2012-04-27 07:31:53 +00:00
Jakob Stoklund Olesen
185c3797be Break up getProfitableChainIncrement().
The required checks are moved to ChainInstruction() itself and the
policy decisions are moved to IVChain::isProfitableInc().

Also cache the ExprBase in IVChain to avoid frequent recomputations.

No functional change intended.

llvm-svn: 155676
2012-04-26 23:33:11 +00:00
Jakob Stoklund Olesen
613b89fecd Turn IVChain into a struct.
No functional change intended.

llvm-svn: 155675
2012-04-26 23:33:09 +00:00
Chad Rosier
f3d4646377 Add instcombine patterns for the following transformations:
(x & y) | (x ^ y) -> x | y 
 (x & y) + (x ^ y) -> x | y 

Patch by Manman Ren.
rdar://10770603

llvm-svn: 155674
2012-04-26 23:29:14 +00:00
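
A quick C++ sanity check of the two identities (illustration only): x & y and
x ^ y have disjoint set bits, so OR-ing them and adding them both reproduce
x | y.

  #include <cassert>
  #include <cstdint>

  int main() {
    for (uint32_t x = 0; x < 256; ++x)
      for (uint32_t y = 0; y < 256; ++y) {
        assert(((x & y) | (x ^ y)) == (x | y));  // first transform
        assert(((x & y) + (x ^ y)) == (x | y));  // second transform
      }
    return 0;
  }
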
Chandler Carruth
587c136c31 Teach the reassociate pass to fold chains of multiplies with repeated
elements to minimize the number of multiplies required to compute the
final result. This uses a heuristic to attempt to form near-optimal
binary exponentiation-style multiply chains. While there are some cases
it misses, it seems to do at least a decent job on a very diverse range of
inputs.

Initial benchmarks show no interesting regressions, and an 8%
improvement on SPASS. Let me know if any other interesting results (in
either direction) crop up!

Credit to Richard Smith for the core algorithm, and helping code the
patch itself.

llvm-svn: 155616
2012-04-26 05:30:30 +00:00
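
A hedged C++ sketch of the binary-exponentiation idea behind the transform:
n repeated factors of x computed with O(log n) multiplies instead of n - 1.

  double pow_by_squaring(double x, unsigned n) {
    double result = 1.0;
    while (n) {
      if (n & 1)
        result *= x;  // odd exponent: fold in one factor
      x *= x;         // square the base
      n >>= 1;
    }
    return result;
  }
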
Jakob Stoklund Olesen
e2913e1ad5 Print IV chain numbers while collecting them.
llvm-svn: 155567
2012-04-25 18:01:32 +00:00
Lang Hames
7f69fbca29 Reverting r155468. Chris and Chandler have convinced me that it's dangerous and
in poor taste.

Talking through some alternate solutions with Chandler.

llvm-svn: 155530
2012-04-25 02:16:54 +00:00
Dan Gohman
64171a0b3b Simplify the known retain count tracking; use a boolean state instead
of a precise count. Also, move RRInfo's Partial field into PtrState,
now that it won't increase the size.

llvm-svn: 155513
2012-04-25 00:50:46 +00:00
Dan Gohman
3a24d34041 Build custom predecessor and successor lists for each basic block.
These lists exclude invoke unwind edges and loop backedges which
are being ignored. This makes it easier to ignore them
consistently.

llvm-svn: 155500
2012-04-24 22:53:18 +00:00
Lang Hames
08eb5f2340 Add support for llvm.arm.neon.vmull* intrinsics to InstCombine. This fixes
<rdar://problem/11291436>.

llvm-svn: 155468
2012-04-24 18:58:36 +00:00
Jakob Stoklund Olesen
6c1440cf27 Reapply r155136 after fixing PR12599.
Original commit message:

Defer some shl transforms to DAGCombine.

The shl instruction is used to represent multiplication by a constant
power of two as well as bitwise left shifts. Some InstCombine
transformations would turn an shl instruction into a bit mask operation,
making it difficult for later analysis passes to recognize the
constant multiplication.

Disable those shl transformations, deferring them to DAGCombine time.
An 'shl X, C' instruction is now treated mostly the same way as 'mul X, C'.

These transformations are deferred:

  (X >>? C) << C   --> X & (-1 << C)  (When X >> C has multiple uses)
  (X >>? C1) << C2 --> X << (C2-C1) & (-1 << C2)   (When C2 > C1)
  (X >>? C1) << C2 --> X >>? (C1-C2) & (-1 << C2)  (When C1 > C2)

The corresponding exact transformations are preserved, just like
div-exact + mul:

  (X >>?,exact C) << C   --> X
  (X >>?,exact C1) << C2 --> X << (C2-C1)
  (X >>?,exact C1) << C2 --> X >>?,exact (C1-C2)

The disabled transformations could also prevent the instruction selector
from recognizing rotate patterns in hash functions and cryptographic
primitives. I have a test case for that, but it is too fragile.

llvm-svn: 155362
2012-04-23 17:39:52 +00:00
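
A hedged C++ check of the first deferred identity for logical (unsigned)
shifts, where "-1 << C" is the mask with the low C bits cleared:

  #include <cassert>
  #include <cstdint>

  int main() {
    for (uint32_t x = 0; x < 1024; ++x)
      for (uint32_t c = 1; c < 32; ++c) {
        uint32_t mask = ~0u << c;               // the "-1 << C" mask
        assert(((x >> c) << c) == (x & mask));  // shifting clears low bits
      }
    return 0;
  }
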
Alexander Potapenko
1f7d60fd8a Fix issue 67 by checking that the interface functions weren't redefined in the compiled source file.
llvm-svn: 155346
2012-04-23 10:47:31 +00:00
Kostya Serebryany
67e32a152a [tsan] use llvm/ADT/Statistic.h for tsan stats
llvm-svn: 155341
2012-04-23 08:44:59 +00:00
Jakob Stoklund Olesen
3d22f26e88 Revert r155136 "Defer some shl transforms to DAGCombine."
While the patch was perfect and defect-free, it exposed a really nasty
bug in X86 SelectionDAG that caused an llc crash when compiling lencod.

I'll put the patch back in after fixing the SelectionDAG problem.

llvm-svn: 155181
2012-04-20 00:38:45 +00:00
Bill Wendling
fd7c52fe58 Put this expensive check below the less expensive ones.
llvm-svn: 155166
2012-04-19 23:31:07 +00:00
Dan Gohman
f4472e9a1f Avoid a bug in the path count computation that caused an infinite
loop repeatedly making the same change. This is for rdar://11256239.

llvm-svn: 155160
2012-04-19 21:50:46 +00:00
Jakob Stoklund Olesen
1507d20c57 Defer some shl transforms to DAGCombine.
The shl instruction is used to represent multiplication by a constant
power of two as well as bitwise left shifts. Some InstCombine
transformations would turn an shl instruction into a bit mask operation,
making it difficult for later analysis passes to recognize the
constant multiplication.

Disable those shl transformations, deferring them to DAGCombine time.
An 'shl X, C' instruction is now treated mostly the same way as 'mul X, C'.

These transformations are deferred:

  (X >>? C) << C   --> X & (-1 << C)  (When X >> C has multiple uses)
  (X >>? C1) << C2 --> X << (C2-C1) & (-1 << C2)   (When C2 > C1)
  (X >>? C1) << C2 --> X >>? (C1-C2) & (-1 << C2)  (When C1 > C2)

The corresponding exact transformations are preserved, just like
div-exact + mul:

  (X >>?,exact C) << C   --> X
  (X >>?,exact C1) << C2 --> X << (C2-C1)
  (X >>?,exact C1) << C2 --> X >>?,exact (C1-C2)

The disabled transformations could also prevent the instruction selector
from recognizing rotate patterns in hash functions and cryptographic
primitives. I have a test case for that, but it is too fragile.

llvm-svn: 155136
2012-04-19 16:46:26 +00:00
Dan Gohman
a99c119e05 Don't crash on code where the user put __attribute__((constructor)) on
a function with arguments. This fixes rdar://11265785.

llvm-svn: 155073
2012-04-18 22:24:33 +00:00
Bill Wendling
c37741ca5a Use a heavy hammer to fix PR12573.
If the loop contains invoke instructions, whose unwind edge escapes the loop,
then don't try to unswitch the loop. Doing so may cause the unwind edge to be
split, which not only is non-trivial but doesn't preserve loop simplify
information.

Fixes PR12573

llvm-svn: 154987
2012-04-18 06:00:09 +00:00
Andrew Trick
a5981a21f9 loop-reduce: Add an early bailout to catch extremely large loops.
This introduces a threshold of 200 IV Users, which is very
conservative but should be sufficient to avoid a serious compile-time
sink or stack overflow. The llvm test-suite with LTO never exceeds 190
users per loop.

The bug doesn't relate to a specific type of loop. Checking in an
arbitrary giant loop as a unit test would be silly.

Fixes rdar://11262507.

llvm-svn: 154983
2012-04-18 04:00:10 +00:00
Joe Groff
cc9c07aacc fix PR12559: mark unavailable win32 math libcalls
also fix SimplifyLibCalls to use TLI rather than compile-time conditionals to enable optimizations on floor, ceil, round, rint, and nearbyint

llvm-svn: 154960
2012-04-17 23:05:54 +00:00
Hal Finkel
5e614e7520 Fix style violation in BBVectorize (pointed out by Bill Wendling)
llvm-svn: 154810
2012-04-16 12:39:17 +00:00
Bill Wendling
66282c6d7f Add a Fixme.
llvm-svn: 154793
2012-04-16 04:23:52 +00:00
Hal Finkel
4f7adc1f50 Simplify checking for pointer types in BBVectorize (this change was suggested by Duncan).
llvm-svn: 154787
2012-04-16 03:49:42 +00:00
Hal Finkel
028d6e153e Fix an error in BBVectorize important for vectorizing pointer types.
When vectorizing pointer types it is important to realize that potential
pairs cannot be connected via the address pointer argument of a load or store.
This is because even after vectorization, the address is still a scalar because
the address of the higher half of the pair is implicit from the address of the
lower half (it need not be, and should not be, explicitly computed).

llvm-svn: 154735
2012-04-14 07:32:50 +00:00
Hal Finkel
c55edb7b35 Enhance BBVectorize to more-properly handle pointer values and vectorize GEPs.
llvm-svn: 154734
2012-04-14 07:32:43 +00:00
Hal Finkel
12b4c41203 Add support to BBVectorize for vectorizing selects.
llvm-svn: 154700
2012-04-13 20:45:45 +00:00
Dan Gohman
0387e6b701 Add some comments, and fix a few places that missed setting Changed.
llvm-svn: 154687
2012-04-13 18:57:48 +00:00
Dan Gohman
d5743c7fd0 Consider ObjC runtime calls objc_storeWeak and others which make a copy of
their argument as "escape" points for objc_retainBlock optimization.
This fixes rdar://11229925.

llvm-svn: 154682
2012-04-13 18:28:58 +00:00
Hal Finkel
f8611de2a6 By default, use Early-CSE instead of GVN for vectorization cleanup.
As has been suggested by Duncan and others, Early-CSE and GVN should
do similar redundancy elimination, but Early-CSE is much less expensive.
Most of my autovectorization benchmarks show a performance regression, but
all of these are < 0.1%, and so I think that it is still worth using
the less expensive pass.

llvm-svn: 154673
2012-04-13 17:15:33 +00:00
Dan Gohman
81ac0c921f Use the new Use-aware dominates method to apply the objc runtime
library return value optimization for phi uses. Even when the
phi itself is not dominated, the specific use may be dominated.

llvm-svn: 154647
2012-04-13 01:08:28 +00:00
Bill Wendling
8659a23f4a Code-gen may inject code into the IR before it emits the ASM. The linker
obviously cannot know that this code is present, let alone used. So prevent the
internalize pass from internalizing those global values which code-gen may
insert.

llvm-svn: 154645
2012-04-13 01:06:27 +00:00
Dan Gohman
6a5b02f8ee Don't move objc_autorelease calls past autorelease pool boundaries when
optimizing autorelease calls on phi nodes with null operands.
This fixes rdar://11207070.

llvm-svn: 154642
2012-04-13 00:59:57 +00:00
Chad Rosier
b41586c8e1 Typo.
llvm-svn: 154522
2012-04-11 19:21:58 +00:00
Chandler Carruth
80c3e3bbba Add two statistics to help track how we are computing the inline cost.
Yea, 'NumCallerCallersAnalyzed' isn't a great name, suggestions welcome.

llvm-svn: 154492
2012-04-11 10:15:10 +00:00
Kostya Serebryany
3047a70ed9 [tsan] two more compile-time optimizations:
- don't instrument reads from constant globals.
Saves ~1.5% of instrumented instructions on CPU2006
(counting static instructions, not their execution).
- don't instrument reads from the vtable (which is a global constant too).
Saves ~5%.

I did not measure the run-time impact of this,
but it is certainly non-negative.

llvm-svn: 154444
2012-04-10 22:29:17 +00:00
Kostya Serebryany
01d463472d [tsan] compile-time instrumentation: do not instrument a read if
a write to the same temp follows in the same BB.
Also add stats printing.

On Spec CPU2006 this optimization saves roughly 4% of instrumented reads
(which is 3% of all instrumented accesses):
Writes            : 161216
Reads             : 446458
Reads-before-write: 18295

llvm-svn: 154418
2012-04-10 18:18:56 +00:00
Andrew Trick
7230fee696 Fix PR12513: Loop unrolling breaks with indirect branches.
Take this opportunity to generalize the indirectbr bailout logic for
loop transformations. CFG transformations will never get indirectbr
right, and there's no point trying.

llvm-svn: 154386
2012-04-10 05:14:42 +00:00
Andrew Trick
83a330c1b9 whitespace
llvm-svn: 154385
2012-04-10 05:14:37 +00:00
Chandler Carruth
b3fb4be360 Teach InstCombine to nuke a common alloca pattern -- an alloca which has
GEPs, bit casts, and stores reaching it but no other instructions. These
often show up during the iterative processing of the inliner, SROA, and
DCE. Once we hit this point, we can completely remove the alloca. These
were actually showing up in the final, fully optimized code in a bunch
of inliner tests I've been working on, and notably they show up after
LLVM finishes optimizing away all function calls involved in
hash_combine(a, b).

llvm-svn: 154285
2012-04-08 14:36:56 +00:00
Hongbin Zheng
48758c581f Refactor: Use positive field names in VectorizeConfig.
llvm-svn: 154249
2012-04-07 03:56:23 +00:00
Chandler Carruth
020a15db9d Sink the collection of return instructions until after *all*
simplification has been performed. This is a bit less efficient
(requires another ilist walk of the basic blocks) but shouldn't matter
in practice. More importantly, it's just too much work to keep track of
all the various ways the return instructions can be mutated while
simplifying them. This fixes yet another crasher, reported by Daniel
Dunbar.

llvm-svn: 154179
2012-04-06 17:21:31 +00:00
Duncan Sands
c7d0fdb71f Make GVN's propagateEquality non-recursive. No intended functionality change.
The modifications are a lot more trivial than they appear to be in the diff!

llvm-svn: 154174
2012-04-06 15:31:09 +00:00
Chandler Carruth
dc52b30dac Sink the return instruction collection until after we're done deleting
dead code, including dead return instructions in some cases. Otherwise,
we end up having a bogus pointer to a return instruction that blows up
much further down the road.

It turns out that this pattern is both simpler to code, easier to update
in the face of enhancements to the inliner cleanup, and likely cheaper
given that it won't add dead instructions to the list.

Thanks to John Regehr's numerous test cases for teasing this out.

llvm-svn: 154157
2012-04-06 01:11:52 +00:00
Dan Gohman
a5e2200b2a Fix accidentally inverted logic from r152803, and make the
testcase slightly less trivial. This fixes rdar://11171718.

llvm-svn: 154118
2012-04-05 20:27:21 +00:00
Hongbin Zheng
4da3f9fa46 BBVectorize: Add the const modifier to the VectorizeConfig because we won't
modify it.

llvm-svn: 154098
2012-04-05 16:07:49 +00:00
Hongbin Zheng
7a4e40f87f Introduce the VectorizeConfig class, with which we can control the behavior
of the BBVectorizePass without using command line options. As pointed out
by Hal, we can ask the TargetLoweringInfo for the architecture-specific
VectorizeConfig to perform vectorizing with architecture-specific
information.

llvm-svn: 154096
2012-04-05 15:46:55 +00:00
Hongbin Zheng
8d380b332d Add the function "vectorizeBasicBlock" which allows users to vectorize a
BasicBlock in other passes; e.g., we can call vectorizeBasicBlock in the
loop unroll pass right after the loop is unrolled.

llvm-svn: 154089
2012-04-05 08:05:16 +00:00
Jakob Stoklund Olesen
e1ae4f161c Pass the right sign to TLI->isLegalICmpImmediate.
LSR can fold three addressing modes into its ICmpZero node:

  ICmpZero BaseReg + Offset      => ICmp BaseReg, -Offset
  ICmpZero -1*ScaleReg + Offset  => ICmp ScaleReg, Offset
  ICmpZero BaseReg + -1*ScaleReg => ICmp BaseReg, ScaleReg

The first two cases are only used if TLI->isLegalICmpImmediate() likes
the offset.

Make sure the right Offset sign is passed to this method in the second
case. The ARM version is not symmetric.

<rdar://problem/11184260>

llvm-svn: 154079
2012-04-05 03:10:56 +00:00
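
A hedged C++ check of the arithmetic behind the first two folds (the small
ranges keep the negation free of overflow):

  #include <cassert>

  int main() {
    for (long base = -50; base <= 50; ++base)
      for (long offset = -50; offset <= 50; ++offset) {
        assert((base + offset == 0) == (base == -offset));
        assert((-1 * base + offset == 0) == (base == offset));
      }
    return 0;
  }
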
Rafael Espindola
88a1aeb123 Always compute all the bits in ComputeMaskedBits.
This allows us to keep passing reduced masks to SimplifyDemandedBits, but
know about all the bits if SimplifyDemandedBits fails. This allows instcombine
to simplify cases like the one in the included testcase.

llvm-svn: 154011
2012-04-04 12:51:34 +00:00
Hongbin Zheng
c50b4781ab LoopUnrollPass: Use variable "Threshold" instead of "CurrentThreshold" when
reducing the unroll count, otherwise the reduced unroll count does not take
the "OptimizeForSize" attribute into account.

llvm-svn: 154007
2012-04-04 11:44:08 +00:00
Bill Wendling
1db4186413 Add an option to turn off the expensive GVN load PRE part of GVN.
llvm-svn: 153902
2012-04-02 22:16:50 +00:00
Stepan Dyatkovskiy
0ddc03ebad Fast fix for PR12343:
http://llvm.org/bugs/show_bug.cgi?id=12343

We have no trivial way to split edges that come from an indirect branch. We can do it with some tricks, but that should be discussed further, and it is still dangerous because indirect branches are difficult to control.

This fix forbids this case for unswitching.

llvm-svn: 153879
2012-04-02 17:16:45 +00:00
Chandler Carruth
8b30ff4d0f Belatedly address some code review from Chris.
As a side note, I really dislike array_pod_sort... Do we really still
care about any STL implementations that get this so wrong? Does libc++?

llvm-svn: 153834
2012-04-01 10:41:24 +00:00
Chandler Carruth
1a2234d527 Fix a pretty scary bug I introduced into the always inliner with
a single missing character. Somehow, this had gone untested. I've added
tests for returns-twice logic specifically with the always-inliner that
would have caught this, and fixed the bug.

Thanks to Matt for the careful review and spotting this!!! =D

llvm-svn: 153832
2012-04-01 10:21:05 +00:00
Chandler Carruth
ad48424b6b Give the always-inliner its own custom filter. It shouldn't have to pay
the very high overhead of the complex inline cost analysis when all it
wants to do is detect three patterns which must not be inlined. Comment
the code, clean it up, and leave some hints about possible performance
improvements if this ever shows up on a profile.

Moving this off of the (now more expensive) inline cost analysis is
particularly important because we have to run this inliner even at -O0.

llvm-svn: 153814
2012-03-31 13:17:18 +00:00
Chandler Carruth
61d0735f00 Remove a bunch of empty, dead, and no-op methods from all of these
interfaces. These methods were used in the old inline cost system where
there was a persistent cache that had to be updated, invalidated, and
cleared. We're now doing more direct computations that don't require
this intricate dance. Even if we resume some level of caching, it would
almost certainly have a simpler and more narrow interface than this.

llvm-svn: 153813
2012-03-31 12:48:08 +00:00
Chandler Carruth
8cacff57bf Initial commit for the rewrite of the inline cost analysis to operate
on a per-callsite walk of the called function's instructions, in
breadth-first order over the potentially reachable set of basic blocks.

This is a major shift in how inline cost analysis works to improve the
accuracy and rationality of inlining decisions. A brief outline of the
algorithm this moves to:

- Build a simplification mapping based on the callsite arguments to the
  function arguments.
- Push the entry block onto a worklist of potentially-live basic blocks.
- Pop the first block off of the *front* of the worklist (for
  breadth-first ordering) and walk its instructions using a custom
  InstVisitor.
- For each instruction's operands, re-map them based on the
  simplification mappings available for the given callsite.
- Compute any simplification possible of the instruction after
  re-mapping, and store that back into the simplification mapping.
- Compute any bonuses, costs, or other impacts of the instruction on the
  cost metric.
- When the terminator is reached, replace any conditional value in the
  terminator with any simplifications from the mapping we have, and add
  any successors which are not proven to be dead from these
  simplifications to the worklist.
- Pop the next block off of the front of the worklist, and repeat.
- As soon as the cost of inlining exceeds the threshold for the
  callsite, stop analyzing the function in order to bound cost.

The primary goal of this algorithm is to perfectly handle dead code
paths. We do not want any code in trivially dead code paths to impact
inlining decisions. The previous metric was *extremely* flawed here, and
would always subtract the average cost of two successors of
a conditional branch when it was proven to become an unconditional
branch at the callsite. There was no handling of wildly different costs
between the two successors, which would cause inlining when the path
actually taken was too large, and no inlining when the path actually
taken was trivially simple. There was also no handling of the code
*path*, only the immediate successors. These problems vanish completely
now. See the added regression tests for the shiny new features -- we
skip recursive function calls, SROA-killing instructions, and high cost
complex CFG structures when dead at the callsite being analyzed.

Switching to this algorithm required refactoring the inline cost
interface to accept the actual threshold rather than simply returning
a single cost. The resulting interface is pretty bad, and I'm planning
to do lots of interface cleanup after this patch.

Several other refactorings fell out of this, but I've tried to minimize
them for this patch. =/ There is still more cleanup that can be done
here. Please point out anything that you see in review.

I've worked really hard to try to mirror at least the spirit of all of
the previous heuristics in the new model. It's not clear that they are
all correct any more, but I wanted to minimize the change in this single
patch, it's already a bit ridiculous. One heuristic that is *not* yet
mirrored is to allow inlining of functions with a dynamic alloca *if*
the caller has a dynamic alloca. I will add this back, but I think the
most reasonable way requires changes to the inliner itself rather than
just the cost metric, and so I've deferred this for a subsequent patch.
The test case is XFAIL-ed until then.

As mentioned in the review mail, this seems to make Clang run about 1%
to 2% faster in -O0, but makes its binary size grow by just under 4%.
I've looked into the 4% growth, and it can be fixed, but requires
changes to other parts of the inliner.

llvm-svn: 153812
2012-03-31 12:42:41 +00:00
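
A hedged C++ schematic of the breadth-first worklist walk outlined above; the
Block type, per-block cost, and threshold are hypothetical stand-ins for the
real inline cost machinery:

  #include <deque>
  #include <set>
  #include <vector>

  struct Block {
    int cost;                        // summed cost of the block's instructions
    std::vector<Block *> successors; // successors not proven dead
  };

  // Returns true if the accumulated cost stays under the threshold.
  bool analyzeForInlining(Block *entry, int threshold) {
    std::deque<Block *> worklist{entry};
    std::set<Block *> visited{entry};
    int cost = 0;
    while (!worklist.empty()) {
      Block *b = worklist.front();   // pop from the *front*: breadth-first
      worklist.pop_front();
      cost += b->cost;
      if (cost > threshold)
        return false;                // bail out early to bound analysis cost
      for (Block *succ : b->successors)
        if (visited.insert(succ).second)
          worklist.push_back(succ);
    }
    return true;
  }
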
Benjamin Kramer
b7ee22e3f3 Internalize: Remove reference of @llvm.noinline, it was replaced with the noinline attribute a long time ago.
llvm-svn: 153806
2012-03-31 11:03:47 +00:00
Hal Finkel
45ad90afac Correctly vectorize powi.
The powi intrinsic requires special handling because it always takes a single
integer power regardless of the result type. As a result, we can vectorize
only if the powers are equal. Fixes PR12364.

llvm-svn: 153797
2012-03-31 03:38:40 +00:00
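
A hedged C++ illustration using the GCC/Clang __builtin_powi builtin: every
call in the loop uses the same integer power, the one case the vectorizer can
handle per the rule above.

  double sum_cubes(const double *x, int n) {
    double s = 0.0;
    for (int i = 0; i < n; ++i)
      s += __builtin_powi(x[i], 3);  // identical power each iteration
    return s;
  }
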
Jakob Stoklund Olesen
9571cb56c5 Don't PRE compares.
CodeGenPrepare sinks compare instructions down to their uses to prevent
live flags and predicate registers across basic blocks.

PRE of a compare instruction prevents that, forcing the i1 compare
result into a general purpose register.  That is usually more expensive
than the redundant compare PRE was trying to eliminate in the first
place.

llvm-svn: 153657
2012-03-29 17:22:39 +00:00
Benjamin Kramer
b6baea7014 GlobalOpt: If we have an inbounds GEP from a ConstantAggregateZero global that we just determined to be constant, replace all loads from it with a zero value.
llvm-svn: 153576
2012-03-28 14:50:09 +00:00
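
A hedged C++ illustration of the pattern: a zero-initialized global that is
never stored to, so once it is proven constant every load from it folds to
zero.

  #include <cstdio>

  static int table[16];  // zeroinitializer; no stores anywhere in the TU
  static int lookup(unsigned i) { return table[i & 15u]; }

  int main() {
    std::printf("%d\n", lookup(3));  // prints 0
    return 0;
  }
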
Chandler Carruth
e507ddfe74 Switch to WeakVHs in the value mapper, and aggressively prune dead basic
blocks in the function cloner. This removes the last case of trivially
dead code that I've been seeing in the wild getting inlined, analyzed,
re-inlined, optimized, only to be deleted. Nukes a FIXME from the
cleanup tests.

llvm-svn: 153572
2012-03-28 08:38:27 +00:00
Chad Rosier
23505e5bb6 Fix 80-column violation.
llvm-svn: 153556
2012-03-28 00:35:33 +00:00
Chandler Carruth
cd6d2689a0 Make a seemingly tiny change to the inliner and fix the generated code
size bloat. Unfortunately, I expect this to disable the majority of the
benefit from r152737. I'm hopeful at least that it will fix PR12345. To
explain this requires... quite a bit of backstory I'm afraid.

TL;DR: The change in r152737 actually did The Wrong Thing for
linkonce-odr functions. This change makes it do the right thing. The
benefits we saw were simple luck, not any actual strategy. Benchmark
numbers after a mini-blog-post so that I've written down my thoughts on
why all of this works and doesn't work...

To understand what's going on here, you have to understand how the
"bottom-up" inliner actually works. There are two fundamental modes to
the inliner:

1) Standard fixed-cost bottom-up inlining. This is the mode we usually
   think about. It walks from the bottom of the CFG up to the top,
   looking at callsites, taking information about the callsite and the
   called function and computing the expected cost of inlining into that
   callsite. If the cost is under a fixed threshold, it inlines. It's
   a touch more complicated than that due to all the bonuses, weights,
   etc. Inlining the last callsite to an internal function gets higher
   weight, etc. But essentially, this is the mode of operation.

2) Deferred bottom-up inlining (a term I just made up). This is the
   interesting mode for this patch and r152737. Initially, this works
   just like mode #1, but once we have the cost of inlining into the
   callsite, we don't just compare it with a fixed threshold. First, we
   check something else. Let's give some names to the entities at this
   point, or we'll end up hopelessly confused. We're considering
   inlining a function 'A' into its callsite within a function 'B'. We
   want to check whether 'B' has any callers, and whether it might be
   inlined into those callers. If so, we also check whether inlining 'A'
   into 'B' would block any of the opportunities for inlining 'B' into
   its callers. We take the sum of the costs of inlining 'B' into its
   callers where that inlining would be blocked by inlining 'A' into
   'B', and if that cost is less than the cost of inlining 'A' into 'B',
   then we skip inlining 'A' into 'B'.

Now, in order for #2 to make sense, we have to have some confidence that
we will actually have the opportunity to inline 'B' into its callers
when cheaper, *and* that we'll be able to revisit the decision and
inline 'A' into 'B' if that ever becomes the correct tradeoff. This
often isn't true for external functions -- we can see very few of their
callers, and we won't be able to re-consider inlining 'A' into 'B' if
'B' is external when we finally see more callers of 'B'. There are two
cases where we believe this to be true for C/C++ code: functions local
to a translation unit, and functions with an inline definition in every
translation unit which uses them. These are represented as internal
linkage and linkonce-odr (resp.) in LLVM. I enabled this logic for
linkonce-odr in r152737.

Unfortunately, when I did that, I also introduced a subtle bug. There
was an implicit assumption that the last caller of the function within
the TU was the last caller of the function in the program. We want to
bonus the last caller of the function in the program by a huge amount
for inlining because inlining that callsite has very little cost.
Unfortunately, the last caller in the TU of a linkonce-odr function is
*not* the last caller in the program, and so we don't want to apply this
bonus. If we do, we can apply it to one callsite *per-TU*. Because of
the way deferred inlining works, when it sees this bonus applied to one
callsite in the TU for 'B', it decides that inlining 'B' is of the
*utmost* importance just so we can get that final bonus. It then
proceeds to essentially force deferred inlining regardless of the actual
cost tradeoff.

The result? PR12345: code bloat, code bloat, code bloat. Another result
is getting *damn* lucky on a few benchmarks, and the over-inlining
exposing critically important optimizations. I would very much like
a list of benchmarks that regress after this change goes in, with
bitcode before and after. This will help me greatly understand what
opportunities the current cost analysis is missing.

Initial benchmark numbers look very good. WebKit files that exhibited
the worst of PR12345 went from growing to shrinking compared to Clang
with r152737 reverted.

- Bootstrapped Clang is 3% smaller with this change.
- Bootstrapped Clang -O0 over a single-source-file of lib/Lex is 4%
  faster with this change.

Please let me know about any other performance impact you see. Thanks to
Nico for reporting and urging me to actually fix, Richard Smith, Duncan
Sands, Manuel Klimek, and Benjamin Kramer for talking through the issues
today.

llvm-svn: 153506
2012-03-27 10:48:28 +00:00
Nadav Rotem
bcd74e695a r153465 was incorrect. In this code we wanted to check that the pointer operand is of pointer type (and not vector type).
llvm-svn: 153468
2012-03-26 21:00:53 +00:00
Nadav Rotem
165c8a3432 PR12357: The pointer was used before it was checked.
llvm-svn: 153465
2012-03-26 20:39:18 +00:00
Andrew Trick
a7e2266fa7 LSR ivchain bug fix: corner case with ConstantExpr.
Fixes PR11950.

llvm-svn: 153463
2012-03-26 20:28:37 +00:00
Andrew Trick
048fee970f comment typo
llvm-svn: 153462
2012-03-26 20:28:35 +00:00
Chris Lattner
5a5f3badd6 eliminate an unneeded branch, part of PR12357
llvm-svn: 153458
2012-03-26 19:13:57 +00:00
Eric Christopher
b13384d5f5 Tidy.
llvm-svn: 153456
2012-03-26 19:09:40 +00:00
Eric Christopher
2778e29567 Tidy.
llvm-svn: 153455
2012-03-26 19:09:38 +00:00
Andrew Trick
de4046d7dd LSR cleanup: potential bug caught by PVS-Studio.
Thanks Andrey.

llvm-svn: 153451
2012-03-26 18:03:16 +00:00
Kostya Serebryany
605c594f4f [tsan] treat vtable pointer updates in a special way (requires TBAA); fix a bug (forgot to return true after instrumenting); make sure the tsan tests are run
llvm-svn: 153448
2012-03-26 17:35:03 +00:00
Craig Topper
76f7896f49 Prune some includes and forward declarations.
llvm-svn: 153429
2012-03-26 06:58:25 +00:00
Chandler Carruth
fc1ee5b5d6 Teach the function cloner (and thus the inliner) to simplify PHINodes
aggressively. There are lots of dire warnings about this being expensive
that seem to predate switching to the TrackingVH-based value remapper
that is automatically updated on RAUW. This makes it easy to not just
prune single-entry PHIs, but to fully simplify PHIs, and to recursively
simplify the newly inlined code to propagate PHINode simplifications.

This introduces a bit of a thorny problem though. We may end up
simplifying a branch condition to a constant when we fold PHINodes, and
we would like to nuke any dead blocks resulting from this so that time
isn't wasted continually analyzing them, but this isn't easy. Deleting
basic blocks *after* they are fully cloned and mapped into the new
function currently requires manually updating the value map. The last
piece of the simplification-during-inlining puzzle will require either
switching to WeakVH mappings or some other piece of refactoring. I've
left a FIXME in the testcase about this.

llvm-svn: 153410
2012-03-25 10:34:54 +00:00
Chandler Carruth
2a69d3eac5 Move the instruction simplification of callsite arguments in the inliner
to instead rely on much more generic and powerful instruction
simplification in the function cloner (and thus inliner).

This teaches the pruning function cloner to use instsimplify rather than
just the constant folder to fold values during cloning. This can
simplify a large number of things that constant folding alone cannot
begin to touch. For example, it will realize that 'or' and 'and'
instructions with certain constant operands actually become constants
regardless of what their other operand is. It also can thread back
through the caller to perform simplifications that are only possible by
looking up a few levels. In particular, GEPs and pointer testing tend to
fold much more heavily with this change.

This should (in some cases) have a positive impact on compile times with
optimizations on because the inliner itself will simply avoid cloning
a great deal of code. It already attempted to prune proven-dead code,
but now it will use the stronger simplifications to prove more code
dead.

llvm-svn: 153403
2012-03-25 04:03:40 +00:00
Chandler Carruth
6d9fc81b34 Add an asserting ValueHandle to the block simplification code which will
fire if anything ever invalidates the assumption of a terminator
instruction being unchanged throughout the routine.

I've convinced myself that the current definition of simplification
precludes such a transformation, so I think getting some asserts
coverage that we don't violate this agreement is sufficient to make this
code safe for the foreseeable future.

Comments to the contrary or other suggestions are of course welcome. =]
The bots are now happy with this code though, so it appears the bug here
has indeed been fixed.

llvm-svn: 153401
2012-03-25 03:29:25 +00:00
Chandler Carruth
2633e643e5 Don't form a WeakVH around the sentinel node in the instructions BB
list. This is a bad idea. ;] I'm hopeful this is the bug that's showing
up with the MSVC bots, but we'll see.

It is definitely unnecessary. InstSimplify won't do anything to
a terminator instruction, we don't need to even include it in the
iteration range. We can also skip the now dead terminator check,
although I've made it an assert to help document that this is an
important invariant.

I'm still a bit queasy about this because there is an implicit
assumption that the terminator instruction cannot be RAUW'ed by the
simplification code. While that appears to be true at the moment, I see
no guarantee that would ensure it remains true in the future. I'm
looking at the cleanest way to solve that...

llvm-svn: 153399
2012-03-24 23:03:27 +00:00
Chandler Carruth
58c542736c Refactor the interface for recursively simplifying instructions to be a tad
bit simpler by handling a common case explicitly.

Also, refactor the implementation to use a worklist based walk of the
recursive users, rather than trying to use value handles to detect and
recover from RAUWs during the recursive descent. This fixes a very
subtle bug in the previous implementation where degenerate control flow
structures could cause mutually recursive instructions (PHI nodes) to
collapse in just such a way that From became equal to To after some
amount of recursion. At that point, we hit the inf-loop that the assert
at the top attempted to guard against. This problem is defined away when
not using value handles in this manner. There are lots of comments
claiming that the WeakVH will protect against just this sort of error,
but they're not accurate about the actual implementation of WeakVHs,
which do still track RAUWs.

I don't have any test case for the bug this fixes because it requires
running the recursive simplification on unreachable phi nodes. I've no
way to either run this or easily write an input that triggers it. It was
found when using instruction simplification inside the inliner when
running over the nightly test-suite.

llvm-svn: 153393
2012-03-24 21:11:24 +00:00
Francois Pichet
6a1f32c9cf Fix the MSVC build.
llvm-svn: 153366
2012-03-24 01:36:37 +00:00
Andrew Trick
0ff69472e6 More IndVarSimplify cleanup.
llvm-svn: 153362
2012-03-24 00:51:17 +00:00
Kostya Serebryany
5fa635b5bf add EP_OptimizerLast extension point
llvm-svn: 153353
2012-03-23 23:22:59 +00:00
Dan Gohman
ef28237798 Don't convert objc_retainAutoreleasedReturnValue to objc_retain if it
is retaining the return value of an invoke that it immediately follows.

llvm-svn: 153344
2012-03-23 18:09:00 +00:00
Dan Gohman
0c02608af1 It's not possible to insert code immediately after an invoke in the
same basic block, and it's not safe to insert code in the successor
blocks if the edges are critical edges. Splitting those edges is
possible, but undesirable, especially on the unwind side. Instead,
make the bottom-up code motion to consider invokes to be part of
their successor blocks, rather than part of their parent blocks, so
that it doesn't push code past them and onto the edges. This fixes
PR12307.

llvm-svn: 153343
2012-03-23 17:47:54 +00:00
Duncan Sands
fcc4791c7b When propagating equalities, e.g. replacing A with B in every basic block
dominated by Root, check that B is available throughout the scope.  This
is obviously true (famous last words?) given the current logic, but the
check may be helpful if more complicated reasoning is added one day.

llvm-svn: 153323
2012-03-23 08:45:52 +00:00
Duncan Sands
1ac68ded1a Indentation.
llvm-svn: 153322
2012-03-23 08:29:04 +00:00
Andrew Trick
690dfe6b2a Remove -enable-lsr-retry in time for 3.1.
llvm-svn: 153287
2012-03-22 22:42:51 +00:00
Andrew Trick
121e3e153c Remove -enable-lsr-nested in time for 3.1.
Tests cases have been removed but attached to open PR12330.

llvm-svn: 153286
2012-03-22 22:42:45 +00:00
Dan Gohman
df222b2e87 Refactor the code for visiting instructions out into helper functions.
llvm-svn: 153267
2012-03-22 18:24:56 +00:00
Andrew Trick
74bcc63b1b Remove unused simplifyIVUsers
llvm-svn: 153262
2012-03-22 17:47:30 +00:00
Andrew Trick
30b133e1ae Remove -enable-iv-rewrite, which has been unsupported since 3.0.
llvm-svn: 153260
2012-03-22 17:10:11 +00:00
Chris Lattner
9bb176f16d don't use "signed", just something I noticed in patches flying by.
llvm-svn: 153237
2012-03-22 03:46:58 +00:00
Kostya Serebryany
19681e071c [asan] fix one more bug related to long double
llvm-svn: 153189
2012-03-21 15:28:50 +00:00
Eric Christopher
b43efeeb1f Zap some dead code pointed out by Chandler.
llvm-svn: 153150
2012-03-20 23:28:58 +00:00
Andrew Trick
83eb659a9f LoopSimplify bug fix. Handle indirect loop back edges.
Do not call SplitBlockPredecessors on a loop preheader when one of the
predecessors is an indirectbr. Otherwise, you will hit this assert:
!isa<IndirectBrInst>(Preds[i]->getTerminator()) && "Cannot split an edge from an IndirectBrInst"

llvm-svn: 153134
2012-03-20 21:24:52 +00:00
Andrew Trick
e06965dfc3 whitespace
llvm-svn: 153133
2012-03-20 21:24:47 +00:00
Kostya Serebryany
f8cb0a65c1 [asan] don't emit __asan_mapping_offset/__asan_mapping_scale by default -- they are currently used only for experiments
llvm-svn: 153040
2012-03-19 16:40:35 +00:00
Bill Wendling
9343ed10c6 Revert r152907.
llvm-svn: 152935
2012-03-16 18:20:54 +00:00
Bill Wendling
3c44ed8385 The pointer part of the store instruction may itself have an
alignment. If that's the case, then we want to make sure that we don't increase
the alignment of the store instruction beyond it, because if we make the store
"more aligned" than the pointer, code-gen may use instructions which require
a greater alignment than the pointer guarantees.
<rdar://problem/11043589>

llvm-svn: 152907
2012-03-16 07:40:08 +00:00
Chandler Carruth
e0a21944a1 Rip out support for 'llvm.noinline'. This thing has a strange history...
It was added in 2007 as the first cut at supporting no-inline
attributes, but we didn't have function attributes of any form at the
time. However, it was added without any mention in the LangRef or other
documentation.

Later on, in 2008, Devang added function notes for 'inline=never' and
then turned them into proper function attributes. From that point
onward, as far as I can tell, the world moved on, and no one has touched
'llvm.noinline' in any meaningful way since.

Its time has now come. We have had better mechanisms for doing this for
a long time, all the frontends I'm aware of use them, and this is just
holding back progress. Given that it was never a documented feature of
the IR, I've provided no auto-upgrade support. If people know of real,
in-the-wild bitcode that relies on this, yell at me and I'll add it, but
I *seriously* doubt anyone cares.

llvm-svn: 152904
2012-03-16 06:10:15 +00:00
Chandler Carruth
1d0b0955da Start removing the use of an ad-hoc 'never inline' set and instead
directly query the function information which this set was representing.
This simplifies the interface of the inline cost analysis, and makes the
always-inline pass significantly more efficient.

Previously, always-inline would first make a single set of every
function in the module *except* those marked with the always-inline
attribute. It would then query this set at every call site to see if the
function was a member of the set, and if so, refuse to inline it. This
is quite wasteful. Instead, simply check the function attribute directly
when looking at the callsite.

The normal inliner also had similar redundancy. It added every function
in the module with the noinline attribute to its set to ignore, even
though inside the cost analysis function we *already tested* the
noinline attribute and produced the same result.

The only tricky part of removing this is that we have to be able to
correctly remove only the functions inlined by the always-inline pass
when finalizing, which requires a bit of a hack. Still, much less of
a hack than the set of all non-always-inline functions was. While I was
touching this function, I switched a heavy-weight set to a vector with
sort+unique. The algorithm already had a two-phase insert and removal
pattern, we were just needlessly paying the uniquing cost on every
insert.

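The sort+unique idiom mentioned above, for reference (standard C++, not the
pass's exact code):

#include <algorithm>
#include <vector>
// Two-phase pattern: insert freely, then sort once and drop duplicates.
void sortAndUnique(std::vector<int> &V) {
  std::sort(V.begin(), V.end());
  V.erase(std::unique(V.begin(), V.end()), V.end());
}
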
This probably speeds up some compiles by a small amount (-O0 compiles
with lots of always-inline, so potentially heavy libc++ users), but I've
not tried to measure it.

I believe there is no functional change here, but yell if you spot one.
None are intended.

Finally, the direction this is going in is to greatly simplify the
inline cost query interface so that we can replace its implementation
with a much more clever one. Along the way, all the APIs get simplified,
so it seems incrementally good.

llvm-svn: 152903
2012-03-16 06:10:13 +00:00
Andrew Trick
719339e40f LSR fix: Add isSimplifiedLoopNest to IVUsers analysis.
Only record IVUsers that are dominated by simplified loop
headers. Otherwise SCEVExpander will crash while looking for a
preheader.

I previously tried to work around this in LSR itself, but that was
insufficient. This way, LSR can continue to run if some uses are not
in simple loops, as long as we don't attempt to analyze those users.

Fixes <rdar://problem/11049788> Segmentation fault: 11 in LoopStrengthReduce

llvm-svn: 152892
2012-03-16 03:16:56 +00:00
Eli Friedman
0763584d78 In InstCombiner::visitOr, make sure that after checking for or-of-xor operations we reverse the operand swap used for those checks; a later check expects any constant to be in Op1. PR12234.
llvm-svn: 152884
2012-03-16 00:52:42 +00:00
Rafael Espindola
ac42573389 Short term fix for pr12270 before we change dominates to handle unreachable
code.
While here, reduce indentation.

llvm-svn: 152803
2012-03-15 15:52:59 +00:00
Bill Wendling
92dc871aad Use an iterator instead of calling .size() on the worklist every time, which is wasteful.
llvm-svn: 152794
2012-03-15 11:19:41 +00:00
Chandler Carruth
02f89bdb9c Remove the basic inliner. This was added in 2007, and hasn't really
changed since. No one was using it. It is yet another consumer of the
InlineCost interface that I'd like to change.

llvm-svn: 152769
2012-03-15 01:37:56 +00:00
Chandler Carruth
0720a328f7 This pass didn't want the inline cost per se; it just wants generic code
metrics.

llvm-svn: 152760
2012-03-15 00:29:10 +00:00
Aaron Ballman
bf6eebde21 Fixed a transform crash when setting a negative size value for memset. Fixes PR12202.
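A reduced, hypothetical example of the kind of input that used to crash the
transform:

#include <cstring>
void f(char *p) {
  // A negative constant converts to a huge size_t; the memset folder must
  // not assume the length is small or non-negative.
  std::memset(p, 0, -1);
}
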
llvm-svn: 152756
2012-03-15 00:05:31 +00:00
Kostya Serebryany
ee6f6a527d [tsan] use FunctionBlackList
llvm-svn: 152755
2012-03-14 23:33:24 +00:00
Kostya Serebryany
70c5b4a077 [asan] rename class BlackList to FunctionBlackList and move it into a separate file -- we will need the same functionality in ThreadSanitizer
llvm-svn: 152753
2012-03-14 23:22:10 +00:00
Dan Gohman
a30e1f4576 When an invoke is marked with metadata indicating its unwind edge
should be ignored by ARC optimization, don't insert new ARC runtime
calls in the unwind destination.

llvm-svn: 152748
2012-03-14 23:05:06 +00:00
Chandler Carruth
5caf391dc5 Change where we enable the heuristic that delays inlining into functions
which are small enough to themselves be inlined. Delaying in this manner
can be harmful if the function is ineligible for inlining in some (or
many) contexts as it pessimizes the code of the function itself in the
event that inlining does not eventually happen.

Previously the check was written to only do this delaying of inlining
for static functions in the hope that they could be entirely deleted and
in the knowledge that all callers of static functions will have the
opportunity to inline if it is in fact profitable. However, with C++ we
get two other important sources of functions where the definition is
always available for inlining: inline functions and templated functions.
This patch generalizes the inliner to allow linkonce-ODR (the linkage
such C++ routines receive) to also qualify for this delay-based
inlining.

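For reference, C++ definitions that typically receive linkonce_odr linkage
(assumed front-end mapping; illustrative only):

inline int twice(int x) { return x + x; }             // inline function
template <typename T> T square(T v) { return v * v; } // templated function
int use() { return twice(2) + square(3); }            // instantiates square<int>
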
Benchmarking across a range of large real-world applications shows
roughly 2% size increase across the board, but an average speedup of
about 0.5%. Some benchmarks improved by over 2%, and the 'clang' binary
itself (when bootstrapped with this feature) shows a 1% -O0 performance
improvement when run over all Sema, Lex, and Parse source code smashed
into a single file. A clean re-build of Clang+LLVM with a bootstrapped
Clang shows approximately 2% improvement, but that measurement is often
noisy.

llvm-svn: 152737
2012-03-14 20:16:41 +00:00
Pete Cooper
df5d2a8893 Target override to allow CodeGenPrepare to sink address operands to intrinsics in the same way it currently does for loads and stores
llvm-svn: 152666
2012-03-13 20:59:56 +00:00
Chris Lattner
84f83c2727 enhance jump threading to preserve TBAA information when PRE'ing loads,
fixing rdar://11039258, an issue that came up when inspecting clang's 
bootstrapped codegen.

llvm-svn: 152635
2012-03-13 18:07:41 +00:00
Dan Gohman
fa43b599ac Teach globalopt how to evaluate an invoke with a non-void return type.
llvm-svn: 152634
2012-03-13 18:01:37 +00:00
Chandler Carruth
015ff468c2 When inlining a function and adding its inner call sites to the
candidate set for subsequent inlining, try to simplify the arguments to
the inner call site now that inlining has been performed.

The goal here is to propagate and fold constants through deeply nested
call chains. Without doing this, we lose the inliner bonus that should
be applied because the arguments don't match the exact pattern the cost
estimator uses.

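An illustrative, hypothetical C++ case where simplifying the inner call
site recovers the bonus:

static int inner(int x) { return x * 2; }
static int outer(int x) { return inner(x); }
// Once outer is inlined into caller, the exposed call becomes inner(21);
// the now-constant argument matches the cost estimator's pattern again.
int caller() { return outer(21); }
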
Reviewed on IRC by Benjamin Kramer.

llvm-svn: 152556
2012-03-12 11:19:33 +00:00
Stepan Dyatkovskiy
72fdcabd4d llvm::SwitchInst
Renamed the methods caseBegin, caseEnd and caseDefault to case_begin, case_end, and case_default.
Added some notes about case iterators.

llvm-svn: 152532
2012-03-11 06:09:17 +00:00
Duncan Sands
0a3403af6e Add statistics on removed switch cases, and fix the phi statistic
to count the number of phis changed, not the number visited.

llvm-svn: 152425
2012-03-09 19:21:15 +00:00
Dan Gohman
784659a39f When identifying exit nodes for the reverse-CFG reverse-post-order
traversal, treat nodes whose only successors are backedges that the
traversal is ignoring as exit nodes. This fixes a problem
where the bottom-up traversal was failing to visit split blocks along
split loop backedges. This fixes rdar://10989035.

llvm-svn: 152421
2012-03-09 18:50:52 +00:00
Duncan Sands
8139573edf Eliminate switch cases that can never match, for example removing all
negative switch cases if the branch condition is known to be positive.
Inspired by a recent improvement to GCC's VRP.

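A small, hypothetical C++ illustration of a case that can now be removed:

int classify(unsigned char c) {
  switch (static_cast<int>(c)) { // the value is known to lie in [0, 255]
  case -1: return 1;             // can never match; now eliminated
  case 7:  return 2;
  default: return 0;
  }
}
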
llvm-svn: 152405
2012-03-09 13:45:18 +00:00
Stepan Dyatkovskiy
79f3dd93b7 Took into account Duncan's comments for r149481, dated 2nd Feb 2012:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20120130/136146.html

Implemented CaseIterator; it solves almost all of the described issues: we no longer need to mix operand/case/successor indexing. The base iterator class is implemented as a template since it may be initialized either from "const SwitchInst*" or from "SwitchInst*".

ConstCaseIt is just a read-only iterator.
CaseIt is a read-write iterator; it allows changing the case successor and the case value.

Using the iterators lets us remove the resolveXXXX methods entirely. All index conversions are done automatically inside the iterator's getters.

Typical iterator usage looks like this:
SwitchInst *SI = ... // initialize it somehow

for (SwitchInst::CaseIt i = SI->caseBegin(), e = SI->caseEnd(); i != e; ++i) {
  BasicBlock *BB = i.getCaseSuccessor();
  ConstantInt *V = i.getCaseValue();
  // Do something.
}

To convert a case number to a TerminatorInst successor index, use the iterator's getSuccessorIndex method.
To initialize an iterator from a TerminatorInst successor index, use the CaseIt::fromSuccessorIndex(...) method.

There are also related changes in LLVM clients: klee and clang.

llvm-svn: 152297
2012-03-08 07:06:20 +00:00
Sebastian Pop
d7b990a624 fix typos
llvm-svn: 152035
2012-03-05 17:39:47 +00:00
Sebastian Pop
64efa0a7a8 remove spaces on empty lines
llvm-svn: 152034
2012-03-05 17:39:45 +00:00
Duncan Sands
2640024ac8 This is not a common case, in fact it never happens!
llvm-svn: 152027
2012-03-05 12:23:00 +00:00
Chandler Carruth
533fb74c5d Switch mem2reg to use the new hashing infrastructure.
llvm-svn: 152026
2012-03-05 11:29:56 +00:00
Chandler Carruth
517e7a4d9e Replace the ad-hoc hashing in GVN with the new hashing infrastructure.
This implicitly fixes a nasty bug in the GVN hashing (that thankfully
could only manifest as a performance bug): actually include the opcode
in the hash. The old code started the hash off with the opcode, but then
overwrote it with the type pointer.

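The fix in miniature, as a hedged sketch of using the new hashing
infrastructure (not GVN's actual code):

#include "llvm/ADT/Hashing.h"
// Both the opcode and the type must feed the hash; the old code seeded
// with the opcode and then overwrote that seed with the type pointer.
size_t hashExpr(unsigned Opcode, const void *Ty) {
  return llvm::hash_combine(Opcode, Ty);
}
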
Since this is likely to be pretty hot (GVN being already pretty
expensive) I've included a micro-optimization to just not bother with
the varargs hashing if they aren't present. I can't measure any change
in GVN performance due to this, even with a big test case like Duncan's
sqlite one. Everything I see is in the noise floor. That said, this
closes a loophole for a potential scaling problem due to collisions if
the opcode were the differentiating aspect of the expression.

llvm-svn: 152025
2012-03-05 11:29:54 +00:00
Duncan Sands
ccc56e1071 Nick pointed out on IRC that GVN's propagateEquality wasn't propagating
equalities into phi node operands for which the equality is known to
hold in the incoming basic block.  That's because replaceAllDominatedUsesWith
wasn't handling phi nodes correctly in general (that this didn't give wrong
results was just luck: the specific way GVN uses replaceAllDominatedUsesWith
precluded wrong changes to phi nodes).

llvm-svn: 152006
2012-03-04 13:25:19 +00:00
Bill Wendling
88f55b45b2 Do trivial CSE of dead BBs during codegen preparation.
Some BBs can become dead after codegen preparation. If we delete them here, it
could help enable tail-call optimizations later on.
<rdar://problem/10256573>

llvm-svn: 152002
2012-03-04 10:46:01 +00:00
Evgeniy Stepanov
19a3f5fed4 ASan: use getTypeAllocSize instead of getTypeStoreSize.
This change replaces getTypeStoreSize with getTypeAllocSize in AddressSanitizer
instrumentation for stack allocations.

One case where old behaviour produced undesired results is an optimization in
InstCombine pass (PromoteCastOfAllocation), which can replace alloca(T) with
alloca(S), where S has the same AllocSize, but a smaller StoreSize. Another
case is memcpy(long double => long double), where ASan will poison bytes 10-15
of a stack-allocated long double (StoreSize 10, AllocSize 16,
sizeof(long double) = 16).

See http://llvm.org/bugs/show_bug.cgi?id=12047 for more context.

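The long double case, concretely (sizes here assume x86-64 Linux):

#include <cstdio>
int main() {
  long double d = 1.0L;           // 80-bit value, padded when allocated
  std::printf("%zu\n", sizeof d); // typically prints 16
  return 0;
}
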
llvm-svn: 151887
2012-03-02 10:41:08 +00:00
Dan Gohman
e0e8af6739 Fix an iterator invalidation problem. operator[] on a DenseMap
can insert a new element, invalidating iterators. Use find
instead, and handle the case where the key is not found explicitly.

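A hedged sketch of the safe pattern (the variable names are illustrative):

#include "llvm/ADT/DenseMap.h"
int lookupOrZero(llvm::DenseMap<int, int> &M, int Key) {
  llvm::DenseMap<int, int>::iterator It = M.find(Key); // never inserts
  if (It == M.end())
    return 0;         // handle the missing key explicitly
  return It->second;  // unlike M[Key], this cannot grow/rehash the map
}
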
llvm-svn: 151871
2012-03-02 01:26:46 +00:00
Dan Gohman
acbf555d2f Misc micro-optimizations.
llvm-svn: 151869
2012-03-02 01:13:53 +00:00
Duncan Sands
207ee17589 Have GVN also do condition propagation when the right-hand side is not
a constant.  This fixes PR1768.

llvm-svn: 151713
2012-02-29 11:12:03 +00:00
Bill Wendling
7786b1f9dc Restrict this transformation to equality conditions.
This transformation is not correct for not-equal conditions:

(trunc x) != C1 & (and x, CA) != C2 -> (and x, CA|CMAX) != C1|C2

Let
  C1 == 0
  C2 == 0
  CA == 0xFF0000
  CMAX == 0xFF
and truncating to i8.

The original truth table:

    x   | A: trunc x != 0 | B: x & 0xFF0000 != 0 | A & B
--------------------------------------------------------------
0x00000 |        0        |          0           |     0
0x00001 |        1        |          0           |     0
0x10000 |        0        |          1           |     0
0x10001 |        1        |          1           |     1

The truth table of the replacement:

    x   | x & 0xFF00FF != 0
----------------------------
0x00000 |        0
0x00001 |        1
0x10000 |        1
0x10001 |        1

So they are different.

llvm-svn: 151691
2012-02-29 01:46:50 +00:00
Pete Cooper
ab5f2302dc Reverted r151620 - DSE: Shorten memset when a later store overwrites the start of it. There were all sorts of buildbot issues
llvm-svn: 151621
2012-02-28 05:06:24 +00:00
Pete Cooper
93352dcd53 DSE: Shorten memset when a later store overwrites the start of it
llvm-svn: 151620
2012-02-28 04:27:10 +00:00
Benjamin Kramer
e96ae5392c Plug a memleak in GlobalOpt.
Found by valgrind.

llvm-svn: 151525
2012-02-27 12:48:24 +00:00
Duncan Sands
0f1520e70b Micro-optimization, no functionality change.
llvm-svn: 151524
2012-02-27 12:11:41 +00:00
Duncan Sands
7dc8ff6615 The value numbering function is recursive, so it is possible for multiple new
value numbers to be assigned when calculating any particular value number.
Enhance the logic that detects new value numbers to take this into account,
for a tiny compile time speedup.  Fix a comment typo while there.

llvm-svn: 151522
2012-02-27 09:54:35 +00:00
Duncan Sands
9e95178a81 When performing a conditional branch depending on the value of a comparison
%cmp (eg: A==B) we already replace %cmp with "true" under the true edge, and
with "false" under the false edge.  This change enhances this to replace the
negated compare (A!=B) with "false" under the true edge and "true" under the
false edge.  Reported to improve perlbench results by 1%.

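An illustrative, hypothetical C++ consequence of the enhancement:

int f(int a, int b) {
  if (a == b)
    return (a != b) ? 1 : 0; // a != b is replaced by "false": folds to 0
  return (a != b) ? 2 : 3;   // a != b is replaced by "true": folds to 2
}
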
llvm-svn: 151517
2012-02-27 08:14:30 +00:00
Chad Rosier
7ebe024853 Add comment.
llvm-svn: 151431
2012-02-25 03:07:57 +00:00
Chad Rosier
096b8c0365 Add support for disabling llvm.lifetime intrinsics in the AlwaysInliner. These
are optimization hints, but at -O0 we're not optimizing.  This becomes a problem
when the alwaysinline attribute is abused.
rdar://10921594

llvm-svn: 151429
2012-02-25 02:56:01 +00:00
Chad Rosier
06b04ed725 Fix indentation.
llvm-svn: 151420
2012-02-25 01:10:59 +00:00