mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-20 03:23:01 +02:00
Commit Graph

680 Commits

Author SHA1 Message Date
Craig Topper
a1ec6376b9 [ValueTracking] Use APInt::sext instead of zext and setBitsFrom. NFC
llvm-svn: 300307
2017-04-14 06:43:29 +00:00
Craig Topper
494adc9634 [ValueTracking] Remove duplicate call to computeKnownBits for the operands of Select.
We call it unconditionally on the operands of the select. Then we decide if it's a min/max and call it on the min/max operands or on the select operands again. Either of those second calls will overwrite the results of the initial call, so we can just delete the first call.

llvm-svn: 300256
2017-04-13 20:39:37 +00:00
Craig Topper
3311db7fc2 [ValueTracking] Prevent a call to computeKnownBits if we already know the state of the bit we would calculate. Also reuse a temporary APInt instead of creating a new one.
llvm-svn: 300239
2017-04-13 19:04:45 +00:00
Craig Topper
6fe9669cc0 [ValueTracking] Move a temporary APInt instead of copying it.
llvm-svn: 300233
2017-04-13 18:25:53 +00:00
Craig Topper
7552ec6918 [ValueTracking] Teach GetUnderlyingObject to stop when it reaches an alloca instruction.
Previously it tried to call SimplifyInstruction, which doesn't know anything about alloca, so it defers to constant folding, which also doesn't do anything with alloca. This results in wasted cycles making calls that won't do anything. Given the frequency with which this function is called, this time adds up.

llvm-svn: 300118
2017-04-12 22:29:23 +00:00
Craig Topper
74bd2ab89a [APInt] Remove shift functions from APIntOps namespace. Replace the few users with the APInt class methods. NFCI
llvm-svn: 299248
2017-03-31 20:01:16 +00:00
Craig Topper
25553e107d Revert r298711 "[InstCombine] Provide a way to calculate KnownZero/One for Add/Sub in SimplifyDemandedUseBits without recursing into ComputeKnownBits"
Tsan bot is failing.

llvm-svn: 298745
2017-03-24 22:12:10 +00:00
Craig Topper
bfabb49a58 [InstCombine] Provide a way to calculate KnownZero/One for Add/Sub in SimplifyDemandedUseBits without recursing into ComputeKnownBits
SimplifyDemandedUseBits for Add/Sub already recurses down LHS and RHS to simplify bits. If that doesn't provide any simplifications, we fall back to calling computeKnownBits, which recurses again. Instead, just take the known bits for LHS and RHS we already have and call into a new function in ValueTracking that can calculate the known bits given the LHS/RHS bits.

llvm-svn: 298711
2017-03-24 16:56:51 +00:00
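For illustration of the change above, here is a minimal plain-C++ sketch (64-bit masks, made-up names, not LLVM's APInt-based implementation) of how known bits of an addition can be derived from the operands' known bits via carry propagation:

```
#include <cstdint>

// Known-zero / known-one masks for a 64-bit value.
struct Known {
  uint64_t Zero = 0;
  uint64_t One = 0;
};

// Known bits of X + Y given the known bits of X and Y.
Known knownBitsForAdd(Known L, Known R) {
  // Sums of the largest and smallest values consistent with the known bits.
  uint64_t PossibleSumZero = ~L.Zero + ~R.Zero;
  uint64_t PossibleSumOne = L.One + R.One;

  // Positions where the incoming carry is determined.
  uint64_t CarryKnownZero = ~(PossibleSumZero ^ L.Zero ^ R.Zero);
  uint64_t CarryKnownOne = PossibleSumOne ^ L.One ^ R.One;

  // A result bit is known only if both operand bits and the carry are known.
  uint64_t KnownMask = (L.Zero | L.One) & (R.Zero | R.One) &
                       (CarryKnownZero | CarryKnownOne);

  Known Out;
  Out.Zero = ~PossibleSumZero & KnownMask;
  Out.One = PossibleSumOne & KnownMask;
  return Out;
}
```

Subtraction can reuse the same scheme by swapping the RHS known-zero/known-one masks and feeding in a carry of one, since X - Y = X + ~Y + 1.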
Craig Topper
4f82790b32 [ValueTracking] Use uint64_t for CarryIn in computeKnownBitsAddSub instead of a creating a temporary APInt. NFC
llvm-svn: 298688
2017-03-24 05:38:09 +00:00
Craig Topper
b3d605ee1f [ValueTracking] Convert more places to use setHighBits/setLowBits/setSignBit. NFCI
llvm-svn: 298683
2017-03-24 03:57:24 +00:00
Craig Topper
930a98f541 [ValueTracking] Use APInt::isNegative instead of using operator[BitWidth-1]. NFCI
llvm-svn: 298584
2017-03-23 07:06:42 +00:00
Craig Topper
ed2e3a5ff2 [ValueTracking] Use setAllBits/setSignBit/setLowBits/setHighBits. NFCI
llvm-svn: 298583
2017-03-23 07:06:39 +00:00
Craig Topper
d7a1463461 [ValueTracking] Make sure we keep range metadata information when calculating known bits for calls to bitreverse intrinsic.
llvm-svn: 298488
2017-03-22 07:22:49 +00:00
Craig Topper
5d64eb532a [ValueTracking] use setLowBits/setHighBits/setBitsFrom to replace |= getHighBits/getLowBits. NFCI
llvm-svn: 298486
2017-03-22 06:19:37 +00:00
Craig Topper
0a85a90476 [ValueTracking] Remove deadish code from computeKnownBitsAddSub.
The code assigned to KnownZero, but later code unconditionally assigned over it. I'm pretty sure the later code handles the same cases (and more) equally well.

llvm-svn: 298190
2017-03-18 18:21:46 +00:00
Craig Topper
41e75a32fb [ValueTracking] Add APInt::setSignBit and use it to replace ORing with getSignBit which will malloc if the bit width is larger than 64.
llvm-svn: 298180
2017-03-18 04:01:29 +00:00
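A small illustrative sketch of the pattern change (the function and variable names are made up; it assumes LLVM headers are available to compile against):

```
#include "llvm/ADT/APInt.h"
using namespace llvm;

// Mark the sign bit of a known-one mask as set.
void markSignBitKnownOne(APInt &KnownOne) {
  // Old pattern: KnownOne |= APInt::getSignBit(KnownOne.getBitWidth());
  // That constructs a temporary APInt, which heap-allocates for widths > 64.
  KnownOne.setSignBit(); // in place, no temporary
}
```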
Oliver Stannard
63381d7b41 [ValueTracking] Out of range shifts might be undef
If it is possible for the RHS of a shift operation to be greater than or equal
to the bit-width, then the result might be undef, and we can't report any known
bits.

In some cases, this was allowing a transformation in instcombine which widened
an undef value from i1 to i32, increasing the range of values that a function
could return.

Differential revision: https://reviews.llvm.org/D30781

llvm-svn: 297724
2017-03-14 10:13:17 +00:00
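A minimal sketch of the guard being described (plain C++ with a hypothetical helper, not the actual ValueTracking code):

```
#include <cstdint>

// Only report known bits for a shift if every shift amount consistent with
// the amount's known-zero bits is strictly less than the bit width;
// otherwise the shift may produce undef and nothing can be claimed.
bool shiftAmountAlwaysInRange(uint64_t AmtKnownZero, unsigned BitWidth) {
  uint64_t MaxAmt = ~AmtKnownZero; // largest amount consistent with known bits
  if (BitWidth < 64)
    MaxAmt &= (UINT64_C(1) << BitWidth) - 1; // the amount is BitWidth wide
  return MaxAmt < BitWidth;
}
```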
Sebastian Pop
b86663486c Handle UnreachableInst in isGuaranteedToTransferExecutionToSuccessor
A block with an UnreachableInst does not transfer execution to a successor.
The problem was exposed by GVN-hoist. This patch fixes bug 32153.

Patch by Aditya Kumar.

Differential Revision: https://reviews.llvm.org/D30667

llvm-svn: 297254
2017-03-08 01:54:50 +00:00
Sanjoy Das
059733f666 [ValueTracking] Don't do an unchecked shift in ComputeNumSignBits
Summary:
Previously we used to return a bogus result, 0, for IR like `ashr %val,
-1`.

I've also added an assert checking that `ComputeNumSignBits` at least
returns 1.  That assert found an already checked in test case where we
were returning a bad result for `ashr %val, -1`.

Fixes PR32045.

Reviewers: spatel, majnemer

Reviewed By: spatel, majnemer

Subscribers: efriedma, mcrosier, llvm-commits

Differential Revision: https://reviews.llvm.org/D30311

llvm-svn: 296273
2017-02-25 20:30:45 +00:00
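For illustration, a bounds-checked sketch in the spirit of the fix (a hypothetical helper, not LLVM's code): the shift amount for `ashr %val, -1` shows up as a huge unsigned value, and shifting by it unchecked is both a wrong answer and undefined behavior in C++.

```
#include <algorithm>
#include <cstdint>

// Sign bits of a value with OpSignBits sign bits after `ashr` by ShAmt.
unsigned numSignBitsOfAShr(unsigned OpSignBits, uint64_t ShAmt,
                           unsigned BitWidth) {
  if (ShAmt >= BitWidth)
    return 1; // out-of-range shift: the IR may be undef; report the minimum
  // An arithmetic shift right replicates the sign bit ShAmt extra times.
  return (unsigned)std::min<uint64_t>(OpSignBits + ShAmt, BitWidth);
}
```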
Sanjoy Das
20031f752d [ValueTracking] Make poison propagation more aggressive
Summary:
Motivation: fix PR31181 without regression (the actual fix is still in
progress).  However, the actual content of PR31181 is not relevant
here.

This change makes poison propagation more aggressive in the following
cases:

 1. poison * Val == poison, for any Val.  In particular, this changes
    existing intentional and documented behavior in these two cases:
     a. Val is 0
     b. Val is 2^k * N
 2. poison << Val == poison, for any Val
 3. getelementptr is poison if any input is poison

I think all of these are justified (and are axiomatically true in the
new poison / undef model):

1a: we need poison * 0 to be poison to allow transforms like these:

  A * (B + C) ==> A * B + A * C

If poison * 0 were 0 then the above transform could not be allowed
since e.g. we could have A = poison, B = 1, C = -1, making the LHS

  poison * (1 + -1) = poison * 0 = 0

and the RHS

  poison * 1 + poison * -1 = poison + poison = poison

1b: we need e.g. poison * 4 to be poison since we want to allow

  A * 4 ==> A + A + A + A

If poison * 4 were a value with all of its bits poison except the
last two, then we'd not be able to do this transform since then if A
were poison the LHS would only be "partially" poison while the RHS
would be "full" poison.

2: Same reasoning as (1b); we'd like to have the following kinds of
transforms be legal:

  A << 1 ==> A + A

Reviewers: majnemer, efriedma

Subscribers: mcrosier, llvm-commits

Differential Revision: https://reviews.llvm.org/D30185

llvm-svn: 295809
2017-02-22 06:52:32 +00:00
Sanjoy Das
00beeea6bd [ValueTracking] clang-format a section I'm about to touch; NFC
(Whitespace only change)

llvm-svn: 295690
2017-02-21 02:42:42 +00:00
Sanjay Patel
8e7e7e2058 [ValueTracking] use nonnull argument attribute to eliminate null checks
Enhancing value tracking's analysis of null-ness was suggested in D27855, so here's a first attempt at that.

This is part of solving:
https://llvm.org/bugs/show_bug.cgi?id=28430

Differential Revision: https://reviews.llvm.org/D28204

llvm-svn: 294897
2017-02-12 15:35:34 +00:00
Sanjay Patel
00cf3d4d68 [ValueTracking] emit a remark when we detect a conflicting assumption (PR31809)
This is a follow-up to D29395 where we try to be good citizens and let the user know that
we've probably gone off the rails.

This should allow us to resolve:
https://llvm.org/bugs/show_bug.cgi?id=31809

Differential Revision: https://reviews.llvm.org/D29404

llvm-svn: 294208
2017-02-06 18:26:06 +00:00
Sanjay Patel
01c2913892 [ValueTracking] remove a FIXME for something we don't want to do; NFC
The comment was added with:
https://reviews.llvm.org/rL293773
...but there would be a cost to implement this and possibly no payoff.

llvm-svn: 293823
2017-02-01 22:27:34 +00:00
Sanjay Patel
57ac5b4c24 [ValueTracking] avoid crashing from bad assumptions (PR31809)
A program may contain llvm.assume info that disagrees with other analyses.
This may be caused by UB in the program, so we must not crash because of that.

As noted in the code comments:
https://llvm.org/bugs/show_bug.cgi?id=31809
...we can do better, but this at least avoids the assert/crash in the bug report.

Differential Revision: https://reviews.llvm.org/D29395

llvm-svn: 293773
2017-02-01 15:41:32 +00:00
Sanjay Patel
4eb6691ce0 [ValueTracking] clean up lookThroughCast; NFCI
1. Use auto with dyn_cast.
2. Don't use else after return.
3. Convert chain of 'else if' to switch.
4. Improve variable names.

llvm-svn: 293432
2017-01-29 16:34:57 +00:00
Justin Lebar
ed0bd55a60 [ValueTracking] Add comment that CannotBeOrderedLessThanZero does the wrong thing for powi.
Summary:
CannotBeOrderedLessThanZero(powi(x, exp)) returns true if
CannotBeOrderedLessThanZero(x).  But powi(-0, exp) is negative if exp is
odd, so we actually want to return SignBitMustBeZero(x).

Except that also isn't right, because we want to return true if x is
NaN, even if x has a negative sign bit.

What we really need in order to fix this is a consistent approach in
this function to handling the sign bit of NaNs.  Without this it's very
difficult to say what the correct behavior here is.

Reviewers: hfinkel, efriedma, sanjoy

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D28927

llvm-svn: 293243
2017-01-27 00:58:34 +00:00
Justin Lebar
f25dcc8534 [ValueTracking] Implement SignBitMustBeZero correctly for sqrt.
Summary:
Previously we assumed that the result of sqrt(x) always had 0 as its
sign bit.  But sqrt(-0) == -0.

Reviewers: hfinkel, efriedma, sanjoy

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D28928

llvm-svn: 293115
2017-01-26 00:10:26 +00:00
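A quick worked example of the observation above (standard C++, no LLVM needed): IEEE 754 defines sqrt(-0.0) as -0.0, so the result's sign bit is not always zero.

```
#include <cmath>
#include <cstdio>

int main() {
  double R = std::sqrt(-0.0);
  // Prints "-0" with signbit = 1 on IEEE-754 platforms.
  std::printf("sqrt(-0.0) = %g, signbit = %d\n", R, std::signbit(R));
  return 0;
}
```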
whitequark
e14626baaf Mark @llvm.powi.* as safe to speculatively execute.
Floating point intrinsics in LLVM are generally not speculatively
executed, since most of them are defined to behave the same as libm
functions, which set errno.

However, the @llvm.powi.* intrinsics do not correspond to any libm
function, and lack any defined error handling semantics in LangRef.
They most certainly do not alter errno.

llvm-svn: 293041
2017-01-25 09:32:30 +00:00
David L. Jones
268960185f [Analysis] Add LibFunc_ prefix to enums in TargetLibraryInfo. (NFC)
Summary:
The LibFunc::Func enum holds enumerators named for libc functions.
Unfortunately, there are real situations, including libc implementations, where
function names are actually macros (musl uses "#define fopen64 fopen", for
example; any other transitively visible macro would have similar effects).

Strictly speaking, a conforming C++ Standard Library should provide any such
macros as functions instead (via <cstdio>). However, there are some "library"
functions which are not part of the standard, and thus not subject to this
rule (fopen64, for example). So, in order to be both portable and consistent,
the enum should not use the bare function names.

The old enum naming used a namespace LibFunc and an enum Func, with bare
enumerators. This patch changes LibFunc to be an enum with enumerators prefixed
with "LibFunc_". (Unfortunately, a scoped enum is not sufficient to override
macros.)

There are additional changes required in clang.

Reviewers: rsmith

Subscribers: mehdi_amini, mzolotukhin, nemanjai, llvm-commits

Differential Revision: https://reviews.llvm.org/D28476

llvm-svn: 292848
2017-01-23 23:16:46 +00:00
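An illustrative sketch of the macro collision described above (the enum here is a stand-in, not the real TargetLibraryInfo definition):

```
#include <cstdio>

// Some libc headers do effectively this, so a bare enumerator named fopen64
// would be rewritten to fopen by the preprocessor and clash with the fopen
// enumerator; the LibFunc_ prefix sidesteps the problem.
#define fopen64 fopen

enum LibFunc {
  LibFunc_fopen,
  LibFunc_fopen64, // unaffected by the macro
  NumLibFuncs
};

int main() {
  std::printf("%d library functions modeled\n", (int)NumLibFuncs);
  return 0;
}
```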
Sanjay Patel
d0d8940769 [ValueTracking] tighten up matchMinMax(); NFCI
This is similar to what the caller (matchSelectPattern()) does. In all
cases where we succeed in matching a min/max pattern, the values in
that pattern will be the values of the 'select', so hoist that and
remove a bunch of duplicated code.

llvm-svn: 292725
2017-01-21 17:51:25 +00:00
Sanjay Patel
fc6ca85023 [ValueTracking] recognize variations of 'clamp' to improve codegen (PR31693)
By enhancing value tracking, we allow an existing min/max canonicalization to
kick in and improve codegen for several targets that have min/max instructions.

Unfortunately, recognizing min/max in value tracking may cause us to hit
a hack in InstCombiner::visitICmpInst() more often:
http://lists.llvm.org/pipermail/llvm-dev/2017-January/109340.html
...but I'm hoping we can remove that soon.

Correctness proofs based on Alive:

Name: smaxmin
Pre: C1 < C2
%cmp2 = icmp slt i8 %x, C2
%min = select i1 %cmp2, i8 %x, i8 C2
%cmp3 = icmp slt i8 %x, C1
%r = select i1 %cmp3, i8 C1, i8 %min
=>
%cmp2 = icmp slt i8 %x, C2
%min = select i1 %cmp2, i8 %x, i8 C2
%cmp1 = icmp sgt i8 %min, C1
%r = select i1 %cmp1, i8 %min, i8 C1

Name: sminmax
Pre: C1 > C2
%cmp2 = icmp sgt i8 %x, C2
%max = select i1 %cmp2, i8 %x, i8 C2
%cmp3 = icmp sgt i8 %x, C1
%r = select i1 %cmp3, i8 C1, i8 %max
=>
%cmp2 = icmp sgt i8 %x, C2
%max = select i1 %cmp2, i8 %x, i8 C2
%cmp1 = icmp slt i8 %max, C1
%r = select i1 %cmp1, i8 %max, i8 C1

---------------------------------------- 
Optimization: smaxmin 
Done: 1 
Optimization is correct! 
---------------------------------------- 
Optimization: sminmax 
Done: 1 
Optimization is correct!



Name: umaxmin
Pre: C1 u< C2
%cmp2 = icmp ult i8 %x, C2
%min = select i1 %cmp2, i8 %x, i8 C2
%cmp3 = icmp ult i8 %x, C1
%r = select i1 %cmp3, i8 C1, i8 %min
=>
%cmp2 = icmp ult i8 %x, C2
%min = select i1 %cmp2, i8 %x, i8 C2
%cmp1 = icmp ugt i8 %min, C1
%r = select i1 %cmp1, i8 %min, i8 C1

Name: uminmax
Pre: C1 u> C2
%cmp2 = icmp ugt i8 %x, C2
%max = select i1 %cmp2, i8 %x, i8 C2
%cmp3 = icmp ugt i8 %x, C1
%r = select i1 %cmp3, i8 C1, i8 %max
=>
%cmp2 = icmp ugt i8 %x, C2
%max = select i1 %cmp2, i8 %x, i8 C2
%cmp1 = icmp ult i8 %max, C1
%r = select i1 %cmp1, i8 %max, i8 C1

---------------------------------------- 
Optimization: umaxmin 
Done: 1 
Optimization is correct! 
---------------------------------------- 
Optimization: uminmax 
Done: 1 
Optimization is correct! 
 

llvm-svn: 292660
2017-01-20 22:18:47 +00:00
Sanjay Patel
d9256d18d9 [ValueTracking] recognize a 'not' of an assumed condition as false
Also, add the corresponding match to the AssumptionCache's 'Affected Values' list.

Differential Revision: https://reviews.llvm.org/D28485

llvm-svn: 292239
2017-01-17 18:15:49 +00:00
Chad Rosier
831d15d618 [ValueTracking] Extend known bits to understand @llvm.bitreverse.
Differential Revision: https://reviews.llvm.org/D28780

llvm-svn: 292233
2017-01-17 17:23:51 +00:00
Malcolm Parsons
26eb6a0783 Remove unused lambda captures. NFC
llvm-svn: 291916
2017-01-13 17:12:16 +00:00
Hal Finkel
a3dd8b8968 Make processing @llvm.assume more efficient - Add affected values to the assumption cache
Here's my second try at making @llvm.assume processing more efficient. My
previous attempt, which leveraged operand bundles, r289755, didn't end up
working: it did make assume processing more efficient but eliminating the
assumption cache made ephemeral value computation too expensive. This is a
more-targeted change. We'll keep the assumption cache, but extend it to keep a
map of affected values (i.e. values about which an assumption might provide
some information) to the corresponding assumption intrinsics. This allows
ValueTracking and LVI to find assumptions relevant to the value being queried
without scanning all assumptions in the function. The fact that ValueTracking
started doing O(number of assumptions in the function) work, for every
known-bits query, has become prohibitively expensive in some cases.

As discussed during the review, this is a pragmatic fix that, longer term, will
likely be replaced by a more-principled solution (perhaps based on an extended
SSA form).

Differential Revision: https://reviews.llvm.org/D28459

llvm-svn: 291671
2017-01-11 13:24:24 +00:00
Matt Arsenault
588e04537c InstSimplify: Eliminate fabs on known positive
llvm-svn: 291624
2017-01-11 00:33:24 +00:00
Xin Tong
9aa14f6869 Intrinsic::Bitreverse is safe to speculate
Summary: Intrinsic::Bitreverse is safe to speculate

Reviewers: hfinkel, mkuper, arsenm, jmolloy

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D28471

llvm-svn: 291456
2017-01-09 17:57:08 +00:00
Sanjay Patel
3bc5e43c7e [ValueTracking] remove stale comments; NFC
The checks were improved with:
https://reviews.llvm.org/rL290194

llvm-svn: 290826
2017-01-02 19:04:07 +00:00
Sanjoy Das
a2fc5cf9c6 Fix an issue with isGuaranteedToTransferExecutionToSuccessor
I'm not sure if this was intentional, but today
isGuaranteedToTransferExecutionToSuccessor returns true for readonly and
argmemonly calls that may throw.  This commit changes the function to
not implicitly infer nounwind this way.

Even if we eventually specify readonly calls as not throwing,
isGuaranteedToTransferExecutionToSuccessor is not the best place to
infer that.  We should instead teach FunctionAttrs or some other such
pass to tag readonly functions / calls as nounwind instead.

llvm-svn: 290794
2016-12-31 22:12:34 +00:00
Sanjoy Das
20575c70ce Avoid const_cast; NFC
llvm-svn: 290793
2016-12-31 22:12:31 +00:00
Sanjay Patel
5536521abd [ValueTracking] make dominator tree requirement explicit for isKnownNonNullFromDominatingCondition(); NFCI
I don't think this hole is currently exposed, but I crashed regression tests for
jump-threading and loop-vectorize after I added calls to isKnownNonNullAt() in
InstSimplify as part of trying to solve PR28430:
https://llvm.org/bugs/show_bug.cgi?id=28430

That's because they call into value tracking with a context instruction, but no
other parts of the query structure filled in.

For more background, see the discussion in:
https://reviews.llvm.org/D27855

llvm-svn: 290786
2016-12-31 17:37:01 +00:00
Matt Arsenault
4c8965d84d Use MaxDepth instead of repeating its value
llvm-svn: 290194
2016-12-20 19:06:15 +00:00
Daniel Jasper
162ffcacd6 Revert @llvm.assume with operator bundles (r289755-r289757)
This creates non-linear behavior in the inliner (see more details in
r289755's commit thread).

llvm-svn: 290086
2016-12-19 08:22:17 +00:00
Hal Finkel
f224db75d2 Remove the AssumptionCache
After r289755, the AssumptionCache is no longer needed. Variables affected by
assumptions are now found by using the new operand-bundle-based scheme. This
new scheme is more computationally efficient, and also we need much less
code...

llvm-svn: 289756
2016-12-15 03:02:15 +00:00
Hal Finkel
502475d4f3 Make processing @llvm.assume more efficient by using operand bundles
There was an efficiency problem with how we processed @llvm.assume in
ValueTracking (and other places). The AssumptionCache tracked all of the
assumptions in a given function. In order to find assumptions relevant to
computing known bits, etc. we searched every assumption in the function. For
ValueTracking, that means that we did O(#assumes * #values) work in InstCombine
and other passes (with a constant factor that can be quite large because we'd
repeat this search at every level of recursion of the analysis).

Several of us discussed this situation at the last developers' meeting, and
this implements the discussed solution: Make the values that an assume might
affect operands of the assume itself. To avoid exposing this detail to
frontends and passes that need not worry about it, I've used the new
operand-bundle feature to add these extra call "operands" in a way that does
not affect the intrinsic's signature. I think this solution is relatively
clean. InstCombine adds these extra operands based on what ValueTracking, LVI,
etc. will need and then those passes need only search the users of the values
under consideration. This should fix the computational-complexity problem.

At this point, no passes depend on the AssumptionCache, and so I'll remove
that as a follow-up change.

Differential Revision: https://reviews.llvm.org/D27259

llvm-svn: 289755
2016-12-15 02:53:42 +00:00
Peter Collingbourne
a2d4395226 IR, X86: Understand !absolute_symbol metadata on global variables.
Summary:
Attaching !absolute_symbol to a global variable does two things:
1) Marks it as an absolute symbol reference.
2) Specifies the value range of that symbol's address.
Teach the X86 backend to allow absolute symbols to appear in place of
immediates by extending the relocImm and mov64imm32 matchers. Start using
relocImm in more places where it is legal.

As previously proposed on llvm-dev:
http://lists.llvm.org/pipermail/llvm-dev/2016-October/105800.html

Differential Revision: https://reviews.llvm.org/D25878

llvm-svn: 289087
2016-12-08 19:01:00 +00:00
Peter Collingbourne
bc87b9fd38 IR: Change the gep_type_iterator API to avoid always exposing the "current" type.
Instead, expose whether the current type is an array or a struct, if an array
what the upper bound is, and if a struct the struct type itself. This is
in preparation for a later change which will make PointerType derive from
Type rather than SequentialType.

Differential Revision: https://reviews.llvm.org/D26594

llvm-svn: 288458
2016-12-02 02:24:42 +00:00
Yaxun Liu
7bae0ef103 Fix known zero bits for addrspacecast.
Currently LLVM assumes that a pointer addrspacecasted to a different addr space is equivalent to trunc or zext bitwise, which is not true. For example, in amdgcn target, when a null pointer is addrspacecasted from addr space 4 to 0, its value is changed from i64 0 to i32 -1.

This patch teaches LLVM not to assume known bits of addrspacecast instruction to its operand.

Differential Revision: https://reviews.llvm.org/D26803

llvm-svn: 287545
2016-11-21 15:42:31 +00:00
Sanjay Patel
a6fc956a6e [ValueTracking] recognize even more variants of smin/smax
Similar to:
https://reviews.llvm.org/rL285499
https://reviews.llvm.org/rL286318

We can't minimally expose this in IR tests because we don't have min/max intrinsics,
but the difference is visible in codegen because SelectionDAGBuilder::visitSelect() 
uses matchSelectPattern().

We're not canonicalizing these patterns in IR (yet), so I don't expect there to be any
regressions as noted here:
http://lists.llvm.org/pipermail/llvm-dev/2016-November/106868.html

llvm-svn: 286776
2016-11-13 20:04:52 +00:00
Sanjay Patel
5b14df241a [ValueTracking] move min/max matching to helper function; NFCI
llvm-svn: 286772
2016-11-13 19:30:19 +00:00
Sanjay Patel
8c519f1214 [ValueTracking] recognize obfuscated variants of umin/umax
The smallest tests that expose this are codegen tests (because SelectionDAGBuilder::visitSelect() uses matchSelectPattern
to create UMAX/UMIN nodes), but it's also possible to see the effects in IR alone with folds of min/max pairs.

If these were written as unsigned compares in IR, InstCombine canonicalizes the unsigned compares to signed compares. 
Ie, running the optimizer pessimizes the codegen for this case without this patch:

define <4 x i32> @umax_vec(<4 x i32> %x) {
  %cmp = icmp ugt <4 x i32> %x, <i32 2147483647, i32 2147483647, i32 2147483647, i32 2147483647>
  %sel = select <4 x i1> %cmp, <4 x i32> %x, <4 x i32> <i32 2147483647, i32 2147483647, i32 2147483647, i32 2147483647>
  ret <4 x i32> %sel
}

$ ./opt umax.ll -S | ./llc -o - -mattr=avx

vpmaxud LCPI0_0(%rip), %xmm0, %xmm0

$ ./opt -instcombine umax.ll -S | ./llc -o - -mattr=avx

vpxor %xmm1, %xmm1, %xmm1
vpcmpgtd  %xmm0, %xmm1, %xmm1
vmovaps LCPI0_0(%rip), %xmm2    ## xmm2 = [2147483647,2147483647,2147483647,2147483647]
vblendvps %xmm1, %xmm0, %xmm2, %xmm0

Differential Revision: https://reviews.llvm.org/D26096

llvm-svn: 286318
2016-11-09 00:24:44 +00:00
Sanjay Patel
3c52fdbd3d [ValueTracking] remove TODO comment; NFC
InstCombine should always canonicalize patterns like the one shown in the comment
when visiting 'select' insts in adjustMinMax().

Scalars were already handled there, and vector splats are handled after:
https://reviews.llvm.org/rL285732

llvm-svn: 285744
2016-11-01 20:43:00 +00:00
Sanjay Patel
c3d6bced70 [ValueTracking] recognize more variants of smin/smax
Try harder to detect obfuscated min/max patterns: the initial pattern was added with D9352 / rL236202. 
There was a bug fix for PR27137 at rL264996, but I think we can do better by folding the corresponding
smax pattern and commuted variants.

The codegen tests demonstrate the effect of ValueTracking on the backend via SelectionDAGBuilder. We
can't expose these differences minimally in IR because we don't have smin/smax intrinsics for IR.

Differential Revision: https://reviews.llvm.org/D26091

llvm-svn: 285499
2016-10-29 16:21:19 +00:00
Sanjay Patel
87ccc5a816 [ValueTracking] fix matchSelectPattern to allow vector splat folds of min/max/abs/nabs
llvm-svn: 285303
2016-10-27 15:26:10 +00:00
Peter Collingbourne
cec8c686b3 Analysis: Move llvm::getConstantRangeFromMetadata to IR library.
We're about to start using it there.

Differential Revision: https://reviews.llvm.org/D25877

llvm-svn: 284865
2016-10-21 19:59:26 +00:00
Li Huang
92d10bb262 Test commit. (NFC)
llvm-svn: 284307
2016-10-15 19:00:04 +00:00
Artur Pilipenko
6245b73221 [ValueTracking] An improvement to IR ValueTracking on Non-negative Integers
Since this change is known to cause performance degradations in some cases, it's committed under a temporary flag which is turned off by default.

Patch by Li Huang

Differential Revision: https://reviews.llvm.org/D18777

llvm-svn: 284022
2016-10-12 16:18:43 +00:00
Tom Stellard
6fa6d1d712 [ValueTracking] Fix crash in GetPointerBaseWithConstantOffset()
Summary:
While walking defs of pointer operands we were assuming that the pointer
size would remain constant.  This is not true, because addrspacecast
instructions may cast the pointer to an address space with a different
pointer width.

This partially reverts r282612, which was a more conservative solution
to this problem.

Reviewers: reames, sanjoy, apilipenko

Subscribers: wdng, llvm-commits

Differential Revision: https://reviews.llvm.org/D24772

llvm-svn: 283557
2016-10-07 14:23:29 +00:00
Bjorn Pettersson
403539ab64 [ValueTracking] Teach computeKnownBits and ComputeNumSignBits to look through ExtractElement.
Summary:
The computeKnownBits and ComputeNumSignBits functions in ValueTracking can now do a simple look-through of ExtractElement.

Reviewers: majnemer, spatel

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D24955

llvm-svn: 283434
2016-10-06 09:56:21 +00:00
Artur Pilipenko
e07af89244 Don't look through addrspacecast in GetPointerBaseWithConstantOffset
Pointers in different addrspaces can have different sizes, so it's not valid to look through an addrspace cast when calculating the base and offset for a value.

This is similar to D13008.

Reviewed By: reames

Differential Revision: https://reviews.llvm.org/D24729

llvm-svn: 282612
2016-09-28 17:57:16 +00:00
Duncan P. N. Exon Smith
2f789947a5 Analysis: Return early for UndefValue in computeKnownBits
There is no benefit in looking through assumptions on UndefValue to
guess known bits.  Return early to avoid walking their use-lists, and
assert that all instances of ConstantData are handled here for similar
reasons (UndefValue was the only integer/pointer holdout).

llvm-svn: 282337
2016-09-24 20:42:02 +00:00
Duncan P. N. Exon Smith
b94ad7ec1b Analysis: Return early in isKnownNonNullAt for ConstantData
Check and return early for ConstantPointerNull and UndefValue
specifically in isKnownNonNullAt, and assert that ConstantData never
make it to isKnownNonNullFromDominatingCondition.

This confirms that isKnownNonNullFromDominatingCondition never walks
through the use-list of an instance of ConstantData.  Given that such
use-lists cross module boundaries, it never really made sense to do so,
and was potentially very expensive.

llvm-svn: 282333
2016-09-24 19:39:47 +00:00
Sanjay Patel
11e8147804 [InstCombine] allow vector types for constant folding / computeKnownBits (PR24942)
computeKnownBits() already works for integer vectors, so allow vector types when calling that from InstCombine.

I don't think the change to use m_APInt in computeKnownBits is strictly necessary because we do check for 
ConstantVector later, but it's more efficient to handle the splat case without needing to loop on vector elements.

This should work with InstSimplify, but doesn't yet, so I made that a FIXME comment on the test for PR24942:
https://llvm.org/bugs/show_bug.cgi?id=24942

Differential Revision: https://reviews.llvm.org/D24677

llvm-svn: 281777
2016-09-16 21:20:36 +00:00
Evgeny Stupachenko
a52e9dab82 The patch improves ValueTracking on left shift with nsw flag.
Summary:
The patch fixes PR28946.

Reviewers: majnemer, sanjoy

Differential Revision: http://reviews.llvm.org/D23296

From: Li Huang
llvm-svn: 279684
2016-08-24 23:01:33 +00:00
David Majnemer
70561a7ec3 [ValueTracking] Use a function_ref to avoid multiple instantiations
No functional change intended, this should just be a code size
improvement.

llvm-svn: 279563
2016-08-23 20:52:00 +00:00
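A small sketch of the technique (illustrative names; assumes LLVM headers are on the include path): passing the callback as `llvm::function_ref` gives a single instantiation of the callee, whereas a template parameter would instantiate it once per distinct lambda type.

```
#include <cstdio>
#include "llvm/ADT/STLExtras.h"

// Compiled exactly once, no matter how many different lambdas callers pass.
static int countMatching(const int *Begin, const int *End,
                         llvm::function_ref<bool(int)> Pred) {
  int N = 0;
  for (const int *I = Begin; I != End; ++I)
    if (Pred(*I))
      ++N;
  return N;
}

int main() {
  int Vals[] = {1, -2, 3, -4};
  std::printf("%d positive\n",
              countMatching(Vals, Vals + 4, [](int V) { return V > 0; }));
  return 0;
}
```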
Artur Pilipenko
7913289c6e Revert -r278267 [ValueTracking] An improvement to IR ValueTracking on Non-negative Integers
This change caused a performance regression on MultiSource/Benchmarks/TSVC/Symbolics-flt/Symbolics-flt from LNT and some other benchmarks.

See https://reviews.llvm.org/D18777 for details.

llvm-svn: 279433
2016-08-22 13:14:07 +00:00
Justin Bogner
507d362929 Replace a few more "fall through" comments with LLVM_FALLTHROUGH
Follow up to r278902. I had missed "fall through", with a space.

llvm-svn: 278970
2016-08-17 20:30:52 +00:00
Sanjoy Das
2169ec7470 Revert "[ValueTracking] Improve ValueTracking on left shift with nsw flag"
This reverts commit r278172.  It causes PR28946.

llvm-svn: 278740
2016-08-15 21:01:31 +00:00
Pete Cooper
6327dd4768 Constify ValueTracking. NFC.
Almost all of the methods here only analyse Values as opposed to
mutating them.  Mark all of the easy ones as const.

llvm-svn: 278585
2016-08-13 01:05:32 +00:00
Pete Cooper
fdd29b1f85 Refactor isValidAssumeForContext to reduce duplication and indentation. NFC.
This method had some duplicate code when we did or did not have a dom tree.  Refactor
it to remove the duplication and also clean up the control flow.

llvm-svn: 278450
2016-08-12 01:00:15 +00:00
Pete Cooper
5b58e1879b Remove unnecessary extra version of isValidAssumeForContext. NFC.
There were 2 versions of this method.  A public one which takes a
const Instruction* and a private implementation which takes a mutable
Value* and casts to an Instruction*.

There was no need for the 2 versions as all callers pass a const Instruction*
and there was no need for a mutable pointer as we only do analysis here.

llvm-svn: 278434
2016-08-11 22:23:07 +00:00
David Majnemer
5423e4bff5 Use range algorithms instead of unpacking begin/end
No functionality change is intended.

llvm-svn: 278417
2016-08-11 21:15:00 +00:00
Andrew Kaylor
6c20f4ff06 [ValueTracking] An improvement to IR ValueTracking on Non-negative Integers
Patch by Li Huang

Differential Revision: https://reviews.llvm.org/D18777

llvm-svn: 278267
2016-08-10 18:47:19 +00:00
Andrew Kaylor
3ceabfd932 [ValueTracking] Improve ValueTracking on left shift with nsw flag
Patch by Li Huang

Differential Revision: https://reviews.llvm.org/D23296

llvm-svn: 278172
2016-08-09 22:41:35 +00:00
David Majnemer
cbcaf6adec [ValueTracking] Teach computeKnownBits about [su]min/max
Reasoning about a select in terms of a min or max allows us to derive a
tighter bound on the result.

llvm-svn: 277914
2016-08-06 08:16:00 +00:00
Sanjoy Das
f04600393d [ValueTracking] Use Instruction::getFunction; NFC
llvm-svn: 275465
2016-07-14 20:19:01 +00:00
Hal Finkel
66f66627ed Teach computeKnownBits to look through returned-argument functions
If a function is known to return one of its arguments, we can use that in order
to compute known bits of the return value.

Differential Revision: http://reviews.llvm.org/D9397

llvm-svn: 275036
2016-07-11 02:25:14 +00:00
Hal Finkel
9dd10de0f9 BasicAA should look through functions with returned arguments
Motivated by the work on the llvm.noalias intrinsic, teach BasicAA to look
through returned-argument functions when answering queries. This is essential
so that we don't loose all other AA information when supplementing with
llvm.noalias.

Differential Revision: http://reviews.llvm.org/D9383

llvm-svn: 275035
2016-07-11 01:32:20 +00:00
Sean Silva
145ca3d2aa Remove dead TLI arg of isKnownNonNull and propagate deadness. NFC.
This actually uncovered a surprisingly large chain of ultimately unused
TLI args.
From what I can gather, this argument is a remnant of when
isKnownNonNull would look at the TLI directly.
The current approach seems to be that InferFunctionAttrs runs early in
the pipeline and uses TLI to annotate the TLI-dependent non-null
information as return attributes.

This also removes the dependence of functionattrs on TLI altogether.

llvm-svn: 274455
2016-07-02 23:47:27 +00:00
Craig Topper
1e3771f34a Revert "[ValueTracking] Teach computeKnownBits for PHI nodes to compute sign bit for a recurrence with a NSW addition."
This is breaking an optimization remark test in clang. I've identified a couple of fixes for that, but want to understand it better before I commit to anything.

llvm-svn: 274102
2016-06-29 04:57:00 +00:00
Craig Topper
e15fd1d997 [ValueTracking] Teach computeKnownBits for PHI nodes to compute sign bit for a recurrence with a NSW addition.
If an operation for a recurrence is an addition with no signed wrap and both input sign bits are 0, then the result sign bit must also be 0. Similarly for the negative case.

I found this deficiency while playing around with a loop in the x86 backend that contained a signed division that could be optimized into an unsigned division if we could prove both inputs were positive, one of them being the loop induction variable. With this patch we can perform the conversion for this case. One of the test cases here is a contrived variation of the loop I was looking at.

Differential revision: http://reviews.llvm.org/D21493

llvm-svn: 274098
2016-06-29 03:46:47 +00:00
Sanjay Patel
04c219ebba [ValueTracking] simplify logic in ComputeNumSignBits (NFCI)
This was noted in http://reviews.llvm.org/D21610 . The previous code
predated the use of APInt ( http://reviews.llvm.org/rL47654 ), so it
had to account for the fixed width of uint64_t.

Now that we're using the variable width APInt, we can remove some
complexity.

llvm-svn: 273584
2016-06-23 17:41:59 +00:00
Sanjay Patel
4cc3c35fb0 [ValueTracking] improve ComputeNumSignBits for vector constants
This is similar to the computeKnownBits improvement in rL268479. 
There's probably more we can do for vector logic instructions, but 
this should let us see non-splat constant masking ops that can
become vector selects instead of and/andn/or sequences.

Differential Revision: http://reviews.llvm.org/D21610

llvm-svn: 273459
2016-06-22 19:20:59 +00:00
Sanjoy Das
c56c5dee21 [ValueTracking] Calls to @llvm.assume always return
This change teaches llvm::isGuaranteedToTransferExecutionToSuccessor
that calls to @llvm.assume always terminate.  Most other relevant
intrinsics should be covered by the "CS.onlyReadsMemory() ||
CS.onlyAccessesArgMemory()" bit, but we were missing @llvm.assume
because we state that it clobbers memory.

Added an LICM test case, but this change is not specific to LICM.

llvm-svn: 272703
2016-06-14 20:23:16 +00:00
Eli Friedman
a67cf70e74 [LICM] Make isGuaranteedToExecute more accurate.
Summary:
Make isGuaranteedToExecute use the
isGuaranteedToTransferExecutionToSuccessor helper, and make that helper
a bit more accurate.

There's a potential performance impact here from assuming that arbitrary
calls might not return. This probably has little impact on loads and
stores to a pointer because most things alias analysis can reason about
are dereferenceable anyway. The other impacts, like less aggressive
hoisting of sdiv by a variable and less aggressive hoisting around
volatile memory operations, are unlikely to matter for real code.

This also impacts SCEV, which uses the same helper.  It's a minor
improvement there because we can tell that, for example, memcpy always
returns normally. Strictly speaking, it's also introducing
a bug, but it's not any worse than everywhere else we assume readonly
functions terminate.

Fixes http://llvm.org/PR27857.

Reviewers: hfinkel, reames, chandlerc, sanjoy

Subscribers: broune, llvm-commits

Differential Revision: http://reviews.llvm.org/D21167

llvm-svn: 272489
2016-06-11 21:48:25 +00:00
Sanjoy Das
45bd5cf143 Teach isGuaranteedToTransferExecutionToSuccessor about debug info intrinsics
Calls to `@llvm.dbg.*` can be assumed to terminate.

llvm-svn: 272180
2016-06-08 17:48:36 +00:00
Benjamin Kramer
5d5a0e4f68 Avoid copies of std::strings and APInt/APFloats where we only read from them
As suggested by clang-tidy's performance-unnecessary-copy-initialization.
This can easily hit lifetime issues, so I audited every change and ran the
tests under asan, which came back clean.

llvm-svn: 272126
2016-06-08 10:01:20 +00:00
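An illustrative example of the pattern the clang-tidy check flags (not code from this commit): bind a read-only local to a const reference instead of copy-initializing it.

```
#include <cstdio>
#include <map>
#include <string>

static void printNames(const std::map<int, std::string> &M) {
  for (const auto &KV : M) {
    // Was: std::string Name = KV.second;  (copies the string every iteration)
    const std::string &Name = KV.second;   // read-only, so borrow instead
    std::printf("%s\n", Name.c_str());
  }
}

int main() {
  std::map<int, std::string> M = {{1, "alpha"}, {2, "beta"}};
  printNames(M);
  return 0;
}
```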
Sanjay Patel
02638731c1 transform obscured FP sign bit ops into a fabs/fneg using TLI hook
This is effectively a revert of:
http://reviews.llvm.org/rL249702 - [InstCombine] transform masking off of an FP sign bit into a fabs() intrinsic call (PR24886)
and:
http://reviews.llvm.org/rL249701 - [ValueTracking] teach computeKnownBits that a fabs() clears sign bits
and a reimplementation as a DAG combine for targets that have IEEE754-compliant fabs/fneg instructions.

This is intended to resolve the objections raised on the dev list:
http://lists.llvm.org/pipermail/llvm-dev/2016-April/098154.html
and:
https://llvm.org/bugs/show_bug.cgi?id=24886#c4

In the interest of patch minimalism, I've only partly enabled AArch64. PowerPC, MIPS, x86 and others can enable later.

Differential Revision: http://reviews.llvm.org/D19391

llvm-svn: 271573
2016-06-02 20:01:37 +00:00
Sanjoy Das
5e3a215bf7 [SCEV] See through op.with.overflow intrinsics (re-apply)
Summary:
This change teaches SCEV to reduce `(extractvalue
0 (op.with.overflow X Y))` into `op X Y` (with a no-wrap tag if
possible).

This was first checked in at r265912 but reverted in r265950 because it
exposed some issues around how SCEV handled post-inc add recurrences.
Those issues have now been fixed.

Reviewers: atrick, regehr

Subscribers: mcrosier, mzolotukhin, llvm-commits

Differential Revision: http://reviews.llvm.org/D18684

llvm-svn: 271152
2016-05-29 00:34:42 +00:00
Sanjoy Das
ad0baa641e [ValueTracking] ICmp instructions propagate poison
This is a stripped down version of D19211, leaving out the questionable
"branching in poison is UB" bit.

llvm-svn: 271150
2016-05-29 00:31:18 +00:00
Sanjay Patel
d1d88e8e3a [ValueTracking, InstSimplify] extend isKnownNonZero() to handle vector constants
Similar in spirit to D20497 :
If all elements of a constant vector are known non-zero, then we can say that the
whole vector is known non-zero.

It seems like we could extend this to FP scalar/vector too, but isKnownNonZero()
says it only works for integers and pointers for now.

Differential Revision: http://reviews.llvm.org/D20544

llvm-svn: 270562
2016-05-24 14:18:49 +00:00
Sanjay Patel
17cf37684e fix formatting; NFC
llvm-svn: 270465
2016-05-23 17:57:54 +00:00
Sanjay Patel
80f08c1c8b use 'auto' with 'dyn_cast'; fix formatting; NFC
llvm-svn: 270370
2016-05-22 16:07:20 +00:00
Sanjay Patel
be15a9a346 [ValueTracking, InstCombine] extend isKnownToBeAPowerOfTwo() to handle vector splat constants
We could try harder to handle non-splat vector constants too, 
but that seems much rarer to me.

Note that the div test isn't resolved because there's a check
for isIntegerTy() guarding that transform.

Differential Revision: http://reviews.llvm.org/D20497

llvm-svn: 270369
2016-05-22 15:41:53 +00:00
Sanjoy Das
bbb7dedca9 [ValueTracking] Use guards to prove non-nullness of a value
Reviewers: apilipenko, majnemer, reames

Subscribers: mcrosier, llvm-commits

Differential Revision: http://reviews.llvm.org/D20044

llvm-svn: 269008
2016-05-10 02:35:44 +00:00
Sanjoy Das
fc5838c877 [ValueTracking] Hoist some computation out of a loop; NFC
There is no need to match the comparison instruction repeatedly.

llvm-svn: 268836
2016-05-07 02:08:24 +00:00
Sanjoy Das
dff69bed0a Clean up comment; NFC
llvm-svn: 268835
2016-05-07 02:08:22 +00:00
Sanjoy Das
d8f828a34c Delete trailing whitespace; NFC
llvm-svn: 268834
2016-05-07 02:08:15 +00:00
Chad Rosier
5695ddaafe [ValueTracking] Early exit when further analysis won't be fruitful.
This should be NFC in the context of codegen, but may have positive
implications on compile-time.

llvm-svn: 268651
2016-05-05 17:41:19 +00:00
Chad Rosier
8c9359c0d4 [ValueTracking] Improve isImpliedCondition for matching LHS and Imm RHSs.
llvm-svn: 268636
2016-05-05 15:39:18 +00:00
David Majnemer
c1b3a43079 [ConstantFolding, ValueTracking] Fold constants involving bitcasts of ConstantVector
We assumed that ConstantVectors would be rather uninteresting from the
perspective of analysis.  However, this is not the case due to a quirk
of how LLVM handles vectors of i1.  Vectors of i1 are not
ConstantDataVectors like vectors of i8, i16, i32 or i64 because i1's
SizeInBits differs from its StoreSizeInBytes.  This leads to it being
categorized as a ConstantVector instead of a ConstantDataVector.

Instead, treat ConstantVector more uniformly.

This fixes PR27591.

llvm-svn: 268479
2016-05-04 06:13:33 +00:00
David Majnemer
2a55d1f576 [ValueTracking] Make the code in lookThroughCast
No functionality change is intended.

llvm-svn: 268108
2016-04-29 21:22:04 +00:00
Chad Rosier
5fffe70b4f [InstCombine] Determine the result of a select based on a dominating condition.
Differential Revision: http://reviews.llvm.org/D19550

llvm-svn: 268104
2016-04-29 21:12:31 +00:00
David Majnemer
8fe9b6cacf [ValueTracking] matchSelectPattern needs to be more careful around FP
matchSelectPattern attempts to see through casts which mask min/max
patterns from being more obvious.  Under certain circumstances, it would
misidentify a sequence of instructions as a min/max because it assumed
that folding casts would preserve the result.  This is not the case for
floating point <-> integer casts.

This fixes PR27575.

llvm-svn: 268086
2016-04-29 18:40:34 +00:00
Ahmed Bougacha
fff706e40a [TLI] Unify LibFunc signature checking. NFCI.
I tried to be as close as possible to the strongest check that
existed before; cleaning these up properly is left for future work.

Differential Revision: http://reviews.llvm.org/D19469

llvm-svn: 267758
2016-04-27 19:04:35 +00:00
Chad Rosier
5a0c6f041b [ValueTracking] Improve isImpliedCondition when the dominating cond is false.
llvm-svn: 267430
2016-04-25 17:23:36 +00:00
Sanjoy Das
cc00fe4f8f Have isKnownNotFullPoison be smarter around control flow
Summary:
(... while still not using a PostDomTree)

The way we use isKnownNotFullPoison from SCEV today, the new CFG walking
logic will not trigger for any realistic cases -- it will kick in only
for situations where we could have merged the contiguous basic blocks
anyway[0], since the poison generating instruction dominates all of its
non-PHI uses (which are the only uses we consider right now).

However, having this change in place will allow a later bugfix to break
fewer llvm-lit tests.

[0]: i.e. cases where block A branches to block B and B is A's only
successor and A is B's only predecessor.

Reviewers: broune, bjarke.roune

Subscribers: mcrosier, llvm-commits

Differential Revision: http://reviews.llvm.org/D19212

llvm-svn: 267175
2016-04-22 17:41:06 +00:00
Chad Rosier
3f298875b1 Address Philip's post-commit feedback for r266987. NFC.
llvm-svn: 266998
2016-04-21 16:18:02 +00:00
Chad Rosier
2412be0a92 Refactor implied condition logic from ValueTracking directly into CmpInst. NFC.
Differential Revision: http://reviews.llvm.org/D19330

llvm-svn: 266987
2016-04-21 14:04:54 +00:00
Nick Lewycky
39244705f7 Add optimization for 'icmp slt (or A, B), A' and some related idioms based on knowledge of the sign bit for A and B.
No matter what value you OR into A, the result of (or A, B) is going to be UGE A. When A and B are positive, it's SGE too. If A is negative, OR'ing a value into it can't make it positive, but can increase its value closer to -1, therefore (or A, B) is SGE A. Working through all possible combinations produces this truth table:

```
               A is
             +    -   +/-
        +    F    F    F
B is    -    T    F    ?
       +/-   ?    F    ?
```
(Each entry is the known result of `icmp slt (or A, B), A`: F = known false, T = known true, ? = unknown.)

The related optimizations are flipping the 'slt' for 'sge' which always NOTs the result (if the result is known), and swapping the LHS and RHS while swapping the comparison predicate.

There are more idioms left to implement (aren't there always!) but I've stopped here because any more would risk becoming unreasonable for reviewers.

llvm-svn: 266939
2016-04-21 00:53:14 +00:00
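The first row of the table can be confirmed by brute force over i8 (an illustrative check, not part of the commit): when B is non-negative, `(or A, B)` is never signed-less-than A.

```
#include <cstdint>
#include <cstdio>

int main() {
  for (int A = -128; A < 128; ++A)
    for (int B = 0; B < 128; ++B)
      if ((int8_t)(A | B) < (int8_t)A)
        std::printf("counterexample: A=%d B=%d\n", A, B);
  std::printf("checked all pairs\n");
  return 0;
}
```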
Chad Rosier
7b61268a0d [ValueTracking] Make isImpliedCondition return an Optional<bool>. NFC.
Phabricator Revision: http://reviews.llvm.org/D19277

llvm-svn: 266904
2016-04-20 19:15:26 +00:00
David Majnemer
d93b1bf8b1 [ValueTracking, VectorUtils] Refactor getIntrinsicIDForCall
The functionality contained within getIntrinsicIDForCall is two-fold: it
checks if a CallInst's callee is a vectorizable intrinsic.  If it isn't
an intrinsic, it attempts to map the call's target to a suitable
intrinsic.

Move the mapping functionality into getIntrinsicForCallSite and rename
getIntrinsicIDForCall to getVectorIntrinsicIDForCall while
reimplementing it in terms of getIntrinsicForCallSite.

llvm-svn: 266801
2016-04-19 19:10:21 +00:00
Chad Rosier
6c28047766 [ValueTracking] Improve isImpliedCondition for conditions with matching operands.
This patch improves SimplifyCFG to catch cases like:

  if (a < b) {
    if (a > b) <- known to be false
      unreachable;
  }

Phabricator Revision: http://reviews.llvm.org/D18905

llvm-svn: 266767
2016-04-19 17:19:14 +00:00
David L Kreitzer
cc655b3f3d Simplify strlen to a subtraction for certain cases.
Patch by Li Huang (li1.huang@intel.com)

Differential Revision: http://reviews.llvm.org/D18230

llvm-svn: 266200
2016-04-13 14:31:06 +00:00
David Majnemer
9e0160358c [InstCombine] We folded an fcmp to an i1 instead of a vector of i1
Remove an ad-hoc transform in InstCombine and replace it with more
general machinery (ValueTracking, InstructionSimplify and VectorUtils).

This fixes PR27332.

llvm-svn: 266175
2016-04-13 06:55:52 +00:00
Sanjoy Das
23c7d74ce2 This reverts commit r265913 and r265912
See PR27315

r265913: "[IndVars] Eliminate op.with.overflow when possible"

r265912: "[SCEV] See through op.with.overflow intrinsics"
llvm-svn: 265950
2016-04-11 15:26:18 +00:00
Sanjoy Das
e72674dde8 [SCEV] See through op.with.overflow intrinsics
Summary:
This change teaches SCEV to reduce `(extractvalue
0 (op.with.overflow X Y))` into `op X Y` (with a no-wrap tag if
possible).

Reviewers: atrick, regehr

Subscribers: mcrosier, mzolotukhin, llvm-commits

Differential Revision: http://reviews.llvm.org/D18684

llvm-svn: 265912
2016-04-10 22:50:26 +00:00
Sanjoy Das
b20d278ebd Don't IPO over functions that can be de-refined
Summary:
Fixes PR26774.

If you're aware of the issue, feel free to skip the "Motivation"
section and jump directly to "This patch".

Motivation:

I define "refinement" as discarding behaviors from a program that the
optimizer has license to discard.  So transforming:

```
void f(unsigned x) {
  unsigned t = 5 / x;
  (void)t;
}
```

to

```
void f(unsigned x) { }
```

is refinement, since the behavior went from "if x == 0 then undefined
else nothing" to "nothing" (the optimizer has license to discard
undefined behavior).

Refinement is a fundamental aspect of many mid-level optimizations done
by LLVM.  For instance, transforming `x == (x + 1)` to `false` also
involves refinement since the expression's value went from "if x is
`undef` then { `true` or `false` } else { `false` }" to "`false`" (by
definition, the optimizer has license to fold `undef` to any non-`undef`
value).

Unfortunately, refinement implies that the optimizer cannot assume
that the implementation of a function it can see has all of the
behavior an unoptimized or a differently optimized version of the same
function can have.  This is a problem for functions with comdat
linkage, where a function can be replaced by an unoptimized or a
differently optimized version of the same source level function.

For instance, FunctionAttrs cannot assume a comdat function is
actually `readnone` even if it does not have any loads or stores in
it; since there may have been loads and stores in the "original
function" that were refined out in the currently visible variant, and
at the link step the linker may in fact choose an implementation with
a load or a store.  As an example, consider a function that does two
atomic loads from the same memory location, and writes to memory only
if the two values are not equal.  The optimizer is allowed to refine
this function by first CSE'ing the two loads, and then folding the
comparison to always report that the two values are equal.  Such a
refined variant will look like it is `readonly`.  However, the
unoptimized version of the function can still write to memory (since
the two loads //can// result in different values), and selecting the
unoptimized version at link time will retroactively invalidate
transforms we may have done under the assumption that the function
does not write to memory.

Note: this is not just a problem with atomics or with linking
differently optimized object files.  See PR26774 for more realistic
examples that involved neither.

This patch:

This change introduces a new predicate,
`GlobalValue::mayBeDerefined`, that returns true if the linkage type
allows a function to be replaced by a differently optimized variant at
link time.  It then changes a set of IPO passes to bail out if they see
such a function.

Reviewers: chandlerc, hfinkel, dexonsmith, joker.eph, rnk

Subscribers: mcrosier, llvm-commits

Differential Revision: http://reviews.llvm.org/D18634

llvm-svn: 265762
2016-04-08 00:48:30 +00:00
Peter Zotov
7378ca57f0 Mark some FP intrinsics as safe to speculatively execute
Floating point intrinsics in LLVM are generally not speculatively
executed, since most of them are defined to behave the same as libm
functions, which set errno.

However, the only error that can happen  when executing ceil, floor,
nearbyint, rint and round libm functions per POSIX.1-2001 is -ERANGE,
and that requires the maximum value of the exponent to be smaller
than  the number of mantissa bits, which is not the case with any of
the floating point types supported by LLVM.

The trunc and copysign functions never set errno per POSIX.1-2001.

Differential Revision: http://reviews.llvm.org/D18643

llvm-svn: 265262
2016-04-03 12:30:46 +00:00
Sanjoy Das
3059b3a92d [InstCombine] Fix incorrect rule from rL236202
The rule for SMIN introduced in rL236202 doesn't work as advertised: the
check for Pred == ICmpInst::ICMP_SGT was missing.

llvm-svn: 264996
2016-03-31 05:14:34 +00:00
Sanjoy Das
2fe82474a4 Delete trailing whitespace
llvm-svn: 264995
2016-03-31 05:14:29 +00:00
Philip Reames
64fa288117 [ValueTracking] Extract isKnownPositive [NFCI]
Extract out a generic interface from a recently landed patch and document a TODO in case compile time becomes a problem.

llvm-svn: 263062
2016-03-09 21:31:47 +00:00
Philip Reames
b31e6a2515 [ValueTracking] "constant fold" an experimental hidden option
llvm-svn: 262648
2016-03-03 19:50:32 +00:00
Philip Reames
f86417771d [ValueTracking] Remove dead code from an old experiment
This experiment was originally about trying to use facts implied by dominating conditions to infer more precise known bits.  While the compile time was found to be acceptable on several large code bases, we never found sufficiently profitable examples to justify turning on the code by default.  Given this, it's time to abandon the experiment.

Several folks have commented that they've found this useful for experimentation, but nothing has come of those experiments.  Given how easy the patch is to apply, there's no reason to leave the code in tree.  

For anyone interested in further investigation in this area, I recommend finding the summary email I sent on one of the original review threads.  In particular, I now believe the use-list based approach is strictly worse than the dom-tree-walking approach.  

llvm-svn: 262646
2016-03-03 19:44:06 +00:00
Artur Pilipenko
5746dd289e NFC. Move isDereferenceable to Loads.h/cpp
This is a part of the refactoring to unify the isSafeToLoadUnconditionally and isDereferenceablePointer functions. In a subsequent change I'm going to eliminate isDereferenceableAndAlignedPointer from the Loads API, leaving isSafeToLoadSpeculatively as the only function to check whether a load instruction can be speculated.

Reviewed By: hfinkel

Differential Revision: http://reviews.llvm.org/D16180

llvm-svn: 261736
2016-02-24 12:49:04 +00:00
Artur Pilipenko
e41e2d4c04 NFC. Move getAlignment helper function from ValueTracking to Value class.
Reviewed By: reames, hfinkel

Differential Revision: http://reviews.llvm.org/D16144

llvm-svn: 261735
2016-02-24 12:25:10 +00:00
Richard Trieu
5a759985de Remove uses of builtin comma operator.
Cleanup for upcoming Clang warning -Wcomma.  No functionality change intended.

llvm-svn: 261270
2016-02-18 22:09:30 +00:00
Jun Bum Lim
b95afc3d46 [ValueTracking] Improve isKnownNonZero for PHI of non-zero constants
It is clear that a PHI is non-zero if all incoming values are non-zero constants.

llvm-svn: 259370
2016-02-01 17:03:07 +00:00
Matthias Braun
54b35cf6ca ValueTracking: Use fixed array for assumption exclude set in Query.
The Query structure is constructed often and is relevant for compiletime
performance. We can replace the SmallPtrSet for assumption exclusions in
this structure with a fixed size array because we know the maximum
number of elements.  This improves typical clang -O3 -emit-llvm compiletime
by 1.2% in my measurements.

Differential Revision: http://reviews.llvm.org/D16204

llvm-svn: 259025
2016-01-28 06:29:33 +00:00
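A minimal sketch of the data-structure swap being described (the names and the bound are illustrative, not the actual Query struct): because the recursion depth bounds how many excluded assumptions there can be, a fixed-size array with linear search can replace the heap-backed set.

```
#include <algorithm>
#include <cstddef>

struct ExcludeSet {
  static constexpr unsigned MaxExcluded = 6; // illustrative bound
  const void *Excluded[MaxExcluded] = {};
  unsigned NumExcluded = 0;

  void exclude(const void *V) {
    if (NumExcluded < MaxExcluded)
      Excluded[NumExcluded++] = V;
  }
  bool isExcluded(const void *V) const {
    return std::find(Excluded, Excluded + NumExcluded, V) !=
           Excluded + NumExcluded;
  }
};
```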
Eduard Burtescu
c55147fcdc [opaque pointer types] [NFC] GEP: replace get(Pointer)ElementType uses with get{Source,Result}ElementType.
Summary:
GEPOperator: provide getResultElementType alongside getSourceElementType.
This is made possible by adding a result element type field to GetElementPtrConstantExpr, which GetElementPtrInst already has.

GEP: replace get(Pointer)ElementType uses with get{Source,Result}ElementType.

Reviewers: mjacob, dblaikie

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D16275

llvm-svn: 258145
2016-01-19 17:28:00 +00:00
Eduard Burtescu
313153c723 [opaque pointer types] Alloca: use getAllocatedType() instead of getType()->getPointerElementType().
Reviewers: mjacob

Subscribers: llvm-commits, dblaikie

Differential Revision: http://reviews.llvm.org/D16272

llvm-svn: 258028
2016-01-18 00:10:01 +00:00
Manuel Jacob
e6438acb66 GlobalValue: use getValueType() instead of getType()->getPointerElementType().
Reviewers: mjacob

Subscribers: jholewinski, arsenm, dsanders, dblaikie

Patch by Eduard Burtescu.

Differential Revision: http://reviews.llvm.org/D16260

llvm-svn: 257999
2016-01-16 20:30:46 +00:00
Matthias Braun
cbf73beb6f ValueTracking: Put DataLayout reference into the Query structure, NFC.
It looks nicer and improves the compiletime of a typical
clang -O3 -emit-llvm run by ~0.6% for me.

Differential Revision: http://reviews.llvm.org/D16205

llvm-svn: 257944
2016-01-15 22:22:04 +00:00
James Molloy
d5127bffc5 Revert "[ValueTracking] Understand more select patterns in ComputeKnownBits"
This reverts commit r257769. Backing this out because of stage2 failures.

llvm-svn: 257773
2016-01-14 15:49:32 +00:00
James Molloy
c0c824e64b [ValueTracking] Understand more select patterns in ComputeKnownBits
Some patterns of select+compare allow us to know exactly the value of the uppermost bits in the select result. For example:

  %b = icmp ugt i32 %a, 5
  %c = select i1 %b, i32 2, i32 %a

Here we know that %c is bounded by 5, so every bit above the low 3 (the active bits of 5) is known zero: KnownZero = ~7.

There are several such patterns, and this patch attempts to understand a reasonable subset of them - namely when the base values are the same (as above), and when they are related by a simple (add nsw), for example (add nsw %a, 4) and %a.

llvm-svn: 257769
2016-01-14 15:23:19 +00:00
Fiona Glaser
3959ead5a7 CannotBeOrderedLessThanZero: add some missing cases
llvm-svn: 257542
2016-01-12 23:37:30 +00:00
Manuel Jacob
102d481261 [Statepoints] Refactor GCRelocateOperands into an intrinsic wrapper. NFC.
Summary:
This commit renames GCRelocateOperands to GCRelocateInst and makes it an
intrinsic wrapper, similar to e.g. MemCpyInst.  Also, all users of
GCRelocateOperands were changed to use the new intrinsic wrapper instead.

Reviewers: sanjoy, reames

Subscribers: reames, sanjoy, llvm-commits

Differential Revision: http://reviews.llvm.org/D15762

llvm-svn: 256811
2016-01-05 04:03:00 +00:00
Philip Reames
a43feccb31 [MemoryBuiltins] Remove isOperatorNewLike by consolidating non-null inference handling
This patch removes the isOperatorNewLike predicate since it was only being used to establish a non-null return value, and we have attributes specifically for that purpose with generic handling. To keep approximately the same behaviour for existing frontends, I added the various operator-new-like functions (i.e. instances of operator new) to InferFunctionAttrs. It's not really clear to me why this isn't handled in Clang, but I didn't want to break existing code and any subtle assumptions it might have.

Once this patch is in, I'm going to start separating the isAllocLike family of predicates. These appear to be used for a mixture of things which should be more clearly separated and documented. Today, they're being used to indicate (at least) aliasing facts, CSE-ability, and default values from an allocation site.

Differential Revision: http://reviews.llvm.org/D15820

llvm-svn: 256787
2016-01-04 22:49:23 +00:00
Chandler Carruth
7a140bacb2 Fix a horrible infloop in value tracking in the face of dead code.
Amazingly, we just never triggered this without:
1) Moving code around for MetadataTracking so that a certain *different*
   amount of inlining occurs in the per-TU compile step.
2) Then you LTO opt or clang with a bootstrap, and get inlining, loop
   opts, and GVN line up everything *just* right.

I don't really know how we didn't hit this before. We really need to be
fuzz testing stuff; it shouldn't be hard to trigger. I'm working on
crafting a reduced nice test case, and will submit that when I have it,
but I want to get LTO build bots going again.

llvm-svn: 256735
2016-01-04 07:23:12 +00:00
Sanjay Patel
9da3c40bfb [ValueTracking] fix bug computing isKnownToBeAPowerOfTwo() with arithmetic shift right (PR25900)
This is a fix for:
https://llvm.org/bugs/show_bug.cgi?id=25900

If we think that an arithmetic right shift of a power of two is always a power of two, 
an sdiv gets wrongly converted to udiv.
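
A small illustrative sketch (hypothetical values, not from the bug report): 0x80000000 has a single bit set, but an arithmetic shift right smears the sign bit, so the shifted value is not a power of two and the sdiv below must not be rewritten as a udiv.

  define i32 @ashr_not_pow2(i32 %x) {
    %s = ashr i32 -2147483648, 1   ; = 0xC0000000, two bits set
    %d = sdiv i32 %x, %s           ; wrong if rewritten as udiv
    ret i32 %d
  }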

Differential Revision: http://reviews.llvm.org/D15827

llvm-svn: 256655
2015-12-30 22:40:52 +00:00
Michael Zolotukhin
fa6d622c80 [ValueTracking] Properly handle non-sized types in isAligned function.
Reviewers: apilipenko, reames, sanjoy, hfinkel

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D15597

llvm-svn: 256192
2015-12-21 20:38:18 +00:00
David Majnemer
49dcd13916 [IR] Remove terminatepad
It turns out that terminatepad gives little benefit over a cleanuppad
which calls the termination function.  This is not sufficient to
implement fully generic filters, but MSVC doesn't support them, which
makes terminatepad a little over-designed.

Depends on D15478.

Differential Revision: http://reviews.llvm.org/D15479

llvm-svn: 255522
2015-12-14 18:34:23 +00:00
David Majnemer
bf189bdcd7 [IR] Reformulate LLVM's EH funclet IR
While we have successfully implemented a funclet-oriented EH scheme on
top of LLVM IR, our scheme has some notable deficiencies:
- catchendpad and cleanupendpad are necessary in the current design
  but they are difficult to explain to others, even to seasoned LLVM
  experts.
- catchendpad and cleanupendpad are optimization barriers.  They cannot
  be split and force all potentially throwing call-sites to be invokes.
  This has a noticeable effect on the quality of our code generation.
- catchpad, while similar in some aspects to invoke, is fairly awkward.
  It is unsplittable, starts a funclet, and has control flow to other
  funclets.
- The nesting relationship between funclets is currently a property of
  control flow edges.  Because of this, we are forced to carefully
  analyze the flow graph to see if there might potentially exist illegal
  nesting among funclets.  While we have logic to clone funclets when
  they are illegally nested, it would be nicer if we had a
  representation which forbade them upfront.

Let's clean this up a bit by doing the following:
- Instead, make catchpad more like cleanuppad and landingpad: no control
  flow, just a bunch of simple operands;  catchpad would be splittable.
- Introduce catchswitch, a control flow instruction designed to model
  the constraints of funclet oriented EH.
- Make funclet scoping explicit by having funclet instructions consume
  the token produced by the funclet which contains them.
- Remove catchendpad and cleanupendpad.  Their presence can be inferred
  implicitly using coloring information.

N.B.  The state numbering code for the CLR has been updated but the
veracity of its output cannot be spoken for.  An expert should take a
look to make sure the results are reasonable.

Reviewers: rnk, JosephTremoulet, andrew.w.kaylor

Differential Revision: http://reviews.llvm.org/D15139

llvm-svn: 255422
2015-12-12 05:38:55 +00:00
Sanjoy Das
58e9789caa [ValueTracking] Remove untested / unreachable code, NFC
Right now isTruePredicate is only ever called with Pred == ICMP_SLE or
ICMP_ULE, and the ICMP_SLT and ICMP_ULT cases are dead.  This change
removes the untested dead code so that the function is not misleading.

llvm-svn: 252676
2015-11-11 00:16:41 +00:00
Sanjoy Das
acb331cd8c [ValueTracking] Teach isImpliedCondition a new bitwise trick
Summary:
This change teaches isImpliedCondition to prove things like

  (A | 15) < L  ==>  (A | 14) < L

if the low 4 bits of A are known to be zero.
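
A hand-written sketch of the shape this enables (hypothetical names): %a has its low 4 bits known zero, so once the guarding compare holds, the second compare is implied and can fold to true.

  define i32 @implied_or(i32 %x, i32 %l) {
  entry:
    %a = shl i32 %x, 4                ; low 4 bits of %a are zero
    %a15 = or i32 %a, 15
    %c1 = icmp ult i32 %a15, %l
    br i1 %c1, label %guarded, label %exit
  guarded:
    %a14 = or i32 %a, 14
    %c2 = icmp ult i32 %a14, %l       ; implied by %c1
    %r = select i1 %c2, i32 1, i32 2
    ret i32 %r
  exit:
    ret i32 0
  }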

Depends on D14391

Reviewers: majnemer, reames, hfinkel

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D14392

llvm-svn: 252673
2015-11-10 23:56:20 +00:00
Sanjoy Das
ceecb922e9 [ValueTracking] Use m_APInt instead of m_ConstantInt, NFC
This change would add functionality if isImpliedCondition worked on
vector types; but since it bail out on vector predicates this change is
an NFC.

llvm-svn: 252672
2015-11-10 23:56:15 +00:00
Philip Reames
7622c5e11f [ValueTracking] Recognize that and(x, add (x, -1)) clears the low bit
This is a cleaned up version of a patch by John Regehr with permission. Originally found via the souper tool.

If we add an odd number to x, then bitwise-and the result with x, we know that the low bit of the result must be zero. Either it was zero in x originally, or the add cleared it in the temporary value. As a result, one of the two values anded together must have the bit cleared.
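
A minimal sketch of the recognized idiom (hypothetical names); the final mask is known to be zero.

  define i32 @low_bit_clear(i32 %x) {
    %m = add i32 %x, -1       ; -1 is odd
    %r = and i32 %x, %m       ; clears the lowest set bit of %x
    %lo = and i32 %r, 1       ; known bits say this is 0
    ret i32 %lo
  }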

Differential Revision: http://reviews.llvm.org/D14315

llvm-svn: 252629
2015-11-10 18:46:14 +00:00
Sanjoy Das
3c64437294 [ValueTracking] Add parameters to isImpliedCondition; NFC
Summary:
This change makes the `isImpliedCondition` interface similar to the rest
of the functions in ValueTracking (in that it takes a DataLayout,
AssumptionCache etc.).  This is an NFC, intended to make a later diff
less noisy.

Depends on D14369

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D14391

llvm-svn: 252333
2015-11-06 19:01:08 +00:00
Sanjoy Das
9d8ff3ae2d [ValueTracking] De-pessimize isImpliedCondition around unsigned compares
Summary:
Currently `isImpliedCondition` will optimize "I +_nuw C < L ==> I < L"
only if C is positive.  This is an unnecessary restriction -- the
implication holds even if `C` is negative.
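
An illustrative sketch (hypothetical names): with nuw, %add is always unsigned-greater-or-equal to %i, so the guarded compare is implied even though the constant is negative when read as a signed value.

  define i1 @nuw_implied(i32 %i, i32 %l) {
  entry:
    %add = add nuw i32 %i, -7
    %c1 = icmp ult i32 %add, %l
    br i1 %c1, label %then, label %else
  then:
    %c2 = icmp ult i32 %i, %l        ; implied true by %c1
    ret i1 %c2
  else:
    ret i1 false
  }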

Reviewers: reames, majnemer

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D14369

llvm-svn: 252332
2015-11-06 19:01:03 +00:00
Sanjoy Das
f52081da0a [ValueTracking] Add a framework for encoding implication rules
Summary:
This change adds a framework for adding more smarts to
`isImpliedCondition` around inequalities.  Informally,
`isImpliedCondition` will now try to prove "A < B ==> C < D" by proving
"C <= A && B <= D", since then it follows "C <= A < B <= D".

While this change is in principle NFC, I could not think of a way to not
handle cases like "i +_nsw 1 < L ==> i < L +_nsw 1" (that ValueTracking
did not handle before) while keeping the change understandable.  I've
added tests for these cases.
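
A hand-written sketch of that case (hypothetical names): proving "%i + 1 <s %l ==> %i <s %l + 1" reduces to "%i <=s %i + 1" and "%l <=s %l + 1", both of which hold because the adds are nsw.

  define i1 @nsw_implied(i32 %i, i32 %l) {
  entry:
    %i1 = add nsw i32 %i, 1
    %c1 = icmp slt i32 %i1, %l
    br i1 %c1, label %then, label %else
  then:
    %l1 = add nsw i32 %l, 1
    %c2 = icmp slt i32 %i, %l1       ; implied true by %c1
    ret i1 %c2
  else:
    ret i1 false
  }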

Reviewers: reames, majnemer, hfinkel

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D14368

llvm-svn: 252331
2015-11-06 19:00:57 +00:00
Sanjoy Das
43a55dd2a9 [ValueTracking] Expose implies via ValueTracking, NFC
Summary: This will allow a later patch to `JumpThreading` use this functionality.

Reviewers: reames

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D13971

llvm-svn: 251488
2015-10-28 03:20:19 +00:00
Sanjoy Das
2819145e6a [ValueTracking] Use !range metadata more aggressively in KnownBits
Summary:
Teach `computeKnownBitsFromRangeMetadata` to use `!range` metadata more
aggressively.

Reviewers: majnemer, nlewycky, jingyue

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D14100

llvm-svn: 251487
2015-10-28 03:20:15 +00:00
Sanjoy Das
0cf0546748 [ValueTracking] Don't special case wrapped ConstantRanges; NFCI
Use `getUnsignedMax` directly instead of special casing a wrapped
ConstantRange.

The previous code would have been "buggy" (and this would have been a
semantic change) if LLVM allowed !range metadata to denote full
ranges. E.g. in

  %val = load i1, i1* %ptr, !range !{i1 1, i1 1} ;; == full set

ValueTracking would conclude that the high bit (IOW the only bit) in
%val was zero.

Since !range metadata does not allow empty or full ranges, this change
is just a minor stylistic improvement.

llvm-svn: 251380
2015-10-27 01:36:06 +00:00
James Molloy
8d9daddc1b [ValueTracking] Extend r251146 to catch a fairly common case
Even though we may not know the value of the shifter operand, it's possible we know the shifter operand is non-zero. This can allow us to infer more known bits - for example:

  %1 = load %p !range {1, 5}
  %2 = shl %q, %1

We don't know %1, but we do know that it is nonzero so %2[0] is known zero, and importantly %2 is known non-zero.

Calling isKnownNonZero is nontrivially expensive so use an Optional to run it lazily and cache its result.

llvm-svn: 251294
2015-10-26 14:10:46 +00:00
Benjamin Kramer
7797bb7df1 Use all_of to simplify control flow. NFC.
llvm-svn: 251202
2015-10-24 19:30:37 +00:00
Sanjoy Das
1f464e3640 Extract out getConstantRangeFromMetadata; NFC
The loop idiom creating a ConstantRange is repeated twice in the
codebase; time to give it a name and a home.

The loop is also repeated in `rangeMetadataExcludesValue`, but using
`getConstantRangeFromMetadata` there would not be an NFC -- the range
returned by `getConstantRangeFromMetadata` may contain a value that none
of the subranges did.

llvm-svn: 251180
2015-10-24 05:37:35 +00:00
Hal Finkel
1a64f66683 Handle non-constant shifts in computeKnownBits, and use computeKnownBits for constant folding in InstCombine/Simplify
First, the motivation: LLVM currently does not realize that:

  ((2072 >> (L == 0)) >> 7) & 1 == 0

where L is some arbitrary value. Whether you right-shift 2072 by 7 or by 8, the
lowest-order bit is always zero. There are obviously several ways to go about
fixing this, but the generic solution pursued in this patch is to teach
computeKnownBits something about shifts by a non-constant amount. Previously,
we would give up completely on these. Instead, in cases where we know something
about the low-order bits of the shift-amount operand, we can combine (and
together) the associated restrictions for all shift amounts consistent with
that knowledge. As a further generalization, I refactored all of the logic for
all three kinds of shifts to have this capability. This works well in the above
case, for example, because the dynamic shift amount can only be 0 or 1, and
thus we can say a lot about the known bits of the result.
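
The motivating expression, written out as a hand-made IR sketch (hypothetical names): the shift amount is 0 or 1, so the shifted constant is 16 or 8 and bit 0 of the result is known zero either way.

  define i32 @dyn_shift(i32 %l) {
    %cmp = icmp eq i32 %l, 0
    %s = zext i1 %cmp to i32         ; 0 or 1
    %t = lshr i32 2072, %s
    %u = lshr i32 %t, 7              ; 16 or 8
    %bit = and i32 %u, 1             ; known to be 0
    ret i32 %bit
  }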

This brings us to the second part of this change: Even when we know all of the
bits of a value via computeKnownBits, nothing used to constant-fold the result.
This introduces the necessary code into InstCombine and InstSimplify. I've
added it into both because:

  1. InstCombine won't automatically pick up the associated logic in
     InstSimplify (InstCombine uses InstSimplify, but not via the API that
     passes in the original instruction).

  2. Putting the logic in InstCombine allows the resulting simplifications to become
     part of the iterative worklist

  3. Putting the logic in InstSimplify allows the resulting simplifications to be
     used by everywhere else that calls SimplifyInstruction (inlining, unrolling,
     and many others).

And this requires a small change to our definition of an ephemeral value so
that we don't break the test case from r246696 (where the icmp feeding the
@llvm.assume, is also feeding a br). Under the old definition, the icmp would
not be considered ephemeral (because it is used by the br), but this causes the
assume to remove itself (in addition to simplifying the branch structure), and
it seems more-useful to prevent that from happening.

llvm-svn: 251146
2015-10-23 20:37:08 +00:00
James Molloy
94327af5e1 [ValueTracking] Add a new predicate: isKnownNonEqual()
isKnownNonEqual(A, B) returns true if it can be determined that A != B.

At the moment it only knows two facts. The first is that a non-wrapping add of a nonzero value to a value cannot be that value:

A + B != A [where B != 0, addition is nsw or nuw]

The second is that contradictory known bits imply two values are not equal.

This patch also hooks this up to InstSimplify; InstSimplify had a peephole for the first fact but not the second so this teaches InstSimplify a new trick too (alas no measured performance impact!)
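
A minimal sketch of the first fact (hypothetical names):

  define i1 @non_equal(i32 %x) {
    %y = add nsw i32 %x, 5       ; nonzero, non-wrapping add
    %c = icmp eq i32 %x, %y      ; candidate to fold to false
    ret i1 %c
  }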

llvm-svn: 251012
2015-10-22 13:18:42 +00:00
Aaron Ballman
8601c2e902 Silencing a -Wtype-limits warning; an unsigned value will always be >= 0; NFC.
llvm-svn: 250404
2015-10-15 13:55:43 +00:00
Philip Reames
1bb1df8eb7 Tighten known bits for ctpop based on zero input bits
This is a cleaned up patch from the one written by John Regehr based on the findings of the Souper superoptimizer.

The basic idea here is that input bits that are known zero reduce the maximum count that the intrinsic could return. We know that the number of bits required to represent a particular count is at most log2(N)+1.
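
An illustrative sketch (hypothetical names): %v has at most 8 set bits, so the count needs at most log2(8)+1 = 4 bits and everything above bit 3 of %p is known zero.

  declare i32 @llvm.ctpop.i32(i32)

  define i32 @ctpop_bits(i32 %x) {
    %v = and i32 %x, 255                    ; at most 8 bits can be set
    %p = call i32 @llvm.ctpop.i32(i32 %v)   ; result is in [0, 8]
    %hi = and i32 %p, -16                   ; known to be 0
    ret i32 %hi
  }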

Differential Revision: http://reviews.llvm.org/D13253

llvm-svn: 250338
2015-10-14 22:42:12 +00:00
Kostya Serebryany
b4d1390441 [asan] Disabling speculative loads under asan. Patch by Mike Aizatsky
llvm-svn: 250259
2015-10-14 00:21:05 +00:00
Duncan P. N. Exon Smith
c56daa1083 Analysis: Remove implicit ilist iterator conversions
Remove implicit ilist iterator conversions from LLVMAnalysis.

I came across something really scary in `llvm::isKnownNotFullPoison()`
which relied on `Instruction::getNextNode()` being completely broken
(not surprising, but scary nevertheless).  This function is documented
(and coded to) return `nullptr` when it gets to the sentinel, but with
an `ilist_half_node` as a sentinel, the sentinel check looks into some
other memory and we don't recognize we've hit the end.

Rooting out these scary cases is the reason I'm removing the implicit
conversions before doing anything else with `ilist`; I'm not at all
surprised that clients rely on badness.

I found another scary case -- this time, not relying on badness, just
bad (but I guess getting lucky so far) -- in
`ObjectSizeOffsetEvaluator::compute_()`.  Here, we save out the
insertion point, do some things, and then restore it.  Previously, we
let the iterator auto-convert to `Instruction*`, and then set it back
using the `Instruction*` version:

    Instruction *PrevInsertPoint = Builder.GetInsertPoint();

    /* Logic that may change insert point */

    if (PrevInsertPoint)
      Builder.SetInsertPoint(PrevInsertPoint);

The check for `PrevInsertPoint` doesn't protect correctly against bad
accesses.  If the insertion point has been set to the end of a basic
block (i.e., `SetInsertPoint(SomeBB)`), then `GetInsertPoint()` returns
an iterator pointing at the list sentinel.  The version of
`SetInsertPoint()` that's getting called will then call
`PrevInsertPoint->getParent()`, which explodes horribly.  The only
reason this hasn't blown up is that it's fairly unlikely the builder is
adding to the end of the block; usually, we're adding instructions
somewhere before the terminator.

llvm-svn: 249925
2015-10-10 00:53:03 +00:00
Artur Pilipenko
0407210d7b ValueTracking: use getAlignment in isAligned
Reviewed By: reames

Differential Revision: http://reviews.llvm.org/D13517

llvm-svn: 249841
2015-10-09 15:58:26 +00:00
Sanjay Patel
172d0b6bb7 [ValueTracking] teach computeKnownBits that a fabs() clears sign bits
This was requested in D13076: if we're going to canonicalize to fabs(), ValueTracking
should know that fabs() clears sign bits.

In this patch (as in D13076), we're not handling vectors yet even though computeKnownBits'
fabs() case itself should be vector-ready via the splat in this patch. 
Fixing this will require follow-on patches to correct other logic that uses 'getScalarType'.

Differential Revision: http://reviews.llvm.org/D13222

llvm-svn: 249701
2015-10-08 16:56:55 +00:00
Artur Pilipenko
8b3cc14fdc Teach computeKnownBits to use new align attribute/metadata
Reviewed By: reames

Differential Revision: http://reviews.llvm.org/D13470

llvm-svn: 249557
2015-10-07 16:01:18 +00:00
Philip Reames
ca99cf7d94 Extend known bits to understand @llvm.bswap
This is a cleaned up patch from the one written by John Regehr based on the findings of the Souper superoptimizer.

When writing tests, I was surprised to find that instsimplify apparently doesn't know how to collapse bit test sequences based purely on known bits. This required me to split my tests across both instsimplify and instcombine.
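
A small illustrative sketch (hypothetical names): the low byte of %v is known zero, so after the byte swap the high byte of the result is known zero and the mask folds away.

  declare i32 @llvm.bswap.i32(i32)

  define i32 @bswap_bits(i32 %x) {
    %v = shl i32 %x, 8                       ; low byte of %v is zero
    %r = call i32 @llvm.bswap.i32(i32 %v)
    %hi = and i32 %r, 4278190080             ; 0xFF000000, known to be 0
    ret i32 %hi
  }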

Differential Revision: http://reviews.llvm.org/D13250

llvm-svn: 249453
2015-10-06 20:20:45 +00:00
Artur Pilipenko
a7a9828273 Refactor computeKnownBits alignment handling code
Reviewed By: reames, hfinkel

Differential Revision: http://reviews.llvm.org/D12958

llvm-svn: 248892
2015-09-30 11:55:45 +00:00
Igor Laevsky
7db851f357 [ValueTracking] Lower dom-conditions-dom-blocks and dom-conditions-max-uses thresholds
On some of our benchmarks this change shows about 50% compile time improvement without any noticeable performance difference.

Differential Revision: http://reviews.llvm.org/D13248

llvm-svn: 248801
2015-09-29 14:57:52 +00:00
James Molloy
7af5f0f32e [ValueTracking] Teach isKnownNonZero about monotonically increasing PHIs
If a PHI starts at a non-negative constant, monotonically increases
(only adds of a constant are supported at the moment) and that add
does not wrap, then the PHI is known never to be zero.
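
For illustration, a hand-written sketch (hypothetical names): %iv starts at the positive constant 1 and only grows through a non-wrapping add of a constant, so isKnownNonZero(%iv) holds inside the loop.

  define i32 @inc_phi(i32 %n) {
  entry:
    br label %loop
  loop:
    %iv = phi i32 [ 1, %entry ], [ %iv.next, %loop ]
    %iv.next = add nuw nsw i32 %iv, 2
    %is.zero = icmp eq i32 %iv, 0            ; candidate to fold to false
    %cond = icmp ult i32 %iv.next, %n
    br i1 %cond, label %loop, label %exit
  exit:
    %z = zext i1 %is.zero to i32
    ret i32 %z
  }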

llvm-svn: 248796
2015-09-29 14:08:45 +00:00
Artur Pilipenko
22c8d170dd Introduce !align metadata for load instruction
Reviewed By: hfinkel

Differential Revision: http://reviews.llvm.org/D12853

llvm-svn: 248721
2015-09-28 17:41:08 +00:00
Sanjay Patel
392541a337 more space; NFC
llvm-svn: 248609
2015-09-25 20:12:43 +00:00
James Molloy
52b426c16a [ValueTracking] Teach isKnownNonZero a new trick
If the shifter operand is a constant, and all of the bits shifted out
are known to be zero, then if X is known non-zero at least one
non-zero bit must remain.
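
A minimal sketch of the new trick (hypothetical names): %x is known non-zero and only known-zero bits are shifted out, so the shifted value stays non-zero.

  define i1 @shl_nonzero(i32 %a) {
    %t = and i32 %a, 14          ; keep only bits 1-3
    %x = or i32 %t, 1            ; non-zero, high bits known zero
    %s = shl i32 %x, 3           ; shifts out only known-zero bits
    %c = icmp eq i32 %s, 0       ; candidate to fold to false
    ret i1 %c
  }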

llvm-svn: 248508
2015-09-24 16:06:32 +00:00
Philip Reames
a407871a49 Fix for pr24866
Turns out that not every basic block is guaranteed to have a node within the DominatorTree.  This is really hard to trigger, but the test case from the PR managed to do so.  There's active discussion continuing about what documentation and/or invariants need to be cleaned up.

llvm-svn: 248216
2015-09-21 22:04:10 +00:00
Artur Pilipenko
a27007918a Support align attribute for return values
Reviewed By: reames

Differential Revision: http://reviews.llvm.org/D12844

llvm-svn: 247984
2015-09-18 12:33:31 +00:00
Sanjay Patel
198af4d259 fix typo; NFC
llvm-svn: 247938
2015-09-17 20:51:50 +00:00
Chen Li
7a880213bc [InstCombineCalls] Use isKnownNonNullAt() to check nullness of passing arguments at callsite
Summary: This patch replaces isKnownNonNull() with isKnownNonNullAt() when checking the nullness of arguments passed at a call site. In this way it can handle cases where the argument does not have a nonnull attribute but has a dominating null check from the CFG. It also adds assertions in isKnownNonNull() and isKnownNonNullFromDominatingCondition() to make sure the value checked is of pointer type (as defined in the LLVM documentation). These assertions might trip failures in things which are not covered under llvm/test, but fixes should be pretty obvious.

Reviewers: reames

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D12779

llvm-svn: 247587
2015-09-14 18:10:43 +00:00
Joseph Tremoulet
bce9d857cc [WinEH] Add cleanupendpad instruction
Summary:
Add a `cleanupendpad` instruction, used to mark exceptional exits out of
cleanups (for languages/targets that can abort a cleanup with another
exception).  The `cleanupendpad` instruction is similar to the `catchendpad`
instruction in that it is an EH pad which is the target of unwind edges in
the handler and which itself has an unwind edge to the next EH action.
The `cleanupendpad` instruction, similar to `cleanupret` has a `cleanuppad`
argument indicating which cleanup it exits.  The unwind successors of a
`cleanuppad`'s `cleanupendpad`s must agree with each other and with its
`cleanupret`s.

Update WinEHPrepare (and docs/tests) to accommodate `cleanupendpad`.

Reviewers: rnk, andrew.w.kaylor, majnemer

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D12433

llvm-svn: 246751
2015-09-03 09:09:43 +00:00
James Molloy
eacb992ece [ValueTracking] Look through casts when both operands are casts.
We only looked through casts when one operand was a constant. We can also look through casts when both operands are non-constant, but both are in fact the same cast type. For example:

%1 = icmp ult i8 %a, %b
%2 = zext i8 %a to i32
%3 = zext i8 %b to i32
%4 = select i1 %1, i32 %2, i32 %3

llvm-svn: 246678
2015-09-02 17:25:25 +00:00
David Majnemer
6fb7d57097 Revert r246232 and r246304.
This reverts isSafeToSpeculativelyExecute's use of ReadNone until we
split ReadNone into two pieces: one attribute which describes how
the function interacts with memory and another attribute which determines
how it may be speculated, CSE'd, trap, etc.

llvm-svn: 246331
2015-08-28 21:13:39 +00:00
David Majnemer
46c3738c62 [CodeGen] isInTailCallPosition didn't consider readnone tailcalls
A readnone tailcall may still have a chain of computation which follows
it that would invalidate a tailcall lowering.  Don't skip the analysis
in such cases.

This fixes PR24613.

llvm-svn: 246304
2015-08-28 16:44:09 +00:00
David Majnemer
d9149d4cf3 [ValueTracking] readnone CallInsts are fair game for speculation
Any call which is side effect free is trivially OK to speculate.  We
already had similar logic in EarlyCSE and GVN but we were missing it
from isSafeToSpeculativelyExecute.

This fixes PR24601.

llvm-svn: 246232
2015-08-27 23:03:01 +00:00
Pete Cooper
7ecbe8117c isKnownNonNull needs to consider globals in non-zero address spaces.
Globals in address spaces other than zero may have 0 as a valid address,
so we should not assume that they cannot be null.

Reviewed by Philip Reames.

llvm-svn: 246137
2015-08-27 03:16:29 +00:00
Jingyue Wu
2a45313ac4 [ValueTracking] computeOverflowForSignedAdd and isKnownNonNegative
Summary:
Refactor, NFC

Extracts computeOverflowForSignedAdd and isKnownNonNegative from NaryReassociate to ValueTracking in case
others need them.

Reviewers: reames

Subscribers: majnemer, llvm-commits

Differential Revision: http://reviews.llvm.org/D11313

llvm-svn: 245591
2015-08-20 18:27:04 +00:00
Artur Pilipenko
95415da893 Take alignment into account in isSafeToSpeculativelyExecute and isSafeToLoadUnconditionally.
Reviewed By: hfinkel, sanjoy, MatzeB

Differential Revision: http://reviews.llvm.org/D9791

llvm-svn: 245223
2015-08-17 15:54:26 +00:00
James Molloy
b13284a73a [ValueTracking] Tweak a comment slightly
Hal asked for this change in D11146, but I missed it when I committed originally.

llvm-svn: 244754
2015-08-12 15:11:43 +00:00
James Molloy
ecd6525b24 Add support for floating-point minnum and maxnum
The select pattern recognition in ValueTracking (as used by InstCombine
and SelectionDAGBuilder) only knew about integer patterns. This teaches
it about minimum and maximum operations.

matchSelectPattern() has been extended to return a struct containing the
existing Flavor and a new enum defining the pattern's behavior when
given one NaN operand.

C minnum() is defined to return the non-NaN operand in this case, but
the idiomatic C "a < b ? a : b" would return the NaN operand.

ARM and AArch64 at least have different instructions for these different cases.

llvm-svn: 244580
2015-08-11 09:12:57 +00:00
Benjamin Kramer
69a3fdb314 Fix some comment typos.
llvm-svn: 244402
2015-08-08 18:27:36 +00:00
Quentin Colombet
4323bdacbd [Reassociation] Fix miscompile for va_arg arguments.
isUnmovableInstruction() had a hardcoded list of instructions which are
considered unmovable. The list lacked (at least) an entry for the va_arg
and cmpxchg instructions.
Fix this by introducing a new Instruction::mayBeMemoryDependent()
instead of maintaining another instruction list.

Patch by Matthias Braun <matze@braunis.de>.

Differential Revision: http://reviews.llvm.org/D11577

rdar://problem/22118647

llvm-svn: 244244
2015-08-06 18:44:34 +00:00
David Majnemer
34ee3789f3 New EH representation for MSVC compatibility
This introduces new instructions necessary to implement MSVC-compatible
exception handling support.  Most of the middle-end has not been audited
or updated to take them into account, and none of the back-end has.

Differential Revision: http://reviews.llvm.org/D11097

llvm-svn: 243766
2015-07-31 17:58:14 +00:00
Jingyue Wu
a6a8a2d2b1 [SCEV] Apply NSW and NUW flags via poison value analysis
Summary:
Make Scalar Evolution able to propagate NSW and NUW flags from instructions to SCEVs in some cases. This is based on reasoning about when poison from instructions with these flags would trigger undefined behavior. This gives a 13% speed-up on some Eigen3-based Google-internal microbenchmarks for NVPTX.

There does not seem to be clear agreement about when poison should be considered to propagate through instructions. In this analysis, poison propagates only in cases where that should be uncontroversial.

This change makes LSR able to create induction variables for expressions like &ptr[i + offset] for loops like this:

  for (int i = 0; i < limit; ++i) {
    sum += ptr[i + offset];
  }

Here ptr is a 64 bit pointer and offset is a 32 bit integer. For NVPTX, LSR currently creates an induction variable for i + offset instead, which is not as fast. Improving this situation is what brings the 13% speed-up on some Eigen3-based Google-internal microbenchmarks for NVPTX.


There are more details in this discussion on llvmdev.
June: http://lists.cs.uiuc.edu/pipermail/llvmdev/2015-June/thread.html#87234
July: http://lists.cs.uiuc.edu/pipermail/llvmdev/2015-July/thread.html#87392

Patch by Bjarke Roune

Reviewers: eliben, atrick, sanjoy

Subscribers: majnemer, hfinkel, jingyue, meheff, llvm-commits

Differential Revision: http://reviews.llvm.org/D11212

llvm-svn: 243460
2015-07-28 18:22:40 +00:00
Peter Collingbourne
f49ef7d3ac IR: Do not consider available_externally linkage to be linker-weak.
From the linker's perspective, an available_externally global is equivalent
to an external declaration (per isDeclarationForLinker()), so it is incorrect
to consider it to be a weak definition.

Also clean up some logic in the dead argument elimination pass and clarify
its comments to better explain how its behavior depends on linkage;
introduce GlobalValue::isStrongDefinitionForLinker() and start using
it throughout the optimizers and backend.

Differential Revision: http://reviews.llvm.org/D10941

llvm-svn: 241413
2015-07-05 20:52:35 +00:00
Jingyue Wu
88ed36369b [ValueTracking] do not overwrite analysis results already computed
Summary:
ValueTracking used to overwrite the analysis results computed from
assumes and dominating conditions. This patch fixes this issue.

Test Plan: test/Analysis/ValueTracking/assume.ll

Reviewers: hfinkel, majnemer

Reviewed By: majnemer

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D10283

llvm-svn: 239718
2015-06-15 05:46:29 +00:00
Artur Pilipenko
c17a17c081 Minor refactoring of GEP handling in isDereferenceablePointer
For GEP instructions isDereferenceablePointer checks that all indices are constant and within bounds. Replace this index calculation logic with a call to accumulateConstantOffset. Separated from http://reviews.llvm.org/D9791.

Reviewed By: sanjoy

Differential Revision: http://reviews.llvm.org/D9874

llvm-svn: 239299
2015-06-08 11:58:13 +00:00
James Molloy
e993a7db93 Reapply r237539 with a fix for the Chromium build.
Make sure that if we're truncating a constant that would then be sign extended,
the sign extension of the truncated constant is the same as the
original constant.

> Canonicalize min/max expressions correctly.
>
> This patch introduces a canonical form for min/max idioms where one operand
> is extended or truncated. This often happens when the other operand is a
> constant. For example:
>
> %1 = icmp slt i32 %a, i32 0
> %2 = sext i32 %a to i64
> %3 = select i1 %1, i64 %2, i64 0
>
> Would now be canonicalized into:
>
> %1 = icmp slt i32 %a, i32 0
> %2 = select i1 %1, i32 %a, i32 0
> %3 = sext i32 %2 to i64
>
> This builds upon a patch posted by David Majnemer
> (https://www.marc.info/?l=llvm-commits&m=143008038714141&w=2). That patch
> passively stopped instcombine from ruining canonical patterns. This
> patch additionally actively makes instcombine canonicalize too.
>
> Canonicalization of expressions involving a change in type from int->fp
> or fp->int are not yet implemented.

llvm-svn: 237821
2015-05-20 18:41:25 +00:00
Sanjoy Das
4eaf966f48 Dereferenceable, dereferenceable_or_null metadata for loads
Summary:
Introduce dereferenceable, dereferenceable_or_null metadata for loads
with the same semantics as the corresponding attributes.

This patch depends on http://reviews.llvm.org/D9253

Patch by Artur Pilipenko!

Reviewers: hfinkel, sanjoy, reames

Reviewed By: sanjoy, reames

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D9365

llvm-svn: 237720
2015-05-19 20:10:19 +00:00
Sanjoy Das
552d093b67 Exploit dereferenceable_or_null attribute in LICM pass
Summary:
Allow hoisting of loads from values marked with dereferenceable_or_null
attribute. For values marked with the attribute perform
context-sensitive analysis to determine whether it's known-non-null or
not.

Patch by Artur Pilipenko!

Reviewers: hfinkel, sanjoy, reames

Reviewed By: reames

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D9253

llvm-svn: 237593
2015-05-18 18:07:00 +00:00
James Molloy
dfdfc1be38 Allow min/max detection to see through casts.
This teaches the min/max idiom detector in ValueTracking to see through
casts such as SExt/ZExt/Trunc. SCEV can already do this, so we're bringing
non-SCEV analyses up to the same level.

The returned LHS/RHS will not match the type of the original SelectInst
any more, so a CastOp is returned too to inform the caller how to
convert to the SelectInst's type.

No in-tree users yet; this will be used by InstCombine in a followup.

llvm-svn: 237452
2015-05-15 16:04:50 +00:00
Jingyue Wu
55f6400e38 [ValueTracking] refactor: extract method haveNoCommonBitsSet
Summary:
Extract method haveNoCommonBitsSet so that we don't have to duplicate this logic in
InstCombine and SeparateConstOffsetFromGEP.

This patch also makes SeparateConstOffsetFromGEP more precise by passing
DominatorTree to computeKnownBits.

Test Plan: value-tracking-domtree.ll that tests ValueTracking indeed leverages dominating conditions

Reviewers: broune, meheff, majnemer

Reviewed By: majnemer

Subscribers: jholewinski, llvm-commits

Differential Revision: http://reviews.llvm.org/D9734

llvm-svn: 237407
2015-05-14 23:53:19 +00:00
Pete Cooper
cd94898d6b Convert PHI getIncomingValue() to foreach over incoming_values(). NFC.
We already had a method to iterate over all the incoming values of a PHI.  This just changes all eligible code to use it.

Ineligible code included anything which cared about the index, or was also trying to get the i'th incoming BB.

llvm-svn: 237169
2015-05-12 20:05:31 +00:00
James Molloy
1208541226 Rip min/max pattern matching out of InstCombine and into
ValueTracking.

This matching functionality is useful in more than just InstCombine, so
make it available in ValueTracking.

NFC.

llvm-svn: 236998
2015-05-11 14:42:20 +00:00
Sanjoy Das
197092fa7d [Statepoint] Clean up Statepoint.h: accessor names.
Use getFoo() as accessors consistently and some other naming changes.

llvm-svn: 236564
2015-05-06 02:36:26 +00:00
Adam Nemet
2a8bdf8474 [getUnderlyingObjects] Analyze loop PHIs further to remove false positives
Specifically, if a pointer accesses different underlying objects in each
iteration, don't look through the phi node defining the pointer.

The motivating case is the underlying-objects-2.ll testcase.  Consider
the loop nest:

  int **A;
  for (i)
    for (j)
       A[i][j] = A[i-1][j] * B[j]

This loop is transformed by Load-PRE to stash away A[i] for the next
iteration of the outer loop:

  Curr = A[0];          // Prev_0
  for (i: 1..N) {
    Prev = Curr;        // Prev = PHI (Prev_0, Curr)
    Curr = A[i];
    for (j: 0..N)
       Curr[j] = Prev[j] * B[j]
  }

Since A[i] and A[i-1] are likely to be independent pointers,
getUnderlyingObjects should not assume that Curr and Prev share the same
underlying object in the inner loop.

If it did we would try to dependence-analyze Curr and Prev and the
analysis of the corresponding SCEVs would fail with non-constant
distance.

To fix this, the getUnderlyingObjects API is extended with an optional
LoopInfo parameter.  This is effectively what controls whether we want
the above behavior or the original.  Currently, I only changed to use
this approach for LoopAccessAnalysis.

The other testcase is to guard the opposite case where we do want to
look through the loop PHI.  If we step through an array by incrementing
a pointer, the underlying object is the incoming value of the phi as the
loop is entered.

Fixes rdar://problem/19566729

llvm-svn: 235634
2015-04-23 20:09:20 +00:00
Philip Reames
3f4453de96 Move Value.isDereferenceablePointer to ValueTracking [NFC]
Move isDereferenceablePointer function to Analysis. This function recursively tracks dereferenceability over a chain of values like other functions in ValueTracking.

This refactoring is motivated by further changes to support dereferenceable_or_null attribute (http://reviews.llvm.org/D8650). isDereferenceablePointer will be extended to perform context-sensitive analysis and IR is not a good place to have such functionality.

Patch by: Artur Pilipenko <apilipenko@azulsystems.com>
Differential Revision: reviews.llvm.org/D9075

llvm-svn: 235611
2015-04-23 17:36:48 +00:00
Benjamin Kramer
024e0f290e [CallSite] Make construction from Value* (or Instruction*) explicit.
CallSite roughly behaves as a common base CallInst and InvokeInst. Bring
the behavior closer to that model by making upcasts explicit. Downcasts
remain implicit and work as before.

Following dyn_cast as a mental model, checking whether a Value *V isa
CallSite now looks like this: 
  if (auto CS = CallSite(V)) // think dyn_cast
instead of:
  if (CallSite CS = V)

This is an extra token but I think it is slightly clearer. Making the
ctor explicit has the advantage of not accidentally creating nullptr
CallSites, e.g. when you pass a Value * to a function taking a CallSite
argument.

llvm-svn: 234601
2015-04-10 14:50:08 +00:00
Benjamin Kramer
f6149322d4 Reduce dyn_cast<> to isa<> or cast<> where possible.
No functional change intended.

llvm-svn: 234586
2015-04-10 11:24:51 +00:00
Sanjoy Das
40f3beb387 [ValueTracking] Fix PR23011.
Summary:
`ComputeNumSignBits` returns incorrect results for `srem` instructions.
This change fixes the issue and adds a test case.

Reviewers: nadav, nicholas, atrick

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D8600

llvm-svn: 233225
2015-03-25 22:33:53 +00:00
Benjamin Kramer
3deba1d2df [APInt] Add an isSplat helper and use it in some places.
To complement getSplat. This is more general than the binary
decomposition method as it also handles non-pow2 splat sizes.

llvm-svn: 233195
2015-03-25 16:49:59 +00:00
Benjamin Kramer
804e94f0e7 ValueTracking: Forward getConstantStringInfo's TrimAtNul param into recursive invocation
Currently this is only used to tweak the backend's memcpy inlining
heuristics; testing that isn't very helpful. A real test case will
follow in the next commit, where this behavior would cause a real
miscompilation.

llvm-svn: 232895
2015-03-21 15:36:06 +00:00
Philip Reames
6b5f658b6c Infer known bits from dominating conditions
This patch adds limited support in ValueTracking for inferring known bits of a value from conditional expressions which must be true to reach the instruction we're trying to optimize. At this time, the feature is off by default. Once landed, I'm hoping for feedback from others on both profitability and compile time impact.

Forms of conditional value propagation have been tried in LLVM before and have failed due to compile time problems.  In an attempt to side step that, this patch only considers conditions where the edge leaving the branch dominates the context instruction. It does not attempt full dataflow.  Even with that restriction, it handles many interesting cases:
 * Early exits from functions
 * Early exits from loops (for context instructions in the loop and after the check)
 * Conditions which control entry into loops, including multi-version loops (such as those produced during vectorization, IRCE, loop unswitch, etc..)

Possible applications include optimizing using information provided by constructs such as: preconditions, assumptions, null checks, & range checks.

This patch implements two approaches to the problem that need further benchmarking.  Approach 1 is to directly walk the dominator tree looking for interesting conditions.  Approach 2 is to inspect other uses of the value being queried for interesting comparisons.  From initial benchmarking, it appears that Approach 2 is faster than Approach 1, but this needs to be further validated.  

Differential Revision: http://reviews.llvm.org/D7708

llvm-svn: 231879
2015-03-10 22:43:20 +00:00
Mehdi Amini
f88efe5f8a DataLayout is mandatory, update the API to reflect it with references.
Summary:
Now that the DataLayout is a mandatory part of the module, let's start
cleaning the codebase. This patch is a first attempt at doing that.

This patch is not exactly NFC as for instance some places were passing
a nullptr instead of the DataLayout, possibly just because there was a
default value on the DataLayout argument to many functions in the API.
Even though it is not purely NFC, there is no change in the
validation.

I turned as many pointers to DataLayout into references as possible; this helped
in figuring out all the places where a nullptr could come up.

I initially had a local version of this patch broken into over 30
independent commits, but some later commits were cleaning the API and
touching part of the code modified in the previous commits, so it
seemed cleaner without the intermediate state.

Test Plan:

Reviewers: echristo

Subscribers: llvm-commits

From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 231740
2015-03-10 02:37:25 +00:00
Nadav Rotem
f095e52b93 Teach ComputeNumSignBits about signed reminder.
This optimization is a continuation of r231140, which reasoned about signed div.

llvm-svn: 231433
2015-03-06 00:23:58 +00:00
Nadav Rotem
0f7a38b97d Teach ComputeNumSignBits about signed divisions.
http://reviews.llvm.org/D8028
rdar://20023136

llvm-svn: 231140
2015-03-03 21:39:02 +00:00
Sanjay Patel
5a3bbd8851 Fix really obscure bug in CannotBeNegativeZero() (PR22688)
With a diabolically crafted test case, we could recurse
through this code and return true instead of false.

The larger engineering crime is the use of magic numbers. 
Added FIXME comments for those.

llvm-svn: 230515
2015-02-25 18:00:15 +00:00
Benjamin Kramer
3aeb5530c5 ValueTracking: Make isBytewiseValue simpler and more powerful at the same time.
Turns out there is a simpler way of checking that all bytes in a word are equal
than binary decomposition.

llvm-svn: 228503
2015-02-07 19:29:02 +00:00
David Majnemer
d626da0571 ValueTracking: Make isSafeToSpeculativelyExecute a little cleaner
No functional change intended.

llvm-svn: 227760
2015-02-01 19:10:19 +00:00
Elena Demikhovsky
e46025656d Fold fcmp in cases where value is provably non-negative. By Arch Robison.
This patch folds fcmp in some cases of interest in Julia. The patch adds a function CannotBeOrderedLessThanZero that returns true if a value is provably not less than zero. I.e. the function returns true if the value is provably -0, +0, positive, or a NaN. The patch extends InstructionSimplify.cpp to fold instances of fcmp where:
 - the predicate is olt or uge
 - the first operand is provably not less than zero
 - the second operand is zero
The motivation for handling these cases is optimizing away domain checks for sqrt in Julia for common idioms such as sqrt(x*x+y*y).
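
An illustrative sketch of that idiom (hypothetical names): x*x + y*y cannot be ordered-less-than zero, so the domain check can fold to false.

  declare double @llvm.sqrt.f64(double)

  define double @no_domain_check(double %x, double %y) {
    %xx = fmul double %x, %x
    %yy = fmul double %y, %y
    %sum = fadd double %xx, %yy
    %lt = fcmp olt double %sum, 0.0          ; candidate to fold to false
    %r = call double @llvm.sqrt.f64(double %sum)
    %sel = select i1 %lt, double 0x7FF8000000000000, double %r
    ret double %sel
  }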

http://reviews.llvm.org/D6972

llvm-svn: 227298
2015-01-28 08:03:58 +00:00
Chandler Carruth
0b619fcc8e [cleanup] Re-sort all the #include lines in LLVM using
utils/sort_includes.py.

I clearly haven't done this in a while, so more changed than usual. This
even uncovered a missing include from the InstrProf library that I've
added. No functionality changed here, just mechanical cleanup of the
include order.

llvm-svn: 225974
2015-01-14 11:23:27 +00:00
David Majnemer
eb61de555d Analysis: Reformulate WillNotOverflowUnsignedAdd for reusability
WillNotOverflowUnsignedAdd's smarts will live in ValueTracking as
computeOverflowForUnsignedAdd.  It now returns a tri-state result:
never overflows, always overflows and sometimes overflows.

llvm-svn: 225329
2015-01-07 00:39:50 +00:00
Chandler Carruth
670ed5fed8 [PM] Cleanup a const_cast and other machinery left over in this code
from before I removed the non-const use of the function.

The unused variable that held the const_cast was already kindly removed
by Michael.

llvm-svn: 225143
2015-01-04 23:13:57 +00:00
Michael Kuperstein
448b902906 Fix unused variable warning for non-asserts builds. NFC.
llvm-svn: 225133
2015-01-04 13:35:44 +00:00
Chandler Carruth
c140bae640 [PM] Split the AssumptionTracker immutable pass into two separate APIs:
a cache of assumptions for a single function, and an immutable pass that
manages those caches.

The motivation for this change is two fold. Immutable analyses are
really hacks around the current pass manager design and don't exist in
the new design. This is usually OK, but it requires that the core logic
of an immutable pass be reasonably partitioned off from the pass logic.
This change does precisely that. As a consequence it also paves the way
for the *many* utility functions that deal in the assumptions to live in
both pass manager worlds by creating an separate non-pass object with
its own independent API that they all rely on. Now, the only bits of the
system that deal with the actual pass mechanics are those that actually
need to deal with the pass mechanics.

Once this separation is made, several simplifications become pretty
obvious in the assumption cache itself. Rather than using a set and
callback value handles, it can just be a vector of weak value handles.
The callers can easily skip the handles that are null, and eventually we
can wrap all of this up behind a filter iterator.

For now, this adds boiler plate to the various passes, but this kind of
boiler plate will end up making it possible to port these passes to the
new pass manager, and so it will end up factored away pretty reasonably.

llvm-svn: 225131
2015-01-04 12:03:27 +00:00
David Majnemer
8ed72d8999 ValueTracking: ComputeNumSignBits should tolerate misshapen phi nodes
PHI nodes can have zero operands in the middle of a transform.  It is
expected that utilities in Analysis don't freak out when this happens.

Note that it is considered invalid to allow these misshapen phi nodes to
make it to another pass.

This fixes PR22086.

llvm-svn: 225126
2015-01-04 07:06:53 +00:00
David Majnemer
5771468274 ValueTracking: Make computeKnownBits for Arguments a little more clear
We would sometimes leave the out-param APInts untouched while going
through computeKnownBits.  While I don't know of a way to trigger a bug
involving this in practice, it goes against the overall design of
computeKnownBits.

Found via code inspection.

llvm-svn: 225109
2015-01-03 02:33:25 +00:00
David Majnemer
78198d7245 InstCombine: Detect when llvm.umul.with.overflow always overflows
We know overflow always occurs if both ~LHSKnownZero * ~RHSKnownZero
and LHSKnownOne * RHSKnownOne overflow.

llvm-svn: 225077
2015-01-02 07:29:47 +00:00
David Majnemer
a7058e95b3 Analysis: Reformulate WillNotOverflowUnsignedMul for reusability
WillNotOverflowUnsignedMul's smarts will live in ValueTracking as
computeOverflowForUnsignedMul.  It now returns a tri-state result:
never overflows, always overflows and sometimes overflows.

llvm-svn: 225076
2015-01-02 07:29:43 +00:00
David Majnemer
abd3e157be ValueTracking: Small cleanup in ComputeNumSignBits
Constant contains the isAllOnesValue and isNullValue predicates, not
ConstantInt.

llvm-svn: 224848
2014-12-26 09:20:17 +00:00
Michael Kuperstein
180649bb38 [ValueTracking] Move GlobalAlias handling to be after the max depth check in computeKnownBits()
GlobalAlias handling used to be after GlobalValue handling, which meant it was, in practice, dead code. r220165 moved GlobalAlias handling to be before GlobalValue handling, but also moved it to be before the max depth check, causing an assert due to a recursion depth limit violation. 

This moves GlobalAlias handling forward to where it's safe, and changes the GlobalValue handling to only look at GlobalObjects.

Differential Revision: http://reviews.llvm.org/D6758

llvm-svn: 224765
2014-12-23 11:33:41 +00:00
David Majnemer
70ded5026c ValueTracking: Don't recurse too deeply in computeKnownBitsFromAssume
Respect the MaxDepth recursion limit; doing otherwise will trigger an
assert in computeKnownBits.

This fixes PR21891.

llvm-svn: 224168
2014-12-12 23:59:29 +00:00
Duncan P. N. Exon Smith
3d57886267 IR: Split Metadata from Value
Split `Metadata` away from the `Value` class hierarchy, as part of
PR21532.  Assembly and bitcode changes are in the wings, but this is the
bulk of the change for the IR C++ API.

I have a follow-up patch prepared for `clang`.  If this breaks other
sub-projects, I apologize in advance :(.  Help me compile it on Darwin and
I'll try to fix it.  FWIW, the errors should be easy to fix, so it may
be simpler to just fix it yourself.

This breaks the build for all metadata-related code that's out-of-tree.
Rest assured the transition is mechanical and the compiler should catch
almost all of the problems.

Here's a quick guide for updating your code:

  - `Metadata` is the root of a class hierarchy with three main classes:
    `MDNode`, `MDString`, and `ValueAsMetadata`.  It is distinct from
    the `Value` class hierarchy.  It is typeless -- i.e., instances do
    *not* have a `Type`.

  - `MDNode`'s operands are all `Metadata *` (instead of `Value *`).

  - `TrackingVH<MDNode>` and `WeakVH` referring to metadata can be
    replaced with `TrackingMDNodeRef` and `TrackingMDRef`, respectively.

    If you're referring solely to resolved `MDNode`s -- post graph
    construction -- just use `MDNode*`.

  - `MDNode` (and the rest of `Metadata`) have only limited support for
    `replaceAllUsesWith()`.

    As long as an `MDNode` is pointing at a forward declaration -- the
    result of `MDNode::getTemporary()` -- it maintains a side map of its
    uses and can RAUW itself.  Once the forward declarations are fully
    resolved RAUW support is dropped on the ground.  This means that
    uniquing collisions on changing operands cause nodes to become
    "distinct".  (This already happened fairly commonly, whenever an
    operand went to null.)

    If you're constructing complex (non self-reference) `MDNode` cycles,
    you need to call `MDNode::resolveCycles()` on each node (or on a
    top-level node that somehow references all of the nodes).  Also,
    don't do that.  Metadata cycles (and the RAUW machinery needed to
    construct them) are expensive.

  - An `MDNode` can only refer to a `Constant` through a bridge called
    `ConstantAsMetadata` (one of the subclasses of `ValueAsMetadata`).

    As a side effect, accessing an operand of an `MDNode` that is known
    to be, e.g., `ConstantInt`, takes three steps: first, cast from
    `Metadata` to `ConstantAsMetadata`; second, extract the `Constant`;
    third, cast down to `ConstantInt`.

    The eventual goal is to introduce `MDInt`/`MDFloat`/etc. and have
    metadata schema owners transition away from using `Constant`s when
    the type isn't important (and they don't care about referring to
    `GlobalValue`s).

    In the meantime, I've added transitional API to the `mdconst`
    namespace that matches semantics with the old code, in order to
    avoid adding the error-prone three-step equivalent to every call
    site.  If your old code was:

        MDNode *N = foo();
        bar(isa             <ConstantInt>(N->getOperand(0)));
        baz(cast            <ConstantInt>(N->getOperand(1)));
        bak(cast_or_null    <ConstantInt>(N->getOperand(2)));
        bat(dyn_cast        <ConstantInt>(N->getOperand(3)));
        bay(dyn_cast_or_null<ConstantInt>(N->getOperand(4)));

    you can trivially match its semantics with:

        MDNode *N = foo();
        bar(mdconst::hasa               <ConstantInt>(N->getOperand(0)));
        baz(mdconst::extract            <ConstantInt>(N->getOperand(1)));
        bak(mdconst::extract_or_null    <ConstantInt>(N->getOperand(2)));
        bat(mdconst::dyn_extract        <ConstantInt>(N->getOperand(3)));
        bay(mdconst::dyn_extract_or_null<ConstantInt>(N->getOperand(4)));

    and when you transition your metadata schema to `MDInt`:

        MDNode *N = foo();
        bar(isa             <MDInt>(N->getOperand(0)));
        baz(cast            <MDInt>(N->getOperand(1)));
        bak(cast_or_null    <MDInt>(N->getOperand(2)));
        bat(dyn_cast        <MDInt>(N->getOperand(3)));
        bay(dyn_cast_or_null<MDInt>(N->getOperand(4)));

  - A `CallInst` -- specifically, intrinsic instructions -- can refer to
    metadata through a bridge called `MetadataAsValue`.  This is a
    subclass of `Value` where `getType()->isMetadataTy()`.

    `MetadataAsValue` is the *only* class that can legally refer to a
    `LocalAsMetadata`, which is a bridged form of non-`Constant` values
    like `Argument` and `Instruction`.  It can also refer to any other
    `Metadata` subclass.

(I'll break all your testcases in a follow-up commit, when I propagate
this change to assembly.)

llvm-svn: 223802
2014-12-09 18:38:53 +00:00
Philip Reames
f22f53238c Factor check for the assume intrinsic out of checks in computeKnownBitsFromAssume
We were matching against the assume intrinsic in every check.  Since we know that it must be an assume, this is just wasted work.  Somewhat surprisingly, matching an intrinsic id is actually relatively expensive.  It devolves to a string construction and comparison in Function::isIntrinsic.

I originally spotted this because it showed up in a performance profile of my compiler.  I've since discovered a separate issue which seems to be the actual root cause, but this is minor perf goodness regardless.  

I'm likely to follow up with another change to factor out the comparison matching.  There's no need to match the compare instruction in every single one of the tests.

Differential Revision: http://reviews.llvm.org/D6312

llvm-svn: 222709
2014-11-24 23:44:28 +00:00
David Blaikie
60e6c80905 Update SetVector to rely on the underlying set's insert to return a pair<iterator, bool>
This is to be consistent with StringSet and ultimately with the standard
library's associative container insert function.

This led to updating SmallSet::insert to return pair<iterator, bool>,
and then to update SmallPtrSet::insert to return pair<iterator, bool>,
and then to update all the existing users of those functions...

llvm-svn: 222334
2014-11-19 07:49:26 +00:00
Duncan P. N. Exon Smith
8770505e4e Revert "IR: MDNode => Value"
Instead, we're going to separate metadata from the Value hierarchy.  See
PR21532.

This reverts commit r221375.
This reverts commit r221373.
This reverts commit r221359.
This reverts commit r221167.
This reverts commit r221027.
This reverts commit r221024.
This reverts commit r221023.
This reverts commit r220995.
This reverts commit r220994.

llvm-svn: 221711
2014-11-11 21:30:22 +00:00
Michael Liao
9f46de9bd4 Indentation fixes
llvm-svn: 221472
2014-11-06 19:05:57 +00:00
Sanjay Patel
13b8368b56 remove extra breaks; NFC
llvm-svn: 221374
2014-11-05 18:00:07 +00:00
David Majnemer
dc5cf4d8b3 Analysis: Make isSafeToSpeculativelyExecute fire less for divides
Divides and remainder operations do not behave like other operations
when they are given poison: they turn into undefined behavior.

It's really hard to know if the operands going into a div are or are not
poison.  Because of this, we should only choose to speculate if there
are constant operands which we can easily reason about.

This fixes PR21412.

llvm-svn: 221318
2014-11-04 23:49:08 +00:00
Sanjay Patel
9ea714e765 remove function names from comments; NFC
llvm-svn: 221274
2014-11-04 16:27:42 +00:00
Sanjay Patel
645461083d fix typo in comment
llvm-svn: 221273
2014-11-04 16:09:50 +00:00
Duncan P. N. Exon Smith
7004fd9aac IR: MDNode => Value: Instruction::getMetadata()
Change `Instruction::getMetadata()` to return `Value` as part of
PR21433.

Update most callers to use `Instruction::getMDNode()`, which wraps the
result in a `cast_or_null<MDNode>`.

llvm-svn: 221024
2014-11-01 00:10:31 +00:00
Philip Reames
d5ee9ddca8 Add handling for range metadata in ValueTracking isKnownNonZero
If we load from a location with range metadata, we can use information about the ranges of the loaded value for optimization purposes.  This helps to remove redundant checks and canonicalize checks for other optimization passes.  This particular patch checks whether a value is known to be non-zero from the range metadata.
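
A minimal sketch (hypothetical names): the !range metadata excludes zero, so isKnownNonZero can report %v as non-zero and the check below can fold to false.

  define i1 @range_nonzero(i32* %p) {
    %v = load i32, i32* %p, !range !0
    %c = icmp eq i32 %v, 0        ; candidate to fold to false
    ret i1 %c
  }
  !0 = !{i32 1, i32 10}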

Currently, these tests are against InstCombine.  In theory, all of these should be InstSimplify since we're not inserting any new instructions.  Moving the code may follow in a separate change.

Reviewed by: Hal
Differential Revision: http://reviews.llvm.org/D5947

llvm-svn: 220925
2014-10-30 20:25:19 +00:00
Matt Arsenault
74dd906076 Add minnum / maxnum intrinsics
These are named following the IEEE-754 names for these
functions, rather than the libm fmin / fmax to avoid
possible ambiguities. Some languages may implement something
resembling fmin / fmax which return NaN if either operand is
to propagate errors. These implement the IEEE-754 semantics
of returning the other operand if either is a NaN representing
missing data.

llvm-svn: 220341
2014-10-21 23:00:20 +00:00
Philip Reames
c3e4c79873 Introduce enum values for previously defined metadata types. (NFC)
Our metadata scheme lazily assigns IDs to string metadata, but we have a mechanism to preassign them as well.  Using a preassigned ID is helpful since we get compile-time type checking, and avoid some (minimal) string construction and comparison.  This change adds enum values for three existing metadata types:
+    MD_nontemporal = 9, // "nontemporal"
+    MD_mem_parallel_loop_access = 10, // "llvm.mem.parallel_loop_access"
+    MD_nonnull = 11 // "nonnull"

I went through and updated various uses as well.  I made no attempt to get all uses; I focused on the ones which were easily greppable and easy to translate.  For example, there were several items in LoopInfo.cpp I chose not to update.

llvm-svn: 220248
2014-10-21 00:13:20 +00:00
Philip Reames
cb6ff55dfa Introduce a 'nonnull' metadata on Load instructions.
The newly introduced 'nonnull' metadata is analogous to existing 'nonnull' attributes, but applies to load instructions rather than call arguments or returns.  Long term, it would be nice to combine these into a single construct.   The value of the load is allowed to vary between successive loads, but null is not a valid value to be loaded by any load marked nonnull.

Reviewed by: Hal Finkel
Differential Revision:  http://reviews.llvm.org/D5220

llvm-svn: 220240
2014-10-20 22:40:55 +00:00
Chandler Carruth
9c87bbcef7 Move previously dead code to handle computing the known bits of an alias
up to where it actually works as intended. The problem is that
a GlobalAlias isa GlobalValue and so the prior block handled all of the
cases.

This allows us to constant fold based on the actual constant expression
in the global alias. As an example, see the last function in the newly
added test case which explicitly aligns an unaligned pointer using
constant expression math. Without this change, we fail to see that and
fold an alignment test to zero.
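
A sketch of the shape of the fix (illustrative; the real change is the
ordering inside ValueTracking's known-bits code): recognize the alias first,
so its aliasee, often a ConstantExpr doing pointer math, can be examined
instead of bailing out in the generic GlobalValue block.

    #include "llvm/IR/Constants.h"
    #include "llvm/IR/GlobalAlias.h"
    using namespace llvm;

    static bool aliaseeIsConstantExpr(const Value *V) {
      if (const auto *GA = dyn_cast<GlobalAlias>(V))
        return isa<ConstantExpr>(GA->getAliasee()); // e.g. explicit align math
      return false;
    }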

llvm-svn: 220164
2014-10-19 09:06:56 +00:00
Benjamin Kramer
ad466ef0b8 Fix an ODR violation consisting of two 'struct Query' in the global namespace.
Put them in their own anonymous namespaces. Found by GCC's new -Wodr (PR20915).
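
The fix follows the usual pattern (sketch): give each file-local helper type
internal linkage so identically named types in different translation units
cannot collide.

    namespace {
    // File-local helper; the anonymous namespace gives it internal linkage,
    // so another TU's 'struct Query' is a distinct type.
    struct Query {
      // per-file state...
    };
    } // end anonymous namespace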

llvm-svn: 217662
2014-09-12 08:56:53 +00:00
Hal Finkel
be0364002a Add additional patterns for @llvm.assume in ValueTracking
This builds on r217342, which added the infrastructure to compute known bits
using assumptions (@llvm.assume calls). That original commit added only a few
patterns (to catch common cases related to determining pointer alignment); this
change adds several other patterns for simple cases.

r217342 established that, for assume(v & b = a), for those bits in the mask
that are known to be one, we can propagate known bits from a to v (a sketch of
this propagation appears below). It also had a known-bits transfer for
assume(a = b). This patch adds:

assume(~(v & b) = a) : For those bits in the mask that are known to be one, we
                       can propagate inverted known bits from the a to v.

assume(v | b = a) :    For those bits in b that are known to be zero, we can
                       propagate known bits from the a to v.

assume(~(v | b) = a):  For those bits in b that are known to be zero, we can
                       propagate inverted known bits from the a to v.

assume(v ^ b = a) :    For those bits in b that are known to be zero, we can
		       propagate known bits from the a to v. For those bits in
		       b that are known to be one, we can propagate inverted
                       known bits from the a to v.

assume(~(v ^ b) = a) : For those bits in b that are known to be zero, we can
		       propagate inverted known bits from the a to v. For those
		       bits in b that are known to be one, we can propagate
                       known bits from the a to v.

assume(v << c = a) :   For those bits in a that are known, we can propagate them
                       to known bits in v shifted to the right by c.

assume(~(v << c) = a) : For those bits in a that are known, we can propagate
                        them inverted to known bits in v shifted to the right by c.

assume(v >> c = a) :   For those bits in a that are known, we can propagate them
                       to known bits in v shifted to the left by c.

assume(~(v >> c) = a) : For those bits in a that are known, we can propagate
                        them inverted to known bits in v shifted to the left by c.

assume(v >=_s c) where c is non-negative: The sign bit of v is zero

assume(v >_s c) where c is at least -1: The sign bit of v is zero

assume(v <=_s c) where c is negative: The sign bit of v is one

assume(v <_s c) where c is non-positive: The sign bit of v is one

assume(v <=_u c): Transfer the known high zero bits

assume(v <_u c): Transfer the known high zero bits (if c is known to be a power
                 of 2, transfer one more)

A small addition to InstCombine was necessary for some of the test cases. The
problem is that when InstCombine was simplifying and, or, etc. it would fail to
check the 'do I know all of the bits' condition before checking less specific
conditions and would not fully constant-fold the result. I'm not sure how to
trigger this aside from using assumptions, so I've just included the change
here.
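
A minimal sketch of the base rule mentioned above, assume((v & b) == a), in
the KnownZero/KnownOne style ValueTracking used at the time (variable names
are illustrative, not the committed code):

    #include "llvm/ADT/APInt.h"
    using namespace llvm;

    // Where b is known to be one, (v & b) equals v, so a's known bits at those
    // positions transfer directly to v.
    static void applyAssumeAndEq(APInt &VKnownZero, APInt &VKnownOne,
                                 const APInt &AKnownZero, const APInt &AKnownOne,
                                 const APInt &BKnownOne) {
      VKnownZero |= AKnownZero & BKnownOne;
      VKnownOne  |= AKnownOne  & BKnownOne;
    }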

llvm-svn: 217343
2014-09-07 19:21:07 +00:00
Hal Finkel
f8bb9b78cf Make use of @llvm.assume in ValueTracking (computeKnownBits, etc.)
This change, which allows @llvm.assume to be used from within computeKnownBits
(and other associated functions in ValueTracking), adds some (optional)
parameters to computeKnownBits and friends. These functions now (optionally)
take a "context" instruction pointer, an AssumptionTracker pointer, and also a
DomTree pointer, and most of the changes are just to pass this new information
when it is easily available from InstSimplify, InstCombine, etc.

As explained below, the significant conceptual change is that known properties
of a value might depend on the control-flow location of the use (we care
whether the @llvm.assume dominates the use because assumptions have
control-flow dependencies). This means that, when we ask if bits are known in a
value, we might get different answers for different uses.

The significant changes are all in ValueTracking. Two main changes: First, as
with the rest of the code, new parameters need to be passed around. To make
this easier, I grouped them into a structure, and I made internal static
versions of the relevant functions that take this structure as a parameter. The
new code does as you might expect, it looks for @llvm.assume calls that make
use of the value we're trying to learn something about (often indirectly),
attempts to pattern match that expression, and uses the result if successful.
By making use of the AssumptionTracker, the process of finding @llvm.assume
calls is not expensive.

Part of the structure being passed around inside ValueTracking is a set of
already-considered @llvm.assume calls. This is to prevent a query using, for
example, the assume(a == b), to recurse on itself. The context and DT params
are used to find applicable assumptions. An assumption needs to dominate the
context instruction, or come after it deterministically. In this latter case we
only handle the specific case where both the assumption and the context
instruction are in the same block, and we need to exclude assumptions from
being used to simplify their own ephemeral values (those which contribute only
to the assumption) because otherwise the assumption would prove its feeding
comparison trivial and would be removed.

This commit adds the plumbing and the logic for a simple masked-bit propagation
(just enough to write a regression test). Future commits add more patterns
(and, correspondingly, more regression tests).
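
A sketch of the parameter grouping described above (field names are
illustrative, not necessarily those in the commit):

    #include "llvm/ADT/SmallPtrSet.h"

    namespace llvm {
    class AssumptionTracker;
    class DominatorTree;
    class Instruction;
    class Value;
    }
    using namespace llvm;

    struct Query {
      SmallPtrSet<const Value *, 8> Excluded; // assumes already being used
      AssumptionTracker *AT = nullptr;        // cheap lookup of @llvm.assume calls
      const Instruction *CxtI = nullptr;      // where the known-bits query is asked
      const DominatorTree *DT = nullptr;      // proves an assume dominates CxtI
    };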

llvm-svn: 217342
2014-09-07 18:57:58 +00:00
Matt Arsenault
f9216b5a3a Make fabs safe to speculatively execute
llvm-svn: 216736
2014-08-29 16:01:17 +00:00
David Majnemer
5a45d76cfb ValueTracking: Figure out more bits when looking at add/sub
Given something like X01XX + X01XX, we know that the result must look
like X1XXX.

Adapted from a patch by Richard Smith, test-case written by me.
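
A self-contained check of the X01XX example (a demo of the claim, not the LLVM
algorithm): enumerate every 5-bit value with bit 3 known zero and bit 2 known
one, and confirm bit 3 of every sum is one.

    #include <cassert>
    #include <cstdint>

    int main() {
      for (uint32_t A = 0; A < 32; ++A) {
        if ((A & 0x8u) != 0 || (A & 0x4u) == 0) continue; // A must match X01XX
        for (uint32_t B = 0; B < 32; ++B) {
          if ((B & 0x8u) != 0 || (B & 0x4u) == 0) continue; // B must match X01XX
          assert(((A + B) & 0x8u) != 0 && "bit 3 of the sum is always one");
        }
      }
      return 0;
    }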

llvm-svn: 216250
2014-08-22 00:40:43 +00:00
Craig Topper
65775cc03d Replace SmallPtrSet with SmallPtrSetImpl in function arguments to avoid needing to mention the size.
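
The calling convention this enables, sketched with illustrative function
names: callees take SmallPtrSetImpl&, so the caller's choice of inline size
never appears in the interface.

    #include "llvm/ADT/SmallPtrSet.h"
    #include "llvm/IR/Value.h"
    using namespace llvm;

    // Size-agnostic interface: the callee never mentions the small size N.
    static void recordValue(const Value *V, SmallPtrSetImpl<const Value *> &Out) {
      Out.insert(V);
    }

    static void example(const Value *V) {
      SmallPtrSet<const Value *, 8> Visited;  // the caller picks N
      recordValue(V, Visited);                // binds as SmallPtrSetImpl&
    }
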
llvm-svn: 216158
2014-08-21 05:55:13 +00:00