mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-26 22:42:46 +02:00
Commit Graph

5791 Commits

Author SHA1 Message Date
Sanjoy Das
1f464e3640 Extract out getConstantRangeFromMetadata; NFC
The loop idiom creating a ConstantRange is repeated twice in the
codebase, time to give it a name and a home.

The loop is also repeated in `rangeMetadataExcludesValue`, but using
`getConstantRangeFromMetadata` there would not be an NFC -- the range
returned by `getConstantRangeFromMetadata` may contain a value that none
of the subranges did.
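
For reference, the extracted helper essentially names this loop idiom -- a
minimal sketch (the exact in-tree signature may differ):

    // Union all [low, high) pairs from !range metadata into one ConstantRange.
    ConstantRange getConstantRangeFromMetadata(MDNode &Ranges) {
      unsigned NumRanges = Ranges.getNumOperands() / 2;
      auto *FirstLow = mdconst::extract<ConstantInt>(Ranges.getOperand(0));
      auto *FirstHigh = mdconst::extract<ConstantInt>(Ranges.getOperand(1));
      ConstantRange CR(FirstLow->getValue(), FirstHigh->getValue());
      for (unsigned i = 1; i != NumRanges; ++i) {
        auto *Low = mdconst::extract<ConstantInt>(Ranges.getOperand(2 * i));
        auto *High = mdconst::extract<ConstantInt>(Ranges.getOperand(2 * i + 1));
        CR = CR.unionWith(ConstantRange(Low->getValue(), High->getValue()));
      }
      return CR;
    }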

llvm-svn: 251180
2015-10-24 05:37:35 +00:00
Sanjoy Das
9a9c3c76d2 Fix whitespace issues in two places; NFC
llvm-svn: 251179
2015-10-24 05:37:28 +00:00
Hal Finkel
1a64f66683 Handle non-constant shifts in computeKnownBits, and use computeKnownBits for constant folding in InstCombine/Simplify
First, the motivation: LLVM currently does not realize that:

  ((2072 >> (L == 0)) >> 7) & 1 == 0

where L is some arbitrary value. Whether you right-shift 2072 by 7 or by 8, the
lowest-order bit is always zero. There are obviously several ways to go about
fixing this, but the generic solution pursued in this patch is to teach
computeKnownBits something about shifts by a non-constant amount. Previously,
we would give up completely on these. Instead, in cases where we know something
about the low-order bits of the shift-amount operand, we can combine (and
together) the associated restrictions for all shift amounts consistent with
that knowledge. As a further generalization, I refactored all of the logic for
all three kinds of shifts to have this capability. This works well in the above
case, for example, because the dynamic shift amount can only be 0 or 1, and
thus we can say a lot about the known bits of the result.
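
A rough sketch of the combining step (illustrative variable names; ShiftKZ and
ShiftKO stand for the shift-amount operand's known-zero/known-one bits, shown
here for shl -- lshr/ashr differ only in how a fixed shift transforms the bits):

    APInt KnownZero = APInt::getAllOnesValue(BitWidth);
    APInt KnownOne  = APInt::getAllOnesValue(BitWidth);
    for (unsigned Amt = 0; Amt != BitWidth; ++Amt) {
      // Skip shift amounts contradicted by what we know about the operand.
      if ((Amt & ShiftKZ) != 0 || (Amt & ShiftKO) != ShiftKO)
        continue;
      // For this particular amount, shl shifts the known bits left and fills
      // the vacated low bits with known zeros; intersect the results.
      KnownZero &= Op0KnownZero.shl(Amt) | APInt::getLowBitsSet(BitWidth, Amt);
      KnownOne  &= Op0KnownOne.shl(Amt);
    }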

This brings us to the second part of this change: Even when we know all of the
bits of a value via computeKnownBits, nothing used to constant-fold the result.
This introduces the necessary code into InstCombine and InstSimplify. I've
added it into both because:

  1. InstCombine won't automatically pick up the associated logic in
     InstSimplify (InstCombine uses InstSimplify, but not via the API that
     passes in the original instruction).

  2. Putting the logic in InstCombine allows the resulting simplifications to become
     part of the iterative worklist.

  3. Putting the logic in InstSimplify allows the resulting simplifications to be
     used by everywhere else that calls SimplifyInstruction (inlining, unrolling,
     and many others).

And this requires a small change to our definition of an ephemeral value so
that we don't break the test case from r246696 (where the icmp feeding the
@llvm.assume is also feeding a br). Under the old definition, the icmp would
not be considered ephemeral (because it is used by the br), but this causes the
assume to remove itself (in addition to simplifying the branch structure), and
it seems more useful to prevent that from happening.

llvm-svn: 251146
2015-10-23 20:37:08 +00:00
Sanjoy Das
d2f20d8e22 [SCEV] Fix stylistic issue in MatchBinaryAddToConst; NFCI
Instead of checking `(FlagsPresent & ExpectedFlags) != 0`, check
`(FlagsPresent & ExpectedFlags) == ExpectedFlags`.  Right now they're
equivalent since `ExpectedFlags` can only be either `FlagNUW` or
`FlagNSW`, but if we ever pass in `ExpectedFlags` as `FlagNUW | FlagNSW`
then checking `(FlagsPresent & ExpectedFlags) != 0` would be wrong.

llvm-svn: 251142
2015-10-23 20:09:57 +00:00
James Molloy
6472ad3426 [BasicAA] Bugfix for r251016
If the loaded type sizes don't match the element type of the sequential type, all bets are off and the addresses may, indeed, overlap.

Surprisingly, this just got caught in one test, on one builder, out of the 30+ builders testing this change. Congratulations go to http://lab.llvm.org:8011/builders/clang-aarch64-lnt/builds/5205.

llvm-svn: 251112
2015-10-23 14:17:03 +00:00
Sanjoy Das
841975a868 [SCEV] Get rid of an unnecessary lambda; NFC
llvm-svn: 251099
2015-10-23 06:57:21 +00:00
Sanjoy Das
c16a88d64c [SCEV] Fix a latent bug in getPreStartForExtend
I could not come up with a way to test this -- I think this bug is latent
today, and will not actually result in a miscompile.

In `getPreStartForExtend`, SCEV constructs `PreStart` as a sum of all of
`SA`'s operands except `Op`.  It also uses `SA`'s no-wrap flags, and
this is problematic because removing an element from an add expression
can make it signed-wrap.  E.g. if `SA` was `(127 + 1 + -1)`, then it
could safely be `<nsw>` (since `sext(127) + sext(1) + sext(-1)` ==
`sext(127 + 1 + -1)`), but `(127 + 1)` (== `PreStart` if `Op` is `-1`)
is not `<nsw>`.

Transferring `<nuw>` from `SA` to `PreStart` is safe, as far as I can
tell.

llvm-svn: 251097
2015-10-23 06:33:47 +00:00
Justin Bogner
55d6455292 LoopPass: Remove redoLoop, it isn't used. NFC
In r251064 I removed a logically unreachable call to `redoLoop`, and
now there aren't any callers of this API at all. Remove the needless
complexity.

llvm-svn: 251067
2015-10-22 21:31:34 +00:00
Justin Bogner
8d6fc38768 LoopPass: Simplify the API for adding a new loop. NFC
The insertLoop() API is only used to add new loops, and has confusing
ownership semantics. Simplify it by replacing it with addLoop().

llvm-svn: 251064
2015-10-22 21:21:32 +00:00
Sanjoy Das
20577e65c0 [SCEV] Commute zero extends through <nuw> additions
llvm-svn: 251052
2015-10-22 19:57:38 +00:00
Sanjoy Das
592ded635e [SCEV] Opportunistically interpret unsigned constraints as signed
Summary:
An unsigned comparison is equivalent to its corresponding signed version
if both of the operands being compared are positive.  Teach SCEV to use
this fact when profitable.
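
A minimal sketch of the idea, using the usual ScalarEvolution helpers (not the
literal patch):

    // An unsigned predicate can be answered by its signed twin once both
    // operands are known non-negative; e.g. for ULT:
    if (SE.isKnownNonNegative(LHS) && SE.isKnownNonNegative(RHS) &&
        SE.isKnownPredicate(ICmpInst::ICMP_SLT, LHS, RHS))
      return true;  // LHS u< RHS follows from LHS s< RHS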

Reviewers: atrick, hfinkel, reames, nlewycky

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D13687

llvm-svn: 251051
2015-10-22 19:57:34 +00:00
Sanjoy Das
7eba4a995b [SCEV] Teach SCEV some axioms about non-wrapping arithmetic
Summary:
 - A s<  (A + C)<nsw> if C >  0
 - A s<= (A + C)<nsw> if C >= 0
 - (A + C)<nsw> s<  A if C <  0
 - (A + C)<nsw> s<= A if C <= 0

Right now `C` needs to be a constant, but we can later generalize it to
be a non-constant if needed.

Reviewers: atrick, hfinkel, reames, nlewycky

Subscribers: sanjoy, llvm-commits

Differential Revision: http://reviews.llvm.org/D13686

llvm-svn: 251050
2015-10-22 19:57:29 +00:00
Sanjoy Das
2c0387ed8b [SCEV] Commute sign extends through nsw additions
Summary: Depends on D13613.

Reviewers: atrick, hfinkel, reames, nlewycky

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D13685

llvm-svn: 251049
2015-10-22 19:57:25 +00:00
Sanjoy Das
fd03ccdaff [SCEV] Mark AddExprs as nsw or nuw if legal
Summary:
This uses `ScalarEvolution::getRange` and not potentially control
dependent `nsw` and `nuw` bits on the arithmetic instruction.
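
Roughly, the range-based check has the following shape (a sketch using APInt
overflow checks rather than the exact in-tree helper; names are illustrative):

    // If the extreme sums of the operands' signed ranges cannot overflow,
    // no pair of values drawn from those ranges can either, so the add is nsw.
    ConstantRange LR = SE.getSignedRange(LHS);
    ConstantRange RR = SE.getSignedRange(RHS);
    bool MaxOvf = false, MinOvf = false;
    (void)LR.getSignedMax().sadd_ov(RR.getSignedMax(), MaxOvf);
    (void)LR.getSignedMin().sadd_ov(RR.getSignedMin(), MinOvf);
    if (!MaxOvf && !MinOvf)
      Flags = ScalarEvolution::setFlags(Flags, SCEV::FlagNSW);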

Reviewers: atrick, hfinkel, nlewycky

Subscribers: llvm-commits, sanjoy

Differential Revision: http://reviews.llvm.org/D13613

llvm-svn: 251048
2015-10-22 19:57:19 +00:00
James Molloy
ee690e2eb4 [GlobalsAA] Loosen an overly conservative bailout
Instead of bailing out when we see loads, analyze them. If we can prove that the loaded-from address must escape, then we can conclude that a load from that address must escape too and therefore cannot alias a non-addr-taken global.

When checking if a Value can alias a non-addr-taken global, if the Value is a LoadInst of a non-global, recurse instead of bailing.

If we can follow a trail of loads up to some base that is captured, we know by inference that all the loads we followed are also captured.
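
The recursion through loads looks roughly like this (heavily simplified;
mayEscapeLeaf is a hypothetical stand-in for the real base-object check):

    // Follow loads back to their base; if the base must escape, every load
    // along the trail must escape as well.
    bool addressMustEscape(Value *Ptr, const DataLayout &DL) {
      Value *Base = GetUnderlyingObject(Ptr, DL);
      if (auto *LI = dyn_cast<LoadInst>(Base))
        return addressMustEscape(LI->getPointerOperand(), DL);
      return mayEscapeLeaf(Base);  // hypothetical leaf check
    }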

llvm-svn: 251017
2015-10-22 13:44:26 +00:00
James Molloy
a87ba0206e [BasicAA] Non-equal indices in a GEP of a SequentialType don't overlap
If the final indices of two GEPs can be proven to not be equal, and
the GEP is of a SequentialType (not a StructType), then the two GEPs
do not alias.

llvm-svn: 251016
2015-10-22 13:28:18 +00:00
James Molloy
94327af5e1 [ValueTracking] Add a new predicate: isKnownNonEqual()
isKnownNonEqual(A, B) returns true if it can be determined that A != B.

At the moment it only knows two facts, that a non-wrapping add of nonzero to a value cannot be that value:

A + B != A [where B != 0, addition is nsw or nuw]

and that contradictory known bits imply two values are not equal.

This patch also hooks this up to InstSimplify; InstSimplify had a peephole for the first fact but not the second, so this teaches InstSimplify a new trick too (alas, no measured performance impact!)
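
A minimal sketch of the two facts (one orientation only; not the full
implementation):

    bool knownNonEqualSketch(Value *A, Value *B, const DataLayout &DL) {
      // Fact 1: A + C != A when the add cannot wrap and C is known non-zero.
      if (auto *Add = dyn_cast<OverflowingBinaryOperator>(B))
        if (Add->getOpcode() == Instruction::Add &&
            (Add->hasNoSignedWrap() || Add->hasNoUnsignedWrap()) &&
            Add->getOperand(0) == A && isKnownNonZero(Add->getOperand(1), DL))
          return true;
      // Fact 2: some bit is known one in A and known zero in B, or vice versa.
      unsigned BW = A->getType()->getScalarSizeInBits();
      APInt AZero(BW, 0), AOne(BW, 0), BZero(BW, 0), BOne(BW, 0);
      computeKnownBits(A, AZero, AOne, DL);
      computeKnownBits(B, BZero, BOne, DL);
      return (AOne & BZero).getBoolValue() || (AZero & BOne).getBoolValue();
    }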

llvm-svn: 251012
2015-10-22 13:18:42 +00:00
Chandler Carruth
2d5e031754 [AA] Enhance the new AliasAnalysis infrastructure with an optional
"external" AA wrapper pass.

This is a generic hook that can be used to thread custom code into the
primary AAResultsWrapperPass for the legacy pass manager in order to
allow it to merge external AA results into the AA results it is
building. It does this by threading in a raw callback and so it is
*very* powerful and should serve almost any use case I have come up with
for extending the set of alias analyses used. The only thing not well
supported here is using a *different order* of alias analyses. That form
of extension *is* supportable with the new pass manager, and I can make
the callback structure here more elaborate to support it in the legacy
pass manager if this is a critical use case that people are already
depending on, but the only use cases I have heard of thus far should be
reasonably satisfied by this simpler extension mechanism.
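
For out-of-tree users, wiring in a custom result ends up looking roughly like
this sketch (MyAAWrapperPass and its getResult() are stand-ins for whatever the
external project provides):

    // Register a callback that merges an external AA result into the
    // AAResults being built by AAResultsWrapperPass.
    ImmutablePass *createMyExternalAAWrapperPass() {
      return createExternalAAWrapperPass(
          [](Pass &P, Function &F, AAResults &AAR) {
            if (auto *WrapperPass = P.getAnalysisIfAvailable<MyAAWrapperPass>())
              AAR.addAAResult(WrapperPass->getResult());
          });
    }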

It is hard to test this using normal facilities (the built-in AAs don't
use this for obvious reasons) so I've written a fairly extensive set of
custom passes in the alias analysis unit test that should be an
excellent test case because it models the out-of-tree users: it adds
a totally custom AA to the system. This should also serve as
a reasonably good example and guide for out-of-tree users to follow in
order to rig up their existing alias analyses.

No support in opt for commandline control is provided here however. I'm
really unhappy with the kind of contortions that would be required to
support that. It would fully re-introduce the analysis group
self-recursion kind of patterns. =/

I've heard from out-of-tree users that this will unblock their use cases
with extending AAs on top of the new infrastructure and let us retain
the new analysis-group-free-world.

Differential Revision: http://reviews.llvm.org/D13418

llvm-svn: 250894
2015-10-21 12:15:19 +00:00
James Molloy
570047b4e8 [GlobalsAA] Fix a really horrible iterator invalidation bug
We were keeping a reference to an object in a DenseMap and then mutating it. At the end of the function we were attempting to clone that reference into other keys in the DenseMap, but DenseMap may well decide to resize its hashtable, which would invalidate the reference!
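
The hazardous pattern, in miniature (hypothetical key/value types, not the
actual GlobalsAA code):

    #include "llvm/ADT/DenseMap.h"

    struct Info { unsigned Count = 0; };

    void hazard() {
      llvm::DenseMap<unsigned, Info> Map;
      Info &Ref = Map[1];   // reference into the map's storage
      Ref.Count = 42;       // mutating through the reference is fine...
      Map[2] = Ref;         // ...but this operator[] may grow and rehash the
                            // table, invalidating Ref before the copy happens
      Info Copy = Map[1];   // safer: copy the value out first
      Map[3] = Copy;
    }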

It took an extremely complex testcase to catch this - many thanks to Zhendong Su for catching it in PR25225.

This fixes PR25225.

llvm-svn: 250692
2015-10-19 08:54:59 +00:00
Elena Demikhovsky
2e0208e770 Removed parameter "Consecutive" from isLegalMaskedLoad() / isLegalMaskedStore().
Originally I planned to use the same interface for masked gather/scatter and set isConsecutive to "false" in this case.

Now I'm implementing masked gather/scatter and see that the interface is inconvenient. I want to add interfaces isLegalMaskedGather() / isLegalMaskedScatter() instead of using the "Consecutive" parameter in the existing interfaces.

Differential Revision: http://reviews.llvm.org/D13850

llvm-svn: 250686
2015-10-19 07:43:38 +00:00
Sanjoy Das
b98ecd3f49 [SCEV] Fix whitespace issues and remove extra braces; NFC
llvm-svn: 250636
2015-10-18 00:29:27 +00:00
Sanjoy Das
50512a9a28 [SCEV] Use std::all_of and std::any_of; NFC
llvm-svn: 250635
2015-10-18 00:29:23 +00:00
Sanjoy Das
08660450fd [SCEV] Use auto where it helps remove line breaks; NFC
llvm-svn: 250634
2015-10-18 00:29:20 +00:00
Sanjoy Das
c8d2bcdfc3 [SCEV] Use range for loops; NFC
llvm-svn: 250633
2015-10-18 00:29:16 +00:00
Manman Ren
8fc1a288a0 Recommit r250345, it was reverted in r250366 to investigate a bot failure.
Our internal bot is still red after r250366.

llvm-svn: 250415
2015-10-15 14:59:40 +00:00
Aaron Ballman
8601c2e902 Silencing a -Wtype-limits warning; an unsigned value will always be >= 0; NFC.
llvm-svn: 250404
2015-10-15 13:55:43 +00:00
Manman Ren
f2865ff590 Temporarily revert r250345 to sort out bot failure.
With r250345 and r250343, we start to observe the following failure
when bootstrap clang with lto and pgo:
PHI node entries do not match predecessors!
  %.sroa.029.3.i = phi %"class.llvm::SDNode.13298"* [ null, %30953 ], [ null, %31017 ], [ null, %30998 ], [ null, %_ZN4llvm8dyn_castINS_14ConstantSDNodeENS_7SDValueEEENS_10cast_rettyIT_T0_E8ret_typeERS5_.exit.i.1804 ], [ null, %30975 ], [ null, %30991 ], [ null, %_ZNK4llvm3EVT13getScalarTypeEv.exit.i.1812 ], [ %..sroa.029.0.i, %_ZN4llvm11SmallVectorIiLj8EED1Ev.exit.i.1826 ], !dbg !451895
label %30998
label %_ZNK4llvm3EVTeqES0_.exit19.thread.i
LLVM ERROR: Broken function found, compilation aborted!

I will re-commit this if the bot does not recover.

llvm-svn: 250366
2015-10-15 04:58:24 +00:00
Cong Hou
07a3f21d1f Update the branch weight metadata in JumpThreading pass.
Currently in JumpThreading pass, the branch weight metadata is not updated after CFG modification. Consider the jump threading on PredBB, BB, and SuccBB. After jump threading, the weight on BB->SuccBB should be adjusted as some of it is contributed by the edge PredBB->BB, which doesn't exist anymore. This patch tries to update the edge weight in metadata on BB->SuccBB by scaling it by 1 - Freq(PredBB->BB) / Freq(BB->SuccBB).
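
With purely illustrative numbers (none taken from the patch):

    // If profiling said Freq(PredBB->BB) = 30 and Freq(BB->SuccBB) = 100,
    // the surviving BB->SuccBB edge keeps 1 - 30/100 = 70% of its old weight.
    uint64_t OldWeight = 80;
    double Scale = 1.0 - 30.0 / 100.0;                              // 0.7
    uint64_t NewWeight = static_cast<uint64_t>(OldWeight * Scale);  // 56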

This is the third attempt to submit this patch, while the first two led to failures in some FDO tests. After investigation, it is the edge weight normalization that caused those failures. In this patch the edge weight normalization is fixed so that there is no zero weight in the output and the sum of all weights can fit in 32-bit integer. Several unit tests are added.

Differential revision: http://reviews.llvm.org/D10979

llvm-svn: 250345
2015-10-14 23:14:17 +00:00
Philip Reames
1bb1df8eb7 Tighten known bits for ctpop based on zero input bits
This is a cleaned up patch from the one written by John Regehr based on the findings of the Souper superoptimizer.

The basic idea here is that input bits that are known zero reduce the maximum count that the intrinsic could return. We know that the number of bits required to represent a particular count is at most log2(N)+1.
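
A sketch of the bound (illustrative names, and ignoring the corner case where
every input bit is known zero):

    // At most MaxPop input bits can be set, so the population count fits in
    // Log2_32(MaxPop) + 1 bits; everything above that is known zero.
    unsigned MaxPop = BitWidth - InputKnownZero.countPopulation();
    unsigned LowBits = Log2_32(MaxPop) + 1;
    KnownZero |= APInt::getHighBitsSet(BitWidth, BitWidth - LowBits);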

Differential Revision: http://reviews.llvm.org/D13253

llvm-svn: 250338
2015-10-14 22:42:12 +00:00
Manman Ren
0a63668758 Revert r250204 and r250240 due to bot failure. We failed to build PGO-ed clang.
llvm-svn: 250264
2015-10-14 03:04:03 +00:00
Kostya Serebryany
b4d1390441 [asan] Disabling speculative loads under asan. Patch by Mike Aizatsky
llvm-svn: 250259
2015-10-14 00:21:05 +00:00
Sanjoy Das
701aed0a6e [SCEV] Use SCEV::isAllOnesValue directly; NFC.
Instead of `dyn_cast`ing to `SCEVConstant` and checking the contained
`ConstantInt`.

llvm-svn: 250251
2015-10-13 23:28:31 +00:00
Cong Hou
33d543ef55 Update the branch weight metadata in JumpThreading pass.
Currently in JumpThreading pass, the branch weight metadata is not updated after CFG modification. Consider the jump threading on PredBB, BB, and SuccBB. After jump threading, the weight on BB->SuccBB should be adjusted as some of it is contributed by the edge PredBB->BB, which doesn't exist anymore. This patch tries to update the edge weight in metadata on BB->SuccBB by scaling it by 1 - Freq(PredBB->BB) / Freq(BB->SuccBB).

Differential revision: http://reviews.llvm.org/D10979

llvm-svn: 250204
2015-10-13 18:43:10 +00:00
James Molloy
7b355004c3 [GlobalsAA] Don't assume anything about functions that may be overridden
Weak linkage and friends allow a symbol to be overridden outside the
code generator's model, so GlobalsAA shouldn't assume that anything it
can compute about such a symbol is valid.

llvm-svn: 250156
2015-10-13 10:43:33 +00:00
Manman Ren
3f113087cd Revert 250089 due to bot failure. It failed when building clang itself with PGO.
llvm-svn: 250145
2015-10-13 03:38:02 +00:00
Sanjoy Das
689632fa53 [SCEV] Put some utilites in the ScalarEvolution class
In a later commit, `SplitBinaryAdd` will be used outside `IsConstDiff`,
so lift that out.  And lift out `IsConstDiff` as
`computeConstantDifference` to keep things clean and to avoid playing
C++ access specifier games.

NFC.

llvm-svn: 250143
2015-10-13 02:53:27 +00:00
Cong Hou
664f8a8312 Update the branch weight metadata in JumpThreading pass.
In JumpThreading pass, the branch weight metadata is not updated after CFG modification. Consider the jump threading on PredBB, BB, and SuccBB. After jump threading, the weight on BB->SuccBB should be adjusted as some of it is contributed by the edge PredBB->BB, which doesn't exist anymore. This patch tries to update the edge weight in metadata on BB->SuccBB by scaling it by 1 - Freq(PredBB->BB) / Freq(BB->SuccBB). 

Differential revision: http://reviews.llvm.org/D10979

llvm-svn: 250089
2015-10-12 19:44:08 +00:00
James Molloy
629826974b [LoopVectorize] Shrink integer operations into the smallest type possible
C semantics force sub-int-sized values (e.g. i8, i16) to be promoted to int
type (e.g. i32) whenever arithmetic is performed on them.

For targets with native i8 or i16 operations, usually InstCombine can shrink
the arithmetic type down again. However InstCombine refuses to create illegal
types, so for targets without i8 or i16 registers, the lengthening and
shrinking remains.

Most SIMD ISAs (e.g. NEON) however support vectors of i8 or i16 even when
their scalar equivalents do not, so during vectorization it is important to
remove these lengthens and truncates when deciding the profitability of
vectorization.

The algorithm this uses starts at truncs and icmps, trawling their use-def
chains until they terminate or instructions outside the loop are found (or
unsafe instructions like inttoptr casts are found). If the use-def chains
starting from different root instructions (truncs/icmps) meet, they are
unioned. The demanded bits of each node in the graph are ORed together to form
an overall mask of the demanded bits in the entire graph. The minimum bitwidth
that graph can be truncated to is the bitwidth minus the number of leading
zeroes in the overall mask.
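
The final width computation for one unioned graph is essentially the following
sketch (DemandedMask being the OR of the demanded bits of every node in the
graph):

    unsigned OrigBits = DemandedMask.getBitWidth();
    unsigned MinBits  = OrigBits - DemandedMask.countLeadingZeros();
    // Round up to a power of two so the result maps onto a sensible element type.
    unsigned NewBits  = isPowerOf2_32(MinBits) ? MinBits
                                               : unsigned(NextPowerOf2(MinBits));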

The intention is that this algorithm should "first do no harm", so it will
never insert extra cast instructions. This is why the use-def graphs are
unioned, so that subgraphs with different minimum bitwidths do not need casts
inserted between them.

This algorithm works hard to reduce compile time impact. DemandedBits are only
queried if there are extends of illegal types and if a truncate to an illegal
type is seen. In the general case, this results in a simple linear scan of the
instructions in the loop.

No non-noise compile time impact was seen on a clang bootstrap build.

llvm-svn: 250032
2015-10-12 12:34:45 +00:00
Tobias Grosser
de0082bcae SCEV: Allow simple AddRec * Parameter products in delinearization
This patch also allows the -delinearize pass to delinearize expressions that do
not have an outermost SCEVAddRec expression. The SCEV::delinearize
infrastructure allowed this since r240952, but the -delinearize pass was not
updated yet.

llvm-svn: 250018
2015-10-12 08:02:00 +00:00
Duncan P. N. Exon Smith
c56daa1083 Analysis: Remove implicit ilist iterator conversions
Remove implicit ilist iterator conversions from LLVMAnalysis.

I came across something really scary in `llvm::isKnownNotFullPoison()`
which relied on `Instruction::getNextNode()` being completely broken
(not surprising, but scary nevertheless).  This function is documented
(and coded) to return `nullptr` when it gets to the sentinel, but with
an `ilist_half_node` as a sentinel, the sentinel check looks into some
other memory and we don't recognize we've hit the end.

Rooting out these scary cases is the reason I'm removing the implicit
conversions before doing anything else with `ilist`; I'm not at all
surprised that clients rely on badness.

I found another scary case -- this time, not relying on badness, just
bad (but I guess getting lucky so far) -- in
`ObjectSizeOffsetEvaluator::compute_()`.  Here, we save out the
insertion point, do some things, and then restore it.  Previously, we
let the iterator auto-convert to `Instruction*`, and then set it back
using the `Instruction*` version:

    Instruction *PrevInsertPoint = Builder.GetInsertPoint();

    /* Logic that may change insert point */

    if (PrevInsertPoint)
      Builder.SetInsertPoint(PrevInsertPoint);

The check for `PrevInsertPoint` doesn't protect correctly against bad
accesses.  If the insertion point has been set to the end of a basic
block (i.e., `SetInsertPoint(SomeBB)`), then `GetInsertPoint()` returns
an iterator pointing at the list sentinel.  The version of
`SetInsertPoint()` that's getting called will then call
`PrevInsertPoint->getParent()`, which explodes horribly.  The only
reason this hasn't blown up is that it's fairly unlikely the builder is
adding to the end of the block; usually, we're adding instructions
somewhere before the terminator.
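
A shape that avoids the trap is to keep the insertion point in its iterator
form via the builder's save/restore helpers (a sketch of the safer pattern):

    IRBuilderBase::InsertPoint PrevInsertPoint = Builder.saveIP();

    /* Logic that may change insert point */

    Builder.restoreIP(PrevInsertPoint);  // valid even when the saved point
                                         // was the end of a basic block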

llvm-svn: 249925
2015-10-10 00:53:03 +00:00
Reid Kleckner
51ced6582b [WinEH] Delete the old landingpad implementation of Windows EH
The new implementation works at least as well as the old implementation
did.

Also delete the associated preparation tests. They don't exercise
interesting corner cases of the new implementation. All the codegen
tests of the EH tables have already been ported.

llvm-svn: 249918
2015-10-09 23:34:53 +00:00
Artur Pilipenko
0407210d7b ValueTracking: use getAlignment in isAligned
Reviewed By: reames

Differential Revision: http://reviews.llvm.org/D13517

llvm-svn: 249841
2015-10-09 15:58:26 +00:00
Sanjoy Das
b0ed8472d2 [SCEV] Call StrengthenNoWrapFlags after GroupByComplexity; NFCI
The current implementation of `StrengthenNoWrapFlags` is agnostic to the
order of `Ops`, so this commit should not change anything semantic.  An
upcoming change will make `StrengthenNoWrapFlags` sensitive to the order
of `Ops`.

llvm-svn: 249802
2015-10-09 02:44:45 +00:00
Sanjoy Das
1f293024d7 [SCEV] Bring some methods up to coding style; NFC
- Start methods with lower case
- Reflow a comment
- Delete header comment repeated in .cpp file

llvm-svn: 249716
2015-10-08 18:46:59 +00:00
Sanjoy Das
b71c2563ef [SCEV] Remove comment repeated in cpp file; NFC
llvm-svn: 249713
2015-10-08 18:28:42 +00:00
Sanjoy Das
277ce4cdb4 [SCEV] Pick backedge values for phi nodes correctly
Summary:
`getConstantEvolutionLoopExitValue` and `ComputeExitCountExhaustively`
assumed all phi nodes in the loop header have the same order of incoming
values.  This is not correct, and this commit changes
`getConstantEvolutionLoopExitValue` and `ComputeExitCountExhaustively`
to lookup the backedge value of a phi node using the loop's latch block.
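
The fix amounts to looking values up by predecessor block rather than by
operand position -- a sketch (null checks omitted):

    // For a phi PN in the header of loop L:
    BasicBlock *Latch = L->getLoopLatch();
    Value *BackedgeVal = PN->getIncomingValueForBlock(Latch);
    Value *StartVal = PN->getIncomingValueForBlock(L->getLoopPredecessor());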

Unfortunately, there is still some code duplication between
`getConstantEvolutionLoopExitValue` and `ComputeExitCountExhaustively`.
At some point in the future we should extract out a helper class /
method that can evolve constant evolution phi nodes across iterations.

Fixes PR25060.  Thanks to Mattias Eriksson for the spot-on analysis!

Depends on D13457.

Reviewers: atrick, hfinkel

Subscribers: materi, llvm-commits

Differential Revision: http://reviews.llvm.org/D13458

llvm-svn: 249712
2015-10-08 18:28:36 +00:00
Sanjay Patel
172d0b6bb7 [ValueTracking] teach computeKnownBits that a fabs() clears sign bits
This was requested in D13076: if we're going to canonicalize to fabs(), ValueTracking
should know that fabs() clears sign bits.

In this patch (as in D13076), we're not handling vectors yet even though computeKnownBits'
fabs() case itself should be vector-ready via the splat in this patch. 
Fixing this will require follow-on patches to correct other logic that uses 'getScalarType'.
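
The fact itself is tiny; in known-bits terms it boils down to the following
sketch, independent of exactly where the case lives:

    // fabs() forces the IEEE sign bit to zero, so that bit is known zero
    // regardless of the operand (per element, once vectors are handled).
    KnownZero.setBit(BitWidth - 1);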

Differential Revision: http://reviews.llvm.org/D13222

llvm-svn: 249701
2015-10-08 16:56:55 +00:00
James Molloy
2ccf761faa Compute demanded bits for icmp instructions
Instead of bailing out when we see an icmp, we can instead at least
say that if the upper bits of both operands are known zero, they are
not demanded. This doesn't help with signed comparisons, but it's at
least better than bailing out.
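
In sketch form (illustrative names):

    // Leading bits known zero in *both* operands cannot affect the compare,
    // so only the lower "active" bits are demanded.
    unsigned Active = std::max(BitWidth - LHSKnownZero.countLeadingOnes(),
                               BitWidth - RHSKnownZero.countLeadingOnes());
    APInt Demanded = APInt::getLowBitsSet(BitWidth, Active);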

llvm-svn: 249687
2015-10-08 12:40:06 +00:00
James Molloy
5da32b5e20 Treat Mul just like Add and Subtract
Like adds and subtracts, muls ripple only to the left, so we can use
the same logic.

While we're here, add a print method to DemandedBits so it can be used
with -analyze, which we'll use in the testcase.

llvm-svn: 249686
2015-10-08 12:39:59 +00:00
James Molloy
496f624786 Make demanded bits lazy
The algorithm itself is still eager, but it doesn't get run until a
query function is called. This greatly reduces the compile-time impact
of requiring DemandedBits when at runtime it is not often used.
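
The shape of the change is the familiar memoized-analysis pattern -- a sketch
with illustrative member names:

    APInt DemandedBits::getDemandedBits(Instruction *I) {
      if (!Analyzed) {
        performAnalysis();   // the existing eager algorithm, run once on demand
        Analyzed = true;
      }
      auto Found = AliveBits.find(I);
      if (Found != AliveBits.end())
        return Found->second;
      // Instructions the analysis never visited: every bit may be demanded.
      return APInt::getAllOnesValue(I->getType()->getScalarSizeInBits());
    }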

NFCI.

llvm-svn: 249685
2015-10-08 12:39:50 +00:00