mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-11-24 03:33:20 +01:00
Commit Graph

726 Commits

Author SHA1 Message Date
Silviu Baranga
a5bff005e9 Add a test of a regression discovered during testing of r241673
Summary:
We were missing a corner case where DepCands was not available,
but we were using DepCands to compute the checking pointer
groups.

This adds a test for that regression.

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D11068

llvm-svn: 241818
2015-07-09 16:40:25 +00:00
Silviu Baranga
3a8eca654a Don't rely on the DepCands iteration order when constructing checking pointer groups
Summary:
The checking pointer group construction algorithm relied on the iteration on DepCands.
We would need the same leaders across runs and the same iteration order over the underlying std::set for determinism.

This changes the algorithm to process the pointers in the order in which they were added to the runtime check, which is deterministic.
We need to update the tests, since the order in which pointers appear has changed.

No new tests were added, since it is impossible to test for non-determinism.

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D11064

llvm-svn: 241809
2015-07-09 15:18:25 +00:00
Adam Nemet
24c6a55664 [LAA] Revert a small part of r239295
This commit ([LAA] Fix estimation of number of memchecks) regressed the
logic a bit.  We shouldn't quit the analysis if we encounter a pointer
without known bounds *unless* we actually need to emit a memcheck for
it.

The original code was using NumComparisons which is now computed
differently.  Instead I compute NeedRTCheck from NumReadPtrChecks and
NumWritePtrChecks.

As a side note, I find the separation of NeedRTCheck and CanDoRT
confusing, so I will try to merge them in a follow-up patch.
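
For illustration, a hypothetical reconstruction of that computation (not the committed code):

  // Hypothetical reconstruction, illustrative only: a runtime check is
  // needed only when some pair of pointers must be compared and at
  // least one pointer in the pair is written.
  bool needRTCheck(unsigned NumReadPtrChecks, unsigned NumWritePtrChecks) {
    return NumWritePtrChecks >= 2 ||
           (NumWritePtrChecks == 1 && NumReadPtrChecks >= 1);
  }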

llvm-svn: 241756
2015-07-08 22:58:48 +00:00
Silviu Baranga
f1d9b77877 [LAA] Merge memchecks for accesses separated by a constant offset
Summary:
Often filter-like loops will do memory accesses that are
separated by constant offsets. In these cases it is
common that we will exceed the threshold for the
allowable number of checks.

However, it should be possible to merge such checks,
since a check of any interval against two other intervals separated
by a constant offset (a,b), (a+c, b+c) is equivalent to
a check against (a, b+c), as long as (a,b) and (a+c, b+c) overlap.
Assuming the loop will be executed for a sufficient number of
iterations, this will be true. If not true, checking against
(a, b+c) is still safe (although not equivalent).

As long as there are no dependencies between two accesses,
we can merge their checks into a single one. We use this
technique to construct groups of accesses, and then check
the intervals associated with the groups instead of
checking the accesses directly.
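
As a standalone sketch of the grouping idea (illustrative only; names and representation are mine):

  #include <algorithm>
  #include <cstdint>
  #include <utility>
  #include <vector>

  using Interval = std::pair<int64_t, int64_t>; // [start, end)

  // Illustrative only, not the committed algorithm: fold overlapping
  // per-pointer intervals into groups, so one check per group interval
  // replaces one check per access.
  std::vector<Interval> mergeIntervals(std::vector<Interval> Ivs) {
    std::sort(Ivs.begin(), Ivs.end());
    std::vector<Interval> Groups;
    for (const Interval &I : Ivs) {
      if (!Groups.empty() && I.first <= Groups.back().second)
        Groups.back().second = std::max(Groups.back().second, I.second);
      else
        Groups.push_back(I);
    }
    return Groups;
  }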

Reviewers: anemet

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D10386

llvm-svn: 241673
2015-07-08 09:16:33 +00:00
Simon Pilgrim
a825efbf95 [X86][SSE] Vectorized i64 uniform constant SRA shifts
This patch adds vectorization support for uniform constant i64 arithmetic shift right operators.

Differential Revision: http://reviews.llvm.org/D9645

llvm-svn: 241514
2015-07-06 22:35:19 +00:00
Sanjoy Das
2e28d11350 [LazyCallGraph] Port test case from r240039 to LCG.
Summary:
r240039 adds a test case to check that CallGraph does the right thing
with respect to non-leaf intrinsics like statepoint and patchpoint.
This ports the same test case to LazyCallGraph.  LazyCallGraph already
does the right thing with respect to escaping function pointers so there
is no need to change any code.

Reviewers: chandlerc

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D10582

llvm-svn: 241226
2015-07-02 02:03:58 +00:00
Adam Nemet
8912979930 [LAA] Try to prove non-wrapping of pointers if SCEV cannot
Summary:
Scalar evolution does not propagate the non-wrapping flags to values
that are derived from a non-wrapping induction variable because
the non-wrapping property could be flow-sensitive.

This change is a first attempt to establish the non-wrapping property in
some simple cases.  The main idea is to look through the operations
defining the pointer.  As long as we arrive at a non-wrapping AddRec via
a small chain of non-wrapping instructions, the pointer should not wrap
either.

I believe that this essentially is what Andy described in
http://article.gmane.org/gmane.comp.compilers.llvm.cvs/220731 as the way
forward.
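
A toy sketch of that walk, using hypothetical node types rather than the real SCEV/IR classes:

  // Toy node types, not the real SCEV/IR classes: accept the pointer
  // only if a short chain of non-wrapping operations ends in a
  // non-wrapping AddRec.
  struct Expr {
    enum Kind { AddRecNoWrap, NoWrapOp, Other } K;
    const Expr *Operand = nullptr; // single-operand chain for simplicity
  };

  bool provedNoWrap(const Expr *E, unsigned Depth = 0) {
    if (Depth > 4)                    // keep the chain small
      return false;
    if (E->K == Expr::AddRecNoWrap)   // reached a non-wrapping AddRec
      return true;
    if (E->K == Expr::NoWrapOp)       // look through a non-wrapping op
      return provedNoWrap(E->Operand, Depth + 1);
    return false;                     // anything else: give up
  }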

Reviewers: aschwaighofer, nadav, sanjoy, atrick

Reviewed By: atrick

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D10472

llvm-svn: 240798
2015-06-26 17:25:43 +00:00
Simon Pilgrim
76d548e80e [X86][SSE][CostModel] Added full set of sitofp/uitofp costings for SSE2/AVX/AVX2/AVX512F.
Merged separate (but equivalent) SSE2/AVX512F tests.

Removed codegen tests since these are already done better in test/CodeGen/X86.

The actual cost values still need to be updated to match recent codegen improvements.

llvm-svn: 240219
2015-06-20 14:58:01 +00:00
Sanjoy Das
3200fa8f55 [CallGraph] Give -print-callgraph a stable printing order.
Summary:
Since FunctionMap has llvm::Function pointers as keys, the order in
which the traversal happens can differ from run to run, causing spurious
FileCheck failures.  Have CallGraph::print sort the CallGraphNodes by
name before printing them.
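
A minimal sketch of the fix, with a toy Node type standing in for CallGraphNode:

  #include <algorithm>
  #include <string>
  #include <vector>

  // Toy sketch: copy the map values into a vector, sort by function
  // name, then print, so the output no longer depends on pointer-keyed
  // map iteration order.
  struct Node { std::string Name; /* edges, etc. */ };

  void printSorted(std::vector<Node *> Nodes) {
    std::sort(Nodes.begin(), Nodes.end(),
              [](const Node *L, const Node *R) { return L->Name < R->Name; });
    // emit each node in the now-deterministic order ...
  }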

Reviewers: bogner, chandlerc

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D10575

llvm-svn: 240191
2015-06-19 23:20:31 +00:00
Simon Pilgrim
f887599c07 [X86][SSE][CostModel] Fixed uitofp/sitofp cost target tests to specify sse2/avx2/avx512f directly instead of via a cpu model.
llvm-svn: 240062
2015-06-18 21:26:01 +00:00
Sanjoy Das
7827c2f533 [CallGraph] Teach the CallGraph about non-leaf intrinsics.
Summary:
Currently intrinsics don't affect the creation of the call graph.
This is not accurate with respect to statepoint and patchpoint
intrinsics -- these do call (or invoke) LLVM level functions.

This change fixes this inconsistency by adding a call to the external
node for call sites that call these non-leaf intrinsics.  This, coupled
with the fact that these intrinsics also escape the function pointer
they call, gives us a conservatively correct call graph.
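
A minimal sketch of the classification (by name here for readability; the real code checks intrinsic IDs):

  #include <string>

  // Illustrative only: these intrinsics call or invoke LLVM-level
  // functions, so their call sites get an edge to the external
  // calling node.
  bool isNonLeafIntrinsic(const std::string &Name) {
    return Name == "llvm.experimental.gc.statepoint" ||
           Name == "llvm.experimental.patchpoint.void" ||
           Name == "llvm.experimental.patchpoint.i64";
  }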

Reviewers: reames, chandlerc, atrick, pgavlin

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D10526

llvm-svn: 240039
2015-06-18 19:28:26 +00:00
David Majnemer
c8b1f095a3 Move the personality function from LandingPadInst to Function
The personality routine currently lives in the LandingPadInst.

This isn't desirable because:
- All LandingPadInsts in the same function must have the same
  personality routine.  This means that each LandingPadInst beyond the
  first has an operand which produces no additional information.

- There is ongoing work to introduce EH IR constructs other than
  LandingPadInst.  Moving the personality routine off of any one
  particular Instruction and onto the parent function seems a lot better
than having N different places a personality function can sneak onto an
  exceptional function.

Differential Revision: http://reviews.llvm.org/D10429

llvm-svn: 239940
2015-06-17 20:52:32 +00:00
Diego Novillo
8ef76ca78f Add documentation for new backedge mass propagation in irregular loops.
Tweak test cases and rename headerIndexFor -> getHeaderIndex.

llvm-svn: 239915
2015-06-17 16:28:22 +00:00
Diego Novillo
8e90e86295 Fix PR 23525 - Separate header mass propagation in irregular loops.
Summary:
When propagating mass through irregular loops, the mass flowing through
each loop header may not be equal. This was causing wrong frequencies
to be computed for irregular loop headers.

Fixed by keeping track of masses flowing through each of the headers in
an irregular loop. To do this, we now keep track of per-header backedge
weights. After the loop mass is distributed through the loop, the
backedge weights are used to re-distribute the loop mass to the loop
headers.

Since each backedge will have a mass proportional to the different
branch weights, the loop headers will end up with a more accurate
weight distribution (as opposed to the current distribution, which assumes
that every loop header is the same).
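
A small sketch of the redistribution step (my illustration; doubles stand in for the branch-mass type):

  #include <cstdint>
  #include <vector>

  // Hypothetical illustration: split the distributed loop mass back
  // across the headers in proportion to the backedge weight each
  // header receives.
  std::vector<double>
  splitLoopMass(double LoopMass, const std::vector<uint64_t> &BackedgeWeights) {
    uint64_t Total = 0;
    for (uint64_t W : BackedgeWeights)
      Total += W;
    std::vector<double> HeaderMass;
    for (uint64_t W : BackedgeWeights)
      HeaderMass.push_back(Total ? LoopMass * W / Total : 0.0);
    return HeaderMass;
  }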

Reviewers: dexonsmith

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D10348

llvm-svn: 239843
2015-06-16 19:10:58 +00:00
Jingyue Wu
88ed36369b [ValueTracking] do not overwrite analysis results already computed
Summary:
ValueTracking used to overwrite the analysis results computed from
assumes and dominating conditions. This patch fixes this issue.

Test Plan: test/Analysis/ValueTracking/assume.ll

Reviewers: hfinkel, majnemer

Reviewed By: majnemer

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D10283

llvm-svn: 239718
2015-06-15 05:46:29 +00:00
Simon Pilgrim
c3425b72b9 [X86][SSE] Vectorized i8 and i16 shift operators
This patch ensures that SHL/SRL/SRA shifts for i8 and i16 vectors avoid scalarization. It builds on the existing i8 SHL vectorized implementation of moving the shift bits up to the sign bit position and separating the 4, 2 & 1 bit shifts with several improvements:

1 - SSE41 targets can use (v)pblendvb directly with the sign bit instead of performing a comparison to feed into a VSELECT node.
2 - pre-SSE41 targets were masking + comparing with a 0x80 constant - we avoid this by using the fact that a set sign bit means a negative integer, which can be compared against zero and then fed into VSELECT, avoiding the need for a constant mask (zero generation is much cheaper).
3 - SRA i8 needs to be unpacked to the upper byte of an i16 so that the i16 psraw instruction can be used correctly for sign extension - we have to do more work than for SHL/SRL, but perf tests indicate that this is still beneficial.

The i16 implementation is similar but simpler than for i8 - we have to do 8, 4, 2 & 1 bit shifts but less shift masking is involved. SSE41 use of (v)pblendvb requires that the i16 shift amount is splatted to both bytes, however.

Tested on SSE2, SSE41 and AVX machines.
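
As a scalar model of one byte lane (an illustration of the selection scheme, not the DAG lowering):

  #include <cstdint>

  // Scalar model of the SHL algorithm for one byte lane: move the 3
  // shift-amount bits up to the sign-bit position, then apply the 4-,
  // 2- and 1-bit steps, each selected by the current sign bit (the
  // per-byte role that (v)pblendvb plays).
  uint8_t shlLaneModel(uint8_t V, uint8_t Amt) {
    uint8_t A = Amt << 5; // shift bits now start at the sign bit
    if (A & 0x80) V <<= 4;
    A <<= 1;
    if (A & 0x80) V <<= 2;
    A <<= 1;
    if (A & 0x80) V <<= 1;
    return V;
  }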

Differential Revision: http://reviews.llvm.org/D9474

llvm-svn: 239509
2015-06-11 07:46:37 +00:00
Artur Pilipenko
c17a17c081 Minor refactoring of GEP handling in isDereferenceablePointer
For GEP instructions, isDereferenceablePointer checks that all indices are constant and within bounds. Replace this index calculation logic with a call to accumulateConstantOffset. Separated from http://reviews.llvm.org/D9791

Reviewed By: sanjoy

Differential Revision: http://reviews.llvm.org/D9874

llvm-svn: 239299
2015-06-08 11:58:13 +00:00
Silviu Baranga
e570ae0867 [LAA] Fix estimation of number of memchecks
Summary:
We need to add a runtime memcheck for each pair of accesses (x,y) where at
least one of x and y is a write.

Assuming we have w writes and r reads, this number is currently estimated as
w * (w+r-1). This estimate counts (write,write) pairs twice and therefore
overestimates the number of checks required.

This change adds a getNumberOfChecks method to RuntimePointerCheck, which
will count the number of runtime checks needed (similar in implementation to
needsAnyChecking) and uses it to produce the correct number of runtime checks.
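
Concretely, a hypothetical version of the corrected count for w writes and r reads:

  #include <cstdint>

  // Hypothetical illustration (the real method walks the pointer list
  // and skips pairs that need no check): count unordered pairs (x, y)
  // with at least one write.  With w writes and r reads this is
  // w*(w-1)/2 + w*r, below the old w*(w+r-1) estimate, which counted
  // every (write, write) pair twice.
  uint64_t numberOfChecks(uint64_t W, uint64_t R) {
    return W * (W - 1) / 2 + W * R;
  }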

Test Plan:
llvm test suite
spec2k
spec2k6

Performance results: no changes observed (not surprising, since the formula for 1 writer is basically the same, which would cover most cases - at least with the current check limit).

Reviewers: anemet

Reviewed By: anemet

Subscribers: mzolotukhin, llvm-commits

Differential Revision: http://reviews.llvm.org/D10217

llvm-svn: 239295
2015-06-08 10:27:06 +00:00
Hao Liu
0ff9b2806d [LoopAccessAnalysis] Teach LAA to check the memory dependence between strided accesses.
Differential Revision: http://reviews.llvm.org/D9368

llvm-svn: 239285
2015-06-08 04:48:37 +00:00
Jingyue Wu
53e4efe10e [DependenceAnalysis] Extend unifySubscriptType for handling coupled subscript groups.
Summary:
Following an earlier commit to DependenceAnalysis.cpp by jingyue (r222100), the type of all subscripts in a coupled group needs to be the same, since constraints from one subscript may be propagated to another during testing. During testing, new SCEVs may be created, and the operands for these need to be of the same type.
This patch extends unifySubscriptType() to work on lists of subscript pairs, ensuring a common extended type for all of them.
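
A small sketch of the unification step, with bit widths standing in for types:

  #include <algorithm>
  #include <utility>
  #include <vector>

  // Sketch with bit widths standing in for types: pick the widest
  // width over all coupled subscript pairs; every subscript is then
  // sign-extended to that common width.
  unsigned widestWidth(const std::vector<std::pair<unsigned, unsigned>> &Pairs) {
    unsigned W = 0;
    for (const auto &P : Pairs)
      W = std::max({W, P.first, P.second});
    return W;
  }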

Test Plan:
Added a test case to NonCanonicalizedSubscript.ll which causes dependence analysis to crash without this fix.

All regression tests pass.

Reviewers: spop, sebpop, jingyue

Reviewed By: jingyue

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D9698

llvm-svn: 238573
2015-05-29 16:58:08 +00:00
Hans Wennborg
b1900cd72d Revert r236894 "[BasicAA] Fix zext & sext handling"
This seems to have caused PR23626: Clang miscompiles webkit's base64 decoder

llvm-svn: 237984
2015-05-22 01:27:37 +00:00
Artur Pilipenko
36e22ed6bc Fix memory-dereferenceable.ll test
One of the testcases introduced by D9365 had incorrect !dereferenceable metadata on a load. It should fail but does not, due to the incorrect order of CHECK/CHECK-NOT directives in the test. Fixed both.

Reviewed By: sanjoy

Differential Revision: http://reviews.llvm.org/D9877

llvm-svn: 237897
2015-05-21 12:51:38 +00:00
Sanjoy Das
4eaf966f48 Dereferenceable, dereferenceable_or_null metadata for loads
Summary:
Introduce dereferenceable, dereferenceable_or_null metadata for loads
with the same semantics as the corresponding attributes.

This patch depends on http://reviews.llvm.org/D9253

Patch by Artur Pilipenko!

Reviewers: hfinkel, sanjoy, reames

Reviewed By: sanjoy, reames

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D9365

llvm-svn: 237720
2015-05-19 20:10:19 +00:00
Adam Nemet
abe4727e89 [LoopAccesses] If shouldRetryWithRuntimeCheck, reset InterestingDependences
When dependence analysis encounters a non-constant distance between
memory accesses it aborts the analysis and falls back to run-time checks
only.  In this case we weren't resetting the array of dependences.

llvm-svn: 237574
2015-05-18 15:37:03 +00:00
Adam Nemet
2139d7d823 [LoopAccesses] Rearrange printed lines in -analyze
"Store to invariant address..." is moved as the last line.  This is not
the prime result of the analysis.  Plus it simplifies some of the tests.

llvm-svn: 237573
2015-05-18 15:36:57 +00:00
James Molloy
d5073858ac [DependenceAnalysis] Fix for PR21585: collectUpperBound triggers asserts
collectUpperBound hits an assertion when the back edge count is wider than the desired type.

If that happens, truncate the backedge count.

Patch by Philip Pfaffe!

llvm-svn: 237439
2015-05-15 12:17:22 +00:00
Sanjoy Das
6d67db8c09 [Statepoints] Support for "patchable" statepoints.
Summary:
This change adds two new parameters to the statepoint intrinsic, `i64 id`
and `i32 num_patch_bytes`.  `id` gets propagated to the ID field
in the generated StackMap section.  If the `num_patch_bytes` is
non-zero then the statepoint is lowered to `num_patch_bytes` bytes of
nops instead of a call (the spill and reload code remains unchanged).
A non-zero `num_patch_bytes` is useful in situations where a language
runtime requires complete control over how a call is lowered.

This change brings statepoints one step closer to patchpoints.  With
some additional work (that is not part of this patch) it should be
possible to get rid of `TargetOpcode::STATEPOINT` altogether.

PlaceSafepoints generates `statepoint` wrappers with `id` set to
`0xABCDEF00` (the old default value for the ID reported in the stackmap)
and `num_patch_bytes` set to `0`.  This can be made more sophisticated
later.

Reviewers: reames, pgavlin, swaroop.sridhar, AndyAyers

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D9546

llvm-svn: 237214
2015-05-12 23:52:24 +00:00
Adam Nemet
cc439c33c2 [Testsuite] Renumber metadata in ScopedNoAliasAA test to match CHECK lines
Summary:
Now it's much easier to follow what's happening in this test.

Also removed some unused metadata entries.

Reviewers: hfinkel

Reviewed By: hfinkel

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D9601

llvm-svn: 236981
2015-05-11 09:10:14 +00:00
Sanjoy Das
52c1ad7787 [BasicAA] Fix zext & sext handling
Summary:

There are several unhandled edge cases in BasicAA's GetLinearExpression
method. This change fixes outstanding issues, including zext / sext of
a constant with the sign bit set, and the refusal to decompose zexts or
sexts of wrapping arithmetic.

Test Plan: Unit tests added in //q.ext.ll//.

Patch by Nick White.

Reviewers: hfinkel, sanjoy

Reviewed By: hfinkel, sanjoy

Subscribers: sanjoy, llvm-commits, hfinkel

Differential Revision: http://reviews.llvm.org/D6682

llvm-svn: 236894
2015-05-08 18:58:55 +00:00
Pat Gavlin
c022b8d288 Extend the statepoint intrinsic to allow statepoints to be marked as transitions from GC-aware code to code that is not GC-aware.
This changes the shape of the statepoint intrinsic from:

  @llvm.experimental.gc.statepoint(anyptr target, i32 # call args, i32 unused, ...call args, i32 # deopt args, ...deopt args, ...gc args)

to:

  @llvm.experimental.gc.statepoint(anyptr target, i32 # call args, i32 flags, ...call args, i32 # transition args, ...transition args, i32 # deopt args, ...deopt args, ...gc args)

This extension offers the backend the opportunity to insert (somewhat) arbitrary code to manage the transition from GC-aware code to code that is not GC-aware and back.

In order to support the injection of transition code, this extension wraps the STATEPOINT ISD node generated by the usual lowering with two additional nodes: GC_TRANSITION_START and GC_TRANSITION_END. The transition arguments that were passed to the intrinsic (if any) are lowered and provided as operands to these nodes and may be used by the backend during code generation.

Eventually, the lowering of the GC_TRANSITION_{START,END} nodes should be informed by the GC strategy in use for the function containing the intrinsic call; for now, these nodes are instead replaced with no-ops.

Differential Revision: http://reviews.llvm.org/D9501

llvm-svn: 236888
2015-05-08 18:07:42 +00:00
Diego Novillo
62bfc25308 Fix information loss in branch probability computation.
Summary:
This addresses PR 22718. When branch weights are too large, they were
being clamped to the range [1, MaxWeightForBB]. But this clamping is
only applied to edges that go outside the range, so it distorts the
relative branch probabilities.

This patch changes the weight calculation to scale every branch so the
relative probabilities are preserved. The scaling is done differently
now. First, all the branch weights are added up, and if the sum exceeds
32 bits, it computes an integer scale to bring all the weights within
the range.
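
A minimal standalone sketch of that scheme (my reconstruction, not the committed code):

  #include <algorithm>
  #include <cstdint>
  #include <vector>

  // Sum the weights in 64 bits and, if the sum overflows 32 bits,
  // divide every weight by the same integer scale so relative
  // probabilities are preserved.
  void scaleWeights(std::vector<uint32_t> &Weights) {
    uint64_t Sum = 0;
    for (uint32_t W : Weights)
      Sum += W;
    if (Sum <= UINT32_MAX)
      return;
    uint64_t Scale = Sum / UINT32_MAX + 1;
    for (uint32_t &W : Weights)
      W = std::max<uint64_t>(1, W / Scale);
  }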

The patch fixes an existing test that had slightly wrong branch
probabilities due to the previous clamping. It now gets branch weights
scaled accordingly.

Reviewers: dexonsmith

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D9442

llvm-svn: 236750
2015-05-07 17:22:06 +00:00
Adam Nemet
2a8bdf8474 [getUnderlyingObjects] Analyze loop PHIs further to remove false positives
Specifically, if a pointer accesses different underlying objects in each
iteration, don't look through the phi node defining the pointer.

The motivating case is the underlying-objects-2.ll testcase.  Consider
the loop nest:

  int **A;
  for (i)
    for (j)
       A[i][j] = A[i-1][j] * B[j]

This loop is transformed by Load-PRE to stash away A[i] for the next
iteration of the outer loop:

  Curr = A[0];          // Prev_0
  for (i: 1..N) {
    Prev = Curr;        // Prev = PHI (Prev_0, Curr)
    Curr = A[i];
    for (j: 0..N)
       Curr[j] = Prev[j] * B[j]
  }

Since A[i] and A[i-1] are likely to be independent pointers,
getUnderlyingObjects should not assume that Curr and Prev share the same
underlying object in the inner loop.

If it did we would try to dependence-analyze Curr and Prev and the
analysis of the corresponding SCEVs would fail with non-constant
distance.

To fix this, the getUnderlyingObjects API is extended with an optional
LoopInfo parameter.  This is effectively what controls whether we want
the above behavior or the original.  Currently, I only changed to use
this approach for LoopAccessAnalysis.

The other testcase is to guard the opposite case where we do want to
look through the loop PHI.  If we step through an array by incrementing
a pointer, the underlying object is the incoming value of the phi as the
loop is entered.

Fixes rdar://problem/19566729

llvm-svn: 235634
2015-04-23 20:09:20 +00:00
Brendon Cahoon
11d07abfa7 Fix a type mismatch assert in SCEV division
An assert was triggered when attempting to create a new SCEV
with operands of different types in the visitAddRecExpr. In this
test case, the operand types of the numerator and denominator
are different. The SCEV division code should generate a
conservative answer when this happens.

Differential Revision: http://reviews.llvm.org/D9021

llvm-svn: 235511
2015-04-22 15:06:40 +00:00
Brendon Cahoon
70912563c0 Recognize n/1 in the SCEV divide function
n/1 generates a quotient equal to n and a remainder of 0.
If this case is not recognized, then the SCEV divide() function
can return a remainder that is greater than or equal to the
denominator, which means the delinearized subscripts for the
test case will be incorrect.
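
A hypothetical sketch of the special case:

  #include <cstdint>
  #include <utility>

  // Hypothetical sketch: n/1 yields (quotient n, remainder 0), keeping
  // the remainder smaller than the denominator, as delinearization
  // expects.
  std::pair<int64_t, int64_t> divide(int64_t N, int64_t D) {
    if (D == 1)
      return {N, 0};       // the newly recognized case
    return {N / D, N % D}; // stands in for the general SCEV path
  }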

Differential Revision: http://reviews.llvm.org/D9003

llvm-svn: 235311
2015-04-20 16:03:28 +00:00
David Blaikie
dfadb4e9ee [opaque pointer type] Add textual IR support for explicit type parameter to the call instruction
See r230786 and r230794 for similar changes to gep and load
respectively.

Call is a bit different because it often doesn't have a single explicit
type - usually the type is deduced from the arguments, and just the
return type is explicit. In those cases there's no need to change the
IR.

When that's not the case, the IR usually contains the pointer type of
the first operand - but since typed pointers are going away, that
representation is insufficient so I'm just stripping the "pointerness"
of the explicit type away.

This does make the IR a bit weird - it /sort of/ reads like the type of
the first operand: "call void () %x(" but %x is actually of type "void
()*" and will eventually be just of type "ptr". But this seems not too
bad and I don't think it would benefit from repeating the type
("void (), void () * %x(" and then eventually "void (), ptr %x(") as has
been done with gep and load.

This also has a side benefit: since the explicit type is no longer a
pointer, there's no ambiguity between an explicit type and a function
that returns a function pointer. Previously this case needed an explicit
type (eg: a function returning a void() function was written as
"call void () () * @x(" rather than "call void () * @x(" because of the
ambiguity between a function returning a pointer to a void() function
and a function returning void).

No ambiguity means even function pointer return types can just be
written alone, without writing the whole function's type.

This leaves /only/ the varargs case where the explicit type is required.

Given the special type syntax in call instructions, the regex-fu used
for migration was a bit more involved in its own unique way (as every
one of these is) so here it is. Use it in conjunction with the apply.sh
script and associated find/xargs commands I've provided in r230786 to
migrate your out of tree tests. Do let me know if any of this doesn't
cover your cases & we can iterate on a more general script/regexes to
help others with out of tree tests.

About 9 test cases couldn't be automatically migrated - half of those
were functions returning function pointers, where I just had to manually
delete the function argument types now that we didn't need an explicit
function type there. The other half were typedefs of function types used
in calls - just had to manually drop the * from those.

import sys
import re

# Matches the "call" keyword plus the explicit type that follows it,
# splitting the line into the type part and the callee part.
pat = re.compile(r'((?:=|:|^|\s)call\s(?:[^@]*?))(\s*$|\s*(?:(?:\[\[[a-zA-Z0-9_]+\]\]|[@%](?:(")?[\\\?@a-zA-Z0-9_.]*?(?(3)"|)|{{.*}}))(?:\(|$)|undef|inttoptr|bitcast|null|asm).*$)')
addrspace_end = re.compile(r"addrspace\(\d+\)\s*\*$")
func_end = re.compile(r"(?:void.*|\)\s*)\*$")

def conv(match, line):
  # Only rewrite lines whose explicit type looks like a function pointer.
  if not match or re.search(addrspace_end, match.group(1)) or not re.search(func_end, match.group(1)):
    return line
  # Drop the trailing '*' (and surrounding whitespace) from the type.
  return line[:match.start()] + match.group(1)[:match.group(1).rfind('*')].rstrip() + match.group(2) + line[match.end():]

for line in sys.stdin:
  sys.stdout.write(conv(re.search(pat, line), line))

llvm-svn: 235145
2015-04-16 23:24:18 +00:00
Daniel Jasper
266ce3788c Re-apply r234898 and fix tests.
This commit makes LLVM not estimate branch probabilities when doing
single-bit bitmask tests.

The code that originally made me discover this is:

  if ((a & 0x1) == 0x1) {
    ..
  }

In this case we don't actually have any branch probability information
and should not assume to have any. LLVM transforms this into:

  %and = and i32 %a, 1
  %tobool = icmp eq i32 %and, 0

So, in this case, the result of a bitwise and is compared against 0,
but nevertheless, we should not assume to have probability
information.

CodeGen/ARM/2013-10-11-select-stalls.ll started failing because the
changed probabilities changed the results of
ARMBaseInstrInfo::isProfitableToIfCvt() and led to an Ifcvt of the
diamond in the test. AFAICT, the test was never meant to test this and
thus changing the test input slightly to not change the probabilities
seems like the best way to preserve the meaning of the test.

llvm-svn: 234979
2015-04-15 06:24:07 +00:00
Rafael Espindola
19e464908f Revert "The code that originally made me discover this is:"
This reverts commit r234898.
CodeGen/ARM/2013-10-11-select-stalls.ll was failing.

llvm-svn: 234903
2015-04-14 15:56:33 +00:00
Daniel Jasper
a598c99e31 The code that originally made me discover this is:
  if ((a & 0x1) == 0x1) {
    ..
  }

In this case we don't actually have any branch probability information and
should not assume to have any. LLVM transforms this into:

  %and = and i32 %a, 1
  %tobool = icmp eq i32 %and, 0

So, in this case, the result of a bitwise and is compared against 0,
but nevertheless, we should not assume to have probability
information.

llvm-svn: 234898
2015-04-14 15:20:37 +00:00
Adam Nemet
edf6518769 [LoopAccesses] Properly print whether memchecks are needed
Fix oversight in -analyze output.  PtrRtCheck contains the pointers that
need to be checked against each other and not whether memchecks are
necessary.

For instance, in the testcase PtrRtCheck has four elements, but they are all
no-alias, so no checking is necessary.

llvm-svn: 234833
2015-04-14 01:12:55 +00:00
Jingyue Wu
a3bb69f6c6 Divergence analysis for GPU programs
Summary:
Some optimizations such as jump threading and loop unswitching can negatively
affect performance when applied to divergent branches. The divergence analysis
added in this patch conservatively estimates which branches in a GPU program
can diverge. This information can then help LLVM to run certain optimizations
selectively.
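
A simplified sketch of such a conservative propagation (data dependences only; the real analysis also models control and sync dependencies):

  #include <map>
  #include <queue>
  #include <set>
  #include <vector>

  // Simplified sketch over toy value IDs: start from known sources of
  // divergence (e.g. thread-id reads) and mark everything
  // data-dependent on them divergent via a worklist.
  using ValueId = int;

  std::set<ValueId>
  computeDivergent(const std::vector<ValueId> &Sources,
                   const std::map<ValueId, std::vector<ValueId>> &Users) {
    std::set<ValueId> Divergent(Sources.begin(), Sources.end());
    std::queue<ValueId> Worklist;
    for (ValueId V : Sources)
      Worklist.push(V);
    while (!Worklist.empty()) {
      ValueId V = Worklist.front();
      Worklist.pop();
      auto It = Users.find(V);
      if (It == Users.end())
        continue;
      for (ValueId U : It->second)
        if (Divergent.insert(U).second) // newly marked divergent
          Worklist.push(U);
    }
    return Divergent;
  }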

Test Plan: test/Analysis/DivergenceAnalysis/NVPTX/diverge.ll

Reviewers: resistor, hfinkel, eliben, meheff, jholewinski

Subscribers: broune, bjarke.roune, madhur13490, tstellarAMD, dberlin, echristo, jholewinski, llvm-commits

Differential Revision: http://reviews.llvm.org/D8576

llvm-svn: 234567
2015-04-10 05:03:50 +00:00
Adam Nemet
23b1ef3354 [LoopAccesses] Allow analysis to complete in the presence of uniform stores
(Re-apply r234361 with a fix and a testcase for PR23157)

Both run-time pointer checking and the dependence analysis are capable
of dealing with uniform addresses. I.e. it's really just an orthogonal
property of the loop that the analysis computes.

Run-time pointer checking will only try to reason about SCEVAddRec
pointers or else gives up. If the uniform pointer turns out to be a
SCEVAddRec in an outer loop, the run-time checks generated will be
correct (start and end bounds would be equal).

In case of the dependence analysis, we work again with SCEVs. When
compared against a loop-dependent address of the same underlying object,
the difference of the two SCEVs won't be constant. This will result in
returning an Unknown dependence for the pair.

When compared against another uniform access, the difference would be
constant and we should return the right type of dependence
(forward/backward/etc).

The change also adds support for querying this property of the loop and
modifies the vectorizer to use it.
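
For illustration, a hypothetical loop with a uniform store address:

  // Hypothetical example: 'q' is a uniform (loop-invariant) address,
  // yet the loop remains analyzable and runtime-checkable.
  void f(int *a, int *b, int *q, int n) {
    for (int i = 0; i < n; ++i) {
      a[i] = b[i] + 1;
      *q = i; // uniform store: same location every iteration
    }
  }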

Patch by Ashutosh Nema!

llvm-svn: 234424
2015-04-08 17:48:40 +00:00
Adam Nemet
8c21d63b1f Revert "[LoopAccesses] Allow analysis to complete in the presence of uniform stores"
This reverts commit r234361.

It caused PR23157.

llvm-svn: 234387
2015-04-08 04:16:55 +00:00
Adam Nemet
2d24bce667 [LoopAccesses] Allow analysis to complete in the presence of uniform stores
Both run-time pointer checking and the dependence analysis are capable
of dealing with uniform addresses. I.e. it's really just an orthogonal
property of the loop that the analysis computes.

Run-time pointer checking will only try to reason about SCEVAddRec
pointers or else gives up. If the uniform pointer turns out to be a
SCEVAddRec in an outer loop, the run-time checks generated will be
correct (start and end bounds would be equal).

In case of the dependence analysis, we work again with SCEVs. When
compared against a loop-dependent address of the same underlying object,
the difference of the two SCEVs won't be constant. This will result in
returning an Unknown dependence for the pair.

When compared against another uniform access, the difference would be
constant and we should return the right type of dependence
(forward/backward/etc).

The change also adds support for querying this property of the loop and
modifies the vectorizer to use it.

Patch by Ashutosh Nema!

llvm-svn: 234361
2015-04-07 21:46:16 +00:00
Adam Nemet
86058cb79c [LoopAccesses] Remove unused global variables in tests
llvm-svn: 233887
2015-04-02 04:42:51 +00:00
Sanjoy Das
23dbc8c364 [SCEV] Look at backedge dominating conditions (re-land r233447).
Summary:
This change teaches ScalarEvolution::isLoopBackedgeGuardedByCond to look
at edges within the loop body that dominate the latch.  We don't do an
exhaustive search for all possible edges, but only a quick walk up the
dom tree.

This re-lands r233447.  r233447 was reverted because it caused massive
compile-time regressions.  This change has a fix for the same issue.

llvm-svn: 233829
2015-04-01 18:24:06 +00:00
Diego Novillo
f5c6703ef6 Remove 4,096 loop scale limitation.
Summary:
This is part 1 of fixes to address the problems described in
https://llvm.org/bugs/show_bug.cgi?id=22719.

The restriction to limit loop scales to 4,096 does not really prevent
overflows anymore, as the underlying algorithm has changed and does
not seem to suffer from this problem.

Additionally, artificially restricting loop scales to such a low number
skews frequency information, making loops of equal hotness appear to
have very different hotness properties.

The only loops that are artificially restricted to a scale of 4096 are
infinite loops (those loops with an exit mass of 0). This prevents
infinite loops from skewing the frequencies of other regions in the CFG.

At the end of propagation, frequencies are scaled to values that take no
more than 64 bits to represent. When the range of frequencies to be
represented fits within 61 bits, it pushes up the scaling factor to a
minimum of 8 to better distinguish small frequency values. Otherwise,
small frequency values are all saturated down at 1.

Tested on x86_64.

Reviewers: dexonsmith

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D8718

llvm-svn: 233826
2015-04-01 17:42:27 +00:00
Daniel Jasper
49f0e0afd6 Revert "[SCEV] Look at backedge dominating conditions."
This leads to terribly slow compile times under MSAN. More discussion
on the commit thread of r233447.

llvm-svn: 233529
2015-03-30 09:30:02 +00:00
Sanjoy Das
7a86d5d742 [SCEV] Look at backedge dominating conditions.
Summary:
This change teaches ScalarEvolution::isLoopBackedgeGuardedByCond to look
at edges within the loop body that dominate the latch.  We don't do an
exhaustive search for all possible edges, but only a quick walk up the
dom tree.

Reviewers: atrick, hfinkel

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D8627

llvm-svn: 233447
2015-03-27 23:18:08 +00:00
Philip Reames
007c0af082 Require a GC strategy be specified for functions which use gc.statepoint
This was discussed a while back and I left it optional for migration.  Since it's been far more than the 'week or two' that was discussed, time to actually make this mandatory.

llvm-svn: 233357
2015-03-27 05:09:33 +00:00
Sanjoy Das
f1fd8e6a14 [SCEV] Revert bailout added in r75511.
Summary:
With the introduction of MarkPendingLoopPredicates in r157092, I don't
think the bailout is needed anymore.

Reviewers: atrick, nicholas

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D8624

llvm-svn: 233296
2015-03-26 17:28:26 +00:00