mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-24 13:33:37 +02:00
Commit Graph

4316 Commits

Author SHA1 Message Date
Andrew Trick
6d854ef50f Revert "indvars: sink truncates outside the loop."
This reverts commit r198654.

One of the bots reported a SciMark failure.

llvm-svn: 198659
2014-01-07 01:50:58 +00:00
Andrew Trick
7621f7c6a3 indvars: sink truncates outside the loop.
This is a follow up of the r198338 commit that added truncates for
lcssa phi nodes. Sinking the truncates below the phis cleans up the
loop and simplifies subsequent analysis within the indvars pass.

llvm-svn: 198654
2014-01-07 01:02:55 +00:00
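An illustrative IR sketch (not from the commit; names hypothetical) of the
resulting shape, with the lcssa phi keeping the wide IV and the truncate
placed below it in the exit block rather than inside the loop:

  loop:
    %iv = phi i64 [ 0, %entry ], [ %iv.next, %loop ]
    %iv.next = add nsw i64 %iv, 1
    %cmp = icmp slt i64 %iv.next, %n
    br i1 %cmp, label %loop, label %exit

  exit:
    %lcssa = phi i64 [ %iv.next, %loop ]  ; lcssa phi of the wide IV
    %narrow = trunc i64 %lcssa to i32     ; truncate sunk below the phi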
Andrew Trick
12dfc32452 Reapply r198478 "Fix PR18361: Invalidate LoopDispositions after LoopSimplify hoists things."
Now with a fix for PR18384: ValueHandleBase::ValueIsDeleted.

We need to invalidate SCEV's loop info when we delete a block, even if no values are hoisted.

llvm-svn: 198631
2014-01-06 19:43:14 +00:00
Alp Toker
2d17611e90 Revert "Fix PR18361: Invalidate LoopDispositions after LoopSimplify hoists things."
This commit was the source of crasher PR18384:

While deleting: label %for.cond127
An asserting value handle still pointed to this value!
UNREACHABLE executed at llvm/lib/IR/Value.cpp:671!

Reverting to get the builders green, feel free to re-land after fixing up.
(Renato has a handy isolated repro if you need it.)

This reverts commit r198478.

llvm-svn: 198503
2014-01-04 17:00:45 +00:00
Andrew Trick
45ef495b91 Fix PR18361: Invalidate LoopDispositions after LoopSimplify hoists things.
getSCEV for an ashr instruction creates an intermediate zext
expression when it truncates its operand.

The operand is initially inside the loop, so the narrow zext
expression has a non-loop-invariant loop disposition.

LoopSimplify then runs on an outer loop, hoists the ashr operand, and
properly invalidates the SCEVs that are mapped to the value.

The SCEV expression for the ashr is now an AddRec with the hoisted
value as the now loop-invariant start value.

The LoopDisposition of this wide value was properly invalidated during
LoopSimplify.

However, if we later get the ashr SCEV again, we again try to create
the intermediate zext expression. We get the same SCEV that we did
earlier, and it is still cached because it was never mapped to a
Value. When we try to create a new AddRec we abort because we're using
the old non-loop-invariant LoopDisposition.

I don't have a solution for this other than to clear LoopDisposition
when LoopSimplify hoists things.

I think the long-term strategy should be to perform LoopSimplify on
all loops before computing SCEV and before running any loop opts on
individual loops. It's possible we may want to rerun LoopSimplify on
individual loops, but it should rarely do anything, so rarely require
invalidating SCEV.

llvm-svn: 198478
2014-01-04 05:52:49 +00:00
David Peixotto
2028917754 Fix loop rerolling pass failure with non-constant loop lower bound
The loop rerolling pass was failing with an assertion failure from a
failed cast on loops like this:

  void foo(int *A, int *B, int m, int n) {
    for (int i = m; i < n; i+=4) {
      A[i+0] = B[i+0] * 4;
      A[i+1] = B[i+1] * 4;
      A[i+2] = B[i+2] * 4;
      A[i+3] = B[i+3] * 4;
    }
  }

The code was casting the SCEV-expanded code for the new
induction variable to a phi-node. When the loop had a non-constant
lower bound, the SCEV expander would end the code expansion with an
add instead of a phi node, so the cast would fail.

It looks like the cast to a phi node was only needed to get the
induction variable value coming from the backedge to compute the end
of loop condition. This patch changes the loop reroller to compare
the induction variable to the number of times the backedge is taken
instead of the iteration count of the loop. In other words, we stop
the loop when the current value of the induction variable ==
IterationCount-1. Previously, we compared the induction variable value
from the next iteration against IterationCount.

This problem only seems to occur on 32-bit targets. For some reason,
the loop is not rerolled on 64-bit targets.

PR18290

llvm-svn: 198425
2014-01-03 17:20:01 +00:00
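A sketch of the changed exit condition (hypothetical IR, not taken from the
patch):

  ; before: compare the IV value from the *next* iteration against the
  ; iteration count -- this required the SCEV expansion to end in a phi
  %iv.next = add i32 %iv, 1
  %done = icmp eq i32 %iv.next, %trip.count

  ; after: compare the current IV against the backedge-taken count,
  ; i.e. IterationCount - 1
  %done2 = icmp eq i32 %iv, %backedge.count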
Arnold Schwaighofer
2e0173da10 BasicAA: Use reachability instead of dominance for checking value equality in phi
cycles

This allows the value equality check to work even if we don't have a dominator
tree. Also add some more comments.

I was worried about compile time impacts and did not implement reachability but
used the dominance check in the initial patch. The trade-off was that the
dominator tree was required.
The llvm utility function isPotentiallyReachable cuts off the recursive search
after 32 visits. Testing did not show any compile time regressions, showing my
worries unjustified.

No compile time or performance regressions at O3 -flto -mavx on test-suite +
externals.

Addresses review comments from r198290.

llvm-svn: 198400
2014-01-03 05:47:03 +00:00
Matt Arsenault
e28f607079 Delete unread globals through addrspacecast
llvm-svn: 198346
2014-01-02 20:01:43 +00:00
Matt Arsenault
090fe5a92a Fix addrspacecast with metadata globals
llvm-svn: 198345
2014-01-02 19:53:49 +00:00
Andrew Trick
5f76ab650f indvars: insert truncate at loop boundary to avoid redundant IVs.
When widening an IV to remove s/zext, we generally try to eliminate
the original narrow IV. However, LCSSA phi nodes outside the loop were
still using the original IV. Clean this up more aggressively to avoid
redundancy in generated code.

llvm-svn: 198338
2014-01-02 19:29:38 +00:00
Arnold Schwaighofer
1b53dd734c BasicAA: Fix value equality and phi cycles
When there are cycles in the value graph we have to be careful interpreting
"Value*" identity as "value" equivalence. We interpret the value of a phi node
as the value of its operands.
Now when we check for value equivalence, we make sure that the "Value*"
dominates all cycles (phis).

%0 = phi [%noaliasval, %addr2]
%l = load %ptr
%addr1 = gep @a, 0, %l
%addr2 = gep @a, 0, (%l + 1)
store %ptr ...

Before this patch we would return NoAlias for (%0, %addr1) which is wrong
because the value of the load is from different iterations of the loop.

Tested on x86_64 -mavx at O3 and O3 -flto with no performance or compile time
regressions.

PR18068
radar://15653794

llvm-svn: 198290
2014-01-02 03:31:36 +00:00
Chandler Carruth
704735664e Disable transforms that introduce calls to exp10*() on Linux due to
widespread glibc bugs.

The glibc implementation of exp10 has a very serious precision bug in
version 2.15 (and older versions). This is still very widely used (the
current Ubuntu LTS for example uses it) and so it isn't reasonable to
make transforms that produce these functions. This fixes many
miscompiles introduced when we started transforming pow(10.0, ...) into
exp10, and it may have fixed other latent miscompiles where exp10
provided sufficient precision but exp10f did not.

This is all really horrible. The primary bug has been fixed for over
a year and glibc 2.18 works correctly for the test cases I have, but it
will be 2017 before the LTS using 2.15 is no longer supported by Ubuntu
(and thus reasonable for folks to be relying on). =[ We're either going
to need to live without these optimizations, or find a way to switch
behavior more dynamically than using simply the fact that the OS is
"Linux".

To make matters worse, there appears to be significant testing and
fixing of numerous other bugs in the exp10 family of functions right now
in glibc. While those haven't been causing problems I've seen in the
wild, it gives me concerns that we may need to wait until an even later
release of glibc before we can reliably transform code into exp10.

llvm-svn: 198093
2013-12-28 02:40:19 +00:00
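The transform being disabled looks roughly like this (illustrative IR, not
from the patch):

  ; before
  %r = call double @pow(double 1.000000e+01, double %x)

  ; after -- now suppressed when targeting Linux, since glibc 2.15's
  ; exp10 is too imprecise
  %r.exp10 = call double @exp10(double %x)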
Andrew Trick
e7f9f5556d Add support to indvars for optimizing sadd.with.overflow.
Split sadd.with.overflow into add + sadd.with.overflow to allow
analysis and optimization. This should ideally be done after
InstCombine, which can perform code motion (eventually indvars should
run after all canonical instcombines). We want ISEL to recombine the
add and the check, at least on x86.

This is currently under an option for reducing live induction
variables: -liv-reduce. The next step is reducing liveness of IVs that
are live out of the overflow check paths. Once the related
optimizations are fully developed, reviewed and tested, I do expect
this to become default.

llvm-svn: 197926
2013-12-23 23:31:49 +00:00
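One plausible reading of the split, as an illustrative IR sketch (not taken
from the patch):

  ; before: the add is hidden inside the intrinsic, opaque to SCEV
  %s = call { i32, i1 } @llvm.sadd.with.overflow.i32(i32 %a, i32 %b)
  %val = extractvalue { i32, i1 } %s, 0
  %ovf = extractvalue { i32, i1 } %s, 1

  ; after: a plain add feeds indvars/SCEV; the intrinsic only supplies
  ; the overflow bit, and ISEL is expected to recombine the pair
  %val2 = add i32 %a, %b
  %s2 = call { i32, i1 } @llvm.sadd.with.overflow.i32(i32 %a, i32 %b)
  %ovf2 = extractvalue { i32, i1 } %s2, 1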
Richard Sandiford
f367c783a7 Fix Scalarizer insertion point when replacing PHIs with insertelements
If the Scalarizer scalarized a vector PHI but could not scalarize
all uses of it, it would insert a series of insertelements to reconstruct
the vector PHI value from the scalar ones.  The problem was that it would
emit these insertelements immediately after the PHI, even if there were
other PHIs after it.

llvm-svn: 197909
2013-12-23 14:51:56 +00:00
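An illustrative sketch of the corrected insertion point (hypothetical IR):
phis must stay grouped at the top of the block, so the reconstructing
insertelements have to go after the last phi, not merely after the one being
scalarized:

  join:
    %p0 = phi float [ %a0, %left ], [ %b0, %right ]
    %p1 = phi float [ %a1, %left ], [ %b1, %right ]   ; later phi
    %v0 = insertelement <2 x float> undef, float %p0, i32 0
    %v1 = insertelement <2 x float> %v0, float %p1, i32 1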
Richard Sandiford
27fc4a21a8 Fix Scalarizer handling of vector GEPs with multiple index operands
The old code only worked for one index operand.  Also handle "inbounds".

llvm-svn: 197908
2013-12-23 14:45:00 +00:00
Justin Bogner
3b4e34606e Transforms: Don't create bad weights when eliminating dead cases
If we happen to eliminate every case in a switch that has branch
weights, we currently try to create metadata for the one remaining
branch, triggering an assert. Instead, we need to check that the
metadata we're trying to create is sensible.

llvm-svn: 197791
2013-12-20 08:21:30 +00:00
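For reference, branch weights on a switch look roughly like this
(illustrative, in 2013-era metadata syntax); once every case but one has been
eliminated, a one-entry weight list would be meaningless and must not be
emitted:

  switch i32 %x, label %default [
    i32 0, label %case0
  ], !prof !0
  ...
  !0 = metadata !{metadata !"branch_weights", i32 90, i32 10}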
Justin Bogner
b9a8fd8b26 test: Make a branchweight test more specific
llvm-svn: 197790
2013-12-20 08:21:27 +00:00
Justin Bogner
20b5f203b2 test: Prefer CHECK-LABEL to CHECK in branchweight tests
llvm-svn: 197789
2013-12-20 08:21:24 +00:00
Arnold Schwaighofer
e4d65aae7d LoopVectorizer: Don't if-convert constant expressions that can trap
A phi node operand or an instruction operand could be a constant expression that
can trap (division). Check that we don't vectorize such cases.

PR16729
radar://15653590

llvm-svn: 197449
2013-12-17 01:11:01 +00:00
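An illustrative example of such an operand (not from the patch): a constant
expression whose evaluation may trap, so speculating it during if-conversion
would be unsafe:

  @g = external global i32

  ; the sdiv constant expression traps if @g's address truncates to zero,
  ; so it must stay guarded by its branch
  %p = phi i32 [ sdiv (i32 7, i32 ptrtoint (i32* @g to i32)), %then ],
               [ 0, %else ]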
Yi Jiang
67f2e8e3f8 Enable double to float shrinking optimizations for binary functions like 'fmin/fmax'. Fix radar:15283121
llvm-svn: 197434
2013-12-16 22:42:40 +00:00
Joerg Sonnenberger
72ca1c7c94 There is no exp10 on NetBSD.
llvm-svn: 197348
2013-12-15 20:36:17 +00:00
Chandler Carruth
7848339d15 [inliner] Fix PR18206 by preventing inlining functions that call setjmp
through an invoke instruction.

The original patch for this was written by Mark Seaborn, but I've
reworked his test case into the existing returns_twice test case and
implemented the fix by the prior refactoring to actually run the cost
analysis over invoke instructions, and then here fixing our detection of
the returns_twice attribute to work for both calls and invokes. We never
noticed because we never saw an invoke. =[

llvm-svn: 197216
2013-12-13 08:00:01 +00:00
Chandler Carruth
af9509e185 [inliner] Completely change (and fix) how the inline cost analysis
handles terminator instructions.

The inline cost analysis inherited some pretty rough handling of
terminator insts from the original cost analysis, and then made it much,
much worse by factoring all of the important analyses into a separate
instruction visitor. That instruction visitor never visited the
terminator.

This works fine for things like conditional branches, but for many other
things we simply computed The Wrong Value. The first example is
unconditional branches, which should be free but were counted as full
cost. This is most significant for conditional branches where the
condition simplifies and folds during inlining. We paid a 1 instruction
tax on every branch in a straight line specialized path. =[

Oh, we also claimed that the unreachable instruction had cost.

But it gets worse. Let's consider invoke. We never applied the call
penalty. We never accounted for the cost of the arguments. Nope. Worse
still, we didn't handle the *correctness* constraints of not inlining
recursive invokes, or exception throwing returns_twice functions. Oops.
See PR18206. Sadly, PR18206 requires yet another fix, but this
refactoring is at least a huge step in that direction.

llvm-svn: 197215
2013-12-13 07:59:56 +00:00
Mark Seaborn
b8269b195e Fix spelling in comment in test: "themselve" -> "themselves"
llvm-svn: 197180
2013-12-12 21:26:30 +00:00
Hal Finkel
89ba3023da Fix a use-after-free error in GlobalOpt CleanupConstantGlobalUsers
GlobalOpt's CleanupConstantGlobalUsers function uses a worklist array to manage
constant users to be visited. The pointers in this array need to be weak
handles because when we delete a constant array, we may also be holding a
pointer to one of its elements (or an element of one of its elements if we're
dealing with an array of arrays) in the worklist.

Fixes PR17347.

llvm-svn: 197178
2013-12-12 20:45:24 +00:00
Yi Jiang
f648b7ec82 Resubmit r196544: Apply transformation on OS X 10.9+ and iOS 7.0+: pow(10, x) -> __exp10(x)
llvm-svn: 197109
2013-12-12 01:55:04 +00:00
Justin Bogner
42f9d0cf93 Transforms: Don't create bad branch weights when folding a switch
This avoids creating branch weight metadata of length one when we fold
cases into the default of a switch instruction, which was triggering
an assert.

llvm-svn: 196845
2013-12-10 00:13:41 +00:00
Manman Ren
4fcc808139 Revert 196544 due to internal bot failures.
llvm-svn: 196732
2013-12-08 20:28:33 +00:00
Mark Seaborn
72517bc221 Fix inlining to not lose the "cleanup" clause from landingpads
This fixes PR17872.  This bug can lead to C++ destructors not being
called when they should be, when an exception is thrown.

llvm-svn: 196711
2013-12-08 00:51:21 +00:00
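A sketch of the clause being preserved (illustrative, in the 2013 syntax
where the personality is spelled on the landingpad itself):

  %lp = landingpad { i8*, i32 }
          personality i8* bitcast (i32 (...)* @__gxx_personality_v0 to i8*)
          cleanup                                  ; must survive inlining
          catch i8* bitcast (i8** @_ZTIi to i8*)

Losing the cleanup flag here is what allowed destructors to be skipped.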
Mark Seaborn
2d856cb007 Fix inlining to not produce duplicate landingpad clauses
Before this change, inlining one "invoke" into an outer "invoke" call
site can lead to the outer landingpad's catch/filter clauses being
copied multiple times into the resulting landingpad.  This happens:

 * when the inlined function contains multiple "resume" instructions,
   because forwardResume() copies the clauses but is called multiple
   times;

 * when the inlined function contains a "resume" and a "call", because
   HandleCallsInBlockInlinedThroughInvoke() copies the clauses but is
   redundant with forwardResume().

Fix this by deduplicating the code.

This problem doesn't lead to any incorrect execution; it's only
untidy.

This change will make fixing PR17872 a little easier.

llvm-svn: 196710
2013-12-08 00:50:58 +00:00
Renato Golin
6dc8d4d977 Force vector width via the CPU setting in the vectorizer metadata-enable test
llvm-svn: 196669
2013-12-07 21:46:08 +00:00
Matt Arsenault
db406f2a95 Fix assert with copy from global through addrspacecast
llvm-svn: 196638
2013-12-07 02:58:45 +00:00
Duncan P. N. Exon Smith
de77610c42 Don't use isNullValue to evaluate ConstantExpr
ConstantExpr can evaluate to false even when isNullValue gives false.

Fixes PR18143.

llvm-svn: 196611
2013-12-06 21:48:36 +00:00
Yi Jiang
0bd0569be5 Apply transformation on OS X 10.9+ and iOS 7.0+: pow(10, x) -> __exp10(x)
llvm-svn: 196544
2013-12-05 22:42:50 +00:00
Renato Golin
7c406f2a69 Move test to X86 dir
Test is platform independent, but I don't want to force vector-width, or
that could spoil the pragma test.

llvm-svn: 196539
2013-12-05 21:45:39 +00:00
Renato Golin
a4d4a4c44f Add #pragma vectorize enable/disable to LLVM
The intended behaviour is to force vectorization on the presence
of the flag (either turn on or off), and to continue the behaviour
as expected in its absence. Tests were added to make sure that all
cases are covered in opt. No tests were added in other tools with
the assumption that they should use the PassManagerBuilder in the
same way.

This patch also removes the outdated -late-vectorize flag, which was
on by default and not helping much.

The pragma metadata is being attached to the same place as other loop
metadata, but nothing forbids one from attaching it to a function
(to enable #pragma optimize) or basic blocks (to hint the basic-block
vectorizers), etc. The logic should be the same all around.

Patches to Clang to produce the metadata will be produced after the
initial implementation is agreed upon and committed. Patches to other
vectorizers (such as SLP and BB) will be added once we're happy with
the pass manager changes.

llvm-svn: 196537
2013-12-05 21:20:02 +00:00
Arnold Schwaighofer
120880c780 SLPVectorizer: An in-tree vectorized entry cannot also be a scalar external use
We were creating external uses for scalar values in MustGather entries that also
had a ScalarToTreeEntry (they are also present in a vectorized tuple). This
meant we would keep a value 'alive' both as a scalar and vectorized, causing havoc.
This is not necessary because when we create a MustGather vector we explicitly
create external uses entries for the insertelement instructions of the
MustGather vector elements.

Fixes PR18129.

radar://15582184

llvm-svn: 196508
2013-12-05 15:14:40 +00:00
Alp Toker
e845f8af67 Correct word hyphenations
This patch tries to avoid unrelated changes other than fixing a few
hyphen-related ambiguities and contractions in nearby lines.

llvm-svn: 196471
2013-12-05 05:44:44 +00:00
Yunzhong Gao
05c1966c8c Teach the internalize pass to skip dllexported symbols because they could be
referenced in a way that even the linker does not see.

Differential Revision: http://llvm-reviews.chandlerc.com/D2280

llvm-svn: 196300
2013-12-03 18:05:14 +00:00
Arnold Schwaighofer
1f4eee9e2f opt: Mirror vectorization presets of clang
clang enables vectorization at optimization levels > 1 and size level < 2. opt
should behave similarly.

Loop vectorization and SLP vectorization can be disabled with the flags
-disable-(loop/slp)-vectorization.

llvm-svn: 196294
2013-12-03 16:33:06 +00:00
NAKAMURA Takumi
90ce128a31 llvm/test/Transforms/SampleProfile/syntax.ll: Relax an expression, not to check locale-dependent message.
llvm-svn: 196195
2013-12-03 02:20:53 +00:00
Kay Tiong Khoo
5257afa264 Conservative fix for PR17827 - don't optimize a shift + and + compare sequence where the shift is logical unless the comparison is unsigned
llvm-svn: 196129
2013-12-02 18:43:59 +00:00
Diego Novillo
a4c6fce65c Add tests for profile sample file parsing.
The profile file parser needed some tests for its parsing actions.
This adds tests for each of the error messages emitted by the parser.

llvm-svn: 196106
2013-12-02 15:12:50 +00:00
Alp Toker
fcc4ea594d Rename test with misspelt filename
llvm-svn: 196064
2013-12-02 04:31:36 +00:00
Stephen Canon
d8aaca93a6 Rein in overzealous InstCombine of fptrunc(OP(fpextend, fpextend)).
llvm-svn: 195934
2013-11-28 21:38:05 +00:00
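The pattern in question, sketched in IR: whether narrowing the operation is
value-preserving depends on the opcode and the precisions involved, so the
fold has to be applied selectively rather than for every OP:

  %ax = fpext float %a to double
  %bx = fpext float %b to double
  %wide = fadd double %ax, %bx
  %r = fptrunc double %wide to float   ; candidate for: fadd float %a, %b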
Nadav Rotem
dc01e91cf5 PR18060 - We can't save a list of ExtractElement instructions to CSE because some of these instructions
may be removed and optimized in future iterations. Instead we save a list of basic blocks that we need to CSE.

llvm-svn: 195791
2013-11-26 22:24:25 +00:00
Arnold Schwaighofer
d0c05d2c84 LoopVectorizer: Truncate i64 trip counts of i32 phis if necessary
In signed arithmetic we could end up with an i64 trip count for an i32 phi.
Because it is signed arithmetic we know that this is only defined if the i32
does not wrap. It is therefore safe to truncate the i64 trip count to an
i32 value.

Fixes PR18049.

llvm-svn: 195787
2013-11-26 22:11:23 +00:00
Nadav Rotem
643eb4c26e PR18060 - When we RAUW values with ExtractElement instructions in some cases
we generate PHI nodes with multiple entries from the same basic block but
with different values. Enabling CSE on ExtractElement instructions makes sure
that all of the RAUWed instructions are the same.

llvm-svn: 195773
2013-11-26 17:29:19 +00:00
Stepan Dyatkovskiy
83455f2b60 PR17925 bugfix.
Short description.

This issue is about the case of treating pointers as integers.
We treat pointers as different if they reference different address spaces.
At the same time, we treat pointers as equal to integers (of machine address
width). That was the source of a false positive. Consider this case on a
32-bit machine:

void foo0(i32 addrspace(1)* %p)
void foo1(i32 addrspace(2)* %p)
void foo2(i32 %p)

foo0 != foo1, while
foo1 == foo2 and foo0 == foo2.

As you can see, this breaks transitivity. That means the result depends on the
order in which functions are presented in the module. The order foo2, foo0,
foo1 causes foo0 and foo1 to be merged:
first foo0 is merged with foo2 and erased, then foo1 is merged with foo2.
Depending on the order, things we don't expect to merge could be merged.

The fix:
Forbid treating any pointer as an integer, except for those that belong to address space 0.

llvm-svn: 195769
2013-11-26 16:11:03 +00:00
Chandler Carruth
497a42d1b9 Add the test case that I missed when committing r195528. Doh!
llvm-svn: 195691
2013-11-25 22:24:27 +00:00
Manman Ren
e53617a3e6 Debug Info: update testing cases to specify the debug info version number.
We are going to drop debug info without a version number or with a different
version number, to make sure we don't crash when we see bitcode files with
different debug info metadata format.

Make tests more robust by removing hard-coded metadata numbers in CHECK lines.

llvm-svn: 195535
2013-11-23 01:16:29 +00:00
Manman Ren
f0d5143ea6 Debug Info: update testing cases to specify the debug info version number.
We are going to drop debug info without a version number or with a different
version number, to make sure we don't crash when we see bitcode files with
different debug info metadata format.

llvm-svn: 195504
2013-11-22 21:49:45 +00:00
Matt Arsenault
35acaad8c7 StructurizeCFG: Fix verification failure with some loops.
If the beginning of the loop was also the entry block
of the function, branches were inserted to the entry block
which isn't allowed. If this occurs, create a new dummy
function entry block that branches to the start of the loop.

llvm-svn: 195493
2013-11-22 19:24:39 +00:00
Matt Arsenault
9afcbf3562 StructurizeCFG: Fix inverting a branch on an argument
llvm-svn: 195492
2013-11-22 19:24:37 +00:00
Richard Sandiford
82ac8f6b68 Add a Scalarizer pass.
llvm-svn: 195471
2013-11-22 16:58:05 +00:00
Yi Jiang
74286d427a SLP Vectorizer: Extract cost will only be added once even if the scalar has multiple external uses.
llvm-svn: 195406
2013-11-22 01:57:02 +00:00
Kostya Serebryany
1513e9969b Don't speculate loads under ThreadSanitizer
Summary:
Don't speculate loads under ThreadSanitizer.
This fixes https://code.google.com/p/thread-sanitizer/issues/detail?id=40
Also discussed here: http://lists.cs.uiuc.edu/pipermail/llvmdev/2013-November/067929.html

Reviewers: chandlerc

Reviewed By: chandlerc

CC: llvm-commits, dvyukov

Differential Revision: http://llvm-reviews.chandlerc.com/D2227

llvm-svn: 195324
2013-11-21 07:29:28 +00:00
Yuchen Wu
734fa40b2a llvm-cov: Added file checksum to gcno and gcda files.
Instead of permanently outputting "MVLL" as the file checksum, clang
will create gcno and gcda checksums by hashing the destination block
numbers of every arc. This allows for llvm-cov to check if the two gcov
files are synchronized.

Regenerated the test files so they contain the checksum. Also added a
negative test to ensure an error when the checksums don't match.

llvm-svn: 195191
2013-11-20 04:15:05 +00:00
Arnold Schwaighofer
242935ec8c SLPVectorizer: Fix stale Value pointer array
We are slicing an array of Value pointers and processing those slices in a loop.
The problem is that we might invalidate a later slice by vectorizing a former
slice.

Use a WeakVH to track the pointer. If the pointer is deleted or RAUW'ed we can
tell.

The test case will only fail when running with libgmalloc.

radar://15498655

llvm-svn: 195162
2013-11-19 22:20:20 +00:00
Chandler Carruth
4d8e469cd3 Fix an issue where SROA computed different results based on the relative
order of slices of the alloca which have exactly the same size and other
properties. This was found by a perniciously unstable sort
implementation used to flush out buggy uses of the algorithm.

The fundamental idea is that findCommonType should return the best
common type it can find across all of the slices in the range. There
were two bugs here previously:

1) We would accept an integer type smaller than a byte-width multiple,
   and if there were different bit-width integer types, we would accept
   the first one. This caused an actual failure in the testcase updated
   here when the sort order changed.
2) If we found a bad combination of types or a non-load, non-store use
   before an integer typed load or store we would bail, but if we found
   the integer typed load or store, we would use it. The correct
   behavior is to always use an integer typed operation which covers the
   partition if one exists.

While a clever debugging sort algorithm found problem #1 in our existing
test cases, I have no useful test case ideas for #2. I spotted it by
inspection when looking at this code.

llvm-svn: 195118
2013-11-19 09:03:18 +00:00
Paul Robinson
27ef03dc70 The 'optnone' attribute means don't inline anything into this function
(except functions marked always_inline).
Functions with 'optnone' must also have 'noinline' so they don't get
inlined into any other function.

Based on work by Andrea Di Biagio.

llvm-svn: 195046
2013-11-18 21:44:03 +00:00
Arnold Schwaighofer
e4280ec4dd LoopVectorizer: Extend the induction variable to a larger type
In some cases the loop exit count computation can overflow. Extend the type to
prevent most of those cases.

The problem is loops like:
int main ()
{
  int a = 1;
  char b = 0;
  lbl:
    a &= 4;
    b--;
    if (b) goto lbl;
  return a;
}

The backedge count is 255. The induction variable type is i8. If we add one to
255 to get the exit count we overflow to zero.

To work around this issue we extend the type of the induction variable to i32 in
the case of i8 and i16.

PR17532

llvm-svn: 195008
2013-11-18 13:14:32 +00:00
Hal Finkel
f09058ab9e Add the cold attribute to error-reporting call sites
Generally speaking, control flow paths with error reporting calls are cold.
So far, error reporting calls are calls to perror and calls to fprintf,
fwrite, etc. with stderr as the stream. This can be extended in the future.

The primary motivation is to improve block placement (the cold attribute
affects the static branch prediction heuristics).

llvm-svn: 194943
2013-11-17 02:06:35 +00:00
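What the annotation looks like in IR (a minimal sketch with hypothetical
names):

  call void @perror(i8* %msg) #0     ; error-reporting call site marked cold
  ...
  attributes #0 = { cold }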
Hal Finkel
cc70e01f05 Add a loop rerolling pass
This adds a loop rerolling pass: the opposite of (partial) loop unrolling. The
transformation aims to take loops like this:

for (int i = 0; i < 3200; i += 5) {
  a[i]     += alpha * b[i];
  a[i + 1] += alpha * b[i + 1];
  a[i + 2] += alpha * b[i + 2];
  a[i + 3] += alpha * b[i + 3];
  a[i + 4] += alpha * b[i + 4];
}

and turn them into this:

for (int i = 0; i < 3200; ++i) {
  a[i] += alpha * b[i];
}

and loops like this:

for (int i = 0; i < 500; ++i) {
  x[3*i] = foo(0);
  x[3*i+1] = foo(0);
  x[3*i+2] = foo(0);
}

and turn them into this:

for (int i = 0; i < 1500; ++i) {
  x[i] = foo(0);
}

There are two motivations for this transformation:

  1. Code-size reduction (especially relevant, obviously, when compiling for
code size).

  2. Providing greater choice to the loop vectorizer (and generic unroller) to
choose the unrolling factor (and a better ability to vectorize). The loop
vectorizer can take vector lengths and register pressure into account when
choosing an unrolling factor, for example, and a pre-unrolled loop limits that
choice. This is especially problematic if the manual unrolling was optimized
for a machine different from the current target.

The current implementation is limited to single basic-block loops only. The
rerolling recognition should work regardless of how the loop iterations are
intermixed within the loop body (subject to dependency and side-effect
constraints), but the significant restriction is that the order of the
instructions in each iteration must be identical. This seems sufficient to
capture all current use cases.

This pass is not currently enabled by default at any optimization level.

llvm-svn: 194939
2013-11-16 23:59:05 +00:00
Hal Finkel
79b1387151 Apply the InstCombine fptrunc sqrt optimization to llvm.sqrt
InstCombine, in visitFPTrunc, applies the following optimization to sqrt calls:

  (fptrunc (sqrt (fpext x))) -> (sqrtf x)

but does not apply the same optimization to llvm.sqrt. This is a problem
because, to enable vectorization, Clang generates llvm.sqrt instead of sqrt in
fast-math mode, and because this optimization is being applied to sqrt and not
applied to llvm.sqrt, sometimes the fast-math code is slower.

This change makes InstCombine apply this optimization to llvm.sqrt as well.

This fixes the specific problem in PR17758, although the same underlying issue
(optimizations applied to libcalls are not applied to intrinsics) exists for
other optimizations in SimplifyLibCalls.

llvm-svn: 194935
2013-11-16 21:29:08 +00:00
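In sketch form, the now-unified fold for the intrinsic:

  ; before
  %ext = fpext float %x to double
  %sq = call double @llvm.sqrt.f64(double %ext)
  %r = fptrunc double %sq to float

  ; after, matching what was already done for the sqrt libcall
  %r2 = call float @llvm.sqrt.f32(float %x)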
Benjamin Kramer
0519e29d1b InstCombine: fold (A >> C) == (B >> C) --> (A^B) < (1 << C) for constant Cs.
This is common in bitfield code.

llvm-svn: 194925
2013-11-16 16:00:48 +00:00
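An illustrative instance with C = 8 (hypothetical values): A and B agree in
all bits above bit 7 exactly when their xor is below 1 << 8:

  ; before
  %sa = lshr i32 %a, 8
  %sb = lshr i32 %b, 8
  %cmp = icmp eq i32 %sa, %sb

  ; after
  %x = xor i32 %a, %b
  %cmp2 = icmp ult i32 %x, 256   ; 1 << 8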
Arnold Schwaighofer
01b6f1cc9a LoopVectorizer: Use ABI alignment for accesses with no alignment
When we vectorize a scalar access with no alignment specified, we have to set
the target's ABI alignment of the scalar access on the vectorized access.
Keeping the alignment of zero would be wrong because most targets have a
bigger ABI alignment for vector types.

This probably fixes PR17878.

llvm-svn: 194876
2013-11-15 23:09:33 +00:00
Manman Ren
2189fb16b9 ArgumentPromotion: correctly transfer TBAA tags and alignments.
We used to use std::map<IndicesVector, LoadInst*> for OriginalLoads, and when we
tried to promote two arguments, they would both write to OriginalLoads, causing
the loads created for the two arguments to share the same original load, and
thus the same TBAA tag and alignment.

The fix is to use std::map<std::pair<Argument*, IndicesVector>, LoadInst*>
for OriginalLoads, so each Argument will write to different parts of the map.

PR17906

llvm-svn: 194846
2013-11-15 20:41:15 +00:00
Matt Arsenault
2c9ec5c652 Add instcombine visitor for addrspacecast
llvm-svn: 194786
2013-11-15 05:45:08 +00:00
Matt Arsenault
9921608896 Add addrspacecast instruction.
Patch by Michele Scandale!

llvm-svn: 194760
2013-11-15 01:34:59 +00:00
Yunzhong Gao
2ab2f4eec5 Fixing a heisenbug where the memory dependence analysis behaves differently
with and without -g.

Adding a test case to make sure that the threshold used in the memory
dependence analysis is respected. The test case also checks that debug
intrinsics are not counted towards this threshold.

Differential Revision: http://llvm-reviews.chandlerc.com/D2141

llvm-svn: 194646
2013-11-14 01:10:52 +00:00
Diego Novillo
7b4e2dda6b SampleProfileLoader pass. Initial setup.
This adds a new scalar pass that reads a file with samples generated
by 'perf' during runtime. The samples read from the profile are
incorporated and emitted as IR metadata reflecting that profile.

The profile file is assumed to have been generated by an external
profile source. The profile information is converted into IR metadata,
which is later used by the analysis routines to estimate block
frequencies, edge weights and other related data.

External profile information files have no fixed format, each profiler
is free to define its own. This includes both the on-disk representation
of the profile and the kind of profile information stored in the file.
A common kind of profile is based on sampling (e.g., perf), which
essentially counts how many times each line of the program has been
executed during the run.

The SampleProfileLoader pass is organized as a scalar transformation.
On startup, it reads the file given in -sample-profile-file to
determine what kind of profile it contains.  This file is assumed to
contain profile information for the whole application. The profile
data in the file is read and incorporated into the internal state of
the corresponding profiler.

To facilitate testing, I've organized the profilers to support two file
formats: text and native. The native format is whatever on-disk
representation the profiler wants to support; I think this will mostly
be bitcode files, but it could be anything. To do this, every profiler
must implement the
SampleProfile::loadNative() function.

The text format is mostly meant for debugging. Records are separated by
newlines, but each profiler is free to interpret records as it sees fit.
Profilers must implement the SampleProfile::loadText() function.

Finally, the pass will call SampleProfile::emitAnnotations() for each
function in the current translation unit. This function needs to
translate the loaded profile into IR metadata, which the analyzer will
later be able to use.

This patch implements the first steps towards the above design. I've
implemented a sample-based flat profiler. The format of the profile is
fairly simplistic. Each sampled function contains a list of relative
line locations (from the start of the function) together with a count
representing how many samples were collected at that line during
execution. I generate this profile using perf and a separate converter
tool.

Currently, I have only implemented a text format for these profiles. I
am interested in initial feedback to the whole approach before I send
the other parts of the implementation for review.

This patch implements:

- The SampleProfileLoader pass.
- The base ExternalProfile class with the core interface.
- A SampleProfile sub-class using the above interface. The profiler
  generates branch weight metadata on every branch instruction that
  matches the profile.
- A text loader class to assist the implementation of
  SampleProfile::loadText().
- Basic unit tests for the pass.

Additionally, the patch uses profile information to compute branch
weights based on instruction samples.

This patch converts instruction samples into branch weights. It
does a fairly simplistic conversion:

Given a multi-way branch instruction, it calculates the weight of
each branch based on the maximum sample count gathered from each
target basic block.

Note that this assignment of branch weights is somewhat lossy and can be
misleading. If a basic block has more than one incoming branch, all the
incoming branches will get the same weight. In reality, it may be that
only one of them is the most heavily taken branch.

I will adjust this assignment in subsequent patches.

llvm-svn: 194566
2013-11-13 12:22:21 +00:00
Nadav Rotem
e15585e8ff Fold (iszero(A&K1) | iszero(A&K2)) -> (A&(K1|K2)) != (K1|K2) if we know that K1 and K2 are 'one-hot' (only one bit is on).
llvm-svn: 194525
2013-11-12 22:38:59 +00:00
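A sketch with hypothetical one-hot constants K1 = 1 and K2 = 4: the
disjunction is false only when both bits are set, hence the single compare
against K1|K2:

  ; before: (A & 1) == 0 | (A & 4) == 0
  %m1 = and i32 %a, 1
  %z1 = icmp eq i32 %m1, 0
  %m2 = and i32 %a, 4
  %z2 = icmp eq i32 %m2, 0
  %or = or i1 %z1, %z2

  ; after
  %m = and i32 %a, 5               ; K1 | K2
  %r = icmp ne i32 %m, 5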
Nadav Rotem
8fbd606127 FoldBranchToCommonDest merges branches into a single branch with or/and of the condition
It has a heuristic for estimating when some of the dependencies are processed
by out-of-order processors. This patch adds another rule to the heuristic: if
the "BonusInstruction" that we speculatively execute is used by the condition
of the second branch, then it is okay to hoist it.

This change exposes more opportunities for other passes to transform the code.
It does not matter that much whether we if-convert the code, because the
SelectionDAG builder splits or/and branches into multiple branches when
profitable.
llvm-svn: 194524
2013-11-12 22:37:16 +00:00
Rafael Espindola
318b459092 Correctly merge constants with explicit and implicit alignments.
Constant merge can merge a constant with implicit alignment with one that has
explicit alignment. Before this change it was assuming that the explicit
alignment was higher than the implicit one, causing the result to be under
aligned in some cases.

Fixes pr17815.

Patch by Chris Smowton!

llvm-svn: 194506
2013-11-12 20:21:43 +00:00
Benjamin Kramer
d93d22eb86 SimplifyCFG: Use existing constant folding logic when forming switch tables.
Both simpler and more powerful than the hand-rolled folding logic.

llvm-svn: 194475
2013-11-12 12:24:36 +00:00
Shuxin Yang
fc370d42f7 Fix PR17952.
The symptom is that an assertion is triggered. The assertion was added by
me to detect the situation when a value is propagated from dead blocks.
(We could certainly get rid of the assertion; it is safe to do so, because
propagating a value from a dead block to a live join node is fine.)

  The root cause of this bug: edge-splitting is conducted on the fly, and the
edge being split could be a dead edge, therefore the block that splits the
critical edge needs to be flagged "dead" as well.

  There are 3 ways to fix this bug:
  1) Get rid of the assertion, as mentioned earlier.
  2) When a dead edge is split, flag the inserted block "dead".
  3) Proactively split the critical edges connecting dead and live blocks when
     new dead blocks are revealed.

  This fix goes for 3), with 2 additional LOC.

  A test case was added by Rafael the other day.

llvm-svn: 194424
2013-11-11 22:00:23 +00:00
Rafael Espindola
a60eb6af02 Add a testcase for pr17852.
llvm-svn: 194385
2013-11-11 15:37:52 +00:00
Bill Wendling
74b6463d35 Revert "Resurrect r191017 " GVN proceeds in the presence of dead code" plus a fix to PR17307 & 17308."
This causes PR17852.

This reverts commit d93e8a06b2ca09ab18f390cd514b7443e2e571f7.

Conflicts:
	test/Transforms/GVN/cond_br2.ll

llvm-svn: 194348
2013-11-10 07:34:34 +00:00
Nadav Rotem
537b5180e9 SimplifyCFG has a heuristic for out-of-order processors that decides when it is worthwhile to merge branches
It tries to estimate if the operands of the instruction that we want to hoist
are ready. This commit marks function arguments as 'ready' because they
require no calculation. This boosts libquantum and a few other workloads from
the testsuite.
llvm-svn: 194346
2013-11-10 04:13:31 +00:00
Matt Arsenault
044bacb671 Resolve TODO in test now that filecheck has multiple check prefixes.
llvm-svn: 194344
2013-11-10 02:16:47 +00:00
Matt Arsenault
4fcb58f7fb Teach MergeFunctions about address spaces
llvm-svn: 194342
2013-11-10 01:44:37 +00:00
Matt Arsenault
8e8180422b Use variable for register name in test
llvm-svn: 194338
2013-11-10 00:57:17 +00:00
David Majnemer
d1c4acb898 IR: Do not canonicalize constant GEPs into an out-of-bounds array access
Summary:
Consider a GEP of:
i8* getelementptr ({ [2 x i8], i32, i8, [3 x i8] }* @main.c, i32 0, i32 0, i64 0)

If we proceeded to GEP the aforementioned object by 8, we would form a GEP of:
i8* getelementptr ({ [2 x i8], i32, i8, [3 x i8] }* @main.c, i32 0, i32 0, i64 8)

Note that we would go through the first array member, causing an
out-of-bounds access.  This is problematic because we might get fooled
if we are trying to evaluate loads using this GEP, for example, based
off of an object with a constant initializer where the array is zero.

This fixes PR17732.

Reviewers: nicholas, chandlerc, void

Reviewed By: void

CC: llvm-commits, echristo, void, aemerson

Differential Revision: http://llvm-reviews.chandlerc.com/D2093

llvm-svn: 194220
2013-11-07 22:15:53 +00:00
Benjamin Kramer
b611fbe0fd Add test case for PR12377, it was fixed by r194116.
llvm-svn: 194147
2013-11-06 11:55:41 +00:00
Andrew Trick
b072cf7882 Rewrite SCEV's backedge taken count computation.
Patch by Michele Scandale!

Rewrite of the functions used to compute the backedge taken count of a
loop on LT and GT comparisons.

I decided to split the handling of LT and GT cases because the trick
"a > b == -a < -b" in some cases prevents the trip count computation
due to the multiplication by -1 on the two operands of the
comparison. This issue comes from the conservative computation of
value range of SCEVs: taking the negative SCEV of an expression that
has a small positive range (e.g. [0,31]), we would have a SCEV with a
fullset as value range.

Indeed, in the new rewritten function I tried to better handle the
maximum backedge taken count computation when MAX/MIN expressions are
used to handle the cases where no entry guard is found.

Some tests have been modified in order to check the new values correctly
(I checked them manually, and reasoning about possible overflow, the new
values seem correct).

I finally added a new test case related to the multiplication by -1
issue on GT comparisons.

llvm-svn: 194116
2013-11-06 02:08:26 +00:00
Michael Gottesman
04ff2468f0 [objc-arc] Convert the one directional retain/release relation assert to a conditional check + fail.
Due to the previously added overflow checks, we can have a retain/release
relation that is one directional. This occurs specifically when we run into an
additive overflow causing us to drop state in only one direction. If that
occurs, we should bail and not optimize that retain/release instead of
asserting.

Apologies for the size of the testcase. It is necessary to cause the additive
cfg overflow to trigger.

rdar://15377890

llvm-svn: 194083
2013-11-05 16:02:40 +00:00
Matt Arsenault
6df23adcfc Fix another constant folding address space place I missed.
This fixes an assertion failure with a different sized address space.

llvm-svn: 194014
2013-11-04 20:46:52 +00:00
Matt Arsenault
1f521e921d Scalarize select vector arguments when extracted.
When the elements are extracted from a select on vectors
or a vector select, do the select on the extracted scalars
from the input if there is only one use.

llvm-svn: 194013
2013-11-04 20:36:06 +00:00
Manman Ren
65b6ebbeff Rename testing case to use - instead of _.
llvm-svn: 194001
2013-11-04 18:52:06 +00:00
David Majnemer
2bbbbaaeee Revert "Inliner: Handle readonly attribute per argument when adding memcpy"
This reverts commit r193356, it caused PR17781.

A reduced test case covering this regression has been added to the test suite.

llvm-svn: 193955
2013-11-03 12:22:13 +00:00
Bob Wilson
eb7943100b Convert calls to __sinpi and __cospi into __sincospi_stret
This adds a SimplifyLibCalls case which converts the special __sinpi and
__cospi (float & double variants) into a __sincospi_stret where appropriate to
remove duplicated work.

Patch by Tim Northover

llvm-svn: 193943
2013-11-03 06:48:38 +00:00
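In sketch form (illustrative IR; the exact return type and calling
convention are target-dependent):

  ; before: two calls doing duplicated work
  %s = call double @__sinpi(double %x)
  %c = call double @__cospi(double %x)

  ; after: one combined call yielding both results
  %sc = call { double, double } @__sincospi_stret(double %x)
  %s2 = extractvalue { double, double } %sc, 0
  %c2 = extractvalue { double, double } %sc, 1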
Arnold Schwaighofer
fba1c74b67 LoopVectorizer: Perform redundancy elimination on induction variables
When the loop vectorizer was part of the SCC inliner pass manager, GVN would
run after the loop vectorizer, followed by instcombine. This way redundancies
(multiple uses) were removed and instcombine could perform scalarization on the
induction variables. Having moved the loop vectorizer to later we no longer run
any form of redundancy elimination before we perform instcombine. This caused
vectorized induction variables to survive that did not before.

On a recent iMac this helps linpack back from 6000Mflops to 7000Mflops.

This should also help lpbench and paq8p.

I ran a Release (without Asserts) build over the test-suite and did not see any
negative impact on compile time.

radar://15339680

llvm-svn: 193891
2013-11-01 22:18:19 +00:00
Manman Ren
ca35ef0140 Add comments.
llvm-svn: 193874
2013-11-01 18:06:25 +00:00
Benjamin Kramer
3045156cee LoopVectorize: Look for consecutive accesses in GEPs with trailing zero indices
If we have a pointer to a single-element struct we can still build wide loads
and stores to it (if there is no padding).

llvm-svn: 193860
2013-11-01 14:09:50 +00:00
Arnold Schwaighofer
5d7be45165 LoopVectorizer: If dependency checks fail try runtime checks
When a dependence check fails we can still try to vectorize loops with runtime
array bounds checks.

This helps linpack to vectorize a loop in dgefa. And we are back to 2x of the
scalar performance on a corei7-avx.

radar://15339680

llvm-svn: 193853
2013-11-01 03:05:07 +00:00
Manman Ren
bba5655c39 Do not convert "call asm" to "invoke asm" in Inliner.
Given that the backend does not handle "invoke asm" correctly ("invoke asm" will be
handled by SelectionDAGBuilder::visitInlineAsm, which does not have the right
setup for LPadToCallSiteMap) and we already made the assumption that inline asm
does not throw in InstCombiner::visitCallSite, we are going to make the same
assumption in Inliner to make sure we don't convert "call asm" to "invoke asm".

If it becomes necessary to add support for "invoke asm" later on, we will need
to modify the backend as well as remove the assumptions that inline asm does
not throw.

Fix rdar://15317907

llvm-svn: 193808
2013-10-31 21:56:03 +00:00
Rafael Espindola
24353f2de2 Use LTO_SYMBOL_SCOPE_DEFAULT_CAN_BE_HIDDEN instead of the "dso list".
There are two ways one could implement hiding of linkonce_odr symbols in LTO:
* LLVM tells the linker which symbols can be hidden if not used from native
  files.
* The linker tells LLVM which symbols are not used from other object files,
  but will be put in the dso symbol table if present.

GOLD's API is the second option. It was implemented almost 1:1 in llvm by
passing the list down to internalize.

LLVM already had partial support for the first option. It is also very similar
to how ld64 handles hiding these symbols when *not* doing LTO.

This patch then
* removes the APIs for the DSO list.
* marks LTO_SYMBOL_SCOPE_DEFAULT_CAN_BE_HIDDEN all linkonce_odr unnamed_addr
  global values and other linkonce_odr whose address is not used.
* makes the gold plugin responsible for handling the API mismatch.

llvm-svn: 193800
2013-10-31 20:51:58 +00:00
Matt Arsenault
7af7ffc005 Teach scalarrepl about address spaces
llvm-svn: 193720
2013-10-30 22:54:58 +00:00
Matt Arsenault
f608b07c6f Fix GVN creating bitcast between address spaces
llvm-svn: 193710
2013-10-30 19:05:41 +00:00
NAKAMURA Takumi
d21529f67e Add llvm/test/Transforms/SLPVectorizer/ARM/lit.local.cfg. Tests there require ARM in targets.
llvm-svn: 193580
2013-10-29 02:46:00 +00:00
Alp Toker
54480d8824 Fix "existant" typos
llvm-svn: 193579
2013-10-29 02:35:28 +00:00
Arnold Schwaighofer
4c1da89bfc ARM cost model: Unaligned vectorized double stores are expensive
Updated a test case that assumed that <2 x double> would vectorize to use
<4 x float>.

radar://15338229

llvm-svn: 193574
2013-10-29 01:33:57 +00:00
Arnold Schwaighofer
fe80e563da ARM cost model: Account for zero cost scalar SROA instructions
By vectorizing a series of srl, or, ... instructions we have obfuscated the
intention so much that the backend does not know how to fold this code away.

radar://15336950

llvm-svn: 193573
2013-10-29 01:33:53 +00:00
Alp Toker
d1073c1f24 Quote potential shell expansions found in tests
llvm-svn: 193558
2013-10-28 23:37:45 +00:00
Shuxin Yang
eb29e658a0 Revert r193251 : Use address-taken to disambiguate global variable and indirect memops.
llvm-svn: 193489
2013-10-27 03:08:44 +00:00
Andrew Trick
741b7a0cfc Fix SCEVExpander: don't try to expand quadratic recurrences outside a loop.
Partial fix for PR17459: wrong code at -O3 on x86_64-linux-gnu
(affecting trunk and 3.3)

When SCEV expands a recurrence outside of a loop it attempts to scale
by the stride of the recurrence. Chained recurrences don't work that
way. We could compute binomial coefficients, but would have to
guarantee that the chained AddRec's are in a perfectly reduced form.

llvm-svn: 193438
2013-10-25 21:35:56 +00:00
Andrew Trick
16b7e5b553 Fix LSR: don't normalize quadratic recurrences.
Partial fix for PR17459: wrong code at -O3 on x86_64-linux-gnu
(affecting trunk and 3.3)

ScalarEvolutionNormalization was attempting to normalize by adding and
subtracting strides. Chained recurrences don't work that way.

llvm-svn: 193437
2013-10-25 21:35:52 +00:00
Rafael Espindola
37e88054ad Handle calls and invokes in GlobalStatus.
This patch teaches GlobalStatus to analyze a call that uses the global value as
a callee, not as an argument.

With this change internalize can handle the common use of linkonce_odr
functions. This reduces the number of linkonce_odr functions in a LTO build of
clang (checked with the emit-llvm gold plugin option) from 1730 to 60.

llvm-svn: 193436
2013-10-25 21:29:52 +00:00
Hal Finkel
d554c99b37 LoopVectorizer: Don't attempt to vectorize extractelement instructions
The loop vectorizer does not currently understand how to vectorize
extractelement instructions. The existing check, which excluded all
vector-valued instructions, did not catch extractelement instructions because
it checked only the return value. As a result, vectorization would proceed,
producing illegal instructions like this:

  %58 = extractelement <2 x i32> %15, i32 0
  %59 = extractelement i32 %58, i32 0

where the second extractelement is illegal because its first operand is not a vector.

llvm-svn: 193434
2013-10-25 20:40:15 +00:00
Tom Stellard
3863b94253 Inliner: Handle readonly attribute per argument when adding memcpy
Patch by: Vincent Lejeune

llvm-svn: 193356
2013-10-24 16:38:33 +00:00
Renato Golin
d182b91bea I had to move and remove
llvm-svn: 193355
2013-10-24 16:31:43 +00:00
Renato Golin
e830b13dc2 Fix broken builds by moving test to x86 dir
llvm-svn: 193351
2013-10-24 15:11:03 +00:00
Renato Golin
ae79e04f36 Mark vector loops as already vectorized
Make sure we mark all loops (scalar and vector) when vectorizing,
so that we don't try to vectorize them anymore. Also, set unroll
to 1, since this is what we check for on early exit.

llvm-svn: 193349
2013-10-24 14:50:51 +00:00
Juergen Ributzka
db5b9c734f Fix a bug in LinearFunctionTestReplace that created invalid loop exit checks.
Reviewed by Andy

llvm-svn: 193303
2013-10-24 05:29:56 +00:00
Shuxin Yang
45a453cafe Use address-taken to disambiguate global variable and indirect memops.
Major steps include:
 1) Introduce a not-addr-taken bit-field in GlobalVariable.
 2) The GlobalOpt pass sets "not-address-taken" if it proves a global variable
    doesn't have its address taken.
 3) AA uses this info for disambiguation.

llvm-svn: 193251
2013-10-23 17:28:19 +00:00
Tom Stellard
6f3b22d6ce SimplifyCFG: Don't duplicate calls to functions marked noduplicate v2
v2:
  - Use CI->cannotDuplicate()

llvm-svn: 193115
2013-10-21 20:07:30 +00:00
Matt Arsenault
8faf1f4444 Teach SimplifyCFG about address spaces
llvm-svn: 193104
2013-10-21 18:55:08 +00:00
Rafael Espindola
34304975c5 Optimize more linkonce_odr values during LTO.
When a linkonce_odr value that is on the dso list is not unnamed_addr
we can still look to see if anything is actually using its address. If
not, it is safe to hide it.

This patch implements that by moving GlobalStatus to Transforms/Utils
and using it in Internalize.

llvm-svn: 193090
2013-10-21 17:14:55 +00:00
Bill Wendling
ab46af5515 Don't eliminate a partially redundant load if it's in a landing pad.
A landing pad can be jumped to only by the unwind edge of an invoke
instruction. If we eliminate a partially redundant load in a landing pad, it
will create a basic block that violates this constraint. It then leads to other
problems down the line if it tries to merge that basic block with the landing
pad. Avoid this by not eliminating the load in a landing pad.

PR17621

llvm-svn: 193064
2013-10-21 04:09:17 +00:00
Michael Gottesman
f056facc24 Teach simplify-cfg how to correctly create covered lookup tables for switches on iN with N >= 3.
One optimization simplify-cfg performs is the converting of switches to
lookup tables if the switch has > 4 cases. This is done by:

1. Finding the max/min case value and calculating the switch case range.
2. Create a lookup table basic block.
3. Perform a check in the switch's BB to see if the input value is in
the switch's case range. If the input value satisfies said predicate
branch to the lookup table BB, otherwise branch to the switch's default
destination BB using the default value as the result.

The conditional check consists of subtracting the min case value of the
table from any input iN value and then ensuring that said value is
unsigned less than the size of the lookup table represented as an iN
value.

If the lookup table is a covered lookup table, the size of the table will be
2^N, which is 0 as an iN value. Thus the comparison will be an `icmp ult` of an
iN value against 0, which is always false, yielding the incorrect result.

This patch fixes the problem by recognizing when we have a covered lookup
table and, if we do, unconditionally jumping to the lookup table BB, since the
covering property of the lookup table implies that every input value is
handled by said BB.

rdar://15268442

llvm-svn: 193045
2013-10-20 07:04:37 +00:00
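A minimal sketch of the broken guard for a covered i2 table (4 cases, so the
table size 2^2 wraps to 0 as an i2):

  %off = sub i2 %x, 0          ; input minus the minimum case value
  %in = icmp ult i2 %off, 0    ; "offset < table size" -- always false
  br i1 %in, label %lookup, label %default

The fix instead branches unconditionally to %lookup in this case.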
Bill Wendling
fda67cdf91 Perform an intelligent splice of the predecessor with the single successor.
If the predecessor's being spliced into a landing pad, then we need the PHIs to
come first and the rest of the predecessor's code to come *after* the landing
pad instruction.

llvm-svn: 193035
2013-10-19 11:27:12 +00:00
Arnold Schwaighofer
789187ee86 SLPVectorizer: Don't vectorize volatile memory operations
radar://15231682

Reapply r192799,
  http://lab.llvm.org:8011/builders/lldb-x86_64-debian-clang/builds/8226
showed that the bot is still broken even with this out.

llvm-svn: 192820
2013-10-16 17:52:40 +00:00
Arnold Schwaighofer
7097263371 Revert "SLPVectorizer: Don't vectorize volatile memory operations"
This speculatively reverts commit 192799. It might have broken a linux buildbot.

llvm-svn: 192816
2013-10-16 17:19:40 +00:00
Arnold Schwaighofer
eebda9d6cf SLPVectorizer: Don't vectorize volatile memory operations
radar://15231682

llvm-svn: 192799
2013-10-16 16:09:00 +00:00
Arnold Schwaighofer
38ec37faba SLPVectorizer: Sort PHINodes based on their opcode
Before this patch we relied on the order of phi nodes when we looked for phi
nodes of the same type. This could prevent vectorization of cases where there
was a phi node of a second type in between phi nodes of the first type.

This is important for vectorization of an internal graphics kernel. On the test
suite + external on x86_64 (and on a run on armv7s) it showed no impact on
either performance or compile time.

radar://15024459

llvm-svn: 192537
2013-10-12 18:56:27 +00:00
Shuxin Yang
708f0cc2e4 Fix a bug in Dead Argument Elimination.
If a function seen at compile time is not necessarily the one linked into
the binary being built, it is illegal to change the actual arguments
passed to it.

  e.g. 
   --------------------------
   void foo(int lol) {
     // foo() has linkage satisfying isWeakForLinker()
     // "lol" is not used at all.
   }

   void bar(int lol2) {
      // xform to foo(undef) is illegal, as the compiler does not know which
      // instance of foo() will be linked into the binary being built.
      foo(lol2);
   }
  -----------------------------

  Such functions can be captured by isWeakForLinker(). NOTE that
mayBeOverridden() is insufficient for this purpose, as it doesn't include
linkage types like AvailableExternallyLinkage and LinkOnceODRLinkage.
Take linkonce_odr as an example: it indicates a set of *EQUIVALENT* globals
that can be merged at link time. However, the semantics of
*EQUIVALENT*-functions include parameters. Changing parameters breaks
the assumption.

  Thanks to John McCall for help, especially for the explanation of the subtle
differences between linkage types.

  rdar://11546243

llvm-svn: 192302
2013-10-09 17:21:44 +00:00
Arnold Schwaighofer
bfe48b104a LoopVectorize: External uses must use the last value in a reduction cycle
Otherwise, we don't perform operations that would have been performed on
the scalar version.

Fixes PR17498.

llvm-svn: 192133
2013-10-07 21:05:43 +00:00
Alexey Samsonov
a04533615d Revert r191834 until we measure the effect of this benchmarks and maybe find a better way to fix it
llvm-svn: 192121
2013-10-07 19:03:24 +00:00
Matt Arsenault
9c8541d286 Change objectsize intrinsic to accept different address spaces.
Bitcasting everything to i8* won't work. Autoupgrade the old
intrinsic declarations to use the new mangling.

llvm-svn: 192117
2013-10-07 18:06:48 +00:00
Manman Ren
b6cdd4e959 Debug Info: In DIBuilder, the derived-from field of a DW_TAG_pointer_type
is updated to use DITypeRef.

Move isUnsignedDIType and getOriginalTypeSize from DebugInfo.h to be static
helper functions in DwarfCompileUnit. We already have a static helper function
"isTypeSigned" in DwarfCompileUnit, and a pointer to DwarfDebug is added to
resolve the derived-from field. All three functions need to go across link
for derived-from fields, so we need to get hold of a type identifier map.

A pointer to DwarfDebug is also added to DbgVariable in order to resolve the
derived-from field.

Debug info verifier is updated to check a derived-from field is a TypeRef.
Verifier will not go across link for derived-from fields, in debug info finder,
we go across the link to add derived-from fields to types.

Function getDICompositeType is only used by dragonegg and since dragonegg does
not generate identifier for types, we use an empty map to resolve the
derived-from field.

When printing a derived-from field, we use DITypeRef::getName to either return
the type identifier or getName of the DIType.

A paired commit at clang is required due to changes to DIBuilder.

llvm-svn: 192018
2013-10-05 01:43:03 +00:00
Hal Finkel
83df3058ed UpdatePHINodes in BasicBlockUtils should not crash on duplicate predecessors
UpdatePHINodes has an optimization to reuse an existing PHI node, where it
first deletes all of its entries and then replaces them. Unfortunately, in the
case where we had duplicate predecessors (which are allowed so long as the
associated PHI entries have the same value), the loop removing the existing PHI
entries from the to-be-reused PHI would assert (if that PHI was not the one
which had the duplicates).

llvm-svn: 192001
2013-10-04 23:41:05 +00:00
Arnold Schwaighofer
6fab174840 SLPVectorizer: Sort inputs to commutative binary operations
Sort the operands of the other entries in the current vectorization root
according to the opcodes of the first entry's operands.

%conv0 = uitofp ...
%load0 = load float ...

= fmul %conv0, %load0
= fmul %load0, %conv1
= fmul %load0, %conv2

Make sure that we recursively vectorize <%conv0, %conv1, %conv2> and <%load0,
%load0, %load0>.

This makes it more likely to obtain vectorizable trees. We have to be careful
when we sort that we don't destroy 'good' existing ordering implied by source
order.

radar://15080067

llvm-svn: 191977
2013-10-04 20:39:16 +00:00
Eric Christopher
88a8082ea0 Temporarily revert r191792 as it is causing some LTO debug failures
on platforms with relocations in debug info and also temporarily
revert r191800 due to conflicts with the revert of r191792.

llvm-svn: 191967
2013-10-04 17:08:38 +00:00
Owen Anderson
5e2eba8c18 Pull fptruncs upwards through selects when one of the select's selectands is a constant. This has a number of benefits, including producing small immediates (easier to materialize, smaller constant pools) as well as being more likely to allow the fptrunc to fuse with a preceding instruction (truncating selects are unusual).
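A minimal IR sketch of the fold, with hypothetical values:

  ; Before: the constant reaches the fptrunc only through the select.
  %sel = select i1 %cond, double %x, double 2.0
  %res = fptrunc double %sel to float

  ; After: the fptrunc is pulled above the select and the constant
  ; half is truncated at compile time.
  %x.tr = fptrunc double %x to float
  %res  = select i1 %cond, float %x.tr, float 2.0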
llvm-svn: 191929
2013-10-03 21:08:05 +00:00
Rafael Espindola
a88c415234 Optimize linkonce_odr unnamed_addr functions during LTO.
Generalize the API so we can distinguish symbols that are needed just for a DSO
symbol table from those that are used from some native .o.

The symbols that are only wanted for the dso symbol table can be dropped if
llvm can prove every other dso has a copy (linkonce_odr) and the address is not
important (unnamed_addr).

llvm-svn: 191922
2013-10-03 18:29:09 +00:00
Matt Arsenault
c8e9a77ae3 Make gep i8* X, -(ptrtoint Y) transform work with address spaces
llvm-svn: 191920
2013-10-03 18:15:57 +00:00
Matt Arsenault
2c180f66a9 Don't use runtime bounds check between address spaces.
Don't vectorize with a runtime check if it requires a
comparison between pointers with different address spaces.
The values can't be assumed to be directly comparable.
Previously it would create an illegal bitcast.

llvm-svn: 191862
2013-10-02 22:38:17 +00:00
Matt Arsenault
c43958bf36 Fix missing CHECK-LABELs
llvm-svn: 191853
2013-10-02 20:29:00 +00:00
Yi Jiang
09195de8f3 Apply slp vectorization on fully-vectorizable tree of height 2
llvm-svn: 191852
2013-10-02 20:20:39 +00:00
Benjamin Kramer
4458ae839d SLPVectorizer: Make store chain finding more aggressive with GetUnderlyingObject.
This recursively strips all GEPs like the existing code. It also handles bitcasts and
other operations that do not change the pointer value.

llvm-svn: 191847
2013-10-02 19:06:06 +00:00
Tom Stellard
b9cbfd3959 StructurizeCFG: Add dependency on LowerSwitch pass
Switch instructions were crashing the StructurizeCFG pass, and it's
probably easier anyway if we don't need to handle them in this pass.

Reviewed-by: Christian König <christian.koenig@amd.com>
llvm-svn: 191841
2013-10-02 17:04:59 +00:00
Alexey Samsonov
3291ccfed7 Remove "localize global" optimization
Summary:
As discussed in http://llvm-reviews.chandlerc.com/D1754,
this optimization isn't really valid for C, and fires too rarely anyway.

Reviewers: rafael, nicholas

Reviewed By: nicholas

CC: rnk, llvm-commits, nicholas

Differential Revision: http://llvm-reviews.chandlerc.com/D1769

llvm-svn: 191834
2013-10-02 15:31:34 +00:00
Manman Ren
2771d4ef9c Debug Info: In DIBuilder, the derived-from field of a DW_TAG_pointer_type
is updated to use DITypeRef.

Move isUnsignedDIType and getOriginalTypeSize from DebugInfo.h to be static
helper functions in DwarfCompileUnit. We already have a static helper function
"isTypeSigned" in DwarfCompileUnit, and a pointer to DwarfDebug is added to
resolve the derived-from field. All three functions need to go across the link
for derived-from fields, so we need to get hold of a type identifier map.

A pointer to DwarfDebug is also added to DbgVariable in order to resolve the
derived-from field.

The debug info verifier is updated to check that a derived-from field is a
TypeRef. The verifier will not go across the link for derived-from fields; in
the debug info finder, we go across the link to add derived-from fields to types.

Function getDICompositeType is only used by dragonegg, and since dragonegg does
not generate identifiers for types, we use an empty map to resolve the
derived-from field.

When printing a derived-from field, we use DITypeRef::getName to either return
the type identifier or getName of the DIType.

A paired commit at clang is required due to changes to DIBuilder.

llvm-svn: 191800
2013-10-01 23:45:54 +00:00
Matt Arsenault
e06d79aa83 Don't merge tiny functions.
It's silly to merge functions like these:

define void @foo(i32 %x) {
  ret void
}

define void @bar(i32 %x) {
  ret void
}

to get

define void @bar(i32) {
  tail call void @foo(i32 %0)
  ret void
}

llvm-svn: 191786
2013-10-01 18:05:30 +00:00
Benjamin Kramer
e04c4c3faf SCEVExpander: Fix a regression I introduced by too eagerly adding RAII objects.
PR17425.

llvm-svn: 191741
2013-10-01 12:17:11 +00:00
Matt Arsenault
bc0b70cd8d Use right address space size in InstCombineCompares
The test's output doesn't change, but this ensures
this is actually hit with a different address space.

llvm-svn: 191701
2013-09-30 21:11:01 +00:00
Matt Arsenault
27f5203bba Constant fold ptrtoint + compare with address spaces
llvm-svn: 191699
2013-09-30 21:06:18 +00:00
Manman Ren
ad317a135a TBAA: update tbaa format from scalar format to struct-path aware format.
llvm-svn: 191690
2013-09-30 18:17:55 +00:00
Manman Ren
799fd39420 TBAA: remove !tbaa from testing cases when they are not needed.
llvm-svn: 191689
2013-09-30 18:17:35 +00:00
Benjamin Kramer
33abdcddb3 IRBuilder: Add RAII objects to reset insertion points or fast math flags.
Inspired by the object from the SLPVectorizer. This found a minor bug in the
debug loc restoration in the vectorizer where the location of a following
instruction was attached instead of the location from the original instruction.

llvm-svn: 191673
2013-09-30 15:39:48 +00:00
Joey Gouly
b2a84bfcc1 Fix a bug in InstCombine where it attempted to cast a Value* to an Instruction*
when it was actually a Constant*.

There are quite a few other casts to Instruction that might have the same problem,
but this is the only one I have a test case for.

llvm-svn: 191668
2013-09-30 14:18:35 +00:00
Benjamin Kramer
17653a5c14 Add a test that large offsets on GEPs on 32 bits targets are handled correctly.
llvm-svn: 191628
2013-09-28 21:27:49 +00:00
Matt Arsenault
f672e722f2 Use right pointer type in DebugIR
llvm-svn: 191576
2013-09-27 22:26:25 +00:00
Matt Arsenault
0dc0668061 Fix SLPVectorizer using wrong address space for load/store
llvm-svn: 191564
2013-09-27 21:24:57 +00:00
Justin Bogner
d2e08f6deb InstCombine: Only foldSelectICmpAndOr for integer types
Currently foldSelectICmpAndOr asserts if the "or" involves a vector
containing several of the same power of two. We can easily avoid this by
only performing the fold on integer types, like foldSelectICmpAnd does.

Fixes <rdar://problem/15012516>

llvm-svn: 191552
2013-09-27 20:35:39 +00:00
Manman Ren
2ef9ca7627 TBAA: handle scalar TBAA format and struct-path aware TBAA format.
Remove the command line argument "struct-path-tbaa" since we should not depend
on a command line argument to decide which format the IR file is using. Instead,
we check the first operand of the tbaa tag node: if it is an MDNode, we treat
it as the struct-path aware TBAA format; otherwise, we treat it as the scalar
TBAA format.

Once clang emits the struct-path aware TBAA format regardless of whether
struct-path-tbaa is on, and we can auto-upgrade existing bc files, the support
for the scalar TBAA format can be dropped.

Existing testing cases are updated to use the struct-path aware TBAA format.

llvm-svn: 191538
2013-09-27 18:34:27 +00:00
Justin Bogner
db7f32982e Transforms: Use getFirstNonPHI to set the insertion point for PHIs
We were previously using getFirstInsertionPt to insert PHI
instructions when vectorizing, but getFirstInsertionPt also skips past
landingpads, causing this to generate invalid IR.

We can avoid this issue by using getFirstNonPHI instead.

llvm-svn: 191526
2013-09-27 15:30:25 +00:00
Arnold Schwaighofer
9830391a8f SLPVectorize: Put horizontal reductions feeding a store under separate flag
Put them under a separate flag for experimentation. They are more likely to
interfere with loop vectorization, which happens later in the pass pipeline.

llvm-svn: 191371
2013-09-25 14:02:32 +00:00
Yi Jiang
f3cf79719b Test case for r191314.
Some supplemental information for r191314: we would like to make sure the SLP vectorizer will not try to vectorize tiny trees even with a negative threshold, so we set the cost to INT_MAX. 

llvm-svn: 191327
2013-09-24 19:33:53 +00:00
Benjamin Kramer
07fbe3863f Verify that we don't optimize null return checks to the nothrow_t version of operator new.
llvm-svn: 191325
2013-09-24 18:37:49 +00:00
Benjamin Kramer
3ad5ca9c1c MemoryBuiltins: Reinstate optimizing (uninitialized) loads from operator new.
llvm-svn: 191315
2013-09-24 17:34:29 +00:00
Benjamin Kramer
e77aa22768 MemoryBuiltins: Fix operator new bits.
We really don't want to optimize malloc return value checks away.

llvm-svn: 191313
2013-09-24 17:15:14 +00:00
Benjamin Kramer
bc13e7ad78 Teach MemoryBuiltins and InstructionSimplify that operator new never returns NULL.
This is safe per C++11 18.6.1.1p3: [operator new returns] a non-null pointer to
suitably aligned storage (3.7.4), or else throw a bad_alloc exception. This
requirement is binding on a replacement version of this function.
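As an illustrative sketch, using the Itanium-mangled operator new(size_t):

  declare noalias i8* @_Znwm(i64)

  %p = call i8* @_Znwm(i64 16)
  %c = icmp eq i8* %p, null
  ; %c folds to false: this operator new either returns a valid
  ; pointer or throws, so the null check is dead.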

Brings us a tiny bit closer to eliminating more vector push_backs.

llvm-svn: 191310
2013-09-24 16:37:51 +00:00
Arnold Schwaighofer
b1cea2cfcc Revert "LoopVectorizer: Only allow vectorization of intrinsics."
Revert 191122 - with extra checks we are allowed to vectorize math library
function calls.

Standard library identifiers are reserved names, so functions with external
linkage must not override them. However, functions with internal linkage can.

Therefore, we can vectorize calls to math library functions with a check for
external linkage and matching signature. This matches what we do during
SelectionDAG building.

llvm-svn: 191206
2013-09-23 14:54:39 +00:00
Benjamin Kramer
48e843e09c Expand test case a bit.
llvm-svn: 191205
2013-09-23 14:41:35 +00:00
Benjamin Kramer
5318f92353 InstSimplify: Fold equality comparisons between non-inbounds GEPs.
Overflow doesn't affect the correctness of equalities. Computing this is cheap;
we just reuse the computation for the inbounds case and try to peel off more
non-inbounds GEPs. This pattern is unlikely to ever appear in code generated by
Clang, but SCEV occasionally produces it.
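A small sketch of the kind of fold this enables (hypothetical values):

  %p = getelementptr i8* %base, i64 %i
  %q = getelementptr i8* %base, i64 %j
  %c = icmp eq i8* %p, %q
  ; simplifies to comparing the offsets, even without inbounds:
  ; wraparound changes both sides identically modulo 2^64, so it
  ; cannot affect equality.
  %c = icmp eq i64 %i, %j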

llvm-svn: 191200
2013-09-23 14:16:38 +00:00
Benjamin Kramer
190baf6fef SROA: Handle casts involving vectors of pointers and integer scalars.
SROA wants to convert any types of equivalent widths but it's not possible to
convert vectors of pointers to an integer scalar with a single cast. As a
workaround we add a bitcast to the corresponding int ptr type first. This type
of cast used to be an edge case but has become common with SLP vectorization.
Fixes PR17271.
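A sketch of the workaround cast sequence (hypothetical types, assuming
64-bit pointers):

  ; <2 x i32*> cannot be cast to i128 in one step, so first go
  ; through the corresponding integer-pointer vector type:
  %v.int = ptrtoint <2 x i32*> %v to <2 x i64>
  %bits  = bitcast <2 x i64> %v.int to i128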

llvm-svn: 191143
2013-09-21 20:36:04 +00:00
Arnold Schwaighofer
c1f8473eb6 Reapply "SLPVectorizer: Handle more horizontal reductions (disabled)""
Reapply r191108 with a fix for a memory corruption error I introduced.  Of
course, we can't reference the scalars that we replace by vectorizing and then
call their eraseFromParent method. I only 'needed' the scalars to get the
DebugLoc. Just store the DebugLoc before actually vectorizing instead. As a nice
side effect, this also simplifies the interface between BoUpSLP and the
HorizontalReduction class to returning a value pointer (the vectorized tree
root).

radar://14607682

llvm-svn: 191123
2013-09-21 01:06:00 +00:00
Nadav Rotem
a10aa3ffa1 LoopVectorizer: Only allow vectorization of intrinsics. We can't know for sure that the functions 'abs' or 'round' are the functions from libm.
rdar://15012650

llvm-svn: 191122
2013-09-21 00:27:05 +00:00
Arnold Schwaighofer
c81e140238 Revert "SLPVectorizer: Handle more horizontal reductions (disabled)"
This reverts commit r191108.

The horizontal.ll test case fails under libgmalloc. Thanks Shuxin for pointing
this out to me.

llvm-svn: 191121
2013-09-21 00:06:20 +00:00
Shuxin Yang
640fbb44a5 Resurrect r191017 " GVN proceeds in the presence of dead code" plus a fix to PR17307 & 17308.
The problem with r191017 is that when GVN fabricates a val-number for a dead instruction (in order
to make the following expr-PRE happy), it forgets to fabricate a leader-table entry for it as well.

llvm-svn: 191118
2013-09-20 23:12:57 +00:00
Arnold Schwaighofer
db78859615 SLPVectorizer: Handle more horizontal reductions (disabled)
Match reductions starting at a binary operation feeding into a phi. The code
handles trees like

 r += v1 + v2 + v3 ...

and

 r += v1
 r += v2
 ...

and

 r *= v1 + v2 + ...

We currently only handle associative operations (add, fadd fast).

The code can now also handle reductions feeding into stores.

 a[i] = v1 + v2 + v3 + ...

The code is currently disabled behind the flag "-slp-vectorize-hor".  The cost
model for most architectures is not there yet.

I found one opportunity of a horizontal reduction feeding a phi in TSVC
(LoopRerolling-flt) and there are several opportunities where reductions feed
into stores.

radar://14607682

llvm-svn: 191108
2013-09-20 21:18:20 +00:00
Joerg Sonnenberger
ed165ebb1f Delete empty files.
llvm-svn: 191105
2013-09-20 20:40:22 +00:00
Joerg Sonnenberger
d4339cb110 Revert r191017, it results in segmentation faults in Qt.
llvm-svn: 191104
2013-09-20 20:33:57 +00:00
Benjamin Kramer
7ea950a209 InstCombine: Canonicalize (gep i8* X, -(ptrtoint Y)) to (sub (ptrtoint X), (ptrtoint Y))
The GEP pattern is what SCEV expander emits for "ugly geps". The latter is what
you get for pointer subtraction in C code. The rest of instcombine already
knows how to deal with that so just canonicalize on that.
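Roughly, with hypothetical values:

  ; Before: the "ugly gep" form emitted for pointer subtraction
  %y.int = ptrtoint i8* %y to i64
  %neg   = sub i64 0, %y.int
  %gep   = getelementptr i8* %x, i64 %neg
  %diff  = ptrtoint i8* %gep to i64

  ; After: canonical integer subtraction of the two pointers
  %x.int = ptrtoint i8* %x to i64
  %y.int = ptrtoint i8* %y to i64
  %diff  = sub i64 %x.int, %y.int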

llvm-svn: 191090
2013-09-20 14:38:44 +00:00
Shuxin Yang
44b8e62121 [Fast-math] Disable "(C1/X)*C2 => (C1*C2)/X" if C1/X has multiple uses.
If "C1/X" were having multiple uses, the only benefit of this
transformation is to potentially shorten critical path. But it is at the
cost of instroducing additional div.

  The additional div may or may not incur cost depending on how div is
implemented. If it is implemented using Newton–Raphson iteration, it dosen't
seem to incur any cost (FIXME). However, if the div blocks the entire
pipeline, that sounds to be pretty expensive. Let CodeGen to take care 
this transformation.
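A sketch of the single-use case that still fires (constants chosen arbitrarily):

  %div = fdiv fast float 8.0, %x     ; C1/X with a single use
  %res = fmul fast float %div, 2.0   ; (C1/X) * C2
  ; becomes
  %res = fdiv fast float 16.0, %x    ; (C1*C2)/X
  ; With multiple uses of %div, the fold is now skipped.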

  This patch shows a 6% improvement on a benchmark.

rdar://15032743

llvm-svn: 191037
2013-09-19 21:13:46 +00:00
Benjamin Kramer
b681a45715 InstCombine: Don't allow turning vector-of-pointer loads into vector-of-integer.
The code below can't handle any pointers. PR17293.

llvm-svn: 191036
2013-09-19 20:59:04 +00:00
Shuxin Yang
76ccefc5f8 GVN proceeds in the presence of dead code.
This is how it ignores the dead code:
1) When a dead branch target, say block B, is identified, all the
   blocks dominated by B are dead as well.

2) The PHIs of the blocks in dominance-frontier(B) are updated such
   that the operands corresponding to dead predecessors are replaced
   by "UndefVal".

   In lattice jargon, the "UndefVal" is essentially the "Top". A phi
   node like "phi(v1 bb1, undef xx)" will be optimized into "v1" if
   v1 is a constant, or if v1 is an instruction which dominates this
   PHI node (see the sketch after this list).

3) When analyzing the availability of a load L, all dead mem-ops on
   which L depends are treated as loads which evaluate to exactly the
   same value as L.

4) The dead mem-ops will be materialized as "UndefVal" during code motion.
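A minimal sketch of the phi treatment from item 2 (hypothetical blocks):

  %p = phi i32 [ %v1, %bb1 ], [ undef, %deadbb ]
  ; once %deadbb is known dead, %p is treated as plain %v1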

llvm-svn: 191017
2013-09-19 17:22:51 +00:00
Chandler Carruth
cfe57b2ba4 Name the XCore target-specific subdirectories canonically.
llvm-svn: 190940
2013-09-18 14:08:30 +00:00
NAKAMURA Takumi
22aae89022 A couple of tests, in llvm/test/Transforms/*/xcore, are XCore-specific. They should be excluded when XCore is not built.
llvm-svn: 190938
2013-09-18 13:56:16 +00:00
Robert Lytton
b41e4ff222 Prevent LoopVectorizer and SLPVectorizer running if the target has no vector registers.
XCore target: Add XCoreTargetTransformInfo
This is where getNumberOfRegisters() resides, which in turn returns the
number of vector registers (=0).

llvm-svn: 190936
2013-09-18 12:43:35 +00:00
Andrea Di Biagio
dc361820ef Re-add tests from r179291 which were accidentally removed by r181177.
llvm-svn: 190934
2013-09-18 12:06:59 +00:00
Matt Arsenault
e36237bda3 Fix a constant folding address space place I missed.
If address space 0 was smaller than the address space
in a constant inttoptr/ptrtoint pair, the wrong mask size
would be used.

llvm-svn: 190899
2013-09-17 23:23:16 +00:00
Quentin Colombet
1950396a9e Revert the load slicing done in r190870.
To avoid regressions with bitfield optimizations, this slicing should take place
later, e.g., at ISel time.

llvm-svn: 190891
2013-09-17 22:01:26 +00:00
Matt Arsenault
6fd5ad85d0 Cleanup handling of constant function casts.
Some of this code is no longer necessary since int<->ptr casts no
longer occur as of r187444.

This also fixes handling vectors of pointers, and adds a bunch of new
testcases for vectors and address spaces.

llvm-svn: 190885
2013-09-17 21:10:14 +00:00
Arnold Schwaighofer
43c2040076 SLPVectorizer: Don't vectorize phi nodes that use invoke values
We can't insert an insertelement after an invoke. We would have to split a
critical edge. So when we see a phi node that uses an invoke we just give up.

radar://14990770

llvm-svn: 190871
2013-09-17 17:03:29 +00:00
Quentin Colombet
3d996b9289 [InstCombiner] Slice a big load into two loads when the elements are next to each
other in memory.

The motivation was to get rid of truncate and shift right instructions that get
in the way of paired loads or floating point loads.
E.g.,
Consider the following example:
struct Complex {
  float real;
  float imm;
};

When accessing a Complex, llvm was generating a 64-bit load, and the imm field
was obtained by a trunc(lshr) sequence, resulting in poor code generation, at
least for x86.

The idea is to declare that two load instructions are the canonical form for
loading two arithmetic types which are next to each other in memory.

Two scalar loads at a constant offset from each other are pretty
easy to detect for the sorts of passes that like to mess with loads. 
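A rough IR sketch of the rewrite (shown with integer fields for simplicity;
values hypothetical):

  ; Before: one wide load plus trunc/lshr to split out the fields
  %bits = load i64* %c
  %lo   = trunc i64 %bits to i32
  %shr  = lshr i64 %bits, 32
  %hi   = trunc i64 %shr to i32

  ; After: two adjacent narrow loads
  %p.lo = bitcast i64* %c to i32*
  %lo2  = load i32* %p.lo
  %p.hi = getelementptr i32* %p.lo, i64 1
  %hi2  = load i32* %p.hi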

<rdar://problem/14477220>

llvm-svn: 190870
2013-09-17 16:57:34 +00:00
Stepan Dyatkovskiy
124d49fc91 Bugfix for PR17099:
Wrong cast operation.
MergeFunctions emits a bitcast instead of a pointer-to-integer operation.
The patch fixes the MergeFunctions::writeThunk function. It replaces
unconditional bitcast creation with a "Value* createCast(...)" method that
checks the operand types and selects the proper instruction.
See the unit test as an example, and the sketch below.
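A hedged sketch of the thunk shape after the fix (names hypothetical):

  declare i8* @real(i8*)

  ; Forwarding an i64 argument to an equivalent function taking i8*:
  ; a plain bitcast between the two is invalid IR, so createCast must
  ; select inttoptr instead.
  define i8* @thunk(i64 %x) {
    %p = inttoptr i64 %x to i8*
    %r = tail call i8* @real(i8* %p)
    ret i8* %r
  }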

llvm-svn: 190859
2013-09-17 09:36:11 +00:00
Krzysztof Parzyszek
21dfcf1c7d Add testcase for r190631
llvm-svn: 190807
2013-09-16 21:24:30 +00:00
Arnold Schwaighofer
11f318e34c Don't vectorize if there are outside loop users of the induction variable.
We would have to compute the pre-increment value, either by computing it on
every loop iteration or by splitting the edge out of the loop and inserting a
computation for it there.

For now, just give up vectorizing such loops.

Fixes PR17179.

llvm-svn: 190790
2013-09-16 16:17:24 +00:00
Chandler Carruth
d47d52e219 Remove the long, long defunct IR block placement pass.
This pass was based on the previous (essentially unused) profiling
infrastructure and the assumption that by ordering the basic blocks at
the IR level in a particular way, the correct layout would happen in the
end. This sometimes worked, and mostly didn't. It also was a really
naive implementation of the classical paper that dates from when branch
predictors were primarily directional and when loop structure wasn't
commonly available. It also didn't factor into the equation
non-fallthrough branches and other machine level details.

Anyways, for all of these reasons and more, I wrote
MachineBlockPlacement, which completely supersedes this pass. It both
uses modern profile information infrastructure, and actually works. =]

llvm-svn: 190748
2013-09-14 09:28:14 +00:00
Matt Arsenault
9196ae883b Add missing CHECK-LABEL
llvm-svn: 190740
2013-09-14 02:44:06 +00:00
Matt Arsenault
5525e4f3e8 Add test for untested path in SimplifyCFG
This case wasn't checked with a pointer condition.

llvm-svn: 190739
2013-09-14 02:44:02 +00:00
Hal Finkel
87afacb632 Implement TTI getUnrollingPreferences for PowerPC
The PowerPC A2 core greatly benefits from aggressive concatenation unrolling;
use the new getUnrollingPreferences to enable this by default when targeting
the PPC A2 core.

llvm-svn: 190549
2013-09-11 21:20:40 +00:00
Matt Arsenault
d35880d234 Teach loop-idiom about address space pointer sizes
llvm-svn: 190491
2013-09-11 05:09:42 +00:00
Matt Arsenault
1134ac465f Fix missing CHECK-LABELs
llvm-svn: 190426
2013-09-10 19:57:05 +00:00
Eli Friedman
f2904ff59c Don't shrink atomic ops to bool in GlobalOpt.
LLVM IR doesn't currently allow atomic bool load/store operations, and the
transformation is dubious anyway because it isn't profitable on all platforms.

PR17163.

llvm-svn: 190357
2013-09-09 22:00:13 +00:00
Quentin Colombet
48ec8d3046 [InstCombiner] Expose opportunities to merge subtract and comparison.
Several architectures use the same instruction to perform both a comparison and
a subtract. The instruction selection framework does not allow considering
different basic blocks to expose such fusion opportunities.

Therefore, these instructions are “merged” by CSE at MI IR level.

To increase the likelihood that CSE applies in such situations, we reorder the
operands of the comparison, when they have the same complexity, so that they
match the order of the most frequent subtract.
E.g.,

icmp A, B
...
sub B, A

<rdar://problem/14514580>

llvm-svn: 190352
2013-09-09 20:56:48 +00:00
Bob Wilson
6c7d3717b3 Revert patches to add case-range support for PR1255.
The work on this project was left in an unfinished and inconsistent state.
Hopefully someone will eventually get a chance to implement this feature, but
in the meantime, it is better to put things back the way they were.  I have
left support in the bitcode reader to handle the case-range bitcode format,
so that we do not lose bitcode compatibility with the llvm 3.3 release.

This reverts the following commits: 155464, 156374, 156377, 156613, 156704,
156757, 156804 156808, 156985, 157046, 157112, 157183, 157315, 157384, 157575,
157576, 157586, 157612, 157810, 157814, 157815, 157880, 157881, 157882, 157884,
157887, 157901, 158979, 157987, 157989, 158986, 158997, 159076, 159101, 159100,
159200, 159201, 159207, 159527, 159532, 159540, 159583, 159618, 159658, 159659,
159660, 159661, 159703, 159704, 160076, 167356, 172025, 186736

llvm-svn: 190328
2013-09-09 19:14:35 +00:00
Manman Ren
fa420c3e35 Debug Info Testing: update context from empty string to null.
Context should be either null or MDNode.

llvm-svn: 190267
2013-09-08 03:11:54 +00:00
Manman Ren
450526b5a9 Debug Info Testing: updated to use NULL instead of "i32 0" in a few fields.
Field 2 of DIType (Context), field 9 of DIDerivedType (TypeDerivedFrom),
field 12 of DICompositeType (ContainingType), fields 2, 7, 12 of DISubprogram
(Context, Type, ContainingType).

llvm-svn: 190205
2013-09-06 21:03:58 +00:00
Rafael Espindola
0a6eda454a Merge these 2 tests in a single file.
llvm-svn: 189975
2013-09-04 19:19:32 +00:00
Rafael Espindola
357980289a Revert "Add r159136 back now that pr13124 has been fixed."
This reverts commit r189886.

I found a corner case where this optimization is not valid:

Say we have a "linkonce_odr unnamed_addr" in two translation units:
* In TU 1 this optimization kicks in and makes it hidden.
* In TU 2 it gets const merged with a constant that is *not* unnamed_addr,
  resulting in a non unnamed_addr constant with default visibility.
* The static linker rules for combining visibility then produce a hidden
  symbol, which is incorrect from the point of view of the non unnamed_addr
  constant.

The one place we can do this is when we know that the symbol is not used from
another TU in the same shared object, i.e., during LTO. I will move it there.

llvm-svn: 189954
2013-09-04 16:09:01 +00:00
Tim Northover
baf9697e72 InstCombine: allow unmasked icmps to be combined with logical ops
"(icmp op i8 A, B)" is equivalent to "(icmp op i8 (A & 0xff), B)" as a
degenerate case. Allowing this as a "masked" comparison when analysing "(icmp)
&/| (icmp)" allows us to combine them in more cases.

rdar://problem/7625728

llvm-svn: 189931
2013-09-04 11:57:17 +00:00
Tim Northover
9045305fce InstCombine: look for masked compares with subset relation
Even in cases which aren't universally optimisable like "(A & B) != 0 && (A &
C) != 0", the masks can make one of the comparisons completely redundant. In
this case, since we've gone to the effort of spotting masked comparisons we
should combine them.

rdar://problem/7625728

llvm-svn: 189930
2013-09-04 11:57:13 +00:00
Rafael Espindola
8b9c0a576e Add r159136 back now that pr13124 has been fixed.
Original message:
If a constant or a function has linkonce_odr linkage and unnamed_addr, mark it
hidden. Being linkonce_odr guarantees that it is available in every dso that
needs it. Being a constant/function with unnamed_addr guarantees that the
copies don't have to be merged.

llvm-svn: 189886
2013-09-03 23:34:36 +00:00
Michael Gottesman
81c7cd8b47 [objc-arc] Turn off the objc_retainBlock -> objc_retain optimization.
The reason that I am turning off this optimization is that an additional
case where a block can escape has come up. Specifically, this occurs when
a block is used in a scope outside of its current scope.

This can cause a captured retainable object pointer whose life is preserved by
the objc_retainBlock to be deallocated before the block is invoked.

An example of the code needed to trigger the bug is:

----
#import <Foundation/Foundation.h>
int main(int argc, const char * argv[]) {
  void (^somethingToDoLater)();

  {
    NSObject *obj = [NSObject new];

    somethingToDoLater = ^{
      [obj self]; // Crashes here
    };
  }

  NSLog(@"test.");

  somethingToDoLater();
  return 0;
}
----

In the next commit, I remove all the dead code that results from this.

Once I put in the fixing commit I will bring back the tests that I deleted in
this commit.

rdar://14802782.
rdar://14868830.

llvm-svn: 189869
2013-09-03 22:40:54 +00:00
Michael Gottesman
6bf22747af [objc-arc] Move some block tests from basic.ll -> retain-block.ll and add some missing CHECK-LABELS.
llvm-svn: 189868
2013-09-03 22:40:50 +00:00
Matt Arsenault
469d672381 Teach InstCombineLoadCast about address spaces.
This is another one that doesn't matter much,
but uses the right GEP index types in the first
place.

llvm-svn: 189854
2013-09-03 21:05:48 +00:00
Yi Jiang
e1b34bf1fe In this patch we are trying to do two things:
1) If the width of a vectorization list candidate is bigger than the vector register width, we will break it down to fit the vector register.
2) We do not vectorize widths that are not a power of two.

The performance results show it will help some SPEC benchmarks: mesa improved by 6.97% and ammp by 1.54%. 

llvm-svn: 189830
2013-09-03 17:26:04 +00:00
Benjamin Kramer
4bd23de2ce SimplifyLibCalls: When emitting an overloaded fp function check that it's available.
The existing code missed some edge cases where, e.g., we're going to emit sqrtf
but only the availability of sqrt was checked. This happens on odd platforms
like Windows.

llvm-svn: 189724
2013-08-31 18:19:35 +00:00
Benjamin Kramer
3a81691558 InstCombine: Check for zero shift amounts before subtracting one causing integer overflow.
PR17026. Also avoid undefined shifts and shift amounts larger than 64 bits
(those are always undef because we can't represent integer types that large).

llvm-svn: 189672
2013-08-30 14:35:35 +00:00
Daniel Dunbar
4815f0a8ee Fix a test to not fail for users with my name. :)
llvm-svn: 189547
2013-08-29 00:41:22 +00:00
Matt Arsenault
b86b388e15 Convert tests to FileCheck
llvm-svn: 189529
2013-08-28 23:04:41 +00:00
Matt Arsenault
c6d3bffb73 Handle address spaces in TargetTransformInfo
llvm-svn: 189527
2013-08-28 22:41:57 +00:00
Hal Finkel
a22a21165f Disable unrolling in the loop vectorizer when disabled in the pass manager
When unrolling is disabled in the pass manager, the loop vectorizer should also
not unroll loops. This will allow the -fno-unroll-loops option in Clang to
behave as expected (even for vectorizable loops). The loop vectorizer's
-force-vector-unroll option will (continue to) override the pass-manager
setting (including -force-vector-unroll=0 to force use of the internal
auto-selection logic).

In order to test this, I added a flag to opt (-disable-loop-unrolling) to force
disable unrolling through opt (the analog of -fno-unroll-loops in Clang). Also,
this fixes a small bug in opt where the loop vectorizer was enabled only after
the pass manager populated the queue of passes (the global_alias.ll test needed
a slight update to the RUN line as a result of this fix).

llvm-svn: 189499
2013-08-28 18:33:10 +00:00
Matt Arsenault
c32a5a3d46 Fix inserting instructions before last in bundle.
The builder inserts from before the insert point,
not after, so this would insert before the last
instruction in the bundle instead of after it.

I'm not sure if this can actually be a problem
with any of the current insertions.

llvm-svn: 189285
2013-08-26 23:08:37 +00:00
Manman Ren
20ae2817fa Debug Info: add an identifier field to DICompositeType.
DICompositeType will have an identifier field at position 14. For now, the
field is set to null in DIBuilder.
For DICompositeTypes where the template argument field (the 13th field)
was optional, modify DIBuilder to make sure the template argument field is set.
Now DICompositeType has 15 fields.

Update DIBuilder to use NULL instead of "i32 0" for null value of a MDNode.
Update verifier to check that DICompositeType has 15 fields and the last
field is null or a MDString.

Update testing cases to include an extra field for DICompositeType.
The identifier field will be used by type uniquing so a front end can
generate a DICompositeType with a unique identifier.

llvm-svn: 189282
2013-08-26 22:39:55 +00:00
Nadav Rotem
7d4e24e1f4 LoopVectorize: Implement partial loop unrolling when vectorization is not profitable.
This patch enables unrolling of loops when vectorization is legal but not profitable.
We add a new class, InnerLoopUnroller, which extends InnerLoopVectorizer and replaces some of the vector-specific logic with scalars.

This patch does not introduce any runtime regressions and improves the following workloads:

SingleSource/Benchmarks/Shootout/matrix -22.64%
SingleSource/Benchmarks/Shootout-C++/matrix -13.06%
External/SPEC/CINT2006/464_h264ref/464_h264ref  -3.99%
SingleSource/Benchmarks/Adobe-C++/simple_types_constant_folding -1.95%

llvm-svn: 189281
2013-08-26 22:33:26 +00:00
Matt Arsenault
adea897e71 Forgot to add slp threshold to test
llvm-svn: 189248
2013-08-26 18:08:35 +00:00
Matt Arsenault
fe57252c78 Vectorize starting from insertelements building a vector
llvm-svn: 189233
2013-08-26 17:56:35 +00:00
Michael Gottesman
5f6dfacead Filecheckize some tests.
llvm-svn: 189079
2013-08-23 00:23:28 +00:00
Michael Gottesman
0f9b142f60 Update StripDeadDebugInfo to use DebugInfoFinder so that it is no longer stale to the point of not working and more resilient to debug info changes.
The current version of StripDeadDebugInfo became stale and no longer actually
worked since it was expecting an older version of debug info.

This patch updates it to use DebugInfoFinder and the modern DebugInfo classes as
much as possible to make it more resilient to such changes. Additionally, in the
only place where that was avoided (the code where we replace the old sets with
the new), I call verify on the DICompileUnit, implying that if the format changes
and my live set changes no longer make sense, an assert will be hit. In order to
ensure that that occurs I have included a test case.

The actual stripping of the dead debug info follows the same strategy as was
used before in this class: find the live set and replace the old set in the
given compile unit (which may contain dead global variables/functions) with the
new live one.

llvm-svn: 189078
2013-08-23 00:23:24 +00:00
Manman Ren
b66c695f15 [Debug Info Tests] Update testing cases.
A single metadata node will not span multiple lines. This also helps me with
my script to automatically update the testing cases.
A debug info testing case should have an llvm.dbg.cu.
Do not use hard-coded ids for debug nodes.

llvm-svn: 189033
2013-08-22 17:11:18 +00:00
Chandler Carruth
e6b6740e73 Teach the SLP vectorizer the correct way to check for consecutive access
using GEPs. Previously, it used a number of different heuristics for
analyzing the GEPs. Several of these were conservatively correct, but
failed to fall back to SCEV even when SCEV might have given a reasonable
answer. One was simply incorrect in how it was formulated.

There was good code already to recursively evaluate the constant offsets
in GEPs, look through pointer casts, etc. In a previous commit I gathered
this into a form that code like the SLP vectorizer can use, which allows
all of this code to become quite simple.

There is some performance (compile time) concern here at first glance as
we're directly attempting to walk both pointers constant GEP chains.
However, a couple of thoughts:

1) In the very common case where there is a dynamic pointer and a second
   pointer at a constant offset (usually a stride) from it, this code
   will actually not do any unnecessary work.

2) InstCombine and other passes work very hard to collapse constant
   GEPs, so it will be rare that we iterate here for a long time.

That said, if there remain performance problems here, there are some
obvious things that can improve the situation immensely. Doing
a vectorizer-pass-wide memoizer for each individual layer of pointer
values, their base values, and the constant offset is likely to be able
to completely remove redundant work and strictly limit the scaling of
the work to scrape these GEPs. Since this optimization was not done on
the prior version (which would still benefit from it), I've not done it
here. But if folks have benchmarks that slow down, it should be
straightforward for them to add.

I've added a test case, but I'm not really confident of the amount of
testing done for different access patterns, strides, and pointer
manipulation.

llvm-svn: 189007
2013-08-22 12:45:17 +00:00
Matt Arsenault
1b410a4dec Teach LoopVectorize about address space sizes
llvm-svn: 188980
2013-08-22 02:42:55 +00:00
Manman Ren
1b7973fc19 TBAA: remove !tbaa from testing cases when they are not needed.
This will make it easier to turn on struct-path aware TBAA since the metadata
format will change.

llvm-svn: 188944
2013-08-21 22:20:53 +00:00
Matt Arsenault
95d00423a7 Teach InstCombine about address spaces
llvm-svn: 188926
2013-08-21 19:53:10 +00:00
Matt Arsenault
298ce78bbf Add test for bitcast array ptrs with address spaces
llvm-svn: 188919
2013-08-21 19:09:28 +00:00
Matt Arsenault
fe2de01e6d Add enforce known alignment test with address space
llvm-svn: 188917
2013-08-21 18:54:53 +00:00
Arnold Schwaighofer
276cfe784a SLPVectorizer: Fix invalid iterator errors
Update iterator when the SLP vectorizer changes the instructions in the basic
block by restarting the traversal of the basic block.

Patch by Yi Jiang!

Fixes PR 16899.

llvm-svn: 188832
2013-08-20 21:21:45 +00:00
Matt Arsenault
474ae7ebd0 Teach ConstantFolding about pointer address spaces
llvm-svn: 188831
2013-08-20 21:20:04 +00:00
Hal Finkel
8f395a803a Add a llvm.copysign intrinsic
This adds a llvm.copysign intrinsic; We already have Libfunc recognition for
copysign (which is turned into the FCOPYSIGN SDAG node). In order to
autovectorize calls to copysign in the loop vectorizer, we need a corresponding
intrinsic as well.

In addition to the expected changes to the language reference, the loop
vectorizer, BasicTTI, and the SDAG builder (the intrinsic is transformed into
an FCOPYSIGN node, just like the function call), this also adds FCOPYSIGN to a
few lists in LegalizeVector{Ops,Types} so that vector copysigns can be
expanded.

In TargetLoweringBase::initActions, I've made the default action for FCOPYSIGN
be Expand for vector types. This seems correct for all in-tree targets, and I
think is the right thing to do because, previously, there was no way to generate
vector-values FCOPYSIGN nodes (and most targets don't specify an action for
vector-typed FCOPYSIGN).
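For reference, a sketch of how the intrinsic is used (it is overloaded on the
floating point type; values hypothetical):

  declare float @llvm.copysign.f32(float, float)

  %r = call float @llvm.copysign.f32(float %mag, float %sign)
  ; %r has the magnitude of %mag and the sign of %sign.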

llvm-svn: 188728
2013-08-19 23:35:46 +00:00
Matt Arsenault
77bbadbfcb Teach InstCombine visitGetElementPtr about address spaces
llvm-svn: 188721
2013-08-19 22:17:40 +00:00
Matt Arsenault
1bfa3222e1 Fix assert with GEP ptr vector indexing structs
Also fix it calculating the wrong value. The struct index
is not a ConstantInt, so it was being interpreted as an array
index.

llvm-svn: 188713
2013-08-19 21:43:16 +00:00
Matt Arsenault
22098dc46b Revert non-test parts of r188507
Re-add the inboundsless tests I didn't add originally

llvm-svn: 188710
2013-08-19 21:40:31 +00:00
Michael Kuperstein
9bedf7fc8c Adds missing TLI check for library simplification of
* pow(x, 0.5) -> fabs(sqrt(x)) 
* pow(2.0, x) -> exp2(x)
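A minimal sketch of the pair of folds and why the TLI check matters
(hypothetical values):

  declare double @pow(double, double)

  %r1 = call double @pow(double %x, double 0.5)   ; -> fabs(sqrt(%x))
  %r2 = call double @pow(double 2.0, double %x)   ; -> exp2(%x)
  ; Both rewrites must first ask TLI whether sqrt/fabs/exp2 are
  ; actually available on the target.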

llvm-svn: 188656
2013-08-19 06:55:47 +00:00
Matt Arsenault
8a84641968 Add missing test for GEP + bitcast transformation
llvm-svn: 188529
2013-08-16 02:59:17 +00:00
Daniel Dunbar
a496d61c01 [tests] Cleanup initialization of test suffixes.
- Instead of setting the suffixes in a bunch of places, just set one master
   list in the top-level config. We now only modify the suffix list in a few
   suites that have one particular unique suffix (.ml, .mc, .yaml, .td, .py).

 - Aside from removing the need for a bunch of lit.local.cfg files, this enables
   4 tests that were inadvertently being skipped (one in
   Transforms/BranchFolding, a .s file each in DebugInfo/AArch64 and
   CodeGen/PowerPC, and one in CodeGen/SI which is now failing and has been
   XFAILED).

 - This commit also fixes a bunch of config files to use config.root instead of
   older copy-pasted code.

llvm-svn: 188513
2013-08-16 00:37:11 +00:00
Jim Grosbach
72387340f5 InstCombine: Simplify if(x!=0 && x!=-1).
When both constants are positive or both constants are negative,
InstCombine already simplifies comparisons like this, but when
it's exactly zero and -1, the operand sorting ends up reversed
and the pattern fails to match. Handle that special case.
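The pattern, and one equivalent single-compare form (values hypothetical):

  %a = icmp ne i32 %x, 0
  %b = icmp ne i32 %x, -1
  %c = and i1 %a, %b
  ; x is outside {0, -1} exactly when x+1 is unsigned-greater than 1:
  %x1 = add i32 %x, 1
  %c2 = icmp ugt i32 %x1, 1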

Follow up for rdar://14689217

llvm-svn: 188512
2013-08-16 00:15:20 +00:00
Matt Arsenault
9594ef019c Don't do FoldCmpLoadFromIndexedGlobal for non inbounds GEPs
This path wasn't tested before without a datalayout,
so add some more tests and re-run with and without one.

llvm-svn: 188507
2013-08-15 23:11:07 +00:00
Daniel Dunbar
85b4e03800 [tests] Fix refacto in r187764 that effectively disabled SimplifyCFG tests. :(
llvm-svn: 188503
2013-08-15 22:52:27 +00:00
Yunzhong Gao
37d3ce60e8 Fixing a corner-case bug in strchr and strrchr lib call optimizations where
the input character is not converted to char before comparing with zero.

The patch was discussed in this thread:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20130812/184069.html

llvm-svn: 188489
2013-08-15 20:58:59 +00:00
Matt Arsenault
cb3b478d91 Fix always creating GEP with i32 indices
Use the pointer size if datalayout is available.
Use i64 if it's not, which is consistent with what other
places do when the pointer size is unknown.

The test doesn't really test this in a useful way
since it will be transformed to that later anyway,
but this now tests it for non-zero arrays and when
datalayout isn't available. The cases in
visitGetElementPtrInst should save an extra re-visit to
the newly created GEP since it won't need to clean up after
itself.

llvm-svn: 188339
2013-08-14 00:24:38 +00:00
Hal Finkel
fd36621506 BBVectorize: Add initial stores to the write set when tracking uses
When computing the use set of a store, we need to add the store to the write
set prior to iterating over later instructions. Otherwise, if there is a later
aliasing load of that store, that load will not be tagged as a use, and bad
things will happen.

trackUsesOfI still adds later dependent stores of an instruction to that
instruction's write set, but it never sees the original instruction, and so
when tracking uses of a store, the store must be added to the write set by the
caller.

Fixes PR16834.

llvm-svn: 188329
2013-08-13 23:34:32 +00:00
Nick Lewycky
d738bd4283 Remove duplicate copy of testcase in r188327.
llvm-svn: 188328
2013-08-13 22:55:05 +00:00
Nick Lewycky
eab287a60a Revert r187191, which broke opt -mem2reg on the testcases included in PR16867.
However, opt -O2 doesn't run mem2reg directly so nobody noticed until r188146
when SROA started sending more things directly down the PromoteMemToReg path.

In order to revert r187191, I also revert dependent revisions r187296, r187322
and r188146. Fixes PR16867. Does not add the testcases from that PR, but both
of them should get added for both mem2reg and sroa when this revert gets
unreverted.

llvm-svn: 188327
2013-08-13 22:51:58 +00:00
Nadav Rotem
bc08e7ce84 Fix PR16797 - Support PHINodes with multiple inputs from the same basic block.
Do not generate new vector values for the same entries because we know that the incoming values
from the same block must be identical.

llvm-svn: 188185
2013-08-12 17:46:44 +00:00
Tim Northover
67f2cc8ebf Fix FileCheck --check-prefix lines.
Various tests had sprung up over the years which had --check-prefix=ABC on the
RUN line, but "CHECK-ABC:" later on. This happened to work before, but was
strictly incorrect. FileCheck is getting stricter soon though.

Patch by Ron Ofir.

llvm-svn: 188173
2013-08-12 12:43:26 +00:00