mirror of https://github.com/RPCS3/llvm-mirror.git synced 2025-02-01 13:11:39 +01:00

8626 Commits

Author SHA1 Message Date
Zvi Rackover
9774a1af3b Add another interesting shufflevector test case for InstSimplify. NFC.
The test case shows an opportunity to constant fold a shuffle that has one variable
input vector operand (see the IR sketch after this entry).

llvm-svn: 299327
2017-04-02 10:42:21 +00:00
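A minimal IR sketch of the kind of shuffle this entry refers to (a made-up function, not the test that was added): the mask selects only lanes of the constant operand, so the variable operand is irrelevant and the shuffle can be folded to a constant.

  define <4 x i32> @shuffle_only_constant_lanes(<4 x i32> %x) {
    ; every mask index (4..7) picks from the second, constant operand,
    ; so this can fold to <i32 10, i32 11, i32 12, i32 13>
    %s = shufflevector <4 x i32> %x, <4 x i32> <i32 10, i32 11, i32 12, i32 13>,
                       <4 x i32> <i32 4, i32 5, i32 6, i32 7>
    ret <4 x i32> %s
  }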
Sanjay Patel
69e6407cba [InstSimplify] add constant folding for fdiv/frem
Also, add a helper function so we don't have to repeat this code for each binop.

llvm-svn: 299309
2017-04-01 19:05:11 +00:00
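A hedged illustration of the folds this enables (hypothetical functions, values made up): with both operands constant, fdiv and frem can be evaluated at compile time.

  define float @fold_fdiv() {
    ; both operands are constants, so this folds to 1.5
    %q = fdiv float 3.0, 2.0
    ret float %q
  }

  define float @fold_frem() {
    ; folds to 1.0 (remainder of 5.0 / 2.0)
    %r = frem float 5.0, 2.0
    ret float %r
  }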
Sanjay Patel
e4f4119624 [InstSimplify] add tests for missed constant folding; NFC
llvm-svn: 299308
2017-04-01 18:44:03 +00:00
Peter Collingbourne
6114655df7 Fix a test to check assembly output instead of bitcode.
llvm-svn: 299279
2017-03-31 23:22:19 +00:00
Craig Topper
5d174cd9c8 [InstCombine] Add test case demonstrating missed opportunities for removing add/sub when the LSBs of one input are known to be 0 and MSBs of the output aren't consumed.
llvm-svn: 299263
2017-03-31 21:08:37 +00:00
Joerg Sonnenberger
d51f93d34b Do not translate rint into nearbyint, but truncate it like nearbyint.
A common way to implement nearbyint is by fiddling with the floating
point environment and calling rint. This is used at least by the BSD
libm and musl. As such, canonicalizing the latter to the former will
create infinite loops for libm and generally pessimize performance, at
least when the generic C versions are used.

This change preserves the rint in the libcall translation and also
handles the domain truncation logic, so that rint with float argument
will be reduced to rintf etc.

llvm-svn: 299247
2017-03-31 19:58:07 +00:00
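A sketch of the domain truncation mentioned above, on a hypothetical function: a float value extended to double, passed to rint, and truncated back can be reduced to rintf, while rint itself is no longer rewritten into nearbyint.

  declare double @rint(double)

  define float @shrink_rint(float %x) {
    %e = fpext float %x to double
    %r = call double @rint(double %e)
    %t = fptrunc double %r to float
    ; expected to shrink to: %t = call float @rintf(float %x)
    ret float %t
  }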
Piotr Padlewski
142150de89 [MSSA] Small test fix
llvm-svn: 299235
2017-03-31 17:39:07 +00:00
Dehao Chen
20bf5cf253 Fix InstCombine to preserve the VP metadata and set the correct call count.
Summary: Currently the VP metadata is dropped when InstCombine converts an indirect call to a direct call. This patch converts the VP metadata to branch_weights so that the call's hotness is recorded.

Reviewers: eraman, davidxl

Reviewed By: davidxl

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D31344

llvm-svn: 299228
2017-03-31 15:59:52 +00:00
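A hedged sketch of the metadata conversion described above; the counts and the target hash are invented, and the exact operand layout of the "VP" metadata is an assumption here. When the indirect call is converted to a direct call, the value-profile count for the promoted target is expected to survive as branch_weights:

  ; before: indirect call annotated with value-profile data
  define i32 @caller(i32 ()* %fp) {
    %r = call i32 %fp(), !prof !0
    ret i32 %r
  }
  !0 = !{!"VP", i32 0, i64 2000, i64 123456789, i64 2000}

  ; after conversion to a direct call, the hotness is kept as:
  ;   %r = call i32 @callee(), !prof !1
  ;   !1 = !{!"branch_weights", i32 2000}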
Zvi Rackover
1bba228355 InstSimplify: Add shufflevector tests. NFC.
Add some test cases demonstrating cases that need to be improved,
to be followed by patches that improve them.

llvm-svn: 299189
2017-03-31 07:46:02 +00:00
Mikael Holmen
fcc113dab5 [Scalarizer] Handle scalar arguments in vector GEP
Summary:
Triggered by commit r298620: "[LV] Vectorize GEPs".

If we encounter a vector GEP with scalar arguments, we splat the scalar
into a vector of appropriate size before we scatter the argument.

Reviewers: arsenm, mehdi_amini, bkramer

Reviewed By: arsenm

Subscribers: bjope, mssimpso, wdng, llvm-commits

Differential Revision: https://reviews.llvm.org/D31416

llvm-svn: 299186
2017-03-31 06:29:49 +00:00
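An IR sketch (hypothetical) of the case handled above: the GEP produces a vector of pointers but its base is scalar, so the scalarizer splats the base into a vector of the right width before scattering the operands.

  define <4 x i32*> @gep_scalar_base(i32* %base, <4 x i64> %idx) {
    ; scalar base, vector index: the result is a vector of pointers
    %p = getelementptr i32, i32* %base, <4 x i64> %idx
    ret <4 x i32*> %p
  }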
Matt Arsenault
47e1a3f8ac AMDGPU: Add all atomicrmw fields to atomic.inc/dec
Add scope, order, isVolatile

llvm-svn: 299122
2017-03-30 22:21:40 +00:00
Hongbin Zheng
f89792dd5c [SimplifyIndvar] Replace the sdiv used by IV if we can prove both of its operands are non-negative
Since there is no sdiv in SCEV, a 'udiv' is a better canonical form than an 'sdiv' as the user of the induction variable

Differential Revision: https://reviews.llvm.org/D31488

llvm-svn: 299118
2017-03-30 21:56:56 +00:00
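A made-up example of the rewrite: the IV starts at 0 and only increases, and the divisor is a positive constant, so both sdiv operands are provably non-negative and the sdiv can be replaced with a udiv.

  declare void @use(i64)

  define void @iv_sdiv(i64 %n) {
  entry:
    br label %loop
  loop:
    %iv = phi i64 [ 0, %entry ], [ %iv.next, %loop ]
    ; both operands are non-negative here, so this may become a udiv
    %d = sdiv i64 %iv, 4
    call void @use(i64 %d)
    %iv.next = add nsw i64 %iv, 1
    %cmp = icmp slt i64 %iv.next, %n
    br i1 %cmp, label %loop, label %exit
  exit:
    ret void
  }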
Adrian Prantl
0129dda89a Teach stripNonLineTableDebugInfo() to remap DILocations in !llvm.loop nodes.
llvm-svn: 299107
2017-03-30 20:10:56 +00:00
Matthew Simpson
1604421b6d [InstCombine] Correct the check for vector GEPs
Some of the GEP combines (e.g., descaling) can't handle vector GEPs. We have an
existing check that attempts to bail out if given a vector GEP. However, the
check only tests the GEP's pointer operand. A GEP results in a vector of
pointers if at least one of its operands is vector-typed (e.g., its pointer
operand could be a scalar, but its index could be a vector). We should just
check the type of the GEP itself. This should fix PR32414.

Reference: https://bugs.llvm.org/show_bug.cgi?id=32414
Differential Revision: https://reviews.llvm.org/D31470

llvm-svn: 299017
2017-03-29 18:23:08 +00:00
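A hypothetical instance of the pattern the fix guards against: the pointer operand is scalar, yet the GEP itself is vector-typed, so combines that only handle scalar GEPs must check the GEP's own type before transforming it.

  define <2 x i32*> @scalar_base_vector_index(i32* %p, <2 x i64> %i) {
    ; only the index is a vector, but the result is still <2 x i32*>
    %g = getelementptr i32, i32* %p, <2 x i64> %i
    ret <2 x i32*> %g
  }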
Anna Thomas
177bfea164 rename instcombine test file. NFC
llvm-svn: 298904
2017-03-28 08:34:07 +00:00
Matthew Simpson
a754d575fa [LV] Transform truncations of non-primary induction variables
The vectorizer tries to replace truncations of induction variables with new
induction variables having the smaller type. After r295063, this optimization
was applied to all integer induction variables, including non-primary ones.
When optimizing the truncation of a non-primary induction variable, we still
need to transform the new induction so that it has the correct start value.
This should fix PR32419.

Reference: https://bugs.llvm.org/show_bug.cgi?id=32419
llvm-svn: 298882
2017-03-27 20:07:38 +00:00
Anna Thomas
a818c43889 [InstCombine] Avoid incorrect folding of select into phi nodes when incoming element is a vector type
Summary:
We are incorrectly folding selects into phi nodes when the incoming value of a phi
node is a constant vector. This optimization is done in `FoldOpIntoPhi` when the
select condition is a phi node with constant incoming values.
Without the fix, we are miscompiling (i.e. incorrectly folding the
select into the phi node) when the vector contains non-zero
elements.
This patch fixes the miscompile and we will correctly fold based on the
select vector operand (see added test cases).

Reviewers: majnemer, sanjoy, spatel

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D31189

llvm-svn: 298845
2017-03-27 13:52:51 +00:00
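A hedged sketch of the miscompile pattern (invented function): the phi's incoming constants are vectors with mixed true/false lanes, so treating such a constant as a single boolean when folding the select into the phi would pick a whole operand instead of blending lanes.

  define <2 x i32> @select_on_vector_phi(i1 %c, <2 x i32> %a, <2 x i32> %b) {
  entry:
    br i1 %c, label %then, label %merge
  then:
    br label %merge
  merge:
    ; mixed-lane constants: the select must stay lane-wise
    %cond = phi <2 x i1> [ <i1 true, i1 false>, %then ], [ <i1 false, i1 true>, %entry ]
    %sel = select <2 x i1> %cond, <2 x i32> %a, <2 x i32> %b
    ret <2 x i32> %sel
  }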
Serge Pavlov
83e654a405 [LoopUnroll] Remap references in peeled iteration
References in cloned blocks must be remapped prior to dominator
calculation.

Differential Revision: https://reviews.llvm.org/D31281

llvm-svn: 298811
2017-03-26 16:46:53 +00:00
Joerg Sonnenberger
a1971791a7 Split the SimplifyCFG pass into two variants.
The first variant contains all current transformations except
transforming switches into lookup tables. The second variant
contains all current transformations.

The switch-to-lookup-table conversion results in code that is more
difficult to analyze and optimize by other passes. Most importantly,
it can inhibit Dead Code Elimination. As such it is often beneficial to
only apply this transformation very late. A common example is inlining,
which can often result in range restrictions for the switch expression.

Changes in execution time according to LNT:
SingleSource/Benchmarks/Misc/fp-convert +3.03%
MultiSource/Benchmarks/ASC_Sequoia/CrystalMk/CrystalMk -11.20%
MultiSource/Benchmarks/Olden/perimeter/perimeter -10.43%
and a couple of smaller changes. For perimeter it also results in a 2.6%
smaller binary.

Differential Revision: https://reviews.llvm.org/D30333

llvm-svn: 298799
2017-03-26 06:44:08 +00:00
Chandler Carruth
9469fd1dfc [IR] Make SwitchInst::CaseIt almost a normal iterator.
This moves it to the iterator facade utilities giving it full random
access semantics, etc. It can also now be used with standard algorithms
like std::all_of and std::any_of and range adaptors like llvm::reverse.

Also make the semantics of iterating match what every other iterator
uses and forbid decrementing past the begin iterator. This was used as
a hacky way to work around iterator invalidation. However, every
instance trying to do this failed to actually avoid touching invalid
iterators despite the clear documentation that the removed and all
subsequent iterators become invalid including the end iterator. So I've
added a return of the next iterator to removeCase and rewritten the
loops that were doing this to correctly follow the iterator pattern of
either incrementing or removing and assigning fresh values to the
iterator and the end.

In one case we were trying to go backwards to make this cleaner but it
doesn't actually work. I've made that code match the code we use
everywhere else to remove cases as we iterate. This changes the order of
cases in one test output and I moved that test to CHECK-DAG so it
wouldn't care -- the order isn't semantically meaningful anyways.

llvm-svn: 298791
2017-03-26 02:49:23 +00:00
Eric Christopher
3baf4622e9 Change the default attributes for llvm.prefetch to inaccessiblemem_or_argmemonly
so that we can perform some optimizations across it.

Fixes PR32365

llvm-svn: 298781
2017-03-25 20:20:23 +00:00
Ivan Krasin
2fef6ee778 Revert r298620: [LV] Vectorize GEPs
Reason: breaks linking Chromium with LLD + ThinLTO (a pass crashes)
LLVM bug: https://bugs.llvm.org//show_bug.cgi?id=32413

Original change description:

[LV] Vectorize GEPs

This patch adds support for vectorizing GEPs. Previously, we only generated
vector GEPs on-demand when creating gather or scatter operations. All GEPs from
the original loop were scalarized by default, and if a pointer was to be stored
to memory, we would have to build up the pointer vector with insertelement
instructions.

With this patch, we will vectorize all GEPs that haven't already been marked
for scalarization.

The patch refines collectLoopScalars to more exactly identify the scalar GEPs.
The function now more closely resembles collectLoopUniforms. And the patch
moves vector GEP creation out of vectorizeMemoryInstruction and into the main
vectorization loop. The vector GEPs needed for gather and scatter operations
will have already been generated before vectorizing the memory accesses.

Original Differential Revision: https://reviews.llvm.org/D30710

llvm-svn: 298735
2017-03-24 20:49:43 +00:00
Matt Arsenault
2d400bd527 AMDGPU: Fold rcp/rsq of undef to undef
llvm-svn: 298725
2017-03-24 19:04:57 +00:00
Teresa Johnson
1f2c63e62a [ThinLTO] Correct counting of functions in inliner stats
Summary: Declarations need to be filtered out when counting functions.

Reviewers: eraman

Subscribers: Prazek, llvm-commits

Differential Revision: https://reviews.llvm.org/D31336

llvm-svn: 298720
2017-03-24 17:59:06 +00:00
Daniel Berlin
9f0bf61efa NewGVN: Fix PR32403 - Handling of undef in phis was not quite correct
due to LLVM's view of phi nodes.  It would cause NewGVN not to fixpoint
in some interesting edge cases.

llvm-svn: 298687
2017-03-24 05:30:34 +00:00
Dehao Chen
42ca2b084a Set the prof weight correctly for call instructions in DeadArgumentElimination.
Summary: In DeadArgumentElimination, the call instructions will be replaced. We also need to set the prof weights so that function inlining can find the correct profile.

Reviewers: eraman

Reviewed By: eraman

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D31143

llvm-svn: 298660
2017-03-23 23:26:00 +00:00
Bryant Wong
0a6105c658 [MetaRenamer] Don't rename library functions.
Library functions can have specific semantics that affect the behavior of
certain passes. DSE, for instance, gives special treatment to malloc-ed pointers
but not to pointers returned from an equivalently typed (but differently named)
function.

MetaRenamer ought not to alter program semantics, so library functions must
remain untouched.

Reviewers: mehdi_amini, majnemer, chandlerc, davide

Reviewed By: davide

Subscribers: davide, llvm-commits

Differential Revision: https://reviews.llvm.org/D31304

llvm-svn: 298659
2017-03-23 23:21:07 +00:00
Dehao Chen
10905278e2 Use isFunctionHotInCallGraph to set the function section prefix.
Summary: The current prefix-based function layout algorithm only looks at a function's entry count, which is not sufficient. A function should be grouped with hot functions if its entry count or any of its call edge counts is hot.

Reviewers: davidxl, eraman

Reviewed By: eraman

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D31225

llvm-svn: 298656
2017-03-23 23:14:11 +00:00
Gil Rapaport
3b70d1973c [LV] Add regression test for r297610
The new test asserts that scalarized memory operations get memcheck metadata
added even if the loop is only unrolled.

Differential Revision: https://reviews.llvm.org/D30972

llvm-svn: 298641
2017-03-23 20:02:23 +00:00
Teresa Johnson
7703ef4532 [ThinLTO] Add support for emitting minimized bitcode for thin link
Summary:
The cumulative size of the bitcode files for a very large application
can be huge, particularly with -g. In a distributed build environment,
all of these files must be sent to the remote build node that performs
the thin link step, and this can exceed size limits.

The thin link actually only needs the summary along with a bitcode
symbol table. Until we have a proper bitcode symbol table, simply
stripping the debug metadata results in significant size reduction.

Add support for an option to additionally emit minimized bitcode
modules, just for use in the thin link step, which for now just strips
all debug metadata. I plan to add a cc1 option so this can be invoked
easily during the compile step.

However, care must be taken to ensure that these minimized thin link
bitcode files produce the same index as with the original bitcode files,
as these original bitcode files will be used in the backends.

Specifically:
1) The module hash used for caching is typically produced by hashing the
written bitcode, and we want to include the hash that would correspond
to the original bitcode file. This is because we want to ensure that
changes in the stripped portions affect caching. Added plumbing to emit
the same module hash in the minimized thin link bitcode file.
2) The module paths in the index are constructed from the module ID of
each thin linked bitcode, and typically is automatically generated from
the input file path. This is the path used for finding the modules to
import from, and obviously we need this to point to the original bitcode
files. Added gold-plugin support to take a suffix replacement during the
thin link that is used to override the identifier on the MemoryBufferRef
constructed from the loaded thin link bitcode file. The assumption is
that the build system can specify that the minimized bitcode file has a
name that is similar but uses a different suffix (e.g. out.thinlink.bc
instead of out.o).

Added various tests to ensure that we get identical index files out of
the thin link step.

Reviewers: mehdi_amini, pcc

Subscribers: Prazek, llvm-commits

Differential Revision: https://reviews.llvm.org/D31027

llvm-svn: 298638
2017-03-23 19:47:39 +00:00
Matthew Simpson
6e2f8c9035 [LV] Vectorize GEPs
This patch adds support for vectorizing GEPs. Previously, we only generated
vector GEPs on-demand when creating gather or scatter operations. All GEPs from
the original loop were scalarized by default, and if a pointer was to be stored
to memory, we would have to build up the pointer vector with insertelement
instructions.

With this patch, we will vectorize all GEPs that haven't already been marked
for scalarization.

The patch refines collectLoopScalars to more exactly identify the scalar GEPs.
The function now more closely resembles collectLoopUniforms. And the patch
moves vector GEP creation out of vectorizeMemoryInstruction and into the main
vectorization loop. The vector GEPs needed for gather and scatter operations
will have already been generated before vectorizing the memory accesses.

Differential Revision: https://reviews.llvm.org/D30710

llvm-svn: 298620
2017-03-23 16:29:58 +00:00
Matthew Simpson
0ed5ec7509 [LV] Delete unneeded scalar GEP creation code
The code for generating scalar base pointers in vectorizeMemoryInstruction is
not needed. We currently scalarize all GEPs and maintain the scalarized values
in VectorLoopValueMap. The GEP cloning in this unneeded code is the same as
that in scalarizeInstruction. The test cases that changed as a result of this
patch changed because we were able to reuse the scalarized GEP that we
previously generated instead of cloning a new one.

Differential Revision: https://reviews.llvm.org/D30587

llvm-svn: 298615
2017-03-23 16:07:21 +00:00
Dehao Chen
aaa8e43771 Do not set branch weight if the branch weight annotation is present.
Summary: ThinLTO will annotate the CFG twice. If the branch weight is set by the first annotation, we should not set the branch weight again in the second annotation, because the first annotation is more accurate: less optimization has run at that point that could affect debug info accuracy.

Reviewers: tejohnson, davidxl

Reviewed By: tejohnson

Subscribers: mehdi_amini, aprantl, llvm-commits

Differential Revision: https://reviews.llvm.org/D31228

llvm-svn: 298602
2017-03-23 14:43:10 +00:00
Luqman Aden
98d59dc333 Preserve nonnull metadata on Loads through SROA & mem2reg.
Summary:
https://llvm.org/bugs/show_bug.cgi?id=31142 :

SROA was dropping the nonnull metadata on loads from allocas that got optimized out. This patch simply preserves nonnull metadata on loads through SROA and mem2reg.

Reviewers: chandlerc, efriedma

Reviewed By: efriedma

Subscribers: hfinkel, spatel, efriedma, arielb1, davide, llvm-commits

Differential Revision: https://reviews.llvm.org/D27114

llvm-svn: 298540
2017-03-22 19:16:39 +00:00
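A minimal before-pattern (hypothetical) for the bug above: a pointer round-trips through an alloca and the reload carries !nonnull. When SROA/mem2reg removes the alloca and load, that fact should be preserved rather than dropped; the mechanism used to re-attach it is not shown here.

  define i8* @roundtrip(i8* %p) {
    %slot = alloca i8*
    store i8* %p, i8** %slot
    ; the !nonnull fact on this load should survive promotion of %slot
    %v = load i8*, i8** %slot, !nonnull !0
    ret i8* %v
  }
  !0 = !{}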
Sanjay Patel
c970e00bec [InstCombine] canonicalize insertelement of scalar constant ahead of insertelement of variable
insertelement (insertelement X, Y, IdxC1), ScalarC, IdxC2 -->
insertelement (insertelement X, ScalarC, IdxC2), Y, IdxC1

As noted in the code comment and seen in the test changes, the motivation is that by pulling
constant insertion up, we may be able to constant fold some insertelement instructions.

Differential Revision: https://reviews.llvm.org/D31196

llvm-svn: 298520
2017-03-22 17:10:44 +00:00
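A hypothetical instance of the canonicalization stated above: the constant insert is hoisted above the variable insert, so it lands directly on the incoming vector and can fold further if that vector is constant.

  define <4 x float> @hoist_constant_insert(<4 x float> %x, float %y) {
    ; before: variable insert, then constant insert
    %i1 = insertelement <4 x float> %x, float %y, i32 1
    %i2 = insertelement <4 x float> %i1, float 42.0, i32 3
    ; expected canonical form:
    ;   %c = insertelement <4 x float> %x, float 42.0, i32 3
    ;   %r = insertelement <4 x float> %c, float %y, i32 1
    ret <4 x float> %i2
  }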
Craig Topper
d7a1463461 [ValueTracking] Make sure we keep range metadata information when calculating known bits for calls to bitreverse intrinsic.
llvm-svn: 298488
2017-03-22 07:22:49 +00:00
Craig Topper
194deeab48 [InstCombine] Teach SimplifyDemandedUseBits to shrink Constants on the left side of subtracts
Summary: Subtracts can have constants on the left side, but we don't shrink them based on demanded bits. This patch fixes that to match the right hand side.

Reviewers: davide, majnemer, spatel, sanjoy, hfinkel

Reviewed By: spatel

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D31119

llvm-svn: 298478
2017-03-22 04:03:53 +00:00
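A made-up example of the left-hand-side case: only the low byte of the subtraction is demanded, and the low bits of a subtraction depend only on the low bits of its operands, so the wide constant on the left can be shrunk, just as it already could on the right.

  define i32 @shrink_lhs_constant(i32 %x) {
    ; only bits 0..7 of %s are demanded, so 65535 can shrink to 255
    %s = sub i32 65535, %x
    %r = and i32 %s, 255
    ret i32 %r
  }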
Matt Arsenault
dd9ab77318 AMDGPU: Mark all unspecified CC functions in tests as amdgpu_kernel
Currently, functions using the default C calling convention are treated
the same as compute kernels. Make this explicit so the default
calling convention can be changed to a non-kernel one.

Converted with perl -pi -e 's/define void/define amdgpu_kernel void/'
on the relevant test directories (and undoing in one place that actually
wanted a non-kernel).

llvm-svn: 298444
2017-03-21 21:39:51 +00:00
Sanjay Patel
78074206fb [InstCombine] regenerate checks; NFC
llvm-svn: 298432
2017-03-21 20:14:38 +00:00
George Burgess IV
40c3d6a6b6 Let llvm.objectsize be conservative with null pointers
This adds a parameter to @llvm.objectsize that makes it return
conservative values if it's given null.

This fixes PR23277.

Differential Revision: https://reviews.llvm.org/D28494

llvm-svn: 298430
2017-03-21 20:08:59 +00:00
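A hedged sketch of the new behavior; the position and meaning of the added i1 flag are assumptions taken from the description above, not checked against the final intrinsic signature. With the flag set, objectsize of null folds to the conservative "unknown" value (-1 when the min argument is false) instead of 0.

  declare i64 @llvm.objectsize.i64.p0i8(i8*, i1, i1)

  define i64 @objsize_of_null() {
    ; min = false, conservative-null = true: expected to fold to -1
    %s = call i64 @llvm.objectsize.i64.p0i8(i8* null, i1 false, i1 true)
    ret i64 %s
  }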
Sanjay Patel
cacf3145df [InstCombine] auto-generate better checks; NFC
llvm-svn: 298377
2017-03-21 14:04:44 +00:00
Matt Arsenault
6a253574bc InstCombine: Check source value precision when reducing cast intrinsic
Missed this check when porting from the libcall version.

llvm-svn: 298312
2017-03-20 21:59:24 +00:00
Daniel Berlin
4baa53985f Add missing updated test from VN coercion changes. Instructions were renamed. NFC
llvm-svn: 298280
2017-03-20 18:04:19 +00:00
Dehao Chen
359491dd62 Updates branch_weights annotation for call instructions during inlining.
Summary: The inliner should update the branch_weights annotation to scale it to the proper value.

Reviewers: davidxl, eraman

Reviewed By: eraman

Subscribers: zzheng, llvm-commits

Differential Revision: https://reviews.llvm.org/D30767

llvm-svn: 298270
2017-03-20 16:40:44 +00:00
Craig Topper
128018c801 [InstCombine] Use update_test_checks.py to regenerate a test. NFC
llvm-svn: 298227
2017-03-19 17:04:52 +00:00
Xin Tong
b88e8777b2 Remove unused arguments. NFCI
llvm-svn: 298218
2017-03-19 15:31:16 +00:00
Xin Tong
92773d49ef [JumpThreading] Perform phi-translation in SimplifyPartiallyRedundantLoad.
Summary:
When the load in SimplifyPartiallyRedundantLoad is from a phi pointer,
try to phi-translate it into the incoming values in the predecessors before
we search for available loads.

This needs https://reviews.llvm.org/D30524

Reviewers: davide, sanjoy, efriedma, dberlin, rengolin

Reviewed By: dberlin

Subscribers: junbuml, llvm-commits

Differential Revision: https://reviews.llvm.org/D30543

llvm-svn: 298217
2017-03-19 15:30:53 +00:00
Brian Gesiak
61982c1630 [Analysis] bitreverse(undef) returns undef
Summary:
The reverse of an arbitrary bit pattern is also an arbitrary
bit pattern.

Reviewers: trentxintong, arsenm, majnemer

Reviewed By: majnemer

Subscribers: majnemer, wdng, llvm-commits

Differential Revision: https://reviews.llvm.org/D31118

llvm-svn: 298201
2017-03-19 04:40:42 +00:00
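The fold described above, as a hypothetical test:

  declare i32 @llvm.bitreverse.i32(i32)

  define i32 @bitreverse_undef() {
    ; reversing an arbitrary bit pattern gives an arbitrary bit pattern,
    ; so this folds to undef
    %r = call i32 @llvm.bitreverse.i32(i32 undef)
    ret i32 %r
  }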
Daniel Berlin
54d98b54f5 NewGVN: Fix PHI evaluation bug exposed by new verifier. We were checking whether the incoming block was reachable instead of whether the specific edge was reachable
llvm-svn: 298187
2017-03-18 15:41:36 +00:00
Rong Xu
84e1d80689 [PGO] Add omitted test cases.
llvm-svn: 298115
2017-03-17 20:05:13 +00:00