mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-20 11:33:24 +02:00
Commit Graph

18492 Commits

Author SHA1 Message Date
Dehao Chen
a8aee54f27 Make ICP use PSI to check for hotness.
Summary: Currently, ICP checks the count against a fixed value to see if it is hot enough to be promoted. This does not work for SamplePGO because sampled counts may be much smaller. This patch uses PSI to check if the count is hot enough to be promoted.

Reviewers: davidxl, tejohnson, eraman

Reviewed By: davidxl

Subscribers: sanjoy, llvm-commits, mehdi_amini

Differential Revision: https://reviews.llvm.org/D36341

llvm-svn: 310416
2017-08-08 20:57:33 +00:00
Craig Topper
76bcda5719 [InstCombine] Support pulling left shifts through a subtract with constant LHS
We already support pulling through an add with constant RHS. We can do the same for subtract.
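
As a minimal IR sketch (the function name and constants are illustrative, not taken from the patch), this is the kind of input the fold now handles, distributing the shift over the subtract:
```
define i32 @shl_of_sub_const_lhs(i32 %x) {
  ; (7 - %x) << 2 can become (7 << 2) - (%x << 2), i.e. 28 - (%x << 2),
  ; mirroring the existing fold for add with a constant RHS.
  %sub = sub i32 7, %x
  %shl = shl i32 %sub, 2
  ret i32 %shl
}
```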

Differential Revision: https://reviews.llvm.org/D36443

llvm-svn: 310407
2017-08-08 20:14:11 +00:00
Chad Rosier
98b1b144a6 [NewGVN] Use a cast instead of a dyn_cast.
Differential Revision: https://reviews.llvm.org/D36478

llvm-svn: 310397
2017-08-08 18:41:49 +00:00
Anna Thomas
ce71877c5e [LoopVectorize] Fix assertion failure in Fcmp vectorization
Summary:
When vectorizing fcmps we can trip an incorrect cast assertion when setting the
FastMathFlags after generating the vectorized FCmp.
This can happen if the FCmp can be folded to true or false directly. The fix
here is to set the FastMathFlag using the FastMathFlagBuilder *before* creating
the FCmp Instruction. This is what's done by other optimizations such as
InstCombine.
Added a test case which trips the cast assertion without this patch.

Reviewers: Ayal, mssimpso, mkuper, gilr

Reviewed by: Ayal, mssimpso

Subscribers: llvm-commits, mzolotukhin

Differential Revision: https://reviews.llvm.org/D36244

llvm-svn: 310389
2017-08-08 18:07:44 +00:00
Craig Topper
3fbd17d46d [InstCombine] Cast to BinaryOperator earlier in foldSelectIntoOp to simplify the code.
We no longer need the explicit operand count check or the later dynamic cast.

llvm-svn: 310339
2017-08-08 06:19:24 +00:00
Chandler Carruth
14f567c031 [PM] Fix new LoopUnroll function pass by invalidating loop analysis
results when a loop is completely removed.

This is very hard to manifest as a visible bug. You need to arrange for
there to be a subsequent allocation of a 'Loop' object which gets the
exact same address as the one which the unroll deleted, and you need the
LoopAccessAnalysis results to be significant in the way that they're
stale. And you need a million other things to align.

But when it does, you get a deeply mysterious crash due to actually
finding a stale analysis result. This fixes the issue and tests for it
by directly checking we successfully invalidate things. I have not been
able to get *any* test case to reliably trigger this. Changes to LLVM
itself caused the only test case I ever had to cease to crash.

I've looked pretty extensively at less brittle ways of fixing this and
they are actually very, very hard to do. This is a somewhat strange and
unusual case as we have a pass which is deleting an IR unit, but is not
running within that IR unit's pass framework (which is what handles this
cleanly for the normal loop unroll). And where there isn't a definitive
way to clear *all* of the stale cache entries. And where the pass *is*
updating the core analysis that provides the IR units!

For example, we don't have any of these problems with Function analyses
because it is easy to clear out function analyses when the functions
themselves may have been deleted -- we clear an entire module's worth!
But that is too heavy of a hammer down here in the LoopAnalysisManager
layer.

A better long-term solution IMO is to require that AnalysisManagers
make their keys durable to this kind of thing. Specifically, when
caching an analysis for one IR unit that is conceptually "owned" by
a higher level IR unit, the AnalysisManager should incorporate this into
its data structures so that we can reliably clear these results without
having to teach each and every pass to do so manually as we do here. But
that is a change for another day as it will be a fairly invasive change
to the AnalysisManager infrastructure. Until then, this fortunately
seems to be quite rare.

llvm-svn: 310333
2017-08-08 02:24:20 +00:00
Evgeny Stupachenko
8c60bbb9d2 Reapply fix PR23384 (part 3 of 3) r304824 (was reverted in r305720).
The root cause of the revert was fixed in PR33514.

Summary:
The patch makes instruction count the highest priority for the
LSR solution on X86 (previously register count had the highest priority).

Reviewers: qcolombet

Differential Revision: http://reviews.llvm.org/D30562

From: Evgeny Stupachenko <evstupac@gmail.com>
                         <evgeny.v.stupachenko@intel.com>
llvm-svn: 310289
2017-08-07 19:56:34 +00:00
Aaron Ballman
7608dca219 Removing an unused variable that was missed with the refactoring in r310272; NFC.
llvm-svn: 310285
2017-08-07 19:26:17 +00:00
Craig Topper
e851864d61 [InstCombine] Support (X | C1) & C2 --> (X & C2^(C1&C2)) | (C1&C2) for vector splats
Note: the original code I deleted incorrectly listed this as (X | C1) & C2 --> (X & C2^(C1&C2)) | C1, which is only valid if C1 is a subset of C2. It relied on SimplifyDemandedBits to remove any extra bits from C1 before we got to that code.

My new implementation avoids relying on that behavior so that it can be naively verified with Alive.
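
A minimal IR sketch of the splat case (constants chosen only for illustration): with C1 = 12 and C2 = 10 we get C1&C2 = 8 and C2^(C1&C2) = 2, so the pair below can fold to (%x & <2,2>) | <8,8>.
```
define <2 x i32> @or_then_and_splat(<2 x i32> %x) {
  ; (%x | 12) & 10 --> (%x & 2) | 8, now recognized for splat vectors too.
  %or = or <2 x i32> %x, <i32 12, i32 12>
  %and = and <2 x i32> %or, <i32 10, i32 10>
  ret <2 x i32> %and
}
```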

Differential Revision: https://reviews.llvm.org/D36384

llvm-svn: 310272
2017-08-07 18:10:39 +00:00
Alexey Bataev
9f19665894 [SLP] General improvements of SLP vectorization process.
The patch tries to improve the two-pass vectorization analysis in the SLP vectorizer. What it does:

1. Defines key nodes that are the vectorization roots. Previously, vectorization started only if a StoreInst or ReturnInst was found. Now, vectorization starts for all Instructions with no users and void type (Terminators, StoreInst) plus CallInsts.
2. CmpInsts, InsertElementInsts and InsertValueInsts are stored in an
array. This array is processed only after the vectorization of the
key node that comes first after these instructions is finished.
Vectorization goes in reverse order to try to vectorize as much code
as possible.

Reviewers: mzolotukhin, Ayal, mkuper, gilr, hfinkel, RKSimon

Subscribers: ashahid, anemet, RKSimon, mssimpso, llvm-commits

Differential Revision: https://reviews.llvm.org/D29826

llvm-svn: 310260
2017-08-07 15:25:49 +00:00
Alexey Bataev
92afaf2479 Revert "[SLP] General improvements of SLP vectorization process."
This reverts commit r310255.

llvm-svn: 310257
2017-08-07 14:51:52 +00:00
Alexey Bataev
ec62fc0fc9 [SLP] General improvements of SLP vectorization process.
Summary:
The patch tries to improve the two-pass vectorization analysis in the SLP vectorizer. What it does:
1. Defines key nodes that are the vectorization roots. Previously, vectorization started only if a StoreInst or ReturnInst was found. Now, vectorization starts for all Instructions with no users and void type (Terminators, StoreInst) plus CallInsts.
2. CmpInsts, InsertElementInsts and InsertValueInsts are stored in an array. This array is processed only after the vectorization of the key node that comes first after these instructions is finished. Vectorization goes in reverse order to try to vectorize as much code as possible.

Reviewers: mzolotukhin, Ayal, mkuper, gilr, hfinkel, RKSimon

Subscribers: ashahid, anemet, RKSimon, mssimpso, llvm-commits

Differential Revision: https://reviews.llvm.org/D29826

llvm-svn: 310255
2017-08-07 14:03:17 +00:00
Vitaly Buka
cb2b0513d2 [asan] Fix asan dynamic shadow check before copyArgsPassedByValToAllocas
llvm-svn: 310242
2017-08-07 07:35:33 +00:00
Vitaly Buka
de5aa1767f [asan] Disable checking of arguments passed by value for --asan-force-dynamic-shadow
Fails with "Instruction does not dominate all uses!"

llvm-svn: 310241
2017-08-07 07:12:34 +00:00
Davide Italiano
1031f1aec8 [Reassociate] Use a range loop for clarity. NFCI.
While here, rename `i` to `Rank` as the latter is more
self-explanatory (and this code also uses `I` two lines below to
identify an Instruction).

llvm-svn: 310238
2017-08-07 01:57:21 +00:00
Davide Italiano
aeb942e819 [Reassociate] Try to bail out early when canonicalizing.
This commit rearranges the checks to avoid calls to getRank()
when not needed (e.g. when RHS == LHS).

llvm-svn: 310237
2017-08-07 01:49:09 +00:00
Craig Topper
7792eb189a [InstCombine] Remove shift handling from OptAndOp.
Summary: This is all handled by SimplifyDemandedBits.

Reviewers: spatel, davide

Reviewed By: davide

Subscribers: davide, llvm-commits

Differential Revision: https://reviews.llvm.org/D36382

llvm-svn: 310234
2017-08-06 23:30:49 +00:00
Craig Topper
5ce12bbc0a [InstCombine] Support (X ^ C1) & C2 --> (X & C2) ^ (C1&C2) for vector splats.
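
A minimal IR sketch of the splat form this enables (constants are illustrative): since AND distributes over XOR, (%x ^ 5) & 3 is (%x & 3) ^ (5 & 3), i.e. (%x & 3) ^ 1.
```
define <4 x i8> @xor_then_and_splat(<4 x i8> %x) {
  ; (%x ^ 5) & 3 --> (%x & 3) ^ 1 for splat vectors.
  %xor = xor <4 x i8> %x, <i8 5, i8 5, i8 5, i8 5>
  %and = and <4 x i8> %xor, <i8 3, i8 3, i8 3, i8 3>
  ret <4 x i8> %and
}
```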
llvm-svn: 310233
2017-08-06 23:11:49 +00:00
Craig Topper
926fe5d0ed [InstCombine] Support '(C - X) ^ signmask -> (C + signmask - X)' and '(X + C) ^ signmask -> (X + C + signmask)' for vector splats.
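
A minimal IR sketch of the first fold for splats (constants are illustrative): xor with the sign mask is the same as adding it, so (100 - %x) ^ 0x8000 becomes (100 + 0x8000) - %x.
```
define <2 x i16> @sub_xor_signmask_splat(<2 x i16> %x) {
  ; (100 - %x) ^ 0x8000 --> (100 + 0x8000) - %x for splat vectors.
  %sub = sub <2 x i16> <i16 100, i16 100>, %x
  %xor = xor <2 x i16> %sub, <i16 -32768, i16 -32768>
  ret <2 x i16> %xor
}
```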
llvm-svn: 310232
2017-08-06 22:17:21 +00:00
Craig Topper
d5cf04da1c [InstCombine] Support ~(c-X) --> X+(-c-1) and ~(X-c) --> (-c-1)-X for splat vectors.
llvm-svn: 310195
2017-08-06 06:28:41 +00:00
Craig Topper
f043cfbb9a [InstCombine] Fold (C - X) ^ signmask -> (C + signmask - X).
llvm-svn: 310186
2017-08-05 20:00:44 +00:00
Craig Topper
c19e9b9071 [InstCombine] Teach the code that pulls logical operators through constant shifts to handle vector splats too.
llvm-svn: 310185
2017-08-05 20:00:42 +00:00
Craig Topper
eb05f8fcb3 [InstCombine] Support vector splats in foldSelectICmpAnd.
Unfortunately, it looks like there are some other missed optimizations in the generated code for some of these cases. I'll try to look at some of those next.

llvm-svn: 310184
2017-08-05 20:00:41 +00:00
Dinar Temirbulatov
a7733cfe8c [SLPVectorizer] Add extra parameter to setInsertPointAfterBundle to handle different opcodes, NFCI.
Differential Revision: https://reviews.llvm.org/D35769

llvm-svn: 310183
2017-08-05 18:43:52 +00:00
Sanjay Patel
2eb16c0389 [InstCombine] refactor trunc(binop) transforms; NFCI
In addition to moving the shift transforms over, we may want to
detect too-wide rotate patterns here (PR34046). 

llvm-svn: 310181
2017-08-05 15:19:18 +00:00
Craig Topper
0f832f8e82 [InstCombine] In foldSelectICmpAnd, if we need to truncate from the 'and' type to the 'select' type, do it after shifting right instead of just bailing.
Previously we were always trying to emit the zext or truncate before any shift. This meant if the 'and' mask was larger than the size of the truncate we would skip the transformation.

Now we shift the result of the and right first leaving the bit within the range of the truncate.

This matches what we are doing in foldSelectICmpAndOr for the same problem.
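
A minimal IR sketch of an input that previously bailed (the function name and constants are illustrative, not from the patch): the 'and' mask tests bit 9, which does not fit in the i8 select type, so the result of the 'and' is now shifted right before truncating instead of giving up.
```
define i8 @select_icmp_and_wide_mask(i32 %x) {
  ; The tested bit (value 512) lies above the i8 range of the select;
  ; shifting right first keeps the fold applicable.
  %and = and i32 %x, 512
  %cmp = icmp eq i32 %and, 0
  %sel = select i1 %cmp, i8 0, i8 4
  ret i8 %sel
}
```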

llvm-svn: 310159
2017-08-05 01:45:17 +00:00
Sanjay Patel
23bd4a9364 [InstCombine] narrow truncated add/sub/mul with constant
Name: narrow_sub
  %sub = sub i32 C1, %x
  %r = trunc i32 %sub to i8
  =>  
  %xn = trunc i32 %x to i8
  %narrowC = trunc i32 C1 to i8
  %r = sub i8 %narrowC, %xn
 
Name: narrow_add
  %add = add i32 %x, C1
  %r = trunc i32 %add to i8
  =>  
  %xn = trunc i32 %x to i8
  %narrowC = trunc i32 C1 to i8
  %r = add i8 %xn, %narrowC
  
Name: narrow_mul
  %mul = mul i32 %x, C1
  %r = trunc i32 %mul to i8
  =>  
  %xn = trunc i32 %x to i8
  %narrowC = trunc i32 C1 to i8
  %r = mul i8 %xn, %narrowC


http://rise4fun.com/Alive/QpS

This doesn't solve PR34046 (failure to recognize rotate):
https://bugs.llvm.org/show_bug.cgi?id=34046
...but it reduces an extra complication in the examples from that report
to a form that we can match more easily.

llvm-svn: 310141
2017-08-04 22:30:34 +00:00
Nico Weber
1ee42ff794 Revert r310055, it caused PR34074.
llvm-svn: 310123
2017-08-04 20:40:38 +00:00
Evgeny Stupachenko
2be9c5c55b Fix PR33514
Summary:
The bug was uncovered after the fix for PR23384 (part 3 of 3).
The patch restricts pointer multiplication in SCEV computation for ICmpZero.

Reviewers: qcolombet

Differential Revision: http://reviews.llvm.org/D36170

From: Evgeny Stupachenko <evstupac@gmail.com>
                         <evgeny.v.stupachenko@intel.com>
llvm-svn: 310092
2017-08-04 18:46:13 +00:00
Reid Kleckner
e29583d0d2 [ArgPromotion] Preserve alignment of byval argument in new alloca
The frontend may have requested a higher alignment for any reason, and
downstream optimizations may already have taken advantage of it.  We
should keep the same alignment when moving the allocation from the
parameter area to the local variable area.

Fixes PR34038
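
A minimal IR sketch of the situation (types, names, and the align 16 request are illustrative, not from PR34038): when ArgPromotion rewrites the byval argument as a local alloca in the callee, that alloca should keep the requested alignment.
```
%struct.S = type { i32, i32 }

define internal void @callee(%struct.S* byval align 16 %s) {
  ; After promotion, the replacement alloca for %s should stay align 16
  ; rather than falling back to the type's default alignment.
  %p = getelementptr %struct.S, %struct.S* %s, i32 0, i32 0
  %v = load i32, i32* %p
  ret void
}
```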

llvm-svn: 310071
2017-08-04 17:09:11 +00:00
Benjamin Kramer
423f2d0b11 [InstCombine] Fold single-use variable into assert.
Avoids unused variable warnings in Release builds. No functional change.

llvm-svn: 310064
2017-08-04 16:08:41 +00:00
Craig Topper
bcfbcd1b18 [InstCombine] Remove the (not (sext)) case from foldBoolSextMaskToSelect and inline the remaining code to match visitOr
Summary:
The (not (sext)) case is really (xor (sext), -1) which should have been simplified to (sext (xor, 1)) before we got here. So we shouldn't need to handle it.

With that taken care of we only need two cases, so we don't need the swap anymore. This puts us in sync with the equivalent code in visitOr, so inline this to match.

Reviewers: spatel, eli.friedman, majnemer

Reviewed By: spatel

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D36240

llvm-svn: 310063
2017-08-04 16:07:20 +00:00
Craig Topper
fe1a0c7e67 [InstCombine] Use ConstantInt::getFalse to reduce some code. NFC
llvm-svn: 310062
2017-08-04 16:07:18 +00:00
Sanjay Patel
0e224c7256 [InstCombine] narrow lshr with constant
Name: narrow_shift
Pre: C1 < 8
%zx = zext i8 %x to i32
%l = lshr i32 %zx, C1
  =>  
%narrowC = trunc i32 C1 to i8
%ns = lshr i8 %x, %narrowC
%l = zext i8 %ns to i32

http://rise4fun.com/Alive/jIV

This isn't directly applicable to PR34046 as written, but we
need to have more narrowing folds like this to be sure that
rotate patterns are recognized.

llvm-svn: 310060
2017-08-04 15:42:47 +00:00
Filipe Cabecinhas
4058d9a4ca [DSE] Merge stores when the later store only writes to memory locations the earlier store also wrote to.
Summary:
This fixes PR31777.

If both stores' values are ConstantInt, we merge the two stores
(shifting the smaller store appropriately) and replace the earlier (and
larger) store with an updated constant.

In the future we should also support vectors of integers. And maybe
float/double if we can.
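
A minimal IR sketch of the pattern (constants are illustrative): the later i8 store only touches a byte the earlier i32 store already wrote, so the two can be merged by folding the small value into the earlier store's constant and deleting the smaller store.
```
define void @merge_constant_stores(i32* %p) {
  ; The earlier, larger store covers all four bytes; the later store
  ; overwrites just one of them with another constant.
  store i32 305419896, i32* %p            ; 0x12345678
  %q = bitcast i32* %p to i8*
  store i8 -1, i8* %q                     ; rewrites one already-covered byte
  ret void
}
```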

Reviewers: hfinkel, junbuml, jfb, RKSimon, bkramer

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D30703

llvm-svn: 310055
2017-08-04 12:28:36 +00:00
Nikolai Bozhenov
899aec6301 [InstCombine] Canonicalize clamp of float types to minmax in fast mode.
Summary:
This commit allows matchSelectPattern to recognize clamp of float
arguments in the presence of FMF the same way as already done for
integers.

This case is a little different though. With integers, given the
min/max pattern is recognized, DAGBuilder starts selecting MIN/MAX
"automatically". That is not the case for float, because for them only
full FMINNAN/FMINNUM/FMAXNAN/FMAXNUM ISD nodes exist and they do care
about NaNs. On the other hand, some backends (e.g. X86) have only
FMIN/FMAX nodes that do not care about NaNs, and the former NAN/NUM
nodes are illegal, so selection does not happen. So I decided to do
this kind of transformation in IR (InstCombiner) instead of
complicating the logic in the backend.
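
A minimal IR sketch of the kind of float clamp this recognizes (the function name, bounds, and use of the fast flag are illustrative): a compare+select clamp of %x to [0.0, 1.0] with fast-math flags present.
```
define float @clamp01(float %x) {
  ; min(%x, 1.0) followed by max(..., 0.0), written as fcmp+select;
  ; with FMF this can now be matched as a clamp (min/max) pattern.
  %cmp.hi = fcmp fast olt float %x, 1.0
  %min = select i1 %cmp.hi, float %x, float 1.0
  %cmp.lo = fcmp fast ogt float %min, 0.0
  %max = select i1 %cmp.lo, float %min, float 0.0
  ret float %max
}
```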

Reviewers: spatel, jmolloy, majnemer, efriedma, craig.topper

Reviewed By: efriedma

Subscribers: hiraditya, javed.absar, n.bozhenov, llvm-commits

Patch by Andrei Elovikov <andrei.elovikov@intel.com>

Differential Revision: https://reviews.llvm.org/D33186

llvm-svn: 310054
2017-08-04 12:22:17 +00:00
Max Kazantsev
9209a4521a Do not declare a variable which is used only in assert. NFC
llvm-svn: 310034
2017-08-04 07:41:24 +00:00
Max Kazantsev
ffa0782bec [IRCE] Handle loops with step different from 1/-1
This patch generalizes IRCE to handle IV steps that are not equal to 1 or -1.

Differential Revision: https://reviews.llvm.org/D35539

llvm-svn: 310032
2017-08-04 07:01:04 +00:00
Max Kazantsev
1aed4a4dfc [IRCE] Recognize loops with unsigned latch conditions
This patch enables recognition of loops with ult/ugt latch conditions.

Differential Revision: https://reviews.llvm.org/D35302

llvm-svn: 310027
2017-08-04 05:40:20 +00:00
Craig Topper
14179d8998 [InstCombine] Move the call to foldSelectICmpAnd into foldSelectInstWithICmp. NFCI
llvm-svn: 310025
2017-08-04 05:12:37 +00:00
Craig Topper
0f297d09b1 [InstCombine] Remove unnecessary casts. NFC
We're calling an overload of getOpcode that already returns Instruction::CastOps.

llvm-svn: 310024
2017-08-04 05:12:35 +00:00
Victor Leschuk
eabd98601c Un-revert r310014: false revert, it wasn't the cause of build break
llvm-svn: 310021
2017-08-04 04:51:15 +00:00
Victor Leschuk
fe0e5c87b4 Revert r310014 as it breaks build lld-x86_64-darwin13
llvm-svn: 310020
2017-08-04 04:43:54 +00:00
Adrian Prantl
d3acfe5504 Teach GlobalSRA to update the debug info for split-up globals.
This is similar to what we are doing in "regular" SROA and creates
DW_OP_LLVM_fragment operations to describe the resulting variables.

rdar://problem/33654891

llvm-svn: 310014
2017-08-04 01:19:54 +00:00
Teresa Johnson
cde6934bb7 Use profile summary to disable peeling for huge working sets
Summary:
Detect when the working set size of a profiled application is huge,
by comparing the number of counts required to reach the hot percentile
in the profile summary to a large threshold*.

When the working set size is determined to be huge, disable peeling
to avoid bloating the working set further.

*Note that the selected threshold (15K) is significantly larger than the
largest working set value in SPEC cpu2006 (which is gcc at around 11K).

Reviewers: davidxl

Subscribers: mehdi_amini, mzolotukhin, eraman, llvm-commits

Differential Revision: https://reviews.llvm.org/D36288

llvm-svn: 310005
2017-08-03 23:42:58 +00:00
Davide Italiano
6806b06bf7 [NewGVN] Fix the case where we have a phi-of-ops which goes away.
Patch by Daniel Berlin, fixes PR33196 (and probably something else).

llvm-svn: 309988
2017-08-03 21:17:49 +00:00
Teresa Johnson
ffba812867 Disable loop peeling during full unrolling pass.
Summary:
Peeling should not occur during the full unrolling invocation early
in the pipeline, but rather later with partial and runtime loop
unrolling. The later loop unrolling invocation will also eventually
utilize profile summary and branch frequency information, which
we would like to use to control peeling. And for ThinLTO we want
to delay peeling until the backend (post thin link) phase, just as
we do for most types of unrolling.

Ensure peeling doesn't occur during the full unrolling invocation
by adding a parameter to the shared implementation function, similar
to the way partial and runtime loop unrolling are disabled.

Performance results for ThinLTO suggest this has a neutral to positive
effect on some internal benchmarks.

Reviewers: chandlerc, davidxl

Subscribers: mzolotukhin, llvm-commits, mehdi_amini

Differential Revision: https://reviews.llvm.org/D36258

llvm-svn: 309966
2017-08-03 17:52:38 +00:00
Sanjay Patel
ef97ac86ad [NewGVN] fix typos; NFC
llvm-svn: 309946
2017-08-03 15:18:27 +00:00
Ewan Crawford
ae5418b4ef [Cloning] Move distinct GlobalVariable debug info metadata in CloneModule
Duplicating the distinct Subprogram and CU metadata nodes seems like the incorrect thing to do in CloneModule for GlobalVariable debug info, as it results in the scope of the GlobalVariable DI no longer being consistent with the rest of the module and the new CU being absent from llvm.dbg.cu.

Fixed by adding RF_MoveDistinctMDs to MapMetadata flags for GlobalVariables.

Current unit test IR after clone:
```
@gv = global i32 1, comdat($comdat), !dbg !0, !type !5

define private void @f() comdat($comdat) personality void ()* @persfn !dbg !14 {

!llvm.dbg.cu = !{!10}

!0 = !DIGlobalVariableExpression(var: !1)
!1 = distinct !DIGlobalVariable(name: "gv", linkageName: "gv", scope: !2, file: !3, line: 1, type: !9, isLocal: false, isDefinition: true)
!2 = distinct !DISubprogram(name: "f", linkageName: "f", scope: null, file: !3, line: 4, type: !4, isLocal: true, isDefinition: true, scopeLine: 3, isOptimized: false, unit: !6, variables: !5)
!3 = !DIFile(filename: "filename.c", directory: "/file/dir/")
!4 = !DISubroutineType(types: !5)
!5 = !{}
!6 = distinct !DICompileUnit(language: DW_LANG_C99, file: !7, producer: "CloneModule", isOptimized: false, runtimeVersion: 0, emissionKind: FullDebug, enums: !5, globals: !8)
!7 = !DIFile(filename: "filename.c", directory: "/file/dir")
!8 = !{!0}
!9 = !DIBasicType(tag: DW_TAG_unspecified_type, name: "decltype(nullptr)")
!10 = distinct !DICompileUnit(language: DW_LANG_C99, file: !7, producer: "CloneModule", isOptimized: false, runtimeVersion: 0, emissionKind: FullDebug, enums: !5, globals: !11)
!11 = !{!12}
!12 = !DIGlobalVariableExpression(var: !13)
!13 = distinct !DIGlobalVariable(name: "gv", linkageName: "gv", scope: !14, file: !3, line: 1, type: !9, isLocal: false, isDefinition: true)
!14 = distinct !DISubprogram(name: "f", linkageName: "f", scope: null, file: !3, line: 4, type: !4, isLocal: true, isDefinition: true, scopeLine: 3, isOptimized: false, unit: !10, variables: !5)
```

Patched IR after clone:
```
@gv = global i32 1, comdat($comdat), !dbg !0, !type !5

define private void @f() comdat($comdat) personality void ()* @persfn !dbg !2 {

!llvm.dbg.cu = !{!6}

!0 = !DIGlobalVariableExpression(var: !1)
!1 = distinct !DIGlobalVariable(name: "gv", linkageName: "gv", scope: !2, file: !3, line: 1, type: !9, isLocal: false, isDefinition: true)
!2 = distinct !DISubprogram(name: "f", linkageName: "f", scope: null, file: !3, line: 4, type: !4, isLocal: true, isDefinition: true, scopeLine: 3, isOptimized: false, unit: !6, variables: !5)
!3 = !DIFile(filename: "filename.c", directory: "/file/dir/")
!4 = !DISubroutineType(types: !5)
!5 = !{}
!6 = distinct !DICompileUnit(language: DW_LANG_C99, file: !7, producer: "CloneModule", isOptimized: false, runtimeVersion: 0, emissionKind: FullDebug, enums: !5, globals: !8)
!7 = !DIFile(filename: "filename.c", directory: "/file/dir")
!8 = !{!0}
!9 = !DIBasicType(tag: DW_TAG_unspecified_type, name: "decltype(nullptr)")
```

Reviewers: aprantl, probinson, dblaikie, echristo, loladiro
Reviewed By: aprantl
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D36082

llvm-svn: 309928
2017-08-03 09:23:03 +00:00
Matt Arsenault
3207fbe899 LV: Don't insert runtime ptr checks on divergent targets
llvm-svn: 309890
2017-08-02 21:43:08 +00:00