mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-11-26 12:43:36 +01:00
Commit Graph

153386 Commits

Sanjay Patel
4d8a49979a [utils] add aarch64 target as an option
I don't know enough to add a custom scrubber for AArch64, so I just re-used ARM.

llvm-svn: 311795
2017-08-25 19:33:18 +00:00
Kostya Serebryany
6b3c9c7943 [sanitizer-coverage] extend fsanitize-coverage=pc-table with flags for every PC
llvm-svn: 311794
2017-08-25 19:29:47 +00:00
Sanjay Patel
c6223c35ce [x86] regenerate checks; NFC
llvm-svn: 311793
2017-08-25 19:25:03 +00:00
Haicheng Wu
22dbb2c57b [InlineCost] Small changes to early exit condition. NFC.
Change the early exit condition from Cost > Threshold to Cost >= Threshold
because the inline condition is Cost < Threshold.
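
As a hedged illustration, here is a standalone sketch (hypothetical names,
not the actual InlineCost code) of why the early exit must be the exact
complement of the inline condition, using the boundary case Cost == Threshold:

  #include <iostream>

  // Inline only while the accumulated cost is strictly below the threshold.
  bool shouldInline(int Cost, int Threshold) { return Cost < Threshold; }

  // Give up as soon as inlining is already ruled out. The old condition,
  // Cost > Threshold, kept analyzing at Cost == Threshold even though
  // shouldInline is already false there.
  bool canExitEarly(int Cost, int Threshold) { return Cost >= Threshold; }

  int main() {
    std::cout << shouldInline(100, 100) << ' '     // 0: not inlined
              << canExitEarly(100, 100) << '\n';   // 1: safe to stop early
  }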

Differential Revision: https://reviews.llvm.org/D37087

llvm-svn: 311791
2017-08-25 19:00:33 +00:00
Craig Topper
a3080591ad [InstCombine] Don't fall back to only calling computeKnownBits if the upper bit of Add/Sub is demanded.
Just create an all 1s demanded mask and continue recursing like normal. The recursive calls should be able to handle an all 1s mask and do the right thing.

The only time we should care about knowing whether the upper bit was demanded is when we need to know if we should clear the NSW/NUW flags.

Now that we have a consistent path through the code for all cases, use KnownBits::computeForAddSub to compute the known bits at the end since we already have the LHS and RHS.

My larger goal here is to move the code that turns add into xor if only 1 bit is demanded and no bits below it are non-zero from InstCombiner::OptAndOp to here. This will allow it to be more general instead of just looking for 'add' and 'and' with constant RHS.
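
One concrete instance of that add-to-xor idea, checked exhaustively in plain
C++ (an illustration, not the InstCombine code): if only bit 3 is demanded
and the constant has no bits set below bit 3, no carry can reach the demanded
bit, so add and xor agree on it.

  #include <cassert>
  #include <cstdint>

  int main() {
    const uint8_t Mask = 1u << 3;  // demand only bit 3
    const uint8_t C = 0x18;        // no bits set below bit 3
    for (unsigned X = 0; X < 256; ++X)
      assert(((uint8_t(X) + C) & Mask) == ((uint8_t(X) ^ C) & Mask));
  }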

Differential Revision: https://reviews.llvm.org/D36486

llvm-svn: 311789
2017-08-25 18:39:40 +00:00
Craig Topper
321f7d083a [InstCombine] Add tests to show missed opportunities to combine bit tests hidden by a sign compare and a truncate. NFC
llvm-svn: 311784
2017-08-25 17:14:35 +00:00
Florian Hahn
51630ed7b4 [LoopInterchange] Skip zext instructions when looking for induction var.
Summary:
SimplifyIndVar may introduce zext instructions to widen arguments of the
loop exit check. They should not prevent us from splitting the loop at
the induction variable, but maybe the check should be more conservative,
e.g. making sure it only extends arguments used by a comparison?

Reviewers: karthikthecool, mcrosier, mzolotukhin

Reviewed By: mcrosier

Subscribers: mzolotukhin, llvm-commits

Differential Revision: https://reviews.llvm.org/D34879

llvm-svn: 311783
2017-08-25 16:52:29 +00:00
David Blaikie
bdcf335b6b Fix unused-lambda-capture warning by using default capture-by-ref
Since the lambda isn't escaped (via a std::function or similar) it's
fine/better to use default capture-by-ref to provide semantics similar
to language-level nested scopes (if/for/while/etc).
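
A minimal generic example of the reasoning (hypothetical code, not the
patched file): a named capture that some build configuration leaves unused
triggers -Wunused-lambda-capture, while default capture-by-ref binds only
what the body actually references, just like a nested block scope.

  #include <cstdio>

  int main() {
    int Count = 0;
    // [&Count] would warn if a later edit dropped the use below; plain [&]
    // cannot, because it captures nothing that is not referenced.
    auto Bump = [&] { ++Count; };
    Bump();
    std::printf("%d\n", Count);  // prints 1
  }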

llvm-svn: 311782
2017-08-25 16:46:07 +00:00
Matt Morehouse
182c279f3c Fix buildbot breakage from r311763. Remove unused lambda capture.
llvm-svn: 311781
2017-08-25 16:19:26 +00:00
David Green
9ea302e3bc [gold] Fix up a new test to allow it to pass on non x86 builds.
Fix a test that is failing on a downstream ARM/AArch64
bootstrap. We just need to add an elf_x86_64 parameter to
gold.

llvm-svn: 311780
2017-08-25 16:14:56 +00:00
Michael Kruse
20af10c99e Normalize to LF line endings.
Commit r297442 introduced mixed CRLF/LF line endings to two files.
Normalize to LF-only line endings.

llvm-svn: 311774
2017-08-25 12:38:53 +00:00
Amjad Aboud
11ed402f52 [InstCombine] Consider more cases where SimplifyDemandedUseBits does not convert AShr to LShr.
There are cases where AShr has a better chance of being optimized than LShr,
especially when the demanded bits are not known to be zero yet are known to
be similar to the sign bit.
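
For background, a generic illustration (not the patch itself): the two shifts
agree only while the shifted-in high bits are irrelevant; once the demanded
bits reach into the sign-extended region, rewriting AShr as LShr changes the
result for negative inputs.

  #include <cassert>
  #include <cstdint>

  int main() {
    int8_t X = -16;                          // 0b11110000
    int8_t AShr = int8_t(X >> 2);            // arithmetic: 0b11111100 == -4
    int8_t LShr = int8_t(uint8_t(X) >> 2);   // logical:    0b00111100 == 60
    assert(AShr == -4 && LShr == 60);
    assert((AShr & 0x0F) == (LShr & 0x0F));  // only the high bits differ
  }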

Differential Revision: https://reviews.llvm.org/D36936

llvm-svn: 311773
2017-08-25 11:07:54 +00:00
Ilya Biryukov
7444ffb4de Use temporary directory when building docker image.
Summary:
This avoids races on copying of compiled clang from 'build' image
to 'release' image.

Reviewers: klimek, mehdi_amini

Reviewed By: mehdi_amini

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D37098

llvm-svn: 311769
2017-08-25 09:03:57 +00:00
Craig Topper
43c78a7681 [X86] Use SDValue::getOpcode instead of calling getNode and calling getOpcode on that. NFC
llvm-svn: 311765
2017-08-25 05:36:29 +00:00
Craig Topper
79a63a06a7 [X86] Use isUInt and isShiftedUInt instead of using our own masking and compares. NFCI
While there, use a local variable instead of calling C->getZExtValue() repeatedly.
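
Roughly, these helpers from llvm/Support/MathExtras.h test whether a value
fits in N unsigned bits, optionally left-shifted by S. A simplified
standalone sketch (the real templates may differ in detail):

  #include <cassert>
  #include <cstdint>

  // Does X fit in N unsigned bits?
  template <unsigned N> bool isUIntSketch(uint64_t X) {
    return N >= 64 || X < (UINT64_C(1) << N);
  }

  // Is X an N-bit unsigned value shifted left by S (low S bits zero)?
  template <unsigned N, unsigned S> bool isShiftedUIntSketch(uint64_t X) {
    return isUIntSketch<N + S>(X) && (X & ((UINT64_C(1) << S) - 1)) == 0;
  }

  int main() {
    assert(isUIntSketch<8>(255) && !isUIntSketch<8>(256));
    assert(isShiftedUIntSketch<8, 2>(0x3FC) && !isShiftedUIntSketch<8, 2>(0x3FE));
  }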

llvm-svn: 311764
2017-08-25 05:04:34 +00:00
Aditya Nandakumar
26abd3f645 [GISel]: Implement widenScalar for Legalizing G_PHI
https://reviews.llvm.org/D37018

llvm-svn: 311763
2017-08-25 04:57:27 +00:00
Chandler Carruth
fffa224483 [x86] NFC - normalize test case formatting of IR and generate CHECK
lines with the script rather than using manually written checks.

llvm-svn: 311753
2017-08-25 02:32:51 +00:00
Chandler Carruth
3a8609b87d Teach the llc check updater to recognize the end-of-function comment
used on Windows and sometimes Darwin. Cleans up generated patterns for
me quite a bit.

llvm-svn: 311752
2017-08-25 02:32:48 +00:00
Gor Nishanov
c0d1e14f07 [coroutines] Add support for symmetric control transfer (musttail on coro.resumes followed by a suspend)
Summary:
Add musttail to any resume instruction that is immediately followed by a
suspend (i.e. ret). We do this even at -O0 to support guaranteed tail calls
for symmetric coroutine control transfer (a C++ Coroutines TS extension).
This transformation is done only in the resume part of the coroutine, which
has a signature and calling convention identical to those of the coro.resume
call.
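
At the source level, this guaranteed tail call is what makes symmetric
transfer safe against stack growth. A hedged C++ sketch using the modern
<coroutine> spelling (the TS used std::experimental::; this is illustrative
code, not the pass itself):

  #include <coroutine>
  #include <cstdio>

  struct Task {
    struct promise_type {
      Task get_return_object() {
        return {std::coroutine_handle<promise_type>::from_promise(*this)};
      }
      std::suspend_always initial_suspend() { return {}; }
      std::suspend_always final_suspend() noexcept { return {}; }
      void return_void() {}
      void unhandled_exception() {}
    };
    std::coroutine_handle<promise_type> H;
  };

  struct Transfer {
    std::coroutine_handle<> Next;
    bool await_ready() noexcept { return false; }
    // Returning a handle resumes it as a tail call; this hop is what the
    // musttail-on-coro.resume lowering keeps bounded.
    std::coroutine_handle<> await_suspend(std::coroutine_handle<>) noexcept {
      return Next;
    }
    void await_resume() noexcept {}
  };

  Task B() { std::puts("B resumed via symmetric transfer"); co_return; }

  Task A(std::coroutine_handle<> Next) {
    std::puts("A running");
    co_await Transfer{Next};
  }

  int main() {
    Task TB = B();
    Task TA = A(TB.H);
    TA.H.resume();
    TA.H.destroy();
    TB.H.destroy();
  }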

Reviewers: GorNishanov

Reviewed By: GorNishanov

Subscribers: EricWF, majnemer, llvm-commits

Differential Revision: https://reviews.llvm.org/D37125

llvm-svn: 311751
2017-08-25 02:25:10 +00:00
Chandler Carruth
db2fd5a8d0 [x86] NFC: More refactoring to pave the way to extending this ISel logic
to handle other x86 pseudos that carry flags and thus can't be matched
by our ISel patterns with fused memory accesses.

Differential Revision: https://reviews.llvm.org/D37088

llvm-svn: 311749
2017-08-25 02:06:36 +00:00
Chandler Carruth
38343a5078 [x86] NFC - Refactor the custom lowering of (load; op; store) RMW sequences.
This extracts the code out of a giant switch in preparation for expanding it to
handle operations other than `inc` and `dec`. Add a FIXME indicating what's
coming here.

Differential Revision: https://reviews.llvm.org/D37045

llvm-svn: 311748
2017-08-25 02:04:03 +00:00
Craig Topper
15c99db224 [X86] Add TBM instructions to X86InstrInfo::isDefConvertible.
This allows us to remove "test" instructions and use the flags from the TBM instructions directly.

llvm-svn: 311747
2017-08-25 01:59:06 +00:00
Matt Arsenault
1286bf2696 DAG: Fix naming crime
Because isOperationCustom checked for custom lowering only on illegal
types, it behaved inconsistently with the other isOperation* functions,
so that
isOperationLegalOrCustom != (isOperationLegal || isOperationCustom)

Luckily this is only used in one place which already checks the
type legality on its own.
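
A toy model of the naming contract being restored (hypothetical standalone
code; the real methods live on TargetLowering and take an opcode and an EVT):

  #include <cassert>

  enum LegalizeAction { Legal, Custom, Expand };
  const LegalizeAction Action = Custom;  // stand-in query, type assumed legal

  bool isOperationLegal() { return Action == Legal; }
  // Fixed behavior: test the action alone instead of also requiring the
  // type to be illegal, so the identity asserted below holds.
  bool isOperationCustom() { return Action == Custom; }
  bool isOperationLegalOrCustom() {
    return Action == Legal || Action == Custom;
  }

  int main() {
    assert(isOperationLegalOrCustom() ==
           (isOperationLegal() || isOperationCustom()));
  }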

llvm-svn: 311743
2017-08-25 01:26:13 +00:00
Justin Bogner
c874810cb1 [sanitizer-coverage] Make sure pc-tables aren't dead stripped
Add a reference to the PC array in llvm.used so that linkers that
aggressively dead strip (like ld64) don't remove it.
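
The source-level analogue (a sketch with a hypothetical symbol name): marking
a global __attribute__((used)) is what places it in the llvm.used array, and
ld64 then keeps it alive even when nothing visibly references it.

  // With Clang, this otherwise-unreferenced static survives -dead_strip
  // because the attribute records it in llvm.used, the same mechanism the
  // pass now applies to its PC tables.
  __attribute__((used)) static const unsigned long FakePCTable[] = {0x1000,
                                                                    0x2000};

  int main() { return 0; }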

llvm-svn: 311742
2017-08-25 01:24:54 +00:00
Mandeep Singh Grang
1a02b574d1 [unittests] Remove reverse iteration tests which use pointer-like keys
Summary: The expected order of pointer-like keys is hash-function-dependent,
which in turn depends on the platform/environment. We need to come up with a
better way to test reverse iteration of containers with pointer-like keys.

Reviewers: dblaikie, mehdi_amini, efriedma, mgrang

Reviewed By: mgrang

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D37128

llvm-svn: 311741
2017-08-25 01:11:28 +00:00
Chandler Carruth
1a71d8b41c [x86] Back out one aspect of r311318: don't generically set
FeatureSlowUAMem32.

The idea was to mark things that are slow on widely available processors
as slow in the generic CPU so that the code generated for that CPU would
be fast across those processors. However, for this feature that doesn't
work out very well at all.

The problem here is that you can very easily enable AVX or AVX2 on top
of this generic CPU. For example, this can happen just by using AVX2
intrinsics from Clang within a region of code guarded by a dynamic CPU
feature test. When you do that, the generated code with SlowUAMem32 set
is ... amazingly slower. The problem is that there really aren't very
good alternatives to the unaligned loads, and so our vector codegen
regresses significantly.

The other issue is that there are plenty of AMD CPUs with AVX1 that
don't set FeatureSlowUAMem32 and so we shouldn't just check for AVX2
instead of this special feature. =/

It would be nice to have the target attribute logic be able to
enable/disable more than just one feature at a time and control this in
a more fine-grained and useful way, but that doesn't seem easy. Given
that it is only Sandybridge and Ivybridge that set this feature, for now
I'm just backing it out of the generic CPU. That has the additional
advantage of going back to the previous state that people seemed vaguely
happy with.

llvm-svn: 311740
2017-08-25 00:56:05 +00:00
Stephen Hines
7bc4971ffd Fix two (three) more issues with unchecked Error.
Summary:
If assertions are disabled, but LLVM_ABI_BREAKING_CHANGES is enabled,
this will cause an issue with an unchecked Success. Switching to
consumeError() is the correct way to bypass the check. This patch also
disables two tests that can't work without assertions enabled, since
llvm_unreachable() won't crash under NDEBUG.

Reviewers: llvm-commits, lhames

Reviewed By: lhames

Subscribers: lhames, pirama

Differential Revision: https://reviews.llvm.org/D36729

llvm-svn: 311739
2017-08-25 00:48:21 +00:00
Chandler Carruth
78f62fed59 [x86] Fix an amazing goof in the handling of sub, or, and xor lowering.
The comment for this code indicated that it should work similarly to our
handling of add lowering above: if we see uses of an instruction other
than flag usage and store usage, it tries to avoid the specialized
X86ISD::* nodes that are designed for flag+op modeling and emits an
explicit test.

Problem is, only the add case actually did this. In all the other cases,
the logic was incomplete and inverted. Any time the value was used by
a store, we bailed on the specialized X86ISD node. All of this appears
to have been historical, left over from when we had different logic here. =/

Turns out, we have quite a few patterns designed around these nodes. We
should actually form them. I fixed the code to match what we do for add,
and it has quite a positive effect just within some of our test cases.
The only thing close to a regression I see is using:

  notl %r
  testl %r, %r

instead of:

  xorl $-1, %r

But we can add a pattern or something to fold that back out. The
improvements seem more than worth this.

I've also worked with Craig to update the comments to no longer be
actively contradicted by the code. =[ Some of this still remains
a mystery to both Craig and myself, but this seems like a large step in
the direction of consistency and slightly more accurate comments.

Many thanks to Craig for help figuring out this nasty stuff.

Differential Revision: https://reviews.llvm.org/D37096

llvm-svn: 311737
2017-08-25 00:34:07 +00:00
Sanjay Patel
7708490317 [DAG] convert vector select-of-constants to logic/math
This goes back to a discussion about IR canonicalization. We'd like to preserve and convert
more IR to 'select' than we currently do because that's likely the best choice in IR:
http://lists.llvm.org/pipermail/llvm-dev/2016-September/105335.html
...but that's often not true for codegen, so we need to account for this pattern coming into
the backend and transform it to better DAG ops.

Steps in this patch:

  1. Add an EVT param to the existing convertSelectOfConstantsToMath() TLI hook to more finely
     enable this transform. Other targets will probably want that anyway to distinguish scalars
     from vectors. We're using that here to exclude AVX512 targets, but it may not be necessary.

  2. Convert a vselect to ext+add (see the scalar sketch below). This
     eliminates a constant load/materialization, and the vector ext is often
     free.

Implementing a more general fold using xor+and can be a follow-up for targets that don't have
a legal vselect. It's also possible that we can remove the TLI hook for the special case fold
implemented here because we're eliminating a constant, but it needs to be tested on other
targets.
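
A scalar sanity check of step 2 (generic C++; the patch performs this per
lane on vectors in the DAG): select(Cond, C+1, C) equals zext(Cond) + C, and
select(Cond, C-1, C) equals sext(Cond) + C, so the second constant vector
never needs to be materialized.

  #include <cassert>
  #include <cstdint>

  int main() {
    const int32_t C = 41;
    for (bool Cond : {false, true}) {
      // select(Cond, C + 1, C) == zext(Cond) + C
      assert((Cond ? C + 1 : C) == int32_t(uint32_t(Cond)) + C);
      // select(Cond, C - 1, C) == sext(Cond) + C, since sext(i1 true) == -1
      assert((Cond ? C - 1 : C) == -int32_t(Cond) + C);
    }
  }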

Differential Revision: https://reviews.llvm.org/D36840

llvm-svn: 311731
2017-08-24 23:24:43 +00:00
Mandeep Singh Grang
1e72a9c7fd [ADT] Enable reverse iteration for DenseMap
Reviewers: mehdi_amini, dexonsmith, dblaikie, davide, chandlerc, davidxl, echristo, efriedma

Reviewed By: dblaikie

Subscribers: rsmith, mgorny, emaste, llvm-commits

Differential Revision: https://reviews.llvm.org/D35043

llvm-svn: 311730
2017-08-24 23:02:48 +00:00
Xinliang David Li
a3e43392cc [Profile] backward propagate profile info in JumpThreading
Take-2 after fixing bugs in the original patch.

Differential Revision: http://reviews.llvm.org/D36864

llvm-svn: 311727
2017-08-24 22:54:01 +00:00
Sanjay Patel
cefd4148c8 [InstCombine] fix and enhance udiv/urem narrowing
There are 3 small independent changes here:

  1. Account for multiple uses in the pattern matching: avoid the transform if it increases the instruction count.
  2. Add a missing fold for the case where the numerator is the constant (see the sketch below): http://rise4fun.com/Alive/E2p
  3. Enable all folds for vector types.

There's still one more potential change - use "shouldChangeType()" to keep from transforming to an illegal integer type.
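
A quick standalone check of fold 2 above, in the spirit of the linked Alive
proof (generic C++, not the InstCombine code): when the constant numerator
fits in the narrow type, the wide udiv equals the narrow udiv widened
afterward.

  #include <cassert>
  #include <cstdint>

  int main() {
    const uint32_t C = 200;  // fits in 8 bits
    for (uint32_t X = 1; X < 256; ++X) {
      uint32_t Wide = C / X;                              // udiv i32
      uint32_t Narrow = uint8_t(uint8_t(C) / uint8_t(X)); // udiv i8, then zext
      assert(Wide == Narrow);
    }
  }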

Differential Revision: https://reviews.llvm.org/D36988

llvm-svn: 311726
2017-08-24 22:54:01 +00:00
Dehao Chen
a9454ea7bf Move accurate-sample-profile into the function attribute.
Summary: We need to have accurate-sample-profile in the function attributes so that it works with LTO.

Reviewers: davidxl, rsmith

Reviewed By: davidxl

Subscribers: sanjoy, mehdi_amini, javed.absar, llvm-commits, eraman

Differential Revision: https://reviews.llvm.org/D37113

llvm-svn: 311706
2017-08-24 21:37:04 +00:00
Eugene Zelenko
29b17aab3e [CodeGen] Fix some Clang-tidy modernize-use-using and Include What You Use warnings; other minor fixes (NFC).
llvm-svn: 311703
2017-08-24 21:21:39 +00:00
Chad Rosier
e5a35ab997 [PartialInlining] Formatting. NFC.
llvm-svn: 311702
2017-08-24 21:21:09 +00:00
Nathan Hawes
d1c6d7e66f test commit: fix typo in comment
llvm-svn: 311701
2017-08-24 21:20:41 +00:00
Chad Rosier
f3b99e6829 [PartialInlining] Type. NFC.
llvm-svn: 311699
2017-08-24 20:29:02 +00:00
Konstantin Zhuravlyov
7d9fe6e6ce AMDGPU: Fix gfx801 features
gfx801 has 1/2 rate F64, Fast F32 FMA

Differential Revision: https://reviews.llvm.org/D36981

llvm-svn: 311694
2017-08-24 20:03:07 +00:00
Jacob Gravelle
517063fab6 [WebAssembly] FastISel : Bail to SelectionDAG for constexpr calls
Summary: Currently FastISel lowers constexpr calls as indirect calls.
We'd like those to be direct calls, and falling back to SelectionDAGISel
handles that.

Reviewers: dschuff, sunfish

Subscribers: jfb, sbc100, llvm-commits, aheejin

Differential Revision: https://reviews.llvm.org/D37073

llvm-svn: 311693
2017-08-24 19:53:44 +00:00
Heejin Ahn
148679a81b [WebAssembly] Update GCC test suite failure expectations
Summary:
Update GCC test suite failure expectations as we add -O0 to the bare tests in
the WebAssembly waterfall. There are still several untriaged lld failures.

Reviewers: sbc100, jgravelle-google, dschuff

Reviewed By: dschuff

Subscribers: jfb

Differential Revision: https://reviews.llvm.org/D37100

llvm-svn: 311691
2017-08-24 19:43:09 +00:00
Krzysztof Parzyszek
60dad8a5ce [Hexagon] Set access size for vector pseudo loads/stores
llvm-svn: 311690
2017-08-24 19:19:24 +00:00
Daniel Sanders
da70e0bfc7 [globalisel][tablegen] Predicates should start from GIPFP_Invalid+1 not GIPFP_Invalid
This fixes a warning when there are zero defined predicates and also fixes an
unnoticed bug where the first predicate in the table was unusable.

llvm-svn: 311684
2017-08-24 18:54:16 +00:00
Victor Leschuk
de7b197e6c Remove duplicate code
llvm-svn: 311675
2017-08-24 17:02:38 +00:00
Victor Leschuk
2dbf70e85c Add missing break in switch
llvm-svn: 311673
2017-08-24 16:57:10 +00:00
Pete Couperus
7fc741759e [ARC] Add ARC backend.
Add the ARC backend as an experimental target to lib/Target.
Reviewed at: https://reviews.llvm.org/D36331

llvm-svn: 311667
2017-08-24 15:40:33 +00:00
Krasimir Georgiev
94f045b3a9 [X86AsmParser] Refactor AsmRewrite constructors, NFCI
Summary:
This is a follow-up to https://reviews.llvm.org/D37105, proposing a slight
refactoring of the AsmRewrite constructors.

Reviewers: coby

Reviewed By: coby

Differential Revision: https://reviews.llvm.org/D37110

llvm-svn: 311666
2017-08-24 15:03:18 +00:00
Sanjay Patel
b75dbbae08 fix typo; NFC
llvm-svn: 311665
2017-08-24 15:00:13 +00:00
Sjoerd Meijer
e2dbaabfc8 [AArch64] Add FMOVH0: materialize 0 using zero register for f16 values
Instead of loading 0 from a constant pool, it's of course much better to
materialize it using an fmov and the zero register.

Thanks to Ahmed Bougacha for the suggestion.

Differential Revision: https://reviews.llvm.org/D37102

llvm-svn: 311662
2017-08-24 14:47:06 +00:00
Sanjay Patel
9728d7148c [BypassSlowDivision] move map helper code to header; NFC
We can reuse this code with other div/rem transforms as shown in:
https://reviews.llvm.org/D31037 
https://bugs.llvm.org/show_bug.cgi?id=31028

llvm-svn: 311661
2017-08-24 14:43:33 +00:00
Chad Rosier
51ff968fb1 [TargetParser][AArch64] Add support for RDM feature in the target parser.
Differential Revision: https://reviews.llvm.org/D37081

llvm-svn: 311659
2017-08-24 14:30:44 +00:00