mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-22 12:33:33 +02:00
Commit Graph

143845 Commits

Author SHA1 Message Date
Davide Italiano
c68880e6bf [LTO] Teach lib/LTO about the new pass manager.
Differential Revision:  https://reviews.llvm.org/D28997

llvm-svn: 292864
2017-01-24 00:58:24 +00:00
Davide Italiano
ffa8336285 [PM] Flesh out the new pass manager LTO pipeline.
Differential Revision:  https://reviews.llvm.org/D28996

llvm-svn: 292863
2017-01-24 00:57:39 +00:00
Kostya Serebryany
ddd0879068 [sanitizer-coverage] emit __sanitizer_cov_trace_pc_guard w/o a preceding 'if' by default. Update the docs, also add deprecation notes around other parts of sanitizer coverage
llvm-svn: 292862
2017-01-24 00:57:31 +00:00
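For context, a hedged sketch of the guard callbacks this mode emits calls to; the hook names match the SanitizerCoverage interface, while the counter and printing logic below is purely illustrative:

```cpp
#include <cstdint>
#include <cstdio>

// Called once per instrumented module with the [start, stop) guard range.
extern "C" void __sanitizer_cov_trace_pc_guard_init(uint32_t *start,
                                                    uint32_t *stop) {
  static uint32_t N = 0;                 // running guard id (illustrative)
  if (start == stop || *start) return;   // already initialized
  for (uint32_t *g = start; g < stop; ++g)
    *g = ++N;                            // give each guard a distinct non-zero id
}

// With this change the compiler emits an unconditional call to this hook at
// every instrumented edge (no preceding 'if (*guard)' in the generated code);
// the callback itself decides what to skip.
extern "C" void __sanitizer_cov_trace_pc_guard(uint32_t *guard) {
  if (!*guard) return;                   // e.g. ignore guards we zeroed out
  std::fprintf(stderr, "guard %u hit\n", *guard);
  *guard = 0;                            // report each edge only once
}
```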
Tim Shen
28ce2d2c0f [APFloat] Add PPCDoubleDouble multiplication
Reviewers: echristo, hfinkel, kbarton, iteratee

Subscribers: mehdi_amini, llvm-commits

Differential Revision: https://reviews.llvm.org/D28382

llvm-svn: 292860
2017-01-24 00:19:45 +00:00
Derek Schuff
419f66e8c9 [WebAssembly] Update LibFunc::Func -> LibFunc
Fixes compile failures after r292848

llvm-svn: 292857
2017-01-24 00:01:18 +00:00
Matt Arsenault
4249853cf1 SimplifyLibCalls: Replace more unary libcalls with intrinsics
llvm-svn: 292855
2017-01-23 23:55:08 +00:00
Michael Kuperstein
147f6c96a5 [LoopUnroll] First form LCSSA, then loop-simplify
Running non-LCSSA-preserving LoopSimplify followed by LCSSA on (roughly) the
same loop is incorrect, since LoopSimplify may break LCSSA arbitrarily higher
in the loop nest. Instead, run LCSSA first, and then run LCSSA-preserving
LoopSimplify on the result.

This fixes PR31718.

Differential Revision: https://reviews.llvm.org/D29055

llvm-svn: 292854
2017-01-23 23:45:42 +00:00
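A minimal sketch of the intended ordering, assuming the loop-utility entry points of this era; the helper name is hypothetical and the headers and exact signatures vary between LLVM revisions, so treat the calls below as approximate:

```cpp
#include "llvm/Analysis/AssumptionCache.h"
#include "llvm/Analysis/LoopInfo.h"
#include "llvm/Analysis/ScalarEvolution.h"
#include "llvm/IR/Dominators.h"
#include "llvm/Transforms/Utils/LoopSimplify.h"
#include "llvm/Transforms/Utils/LoopUtils.h"

// Form LCSSA for the loop nest first, then run the LCSSA-preserving form of
// LoopSimplify. Doing it the other way around is unsound, because a
// non-LCSSA-preserving LoopSimplify run may break LCSSA arbitrarily higher
// in the loop nest.
static void prepareLoopForUnrolling(llvm::Loop &L, llvm::DominatorTree &DT,
                                    llvm::LoopInfo &LI,
                                    llvm::ScalarEvolution *SE,
                                    llvm::AssumptionCache *AC) {
  llvm::formLCSSARecursively(L, DT, &LI, SE);                       // 1) LCSSA first
  llvm::simplifyLoop(&L, &DT, &LI, SE, AC, /*PreserveLCSSA=*/true); // 2) then simplify
}
```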
Eugene Zelenko
20ac491e69 [AMDGPU] Fix obsolete comments, spotted by Malcolm Parsons. (NFC)
llvm-svn: 292853
2017-01-23 23:41:16 +00:00
Dehao Chen
9e02f996da Makes promoteIndirectCall an external function.
Summary: promoteIndirectCall should be a utility function that could be invoked by other optimization passes.

Reviewers: davidxl

Reviewed By: davidxl

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D29051

llvm-svn: 292850
2017-01-23 23:18:24 +00:00
David L. Jones
268960185f [Analysis] Add LibFunc_ prefix to enums in TargetLibraryInfo. (NFC)
Summary:
The LibFunc::Func enum holds enumerators named for libc functions.
Unfortunately, there are real situations, including libc implementations, where
function names are actually macros (musl uses "#define fopen64 fopen", for
example; any other transitively visible macro would have similar effects).

Strictly speaking, a conforming C++ Standard Library should provide any such
macros as functions instead (via <cstdio>). However, there are some "library"
functions which are not part of the standard, and thus not subject to this
rule (fopen64, for example). So, in order to be both portable and consistent,
the enum should not use the bare function names.

The old enum naming used a namespace LibFunc and an enum Func, with bare
enumerators. This patch changes LibFunc to be an enum with enumerators prefixed
with "LibFFunc_". (Unfortunately, a scoped enum is not sufficient to override
macros.)

There are additional changes required in clang.

Reviewers: rsmith

Subscribers: mehdi_amini, mzolotukhin, nemanjai, llvm-commits

Differential Revision: https://reviews.llvm.org/D28476

llvm-svn: 292848
2017-01-23 23:16:46 +00:00
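A small, self-contained illustration of the collision the prefix avoids; the macro mirrors musl's behavior, while the enum below is a simplified stand-in rather than the actual TargetLibraryInfo code:

```cpp
// Object-like macro as found in some libc headers (musl does exactly this).
#define fopen64 fopen

// Before: bare enumerator names are rewritten by the macro, so
//   enum Func { fopen, fopen64 };
// expands to
//   enum Func { fopen, fopen };   // duplicate enumerator -> compile error

// After: prefixed enumerators never match an object-like macro named after
// the libc function, so they are unaffected.
enum LibFunc {
  LibFunc_fopen,
  LibFunc_fopen64,
};

int main() { return LibFunc_fopen64; }
```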
Matt Arsenault
1595f1e4ce AMDGPU: Custom lower more vector operations
This avoids stack usage.

llvm-svn: 292846
2017-01-23 23:09:58 +00:00
Krzysztof Parzyszek
5ad7c44a42 [RDF] Add registers to live set even if they are live already
When calculating kills, a register may be considered live because a part
of it is live, but if there is a use of that (whole) register, the whole
register (and its subregisters) needs to be added to the live set.

llvm-svn: 292845
2017-01-23 23:03:49 +00:00
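A hedged sketch of the rule being implemented; Register, the live set, and the sub-register query below are simplified stand-ins for the RDF/TargetRegisterInfo machinery, not the actual code:

```cpp
#include <set>
#include <vector>

using Register = unsigned;
using LiveSet = std::set<Register>;

// Stand-in: a real implementation would query TargetRegisterInfo for the
// sub-registers of R. Here we just return R itself to keep the sketch
// self-contained.
static std::vector<Register> selfAndSubRegs(Register R) { return {R}; }

// When a use of the whole register R is seen, it is not enough that R is
// already considered live via one of its parts: add R and all of its
// sub-registers to the live set explicitly.
static void addWholeRegUse(Register R, LiveSet &Live) {
  for (Register S : selfAndSubRegs(R))
    Live.insert(S);   // insert even if an aliasing part was already live
}
```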
Kostya Serebryany
bd74c3d0a4 [libFuzzer] mutate empty input using the regular mutators (instead of a custom dummy one). This way when we mutate an empty input there is a chance we will get a dictionary word
llvm-svn: 292843
2017-01-23 22:52:13 +00:00
Matt Arsenault
b7e8aad4f5 DAG: Don't fold vector extract into load if target doesn't want to
Fixes turning a 32-bit scalar load into an extending vector load
for AMDGPU when dynamically indexing a vector.

llvm-svn: 292842
2017-01-23 22:48:53 +00:00
Sanjay Patel
24068611a3 [InstSimplify] add tests to show missing folds from 'icmp (add nsw)'; NFC
llvm-svn: 292841
2017-01-23 22:42:55 +00:00
Evgeniy Stepanov
23916be8e1 Revert "Refactor SampleProfile.cpp to move computation inside a branch. (NFC)"
Causes MSan failures on the buildbot.

llvm-svn: 292840
2017-01-23 22:40:08 +00:00
Tim Shen
4f93c19c2e [APFloat] Switch from (PPCDoubleDoubleImpl, IEEEdouble) layout to (IEEEdouble, IEEEdouble)
Summary:
This patch changes the layout of DoubleAPFloat and adjusts all
operations to do either:
1) (IEEEdouble, IEEEdouble) -> (uint64_t, uint64_t) -> PPCDoubleDoubleImpl,
   then run the old algorithm.
2) Do the right thing directly.

1) includes multiply, divide, remainder, mod, fusedMultiplyAdd, roundToIntegral,
   convertFromString, next, convertToInteger, convertFromAPInt,
   convertFromSignExtendedInteger, convertFromZeroExtendedInteger,
   convertToHexString, toString, getExactInverse.
2) includes makeZero, makeLargest, makeSmallest, makeSmallestNormalized,
   compare, bitwiseIsEqual, bitcastToAPInt, isDenormal, isSmallest,
   isLargest, isInteger, ilogb, scalbn, frexp, hash_value, Profile.

I could have split this into two patches, e.g. used
1) for all operations first, then incrementally changed some of them to
2). I didn't do that, because 1) involves code that converts data between
PPCDoubleDoubleImpl and (IEEEdouble, IEEEdouble) back and forth, and may
pessimize the compiler. Instead, I picked the easy functions and used
approach 2) for them directly.

The next step is to move multiply and divide from 1) to 2). I don't
have plans for other functions in 1).

Differential Revision: https://reviews.llvm.org/D27872

llvm-svn: 292839
2017-01-23 22:39:35 +00:00
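A conceptual sketch of the two approaches; the type names and helpers below are illustrative stand-ins, not the actual DoubleAPFloat code in APFloat.cpp:

```cpp
#include <cstdint>
#include <cstring>

// New layout: a pair of IEEEdouble values (shown here as plain doubles).
struct IEEEdoublePair { double Hi, Lo; };
// Old layout: the PPCDoubleDoubleImpl representation, as two raw 64-bit words.
struct PPCDoubleDoubleImpl { uint64_t Words[2]; };

// Approach 1): (IEEEdouble, IEEEdouble) -> (uint64_t, uint64_t) ->
// PPCDoubleDoubleImpl, then run the old algorithm and convert back.
static PPCDoubleDoubleImpl toLegacy(const IEEEdoublePair &P) {
  // Reinterpret each IEEEdouble as a raw 64-bit word; the old
  // PPCDoubleDoubleImpl algorithms (multiply, divide, ...) then run on this
  // form, and the result is converted back the same way.
  PPCDoubleDoubleImpl L;
  std::memcpy(&L.Words[0], &P.Hi, sizeof(double));
  std::memcpy(&L.Words[1], &P.Lo, sizeof(double));
  return L;
}

// Approach 2): operate on the new layout directly (used for the simple
// queries such as makeZero, compare, isDenormal, ilogb, ...).
static bool isZeroDirect(const IEEEdoublePair &P) {
  return P.Hi == 0.0 && P.Lo == 0.0;   // illustrative; ignores payload details
}
```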
Matt Arsenault
bd33194651 AMDGPU: Combine fp16/fp64 subtarget features
The same control register controls both, and both are set to
the same defaults. Keep the old names around as aliases.

llvm-svn: 292837
2017-01-23 22:31:03 +00:00
Krzysztof Parzyszek
d7827facd5 [Hexagon] Explicitly reserve aliases of reserved registers
llvm-svn: 292836
2017-01-23 22:13:05 +00:00
Kostya Serebryany
8fb6d011db [libFuzzer] make sure we use the feedback from std::string operator ==
llvm-svn: 292835
2017-01-23 22:11:04 +00:00
Kevin Enderby
62e26e4a5d Add support for the x86_thread_state32_t and,
in llvm-objdump for Mach-O files, add the printing of the
x86_thread_state32_t in the same format as
otool-classic(1) on darwin.

To do this, the 32-bit x86 general thread state
needed to be defined in include/llvm/Support/MachO.h.

rdar://30110111

llvm-svn: 292829
2017-01-23 21:13:29 +00:00
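For reference, a sketch of the 32-bit general-purpose thread state being printed; field names and ordering follow the darwin i386 thread-state layout, but the authoritative definition is the one added to include/llvm/Support/MachO.h:

```cpp
#include <cstdint>

// Sketch of the x86 32-bit general thread state (darwin i386 layout).
struct x86_thread_state32_t {
  uint32_t eax, ebx, ecx, edx;
  uint32_t edi, esi, ebp, esp;
  uint32_t ss, eflags, eip, cs;
  uint32_t ds, es, fs, gs;
};
```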
Ahmed Bougacha
9390ad2bce [AArch64][GlobalISel] Legalize narrow scalar fp->int conversions.
Since we're now avoiding operations using narrow scalar integer types,
we have to legalize the integer side of the FP conversions.

This requires teaching the legalizer how to do that.

llvm-svn: 292828
2017-01-23 21:10:14 +00:00
Ahmed Bougacha
a84a7cf9e8 [AArch64][GlobalISel] Legalize narrow scalar ops again.
Since r279760, we've been marking operations on narrow integer types
that have wider legal equivalents (for instance, G_ADD s8) as legal.
Compared to legalizing these operations, this reduced the amount of
extends/truncates required, but was always a weird legalization decision
made at selection time.

So far, we haven't been able to formalize it in a way that permits the
selector generated from SelectionDAG patterns to be sufficient.

Using a wide instruction (say, s64) when a narrower instruction exists
(s32) would introduce register class incompatibilities (when one narrow
generic instruction is selected to the wider variant, but another is
selected to the narrower variant).

It's also impractical to limit which narrow operations are matched for
which instruction, as restricting "narrow selection" to ranges of types
clashes with potentially incompatible instruction predicates.

Concerns were also raised regarding MIPS64's sign-extended register
assumptions, as well as wrapping behavior.
See discussions in https://reviews.llvm.org/D26878.

Instead, legalize the operations.

Should we ever revert to selecting these narrow operations, we should
try to represent this more accurately: for instance, by separating
a "concrete" type on operations, and an "underlying" type on vregs, we
could move the "this narrow-looking op is really legal" decision to the
legalizer, and let the selector use the "underlying" vreg type only,
which would be guaranteed to map to a register class.

In any case, we eventually should mitigate:
- the performance impact by selecting no-op extract/truncates to COPYs
  (which we currently do), and the COPYs to register reuses (which we
  don't do yet).
- the compile-time impact by optimizing away extract/truncate sequences
  in the legalizer.

llvm-svn: 292827
2017-01-23 21:10:05 +00:00
Steven Wu
d40d99d2b4 Attempt to fix the testcase in r292824
Try to fix the testcase from r292824 (failing on some bots) by reducing it to
a minimal form. If this fix doesn't work, I will revert the test.

llvm-svn: 292826
2017-01-23 20:42:17 +00:00
Javed Absar
97ef1a63f1 [ARM] Classification Improvements to ARM Sched-Models. NFCI.
This is a series of patches to make adding machine sched
models for ARM processors easier and more compact. They define new
sched-readwrites for groups of ARM instructions. This has been
missing so far, and as a consequence, machine scheduler models
for individual sub-targets have tended to be larger than they
needed to be. 

The current patch focuses on floating-point instructions.

Reviewers: Diana Picus (rovka), Renato Golin (rengolin)

Differential Revision: https://reviews.llvm.org/D28194

llvm-svn: 292825
2017-01-23 20:20:39 +00:00
Steven Wu
9711c6c21b Add LC_BUILD_VERSION load command
Summary:
Add a new load command, LC_BUILD_VERSION. It is a generic version of the
LC_*_VERSION_MIN load commands used on Apple platforms. Instead of having
a separate load command for each platform, LC_BUILD_VERSION records the
platform info as an enum. It also records the SDK version, min_os, and the
tools that were used to build the binary.

rdar://problem/29781291

Reviewers: enderby

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D29044

llvm-svn: 292824
2017-01-23 20:07:55 +00:00
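A sketch of the command layout described above, modeled on the Mach-O build_version_command; treat the comments as approximate and defer to the definition added in llvm/Support/MachO.h:

```cpp
#include <cstdint>

struct build_version_command {
  uint32_t cmd;       // LC_BUILD_VERSION
  uint32_t cmdsize;   // sizeof(build_version_command) + ntools * sizeof(build_tool_version)
  uint32_t platform;  // platform enum (macOS, iOS, tvOS, watchOS, ...)
  uint32_t minos;     // minimum OS version, X.Y.Z encoded in nibbles xxxx.yy.zz
  uint32_t sdk;       // SDK version, same encoding
  uint32_t ntools;    // number of build_tool_version entries that follow
};

struct build_tool_version {
  uint32_t tool;      // tool id (e.g. clang, swift, ld)
  uint32_t version;   // tool version number
};
```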
Alexey Bataev
812b7aeabd [SLP] Additional test with extra args in horizontal reductions.
llvm-svn: 292821
2017-01-23 19:28:23 +00:00
Matt Arsenault
dd08415714 AMDGPU: Propagate fast math flags in fneg combines
This can't be done for fma/mad since it seems they can't have flags currently.

llvm-svn: 292818
2017-01-23 19:08:34 +00:00
Matthias Braun
9e8eceea87 Add unittests for empty bitvectors.
Addendum to r292575

llvm-svn: 292817
2017-01-23 19:06:54 +00:00
Matt Arsenault
592eb5a463 AMDGPU: Remove unnecessary check
There are no scalar FP types that can be extended.

llvm-svn: 292816
2017-01-23 19:00:15 +00:00
Xinliang David Li
9d9c348c04 [PGO] add debug option to view annotated cfg after prof use annotation
Differential Revision: http://reviews.llvm.org/D28967 

llvm-svn: 292815
2017-01-23 18:58:24 +00:00
Matt Arsenault
bb6aab2eaf DAG: Allow legalization of fcanonicalize vector types
llvm-svn: 292814
2017-01-23 18:52:26 +00:00
Kostya Serebryany
abbd6c2ac9 [libFuzzer] deflake a test
llvm-svn: 292813
2017-01-23 18:44:40 +00:00
Sanjay Patel
d945eb19e4 [InstSimplify] refactor finding limits for icmp with binop; NFCI
llvm-svn: 292812
2017-01-23 18:22:26 +00:00
Dehao Chen
f523ed4b82 Refactor SampleProfile.cpp to move computation inside a branch. (NFC)
llvm-svn: 292803
2017-01-23 17:09:02 +00:00
Chris Bieneman
2c8aeb847e Post-commit review feedback from dblaikie
Use ASSERT_* instead of EXPECT_* for error condition.

llvm-svn: 292798
2017-01-23 16:49:34 +00:00
Piotr Padlewski
57e245c0ad [MemorySSA] Add new tests for invariant.groups
Summary:
Next round of extra tests for MSSA.
I have a prototype invariant.group handling implementation
that fixes all the FIXMEs, and I think it will be
easier to see the difference if I post these tests first,
and then fix the FIXMEs afterwards.

Reviewers: george.burgess.iv, dberlin

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D29022

llvm-svn: 292797
2017-01-23 16:38:10 +00:00
Simon Pilgrim
8221416b9d [InstCombine][X86] Add MULDQ/MULUDQ constant folding support
llvm-svn: 292793
2017-01-23 15:22:59 +00:00
Amaury Sechet
0696308be8 Tweak ASCII art in Simplify CFG. NFC
llvm-svn: 292792
2017-01-23 15:13:01 +00:00
Jonas Paulsson
3cc66f0379 [SystemZ] Mark vector immediate load instructions with useful flags.
Vector immediate load instructions should have the isAsCheapAsAMove, isMoveImm
and isReMaterializable flags set. With them, these instructions will get
hoisted out of loops.

Review: Ulrich Weigand
llvm-svn: 292790
2017-01-23 14:09:58 +00:00
Eugene Leviant
62e5f5177e RuntimeDyldELF: add LDST128_ABS_LO12_NC reloc
llvm-svn: 292788
2017-01-23 13:52:08 +00:00
Eugene Leviant
cb30313381 RuntimeDyldELF: add LDST8_ABS_LO12_NC and LDST16_ABS_LO12_NC relocs
Differential revision: https://reviews.llvm.org/D28863

llvm-svn: 292785
2017-01-23 13:13:47 +00:00
Simon Pilgrim
6a557145af [InstCombine][X86] MULDQ/MULUDQ undef -> zero
Match generic mul behaviour so that <X x i64> multiply and the muldq/muludq patterns act the same

llvm-svn: 292784
2017-01-23 12:07:32 +00:00
Alexey Bataev
5481977857 [SLP] Additional test for SLP vectorizer with 31 reduction elements.
llvm-svn: 292783
2017-01-23 11:53:16 +00:00
Simon Pilgrim
2a29239f10 [InstCombine][SSE] Tests showing missed opportunities to constant fold PMULDQ/PMULUDQ
llvm-svn: 292782
2017-01-23 10:57:39 +00:00
Chandler Carruth
c508becaa2 This test apparently requires an x86 target and has been failing on
numerous bots ever since d0k fixed the CHECK lines so that it did something at
all.

It isn't actually testing SCEV directly but LSR, so move it into LSR and
the x86-specific tree of tests that already exists there. Target
dependence is common and unavoidable with the current design of LSR.

llvm-svn: 292774
2017-01-23 08:33:29 +00:00
Chandler Carruth
8965188357 [PM] Replace the hard invalidate in JumpThreading for LVI with correct
invalidation of deleted functions in GlobalDCE.

This was always testing a bug really triggered in GlobalDCE. Right now
we have analyses with asserting value handles into IR. As long as those
remain, when *deleting* an IR unit, we cannot wait for the normal
invalidation scheme to kick in even though it was designed to work
correctly in the face of these kinds of deletions. Instead, the pass
needs to directly handle invalidating the analysis results pointing at
that IR unit.

I've taught the Inliner about this, and this patch teaches GlobalDCE.
This will handle the asserting VH case in the existing test as well as
other issues of the same fundamental variety. I've moved the test into
the GlobalDCE directory and added a comment explaining what is going on.

Note that we cannot simply require LVI here because LVI is too lazy.

llvm-svn: 292773
2017-01-23 08:33:24 +00:00
Chandler Carruth
b8dbb7e277 [PM] Add a dedicated test case for the issue fixed in r292770.
While this is covered by a clang test case, we should have something
local to LLVM that immediately checks that the inliner doesn't leave
analyses attached to dangling IR bodies.

llvm-svn: 292772
2017-01-23 07:53:20 +00:00
Daniel Sanders
76f8fed320 [globalisel] Remove unused, duplicate file added in r292478
It seems it appeared during a rebase (diff 6) of D27338
and went unnoticed.

Thanks to David Majnemer for noticing.

llvm-svn: 292771
2017-01-23 07:33:21 +00:00
Chandler Carruth
afd5c13448 [PM] Clear any analyses for a dead function after inlining it and before
clearing its body. This is essential to avoid triggering asserting value
handles in analyses on the function's body.

I'm working on a test case for this behavior in LLVM, but Clang has
a great one that managed to trigger this on all of the bots already.

llvm-svn: 292770
2017-01-23 07:03:41 +00:00