mirror of https://github.com/RPCS3/llvm-mirror.git synced 2025-01-31 20:51:52 +01:00

153454 Commits

Author SHA1 Message Date
Adam Nemet
e89676a607 Remove an unnecessary const_cast.
I think this dates back to when emit used to take a const reference.

llvm-svn: 311948
2017-08-28 23:00:13 +00:00
Marek Sokolowski
fb87fd513f [llvm-rc] Add ACCELERATORS parsing ability. (parser, pt 3/8).
This extends the current llvm-rc parser with the ability to parse the
ACCELERATORS statement.

Moreover, some small improvements to the original parsing commit
were made.

Thanks to Nico Weber for his original work in this area.

Differential Revision: https://reviews.llvm.org/D36894

llvm-svn: 311946
2017-08-28 22:58:31 +00:00
Evandro Menezes
384a7dd5ec [AArch64] Adjust the cost model for Exynos M1 and M2
Add a new predicate to more accurately model the scheduling around branches
and function calls, and of loads and stores of pairs and integer
multiplications.

llvm-svn: 311944
2017-08-28 22:51:52 +00:00
Evandro Menezes
48316bd1c5 [AArch64] Adjust the cost model for Exynos M1 and M2
Add a new predicate to more accurately model the cost of arithmetic and
logical operations shifted left.

Differential revision: https://reviews.llvm.org/D37151

llvm-svn: 311943
2017-08-28 22:51:32 +00:00
Kamil Rytarowski
c304eefca8 Define NetBSD/amd64 ASAN Shadow Offset
Summary:
Catch up after compiler-rt changes and define kNetBSD_ShadowOffset64
as (1ULL << 46).
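A one-line sketch of the constant as described above; the name and value come
from this message, while the exact declaration form is an assumption:

  static const uint64_t kNetBSD_ShadowOffset64 = 1ULL << 46;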
 
Sponsored by <The NetBSD Foundation>

Reviewers: kcc, joerg, filcab, vitalybuka, eugenis

Reviewed By: eugenis

Subscribers: llvm-commits, #sanitizers

Tags: #sanitizers

Differential Revision: https://reviews.llvm.org/D37234

llvm-svn: 311941
2017-08-28 22:13:52 +00:00
Craig Topper
32c105c58a [InstCombine] Teach select01 helper of foldSelectIntoOp to handle vector splats
We were handling some vectors in foldSelectIntoOp, but not if the operand of the bin op was any kind of vector constant. This patch fixes it to treat vector splats the same as scalars.

Differential Revision: https://reviews.llvm.org/D37232

llvm-svn: 311940
2017-08-28 22:00:27 +00:00
Marek Sokolowski
4f9f1f7705 [llvm-rc] Add ICON and HTML parsing ability (parser, pt 2/8).
This extends the current llvm-rc parser with ICON and HTML resources.
Moreover, some tests have been slightly rewritten.

Thanks to Nico Weber for his original work in this area.

Differential Revision: https://reviews.llvm.org/D36891

llvm-svn: 311939
2017-08-28 21:59:54 +00:00
Sanjay Patel
d6b570677f [InstCombine] add tests to show failure of SimplifyDemandedVectorElts + shuffle combining; NFC
llvm-svn: 311934
2017-08-28 21:14:26 +00:00
Geoff Berry
82a5b35667 [AArch64][Falkor] Avoid generating STRQro* instructions
Summary:
STRQro* instructions are slower than the alternative ADD/STRQui expanded
instructions on Falkor, so avoid generating them unless we're optimizing
for code size.

Reviewers: t.p.northover, mcrosier

Subscribers: aemerson, rengolin, javed.absar, kristof.beyls, llvm-commits

Differential Revision: https://reviews.llvm.org/D37020

llvm-svn: 311931
2017-08-28 20:48:43 +00:00
Davide Italiano
5965f0fe5f [LoopUnroll] Properly update loop structure in case of successful peeling.
When peeling kicks in, it updates the loop preheader.
Later, a successful full unroll of the loop needs to update a PHI
whose i-th argument comes from the loop preheader, so it had better look
at the correct block. Fixes PR33437.

Differential Revision:  https://reviews.llvm.org/D37153

llvm-svn: 311922
2017-08-28 20:29:33 +00:00
Joerg Sonnenberger
314fb415b4 Fix ARMv4 support
ARMv4 doesn't support the "BX" instruction, which was introduced
with ARMv4T. Adjust the call lowering and tail call implementation
accordingly.

Further changes are necessary to ensure that the presence of the v4t feature
is correctly set. Most importantly, the "generic" CPU for thumb-*
triples should include ARMv4T, since thumb mode without thumb support
would naturally be pointless.

Add a couple of asserts to ensure thumb instructions are not emitted
without CPU support.
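A hypothetical sketch of the kind of assert described; Subtarget->hasV4TOps()
is a real ARMSubtarget query, but the placement and message are assumptions,
not taken from the patch:

  assert(Subtarget->hasV4TOps() &&
         "Thumb instructions require ARMv4T or later");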

Differential Revision: https://reviews.llvm.org/D37030

llvm-svn: 311921
2017-08-28 20:20:47 +00:00
Matthias Braun
47745ebb6d Try to fix compilation problem with libstdc++
llvm-svn: 311918
2017-08-28 20:11:28 +00:00
Matthias Braun
a053ad9187 Address r311914 review comments
llvm-svn: 311917
2017-08-28 20:11:27 +00:00
Davide Italiano
f2e85cb73c [LoopUnroll] Add a cl::opt to force peeling, for testing purposes.
Will be used to test the patch proposed in D37153.

llvm-svn: 311915
2017-08-28 19:50:55 +00:00
Matthias Braun
e9d5200366 TableGen: Fix subreg composition/concatenation
This fixes 2 problems in subregister hierarchies with multiple levels
and tuples:

1) For bigger tuples, computing secondary subregs would miss 2nd-order
effects. In the test case, for a register like `S10_S11_S12_S13_S14` with D5
= S10_S11 and D6 = S12_S13, we would correctly compute sub0 = D5 and sub1 = D6,
but would miss the fact that we could now form ssub0_ssub1_ssub2_ssub3
(aka sub0_sub1) = D5_D6. This is fixed by changing
computeSecondarySubRegs() to compute a fixpoint (see the sketch after this list).

2) Fixing 1) exposed a problem where TableGen would create multiple
names for effectively the same subregister index. In the test case
the subregister index sub0 is composed from ssub0 and ssub1, and sub1 is
composed from ssub2 and ssub3. TableGen should not create both sub0_sub1
and ssub0_ssub1_ssub2_ssub3 as inferred subregister indexes. This changes
the code to build a transitive closure of the subregister components
before forming new concatenated subregister indexes.
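A generic fixed-point loop, purely to illustrate the idea; the names below
(Registers, inferSecondarySubRegs) are hypothetical and this is not the
actual TableGen code:

  // Keep re-running the inference step until a pass discovers no new
  // secondary sub-registers, i.e. until a fixpoint is reached.
  bool Changed = true;
  while (Changed) {
    Changed = false;
    for (auto &Reg : Registers)
      Changed |= inferSecondarySubRegs(Reg); // hypothetical helper
  }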

This fix was developed for an out-of-tree target. For the in-tree
targets the only change is in the register information computed for ARM.
There is a slight chance this fixed or improved some register coalescing
around the QQQQ/QQ register classes there, but I couldn't see or provoke any
code generation differences.

Differential Revision: https://reviews.llvm.org/D36913

llvm-svn: 311914
2017-08-28 19:48:42 +00:00
Matthias Braun
e701596846 TableGen: Add -gen-register-info-debug-dump
Adds a new --gen-register-info-debug-dump mode to tablegen that dumps various register-related information:

- List of register classes with super and subclasses
- List of subregister indexes with lanemasks
- List of registers with subregisters

I will use this in an upcoming commit to create a test.

It may also be useful for target developers wanting to get an overview
of all the register-related information, especially the things inferred by
tablegen and not directly visible in the .td file.

Differential Revision: https://reviews.llvm.org/D36911

llvm-svn: 311913
2017-08-28 19:48:40 +00:00
Geoff Berry
5908f46708 [ARM] Fix bug in ARMLoadStoreOptimizer when kill flags are missing.
Summary:
ARMLoadStoreOpt::FixInvalidRegPairOp() only checked whether one of the
load destination registers to be split overlapped with the base register
when the base register was marked as killed.  Since kill flags may not
always be present, this can lead to incorrect code.
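A minimal sketch of the corrected check; TRI->regsOverlap() is a real
TargetRegisterInfo API, while DestReg and BaseReg are hypothetical names
rather than the actual patch:

  // Check for the overlap regardless of whether BaseReg carries a kill flag.
  if (TRI->regsOverlap(DestReg, BaseReg)) {
    // ... split the load so the base register is not clobbered early ...
  }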

This bug was exposed by my MachineCopyPropagation change D30751 breaking
the sanitizer-x86_64-linux-android buildbot.

Also clean up some dead code and add an assert that a register offset is
never encountered by this code, since it does not handle them correctly.

Reviewers: MatzeB, qcolombet, t.p.northover

Subscribers: aemerson, javed.absar, kristof.beyls, mcrosier, llvm-commits

Differential Revision: https://reviews.llvm.org/D37164

llvm-svn: 311907
2017-08-28 19:03:45 +00:00
Taewook Oh
3c72e5ece5 Create PHI node for the return value only when the return value has uses.
Summary:
Currently, a phi node is created in the normal destination to unify the return values from promoted calls and the original indirect call. This patch makes this phi node be created only when the return value has uses.

This patch is necessary to generate valid code; without it, the compiler crashes on the attached test case because an illegal phi node that has no incoming value from `entry`/`catch` is created in the `cleanup` block.

I think the existing implementation is fine as long as there is at least one use of the original indirect call. `insertCallRetPHI` creates a new phi node in the normal destination block only when the original indirect call dominates its use and the normal destination block. Otherwise, `fixupPHINodeForNormalDest` will handle the unification of return values naturally without creating a new phi node. However, if there is no use, `insertCallRetPHI` still creates a new phi node even when the original indirect call does not dominate the normal destination block, because `getCallRetPHINode` returns false.
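A minimal sketch of the described guard, assuming hypothetical names
(OrigCall, NormalDest); this is an illustration, not the actual patch:

  // Only create the unifying PHI when the call's return value is used.
  if (!OrigCall->use_empty()) {
    PHINode *RetPHI = PHINode::Create(OrigCall->getType(),
                                      /*NumReservedValues=*/2, "ret.phi",
                                      &NormalDest->front());
    // ... add incoming values for the promoted and original calls ...
  }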

Reviewers: xur, davidxl, danielcdh

Reviewed By: xur

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D37176

llvm-svn: 311906
2017-08-28 18:57:00 +00:00
Zachary Turner
77c0bf84e8 [CodeView] Don't output S_UDT symbols for forward decls.
S_UDT symbols are the debugger's "index" for all the structs,
typedefs, classes, and enums in a program.  If any of those
structs/classes don't have a complete declaration, or if there
is a typedef to something that doesn't have a complete definition,
then emitting the S_UDT is unhelpful because it doesn't give
the debugger enough information to do anything useful.  On the
other hand, it results in a huge size blow-up in the resulting
PDB, which is exacerbated by an order of magnitude when linking
with /DEBUG:FASTLINK.

With this patch, we drop S_UDT records for types that refer either
directly or indirectly (e.g. through a typedef, pointer, etc) to
a class/struct/union/enum without a complete definition.  This
brings us about 50% of the way towards parity with /DEBUG:FASTLINK
PDBs generated from cl-compiled object files.

Differential Revision: https://reviews.llvm.org/D37162

llvm-svn: 311904
2017-08-28 18:49:04 +00:00
Stefan Pintilie
76f5d80f5b [Power9] Add new instructions for floating point status and control registers.
Added the following P9 instructions: mffsce, mffscdrn, mffscdrni, mffscrn,
  mffscrni, mffsl

Differential Revision: https://reviews.llvm.org/D37167

llvm-svn: 311903
2017-08-28 18:46:01 +00:00
Craig Topper
2b39a77ccb [InstCombine] Call hasNoSignedWrap instead of hasNoUnsignedWrap to get the NSW flag when handling Add in SimplifyDemandedUseBits.
This fixes a typo introduced in r311789.

This should fix PR34349.

llvm-svn: 311902
2017-08-28 18:44:28 +00:00
Krzysztof Parzyszek
55632e39ea [Hexagon] Check for potential bank conflicts in post-RA scheduling
Insert artificial edges between loads that could cause a cache bank
conflict.

llvm-svn: 311901
2017-08-28 18:36:21 +00:00
Stanislav Mekhanoshin
22de6c878a [AMDGPU] Fix regression in AMDGPULibCalls allowing native for doubles
Under -cl-fast-relaxed-math we could use native_sqrt, but f64 was
allowed to produce HSAIL's nsqrt instruction. HSAIL is no longer a
target, so we would be left with a non-existent native_sqrt(double)
as a result.

Add a check for f64 so that native functions are not returned, and also
remove the handling of the f64 case for fold_sqrt.
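A minimal sketch of the kind of f64 bail-out described above; ArgTy and the
surrounding context are assumptions, not the actual patch:

  // No native_* variant exists for double, so never hand one back for f64
  // (or for vectors of f64).
  if (ArgTy->getScalarType()->isDoubleTy())
    return false;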

Differential Revision: https://reviews.llvm.org/D37223

llvm-svn: 311900
2017-08-28 18:00:08 +00:00
Stanislav Mekhanoshin
5f48b3a89c [AMDGPU] computeKnownBitsForTargetNode for 24 bit mul
Differential Revision: https://reviews.llvm.org/D37168

llvm-svn: 311896
2017-08-28 16:35:37 +00:00
Krzysztof Parzyszek
ab8e9a3099 [Hexagon] Break up DAG mutations into separate classes, move to subtarget
llvm-svn: 311895
2017-08-28 16:24:22 +00:00
Krzysztof Parzyszek
ad38f1f0c7 [Hexagon] Move pre-RA DAG mutations to scheduler constructor
llvm-svn: 311894
2017-08-28 15:52:54 +00:00
Craig Topper
02f477c78a [X86] Make 128/256-bit extract_subvector Legal instead of Custom. Move combining with BUILD_VECTOR from Legalization to DAG combine
EXTRACT_SUBVECTOR was marked Custom solely so we could combine it with BUILD_VECTOR operations to create smaller BUILD_VECTORs during Legalization. But that sort of combining should really be done by the DAG combiner.

This patch adds the last piece of DAG combine support needed to handle this. With that done, we can make the EXTRACT_SUBVECTOR operations Legal.
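The kind of change described, sketched for a single 128-bit type inside the
X86 target lowering constructor; MVT::v4i32 is chosen only as an example, and
the full list of affected types is in the patch:

  // EXTRACT_SUBVECTOR no longer needs Custom lowering just to enable the
  // BUILD_VECTOR combine; the DAG combiner handles that now.
  setOperationAction(ISD::EXTRACT_SUBVECTOR, MVT::v4i32, Legal);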

Differential Revision: https://reviews.llvm.org/D37197

llvm-svn: 311893
2017-08-28 15:32:50 +00:00
Craig Topper
8511a59905 [DAGCombiner] Teach visitEXTRACT_SUBVECTOR to turn extracts of BUILD_VECTOR into smaller BUILD_VECTORs
Only do this before operations are legalized, or if BUILD_VECTOR is Legal for the target.

Differential Revision: https://reviews.llvm.org/D37186

llvm-svn: 311892
2017-08-28 15:28:33 +00:00
Ilya Biryukov
44086ca14f Changed Dockerfiles to install LLVM into /usr/local
Summary:
Previously, the installation path was simply '/'.
Using '/usr/local' would ensure that the LLVM installation does not
conflict with software installed via package managers.

Reviewers: mehdi_amini, klimek

Reviewed By: klimek

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D37213

llvm-svn: 311890
2017-08-28 15:12:24 +00:00
Evgeny Mankov
a442cb9f0a [Support][CommandLine] Add cl::Option::setDefault()
Add an abstract virtual method setDefault() to class Option and implement it in its derived classes, in order to be able to reset all the options to their default values in user code without actually knowing all these options. For instance:

for (auto &OM : cl::getRegisteredOptions(*cl::TopLevelSubCommand)) {
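  // Each entry maps the option name to its cl::Option*; reset every
  // registered option of the top-level subcommand to its default value.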
  cl::Option *O = OM.second;
  O->setDefault();
}

Reviewed by: rampitec, Eugene.Zelenko, kasaurov

Differential Revision: http://reviews.llvm.org/D36877

llvm-svn: 311887
2017-08-28 13:39:43 +00:00
Andrew V. Tischenko
59a0d5671f The current version of the LLVM X86 disassembler incorrectly interprets some possible sets of x86 prefixes. This patch is the first step toward closing PR7709 and PR17697. Follow-up patches will close the related PRs.
Differential Revision: https://reviews.llvm.org/D36788

M    lib/Target/X86/Disassembler/X86DisassemblerDecoder.cpp
M    lib/Target/X86/Disassembler/X86DisassemblerDecoder.h
A    test/MC/Disassembler/X86/prefixes-i386.s
A    test/MC/Disassembler/X86/prefixes-x86_64.s
M    test/MC/Disassembler/X86/prefixes.txt

llvm-svn: 311882
2017-08-28 10:43:14 +00:00
Gadi Haber
9f39237fa7 [X86][Haswell] Updating HSW instruction scheduling information
This patch completely replaces the instruction scheduling information for the Haswell architecture target by modifying the file X86SchedHaswell.td located under the X86 Target.
We used the scheduling information retrieved from the Haswell architects in order to replace and modify the existing scheduling.
The patch continues the scheduling replacement effort started with the SNB target in r307529 and r310792.
The information includes the latency, number of micro-ops, and ports used by each HSW instruction.

Please expect some performance fluctuations due to code alignment effects.

Reviewers: RKSimon, zvi, aymanmus, craig.topper, m_zuckerman, igorb, dim, chandlerc, aaboud

Differential Revision: https://reviews.llvm.org/D36663

llvm-svn: 311879
2017-08-28 10:04:16 +00:00
NAKAMURA Takumi
3d69d45ff0 Prune whitespaces in blank lines.
llvm-svn: 311876
2017-08-28 07:48:37 +00:00
NAKAMURA Takumi
b40db7c573 Untabify.
llvm-svn: 311875
2017-08-28 06:47:47 +00:00
Craig Topper
80aa3b549e [X86] Use getUnpackl helper to create an ISD::VECTOR_SHUFFLE instead of using X86ISD::UNPCKL in reduceVMULWidth.
This runs fairly early, so we should use target-independent nodes if possible.

llvm-svn: 311873
2017-08-28 05:14:38 +00:00
Craig Topper
43ff91b11b [X86] Add an early out to combineLoopMAddPattern and combineLoopSADPattern when SSE2 is disabled.
Without this, the madd.ll and sad.ll test cases both hit assertions if you run them with SSE2 disabled.

llvm-svn: 311872
2017-08-28 04:29:08 +00:00
Lang Hames
ed191215a4 [Error] Add a handleExpected utility.
handleExpected is similar to handleErrors, but takes an Expected<T> as its first
input value and a fallback functor as its second, followed by an arbitrary list
of error handlers (equivalent to the handler list of handleErrors). If the first
input value is a success value then it is returned from handleExpected
unmodified. Otherwise the contained error(s) are passed to handleErrors, along
with the handlers. If handleErrors returns success (indicating that all errors
have been handled) then handleExpected runs the fallback functor and returns its
result. If handleErrors returns a failure value then the failure value is
returned and the fallback functor is never run.
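The rough shape of the utility, approximated from the description above
(the parameter names are guesses, not copied from the patch):

  template <typename T, typename RecoveryFtor, typename... HandlerTs>
  Expected<T> handleExpected(Expected<T> ValOrErr, RecoveryFtor &&RecoveryPath,
                             HandlerTs &&... Handlers);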

This simplifies the process of re-trying operations that return Expected values.
Without this utility such retry logic is cumbersome as the internal Error must
be explicitly extracted from the Expected value, inspected to see if it's
handleable and then consumed:

enum FooStrategy { Aggressive, Conservative };
Expected<Foo> tryFoo(FooStrategy S);

Expected<Foo> Result;
(void)!!Result; // "Check" Result so that it can be safely overwritten.
if (auto ValOrErr = tryFoo(Aggressive))
  Result = std::move(ValOrErr);
else {
  auto Err = ValOrErr.takeError();
  if (Err.isA<HandleableError>()) {
    consumeError(std::move(Err));
    Result = tryFoo(Conservative);
  } else
    return std::move(Err);
}

With handleExpected, this can be rewritten as:

auto Result =
  handleExpected(
    tryFoo(Aggressive),
    []() { return tryFoo(Conservative); },
    [](HandleableError&) { /* discard to handle */ });

llvm-svn: 311870
2017-08-28 03:36:46 +00:00
Dehao Chen
f8a3b2e94f Revert r310985, which breaks for the following case:
struct string {
  ~string();
};
void f2();
void f1(int) { f2(); }
void run(int c) {
  string body;
  while (true) {
    if (c)
      f1(c);
    else
      f1(c);
  }
}

Will recommit once the issue is fixed.

llvm-svn: 311864
2017-08-27 22:22:39 +00:00
Petar Jovanovic
bddb058ce2 [mips] Generate NMADD and NMSUB instructions when fneg node is present
This patch enables generation of NMADD and NMSUB instructions when an fneg node
is present. Previously, these instructions were only generated if an fsub node
was present.

Patch by Stanislav Ocovaj.

Differential Revision: https://reviews.llvm.org/D34507

llvm-svn: 311862
2017-08-27 21:07:24 +00:00
Javed Absar
81bd76015d [ARM] Tidy-up condition-code support functions
Move condition code support functions to Utils and remove code duplication.

Reviewed by: @fhahn, @asb
Differential Revision: https://reviews.llvm.org/D37179

llvm-svn: 311860
2017-08-27 20:38:28 +00:00
Craig Topper
5ee95085ce [AVX512] Add more patterns for using masked moves for subvector extracts of the lowest subvector. This time with bitcasts between the vselect and the extract.
llvm-svn: 311856
2017-08-27 19:03:36 +00:00
Sanjay Patel
3271f429fc [DAGCombiner] allow undef shuffle operands when eliminating bitcasts (PR34111)
As noted in the FIXME, this could be improved more, but this is the smallest fix
that helps:
https://bugs.llvm.org/show_bug.cgi?id=34111

llvm-svn: 311853
2017-08-27 17:29:30 +00:00
Sanjay Patel
0822de001f [x86] add haddps test for PR34111; NFC
llvm-svn: 311852
2017-08-27 17:15:49 +00:00
Javed Absar
e5296f277f [ARM] Tidy-up ARMAsmParser. NFC.
Simplify getDRegFromQReg function

Reviewed by: @fhahn, @asb
Differential Revision: https://reviews.llvm.org/D37118

llvm-svn: 311850
2017-08-27 14:46:57 +00:00
Ayal Zaks
995e9f83a6 [LV] Fix PR34248 - recommit D32871 after revert r311304
Original commit r311077 of D32871 was reverted in r311304 due to failures
reported in PR34248.

This recommit fixes PR34248 by restricting the packing of predicated scalars
into vectors to the case where we are actually vectorizing, avoiding doing so
when unrolling without vectorizing. Added a test derived from the reproducer of PR34248.

llvm-svn: 311849
2017-08-27 12:55:46 +00:00
Jatin Bhateja
6582a855c0 [X86] Adding more tests for horizontal [F]HADD/[F]SUB for AVX512 vector types
llvm-svn: 311847
2017-08-27 12:43:25 +00:00
Craig Topper
e75012de8c [X86] Add a target-specific DAG combine to combine extract_subvector from all zero/one build_vectors.
llvm-svn: 311841
2017-08-27 05:39:57 +00:00
Craig Topper
fb3b8609d8 [X86] Use getOnesVector instead of using DAG.getConstant(-1).
llvm-svn: 311840
2017-08-27 03:26:04 +00:00
Davide Italiano
a10aa7c69b [NewGVN] Use auto when the type is obvious NFCI.
llvm-svn: 311838
2017-08-26 22:31:10 +00:00
Craig Topper
3c6db0bbbe [AVX512] Add patterns to match masked extract_subvector with bitcasts between the vselect and the extract_subvector. Remove the late DAG combine.
We used to do a late DAG combine to move the bitcasts out of the way, but I'm starting to think that it's better to canonicalize extract_subvector's type to match the type of its input. I've seen some cases where we've formed two different extract_subvectors from the same node, where one had a bitcast and the other didn't.

Add some more test cases to ensure we've also got most of the zero masking covered.

llvm-svn: 311837
2017-08-26 22:24:57 +00:00