mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-11-24 03:33:20 +01:00
Commit Graph

147612 Commits

Author SHA1 Message Date
Chih-Hung Hsieh
047ba89d96 [X86] Keep EXTRACT_VECTOR_ELT result type as f128 for Android x86_64.
The Android x86_64 target uses the f128 type and stores f128 values in %xmm* registers.
SoftenFloatRes_EXTRACT_VECTOR_ELT should not convert the result value
from f128 to i128.

Differential Revision: http://reviews.llvm.org/D32102

llvm-svn: 300583
2017-04-18 20:15:18 +00:00
Craig Topper
dab4df0257 [APInt] Inline the single word case of lshrInPlace similar to what we do for <<=.
llvm-svn: 300577
2017-04-18 19:13:27 +00:00
Simon Pilgrim
d9eb4d21ab [X86][SSE] Add scheduling latency/throughput tests for (most) SSE1 instructions
llvm-svn: 300576
2017-04-18 19:04:40 +00:00
Easwaran Raman
7b83777c46 [SLP vectorizer] Allow phi node reordering in tryToVectorizeList.
In tryToVectorizeList, under a very limited circumstance (when entered
from tryToVectorizePair), the values may be reordered (swapped) and the
SLP tree is built with the new order. This extends that to the case of
starting from phis in vectorizeChainsInBlock when there are exactly two
phis. The textual order of phi nodes shouldn't really matter. Without
this change, the loop body in the accompanying test case is fully vectorized
when we swap the order of the phis, but not with this order. While this
doesn't solve the phi-ordering problem in a general way (for more than 2
phis), it is a simple fix that piggybacks on an existing mechanism and
is useful in cases like multiplying two complex numbers.
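
For illustration, a minimal sketch of the kind of two-accumulator source
loop this helps (an assumed example, not the accompanying test case): the
two accumulators become exactly two phis in the loop body, and the complex
multiply-accumulate can now be vectorized regardless of the textual order
of those phis.

void cmul_accumulate(const float *a, const float *b, float *out, int n) {
  float re = 0.0f, im = 0.0f;          // become the two phi nodes
  for (int i = 0; i < n; ++i) {
    float ar = a[2 * i], ai = a[2 * i + 1];
    float br = b[2 * i], bi = b[2 * i + 1];
    re += ar * br - ai * bi;           // real part
    im += ar * bi + ai * br;           // imaginary part
  }
  out[0] = re;
  out[1] = im;
}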

Differential revision: https://reviews.llvm.org/D32065

llvm-svn: 300574
2017-04-18 18:16:57 +00:00
Simon Pilgrim
029fe287df [X86] Use for-range loop. NFCI.
llvm-svn: 300567
2017-04-18 17:18:54 +00:00
Craig Topper
debb291669 [APInt] Use lshrInPlace to replace lshr where possible
This patch uses lshrInPlace to replace code where the object that lshr is called on is being overwritten with the result.

This adds an lshrInPlace(const APInt &) version as well.
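
For illustration, a minimal sketch of the kind of rewrite the patch
performs (an assumed call site, not code from this commit): when the
shifted value overwrites itself, shifting in place avoids materializing a
temporary APInt.

#include "llvm/ADT/APInt.h"

void shiftDown(llvm::APInt &X, unsigned Amt) {
  // Before: X = X.lshr(Amt);  // builds a temporary, then copies it back
  X.lshrInPlace(Amt);          // after: shifts X's own words in place
}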

Differential Revision: https://reviews.llvm.org/D32155

llvm-svn: 300566
2017-04-18 17:14:21 +00:00
Daniel Berlin
20327c70bd NewGVN: Don't waste time value numbering unreachable blocks
llvm-svn: 300565
2017-04-18 17:06:11 +00:00
Nirav Dave
9e2d41cff4 [DAG] Improve store merge candidate pruning.
Remove non-consecutive stores from store merge candidate search as
they cannot be merged and will prevent us from finding subsequent
mergeable store cases.

Reviewers: jyknight, bogner, javed.absar, spatel

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D32086

llvm-svn: 300561
2017-04-18 15:36:34 +00:00
Nirav Dave
23e3be6e6e Add base-index-based store merge test
llvm-svn: 300559
2017-04-18 15:12:13 +00:00
Zvi Rackover
9a0fbd2ef8 LoopRerollPass: Prefer Value::hasOneUse() over Value::getNumUses(). NFC.
getNumUses() can be more expensive as it iterates over all of the use list's elements.
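
For illustration, a minimal sketch of the pattern being replaced (not the
exact LoopReroll call site): the use list is singly linked, so getNumUses()
walks it to the end, while hasOneUse() inspects at most two nodes.

#include "llvm/IR/Value.h"

static bool hasSingleUse(const llvm::Value *V) {
  // Before: return V->getNumUses() == 1;  // O(number of uses)
  return V->hasOneUse();                   // stops after the first node
}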

llvm-svn: 300558
2017-04-18 14:55:43 +00:00
Gil Rapaport
e43ac15a7c [LV] Cache block mask values
This patch is part of D28975's breakdown.

Add caching for block masks similar to the cache already used for edge masks,
replacing generation per user with reusing the first generated value which
dominates all uses.

Differential Revision: https://reviews.llvm.org/D32054

llvm-svn: 300557
2017-04-18 14:43:43 +00:00
Sanjay Patel
2d612df495 [ConstantRange] fix doxygen comment formatting; NFC
llvm-svn: 300554
2017-04-18 14:27:24 +00:00
Nikolai Bozhenov
ed492b9592 Make the globalaa-retained.ll test catch more cases.
Summary:
* Add checks for stores. That is needed because GlobalsAA is called
  twice in the current pipeline with different sets of Function passes
  following it. However, the loads are eliminated using instcombine,
  which happens everywhere. On the other hand, DeadStoreElimination is
  performed only once, so by checking for stores we'll be able to catch
  more cases where GlobalsAA is invalidated unintentionally.
* Add empty function above/below the test so that we don't depend on
  the relative order of instcombine/dead-store-elimination and the
  pass that invalidates the analysis (inside the same
  FunctionPassManager).

Reviewers: kristof.beyls

Reviewed By: kristof.beyls

Subscribers: llvm-commits, n.bozhenov

Differential Revision: https://reviews.llvm.org/D32015
Patch by Andrei Elovikov <andrei.elovikov@intel.com>

llvm-svn: 300553
2017-04-18 13:29:26 +00:00
Nikolai Bozhenov
bd24efc989 [GVNHoist] Mark GlobalsAA as preserved by GVNHoist.
Reviewers: sebpop, hiraditya

Reviewed By: sebpop

Subscribers: n.bozhenov, llvm-commits

Differential Revision: https://reviews.llvm.org/D32158
Patch by Andrei Elovikov <andrei.elovikov@intel.com>

llvm-svn: 300552
2017-04-18 13:25:49 +00:00
Nirav Dave
38398a2ddd Add store Merge test.
llvm-svn: 300551
2017-04-18 13:25:19 +00:00
Oliver Stannard
76132aaa0b [ARM] Add hardware build attributes in assembler
In the assembler, we should emit build attributes based on the target
selected with command-line options. This matches the GNU assembler's
behaviour. We only do this for build attributes which describe the
hardware that is expected to be available, not the ones that describe
ABI compatibility.

This is done by moving some of the attribute emission code to
ARMTargetStreamer, so that it can be shared between the assembly and
code-generation code paths. Since the assembler only creates a
MCSubtargetInfo, not an ARMSubtarget, the code had to be changed to
check raw features, and not use the convenience functions in
ARMSubtarget.

If different attributes are later specified using the .eabi_attribute
directive, then they will take precedence, as happens when the same
.eabi_attribute is specified twice.

This must be enabled by an option, because we don't want to do this when
parsing inline assembly. The attributes would match the ones emitted at
the start of the file, so wouldn't actually change the emitted object
file, but the extra directives would be added to every inline assembly
block when emitting assembly, which we'd like to avoid.

The majority of the changes in the build-attributes.ll test are just
re-ordering the directives, because the hardware attributes are now
emitted before the ABI ones. However, I did fix one bug which I spotted:
Tag_CPU_arch_profile was not being emitted for v6M.

Differential revision: https://reviews.llvm.org/D31812

llvm-svn: 300547
2017-04-18 12:52:35 +00:00
Diana Picus
3f9333697f [ARM] GlobalISel: Add support for G_SUB
Support G_SUB throughout the GlobalISel pipeline. It is exactly the same
as G_ADD, nothing fancy.

llvm-svn: 300546
2017-04-18 12:35:28 +00:00
Andrea Di Biagio
66b10b7920 [SampleProfile] Don't assert when printing the DebugLoc of a branch. NFC.
llvm-svn: 300544
2017-04-18 11:27:58 +00:00
Andrea Di Biagio
de6c1f4aaf [SampleProfile] Skip intrinsic calls when visiting callsites in InlineHotFunctions.
Before this patch, we always called the method 'findCalleeFunctionSamples()' on
intrinsic calls. However, intrinsic calls like llvm.dbg.value() are not viable
candidates for obvious reasons.
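
For illustration, a minimal sketch of the added guard (an assumed shape,
not the actual SampleProfileLoader code):

#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/IntrinsicInst.h"
using namespace llvm;

static void visitCallsites(BasicBlock &BB) {
  for (Instruction &I : BB) {
    auto *CI = dyn_cast<CallInst>(&I);
    if (!CI || isa<IntrinsicInst>(CI))
      continue; // e.g. llvm.dbg.value(): never a viable inlining candidate
    // ... look up findCalleeFunctionSamples() for CI and consider inlining ...
  }
}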

No functional change intended.

Differential Revision: https://reviews.llvm.org/D32008

llvm-svn: 300541
2017-04-18 10:08:53 +00:00
Kristof Beyls
1c3a629c41 Revert "[GlobalISel] Support vector-of-pointers in LLT"
This reverts r300535 and r300537.
The newly added tests in test/CodeGen/AArch64/GlobalISel/arm64-fallback.ll
produce slightly different code when LLVM is built with different compilers.
E.g., dependent on the compiler LLVM is built with, either one of the following
can be produced:

remark: <unknown>:0:0: unable to legalize instruction: %vreg0<def>(p0) = G_EXTRACT_VECTOR_ELT %vreg1, %vreg2; (in function: vector_of_pointers_extractelement)
remark: <unknown>:0:0: unable to legalize instruction: %vreg2<def>(p0) = G_EXTRACT_VECTOR_ELT %vreg1, %vreg0; (in function: vector_of_pointers_extractelement)

Non-determinism like this is clearly a bad thing, so reverting this until
I can find and fix the root cause of the non-determinism.

llvm-svn: 300538
2017-04-18 09:26:36 +00:00
Kristof Beyls
e329f3f7e1 Fix gcc build after r300535.
llvm-svn: 300537
2017-04-18 08:47:55 +00:00
Diana Picus
93efa79fb7 [ARM] Check for correct HW div when lowering divmod
For subtargets that use the custom lowering for divmod, e.g. gnueabi,
we used to check if the subtarget has hardware divide and then lower to
a div-mul-sub sequence if true, or to a libcall if false.

However, judging by the usage of hasDivide vs hasDivideInARMMode, it
seems that hasDivide only refers to Thumb. For instance, in the
ARMTargetLowering constructor, the code that specifies whether to use
libcalls for (S|U)DIV looks like this:

bool hasDivide = Subtarget->isThumb() ? Subtarget->hasDivide()
                                      : Subtarget->hasDivideInARMMode();

In the case of divmod for arm-gnueabi, using only hasDivide() to
determine what to do means that instead of lowering to __aeabi_idivmod
to get the remainder, we lower to div-mul-sub and then further lower the
div to __aeabi_idiv. Even worse, if we have hardware divide in ARM but
not in Thumb, we generate a libcall instead of using it (this is not an
issue in practice since AFAICT none of the cores that we support have
hardware divide in ARM but not Thumb).

This patch fixes the code dealing with custom lowering to take into
account the mode (Thumb or ARM) when deciding whether or not hardware
division is available.
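
For illustration, a minimal sketch of the corrected predicate (an assumed
shape, not the exact ARMISelLowering change), using the same pair of
feature checks quoted above:

#include "ARMSubtarget.h"

static bool hasSuitableHWDivide(const llvm::ARMSubtarget &ST) {
  return ST.isThumb() ? ST.hasDivide()            // SDIV/UDIV in Thumb
                      : ST.hasDivideInARMMode();  // SDIV/UDIV in ARM mode
}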

Differential Revision: https://reviews.llvm.org/D32005

llvm-svn: 300536
2017-04-18 08:32:27 +00:00
Kristof Beyls
507db3264b [GlobalISel] Support vector-of-pointers in LLT
This fixes PR32471.

As comment 10 on that bug report highlights
(https://bugs.llvm.org//show_bug.cgi?id=32471#c10), there are quite a
few different defendable design tradeoffs that could be made, including
not representing pointers at all in LLT.

I decided to go for representing vector-of-pointer as a concept in LLT,
while keeping the size of the LLT type 64 bits (this is an increase from
48 bits before). My rationale for keeping pointers explicit is that on
some targets probably it's very handy to have the distinction between
pointer and non-pointer (e.g. 68K has a different register bank for
pointers IIRC). If we keep a scalar pointer, it probably is easiest to
also have a vector-of-pointers to keep LLT relatively conceptually clean
and orthogonal, while we don't have a very strong reason to break that
orthogonality. Once we gain more experience on the use of LLT, we can
of course reconsider this direction.

Rejecting vector-of-pointer types in the IRTranslator is also an option
to avoid the crash reported in PR32471, but that is only a very
short-term solution; it also needs quite a few code tweaks in places
and is probably fragile. Therefore I didn't consider this the best
option.

llvm-svn: 300535
2017-04-18 08:12:45 +00:00
Leslie Zhai
1c03a501b8 test commit
llvm-svn: 300532
2017-04-18 07:28:54 +00:00
Craig Topper
044fb2ba9d [APInt] Cleanup the reverseBits slow case a little.
Use lshrInPlace. Use single bit extract and operator|=(uint64_t) to avoid a few temporary APInts.

llvm-svn: 300527
2017-04-18 05:02:21 +00:00
Craig Topper
b0d5814eef [APInt] Make operator<<= shift in place. Improve the implementation of tcShiftLeft and use it to implement operator<<=.
llvm-svn: 300526
2017-04-18 04:39:48 +00:00
Adrian Prantl
f625c157a1 PR32382: Fix emitting complex DWARF expressions.
The DWARF specification knows 3 kinds of non-empty simple location
descriptions:
1. Register location descriptions
  - describe a variable in a register
  - consist of only a DW_OP_reg
2. Memory location descriptions
  - describe the address of a variable
3. Implicit location descriptions
  - describe the value of a variable
  - end with DW_OP_stack_value & friends

The existing DwarfExpression code is pretty much ignorant of these
restrictions. This used to not matter because we only emitted very
short expressions that we happened to get right by accident.  This
patch makes DwarfExpression aware of the rules defined by the DWARF
standard and now chooses the right kind of location description for
each expression being emitted.

This would have been an NFC commit (for the existing testsuite) if not
for the way that clang describes captured block variables. Based on
how the previous code in LLVM emitted locations, DW_OP_deref
operations that should have come at the end of the expression are put
at its beginning. Fixing this means changing the semantics of
DIExpression, so this patch bumps the version number of DIExpression
and implements a bitcode upgrade.

There are two major changes in this patch:

I had to fix the semantics of dbg.declare for describing function
arguments. After this patch a dbg.declare always takes the *address*
of a variable as the first argument, even if the argument is not an
alloca.

When lowering a DBG_VALUE, the decision of whether to emit a register
location description or a memory location description depends on the
MachineLocation — register machine locations may get promoted to
memory locations based on their DIExpression. (Future) optimization
passes that want to salvage implicit debug location for variables may
do so by appending a DW_OP_stack_value. For example:
  DBG_VALUE, [RBP-8]                        --> DW_OP_fbreg -8
  DBG_VALUE, RAX                            --> DW_OP_reg0 +0
  DBG_VALUE, RAX, DIExpression(DW_OP_deref) --> DW_OP_reg0 +0

All testcases that were modified were regenerated from clang. I also
added source-based testcases for each of these to the debuginfo-tests
repository over the last week to make sure that no synchronized bugs
slip in. The debuginfo-tests compile from source and run the debugger.

https://bugs.llvm.org/show_bug.cgi?id=32382
<rdar://problem/31205000>

Differential Revision: https://reviews.llvm.org/D31439

llvm-svn: 300522
2017-04-18 01:21:53 +00:00
George Burgess IV
fec4b3628a Add const to a const method. NFC
llvm-svn: 300520
2017-04-18 01:04:05 +00:00
Davide Italiano
8ba440ea9a [Target] Use hasOneUse() instead of getNumUses().
The latter does a linear scan over a linked list and is therefore
much more expensive.

llvm-svn: 300518
2017-04-18 00:29:54 +00:00
Peter Collingbourne
bb279104fc Object: Shrink the size of irsymtab::Symbol by a word. NFCI.
Instead of storing an UncommonIndex on the Symbol, use a flag bit to store
whether the Symbol has an Uncommon. This shrinks Chromium's .bc files (after
D32061) by about 1%.
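
For illustration, a sketch of the space-saving idea with hypothetical
names (not the actual irsymtab layout): store one flag bit per symbol
instead of a full index; one plausible way to recover the index is to
count how many preceding symbols carry the flag, since the uncommon
records are emitted in symbol order.

#include <cstdint>
#include <vector>

struct Sym {
  uint32_t Flags = 0;
  static const uint32_t HasUncommon = 1u << 0;    // hypothetical flag bit
  bool hasUncommon() const { return Flags & HasUncommon; }
};

static unsigned uncommonIndex(const std::vector<Sym> &Syms, unsigned SymIdx) {
  unsigned Idx = 0;
  for (unsigned I = 0; I != SymIdx; ++I)          // count earlier flagged symbols
    if (Syms[I].hasUncommon())
      ++Idx;
  return Idx;
}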

Differential Revision: https://reviews.llvm.org/D32070

llvm-svn: 300514
2017-04-17 23:43:49 +00:00
Dehao Chen
186d97d7ce Build SymbolMap in SampleProfileLoader to help match function names with suffixes.
Summary: If there is a suffix added to the function name (e.g. a module hash added by ThinLTO), we will not be able to find a match in the profile as the suffix does not exist in the profile. This patch builds a map from function name to Function *. The map includes an entry for the stripped function name so that inlineHotFunctions can find the corresponding function to promote/inline.
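
For illustration, a minimal sketch of the idea (assumed shape and helper
names, not the exact SampleProfileLoader code): map both the full function
name and the name with its suffix stripped to the same Function *, so a
profile that only knows the original name still finds the function.

#include "llvm/ADT/StringMap.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/Module.h"
using namespace llvm;

static void buildSymbolMap(Module &M, StringMap<Function *> &SymbolMap) {
  for (Function &F : M) {
    StringRef Name = F.getName();
    SymbolMap[Name] = &F;
    // e.g. "foo.llvm.1234" was plain "foo" when the profile was collected.
    StringRef Stripped = Name.split('.').first;
    if (Stripped != Name)
      SymbolMap[Stripped] = &F;
  }
}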

Reviewers: davidxl, dnovillo, tejohnson

Reviewed By: davidxl

Subscribers: mehdi_amini, llvm-commits

Differential Revision: https://reviews.llvm.org/D31952

llvm-svn: 300507
2017-04-17 22:23:05 +00:00
Haicheng Wu
b656bd13b8 Change the testcase tail-merge-after-mbp.ll to tail-merge-after-mbp.mir
Differential Revision: https://reviews.llvm.org/D32037

llvm-svn: 300506
2017-04-17 22:22:38 +00:00
Craig Topper
bec88c1d46 [SimplifyCFG] Use hasNUses instead of comparing getNumUses to a constant.
The use list is a linked list, so getNumUses requires a linear scan through the whole list. hasNUses will stop scanning at N and see if that is the end.
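
For illustration (not the exact SimplifyCFG call site): hasNUses(N) walks
at most N + 1 use-list nodes before answering, whereas getNumUses() always
walks to the end of the list.

#include "llvm/IR/Value.h"

static bool hasExactlyTwoUses(const llvm::Value *V) {
  // Before: return V->getNumUses() == 2;
  return V->hasNUses(2);
}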

llvm-svn: 300505
2017-04-17 22:13:00 +00:00
Craig Topper
6607f16ee5 [APInt] Merge the multiword code from lshrInPlace and tcShiftRight into a single implementation
This merges the two different multiword shift right implementations into a single version located in tcShiftRight. lshrInPlace now calls tcShiftRight for the multiword case.

I retained the memmove fast path from lshrInPlace and used a memset for the zeroing. The for loop is basically tcShiftRight's implementation with the zeroing and the intra-shift of 0 removed.
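
For illustration, a self-contained sketch of that structure (a word-level
memmove fast path plus memset for the vacated high words); this is an
assumed stand-in, not APInt::tcShiftRight itself. Words[0] is the least
significant word.

#include <cstdint>
#include <cstring>

static void shiftRightWords(uint64_t *Words, unsigned NumWords, unsigned Shift) {
  const unsigned BitsPerWord = 64;
  unsigned WordShift = Shift / BitsPerWord;
  unsigned BitShift = Shift % BitsPerWord;
  if (WordShift >= NumWords) {                     // shifted out entirely
    std::memset(Words, 0, NumWords * sizeof(uint64_t));
    return;
  }
  if (BitShift == 0) {
    // Fast path: a word-aligned shift is a plain move of the surviving words.
    std::memmove(Words, Words + WordShift,
                 (NumWords - WordShift) * sizeof(uint64_t));
  } else {
    for (unsigned I = 0; I != NumWords - WordShift; ++I) {
      Words[I] = Words[I + WordShift] >> BitShift;
      if (I + WordShift + 1 < NumWords)
        Words[I] |= Words[I + WordShift + 1] << (BitsPerWord - BitShift);
    }
  }
  if (WordShift)                                   // zero the vacated top words
    std::memset(Words + (NumWords - WordShift), 0, WordShift * sizeof(uint64_t));
}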

Differential Revision: https://reviews.llvm.org/D32114

llvm-svn: 300503
2017-04-17 21:43:43 +00:00
Jacob Gravelle
cdffddeed8 [WebAssembly] Fix WebAssemblyOptimizeReturned after r300367
Summary:
A refactoring changed paramHasAttr(1 + i) to paramHasAttr(0); fix that to
paramHasAttr(i).
Add more tests to WebAssemblyOptimizeReturned that catch that
regression.

Reviewers: dschuff

Subscribers: jfb, sbc100, llvm-commits

Differential Revision: https://reviews.llvm.org/D32136

llvm-svn: 300502
2017-04-17 21:40:28 +00:00
Benjamin Kramer
a766284f06 [SCEV] Fix another unused variable warning in release builds.
llvm-svn: 300500
2017-04-17 21:07:26 +00:00
Wei Mi
066544acbe Fix an unused variable error in rL300494.
llvm-svn: 300499
2017-04-17 21:00:45 +00:00
Kostya Serebryany
ffa39a0c30 [libFuzzer] experimental option -cleanse_crash: tries to replace all bytes in a crash reproducer with garbage, while still preserving the crash
llvm-svn: 300498
2017-04-17 20:58:21 +00:00
Sylvestre Ledru
ea4b28cda6 Add a linker script to version LLVM symbols
Summary:
This patch adds a very simple linker script to version the lib's symbols
and thus try to avoid crashes if an application loads two different
LLVM versions (as long as they do not share data between them).

Note that we deliberately *don't* make LLVM_5.0 depend on LLVM_4.0:
they're incompatible and the whole point of this patch is
to tell the linker that.


Avoid unexpected crashes when two LLVM versions are used in the same process.

Author: Rebecca N. Palmer <rebecca_palmer@zoho.com>
Author: Lisandro Damían Nicanor Pérez Meyer <lisandro@debian.org>
Author: Sylvestre Ledru <sylvestre@debian.org>
Bug-Debian:  https://bugs.debian.org/848368


Reviewers: beanz, rnk

Reviewed By: rnk

Subscribers: mgorny, llvm-commits

Differential Revision: https://reviews.llvm.org/D31524

llvm-svn: 300496
2017-04-17 20:51:50 +00:00
Davide Italiano
26c988b4fc [InstCombine] Matchers work with both ConstExpr and Instructions.
So, `cast<Instruction>` is not guaranteed to succeed. Change the
code so that we create a new constant and use it in the newly
created instruction, as is done in other places in InstCombine.

OK'ed by Sanjay/Craig. Fixes PR32686.

llvm-svn: 300495
2017-04-17 20:49:50 +00:00
Wei Mi
a91f9f0c5f [SCEV] Add a local cache for getZeroExtendExpr and getSignExtendExpr to prevent
the exponential behavior.

The patch is to fix PR32043. The functions getZeroExtendExpr and getSignExtendExpr
may call themselves recursively more than once. This is potentially 2^N
complexity behavior. The exponential behavior was not commonly exposed before
because of existing global cache mechanisms like UniqueSCEVs and some early-return
mechanisms when the flags FlagNSW or FlagNUW are seen. However, we still have cases
which can expose the exponential behavior, like the case in PR32043, so we add
a local cache in getZeroExtendExpr and getSignExtendExpr. If the input of the
functions -- the SCEV and type pair -- has been seen before, we can find the extended
expression directly in the local cache.
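
For illustration, a minimal sketch of the memoization idea (the names and
the split into a helper are assumed; the actual cache lives inside
ScalarEvolution): remember each (SCEV, destination type) pair so a repeated
recursive call with the same inputs returns immediately.

#include "llvm/ADT/DenseMap.h"
#include "llvm/Analysis/ScalarEvolution.h"
using namespace llvm;

using ExtendCacheTy = DenseMap<std::pair<const SCEV *, Type *>, const SCEV *>;

// Hypothetical recursive worker that does the real expansion.
const SCEV *computeZeroExtendImpl(const SCEV *Op, Type *Ty, ExtendCacheTy &Cache);

const SCEV *getZeroExtendExprCached(const SCEV *Op, Type *Ty,
                                    ExtendCacheTy &Cache) {
  auto It = Cache.find({Op, Ty});
  if (It != Cache.end())
    return It->second;                  // this (SCEV, type) pair was seen before
  const SCEV *Result = computeZeroExtendImpl(Op, Ty, Cache);
  Cache.insert({{Op, Ty}, Result});
  return Result;
}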

Differential Revision: https://reviews.llvm.org/D30350

llvm-svn: 300494
2017-04-17 20:40:05 +00:00
Sanjay Patel
0537b8170b [InstSimplify] add/move tests for (icmp X, C1 & icmp X, C2); NFC
We simplify based on range intersection, but we're missing folds.

llvm-svn: 300493
2017-04-17 20:38:33 +00:00
Dehao Chen
bd5ad1c833 Update the test to fix the buildbot failure introduced by r300486 (NFC)
llvm-svn: 300492
2017-04-17 20:35:32 +00:00
Derek Schuff
bffb7aa988 [WebAssembly] Encode block signatures as SLEB instead of ULEB
Use SLEB (varint) for block_type immediates in accordance with the spec.

Patch by Yury Delendik
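
For illustration, a standalone SLEB128 encoder sketch (LLVM's own helpers
live in llvm/Support/LEB128.h; this is only to show the signed encoding):
signed LEB128 can represent the negative block_type values the spec uses,
e.g. the empty block type -0x40 encodes as the single byte 0x40.

#include <cstdint>

static unsigned writeSLEB128(int64_t Value, uint8_t *Out) {
  unsigned Count = 0;
  bool More;
  do {
    uint8_t Byte = Value & 0x7f;
    Value >>= 7;                     // arithmetic shift preserves the sign
    More = !((Value == 0 && (Byte & 0x40) == 0) ||
             (Value == -1 && (Byte & 0x40) != 0));
    if (More)
      Byte |= 0x80;                  // continuation bit
    Out[Count++] = Byte;
  } while (More);
  return Count;
}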

llvm-svn: 300490
2017-04-17 20:28:28 +00:00
Dehao Chen
9071034fad Add GNU_discriminator support for inline callsites in llvm-symbolizer.
Summary: llvm-symbolizer cannot recognize GNU_discriminator for inline callsites. This patch adds support for it.

Reviewers: dblaikie

Reviewed By: dblaikie

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D32134

llvm-svn: 300486
2017-04-17 20:10:39 +00:00
Matt Arsenault
5c6c8d6ec1 AMDGPU: Use MachineRegisterInfo to find max used register
Avoid looping through the program to determine register counts.
This avoids needing to look at regmask operands.

Also fixes some counting errors with flat_scr when there
are no stack objects.

llvm-svn: 300482
2017-04-17 19:48:30 +00:00
Matt Arsenault
931bf20b8a AMDGPU: Change stack alignment
While the incoming stack for a kernel is 256-byte aligned,
this refers to the base address of the entire wave. This isn't
useful information for most of codegen. Fixes unnecessarily
aligning stack objects in callees.

llvm-svn: 300481
2017-04-17 19:48:24 +00:00
Brendon Cahoon
ca6976ddbb [CodeGenPrepare] Fix crash due to an invalid CFG
The splitIndirectCriticalEdges function generates an invalid CFG when the
'Target' basic block is a loop to itself. When this occurs, the code that
updates the predecessor terminator needs to update the terminator in the split
basic block.

This occurs when there is an edge from block D back to D. Since D is split
into D0 and D1, the code needs to update the terminator in D1. But D1 is not in
the OtherPreds vector, so it was not getting updated.

Differential Revision: https://reviews.llvm.org/D32126

llvm-svn: 300480
2017-04-17 19:11:04 +00:00
Benjamin Kramer
3c575c98f5 Unbreak build of the wasm backend after r300463.
llvm-svn: 300479
2017-04-17 19:08:41 +00:00
Peter Collingbourne
f85117cde3 Bitcode: Add missing build dep to fix shlib build.
llvm-svn: 300478
2017-04-17 18:53:27 +00:00