mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-19 11:02:59 +02:00
Commit Graph

127613 Commits

Author SHA1 Message Date
Mitch Bodart
efdbc49462 Fix some erroneous lit test failures due to unlucky name of working directory.
Differential Revision:  http://reviews.llvm.org/D17044

llvm-svn: 261104
2016-02-17 16:35:18 +00:00
Rafael Espindola
d1106791d2 Add an unwrapOrError utility and use it to simplify ELFDumper.cpp.
Utility extracted from r260488.

llvm-svn: 261103
2016-02-17 16:21:49 +00:00
Simon Pilgrim
4804c75dcf [X86][SSE] Update pshufb mask tests.
We are getting better at combining constant pshufb masks - use a real input instead of undef.

Add test for decoding multi-use bitcasted masks as well (actual support will come soon).

llvm-svn: 261101
2016-02-17 15:52:39 +00:00
Rafael Espindola
6da20ee54a Change how readobj stores info about dynamic symbols.
We used to keep both a section and a pointer to the first symbol.

The oddity of keeping a section for dynamic symbols is because there is
a DT_SYMTAB tag but no corresponding size tag, so to print the table we
have to find the size via the section table.

The reason for still keeping a pointer to the first symbol is because we
want to be able to print relocation tables even if the section table is
missing (it is mandatory only for files used in linking).

With this patch we keep just a DynRegionInfo. This then requires
changing a few places that were asking for an Elf_Shdr but actually just
needed the first symbol.

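Roughly the shape of that bookkeeping, as a hedged sketch (the field
names are illustrative, not necessarily those in ELFDumper.cpp): a base
pointer plus sizes is enough to walk the dynamic symbols without a
section header.

    #include <cstdint>

    // Illustrative only: one contiguous region of the file, described
    // without reference to any section header.
    struct DynRegionInfo {
      const void *Addr = nullptr; // start of the region (e.g. DT_SYMTAB)
      uint64_t Size = 0;          // total size in bytes
      uint64_t EntSize = 0;       // one entry, e.g. sizeof(Elf_Sym)
    };
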
The test change is to delete the program header pointer.
Now that we use information from both DT_SYMTAB and .dynsym, we don't
depend on the sh_entsize of .dynsym if we see DT_SYMTAB.

Note: It is questionable whether it is worth the effort to report a
broken sh_entsize, given that in files with no section table we have to
assume it is sizeof(Elf_Sym); but that is for another change.

Extracted from r260488.

llvm-svn: 261099
2016-02-17 15:38:21 +00:00
Krzysztof Parzyszek
cacffaaf70 [Hexagon] Fold object construction into map::insert
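
As a generic illustration of the change (not the Hexagon code itself),
folding the construction into the insert call avoids a named temporary:

    #include <map>
    #include <string>

    std::map<int, std::string> Regs;

    void before(int R) {
      std::string Name = "r" + std::to_string(R); // separate construction
      Regs.insert(std::make_pair(R, Name));       // then copy into the map
    }

    void after(int R) {
      // Construction folded directly into the insert call.
      Regs.insert(std::make_pair(R, "r" + std::to_string(R)));
    }
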
llvm-svn: 261096
2016-02-17 15:02:07 +00:00
Simon Pilgrim
8d76fd1d94 [X86][SSE] Update pshufb mask test to use a real input instead of undef
We are getting better at combining constant pshufb masks - this test would've failed once we decode bitcasted masks as well.

llvm-svn: 261095
2016-02-17 14:56:58 +00:00
Chad Rosier
f2aa971d21 Typo.
llvm-svn: 261093
2016-02-17 14:45:36 +00:00
Igor Breger
0cae06de47 AVX512: Fix LowerMSCATTER() return value.
Bug description:
  The bug was discovered when a test was compiled with -O0.
  When the scatter result is the DAG root, VectorLegalizer hit an assert
  because LowerMSCATTER() returned the kmask as its result.
Change LowerMSCATTER() to return the chain, as the original node does.
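
A minimal sketch of the idea, assuming the lowered scatter node produces
a mask as result 0 and a chain as result 1 (the real code differs in
detail):

    #include "llvm/CodeGen/SelectionDAG.h"
    using namespace llvm;

    static SDValue scatterResult(SDValue NewScatter) {
      // Hand back result 1 (the chain), not result 0 (the kmask), so a
      // DAG rooted at the scatter legalizes cleanly.
      return SDValue(NewScatter.getNode(), 1);
    }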

Differential Revision: http://reviews.llvm.org/D17331

llvm-svn: 261090
2016-02-17 14:04:33 +00:00
Scott Egerton
36020388d8 [mips] Removed the SHF_ALLOC flag and the SHT_REL type from the .pdr section.
This section is used for debug information and has no need to be
in memory at runtime. This patch also fixes an error when compiling
the Linux kernel. The error is that there are relocations within the
.pdr section in a VDSO. SHT_REL was removed as it is a section type
and not a section flag, therefore it does not make sense for it to
be there. With this patch, LLVM now emits the same flags as
the GNU assembler.

llvm-svn: 261083
2016-02-17 11:15:16 +00:00
Simon Pilgrim
54bcb546f1 [X86][AVX] Support bit-blend integer shuffles for 256-bit integer vectors
AVX1 doesn't support the shuffling of 256-bit integer vectors. For 32/64-bit elements we get around this by shuffling as float/double but for 8/16-bit elements (assuming they can't widen) we currently just split, shuffle as 128-bit vectors and concatenate the results back.

This patch adds the ability to lower using the bit-blend patterns before defaulting to the splitting behaviour.
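
Sketched with AVX intrinsics for illustration (the patch itself works on
SelectionDAG nodes, not intrinsics): AVX1's 256-bit float bitwise ops
can synthesize an integer blend.

    #include <immintrin.h>

    // mask lanes must be all-ones (take a) or all-zero (take b).
    static __m256 bitBlend(__m256 mask, __m256 a, __m256 b) {
      return _mm256_or_ps(_mm256_and_ps(mask, a),      //  mask & a
                          _mm256_andnot_ps(mask, b));  // ~mask & b
    }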

Part 2 of 2

Differential Revision: http://reviews.llvm.org/D17292

llvm-svn: 261082
2016-02-17 10:50:06 +00:00
Simon Pilgrim
b13c68898e [X86][AVX] Support bit-mask integer shuffles for 256-bit integer vectors
AVX1 doesn't support the shuffling of 256-bit integer vectors. For 32/64-bit elements we get around this by shuffling as float/double but for 8/16-bit elements (assuming they can't widen) we currently just split, shuffle as 128-bit vectors and concatenate the results back.

This patch adds the ability to lower using the bit-mask patterns before defaulting to the splitting behaviour. In some cases this ends up matching what AVX2 would do anyhow or what AVX1 does on the split vectors.

Part 1 of 2

Differential Revision: http://reviews.llvm.org/D17292

llvm-svn: 261081
2016-02-17 10:37:49 +00:00
Simon Pilgrim
d35c533371 [X86][SSE] Tidyup BUILD_VECTOR operand collection. NFCI.
Avoid reuse of operand variables, keep them local to a particular lowering - the operand collection is unique to each case anyhow.

Renamed from V to Ops to more closely match their purpose.

llvm-svn: 261078
2016-02-17 10:12:30 +00:00
Benjamin Kramer
eab3ea0306 [Hexagon] cast<> a reference instead of referencing + dereferencing.
llvm-svn: 261077
2016-02-17 09:28:45 +00:00
David Blaikie
8c7405f20a llvm-dwp: Support for type units when merging DWPs into larger DWPs
llvm-svn: 261072
2016-02-17 07:00:24 +00:00
David Blaikie
525cd533c1 Fix the hash function.
llvm-svn: 261071
2016-02-17 07:00:22 +00:00
Cong Hou
1e3109066e Detect vector reduction operations just before instruction selection.
This patch detects vector reductions before instruction selection. Vector
reductions are vectorized reduction operations, and for such operations we have
freedom to reorganize the elements of the result as long as their reduction
stays unchanged. This will enable some reduction pattern recognition during
instruction combine, such as SAD/dot-product on X86. A flag is added to
SDNodeFlags to mark those vector reduction nodes to be checked during instruction
combine.

To detect those vector reductions, we search def-use chains starting from the
given instruction, and check if all uses fall into two categories:

1. Reduction with another vector.
2. Reduction on all elements.

Category 2 is detected by recognizing the pattern that the loop vectorizer
generates to reduce all elements in the vector outside of the loop, which
consists of several ShuffleVector instructions and one ExtractElement instruction.
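
For illustration, a sketch of that tail as it might be built with
today's IRBuilder overloads (the 2016-era API passed the shuffle mask as
a constant vector value); each step halves the vector and adds:

    #include "llvm/IR/IRBuilder.h"
    using namespace llvm;

    static Value *reduceTail(IRBuilder<> &B, Value *V) { // V: <4 x i32>
      Value *Hi = B.CreateShuffleVector(V, {2, 3, -1, -1}); // upper half
      V = B.CreateAdd(V, Hi);
      Hi = B.CreateShuffleVector(V, {1, -1, -1, -1});
      V = B.CreateAdd(V, Hi);
      return B.CreateExtractElement(V, uint64_t(0)); // scalar sum
    }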


Differential revision: http://reviews.llvm.org/D15250

llvm-svn: 261070
2016-02-17 06:37:04 +00:00
Hans Wennborg
8484662941 Revert r260979 "[X86] Enable the LEA optimization pass by default."
Asserts are still firing in Chromium builds. PR26575.

llvm-svn: 261058
2016-02-17 02:49:59 +00:00
Xinliang David Li
2bdd7e9a49 revert r261038: arm/aarch64 bot failure
llvm-svn: 261057
2016-02-17 02:39:34 +00:00
Mehdi Amini
919bd12aad Revert "Query the StringMap only once when creating MDString (NFC)"
This reverts commit r261030 and r261036.
(The revision was marked "approved" on phabricator, but some concerns
were raised on the mailing list. Thanks D. Blaikie for notifying me.)

From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 261055
2016-02-17 02:18:58 +00:00
Haicheng Wu
6b1e9d59b2 [AliasSetTracker] Teach AliasSetTracker about MemSetInst
This change is to fix the problem discussed in
http://lists.llvm.org/pipermail/llvm-dev/2016-February/095446.html.

llvm-svn: 261052
2016-02-17 02:01:50 +00:00
JF Bastien
08c122fb28 WebAssembly: update expected failures
r261050 seems to inadvertently fix the assertion failure.

llvm-svn: 261051
2016-02-17 01:59:23 +00:00
Dan Gohman
811eb4337e [WebAssembly] Call memcpy for large byval copies.
This fixes very slow compilation on
test/CodeGen/Generic/2010-11-04-BigByval.ll. Note that MaxStoresPerMemcpy
and friends are not yet carefully tuned so the cutoff point is currently
somewhat arbitrary. However, it's important that there be a cutoff point
so that we don't emit unbounded quantities of loads and stores.

llvm-svn: 261050
2016-02-17 01:43:37 +00:00
JF Bastien
83e991d52a WebAssembly: update expected test failures
r261032 adds frame address support.

llvm-svn: 261044
2016-02-17 00:34:15 +00:00
Chandler Carruth
4ec346556c [LCG] Construct an actual call graph with call-edge SCCs nested inside
reference-edge SCCs.

This essentially builds a more normal call graph as a subgraph of the
"reference graph" that was the old model. This allows both to exist and
the different use cases to use the aspect which addresses their needs.
Specifically, the pass manager and other *ordering* constrained logic
can use the reference graph to achieve conservative order of visit,
while analyses reasoning about attributes and other properties derived
from reachability can reason about the direct call graph.

Note that this isn't necessarily complete: it doesn't model edges to
declarations or indirect calls. Those can be found by scanning the
instructions of the function if desirable, and in fact every user
currently does this in order to handle things like calls to intrinsics.
If useful, we could consider caching this information in the call graph
to save the instruction scans, but currently that doesn't seem to be
important.

An important realization for why the representation chosen here works is
that the call graph is a formal subset of the reference graph and thus
both can live within the same data structure. All SCCs of the call graph
are necessarily contained within an SCC of the reference graph, etc.

The design is to build 'RefSCC's to model SCCs of the reference graph,
and then within them more literal SCCs for the call graph.
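
A purely illustrative sketch of that nesting (names and layout are not
the actual LazyCallGraph API):

    #include <vector>

    struct SCC;        // maximal cycle of direct call edges
    struct RefSCC {    // maximal cycle of reference edges
      // Call-graph SCCs contained in this RefSCC, stored in post-order;
      // built eagerly when the RefSCC is formed, per the text below.
      std::vector<SCC *> SCCs;
    };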

The formation of actual call edge SCCs is not done lazily, unlike
reference edge 'RefSCC's. Instead, once a reference SCC is formed, it
directly builds the call SCCs within it and stores them in a post-order
sequence. This is used to provide a consistent platform for mutation and
update of the graph. The post-order also allows for very efficient
updates in common cases by bounding the number of nodes (and thus edges)
considered.

There is considerable common code that I'm still looking for the best
way to factor out between the various DFS implementations here. So far,
my attempts have made the code harder to read and understand despite
reducing the duplication, which seems a poor tradeoff. I've not given up
on figuring out the right way to do this, but I wanted to wait until
I at least had the system working and tested to continue attempting to
factor it differently.

This also requires introducing several new algorithms in order to handle
all of the incremental update scenarios for the more complex structure
involving two edge colorings. I've tried to comment the algorithms
sufficiently to make it clear how this is expected to work, but they may
still need more extensive documentation.

I know that some of the changes bundled here are not strictly necessary
to couple. The process of developing this started out with a very
focused set of changes for the new structure of the graph and
algorithms, but subsequent changes to bring the APIs and code into
consistent and understandable patterns also ended up touching on other
aspects. There was no good way to separate these out without causing
*massive* merge conflicts. Ultimately, to a large degree this is
a rewrite of most of the core algorithms in the LCG class and so I don't
think it really matters much.

Many thanks to the careful review by Sanjoy Das!

Differential Revision: http://reviews.llvm.org/D16802

llvm-svn: 261040
2016-02-17 00:18:16 +00:00
Reid Kleckner
30231d4f95 [X86] Fix a shrink-wrapping miscompile around __chkstk
__chkstk clobbers EAX. If EAX is live across the prologue, then we have
to take extra steps to save it. We already had code to do this if EAX
was a register parameter. This change adapts it to work when shrink
wrapping is used.

llvm-svn: 261039
2016-02-17 00:17:33 +00:00
Xinliang David Li
fc57478329 New test case: make sure alloc bit is not set for covmap section on Linux
llvm-svn: 261038
2016-02-17 00:14:52 +00:00
Dan Gohman
ea19c3e264 [WebAssembly] Use SDValue::getConstantOperandVal. NFC.
llvm-svn: 261037
2016-02-17 00:14:03 +00:00
Mehdi Amini
7e2849720e Fix MSVC bot: apparently Visual Studio does not like an explicitly defaulted move ctor
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 261036
2016-02-17 00:11:59 +00:00
Andrew Kaylor
eedb857024 Fix building LLVM with -D LLVM_USE_INTEL_JITEVENTS:BOOL=ON on Windows
Differential Revision: http://reviews.llvm.org/D16940

llvm-svn: 261033
2016-02-16 23:52:18 +00:00
Dan Gohman
7699d5ed4b [WebAssembly] Implement __builtin_frame_address.
Differential Revision: http://reviews.llvm.org/D17307

llvm-svn: 261032
2016-02-16 23:48:04 +00:00
Mehdi Amini
5c2caef940 Query the StringMap only once when creating MDString (NFC)
Summary: When loading IR with debug info, this improves MDString::get() from 19ms to 10ms.
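
The pattern in miniature, with a plain StringMap as a stand-in (the
real change is inside LLVMContextImpl's MDString cache, and try_emplace
here names today's StringMap API):

    #include "llvm/ADT/StringMap.h"
    using namespace llvm;

    StringMap<int> Cache;

    // Two probes: a lookup, then an insert on a miss.
    int &getSlow(StringRef Key) {
      auto It = Cache.find(Key);
      if (It != Cache.end())
        return It->second;
      return Cache.insert({Key, 0}).first->second;
    }

    // One probe: try_emplace finds or creates the entry in a single query.
    int &getFast(StringRef Key) {
      return Cache.try_emplace(Key, 0).first->second;
    }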

Reviewers: dexonsmith

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D16597

From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 261030
2016-02-16 23:05:56 +00:00
Mehdi Amini
6f127aff71 Define the ThinLTO Pipeline (experimental)
Summary:
In contrast to full LTO, ThinLTO can afford to shift compile time
from the frontend to the linker: both phases are parallel (even if
it is not totally "free": projects like clang reuse the product
of the "compile phase" for multiple links; think of libLLVMSupport
being reused for opt, llc, etc.).

This pipeline is based on the proposal in D13443 for full LTO. We
didn't move forward with that proposal because it made the LTO link
far too long. We believe that ThinLTO can afford it.

The ThinLTO pipeline integrates in the regular O2/O3 flow:

 - The compile phase performs inlining with a somewhat lighter
   function simplification. (TODO: tune the inliner thresholds here)
   This is intended to simplify the IR and get rid of obvious things
   like linkonce_odr functions that will be inlined.
 - The link phase will run the pipeline from the start, extended with
   some specific passes that leverage the augmented knowledge we have
   during LTO. Especially after the inliner is done, a sequence of
   globalDCE/globalOpt is performed, followed by another run of the
   "function simplification" passes. It is not clear if this part
   of the pipeline will stay as is, as the split model of ThinLTO
   does not allow the same benefit as FullLTO without added tricks.

The measurements on the public test suite as well as on our internal
suite show an overall net improvement. The binary size for the clang
executable is reduced by 5%. We're still tuning it with the bringup
of ThinLTO and it will evolve, but this should provide a good starting
point.

Reviewers: tejohnson

Differential Revision: http://reviews.llvm.org/D17115

From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 261029
2016-02-16 23:02:29 +00:00
Mehdi Amini
639bf1e488 Refactor the PassManagerBuilder: extract an "addFunctionSimplificationPasses()" (NFC)
It is intended to contain the passes run over a function after the
inliner is done with it and before it moves on to its callers.
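
A condensed sketch of what such a helper groups together (the real pass
list is much longer and conditional on optimization level, and legacy
pass creators vary across LLVM versions):

    #include "llvm/IR/LegacyPassManager.h"
    #include "llvm/Transforms/InstCombine/InstCombine.h"
    #include "llvm/Transforms/Scalar.h"
    using namespace llvm;

    static void addFunctionSimplificationPasses(legacy::PassManagerBase &PM) {
      PM.add(createSROAPass());                 // split up aggregates
      PM.add(createEarlyCSEPass());             // cheap redundancy removal
      PM.add(createJumpThreadingPass());
      PM.add(createInstructionCombiningPass()); // peephole cleanups
    }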

From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 261028
2016-02-16 22:54:27 +00:00
Adam Nemet
f3d8c27701 Fix test from r261013
llvm-svn: 261027
2016-02-16 22:50:19 +00:00
Simon Pilgrim
3b4ddae9de [X86][AVX] Regenerated vselect tests
llvm-svn: 261026
2016-02-16 22:33:27 +00:00
Ahmed Bougacha
c1f422afe6 [X86] Remove the now-unused X86ISD::PSIGN. NFC.
llvm-svn: 261025
2016-02-16 22:14:12 +00:00
Ahmed Bougacha
0b74af0c16 [X86] Generalize logic blend of (x, -x) combine to match (-x, x).
I suspect this is what let PR26110 lie dormant for so long.

llvm-svn: 261024
2016-02-16 22:14:07 +00:00
Ahmed Bougacha
c6b1c28e14 [X86] Don't turn (c?-v:v) into (c?-v:0) by blindly using PSIGN.
Currently, we sometimes miscompile this vector pattern:
    (c ? -v : v)
We lower it to (because "c" is <4 x i1>, lowered as a vector mask):
    (~c & v) | (c & -v)

When we have SSSE3, we incorrectly lower that to PSIGN, which does:
    (c < 0 ? -v : c > 0 ? v : 0)
in other words, when c is either all-ones or all-zero:
    (c ? -v : 0)
While this is an old bug, it rarely triggers because the PSIGN combine
is too sensitive to operand order. This will be improved separately.

Note that the PSIGN tests are also incorrect. Consider:
    %b.lobit = ashr <4 x i32> %b, <i32 31, i32 31, i32 31, i32 31>
    %sub = sub nsw <4 x i32> zeroinitializer, %a
    %0 = xor <4 x i32> %b.lobit, <i32 -1, i32 -1, i32 -1, i32 -1>
    %1 = and <4 x i32> %a, %0
    %2 = and <4 x i32> %b.lobit, %sub
    %cond = or <4 x i32> %1, %2
    ret <4 x i32> %cond
if %b is zero:
    %b.lobit = <4 x i32> zeroinitializer
    %sub = sub nsw <4 x i32> zeroinitializer, %a
    %0 = <4 x i32> <i32 -1, i32 -1, i32 -1, i32 -1>
    %1 = <4 x i32> %a
    %2 = <4 x i32> zeroinitializer
    %cond = or <4 x i32> %a, zeroinitializer
    ret <4 x i32> %a
whereas we currently generate:
    psignd %xmm1, %xmm0
    retq
which returns 0, as %xmm1 is 0.

Instead, use a pure logic sequence, as described in:
https://graphics.stanford.edu/~seander/bithacks.html#ConditionalNegate
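
The scalar form of that sequence, for reference (applied lanewise in the
vector lowering; m is all-ones to negate, all-zero to pass v through):

    #include <cstdint>

    int32_t condNegate(int32_t v, int32_t m) {
      return (v ^ m) - m; // m == -1: ~v + 1 == -v;  m == 0: v
    }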

Fixes PR26110.

Differential Revision: http://reviews.llvm.org/D17181

llvm-svn: 261023
2016-02-16 22:14:03 +00:00
Ahmed Bougacha
f8020709f1 [X86] Extract PSIGN/BLENDVP tests into vector-blend.ll. NFC.
We're going to stop generating PSIGN, so calling a test "psign"
isn't ideal. Instead, call these tests what they really are:
variable blends using logic.
Also add a test to exhibit a case we're currently missing in
the PSIGN combine.

llvm-svn: 261022
2016-02-16 22:13:59 +00:00
Ahmed Bougacha
7fc85e4b5a [X86] Extract PSIGN/BLENDVP combine. NFC.
llvm-svn: 261021
2016-02-16 22:13:55 +00:00
Ahmed Bougacha
8f9b9ee793 [X86] Extract ANDNP combine. NFC.
This makes it IMO more readable and reduces indentation.

llvm-svn: 261020
2016-02-16 22:13:49 +00:00
Mehdi Amini
7adcf03313 Bitcode writer: fix a typo, using getName() instead of getSourceFileName()
When emitting the source filename, the encoding of the string
was checked against the name instead of the filename.

From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 261019
2016-02-16 22:07:03 +00:00
Derek Schuff
a5f71b59fb [WebAssembly] Update torture test expectations
These were fixed with r260978

llvm-svn: 261017
2016-02-16 21:52:06 +00:00
Reid Kleckner
4b86dc7a52 [codeview] Bail on a DBG_VALUE register operand with no register
This apparently comes up when the register allocator decides that a
variable will become undef along a certain path.
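
Roughly the guard involved, as a hedged sketch (names hypothetical): a
register operand of 0 (%noreg) describes no location.

    #include "llvm/CodeGen/MachineInstr.h"
    using namespace llvm;

    static bool describesNoLocation(const MachineInstr &DbgValue) {
      const MachineOperand &MO = DbgValue.getOperand(0);
      return MO.isReg() && MO.getReg() == 0; // %noreg: undef on this path
    }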

Also improve the error message we emit when we can't map from LLVM
register number to CV register number.

llvm-svn: 261016
2016-02-16 21:49:26 +00:00
Derek Schuff
b9542a8754 [WebAssembly] Don't move calls or stores past intervening loads
The register stackifier currently checks for intervening stores (and
loads that may alias them) but doesn't account for the fact that the
instruction being moved may affect intervening loads.
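
Schematically (simplified; the real check also consults alias
information): the move must be rejected when the moved instruction may
write memory that an intervening instruction may read.

    #include "llvm/CodeGen/MachineInstr.h"
    using namespace llvm;

    // Conservative sketch: a write in the moved instruction plus a read
    // in an intervening one is a hazard, alongside the existing checks
    // for intervening stores.
    static bool writeReadHazard(const MachineInstr &Moved,
                                const MachineInstr &Between) {
      return Moved.mayStore() && Between.mayLoad();
    }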

Differential Revision: http://reviews.llvm.org/D17298

llvm-svn: 261014
2016-02-16 21:44:19 +00:00
Adam Nemet
e39424112f [LTO] Support Statistics
Summary:
I thought -Xlinker -mllvm -Xlinker -stats worked at some point but maybe
it never did.

For clang, I believe that stats are printed from cc1_main.  This patch
also prints them for LTO, specifically right after codegen happens.

I only looked at the C API for LTO briefly to see if this is a good
place.  Probably there are still cases where this wouldn't be printed
but it seems to be working for the common case.  I also experimented
putting this in the LTOCodeGenerator destructor but that didn't trigger
for me because ld64 does not destroy the LTOCodeGenerator.

Reviewers: dexonsmith, joker.eph

Subscribers: rafael, joker.eph, llvm-commits

Differential Revision: http://reviews.llvm.org/D17302

llvm-svn: 261013
2016-02-16 21:41:51 +00:00
Reid Kleckner
eaf9090d89 [codeview] Fix assertion on non-memory, non-register DBG_VALUE instructions
Eventually we should find a way to describe constant variables, but it
is not obvious how to do this at the moment.

llvm-svn: 261010
2016-02-16 21:14:51 +00:00
Colin LeMahieu
d875c88104 [Hexagon] Add a relocation for code-size/cold-path optimization, allowing a 23-bit, 4-byte-aligned relocation to be a valid instruction encoding.
The usual way to get a 32-bit relocation is to use a constant extender which doubles the size of the instruction, 4 bytes to 8 bytes.

Another way is to put a .word32 and mix code and data within a function.  The disadvantage is it's not a valid instruction encoding and jumping over it causes prefetch stalls inside the hardware.

This relocation packs a 23-bit value into an "r0 = add(rX, #a)" instruction by overwriting the source register bits.  Since r0 is the return value register, if this instruction is placed after a function call which returns void, r0 will be filled with an undefined value, the prefetcher won't be confused, and the callee can access the constant value by way of the link register.

llvm-svn: 261006
2016-02-16 20:38:17 +00:00
Jun Bum Lim
bf77014eda [AArch64] Add pass to remove redundant copy after RA
Summary:
This change will add a pass to remove unnecessary zero copies in target blocks
of cbz/cbnz instructions. E.g., the copy instruction in the code below can be
removed because the cbz jumps to BB1 when x0 is zero:
  BB0:
    cbz x0, .BB1
  BB1:
    mov x0, xzr

Reviewers: gberry, jmolloy, HaoLiu, MatzeB, mcrosier

Subscribers: mcrosier, mssimpso, haicheng, bmakam, llvm-commits, aemerson, rengolin

Differential Revision: http://reviews.llvm.org/D16203

llvm-svn: 261004
2016-02-16 20:02:39 +00:00
Quentin Colombet
aa2c5cf11c [GlobalISel] Re-apply r260922-260923 with MSVC-friendly code.
Original message:
Get rid of the ifdefs in TargetLowering.
Introduce a new API used only by GlobalISel: CallLowering.
This API will contain target hooks dedicated to call lowering.

llvm-svn: 260998
2016-02-16 19:26:02 +00:00