mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-22 12:33:33 +02:00
Commit Graph

94600 Commits

Kevin Enderby
207197ac9e Next set of additional error checks for invalid Mach-O files for bad load commands
that use the MachO::linkedit_data_command type for the load commands that are
currently used in the MachOObjectFile constructor.

This contains the missing checks for LC_DATA_IN_CODE and
LC_LINKER_OPTIMIZATION_HINT load commands and the fields for the
MachO::linkedit_data_command type.  Checking for other load commands that
use this type will be added later.

Also fixed a couple of places that were using sizeof(MachOObjectFile::LoadCommandInfo)
where they should have been using sizeof(MachO::load_command).

llvm-svn: 280267
2016-08-31 17:57:46 +00:00
Geoff Berry
561767ef01 [EarlyCSE] Allow forwarding a non-invariant load into an invariant load.
Reviewers: sanjoy

Subscribers: mcrosier, llvm-commits

Differential Revision: https://reviews.llvm.org/D23935

llvm-svn: 280265
2016-08-31 17:45:31 +00:00
Chad Rosier
ce78968521 [SLP] Update the debug based on Michael's suggestion.
Passing the types/opcode check still doesn't guarantee we'll actually vectorize.
Therefore, just make it clear we're attempting to vectorize.

llvm-svn: 280263
2016-08-31 17:41:12 +00:00
Chad Rosier
090d701872 [SLP] Sink debug after checking for matching types/opcode.
Differential Revision: https://reviews.llvm.org/D24090

llvm-svn: 280260
2016-08-31 17:31:09 +00:00
Davide Italiano
6c9ddd7257 [lib/LTO] Factor out logic for running passes.
This is in preparation for adding an option
to run a custom pipeline with the new PM. It's
currently used in lld.

llvm-svn: 280258
2016-08-31 17:02:44 +00:00
Tim Shen
830cc5f6f2 s/static inline/static/ for headers I have changed in r279475. NFC.
llvm-svn: 280257
2016-08-31 16:48:13 +00:00
Reid Kleckner
0669cf2688 [codeview] Emit vtable shape information
The shape of the vtable is passed down as the size of the
__vtbl_ptr_type. This special pointer type appears both as the pointee
type of the vptr type, and by itself in every dynamic class. For classes
with multiple vtables, only the shape of the primary vftable is
included, as the shape of all secondary vftables will be the same as in
the base class.

Fixes PR28150

llvm-svn: 280254
2016-08-31 15:59:30 +00:00
Philip Reames
686905fa4d [statepoints][experimental] Add support for live-in semantics of values in deopt bundles
This is a first step towards supporting deopt value lowering and reporting entirely with the register allocator. I hope to build on this in the near future to support live-on-return semantics, but I have a use case which allows me to test and investigate code quality with just the live-in semantics so I've chosen to start there. For those curious, my use cases is our implementation of the "__llvm_deoptimize" function we bind to @llvm.deoptimize. I'm choosing not to hard code that fact in the patch and instead make it configurable via function attributes.

The basic approach here is modelled on what is done for the "Live In" values on stackmaps and patchpoints. (A secondary goal here is to remove one of the last barriers to merging the pseudo instructions.) We start by adding the operands directly to the STATEPOINT SDNode. Once we've lowered to MI, we extend the remat logic used by the register allocator to fold virtual register uses into StackMap::Indirect entries as needed. This does rely on the fact that the register allocator rematerializes. If it didn't along some code path, we could end up with more vregs than physical registers and fail to allocate.

Today, we *only* fold in the register allocator. This can create some weird effects when combined with arguments passed on the stack because we don't fold them appropriately. I have an idea how to fix that, but it needs this patch in place to work on that effectively. (There's some weird interaction with the scheduler as well, more investigation needed.)

My near term plan is to land this patch off-by-default, experiment in my local tree to identify any correctness issues and then start fixing codegen problems one by one as I find them. Once I have the live-in lowering fully working (both correctness and code quality), I'm hoping to move on to the live-on-return semantics. Note: I don't have any *known* miscompiles with this patch enabled, but I'm pretty sure I'll find at least a couple. Thus, the "experimental" tag and the fact it's off by default.

Differential Revision: https://reviews.llvm.org/D24000

llvm-svn: 280250
2016-08-31 15:12:17 +00:00
Simon Pilgrim
a931b633c1 [X86][SSE] Improve awareness of (v)cvtpd2ps implicit zeroing of upper 64-bits of xmm result
Associate x86_sse2_cvtpd2ps with X86ISD::VFPROUND to avoid inserting unnecessary zeroing shuffles.

Differential Revision: https://reviews.llvm.org/D23797

llvm-svn: 280249
2016-08-31 15:09:34 +00:00
Chad Rosier
66d09c898e [SLP] Arguments should be camel case, and start with an upper case letter. NFC.
llvm-svn: 280248
2016-08-31 15:06:58 +00:00
Sjoerd Meijer
b739719e02 Clang patch r280064 introduced ways to set the FP exceptions and denormal
types. This is the LLVM counterpart and it adds options that map onto FP
exceptions and denormal build attributes allowing better fp math library
selections.

Differential Revision: https://reviews.llvm.org/D24070

llvm-svn: 280246
2016-08-31 14:17:38 +00:00
Krzysztof Parzyszek
5d37c679be Fixed spill stack objects are mutable
Differential Revision: https://reviews.llvm.org/D24039

llvm-svn: 280244
2016-08-31 13:52:17 +00:00
James Molloy
6a93446586 Revert "[SimplifyCFG] Improve FoldValueComparisonIntoPredecessors to handle more cases"
This reverts commit r280218. This *also* causes buildbot errors. Sigh. Not a successful day all around!

llvm-svn: 280239
2016-08-31 13:32:28 +00:00
James Molloy
0cb02140c1 Revert "[SimplifyCFG] Change the algorithm in SinkThenElseCodeToEnd"
This reverts commit r280216 - it caused buildbot failures.

llvm-svn: 280234
2016-08-31 13:16:52 +00:00
James Molloy
7f2f60f9b1 Revert "[SimplifyCFG] Handle tail-sinking of more than 2 incoming branches"
This reverts commit r280217. r280216 caused buildbot failures - backing out the entire chain.

llvm-svn: 280233
2016-08-31 13:16:45 +00:00
James Molloy
2366f91976 Revert "[SimplifyCFG] Add a workaround to fix PR30188"
This reverts commit r280219. r280216 caused buildbot failures - backing out the entire chain.

llvm-svn: 280232
2016-08-31 13:16:36 +00:00
James Molloy
351927e4cd Revert "[SimplifyCFG] Fix bootstrap failure after r280220"
This reverts commit r280228. r280216 caused buildbot failures - backing out the entire sequence.

llvm-svn: 280231
2016-08-31 13:16:30 +00:00
Diana Picus
da60ac4ab8 Use abstraction in AArch64AsmPrinter::lowerSTACKMAP. NFCI
Use functionality from StackMapOpers instead of hardcoding an operand access.

llvm-svn: 280230
2016-08-31 12:43:49 +00:00
Diana Picus
13a702a2c8 Typo fixes. NFC
llvm-svn: 280229
2016-08-31 12:43:44 +00:00
James Molloy
5252babd4e [SimplifyCFG] Fix bootstrap failure after r280220
We check that a sinking candidate is used by only one PHI node during our legality checks. However, for instructions that are used by other sinking candidates, our heuristic is less conservative. This can result in a candidate actually being illegal when we come to sink it, because of how we sunk a predecessor. Do the used-by-only-one-PHI checks again during sinking to ensure we don't crash.

llvm-svn: 280228
2016-08-31 12:33:48 +00:00
Nikolay Haustov
3027d29492 AMDGPU/SI: Handle aliases in AMDGPUAlwaysInlinePass
Summary:
Simply replace uses of aliases to functions with the aliasee. This came up
when bitcode linking against the builtin library left calls to aliases
unresolved.

Also made minor improvements to existing test.

Reviewers: tstellarAMD, alex-t, vpykhtin

Subscribers: arsenm, wdng, rampitec

Differential Revision: https://reviews.llvm.org/D24023

llvm-svn: 280221
2016-08-31 11:18:33 +00:00
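
A minimal sketch of the replacement described in the commit above, assuming the usual Module/GlobalAlias APIs (illustrative only, not the actual pass code; error handling and the function-only filtering are elided):

    #include "llvm/ADT/SmallVector.h"
    #include "llvm/IR/GlobalAlias.h"
    #include "llvm/IR/Module.h"

    // Replace every use of a global alias with the aliased value,
    // then drop the alias itself.
    static void replaceAliasesWithAliasee(llvm::Module &M) {
      llvm::SmallVector<llvm::GlobalAlias *, 8> Aliases;
      for (llvm::GlobalAlias &GA : M.aliases())
        Aliases.push_back(&GA);
      for (llvm::GlobalAlias *GA : Aliases) {
        GA->replaceAllUsesWith(GA->getAliasee());
        GA->eraseFromParent();
      }
    }
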
James Molloy
323dc48eb8 [SimplifyCFG] Add a workaround to fix PR30188
We're sinking stores, which is a good thing, but in the process creating selects for the store address operand, which SROA/Mem2Reg can't look through, which caused serious regressions.

The real fix is in SROA, which I'll be looking into.

llvm-svn: 280219
2016-08-31 10:46:45 +00:00
James Molloy
97b3566778 [SimplifyCFG] Improve FoldValueComparisonIntoPredecessors to handle more cases
A very important case is not handled here: multiple arcs to a single block with a PHI. Consider:

    a:
      %1 = icmp %b, 1
      br %1, label %c, label %e
    c:
      %2 = icmp %b, 2
      br %2, label %d, label %e
    d:
      br %e
    e:
      phi [0, %a], [1, %c], [2, %d]

FoldValueComparisonIntoPredecessors will refuse to fold this, as it doesn't know how to deal with two arcs to a common destination with different PHI values. The answer is obvious - just split all conflicting arcs.

llvm-svn: 280218
2016-08-31 10:46:39 +00:00
James Molloy
bfb4ed7a07 [SimplifyCFG] Handle tail-sinking of more than 2 incoming branches
This was a real restriction in the original version of SinkIfThenCodeToEnd. Now that it's been rewritten, the restriction can be lifted.

As part of this, we handle a very common and useful case where one of the incoming branches is actually conditional. Consider:

   if (a)
     x(1);
   else if (b)
     x(2);

This produces the following CFG:

         [if]
        /    \
      [x(1)] [if]
        |     | \
        |     |  \
        |  [x(2)] |
         \    |  /
          [ end ]

[end] has two unconditional predecessor arcs and one conditional. The conditional refers to the implicit empty 'else' arc. This same pattern can also be caused by an empty default block in a switch.

We can't sink the call to x() down to end because no call to x() happens on the third incoming arc (assume that x() has side effects for the sake of argument; if something is safe to speculate we could indeed sink it nevertheless, but this cannot happen in the general case and would cause many extra selects).

We are now able to detect this case and split off the unconditional arcs to a common successor:

         [if]
        /    \
      [x(1)] [if]
        |     | \
        |     |  \
        |  [x(2)] |
         \   /    |
     [sink.split] |
           \     /
           [ end ]

Now we can sink the call to x() into %sink.split. This can cause significant code simplification in many testcases.

llvm-svn: 280217
2016-08-31 10:46:33 +00:00
James Molloy
0803b4eca8 [SimplifyCFG] Change the algorithm in SinkThenElseCodeToEnd
r279460 rewrote this function to be able to handle more than two incoming edges and took pains to ensure this didn't regress anything.

This time we change the logic for determining if an instruction should be sunk. Previously we used a single pass greedy algorithm - sink instructions until one requires more than one PHI node or we run out of instructions to sink.

This had the problem that sinking instructions whose operands were non-identical but trivially the same required extra logic before we could sink them aggressively. For example:

    %a = load i32* %b          %d = load i32* %b
    %c = gep i32* %a, i32 0    %e = gep i32* %d, i32 1

Sinking %c and %e would naively require two PHI merges as %a != %d. But the loads are obviously equivalent (and maybe can't be hoisted because there is no common predecessor).

This is why we implemented the fairly complex function areValuesTriviallySame(), to look through trivial differences like this. However it's just not clever enough.

Instead, throw areValuesTriviallySame away, use pointer equality to check equivalence of operands and switch to a two-stage algorithm.

In the "scan" stage, we look at every sinkable instruction in isolation from end of block to front. If it's sinkable, we keep track of all operands that required PHI merging.

In the "sink" stage, we iteratively sink the last non-terminator in the source blocks. But when calculating how many PHIs are actually required to be inserted (to work out if we should stop or not) we remove any values that have already been sunk from the set of PHI-merges required, which allows us to be more aggressive.

This turns an algorithm with potentially recursive lookahead (looking through GEPs, casts, loads and any other instruction potentially not CSE'd) to two linear scans.

llvm-svn: 280216
2016-08-31 10:46:23 +00:00
James Molloy
6315c274f3 [SimplifyCFG] Tail-merge calls with sideeffects
This was deliberately disabled during my rewrite of SinkIfThenToEnd to keep behaviour
at least vaguely consistent with the previous version and keep it as close to NFC as
I could.

There's no real reason not to merge sideeffect calls though, so let's do it! Small fixup
along the way to ensure we don't create indirect calls.

Should fix PR28964.

llvm-svn: 280215
2016-08-31 10:46:16 +00:00
Simon Pilgrim
dcb63f85f3 [X86][SSE] Improve awareness of fptrunc implicit zeroing of upper 64-bits of xmm result
Add patterns to avoid inserting unnecessary zeroing shuffles when lowering fptrunc to (v)cvtpd2ps

Differential Revision: https://reviews.llvm.org/D23797

llvm-svn: 280214
2016-08-31 10:35:13 +00:00
Igor Kudrin
972cd75cf9 [Coverage] Make sorting criteria for CounterMappingRegions local.
Move the comparison function into the only place there it is used,
i.e. the call to std::stable_sort in CoverageMappingWriter::write().

Add sorting by region kinds as it is required to ensure stable order
in our tests and to simplify D23987.

Differential Revision: https://reviews.llvm.org/D24034

llvm-svn: 280198
2016-08-31 07:01:17 +00:00
Craig Topper
6de2ad51c7 [AVX-512] Add patterns to select masked logical operations if the select has a floating point type.
This is needed in order to replace the masked floating point logical op intrinsics with native IR.

llvm-svn: 280195
2016-08-31 05:37:52 +00:00
Dean Michael Berris
63093a6577 [XRay] Support multiple return instructions in a single basic block
Add a .mir test to catch this case, and fix the xray-instrumentation
pass to handle it appropriately.

llvm-svn: 280192
2016-08-31 05:20:08 +00:00
David Majnemer
d770d3137d [Loads] Properly populate the visited set in isDereferenceableAndAlignedPointer
There were paths where we wouldn't populate the visited set, causing us
to recurse forever if an SSA variable was defined in terms of itself.

This fixes PR30210.

llvm-svn: 280191
2016-08-31 03:22:32 +00:00
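
A generic illustration of the pattern the fix restores (a sketch, not the actual Loads.cpp code): marking a value as visited before recursing is what guarantees termination when an SSA value is (transitively) defined in terms of itself.

    #include <unordered_set>

    struct Value { const Value *Underlying; };  // hypothetical stand-in for an SSA value

    // Walk the chain of underlying values; the early insert into Visited
    // breaks cycles such as a value defined in terms of itself.
    bool walkUnderlying(const Value *V, std::unordered_set<const Value *> &Visited) {
      if (!Visited.insert(V).second)
        return false;                 // already seen: stop instead of recursing forever
      if (V->Underlying)
        return walkUnderlying(V->Underlying, Visited);
      return true;                    // reached the end of the chain
    }
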
Hal Finkel
cddd83b2d6 [PowerPC] Don't spill the frame pointer twice
When a function contains something, such as inline asm, which explicitly
clobbers the register used as the frame pointer, don't spill it twice. If we
need a frame pointer, it will be saved/restored in the prologue/epilogue code.
Explicitly spilling it again will reuse the same spill slot used by the
prologue/epilogue code, thus clobbering the saved value. The same applies
to the base-pointer or PIC-base register.

Partially fixes PR26856. Thanks to Ulrich for his analysis and the small
inline-asm reproducer.

llvm-svn: 280188
2016-08-31 00:52:03 +00:00
Gor Nishanov
2f5f55d9bd [Coroutines] Part 10: Add coroutine promise support.
Summary:
1) CoroEarly now lowers the llvm.coro.promise intrinsic, which allows obtaining
a coroutine promise pointer from a coroutine frame and vice versa.

2) CoroFrame now interprets the Promise argument of llvm.coro.begin to
place the CoroutinePromise alloca at a deterministic offset from the coroutine frame.

Now, the coroutine promise example from docs\Coroutines.rst compiles and produces expected result (see test/Transform/Coroutines/ex4.ll).

Reviewers: majnemer

Subscribers: llvm-commits, mehdi_amini

Differential Revision: https://reviews.llvm.org/D23993

llvm-svn: 280184
2016-08-31 00:35:41 +00:00
Sanjay Patel
ad21d55ed2 [InstCombine] clean up InsertRangeTest; NFCI
It's much less code and easier to read if we don't duplicate
everything between the 'Inside' and not 'Inside' cases.

As noted with the FIXME, the goal is to make this vector-friendly
in a follow-up patch.

llvm-svn: 280183
2016-08-31 00:19:35 +00:00
Alina Sbirlea
82dac894c9 [LoadStoreVectorizer] Change VectorSet to Vector to match head and tail positions. Resolves PR29148.
Summary:
LSV was using two vector sets (heads and tails) to track pairs of adjacent positions to vectorize.
A recent optimization tries to obtain the longest chain to vectorize and assumes the positions
in heads (H) and tails (T) match, which is not the case if there are multiple tails for the same head.

e.g.:
i1: store a[0]
i2: store a[1]
i3: store a[1]
Leads to:
H: i1
T: i2 i3
Instead of:
H: i1 i1
T: i2 i3
So the positions for instructions that follow i3 will have different indexes in H/T.
This patch resolves PR29148.

This issue also surfaced the fact that if the chain is too long, and TLI
returns a "not-fast" answer, the whole chain will be abandoned for
vectorization, even though a smaller one would be beneficial.
Added a testcase and FIXME for this.

Reviewers: tstellarAMD, arsenm, jlebar

Subscribers: mzolotukhin, wdng, llvm-commits

Differential Revision: https://reviews.llvm.org/D24057

llvm-svn: 280179
2016-08-30 23:53:59 +00:00
Reid Kleckner
ee70f4de4f [codeview] Remove redundant TypeTable lookup
As written, the code should assert if this lookup would have ever
succeeded.  Without looking through composite types, the type graph
should be acyclic.

llvm-svn: 280168
2016-08-30 21:48:14 +00:00
Kevin Enderby
f77b75655a Next set of additional error checks for invalid Mach-O files for bad LC_DYSYMTAB’s.
This contains the missing checks for LC_DYSYMTAB load command fields.

llvm-svn: 280161
2016-08-30 21:28:30 +00:00
Tim Northover
aaa2a927bc GlobalISel: combine extracts & sequences created for legalization
Legalization ends up creating many G_SEQUENCE/G_EXTRACT pairs which leads to
inefficient codegen (even for -O0), so add a quick pass over the function to
remove them again.

llvm-svn: 280155
2016-08-30 20:51:25 +00:00
Matt Arsenault
ea8814c4f7 AMDGPU: Relax SGPR asm constraint register class
The 's' constraint should use SReg_32 to be as general as possible. This can avoid a copy
from m0.

llvm-svn: 280154
2016-08-30 20:50:08 +00:00
Mike Aizatsky
b9c35c415b [libfuzzer] simplified unit truncation; do not write trunc items to disc
Differential Revision: https://reviews.llvm.org/D24049

llvm-svn: 280153
2016-08-30 20:49:07 +00:00
Michael Kuperstein
39c8e8b0c7 [LoopVectorizer] Predicate instructions in blocks with several incoming edges
We don't need to limit predication to blocks that have a single incoming
edge, we just need to use the right mask.
This fixes PR30172.

Differential Revision: https://reviews.llvm.org/D24009

llvm-svn: 280148
2016-08-30 20:22:21 +00:00
David Majnemer
7622ad4008 [COFFObjectFile] Ignore broken symbol table
When binaries are compressed by UPX, the information about the symbol table
offset and symbol count remains unchanged (but becomes invalid due to
compression).
This causes a failure in the constructor, and the rest of the binary cannot
be processed.

Instead, reset the symbol-related information (symbol/string table pointers,
sizes) - this should disable the related iterators and functions while
the rest of the binary can still be processed.

Patch by Bandzi Michal!

llvm-svn: 280147
2016-08-30 20:20:24 +00:00
Duncan P. N. Exon Smith
1463620411 CodeGen: Fixup for r280128, since GCC isn't as permissive as Clang
Fixes the bots, e.g.:
http://lab.llvm.org:8011/builders/lldb-x86_64-ubuntu-14.04-buildserver/builds/10055

llvm-svn: 280135
2016-08-30 19:11:11 +00:00
Tim Northover
4f03f012c7 GlobalISel: forbid physical registers on generic MIs.
We're intending to move to a world where the type of a register is determined
by its (unique) def. This is incompatible with physregs, which are untyped.

It also means the other passes don't have to worry quite so much about
register-class compatibility and inserting COPYs appropriately.

llvm-svn: 280132
2016-08-30 18:52:46 +00:00
Duncan P. N. Exon Smith
9df0e64f25 ADT: Split ilist_node_traits into alloc and callback, NFC
Many lists want to override only allocation semantics, or callbacks for
iplist.  Split these up to prevent code duplication.
- Specialize ilist_alloc_traits to change the implementations of
  deleteNode() and createNode().
- One common desire is to do nothing in deleteNode() and disable
  createNode().  Specialize ilist_alloc_traits to inherit from
  ilist_noalloc_traits for that behaviour.
- Specialize ilist_callback_traits to use the addNodeToList(),
  removeNodeFromList(), and transferNodesFromList() callbacks.

As a drive-by, add some coverage to the callback-related unit tests.

llvm-svn: 280128
2016-08-30 18:40:47 +00:00
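
A sketch of the two specialization points named in the commit above (the trait and hook names come from the commit; the node type and exact signatures are illustrative and may differ slightly between LLVM versions):

    #include "llvm/ADT/ilist.h"
    #include "llvm/ADT/ilist_node.h"

    struct MyNode : llvm::ilist_node<MyNode> { int Value = 0; };

    namespace llvm {
    // Allocation behaviour only: make deleteNode() a no-op and disable
    // createNode() by inheriting from ilist_noalloc_traits.
    template <> struct ilist_alloc_traits<MyNode> : ilist_noalloc_traits<MyNode> {};

    // Callback behaviour only: observe insertions, removals and transfers.
    template <> struct ilist_callback_traits<MyNode> {
      void addNodeToList(MyNode *) {}
      void removeNodeFromList(MyNode *) {}
      template <class Iterator>
      void transferNodesFromList(ilist_callback_traits &, Iterator, Iterator) {}
    };
    } // namespace llvm
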
Kyle Butt
6a05633b20 TailDuplication: Extract Indirect-Branch block limit as option. NFC
The existing code hard-coded a limit of 20 instructions for duplication
when a block ended with an indirect branch. Extract this as an option.
No functional change intended.

llvm-svn: 280125
2016-08-30 18:18:54 +00:00
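
For readers unfamiliar with how such limits are usually exposed, a hedged sketch using LLVM's cl::opt machinery follows; the flag name and description here are made up, and only the default of 20 comes from the commit above.

    #include "llvm/Support/CommandLine.h"

    // Hypothetical flag name; the real option added by the commit may differ.
    static llvm::cl::opt<unsigned> TailDupIndirectBranchSize(
        "tail-dup-indirect-branch-size",
        llvm::cl::desc("Maximum number of instructions to duplicate into a "
                       "block that ends with an indirect branch"),
        llvm::cl::init(20), llvm::cl::Hidden);
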
Duncan P. N. Exon Smith
fa87381dee ADT: Guarantee transferNodesFromList is only called on transfers
Guarantee that ilist_traits<T>::transferNodesFromList is only called
when nodes are actually changing lists.

I also moved all the callbacks to occur *first*, before the operation.
This is the only choice for iplist<T>::merge, so we might as well be
consistent.  I expect this to have no effect in practice, although it
simplifies the logic in both iplist<T>::transfer and iplist<T>::insert.

llvm-svn: 280122
2016-08-30 18:00:45 +00:00
Sanjay Patel
a2b59dfe55 [InstCombine] replace divide-by-constant checks with asserts; NFC
These folds already have tests for scalar and vector types, except 
for the vector div-by-0 case, so I'm adding tests for that.

llvm-svn: 280115
2016-08-30 17:31:34 +00:00
Sanjay Patel
f098368a5a [InstCombine] clean up foldICmpDivConstant; NFCI
1. Fix comments to match variable names
2. Remove redundant CmpRHS variable
3. Add FIXME to replace some checks with asserts

llvm-svn: 280112
2016-08-30 17:10:49 +00:00
NAKAMURA Takumi
35c1fea391 Fixup r279618, instantiate *AnalysisManagerProxy<*AnalysisManager,LazyCallGraph::SCC>, instead of *AnalysisManagerProxy<*AnalysisManager,LazyCallGraph::SCC,LazyCallGraph&>, for PassID.
Or they were not instantiated as expected;

  llvm::InnerAnalysisManagerProxy<llvm::AnalysisManager<llvm::Function>, llvm::LazyCallGraph::SCC>::PassID
  llvm::InnerAnalysisManagerProxy<llvm::AnalysisManager<llvm::Function>, llvm::LazyCallGraph::SCC>::PassID

llvm-svn: 280105
2016-08-30 15:47:13 +00:00
Valery Pykhtin
7ff35e6fee [AMDGPU] Refactor SOP instructions TD files.
Differential revision: https://reviews.llvm.org/D23617

llvm-svn: 280101
2016-08-30 15:20:31 +00:00
Kostya Serebryany
733e18adcb [libFuzzer] fix a bug where running a single unit of N bytes with -max_len=M, M<N, caused a buffer overflow
llvm-svn: 280098
2016-08-30 14:52:05 +00:00
Kostya Serebryany
1d077e5054 [libFuzzer] stop using bits for memcmp's value profile -- seems to blow up the corpus too much
llvm-svn: 280096
2016-08-30 14:39:33 +00:00
Nirav Dave
0a2093c3a1 [MC] Move parser helper functions from Asmparser to MCAsmParser
NFC Intended.

llvm-svn: 280092
2016-08-30 14:15:43 +00:00
Chad Rosier
a507852652 [Reassociate] Add additional debug output. NFC.
llvm-svn: 280090
2016-08-30 13:58:35 +00:00
NAKAMURA Takumi
0be0197c94 SILoadStoreOptimizer.cpp: Fix a warning in r279991. [-Wunused-variable]
llvm-svn: 280075
2016-08-30 11:50:21 +00:00
James Molloy
7f284c1512 [SimplifyCFG] Properly CSE metadata in SinkThenElseCodeToEnd
This was missing, meaning the metadata in sunk instructions was potentially bogus and could cause miscompiles.

llvm-svn: 280072
2016-08-30 10:56:08 +00:00
James Y Knight
67397c2b8d Replace incorrect "#ifdef DEBUG" with "#ifndef NDEBUG".
The former is simply wrong -- the code will either never be used or will
always be used, rather than being dependent upon whether it's built with
debug assertions enabled.

The macro DEBUG isn't ever set by the llvm build system. But, the macro
DEBUG(X) is defined (unconditionally) if you happen to include
llvm/Support/Debug.h.

The code in Value.h which was erroneously protected by the #ifdef DEBUG
didn't even compile -- you can't cast<> from an LLVMOpaqueValue
directly. Fortunately, it was never invoked, as Core.cpp included
Value.h before Debug.h.

The conditionalized code in AArch64CollectLOH.cpp was previously always
used, as it includes Debug.h.

llvm-svn: 280056
2016-08-30 03:16:16 +00:00
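
A small self-contained illustration of the pitfall (generic C++, not the affected LLVM files):

    #include <cassert>
    #include <cstdio>

    void check(int X) {
    #ifdef DEBUG
      // Wrong: the build system never defines DEBUG, so this block is either
      // always compiled out or accidentally enabled by an unrelated define.
      std::printf("checking %d\n", X);
    #endif
    #ifndef NDEBUG
      // Right: NDEBUG is the standard macro that also controls assert(), so
      // this block is present exactly in assertion-enabled builds.
      std::printf("checking %d (asserts enabled)\n", X);
    #endif
      assert(X >= 0 && "X must be non-negative");
    }
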
Kostya Serebryany
3620aadd00 [libFuzzer] use bits instead of bytes for memcmp/strcmp value profile -- the fuzzer reaches the goal much faster, at least on the simple puzzles
llvm-svn: 280054
2016-08-30 03:05:50 +00:00
Anna Thomas
69a1d2d4d0 [RewriteStatepointsForGC] Update comment for same PHI node check. NFC
llvm-svn: 280052
2016-08-30 02:36:48 +00:00
Hal Finkel
914a9afcd7 [PowerPC] Force entry alignment in .got2
Implement Bill's suggested fix for 32-bit targets for PR22711 (for the
alignment of each entry). As pointed out in the bug report, we could just force
the section alignment, since we only add pointer-sized things currently, but
this fix is somewhat more future-proof.

llvm-svn: 280049
2016-08-30 01:43:38 +00:00
Sanjoy Das
cd267c08fb Fix coding style; NFC
Avoid variables starting with lowercase.

llvm-svn: 280048
2016-08-30 01:38:59 +00:00
Kostya Serebryany
d305c04722 [libFuzzer] use trace-div and trace-gep for guided fuzzing, add tests
llvm-svn: 280046
2016-08-30 01:30:14 +00:00
Kostya Serebryany
a0e2bc22aa [sanitizer-coverage] add two more modes of instrumentation: trace-div and trace-gep, mostly useful for value-profile-based fuzzing; llvm part
llvm-svn: 280043
2016-08-30 01:12:10 +00:00
Hal Finkel
62419dd0e2 [PowerPC] Add support for -mlongcall
The "long call" option forces the use of the indirect calling sequence for all
calls (even those that don't really need it). GCC provides this option; it is
helpful, under certain circumstances, for building very large binaries and for
some other specialized use cases.

Fixes PR19098.

llvm-svn: 280040
2016-08-30 00:59:23 +00:00
Piotr Padlewski
f3f0f64ccf NFC: add early exit in ModuleSummaryAnalysis
Summary:
Changed this code because it was not very readable.
The one question that I got after changing it is, should we
count calls to intrinsics? We don't add them to the caller summary,
so maybe we shouldn't count them either?

Reviewers: tejohnson, eraman, mehdi_amini

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D23949

llvm-svn: 280036
2016-08-30 00:46:26 +00:00
Duncan P. N. Exon Smith
4e09f9bf86 ADT: Give ilist<T>::reverse_iterator a handle to the current node
Reverse iterators to doubly-linked lists can be simpler (and cheaper)
than std::reverse_iterator.  Make it so.

In particular, change ilist<T>::reverse_iterator so that it is *never*
invalidated unless the node it references is deleted.  This matches the
guarantees of ilist<T>::iterator.

(Note: MachineBasicBlock::iterator is *not* an ilist iterator, but a
MachineInstrBundleIterator<MachineInstr>.  This commit does not change
MachineBasicBlock::reverse_iterator, but it does update
MachineBasicBlock::reverse_instr_iterator.  See note at end of commit
message for details on bundle iterators.)

Given the list (with the Sentinel showing twice for simplicity):

     [Sentinel] <-> A <-> B <-> [Sentinel]

the following is now true:
 1. begin() represents A.
 2. begin() holds the pointer for A.
 3. end() represents [Sentinel].
 4. end() holds the pointer for [Sentinel].
 5. rbegin() represents B.
 6. rbegin() holds the pointer for B.
 7. rend() represents [Sentinel].
 8. rend() holds the pointer for [Sentinel].

The changes are #6 and #8.  Here are some properties from the old
scheme (which used std::reverse_iterator):
- rbegin() held the pointer for [Sentinel] and rend() held the pointer
  for A;
- operator*() cost two dereferences instead of one;
- converting from a valid iterator to its valid reverse_iterator
  involved a confusing increment; and
- "RI++->erase()" left RI invalid.  The unintuitive replacement was
  "RI->erase(), RE = end()".

With vector-like data structures these properties are hard to avoid
(since past-the-beginning is not a valid pointer), and don't impose a
real cost (since there's still only one dereference, and all iterators
are invalidated on erase).  But with lists, this was a poor design.

Specifically, the following code (which obviously works with normal
iterators) now works with ilist::reverse_iterator as well:

    for (auto RI = L.rbegin(), RE = L.rend(); RI != RE;)
      fooThatMightRemoveArgFromList(*RI++);

Converting between iterator and reverse_iterator for the same node uses
the getReverse() function.

    reverse_iterator iterator::getReverse();
    iterator reverse_iterator::getReverse();

Why doesn't iterator <=> reverse_iterator conversion use constructors?

In order to catch and update old code, reverse_iterator does not even
have an explicit conversion from iterator.  It wouldn't be safe because
there would be no reasonable way to catch all the bugs from the changed
semantic (see the changes at call sites that are part of this patch).

Old code used this API:

    std::reverse_iterator::reverse_iterator(iterator);
    iterator std::reverse_iterator::base();

Here's how to update from old code to new (that incorporates the
semantic change), assuming I is an ilist<>::iterator and RI is an
ilist<>::reverse_iterator:

            [Old]         ==>          [New]
    reverse_iterator(I)       (--I).getReverse()
    reverse_iterator(I)         ++I.getReverse()
  --reverse_iterator(I)           I.getReverse()
    reverse_iterator(++I)         I.getReverse()
          RI.base()          (--RI).getReverse()
          RI.base()            ++RI.getReverse()
        --RI.base()              RI.getReverse()
      (++RI).base()              RI.getReverse()
  delete &*RI, RE = end()         delete &*RI++
  RI->erase(), RE = end()         RI++->erase()

=======================================
Note: bundle iterators are out of scope
=======================================

MachineBasicBlock::iterator, also known as
MachineInstrBundleIterator<MachineInstr>, is a wrapper to represent
MachineInstr bundles.  The idea is that each operator++ takes you to the
beginning of the next bundle.  Implementing a sane reverse iterator for
this is harder than ilist.  Here are the options:
- Use std::reverse_iterator<MBB::i>.  Store a handle to the beginning of
  the next bundle.  A call to operator*() runs a loop (usually
  operator--() will be called 1 time, for unbundled instructions).
  Increment/decrement just works.  This is the status quo.
- Store a handle to the final node in the bundle.  A call to operator*()
  still runs a loop, but it iterates one time fewer (usually
  operator--() will be called 0 times, for unbundled instructions).
  Increment/decrement just works.
- Make the ilist_sentinel<MachineInstr> *always* store that it's the
  sentinel (instead of just in asserts mode).  Then the bundle iterator
  can sniff the sentinel bit in operator++().

I initially tried implementing the end() option as part of this commit,
but updating iterator/reverse_iterator conversion call sites was
error-prone.  I have a WIP series of patches that implements the final
option.

llvm-svn: 280032
2016-08-30 00:13:12 +00:00
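
A short sketch of the new usage, assuming the post-r280032 ADT headers (the node type is hypothetical; getReverse() and the reverse-iteration guarantees come from the commit above):

    #include "llvm/ADT/ilist.h"
    #include "llvm/ADT/ilist_node.h"
    #include <cassert>

    struct Foo : llvm::ilist_node<Foo> {
      int V;
      explicit Foo(int V) : V(V) {}
    };

    void demo() {
      llvm::ilist<Foo> List;               // owns and deletes its nodes
      List.push_back(new Foo(1));
      List.push_back(new Foo(2));

      // Reverse iterators now hold the pointer of the node they represent,
      // so they stay valid unless that node itself is deleted.
      int Sum = 0;
      for (auto RI = List.rbegin(), RE = List.rend(); RI != RE; ++RI)
        Sum += RI->V;

      // iterator <-> reverse_iterator conversion goes through getReverse()
      // and refers to the same node in both directions.
      auto I = List.begin();
      auto RI = I.getReverse();
      assert(&*RI == &*I && Sum == 3);
    }
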
Jan Vesely
1d384dd991 AMDGPU/R600: Cleanup DAGCombine
Move SDLoc initialization to a common place.
Fall back to the AMDGPU version in one place.

Differential Revision: https://reviews.llvm.org/D23900

llvm-svn: 280030
2016-08-29 23:21:46 +00:00
Michael Kuperstein
90d6b31c2b Fix typo in comment. NFC.
llvm-svn: 280025
2016-08-29 22:49:05 +00:00
Teresa Johnson
4391198150 [ThinLTO] Indirect call promotion fixes for promoted local functions
Summary:
Fix a couple issues limiting the application of indirect call promotion
in ThinLTO mode:
- Invoke indirect call promotion before globalopt, since it may
  eliminate imported functions which appear unreferenced.
- Invoke indirect call promotion with InLTO=true so that the PGOFuncName
  metadata is used to get the name for locals which would have been
  renamed during promotion.

Reviewers: davidxl, mehdi_amini

Subscribers: Prazek, llvm-commits, mehdi_amini

Differential Revision: https://reviews.llvm.org/D24004

llvm-svn: 280024
2016-08-29 22:46:56 +00:00
Hal Finkel
380cfab8df [PowerPC] Fix i8/i16 atomics for little-Endian targets without partword atomics
For little-Endian PowerPC, we generally target only P8 and later by default.
However, generic (older) 64-bit configurations are still an option, and in that
case, partword atomics are not available (e.g. stbcx.). To lower i8/i16 atomics
without true i8/i16 atomic operations, we emulate using i32 atomics in
combination with a bunch of shifting and masking, etc. The amount by which to
shift in little-Endian mode is different from the amount in big-Endian mode (it
is inverted -- meaning we can leave off the xor when computing the amount).

Fixes PR22923.

llvm-svn: 280022
2016-08-29 22:25:36 +00:00
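
A generic C++ illustration of the word-sized emulation described above (a sketch using a plain compare-exchange loop rather than the actual lwarx/stwcx. lowering; the byte-offset-to-shift computation is the part where little- and big-endian differ):

    #include <atomic>
    #include <cstdint>

    // Emulate an 8-bit atomic exchange using a 32-bit CAS loop on the aligned
    // word containing the byte. ByteOffset is the byte's position within that
    // word; on little-endian the shift is ByteOffset*8, on big-endian it is
    // (3 - ByteOffset)*8 -- the "inverted" amount mentioned in the commit.
    uint8_t atomicExchangeByte(std::atomic<uint32_t> *Word, unsigned ByteOffset,
                               uint8_t NewVal, bool LittleEndian) {
      unsigned Shift = (LittleEndian ? ByteOffset : 3 - ByteOffset) * 8;
      uint32_t Mask = 0xFFu << Shift;
      uint32_t Old = Word->load(std::memory_order_relaxed);
      uint32_t Desired;
      do {
        Desired = (Old & ~Mask) | (uint32_t(NewVal) << Shift);
      } while (!Word->compare_exchange_weak(Old, Desired,
                                            std::memory_order_seq_cst,
                                            std::memory_order_relaxed));
      return uint8_t((Old & Mask) >> Shift);
    }
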
Chad Rosier
834b605f85 [SLP] Return a boolean value for these static helpers. NFC.
Differential Revision: https://reviews.llvm.org/D24008

llvm-svn: 280020
2016-08-29 22:09:51 +00:00
Jan Vesely
3b720aa486 AMDGPU/R600: Remove MergeVectorStores from legalization
This is handled by DAGCombiner in a more generic way

Differential Revision: https://reviews.llvm.org/D23970

llvm-svn: 280019
2016-08-29 22:05:06 +00:00
Tim Northover
742261b303 GlobalISel: use multi-dimensional arrays for legalize actions.
Instead of putting all possible requests into a single table, we can perform
the extremely dense lookup based on opcode and type-index in constant time
using multi-dimensional array-like things.

This roughly halves the time spent doing legalization, which was dominated by
queries against the Actions table.

llvm-svn: 280011
2016-08-29 21:00:00 +00:00
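
A rough sketch of the data-structure idea, with made-up names and sizes (the real LegalizerInfo tables are more involved): indexing by opcode and type-index first makes the hot part of the query a pair of constant-time array accesses.

    #include <array>
    #include <cstdint>
    #include <map>

    enum class LegalizeAction : std::uint8_t {
      Legal, NarrowScalar, WidenScalar, Libcall, Custom, Unsupported
    };

    constexpr unsigned NumOpcodes = 64;   // hypothetical
    constexpr unsigned MaxTypeIdx = 4;    // hypothetical

    struct ActionTable {
      // Actions[Opcode][TypeIdx] maps a type key (e.g. bit width) to an action.
      std::array<std::array<std::map<unsigned, LegalizeAction>, MaxTypeIdx>,
                 NumOpcodes> Actions;

      LegalizeAction getAction(unsigned Opcode, unsigned TypeIdx,
                               unsigned BitWidth) const {
        const auto &ForType = Actions[Opcode][TypeIdx];  // constant-time part
        auto It = ForType.find(BitWidth);
        return It == ForType.end() ? LegalizeAction::Unsupported : It->second;
      }
    };
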
Easwaran Raman
08d34d7c85 Fix a thinko in r278189.
llvm-svn: 280008
2016-08-29 20:45:51 +00:00
Saleem Abdulrasool
b7be5c7c36 AMDGPU: fix mismatched tags, NFC
llvm-svn: 280006
2016-08-29 20:42:07 +00:00
Saleem Abdulrasool
367ba1e8b7 ExecutionEngine: fix a bug in the movt/movw relocator
According to the ARM ARM specification, 4 bytes are needed for the shift instead
of 8; this was causing the movt instruction to sometimes write to a different
register.

Patch by Walter Erquinigo!

llvm-svn: 280005
2016-08-29 20:42:03 +00:00
Matthew Simpson
5aacf5b410 [LV] Move insertelement sequence after scalar definitions
After r279649 when getting a vector value from VectorLoopValueMap, we create an
insertelement sequence on-demand if the value has been scalarized instead of
vectorized. We previously inserted this insertelement sequence before the
value's first vector user. However, this insert location is problematic if that
user is the phi node of a first-order recurrence. With this patch, we move the
insertelement sequence after the last scalar instruction we created when
scalarizing the value. Thus, the value's vector definition in the new loop will
immediately follow its scalar definitions. This should fix PR30183.

Reference: https://llvm.org/bugs/show_bug.cgi?id=30183
llvm-svn: 280001
2016-08-29 20:14:04 +00:00
Krzysztof Parzyszek
0b67800907 Propagate TBAA info in SelectionDAG::getIndexedLoad
Patch by Pranav Bhandarkar.

llvm-svn: 279998
2016-08-29 19:50:15 +00:00
Douglas Katzman
0455dc465b [Myriad]: add missing 'mcpu' values
Should have been done with r276646.

llvm-svn: 279996
2016-08-29 19:42:57 +00:00
Tom Stellard
d773f533b9 AMDGPU/SI: Implement a custom MachineSchedStrategy
Summary:
GCNSchedStrategy re-uses most of GenericScheduler; it just uses
a different method to compute the excess and critical register
pressure limits.

It's not enabled by default, to enable it you need to pass -misched=gcn
to llc.

Shader DB stats:

32464 shaders in 17874 tests
Totals:
SGPRS: 1542846 -> 1643125 (6.50 %)
VGPRS: 1005595 -> 904653 (-10.04 %)
Spilled SGPRs: 29929 -> 27745 (-7.30 %)
Spilled VGPRs: 334 -> 352 (5.39 %)
Scratch VGPRs: 1612 -> 1624 (0.74 %) dwords per thread
Code Size: 36688188 -> 37034900 (0.95 %) bytes
LDS: 1913 -> 1913 (0.00 %) blocks
Max Waves: 254101 -> 265125 (4.34 %)
Wait states: 0 -> 0 (0.00 %)

Totals from affected shaders:
SGPRS: 1338220 -> 1438499 (7.49 %)
VGPRS: 886221 -> 785279 (-11.39 %)
Spilled SGPRs: 29869 -> 27685 (-7.31 %)
Spilled VGPRs: 334 -> 352 (5.39 %)
Scratch VGPRs: 1612 -> 1624 (0.74 %) dwords per thread
Code Size: 34315716 -> 34662428 (1.01 %) bytes
LDS: 1551 -> 1551 (0.00 %) blocks
Max Waves: 188127 -> 199151 (5.86 %)
Wait states: 0 -> 0 (0.00 %)

Reviewers: arsenm, mareko, nhaehnle, MatzeB, atrick

Subscribers: arsenm, kzhuravl, llvm-commits

Differential Revision: https://reviews.llvm.org/D23688

llvm-svn: 279995
2016-08-29 19:42:52 +00:00
Vitaly Buka
54b04fe77b [asan] Enable new stack poisoning with store instruction by default
Reviewers: eugenis

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D23968

llvm-svn: 279993
2016-08-29 19:28:34 +00:00
Tim Northover
92392fdf1a GlobalISel: switch to SmallVector for pending legalizations.
std::queue was doing far too many heap allocations to be healthy.

llvm-svn: 279992
2016-08-29 19:27:20 +00:00
Tom Stellard
261d5c31cf AMDGPU/SI: Improve SILoadStoreOptimizer and run it before the scheduler
Summary:
The SILoadStoreOptimizer can now look ahead more than one instruction when
looking for instructions to merge, which greatly improves the number of
loads/stores that we are able to merge.

Moving the pass before scheduling avoids increasing register pressure after
the scheduler, so that the scheduler's register pressure estimates will be
more accurate.  It also gives more consistent results, since it is no longer
affected by minor scheduling changes.

Reviewers: arsenm

Subscribers: arsenm, kzhuravl, llvm-commits

Differential Revision: https://reviews.llvm.org/D23814

llvm-svn: 279991
2016-08-29 19:15:22 +00:00
Tim Northover
2fbd6a61b4 ASan: remove variable only used in assertions build
llvm-svn: 279990
2016-08-29 19:12:20 +00:00
Tim Northover
34bd451acc GlobalISel: legalize frem to a libcall on AArch64.
llvm-svn: 279988
2016-08-29 19:07:16 +00:00
Tim Northover
05f74b5dbe GlobalISel: rework CallLowering so that it can be used for libcalls too.
There should be no functional change here, I'm just making the implementation
of "frem" (to libcall) legalization easier for a followup.

llvm-svn: 279987
2016-08-29 19:07:08 +00:00
Matt Arsenault
2e0aaa6df5 AMDGPU/R600: Fix fixups used for constant arrays
Fixes bug 29289

llvm-svn: 279986
2016-08-29 19:01:48 +00:00
Kyle Butt
01686d3b5f IfConversion: Fix branch predication bug.
This bug shows up with diamonds that share unpredicable, unanalyzable branches.
There's an included test case from Hexagon. What was happening was that we were
attempting to predicate the branch instruction despite the fact that it was
checked to be the same. Now for unanalyzable branches we skip over the branch
instructions when predicating the block.

Differential Revision: https://reviews.llvm.org/D23939

llvm-svn: 279985
2016-08-29 18:27:12 +00:00
Vitaly Buka
da8c27db69 Use store operation to poison allocas for lifetime analysis.
Summary:
Calling __asan_poison_stack_memory and __asan_unpoison_stack_memory for small
variables is too expensive.

Code is disabled by default and can be enabled by -asan-experimental-poisoning.

PR27453

Reviewers: eugenis

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D23947

llvm-svn: 279984
2016-08-29 18:17:21 +00:00
Vitaly Buka
1d9c174ef6 [asan] Separate calculation of ShadowBytes from calculating ASanStackFrameLayout
Summary: No functional changes, just refactoring to make D23947 simpler.

Reviewers: eugenis

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D23954

llvm-svn: 279982
2016-08-29 17:41:29 +00:00
David Majnemer
485740d256 [SimplifyCFG] Hoisting invalidates metadata
We forgot to remove optimization metadata when performing hoisting during
FoldTwoEntryPHINode.

This fixes PR29163.

llvm-svn: 279980
2016-08-29 17:14:08 +00:00
Evandro Menezes
1ef8196cc2 [AArch64] Adjust the scheduling model for Exynos M1.
Further refine the model for loads.

llvm-svn: 279976
2016-08-29 16:04:37 +00:00
Anna Thomas
2c186056d6 [StatepointsForGC] Rematerialize in the presence of PHIs
Summary:
While walking the use chain for identifying rematerializable values in RS4GC,
add the case where the current value and base value are the same PHI nodes.

This will aid rematerialization of geps and casts instead of relocating.

Reviewers: sanjoy, reames, igor

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D23920

llvm-svn: 279975
2016-08-29 15:41:59 +00:00
Teresa Johnson
bad28dab29 [LTO] Remove extraneous output
Remove some debugging output to stderr that snuck in with r279576.

llvm-svn: 279974
2016-08-29 15:33:01 +00:00
Sanjay Patel
d0d823d9f1 [Constant] remove fdiv and frem from canTrap()
Assuming the default FP env, we should not treat fdiv and frem any differently in terms of
trapping behavior than any other FP op. Ie, FP ops do not trap with the default FP env.

This matches how we treat the fdiv/frem in IR with isSafeToSpeculativelyExecute() and in 
the backend after:
https://reviews.llvm.org/rL279970

llvm-svn: 279973
2016-08-29 15:27:17 +00:00
Gor Nishanov
cc3ff9c81d [Coroutines] Part 9: Add cleanup subfunction.
Summary:
[Coroutines] Part 9: Add cleanup subfunction.

This patch completes coroutine heap allocation elision. Now, the heap elision example from docs\Coroutines.rst compiles and produces expected result (see test/Transform/Coroutines/ex3.ll)

Intrinsic Changes:
* coro.free gets a token parameter tying it to coro.id to allow reliably discovering all coro.frees associated with a particular coroutine.
* coro.id gets an extra parameter that points back to a coroutine function. This allows checking whether a coro.id describes the enclosing function or belongs to a different function that was later inlined.

CoroSplit now creates three subfunctions:
# f$resume - resume logic
# f$destroy - cleanup logic, followed by a deallocation code
# f$cleanup - just the cleanup code

CoroElide pass during devirtualization replaces coro.destroy with either f$destroy or f$cleanup depending on whether heap elision is performed or not.

Other fixes, improvements:
* Fixed a buglet in Shape::buildFrame that was not creating coro.save properly if a coroutine has more than one suspend point.

* Switched to using a variable-width suspend index field (no longer limited to a 32-bit index field; it can be as small as i1 or as large as i<whatever-size_t-is>)

Reviewers: majnemer

Subscribers: llvm-commits, mehdi_amini

Differential Revision: https://reviews.llvm.org/D23844

llvm-svn: 279971
2016-08-29 14:34:12 +00:00
Sanjay Patel
e3955cf340 [TargetLowering] remove fdiv and frem from canOpTrap() (PR29114)
Assuming the default FP env, we should not treat fdiv and frem any differently in terms of 
trapping behavior than any other FP op. Ie, FP ops do not trap with the default FP env.

This matches how we treat these ops in IR with isSafeToSpeculativelyExecute(). There's a 
similar bug in Constant::canTrap().

This bug manifests in PR29114:
https://llvm.org/bugs/show_bug.cgi?id=29114
...as a sequence of scalar divisions instead of a vector division on x86 for a <3 x float> 
type.

Differential Revision: https://reviews.llvm.org/D23974

llvm-svn: 279970
2016-08-29 13:32:41 +00:00
Krzysztof Parzyszek
3628e5ad57 Do not use MRI::getMaxLaneMaskForVReg as a mask covering whole register
MRI::getMaxLaneMaskForVReg does not always cover the whole register.
For example, on X86 the upper 16 bits of EAX cannot be accessed via
any subregister. Consequently, there is no lane mask that only covers
that part of EAX. The getMaxLaneMaskForVReg will return the union of
the lane masks for all subregisters, and in case of EAX, that union
will not cover the upper 16 bits.

This fixes https://llvm.org/bugs/show_bug.cgi?id=29132

llvm-svn: 279969
2016-08-29 13:15:35 +00:00
Tom Stellard
12f6142ff9 AMDGPU/SI: Improve register allocation hints for sopk instructions
Summary:
For shrinking SOPK instructions, we were creating a hint to tell the
register allocator to use the register allocated for src0 for the dst
operand as well.  However, this does not always work, depending
on the order in which virtual registers are assigned physical registers.

To fix this, I've added a second allocation hint which does the reverse,
asks that the register allocated for dst is used for src0.

Reviewers: arsenm

Subscribers: arsenm, llvm-commits, kzhuravl

Differential Revision: https://reviews.llvm.org/D23862

llvm-svn: 279968
2016-08-29 13:06:10 +00:00
Rafael Espindola
2aeb8bcd81 Use the correct ctor/dtor section for dynamic-no-pic.
llvm-svn: 279967
2016-08-29 12:47:22 +00:00
Rafael Espindola
f7e5d81bb8 Move code only used by codegen out of MC. NFC.
MC itself never needs to know about these sections.

llvm-svn: 279965
2016-08-29 12:33:42 +00:00
Haojian Wu
1ffe62f2c3 Fix -Wunused-but-set-variable warning.
Summary: A follow-up fix on r279958.

Reviewers: bkramer

Subscribers: cfe-commits

Differential Revision: https://reviews.llvm.org/D23989

llvm-svn: 279964
2016-08-29 12:26:33 +00:00
Tom Stellard
10a0d7df6a AMDGPU/SI: Query AA, if available, in areMemAccessesTriviallyDisjoint()
Summary:
The SILoadStoreOptimizer will need to use AliasAnalysis here in order to
move it before scheduling.

Reviewers: arsenm

Subscribers: arsenm, llvm-commits, kzhuravl

Differential Revision: https://reviews.llvm.org/D23813

llvm-svn: 279963
2016-08-29 12:05:32 +00:00
Igor Breger
1ff2ecf610 Fixed a bug in type legalizer for masked gather.
The problem occurs when the node isn't updated in place: UpdateNodeOperation() returns a node that already exists.
In that case the assert in PromoteIntegerOperand() fails, since N has 2 results (value + chain).

Differential Revision: http://reviews.llvm.org/D23756

llvm-svn: 279961
2016-08-29 09:12:31 +00:00
Igor Breger
e231cf9ebe [AVX512] In some cases KORTEST instruction may be used instead of ZEXT + TEST sequence.
Differential Revision: http://reviews.llvm.org/D23490

llvm-svn: 279960
2016-08-29 08:52:52 +00:00
Haojian Wu
cb4437094e [InstructionSelect] NumBlocks isn't defined in DEBUG build.
Summary: A follow-up fix on http://llvm.org/viewvc/llvm-project?view=revision&revision=279905.

Reviewers: bkramer

Subscribers: cfe-commits

Differential Revision: https://reviews.llvm.org/D23985

llvm-svn: 279959
2016-08-29 08:48:15 +00:00
Craig Topper
b910cf620b [X86] Don't lower FABS/FNEG masking directly to a ConstantPool load. Just create a ConstantFPSDNode and let that be lowered.
This allows broadcast loads to be used when available.

llvm-svn: 279958
2016-08-29 04:49:31 +00:00
Craig Topper
dde5d7a8cc [AVX-512] Always use v8i64 when converting 512-bit FAND/FOR/FXOR/FANDN to integer operations when DQI isn't supported. This is consistent with the recent changes to promote logical operations to i64 vectors.
llvm-svn: 279957
2016-08-29 04:49:27 +00:00
Craig Topper
de3199b8e1 [AVX-512] Add support for selecting 512-bit VPABSB/VPABSW when BWI is available.
llvm-svn: 279951
2016-08-28 22:20:51 +00:00
Craig Topper
23bcb14401 [AVX-512] Add patterns for selecting 128/256-bit EVEX VPABS instructions.
llvm-svn: 279950
2016-08-28 22:20:48 +00:00
Sanjay Patel
d4c3ab24ab [InstCombine] use m_APInt to allow icmp (and X, Y), C folds for splat constant vectors
llvm-svn: 279937
2016-08-28 18:18:00 +00:00
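
A hedged sketch of the matching idiom involved, assuming the PatternMatch helpers m_ICmp, m_And, m_Value and m_APInt (illustrative, not the exact fold added here): m_APInt binds both scalar integer constants and splat constant vectors, which is what lets a single fold cover both forms.

    #include "llvm/ADT/APInt.h"
    #include "llvm/IR/Instructions.h"
    #include "llvm/IR/PatternMatch.h"
    using namespace llvm;
    using namespace llvm::PatternMatch;

    // Match 'icmp pred (and X, Y), C' where C is a scalar constant or a splat
    // constant vector; on success C points at the (splat) value.
    static bool matchAndICmpConst(ICmpInst &I, Value *&X, Value *&Y,
                                  ICmpInst::Predicate &Pred, const APInt *&C) {
      return match(&I, m_ICmp(Pred, m_And(m_Value(X), m_Value(Y)), m_APInt(C)));
    }
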
Simon Pilgrim
913f5c9084 [X86][AVX512] Only combine EVEX targets shuffles to shuffles of the same number of vector elements
Overeager combining prevents the correct folding of writemasks.

At the moment this occurs for ALL EVEX shuffles, in the future we need to check that the user of the root shuffle is a VSELECT that can fold to a writemask.

llvm-svn: 279934
2016-08-28 17:27:14 +00:00
Hal Finkel
9e501da293 [PowerPC] Implement lowering for atomicrmw min/max/umin/umax
Implement lowering for atomicrmw min/max/umin/umax. Fixes PR28818.

llvm-svn: 279933
2016-08-28 16:17:58 +00:00
Elena Demikhovsky
eed0bb3d71 [Loop Vectorizer] Fixed memory conflict checks.
Fixed a bug in run-time checks for possible memory conflicts inside loop.
The bug is in Low <-> High boundaries calculation. The High boundary should be calculated as "last memory access pointer + element size".

Differential revision: https://reviews.llvm.org/D23176

llvm-svn: 279930
2016-08-28 08:53:53 +00:00
Craig Topper
0f68f0b2ab [AVX-512] Promote AND/OR/XOR to v2i64/v4i64/v8i64 even when we have AVX512F/AVX512VL.
Previously we weren't creating masked logical operations if bitcasts appeared between the logic operation and the select. The IR optimizers can move bitcasts across logic operations and create these cases. To minimize the number of cases we need to handle, this change promotes all logic ops to an i64 vector type just like when only SSE or AVX is available.

Unfortunately, this also has the consequence of making it difficult to select unmasked VPANDD/VPORD/VPXORD in all the cases it was previously used. This is the cause of most of the test change. This shouldn't result in any functional change though.

llvm-svn: 279929
2016-08-28 06:06:28 +00:00
Craig Topper
9a9a09e60f [X86] Rename PABSB/D/W instructions to be consistent with SSE/AVX instructions instead of ending 128/256. NFC
llvm-svn: 279927
2016-08-28 06:06:21 +00:00
Jan Vesely
0d9671ace0 AMDGPU/R600: Enable Load combine
Fix and improve tests

Differential Revision: https://reviews.llvm.org/D23899

llvm-svn: 279925
2016-08-27 19:09:43 +00:00
Craig Topper
d6a4523f86 [X86] Rename predicate function that detects if one of the REX.B, REX.X or REX.R bits is required. Its old name conflicted with a function in the X86II namespace that doesn't quite do the same thing. NFC
llvm-svn: 279924
2016-08-27 17:13:43 +00:00
Craig Topper
ce1e9291d5 [X86] Keep looping over operands looking for byte registers even if we already found a register that requires a REX prefix. Otherwise we don't error if a high byte register is used after SPL/BPL/DIL/SIL.
llvm-svn: 279923
2016-08-27 17:13:41 +00:00
Craig Topper
929e8489be [X86] Include XMM/YMM/ZMM16-23 in X86II::isX86_64ExtendedReg. This feels more consistent with its name and simplifies assembler code.
llvm-svn: 279922
2016-08-27 17:13:37 +00:00
Craig Topper
c65627ae77 [X86] Don't allow DR8-DR15 to be assembled in 32-bit mode. Add missing test for CR8-CR15.
llvm-svn: 279921
2016-08-27 17:13:34 +00:00
Craig Topper
c727b74959 [X86] Remove stale comment about FixupBWInsts pass being off by default. NFC
llvm-svn: 279915
2016-08-27 05:26:54 +00:00
Craig Topper
7c9fd2b2be [AVX-512] Allow EVEX encoding unordered/ordered/equal/notequal VCMPPS/PD/SS/SD to be commuted just like the SSE and AVX counterparts.
llvm-svn: 279914
2016-08-27 05:22:15 +00:00
Craig Topper
dc033bf380 [X86] Enable FR32/FR64 cmpeq/cmpne/cmpunord/cmpord to be commuted.
llvm-svn: 279913
2016-08-27 05:22:12 +00:00
Craig Topper
1964e92665 [AVX-512] Add load folding for EVEX vcmpps/pd/ss/sd.
llvm-svn: 279912
2016-08-27 05:22:08 +00:00
Teresa Johnson
864ef41e7f [LTO] Don't create a new common unless merged has different size
Summary:
This addresses a regression in common handling from the new LTO
API in r278338. Only create a new common if the size is different.
The type comparison against an array type fails when the size is
different but not an array. GlobalMerge does not handle the
array types as well and we lose some global merging opportunities.

Reviewers: mehdi_amini

Subscribers: junbuml, llvm-commits, mehdi_amini

Differential Revision: https://reviews.llvm.org/D23955

llvm-svn: 279911
2016-08-27 04:41:22 +00:00
Matt Arsenault
7d35696bfc AMDGPU: Mark sched model complete
Fixes bug 26800

llvm-svn: 279910
2016-08-27 03:39:27 +00:00
Matt Arsenault
5c1096e49d AMDGPU: Remove unneeded implicit exec uses/defs
SI_BREAK, SI_IF_BREAK, and SI_ELSE_BREAK do not def exec.
SI_IF_BREAK and SI_ELSE_BREAK do not read it either.

llvm-svn: 279909
2016-08-27 03:00:51 +00:00
Sebastian Pop
c6175f06bd GVN-hoist: invalidate MD cache (PR29144)
Without invalidating the entries in the MD cache we would try to access instructions
that were removed in previous iterations of hoisting.

Differential Revision: https://reviews.llvm.org/D23927

llvm-svn: 279907
2016-08-27 02:48:41 +00:00
Quentin Colombet
a56859d74d [RegBankSelect] Do not abort when the target wants to fall back.
llvm-svn: 279906
2016-08-27 02:38:27 +00:00
Quentin Colombet
cd84f181b1 [InstructionSelect] Do not abort when the target wants to fall back.
llvm-svn: 279905
2016-08-27 02:38:24 +00:00
Quentin Colombet
ff81dc2c09 [MachineLegalize] Do not abort when the target wants to fall back.
llvm-svn: 279904
2016-08-27 02:38:21 +00:00
Matt Arsenault
d3892e17d5 AMDGPU: Select mulhi 24-bit instructions
llvm-svn: 279902
2016-08-27 01:32:27 +00:00
Matt Arsenault
af7cdf6a46 AMDGPU: Move cndmask pseudo to be isel pseudo
There's only one use of this for the convenience
of a pattern. I think v_mov_b64_pseudo should also be
moved, but SIFoldOperands does currently make use of it.

llvm-svn: 279901
2016-08-27 01:00:37 +00:00
Matt Arsenault
9b698596fe AMDGPU: Fix sched type for branches
llvm-svn: 279900
2016-08-27 00:51:02 +00:00
Matt Arsenault
09098bc2b3 AMDGPU: Remove register operand from si_mask_branch
It isn't used for anything, and is also misleading since
it could be spilled at the end of the block, so it can't be relied
on. There ends up being a verifier error about using an undefined
register since the spill kills the register.

llvm-svn: 279899
2016-08-27 00:42:21 +00:00
Matt Arsenault
2922d104bf AMDGPU: Improve error reporting for maximum branch distance
Unfortunately this seems to only help the assembler diagnostic.

llvm-svn: 279895
2016-08-27 00:21:22 +00:00
Quentin Colombet
1636e7c63b [GlobalISel] Add a fallback path to SDISel.
When global-isel fails on a MachineFunction MF, MF will be cleaned up
and given to SDISel.
Thanks to this fallback, we can already perform correctness tests even if
we support only a small portion of the functions in a test.

llvm-svn: 279891
2016-08-27 00:18:31 +00:00
Quentin Colombet
7a3f77bdff [AArch64][CallLowering] Do not assert for not implemented part.
When doing the ABI lowering, report a failure to the caller instead of
asserting. This gives a chance for the caller to recover.

llvm-svn: 279890
2016-08-27 00:18:28 +00:00
Quentin Colombet
c8ffecf434 [GlobalISel] Teach the core pipeline not to run if ISel failed.
llvm-svn: 279889
2016-08-27 00:18:24 +00:00
Quentin Colombet
a46582d729 [IRTranslator] Do not abort when the target wants to fall back.
Every pass in the GlobalISel pipeline will need to do something similar.

llvm-svn: 279886
2016-08-26 23:49:05 +00:00
Quentin Colombet
5f3ddf5325 [MFProperties] Introduce a FailedISel property.
This is used to communicate that the instruction selection pipeline
failed at some point.
Another way to achieve that would be to have some kind of conditional
scheduling in the PassManager, such that we only schedule a pass based
on the success/failure of another one. The property approach has the
advantage of being lightweight and solves the problem at stake.

llvm-svn: 279885
2016-08-26 23:49:01 +00:00
Teresa Johnson
048eeff265 [ThinLTO] Move loading of cache entry to client
Summary:
Have the cache pass back the path to the cache entry when it
is ready to be loaded, instead of a buffer.

For gold-plugin we can simply pass this file back to gold directly,
which avoids expensive writing of a separate tmp file. Ensure
the cache entry is not deleted on cleanup by adjusting the setting
of the IsTemporary flags.

Moved the loading of the buffer into llvm-lto2 to maintain current
behavior.

Reviewers: mehdi_amini

Subscribers: llvm-commits, mehdi_amini

Differential Revision: https://reviews.llvm.org/D23946

llvm-svn: 279883
2016-08-26 23:29:14 +00:00
Quentin Colombet
9f9ccd2d5a [TargetPassConfig] Add a target hook to know what GlobalISel should do on error.
By default, this hook tells GlobalISel to abort (report a fatal error)
when it encounters an error. The alternative will be to fall back on
SDISel.
This fall back will be removed when the bring-up of GlobalISel is over.

llvm-svn: 279879
2016-08-26 22:32:59 +00:00
Quentin Colombet
0b16bca0b7 [IRTranslator][NFC] Use DEBUG_TYPE instead of repeating the name.
llvm-svn: 279878
2016-08-26 22:32:57 +00:00
Quentin Colombet
2cd9ce7522 [SelectionDAG] Do not run the ISel process on already selected code.
Right now, this cannot happen, but with the fallback path of GlobalISel
it will show up eventually.

llvm-svn: 279877
2016-08-26 22:32:55 +00:00
Quentin Colombet
947cfac2b5 [MachineFunction] Introduce a reset method.
This method allows resetting the state of a MachineFunction as if it had
just been created. This will be used during the bring-up of GlobalISel to
provide a way to fall back on SelectionDAG. That way, we can start doing
correctness testing even if we are not able to select all functions via
the global instruction selector.

llvm-svn: 279876
2016-08-26 22:32:53 +00:00
Quentin Colombet
fcb250bf0c [MFProperties] Introduce a reset method with no argument.
This method allows resetting all the properties in one go.

llvm-svn: 279874
2016-08-26 22:09:11 +00:00
Quentin Colombet
e0a55b84fa [MFProperties][NFC] Rename clear into reset to match BitVector naming.
The name clear is used to reset all the bits in bit vectors, so using it
to reset just the properties was confusing.

llvm-svn: 279873
2016-08-26 22:09:08 +00:00
Tom Stellard
2dbd047f3a AMDGPU/SI: Canonicalize offset order for merged DS instructions
Summary:
If the scheduler clusters the loads, then the offsets will be sorted,
but it is possible for the scheduler to schedule loads together
without explicitly clustering them, which would give us non-sorted
offsets.

Also, we will want to do this if we move the load/store optimizer before
the scheduler.
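
As a rough illustration (not the actual patch), canonicalizing here just means
swapping the two merge candidates when their offsets are out of order, so the
rest of the optimizer can assume the first offset is the smaller one:

    #include <utility>
    namespace llvm { class MachineInstr; }

    // Hypothetical helper: order a pair of DS operations by offset before
    // merging, regardless of how the scheduler emitted them.
    static void canonicalizeMergePair(unsigned &Offset0, unsigned &Offset1,
                                      llvm::MachineInstr *&Paired0,
                                      llvm::MachineInstr *&Paired1) {
      if (Offset0 > Offset1) {
        std::swap(Offset0, Offset1);
        std::swap(Paired0, Paired1);
      }
    }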

Reviewers: arsenm

Subscribers: arsenm, llvm-commits, kzhuravl

Differential Revision: https://reviews.llvm.org/D23776

llvm-svn: 279870
2016-08-26 21:36:47 +00:00
Tom Stellard
8b436348bd XXX
llvm-svn: 279868
2016-08-26 21:16:40 +00:00
Tom Stellard
13f830c61a AMDGPU/SI: Use a better method for determining the largest pressure sets
Summary:
There are a few different sgpr pressure sets, but we only care about
the one which covers all of the sgprs.  We were using hard-coded
register pressure set names to determine the reg set id for the
biggest sgpr set.  However, we were using the wrong name, and this
method is pretty fragile, since the reg pressure set names may
change.

The new method just looks for the pressure set that contains the most
reg units and sets that set as our SGPR pressure set.  We've also
adopted the same technique for determining our VGPR pressure set.
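
A sketch of the idea, assuming the usual TargetRegisterInfo pressure-set
queries (the header path and exact calls are assumptions, not lifted from the
patch): count how many register units each pressure set covers and keep the
largest one.

    #include "llvm/Target/TargetRegisterInfo.h"
    #include <vector>

    static unsigned findLargestPressureSet(const llvm::TargetRegisterInfo &TRI) {
      std::vector<unsigned> UnitsPerSet(TRI.getNumRegPressureSets(), 0);
      // Each register unit reports the pressure sets it belongs to as a
      // -1 terminated list.
      for (unsigned U = 0, E = TRI.getNumRegUnits(); U != E; ++U)
        for (const int *PS = TRI.getRegUnitPressureSets(U); *PS != -1; ++PS)
          ++UnitsPerSet[*PS];
      unsigned Best = 0;
      for (unsigned I = 1, E = UnitsPerSet.size(); I != E; ++I)
        if (UnitsPerSet[I] > UnitsPerSet[Best])
          Best = I;
      return Best;
    }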

Reviewers: arsenm

Subscribers: MatzeB, arsenm, llvm-commits, kzhuravl

Differential Revision: https://reviews.llvm.org/D23687

llvm-svn: 279867
2016-08-26 21:16:37 +00:00
Adam Nemet
68515406cf [Inliner] Report when inlining fails because callee's def is unavailable
Summary:
This is obviously an interesting case because it may motivate code
restructuring or LTO.

Reporting this requires instantiation of ORE in the loop where the call
sites are first gathered.  I've checked compile-time
overhead *with* -Rpass-with-hotness and the worst slow-down was 6% in
mcf and quickly tailing off.  As before without -Rpass-with-hotness
there is no overhead.

Because this could be a pretty noisy diagnostic, it is currently
qualified as 'verbose'.  As of this patch, 'verbose' diagnostics are
only emitted with -Rpass-with-hotness, i.e. when the output is expected
to be filtered.

Reviewers: eraman, chandlerc, davidxl, hfinkel

Subscribers: tejohnson, Prazek, davide, llvm-commits

Differential Revision: https://reviews.llvm.org/D23415

llvm-svn: 279860
2016-08-26 20:21:05 +00:00
Rafael Espindola
26c1b18156 Make writeToResolutionFile a static helper.
llvm-svn: 279859
2016-08-26 20:19:35 +00:00
Kyle Butt
160e4d9563 TailDuplication: Record blocks that received the duplicated block. NFC.
This will allow tail duplication during layout to handle the CFG changes more
cleanly.

llvm-svn: 279858
2016-08-26 20:12:40 +00:00
Kevin Enderby
9991a7696a Next set of additional error checks for invalid Mach-O files for bad LC_SYMTABs.
This contains the missing checks for LC_SYMTAB load command fields.

llvm-svn: 279854
2016-08-26 19:34:07 +00:00
Manman Ren
23c5a589f3 Swift Calling Convention: add support for AArch64.
It will just be the same as the regular calling convention.

rdar://28029509

llvm-svn: 279853
2016-08-26 19:28:17 +00:00
Tim Northover
0f9e84c16f AArch64: avoid assertion on illegal types in performFDivCombine.
In the code to detect fixed-point conversions and make use of AArch64's special
instructions, we weren't prepared for weird types. The fptosi direction got
fixed recently, but not the similar sitofp code.

llvm-svn: 279852
2016-08-26 18:52:31 +00:00
Sanjay Patel
66b7ce5846 [InstCombine] add helper function for icmp (and (sh X, Y), C2), C1 ; NFC
Like other recent changes near here, the goal is to allow vector types for
all of these folds. Splitting things up makes it easier to incrementally 
enhance the code and easier to read.

llvm-svn: 279851
2016-08-26 18:28:46 +00:00
Chad Rosier
ab1bc0d119 [AArch64] Avoid materializing constant values when generating csel instructions.
Differential Revision: https://reviews.llvm.org/D23677

llvm-svn: 279849
2016-08-26 18:05:50 +00:00
Davide Italiano
f0d80a35f9 [AsmParser] Placate a -Wmisleading-indentation warning (GCC7).
llvm-svn: 279848
2016-08-26 18:05:03 +00:00
Reid Kleckner
7500758e97 [MC] Move .cv_loc management logic out of MCContext
MCContext already has many tasks, and separating CodeView out from it is
probably a good idea. The .cv_loc tracking was modelled on the DWARF
tracking which lived directly in MCContext.

Removes the inclusion of MCCodeView.h from MCContext.h, so now there are
only 10 build actions while I hack on CodeView support instead of 265.

llvm-svn: 279847
2016-08-26 17:58:37 +00:00
Tim Northover
4f35a13881 GlobalISel: mark G_FPEXT legal from float to double.
llvm-svn: 279845
2016-08-26 17:46:22 +00:00
Tim Northover
2f9c75fe2f GlobalISel: mark G_FCMP legal on float & double.
llvm-svn: 279844
2016-08-26 17:46:19 +00:00
Tim Northover
ac577545eb GlobalISel: simplify G_ICMP legalization regime.
It's unclear how the old

    %res(32) = G_ICMP { s32, s32 } intpred(eq), %0, %1

is actually different from an s1 version

    %res(1) = G_ICMP { s1, s32 } intpred(eq), %0, %1

so we'll remove it for now.

llvm-svn: 279843
2016-08-26 17:46:17 +00:00
Tim Northover
a9938048f7 GlobalISel: legalize sdiv and srem operations.
llvm-svn: 279842
2016-08-26 17:46:13 +00:00
Tim Northover
94b2dc6476 GlobalISel: legalize under-width divisions.
llvm-svn: 279841
2016-08-26 17:46:06 +00:00
Tim Northover
e9b45b8f4e GlobalISel: mark selects legal
llvm-svn: 279840
2016-08-26 17:46:03 +00:00
Tim Northover
aa17b67046 GlobalISel: mark float/int conversions legal
llvm-svn: 279839
2016-08-26 17:45:58 +00:00
Sanjay Patel
327342b2d5 [InstCombine] clean up foldICmpAndConstConst(); NFC
1. Early exit to reduce indent
2. Fix comments and variable names to match
3. Reformat comments / clang-format code

llvm-svn: 279837
2016-08-26 17:15:22 +00:00
Krzysztof Parzyszek
fdbb18927f Missed a semicolon in r279835
llvm-svn: 279836
2016-08-26 16:50:57 +00:00
Krzysztof Parzyszek
d4f2333399 Add some more detailed debugging information in RegisterCoalescer
llvm-svn: 279835
2016-08-26 16:46:14 +00:00
Sanjay Patel
c810f30220 [InstCombine] add helper function for folding of icmp (and X, C2), C; NFC
llvm-svn: 279834
2016-08-26 16:42:33 +00:00
Bob Haarman
7dc400a765 limit the number of instructions per block examined by dead store elimination
Summary: Dead store elimination gets very expensive when large numbers of instructions need to be analyzed. This patch limits the number of instructions analyzed per store to the value of the memdep-block-scan-limit parameter (which defaults to 100). This resulted in no observed difference in performance of the generated code, and no change in the statistics for the dead store elimination pass, but improved compilation time on some files by more than an order of magnitude.
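
The pattern is simply a bounded scan; a generic sketch (placeholder names, not
the actual DSE code) looks like this:

    // Walk the instructions, but give up conservatively once the per-store
    // budget (e.g. the memdep-block-scan-limit value) is exhausted.
    template <typename InstRange, typename Callback>
    static bool scanWithLimit(const InstRange &Insts, unsigned Limit,
                              Callback Visit) {
      for (const auto &I : Insts) {
        if (Limit-- == 0)
          return false; // Out of budget: treat the store as live.
        if (Visit(I))
          return true;
      }
      return false;
    }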

Reviewers: dexonsmith, bruno, george.burgess.iv, dberlin, reames, davidxl

Subscribers: davide, chandlerc, dberlin, davidxl, eraman, tejohnson, mbodart, llvm-commits

Differential Revision: https://reviews.llvm.org/D15537

llvm-svn: 279833
2016-08-26 16:34:27 +00:00
Sanjay Patel
a6a044e2b8 [InstCombine] rename variables in foldICmpAndConstant(); NFC
llvm-svn: 279831
2016-08-26 16:14:06 +00:00
Bob Haarman
278b19017e test commit
llvm-svn: 279830
2016-08-26 16:00:04 +00:00
Adam Nemet
c4e22f3757 [LoopUnroll] Use OptimizationRemarkEmitter directly not via the analysis pass
We can't mark ORE (a function pass) preserved as required by the loop
passes because that is how we ensure that the required passes like
LazyBFI are all available any time ORE is used.  See the new comments in
the patch.

Instead we use it directly just like the inliner does in D22694.

As expected there is some additional overhead after removing the caching
provided by analysis passes.  The worst case, I measured was
LNT/CINT2006_ref/401.bzip2 which regresses by 12%.  As before, this only
affects -Rpass-with-hotness and not default compilation.

llvm-svn: 279829
2016-08-26 15:58:34 +00:00
Sanjay Patel
807e452fc0 [InstCombine] rename variables in foldICmpDivConstant(); NFC
Removing the redundant 'CmpRHSV' local variable exposes a bug in the caller
foldICmpShrConstant() - it was sending in the div constant instead of the
cmp constant. But I have not been able to expose this in a regression test
yet - the affected folds all appear to be handled before we ever reach this
code. I'll keep trying to find a case as I make changes to allow vector folds
in both functions.

llvm-svn: 279828
2016-08-26 15:53:01 +00:00
Davide Italiano
f463cb5866 [lib/LTO] Add an assertion to catch invalid opt levels.
llvm-svn: 279823
2016-08-26 15:22:59 +00:00
Chad Rosier
ae57150dc8 [AArch64] Avoid materializing constant 1 by using csinc, rather than csel.
This is similar to what was done in r261675, but for CSINC rather than CSINV.

Differential Revision: https://reviews.llvm.org/D23892

llvm-svn: 279822
2016-08-26 14:01:55 +00:00
Pablo Barrio
1f84662941 Handle empty functions with debug info in load/store opt pass
Summary:
In functions that contained debug info but were empty otherwise,
the ARM load/store optimizer could abort. This was because the
function MergeReturnIntoLDM handled the special case where a
machine basic block is empty by calling MBB.empty(). However, this
returns false in the presence of debug info, although the function
should be considered empty in the eyes of the load/store optimizer.
This has been fixed by handling the case where searching through the
block finds only debug instructions.
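
A minimal sketch of the intended check (not the exact patch): treat a block as
empty when it contains nothing but debug instructions.

    #include "llvm/CodeGen/MachineBasicBlock.h"
    #include "llvm/CodeGen/MachineInstr.h"

    static bool isEffectivelyEmpty(const llvm::MachineBasicBlock &MBB) {
      for (const llvm::MachineInstr &MI : MBB)
        if (!MI.isDebugValue())
          return false; // A real instruction: the block is not empty.
      return true;
    }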

Reviewers: rengolin, dexonsmith, llvm-commits, jmolloy

Subscribers: t.p.northover, aemerson, rengolin, samparker

Differential Revision: https://reviews.llvm.org/D23847

llvm-svn: 279820
2016-08-26 13:00:39 +00:00
Simon Pilgrim
8d9b313c40 [X86][SSE4A] The EXTRQ/INSERTQ bit extraction/insertion ops should be in the integer domain
llvm-svn: 279811
2016-08-26 09:55:41 +00:00
Eugene Leviant
eb33dbc270 Implement getRandomBytes() function
This function allows getting an arbitrarily sized block of random bytes.
The primary motivation is support for --build-id=uuid in lld.
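
A hypothetical usage sketch; the declaration below mirrors the description in
the message, and the header/namespace it actually lives in is an assumption.

    #include <cstdint>
    #include <cstdio>
    #include <string>
    #include <system_error>

    // Assumed interface: fill an arbitrarily sized buffer with random bytes.
    std::error_code getRandomBytes(void *Buffer, size_t Size);

    int main() {
      uint8_t BuildId[16]; // e.g. a UUID-sized build-id for lld
      if (std::error_code EC = getRandomBytes(BuildId, sizeof(BuildId))) {
        std::fprintf(stderr, "getRandomBytes failed: %s\n",
                     EC.message().c_str());
        return 1;
      }
      return 0;
    }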

Differential revision: https://reviews.llvm.org/D23671

llvm-svn: 279807
2016-08-26 08:14:54 +00:00
Craig Topper
471febb7d3 [X86][SSE] Add CMPSS/CMPSD intrinsic scalar load folding support.
llvm-svn: 279806
2016-08-26 07:08:00 +00:00
Matt Arsenault
7621e40810 Replace subregister uses when processing tied operands
This was for some reason skipping operands that are subregisters
instead of keeping the same subregister index.

v_movreld_b32 expects src0 to be the subregister of the tied
super register use/def.

e.g.

v_movreld_b32 v0, v9, <imp-def, tied3> v[0:3], <imp-use, tied2> v[0:3]

was being replaced with

v[4:7] = copy v[0:3]
v_movreld_b32 v0, v9, <imp-def, tied3> v[4:7], <imp-use, tied2> v[4:7],

which really writes to v[0:3]

llvm-svn: 279804
2016-08-26 06:31:32 +00:00
Kostya Serebryany
70186ece8e [libFuzzer] simplify a test to make it pass on the bot
llvm-svn: 279796
2016-08-26 00:18:16 +00:00
Kostya Serebryany
d4cdf49632 [libFuzzer] make sure we have symbols on fuzzer tests
llvm-svn: 279792
2016-08-25 23:30:02 +00:00
Michael Kuperstein
0f3d6c7984 Revert r274613 because it breaks the test suite with AVX512
This reverts most of r274613 (AKA r274626) and its follow-ups (r276347, r277289),
due to miscompiles in the test suite. The FastISel change was left in, because
it apparently fixes an unrelated issue.

(Recommit of r279782 which was broken due to a bad merge.)

This fixes 4 out of the 5 test failures in PR29112.

llvm-svn: 279788
2016-08-25 22:48:11 +00:00
Kostya Serebryany
25e0e96b53 [libFuzzer] rename -print_new_cov_pcs=1 to -print_pcs=1 and make it more useful: print PCs only after the initial corpus has been read, and symbolize them
llvm-svn: 279787
2016-08-25 22:35:08 +00:00
Michael Kuperstein
9e11987d29 Revert r279782 due to debug buildbot breakage.
llvm-svn: 279785
2016-08-25 22:14:45 +00:00
Michael Kuperstein
041c1cdc10 Revert r274613 because it breaks the test suite with AVX512
This reverts most of r274613 and its follow-ups (r276347, r277289), due to
miscompiles in the test suite. The FastISel change was left in, because it
apparently fixes an unrelated issue.

This fixes 4 out of the 5 test failures in PR29112.

llvm-svn: 279782
2016-08-25 21:55:41 +00:00
Tim Shen
a8864abefc [MemCpy] Add comments for r279769
Differential Revision: https://reviews.llvm.org/D23846

llvm-svn: 279778
2016-08-25 21:03:46 +00:00
Tim Northover
35c35dd4f1 ARM: by default don't set the Thumb bit on MachO relocated values.
Its existence is largely historical: apparently we tried to make ARM object
files look maybe-almost-possibly runnable by putting our best guess at the
actual value into relocated locations. Of course, the real linker then comes
along and can completely change things.

But it should only be there for word-sized and movw/movt relocations. It can't
be encoded in branch relocations, and I've seen it mess up validity
calculations twice in the last couple of weeks so the default is clearly problematic.

llvm-svn: 279773
2016-08-25 20:41:30 +00:00
Tim Shen
358f130893 [MemCpy] Check for alias in performMemCpyToMemSetOptzn, instead of the identity of two operands
Summary:
This fixes PR29105. The reason is that lifetime markers create new
pointers that alias the original ones, but before this patch aliasing
was not checked in performMemCpyToMemSetOptzn.
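
In spirit the change replaces a pointer-identity test with an alias-analysis
query; a sketch (the AAResults convenience overload used here is an
assumption) would be:

    #include "llvm/Analysis/AliasAnalysis.h"
    #include "llvm/IR/IntrinsicInst.h"

    // Instead of requiring the memset destination and the memcpy source to
    // be the same Value, ask whether they must alias; lifetime markers can
    // introduce distinct pointers to the same object.
    static bool destCoversSource(llvm::AAResults &AA,
                                 const llvm::MemSetInst *MemSet,
                                 const llvm::MemCpyInst *MemCpy) {
      return AA.isMustAlias(MemSet->getDest(), MemCpy->getSource());
    }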

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D23846

llvm-svn: 279769
2016-08-25 19:27:26 +00:00
Michael Kuperstein
815b7bc975 Reuse an SDLoc throughout a function. NFC.
llvm-svn: 279767
2016-08-25 18:50:56 +00:00
Tim Northover
1e63f9c419 GlobalISel: add missing type to G_UADDE instructions
llvm-svn: 279762
2016-08-25 17:37:44 +00:00
Tim Northover
3892589cf4 GlobalISel: mark overflow bit of overflow ops legal.
It's expected this will map to the NZCV register class and be properly selectable.

llvm-svn: 279761
2016-08-25 17:37:41 +00:00
Tim Northover
5ba7519ed1 GlobalISel: mark simple ops legal even on types < 32-bit.
The 32-bit variants of these operations don't depend on the bits not being
operated on, so they also naturally model operations narrower than the actual
register width.

llvm-svn: 279760
2016-08-25 17:37:39 +00:00
Tim Northover
ae0d153231 GlobalISel: mark pointer constants as legal on AArch64.
llvm-svn: 279759
2016-08-25 17:37:35 +00:00