mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-21 20:12:56 +02:00
Commit Graph

134722 Commits

Author SHA1 Message Date
Tim Northover
b2bffb2bad ARM: validate immediate branch targets in AsmParser.
Immediate branch targets aren't commonly used, but if they are, we should make
sure they can actually be encoded. This means they must be divisible by 2 when
targeting Thumb mode, and by 4 when targeting ARM mode.

Also do a little naming cleanup while I was changing everything around anyway.

llvm-svn: 275116
2016-07-11 22:29:37 +00:00
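
A minimal sketch of the encodability check described above; the function name
is illustrative, not the actual AsmParser code:

  #include <cstdint>

  // Returns true if the immediate branch target can be encoded: Thumb
  // instructions are 2-byte aligned and ARM instructions 4-byte aligned,
  // so the target must be a multiple of 2 or 4 respectively.
  bool isEncodableBranchTarget(int64_t Target, bool IsThumb) {
    const int64_t Alignment = IsThumb ? 2 : 4;
    return Target % Alignment == 0;
  }
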
Nicolai Haehnle
e3114ca1df AMDGPU: Treat texture gather instructions more like other MIMG instructions
Summary:
Setting MIMG to 0 has a bunch of unexpected side effects, including that
isVMEM returns false, which leads to incorrect treatment in the hazard
recognizer. The reason I noticed it is that it also leads to incorrect
treatment in VGPR-to-SGPR copies, which is one cause of the referenced bug.

The only reason why MIMG was set to 0 is to signal the special handling of
dmasks, but that can be checked differently.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=96877

Reviewers: arsenm, tstellarAMD

Subscribers: arsenm, kzhuravl, llvm-commits

Differential Revision: http://reviews.llvm.org/D22210

llvm-svn: 275113
2016-07-11 21:59:43 +00:00
Zachary Turner
63d4db3b0c Refactor the PDB writing to use a builder approach
llvm-svn: 275110
2016-07-11 21:45:26 +00:00
Zachary Turner
b1b6db2b41 [pdb] Add a pdb2yaml option to not dump file headers.
This will be useful once we start adding the ability to dump type
records and symbol records, since it will allow us to generate
mergeable information instead of information that specifies an
entire file.

llvm-svn: 275109
2016-07-11 21:45:09 +00:00
Nicolai Haehnle
b2e61fcd7d AMDGPU: fix local stack slot allocation bugs
Summary:
The main bug fix here is using the 32-bit encoding of V_ADD_I32 in
materializeFrameBaseRegister and resolveFrameIndex, so that arbitrary
immediates work.

The second part is that we may now require the SegmentWaveByteOffset
even when there are initially no stack objects and VGPR spilling isn't
enabled, for stack slots that are allocated later. This means that some
bits become effectively dead and can be cleaned up.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=96602
Tested-by: Kai Wasserbäch <kai@dev.carbon-project.org>

Reviewers: arsenm, tstellarAMD

Subscribers: arsenm, llvm-commits, kzhuravl

Differential Revision: http://reviews.llvm.org/D21551

llvm-svn: 275108
2016-07-11 21:44:40 +00:00
Michael Kuperstein
7e6a08b33c [X86] Make some cast costs more precise
Make some AVX and AVX512 cast costs more precise.
Based on part of a patch by Elena Demikhovsky (D15604).

Differential Revision: http://reviews.llvm.org/D22064

llvm-svn: 275106
2016-07-11 21:39:44 +00:00
Kyle Butt
b281da1235 Codegen: Fix comment in BranchFolding.cpp
Blocks to be tail-merged may share more than one successor. Correct the
comment to state that they share a specific successor, SuccBB, rather
than a single successor, which is not true.

llvm-svn: 275104
2016-07-11 21:37:03 +00:00
Quentin Colombet
778f8c8cb6 [X86] Fix tailcall return address clobber bug.
This bug (llvm.org/PR28124) was introduced by r237977, which refactored
the tail call sequence to be generated in two passes instead of one.

Unfortunately, the stack adjustment produced by the first pass was not
recognized by X86FrameLowering::mergeSPUpdates() in all cases, causing
code such as the following, which clobbers the return address, to be
generated:

popl    %edi
popl    %edi
pushl   %eax
jmp     tailcallee              # TAILCALL

To fix the problem, the entire stack adjustment is performed in
X86ExpandPseudo::ExpandMI() for tail calls.

Patch by Magnus Lång <margnus1@gmail.com>

Differential Revision: http://reviews.llvm.org/D21325

llvm-svn: 275103
2016-07-11 21:03:03 +00:00
Sanjay Patel
6be59af417 fix documentation comments; NFC
llvm-svn: 275101
2016-07-11 20:50:39 +00:00
Alina Sbirlea
7abab61a0e Add TLI.allowsMisalignedMemoryAccesses to LoadStoreVectorizer
Summary: Extend TTI to access TLI.allowsMisalignedMemoryAccesses(). Check the condition when vectorizing load and store chains.
Add additional parameters: AddressSpace, Alignment, Fast.

Reviewers: llvm-commits, jlebar

Subscribers: arsenm, mzolotukhin

Differential Revision: http://reviews.llvm.org/D21935

llvm-svn: 275100
2016-07-11 20:46:17 +00:00
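
An illustrative model of the new gating check; the struct below stands in for
the TTI/TLI interface, and only the parameter names come from the commit
message:

  // Stand-in for the target interface exposing
  // allowsMisalignedMemoryAccesses(); the policy below is a toy.
  struct TargetInfoModel {
    bool allowsMisalignedMemoryAccesses(unsigned SizeInBits,
                                        unsigned AddressSpace,
                                        unsigned Alignment,
                                        bool *Fast) const {
      *Fast = Alignment * 8 >= SizeInBits; // toy policy, not real target logic
      return true;
    }
  };

  // Before widening a chain of loads/stores, ask the target whether the
  // wider, possibly misaligned access is both legal and fast.
  bool chainIsLegal(const TargetInfoModel &TTI, unsigned SizeInBits,
                    unsigned AddressSpace, unsigned Alignment) {
    bool Fast = false;
    return TTI.allowsMisalignedMemoryAccesses(SizeInBits, AddressSpace,
                                              Alignment, &Fast) &&
           Fast;
  }
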
Michael Kuperstein
e6e21159a7 [X86] Disable FixupSetCC for CodeGenOpt::None
It is an optimization pass and should not run at -O0, especially since the fast
register allocator will not do the required register coalescing anyway, so
running it there is a loss even from the optimization standpoint.

This also works around (but doesn't quite fix) PR28489.

llvm-svn: 275099
2016-07-11 20:40:44 +00:00
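
A hedged sketch of what gating a pass on the optimization level looks like;
the names are assumed and the enum merely models CodeGenOpt:

  enum class OptLevel { None, Less, Default, Aggressive }; // models CodeGenOpt

  bool runFixupSetCCSketch(OptLevel Level) {
    // Purely an optimization: at -O0 the fast register allocator would not
    // coalesce the copies this pass introduces, so skip it entirely.
    if (Level == OptLevel::None)
      return false;
    // ... perform the actual setcc fixup here ...
    return true; // report that the function was modified
  }
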
Chad Rosier
151288c681 [IPRA] Properly compute register usage at call sites.
Differential Revision: http://reviews.llvm.org/D21395
Patch by Vivek Pandya.
PR28144

llvm-svn: 275087
2016-07-11 18:45:49 +00:00
Zhan Jun Liau
f92effb44f [SystemZ] Recognize Load On Condition Immediate (LOCHI/LOGHI) opportunities
Summary: Add support for the z13 instructions LOCHI and LOCGHI, which
conditionally load immediate values.  Add target instruction info hooks so
that if-conversion will allow predication of LHI/LGHI.

Author: RolandF

Reviewers: uweigand

Subscribers: zhanjunl

Committing on behalf of Roland.

Differential Revision: http://reviews.llvm.org/D22117

llvm-svn: 275086
2016-07-11 18:45:03 +00:00
Davide Italiano
2f0dd2eaeb [SCCP] Try to follow the DRY principle, use OpSt.
Thanks to Eli Friedman for pointing out in his post-commit review!

llvm-svn: 275084
2016-07-11 18:21:29 +00:00
Jingyue Wu
b465fe9970 [SLSR] Call getPointerSizeInBits with the correct address space.
llvm-svn: 275083
2016-07-11 18:13:28 +00:00
Davide Italiano
fd6ab00139 [PM/IPO] Port LowerTypeTests to the new PassManager.
There's a little bit of churn in this patch because the initialization
mechanism is now shared between the old and the new PM. Other than
that, it's just a pretty mechanical translation.

llvm-svn: 275082
2016-07-11 18:10:06 +00:00
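
For reference, a sketch of the general shape a new-PM port takes; this is the
standard PassInfoMixin pattern, not the exact contents of the patch:

  #include "llvm/IR/PassManager.h"

  // New-PM passes are plain classes exposing run(); the shared
  // implementation can also be invoked from the legacy wrapper pass.
  class LowerTypeTestsPassSketch
      : public llvm::PassInfoMixin<LowerTypeTestsPassSketch> {
  public:
    llvm::PreservedAnalyses run(llvm::Module &M,
                                llvm::ModuleAnalysisManager &AM) {
      bool Changed = false;
      // ... run the shared LowerTypeTests implementation ...
      return Changed ? llvm::PreservedAnalyses::none()
                     : llvm::PreservedAnalyses::all();
    }
  };
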
Jacques Pienaar
6194a54b7e [lanai] Add more tests for assembly of conditional ALU ops
llvm-svn: 275081
2016-07-11 17:58:16 +00:00
Dehao Chen
8b4fe9a409 Fix the assertion failure caused by http://reviews.llvm.org/D22118
Summary: http://reviews.llvm.org/D22118 uses metadata to store the call count, which makes it possible for branch weights to have only one element. Also fix the assertion failure in the inliner by including the "invoke" instruction when checking the instruction type.

Reviewers: mkuper, dnovillo

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D22228

llvm-svn: 275079
2016-07-11 17:36:02 +00:00
David Majnemer
18cc889bef [IR] Stop a -Wsign-compare warning from firing
llvm-svn: 275077
2016-07-11 17:09:06 +00:00
Davide Italiano
f02633eea9 [LowerTypeTests] Don't rely on doInitialization().
In preparation for porting this pass to the new PM (which has no
doInitialization()).

Differential Revision:  http://reviews.llvm.org/D22223

llvm-svn: 275074
2016-07-11 17:00:31 +00:00
Dehao Chen
0f5497429b Implement callsite-hotness based inline cost for Sample-based PGO
Summary:
For sample-based PGO, using BFI to calculate the callsite count is sometimes inaccurate. This is because, with a sampling-based approach, if a callsite resides in a hot loop deeply nested in a bunch of cold branches, the callsite's BFI frequency would be calculated inaccurately due to the lack of samples in the cold branches.

E.g.

if (A1 && A2 && A3 && ..... && A10) {  // guard branches see ~no samples
  for (i=0; i < 100000000; i++) {      // hot loop dominates all samples
    callsite();                        // 1000 samples observed here
  }
}

Assume that A1 to A10 are all 100% taken, and the callsite has 1000 samples and thus is considered hot. Because the loop's trip count is huge, it is normal that all branches outside the loop have no samples at all. As a result, we can only use static branch probability to derive the frequency of the loop header. Assuming the static heuristic thinks each branch is 50% taken, the count calculated from BFI will be 1/(2^10) of the actual value.

In order to get more accurate callsite count, we directly annotate the weight on the call instruction, and directly use it when checking callsite hotness.

Note that this mechanism can also be shared by instrumentation-based callsite hotness analysis. The side benefit is that it breaks the dependency from the Inliner to BFI, as the call count is embedded in the IR.

Reviewers: davidxl, eraman, dnovillo

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D22118

llvm-svn: 275073
2016-07-11 16:48:54 +00:00
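
A sketch of the annotation step (assumed shape, not the exact patch): the
sample count is attached to the call as single-element branch_weights
metadata, which is what lets the inliner read hotness directly from the IR:

  #include <cstdint>
  #include "llvm/IR/Instructions.h"
  #include "llvm/IR/LLVMContext.h"
  #include "llvm/IR/MDBuilder.h"

  // Attach a call count to CI as !prof branch_weights metadata with a
  // single element.
  void annotateCallCount(llvm::CallInst *CI, uint32_t Count) {
    llvm::MDBuilder MDB(CI->getContext());
    CI->setMetadata(llvm::LLVMContext::MD_prof,
                    MDB.createBranchWeights({Count}));
  }
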
Dehao Chen
febf32fae1 Tune the weight propagation algorithm for sample profile.
Summary: Handle the case when there is only one incoming/outgoing edge for a visited basic block: use the block weight to adjust the edge weight even when the edge has been visited before. This can help reduce inaccuracies introduced by incorrect basic block profiles, as shown in the updated unit test.

Reviewers: davidxl, dnovillo

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D22180

llvm-svn: 275072
2016-07-11 16:40:17 +00:00
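
A toy model of the single-edge rule described above; the data structures are
illustrative, not the SampleProfile code:

  #include <cstdint>
  #include <vector>

  struct Edge { uint64_t Weight = 0; bool Visited = false; };
  struct Block { uint64_t Weight = 0; std::vector<Edge *> Incoming; };

  // When a visited block has exactly one incoming edge, that edge must carry
  // the whole block weight, so overwrite any previously derived value.
  void adjustSingleIncomingEdge(Block &B) {
    if (B.Incoming.size() == 1) {
      Edge &E = *B.Incoming.front();
      E.Weight = B.Weight;
      E.Visited = true;
    }
  }
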
Sanjay Patel
2aecec5c00 [x86] make some of the tests 256-bit for testing diversity
llvm-svn: 275070
2016-07-11 15:08:37 +00:00
Nirav Dave
6c7c56d57e Add missing include from previous commit
llvm-svn: 275069
2016-07-11 14:32:57 +00:00
Nirav Dave
209a4b5ef4 Fix branch relaxation in 16-bit mode.
Thread MCSubtargetInfo through to the relaxInstruction function, allowing
relaxation to generate jumps with 16-bit immediates in 16-bit mode.

This fixes PR22097.

Reviewers: dwmw2, tstellarAMD, craig.topper, jyknight

Subscribers: jfb, arsenm, jyknight, llvm-commits, dsanders

Differential Revision: http://reviews.llvm.org/D20830

llvm-svn: 275068
2016-07-11 14:23:53 +00:00
Sanjay Patel
8da2d8fa34 [x86] specify triple to avoid bot failures
llvm-svn: 275067
2016-07-11 14:17:54 +00:00
Nicolai Haehnle
ae937fec7b [Sink] Don't move calls to readonly functions across stores
Summary:

Reviewers: hfinkel, majnemer, tstellarAMD, sunfish

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D17279

llvm-svn: 275066
2016-07-11 14:11:51 +00:00
Nicolai Haehnle
3fa0c76630 AliasAnalysis: unify getModRefInfo(I, CS) semantics with other overloads
This subtle change to getModRefInfo(Instruction, ImmutableCallSite) ensures
that the semantics match those of getModRefInfo(CS1, CS2) when the
Instruction is a call site.

This is now more in line with getModRefInfo generally: it returns Mod when
I modifies a memory location that is accessed (read or written) by CS and
Ref when I reads a memory location that is written by CS.

From a grep of the code, the only uses of this particular getModRefInfo
overload are in MemorySSA and MemCpyOptimizer, and they only care about
whether the result is MR_NoModRef or not. Therefore, this change should have
no visible effect.

Separated out from D17279 upon request.

llvm-svn: 275065
2016-07-11 14:11:45 +00:00
Sanjay Patel
d6225c8cc2 [x86] update checks
llvm-svn: 275064
2016-07-11 14:07:31 +00:00
Simon Pilgrim
76a847bf01 [X86][SSE] Generalise target shuffle combine of shuffles using variable masks
At present, the only shuffle with a variable mask we recognise is PSHUFB, which influences whether it's worth the cost of mask creation/loading for a combined target shuffle with a variable mask. This change sets up the infrastructure to support other shuffles in the future but has no effect yet.

llvm-svn: 275059
2016-07-11 12:49:35 +00:00
Nirav Dave
483683bb34 Provide support for preserving assembly comments
Preserve assembly comments from the input in the output assembly, and add
flags to toggle this behavior. It is on by default for inline assembly and
off in llvm-mc.

Parsed comments are emitted immediately before an EOL which generally
places them on the expected line.

Reviewers: rtrieu, dwmw2, rnk, majnemer

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D20020

llvm-svn: 275058
2016-07-11 12:42:14 +00:00
Artem Tamazov
f16982e6bd [AMDGPU][llvm-mc] Quickfix for r272748 to enable labels in branch instructions.
Fixes issue mentioned at:
  https://github.com/RadeonOpenCompute/LLVM-AMDGPU-Assembler-Extra/issues/13.
Lit tests added.

Differential Revision: http://reviews.llvm.org/D22133

llvm-svn: 275054
2016-07-11 12:07:18 +00:00
Zlatko Buljan
25db5cab21 [mips][microMIPS] Implement LDC1, SDC1, LDC2, SDC2, LWC1, SWC1, LWC2 and SWC2 instructions and add CodeGen support
Differential Revision: http://reviews.llvm.org/D18824

llvm-svn: 275050
2016-07-11 07:41:56 +00:00
Elena Demikhovsky
2df828f30a AVX-512: DAG lowering for scalar MIN/MAX commutable ops
DAG lowering was missing for the scalar FMINC, FMAXC nodes.
The nodes are generated only in the "unsafe-fp-math" mode.
Added tests.

llvm-svn: 275048
2016-07-11 06:08:06 +00:00
Craig Topper
ba1b8635b0 [AVX512] Add support for 512-bit ANDN now that all-ones build vectors survive long enough to allow the matching.
llvm-svn: 275046
2016-07-11 05:36:53 +00:00
Craig Topper
b6ac80a1ab [AVX512] Use vpternlog with an immediate of 0xff to create 512-bit all one vectors.
llvm-svn: 275045
2016-07-11 05:36:48 +00:00
Craig Topper
2c851cdb56 [X86] Add the AVX512 SET0 pseudos to foldMemoryOperandImpl since they are marked for CanFoldAsLoad.
I don't really know how to test this.

llvm-svn: 275044
2016-07-11 05:36:41 +00:00
Hal Finkel
3aee15342e Revert r275027 - Let FuncAttrs infer the 'returned' argument attribute
Reverting r275027 and r275033. These seem to cause miscompiles on the AArch64 buildbot.

llvm-svn: 275042
2016-07-11 04:51:23 +00:00
Daniel Berlin
66bd539212 Allow BasicBlockEdge to be used in DenseMap
Summary: Add a DenseMapInfo specialization for BasicBlockEdge

Reviewers: hfinkel, chandlerc, majnemer

Differential Revision: http://reviews.llvm.org/D22207

llvm-svn: 275041
2016-07-11 04:37:53 +00:00
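
The general shape such a specialization takes; MyEdge stands in for
BasicBlockEdge here, and the real key derivation is in the patch:

  #include "llvm/ADT/DenseMap.h"
  #include "llvm/ADT/Hashing.h"

  struct MyEdge { void *From; void *To; };

  namespace llvm {
  // DenseMap keys need empty/tombstone sentinels, a hash, and equality.
  template <> struct DenseMapInfo<MyEdge> {
    static MyEdge getEmptyKey() {
      return {DenseMapInfo<void *>::getEmptyKey(),
              DenseMapInfo<void *>::getEmptyKey()};
    }
    static MyEdge getTombstoneKey() {
      return {DenseMapInfo<void *>::getTombstoneKey(),
              DenseMapInfo<void *>::getTombstoneKey()};
    }
    static unsigned getHashValue(const MyEdge &E) {
      return static_cast<unsigned>(hash_combine(E.From, E.To));
    }
    static bool isEqual(const MyEdge &A, const MyEdge &B) {
      return A.From == B.From && A.To == B.To;
    }
  };
  } // namespace llvm

With a specialization like this in place, DenseMap<MyEdge, unsigned> works as
usual.
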
Hal Finkel
122a57d52a Pointer-comparison folding should look through returned-argument functions
For functions which are known to return a specific argument, pointer-comparison
folding can look through the function calls as part of its analysis.

Differential Revision: http://reviews.llvm.org/D9387

llvm-svn: 275039
2016-07-11 03:37:59 +00:00
Hal Finkel
392961ad04 Teach isDereferenceablePointer to look through returned-argument functions
For functions which are known to return their argument,
isDereferenceableAndAlignedPointer can examine the argument value.

Differential Revision: http://reviews.llvm.org/D9384

llvm-svn: 275038
2016-07-11 03:08:49 +00:00
Hal Finkel
f9c4041c84 Teach SCEV to look through returned-argument functions
When building SCEVs, if a function is known to return its argument, then we can
build the SCEV using the corresponding argument value.

Differential Revision: http://reviews.llvm.org/D9381

llvm-svn: 275037
2016-07-11 02:48:23 +00:00
Hal Finkel
66f66627ed Teach computeKnownBits to look through returned-argument functions
If a function is known to return one of its arguments, we can use that in order
to compute known bits of the return value.

Differential Revision: http://reviews.llvm.org/D9397

llvm-svn: 275036
2016-07-11 02:25:14 +00:00
Hal Finkel
9dd10de0f9 BasicAA should look through functions with returned arguments
Motivated by the work on the llvm.noalias intrinsic, teach BasicAA to look
through returned-argument functions when answering queries. This is essential
so that we don't lose all other AA information when supplementing with
llvm.noalias.

Differential Revision: http://reviews.llvm.org/D9383

llvm-svn: 275035
2016-07-11 01:32:20 +00:00
Hal Finkel
ad4e59155b Add a 'Returned' intrinsic property corresponding to the 'returned' argument attribute
This will be used by the upcoming llvm.noalias intrinsic.

Differential Revision: http://reviews.llvm.org/D22201

llvm-svn: 275034
2016-07-11 01:28:42 +00:00
Hal Finkel
94d6a0faef Don't use a SmallSet for returned attribute inference
Suggested post-commit by David Majnemer on IRC (following-up on a pre-commit
review comment).

llvm-svn: 275033
2016-07-11 01:14:21 +00:00
Hal Finkel
55a12f3927 Add getReturnedArgOperand to Call/InvokeInst, CallSite
In order to make the optimizer smarter about using the 'returned' argument
attribute (generally, but motivated by my llvm.noalias intrinsic work), add a
utility function to Call/InvokeInst, and CallSite, to make it easy to get the
returned call argument (when one exists).

P.S. There is already an unfortunate amount of code duplication between
CallInst and InvokeInst, and this adds to it. We should probably clean that up
separately.

Differential Revision: http://reviews.llvm.org/D22204

llvm-svn: 275031
2016-07-10 23:01:32 +00:00
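
A hypothetical use of the new accessor, assuming the shape described above:

  #include "llvm/IR/CallSite.h"
  #include "llvm/IR/Value.h"

  // If the call is known to return one of its arguments, analyses can look
  // through the call to that argument.
  llvm::Value *lookThroughReturnedArg(llvm::CallSite CS) {
    if (llvm::Value *Arg = CS.getReturnedArgOperand())
      return Arg; // the call always returns this argument
    return CS.getInstruction(); // otherwise, keep the call itself
  }
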
Simon Pilgrim
7492f5fd1a [X86][SSE] Relax type assertions for matchVectorShuffleAsInsertPS
Calls to matchVectorShuffleAsInsertPS only need to ensure the inputs are 128-bit vectors. Only lowerVectorShuffleAsInsertPS needs to ensure that they are v4f32.

llvm-svn: 275028
2016-07-10 22:26:05 +00:00
Hal Finkel
6271874a51 Let FuncAttrs infer the 'returned' argument attribute
A function can have one argument with the 'returned' attribute, indicating
that this argument is always the return value of the function. Add
FuncAttrs inference logic.

Differential Revision: http://reviews.llvm.org/D22202

llvm-svn: 275027
2016-07-10 22:02:55 +00:00
Hal Finkel
f6aeef6f80 Update the LangRef description of the 'returned' attribute
The description of the 'returned' attribute says that it is only used when
code-generating the caller. I'd like to make the optimizer smarter about
looking through functions with returned arguments (generally, but motivated by
my llvm.noalias work). As David pointed out in the review of D22202, the
LangRef should be updated to make its expanded uses clearer.

Differential Revision: http://reviews.llvm.org/D22205

llvm-svn: 275026
2016-07-10 21:52:39 +00:00