mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-19 19:12:56 +02:00
Commit Graph

123340 Commits

Author SHA1 Message Date
Xinliang David Li
3edbd8ba8a [PGO] Use template file to define runtime structures
With this change, instrumentation code and reader/writer
code related to profile data structs are kept strictly
in sync. This will be extended to cfe and compiler-rt
references as well.

Differential Revision: http://reviews.llvm.org/D13843

llvm-svn: 252113
2015-11-05 00:47:26 +00:00
Mehdi Amini
c56e727526 Fix Abbrev emission in WriteIdentificationBlock
This Abbrev was not emitted and basically unused, just leaking there.

From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 252110
2015-11-05 00:25:03 +00:00
Rafael Espindola
1d7efb8a20 Fix pr24832.
It is pretty simple now that the yak is shaved.

llvm-svn: 252105
2015-11-05 00:10:08 +00:00
Rafael Espindola
4535122e7b Simplify now that emitValueToOffset always returns false.
llvm-svn: 252102
2015-11-04 23:59:18 +00:00
Rafael Espindola
b32d97ac38 Simplify .org processing and make it a bit more powerful.
We now always create the fragment, which lets us handle things like .org after
a .align.

llvm-svn: 252101
2015-11-04 23:50:29 +00:00
Xinliang David Li
3c4afd7651 Define portable macros for packed struct definitions:
1. A macro with an argument: LLVM_PACKED(StructDefinition)
2. A pair of macros defining the scope of a packed region:
    LLVM_PACKED_START
     struct A { ... };
     struct B { ... };
    LLVM_PACKED_END
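
A minimal sketch of how such packing macros are commonly implemented and used
(illustrative only: the real definitions live in llvm/Support/Compiler.h and may
differ in detail, and ProfileHeader is a made-up name):

  #include <stdint.h>

  #if defined(_MSC_VER)
  #define LLVM_PACKED(Decl) __pragma(pack(push, 1)) Decl __pragma(pack(pop))
  #define LLVM_PACKED_START __pragma(pack(push, 1))
  #define LLVM_PACKED_END   __pragma(pack(pop))
  #else
  #define LLVM_PACKED(Decl) Decl __attribute__((packed))
  #define LLVM_PACKED_START _Pragma("pack(push, 1)")
  #define LLVM_PACKED_END   _Pragma("pack(pop)")
  #endif

  // A single packed struct (hypothetical example type).
  LLVM_PACKED(struct ProfileHeader {
    uint64_t Magic;
    uint32_t Version;
    uint8_t  Kind;
  });

  // A whole region of packed definitions.
  LLVM_PACKED_START
  struct A { uint32_t X; uint8_t Y; };  // sizeof(A) == 5 with packing
  struct B { uint16_t Z; uint8_t W; };  // sizeof(B) == 3 with packing
  LLVM_PACKED_END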

Differential Revision: http://reviews.llvm.org/D14337

llvm-svn: 252099
2015-11-04 23:42:56 +00:00
Davide Italiano
c3b20ee04f [SimplifyLibCalls] New transformation: tan(atan(x)) -> x
This is enabled only under -ffast-math.
So, instead of emitting:
  4007b0:       50                      push   %rax
  4007b1:       e8 8a fd ff ff          callq  400540 <atanf@plt>
  4007b6:       58                      pop    %rax
  4007b7:       e9 94 fd ff ff          jmpq   400550 <tanf@plt>
  4007bc:       0f 1f 40 00             nopl   0x0(%rax)

for:
float mytan(float x) {
  return tanf(atanf(x));
}
we emit a single retq.

Differential Revision:	 http://reviews.llvm.org/D14302

llvm-svn: 252098
2015-11-04 23:36:56 +00:00
Kostya Serebryany
9d859680b2 [libFuzzer] when choosing the next unit to mutate, give some preference to the most recent units (they are more likely to be interesting)
llvm-svn: 252097
2015-11-04 23:22:25 +00:00
Sanjay Patel
f9eb37ed52 fix typo; NFC
llvm-svn: 252096
2015-11-04 23:21:13 +00:00
Sanjoy Das
b36621a78e [CaptureTracking] Support operand bundles conservatively
Summary:
Earlier CaptureTracking would assume all "interesting" operands to a
call or invoke were its arguments.  With operand bundles this is no
longer true.

Note: an earlier change got `doesNotCapture` working correctly with
operand bundles.

This change uses DSE to test the changes to CaptureTracking.  DSE is a
vehicle for testing only, and is not directly involved in this change.

Reviewers: reames, majnemer

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D14306

llvm-svn: 252095
2015-11-04 23:21:06 +00:00
Chris Bieneman
c127f2db5e [CMake] Bug 25059 - CMake libllvm.so.$MAJOR.$MINOR shared object name not compatible with ldconfig
Summary:
This change makes the CMake build system generate libraries for Linux and Darwin matching the makefile build system.

Linux libraries follow the pattern lib${name}.${MAJOR}.${MINOR}.so so that ldconfig won't pick it up incorrectly.

Darwin libraries are not versioned.

Note: On Linux the non-versioned symlink is generated at install time, not build time. I plan to fix that eventually, but I expect that is good enough for the purposes of fixing this bug.

Reviewers: loladiro, tstellarAMD

Subscribers: axw, llvm-commits

Differential Revision: http://reviews.llvm.org/D13841

llvm-svn: 252093
2015-11-04 23:11:12 +00:00
Rafael Espindola
6cc9101fe1 Slightly saner handling of thumb branches.
The generic infrastructure already did a lot of work to decide if the
fixup value is known or not. It doesn't make sense to reimplement a very
basic case: same fragment.

llvm-svn: 252090
2015-11-04 23:00:39 +00:00
Quentin Colombet
dacc177d59 [x86] Teach the shrink-wrapping hooks to do the proper thing with Win64.
Win64 has some strict requirements for the epilogue. As a result, we disable
shrink-wrapping for Win64 unless the block that gets the epilogue is already an
exit block.

Fixes PR24193.

llvm-svn: 252088
2015-11-04 22:37:28 +00:00
Eugene Zelenko
7c062ccc18 Fix some Clang-tidy modernize warnings, other minor fixes.
Fixed warnings are: modernize-use-override, modernize-use-nullptr and modernize-redundant-void-arg.

Differential revision: http://reviews.llvm.org/D14312

llvm-svn: 252087
2015-11-04 22:32:32 +00:00
Justin Bogner
4f0860a6c9 PM: Rephrase PrintLoopPass as a wrapper around a new-style pass. NFC
Splits PrintLoopPass into a new-style pass and a PrintLoopPassWrapper,
much like we already do for PrintFunctionPass and PrintModulePass.

llvm-svn: 252085
2015-11-04 22:24:08 +00:00
Cong Hou
8e8cd88013 Add new interfaces to MBB for manipulating successors with probabilities instead of weights. NFC.
This is part-1 of the patch that replaces all edge weights in MBB by
probabilities, which only adds new interfaces. No functional changes.

Differential revision: http://reviews.llvm.org/D13908

llvm-svn: 252083
2015-11-04 21:37:58 +00:00
Simon Pilgrim
ea116cf21c Warning fix.
llvm-svn: 252078
2015-11-04 21:27:22 +00:00
Sanjoy Das
8361535ee5 [IR] Add a data_operand abstraction
Summary:
Data operands of a call or invoke consist of the call arguments, and
the bundle operands associated with the `call` (or `invoke`)
instruction.  The motivation for this change is that we'd like to be
able to query "argument attributes" like `readonly` and `nocapture`
for bundle operands naturally.

This change also provides a conservative "implementation" for these
attributes for any bundle operand, and an extension point for future
work.
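
A rough sketch of the kind of query this enables (illustrative only: the iterator
names data_operands_begin()/data_operands_end() are assumed here, and
collectDataOperands is a made-up helper, not code from the patch):

  #include "llvm/ADT/SmallVector.h"
  #include "llvm/IR/CallSite.h"
  #include "llvm/IR/Value.h"
  using namespace llvm;

  // Walk the data operands of a call/invoke: the call arguments plus any
  // operand-bundle operands, but not the callee or other non-data operands.
  static void collectDataOperands(ImmutableCallSite CS,
                                  SmallVectorImpl<const Value *> &Out) {
    for (auto I = CS.data_operands_begin(), E = CS.data_operands_end();
         I != E; ++I)
      Out.push_back(*I);
  }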

Reviewers: chandlerc, majnemer, reames

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D14305

llvm-svn: 252077
2015-11-04 21:05:24 +00:00
Tom Stellard
9c18b9c83d llvm-config: Add --has-rtti option
Summary:
This prints NO if LLVM was built with -fno-rtti or an equivalent flag
and YES otherwise.  The reasons to add -has-rtti rather than adding -fno-rtti
to --cxxflags are:

1. Building LLVM with -fno-rtti does not always mean that client
applications need this flag.

2. Some compilers have a different flag for disabling rtti, and the
compiler being used to build LLVM may not be the compiler being used to
build the application.

Reviewers: echristo, chandlerc, beanz

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D11849

llvm-svn: 252075
2015-11-04 20:57:43 +00:00
Simon Pilgrim
d1f5a2789e [X86][SSE] Add general memory folding for (V)INSERTPS instruction
This patch improves the memory folding of the inserted float element for the (V)INSERTPS instruction.

The existing implementation occurs in the DAGCombiner and relies on the narrowing of a whole vector load into a scalar load (and then converted into a vector) to (hopefully) allow folding to occur later on. Not only has this proven problematic for debug builds, it also prevents other memory folds (notably stack reloads) from happening.

This patch removes the old implementation and moves the folding code to the X86 foldMemoryOperand handler. A new private 'special case' function - foldMemoryOperandCustom - has been added to deal with memory folding of instructions that can't just use the lookup tables - (V)INSERTPS is the first of several that could be done.

It also tweaks the memory operand folding code with an additional pointer offset that allows existing memory addresses to be modified, in this case to convert the vector address to the explicit address of the scalar element that will be inserted.

Unlike the previous implementation, we now set the insertion source index to zero. Although this is ignored for the (V)INSERTPSrm version, anything that relied on shuffle decodes (such as unfolding of insertps loads) was incorrectly calculating the source address - I've added a test for this at insertps-unfold-load-bug.ll

Differential Revision: http://reviews.llvm.org/D13988

llvm-svn: 252074
2015-11-04 20:48:09 +00:00
Sanjoy Das
442b82fb2f [IR] Add bounds checking to paramHasAttr
Summary:
This is intended to make a later change simpler.

Note: adding this bounds checking required fixing `X86FastISel`.  As
far as I can tell I've preserved the original behavior, but a careful review
will be appreciated.

Reviewers: reames

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D14304

llvm-svn: 252073
2015-11-04 20:33:45 +00:00
David Blaikie
9a95592e5f Orc: Streamline some lambda usage in a unit test
llvm-svn: 252070
2015-11-04 19:43:24 +00:00
Rafael Espindola
92414e9e7f Relax the check for ninja.
On Fedora the ninja executable is called ninja-build :-(

llvm-svn: 252062
2015-11-04 19:18:11 +00:00
Andrew Kaylor
e42c061561 Created new X86 FMA3 opcodes (FMA*_Int) that are used now for lowering of scalar FMA intrinsics.
Patch by Slava Klochkov 

The key difference between FMA* and FMA*_Int opcodes is that FMA*_Int opcodes are handled more conservatively. It is illegal to commute the 1st operand of FMA*_Int instructions: the upper bits of the scalar FMA intrinsic result must be taken from the 1st operand, and such a commute transformation would change those upper bits and invalidate the intrinsic's result.
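
A minimal illustration of why the first operand is special for the scalar forms
(context only, not code from the patch; scalar_fma is a made-up name):

  #include <immintrin.h>

  // For the scalar FMA intrinsic, only element 0 is computed; the upper
  // elements of the result are copied from the FIRST source operand, so
  // commuting operand 1 would change the upper bits of the result.
  __m128 scalar_fma(__m128 a, __m128 b, __m128 c) {
    // dst[31:0]   = a[31:0] * b[31:0] + c[31:0]
    // dst[127:32] = a[127:32]
    return _mm_fmadd_ss(a, b, c);
  }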

Reviewers: Quentin Colombet, Elena Demikhovsky

Differential Revision: http://reviews.llvm.org/D13710

llvm-svn: 252060
2015-11-04 18:10:41 +00:00
James Molloy
39326e32ea [ARM] Combine CMOV into BFI where possible
If we have a CMOV, OR and AND combination such as:
  if (x & CN)
    y |= CM;

And:
  * CN is a single bit;
  * All bits covered by CM are known zero in y;

Then we can convert this to a sequence of BFI instructions. This will always be a win if CM is a single bit, will always be no worse than the TST & OR sequence if CM is two bits, and for Thumb will be no worse if CM is three bits (due to the extra IT instruction).

llvm-svn: 252057
2015-11-04 16:55:07 +00:00
Teresa Johnson
fb3e404cde [ThinLTO] Always set linkage type to external when converting alias
When converting an alias to a non-alias when the aliasee is not
imported, ensure that the linkage type is set to external so that it is
a valid linkage type. Added a test case that exposed this issue.

llvm-svn: 252054
2015-11-04 16:01:16 +00:00
James Molloy
17b2066590 [SimplifyCFG] Merge conditional stores
We can often end up with conditional stores that cannot be speculated. They can come from fairly simple, idiomatic code:

  if (c & flag1)
    *a = x;
  if (c & flag2)
    *a = y;
  ...

There is no dominating or post-dominating store to a, so it is not legal to move the store unconditionally to the end of the sequence and cache the intermediate result in a register, as we would like to.

It is, however, legal to merge the stores together and do the store once:

  tmp = undef;
  if (c & flag1)
    tmp = x;
  if (c & flag2)
    tmp = y;
  if (c & flag1 || c & flag2)
    *a = tmp;

The real power in this optimization is that it allows arbitrary length ladders such as these to be completely and trivially if-converted. The typical code I'd expect this to trigger on often uses binary-AND with constants as the condition (as in the above example), which means the ending condition can simply be truncated into a single binary-AND too: 'if (c & (flag1|flag2))'. As in the general case there are bitwise operators here, the ladder can often be optimized further too.

This optimization involves potentially increasing register pressure. Even in the simplest case, the lifetime of the first predicate is extended. This can be elided in some cases such as using binary-AND on constants, but not in the general case. Threading 'tmp' through all branches can also increase register pressure.

The optimization as implemented in this patch is enabled by default but kept in a very conservative mode. It will only optimize if it thinks the resultant code should be if-convertible, and additionally if it can thread 'tmp' through at least one existing PHI, so it will only ever in the worst case create one more PHI and extend the lifetime of a predicate.

This doesn't trigger much in LNT, unfortunately, but it does trigger in a big way in a third party test suite.

llvm-svn: 252051
2015-11-04 15:28:04 +00:00
Filipe Cabecinhas
e215a5f192 Error out when faced with value names containing '\0'
Bug found with afl-fuzz.

llvm-svn: 252048
2015-11-04 14:53:36 +00:00
Aaron Ballman
52c1d43c18 Silence an extra semicolon warning; NFC.
llvm-svn: 252046
2015-11-04 14:40:54 +00:00
Michael Kuperstein
7726e4d796 [ELF] elfiamcu triple should imply e_machine == EM_IAMCU
Differential Revision: http://reviews.llvm.org/D14109

llvm-svn: 252043
2015-11-04 11:21:50 +00:00
Michael Kuperstein
715d358c66 [X86] DAGCombine should not introduce FILD in soft-float mode
The x86 "sitofp i64 to double" dag combine, in 32-bit mode, lowers sitofp 
directly to X86ISD::FILD (or FILD_FLAG). This should not be done in soft-float mode.
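
An illustrative source pattern for the combine in question (not a test from the
patch; the function name is made up):

  // In a 32-bit soft-float build, this conversion must go through a library
  // routine (e.g. from compiler-rt) rather than the x87 FILD instruction, so
  // the DAG combine that forms X86ISD::FILD has to be skipped.
  double i64_to_double(long long x) {
    return (double)x;   // "sitofp i64 to double" at the IR level
  }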

llvm-svn: 252042
2015-11-04 11:17:53 +00:00
James Molloy
080c19dae1 Revert "[PatternMatch] Switch to use ValueTracking::matchSelectPattern"
This was breaking the modules build and is being reverted while we reach consensus on the right way to solve this layering problem. This reverts commit r251785.

llvm-svn: 252040
2015-11-04 08:36:53 +00:00
Pawel Bylica
12fbf20eab Fix unit tests on Windows: handle env vars with non-ASCII chars.
Summary: On Windows we have to take UTF-16 encoded env vars and convert them to UTF-8. This patch fixes the CopyEnvironment helper function used by the process unit tests.
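
A rough sketch of the kind of conversion involved (illustrative only: this is
not the actual CopyEnvironment fix, and narrowFromWide is a made-up helper):

  #include <windows.h>
  #include <string>

  // Convert one UTF-16 environment string (e.g. from _wenviron) to UTF-8.
  static std::string narrowFromWide(const std::wstring &W) {
    if (W.empty())
      return std::string();
    int Len = WideCharToMultiByte(CP_UTF8, 0, W.data(), (int)W.size(),
                                  nullptr, 0, nullptr, nullptr);
    std::string Out(Len > 0 ? Len : 0, '\0');
    if (Len > 0)
      WideCharToMultiByte(CP_UTF8, 0, W.data(), (int)W.size(),
                          &Out[0], Len, nullptr, nullptr);
    return Out;
  }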

Reviewers: yaron.keren

Subscribers: yaron.keren, llvm-commits

Differential Revision: http://reviews.llvm.org/D14278

llvm-svn: 252039
2015-11-04 08:25:20 +00:00
Sanjoy Das
35a16a42b9 [OperandBundles] Refactor; NFCI.
Extract out a helper function `operandBundleFromBundleOpInfo`.

llvm-svn: 252038
2015-11-04 04:31:21 +00:00
Sanjoy Das
ce6d998be5 [OperandBundles] Refactor; NFCI
Intended to make later changes simpler.  Exposes
`getBundleOperandsStartIndex` and `getBundleOperandsEndIndex`, and uses
them for the computation in `getNumTotalBundleOperands`.
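
The computation mentioned above plausibly boils down to something like this
(a sketch based on the names in the message, not the exact diff):

  // Total number of operand-bundle operands on this call/invoke: everything
  // between the first and one-past-the-last bundle operand index.
  unsigned getNumTotalBundleOperands() const {
    if (!hasOperandBundles())
      return 0;
    return getBundleOperandsEndIndex() - getBundleOperandsStartIndex();
  }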

llvm-svn: 252037
2015-11-04 04:31:06 +00:00
Philip Reames
93ead3ba46 [LVI] Update a comment to clarify what's actually happening and why
llvm-svn: 252033
2015-11-04 01:47:04 +00:00
Philip Reames
d4059ff8b7 [CVP] Fold return values if possible
In my previous change to CVP (251606), I made CVP much more aggressive about trying to constant fold comparisons. This patch is a reversal in direction. Rather than being aggressive about every compare, we restore the non-block-local restriction for most, and then try hard for compares feeding returns.

The motivation for this is two fold:
 * The more I thought about it, the less comfortable I got with the possible compile time impact of the other approach. There have been no reported issues, but after talking to a couple of folks, I've come to the conclusion the time probably isn't justified.
 * It turns out we need to know the context to leverage the full power of LVI. In particular, asking about something at the end of its block (the use of a compare in a return) will frequently get more precise results than something in the middle of a block. This is an implementation detail, but it's also hard to get around since mid-block queries have to reason about possible throwing instructions and don't get to use most of LVI's block-focused infrastructure. This will become particularly important when combined with http://reviews.llvm.org/D14263.

Differential Revision: http://reviews.llvm.org/D14271

llvm-svn: 252032
2015-11-04 01:43:54 +00:00
Igor Laevsky
c5001abb24 [StatepointLowering] Remove distinction between call and invoke safepoints
There is no point in having invoke safepoints handled differently from
call safepoints. All relevant decisions can be made by looking at whether
or not gc.result and gc.relocate lie in the same basic block. This change
will allow lowering call safepoints whose relocates and results are in
different basic blocks. See the test case for an example.

Differential Revision: http://reviews.llvm.org/D14158

llvm-svn: 252028
2015-11-04 01:16:10 +00:00
Alexey Samsonov
5c12917a28 Fix the test case for Windows.
llvm-svn: 252027
2015-11-04 01:09:37 +00:00
Alexey Samsonov
4cb73711bb [LLVMSymbolize] Reduce indentation by using helper function. NFC.
llvm-svn: 252022
2015-11-04 00:30:26 +00:00
Alexey Samsonov
27ddee3db7 [LLVMSymbolize] Properly propagate object parsing errors from the library.
llvm-svn: 252021
2015-11-04 00:30:24 +00:00
Alexey Samsonov
198315389a [llvm-symbolizer] Improve the test for missing input file.
llvm-svn: 252020
2015-11-04 00:30:19 +00:00
Adam Nemet
24ab61577e Fix unused variable warning from r252017
llvm-svn: 252019
2015-11-04 00:10:33 +00:00
Adam Nemet
933537bc6b LLE 6/6: Add LoopLoadElimination pass
Summary:
The goal of this pass is to perform store-to-load forwarding across the
backedge of a loop.  E.g.:

  for (i)
     A[i + 1] = A[i] + B[i]

  =>

  T = A[0]
  for (i)
     T = T + B[i]
     A[i + 1] = T

The pass relies on loop dependence analysis via LoopAccessAnalysis to
find opportunities for loop-carried dependences with a distance of one
between a store and a load.  Since it's using LoopAccessAnalysis, it was
easy to also add support for versioning away may-aliasing intervening
stores that would otherwise prevent this transformation.

This optimization is also performed by Load-PRE in GVN without the
option of multi-versioning.  As was discussed with Daniel Berlin in
http://reviews.llvm.org/D9548, this is inferior to a more loop-aware
solution applied here.  Hopefully, we will be able to remove some
complexity from GVN/MemorySSA as a consequence.

In the long run, we may want to extend this pass (or create a new one if
there is little overlap) to also eliminate loop-independent redundant
loads and stores that *require* versioning due to may-aliasing
intervening stores/loads.  I have some motivating cases for store
elimination. My plan right now is to wait for MemorySSA to come online
first rather than using memdep for this.

The main motivation for this pass is the 456.hmmer loop in SPECint2006
where after distributing the original loop and vectorizing the top part,
we are left with the critical path exposed in the bottom loop.  Being
able to promote the memory dependence into a register dependence (even
though the HW does perform store-to-load forwarding as well) results in a
major gain (~20%).  This gain also transfers over to x86: it's
around 8-10%.

Right now the pass is off by default and can be enabled
with -enable-loop-load-elim.  On the LNT testsuite, there are two
performance changes (negative number -> improvement):

  1. -28% in Polybench/linear-algebra/solvers/dynprog: the length of the
     critical paths is reduced
  2. +2% in Polybench/stencils/adi: Unfortunately, I couldn't reproduce this
     outside of LNT

The pass is scheduled after the loop vectorizer (which is after loop
distribution).  The rationale is to try to reuse LAA state, rather than
recomputing it.  The order between LV and LLE is not critical because
normally LV does not touch scalar st->ld forwarding cases where
vectorizing would prevent the CPU's st->ld forwarding from kicking in.

LoopLoadElimination requires LAA to provide the full set of dependences
(including forward dependences).  LAA is known to omit loop-independent
dependences in certain situations.  The big comment before
removeDependencesFromMultipleStores explains why this should not occur
for the cases that we're interested in.

Reviewers: dberlin, hfinkel

Subscribers: junbuml, dberlin, mssimpso, rengolin, sanjoy, llvm-commits

Differential Revision: http://reviews.llvm.org/D13259

llvm-svn: 252017
2015-11-03 23:50:08 +00:00
Adam Nemet
6d8e7bca0f [LAA] LLE 5/6: Add predicate functions Dependence::isForward/isBackward, NFC
Summary: Will be used by the LoopLoadElimination pass.
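
A hypothetical client-side use, just to show the shape of the new predicates
(only the isForward()/isBackward() names come from this change; the surrounding
code and names are made up):

  #include "llvm/ADT/ArrayRef.h"
  #include "llvm/ADT/SmallVector.h"
  #include "llvm/Analysis/LoopAccessAnalysis.h"
  using namespace llvm;

  // Partition the dependences reported by LoopAccessAnalysis by direction.
  // (Deps would come from MemoryDepChecker::getDependences() in real code.)
  static void partitionByDirection(
      ArrayRef<MemoryDepChecker::Dependence> Deps,
      SmallVectorImpl<const MemoryDepChecker::Dependence *> &Forward,
      SmallVectorImpl<const MemoryDepChecker::Dependence *> &Backward) {
    for (const MemoryDepChecker::Dependence &Dep : Deps) {
      if (Dep.isForward())
        Forward.push_back(&Dep);
      else if (Dep.isBackward())
        Backward.push_back(&Dep);
    }
  }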

Reviewers: hfinkel

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D13258

llvm-svn: 252016
2015-11-03 23:50:03 +00:00
Adam Nemet
ea9a067ee3 [LAA] LLE 4/6: APIs to access the dependent instructions for a dependence, NFC
Summary:
The functions use the LAI and MemoryDepChecker classes, so they need to
be defined after those definitions, outside of the Dependence class.

Will be used by the LoopLoadElimination pass.

Reviewers: hfinkel

Subscribers: rengolin, llvm-commits

Differential Revision: http://reviews.llvm.org/D13257

llvm-svn: 252015
2015-11-03 23:49:58 +00:00
Peter Collingbourne
284b8079b3 CodeGen, Target: Move Mach-O-specific symbol name logic to Mach-O lowering.
A profile of an LTO link of Chrome revealed that we were spending some
~30-50% of execution time in the function Constant::getRelocationInfo(),
which is called from TargetLoweringObjectFile::getKindForGlobal() and in turn
from TargetMachine::getNameWithPrefix().

It turns out that we only need the result of getKindForGlobal() when
targeting Mach-O, so this change moves the relevant part of the logic to
TargetLoweringObjectFileMachO.

NFCI.

Differential Revision: http://reviews.llvm.org/D14168

llvm-svn: 252014
2015-11-03 23:40:03 +00:00
Matt Arsenault
510281e46d AMDGPU: Make flat_scratch name consistent
The printed name and the parsed assembler name weren't the same.
I'm not sure which name SC prints these as, but I think it's this one.

llvm-svn: 252010
2015-11-03 22:50:34 +00:00
Matt Arsenault
1e730f96ac AMDGPU: Fix asserts on invalid register ranges
If the requested SGPR was not actually aligned, it was
accepted and rounded down instead of rejected.

Also fix an assert if the range is an invalid size.

llvm-svn: 252009
2015-11-03 22:50:32 +00:00
Matt Arsenault
b15a099217 AMDGPU: Fix off by one error in register parsing
If trying to use one past the end, this would assert.

llvm-svn: 252008
2015-11-03 22:50:27 +00:00