mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-21 03:53:04 +02:00
Commit Graph

121504 Commits

Author SHA1 Message Date
Davide Italiano
d66d78419d [MC] Convert all the remaining tests from macho-dump to llvm-readobj.
This sort of deprecates macho-dump. It may still take a little while
to garbage collect it, but at least there's no real usage of it in
the tree anymore. New tests should always rely on llvm-readobj or
llvm-objdump.

llvm-svn: 247235
2015-09-10 01:50:00 +00:00
Ahmed Bougacha
ac10764756 [AArch64] Match base+offset in STNP addressing mode.
Followup to r247231.

llvm-svn: 247234
2015-09-10 01:48:29 +00:00
Mehdi Amini
0085639fea Make EmitRecord() accept ArrayRef and raw arrays (NFC)
After r247186, a vector is no longer needed, as the push_front of
the record code has been removed.
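
For illustration, a hedged sketch of the kind of call site this enables; the record code, the value names, and the exact EmitRecord signature are assumed here rather than taken from the patch:

  #include <cstdint>
  #include "llvm/ADT/SmallVector.h"
  #include "llvm/Bitcode/BitstreamWriter.h"  // header path of this era; newer trees use llvm/Bitstream/

  void emitExample(llvm::BitstreamWriter &Stream) {
    const unsigned MyCode = 1;                        // made-up record code
    uint64_t Raw[] = {42, 7};                         // plain C array
    llvm::SmallVector<uint64_t, 4> Vals = {1, 2, 3};
    Stream.EmitRecord(MyCode, Raw);                   // raw array, no temporary vector
    Stream.EmitRecord(MyCode, Vals);                  // SmallVector keeps working
  }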

From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 247232
2015-09-10 01:45:55 +00:00
Ahmed Bougacha
5f7023b12c [AArch64] Support selecting STNP.
We could go through the load/store optimizer and match STNP where
we would have matched a nontemporal-annotated STP, but as an
opportunistic optimization that's not reliable enough.
Instead, we can guarantee emitting STNP by matching them at ISel.
Since there are no single-input nontemporal stores, we have to
resort to some high-bits-extracting trickery to generate an STNP
from a plain store.

Also, we need to support another, LDP/STP-specific addressing mode,
base + signed scaled 7-bit immediate offset.
For now, only match the base. Let's make it smart separately.

Part of PR24086.
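
For context, a hedged example (not from the commit) of a source-level nontemporal store; it relies on Clang's __builtin_nontemporal_store, which lowers to a store tagged with !nontemporal metadata that this patch can now select as STNP:

  void write_stream(long long *dst, long long a, long long b) {
    __builtin_nontemporal_store(a, dst);      // store carrying !nontemporal metadata
    __builtin_nontemporal_store(b, dst + 1);  // adjacent store, a candidate to pair
  }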

llvm-svn: 247231
2015-09-10 01:42:28 +00:00
Matt Arsenault
e27d2bced7 AMDGPU/SI: Fix more cases of losing exec operands
llvm-svn: 247230
2015-09-10 01:23:28 +00:00
Matt Arsenault
238e81b7c6 AMDGPU/SI: Fix creating v_mov_b32s without exec uses
This will be caught by existing tests with a
verifier check to be added in a future commit.

llvm-svn: 247229
2015-09-10 01:06:06 +00:00
Hans Wennborg
ddb1cf7aeb Revert r247216: "Fix Clang-tidy misc-use-override warnings, other minor fixes"
This caused build breakages, e.g.
http://lab.llvm.org:8011/builders/clang-x86_64-ubuntu-gdb-75/builds/24926

llvm-svn: 247226
2015-09-10 00:57:26 +00:00
Ahmed Bougacha
d90c25bcb0 [CodeGen] Make x86 nontemporal store patfrags generic. NFC.
To be used by other targets.

llvm-svn: 247225
2015-09-10 00:53:15 +00:00
Philip Reames
a5cde832ad [RewriteStatepointsForGC] Minor refactor to use shared implementation [NFC]
llvm-svn: 247223
2015-09-10 00:44:10 +00:00
Philip Reames
0e2ea3ec9a [RewriteStatepointsForGC] Strengthen a confusingly weak assertion [NFC]
The assertion was weaker than it should be and gave the impression that we're growing the number of base defining values being considered during the fixed point iteration. That's not true. The tighter form of the assert is useful documentation.

llvm-svn: 247221
2015-09-10 00:32:56 +00:00
Philip Reames
5581a8a2b4 [RewriteStatepointsForGC] One last bit of naming [NFCI]
llvm-svn: 247220
2015-09-10 00:27:50 +00:00
Reid Kleckner
b5753fd45f [WinEH] Add codegen support for cleanuppad and cleanupret
All of the complexity is in cleanupret, and it mostly follows the same
codepaths as catchret, except it doesn't take a return value in RAX.

This small example now compiles and executes successfully on win32:
  extern "C" int printf(const char *, ...) noexcept;
  struct Dtor {
    ~Dtor() { printf("~Dtor\n"); }
  };
  void has_cleanup() {
    Dtor o;
    throw 42;
  }
  int main() {
    try {
      has_cleanup();
    } catch (int) {
      printf("caught it\n");
    }
  }

Don't try to put the cleanup in the same function as the catch, or Bad
Things will happen.

llvm-svn: 247219
2015-09-10 00:25:23 +00:00
Philip Reames
ad09731a88 [RewriteStatepointsForGC] Further style/naming fixup [NFCI]
llvm-svn: 247217
2015-09-10 00:22:49 +00:00
Hans Wennborg
b5db40bf43 Fix Clang-tidy misc-use-override warnings, other minor fixes
Patch by Eugene Zelenko!

Differential Revision: http://reviews.llvm.org/D12740

llvm-svn: 247216
2015-09-10 00:12:56 +00:00
Mehdi Amini
7ea419f1ad Bitcode Writer: EmitRecordWith* takes an ArrayRef instead of a SmallVector (NFC)
This reapplies commit r247178 after post-commit review from D. Blaikie,
in a way that makes it compatible with the existing API.

From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 247215
2015-09-10 00:05:09 +00:00
Mehdi Amini
f9dd260ee5 Add makeArrayRef() overload for ArrayRef input (no-op/identity) NFC
The purpose is to allow a templated wrapper to work with either an
ArrayRef or any container convertible to one:

template<typename Container>
void wrapper(const Container &Arr) {
  impl(makeArrayRef(Arr));
}

with Container being a std::vector, a SmallVector, or an ArrayRef.
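
A hedged usage sketch of the wrapper above; impl and the int element type are placeholders, not part of the patch:

  #include "llvm/ADT/ArrayRef.h"
  #include "llvm/ADT/SmallVector.h"
  #include <vector>

  static void impl(llvm::ArrayRef<int> Arr) { (void)Arr; }  // placeholder sink

  template <typename Container>
  static void wrapper(const Container &Arr) {
    impl(llvm::makeArrayRef(Arr));
  }

  void callers() {
    std::vector<int> V = {1, 2, 3};
    llvm::SmallVector<int, 4> SV = {4, 5};
    wrapper(V);                       // std::vector overload of makeArrayRef
    wrapper(SV);                      // SmallVector overload of makeArrayRef
    wrapper(llvm::ArrayRef<int>(V));  // identity overload added by this change
  }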

From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 247214
2015-09-10 00:05:04 +00:00
Philip Reames
153e484862 [RewriteStatepointsForGC] More naming cleanup [NFCI]
llvm-svn: 247213
2015-09-10 00:01:53 +00:00
Philip Reames
b1d69773b9 [RewriteStatepointsForGC] Code cleanup [NFC]
Factor out common code related to naming values, fix a small style issue.  More to follow in separate changes.

llvm-svn: 247211
2015-09-09 23:57:18 +00:00
Philip Reames
b2d2c7c606 [RewriteStatepointsForGC] Extend base pointer inference to handle insertelement
This change is simply enhancing the existing inference algorithm to handle insertelement instructions by conservatively inserting a new instruction to propagate the vector of associated base pointers. In the process, I'm ripping out the peephole optimizations which mostly helped cover the fact this hadn't been done.

Note that most of the newly inserted nodes will be nearly immediately removed by the post-insertion optimization pass introduced in r246718. Arguably, we should be trying harder to avoid the malloc traffic here, but I'd rather get the code correct first, then worry about compile time.

Unlike previous extensions of the algorithm to handle more cases, I discovered that the existing code was causing miscompiles in some cases. In particular, we had an implicit assumption that the peephole covered *all* insertelement instructions, so if we had a value directly based on an insertelement the peephole didn't cover, we proceeded as if it were a base anyway. Not good. I believe we had the same issue with shufflevector, which is why I adjusted the predicate for them as well.
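
A hedged sketch of the conservative handling described above; BaseOf is a hypothetical stand-in for the pass's base-defining-value lookup, not a function in the patch:

  #include "llvm/IR/IRBuilder.h"
  #include "llvm/IR/Instructions.h"

  llvm::Value *BaseOf(llvm::Value *V);  // hypothetical base-defining-value lookup

  // For a derived  %v = insertelement %vec, %p, %idx  build the parallel
  // insertelement over the associated base pointers.
  llvm::Value *baseForInsertElement(llvm::InsertElementInst *IE,
                                    llvm::IRBuilder<> &B) {
    llvm::Value *BaseVec = BaseOf(IE->getOperand(0));  // base of the vector operand
    llvm::Value *BaseElt = BaseOf(IE->getOperand(1));  // base of the inserted pointer
    return B.CreateInsertElement(BaseVec, BaseElt, IE->getOperand(2),
                                 IE->getName() + ".base");
  }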

Differential Revision: http://reviews.llvm.org/D12583

llvm-svn: 247210
2015-09-09 23:40:12 +00:00
Philip Reames
b4dfda2138 [RewriteStatepointsForGC] Make base pointer inference deterministic
Previously, the base pointer algorithm wasn't deterministic. The core fixed point was deterministic (of course), but we were inserting new nodes and optimizing them in an order which was unspecified and variable. We'd somewhat hacked around this for testing by sorting by value name, but that doesn't solve the general determinism problem.

Instead, we can use the order of traversal over the def/use graph to give us a single consistent ordering. Today, this is a DFS order, but the exact order doesn't matter provided it's deterministic for a given input.

(Q: It is safe to rely on a deterministic order of operands, right?)

Note that this only fixes the determinism within a single inference step. The inference step is currently invoked many times in a non-deterministic order. That's a future change in the sequence. :)

Differential Revision: http://reviews.llvm.org/D12640

llvm-svn: 247208
2015-09-09 23:26:08 +00:00
Peter Collingbourne
04c6c402ba LowerBitSets: Fix non-determinism bug.
Visit disjoint sets in a deterministic order based on the maximum BitSetNM
index; otherwise the order in which we visit them will depend on pointer
comparisons. This was being exposed by MSan.

llvm-svn: 247201
2015-09-09 22:30:32 +00:00
Reid Kleckner
40ce82e375 [SEH] Emit 32-bit SEH tables for the new EH IR
The 32-bit tables don't actually contain PC range data, so emitting them
is incredibly simple.

The 64-bit tables, on the other hand, use the same table for state
numbering as well as label ranges. This makes things more difficult, so
it will be implemented later.

llvm-svn: 247192
2015-09-09 21:10:03 +00:00
Dan Gohman
c4e3274379 [WebAssembly] Update target datalayout strings.
llvm-svn: 247187
2015-09-09 20:54:31 +00:00
Teresa Johnson
8df32eb93e Change EmitRecordWithAbbrevImpl to take Optional record code. NFC.
This change enables EmitRecord to pass the supplied record Code to
EmitRecordWithAbbrevImpl, rather than inserting it into the Vals array.
It is an enabler for changing EmitRecord to take an ArrayRef<uintty> instead
of a SmallVectorImpl<uintty>&.

Patch suggested by Duncan P. N. Exon Smith, modified by myself a bit to get
correct assertion checking.

llvm-svn: 247186
2015-09-09 20:53:31 +00:00
Piotr Padlewski
56f48d9943 Fix a ScalarEvolution hang involving assume
http://reviews.llvm.org/D12719

llvm-svn: 247184
2015-09-09 20:47:30 +00:00
Mehdi Amini
2f7683bcd2 Revert "Bitcode Writer: EmitRecordWith* takes an ArrayRef instead of a SmallVector (NFC)"
This reverts commit r247178.

From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 247182
2015-09-09 20:35:15 +00:00
David Majnemer
3926c488cf Revert trunc(lshr (sext A), Cst) to ashr A, Cst
This reverts commit r246997, it introduced a regression (PR24763).

llvm-svn: 247180
2015-09-09 20:20:08 +00:00
Mehdi Amini
acd97a51fe Bitcode Writer: EmitRecordWith* takes an ArrayRef instead of a SmallVector (NFC)
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 247178
2015-09-09 20:08:39 +00:00
Renato Golin
32a92f6d16 Revert "AVX512: Implemented encoding and intrinsics for vextracti64x4 ,vextracti64x2, vextracti32x8, vextracti32x4, vextractf64x4, vextractf64x2, vextractf32x8, vextractf32x4 Added tests for intrinsics and encoding."
This reverts commit r247149, as it was breaking numerous buildbots of varied architectures.

llvm-svn: 247177
2015-09-09 19:44:40 +00:00
Sanjay Patel
0d6bc4f108 allow unpredictable metadata on switch statements
llvm-svn: 247174
2015-09-09 18:38:30 +00:00
Matthias Braun
a4356ce0e6 Save LaneMask with livein registers
With subregister liveness enabled we can detect the case where only
parts of a register are live in; this is expressed as a 32-bit lanemask.
The current code only keeps registers in the live-in list and therefore
enumerates all subregisters affected by the lanemask. This turned out to
be too conservative, as the subregister may also cover additional parts
of the lanemask which are not live. Expressing a given lanemask by
enumerating a minimum set of subregisters is computationally expensive,
so the best solution is to simply change the live-in list to store the
lanemasks as well. This will reduce memory usage for targets using
subregister liveness and slightly increase it for other targets.
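
A hedged sketch of the shape this gives a live-in entry; the names are illustrative, not the exact ones in the patch:

  // Each live-in now carries a lanemask instead of the block keeping a
  // plain list of registers.
  struct LiveInEntry {
    unsigned PhysReg;   // physical register live into the block
    unsigned LaneMask;  // 32-bit lanemask: which subregister lanes are live
  };
  // A mask of ~0u preserves the old "whole register is live-in" meaning.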

Differential Revision: http://reviews.llvm.org/D12442

llvm-svn: 247171
2015-09-09 18:08:03 +00:00
Matthias Braun
d5a7fc40ed VirtRegMap: Improve addMBBLiveIns() using SlotIndex::MBBIndexIterator; NFC
Now that we have an explicit iterator over the idx2MBBMap in SlotIndexes,
we can use the fact that the segments and the idx2MBBMap are both sorted
by SlotIndex position, so we can advance both simultaneously instead of
starting from the beginning for each segment.

This complicates the code for the subregister case somewhat but should
be more efficient and has the advantage that we get the final lanemask
for each block immediately, which will be important for a subsequent
change.

Removes the now unused SlotIndexes::findMBBLiveIns function.
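
A hedged, generic illustration of the simultaneous advance described above; the types are placeholders, not the actual SlotIndexes/LiveIntervals API:

  #include <utility>
  #include <vector>

  struct Segment { int Start, End; };      // live-range piece, sorted by Start
  using BlockIndex = std::pair<int, int>;  // (start position, block id), sorted

  void visitLiveInBlocks(const std::vector<Segment> &Segments,
                         const std::vector<BlockIndex> &Blocks) {
    auto BI = Blocks.begin();
    for (const Segment &S : Segments) {
      // Advance the block cursor instead of rescanning from Blocks.begin()
      // for every segment; both sequences are sorted by position.
      while (BI != Blocks.end() && BI->first < S.Start)
        ++BI;
      for (auto I = BI; I != Blocks.end() && I->first < S.End; ++I) {
        // block I->second starts inside S, so the value is live into it
      }
    }
  }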

Differential Revision: http://reviews.llvm.org/D12443

llvm-svn: 247170
2015-09-09 18:07:54 +00:00
Chandler Carruth
d7003090ac [PM/AA] Rebuild LLVM's alias analysis infrastructure in a way compatible
with the new pass manager, and no longer relying on analysis groups.

This builds essentially a ground-up new AA infrastructure stack for
LLVM. The core ideas are the same that are used throughout the new pass
manager: type erased polymorphism and direct composition. The design is
as follows:

- FunctionAAResults is a type-erasing alias analysis results aggregation
  interface to walk a single query across a range of results from
  different alias analyses. Currently this is function-specific as we
  always assume that aliasing queries are *within* a function.

- AAResultBase is a CRTP utility providing stub implementations of
  various parts of the alias analysis result concept, notably in several
  cases in terms of other more general parts of the interface. This can
  be used to implement only a narrow part of the interface rather than
  the entire interface. This isn't really ideal; this logic should be
  hoisted into FunctionAAResults, as currently it will cause
  a significant amount of redundant work, but it faithfully models the
  behavior of the prior infrastructure.

- All the alias analysis passes are ported to be wrapper passes for the
  legacy PM and new-style analysis passes for the new PM with a shared
  result object. In some cases (most notably CFL), this is an extremely
  naive approach that we should revisit when we can specialize for the
  new pass manager.

- BasicAA has been restructured to reflect that it is much more
  fundamentally a function analysis because it uses dominator trees and
  loop info that need to be constructed for each function.

All of the references to getting alias analysis results have been
updated to use the new aggregation interface. All the preservation and
other pass management code has been updated accordingly.

The way the FunctionAAResultsWrapperPass works is to detect the
available alias analyses when run, and add them to the results object.
This means that we should be able to continue to respect when various
passes are added to the pipeline, for example adding CFL or adding TBAA
passes should just cause their results to be available and to get folded
into this. The exception to this rule is BasicAA which really needs to
be a function pass due to using dominator trees and loop info. As
a consequence, the FunctionAAResultsWrapperPass directly depends on
BasicAA and always includes it in the aggregation.

This has significant implications for preserving analyses. Generally,
most passes shouldn't bother preserving FunctionAAResultsWrapperPass
because rebuilding the results just updates the set of known AA passes.
The exceptions to this rule are LoopPass instances, which need to preserve
all the function analyses that the loop pass manager will end up
needing. This means preserving both BasicAAWrapperPass and the
aggregating FunctionAAResultsWrapperPass.

Now, when preserving an alias analysis, you do so by directly preserving
that analysis. This is only necessary for non-immutable-pass-provided
alias analyses though, and there are only three of interest: BasicAA,
GlobalsAA (formerly GlobalsModRef), and SCEVAA. Usually BasicAA is
preserved when needed because it (like DominatorTree and LoopInfo) is
marked as a CFG-only pass. I've expanded GlobalsAA into the preserved
set everywhere we previously were preserving all of AliasAnalysis, and
I've added SCEVAA in the intersection of that with where we preserve
SCEV itself.

One significant challenge to all of this is that the CGSCC passes were
actually using the alias analysis implementations by taking advantage of
a pretty amazing set of loopholes in the old pass manager's analysis
management code which allowed analysis groups to slide through in many
cases. Moving away from analysis groups makes this problem much more
obvious. To fix it, I've leveraged the flexibility the design of the new
PM components provides to just directly construct the relevant alias
analyses for the relevant functions in the IPO passes that need them.
This is a bit hacky, but should go away with the new pass manager, and
is already in many ways cleaner than the prior state.

Another significant challenge is that various facilities of the old
alias analysis infrastructure just don't fit any more. The most
significant of these is the alias analysis 'counter' pass. That pass
relied on the ability to snoop on AA queries at different points in the
analysis group chain. Instead, I'm planning to build printing
functionality directly into the aggregation layer. I've not included
that in this patch merely to keep it smaller.

Note that all of this needs a nearly complete rewrite of the AA
documentation. I'm planning to do that, but I'd like to make sure the
new design settles, and to flesh out a bit more of what it looks like in
the new pass manager first.
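
A hedged sketch of the preservation pattern described above, using a hypothetical legacy-PM pass; the wrapper-pass class names are the ones this change introduces:

  #include "llvm/Analysis/BasicAliasAnalysis.h"
  #include "llvm/Analysis/GlobalsModRef.h"
  #include "llvm/Pass.h"

  namespace {
  // Hypothetical CFG-preserving function pass: preserve the individual alias
  // analyses directly rather than a monolithic AliasAnalysis group.
  struct MyCFGOnlyPass : public llvm::FunctionPass {
    static char ID;
    MyCFGOnlyPass() : llvm::FunctionPass(ID) {}
    bool runOnFunction(llvm::Function &) override { return false; }
    void getAnalysisUsage(llvm::AnalysisUsage &AU) const override {
      AU.setPreservesCFG();                           // BasicAA is CFG-only
      AU.addPreserved<llvm::BasicAAWrapperPass>();
      AU.addPreserved<llvm::GlobalsAAWrapperPass>();  // formerly GlobalsModRef
    }
  };
  char MyCFGOnlyPass::ID = 0;
  } // namespace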

Differential Revision: http://reviews.llvm.org/D12080

llvm-svn: 247167
2015-09-09 17:55:00 +00:00
Matthias Braun
364e6db7a4 MachineVerifier: Check that SlotIndex MBBIndexList is sorted.
This introduces a check that the MBBIndexList is sorted as proposed in
http://reviews.llvm.org/D12443 but split up into a separate commit.

llvm-svn: 247166
2015-09-09 17:49:46 +00:00
Matt Arsenault
bcf29c51fa AMDGPU: Extract full 64-bit subregister and use subregs
Instead of extracting both 32-bit components from the 128-bit
register. This produces fewer copies and is easier for
the copy peephole optimizer to understand and see the actual uses
as extracts from a reg_sequence.

This avoids needing to handle subregister composing in the
PeepholeOptimizer's ValueTracker for this case.

llvm-svn: 247162
2015-09-09 17:03:29 +00:00
Matt Arsenault
813d7779aa AMDGPU: Remove unused multiclass argument
llvm-svn: 247161
2015-09-09 17:03:18 +00:00
Tom Stellard
1736b04eed llvm-config: Add --build-system option
Summary:
This can be used for distinguishing between cmake and autoconf builds.
Users may need this in order to handle inconsistencies between the
outputs of the two build systems.

Reviewers: echristo, chandlerc, beanz

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D11838

llvm-svn: 247159
2015-09-09 16:39:30 +00:00
Dan Gohman
9726a6ad06 [WebAssembly] Implement calls with void return types.
llvm-svn: 247158
2015-09-09 16:13:47 +00:00
Tom Stellard
687b1fc846 AMDGPU/SI: Fold operands through REG_SEQUENCE instructions
Summary:
This helps mostly when we use add instructions for address calculations
that contain immediates.

Reviewers: arsenm

Subscribers: arsenm, llvm-commits

Differential Revision: http://reviews.llvm.org/D12256

llvm-svn: 247157
2015-09-09 15:43:26 +00:00
Silviu Baranga
f7131068ce [CostModel][AArch64] Remove amortization factor for some of the vector select instructions
Summary:
We are not scalarizing the wide selects in codegen for i16 and i32 and
therefore we can remove the amortization factor. We still have issues
with i64 vectors in codegen though.

Reviewers: mcrosier

Subscribers: mcrosier, aemerson, llvm-commits, rengolin

Differential Revision: http://reviews.llvm.org/D12724

llvm-svn: 247156
2015-09-09 15:35:02 +00:00
Sanjay Patel
b8699722a6 don't repeat function names in comments; NFC
llvm-svn: 247154
2015-09-09 15:24:36 +00:00
Dan Gohman
d982a06194 [WebAssembly] Tidy up some unneeded newline characters.
llvm-svn: 247152
2015-09-09 15:13:36 +00:00
Joseph Tremoulet
69441e6c62 [CMake] Flag recursive cmake invocations for cross-compile
Summary:
Cross-compilation uses recursive cmake invocations to build native host
tools.  These recursive invocations only forward a fixed set of
variables/options, since the native environment is generally the default.
This change adds -DLLVM_TARGET_IS_CROSSCOMPILE_HOST=TRUE to the recursive
cmake invocations, so that cmake files can distinguish these recursive
invocations from top-level ones, which can explain why expected options
are unset.

LLILC will use this to avoid trying to generate its build rules in the
crosscompile native host target (where it is not needed), which would fail
if attempted because LLILC requires a cmake variable passed on the command
line, which is not forwarded in the recursive invocation.

Reviewers: rnk, beanz

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D12679

llvm-svn: 247151
2015-09-09 14:57:06 +00:00
Sanjay Patel
e55928d8ce function names start with a lower case letter; NFC
llvm-svn: 247150
2015-09-09 14:54:29 +00:00
Igor Breger
1a3ef530c1 AVX512: Implemented encoding and intrinsics for
vextracti64x4 ,vextracti64x2, vextracti32x8, vextracti32x4, vextractf64x4, vextractf64x2, vextractf32x8, vextractf32x4
Added tests for intrinsics and encoding.

Differential Revision: http://reviews.llvm.org/D11802

llvm-svn: 247149
2015-09-09 14:35:09 +00:00
Sanjay Patel
36b333fb18 don't repeat function names in comments; NFC
llvm-svn: 247148
2015-09-09 14:34:26 +00:00
Zoran Jovanovic
cb8b3d36cb [mips][microMIPS] Implement ADDU16, AND16, ANDI16, NOT16, OR16, SLL16 and SRL16 instructions
Differential Revision: http://reviews.llvm.org/D11178

llvm-svn: 247146
2015-09-09 13:55:45 +00:00
Alex Lorenz
683352c838 Fix PR 24633 - Handle undef values when parsing standalone constants.
llvm-svn: 247145
2015-09-09 13:44:33 +00:00
James Molloy
793641a3d1 Rename ExitCount to BackedgeTakenCount, because that's what it is.
We called a variable ExitCount, stored the backedge count in it, then redefined it to be the exit count again.

llvm-svn: 247140
2015-09-09 12:51:10 +00:00
James Molloy
a75bf2d4b8 Delay predication of stores until near the end of vector code generation
Predicating stores requires creating extra blocks. It's much cleaner if we do this in one pass instead of mutating the CFG while writing vector instructions.

Besides which, we can make use of helper functions to update the domtree for us, reducing the work we need to do.
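
For reference, a hedged example (not from the patch) of the kind of conditional store that forces the vectorizer to predicate and hence create the extra blocks mentioned above:

  // Only some iterations store, so a vectorized body must guard (predicate)
  // the store or branch around it per vector lane.
  void scale_positive(float *a, int n) {
    for (int i = 0; i < n; ++i)
      if (a[i] > 0.0f)
        a[i] *= 2.0f;
  }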

llvm-svn: 247139
2015-09-09 12:51:06 +00:00