mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-11-23 19:23:23 +01:00
Commit Graph

107482 Commits

Author SHA1 Message Date
Daniel Sanders
c085a99400 [mips] Move MipsTargetLowering::MipsCC::regSize() to MipsSubtarget::getGPRSizeInBytes()
Summary:
The GPR size is more a property of the subtarget than of the ABI, so move
this information to the MipsSubtarget.

No functional change.

Reviewers: vmedic

Reviewed By: vmedic

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D5009

llvm-svn: 217436
2014-09-09 12:11:16 +00:00
Pavel Chupin
45cea6e50f [x32] Emit callq for CALLpcrel32
Summary:
In AT&T syntax, calls should be printed as callq in assembly for both
x86_64 and x32. It's only a matter of using the correct mnemonic; the
object output is already correct.

Test Plan: trivial test added

Reviewers: nadav, dschuff, craig.topper

Subscribers: llvm-commits, zinovy.nis

Differential Revision: http://reviews.llvm.org/D5213

llvm-svn: 217435
2014-09-09 11:54:12 +00:00
Daniel Sanders
318ff0d57f [mips] Don't cache IsO32 and IsFP64 in MipsTargetLowering::MipsCC
Summary:
Use a MipsSubtarget reference instead.

No functional change.

Reviewers: vmedic

Reviewed By: vmedic

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D5008

llvm-svn: 217434
2014-09-09 10:46:48 +00:00
Tim Northover
17de4f5ae4 llvm-objdump: don't crash when __compact_unwind has no relocs.
llvm-svn: 217433
2014-09-09 10:45:06 +00:00
Toma Tabacu
7b4aafeb51 [mips] Add assembler support for .set push/pop directive.
Summary:
These directives are used to save the current assembler options (in the case of ".set push") and restore the previously saved options (in the case of ".set pop").

Contains work done by Matheus Almeida.

Reviewers: dsanders

Reviewed By: dsanders

Differential Revision: http://reviews.llvm.org/D4821

llvm-svn: 217432
2014-09-09 10:15:38 +00:00
Renato Golin
024d56e8f8 ARM: Negative offset support problem
This patch permits the use of a negative offset for a non-frame access.

Patch by Igor Oblakov.

llvm-svn: 217431
2014-09-09 09:57:59 +00:00
Justin Bogner
04bf7c7e7c llvm-cov: Use ArrayRef::slice (NFC)
llvm-svn: 217430
2014-09-09 09:15:52 +00:00
Patrik Hagglund
7b94ce2090 [MachineSinking] Conservatively clear kill flags after coalescing.
This solves the problem of having a kill flag inside a loop
with a definition of the register prior to the loop:

%vreg368<def> ...

Inside loop:

        %vreg520<def> = COPY %vreg368
        %vreg568<def,tied1> = add %vreg341<tied0>, %vreg520<kill>

=> was coalesced into =>

        %vreg568<def,tied1> = add %vreg341<tied0>, %vreg368<kill>

MachineVerifier then complained:
*** Bad machine code: Virtual register killed in block, but needed live out. ***

The kill flag for %vreg368 is incorrect, and is cleared by this patch.

This is similar to the clearing done at the end of
MachineSinking::SinkInstruction().

Patch provided by Jonas Paulsson.

Reviewed by Quentin Colombet and Juergen Ributzka.

llvm-svn: 217427
2014-09-09 07:47:00 +00:00
Justin Bogner
951fd60695 llvm-cov: Combine two types that were nearly identical (NFC)
llvm-cov had a SourceRange type that was nearly identical to a
CountedRegion except that it shaved off a couple of fields. There
aren't likely to be enough of these for the minor memory savings to be
worth the extra complexity here.

llvm-svn: 217417
2014-09-09 05:32:18 +00:00
Justin Bogner
8f071790c7 llvm-cov: Rename MappingRegion to coverage::CountedRegion (NFC)
This name was too similar to CoverageMappingRegion, and the type
really belongs in the coverage library anyway.

llvm-svn: 217416
2014-09-09 05:32:14 +00:00
Bob Wilson
cc87ed123c Set trunc store action to Expand for all X86 targets.
When compiling without SSE2, isTruncStoreLegal(F64, F32) would return Legal, whereas with SSE2 it would return Expand. Since the target doesn't seem to actually handle a truncstore for double -> float, it would just emit a store of a full double into the space for a float, overwriting other bits on the stack.
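
As a hedged illustration (not taken from the patch), C++ of this shape reaches the f64 -> f32 truncstore path:

// Illustrative only: storing a double into a float slot produces an
// f64 -> f32 truncating store during codegen; without SSE2 this must be
// expanded rather than stored as a full 8-byte double over a 4-byte slot.
void storeTruncated(float *Dst, double D) {
  *Dst = static_cast<float>(D);
}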

Patch by Luqman Aden!

llvm-svn: 217410
2014-09-09 01:13:36 +00:00
Justin Bogner
51a0be5757 llvm-cov: Try to appease MSVC after r217404
llvm-svn: 217406
2014-09-08 21:31:43 +00:00
Dan Liew
90cbaf828c Fix type error in insertvalue example in LangRef. %agg1 is of type {i32,
float} and thus cannot be used where a value of type {i32, {float}} is expected.

llvm-svn: 217405
2014-09-08 21:19:46 +00:00
Justin Bogner
b977873733 llvm-cov: Use ErrorOr rather than an error_code* (NFC)
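
For context, a minimal sketch of the idiom with a hypothetical parse function (not llvm-cov's actual code):

#include "llvm/Support/ErrorOr.h"
#include <cstdlib>
#include <system_error>

// The return value carries either the result or the error, replacing an
// std::error_code* out-parameter.
llvm::ErrorOr<int> parseCount(const char *S) {
  if (!S || !*S)
    return std::make_error_code(std::errc::invalid_argument);
  return std::atoi(S);
}
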
llvm-svn: 217404
2014-09-08 21:04:00 +00:00
Hans Wennborg
6ebf2a8b60 Fast-ISel: Remove dead code after falling back from selecting call instructions (PR20863)
Previously, fast-isel would not clean up after failing to select a call
instruction, because it would have called flushLocalValueMap() which moves
the insertion point, making SavedInsertPt in selectInstruction() invalid.

Fix this by making SavedInsertPt a member variable and having
flushLocalValueMap() update it.

This removes some redundant code at -O0, and more importantly fixes PR20863.

Differential Revision: http://reviews.llvm.org/D5249

llvm-svn: 217401
2014-09-08 20:24:10 +00:00
Sanjay Patel
b6481eb090 Group unsafe fmul math folds together for easier reading. No functional change.
llvm-svn: 217399
2014-09-08 20:16:42 +00:00
Justin Bogner
bb93be54d8 llvm-cov: Remove dead code
FunctionCoverageMapping::PrettyName was from a version of the tool
during review, and isn't actually used currently.

llvm-svn: 217398
2014-09-08 19:51:21 +00:00
Hal Finkel
082ca75fd6 Don't static_cast invalid pointers
UBSan complained about using static_cast on the invalid (tombstone, etc.)
pointers used by DenseMap. Use a reinterpret_cast instead.
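
A minimal sketch of the issue, using hypothetical types rather than the code the commit touches:

#include <cstdint>

struct Base { virtual ~Base() = default; };
struct Derived : Base {};

// A DenseMap-style sentinel key: a reserved bit pattern, not a real object.
Base *tombstone() { return reinterpret_cast<Base *>(uintptr_t(-8)); }

// UBSan objects to a static_cast on a pointer that does not refer to a valid
// object; reinterpret_cast merely reinterprets the bits, which is the right
// tool for sentinel values like the one above.
Derived *asDerived(Base *B) { return reinterpret_cast<Derived *>(B); }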

llvm-svn: 217397
2014-09-08 19:31:25 +00:00
Alexey Samsonov
f4c0500cb9 Be more careful when parsing a Module::ModFlagBehavior value
to make sure we don't do an invalid load of an enum. Share the
conversion code between the llvm::Module implementation and the
verifier.
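
A hedged sketch of the pattern, with simplified enumerators rather than the real list:

#include <cstdint>

// Simplified enumerators for illustration only.
enum ModFlagBehavior : uint64_t { Error = 1, Warning = 2, LastBehavior = Warning };

// Validate the raw value first and only then convert it to the enum, so an
// out-of-range integer is never loaded as an enum value.
bool toModFlagBehavior(uint64_t V, ModFlagBehavior &MFB) {
  if (V < Error || V > LastBehavior)
    return false;
  MFB = static_cast<ModFlagBehavior>(V);
  return true;
}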

This bug was reported by UBSan.

llvm-svn: 217395
2014-09-08 19:16:28 +00:00
Sanjay Patel
3803a37086 Fix the FIXME that was just added in r217390 - remove a bunch of redundant fold permutations.
The testcases for these folds already exist in test/CodeGen/X86/fp-fast.ll.

llvm-svn: 217393
2014-09-08 18:22:51 +00:00
Sanjay Patel
510f5b3b43 group unsafe math folds together for easier reading
Also added a FIXME regarding redundant folds for non-canonicalized constants.

llvm-svn: 217390
2014-09-08 17:32:19 +00:00
Chad Rosier
6796f48338 [AArch64] Enabled AA support for Cortex-A57.
llvm-svn: 217381
2014-09-08 15:34:16 +00:00
Matt Arsenault
993c5caa5b R600/SI: Fix assertion from copying a TargetGlobalAddress
This fixes an assert in the scheduler caused by an inserted
copy_to_regclass from a constant.

This only seems to break sometimes when a constant initializer
address is forced into VGPRs in a non-entry block. No test
since the only case I've managed to hit only happens with a future
patch, and that case will also not be a problem once scalar instructions
are used in non-entry blocks.

llvm-svn: 217380
2014-09-08 15:07:33 +00:00
Matt Arsenault
3745c3a669 R600/SI: Replace LDS atomics with no return versions
llvm-svn: 217379
2014-09-08 15:07:31 +00:00
Matt Arsenault
6e993e725e R600/SI: Add InstrMapping for noret atomics.
Only handles LDS atomics for now, and will be used to replace atomics
that have no uses with the no-return versions.

llvm-svn: 217378
2014-09-08 15:07:27 +00:00
Chad Rosier
7f50169e13 [AArch64] Improve AA to remove unneeded edges in the AA MI scheduling graph.
Patch by Sanjin Sijaric <ssijaric@codeaurora.org>!
Phabricator Review: http://reviews.llvm.org/D5103

llvm-svn: 217371
2014-09-08 14:43:48 +00:00
Chad Rosier
1d31d7e6bb [AArch64] Enabled AA support for Cortex-A53.
Patch by Sanjin Sijaric <ssijaric@codeaurora.org>!
Phabricator Review: http://reviews.llvm.org/D5103

llvm-svn: 217370
2014-09-08 14:31:49 +00:00
Alexander Kornienko
2528e8b93a Add .clang-tidy configuration file to provide LLVM-optimized defaults for
clang-tidy.

Reviewers: chandlerc, djasper

Reviewed By: djasper

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D5236

llvm-svn: 217365
2014-09-08 13:30:00 +00:00
Sid Manning
da49f703af Spelling correction
Another trivial spelling change.

llvm-svn: 217364
2014-09-08 13:05:23 +00:00
Andrew Trick
af37eb881d Add a comment to getNewAlignmentDiff.
llvm-svn: 217350
2014-09-07 23:16:24 +00:00
Hal Finkel
56525dc578 Make use of @llvm.assume for loop guards in ScalarEvolution
This adds a basic (but important) use of @llvm.assume calls in ScalarEvolution.
When SE is attempting to validate a condition guarding a loop (such as whether
or not the loop count can be zero), this check should also include dominating
assumptions.
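
For example (an illustration using clang's __builtin_assume, which lowers to @llvm.assume; not from the commit's tests):

// The dominating assumption lets ScalarEvolution treat the loop guard
// N > 0 as known when reasoning about the trip count.
void zeroFill(int *A, int N) {
  __builtin_assume(N > 0);
  for (int i = 0; i < N; ++i)
    A[i] = 0;
}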

llvm-svn: 217348
2014-09-07 21:37:59 +00:00
Hal Finkel
5d195fd587 Check for all known bits on ret in InstCombine
From a combination of @llvm.assume calls (and perhaps through other means, such
as range metadata), it is possible that all bits of a return value might be
known. Previously, InstCombine did not check for this (which is understandable
given assumptions of constant propagation), but this means that we'd miss simple
cases where assumptions are involved.
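
An assumed illustration (not from the commit's tests):

// The assumption pins down every bit of the return value, so InstCombine
// can now fold the return down to the constant 42.
int knownReturn(int A) {
  __builtin_assume(A == 42);
  return A;
}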

llvm-svn: 217346
2014-09-07 21:28:34 +00:00
Hal Finkel
9ede5a39dd Make use of @llvm.assume from LazyValueInfo
This change teaches LazyValueInfo to use the @llvm.assume intrinsic. Like with
the known-bits change (r217342), this requires feeding a "context" instruction
pointer through many functions. Aside from a little refactoring to reuse the
logic that turns predicates into constant ranges in LVI, the only new code is
that which can 'merge' the range from an assumption into that otherwise
computed. There is also a small addition to JumpThreading so that it can have
LVI use assumptions in the same block as the comparison feeding a conditional
branch.

With this patch, we can now simplify this as expected:
int foo(int a) {
  __builtin_assume(a > 5);
  if (a > 3) {
    bar();
    return 1;
  }
  return 0;
}

llvm-svn: 217345
2014-09-07 20:29:59 +00:00
Hal Finkel
21d1f99033 Add an AlignmentFromAssumptions Pass
This adds a ScalarEvolution-powered transformation that updates load, store and
memory intrinsic pointer alignments based on invariant((a+q) & b == 0)
expressions. Many of the simple cases we can get with ValueTracking, but we
still need something like this for the more complicated cases (such as those
with an offset) that require some algebra. Note that the optional third
argument of gcc's __builtin_assume_aligned provides exactly this kind of
'misalignment' offset, for which this kind of logic is necessary.
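
For example (a hypothetical use of the third argument, not from the patch):

#include <cstddef>

// The third argument says (P - 8) is 32-byte aligned, i.e. P itself is
// misaligned by 8 bytes; deriving an alignment for the loads below takes
// exactly the kind of algebra this pass performs.
double sumMisaligned(double *P, size_t N) {
  double *Q = static_cast<double *>(__builtin_assume_aligned(P, 32, 8));
  double S = 0.0;
  for (size_t I = 0; I != N; ++I)
    S += Q[I];
  return S;
}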

The primary motivation is to fixup alignments for vector loads/stores after
vectorization (and unrolling). This pass is added to the optimization pipeline
just after the SLP vectorizer runs (which, admittedly, does not preserve SE,
although I imagine it could).  Regardless, I actually don't think that the
preservation matters too much in this case: SE computes lazily, and this pass
won't issue any SE queries unless there are any assume intrinsics, so there
should be no real additional cost in the common case (SLP does preserve DT and
LoopInfo).

llvm-svn: 217344
2014-09-07 20:05:11 +00:00
Hal Finkel
be0364002a Add additional patterns for @llvm.assume in ValueTracking
This builds on r217342, which added the infrastructure to compute known bits
using assumptions (@llvm.assume calls). That original commit added only a few
patterns (to catch common cases related to determining pointer alignment); this
change adds several other patterns for simple cases.

r217342 established that, for assume(v & b = a), for those bits in the mask b
that are known to be one, we can propagate known bits from a to v (an
illustrative example of this base pattern follows the list below). It also
had a known-bits transfer for assume(a = b). This patch adds:

assume(~(v & b) = a) : For those bits in the mask that are known to be one, we
                       can propagate inverted known bits from a to v.

assume(v | b = a) :    For those bits in b that are known to be zero, we can
                       propagate known bits from a to v.

assume(~(v | b) = a):  For those bits in b that are known to be zero, we can
                       propagate inverted known bits from a to v.

assume(v ^ b = a) :    For those bits in b that are known to be zero, we can
                       propagate known bits from a to v. For those bits in
                       b that are known to be one, we can propagate inverted
                       known bits from a to v.

assume(~(v ^ b) = a) : For those bits in b that are known to be zero, we can
                       propagate inverted known bits from a to v. For those
                       bits in b that are known to be one, we can propagate
                       known bits from a to v.

assume(v << c = a) :   For those bits in a that are known, we can propagate them
                       to known bits in v shifted to the right by c.

assume(~(v << c) = a) : For those bits in a that are known, we can propagate
                        them inverted to known bits in v shifted to the right by c.

assume(v >> c = a) :   For those bits in a that are known, we can propagate them
                       to known bits in v shifted to the right by c.

assume(~(v >> c) = a) : For those bits in a that are known, we can propagate
                        them inverted to known bits in v shifted to the right by c.

assume(v >=_s c) where c is non-negative: The sign bit of v is zero

assume(v >_s c) where c is at least -1: The sign bit of v is zero

assume(v <=_s c) where c is negative: The sign bit of v is one

assume(v <_s c) where c is non-positive: The sign bit of v is one

assume(v <=_u c): Transfer the known high zero bits

assume(v <_u c): Transfer the known high zero bits (if c is known to be a power
                 of 2, transfer one more)
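
As promised above, an illustrative example of the base assume(v & b = a) pattern, written with clang's __builtin_assume (which lowers to @llvm.assume):

// assume(v & b = a): the known-one bits of the mask let the known bits of
// the constant flow into V.
unsigned lowByte(unsigned V) {
  __builtin_assume((V & 0xFFu) == 0x12u); // the low 8 bits of V become known
  return V & 0xFFu;                       // can now fold to 0x12
}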

A small addition to InstCombine was necessary for some of the test cases. The
problem is that when InstCombine was simplifying and, or, etc. it would fail to
check the 'do I know all of the bits' condition before checking less specific
conditions and would not fully constant-fold the result. I'm not sure how to
trigger this aside from using assumptions, so I've just included the change
here.

llvm-svn: 217343
2014-09-07 19:21:07 +00:00
Hal Finkel
f8bb9b78cf Make use of @llvm.assume in ValueTracking (computeKnownBits, etc.)
This change, which allows @llvm.assume to be used from within computeKnownBits
(and other associated functions in ValueTracking), adds some (optional)
parameters to computeKnownBits and friends. These functions now (optionally)
take a "context" instruction pointer, an AssumptionTracker pointer, and also a
DomTree pointer, and most of the changes are just to pass this new information
when it is easily available from InstSimplify, InstCombine, etc.

As explained below, the significant conceptual change is that known properties
of a value might depend on the control-flow location of the use (we care that
the @llvm.assume dominates the use because assumptions have control-flow
dependencies). This means that, when we ask if bits are known in a
value, we might get different answers for different uses.

The significant changes are all in ValueTracking. Two main changes: First, as
with the rest of the code, new parameters need to be passed around. To make
this easier, I grouped them into a structure, and I made internal static
versions of the relevant functions that take this structure as a parameter. The
new code does as you might expect: it looks for @llvm.assume calls that make
use of the value we're trying to learn something about (often indirectly),
attempts to pattern match that expression, and uses the result if successful.
By making use of the AssumptionTracker, the process of finding @llvm.assume
calls is not expensive.
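
A hypothetical sketch of that structure (names and fields assumed, not copied from the commit):

// Hypothetical sketch only; the real structure in r217342 may differ.
class AssumptionTracker;
class Instruction;
class DominatorTree;

struct KnownBitsQuery {
  AssumptionTracker *AT = nullptr;   // cache of @llvm.assume calls
  const Instruction *CxtI = nullptr; // context: where the value is used
  const DominatorTree *DT = nullptr; // checks that an assume dominates CxtI
  // ...plus a set of already-considered assumes, to stop self-recursion
  //    (see the next paragraph).
};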

Part of the structure being passed around inside ValueTracking is a set of
already-considered @llvm.assume calls. This is to prevent a query using, for
example, assume(a == b), from recursing on itself. The context and DT params
are used to find applicable assumptions. An assumption needs to dominate the
context instruction, or come after it deterministically. In this latter case we
only handle the specific case where both the assumption and the context
instruction are in the same block, and we need to exclude assumptions from
being used to simplify their own ephemeral values (those which contribute only
to the assumption) because otherwise the assumption would prove its feeding
comparison trivial and would be removed.

This commit adds the plumbing and the logic for a simple masked-bit propagation
(just enough to write a regression test). Future commits add more patterns
(and, correspondingly, more regression tests).

llvm-svn: 217342
2014-09-07 18:57:58 +00:00
David Blaikie
9f14661b4c DebugInfo: Do not use DW_FORM_GNU_addr_index in skeleton CUs, GDB 7.8 errors on this.
It's probably not a huge deal to not do this - if we could, maybe the
address could be reused by a subprogram low_pc and avoid an extra
relocation, but it's just one per CU at best.

llvm-svn: 217338
2014-09-07 17:31:42 +00:00
Hal Finkel
575ec5e04c Add functions for finding ephemeral values
This adds a set of utility functions for collecting 'ephemeral' values. These
are LLVM IR values that are used only by @llvm.assume intrinsics (directly or
indirectly), and thus will be removed prior to code generation, implying that
they should be considered free for certain purposes (like inlining). The
inliner's cost analysis, and a few other passes, have been updated to account
for ephemeral values using the provided functionality.

This functionality is important for the usability of @llvm.assume, because it
limits the "non-local" side-effects of adding llvm.assume on inlining, loop
unrolling, etc. (these are hints, and do not generate code, so they should not
directly contribute to estimates of execution cost).
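
An assumed illustration:

#include <cstdint>

// The cast, mask, and compare exist only to feed the assume: they are
// "ephemeral", generate no code, and should be treated as free by cost
// analyses such as the inliner's.
void touch(int *P) {
  __builtin_assume((reinterpret_cast<uintptr_t>(P) & 31) == 0);
  *P = 1; // the only real work in this function
}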

llvm-svn: 217335
2014-09-07 13:49:57 +00:00
Hal Finkel
6122fb79cb Add an Assumption-Tracking Pass
This adds an immutable pass, AssumptionTracker, which keeps a cache of
@llvm.assume call instructions within a module. It uses callback value handles
to keep stale functions and intrinsics out of the map, and it relies on any
code that creates new @llvm.assume calls to notify it of the new instructions.
The benefit is that code needing to find @llvm.assume intrinsics can do so
directly, without scanning the function, thus allowing the cost of @llvm.assume
handling to be negligible when none are present.

The current design is intended to be lightweight. We don't keep track of
anything until we need a list of assumptions in some function. The first time
this happens, we scan the function. After that, we add/remove @llvm.assume
calls from the cache in response to registration calls and ValueHandle
callbacks.
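
A toy sketch of that lazy scheme, with hypothetical types and names (far simpler than the real pass):

#include <map>
#include <vector>

struct Function; // stand-ins for the LLVM types
struct CallInst;

// Stubbed for illustration: one full scan of F for @llvm.assume calls.
std::vector<const CallInst *> scanForAssumes(const Function *F);

class ToyAssumptionCache {
  std::map<const Function *, std::vector<const CallInst *>> Cache;

public:
  // The first query for F scans it once; later queries hit the cache.
  const std::vector<const CallInst *> &assumptions(const Function *F) {
    auto It = Cache.find(F);
    if (It == Cache.end())
      It = Cache.emplace(F, scanForAssumes(F)).first;
    return It->second;
  }

  // Code that creates a new @llvm.assume call must register it here.
  void registerAssumption(const Function *F, const CallInst *CI) {
    auto It = Cache.find(F);
    if (It != Cache.end()) // only functions that have already been scanned
      It->second.push_back(CI);
  }
};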

There are no new direct test cases for this pass, but because it calls its
validation function upon module finalization, we'll pick up detectable
inconsistencies from the other tests that touch @llvm.assume calls.

This pass will be used by follow-up commits that make use of @llvm.assume.

llvm-svn: 217334
2014-09-07 12:44:26 +00:00
Chandler Carruth
5a213858d8 [x86] Revert my over-eager commit in r217332.
I hadn't actually run all the tests yet and these combines have somewhat
surprisingly far reaching effects.

llvm-svn: 217333
2014-09-07 12:37:11 +00:00
Chandler Carruth
75df70c921 [x86] Tweak the rules surrounding 0,0 and 1,1 v2f64 shuffles and add
support for MOVDDUP, which is really important for matrix-multiply-style
operations that do lots of non-vector-aligned loads and splats.

The original motivation was to add support for MOVDDUP as the lack of it
regresses matmul_f64_4x4 by 5% or so. However, all of the rules here
were somewhat suspicious.

First, we should always be using the floating point domain shuffles,
regardless of how many copies we have to make as a movapd is *crazy*
faster than the domain switching cost on some chips. (Mostly because
movapd is crazy cheap.) Because SHUFPD can't do the copy-for-free trick
of the PSHUF instructions, there is no need to avoid canonicalizing on
UNPCK variants, so do that canonicalizing. This also ensures we have the
chance to form MOVDDUP. =]

Second, we assume SSE2 support when doing any vector lowering, and given
that, we should just use UNPCKLPD and UNPCKHPD as they can operate on
registers or memory. If vectors get spilled or come from memory at all
this is going to allow the load to be folded into the operation. If we
want to optimize for encoding size (the only difference, and only
a 2 byte difference) it should be done *much* later, likely after RA.

llvm-svn: 217332
2014-09-07 12:02:14 +00:00
Hans Wennborg
421418c484 Try to unflake AllocatorTest.TestAlignmentPastSlab
llvm-svn: 217331
2014-09-07 05:14:29 +00:00
Hans Wennborg
e6933de88d BumpPtrAllocator: do the size check without moving any pointers
Instead of aligning and moving the CurPtr forward, and then comparing
with End, simply calculate how much space is needed, and compare that
to how much is available.

Hopefully this avoids any doubts about comparing addresses possibly
derived from past the end of the slab array, overflowing, etc.

Also add a test where aligning CurPtr would move it past End.
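
A hypothetical sketch of the check (not the actual allocator code):

#include <cstddef>
#include <cstdint>

// Compute the space needed up front and compare it with what remains, so
// CurPtr is never advanced past End and no past-the-slab pointer is formed.
char *tryAllocate(char *&CurPtr, char *End, size_t Size, size_t Alignment) {
  uintptr_t P = reinterpret_cast<uintptr_t>(CurPtr);
  size_t Adjustment = (Alignment - P % Alignment) % Alignment;
  if (Adjustment + Size <= static_cast<size_t>(End - CurPtr)) {
    char *Result = CurPtr + Adjustment;
    CurPtr = Result + Size;
    return Result;
  }
  return nullptr; // caller falls back to allocating a new slab
}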

llvm-svn: 217330
2014-09-07 04:24:31 +00:00
Lang Hames
3912e666a4 [MCJIT] Revert partial RuntimeDyldELF cleanup that was prematurely committed in
r217328.

llvm-svn: 217329
2014-09-07 04:13:13 +00:00
Lang Hames
31f4910c56 [MCJIT] Rewrite RuntimeDyldMachO and its derived classes to use the 'Offset'
field of RelocationValueRef, rather than the 'Addend' field.

This is consistent with RuntimeDyldELF's use of RelocationValueRef, and more
consistent with the semantics of the data being stored (the offset from the
start of a section or symbol).

llvm-svn: 217328
2014-09-07 04:03:32 +00:00
Lang Hames
a31589cae0 [MCJIT] Fix a bug in RuntimeDyldImpl's read/writeBytesUnaligned methods.
The previous implementation was writing to the high bytes of integers on BE
targets (when run on LE hosts).
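
A sketch of the corrected behavior, with an assumed signature (not the actual method):

#include <cstdint>

// Write the low Size bytes of Value to Dst in the target's byte order.
// Indexing from the low byte is what fixes the bug: the old code
// effectively picked up the high bytes for BE targets on LE hosts.
void writeBytesUnaligned(uint64_t Value, uint8_t *Dst, unsigned Size,
                         bool TargetIsLittleEndian) {
  for (unsigned I = 0; I != Size; ++I) {
    unsigned ByteIdx = TargetIsLittleEndian ? I : Size - 1 - I;
    Dst[I] = uint8_t(Value >> (8 * ByteIdx));
  }
}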

http://llvm.org/PR20640

llvm-svn: 217325
2014-09-07 02:05:26 +00:00
Matt Arsenault
09d25183ee R600/SI: Fix register class for some 64-bit atomics
llvm-svn: 217323
2014-09-07 00:46:20 +00:00
Matt Arsenault
5cd4df4d43 R600/SI: Relax a few tests to help enable scheduler
llvm-svn: 217320
2014-09-06 20:44:41 +00:00
Matt Arsenault
6b7b9d6b29 R600/SI: Fix broken check lines.
Fix a missing check and hardcoded register numbers.

llvm-svn: 217318
2014-09-06 20:37:56 +00:00
Saleem Abdulrasool
15443979a5 MC: correct DWARF line info for PE/COFF
DWARF address ranges contain a reference to the debug_info section. This offset
is an absolute relocation except on non-PE/COFF targets, where it is section
relative.  We would emit this incorrectly, and trying to map the debug info from
the address would fail.

llvm-svn: 217317
2014-09-06 19:57:48 +00:00