mirror of https://github.com/RPCS3/llvm-mirror.git synced 2025-02-01 05:01:59 +01:00

105289 Commits

Author SHA1 Message Date
Daniel Sanders
f510655e3d Fix r212522 - [mips] Improve encapsulation of the .MIPS.abiflags implementation and limit scope of related enums
Added two lines that should have been in r212522.

llvm-svn: 212523
2014-07-08 10:35:52 +00:00
Daniel Sanders
694f6eac28 [mips] Improve encapsulation of the .MIPS.abiflags implementation and limit scope of related enums
Summary:
Follow-on to r212519 to improve the encapsulation and limit the scope of the enums.

Also merged two very similar parser functions, fixed a bug where ASEs
were not being reported, and marked CPR1s as being 128-bit when MSA is
enabled.

Differential Revision: http://reviews.llvm.org/D4384

llvm-svn: 212522
2014-07-08 10:11:38 +00:00
Renato Golin
74bbc07d65 Revert "Refactor ARM subarchitecture parsing"
This reverts commit 7b4a6882467e7fef4516a0cbc418cbfce0fc6f6d.

llvm-svn: 212521
2014-07-08 10:06:16 +00:00
Arnaud A. de Grandmaison
5d079c21a0 Truncate the immediate in logical operations to the register width
And continue to produce an error if the 32 most significant bits are not all ones or zeros.

llvm-svn: 212520
2014-07-08 09:53:04 +00:00
Vladimir Medic
f3dd261ec8 Mips.abiflags is a new implicitly generated section that will be present in all new modules. The section contains a versioned data structure that essentially provides the information a program loader needs to determine the requirements of the application. This patch implements the .MIPS.abiflags section and provides test cases for it.
llvm-svn: 212519
2014-07-08 08:59:22 +00:00
Chandler Carruth
be294c3274 [x86,SDAG] Sink the logic for folding shuffles of splats more
aggressively from the x86 shuffle lowering to the generic SDAG vector
shuffle formation code.

This code already tried to fold away shuffles of splats! It just had
lots of bugs and couldn't handle the case my new x86 shuffle lowering
needed.

First, it failed to correctly compute whether N2 was undef because it
pre-computed this, then did transformations which could *make* N2 undef,
then failed to ever re-consider the precomputed state.

Second, it didn't look through bitcasts at all, even in the safe cases
where they are just element-type bitcasts with no change to the number
of elements.

Third, it didn't handle all-zero bitcasts nicely the way my code in the
x86 side of things did, which is essential to getting good zext-shuffle
lowerings.

But all of these are generic. I just ported the code down to this layer
and fixed the surrounding bugs. Tests exercising this in the x86 backend
still pass and some silly code in widen_cast-6.ll gets better. I updated
that test to be a bit more precise but it's still pretty unclear what
the value of the test is in this day and age.
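
For reference, a hypothetical reduced IR example (not taken from the patch's tests) of the kind of pattern this fold targets: every lane of %s holds the same value, so any shuffle that reads only from %s can be replaced by %s itself.

define <4 x i32> @shuffle_of_splat(i32 %x) {
  %ins = insertelement <4 x i32> undef, i32 %x, i32 0
  %s = shufflevector <4 x i32> %ins, <4 x i32> undef, <4 x i32> zeroinitializer
  ; any mask that only selects lanes of the splat %s folds to %s
  %r = shufflevector <4 x i32> %s, <4 x i32> undef, <4 x i32> <i32 2, i32 2, i32 3, i32 0>
  ret <4 x i32> %r
}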

llvm-svn: 212517
2014-07-08 08:45:38 +00:00
Chandler Carruth
b2bc863a82 [SDAG] Actually check for a non-constant splat and clarify comments
around the handling of UNDEF lanes in boolean vector content analysis.

The code before my changes here also failed to check for non-constant
splats in a buildvector. I have no idea how to trigger this; I just
spotted it by inspection when trying to understand the code. It seems
extremely unlikely to be worth the trouble to teach the only caller of
this code (DAG combining setcc patterns) how to cleverly handle undef
lanes, so I've just commented more thoroughly that we're giving up
there.

llvm-svn: 212515
2014-07-08 07:44:15 +00:00
Chandler Carruth
0764373bcc [SDAG] Build up a more rich set of APIs for querying build-vector SDAG
nodes about whether they are splats. This is factored out and improved
from r212324 which got reverted as it was far too aggressive. The new
API should help more conservatively handle buildvectors that are
a mixture of splatted and undef values.

No functionality change at this point. The hope is to slowly
re-introduce the undef-tolerant optimization of splats, but each time
being forced to make a conscious decision about how to handle the undefs
in a way that doesn't lead to contradicting assumptions about the
collapsed value.

Hal has pointed out in discussions that this may not end up being the
desired API and instead it may be more convenient to get a mask of the
undef elements or something similar. I'm starting simple and will expand
the API as I adapt actual callers and see exactly what they need.

llvm-svn: 212514
2014-07-08 07:19:55 +00:00
Alexey Samsonov
ff7b6cd169 [ASan] Completely remove sanitizer blacklist file from instrumentation pass.
All blacklisting logic is now moved to the frontend (Clang).
If a function (or the source file it is in) is blacklisted, it doesn't
get the sanitize_address attribute and is therefore not instrumented.
If a global variable (or the source file it is in) is blacklisted, it is
reported as blacklisted by the entry in llvm.asan.globals metadata,
and is not modified by the instrumentation.

The latter may lead to certain false positives - not all the globals
created by Clang are described in llvm.asan.globals metadata (e.g.,
RTTI descriptors are not), so we may start reporting errors on them
even if the "module" they appear in is blacklisted. We assume it's fine
to take such a risk:
  1) errors on these globals are rare and usually indicate wild memory access
  2) we can lazily add descriptors for these globals into llvm.asan.globals.

llvm-svn: 212505
2014-07-08 00:50:49 +00:00
Adam Nemet
ee6f661620 [X86] AVX512: Only allow k1-k7 as predicates to vpcmp*
k0 is allowed as a destination, but not as a predicate/writemask.

I also modified the test to allow checking of error messages by the assembler.
I applied a similar approach to the test ret.s in the same directory.

llvm-svn: 212504
2014-07-08 00:22:32 +00:00
Alexey Samsonov
62b9ab9cbe Kill unnecessary include
llvm-svn: 212503
2014-07-08 00:03:11 +00:00
Andrea Di Biagio
e6975da1cd [x86] Fix assertion failure caused by a wrong combine of PSHUFD nodes with different types.
When combining a sequence of two PSHUFD dag nodes into a single PSHUFD,
make sure that we assign the correct type to the resulting PSHUFD.
X86ISD::PSHUFD dag nodes can be either MVT::v4i32 or MVT::v4f32.

Before this change, an assertion failure was triggered in method
'DAGCombinerInfo::CombineTo' when trying to combine the shuffles from the test
below into a single PSHUFD.

define <4 x float> @test1(<4 x float> %V) {
  %1 = shufflevector <4 x float> %V, <4 x float> undef, <4 x i32> <i32 3, i32 0, i32 2, i32 1>
  %2 = shufflevector <4 x float> %1, <4 x float> undef, <4 x i32> <i32 3, i32 0, i32 2, i32 1>
  ret <4 x float> %2
}

llvm-svn: 212498
2014-07-07 23:25:23 +00:00
Sanjay Patel
69a5950ba3 fixed some typos
llvm-svn: 212495
2014-07-07 22:13:58 +00:00
Juergen Ributzka
19aa45cd04 [FastISel][X86] Fix smul.with.overflow.i8 lowering.
Add custom lowering code for signed multiply instruction selection, because the
default FastISel instruction selection for ISD::MUL will use unsigned multiply
for the i8 type and signed multiply for all other types. This would set the
incorrect flags for the overflow check.
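
For reference, the intrinsic in question looks like this at the IR level (hypothetical usage, not the patch's test case); the i1 result must reflect signed overflow, which is why selecting an unsigned i8 multiply sets the wrong flags.

declare { i8, i1 } @llvm.smul.with.overflow.i8(i8, i8)

define i1 @smul_overflows(i8 %a, i8 %b) {
  ; the second result is the signed-overflow bit checked below
  %res = call { i8, i1 } @llvm.smul.with.overflow.i8(i8 %a, i8 %b)
  %ov = extractvalue { i8, i1 } %res, 1
  ret i1 %ov
}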

This fixes <rdar://problem/17549300>

llvm-svn: 212493
2014-07-07 21:52:21 +00:00
Louis Gerbarg
30a2dcd984 Allow AArch64FastISel to degrade gracefully in the presence of an MVT::i128
Currently AArch64FastISel crashes if it tries to extend an integer into an
MVT::i128. This can happen by creating 128-bit integers like so:

  typedef unsigned int uint128_t __attribute__((mode(TI)));
  typedef int sint128_t __attribute__((mode(TI)));

This patch makes EmitIntExt check for their presence and then fall back to
SelectionDAG.
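
At the IR level, the offending extension is simply the following (hypothetical reduction, assuming a 128-bit integer result):

define i128 @widen(i64 %x) {
  ; FastISel's EmitIntExt cannot produce an MVT::i128, so this now falls
  ; back to SelectionDAG instead of crashing
  %r = zext i64 %x to i128
  ret i128 %r
}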

Tests included.

rdar://17516686

llvm-svn: 212492
2014-07-07 21:37:51 +00:00
Sanjay Patel
e29d90184d Fix for PR17073 ( http://llvm.org/pr17073 ): simplifycfg illegally hoists an operation in a phi node that can trap.
This patch extends an existing loop over phi nodes in SimplifyCondBranchToCondBranch() to check for trapping ops and bail out of the optimization if one is found.

The test cases verify that trapping ops are not hoisted and non-trapping ops are still optimized as expected.
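
A hypothetical reduced example (not the patch's test case) of the guarded pattern: the incoming constant expression can trap, so it must not be speculated above the branch.

@g = external global i32

define i32 @no_hoist(i1 %c) {
entry:
  br i1 %c, label %maybe, label %join
maybe:
  br label %join
join:
  ; the udiv constant expression may trap (divide by zero if @g is at address 0)
  %p = phi i32 [ udiv (i32 1, i32 ptrtoint (i32* @g to i32)), %maybe ], [ 0, %entry ]
  ret i32 %p
}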

llvm-svn: 212490
2014-07-07 21:19:00 +00:00
Rafael Espindola
aecbc5b288 Use raw_fd_ostream instead of std::ofstream.
llvm-svn: 212483
2014-07-07 20:34:51 +00:00
Renato Golin
659d34425a Refactor ARM subarchitecture parsing
According to a FIXME in ARMMCTargetDesc.cpp the ARM version parsing should be
in the Triple helper class.

Patch by: Gabor Ballabas

llvm-svn: 212479
2014-07-07 20:01:11 +00:00
Ulrich Weigand
d3a8fd6703 [PowerPC] Fix testcase regression
Use -mcpu to avoid different codegen depending on host platform.

llvm-svn: 212478
2014-07-07 19:41:54 +00:00
Ulrich Weigand
5e8a0b585b [PowerPC] Fix no-assert build
r212476 caused a compile failure (unused variable) in a non-assertion
build ...

llvm-svn: 212477
2014-07-07 19:39:44 +00:00
Ulrich Weigand
8f2aff0b0c [PowerPC] Fix "byval align" arguments
Arguments passed as "byval align" should get the specified alignment
in the parameter save area.  There was some code in PPCISelLowering.cpp
that attempted to implement this, but it didn't work correctly:
while the code did update the ArgOffset value, it neglected to update
the PtrOff value (which was already computed from the old ArgOffset),
and it also neglected to update GPR_idx -- fields skipped due to
alignment in the save area must likewise be skipped in GPRs.

This patch fixes and simplifies this logic by:
- handling argument offset alignment right at the beginning
  of argument processing, using a new helper routine
  CalculateStackSlotAlignment (this avoids having to update
  PtrOff and other derived values later on)
- not tracking GPR_idx separately, but always computing the
  correct GPR_idx for each argument *from* its ArgOffset
- removing some redundant computation in LowerFormalArguments:
  MinReservedArea must equal ArgOffset after argument processing,
  so there's no use in computing it twice.

[This doesn't change the behavior of the current clang front-end,
since that never creates "byval align" arguments at the moment.
This will change with a follow-on patch, however.]
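
For reference, a "byval align" argument looks roughly like this at the IR level (hypothetical types, not from the patch); the alignment applies to the copy placed in the parameter save area:

%struct.S = type { [4 x i64] }

; the callee-visible copy of the struct argument must be 16-byte aligned,
; which may require skipping GPRs and stack slots in the parameter save area
declare void @callee(%struct.S* byval align 16)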

llvm-svn: 212476
2014-07-07 19:26:41 +00:00
Chandler Carruth
9fd1035842 [x86] Revert r212324 which was too aggressive w.r.t. allowing undef
lanes in vector splats.

The core problem here is that undef lanes can't *unilaterally* be
considered to contribute to splats. Their handling needs to be more
cautious. There is also a reported failure of the nightly testers
(thanks Tobias!) that may well stem from the same core issue. I'm going
to fix this theoretical issue, factor the APIs a bit better, and then
verify that I don't see anything bad with Tobias's reduction from the
test suite before recommitting.

Original commit message for r212324:
  [x86] Generalize BuildVectorSDNode::getConstantSplatValue to work for
  any constant, constant FP, or undef splat and to tolerate any undef
  lanes in a splat, then replace all uses of isSplatVector in X86's
  lowering with it.

  This fixes issues where undef lanes in an otherwise splat vector would
  prevent the splat logic from firing. It is a touch more awkward to use
  this interface, but it is much more accurate. Suggestions for better
  interface structuring welcome.

  With this fix, the code generated with the widening legalization
  strategy for widen_cast-4.ll is *dramatically* improved as the special
  lowering strategies for a v16i8 SRA kick in even though the high lanes
  are undef.

  We also get a slightly different choice for broadcasting an aligned
  memory location, and use vpshufd instead of vbroadcastss. This looks
  like a minor win for pipelining and domain crossing, but a minor loss
  for the number of micro-ops. I suspect it's a wash, but folks can
  easily tweak the lowering if they want.

llvm-svn: 212475
2014-07-07 19:03:32 +00:00
Matt Arsenault
f39a57f581 R600: Fix mishandling of load / store chains.
Fixes various bugs with reordering loads and stores.
Scalarized vector loads weren't collecting the chains
at all.

llvm-svn: 212473
2014-07-07 18:34:45 +00:00
Matt Arsenault
98979d7f5f Fix typo, weird indentation
llvm-svn: 212472
2014-07-07 18:34:42 +00:00
Tim Northover
b4e00dee49 [testing]: lld generally lives in tools/, so fix llvm-lit.
Otherwise we can't run individual tests directly ("llvm-lit /path/to/test")

llvm-svn: 212461
2014-07-07 15:26:53 +00:00
Benjamin Kramer
195f0552f0 Make helper functions static.
llvm-svn: 212460
2014-07-07 14:47:51 +00:00
Tim Northover
a011516d8b X86: revert unintentional change to X86FastISel.
This crept in with r212443.

llvm-svn: 212459
2014-07-07 14:06:42 +00:00
Evgeniy Stepanov
8d97516cfe [asan] Generate asm instrumentation in MC.
Generate the entire ASan asm instrumentation in MC without
relying on runtime helper functions.

Patch by Yuri Gorshenin.

llvm-svn: 212455
2014-07-07 13:57:37 +00:00
Evgeniy Stepanov
6ff273cf04 [msan] Fix handling of phi in blacklisted functions.
llvm-svn: 212454
2014-07-07 13:28:31 +00:00
Benjamin Kramer
065c70166c InstCombine: Simplify code, no functionality change.
llvm-svn: 212449
2014-07-07 11:01:16 +00:00
Chandler Carruth
bf23b76679 [x86] Teach the new vector shuffle lowering code to handle what is
essentially a DAG combine that never gets a chance to run.

We might typically expect DAG combining to remove shuffles-of-splats and
other similar patterns, but we don't get a chance to run the DAG
combiner when we recursively form sub-shuffles during the lowering of
a shuffle. So instead hand-roll a really important combine directly into
the lowering code to detect shuffles-of-splats, especially shuffles of
an all-zero splat which needn't even have the same element width, etc.

This lets the new vector shuffle lowering handle shuffles which
implement things like zero-extension really nicely. This will become
even more important when I wire the legalization of zero-extension to
vector shuffles with the new widening legalization strategy.
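
For illustration (hypothetical example, little-endian lane order assumed), zero-extending a vector can be expressed as an interleaving shuffle with an all-zero vector:

define <8 x i16> @zext_via_shuffle(<8 x i8> %v) {
  ; interleave the source bytes with zero bytes, then reinterpret as i16 lanes
  %z = shufflevector <8 x i8> %v, <8 x i8> zeroinitializer, <16 x i32> <i32 0, i32 8, i32 1, i32 9, i32 2, i32 10, i32 3, i32 11, i32 4, i32 12, i32 5, i32 13, i32 6, i32 14, i32 7, i32 15>
  %r = bitcast <16 x i8> %z to <8 x i16>
  ret <8 x i16> %r
}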

llvm-svn: 212444
2014-07-07 09:06:58 +00:00
Tim Northover
96e2bce418 CodeGen: it turns out that NAND is not the same thing as BIC. At all.
We've been performing the wrong operation on ARM for "atomicrmw nand" for
years, since "a NAND b" is "~(a & b)" rather than ARM's very tempting "a & ~b".
This bled over into the generic expansion pass.

So I assume no-one has ever actually tried to do an atomic nand in the real
world. Oh well.
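
For reference, the operation in question (IR syntax of this era): the expansion must compute ~(old & val), whereas ARM's BIC computes old & ~val.

define i32 @atomic_nand(i32* %p, i32 %v) {
  ; returns the old value; the value stored is ~(old & %v), not old & ~%v
  %old = atomicrmw nand i32* %p, i32 %v seq_cst
  ret i32 %old
}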

llvm-svn: 212443
2014-07-07 09:06:35 +00:00
Saleem Abdulrasool
b5f61cbe40 ARM: properly lower dllimport'ed global values
This completes the handling for DLL import storage symbols when lowering
instructions.  A DLL import storage symbol must have an additional load
performed prior to use.  This is applicable to variables and functions.

This is particularly important for non-function symbols as it is possible to
handle function references by emitting a thunk which performs the translation
from the unprefixed __imp_ symbol to the proper symbol (although this is a
non-optimal lowering).  For a variable symbol, no such thunk can be
accommodated.
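
At the IR level a dllimport reference looks like this (hypothetical example); the extra load through the __imp_ pointer is introduced during lowering:

@imported = external dllimport global i32

define i32 @use() {
  ; codegen must first load the pointer held by __imp_imported,
  ; then load the value it points to
  %v = load i32* @imported
  ret i32 %v
}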

llvm-svn: 212431
2014-07-07 05:18:35 +00:00
Saleem Abdulrasool
b53a07c63a ARM: correctly mangle dllimport symbols
Add support for tracking DLLImport storage class information on a per symbol
basis in the ARM instruction selection.  Use that information to correctly
mangle the symbol (dllimport symbols are referenced via *__imp_<name>).

llvm-svn: 212430
2014-07-07 05:18:30 +00:00
Saleem Abdulrasool
dd5562d3c7 ARM: unify symbol name retrieval
Ensure that all paths that retrieve the symbol name go through GetARMGVSymbol
rather than getSymbol.  This is desirable so that any global symbol mangling can
be centralised to this function.  The motivation for this is the handling of
symbols that are marked as having DLL import storage.  Such a symbol requires an
extra load that is currently handled in the backend and a __imp_ prefix on the
symbol name.

llvm-svn: 212429
2014-07-07 05:18:22 +00:00
Kevin Qin
c72c5a1639 [AArch64] Normalize all constants to build a vector.
The value of constant operands will be truncated to fit the element width.

llvm-svn: 212428
2014-07-07 02:45:40 +00:00
Sanjay Patel
f4bfc3ef60 fixed typos in comments
llvm-svn: 212424
2014-07-06 23:24:53 +00:00
Sanjay Patel
81fb1ce2eb fixed some typos in comments
llvm-svn: 212423
2014-07-06 23:10:24 +00:00
Saleem Abdulrasool
9cdbf65f90 AArch64: whitespace cleanup
llvm-svn: 212420
2014-07-06 22:13:26 +00:00
Aaron Ballman
89c83764b1 These should be EXPECT_TRUE, not EXPECT_FALSE. Amends r212415.
llvm-svn: 212419
2014-07-06 20:20:02 +00:00
Aaron Ballman
b97f52e5e0 Fixing compile errors related to changes with MemoryBuffer::getFile.
llvm-svn: 212415
2014-07-06 19:34:52 +00:00
Rafael Espindola
858b9e1423 Update the MemoryBuffer API to use ErrorOr.
llvm-svn: 212405
2014-07-06 17:43:13 +00:00
Rafael Espindola
5b4502ed15 Declare variable on first use.
llvm-svn: 212403
2014-07-06 14:31:22 +00:00
Rafael Espindola
43905716e7 This only needs a StringRef.
llvm-svn: 212402
2014-07-06 14:24:03 +00:00
Rafael Espindola
19edf68ee5 This only needs a StringRef.
llvm-svn: 212401
2014-07-06 14:17:29 +00:00
Alp Toker
223ff032c6 Fix the MSVC build following r212382
Looks like the casts are needed there after all.

llvm-svn: 212399
2014-07-06 10:54:41 +00:00
Alp Toker
0ca11b2493 SourceMgr: make valid buffer IDs start from one
Use 0 for the invalid buffer instead of -1/~0 and switch to unsigned
representation to enable more idiomatic usage.

Also introduce a trivial SourceMgr::getMainFileID() instead of hard-coding 0/1
to identify the main file.

llvm-svn: 212398
2014-07-06 10:33:31 +00:00
Alp Toker
6874e36edb Don't use StringRef iterator functions for data access
And also remove some redundant casts from r212371.

llvm-svn: 212397
2014-07-06 10:32:55 +00:00
Alp Toker
3cfd773515 Remove IntrusiveRefCntPtr::getPtr() function
It was deprecated in r212366 and all uses have been switched to get().

llvm-svn: 212382
2014-07-05 22:20:59 +00:00
Matt Arsenault
5c86fb1055 Use cast<> instead of dyn_cast + assert
llvm-svn: 212380
2014-07-05 21:16:43 +00:00