mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-24 13:33:37 +02:00
Commit Graph

105876 Commits

Author SHA1 Message Date
Michael J. Spencer
ff351b2bdb Revert "[x86] Fold extract_vector_elt of a load into the Load's address computation."
There's a bug where this can create cycles in the DAG. It will take a bit
to fix, so I'm backing it out for now.

llvm-svn: 213339
2014-07-18 00:15:50 +00:00
Eric Christopher
1ff1279d6f Reset the Subtarget in the AsmPrinter for each machine function
and add an explanatory comment about dual initialization. Fix the
use of the Subtarget to grab the information from the target machine.

llvm-svn: 213336
2014-07-18 00:08:53 +00:00
Eric Christopher
29cda30f93 Avoid resetting the UseSoftFloat and FloatABIType on the TargetMachine
Options struct and move the comment to inMips16HardFloat. Use the
fact that we now know whether or not we cared about soft float to
set the libcalls.
Accordingly rename mipsSEUsesSoftFloat to abiUsesSoftFloat and
propagate since it's no longer CPU specific.

llvm-svn: 213335
2014-07-18 00:08:50 +00:00
Lang Hames
b9404be62f [MCJIT] Fix the alignment requirements for ARM and AArch64 which were mistakenly
relaxed in the big RuntimeDyldMachO cleanup of r213293.

No test case yet - this was found via inspection and there's no easy way to test
GOT alignment in RuntimeDyldChecker at the moment. I'm working on adding support
for this now, and hope to have a test case for this soon.

llvm-svn: 213331
2014-07-17 23:11:30 +00:00
Kevin Enderby
0d0ec178f4 Tweak formatting to match what clang-format would produce for llvm-nm.cpp.
No functional change.

llvm-svn: 213330
2014-07-17 22:56:27 +00:00
Kevin Enderby
abaf0d20c2 Add printing of Mach-O stabs in llvm-nm.
llvm-svn: 213327
2014-07-17 22:47:16 +00:00
Reid Kleckner
cabffe9de0 Remove rules against std::function from the programmer's manual
Clarify that llvm::function_ref is like StringRef for callables.

llvm-svn: 213326
2014-07-17 22:43:00 +00:00
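A minimal sketch of the StringRef-for-callables analogy from the commit above: llvm::function_ref is a non-owning reference to a callable, intended for parameters that invoke the callable but never store it. The helper and its names below are illustrative, not part of the commit.

#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/STLExtras.h"  // llvm::function_ref

// Accepts any callable (lambda, function pointer, functor) without copying
// or heap-allocating it, much like StringRef accepts any string storage.
static int sumMatching(llvm::ArrayRef<int> Values,
                       llvm::function_ref<bool(int)> Pred) {
  int Sum = 0;
  for (int V : Values)
    if (Pred(V))
      Sum += V;
  return Sum;
}

// Usage: int S = sumMatching(Vals, [Limit](int V) { return V < Limit; });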
Nico Weber
5a62ce559b ms inline asm: Don't add x86 segment registers to the clobber list.
Clang tries to check the clobber list but doesn't list segment registers in its
x86 register list. This fixes PR20343.

llvm-svn: 213303
2014-07-17 20:24:55 +00:00
Lang Hames
fa39401520 Make myself code owner of MCJIT.
llvm-svn: 213302
2014-07-17 20:23:31 +00:00
Alp Toker
1bfa029bc4 Drop the udis86 wrapper from llvm::sys
This optional dependency on the udis86 library was added some time back to aid
JIT development, but doesn't make much sense to link into LLVM binaries these
days.

llvm-svn: 213300
2014-07-17 20:05:29 +00:00
Reid Kleckner
a0033713ef TableGen: Add 'static' to a large array to avoid a huge stack allocation
Speculative fix for a -Wframe-larger-than warning from gcc.  Clang will
implicitly promote such constant arrays to globals, so in theory it
won't hit this.

llvm-svn: 213298
2014-07-17 19:43:40 +00:00
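For illustration only (not the actual TableGen emitter code): a large constant array declared inside a function occupies the stack frame and can trip -Wframe-larger-than, whereas marking it static moves it into read-only data emitted once.

unsigned lookupLocal(unsigned Idx) {
  const unsigned Table[16384] = { /* ... */ };        // lives in the frame: ~64 KiB per call
  return Table[Idx & 16383];
}

unsigned lookupStatic(unsigned Idx) {
  static const unsigned Table[16384] = { /* ... */ };  // emitted once as a global
  return Table[Idx & 16383];
}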
Arnaud A. de Grandmaison
32cb6202b9 [AArch64] Cleanup AsmParser: no need to use dyn_cast + assert. cast does it for us.
llvm-svn: 213296
2014-07-17 19:08:14 +00:00
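The cleanup pattern from the commit above, in general form (an illustrative snippet, not the AsmParser diff itself; 'Expr' stands for any expression known to have the expected type): cast<> already checks the dynamic type in assertion-enabled builds, so pairing dyn_cast<> with an assert is redundant.

// Before:
//   const MCConstantExpr *CE = dyn_cast<MCConstantExpr>(Expr);
//   assert(CE && "expected a constant expression");
// After: cast<> performs the same type check internally in +Asserts builds.
const MCConstantExpr *CE = cast<MCConstantExpr>(Expr);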
Suyog Sarda
7302ccd41e Rectify r213231. Use proper version of 'ComputeNumSignBits'.
Earlier, when the code was in InstCombine, we were calling the version of ComputeNumSignBits in InstCombine.h
that automatically added the DataLayout* before calling into ValueTracking.
When the code moved to InstSimplify, we started calling into ValueTracking directly without passing in the DataLayout*.
This patch fixes that by passing the DataLayout* to ComputeNumSignBits.

llvm-svn: 213295
2014-07-17 19:07:00 +00:00
Lang Hames
8eaed621d6 [MCJIT] Significantly refactor the RuntimeDyldMachO class.
The previous implementation of RuntimeDyldMachO mixed logic for all targets
within a single class, creating problems for readability, maintainability, and
performance. To address these issues, this patch strips the RuntimeDyldMachO
class down to just target-independent functionality, and moves all
target-specific functionality into target-specific subclasses of RuntimeDyldMachO.

The new class hierarchy is as follows:

class RuntimeDyldMachO
Implemented in RuntimeDyldMachO.{h,cpp}
Contains logic that is completely independent of the target. This consists
mostly of MachO helper utilities which the derived classes use to get their
work done.


template <typename Impl>
class RuntimeDyldMachOCRTPBase<Impl> : public RuntimeDyldMachO
Implemented in RuntimeDyldMachO.h
Contains generic MachO algorithms/data structures that defer to the Impl class
for target-specific behaviors.

RuntimeDyldMachOARM : public RuntimeDyldMachOCRTPBase<RuntimeDyldMachOARM>
RuntimeDyldMachOARM64 : public RuntimeDyldMachOCRTPBase<RuntimeDyldMachOARM64>
RuntimeDyldMachOI386 : public RuntimeDyldMachOCRTPBase<RuntimeDyldMachOI386>
RuntimeDyldMachOX86_64 : public RuntimeDyldMachOCRTPBase<RuntimeDyldMachOX86_64>
Implemented in their respective *.h files in lib/ExecutionEngine/RuntimeDyld/MachOTargets
Each of these contains the relocation logic specific to their target architecture.

llvm-svn: 213293
2014-07-17 18:54:50 +00:00
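A stripped-down sketch of the CRTP arrangement described in the commit above; the member function names here are illustrative, only the class names come from the commit.

class RuntimeDyldMachO {
  // Target-independent MachO helper utilities.
};

template <typename Impl>
class RuntimeDyldMachOCRTPBase : public RuntimeDyldMachO {
  Impl &impl() { return static_cast<Impl &>(*this); }
public:
  void finalizeLoad() {
    // Generic algorithm that defers the target-specific step to the subclass
    // without any virtual dispatch.
    impl().processTargetRelocations();
  }
};

class RuntimeDyldMachOARM64
    : public RuntimeDyldMachOCRTPBase<RuntimeDyldMachOARM64> {
public:
  void processTargetRelocations() { /* ARM64-specific relocation logic */ }
};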
Alexey Samsonov
7fb0be9dd0 [ASan] Don't instrument load/stores with !nosanitize metadata.
This is used to avoid instrumentation of instructions added by UBSan
in Clang frontend (see r213291). This fixes PR20085.

Reviewed in http://reviews.llvm.org/D4544.

llvm-svn: 213292
2014-07-17 18:48:12 +00:00
Hans Wennborg
ebf51704c5 Typo: exists -> exits
llvm-svn: 213290
2014-07-17 18:33:44 +00:00
Justin Holewinski
3021ef095b [NVPTX] Improve handling of FP fusion
We now consider the FPOpFusion flag when determining whether
to fuse ops.  We also explicitly emit add.rn when fusion is
disabled to prevent ptxas from fusing the operations on its
own.

llvm-svn: 213287
2014-07-17 18:10:09 +00:00
Matt Arsenault
0968995f30 Fix typos
llvm-svn: 213285
2014-07-17 17:50:22 +00:00
Zinovy Nis
c10a269e5f [BUG] Due to a typo introduced in r199933 and r200027, two tests for FMA were never actually run.
llvm-svn: 213283
2014-07-17 17:14:35 +00:00
Adam Nemet
09fcf8939c [X86] AVX512: Add disassembler support for compressed displacement
There are two parts here.  First is to modify tablegen to adjust the encoding
type ENCODING_RM with the scaling factor.

The second is to use the new encoding types to compute the correct
displacement in the decoder.

Fixes <rdar://problem/17608489>

llvm-svn: 213281
2014-07-17 17:04:56 +00:00
Adam Nemet
e394093056 [X86] AVX512: Rename EVEX_CD8V to CD8_Form
This is to match the naming of CD8_EltSize, CD8_Scale, etc.

No functional change.

llvm-svn: 213280
2014-07-17 17:04:52 +00:00
Adam Nemet
7032b7fc18 [X86] AVX512: Use the TD version of CD8_Scale in the assembler
Passes the computed scaling factor in TSFlags rather than the old attributes.

Also removes the C++ version of computing the scaling factor (MemObjSize)
along with the asserts added by the previous patch.

No functional change.

llvm-svn: 213279
2014-07-17 17:04:50 +00:00
Adam Nemet
76494ac79d [X86] AVX512: Move compressed displacement logic to TD
This does not actually move the logic yet but reimplements it in the Tablegen
language.  Then asserts that the new implementation results in the same value.

The next patch will remove the assert and the temporary use of the TSFlags and
remove the C++ implementation.

The formula requires a limited form of the logical left and right shift operators.
I implemented these with the bit-extract/insert operator (i.e. blah{bits}).

No functional change.

llvm-svn: 213278
2014-07-17 17:04:34 +00:00
Adam Nemet
3bb0a6b076 [TableGen] Allow shift operators to take bits<n>
Convert the operand to int if possible, i.e. if the value is properly
initialized.  (I suppose there is further room for improvement here to also
perform the shift if the uninitialized bits are shifted out.)

With this little change we can now compute the scaling factor for compressed
displacement with pure tablegen code in the X86 backend.  This is useful
because both the X86-disassembler-specific part of tablegen and the assembler
need this and TD is the natural sharing place.

The patch also adds the missing documentation for the shift and add operators.

llvm-svn: 213277
2014-07-17 17:04:27 +00:00
Justin Holewinski
60265475a1 [NVPTX] Add missing .v4 qualifier on vector store instruction
llvm-svn: 213276
2014-07-17 16:58:56 +00:00
Saleem Abdulrasool
86c034e0ff MC: correct DWARF header for PE/COFF assembly input
The header contains an offset to the DWARF abbreviations for the CU.  The offset
must be section relative for COFF and absolute for others.  The non-assembly
code path for the DWARF header generation already had the correct emission for
the headers.  This corrects just the assembly path.  Due to the invalid
relocation, processing of the debug information would previously halt at the
first assembly input, because the associated abbreviations ended up out of
range: their location had been increased by the image base plus the section
offset.

This addresses PR20332.

llvm-svn: 213275
2014-07-17 16:27:44 +00:00
Saleem Abdulrasool
b6a416296a MC: fix MCAsmInfo usage for windows-itanium
Windows Itanium uses the GNU COFF assembly format, not ELF.

llvm-svn: 213274
2014-07-17 16:27:40 +00:00
Saleem Abdulrasool
e1094fea63 MC: collapse emission of producer
Rather than use three EmitBytes, concatenate the string at compile time,
constructing a single StringRef and emitting the data in one shot.  This also
creates nicer assembly output.  NFC.

llvm-svn: 213273
2014-07-17 16:27:35 +00:00
Justin Holewinski
5248ed4d97 [NVPTX] Flag surface/texture query instructions with IsTexSurfQuery
Also, add some tests to make sure we can handle surface/texture
queries on both Fermi and Kepler+.

llvm-svn: 213268
2014-07-17 14:51:33 +00:00
Justin Holewinski
9c3e284e16 [NVPTX] Add more surface/texture intrinsics, including CUDA unified texture fetch
This also uses TSFlags to mark machine instructions that are surface/texture
accesses, as well as the vector width for surface operations.  This is used
to simplify some of the switch statements that need to detect surface/texture
instructions

llvm-svn: 213256
2014-07-17 11:59:04 +00:00
Tim Northover
48ae22e14a ARM: support direct f16 <-> f64 conversions
ARMv8 has instructions to handle it; otherwise a libcall is needed.

llvm-svn: 213254
2014-07-17 11:27:04 +00:00
Justin Holewinski
a1eab159d8 [TABLEGEN] Do not crash on intrinsics with names longer than 40 characters
Differential Revision: http://reviews.llvm.org/D4537

llvm-svn: 213253
2014-07-17 11:23:29 +00:00
Tim Northover
21a41cb9a1 CodeGen: generate single libcall for fptrunc -> f16 operations.
Previously we asserted on this code. Currently compiler-rt doesn't
actually implement any of these new libcalls, but external help is
pretty much the only viable option for LLVM.

I've followed the much more generic "__truncST2" naming, as opposed to
the odd name for f32 -> f16 truncation. This can obviously be changed
later, or overridden by any targets that need to.

llvm-svn: 213252
2014-07-17 11:12:12 +00:00
Tim Northover
81da81acc1 X86: support double extension of f16 type.
x86 has no native ability to extend an f16 to f64, but the same result
is obtained if we expand it into two separate extensions: f16 -> f32
-> f64.

Unfortunately the same is not true for truncate, so that still results
in a compilation failure.

llvm-svn: 213251
2014-07-17 11:04:04 +00:00
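A hedged sketch of the two-step expansion described in the commit above; the exact ISD opcode names vary across LLVM revisions (FP16_TO_FP32 is assumed here), but the shape is a half-to-float conversion followed by an ordinary FP_EXTEND. This is legal because every f16 value is exactly representable in f32, so no double rounding can occur on the extend path.

SDValue expandF16ToF64(SelectionDAG &DAG, SDValue Op, SDLoc DL) {
  // f16 -> f32 first ...
  SDValue F32 = DAG.getNode(ISD::FP16_TO_FP32, DL, MVT::f32, Op);
  // ... then f32 -> f64; the combined result equals a direct f16 -> f64 extend.
  return DAG.getNode(ISD::FP_EXTEND, DL, MVT::f64, F32);
}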
Tim Northover
eae1f1c8cc CodeGen: extend f16 conversions to permit types > float.
This makes the two intrinsics @llvm.convert.from.f16 and
@llvm.convert.to.f16 accept types other than simple "float". This is
only strictly needed for the truncate operation, since otherwise
double rounding occurs and there's no way to represent the strict IEEE
conversion. However, for symmetry we allow larger types in the extend
too.

During legalization, we can expand an "fp16_to_double" operation into
two extends for convenience, but abort when the truncate isn't legal. A new
libcall is probably needed here.

Even after this commit, various target tweaks are needed to actually use the
extended intrinsics. I've put these into separate commits for clarity, so there
are no actual tests of f64 conversion here.

llvm-svn: 213248
2014-07-17 10:51:23 +00:00
Yi Kong
9b1652c5d0 Port memory barriers intrinsics to AArch64
Memory barrier __builtin_arm_[dmb, dsb, isb] intrinsics are required to
implement their corresponding ACLE and MSVC intrinsics.

This patch ports ARM dmb, dsb, isb intrinsic to AArch64.

Differential Revision: http://reviews.llvm.org/D4520

llvm-svn: 213247
2014-07-17 10:50:20 +00:00
Daniel Sanders
f49e5efb1a [mips] .reginfo is 8 byte aligned on N32.
Differential Revision: http://reviews.llvm.org/D4540

llvm-svn: 213246
2014-07-17 10:10:04 +00:00
Daniel Sanders
e88052c0c5 [mips] Correct ELF e_flags for the N32 ABI when using a mips-* triple rather than a mips64-* triple
Summary:
Generally speaking, mips-* vs mips64-* should not be used to make decisions
about the content or format of the ELF. This should be based on the ABI
and CPU in use. For example, `mips-linux-gnu-clang -mips64r2 -mabi=64`
should produce an ELF64 as should `mips64-linux-gnu-clang -mabi=64`.
Conversely, `mips64-linux-gnu-clang -mabi=n32` should produce an ELF32 as
should `mips-linux-gnu-clang -mips64r2 -mabi=n32`.

This patch fixes the e_flags but leaves the ELF32 vs ELF64 issue for now
since there is no apparent way to base this decision on the ABI and CPU.

Differential Revision: http://reviews.llvm.org/D4539

llvm-svn: 213244
2014-07-17 10:02:08 +00:00
Daniel Sanders
7121fa671b [mips] Correct .MIPS.abiflags for -mfpxx on MIPS32r6
Summary:
The cpr1_size field describes the minimum register width to run the program
rather than the size of the registers on the target. MIPS32r6 was acting
as if -mfp64 had been given because it starts off with 64-bit FPU registers.

Differential Revision: http://reviews.llvm.org/D4538

llvm-svn: 213243
2014-07-17 09:57:23 +00:00
Daniel Sanders
eea41061d0 [mips] Fix ELF e_flags related to -mabicalls and -mplt.
Summary:
These options are not implemented yet but we act as if they are always
given.

The integrated assembler is driven by the clang driver so the e_flag test
cases should match the e_flags emitted by GCC+GAS rather than GAS
by itself.

Differential Revision: http://reviews.llvm.org/D4536

llvm-svn: 213242
2014-07-17 09:52:56 +00:00
Yi Kong
d5d6470070 Fix the prefix for arm64 triple
Triple.cpp still returns "arm64" as the prefix for the arm64 triple, leaving Clang
unable to select the correct GCCBuiltin IR.

This patch changes the value to the correct prefix, "aarch64". A regression test
will be added in a follow-up patch.

Differential Revision: http://reviews.llvm.org/D4516

llvm-svn: 213240
2014-07-17 09:43:27 +00:00
Evgeniy Stepanov
5b945c9d7e [msan] Avoid redundant origin stores.
Origin is meaningless for fully initialized values. Avoid
storing origin for function arguments that are known to
be always initialized (i.e. shadow is a compile-time null
constant).

This is not about correctness, but purely an optimization.
Seems to affect compilation time of blacklisted functions
significantly.

llvm-svn: 213239
2014-07-17 09:10:37 +00:00
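The shape of the optimization described in the commit above, as a sketch rather than MSan's actual code: when the computed shadow is a compile-time all-zero constant, the value is provably initialized, its origin is meaningless, and the store can be skipped.

void storeOriginIfNeeded(IRBuilder<> &IRB, Value *Shadow, Value *Origin,
                         Value *OriginPtr) {
  if (auto *C = dyn_cast<Constant>(Shadow))
    if (C->isNullValue())
      return;                         // fully initialized: no origin to record
  IRB.CreateStore(Origin, OriginPtr); // otherwise store the origin as usual
}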
Suyog Sarda
e62b39fcd0 Move ashr optimization from InstCombineShift to InstSimplify.
Refactor code, no functionality change, test case moved from instcombine to instsimplify.

Differential Revision: http://reviews.llvm.org/D4102
 

llvm-svn: 213231
2014-07-17 06:28:15 +00:00
Matt Arsenault
45d9529fe9 Use range for
llvm-svn: 213230
2014-07-17 06:19:06 +00:00
Matt Arsenault
54393bb30e R600: Short circuit alloca check if address space isn't private.
Skip calling GetUnderlyingObject in cases where it obviously
isn't from an alloca. This should only be a compile time improvement.

llvm-svn: 213229
2014-07-17 06:13:41 +00:00
Suyog Sarda
fe6fdd5295 Fix Typo (first commit to test commit access)
llvm-svn: 213228
2014-07-17 06:09:34 +00:00
Eric Fiselier
95d51f6cee [lit] Add --show-unsupported flag to LIT
llvm-svn: 213227
2014-07-17 05:53:00 +00:00
Saleem Abdulrasool
b2cdb8f33d MC: make WinEH opcode an opaque value
This makes the opcode an opaque value (unsigned int) rather than the
enumeration.  This permits the use of target specific operands.

Split out the generic type into a MCWinEH header and add a supporting
MCWin64EH::Instruction to abstract out the selection of the opcode and
construction of the actual instruction.

llvm-svn: 213221
2014-07-17 03:08:50 +00:00
Hal Finkel
9779454568 Improve BasicAA CS-CS queries (redux)
This reverts, "r213024 - Revert r212572 "improve BasicAA CS-CS queries", it
causes PR20303." with a fix for the bug in pr20303. As it turned out, the
relevant code was both wrong and over-conservative (because, as with the code
it replaced, it would return the overall ModRef mask even if just Ref had been
implied by the argument aliasing results). Hopefully, this correctly fixes both
problems.

Thanks to Nick Lewycky for reducing the test case for PR20303 (which I've
cleaned up a little and added in DSE's test directory). The BasicAA test has
also been updated to check for this error.

Original commit message:

BasicAA contains knowledge of certain intrinsics, such as memcpy and memset,
and uses that information to form more-accurate answers to CallSite vs. Loc
ModRef queries. Unfortunately, it did not use this information when answering
CallSite vs. CallSite queries.

Generically, when an intrinsic takes one or more pointers and the intrinsic is
marked only to read/write from its arguments, the offset/size is unknown. As a
result, the generic code that answers CallSite vs. CallSite (and CallSite vs.
Loc) queries in AA uses UnknownSize when forming Locs from an intrinsic's
arguments. While BasicAA's CallSite vs. Loc override could use more-accurate
size information for some intrinsics, it did not do the same for CallSite vs.
CallSite queries.

This change refactors the intrinsic-specific logic in BasicAA into a generic AA
query function: getArgLocation, which is overridden by BasicAA to supply the
intrinsic-specific knowledge, and used by AA's generic implementation. This
allows the intrinsic-specific knowledge to be used by both CallSite vs. Loc and
CallSite vs. CallSite queries, and simplifies the BasicAA implementation.

Currently, only one function, Mac's memset_pattern16, is handled by BasicAA
(all the rest are intrinsics). As a side-effect of this refactoring, BasicAA's
getModRefBehavior override now also returns OnlyAccessesArgumentPointees for
this function (which is an improvement).

llvm-svn: 213219
2014-07-17 01:28:25 +00:00
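A hedged sketch of the refactoring's shape; the signature and names below are assumptions drawn from the description above, not the exact interface. The generic AA layer builds a Location per pointer argument with an unknown size, and BasicAA overrides the hook to supply precise sizes for calls it understands (memcpy, memset, memset_pattern16).

// Generic fallback in the AA base implementation (sketch):
AliasAnalysis::Location
AliasAnalysis::getArgLocation(ImmutableCallSite CS, unsigned ArgIdx,
                              ModRefResult &Mask) {
  // Without intrinsic-specific knowledge, the pointed-to extent is unknown.
  return Location(CS.getArgument(ArgIdx), UnknownSize);
}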
Jingyue Wu
7c4bea3e99 Partially revert r210444 due to performance regression
Summary:
Converting outermost zext(a) to sext(a) causes worse code when the
computation of zext(a) could be reused. For example, after converting

... = array[zext(a)]
... = array[zext(a) + 1]

to

... = array[sext(a)]
... = array[zext(a) + 1],

the program computes sext(a), which is actually unnecessary. I added one
test in split-gep-and-gvn.ll to illustrate this scenario.

Also, with r211281 and r211084, we annotate more "nuw" tags to
computation involving CUDA intrinsics such as threadIdx.x. These
annotations help with splitting GEP a lot, rendering the benefit we get
from this reverted optimization only marginal.

Test Plan: make check-all

Reviewers: eliben, meheff

Reviewed By: meheff

Subscribers: jholewinski, llvm-commits

Differential Revision: http://reviews.llvm.org/D4542

llvm-svn: 213209
2014-07-16 23:25:00 +00:00