mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-22 20:43:44 +02:00
Commit Graph

29366 Commits

Author SHA1 Message Date
Tim Northover
48ae22e14a ARM: support direct f16 <-> f64 conversions
ARMv8 has instructions to handle it, otherwise a libcall is needed.

llvm-svn: 213254
2014-07-17 11:27:04 +00:00
Tim Northover
21a41cb9a1 CodeGen: generate single libcall for fptrunc -> f16 operations.
Previously we asserted on this code. Currently compiler-rt doesn't
actually implement any of these new libcalls, but external help is
pretty much the only viable option for LLVM.

I've followed the much more generic "__truncST2" naming, as opposed to
the odd name for f32 -> f16 truncation. This can obviously be changed
later, or overridden by any targets that need to.
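
For illustration, a minimal IR sketch of the affected pattern (under the
"__truncST2" scheme above, a double -> half truncation would presumably become
a __truncdfhf2 call; the exact symbol name is an assumption here):

  define half @trunc_example(double %d) {
    %h = fptrunc double %d to half   ; no native support => single libcall
    ret half %h
  }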

llvm-svn: 213252
2014-07-17 11:12:12 +00:00
Tim Northover
81da81acc1 X86: support double extension of f16 type.
x86 has no native ability to extend an f16 to f64, but the same result
is obtained if we expand it into two separate extensions: f16 -> f32
-> f64.

Unfortunately the same is not true for truncate, so that still results
in a compilation failure.

llvm-svn: 213251
2014-07-17 11:04:04 +00:00
Tim Northover
eae1f1c8cc CodeGen: extend f16 conversions to permit types > float.
This makes the two intrinsics @llvm.convert.from.f16 and
@llvm.convert.to.f16 accept types other than simple "float". This is
only strictly needed for the truncate operation, since otherwise
double rounding occurs and there's no way to represent the strict IEEE
conversion. However, for symmetry we allow larger types in the extend
too.
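
A hedged IR sketch of the extended intrinsic (the ".fp16" spelling and ".f64"
overload suffix are assumptions, not taken verbatim from this commit):

  declare i16 @llvm.convert.to.fp16.f64(double)

  define i16 @trunc_f64_to_f16(double %d) {
    %h = call i16 @llvm.convert.to.fp16.f64(double %d)   ; single-step f64 -> f16
    ret i16 %h
  }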

During legalization, we can expand an "fp16_to_double" operation into
two extends for convenience, but abort when the truncate isn't legal. A new
libcall is probably needed here.

Even after this commit, various target tweaks are needed to actually use the
extended intrinsics. I've put these into separate commits for clarity, so there
are no actual tests of f64 conversion here.

llvm-svn: 213248
2014-07-17 10:51:23 +00:00
Yi Kong
9b1652c5d0 Port memory barriers intrinsics to AArch64
Memory barrier __builtin_arm_[dmb, dsb, isb] intrinsics are required to
implement their corresponding ACLE and MSVC intrinsics.

This patch ports the ARM dmb, dsb, and isb intrinsics to AArch64.
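
A minimal IR sketch of the AArch64 form (the intrinsic name and the 0xF "full
system" barrier argument are assumptions for illustration):

  declare void @llvm.aarch64.dmb(i32)

  define void @full_barrier() {
    call void @llvm.aarch64.dmb(i32 15)   ; dmb sy
    ret void
  }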

Differential Revision: http://reviews.llvm.org/D4520

llvm-svn: 213247
2014-07-17 10:50:20 +00:00
Daniel Sanders
f49e5efb1a [mips] .reginfo is 8 byte aligned on N32.
Differential Revision: http://reviews.llvm.org/D4540

llvm-svn: 213246
2014-07-17 10:10:04 +00:00
Daniel Sanders
e88052c0c5 [mips] Correct ELF e_flags for the N32 ABI when using a mips-* triple rather than a mips64-* triple
Summary:
Generally speaking, mips-* vs mips64-* should not be used to make decisions
about the content or format of the ELF. This should be based on the ABI
and CPU in use. For example, `mips-linux-gnu-clang -mips64r2 -mabi=64`
should produce an ELF64 as should `mips64-linux-gnu-clang -mabi=64`.
Conversely, `mips64-linux-gnu-clang -mabi=n32` should produce an ELF32 as
should `mips-linux-gnu-clang -mips64r2 -mabi=n32`.

This patch fixes the e_flags but leaves the ELF32 vs ELF64 issue for now
since there is no apparent way to base this decision on the ABI and CPU.

Differential Revision: http://reviews.llvm.org/D4539

llvm-svn: 213244
2014-07-17 10:02:08 +00:00
Daniel Sanders
7121fa671b [mips] Correct .MIPS.abiflags for -mfpxx on MIPS32r6
Summary:
The cpr1_size field describes the minimum register width to run the program
rather than the size of the registers on the target. MIPS32r6 was acting
as if -mfp64 had been given because it starts off with 64-bit FPU registers.

Differential Revision: http://reviews.llvm.org/D4538

llvm-svn: 213243
2014-07-17 09:57:23 +00:00
Daniel Sanders
eea41061d0 [mips] Fix ELF e_flags related to -mabicalls and -mplt.
Summary:
These options are not implemented yet but we act as if they are always
given.

The integrated assembler is driven by the clang driver so the e_flag test
cases should match the e_flags emitted by GCC+GAS rather than GAS
by itself.

Differential Revision: http://reviews.llvm.org/D4536

llvm-svn: 213242
2014-07-17 09:52:56 +00:00
Matt Arsenault
45d9529fe9 Use range for
llvm-svn: 213230
2014-07-17 06:19:06 +00:00
Matt Arsenault
54393bb30e R600: Short circuit alloca check if address space isn't private.
Skip calling GetUnderlyingObject in cases where it obviously
isn't from an alloca. This should only be a compile time improvement.

llvm-svn: 213229
2014-07-17 06:13:41 +00:00
Sanjay Patel
b3fb7dc171 Remove Atom references in description.
Any CPU can run this pass.

llvm-svn: 213190
2014-07-16 20:18:49 +00:00
Chris Bieneman
8124ab9a24 [RegisterCoalescer] Moving the RegisterCoalescer subtarget hook onto the TargetRegisterInfo instead of the TargetSubtargetInfo.
llvm-svn: 213188
2014-07-16 20:13:31 +00:00
Justin Holewinski
35f9408e7f [NVPTX] Honor alignment on vector loads/stores
We were not considering the stated alignment on vector loads/stores,
leading us to generate vector instructions even when we do not have
sufficient alignment.

Now, for IR like:

  %1 = load <4 x float>, <4 x float>* %ptr, align 4

we will generate correct, conservative PTX like:

  ld.f32 ... [%ptr]
  ld.f32 ... [%ptr+4]
  ld.f32 ... [%ptr+8]
  ld.f32 ... [%ptr+12]

Or if we have an alignment of 8 (for example), we can
generate code like:

  ld.v2.f32 ... [%ptr]
  ld.v2.f32 ... [%ptr+8]

llvm-svn: 213186
2014-07-16 19:45:35 +00:00
Chris Bieneman
aa19c0dccd Added documentation for SizeMultiplier in the ARM subtarget hook for register coalescing. Also fixed some 80 col violations.
No functional code changes.

llvm-svn: 213169
2014-07-16 16:27:31 +00:00
Justin Holewinski
84f0bca9c1 [NVPTX] Rename registers %fl -> %fd and %rl -> %rd
This matches the internal behavior of NVIDIA tools like libnvvm.

llvm-svn: 213168
2014-07-16 16:26:58 +00:00
Tim Northover
c6c02a43ba CodeGen: don't form illegal EXTLOAD operations.
It turns out that in most cases (the main exception being i1-related
types) once these operations are formed we cannot separate them and
the targets end up having to deal with them whether they want to or
not.

This is not a good situation, and a more reasonable default can be
formed by acknowledging this and having targets leave them as Legal.
Only x86 seems to be affected (other targets don't even try marking
the operation Expand).

Mostly there's no visible change here yet, but it will be useful to
have truly expanded EXTLOADS for MVT::f16 softening support.

llvm-svn: 213162
2014-07-16 15:37:24 +00:00
Daniel Sanders
512a21080d [mips][fp64a] Temporarily disable odd-numbered double-precision registers when using the FP64A ABI.
Summary:
A few instructions (mostly cvt.d.w and similar) are causing problems with
-mfp64 and -mno-odd-spreg and it looks like fixing it properly may
take several weeks. In the meantime, let's disable the odd-numbered
double-precision registers so that the generated code is at least valid.

The problem is that instructions like cvt.d.w read from the 32-bit low
subregister of a double-precision FPU register. This often leads the compiler
to insert moves that transfer a GPR32 to an FGR32 using mtc1. Such moves
violate the rules against 32-bit writes to odd-numbered FPU registers imposed
by -mno-odd-spreg. By disabling the odd-numbered double-precision registers, it
becomes impossible for the 32-bit low subregister to be odd-numbered.

This fixes numerous test-suite failures when compiling for the FP64A ABI
('-mfp64 -mno-odd-spreg'). There is no LLVM test case because it's difficult to
test that odd-numbered FPU registers are not allocatable. Instead, we depend on
the assembler (GAS and -fintegrated-as) raising errors when the rules are
violated.

Differential Revision: http://reviews.llvm.org/D4532

llvm-svn: 213160
2014-07-16 15:34:07 +00:00
Andrea Di Biagio
a161db3638 [X86] Add a check for 'isMOVHLPSMask' within method 'isShuffleMaskLegal'.
Before this change, method 'isShuffleMaskLegal' didn't know that shuffles
implementing a 'movhlps' operation were perfectly legal for SSE targets.

This patch adds the missing check for 'isMOVHLPSMask' inside method
'isShuffleMaskLegal' to fix the problem.

The reason why it is important to do this is because the DAGCombiner
conservatively avoids combining a pair of shuffles if the resulting shuffle
node has an illegal mask. Before this patch, shuffles with a MOVHLPS mask were
wrongly considered not to be legal. This was the root cause of some poor-code
generation bugs.
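
For reference, a shuffle implementing movhlps on <4 x float> has this shape
(a sketch based on the SSE definition, not code from the patch):

  define <4 x float> @movhlps(<4 x float> %a, <4 x float> %b) {
    ; result = { b[2], b[3], a[2], a[3] }
    %r = shufflevector <4 x float> %a, <4 x float> %b, <4 x i32> <i32 6, i32 7, i32 2, i32 3>
    ret <4 x float> %r
  }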

llvm-svn: 213137
2014-07-16 11:29:39 +00:00
Matt Arsenault
15eb0d54b0 R600/SI: Allow using f32 rcp / rsq when denormals not handled.
These are precise enough to use for OpenCL unless denormals
are handled.

llvm-svn: 213107
2014-07-15 23:50:10 +00:00
David Majnemer
53236d3873 X86: Simplify X86WindowsTargetObjectFile::getSectionForConstant
There exists a helper function to abstract away the various differences
between ConstantVector, ConstantDataVector, ConstantAggregateZero, etc.

Use it to simplify X86WindowsTargetObjectFile::getSectionForConstant.

llvm-svn: 213104
2014-07-15 23:01:10 +00:00
Sanjay Patel
2f0f025b2b Move Post RA Scheduling flag bit into SchedMachineModel
Refactoring; no functional changes intended

    Removed PostRAScheduler bits from subtargets (X86, ARM).
    Added PostRAScheduler bit to MCSchedModel class.
    This bit is set by a CPU's scheduling model (if it exists).
    Removed enablePostRAScheduler() function from TargetSubtargetInfo and subclasses.
    Fixed the existing enablePostMachineScheduler() method to use the MCSchedModel (was just returning false!).
    Added methods to TargetSubtargetInfo to allow overrides for AntiDepBreakMode, CriticalPathRCs, and OptLevel for PostRAScheduling.
    Added enablePostRAScheduler() function to PostRAScheduler class which queries the subtarget for the above values.
    Preserved existing scheduler behavior for ARM, MIPS, PPC, and X86: 
       a. ARM overrides the CPU's postRA settings by enabling postRA for any non-Thumb or Thumb2 subtarget. 
       b. MIPS overrides the CPU's postRA settings by enabling postRA for everything. 
       c. PPC overrides the CPU's postRA settings by enabling postRA for everything. 
       d. X86 is the only target that actually has postRA specified via sched model info.

Differential Revision: http://reviews.llvm.org/D4217

llvm-svn: 213101
2014-07-15 22:39:58 +00:00
Matt Arsenault
c093eee935 R600/SI: Fix select on i1
llvm-svn: 213096
2014-07-15 21:44:37 +00:00
Matt Arsenault
1ceb5e82c1 R600/SI: Implement less wrong f32 fdiv
Assuming single precision denormals and accurate sqrt/div are not
reported, this passes the OpenCL conformance test.

llvm-svn: 213089
2014-07-15 20:18:31 +00:00
Matt Arsenault
42709b4212 R600: Add predicate for UnsafeFPMath
llvm-svn: 213088
2014-07-15 20:18:24 +00:00
Matt Arsenault
b916da43e5 R600: Remove intrinsics that appear to be unused
llvm-svn: 213087
2014-07-15 20:10:27 +00:00
Chris Bieneman
d1b660f0a6 [RegisterCoalescer] Add new subtarget hook allowing targets to opt-out of coalescing.
The coalescer is very aggressive at propagating constraints on the register classes, and the register allocator doesn’t know how to split sub-registers later to recover. This patch provides an escape valve for targets that encounter this problem to limit coalescing.

This patch also implements such for ARM to lower register pressure when using lots of large register classes. This works around PR18825.

llvm-svn: 213078
2014-07-15 17:18:41 +00:00
Cameron McInally
e9e4e99ecf Revert r213070. It's breaking the build in MCELFStreamer::EmitInstToData(...).
llvm-svn: 213073
2014-07-15 16:24:24 +00:00
Jan Vesely
aa9875787e R600: Implement zero undef variants of ctlz/cttz
v2: use ffbh/l if available
v3: Rebase on top of Matt's SI patches

Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
Reviewed-by: Tom Stellard <tom@stellard.net>
llvm-svn: 213072
2014-07-15 15:51:09 +00:00
Daniel Sanders
2972f614a7 [mips] Correct .MIPS.abiflags fp_abi field for -mfpxx and without .module
Summary: Previously all the test cases set it after initialization with '.module fp=xx'.

Differential Revision: http://reviews.llvm.org/D4489

llvm-svn: 213071
2014-07-15 15:31:39 +00:00
Cameron McInally
6eb8b83c5d Add x86 patterns to match a specific add-with-carry.
llvm-svn: 213070
2014-07-15 15:03:32 +00:00
NAKAMURA Takumi
365309bb8b Prune Redundant libdeps in CMake's target_link_libraries and LLVMBuild.txt.
I checked this with Release+Asserts on x86_64-mingw32. Please restore partially if this were overkill.

llvm-svn: 213064
2014-07-15 11:37:03 +00:00
Andrea Di Biagio
d6b74facab Silence a warning in conditional expression.
Fixes a gcc warning caused by a typo. A redundant assignment operation was
accidentally used as the third operand of a conditional expression.
No functional change intended.

llvm-svn: 213061
2014-07-15 10:53:44 +00:00
Tim Northover
93f17f8aee AArch64: fall back to generic code for out of range extract/insert.
rdar://problem/17624784

llvm-svn: 213059
2014-07-15 10:00:26 +00:00
David Majnemer
88aea23021 Fix typo in comment
No functionality changed.

llvm-svn: 213052
2014-07-15 07:11:32 +00:00
Juergen Ributzka
3182b7c402 [FastISel][X86] Remove no longer needed functions.
llvm-svn: 213051
2014-07-15 06:35:53 +00:00
Juergen Ributzka
dcb0648576 [FastISel][X86] Implement the FastLowerIntrinsicCall hook.
Rename X86VisitIntrinsicCall -> FastLowerIntrinsicCall, which effectively
implements the target hook.

llvm-svn: 213050
2014-07-15 06:35:50 +00:00
Juergen Ributzka
e29c2182f5 [FastISel][X86] Implement the FastLowerCall hook.
This implements the FastLowerCall hook, which is based on the DoSelectCall
function. The implementation is very similar, but the target-independent call
lowering part has been factored out.

This should also enable patchpoint intrinsic lowering for FastISel on X86.

Related to <rdar://problem/17427052>.

llvm-svn: 213049
2014-07-15 06:35:47 +00:00
Juergen Ributzka
a9f192e6fd Revert "[FastISel][X86] Remove no longer needed functions."
Revert "[FastISel][X86] Implement the FastLowerIntrinsicCall hook."
Revert "[FastISel][X86] Implement the FastLowerCall hook."

This reverts commit r213035, r213036, and r213037 to make the
buildbots happy again.

llvm-svn: 213048
2014-07-15 05:23:40 +00:00
David Majnemer
7f4b6696c1 CodeGen: Handle ConstantVector and undef in WinCOFF constant pools
The constant pool entry code for WinCOFF assumed that vector constants
would be formed using ConstantDataVector, it did not expect to see a
ConstantVector.  Furthermore, it did not expect undef as one of the
elements of the vector.

ConstantVectors should be handled like ConstantDataVectors, treat Undef
as zero.

llvm-svn: 213038
2014-07-15 02:34:12 +00:00
Juergen Ributzka
a13b19d75f [FastISel][X86] Remove no longer needed functions.
llvm-svn: 213037
2014-07-15 02:22:56 +00:00
Juergen Ributzka
e3f78df3fa [FastISel][X86] Implement the FastLowerIntrinsicCall hook.
Rename X86VisitIntrinsicCall -> FastLowerIntrinsicCall, which effectively
implements the target hook.

llvm-svn: 213036
2014-07-15 02:22:53 +00:00
Juergen Ributzka
a5365414c7 [FastISel][X86] Implement the FastLowerCall hook.
This implements the FastLowerCall hook, which is based on the DoSelectCall
function. The implementation is very similar, but the target-independent call
lowering part has been factored out.

This should also enable patchpoint intrinsic lowering for FastISel on X86.

Related to <rdar://problem/17427052>.

llvm-svn: 213035
2014-07-15 02:22:49 +00:00
Matt Arsenault
211ccabffb R600: Add dag combine for copy of an illegal type.
This helps avoid redundant instructions to unpack, and repack
the vectors. Ideally we could recognize that pattern and eliminate
it. Currently v4i8 and other small element type vectors are scalarized,
so this has the added bonus of avoiding that.

llvm-svn: 213031
2014-07-15 02:06:31 +00:00
Matt Arsenault
24911cb984 R600: Add denormal handling subtarget features.
llvm-svn: 213018
2014-07-14 23:40:49 +00:00
Matt Arsenault
62262e12fa R600/SI: Default to no single precision denormals.
llvm-svn: 213017
2014-07-14 23:40:43 +00:00
Adam Nemet
81456265bf [X86] Specify all TSFlags bit-offsets symbolically
No functional change.

The offsets for the other bitfields are specified symbolically.  I need to
increase the size for one of the earlier fields which is easier after this
cleanup.

Why these bits are relative to VEXShift is a bit strange but that is for
another cleanup.

I made sure that the values for the enums are unchanged after this change.

llvm-svn: 213011
2014-07-14 23:18:39 +00:00
David Majnemer
94c981273e CodeGen: Stick constant pool entries in COMDAT sections for WinCOFF
COFF lacks a feature that other object file formats support: mergeable
sections.

To work around this, MSVC sticks constant pool entries in special COMDAT
sections so that each constant is in its own section.  This permits
unused constants to be dropped and it also allows duplicate constants in
different translation units to get merged together.

This fixes PR20262.

Differential Revision: http://reviews.llvm.org/D4482

llvm-svn: 213006
2014-07-14 22:57:27 +00:00
Saleem Abdulrasool
6f4137a9fc X86: correct 64-bit atomics on 32-bit
We would emit a libcall for a 64-bit atomic on x86 after SVN r212119.  This was
due to the misuse of hasCmpxchg16 to indicate if cmpxchg8b was supported on a
32-bit target.  They were added at different times and would result in the
border condition being mishandled.

This fixes the border case to emit the cmpxchg8b instruction for 64-bit atomic
operations on x86 at the cost of restoring a long-standing bug in the codegen.
We emit a cmpxchg8b on all x86 targets even where the CPU does not support this
instruction (pre-Pentium CPUs).  Although this bug should be fixed, this was
present prior to SVN r212119 and this change, so this is not really introducing
a regression.
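
In IR terms, the affected pattern is a plain 64-bit atomic operation compiled
for a 32-bit x86 target, e.g. (a sketch):

  define i64 @inc(i64* %p) {
    ; with this change, lowered via a cmpxchg8b loop instead of a libcall
    %old = atomicrmw add i64* %p, i64 1 seq_cst
    ret i64 %old
  }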

llvm-svn: 212956
2014-07-14 16:28:13 +00:00
Tim Northover
4ef7afc394 X86: remove temporary atomicrmw used during lowering.
We construct a temporary "atomicrmw xchg" instruction when lowering atomic
stores for widths that aren't supported natively. This isn't on the top-level
worklist though, so it won't be removed automatically and we have to do it
ourselves once that itself has been lowered.

Thanks Saleem for pointing this out!
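
For context, the lowering in question starts from an atomic store wider than
the target supports natively, e.g. (a sketch for 32-bit x86):

  define void @store64(i64* %p, i64 %v) {
    ; lowered via a temporary "atomicrmw xchg" that must itself be cleaned up
    store atomic i64 %v, i64* %p seq_cst, align 8
    ret void
  }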

llvm-svn: 212948
2014-07-14 15:31:13 +00:00
Daniel Sanders
cff61819ea Re-commit: [mips] Correct section alignments and EntrySizes for .bss, .text, .data, .reginfo, .MIPS.options, and .MIPS.abiflags
The lld tests will temporarily fail again but Simon Atanasyan will commit a fix for those shortly.

llvm-svn: 212946
2014-07-14 15:05:51 +00:00
Daniel Sanders
a57dbb6f1d Revert: [mips] Correct section alignments and EntrySizes for .bss, .text, .data, .reginfo, .MIPS.options, and .MIPS.abiflags
This commit causes multiple lld tests to fail. Reverting while I investigate the issue.

llvm-svn: 212945
2014-07-14 14:43:45 +00:00
Daniel Sanders
16ca05d23a [mips] Correct section alignments and EntrySizes for .bss, .text, .data, .reginfo, .MIPS.options, and .MIPS.abiflags
Summary:
.bss, .text, and .data are at least 16-byte aligned.
.reginfo is 4-byte aligned and has a 24-byte EntrySize.
.MIPS.abiflags has a 24-byte EntrySize.
.MIPS.options is 8-byte aligned and has 1-byte EntrySize.

Using a 1-byte EntrySize for .MIPS.options seems strange because the
records are neither 1-byte long nor fixed-length but this matches the value
that GAS emits.

Differential Revision: http://reviews.llvm.org/D4487

llvm-svn: 212939
2014-07-14 14:02:14 +00:00
Daniel Sanders
3af4859657 [mips] For the FP64A ABI, odd-numbered double-precision moves must not use mtc1/mfc1.
Summary:
This is because, under FP64A, the hardware will redirect 32-bit reads/writes
from/to odd-numbered registers to the upper 32-bits of the corresponding
even register. In effect, simulating FR=0 mode when FR=0 mode is not
available.

Unfortunately, we have to make the decision to avoid mfc1/mtc1 before
register allocation so we currently do this for even registers too.

FPXX has a similar requirement on 32-bit architectures that lack
mfhc1/mthc1 so this patch also handles the affected moves from the FPU for
FPXX too. Moves to the FPU were supported by an earlier commit.

Differential Revision: http://reviews.llvm.org/D4484

llvm-svn: 212938
2014-07-14 13:08:14 +00:00
Daniel Sanders
fc1f878e70 [mips] Use MFHC1 when it is available (MIPS32r2 and later) for both FP32 and FP64 moves
Summary:
This is similar to r210771 which did the same thing for MTHC1.

Also corrected MTHC1_D32 and MTHC1_D64 which used AFGR64 and FGR64 on the
wrong definitions.

Differential Revision: http://reviews.llvm.org/D4483

llvm-svn: 212936
2014-07-14 12:41:31 +00:00
Tim Northover
4ac35c9d7b AArch64: remove unnecessary pseudo-instruction.
Sufficiently twisted use of TableGen lets us write patterns directly for f16
(as an i16 promoted to i32) -> f32 conversion.

llvm-svn: 212933
2014-07-14 11:16:02 +00:00
Daniel Sanders
8ca7923a7c [mips] Correct the AFL_FLAGS1_ODDSPREG flag in .MIPS.abiflags when no '.module oddspreg' is used
Differential Revision: http://reviews.llvm.org/D4486

llvm-svn: 212932
2014-07-14 10:26:15 +00:00
Sasa Stankovic
d8104abd36 [mips] Expand BuildPairF64 to a spill and reload when the O32 FPXX ABI is
enabled and mthc1 and dmtc1 are not available (e.g. on MIPS32r1)

This prevents the upper 32-bits of a double precision value from being moved to
the FPU with mtc1 to an odd-numbered FPU register. This is necessary to ensure
that the code generated executes correctly regardless of the current FPU mode.

MIPS32r2 and above continue to use mtc1/mthc1, while MIPS-IV and above continue
to use dmtc1.

Differential Revision: http://reviews.llvm.org/D4465

llvm-svn: 212930
2014-07-14 09:40:29 +00:00
NAKAMURA Takumi
c5a5ec5528 NVPTX/LLVMBuild.txt: Add "Scalar" to required_libraries. It is really referenced.
llvm-svn: 212918
2014-07-14 02:52:19 +00:00
Saleem Abdulrasool
9786281f88 MC: make DWARF and Windows unwinding handling more similar
Rename member variables and functions for the MCStreamer for DWARF-like
unwinding management.  Rename the Windows ones as well and make the naming and
handling similar across the two.  No functional change intended.

llvm-svn: 212912
2014-07-13 19:03:36 +00:00
Matt Arsenault
fcf9b46646 Remove unused include
llvm-svn: 212898
2014-07-13 03:08:59 +00:00
Matt Arsenault
66d430cbc3 R600: Use range for and fix missing consts.
llvm-svn: 212897
2014-07-13 03:06:43 +00:00
Matt Arsenault
fdf94244f2 R600: Make ShaderType private
llvm-svn: 212896
2014-07-13 03:06:39 +00:00
Matt Arsenault
235492a3e5 R600: Add option to disable promote alloca
This can make writing some tests harder, so add a flag
to disable it.

llvm-svn: 212893
2014-07-13 02:08:26 +00:00
Saleem Abdulrasool
00c379d824 AArch64: add support for llvm.aarch64.hint intrinsic
This adds a llvm.aarch64.hint intrinsic to mirror the llvm.arm.hint in order to
support the various hint intrinsic functions in the ACLE.

Add an optional pattern field that permits the subclass to specify the pattern
that matches the selection.  The intrinsic pattern is set as mayLoad, mayStore,
so overload the value for the definition of the hint instruction.
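
A minimal IR sketch of the new intrinsic (the immediate shown is just an
illustrative value):

  declare void @llvm.aarch64.hint(i32)

  define void @yield() {
    call void @llvm.aarch64.hint(i32 1)   ; hint #1 == yield
    ret void
  }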

llvm-svn: 212883
2014-07-12 21:20:49 +00:00
Juergen Ributzka
0a26fad15e Revert "[FastISel][X86] Implement the FastLowerIntrinsicCall hook."
This reverts commit r212851, because it broke the memset lowering.

llvm-svn: 212855
2014-07-11 23:10:08 +00:00
Juergen Ributzka
b8ac8c5034 [FastISel][X86] Implement the FastLowerIntrinsicCall hook.
Rename X86VisitIntrinsicCall -> FastLowerIntrinsicCall, which effectively
implements the target hook.

llvm-svn: 212851
2014-07-11 22:37:43 +00:00
Ulrich Weigand
1aae06e395 [PowerPC] Fix invalid displacement created by LocalStackAlloc
This commit fixes a bug in PPCRegisterInfo::isFrameOffsetLegal that
could result in the LocalStackAlloc pass creating an MI instruction
out-of-range displacement:
        %vreg17<def> = LD 33184, %vreg31; mem:LD8[%g](align=32)
        %G8RC:%vreg17 G8RC_and_G8RC_NOX0:%vreg31
(In final assembler output the top bits are stripped off, resulting
in a negative offset loading from below the stack pointer.)

Common code expects the isFrameOffsetLegal routine to verify whether
adding a given offset to the offset already present in the instruction
results in a valid displacement.  However, on PowerPC the routine
did not take the already present instruction offset into account.

This commit fixes isFrameOffsetLegal to add the instruction offset,
and updates a local caller (needsFrameBaseReg) to no longer add the
instruction offset itself before calling isFrameOffsetLegal.

Reviewed by Hal Finkel.

llvm-svn: 212832
2014-07-11 17:19:31 +00:00
Marek Olsak
7757b563f1 R600/SI: Use i32 vectors for resources and samplers
This affects new intrinsics only.

What surprises me is that v32i8 still works.

llvm-svn: 212831
2014-07-11 17:11:52 +00:00
Marek Olsak
6db1789f95 R600/SI: add sample and image intrinsics exposing all instruction fields
We need the intrinsics with offsets, so why not just add them all.
The R128 parameter will also be useful for reducing SGPR usage.
GL_ARB_image_load_store also adds some image GLSL modifiers like "coherent",
so Mesa will probably translate those to slc, glc, etc.

When LLVM 3.5 is released, I'll switch Mesa to these new intrinsics.

llvm-svn: 212830
2014-07-11 17:11:46 +00:00
Marek Olsak
5ce87c6a53 R600/SI: fix shadow mapping for 1D and 2D array textures
It was conflicting with def TEX_SHADOW_ARRAY, which also handles them.

Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
llvm-svn: 212829
2014-07-11 17:11:39 +00:00
Oliver Stannard
6090374181 ARM: Allow __fp16 as a function arg or return type for AArch64
ACLE 2.0 allows __fp16 to be used as a function argument or return
type. This enables this for AArch64.

llvm-svn: 212812
2014-07-11 13:33:46 +00:00
Quentin Colombet
854e94b5f3 [X86] Fix the inversion of low and high bits for the lowering of MUL_LOHI.
Also add a few comments.

<rdar://problem/17581756>

llvm-svn: 212808
2014-07-11 12:08:23 +00:00
Adam Nemet
1cd3c36d60 [X86] AVX512: Improve readability of isCDisp8
No functional change.  As I was trying to understand this function, I found
that variables were reused with confusing names and the broadcast case was a
bit too implicit.  Hopefully, this is an improvement.

llvm-svn: 212795
2014-07-11 05:23:25 +00:00
Adam Nemet
d1760d9cd7 [X86] AVX512: Simplify logic in isCDisp8
It was computing the VL/n case as:
  MemObjSize = VectorByteSize / ElemByteSize / Divider * ElemByteSize

ElemByteSize not only falls out but VectorByteSize/Divider now actually
matches the definition of VL/n.
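
For instance, assuming a 64-byte (512-bit) vector, 4-byte elements, and
Divider = 4, the old form gives 64 / 4 / 4 * 4 = 16 bytes and the simplified
form gives 64 / 4 = 16 bytes, i.e. exactly VL/n.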

Also some formatting fixes.

llvm-svn: 212794
2014-07-11 05:23:12 +00:00
Jan Vesely
f405b95fd6 R600: Implement float to long/ulong
Use alg. from LegalizeDAG.cpp
Move Expand setting to SIISelLowering

v2: Extend existing tests instead of creating new ones
v3: use separate LowerFPTOSINT function
v4: use TargetLowering::expandFP_TO_SINT
    add comment about using FP_TO_SINT for uints

Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
Reviewed-by: Tom Stellard <tom@stellard.net>
llvm-svn: 212773
2014-07-10 22:40:21 +00:00
Brad Smith
6d7f7f9d67 Use the integrated assembler by default on OpenBSD.
llvm-svn: 212771
2014-07-10 22:37:28 +00:00
Zoran Jovanovic
45b7aee40f [mips] Emit two CFI offset directives per double precision SDC1/LDC1
instead of just one for FR=1 registers
Differential Revision: http://reviews.llvm.org/D4310

llvm-svn: 212769
2014-07-10 22:23:30 +00:00
Akira Hatanaka
fc40b8c4ea [X86] Mark pseudo instruction TEST8ri_NOREX as hasSideEffects=0.
Also, add a case clause in X86InstrInfo::shouldScheduleAdjacent to enable
macro-fusion.

<rdar://problem/15680770>

llvm-svn: 212747
2014-07-10 18:00:53 +00:00
Eric Christopher
3599325ed0 Make it possible for the Subtarget to change between function
passes in the mips back end. This, unfortunately, required a
bit of churn in the various predicates to use a pointer rather
than a reference.

llvm-svn: 212744
2014-07-10 17:26:51 +00:00
David Majnemer
0202b3fa67 Mips: Silence a -Wcovered-switch-default
Remove a default label which covered no enumerators, replace it with a
llvm_unreachable.

No functionality changed.

llvm-svn: 212729
2014-07-10 16:04:04 +00:00
Zoran Jovanovic
87a12de86d [mips] Added FPXX modeless calling convention.
Differential Revision: http://reviews.llvm.org/D4293

llvm-svn: 212726
2014-07-10 15:36:12 +00:00
Arnaud A. de Grandmaison
e705812564 [AArch64] Add logical alias instructions to MC AsmParser
This patch teaches the AsmParser to accept some logical+immediate
instructions and convert them as shown:

  bic  Rd, Rn, #imm  ->  and Rd, Rn, #~imm
  bics Rd, Rn, #imm  ->  ands Rd, Rn, #~imm
  orn  Rd, Rn, #imm  ->  orr Rd, Rn, #~imm
  eon  Rd, Rn, #imm  ->  eor Rd, Rn, #~imm

Those instructions are an alternate syntax available to assembly coders,
and are needed in order to support code already compiling with some other
assemblers. For example, the bic construct is used by the linux kernel.

llvm-svn: 212722
2014-07-10 15:12:26 +00:00
Tim Northover
3a26804901 AArch64: correctly fast-isel i8 & i16 multiplies
We were asking for a register for type i8 or i16 which caused an assert.

rdar://problem/17620015

llvm-svn: 212718
2014-07-10 14:18:46 +00:00
Daniel Sanders
aa92e911ca [mips] Add support for -modd-spreg/-mno-odd-spreg
Summary:
When -mno-odd-spreg is in effect, 32-bit floating point values are not
permitted in odd FPU registers. The option also prohibits 32-bit and 64-bit
floating point comparison results from being written to odd registers.

This option has three purposes:
* It allows support for certain MIPS implementations such as loongson-3a that
  do not allow the use of odd registers for single precision arithmetic.
* When using -mfpxx, -mno-odd-spreg is the default and this allows us to
  statically check that code is compliant with the O32 FPXX ABI since mtc1/mfc1
  instructions to/from odd registers are guaranteed not to appear for any
  reason. Once this has been established, the user can then re-enable
  -modd-spreg to regain the use of all 32 single-precision registers.
* When using -mfp64 and -mno-odd-spreg together, an O32 extension named
  O32 FP64A is used as the ABI. This is intended to provide almost all
  functionality of an FR=1 processor but can also be executed on a FR=0 core
  with the assistance of a hardware compatibility mode which emulates FR=0
  behaviour on an FR=1 processor.

* Added '.module oddspreg' and '.module nooddspreg' each of which update
  the .MIPS.abiflags section appropriately
* Moved setFpABI() call inside emitDirectiveModuleFP() so that the caller
  doesn't have to remember to do it.
* MipsABIFlags now calculates the flags1 and flags2 member on demand rather
  than trying to maintain them in the same format they will be emitted in.

There is one portion of the -mfp64 and -mno-odd-spreg combination that is not
implemented yet. Moves to/from odd-numbered double-precision registers must not
use mtc1. I will fix this in a follow-up.

Differential Revision: http://reviews.llvm.org/D4383

llvm-svn: 212717
2014-07-10 13:38:23 +00:00
Zinovy Nis
34ef30df00 [x32] Add AsmBackend for X32 which uses ELF32 with x86_64 (the author is Pavel Chupin).
This is the minimal change to the backend required to have "hello world" compiled and working on the x32 target (x86_64-linux-gnux32). More patches for x32 will follow.

Differential Revision: http://reviews.llvm.org/D4181

llvm-svn: 212716
2014-07-10 13:03:26 +00:00
Richard Sandiford
2bd4df26d9 [SystemZ] Use SystemZCallingConv.td to define callee-saved registers
Just a clean-up.  No behavioral change intended.

llvm-svn: 212711
2014-07-10 11:44:37 +00:00
Richard Sandiford
9bea71208f [SystemZ] Tweak instruction format classifications
There's no real need to have Shift as a separate format type from Binary.
The comments for other format types were too specific and in some cases
no longer accurate.

Just a clean-up, no behavioral change intended.

llvm-svn: 212707
2014-07-10 11:29:23 +00:00
Chandler Carruth
b0b7a4016c [x86] Add another combine that is particularly useful for the new vector
shuffle lowering: match shuffle patterns equivalent to an unpcklwd or
unpckhwd instruction.

This allows us to use generic lowering code for v8i16 shuffles and match
the unpack pattern late.

llvm-svn: 212705
2014-07-10 11:09:29 +00:00
Richard Sandiford
e97885896b [SystemZ] Add MC support for LEDBRA, LEXBRA and LDXBRA
These instructions aren't used for codegen since the original L*DB instructions
are suitable for fround.

llvm-svn: 212703
2014-07-10 11:00:55 +00:00
Richard Sandiford
6bdbd7dbea [SystemZ] Avoid using i8 constants for immediate fields
Immediate fields that have no natural MVT type tended to use i8 if the
field was small enough.  This was a bit confusing since i8 isn't a legal
type for the target.  Fields for short immediates in a 32-bit or 64-bit
operation use i32 or i64 instead, so it would be better to do the same
for all fields.

No behavioral change intended.

llvm-svn: 212702
2014-07-10 10:52:51 +00:00
Richard Sandiford
c340b63db7 [SystemZ] Fix FPR dwarf numbering
The dwarf FPR numbers are supposed to have the order F0, F2, F4, F6,
F1, F3, F5, F7, F8, etc., which matches the pairing of registers for
long doubles.  E.g. a long double stored in F0 is paired with F2.

llvm-svn: 212701
2014-07-10 10:45:11 +00:00
Daniel Sanders
a59d10a761 Make it possible for ints/floats to return different values from getBooleanContents()
Summary:
On MIPS32r6/MIPS64r6, floating point comparisons return 0 or -1 but integer
comparisons return 0 or 1.

Updated the various uses of getBooleanContents. Two simplifications had to be
disabled when float and int boolean contents differ:
- ScalarizeVecRes_VSELECT except when the kind of boolean contents is trivially
  discoverable (i.e. when the condition of the VSELECT is a SETCC node).
- visitVSELECT (select C, 0, 1) -> (xor C, 1).
  Come to think of it, this one could test for the common case of 'C'
  being a SETCC too.

Preserved existing behaviour for all other targets and updated the affected
MIPS32r6/MIPS64r6 tests. This also fixes the pi benchmark where the 'low'
variable was counting in the wrong direction because it thought it could simply
add the result of the comparison.

Reviewers: hfinkel

Reviewed By: hfinkel

Subscribers: hfinkel, jholewinski, mcrosier, llvm-commits

Differential Revision: http://reviews.llvm.org/D4389

llvm-svn: 212697
2014-07-10 10:18:12 +00:00
Chandler Carruth
ee588e17c9 [x86] Expand the target DAG combining for PSHUFD nodes to be able to
combine into half-shuffles through unpack instructions that expand the
half to a whole vector without messing with the dword lanes.

This fixes some redundant instructions in splat-like lowerings for
v16i8, which are now getting to be *really* nice.

llvm-svn: 212695
2014-07-10 09:57:36 +00:00
Chandler Carruth
7c6ee1b539 [x86] Tweak the v16i8 single input special case lowering for shuffles
that splat i8s into i16s.

Previously, we would try much too hard to arrange a sequence of i8s in
one half of the input such that we could unpack them into i16s and
shuffle those into place. This isn't always going to be a cheaper i8
shuffle than our other strategies. The case where it is always going to
be cheaper is when we can arrange all the necessary inputs into one half
using just i16 shuffles. It happens that viewing the problem this way
also makes it much easier to produce an efficient set of shuffles to
move the inputs into one half and then unpack them.

With this, our splat code gets one step closer to being not terrible
with the new experimental lowering strategy. It also exposes two
combines missing which I will add next.

llvm-svn: 212692
2014-07-10 09:16:40 +00:00
Chandler Carruth
7132367354 [x86] Initial improvements to the new shuffle lowering for v16i8
shuffles specifically for cases where a small subset of the elements in
the input vector are actually used.

This is specifically targetted at improving the shuffles generated for
trunc operations, but also helps out splat-like operations.

There is still some really low-hanging fruit here that I want to address
but this is a huge step in the right direction.

llvm-svn: 212680
2014-07-10 04:34:06 +00:00
Matt Arsenault
834bcebaa6 R600/SI: Add support for llvm.convert.{to|from}.fp16
llvm-svn: 212676
2014-07-10 03:22:20 +00:00
Chandler Carruth
17aad54271 [x86] Refactor some of the new code for lowering v16i8 shuffles to
remove duplication and make it easier to select different strategies.

No functionality changed.

llvm-svn: 212674
2014-07-10 02:24:26 +00:00
Chandler Carruth
86e8e81be5 [SDAG] Make the new zext-vector-inreg node default to expand so targets
don't need to set it manually.

This is based on feedback from Tom who pointed out that if every target
needs to handle this we need to reach out to those maintainers. In fact,
it doesn't make sense to duplicate everything when anything other than
expand seems unlikely at this stage.

llvm-svn: 212661
2014-07-09 22:53:04 +00:00
Jim Grosbach
10098b8e10 AArch64: Better codegen for storing to __fp16.
Storing will generally be immediately preceded by rounding from an f32
or f64, so make sure to match those patterns directly to convert into the
FPR16 register class rather than going through the integer GPRs.

This also eliminates an extra step in the convert-from-f64 path
which was first converting to f32 and then to f16 from there.

rdar://17594379

llvm-svn: 212638
2014-07-09 18:55:52 +00:00
Benjamin Kramer
e971397307 TargetRegisterInfo: Remove function that fell out of use years ago.
llvm-svn: 212636
2014-07-09 18:53:57 +00:00
Adam Nemet
46d0e99fe3 [X86] AVX512: Enable it in the Loop Vectorizer
This lets us experiment with 512-bit vectorization without passing
force-vector-width manually.

The code generated for a simple integer memset loop is properly vectorized.
Disassembly is still broken for it though :(.

llvm-svn: 212634
2014-07-09 18:22:33 +00:00
Louis Gerbarg
53a9c08b97 Make AArch64FastISel::EmitIntExt explicitly check its source and destination types
This is a follow up to r212492. There should be no functional difference, but
this patch makes it clear that SrcVT must be an i1/i8/i16/i32 and DestVT must be
an i8/i16/i32/i64.

rdar://17516686

llvm-svn: 212633
2014-07-09 17:54:32 +00:00
Benjamin Kramer
ba4b6af99b X86: When lowering v8i32 himuls use the correct shuffle masks for AVX2.
Turns out my trick of using the same masks for SSE4.1 and AVX2 didn't work out
as we have to blend two vectors. While there, remove unnecessary cross-lane moves
from the shuffles so the backend can lower it to palignr instead of vperm.

Fixes PR20118, a miscompilation of vector sdiv by constant on AVX2.

llvm-svn: 212611
2014-07-09 11:12:39 +00:00
Chandler Carruth
eb00ef26a3 [x86] Add a ZERO_EXTEND_VECTOR_INREG DAG node and use it when widening
vector types to be legal and a ZERO_EXTEND node is encountered.

When we use widening to legalize vector types, extend nodes are a real
challenge. Either the input or output is likely to be legal, but in many
cases not both. As a consequence, we don't really have any way to
represent this situation and the prior code in the widening legalization
framework would just scalarize the extend operation completely.

This patch introduces a new DAG node to represent doing a zero extend of
a vector "in register". The core of the idea is to allow legal but
different vector types in the input and output. The output vector must
have fewer lanes but wider elements. The operation is defined to zero
extend the low elements of the input to the size of the output elements,
and drop all of the high elements which don't have a corresponding lane
in the output vector.

It also includes generic expansion of this node in terms of blending
a zero vector into the high elements of the vector and bitcasting
across. This in turn yields extremely nice code for x86 SSE2 when we use
the new widening legalization logic in conjunction with the new shuffle
lowering logic.
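
As a concrete little-endian sketch of that expansion, zero-extending the two
low i32 lanes of a v4i32 into a v2i64 can be written as a blend with zero plus
a bitcast:

  define <2 x i64> @zext_inreg(<4 x i32> %v) {
    ; interleave with zeros: <v0, 0, v1, 0>, then reinterpret as two i64s
    %blend = shufflevector <4 x i32> %v, <4 x i32> zeroinitializer, <4 x i32> <i32 0, i32 4, i32 1, i32 5>
    %res = bitcast <4 x i32> %blend to <2 x i64>
    ret <2 x i64> %res
  }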

There is still more to do here. We need to support sign extension, any
extension, and potentially int-to-float conversions. My current plan is
to continue using similar synthetic nodes to model each of these
transitions with generic lowering code for each one.

However, with this patch LLVM already reaches performance parity with
GCC for the core C loops of the x264 code (assuming you disable the
hand-written assembly versions) when compiling for SSE2 and SSE3
architectures and enabling the new widening and lowering logic for
vectors.

Differential Revision: http://reviews.llvm.org/D4405

llvm-svn: 212610
2014-07-09 10:58:18 +00:00
Daniel Sanders
80d45ce1e9 [mips][mips64r6] Correct select patterns that have the condition or true/false values backwards
Summary: This bug caused SingleSource/Regression/C/uint64_to_float and SingleSource/UnitTests/2002-05-02-CastTest3 to fail (among others).

Differential Revision: http://reviews.llvm.org/D4388

llvm-svn: 212608
2014-07-09 10:47:26 +00:00
Daniel Sanders
f8ec2a1561 [mips][mips64r6] Correct cond names in the cmp.cond.[ds] instructions
Summary:
It seems we accidentally read the wrong column of the table in the MIPS64r6 spec
and used the names for c.cond.fmt instead of cmp.cond.fmt.

Differential Revision: http://reviews.llvm.org/D4387

llvm-svn: 212607
2014-07-09 10:40:20 +00:00
Chandler Carruth
3cd7a03de4 [x86] Initialize a pointer to null to fix a bug in r212602.
This should restore GCC hosts (which happen to put the bad stuff into
the pointer) and MSan, etc.

llvm-svn: 212606
2014-07-09 10:36:42 +00:00
Daniel Sanders
847346c646 [mips][mips64r6] Use JALR for indirect branches instead of JR (which is not available on MIPS32r6/MIPS64r6)
Summary:
This completes the change to use JALR instead of JR on MIPS32r6/MIPS64r6.

Reviewers: jkolek, vmedic, zoran.jovanovic, dsanders

Reviewed By: dsanders

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D4269

llvm-svn: 212605
2014-07-09 10:21:59 +00:00
Daniel Sanders
471e2a9293 [mips][mips64r6] Use JALR for returns instead of JR (which is not available on MIPS32r6/MIPS64r6)
Summary:
RET and RET_MM have been replaced by a pseudo named PseudoReturn.
In addition a version with a 64-bit GPR named PseudoReturn64 has been
added.

Instruction selection for a return matches RetRA, which is expanded post
register allocation to PseudoReturn/PseudoReturn64. During MipsAsmPrinter,
this PseudoReturn/PseudoReturn64 are emitted as:
- (JALR64 $zero, $rs) on MIPS64r6
- (JALR $zero, $rs) on MIPS32r6
- (JR_MM $rs) on microMIPS
- (JR $rs) otherwise

On MIPS32r6/MIPS64r6, 'jr $rs' is an alias for 'jalr $zero, $rs'. To aid
development and review (specifically, to ensure all cases of jr are
updated), these aliases are temporarily named 'r6.jr' instead of 'jr'.
A follow up patch will change them back to the correct mnemonic.

Added (JALR $zero, $rs) to MipsNaClELFStreamer's definition of an indirect
jump, and removed it from its definition of a call.
Note: I haven't accounted for MIPS64 in MipsNaClELFStreamer since it
doesn't appear to account for any MIPS64 specifics.

The return instruction created as part of eh_return expansion is now expanded
using expandRetRA() so we use the right return instruction on MIPS32r6/MIPS64r6
('jalr $zero, $rs').

Also, fixed a misuse of isABI_N64() to detect 64-bit wide registers in
expandEhReturn().

Reviewers: jkolek, vmedic, mseaborn, zoran.jovanovic, dsanders

Reviewed By: dsanders

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D4268

llvm-svn: 212604
2014-07-09 10:16:07 +00:00
Chandler Carruth
0b494c0aa6 [x86] Re-apply a variant of the x86 side of r212324 now that the rest
has settled without incident, removing the x86-specific and overly
strict 'isVectorSplat' routine in favor of generic and more powerful
splat detection.

The primary motivation and result of this is that the x86 backend can
now see through splats which contain undef elements. This is essential
if we are using a widening form of legalization and I've updated a test
case to also run in that mode as before this change the generated code
for the test case was completely scalarized.

This version of the patch much more carefully handles the undef lanes.
- We aren't overly conservative about them in the shift lowering
  (where we will never use the splat itself).
- One place where the splat would have been re-used by the existing code
  now explicitly constructs a new constant splat that will be safe.
- The broadcast lowering is much more reasonable with undefs by doing
  a correct check of whether the splat is the only user of a loaded
  value, checking that the splat actually crosses multiple lanes before
  using a broadcast, and handling broadcasts of non-constant splats.

As a consequence of the last bullet, the weird usage of vpshufd instead
of vbroadcast is gone, and we actually can lower an AVX splat with
vbroadcastss where before we emitted a really strange pattern of
a vector load and a manual splat across the vector.

llvm-svn: 212602
2014-07-09 10:06:58 +00:00
NAKAMURA Takumi
1d25744c84 MipsTargetStreamer.h: Avoid "using" to appease msc17.
llvm-svn: 212577
2014-07-08 23:48:22 +00:00
Jim Grosbach
4c963f72c5 AArch64: Better codegen for loading from __fp16.
Loading will generally extend to an f32 or an f64, so make sure
to match those patterns directly to load into the FPR16 register
class rather than going through the integer GPRs.

This also eliminates an extra step in the convert-to-f64 path
which was first converting to f32 and then to f64 from there.

rdar://17594379

llvm-svn: 212573
2014-07-08 23:28:48 +00:00
Ulrich Weigand
1078e34b76 [PowerPC] Implement atomic NAND operations as actual NAND
This changes the implementation of atomic NAND operations
from "a & ~b" (compatible with GCC < 4.4) to actual "~(a & b)"
(compatible with GCC >= 4.4).

This is in line with the common-code and ARM back-end change
implemented in r212433.
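
In IR terms (a sketch), the semantics now implemented are:

  define i32 @nand(i32* %p, i32 %v) {
    ; stores ~(old & %v) back to %p -- not old & ~%v -- and returns old
    %old = atomicrmw nand i32* %p, i32 %v seq_cst
    ret i32 %old
  }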

llvm-svn: 212547
2014-07-08 16:16:02 +00:00
Daniel Sanders
9b70b2a39e [mips] Fixed struct/class mismatch introduced in r212522.
Clang emits a warning about this.

llvm-svn: 212528
2014-07-08 13:13:42 +00:00
Daniel Sanders
f510655e3d Fix r212522 - [mips] Improve encapsulation of the .MIPS.abiflags implementation and limit scope of related enums
Added two lines that should have been in r212522.

llvm-svn: 212523
2014-07-08 10:35:52 +00:00
Daniel Sanders
694f6eac28 [mips] Improve encapsulation of the .MIPS.abiflags implementation and limit scope of related enums
Summary:
Follow on to r212519 to improve the encapsulation and limit the scope of the enums.

Also merged two very similar parser functions, fixed a bug where ASE's
were not being reported, and marked CPR1's as being 128-bit when MSA is
enabled.

Differential Revision: http://reviews.llvm.org/D4384

llvm-svn: 212522
2014-07-08 10:11:38 +00:00
Renato Golin
74bbc07d65 Revert "Refactor ARM subarchitecture parsing"
This reverts commit 7b4a6882467e7fef4516a0cbc418cbfce0fc6f6d.

llvm-svn: 212521
2014-07-08 10:06:16 +00:00
Arnaud A. de Grandmaison
5d079c21a0 Truncate the immediate in logical operation to the register width
And continue to produce an error if the 32 most significant bits are not all ones or zeros.

llvm-svn: 212520
2014-07-08 09:53:04 +00:00
Vladimir Medic
f3dd261ec8 Mips.abiflags is a new implicitly generated section that will be present on all new modules. The section contains a versioned data structure which represents essentially information to allow a program loader to determine the requirements of the application. This patch implements mips.abiflags section and provides test cases for it.
llvm-svn: 212519
2014-07-08 08:59:22 +00:00
Chandler Carruth
be294c3274 [x86,SDAG] Sink the logic for folding shuffles of splats more
aggressively from the x86 shuffle lowering to the generic SDAG vector
shuffle formation code.

This code already tried to fold away shuffles of splats! It just had
lots of bugs and couldn't handle the case my new x86 shuffle lowering
needed.

First, it failed to correctly compute whether N2 was undef because it
pre-computed this, then did transformations which could *make* N2 undef,
then failed to ever re-consider the precomputed state.

Second, it didn't look through bitcasts at all, even in the safe cases
where they are just element-type bitcasts with no change to the number
of elements.

Third, it didn't handle all-zero bit casts nicely the way my code in the
x86 side of things did, which is essential to getting good zext-shuffle
lowerings.

But all of these are generic. I just ported the code down to this layer
and fixed the surrounding bugs. Tests exercising this in the x86 backend
still pass and some silly code in widen_cast-6.ll gets better. I updated
that test to be a bit more precise but it's still pretty unclear what
the value of the test is in this day and age.

llvm-svn: 212517
2014-07-08 08:45:38 +00:00
Adam Nemet
ee6f661620 [X86] AVX512: Only allow k1-k7 as predicates to vpcmp*
As destination k0 is allowed but not as predicate/writemask.

I also modified the test to allow checking of error messages by the assembler.
I applied a similar approach to the test ret.s in the same directory.

llvm-svn: 212504
2014-07-08 00:22:32 +00:00
Andrea Di Biagio
e6975da1cd [x86] Fix assertion failure caused by a wrong combine of PSHUFD nodes with different types.
When combining a sequence of two PSHUFD dag nodes into a single PSHUFD,
make sure that we assign the correct type to the resulting PSHUFD.
X86ISD::PSHUFD dag nodes can be either MVT::v4i32 or MVT::v4f32.

Before this change, an assertion failure was triggered in method
'DAGCombinerInfo::CombineTo' when trying to combine the shuffles from the test
below into a single PSHUFD.

define <4 x float> @test1(<4 x float> %V) {
  %1 = shufflevector <4 x float> %V, <4 x float> undef, <4 x i32> <i32 3, i32 0, i32 2, i32 1>
  %2 = shufflevector <4 x float> %1, <4 x float> undef, <4 x i32> <i32 3, i32 0, i32 2, i32 1>
  ret <4 x float> %2
}

llvm-svn: 212498
2014-07-07 23:25:23 +00:00
Juergen Ributzka
19aa45cd04 [FastISel][X86] Fix smul.with.overflow.i8 lowering.
Add custom lowering code for signed multiply instruction selection, because the
default FastISel instruction selection for ISD::MUL will use unsigned multiply
for the i8 type and signed multiply for all other types. This would set the
incorrect flags for the overflow check.
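
The affected IR pattern, for reference (a sketch):

  declare { i8, i1 } @llvm.smul.with.overflow.i8(i8, i8)

  define i1 @mul_overflows(i8 %a, i8 %b) {
    %res = call { i8, i1 } @llvm.smul.with.overflow.i8(i8 %a, i8 %b)
    %ov = extractvalue { i8, i1 } %res, 1   ; needs signed-multiply flags
    ret i1 %ov
  }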

This fixes <rdar://problem/17549300>

llvm-svn: 212493
2014-07-07 21:52:21 +00:00
Louis Gerbarg
30a2dcd984 Allow AArch64FastISel to degrade graceully in the presence of an MVT::i128
Currently AArch64FastISel crashes if it tries to extend an integer into an
MVT::i128. This can happen by creating 128 bit integers like so:

  typedef unsigned int uint128_t __attribute__((mode(TI)));
  typedef int sint128_t __attribute__((mode(TI)));

This patch makes EmitIntExt check for their presence and then falls back to
SelectionDAG.

Tests included.

rdar://17516686

llvm-svn: 212492
2014-07-07 21:37:51 +00:00
Renato Golin
659d34425a Refactor ARM subarchitecture parsing
According to a FIXME in ARMMCTargetDesc.cpp the ARM version parsing should be
in the Triple helper class.

Patch by: Gabor Ballabas

llvm-svn: 212479
2014-07-07 20:01:11 +00:00
Ulrich Weigand
5e8a0b585b [PowerPC] Fix no-assert build
r212476 caused a compile failure (unused variable) in a non-assertion
build ...

llvm-svn: 212477
2014-07-07 19:39:44 +00:00
Ulrich Weigand
8f2aff0b0c [PowerPC] Fix "byval align" arguments
Arguments passed as "byval align" should get the specified alignment
in the parameter save area.  There was some code in PPCISelLowering.cpp
that attempted to implement this, but this didn't work correctly:
while code did update the ArgOffset value, it neglected to update
the PtrOff value (which was already computed from the old ArgOffset),
and it also neglected to update GPR_idx -- fields skipped due to
alignment in the save area must likewise be skipped in GPRs.

This patch fixes and simplifies this logic by:
- handling argument offset alignment right at the beginning
  of argument processing, using a new helper routine
  CalculateStackSlotAlignment (this avoids having to update
  PtrOff and other derived values later on)
- not tracking GPR_idx separately, but always computing the
  correct GPR_idx for each argument *from* its ArgOffset
- removing some redundant computation in LowerFormalArguments:
  MinReservedArea must equal ArgOffset after argument processing,
  so there's no use in computing it twice.

[This doesn't change the behavior of the current clang front-end,
since that never creates "byval align" arguments at the moment.
This will change with a follow-on patch, however.]

llvm-svn: 212476
2014-07-07 19:26:41 +00:00
Chandler Carruth
9fd1035842 [x86] Revert r212324 which was too aggressive w.r.t. allowing undef
lanes in vector splats.

The core problem here is that undef lanes can't *unilaterally* be
considered to contribute to splats. Their handling needs to be more
cautious. There is also a reported failure of the nightly testers
(thanks Tobias!) that may well stem from the same core issue. I'm going
to fix this theoretical issue, factor the APIs a bit better, and then
verify that I don't see anything bad with Tobias's reduction from the
test suite before recommitting.

Original commit message for r212324:
  [x86] Generalize BuildVectorSDNode::getConstantSplatValue to work for
  any constant, constant FP, or undef splat and to tolerate any undef
  lanes in a splat, then replace all uses of isSplatVector in X86's
  lowering with it.

  This fixes issues where undef lanes in an otherwise splat vector would
  prevent the splat logic from firing. It is a touch more awkward to use
  this interface, but it is much more accurate. Suggestions for better
  interface structuring welcome.

  With this fix, the code generated with the widening legalization
  strategy for widen_cast-4.ll is *dramatically* improved as the special
  lowering strategies for a v16i8 SRA kick in even though the high lanes
  are undef.

  We also get a slightly different choice for broadcasting an aligned
  memory location, and use vpshufd instead of vbroadcastss. This looks
  like a minor win for pipelining and domain crossing, but a minor loss
  for the number of micro-ops. I suspect its a wash, but folks can
  easily tweak the lowering if they want.

llvm-svn: 212475
2014-07-07 19:03:32 +00:00
Matt Arsenault
f39a57f581 R600: Fix mishandling of load / store chains.
Fixes various bugs with reordering loads and stores.
Scalarized vector loads weren't collecting the chains
at all.

llvm-svn: 212473
2014-07-07 18:34:45 +00:00
Matt Arsenault
98979d7f5f Fix typo, weird indentation
llvm-svn: 212472
2014-07-07 18:34:42 +00:00
Tim Northover
a011516d8b X86: revert unintentional change to X86FastISel.
This crept in with r212443.

llvm-svn: 212459
2014-07-07 14:06:42 +00:00
Evgeniy Stepanov
8d97516cfe [asan] Generate asm instrumentation in MC.
Generate entire ASan asm instrumentation in MC without
relying on runtime helper functions.

Patch by Yuri Gorshenin.

llvm-svn: 212455
2014-07-07 13:57:37 +00:00
Chandler Carruth
bf23b76679 [x86] Teach the new vector shuffle lowering code to handle what is
essentially a DAG combine that never gets a chance to run.

We might typically expect DAG combining to remove shuffles-of-splats and
other similar patterns, but we don't get a chance to run the DAG
combiner when we recursively form sub-shuffles during the lowering of
a shuffle. So instead hand-roll a really important combine directly into
the lowering code to detect shuffles-of-splats, especially shuffles of
an all-zero splat which needn't even have the same element width, etc.

This lets the new vector shuffle lowering handle shuffles which
implement things like zero-extension really nicely. This will become
even more important when I wire the legalization of zero-extension to
vector shuffles with the new widening legalization strategy.

llvm-svn: 212444
2014-07-07 09:06:58 +00:00
Tim Northover
96e2bce418 CodeGen: it turns out that NAND is not the same thing as BIC. At all.
We've been performing the wrong operation on ARM for "atomicrmw nand" for
years, since "a NAND b" is "~(a & b)" rather than ARM's very tempting "a & ~b".
This bled over into the generic expansion pass.

So I assume no-one has ever actually tried to do an atomic nand in the real
world. Oh well.

llvm-svn: 212443
2014-07-07 09:06:35 +00:00
Saleem Abdulrasool
b5f61cbe40 ARM: properly lower dllimport'ed global values
This completes the handling for DLL import storage symbols when lowering
instructions.  A DLL import storage symbol must have an additional load
performed prior to use.  This is applicable to variables and functions.

This is particularly important for non-function symbols as it is possible to
handle function references by emitting a thunk which performs the translation
from the unprefixed __imp_ symbol to the proper symbol (although, this is a
non-optimal lowering).  For a variable symbol, no such thunk can be
accommodated.
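
As a rough illustration (the variable name is hypothetical and the exact
code sequence is target-dependent), accessing a dllimport variable goes
through an extra indirection via the __imp_-prefixed pointer:

// Sketch only: compiles with MSVC or with clang targeting Windows.
__declspec(dllimport) extern int imported_count;

int read_imported() {
  // Conceptually two loads: first the address stored in
  // __imp_imported_count, then the int it points to.
  return imported_count;
}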

llvm-svn: 212431
2014-07-07 05:18:35 +00:00
Saleem Abdulrasool
b53a07c63a ARM: correctly mangle dllimport symbols
Add support for tracking DLLImport storage class information on a per symbol
basis in the ARM instruction selection.  Use that information to correctly
mangle the symbol (dllimport symbols are referenced via *__imp_<name>).

llvm-svn: 212430
2014-07-07 05:18:30 +00:00
Saleem Abdulrasool
dd5562d3c7 ARM: unify symbol name retrieval
Ensure that all paths that retrieve the symbol name go through GetARMGVSymbol
rather than getSymbol.  This is desirable so that any global symbol mangling can
be centralised to this function.  The motivation for this is handling of symbols
that are marked as having dll import dll storage.  Such a symbol requires an
extra load that is currently handled in the backend and a __imp_ prefix on the
symbol name.

llvm-svn: 212429
2014-07-07 05:18:22 +00:00
Kevin Qin
c72c5a1639 [AArch64] Normalize all constants to build a vector.
The value of constant operands will be truncated to fit element width.

llvm-svn: 212428
2014-07-07 02:45:40 +00:00
Saleem Abdulrasool
9cdbf65f90 AArch64: whitespace cleanup
llvm-svn: 212420
2014-07-06 22:13:26 +00:00
Matt Arsenault
5c86fb1055 Use cast<> instead of dyn_cast + assert
llvm-svn: 212380
2014-07-05 21:16:43 +00:00
Matt Arsenault
d1cb139e25 Fix grammar
llvm-svn: 212379
2014-07-05 21:16:40 +00:00
Ehsan Akhgari
749fa5fa81 Add support for parsing the not operator in Microsoft inline assembly
This fixes http://llvm.org/PR20202

llvm-svn: 212352
2014-07-04 19:13:05 +00:00
Daniel Sanders
43fb238a1e [mips][mips64r6] Set ELF e_flags for MIPS32r6/MIPS64r6. Also do MIPS-I to MIPS-V
Differential Revision: http://reviews.llvm.org/D4386

llvm-svn: 212346
2014-07-04 15:21:53 +00:00
Tim Northover
200b515443 ARM: when falling back to scattered relocs, keep the type.
The linker relies on relocation type info (e.g. is it a branch?) to perform the
correct actions, so we should keep that even when we end up using a scattered
relocation for whatever reason.

rdar://problem/17553104

llvm-svn: 212333
2014-07-04 10:58:05 +00:00
Daniel Sanders
058f73385f [mips][mips64r6] Correct the encoding of dmuh, dmuhu, dmul, and dmulu.
We have detected a documentation bug in the encoding tables of the released
MIPS64r6 specification that has resulted in the wrong encodings being used for
these instructions in LLVM. This commit corrects them.

llvm-svn: 212330
2014-07-04 10:08:27 +00:00
Chandler Carruth
ab7167839a [x86] Generalize BuildVectorSDNode::getConstantSplatValue to work for
any constant, constant FP, or undef splat and to tolerate any undef
lanes in a splat, then replace all uses of isSplatVector in X86's
lowering with it.

This fixes issues where undef lanes in an otherwise splat vector would
prevent the splat logic from firing. It is a touch more awkward to use
this interface, but it is much more accurate. Suggestions for better
interface structuring welcome.

With this fix, the code generated with the widening legalization
strategy for widen_cast-4.ll is *dramatically* improved as the special
lowering strategies for a v16i8 SRA kick in even though the high lanes
are undef.

We also get a slightly different choice for broadcasting an aligned
memory location, and use vpshufd instead of vbroadcastss. This looks
like a minor win for pipelining and domain crossing, but a minor loss
for the number of micro-ops. I suspect it's a wash, but folks can easily
tweak the lowering if they want.

llvm-svn: 212324
2014-07-04 08:11:49 +00:00
Alexey Volkov
2c55eafda4 [X86] Limit maximum nop length on Silvermont
Silvermont can only decode one instruction per cycle if the instruction exceeds 8 bytes.
Also, on Silvermont, instructions with more than 3 prefixes cause a 3-cycle penalty.
The maximum nop length is limited to 7 bytes when nops are used for padding on Silvermont.
For other x86 processors the maximum nop length remains unchanged at 15 bytes.

Differential Revision: http://reviews.llvm.org/D4374

llvm-svn: 212321
2014-07-04 07:14:56 +00:00
Robert Lytton
463595f38e XCore target: remove incorrect DebugLoc entries from prologue
Summary: This was causing the prologue_end to be incorrectly positioned.

Differential Revision: http://reviews.llvm.org/D4122

llvm-svn: 212318
2014-07-04 06:38:22 +00:00
Eric Christopher
6861e06059 Move function-dependent resetting of a subtarget variable out of the
subtarget. This involved having the movt predicate take the current
function: since instruction selection cares about size when deciding
whether or not to use movw/movt, the predicate takes the function so we
can check its attributes. This required adding the current
MachineFunction to FastISel and propagating it through.

llvm-svn: 212309
2014-07-04 01:55:26 +00:00
Chandler Carruth
9a23698203 [x86] Clarify that this lowering only applies to vectors and is only
used when we have SSE2.

llvm-svn: 212300
2014-07-03 22:57:44 +00:00
Eric Christopher
a364288ae8 Remove caching of the target machine and initialization of the
subtarget from ARMISelDAGtoDAG. The former is unnecessary and the
latter is initialized on each runOnMachineFunction.

llvm-svn: 212297
2014-07-03 22:24:49 +00:00
Andrea Di Biagio
5c282923d9 [CostModel][x86] Improved cost model for alternate shuffles.
This patch:
 1) Improves the cost model for x86 alternate shuffles (originally
added at revision 211339);
 2) Teaches the Cost Model Analysis pass how to analyze alternate shuffles.

Alternate shuffles are a special kind of blend; on x86, we can often
easily lower an alternate shuffle into a single blend
instruction (depending on the subtarget features).
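
For illustration, an alternate shuffle takes its elements from the two
inputs in a strictly alternating pattern; with clang's vector extensions
(a hand-written sketch, not code from this patch) it looks roughly like:

typedef float v4f32 __attribute__((vector_size(16)));

// An "alternate" mask: even elements from a, odd elements from b.
// With SSE4.1 or later this can become a single blendps.
v4f32 alternate(v4f32 a, v4f32 b) {
  return __builtin_shufflevector(a, b, 0, 5, 2, 7);
}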

The existing cost model didn't take into account subtarget features.
Also, it had a couple of "dead" entries for vector types that are never
legal (example: on x86 types v2i32 and v2f32 are not legal; those are
always either promoted or widened to 128-bit vector types).

The new x86 cost model takes into account what target features we have
before returning the shuffle cost (i.e. the number of instructions
after the blend is lowered/expanded).

This patch also teaches the Cost Model Analysis how to identify and analyze
alternate shuffles (i.e. 'SK_Alternate' shufflevector instructions):
 - added function 'isAlternateVectorMask';
 - added some logic to check if an instruction is an alternate shuffle and, if
   so, call the target-specific TTI to get the corresponding shuffle cost;
 - added a test to verify the cost model analysis on alternate shuffles.

llvm-svn: 212296
2014-07-03 22:24:18 +00:00
Andrea Di Biagio
fc8cb32b3f [X86] Add ISel patterns to select 'f32_to_f16' and 'f16_to_f32' dag nodes.
This patch adds tablegen patterns to select F16C float-to-half-float
conversion instructions from 'f32_to_f16' and 'f16_to_f32' dag nodes.

If the target doesn't have F16C, then 'f32_to_f16' and 'f16_to_f32'
are expanded into library calls.

llvm-svn: 212293
2014-07-03 21:51:06 +00:00
Yi Kong
b99d414631 [ARM] Implement ISB memory barrier intrinsic
Adds support for __builtin_arm_isb. Also corrects the modelling of the DSB
and ISB instructions by adding the has-side-effects property.
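
Usage is straightforward; a minimal sketch, assuming clang targeting
ARM or AArch64 (the argument is the barrier option, and 15 corresponds
to SY):

// Full-system instruction synchronization barrier.
void sync_instruction_stream() {
  __builtin_arm_isb(15);
}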

llvm-svn: 212276
2014-07-03 16:00:41 +00:00
Chandler Carruth
0dba9eafa3 [x86] Fix crashes in lowering bitcast instructions with the widening
mode.

This also runs the test in that mode which would reproduce the crash.
What I love is that *every single FIXME* in the test is addressed by
switching to widening.

llvm-svn: 212254
2014-07-03 03:43:47 +00:00
Chandler Carruth
d7e281bba1 [x86] Based on a long conversation between myself, Jim Grosbach, Hal
Finkel, Eric Christopher, and a bunch of other people I'm probably
forgetting (sorry), add an option to the x86 backend to widen vectors
during type legalization rather than promote them.

This still would promote vNi1 vectors to get the masks right, but would
widen other vectors. A lot of experiments are piling up right now
showing that widening should probably be the default legalization
strategy outside of vNi1 cases, but it is very hard to test the
ramifications of that and fix bugs in widening-based legalization
without an option that enables it. I'll be checking in tests shortly
that use this option to exercise cases where widening doesn't work well
and hopefully we'll be able to switch fully to this soon.

llvm-svn: 212249
2014-07-03 02:11:29 +00:00
Eric Christopher
f23b8b3bbe Make these preprocessor directives match all of the others in the port.
llvm-svn: 212245
2014-07-03 00:44:31 +00:00
Eric Christopher
be5a55af78 Remove dead code.
llvm-svn: 212244
2014-07-03 00:44:28 +00:00
Chandler Carruth
fc0fe5064b [codegen,aarch64] Add a target hook to the code generator to control
vector type legalization strategies in a more fine grained manner, and
change the legalization of several v1iN types and v1f32 to be widening
rather than scalarization on AArch64.

This fixes an assertion failure caused by scalarizing nodes like "v1i32
trunc v1i64". As v1i64 is legal it will fail to scalarize v1i32.

This also provides a foundation for other targets to have more granular
control over how vector types are legalized.

Patch by Hao Liu, reviewed by Tim Northover. I'm committing it to allow
some work to start taking place on top of this patch as it adds some
really important hooks to the backend that I'd like to immediately start
using. =]

http://reviews.llvm.org/D4322

llvm-svn: 212242
2014-07-03 00:23:43 +00:00
Eric Christopher
f60da84887 Move subtarget dependent features into the subtarget from the target
machine. Includes a fix for a subtarget initialization for
hard floating point on mips16.

llvm-svn: 212240
2014-07-03 00:10:24 +00:00
Eric Christopher
3a910292e0 So that we can include frame lowering in the subtarget, remove include
circular dependency with the subtarget by inlining accessor methods and
outlining a routine.

llvm-svn: 212236
2014-07-02 23:29:55 +00:00
Eric Christopher
d616eca2ec So that we can include target lowering in the subtarget, remove include
circular dependency with the subtarget by inlining accessor methods and
outlining a routine.

llvm-svn: 212234
2014-07-02 23:18:40 +00:00
Eric Christopher
02e3de51ef Fix typos.
llvm-svn: 212228
2014-07-02 22:05:40 +00:00
Eric Christopher
d83d891cbb Move the data layout and selection dag info from the mips target machine
down to the subtarget.

llvm-svn: 212224
2014-07-02 21:29:23 +00:00
Adam Nemet
e66defe4d5 [X86] AVX512: Allow writemask argument in vpermt* intrinsics
llvm-svn: 212223
2014-07-02 21:26:01 +00:00
Adam Nemet
d43dcabf24 [X86] AVX512: Generate Pat<>'s for the vpermt2* intrinsics via multiclass
This new multiclass, avx512_perm_table_3src, derives from the current one and
provides the Pat<>.  The next patch will add another Pat<> that uses the
writemask.

Note that I dropped the type annotation from the intrinsic call, i.e.: (v16f32
VR512:$src1) -> R512:$src1.  I think that this should be fine (at least many
intrinsic calls don't provide them) and it greatly reduces the number of
template arguments.

llvm-svn: 212222
2014-07-02 21:25:58 +00:00
Adam Nemet
ef83b36688 [X86] AVX512: Add writemask variants for vperm*2*
This includes assembler and codegen support (see the new tests in
avx512-encodings.s and avx512-shuffle.ll).

<rdar://problem/17492620>

llvm-svn: 212221
2014-07-02 21:25:54 +00:00
Tom Stellard
e220f55c61 R600: Add a comment that llvm.AMDGPU.trunc is a legacy intrinsic
llvm-svn: 212218
2014-07-02 20:53:57 +00:00
Tom Stellard
c4ab9c96da R600/SI: Use a ComplexPattern for ADDR64 addressing of MUBUF loads
llvm-svn: 212217
2014-07-02 20:53:56 +00:00
Tom Stellard
209c137768 R600: Promote i64 loads to v2i32
llvm-svn: 212216
2014-07-02 20:53:54 +00:00
Tom Stellard
5343a390b0 R600/SI: Adjust SGPR live ranges before register allocation
SGPRs are written by instructions that sometimes will ignore control flow,
which means if you have code like:

if (VGPR0) {
  SGPR0 = S_MOV_B32 0
} else {
  SGPR0 = S_MOV_B32 1
}

The value of SGPR0 will be 1 no matter what the condition is.

In order to deal with this situation correctly, we need to view the
program as if it were a single basic block when we calculate the
live ranges for the SGPRs.  The way we actually update the live
range is by iterating over all of the segments in each LiveRange
object and setting the end of each segment equal to the start of
the next segment.  So a live range like:

[3888r,9312r:0)[10032B,10384B:0)  0@3888r

will become:

[3888r,10032B:0)[10032B,10384B:0)  0@3888r

This change will allow us to use SALU instructions within branches.

llvm-svn: 212215
2014-07-02 20:53:48 +00:00
Tom Stellard
1f2dabfbae R600/SI: Add verifier check for immediates in register operands.
llvm-svn: 212214
2014-07-02 20:53:44 +00:00
Quentin Colombet
fcd6cc7327 [RegAllocGreedy] Provide a subtarget hook to disable the local reassignment
heuristic.
By default, no functionality change.
This is a follow-up of r212099.

This hook provides a finer grain to control the optimization.

<rdar://problem/17444599>

llvm-svn: 212204
2014-07-02 18:32:04 +00:00
Duncan P. N. Exon Smith
3ca30c6b7f AArch64: Re-enable AArch64AddressTypePromotion
This reverts commits r212189 and r212190.

While this pass was accidentally disabled (until r212073), r205437
slipped in a use of `auto` that should have been `auto&`.

This fixes PR20188.

llvm-svn: 212201
2014-07-02 18:17:40 +00:00
Duncan P. N. Exon Smith
53a9506052 AArch64: Remove unnecessary parens
llvm-svn: 212199
2014-07-02 18:14:03 +00:00
Matt Arsenault
6aca178583 R600: Fix crashes when an illegal type load or store is not handled.
I don't think anything hits this now, but will be exposed in future
patches.

llvm-svn: 212197
2014-07-02 17:44:53 +00:00
Duncan P. N. Exon Smith
08df999c69 AArch64: Merge isa with dyn_cast
llvm-svn: 212194
2014-07-02 17:26:39 +00:00
Duncan P. N. Exon Smith
3a2ea7e9a6 AArch64: Temporarily disable AArch64AddressTypePromotion
Temporarily disable AArch64AddressTypePromotion, which was effectively
re-enabled in r212073 and r212075, while I look into PR20188.

llvm-svn: 212189
2014-07-02 17:03:16 +00:00
Benjamin Kramer
972b4f1dd9 X86: When combining shuffles just remove shuffles that are completely redundant.
CombineTo doesn't allow replacing a node with itself so this would crash if the
combined shuffle is the same as the input shuffle.

llvm-svn: 212181
2014-07-02 15:09:44 +00:00
Elena Demikhovsky
2a53c3fac5 AVX-512: dec/inc instructions are slow on KNL
Following Alexey Volkov's change, I'm adding the same property for KNL, which prefers ADD/SUB instead of INC/DEC.
Added a test.

llvm-svn: 212178
2014-07-02 14:11:05 +00:00
Saleem Abdulrasool
ac95fd68f3 aarch64: support target-specific .req assembler directive
Based on the support for .req on ARM. The aarch64 variant has to keep track of
whether the alias register was a vector register (v0-31) or a general purpose or
VFP/Advanced SIMD ([bhsdq]0-31) register.

Patch by Janne Grunau!

llvm-svn: 212161
2014-07-02 04:50:23 +00:00
Eric Christopher
0a1a2e475f Break out subtarget initialization that dependent variables need into
a separate function and clean up calling convention for helper function.

llvm-svn: 212153
2014-07-02 01:14:43 +00:00
Eric Christopher
b072603a8a Unify these two lines.
llvm-svn: 212152
2014-07-02 01:02:28 +00:00
Eric Christopher
ca67399b59 Move MipsJITInfo to the subtarget rather than the target machine.
llvm-svn: 212151
2014-07-02 00:54:12 +00:00
Eric Christopher
e519bfada2 Remove unnecessary include.
llvm-svn: 212150
2014-07-02 00:54:10 +00:00
Eric Christopher
d6e2fee952 Remove the cached InstrItineraryData on the TargetMachine, it's unnecessary.
llvm-svn: 212149
2014-07-02 00:54:07 +00:00
Eric Christopher
db2f596d56 Move the subtarget dependent features from XCoreTargetMachine
down to the subtarget.

llvm-svn: 212147
2014-07-02 00:10:09 +00:00
Eric Christopher
e035cb7329 Make XCoreSelectionDAGInfo take a DataLayout since it only needs
that information.

llvm-svn: 212146
2014-07-02 00:10:05 +00:00
Tim Northover
6522771fe6 X86: remove atomic instructions *after* we've iterated through them.
Otherwise they get freed, and the implicit "isa<XYZ>" tests that follow
turn out badly (at least under sanitizers).
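
The underlying issue is the usual collect-then-erase pattern: erasing
while iterating frees the very objects the following checks inspect.
A generic C++ sketch of the safe structure (illustrative only, not the
actual pass code):

#include <list>
#include <vector>

struct Instr { bool isAtomic = false; };

void eraseAtomics(std::list<Instr> &block) {
  // First pass: record what to remove without touching the list.
  std::vector<std::list<Instr>::iterator> toErase;
  for (auto it = block.begin(); it != block.end(); ++it)
    if (it->isAtomic)
      toErase.push_back(it);

  // Second pass: erase after iteration, so no later check sees freed memory.
  for (auto it : toErase)
    block.erase(it);
}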

Also corrects the ordering of unordered atomic stores.

llvm-svn: 212136
2014-07-01 22:10:30 +00:00
Juergen Ributzka
241f08ad4e [DAG] Pass the argument list to the CallLoweringInfo via move semantics. NFCI.
The argument list vector is never used after it has been passed to the
CallLoweringInfo and moving it to the CallLoweringInfo is cleaner and
pretty much as cheap as keeping a pointer to it.

llvm-svn: 212135
2014-07-01 22:01:54 +00:00
Tim Northover
f7aeb57c87 X86: delegate expanding atomic libcalls to generic code.
On targets without cmpxchg16b or cmpxchg8b, the borderline atomic
operations were slipping through the gaps.

X86AtomicExpand.cpp was delegating to ISelLowering. Generic
ISelLowering was delegating to X86ISelLowering and X86ISelLowering was
asserting. The correct behaviour is to expand to a libcall, preferably
in generic ISelLowering.

This can be achieved by X86ISelLowering deciding it doesn't want the
faff after all.

llvm-svn: 212134
2014-07-01 21:44:59 +00:00
Eric Christopher
d403bea5ee Move the subtarget dependent features from SystemZTargetMachine
down to the subtarget. Add an initialization routine to assist.

llvm-svn: 212124
2014-07-01 20:19:02 +00:00
Eric Christopher
d0042e96b4 Remove the use and initialization of the target machine and subtarget
from SystemZFrameLowering.

llvm-svn: 212123
2014-07-01 20:18:59 +00:00
Tim Northover
2d5af48fc1 AArch64: fix comment typo
llvm-svn: 212120
2014-07-01 19:47:09 +00:00
Tim Northover
60e9ada729 X86: expand atomics in IR instead of as MachineInstrs.
The logic for expanding atomics that aren't natively supported in
terms of cmpxchg loops is much simpler to express at the IR level. It
also allows the normal optimisations and CodeGen improvements to help
out with atomics, instead of using a limited set of possible
instructions.
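
The shape of the expansion is the familiar compare-exchange loop; as a
C++ sketch over std::atomic (illustrating the structure, not the actual
IR the pass emits):

#include <atomic>

// Emulate "atomicrmw nand" with a cmpxchg loop, the same structure used
// when an RMW operation has no native instruction.
unsigned atomicNand(std::atomic<unsigned> &obj, unsigned operand) {
  unsigned old = obj.load();
  while (!obj.compare_exchange_weak(old, ~(old & operand))) {
    // On failure 'old' is refreshed with the current value; retry.
  }
  return old;
}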

rdar://problem/13496295

llvm-svn: 212119
2014-07-01 18:53:31 +00:00
Adam Nemet
da774db14e [X86] AVX512: Allow writemasks with vpcmp
For now I only updated the _alt variants.  The main variants are used by
codegen and that will need a bit more work to trigger.

<rdar://problem/17492620>

llvm-svn: 212114
2014-07-01 18:03:45 +00:00
Adam Nemet
3b6318203d [X86] AVX512: Factor generating the AsmString into avx512_icmp_cc
Adding a writemask variant would require a third asm string to be passed to
the template.  Generate the AsmString in the template instead.

No change in X86.td.expanded.

llvm-svn: 212113
2014-07-01 18:03:43 +00:00
Reid Kleckner
f824c863c7 Fix .seh_stackalloc 0
seh_stackalloc 0 is not representable in Win64 SEH info, so emitting it
is a bug.

Reviewers: rnk

Differential Revision: http://reviews.llvm.org/D4334

Patch by Vadim Chugunov!

llvm-svn: 212081
2014-07-01 00:42:47 +00:00
Duncan P. N. Exon Smith
c07a03b9ce AArch64: Follow-up to r212073
In r212073 I missed a call of `use_begin()` that assumed the wrong
semantics.  It's not clear to me at all what this code does without the
fix, so I'm not sure how to write a testcase.

llvm-svn: 212075
2014-07-01 00:05:37 +00:00
Duncan P. N. Exon Smith
8dd9ef0369 AArch64: Actually do address type promotion
AArch64AddressTypePromotion was doing nothing because it was using the
old semantics of `Use` and `uses()`, when it really wanted to get at the
`users()`.

llvm-svn: 212073
2014-06-30 23:42:14 +00:00
Alp Toker
000fb20af5 Fix 'platform-specific' hyphenations
llvm-svn: 212056
2014-06-30 18:57:16 +00:00
Matt Arsenault
09212562b8 R600: Move mul combine to separate function
llvm-svn: 212052
2014-06-30 17:55:48 +00:00
Matt Arsenault
add99aadc1 R600: Remove unused declarations leftover from AMDIL
llvm-svn: 212051
2014-06-30 17:37:17 +00:00
Andrea Di Biagio
2bc066b1b4 [X86] Add support for builtin to read performance monitoring counters.
This patch adds support for a new builtin instruction called
__builtin_ia32_rdpmc.

Builtin '__builtin_ia32_rdpmc' is defined as a 'GCC builtin'; on X86, it can
be used to read performance monitoring counters. It takes as input the index
of the performance counter to read, and returns the value of the specified
performance counter as a 64-bit number.

Calls to this new builtin will map to instruction RDPMC.
The index in input to the builtin call is moved to register %ECX. The result
of the builtin call is the value of the specified performance counter (RDPMC
would return that quantity in registers RDX:RAX).

This patch:
 - Adds builtin int_x86_rdpmc as a GCCBuiltin;
 - Adds a new x86 DAG node called 'RDPMC_DAG';
 - Teaches how to lower this new builtin;
 - Adds an ISel pattern to select instruction RDPMC;
 - Fixes the definition of instruction RDPMC adding %RAX and %RDX as
   implicit definitions, and adding %ECX as implicit use;
 - Adds an LLVM test to verify that the new builtin is correctly selected.
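
A minimal usage sketch, assuming a compiler that provides this GCC-style
builtin (the counter index 0 is arbitrary, and reading performance
counters from user space typically requires the OS to have set CR4.PCE):

#include <cstdio>

int main() {
  // Read performance counter 0; the builtin returns a 64-bit value.
  unsigned long long count = __builtin_ia32_rdpmc(0);
  std::printf("pmc0 = %llu\n", count);
}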

llvm-svn: 212049
2014-06-30 17:14:21 +00:00
Chad Rosier
e739f85e32 [AArch64] Unsized types don't specify an alignment.
PR20109

llvm-svn: 212045
2014-06-30 15:03:00 +00:00
Chad Rosier
2954f936f3 [AArch64] Convert mul x, -(pow2 +/- 1) to shift + add/sub.
The combine for mul x, pow2 +/- 1 is unchanged. Test cases for
both combines, as well as for mul x, pow2, have been added.
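
For example, a scalar C++ sketch of the transformation (the backend does
this on DAG nodes, not on C source):

#include <cassert>

// x * -7 == x * -(8 - 1) == x - (x << 3)
long long mulMinus7(long long x) { return x - (x << 3); }

// x * -9 == x * -(8 + 1) == -(x << 3) - x
long long mulMinus9(long long x) { return -(x << 3) - x; }

int main() {
  for (long long x = -4; x <= 4; ++x) {
    assert(mulMinus7(x) == x * -7);
    assert(mulMinus9(x) == x * -9);
  }
}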

llvm-svn: 212044
2014-06-30 14:51:14 +00:00
Scott Douglass
dbd9f7ce21 ARM: take care not to set the ThumbFunc bit on TLS data symbols
This fixes LNT SingleSource/UnitTests/Threads with -mthumb.

Differential Revision: http://reviews.llvm.org/D4324

llvm-svn: 212029
2014-06-30 09:37:24 +00:00
Saleem Abdulrasool
73ee195392 X86: fix comment
Fix a comment typo `DbgLocLImport` instead of `DLLImport`.

llvm-svn: 212012
2014-06-30 03:11:18 +00:00
Saleem Abdulrasool
b846dbabd9 ARM: use symbolic name for constant
This just changes the constant value to the symbolic name corresponding to it.
NFC.

llvm-svn: 212011
2014-06-30 03:11:14 +00:00
Saleem Abdulrasool
499f3c1213 CodeGen: rename Win64 ExceptionHandling to WinEH
This exception format is not specific to Windows x64.  A similar approach is
taken on nearly all architectures.  Generalise the name to reflect reality.
This will eventually be used for Windows on ARM data emission as well.

Switch the enum and namespace into an enum class.

llvm-svn: 212000
2014-06-29 21:43:47 +00:00
Saleem Abdulrasool
d45105fbb4 MC: rename EmitWin64EH routines
Rename the routines to reflect the reality that they are more related to call
frame information than to Win64 EH. Although EH is implemented in an intertwined
manner by augmenting with an exception handler and an associated parameter, the
majority of these routines emit information required to unwind the frames. This
also helps identify that these routines are generic for most Windows platforms
(they apply equally to nearly all architectures except x86) although the
encoding of the information is architecture dependent.

Unwinding data is emitted via EmitWinCFI* and exception handling information via
EmitWinEH*.

llvm-svn: 211994
2014-06-29 01:52:01 +00:00
Craig Topper
4c15d35f50 Add ops() method to SDNode that returns an ArrayRef<SDUse>. Use it to simplify some code.
llvm-svn: 211993
2014-06-29 00:40:57 +00:00
Chandler Carruth
747f8eca83 [x86] Fix a bug in the v8i16 shuffling exposed by the new splat-like
lowering for v16i8.

ASan and some bots caught this bug with existing test cases. Fixing it
even fixed a miscompile with one of the test cases. I'm still a bit
suspicious of this test case as I've not taken a proper amount of time
to think about it, but the fix here is strict goodness.

llvm-svn: 211976
2014-06-28 05:46:28 +00:00
Chandler Carruth
d3455a84b3 [x86] Add handling for splat-like widenings of v16i8 shuffles.
These show up really frequently, not least with actual splats. =] We
lowered these quite badly before. The new code path tries to widen i8
shuffles to i16 shuffles in a splat-like way. There are still some
inefficiencies in our i16 splat logic though, so we aren't really done
here.

Also, for certain patterns (bit of a gather-and-splat) we still
generate pretty silly code, and I've left a fixme for addressing it.
However, I'm not actually worried about this code pattern as much. The
old shuffle lowering generates a 29 instruction monstrosity for it that
should execute much more slowly.

llvm-svn: 211974
2014-06-28 05:16:40 +00:00
Chandler Carruth
c99cbd9cc7 [x86] Fix another bug hit when bootstrapping with the new shuffle
lowering.

For maximum irony, I had already discovered this bug, diagnosed it, and
left FIXMEs about it in the test cases. =[ I just failed to go back over
those until after I had reduced a bootstrap miscompile down to a single
TU, stared at the assembly for an hour, and figured out the bug. Again.

Oh well.

llvm-svn: 211955
2014-06-27 20:07:40 +00:00
Justin Holewinski
42c18ad508 [NVPTX] Use GreatestCommonDivisor64 from MathExtras instead of using our own. Thanks Hal!
llvm-svn: 211952
2014-06-27 19:36:25 +00:00
Justin Holewinski
2a38f449ff [NVPTX] Add reflect intrinsic (better than matching by function name)
Also clean up some of the logic in NVVMReflect.cpp while we're messing around in there.

llvm-svn: 211948
2014-06-27 18:36:11 +00:00
Justin Holewinski
4dfd654c73 [NVPTX] Handle all possible vector types in getSetCCResultType, not just the ones representable as MVTs
llvm-svn: 211947
2014-06-27 18:36:08 +00:00
Justin Holewinski
3774707bba [NVPTX] Add 'b' asm constraint
llvm-svn: 211946
2014-06-27 18:36:06 +00:00
Justin Holewinski
b2ff2161f2 [NVPTX] Simplify some argument lowering logic
llvm-svn: 211945
2014-06-27 18:36:04 +00:00
Justin Holewinski
d4b13b9bc6 [NVPTX] Do not process samplers in GenericToNVVM
llvm-svn: 211944
2014-06-27 18:36:02 +00:00
Justin Holewinski
caff2b4efe [NVPTX] Error out if initializer is given for variable in an address space that does not support initialization
llvm-svn: 211943
2014-06-27 18:36:01 +00:00
Justin Holewinski
004cf81d64 [NVPTX] Add support for .managed variables for UVM
llvm-svn: 211942
2014-06-27 18:35:58 +00:00
Justin Holewinski
6888ca5201 [NVPTX] Emit .weak linkage for link_once, weak, available_externally, and common linkage
llvm-svn: 211941
2014-06-27 18:35:56 +00:00
Justin Holewinski
20bb9827a8 [NVPTX] Variables that start with llvm. or nvvm. are reserved and should not be emitted
llvm-svn: 211940
2014-06-27 18:35:53 +00:00
Justin Holewinski
0ed0e4b150 [NVPTX] Fix handling of ldg/ldu intrinsics.
The address space of the pointer must be global (1) for these intrinsics.  There must also be alignment metadata attached to the intrinsic calls, e.g.

%val = tail call i32 @llvm.nvvm.ldu.i.global.i32.p1i32(i32 addrspace(1)* %ptr), !align !0

!0 = metadata !{i32 4}

llvm-svn: 211939
2014-06-27 18:35:51 +00:00
Justin Holewinski
6fc5dab1cf [NVPTX] Clean up argument lowering code and properly handle alignment for structs and vectors
llvm-svn: 211938
2014-06-27 18:35:44 +00:00
Justin Holewinski
92e1b59c28 [NVPTX] Add missing boolean vector contents flag
llvm-svn: 211937
2014-06-27 18:35:42 +00:00
Justin Holewinski
97b1e28f31 [NVPTX] Add support for [SHL,SRA,SRL]_PARTS
llvm-svn: 211936
2014-06-27 18:35:40 +00:00
Justin Holewinski
7663d1fd5a [NVPTX] Implement fma and imad contraction as target DAGCombiner patterns
This also introduces DAGCombiner patterns for mul.wide to multiply two smaller integers and produce a larger integer
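
The mul.wide pattern being matched is essentially the widening-multiply
idiom, e.g. (hand-written C++ sketch):

#include <cstdint>

// Both operands are sign-extended before the multiply, so the full
// 64-bit product of two 32-bit inputs is computed; on NVPTX this can
// become a single mul.wide.s32 instead of converts plus a 64-bit mul.
int64_t wideMul(int32_t a, int32_t b) {
  return static_cast<int64_t>(a) * static_cast<int64_t>(b);
}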

llvm-svn: 211935
2014-06-27 18:35:37 +00:00
Justin Holewinski
1c8d752df2 [NVPTX] Add support for efficient rotate instructions on SM 3.2+
llvm-svn: 211934
2014-06-27 18:35:33 +00:00
Justin Holewinski
b7ecf6b4d6 [NVPTX] Add missing isel patterns for 64-bit atomics
llvm-svn: 211933
2014-06-27 18:35:30 +00:00
Justin Holewinski
2ffa2d24b0 [NVPTX] Add isel patterns for bit-field extract (bfe)
llvm-svn: 211932
2014-06-27 18:35:27 +00:00
Justin Holewinski
7c9cd5f566 [NVPTX] Add support for isspacep instruction
llvm-svn: 211931
2014-06-27 18:35:24 +00:00
Justin Holewinski
ab34c507cf [NVPTX] Add support for envreg reads
llvm-svn: 211930
2014-06-27 18:35:21 +00:00
Justin Holewinski
6eaef88634 [NVPTX] Add target options for PTX 3.2/4.0 and SM 5.0 (Maxwell)
Default PTX version is set to PTX 3.2

llvm-svn: 211929
2014-06-27 18:35:18 +00:00
Justin Holewinski
1c2cdcb292 [NVPTX] Update sub-target feature detection
llvm-svn: 211928
2014-06-27 18:35:16 +00:00
Justin Holewinski
7d536fecc0 [NVPTX] Directly control the Machine SSA passes that are invoked for NVPTX.
NVPTX is a bit special in the optimizations it requires, so this gives
us better control over the backend optimization pipeline.

llvm-svn: 211927
2014-06-27 18:35:14 +00:00
Justin Holewinski
a3b8653f47 [NVPTX] Emit .weak when linkage is not external, internal, or private
llvm-svn: 211926
2014-06-27 18:35:10 +00:00
Justin Holewinski
b137c23ee9 [NVPTX] Just use getTypeAllocSize() when computing return value size for structures and vectors
llvm-svn: 211925
2014-06-27 18:35:08 +00:00
Chandler Carruth
ec4fb8a59b [x86] Fix a miscompile in the new shuffle lowering uncovered by
a bootstrap.

I managed to mis-remember how PACKUS worked on x86, and was using undef
for the high bytes instead of zero. The fix is fairly obvious.

llvm-svn: 211922
2014-06-27 18:25:23 +00:00
Matt Arsenault
5e70db4151 R600: Move trivial getters into header, use initializer list
llvm-svn: 211917
2014-06-27 17:57:00 +00:00
Juergen Ributzka
f7b405f472 [FastISel][X86] Fix typos.
llvm-svn: 211911
2014-06-27 17:16:34 +00:00
Matt Arsenault
128df7aaf1 R600: Don't crash on unhandled instruction in promote alloca
llvm-svn: 211906
2014-06-27 16:52:49 +00:00
Alexander Kornienko
3be4bd19cf Clean up unused variable warning in release build.
llvm-svn: 211902
2014-06-27 15:30:55 +00:00
Ulrich Weigand
85c764c15a [PowerPC] Constrain base register in PPCRegisterInfo::resolveFrameIndex
I've run into a bug where current LLVM at -O0 (with fast-isel)
generated invalid code like:

        ld 0, 20936(1)                  # 8-byte Folded Reload
        stw 12, 10348(0)
        stw 12, 10344(0)

The underlying vreg had been introduced as base register by the
Local Stack Slot Allocation pass.  That register was constrained
to G8RC by PPCRegisterInfo::materializeFrameBaseRegister to match
the ADDI instruction used to set it, but it was *not* constrained
to G8RC_NOX0 to fit the *use* of the register in an address.

That should have happened in PPCRegisterInfo::resolveFrameIndex.
This patch adds an appropriate constrainRegClass call.

Reviewed by Hal Finkel.

llvm-svn: 211897
2014-06-27 13:04:12 +00:00
Chandler Carruth
608f397a25 [x86] Clean up some unused variables, especially in release builds.
llvm-svn: 211894
2014-06-27 12:04:18 +00:00
Chandler Carruth
e7aaf337d0 [x86] Teach the target combine step to aggressively fold pshufd instructions.
Summary:
This allows it to fold pshufd instructions across intervening
half-shuffles and other noise. This pattern actually shows up in the
generic lowering tests, but I've also added direct tests using
intrinsics to make sure that the specific desired functionality is
working even if the lowering stuff changes in the future.

Differential Revision: http://reviews.llvm.org/D4292

llvm-svn: 211892
2014-06-27 11:40:13 +00:00
Chandler Carruth
127f1b532c [x86] Teach the target-specific combining how to aggressively fold
half-shuffles, even looking through intervening instructions in a chain.

Summary:
This doesn't happen to show up with any test cases I've found for the current
shuffle lowering, but previous attempts would benefit from this and it seems
generally useful. I've tested it directly using intrinsics, which also shows
that it will work with hand vectorized code as well.

Note that even though pshufd isn't directly used in these tests, it gets
exercised because we combine some of the half shuffles into a pshufd
first, and then merge them.

Differential Revision: http://reviews.llvm.org/D4291

llvm-svn: 211890
2014-06-27 11:34:40 +00:00