mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-11-23 11:13:28 +01:00
Commit Graph

172245 Commits

Author SHA1 Message Date
Fedor Sergeev
e9e52b4a6a Fixing -print-module-scope for legacy SCC passes
It appears that -print-module-scope was not implemented for legacy SCC passes.
Fixed to print the whole module instead of just the current SCC.

Reviewed By: mkazantsev
Differential Revision: https://reviews.llvm.org/D54793

llvm-svn: 348144
2018-12-03 14:48:15 +00:00
Jonas Paulsson
7777cbe23a [SystemZ::TTI] Return zero cost for ICmp that becomes Load And Test.
A loaded value with multiple users that is compared with 0 will become a
single Load And Test instruction. The load is not folded in this case
(multiple users), but the compare instruction is eliminated.

This patch returns a cost of 0 for the icmp in these cases.
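
A minimal hand-written IR sketch (not taken from the patch) of the pattern in
question: the loaded value has a second user besides the compare, so the load
stays but the compare can become a Load And Test:

define i64 @f(i64* %ptr, i64 %other) {
  %val = load i64, i64* %ptr
  %cmp = icmp eq i64 %val, 0
  %res = select i1 %cmp, i64 %other, i64 %val
  ret i64 %res
}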

Review: Ulrich Weigand
https://reviews.llvm.org/D55111

llvm-svn: 348141
2018-12-03 14:30:18 +00:00
Pablo Barrio
72d3164a16 [AArch64] Add command-line option for SSBS
Summary:
SSBS (Speculative Store Bypass Safe) is only mandatory from Armv8.5-A
onwards but is optional from Armv8.0-A. This patch adds a command
line option to enable SSBS, as it was previously only possible to
enable it by selecting -march=armv8.5-a.

Similar patch upstream in GNU binutils:
https://sourceware.org/ml/binutils/2018-09/msg00274.html

Reviewers: olista01, samparker, aemerson

Reviewed By: samparker

Subscribers: javed.absar, kristof.beyls, kristina, llvm-commits

Differential Revision: https://reviews.llvm.org/D54629

llvm-svn: 348137
2018-12-03 14:00:47 +00:00
Ron Lieberman
c710fa7f34 [AMDGPU] Add sdwa support for ADD|SUB U64 decomposed Pseudos
The S_{ADD|SUB}_U64_PSEUDO instructions are decomposed into VOP3
instruction pairs, for S_ADD_U64_PSEUDO:
  V_ADD_I32_e64
  V_ADDC_U32_e64
and for S_SUB_U64_PSEUDO:
  V_SUB_I32_e64
  V_SUBB_U32_e64
This precludes the use of SDWA to encode a constant.
SDWA (sub-dword addressing) is supported on VOP1 and VOP2 instructions,
but not on VOP3 instructions.

We want to fold the bit-and operand into the instruction encoding
of the V_ADD_I32 instruction. This requires transforming the VOP3 form
into the VOP2 form of the instruction (_e32).
  %19:vgpr_32 = V_AND_B32_e32 255,
      killed %16:vgpr_32, implicit $exec
  %47:vgpr_32, %49:sreg_64_xexec = V_ADD_I32_e64
      %26.sub0:vreg_64, %19:vgpr_32, implicit $exec
  %48:vgpr_32, dead %50:sreg_64_xexec = V_ADDC_U32_e64
      %26.sub1:vreg_64, %54:vgpr_32, killed %49:sreg_64_xexec, implicit $exec

which then allows the SDWA encoding and becomes
  %47:vgpr_32 = V_ADD_I32_sdwa
      0, %26.sub0:vreg_64, 0, killed %16:vgpr_32, 0, 6, 0, 6, 0,
      implicit-def $vcc, implicit $exec
  %48:vgpr_32 = V_ADDC_U32_e32
      0, %26.sub1:vreg_64, implicit-def $vcc, implicit $vcc, implicit $exec


Differential Revision: https://reviews.llvm.org/D54882

llvm-svn: 348132
2018-12-03 13:04:54 +00:00
Tim Northover
5daefbd8b2 ARM: use target-specific SUBS node when combining cmp with cmov.
This has two positive effects. First, using a custom node prevents
recombination leading to an infinite loop, since the output DAG is notionally a
little more complex than the input one. Second, using a flag-setting instruction
allows the subtraction to be folded with the related comparison more easily.
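
A hypothetical IR sketch (not from the patch) of the kind of input this helps:
the comparison and the subtraction share operands, so they can be lowered to a
single flag-setting SUBS feeding the conditional move:

define i32 @f(i32 %a, i32 %b) {
  %sub = sub i32 %a, %b
  %cmp = icmp sgt i32 %a, %b
  %res = select i1 %cmp, i32 %sub, i32 %a
  ret i32 %res
}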

https://reviews.llvm.org/D53190

llvm-svn: 348122
2018-12-03 11:16:21 +00:00
Diogo N. Sampaio
61a6678d57 [NFC][AArch64] Split out backend features
This patch splits out backend features that are currently
hidden behind architecture versions.

For example, currently the only way to activate the
complex numbers extension is to target an Armv8.3
architecture, whereas after this patch the extension
can be enabled separately.

This refactoring is required by the new command lines proposal:
http://lists.llvm.org/pipermail/llvm-dev/2018-September/126346.html

Reviewers: DavidSpickett, olista01, t.p.northover

Subscribers: kristof.beyls, bryanpkc, javed.absar, pbarrio

Differential revision: https://reviews.llvm.org/D54633

llvm-svn: 348121
2018-12-03 11:08:13 +00:00
Stefan Granitz
e03d279ac1 [CMake] Add LLVM_EXTERNALIZE_DEBUGINFO_OUTPUT_DIR for custom dSYM target directory on Darwin
Summary: When using `LLVM_EXTERNALIZE_DEBUGINFO` in LLDB, the default dSYM location for the shared library in LLDB.framework is inside the framework bundle. With `LLVM_EXTERNALIZE_DEBUGINFO_OUTPUT_DIR` we can easily fix that. I consider it a useful feature to be able to set a global output directory for external debug info (rather than having a target-specific one). Only implemented for Darwin so far.

Reviewers: beanz, aprantl

Reviewed By: aprantl

Subscribers: mgorny, aprantl, #lldb, lldb-commits, llvm-commits

Differential Revision: https://reviews.llvm.org/D55114

llvm-svn: 348118
2018-12-03 10:42:32 +00:00
Alex Bradbury
02c51ed077 [RISCV] Fix test/MC/Disassembler/RISCV/invalid-instruction.txt after rL347988
The test for [0x00 0x00] failed due to the introduction of c.unimp.

This particular test is unnecessary now that c.unimp was defined (and is 
tested in test/MC/RISCV/rv32c-valid.s).

llvm-svn: 348117
2018-12-03 10:35:46 +00:00
George Rimar
20faa793e5 [llvm-dwarfdump] - Stop printing the bogus empty section name on invalid dwarf.
When there is no .debug_addr section for some reason,
llvm-dwarfdump would print a bogus empty section name when dumping ranges
in .debug_info:

DW_AT_ranges [DW_FORM_rnglistx]   (indexed (0x0) rangelist = 0x00000004
    [0x0000000000000000, 0x0000000000000001) ""
    [0x0000000000000000, 0x0000000000000002) "")

That happens because the code uses 0 (zero) as the default section index.
The code should use -1ULL instead, because technically 0 is a valid section
index in ELF, while -1ULL is a special constant that means "no section available".

This is mostly a fix for the overall correctness/safety of the code,
but a test case is provided too.

Differential revision: https://reviews.llvm.org/D55113

llvm-svn: 348115
2018-12-03 10:33:40 +00:00
Oliver Stannard
eb66331216 [ARM][MC] Move information about variadic register defs into tablegen
Currently, variadic operands on an MCInst are assumed to be uses,
because they come after the defs. However, this is not always the case;
for example, the Arm/Thumb LDM instructions write to a variable number of
registers.

This adds a property of instruction definitions which can be used to
mark variadic operands as defs. This only affects MCInst, because
MachineInstr already tracks use/def per operand in each instance
of the instruction, and so can already represent this.

This property can then be checked in MCInstrDesc, allowing us to remove
some special cases in ARMAsmParser::isITBlockTerminator.

Differential revision: https://reviews.llvm.org/D54853

llvm-svn: 348114
2018-12-03 10:32:42 +00:00
Oliver Stannard
a7553313be [ARM][Asm] Debug trace for the processInstruction loop
In the Arm assembly parser, we first match an instruction, then call
processInstruction to possibly change it to a different encoding, to
match rules in the architecture manual which can't be expressed by the
table-generated matcher.

This adds debug printing so that this process is visible when using the
-debug option.

To support this, I've added a new overload of MCInst::dump_pretty which
takes the opcode name as a StringRef, since we don't have an InstPrinter
instance in the assembly parser. Instead, we can get the same
information directly from the MCInstrInfo.

Differential revision: https://reviews.llvm.org/D54852

llvm-svn: 348113
2018-12-03 10:21:28 +00:00
Alexander Potapenko
8e6b80a242 [KMSAN] Enable -msan-handle-asm-conservative by default
This change enables conservative assembly instrumentation in KMSAN builds
by default.
It's still possible to disable it with -msan-handle-asm-conservative=0
if something breaks. It's now impossible to enable conservative
instrumentation for userspace builds, but it's not used anyway.

llvm-svn: 348112
2018-12-03 10:15:43 +00:00
Petr Pavlu
3e533dcfa4 [GlobalISel] Fix test irtranslator-stackprotect-check.ll
Fix for commit r347862. Use correct AArch64 triple in test
CodeGen/AArch64/GlobalISel/irtranslator-stackprotect-check.ll.

llvm-svn: 348111
2018-12-03 09:28:28 +00:00
Sjoerd Meijer
129cbf3900 [ARM] FP16: support vld1.16 for vector loads with post-increment
Differential Revision: https://reviews.llvm.org/D55112

llvm-svn: 348110
2018-12-03 08:26:34 +00:00
Kang Zhang
b1f5b9262b [PowerPC] Fix inconsistent ImmMustBeMultipleOf for same instruction
Summary:
There are 4 instructions which have an inconsistent ImmMustBeMultipleOf in the
function PPCInstrInfo::instrHasImmForm; they are LFS, LFD, STFS and STFD.
These four instructions should set ImmMustBeMultipleOf to 1 instead of 4.

Reviewed By: steven.zhang

Differential Revision: https://reviews.llvm.org/D54738

llvm-svn: 348109
2018-12-03 03:32:57 +00:00
QingShan Zhang
46ee47cec1 [NFC] [PowerPC] add a routine in PPCTargetLowering to determine if a global is accessed as got-indirect or not.
In theory, we should let the PPC target determine how to lower the TOC entry for globals.
PPCTargetLowering requires this query in order to do some optimization for TOC_Entry.

Differential Revision: https://reviews.llvm.org/D54925

llvm-svn: 348108
2018-12-03 03:32:16 +00:00
Nico Weber
184177b539 [gn build] Fix cosmetic bug in write_cmake_config.py
Before, #cmakedefine FOO resulted in #define FOO  with a trailing space if FOO
was set to something truthy. Make it so that it's just #define FOO without a
trailing space.

No functional difference.

Differential Revision: https://reviews.llvm.org/D55172

llvm-svn: 348107
2018-12-02 22:26:18 +00:00
Nico Weber
1835f9f561 [gn build] Slightly simplify write_cmake_config.
Before, the script had a bunch of special cases for #cmakedefine and
#cmakedefine01 and then did general variable substitution. Now, the script
always does general variable substitution for all lines and handles the special
cases afterwards.

This has no observable effect for the inputs we use, but is easier to explain
and slightly easier to implement.

Also add a link to CMake's configure_file() in the docstring.

(The new behavior doesn't quite match CMake on lines like #cmakedefine ${FOO},
but nobody does that.)

Differential Revision: https://reviews.llvm.org/D55171

llvm-svn: 348106
2018-12-02 22:25:25 +00:00
Nico Weber
39f9b06cce [gn build] Add build files for llvm/lib/Analysis and llvm/lib/ProfileData
Differential Revision: https://reviews.llvm.org/D55166

llvm-svn: 348105
2018-12-02 21:43:15 +00:00
Craig Topper
b14bada7bd [X86] Add a DAG combine to turn stores of vXi1 on pre-avx512 targets into a bitcast and a store of an iX scalar.
llvm-svn: 348104
2018-12-02 19:47:14 +00:00
Craig Topper
5c466e750d [X86] Fix bad comment. NFC
llvm-svn: 348103
2018-12-02 19:47:13 +00:00
Michal Gorny
76c7542bde [test] Fix use of 'sort -b' in SimpleLoopUnswitch on NetBSD
Add '-k 1' to 'sort -b' calls in SimpleLoopUnswitch tests, as required
by the sort implementation on NetBSD.  The '-b' modifier is ineffective
if specified without any key.  Per the manpage:

  Note that the -b option has no effect unless key fields are specified.

Differential Revision: https://reviews.llvm.org/D55168

llvm-svn: 348097
2018-12-02 16:49:33 +00:00
Michal Gorny
2d8d7b8468 [test] Fix ScalarEvolution test to allow __func__ with prototype
Fix ScalarEvolution/solve-quadratic.ll test to account for __func__
output listing the complete function prototype rather than just its
name, as it does on NetBSD.

Example Linux output:

  GetQuadraticEquation: addrec coeff bw: 4
  GetQuadraticEquation: equation -2x^2 + -2x + -4, coeff bw: 5, multiplied by 2

Example NetBSD output:

  llvm::Optional<std::tuple<llvm::APInt, llvm::APInt, llvm::APInt, llvm::APInt, unsigned int> > GetQuadraticEquation(const llvm::SCEVAddRecExpr*): addrec coeff bw: 4
  llvm::Optional<std::tuple<llvm::APInt, llvm::APInt, llvm::APInt, llvm::APInt, unsigned int> > GetQuadraticEquation(const llvm::SCEVAddRecExpr*): equation -2x^2 + -2x + -4, coeff bw: 5, multiplied by 2

Differential Revision: https://reviews.llvm.org/D55162

llvm-svn: 348096
2018-12-02 16:49:28 +00:00
Michal Gorny
fe6b91d018 [test] Fix BugPoint/compile-custom.ll to use detected python exec
Spawn the custom compile command in BugPoint/compile-custom.ll via
%python rather than relying on an implicit 'env python' shebang, in order
to fix it on systems that don't have a 'python' executable, such as NetBSD.

Differential Revision: https://reviews.llvm.org/D55161

llvm-svn: 348095
2018-12-02 16:49:23 +00:00
Nikita Popov
bb7c898cb7 [ValueTracking] Support funnel shifts in computeKnownBits()
If the shift amount is known, we can determine the known bits of the
output based on the known bits of two inputs.
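
A hand-written IR sketch (the constants are made up) of what this enables:
with a constant shift amount, known bits of both inputs propagate to the
result:

declare i32 @llvm.fshl.i32(i32, i32, i32)

define i32 @f(i32 %x, i32 %y) {
  %a = and i32 %x, -65536       ; low 16 bits of %a are known zero
  %b = or i32 %y, -16777216     ; high 8 bits of %b are known one
  ; fshl(%a, %b, 8) == (%a << 8) | (%b >> 24), so bits 8-23 of %r are
  ; known zero and bits 0-7 are known one.
  %r = call i32 @llvm.fshl.i32(i32 %a, i32 %b, i32 8)
  ret i32 %r
}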

This is essentially the same functionality as implemented in D54869,
but for ValueTracking rather than InstCombine SimplifyDemandedBits.

Differential Revision: https://reviews.llvm.org/D55140

llvm-svn: 348091
2018-12-02 14:14:11 +00:00
Sanjay Patel
2986f40e8a [SelectionDAG] fold constant with undef vector per element
This makes the SDAG behavior consistent with the way we do this in IR.
It's possible that we were getting the wrong answer before. For example,
'xor undef, undef' --> 0, but 'xor undef, C' --> undef.

But the most practical improvement is likely as shown in the tests here - 
for FP, we were overconstraining undef lanes to NaN, and that can prevent 
vector simplifications/narrowing (see D51553).

llvm-svn: 348090
2018-12-02 13:48:42 +00:00
Sanjay Patel
7fad8e54d1 [DAGCombiner] guard against an oversized shift crash
This change prevents the crash noted in the post-commit comments
for rL347478:
http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20181119/605166.html

We can't guarantee that an oversized shift amount is folded away, 
so we have to check for it.

Note that I committed an incomplete fix for that crash with:
rL347502

But as discussed here:
http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20181126/605679.html
...we have to try harder.

So I'm not sure how to expose the bug now (and apparently no fuzzers have found 
a way yet either).

On the plus side, we have discovered that we're missing real optimizations by 
not simplifying nodes sooner, so the earlier fix still has value, and there's 
likely more value in extending that so we can simplify more opcodes and simplify 
when doing RAUW and/or putting nodes on the combiner worklist.

Differential Revision: https://reviews.llvm.org/D54954

llvm-svn: 348089
2018-12-02 13:33:56 +00:00
Sanjay Patel
9c0a094ebb [ValueTracking] add helper function for testing implied condition; NFCI
We were duplicating code around the existing isImpliedCondition() that
checks for a predecessor block/dominating condition, so make that a
wrapper call.

llvm-svn: 348088
2018-12-02 13:26:03 +00:00
Craig Topper
9ae71594e9 [X86] Simplify LowerBITCAST code for v2i32/v4i16/v8i8/i64->mmx/i64/f64 bitcast.
Previously this code generated its own extracts and build_vector. But we can use a simpler concat_vectors or scalar_to_vector operation and let type legalization do additional legalization of those operations.

llvm-svn: 348087
2018-12-02 07:52:39 +00:00
Craig Topper
1c3f74bc5b [X86] Add custom type legalization for v2i32/v4i16/v8i8->mmx bitcasts to avoid a store/load to/from the stack.
Widen the input to a 128-bit vector by padding with undef elements. Then use a movdq2q to convert from an xmm register to an mmx register.
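
A hand-written IR sketch (the function is made up) of the kind of bitcast this
covers:

define x86_mmx @cast_to_mmx(<2 x i32> %v) {
  %m = bitcast <2 x i32> %v to x86_mmx
  ret x86_mmx %m
}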

llvm-svn: 348086
2018-12-02 05:46:50 +00:00
Craig Topper
b16fe4df30 [X86] Custom type legalize v2i32/v4i16/v8i8->i64 bitcasts in 64-bit mode similar to what's done when the destination is f64.
The generic legalizer will fall back to a stack spill that uses a truncating store. That store will get expanded into a shuffle and non-truncating store on pre-avx512 targets. Once that happens, the stack store/load pair will be combined away, leaving behind the shuffle and bitcasts. On avx512 targets the truncating store is legal, so it doesn't get folded away.

By custom legalizing it we can avoid this churn and maybe produce better code.
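
A hand-written IR sketch of the kind of bitcast this custom legalizes:

define i64 @cast_to_i64(<2 x i32> %v) {
  %r = bitcast <2 x i32> %v to i64
  ret i64 %r
}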

llvm-svn: 348085
2018-12-02 05:46:48 +00:00
Craig Topper
d89a17060e [X86] Add vXi8 division/remainder by non-splat constant test cases to prepare for an upcoming patch.
llvm-svn: 348082
2018-12-01 21:53:08 +00:00
Jessica Paquette
c877e03376 [MachineOutliner][AArch64] Improve checks for stack instructions
If we know that we'll definitely save LR to a register, there's no reason to
pre-check whether or not a stack instruction is unsafe to fix up.

This makes it so that we check for that condition before mapping instructions.

This allows us to outline more, since we don't pessimise as many instructions.

Also update some tests, since we outline more.

llvm-svn: 348081
2018-12-01 21:24:06 +00:00
Jessica Paquette
2fa8070014 Replace w16/w17 in machine-outliner.mir with w11/w12
These registers should not be used here, since they are intra-procedure-call
scratch registers in AArch64.

llvm-svn: 348080
2018-12-01 21:23:58 +00:00
Craig Topper
108b8ed5bf [X86] Don't use zero_extend_vector_inreg for mulhu lowering with sse 4.1
Summary: With sse4.1 we use two zero_extend_vector_inreg and a pshufd to expand the v16i8 input into two v8i16 vectors for the multiply. That's 3 shuffles to extend one operand. The other operand is usually constant as this is mostly used by division by constant optimization. Pre sse4.1 we use a punpckhbw and a punpcklbw with a zero vector. That's two shuffles and an xor and a copy due to tied register constraints. That seems maybe better than the 3 shuffles. With AVX we avoid the copy so that's obviously better.
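
A hand-written IR sketch of a typical consumer of this lowering, a division of
v16i8 by a constant (the value 7 is arbitrary):

define <16 x i8> @div_by_7(<16 x i8> %x) {
  %r = udiv <16 x i8> %x, <i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7>
  ret <16 x i8> %r
}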

Reviewers: spatel, RKSimon

Reviewed By: RKSimon

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D55138

llvm-svn: 348079
2018-12-01 19:26:31 +00:00
Simon Pilgrim
a5a555e69a [TTI] Reduction costs only need to include a single extract element cost (REAPPLIED)
We were adding the entire scalarization extraction cost for reductions, which returns the total cost of extracting every element of a vector type.

For reductions we don't need to do this - we just need to extract the 0'th element after the reduction pattern has completed.
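
For illustration (a hand-written sketch, not from the patch), a typical
shuffle-based reduction only reads lane 0 of the final vector:

define i32 @reduce(<4 x i32> %v) {
  %s1 = shufflevector <4 x i32> %v, <4 x i32> undef, <4 x i32> <i32 2, i32 3, i32 undef, i32 undef>
  %a1 = add <4 x i32> %v, %s1
  %s2 = shufflevector <4 x i32> %a1, <4 x i32> undef, <4 x i32> <i32 1, i32 undef, i32 undef, i32 undef>
  %a2 = add <4 x i32> %a1, %s2
  %r = extractelement <4 x i32> %a2, i32 0
  ret i32 %r
}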

Fixes PR37731

Rebased and reapplied after being reverted in rL347541 due to PR39774 - which was fixed by D54955/rL347759 and D55017/rL347997

Differential Revision: https://reviews.llvm.org/D54585

llvm-svn: 348076
2018-12-01 14:18:31 +00:00
Graham Sellers
6a0a522a58 [AMDGPU] Split 64-Bit XNOR to 64-Bit NOT/XOR
The identity ~(x ^ y) == (~x ^ y) == (x ^ ~y) allows XNOR (XOR/NOT) to turn into NOT/XOR. Handling this case with its own split means we can make the NOT remain in the scalar unit. Previously, we split 64-bit XNOR into two 32-bit XNOR, then lowered. Now, we get three instructions (s_not, v_xor, v_xor) rather than four in the case where either of the sources is a scalar 64-bit.
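
A hand-written IR sketch of the 64-bit XNOR pattern itself (whether a source
ends up scalar or vector depends on divergence in the real tests):

define i64 @xnor64(i64 %x, i64 %y) {
  %xor = xor i64 %x, %y
  %not = xor i64 %xor, -1
  ret i64 %not
}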

Add test cases to xnor.ll to attempt XNOR Vx, Sy and XNOR Sx, Vy. Also add a test that uses the opposite identity such that (~x ^ y) on the scalar unit (or vector for gfx906) can generate XNOR. This already worked, but I didn't see a test for it.

Differential: https://reviews.llvm.org/D55071
llvm-svn: 348075
2018-12-01 12:27:53 +00:00
Xing GUO
b4155e7296 [llvm-readobj] Improve dynamic section iteration NFC.
llvm-svn: 348074
2018-12-01 12:27:24 +00:00
Simon Pilgrim
161b154d0b [SelectionDAG] Improve SimplifyDemandedBits to SimplifyDemandedVectorElts simplification
D52935 introduced the ability for SimplifyDemandedBits to call SimplifyDemandedVectorElts through BITCASTs if the demanded bit mask entirely covered the sub element.

This patch relaxes this to demand an element if we need any bit from it.

Differential Revision: https://reviews.llvm.org/D54761

llvm-svn: 348073
2018-12-01 12:08:55 +00:00
Nikita Popov
146fc6e46f [InstCombine] Support ssub.sat canonicalization for non-splats
Extend ssub.sat(X, C) -> sadd.sat(X, -C) canonicalization to also
support non-splat vector constants. This is done by generalizing
the implementation of the isNotMinSignedValue() helper to return
true for constants that are non-splat, but don't contain any
signed min elements.
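
A hand-written sketch of the now-handled case: a non-splat constant with no
INT_MIN element, so ssub.sat(X, C) can become sadd.sat(X, -C):

declare <2 x i32> @llvm.ssub.sat.v2i32(<2 x i32>, <2 x i32>)

define <2 x i32> @f(<2 x i32> %x) {
  ; neither 10 nor 20 is INT_MIN, so this can be canonicalized to
  ; sadd.sat(%x, <i32 -10, i32 -20>)
  %r = call <2 x i32> @llvm.ssub.sat.v2i32(<2 x i32> %x, <2 x i32> <i32 10, i32 20>)
  ret <2 x i32> %r
}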

Differential Revision: https://reviews.llvm.org/D55011

llvm-svn: 348072
2018-12-01 10:58:34 +00:00
Craig Topper
2c4b0ae812 [X86] Remove stale FIXME from test case. NFC
This was fixed in r346581. I just forgot to remove it.

llvm-svn: 348069
2018-12-01 07:45:36 +00:00
Teresa Johnson
12a2456ab6 [ThinLTO] Allow importing of functions with var args
Summary:
Follow up to D54270, which allowed importing of var args functions
unless they called va_start. As pointed out in the post-commit comments
on that patch, the inliner can handle functions that call va_start in
certain situations as well. Go ahead and enable importing of all var
args functions. Measurements on a large binary show that this increases
imports and binary size by an insignificant amount.

Reviewers: davidxl

Subscribers: mehdi_amini, inglorion, eraman, steven_wu, dexonsmith, llvm-commits

Differential Revision: https://reviews.llvm.org/D54607

llvm-svn: 348068
2018-12-01 05:11:46 +00:00
Alex Bradbury
272fa58228 [RISCV] Remove RV64I SLLW/SRLW/SRAW patterns and add new test cases
As noted by Eli Friedman <https://reviews.llvm.org/D52977?id=168629#1315291>,
the RV64I shift patterns for SLLW/SRLW/SRAW make some incorrect assumptions.
SRAW assumed that (sext_inreg foo, i32) could only be produced when
sign-extending an i32. However, it can be produced by input such as:

define i64 @tricky_ashr(i64 %a, i64 %b) {
  %1 = shl i64 %a, 32
  %2 = ashr i64 %1, 32
  %3 = ashr i64 %2, %b
  ret i64 %3
}

It's important not to select sraw in the above case, because sraw only uses
the lower 5 bits of the shift amount, while a shift amount of 32-63 would be valid.

Similarly, the patterns for srlw assumed (and foo, 0xffffffff) would only be 
produced when zero-extending a value that was originally i32 in LLVM IR. This
is obviously incorrect.
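
An analogous hand-written example for the srlw case, where the mask is not
produced by zero-extending an i32 value:

define i64 @tricky_lshr(i64 %a, i64 %b) {
  %1 = and i64 %a, 4294967295
  %2 = lshr i64 %1, %b
  ret i64 %2
}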

This patch removes the SLLW/SRLW/SRAW shift patterns for the time being and 
adds test cases that would demonstrate a miscompile if the incorrect patterns 
were re-added.

llvm-svn: 348067
2018-12-01 05:00:00 +00:00
Shoaib Meenai
5f2bf664d6 [projects] Use add_llvm_external_project for implicit projects
This allows disabling implicit projects via the LLVM_TOOL_*_BUILD
variables, similar to how implicit tools can be disabled. They'll still
be enabled by default, since add_llvm_external_project defaults the
LLVM_TOOL_*_BUILD variables to ON for in-tree implicit projects.

Differential Revision: https://reviews.llvm.org/D55105

llvm-svn: 348064
2018-12-01 01:41:27 +00:00
Craig Topper
944588cf02 [X86][LoopVectorize] Replace -mcpu=skylake-avx512 with -mattr=avx512f in some tests that failed when experimenting with defaulting to -mprefer-vector-width=256 for skylake-avx512.
llvm-svn: 348063
2018-12-01 01:38:44 +00:00
Zachary Turner
ee08c55862 Use RequireNullTerminator=false in identify_magic.
identify_magic does not need the file to be null terminated.  Passing
true here causes the file reading code to decide not to use mmap in
some rare cases (which happen to apply 100% of the time to PDB files),
which can lead to very large files failing to load.  Since it was
probably just an accident that we were passing true here (it is
the default value of the parameter), this should be strictly an improvement.

llvm-svn: 348059
2018-12-01 00:22:39 +00:00
Zachary Turner
807a0ca917 [lit] Add a generic build script with a lit substitution.
This adds a script called build.py as well as a lit substitution
called %build that we can use to invoke it.  The idea is that
this allows a lit test to build test inferiors without having
to worry about architecture / platform specific differences,
command line syntax, finding / configuring a proper toolchain,
and other issues.  They can simply write something like:

%build --arch=32 -o %t.exe %p/Inputs/foo.cpp

and it will just work.  This paves the way for being able to
run lit tests with multiple configurations, platforms, and
compilers with a single test.

Differential Revision: https://reviews.llvm.org/D54914

llvm-svn: 348058
2018-12-01 00:22:21 +00:00
Artem Belevich
2ff5077d44 [NVPTX] Add lowering of i128 numbers as struct fields
Addition to D34555 - override VTs computation with ComputePTXValueVTs
for struct fields.
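
A hand-written IR sketch (types and names are made up) of the kind of
signature this enables to be lowered:

%pair = type { i128, i32 }

define %pair @passthrough(%pair %p) {
  ret %pair %p
}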

Author: Denys Zariaiev <denys.zariaiev@gmail.com>

Differential Revision: https://reviews.llvm.org/D55144

llvm-svn: 348057
2018-12-01 00:21:52 +00:00
Craig Topper
c616079a5b [X86] Replace '-mcpu=skx' with -mattr=avx512f or -mattr=avx512bw in interleave/strided load/store cost model tests.
llvm-svn: 348056
2018-12-01 00:21:49 +00:00
Nico Weber
00e6b8131b [gn build] Add action to generate VCSRevision.h and use it to add llvm/lib/Object/BUILD.gn
Differential Revision: https://reviews.llvm.org/D55090

llvm-svn: 348054
2018-12-01 00:02:39 +00:00