mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-22 04:22:57 +02:00
Commit Graph

110861 Commits

Author SHA1 Message Date
Kevin Enderby
dc6f805541 Add printing the LC_ENCRYPTION_INFO_64 load command with llvm-objdump’s -private-headers
and add tests for the two AArch64 binaries.

llvm-svn: 224400
2014-12-17 01:01:30 +00:00
David Blaikie
93e50409ac PR21875: codegen for non-type template parameters of nullptr_t type
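
A minimal C++11 example of the construct this enables (illustrative names, not taken from the commit):

    #include <cstddef>

    // A non-type template parameter of type std::nullptr_t; the only
    // valid template arguments for P are null pointer constants.
    template <std::nullptr_t P>
    struct NullTag {
      static constexpr bool IsNull = (P == nullptr);
    };

    int main() {
      NullTag<nullptr> T; // the parameter kind PR21875 covers
      return T.IsNull ? 0 : 1;
    }
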
llvm-svn: 224399
2014-12-17 00:43:22 +00:00
Reid Kleckner
b4ee65bf9b Revert "[CodeGenPrepare] Move sign/zero extensions near loads using type promotion."
This reverts commit r224351. It causes assertion failures when building
ICU.

llvm-svn: 224397
2014-12-17 00:29:23 +00:00
Hans Wennborg
37a572f581 SelectionDAG switch lowering: use 'unsigned' to count destination popularity
SwitchInst::getNumCases() returns unsigned, so using uint64_t to count cases
seems unnecessary.

Also fix a missing CHECK in the test case.

llvm-svn: 224393
2014-12-16 23:41:59 +00:00
Colin LeMahieu
55e21b6e4f [Hexagon] Updating doubleword shift usages to new versions.
llvm-svn: 224391
2014-12-16 23:36:15 +00:00
Kevin Enderby
c4d55fc277 Add printing the LC_ENCRYPTION_INFO load command with llvm-objdump’s -private-headers.
llvm-svn: 224390
2014-12-16 23:25:52 +00:00
Duncan P. N. Exon Smith
4efb9deca4 Linker: Drop superseded subprograms
When a function gets replaced by `ModuleLinker`, drop superseded
subprograms.  This ensures that the "first" subprogram pointing at a
function is the same one that `!dbg` references point at.

This is a stop-gap fix for PR21910.  Notably, this fixes Release+Asserts
bootstraps that are currently asserting out in
`LexicalScopes::initialize()` due to the explicit instantiations in
`lib/IR/Dominators.cpp` eventually getting replaced by -argpromotion.

llvm-svn: 224389
2014-12-16 23:23:41 +00:00
Sanjay Patel
8f620fe4b0 fix typo, add spaces; NFC
llvm-svn: 224384
2014-12-16 22:48:42 +00:00
Simon Pilgrim
f9bdd6a092 [X86][SSE] Vector double -> float conversion memory folding (cvtpd2ps)
Added a missing memory folding relationship for the (V)CVTPD2PS instruction - we can safely fold these for stack reloads.

Differential Revision: http://reviews.llvm.org/D6663

llvm-svn: 224383
2014-12-16 22:30:10 +00:00
Rafael Espindola
2673fda3c7 Make the assert a bit stronger.
We should get no declarations in here.

llvm-svn: 224382
2014-12-16 22:29:43 +00:00
Colin LeMahieu
7907ae05cd [Hexagon] Removing old XTYPE/BIT instructions and replacing usages.
llvm-svn: 224381
2014-12-16 22:17:09 +00:00
Sanjay Patel
af93d5f15c merge consecutive loads that are offset from a base address
SelectionDAG::isConsecutiveLoad() was not detecting consecutive loads
when the first load was offset from a base address. 

This patch recognizes that pattern and subtracts the offset before comparing
the second load to see if it is consecutive.
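
The heart of the check, sketched as standalone C++ (hypothetical types; the real logic lives in SelectionDAG::isConsecutiveLoad()):

    #include <cstdint>

    // Simplified view of a load for the purposes of the check.
    struct LoadInfo {
      const void *Base; // shared base address
      int64_t Offset;   // byte offset of this load from Base
      int64_t Size;     // number of bytes loaded
    };

    // Subtracting the first load's offset makes the comparison work even
    // when both loads are displaced from the same base.
    bool isConsecutive(const LoadInfo &First, const LoadInfo &Second) {
      return First.Base == Second.Base &&
             Second.Offset - First.Offset == First.Size;
    }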

The codegen change in the new test case improves from:

vmovsd	32(%rdi), %xmm0
vmovsd	48(%rdi), %xmm1 
vmovhpd	56(%rdi), %xmm1, %xmm1
vmovhpd	40(%rdi), %xmm0, %xmm0
vinsertf128	$1, %xmm1, %ymm0, %ymm0

To:

vmovups	32(%rdi), %ymm0

An existing test case is also improved from:

vmovsd	(%rdi), %xmm0
vmovsd	16(%rdi), %xmm1
vmovsd	24(%rdi), %xmm2
vunpcklpd	%xmm2, %xmm0, %xmm0 ## xmm0 = xmm0[0],xmm2[0]
vmovhpd	8(%rdi), %xmm1, %xmm3

To:

vmovsd	(%rdi), %xmm0
vmovsd	16(%rdi), %xmm1
vmovhpd	24(%rdi), %xmm0, %xmm0
vmovhpd	8(%rdi), %xmm1, %xmm1

This patch fixes PR21771 ( http://llvm.org/bugs/show_bug.cgi?id=21771 ).

Differential Revision: http://reviews.llvm.org/D6642

llvm-svn: 224379
2014-12-16 21:57:18 +00:00
Kevin Enderby
601ffaa6fb Fix a bug in llvm-objdump’s -private-headers where the LC_VERSION_MIN_IPHONEOS
load command was not getting printed.

llvm-svn: 224376
2014-12-16 21:48:27 +00:00
Colin LeMahieu
5cbdca29ae [Hexagon] Adding tstbit/bitclr/bitset instructions.
llvm-svn: 224374
2014-12-16 21:28:58 +00:00
Kostya Serebryany
7693e4617d [sanitizer] prevent function call merging for sanitizer-coverage callbacks
llvm-svn: 224372
2014-12-16 21:24:15 +00:00
Kevin Enderby
1fc8b820e6 Fix another use of PRIx32 that should have been PRIx64.
llvm-svn: 224368
2014-12-16 21:00:25 +00:00
Colin LeMahieu
bb5c698516 [Hexagon] Adding bit count and twiddling instructions.
llvm-svn: 224367
2014-12-16 20:57:56 +00:00
Colin LeMahieu
28f6e273c4 [Hexagon] Adding asr/lsr/asl reg/imm, asl with saturation, asr with rounding. Doubleword abs/neg/not. Interleave and deinterleave instructions.
llvm-svn: 224365
2014-12-16 20:40:23 +00:00
Frederic Riss
8319e5e2f1 [dsymutil] Pass the verbosity flag down to the processing. NFC for now.
llvm-svn: 224361
2014-12-16 20:22:11 +00:00
Frederic Riss
8a08e35df1 [dsymutil] Avoid calling getStringTableData() for each symbol. NFC.
llvm-svn: 224360
2014-12-16 20:21:34 +00:00
JF Bastien
02501293ba x86-32: PUSHF/POPF use/def EFLAGS
Summary: As a side-quest for D6629, jvoung pointed out that I should use -verify-machineinstrs, and this found a bug in x86-32's handling of EFLAGS for PUSHF/POPF. This patch fixes the use/def and adds -verify-machineinstrs to all x86 tests which contain 'EFLAGS'. One exception: this patch leaves inline-asm-fpstack.ll as-is because it fails -verify-machineinstrs in a way unrelated to EFLAGS. This patch also modifies cmpxchg-clobber-flags.ll along the lines of what D6629 already does, by also testing i386.

Test Plan: ninja check

Reviewers: t.p.northover, jvoung

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D6687

llvm-svn: 224359
2014-12-16 20:15:45 +00:00
Rafael Espindola
0e63c2a0a5 Use CastInst::castIsValid to simplify the verifier.
Also delete a dead member variable.

llvm-svn: 224356
2014-12-16 19:29:29 +00:00
Matt Arsenault
4e68f48bcd NVPTX: Remove duplicate of AsmPrinter::lowerConstant
llvm-svn: 224355
2014-12-16 19:16:17 +00:00
Matt Arsenault
dbdac5d39f Move lowerConstant to AsmPrinter
This was a static function before, and NVPTX duplicated it
because it wasn't exposed.

llvm-svn: 224354
2014-12-16 19:16:14 +00:00
Quentin Colombet
d31121348b [CodeGenPrepare] Move sign/zero extensions near loads using type promotion.
This patch extends the optimization in CodeGenPrepare that moves a sign/zero
extension near a load when the target can combine them. The optimization may
promote any operations between the extension and the load to make that possible.

Although this optimization may be beneficial for all targets, in particular
AArch64, this is enabled for X86 only as I have not benchmarked it for other
targets yet.


** Context **

Most targets feature extended loads, i.e., loads that perform a zero or sign
extension for free. In that context, it is interesting to expose such patterns in
CodeGenPrepare so that the instruction selection pass can form such loads.
Sometimes, this pattern is blocked because of instructions between the load and
the extension. When those instructions are promotable to the extended type, we
can expose this pattern.


** Motivating Example **

Let us consider an example:
define void @foo(i8* %addr1, i32* %addr2, i8 %a, i32 %b) {
  %ld = load i8* %addr1
  %zextld = zext i8 %ld to i32
  %ld2 = load i32* %addr2
  %add = add nsw i32 %ld2, %zextld
  %sextadd = sext i32 %add to i64
  %zexta = zext i8 %a to i32
  %addza = add nsw i32 %zexta, %zextld
  %sextaddza = sext i32 %addza to i64
  %addb = add nsw i32 %b, %zextld
  %sextaddb = sext i32 %addb to i64
  call void @dummy(i64 %sextadd, i64 %sextaddza, i64 %sextaddb)
  ret void
}

As it is, this IR generates the following assembly on x86_64:
[...]
  movzbl  (%rdi), %eax   # zero-extended load
  movl  (%rsi), %esi     # plain load
  addl  %eax, %esi       # 32-bit add
  movslq  %esi, %rdi     # sign extend the result of add
  movzbl  %dl, %edx      # zero extend the first argument
  addl  %eax, %edx       # 32-bit add
  movslq  %edx, %rsi     # sign extend the result of add
  addl  %eax, %ecx       # 32-bit add
  movslq  %ecx, %rdx     # sign extend the result of add
[...]
The throughput of this sequence is 7.45 cycles on Ivy Bridge according to IACA.

Now, by promoting the additions to form more extended loads we would generate:
[...]
  movzbl  (%rdi), %eax   # zero-extended load
  movslq  (%rsi), %rdi   # sign-extended load
  addq  %rax, %rdi       # 64-bit add
  movzbl  %dl, %esi      # zero extend the first argument
  addq  %rax, %rsi       # 64-bit add
  movslq  %ecx, %rdx     # sign extend the second argument
  addq  %rax, %rdx       # 64-bit add
[...]
The throughput of this sequence is 6.15 cycles on Ivy Bridge according to IACA.

This kind of sequence happens a lot in code using 32-bit indexes on 64-bit
architectures.
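
For instance, chains like these typically come from C++ such as the following (illustrative), where the 32-bit index arithmetic must be sign-extended to 64 bits before each access:

    // Each subscript sign-extends its 32-bit index to 64 bits (movslq);
    // promoting the 32-bit adds to 64 bits lets the extension fold into
    // the loads instead.
    long sum(const long *A, int I, int J) {
      return A[I + 1] + A[J + 1];
    }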

Note: The throughput numbers are similar on Sandy Bridge and Haswell.


** Proposed Solution **

To avoid the penalty of all these sign/zero extensions, we merge them into the
loads at the beginning of the computation chain by promoting the whole chain of
computation to the extended type. The promotion is done if and only if we do not
introduce new extensions, i.e., if we do not degrade the code quality.
To achieve this, we extend the existing “move ext to load” optimization with the
promotion mechanism introduced to match larger patterns for addressing mode
(r200947).
The idea of this extension is to perform the following transformation:
ext(promotableInst1(...(promotableInstN(load))))
=>
promotedInst1(...(promotedInstN(ext(load))))

The promotion mechanism in that optimization is enabled by a new TargetLowering
switch, which is off by default. In other words, by default, the optimization
performs the “move ext to load” optimization as it was before this patch.


** Performance **

Configuration: x86_64: Ivy Bridge fixed at 2900MHz running OS X 10.10.
Tested Optimization Levels: O3/Os
Tests: llvm-testsuite + externals.
Results:
- No regression beside noise.
- Improvements:
CINT2006/473.astar:  ~2%
Benchmarks/PAQ8p: ~2%
Misc/perlin: ~3%

The results are consistent for both O3 and Os.

<rdar://problem/18310086>

llvm-svn: 224351
2014-12-16 19:09:03 +00:00
Kevin Enderby
42ed7ac478 Fix the ARM build bots for a test that was added. A printing routine was incorrectly using PRIx32
when it should have been using PRIx64 for the value that was passed as a uint64_t.
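
The general shape of the fix, as a self-contained C++ sketch (the function name is illustrative):

    #include <cinttypes>
    #include <cstdio>

    // Formatting a uint64_t with PRIx32 misdeclares the vararg's size to
    // printf; PRIx64 matches the actual argument type.
    void printValue(uint64_t Value) {
      std::printf("0x%" PRIx64 "\n", Value);
    }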

llvm-svn: 224350
2014-12-16 18:58:11 +00:00
Robert Khasanov
104b98b388 [AVX512] Enable integer arithmetic lowering for AVX512BW/VL subsets.
Added lowering tests.

llvm-svn: 224349
2014-12-16 18:24:07 +00:00
Evgeny Astigeevich
41308b1f31 On behalf of Matthew Wahab:
An instruction alias defined with InstAlias and an optional operand in the
middle of the AsmString field, "..${a} <operands>", would get the final
"}" printed in the instruction disassembly. This wouldn't happen if the optional
operand appeared as the last item in the AsmString, which is how the current
backends avoided the problem.

There don't appear to be any tests for this part of Tablegen but it passes the
pre-commit tests. Manually tested the change by enabling the generic alias
printer in the ARM backend and checking the output.

Differential Revision: http://reviews.llvm.org/D6529

llvm-svn: 224348
2014-12-16 18:16:17 +00:00
Ahmed Bougacha
6c3e1c0f56 [MC] Reset the MCInst in the matcher function before adding opcode/operands.
On X86, the Intel asm parser tries to match all memory operand sizes when
none is explicitly specified.  For LEA, which doesn't really have a memory
operand (just a pointer one), this results in multiple successful matches,
one for each memory size.  There's no error because it's the same opcode, so
really, it's just one match.  However, the tablegen'd matcher function
adds opcode/operands to the passed MCInst, and this results in multiple
duplicated operands.

This commit clears the MCInst in the tablegen'd matcher function.
We sometimes clear it when the match failed, so there's no expectation of
keeping the previous content anyway.
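
A toy model of the failure mode (hypothetical names; the real matcher function is tablegen'd):

    #include <cstdint>
    #include <vector>

    // Stand-in for MCInst.
    struct ToyInst {
      unsigned Opcode = 0;
      std::vector<int64_t> Operands;
      void clear() { Opcode = 0; Operands.clear(); }
    };

    // Called once per candidate memory size for an Intel-syntax LEA.
    // Without the clear(), a second successful match appends a duplicate
    // set of operands to the same instruction.
    void matchCandidate(ToyInst &Inst, unsigned Opcode, int64_t PtrOp) {
      Inst.clear(); // the fix: reset before adding opcode/operands
      Inst.Opcode = Opcode;
      Inst.Operands.push_back(PtrOp);
    }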

Differential Revision: http://reviews.llvm.org/D6670

llvm-svn: 224347
2014-12-16 18:05:28 +00:00
Colin LeMahieu
4932546e48 [Hexagon] Adding absolute value, and negate with saturation
llvm-svn: 224346
2014-12-16 17:44:49 +00:00
Sanjay Patel
8363dd3b42 combine consecutive subvector 16-byte loads into one 32-byte load
This is a fix for PR21709 ( http://llvm.org/bugs/show_bug.cgi?id=21709 ).
When we have 2 consecutive 16-byte loads that are merged into one 32-byte vector,
we can use a single 32-byte load instead. 
But we don't do this for SandyBridge / IvyBridge because they have slower 32-byte memops.
We also don't bother using 32-byte *integer* loads on a machine that only has AVX1 (btver2)
because those operands would have to be split in half anyway since there is no support for
32-byte integer math ops.
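
A source-level shape of the pattern (sketched with Clang vector extensions; not the actual test case):

    typedef float v4sf __attribute__((vector_size(16)));
    typedef float v8sf __attribute__((vector_size(32)));

    // Two adjacent 16-byte loads concatenated into one 32-byte vector;
    // on CPUs with fast 32-byte memops this can become a single vmovups.
    v8sf loadCombined(const v4sf *P) {
      v4sf Lo = P[0]; // 16-byte load
      v4sf Hi = P[1]; // the consecutive 16-byte load
      return __builtin_shufflevector(Lo, Hi, 0, 1, 2, 3, 4, 5, 6, 7);
    }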

Differential Revision: http://reviews.llvm.org/D6492

llvm-svn: 224344
2014-12-16 16:30:01 +00:00
Colin LeMahieu
585f29d985 [Hexagon] Adding saturate and swizzle instructions.
llvm-svn: 224343
2014-12-16 16:27:17 +00:00
Robert Khasanov
8231be9f66 [AVX512] Add a comment for avx512_broadcast_pat multiclass
llvm-svn: 224341
2014-12-16 16:12:11 +00:00
Colin LeMahieu
c1eb9c21e5 [Hexagon] Removing old multiply defs and updating references to new versions.
llvm-svn: 224340
2014-12-16 16:10:01 +00:00
Vladimir Medic
6c45970ced The single check for N64 inside MipsDisassemblerBase's subclasses is actually wrong. It should be testing for FeatureGP64bit. There are no functional changes.
llvm-svn: 224339
2014-12-16 15:29:12 +00:00
Zoran Jovanovic
d72dae73a8 [mips][microMIPS] Implement SWP and LWP instructions
Differential Revision: http://reviews.llvm.org/D5667

llvm-svn: 224338
2014-12-16 14:59:10 +00:00
Aaron Ballman
d1ab012d86 Fixing -Wsign-compare warnings; NFC.
llvm-svn: 224337
2014-12-16 14:04:11 +00:00
Vladimir Medic
3860fde4a8 Add disassembler tests for mips4 platform. There are no functional changes.
llvm-svn: 224335
2014-12-16 13:02:25 +00:00
Elena Demikhovsky
fe73fcc29b Masked Load and Store Intrinsics in loop vectorizer.
The loop vectorizer optimizes loops containing conditional memory
accesses by generating masked load and store intrinsics.
This decision is target dependent.
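
The kind of loop this enables vectorizing (illustrative C++): the per-element condition becomes the mask of the masked load/store intrinsics on targets that support them.

    // The load and store of Y[I] execute only where the condition holds,
    // so a plain wide load/store could fault or clobber; a masked one
    // cannot.
    void scaleIfPositive(int N, float A, const float *X, float *Y) {
      for (int I = 0; I != N; ++I)
        if (X[I] > 0.0f)
          Y[I] = A * X[I] + Y[I];
    }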

http://reviews.llvm.org/D6527

llvm-svn: 224334
2014-12-16 11:50:42 +00:00
Daniel Sanders
018d1acab3 [mips] Fix arguments-struct.ll for Windows and OSX hosts.
llvm-svn: 224333
2014-12-16 11:21:58 +00:00
Bradley Smith
5d5a40a0f8 [ARM] Prevent PerformVCVTCombine from combining a vmul/vcvt with 8 lanes
This would result in a crash since the vcvt used does not support v8i32 types.

llvm-svn: 224332
2014-12-16 10:59:27 +00:00
Elena Demikhovsky
06e22de2d3 X86: Added FeatureVectorUAMem for all AVX architectures.
According to AVX specification:

"Most arithmetic and data processing instructions encoded using the VEX prefix and
performing memory accesses have more flexible memory alignment requirements
than instructions that are encoded without the VEX prefix. Specifically,
with the exception of explicitly aligned 16 or 32 byte SIMD load/store instructions,
most VEX-encoded, arithmetic and data processing instructions operate in
a flexible environment regarding memory address alignment, i.e. VEX-encoded
instruction with 32-byte or 16-byte load semantics will support unaligned load
operation by default. Memory arguments for most instructions with VEX prefix
operate normally without causing #GP(0) on any byte-granularity alignment
(unlike Legacy SSE instructions)."

The same for AVX-512.

This change does not affect anything right now, because only the "memop pattern fragment"
depends on FeatureVectorUAMem and it is not used in AVX patterns.
All AVX patterns are based on the "unaligned load" anyway.

llvm-svn: 224330
2014-12-16 09:10:08 +00:00
Duncan P. N. Exon Smith
3e579fb285 Remove 'metadata' from comments
llvm-svn: 224328
2014-12-16 07:45:05 +00:00
Duncan P. N. Exon Smith
58ed764767 IR: Stop printing 'metadata' in Metadata::print()
Stop printing `metadata` in `Metadata::print()` and
`Metadata::printAsOperand()`.

llvm-svn: 224327
2014-12-16 07:40:31 +00:00
Duncan P. N. Exon Smith
1fb1f7f9a7 IR: Make MDNode::dump() useful by adding addresses
It's horrible to inspect `MDNode`s in a debugger.  All of their operands
that are `MDNode`s get dumped as `<badref>`, since we can't assign
metadata slots in the context of a `Metadata::dump()`.  (Why not?  Why
not assign numbers lazily?  Because then each time you called `dump()`,
a given `MDNode` could have a different lazily assigned number.)

Fortunately, the C memory model gives us perfectly good identifiers for
`MDNode`.  Add pointer addresses to the dumps, transforming this:

    (lldb) e N->dump()
    !{i32 662302, i32 26, <badref>, null}

    (lldb) e ((MDNode*)N->getOperand(2))->dump()
    !{i32 4, !"foo"}

into:

    (lldb) e N->dump()
    !{i32 662302, i32 26, <0x100706ee0>, null}

    (lldb) e ((MDNode*)0x100706ee0)->dump()
    !{i32 4, !"foo"}

and this:

    (lldb) e N->dump()
    0x101200248 = !{<badref>, <badref>, <badref>, <badref>, <badref>}

    (lldb) e N->getOperand(0)
    (const llvm::MDOperand) $0 = {
      MD = 0x00000001012004e0
    }
    (lldb) e N->getOperand(1)
    (const llvm::MDOperand) $1 = {
      MD = 0x00000001012004e0
    }
    (lldb) e N->getOperand(2)
    (const llvm::MDOperand) $2 = {
      MD = 0x0000000101200058
    }
    (lldb) e N->getOperand(3)
    (const llvm::MDOperand) $3 = {
      MD = 0x00000001012004e0
    }
    (lldb) e N->getOperand(4)
    (const llvm::MDOperand) $4 = {
      MD = 0x0000000101200058
    }
    (lldb) e ((MDNode*)0x00000001012004e0)->dump()
    !{}

    (lldb) e ((MDNode*)0x0000000101200058)->dump()
    !{null}

into:

    (lldb) e N->dump()
    !{<0x1012004e0>, <0x1012004e0>, <0x101200058>, <0x1012004e0>, <0x101200058>}

    (lldb) e ((MDNode*)0x1012004e0)->dump()
    !{}

    (lldb) e ((MDNode*)0x101200058)->dump()
    !{null}

llvm-svn: 224325
2014-12-16 07:09:37 +00:00
Duncan P. N. Exon Smith
c03e29935e DebugInfo: Update testcase to actually check something
This test was missing a `Debug Info Version`, so its `not grep` was
passing vacuously.  Update it to CHECK for something useful at the same
time so it doesn't bitrot quite so easily in the future.

llvm-svn: 224324
2014-12-16 07:08:19 +00:00
Saleem Abdulrasool
c163948b80 ARM: diagnose deprecated syntax
The use of SP and PC in the register list for stores is deprecated on ARM
(ARM ARM A.8.8.199):

  ARM deprecates the use of ARM instructions that include the SP or the PC in
  the list.

Provide a deprecation warning from the assembler in the case that the syntax is
ever seen.

llvm-svn: 224319
2014-12-16 05:53:25 +00:00
Hal Finkel
04ae4c36c5 [PowerPC] Improve instruction selection of bit-permuting operations (32-bit)
The PowerPC backend, somewhat embarrassingly, did not generate an
optimal-length sequence of instructions for a 32-bit bswap. While adding a
pattern for the bswap intrinsic to fix this would not have been terribly
difficult, doing so would not have addressed the real problem: we had been
generating poor code for many bit-permuting operations (by which I mean things
like byte-swap that permute the bits of one or more inputs around in various
ways). Here are some initial steps toward solving this deficiency.

Bit-permuting operations are represented, at the SDAG level, using ISD::ROTL,
SHL, SRL, AND and OR (mostly with constant second operands). Looking back
through these operations, we can build up a description of the bits in the
resulting value in terms of bits of one or more input values (and constant
zeros). For each bit, we compute the rotation amount from the original value,
and then group consecutive (value, rotation factor) bits into groups. Groups
sharing these attributes are then collected and sorted, and we can then
instruction select the entire permutation using a combination of masked
rotations (rlwinm), imm ands (andi/andis), and masked rotation inserts
(rlwimi).
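
A much-simplified sketch of the grouping step in C++ (hypothetical names; the real code also tracks which input value each bit comes from and handles constant zeros):

    #include <vector>

    struct BitGroup {
      unsigned RotAmt;   // left-rotation applied to the input value
      unsigned StartBit; // first result bit of the run
      unsigned EndBit;   // last result bit of the run
    };

    // RotOfBit[I] is the rotation that maps an input bit onto result bit
    // I; consecutive bits sharing a rotation amount coalesce into one
    // group that a single masked rotation can realize.
    std::vector<BitGroup> collectGroups(const std::vector<unsigned> &RotOfBit) {
      std::vector<BitGroup> Groups;
      for (unsigned I = 0, E = RotOfBit.size(); I != E; ++I) {
        if (!Groups.empty() && Groups.back().RotAmt == RotOfBit[I] &&
            Groups.back().EndBit + 1 == I)
          Groups.back().EndBit = I; // extend the current run
        else
          Groups.push_back({RotOfBit[I], I, I}); // start a new run
      }
      return Groups;
    }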

The result is that instead of lowering an i32 bswap as:

	rlwinm 5, 3, 24, 16, 23
	rlwinm 4, 3, 24, 0, 7
	rlwimi 4, 3, 8, 8, 15
	rlwimi 5, 3, 8, 24, 31
	rlwimi 4, 5, 0, 16, 31

we now produce:

	rlwinm 4, 3, 8, 0, 31
	rlwimi 4, 3, 24, 16, 23
	rlwimi 4, 3, 24, 0, 7

and for the 'test6' example in the PowerPC/README.txt file:

 unsigned test6(unsigned x) {
   return ((x & 0x00FF0000) >> 16) | ((x & 0x000000FF) << 16);
 }

we used to produce:

	lis 4, 255
	rlwinm 3, 3, 16, 0, 31
	ori 4, 4, 255
	and 3, 3, 4

and now we produce:

	rlwinm 4, 3, 16, 24, 31
	rlwimi 4, 3, 16, 8, 15

and, as a nice bonus, this fixes the FIXME in
test/CodeGen/PowerPC/rlwimi-and.ll.

This commit does not include instruction-selection for i64 operations, those
will come later.

llvm-svn: 224318
2014-12-16 05:51:41 +00:00
Saleem Abdulrasool
55dd0f7b1d ARM: 80-column
clang-format a function with an overly long string constant.  NFC.

llvm-svn: 224314
2014-12-16 04:10:10 +00:00
Matthias Braun
93f392ca19 LiveRangeCalc: Rewrite subrange calculation
This changes subrange calculation to calculate subranges sequentially
instead of in parallel. The code is easier to understand that way and
addresses the code review issues raised about LiveOutData being
hard to understand/needing more comments by removing them :)

llvm-svn: 224313
2014-12-16 04:03:38 +00:00