mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-25 22:12:57 +02:00
Commit Graph

69160 Commits

Alexey Samsonov
90773577bb Move logic for calculating DBG_VALUE history map into separate file/class.
Summary: No functionality change.

Test Plan: llvm regression test suite.

Reviewers: dblaikie

Reviewed By: dblaikie

Subscribers: echristo, llvm-commits

Differential Revision: http://reviews.llvm.org/D3573

llvm-svn: 207708
2014-04-30 21:34:11 +00:00
David Blaikie
5560047d31 Emit DW_AT_object_pointer once, on the declaration, for each function.
This effectively reverts r164326, but adds some comments and
justification and ensures we /don't/ emit the DW_AT_object_pointer on
the (abstract and concrete) definitions (while still preserving it on
standalone definitions involving ObjC Blocks).

This does increase the size of member function declarations from 7 to 11
bytes, unfortunately, but still seems like the Right Thing to do so that
callers that see only the declaration still have the information about
the object pointer. That said, I don't know which DWARF consumers, if
any, lack a heuristic to guess this in the case of normal C++ member
functions - perhaps we can remove it entirely.

llvm-svn: 207705
2014-04-30 21:29:41 +00:00
Weiming Zhao
3625856a33 [ARM64] Prevent bit extraction from being adjusted by a following shift
For a pattern like ((x >> C1) & Mask) << C2, the DAG combiner may convert
it into (x >> (C1-C2)) & (Mask << C2), which makes the ubfx pattern harder
to match.
For example:
Given
  %shr = lshr i64 %x, 4
  %and = and i64 %shr, 15
  %arrayidx = getelementptr inbounds [8 x [64 x i64]]* @arr, i64 0, i64 2, i64 %and
  %0 = load i64* %arrayidx
With current shift folding, it takes 3 instrs to compute base address:
  lsr x8, x0, #1
  and x8, x8, #0x78
  add x8, x9, x8

If using ubfx, it only needs 2 instrs:
  ubfx  x8, x0, #4, #4
  add x8, x9, x8, lsl #3

This fixes bug 19589

llvm-svn: 207702
2014-04-30 21:07:24 +00:00
Reid Kleckner
1d2b4b0f8f Fix the clang-cl self-host build by defining ~DwarfDebug out of line
DwarfDebug.h has a SmallVector member containing a unique_ptr of an
incomplete type.  MSVC doesn't have key functions, so the vtable and
dtor are emitted in AsmPrinter.cpp, where DwarfDebug's ctor is called.
AsmPrinter.cpp includes DwarfUnit.h and doesn't get a complete definition
of DwarfTypeUnit.  We could fix the problem by including DwarfUnit.h in
DwarfDebug.h, but that would increase header bloat.  Instead, define
~DwarfDebug out of line.
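The C++ idiom, as a minimal stand-alone sketch (Widget/Impl are hypothetical
stand-ins, not the actual DwarfDebug/DwarfTypeUnit types): declare the
destructor in the header and define it in the .cpp file where the element
type is complete, so std::unique_ptr's deleter is only instantiated there.

  // Widget.h
  #include <memory>
  #include <vector>
  class Impl;                                   // incomplete type
  class Widget {
    std::vector<std::unique_ptr<Impl>> Parts;   // OK: no deleter instantiated yet
  public:
    Widget();
    ~Widget();                                  // declared only; defined out of line
  };

  // Widget.cpp
  #include "Widget.h"
  class Impl { /* full definition */ };
  Widget::Widget() = default;
  Widget::~Widget() = default;                  // unique_ptr<Impl> destroyed here, Impl is complete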

llvm-svn: 207701
2014-04-30 20:34:31 +00:00
Yi Jiang
5faccf9d45 Revert r207571 - Add slp vectorization to LTO passes
llvm-svn: 207693
2014-04-30 19:27:24 +00:00
Michael Zolotukhin
b12e921e64 [X86] Never hoist the shift value of a shift instruction.
There is no need to check if we want to hoist the immediate value of a
shift instruction. Simply return TCC_Free right away.
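A stand-alone model of the rule (immCost and the enums are hypothetical
names, not the X86 cost-model hook itself): the shift amount is always
encoded directly in the instruction, so it is reported as free and never
hoisted.

  #include <cstdint>

  enum Cost { TCC_Free = 0, TCC_Basic = 1 };
  enum Opcode { Shl, LShr, AShr, Add, Mul };

  // Cost of materialising an immediate operand; operand index 1 is the shift amount.
  Cost immCost(Opcode Op, unsigned OperandIdx, uint64_t Imm) {
    (void)Imm; // the value is irrelevant for the shift-amount case
    if ((Op == Shl || Op == LShr || Op == AShr) && OperandIdx == 1)
      return TCC_Free;   // encoded in the instruction; hoisting never helps
    // ... the usual immediate-cost heuristics would go here ...
    return TCC_Basic;
  }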

This change is like r206101, but for X86.

rdar://problem/16190769

llvm-svn: 207692
2014-04-30 19:17:32 +00:00
Alexey Samsonov
307f1e3874 Convert several loops over MachineFunction basic blocks to range-based loops
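A before/after sketch of the conversion (visitOld/visitNew are illustrative
helpers, not functions from the patch):

  #include "llvm/CodeGen/MachineBasicBlock.h"
  #include "llvm/CodeGen/MachineFunction.h"
  using namespace llvm;

  // Before: explicit iterator pair.
  void visitOld(MachineFunction &MF) {
    for (MachineFunction::iterator I = MF.begin(), E = MF.end(); I != E; ++I) {
      MachineBasicBlock &MBB = *I;
      (void)MBB; // ... per-block work ...
    }
  }

  // After: range-based loop over the same blocks.
  void visitNew(MachineFunction &MF) {
    for (MachineBasicBlock &MBB : MF) {
      (void)MBB; // ... per-block work ...
    }
  }
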
llvm-svn: 207683
2014-04-30 18:29:51 +00:00
Carlo Kok
3c208bd22a [IPO/MergeFunctions] Don't try to bitcast a struct return type; instead, recreate it with insert/extract value.
llvm-svn: 207679
2014-04-30 17:53:04 +00:00
David Majnemer
a9ce55b419 IR: Conservatively verify inalloca arguments
Summary: Try to spot obvious mismatches with inalloca use.

Reviewers: rnk

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D3572

llvm-svn: 207676
2014-04-30 17:22:00 +00:00
Rafael Espindola
df3c13c420 Simplify ELFObjectWriter::SymbolValue.
It now defers all offset computation to getSymbolOffset.

llvm-svn: 207674
2014-04-30 16:59:35 +00:00
Matheus Almeida
683b66743d [mips] Add instruction alias (negu).
Summary: negu $reg is equivalent to negu $reg, $reg.

Reviewers: dsanders

Reviewed By: dsanders

Differential Revision: http://reviews.llvm.org/D3510

llvm-svn: 207673
2014-04-30 16:53:49 +00:00
Matheus Almeida
0ee9f879a8 [mips] Add instruction alias (sltu).
Summary:
The pattern sltu $r1, $r2, $imm is found in handwritten assembly; it is
just a shorthand version of sltiu $r1, $r2, $imm.

Reviewers: dsanders

Reviewed By: dsanders

Differential Revision: http://reviews.llvm.org/D3508

llvm-svn: 207671
2014-04-30 16:29:56 +00:00
Hans Wennborg
1f76d4c2b7 ELFObjectWriter: deduplicate suffixes in strtab
We already do this for shstrtab, so might as well do it for strtab. This
extracts the string table building code into a separate class. The idea
is to use it for other object formats too.

I mostly wanted to do this for the general principle, but it does save a
little bit on object file size. I tried this on a clang bootstrap and
saved 0.54% on the sum of object file sizes (1.14 MB out of 212 MB for
a release build).
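A toy illustration of suffix sharing (SuffixSharingStrTab is hypothetical,
not the string table builder added by the patch): because strtab entries
are NUL-terminated, a string that is a suffix of one already emitted can
simply point into its tail instead of getting its own copy.

  #include <cstddef>
  #include <map>
  #include <string>

  struct SuffixSharingStrTab {
    std::string Data = std::string(1, '\0');   // offset 0 is the empty string
    std::map<std::string, std::size_t> Offsets;

    std::size_t add(const std::string &S) {
      auto It = Offsets.find(S);
      if (It != Offsets.end())
        return It->second;
      // If some existing entry ends with S, reuse its tail instead of copying.
      std::size_t Pos = Data.find(S + '\0');
      if (Pos == std::string::npos) {
        Pos = Data.size();
        Data += S;
        Data += '\0';
      }
      return Offsets[S] = Pos;
    }
  };
  // add("foobar") returns 1; a later add("bar") returns 4, pointing inside "foobar".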

Differential Revision: http://reviews.llvm.org/D3533

llvm-svn: 207670
2014-04-30 16:25:02 +00:00
Tim Northover
fb74738b66 ARM64: print fp immediates without using scientific notation.
llvm-svn: 207669
2014-04-30 16:13:34 +00:00
Tim Northover
deed2b6361 AArch64/ARM64: implement remaining TLS relocations (purely MC).
llvm-svn: 207668
2014-04-30 16:13:26 +00:00
Tim Northover
39138a8afc AArch64/ARM64: add specific diagnostic for MRS/MSR and enable tests.
llvm-svn: 207667
2014-04-30 16:13:20 +00:00
Tim Northover
2b0a39a525 AArch64/ARM64: accept and print floating-point immediate 0 as "#0.0"
It's been decided that in the future, the floating-point immediate in
instructions like "fcmeq v0.2s, v1.2s, #0.0" will be canonically "0.0", which
has been implemented on AArch64 already but not ARM64.

This fixes that issue.

llvm-svn: 207666
2014-04-30 16:13:07 +00:00
David Majnemer
b01129f6ff IR: Alloca clones should remember inalloca state
Pretty straightforward: we weren't propagating whether or not an
AllocaInst had 'inalloca' marked on it when it came time to clone it.

The inliner exposed this bug.  A reduced testcase is forthcoming.
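A hedged sketch of the missing propagation (duplicateAlloca is a
hypothetical helper that carries the bit over by hand; the actual fix
teaches the cloning machinery itself to preserve it):

  #include "llvm/IR/Instructions.h"
  #include "llvm/Support/Casting.h"
  using namespace llvm;

  // Duplicate an alloca (as e.g. the inliner does) without losing inalloca.
  AllocaInst *duplicateAlloca(AllocaInst &Orig) {
    AllocaInst *Clone = cast<AllocaInst>(Orig.clone());     // copies type, size, alignment
    Clone->setUsedWithInAlloca(Orig.isUsedWithInAlloca());  // the bit that used to be lost
    return Clone;
  }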

llvm-svn: 207665
2014-04-30 16:12:21 +00:00
Matheus Almeida
d62a84440c [mips] Add instruction alias (dsll and dsrl).
Summary:
The pattern dsll/dsrl $rd, $rt, $rs is found in handwritten assembly; it
is just a shorthand version of dsllv/dsrlv $rd, $rt, $rs.

Reviewers: dsanders

Reviewed By: dsanders

Differential Revision: http://reviews.llvm.org/D3486

llvm-svn: 207664
2014-04-30 16:00:49 +00:00
Tom Stellard
690d34daa4 R600/SI: Use VALU instructions for copying i1 values
We can't use SALU instructions for this since they ignore the EXEC mask
and are always executed.

This fixes several OpenCV tests.

llvm-svn: 207661
2014-04-30 15:31:33 +00:00
Tom Stellard
5a08396499 R600/SI: Teach moveToVALU how to handle some SMRD instructions
llvm-svn: 207660
2014-04-30 15:31:29 +00:00
Chad Rosier
ccc0a640e3 [ARM64][fast-isel] Fast-isel doesn't know how to handle f128.
llvm-svn: 207659
2014-04-30 15:29:57 +00:00
Matheus Almeida
242c341b6d [mips] Add instruction alias (sll and srl).
Summary:
The pattern sll/srl $rd, $rt, $rs is found in handwritten assembly; it is
just a shorthand version of sllv/srlv $rd, $rt, $rs.

Reviewers: dsanders

Reviewed By: dsanders

Differential Revision: http://reviews.llvm.org/D3483

llvm-svn: 207657
2014-04-30 15:23:04 +00:00
Sasa Stankovic
f649540f83 [mips] Fix MipsLongBranch pass to work when the offset from the branch to the
target cannot be determined accurately. This is the case for NaCl where the
sandboxing instructions are added in MC layer, after the MipsLongBranch pass.
It is also the case when the code has inline assembly. Instead of
calculating the offset in the MipsLongBranch pass, use %hi(sym1 - sym2)
and %lo(sym1 - sym2)
expressions that are resolved during the fixup.

This patch also deletes the microMIPS test file
test/CodeGen/Mips/micromips-long-branch.ll and implements microMIPS CHECKs
in a much simpler way in test/CodeGen/Mips/longbranch.ll, together with
MIPS32 and MIPS64.

llvm-svn: 207656
2014-04-30 15:06:25 +00:00
Tom Stellard
53186b159c R600: Remove unused function AMDGPUSubtarget::getDefaultSize()
llvm-svn: 207654
2014-04-30 14:20:53 +00:00
Evgeniy Stepanov
fdf3d359e7 [asan] Disable asm instrumentation on unsupported platforms.
Only emit calls to compiler-rt asm routines on platforms where they are
present (currently limited to linux i386/x86_64).

Patch by Yuri Gorshenin.

llvm-svn: 207651
2014-04-30 14:04:31 +00:00
Tim Northover
c123418ea4 ARM64: print lsr instead of lsrv for variable shifts (etc)
The canonical syntax for shifts by a variable amount does not end with 'v', but
that syntax should be supported as an alias (presumably for legacy reasons).

llvm-svn: 207649
2014-04-30 13:37:07 +00:00
Tim Northover
e4924b5483 ARM64: use 32-bit operations for uxtb & uxth
Testing will be enabled shortly with basic-a64-instructions.s

llvm-svn: 207648
2014-04-30 13:37:02 +00:00
Tim Northover
c748cd1ad3 AArch64/ARM64: allow smaller granule relocations on MOVZ/MOVN
Testing will be enabled shortly with basic-a64-instructions.s

llvm-svn: 207647
2014-04-30 13:36:59 +00:00
Tim Northover
0799faeab2 AArch64/ARM64: copy support for bCC instead of b.CC across.
llvm-svn: 207646
2014-04-30 13:36:56 +00:00
Tim Northover
9265793bc8 AArch64/ARM64: expunge CPSR from the sources
AArch64 does not have a CPSR register in the same way that AArch32 does. Most
of its compiler-relevant roles have been taken over by the more specific NZCV
register (representing just the flags set by normal instructions).

Its system control functions still remain, but are now under the
pseudo-register referred to as "PSTATE". They're accessed via various MRS & MSR
instructions described in the reference manual.

llvm-svn: 207645
2014-04-30 13:14:14 +00:00
Tim Northover
dd37c75601 AArch64/ARM64: use HS instead of CS & LO instead of CC.
On instructions using the NZCV register, a couple of conditions have dual
representations: HS/CS and LO/CC (meaning unsigned-higher-or-same/carry-set and
unsigned-lower/carry-clear). The first of these is more descriptive in most
circumstances, so we should print it.

llvm-svn: 207644
2014-04-30 13:14:03 +00:00
Rafael Espindola
492391e686 Grammar fix.
Thanks to Saleem Abdulrasool for noticing it.

llvm-svn: 207643
2014-04-30 12:42:22 +00:00
Daniel Sanders
2feda1892d [mips][msa] Fix vector insertions where the index is variable
Summary:
This isn't supported directly, so we rotate the vector by the desired
number of elements, insert into element zero, then rotate back.

The i64 case generates rather poor code on MIPS32. There is an obvious
optimisation to be made in future (do both insert.w's inside a shared 
rotate/unrotate sequence) but for now it's sufficient to select valid code
instead of aborting.
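A scalar model of the trick (purely illustrative; the real change lowers
this to MSA rotate and insert nodes in the SelectionDAG):

  #include <algorithm>
  #include <array>
  #include <cstddef>

  // Insert Val at runtime index Idx by rotating the target lane to lane 0,
  // inserting at the fixed lane 0, then rotating back.
  template <std::size_t N>
  void insertViaRotate(std::array<int, N> &V, std::size_t Idx, int Val) {
    std::rotate(V.begin(), V.begin() + Idx, V.end());             // lane Idx -> lane 0
    V[0] = Val;                                                   // fixed-index insert
    std::rotate(V.begin(), V.begin() + (N - Idx) % N, V.end());   // undo the rotation
  }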

Depends on D3536

Reviewers: matheusalmeida

Reviewed By: matheusalmeida

Differential Revision: http://reviews.llvm.org/D3537

llvm-svn: 207640
2014-04-30 12:09:32 +00:00
Tim Northover
fb9c502a68 ARM64: accept ELF-relocated load/store insts without a #.
E.g. we print "ldr x0, [x0, :lo12:symbol]" so we need to accept that syntax
too.

llvm-svn: 207639
2014-04-30 12:00:20 +00:00
Tim Northover
70276fb203 ARM64: remove duplication by templating InstPrinter methods
No functional change, so no tests.

llvm-svn: 207638
2014-04-30 11:43:36 +00:00
Matheus Almeida
33be312700 [mips] Add support for .cpload.
Summary:
This directive is used for setting up $gp in the beginning of a function.
It expands to three instructions if PIC is enabled:
lui   $gp, %hi(_gp_disp)
addiu $gp, $gp, %lo(_gp_disp)
addu  $gp, $gp, $reg

_gp_disp is a special symbol that the linker sets to the distance between
the lui instruction and the context pointer (_gp).

Reviewers: dsanders

Reviewed By: dsanders

Differential Revision: http://reviews.llvm.org/D3480

llvm-svn: 207637
2014-04-30 11:28:42 +00:00
Tim Northover
beb9daa30f ARM64: use hex immediates for movz/movk instructions
Since these are mostly used in "lsl #16", "lsl #32", "lsl #48" combinations to
piece together an immediate in 16-bit chunks, hex is probably the most
appropriate format.

llvm-svn: 207635
2014-04-30 11:19:40 +00:00
Tim Northover
b3572de3c8 ARM64: hexify printing various immediate operands
This is mostly aimed at the NEON logical operations and MOVI/MVNI (since they
accept weird shifts which are more naturally understandable in hex notation).

Also changes BRK/HINT etc, which is probably a neutral change, but easier than
the alternative.

llvm-svn: 207634
2014-04-30 11:19:28 +00:00
Tim Northover
357012cebb ARM64: print canonical syntax for add/sub (imm) instructions.
Since these instructions only accept a 12-bit immediate, possibly shifted left
by 12, the canonical syntax used by the architecture reference manual is "#N {,
lsl #12 }". We should accept an immediate that has already been shifted, (e.g.

Also, print a comment giving the full addend since it can be helpful.

llvm-svn: 207633
2014-04-30 11:19:15 +00:00
Chandler Carruth
bd97884116 [LCG] Add the really, *really* boring edge insertion case: adding an
edge entirely within an existing SCC. Shockingly, making the connected
component more connected is ... a total snooze fest. =]

Anyways, it's wired up, and I even added a test case to make sure it
pretty much sorta works. =D

llvm-svn: 207631
2014-04-30 10:48:36 +00:00
James Molloy
b4aaad8248 [ARM64] Simplify if condition.
v2f32 and v4f32 were missed out of these conditions, so this is also
a bugfix.

llvm-svn: 207628
2014-04-30 10:15:50 +00:00
James Molloy
c2391aaf55 [ARM64] Fix stupid copy-pasto in ARM64MCAsmInfo.cpp - aarch64_be -> arm64_be
llvm-svn: 207627
2014-04-30 10:15:46 +00:00
James Molloy
5b8b4a67ec [ARM64] Try and make the ELF MCJIT *slightly* less broken for ARM64.
A bunch of switch cases were missing, not just for ARM64 but also for
AArch64_BE. I've fixed all those, but there's zero testing as
ExecutionEngine tests are disabled when crosscompiling and I don't
have a native platform available to test on.

llvm-svn: 207626
2014-04-30 10:15:41 +00:00
James Molloy
b41c043c3c [ARM64] Ensure arm64_be is dealt with when emitting debug info.
This is a partial port of r204816 (cpirker "Elf support for MC-JIT
runtime dynamic linker") from AArch64 to ARM64.

llvm-svn: 207625
2014-04-30 10:15:35 +00:00
Tim Northover
5b5983afc0 ARM64: make sure FastISel uses a GPR64 source in 64-bit extensions.
llvm-svn: 207620
2014-04-30 09:32:01 +00:00
Chandler Carruth
aa6122effe [LCG] Actually test the *basic* edge removal bits (IE, the non-SCC
bits), and discover that it's totally broken. Yay tests. Boo bug. Fix
the basic edge removal so that it works by nulling out the removed edges
rather than actually removing them. This leaves the indices valid in the
map from callee to index, and preserves some of the locality for
iterating over edges. The iterator is made bidirectional to reflect that
it now has to skip over null entries, and the skipping logic is layered
onto it.

As future work, I would like to track essentially the "load factor" of
the edge list, and when it falls below a threshold do a compaction.

An alternative I considered (and continue to consider) is storing the
callees in a doubly linked list where each element of the list is in
a set (which is essentially the classical linked-hash-table
data structure). The problem with that approach is that either you need
to heap allocate the linked list nodes and use pointers to them, or use
a bucket hash table (with even *more* linked list pointer overhead!),
etc. It's pretty easy to get 5x overhead for values that are just
pointers. So far, I think punching holes in the vector, with periodic
compaction, is likely to be much more efficient overall in the space/time
tradeoff.
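A compact stand-alone model of the scheme (EdgeList/Node are hypothetical,
not the LazyCallGraph classes): removal punches a hole instead of erasing,
so the indices of the remaining callees stay valid, and iteration skips
the holes.

  #include <cassert>
  #include <cstddef>
  #include <unordered_map>
  #include <vector>

  struct Node;  // whatever the edges point at

  class EdgeList {
    std::vector<Node *> Edges;                    // null == removed (tombstone)
    std::unordered_map<Node *, std::size_t> IndexOf;  // callee -> slot, stays valid

  public:
    void insert(Node *Callee) {
      IndexOf[Callee] = Edges.size();
      Edges.push_back(Callee);
    }
    void remove(Node *Callee) {
      auto It = IndexOf.find(Callee);
      assert(It != IndexOf.end() && "not an edge");
      Edges[It->second] = nullptr;  // punch a hole; later indices do not shift
      IndexOf.erase(It);
    }
    // Iteration simply skips the holes (a real iterator would do this lazily).
    template <typename Fn> void forEach(Fn F) const {
      for (Node *E : Edges)
        if (E)
          F(E);
    }
  };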

llvm-svn: 207619
2014-04-30 07:45:27 +00:00
Benjamin Kramer
805d786f28 Add a <tuple> include to more files that aren't getting it transitively on MSVC.
llvm-svn: 207617
2014-04-30 07:21:01 +00:00
Craig Topper
79b097d66a Use makeArrayRef instead of calling the ArrayRef<T> constructor directly. I introduced most of these recently.
llvm-svn: 207616
2014-04-30 07:17:30 +00:00
Saleem Abdulrasool
b81ba9f22e ARM: support stack probe emission for Windows on ARM
This introduces emission of the stack probe function during stack lowering
for Windows on ARM. The stack on Windows on ARM is dynamically paged: any
allocation that crosses the boundary into the following guard page causes
a page fault, which the kernel must handle to ensure the page is faulted
in. If that does not happen and a write then touches memory beyond the
guard page, the page fault goes unserviced, resulting in abnormal program
termination.

The watermark for the stack probe appears to be at 4080 bytes (for
accommodating the stack guard canaries and stack alignment) when SSP is
enabled.  Otherwise, the stack probe is emitted on the page size boundary of
4096 bytes.
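A rough stand-alone model of the watermark described above (needsStackProbe
and SSPSlack are hypothetical; the real check lives in the ARM frame
lowering for Windows targets):

  #include <cstdint>

  constexpr uint64_t PageSize = 4096;
  constexpr uint64_t SSPSlack = 16;   // room assumed for the stack guard canary / alignment

  bool needsStackProbe(uint64_t FrameSizeInBytes, bool StackProtectorEnabled) {
    // 4080 bytes with SSP enabled, otherwise the 4096-byte page boundary.
    uint64_t Watermark = StackProtectorEnabled ? PageSize - SSPSlack : PageSize;
    return FrameSizeInBytes >= Watermark;
  }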

llvm-svn: 207615
2014-04-30 07:05:07 +00:00