mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-24 05:23:45 +02:00
Commit Graph

71627 Commits

Author SHA1 Message Date
Joerg Sonnenberger
3be1146503 Rename PPCLinuxMCAsmInfo to PPCELFMCAsmInfo to better reflect the
systems it represents.

llvm-svn: 214755
2014-08-04 18:46:13 +00:00
Joerg Sonnenberger
660877af60 Allow .lcomm with alignment on ELF targets.
llvm-svn: 214754
2014-08-04 18:45:10 +00:00
Alex Lorenz
faf4f53aec Coverage: add HasCodeBefore flag to a mapping region.
This flag will be used by the coverage tool to help 
compute the execution counts for each line in a source file.

Differential Revision: http://reviews.llvm.org/D4746

llvm-svn: 214740
2014-08-04 18:00:51 +00:00
Eric Christopher
dd8d4e476d Move the R600 intrinsic support back to the target machine - there's
nothing subtarget dependent about the intrinsic support in any
backend as far as I can tell.

llvm-svn: 214738
2014-08-04 17:37:43 +00:00
Justin Bogner
cb125e9a3e Path: Stop claiming path::const_iterator is bidirectional
path::const_iterator claims that it's a bidirectional iterator, but it
doesn't satisfy all of the contracts for a bidirectional iterator.
For example, n3376 24.2.5 p6 says "If a and b are both dereferenceable,
then a == b if and only if *a and *b are bound to the same object",
but this doesn't work with how we stash and recreate Components.

This means that our use of reverse_iterator on this type is invalid
and leads to many of the valgrind errors we're hitting, as explained
by Tilmann Scheller here:

    http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20140728/228654.html

Instead, we admit that path::const_iterator is only an input_iterator,
and implement a second input_iterator for path::reverse_iterator (by
changing const_iterator::operator-- to reverse_iterator::operator++).
All of the uses of this just traverse once over the path in one
direction or the other anyway.

llvm-svn: 214737
2014-08-04 17:36:41 +00:00
Joerg Sonnenberger
51a8467117 Refactor SPRG instructions.
llvm-svn: 214733
2014-08-04 17:26:15 +00:00
Akira Hatanaka
ee3d211c82 [X86] Place parentheses around "isMask_32(STReturns) && N <= 2".
This corrects r214672, which was committed to silence a gcc warning.
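The corrected form of the assertion (shown here for clarity, per the title):

    assert(STReturns == 0 || (isMask_32(STReturns) && N <= 2));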

llvm-svn: 214732
2014-08-04 17:23:38 +00:00
Joerg Sonnenberger
f6049406d6 Add support for m[ft][di]bat[ul] instructions.
llvm-svn: 214731
2014-08-04 17:07:41 +00:00
Matt Arsenault
2d8adc4261 Use the known address space constant rather than checking it
llvm-svn: 214729
2014-08-04 16:55:35 +00:00
Matt Arsenault
55071ec9a7 R600: Remove unused include
llvm-svn: 214728
2014-08-04 16:55:33 +00:00
Eric Christopher
4ba92fd57d Add a dummy subtarget to the CPP backend target machine. This will
allow us to forward all of the standard TargetMachine calls to the
subtarget and still return null as we were before.

llvm-svn: 214727
2014-08-04 16:40:55 +00:00
Joerg Sonnenberger
b8c7e54901 Add features for PPC 4xx and e500/e500mc instructions.
Move the test cases for them into separate files.

llvm-svn: 214724
2014-08-04 15:47:38 +00:00
Robert Khasanov
35dfdfef2d [SKX] Enabling load/store instructions: encoding
Instructions: VMOVAPD, VMOVAPS, VMOVDQA8, VMOVDQA16, VMOVDQA32, VMOVDQA64, VMOVDQU8, VMOVDQU16, VMOVDQU32, VMOVDQU64, VMOVUPD, VMOVUPS.

Reviewed by Elena Demikhovsky <elena.demikhovsky@intel.com>

llvm-svn: 214719
2014-08-04 14:35:15 +00:00
Ulrich Weigand
5df23aacbe [PowerPC] Swap arguments to vpkuhum/vpkuwum on little-endian
In commit r213915, Bill fixed little-endian usage of vmrgh* and vmrgl*
by swapping the input arguments.  As it turns out, the exact same fix
is also required for the vpkuhum/vpkuwum patterns.

This fixes another regression in llvmpipe when vector support is
enabled.

Reviewed by Bill Schmidt.

llvm-svn: 214718
2014-08-04 13:53:40 +00:00
Aaron Ballman
cc57635052 Improving the name of the function parameter, which happens to solve two likely-less-than-useful MSVC warnings: warning C4258: 'I' : definition from the for loop is ignored; the definition from the enclosing scope is used.
llvm-svn: 214717
2014-08-04 13:51:27 +00:00
Ulrich Weigand
bf94969247 [PowerPC] MULHU/MULHS are not legal for vector types
I ran into some test failures where common code changed vector division
by constant into a multiply-high operation (MULHU).  But these are not
implemented by the back-end, so we failed to recognize the insn.

Fixed by marking MULHU/MULHS as Expand for vector types.
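For reference (illustrative C++, not part of the patch), MULHU is the
"unsigned multiply returning the high half" node, i.e. per 32-bit lane:

    #include <cstdint>
    uint32_t mulhu32(uint32_t a, uint32_t b) {
      return uint32_t((uint64_t(a) * uint64_t(b)) >> 32);  // high 32 bits of the product
    }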

llvm-svn: 214716
2014-08-04 13:27:12 +00:00
Ulrich Weigand
32b8ceb243 [PowerPC] Fix and improve vector comparisons
This patch refactors code generation of vector comparisons.

This fixes a wrong code-gen bug for ISD::SETGE for floating-point types,
and improves generated code for vector comparisons in general.

Specifically, the patch moves all logic deciding how to implement vector
comparisons into getVCmpInst, which gets two extra boolean outputs
indicating to its caller whether it needs to swap the input operands
and/or negate the result of the comparison.  Apart from implementing
these two modifications as directed by getVCmpInst, there is no need
to ever implement vector comparisons in any other manner; in particular,
there is never a need to perform two separate comparisons (e.g. one for
equal and one for greater-than, as code used to do before this patch).
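As an integer-only illustration of the Swap/Negate outputs (not code from the
patch; getVCmpInst handles the floating-point and vector details): with only a
"greater than" primitive available, a >= b can be emitted by swapping the
operands and negating the result:

    // a >= b  ==  !(b > a), i.e. Swap = true, Negate = true
    bool ge_via_gt(int a, int b) { return !(b > a); }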

Reviewed by Bill Schmidt.

llvm-svn: 214714
2014-08-04 13:13:57 +00:00
Daniel Sanders
ea3b302186 [mips] Add assembler support for '.set mipsX'.
Summary:
This patch also fixes an issue with the way the Mips assembler enables/disables architecture
features. Before this patch, the assembler never disabled feature bits. For example,
.set mips64
.set mips32r2

would result in the 'OR' of the mips64 and mips32r2 feature bits, which isn't right.
Unfortunately this isn't trivial to fix, because there's no easy way to clear
feature bits: the algorithm in MCSubtargetInfo (ToggleFeature) only clears the bits
that imply the feature being cleared, not the bits implied by the feature (there's a
fuller explanation in the code I added).
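For illustration only (hypothetical feature-bit values, not the assembler's
real ones), the old behaviour amounted to:

    #include <cstdint>
    // Hypothetical bit values, for illustration only:
    enum : uint64_t { Mips64Bits = 1, Mips32r2Bits = 2 };
    uint64_t setMips32r2AfterMips64() {
      uint64_t Bits = Mips64Bits;   // ".set mips64"
      Bits |= Mips32r2Bits;         // ".set mips32r2" -- the old code only ORs,
      return Bits;                  // so the mips64 bit is never cleared
    }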

Patch by Matheus Almeida and updated by Toma Tabacu

Reviewers: vmedic, matheusalmeida, dsanders

Reviewed By: dsanders

Subscribers: tomatabacu, llvm-commits

Differential Revision: http://reviews.llvm.org/D4123

llvm-svn: 214709
2014-08-04 12:20:00 +00:00
Chandler Carruth
6bda1ceee2 [x86] Just unilaterally prefer SSSE3-style PSHUFB lowerings over clever
use of PACKUS. It's cleaner that way.

I looked at implementing clever combine-based folding of PACKUS chains
into PSHUFB but it is quite hard and doesn't seem likely to be worth it.
The most annoying part would be detecting that the correct masking had
been done so that PACKUS-style instructions act as a blend operation rather
than actually saturating (as the name indicates). We generate
really nice code for what few test cases I've come up with that aren't
completely contrived for this by just directly preferring PSHUFB, and so
let's go with that strategy for now. =]

llvm-svn: 214707
2014-08-04 10:17:35 +00:00
Chandler Carruth
8632182226 [x86] Implement more aggressive use of PACKUS chains for lowering common
patterns of v16i8 shuffles.

This implements one of the more important FIXMEs for the SSE2 support in
the new shuffle lowering. We now generate the optimal shuffle sequence
for truncate-derived shuffles which show up essentially everywhere.
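For reference (illustrative C++, not from the patch), the PACKUS step packs
two vectors of signed 16-bit words into unsigned bytes with saturation, so
inputs pre-masked to 8 bits come out as a plain truncate; per 128-bit lane:

    #include <cstdint>
    void packuswb_ref(const int16_t lo[8], const int16_t hi[8], uint8_t out[16]) {
      auto sat = [](int16_t v) -> uint8_t { return v < 0 ? 0 : (v > 255 ? 255 : uint8_t(v)); };
      for (int i = 0; i < 8; ++i) { out[i] = sat(lo[i]); out[i + 8] = sat(hi[i]); }
    }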

Unfortunately, this exposes a weakness in other parts of the shuffle
logic -- we can no longer form PSHUFB here. I'll add the necessary
support for that and other things in a subsequent commit.

llvm-svn: 214702
2014-08-04 09:40:02 +00:00
Kevin Qin
348a8dd760 Revert "r214669 - MachineCombiner Pass for selecting faster instruction"
This commit broke "make check" for several hours, so get it reverted.

llvm-svn: 214697
2014-08-04 05:10:33 +00:00
NAKAMURA Takumi
9474954f12 MemoryBuffer: Don't use mmap when FileSize is multiple of 4k on Cygwin.
On Cygwin, getpagesize() returns 64k (the AllocationGranularity).

In r214580, the size of X86GenInstrInfo.inc became 1499136.

FIXME: We should reorganize getPageSize() on Win32 again.
The file mapping allocates addresses along AllocationGranularity boundaries, but the view is mapped in physical pages.

llvm-svn: 214681
2014-08-04 01:43:37 +00:00
Chandler Carruth
ea07cc714d [x86] Handle single input shuffles in the SSSE3 case more intelligently.
I spent some time looking into a better or more principled way to handle
this. For example, by detecting arbitrary "unneeded" ORs... But really,
there wasn't any point. We just shouldn't build blatantly wrong code this
late in the pipeline and then add more stages and logic later on
to fix it. Avoiding it in the first place is just too simple.

llvm-svn: 214680
2014-08-04 01:14:24 +00:00
Peter Zotov
18e1ab4b09 [LLVM-C] Add LLVM{IsConstantString,GetAsString,GetElementAsConstant}.
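A hedged usage sketch of the new entry points (assumes an LLVMValueRef 'Init'
holding a constant data sequence such as a string initializer; error handling
omitted):

    #include <llvm-c/Core.h>
    void inspectConstantString(LLVMValueRef Init) {
      if (LLVMIsConstantString(Init)) {
        size_t Len = 0;
        const char *Str = LLVMGetAsString(Init, &Len);          // not NUL-terminated; use Len
        LLVMValueRef FirstByte = LLVMGetElementAsConstant(Init, 0);
        (void)Str; (void)FirstByte;
      }
    }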
llvm-svn: 214676
2014-08-03 23:54:16 +00:00
Chandler Carruth
9918929946 [x86] Don't add nodes to the combined set (and prune subsequent
combines) until they are legal.

Doing it the old way could, when the stars align *just* right, cause
a node to get into the combine set prior to being legalized. Then, when
the same node showed up as an operand to another node later on (but not
so much later on that it had been deleted as dead) we would fail to add
it back to the worklist thinking it had already been combined. This
would in turn cause it to not be legalized. Fortunately, we can also
walk the operands looking for uncombined (and thus potentially
un-legalized) nodes late. It will still ensure that we walk all operands
of all nodes and send all of them through both the legalizer without
changes and the combiner at least once (which was the original goal of
this).

I have a test case for this bug, but it is terribly brittle. For
example, it will stop finding the bug the moment I enable the new
shuffle lowering. I don't yet have any test case that reliably exercises
this bug, and it isn't clear that it will be possible to craft one. It
is entirely possible that with the new shuffle lowering the two forms of
doing this are precisely equivalent. That doesn't mean we shouldn't take
the more conservative approach of insisting on things in the combined
set having survived the legalizer.

llvm-svn: 214673
2014-08-03 23:10:59 +00:00
Saleem Abdulrasool
1a36f331c8 X86: silence warning (-Wparentheses)
GCC 4.8.2 points out the ambiguity in evaluation of the assertion condition:

lib/Target/X86/X86FloatingPoint.cpp:949:49: warning: suggest parentheses around ‘&&’ within ‘||’ [-Wparentheses]
   assert(STReturns == 0 || isMask_32(STReturns) && N <= 2);

llvm-svn: 214672
2014-08-03 23:00:39 +00:00
Saleem Abdulrasool
189689a302 CodeGen: silence a warning
GCC 4.8.2 objects to the tautological condition in the assert as the unsigned
value is guaranteed to be >= 0.  Simplify the assertion by dropping the
tautological condition.

llvm-svn: 214671
2014-08-03 23:00:38 +00:00
Sanjay Patel
70baef2f02 fix for PR20354 - Miscompile of fabs due to vectorization
This is intended to be the minimal change needed to fix PR20354 ( http://llvm.org/bugs/show_bug.cgi?id=20354 ). The check for a vector operation was wrong; we need to check that the fabs itself is not a vector operation.

This patch will not generate the optimal code. A constant pool load and 'and' op will be generated instead of just returning a value that we can calculate in advance (as we do for the scalar case). I've put a 'TODO' comment for that here and expect to have that patch ready soon.
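For context (illustrative scalar C++, not from the patch), the 'and' op in
question just clears the sign bit, which is all fabs needs to do:

    #include <cstdint>
    #include <cstring>
    float fabs_via_and(float x) {
      uint32_t bits;
      std::memcpy(&bits, &x, sizeof bits);
      bits &= 0x7FFFFFFFu;                 // clear the sign bit
      std::memcpy(&x, &bits, sizeof x);
      return x;
    }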

There is a very similar optimization that we can do in visitFNEG, so I've put another 'TODO' there and expect to have another patch for that too.

llvm-svn: 214670
2014-08-03 22:48:23 +00:00
Gerolf Hoflehner
265dd68643 MachineCombiner Pass for selecting faster instruction
sequence -  AArch64 target support

 This patch turns off madd/msub generation in the DAGCombiner and generates
 them in the MachineCombiner instead. It replaces the original code sequence
 with the combined sequence when it is beneficial to do so.

 When there is no machine model support it always generates the madd/msub
 instruction. This is also true when the objective is to optimize for code
 size: when the combined sequence is shorter, it is always chosen and does
 not get evaluated.

 When there is a machine model the combined instruction sequence
 is evaluated for critical path and resource length using machine
 trace metrics and the original code sequence is replaced when it is
 determined to be faster.

 rdar://16319955

llvm-svn: 214669
2014-08-03 22:03:40 +00:00
Gerolf Hoflehner
b4a5e33ee0 MachineCombiner Pass for selecting faster instruction
sequence -  target independent framework

 When the DAGCombiner selects instruction sequences,
 it can increase the critical path or resource length.

 For example, on arm64 there are multiply-accumulate instructions (madd,
 msub). If, e.g., the equivalent multiply-add sequence is not on the
 critical path, it makes sense to select it instead of the combined,
 single accumulate instruction (madd/msub). The reason is that the
 conversion from add+mul to the madd could lengthen the critical path
 by the latency of the multiply.
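 A small illustration (assumed source; which form is emitted is exactly what
 this pass decides based on its estimates):

     // May lower on AArch64 either to "mul; add" or to a single
     // "madd x0, x0, x1, x2", depending on which is estimated to be faster.
     long muladd(long a, long b, long c) { return a * b + c; }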

 But the DAGCombiner would always combine and select the madd/msub
 instruction.

 This patch uses machine trace metrics to estimate critical path length
 and resource length of an original instruction sequence vs a combined
 instruction sequence and picks the faster code based on its estimates.

 This patch only commits the target independent framework that evaluates
 and selects code sequences. The machine instruction combiner is turned
 off for all targets and is expected to evolve over time by gradually
 handling DAGCombiner patterns in the target-specific code.

 This framework lays the groundwork for fixing
 rdar://16319955

llvm-svn: 214666
2014-08-03 21:35:39 +00:00
Saleem Abdulrasool
ca07b08496 MC: virtualise EmitWindowsUnwindTables
This makes EmitWindowsUnwindTables a virtual function and lowers the
implementation of the function to the X86WinCOFFStreamer.  This method is a
target specific operation.  This enables making the behaviour target dependent
by isolating it entirely to the target specific streamer.

llvm-svn: 214664
2014-08-03 18:51:26 +00:00
Saleem Abdulrasool
7b26bbada1 MC: rename Win64EHFrameInfo to WinEH::FrameInfo
The frame information stored in this structure is driven by the requirements for
Windows NT unwinding rather than Windows 64 specifically.  As a result, this
type can be shared across multiple architectures (ARM, AXP, MIPS, PPC, SH).
Rename this class in preparation for adding support for unwinding
information for Windows on ARM.

Take the opportunity to constify the members as everything except the
ChainedParent is read-only.  This required some adjustment to the label
handling.

llvm-svn: 214663
2014-08-03 18:51:17 +00:00
Matt Arsenault
90c2681a28 R600/SI: Fix extra whitespace in asm str
This slipped in in r214467, so something like

V_MOV_B32_e32  v0, ... is now printed with 2 spaces
between the instruction name and first operand.

llvm-svn: 214660
2014-08-03 05:27:14 +00:00
Manman Ren
97601eacaa [SimplifyCFG] fix accessing deleted PHINodes in switch-to-table conversion.
When we have a covered lookup table, make sure we don't delete PHINodes that
are cached in PHIs.

rdar://17887153

llvm-svn: 214642
2014-08-02 23:41:54 +00:00
Joerg Sonnenberger
1253396c1a tlbia support
llvm-svn: 214640
2014-08-02 20:16:29 +00:00
Joerg Sonnenberger
e208527e31 mfdcr / mtdcr support
llvm-svn: 214639
2014-08-02 20:00:26 +00:00
Erik Eckstein
4cdbc63bd2 fix bug 20513 - Crash in SLP Vectorizer
llvm-svn: 214638
2014-08-02 19:39:42 +00:00
Joerg Sonnenberger
ca3cb82714 Don't use additional arguments for dss and friends to satisfy DSS_Form,
when let can do the same thing. Keep the 64bit variants as codegen-only.
While they have a different register class, the encoding is the same for
32bit and 64bit mode. Having both present would otherwise confuse the
disassembler.

llvm-svn: 214636
2014-08-02 15:09:41 +00:00
James Molloy
240087c61a [AArch64] Teach DAGCombiner that converting two consecutive loads into a vector load is not a good transform when paired loads are available.
The combiner was creating Q-register loads and stores, which then had to be spilled because there are no callee-save Q registers!

llvm-svn: 214634
2014-08-02 14:51:24 +00:00
Chandler Carruth
b86a753f28 [x86] Remove the FIXME that was implemented in r214628. Managed to
forget to update the comment here... =/

llvm-svn: 214630
2014-08-02 11:34:23 +00:00
Chandler Carruth
ccbf36b272 [x86] Largely complete the use of PSHUFB in the new vector shuffle
lowering with a small addition to it and adding PSHUFB combining.

There is one obvious place in the new vector shuffle lowering where we
should form PSHUFBs directly: when without them we will unpack a vector
of i8s across two different registers and do a potentially 4-way blend
as i16s only to re-pack them into i8s afterward. This is the crazy
expensive fallback path for i8 shuffles and we can just directly use
pshufb here as it will always be cheaper (the unpack and pack are
two instructions so even a single shuffle between them hits our
three instruction limit for forming PSHUFB).
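For reference (illustrative C++, not from the patch), the byte shuffle PSHUFB
performs per 16-byte operand: each mask byte selects a source byte, and a set
high bit produces zero:

    #include <cstdint>
    void pshufb_ref(const uint8_t src[16], const uint8_t mask[16], uint8_t out[16]) {
      for (int i = 0; i < 16; ++i)
        out[i] = (mask[i] & 0x80) ? 0 : src[mask[i] & 0x0F];
    }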

However, this doesn't generate very good code in many cases, and it
leaves a bunch of common patterns not using PSHUFB. So this patch also
adds support for extracting a shuffle mask from PSHUFB in the X86
lowering code, and uses it to handle PSHUFBs in the recursive shuffle
combining. This allows us to combine through them, combine multiple ones
together, and generally produce sufficiently high quality code.

Extracting the PSHUFB mask is annoyingly complex because it could be
either pre-legalization or post-legalization. At least this doesn't have
to deal with re-materialized constants. =] I've added decode routines to
handle the different patterns that show up at this level and we dispatch
through them as appropriate.

The two primary test cases are updated. For the v16 test case there is
still a lot of room for improvement. Since I was going through it
systematically I left behind a bunch of FIXME lines that I'm hoping to
turn into ALL lines by the end of this.

llvm-svn: 214628
2014-08-02 10:39:15 +00:00
Chandler Carruth
a780975d83 [x86] Switch to using the variable we extracted this operand into.
Spotted this missed refactoring by inspection when reading code, and it
doesn't change the functionality at all.

llvm-svn: 214627
2014-08-02 10:29:36 +00:00
Chandler Carruth
9b3e1eb850 [x86] Fix a few typos in my comments spotted in passing.
llvm-svn: 214626
2014-08-02 10:29:34 +00:00
Chandler Carruth
6875b68d8c [x86] Teach the target shuffle mask extraction to recognize unary forms
of normally binary shuffle instructions like PUNPCKL and MOVLHPS.

This detects cases where a single register is used for both operands
making the shuffle behave in a unary way. We detect this and adjust the
mask to use the unary form which allows the existing DAG combine for
shuffle instructions to actually work at all.
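A minimal sketch of the adjustment described above (hypothetical helper, not
the patch's code): for a width-N shuffle whose two operands are the same
register, lane indices that point into the second operand can be folded back
into the first:

    #include <vector>
    void canonicalizeUnaryMask(std::vector<int> &Mask, int N) {
      for (int &M : Mask)
        if (M >= N)
          M -= N;   // "operand 1" lanes become the matching "operand 0" lanes
    }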

As a consequence, this uncovered a number of obvious bugs in the
existing DAG combine which are fixed. It also now canonicalizes several
shuffles even with the existing lowering. These typically are trying to
match the shuffle to the domain of the input where before we only really
modeled them with the floating point variants. All of the cases which
change to an integer shuffle here have something in the integer domain, so
there are no more or fewer domain crosses here AFAICT. Technically, it
might be better to go from a GPR directly to the floating point domain,
but detecting floating point *outputs* despite integer inputs is a lot
more code and seems unlikely to be worthwhile in practice. If folks are
seeing domain-crossing regressions here though, let me know and I can
hack something up to fix it.

Also as a consequence, a bunch of missed opportunities to form pshufb
now can be formed. Notably, splats of i8s now form pshufb.
Interestingly, this improves the existing splat lowering too. We go from
3 instructions to 1. Yes, we may tie up a register, but it seems very
likely to be worth it, especially when splatting the 0th byte (the
common case), since then we can use a zeroed register as the mask.

llvm-svn: 214625
2014-08-02 10:27:38 +00:00
Chandler Carruth
746864d542 [x86] Teach my pshufb comment printer to handle VPSHUFB forms as well as
PSHUFB forms. This will be important to update some AVX tests when I add
PSHUFB combining.

llvm-svn: 214624
2014-08-02 10:08:17 +00:00
Chandler Carruth
f8f86e7b92 [SDAG] Refactor the code which deletes nodes in the DAG combiner to do
so using a single helper which adds operands back onto the worklist.

Several places didn't rigorously do this but a couple already did.
Factoring them together and doing it rigorously is important to delete
things recursively early on in the combiner and get a chance to see
accurate hasOneUse values. While no existing test cases change, an
upcoming patch to add DAG combining logic for PSHUFB requires this to
work correctly.
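A hypothetical sketch of the shape of such a helper (names and details are
assumptions for illustration, not the patch's exact code):

    // Re-queue the operands before deleting N so that newly-dead or
    // newly-single-use operands get revisited by the combiner.
    void deleteAndRevisitOperands(SelectionDAG &DAG, SDNode *N) {
      for (unsigned i = 0, e = N->getNumOperands(); i != e; ++i)
        AddToWorklist(N->getOperand(i).getNode());   // assumed combiner worklist hook
      DAG.DeleteNode(N);
    }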

llvm-svn: 214623
2014-08-02 10:02:07 +00:00
Owen Anderson
6c991acdcf Fix issues with ISD::FNEG and ISD::FMA SDNodes where they would not be constant-folded
during DAGCombine in certain circumstances.  Unfortunately, triggering the issue
seems to require a pretty specific interaction of DAGCombines,
and I haven't been able to find a testcase that reproduces on X86, ARM, or AArch64.
The functionality added here is replicated in essentially every other DAG combine,
so it seems pretty obviously correct.

llvm-svn: 214622
2014-08-02 08:45:33 +00:00
Justin Bogner
af51eedfd7 CodeGen: Remove commented out code
These two lines have been commented out for over 4 years. They aren't
helping anyone.

llvm-svn: 214615
2014-08-02 06:47:07 +00:00
Akira Hatanaka
5a2758bfe7 [ARM] In dynamic-no-pic mode, ARM's post-RA pseudo expansion was incorrectly
expanding pseudo LOAD_STACK_GUARD using instructions that are normally used
in pic mode. This patch fixes the bug.

<rdar://problem/17886592>

llvm-svn: 214614
2014-08-02 05:40:40 +00:00
Lang Hames
f2bb6bf8f0 [MCJIT] Fix an overly-aggressive check in RuntimeDyldMachOARM.
This should fix the MachO_ARM_PIC_relocations.s test failures on some 32-bit
testers.

llvm-svn: 214613
2014-08-02 03:00:49 +00:00