mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-19 19:12:56 +02:00
Commit Graph

28332 Commits

Author SHA1 Message Date
Chandler Carruth
7381b41ffe [x86] Remove a low-value test that was just checking how we cleared
a register. We have lots of tests covering this.

llvm-svn: 228133
2015-02-04 10:47:34 +00:00
Chandler Carruth
d418a34d82 [x86] Mechanically update a bunch of tests' check lines using the latest
version of the script.

Changes include:
- Using the VEX prefix
- Skipping more detail when we have useful shuffle comments to match
- Matching more shuffle comments that have been added to the printer
  (yay!)
- Matching the destination registers of some AVX instructions
- Stripping trailing whitespace that crept in
- Fixing indentation issues

Nothing interesting going on here. I'm just trying really hard to ensure
these changes don't show up in the diffs with actual changes to the
backend.

llvm-svn: 228132
2015-02-04 10:46:53 +00:00
Renato Golin
f8fd9bab9d Reverting VLD1/VST1 base-updating/post-incrementing combining
This reverts patches 223862, 224198, 224203, and 224754, which were all
related to the vector load/store combining and were reverted/reapplied
a few times due to the same alignment problems we're seeing now.

Further tests, mainly self-hosting Clang, will be needed to reapply this
patch in the future.

llvm-svn: 228129
2015-02-04 10:11:59 +00:00
Chandler Carruth
bc1e3c9278 [x86] Include the destination register in the check-lines for AVX
instructions.

No actual change here.

llvm-svn: 228127
2015-02-04 09:18:27 +00:00
Chandler Carruth
4044a6e4ff [x86] Add some tests I missed in the prior commit to cover blends with
zero for v8i16 as well.

These exhibit the same domain badness, but also exhibit other weaknesses
in our blend lowering. More fixes to come.

llvm-svn: 228126
2015-02-04 09:15:46 +00:00
Chandler Carruth
61ac2c112b [x86] Start to introduce bit-masking based blend lowering.
This is the simplest form of bit-math based blending which only fires
when we are blending with zero and is relatively profitable. I've only
enabled this path on very specific lowering strategies. I'm planning to
widen its applicability in subsequent patches, but so far you'll notice
that even though we get fewer shufps instructions, we *still* do the bit
math in the FP execution port. I'm looking into why this is still
happening.
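
As a rough illustration only (my sketch, not code from the patch): blending a vector with zero lane-by-lane is the same as ANDing with a mask that is all-ones in the kept lanes and all-zeros elsewhere, which is the bit-math form this path emits.

  #include <stdint.h>

  /* Illustrative: lanewise "blend x with zero" expressed as bit math.
     keep[i] selects x[i] (nonzero) or zero; the AND with an all-ones /
     all-zeros mask stands in for a shuffle or blend instruction. */
  static void blend_with_zero(const uint32_t x[4], const int keep[4],
                              uint32_t out[4]) {
    for (int i = 0; i < 4; ++i) {
      uint32_t mask = keep[i] ? 0xFFFFFFFFu : 0u;
      out[i] = x[i] & mask;
    }
  }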

llvm-svn: 228124
2015-02-04 09:06:05 +00:00
Chandler Carruth
55467700cf [x86] Add tests for blends-with-zero on 4-element vectors.
llvm-svn: 228122
2015-02-04 09:05:58 +00:00
Bill Schmidt
60f9638fe2 [PowerPC] Handle 32-bit targets properly in PPCTLSDynamicCall.cpp
llvm-svn: 228116
2015-02-04 05:51:56 +00:00
Frederic Riss
cba433502b Fix some unnoticed/unwanted behavior change from r222319.
The ARM assembler allows register alias redefinitions as long as it
targets the same register. r222319 broke that. In the AArch64 case
it would just produce a new warning, but in the ARM case it would
error out on previously accepted assembler.

llvm-svn: 228109
2015-02-04 03:10:03 +00:00
Kostya Serebryany
f9d1d4b256 [sanitizer] add another workaround for PR 17409: when over a threshold emit coverage instrumentation as calls.
llvm-svn: 228102
2015-02-04 01:21:45 +00:00
Kevin Enderby
baa1461a00 Add code to llvm-objdump so the -section option with -macho will disassemble sections
that have attributes indicating they contain instructions.

llvm-svn: 228101
2015-02-04 01:01:38 +00:00
Chandler Carruth
2b858b7b20 [x86] Refresh the checks of a number of tests using
update_llc_test_checks.py.

The exact format of the checks has changed over time. This includes
different indenting rules, new shuffle comments that have been added,
and more operand hiding behind regular expressions.

No functional change to the tests are expected here, but this will make
subsequent patches have a clean diff as they change shuffle lowering.

llvm-svn: 228097
2015-02-04 00:58:42 +00:00
Chandler Carruth
4d271d9dd1 [x86] Switch to using the long '--check-prefix' form which the
update_llc_test_checks.py script uses, and refresh the checks in this
test.

No functionality changed here, just bringing this test up to work with
automated updates using the python script.

llvm-svn: 228096
2015-02-04 00:58:40 +00:00
Chandler Carruth
d489a3effe [x86] Port this test to use utils/update_llc_test_checks.py.
This will make it easy to update as I change some parts of the X86
backend, makes it more clear what instruction differences are
introduced, and I find it makes it a bit easier to read as well.

llvm-svn: 228095
2015-02-04 00:58:37 +00:00
Sanjay Patel
b1e7e01db7 improved CHECK
llvm-svn: 228086
2015-02-04 00:24:06 +00:00
Owen Anderson
ec6d708888 Remove a gross usage of environment variables in MachineVerifier, replacing it with support for setting the -verify-machineinstrs flag via an environment variable in LIT.
This preserves the handy functionality of force-enabling the MachineVerifier, without the need to embed usage of environment variables in LLVM client applications.

llvm-svn: 228079
2015-02-04 00:02:59 +00:00
Simon Pilgrim
eee3b225d9 [X86][SSE] psrl(w/d/q) and psll(w/d/q) bit shifts for SSE2
Patch to match cases where shuffle masks can be reduced to bit shifts. Similar to byte shift shuffle matching from D5699.
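
For intuition only (my example, not from the patch): a shuffle that slides lanes toward one end and fills the vacated lanes with zeros is exactly a whole-register shift. With four 16-bit lanes packed little-endian into a 64-bit value:

  #include <stdint.h>

  /* Illustrative: producing <zero, a0, a1, a2> from <a0, a1, a2, a3> is
     just a 16-bit left shift of the packed 64-bit value. */
  static uint64_t shuffle_as_shift(uint64_t packed_lanes) {
    return packed_lanes << 16;
  }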

Differential Revision: http://reviews.llvm.org/D6649

llvm-svn: 228047
2015-02-03 21:58:29 +00:00
Bill Schmidt
d20879d5e6 [PowerPC] Implement the vpopcnt instructions for POWER8
Patch by Kit Barton.

Add the vector population count instructions for byte, halfword, word,
and doubleword sizes.  There are two major changes here:

    PPCISelLowering.cpp: Make CTPOP legal for vector types.
    PPCRegisterInfo.td: Added v2i64 to the VRRC register
      definition. This is needed for the doubleword variations of the
      integer ops that were added in P8. 

Test Plan

Test the instruction vpcnt* encoding/decoding in ppc64-encoding-vmx.s

Test the generation of the vpopcnt instructions for various vector
data types.  When adding the v2i64 type to the Vector Register set, I
also needed to add the appropriate bit conversion patterns between
v2i64 and the existing vector types.  Testing for these conversions
was also added in the test case by passing a different vector type as
a parameter into the test functions.  There is also a run step that
will ensure the vpopcnt instructions are generated when the vsx
feature is disabled.
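
To make the operation concrete (my sketch, not a test from the patch), the now-legal vector CTPOP simply counts set bits per element:

  #include <stdint.h>

  /* Illustrative: per-element population count for a v4i32, using the
     GCC/Clang builtin on each lane. */
  static void vector_popcount(const uint32_t in[4], uint32_t out[4]) {
    for (int i = 0; i < 4; ++i)
      out[i] = (uint32_t)__builtin_popcount(in[i]);
  }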

llvm-svn: 228046
2015-02-03 21:58:23 +00:00
Chandler Carruth
e4646d63a8 [x86] Add two truly horrific test cases for the new vector shuffle
lowering. I'm prepping patches to improve these, and this will let the
delta of those patches show the improvement. =]

llvm-svn: 228044
2015-02-03 21:56:28 +00:00
Chandler Carruth
7c1eb70e22 [x86] Update the indent and layout of some tests in this file. NFC
This is just to remove noise from using the update_llc_test_checks
script.

llvm-svn: 228043
2015-02-03 21:56:24 +00:00
Duncan P. N. Exon Smith
b0edee547b AsmParser: Recognize DW_TAG_* constants
Recognize `DW_TAG_` constants in assembly, and output them by default for
`GenericDebugNode`.

llvm-svn: 228042
2015-02-03 21:56:01 +00:00
Duncan P. N. Exon Smith
55694c075d IR: Assembly and bitcode for GenericDebugNode
llvm-svn: 228041
2015-02-03 21:54:14 +00:00
Marek Olsak
fa2a952243 R600/SI: Remove the -CHECK suffix from all FileCheck prefixes in LIT tests
llvm-svn: 228040
2015-02-03 21:53:27 +00:00
Marek Olsak
7724be694b R600/SI: Fix B64 VALU shifts on VI
SI only has standard versions. VI only has REV versions.

Tested-by: Michel Dänzer <michel.daenzer@amd.com>
llvm-svn: 228037
2015-02-03 21:53:01 +00:00
Justin Bogner
84c1a035e8 InstrProf: Remove CoverageMapping::HasCodeBefore, it isn't used
It's not entirely clear to me what this field was meant for, but it's
always false. Remove it.

llvm-svn: 228034
2015-02-03 21:35:36 +00:00
Chandler Carruth
0a9c0a2838 [x86] Tweak my update script to use test case function names starting
with 'stress' to indicate that the specific output isn't interesting and
relax them to only check the last instruction (a ret).

I've updated the one test case that really uses this to name the one
'stress_test' which was actually producing output we can directly check.
With this, the script doesn't introduce noise when run over the v16 test
file.

llvm-svn: 228033
2015-02-03 21:26:45 +00:00
Colin LeMahieu
3534179cf0 [Hexagon] Converting XTYPE/SHIFT intrinsics. Cleaning out old intrinsic patterns and updating tests.
llvm-svn: 228026
2015-02-03 20:40:52 +00:00
Daniel Berlin
2d2eb452e9 Allow PRE to insert no-cost phi nodes
llvm-svn: 228024
2015-02-03 20:37:08 +00:00
Simon Pilgrim
6b1cbac334 [X86][SSE] Added general integer shuffle matching for MOVQ instruction
This patch adds general shuffle pattern matching for the MOVQ zero-extend instruction (copy lower 64 bits, zero upper) for all 128-bit integer vectors; it is added as a fallback test in lowerVectorShuffleAsZeroOrAnyExtend.
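
As a sketch of the pattern being matched (my illustration, not the patch itself): MOVQ keeps the low 64 bits of the 128-bit source and zeroes the upper 64 bits, i.e. the v4i32 shuffle mask <0, 1, zero, zero>:

  #include <stdint.h>

  /* Illustrative: MOVQ-style zero extension on a v4i32 (little-endian
     lane order): keep lanes 0 and 1, zero lanes 2 and 3. */
  static void movq_zero_upper(const uint32_t in[4], uint32_t out[4]) {
    out[0] = in[0];
    out[1] = in[1];
    out[2] = 0;
    out[3] = 0;
  }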

llvm-svn: 228022
2015-02-03 20:09:18 +00:00
Colin LeMahieu
22fa1ee703 [Hexagon] Updating XTYPE/PRED intrinsics.
llvm-svn: 228019
2015-02-03 19:43:59 +00:00
Jingyue Wu
4e99b65428 Add straight-line strength reduction to LLVM
Summary:
Straight-line strength reduction (SLSR) is implemented in GCC but not yet in
LLVM. It has proven to effectively simplify statements derived from an unrolled
loop, and can potentially benefit many other cases too. For example,

LLVM unrolls

  #pragma unroll
  for (int i = 0; i < 3; ++i) {
    sum += foo((b + i) * s);
  }

into

  sum += foo(b * s);
  sum += foo((b + 1) * s);
  sum += foo((b + 2) * s);

However, no optimizations yet reduce the internal redundancy of the three
expressions:

  b * s
  (b + 1) * s
  (b + 2) * s

With SLSR, LLVM can optimize these three expressions into:

  t1 = b * s
  t2 = t1 + s
  t3 = t2 + s

This commit is only an initial step towards implementing a series of such
optimizations. I will implement more (see TODO in the file commentary) in the
near future. This optimization is enabled for the NVPTX backend for now.
However, I am more than happy to push it to the standard optimization pipeline
after more thorough performance tests.

Test Plan: test/StraightLineStrengthReduce/slsr.ll

Reviewers: eliben, HaoLiu, meheff, hfinkel, jholewinski, atrick

Reviewed By: jholewinski, atrick

Subscribers: karthikthecool, jholewinski, llvm-commits

Differential Revision: http://reviews.llvm.org/D7310

llvm-svn: 228016
2015-02-03 19:37:06 +00:00
Colin LeMahieu
dc6eea20d4 [Hexagon] Updating XTYPE/PERM intrinsics.
llvm-svn: 228015
2015-02-03 19:36:59 +00:00
Simon Pilgrim
f6904ffa22 [X86][AVX2] Enabled shuffle matching for the AVX2 zero extension (128bit -> 256bit) vpmovzx* instructions.
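
As a hedged illustration of what the vpmovzx*-style widening does (my example, not from the patch): each narrow source lane is zero-extended into a wider destination lane, doubling the vector width from 128 to 256 bits.

  #include <stdint.h>

  /* Illustrative: vpmovzxwd-like widening -- eight 16-bit lanes are
     zero-extended into eight 32-bit lanes. */
  static void zext_v8i16_to_v8i32(const uint16_t in[8], uint32_t out[8]) {
    for (int i = 0; i < 8; ++i)
      out[i] = (uint32_t)in[i];
  }
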
Differential Revision: http://reviews.llvm.org/D7251

llvm-svn: 228014
2015-02-03 19:34:09 +00:00
Rafael Espindola
b49c9ab599 Fix typo in test/CodeGen/X86/sibcall.ll (pr22331).
llvm-svn: 228011
2015-02-03 19:20:26 +00:00
Colin LeMahieu
59d3eac138 [Hexagon] Adding missing vector multiply instruction encodings. Converting multiply intrinsics and updating tests.
llvm-svn: 228010
2015-02-03 19:15:11 +00:00
Sanjay Patel
dd58512572 Merge consecutive 16-byte loads into one 32-byte load (PR22329)
This patch detects consecutive vector loads using the existing 
EltsFromConsecutiveLoads() logic. This fixes:
http://llvm.org/bugs/show_bug.cgi?id=22329

This patch effectively reverts the tablegen additions of D6492 / 
http://reviews.llvm.org/rL224344 ...which in hindsight were a horrible hack.

The test cases that were added with that patch are simply modified to load
from varying offsets of a base pointer. These loads did not match the existing
tablegen patterns.

A happy side effect of doing this optimization earlier is that we can now fold
the load into a math op where possible; this is shown in some of the updated
checks in the test file.
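
A minimal sketch of the shape being combined (assumptions mine, not a test from the patch): two 16-byte loads at consecutive offsets from one base pointer cover the same 32 bytes as a single wide load, so they can be merged when the wider load is legal for the target.

  #include <stdint.h>
  #include <string.h>

  /* Illustrative: the two memcpy calls model consecutive 16-byte loads
     from 'p'; together they read exactly the 32 bytes a single wide
     load would. */
  static void load_two_halves(const uint8_t *p, uint8_t lo[16], uint8_t hi[16]) {
    memcpy(lo, p, 16);        /* bytes [0, 16)  */
    memcpy(hi, p + 16, 16);   /* bytes [16, 32) */
  }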

Differential Revision: http://reviews.llvm.org/D7303

llvm-svn: 228006
2015-02-03 18:54:00 +00:00
Colin LeMahieu
530baa45f7 [Hexagon] Converting complex number intrinsics and adding tests.
llvm-svn: 227995
2015-02-03 18:16:28 +00:00
Colin LeMahieu
ed25cb4221 [Hexagon] Adding vector intrinsics for alu32/alu and xtype/alu.
llvm-svn: 227993
2015-02-03 18:01:45 +00:00
Marek Olsak
31489249b9 R600/SI: Don't generate non-existent LSHL, LSHR, ASHR B32 variants on VI
This can happen when a REV instruction is commuted.

The trick is not to define the _vi versions of instructions, which has these
consequences:
- code generation will always fail if a pseudo cannot be lowered
  (very useful to catch bugs where an unsupported instruction somehow makes
   it to the printer)
- ability to query if a pseudo can be lowered, which is done in commuteOpcode
  to prevent REV from commuting to non-REV on VI

Tested-by: Michel Dänzer <michel.daenzer@amd.com>
llvm-svn: 227990
2015-02-03 17:38:12 +00:00
Marek Olsak
08103dc7bd R600/SI: Fix dependency between instruction writing M0 and S_SENDMSG on VI (v2)
This fixes a hang when using an empty geometry shader.

v2: - don't add s_nop when followed by s_waitcnt
    - cosmetic changes

Tested-by: Michel Dänzer <michel.daenzer@amd.com>
llvm-svn: 227986
2015-02-03 17:37:52 +00:00
Sanjay Patel
fb26cf0017 Fix program crashes due to alignment exceptions generated for SSE memop instructions (PR22371).
r224330 introduced a bug by misinterpreting the "FeatureVectorUAMem" bit.
The commit log says that change did not affect anything, but that's not correct.
That change allowed SSE instructions to have unaligned mem operands folded into
math ops, and that's not allowed in the default specification for any SSE variant. 

The bug is exposed when compiling for an AVX-capable CPU that had this feature
flag but without enabling AVX codegen. Another mistake in r224330 was not adding
the feature flag to all AVX CPUs; the AMD chips were excluded.

This is part of the fix for PR22371 ( http://llvm.org/bugs/show_bug.cgi?id=22371 ).

This feature bit is SSE-specific, so I've renamed it to "FeatureSSEUnalignedMem".
Changed the existing test case for the feature bit to reflect the new name and
renamed the test file itself to better reflect the feature.
Added runs to fold-vex.ll to check for the failing codegen.

Note that the feature bit is not set by default on any CPU because it may require a
configuration register setting to enable the enhanced unaligned behavior.
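
For context, a hedged C-level illustration of the hazard (my example, not from the patch): an unaligned SSE load feeding an arithmetic op must stay a separate movups; folding it into the arithmetic instruction's memory operand imposes a 16-byte alignment requirement and can fault.

  #include <xmmintrin.h>

  /* Illustrative: _mm_loadu_ps may read an unaligned address, so baseline
     SSE codegen must not fold this load into addps as a memory operand. */
  static __m128 add_from_unaligned(__m128 acc, const float *p) {
    __m128 v = _mm_loadu_ps(p);   /* possibly unaligned 16-byte load */
    return _mm_add_ps(acc, v);
  }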

llvm-svn: 227983
2015-02-03 17:13:04 +00:00
Bill Schmidt
9a2ff580f4 Disable 32-bit tests in tls-pic.ll until they can be repaired
llvm-svn: 227981
2015-02-03 16:57:38 +00:00
Bill Schmidt
9e8707495e Further revise too-restrictive test CodeGen/PowerPC/tls-pic.ll
llvm-svn: 227980
2015-02-03 16:33:55 +00:00
Bill Schmidt
a2f3bcb6f0 Further revise too-restrictive test CodeGen/PowerPC/tls-pic.ll
llvm-svn: 227978
2015-02-03 16:29:52 +00:00
Bill Schmidt
b3733e14a6 Revise too-restrictive test CodeGen/PowerPC/tls-pic.ll
llvm-svn: 227977
2015-02-03 16:24:05 +00:00
Bill Schmidt
c18bf56926 [PowerPC] Yet another approach to __tls_get_addr
This patch is a third attempt to properly handle the local-dynamic and
global-dynamic TLS models.

In my original implementation, calls to __tls_get_addr were hidden
from view until the asm-printer phase, at which point the underlying
branch-and-link instruction was created with proper relocations.  This
mostly worked well, but I used some repellent techniques to ensure
that the TLS_GET_ADDR nodes at the SD and MI levels correctly received
input from GPR3 and produced output into GPR3.  This proved to work
badly in the presence of multiple TLS variable accesses, with the
copies to and from GPR3 being scheduled incorrectly and generally
creating havoc.

In r221703, I addressed that problem by representing the calls to
__tls_get_addr as true calls during instruction lowering.  This had
the advantage of removing all of the bad hacks and relying on the
existing call machinery to properly glue the copies in place. It
looked like this was going to be the right way to go.

However, as a side effect of the recent discovery of problems with
linker optimizations for TLS, we discovered cases of suboptimal code
generation with this strategy.  The problem comes when tls_get_addr is
called for the same address, and there is a resulting CSE
opportunity.  It turns out that in such cases MachineCSE will common
the addis/addi instructions that set up the input value to
tls_get_addr, but will not common the calls themselves.  MachineCSE
does not have any machinery to common idempotent calls.  This is
perfectly sensible, since presumably this would be done at the IR
level, and introducing calls in the back end isn't commonplace.  In
any case, we end up with two calls to __tls_get_addr when one would
suffice, and that isn't good.
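
A hypothetical source-level example of the missed CSE (mine, not from the patch): two statements touching the same thread-local variable each need its address from __tls_get_addr with identical arguments, where a single call would do.

  /* Illustrative: under the general-dynamic TLS model (GCC/Clang __thread),
     each access needs the variable's address from __tls_get_addr; without
     CSE of the calls we can end up with two identical ones. */
  static __thread int counter;

  void bump_twice(void) {
    counter += 1;   /* first __tls_get_addr call */
    counter += 2;   /* same address again: the call should be commoned */
  }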

I presumed that the original design would have allowed commoning of
the machine-specific nodes that hid the __tls_get_addr calls, so as
suggested by Ulrich Weigand, I went back to that design and cleaned it
up so that the copies were properly held together by glue
nodes.  However, it turned out that this didn't work either...the
presence of copies to physical registers kept the machine-specific
nodes from being commoned also.

All of which leads to the design presented here.  This is a return to
the original design, except that no attempt is made to introduce
copies to and from GPR3 during instruction lowering.  Virtual registers
are used until prior to register allocation.  At that point, a special
pass is run that identifies the machine-specific nodes that hide the
tls_get_addr calls and introduces the copies to and from GPR3 around
them.  The register allocator then coalesces these copies away.  With
this design, MachineCSE succeeds in commoning tls_get_addr calls where
possible, and we get nice optimal code generation (better than GCC at
the moment, which does not common these calls).

One additional problem must be dealt with:  After introducing the
mentions of the physical register GPR3, the aggressive anti-dependence
breaker sees opportunities to improve scheduling by selecting a
different register instead.  Flags must be used on the instruction
descriptions to tell the anti-dependence breaker to keep its hands in
its pockets.

One thing missing from the original design was recording a definition
of the link register on the GET_TLS_ADDR nodes.  Doing this was found
to be insufficient to force a stack frame to be created, which led to
looping behavior because two different LR values were stored at the
same address.  This appears to have been an oversight in
PPCFrameLowering::determineFrameLayout(), which is repaired here.

Because MustSaveLR() returns true for calls to builtin_return_address,
this changed the expected behavior of
test/CodeGen/PowerPC/retaddr2.ll, which now stacks a frame but
formerly did not.  I've fixed the test case to reflect this.

There are existing TLS tests to catch regressions; the checks in
test/CodeGen/PowerPC/tls-store2.ll proved to be too restrictive in the
face of instruction scheduling with these changes, so I fixed that
up.

I've added a new test case based on the PrettyStackTrace module that
demonstrated the original problem. This checks that we get correct
code generation and that CSE of the calls to __tls_get_addr has taken
place.

llvm-svn: 227976
2015-02-03 16:16:01 +00:00
Sanjay Patel
35293871e5 Improve test to actually check for a folded load.
This test was checking for lack of a "movaps" (an aligned load)
rather than a "movups" (an unaligned load). It also included
a store which complicated the checking.

Add specific CPU runs to prevent subtarget feature flag overrides
from inhibiting this optimization.

llvm-svn: 227972
2015-02-03 15:37:18 +00:00
Bruno Cardoso Lopes
c6e5ceb6dd [X86][MMX] Improve transfer from mmx to i32
Improve EXTRACT_VECTOR_ELT DAG combine to catch conversion patterns
between x86mmx and i32 with more layers of indirection.

Before:
  movq2dq %mm0, %xmm0
  movd %xmm0, %eax
After:
  movd %mm0, %eax
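
A possible source pattern behind this (my sketch, not taken from the patch): reading the low 32 bits of an MMX value, which after the improved combine lowers straight to the single movd shown above.

  #include <mmintrin.h>

  /* Illustrative: extract the low 32 bits of an __m64; ideally a single
     movd rather than a movq2dq + movd round trip through XMM. */
  static int low32_of_mmx(__m64 v) {
    return _mm_cvtsi64_si32(v);
  }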

llvm-svn: 227969
2015-02-03 14:46:49 +00:00
Craig Topper
5a9b4168e7 [X86] Make fxsave64/fxrstor64/xsave64/xrstor64/xsaveopt64 parseable in AT&T syntax. Also make them the default output.
llvm-svn: 227963
2015-02-03 11:03:57 +00:00
Rafael Espindola
40410a9efa Propagate a better error message to the C api.
llvm-svn: 227934
2015-02-03 01:53:03 +00:00