mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-31 16:02:52 +01:00
Commit Graph

159 Commits

Author SHA1 Message Date
Evan Cheng
f84dd0cf40 Implement r160312 as a target independent DAG combine.
llvm-svn: 160354
2012-07-17 08:31:11 +00:00
Chandler Carruth
8a358b3669 Convert all tests using TCL-style quoting to use shell-style quoting.
This was done through the aid of a terrible Perl creation. I will not
paste any of the horrors here. Suffice it to say, it required multiple
staged rounds of replacements, state carried between them, and a few
nested-construct-parsing hacks that I'm not proud of. It happens, by
luck, to be able to deal with all the TCL-quoting patterns in evidence
in the LLVM test suite.

If anyone is maintaining large out-of-tree test trees, feel free to poke
me and I'll send you the steps I used to convert things, as well as
answer any painful questions etc. IRC works best for this type of thing
I find.

Once converted, switch the LLVM lit config to use ShTests the same as
Clang. In addition to being able to delete large amounts of Python code
from 'lit', this will also simplify the entire test suite and some of
lit's architecture.

Finally, the test suite runs 33% faster on Linux now. ;]
For my 16-hardware-thread (2x 4-core xeon e5520): 36s -> 24s

llvm-svn: 159525
2012-07-02 12:47:22 +00:00
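As an illustration of the quoting change (a made-up RUN line, not one of the converted tests), TCL-style braces simply become shell-style quotes:

; Old, TCL-style quoting:
; RUN: llc < %s -march=x86 | grep {movl %eax, %ecx}
; New, shell-style quoting:
; RUN: llc < %s -march=x86 | grep "movl %eax, %ecx"
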
Chandler Carruth
b27545af79 Rewrite three tests that had truly egregious abuses of 'grep' in them to
use FileCheck.

Aside from removing a dependence on TCL-style quoting, this also makes
the tests ... significantly more robust. =] It would be really, *really*
great if the maintainer(s) of the CellSPU backend went through and
systematically rewrote these tests to use FileCheck. There are a lot
more with abuses nearly this bad.

Another step along the path to a TclTest-free testsuite.

llvm-svn: 159523
2012-07-02 12:20:14 +00:00
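A hedged sketch of what such a conversion looks like (an invented CellSPU test with an assumed CHECK pattern, not one of the three tests rewritten here):

; Before: RUN: llc < %s -march=cellspu | grep rotqby | count 1
; After:
; RUN: llc < %s -march=cellspu | FileCheck %s

define i32 @extract(<4 x i32> %v, i32 %i) {
; CHECK: rotqby
  %e = extractelement <4 x i32> %v, i32 %i
  ret i32 %e
}
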
Chandler Carruth
728acc9bd9 Flip the new block-placement pass to be on by default.
This is mostly to test the waters. I'd like to get results from FNT
build bots and other bots running on non-x86 platforms.

This feature has been pretty heavily tested over the last few months by
me, and it fixes several of the execution time regressions caused by the
inlining work by preventing inlining decisions from radically impacting
block layout.

I've seen very large improvements in yacr2 and ackermann benchmarks,
along with the expected noise across all of the benchmark suite whenever
code layout changes. I've analyzed all of the regressions and fixed
them, or found them to be impossible to fix. See my email to llvmdev for
more details.

I'd like for this to be in 3.1 as it complements the inliner changes,
but if any failures are showing up or anyone has concerns, it is just
a flag flip and so can be easily turned off.

I'm switching it on tonight to try and get at least one run through
various folks' performance suites in case SPEC or something else has
serious issues with it. I'll watch bots and revert if anything shows up.

llvm-svn: 154816
2012-04-16 13:49:17 +00:00
Nadav Rotem
37734277f0 1. Remove the part of r153848 which optimizes shuffle-of-shuffle into a new
shuffle node because it could introduce new shuffle nodes that were not
   supported efficiently by the target.

2. Add a more restrictive shuffle-of-shuffle optimization for cases where the
   second shuffle reverses the transformation of the first shuffle.

llvm-svn: 154266
2012-04-07 21:19:08 +00:00
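A minimal sketch of case 2 (illustrative IR, not the original test): the second shuffle undoes the first, so the pair can fold back to the original vector.

define <4 x i32> @undo(<4 x i32> %x) {
  ; reverse the lanes, then reverse them back: the combined result is just %x
  %s1 = shufflevector <4 x i32> %x, <4 x i32> undef, <4 x i32> <i32 3, i32 2, i32 1, i32 0>
  %s2 = shufflevector <4 x i32> %s1, <4 x i32> undef, <4 x i32> <i32 3, i32 2, i32 1, i32 0>
  ret <4 x i32> %s2
}
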
Nadav Rotem
2729f54295 This commit contains a few changes that had to go in together.
1. Simplify xor/and/or (bitcast(A), bitcast(B)) -> bitcast(op (A,B))
   (and also scalar_to_vector).

2. Xor/and/or are indifferent to the swizzle operation (shuffle of one src).
   Simplify xor/and/or (shuff(A), shuff(B)) -> shuff(op (A, B))

3. Optimize swizzles of shuffles:  shuff(shuff(x, y), undef) -> shuff(x, y).

4. Fix an X86ISelLowering optimization which was very bitcast-sensitive.

Code which was previously compiled to this:

movd    (%rsi), %xmm0
movdqa  .LCPI0_0(%rip), %xmm2
pshufb  %xmm2, %xmm0
movd    (%rdi), %xmm1
pshufb  %xmm2, %xmm1
pxor    %xmm0, %xmm1
pshufb  .LCPI0_1(%rip), %xmm1
movd    %xmm1, (%rdi)
ret

Now compiles to this:

movl    (%rsi), %eax
xorl    %eax, (%rdi)
ret

llvm-svn: 153848
2012-04-01 19:31:22 +00:00
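A rough reconstruction of the kind of IR behind that example (assumed types and masks, not the original test): both operands are swizzled the same way, xor'ed, and swizzled back, so after the combines only the xor remains.

define void @f(<4 x i8>* %dst, <4 x i8>* %src) {
  %a = load <4 x i8>* %dst
  %b = load <4 x i8>* %src
  ; identical swizzles of both operands
  %sa = shufflevector <4 x i8> %a, <4 x i8> undef, <4 x i32> <i32 2, i32 3, i32 0, i32 1>
  %sb = shufflevector <4 x i8> %b, <4 x i8> undef, <4 x i32> <i32 2, i32 3, i32 0, i32 1>
  %x = xor <4 x i8> %sa, %sb
  ; swizzle back before storing
  %r = shufflevector <4 x i8> %x, <4 x i8> undef, <4 x i32> <i32 2, i32 3, i32 0, i32 1>
  store <4 x i8> %r, <4 x i8>* %dst
  ret void
}
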
Eli Bendersky
3ef88c1833 Continue cleanup of LIT, getting rid of the remaining artifacts from dejagnu
* Removed test/lib/llvm.exp - it is no longer needed 
* Deleted the dg.exp reading code from test/lit.cfg. There are no dg.exp files
  left in the test suite so this code is no longer required. test/lit.cfg is
  now much shorter and clearer 
* Removed a lot of duplicate code in lit.local.cfg files that need access to
  the root configuration, by adding a "root" attribute to the TestingConfig
  object. This attribute is dynamically computed to provide the same
  information as was previously provided by the custom getRoot functions. 
* Documented the config.root attribute in docs/CommandGuide/lit.pod

llvm-svn: 153408
2012-03-25 09:02:19 +00:00
Eli Bendersky
4afdeeb682 Replace all instances of dg.exp files with lit.local.cfg, since all tests are now run with LIT and not Dejagnu. dg.exp is no longer needed.
Patch reviewed by Daniel Dunbar. It will be followed by additional cleanup patches.

llvm-svn: 150664
2012-02-16 06:28:33 +00:00
Nadav Rotem
ea4aecb3e5 This patch addresses the problem of poor code generation for the zext
v8i8 -> v8i32 on AVX machines. The codegen often scalarizes ANY_EXTEND nodes.
The DAGCombiner has two optimizations that can mitigate the problem. First,
if all of the operands of a BUILD_VECTOR node are extracted from an ZEXT/ANYEXT
nodes, then it is possible to create a new simplified BUILD_VECTOR which uses
UNDEFS/ZERO values to eliminate the scalar ZEXT/ANYEXT nodes.
Second, another dag combine optimization lowers BUILD_VECTOR into a shuffle
vector instruction.

In the case of zext v8i8->v8i32 on AVX, a value in an XMM register is to be
shuffled into a wide YMM register.

This patch modifies the second optimization and allows the creation of
shuffle vectors even when the newly generated vector and the original vector
from which we extract the values are of different types.

llvm-svn: 150340
2012-02-12 15:05:31 +00:00
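The problematic pattern, sketched as IR (assumed form, not the original test):

define <8 x i32> @zext_v8i8(<8 x i8> %v) {
  ; on AVX this zext used to be scalarized; with the combines it lowers to shuffles
  %z = zext <8 x i8> %v to <8 x i32>
  ret <8 x i32> %z
}
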
Nadav Rotem
c23e698b5c Transform: (EXTRACT_VECTOR_ELT( VECTOR_SHUFFLE )) -> EXTRACT_VECTOR_ELT.
llvm-svn: 148337
2012-01-17 21:44:01 +00:00
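A small sketch of the fold (illustrative only): extracting a lane from a shuffle with a constant index becomes an extract from the shuffle's source, using the lane the mask points at.

define i32 @f(<4 x i32> %a, <4 x i32> %b) {
  %s = shufflevector <4 x i32> %a, <4 x i32> %b, <4 x i32> <i32 6, i32 2, i32 1, i32 4>
  ; mask element 1 selects lane 2 of %a, so this can fold to
  ; extractelement <4 x i32> %a, i32 2
  %e = extractelement <4 x i32> %s, i32 1
  ret i32 %e
}
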
Jakob Stoklund Olesen
bb527a67c0 Remove histogram tests.
Counting the number of occurrences of each opcode is not a useful test.

llvm-svn: 144474
2011-11-12 22:39:40 +00:00
Dan Gohman
a5f382da8b Reapply r143206, with fixes. Disallow physical register lifetimes
across calls, and only check for nested dependences on the special
call-sequence-resource register.

llvm-svn: 143660
2011-11-03 21:49:52 +00:00
Dan Gohman
826cec9a4b Revert r143206, as there are still some failing tests.
llvm-svn: 143262
2011-10-29 00:41:52 +00:00
Dan Gohman
dedcc22bcd Reapply r143177 and r143179 (reverting r143188), with scheduler
fixes: Use a separate register, instead of SP, as the
calling-convention resource, to avoid spurious conflicts with
actual uses of SP. Also, fix unscheduling of calling sequences,
which can be triggered by pseudo-two-address dependencies.

llvm-svn: 143206
2011-10-28 17:55:38 +00:00
Duncan Sands
a6507c4bcb Speculatively disable Dan's commits 143177 and 143179 to see if
it fixes the dragonegg self-host (it looks like gcc is miscompiled).
Original commit messages:
Eliminate LegalizeOps' LegalizedNodes map and have it just call RAUW
on every node as it legalizes them. This makes it easier to use
hasOneUse() heuristics, since unneeded nodes can be removed from the
DAG earlier.

Make LegalizeOps visit the DAG in an operands-last order. It previously
used operands-first, because LegalizeTypes has to go operands-first, and
LegalizeTypes used to be part of LegalizeOps, but they're now split.
The operands-last order is more natural for several legalization tasks.
For example, it allows lowering code for nodes with floating-point or
vector constants to see those constants directly instead of seeing the
lowered form (often constant-pool loads). This makes some things
somewhat more complicated today, though it ought to allow things to be
simpler in the future. It also fixes some bugs exposed by Legalizing
using RAUW aggressively.

Remove the part of LegalizeOps that attempted to patch up invalid chain
operands on libcalls generated by LegalizeTypes, since it doesn't work
with the new LegalizeOps traversal order. Instead, define what
LegalizeTypes is doing to be correct, and transfer the responsibility
of keeping calls from having overlapping calling sequences into the
scheduler.

Teach the scheduler to model callseq_begin/end pairs as having a
physical register definition/use to prevent calls from having
overlapping calling sequences. This is also somewhat complicated, though
there are ways it might be simplified in the future.

This addresses rdar://9816668, rdar://10043614, rdar://8434668, and others.
Please direct high-level questions about this patch to management.

Delete #if 0 code accidentally left in.

llvm-svn: 143188
2011-10-28 09:55:57 +00:00
Dan Gohman
484df993bd Eliminate LegalizeOps' LegalizedNodes map and have it just call RAUW
on every node as it legalizes them. This makes it easier to use
hasOneUse() heuristics, since unneeded nodes can be removed from the
DAG earlier.

Make LegalizeOps visit the DAG in an operands-last order. It previously
used operands-first, because LegalizeTypes has to go operands-first, and
LegalizeTypes used to be part of LegalizeOps, but they're now split.
The operands-last order is more natural for several legalization tasks.
For example, it allows lowering code for nodes with floating-point or
vector constants to see those constants directly instead of seeing the
lowered form (often constant-pool loads). This makes some things
somewhat more complicated today, though it ought to allow things to be
simpler in the future. It also fixes some bugs exposed by Legalizing
using RAUW aggressively.

Remove the part of LegalizeOps that attempted to patch up invalid chain
operands on libcalls generated by LegalizeTypes, since it doesn't work
with the new LegalizeOps traversal order. Instead, define what
LegalizeTypes is doing to be correct, and transfer the responsibility
of keeping calls from having overlapping calling sequences into the
scheduler.

Teach the scheduler to model callseq_begin/end pairs as having a
physical register definition/use to prevent calls from having
overlapping calling sequences. This is also somewhat complicated, though
there are ways it might be simplified in the future.

This addresses rdar://9816668, rdar://10043614, rdar://8434668, and others.
Please direct high-level questions about this patch to management.

llvm-svn: 143177
2011-10-28 01:29:32 +00:00
Nadav Rotem
8ed6f090ef Enable element promotion type legalization by default.
Changed tests which assumed that vectors are legalized by widening them.

llvm-svn: 142152
2011-10-16 20:31:33 +00:00
Nadav Rotem
a733f43137 Fix a bug in LowerV2I64Splat, which generated a BUILD_VECTOR for which there was
no pattern.

llvm-svn: 142130
2011-10-16 10:02:06 +00:00
Kalle Raiskila
15993a5d28 Mark the 'branch indirect' instruction as an indirect branch.
Not marking it as such confused assembly printing of jump tables.

llvm-svn: 141862
2011-10-13 11:40:03 +00:00
Kalle Raiskila
7c154fe467 Pass signed (not unsigned) 10 bit field to SPU 'ori' instruction.
llvm-svn: 139004
2011-09-02 10:05:01 +00:00
Chris Lattner
0899957b99 make the asmparser reject function and type redefinitions. 'Merging' hasn't been
needed since llvm-gcc 3.4 days.

llvm-svn: 133248
2011-06-17 07:06:44 +00:00
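For example, input like the following (an invented snippet) is now a parse error instead of being merged:

define i32 @f() {
  ret i32 0
}

; a second definition of @f: previously merged, now rejected by the parser
define i32 @f() {
  ret i32 1
}
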
Chris Lattner
9ec82f54d4 manually upgrade a bunch of tests to modern syntax, and remove some that
are either unreduced or only test old syntax.

llvm-svn: 133228
2011-06-17 03:14:27 +00:00
Chris Lattner
de62b962e8 don't test for codegen of 'store undef'
llvm-svn: 129184
2011-04-09 02:31:26 +00:00
Cameron Zwarich
bf5c9cd119 Roll r127459 back in:
Optimize trivial branches in CodeGenPrepare, which often get created from the
lowering of objectsize intrinsics. Unfortunately, a number of tests were relying
on llc not optimizing trivial branches, so I had to add an option to allow them
to continue to test what they originally tested.

This fixes <rdar://problem/8785296> and <rdar://problem/9112893>.

llvm-svn: 127498
2011-03-11 21:52:04 +00:00
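The shape of such a trivial branch, sketched (illustrative; the real cases come out of objectsize lowering):

define i32 @f() {
entry:
  ; the condition has already folded to a constant, so CodeGenPrepare can
  ; replace this with an unconditional branch and delete the dead block
  br i1 true, label %live, label %dead
live:
  ret i32 1
dead:
  ret i32 0
}
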
Daniel Dunbar
a02706c889 Revert r127459, "Optimize trivial branches in CodeGenPrepare, which often get
created from the", it broke some GCC test suite tests.

llvm-svn: 127477
2011-03-11 19:30:30 +00:00
Cameron Zwarich
9ed726c151 Optimize trivial branches in CodeGenPrepare, which often get created from the
lowering of objectsize intrinsics. Unfortunately, a number of tests were relying
on llc not optimizing trivial branches, so I had to add an option to allow them
to continue to test what they originally tested.

This fixes <rdar://problem/8785296> and <rdar://problem/9112893>.

llvm-svn: 127459
2011-03-11 04:54:27 +00:00
Benjamin Kramer
8313cf1cf4 Fix mistyped CHECK lines.
llvm-svn: 127366
2011-03-09 22:07:31 +00:00
Joerg Sonnenberger
5f2f5fa638 Be nice to Xcore and the XMOS assembler and avoid quoting section names
that contain only letters, digits and the characters "_" and ".".

llvm-svn: 127028
2011-03-04 20:03:14 +00:00
Kalle Raiskila
6e33c92ffb Allow vector shifts (shl,lshr,ashr) on SPU.
There was a previous implementation with patterns that would
have matched e.g.
	shl <v4i32> <i32>,
but this is not valid LLVM IR, so they were never selected.

llvm-svn: 126998
2011-03-04 13:19:18 +00:00
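In valid IR the shift amount of a vector shift is itself a vector, so the form that can actually be selected looks like this (illustrative sketch):

define <4 x i32> @shift(<4 x i32> %v, <4 x i32> %amt) {
  %r = shl <4 x i32> %v, %amt
  ret <4 x i32> %r
}
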
Kalle Raiskila
72cfda1a29 Allow load from constant on SPU.
A 'load <4 x i32>* null' crashed llc before this fix.

llvm-svn: 126995
2011-03-04 12:00:11 +00:00
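A minimal shape of the previously crashing case (sketch, not the added testcase):

define <4 x i32> @from_null() {
  %v = load <4 x i32>* null
  ret <4 x i32> %v
}
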
Joerg Sonnenberger
bb93506f95 Bug#9033: For the ELF assembler output, always quote the section name.
llvm-svn: 126963
2011-03-03 22:31:08 +00:00
Chris Lattner
47e8019142 fix visitShift to properly zero extend the shift amount if the provided operand
is narrower than the shift register.  Doing an anyext provides undefined bits in
the top part of the register.

llvm-svn: 125457
2011-02-13 09:02:52 +00:00
Kalle Raiskila
070fb5e54d Allow sign-extending of i8 and i16 to i128 on SPU.
llvm-svn: 123912
2011-01-20 15:49:06 +00:00
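For illustration, the kind of input this enables (sketch):

define i128 @sext_byte(i8 %x) {
  %r = sext i8 %x to i128
  ret i128 %r
}
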
Kalle Raiskila
8eaf0e83d5 Don't crash the SPU BE on memory accesses with big alignment.
llvm-svn: 123620
2011-01-17 11:59:20 +00:00
Kalle Raiskila
68f221707a Don't feed 19 bit immediates to ILA.
Patch (slightly modified) by Visa Putkinen.

llvm-svn: 122052
2010-12-17 09:36:09 +00:00
Devang Patel
6fe7fe8dd4 If dbg_declare() or dbg_value() is not lowered by isel, emit a DEBUG message instead of creating a DBG_VALUE for an undefined value in reg0.
llvm-svn: 121059
2010-12-06 22:39:26 +00:00
Kalle Raiskila
71dec6ff42 Handle lshr for i128 correctly on SPU, also when
the shift amount is > 7.

llvm-svn: 120288
2010-11-29 14:44:28 +00:00
Kalle Raiskila
45865e9165 Enable PostRA scheduling for SPU.
This speeds up selected test cases by up to
5%; no slowdowns were observed.

llvm-svn: 120286
2010-11-29 10:30:25 +00:00
Kalle Raiskila
b017eaea7b Allow for 'fcmp ogt' in SPU.
Fix by Visa Putkinen!

llvm-svn: 120090
2010-11-24 11:42:17 +00:00
Kalle Raiskila
f71cc94c91 Division by a power of 2 is not cheap on SPU; do it with
shifts.

llvm-svn: 120022
2010-11-23 13:27:59 +00:00
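For the unsigned case the rewrite is a plain shift (sketch; signed division needs an extra rounding adjustment):

define i32 @div_by_8(i32 %x) {
  ; udiv by a power of two is equivalent to lshr i32 %x, 3
  %r = udiv i32 %x, 8
  ret i32 %r
}
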
Kalle Raiskila
8f1131e569 Fix a bug with extractelement on SPU.
In the attached testcase, the element was
never extracted (missing rotate).

llvm-svn: 119973
2010-11-22 16:28:26 +00:00
Kalle Raiskila
f89d0d0389 Fix memory access lowering on SPU, adding
support for the case where alignment < value size.

These cases were silently miscompiled before this patch.
Now they are overly verbose (especially stores) and
any front-end should still avoid misaligned memory
accesses as much as possible. The bit-juggling algorithm
added here probably still has some room for improvement.

llvm-svn: 118889
2010-11-12 10:14:03 +00:00
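An example of the affected shape (sketch): the access is less aligned than it is wide, so the lowering has to assemble the value from, or merge it into, the surrounding aligned quadwords.

define void @store_unaligned(i32* %p, i32 %v) {
  ; alignment (1) is smaller than the store size (4); previously miscompiled silently
  store i32 %v, i32* %p, align 1
  ret void
}
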
Kalle Raiskila
64680cd5b8 Change v64 datalayout in SPU.
The SPU ABI does not mention v64, and all the C examples
suggest v128 is treated similarly to arrays, so we use
array alignment for v64 too. This makes the alignment of
e.g. [2 x <2 x i32>] behave "intuitively", as if the
elements were e.g. i32s.

This also makes an "unaligned store" test to be 
aligned, with different (but functionally equivalent)
code generated.

llvm-svn: 117360
2010-10-26 10:45:47 +00:00
Kalle Raiskila
3cdfdd9383 Improve lowering of sext to i128 on SPU.
The old algorithm inserted a 'rotqmbyi' instruction which was
both redundant and wrong - it made shufb select bytes from the
wrong end of the input quad.

llvm-svn: 116701
2010-10-18 09:34:19 +00:00
Kalle Raiskila
c6bdc97934 Zap some redundant 'ori $?, $?, 0' from SPU.
Also remove some code that died in the process.
One now non-existent ori is checked for.

llvm-svn: 115306
2010-10-01 09:20:01 +00:00
Kalle Raiskila
68e2c15954 Change SPU register re-interpretations from OR to COPY_TO_REGCLASS instruction.
This cleans up after the mess r108567 left in the CellSPU backend.
ORCvt instructions were used to reinterpret registers, and the ORs were then
removed by isMoveInstr(). This patch now removes 350 instructions of the form:
	or $3, $3, $3
(from the 52 testcases in CodeGen/CellSPU). One case of a nonexistent or is
checked for.

Some moves of the form 'ori $., $., 0' and 'ai $., $., 0' still remain.

llvm-svn: 114074
2010-09-16 12:29:33 +00:00
Kalle Raiskila
be22226628 Fix CellSPU vector shuffles, again.
Some cases of lowering to rotate were miscompiled.

llvm-svn: 113355
2010-09-08 11:53:38 +00:00
Kalle Raiskila
daba4ffc75 Fix lowering of INSERT_VECTOR_ELT in SPU.
The IDX was treated as byte index, not element index.

llvm-svn: 112422
2010-08-29 12:41:50 +00:00
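A sketch of the affected operation (illustrative): the element index here denotes element 2, but the old lowering treated it as a byte offset.

define <4 x i32> @ins(<4 x i32> %v, i32 %x) {
  %r = insertelement <4 x i32> %v, i32 %x, i32 2
  ret <4 x i32> %r
}
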
Kalle Raiskila
1be8a5f947 Fix SPU BE to use all the available return registers.
llc used to assert on the added testcase.

llvm-svn: 111911
2010-08-24 11:50:48 +00:00
Kalle Raiskila
05d3cc2ef8 Fix a bug with insertelement on SPU.
The previous algorithm in LowerVECTOR_SHUFFLE 
didn't check all requirements for "monotonic" shuffles.

llvm-svn: 111361
2010-08-18 10:20:29 +00:00