mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-25 14:02:52 +02:00
Commit Graph

151 Commits

Author SHA1 Message Date
Nadav Rotem
ea4aecb3e5 This patch addresses the problem of poor code generation for the zext
v8i8 -> v8i32 on AVX machines. The codegen often scalarizes ANY_EXTEND nodes.
The DAGCombiner has two optimizations that can mitigate the problem. First,
if all of the operands of a BUILD_VECTOR node are extracted from ZEXT/ANYEXT
nodes, then it is possible to create a new simplified BUILD_VECTOR which uses
UNDEFS/ZERO values to eliminate the scalar ZEXT/ANYEXT nodes.
Second, another dag combine optimization lowers BUILD_VECTOR into a shuffle
vector instruction.

In the case of zext v8i8->v8i32 on AVX, a value in an XMM register is to be
shuffled into a wide YMM register.

This patch modifies the second optimization and allows the creation of
shuffle vectors even when the newly generated vector and the original vector
from which we extract the values are of different types.

llvm-svn: 150340
2012-02-12 15:05:31 +00:00
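
A minimal IR sketch of the pattern the commit above targets (the function name is illustrative, not taken from the patch); before this change the extend was often scalarized on AVX:

    define <8 x i32> @zext_v8i8(<8 x i8> %x) {
      ; widening extend from v8i8 to v8i32
      %r = zext <8 x i8> %x to <8 x i32>
      ret <8 x i32> %r
    }
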
Nadav Rotem
c23e698b5c Transform: (EXTRACT_VECTOR_ELT( VECTOR_SHUFFLE )) -> EXTRACT_VECTOR_ELT.
llvm-svn: 148337
2012-01-17 21:44:01 +00:00
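
An illustrative example of the transform above, assuming a constant shuffle mask (the function name and values are hypothetical): element 0 of the shuffle result comes from element 3 of %a, so the extract can bypass the shuffle.

    define i32 @extract_of_shuffle(<4 x i32> %a, <4 x i32> %b) {
      %s = shufflevector <4 x i32> %a, <4 x i32> %b, <4 x i32> <i32 3, i32 2, i32 5, i32 4>
      %e = extractelement <4 x i32> %s, i32 0
      ; the combine can rewrite %e as element 3 of %a directly:
      ;   %e = extractelement <4 x i32> %a, i32 3
      ret i32 %e
    }
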
Jakob Stoklund Olesen
bb527a67c0 Remove histogram tests.
Counting the number of occurrences of each opcode is not a useful test.

llvm-svn: 144474
2011-11-12 22:39:40 +00:00
Dan Gohman
a5f382da8b Reapply r143206, with fixes. Disallow physical register lifetimes
across calls, and only check for nested dependences on the special
call-sequence-resource register.

llvm-svn: 143660
2011-11-03 21:49:52 +00:00
Dan Gohman
826cec9a4b Revert r143206, as there are still some failing tests.
llvm-svn: 143262
2011-10-29 00:41:52 +00:00
Dan Gohman
dedcc22bcd Reapply r143177 and r143179 (reverting r143188), with scheduler
fixes: Use a separate register, instead of SP, as the
calling-convention resource, to avoid spurious conflicts with
actual uses of SP. Also, fix unscheduling of calling sequences,
which can be triggered by pseudo-two-address dependencies.

llvm-svn: 143206
2011-10-28 17:55:38 +00:00
Duncan Sands
a6507c4bcb Speculatively disable Dan's commits 143177 and 143179 to see if
it fixes the dragonegg self-host (it looks like gcc is miscompiled).
Original commit messages:
Eliminate LegalizeOps' LegalizedNodes map and have it just call RAUW
on every node as it legalizes them. This makes it easier to use
hasOneUse() heuristics, since unneeded nodes can be removed from the
DAG earlier.

Make LegalizeOps visit the DAG in an operands-last order. It previously
used operands-first, because LegalizeTypes has to go operands-first, and
LegalizeTypes used to be part of LegalizeOps, but they're now split.
The operands-last order is more natural for several legalization tasks.
For example, it allows lowering code for nodes with floating-point or
vector constants to see those constants directly instead of seeing the
lowered form (often constant-pool loads). This makes some things
somewhat more complicated today, though it ought to allow things to be
simpler in the future. It also fixes some bugs exposed by Legalizing
using RAUW aggressively.

Remove the part of LegalizeOps that attempted to patch up invalid chain
operands on libcalls generated by LegalizeTypes, since it doesn't work
with the new LegalizeOps traversal order. Instead, define what
LegalizeTypes is doing to be correct, and transfer the responsibility
of keeping calls from having overlapping calling sequences into the
scheduler.

Teach the scheduler to model callseq_begin/end pairs as having a
physical register definition/use to prevent calls from having
overlapping calling sequences. This is also somewhat complicated, though
there are ways it might be simplified in the future.

This addresses rdar://9816668, rdar://10043614, rdar://8434668, and others.
Please direct high-level questions about this patch to management.

Delete #if 0 code accidentally left in.

llvm-svn: 143188
2011-10-28 09:55:57 +00:00
Dan Gohman
484df993bd Eliminate LegalizeOps' LegalizedNodes map and have it just call RAUW
on every node as it legalizes them. This makes it easier to use
hasOneUse() heuristics, since unneeded nodes can be removed from the
DAG earlier.

Make LegalizeOps visit the DAG in an operands-last order. It previously
used operands-first, because LegalizeTypes has to go operands-first, and
LegalizeTypes used to be part of LegalizeOps, but they're now split.
The operands-last order is more natural for several legalization tasks.
For example, it allows lowering code for nodes with floating-point or
vector constants to see those constants directly instead of seeing the
lowered form (often constant-pool loads). This makes some things
somewhat more complicated today, though it ought to allow things to be
simpler in the future. It also fixes some bugs exposed by Legalizing
using RAUW aggressively.

Remove the part of LegalizeOps that attempted to patch up invalid chain
operands on libcalls generated by LegalizeTypes, since it doesn't work
with the new LegalizeOps traversal order. Instead, define what
LegalizeTypes is doing to be correct, and transfer the responsibility
of keeping calls from having overlapping calling sequences into the
scheduler.

Teach the scheduler to model callseq_begin/end pairs as having a
physical register definition/use to prevent calls from having
overlapping calling sequences. This is also somewhat complicated, though
there are ways it might be simplified in the future.

This addresses rdar://9816668, rdar://10043614, rdar://8434668, and others.
Please direct high-level questions about this patch to management.

llvm-svn: 143177
2011-10-28 01:29:32 +00:00
Nadav Rotem
8ed6f090ef Enable element promotion type legalization by default.
Changed tests which assumed that vectors are legalized by widening them.

llvm-svn: 142152
2011-10-16 20:31:33 +00:00
Nadav Rotem
a733f43137 Fix a bug in LowerV2I64Splat, which generated a BUILD_VECTOR for which there was
no pattern.

llvm-svn: 142130
2011-10-16 10:02:06 +00:00
Kalle Raiskila
15993a5d28 Mark the 'branch indirect' instruction as an indirect branch.
Not marking it as such confused the assembly printing of jump tables.

llvm-svn: 141862
2011-10-13 11:40:03 +00:00
Kalle Raiskila
7c154fe467 Pass a signed (not unsigned) 10-bit field to the SPU 'ori' instruction.
llvm-svn: 139004
2011-09-02 10:05:01 +00:00
Chris Lattner
0899957b99 make the asmparser reject function and type redefinitions. 'Merging' hasn't been
needed since llvm-gcc 3.4 days.

llvm-svn: 133248
2011-06-17 07:06:44 +00:00
Chris Lattner
9ec82f54d4 manually upgrade a bunch of tests to modern syntax, and remove some that
are either unreduced or only test old syntax.

llvm-svn: 133228
2011-06-17 03:14:27 +00:00
Chris Lattner
de62b962e8 don't test for codegen of 'store undef'
llvm-svn: 129184
2011-04-09 02:31:26 +00:00
Cameron Zwarich
bf5c9cd119 Roll r127459 back in:
Optimize trivial branches in CodeGenPrepare, which often get created from the
lowering of objectsize intrinsics. Unfortunately, a number of tests were relying
on llc not optimizing trivial branches, so I had to add an option to allow them
to continue to test what they originally tested.

This fixes <rdar://problem/8785296> and <rdar://problem/9112893>.

llvm-svn: 127498
2011-03-11 21:52:04 +00:00
Daniel Dunbar
a02706c889 Revert r127459, "Optimize trivial branches in CodeGenPrepare, which often get
created from the", it broke some GCC test suite tests.

llvm-svn: 127477
2011-03-11 19:30:30 +00:00
Cameron Zwarich
9ed726c151 Optimize trivial branches in CodeGenPrepare, which often get created from the
lowering of objectsize intrinsics. Unfortunately, a number of tests were relying
on llc not optimizing trivial branches, so I had to add an option to allow them
to continue to test what they originally tested.

This fixes <rdar://problem/8785296> and <rdar://problem/9112893>.

llvm-svn: 127459
2011-03-11 04:54:27 +00:00
Benjamin Kramer
8313cf1cf4 Fix mistyped CHECK lines.
llvm-svn: 127366
2011-03-09 22:07:31 +00:00
Joerg Sonnenberger
5f2f5fa638 Be nice to Xcore and the XMOS assembler and avoid quoting section names
that contain only letters, digits and the characters "_" and ".".

llvm-svn: 127028
2011-03-04 20:03:14 +00:00
Kalle Raiskila
6e33c92ffb Allow vector shifts (shl,lshr,ashr) on SPU.
There was a previous implementation with patterns that would 
have matched e.g. 
	shl <v4i32> <i32>,
but this is not valid LLVM IR, so they were never selected.

llvm-svn: 126998
2011-03-04 13:19:18 +00:00
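
For contrast with the invalid form quoted in the commit above, a sketch of what valid vector-shift IR looks like (the function name is made up); the shift amount is itself a vector, either variable or a constant splat:

    define <4 x i32> @vec_shl(<4 x i32> %x, <4 x i32> %amt) {
      %a = shl <4 x i32> %x, %amt                          ; variable per-element shift amounts
      %b = shl <4 x i32> %a, <i32 2, i32 2, i32 2, i32 2>  ; constant splat shift amount
      ret <4 x i32> %b
    }
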
Kalle Raiskila
72cfda1a29 Allow load from constant on SPU.
A 'load <4 x i32>* null' crashes llc before this fix.

llvm-svn: 126995
2011-03-04 12:00:11 +00:00
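
A reconstruction of the kind of reproducer the commit above describes, written in the typed-pointer load syntax of that era (the function name is hypothetical):

    define <4 x i32> @load_from_null() {
      ; a vector load from a constant (null) address; this used to crash llc on SPU
      %v = load <4 x i32>* null
      ret <4 x i32> %v
    }
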
Joerg Sonnenberger
bb93506f95 Bug#9033: For the ELF assembler output, always quote the section name.
llvm-svn: 126963
2011-03-03 22:31:08 +00:00
Chris Lattner
47e8019142 fix visitShift to properly zero extend the shift amount if the provided operand
is narrower than the shift register.  Doing an anyext provides undefined bits in
the top part of the register.

llvm-svn: 125457
2011-02-13 09:02:52 +00:00
Kalle Raiskila
070fb5e54d Allow sign-extending of i8 and i16 to i128 on SPU.
llvm-svn: 123912
2011-01-20 15:49:06 +00:00
Kalle Raiskila
8eaf0e83d5 Don't crash the SPU BE on memory accesses with big alignment.
llvm-svn: 123620
2011-01-17 11:59:20 +00:00
Kalle Raiskila
68f221707a Don't feed 19-bit immediates to ILA.
Patch (slightly modified) by Visa Putkinen.

llvm-svn: 122052
2010-12-17 09:36:09 +00:00
Devang Patel
6fe7fe8dd4 If dbg_declare() or dbg_value() is not lowered by isel, then emit a DEBUG message instead of creating a DBG_VALUE for an undefined value in reg0.
llvm-svn: 121059
2010-12-06 22:39:26 +00:00
Kalle Raiskila
71dec6ff42 Handle lshr for i128 correctly on SPU, also when the
shift amount is > 7.

llvm-svn: 120288
2010-11-29 14:44:28 +00:00
Kalle Raiskila
45865e9165 Enable PostRA scheduling for SPU.
This speeds up selected test cases by up to
5%; no slowdowns were observed.

llvm-svn: 120286
2010-11-29 10:30:25 +00:00
Kalle Raiskila
b017eaea7b Allow for 'fcmp ogt' in SPU.
Fix by Visa Putkinen!

llvm-svn: 120090
2010-11-24 11:42:17 +00:00
Kalle Raiskila
f71cc94c91 Division by a power of 2 is not cheap on SPU; do it with
shifts instead.

llvm-svn: 120022
2010-11-23 13:27:59 +00:00
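
A sketch of the strength reduction involved, for the unsigned case (signed division by a power of two additionally needs a rounding adjustment before the arithmetic shift); the function name is illustrative:

    define i32 @div_by_8(i32 %x) {
      ; an unsigned divide by 8 reduces to a logical right shift by 3
      %q = udiv i32 %x, 8        ; same result as: lshr i32 %x, 3
      ret i32 %q
    }
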
Kalle Raiskila
8f1131e569 Fix a bug with extractelement on SPU.
In the attached testcase, the element was
never extracted (missing rotate).

llvm-svn: 119973
2010-11-22 16:28:26 +00:00
Kalle Raiskila
f89d0d0389 Fix memory access lowering on SPU, adding
support for the case where alignment < value size.

These cases were silently miscompiled before this patch.
Now they are overly verbose (especially for stores), and
any front end should still avoid misaligned memory
accesses as much as possible. The bit-juggling algorithm
added here probably still has some room for improvement.

llvm-svn: 118889
2010-11-12 10:14:03 +00:00
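
A minimal example of the alignment < value size case the commit above refers to (hypothetical names, era-appropriate typed-pointer syntax): a 16-byte store through a pointer that only guarantees 4-byte alignment.

    define void @store_underaligned(<4 x i32> %v, <4 x i32>* %p) {
      ; the value is 16 bytes wide, but the pointer only promises 4-byte alignment
      store <4 x i32> %v, <4 x i32>* %p, align 4
      ret void
    }
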
Kalle Raiskila
64680cd5b8 Change the v64 datalayout in SPU.
The SPU ABI does not mention v64, and all C examples
suggest v128 is treated similarly to arrays, so we use
array alignment for v64 too. This makes the alignment
of e.g. [2 x <2 x i32>] behave intuitively, as if the
elements were e.g. i32s.

This also makes an "unaligned store" test become
aligned, with different (but functionally equivalent)
code generated.

llvm-svn: 117360
2010-10-26 10:45:47 +00:00
Kalle Raiskila
3cdfdd9383 Improve lowering of sext to i128 on SPU.
The old algorithm inserted a 'rotqmbyi' instruction which was
both redundant and wrong - it made shufb select bytes from the
wrong end of the input quad.

llvm-svn: 116701
2010-10-18 09:34:19 +00:00
Kalle Raiskila
c6bdc97934 Zap some redundant 'ori $?, $?, 0' from SPU.
Also remove some code that died in the process.
One now non-existent ori is checked for.

llvm-svn: 115306
2010-10-01 09:20:01 +00:00
Kalle Raiskila
68e2c15954 Change SPU register re-interpretations from OR to COPY_TO_REGCLASS instructions.
This cleans up after the mess r108567 left in the CellSPU backend.
ORCvt instructions were used to reinterpret registers, and the ORs were then
removed by isMoveInstr(). This patch now removes 350 instructions of the form:
	or $3, $3, $3
(from the 52 testcases in CodeGen/CellSPU). One case of a nonexistent or is
checked for.

Some moves of the form 'ori $., $., 0' and 'ai $., $., 0' still remain.

llvm-svn: 114074
2010-09-16 12:29:33 +00:00
Kalle Raiskila
be22226628 Fix CellSPU vector shuffles, again.
Some cases of lowering to rotate were miscompiled.

llvm-svn: 113355
2010-09-08 11:53:38 +00:00
Kalle Raiskila
daba4ffc75 Fix lowering of INSERT_VECTOR_ELT in SPU.
The IDX was treated as a byte index, not an element index.

llvm-svn: 112422
2010-08-29 12:41:50 +00:00
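
An illustrative case of the distinction described above (names are made up): the index operand counts elements, so index 2 in a <4 x i32> addresses byte offset 8, not byte 2.

    define <4 x i32> @insert_elt(<4 x i32> %v, i32 %s) {
      %r = insertelement <4 x i32> %v, i32 %s, i32 2   ; element index 2, i.e. byte offset 8
      ret <4 x i32> %r
    }
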
Kalle Raiskila
1be8a5f947 Fix SPU BE to use all the available return registers.
llc used to assert on the added testcase.

llvm-svn: 111911
2010-08-24 11:50:48 +00:00
Kalle Raiskila
05d3cc2ef8 Fix a bug with insertelement on SPU.
The previous algorithm in LowerVECTOR_SHUFFLE 
didn't check all requirements for "monotonic" shuffles.

llvm-svn: 111361
2010-08-18 10:20:29 +00:00
Kalle Raiskila
8b6f5df4ae Remove all traces of v2[i,f]32 on SPU.
The "half vectors" are now widened to full size by the legalizer.
The only exception is in parameter passing, where half vectors are 
expanded. This causes changes to some dejagnu tests.

llvm-svn: 111360
2010-08-18 10:04:39 +00:00
Kalle Raiskila
0ee13a45c8 Change SPU C calling convention to match that described in
"SPU Application Binary Interface Specification, v1.9" by
IBM. 
Specifically: use r3-r74 to pass parameters and the return value.

llvm-svn: 111358
2010-08-18 09:50:30 +00:00
Kalle Raiskila
e2c0e66ff1 Have SPU handle halfvec stores aligned by 8 bytes.
llvm-svn: 110576
2010-08-09 16:33:00 +00:00
Kalle Raiskila
ce1e4d80cb Make the SPU backend handle insertelement and
store for "half vectors".

llvm-svn: 110198
2010-08-04 13:59:48 +00:00
Kalle Raiskila
014c93befb More SPU v2f32 stuff added: insertelement and shuffle.
llvm-svn: 110038
2010-08-02 11:22:10 +00:00
Kalle Raiskila
766fd434df Add preliminary v2f32 support for SPU. Like with v2i32, we just
duplicate the instructions and operate on half vectors. 

Also reorder code in SPUInstrInfo.td for better coherency.

llvm-svn: 110037
2010-08-02 10:25:47 +00:00
Kalle Raiskila
21615cb06e Add preliminary v2i32 support for SPU backend. As there are no
such registers in SPU, this support boils down to "emulating" 
them by duplicating instructions on the general purpose registers. 

This adds the most basic operations on v2i32: passing parameters,
addition, subtraction, multiplication and a few others.

llvm-svn: 110035
2010-08-02 08:54:39 +00:00
Jakob Stoklund Olesen
bcee53a2b8 Remove many calls to TII::isMoveInstr. Targets should be producing COPY anyway.
TII::isMoveInstr is going to be completely removed.

llvm-svn: 108507
2010-07-16 04:45:42 +00:00