mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-29 23:12:55 +01:00
Commit Graph

24239 Commits

Author SHA1 Message Date
Evan Cheng
9dcd046bbd Handle v2i64 BUILD_VECTOR custom lowering correctly. v2i64 is a legal type,
but i64 is not. If possible, change an i64 op to an f64 (e.g. load, constant)
and then cast it back.

llvm-svn: 27849
2006-04-20 00:11:39 +00:00
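The trick being described: a 64-bit integer payload can travel through the legal f64 type without changing its bits. A minimal C sketch of that bit-preserving round trip (an illustration only, not the SelectionDAG lowering itself):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
  uint64_t i = 0x0123456789abcdefULL;  /* i64 value destined for the v2i64 */
  double d;                            /* f64 is the legal stand-in type */
  uint64_t back;

  memcpy(&d, &i, sizeof d);            /* i64 -> f64: reinterpret, don't convert */
  memcpy(&back, &d, sizeof back);      /* f64 -> i64: the bits come back unchanged */
  printf("%s\n", back == i ? "bits preserved" : "bits lost");
  return 0;
}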
Reid Spencer
1c46fc49e1 Allow OpenBSD to be recognized as a UNIX platform.
llvm-svn: 27848
2006-04-19 23:47:16 +00:00
Evan Cheng
d79f6a9f5a isSplatMask() bug: first element can be an undef.
llvm-svn: 27847
2006-04-19 23:28:59 +00:00
Chris Lattner
a1cf724536 Simplify some code
llvm-svn: 27846
2006-04-19 23:17:50 +00:00
Evan Cheng
019dea6886 - Added support to do an arbitrary 4-wide shuffle with no more than three
instructions.
- Fixed a commute vector_shuffle bug.

llvm-svn: 27845
2006-04-19 22:48:17 +00:00
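To see why three instructions suffice for a 4-wide shuffle, here is one of the awkward interleaved cases written out with SSE intrinsics (a hand-written illustration with made-up helper names, not the lowering code): {a0, b2, a3, b1} cannot come from a single shufps, but two gathering shufps plus one interleaving shufps produce it.

#include <xmmintrin.h>  /* SSE1 intrinsics */

/* Builds {a[0], b[2], a[3], b[1]} in three shufps instructions. */
static __m128 shuffle_a0_b2_a3_b1(__m128 a, __m128 b) {
  __m128 t1 = _mm_shuffle_ps(a, b, _MM_SHUFFLE(2, 2, 0, 0)); /* {a0, a0, b2, b2} */
  __m128 t2 = _mm_shuffle_ps(a, b, _MM_SHUFFLE(1, 1, 3, 3)); /* {a3, a3, b1, b1} */
  return _mm_shuffle_ps(t1, t2, _MM_SHUFFLE(2, 0, 2, 0));    /* {a0, b2, a3, b1} */
}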
Evan Cheng
7bbfc1d41a Prefer {p}unpack* and mov*dup over {p}shuf* as well.
llvm-svn: 27844
2006-04-19 21:15:24 +00:00
Evan Cheng
5e80563052 Renamed AddedCost to AddedComplexity.
llvm-svn: 27843
2006-04-19 20:38:28 +00:00
Evan Cheng
a52eb1d7d5 - Renamed AddedCost to AddedComplexity.
- Added more movhlps and movlhps patterns.

llvm-svn: 27842
2006-04-19 20:37:34 +00:00
Evan Cheng
adc5e703ca Rename AddedCost to AddedComplexity.
llvm-svn: 27841
2006-04-19 20:36:09 +00:00
Evan Cheng
265831aa45 Commute vector_shuffle to match more movlhps, movlp{s|d} cases.
llvm-svn: 27840
2006-04-19 20:35:22 +00:00
Chris Lattner
16cc9dc38d Final piece to get relinked .o files buildable universal on Darwin.
llvm-svn: 27839
2006-04-19 18:45:29 +00:00
Chris Lattner
efdb58d23e Regenerate
llvm-svn: 27838
2006-04-19 18:38:19 +00:00
Chris Lattner
640a59bc0d When on Darwin, compiler_flags need to be percolated down to the 'gcc -r'
command line so that relinked .o files can be built universal.

llvm-svn: 27837
2006-04-19 18:34:41 +00:00
Evan Cheng
56e205e534 More mov{h|l}p{d|s} patterns.
llvm-svn: 27836
2006-04-19 18:20:17 +00:00
Evan Cheng
b42424177c - More mov{h|l}ps patterns.
- Increase cost (complexity) of patterns which match mov{h|l}ps ops. These
  are preferred over shufps in most cases.

llvm-svn: 27835
2006-04-19 18:11:52 +00:00
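Background on why these patterns get a higher complexity: movlps/movhps merge a 64-bit memory operand directly into one half of an XMM register, folding what would otherwise be a separate load plus a shuffle. An intrinsics-level sketch of the two halves (illustrative helper names, not the TableGen patterns themselves):

#include <xmmintrin.h>  /* SSE1 intrinsics */

/* movlps: replace the low two floats of v with the two floats at p. */
static __m128 merge_low(__m128 v, const float *p) {
  return _mm_loadl_pi(v, (const __m64 *)p);
}

/* movhps: replace the high two floats of v with the two floats at p. */
static __m128 merge_high(__m128 v, const float *p) {
  return _mm_loadh_pi(v, (const __m64 *)p);
}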
Evan Cheng
318120f8ad Allow "let AddedCost = n in" to increase pattern complexity.
llvm-svn: 27834
2006-04-19 18:07:24 +00:00
Chris Lattner
435b87fa61 Alpha too!
llvm-svn: 27833
2006-04-19 17:20:48 +00:00
Chris Lattner
e307f43f35 add a note
llvm-svn: 27832
2006-04-19 16:22:38 +00:00
Andrew Lenharth
ca2734f9f1 Another simple type merge case to try
llvm-svn: 27831
2006-04-19 15:34:34 +00:00
Andrew Lenharth
ef07d5c5a5 deal with memchr
llvm-svn: 27830
2006-04-19 15:34:02 +00:00
Andrew Lenharth
09948a5331 friendlier error message
llvm-svn: 27829
2006-04-19 15:33:35 +00:00
Chris Lattner
62537a04fb add a note
llvm-svn: 27828
2006-04-19 05:55:06 +00:00
Chris Lattner
99c7c3ad2f Add a note.
llvm-svn: 27827
2006-04-19 05:53:27 +00:00
Chris Lattner
411111435e grammaro
llvm-svn: 27826
2006-04-19 04:21:57 +00:00
Chris Lattner
8b3c2c9a9d Fix a bug Owen noticed
llvm-svn: 27825
2006-04-19 04:21:16 +00:00
Chris Lattner
c76104e09b Change wording
llvm-svn: 27824
2006-04-19 04:12:01 +00:00
Chris Lattner
bd5802ad9b add a note
llvm-svn: 27823
2006-04-19 04:05:21 +00:00
Chris Lattner
966f0d4424 add some more notes
llvm-svn: 27822
2006-04-19 04:02:47 +00:00
Andrew Lenharth
d89cdc541a stupid stuff
llvm-svn: 27821
2006-04-19 03:45:25 +00:00
Andrew Lenharth
90e3af9ab1 fix printing call graphs
llvm-svn: 27820
2006-04-18 23:45:19 +00:00
Andrew Lenharth
87c0d713cc I understand now. Shoot.
llvm-svn: 27819
2006-04-18 22:36:11 +00:00
Evan Cheng
7364ee1c92 - PEXTRW cannot take a memory location as its first source operand.
- PINSRWrmi encoding bug.

llvm-svn: 27818
2006-04-18 21:59:43 +00:00
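For reference, pextrw copies one 16-bit element out of an XMM register into a general-purpose register (hence its source must be a register, not memory), and pinsrw writes one in. An SSE2 intrinsics sketch of the two operations (illustrative only, unrelated to the encoding fix itself):

#include <emmintrin.h>  /* SSE2 intrinsics */

/* pextrw: read 16-bit element 3 of v into a GPR (zero-extended to int). */
static int extract_word3(__m128i v) {
  return _mm_extract_epi16(v, 3);
}

/* pinsrw: write the low 16 bits of w into element 3 of v. */
static __m128i insert_word3(__m128i v, int w) {
  return _mm_insert_epi16(v, w, 3);
}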
Evan Cheng
d6fa185be2 SHUFP{S|D}, PSHUF* encoding bugs. Left out the mask immediate operand.
llvm-svn: 27817
2006-04-18 21:56:36 +00:00
Evan Cheng
f16e4bf29d Name change for clarity's sake
llvm-svn: 27816
2006-04-18 21:55:35 +00:00
Evan Cheng
82d7cacbbc Encoding bug: CMPPSrmi, CMPPDrmi dropped operand 2 (condition immediate).
llvm-svn: 27815
2006-04-18 21:31:08 +00:00
Evan Cheng
8e87e9b0db Name change for clarity's sake
llvm-svn: 27814
2006-04-18 21:29:50 +00:00
Evan Cheng
838f053b09 Left a pattern out
llvm-svn: 27813
2006-04-18 21:29:08 +00:00
Andrew Lenharth
4d432bcb19 llvm.memc* improvements. helps PA a lot in some specmarks
llvm-svn: 27812
2006-04-18 20:59:52 +00:00
Andrew Lenharth
d5fd116ede llvm.memc* improvements. helps PA a lot in some specmarks
llvm-svn: 27811
2006-04-18 19:54:11 +00:00
Chris Lattner
f58f727be6 These are correctly encoded by the JIT. I checked :)
llvm-svn: 27810
2006-04-18 19:03:38 +00:00
Chris Lattner
5f153584d9 add a note
llvm-svn: 27809
2006-04-18 18:30:19 +00:00
Chris Lattner
47a41ae889 Fix a crash on:
void foo2(vector float *A, vector float *B) {
  vector float C = (vector float)vec_cmpeq(*A, *B);
  if (!vec_any_eq(*A, *B))
    *B = (vector float){0,0,0,0};
  *A = C;
}

llvm-svn: 27808
2006-04-18 18:28:22 +00:00
Evan Cheng
2cd4e2d240 Fixed an encoding bug: movd from XMM to R32.
llvm-svn: 27807
2006-04-18 18:19:00 +00:00
Chris Lattner
2bd91746e1 pretty print node name
llvm-svn: 27806
2006-04-18 18:05:58 +00:00
Chris Lattner
44ea12c5f8 Implement an important entry from README_ALTIVEC:
If an altivec predicate compare is used immediately by a branch, don't
use a (serializing) MFCR instruction to read the CR6 register, which then requires
a compare to get the condition back into a CR.  Instead, just branch on CR6 directly. :)

For example, for:
void foo2(vector float *A, vector float *B) {
  if (!vec_any_eq(*A, *B))
    *B = (vector float){0,0,0,0};
}

We now generate:

_foo2:
        mfspr r2, 256
        oris r5, r2, 12288
        mtspr 256, r5
        lvx v2, 0, r4
        lvx v3, 0, r3
        vcmpeqfp. v2, v3, v2
        bne cr6, LBB1_2 ; UnifiedReturnBlock
LBB1_1: ; cond_true
        vxor v2, v2, v2
        stvx v2, 0, r4
        mtspr 256, r2
        blr
LBB1_2: ; UnifiedReturnBlock
        mtspr 256, r2
        blr

instead of:

_foo2:
        mfspr r2, 256
        oris r5, r2, 12288
        mtspr 256, r5
        lvx v2, 0, r4
        lvx v3, 0, r3
        vcmpeqfp. v2, v3, v2
        mfcr r3, 2
        rlwinm r3, r3, 27, 31, 31
        cmpwi cr0, r3, 0
        beq cr0, LBB1_2 ; UnifiedReturnBlock
LBB1_1: ; cond_true
        vxor v2, v2, v2
        stvx v2, 0, r4
        mtspr 256, r2
        blr
LBB1_2: ; UnifiedReturnBlock
        mtspr 256, r2
        blr

This implements CodeGen/PowerPC/vec_br_cmp.ll.

llvm-svn: 27804
2006-04-18 17:59:36 +00:00
Chris Lattner
43ef757c0e new testcase
llvm-svn: 27803
2006-04-18 17:56:30 +00:00
Chris Lattner
519001b0ee move some stuff around, clean things up
llvm-svn: 27802
2006-04-18 17:52:36 +00:00
Chris Lattner
3e2a664ada Teach the codegen about instructions used for SSE spill code, allowing it
to optimize cases where it has to spill a lot

llvm-svn: 27801
2006-04-18 16:44:51 +00:00
Nate Begeman
7821901005 Fix a copy & paste error from long ago.
llvm-svn: 27800
2006-04-18 16:03:18 +00:00
Chris Lattner
a72743cb57 Add some more notes, many still missing
llvm-svn: 27799
2006-04-18 06:32:08 +00:00