mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-29 23:12:55 +01:00
Commit Graph

3106 Commits

Author SHA1 Message Date
Andrew Lenharth
e831777542 turn on IEEE for compares
llvm-svn: 20425
2005-03-03 22:12:11 +00:00
Andrew Lenharth
e6dbf989b3 better Select on FP
llvm-svn: 20424
2005-03-03 21:47:53 +00:00
Chris Lattner
4439f1686f Print -X like this:
double test(double l1_X) {
  return (-l1_X);
}

instead of like this:

double test(double l1_X) {
  return (-0x0p+0 - l1_X);
}

llvm-svn: 20423
2005-03-03 21:12:04 +00:00
Andrew Lenharth
b5ddbc074d LSR cleanup patch
llvm-svn: 20422
2005-03-03 19:03:21 +00:00
Chris Lattner
8074739aa2 Do not lower malloc's to pass "sizeof" expressions like this:
ltmp_0_7 = malloc(((unsigned )(&(((signed char (*)[784])/*NULL*/0)[1u]))));

Instead, just emit the literal constant, like this:

  ltmp_0_7 = malloc(784u);

This works around a bug in ICC 8.1 compiling the CBE generated code.  :-(

llvm-svn: 20415
2005-03-03 01:04:50 +00:00
Chris Lattner
b205d87afe cleanup the cfg after lsr
llvm-svn: 20410
2005-03-02 21:56:00 +00:00
Andrew Lenharth
1e213c7924 remove 32 sign extend after 32 sextload and handle small negative constant
llvm-svn: 20408
2005-03-02 17:23:03 +00:00
Andrew Lenharth
8fc5ba2e06 Added LSR as a beta pass for alpha
llvm-svn: 20407
2005-03-02 17:21:38 +00:00
Chris Lattner
798b18474c Add a temporary option for llc-beta: -enable-lsr-for-ppc, which turns on
Loop Strength Reduction.

llvm-svn: 20399
2005-03-02 06:19:22 +00:00
Chris Lattner
9d57998cda Remove tabs from file.
llvm-svn: 20380
2005-02-28 19:36:15 +00:00
Chris Lattner
b2720f5b57 Add support to the C backend for llvm.prefetch. Patch contributed by
Justin Wick!

llvm-svn: 20378
2005-02-28 19:29:46 +00:00
Andrew Lenharth
7dc9ea9509 fix integer division and stuff
llvm-svn: 20372
2005-02-28 17:22:18 +00:00
Chris Lattner
a024984017 Fix spelling, patch contributed by Gabor Greif!
llvm-svn: 20343
2005-02-27 06:18:25 +00:00
Andrew Lenharth
b5331ffe0f make BB labels be exported for debugging, add fp negation optimization, further pessimize the FP instructions
llvm-svn: 20332
2005-02-25 22:55:15 +00:00
Andrew Lenharth
ef5f87784b fix Allocas. Really. I mean it this time.
llvm-svn: 20306
2005-02-24 18:36:32 +00:00
Tanya Lattner
b640bb0d88 Only print out machine instructions before modulo scheduling if we are actually doing modulo scheduling! :)
llvm-svn: 20292
2005-02-24 02:14:44 +00:00
Andrew Lenharth
69a8320c0d Ah the problems you have to fix when you stray from the One True Way (TM)
llvm-svn: 20290
2005-02-23 17:33:42 +00:00
Chris Lattner
9838ab1271 Silence some uninit variable warnings.
llvm-svn: 20284
2005-02-23 05:57:21 +00:00
Tanya Lattner
a981a711aa Fixed bug in findAllcircuits. Fixed branch addition to schedule. Added debug information.
llvm-svn: 20280
2005-02-23 02:01:42 +00:00
Andrew Lenharth
889efe4fb3 oops
llvm-svn: 20278
2005-02-22 23:29:25 +00:00
Andrew Lenharth
d870103306 dynamic stack allocas
llvm-svn: 20273
2005-02-22 21:59:48 +00:00
Andrew Lenharth
8ead0f13d3 no longer build as a shared library
llvm-svn: 20264
2005-02-22 04:58:26 +00:00
Misha Brukman
ea2c511191 Fix compilation errors with VS 2005, patch contributed by Aaron Gray.
llvm-svn: 20232
2005-02-17 21:40:27 +00:00
Tanya Lattner
a35f2f428e Fixed node deletion bug.
llvm-svn: 20207
2005-02-16 04:00:59 +00:00
Chris Lattner
5174b9cb60 Fix a problem where the PPC backend lost track of the fact that it had
to save and restore the LR register on entry and exit of a leaf function
that needed to access globals or the constant pool.  This should hopefully
stop oscar from sending the PPC tester spinning out of control.

llvm-svn: 20197
2005-02-15 20:26:49 +00:00
Chris Lattner
bf72146607 Fix volatile load/store of pointers. Consider this testcase:
void %test(int** %P) {
  %A = volatile load int** %P
  ret void
}

void %test2(int*** %Q) {
  %P = load int*** %Q
  volatile store int** %P, int*** %Q
  ret void
}

instead of emitting:

void test(int **l1_P) {
  int *l2_A;

  l2_A = (int **((volatile int **)l1_P));
  return;
}
void test2(int ***l2_Q) {
  int **l1_P;

  l1_P = *l2_Q;
  *((volatile int ***)l2_Q) = l1_P;
  return;
}

... which is loading/storing volatile pointers, not through volatile pointers,
emit this (which is right):

void test(int **l1_P) {
  int *l3_A;

  l3_A = *((int * volatile*)l1_P);
  return;
}
void test2(int ***l2_Q) {
  int **l1_P;

  l1_P = *l2_Q;
  *((int ** volatile*)l2_Q) = l1_P;
  return;
}

llvm-svn: 20191
2005-02-15 05:52:14 +00:00
Misha Brukman
7b6a863954 Write out single characters as chars, not strings.
llvm-svn: 20179
2005-02-14 18:52:35 +00:00
Chris Lattner
ef836a918e Implement CodeGen/CBackend/2005-02-14-VolatileOperations.ll
Volatile loads and stores need to emit volatile pointer operations in C.

llvm-svn: 20177
2005-02-14 16:47:52 +00:00
Andrew Lenharth
f023ce8d97 fix setcc on floats, fixes singlesource:pi, perhaps others
llvm-svn: 20172
2005-02-14 05:41:43 +00:00
Andrew Lenharth
e398f7797e try to do a better match for i32 adds
llvm-svn: 20143
2005-02-12 21:11:17 +00:00
Andrew Lenharth
089b56ae58 make FP conversion more conservative (matches gcc)
llvm-svn: 20142
2005-02-12 21:10:58 +00:00
Andrew Lenharth
b9c44170a5 oops, I was sure this had already gone through the nightly tester
llvm-svn: 20141
2005-02-12 20:42:09 +00:00
Andrew Lenharth
a12e5330bf added sign extend for boolean
llvm-svn: 20137
2005-02-12 19:35:12 +00:00
Andrew Lenharth
076faf95a8 fix a bunch of regressions due to call behavior
llvm-svn: 20110
2005-02-10 20:10:38 +00:00
Tanya Lattner
04a78e13df Added new circuit finding algorithm.
Fixed bug in graph so that phi ite diff edges are added.

llvm-svn: 20108
2005-02-10 17:02:58 +00:00
Tanya Lattner
cd54968a72 Allow modsched and local scheduling to both be run.
llvm-svn: 20107
2005-02-10 17:02:06 +00:00
Andrew Lenharth
56c441caf2 so, if you beat on it, you too can talk emacs into having a sane indenting policy... Also, optimize many function calls with pc-relative calls (partial prologue skipping for that case coming soon), try to fix the random jumps to strange places problem by pessimizing div et al. register usage and fixing up GP before using it, make some calling convention tweaks, and make the frame pointer unallocatable (not strictly necessary, but let's go for correctness first)
llvm-svn: 20106
2005-02-10 06:25:22 +00:00
Andrew Lenharth
6c28128e3e fix fp branch
llvm-svn: 20105
2005-02-10 05:17:38 +00:00
Misha Brukman
4bd124492b * Fix spelling of `volatile'
* Align comments with tablegen elements

llvm-svn: 20103
2005-02-10 01:52:22 +00:00
Andrew Lenharth
d42ae810cb BranchCC, nifty
llvm-svn: 20067
2005-02-08 00:40:03 +00:00
Andrew Lenharth
71fce71669 fix store issue and an FP conversion (segfault) issue
llvm-svn: 20066
2005-02-07 23:02:23 +00:00
Andrew Lenharth
cf4f405e55 copytoreg fix
llvm-svn: 20063
2005-02-07 06:31:44 +00:00
Andrew Lenharth
d20853f420 copyfromreg fix
llvm-svn: 20062
2005-02-07 06:21:37 +00:00
Andrew Lenharth
80cf648100 fix load bug
llvm-svn: 20061
2005-02-07 05:55:55 +00:00
Andrew Lenharth
9f5502e40f more FP load store fixes and Load store simplifications
llvm-svn: 20060
2005-02-07 05:33:15 +00:00
Andrew Lenharth
bc6ddca09c clean up loads and stores a lot
llvm-svn: 20059
2005-02-07 05:18:02 +00:00
Andrew Lenharth
4416315969 teach all loads and stores about the stack
llvm-svn: 20058
2005-02-07 05:07:00 +00:00
Andrew Lenharth
1b8bf311d2 prefer FP scratch registers and more checks in LowerArguments
llvm-svn: 20057
2005-02-06 21:07:31 +00:00
Andrew Lenharth
23ca0026fa fix oops
llvm-svn: 20056
2005-02-06 16:22:15 +00:00
Andrew Lenharth
baa723abc0 smarter loads and stores. can now handle base+offset.
llvm-svn: 20055
2005-02-06 15:40:40 +00:00
Andrew Lenharth
9a2bc47fba fix build
llvm-svn: 20053
2005-02-05 19:46:51 +00:00
Andrew Lenharth
6bd554a11e clean up
llvm-svn: 20051
2005-02-05 17:41:39 +00:00
Andrew Lenharth
9fd7ce4bca fix f32 setcc, and fp select
llvm-svn: 20050
2005-02-05 16:41:03 +00:00
Andrew Lenharth
5447bb6596 added ugly support for fp compares
llvm-svn: 20049
2005-02-05 13:19:12 +00:00
Misha Brukman
75da90f127 Make the rest of file header comments consistent in format and style
llvm-svn: 20048
2005-02-05 02:24:26 +00:00
Misha Brukman
74be40e1d2 Make file header comment consistent: extend the whole 80 cols to fill the line
llvm-svn: 20039
2005-02-04 20:25:52 +00:00
Andrew Lenharth
e081ab1c69 alignment
llvm-svn: 20028
2005-02-04 14:09:38 +00:00
Andrew Lenharth
68f8792889 get alignment printing correctly and get rid of __main hack
llvm-svn: 20027
2005-02-04 14:01:21 +00:00
Andrew Lenharth
fa74ac60e6 FP fixes
llvm-svn: 20019
2005-02-03 21:01:15 +00:00
Andrew Lenharth
d5de7adf26 Store fix
llvm-svn: 20004
2005-02-02 17:32:39 +00:00
Andrew Lenharth
fef75b04f1 oops
llvm-svn: 20003
2005-02-02 17:01:31 +00:00
Andrew Lenharth
b4bf49a4ae prevent register allocator from using the stack pointer :)
llvm-svn: 20002
2005-02-02 17:00:21 +00:00
Andrew Lenharth
c3e3bd1c22 fix loading of floats
llvm-svn: 19997
2005-02-02 15:05:33 +00:00
Andrew Lenharth
2482a0ef99 marked mem* as not supported
llvm-svn: 19992
2005-02-02 05:49:42 +00:00
Andrew Lenharth
a856b4db61 fix Load bug
llvm-svn: 19987
2005-02-02 04:35:44 +00:00
Andrew Lenharth
35ae745650 try to make a bug bugpointable, add yet more constant pool stuff, fixup constant loads for FP
llvm-svn: 19985
2005-02-02 03:36:35 +00:00
Andrew Lenharth
5a2bb3de8b better constant handling, should fix many remaining cases
llvm-svn: 19984
2005-02-02 00:51:15 +00:00
Andrew Lenharth
172fc4b1fd fix FP arg passing bug, Add unsigned to/from int, fix SELECT, fix Constant pool
llvm-svn: 19976
2005-02-01 20:40:27 +00:00
Andrew Lenharth
9086064b72 Print the Constant pool
llvm-svn: 19975
2005-02-01 20:38:53 +00:00
Andrew Lenharth
540700124d Make cmov work right, and make FP loads from the constant pool work
llvm-svn: 19974
2005-02-01 20:36:44 +00:00
Andrew Lenharth
bf88e70920 Correct stack stuff for FP
llvm-svn: 19973
2005-02-01 20:35:57 +00:00
Andrew Lenharth
810fa6d4f1 try to match alpha pattern
llvm-svn: 19972
2005-02-01 20:35:11 +00:00
Andrew Lenharth
b8e15cfe9c fix register names
llvm-svn: 19971
2005-02-01 20:34:29 +00:00
Andrew Lenharth
fa52a84802 pessimize loads, put indirect call addr in the right register. Still doesn't fix methcall
llvm-svn: 19963
2005-02-01 01:37:24 +00:00
Misha Brukman
1e8e58b829 Fix hyphenation in output comment
llvm-svn: 19954
2005-01-31 06:19:57 +00:00
Andrew Lenharth
648e85bb8a indirect call fix
llvm-svn: 19945
2005-01-31 03:19:31 +00:00
Andrew Lenharth
401bb5807b fp to int and back conversion sequences
llvm-svn: 19944
2005-01-31 01:44:26 +00:00
Andrew Lenharth
43c294ffc3 added fp extend, removed a forgotten assert in more-than-6-arg support (should break somewhere else now :) ), and fixed an incorrect asm sequence for indirect calls
llvm-svn: 19938
2005-01-30 20:42:36 +00:00
Chris Lattner
c952d46d13 This code is really unreachable.
llvm-svn: 19934
2005-01-30 16:33:46 +00:00
Chris Lattner
d27e3639ba Fix warnings.
llvm-svn: 19933
2005-01-30 16:32:48 +00:00
Andrew Lenharth
60966b9bf0 support for larger calls
llvm-svn: 19932
2005-01-30 00:35:27 +00:00
Chris Lattner
fc8d0e9460 Unbreak the build :(
llvm-svn: 19926
2005-01-29 19:27:28 +00:00
Andrew Lenharth
f426f1c0c9 first step towards a correct and complete stack. also add some forms for things that were getting stuck in the nightly tester.
llvm-svn: 19914
2005-01-29 15:42:07 +00:00
Chris Lattner
774d64469c Finegrainify namespacification.
Adjust TmpInstruction to work with the new User model.

llvm-svn: 19896
2005-01-29 00:36:59 +00:00
Chris Lattner
0b2dec0f59 add namespace qualifier
llvm-svn: 19895
2005-01-29 00:36:38 +00:00
Andrew Lenharth
8a3a14d343 fix ExprMap, partially teach about add long
llvm-svn: 19882
2005-01-28 23:17:54 +00:00
Andrew Lenharth
9db35b0763 fix ExprMap and constant check in setcc
llvm-svn: 19870
2005-01-28 14:06:46 +00:00
Andrew Lenharth
4cfda09ee9 move FP into its own select
llvm-svn: 19867
2005-01-28 06:57:18 +00:00
Andrew Lenharth
c0cd77a1a0 stack frame fix and zero FP reg fix
llvm-svn: 19857
2005-01-27 08:31:19 +00:00
Andrew Lenharth
55eadc4772 Floating point instructions like Floating point registers
llvm-svn: 19856
2005-01-27 07:58:15 +00:00
Andrew Lenharth
67328d7fac int to float conversion and another setcc
llvm-svn: 19855
2005-01-27 07:50:35 +00:00
Andrew Lenharth
4283bf1216 teach isel about comparison with constants and zero extending bits
llvm-svn: 19853
2005-01-27 03:49:45 +00:00
Andrew Lenharth
b539dd2c83 perhaps this will let me have calls again
llvm-svn: 19851
2005-01-27 01:22:48 +00:00
Andrew Lenharth
227bc0e21a minor bug fix
llvm-svn: 19850
2005-01-27 00:52:26 +00:00
Andrew Lenharth
ae8ce1856a minor bug fix
llvm-svn: 19849
2005-01-27 00:51:05 +00:00
Andrew Lenharth
11fc660a34 added instructions for fp to int to fp moves
llvm-svn: 19848
2005-01-26 23:56:48 +00:00
Andrew Lenharth
1f0b710fb6 initial fp support
llvm-svn: 19847
2005-01-26 21:54:09 +00:00
Andrew Lenharth
f9f01c190b hum, writing on one machine, testing on another...
llvm-svn: 19844
2005-01-26 02:53:56 +00:00
Andrew Lenharth
53ad9ac1db add some operations, fix others. should compile several more tests now
llvm-svn: 19843
2005-01-26 01:24:38 +00:00
Chris Lattner
ab92b92bc5 We can fold promoted and non-promoted loads into divs also!
llvm-svn: 19835
2005-01-25 20:35:10 +00:00
Chris Lattner
a9a0369879 Fold promoted loads into binary ops for FP, allowing us to generate m32 forms
of FP ops.

llvm-svn: 19834
2005-01-25 20:03:11 +00:00
Andrew Lenharth
4e10c17eeb problems with bools, and their workarounds
llvm-svn: 19833
2005-01-25 19:58:40 +00:00
Andrew Lenharth
3ae267eb3b more load choices, better add with imm
llvm-svn: 19821
2005-01-25 00:35:34 +00:00
Andrew Lenharth
3b44cfa26d Clean ups, and taught the instruction selector about immediate forms
llvm-svn: 19816
2005-01-24 19:44:07 +00:00
Andrew Lenharth
3c6e50e63b Alpha JIT prune
llvm-svn: 19815
2005-01-24 18:48:22 +00:00
Andrew Lenharth
ae874f0d85 include prune and JIT prune
llvm-svn: 19814
2005-01-24 18:45:41 +00:00
Andrew Lenharth
e3991f8256 Pruned includes
llvm-svn: 19813
2005-01-24 18:37:48 +00:00
Chris Lattner
88a4c43e67 Fix a spurious warning.
llvm-svn: 19799
2005-01-24 01:40:18 +00:00
Chris Lattner
6ff85c3152 Silence a warning.
llvm-svn: 19798
2005-01-23 23:20:06 +00:00
Chris Lattner
94952e0947 Allow the FP stackifier to completely ignore functions that do not use FP at
all.  This should speed up the X86 backend fairly significantly on integer
codes.  Now if only we didn't have to compute livevar still... ;-)

llvm-svn: 19796
2005-01-23 23:13:59 +00:00
Chris Lattner
680bc75f7f Build Alpha by default.
llvm-svn: 19777
2005-01-23 04:34:46 +00:00
Reid Spencer
e48557f583 Fix alloca support for Cygwin. On Cygwin it's __alloca, not __builtin_alloca
llvm-svn: 19776
2005-01-23 04:32:47 +00:00
Reid Spencer
5c7b6e83f0 Support Cygwin assembly generation. The Cygwin version of the GNU assembler
doesn't support certain directives, and symbols on Cygwin are prefixed with
an underscore. This patch makes the necessary adjustments to the output.

llvm-svn: 19775
2005-01-23 03:52:14 +00:00
Andrew Lenharth
f5b9a8fe57 Let me introduce you to the early stages of the llvm backend for the alpha processor
llvm-svn: 19764
2005-01-22 23:41:55 +00:00
Chris Lattner
b4cf4ffb04 Speed up folding operations into loads.
llvm-svn: 19733
2005-01-21 21:43:02 +00:00
Chris Lattner
fd4d7f71ae The ever-important vanity pass name :)
llvm-svn: 19731
2005-01-21 21:35:14 +00:00
Chris Lattner
5f2fbeaa69 Fix a FIXME: realize that argument stores are all independent (don't alias)
llvm-svn: 19728
2005-01-21 19:46:38 +00:00
Chris Lattner
febeb380ae Implement ADD_PARTS/SUB_PARTS so that 64-bit integer add/sub work. This
fixes most of the remaining llc-beta failures.

llvm-svn: 19716
2005-01-20 18:53:00 +00:00
Chris Lattner
8b0a2a3251 Fix a crash compiling 134.perl.
llvm-svn: 19711
2005-01-20 16:50:16 +00:00
Chris Lattner
6534e1ede3 Fix a problem where we were literally selecting for INCREASED register
pressure, not decreased register pressure.  Fix a problem where we accidentally
swapped the operands of SHLD, which caused fourinarow to fail.  This fixes
fourinarow.

llvm-svn: 19697
2005-01-19 17:24:34 +00:00
Chris Lattner
b75589131d When commuting these instructions, make sure to actually swap the operands too.
llvm-svn: 19694
2005-01-19 16:55:52 +00:00
Chris Lattner
fde1a5688b Implement Regression/CodeGen/X86/rotate.ll: emit rotate instructions (which
typically cost 1 cycle) instead of shld/shrd instruction (which are typically
6 or more cycles).  This also saves code space.

For example, instead of emitting:

rotr:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %CL, BYTE PTR [%ESP + 8]
        shrd %EAX, %EAX, %CL
        ret
rotli:
        mov %EAX, DWORD PTR [%ESP + 4]
        shrd %EAX, %EAX, 27
        ret

Emit:

rotr32:
        mov %CL, BYTE PTR [%ESP + 8]
        mov %EAX, DWORD PTR [%ESP + 4]
        ror %EAX, %CL
        ret
rotli32:
        mov %EAX, DWORD PTR [%ESP + 4]
        ror %EAX, 27
        ret

We also emit byte rotate instructions which do not have a sh[lr]d counterpart
at all.

llvm-svn: 19692
2005-01-19 08:07:05 +00:00
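For illustration (not part of the commit), the usual C rotate-right idiom is the kind of source these patterns target; the sketch below assumes a 32-bit unsigned type:

/* Classic rotate-right idiom; with the new patterns this can lower to
   a single ror instead of an shrd of a register with itself. */
unsigned rotr32(unsigned x, unsigned n) {
  n &= 31;                                /* keep both shifts in range */
  return (x >> n) | (x << ((32 - n) & 31));
}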
Chris Lattner
34757ff939 Add rotate instructions.
llvm-svn: 19690
2005-01-19 07:50:03 +00:00
Chris Lattner
e539ce8223 Match 16-bit shld/shrd instructions as well, implementing shift-double.llx:test5
llvm-svn: 19689
2005-01-19 07:37:26 +00:00
Chris Lattner
9d5ee289d7 Improve coverage of the X86 instruction set by adding 16-bit shift doubles.
llvm-svn: 19687
2005-01-19 07:31:24 +00:00
Chris Lattner
c03f360215 Teach the code generator that shrd/shld is commutable if it has an immediate.
This allows us to generate this:

foo:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %EDX, DWORD PTR [%ESP + 8]
        shld %EDX, %EDX, 2
        shl %EAX, 2
        ret

instead of this:

foo:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %ECX, DWORD PTR [%ESP + 8]
        mov %EDX, %EAX
        shrd %EDX, %ECX, 30
        shl %EAX, 2
        ret

Note the magically transmogrifying immediate.

llvm-svn: 19686
2005-01-19 07:11:01 +00:00
Chris Lattner
33efebcdc8 Finegrainify namespacification
Add default impl of commuteInstruction
Add notes about ugly V9 code.

llvm-svn: 19684
2005-01-19 06:53:34 +00:00
Chris Lattner
575e912fcf Codegen long >> 2 to this:
foo:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %EDX, DWORD PTR [%ESP + 8]
        shrd %EAX, %EDX, 2
        sar %EDX, 2
        ret

instead of this:

test1:
        mov %ECX, DWORD PTR [%ESP + 4]
        shr %ECX, 2
        mov %EDX, DWORD PTR [%ESP + 8]
        mov %EAX, %EDX
        shl %EAX, 30
        or %EAX, %ECX
        sar %EDX, 2
        ret

and long << 2 to this:

foo:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %ECX, DWORD PTR [%ESP + 8]
***     mov %EDX, %EAX
        shrd %EDX, %ECX, 30
        shl %EAX, 2
        ret

instead of this:

foo:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %ECX, %EAX
        shr %ECX, 30
        mov %EDX, DWORD PTR [%ESP + 8]
        shl %EDX, 2
        or %EDX, %ECX
        shl %EAX, 2
        ret

The extra copy (marked ***) can be eliminated when I teach the code generator
that shrd32rri8 is really commutative.

llvm-svn: 19681
2005-01-19 06:18:43 +00:00
Chris Lattner
419a5d213b X86 shifts mask the amount.
llvm-svn: 19678
2005-01-19 03:36:30 +00:00
Chris Lattner
fbd1f8e4fd Add a hook to find out how the target handles shift amounts that are out of
range.  Either they are undefined (the default), they mask the shift amount
to the size of the register (X86, Alpha, etc), or they extend the shift (PPC).

This defaults to undefined, which is conservatively correct.

llvm-svn: 19677
2005-01-19 03:36:14 +00:00
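A small C illustration of the question the hook answers (exposition only; the function below is not the interface added by this commit):

/* What does x << n do when n >= 32 on a 32-bit register?
   - undefined: the conservative default described above
   - masked:    X86/Alpha behave as x << (n & 31)
   - extended:  PPC-style shifts can shift the value all the way out,
                e.g. producing 0 for 32 <= n < 64
   C itself leaves this undefined, so the code generator may only
   exploit a specific behavior if the target reports it. */
unsigned shl_var(unsigned x, unsigned n) {
  return x << n;
}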
Chris Lattner
6dec8cb829 Code to handle FP_EXTEND is dead now. X86 doesn't support any data types to
FP_EXTEND from!

llvm-svn: 19674
2005-01-18 20:05:56 +00:00
Chris Lattner
798e9c85d6 Remove more dead code.
llvm-svn: 19673
2005-01-18 19:50:08 +00:00
Chris Lattner
401814508f The selection dag code handles the promotions from F32 to F64 for us, so we
don't need to even think about F32 in the X86 code anymore.

llvm-svn: 19672
2005-01-18 19:46:54 +00:00
Chris Lattner
dc09e52b3e Fix 124.m88ksim.
llvm-svn: 19667
2005-01-18 17:35:28 +00:00
Chris Lattner
a04b1ee7a8 Do not emit loads multiple times, potentially in the wrong places.
llvm-svn: 19661
2005-01-18 04:18:32 +00:00
Tanya Lattner
d3459278f2 Minor changes.
llvm-svn: 19660
2005-01-18 04:15:41 +00:00
Chris Lattner
722ddeb86e Eliminate bad assertions.
llvm-svn: 19659
2005-01-18 04:00:54 +00:00
Chris Lattner
8f3a8d96e2 * Eliminate the TokenSet and just use the ExprMap for both tokens and values.
* Insert some really pedantic assertions that will notice when we emit the
  same loads more than one time, exposing bugs.  This turns a miscompilation in
  bzip2 into a compile-fail.  yaay.

llvm-svn: 19658
2005-01-18 03:51:59 +00:00
Chris Lattner
b3edb09ede Rely on the code in MatchAddress to do this work. Otherwise we fail to
match (X+Y)+(Z << 1), because we match the X+Y first, consuming the index
register, then there is no place to put the Z.

llvm-svn: 19652
2005-01-18 02:25:52 +00:00
Chris Lattner
ce2e0125dc Fix a problem where probing for addressing modes caused expressions to be
emitted too early.  In particular, this fixes
Regression/CodeGen/X86/regpressure.ll:regpressure3.

This also improves the 2nd basic block in 164.gzip:flush_block, which went from

.LBBflush_block_1:      # loopentry.1.i
        movzx %EAX, WORD PTR [dyn_ltree + 20]
        movzx %ECX, WORD PTR [dyn_ltree + 16]
        mov DWORD PTR [%ESP + 32], %ECX
        movzx %ECX, WORD PTR [dyn_ltree + 12]
        movzx %EDX, WORD PTR [dyn_ltree + 8]
        movzx %EBX, WORD PTR [dyn_ltree + 4]
        mov DWORD PTR [%ESP + 36], %EBX
        movzx %EBX, WORD PTR [dyn_ltree]
        add DWORD PTR [%ESP + 36], %EBX
        add %EDX, DWORD PTR [%ESP + 36]
        add %ECX, %EDX
        add DWORD PTR [%ESP + 32], %ECX
        add %EAX, DWORD PTR [%ESP + 32]
        movzx %ECX, WORD PTR [dyn_ltree + 24]
        add %EAX, %ECX
        mov %ECX, 0
        mov %EDX, %ECX

to

.LBBflush_block_1:      # loopentry.1.i
        movzx %EAX, WORD PTR [dyn_ltree]
        movzx %ECX, WORD PTR [dyn_ltree + 4]
        add %ECX, %EAX
        movzx %EAX, WORD PTR [dyn_ltree + 8]
        add %EAX, %ECX
        movzx %ECX, WORD PTR [dyn_ltree + 12]
        add %ECX, %EAX
        movzx %EAX, WORD PTR [dyn_ltree + 16]
        add %EAX, %ECX
        movzx %ECX, WORD PTR [dyn_ltree + 20]
        add %ECX, %EAX
        movzx %EAX, WORD PTR [dyn_ltree + 24]
        add %ECX, %EAX
        mov %EAX, 0
        mov %EDX, %EAX

... which results in less spilling in the function.

This change alone speeds up 164.gzip from 37.23s to 36.24s on apoc.  The
default isel takes 37.31s.

llvm-svn: 19650
2005-01-18 01:06:26 +00:00
Chris Lattner
a78f9ced61 Fix indentation.
llvm-svn: 19649
2005-01-17 23:25:45 +00:00
Chris Lattner
dff1e3e86f Don't bother using max here.
llvm-svn: 19647
2005-01-17 23:02:13 +00:00
Chris Lattner
2d86b43318 Do not give token factor nodes outrageous weights
llvm-svn: 19645
2005-01-17 22:56:09 +00:00
Chris Lattner
f2878ce8ba Two changes:
1. Fold [mem] += (1|-1) into inc [mem]/dec [mem] to save some icache space.
2. Do not let token factor nodes prevent forming '[mem] op= val' folds.

llvm-svn: 19643
2005-01-17 22:10:42 +00:00
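A hedged sketch of the source that benefits from change 1 above (the assembly named in the comment is illustrative, not captured output):

/* The increment of a value in memory can now fold into a single
   read-modify-write instruction such as inc DWORD PTR [counter],
   rather than a load, an add of 1, and a store. */
void bump(int *counter) {
  *counter += 1;
}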
Chris Lattner
40c0fca632 Refactor load/op/store folding into its own method, no functionality changes.
llvm-svn: 19641
2005-01-17 19:25:26 +00:00
Chris Lattner
2348abc421 Fix a major regression from last night that prevented us from producing [mem] op= reg
operations.

The body of the if is less indented but unmodified in this patch.

llvm-svn: 19638
2005-01-17 17:49:14 +00:00
Chris Lattner
adb669ab1f Codegen this:
int %foo(int %X) {
        %T = add int %X, 13
        %S = mul int %T, 3
        ret int %S
}

as this:

        mov %ECX, DWORD PTR [%ESP + 4]
        lea %EAX, DWORD PTR [%ECX + 2*%ECX + 39]
        ret

instead of this:

        mov %ECX, DWORD PTR [%ESP + 4]
        mov %EAX, %ECX
        add %EAX, 13
        imul %EAX, %EAX, 3
        ret

llvm-svn: 19633
2005-01-17 06:48:02 +00:00
Tanya Lattner
5a10531cf8 Added tmp instructions to preserve ssa.
llvm-svn: 19632
2005-01-17 06:47:26 +00:00
Chris Lattner
51590b615c Fix test/Regression/CodeGen/X86/2005-01-17-CycleInDAG.ll and 132.ijpeg.
Do not fold a load into an operation if it will induce a cycle in the DAG.

Repeat after me: dAg.

llvm-svn: 19631
2005-01-17 06:26:58 +00:00
Chris Lattner
f1e85bec5a Do not fold a load into a comparison that is used in more than one place.
The comparison will probably be folded, so this is not ok to do.
This fixed 197.parser.

llvm-svn: 19624
2005-01-17 01:34:14 +00:00
Chris Lattner
1b8c8fe020 Do not codegen 'xor bool, true' as 'not reg'. not reg inverts the upper bits
of the bytereg.  This fixes yacr2, 300.twolf and probably others.

llvm-svn: 19622
2005-01-17 00:23:16 +00:00
Chris Lattner
46dac4394c Set up the shift and setcc types.
If we emit a load because we followed a token chain to get to it, try to
fold it into its single user if possible.

llvm-svn: 19620
2005-01-17 00:00:33 +00:00
Chris Lattner
4c88cc95ee Shift and setcc types default to the pointer type.
llvm-svn: 19619
2005-01-16 23:59:48 +00:00
Tanya Lattner
fea188af7e Added parameters to a few functions in order to allow me to change the functions to preserve SSA
llvm-svn: 19615
2005-01-16 08:51:10 +00:00
Chris Lattner
9ffc59287e * Adjust to changes in TargetLowering interfaces.
* Remove custom promotion for bool and byte select ops.  Legalize now
  promotes them for us.
* Allow folding ConstantPoolIndexes into EXTLOAD's, useful for float immediates.
* Do a better job of declaring which operations are not supported.
* Add some hacky code for TRUNCSTORE to pretend that we have truncstore
  for i16 types.  This is useful for testing promotion code because I can
  just remove 16-bit registers altogether and verify that programs work.

llvm-svn: 19614
2005-01-16 07:34:08 +00:00
Chris Lattner
b49d2a7b0f Use enums, move virtual dtor out of line.
llvm-svn: 19610
2005-01-16 07:28:11 +00:00
Chris Lattner
be2a427f51 cycles_t -> CycleCount_t
llvm-svn: 19604
2005-01-16 04:20:30 +00:00
Reid Spencer
afa1cb9e11 Rename BUILD_* to PROJ_*
llvm-svn: 19592
2005-01-16 02:21:29 +00:00
Tanya Lattner
66cf1a6f82 Fixed a couple of instructions that broke SSA.
llvm-svn: 19587
2005-01-16 02:14:17 +00:00
Chris Lattner
605b9a23a2 Improve compatibility with HPUX on Itanium, patch by Duraid Madina
llvm-svn: 19586
2005-01-16 01:31:31 +00:00
Chris Lattner
06c297f8ca Set up identity transforms.
llvm-svn: 19584
2005-01-16 01:20:18 +00:00
Chris Lattner
1d0e1ffe02 Move some information out of LegalizeDAG into the generic Target interface.
llvm-svn: 19581
2005-01-16 01:10:58 +00:00
Chris Lattner
98611ce291 Add a new target-independent code generator flag.
llvm-svn: 19567
2005-01-15 06:00:32 +00:00
Chris Lattner
f3d950e816 Add support for truncstore and *extload.
llvm-svn: 19566
2005-01-15 05:22:24 +00:00
Chris Lattner
27c91fac94 Adjust to CopyFromReg changes.
llvm-svn: 19561
2005-01-14 22:37:41 +00:00
Chris Lattner
c032990335 Fix Regression/CodeGen/PowerPC/2005-01-14-UndefLong.ll
llvm-svn: 19557
2005-01-14 20:22:02 +00:00
Chris Lattner
b0b49268c4 Fix: Regression/CodeGen/PowerPC/2005-01-14-SetSelectCrash.ll
llvm-svn: 19555
2005-01-14 19:31:00 +00:00
Chris Lattner
7a8788c9ac Add new ImplicitDef node, rename CopyRegSDNode class to RegSDNode.
llvm-svn: 19535
2005-01-13 20:50:02 +00:00
Chris Lattner
fce6a5439d Codegen factor nodes more intelligently according to perceived register pressure.
llvm-svn: 19532
2005-01-13 19:56:00 +00:00
Chris Lattner
cb4359465a Initial trivial (but stupid) codegen for this node.
llvm-svn: 19529
2005-01-13 18:01:36 +00:00
Chris Lattner
9a70166615 Add some really pedantic assertions to the load folding code. Fix a bunch
of cases where we accidentally emitted a load folded once and unfolded
elsewhere.

llvm-svn: 19522
2005-01-13 05:53:16 +00:00
Chris Lattner
2ab70aafe0 We can only fold a load into an op if there is exactly one use of the value.
Checking to see if the load has two uses is not equivalent, as the chain
value may have zero uses.

llvm-svn: 19518
2005-01-12 18:38:26 +00:00
Chris Lattner
4b03f0f99e Try both ways to fold an add together. This allows us to generate this code:

        imul %EAX, %EAX, 400
        add %ECX, %EAX
        add %ESI, DWORD PTR [%ECX + 4*%EDX]
        inc %EDX
        cmp %EDX, 100

instead of this:

        imul %EAX, %EAX, 400
        add %ECX, %EAX
        mov %EAX, %EDX
        shl %EAX, 2
        add %ECX, %EAX
        add %ESI, DWORD PTR [%ECX]
        inc %EDX
        cmp %EDX, 100

llvm-svn: 19513
2005-01-12 18:08:53 +00:00
Chris Lattner
61c572eb7f Fix a major miscompilation where we were overwriting the scale reg.
llvm-svn: 19511
2005-01-12 07:33:20 +00:00
Chris Lattner
5816f1a302 Do not use the type of the RHS constant to determine the type of the operation.
This fails for shifts because the constant is always 8 bits.

llvm-svn: 19508
2005-01-12 05:22:07 +00:00
Chris Lattner
89d6b21ae6 Do not lose the offset from the global when peephole optimizing instructions.
This fixes FreeBench/pcompress

llvm-svn: 19507
2005-01-12 05:17:28 +00:00
Jeff Cohen
614a5ec22a Fix more C++ compilation errors
llvm-svn: 19504
2005-01-12 04:29:05 +00:00
Chris Lattner
5ef92f3a40 Fix a compile error with VC++, which thinks that static const arrays need
to be dynamically initialized. :(

llvm-svn: 19503
2005-01-12 04:23:22 +00:00
Chris Lattner
627c64e5e5 Fix a bug that caused us to crash on povray. We weren't emitting an FP_REG_KILL into a block that had a successor with a FP PHI node.
llvm-svn: 19502
2005-01-12 04:21:28 +00:00
Chris Lattner
a5f0ba59a0 Print a load of a null pointer (in intel mode) like this:
        mov %AX, WORD PTR [0]

instead of like this:

        mov %AX, WORD PTR []

llvm-svn: 19501
2005-01-12 04:07:11 +00:00
Chris Lattner
360988bae2 Print a load of a null pointer like this:
        movw 0, %ax

instead of like this:

        movw , %ax

llvm-svn: 19500
2005-01-12 04:05:19 +00:00
Chris Lattner
3c85c67c97 Fix a crash compiling povray on UINT_TO_FP from i16.
llvm-svn: 19499
2005-01-12 04:00:00 +00:00
Chris Lattner
4e72a2a000 There are no [mem] op= reg instructions for FP, so remove their entries.
llvm-svn: 19496
2005-01-12 03:16:09 +00:00
Chris Lattner
00cb0ace9b Fix a bug where we didn't insert FP_REG_KILL instructions into MBB's that
contain FP PHI nodes but no other FP defining instructions.  This fixes
183.equake

llvm-svn: 19495
2005-01-12 02:57:10 +00:00
Chris Lattner
92166ed1df Fold TRUNCATE (LOAD P) into a smaller load from P.
llvm-svn: 19494
2005-01-12 02:19:06 +00:00
Chris Lattner
258b23bd9d Be more careful about order of arg evaluation for CopyToReg nodes. This shrinks
256.bzip2 from 7142 to 7103 lines of .s file.

Second, add initial support for folding loads into compares, though this code
is dynamically dead for now. :(

llvm-svn: 19493
2005-01-12 02:02:48 +00:00
Chris Lattner
604416e8f4 Fold some more [mem] op= val operators. This allows us to things like this
several times in 256.bzip2:

        mov %EAX, DWORD PTR [%ESP + 204]
-       mov %EAX, DWORD PTR [%EAX]
-       or %EAX, 2097152
-       mov %ECX, DWORD PTR [%ESP + 204]
-       mov DWORD PTR [%ECX], %EAX
+       or DWORD PTR [%EAX], 2097152

llvm-svn: 19492
2005-01-12 01:28:00 +00:00
Chris Lattner
e83ae1063f Fold loads into sign/zero extends. Instead of:

  mov %AL, BYTE PTR [%EDX + l18_length_code]
  movzx %EAX, %AL

Emit:

  movzx %EAX, BYTE PTR [%EDX + l18_length_code]

llvm-svn: 19489
2005-01-11 23:33:00 +00:00
Chris Lattner
87a38bd4a8 Comment out debug code :)
Select [mem] += Val operations.  For constants, we used to get:

  mov %ECX, -32768
  add %ECX, DWORD PTR [l4_match_start]
  mov DWORD PTR [l4_match_start], %ECX

Now we get:

  add DWORD PTR [l4_match_start], -32768

For other values we used to get:

  mov %EBP, %EDI   ;; because the add destroys the value
  add %EBP, DWORD PTR [l4_input_len]
  mov DWORD PTR [l4_input_len], %EBP

now we get:

  add DWORD PTR [l4_input_len], %EDI

Both of these use less registers than the alternative, are faster and smaller.

llvm-svn: 19488
2005-01-11 23:21:30 +00:00
Chris Lattner
282473a25d Handle the global address case here, not just the offset case.
llvm-svn: 19487
2005-01-11 22:58:43 +00:00
Chris Lattner
9eb2cc700b Treat int constants as not requiring a register, since they are almost always
folded into an instruction.

llvm-svn: 19486
2005-01-11 22:29:12 +00:00
Chris Lattner
7cb2220907 * Factor a bunch of binary operator cases into shared code.
* Fold loads into Add, sub, and, or, xor and mul when possible.
* Codegen shl X, 1 as add X, X

llvm-svn: 19483
2005-01-11 21:19:59 +00:00
Chris Lattner
b1a72cb39a Clear the whole array, always.
llvm-svn: 19482
2005-01-11 20:25:26 +00:00
Chris Lattner
b838c9748e Fold multiplies by 3,5,9 into addressing modes when possible.
llvm-svn: 19480
2005-01-11 19:37:02 +00:00
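For illustration (assumed, not from the commit): multiplies by 3, 5, and 9 match the scaled-index forms x + 2*x, x + 4*x, and x + 8*x, so source like this can become a single lea:

/* x * 9 == x + 8*x, which fits an addressing mode along the lines of
   lea %EAX, DWORD PTR [%EAX + 8*%EAX]. */
int times9(int x) {
  return x * 9;
}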
Chris Lattner
e7b1130b01 Instead of generating stuff like this:
        mov %ECX, %EAX
        add %ECX, 32768
        mov %SI, WORD PTR [2*%ECX + l13_prev]

Generate this:

        mov %SI, WORD PTR [2*%ECX + l13_prev + 65536]

This occurs when you have a GEP instruction where an index is
"something + imm".

llvm-svn: 19472
2005-01-11 06:36:20 +00:00
Chris Lattner
bb63a09cd1 Implement MEMCPY natively in terms of rep movs*
llvm-svn: 19468
2005-01-11 06:19:26 +00:00
Chris Lattner
b2b08a8bc1 Implement memset -> rep stos*
llvm-svn: 19467
2005-01-11 06:14:36 +00:00
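A sketch of calls the two commits above can now expand inline as rep stos*/rep movs* rather than library calls (the exact instruction choice depends on size and alignment; this is illustrative only):

#include <string.h>

/* memset is a candidate for rep stosd/stosb, and memcpy for
   rep movsd/movsb, when lowered natively by the backend. */
void clear_then_copy(char *dst, const char *src, unsigned n) {
  memset(dst, 0, n);
  memcpy(dst, src, n);
}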
Chris Lattner
58816a9e81 Announce that we don't support mem ops yet.
llvm-svn: 19466
2005-01-11 05:57:36 +00:00
Chris Lattner
f867443d7e Teach the address selector to make 'reg+reg' addressing modes.
llvm-svn: 19457
2005-01-11 04:40:19 +00:00
Chris Lattner
edf06be50e Emit NOT instructions.
llvm-svn: 19455
2005-01-11 04:31:30 +00:00
Chris Lattner
4e4bef2d6c Fix a bug emitting branches that broke a lot of programs.
llvm-svn: 19452
2005-01-11 04:06:27 +00:00
Chris Lattner
4b51297a94 Be more careful where we set ContainsFPCode. We were missing a set in the
int -> FP casting code.  Note that we don't have to set it for FP operations
that take FP values as operands: whatever produces the FP value will set the
flag.

llvm-svn: 19451
2005-01-11 03:50:45 +00:00
Chris Lattner
0c4c4094e3 Fix a major bug in setcc/cmov folding, where we accidentally
inverted the sense of the comparison.

llvm-svn: 19450
2005-01-11 03:37:59 +00:00
Chris Lattner
d188e03011 Take register pressure into account when we have to decide whether to
evaluate the LHS or the RHS of an operation first.  This causes good things
to happen.  For example, instead of compiling a loop to this:

.LBBstrength_result7_1: # loopentry
        movl 16(%esp), %edi
        movl (%edi), %edi             ;;; LOAD
        movl (%ecx), %ebx
        movl $2, (%eax,%ebx,4)
        movl (%edx), %ebx
        movl %esi, %ebp
        addl $21, %ebp
        addl $42, %esi
        cmpl $0, %edi                 ;;; USE
        cmovne %esi, %ebp
        cmpl %ebp, %ebx
        movl %ebp, %esi
        jg .LBBstrength_result7_1

We now compile it to this:

.LBBstrength_result7_1: # loopentry
        movl %edi, %ebx
        addl $42, %ebx
        addl $21, %edi
        movl (%ecx), %ebp              ;; LOAD
        cmpl $0, %ebp                  ;; USE
        cmovne %ebx, %edi
        movl (%edx), %ebx
        movl $2, (%eax,%ebx,4)
        movl (%esi), %ebx
        cmpl %edi, %ebx
        jg .LBBstrength_result7_1

Which reduces register pressure enough (in this case) to avoid spilling in the
loop.

As another example, consider the CodeGen/X86/regpressure.ll testcase.  We
used to generate this code for both cases:

regpressure1:
        subl $32, %esp
        movl %esi, 12(%esp)
        movl %edi, 8(%esp)
        movl %ebx, 4(%esp)
        movl %ebp, (%esp)
        movl 36(%esp), %ecx
        movl (%ecx), %eax
        movl 4(%ecx), %edx
        movl %edx, 24(%esp)
        movl 8(%ecx), %edx
        movl %edx, 16(%esp)
        movl 12(%ecx), %edx
        movl 16(%ecx), %esi
        movl 20(%ecx), %edi
        movl 24(%ecx), %ebx
        movl %ebx, 28(%esp)
        movl 28(%ecx), %ebx
        movl 32(%ecx), %ebp
        movl %ebp, 20(%esp)
        movl 36(%ecx), %ecx
        imull 24(%esp), %eax
        imull 16(%esp), %eax
        imull %edx, %eax
        imull %esi, %eax
        imull %edi, %eax
        imull 28(%esp), %eax
        imull %ebx, %eax
        imull 20(%esp), %eax
        imull %ecx, %eax
        movl (%esp), %ebp
        movl 4(%esp), %ebx
        movl 8(%esp), %edi
        movl 12(%esp), %esi
        addl $32, %esp
        ret

This code is basically trying to do all of the loads first, then execute all
of the multiplies.  Because we run out of registers, lots of spill code happens.
We now generate this code for both cases:

regpressure1:
        movl 4(%esp), %ecx
        movl (%ecx), %eax
        movl 4(%ecx), %edx
        imull %edx, %eax
        movl 8(%ecx), %edx
        imull %edx, %eax
        movl 12(%ecx), %edx
        imull %edx, %eax
        movl 16(%ecx), %edx
        imull %edx, %eax
        movl 20(%ecx), %edx
        imull %edx, %eax
        movl 24(%ecx), %edx
        imull %edx, %eax
        movl 28(%ecx), %edx
        imull %edx, %eax
        movl 32(%ecx), %edx
        imull %edx, %eax
        movl 36(%ecx), %ecx
        imull %ecx, %eax
        ret

which is much nicer (when we fold loads into the muls it will be even better).
The old instruction selector used to produce the good code for regpressure1
but not for regpressure2, as it depended on the order of operations in the
LLVM code.

llvm-svn: 19449
2005-01-11 03:11:44 +00:00
Chris Lattner
497e24c885 Fold setcc instructions into selects.
llvm-svn: 19438
2005-01-10 22:10:13 +00:00
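An assumed illustration of the pattern: a comparison feeding a select can now become a compare plus a conditional move instead of materializing the boolean first.

/* With the setcc folded into the select, this can compile down to a
   cmp followed by a cmov, with no intermediate 0/1 value. */
int imax(int a, int b) {
  return a > b ? a : b;
}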
Chris Lattner
65d007ab62 Add conditional moves for the parity flag.
llvm-svn: 19437
2005-01-10 22:09:33 +00:00
Chris Lattner
d61491dea2 Implement 8-bit multiply for X86.
llvm-svn: 19435
2005-01-10 20:55:48 +00:00
Chris Lattner
fcab5f75c0 Codegen (Reg|imm)+&GV as an LEA, because we cannot put it into the immediate field
of an ADDri (due to current restrictions on MachineOperand :( ).  This allows
us to generate:

        leal Data+16000, %edx

instead of:

        movl $Data, %edx
        addl $16000, %edx

llvm-svn: 19420
2005-01-09 20:20:29 +00:00
Chris Lattner
35375c11bf Fix copy-and-paste errors for FP -> Int. This fixes fldry
llvm-svn: 19418
2005-01-09 19:49:59 +00:00
Chris Lattner
45155a3dee Initial implementation of FP->INT and INT->FP casts
Also, fix zero_extend from bool to i8, which fixes Shootout/objinst.

llvm-svn: 19414
2005-01-09 18:52:44 +00:00
Chris Lattner
9ca9b20447 Fix a subtle bug involving constant expr casts from int to fp
llvm-svn: 19410
2005-01-09 01:49:29 +00:00
Chris Lattner
c5e53c07fd Implement varargs and returnaddress/frameaddress intrinsics. With this
patch, all of SingleSource/UnitTests passes.

llvm-svn: 19408
2005-01-09 00:01:27 +00:00
Chris Lattner
ca81756527 Okay 15th time is the charm. Looking at the vector size is useless as it
gets clobbered by a previous statement.  This fixes all calls finally.

llvm-svn: 19399
2005-01-08 20:51:36 +00:00
Chris Lattner
85816cff9a Okay, my off by one was actually off by two. This fixes Generic/2003-07-07-BadLongConst.ll
llvm-svn: 19398
2005-01-08 20:39:31 +00:00
Chris Lattner
2d68cb6cf4 Fix off by one error
llvm-svn: 19396
2005-01-08 20:31:34 +00:00
Chris Lattner
c4d075cfa3 Adjust to changes in LowerCallTo interface
Minor bugfixes

llvm-svn: 19376
2005-01-08 19:28:19 +00:00
Chris Lattner
6c7d3bd8ea Wrap long line.
llvm-svn: 19367
2005-01-08 06:59:50 +00:00
Chris Lattner
473ec492f7 The X86 instruction selector already handles codegen of:
store float 123.45, float* %P

as an integer store.  This adds handling of float immediate stores as integers
for arguments passed to function calls.

This is now tested by CodeGen/X86/store-fp-constant.ll

llvm-svn: 19364
2005-01-08 05:45:24 +00:00
Chris Lattner
2c398fc8f6 Allow the selection-dag based selector to be disabled with -disable-pattern-isel.
For now, this is the default, as the current selector is missing some big pieces.
To enable the new selector, pass -disable-pattern-isel=false to llc or lli.

llvm-svn: 19335
2005-01-07 07:50:50 +00:00
Chris Lattner
216198574d Reimplementation of the X86 pattern isel. This is still missing many large
pieces, but can already do amazing things in some cases.

llvm-svn: 19334
2005-01-07 07:49:41 +00:00
Chris Lattner
74019f517a This file is now dead.
llvm-svn: 19333
2005-01-07 07:49:05 +00:00
Chris Lattner
079b497982 Add a new prototype
llvm-svn: 19332
2005-01-07 07:48:33 +00:00
Chris Lattner
fb848e6fad First draft of new Target interface
llvm-svn: 19324
2005-01-07 07:44:53 +00:00
Chris Lattner
608dd77d6b Codegen -1 and -0.0 more efficiently. This implements CodeGen/X86/negatize_zero.ll
llvm-svn: 19313
2005-01-06 21:19:16 +00:00
Jeff Cohen
146e5504e5 Fix CBE code so that it compiles with VC++.
llvm-svn: 19303
2005-01-06 04:21:49 +00:00
Chris Lattner
6d651234d6 1. If a double FP constant must be put into a constant pool, but it can be
precisely represented as a float, put it into the constant pool as a
   float.
2. Use the cbw/cwd/cdq instructions instead of an explicit SAR for signed
   division.

llvm-svn: 19291
2005-01-05 16:30:14 +00:00
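Two illustrative (assumed) cases for the changes above: a double literal that is exactly representable as a float, and a signed division whose sign extension can use cdq.

/* 0.5 is exactly representable in single precision, so it can live in
   the constant pool as a 4-byte float and be loaded with a widening
   FP load. */
double half(void) {
  return 0.5;
}

/* Signed 32-bit division needs EDX:EAX; cdq sign-extends EAX into EDX
   in one instruction instead of an explicit SAR by 31. */
int sdiv(int a, int b) {
  return a / b;
}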
Chris Lattner
b438f5251f Minor optimization to allocate R8 registers in a better order.
llvm-svn: 19289
2005-01-05 16:09:16 +00:00
Jeff Cohen
36968ed8c1 Revert elimination of global variable hack... still needed.
llvm-svn: 19273
2005-01-03 16:34:19 +00:00
Chris Lattner
1aaf8cccb2 ADC and IMUL are also commutable.
llvm-svn: 19264
2005-01-03 01:27:59 +00:00
Chris Lattner
93fc4bd9cb This hunk:
-  unsigned TrueValue = getReg(TrueVal, BB, BB->begin());
+  unsigned TrueValue = getReg(TrueVal);

Fixes the PPC regressions from last night.

The other hunk is just a clarity improvement.

llvm-svn: 19263
2005-01-02 23:07:31 +00:00
Jeff Cohen
1087b72875 Eliminate the use of the global variable hack in the X86 target that was used
to get Visual Studio to link in X86.lib to the executables that need it.  There
is another way of doing it.

llvm-svn: 19252
2005-01-02 04:23:12 +00:00
Chris Lattner
a78fd4726e Disable 2->3 address promotion of add and inc instructions to LEA's. In
addition to being three address, LEA's don't set the flags.

This fixes 186.crafty.

llvm-svn: 19251
2005-01-02 04:18:17 +00:00
Chris Lattner
3ef32da6c3 Add a new method.
llvm-svn: 19249
2005-01-02 02:38:18 +00:00
Chris Lattner
95f1e628ed Add support for SETNPr to lower to memory form.
llvm-svn: 19248
2005-01-02 02:37:46 +00:00
Chris Lattner
d6bc921fa8 Implement the convertToThreeAddress method, add support for inverting JP/JNP
branches.

llvm-svn: 19247
2005-01-02 02:37:07 +00:00
Chris Lattner
0d6f03e52b Two changes here:
1. Add new instructions for checking parity flags: JP, JNP, SETP, SETNP.
2. Set the isCommutable and isPromotableTo3Address bits on several
   instructions.

llvm-svn: 19246
2005-01-02 02:35:46 +00:00
Chris Lattner
cc26e332b3 Add some bits that can be set for instructions.
llvm-svn: 19241
2005-01-02 02:27:48 +00:00
Chris Lattner
ad63a0d6a4 Fix a FIXME: Select instructions on longs were miscompiled.
While we're at it, improve codegen of select instructions.  For this
testcase:

int %test(bool %C, int %A, int %B) {
  %D = select bool %C, int %A, int %B
  ret int %D
}

We used to generate this code:

_test:
        cmpwi cr0, r3, 0
        bne .LBB_test_2 ;
.LBB_test_1:    ;
        b .LBB_test_3   ;
.LBB_test_2:    ;
        or r5, r4, r4
.LBB_test_3:    ;
        or r3, r5, r5
        blr

Now we emit:

_test:
        cmpwi cr0, r3, 0
        bne .LBB_test_2 ;
.LBB_test_1:    ;
        or r4, r5, r5
.LBB_test_2:    ;
        or r3, r4, r4
        blr

-Chris

llvm-svn: 19214
2005-01-01 16:10:12 +00:00
Chris Lattner
ccd0d44133 Substantially improve the code generated by non-folded setcc instructions.
In particular, instead of compiling this:

bool %test(int %A, int %B) {
  %C = setlt int %A, %B
  ret bool %C
}

to this:

test:
        save %sp, -96, %sp
        subcc %i0, %i1, %g0
        bl .LBBtest_1   !
        nop
        ba .LBBtest_2   !
        nop
.LBBtest_1:     !
        or %g0, 1, %i0
        ba .LBBtest_3   !
        nop
.LBBtest_2:     !
        or %g0, 0, %i0
        ba .LBBtest_3   !
        nop
.LBBtest_3:     !
        restore %g0, %g0, %g0
        retl
        nop

We now compile it to this:

test:
        save %sp, -96, %sp
        subcc %i0, %i1, %g0
        or %g0, 1, %i0
        bl .LBBtest_2   !
        nop
.LBBtest_1:     !
        or %g0, %g0, %i0
.LBBtest_2:     !
        restore %g0, %g0, %g0
        retl
        nop

llvm-svn: 19213
2005-01-01 16:06:57 +00:00
Chris Lattner
37b9eef884 Fix PR490
Fix testcase CodeGen/CBackend/2004-12-28-LogicalConstantExprs.ll

llvm-svn: 19176
2004-12-29 04:00:09 +00:00
Chris Lattner
3b78513843 Remove unused enum value
llvm-svn: 19024
2004-12-17 22:41:46 +00:00
Chris Lattner
5f28e9fafc Remove unused #include
llvm-svn: 19021
2004-12-17 19:07:04 +00:00
Chris Lattner
d11ba51208 Change the sentinel
llvm-svn: 19007
2004-12-17 00:46:51 +00:00
Chris Lattner
59d0c02d2b Create a stack slot for the return address lazily instead of eagerly. This
saves a small amount of time for functions that don't call llvm.returnaddress
or llvm.frameaddress (which is almost all functions).

llvm-svn: 19006
2004-12-17 00:07:46 +00:00
Tanya Lattner
3b44c9a485 Chris is a pain ;) Removing reassociate.
llvm-svn: 19005
2004-12-16 23:16:16 +00:00
Tanya Lattner
93142a02ee Removing commented out lines.
llvm-svn: 19004
2004-12-16 23:13:16 +00:00
Tanya Lattner
c537a19bad Removed LICM and GCSE.
llvm-svn: 19003
2004-12-16 23:07:36 +00:00
Chris Lattner
50411edddf Remove dead #include
llvm-svn: 18994
2004-12-16 19:32:38 +00:00
Chris Lattner
4b1d58bf4b Adjust to changes in asmwriter filenames
llvm-svn: 18987
2004-12-16 17:33:24 +00:00
Chris Lattner
cea3ae9792 Specify all of the targets built.
llvm-svn: 18985
2004-12-16 17:26:44 +00:00
Chris Lattner
dc59826592 Use the rules in Makefile.rules to build SparcV9GenCodeEmitter.inc instead
of custom rules.

llvm-svn: 18984
2004-12-16 16:47:56 +00:00
Chris Lattner
d311c2587d Fix header
llvm-svn: 18983
2004-12-16 16:47:03 +00:00
Chris Lattner
a0561d43b2 Factor out common .td file chunks.
llvm-svn: 18982
2004-12-16 16:31:57 +00:00
Chris Lattner
cf5cd542d4 Fix PR485: instead of emitting zero-sized arrays, emit arrays of size 1.
llvm-svn: 18974
2004-12-15 23:13:15 +00:00
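An assumed illustration of the workaround: a zero-length array in the emitted C is given one element, since some compilers reject zero-sized arrays.

/* Sketch, not actual CBE output: the trailing array gets size 1
   instead of size 0 so every C compiler accepts it. */
struct flexible {
  int len;
  char data[1];   /* was char data[0] */
};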
Brian Gaeke
1b6a79c6d5 The mystery of Olden/tsp solved, and more opportunities for speedup.
llvm-svn: 18932
2004-12-14 09:10:10 +00:00
Brian Gaeke
83dcf14697 Get rid of shifts by zero in most cases.
llvm-svn: 18931
2004-12-14 08:21:02 +00:00
Chris Lattner
0e9a9d2098 When generating code for X86 targets, make sure the fp control word is set
to 64-bit precision, not 80 bits.

llvm-svn: 18915
2004-12-13 21:52:52 +00:00
Chris Lattner
3fe2a6aa6a Add some notes
llvm-svn: 18911
2004-12-13 20:13:10 +00:00
Chris Lattner
4136428410 Set the rounding mode for the X86 FPU to 64-bits instead of 80-bits. We
don't support long double anyway, and this gives us FP results closer to
other targets.

This also speeds up 179.art from 41.4s to 18.32s, by eliminating a problem
with extra precision that causes an FP == comparison to fail (leading to
extra loop iterations).

llvm-svn: 18895
2004-12-13 17:23:11 +00:00
Brian Gaeke
aa9f3851f7 Add V8 SPEC status.
llvm-svn: 18844
2004-12-13 00:27:35 +00:00
Chris Lattner
9f0237ca85 Fix Regression/CodeGen/PowerPC/2004-12-12-ZeroSizeCommon.ll, and all programs
when compiled with debug information.

llvm-svn: 18835
2004-12-12 20:36:19 +00:00
Chris Lattner
dc33000e67 CSE calls to getTypeSize.
llvm-svn: 18833
2004-12-12 20:31:00 +00:00
Chris Lattner
6131b06f73 Use the target triple to pick this target.
llvm-svn: 18830
2004-12-12 17:40:28 +00:00
Brian Gaeke
757a3aa9b9 Complete the list of MultiSource failures.
llvm-svn: 18826
2004-12-12 08:22:11 +00:00
Brian Gaeke
a440424596 hbd should be working now.
llvm-svn: 18824
2004-12-12 07:42:59 +00:00
Brian Gaeke
ee60e35a28 Finally enable the setcc-branch folding code.
Also, fix a bug where ubyte 255 would sometimes be output as -1. This
was afflicting hbd.

llvm-svn: 18823
2004-12-12 07:42:58 +00:00
Brian Gaeke
09c4a78ece Add (currently disabled) code for canFoldSetCC
llvm-svn: 18820
2004-12-12 06:22:30 +00:00
Brian Gaeke
55c163e41e Add stubs for setcc-branch folding support.
llvm-svn: 18818
2004-12-12 06:01:26 +00:00
Brian Gaeke
5d213ad8c3 Print llvm code one function at a time.
llvm-svn: 18805
2004-12-11 22:17:07 +00:00
Brian Gaeke
80831ad19a The JIT should print the LLVM code for each function before selecting instructions for it.
llvm-svn: 18803
2004-12-11 18:41:09 +00:00
Brian Gaeke
220fb4f8cd Bools are *also* not ints. Sigh. Furthermore, most of the TargetMachine
ctor parameters can be defaulted.

Print the transformed llvm code input to the instruction selector
when -print-machineinstrs is on, just like V9.

llvm-svn: 18794
2004-12-11 05:19:04 +00:00
Brian Gaeke
24ea5d3dd7 Look for many more moves to fold (previously, we only recognized * as a move):
 *or g0, x     add g0, x
  or  x, g0    add  x, g0
  or  0, x     add  0, x
  or  x, 0     add  x, 0

llvm-svn: 18793
2004-12-11 05:19:03 +00:00
Brian Gaeke
948a8145bf Make GEPs not suck so much:
* Don't emit the Index * ElementSize multiply if Index is a constant.
* Use a shift, not a multiply, if ElementSize is 1/2/4/8.
* If ElementSize fits in the immediate field of SMUL, then put it there.

Fix a bug where struct offsets might be truncated (ConstantSInt::get is
now used instead of ConstantInt::get).

llvm-svn: 18792
2004-12-11 05:19:02 +00:00
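A rough (assumed) example of the two GEP cases mentioned above: a constant index needs no multiply at all, and a power-of-two element size can use a shift instead of a multiply.

/* 8-byte elements: the variable index becomes i << 3 plus a constant
   4 for the field offset; the constant index folds to offset 24 with
   no multiply emitted. */
struct pt { int x, y; };

int pick(struct pt *a, int i) { return a[i].y; }
int first(struct pt *a)       { return a[3].x; }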
Brian Gaeke
e0643b792b Update lists of failing benchmarks, including info on which
ones are failing in cbe.

llvm-svn: 18791
2004-12-11 05:19:01 +00:00
Brian Gaeke
dc916ae40f Move -lowerselect later in the chain; some select instructions were
slipping through into the instruction selector, which can't deal with
them yet.

llvm-svn: 18758
2004-12-10 08:39:30 +00:00
Brian Gaeke
7ec3883e1a Add the rest of the multiply instructions.
llvm-svn: 18757
2004-12-10 08:39:29 +00:00
Brian Gaeke
2a9ecc433f Support binary operations with immediates for <= cInt.
llvm-svn: 18756
2004-12-10 08:39:28 +00:00
Brian Gaeke
45f3af8d88 Update lists of failing benchmarks (except C++...something is the
matter with my sparcv8 libstdc++.a) and to-do list.

llvm-svn: 18755
2004-12-10 08:39:27 +00:00
Brian Gaeke
1da3720799 Emit correct prototype for __builtin_alloca on V8.
llvm-svn: 18745
2004-12-10 05:44:45 +00:00
Brian Gaeke
91cf4fe1ca Add SparcV8 target back into the build
llvm-svn: 18738
2004-12-10 04:54:21 +00:00
Brian Gaeke
5c8cefdf9a Adjust paths: Sparc/V8 --> SparcV8
llvm-svn: 18737
2004-12-10 04:48:57 +00:00
Brian Gaeke
262fe40da0 Make this file self-contained.
llvm-svn: 18736
2004-12-10 04:46:30 +00:00
Brian Gaeke
ef7289195a Update list of failing MultiSource benchmarks. It works out to +5 -5, but I
think some of these might be the CFE's fault; a rebuild should come soon.

llvm-svn: 18735
2004-12-10 04:42:46 +00:00
Brian Gaeke
999a5ba9ba When FpMOVDs appeared in pairs, we were mistakenly skipping over the latter of
each pair. I think this fixes that.

One of these days, I swear I'm going to get the hang of C++ iterators.
Really.

llvm-svn: 18734
2004-12-10 04:42:45 +00:00
Brian Gaeke
a196f80d71 We're continuing to make progress on MultiSource.
llvm-svn: 18714
2004-12-09 18:54:31 +00:00
Brian Gaeke
706a0c3988 Bytes and shorts are aligned differently from words.
llvm-svn: 18713
2004-12-09 18:51:02 +00:00
Brian Gaeke
43cb9ee8a4 Fix asm-printing directives (how did we not see this before...apparently,
everything was an int!)

llvm-svn: 18712
2004-12-09 18:51:01 +00:00
Chris Lattner
bf9258ba8b Move the lowering of intrinsics before FP constant emission, in case
intrinsic lowering ever introduces constants.

Rename local symbols before printing function bodies, fixing 255.vortex
with the CBE!!!

llvm-svn: 18534
2004-12-05 06:49:44 +00:00
Chris Lattner
40e7175e44 Fix test/Regression/CodeGen/CBackend/2004-12-03-ExternStatics.ll and
PR472

llvm-svn: 18459
2004-12-03 17:19:10 +00:00
Brian Gaeke
cabac53133 This code rotted - change it to call abort() until someone wants
to rewrite this to use relocations.

llvm-svn: 18453
2004-12-03 06:57:14 +00:00
Tanya Lattner
d24558ac54 When writing the kernel, save the branches until the end. They are still put in the "right place" in the schedule, but sometimes, when folding to make a kernel, instructions are added between branches. This is wrong. To avoid this, we handle branches specially.
llvm-svn: 18450
2004-12-03 05:25:22 +00:00
Chris Lattner
99c8cf8ef8 Fix a regression caused by the previous patch
llvm-svn: 18449
2004-12-03 05:13:15 +00:00
Chris Lattner
eb18ce7a43 The stripping pass as we know it is about to disappear
llvm-svn: 18436
2004-12-02 21:05:01 +00:00
John Criswell
faf1e07531 Reverting revision 1.209.
Including alloca.h on Solaris brings in the prototype of strftime(), which
breaks compilation of CBE generated code.

llvm-svn: 18435
2004-12-02 19:02:49 +00:00
Chris Lattner
316f923a9c Spill/restore X86 floating point stack registers with 64-bits of precision
instead of 80-bits of precision.  This fixes PR467.

This change speeds up fldry on X86 with LLC from 7.32s on apoc to 4.68s.

llvm-svn: 18433
2004-12-02 18:17:31 +00:00
Chris Lattner
dfdd49b7af Consider 64-bit registers to be FP as well.
llvm-svn: 18432
2004-12-02 17:57:21 +00:00
Tanya Lattner
01adb68f38 Reworked branch addition in the prologue. Added a check for infinite loops, which are not modulo scheduled.
llvm-svn: 18419
2004-12-02 07:22:15 +00:00
Tanya Lattner
893f987574 Reverting this patch:
http://mail.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20041122/021428.html

It broke MultiSource/Applications/obsequi

llvm-svn: 18407
2004-12-01 18:27:03 +00:00
Chris Lattner
a94cee5109 Initial support for packed types, contributed by Morten Ofstad
llvm-svn: 18406
2004-12-01 17:14:28 +00:00
Chris Lattner
0abd8109fa Do not let GCC emit a warning for INT64_MIN
llvm-svn: 18398
2004-11-30 21:33:58 +00:00