Chris Lattner
4b688a1c70
This mega patch converts us from using Function::a{iterator|begin|end} to
using Function::arg_{iterator|begin|end}. Likewise Module::g* -> Module::global_*.
This patch is contributed by Gabor Greif, thanks!
llvm-svn: 20597
2005-03-15 04:54:21 +00:00
Reid Spencer
b0ca4aa8cd
Patch to make assembly output compatible with mingw compilation (identical
to cygwin)
llvm-svn: 20520
2005-03-08 17:02:05 +00:00
Chris Lattner
a024984017
Fix spelling, patch contributed by Gabor Greif!
llvm-svn: 20343
2005-02-27 06:18:25 +00:00
Chris Lattner
9838ab1271
Silence some uninit variable warnings.
llvm-svn: 20284
2005-02-23 05:57:21 +00:00
Chris Lattner
ab92b92bc5
We can fold promoted and non-promoted loads into divs also!
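For illustration only (operands invented, not taken from real output), the non-promoted case lets a signed divide read its operand straight from memory instead of going through a scratch register first:
cdq                               # sign-extend %EAX into %EDX:%EAX
idiv DWORD PTR [%ESP + 8]         # divide directly by the memory operand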
llvm-svn: 19835
2005-01-25 20:35:10 +00:00
Chris Lattner
a9a0369879
Fold promoted loads into binary ops for FP, allowing us to generate m32 forms
of FP ops.
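As a rough sketch (operands and exact spelling illustrative), folding the load means using the memory form of the FP instruction rather than a separate load:
fadd DWORD PTR [%ESP + 4]         # m32 form: add the float in memory to %ST(0)
instead of
fld DWORD PTR [%ESP + 4]          # push the float onto the FP stack
faddp %ST(1)                      # add and pop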
llvm-svn: 19834
2005-01-25 20:03:11 +00:00
Chris Lattner
6ff85c3152
Silence a warning.
llvm-svn: 19798
2005-01-23 23:20:06 +00:00
Chris Lattner
94952e0947
Allow the FP stackifier to completely ignore functions that do not use FP at
all. This should speed up the X86 backend fairly significantly on integer
codes. Now if only we didn't have to compute livevar still... ;-)
llvm-svn: 19796
2005-01-23 23:13:59 +00:00
Reid Spencer
5c7b6e83f0
Support Cygwin assembly generation. The Cygwin version of the GNU assembler
doesn't support certain directives, and symbols on Cygwin are prefixed with
an underscore. This patch makes the necessary adjustments to the output.
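For example (symbol name invented), a function that would otherwise be emitted as plain foo gets the underscore prefix:
.globl _foo
_foo:                             # leading underscore added for cygwin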
llvm-svn: 19775
2005-01-23 03:52:14 +00:00
Chris Lattner
b4cf4ffb04
Speed up folding operations into loads.
llvm-svn: 19733
2005-01-21 21:43:02 +00:00
Chris Lattner
fd4d7f71ae
The ever-important vanity pass name :)
llvm-svn: 19731
2005-01-21 21:35:14 +00:00
Chris Lattner
5f2fbeaa69
Fix a FIXME: realize that argument stores are all independent (don't alias)
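For instance (slots invented), two outgoing argument stores always target disjoint stack slots, so they can never alias and the scheduler is free to reorder them:
mov DWORD PTR [%ESP], %EAX        # argument 0
mov DWORD PTR [%ESP + 4], %ECX    # argument 1, a different slot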
llvm-svn: 19728
2005-01-21 19:46:38 +00:00
Chris Lattner
febeb380ae
Implement ADD_PARTS/SUB_PARTS so that 64-bit integer add/sub work. This
fixes most of the remaining llc-beta failures.
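Roughly speaking (operands invented), a 64-bit add becomes an add of the low halves followed by an add-with-carry of the high halves:
mov %EAX, DWORD PTR [%ESP + 4]    # low half of the first operand
mov %EDX, DWORD PTR [%ESP + 8]    # high half of the first operand
add %EAX, DWORD PTR [%ESP + 12]   # add the low halves
adc %EDX, DWORD PTR [%ESP + 16]   # add the high halves plus the carry
ret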
llvm-svn: 19716
2005-01-20 18:53:00 +00:00
Chris Lattner
8b0a2a3251
Fix a crash compiling 134.perl.
llvm-svn: 19711
2005-01-20 16:50:16 +00:00
Chris Lattner
6534e1ede3
Fix a problem where we were literally selecting for INCREASED register
pressure, not decreased register pressure. Fix a problem where we accidentally
swapped the operands of SHLD, which caused fourinarow to fail. This fixes
fourinarow.
llvm-svn: 19697
2005-01-19 17:24:34 +00:00
Chris Lattner
b75589131d
When commuting these instructions, make sure to actually swap the operands too.
llvm-svn: 19694
2005-01-19 16:55:52 +00:00
Chris Lattner
fde1a5688b
Implement Regression/CodeGen/X86/rotate.ll: emit rotate instructions (which
typically cost 1 cycle) instead of shld/shrd instruction (which are typically
6 or more cycles). This also saves code space.
For example, instead of emitting:
rotr:
mov %EAX, DWORD PTR [%ESP + 4]
mov %CL, BYTE PTR [%ESP + 8]
shrd %EAX, %EAX, %CL
ret
rotli:
mov %EAX, DWORD PTR [%ESP + 4]
shrd %EAX, %EAX, 27
ret
Emit:
rotr32:
mov %CL, BYTE PTR [%ESP + 8]
mov %EAX, DWORD PTR [%ESP + 4]
ror %EAX, %CL
ret
rotli32:
mov %EAX, DWORD PTR [%ESP + 4]
ror %EAX, 27
ret
We also emit byte rotate instructions which do not have a sh[lr]d counterpart
at all.
llvm-svn: 19692
2005-01-19 08:07:05 +00:00
Chris Lattner
34757ff939
Add rotate instructions.
llvm-svn: 19690
2005-01-19 07:50:03 +00:00
Chris Lattner
e539ce8223
Match 16-bit shld/shrd instructions as well, implementing shift-double.llx:test5
llvm-svn: 19689
2005-01-19 07:37:26 +00:00
Chris Lattner
9d5ee289d7
Improve coverage of the X86 instruction set by adding 16-bit shift doubles.
llvm-svn: 19687
2005-01-19 07:31:24 +00:00
Chris Lattner
c03f360215
Teach the code generator that shrd/shld is commutable if it has an immediate.
This allows us to generate this:
foo:
mov %EAX, DWORD PTR [%ESP + 4]
mov %EDX, DWORD PTR [%ESP + 8]
shld %EDX, %EDX, 2
shl %EAX, 2
ret
instead of this:
foo:
mov %EAX, DWORD PTR [%ESP + 4]
mov %ECX, DWORD PTR [%ESP + 8]
mov %EDX, %EAX
shrd %EDX, %ECX, 30
shl %EAX, 2
ret
Note the magically transmogrifying immediate.
llvm-svn: 19686
2005-01-19 07:11:01 +00:00
Chris Lattner
575e912fcf
Codegen long >> 2 to this:
foo:
mov %EAX, DWORD PTR [%ESP + 4]
mov %EDX, DWORD PTR [%ESP + 8]
shrd %EAX, %EDX, 2
sar %EDX, 2
ret
instead of this:
test1:
mov %ECX, DWORD PTR [%ESP + 4]
shr %ECX, 2
mov %EDX, DWORD PTR [%ESP + 8]
mov %EAX, %EDX
shl %EAX, 30
or %EAX, %ECX
sar %EDX, 2
ret
and long << 2 to this:
foo:
mov %EAX, DWORD PTR [%ESP + 4]
mov %ECX, DWORD PTR [%ESP + 8]
*** mov %EDX, %EAX
shrd %EDX, %ECX, 30
shl %EAX, 2
ret
instead of this:
foo:
mov %EAX, DWORD PTR [%ESP + 4]
mov %ECX, %EAX
shr %ECX, 30
mov %EDX, DWORD PTR [%ESP + 8]
shl %EDX, 2
or %EDX, %ECX
shl %EAX, 2
ret
The extra copy (marked ***) can be eliminated when I teach the code generator
that shrd32rri8 is really commutative.
llvm-svn: 19681
2005-01-19 06:18:43 +00:00
Chris Lattner
419a5d213b
X86 shifts mask the amount.
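The hardware only looks at the low 5 bits of a 32-bit shift count, so the amount is effectively taken modulo 32; for example (registers invented):
mov %CL, 33
shl %EAX, %CL                     # shifts by 33 & 31 = 1, not by 33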
llvm-svn: 19678
2005-01-19 03:36:30 +00:00
Chris Lattner
6dec8cb829
Code to handle FP_EXTEND is dead now. X86 doesn't support any data types to
FP_EXTEND from!
llvm-svn: 19674
2005-01-18 20:05:56 +00:00
Chris Lattner
798e9c85d6
Remove more dead code.
llvm-svn: 19673
2005-01-18 19:50:08 +00:00
Chris Lattner
401814508f
The selection dag code handles the promotions from F32 to F64 for us, so we
don't need to even think about F32 in the X86 code anymore.
llvm-svn: 19672
2005-01-18 19:46:54 +00:00
Chris Lattner
dc09e52b3e
Fix 124.m88ksim.
llvm-svn: 19667
2005-01-18 17:35:28 +00:00
Chris Lattner
a04b1ee7a8
Do not emit loads multiple times, potentially in the wrong places.
llvm-svn: 19661
2005-01-18 04:18:32 +00:00
Chris Lattner
722ddeb86e
Eliminate bad assertions.
llvm-svn: 19659
2005-01-18 04:00:54 +00:00
Chris Lattner
8f3a8d96e2
* Eliminate the TokenSet and just use the ExprMap for both tokens and values.
* Insert some really pedantic assertions that will notice when we emit the
same load more than once, exposing bugs. This turns a miscompilation in
bzip2 into a compile failure. yaay.
llvm-svn: 19658
2005-01-18 03:51:59 +00:00
Chris Lattner
b3edb09ede
Rely on the code in MatchAddress to do this work. Otherwise we fail to
match (X+Y)+(Z << 1), because we match the X+Y first, consuming the index
register, and then there is no place to put the Z.
llvm-svn: 19652
2005-01-18 02:25:52 +00:00
Chris Lattner
ce2e0125dc
Fix a problem where probing for addressing modes caused expressions to be
emitted too early. In particular, this fixes
Regression/CodeGen/X86/regpressure.ll:regpressure3.
This also improves the 2nd basic block in 164.gzip:flush_block, which went from
.LBBflush_block_1: # loopentry.1.i
movzx %EAX, WORD PTR [dyn_ltree + 20]
movzx %ECX, WORD PTR [dyn_ltree + 16]
mov DWORD PTR [%ESP + 32], %ECX
movzx %ECX, WORD PTR [dyn_ltree + 12]
movzx %EDX, WORD PTR [dyn_ltree + 8]
movzx %EBX, WORD PTR [dyn_ltree + 4]
mov DWORD PTR [%ESP + 36], %EBX
movzx %EBX, WORD PTR [dyn_ltree]
add DWORD PTR [%ESP + 36], %EBX
add %EDX, DWORD PTR [%ESP + 36]
add %ECX, %EDX
add DWORD PTR [%ESP + 32], %ECX
add %EAX, DWORD PTR [%ESP + 32]
movzx %ECX, WORD PTR [dyn_ltree + 24]
add %EAX, %ECX
mov %ECX, 0
mov %EDX, %ECX
to
.LBBflush_block_1: # loopentry.1.i
movzx %EAX, WORD PTR [dyn_ltree]
movzx %ECX, WORD PTR [dyn_ltree + 4]
add %ECX, %EAX
movzx %EAX, WORD PTR [dyn_ltree + 8]
add %EAX, %ECX
movzx %ECX, WORD PTR [dyn_ltree + 12]
add %ECX, %EAX
movzx %EAX, WORD PTR [dyn_ltree + 16]
add %EAX, %ECX
movzx %ECX, WORD PTR [dyn_ltree + 20]
add %ECX, %EAX
movzx %EAX, WORD PTR [dyn_ltree + 24]
add %ECX, %EAX
mov %EAX, 0
mov %EDX, %EAX
... which results in less spilling in the function.
This change alone speeds up 164.gzip from 37.23s to 36.24s on apoc. The
default isel takes 37.31s.
llvm-svn: 19650
2005-01-18 01:06:26 +00:00
Chris Lattner
a78f9ced61
Fix indentation.
llvm-svn: 19649
2005-01-17 23:25:45 +00:00
Chris Lattner
dff1e3e86f
Don't bother using max here.
llvm-svn: 19647
2005-01-17 23:02:13 +00:00
Chris Lattner
2d86b43318
Do not give token factor nodes outrageous weights
llvm-svn: 19645
2005-01-17 22:56:09 +00:00
Chris Lattner
f2878ce8ba
Two changes:
1. Fold [mem] += (1|-1) into inc [mem]/dec [mem] to save some icache space (sketched below).
2. Do not let token factor nodes prevent forming '[mem] op= val' folds.
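A minimal sketch of change 1 (the symbol name count is invented):
add DWORD PTR [count], 1          # [mem] += 1 ...
becomes
inc DWORD PTR [count]             # ... same effect, shorter encoding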
llvm-svn: 19643
2005-01-17 22:10:42 +00:00
Chris Lattner
40c0fca632
Refactor load/op/store folding into its own method; no functionality changes.
llvm-svn: 19641
2005-01-17 19:25:26 +00:00
Chris Lattner
2348abc421
Fix a major regression from last night that prevented us from producing [mem] op= reg
operations.
The body of the if is less indented but otherwise unmodified in this patch.
llvm-svn: 19638
2005-01-17 17:49:14 +00:00
Chris Lattner
adb669ab1f
Codegen this:
int %foo(int %X) {
%T = add int %X, 13
%S = mul int %T, 3
ret int %S
}
as this:
mov %ECX, DWORD PTR [%ESP + 4]
lea %EAX, DWORD PTR [%ECX + 2*%ECX + 39]
ret
instead of this:
mov %ECX, DWORD PTR [%ESP + 4]
mov %EAX, %ECX
add %EAX, 13
imul %EAX, %EAX, 3
ret
llvm-svn: 19633
2005-01-17 06:48:02 +00:00
Chris Lattner
51590b615c
Fix test/Regression/CodeGen/X86/2005-01-17-CycleInDAG.ll and 132.ijpeg.
Do not fold a load into an operation if it will induce a cycle in the DAG.
Repeat after me: dAg.
llvm-svn: 19631
2005-01-17 06:26:58 +00:00
Chris Lattner
f1e85bec5a
Do not fold a load into a comparison that is used in more than one place.
The comparison itself will probably be folded, so this is not OK to do.
This fixed 197.parser.
llvm-svn: 19624
2005-01-17 01:34:14 +00:00
Chris Lattner
1b8c8fe020
Do not codegen 'xor bool, true' as 'not reg'. 'not reg' inverts the upper bits
of the byte register. This fixes yacr2, 300.twolf, and probably others.
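Concretely, with the bool living in %AL (register choice invented):
xor %AL, 1                        # 1 -> 0, 0 -> 1: only the low bit changes
not %AL                           # wrong: 1 -> 0xFE, 0 -> 0xFF; bits 1-7 flip too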
llvm-svn: 19622
2005-01-17 00:23:16 +00:00
Chris Lattner
46dac4394c
Set up the shift and setcc types.
...
If we emit a load because we followed a token chain to get to it, try to
fold it into its single user if possible.
llvm-svn: 19620
2005-01-17 00:00:33 +00:00
Chris Lattner
9ffc59287e
* Adjust to changes in TargetLowering interfaces.
* Remove custom promotion for bool and byte select ops. Legalize now
promotes them for us.
* Allow folding ConstantPoolIndexes into EXTLOAD's, useful for float immediates.
* Declare more clearly which operations are not supported.
* Add some hacky code for TRUNCSTORE to pretend that we have truncstore
for i16 types. This is useful for testing promotion code because I can
just remove 16-bit registers altogether and verify that programs work.
llvm-svn: 19614
2005-01-16 07:34:08 +00:00
Chris Lattner
f3d950e816
Add support for truncstore and *extload.
llvm-svn: 19566
2005-01-15 05:22:24 +00:00
Chris Lattner
27c91fac94
Adjust to CopyFromReg changes.
llvm-svn: 19561
2005-01-14 22:37:41 +00:00
Chris Lattner
7a8788c9ac
Add new ImplicitDef node, rename CopyRegSDNode class to RegSDNode.
llvm-svn: 19535
2005-01-13 20:50:02 +00:00
Chris Lattner
fce6a5439d
Codegen factor nodes more intelligently according to perceived register pressure.
llvm-svn: 19532
2005-01-13 19:56:00 +00:00
Chris Lattner
cb4359465a
Initial trivial (but stupid) codegen for this node.
llvm-svn: 19529
2005-01-13 18:01:36 +00:00
Chris Lattner
9a70166615
Add some really pedantic assertions to the load folding code. Fix a bunch
of cases where we accidentally emitted a load folded once and unfolded
elsewhere.
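The bug pattern looked roughly like this (addresses invented): the same load shows up both folded into one user and emitted again on its own:
add %EAX, DWORD PTR [%ESP + 4]    # load folded into the add here...
mov %ECX, DWORD PTR [%ESP + 4]    # ...and re-emitted as a standalone load here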
llvm-svn: 19522
2005-01-13 05:53:16 +00:00