Commit Graph

7 Commits

Chris Lattner
efa13b2edd implement support for sinking a load out the bottom of a block that
has no stores between the load and the end of the block.  This works
great and sinks hundreds of loads, but we can't turn it on because
MachineInstrs don't have volatility information and we don't want to
sink volatile loads :(

llvm-svn: 45894
2008-01-12 00:17:41 +00:00
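
The check this commit describes amounts to a scan from the load down to
the end of its block.  A minimal sketch, using the modern MachineInstr
predicates mayStore() and hasUnmodeledSideEffects() as stand-ins for the
2008-era API (canSinkLoadToBlockEnd is a hypothetical helper name):

#include "llvm/CodeGen/MachineBasicBlock.h"
#include "llvm/CodeGen/MachineInstr.h"
using namespace llvm;

// Hypothetical helper: may LoadMI be sunk out the bottom of its block?
static bool canSinkLoadToBlockEnd(const MachineInstr &LoadMI) {
  const MachineBasicBlock &MBB = *LoadMI.getParent();
  MachineBasicBlock::const_iterator I(&LoadMI), E = MBB.end();
  for (++I; I != E; ++I) {
    // Any store (or unmodeled side effect) between the load and the end
    // of the block could write the loaded address, so give up.
    if (I->mayStore() || I->hasUnmodeledSideEffects())
      return false;
  }
  // The blocker mentioned above: MachineInstrs carried no volatility
  // bit, so a volatile load could not be detected and rejected here.
  return true;
}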
Chris Lattner
bfffa4f21e Simplify the side-effect handling a bit more and make LICM/sinking
both work right according to the new flags.

This removes the TII::isReallySideEffectFree predicate and adds
TII::isInvariantLoad.

It removes the NeverHasSideEffects and MayHaveSideEffects flags and
adds UnmodeledSideEffects as a machine instruction flag.  Now clients
can decide everything they need.

I think isRematerializable can be implemented in terms of the
flags we have now, though I will let others tackle that.

llvm-svn: 45843
2008-01-10 23:08:24 +00:00
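
With UnmodeledSideEffects as the one blanket flag, a pass like machine
LICM or sinking can reason about an instruction from its flags alone.  A
rough sketch of such a client-side check, using the modern spellings of
the predicates (the exact 2008 names differed; isSafeToHoistOrSink is a
hypothetical name):

#include "llvm/CodeGen/MachineInstr.h"
using namespace llvm;

// Hypothetical client-side check built only on per-instruction flags.
static bool isSafeToHoistOrSink(const MachineInstr &MI) {
  // UnmodeledSideEffects means "something happens that the other flags
  // don't describe", so such instructions must stay put.
  if (MI.hasUnmodeledSideEffects())
    return false;
  // Stores need alias information to move safely; plain loads still
  // need a block-local scan like the one in the load-sinking commit
  // above.
  return !MI.mayStore();
}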
Chris Lattner
e5b817779c Clamp down on sinking lots of instructions.
llvm-svn: 45841
2008-01-10 22:35:15 +00:00
Chris Lattner
0d444b96ac The current implementation is really trivial; add some comments about how it can be made better.
llvm-svn: 45625
2008-01-05 06:47:58 +00:00
Chris Lattner
9e7943159e don't sink anything with side effects; this makes lots of stuff work, but sinks almost nothing.
llvm-svn: 45617
2008-01-05 02:33:22 +00:00
Chris Lattner
377c720459 fix a common crash.
llvm-svn: 45614
2008-01-05 01:39:17 +00:00
Chris Lattner
f4972fa569 Add a really quick hack at a machine code sinking pass, enabled with --enable-sinking.
It is missing validity checks, so it is known broken.  However, it is powerful enough
to compile this contrived code:

void test1(int C, double A, double B, double *P) {
  double Tmp = A*A+B*B;
  *P = C ? Tmp : A;
}

into:

_test1:
	movsd	8(%esp), %xmm0
	cmpl	$0, 4(%esp)
	je	LBB1_2	# entry
LBB1_1:	# entry
	movsd	16(%esp), %xmm1
	mulsd	%xmm1, %xmm1
	mulsd	%xmm0, %xmm0
	addsd	%xmm1, %xmm0
LBB1_2:	# entry
	movl	24(%esp), %eax
	movsd	%xmm0, (%eax)
	ret

instead of:

_test1:
	movsd	16(%esp), %xmm0
	mulsd	%xmm0, %xmm0
	movsd	8(%esp), %xmm1
	movapd	%xmm1, %xmm2
	mulsd	%xmm2, %xmm2
	addsd	%xmm0, %xmm2
	cmpl	$0, 4(%esp)
	je	LBB1_2	# entry
LBB1_1:	# entry
	movapd	%xmm2, %xmm1
LBB1_2:	# entry
	movl	24(%esp), %eax
	movsd	%xmm1, (%eax)
	ret

woo.

llvm-svn: 45570
2008-01-04 07:36:53 +00:00
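
Once the pass has picked a successor block that contains all uses of an
instruction's result, the transformation itself is just a splice.  A
sketch of that final step against the modern MachineBasicBlock API
(sinkInto is a hypothetical name, and the caller is assumed to have done
the safety and profitability checks the later commits add):

#include "llvm/CodeGen/MachineBasicBlock.h"
#include "llvm/CodeGen/MachineInstr.h"
using namespace llvm;

// Hypothetical core of the pass: move MI out of its current block and
// into the successor holding all of its uses.
static void sinkInto(MachineInstr &MI, MachineBasicBlock &SuccToSinkTo) {
  MachineBasicBlock &MBB = *MI.getParent();
  // Insert below any PHIs at the top of the destination block.
  SuccToSinkTo.splice(SuccToSinkTo.getFirstNonPHI(), &MBB, &MI);
}

In the test1 example, the only use of the A*A+B*B computation is on the
C-true path, so it gets spliced below the branch and the C-false path
skips the two mulsd and the addsd entirely.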