mirror of https://github.com/RPCS3/llvm-mirror.git
Commit Graph

13 Commits

Author SHA1 Message Date
Dan Gohman
30c5ce1b7d Switch the MachineOperand accessors back to the short names like
isReg, etc., from isRegister, etc.

llvm-svn: 57006
2008-10-03 15:45:36 +00:00
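
The accessors in question live on MachineOperand. As a quick illustration of the short spellings this change restores (isReg, isUse, getReg), here is a sketch of a typical operand walk written against present-day LLVM headers; the helper name countVirtRegUses and the Register::isVirtual check are illustrative additions, not code from this commit.

#include "llvm/CodeGen/MachineInstr.h"
#include "llvm/CodeGen/MachineOperand.h"

using namespace llvm;

// Count the virtual-register uses of MI using the short operand
// predicates (isReg/isUse) this commit switches back to.
static unsigned countVirtRegUses(const MachineInstr &MI) {
  unsigned N = 0;
  for (const MachineOperand &MO : MI.operands())
    if (MO.isReg() && MO.isUse() && MO.getReg().isVirtual())
      ++N;
  return N;
}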
Dan Gohman
fa32c7c6d9 Remove isImm(), isReg(), and friends, in favor of
isImmediate(), isRegister(), and friends, to avoid confusion
about having two different names with the same meaning. I'm
not attached to the longer names, and would be ok with
changing to the shorter names if others prefer it.

llvm-svn: 56189
2008-09-13 17:58:21 +00:00
Dan Gohman
e1f9be27bc Tidy up several unbeseeming casts from pointer to intptr_t.
llvm-svn: 55779
2008-09-04 17:05:41 +00:00
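
The casts being tidied convert a pointer into an intptr_t. A tiny standalone sketch of the explicit C++ spelling versus the C-style cast it replaces; the function and variable names here are made up for illustration.

#include <cstdint>

void illustrateCast(void *Ptr) {
  intptr_t A = (intptr_t)Ptr;                    // C-style cast being tidied away
  intptr_t B = reinterpret_cast<intptr_t>(Ptr);  // explicit, searchable spelling
  (void)A;
  (void)B;
}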
Dan Gohman
bab18cae46 Clean up the use of static and anonymous namespaces. This turned up
several things that were neither in an anonymous namespace nor marked
static, but were not intended to be global.

llvm-svn: 51017
2008-05-13 00:00:25 +00:00
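
Both mechanisms named above give a symbol internal linkage, keeping it out of the global namespace. A generic side-by-side sketch (not code from this commit):

// Either form hides the symbol from other translation units.
static int helperA() { return 1; }   // function marked static

namespace {                          // anonymous namespace
int helperB() { return 2; }
} // end anonymous namespace

int publicEntryPoint() { return helperA() + helperB(); }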
Evan Cheng
b9fc5d6d07 Refactor some code out of MachineSink into a MachineInstr query.
llvm-svn: 48311
2008-03-13 00:44:09 +00:00
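
The commit message does not show the query itself, so the helper below is only a hypothetical sketch of the kind of "is this instruction safe to move?" check MachineSink needs, written with present-day MachineInstr predicates (mayStore, hasUnmodeledSideEffects, isCall); neither the name looksSafeToSink nor the exact policy is the API added here.

#include "llvm/CodeGen/MachineInstr.h"
#include "llvm/CodeGen/MachineOperand.h"

using namespace llvm;

// Hypothetical illustration: an instruction is a sinking candidate only
// if it has no memory/side effects and defines no physical registers.
static bool looksSafeToSink(const MachineInstr &MI) {
  if (MI.mayStore() || MI.isCall() || MI.hasUnmodeledSideEffects())
    return false;
  for (const MachineOperand &MO : MI.operands())
    if (MO.isReg() && MO.isDef() && MO.getReg().isPhysical())
      return false; // physreg defs (e.g. flags) pin it to its block
  return true;
}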
Dan Gohman
cabaec582f Rename MRegisterInfo to TargetRegisterInfo.
llvm-svn: 46930
2008-02-10 18:45:23 +00:00
Chris Lattner
efa13b2edd implement support for sinking a load out the bottom of a block that
has no stores between the load and the end of block.  This works 
great and sinks hundreds of stores, but we can't turn it on because
machineinstrs don't have volatility information and we don't want to
sink volatile stores :(

llvm-svn: 45894
2008-01-12 00:17:41 +00:00
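
The restriction described above amounts to a scan from the load to the end of its block: reject volatile loads, and reject the sink if any store (or unmodeled side effect) can intervene. A minimal illustration using present-day MachineInstr/MachineMemOperand predicates rather than the 2008 interfaces; the helper name and exact policy are assumptions, and a real pass must also pick a legal destination block and update liveness.

#include "llvm/CodeGen/MachineBasicBlock.h"
#include "llvm/CodeGen/MachineInstr.h"
#include "llvm/CodeGen/MachineMemOperand.h"

using namespace llvm;

// Conservative check: could LoadMI plausibly be sunk out the bottom of
// its block? Illustrative only.
static bool canSinkLoadOutOfBlock(const MachineInstr &LoadMI) {
  if (!LoadMI.mayLoad())
    return false;
  // The volatility information this commit says was missing is now
  // carried on the instruction's memory operands.
  for (const MachineMemOperand *MMO : LoadMI.memoperands())
    if (MMO->isVolatile())
      return false;
  // No store may sit between the load and the end of the block.
  const MachineBasicBlock &MBB = *LoadMI.getParent();
  bool AfterLoad = false;
  for (const MachineInstr &MI : MBB) {
    if (&MI == &LoadMI) {
      AfterLoad = true;
      continue;
    }
    if (AfterLoad && (MI.mayStore() || MI.hasUnmodeledSideEffects()))
      return false;
  }
  return true;
}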
Chris Lattner
bfffa4f21e Simplify the side effect stuff a bit more and make licm/sinking
both work right according to the new flags.

This removes the TII::isReallySideEffectFree predicate, and adds
TII::isInvariantLoad. 

It removes NeverHasSideEffects+MayHaveSideEffects and adds
UnmodeledSideEffects as machine instr flags.  Now the clients
can decide everything they need.

I think isRematerializable can be implemented in terms of the
flags we have now, though I will let others tackle that.

llvm-svn: 45843
2008-01-10 23:08:24 +00:00
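
Under this scheme each client inspects the modeled behaviour itself instead of relying on a single "side effect free" predicate. A rough sketch of how a hoisting/sinking client might consume the flags, written with today's MachineInstr spellings (hasUnmodeledSideEffects and friends; the isInvariantLoad-style query is represented only by a placeholder); names and policy are illustrative, not the commit's code.

#include "llvm/CodeGen/MachineInstr.h"

using namespace llvm;

// Illustrative client-side policy built on the per-instruction flags.
static bool clientMayHoistOrSink(const MachineInstr &MI) {
  // Behaviour the instruction description does not model must stay put.
  if (MI.hasUnmodeledSideEffects())
    return false;
  // Stores and calls write memory or have other effects.
  if (MI.mayStore() || MI.isCall())
    return false;
  // Plain loads are movable only if the client proves the memory is
  // invariant -- the role of the isInvariantLoad-style query.
  if (MI.mayLoad())
    return false; // placeholder: defer to an invariant-load check
  return true;
}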
Chris Lattner
e5b817779c Clamp down on sinking of lots of instructions.
llvm-svn: 45841
2008-01-10 22:35:15 +00:00
Chris Lattner
0d444b96ac The current impl is really trivial; add some comments about how it can be made better.
llvm-svn: 45625
2008-01-05 06:47:58 +00:00
Chris Lattner
9e7943159e don't sink anything with side effects; this makes lots of stuff work, but sinks almost nothing.
llvm-svn: 45617
2008-01-05 02:33:22 +00:00
Chris Lattner
377c720459 fix a common crash.
llvm-svn: 45614
2008-01-05 01:39:17 +00:00
Chris Lattner
f4972fa569 Add a really quick hack at a machine code sinking pass, enabled with --enable-sinking.
It is missing validity checks, so it is known broken.  However, it is powerful enough
to compile this contrived code:

void test1(int C, double A, double B, double *P) {
  double Tmp = A*A+B*B;
  *P = C ? Tmp : A;
}

into:

_test1:
	movsd	8(%esp), %xmm0
	cmpl	$0, 4(%esp)
	je	LBB1_2	# entry
LBB1_1:	# entry
	movsd	16(%esp), %xmm1
	mulsd	%xmm1, %xmm1
	mulsd	%xmm0, %xmm0
	addsd	%xmm1, %xmm0
LBB1_2:	# entry
	movl	24(%esp), %eax
	movsd	%xmm0, (%eax)
	ret

instead of:

_test1:
	movsd	16(%esp), %xmm0
	mulsd	%xmm0, %xmm0
	movsd	8(%esp), %xmm1
	movapd	%xmm1, %xmm2
	mulsd	%xmm2, %xmm2
	addsd	%xmm0, %xmm2
	cmpl	$0, 4(%esp)
	je	LBB1_2	# entry
LBB1_1:	# entry
	movapd	%xmm2, %xmm1
LBB1_2:	# entry
	movl	24(%esp), %eax
	movsd	%xmm1, (%eax)
	ret

woo.

llvm-svn: 45570
2008-01-04 07:36:53 +00:00
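
The transformation shown above is the core idea of machine sinking: an instruction whose result is needed on only one path out of its block is pushed down into that path, so the other path never pays for it. A sketch of that decision loop against present-day LLVM interfaces; like the commit itself it omits most validity checks (liveness updates, critical edges, dominance subtleties), and the function name sinkFromBlock is made up for illustration.

#include "llvm/ADT/STLExtras.h"
#include "llvm/CodeGen/MachineBasicBlock.h"
#include "llvm/CodeGen/MachineInstr.h"
#include "llvm/CodeGen/MachineOperand.h"
#include "llvm/CodeGen/MachineRegisterInfo.h"

using namespace llvm;

// Illustrative only: sink instructions of MBB into the single successor
// block that contains all uses of their virtual-register results.
static bool sinkFromBlock(MachineBasicBlock &MBB, MachineRegisterInfo &MRI) {
  bool Changed = false;
  for (MachineInstr &MI : llvm::make_early_inc_range(MBB)) {
    // Skip anything this sketch cannot reason about.
    if (MI.isPHI() || MI.isTerminator() || MI.isCall() ||
        MI.mayLoadOrStore() || MI.hasUnmodeledSideEffects())
      continue;

    MachineBasicBlock *Target = nullptr;
    bool Sinkable = true;
    for (const MachineOperand &MO : MI.operands()) {
      if (!MO.isReg() || !MO.isDef())
        continue;
      if (MO.getReg().isPhysical()) {   // e.g. implicit flag definitions
        Sinkable = false;
        break;
      }
      for (MachineInstr &UseMI : MRI.use_instructions(MO.getReg())) {
        MachineBasicBlock *UseBB = UseMI.getParent();
        if (UseMI.isPHI() || UseBB == &MBB || (Target && Target != UseBB)) {
          Sinkable = false;
          break;
        }
        Target = UseBB;
      }
      if (!Sinkable)
        break;
    }
    if (!Sinkable || !Target || !MBB.isSuccessor(Target))
      continue;

    // Splice MI to the top of the target block, after any PHIs/labels.
    Target->splice(Target->SkipPHIsAndLabels(Target->begin()), &MBB,
                   MachineBasicBlock::iterator(MI));
    Changed = true;
  }
  return Changed;
}

In the example above, the multiply/add chain feeding Tmp is only needed when C is non-zero, which is why it ends up below the branch. Its value actually reaches the store through a join, a case the real pass handles but this sketch conservatively rejects.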