
move an entry, add some notes, remove a completed item (IMPLICIT_DEF)

llvm-svn: 60821
This commit is contained in:
Chris Lattner 2008-12-10 01:30:48 +00:00
parent e2b5854e41
commit 3987712b2d


@ -2,13 +2,6 @@ Target Independent Opportunities:
//===---------------------------------------------------------------------===//
We should make the various target's "IMPLICIT_DEF" instructions be a single
target-independent opcode like TargetInstrInfo::INLINEASM. This would allow
us to eliminate the TargetInstrDesc::isImplicitDef() method, and would allow
us to avoid having to define this for every target for every register class.
//===---------------------------------------------------------------------===//
With the recent changes to make the implicit def/use set explicit in
machineinstrs, we should change the target descriptions for 'call' instructions
so that the .td files don't list all the call-clobbered registers as implicit
@ -30,7 +23,10 @@ Make the PPC branch selector target independent
//===---------------------------------------------------------------------===//
Get the C front-end to expand hypot(x,y) -> llvm.sqrt(x*x+y*y) when errno and
precision don't matter (-ffast-math). Misc/mandel will like this. :) This isn't
safe in general, even on darwin. See the libm implementation of hypot for
examples (which special case when x/y are exactly zero to get signed zeros etc
right).
//===---------------------------------------------------------------------===//
@ -166,6 +162,9 @@ Expand these to calls of sin/cos and stores:
Doing so could allow SROA of the destination pointers. See also:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=17687
This is now easily doable with MRVs. We could even make an intrinsic for this
if anyone cared enough about sincos.
//===---------------------------------------------------------------------===//
Scalar Repl cannot currently promote this testcase to 'ret long cst':
@ -511,6 +510,8 @@ int i;
}
}
BasicAA also doesn't do this for add. It needs to know that &A[i+1] != &A[i].
//===---------------------------------------------------------------------===//
We should investigate an instruction sinking pass. Consider this silly
@ -925,35 +926,6 @@ vec2d foo () {
//===---------------------------------------------------------------------===//
This C++ file:
void g(); struct A { int n; int m; A& operator++(void) { ++n; if (n == m) g();
return *this; } A() : n(0), m(0) { } friend bool operator!=(A const& a1,
A const& a2) { return a1.n != a2.n; } }; void testfunction(A& iter) { A const
end; while (iter != end) ++iter; }
Compiles down to:
bb: ; preds = %bb3.backedge, %bb.nph
%.rle = phi i32 [ %1, %bb.nph ], [ %7, %bb3.backedge ] ; <i32> [#uses=1]
%4 = add i32 %.rle, 1 ; <i32> [#uses=2]
store i32 %4, i32* %0, align 4
%5 = load i32* %3, align 4 ; <i32> [#uses=1]
%6 = icmp eq i32 %4, %5 ; <i1> [#uses=1]
br i1 %6, label %bb1, label %bb3.backedge
bb1: ; preds = %bb
tail call void @_Z1gv()
br label %bb3.backedge
bb3.backedge: ; preds = %bb, %bb1
%7 = load i32* %0, align 4 ; <i32> [#uses=2]
The %7 load is partially redundant with the store of %4 to %0, GVN's PRE
should remove it, but it doesn't apply to memory objects.
//===---------------------------------------------------------------------===//
Better mod/ref analysis for scanf would allow us to eliminate the vtable and a
bunch of other stuff from this example (see PR1604):
@ -1432,3 +1404,32 @@ void foo (int a, struct T b)
//===---------------------------------------------------------------------===//
This C++ file:
void g(); struct A { int n; int m; A& operator++(void) { ++n; if (n == m) g();
return *this; } A() : n(0), m(0) { } friend bool operator!=(A const& a1,
A const& a2) { return a1.n != a2.n; } }; void testfunction(A& iter) { A const
end; while (iter != end) ++iter; }
Compiles down to:
bb: ; preds = %bb3.backedge, %bb.nph
%.rle = phi i32 [ %1, %bb.nph ], [ %7, %bb3.backedge ] ; <i32> [#uses=1]
%4 = add i32 %.rle, 1 ; <i32> [#uses=2]
store i32 %4, i32* %0, align 4
%5 = load i32* %3, align 4 ; <i32> [#uses=1]
%6 = icmp eq i32 %4, %5 ; <i1> [#uses=1]
br i1 %6, label %bb1, label %bb3.backedge
bb1: ; preds = %bb
tail call void @_Z1gv()
br label %bb3.backedge
bb3.backedge: ; preds = %bb, %bb1
%7 = load i32* %0, align 4 ; <i32> [#uses=2]
The %7 load is partially redundant with the store of %4 to %0, GVN's PRE
should remove it, but it doesn't apply to memory objects.
//===---------------------------------------------------------------------===//