mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-11-25 20:23:11 +01:00
Commit Graph

10027 Commits

Author SHA1 Message Date
Chris Lattner
a2edd7e449 Preserve CC's when linking modules
llvm-svn: 21799
2005-05-09 01:09:39 +00:00
Chris Lattner
2d9c054f4e Preserve calling conventions when doing IPO
llvm-svn: 21798
2005-05-09 01:05:50 +00:00
Chris Lattner
eff214d7de wrap long lines, preserve calling conventions when cloning functions and
turning calls into invokes

llvm-svn: 21797
2005-05-09 01:04:34 +00:00
Chris Lattner
5a7f1642b7 By definition, 'tail' calls cannot access the stack frame of their caller.
Expose this as a simple form of mod/ref information.  This implements
BasicAA/tailcall-modref.ll

llvm-svn: 21796
2005-05-08 23:58:12 +00:00
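
A minimal C sketch of the kind of case this mod/ref fact enables (function
names are hypothetical, not taken from the testcase):

/* a call marked 'tail' cannot touch the caller's stack frame, so basic
   alias analysis may assume it neither reads nor writes 'local' */
int callee(int x);

int caller(void) {
  int local = 42;            /* lives only in caller's frame */
  int r = callee(7);         /* if emitted as a 'tail' call, this cannot
                                modify 'local' ... */
  return r + local;          /* ... so 'local' is still 42 here */
}
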
Chris Lattner
fb4a99b117 Verify that varargs functions all have ccc
llvm-svn: 21792
2005-05-08 22:27:09 +00:00
Chris Lattner
b57ab2e975 Convert non-address taken functions with C calling conventions to fastcc.
llvm-svn: 21791
2005-05-08 22:18:06 +00:00
Chris Lattner
d5a353a675 Implement Reassociate/mul-neg-add.ll
llvm-svn: 21788
2005-05-08 21:41:35 +00:00
Chris Lattner
f535f6e808 Bail out earlier
llvm-svn: 21786
2005-05-08 21:33:47 +00:00
Chris Lattner
39f74def7f Teach reassociate that 0-X === X*-1
llvm-svn: 21785
2005-05-08 21:28:52 +00:00
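
At the source level, the identity is simply the following (the function
names are illustrative only):

/* negation rewritten as multiplication by -1, so it can join the same
   reassociated multiply tree as any other factor */
int before(int x) { return 0 - x; }
int after(int x)  { return x * -1; }   /* equivalent value */
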
Chris Lattner
319ac8f822 Fix PR557 and basictest[34].ll.
This makes reassociate realize that loads should be treated as unmovable, and
gives distinct ranks to distinct values defined in the same basic block, allowing
reassociate to do its thing.

llvm-svn: 21783
2005-05-08 20:57:04 +00:00
Chris Lattner
b5de308c5f Add debugging information
llvm-svn: 21781
2005-05-08 20:09:57 +00:00
Chris Lattner
e74082156b eliminate gotos
llvm-svn: 21780
2005-05-08 19:48:43 +00:00
Chris Lattner
6d85b91b24 Wrap long lines. Fix the "warning: conflicting types for built-in function 'memset'"
diagnostic emitted by GCC when compiling CBE output.

llvm-svn: 21779
2005-05-08 19:46:29 +00:00
Chris Lattner
a9d5fdd4fd Improve reassociation handling of inverses, implementing inverses.ll.
llvm-svn: 21778
2005-05-08 18:59:37 +00:00
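
A small C illustration of the inverse cancellation involved (a sketch only;
the actual cases live in inverses.ll):

/* an operand that meets its inverse in the same expression tree cancels */
int f(int a, int b) {
  return (a + b) - b;        /* reassociates and simplifies to just 'a' */
}
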
Chris Lattner
afbdc0b969 clean up and modernize this pass.
llvm-svn: 21776
2005-05-08 18:45:26 +00:00
Chris Lattner
7b41539f32 Strength reduce SAR into SHR if there is no way sign bits could be shifted
in.  This tends to get cases like this:

  X = cast ubyte to int
  Y = shr int X, ...

Tested by: shift.ll:test24

llvm-svn: 21775
2005-05-08 17:34:56 +00:00
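
The same reasoning in C terms, as a hedged sketch: when the value comes from
a zero extension, its sign bit is known clear, so an arithmetic right shift
behaves identically to a logical one:

int f(unsigned char b) {
  int x = b;                 /* zero-extended: top 24 bits are known zero */
  return x >> 4;             /* no sign bits can shift in, so this may be
                                emitted as an unsigned (logical) shift */
}
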
Chris Lattner
c2670a0da6 Refactor some code
llvm-svn: 21772
2005-05-08 00:19:31 +00:00
Chris Lattner
cd7caaa866 Handle some simple cases where we can see that values get annihilated.
llvm-svn: 21771
2005-05-08 00:08:33 +00:00
Chris Lattner
1e84d885b7 Fix a miscompilation of crafty by clobbering the "A" variable.
llvm-svn: 21770
2005-05-07 23:49:08 +00:00
Chris Lattner
5662127ed6 Rewrite the guts of the reassociate pass to be more efficient and logical. Instead
of trying to do local reassociation tweaks at each level, only process an expression
tree once (at its root).  This does not improve the reassociation pass in any real way.

llvm-svn: 21768
2005-05-07 21:59:39 +00:00
Reid Spencer
b4fdf14d34 * Add two strlen optimizations:
    strlen(x) != 0  ->  *x != 0
    strlen(x) == 0  ->  *x == 0
* Change nested statistics to use style of other LLVM statistics so that
  only the name of the optimization (simplify-libcalls) is used as the
  statistic name, and the description indicates which specific call is
  optimized. Cuts down on some redundancy and saves a few bytes of space.
* Make note of stpcpy optimization that could be done.

llvm-svn: 21766
2005-05-07 20:15:59 +00:00
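
The two strlen rewrites, shown at the C level (function names hypothetical):

#include <string.h>

int nonempty_before(const char *s) { return strlen(s) != 0; }
int nonempty_after (const char *s) { return *s != '\0'; }   /* no O(n) scan */

int empty_before(const char *s) { return strlen(s) == 0; }
int empty_after (const char *s) { return *s == '\0'; }
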
Reid Spencer
65d553cd03 Don't increment the counter unless the debug flag is set.
llvm-svn: 21762
2005-05-07 04:59:45 +00:00
Chris Lattner
3edf09a5eb Convert shifts to muls to assist reassociation. This implements
Reassociate/shifttest.ll

llvm-svn: 21761
2005-05-07 04:24:13 +00:00
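
A sketch of why the shift-to-multiply rewrite helps (constants are made up
for illustration):

/* rewritten as a multiply, the shift can fold with neighboring multiplies */
int before(int x) { return (x << 3) * 5; }   /* shl blocks reassociation   */
int after (int x) { return x * 40; }         /* x*8*5 reassociates to x*40 */
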
Chris Lattner
b1ea71fbcd Simplify the code and rearrange it. No major functionality changes here.
llvm-svn: 21759
2005-05-07 04:08:02 +00:00
Jeff Cohen
eafa15885e Silence VC++ warnings about unsafe mixing of ints and bools with the | operator.
llvm-svn: 21758
2005-05-07 02:44:04 +00:00
Chris Lattner
f6775e16bf remove some dead (always dynamically false) flags
llvm-svn: 21752
2005-05-06 22:35:09 +00:00
Chris Lattner
1f6d3b2344 encode calling conventions for call/invoke instructions.
llvm-svn: 21751
2005-05-06 22:34:01 +00:00
Chris Lattner
494f3da7b3 encode function calling convs in the bytecode file. invoke and call are
still to come.

llvm-svn: 21749
2005-05-06 20:42:57 +00:00
Chris Lattner
562734e130 parse new calling conv specifiers
llvm-svn: 21748
2005-05-06 20:27:19 +00:00
Chris Lattner
de5b492521 wrap a long line
llvm-svn: 21747
2005-05-06 20:27:03 +00:00
Chris Lattner
26a44493ef add support for explicit calling conventions
llvm-svn: 21746
2005-05-06 20:26:43 +00:00
Chris Lattner
0995b3da02 use splice instead of remove/insert for a minor speedup
llvm-svn: 21743
2005-05-06 19:58:35 +00:00
Chris Lattner
146014b748 remove some ugly hacks that are no longer needed since Andrew removed the
varargs munging code

llvm-svn: 21742
2005-05-06 19:49:51 +00:00
Chris Lattner
c9be572154 BAD typo which caused many testsuite failures last night. Note to self: do
not change code after testing it without retesting!

llvm-svn: 21741
2005-05-06 17:13:16 +00:00
Chris Lattner
1bc2753d69 clean up the CBE output a bit
llvm-svn: 21740
2005-05-06 06:58:42 +00:00
Chris Lattner
f70b2785b7 add tail marker as a comment
llvm-svn: 21739
2005-05-06 06:53:07 +00:00
Chris Lattner
4e9d804f1d Make the stub functions be tail calls
llvm-svn: 21738
2005-05-06 06:48:54 +00:00
Chris Lattner
146447f57a Preserve tail marker
llvm-svn: 21737
2005-05-06 06:48:21 +00:00
Chris Lattner
0187977904 Implement Transforms/Inline/inline-tail.ll
llvm-svn: 21736
2005-05-06 06:47:52 +00:00
Chris Lattner
3d4098b1e0 preserve the tail marker
llvm-svn: 21734
2005-05-06 06:46:58 +00:00
Chris Lattner
47c5cd63f6 lex the 'tail' keyword
llvm-svn: 21729
2005-05-06 06:20:33 +00:00
Chris Lattner
59d23baab1 add bytecode reader support for tail calls
llvm-svn: 21727
2005-05-06 06:13:34 +00:00
Chris Lattner
72ffd7e7d5 Add a 'tail' marker for call instructions, patch contributed by
Alexander Friedman.

llvm-svn: 21722
2005-05-06 05:51:46 +00:00
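
In the IR syntax of this era (the same syntax quoted in the examples later
in this log), the marker reads roughly like:

        %Y = tail call int %foo( int %X )               ; <int> [#uses=1]
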
Chris Lattner
99db0ab3df Wrap long lines
llvm-svn: 21720
2005-05-06 05:34:40 +00:00
Chris Lattner
b953e27f85 DCE intrinsic instructions without side effects.
llvm-svn: 21719
2005-05-06 05:27:34 +00:00
Chris Lattner
4f7bba1106 These intrinsics do not access memory
llvm-svn: 21718
2005-05-06 05:21:04 +00:00
Chris Lattner
2b4c801d10 Teach instcombine to propagate zeroness through shl instructions, implementing
and.ll:test31

llvm-svn: 21717
2005-05-06 04:53:20 +00:00
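
A hedged C sketch of the known-zero-bits propagation (not the actual
and.ll:test31 body):

unsigned f(unsigned x) {
  unsigned y = (x & ~15u) << 4;  /* low 4 bits were zero; after the shift
                                    the low 8 bits of y are known zero */
  return y & 0xff;               /* instcombine can fold this to 0 */
}
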
Chris Lattner
ead76729cc Implement shift.ll:test23. If we are shifting right then immediately truncating
the result, turn signed shift rights into unsigned shift rights if possible.

This leads to later simplification and happens *often* in 176.gcc.  For example,
this testcase:

struct xxx { unsigned int code : 8; };
enum codes { A, B, C, D, E, F };
int foo(struct xxx *P) {
  if ((enum codes)P->code == A)
     bar();
}

used to be compiled to:

int %foo(%struct.xxx* %P) {
        %tmp.1 = getelementptr %struct.xxx* %P, int 0, uint 0           ; <uint*> [#uses=1]
        %tmp.2 = load uint* %tmp.1              ; <uint> [#uses=1]
        %tmp.3 = cast uint %tmp.2 to int                ; <int> [#uses=1]
        %tmp.4 = shl int %tmp.3, ubyte 24               ; <int> [#uses=1]
        %tmp.5 = shr int %tmp.4, ubyte 24               ; <int> [#uses=1]
        %tmp.6 = cast int %tmp.5 to sbyte               ; <sbyte> [#uses=1]
        %tmp.8 = seteq sbyte %tmp.6, 0          ; <bool> [#uses=1]
        br bool %tmp.8, label %then, label %UnifiedReturnBlock

Now it is compiled to:

        %tmp.1 = getelementptr %struct.xxx* %P, int 0, uint 0           ; <uint*> [#uses=1]
        %tmp.2 = load uint* %tmp.1              ; <uint> [#uses=1]
        %tmp.2 = cast uint %tmp.2 to sbyte              ; <sbyte> [#uses=1]
        %tmp.8 = seteq sbyte %tmp.2, 0          ; <bool> [#uses=1]
        br bool %tmp.8, label %then, label %UnifiedReturnBlock

which is the difference between this:

foo:
        subl $4, %esp
        movl 8(%esp), %eax
        movl (%eax), %eax
        shll $24, %eax
        sarl $24, %eax
        testb %al, %al
        jne .LBBfoo_2

and this:

foo:
        subl $4, %esp
        movl 8(%esp), %eax
        movl (%eax), %eax
        testb %al, %al
        jne .LBBfoo_2

This occurs 3243 times total in the External tests, 215x in povray,
6x in each f2c'd program, 1451x in 176.gcc, 7x in crafty, 20x in perl,
25x in gap, 3x in m88ksim, 25x in ijpeg.

Maybe this will cause a little jump on gcc tomorrow :)

llvm-svn: 21715
2005-05-06 04:18:52 +00:00
Chris Lattner
20b5bce229 Implement xor.ll:test22
llvm-svn: 21713
2005-05-06 02:07:39 +00:00
Chris Lattner
27f6e62cac implement and.ll:test30 and set.ll:test21
llvm-svn: 21712
2005-05-06 01:53:19 +00:00