mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-30 07:22:55 +01:00
Commit Graph

2001 Commits

Author SHA1 Message Date
Andrew Lenharth
e68b566084 hack to fix bzip2 (bug 571)
llvm-svn: 22192
2005-06-04 12:43:56 +00:00
Reid Spencer
59580db70f Make the registration hash_map static. No other module needs it. Also,
document what it's for a little better.

llvm-svn: 22164
2005-05-21 01:27:04 +00:00
Reid Spencer
6c0f6b6458 Adjust the file comment to read a little easier.
llvm-svn: 22163
2005-05-21 00:57:44 +00:00
Reid Spencer
5161c39bfe Make sure ... arguments are casted to sbyte* where needed.
llvm-svn: 22162
2005-05-21 00:39:30 +00:00
Reid Spencer
0a43070283 Add a "brief" comment for CastToCStr
llvm-svn: 22161
2005-05-21 00:23:23 +00:00
Chris Lattner
d7d4a57a4f Fix mismatched type problem that crashed on cases like this:
sprintf(P, "%s", X);

Where X is not an sbyte*.  This fixes the bug JohnMC reported on llvm-bugs.

llvm-svn: 22159
2005-05-20 22:22:25 +00:00
Chris Lattner
b13335fff2 Fix Transforms/SimplifyCFG/switch-simplify-crash.ll
llvm-svn: 22158
2005-05-20 22:19:54 +00:00
Chris Lattner
df9c75fb12 teach the inliner about coldcc and noreturn functions
llvm-svn: 22113
2005-05-18 04:30:33 +00:00
Reid Spencer
720fbd937a Don't look for __builtin_ffs, we'll never see it from llvm-gcc and there's
no reason to include it for other front ends.

llvm-svn: 22070
2005-05-15 21:27:34 +00:00
Reid Spencer
58ad53e9d3 Provide this optimization as well:
ffs(x) -> (x == 0 ? 0 : 1+llvm.cttz(x))
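
A minimal stand-alone check of the rewrite above (using GCC/Clang's
__builtin_ctz as a stand-in for llvm.cttz, and the POSIX ffs() from
<strings.h>):

// Check ffs(x) == (x == 0 ? 0 : 1 + cttz(x)) over a range of values.
// __builtin_ctz is undefined for a zero argument, so the x == 0 case is
// handled by the select, exactly as in the rewrite above.
#include <cassert>
#include <strings.h>   // POSIX ffs()

int main() {
  for (int x = -1000; x <= 1000; ++x) {
    int viaFfs  = ffs(x);
    int viaCttz = (x == 0) ? 0 : 1 + __builtin_ctz((unsigned)x);
    assert(viaFfs == viaCttz);
  }
  return 0;
}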

llvm-svn: 22068
2005-05-15 21:19:45 +00:00
Reid Spencer
24104523b5 Duh .. you actually have to #include Config/config.h before you can test
for one of the values that it defines!

llvm-svn: 22058
2005-05-15 17:20:47 +00:00
Reid Spencer
9cd1000c52 Changes for ffs lib call simplification:
* Check for availability of ffsll call in configure script
* Support ffs, ffsl, and ffsll conversion to constant value if the argument
  is constant.

llvm-svn: 22027
2005-05-14 16:42:52 +00:00
Chris Lattner
bcd4c17bfd Preserve calling conv when hacking on calls
llvm-svn: 22025
2005-05-14 12:28:32 +00:00
Chris Lattner
7771ae3fcc preserve calling conventions when hacking on code
llvm-svn: 22024
2005-05-14 12:25:32 +00:00
Chris Lattner
214f1a8cf9 Make sure to preserve the calling convention when changing an invoke into
a call.  This fixes Prolangs-C++/deriv2, kimwitu++, and Misc-C++/bigfib
on X86 with -enable-x86-fastcc.

llvm-svn: 22023
2005-05-14 12:21:56 +00:00
Chris Lattner
b06ee3dd65 calling a function with the wrong CC is undefined, turn it into an unreachable
instruction.  This is useful for catching optimizers that don't preserve
calling conventions

llvm-svn: 21928
2005-05-13 07:09:09 +00:00
Chris Lattner
b9d99c9b2b When lowering invokes to calls, make sure to preserve the calling conv. This
fixes Ptrdist/anagram with x86 llcbeta

llvm-svn: 21925
2005-05-13 06:27:02 +00:00
Chris Lattner
dfe45a21b1 Prefer int 0 instead of long 0 for GEP arguments.
llvm-svn: 21924
2005-05-13 06:10:12 +00:00
Chris Lattner
758f2fe1a3 Fix Reassociate/shifttest.ll
llvm-svn: 21839
2005-05-10 03:39:25 +00:00
Chris Lattner
f221558c21 If a function contains no allocas, all of the calls in it are trivially
suitable for tail calls.

llvm-svn: 21836
2005-05-09 23:51:13 +00:00
Chris Lattner
d3bb28d97a implement and.ll:test33
llvm-svn: 21809
2005-05-09 04:58:36 +00:00
Chris Lattner
2d9c054f4e Preserve calling conventions when doing IPO
llvm-svn: 21798
2005-05-09 01:05:50 +00:00
Chris Lattner
eff214d7de wrap long lines, preserve calling conventions when cloning functions and
turning calls into invokes

llvm-svn: 21797
2005-05-09 01:04:34 +00:00
Chris Lattner
b57ab2e975 Convert non-address taken functions with C calling conventions to fastcc.
llvm-svn: 21791
2005-05-08 22:18:06 +00:00
Chris Lattner
d5a353a675 Implement Reassociate/mul-neg-add.ll
llvm-svn: 21788
2005-05-08 21:41:35 +00:00
Chris Lattner
f535f6e808 Bail out earlier
llvm-svn: 21786
2005-05-08 21:33:47 +00:00
Chris Lattner
39f74def7f Teach reassociate that 0-X === X*-1
llvm-svn: 21785
2005-05-08 21:28:52 +00:00
Chris Lattner
319ac8f822 Fix PR557 and basictest[34].ll.
This makes reassociate realize that loads should be treated as unmovable, and
gives distinct ranks to distinct values defined in the same basic block, allowing
reassociate to do its thing.

llvm-svn: 21783
2005-05-08 20:57:04 +00:00
Chris Lattner
b5de308c5f Add debugging information
llvm-svn: 21781
2005-05-08 20:09:57 +00:00
Chris Lattner
e74082156b eliminate gotos
llvm-svn: 21780
2005-05-08 19:48:43 +00:00
Chris Lattner
a9d5fdd4fd Improve reassociation handling of inverses, implementing inverses.ll.
llvm-svn: 21778
2005-05-08 18:59:37 +00:00
Chris Lattner
afbdc0b969 clean up and modernize this pass.
llvm-svn: 21776
2005-05-08 18:45:26 +00:00
Chris Lattner
7b41539f32 Strength reduce SAR into SHR if there is no way sign bits could be shifted
in.  This tends to get cases like this:

  X = cast ubyte to int
  Y = shr int X, ...

Tested by: shift.ll:test24
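
A small check of why this is safe: when the shifted value came from a
zero-extended ubyte its sign bit is known to be clear, so arithmetic and
logical right shifts agree:

// Every possible ubyte value, zero-extended to int, has a clear sign bit,
// so an arithmetic shift right and a logical shift right give the same
// result for any in-range shift amount.
#include <cassert>

int main() {
  for (unsigned v = 0; v <= 0xFF; ++v) {
    int x = (int)v;                        // X = cast ubyte to int
    for (int amt = 0; amt < 31; ++amt) {
      int sar = x >> amt;                  // arithmetic (signed) shift
      int shr = (int)((unsigned)x >> amt); // logical (unsigned) shift
      assert(sar == shr);
    }
  }
  return 0;
}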

llvm-svn: 21775
2005-05-08 17:34:56 +00:00
Chris Lattner
c2670a0da6 Refactor some code
llvm-svn: 21772
2005-05-08 00:19:31 +00:00
Chris Lattner
cd7caaa866 Handle some simple cases where we can see that values get annihilated.
llvm-svn: 21771
2005-05-08 00:08:33 +00:00
Chris Lattner
1e84d885b7 Fix a miscompilation of crafty by clobbering the "A" variable.
llvm-svn: 21770
2005-05-07 23:49:08 +00:00
Chris Lattner
5662127ed6 Rewrite the guts of the reassociate pass to be more efficient and logical. Instead
of trying to do local reassociation tweaks at each level, only process an expression
tree once (at its root).  This does not improve the reassociation pass in any real way.

llvm-svn: 21768
2005-05-07 21:59:39 +00:00
Reid Spencer
b4fdf14d34 * Add two strlen optimizations:
    strlen(x) != 0 -> *x != 0
    strlen(x) == 0 -> *x == 0
* Change nested statistics to use style of other LLVM statistics so that
  only the name of the optimization (simplify-libcalls) is used as the
  statistic name, and the description indicates which specific call is
  optimized. Cuts down on some redundancy and saves a few bytes of space.
* Make note of stpcpy optimization that could be done.
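
A quick stand-alone check of the two strlen rewrites (they only inspect the
first byte, since a C string has length zero exactly when its first
character is the NUL terminator):

// strlen(x) != 0  <=>  *x != 0, and likewise for == 0.
#include <cassert>
#include <cstring>

int main() {
  const char *samples[] = {"", "a", "hello", "\0hidden"};
  for (const char *s : samples) {
    assert((std::strlen(s) != 0) == (*s != 0));
    assert((std::strlen(s) == 0) == (*s == 0));
  }
  return 0;
}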

llvm-svn: 21766
2005-05-07 20:15:59 +00:00
Reid Spencer
65d553cd03 Don't increment the counter unless the debug flag is set.
llvm-svn: 21762
2005-05-07 04:59:45 +00:00
Chris Lattner
3edf09a5eb Convert shifts to muls to assist reassociation. This implements
Reassociate/shifttest.ll
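
The rewrite relies on shl X, C and X * (1 << C) computing the same value;
a minimal check:

// For in-range shift amounts, X << C and X * (1 << C) are the same value,
// which is what lets reassociate fold the converted shift into
// neighbouring multiplies.
#include <cassert>

int main() {
  for (unsigned x = 0; x < 1000; ++x)
    for (unsigned c = 0; c < 16; ++c)
      assert((x << c) == x * (1u << c));
  return 0;
}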

llvm-svn: 21761
2005-05-07 04:24:13 +00:00
Chris Lattner
b1ea71fbcd Simplify the code and rearrange it. No major functionality changes here.
llvm-svn: 21759
2005-05-07 04:08:02 +00:00
Chris Lattner
c9be572154 BAD typo which caused many testsuite failures last night. Note to self, do
not change code after testing it without retesting!

llvm-svn: 21741
2005-05-06 17:13:16 +00:00
Chris Lattner
146447f57a Preserve tail marker
llvm-svn: 21737
2005-05-06 06:48:21 +00:00
Chris Lattner
0187977904 Implement Transforms/Inline/inline-tail.ll
llvm-svn: 21736
2005-05-06 06:47:52 +00:00
Chris Lattner
3d4098b1e0 preserve the tail marker
llvm-svn: 21734
2005-05-06 06:46:58 +00:00
Chris Lattner
99db0ab3df Wrap long lines
llvm-svn: 21720
2005-05-06 05:34:40 +00:00
Chris Lattner
b953e27f85 DCE intrinsic instructions without side effects.
llvm-svn: 21719
2005-05-06 05:27:34 +00:00
Chris Lattner
2b4c801d10 Teach instcombine to propagate zeroness through shl instructions, implementing
and.ll:test31

llvm-svn: 21717
2005-05-06 04:53:20 +00:00
Chris Lattner
ead76729cc Implement shift.ll:test23. If we are shifting right then immediately truncating
the result, turn signed shift rights into unsigned shift rights if possible.

This leads to later simplification and happens *often* in 176.gcc.  For example,
this testcase:

struct xxx { unsigned int code : 8; };
enum codes { A, B, C, D, E, F };
int foo(struct xxx *P) {
  if ((enum codes)P->code == A)
     bar();
}

used to be compiled to:

int %foo(%struct.xxx* %P) {
        %tmp.1 = getelementptr %struct.xxx* %P, int 0, uint 0           ; <uint*> [#uses=1]
        %tmp.2 = load uint* %tmp.1              ; <uint> [#uses=1]
        %tmp.3 = cast uint %tmp.2 to int                ; <int> [#uses=1]
        %tmp.4 = shl int %tmp.3, ubyte 24               ; <int> [#uses=1]
        %tmp.5 = shr int %tmp.4, ubyte 24               ; <int> [#uses=1]
        %tmp.6 = cast int %tmp.5 to sbyte               ; <sbyte> [#uses=1]
        %tmp.8 = seteq sbyte %tmp.6, 0          ; <bool> [#uses=1]
        br bool %tmp.8, label %then, label %UnifiedReturnBlock

Now it is compiled to:

        %tmp.1 = getelementptr %struct.xxx* %P, int 0, uint 0           ; <uint*> [#uses=1]
        %tmp.2 = load uint* %tmp.1              ; <uint> [#uses=1]
        %tmp.2 = cast uint %tmp.2 to sbyte              ; <sbyte> [#uses=1]
        %tmp.8 = seteq sbyte %tmp.2, 0          ; <bool> [#uses=1]
        br bool %tmp.8, label %then, label %UnifiedReturnBlock

which is the difference between this:

foo:
        subl $4, %esp
        movl 8(%esp), %eax
        movl (%eax), %eax
        shll $24, %eax
        sarl $24, %eax
        testb %al, %al
        jne .LBBfoo_2

and this:

foo:
        subl $4, %esp
        movl 8(%esp), %eax
        movl (%eax), %eax
        testb %al, %al
        jne .LBBfoo_2

This occurs 3243 times total in the External tests, 215x in povray,
6x in each f2c'd program, 1451x in 176.gcc, 7x in crafty, 20x in perl,
25x in gap, 3x in m88ksim, 25x in ijpeg.

Maybe this will cause a little jump on gcc tomorrow :)

llvm-svn: 21715
2005-05-06 04:18:52 +00:00
Chris Lattner
20b5bce229 Implement xor.ll:test22
llvm-svn: 21713
2005-05-06 02:07:39 +00:00
Chris Lattner
27f6e62cac implement and.ll:test30 and set.ll:test21
llvm-svn: 21712
2005-05-06 01:53:19 +00:00
Chris Lattner
d38c600c9d implement or.ll:test20
llvm-svn: 21709
2005-05-06 00:58:50 +00:00
Chris Lattner
adcc532d05 Fix a bug compiling Ruby, fixing this testcase:
LowerSetJmp/2005-05-05-OldUses.ll

llvm-svn: 21696
2005-05-05 15:47:43 +00:00
Chris Lattner
1c462db06f Instcombine: cast (X != 0) to int, cast (X == 1) to int -> X iff X has only the low bit set.
This implements set.ll:test20.

This triggers 2x on povray, 9x on mesa, 11x on gcc, 2x on crafty, 1x on eon,
6x on perlbmk and 11x on m88ksim.

It allows us to compile these two functions into the same code:

struct s { unsigned int bit : 1; };
unsigned foo(struct s *p) {
  if (p->bit)
    return 1;
  else
    return 0;
}
unsigned bar(struct s *p) { return p->bit; }

llvm-svn: 21690
2005-05-04 19:10:26 +00:00
Reid Spencer
c564fd819c Implement the IsDigitOptimization for simplifying calls to the isdigit
library function:
  isdigit(chr) -> 0 or 1 if chr is constant
  isdigit(chr) -> chr - '0' <= 9 otherwise

Although there are many calls to isdigit in llvm-test, most of them are
compiled away by macros leaving only this:

2 MultiSource/Applications/hexxagon

llvm-svn: 21688
2005-05-04 18:58:28 +00:00
Reid Spencer
8d2736401b * Correct the function prototypes for some of the functions to match the
actual spec (int -> uint)
* Add the ability to get/cache the strlen function prototype.
* Make sure generated values are appropriately named for debugging purposes
* Add the SPrintFOptimization for 4 cases of sprintf optimization:
    sprintf(str,cstr) -> llvm.memcpy(str,cstr) (if cstr has no %)
    sprintf(str,"")   -> store sbyte 0, str
    sprintf(str,"%s",src) -> llvm.memcpy(str,src) (if src is constant)
    sprintf(str,"%c",chr) -> store chr, str   ; store sbyte 0, str+1

The sprintf optimization didn't fire as much as I had hoped:

  2 MultiSource/Applications/SPASS
  5 MultiSource/Benchmarks/McCat/18-imp
 22 MultiSource/Benchmarks/Prolangs-C/TimberWolfMC
  1 MultiSource/Benchmarks/Prolangs-C/assembler
  6 MultiSource/Benchmarks/Prolangs-C/unix-smail
  2 MultiSource/Benchmarks/mediabench/mpeg2/mpeg2dec
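
A quick check of the "%c" case: sprintf(str,"%c",chr) writes exactly the
character plus a terminating NUL, i.e. the same two stores the pass emits:

// sprintf(str, "%c", chr) writes the character followed by a NUL, which
// is exactly the pair of stores the transformed code performs.
#include <cassert>
#include <cstdio>
#include <cstring>

int main() {
  char a[8], b[8];
  for (int chr = 1; chr <= 127; ++chr) {   // skip 0: "%c" with NUL is a corner case
    std::sprintf(a, "%c", chr);
    b[0] = (char)chr;                      // store chr, str
    b[1] = '\0';                           // store sbyte 0, str+1
    assert(std::memcmp(a, b, 2) == 0);
  }
  return 0;
}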

llvm-svn: 21679
2005-05-04 03:20:21 +00:00
Reid Spencer
f52c228416 Implement optimizations for the strchr and llvm.memset library calls.
Neither of these activated as many times as was hoped:

strchr:
9 MultiSource/Applications/siod
1 MultiSource/Applications/d
2 MultiSource/Prolangs-C/archie-client
1 External/SPEC/CINT2000/176.gcc/176.gcc

llvm.memset:
no hits

llvm-svn: 21669
2005-05-03 07:23:44 +00:00
Reid Spencer
0c484ea7de Avoid garbage output in the statistics display by ensuring that the
strings passed to Statistic's constructor are not destructible. The stats
are printed during static destruction and the SimplifyLibCalls module was
getting destructed before the statistics.

llvm-svn: 21661
2005-05-03 02:54:54 +00:00
Reid Spencer
123f4e393f Add the StrNCmpOptimization which is similar to strcmp.
Unfortunately, this optimization didn't trigger on any llvm-test tests.

llvm-svn: 21660
2005-05-03 01:43:45 +00:00
Reid Spencer
a5fcd1660f Implement the fprintf optimization which converts calls like this:
fprintf(F,"hello") -> fwrite("hello",strlen("hello"),1,F)
  fprintf(F,"%s","hello") -> fwrite("hello",strlen("hello"),1,F)
  fprintf(F,"%c",'x') -> fputc('c',F)

This optimization fires several times in llvm-test:

313 MultiSource/Applications/Burg
302 MultiSource/Benchmarks/Prolangs-C/TimberWolfMC
189 MultiSource/Benchmarks/Prolangs-C/mybison
175 MultiSource/Benchmarks/Prolangs-C/football
130 MultiSource/Benchmarks/Prolangs-C/unix-tbl

llvm-svn: 21657
2005-05-02 23:59:26 +00:00
John Criswell
d1933cb2e4 Fixed a comment.
llvm-svn: 21653
2005-05-02 14:47:42 +00:00
Chris Lattner
7db64049a6 Implement getelementptr.ll:test11
llvm-svn: 21647
2005-05-01 04:42:15 +00:00
Chris Lattner
cee86a7095 Check for volatile loads only once.
Implement load.ll:test7

llvm-svn: 21645
2005-05-01 04:24:53 +00:00
Reid Spencer
f7511e4fe2 Fix a comment that stated the wrong thing.
llvm-svn: 21638
2005-04-30 06:45:47 +00:00
Reid Spencer
cc551c4345 * Don't depend on "guessing" what a FILE* is, just require that the actual
type be obtained from a CallInst we're optimizing.
* Make it possible for getConstantStringLength to return the ConstantArray
  that it extracts in case the content is needed by an Optimization.
* Implement the strcmp optimization
* Implement the toascii optimization

This pass is now firing several to many times in the following MultiSource
tests:

Applications/Burg      -   7 (strcat,strcpy)
Applications/siod      -  13 (strcat,strcpy,strlen)
Applications/spiff     - 120 (exit,fputs,strcat,strcpy,strlen)
Applications/treecc    -  66 (exit,fputs,strcat,strcpy)
Applications/kimwitu++ -  34 (strcmp,strcpy,strlen)
Applications/SPASS     - 588 (exit,fputs,strcat,strcpy,strlen)

llvm-svn: 21626
2005-04-30 03:17:54 +00:00
Reid Spencer
a32eb179ed Implement the optimizations for "pow" and "fputs" library calls.
llvm-svn: 21618
2005-04-29 09:39:47 +00:00
Reid Spencer
ff5cc3cb16 Remove optimizations that don't require both operands to be constant. These
are moved to simplify-libcalls pass.

llvm-svn: 21614
2005-04-29 05:55:35 +00:00
Jeff Cohen
6dccb593c9 Consistently use 'class' to silence VC++
llvm-svn: 21612
2005-04-29 03:05:44 +00:00
Reid Spencer
fb6e0590a8 * Add constant folding for additional floating point library calls such as
sinh, cosh, etc.
* Make the name comparisons for the fp libcalls a little more efficient by
  switching on the first character of the name before doing comparisons.

llvm-svn: 21611
2005-04-28 23:01:59 +00:00
Reid Spencer
e7eb17c64b Remove from the TODO list those optimizations that are already handled by
constant folding implemented in lib/Transforms/Utils/Local.cpp.

llvm-svn: 21604
2005-04-28 18:05:16 +00:00
Reid Spencer
b5d4b854ea Document additional libcall transformations that need to be written.
Help Wanted!

There's a lot of them to write.

llvm-svn: 21603
2005-04-28 04:40:06 +00:00
Reid Spencer
49cfe25457 Doxygenate.
llvm-svn: 21602
2005-04-27 21:29:20 +00:00
Chris Lattner
96704dee49 remove 'statement with no effect' warning
llvm-svn: 21600
2005-04-27 20:12:17 +00:00
Reid Spencer
b7cff5d9d1 More Cleanup:
* Name the instructions by appending to name of original
* Factor common part out of a switch statement.

llvm-svn: 21597
2005-04-27 17:46:54 +00:00
Reid Spencer
1eb67fef62 This is a cleanup commit:
* Correct stale documentation in a few places
* Re-order the file to better associate things and reduce line count
* Make the pass thread safe by caching the Function* objects needed by the
  optimizers in the pass object instead of globally.
* Provide the SimplifyLibCalls pass object to the optimizer classes so they
  can access cached Function* objects and TargetData info
* Make sure the pass resets its cache if the Module passed to runOnModule
  changes
* Rename CallOptimizer to LibCallOptimization. All the classes are named
  *Optimization while the objects are *Optimizer.
* Don't cache Function* in the optimizer objects because they could be used
  by multiple PassManager's running in multiple threads
* Add an optimization for strcpy which is similar to strcat
* Add a "TODO" list at the end of the file for ideas on additional libcall
  optimizations that could be added (get ideas from other compilers).

Sorry for the huge diff. It's mostly reorganization of code. That won't
happen again as I believe the design and infrastructure for this pass is
now done or close to it.

llvm-svn: 21589
2005-04-27 07:54:40 +00:00
Chris Lattner
792ae155ad detect functions that never return, and turn the instruction following a
call to them into an 'unreachable' instruction.

This triggers a bunch of times, particularly on gcc:

gzip: 36
gcc: 601
eon: 12
bzip: 38
llvm-svn: 21587
2005-04-27 04:52:23 +00:00
Reid Spencer
e3b60245eb Prefix the debug statistics so they group together.
llvm-svn: 21583
2005-04-27 00:20:23 +00:00
Reid Spencer
27f80b8c96 In debug builds, make a statistic for each kind of call optimization. This
helps track down what gets triggered in the pass so it's easier to identify
good test cases.

llvm-svn: 21582
2005-04-27 00:05:45 +00:00
Chris Lattner
bd077a1945 This analysis doesn't take 'throwing' into consideration; it looks at
'unwinding'

llvm-svn: 21581
2005-04-26 23:53:25 +00:00
Reid Spencer
ddef064121 Fix up the debug statement to actually use a newline .. radical concept.
llvm-svn: 21580
2005-04-26 23:07:08 +00:00
Reid Spencer
7f06064798 Uh, this isn't argpromotion.
llvm-svn: 21579
2005-04-26 23:05:17 +00:00
Reid Spencer
42906defb1 Add some debugging output so we can tell which calls are getting triggered
llvm-svn: 21578
2005-04-26 23:02:16 +00:00
Reid Spencer
47a20efcb0 No, seriously folks, memcpy really does return void.
llvm-svn: 21575
2005-04-26 22:49:48 +00:00
Reid Spencer
270f03e49e memcpy returns void!!!!!
llvm-svn: 21574
2005-04-26 22:46:23 +00:00
Reid Spencer
303c65cea6 Fix some bugs found by running on llvm-test:
* MemCpyOptimization can only fire if the 3rd and 4th arguments are
  constants and we weren't checking for that.
* The result of llvm.memcpy (and llvm.memmove) is void* not sbyte*, put in
  a cast.

llvm-svn: 21570
2005-04-26 19:55:57 +00:00
Reid Spencer
27afdaf88f Changes From Review Feedback:
* Have the SimplifyLibCalls pass acquire the TargetData and pass it down to
  the optimization classes so they can use it to make better choices for
  the signatures of functions, etc.
* Rearrange the code a little so the utility functions are closer to their
  usage and keep the core of the pass near the top of the files.
* Adjust the StrLen pass to get/use the correct prototype depending on the
  TargetData::getIntPtrType() result. The result of strlen is size_t which
  could be either uint or ulong depending on the platform.
* Clean up some coding nits (cast vs. dyn_cast, remove redundant items from
  a switch, etc.)
* Implement the MemMoveOptimization as a twin of MemCpyOptimization (they
  only differ in name).

llvm-svn: 21569
2005-04-26 19:13:17 +00:00
Chris Lattner
f6199ef63a Fix the compile failures from last night.
llvm-svn: 21565
2005-04-26 14:40:41 +00:00
Reid Spencer
5590c48202 * Merge get_GVInitializer and getCharArrayLength into a single function
named getConstantStringLength. This is the common part of StrCpy and
  StrLen optimizations and probably several others, yet to be written. It
  performs all the validity checks for looking at constant arrays that are
  supposed to be null-terminated strings and then computes the actual
  length of the string.
* Implement the MemCpyOptimization class. This just turns memcpy of 1, 2, 4
  and 8 byte data blocks that are properly aligned on those boundaries into
  a load and a store. Much more could be done here but alignment
  restrictions and lack of knowledge of the target instruction set prevent
  us from doing significantly more. That will have to be delegated to the
  code generators as they lower llvm.memcpy calls.
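
A minimal illustration of the idea for the 4-byte case: a small, suitably
aligned memcpy is equivalent to one integer load and one integer store:

// A 4-byte memcpy between suitably aligned locations is equivalent to one
// 32-bit load and one 32-bit store.
#include <cassert>
#include <cstdint>
#include <cstring>

int main() {
  std::uint32_t src = 0xDEADBEEF, viaMemcpy = 0, viaLoadStore = 0;

  std::memcpy(&viaMemcpy, &src, 4);  // the original 4-byte memcpy

  std::uint32_t tmp = src;           // load the 4 aligned bytes
  viaLoadStore = tmp;                // store them

  assert(viaMemcpy == viaLoadStore);
  return 0;
}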

llvm-svn: 21562
2005-04-26 07:45:18 +00:00
Reid Spencer
584e662d19 * Implement StrLenOptimization
* Factor out commonalities between StrLenOptimization and StrCatOptimization
* Make sure that signatures return sbyte* not void*

llvm-svn: 21559
2005-04-26 05:24:00 +00:00
Reid Spencer
6a1c238029 Incorporate feedback from Chris:
* Change signatures of OptimizeCall and ValidateCalledFunction so they are
  non-const, allowing the optimization object to be modified. This is in
  support of caching things used across multiple calls.
* Provide two functions for constructing and caching function types
* Modify the StrCatOptimization to cache Function objects for strlen and
  llvm.memcpy so it doesn't regenerate them on each call site. Make sure
  these are invalidated each time we start the pass.
* Handle both a GEP Instruction and a GEP ConstantExpr
* Add additional checks to make sure we really are dealing with an array of
  sbyte and that all the element initializers are ConstantInt or
  ConstantExpr that reduce to ConstantInt.
* Make sure the GlobalVariable is constant!
* Don't use ConstantArray::getString as it can fail and it doesn't give us
  the right thing. We must check for null bytes in the middle of the array.
* Use llvm.memcpy instead of memcpy so we can factor alignment into it.
* Don't use void* types in signatures, replace with sbyte* instead.

llvm-svn: 21555
2005-04-26 03:26:15 +00:00
Reid Spencer
5fcce35fa8 Changes due to code review and new implementation:
* Don't use std::string for the function names, const char* will suffice
* Allow each CallOptimizer to validate the function signature before
  doing anything
* Repeatedly loop over the functions until an iteration produces
  no more optimizations. This allows one optimization to insert a
  call that is optimized by another optimization.
* Implement the ConstantArray portion of the StrCatOptimization
* Provide a template for the MemCpyOptimization
* Make ExitInMainOptimization split the block, not delete everything
  after the return instruction.
(This covers revision 1.3 and 1.4, as the 1.3 comments were botched)

llvm-svn: 21548
2005-04-25 21:20:38 +00:00
Reid Spencer
9b66533e40 Lots of changes based on review and new functionality:
* Use a 

llvm-svn: 21546
2005-04-25 21:11:48 +00:00
Chris Lattner
3f22e5ba5d implement getelementptr.ll:test10
llvm-svn: 21541
2005-04-25 20:17:30 +00:00
Reid Spencer
4b4864684a Post-Review Cleanup:
* Fix comments at top of file
* Change algorithm for running the call optimizations from n*n to something
  closer to n.
* Use a hash_map to store and lookup the optimizations since there will
  eventually (or potentially) be a large number of them. This gets lookup
  based on the name of the function to O(1). Each CallOptimizer now has a
  std::string member named func_name that tracks the name of the function
  that it applies to. It is this string that is entered into the hash_map
  for fast comparison against the function names encountered in the module.
* Cleanup some style issues pertaining to iterator invalidation
* Don't pass the Function pointer to the OptimizeCall function because if
  the optimization needs it, it can get it from the CallInst passed in.
* Add the skeleton for a new CallOptimizer, StrCatOptimizer which will
  eventually replace strcat's of constant strings with direct copies.
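
A rough sketch of the dispatch scheme described above, with
std::unordered_map standing in for the pre-standard hash_map; the class and
member names (LibCallOptimizer, optimizeCall, CallSite) are illustrative
only:

// Each optimizer is registered under the name of the library function it
// handles, so a call site is matched with one hash lookup instead of a
// scan over every optimizer.  All names here are illustrative.
#include <string>
#include <unordered_map>

struct CallSite {};                     // placeholder for a real call site

struct LibCallOptimizer {
  virtual ~LibCallOptimizer() {}
  virtual bool optimizeCall(CallSite &CS) = 0;
};

struct StrCatOptimizer : LibCallOptimizer {
  bool optimizeCall(CallSite &) override { return false; } // body omitted
};

int main() {
  std::unordered_map<std::string, LibCallOptimizer *> optimizers;
  StrCatOptimizer strcatOpt;
  optimizers["strcat"] = &strcatOpt;    // func_name -> optimizer

  CallSite cs;
  auto it = optimizers.find("strcat");  // O(1) lookup by callee name
  if (it != optimizers.end())
    it->second->optimizeCall(cs);
  return 0;
}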

llvm-svn: 21526
2005-04-25 03:59:26 +00:00
Reid Spencer
95a0d8af78 A new pass to provide specific optimizations for certain well-known library
calls. The pass visits all external functions in the module and determines
if such function calls can be optimized. The optimizations are specific to
the library calls involved. This initial version only optimizes calls to
exit(3) when they occur in main(): it changes them to ret instructions.

llvm-svn: 21522
2005-04-25 02:53:12 +00:00
Chris Lattner
e78ae0e1b1 Eliminate cases where we could << by 64, which is undefined in C.
llvm-svn: 21500
2005-04-24 17:46:05 +00:00
Chris Lattner
5fdcc49858 Implement xor.ll:test21: select (not C), A, B -> select C, B, A
llvm-svn: 21495
2005-04-24 07:30:14 +00:00
Chris Lattner
26c5e79151 Use getPrimitiveSizeInBits() instead of getPrimitiveSize()*8
Completely rework the 'setcc (cast x to larger), y' code.  This code has
the advantage of implementing setcc.ll:test19 (being more general than
the previous code) and being correct in all cases.

This allows us to unxfail 2004-11-27-SetCCForCastLargerAndConstant.ll,
and close PR454.

llvm-svn: 21491
2005-04-24 06:59:08 +00:00
Jeff Cohen
6c42217055 Eliminate tabs and trailing spaces
llvm-svn: 21480
2005-04-23 21:38:35 +00:00
Chris Lattner
42869e162e Generalize the setcc -> PHI and Select folding optimizations to work with
any constant RHS, not just a constant integer RHS.  This implements
select.ll:test17

llvm-svn: 21470
2005-04-23 15:31:55 +00:00