mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-11-24 11:42:57 +01:00
Commit Graph

6051 Commits

Author SHA1 Message Date
Devang Patel
4fcea36b8b Rewrite code that 1) filters loops and 2) calculates new loop bounds.
This fixes many bugs. I will add more test cases in a separate check-in.

Some day, the code that manipulates the CFG and updates dominator info could use some refactoring help.

llvm-svn: 60554
2008-12-04 21:38:42 +00:00
Bill Wendling
a0466523bd Temporarily revert r60519. It was causing a bootstrap failure:
/Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm-gcc.obj/./gcc/xgcc -B/Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm-gcc.obj/./gcc/ -B/Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm-gcc.install/i386-apple-darwin9.5.0/bin/ -B/Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm-gcc.install/i386-apple-darwin9.5.0/lib/ -isystem /Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm-gcc.install/i386-apple-darwin9.5.0/include -isystem /Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm-gcc.install/i386-apple-darwin9.5.0/sys-include -DHAVE_CONFIG_H -I. -I../../../llvm-gcc.src/libgomp -I. -I../../../llvm-gcc.src/libgomp/config/posix -I../../../llvm-gcc.src/libgomp -Wall -pthread -Werror -O2 -g -O2 -MT barrier.lo -MD -MP -MF .deps/barrier.Tpo -c ../../../llvm-gcc.src/libgomp/barrier.c  -fno-common -DPIC -o .libs/barrier.o
checking for sys/file.h... /var/folders/zG/zGE-ZJOGFiGjv0B5cs5oYE+++TM/-Tmp-//cc34Jg5P.s:13:non-relocatable subtraction expression, "_gomp_tls_key" minus "L1$pb"
/var/folders/zG/zGE-ZJOGFiGjv0B5cs5oYE+++TM/-Tmp-//cc34Jg5P.s:13:symbol: "_gomp_tls_key" can't be undefined in a subtraction expression
make[4]: *** [barrier.lo] Error 1
make[4]: *** Waiting for unfinished jobs....
/Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm-gcc.obj/./gcc/xgcc -B/Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm-gcc.obj/./gcc/ -B/Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm-gcc.install/i386-apple-darwin9.5.0/bin/ -B/Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm-gcc.install/i386-apple-darwin9.5.0/lib/ -isystem /Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm-gcc.install/i386-apple-darwin9.5.0/include -isystem /Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm-gcc.install/i386-apple-darwin9.5.0/sys-include -DHAVE_CONFIG_H -I. -I../../../llvm-gcc.src/libgomp -I. -I../../../llvm-gcc.src/libgomp/config/posix -I../../../llvm-gcc.src/libgomp -Wall -pthread -Werror -O2 -g -O2 -MT alloc.lo -MD -MP -MF .deps/alloc.Tpo -c ../../../llvm-gcc.src/libgomp/alloc.c -o alloc.o >/dev/null 2>&1
yes
checking for sys/param.h... make[3]: *** [all-recursive] Error 1
make[2]: *** [all] Error 2
make[1]: *** [all-target-libgomp] Error 2
make[1]: *** Waiting for unfinished jobs....

llvm-svn: 60527
2008-12-04 04:07:00 +00:00
Evan Cheng
d4b7459179 Visibility hidden GVs do not require extra load of symbol address from the GOT or non-lazy-ptr.
llvm-svn: 60519
2008-12-04 01:56:50 +00:00
Evan Cheng
05ded29738 Use mmx (punpckldq VR64, (mmx_v_set0)) to clear the high 32 bits of a VR64 register.
llvm-svn: 60499
2008-12-03 19:38:05 +00:00
Rafael Espindola
0b01e188e5 Fix some tests. The grep for "il" was matching "file".
llvm-svn: 60485
2008-12-03 17:14:56 +00:00
Richard Osborne
e74ae9dbb7 Add support for ISD::TRAP to the XCore backend
llvm-svn: 60479
2008-12-03 10:59:16 +00:00
Evan Cheng
803ac3b438 Fix test.
llvm-svn: 60476
2008-12-03 08:20:45 +00:00
Chris Lattner
3f3717a4e2 testcase for br undef folding.
llvm-svn: 60471
2008-12-03 07:48:27 +00:00
Chris Lattner
f00b2f3fb4 Teach jump threading some more simple tricks:
1) Have it fold "br undef", which does occur with
   surprising frequency as jump threading iterates.
2) Teach jump threading to delete dead blocks.  This removes the successor
   edges, reducing the in-edges of other blocks, allowing
   recursive simplification.
3) Fold things like:
     br COND, BBX, BBY
  BBX:
     br COND, BBZ, BBW

   which also happens because jump threading iterates.

llvm-svn: 60470
2008-12-03 07:48:08 +00:00
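
A minimal sketch of cases 1 and 2 (hypothetical IR in the era's syntax, not taken from the commit): once earlier threading leaves a conditional branch on undef, it can be folded to an unconditional branch, and the successor that is no longer reached may become a dead block that can now be deleted.

define i32 @fold_br_undef() {
entry:
        br i1 undef, label %a, label %b   ; case 1: folds to an unconditional br
a:
        ret i32 0
b:
        ; case 2: %b may become unreachable and get deleted
        ret i32 1
}
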
Chris Lattner
683df044b0 don't spew tons of stuff to the output. This testcase is *not* for
loop deletion (it is for a ton of passes), which is very bad.

llvm-svn: 60465
2008-12-03 06:41:50 +00:00
Dan Gohman
ac6561793c Mark x86's V_SET0 and V_SETALLONES with isSimpleLoad, and teach X86's
foldMemoryOperand how to "fold" them, by converting them into constant-pool
loads. When they aren't folded, they use xorps/cmpeqd, but for example when
register pressure is high, they may now be folded as memory operands, which
reduces register pressure.

Also, mark V_SET0 isAsCheapAsAMove so that two-address-elimination will
remat it instead of copying zeros around (V_SETALLONES was already marked).

llvm-svn: 60461
2008-12-03 05:21:24 +00:00
Bill Wendling
f1fab58701 Change label to 'carry' for unsigned adds.
llvm-svn: 60460
2008-12-03 02:43:12 +00:00
Dan Gohman
dcd4896f12 Fix byval arguments in the fastcc calling convention. The fastcc convention
delegates to the regular x86-32 convention which handles byval, but only
after it handles a few cases, and it's necessary to handle byval before
handling those cases. This fixes PR3122 (and rdar://6400815), llvm-gcc
miscompiling LLVM.

llvm-svn: 60453
2008-12-03 01:28:04 +00:00
Dan Gohman
06c3ee5aa8 Add nounwind attributes to this test.
llvm-svn: 60451
2008-12-03 01:10:18 +00:00
Dale Johannesen
da5e01399a testcases for recent dag combiner changes
llvm-svn: 60449
2008-12-03 00:52:41 +00:00
Evan Cheng
a77559c870 Remove what appears to be an overly strict assertion. Here is what happened:
1. ppcf128 select is expanded to f64 selects.
2. f64 select operand 0 is an i1 truncate; it's promoted to an i32 zero_extend.
3. f64 select is updated. It's changed back to a "NewNode" and re-analyzed.
4. f64 select operands are being processed. Operand 0 is a "NewNode". It's being expunged out of the ReplacedValues map.
5. ExpungeNode tries to remap the f64 select, notices it's a "NewNode", and asserts.
Duncan, please take a look. Thanks.

llvm-svn: 60443
2008-12-02 21:57:09 +00:00
Scott Michel
e0bbe7afb7 CellSPU:
- Incorporate Tilmann Scheller's ISD::TRUNCATE custom lowering patch
- Update SPU calling convention info, even if it's not used yet (but can be
  at some point or another)
- Ensure that any-extended f32 loads are custom lowered, especially when
  they're promoted for use in printf.

llvm-svn: 60438
2008-12-02 19:53:53 +00:00
Chris Lattner
2a9747548e Implement PRE of loads in the GVN pass with a pretty cheap and
straightforward implementation.  This does not require any extra
alias analysis queries beyond what we already do for non-local loads.

Some programs really really like load PRE.  For example, SPASS triggers
this ~1000 times, ~300 times in 255.vortex, and ~1500 times on 403.gcc.

The biggest limitation to the implementation is that it does not split
critical edges.  This is a huge killer on many programs and should be
addressed after the initial patch is enabled by default.

The implementation of this should incidentally speed up rejection of 
non-local loads because it avoids creating the repl densemap in cases 
when it won't be used for fully redundant loads.

This is currently disabled by default.
Before I turn this on, I need to fix a couple of miscompilations in
the testsuite, look at compile time performance numbers, and look at
perf impact.  This is pretty close to ready though.

llvm-svn: 60408
2008-12-02 08:16:11 +00:00
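
As a rough illustration of the kind of partially redundant load this targets (hypothetical IR in the era's syntax, not from the commit): the load in %merge is already available on the edge from %left, so load PRE can insert a copy of the load in %right and replace the original with a phi of the two values.

declare void @use(i32)

define i32 @load_pre(i32* %P, i1 %c) {
entry:
        br i1 %c, label %left, label %right
left:
        %v1 = load i32* %P              ; the load is fully available on this edge
        call void @use(i32 %v1)
        br label %merge
right:
        br label %merge
merge:
        %v2 = load i32* %P              ; partially redundant; PRE inserts a load in
        ret i32 %v2                     ; %right and phis the two loads together
}
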
Owen Anderson
bd844014fa Add a test for my previous PRE fix.
llvm-svn: 60394
2008-12-02 04:25:42 +00:00
Evan Cheng
39d7e00ff9 Fix PR3124: overly strict assert.
llvm-svn: 60392
2008-12-02 02:15:36 +00:00
Bill Wendling
580f12ae30 Second stab at target-dependent lowering of everyone's favorite nodes: [SU]ADDO
- LowerXADDO lowers [SU]ADDO into an ADD with an implicit EFLAGS define. The
  EFLAGS are fed into a SETCC node which has the condition code COND_O or COND_C,
  depending on the type of ADDO requested.

- LowerBRCOND now recognizes if it's coming from a SETCC node with COND_O or
  COND_C set.

llvm-svn: 60388
2008-12-02 01:06:39 +00:00
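
As a hedged sketch of what gets lowered (the intrinsic spelling shown is the modern one and may not match the 2008 tree exactly): an unsigned add-with-overflow yields an i1 carry, and the idea on x86 is a single ADD that defines EFLAGS followed by a SETCC on COND_C, with the signed form using COND_O instead.

declare { i32, i1 } @llvm.uadd.with.overflow.i32(i32, i32)

define i1 @carries(i32 %a, i32 %b) {
entry:
        %t = call { i32, i1 } @llvm.uadd.with.overflow.i32(i32 %a, i32 %b)
        %carry = extractvalue { i32, i1 } %t, 1      ; becomes ADD + SETCC(COND_C)
        ret i1 %carry
}
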
Chris Lattner
baf38b4f91 Add rdar reference, make this actually fail when the patch isn't applied.
llvm-svn: 60376
2008-12-01 22:35:31 +00:00
Dale Johannesen
f4362aae8c Consider only references to an IV within the loop when
figuring out the base of the IV.  This produces better
code in the example.  (Addresses use (IV) instead of 
(BASE,IV) - a significant improvement on low-register
machines like x86).

llvm-svn: 60374
2008-12-01 22:00:01 +00:00
Scott Michel
cf677b5a67 CellSPU:
- Fix v2[if]64 vector insertion code before IBM files a bug report.
- Ensure that zero (0) offsets relative to $sp don't trip an assert
  (add $sp, 0 gets legalized to $sp alone, tripping an assert)
- Shuffle masks passed to SPUISD::SHUFB are now v16i8 or v4i32

llvm-svn: 60358
2008-12-01 17:56:02 +00:00
Bill Wendling
a6e7dd2299 Use m_Specific() instead of double matching.
llvm-svn: 60341
2008-12-01 08:09:47 +00:00
Chris Lattner
e6c7ed156f simplify these patterns using m_Specific. No need to grep for
xor in the testcase ("or" is a substring of "xor").

llvm-svn: 60328
2008-12-01 05:16:26 +00:00
Chris Lattner
0e03e40a76 Teach inst combine to merge GEPs through PHIs. This is really
important because it is sinking the loads using the GEPs, but
not the GEPs themselves.  This triggers 647 times on 403.gcc
and makes the .s file much much nicer.  For example before:

        je      LBB1_87 ## bb78
LBB1_62:        ## bb77
        leal    84(%esi), %eax
LBB1_63:        ## bb79
        movl    (%eax), %eax
...
LBB1_87:        ## bb78
        movl    $0, 4(%esp)
        movl    %esi, (%esp)
        call    L_make_decl_rtl$stub
        jmp     LBB1_62 ## bb77


after:

        jne     LBB1_63 ## bb79
LBB1_62:        ## bb78
        movl    $0, 4(%esp)
        movl    %esi, (%esp)
        call    L_make_decl_rtl$stub
LBB1_63:        ## bb79
        movl    84(%esi), %eax

The input code was (and the GEPs are merged and
the PHI is now eliminated by instcombine):

        br i1 %tmp233, label %bb78, label %bb77
bb77:           
        %tmp234 = getelementptr %struct.tree_node* %t_addr.3, i32 0, i32 0, i32 22              
        br label %bb79
bb78:           
        call void @make_decl_rtl(%struct.tree_node* %t_addr.3, i8* null) nounwind
        %tmp235 = getelementptr %struct.tree_node* %t_addr.3, i32 0, i32 0, i32 22              
        br label %bb79
bb79:           
        %iftmp.12.0.in = phi %struct.rtx_def** [ %tmp235, %bb78 ], [ %tmp234, %bb77 ]           
        %iftmp.12.0 = load %struct.rtx_def** %iftmp.12.0.in             

llvm-svn: 60322
2008-12-01 02:34:36 +00:00
Chris Lattner
01150dce74 testcase for my previous commit.
llvm-svn: 60315
2008-12-01 01:42:03 +00:00
Bill Wendling
23684a026c Implement ((A|B)&1)|(B&-2) -> (A&1) | B transformation. This also takes care of
permutations of this pattern.

llvm-svn: 60312
2008-12-01 01:07:11 +00:00
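
A small sketch of the pattern (hypothetical IR, not from the commit): bit 0 of the result comes from A|B and every other bit comes from B, so the whole expression collapses to (A&1)|B.

define i32 @combine_or_and(i32 %A, i32 %B) {
entry:
        %or = or i32 %A, %B
        %lo = and i32 %or, 1
        %hi = and i32 %B, -2
        %r = or i32 %lo, %hi            ; instcombine: %a1 = and i32 %A, 1
        ret i32 %r                      ;              %r  = or i32 %a1, %B
}
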
Bill Wendling
66a7442059 Add instruction combining for ((A&~B)|(~A&B)) -> A^B and all permutations.
llvm-svn: 60291
2008-11-30 13:52:49 +00:00
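
A sketch of the recognized shape (hypothetical IR): each result bit is set exactly when A and B disagree, which is the definition of xor.

define i32 @recognize_xor(i32 %A, i32 %B) {
entry:
        %notB = xor i32 %B, -1
        %notA = xor i32 %A, -1
        %t1 = and i32 %A, %notB
        %t2 = and i32 %notA, %B
        %r = or i32 %t1, %t2            ; simplifies to: %r = xor i32 %A, %B
        ret i32 %r
}
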
Bill Wendling
3e27ac16a6 Implement (A&((~A)|B)) -> A&B transformation in the instruction combiner. This
takes care of all permutations of this pattern.

llvm-svn: 60290
2008-11-30 13:08:13 +00:00
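
A sketch of this one as well (hypothetical IR): distributing A over (~A|B) gives (A&~A)|(A&B), and A&~A is zero, leaving A&B.

define i32 @drop_redundant_mask(i32 %A, i32 %B) {
entry:
        %notA = xor i32 %A, -1
        %or = or i32 %notA, %B
        %r = and i32 %A, %or            ; simplifies to: %r = and i32 %A, %B
        ret i32 %r
}
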
Bill Wendling
97ad688c1b getSExtValue() doesn't work for ConstantInts with a bitwidth > 64 bits. Use
APInt calls throughout instead.

This fixes PR3144.

llvm-svn: 60288
2008-11-30 12:38:24 +00:00
Eli Friedman
2bc3921ce2 Optimize memmove and memset into the LLVM builtins. Note that these
only show up in code from front-ends besides llvm-gcc, like clang.

llvm-svn: 60287
2008-11-30 08:32:11 +00:00
Eli Friedman
ccdfdbfc99 Follow-up to r60283: optimize arbitrary-width signed divisions as well
as unsigned divisions.  Same caveats as before.

llvm-svn: 60284
2008-11-30 06:35:39 +00:00
Eli Friedman
d7a261120f Fix for PR2164: allow transforming arbitrary-width unsigned divides into
multiplies.

Some more cleverness would still be nice, though. It would be good if we 
could do this transformation on illegal types.  Also, we would 
prefer a narrower constant when possible so that we can use a narrower
multiply, which can be cheaper.

llvm-svn: 60283
2008-11-30 06:02:26 +00:00
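
A hand-expanded sketch of the underlying reciprocal-multiply trick (hypothetical IR, using i32 and a divisor of 3 for concreteness; the actual change works on the SelectionDAG with a high-half multiply and extends the trick to arbitrary bit widths):

define i32 @udiv_by_3(i32 %x) {
entry:
        ; %q = udiv i32 %x, 3 is equivalent to:
        %wide = zext i32 %x to i64
        %mul = mul i64 %wide, 2863311531        ; 0xAAAAAAAB = (2^33 + 1) / 3
        %shr = lshr i64 %mul, 33
        %q = trunc i64 %shr to i32
        ret i32 %q
}
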
Eli Friedman
0ef5e1dc82 APIntify a test which is potentially unsafe otherwise, and fix the
nearby FIXME.

I'm not sure what the right way to fix the Cell test was; if the 
approach I used isn't okay, please let me know.

llvm-svn: 60277
2008-11-30 04:59:26 +00:00
Bill Wendling
5020e916ef Strengthen check for div inst-combining.
llvm-svn: 60276
2008-11-30 04:33:53 +00:00
Bill Wendling
ac11f7d37e Instcombine was illegally transforming -X/C into X/-C when either X or C
overflowed on negation. This commit checks to make sure that neither C nor X
overflows. This requires that the RHS of X (a subtract instruction) be a
constant integer.

llvm-svn: 60275
2008-11-30 03:42:12 +00:00
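
A concrete case where the old rewrite goes wrong (hypothetical IR with made-up values): with X = -2147483648 (INT_MIN) and C = 3, the negation of X wraps, so the two forms disagree.

define i32 @neg_div(i32 %X) {
entry:
        ; With X = INT_MIN, %negX wraps back to INT_MIN, so
        ; %r = INT_MIN / 3 = -715827882, while the rewritten form
        ; X / -3 would give +715827882.  Hence the extra overflow checks.
        %negX = sub i32 0, %X
        %r = sdiv i32 %negX, 3
        ret i32 %r
}
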
Chris Lattner
203a3299e9 don't require GVN to work on dead values, just make the
test return the loaded value.

llvm-svn: 60252
2008-11-29 21:21:48 +00:00
Chris Lattner
f3e49f038c Fix a thinko that manifested as a crash on clamav last night.
llvm-svn: 60251
2008-11-29 20:29:04 +00:00
Chris Lattner
494758e720 Fix PR3141 by ensuring that MemoryDependenceAnalysis::removeInstruction
properly updates the reverse dependency map when it installs updated 
dependencies for instructions that depend on the removed instruction.

llvm-svn: 60222
2008-11-28 22:51:08 +00:00
Chris Lattner
a854ab3760 don't call MergeBasicBlockIntoOnlyPred on a block whose only
predecessor is itself.  This doesn't make sense, and this is
a dead infinite loop anyway.

llvm-svn: 60210
2008-11-28 19:54:49 +00:00
Nick Lewycky
40db216722 Chris prefers icmp/select over udiv!
llvm-svn: 60187
2008-11-27 22:41:10 +00:00
Nick Lewycky
882443585d Add a couple of missed optimizations on integer vectors. Multiply and divide
by 1, as well as multiply by -1.

llvm-svn: 60182
2008-11-27 20:21:08 +00:00
Chris Lattner
73b251b3bf Fix PR3138: if we merge the entry block into another block, make sure to
move the other block back up into the entry position!

llvm-svn: 60179
2008-11-27 19:25:19 +00:00
Bill Wendling
7742719284 XFAIL test due to the reverted patch.
llvm-svn: 60161
2008-11-27 07:34:10 +00:00
Chris Lattner
532458b89f Make jump threading substantially more powerful, in the following ways:
1. Make it fold blocks separated by an unconditional branch.  This enables
   jump threading to see a broader scope.
2. Make jump threading able to eliminate locally redundant loads when they
   feed the branch condition of a block.  This frequently occurs due to
   reg2mem running.
3. Make jump threading able to eliminate *partially redundant* loads when
   they feed the branch condition of a block.  This is common in code with
   lots of loads and stores like C++ code and 255.vortex.

This implements thread-loads.ll and rdar://6402033.

Per the FIXMEs, several pieces of this should be moved into Transforms/Utils.

llvm-svn: 60148
2008-11-27 05:07:53 +00:00
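
A sketch of case 3 (hypothetical IR, not taken from the commit's thread-loads.ll): the load feeding the branch condition is known to be 0 on the edge from %store_zero, so that edge can be threaded straight to %is_zero even though the load is only partially redundant.

define i32 @thread_partial_load(i32* %P, i1 %c) {
entry:
        br i1 %c, label %store_zero, label %join
store_zero:
        store i32 0, i32* %P
        br label %join
join:
        %v = load i32* %P               ; known to be 0 when coming from %store_zero
        %cmp = icmp eq i32 %v, 0
        br i1 %cmp, label %is_zero, label %nonzero
is_zero:
        ret i32 0
nonzero:
        ret i32 %v
}
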
Evan Cheng
ee5e950c25 Avoid inserting noops in the middle of a loop.
llvm-svn: 60141
2008-11-27 01:16:00 +00:00
Evan Cheng
f18016728c On x86, favor folding a short immediate into some arithmetic operations (e.g. add, and, xor) because materializing an immediate in a register is expensive in terms of code size.
e.g.
movl 4(%esp), %eax
addl $4, %eax

is 2 bytes shorter than

movl $4, %eax
addl 4(%esp), %eax

llvm-svn: 60139
2008-11-27 00:49:46 +00:00
Evan Cheng
4da44412cf Add -march=x86.
llvm-svn: 60135
2008-11-27 00:37:06 +00:00