Commit Graph

7137 Commits

Author SHA1 Message Date
Owen Anderson
a88628cd72 Now that the profitable bits of EnableFullLoadPRE have been enabled by default, rip out the remainder.
Anyone interested in more general PRE would be better served by implementing it separately, to get real
anticipation calculation, etc.

llvm-svn: 115337
2010-10-01 20:02:55 +00:00
Eric Christopher
792e8112b0 Fix the other half of the alignment changing issue by making sure that the
memcpy alignment is the minimum of the incoming alignments.

Fixes PR 8266.
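
Schematically (illustrative, not the actual patch):

    #include <algorithm>

    // The rewritten memcpy must not claim more alignment than its most
    // weakly aligned operand actually has.
    unsigned chooseMemcpyAlign(unsigned DstAlign, unsigned SrcAlign) {
      return std::min(DstAlign, SrcAlign);
    }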

llvm-svn: 115305
2010-10-01 09:02:05 +00:00
Chris Lattner
bf0f375aba fix PR8267 - Instcombine shouldn't optimize away volatile memcpy's.
llvm-svn: 115296
2010-10-01 05:51:02 +00:00
Dale Johannesen
c14a1eda84 Massive rewrite of MMX:
The x86_mmx type is used for MMX intrinsics, parameters and
return values where these use MMX registers, and is also
supported in load, store, and bitcast.

Only the above operations generate MMX instructions, and optimizations
do not operate on or produce MMX intrinsics. 

MMX-sized vectors <2 x i32> etc. are lowered to XMM or split into
smaller pieces.  Optimizations may occur on these forms and the
result cast back to x86_mmx, provided the result feeds into a
previously existing x86_mmx operation.

The point of all this is to prevent optimizations from introducing
MMX operations, which is unsafe due to the EMMS problem.
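
For illustration, a C-level sketch of code that still maps onto the new x86_mmx type (only explicit intrinsics, loads, stores, and bitcasts touch it):

    #include <mmintrin.h>

    __m64 add2(__m64 a, __m64 b) {
      // _mm_add_pi32 lowers through the x86_mmx type to an MMX paddd.
      // The optimizer no longer synthesizes MMX operations on its own,
      // so the programmer keeps control of when EMMS is needed.
      return _mm_add_pi32(a, b);
    }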

llvm-svn: 115243
2010-09-30 23:57:10 +00:00
Owen Anderson
5adba2c2ff We do want to allow LoadPRE to perform LICM-like transformations: we already consider PHI nodes to be negligible for
code size (making this transform code size neutral), and it allows us to hoist values out of loops, which is always
a good thing.
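
A hypothetical example of the kind of hoisting this enables:

    int sum(int *p, int n) {
      int s = 0;
      for (int i = 0; i < n; ++i)
        s += *p; // loop-invariant load: LoadPRE can now hoist it out of
                 // the loop, at the cost of only a PHI node
      return s;
    }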

llvm-svn: 115205
2010-09-30 20:53:04 +00:00
Jakob Stoklund Olesen
603950c4af Try again to disable critical edge splitting in CodeGenPrepare.
The bug that broke i386 linux has been fixed in r115191.

llvm-svn: 115204
2010-09-30 20:51:52 +00:00
Benjamin Kramer
d1710d7fb3 Tighten up prototype verification of strchr and strrchr to avoid a crash in the very unlikely case that someone passes an integer > i64 to strchr.
llvm-svn: 115144
2010-09-30 11:21:59 +00:00
Benjamin Kramer
2a44a539e2 Add constant folding for strspn and strcspn to SimplifyLibCalls.
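
For example (illustrative, not from the commit), with both arguments constant the result is computable at compile time:

    #include <string.h>

    size_t n(void) {
      // The initial segment of "hello" drawn from {'h','e','l'} is
      // "hell", so this call folds to the constant 4.
      return strspn("hello", "hel");
    }
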
llvm-svn: 115116
2010-09-30 00:58:35 +00:00
Benjamin Kramer
476bfb7a10 Add strpbrk folding to SimplifyLibCalls.
llvm-svn: 115111
2010-09-29 23:52:12 +00:00
Benjamin Kramer
cec2603ec2 Simplify the loop in StrChrOptimizer. FileCheckize test.
llvm-svn: 115095
2010-09-29 22:29:12 +00:00
Benjamin Kramer
75a825ff6b Teach SimplifyLibCalls how to optimize strrchr.
llvm-svn: 115091
2010-09-29 21:50:51 +00:00
Owen Anderson
8e70968a13 Fix PR8247: JumpThreading can cause a block to become unreachable while still having predecessors, if it is part of a self-loop.
Because of this, we cannot use the Simplify* APIs, as they can assert-fail on unreachable code.  Since it's not easy to determine
whether a given threading will cause a block to become unreachable, simply defer simplification to later InstCombine and/or
DCE passes.

llvm-svn: 115082
2010-09-29 20:34:41 +00:00
Owen Anderson
8d6be4804f Revert r114919, which caused some serious regressions on ARM.
llvm-svn: 115053
2010-09-29 18:05:19 +00:00
Oscar Fuentes
eb27a44982 Removed a bunch of unnecessary target_link_libraries.
llvm-svn: 114999
2010-09-28 22:39:14 +00:00
Owen Anderson
4e6951b92f Weight loop unrolling counts by nesting depth. Unrolling deeply nested loops tends to cause high
register pressure and thus excess spills, which we don't currently recover from well.  This should
be re-evaluated in the future if our ability to generate good spills/splits improves.

Partial fix for <rdar://problem/7635585>.

llvm-svn: 114919
2010-09-27 22:58:54 +00:00
Jakob Stoklund Olesen
cfed90fe40 Revert "Disable codegen prepare critical edge splitting. Machine instruction passes now"
This reverts revision 114633. It was breaking llvm-gcc-i386-linux-selfhost.

It seems there is a downstream bug that is exposed by
-cgp-critical-edge-splitting=0. When that bug is fixed, this patch can go back
in.

Note that the changes to tailcallfp2.ll are not reverted. They were good and
required.

llvm-svn: 114859
2010-09-27 18:43:48 +00:00
Dan Gohman
4222dd7f87 Delete an unused function.
llvm-svn: 114841
2010-09-27 16:58:21 +00:00
Owen Anderson
856fcd57d1 LoadPRE was not properly checking that the load it was PRE'ing post-dominated the block it was being hoisted to.
Splitting critical edges at the merge point only addressed part of the issue; it is also possible for non-post-domination
to occur when the path from the load to the merge has branches in it.  Unfortunately, full anticipation analysis is
time-consuming, so for now we approximate it.  This is strictly more conservative than real anticipation, so we will miss
some cases that real PRE would allow, but we also no longer insert loads into paths where they didn't exist before. :-)
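
A sketch of the hazard the check prevents (illustrative C):

    int f(int *p, int cond) {
      if (cond)
        return 0; // this path never dereferences p
      return *p;  // this load does not post-dominate the branch, so
                  // hoisting it above the branch would insert a load on
                  // a path where none existed before
    }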

This is a very slight net positive on SPEC for me (0.5% on average).  Most of the benchmarks are largely unaffected, but
when it pays off it pays off decently: 181.mcf improves by 4.5% on my machine.

llvm-svn: 114785
2010-09-25 05:26:18 +00:00
Eric Christopher
7dec569dfa If we're changing the source of a memcpy, we need to use the alignment
of the source, not the original alignment, since it may no longer
be valid.

Fixes rdar://8400094

llvm-svn: 114781
2010-09-25 00:57:26 +00:00
Michael J. Spencer
87fea6690f Get rid of pop_macro warnings on MSVC.
llvm-svn: 114750
2010-09-24 19:48:47 +00:00
Bob Wilson
0fdee121f7 Fix llvm-extract so that it changes the linkage of all GlobalValues to
"external" even when doing lazy bitcode loading.  This was broken because
a function that is not materialized fails the !isDeclaration() test.

llvm-svn: 114666
2010-09-23 17:25:06 +00:00
Evan Cheng
1493b1799e Disable codegen prepare critical edge splitting. Machine instruction passes now
break critical edges on demand.

llvm-svn: 114633
2010-09-23 06:55:34 +00:00
Bob Wilson
fbec6680d1 When moving zext/sext to be folded with a load, ignore the issue of whether
truncates are free only in the case where the extended type is legal but the
load type is not.  If both types are illegal, such as when they are too big,
the load may not be legalized into an extended load.

llvm-svn: 114568
2010-09-22 18:44:56 +00:00
Bob Wilson
a22747a563 Move a sign-extend or a zero-extend of a load to the same basic block as the
load when the type of the load is not legal, even if truncates are not free.
The load is going to be legalized to an extending load anyway.
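
A rough C++ sketch of the idea (hypothetical variables, not the actual patch):

    // The illegal-typed load will be legalized into an extending load
    // anyway, so sink the extension into the load's block, where
    // instruction selection can fold the pair into one extload.
    if (!TLI->isTypeLegal(TLI->getValueType(Load->getType())))
      Ext->moveBefore(Load->getNextNode());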

llvm-svn: 114488
2010-09-21 21:54:27 +00:00
Bob Wilson
064f6a1a3d Clarify a comment.
llvm-svn: 114487
2010-09-21 21:44:14 +00:00
Gabor Greif
dd1c709af4 do not rely on the implicit-dereference semantics of dyn_cast_or_null
llvm-svn: 114278
2010-09-18 11:55:34 +00:00
Gabor Greif
5348c51fbe do not rely on the implicit-dereference semantics of dyn_cast_or_null
llvm-svn: 114277
2010-09-18 11:53:39 +00:00
Owen Anderson
1a679ae773 Use a depth-first iteration in CorrelatedValuePropagation to avoid wasting time trying
to optimize unreachable blocks.
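
A rough sketch of the traversal (assuming a Function &F in scope; runOnBlock is a hypothetical per-block driver):

    #include "llvm/ADT/DepthFirstIterator.h"

    // Blocks unreachable from the entry block are simply never visited
    // by a depth-first walk, so no effort is wasted on them.
    for (df_iterator<BasicBlock *> I = df_begin(&F.getEntryBlock()),
                                   E = df_end(&F.getEntryBlock());
         I != E; ++I)
      runOnBlock(*I);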

llvm-svn: 114105
2010-09-16 18:35:07 +00:00
Dale Johannesen
cf9dc14249 When substituting sunkaddrs into the indirect arguments of an asm, we were
walking the asm arguments once and stashing their Values.  This is
wrong because the same memory location can be in the list twice, and
if the first one has a sunkaddr substituted, the stashed value for the
second one will be wrong (use-after-free).  PR 8154.

llvm-svn: 114104
2010-09-16 18:30:55 +00:00
Chris Lattner
8729e47b8f fix PR8144, a bug where constant merge would merge globals marked
attribute(used).
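
For instance (illustrative):

    // Both globals have identical constant initializers, but 'kept' is
    // marked used (e.g. referenced from inline asm or by an external
    // tool), so ConstantMerge must not fold it into 'other'.
    __attribute__((used)) static const int kept = 42;
    static const int other = 42;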

llvm-svn: 113911
2010-09-15 00:30:11 +00:00
Owen Anderson
9f8f48f221 Remove the option to disable LazyValueInfo in JumpThreading, as it is now
on by default and has received significant testing.

llvm-svn: 113852
2010-09-14 20:57:41 +00:00
Chris Lattner
0718ff9be2 fix PR8102, a case where we'd copyValue from a value that we already
deleted.  Fix this by doing the copyValues before we delete stuff!

The testcase only repros the problem on my system with valgrind.

llvm-svn: 113820
2010-09-14 00:19:00 +00:00
Michael J. Spencer
90f807fda5 Revert "CMake: Get rid of LLVMLibDeps.cmake and export the libraries normally."
This reverts commit r113632

Conflicts:

	cmake/modules/AddLLVM.cmake

llvm-svn: 113819
2010-09-13 23:59:48 +00:00
Eric Christopher
6d1cd6fab4 Remove unused variable.
llvm-svn: 113769
2010-09-13 18:27:59 +00:00
John Thompson
ae3a86d6de Added skeleton for inline asm multiple alternative constraint support.
llvm-svn: 113766
2010-09-13 18:15:37 +00:00
Owen Anderson
9c34a7831d Re-apply r113679, which was reverted in r113720, which added a pair of new instcombine transforms
to expose greater opportunities for store narrowing in codegen.  This patch fixes a potential
infinite loop in instcombine caused by one of the introduced transforms being overly aggressive.

llvm-svn: 113763
2010-09-13 17:59:27 +00:00
Eric Christopher
d4aaabfa74 Revert 113679, it was causing an infinite loop in a testcase that I've sent
on to Owen.

llvm-svn: 113720
2010-09-12 06:09:23 +00:00
Owen Anderson
d4ebde12ce Invert and-of-or into or-of-and when doing so would allow us to clear bits of the and's mask.
This can result in increased opportunities for store narrowing in code generation.  Update a number of
tests for this change.  This fixes <rdar://problem/8285027>.

Additionally, because this inverts the order of ors and ands, some patterns for optimizing or-of-and-of-or
no longer fire in instances where they did originally.  Add a simple transform which recaptures most of these
opportunities: if we have an or-of-constant-or and have failed to fold away the inner or, commute the order 
of the two ors, to give the non-constant or a chance for simplification instead.
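
A worked example of the inversion (illustrative C):

    unsigned before(unsigned x) {
      return (x | 0x00FF) & 0xFFFF;  // and-of-or, and mask 0xFFFF
    }
    unsigned after(unsigned x) {
      // The or's constant forces the low byte to all ones, so those
      // bits can be cleared from the and's mask: or-of-and form.
      return (x & 0xFF00) | 0x00FF;
    }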

llvm-svn: 113679
2010-09-11 05:48:06 +00:00
Gabor Greif
dfe6dea95f Fix typos.
llvm-svn: 113647
2010-09-10 22:25:58 +00:00
Michael J. Spencer
98ad3f2ea7 CMake: Get rid of LLVMLibDeps.cmake and export the libraries normally.
llvm-svn: 113632
2010-09-10 21:14:25 +00:00
Benjamin Kramer
cd84f14639 This transform is also performed by InstructionSimplify, remove the duplicate.
llvm-svn: 113608
2010-09-10 19:52:35 +00:00
Owen Anderson
c0664eb99a Lower the unrolling threshold to 150. Empirical tests indicate that this is a sweet spot on the performance per
code size increase curve.

llvm-svn: 113595
2010-09-10 17:57:00 +00:00
Owen Anderson
f7276d42d3 What the loop unroller cares about, rather than just not unrolling loops with calls, is
not unrolling loops that contain calls that would be better off getting inlined.  This mostly
comes up when an interleaved devirtualization pass has devirtualized a call which the inliner
will inline on a future pass.  Thus, rather than blocking all loops containing calls, add
a metric for "inline candidate calls" and block loops containing those instead.

llvm-svn: 113535
2010-09-09 20:32:23 +00:00
Owen Anderson
db6a08beef Revert r113439, which relaxed the requirement that loops containing calls cannot be unrolled. After some discussion,
there seems to be a better way to achieve the same effect.

llvm-svn: 113528
2010-09-09 20:02:23 +00:00
Owen Anderson
82f5f82ccd r113526 introduced an unintended change to the loop unrolling threshold. Revert it.
llvm-svn: 113527
2010-09-09 19:11:57 +00:00
Owen Anderson
46189436ea Fix typo in code to cap the loop code size reduction calculation.
llvm-svn: 113526
2010-09-09 19:08:59 +00:00
Owen Anderson
a182e2b20b Use code-size reduction metrics to estimate the amount of savings we'll get when we unroll a loop.
Next step is to recalculate the threshold values given this new heuristic.

llvm-svn: 113525
2010-09-09 19:07:31 +00:00
Owen Anderson
956afdd1f2 Relax the "don't unroll loops containing calls" rule. Instead, when a loop contains a call, lower the
unrolling threshold to the optimize-for-size threshold.  Basically, for loops containing calls, unrolling
can still be profitable as long as the loop is REALLY small.
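
Schematically (hypothetical names; the constant values come from the neighboring threshold commits):

    // A loop containing a call must clear the small optimize-for-size
    // threshold instead of the regular unrolling threshold.
    unsigned Threshold = LoopHasCall ? OptSizeUnrollThreshold // e.g. 50
                                     : UnrollThreshold;       // e.g. 150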

llvm-svn: 113439
2010-09-08 23:10:07 +00:00
Owen Anderson
c51d7d1a8d Generalize instcombine's support for combining multiple bit checks into a single test. Patch by Dirk Steinke!
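
For example (illustrative):

    // Two separate bit tests ...
    int before(unsigned x) { return (x & 4) == 0 && (x & 16) == 0; }
    // ... become a single test against the union of the masks.
    int after(unsigned x)  { return (x & 20) == 0; }
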
llvm-svn: 113423
2010-09-08 22:16:17 +00:00
Owen Anderson
38493be0ce Add a separate unrolling threshold when the current function is being optimized for size.
The threshold value of 50 is arbitrary, and I chose it simply by analogy to the inlining thresholds, where
the baseline unrolling threshold is slightly smaller than the baseline inlining threshold.  This could
undoubtedly use some tuning.

llvm-svn: 113306
2010-09-07 23:15:30 +00:00