mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-20 03:23:01 +02:00
Commit Graph

2670 Commits

Author SHA1 Message Date
Devang Patel
3e2aba2402 Revert r87059 for now. It is failing clang tests.
llvm-svn: 87070
2009-11-13 02:27:33 +00:00
Victor Hernandez
0be9d07279 Remove unnecessary llvm.dbg.declare bitcast
llvm-svn: 87059
2009-11-13 01:44:55 +00:00
Devang Patel
c024e96ca3 "Attach debug info with llvm instructions" mode was enabled a month ago. Now make it permanent and remove old way of inserting intrinsics to encode debug info for line number and scopes.
llvm-svn: 87014
2009-11-12 19:02:56 +00:00
Evan Cheng
b0a193db31 - Teach LSR to avoid changing the cmp iv stride if doing so would create an immediate that
cannot be folded into the target cmp instruction.
- Avoid a phase ordering issue where early cmp optimization would prevent the
  later count-to-zero optimization.
- Add missing checks; without them LSR could reuse a stride that has no users.
- Fix a bug in the count-to-zero optimization code which failed to find the pre-inc
  iv's phi node.
- Remove, tighten, or loosen some incorrect checks that disable valid transformations.
- Quite a bit of code cleanup.

llvm-svn: 86969
2009-11-12 07:35:05 +00:00
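
A hedged illustration of the count-to-zero optimization mentioned in the commit above (hand-written C++ showing the intended shape of the rewrite, not actual LSR output):

    // Comparing the induction variable against zero lets targets use a cheap
    // decrement-and-branch; the trip count no longer feeds the compare.
    void before(int *A, int N) {
      for (int I = 0; I != N; ++I)   // cmp I, N on every iteration
        A[I] += 1;
    }

    void after(int *A, int N) {
      int *P = A;
      for (int I = N; I != 0; --I)   // cmp I, 0: count down to zero
        *P++ += 1;
    }
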
Chris Lattner
3e63fb7318 various fixes to the lattice transfer functions.
llvm-svn: 86952
2009-11-12 04:57:13 +00:00
Chris Lattner
6d1ca5d976 Add a new getPredicateOnEdge method which returns richer information for
constant constraints.  Improve the LVI lattice to include inequality
constraints.

llvm-svn: 86950
2009-11-12 04:36:58 +00:00
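
A conceptual sketch of the richer lattice described above (LatticeVal, NotConstant, and merge are illustrative names, not the actual LazyValueInfo types): a value can be undefined, a known constant, known to be "not equal to C" (the new inequality constraint), or overdefined.

    #include <cstdint>

    struct LatticeVal {
      enum State { Undefined, Constant, NotConstant, Overdefined } Tag = Undefined;
      int64_t Val = 0;   // meaningful for Constant ("== Val") and NotConstant ("!= Val")
    };

    // Merge facts arriving along two edges; anything that disagrees is
    // conservatively collapsed to Overdefined in this sketch.
    LatticeVal merge(const LatticeVal &A, const LatticeVal &B) {
      if (A.Tag == LatticeVal::Undefined) return B;
      if (B.Tag == LatticeVal::Undefined) return A;
      if (A.Tag == B.Tag && A.Val == B.Val) return A;
      return {LatticeVal::Overdefined, 0};
    }
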
Chris Lattner
b5bb115ece expose edge information and switch j-t to use it.
llvm-svn: 86920
2009-11-12 01:29:10 +00:00
Chris Lattner
46056d81aa move some stuff into DEBUG's and turn on lazy-value-info for
the basic.ll testcase.

llvm-svn: 86918
2009-11-12 01:22:16 +00:00
Devang Patel
c70b8eefb7 Do not use StringRef in the DebugInfo interface.
This allows StringRef to skip the controversial if(str) check in its constructor.
Buildbots: wait for the corresponding clang and llvm-gcc FE check-ins!

llvm-svn: 86914
2009-11-12 00:50:58 +00:00
Chris Lattner
4c831006c7 make LazyValueInfo actually do some stuff. This isn't well tested but improves
strswitch.

llvm-svn: 86889
2009-11-11 22:48:44 +00:00
Chris Lattner
b45381c3f0 stub out some LazyValueInfo interfaces, and have JumpThreading
start using them in a trivial way when -enable-jump-threading-lvi
is passed.  -enable-jump-threading-lvi will be my playground for
a while.

llvm-svn: 86789
2009-11-11 02:08:33 +00:00
Chris Lattner
6c04051d2a Stub out a new lazy value info pass, which will eventually
vend value constraint information to the optimizer.

llvm-svn: 86767
2009-11-11 00:22:30 +00:00
Chris Lattner
c60b41d336 remove a redundant forward declaration. This function is already in
Analysis/Passes.h

llvm-svn: 86765
2009-11-11 00:21:21 +00:00
Devang Patel
5c983cb2ab Implement support for debugging inlined functions.
llvm-svn: 86748
2009-11-10 23:06:00 +00:00
Chris Lattner
ec4264fbb0 move some generally useful functions out of jump threading
into libanalysis and transformutils.

llvm-svn: 86735
2009-11-10 22:26:15 +00:00
Devang Patel
e70bec27b0 Process InlinedAt location info.
Update InsertDeclare to return the newly inserted llvm.dbg.declare intrinsic.

llvm-svn: 86727
2009-11-10 22:05:35 +00:00
Victor Hernandez
3c98070f2c Update computeArraySize() to use ComputeMultiple() to determine the array size associated with a malloc. Also extend PerformHeapAllocSRoA() to check whether the optimized malloc's argument had its highest bit set, so that it is safe for ComputeMultiple() to look through sext instructions while determining the optimized malloc's array size.
llvm-svn: 86676
2009-11-10 08:32:25 +00:00
Victor Hernandez
89c04dbe2f Add a ComputeMultiple() analysis function that recursively determines whether a Value V is a multiple of unsigned Base.
llvm-svn: 86675
2009-11-10 08:28:35 +00:00
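
A conceptual sketch of the idea behind ComputeMultiple() (Expr and isMultipleOf are illustrative stand-ins, not the LLVM API): recurse through multiplies and left shifts to prove that an expression is a multiple of an unsigned Base.

    #include <cstdint>

    struct Expr {
      enum Kind { Const, Mul, Shl } kind;
      uint64_t value = 0;          // payload for Const
      const Expr *lhs = nullptr;
      const Expr *rhs = nullptr;   // for Shl, rhs is the (constant) shift amount
    };

    // Sketch only: wraparound is ignored, and the real analysis also returns
    // the (possibly non-constant) multiplier rather than a plain bool.
    bool isMultipleOf(const Expr &E, uint64_t Base) {
      if (Base == 0) return false;
      if (Base == 1) return true;               // everything is a multiple of 1
      switch (E.kind) {
      case Expr::Const:
        return E.value % Base == 0;
      case Expr::Mul:
        // A product is a multiple of Base if either factor is.
        return isMultipleOf(*E.lhs, Base) || isMultipleOf(*E.rhs, Base);
      case Expr::Shl: {
        // x << c is x * 2^c: a multiple of Base if x is, or if 2^c is.
        if (E.rhs->kind != Expr::Const || E.rhs->value >= 64) return false;
        Expr PowerOfTwo{Expr::Const, uint64_t(1) << E.rhs->value};
        return isMultipleOf(*E.lhs, Base) || isMultipleOf(PowerOfTwo, Base);
      }
      }
      return false;
    }
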
Chris Lattner
04da9d4b33 I misread the parens, not so redundant after all.
llvm-svn: 86648
2009-11-10 02:04:54 +00:00
Chris Lattner
fb540241c8 remove some redundant parens.
llvm-svn: 86645
2009-11-10 01:56:04 +00:00
Chris Lattner
a279728372 add a new SimplifyInstruction API, which is like ConstantFoldInstruction,
except that the result may not be a constant.  Switch jump threading to 
use it so that it gets things like (X & 0) -> 0, which occur when phi preds
are deleted and the remaining phi pred was a zero.

llvm-svn: 86637
2009-11-10 01:08:51 +00:00
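
A conceptual sketch of the kind of identity the new API applies (Value and simplifyAnd here are illustrative stand-ins, not the actual SimplifyInstruction interface): unlike constant folding, the simplified result may be one of the existing, non-constant operands.

    #include <cstdint>
    #include <optional>

    struct Value {                        // stand-in for an SSA value
      std::optional<int64_t> constant;    // set when the value is a known constant
    };

    // Try to simplify "X & Y" without requiring both operands to be constant.
    // Returns the simplified value, or nullptr if no identity applies.
    const Value *simplifyAnd(const Value *X, const Value *Y) {
      // X & 0 -> 0 and 0 & Y -> 0 (the case jump threading hits when the
      // remaining phi predecessor feeds in a zero).
      if (X->constant && *X->constant == 0) return X;
      if (Y->constant && *Y->constant == 0) return Y;
      // X & -1 -> X: all-ones is the identity element for AND.
      if (Y->constant && *Y->constant == -1) return X;
      if (X->constant && *X->constant == -1) return Y;
      // X & X -> X.
      if (X == Y) return X;
      return nullptr;                     // nothing to do
    }
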
Jeffrey Yasskin
23ac706aab Fix DenseMap iterator constness.
This patch forbids implicit conversion of DenseMap::const_iterator to
DenseMap::iterator which was possible because DenseMapIterator inherited
(publicly) from DenseMapConstIterator. Conversion in the other direction is now
allowed, as one would expect.

The template DenseMapConstIterator is removed and the template parameter
IsConst which specifies whether the iterator is constant is added to
DenseMapIterator.

Actually, the IsConst parameter is not necessary, since the constness can be
determined from KeyT, but this is not relevant to the fix and can be addressed
later.

Patch by Victor Zverovich!

llvm-svn: 86636
2009-11-10 01:02:17 +00:00
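
The single-template-with-IsConst pattern described above can be sketched in isolation (BucketIterator is an illustrative name, not the DenseMap code): the converting constructor exists only on the const instantiation, so iterator converts to const_iterator but not the other way around.

    #include <type_traits>

    template <typename T, bool IsConst>
    class BucketIterator {
      using pointer = std::conditional_t<IsConst, const T *, T *>;
      pointer Ptr = nullptr;

    public:
      BucketIterator() = default;
      explicit BucketIterator(pointer P) : Ptr(P) {}

      // iterator -> const_iterator is allowed; the reverse does not compile
      // because this constructor only exists when IsConst is true.
      template <bool WasConst, typename = std::enable_if_t<IsConst && !WasConst>>
      BucketIterator(const BucketIterator<T, WasConst> &Other) : Ptr(Other.base()) {}

      pointer base() const { return Ptr; }
      std::conditional_t<IsConst, const T &, T &> operator*() const { return *Ptr; }
      BucketIterator &operator++() { ++Ptr; return *this; }
      bool operator==(const BucketIterator &RHS) const { return Ptr == RHS.Ptr; }
    };

    using Iter = BucketIterator<int, false>;       // DenseMap-style iterator
    using ConstIter = BucketIterator<int, true>;   // DenseMap-style const_iterator
    // ConstIter CI = Iter();   // fine: implicit iterator -> const_iterator
    // Iter I = ConstIter();    // error: no conversion the other way
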
Chris Lattner
3730cf6fef factor simplification logic for AND and OR out to InstSimplify from instcombine.
llvm-svn: 86635
2009-11-10 00:55:12 +00:00
Chris Lattner
9941f27797 pull a bunch of logic out of instcombine into instsimplify for compare
simplification; this handles the foldable fcmp x,x cases among many others.

llvm-svn: 86627
2009-11-09 23:55:12 +00:00
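
A hedged sketch of the "fcmp x, x" reasoning (FPred and foldFCmpSelf are illustrative, not the instsimplify code): with identical operands the only unknown is whether x is NaN, so a predicate folds exactly when its ordered and unordered answers agree.

    #include <optional>

    enum class FPred { OEQ, OGT, OGE, OLT, OLE, ONE, ORD,
                       UEQ, UGT, UGE, ULT, ULE, UNE, UNO };

    std::optional<bool> foldFCmpSelf(FPred P) {
      switch (P) {
      // Ordered strict predicates: false for x == x and false for NaN.
      case FPred::OGT: case FPred::OLT: case FPred::ONE:
        return false;
      // Unordered non-strict predicates: true for x == x and true for NaN.
      case FPred::UEQ: case FPred::UGE: case FPred::ULE:
        return true;
      default:
        return std::nullopt;   // the result still depends on whether x is NaN
      }
    }
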
Dan Gohman
22719d2a1a Pass the (optional) TargetData object to ConstantFoldInstOperands
and ConstantFoldCompareInstOperands.

llvm-svn: 86626
2009-11-09 23:34:17 +00:00
Chris Lattner
25700676d4 rename SimplifyCompare -> SimplifyCmpInst and split it into
Simplify[IF]Cmp pieces.  Add some predicates to CmpInst to 
determine whether a predicate is fp or int.

llvm-svn: 86624
2009-11-09 23:28:39 +00:00
Chris Lattner
131172dc76 fix ConstantFoldCompareInstOperands to take the LHS/RHS as
individual operands instead of taking a temporary array

llvm-svn: 86619
2009-11-09 23:06:58 +00:00
Chris Lattner
126d78f1c7 stub out a new libanalysis "instruction simplify" interface that
takes decimated instructions and applies identities to them.  This
is pretty minimal at this point, but I plan to pull some instcombine
logic out into these and similar routines.

llvm-svn: 86613
2009-11-09 22:57:59 +00:00
Dan Gohman
6780148e20 Default-addressspace null pointers don't alias anything. This allows
GVN to be more aggressive. Patch by Hans Wennborg! (with a comment added by me)

llvm-svn: 86582
2009-11-09 19:29:11 +00:00
Dan Gohman
68d528963c Minor tidiness fixes.
llvm-svn: 86565
2009-11-09 18:19:43 +00:00
Victor Hernandez
8736a8fca4 Re-commit r86077 now that r86290 fixes the 179.art and 175.vpr ARM regressions.
Here is the original commit message:

This commit updates malloc optimizations to operate on malloc calls that have constant int size arguments.

Update CreateMalloc so that its callers specify the size to allocate:
MallocInst-autoupgrade users use non-TargetData-computed allocation sizes.
Optimization users use TargetData to compute the allocation size.

Now that malloc calls can have constant sizes, update isArrayMallocHelper() to use TargetData to determine the size of the malloced type and the size of malloced arrays.
Extend getMallocType() to support malloc calls that have non-bitcast uses.

Update OptimizeGlobalAddressOfMalloc() to optimize malloc calls that have non-bitcast uses.  The bitcast use of a malloc call has to be treated specially here because the uses of the bitcast need to be replaced and the bitcast needs to be erased (just like the malloc call) for OptimizeGlobalAddressOfMalloc() to work correctly.

Update PerformHeapAllocSRoA() to optimize malloc calls that have non-bitcast uses.  The bitcast use of the malloc is not handled specially here because ReplaceUsesOfMallocWithGlobal replaces through the bitcast use.

Update OptimizeOnceStoredGlobal() to not care about the malloc calls' bitcast use.

Update all globalopt malloc tests to not rely on autoupgraded-MallocInsts, but instead use explicit malloc calls with correct allocation sizes.

llvm-svn: 86311
2009-11-07 00:16:28 +00:00
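
A conceptual sketch of the array-size reasoning described above (arrayLengthForMalloc is an illustrative name, not one of the LLVM helpers): once a malloc call carries an explicit byte count, the array length falls out of checking that the count is an exact multiple of the element size that TargetData reports.

    #include <cstdint>
    #include <optional>

    std::optional<uint64_t> arrayLengthForMalloc(uint64_t AllocSizeInBytes,
                                                 uint64_t ElementSizeInBytes) {
      if (ElementSizeInBytes == 0)
        return std::nullopt;
      if (AllocSizeInBytes % ElementSizeInBytes != 0)
        return std::nullopt;                 // not a whole number of elements
      return AllocSizeInBytes / ElementSizeInBytes;
    }
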
Devang Patel
1a557eacb8 Tolerate invalid derived type.
llvm-svn: 86269
2009-11-06 18:24:05 +00:00
Devang Patel
0d1f6a9d0e Do not bother to emit debug info for nameless global variable.
llvm-svn: 86259
2009-11-06 17:58:12 +00:00
Chris Lattner
903ae55e1c remove a bunch of extraneous LLVMContext arguments
from various APIs, addressing PR5325.

llvm-svn: 86231
2009-11-06 04:27:31 +00:00
Victor Hernandez
a5a12cd62e Revert r86077 because it caused crashes in 179.art and 175.vpr on ARM
llvm-svn: 86213
2009-11-06 01:33:24 +00:00
Dan Gohman
770613e3be Fix IVUsers to avoid assuming that the loop has a unique backedge.
llvm-svn: 86161
2009-11-05 19:41:37 +00:00
Dan Gohman
6f6862e558 Factor out the predicate code for loopsimplify form exit blocks into
a separate helper function.

llvm-svn: 86159
2009-11-05 19:21:41 +00:00
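
A conceptual sketch of the predicate being factored out (Block, Loop, and hasOnlyLoopPredecessors are illustrative names, not the actual helper): in LoopSimplify form an exit block is "dedicated", i.e. every one of its predecessors lies inside the loop.

    #include <set>
    #include <vector>

    struct Block { std::vector<Block *> preds; };
    using Loop = std::set<Block *>;   // the set of blocks belonging to the loop

    bool hasOnlyLoopPredecessors(const Loop &L, const Block &Exit) {
      for (Block *Pred : Exit.preds)
        if (!L.count(Pred))
          return false;   // an edge from outside the loop reaches the exit
      return true;
    }
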
Victor Hernandez
21ec158c23 Update CreateMalloc so that its callers specify the size to allocate:
MallocInst-autoupgrade users use non-TargetData-computed allocation sizes.
Optimization users use TargetData to compute the allocation size.

Now that malloc calls can have constant sizes, update isArrayMallocHelper() to use TargetData to determine the size of the malloced type and the size of malloced arrays.
Extend getMallocType() to support malloc calls that have non-bitcast uses.

Update OptimizeGlobalAddressOfMalloc() to optimize malloc calls that have non-bitcast uses.  The bitcast use of a malloc call has to be treated specially here because the uses of the bitcast need to be replaced and the bitcast needs to be erased (just like the malloc call) for OptimizeGlobalAddressOfMalloc() to work correctly.

Update PerformHeapAllocSRoA() to optimize malloc calls that have non-bitcast uses.  The bitcast use of the malloc is not handled specially here because ReplaceUsesOfMallocWithGlobal replaces through the bitcast use.

Update OptimizeOnceStoredGlobal() to not care about the malloc calls' bitcast use.

Update all globalopt malloc tests to not rely on autoupgraded-MallocInsts, but instead use explicit malloc calls with correct allocation sizes.

llvm-svn: 86077
2009-11-05 00:03:03 +00:00
Devang Patel
c69c29bc4b While calculating original type size for a derived type, handle type variants encoded as DIDerivedType appropriately.
This improves bitfield support.

llvm-svn: 86073
2009-11-04 23:48:00 +00:00
Victor Hernandez
628b65ea6e Changes requested in feedback to r85176: avoid getFunction(), avoid Type creation via isVoidTy(), and avoid redundant isFreeCall cases.
llvm-svn: 85936
2009-11-03 20:39:35 +00:00
Victor Hernandez
046375ac3f Changes based on feedback to r85814: * placement in pointer variables, avoiding an include, and using APInt::getLimitedValue().
llvm-svn: 85933
2009-11-03 20:02:35 +00:00
Chris Lattner
7585c1f162 remove unneeded checks of isFreeCall
llvm-svn: 85866
2009-11-03 05:35:19 +00:00
Chris Lattner
cbd903cd12 remove a check of isFreeCall: the argument to free is already nocapture so the generic call code works fine.
llvm-svn: 85865
2009-11-03 05:34:51 +00:00
Victor Hernandez
1d2fd19df6 Set bit instead of calling pow() to compute 2 << n
llvm-svn: 85814
2009-11-02 18:51:28 +00:00
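
A small, hedged illustration of the fix above (generic C++, not the patched file): produce the power of two by setting a bit / shifting instead of round-tripping through floating-point pow().

    #include <cmath>
    #include <cstdint>

    // Integer version: exact and cheap. Valid for N < 64.
    uint64_t powerOfTwoShift(unsigned N) { return uint64_t(1) << N; }

    // The pattern being replaced (conceptually): pow() works on doubles,
    // drags in libm, and is not guaranteed exact by the standard.
    uint64_t powerOfTwoPow(unsigned N) {
      return static_cast<uint64_t>(std::pow(2.0, N));
    }
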
Edward O'Callaghan
e91c6a1a44 Fix for a warning seen on DF-BSD. Victor, please fix this to use a shift instead of pow().
llvm-svn: 85781
2009-11-02 03:14:31 +00:00
Edward O'Callaghan
64a2e8f6a5 Apply a fix for PR5135. Credit to Andreas Neustifter.
llvm-svn: 85779
2009-11-02 02:55:39 +00:00
Duncan Sands
fd3669403c Add a missing closing parenthesis, and tweak to fit in 80
columns.

llvm-svn: 85732
2009-11-01 19:12:43 +00:00
Chris Lattner
11944e39d4 add a comment about why we don't allow inlining indbr.
llvm-svn: 85724
2009-11-01 18:16:30 +00:00
Douglas Gregor
16c6819959 Reverting 85714, 85715, 85716, which are breaking the build
llvm-svn: 85717
2009-11-01 16:42:53 +00:00
Dan Gohman
58714b62b6 Add a function to Passes.h to allow clients to create instances
of the ScalarEvolution pass without needing to #include ScalarEvolution.h.

llvm-svn: 85716
2009-11-01 15:28:36 +00:00