mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-19 19:12:56 +02:00
Commit Graph

1241 Commits

Author SHA1 Message Date
Chris Lattner
8e0bfd5ce0 Fix rdar://7694996, a miscompile of 183.equake from my patch yesterday, which
confused the old MAT variable with the new GlobalType one.  This caused
us to promote the @disp global pointer into:

@disp.body = internal global double*** undef

instead of:

@disp.body = internal global [3 x double**] undef

llvm-svn: 97285
2010-02-26 23:42:13 +00:00
Chris Lattner
995bac5839 remove dead code; by this point all uses of CI are gone.
llvm-svn: 97283
2010-02-26 23:35:25 +00:00
Chris Lattner
3832b527d8 fix PR6435, another bug from the MallocInst elimination work.
llvm-svn: 97231
2010-02-26 18:23:13 +00:00
Chris Lattner
8cb1dc746d rewrite OptimizeGlobalAddressOfMalloc to fix PR6422 and some other bugs
introduced when MallocInst was eliminated.

llvm-svn: 97178
2010-02-25 22:33:52 +00:00
Nick Lewycky
e0601cb7f8 Modernize comment.
llvm-svn: 97121
2010-02-25 06:39:10 +00:00
Nick Lewycky
772e9deb48 Correct whitespace.
llvm-svn: 97120
2010-02-25 06:38:51 +00:00
Duncan Sands
1b33dd3c83 There are two ways of checking for a given type, for example isa<PointerType>(T)
and T->isPointerTy().  Convert most instances of the first form to the second form.
Requested by Chris.

llvm-svn: 96344
2010-02-16 11:11:14 +00:00
Duncan Sands
2acaf3609c Uniformize the names of type predicates: rather than having isFloatTy and
isInteger, we now have isFloatTy and isIntegerTy.  Requested by Chris!

llvm-svn: 96223
2010-02-15 16:12:20 +00:00
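
The two type-predicate cleanups above amount to a small API preference. A hedged
illustration (using today's header layout, not code from either commit):

  #include "llvm/IR/DerivedTypes.h"  // PointerType
  #include "llvm/IR/Type.h"
  #include "llvm/Support/Casting.h"  // isa<>
  using namespace llvm;

  // Both queries are equivalent; the second spelling is the one these
  // commits standardize on.
  bool isPointerOldStyle(const Type *T) { return isa<PointerType>(T); }
  bool isPointerNewStyle(const Type *T) { return T->isPointerTy(); }

  // The renamed integer predicate, including its width-checking overload.
  bool isAnyInt(const Type *T) { return T->isIntegerTy(); }
  bool isInt32(const Type *T)  { return T->isIntegerTy(32); }
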
Jakob Stoklund Olesen
8ce1b3d280 Enable the inlinehint attribute in the Inliner.
Functions explicitly marked inline will get an inlining threshold slightly
more aggressive than the default for -O3. This means that -O3 builds are
mostly unaffected while -Os builds will be a bit bigger and faster.

The difference depends entirely on how many 'inline's are sprinkled through the
source.

In the CINT2006 suite, only these tests are significantly affected under -Os:

               Size   Time
471.omnetpp   +1.63% -1.85%
473.astar     +4.01% -6.02%
483.xalancbmk +4.60%  0.00%

Note that 483.xalancbmk runs too quickly to give useful timing results.

llvm-svn: 96066
2010-02-13 01:51:53 +00:00
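
For context on the attribute being enabled here: a frontend tags functions
written with the C++ 'inline' keyword using the inlinehint function attribute,
and the inliner consults it when choosing a threshold. The sketch below uses
the modern C++ spelling of that API (the 2010 names differed slightly), and the
threshold bump is purely illustrative:

  #include "llvm/IR/Attributes.h"
  #include "llvm/IR/Function.h"
  using namespace llvm;

  // Frontend side: record that the programmer wrote 'inline'.
  void markInlineHint(Function &F) { F.addFnAttr(Attribute::InlineHint); }

  // Inliner side: give hinted functions a slightly more aggressive
  // threshold, as the commit message describes.  The +25 is a placeholder.
  int pickThreshold(const Function &Callee, int DefaultThreshold) {
    return Callee.hasFnAttribute(Attribute::InlineHint)
               ? DefaultThreshold + 25
               : DefaultThreshold;
  }
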
Chris Lattner
ecb203898a 1. modernize the constantmerge pass, using densemap/smallvector.
2. don't bother trying to merge globals in non-default sections,
   doing so is quite dubious at best anyway.
3. fix a bug reported by Arnaud de Grandmaison where we'd try to
   merge two globals in different address spaces.

llvm-svn: 95995
2010-02-12 18:17:23 +00:00
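
A hedged sketch of the merging loop after this cleanup, written against today's
headers (the function name and the exact policy are illustrative; the real pass
also checks linkage and more): key a DenseMap on the initializer, skip globals
in explicit sections, and never merge across address spaces.

  #include "llvm/ADT/DenseMap.h"
  #include "llvm/IR/Constant.h"
  #include "llvm/IR/GlobalVariable.h"
  #include "llvm/IR/Module.h"
  using namespace llvm;

  void sketchMergeConstants(Module &M) {
    DenseMap<Constant *, GlobalVariable *> CMap;
    for (GlobalVariable &GV : M.globals()) {
      if (!GV.isConstant() || !GV.hasDefinitiveInitializer())
        continue;
      if (GV.hasSection())        // point 2: leave non-default sections alone
        continue;
      GlobalVariable *&Slot = CMap[GV.getInitializer()];
      if (!Slot) {                // first global seen with this initializer
        Slot = &GV;
        continue;
      }
      // point 3: never merge two globals in different address spaces
      if (Slot->getAddressSpace() != GV.getAddressSpace())
        continue;
      GV.replaceAllUsesWith(Slot); // fold the duplicate into the canonical one
    }
  }
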
Devang Patel
43cc7530be Strip new llvm.dbg.value intrinsic.
llvm-svn: 95807
2010-02-10 21:19:56 +00:00
Dan Gohman
92b6122204 Fix "the the" and similar typos.
llvm-svn: 95781
2010-02-10 16:03:48 +00:00
Jakob Stoklund Olesen
83ebc265b3 Reintroduce the InlineHint function attribute.
This time it's for real! I am going to hook this up in the frontends as well.

The inliner has some experimental heuristics for dealing with the inline hint.
When given a -respect-inlinehint option, functions marked with the inline
keyword are given a threshold just above the default for -O3.

We need some experiments to determine if that is the right thing to do.

llvm-svn: 95466
2010-02-06 01:16:28 +00:00
Jakob Stoklund Olesen
54b09bc819 Increase inliner thresholds by 25.
This makes the inliner about as aggressive as it was before my changes to the
inliner cost calculations. These levels give the same performance and slightly
smaller code than before.

llvm-svn: 95320
2010-02-04 18:48:20 +00:00
Jakob Stoklund Olesen
e3fd8b5848 Keep iterating over all uses when meeting a phi node in AllUsesOfValueWillTrapIfNull().
This bug was exposed by my inliner cost changes in r94615, and caused failures
of lencod on most architectures when building with LTO.

This patch fixes lencod and 464.h264ref on x86-64 (and likely others).

llvm-svn: 94858
2010-01-29 23:54:14 +00:00
Jeffrey Yasskin
fb10587e50 Kill ModuleProvider and ghost linkage by inverting the relationship between
Modules and ModuleProviders. Because the "ModuleProvider" simply materializes
GlobalValues now, and doesn't provide modules, it's renamed to
"GVMaterializer". Code that used to need a ModuleProvider to materialize
Functions can now materialize the Functions directly. Functions no longer use a
magic linkage to record that they're materializable; they simply ask the
GVMaterializer.

Because the C ABI must never change, we can't remove LLVMModuleProviderRef or
the functions that refer to it. Instead, because Module now exposes the same
functionality ModuleProvider used to, we store a Module* in any
LLVMModuleProviderRef and translate in the wrapper methods.  The bindings to
other languages still use the ModuleProvider concept.  It would probably be
worth some time to update them to follow the C++ more closely, but I don't
intend to do it.

Fixes http://llvm.org/PR5737 and http://llvm.org/PR5735.

llvm-svn: 94686
2010-01-27 20:34:15 +00:00
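
To make the new split concrete, here is roughly the shape of the lazy-loading
interface the commit describes, reconstructed from the message rather than
copied from the tree, so treat the class and method names as approximate:

  #include <string>

  namespace llvm {
  class GlobalValue;
  class Module;

  // Something that can stream GlobalValue bodies in on demand, without
  // pretending to "provide" a Module.
  class GVMaterializerSketch {
  public:
    virtual ~GVMaterializerSketch() = default;

    // True if GV still has a deferred body that can be read in.
    virtual bool isMaterializable(const GlobalValue *GV) const = 0;

    // Read in the body of GV; returns true and fills ErrInfo on error.
    virtual bool materialize(GlobalValue *GV, std::string *ErrInfo) = 0;

    // Read in every remaining deferred body in M.
    virtual bool materializeModule(Module *M, std::string *ErrInfo) = 0;
  };
  } // namespace llvm
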
Chris Lattner
5a57121631 make -fno-rtti the default unless a directory builds with REQUIRES_RTTI.
llvm-svn: 94378
2010-01-24 20:43:08 +00:00
Nick Lewycky
f9a681fb61 Speculatively revert r94322 to see if it fixes darwin selfhost buildbot.
llvm-svn: 94331
2010-01-23 20:32:12 +00:00
Nick Lewycky
8bbb754c7c Teach DAE that even though it can't modify the function signature of an
externally visible function, it can still find all callers of it and replace
the value passed for a dead argument with undef at each call site.

llvm-svn: 94322
2010-01-23 19:19:34 +00:00
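
A hedged illustration of the trick described above, using the modern CallBase
API and a hypothetical helper name: the signature stays fixed, but every direct
call site can still pass undef for the argument known to be dead.

  #include "llvm/IR/Constants.h"
  #include "llvm/IR/DerivedTypes.h"
  #include "llvm/IR/Function.h"
  #include "llvm/IR/InstrTypes.h"
  #include "llvm/Support/Casting.h"
  using namespace llvm;

  // For an externally visible function F whose ArgNo-th argument is dead,
  // rewrite each direct call site to pass undef in that position.
  void passUndefToDeadArg(Function &F, unsigned ArgNo) {
    Type *ArgTy = F.getFunctionType()->getParamType(ArgNo);
    for (User *U : F.users())
      if (auto *CB = dyn_cast<CallBase>(U))
        if (CB->getCalledOperand() == &F && ArgNo < CB->arg_size())
          CB->setArgOperand(ArgNo, UndefValue::get(ArgTy));
  }
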
Benjamin Kramer
a56f170cc0 Another strncmp -> StringRef.startswith simplification.
llvm-svn: 94203
2010-01-22 20:00:21 +00:00
Chris Lattner
276811b58a Stop building RTTI information for *most* llvm libraries.  Notable
exceptions are libsupport, libsystem and libvmcore.  libvmcore is
currently blocked on bugpoint, which uses EH.  Once it stops using
EH, we can switch it off.

This #if 0's out 3 unit tests, because gtest requires RTTI information.
Suggestions welcome on how to fix this.

llvm-svn: 94164
2010-01-22 06:49:46 +00:00
Jakob Stoklund Olesen
ac14f3bf31 Move per-function inline threshold calculation to a method.
No functional change except the forgotten test for
InlineLimit.getNumOccurrences() == 0 in the CurrentThreshold2 calculation.

llvm-svn: 94007
2010-01-20 17:51:28 +00:00
Duncan Sands
8299315be7 Be less stingy about how many selects and phi nodes we
are prepared to look through.

llvm-svn: 92898
2010-01-07 05:48:42 +00:00
Chris Lattner
1a64e58fa3 handle ConstantVector while I'm in here.
llvm-svn: 92892
2010-01-07 01:20:20 +00:00
Chris Lattner
0b0caf0877 fix a globalopt crash on 'bullet' (handling evaluation of a store
to an element of a vector in a static ctor) which occurs with an 
unrelated patch I'm testing.  Annoyingly, EvaluateStoreInto basically 
does exactly the same stuff as InsertElement constant folding, but it
now handles vectors, and you can't insertelement into a vector.  It
would be 'really nice' if GEP into a vector were not legal.

llvm-svn: 92889
2010-01-07 01:16:21 +00:00
Duncan Sands
de0adbdf25 Fix a README item: have functionattrs look through selects and
phi nodes when deciding which pointers point to local memory.
I actually checked long ago how useful this is, and it isn't
very: it hardly ever fires in the testsuite, but since Chris
wants it, here it is!

llvm-svn: 92836
2010-01-06 15:37:47 +00:00
Duncan Sands
4ef1119d94 Partially address a README by having functionattrs consider calls to
memcpy, memset and other intrinsics that only access their arguments
to be readnone if the intrinsic's arguments all point to local memory.
This improves the testcase in the README to readonly, but it could in
theory be made readnone; however, this would involve more sophisticated
analysis that looks through the memcpy.

llvm-svn: 92829
2010-01-06 08:45:52 +00:00
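
A hedged sketch of the test being added, phrased with today's API (the helper
name is made up; onlyAccessesArgMemory and getUnderlyingObject are the modern
spellings of the relevant queries): a call that only touches its pointer
arguments is readnone from the caller's point of view if every pointer argument
points to local memory, approximated here as a local alloca.

  #include "llvm/Analysis/ValueTracking.h"  // getUnderlyingObject
  #include "llvm/IR/InstrTypes.h"           // CallBase
  #include "llvm/IR/Instructions.h"         // AllocaInst
  using namespace llvm;

  bool callOnlyTouchesLocalMemory(const CallBase &CB) {
    if (!CB.onlyAccessesArgMemory())   // e.g. memcpy/memset intrinsics
      return false;
    for (const Value *Arg : CB.args())
      if (Arg->getType()->isPointerTy() &&
          !isa<AllocaInst>(getUnderlyingObject(Arg)))
        return false;                  // this argument may be non-local
    return true;
  }
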
Benjamin Kramer
0ba7479f2c Move remaining stuff to the isInteger predicate.
llvm-svn: 92771
2010-01-05 21:05:54 +00:00
Dan Gohman
7924aacfe4 Fix indentation.
llvm-svn: 92733
2010-01-05 16:20:55 +00:00
Benjamin Kramer
e90a3c66c4 Avoid going through the LLVMContext for type equality where it's safe to dereference the type pointer.
llvm-svn: 92726
2010-01-05 13:12:22 +00:00
David Greene
6b20914d8a Change errs() to dbgs().
llvm-svn: 92639
2010-01-05 01:28:37 +00:00
David Greene
a48859b420 Change errs() to dbgs().
llvm-svn: 92636
2010-01-05 01:28:29 +00:00
David Greene
6c314b2f2b Change errs() to dbgs().
llvm-svn: 92633
2010-01-05 01:28:12 +00:00
David Greene
706266dce6 Change errs() to dbgs().
llvm-svn: 92631
2010-01-05 01:28:07 +00:00
David Greene
04f5c49e6f Change errs() to dbgs().
llvm-svn: 92629
2010-01-05 01:28:05 +00:00
David Greene
e5c3a6f658 Change errs() to dbgs().
llvm-svn: 92627
2010-01-05 01:27:54 +00:00
David Greene
0d082a1e9d Change errs() to dbgs().
llvm-svn: 92625
2010-01-05 01:27:51 +00:00
Chris Lattner
84e9de4a58 Final step in the metadata API restructuring: move the
getMDKindID/getMDKindNames methods to LLVMContext (and add
convenience methods to Module), eliminating MetadataContext.
Move the state that it maintains out to LLVMContext.

llvm-svn: 92259
2009-12-29 09:01:33 +00:00
Chris Lattner
9ec640a902 This is a major cleanup of the instruction metadata interfaces that
I asked Devang to do back on Sep 27.  Instead of going through the
MetadataContext class with methods like getMD() and getMDs(), just
ask the instruction directly for its metadata with getMetadata()
and getAllMetadata().

This includes a variety of other fixes and improvements: previously
all Value*'s were bloated because the HasMetadata bit was thrown into
value, adding a 9th bit to a byte.  Now this is properly sunk down to
the Instruction class (the only place where it makes sense) and it
will be folded away somewhere soon.

This also fixes some confusion in getMDs and its clients about 
whether the returned list is indexed by the MDID or densely packed.
This is now returned sorted and densely packed and the comments make
this clear.

This introduces a number of fixme's which I'll follow up on.

llvm-svn: 92235
2009-12-28 23:41:32 +00:00
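
Together with the metadata-kind changes in the surrounding entries, instruction
metadata access now looks roughly like this (a sketch using today's header
paths; the headers lived directly under llvm/ at the time, and "my.note" is
just an example kind name):

  #include "llvm/ADT/SmallVector.h"
  #include "llvm/IR/Instruction.h"
  #include "llvm/IR/Metadata.h"
  #include "llvm/IR/Module.h"
  using namespace llvm;

  void inspectMetadata(Instruction &I) {
    // Kind IDs live in the LLVMContext; Module has a convenience forwarder.
    unsigned KindID = I.getModule()->getMDKindID("my.note");

    // Ask the instruction directly -- no MetadataContext involved.
    if (MDNode *N = I.getMetadata(KindID)) {
      (void)N; // ... use the node ...
    }

    // Densely packed, sorted (KindID, node) pairs, as the message notes.
    SmallVector<std::pair<unsigned, MDNode *>, 4> MDs;
    I.getAllMetadata(MDs);
  }
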
Chris Lattner
cd3aa9d1ff rename getMDKind -> getMDKindID, make it autoinsert if an MD Kind
doesn't exist already, eliminate registerMDKind.  Tidy up a bunch
of random stuff.

llvm-svn: 92225
2009-12-28 20:45:51 +00:00
Duncan Sands
897f9579d6 Teach GlobalOpt to delete aliases with internal linkage (after
forwarding any uses).  GlobalDCE can also do this, but is only
run at -O3.

llvm-svn: 90850
2009-12-08 10:10:20 +00:00
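
The transformation itself is small; here is a hedged sketch of the idea
(standalone helper, not the GlobalOpt code) using the current API:

  #include "llvm/ADT/STLExtras.h"   // make_early_inc_range
  #include "llvm/IR/GlobalAlias.h"
  #include "llvm/IR/Module.h"
  using namespace llvm;

  void removeInternalAliases(Module &M) {
    for (GlobalAlias &GA : make_early_inc_range(M.aliases())) {
      if (!GA.hasLocalLinkage())
        continue;                              // only internal/private aliases
      GA.replaceAllUsesWith(GA.getAliasee());  // forward any uses
      GA.eraseFromParent();                    // then delete the alias
    }
  }
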
Dan Gohman
58bb87921b Make ConstantFoldConstantExpression recursively visit the entire
ConstantExpr, not just the top-level operator. This allows it to
fold many more constants.

Also, make GlobalOpt call ConstantFoldConstantExpression on
GlobalVariable initializers.

llvm-svn: 89659
2009-11-23 16:22:21 +00:00
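
A hedged sketch of the second half of this change, applying the folder to every
initializer (modern API: the entry point is now called ConstantFoldConstant and
takes a DataLayout; the helper name here is made up):

  #include "llvm/Analysis/ConstantFolding.h"
  #include "llvm/IR/Module.h"
  using namespace llvm;

  void foldAllInitializers(Module &M) {
    const DataLayout &DL = M.getDataLayout();
    for (GlobalVariable &GV : M.globals())
      if (GV.hasInitializer())
        if (Constant *Folded = ConstantFoldConstant(GV.getInitializer(), DL))
          GV.setInitializer(Folded);
  }
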
Nick Lewycky
8cbd0c3156 Remove unused LLVMContext.
llvm-svn: 89642
2009-11-23 03:29:18 +00:00
Dan Gohman
44ddc50043 Extend CaptureTracking to indicate when a value is never stored, even
if it is not ultimately captured. Teach BasicAliasAnalysis that a 
local object address which does not escape and is never stored does
not alias with a value resulting from a load.

llvm-svn: 89398
2009-11-19 21:57:48 +00:00
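
The "never stored" query is exposed through a StoreCaptures-style flag on the
long-standing CaptureTracking helper. A hedged sketch (the signature has
shifted slightly across LLVM versions; this matches the classic three-argument
form):

  #include "llvm/Analysis/CaptureTracking.h"
  using namespace llvm;

  bool neverEscapesOrIsStored(const Value *LocalObj) {
    // ReturnCaptures = true: returning the pointer counts as a capture.
    // StoreCaptures  = true: storing the pointer anywhere counts too.
    return !PointerMayBeCaptured(LocalObj, /*ReturnCaptures=*/true,
                                 /*StoreCaptures=*/true);
  }
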
Devang Patel
04a40c8ff7 Remove debug info attached with an instruction.
llvm-svn: 89016
2009-11-17 00:47:06 +00:00
Chris Lattner
9a1f2aaff0 use isInstructionTriviallyDead, as pointed out by Duncan
llvm-svn: 87035
2009-11-12 21:58:18 +00:00
Chris Lattner
00a9240c9c implement a nice little efficiency hack in the inliner.  Since we're now
running IPSCCP early, and we run functionattrs interlaced with the inliner,
we often (particularly for small or noop functions) completely propagate
all of the information about a call to its call site in IPSCCP (making a call
dead) and functionattrs is smart enough to realize that the function is
readonly (because it is interlaced with the inliner).

To improve compile time and make the inliner threshold more accurate, realize
that we don't have to inline dead readonly function calls.  Instead, just 
delete the call.  This happens all the time for C++ code; here are some
counters from opt/llvm-ld counting the number of times calls were deleted vs
inlined on various apps:

Tramp3d opt:
  5033 inline                - Number of call sites deleted, not inlined
 24596 inline                - Number of functions inlined
llvm-ld:
  667 inline           - Number of functions deleted because all callers found
  699 inline           - Number of functions inlined

483.xalancbmk opt:
  8096 inline                - Number of call sites deleted, not inlined
 62528 inline                - Number of functions inlined
llvm-ld:
   217 inline           - Number of allocas merged together
  2158 inline           - Number of functions inlined

471.omnetpp:
  331 inline                - Number of call sites deleted, not inlined
 8981 inline                - Number of functions inlined
llvm-ld:
  171 inline           - Number of functions deleted because all callers found
  629 inline           - Number of functions inlined


Deleting a call is much faster than inlining it, and is insensitive to the
size of the callee. :)

llvm-svn: 86975
2009-11-12 07:56:08 +00:00
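
The shortcut reduces to a few lines. A hedged sketch (hypothetical helper,
modern headers) of the decision the inliner makes per call site: if IPSCCP and
functionattrs have already made the call dead, erase it rather than inline it.

  #include "llvm/IR/Instructions.h"
  #include "llvm/Transforms/Utils/Local.h"  // isInstructionTriviallyDead
  using namespace llvm;

  // Returns true if the call was deleted, in which case the caller skips
  // inlining this site entirely.
  bool deleteDeadCallInsteadOfInlining(CallInst *CI) {
    // Unused result + readonly/nounwind callee => trivially dead.
    if (isInstructionTriviallyDead(CI)) {
      CI->eraseFromParent();
      return true;
    }
    return false;
  }
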
Victor Hernandez
3c98070f2c Update computeArraySize() to use ComputeMultiple() to determine the array size associated with a malloc.  Also extend PerformHeapAllocSRoA() to check whether the optimized malloc's argument had its highest bit set, so that it is safe for ComputeMultiple() to look through sext instructions while determining the optimized malloc's array size.
llvm-svn: 86676
2009-11-10 08:32:25 +00:00
Victor Hernandez
4613fc8629 - new SROA mallocs should have the mallocs running-or'ed, not the malloc's bitcast
- fix ProcessInternalGlobal() debug output

llvm-svn: 86317
2009-11-07 00:41:19 +00:00
Victor Hernandez
8736a8fca4 Re-commit r86077 now that r86290 fixes the 179.art and 175.vpr ARM regressions.
Here is the original commit message:

This commit updates malloc optimizations to operate on malloc calls that have constant int size arguments.

Update CreateMalloc so that its callers specify the size to allocate:
MallocInst-autoupgrade users use non-TargetData-computed allocation sizes.
Optimization users use TargetData to compute the allocation size.

Now that malloc calls can have constant sizes, update isArrayMallocHelper() to use TargetData to determine the size of the malloced type and the size of malloced arrays.
Extend getMallocType() to support malloc calls that have non-bitcast uses.

Update OptimizeGlobalAddressOfMalloc() to optimize malloc calls that have non-bitcast uses.  The bitcast use of a malloc call has to be treated specially here because the uses of the bitcast need to be replaced and the bitcast needs to be erased (just like the malloc call) for OptimizeGlobalAddressOfMalloc() to work correctly.

Update PerformHeapAllocSRoA() to optimize malloc calls that have non-bitcast uses.  The bitcast use of the malloc is not handled specially here because ReplaceUsesOfMallocWithGlobal replaces through the bitcast use.

Update OptimizeOnceStoredGlobal() to not care about the malloc calls' bitcast use.

Update all globalopt malloc tests to not rely on autoupgraded-MallocInsts, but instead use explicit malloc calls with correct allocation sizes.

llvm-svn: 86311
2009-11-07 00:16:28 +00:00