mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-22 20:43:44 +02:00
Commit Graph

4133 Commits

Author SHA1 Message Date
Chandler Carruth
889ecbc0f8 Extend the inline cost calculation to account for bonuses due to
correlated pairs of pointer arguments at the callsite. This is designed
to recognize the common C++ idiom of begin/end pointer pairs when the
end pointer is a constant offset from the begin pointer. With the
C-based idiom of a pointer and size, the inline cost saw the constant
size calculation, and this provides the same level of information for
begin/end pairs.

In order to propagate this information we have to search for candidate
operations on a pair of pointer function arguments (or derived from
them) which would be simplified if the pointers had a known constant
offset. Then the callsite analysis looks for such pointer pairs in the
argument list, and applies the appropriate bonus.

This helps LLVM detect that half of bounds-checked STL algorithms
(such as hash_combine_range, and some hybrid sort implementations)
disappear when inlined with a constant size input. However, it's not
a complete fix due to the inaccuracy of our cost metric for constants in
general. I'm looking into that next.

Benchmarks showed no significant code size change, and very minor
performance changes. However, specific code such as hashing is showing
significantly cleaner inlining decisions.

llvm-svn: 152752
2012-03-14 23:19:53 +00:00
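
The idiom this commit targets looks roughly like the sketch below; the function and names are illustrative, not taken from the commit.

// Illustrative only: a begin/end style loop where the end pointer is a
// constant offset from the begin pointer. After inlining a call such as
// sumRange(Buf, Buf + 4), the bounds and trip count become constants,
// which is exactly the information the inline cost analysis now credits.
static int sumRange(const int *Begin, const int *End) {
  int Sum = 0;
  for (const int *I = Begin; I != End; ++I)
    Sum += *I;
  return Sum;
}

int sumFour(const int *Buf) {
  return sumRange(Buf, Buf + 4); // end = begin + constant offset
}
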
Stepan Dyatkovskiy
72fdcabd4d llvm::SwitchInst
Renamed the methods caseBegin, caseEnd, and caseDefault to case_begin, case_end, and case_default.
Added some notes about the case iterators.

llvm-svn: 152532
2012-03-11 06:09:17 +00:00
Chandler Carruth
a4cd27625a Refactor some methods to look through bitcasts and GEPs on pointers into
a common collection of methods on Value, and share their implementation.
We had two variations in two different places already, and I need the
third variation for inline cost estimation.

Reviewed by Duncan Sands on IRC, but further comments here welcome.

llvm-svn: 152490
2012-03-10 08:39:09 +00:00
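
The shared helpers are not named in the message; as a rough illustration, Value::stripPointerCasts() is one method in this family that looks through bitcasts and no-op GEPs (modern include path shown; everything beyond the message is an assumption).

#include "llvm/IR/Value.h"
using namespace llvm;

// Hypothetical use: compare two pointer arguments after peeling off
// bitcasts and all-zero GEPs, the kind of walk this refactoring shares.
static bool sameUnderlyingPointer(const Value *A, const Value *B) {
  return A->stripPointerCasts() == B->stripPointerCasts();
}
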
Stepan Dyatkovskiy
79f3dd93b7 Taken into account Duncan's comments for r149481 dated by 2nd Feb 2012:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20120130/136146.html

Implemented CaseIterator; it solves almost all of the described issues: we don't need to mix operand/case/successor indexing anymore. The base iterator class is implemented as a template since it may be initialized either from "const SwitchInst*" or from "SwitchInst*".

ConstCaseIt is just a read-only iterator.
CaseIt is a read-write iterator; it allows changing the case successor and the case value.

Using the iterators allows the resolveXXXX methods to be removed entirely. All index conversions are done automatically inside the iterator's getters.

The main way to use the iterators looks like this:
SwitchInst *SI = ... // initialize it somehow

for (SwitchInst::CaseIt i = SI->caseBegin(), e = SI->caseEnd(); i != e; ++i) {
  BasicBlock *BB = i.getCaseSuccessor();
  ConstantInt *V = i.getCaseValue();
  // Do something.
}

If you want to convert a case number to a TerminatorInst successor index, use the iterator's getSuccessorIndex method.
If you want to initialize an iterator from a TerminatorInst successor index, use the CaseIt::fromSuccessorIndex(...) method.

There are also related changes in LLVM clients: klee and clang.

llvm-svn: 152297
2012-03-08 07:06:20 +00:00
Chandler Carruth
53d60ebfa2 Switch this code to use hash_combine_range rather than incremental calls
to hash_combine. One of the interfaces could already do this, and the
other can just use a small buffer. This is a much more efficient way to
use the hash_combine interface, although I don't have any particular
benchmark where this code was hot, so I can't measure much of an impact.
It at least doesn't slow anything down.

llvm-svn: 152200
2012-03-07 03:22:32 +00:00
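
A minimal sketch of the pattern this commit describes, using llvm/ADT/Hashing.h; the buffer and values are illustrative.

#include "llvm/ADT/Hashing.h"
#include <iterator>

// Rather than chaining hash_combine call after call per element, stage
// the values in a small buffer and hash the whole range at once.
llvm::hash_code hashTriple(unsigned A, unsigned B, unsigned C) {
  unsigned Buffer[] = {A, B, C};
  return llvm::hash_combine_range(std::begin(Buffer), std::end(Buffer));
}
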
Chandler Carruth
42ee38de30 Cache the sized-ness of struct types, once we reach the steady state of
"is sized". This prevents every query to isSized() from recursing over
every sub-type of a struct type. This could get *very* slow for
extremely deep nesting of structs, as in 177.mesa.

This change is a 45% speedup for 'opt -O2' of 177.mesa.linked.bc, and
likely a significant speedup for other cases as well. It even impacts
-O0 cases because so many parts of the code try to check whether a type
is sized.

Thanks for the review from Nick Lewycky and Benjamin Kramer on IRC.

llvm-svn: 152197
2012-03-07 02:33:09 +00:00
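
Not LLVM's code, but a minimal sketch of the caching idea: once a recursive "is sized" query succeeds, remember the answer so later queries don't walk the element types again.

#include <vector>

// Illustrative stand-in for a struct type: sized once all elements are sized.
struct TypeNode {
  std::vector<const TypeNode *> Elements;
  mutable bool KnownSized = false; // cached steady-state answer

  bool isSized() const {
    if (KnownSized)
      return true; // steady state reached: no recursion on later queries
    for (const TypeNode *E : Elements)
      if (!E->isSized())
        return false; // not cached: the answer may still change
    KnownSized = true;
    return true;
  }
};
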
Jay Foad
f7931a878d Change ConstantAggrUniqueMap to use Chandler's new hashing
implementation. Patch by Meador Inge

llvm-svn: 152116
2012-03-06 10:43:52 +00:00
Chandler Carruth
a93fbd8fff Replace the hashing functions on APInt and APFloat with overloads of the
new hash_value infrastructure, and replace their implementations using
hash_combine. This removes a complete copy of Jenkins' lookup3 hash
function (which is both significantly slower and lower quality than the
one implemented in hash_combine) along with a somewhat scary xor-only
hash function.

Now that APInt and APFloat can be passed directly to hash_combine,
simplify the rest of the LLVMContextImpl hashing to use the new
infrastructure.

llvm-svn: 152004
2012-03-04 12:02:57 +00:00
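
A hedged sketch of the same pattern for a hypothetical type: expose an ADL-visible hash_value overload so the type can be handed straight to hash_combine, as APInt and APFloat now are. WideInt is made up for illustration.

#include "llvm/ADT/Hashing.h"
#include <cstdint>
#include <iterator>

namespace example {
// Hypothetical multi-word value, standing in for APInt-like storage.
struct WideInt {
  uint64_t Words[2];
};

// Found by argument-dependent lookup, so llvm::hash_combine(WideInt{...})
// works directly on values of this type.
inline llvm::hash_code hash_value(const WideInt &V) {
  return llvm::hash_combine_range(std::begin(V.Words), std::end(V.Words));
}
} // namespace example
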
Chandler Carruth
cc9b4516cb Rewrite LLVM's generalized support library for hashing to follow the API
of the proposed standard hashing interfaces (N3333), and to use
a modified and tuned version of the CityHash algorithm.

Some of the highlights of this change:
 -- Significantly higher quality hashing algorithm with very well
    distributed results, and extremely few collisions. Should be close to
    a checksum for up to 64-bit keys. Very little clustering or clumping of
    hash codes, to better distribute load on probed hash tables.
 -- Built-in support for reserved values.
 -- Simplified API that composes cleanly with other C++ idioms and APIs.
 -- Better scaling performance as keys grow. This is the fastest
    algorithm I've found and measured for moderately sized keys (such as
    those that show up in some of the uniquing and folding use cases).
 -- Support for enabling per-execution seeds to prevent table ordering
    or other artifacts of the hashing algorithm from impacting the output of
    LLVM. The seeding would make each run different and highlight these
    problems during bootstrap.

This implementation was tested extensively using the SMHasher test
suite, and passed with flying colors, doing better than the original
CityHash algorithm even.

I've included a unittest, although it is somewhat minimal at the moment.
I've also added (or refactored into the proper location) type traits
necessary to implement this, and converted users of GeneralHash over.

My only immediate concern with this implementation is the performance
of hashing small keys. I've already started working to improve this, and
will continue to do so. Currently, the only faster algorithms produce
lower-quality results, but it is likely there is a better compromise
than the current one.

Many thanks to Jeffrey Yasskin who did most of the work on the N3333
paper, pair-programmed some of this code, and reviewed much of it. Many
thanks also go to Geoff Pike and Jyrki Alakuijala, the original
authors of CityHash on which this is heavily based, and Austin Appleby
who created MurmurHash and the SMHasher test suite.

Also thanks to Nadav, Tobias, Howard, Jay, Nick, Ahmed, and Duncan for
all of the review comments! If there are further comments or concerns,
please let me know and I'll jump on 'em.

llvm-svn: 151822
2012-03-01 18:55:25 +00:00
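
For composition with other C++ types, a minimal usage sketch of hash_combine from llvm/ADT/Hashing.h; the key fields here are made up.

#include "llvm/ADT/Hashing.h"
#include <string>

// hash_combine folds values of mixed, independently hashable types into a
// single hash_code; integers, pointers, and std::string are supported out
// of the box.
llvm::hash_code hashEntry(unsigned Kind, const std::string &Name,
                          const void *Scope) {
  return llvm::hash_combine(Kind, Name, Scope);
}
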
Benjamin Kramer
c57a6d4a2a Emit the "is an intrinsic overloaded" table as a bitfield.
llvm-svn: 151792
2012-03-01 02:16:57 +00:00
Rafael Espindola
c97c3343f1 Use the DT dominates function in the verifier.
llvm-svn: 151470
2012-02-26 02:23:37 +00:00
Rafael Espindola
34b7c064cb Change the implementation of dominates(inst, inst) to one based on what the
verifier does. This correctly handles invoke.
Thanks to Duncan, Andrew and Chris for the comments.
Thanks to Joerg for the early testing.

llvm-svn: 151469
2012-02-26 02:19:19 +00:00
Rafael Espindola
f5f6146ba4 Don't call dominates on unreachable instructions.
llvm-svn: 151468
2012-02-26 02:14:25 +00:00
Nick Lewycky
57a70ec2c4 Remove spurious emacs mode marker.
llvm-svn: 151440
2012-02-25 07:20:06 +00:00
Jay Foad
4c186f919d Reinstate r151049 now that GeneralHash is fixed.
llvm-svn: 151248
2012-02-23 09:17:40 +00:00
Chad Rosier
3703a1917a Remove extra semi-colons.
llvm-svn: 151169
2012-02-22 17:25:00 +00:00
Jay Foad
aea6e7df6b Revert r151049 cos it broke the buildbots.
llvm-svn: 151052
2012-02-21 11:44:46 +00:00
Jay Foad
9034028575 PR1210: make uniquing of struct and function types more efficient by
using a DenseMap and Talin's new GeneralHash, avoiding the need for a
temporary std::vector on every lookup.

Patch by Meador Inge!

llvm-svn: 151049
2012-02-21 09:25:52 +00:00
Ahmed Charles
745c53c2a7 Remove dead code. Improve llvm_unreachable text. Simplify some control flow.
llvm-svn: 150918
2012-02-19 11:37:01 +00:00
Rafael Espindola
6be26e6aa6 White space fixes.
llvm-svn: 150886
2012-02-18 19:46:02 +00:00
Bill Wendling
3cc07ad486 s/ModAttrBehavior/ModFlagBehavior/g to be consistent with how module flags are named elsewhere.
llvm-svn: 150679
2012-02-16 10:28:10 +00:00
NAKAMURA Takumi
2be06d544c VMCore/AsmWriter.cpp: Tweak to check #INF and #NAN earlier.
With MSVCRT, the prior check missed emitting #INF and #NAN.

FIXME: Checking should be simpler.
llvm-svn: 150667
2012-02-16 08:12:24 +00:00
NAKAMURA Takumi
39331a549a VMCore/AsmWriter.cpp: Use APFloat instead of atof(3).
atof(3) might behave differently among platforms.

llvm-svn: 150661
2012-02-16 04:19:15 +00:00
Bill Wendling
2f3fb87abe Use the enum instead of 'unsigned'.
llvm-svn: 150632
2012-02-15 23:27:50 +00:00
Bill Wendling
e5d57bd4d0 Add a module flags accessor method which returns the flags in a vector.
llvm-svn: 150623
2012-02-15 22:34:20 +00:00
Eric Christopher
6be42e9898 Add a way to replace a field inside a metadata node. This can be
used to incrementally update a created node without needing a
temporary node and RAUW.

llvm-svn: 150571
2012-02-15 09:09:29 +00:00
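
The accessor is not named in the message; the sketch below assumes it is along the lines of MDNode::replaceOperandWith and uses the modern signature (the interface at the time took Value* operands), with an illustrative operand index.

#include "llvm/IR/Metadata.h"
using namespace llvm;

// Patch one operand of an already-created node in place instead of
// building a temporary node and RAUW'ing it.
static void patchOperand(MDNode *N, unsigned Idx, Metadata *New) {
  N->replaceOperandWith(Idx, New);
}
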
Andrew Trick
3a4ed52447 Added TargetPassConfig::disablePass/substitutePass as a general mechanism to override specific passes.
llvm-svn: 150562
2012-02-15 03:21:47 +00:00
Bill Wendling
0dfc3d1e3e [WIP] Initial code for module flags.
Module flags are key-value pairs associated with the module. They include a
'behavior' value, indicating how module flags react when merging two
files. Normally, it's just the union of the two module flags. But if two module
flags have the same key, then the resulting flags are dictated by the behaviors.

Allowable behaviors are:

     Error
       Emits an error if two values disagree.

     Warning
       Emits a warning if two values disagree.

     Require
       Emits an error when the specified key is not present
       or doesn't have the specified value. It is an error for
       two (or more) llvm.module.flags with the same ID to have
       the Require behavior but different values. There may be
       multiple Require flags per ID.

     Override
       Uses the specified value if the two values disagree. It
       is an error for two (or more) llvm.module.flags with the
       same ID to have the Override behavior but different
       values.

llvm-svn: 150300
2012-02-11 11:38:06 +00:00
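
A small sketch of attaching module flags with the behaviors listed above, using the later Module::addModuleFlag convenience API (this helper postdates the WIP commit, and the keys are illustrative).

#include "llvm/IR/Module.h"
using namespace llvm;

// Each flag is a (behavior, key, value) triple stored in the
// !llvm.module.flags named metadata; the behavior decides what happens
// when two modules being merged carry the same key.
static void addExampleFlags(Module &M) {
  M.addModuleFlag(Module::Error, "example-abi-version", 1);     // disagreement is an error
  M.addModuleFlag(Module::Warning, "example-feature-level", 2); // disagreement only warns
}
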
Andrew Trick
135effe79b Added Pass::createPass(ID) to handle pass configuration by ID
llvm-svn: 150092
2012-02-08 21:22:34 +00:00
Bill Wendling
085bdb73fa Cache the sizes of vectors instead of calculating them all over the place.
llvm-svn: 149954
2012-02-07 01:48:12 +00:00
Bill Wendling
4183d7faf3 Reserve space in these vectors to prevent having to grow the array too
much. This gets us an additional 0.9% on 445.gobmk.

llvm-svn: 149952
2012-02-07 01:27:51 +00:00
Chris Lattner
7a6bd0185e Remove some dead code and tidy things up now that vectors use ConstantDataVector
instead of always using ConstantVector.

llvm-svn: 149912
2012-02-06 21:56:39 +00:00
Bill Wendling
8c63e349bc [unwind removal] Remove all of the code for the dead 'unwind' instruction. There
were no 'unwind' instructions being generated before this, so this is in effect
a no-op.

llvm-svn: 149906
2012-02-06 21:44:22 +00:00
Craig Topper
e18a06be4d Convert assert(0) to llvm_unreachable
llvm-svn: 149849
2012-02-05 22:14:15 +00:00
Talin
176069f81d Efficient Constant Uniquing.
llvm-svn: 149848
2012-02-05 20:54:10 +00:00
Chris Lattner
9782adedd7 reapply the patches reverted in r149470 that reenable ConstantDataArray,
but with a critical fix to the SelectionDAG code that optimizes copies
from strings into immediate stores: the previous code stopped reading
string data at the first nul.  Address this by adding a new argument to
llvm::getConstantStringInfo, preserving the behavior before the patch.

llvm-svn: 149800
2012-02-05 02:29:43 +00:00
Devang Patel
9146918282 Update llvm debug version to support new structure and tag for Objective-C property's debug info.
llvm-svn: 149736
2012-02-04 01:30:01 +00:00
Duncan Sands
9a1b76b649 Simplify some GEP checks in the verifier.
llvm-svn: 149698
2012-02-03 17:28:51 +00:00
Craig Topper
94fcde74f6 Add auto upgrade support for x86 pcmpgt/pcmpeq intrinsics removed in r149367.
llvm-svn: 149678
2012-02-03 06:10:55 +00:00
Andrew Trick
0838f6788a whitespace
llvm-svn: 149671
2012-02-03 05:12:30 +00:00
Stepan Dyatkovskiy
856ca370cc SwitchInst refactoring.
The purpose of this refactoring is to hide operand roles from the SwitchInst user (programmer). If you want to work with operands directly, you will probably need lower-level methods than the SwitchInst ones (TerminatorInst or maybe User). After this patch we can reorganize SwitchInst operands and successors as we want.

What was done:

1. Changed the semantics of the index in the getCaseValue method:
getCaseValue(0) means "get the first case", not the condition. Use getCondition() if you want to resolve the condition. I propose not mixing SwitchInst case indexing with low-level indexing (TerminatorInst successor indexing, User operand indexing), since it may be dangerous.
2. For the same reason, findCaseValue(ConstantInt*) returns the actual case number: 0 means the first case, not the default. If there is no case with the given value, ErrorIndex is returned.
3. Added the getCaseSuccessor method. I propose avoiding TerminatorInst::getSuccessor when you want to resolve a case's successor BB. Use getCaseSuccessor instead, since the internal SwitchInst organization of operands/successors is hidden and may change at any moment.
4. Added resolveSuccessorIndex and resolveCaseIndex. The main purpose of these methods is to see how case successors are really mapped in the TerminatorInst.
4.1 "resolveSuccessorIndex" was created for when you need to drop down from SwitchInst to TerminatorInst. It returns the TerminatorInst successor index for a given case successor.
4.2 "resolveCaseIndex" converts a low-level successor index to the case index that corresponds to the given successor.

Note: There are also related compatibility fix patches for dragonegg, klee, llvm-gcc-4.0, llvm-gcc-4.2, safecode, and clang.
llvm-svn: 149481
2012-02-01 07:49:51 +00:00
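
A short sketch against the index-based interface this patch describes (later superseded by the CaseIt iterators in r152297 above); the accessors shown reflect the interim API as described in the message, with case index 0 meaning the first real case, not the default.

#include "llvm/Instructions.h"
using namespace llvm;

// Walk the cases of a switch through the case-oriented accessors rather
// than raw TerminatorInst successor/operand indices. Sketch only.
static void visitCases(SwitchInst *SI) {
  for (unsigned i = 0, e = SI->getNumCases(); i != e; ++i) {
    ConstantInt *Val = SI->getCaseValue(i);
    BasicBlock *Succ = SI->getCaseSuccessor(i);
    (void)Val;
    (void)Succ;
  }
}
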
Andrew Trick
3c9be3c4cc Add pass printer passes in the right place.
The pass pointer should never be referenced after sending it to
schedulePass(), which may delete the pass. To fix this bug I had to
clean up the design leading to more goodness.

You may notice now that any non-analysis pass is printed. So things like loop-simplify and lcssa show up, while target lib, target data, and alias analysis do not. Normally, analyses don't mutate the IR, but you can now check this by using both -print-after and -print-before. The effects of an analysis will now show up in between the two.

The llc path is still in bad shape, but I'll be improving it in my next checkin. Meanwhile, print-machineinstrs still works the same way. With print-before/after, many llc passes that were not printed before now are; some of these should be converted to analyses. A few very important passes, isel and the scheduler, are not properly initialized, so they are not printed.

llvm-svn: 149480
2012-02-01 07:16:20 +00:00
Argyrios Kyrtzidis
492f34016f Revert Chris' commits up to r149348 that started causing VMCoreTests unit test to fail.
These are:

r149348
r149351
r149352
r149354
r149356
r149357
r149361
r149362
r149364
r149365

llvm-svn: 149470
2012-02-01 04:51:17 +00:00
Chris Lattner
a9b3505e9a eliminate the "string" form of ConstantArray::get, using
ConstantDataArray::getString instead.

llvm-svn: 149365
2012-01-31 06:18:43 +00:00
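
For reference, a minimal sketch of the replacement call (modern include paths; the string content is illustrative).

#include "llvm/IR/Constants.h"
#include "llvm/IR/LLVMContext.h"
using namespace llvm;

// Build a constant "string" directly as a ConstantDataArray instead of
// going through the removed string form of ConstantArray::get.
static Constant *makeGreeting(LLVMContext &Ctx) {
  return ConstantDataArray::getString(Ctx, "hello", /*AddNull=*/true);
}
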
Chris Lattner
06407b2f81 with recent changes, ConstantArray is never a "string". Remove the associated
methods and constant fold the clients to false.

llvm-svn: 149362
2012-01-31 06:05:00 +00:00
Chris Lattner
63a482554f fix a small oversight that broke the fhourstones app.
llvm-svn: 149357
2012-01-31 05:18:56 +00:00
Chris Lattner
b463bd21fc Change ConstantArray::get to form a ConstantDataArray when possible,
kicking in the big win of ConstantDataArray.  As part of this, change
the implementation of GetConstantStringInfo in ValueTracking to work
with ConstantDataArray (and not ConstantArray) making it dramatically,
amazingly, more efficient in the process and renaming it to 
getConstantStringInfo.

This keeps around a GetConstantStringInfo entrypoint that (grossly)
forwards to getConstantStringInfo and constructs the std::string 
required, but existing clients should move over to 
getConstantStringInfo instead.

llvm-svn: 149351
2012-01-31 04:42:22 +00:00
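
A minimal usage sketch of the renamed entry point, with the signature as in the current ValueTracking header; the exact parameters around the extra argument mentioned above may have differed at the time.

#include "llvm/ADT/StringRef.h"
#include "llvm/Analysis/ValueTracking.h"
using namespace llvm;

// Extract the bytes of a constant string as a StringRef, avoiding the
// std::string copy the old GetConstantStringInfo entry point built.
static bool isHelloString(const Value *V) {
  StringRef Str;
  if (!getConstantStringInfo(V, Str))
    return false;
  return Str == "hello";
}
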
Chris Lattner
1d52d79d31 fix asmwriting of ConstantDataArray to use the right element count,
simplify ConstantArray handling, since such arrays can never be empty.

llvm-svn: 149341
2012-01-31 03:15:40 +00:00
Bill Wendling
011d7b0f34 Add a constified getLandingPad() method.
llvm-svn: 149303
2012-01-31 00:26:24 +00:00
Chris Lattner
b890ac010a Various improvements suggested by Duncan
llvm-svn: 149255
2012-01-30 18:19:30 +00:00