mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-11-23 19:23:23 +01:00
Commit Graph

4175 Commits

Author SHA1 Message Date
Bill Wendling
aad3af5f7c Use ArrayRef instead of an explicit vector type.
llvm-svn: 156755
2012-05-14 07:53:40 +00:00
Stepan Dyatkovskiy
fa0cf8dc2e Recommitted r156374 with critical fixes in BitcodeReader/Writer:
Ordinary patch for PR1255.
Added new case-range oriented methods for adding and removing cases in SwitchInst. After this patch, cases are internally represented as ConstantArrays instead of ConstantInts; externally, cases are wrapped in the ConstantRangesSet object.
The old SwitchInst methods still work but are marked as deprecated. So at this stage there are no side effects, except that I added support for case ranges in BitcodeReader/Writer; a Bitcode test is added as well. The old "switch" format is still supported.

llvm-svn: 156704
2012-05-12 10:48:17 +00:00
Jay Foad
65d25fa204 Teach Function::hasAddressTaken that BlockAddress doesn't really take
the address of a function.

llvm-svn: 156703
2012-05-12 08:30:16 +00:00
Joel Jones
305ecb3495 Fix a problem with incomplete equality testing of PHINodes in
Instruction::IsIdenticalToWhenDefined.

This manifested itself when inlining two calls to the same function.  The 
inlined function had a switch statement that returned one of a set of 
global variables.  Without this modification, the two phi instructions that 
chose values from the branches of the switch instruction inlined from the 
callee were considered equivalent and jump-threading replaced a load for the 
first switch value with a phi selecting from the second switch, thereby 
producing incorrect code.

This patch has been tested with "make check-all", "lnt runtest nt", and 
llvm self-hosted, and on the original program that had this problem, 
wireshark.

<rdar://problem/11025519>

llvm-svn: 156548
2012-05-10 15:59:41 +00:00
Hans Wennborg
879332e389 Introduce llvm-c function LLVMPrintModuleToFile.
This lets you save the textual representation of the LLVM IR to a file.
Before this patch it could only be printed to STDERR from llvm-c.

Patch by Carlo Kok!

llvm-svn: 156479
2012-05-09 16:54:17 +00:00
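
For illustration, a minimal sketch of calling the new entry point from client code; the module construction and file name here are hypothetical, and error handling follows the usual llvm-c convention of returning a message the caller must dispose of:

#include <llvm-c/Core.h>
#include <stdio.h>

int main(void) {
  // Hypothetical module; in real code this would come from a builder or parser.
  LLVMModuleRef M = LLVMModuleCreateWithName("demo");
  char *Err = NULL;
  // Returns a non-zero value on failure and fills in an error message.
  if (LLVMPrintModuleToFile(M, "out.ll", &Err)) {
    fprintf(stderr, "error: %s\n", Err);
    LLVMDisposeMessage(Err);
  }
  LLVMDisposeModule(M);
  return 0;
}
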
Nuno Lopes
e8880a9916 Change the objectsize intrinsic signature: add a third parameter to denote the maximum runtime performance penalty that the user is willing to accept.
This commit only adds the parameter. Code taking advantage of it will follow.

llvm-svn: 156473
2012-05-09 15:52:43 +00:00
Stepan Dyatkovskiy
b150cd5ced Reverted r156374 (ordinary PR1255 patch) due to a clang-x86_64-debian-fnt buildbot failure.
llvm-svn: 156377
2012-05-08 08:33:21 +00:00
Craig Topper
77b1a4cee5 Remove 256-bit AVX non-temporal store intrinsics. Similar was previously done for 128-bit.
llvm-svn: 156375
2012-05-08 06:58:15 +00:00
Stepan Dyatkovskiy
33fd2a5bf4 Ordinary patch for PR1255.
Added new case-range oriented methods for adding and removing cases in SwitchInst. After this patch, cases are internally represented as ConstantArrays instead of ConstantInts; externally, cases are wrapped in the ConstantRangesSet object.
The old SwitchInst methods still work but are marked as deprecated. So at this stage there are no side effects, except that I added support for case ranges in BitcodeReader/Writer; a Bitcode test is added as well. The old "switch" format is still supported.

llvm-svn: 156374
2012-05-08 06:36:08 +00:00
Dan Gohman
25a863dcf7 Reapply r155682, making constant folding more consistent, with a fix to work
properly with how the code handles all-undef PHI nodes.

llvm-svn: 155721
2012-04-27 17:50:22 +00:00
NAKAMURA Takumi
a28147f072 Revert r155682, "Use ConstantExpr::getExtractElement when constant-folding vectors"
It broke the stage2 build; stage1/clang sometimes crashed.

llvm-svn: 155699
2012-04-27 07:59:20 +00:00
Dan Gohman
a72b2f97a6 Use ConstantExpr::getExtractElement when constant-folding vectors
instead of getAggregateElement. This has the advantage of being
more consistent and allowing higher-level constant folding to
proceed even if an inner extract element cannot be folded.

Make ConstantFoldInstruction call ConstantFoldConstantExpression
on the instruction's operands, making it more consistent with 
ConstantFoldConstantExpression itself. This makes sure that
ConstantExprs get TargetData-aware folding before being handed
off as operands for further folding.

This causes more expressions to be folded, but due to a known
shortcoming in constant folding, this currently has the side effect
of stripping a few more nuw and inbounds flags in the non-targetdata
side of constant-fold-gep.ll. This is mostly harmless.

This fixes rdar://11324230.

llvm-svn: 155682
2012-04-27 00:54:36 +00:00
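
As a rough sketch of the distinction (written against the IR APIs of that era; details may differ in current LLVM), extracting element I as a ConstantExpr keeps the result a Constant that later folding can still simplify, whereas getAggregateElement gives up on non-literal aggregates:

#include "llvm/IR/Constants.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Type.h"
using namespace llvm;

// Hypothetical helper: fold element I out of a constant vector Vec.
Constant *foldVectorElement(Constant *Vec, unsigned I) {
  LLVMContext &Ctx = Vec->getContext();
  Constant *Idx = ConstantInt::get(Type::getInt32Ty(Ctx), I);
  // Always yields a Constant, possibly a ConstantExpr for further folding.
  return ConstantExpr::getExtractElement(Vec, Idx);
}
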
Bill Wendling
d9d9230b83 Don't forget to reset the 'first operand' flag when setting the MDNodeOperand value.
llvm-svn: 155599
2012-04-26 00:38:42 +00:00
Nadav Rotem
3c817bb807 ConstantFoldSelectInstruction swapped the operands of the select.
Fix PR12592. Patch by Matt Pharr.

llvm-svn: 155480
2012-04-24 20:18:49 +00:00
Bill Wendling
0f9f58c75a Cleanup whitespace.
llvm-svn: 155328
2012-04-23 00:23:33 +00:00
Bill Wendling
2b77fec649 Limit the number of times we recurse through this algorithm. All of the
instructions are processed, so there's no need to look at them again when they
appear as operands of other instructions.

llvm-svn: 155327
2012-04-23 00:22:55 +00:00
Bill Wendling
8d86028029 Add a flag to the struct type finder to collect only those types which have
names. This saves collecting types we normally don't care about.

llvm-svn: 155300
2012-04-21 23:59:16 +00:00
Bill Wendling
be493e63ea Revert r155241, which is causing some breakage.
llvm-svn: 155253
2012-04-20 23:11:38 +00:00
Bill Wendling
bb9c301c28 If we discover all of the named structs in a module, then don't bother to
process any more Values.

llvm-svn: 155241
2012-04-20 21:56:24 +00:00
Craig Topper
7c784d86eb Remove AVX vpermil intrinsics. I removed their uses from clang headers and builtins a while back.
llvm-svn: 154985
2012-04-18 05:24:00 +00:00
Eric Christopher
2ec1742f9b Typo.
llvm-svn: 154879
2012-04-16 23:54:31 +00:00
Duncan Sands
518668bd76 Remove support for the special 'fast' value for fpmath accuracy for the moment.
llvm-svn: 154850
2012-04-16 19:39:33 +00:00
Duncan Sands
f61d49df40 Make it possible to indicate relaxed floating-point requirements at the IR level
through the use of 'fpmath' metadata.  Currently this only provides an 'fpaccuracy'
value, which may be a number in ULPs or the keyword 'fast'; however, the intent is
that this will later be extended with additional information about NaNs, infinities,
etc.  No optimizations have been hooked up to this so far.

llvm-svn: 154822
2012-04-16 16:28:59 +00:00
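
A hedged sketch of attaching this metadata from C++ (current header paths and the MDBuilder helper assumed; both may postdate this exact revision):

#include "llvm/IR/Instruction.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/MDBuilder.h"
using namespace llvm;

// Hypothetical helper: mark a floating-point operation as tolerating up to
// 2.5 ULPs of error via !fpmath metadata.
void relaxAccuracy(Instruction *FPOp) {
  MDBuilder MDB(FPOp->getContext());
  FPOp->setMetadata(LLVMContext::MD_fpmath, MDB.createFPMath(2.5f));
}
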
Duncan Sands
40d080e3b7 Rename "fpaccuracy" metadata to the more generic "fpmath". That's because I'm
thinking of generalizing it to be able to specify other freedoms beyond accuracy
(such as that NaNs don't have to be respected).  I'd like the 3.1 release (the
first one with this metadata) to have the more generic name already rather than
having to auto-upgrade it in 3.2.

llvm-svn: 154744
2012-04-14 12:36:06 +00:00
Dan Gohman
cde3a46455 Def here is an Instruction, so !isa<Instruction>(Def) is always false,
as Eli noticed.

llvm-svn: 154641
2012-04-13 00:50:57 +00:00
Dan Gohman
c0a906405e Add forms of dominates and isReachableFromEntry that accept a Use
directly instead of a user Instruction. This allows them to test
whether a def dominates a particular operand if the user instruction
is a PHI.

llvm-svn: 154631
2012-04-12 23:31:46 +00:00
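
A small sketch of the new overload (current header paths assumed); for a PHI user the interesting question is whether the def dominates the incoming edge of that particular operand, which the Use-based form answers directly:

#include "llvm/IR/Dominators.h"
#include "llvm/IR/Instruction.h"
using namespace llvm;

// Hypothetical helper: does Def dominate this specific use, even if the
// user is a PHI node?
bool defDominatesUse(const DominatorTree &DT, const Instruction *Def,
                     const Use &U) {
  return DT.dominates(Def, U);
}
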
Benjamin Kramer
eba5ed591b Cache the hash value of the operands in the MDNode.
FoldingSet is implemented as a chained hash table. When there is a hash
collision during insertion, which is common as we fill the table until a
load factor of 2.0 is hit, we walk the chained elements, comparing every
operand with the new element's operands. This can be very expensive if the
MDNode has many operands.

We sacrifice a word of space in MDNode to cache the full hash value, reducing
compares on collision to a minimum. MDNode grows from 28 to 32 bytes + operands
on x86. On x86_64 the new bits fit nicely into existing padding, not growing
the struct at all.

The actual speedup depends a lot on the test case and is typically between
1% and 2% for C++ code with clang -c -O0 -g.

llvm-svn: 154497
2012-04-11 14:06:54 +00:00
Benjamin Kramer
3a0f5a0df3 Compute hashes directly with hash_combine instead of taking a detour through FoldingSetNodeID.
llvm-svn: 154495
2012-04-11 14:06:39 +00:00
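
For reference, a minimal sketch of the direct hash_combine style this switches to (the field names are made up):

#include "llvm/ADT/Hashing.h"
#include <cstdint>
using namespace llvm;

// Combine several key fields into a single hash_code without building a
// FoldingSetNodeID first.
hash_code hashKey(unsigned Kind, const void *Ptr, uint64_t Extra) {
  return hash_combine(Kind, Ptr, Extra);
}
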
Bill Wendling
16712e549c The MDString class stored a StringRef to the string which was already in a
StringMap. This was redundant and unnecessarily bloated the MDString class.

Because the MDString class is a "Value" and will never have a "name", and
because the Name field in the Value class is a pointer to a StringMap entry, we
repurpose the Name field for an MDString. It stores the StringMap entry in the
Name field, and uses the normal methods to get the string (name) back.

PR12474

llvm-svn: 154429
2012-04-10 20:12:16 +00:00
Duncan Sands
f25460b85f Express the number of ULPs in fpaccuracy metadata as a real rather than a
rational number, e.g. as 2.5 rather than 5, 2.  OK'd by Peter Collingbourne.

llvm-svn: 154387
2012-04-10 08:22:43 +00:00
Bill Wendling
15f23d4018 Remove the 'Parent' pointer from the MDNodeOperand class.
An MDNode has a list of MDNodeOperands allocated directly after it as part of
its allocation. Therefore, the Parent of the MDNodeOperands can be found by
walking back through the operands to the beginning of that list. Mark the first
operand's value pointer as being the 'first' operand so that we know where the
beginning of said list is.

This saves a *lot* of space during LTO with -O0 -g flags.

llvm-svn: 154280
2012-04-08 10:20:49 +00:00
Bill Wendling
d13bec8fa9 Allow subclasses of the ValueHandleBase to store information as part of the
value pointer by making the value pointer into a pointer-int pair with 2 bits
available for flags.

llvm-svn: 154279
2012-04-08 10:16:43 +00:00
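
A minimal sketch of the underlying technique, PointerIntPair, which stores small flags in the low bits of a sufficiently aligned pointer (the flag meanings here are hypothetical):

#include "llvm/ADT/PointerIntPair.h"
#include "llvm/IR/Value.h"
using namespace llvm;

// Value pointers are at least 4-byte aligned, so the 2 low bits are free.
typedef PointerIntPair<Value *, 2, unsigned> ValueAndFlags;

unsigned getFlags(const ValueAndFlags &VP) { return VP.getInt(); }
Value *getValue(const ValueAndFlags &VP) { return VP.getPointer(); }
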
Bill Wendling
e3c2c36927 The speedup doesn't appear to have been from this, but was an anomaly of my testing machine.
llvm-svn: 153951
2012-04-03 11:19:21 +00:00
Bill Wendling
3f12fbd290 Reserve space for the eventual filling of the vector. This gives a small speedup.
llvm-svn: 153949
2012-04-03 10:50:09 +00:00
Duncan Sands
5165327295 I noticed in passing that the Metadata getIfExists method was creating a new
node and returning it if one didn't exist.

llvm-svn: 153798
2012-03-31 08:20:11 +00:00
Rafael Espindola
151b420718 Handle unreachable code in the dominates functions. This changes users when
needed for correctness, but still doesn't clean up code that now unnecessarily
checks for reachability.

llvm-svn: 153755
2012-03-30 16:46:21 +00:00
Douglas Gregor
d7dc901945 Add missing include of <new>
llvm-svn: 153436
2012-03-26 14:04:17 +00:00
Rafael Espindola
4f3bebb795 Remove an always-true variable.
llvm-svn: 153392
2012-03-24 20:02:25 +00:00
Rafael Espindola
277fa2bfe2 First part of PR12251. Add documentation and verifier support for the range
metadata.

llvm-svn: 153359
2012-03-24 00:14:51 +00:00
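
A hedged sketch of attaching the new metadata from C++ (current header paths and the MDBuilder::createRange helper assumed; the bounds are hypothetical and denote a half-open range):

#include "llvm/ADT/APInt.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/MDBuilder.h"
using namespace llvm;

// Record that the loaded i32 value is known to be in [0, 256).
void annotateRange(LoadInst *LI) {
  MDBuilder MDB(LI->getContext());
  LI->setMetadata(LLVMContext::MD_range,
                  MDB.createRange(APInt(32, 0), APInt(32, 256)));
}
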
Eric Christopher
224bb6fa26 Fix up cmake build.
llvm-svn: 153306
2012-03-23 03:55:14 +00:00
Eric Christopher
11cd3dd9fc Take out the debug info probe stuff. It's making some changes to
the PassManager annoying and should be reimplemented as a decorator
on top of existing passes (as should the timing data).

llvm-svn: 153305
2012-03-23 03:54:05 +00:00
Chris Lattner
88c929ec53 Add load/store volatility control to the C API; patch by Yiannis Tsiouris!
llvm-svn: 153238
2012-03-22 03:54:15 +00:00
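
A minimal sketch of the new C API calls (the instruction value is assumed to be a load or store created elsewhere through llvm-c):

#include <llvm-c/Core.h>

/* Mark a memory access as volatile and read the flag back. */
void markVolatile(LLVMValueRef MemAccess) {
  LLVMSetVolatile(MemAccess, 1);
  LLVMBool IsVolatile = LLVMGetVolatile(MemAccess);
  (void)IsVolatile;
}
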
Chandler Carruth
889ecbc0f8 Extend the inline cost calculation to account for bonuses due to
correlated pairs of pointer arguments at the callsite. This is designed
to recognize the common C++ idiom of begin/end pointer pairs when the
end pointer is a constant offset from the begin pointer. With the
C-based idiom of a pointer and size, the inline cost saw the constant
size calculation, and this provides the same level of information for
begin/end pairs.

In order to propagate this information we have to search for candidate
operations on a pair of pointer function arguments (or derived from
them) which would be simplified if the pointers had a known constant
offset. Then the callsite analysis looks for such pointer pairs in the
argument list, and applies the appropriate bonus.

This helps LLVM detect that half of bounds-checked STL algorithms
(such as hash_combine_range, and some hybrid sort implementations)
disappear when inlined with a constant size input. However, it's not
a complete fix due to the inaccuracy of our cost metric for constants in
general. I'm looking into that next.

Benchmarks showed no significant code size change, and very minor
performance changes. However, specific code such as hashing is showing
significantly cleaner inlining decisions.

llvm-svn: 152752
2012-03-14 23:19:53 +00:00
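
To make the idiom concrete, a small self-contained C++ example (not LLVM API) of the begin/end pointer pair the bonus is meant to recognize; after inlining, End - Begin is a known constant, so the loop trip count becomes constant too:

// Callee written against a begin/end pointer pair.
static unsigned sumRange(const int *Begin, const int *End) {
  unsigned Sum = 0;
  for (const int *I = Begin; I != End; ++I)
    Sum += *I;
  return Sum;
}

// Caller passes pointers a constant offset apart, like begin()/end() of a
// fixed-size container.
unsigned caller() {
  int Buf[4] = {1, 2, 3, 4};
  return sumRange(Buf, Buf + 4);
}
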
Stepan Dyatkovskiy
72fdcabd4d llvm::SwitchInst
Renamed the methods caseBegin, caseEnd, and caseDefault to case_begin, case_end, and case_default.
Added some notes on case iterators.

llvm-svn: 152532
2012-03-11 06:09:17 +00:00
Chandler Carruth
a4cd27625a Refactor some methods to look through bitcasts and GEPs on pointers into
a common collection of methods on Value, and share their implementation.
We had two variations in two different places already, and I need the
third variation for inline cost estimation.

Reviewed by Duncan Sands on IRC, but further comments here welcome.

llvm-svn: 152490
2012-03-10 08:39:09 +00:00
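
A hedged sketch using Value::stripPointerCasts, one of the helpers in this family, which walks through bitcasts and all-zero GEPs to the underlying pointer:

#include "llvm/IR/Value.h"
using namespace llvm;

// Hypothetical use: compare two pointers modulo bitcasts and no-op GEPs.
bool sameUnderlyingPointer(const Value *A, const Value *B) {
  return A->stripPointerCasts() == B->stripPointerCasts();
}
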
Stepan Dyatkovskiy
79f3dd93b7 Took into account Duncan's comments on r149481, dated 2nd Feb 2012:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20120130/136146.html

Implemented CaseIterator, which solves almost all of the issues described: we no longer need to mix operand/case/successor indexing. The base iterator class is implemented as a template, since it may be initialized either from a "const SwitchInst*" or from a "SwitchInst*".

ConstCaseIt is just a read-only iterator.
CaseIt is a read-write iterator; it allows changing the case successor and the case value.

Using the iterators allows the resolveXXXX methods to be removed entirely. All indexing conversions are done automatically inside the iterator's getters.

The main way to use the iterators looks like this:
SwitchInst *SI = ... // initialize it somehow

for (SwitchInst::CaseIt i = SI->caseBegin(), e = SI->caseEnd(); i != e; ++i) {
  BasicBlock *BB = i.getCaseSuccessor();
  ConstantInt *V = i.getCaseValue();
  // Do something.
}

If you want to convert a case number to a TerminatorInst successor index, just use the iterator's getSuccessorIndex method.
If you want to initialize an iterator from a TerminatorInst successor index, use the CaseIt::fromSuccessorIndex(...) method.

There are also related changes in llvm-clients: klee and clang.

llvm-svn: 152297
2012-03-08 07:06:20 +00:00
Chandler Carruth
53d60ebfa2 Switch this code to use hash_combine_range rather than incremental calls
to hash_combine. One of the interfaces could already do this, and the
other can just use a small buffer. This is a much more efficient way to
use the hash_combine interface, although I don't have any particular
benchmark where this code was hot, so I can't measure much of an impact.
It at least doesn't slow anything down.

llvm-svn: 152200
2012-03-07 03:22:32 +00:00
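
A short sketch of the range-based form (the operand list type here is made up for illustration):

#include "llvm/ADT/Hashing.h"
#include "llvm/ADT/SmallVector.h"
using namespace llvm;

// Hash an entire operand list in one call rather than one hash_combine call
// per element.
hash_code hashOperands(const SmallVectorImpl<void *> &Ops) {
  return hash_combine_range(Ops.begin(), Ops.end());
}
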
Chandler Carruth
42ee38de30 Cache the sized-ness of struct types, once we reach the steady state of
"is sized". This prevents every query to isSized() from recursing over
every sub-type of a struct type. This could get *very* slow for
extremely deep nesting of structs, as in 177.mesa.

This change is a 45% speedup for 'opt -O2' of 177.mesa.linked.bc, and
likely a significant speedup for other cases as well. It even impacts
-O0 cases because so many parts of the code check whether a type
is sized.

Thanks for the review from Nick Lewycky and Benjamin Kramer on IRC.

llvm-svn: 152197
2012-03-07 02:33:09 +00:00
Jay Foad
f7931a878d Change ConstantAggrUniqueMap to use Chandler's new hashing
implementation. Patch by Meador Inge

llvm-svn: 152116
2012-03-06 10:43:52 +00:00
Chandler Carruth
a93fbd8fff Replace the hashing functions on APInt and APFloat with overloads of the
new hash_value infrastructure, and replace their implementations using
hash_combine. This removes a complete copy of Jenkins' lookup3 hash
function (which is both significantly slower and lower quality than the
one implemented in hash_combine) along with a somewhat scary xor-only
hash function.

Now that APInt and APFloat can be passed directly to hash_combine,
simplify the rest of the LLVMContextImpl hashing to use the new
infrastructure.

llvm-svn: 152004
2012-03-04 12:02:57 +00:00
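
A minimal sketch of the resulting usage: APInt and APFloat can now be fed straight into hash_combine, which dispatches to their hash_value overloads (the key shape here is hypothetical):

#include "llvm/ADT/APFloat.h"
#include "llvm/ADT/APInt.h"
#include "llvm/ADT/Hashing.h"
using namespace llvm;

// Hash a constant key made of an integer part and a floating-point part.
hash_code hashConstantKey(const APInt &IntPart, const APFloat &FPPart) {
  return hash_combine(IntPart, FPPart);
}
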