mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-11-24 11:42:57 +01:00
Commit Graph

77212 Commits

Author SHA1 Message Date
Dan Gohman
6e1bd851dc Change the default scheduler from Latency to ILP, since Latency
is going away.

llvm-svn: 142810
2011-10-24 17:45:02 +00:00
Jim Grosbach
0bb9a86fc7 Update test for r142801.
llvm-svn: 142806
2011-10-24 17:26:26 +00:00
Benjamin Kramer
b4f9f1d5f9 XFAIL test on leak checkers.
llvm-svn: 142804
2011-10-24 17:24:05 +00:00
Jim Grosbach
eceea163ef Thumb2 LDM instructions can target PC. Make sure to encode it.
PR11220

llvm-svn: 142801
2011-10-24 17:16:24 +00:00
Bill Wendling
077c57fc9d Cleanup. Get rid of the old SjLj EH lowering code. No functionality change.
llvm-svn: 142800
2011-10-24 17:12:36 +00:00
Chandler Carruth
c93e2dfab0 Sink an otherwise unused variable's initializer into the asserts that
used it. Fixes an unused variable warning from GCC on release builds.

llvm-svn: 142799
2011-10-24 16:51:55 +00:00
Benjamin Kramer
0638353f56 Implement comparison operators for BranchProbability in a way that can't overflow INT64_MAX.
Add a test case for the edge case that triggers this. Thanks to Chandler for bringing this to my attention.

llvm-svn: 142794
2011-10-24 13:50:56 +00:00
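
To illustrate the overflow-safe comparison described above (a minimal sketch, not the actual llvm::BranchProbability code): with a probability stored as a 32-bit numerator over a 32-bit denominator, cross-multiplying in 64-bit arithmetic can never overflow, since the product of two uint32_t values always fits in a uint64_t.

  #include <cassert>
  #include <cstdint>

  // Hypothetical stand-in for a numerator/denominator probability.
  struct Prob {
    uint32_t N, D;
  };

  // a/b < c/d  <=>  a*d < c*b.  Each product of two 32-bit values fits in
  // 64 bits, so the comparison itself cannot overflow.
  bool lessThan(Prob A, Prob B) {
    return (uint64_t)A.N * B.D < (uint64_t)B.N * A.D;
  }

  int main() {
    // Edge case near the top of the 32-bit range still compares correctly.
    Prob P = {UINT32_MAX - 1, UINT32_MAX};
    Prob Q = {UINT32_MAX, UINT32_MAX};
    assert(lessThan(P, Q) && !lessThan(Q, P));
    return 0;
  }
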
Chandler Carruth
d04f838629 Remove return heuristics from the static branch probabilities, and
introduce no-return or unreachable heuristics.

The return heuristics from the Ball and Larus paper don't work well in
practice as they pessimize early return paths. The only return heuristics
with a good hit rate are those for:
 - NULL return
 - Constant return
 - negative integer return

Only the last of these three can possibly require significant code for
the returning block, and even the last is fairly rare and usually also
a constant. As a consequence, even for the cold return paths, there is
little code on that return path, and so little code density to be gained
by sinking it. The places where sinking these blocks is valuable (inner
loops) will already be weighted appropriately as the edge is a loop-exit
branch.

All of this aside, early returns are nearly as common as all three of
these return categories, and should actually be predicted as taken!
Rather than muddy the waters of the static predictions, just remain
silent on returns and let the CFG itself dictate any layout or other
issues.

However, the return heuristic was flagging one very important case:
unreachable. Unfortunately it still gave a 1/4 chance of the
branch-to-unreachable occurring. It also didn't do a rigorous job of
finding those blocks which post-dominate an unreachable block.

This patch builds a more powerful analysis that should flag all branches
to blocks known to then reach unreachable. It also has better worst-case
runtime complexity by not looping through successors for each block. The
previous code would perform an N^2 walk in the event of a single entry
block branching to N successors with a switch where each successor falls
through to the next and they finally fall through to a return.

Test case added for noreturn heuristics. Also doxygen comments improved
along the way.

llvm-svn: 142793
2011-10-24 12:01:08 +00:00
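
A simplified, hedged sketch of the idea behind the commit above (not the BranchProbabilityInfo implementation; the types and the fixpoint formulation here are assumptions): mark every block from which all paths end in unreachable, then bias branches away from that set.

  #include <unordered_set>
  #include <vector>

  // Illustrative CFG node, not an LLVM type.
  struct Block {
    bool endsInUnreachable = false;
    std::vector<Block *> succs;
  };

  // A block is "doomed" if it terminates in unreachable, or if every one of
  // its successors is doomed.  Iterate to a fixpoint over the whole function
  // instead of re-walking successor lists per query.
  std::unordered_set<const Block *>
  computeDoomedBlocks(const std::vector<Block *> &blocks) {
    std::unordered_set<const Block *> doomed;
    bool changed = true;
    while (changed) {
      changed = false;
      for (const Block *B : blocks) {
        if (doomed.count(B))
          continue;
        bool isDoomed = B->endsInUnreachable;
        if (!isDoomed && !B->succs.empty()) {
          isDoomed = true;
          for (const Block *S : B->succs)
            if (!doomed.count(S)) {
              isDoomed = false;
              break;
            }
        }
        if (isDoomed) {
          doomed.insert(B);
          changed = true;
        }
      }
    }
    return doomed;
  }

A conditional branch with one successor inside this set and one outside can then be given a weight heavily skewed toward the successor outside it.
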
NAKAMURA Takumi
541ec5f681 Revert "Test commit"
llvm-svn: 142792
2011-10-24 10:03:25 +00:00
NAKAMURA Takumi
8bebd57579 Test commit
llvm-svn: 142791
2011-10-24 10:02:59 +00:00
Nick Lewycky
64d4e26aec Reapply r142781 with fix. Original message:
Enhance SCEV's brute force loop analysis to handle multiple PHI nodes in the
  loop header when computing the trip count.

  With this, we now constant evaluate:
    struct ListNode { const struct ListNode *next; int i; };
    static const struct ListNode node1 = {0, 1};
    static const struct ListNode node2 = {&node1, 2};
    static const struct ListNode node3 = {&node2, 3};
    int test() {
      int sum = 0;
      for (const struct ListNode *n = &node3; n != 0; n = n->next)
        sum += n->i;
      return sum;
    }

llvm-svn: 142790
2011-10-24 06:57:05 +00:00
Chandler Carruth
e443224c31 Doxygen-ify the comments on the public interface for BPI. Also, move the
two more subtle routines to the bottom and expand on their cautionary
comments a bit. No functionality or actual interface change here.

llvm-svn: 142789
2011-10-24 05:55:58 +00:00
Nick Lewycky
341bf1548e PHI nodes not in the loop header aren't part of the loop iteration initial
state. Furthermore, they might not have two operands. This fixes the underlying
issue behind the crashes introduced in r142781.

llvm-svn: 142788
2011-10-24 05:51:01 +00:00
Nick Lewycky
4d47e224d7 A dead malloc, a free(NULL) and a free(undef) are all trivially dead
instructions.

This doesn't introduce any optimizations we weren't doing before (except
potentially due to pass ordering issues), but now passes will eliminate them
sooner as part of their own cleanups.

llvm-svn: 142787
2011-10-24 04:35:36 +00:00
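
For illustration, a hedged C-level example of the first two patterns described above (free of an undef pointer has no source-level spelling and only appears in IR):

  #include <cstdlib>

  void example() {
    void *P = std::malloc(64); // result never used and never escapes:
    (void)P;                   // the allocation itself is trivially dead
    std::free(nullptr);        // free(NULL) is defined to do nothing
  }
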
Nick Lewycky
d72de74587 Speculatively revert r142781. Bots are showing
Assertion `i_nocapture < OperandTraits<PHINode>::operands(this) && "getOperand() out of range!"' failed.
coming out of indvars.

llvm-svn: 142786
2011-10-24 04:00:25 +00:00
NAKAMURA Takumi
679aa96408 Windows/Path.inc: [PR8460] Get rid of ScopedNullTerminator. Thanks to Zvi Rackover!
llvm-svn: 142785
2011-10-24 03:27:19 +00:00
Chandler Carruth
c5722dbec0 Simplify the design of BranchProbabilityInfo by collapsing it into
a single class. Previously it was split between two classes, one
internal and one external. The concern seemed to center around exposing
the weights used, but those can remain confined to the implementation
file.

Having a single class to maintain the state and analyses in use will
also simplify several of the enhancements I want to make to our static
heuristics.

llvm-svn: 142783
2011-10-24 01:40:45 +00:00
Nick Lewycky
5ab7948d71 Enhance SCEV's brute force loop analysis to handle multiple PHI nodes in the
loop header when computing the trip count.

With this, we now constant evaluate:
  struct ListNode { const struct ListNode *next; int i; };
  static const struct ListNode node1 = {0, 1};
  static const struct ListNode node2 = {&node1, 2};
  static const struct ListNode node3 = {&node2, 3};
  int test() {
    int sum = 0;
    for (const struct ListNode *n = &node3; n != 0; n = n->next)
      sum += n->i;
    return sum;
  }

llvm-svn: 142781
2011-10-23 23:43:14 +00:00
Chandler Carruth
93ee01158c Tidy up a loop to be more idiomatic for LLVM's codebase, and remove some
extraneous whitespace. Trying to clean-up this pass as much as I can
before I start making functional changes.

llvm-svn: 142780
2011-10-23 22:40:13 +00:00
Craig Topper
3cb62dca0f Add X86 SARX, SHRX, and SHLX instructions.
llvm-svn: 142779
2011-10-23 22:18:24 +00:00
Chandler Carruth
151d4fc273 Teach the BranchProbabilityInfo pass to print its results, and use that
to bring it under direct test instead of merely indirectly testing it in
the BlockFrequencyInfo pass.

The next step is to start adding tests for the various heuristics
employed, and to start fixing those heuristics once they're under test.

llvm-svn: 142778
2011-10-23 21:21:50 +00:00
Bill Wendling
743c5bcb96 Rename the script to indicate that this is for the TEST=simple tests.
llvm-svn: 142764
2011-10-23 20:14:06 +00:00
Bill Wendling
de530037c1 Resurrect the 'find regressions for the TEST=nightly tests' script.
llvm-svn: 142763
2011-10-23 20:13:14 +00:00
Chandler Carruth
c10018cd49 Now that we have comparison on probabilities, add some static functions
to get important constant branch probabilities and use them for finding
the best branch out of a set of possibilities.

llvm-svn: 142762
2011-10-23 20:10:34 +00:00
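
A minimal sketch of how such constants plus the comparison operators are used (the names here are illustrative, not the actual BranchProbability API):

  #include <cstdint>
  #include <vector>

  struct Prob {
    uint32_t N, D;
    static Prob getZero() { return {0, 1}; }
    static Prob getOne() { return {1, 1}; }
    bool operator<(Prob RHS) const {
      return (uint64_t)N * RHS.D < (uint64_t)RHS.N * D; // overflow-safe
    }
  };

  // Pick the successor with the highest edge probability, scanning up from
  // a known constant floor (zero); returns -1 if there are no edges.
  int bestSuccessor(const std::vector<Prob> &EdgeProbs) {
    Prob Best = Prob::getZero();
    int BestIdx = -1;
    for (int I = 0, E = (int)EdgeProbs.size(); I != E; ++I)
      if (Best < EdgeProbs[I]) {
        Best = EdgeProbs[I];
        BestIdx = I;
      }
    return BestIdx;
  }
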
Chandler Carruth
372600fd24 Remove a commented out line of code that snuck by my auditing.
llvm-svn: 142761
2011-10-23 20:10:30 +00:00
Benjamin Kramer
fd11994070 Print branch probabilities as percentages.
50% is much more readable than 5.000000e-01.

llvm-svn: 142752
2011-10-23 11:32:54 +00:00
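
For example (a sketch assuming the probability is kept as a 32-bit numerator/denominator pair; not the actual printing code):

  #include <cstdint>
  #include <cstdio>

  void printProb(uint32_t N, uint32_t D) {
    // 1/2 prints as "50.00%" rather than "5.000000e-01".
    std::printf("%.2f%%\n", 100.0 * (double)N / (double)D);
  }
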
Benjamin Kramer
9adc582e35 Add compare operators to BranchProbability and use it to determine if an edge is hot.
llvm-svn: 142751
2011-10-23 11:19:14 +00:00
Chandler Carruth
68ba25c47d Completely re-write the algorithm behind MachineBlockPlacement based on
discussions with Andy. Fundamentally, the previous algorithm is both
counterproductive on several fronts and prioritizes things which aren't
necessarily the most important: static branch prediction.

The new algorithm uses the existing loop CFG structure information to
walk through the CFG itself to layout blocks. It coalesces adjacent
blocks within the loop where the CFG allows based on the most likely
path taken. Finally, it topologically orders the block chains that have
been formed. This allows it to choose a (mostly) topologically valid
ordering which still prioritizes fallthrough within the structural
constraints.

As a final twist in the algorithm, it does violate the CFG when it
discovers a "hot" edge, that is an edge that is more than 4x hotter than
the competing edges in the CFG. These are forcibly merged into
a fallthrough chain.

Future transformations that need to be added are rotation of loop exit
conditions to be fallthrough, and better isolation of cold block chains.
I'm also planning on adding statistics to model how well the algorithm
does at laying out blocks based on the probabilities it receives.

The old tests mostly still pass, and I have some new tests to add, but
the nested loops are still behaving very strangely. This almost seems
like working-as-intended as it rotated the exit branch to be
fallthrough, but I'm not convinced this is actually the best layout. It
is well supported by the probabilities for loops we currently get, but
those are pretty broken for nested loops, so this may change later.

llvm-svn: 142743
2011-10-23 09:18:45 +00:00
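
A very rough sketch of the chain-forming step described above (illustrative C++, not the MachineBlockPlacement implementation; the 4x "hot edge" ratio is taken from the message, everything else is assumed):

  #include <cstddef>
  #include <vector>

  struct MBB {                     // stand-in for a machine basic block
    std::vector<MBB *> succs;      // CFG successors
    std::vector<double> succProb;  // probability of each outgoing edge
    bool placed = false;
  };

  // Grow a fallthrough chain from Head by repeatedly taking the most likely
  // successor that has not been placed yet; the resulting chains are later
  // concatenated in a (mostly) topological order.
  std::vector<MBB *> buildChain(MBB *Head) {
    std::vector<MBB *> Chain;
    for (MBB *Cur = Head; Cur && !Cur->placed;) {
      Cur->placed = true;
      Chain.push_back(Cur);
      MBB *Best = nullptr;
      double BestProb = 0.0;
      for (std::size_t I = 0; I != Cur->succs.size(); ++I)
        if (!Cur->succs[I]->placed && Cur->succProb[I] > BestProb) {
          Best = Cur->succs[I];
          BestProb = Cur->succProb[I];
        }
      Cur = Best; // nullptr ends the chain
    }
    return Chain;
  }

  // An edge is "hot" enough to merge across the structural order when it is
  // more than 4x as likely as the best competing edge (ratio per the text).
  bool isHotEdge(double Prob, double BestCompeting) {
    return Prob > 4.0 * BestCompeting;
  }
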
Craig Topper
0e63b4485c Add X86 RORX instruction
llvm-svn: 142741
2011-10-23 07:34:00 +00:00
Cameron Zwarich
2dd06afcf5 The element insertion code in scalar replacement doesn't handle incorrect
element types, even though the element extraction code does. It is surprising
that this bug has been here for so long. Fixes <rdar://problem/10318778>.

llvm-svn: 142740
2011-10-23 07:02:10 +00:00
Craig Topper
7019cf1b80 Add X86 MULX instruction for disassembler.
llvm-svn: 142738
2011-10-23 00:33:32 +00:00
Craig Topper
c543ac0876 Remove some duplicate specifying of neverHasSideEffects and mayLoad from X86 multiply instructions.
llvm-svn: 142737
2011-10-22 23:13:53 +00:00
Nick Lewycky
1d759dcde7 Oops! Fix test I forgot to submit as part of r142735.
llvm-svn: 142736
2011-10-22 22:07:31 +00:00
Nick Lewycky
25e5f6896b A non-escaping malloc in the entry block is not unlike an alloca. Do dead-store
elimination on them too.

llvm-svn: 142735
2011-10-22 21:59:35 +00:00
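
A hedged C-level illustration of the pattern above (names and sizes are made up): a store into malloc'ed memory that never escapes and is never read back is dead, just as it would be for a store to a local alloca.

  #include <cstdlib>

  int entryBlockExample() {
    int *Buf = (int *)std::malloc(4 * sizeof(int));
    if (!Buf)
      return 0;
    Buf[0] = 42;      // never read back and Buf never escapes: a dead store
    std::free(Buf);   // the pointer is only ever passed to free
    return 1;
  }
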
Nick Lewycky
ce8bfeadff Make SCEV's brute force analysis stronger in two ways. Firstly, we should be
able to constant fold load instructions where the argument is a constant.
Secondly, we should be able to watch multiple PHI nodes through the loop; this
patch only supports PHIs in loop headers, but more can be done here.

With this patch, we now constant evaluate:
  static const int arr[] = {1, 2, 3, 4, 5};
  int test() {
    int sum = 0;
    for (int i = 0; i < 5; ++i) sum += arr[i];
    return sum;
  }

llvm-svn: 142731
2011-10-22 19:58:20 +00:00
Nadav Rotem
dad9eeadbb Fix a typo.
llvm-svn: 142729
2011-10-22 18:44:51 +00:00
Jim Grosbach
b10647b119 Minor updates.
llvm-svn: 142728
2011-10-22 18:17:32 +00:00
Nadav Rotem
c5c37861f7 Added my name to CREDITS.TXT
llvm-svn: 142727
2011-10-22 17:51:04 +00:00
Benjamin Kramer
03065133c3 Move various generated tables into read-only memory, fixing up const correctness along the way.
llvm-svn: 142726
2011-10-22 16:50:00 +00:00
Nadav Rotem
7a79f94aad Fix pr11193.
SHL inserts zeros from the right; thus, even when the original
sign_extend_inreg value was only 1 bit wide, we still need to sra.

llvm-svn: 142724
2011-10-22 12:39:25 +00:00
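
A worked illustration of the reasoning in plain C++ (an assumed stand-in, not the DAG lowering code): to sign-extend the low bit of a 32-bit value, shift it up to the sign bit and shift back arithmetically; a logical shift right would pull zeros back in and lose the sign.

  #include <cassert>
  #include <cstdint>

  // shl to put bit 0 in the MSB, then an arithmetic shift right (sra) to
  // smear it across the register.  A logical shift right (srl) would give
  // 0x00000001 instead of 0xFFFFFFFF for an input of 1.
  int32_t signExtendBit0(uint32_t X) {
    return (int32_t)(X << 31) >> 31; // >> on a signed value is arithmetic
  }                                  // on mainstream compilers

  int main() {
    assert(signExtendBit0(1) == -1); // the 1-bit value 1 means "all ones"
    assert(signExtendBit0(0) == 0);
    return 0;
  }
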
Bill Wendling
66327a8d0e The different flavors of ARM have different valid subsets of registers. Check
that the set of callee-saved registers is correct for the specific platform.
<rdar://problem/10313708> & ctor_dtor_count & ctor_dtor_count-2

llvm-svn: 142706
2011-10-22 00:29:28 +00:00
Jim Grosbach
d964cf8939 Assembly parsing for 4-register sequential variant of VLD2.
llvm-svn: 142704
2011-10-21 23:58:57 +00:00
Jim Grosbach
a6e536367e Assembly parsing for 2-register sequential variant of VLD2.
llvm-svn: 142691
2011-10-21 22:21:10 +00:00
Bill Wendling
34a9073d67 Make sure that the landing pads themselves have no PHI instructions in them.
The assumption in the back-end is that PHIs are not allowed at the start of the
landing pad block for SjLj exceptions.
<rdar://problem/10313708>

llvm-svn: 142689
2011-10-21 22:08:56 +00:00
Benjamin Kramer
917737037d Extend the floating point heuristic to consider NaN checks unlikely.
llvm-svn: 142687
2011-10-21 21:13:47 +00:00
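
For context, a hedged example of the kind of check this covers: a floating-point value compared against itself is unequal only for NaN, so the heuristic now predicts such branches as not taken.

  #include <cstdio>

  void consume(double X) {
    if (X != X) {               // true only when X is NaN (unordered compare)
      std::puts("saw a NaN");   // now treated as the unlikely path
      return;
    }
    std::printf("%f\n", X);
  }
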
Tanya Lattner
84bb012d55 Revert r141657 for now. This has broken css and changed links on llvm.org. I'd like to understand exactly why the links have changed and if a newer doxygen is required. This may be reapplied once we upgrade on llvm.org and it is fully tested.
llvm-svn: 142686
2011-10-21 20:51:54 +00:00
Eli Friedman
5012ac7cc0 Remap blockaddress correctly when inlining a function. Fixes PR10162.
llvm-svn: 142684
2011-10-21 20:45:19 +00:00
Owen Anderson
0d40d283e9 Use LLVMBool for a function that logically returns a boolean value.
llvm-svn: 142683
2011-10-21 20:35:58 +00:00
Jim Grosbach
68dfc88f95 Assembly parsing for 4-register variant of VLD1.
llvm-svn: 142682
2011-10-21 20:35:01 +00:00
Owen Anderson
7faa1c3317 Fix typo.
llvm-svn: 142681
2011-10-21 20:28:19 +00:00