mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-22 12:33:33 +02:00
Commit Graph

13270 Commits

Author SHA1 Message Date
Anna Zaks
73f1ba0a88 Rename the test. Thanks Cameron! Use shorter/generic names.
llvm-svn: 133115
2011-06-16 00:34:10 +00:00
Anna Zaks
e2a947a4f7 Function::getNumBlockIDs() should be used instead of Function::size() to set the upper limit on the block IDs, since basic blocks might get removed (simplified away) after being initially numbered. Plus a test case in which SelectionDAGBuilder::visitBr() calls llvm::MachineFunction::removeFromMBBNumbering(), introducing a hole in the numbering that led to an assert in llc (prior to the fix).
llvm-svn: 133113
2011-06-16 00:03:21 +00:00
John McCall
519c63cdeb The ARC language-specific optimizer. Credit to Dan Gohman.
llvm-svn: 133108
2011-06-15 23:37:01 +00:00
Rafael Espindola
8edd93b519 Testcase for previous commit.
llvm-svn: 133089
2011-06-15 21:18:51 +00:00
John McCall
e6835ee44e Add a new function attribute, nonlazybind, which inhibits lazy-loading
optimizations when emitting calls to the function;  instead those calls may
use faster relocations which require the function to be immediately resolved
upon loading the dynamic object featuring the call.  This is useful when it
is known that the function will be called frequently and pervasively and
therefore there is no merit in delaying binding of the function.

Currently only implemented for x86-64, where it turns into a call through
the global offset table.

Patch by Dan Gohman, who assures me that he's going to add LangRef documentation
for this once it's committed.

llvm-svn: 133080
2011-06-15 20:36:13 +00:00
Andrew Trick
6d96f3a7a2 Disabling this test until I can figure out the right lit flags.
llvm-svn: 133068
2011-06-15 18:25:38 +00:00
Jakob Stoklund Olesen
7b0de9a9e0 Remove custom allocation orders in SystemZ.
Note that this actually changes code generation, and someone who
understands this target better should check the changes.

- R12Q is now allocatable. I think it was omitted from the allocation
  order by mistake since it isn't reserved. It was apparently used as a
  GOT pointer sometimes, and it should probably be reserved if that is
  the case.

- The GR64 registers are allocated in a different order now. The
  register allocator will automatically put the CSRs last. There were
  other changes to the order that may have been significant.

The test fix is because r0 and r1 swapped places in the allocation order.

llvm-svn: 133067
2011-06-15 18:02:56 +00:00
Evan Cheng
30f84a59ae Another revsh pattern. rdar://9609059
llvm-svn: 133064
2011-06-15 17:17:48 +00:00
Andrew Trick
ce93f28a36 Added -stress-sched flag in the Asserts build.
Added a test case for handling physreg aliases during pre-RA-sched.

llvm-svn: 133063
2011-06-15 17:16:12 +00:00
Chad Rosier
f6c1b3b81f TargetLoweringOpt is a struct used by DAGCombine, not a pass.
llvm-svn: 133062
2011-06-15 16:48:02 +00:00
Nadav Rotem
72e51c94b1 This test was failing on X86 machines which do not have SSE4. Fixed the test by
specifying that the target CPU is corei7.

llvm-svn: 133053
2011-06-15 12:26:53 +00:00
Bill Wendling
5ae6b0c972 Improve the heuristic to emit the alias if the number of hard-coded registers
is also greater than in the alias.

llvm-svn: 133038
2011-06-15 04:31:19 +00:00
Evan Cheng
7624839811 PerformBFICombine - (bfi A, (and B, Mask1), Mask2) -> (bfi A, B, Mask2) iff
the bits being cleared by the AND are not demanded by the BFI.

The previous BFI dag combine rule was actually incorrect (or used to be
correct until BFI representation changed).

rdar://9609030

llvm-svn: 133034
2011-06-15 01:12:31 +00:00
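
A minimal C++ sketch of the bit-level identity behind this combine; modeling BFI as (A & ~Mask) | (B & Mask) and the concrete mask values are assumptions for illustration, not the target's exact semantics.

----------
#include <cassert>
#include <cstdint>
#include <initializer_list>

// Model a bitfield insert: keep A's bits outside Mask, take B's bits inside Mask.
static uint32_t bfi(uint32_t A, uint32_t B, uint32_t Mask) {
  return (A & ~Mask) | (B & Mask);
}

int main() {
  const uint32_t Mask1 = 0x00FFFF00; // bits kept by the AND
  const uint32_t Mask2 = 0x0000FF00; // bits demanded by the BFI; a subset of Mask1
  for (uint32_t A : {0u, 0xDEADBEEFu, 0xFFFFFFFFu})
    for (uint32_t B : {0u, 0x12345678u, 0xFFFFFFFFu})
      // The AND only clears bits the BFI never reads, so it can be dropped.
      assert(bfi(A, B & Mask1, Mask2) == bfi(A, B, Mask2));
  return 0;
}
----------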
Tanya Lattner
5ee64fc868 Add an optimization that looks for a specific pair-wise add pattern and generates a vpaddl instruction instead of scalarizing the add.
Includes a test case.

llvm-svn: 133027
2011-06-14 23:48:48 +00:00
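
For context, a hedged C++ sketch of the kind of scalar pair-wise add pattern this optimization recognizes; the widening-by-adjacent-pairs shape mirrors what NEON's vpaddl does, while the function name, element types, and signature are illustrative assumptions.

----------
#include <cstddef>
#include <cstdint>

// Widening pairwise add: each output element is the sum of two adjacent inputs.
// A vpaddl instruction performs this on a whole vector at once instead of
// scalarizing the individual adds.
void pairwise_add_long(const int16_t *in, int32_t *out, size_t n_pairs) {
  for (size_t i = 0; i < n_pairs; ++i)
    out[i] = static_cast<int32_t>(in[2 * i]) + static_cast<int32_t>(in[2 * i + 1]);
}
----------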
Rafael Espindola
0811c47a3a Add triple.
llvm-svn: 133026
2011-06-14 23:47:36 +00:00
Chad Rosier
30333c668f When pattern matching during instruction selection, make sure shl x,1 is not
converted to add x,x if x is an undef.  add undef, undef does not guarantee
that the resulting low order bit is zero.
Fixes <rdar://problem/9453156> and <rdar://problem/9487392>.

llvm-svn: 133022
2011-06-14 22:29:10 +00:00
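
A short C++ sketch of the reasoning, with undef modeled as two independently chosen values (an assumption for illustration only).

----------
#include <cassert>
#include <cstdint>

int main() {
  // shl x, 1 always produces an even result, whatever x is.
  for (uint8_t x = 0; x < 255; ++x)
    assert(((uint8_t)(x << 1) & 1u) == 0u);

  // But each use of undef may resolve to a different value, so "add undef, undef"
  // can be odd: e.g. one use resolves to 1 and the other to 2.
  uint8_t a = 1, b = 2;            // two independent choices for the same undef
  assert(((a + b) & 1u) == 1u);    // the low-order bit is not guaranteed to be zero
  return 0;
}
----------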
Rafael Espindola
d133b092e2 Check the llc output.
llvm-svn: 133021
2011-06-14 22:24:32 +00:00
Stuart Hastings
63da197d28 Test case for x86 MMX inline asm. rdar://problem/8886707
llvm-svn: 133014
2011-06-14 21:51:38 +00:00
Rafael Espindola
4e8b511063 Add a test for the recent regression.
llvm-svn: 133009
2011-06-14 20:38:50 +00:00
Dan Gohman
2071b00bdf This test is still failing. Delete the rest of it.
llvm-svn: 133001
2011-06-14 18:07:36 +00:00
Dan Gohman
d6dcf3e5e3 Revert r132991. This test is failing on the
llvm-gcc-x86_64-linux-selfhost buildbot and others.

llvm-svn: 133000
2011-06-14 18:03:11 +00:00
Rafael Espindola
1e809f99ad Add 132986 back, but avoid non-determinism if a bb address gets reused.
llvm-svn: 132995
2011-06-14 15:31:54 +00:00
Nadav Rotem
7b529545b7 Add a testcase for #9623
llvm-svn: 132991
2011-06-14 13:23:10 +00:00
Rafael Espindola
b90ea8a8c7 revert 132986 to see if the bots go green.
llvm-svn: 132988
2011-06-14 12:48:26 +00:00
Nadav Rotem
b638c3c037 This testcase causes a failure on some bots. Remove the failing test until
further investigation.

llvm-svn: 132986
2011-06-14 09:10:37 +00:00
Nadav Rotem
1b92c3d96c Add a testcase for checking the integer promotion of many different vector
types (with power-of-two sizes such as 8, 16, 32 .. 512).

Fix a bug in the integer promotion of bitcast nodes. Enable integer expansion
only if the target of the conversion is an integer (when the type action is
scalarize).

Add handling to the legalization of vector load/store in cases where the saved
vector is integer-promoted.

llvm-svn: 132985
2011-06-14 08:11:52 +00:00
Rafael Espindola
434d19ff30 Implement Jakob's suggestion on how to detect fall through without calling
AnalyzeBranch.

llvm-svn: 132981
2011-06-14 06:08:32 +00:00
Bruno Cardoso Lopes
15b9096112 Since ARM's prefetch implementation predicted the presence of an instruction
cache prefetch, and now that the info from "prefetch" to "ARMPreload" is present,
only add a testcase for PLI.

llvm-svn: 132978
2011-06-14 05:11:46 +00:00
Bruno Cardoso Lopes
b6afc5168f Add one more argument to the prefetch intrinsic to indicate whether it's a data
or instruction cache access. Update the targets to match it and also teach
autoupgrade.

llvm-svn: 132976
2011-06-14 04:58:37 +00:00
Rafael Espindola
56a82c5ef8 Make the threshold used by branch folding softer. Before, we would get a
sharp all-or-nothing transition when one extra predecessor was added. Now
we still test the first ones for merging.

llvm-svn: 132974
2011-06-14 04:41:17 +00:00
Bill Wendling
77d4d62693 Heuristic: If the number of operands in the alias is more than the number of
operands in the aliasee, don't print the alias.

llvm-svn: 132963
2011-06-14 03:17:20 +00:00
John McCall
7e1ecf5edb Test case for r132797.
llvm-svn: 132962
2011-06-14 03:02:05 +00:00
Stuart Hastings
65d0bc94b4 Avoid fusing bitcasts with dynamic allocas if the amount-to-allocate
might overflow.  Re-typing the alloca to a larger type (e.g. double)
hoists a shift into the alloca, potentially exposing overflow in the
expression.  rdar://problem/9265821

llvm-svn: 132926
2011-06-13 18:48:49 +00:00
Benjamin Kramer
5079b61657 InstCombine: Fold A-b == C --> b == A-C if A and C are constants.
The backend already knew this trick.

llvm-svn: 132915
2011-06-13 15:24:24 +00:00
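
A quick C++ check of the folded identity over wrapping unsigned arithmetic; the concrete values of A and C and the 32-bit width are illustrative assumptions.

----------
#include <cassert>
#include <cstdint>

int main() {
  const uint32_t A = 100, C = 42;  // A and C play the role of the constants
  for (uint64_t i = 0; i <= UINT32_MAX; i += 65537) {
    uint32_t b = static_cast<uint32_t>(i);
    // A - b == C holds exactly when b == A - C (both sides wrap modulo 2^32).
    assert(((A - b) == C) == (b == (A - C)));
  }
  return 0;
}
----------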
Jakob Stoklund Olesen
2cac2ea7a1 Be less aggressive about hinting in RAFast.
In particular, don't spill dirty registers only to satisfy a hint. It is
not worth it.

The attached test case provides an example where the fast allocator
would spill a register when other registers are available.

llvm-svn: 132900
2011-06-13 03:26:46 +00:00
Benjamin Kramer
b0765d6ac0 InstCombine: Shrink ((zext X) & C1) == C2 to fold away the cast if the "zext" and the "and" have one use.
llvm-svn: 132897
2011-06-12 22:48:00 +00:00
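
A hedged C++ illustration of why the compare can be carried out in the narrower type once the constants fit in it; the 8-bit/32-bit widths and the constants are assumptions for illustration.

----------
#include <cassert>
#include <cstdint>

int main() {
  const uint32_t C1 = 0x0C, C2 = 0x04;  // both fit in the narrow (8-bit) type
  for (unsigned v = 0; v < 256; ++v) {
    uint8_t x = static_cast<uint8_t>(v);
    bool wide   = ((static_cast<uint32_t>(x) & C1) == C2);  // ((zext X) & C1) == C2
    bool narrow = ((x & static_cast<uint8_t>(C1)) ==
                   static_cast<uint8_t>(C2));               // compare done on X's own type
    assert(wide == narrow);  // with one-use zext/and, the cast can be folded away
  }
  return 0;
}
----------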
Benjamin Kramer
4a0f846bbd Simplify code. No functionality changes, name changes aside.
llvm-svn: 132896
2011-06-12 22:47:53 +00:00
Rafael Espindola
8d0f7518b2 Really fix the fall-through logic.
Add a triple to the tests.

llvm-svn: 132885
2011-06-12 05:57:01 +00:00
Rafael Espindola
f73c2dc8f6 Test for the previous commit.
llvm-svn: 132884
2011-06-12 05:35:39 +00:00
Rafael Espindola
db58547906 AnalyzeBranch doesn't change which successors a bb has, just the order
we try to branch to them.

Before we were creating successor lists with duplicated entries. Fixing that
found a bug in isBlockOnlyReachableByFallthrough that would cause it to
return the wrong answer for

-----------
...
jne foo
jmp bar

foo:
----------

llvm-svn: 132882
2011-06-12 03:20:32 +00:00
Eli Friedman
0bb1c525fd Add full x86 fast-isel support for memcpy and memset.
rdar://9431466

llvm-svn: 132864
2011-06-10 23:39:36 +00:00
Eli Friedman
b3764b7c97 Add -mattr=+sse2 to make the buildbots happy.
llvm-svn: 132839
2011-06-10 08:26:26 +00:00
Galina Kistanova
21af52bc9e Reverted r132785. It seems this test needs more research.
llvm-svn: 132836
2011-06-10 05:35:25 +00:00
Galina Kistanova
16bd9972fb Changed condition.
llvm-svn: 132834
2011-06-10 03:57:02 +00:00
Chad Rosier
670af01484 Adding a test case for revision 132825.
llvm-svn: 132830
2011-06-10 02:44:19 +00:00
Eli Friedman
96581336e6 Add a simple test which makes sure folding immediate float zero to a memory operand works.
llvm-svn: 132824
2011-06-10 00:30:08 +00:00
Cameron Zwarich
af47f4a117 A CCState was being created without setting whether it is in the Call or Prologue state,
causing an assertion failure downstream. This fixes <rdar://problem/9562908>.

This really seems like it should always be set at CCState creation time, so mistakes like
this can never happen. I'll take a look at doing that.

llvm-svn: 132811
2011-06-09 22:30:07 +00:00
Eli Friedman
14c6ce9041 Change this DAGCombine to build AND of SHR instead of SHR of AND; this matches the ordering we prefer in instcombine. Part of rdar://9562809.
The potential DAGCombine which enforces this more generally messes up some other very fragile patterns, so I'm leaving that alone, at least for now.

llvm-svn: 132809
2011-06-09 22:14:44 +00:00
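
A small C++ check that the two orderings compute the same value for a logical shift and a constant mask; which form is canonical is the point of the commit, and the mask and test values below are arbitrary.

----------
#include <cassert>
#include <cstdint>
#include <initializer_list>

int main() {
  const uint32_t C = 0xFF00FF00;   // an arbitrary mask constant
  for (uint32_t s = 0; s < 32; ++s)
    for (uint32_t x : {0u, 0x12345678u, 0xFFFFFFFFu})
      // srl (and x, C), s  ==  and (srl x, s), (srl C, s)  for logical shifts.
      assert(((x & C) >> s) == ((x >> s) & (C >> s)));
  return 0;
}
----------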
John McCall
806ec47668 SplitCriticalEdge can sometimes split the edge from an invoke to a landing
pad, separating the exception and selector calls from the new lpad.  Teaching
it not to do that, or to properly adjust the CFG afterwards, is out of
scope because it would require the other edges to the landing pad to be split
as well (effectively).  Instead, just recover from the most likely cases
during inlining.  The best long-term solution is to change the exception
representation and commit to either requiring or not requiring the more
complex edge-splitting logic;  this is just a shorter-term hack.

llvm-svn: 132799
2011-06-09 20:06:24 +00:00
Galina Kistanova
6f214c237a Added dg.exp to run FrontendC ARM-dependent tests; updated inline-asm-multichar.c test per this change.
llvm-svn: 132785
2011-06-09 17:18:37 +00:00