mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-11-24 03:33:20 +01:00
Commit Graph

77895 Commits

Author SHA1 Message Date
Evan Cheng
ba11ee300a At -O0, multiple uses of a virtual register in the same BB are being marked
"kill". This looks like a bug upstream. Since that's going to take some time
to understand, loosen the assertion and disable the optimization when
multiple kills are seen.

llvm-svn: 144568
2011-11-14 21:02:09 +00:00
Nick Lewycky
53185e9016 Add support for tsan annotations (thread sanitizer, a valgrind-based tool).
These annotations are disabled entirely when either ENABLE_THREADS is off or a
release build is being made. When enabled, they add calls to functions with no
statements to ManagedStatic's getters.

Use these annotations to inform tsan that the race used inside ManagedStatic
initialization is actually benign. Thanks to Kostya Serebryany for helping
write this patch!

llvm-svn: 144567
2011-11-14 20:50:16 +00:00
Evan Cheng
2034ff3b0b Add a missing pattern for X86ISD::MOVLPD. rdar://10436044
llvm-svn: 144566
2011-11-14 20:35:52 +00:00
Chad Rosier
65395ac4d0 Add support for Thumb load/stores with negative offsets.
rdar://10412592

llvm-svn: 144565
2011-11-14 20:22:27 +00:00
Benjamin Kramer
8450065be6 Unbreak Release builds.
llvm-svn: 144560
2011-11-14 19:51:48 +00:00
Evan Cheng
f19d257488 Teach two-address pass to re-schedule two-address instructions (or the kill
instructions of the two-address operands) in order to avoid inserting copies.
This fixes the few regressions introduced when the two-address hack was
disabled (without regressing the improvements).
rdar://10422688

llvm-svn: 144559
2011-11-14 19:48:55 +00:00
Pete Cooper
c9d6834f38 Changed SSE4/AVX <2 x i64> extract and insert ops to be Custom lowered
The constant idx case is still done in tablegen, but other cases are then expanded.

Fixes <rdar://problem/10435460>

llvm-svn: 144557
2011-11-14 19:38:42 +00:00
Benjamin Kramer
720b712c77 Fold ConstantVector::isAllOnesValue into Constant::isAllOnesValue and simplify it.
llvm-svn: 144555
2011-11-14 19:12:20 +00:00
Akira Hatanaka
9dd2dd0320 32-to-64-bit extended load.
llvm-svn: 144554
2011-11-14 19:06:14 +00:00
Akira Hatanaka
282b708fcb AnalyzeCallOperands function for N32/64.
N32/64 places all variable arguments in integer registers (or on the stack),
regardless of their types, but follows the calling convention of a non-vaarg
function when it handles fixed arguments.

llvm-svn: 144553
2011-11-14 19:02:54 +00:00
Akira Hatanaka
b147de0ada Modify LowerFormalArguments to correctly handle vaarg arguments for Mips64.
llvm-svn: 144552
2011-11-14 19:01:09 +00:00
Justin Holewinski
4e7a1c571b PTX: Let LLVM use loads/stores for all mem* intrinsics, instead of relying on custom implementations.
llvm-svn: 144551
2011-11-14 18:58:20 +00:00
Wesley Peck
8b241a329d Add release notes for the MicroBlaze backend.
llvm-svn: 144550
2011-11-14 18:56:41 +00:00
Akira Hatanaka
db610fb423 Remove the variable that keeps the size of the area used to save byval or
variable argument registers on the callee's stack frame, along with the functions
that set and get it.
    
It is not necessary to add the size of this area when computing stack size in
emitPrologue, since it has already been accounted for in
PEI::calculateFrameObjectOffsets.

llvm-svn: 144549
2011-11-14 18:56:20 +00:00
Jakob Stoklund Olesen
6035535c96 Fix early-clobber handling in shrinkToUses.
I broke this in r144515, it affected most ARM testers.

<rdar://problem/10441389>

llvm-svn: 144547
2011-11-14 18:45:38 +00:00
Bob Wilson
f76590b3c5 Disable generation of compact unwind encodings. <rdar://problem/10441578>
This still seems to be causing some failures.  It needs more testing before
it gets enabled again.

llvm-svn: 144543
2011-11-14 18:21:07 +00:00
Jakob Stoklund Olesen
83d6dda738 Delete stale comment.
llvm-svn: 144542
2011-11-14 18:03:05 +00:00
Jim Grosbach
d791cf718c Tidy up. 80 column.
llvm-svn: 144538
2011-11-14 17:52:47 +00:00
Benjamin Kramer
bff5aaee3f Make headers standalone.
llvm-svn: 144537
2011-11-14 17:45:03 +00:00
Benjamin Kramer
6d85bbc486 Make headers standalone, move a virtual method out of line.
llvm-svn: 144536
2011-11-14 17:22:45 +00:00
Daniel Dunbar
3fc722f848 build/Make: Switch over to using llvm-config-2 for dependencies one more (hopefully last) time, now that it also builds as a build tool.
llvm-svn: 144535
2011-11-14 17:17:45 +00:00
Chandler Carruth
9e6d173b9e It helps to deallocate memory as well as allocate it. =] This actually
cleans up all the chains allocated during the processing of each
function so that for very large inputs we don't just grow memory usage
without bound.

llvm-svn: 144533
2011-11-14 10:57:23 +00:00
Chandler Carruth
06afac4924 Remove an over-eager assert that was firing on one of the ARM regression
tests when I forcibly enabled block placement.

It is apparently possible for an unanalyzable block to fall through to
a non-loop block. I don't actually believe this is correct; I believe
that 'canFallThrough' is returning true needlessly for the code
construct, and I've left a bit of a FIXME on the verification code to
try to track down why this is coming up.

Anyways, removing the assert doesn't degrade the correctness of the algorithm.

llvm-svn: 144532
2011-11-14 10:55:53 +00:00
Chandler Carruth
a1475d9b6b Begin chipping away at one of the biggest quadratic-ish behaviors in
this pass. We're leaving already merged blocks on the worklist, and
scanning them again and again only to determine each time through that
indeed they aren't viable. We can instead remove them once we're going
to have to scan the worklist. This is the easy way to implement removing
them. If this remains on the profile (as I somewhat suspect it will), we
can get a lot more clever here, as the worklist's order is essentially
irrelevant. We can use swapping and fold the two loops to reduce
overhead even when there are many blocks on the worklist but only a few
of them are removed.

llvm-svn: 144531
2011-11-14 09:46:33 +00:00
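A hedged sketch of the swap-based, single-pass removal the commit alludes to. The
worklist's order is irrelevant, so a non-viable entry can be overwritten by the last
element and the vector shrunk, folding the scan and removal into one loop. The element
type and predicate here are generic stand-ins, not the pass's real worklist types:

    #include <cstddef>
    #include <vector>

    // Illustrative only: prune non-viable entries from an unordered worklist in
    // a single pass by swapping the tail element into each removed slot.
    template <typename T, typename Pred>
    void pruneWorkList(std::vector<T> &WorkList, Pred IsStillViable) {
      for (std::size_t I = 0; I < WorkList.size();) {
        if (IsStillViable(WorkList[I])) {
          ++I;                            // keep this entry and move on
        } else {
          WorkList[I] = WorkList.back();  // order doesn't matter: swap in the tail
          WorkList.pop_back();            // drop the now-duplicated tail entry
        }
      }
    }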
Chandler Carruth
f89087744e Under the hood, MBPI is doing a linear scan of every successor every
time it is queried to compute the probability of a single successor.
This makes computing the probability of every successor of a block in
sequence... really really slow. ;] This switches to a linear walk of the
successors rather than a quadratic one. One of several quadratic
behaviors slowing this pass down.

I'm not really thrilled with moving the sum code into the public
interface of MBPI, but I don't (at the moment) have ideas for a better
interface. The direction I'm thinking of for a better interface is to
have MBPI actually retain much more state and make *all* of these
queries cheap. That's a lot of work, and would require invasive changes.
Until then, this seems like the least bad (ie, least quadratic)
solution. Suggestions welcome.

llvm-svn: 144530
2011-11-14 09:12:57 +00:00
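To make the quadratic-versus-linear contrast concrete, here is a hedged sketch with
made-up helper names (this is not MBPI's actual interface): re-summing the weights for
every successor query versus summing them once and reusing the total. It assumes a
block's successor edge weights are plain uint32_t values with a non-zero total:

    #include <cstdint>
    #include <vector>

    // Quadratic pattern: each probability query re-scans all successor weights.
    std::vector<double> probsQuadratic(const std::vector<uint32_t> &Weights) {
      std::vector<double> Probs;
      for (uint32_t W : Weights) {
        uint64_t Sum = 0;
        for (uint32_t X : Weights)          // linear scan per query -> O(n^2) total
          Sum += X;
        Probs.push_back(double(W) / double(Sum));
      }
      return Probs;
    }

    // Linear pattern: sum the weights once, then divide each weight by the total.
    std::vector<double> probsLinear(const std::vector<uint32_t> &Weights) {
      uint64_t Sum = 0;
      for (uint32_t W : Weights)            // one pass to get the shared total
        Sum += W;
      std::vector<double> Probs;
      for (uint32_t W : Weights)
        Probs.push_back(double(W) / double(Sum));
      return Probs;
    }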
Tobias Grosser
5c125698be Add clang_complete to release notes
llvm-svn: 144529
2011-11-14 09:09:26 +00:00
Tobias Grosser
f15ffa7a2c Add Polly to release notes
llvm-svn: 144528
2011-11-14 09:09:23 +00:00
Chandler Carruth
09418993f8 Reuse the logic in getEdgeProbability within getHotSucc in order to
correctly handle blocks whose successor weights sum to more than
UINT32_MAX. This is slightly less efficient, but the entire thing is
already linear on the number of successors. Calling it within any hot
routine is a mistake, and indeed no one is calling it. It also
simplifies the code.

llvm-svn: 144527
2011-11-14 08:55:59 +00:00
Chandler Carruth
462bb16130 Fix an overflow bug in MachineBranchProbabilityInfo. This pass relied on
the sum of the edge weights not overflowing uint32, and crashed when
they did. This is generally safe as BranchProbabilityInfo tries to
provide this guarantee. However, the CFG can get modified during codegen
in a way that grows the *sum* of the edge weights. This doesn't seem
unreasonable (imagine just adding more blocks all with the default
weight of 16), but it is hard to come up with a case that actually
triggers 32-bit overflow. Fortunately, the single-source GCC build is
good at this. The solution isn't very pretty, but it's no worse than the
previous code. We're already summing all of the edge weights on each
query; we can sum them, check for an overflow, compute a scale, and sum
them again.

I've included a *greatly* reduced test case out of the GCC source that
triggers it. It's a pretty lame test, as it clearly is just barely
triggering the overflow. I'd like to have something that is much more
definitive, but I don't understand the fundamental pattern that triggers
an explosion in the edge weight sums.

The buggy code is duplicated within this file. I'll collapse them into
a single implementation in a subsequent commit.

llvm-svn: 144526
2011-11-14 08:50:16 +00:00
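A minimal sketch of the overflow-aware summation the commit describes, assuming the
weights are plain uint32_t values; this is illustrative only and not the actual
MachineBranchProbabilityInfo code:

    #include <cstdint>
    #include <vector>

    // Illustrative only: sum the edge weights in 64 bits, and if the total would
    // overflow a uint32_t, derive a scale factor and sum the scaled weights again.
    uint32_t sumEdgeWeights(const std::vector<uint32_t> &Weights) {
      uint64_t Sum = 0;
      for (uint32_t W : Weights)
        Sum += W;                            // 64-bit accumulation cannot overflow here

      uint64_t Scale = 1;
      if (Sum > UINT32_MAX)                  // the 32-bit total would overflow
        Scale = Sum / UINT32_MAX + 1;        // every weight gets divided by this factor

      uint64_t ScaledSum = 0;
      for (uint32_t W : Weights)
        ScaledSum += W / Scale;              // second pass over the scaled weights
      return static_cast<uint32_t>(ScaledSum);
    }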
Craig Topper
ee3a3cf35c Add AVX2 version of instructions to load folding tables. Also add a bunch of missing SSE/AVX instructions.
llvm-svn: 144525
2011-11-14 08:07:55 +00:00
Chandler Carruth
5d03d0351f Add a cautionary note to this API. It was not at all obvious to me how
expensive the most useful interface to this analysis is.

Fun story -- it's also not correct. That's getting fixed in another
patch.

llvm-svn: 144523
2011-11-14 06:51:49 +00:00
Craig Topper
e0b34012db Add neverHasSideEffects, mayLoad, and mayStore to many patternless SSE/AVX instructions. Remove MMX check from LowerVECTOR_SHUFFLE since MMX vector types won't go through it anyway.
llvm-svn: 144522
2011-11-14 06:46:21 +00:00
Chad Rosier
0e5094ca87 Add support for ARM halfword load/stores and signed byte loads with negative
offsets.
rdar://10412592

llvm-svn: 144518
2011-11-14 04:09:28 +00:00
Jakob Stoklund Olesen
25e009690c Use getVNInfoBefore() when it makes sense.
llvm-svn: 144517
2011-11-14 01:39:36 +00:00
Chandler Carruth
b7f21af176 Teach machine block placement to cope with unnatural loops. These don't
get loop info structures associated with them, and so we need some way
to make forward progress selecting and placing basic blocks. The
technique used here is pretty brutal -- it just scans the list of blocks
looking for the first unplaced candidate. It keeps placing blocks like
this until the CFG becomes tractable.

The cost is somewhat unfortunate, it requires allocating a vector of all
basic block pointers eagerly. I have some ideas about how to simplify
and optimize this, but I'm trying to get the logic correct first.

Thanks to Benjamin Kramer for the reduced test case out of GCC. Sadly
there are other bugs that GCC is tickling that I'm reducing and working
on now.

llvm-svn: 144516
2011-11-14 00:00:35 +00:00
Jakob Stoklund Olesen
44da32bc7f Use kill slots instead of the previous slot in shrinkToUses.
It's more natural to use the actual end points.

llvm-svn: 144515
2011-11-13 23:53:25 +00:00
Chandler Carruth
1d9335bbb2 Clean up some 80-column violations and poor formatting. These snuck by
when I was reading through the code for style.

llvm-svn: 144513
2011-11-13 22:50:09 +00:00
Jakob Stoklund Olesen
4255046343 Terminate all dead defs at the dead slot instead of the 'next' slot.
This makes no difference for normal defs, but early clobber dead defs
now look like:

  [Slot_EarlyClobber; Slot_Dead)

instead of:

  [Slot_EarlyClobber; Slot_Register).

Live ranges for normal dead defs look like:

  [Slot_Register; Slot_Dead)

as before.

llvm-svn: 144512
2011-11-13 22:42:13 +00:00
Craig Topper
8f22b6d95a Fix comment for LegalizeTypeAction enum.
llvm-svn: 144511
2011-11-13 22:11:24 +00:00
Jakob Stoklund Olesen
170f021d76 Simplify early clobber slots a bit.
llvm-svn: 144507
2011-11-13 22:05:42 +00:00
Chandler Carruth
7e39d0c643 Enhance the assertion mechanisms in place to make it easier to catch
when we fail to place all the blocks of a loop. Currently this is
happening for unnatural loops, and this logic helps point more
immediately to the problem.

llvm-svn: 144504
2011-11-13 21:39:51 +00:00
Jakob Stoklund Olesen
9b34607bdf Rename SlotIndexes to match how they are used.
The old naming scheme (load/use/def/store) can be traced back to an old
linear scan article, but the names don't match how slots are actually
used.

The load and store slots are not needed after the deferred spill code
insertion framework was deleted.

The use and def slots don't make any sense because we are using
half-open intervals as is customary in C code, but the names suggest
closed intervals.  In reality, these slots were used to distinguish
early-clobber defs from normal defs.

The new naming scheme also has 4 slots, but the names match how the
slots are really used.  This is a purely mechanical renaming, but some
of the code makes a lot more sense now.

llvm-svn: 144503
2011-11-13 20:45:27 +00:00
Craig Topper
0c7b1d7cf1 Add BLSI, BLSMSK, and BLSR to getTargetNodeName.
llvm-svn: 144502
2011-11-13 17:31:07 +00:00
Chandler Carruth
bbc3bddb16 Teach MBP to force-merge layout successors for blocks with unanalyzable
branches that also may involve fallthrough. In the case of blocks with
no fallthrough, we can still re-order the blocks profitably. For example,
instruction decoding will in some cases continue past an indirect jump,
making it profitable to lay out its most likely successor there.

Note, no test case. I don't know how to write a test case that exercises
this logic, but it matches the desired semantics described in
discussions with Jakob and others. If anyone has a nice example of IR
that will trigger this, that would be lovely.

Also note, there are still assertion failures in real world code with
this. I'm digging into those next, now that I know this isn't the cause.

llvm-svn: 144499
2011-11-13 12:17:28 +00:00
Chandler Carruth
07f8871a86 Hoist another gross nested loop into a helper method.
llvm-svn: 144498
2011-11-13 11:42:26 +00:00
Chandler Carruth
e85aeac703 Add a missing doxygen comment for a helper method.
llvm-svn: 144497
2011-11-13 11:34:55 +00:00
Chandler Carruth
535c0683ea Hoist a nested loop into its own method.
llvm-svn: 144496
2011-11-13 11:34:53 +00:00
Chandler Carruth
e67c92282f Rewrite #3 of machine block placement. This is based somewhat on the
second algorithm, but only loosely. It is more heavily based on the last
discussion I had with Andy. It continues to walk from the inner-most
loop outward, but there is a key difference. With this algorithm we
ensure that as we visit each loop, the entire loop is merged into
a single chain. At the end, the entire function is treated as a "loop",
and merged into a single chain. This chain forms the desired sequence of
blocks within the function. Switching to a single algorithm removes my
biggest problem with the previous approaches -- they had different
behavior depending on which system triggered the layout. Now there is
exactly one algorithm and one basis for the decision making.

The other key difference is how the chain is formed. This is based
heavily on the idea Andy mentioned of keeping a worklist of blocks that
are viable layout successors based on the CFG. Having this set allows us
to consistently select the best layout successor for each block. It is
expensive though.

The code here remains very rough. There is a lot that needs to be done
to clean up the code, and to make the runtime cost of this pass much
lower. Very much a WIP, but this was a giant chunk of code and I'd rather
folks see it sooner rather than later. Everything remains behind a flag, of
course.

I've added a couple of tests to exercise the issues that this iteration
was motivated by: loop structure preservation. I've also fixed one test
that was exhibiting the broken behavior of the previous version.

llvm-svn: 144495
2011-11-13 11:20:44 +00:00
Chad Rosier
58ab241006 The order in which the predicate is added differs between Thumb and ARM mode. Fix predicate when in ARM mode and restore SelectIntrinsicCall.
llvm-svn: 144494
2011-11-13 09:44:21 +00:00
Chad Rosier
8cfccc356e Temporarily disable SelectIntrinsicCall when in ARM mode. This is causing failures.
llvm-svn: 144492
2011-11-13 05:14:43 +00:00