mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-21 03:53:04 +02:00
Commit Graph

81211 Commits

Author SHA1 Message Date
Jakob Stoklund Olesen
e6574db283 Don't kill the base register when expanding strd.
When an strd instruction doesn't get the registers it wants, it can be
expanded into two str instructions. Make sure the first str doesn't kill
the base register in the case where the base and data registers are
identical:

  t2STRi12 %R0<kill>, %R0, 4, pred:14, pred:%noreg
  t2STRi12 %R2<kill>, %R0, 8, pred:14, pred:%noreg

<rdar://problem/11101911>

llvm-svn: 153611
2012-03-28 23:07:03 +00:00
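
A minimal, self-contained C++ model of the rule this commit enforces; the StoreOp struct and expandStrd helper are invented for illustration and are not the actual ARMExpandPseudo code. When the strd is split and the base register is also the low data register, the first str must not carry a kill of it.

  #include <cassert>
  #include <cstdio>
  #include <cstring>

  // Toy stand-ins for machine operands. In LLVM these are MachineOperands
  // with kill flags; plain structs are enough to show the rule.
  struct StoreOp {
    const char *DataReg;
    bool KillData;       // is the data register marked killed here?
    const char *BaseReg;
    bool KillBase;       // is the base register marked killed here?
    int Offset;
  };

  // Expand "strd DataLo, DataHi, [Base, #Off]" into two word stores. The base
  // stays live until the second store, so the first store must never carry a
  // kill of it -- in particular not via its data operand when DataLo == Base.
  void expandStrd(const char *DataLo, const char *DataHi, const char *Base,
                  bool BaseIsDead, int Off, StoreOp Out[2]) {
    bool BaseEqualsDataLo = std::strcmp(DataLo, Base) == 0;
    Out[0] = {DataLo, /*KillData=*/!BaseEqualsDataLo, Base, /*KillBase=*/false, Off};
    Out[1] = {DataHi, /*KillData=*/true, Base, /*KillBase=*/BaseIsDead, Off + 4};
  }

  int main() {
    StoreOp Stores[2];
    // The case from the commit message: base and low data register are both
    // R0, so the first store must not mark R0 killed.
    expandStrd("R0", "R2", "R0", /*BaseIsDead=*/true, 4, Stores);
    assert(!Stores[0].KillBase && !Stores[0].KillData);
    for (const StoreOp &S : Stores)
      std::printf("t2STRi12 %%%s%s, %%%s%s, %d\n", S.DataReg,
                  S.KillData ? "<kill>" : "", S.BaseReg,
                  S.KillBase ? "<kill>" : "", S.Offset);
  }
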
Jakob Stoklund Olesen
ebee7e5cff Preserve implicit defs in ARMLoadStoreOptimizer.
When a number of sub-register VLDRS instructions are combined into a
VLDM, preserve any super-register implicit defs. This is required to
keep the register scavenger and machine code verifier happy.

Enable machine code verification after ARMLoadStoreOptimizer.
ARM/2012-01-26-CopyPropKills.ll was failing because of this.

llvm-svn: 153610
2012-03-28 22:50:56 +00:00
Jim Grosbach
cb8cd869d2 Tidy up. Whitespace.
llvm-svn: 153609
2012-03-28 22:34:41 +00:00
Danil Malyshev
aba46febe1 Move getPointerToNamedFunction() from JIT/MCJIT to JITMemoryManager.
llvm-svn: 153607
2012-03-28 21:46:36 +00:00
Rafael Espindola
1b3dfa1ce2 Handle intrinsics in GlobalsModRef. Fixes pr12351.
llvm-svn: 153604
2012-03-28 21:31:24 +00:00
Jakob Stoklund Olesen
7623979dd6 Spill DPair registers, not just QPR.
The arm_neon intrinsics can create virtual registers from the DPair
register class which allows both even-odd and odd-even D-register pairs.

This fixes PR12389.

llvm-svn: 153603
2012-03-28 21:20:32 +00:00
Jakob Stoklund Olesen
6ce4ff3ff7 Also verify after ExpandPostRAPseudos.
llvm-svn: 153599
2012-03-28 20:49:30 +00:00
Bill Wendling
02297eee5b Inline function into its one caller.
llvm-svn: 153598
2012-03-28 20:48:49 +00:00
Jakob Stoklund Olesen
f5df00f0fb Enable machine code verification after the late machine optimization passes.
Branch folding invalidates liveness and disables liveness verification
on some targets.

llvm-svn: 153597
2012-03-28 20:47:37 +00:00
Jakob Stoklund Olesen
37927fe83c Skip liveness verification when MRI->tracksLiveness() is false.
Extract the liveness verification into its own method.

This makes it possible to run the machine code verifier after liveness
information is no longer required to be valid.

llvm-svn: 153596
2012-03-28 20:47:35 +00:00
Bill Wendling
3f5e3c9839 Reformat the LTOModule code to be more in line with LLVM's coding standards. Add
a bunch of comments for the various functions. No intended functionality change.

llvm-svn: 153595
2012-03-28 20:46:54 +00:00
Jakob Stoklund Olesen
2c29e5d7f9 Revert r153516: "Invalidate liveness in Thumb2ITBlockPass."
Revert r153519: "ARMLoadStoreOptimizer invalidates register liveness."

These patches caused miscompilations in povray by turning off branch
folding's updating of live-in lists.

It turns out that the late scheduler depends on the live-in lists, even
if it doesn't need correct kill flags.

<rdar://problem/11139228>

llvm-svn: 153593
2012-03-28 20:11:44 +00:00
Jakob Stoklund Olesen
7c56ad07a6 Allow removeLiveIn to be called with a register that isn't live-in.
This avoids the silly double search:

  if (isLiveIn(Reg))
    removeLiveIn(Reg);

llvm-svn: 153592
2012-03-28 20:11:42 +00:00
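
A small standalone sketch of the change, using a toy Block type rather than MachineBasicBlock: removeLiveIn now does the membership check itself, so the isLiveIn/removeLiveIn double search above is no longer needed.

  #include <algorithm>
  #include <cassert>
  #include <vector>

  // Toy live-in list; in LLVM this lives on MachineBasicBlock.
  struct Block {
    std::vector<unsigned> LiveIns;

    bool isLiveIn(unsigned Reg) const {
      return std::find(LiveIns.begin(), LiveIns.end(), Reg) != LiveIns.end();
    }

    // After this commit, removeLiveIn tolerates registers that are not in the
    // list, so "if (isLiveIn(Reg)) removeLiveIn(Reg);" collapses to one call.
    void removeLiveIn(unsigned Reg) {
      auto I = std::find(LiveIns.begin(), LiveIns.end(), Reg);
      if (I != LiveIns.end())
        LiveIns.erase(I);
    }
  };

  int main() {
    Block B;
    B.LiveIns = {1, 2, 3};
    B.removeLiveIn(2);   // removes it
    B.removeLiveIn(42);  // not live-in: now simply a no-op
    assert(!B.isLiveIn(2) && B.LiveIns.size() == 2);
  }
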
Chad Rosier
3f0e43807e Revert r153521 as it's causing large regressions on the nightly testers.
Original commit message for r153521 (aka r153423):
Use the new range metadata in computeMaskedBits and add a new optimization to
instruction simplify that lets us remove an and when loading a boolean value.

llvm-svn: 153587
2012-03-28 18:42:50 +00:00
Pete Cooper
3bf62a9db3 Fixed a commuteInstructions bug where, if it's called pre-regalloc, the subreg indices weren't commuted
llvm-svn: 153579
2012-03-28 17:02:22 +00:00
Benjamin Kramer
b6baea7014 GlobalOpt: If we have an inbounds GEP from a ConstantAggregateZero global that we just determined to be constant, replace all loads from it with a zero value.
llvm-svn: 153576
2012-03-28 14:50:09 +00:00
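
An illustrative model of the folding in plain C++ (the ZeroGlobal type and foldLoadFromZeroGlobal helper are hypothetical, not GlobalOpt's code): once a global is known to be an all-zero constant aggregate, a load through any inbounds index into it can be replaced with zero.

  #include <cassert>
  #include <cstddef>

  // Toy model: a global that GlobalOpt has just proven to be a constant
  // ConstantAggregateZero, i.e. every element is zero.
  struct ZeroGlobal {
    std::size_t NumElements;
  };

  // Stand-in for folding "load (inbounds gep @g, idx)". For an all-zero
  // constant aggregate, an inbounds index always yields zero, so the load
  // never has to be emitted at all.
  bool foldLoadFromZeroGlobal(const ZeroGlobal &G, std::size_t Index,
                              int &FoldedValue) {
    if (Index >= G.NumElements)
      return false;        // not inbounds; leave the load alone
    FoldedValue = 0;       // every element of a zeroinitializer is 0
    return true;
  }

  int main() {
    ZeroGlobal G{16};
    int V = -1;
    assert(foldLoadFromZeroGlobal(G, 5, V) && V == 0);
  }
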
Benjamin Kramer
b3055f03e1 Add another note about a missed compare with nsw arithmetic instcombine.
llvm-svn: 153574
2012-03-28 10:50:18 +00:00
Richard Barton
201661d4bc Fix up the VST1.32 with-writeback instruction. Also refactor the non-writeback version.
llvm-svn: 153573
2012-03-28 10:18:11 +00:00
Chandler Carruth
e507ddfe74 Switch to WeakVHs in the value mapper, and aggressively prune dead basic
blocks in the function cloner. This removes the last case of trivially
dead code that I've been seeing in the wild getting inlined, analyzed,
re-inlined, optimized, only to be deleted. Nukes a FIXME from the
cleanup tests.

llvm-svn: 153572
2012-03-28 08:38:27 +00:00
Eric Christopher
0f1d5b363c More debug output.
llvm-svn: 153571
2012-03-28 07:34:36 +00:00
Eric Christopher
57ec9a8587 Fix the output of the DW_TAG_friend tag to include DW_AT_friend
and not the rest of the member tag.

Fixes PR11695

llvm-svn: 153570
2012-03-28 07:34:31 +00:00
Bill Wendling
853517869b Some whitespace cleanup.
llvm-svn: 153567
2012-03-28 04:17:34 +00:00
Bill Wendling
8d0a85a379 Use the correct filename for the error message.
llvm-svn: 153564
2012-03-28 02:39:06 +00:00
Bill Wendling
ec11c59c98 Use Nakamura's suggestion of bypassing 'filename' and just using the pointers directly.
llvm-svn: 153558
2012-03-28 01:30:51 +00:00
Akira Hatanaka
ef68ecdecc Turn off post-RA scheduler by default.
llvm-svn: 153557
2012-03-28 00:52:23 +00:00
Chad Rosier
23505e5bb6 Fix 80-column violation.
llvm-svn: 153556
2012-03-28 00:35:33 +00:00
Akira Hatanaka
3d463d748a Fix test case.
llvm-svn: 153555
2012-03-28 00:25:01 +00:00
Akira Hatanaka
d2ce66b138 Turn on post register allocation scheduler.
llvm-svn: 153554
2012-03-28 00:24:17 +00:00
Akira Hatanaka
fcd8108a1d Sort relocation entries before they are written out to a file. The MIPS ABI
imposes the constraint that a GOT16 relocation referring to a local symbol, or a
HI16 relocation, has to be followed immediately by a matching LO16 relocation.

llvm-svn: 153553
2012-03-28 00:23:33 +00:00
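
A self-contained sketch of the kind of reordering described here, with made-up relocation records rather than the real MC/ELF writer structures; for simplicity it pairs every GOT16 with a LO16, whereas the ABI only requires that for GOT16 against local symbols.

  #include <cstddef>
  #include <cstdio>
  #include <string>
  #include <vector>

  enum class RelocType { HI16, GOT16, LO16, Other };

  struct Reloc {
    RelocType Type;
    std::string Sym;  // symbol the relocation refers to
  };

  static bool needsMatchingLo16(const Reloc &R) {
    return R.Type == RelocType::HI16 || R.Type == RelocType::GOT16;
  }

  // Reorder so that each HI16/GOT16 is immediately followed by its LO16.
  std::vector<Reloc> sortRelocs(const std::vector<Reloc> &In) {
    std::vector<Reloc> Out;
    std::vector<bool> Used(In.size(), false);
    for (std::size_t I = 0; I != In.size(); ++I) {
      if (Used[I])
        continue;
      Out.push_back(In[I]);
      if (!needsMatchingLo16(In[I]))
        continue;
      // Pull the first unused LO16 against the same symbol up next to it.
      for (std::size_t J = I + 1; J != In.size(); ++J) {
        if (!Used[J] && In[J].Type == RelocType::LO16 && In[J].Sym == In[I].Sym) {
          Out.push_back(In[J]);
          Used[J] = true;
          break;
        }
      }
    }
    return Out;
  }

  int main() {
    std::vector<Reloc> In = {{RelocType::HI16, "foo"},
                             {RelocType::Other, "bar"},
                             {RelocType::LO16, "foo"}};
    // Prints the HI16/LO16 pair for "foo" adjacent, then the unrelated entry.
    for (const Reloc &R : sortRelocs(In))
      std::printf("%d %s\n", static_cast<int>(R.Type), R.Sym.c_str());
  }
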
Akira Hatanaka
a495c4cbaf Emit all directives except for ".cprestore" during asm printing rather than
emitting them as machine instructions. Directives ".set noat" and ".set at" are now
emitted only at the beginning and end of a function, except where they are emitted
to enclose a .cpload with an immediate operand that doesn't fit in a 16-bit field,
or to enclose unaligned loads/stores.

Also, make the following changes:
- Remove function isUnalignedLoadStore and use a switch-case statement to
  determine whether an instruction is an unaligned load or store.

- Define helper function CreateMCInst which generates an instance of an MCInst
  from an opcode and a list of operands.

llvm-svn: 153552
2012-03-28 00:22:50 +00:00
Akira Hatanaka
8f8ee2351c Set the neverHasSideEffects flag on pattern-less instructions that do not have
any side effects.

llvm-svn: 153551
2012-03-28 00:21:37 +00:00
Francois Pichet
d04f23f329 MSVC doesn't like the mixing of declarations and statements in a .c file.
llvm-svn: 153549
2012-03-27 23:52:22 +00:00
Benjamin Kramer
4be5a4bca4 Add a note about a cute little fabs optimization.
llvm-svn: 153543
2012-03-27 22:42:42 +00:00
Benjamin Kramer
cc6159f7d9 Add two missed instcombines related to compares with nsw arithmetic.
llvm-svn: 153542
2012-03-27 22:03:19 +00:00
Bill Wendling
5c8d6c7990 Try to use the CWD if the path to the GCDA output is not available (e.g., the
executable has been moved to another machine). If the CWD isn't writable either
(e.g., it's read-only), then exit gracefully.
<rdar://problem/11111686>

llvm-svn: 153538
2012-03-27 21:17:04 +00:00
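
A rough standalone sketch of the fallback described above; openGCDAOutput is a hypothetical helper, not the actual GCDAProfiling runtime code. It tries the recorded path, falls back to the bare filename in the CWD, and gives up quietly if neither can be opened.

  #include <cstdio>
  #include <string>

  // Return a writable stream for the .gcda output, or nullptr to skip emission.
  std::FILE *openGCDAOutput(const std::string &RecordedPath) {
    if (std::FILE *F = std::fopen(RecordedPath.c_str(), "ab"))
      return F;  // the compile-time path still exists

    // The executable may have moved to another machine: strip the directories
    // and retry with just the filename, i.e. relative to the CWD.
    std::size_t Slash = RecordedPath.find_last_of('/');
    std::string Base =
        Slash == std::string::npos ? RecordedPath : RecordedPath.substr(Slash + 1);
    if (std::FILE *F = std::fopen(Base.c_str(), "ab"))
      return F;

    // CWD is read-only (or otherwise unusable): exit gracefully, no profile.
    std::fprintf(stderr, "profiling: cannot open %s\n", Base.c_str());
    return nullptr;
  }

  int main() {
    if (std::FILE *F = openGCDAOutput("/build/dir/that/moved/test.gcda"))
      std::fclose(F);
  }
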
Akira Hatanaka
3853dcca58 Remove trailing white space.
llvm-svn: 153536
2012-03-27 20:35:51 +00:00
Lang Hames
6c935e3700 Use a SmallVector and linear lookup instead of a DenseSet - SourceMap values
will always be tiny sets, so DenseSet is overkill (SmallSet won't work as we
need iteration support). 

llvm-svn: 153529
2012-03-27 19:10:45 +00:00
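
A standalone illustration of the trade-off, with std::vector standing in for SmallVector and a made-up TinySet type rather than the register allocator's SourceMap: for a handful of elements a linear scan is cheap and, unlike SmallSet, the container remains iterable.

  #include <algorithm>
  #include <cassert>
  #include <vector>

  // Tiny set with a linear membership test; for a handful of elements this
  // beats a hash set and, unlike SmallSet, it is trivially iterable.
  struct TinySet {
    std::vector<unsigned> Vals;  // SmallVector<unsigned, 4> in LLVM terms

    bool insert(unsigned V) {
      if (std::find(Vals.begin(), Vals.end(), V) != Vals.end())
        return false;  // already present
      Vals.push_back(V);
      return true;
    }

    bool contains(unsigned V) const {
      return std::find(Vals.begin(), Vals.end(), V) != Vals.end();
    }
  };

  int main() {
    TinySet S;
    S.insert(7);
    S.insert(7);            // deduplicated by the linear scan
    S.insert(9);
    assert(S.contains(9) && S.Vals.size() == 2);
    for (unsigned V : S.Vals) (void)V;  // iteration support, unlike SmallSet
  }
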
Akira Hatanaka
0a613888c3 Add member EmitNOAT and its setter and getter functions to class MipsFunctionInfo.
If EmitNOAT is true, directives ".set noat" and ".set at" are emitted at the
beginning and end of a function. 

llvm-svn: 153528
2012-03-27 19:08:42 +00:00
Eric Christopher
324ff39943 Add a test for the previous commit. Also, remove two tests that were
testing a) the wrong behavior or b) something that I'm already testing
in the new test.

llvm-svn: 153525
2012-03-27 18:35:57 +00:00
Eric Christopher
5f34828440 Use DW_AT_low_pc for a single entry point into a routine.
Fixes PR10105

llvm-svn: 153524
2012-03-27 18:35:54 +00:00
Chad Rosier
d3fe1fcda9 Reapply r153423; the original commit was fine. The failing test, distray, had
undefined behavior, which Rafael was kind enough to fix.

Original commit message for r153423:
Use the new range metadata in computeMaskedBits and add a new optimization to
instruction simplify that lets us remove an and when loading a boolean value.

llvm-svn: 153521
2012-03-27 17:44:52 +00:00
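
A worked C++ illustration of the reasoning behind the simplification (not the computeMaskedBits/InstSimplify code, and it enumerates the range instead of reasoning about known bits): if range metadata says a loaded value lies in [0, 2), the value is already 0 or 1, so an and with 1 changes nothing and can be removed.

  #include <cassert>
  #include <cstdint>

  // Model of what the range metadata tells us: the loaded value is known to
  // be in the half-open range [Lo, Hi).
  struct KnownRange {
    uint64_t Lo, Hi;
  };

  // If every value in the range already has all bits outside Mask clear, then
  // "value & Mask" is a no-op and the and instruction can be removed.
  bool andIsRedundant(KnownRange R, uint64_t Mask) {
    for (uint64_t V = R.Lo; V < R.Hi; ++V)
      if ((V & Mask) != V)
        return false;
    return true;
  }

  int main() {
    // A boolean load carries !range !{i8 0, i8 2}: values 0 and 1 only.
    assert(andIsRedundant({0, 2}, 1));   // and %v, 1  ->  %v
    assert(!andIsRedundant({0, 4}, 1));  // a value of 2 or 3 would change
  }
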
Jakob Stoklund Olesen
48d7c2b088 ARMLoadStoreOptimizer invalidates register liveness.
This pass tries to update kill flags, but there are still many bugs.
Passes after the load/store optimizer don't need accurate liveness, so
don't even try.

<rdar://problem/11101911>

llvm-svn: 153519
2012-03-27 17:33:52 +00:00
Jakob Stoklund Olesen
21fd015586 Print SSA and liveness tracking flags in MF::print().
llvm-svn: 153518
2012-03-27 17:17:16 +00:00
Jakob Stoklund Olesen
01ae333053 Branch folding may invalidate liveness.
Branch folding can use a register scavenger to update liveness
information when required. Don't do that if liveness information is
already invalid.

llvm-svn: 153517
2012-03-27 17:06:09 +00:00
Jakob Stoklund Olesen
1f52931c1b Invalidate liveness in Thumb2ITBlockPass.
llvm-svn: 153516
2012-03-27 17:06:06 +00:00
Chris Lattner
bab5825de7 fix what looks like a real logic bug, found by PVS-Studio (part of PR12357)
llvm-svn: 153513
2012-03-27 16:27:21 +00:00
Jakob Stoklund Olesen
7ba0f121e5 Add an MRI::tracksLiveness() flag.
Late optimization passes like branch folding and tail duplication can
transform the machine code in a way that makes it expensive to keep the
register liveness information up to date. There is a fuzzy line between
register allocation and late scheduling where the liveness information
degrades.

The MRI::tracksLiveness() flag makes the line clear: While true,
liveness information is accurate, and can be used for register
scavenging. Once the flag is false, liveness information is not
accurate, and can only be used as a hint.

Late passes generally don't need the liveness information, but they will
sometimes use the register scavenger to help update it. The scavenger
enforces strict correctness, and we have to spend a lot of code to
update register liveness that may never be used.

llvm-svn: 153511
2012-03-27 15:13:58 +00:00
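
A minimal model of the flag's contract as described above; RegInfoModel and latePass are invented stand-ins for MachineRegisterInfo and a late pass, used only to show the guard pattern.

  #include <cassert>

  // While TracksLiveness is true, live-in lists and kill flags are accurate
  // and may be used for register scavenging; once cleared it never goes back
  // to true, and liveness may only be used as a hint.
  class RegInfoModel {
    bool TracksLiveness = true;

  public:
    bool tracksLiveness() const { return TracksLiveness; }
    void invalidateLiveness() { TracksLiveness = false; }  // one-way switch
  };

  // A late pass that would like scavenger help but can live without it.
  void latePass(RegInfoModel &MRI, bool transformInvalidatesLiveness) {
    if (MRI.tracksLiveness()) {
      // Safe to scavenge / rely on kill flags here.
    } else {
      // Liveness is only a hint; skip anything that needs it to be exact.
    }
    if (transformInvalidatesLiveness)
      MRI.invalidateLiveness();  // tell later passes not to trust liveness
  }

  int main() {
    RegInfoModel MRI;
    latePass(MRI, /*transformInvalidatesLiveness=*/true);
    assert(!MRI.tracksLiveness());
  }
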
NAKAMURA Takumi
eb0805cb93 llvm/docs/*.html: Fix markups.
llvm-svn: 153508
2012-03-27 11:25:16 +00:00
Chandler Carruth
cd6d2689a0 Make a seemingly tiny change to the inliner and fix the generated code
size bloat. Unfortunately, I expect this to disable the majority of the
benefit from r152737. I'm hopeful at least that it will fix PR12345. To
explain this requires... quite a bit of backstory I'm afraid.

TL;DR: The change in r152737 actually did The Wrong Thing for
linkonce-odr functions. This change makes it do the right thing. The
benefits we saw were simple luck, not any actual strategy. Benchmark
numbers after a mini-blog-post so that I've written down my thoughts on
why all of this works and doesn't work...

To understand what's going on here, you have to understand how the
"bottom-up" inliner actually works. There are two fundamental modes to
the inliner:

1) Standard fixed-cost bottom-up inlining. This is the mode we usually
   think about. It walks from the bottom of the CFG up to the top,
   looking at callsites, taking information about the callsite and the
   called function and computing the expected cost of inlining into that
   callsite. If the cost is under a fixed threshold, it inlines. It's
   a touch more complicated than that due to all the bonuses, weights,
   etc. Inlining the last callsite to an internal function gets higher
   weight, etc. But essentially, this is the mode of operation.

2) Deferred bottom-up inlining (a term I just made up). This is the
   interesting mode for this patch and r152737. Initially, this works
   just like mode #1, but once we have the cost of inlining into the
   callsite, we don't just compare it with a fixed threshold. First, we
   check something else. Let's give some names to the entities at this
   point, or we'll end up hopelessly confused. We're considering
   inlining a function 'A' into its callsite within a function 'B'. We
   want to check whether 'B' has any callers, and whether it might be
   inlined into those callers. If so, we also check whether inlining 'A'
   into 'B' would block any of the opportunities for inlining 'B' into
   its callers. We take the sum of the costs of inlining 'B' into its
   callers where that inlining would be blocked by inlining 'A' into
   'B', and if that cost is less than the cost of inlining 'A' into 'B',
   then we skip inlining 'A' into 'B'.

Now, in order for #2 to make sense, we have to have some confidence that
we will actually have the opportunity to inline 'B' into its callers
when cheaper, *and* that we'll be able to revisit the decision and
inline 'A' into 'B' if that ever becomes the correct tradeoff. This
often isn't true for external functions -- we can see very few of their
callers, and we won't be able to re-consider inlining 'A' into 'B' if
'B' is external when we finally see more callers of 'B'. There are two
cases where we believe this to be true for C/C++ code: functions local
to a translation unit, and functions with an inline definition in every
translation unit which uses them. These are represented as internal
linkage and linkonce-odr (resp.) in LLVM. I enabled this logic for
linkonce-odr in r152737.

Unfortunately, when I did that, I also introduced a subtle bug. There
was an implicit assumption that the last caller of the function within
the TU was the last caller of the function in the program. We want to
bonus the last caller of the function in the program by a huge amount
for inlining because inlining that callsite has very little cost.
Unfortunately, the last caller in the TU of a linkonce-odr function is
*not* the last caller in the program, and so we don't want to apply this
bonus. If we do, we can apply it to one callsite *per-TU*. Because of
the way deferred inlining works, when it sees this bonus applied to one
callsite in the TU for 'B', it decides that inlining 'B' is of the
*utmost* importance just so we can get that final bonus. It then
proceeds to essentially force deferred inlining regardless of the actual
cost tradeoff.

The result? PR12345: code bloat, code bloat, code bloat. Another result
is getting *damn* lucky on a few benchmarks, and the over-inlining
exposing critically important optimizations. I would very much like
a list of benchmarks that regress after this change goes in, with
bitcode before and after. This will help me greatly understand what
opportunities the current cost analysis is missing.

Initial benchmark numbers look very good. WebKit files that exhibited
the worst of PR12345 went from growing to shrinking compared to Clang
with r152737 reverted.

- Bootstrapped Clang is 3% smaller with this change.
- Bootstrapped Clang -O0 over a single-source-file of lib/Lex is 4%
  faster with this change.

Please let me know about any other performance impact you see. Thanks to
Nico for reporting and urging me to actually fix, Richard Smith, Duncan
Sands, Manuel Klimek, and Benjamin Kramer for talking through the issues
today.

llvm-svn: 153506
2012-03-27 10:48:28 +00:00
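
A toy numeric sketch of the mode #2 comparison described in this message; all names, costs, and the threshold-free comparison are invented for illustration and this is not the real inline cost analysis.

  #include <cstdio>
  #include <numeric>
  #include <vector>

  // Invented cost numbers purely to make the comparison concrete.
  struct DeferralInput {
    int CostAIntoB;                  // cost of inlining A into its call site in B
    std::vector<int> BlockedBCosts;  // costs of inlining B into callers that
                                     // inlining A into B would block
  };

  // Mode #2 from the commit message: skip (defer) inlining A into B when the
  // inlinings of B it would block are, in total, cheaper than inlining A now.
  bool shouldDeferInliningAIntoB(const DeferralInput &In) {
    int BlockedTotal =
        std::accumulate(In.BlockedBCosts.begin(), In.BlockedBCosts.end(), 0);
    return !In.BlockedBCosts.empty() && BlockedTotal < In.CostAIntoB;
  }

  int main() {
    // Two callers of B would lose their inlining opportunity, but they are
    // cheap (20 + 25) compared with inlining A into B (90), so defer.
    DeferralInput In{90, {20, 25}};
    std::printf("defer: %s\n", shouldDeferInliningAIntoB(In) ? "yes" : "no");
  }
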
Craig Topper
bf6a47d0ec Prune some includes
llvm-svn: 153502
2012-03-27 07:54:11 +00:00