mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-11-01 08:23:21 +01:00
Commit Graph

81344 Commits

Author SHA1 Message Date
Andrew Trick
31337c9d64 misched: Add finalizeScheduler to complete the target interface.
llvm-svn: 153827
2012-04-01 07:24:23 +00:00
Eli Bendersky
f79279db2f Removing a file that's no longer being used after the recent refactorings
llvm-svn: 153825
2012-04-01 06:50:01 +00:00
Hal Finkel
42a487282a Split the LdStGeneral PPC itin. class into LdStLoad and LdStStore.
Loads and stores can have different pipeline behavior, especially on
embedded chips. This change allows those differences to be expressed.
Except for the 440 scheduler, there are no functionality changes.
On the 440, the latency adjustment is only by one cycle, and so this
probably does not affect much. Nevertheless, it will make a larger
difference in the future and this removes a FIXME from the 440 itin.

llvm-svn: 153821
2012-04-01 04:44:16 +00:00
Rafael Espindola
5eae79c8cf Add a workaround for building with old versions of clang.
llvm-svn: 153820
2012-03-31 21:54:20 +00:00
Rafael Espindola
194491d9b0 Add a triple to the test.
llvm-svn: 153818
2012-03-31 18:59:07 +00:00
Rafael Espindola
2da83dbcd4 Teach CodeGen's version of computeMaskedBits to understand the range metadata.
This is the CodeGen equivalent of r153747. I tested that there is no noticeable
performance difference with any combination of -O0/-O2/-g when compiling
gcc as a single compilation unit.
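
For context, a rough sketch of the idea behind using !range metadata for known-bits computation follows; this is not the actual computeMaskedBits implementation, and the helper name plus the restriction to a zero lower bound are illustrative simplifications:

```cpp
#include <cstdint>
#include <iostream>

// If range metadata says a loaded value lies in [0, Hi), every bit above the
// width needed to represent Hi - 1 is known to be zero.
uint64_t knownZeroHighBitsMask(uint64_t Hi) {
  if (Hi <= 1)
    return ~0ull;                    // value must be 0: all bits known zero
  unsigned BitsNeeded = 0;
  for (uint64_t Max = Hi - 1; Max != 0; Max >>= 1)
    ++BitsNeeded;                    // bits needed for the largest possible value
  return BitsNeeded == 64 ? 0 : ~((1ull << BitsNeeded) - 1);
}

int main() {
  // A load known to be in [0, 256) has its top 56 bits known zero.
  std::cout << std::hex << knownZeroHighBitsMask(256) << "\n"; // ffffffffffffff00
}
```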

llvm-svn: 153817
2012-03-31 18:14:00 +00:00
Hal Finkel
548d6f1ad0 Fix dynamic linking on PPC64.
Dynamic linking on PPC64 has had problems since we had to move the top-down
hazard-detection logic post-ra. For dynamic linking to work there needs to be
a nop placed after every call. It turns out that it is really hard to guarantee
that nothing will be placed in between the call (bl) and the nop during post-ra
scheduling. Previous attempts at fixing this by placing logic inside the
hazard detector only partially worked.

This is now fixed in a different way: call+nop codegen-only instructions. As far
as CodeGen is concerned the pair is now a single instruction and cannot be split.
This solution works much better than previous attempts.

The scoreboard hazard detector is also renamed to be more generic; there is
currently no CPU-specific logic in it.
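
A conceptual sketch of the "fused pseudo" approach described above; the opcode names and lowering helper are illustrative, not the PPC backend's actual definitions:

```cpp
#include <vector>

// The scheduler only ever sees the single BL_NOP unit, so nothing can be
// placed between the call and the nop. The pair is split into its two real
// instructions only when the final code is emitted.
enum Opcode { BL_NOP /* codegen-only call+nop pair */, BL, NOP, OTHER };

std::vector<Opcode> lowerForEmission(Opcode Op) {
  if (Op == BL_NOP)
    return {BL, NOP};  // expand the pseudo at emission time, not before
  return {Op};
}
```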

llvm-svn: 153816
2012-03-31 14:45:15 +00:00
Chandler Carruth
1c12d40cce Fix a typo reported in IRC by someone reviewing this code.
llvm-svn: 153815
2012-03-31 13:18:09 +00:00
Chandler Carruth
ad48424b6b Give the always-inliner its own custom filter. It shouldn't have to pay
the very high overhead of the complex inline cost analysis when all it
wants to do is detect three patterns which must not be inlined. Comment
the code, clean it up, and leave some hints about possible performance
improvements if this ever shows up on a profile.

Moving this off of the (now more expensive) inline cost analysis is
particularly important because we have to run this inliner even at -O0.

llvm-svn: 153814
2012-03-31 13:17:18 +00:00
Chandler Carruth
61d0735f00 Remove a bunch of empty, dead, and no-op methods from all of these
interfaces. These methods were used in the old inline cost system where
there was a persistent cache that had to be updated, invalidated, and
cleared. We're now doing more direct computations that don't require
this intricate dance. Even if we resume some level of caching, it would
almost certainly have a simpler and more narrow interface than this.

llvm-svn: 153813
2012-03-31 12:48:08 +00:00
Chandler Carruth
8cacff57bf Initial commit for the rewrite of the inline cost analysis to operate
on a per-callsite walk of the called function's instructions, in
breadth-first order over the potentially reachable set of basic blocks.

This is a major shift in how inline cost analysis works to improve the
accuracy and rationality of inlining decisions. A brief outline of the
algorithm this moves to:

- Build a simplification mapping based on the callsite arguments to the
  function arguments.
- Push the entry block onto a worklist of potentially-live basic blocks.
- Pop the first block off of the *front* of the worklist (for
  breadth-first ordering) and walk its instructions using a custom
  InstVisitor.
- For each instruction's operands, re-map them based on the
  simplification mappings available for the given callsite.
- Compute any simplification possible of the instruction after
  re-mapping, and store that back into the simplification mapping.
- Compute any bonuses, costs, or other impacts of the instruction on the
  cost metric.
- When the terminator is reached, replace any conditional value in the
  terminator with any simplifications from the mapping we have, and add
  any successors which are not proven to be dead from these
  simplifications to the worklist.
- Pop the next block off of the front of the worklist, and repeat.
- As soon as the cost of inlining exceeds the threshold for the
  callsite, stop analyzing the function in order to bound cost.

The primary goal of this algorithm is to perfectly handle dead code
paths. We do not want any code in trivially dead code paths to impact
inlining decisions. The previous metric was *extremely* flawed here, and
would always subtract the average cost of two successors of
a conditional branch when it was proven to become an unconditional
branch at the callsite. There was no handling of wildly different costs
between the two successors, which would cause inlining when the path
actually taken was too large, and no inlining when the path actually
taken was trivially simple. There was also no handling of the code
*path*, only the immediate successors. These problems vanish completely
now. See the added regression tests for the shiny new features -- we
skip recursive function calls, SROA-killing instructions, and high cost
complex CFG structures when dead at the callsite being analyzed.

Switching to this algorithm required refactoring the inline cost
interface to accept the actual threshold rather than simply returning
a single cost. The resulting interface is pretty bad, and I'm planning
to do lots of interface cleanup after this patch.

Several other refactorings fell out of this, but I've tried to minimize
them for this patch. =/ There is still more cleanup that can be done
here. Please point out anything that you see in review.

I've worked really hard to try to mirror at least the spirit of all of
the previous heuristics in the new model. It's not clear that they are
all correct any more, but I wanted to minimize the change in this single
patch; it's already a bit ridiculous. One heuristic that is *not* yet
mirrored is to allow inlining of functions with a dynamic alloca *if*
the caller has a dynamic alloca. I will add this back, but I think the
most reasonable way requires changes to the inliner itself rather than
just the cost metric, and so I've deferred this for a subsequent patch.
The test case is XFAIL-ed until then.

As mentioned in the review mail, this seems to make Clang run about 1%
to 2% faster in -O0, but makes its binary size grow by just under 4%.
I've looked into the 4% growth, and it can be fixed, but requires
changes to other parts of the inliner.
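
A minimal sketch of the breadth-first, threshold-bounded walk outlined above, with the simplification mapping and bonuses elided; the Block type, per-instruction costs, and helper name are illustrative stand-ins, not LLVM's actual CallAnalyzer:

```cpp
#include <deque>
#include <set>
#include <vector>

struct Block {
  std::vector<int> InstCosts;        // cost contribution of each instruction
  std::vector<const Block *> Succs;  // successors not proven dead by the
                                     // callsite's simplifications
};

// Returns true if the walk stays under Threshold, false once it is exceeded.
bool walkUnderThreshold(const Block *Entry, int Threshold) {
  std::deque<const Block *> Worklist{Entry};  // FIFO => breadth-first order
  std::set<const Block *> Visited{Entry};
  int Cost = 0;
  while (!Worklist.empty()) {
    const Block *BB = Worklist.front();
    Worklist.pop_front();
    for (int C : BB->InstCosts) {
      Cost += C;
      if (Cost > Threshold)
        return false;                // stop analyzing to bound the cost
    }
    for (const Block *Succ : BB->Succs)
      if (Visited.insert(Succ).second)
        Worklist.push_back(Succ);    // only potentially-live blocks are walked
  }
  return true;
}
```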

llvm-svn: 153812
2012-03-31 12:42:41 +00:00
Chandler Carruth
0077a46c37 Add support to the InstVisitor for visiting a generic callsite. The
visitor will now visit a CallInst and an InvokeInst with
instruction-specific visitors, then visit a generic CallSite visitor,
then delegate back to the Instruction visitor or the TerminatorInst
visitor depending on whether it was originally a call or an invoke. This will
be used in the soon-to-land inline cost rewrite.
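
A rough illustration of the delegation order described above; the names are simplified stand-ins, not LLVM's real InstVisitor hierarchy:

```cpp
#include <iostream>

struct Visitor {
  void visitInstruction()    { std::cout << "generic instruction\n"; }
  void visitTerminatorInst() { std::cout << "terminator\n"; }
  void visitCallSite(bool IsInvoke) {
    std::cout << "generic call site\n";
    // Delegate back: a call is an ordinary instruction, an invoke is a
    // terminator.
    IsInvoke ? visitTerminatorInst() : visitInstruction();
  }
  void visitCallInst()   { visitCallSite(/*IsInvoke=*/false); }
  void visitInvokeInst() { visitCallSite(/*IsInvoke=*/true); }
};

int main() {
  Visitor V;
  V.visitCallInst();    // call   -> call site -> instruction
  V.visitInvokeInst();  // invoke -> call site -> terminator
}
```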

llvm-svn: 153811
2012-03-31 11:31:24 +00:00
Bill Wendling
c0128a5e70 Move trivial functions into the class definition.
llvm-svn: 153810
2012-03-31 11:25:18 +00:00
Bill Wendling
42a700ea8a Trim headers.
llvm-svn: 153809
2012-03-31 11:22:30 +00:00
Bill Wendling
73b6dfb5c9 Indent according to LLVM's style guide.
llvm-svn: 153808
2012-03-31 11:15:43 +00:00
Bill Wendling
9850d17689 Cleanup whitespace and trim some of the #includes.
llvm-svn: 153807
2012-03-31 11:10:35 +00:00
Benjamin Kramer
b7ee22e3f3 Internalize: Remove reference of @llvm.noinline, it was replaced with the noinline attribute a long time ago.
llvm-svn: 153806
2012-03-31 11:03:47 +00:00
Bill Wendling
3ca3830022 These strings aren't 'const char *' but 'char *'.
llvm-svn: 153805
2012-03-31 10:51:45 +00:00
Bill Wendling
8e34c4eea6 Cleanup whitespace.
llvm-svn: 153804
2012-03-31 10:50:14 +00:00
Bill Wendling
a8d55bf5e5 Free the codegen options when deleting LTO code generator object.
llvm-svn: 153803
2012-03-31 10:49:43 +00:00
Bill Wendling
cbcec066af Cleanup whitespace and remove unneeded 'extern' keyword on function definitions.
llvm-svn: 153802
2012-03-31 10:44:20 +00:00
Chandler Carruth
385a981fde Clean up the naming in this test. Someone pointed this out in review at
one point, and I forgot to go back and clean it up. Sorry about that. =/

llvm-svn: 153801
2012-03-31 10:38:48 +00:00
Chandler Carruth
15d7a6e00c FileCheck-ize this test, and generally tidy it up prior to changing
things around.

llvm-svn: 153799
2012-03-31 09:22:33 +00:00
Duncan Sands
5165327295 I noticed in passing that the Metadata getIfExists method was creating a new
node and returning it if one didn't exist.
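
A small sketch of the contract implied here; the map and key type are illustrative stand-ins, not the Metadata API:

```cpp
#include <map>
#include <string>

struct Node {};

// A getIfExists-style lookup should report absence, never create a node as a
// side effect of the query.
Node *getIfExists(std::map<std::string, Node> &Nodes, const std::string &Key) {
  auto It = Nodes.find(Key);  // find() never inserts
  return It == Nodes.end() ? nullptr : &It->second;
  // By contrast, Nodes[Key] would default-construct and insert a node,
  // which is the surprising behavior the commit message points out.
}
```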

llvm-svn: 153798
2012-03-31 08:20:11 +00:00
Hal Finkel
45ad90afac Correctly vectorize powi.
The powi intrinsic requires special handling because it always takes a single
integer power regardless of the result type. As a result, we can vectorize
only if the powers are equal. Fixes PR12364.
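
A sketch of the legality check this implies; PowerOperands stands in for the i32 power argument of each scalar call, and this is not the actual vectorizer code:

```cpp
#include <vector>

// The scalar calls can only be merged into one vector powi if every call uses
// the same integer power, because the vectorized intrinsic still takes a
// single scalar power.
bool canVectorizePowi(const std::vector<int> &PowerOperands) {
  if (PowerOperands.empty())
    return false;
  for (int P : PowerOperands)
    if (P != PowerOperands.front())
      return false;  // differing powers: keep the calls scalar
  return true;
}
```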

llvm-svn: 153797
2012-03-31 03:38:40 +00:00
Andrew Trick
e5028d95da comment typo
llvm-svn: 153796
2012-03-31 02:39:17 +00:00
Akira Hatanaka
4ef4aae332 Select the static relocation model when JITing.
llvm-svn: 153795
2012-03-31 02:38:36 +00:00
Andrew Trick
f1fa07f326 Introduce Register Units: Give each leaf register a number.
First small step toward modeling multi-register multi-pressure. In the
future, register units can also be used to model liveness and
aliasing.

llvm-svn: 153794
2012-03-31 01:35:59 +00:00
Jakob Stoklund Olesen
728984c476 Add a 2 byte safety margin in offset computations.
ARMConstantIslandPass still has bugs where jump table compression can
cause constant pool entries to go out of range.

Add a safety margin of 2 bytes when placing constant islands, but use
the real max displacement for verification.
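
A tiny sketch of the two checks this describes; the names and in-range convention are illustrative:

```cpp
// Placement keeps a 2-byte cushion so later jump table compression cannot push
// a constant pool entry out of range; verification still checks against the
// true limit.
constexpr unsigned SafetyMarginBytes = 2;

bool okToPlace(unsigned Displacement, unsigned MaxDisplacement) {
  return Displacement + SafetyMarginBytes <= MaxDisplacement;
}

bool okToVerify(unsigned Displacement, unsigned MaxDisplacement) {
  return Displacement <= MaxDisplacement;  // the real hardware constraint
}
```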

<rdar://problem/11156595>

llvm-svn: 153789
2012-03-31 00:06:44 +00:00
Jakob Stoklund Olesen
91f86a31e7 Add more debugging output to ARMConstantIslandPass.
llvm-svn: 153788
2012-03-31 00:06:42 +00:00
Bill Wendling
188c9214c3 * Set the scope attributes for the ASM symbol we added to be the value passed
into the function.
* Reorder some header files.

llvm-svn: 153783
2012-03-30 23:26:06 +00:00
Benjamin Kramer
dbd6a33c45 Rip out emission of the regIsInRegClass function for the asm printer.
It's slow, bloated and completely redundant with MCRegisterClass::contains.

llvm-svn: 153782
2012-03-30 23:13:40 +00:00
Jim Grosbach
ab2d3b5529 ARM fix encoding fixup resolution for ldrd and friends.
The 8-bit payload is not contiguous in the opcode. Move the upper nibble
over 4 bits into the correct place.
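
Roughly, the adjustment looks like this; the bit positions illustrate a split 8-bit immediate rather than the exact ARM encoding layout:

```cpp
#include <cstdint>

// The 8-bit offset is stored as two nibbles that are not adjacent in the
// opcode, so the upper nibble must be shifted 4 bits further up before being
// OR'd into the instruction.
uint32_t placeSplitImm8(uint32_t Imm8) {
  uint32_t LowNibble  = Imm8 & 0xF;
  uint32_t HighNibble = (Imm8 >> 4) & 0xF;
  return LowNibble | (HighNibble << 8);  // high nibble lands 4 bits higher
}
```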

rdar://11158641

llvm-svn: 153780
2012-03-30 21:54:22 +00:00
Jakob Stoklund Olesen
a29c0a3bac Use SequenceToOffsetTable in emitRegisterNameString.
This allows suffix sharing in register names. (AX is a suffix of EAX).
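
For illustration, the kind of sharing this enables; the table contents and offsets are made up:

```cpp
#include <iostream>

int main() {
  // One stored copy of each full name; a shorter name points into the tail of
  // a longer one instead of getting its own entry.
  const char Table[] = "EAX\0EBX\0";
  const char *EAX = Table + 0;            // "EAX"
  const char *AX  = Table + 1;            // "AX" reuses the last bytes of "EAX"
  std::cout << EAX << " " << AX << "\n";  // prints "EAX AX"
}
```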

llvm-svn: 153777
2012-03-30 21:12:52 +00:00
Jakob Stoklund Olesen
0255173d11 Reapply 153764 and 153761 with a fix.
Use an explicit comparator instead of the default.

The sets are sorted, but not using the default comparator. Hopefully,
this will unbreak the Linux builders.
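
The bug class being fixed, in miniature; the ordering below is illustrative, not the actual TableGen-emitted tables:

```cpp
#include <algorithm>
#include <cstdlib>
#include <vector>

// When a container is sorted with a custom order, binary searches over it must
// be given the same comparator explicitly; relying on the default operator<
// gives undefined results.
struct ByMagnitude {
  bool operator()(int A, int B) const { return std::abs(A) < std::abs(B); }
};

bool containsValue(const std::vector<int> &SortedByMagnitude, int V) {
  return std::binary_search(SortedByMagnitude.begin(), SortedByMagnitude.end(),
                            V, ByMagnitude());
}
```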

llvm-svn: 153772
2012-03-30 20:24:14 +00:00
Rafael Espindola
34915bd5fa Revert 153764 and 153761. They broke a --enable-optimized --enable-assertions
--enable-expensive-checks build.

llvm-svn: 153771
2012-03-30 20:09:06 +00:00
Jim Grosbach
37853d6216 ARM assembler should prefer the non-aliased encoding of cmp.
When an immediate is valid as both a [t2_]so_imm and a [t2_]so_imm_neg,
we want to use the non-negated form to make sure we prefer the normal
encoding, not the aliased encoding via the negation of, e.g., 'cmp.w'.

llvm-svn: 153770
2012-03-30 19:59:02 +00:00
Jim Grosbach
92ee2a8454 ARM encoding for VSWP got the second operand incorrect.
Make the non-tied register operand names line up with what the base
class encoding handler expects.

rdar://11157236

llvm-svn: 153766
2012-03-30 18:53:01 +00:00
Jim Grosbach
472cefe371 ARM can only use narrow encoding for low regs.
llvm-svn: 153765
2012-03-30 18:39:43 +00:00
Jakob Stoklund Olesen
25b63f35f8 Compress SimpleValueType lists by sharing.
Many register classes have the same value types. Share the table space.

llvm-svn: 153764
2012-03-30 17:42:04 +00:00
Jakob Stoklund Olesen
cf79a76c01 Compress register lists by sharing suffixes.
TableGen emits lists of sub-registers, super-registers, and overlaps. Put
them all in a single table and use a SequenceToOffsetTable to share
suffixes.

llvm-svn: 153761
2012-03-30 17:25:43 +00:00
Jakob Stoklund Olesen
dab150c9cd Add a SequenceToOffsetTable to TableGen.
This is similar to the StringToOffsetTable we use to produce string
tables, but it can be used for other sequences than strings, and it
eliminates entries for suffixes.

llvm-svn: 153760
2012-03-30 17:25:40 +00:00
Jim Grosbach
2536615bab ARM integrated assembler should fix the encoding choice for add/sub imm.
For 'adds r2, r2, #56' outside of an IT block, the 16-bit encoding T2
can be used for this syntax. Prefer the narrow encoding when possible.
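
A sketch of the choice being made; the operand checks are simplified, and real Thumb encoding constraints have more cases:

```cpp
// Outside an IT block, a flag-setting add of a small immediate to a low
// register (with Rd == Rn, as in 'adds r2, r2, #56') fits the 16-bit
// encoding, so prefer it over the wider 32-bit form.
bool preferNarrowAdds(bool InITBlock, unsigned Rd, unsigned Rn, unsigned Imm) {
  bool LowRegs  = Rd < 8 && Rn < 8;
  bool Imm8Form = Rd == Rn && Imm < 256;
  return !InITBlock && LowRegs && Imm8Form;
}
```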

rdar://11156277

llvm-svn: 153759
2012-03-30 17:20:40 +00:00
Rafael Espindola
151b420718 Handle unreachable code in the dominates functions. This changes users when
needed for correctness, but still doesn't clean up code that now unnecessarily
checks for reachability.

llvm-svn: 153755
2012-03-30 16:46:21 +00:00
Danil Malyshev
df8df843d9 Re-factored RuntimeDyld:
1. The main work is now done in RuntimeDyldImpl, which uses the ObjectFile class. RuntimeDyldMachO and RuntimeDyldELF now only parse relocations and resolve them. This makes it easier to improve RuntimeDyld, and support for COFF can be added easily.

2. Added ARM relocations to RuntimeDyldELF.

3. Added support for stub functions on ARM, allowing long branches.

4. Added support for external functions that are not loaded from the object files but can be loaded from external libraries. MCJIT can now correctly execute code containing printf, putc, etc.

5. Sections are now emitted instead of functions (thanks to Jim Grosbach). MemoryManager.startFunctionBody() and MemoryManager.endFunctionBody() have been removed.
6. MCJITMemoryManager.allocateDataSection() and MCJITMemoryManager.allocateCodeSection() use JMM->allocateSpace() instead of JMM->allocateCodeSection() and JMM->allocateDataSection(), because I otherwise got the error "Cannot allocate an allocated block!" with object files containing more than one code or data section.

llvm-svn: 153754
2012-03-30 16:45:19 +00:00
Jim Grosbach
9b185a753c ARM assembly parsing needs to be paranoid about negative immediates.
Make sure to treat immediates as unsigned when doing relative comparisons.

rdar://11153621

llvm-svn: 153753
2012-03-30 16:31:31 +00:00
Rafael Espindola
4578d5e45a Add computeMaskedBitsLoad back, as it was the change to instsimplify that
caused the slowdown last time.

llvm-svn: 153747
2012-03-30 15:52:11 +00:00
Benjamin Kramer
0365dc97a8 Add a note about a missed cmov -> sbb opportunity.
llvm-svn: 153741
2012-03-30 13:02:58 +00:00
Bill Wendling
619da9f4da Cleanup whitespace. Doxygenize comments. And indent to llvm coding standards.
llvm-svn: 153740
2012-03-30 10:29:38 +00:00
James Molloy
70a6f5ebc7 Ensure conditional BL instructions for ARM are given the fixup fixup_arm_condbranch.
Patch by Tim Northover!

llvm-svn: 153737
2012-03-30 09:15:32 +00:00