contains the type we are looking for, just search the immediately used types.
We can only do this because we keep the "current" type in the nesting level
as we decrement upreferences.
This change speeds up the testcase in PR224 from 50.4s to 22.08s, not
too shabby.
llvm-svn: 11221
the Virt2PhysRegMap std::map with an std::vector. This speeds up the
register allocator another (almost) 40%, from .72->.45s in a release build
of LLC on 253.perlbmk.
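Since virtual register numbers are dense, a vector indexed by vreg number can
stand in for the map. A minimal sketch of the idea (NumVirtualRegs and
FirstVirtualReg are assumed names, not the actual allocator code):

  // dense vreg numbers index a vector directly, instead of paying
  // std::map's red-black tree overhead on every lookup:
  std::vector<unsigned> Virt2PhysRegMap(NumVirtualRegs, 0);  // 0 = unmapped
  unsigned &physRegFor(unsigned VirtReg) {
    return Virt2PhysRegMap[VirtReg - FirstVirtualReg];
  }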
llvm-svn: 11219
from physical registers, and they are always dense, it makes sense to not have
a ton of RBtree overhead. This change speeds up regalloclocal about ~30% on
253.perlbmk, from .35s -> .27s in the JIT (in LLC, it goes from .74 -> .55).
Now live variable analysis is the slowest codegen pass. Of course it doesn't
help that we have to run it twice, because regalloclocal doesn't update it,
but even if it did it would be the slowest pass (now it's just the 2x slowest
pass :(
llvm-svn: 11215
1. The "work" was not in the assert, so it was punishing the optimized release
2. getNamedFunction is _very_ expensive in large programs. It is not designed
to be used like this, and was taking 7% of the execution time of the code
generator on perlbmk.
Since the assert "can never fail", I'm just killing it.
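For the record, the general shape of problem 1 (sketched with assumed names;
the commit simply deletes the assert rather than restructuring it):

  // the expensive call sits outside the assert, so it still runs in
  // optimized release builds where NDEBUG strips only the check itself:
  Function *F = M->getNamedFunction(Name);   // expensive linear scan
  assert(F == Callee && "mismatched function!");
  // folding the work into the assert would let NDEBUG remove it all:
  assert(M->getNamedFunction(Name) == Callee && "mismatched function!");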
llvm-svn: 11214
removeDeadNodes is called, only call it at the end of the pass being run.
This saves 1.3 seconds running DSA on 177.mesa (5.3->4.0s), which is
pretty big. This is only possible because of the automatic garbage
collection done on forwarding nodes.
llvm-svn: 11178
DSGraphs while they are forwarding. When the last reference to the forwarding
node is dropped, the forwarding node is autodeleted. This should simplify
removeTriviallyDeadNodes, and is only (efficiently) possible because we are
using an ilist of DSNodes now.
llvm-svn: 11175
slots each. As a consequence they get numbered as 0, 2, 4, and so
on. The first slot is used for operand uses and the second for
defs. Here's an example:
  0: A = ...
  2: B = ...
  4: C = A + B ;; last use of A
The live intervals should look like:
  A = [1, 5)
  B = [3, x)
  C = [5, y)
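Put another way (hypothetical helpers, not the actual implementation), with i
being the instruction's index in the block:

  unsigned useSlot(unsigned i) { return 2 * i; }     // even slots: uses
  unsigned defSlot(unsigned i) { return 2 * i + 1; } // odd slots: defs
  // A is defined by instruction 0 and last used by instruction 2, so its
  // interval runs from defSlot(0) == 1 to just past useSlot(2) == 4: [1, 5)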
llvm-svn: 11141
The problem is that the dominator update code didn't "realize" that it's
possible for the newly inserted basic block to dominate anything. Because
it IS possible, stuff was getting updated wrong.
llvm-svn: 11137
complete rewrite of load-vn will make it a bit faster. This change speeds up
the gcse pass (which uses load-vn) from 25.45s to 0.42s on the testcase in
PR209.
I've also verified that this gives the exact same results as the old one.
llvm-svn: 11132
1. Don't scan to the end of alloca instructions in the caller function to
insert inlined allocas, just insert at the top. This saves a lot of
time inlining into functions with a lot of allocas.
2. Use splice to move the alloca instructions over, instead of remove/insert.
This allows us to transfer a block at a time, and eliminates a bunch of
silly symbol table manipulations.
This speeds up the inliner on the testcase in PR209 from 1.73s -> 1.04s (67%)
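Point 2 boils down to something like this sketch (iterator names assumed);
splice just rewires the intrusive list pointers, so the whole run of allocas
moves at once and no instruction is ever removed from and re-added to the
function's symbol table:

  // move [FirstNewAlloca, AfterLastAlloca) to the top of the caller's
  // entry block in a single operation:
  CallerEntry->getInstList().splice(CallerEntry->begin(),
                                    ClonedEntry->getInstList(),
                                    FirstNewAlloca, AfterLastAlloca);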
llvm-svn: 11118
and that basic block ends with a return instruction. In this case, we can just splice
the cloned "body" of the function directly into the source basic block, avoiding a lot
of rearrangement and splitBasicBlock's linear scan over the split block. This speeds up
the inliner on the testcase in PR209 from 2.3s to 1.7s, a 35% speedup.
llvm-svn: 11116
process. The only optimization we did so far is to avoid creating a
PHI node, then immediately destroying it in the common case where the
callee has one return statement. Instead, we just don't create the return
value. This has no noticeable performance impact, but paves the way for
future improvements.
llvm-svn: 11108
to add the cloned block to. This allows the block to be added to the function
immediately, and all of its instructions to be added to the function symbol
table right away, which speeds up the inliner from 3.7s -> 3.38s on the
testcase in PR209.
llvm-svn: 11107
instead of a loop that is really inefficient with large basic blocks.
This speeds up the inliner pass on the testcase in PR209 from 13.8s to 2.24s
which still isn't exactly speedy, but is a lot better. :)
llvm-svn: 11105
Basically we store floating point values as their integral components, instead of relying
on the semantics of floating point < to differentiate between values. This is likely to
make the map search be faster anyway.
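One plausible reading of "integral components" (a sketch, not necessarily the
exact representation used): key the map on the value's bit pattern, which is
totally ordered even where floating point < is not (e.g. NaN compares false
against everything):

  uint64_t keyFor(double V) {
    uint64_t Bits;
    std::memcpy(&Bits, &V, sizeof(Bits));  // reinterpret the raw bits
    return Bits;                           // compare as a plain integer
  }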
llvm-svn: 11064
registers (not as the max number of registers).
Change toSpill from a std::set into a std::vector<bool>.
Use the reverse iterator adapter to do a reverse scan of allocatable
registers.
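Sketch of the two container changes (surrounding names assumed):

  std::vector<bool> toSpill(NumRegs, false);  // dense bitmap, was std::set
  toSpill[Reg] = true;                        // insertion is now O(1)
  // reverse scan of allocatable registers via the reverse adapter:
  for (std::vector<unsigned>::const_reverse_iterator
         I = Allocatable.rbegin(), E = Allocatable.rend(); I != E; ++I)
    if (!toSpill[*I])
      /* consider register *I */;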
llvm-svn: 11061
of a linear search to find the first range for comparisons. This cuts
down the linear scan register allocator running time by a factor of 3
in 254.perlbmk and by a factor of 2.2 in 176.gcc.
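Because the ranges in an interval are kept sorted, the lookup is a standard
binary search; roughly (assuming an operator< that orders a point against a
range):

  // first range that can contain or follow Point, in O(log n):
  std::vector<LiveRange>::const_iterator I =
      std::upper_bound(Ranges.begin(), Ranges.end(), Point);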
llvm-svn: 11030
FP_REG_KILL instructions at the end of blocks involved with critical edges.
Fix a bug where FP_REG_KILL instructions weren't inserted in fall through
unconditional branches. Perhaps this will fix some linscan problems?
llvm-svn: 11019
function to find the globals, iterate over all of the globals directly. This
speeds the function up from 14s to 6.3s on perlbmk, reducing DSA time from
53->46s.
llvm-svn: 10996
This reduces the number of nodes that are allocated, then immediately merged
and DNE'd, from 2193852 to 1298049. Unfortunately, this only speeds DSA up by
~1.5s (of 53s), because it's spending most of its time waddling through the
scalar map :(
llvm-svn: 10992
Also, use RC::merge when possible; this reduces the number of nodes allocated,
then immediately merged away, from 2985444 to 2193852 on perlbmk.
llvm-svn: 10991
it to be off. If it looks like it's completely unnecessary after testing, I
will remove it completely (which is the hope).
* Callers of the DSNode "copy ctor" can now choose to not copy links.
* Make node collapsing not create a garbage node in some cases, avoiding a
memory allocation, and a subsequent DNE.
* When merging types, allow two functions of different types to be merged
without collapsing.
* Use DSNodeHandle::isNull more often instead of DSNodeHandle::getNode() == 0,
as it is much more efficient.
*** Implement the new, more efficient reachability cloner class
In addition to only cloning nodes that are reachable from interesting
roots, this also fixes the huge inefficiency we had where we cloned lots
of nodes, only to merge them away immediately after they were cloned.
Now we only actually allocate a node if there isn't one to merge it into
(see the sketch after this list).
* Eliminate the now-obsolete cloneReachable* and clonePartiallyInto methods
* Rewrite updateFromGlobalsGraph to use the reachability cloner
* Rewrite mergeInGraph to use the reachability cloner
* Disable the scalar map scanning code in removeTriviallyDeadNodes. In large
SCC's, this is extremely expensive. We need a better data structure for the
scalar map, because we really want to scan the unique node handles, not ALL
of the scalars.
* Remove the incorrect SANER_CODE_FOR_CHECKING_IF_ALL_REFERRERS_ARE_FROM_SCALARMAP code.
* Move the code for eliminating integer nodes from the trivially dead
eliminator to the dead node eliminator.
* removeDeadNodes no longer uses removeTriviallyDeadNodes, as it contains a
superset of the node removal power.
* Only futz around with the globals graph in removeDeadNodes if it is modified.
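A condensed sketch of the reachability cloner idea (member and method names
assumed, not the actual DSA interface): clone lazily from the roots, and never
allocate when the node map already has a destination to merge into.

  DSNode *ReachabilityCloner::getClonedNode(const DSNode *Src) {
    DSNodeHandle &Dest = NodeMap[Src];  // source -> destination mapping
    if (!Dest.isNull())
      return Dest.getNode();            // existing node: merge target, no alloc
    DSNode *N = new DSNode(*Src, &DestGraph); // allocate only on first visit
    Dest.setNode(N);
    // chase links lazily, so only nodes reachable from the roots are cloned
    for (unsigned i = 0, e = Src->getNumLinks(); i != e; ++i)
      if (const DSNode *L = Src->getLink(i).getNode())
        N->setLink(i, getClonedNode(L));
    return N;
  }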
llvm-svn: 10987
efficient in the case where a function calls into the same graph multiple times
(i.e., it either contains multiple calls to the same function, or multiple
calls to functions in the same SCC graph).
llvm-svn: 10986
out that the problem was actually the writer writing out a 'null' value
because it didn't normalize it. This fixes:
test/Regression/Assembler/2004-01-22-FloatNormalization.ll
llvm-svn: 10967
is a move between two registers, at least one of the registers is
virtual and the two live intervals do not overlap.
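Roughly, the test is (assumed names; the exact interface isn't shown here):

  // a copy can be coalesced when at least one side is virtual and the
  // two intervals are disjoint:
  bool canJoin(const LiveInterval &Dst, const LiveInterval &Src) {
    bool OneVirtual = isVirtualRegister(Dst.reg) || isVirtualRegister(Src.reg);
    return OneVirtual && !Dst.overlaps(Src);
  }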
This results in about 40% reduction in intervals, 30% decrease in the
register allocator's running time and a 20% increase in peephole
optimizations (mainly move eliminations).
The option can be enabled by passing -join-liveintervals where
appropriate.
llvm-svn: 10965
virtReg lives on the stack. Now a virtual register has an entry in the
virtual->physical map or the virtual->stack slot map but never in
both.
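The invariant, written as a hypothetical check (map names assumed):

  // a vreg maps to a physreg or to a stack slot, never to both:
  assert(Virt2PhysMap.count(VReg) + Virt2StackSlotMap.count(VReg) <= 1 &&
         "virtual register in both the physreg and stack slot maps!");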
llvm-svn: 10958
map was only used to implement a marginal GlobalsGraph optimization, and it
actually slows the analysis down (due to the overhead of keeping it), so just
eliminate it entirely.
llvm-svn: 10955
in terms of it.
Though clonePartiallyInto is not cloning partial graphs yet, this change
dramatically speeds up inlining of graphs with many scalars. For example,
this change speeds up the BU pass on 253.perlbmk from 69s to 36s, because
it avoids iteration over the scalar map, which can get pretty large.
llvm-svn: 10951
fact "profitable" to do so. This makes compactification "free" for small
programs (i.e., it is completely disabled) and even helps large programs by
not having to encode pointless compactification planes.
On 176.gcc, this saves 50K from the bytecode file, which is, alas, only
a couple percent.
This concludes my head bashing against the bytecode format, at least for
now.
llvm-svn: 10922
This shrinks the bytecode file for 176.gcc by about 200K (10%), and 254.gap by
about 167K, a 25% reduction. There is still a lot of room for improvement in
the encoding of the compaction table.
llvm-svn: 10915
This shrinks the bytecode file for 176.gcc by about 200K (10%), and 254.gap by
about 167K, a 25% reduction. There is still a lot of room for improvement in
the encoding of the compaction table.
llvm-svn: 10914
the bytecode file for 176.gcc by about 200K (10%), and 254.gap by about 167K,
a 25% reduction. There is still a lot of room for improvement in the encoding
of the compaction table.
llvm-svn: 10913
bytecode files when compiling 176.gcc, but more importantly will make it
easier to eliminate CPR's in the future (no new .bc revision will be
required to support them).
llvm-svn: 10884
of forcing them to go through ConstantPointerRef's. This allows bytecode
files to mirror .ll files, allows more efficient encoding, and makes it easier
to eventually eliminate CPR's.
llvm-svn: 10883
returning error codes. Because they don't return an error code, they can
return the value read, which simplifies the code and makes the reader more
efficient (yaay!).
Also eliminate the special case code for little endian machines.
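The shape of the interface change, sketched with hypothetical signatures:

  // before: out-parameter plus an error flag checked at every call site
  bool read_vbr(const unsigned char *&Buf, unsigned &Result);
  // after: the value is the return value; malformed input is handled
  // internally, so call sites can just use the result directly
  unsigned read_vbr_uint(const unsigned char *&Buf, const unsigned char *End);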
llvm-svn: 10871
intended to save size (and does on small programs), but on big programs it
actually increases the size of the program slightly. The deal is that many
functions end up using the characters that the string contained, and the
characters are no longer in the global constant table, so they have to be
emitted in function specific constant pools.
This pessimization will be fixed in subsequent patches.
llvm-svn: 10864
It's not clear why the code was looking for signed chars < 0, but it can't
matter to the assembler anyway, so the check goes away. This also fixes
compatibility with arrays of [us]byte that have constantexprs in them.
Also slightly restructure some code to be cleaner.
llvm-svn: 10854
because that makes it abort. Also, fix a typo in a comment.
This checkin brought to you by the "It only takes about 30 seconds to run
ENABLE_LLI tests on Shootout on zion, even if they all dump core" fund.
llvm-svn: 10844
Since this really only makes sense for these two, change the instance variable
to reflect whether we are writing a bytecode file or not. This makes it
reasonable to add bcwriter specific stuff to it as necessary.
llvm-svn: 10837
when an implicitly defined register is later used by an alias. For example:
  call foo
  %reg1024 = mov %AL
The call implicitly defines EAX but only AL is used. Before this fix
no information was available on AL. Now EAX and all its aliases except
AL get defined and die at the call instruction whereas AL lives to be
killed by the assignment.
llvm-svn: 10813
testcase test/Regression/Assembler/ConstantExprFold.llx
Note that these kinds of things only rarely show up in source code, but are
exceedingly common in the intermediate stages of algorithms like SCCP. By
folding things (especially relational operators) that use symbolic constants,
we are able to speculatively fold more conditional branches, which can
lead to some big simplifications.
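One example of such a relational fold (a sketch with assumed helpers, not the
actual implementation): a global's address is known to be non-null, so a
comparison against null folds even though the address itself is symbolic:

  // setne (int* %G, int* null) folds to true; seteq folds to false
  if (isa<GlobalValue>(C1) && isa<ConstantPointerNull>(C2))
    return ConstantBool::get(Opcode == Instruction::SetNE);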
It would be easy to add a lot more special cases here, so if you notice
SCCP missing anything "obvious", you know what to make smarter. :)
llvm-svn: 10812
Move a bunch of (now) private stuff from ConstantFolding.h into
ConstantFolding.cpp.
This _finally_ gets us to a place where we have a sane constant folder. The
rules are:
1. LLVM clients now use ConstantExpr::get* methods to fold constants. If they
cannot be folded, a constantexpr is created, so these methods always return
valid Constant*'s.
2. The implementation of ConstantExpr::get* uses the functions exposed by
ConstantFolding.h to try to fold constants. If they cannot be folded,
they should return a null pointer.
3. The implementation of ConstantFolding can do whatever it wants, and only
has one client (Constants.cpp).
This cuts down on the weird dependencies, and eliminates the two interfaces.
The old constanthandling interface was especially bad for clients to use
because almost none of them took the failure condition into consideration,
thus leading to obscure problems.
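The layering, condensed into a sketch (simplified signatures; the real entry
points are the per-operation ConstantExpr::get* methods, and real construction
goes through a uniquing table):

  // rules 2 and 3: the folder may fail, signalled by returning null
  Constant *ConstantFoldBinaryInstruction(unsigned Opcode,
                                          Constant *LHS, Constant *RHS);
  // rule 1: the client-facing method never fails; when folding is
  // impossible it falls back to building a ConstantExpr node
  Constant *ConstantExpr::get(unsigned Opcode, Constant *LHS, Constant *RHS) {
    if (Constant *C = ConstantFoldBinaryInstruction(Opcode, LHS, RHS))
      return C;                                 // folded to a simpler constant
    return new ConstantExpr(Opcode, LHS, RHS);  // always a valid Constant*
  }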
llvm-svn: 10807
this whole refactoring: allow constant folding methods to return something
other than predefined classes, allow them to return generic Constant*'s.
llvm-svn: 10806