Simplifying use_iterators by dereferencing
is not a good idea. The codebase does not depend
on this any more, and it may introduce hidden
runtime cost. If you get compile errors, please
dereference your iterator before passing it to cast<>
(and friends).
Also: please consider caching the result of
operator* and reusing it instead of dereferencing
the iterator many times.
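For illustration, a minimal sketch of the intended pattern, assuming the
tree as of this revision (where dereferencing a Value::use_iterator yields
a User*); the function and variable names are made up:

  #include "llvm/Value.h"
  #include "llvm/Instruction.h"

  void visitUsers(llvm::Value *V) {
    using namespace llvm;
    for (Value::use_iterator UI = V->use_begin(), E = V->use_end();
         UI != E; ++UI) {
      User *U = *UI;  // dereference once and cache the result
      if (Instruction *I = dyn_cast<Instruction>(U)) {
        // ... work with I; no repeated *UI, and no cast<>(UI) ...
        (void)I;
      }
    }
  }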
llvm-svn: 109425
Simplifying use_iterators by dereferencing
is not a good idea. The codebase does not depend
on this any more, and it may introduce hidden
runtime cost. If you get compile errors, please
dereference your iterator before passing it to cast<>
(and friends).
Also: please consider caching the result of
operator* and reusing it instead of dereferencing
the iterator many times.
llvm-svn: 109220
Replace the PointerLikeTypeTraits::getNumLowBitsAvailable()
function with a new NumLowBitsAvailable enum, which makes the
value available as an integer constant expression.
Add PointerLikeTypeTraits specializations for Instruction* and
Use** since they are only guaranteed to be 4-byte aligned.
Enhance PointerIntPair to know about (and enforce) the alignment
specified by PointerLikeTypeTraits. This should allow things
like PointerIntPair<PointerIntPair<void*, 1, bool>, 1, bool>
because the inner one knows that 2 low bits are free.
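For illustration, a specialization for a hypothetical type (MyNode is made
up, and we assume it is 8-byte aligned on the target) follows this pattern:

  #include "llvm/ADT/PointerIntPair.h"
  #include "llvm/Support/PointerLikeTypeTraits.h"

  struct MyNode { double Data; };  // assumed 8-byte aligned

  namespace llvm {
    template<>
    struct PointerLikeTypeTraits<MyNode*> {
      static void *getAsVoidPointer(MyNode *P) { return P; }
      static MyNode *getFromVoidPointer(void *P) {
        return static_cast<MyNode*>(P);
      }
      enum { NumLowBitsAvailable = 3 };  // log2 of the 8-byte alignment
    };
  }

  // PointerIntPair can now pack up to 3 tag bits alongside a MyNode*:
  llvm::PointerIntPair<MyNode*, 2, unsigned> NodeAndTag;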
llvm-svn: 67979
Rearrange the operands of BranchInst so that we can
access each with a fixed negative index from op_end().
This has two important implications:
- getUser() will work faster, because there are fewer iterations
  for the waymarking algorithm to perform. This is important
  when running various analyses that want to determine the callers
  of basic blocks.
- getSuccessor() now runs faster, because the indirection via OperandList
  is no longer necessary: the Uses corresponding to the successors sit at
  a fixed offset from "this" (see the sketch after this list).
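Roughly, the idea is the following (an illustrative free function, not
the actual accessor):

  #include "llvm/Instructions.h"

  // Successor Uses are the last operands, so successor i sits at a fixed
  // negative index from op_end(); no load of OperandList is needed.
  llvm::BasicBlock *getSuccessorSketch(llvm::BranchInst *BI, unsigned i) {
    return llvm::cast<llvm::BasicBlock>(BI->op_end()[-(int)i - 1].get());
  }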
The price we pay is the slightly more complicated logic in
User::operator delete, as it has to determine whether it must free
the memory of an originally unconditional BranchInst or of a BranchInst
that was originally conditional but has been shortened to unconditional.
I was not able to come up with a nicer solution to this problem. (And
rest assured, I tried *a lot*).
Similar reorderings will follow for InvokeInst and CallInst. After that
some optimizations to pred_iterator and CallSite will fall out naturally.
llvm-svn: 66815
This means that we have to include an additional header.
This patch should be functionally equivalent. I cannot rule out a
performance degradation, though I do not expect one.
llvm-svn: 61694
Document how tagged pointers can be
distinguished from normal (untagged) ones,
as per review comment.
I am sufficiently unacquainted with doxygen to
defer the markup to someone with more experience.
llvm-svn: 57676
Make tagged pointers recognizable by the type system,
using the 'volatile' qualifier. This should not have any operational consequences
on code, because tags should always be stripped off (giving a non-volatile pointer)
before dereferencing. The new qualification is there to catch some attempts to use
tagged pointers in a context where an untagged pointer is appropriate.
Notably, this approach does not catch dereferencing of tagged pointers,
but it helps in keeping the two concepts a bit more separate.
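A minimal sketch of the scheme, with made-up names (the real code tags
Use::Prev, and the exact tag mask is illustrative here):

  #include <stdint.h>

  struct Use;  // stand-in for the real class

  // Tagged pointers carry 'volatile', so passing one where an untagged
  // pointer is expected fails to compile instead of misbehaving.
  typedef volatile Use *TaggedUse;

  inline Use *stripTag(TaggedUse P) {
    // Clear the low tag bit and shed the qualifier in one place; only
    // the resulting untagged pointer is ever dereferenced.
    uintptr_t Bits = reinterpret_cast<uintptr_t>(const_cast<Use*>(P));
    return reinterpret_cast<Use*>(Bits & ~uintptr_t(1));
  }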
llvm-svn: 57641
Do not rely on std::swap<Use>, provide a (faster) member function instead.
This change is primarily necessitated by MSVC++'s incompatibility with
declaring std::swap<Use> to be a friend of Use.
Also contains some minor tweaks to Use inline functions,
to undo pointless changes that sneaked in with the last merge.
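In spirit, the member function looks like this (a simplified sketch; the
real swap must also re-link both Uses into their use lists):

  class Value;

  class Use {
    Value *Val;  // simplified: the real Use also carries list links
  public:
    void swap(Use &RHS) {
      // Swap payloads directly: no std::swap<Use> specialization (which
      // MSVC++ cannot befriend), and no temporary Use object.
      Value *Tmp = RHS.Val;
      RHS.Val = Val;
      Val = Tmp;
    }
  };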
llvm-svn: 51078
Replace the doubly linked use list with a singly linked one.
This list does not provide the ability to go backwards in the list (it's
more of an unordered collection, stored in the shape of a list).
This change means that use iterators are now only forward iterators, not
bidirectional.
This improves the memory usage of use lists from '5 + 4*#use' per value
to '1 + 4*#use'. While it would be better to reduce the multiplied factor,
I'm not smart enough to do so. This list also has slightly more efficient
operations for manipulating list nodes (a few fewer loads and stores), due
to not needing to support backwards iteration through the list.
This change reduces the memory footprint required to hold 176.gcc from
66.025M -> 57.687M, a 12.6% reduction. It also speeds up the compiler,
7.73% in the case of bytecode loading alone (release build loading 176.gcc).
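Roughly, the layout behind the numbers above (a sketch, not the verbatim
classes):

  class Value;
  class User;

  struct Use {      // the '4' per use: four pointer-sized members
    Value *Val;
    User  *U;
    Use   *Next;    // forward link only: no backwards iteration
    Use  **Prev;    // address of whatever points at this Use, so a Use
                    // can still unlink itself in constant time
    void removeFromList() {
      *Prev = Next;
      if (Next) Next->Prev = Prev;
    }
  };

  struct ValueSketch {
    Use *UseList;   // the '1' per value: a single head pointer
  };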
llvm-svn: 19956
Based on the ilist changes, avoid allocating an entire Use object for the
end of the Use chain. This saves 8 bytes of memory for each Value allocated
in the program. For 176.gcc, this reduces us from 69.5M -> 66.0M, a 5.0%
memory savings.
llvm-svn: 19925
Move include/Config and include/Support into include/llvm/Config,
include/llvm/ADT and include/llvm/Support. From here on out, all LLVM
public header files must be under include/llvm/.
llvm-svn: 16137
Change the use list to be a doubly linked list, so that all operations on
this list (except use_size()) are constant time. Previously, the killUse
method (used whenever something stopped using a value) was linear time, and
thus very, very slow for large programs.
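The payoff is the classic doubly linked list property (an illustrative
sketch, not the original class): a use can unlink itself in constant time,
with no linear scan to find its predecessor.

  // Assumes a circular list with a sentinel, so Prev and Next are
  // never null.
  struct UseNode {
    UseNode *Prev, *Next;
    void unlink() {      // what killUse now amounts to: O(1)
      Prev->Next = Next;
      Next->Prev = Prev;
      Prev = Next = 0;
    }
  };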
This speeds GCCAS up _substantially_ on large programs: almost 2x for 176.gcc:
176.gcc: 77.07s -> 37.38s
177.mesa: 7.59s -> 5.57s
252.eon: 21.02s -> 19.52s (*)
253.perlbmk: 11.40s -> 13.05s
254.gap: 7.25s -> 7.42s
(*) 252.eon would speed up a whole lot more, but optimization time is
dominated by the inlining pass, which needs to be fixed.
llvm-svn: 9159