access each with a fixed negative index from op_end().
This has two important implications:
- getUser() will work faster, because there are fewer iterations
for the waymarking algorithm to perform. This is important
when running various analyses that want to determine callers
of basic blocks.
- getSuccessor() now runs faster, because the indirection via OperandList
is no longer necessary: the Uses corresponding to the successors sit at a
fixed offset from "this" (see the sketch after this list).
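To make the layout concrete, here is a minimal sketch of the idea
(illustrative C++ with made-up names, not the actual Use/User/BranchInst
declarations): the successors sit at the end of the co-allocated Use array,
so getSuccessor() can index backwards from op_end(), and each successor Use
is only a few waymarking steps away from its User.

    #include <cassert>

    // Stand-ins for llvm::Use and a BranchInst-like User.
    struct Use { void *Val; };

    struct BranchLike {
      Use *Ops;        // one past the last operand, i.e. what op_end() returns
      unsigned NumOps; // 1 for an unconditional branch, 3 for a conditional one

      Use *op_end() { return Ops; }

      unsigned getNumSuccessors() { return NumOps == 1 ? 1 : 2; }

      // A successor lives at a fixed negative offset from op_end(), whether or
      // not a condition operand is present, so no lookup through OperandList
      // is needed (assumed ordering: successor 0 is the last operand).
      void *getSuccessor(unsigned i) {
        assert(i < getNumSuccessors() && "successor index out of range");
        return op_end()[-1 - static_cast<int>(i)].Val;
      }
    };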
The price we pay is the slightly more complicated logic in
User::operator delete, as it has to work out whether it is freeing the
memory of an originally unconditional BranchInst or of a BranchInst that
was originally conditional but has since been shortened to unconditional.
I was not able to come up with a nicer solution to this problem. (And
rest assured, I tried *a lot*).
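To illustrate the bookkeeping this implies, here is a minimal sketch
(hypothetical field names, not LLVM's actual scheme): the Use slots are
co-allocated in front of the object, so operator delete has to step back
over the number of Uses that were originally allocated, which may be larger
than the current operand count after shortening.

    #include <cstddef>
    #include <cstdlib>

    struct Use { void *Val; };

    struct BranchLike {
      unsigned NumOperands;   // current count: 3, or 1 after shortening
      unsigned AllocatedOps;  // hypothetical field: count at allocation time

      // Allocate the Use slots immediately in front of the object.
      void *operator new(std::size_t Size, unsigned NumOps) {
        void *Raw = std::malloc(Size + NumOps * sizeof(Use));
        return static_cast<Use *>(Raw) + NumOps;
      }

      // Deallocation must use the original count, not the possibly shrunken
      // NumOperands, to find the start of the allocation.
      void operator delete(void *Ptr) {
        BranchLike *B = static_cast<BranchLike *>(Ptr);
        std::free(reinterpret_cast<Use *>(B) - B->AllocatedOps);
      }
    };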
Similar reorderings will follow for InvokeInst and CallInst. After that
some optimizations to pred_iterator and CallSite will fall out naturally.
llvm-svn: 66815
This means that we have to include an additional header.
This patch should be functionally equivalent. I cannot rule out a
performance degradation, though I do not expect one.
llvm-svn: 61694
using the 'volatile' qualifier. This should not have any operational consequences
for code, because tags should always be stripped off (giving a non-volatile pointer)
before dereferencing. The new qualification is there to catch some attempts to use
tagged pointers in a context where an untagged pointer is appropriate.
Notably this approach does not catch dereferencing of tagged pointers, but helps
in separating the two concepts a bit.
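As a rough illustration of the approach (hypothetical names, not the actual
Use.h declarations): a possibly-tagged pointer can be given a
pointer-to-volatile type, so passing it where a plain pointer is expected
fails to compile until the tag, and with it the qualifier, has been stripped.

    #include <cstdint>

    struct Node { int Data; };

    // Pointers of this type may carry a tag in their low bit; the 'volatile'
    // qualification marks them as not directly usable.
    typedef volatile Node *TaggedNodePtr;

    inline TaggedNodePtr addTag(Node *P) {
      return reinterpret_cast<TaggedNodePtr>(
          reinterpret_cast<std::uintptr_t>(P) | 1u);
    }

    inline Node *stripTag(TaggedNodePtr P) {
      // Clearing the tag bit gives back an ordinary, non-volatile pointer.
      return reinterpret_cast<Node *>(
          reinterpret_cast<std::uintptr_t>(P) & ~std::uintptr_t(1));
    }

    void takesPlainPointer(Node *) {}

    void example(Node *N) {
      TaggedNodePtr T = addTag(N);
      // takesPlainPointer(T);         // error: discards 'volatile' qualifier
      takesPlainPointer(stripTag(T));  // fine once the tag has been stripped
    }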
llvm-svn: 57641
Do not rely on std::swap<Use>; provide a (faster) member function instead.
This change is primarily necessitated by MSVC++'s incompatibility with
declaring std::swap<Use> to be a friend of Use.
Also contains some minor tweaks to Use inline functions,
to undo pointless changes that sneaked in with the last merge.
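A minimal sketch of the member-function shape (simplified; the real
Use::swap also has to maintain the use lists of the two Values involved):

    #include <utility>

    struct Value;

    class Use {
      Value *Val;

    public:
      Use() : Val(0) {}

      // A member swap needs no std::swap<Use> specialization (and thus no
      // friend declaration of one), which sidesteps the MSVC++ problem.
      void swap(Use &RHS) {
        if (Val == RHS.Val)
          return;
        std::swap(Val, RHS.Val);
        // The real implementation would also unlink each Use from its old
        // Value's use list and relink it onto the new one here.
      }
    };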
llvm-svn: 51078