The semantics of llvm.dbg.value are that, starting from the point where it executes, the specified user source variable holds the given value at the given offset.
An example:
call void @llvm.dbg.value(metadata !{ i32 7 }, i64 0, metadata !2)
Here the user source variable associated with metadata !2 takes the value "i32 7" at offset 0.
llvm-svn: 90788
sys::cas_flag should be long on this platform; InterlockedAdd() is
defined only for the Itanium architecture (according to MSDN).
Patch by Michael Beck!
llvm-svn: 90748
The coalescer is supposed to clean these up, but when setting up parameters
for a function call, there may be copies to physregs. If the defining
instruction has been LICM'ed far away, the coalescer won't touch it.
The register allocation hint does not always work - when the register
allocator is backtracking, it clears the hints.
This patch takes care of a few more cases that r90163 missed.
llvm-svn: 90502
that it doesn't have dangling pointers when abstract types are resolved. This
modifies it somewhat to address comments: making the "StructLayoutMap" an
anonymous structure, calling "removeAbstractTypeUser" when appropriate, and
adding asserts where helpful.
llvm-svn: 90362
We want LiveVariables clients to use methods rather than accessing the
getVarInfo data structure directly. That way it will be possible to change the
LiveVariables representation.
llvm-svn: 90240
for all the processors where I have tried it, and even when it might not help
performance, the cost is quite low. The opportunities for duplicating
indirect branches are limited by other factors, so code size does not change
much due to tail duplicating indirect branches aggressively.
llvm-svn: 90144
If no destination label is available, just point to the node itself
instead of pointing to some source label. Source and destination labels are
not related in any way.
llvm-svn: 90132
Graphviz can lay out the graphs better if a node does not contain source
ports. Therefore, only print the ports if the source ports are useful,
that is, if they are not labeled with the empty string "".
This patch also simplifies graphs without any edgeSourceLabels, e.g. the
dominance trees.
llvm-svn: 90131
running tail duplication when doing branch folding for if-conversion, and
we also want to be able to run tail duplication earlier to fix some
reg alloc problems. Move the CanFallThrough function from BranchFolding
to MachineBasicBlock so that it can be shared by TailDuplication.
llvm-svn: 89904
Make tail duplication of indirect branches much more aggressive (for targets
that indicate that it is profitable), based on further experience with
this transformation. I compiled 3 large applications with and without
this more aggressive tail duplication and measured minimal changes in code
size. ("size" on Darwin seems to round the text size up to the nearest
page boundary, so I can only say that any code size increase was less than
one 4k page.) Radar 7421267.
llvm-svn: 89814
way for each TargetJITInfo subclass to allocate its own stubs. This
means stubs aren't as exactly-sized anymore, but it lets us get rid of
TargetJITInfo::emitFunctionStubAtAddr(), which lets ARM and PPC
support the eager JIT, fixing http://llvm.org/PR4816.
* Rename the JITEmitter's stub creation functions to describe the kind
of stub they create. So far, all of them create lazy-compilation
stubs, but they sometimes get used when far-call stubs are needed.
Fixing http://llvm.org/PR5201 will involve fixing this.
llvm-svn: 89715
Note that "hasDotLocAndDotFile"-style debug info was already broken;
people wanting this functionality should implement it in the
AsmPrinter/DwarfWriter code.
llvm-svn: 89711
the body to not pass the name for the isSigned parameter. However it
seems that changing prototypes is a big no-no, so here I revert the
previous change and pass "true" for isSigned, meaning this always does
a signed cast, which was the previous behaviour assuming the name was
not NULL! Some other C function needs to be introduced for the general
case of signed or unsigned casts. This hopefully unbreaks the ocaml
binding.
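For illustration, a minimal sketch of the unambiguous C++ call (Builder,
Val and DestTy assumed to be in scope):
  // Spell out isSigned explicitly; a string literal passed in this
  // position would otherwise convert to bool and be taken as isSigned.
  llvm::Value *Cast = Builder.CreateIntCast(Val, DestTy,
                                            /*isSigned=*/true, "cast");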
llvm-svn: 89648
tell debug info which base register to use to reference a frame index on a
per-index basis. This is useful, for example, in the presence of dynamic
stack realignment when local variables are indexed via the stack pointer and
stack-based arguments via the frame pointer.
llvm-svn: 89620
When splitting a critical edge, the registers live through the edge are:
- Used in a PHI instruction, or
- Live out from the predecessor and live in to the successor.
This allows the coalescer to eliminate even more phi joins.
llvm-svn: 89530
it may be used in contexts where preheader insertion may have failed due
to an indirectbr.
Make LoopSimplify::SeparateNestedLoop properly fail in
the case that it would require splitting an indirectbr edge.
These fix PR5502.
llvm-svn: 89484
if it is not ultimately captured. Teach BasicAliasAnalysis that a
local object address which does not escape and is never stored does
not alias with a value resulting from a load.
llvm-svn: 89398
4.2.4, 4.3.4, 4.4.2.
The workaround is to use a local min/max implementation that takes its
integer parameters by value rather than by reference (as std::min does).
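A minimal sketch of such a workaround (names are illustrative); the key
point is the by-value parameters:
  // Local replacements: by-value integer params sidestep the
  // miscompilation triggered by the by-reference std::min/std::max.
  template <typename T> static T local_min(T a, T b) { return a < b ? a : b; }
  template <typename T> static T local_max(T a, T b) { return a > b ? a : b; }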
llvm-svn: 89352
contents of the block to be duplicated. Use this for ARM Cortex A8/9 to
be more aggressive tail duplicating indirect branches, since it makes it
much more likely that they will be predicted in the branch target buffer.
Testcase coming soon.
llvm-svn: 89187
This is probably not confined to *just* these two things.
Anyway, the llvm-gcc front-end may look up the structure layout information for
an abstract type. That information will be stored into a table with the FE's
TD. Instruction combine can come along and also ask for information on that
abstract type, but for a separate TD (the one associated with the pass manager).
After the type is refined, the old structure layout information in the pass
manager's TD is out of date. If a new type is allocated in the same space
as the old, unrefined type, then the structure type information in the pass
manager's TD will be wrong, but it won't know it.
Fix this by making the TD's structure type information an abstract type user.
llvm-svn: 89176
2. Allow SCCIterator to work with inverse graphs.
3. Fix an incorrect comment in GraphTraits.h (the type in the comment
was given as GraphType* when it is actually const GraphType &).
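For point 2, a hedged sketch of what this enables, assuming the usual
GraphTraits specialization for Inverse<Function*>:
  #include "llvm/ADT/SCCIterator.h"
  #include "llvm/Support/CFG.h"
  using namespace llvm;

  // Walk the SCCs of the reversed CFG of F; Inverse<> flips the edge
  // direction, and scc_iterator now accepts such inverse graphs.
  void visitInverseSCCs(Function *F) {
    for (scc_iterator<Inverse<Function*> > I = scc_begin(Inverse<Function*>(F)),
           E = scc_end(Inverse<Function*>(F)); I != E; ++I) {
      const std::vector<BasicBlock*> &SCC = *I;  // one component
      (void)SCC;                                 // process it here
    }
  }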
Patch by Patrick Alexander Simmons.
llvm-svn: 89091
parameter of CreateIntCast then they get an error from the compiler
(or from the linker with a non-gcc compiler). Another possibility
is to flip the order of the DestTy and isSigned parameters, since you
should then get a compiler warning if you try to use a char* for a
Type*.
llvm-svn: 88913
(like DbgDeclareInst's) to shrink substantially. It sucks that we have
to pull Compiler.h into such a public header, but at least Compiler.h
doesn't pull anything else in.
llvm-svn: 88863
- This is an initial step towards -march=native support in Clang, and towards
eliminating host dependencies in the targets. See PR5389.
- Patch by Roman Divacky!
llvm-svn: 88768
so that isa<Instruction> doesn't return true for FixedStackPseudoSourceValue
values. This fixes a variety of problems, including crashes with -debug
and -print-machineinstrs. Also, add a comment to warn about this.
llvm-svn: 88711
Provide special isLoadFromStackSlotPostFE and isStoreToStackSlotPostFE
interfaces to explicitly request checking for post-frame ptr elimination
operands. This uses a heuristic so it isn't reliable for correctness.
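For illustration, a hedged sketch of a client-side check, assuming the
interface returns the destination register (0 when nothing is recognized):
  // Hint only: this may miss cases after frame pointer elimination,
  // so use it for comments and diagnostics, never for correctness.
  int FrameIndex;
  if (unsigned Reg = TII->isLoadFromStackSlotPostFE(&MI, FrameIndex)) {
    // e.g. emit "; Reload from FI#<FrameIndex>" as an asm comment
  }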
llvm-svn: 87047
machine instruction loads or stores from/to a stack slot. Unlike
isLoadFromStackSlot and isStoreToStackSlot, the instruction may be
something other than a pure load/store (e.g. it may be an arithmetic
operation with a memory operand). This helps AsmPrinter determine when
to print a spill/reload comment.
This is only a hint since we may not be able to figure this out in all
cases. As such, it should not be relied upon for correctness.
Implement for X86. Return false by default for other architectures.
llvm-svn: 87026
slots. The AsmPrinter will use this information to determine whether to
print a spill/reload comment.
Remove default argument values. It's too easy to pass a wrong argument
value when multiple arguments have default values. Make everything
explicit to trap bugs early.
Update all targets to adhere to the new interfaces.
llvm-svn: 87022
making it visible to clients and adding LLVM-style cast capability.
This will be used by AsmPrinter to determine when to emit spill comments
for an instruction.
llvm-svn: 87019
cannot be folded into target cmp instruction.
- Avoid a phase ordering issue where early cmp optimization would prevent the
later count-to-zero optimization.
- Add missing checks; without them, LSR could reuse a stride that has no
users.
- Fix a bug in count-to-zero optimization code which failed to find the pre-inc
iv's phi node.
- Remove, tighten, or loosen incorrect checks that disabled valid transformations.
- Quite a bit of code clean up.
llvm-svn: 86969
functions like floorf, ceilf, ... Add test for detecting nearbyintf.
This change was prompted by test/Transforms/SimplifyLibCalls/floor.ll
llvm-svn: 86954
This allows StringRef to skip the controversial if(str) check in its constructor.
Buildbots, wait for corresponding clang and llvm-gcc FE check-ins!
llvm-svn: 86914
- Edges are split before any phis are eliminated, so the code is SSA.
- Create a proper IR BasicBlock for the split edges.
- LiveVariables::addNewBlock now has the same syntax as
MachineDominatorTree::addNewBlock. Algorithm calculates predecessor live-out
set rather than successor live-in set.
This feature still causes some miscompilations.
llvm-svn: 86867
llvm.invariant.start to be used without necessarily being paired with a call
to llvm.invariant.end. If you run the entire optimization pipeline then such
calls are in fact deleted (adce does it), but that's actually a good thing since
we probably do want them to be zapped late in the game. There should really be
an integration test that checks that the llvm.invariant.start call lasts long
enough that all passes that do interesting things with it get to do their stuff
before it is deleted. But since no passes do anything interesting with it yet
this will have to wait for later.
llvm-svn: 86840
start using them in a trivial way when -enable-jump-threading-lvi
is passed. enable-jump-threading-lvi will be my playground for
awhile.
llvm-svn: 86789
Critical edges leading to a PHI node are split when the PHI source variable is
live out from the predecessor block. This helps the coalescer eliminate more
PHI joins.
llvm-svn: 86725
except that the result may not be a constant. Switch jump threading to
use it so that it gets things like (X & 0) -> 0, which occur when phi preds
are deleted and the remaining phi pred was a zero.
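For illustration, a hedged sketch of the intended use, assuming the
entry point in llvm/Analysis/InstructionSimplify.h:
  #include "llvm/Analysis/InstructionSimplify.h"

  // Like constant folding, except the result may be any existing
  // Value: (X & 0) simplifies to 0, (X & -1) simplifies to X itself.
  if (llvm::Value *V = llvm::SimplifyInstruction(I, TD)) {
    I->replaceAllUsesWith(V);
    I->eraseFromParent();
  }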
llvm-svn: 86637
This patch forbids implicit conversion of DenseMap::const_iterator to
DenseMap::iterator which was possible because DenseMapIterator inherited
(publicly) from DenseMapConstIterator. Conversion the other way around is now
allowed, as one would expect.
The template DenseMapConstIterator is removed and the template parameter
IsConst which specifies whether the iterator is constant is added to
DenseMapIterator.
Actually, the IsConst parameter is not necessary since the constness can be
determined from KeyT, but this is not relevant to the fix and can be addressed
later.
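A minimal sketch of the single-template shape (std::conditional stands
in for the equivalent helper; details elided):
  #include <type_traits>

  // One iterator template; the bool parameter selects constness
  // instead of a separate const-iterator class.
  template <typename KeyT, typename ValueT, bool IsConst = false>
  class DenseMapIterator {
    typedef typename std::conditional<IsConst, const ValueT, ValueT>::type
        value_type;
  public:
    // iterator -> const_iterator converts implicitly; the reverse
    // direction is deliberately absent, so it no longer compiles.
    operator DenseMapIterator<KeyT, ValueT, /*IsConst=*/true>() const;
    // ...
  };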
Patch by Victor Zverovich!
llvm-svn: 86636
takes decimated instructions and applies identities to them. This
is pretty minimal at this point, but I plan to pull some instcombine
logic out into these and similar routines.
llvm-svn: 86613
datatypes on a given CPU. This is intended to allow instcombine and other
transformations to avoid converting big sequences of operations to an
inconvenient width, and will help clean up after SRoA. See also "Adding
legal integer sizes to TargetData" on Feb 1, 2009 on llvmdev, and PR3451.
Comments welcome.
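For illustration, a hedged sketch of the kind of query this enables (the
method name follows the TargetData API; the surrounding code is illustrative):
  // Before rewriting a computation to a new width, ask whether the
  // target considers that width native; if not, keep the old width.
  if (TD->isLegalInteger(64)) {
    // safe to canonicalize the sequence to i64
  }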
llvm-svn: 86370
MachineRelocations, "stub" always refers to a far-call stub or a
load-a-faraway-global stub, so this patch adds "Far" to the term. (Other stubs
are used for lazy compilation and dlsym address replacement.) The variable was
also inconsistent between the positive and negative sense, and the positive
sense ("NeedStub") was more demanding than is accurate (since a nearby-enough
function can be called directly even if the platform often requires a stub).
Since the negative sense causes double-negatives, I switched to
"MayNeedFarStub" globally.
llvm-svn: 86363
A non-identity copy cannot be coalesced when the phi join destination register
is live at the copy site.
Also verify the condition that the PHI join source register is only used in
the PHI join. Otherwise the coalescing is invalid.
llvm-svn: 86322
Here is the original commit message:
This commit updates malloc optimizations to operate on malloc calls that have constant int size arguments.
Update CreateMalloc so that its callers specify the size to allocate:
MallocInst-autoupgrade users pass allocation sizes computed without TargetData;
optimization users compute the allocation size with TargetData.
Now that malloc calls can have constant sizes, update isArrayMallocHelper() to use TargetData to determine the size of the malloced type and the size of malloced arrays.
Extend getMallocType() to support malloc calls that have non-bitcast uses.
Update OptimizeGlobalAddressOfMalloc() to optimize malloc calls that have non-bitcast uses. The bitcast use of a malloc call has to be treated specially here because the uses of the bitcast need to be replaced and the bitcast needs to be erased (just like the malloc call) for OptimizeGlobalAddressOfMalloc() to work correctly.
Update PerformHeapAllocSRoA() to optimize malloc calls that have non-bitcast uses. The bitcast use of the malloc is not handled specially here because ReplaceUsesOfMallocWithGlobal replaces through the bitcast use.
Update OptimizeOnceStoredGlobal() to not care about the malloc calls' bitcast use.
Update all globalopt malloc tests to not rely on autoupgraded-MallocInsts, but instead use explicit malloc calls with correct allocation sizes.
llvm-svn: 86311
This assert was very conservative to begin with (the error condition is well
covered by tests elsewhere in the code) so we won't miss much by removing it.
llvm-svn: 86088
MallocInst-autoupgrade users pass allocation sizes computed without TargetData;
optimization users compute the allocation size with TargetData.
Now that malloc calls can have constant sizes, update isArrayMallocHelper() to use TargetData to determine the size of the malloced type and the size of malloced arrays.
Extend getMallocType() to support malloc calls that have non-bitcast uses.
Update OptimizeGlobalAddressOfMalloc() to optimize malloc calls that have non-bitcast uses. The bitcast use of a malloc call has to be treated specially here because the uses of the bitcast need to be replaced and the bitcast needs to be erased (just like the malloc call) for OptimizeGlobalAddressOfMalloc() to work correctly.
Update PerformHeapAllocSRoA() to optimize malloc calls that have non-bitcast uses. The bitcast use of the malloc is not handled specially here because ReplaceUsesOfMallocWithGlobal replaces through the bitcast use.
Update OptimizeOnceStoredGlobal() to not care about the malloc calls' bitcast use.
Update all globalopt malloc tests to not rely on autoupgraded-MallocInsts, but instead use explicit malloc calls with correct allocation sizes.
llvm-svn: 86077
The KILL pseudo-instruction may survive to the asm printer pass, just like the IMPLICIT_DEF. Print the KILL as a comment instead of just leaving a blank line in the output.
With -asm-verbose=0, a blank line is printed, like IMPLICIT_DEF.
llvm-svn: 86041
This introduces a new pass, SlotIndexes, which is responsible for numbering
instructions for register allocation (and other clients). SlotIndexes numbering
is designed to match the existing scheme, so this patch should not cause any
changes in the generated code.
For consistency, and to avoid naming confusion, LiveIndex has been renamed
SlotIndex.
The processImplicitDefs method of the LiveIntervals analysis has been moved
into its own pass so that it can be run prior to SlotIndexes. This was
necessary to match the existing numbering scheme.
llvm-svn: 85979
This makes both logical sense (see below) and increases the
number of functions marked readnone/readonly by about 1-2%
in practice. The number of functions marked nocapture goes
up by about 5-10%. The reason it makes sense is shown by
the following example: if you run -functionattrs -inline on
it, then no attributes are assigned. But if you instead run
-inline -functionattrs then @f is marked readnone because the
simplifications produced by the inliner eliminate the store.
@x = external global i32
define void @w(i1 %b) {
  br i1 %b, label %write, label %return
write:
  store i32 1, i32* @x
  br label %return
return:
  ret void
}
define void @f() {
  call void @w(i1 0)
  ret void
}
llvm-svn: 85893
1. We'd run simplifycfg at the very start, even though
the per-function passes have already cleaned this up.
2. In the main per-function pipeline that is interlaced with inlining
etc., we would run instcombine, jump threading, and simplifycfg *before*
doing SROA. SROA is much more likely to expose opportunities for
these passes than they are for SROA, so move SROA up earlier.
Also add some comments.
llvm-svn: 85742
ipconstprop and doesn't take much time. Just run it in its place.
This adds a testcase for it, which I plan to expand to cover other
"integration" cases, where we expect the optimizer to be able to
eliminate various things. Due to phase order issues we've regressed
in a number of areas and integration tests are the only way I see to
prevent this.
llvm-svn: 85729
block with a blockaddress still referring to it' replace the invalid
blockaddress with a new blockaddress(@func, null) instead of an
inttoptr(1).
This changes the bitcode encoding format, and still needs codegen
support (this should produce a non-zero value; referring to the entry
block of the function would also be quite reasonable).
llvm-svn: 85678
MergeBlockIntoPredecessor. This makes SimplifyCFG slightly more aggressive,
and makes it unnecessary for LoopUnroll to have its own copy of this code.
llvm-svn: 85667
unfolding loads for hoisting. getOpcodeAfterMemoryUnfold returns the
opcode of the original operation without the load, not the load
itself; MachineLICM needs to know the operand index in order to get
the correct register class. Extend getOpcodeAfterMemoryUnfold to
return this information.
llvm-svn: 85622
bunch of associated comments, because it doesn't have anything to do
with DAGs or scheduling. This is another step in decoupling MachineInstr
emitting from scheduling.
llvm-svn: 85517
  ArraySize * ElementSize
  ElementSize * ArraySize
  ArraySize << log2(ElementSize)
  ElementSize << log2(ArraySize)
Refactor isArrayMallocHelper and delete isSafeToGetMallocArraySize, so that there is only one copy of the logic that determines the malloc array size.
Update users of getMallocArraySize() to not bother calling isArrayMalloc() as well.
llvm-svn: 85421
Checks on Demand algorithm which looks at arbitrary branches instead of loop
iterations. This is GSoC work by Andre Tavares with only editorial changes
applied!
llvm-svn: 85382
In the new world order, BlockAddress can have a BasicBlock operand.
This doesn't perturb much, because if you have a ConstantExpr (or
anything more specific than Constant) we still know the operand has
to be a Constant.
llvm-svn: 85375
use it to control tail merging when there is a tradeoff between performance
and code size. When there is only 1 instruction in the common tail, we have
been merging. That can be good for code size but is a definite loss for
performance. Now we will avoid tail merging in that case when the
optimization level is "Aggressive", i.e., "-O3". Radar 7338114.
Since the IfConversion pass invokes BranchFolding, it too needs to know
the optimization level. Note that I removed the RegisterPass instantiation
for IfConversion because it required a default constructor. If someone
wants to keep that for some reason, we can add a default constructor with
a hard-wired optimization level.
llvm-svn: 85346
http://llvm.org/PR5184, and beef up the comments to describe what both options
do and the risks of lazy compilation in the presence of threads.
llvm-svn: 85295
being destroyed. This allows users to run global optimizations like globaldce
even after some functions have been jitted.
This patch also removes the Function* parameter to
JITEventListener::NotifyFreeingMachineCode() since it can cause that to be
called when the Function is partially destroyed. This change will be even more
helpful later when I think we'll want to allow machine code to actually outlive
its Function.
llvm-svn: 85182
GEPs (more than one non-zero index) into simple GEPs (at most one
non-zero index). In some simple experiments using this it's not
uncommon to see 3% overall code size wins, because it exposes
redundancies that can be eliminated, however it's tricky to use
because instcombine aggressively undoes the work that this pass does.
llvm-svn: 85144
bootstrapping. It's not safe to leave identity subreg_to_reg and insert_subreg
around.
- Relax register scavenging to allow use of partially "not-live" registers. It's
common for targets to operate on registers where the top bits are undef. e.g.
s0 =
d0 = insert_subreg d0<undef>, s0, 1
...
= d0
When the insert_subreg is eliminated by the coalescer, the scavenger used to
complain. The previous fix was to keep the insert_subreg around. But that's
brittle and it's overly conservative when we want to use the scavenger to
allocate registers. It's actually legal and desirable for other instructions
to use the "undef" part of d0. e.g.
s0 =
d0 = insert_subreg d0<undef>, s0, 1
...
s1 =
= s1
= d0
We probably need to add a "partial-undef" marker on machine operand so the
machine verifier would not complain.
llvm-svn: 85091
used elsewhere - an exit block is a block outside the loop branched to
from within the loop. An exiting block is a block inside the loop that
branches out.
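The distinction is mirrored directly in the Loop API; a minimal sketch:
  #include "llvm/ADT/SmallVector.h"
  #include "llvm/Analysis/LoopInfo.h"
  using namespace llvm;

  void collect(Loop *L) {
    SmallVector<BasicBlock*, 8> ExitBlocks, ExitingBlocks;
    L->getExitBlocks(ExitBlocks);        // blocks outside the loop
    L->getExitingBlocks(ExitingBlocks);  // blocks inside that branch out
  }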
llvm-svn: 85019
Update all analysis passes and transforms to treat free calls just like FreeInst.
Remove RaiseAllocations and all its tests since FreeInst no longer needs to be raised.
llvm-svn: 84987
compiled.
When functions are compiled, they accumulate references in the JITResolver's
stub maps. This patch removes those references when the functions are
destroyed. It's illegal to destroy a Function when any thread may still try to
call its machine code.
This patch also updates r83987 to use ValueMap instead of explicit CallbackVHs
and fixes a couple "do stuff inside assert()" bugs from r84522.
llvm-svn: 84975
even when keys get RAUWed and deleted during its lifetime. By default the keys
act like WeakVHs, but users can pass a third template parameter to configure
how updates work and whether to do anything beyond updating the map on each
action.
It's also possible to automatically acquire a lock around ValueMap updates
triggered by RAUWs and deletes, to support the ExecutionEngine.
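For illustration, a hedged sketch of the default behavior (V and NewV
assumed to be Value* in scope):
  #include "llvm/ADT/ValueMap.h"

  // With the default config, keys behave like WeakVHs: the entry
  // follows V through RAUW and vanishes if the key is deleted.
  llvm::ValueMap<llvm::Value*, unsigned> VM;
  VM[V] = 1;
  V->replaceAllUsesWith(NewV);  // the entry is now keyed by NewV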
llvm-svn: 84890
Analysis/ConstantFolding.cpp. This doesn't change the behavior of
instcombine but makes other clients of ConstantFoldInstruction
able to handle loads. This was partially extracted from Eli's patch
in PR3152.
llvm-svn: 84836
- i < getNumElements() instead of getNumElements() > i
- Make setParent() private
- Fix use of resizeOperands
- Reset HasMetadata bit after removing all metadata attached to an instruction
- Efficient use of iterators
llvm-svn: 84765
JITEmitter.
I'm gradually making Functions auto-remove themselves from the JIT when they're
destroyed. In this case, the Function needs to be removed from the JITEmitter,
but the map recording which Functions need to be removed lived behind the
JITMemoryManager interface, which made things difficult.
This patch replaces the deallocateMemForFunction(Function*) method with a pair
of methods deallocateFunctionBody(void *) and deallocateExceptionTable(void *)
corresponding to the two startFoo/endFoo pairs.
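For illustration, a hedged sketch of the new symmetry (the pointer
arguments are whatever the corresponding start*/end* calls produced):
  // Before: MemMgr->deallocateMemForFunction(F);  // keyed by Function*
  // After: two calls keyed by the buffers themselves, mirroring the
  // startFunctionBody/endFunctionBody and startExceptionTable/
  // endExceptionTable pairs.
  MemMgr->deallocateFunctionBody(FunctionBodyPtr);
  MemMgr->deallocateExceptionTable(ExceptionTablePtr);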
llvm-svn: 84651
appropriate restore location for the spill as well as perform the actual
save and restore.
The Thumb1 target uses this to make sure R12 is not clobbered while a spilled
scavenger register is live there.
llvm-svn: 84554
The JITResolver maps Functions to their canonical stubs and all callsites for
lazily-compiled functions to their target Functions. To make Function
destruction work, I'm going to need to remove all callsites on destruction, so
this patch also adds the reverse mapping for that.
There was an incorrect assumption in here that the only stub for a function
would be the one caused by needing to lazily compile it, while x86-64 far calls
and dlsym stubs could also create such stubs, but I didn't look for a test case
where the assumption actually broke anything.
This also adds DenseMapInfo<AssertingVH> so I can use DenseMaps instead of
std::maps.
llvm-svn: 84522