to the instruction right after the last instruction in the bundle.
- Add a finalizeBundle() variant that doesn't specify LastMI. Instead, the code
will find the last instruction in the bundle by following the 'InsideBundle'
marker. This is useful when bundles are formed early (e.g. during MI
scheduling) but finalized later (e.g. after the register allocator has finished
rewriting virtual registers with physical registers); see the sketch below.
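A minimal sketch of the two call forms (hedged: it assumes the finalizeBundle()
declarations from MachineInstrBundle.h, and the wrapper names are made up for
illustration):

#include "llvm/CodeGen/MachineInstrBundle.h"
using namespace llvm;

// Explicit variant: the caller knows where the bundle ends.
void finalizeKnownBundle(MachineBasicBlock &MBB,
                         MachineBasicBlock::instr_iterator FirstMI,
                         MachineBasicBlock::instr_iterator LastMI) {
  finalizeBundle(MBB, FirstMI, LastMI); // bundles [FirstMI, LastMI)
}

// New variant: walk forward from FirstMI over 'InsideBundle' MIs to find
// the end of the bundle automatically.
void finalizeMarkedBundle(MachineBasicBlock &MBB,
                          MachineBasicBlock::instr_iterator FirstMI) {
  finalizeBundle(MBB, FirstMI);
}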
llvm-svn: 148444
This SelectionDAG node will be attached to call nodes by LowerCall(),
and eventually becomes a MO_RegisterMask MachineOperand on the
MachineInstr representing the call instruction.
LowerCall() will attach a register mask that depends on the calling
convention.
llvm-svn: 148436
Register masks will be used as a compact representation of large clobber
lists. Currently, an x86 call instruction has some 40 operands
representing call-clobbered registers. That's more than 1kB of useless
operands per call site.
A register mask operand references a bit mask of call-preserved
registers, everything else is clobbered. The bit mask will typically
come from TargetRegisterInfo::getCallPreservedMask().
By abandoning ImplicitDefs for call-clobbered registers, it also becomes
possible to share call instruction descriptions between calling
conventions, and we can get rid of the WINCALL* instructions.
This patch introduces the new operand kind. Future patches will add
RegMask support to target-independent passes before the fixed clobber
lists can finally be removed from call instruction descriptions.
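A rough sketch of how a target might attach such an operand (illustrative
only; addRegMask() and getCallPreservedMask() are the intended hooks, but the
exact signatures have varied across LLVM revisions):

#include "llvm/CodeGen/MachineInstrBuilder.h"
#include "llvm/Target/TargetRegisterInfo.h"

// One RegMask operand stands in for the ~40 implicit-def operands.
static void addCallClobbers(llvm::MachineInstrBuilder &MIB,
                            const llvm::TargetRegisterInfo &TRI,
                            llvm::CallingConv::ID CC) {
  const uint32_t *Mask = TRI.getCallPreservedMask(CC);
  MIB.addRegMask(Mask); // registers *not* set in Mask are clobbered
}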
llvm-svn: 148250
Consider this code:
int h() {
  int x;
  try {
    x = f();
    g();
  } catch (...) {
    return x+1;
  }
  return x;
}
The variable x is undefined on the first edge to the landing pad, but it
has the f() return value on the second edge to the landing pad.
SplitAnalysis::getLastSplitPoint() would assume that the return value
from f() was live into the landing pad when f() throws, which is of
course impossible.
Detect these cases, and treat them as if the landing pad wasn't there.
This allows spill code to be inserted after the function call to f().
<rdar://problem/10664933>
llvm-svn: 147912
Delete the alternative implementation in LiveIntervalAnalysis.
These functions computed the same thing, but SplitAnalysis caches the
result.
llvm-svn: 147911
opportunities that only present themselves after late optimizations
such as tail duplication, e.g.:
## BB#1:
        movl %eax, %ecx
        movl %ecx, %eax
        ret
The register allocator also leaves some of them around (due to false
dependencies between copies from phi-elimination, etc.).
This required some changes in codegen passes. The post-RA scheduler and the
pseudo-instruction expansion passes have been moved after branch folding
and tail merging. They used to run before branch folding because it did
not always update block live-ins; that's fixed now. The pass reordering makes
sense independently, since we want to properly schedule instructions after
branch folding / tail duplication.
rdar://10428165
rdar://10640363
llvm-svn: 147716
The register allocators don't currently support adding reserved
registers while they are running. Extend the MRI API to keep track of
the set of reserved registers when register allocation started.
Target hooks like hasFP() and needsStackRealignment() can look at this
set to avoid reserving more registers during register allocation.
llvm-svn: 147577
to finalize MI bundles (i.e. add the BUNDLE instruction and compute the register def
and use lists of the BUNDLE instruction) and a pass to unpack bundles.
- Teach more of the MachineBasicBlock and MachineInstr methods to be bundle aware.
- Switch Thumb2 IT block to MI bundles and delete the hazard recognizer hack to
prevent IT blocks from being broken apart.
llvm-svn: 146542
undefined result. This adds new ISD nodes for the new semantics,
selecting them when the LLVM intrinsic indicates that the undef behavior
is desired. The new nodes expand trivially to the old nodes, so targets
don't actually need to do anything to support these new nodes besides
indicating that they should be expanded. I've done this for all the
operand types that I could figure out for all the targets. Owners of
various targets, please review and let me know if any of these are
incorrect.
Note that the expand behavior is *conservatively correct*, and exactly
matches LLVM's current behavior with these operations. Ideally this
patch will not change behavior in any way. For example the regtest suite
finds the exact same instruction sequences coming out of the code
generator. That's why there are no new tests here -- all of this is
being exercised by the existing test suite.
Thanks to Duncan Sands for reviewing the various bits of this patch and
helping me get the wrinkles ironed out with expanding for each target.
Also thanks to Chris for clarifying through all the discussions that
this is indeed the approach he was looking for. That said, there are
likely still rough spots. Further review much appreciated.
llvm-svn: 146466
clients to decide whether to look inside bundled instructions and whether
the query should return true if any / all bundled instructions have the
queried property.
llvm-svn: 146168
generator to it. For non-bundle instructions, these behave exactly the same
as the MC layer API.
For properties like mayLoad / mayStore, the query looks into the bundle and
returns true if any of the bundled instructions has the property.
For properties like isPredicable, only return true if *all* of the bundled
instructions have the property.
For properties like canFoldAsLoad, isCompare, conservatively return false for
bundles.
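For illustration, the two query flavours might be used like this (a hedged
sketch based on MachineInstr's AnyInBundle / AllInBundle query types; not code
from the patch itself):

#include "llvm/CodeGen/MachineInstr.h"

// "Any" semantics: true if any instruction in the bundle may load.
bool bundleMayLoad(const llvm::MachineInstr &MI) {
  return MI.mayLoad(llvm::MachineInstr::AnyInBundle);
}

// "All" semantics: true only if every bundled instruction is predicable.
bool bundleIsPredicable(const llvm::MachineInstr &MI) {
  return MI.isPredicable(llvm::MachineInstr::AllInBundle);
}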
llvm-svn: 146026
This flag is used when bundling machine instructions. It indicates
whether the operand reads a value defined inside or outside its bundle.
llvm-svn: 145997
1. Added opcode BUNDLE
2. Taught MachineInstr class to deal with bundled MIs
3. Changed the MachineBasicBlock iterator to skip over bundled MIs; added an iterator to walk all the MIs (see the sketch below)
4. Taught MachineBasicBlock methods about bundled MIs
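A rough sketch of the two iteration modes (assuming the instr_begin() /
instr_end() names; treat it as illustrative rather than part of this change):

#include "llvm/CodeGen/MachineBasicBlock.h"

void walkBlock(llvm::MachineBasicBlock &MBB) {
  // Bundle-level view: bundled MIs are skipped, one entry per bundle.
  for (llvm::MachineBasicBlock::iterator I = MBB.begin(), E = MBB.end();
       I != E; ++I) {
    // ...
  }
  // Flat view: visits every MachineInstr, including those inside bundles.
  for (llvm::MachineBasicBlock::instr_iterator I = MBB.instr_begin(),
       E = MBB.instr_end(); I != E; ++I) {
    // ...
  }
}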
llvm-svn: 145975
This was actually a bit of a mess. TLI.setPrefLoopAlignment was clearly
documented as taking log2(bytes) units, but the x86 target would still
set a preferred loop alignment of '16'.
CodePlacementOpt passed this number on to the basic block, and
AsmPrinter interpreted it as bytes.
Now both MachineFunction and MachineBasicBlock use logarithmic
alignments.
Obviously, MachineConstantPool still measures alignments in bytes, so we
can emulate the thrill of using 'as'.
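For reference, a minimal sketch of the unit confusion (not code from the
patch):

// setPrefLoopAlignment() and MachineBasicBlock alignments use log2 units.
unsigned prefAlignmentInBytes(unsigned Log2Align) {
  return 1u << Log2Align;   // Log2Align == 4  ->  16-byte alignment
}
// Passing '16' where log2 units are expected silently asks for
// 2^16 = 65536-byte alignment instead of 16 bytes.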
llvm-svn: 145889
This is a patch by Guoping Long!
As part of utilizing the LLVM dominator computation in Clang, this patch makes two changes to the LLVM dominator tree implementation:
- (1) Change the recalculate() template function to only rely on GraphTraits.
- (2) Add a size() method to GraphTraits template class to query the number of nodes in the graph.
llvm-svn: 145837
change; now you need a TargetOptions object to create a TargetMachine. Clang
patch to follow.
One small functionality change in PTX. PTX had commented out the machine
verifier parts in their copy of printAndVerify. That now calls the version in
LLVMTargetMachine. Users of PTX who need verification disabled should rely on
not passing the command-line flag to enable it.
llvm-svn: 145714
and code model. This eliminates the need to pass the OptLevel flag all over the
place and makes it possible for any codegen pass to use this information.
llvm-svn: 144788
time it is queried to compute the probability of a single successor.
This makes computing the probability of every successor of a block in
sequence... really really slow. ;] This switches to a linear walk of the
successors rather than a quadratic one. One of several quadratic
behaviors slowing this pass down.
I'm not really thrilled with moving the sum code into the public
interface of MBPI, but I don't (at the moment) have ideas for a better
interface. The direction I'm thinking of for a better interface is to
have MBPI actually retain much more state and make *all* of these
queries cheap. That's a lot of work, and would require invasive changes.
Until then, this seems like the least bad (ie, least quadratic)
solution. Suggestions welcome.
llvm-svn: 144530
correctly handle blocks whose successor weights sum to more than
UINT32_MAX. This is slightly less efficient, but the entire thing is
already linear on the number of successors. Calling it within any hot
routine is a mistake, and indeed no one is calling it. It also
simplifies the code.
llvm-svn: 144527
the sum of the edge weights not overflowing uint32, and crashed when
they did. This is generally safe as BranchProbabilityInfo tries to
provide this guarantee. However, the CFG can get modified during codegen
in a way that grows the *sum* of the edge weights. This doesn't seem
unreasonable (imagine just adding more blocks all with the default
weight of 16), but it is hard to come up with a case that actually
triggers 32-bit overflow. Fortunately, the single-source GCC build is
good at this. The solution isn't very pretty, but it's no worse than the
previous code. We're already summing all of the edge weights on each
query; we can sum them, check for overflow, compute a scale, and sum
them again.
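In outline, the fix amounts to something like the following (a minimal
standalone sketch, not the actual MBPI code):

#include <cstdint>
#include <vector>

// Sum successor weights; if the 64-bit sum exceeds UINT32_MAX, divide every
// weight by a scale so the rescaled sum fits in 32 bits again.
uint32_t sumWeightsSafely(const std::vector<uint32_t> &Weights) {
  uint64_t Sum = 0;
  for (size_t i = 0; i != Weights.size(); ++i)
    Sum += Weights[i];
  uint32_t Scale = Sum > UINT32_MAX ? uint32_t(Sum / UINT32_MAX + 1) : 1;
  uint32_t ScaledSum = 0;
  for (size_t i = 0; i != Weights.size(); ++i)
    ScaledSum += Weights[i] / Scale;  // guaranteed to fit now
  return ScaledSum;
}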
I've included a *greatly* reduced test case out of the GCC source that
triggers it. It's a pretty lame test, as it clearly is just barely
triggering the overflow. I'd like to have something that is much more
definitive, but I don't understand the fundamental pattern that triggers
an explosion in the edge weight sums.
The buggy code is duplicated within this file. I'll collapse the two copies
into a single implementation in a subsequent commit.
llvm-svn: 144526
The old naming scheme (load/use/def/store) can be traced back to an old
linear scan article, but the names don't match how slots are actually
used.
The load and store slots are not needed after the deferred spill code
insertion framework was deleted.
The use and def slots don't make any sense because we are using
half-open intervals as is customary in C code, but the names suggest
closed intervals. In reality, these slots were used to distinguish
early-clobber defs from normal defs.
The new naming scheme also has 4 slots, but the names match how the
slots are really used. This is a purely mechanical renaming, but some
of the code makes a lot more sense now.
llvm-svn: 144503
RegAllocGreedy has been the default for six months now.
Deleting RegAllocLinearScan makes it possible to also delete
VirtRegRewriter and clean up the spiller code.
llvm-svn: 144475
When this field is true it means that the load is from constant (run-time or compile-time) memory and so can be hoisted out of loops or moved around other memory accesses.
llvm-svn: 144100
the mailing list. Suggestions for other statistics to collect would be
awesome. =]
Currently these are implemented as a separate pass guarded by a separate
flag. I'm not thrilled by that, but I wanted to be able to collect the
statistics for the old code placement as well as the new in order to
have a point of comparison. I'm planning on folding them into the single
pass if / when there is only one pass of interest.
llvm-svn: 143537
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms occurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface it can work from any (or a combination) of these
inputs.
3) It uses a more aggressive algorithm, both building chains from the
   bottom up to maximize benefit, and using an SCC-based walk to lay out
   chains of blocks in a profitable ordering without the O(N^2) iterations
   the old pass involves.
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
In machine code, you can't just replaceRegWith() the same way you can
replaceAllUsesWith() in IR. Virtual registers may have different
register classes that need to be merged first.
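A hedged sketch of the safe pattern (a hypothetical helper, not the patch
itself): merge the register classes with constrainRegClass() before
replacing.

#include "llvm/CodeGen/MachineRegisterInfo.h"

// Returns false if FromReg and ToReg have no common register class.
bool replaceVirtRegUses(llvm::MachineRegisterInfo &MRI,
                        unsigned FromReg, unsigned ToReg) {
  if (!MRI.constrainRegClass(ToReg, MRI.getRegClass(FromReg)))
    return false;                       // incompatible classes; bail out
  MRI.replaceRegWith(FromReg, ToReg);   // now safe: classes are merged
  return true;
}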
llvm-svn: 142201
Most instructions have some requirements for their register operands.
Usually, this is expressed as register class constraints in the
MCInstrDesc, but for inline assembly the constraints are encoded in the
flag words.
llvm-svn: 141835
The <undef> flag says that a MachineOperand doesn't read its register,
or doesn't depend on the previous value of its register.
A full register def never depends on the previous register value. A
partial register def may depend on the previous value if it is intended
to update part of a register.
For example:
%vreg10:dsub_0<def,undef> = COPY %vreg1
%vreg10:dsub_1<def> = COPY %vreg2
The first copy instruction defines the full %vreg10 register with the
bits not covered by dsub_0 defined as <undef>. It is not considered a
read of %vreg10.
The second copy modifies part of %vreg10 while preserving the rest. It
has an implicit read of %vreg10.
This patch adds a MachineOperand::readsReg() method to determine if an
operand reads its register.
Previously, this was modelled by adding a full-register <imp-def>
operand to the instruction. This approach makes it possible to
determine directly from a MachineOperand if it reads its register. No
scanning of MI operands is required.
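A minimal usage sketch (illustrative; readsReg() is the new query):

#include "llvm/CodeGen/MachineOperand.h"

// True for plain uses and for partial defs without the <undef> flag,
// since those preserve (and therefore read) the untouched lanes.
bool operandReadsItsReg(const llvm::MachineOperand &MO) {
  return MO.isReg() && MO.readsReg();
}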
llvm-svn: 141124
This intrinsic is used to pass the index of the function context to the back-end
for further processing. The back-end is in charge of filling in the rest of the
entries.
llvm-svn: 140676
This also enables domain swizzling for AVX code which required a few
trivial test changes.
The pass will be moved to lib/CodeGen shortly.
llvm-svn: 140659
The function will refuse to use a register class with fewer registers
than MinNumRegs. This can be used by clients to avoid accidentally
increasing register pressure too much.
The default value of MinNumRegs=0 doesn't affect how constrainRegClass()
works.
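An illustrative call with a made-up threshold (the helper name and the value
4 are hypothetical):

#include "llvm/CodeGen/MachineRegisterInfo.h"

// Refuse register classes with fewer than 4 registers so constraining a
// virtual register cannot create excessive register pressure.
const llvm::TargetRegisterClass *
tryConstrain(llvm::MachineRegisterInfo &MRI, unsigned VirtReg,
             const llvm::TargetRegisterClass *RequiredRC) {
  return MRI.constrainRegClass(VirtReg, RequiredRC, /*MinNumRegs=*/4);
}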
llvm-svn: 140339
The getPrevIndex() function moves to the same slot in the previous
instruction. For getVNInfoBefore(), we just need the previous slot in
the same instruction.
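Roughly, the distinction looks like this (a sketch under the assumption that
getVNInfoBefore(Idx) is equivalent to getVNInfoAt(Idx.getPrevSlot())):

#include "llvm/CodeGen/LiveInterval.h"
#include "llvm/CodeGen/SlotIndexes.h"

llvm::VNInfo *valueBefore(const llvm::LiveInterval &LI, llvm::SlotIndex Idx) {
  // getPrevSlot(): the previous slot within the same instruction, which is
  // all getVNInfoBefore() needs.  getPrevIndex() would jump to the same
  // slot of the previous instruction -- one instruction too far back.
  return LI.getVNInfoAt(Idx.getPrevSlot());
}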
llvm-svn: 139793
There is only one legitimate use remaining, in addIntervalsForSpills().
All other calls to hasPHIKill() are only used to update PHIKill flags.
The addIntervalsForSpills() function is part of the old spilling
framework, only used by linearscan.
llvm-svn: 139783
It is conservatively correct to keep the hasPHIKill flags, even after
deleting PHI-defs.
The calculation can be very expensive after taildup has created a
quadratic number of indirectbr edges in the CFG, and the hasPHIKill flag
isn't used for anything after RenumberValues().
llvm-svn: 139780
(The fix for the related failures on x86 is going to be nastier because we actually need Acquire memoperands attached to the atomic load instrs, etc.)
llvm-svn: 139221
with a vector condition); such selects become VSELECT codegen nodes.
This patch also removes VSETCC codegen nodes, unifying them with SETCC
nodes (codegen was actually often using SETCC for vector SETCC already).
This ensures that various DAG combiner optimizations kick in for vector
comparisons. Passes dragonegg bootstrap with no testsuite regressions
(nightly testsuite as well as "make check-all"). Patch mostly by
Nadav Rotem.
llvm-svn: 139159
init.trampoline and adjust.trampoline intrinsics, into two intrinsics
like in GCC. While having one combined intrinsic is tempting, it is
not natural because typically the trampoline initialization needs to
be done in one function, and the result of adjust.trampoline is needed
in a different (nested) function. To get around this, llvm-gcc hacks the
nested function lowering code to insert an additional parent variable
holding the adjust.trampoline result that can be accessed from the child
function. Dragonegg doesn't have the luxury of tweaking GCC code, so it
stored the result of adjust.trampoline in the memory GCC set aside for
the trampoline itself (this is always available in the child function),
and set up some new memory (using an alloca) to hold the trampoline.
Unfortunately this breaks Go which allocates trampoline memory on the
heap and wants to use it even after the parent has exited (!). Rather
than doing even more hacks to get Go working, it seemed best to just use
two intrinsics like in GCC. Patch mostly by Sanjoy Das.
llvm-svn: 139140
The landingpad instruction is lowered into the EXCEPTIONADDR and EHSELECTION
SDNodes. The information from the landingpad instruction is harvested by the
'AddLandingPadInfo' function. The new EH uses the current EH scheme in the
back-end. This will change once we switch over to the new scheme. (Reviewed by
Jakob!)
llvm-svn: 137880
This function doesn't have anything to do with spill weights, and MRI
already has functions for manipulating the register class of a virtual
register.
llvm-svn: 137123
This flag is true from isel to register allocation when the machine
function is required to be in SSA form. The TwoAddressInstructionPass
and PHIElimination passes clear the flag.
The SSA flag will be used by the machine code verifier to check for SSA
form, and eventually an assertion can enforce it in +Asserts builds.
This will catch the common target error of creating machine code with
multiple defs of a virtual register.
llvm-svn: 136532
working on x86 (at least for trivial testcases); other architectures will
need more work so that they actually emit the appropriate instructions for
orderings stricter than 'monotonic'. (As far as I can tell, the ARM, PPC,
Mips, and Alpha backends need such changes.)
llvm-svn: 136457
to MCRegisterInfo. Also initialize the mapping at construction time.
This patch eliminates TargetRegisterInfo from TargetAsmInfo. It's another step
towards fixing the layering violation.
llvm-svn: 135424
This gets rid of some of the gory splitting details in RAGreedy and
makes them available to future SplitKit clients.
Slightly generalize the functionality to support multi-way splitting.
Specifically, SplitEditor::splitLiveThroughBlock() supports switching
between different register intervals in a block.
llvm-svn: 135307
RAGreedy::tryAssign will now evict interference from the preferred
register even when another register is free.
To support this, add the EvictionCost struct that counts how many hints
are broken by an eviction. We don't want to break one hint just to
satisfy another.
Rename canEvict to shouldEvict, and add the first bit of eviction policy
that doesn't depend on spill weights: Always make room in the preferred
register as long as the evictees can be split and aren't already
assigned to their preferred register.
Also make the CSR avoidance more accurate. When looking for a cheaper
register it is OK to use a new volatile register. Only CSR aliases that
have never been used before should be avoided.
llvm-svn: 134735
hasPredecessorHelper function allows predecessors to be cached to speed up
repeated invocations. This fixes PR10186.
X.isPredecessorOf(Y) now just calls Y.hasPredecessor(X)
Y.hasPredecessor(X) calls Y.hasPredecessorHelper(X, Visited, Worklist) with
empty Visited and Worklist sets (i.e. no caching over invocations).
Y.hasPredecessorHelper(X, Visited, Worklist) caches search state in Visited
and Worklist to speed up repeated calls. The Visited set is searched for X
before going to the worklist to further search the DAG if necessary.
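A usage sketch taken from the description above (the surrounding function and
node names are hypothetical; the container sizes match the helper's parameters
at this revision):

#include "llvm/ADT/SmallPtrSet.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/CodeGen/SelectionDAGNodes.h"

// Reuse Visited/Worklist so the second query benefits from the first
// query's search state instead of re-walking the whole DAG.
bool bothPrecede(llvm::SDNode *Y, llvm::SDNode *X, llvm::SDNode *Z) {
  llvm::SmallPtrSet<const llvm::SDNode *, 32> Visited;
  llvm::SmallVector<const llvm::SDNode *, 16> Worklist;
  return Y->hasPredecessorHelper(X, Visited, Worklist) &&
         Y->hasPredecessorHelper(Z, Visited, Worklist);
}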
llvm-svn: 134592
Add a MI->emitError() method that the backend can use to report errors
related to inline assembly. Call it from X86FloatingPoint.cpp when the
constraints are wrong.
This enables proper clang diagnostics from the backend:
$ clang -c pr30848.c
pr30848.c:5:12: error: Inline asm output regs must be last on the x87 stack
__asm__ ("" : "=u" (d)); /* { dg-error "output regs" } */
^
1 error generated.
llvm-svn: 134307
sink them into MC layer.
- Added MCInstrInfo, which captures the tablegen-generated static data. Changed
TargetInstrInfo so it's based off MCInstrInfo.
llvm-svn: 134021
BranchProbabilityInfo (except setEdgeWeight, which is not available here).
Branch Weights are kept in MachineBasicBlocks. To turn off this analysis
set -use-mbpi=false.
llvm-svn: 133184
suffix (e.g. .xdata$myfunc). The suffix part isn't implemented yet, but
I'll get to it in the next patch.
Fix up all callers of the affected functions. Make them pass said suffix to
the function.
llvm-svn: 132205
When instructions are deleted, they leave tombstone SlotIndex entries.
The isZeroLength method should ignore these null indexes.
This causes RABasic to sometimes spill a callee-saved register in the
abi-isel.ll test, so don't run that test with -regalloc=basic. Prioritizing
register allocation according to spill weight can cause more registers to be
used.
llvm-svn: 131436
this clang will use .debug_frame in, for example,
clang -g -c -m32 test.c
This matches gcc's behaviour. It looks like .debug_frame is a bit bigger
than .eh_frame, but has the big advantage of not being allocated.
llvm-svn: 131140
This makes a difference if a live interval is referring to a deleted
instruction. It can be important to insert an instruction before or after a
deleted instruction to avoid interference.
llvm-svn: 130686
give it a bit more responsibility. Also implement it for MachO.
If hacked to use cfi, 32 bit MachO will produce
.cfi_personality 155, L___gxx_personality_v0$non_lazy_ptr
and 64 bit will produce
.cfi_personality ___gxx_personality_v0
The general idea is that .cfi_personality gets passed the final symbol. It is
up to codegen to produce it if using indirect representation (like 32 bit
MachO), but it is up to MC to decide which relocations to create.
llvm-svn: 130341
more callee-saved registers and introduce copies. Only allows it if scheduling
a node above calls would end up lessening register pressure.
Call operands also have added ABI restrictions for register allocation, so be
extra careful with hoisting them above calls.
rdar://9329627
llvm-svn: 130245
This has two effects: 1. We never inflate to a larger register class than what
the sub-target can handle. 2. Completely unconstrained virtual registers get the
largest possible register class.
llvm-svn: 130229