mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-11-01 16:33:37 +01:00
Commit Graph

3652 Commits

Author SHA1 Message Date
Evan Cheng
8af07ba749 Added a late machine instruction copy propagation pass. This catches
opportunities that only present themselves after late optimizations
such as tail duplication, e.g.:
## BB#1:
        movl    %eax, %ecx
        movl    %ecx, %eax
        ret

The register allocator also leaves some of them around (due to false
dep between copies from phi-elimination, etc.)

This required some changes in codegen passes. The post-RA scheduler and the
pseudo-instruction expansion passes have been moved after branch folding
and tail merging. They previously ran before branch folding because it did
not always update block live-ins; that's fixed now. The pass reordering also
makes sense independently, since we want to properly schedule instructions
after branch folding / tail duplication.

rdar://10428165
rdar://10640363

llvm-svn: 147716
2012-01-07 03:02:36 +00:00
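
For illustration, a minimal sketch of the forward copy propagation idea over a
simplified move list (the Move type and helper are hypothetical; the real pass
works on MachineInstrs and must also invalidate its map whenever a source or
destination register is clobbered):

    #include <map>
    #include <vector>

    struct Move { unsigned Dst, Src; };

    // Forward-propagate copies: track the original source of each copy,
    // and drop any move that would copy a register back onto the value
    // it already holds.
    std::vector<Move> propagateCopies(const std::vector<Move> &Moves) {
      std::map<unsigned, unsigned> CopyOf; // Dst -> original Src
      std::vector<Move> Out;
      for (Move M : Moves) {
        // Look through earlier copies to find the original source.
        auto It = CopyOf.find(M.Src);
        unsigned Src = (It != CopyOf.end()) ? It->second : M.Src;
        if (Src == M.Dst)
          continue;            // e.g. %ecx = %eax; %eax = %ecx -> drop
        CopyOf[M.Dst] = Src;
        Out.push_back({M.Dst, Src});
      }
      return Out;
    }
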
Benjamin Kramer
509b7e6c60 Kill ObjectCodeEmitter and BinaryObject; they were unused and superseded by MC.
llvm-svn: 147618
2012-01-05 22:31:37 +00:00
Andrew Trick
f4817ef455 comment cleanup
llvm-svn: 147585
2012-01-05 01:01:01 +00:00
Jakob Stoklund Olesen
a33612c1f2 Freeze reserved registers before starting register allocation.
The register allocators don't currently support adding reserved
registers while they are running.  Extend the MRI API to keep track of
the set of reserved registers when register allocation started.

Target hooks like hasFP() and needsStackRealignment() can look at this
set to avoid reserving more registers during register allocation.

llvm-svn: 147577
2012-01-05 00:26:49 +00:00
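
A hedged sketch of the shape of the MRI extension (names are illustrative,
not necessarily the exact ones added here):

    #include <vector>

    // Sketch: snapshot the reserved-register set once, when register
    // allocation starts, so target hooks see a stable set afterwards.
    class RegInfoSketch {
      std::vector<bool> ReservedRegs; // frozen at the start of regalloc
    public:
      void freezeReservedRegs(std::vector<bool> Reserved) {
        ReservedRegs = std::move(Reserved);
      }
      // Hooks like hasFP() / needsStackRealignment() consult this
      // instead of reserving additional registers mid-allocation.
      bool isReserved(unsigned PhysReg) const {
        return PhysReg < ReservedRegs.size() && ReservedRegs[PhysReg];
      }
    };
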
Benjamin Kramer
dfab20fd0c Simplify more DenseMap.find users.
llvm-svn: 147550
2012-01-04 21:41:24 +00:00
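
Presumably the kind of cleanup involved: replacing the find()/end() dance
with DenseMap::lookup(), which returns the mapped value or a
default-constructed one when the key is absent, roughly:

    #include "llvm/ADT/DenseMap.h"
    using namespace llvm;

    int before(const DenseMap<int, int> &M, int K) {
      DenseMap<int, int>::const_iterator It = M.find(K);
      return It == M.end() ? 0 : It->second; // verbose find/end dance
    }

    int after(const DenseMap<int, int> &M, int K) {
      return M.lookup(K); // value, or int() == 0 if the key is absent
    }
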
Jakob Stoklund Olesen
893037ce23 Move common code into an MRI function.
llvm-svn: 147071
2011-12-21 19:50:05 +00:00
Jakub Staszak
6f3beda2b4 Add some constness to BranchProbabilityInfo and BlockFrequencyInfo.
llvm-svn: 146986
2011-12-20 20:03:10 +00:00
David Blaikie
576aba04f1 Unweaken vtables as per http://llvm.org/docs/CodingStandards.html#ll_virtual_anch
llvm-svn: 146960
2011-12-20 02:50:00 +00:00
Chris Lattner
f60c5dadf8 fix typo
llvm-svn: 146940
2011-12-20 01:11:37 +00:00
Dan Gohman
7940e4e81d Add basic generic CodeGen support for half.
llvm-svn: 146927
2011-12-20 00:02:33 +00:00
Joerg Sonnenberger
8cf8d64d19 Allow inlining of functions with returns_twice calls, if they have the
attribute themselves.

llvm-svn: 146851
2011-12-18 20:35:43 +00:00
Devang Patel
b8b77459d6 Update DebugLoc while merging nodes at -O0.
Patch by Kyriakos Georgiou!

llvm-svn: 146670
2011-12-15 18:21:18 +00:00
Evan Cheng
68ba5536f3 - Add MachineInstrBundle.h and MachineInstrBundle.cpp. This includes a function
  to finalize MI bundles (i.e. add the BUNDLE instruction and compute the register def
  and use lists of the BUNDLE instruction) and a pass to unpack bundles.
- Teach more of the MachineBasicBlock and MachineInstr methods to be bundle aware.
- Switch Thumb2 IT block to MI bundles and delete the hazard recognizer hack to
  prevent IT blocks from being broken apart.

llvm-svn: 146542
2011-12-14 02:11:42 +00:00
Chandler Carruth
e0484f6b37 Initial CodeGen support for CTTZ/CTLZ where a zero input produces an
undefined result. This adds new ISD nodes for the new semantics,
selecting them when the LLVM intrinsic indicates that the undef behavior
is desired. The new nodes expand trivially to the old nodes, so targets
don't actually need to do anything to support these new nodes besides
indicating that they should be expanded. I've done this for all the
operand types that I could figure out for all the targets. Owners of
various targets, please review and let me know if any of these are
incorrect.

Note that the expand behavior is *conservatively correct*, and exactly
matches LLVM's current behavior with these operations. Ideally this
patch will not change behavior in any way. For example the regtest suite
finds the exact same instruction sequences coming out of the code
generator. That's why there are no new tests here -- all of this is
being exercised by the existing test suite.

Thanks to Duncan Sands for reviewing the various bits of this patch and
helping me get the wrinkles ironed out with expanding for each target.
Also thanks to Chris for clarifying through all the discussions that
this is indeed the approach he was looking for. That said, there are
likely still rough spots. Further review much appreciated.

llvm-svn: 146466
2011-12-13 01:56:10 +00:00
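
A sketch of the semantics in plain C++ (not the actual ISD expansion code):
the _ZERO_UNDEF variant may assume a non-zero input, and the conservative
expansion simply forwards to the fully defined form, which is why behavior
is unchanged:

    #include <cstdint>

    uint32_t cttz(uint32_t X) {          // defined at X == 0: returns 32
      if (X == 0) return 32;
      uint32_t N = 0;
      while ((X & 1) == 0) { X >>= 1; ++N; }
      return N;
    }

    uint32_t cttz_zero_undef(uint32_t X) { // caller guarantees X != 0
      return cttz(X); // conservative, always-correct expansion
    }
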
Nick Lewycky
d25c68a628 Fix typo, reported by Eitan Adler!
llvm-svn: 146316
2011-12-10 03:16:20 +00:00
Chad Rosier
7e0dc23863 [fast-isel] Add support for selecting insertvalue.
rdar://10530851

llvm-svn: 146276
2011-12-09 20:09:54 +00:00
Owen Anderson
285891eccf Enhance both TargetLibraryInfo and SelectionDAGBuilder so that the latter can use the former to prevent the formation of libm SDNodes when -fno-builtin is passed.
llvm-svn: 146193
2011-12-08 22:15:21 +00:00
Evan Cheng
320b2be38c Make MachineInstr instruction property queries more flexible. This change allows
clients to decide whether to look inside bundled instructions and whether
the query should return true if any / all bundled instructions have the
queried property.

llvm-svn: 146168
2011-12-08 19:23:10 +00:00
Evan Cheng
1acd685d87 Add bundle aware API for querying instruction properties and switch the code
generator to it. For non-bundle instructions, these behave exactly the same
as the MC layer API.

For properties like mayLoad / mayStore, the query looks into the bundle and
returns true if any of the bundled instructions has the property.
For properties like isPredicable, it returns true only if *all* of the
bundled instructions have the property.
For properties like canFoldAsLoad and isCompare, it conservatively returns
false for bundles.

llvm-svn: 146026
2011-12-07 07:15:52 +00:00
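
A sketch of the any/all semantics over a bundle, using a hypothetical
stand-in type rather than MachineInstr:

    #include <vector>

    struct InstrSketch {
      bool MayLoad, Predicable;
    };

    // mayLoad-style query: true if *any* bundled instruction has it.
    bool bundleMayLoad(const std::vector<InstrSketch> &Bundle) {
      for (const InstrSketch &I : Bundle)
        if (I.MayLoad) return true;
      return false;
    }

    // isPredicable-style query: true only if *all* of them do.
    bool bundleIsPredicable(const std::vector<InstrSketch> &Bundle) {
      for (const InstrSketch &I : Bundle)
        if (!I.Predicable) return false;
      return !Bundle.empty();
    }
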
Jakob Stoklund Olesen
e612fdbbab Add MachineOperand IsInternalRead flag.
This flag is used when bundling machine instructions.  It indicates
whether the operand reads a value defined inside or outside its bundle.

llvm-svn: 145997
2011-12-07 00:22:07 +00:00
Evan Cheng
5061553f9d First chunk of MachineInstr bundle support.
1. Added opcode BUNDLE
2. Taught MachineInstr class to deal with bundled MIs
3. Changed MachineBasicBlock iterator to skip over bundled MIs; added an iterator to walk all the MIs
4. Taught MachineBasicBlock methods about bundled MIs

llvm-svn: 145975
2011-12-06 22:12:01 +00:00
Sebastian Pop
182ae6a6fa use space star instead of star space
llvm-svn: 145944
2011-12-06 17:34:16 +00:00
Sebastian Pop
cb55bb22ab add missing period at the end of sentences
llvm-svn: 145943
2011-12-06 17:34:11 +00:00
Jakob Stoklund Olesen
e53ed273d9 Use logarithmic units for basic block alignment.
This was actually a bit of a mess. TLI.setPrefLoopAlignment was clearly
documented as taking log2(bytes) units, but the x86 target would still
set a preferred loop alignment of '16'.

CodePlacementOpt passed this number on to the basic block, and
AsmPrinter interpreted it as bytes.

Now both MachineFunction and MachineBasicBlock use logarithmic
alignments.

Obviously, MachineConstantPool still measures alignments in bytes, so we
can emulate the thrill of using the 'as' assembler.

llvm-svn: 145889
2011-12-06 01:26:19 +00:00
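
In these units an alignment field stores log2(bytes), so the conversion each
way is a shift; a small sketch, which also shows the bug described above:

    #include <cassert>

    // Alignment is stored as log2(bytes): 4 means 16-byte alignment.
    unsigned alignmentInBytes(unsigned LogAlign) { return 1u << LogAlign; }

    unsigned log2Alignment(unsigned Bytes) {
      assert(Bytes && (Bytes & (Bytes - 1)) == 0 && "power of two");
      unsigned Log = 0;
      while ((1u << Log) < Bytes) ++Log;
      return Log;
    }
    // The x86 bug: passing '16' where log2 units were expected requests
    // 2^16-byte alignment instead of 16 bytes.
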
Jakob Stoklund Olesen
faa1b18b38 Fix unclear wording.
llvm-svn: 145882
2011-12-06 00:51:09 +00:00
Anna Zaks
431b43fdbe Change the Dominators recalculate() function to only rely on GraphTraits
This is a patch by Guoping Long!

As part of utilizing LLVM's dominator computation in Clang, this makes two changes to the LLVM dominator tree implementation:

 - (1) Change the recalculate() template function to only rely on GraphTraits.
 - (2) Add a size() method to GraphTraits template class to query the number of nodes in the graph.

llvm-svn: 145837
2011-12-05 19:17:04 +00:00
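
Roughly what change (2) looks like for a hypothetical graph type
(illustrative names, not the real GraphTraits specialization):

    #include <vector>

    struct NodeSketch { std::vector<NodeSketch *> Succs; };
    struct GraphSketch { std::vector<NodeSketch *> Nodes; };

    // GraphTraits-style adapter: recalculate() can query everything it
    // needs, including the node count, through the traits class alone.
    template <class G> struct GraphTraitsSketch;

    template <> struct GraphTraitsSketch<GraphSketch *> {
      static NodeSketch *getEntryNode(GraphSketch *G) {
        return G->Nodes.front();
      }
      static unsigned size(GraphSketch *G) {     // change (2)
        return (unsigned)G->Nodes.size();
      }
    };
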
Nick Lewycky
7d0d3c2d58 Move global variables in TargetMachine into new TargetOptions class. As an API
change, now you need a TargetOptions object to create a TargetMachine. Clang
patch to follow.

One small functionality change in PTX. PTX had commented out the machine
verifier parts in their copy of printAndVerify. That now calls the version in
LLVMTargetMachine. Users of PTX who need verification disabled should rely on
not passing the command-line flag to enable it.

llvm-svn: 145714
2011-12-02 22:16:29 +00:00
Anshuman Dasgupta
f754c8bf8e Add a deterministic finite automaton based packetizer for VLIW architectures
llvm-svn: 145629
2011-12-01 21:10:21 +00:00
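
A greatly simplified sketch of the idea (the real packetizer TableGen-generates
a DFA transition table and tracks sets of feasible unit assignments; here a
state is just a bitmask of occupied functional units):

    #include <cstdint>

    struct PacketizerSketch {
      uint32_t Occupied = 0; // current DFA state

      // UnitChoices: each set bit = a functional unit this instruction
      // could execute on. It can join the packet iff one is still free.
      bool canAdd(uint32_t UnitChoices) const {
        return (UnitChoices & ~Occupied) != 0;
      }
      bool add(uint32_t UnitChoices) {
        uint32_t Free = UnitChoices & ~Occupied;
        if (!Free) return false;
        Occupied |= Free & -Free; // claim the lowest free candidate unit
        return true;
      }
      void endPacket() { Occupied = 0; } // reset to the empty packet
    };
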
Chad Rosier
0ff2f46d12 If fast-isel fails, remove dead instructions generated during the failed
attempt.  

llvm-svn: 145425
2011-11-29 19:40:47 +00:00
Bill Wendling
8beab76b07 Remove dead llvm.eh.sjlj.dispatchsetup intrinsic.
llvm-svn: 145263
2011-11-28 19:23:13 +00:00
Evan Cheng
2b239cbcf6 Sink the codegen optimization level into MCCodeGenInfo alongside the relocation model
and code model. This eliminates the need to pass OptLevel flag all over the
place and makes it possible for any codegen pass to use this information.

llvm-svn: 144788
2011-11-16 08:38:26 +00:00
Owen Anderson
48a129b50e Rename MVT::untyped to MVT::Untyped to match similar nomenclature.
llvm-svn: 144747
2011-11-16 01:02:57 +00:00
Benjamin Kramer
3eeef2e739 Twinify GraphWriter a little bit.
llvm-svn: 144647
2011-11-15 16:26:38 +00:00
Benjamin Kramer
bff5aaee3f Make headers standalone.
llvm-svn: 144537
2011-11-14 17:45:03 +00:00
Chandler Carruth
f89087744e Under the hood, MBPI is doing a linear scan of every successor every
time it is queried to compute the probability of a single successor.
This makes computing the probability of every successor of a block in
sequence... really really slow. ;] This switches to a linear walk of the
successors rather than a quadratic one, fixing one of several quadratic
behaviors slowing this pass down.

I'm not really thrilled with moving the sum code into the public
interface of MBPI, but I don't (at the moment) have ideas for a better
interface. My direction I'm thinking in for a better interface is to
have MBPI actually retain much more state and make *all* of these
queries cheap. That's a lot of work, and would require invasive changes.
Until then, this seems like the least bad (ie, least quadratic)
solution. Suggestions welcome.

llvm-svn: 144530
2011-11-14 09:12:57 +00:00
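
The shape of the fix, sketched with plain doubles (MBPI actually works in
integer branch weights and probabilities): compute the weight sum once per
block instead of once per successor query.

    #include <cstdint>
    #include <vector>

    // Quadratic: each successor query rescans all the weights.
    double probSlow(const std::vector<uint32_t> &W, unsigned I) {
      uint64_t Sum = 0;
      for (uint32_t X : W) Sum += X; // O(n) per query, O(n^2) overall
      return double(W[I]) / double(Sum);
    }

    // Linear: sum once, then every successor's probability is O(1).
    std::vector<double> probsFast(const std::vector<uint32_t> &W) {
      uint64_t Sum = 0;
      for (uint32_t X : W) Sum += X;
      std::vector<double> P;
      for (uint32_t X : W) P.push_back(double(X) / double(Sum));
      return P;
    }
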
Chandler Carruth
09418993f8 Reuse the logic in getEdgeProbability within getHotSucc in order to
correctly handle blocks whose successor weights sum to more than
UINT32_MAX. This is slightly less efficient, but the entire thing is
already linear on the number of successors. Calling it within any hot
routine is a mistake, and indeed no one is calling it. It also
simplifies the code.

llvm-svn: 144527
2011-11-14 08:55:59 +00:00
Chandler Carruth
462bb16130 Fix an overflow bug in MachineBranchProbabilityInfo. This pass relied on
the sum of the edge weights not overflowing uint32, and crashed when
they did. This is generally safe as BranchProbabilityInfo tries to
provide this guarantee. However, the CFG can get modified during codegen
in a way that grows the *sum* of the edge weights. This doesn't seem
unreasonable (imagine just adding more blocks all with the default
weight of 16), but it is hard to come up with a case that actually
triggers 32-bit overflow. Fortunately, the single-source GCC build is
good at this. The solution isn't very pretty, but it's no worse than the
previous code. We're already summing all of the edge weights on each
query, we can sum them, check for an overflow, compute a scale, and sum
them again.

I've included a *greatly* reduced test case out of the GCC source that
triggers it. It's a pretty lame test, as it clearly is just barely
triggering the overflow. I'd like to have something that is much more
definitive, but I don't understand the fundamental pattern that triggers
an explosion in the edge weight sums.

The buggy code is duplicated within this file. I'll collapse them into
a single implementation in a subsequent commit.

llvm-svn: 144526
2011-11-14 08:50:16 +00:00
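
A sketch of the scheme described: sum into 64 bits; if the total would
overflow uint32, derive a scale and sum the scaled weights again.

    #include <cstdint>
    #include <vector>

    // Sum edge weights without overflowing uint32: if the 64-bit total
    // exceeds UINT32_MAX, scale every weight down and sum again.
    uint32_t sumWeights(const std::vector<uint32_t> &Weights) {
      uint64_t Sum = 0;
      for (uint32_t W : Weights) Sum += W;
      if (Sum <= UINT32_MAX)
        return uint32_t(Sum);
      uint32_t Scale = uint32_t(Sum / UINT32_MAX + 1); // shrink to range
      Sum = 0;
      for (uint32_t W : Weights) Sum += W / Scale;
      return uint32_t(Sum);
    }
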
Chandler Carruth
5d03d0351f Add a cautionary note to this API. It was not at all obvious to me how
expensive the most useful interface to this analysis is.

Fun story -- it's also not correct. That's getting fixed in another
patch.

llvm-svn: 144523
2011-11-14 06:51:49 +00:00
Jakob Stoklund Olesen
9b34607bdf Rename SlotIndexes to match how they are used.
The old naming scheme (load/use/def/store) can be traced back to an old
linear scan article, but the names don't match how slots are actually
used.

The load and store slots are not needed after the deferred spill code
insertion framework was deleted.

The use and def slots don't make any sense because we are using
half-open intervals as is customary in C code, but the names suggest
closed intervals.  In reality, these slots were used to distinguish
early-clobber defs from normal defs.

The new naming scheme also has 4 slots, but the names match how the
slots are really used.  This is a purely mechanical renaming, but some
of the code makes a lot more sense now.

llvm-svn: 144503
2011-11-13 20:45:27 +00:00
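
For orientation, a sketch of the four slots per instruction position under
the new scheme (slot names hedged from memory; check SlotIndexes.h for the
real ones). Live ranges are half-open [start, end) intervals over these
indexes:

    enum SlotSketch {
      Slot_Block,        // live-in region entered at the top of a block
      Slot_EarlyClobber, // early-clobber defs happen here, before uses
      Slot_Register,     // normal register uses and defs
      Slot_Dead          // dead defs end here, right after Register
    };
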
Jakob Stoklund Olesen
5a265aeb70 Delete the old spilling framework from LiveIntervalAnalysis.
This is dead code, all register allocators use InlineSpiller.

llvm-svn: 144478
2011-11-12 23:57:05 +00:00
Jakob Stoklund Olesen
78902f9088 Delete the linear scan register allocator.
RegAllocGreedy has been the default for six months now.

Deleting RegAllocLinearScan makes it possible to also delete
VirtRegRewriter and clean up the spiller code.

llvm-svn: 144475
2011-11-12 22:39:45 +00:00
Eli Friedman
8563e57e38 Don't try to form pre/post-indexed loads/stores until after LegalizeDAG runs. Fixes PR11029.
llvm-svn: 144438
2011-11-12 00:35:34 +00:00
Nicolas Geoffray
98ef58406c Add a custom safepoint method, in order for language implementers to decide which machine instruction gets to be a safepoint.
llvm-svn: 144399
2011-11-11 18:32:52 +00:00
Owen Anderson
50d2e054f9 Add additional checking to ensure that MachineMemOperands are never set to null, which can happen in weird circumstances where target intrinsic hooks are implemented incorrectly.
llvm-svn: 144303
2011-11-10 19:25:09 +00:00
Pete Cooper
224434deec Added invariant field to the DAG.getLoad method and changed all calls.
When this field is true it means that the load is from constant memory (run-time or compile-time constant), and so can be hoisted out of loops or moved around other memory accesses.

llvm-svn: 144100
2011-11-08 18:42:53 +00:00
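
What the new flag permits, illustrated in plain C++ rather than SelectionDAG
calls: a load from memory known never to change may be hoisted out of the
loop and moved past unrelated stores.

    int sumTable(const int *Table /* invariant memory */, int *Out, int N) {
      int V = Table[0]; // hoisted once instead of reloaded per iteration
      int Sum = 0;
      for (int I = 0; I < N; ++I) {
        Out[I] = Sum;   // this store cannot modify the invariant memory
        Sum += V;
      }
      return Sum;
    }
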
Chandler Carruth
f95461f23b Begin collecting some of the statistics for block placement discussed on
the mailing list. Suggestions for other statistics to collect would be
awesome. =]

Currently these are implemented as a separate pass guarded by a separate
flag. I'm not thrilled by that, but I wanted to be able to collect the
statistics for the old code placement as well as the new in order to
have a point of comparison. I'm planning on folding them into the single
pass if / when there is only one pass of interest.

llvm-svn: 143537
2011-11-02 07:17:12 +00:00
Nick Lewycky
651475977d Teach our Dwarf emission to use the string pool.
llvm-svn: 143097
2011-10-27 06:44:11 +00:00
Dan Gohman
b54d296fd4 Remove the SystemZ backend.
llvm-svn: 142878
2011-10-24 23:48:32 +00:00
Dan Gohman
91adc96acd Delete the top-down "Latency" scheduler. Top-down scheduling doesn't handle
physreg dependencies, and upcoming codegen changes will require proper
physreg dependence handling.

llvm-svn: 142816
2011-10-24 18:01:06 +00:00
Chandler Carruth
380e0d5013 Implement a block placement pass based on the branch probability and
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:

1) It operates on the Machine-IR in the CodeGen layer. This exposes much
   more (and more precise) information and opportunities. Also, the
   results are more stable due to fewer transforms occurring after the
   pass runs.
2) It uses the generalized probability and frequency analyses. These can
   model static heuristics, code annotation derived heuristics as well
   as eventual profile loading. By basing the optimization on the
   analysis interface it can work from any (or a combination) of these
   inputs.
3) It uses a more aggressive algorithm, both building chains from the
   bottom up to maximize benefit, and using an SCC-based walk to lay out
   chains of blocks in a profitable ordering without the O(N^2) iterations
   the old pass involves (a simplified sketch follows this entry).

The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.

Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D

llvm-svn: 142641
2011-10-21 06:46:38 +00:00
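
As referenced in (3), a greatly simplified, greedy variant of the chain idea
(the real pass builds chains bottom up, merges them, and orders them with an
SCC-based walk; the types here are hypothetical):

    #include <map>
    #include <vector>

    using BlockID = int;
    // Successor edges with branch probabilities, per block.
    using CFG = std::map<BlockID, std::vector<std::pair<BlockID, double>>>;

    // Greedy chain formation: from Entry, keep appending the most
    // probable successor that has not been placed yet.
    std::vector<BlockID> layoutChain(const CFG &G, BlockID Entry) {
      std::vector<BlockID> Chain;
      std::map<BlockID, bool> Placed;
      BlockID B = Entry;
      while (true) {
        Chain.push_back(B);
        Placed[B] = true;
        auto It = G.find(B);
        if (It == G.end()) break;
        BlockID Best = -1;
        double BestProb = -1.0;
        for (const auto &Succ : It->second)
          if (!Placed[Succ.first] && Succ.second > BestProb) {
            Best = Succ.first;
            BestProb = Succ.second;
          }
        if (Best == -1) break; // all successors already placed
        B = Best;
      }
      return Chain;
    }
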