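# Run TableGen over PPC.td to generate the PowerPC .inc files. These
# generated tables are consumed by this library and by the MC-layer
# libraries built in the subdirectories below.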
set(LLVM_TARGET_DEFINITIONS PPC.td)
tablegen(LLVM PPCGenAsmWriter.inc -gen-asm-writer)
tablegen(LLVM PPCGenAsmMatcher.inc -gen-asm-matcher)
tablegen(LLVM PPCGenDisassemblerTables.inc -gen-disassembler)
tablegen(LLVM PPCGenMCCodeEmitter.inc -gen-emitter)
tablegen(LLVM PPCGenRegisterInfo.inc -gen-register-info)
tablegen(LLVM PPCGenInstrInfo.inc -gen-instr-info)
tablegen(LLVM PPCGenDAGISel.inc -gen-dag-isel)
tablegen(LLVM PPCGenFastISel.inc -gen-fast-isel)
tablegen(LLVM PPCGenCallingConv.inc -gen-callingconv)
tablegen(LLVM PPCGenSubtargetInfo.inc -gen-subtarget)
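# Publish the TableGen outputs as an explicit target so that libraries in
# other directories can declare a dependency on the generated files and
# build in a well-defined order.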
add_public_tablegen_target(PowerPCCommonTableGen)
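# The main PowerPC code generation library.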
add_llvm_target(PowerPCCodeGen
  PPCAsmPrinter.cpp
  PPCBranchSelector.cpp
  PPCCTRLoops.cpp
  PPCHazardRecognizers.cpp
  PPCInstrInfo.cpp
  PPCISelDAGToDAG.cpp
  PPCISelLowering.cpp
  PPCEarlyReturn.cpp
  PPCFastISel.cpp
  PPCFrameLowering.cpp
  PPCLoopDataPrefetch.cpp
  PPCLoopPreIncPrep.cpp
  PPCMCInstLower.cpp
  PPCMachineFunctionInfo.cpp
  PPCRegisterInfo.cpp
  PPCSubtarget.cpp
  PPCTargetMachine.cpp
  PPCTargetObjectFile.cpp
  PPCTargetTransformInfo.cpp
  PPCSelectionDAGInfo.cpp
  PPCTLSDynamicCall.cpp
  PPCVSXCopy.cpp
  PPCVSXFMAMutate.cpp
  PPCVSXSwapRemoval.cpp
  )
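# MC-layer and support libraries: assembly parsing, disassembly,
# instruction printing, target registration, and target descriptions.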
add_subdirectory(AsmParser)
add_subdirectory(Disassembler)
add_subdirectory(InstPrinter)
add_subdirectory(TargetInfo)
add_subdirectory(MCTargetDesc)