Commit Graph

2765 Commits

Saleem Abdulrasool
501d3b6235 Target: remove old constructors for CallLoweringInfo
This is a mostly mechanical change, switching all the call sites to the newer
chained-function construction pattern.  This removes the horrible 15-parameter
constructor for the CallLoweringInfo in favour of setting properties of the call
via chained functions.  No functional change beyond the removal of the old
constructors is intended.
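
A rough illustration of a converted call site (setter names follow the
companion commit below; the surrounding variables are assumed):

  TargetLowering::CallLoweringInfo CLI(DAG);
  CLI.setDebugLoc(dl).setChain(Chain)
      .setCallee(CallConv, RetTy, Callee, std::move(Args), NumFixedArgs)
      .setTailCall(isTailCall);
  std::pair<SDValue, SDValue> Result = LowerCallTo(CLI);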

llvm-svn: 209082
2014-05-17 21:50:17 +00:00
Saleem Abdulrasool
aca2082f10 Target: add support to build CallLoweringInfo
Rather than introducing an auxiliary CallLoweringInfoBuilder, add the methods to
do chained function construction directly to CallLoweringInfo.  This reduces the
monstrous 15-parameter constructor into a series of simpler (for some definition
of simpler) functions that control particular aspects of the call.  The old
interfaces can be completely removed once callers are moved to the new chained
constructor pattern.

llvm-svn: 209081
2014-05-17 21:50:06 +00:00
Saleem Abdulrasool
4b7b7da0ac Target: change member from reference to pointer
This is a preliminary step to help ease the construction of CallLoweringInfo.
Changing the construction to a chained function pattern requires that the
parameter be nullable.  However, rather than copying the vector, save a pointer
rather than the reference to permit a late binding of the arguments.

llvm-svn: 209080
2014-05-17 21:50:01 +00:00
Reid Kleckner
fd1a7dfe04 Add comdat key field to llvm.global_ctors and llvm.global_dtors
This allows us to put dynamic initializers for weak data into the same
comdat group as the data being initialized.  This is necessary for MSVC
ABI compatibility.  Once we have comdats for guard variables, we can use
the combination to help GlobalOpt fire more often for weak data with
guarded initialization on other platforms.

Reviewers: nlewycky

Differential Revision: http://reviews.llvm.org/D3499

llvm-svn: 209015
2014-05-16 20:39:27 +00:00
Rafael Espindola
6d40091c3c Revert "Implement global merge optimization for global variables."
This reverts commit r208934.

The patch depends on aliases to GEPs with non-zero offsets. That is not
supported and fairly broken.

The good news is that GlobalAlias is being redesigned and will have support
for offsets, so this patch should be a nice match for it.

llvm-svn: 208978
2014-05-16 13:02:18 +00:00
Eric Christopher
37970f0edd Remove the Options query functions and just access our Options directly.
llvm-svn: 208937
2014-05-16 00:32:52 +00:00
Jiangning Liu
5366cb42f6 Implement global merge optimization for global variables.
This commit implements two command line switches -global-merge-on-external
and -global-merge-aligned, and both of them are false by default, so this
optimization is disabled by default for all targets.

For ARM64, some back-end behaviors need to be tuned to enable this
optimization further.

llvm-svn: 208934
2014-05-15 23:45:42 +00:00
Eric Christopher
e4c742a33c Remove unused functions setting MCOptions from TargetMachine.
llvm-svn: 208835
2014-05-15 01:25:04 +00:00
Eric Christopher
f26f61b12b Move the TargetMachine MC options to MCTargetOptions. No functional
change.

llvm-svn: 208832
2014-05-15 01:08:00 +00:00
Jay Foad
e0eac700cb Rename ComputeMaskedBits to computeKnownBits. "Masked" has been
inappropriate since it lost its Mask parameter in r154011.
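
For reference, a call site after the rename reads along these lines (a
sketch; Op and the surrounding lowering code are assumed):

  APInt KnownZero, KnownOne;
  DAG.computeKnownBits(Op, KnownZero, KnownOne, /*Depth=*/0);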

llvm-svn: 208811
2014-05-14 21:14:37 +00:00
Rafael Espindola
20867572f4 Remove MCUseCFI from TargetMachine.
It was always true.

llvm-svn: 208547
2014-05-12 13:01:42 +00:00
Hal Finkel
5b038e4cbc Pass the value type to TLI::getRegisterByName
We must validate the value type in TLI::getRegisterByName, because if we
don't and the wrong type was used with the IR intrinsic, then we'll assert
(because we won't be able to find a valid register class with which to
construct the requested copy operation). For PPC64, additionally, the type
information is necessary to decide between the 64-bit register and the 32-bit
subregister.

No functionality change.
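
A sketch of a target override that validates the type; all My* names, the
register choices, and the subtarget query are illustrative assumptions:

  unsigned MyTargetLowering::getRegisterByName(const char *RegName, EVT VT) const {
    bool Is64Bit = Subtarget.is64Bit();
    unsigned Reg = StringSwitch<unsigned>(RegName)
                       .Case("sp", Is64Bit ? MyTarget::SP64 : MyTarget::SP32)
                       .Default(0);
    // Validate the type here instead of asserting later when no register
    // class fits the requested copy.
    if (Reg && VT == (Is64Bit ? MVT::i64 : MVT::i32))
      return Reg;
    report_fatal_error("Invalid register name global variable");
  }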

llvm-svn: 208508
2014-05-11 19:29:07 +00:00
Oliver Stannard
2b1166b162 ARM: HFAs must be passed in consecutive registers
When using the ARM AAPCS, HFAs (Homogeneous Floating-point Aggregates) must
be passed in a block of consecutive floating-point registers, or on the stack.
This means that unused floating-point registers cannot be back-filled with
part of an HFA; however, this can currently happen. This patch, along with the
corresponding clang patch (http://reviews.llvm.org/D3083), prevents this.

llvm-svn: 208413
2014-05-09 14:01:47 +00:00
Hal Finkel
c52e65b830 Move late partial-unrolling thresholds into the processor definitions
The old method used by X86TTI to determine partial-unrolling thresholds was
messy (because it worked by testing target features), and also would not
correctly identify the target CPU if certain target features were disabled.
After some discussions on IRC with Chandler et al., it was decided that the
processor scheduling models were the right containers for this information
(because it is often tied to special uop dispatch-buffer sizes).

This does represent a small functionality change:
 - For generic x86-64 (which uses the SB model and, thus, will get some
   unrolling).
 - For AMD cores (because they still currently use the SB scheduling model)
 - For Haswell (based on benchmarking by Louis Gerbarg, it was decided to bump
   the default threshold to 50; we're working on a test case for this).
Otherwise, nothing has changed for any other targets. The logic, however, has
been moved into BasicTTI, so other targets may now also opt-in to this
functionality simply by setting LoopMicroOpBufferSize in their processor
model definitions.

llvm-svn: 208289
2014-05-08 09:14:44 +00:00
Renato Golin
8a9a382ab2 Implementing named register intrinsics
This patch implements the infrastructure to use named register constructs in
programs that need access to specific registers (bare metal, kernels, etc).

So far, only the stack pointer is supported as a technology preview, but as it
is, the intrinsic can already support all non-allocatable registers from any
architecture.

llvm-svn: 208104
2014-05-06 16:51:25 +00:00
Eric Christopher
904df1ca24 Fix typo (also tab character).
llvm-svn: 208001
2014-05-05 21:40:41 +00:00
Weiming Zhao
3625856a33 [ARM64] Prevent bit extraction to be adjusted by following shift
For pattern like ((x >> C1) & Mask) << C2, DAG combiner may convert it
into (x >> (C1-C2)) & (Mask << C2), which makes pattern matching of ubfx
more difficult.
For example:
Given
  %shr = lshr i64 %x, 4
  %and = and i64 %shr, 15
  %arrayidx = getelementptr inbounds [8 x [64 x i64]]* @arr, i64 0, i64 2, i64 %and
  %0 = load i64* %arrayidx
With current shift folding, it takes 3 instrs to compute base address:
  lsr x8, x0, #1
  and x8, x8, #0x78
  add x8, x9, x8

If using ubfx, it only needs 2 instrs:
  ubfx  x8, x0, #4, #4
  add x8, x9, x8, lsl #3

This fixes bug 19589

llvm-svn: 207702
2014-04-30 21:07:24 +00:00
Joerg Sonnenberger
1b6e6664bd Fix comment
llvm-svn: 207429
2014-04-28 18:11:51 +00:00
Benjamin Kramer
163df6bc62 DAGCombiner: Turn divs of vector splats into vectorized multiplications.
Otherwise the legalizer would just scalarize everything. Support for
mulhi in the targets isn't that great yet so on most targets we get
exactly the same scalarized output. Add a test for x86 vector udiv.

I had to disable the mulhi nodes on ARM because there aren't any patterns
for it. As far as I know ARM has instructions for getting the high part of
a multiply so this should be fixed.

llvm-svn: 207315
2014-04-26 12:06:28 +00:00
Michael Zolotukhin
1c3340dd78 Revert r206749 till a final decision about the intrinsics is made.
llvm-svn: 207313
2014-04-26 09:56:41 +00:00
Evgeniy Stepanov
c242bd4b23 Create MCTargetOptions.
For now it contains a single flag, SanitizeAddress, which enables
AddressSanitizer instrumentation of inline assembly.

Patch by Yuri Gorshenin.

llvm-svn: 206971
2014-04-23 11:16:03 +00:00
Yi Jiang
b1c450606d ARM64: Combine shifts and uses from different basic blocks into a bit-extract instruction
llvm-svn: 206774
2014-04-21 19:34:27 +00:00
Michael Zolotukhin
c7f992f9a3 Reapply r206732. This time without optimization of branches.
llvm-svn: 206749
2014-04-21 12:01:33 +00:00
Chandler Carruth
164ee32140 Revert r206732 which is causing llc to crash on most of the build bots.
Original commit message:
  Implement builtins for safe division: safe.sdiv.iN, safe.udiv.iN,
  safe.srem.iN, safe.urem.iN (iN = i8, i16, i32, or i64).

llvm-svn: 206735
2014-04-21 07:11:15 +00:00
Michael Zolotukhin
f5ebd83e24 Implement builtins for safe division: safe.sdiv.iN, safe.udiv.iN, safe.srem.iN,
safe.urem.iN (iN = i8, i16, i32, or i64).

llvm-svn: 206732
2014-04-21 05:33:09 +00:00
Yaron Keren
407a465a3d Patch by Vadim Chugunov
Win64 stack unwinder gets confused when execution flow "falls through" after
a call to a 'noreturn' function. This fixes the "missing epilogue" problem by
emitting a trap instruction for IR 'unreachable' on x86_64-pc-windows.

A secondary use for it would be for anyone wanting to make double-sure that
'noreturn' functions, indeed, do not return.

llvm-svn: 206684
2014-04-19 13:47:43 +00:00
Tim Northover
fa11ed01b6 Atomics: promote ARM's IR-based atomics pass to CodeGen.
Still only 32-bit ARM using it at this stage, but the promotion allows
direct testing via opt and is a reasonably self-contained patch on the
way to switching ARM64.

At this point, other targets should be able to make use of it without
too much difficulty if they want. (See ARM64 commit coming soon for an
example).

llvm-svn: 206485
2014-04-17 18:22:47 +00:00
Craig Topper
c2260fc0ab [C++11] More 'nullptr' conversion. In some cases just using a boolean check instead of comparing to nullptr.
llvm-svn: 206252
2014-04-15 06:32:26 +00:00
Craig Topper
30281a67fb [C++11] More 'nullptr' conversion. In some cases just using a boolean check instead of comparing to nullptr.
llvm-svn: 206142
2014-04-14 00:51:57 +00:00
Benjamin Kramer
f6c0615b06 Retire llvm::array_endof in favor of non-member std::end.
While there make array_lengthof constexpr if we have support for it.

llvm-svn: 206112
2014-04-12 16:15:53 +00:00
Benjamin Kramer
ca4efc2423 Make doxygen comment match the declaration.
Found by -Wdocumentation.

llvm-svn: 206076
2014-04-11 21:58:11 +00:00
Tom Stellard
be787d15d0 SelectionDAG: Factor ISD::MUL lowering code out of DAGTypeLegalizer
This code has been moved to a new function in the TargetLowering
class called expandMUL().  The purpose of this is to be able
to share lowering code between the SelectionDAGLegalize and
DAGTypeLegalizer classes.

No functionality change intended.

llvm-svn: 206036
2014-04-11 16:11:58 +00:00
Reid Kleckner
f99741400f Move the segmented stack switch to a function attribute
This removes the -segmented-stacks command line flag in favor of a
per-function "split-stack" attribute.
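
Back-end code can then key off the attribute per function, roughly (a
sketch; the surrounding frame-lowering context is assumed):

  // e.g., in a target's frame lowering:
  if (MF.getFunction()->hasFnAttribute("split-stack"))
    adjustForSegmentedStacks(MF);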

Patch by Luqman Aden and Alex Crichton!

llvm-svn: 205997
2014-04-10 22:58:43 +00:00
Matt Arsenault
7b6a70a9cf Add DAG parameter to ComputeNumSignBitsForTargetNode
This way, you can check the number of sign bits in the
operands. The depth parameter it already has is pretty useless
without this.
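
An override can now do, for example (MyISD and its opcode are hypothetical):

  unsigned MyTargetLowering::ComputeNumSignBitsForTargetNode(
      SDValue Op, const SelectionDAG &DAG, unsigned Depth) const {
    if (Op.getOpcode() == MyISD::SEXT_LIKE)
      // With the DAG available, recurse into the operand's sign bits.
      return DAG.ComputeNumSignBits(Op.getOperand(0), Depth + 1);
    return 1; // conservative default
  }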

llvm-svn: 205649
2014-04-04 20:13:13 +00:00
Craig Topper
694437e2ef Make consistent use of MCPhysReg instead of uint16_t throughout the tree.
llvm-svn: 205610
2014-04-04 05:16:06 +00:00
Jim Grosbach
f349849199 Simplify resolveFrameIndex() signature.
Just pass a MachineInstr reference rather than an MBB iterator.
Creating a MachineInstr& is the first thing every implementation did
anyway.
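
The simplified hook shape, with a hypothetical body (the operand positions
of the frame reference are target-specific and assumed here):

  void MyRegisterInfo::resolveFrameIndex(MachineInstr &MI, unsigned BaseReg,
                                         int64_t Offset) const {
    MI.getOperand(1).ChangeToRegister(BaseReg, /*isDef=*/false);
    MI.getOperand(2).setImm(Offset);
  }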

llvm-svn: 205453
2014-04-02 19:28:18 +00:00
Matt Arsenault
8f25a008a2 Add helpers for checking if a value is a target boolean constant.
llvm-svn: 205335
2014-04-01 18:13:22 +00:00
Matt Arsenault
5c7af600db Change shouldSplitVectorElementType to better match the description.
Pass the entire vector type, and not just the element.

llvm-svn: 205247
2014-03-31 20:54:58 +00:00
Hal Finkel
5ecd959a9e Add a TLI hook to control when BUILD_VECTOR might be expanded using shuffles
There are two general methods for expanding a BUILD_VECTOR node:
  1. Use SCALAR_TO_VECTOR on the defined scalar values and then shuffle
     them together.
  2. Build the vector on the stack and then load it.

Currently, we use a fixed heuristic: If there are only one or two unique
defined values, then we attempt an expansion in terms of SCALAR_TO_VECTOR and
vector shuffles (provided that the required shuffle mask is legal). Otherwise,
always expand via the stack. Even when SCALAR_TO_VECTOR is not legal, this
can still be a good idea depending on what tricks the target can play when
lowering the resulting shuffle. If the target can't do anything special,
however, and if SCALAR_TO_VECTOR is expanded via the stack, this heuristic
leads to sub-optimal code (two stack loads instead of one).

Because only the target knows whether the SCALAR_TO_VECTORs and shuffles for a
build vector of a particular type are likely to be optimal, this adds a new
TLI function: shouldExpandBuildVectorWithShuffles which takes the vector type
and the count of unique defined values. If this function returns true, then
method (1) will be used, subject to the constraint that all of the necessary
shuffles are legal (as determined by isShuffleMaskLegal). If this function
returns false, then method (2) is always used.

This commit does not enhance the current code to support expanding a
build_vector with more than two unique values using shuffles, but I'll commit
an implementation of the more-general case shortly.
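
A sketch of what a target override might look like (the policy shown is
purely illustrative):

  bool MyTargetLowering::shouldExpandBuildVectorWithShuffles(
      EVT VT, unsigned DefinedValues) const {
    // Use shuffle-based expansion only for cheap-to-shuffle types with
    // few unique defined values.
    return VT.is128BitVector() && DefinedValues <= 2;
  }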

llvm-svn: 205230
2014-03-31 17:48:10 +00:00
Tim Northover
2f13163a84 ARM64: initial backend import
This adds a second implementation of the AArch64 architecture to LLVM,
accessible in parallel via the "arm64" triple. The plan over the
coming weeks & months is to merge the two into a single backend,
during which time thorough code review should naturally occur.

Everything will be easier with the target in-tree though, hence this
commit.

llvm-svn: 205090
2014-03-29 10:18:08 +00:00
Tim Northover
45e634dadd CodeGenPrep: wrangle IR to exploit AArch64 tbz/tbnz inst.
Given IR like:
    %bit = and %val, #imm-with-1-bit-set
    %tst = icmp %bit, 0
    br i1 %tst, label %true, label %false

some targets can emit just a single instruction (tbz/tbnz in the
AArch64 case). However, with ISel acting at the basic-block level, all
three instructions need to be together for this to be possible.

This adds another transformation to CodeGenPrep to expose these
opportunities, if targets opt in via the hook.

llvm-svn: 205086
2014-03-29 08:22:29 +00:00
Manman Ren
8d0a571c07 Provide a target override for the cost of using a callee-saved register
for the first time.

Thanks Andy for the discussion.
rdar://16162005

llvm-svn: 204979
2014-03-27 23:10:04 +00:00
David Blaikie
7bf1eb599f DebugInfo: TargetOptions/MCAsmInfo support for compressed debug info sections
llvm-svn: 204957
2014-03-27 20:45:41 +00:00
Renato Golin
748bf42a3e Change @llvm.clear_cache default to call rt-lib
After some discussion on IRC, emitting a call to the library function seems
like a better default, since it will move from a compiler internal error to
a linker error, which the user can work around until LLVM is fixed.

I'm also adding a note on the responsibility of the user to confirm that
the cache was cleared on platforms where nothing is done.

llvm-svn: 204806
2014-03-26 14:01:32 +00:00
Renato Golin
2c1112ea41 Add @llvm.clear_cache builtin
Implementing the LLVM part of the call to __builtin___clear_cache
which translates into an intrinsic @llvm.clear_cache and is lowered
by each target, either to a call to __clear_cache or nothing at all
in case the caches are unified.

Updating LangRef and adding some tests for the implemented architectures.
Other archs will have to implement the method in case this builtin
has to be compiled for it, since the default behaviour is to bail out
as unimplemented.

A Clang patch is required for the builtin to be lowered into the
llvm intrinsic. This will be done next.

llvm-svn: 204802
2014-03-26 12:52:28 +00:00
Hal Finkel
68c9b6839e [TableGen] Optionally forbid overlap between named and positional operands
There are currently two schemes for mapping instruction operands to
instruction-format variables for generating the instruction encoders and
decoders for the assembler and disassembler respectively: a) to map by name and
b) to map by position.

In the long run, we'd like to remove the position-based scheme and use only
name-based mapping. Unfortunately, the name-based scheme currently cannot deal
with complex operands (those with suboperands), and so we currently must use
the position-based scheme for those. On the other hand, the position-based
scheme cannot deal with (register) variables that are split into multiple
ranges. An upcoming commit to the PowerPC backend (adding VSX support) will
require this capability. While we could teach the position-based scheme to
handle that, since we'd like to move away from the position-based mapping
generally, it seems silly to teach it new tricks now. What makes more sense is
to allow for partial transitioning: use the name-based mapping when possible,
and only use the position-based scheme when necessary.

Now the problem is that mixing the two sensibly was not possible: the
position-based mapping would map based on position, but would not skip those
variables that were mapped by name. Instead, the two sets of assignments would
overlap. However, I cannot simply change the current behavior, because there
are some backends that rely on it [I think mistakenly, but I'll send a message
to llvmdev about that]. So I've added a new TableGen bit variable:
noNamedPositionallyEncodedOperands, that can be used to cause the
position-based mapping to skip variables mapped by name.

llvm-svn: 203767
2014-03-13 07:57:54 +00:00
Patrik Hagglund
f6f25d32ac Replace '#include ValueTypes.h' with forward declarations.
In some cases the include is pushed "downstream" (or removed if
unused).

llvm-svn: 203644
2014-03-12 08:00:24 +00:00
Benjamin Kramer
c4a4a8061a Remove copy ctors that did the same thing as the default one.
The code added nothing but potentially disabled move semantics and made
types non-trivially copyable.

llvm-svn: 203563
2014-03-11 11:32:49 +00:00
Rafael Espindola
cb9ca86245 Replace PROLOG_LABEL with a new CFI_INSTRUCTION.
The old system was fairly convoluted:
* A temporary label was created.
* A single PROLOG_LABEL was created with it.
* A few MCCFIInstructions were created with the same label.

The semantics were that the cfi instructions were mapped to the PROLOG_LABEL
via the temporary label. The output position was that of the PROLOG_LABEL.
The temporary label itself was used only for doing the mapping.

The new CFI_INSTRUCTION has a 1:1 mapping to MCCFIInstructions and points to
one by holding an index into the CFI instructions of this function.

I did consider removing MMI.getFrameInstructions completely and having
CFI_INSTRUCTION own an MCCFIInstruction, but MCCFIInstructions have
non-trivial constructors and destructors and are somewhat big, so this
setup is probably better.

The net result is that we don't create temporary labels that are never used.

llvm-svn: 203204
2014-03-07 06:08:31 +00:00
Rafael Espindola
0bdff3f258 clang-format a bit of code to make the next patch easier to read.
llvm-svn: 203203
2014-03-07 05:32:03 +00:00
Rafael Espindola
fe5dfa44c9 Remove shouldEmitUsedDirectiveFor.
Clang now uses llvm.compiler.used for these cases.

llvm-svn: 203174
2014-03-06 22:47:08 +00:00
Craig Topper
a3683ec835 [C++11] Add 'override' keyword to virtual methods that override their base class.
llvm-svn: 202953
2014-03-05 09:10:37 +00:00
Chandler Carruth
cfb81122cc [Modules] Move CallSite into the IR library where it belongs. It is
abstracting between a CallInst and an InvokeInst, both of which are IR
concepts.

llvm-svn: 202816
2014-03-04 11:01:28 +00:00
Hal Finkel
593fc5537a Add an OutPatFrag TableGen class
Unfortunately, it is currently impossible to use a PatFrag as part of an output
pattern (the part of the pattern that has instructions in it) in TableGen.
Looking at the current implementation, this was clearly intended to work (there
is already code in place to expand patterns in the output DAG), but is
currently broken by the baked-in type-checking assumption and the order in which
the pattern fragments are processed (output pattern fragments need to be
processed after the instruction definitions are processed).

Fixing this is fairly simple, but requires some way of differentiating output
patterns from the existing input patterns. The simplest way to handle this
seems to be to create a subclass of PatFrag, and so that's what I've done here.

As a simple example, this allows us to write:

def crnot : OutPatFrag<(ops node:$in),
                       (CRNOR $in, $in)>;

def       : Pat<(not i1:$in),
                (crnot $in)>;

which captures the core use case: handling of repeated subexpressions inside
of complicated output patterns.

This will be used by an upcoming commit to the PowerPC backend.

llvm-svn: 202450
2014-02-28 00:26:56 +00:00
Andrew Trick
a8fdecaea7 Provide a target override for the latest regalloc heuristic.
This is a temporary workaround for native arm linux builds:
PR18996: Changing regalloc order breaks "lencod" on native arm linux builds.

llvm-svn: 202433
2014-02-27 21:37:33 +00:00
Andrew Trick
70f699d488 Drive-by comment fix. This regalloc comment was not accurate.
llvm-svn: 202432
2014-02-27 21:37:30 +00:00
Rafael Espindola
8b43142712 Make DisableIntegratedAS a TargetOption.
This replaces the old NoIntegratedAssembler with a TargetOption. This is
more flexible and will be used to forward clang's -no-integrated-as option.

llvm-svn: 201836
2014-02-21 03:13:54 +00:00
Rafael Espindola
60133f3afe move getNameWithPrefix and getSymbol to TargetMachine.
TargetLoweringBase is implemented in CodeGen, so before this patch we had
a dependency from Target to CodeGen. This would show up as a link failure of
llvm-stress when building with -DBUILD_SHARED_LIBS=ON.

This fixes pr18900.

llvm-svn: 201711
2014-02-19 20:30:41 +00:00
Rafael Espindola
aea6192f20 Add back r201608, r201622, r201624 and r201625
r201608 made llvm correctly handle private globals with MachO. r201622 fixed
a bug in it and r201624 and r201625 were changes for using private linkage,
assuming that llvm would do the right thing.

They all got reverted because r201608 introduced a crash in LTO. This patch
includes a fix for that. The issue was that TargetLoweringObjectFile now has
to be initialized before we can mangle names of private globals. This is
trivially true during the normal codegen pipeline (the asm printer does it),
but LTO has to do it manually.

llvm-svn: 201700
2014-02-19 17:23:20 +00:00
Daniel Jasper
bf4e7d8ac3 Revert r201622 and r201608.
This causes the LLVMgold plugin to segfault. More information on the
replies to r201608.

llvm-svn: 201669
2014-02-19 12:26:01 +00:00
Tim Northover
1b102abe53 X86 CodeGenPrep: sink shufflevectors before shifts
On x86, shifting a vector by a scalar is significantly cheaper than shifting a
vector by another fully general vector. Unfortunately, because SelectionDAG
operates on just one basic block at a time, the shufflevector instruction that
reveals whether the right-hand side of a shift *is* really a scalar is often
not visible to CodeGen when it's needed.

This adds another handler to CodeGenPrepare, to sink any useful shufflevector
instructions down to the basic block where they're used, predicated on a target
hook (since on other architectures, doing so will often just introduce extra
real work).

rdar://problem/16063505

llvm-svn: 201655
2014-02-19 10:02:43 +00:00
Rafael Espindola
5381278d2b Avoid an infinite cycle with private linkage and -f{data|function}-sections.
When outputting an object we check its section to find its name, but when
looking for the section with -ffunction-section we look for the symbol name.

Break the loop by requesting a name with the private prefix when constructing
the section name. This matches the behavior before r201608.

llvm-svn: 201622
2014-02-19 01:28:30 +00:00
Rafael Espindola
d39a573c72 Fix PR18743.
The IR
@foo = private constant i32 42

is valid, but before this patch we would produce an invalid MachO from it. It
was invalid because it would use an L label in a section where the linker needs
the labels in order to atomize it.

One way of fixing it would be to just reject this IR in the backend, but that
would not be very front end friendly.

What this patch does is use an 'l' prefix in sections that we know the linker
requires symbols for atomizing them. This allows frontends to just use
private and not worry about which sections they go to or how the linker handles
them.

One small issue with this strategy is that now a symbol name depends on the
section, which is not available before codegen. This is not a problem in
practice. The reason is that it only happens with private linkage, which will
be ignored by the non codegen users (llvm-nm and llvm-ar).

llvm-svn: 201608
2014-02-18 22:24:57 +00:00
Rafael Espindola
d85e4eb0f5 Rename some member variables from TD to DL.
TargetData was renamed DataLayout back in r165242.

llvm-svn: 201581
2014-02-18 15:33:12 +00:00
Juergen Ributzka
e7d529f2ab [Stackmaps] Fix the ID type to be i64 also for stackmaps (as we claim in the documentation)
The ID type for the stackmap and patchpoint intrinsics are in both cases i64.
This fixes a zero extend in the SelectionDAGBuilder that still used i32. This
also updates the target independent instructions STACKMAP and PATCHPOINT to use
the correct type.

llvm-svn: 201262
2014-02-12 22:17:10 +00:00
Rafael Espindola
8815574a16 Use a consistent argument order in TargetLoweringObjectFile.
These methods normally call each other and it is really annoying if the
arguments are in a different order. The more common rule was that the arguments
specific to the call come first (GV, Encoding, Suffix) and the auxiliary objects
(Mang, TM) come after. This patch changes the exceptions.

llvm-svn: 201044
2014-02-09 14:50:44 +00:00
Rafael Espindola
8d47aa1e4e Pass the Mangler by reference.
It is never null and it is not used in casts, so there is no reason to use a
pointer. This matches how we pass TM.

llvm-svn: 201025
2014-02-08 14:53:28 +00:00
Rafael Espindola
ef583bf0bc Comment cleanup. Don't repeat the function name in the comment.
llvm-svn: 200999
2014-02-07 22:39:17 +00:00
Rafael Espindola
0cb84825c0 Remove trailing whitespace.
llvm-svn: 200998
2014-02-07 22:33:56 +00:00
Oliver Stannard
690aee262c LLVM-1163: AAPCS-VFP violation when CPRC allocated to stack
According to the AAPCS, when a CPRC is allocated to the stack, all other
VFP registers should be marked as unavailable.

I have also modified the rules for allocating non-CPRCs to the stack, to make
it more explicit that all GPRs must be made unavailable. I cannot think of a
case where the old version would produce incorrect answers, so there is no test
for this.

llvm-svn: 200970
2014-02-07 11:19:53 +00:00
Jim Grosbach
f2f14a2d43 X86: Resolve a long standing FIXME and properly isel pextr[bw].
Generalize the AArch64 .td nodes for AssertZext and AssertSext. Use
them to match the relevant pextr store instructions.

The test widen_load-2.ll requires a slight change because with the
stores gone, the remaining instructions are scheduled in a different
order.

Add test cases for SSE4 and AVX variants.

Resolves rdar://13414672.

Patch by Adam Nemet <anemet@apple.com>.

llvm-svn: 200957
2014-02-07 00:16:33 +00:00
Matt Arsenault
7b69102edb Add address space argument to allowsUnalignedMemoryAccess.
On R600, some address spaces have more strict alignment
requirements than others.
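
With the new argument, an override can distinguish address spaces; a sketch
in which the My* names, the address-space enum, and the policy are all
assumptions:

  bool MyTargetLowering::allowsUnalignedMemoryAccesses(EVT VT, unsigned AddrSpace,
                                                       bool *Fast) const {
    // Hypothetical policy: the LOCAL address space requires natural alignment.
    if (AddrSpace == MyAS::LOCAL_ADDRESS)
      return false;
    if (Fast)
      *Fast = true;
    return true;
  }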

llvm-svn: 200887
2014-02-05 23:15:53 +00:00
Rafael Espindola
98165a6a91 Remove support for not using .loc directives.
Clang itself was not using this. The only way to access it was via llc.

llvm-svn: 200862
2014-02-05 18:00:21 +00:00
Benjamin Kramer
700474a946 SimplifyLibCalls: Push TLI through the exp2->ldexp transform.
For the odd case of platforms with exp2 available but not ldexp.

llvm-svn: 200795
2014-02-04 20:27:23 +00:00
Tim Northover
d6fb863f04 OS X: the correct function is __sincospif_stret, not __sincospi_stretf
rdar://problem/13729466

llvm-svn: 200771
2014-02-04 16:28:20 +00:00
Reid Kleckner
239e9806ff Implement inalloca codegen for x86 with the new inalloca design
Calls with inalloca are lowered by skipping all stores for arguments
passed in memory and the initial stack adjustment to allocate argument
memory.

Now the frontend is responsible for the memory layout, and the backend
doesn't have to do any work.  As a result these changes are pretty
minimal.

Reviewers: echristo

Differential Revision: http://llvm-reviews.chandlerc.com/D2637

llvm-svn: 200596
2014-01-31 23:50:57 +00:00
Juergen Ributzka
8a4f2500be [TLI] Add a new hook to TargetLowering to query the target if a load of a constant should be converted to simply the constant itself.
Before this patch we used getIntImmCost from TargetTransformInfo to determine if
a load of a constant should be converted to just a constant, but the threshold
for this was set to an arbitrary value. This value works well for the two
targets (X86 and ARM) that implement this target-hook, but it isn't
target-independent at all.

Now targets have the possibility to decide directly if this optimization should
be performed. The default value is set to false to preserve the current
behavior. The target hook has been moved to TargetLowering, which removed the
last use and need of TargetTransformInfo in SelectionDAG.

llvm-svn: 200271
2014-01-28 01:20:14 +00:00
Eric Christopher
2b6e161fce Revert r199871 and replace it with a simple check in the debug info
code to see if we're emitting a function into a non-default
text section. This is still a less-than-ideal solution, but more
contained than r199871 to determine whether or not we're emitting
code into an array of comdat sections.

llvm-svn: 200269
2014-01-28 00:49:26 +00:00
Eric Christopher
4de8f33ce6 Add a variable to track whether or not we've used a unique section,
e.g. linkonce, to TargetMachine and set it when we've done so
for ELF targets currently. This involved making TargetMachine
non-const in a TLOF use and propagating that change around - I'm
open to other ideas.

This will be used in a future commit to handle emitting debug
information with ranges.

llvm-svn: 199871
2014-01-23 06:47:25 +00:00
Yunzhong Gao
12926dc854 Adding new LTO APIs to parse metadata nodes and extract linker options and
dependent libraries from a bitcode module.

Differential Revision: http://llvm-reviews.chandlerc.com/D2343

llvm-svn: 199759
2014-01-21 18:31:27 +00:00
David Majnemer
c0087ca297 WinCOFF: Transform IR expressions featuring __ImageBase into image relative relocations
MSVC on x64 requires that we create image relative symbol
references to refer to RTTI data. Seeing as how there is no way to
explicitly make reference to a given relocation type in LLVM IR, pattern
match expressions of the form &foo - &__ImageBase.

Differential Revision: http://llvm-reviews.chandlerc.com/D2523

llvm-svn: 199312
2014-01-15 09:16:42 +00:00
Rafael Espindola
728814cedc All backends use MC now.
llvm-svn: 198959
2014-01-10 21:49:27 +00:00
Rafael Espindola
4dc5af8bc2 Move the llvm mangler to lib/IR.
This makes it available to tools that don't link with target (like llvm-ar).

llvm-svn: 198708
2014-01-07 21:19:40 +00:00
Bill Wendling
c3b5643da4 Refactor function that checks that __builtin_returnaddress's argument is constant.
This moves the check up into the parent class so that all targets can use it
without having to copy (and keep in sync) the same error message.

llvm-svn: 198579
2014-01-06 00:43:20 +00:00
Rafael Espindola
eae6386a1e Make the llvm mangler depend only on DataLayout.
Before this patch any program that wanted to know the final symbol name of a
GlobalValue had to link with Target.

This patch implements a compromise solution where the mangler uses DataLayout.
This way, any tool that already links with Target (llc, clang) gets the exact
behavior as before and new IR files can be mangled without linking with Target.

With this patch the mangler is constructed with just a DataLayout and DataLayout
is extended to include the information the Mangler needs.

llvm-svn: 198438
2014-01-03 19:21:54 +00:00
Hal Finkel
df8016f76f Disable compare sinking in CodeGenPrepare when multiple condition registers are available
As noted in the comment above CodeGenPrepare::OptimizeInst, which aggressively
sinks compares to reduce pressure on the condition register(s), for targets
such as PowerPC with multiple condition registers, this may not be the right
thing to do. This adds an HasMultipleConditionRegisters boolean to TLI, and
CodeGenPrepare::OptimizeInst is skipped when HasMultipleConditionRegisters is
true.

This functionality will be used by the PowerPC backend in an upcoming commit.
Especially when the PowerPC backend starts tracking individual condition
register bits as separate allocatable entities (which will happen in this
upcoming commit), this sinking from CodeGenPrepare::OptimizeInst is
significantly suboptimal.
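
A target with many condition registers would then opt in from its
TargetLowering constructor, roughly (setter name assumed from the new
TLI boolean):

  // In MyTargetLowering's constructor:
  setHasMultipleConditionRegisters();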

llvm-svn: 198354
2014-01-02 21:13:43 +00:00
Hal Finkel
860adf085f Add support for positionally-encoded operands to FixedLenDecoderEmitter
Unfortunately, the PowerPC instruction definitions make heavy use of the
positional operand encoding heuristic to map operands onto bitfield variables
in the instruction definitions. Changing this to use name-based mapping is not
trivial, however, because additional infrastructure needs to be designed to
handle mapping of complex operands (with multiple suboperands) onto multiple
bitfield variables.

In the mean time, this adds support for positionally encoded operands to
FixedLenDecoderEmitter, so that we can generate a disassembler for the PowerPC
backend. To prevent an accidental reliance on this feature, and to prevent an
undesirable interaction with existing disassemblers, a backend must opt-in to
this support by setting the new decodePositionallyEncodedOperands
instruction-set bit to true.

When enabled, this iterates the variables that contribute to the instruction
encoding, just as the encoder does, and emulates the procedure the encoder uses
to map "numbered" operands to variables. The bit range for each variable is
also determined as the encoder determines them. This map is then consulted
during the decoder-generator's loop over operands to decode, allowing the
decoder to understand both position-based and name-based operand-to-variable
mappings.

As noted in the comment on the decodePositionallyEncodedOperands definition,
this support should be removed once it is no longer needed. There should be no
change to existing disassemblers.

llvm-svn: 197691
2013-12-19 16:12:53 +00:00
Yi Jiang
67f2e8e3f8 Enable double to float shrinking optimizations for binary functions like 'fmin/fmax'. Fix radar:15283121
llvm-svn: 197434
2013-12-16 22:42:40 +00:00
Andrew Trick
bdcfd58034 Add TargetRegisterInfo::reverseLocalAssignment hook.
This hook reverses the order of assignment for local live ranges. This
will generally allocate shorter local live ranges first. For targets with
many registers, this could reduce regalloc compile time by a large
factor. It should still achieve optimal coloring; however, it can change
register eviction decisions. It is disabled by default for two reasons:
(1) Top-down allocation is simpler and easier to debug for targets that
don't benefit from reversing the order.
(2) Bottom-up allocation could result in poor eviction decisions on some
targets affecting the performance of compiled code.
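
Opting in is a one-line override (sketch; MyRegisterInfo is hypothetical):

  bool MyRegisterInfo::reverseLocalAssignment() const {
    // Allocate shorter local live ranges first.
    return true;
  }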

llvm-svn: 197001
2013-12-11 03:40:15 +00:00
Richard Sandiford
bc88711db8 Add TargetLowering::prepareVolatileOrAtomicLoad
One unusual feature of the z architecture is that the result of a
previous load can be reused indefinitely for subsequent loads, even if
a cache-coherent store to that location is performed by another CPU.
A special serializing instruction must be used if you want to force
a load to be reattempted.

Since volatile loads are not supposed to be omitted in this way,
we should insert a serializing instruction before each such load.
The same goes for atomic loads.

The patch implements this at the IR->DAG boundary, in a similar way
to atomic fences.  It is a no-op for targets other than SystemZ.
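
The SystemZ side would look something like this (a sketch; the serializing
node name is an assumption):

  SDValue SystemZTargetLowering::prepareVolatileOrAtomicLoad(
      SDValue Chain, SDLoc DL, SelectionDAG &DAG) const {
    // Chain a serializing operation in front of the load.
    return DAG.getNode(SystemZISD::SERIALIZE, DL, MVT::Other, Chain);
  }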

llvm-svn: 196905
2013-12-10 10:36:34 +00:00
Vincent Lejeune
8f22bc4540 Add a RequireStructuredCFG Field to TargetMachine.
llvm-svn: 196634
2013-12-07 01:49:19 +00:00
Andrew Trick
24a9064bbd Machine model comments. Explain a ProcessorUnit's BufferSize.
llvm-svn: 196515
2013-12-05 17:55:53 +00:00
Rafael Espindola
2b4db8d379 Remove the isImplicitlyPrivate argument of getNameWithPrefix.
getSymbolWithGlobalValueBase's use is to create the name of a new symbol based
on the name of an existing GV. Assert that, and then remove the last call
that passed true to isImplicitlyPrivate.

This gives the mangler API a 1:1 mapping from GV to names, which is what we
need to drop the mangler dependency on the target (and use an extended
datalayout instead).

llvm-svn: 196472
2013-12-05 05:53:12 +00:00
Alp Toker
e845f8af67 Correct word hyphenations
This patch tries to avoid unrelated changes other than fixing a few
hyphen-related ambiguities and contractions in nearby lines.

llvm-svn: 196471
2013-12-05 05:44:44 +00:00
Rafael Espindola
c36d63a948 Move getSymbolWithGlobalValueBase to TargetLoweringObjectFile.
This allows it to be used in TargetLoweringObjectFileImpl.cpp.

llvm-svn: 196117
2013-12-02 16:25:47 +00:00
Rafael Espindola
299ef825a5 Remove dead code.
llvm-svn: 196066
2013-12-02 05:10:04 +00:00
Rafael Espindola
427ca8d886 Change the default of AsmWriterClassName and isMCAsmWriter.
llvm-svn: 196065
2013-12-02 04:55:42 +00:00
Lang Hames
067c025250 Refactor a lot of patchpoint/stackmap related code to simplify and make it
target independent.

Most of the x86 specific stackmap/patchpoint handling was necessitated by the
use of the native address-mode format for frame index operands. PEI has now
been modified to treat stackmap/patchpoint similarly to DEBUG_INFO, allowing
us to use a simple, platform independent register/offset pair for frame
indexes on stackmap/patchpoints.

Notes:
  - Folding is now platform independent and automatically supported.
  - Emitting patchpoints with direct memory references now just involves calling
    the TargetLoweringBase::emitPatchPoint utility method from the target's
    XXXTargetLowering::EmitInstrWithCustomInserter method. (See
    X86TargetLowering for an example).
  - No more ugly platform-specific operand parsers.

This patch shouldn't change the generated output for X86. 

llvm-svn: 195944
2013-11-29 03:07:54 +00:00
Rafael Espindola
5e593c2e45 Remove dead argument.
llvm-svn: 195806
2013-11-27 02:25:20 +00:00
Andrew Trick
95afafe3fa StackMap: Implement support for DirectMemRefOp.
A Direct stack map location records the address of a frame index. This
address is itself the value that the runtime requested. This differs
from IndirectMemRefOp locations, which refer to stack locations from
which the requested values must be loaded. Direct locations can
directly communicate the address of an alloca, while IndirectMemRefOp
locations handle register spills.

For example:

entry:
  %a = alloca i64...
  llvm.experimental.stackmap(i32 <ID>, i32 <shadowBytes>, i64* %a)

Since both the alloca and stackmap intrinsic are in the entry block,
and the intrinsic takes the address of the alloca, the runtime can
assume that LLVM will not substitute the alloca with any intervening
value. This must be verified by the runtime by checking that the stack
map's location is a Direct location type. The runtime can then
determine the alloca's relative location on the stack immediately after
compilation, or at any time thereafter. This differs from Register and
Indirect locations, because the runtime can only read the values in
those locations when execution reaches the instruction address of the
stack map.

llvm-svn: 195712
2013-11-26 02:03:25 +00:00
Paul Robinson
0ef8735f0a Teach ISel not to optimize 'optnone' functions (revised).
Improvements over r195317:
- Set/restore EnableFastISel flag instead of just running FastISel within
  SelectAllBasicBlocks; the flag is checked in various places, and
  FastISel won't run properly if those places don't do the right thing.
- Test looks for normal ISel versus FastISel behavior, and not
  something more subtle that doesn't work everywhere.

Based on work by Andrea Di Biagio.

llvm-svn: 195491
2013-11-22 19:11:24 +00:00
NAKAMURA Takumi
656eba2e1e Whitespace.
llvm-svn: 195341
2013-11-21 11:08:31 +00:00
NAKAMURA Takumi
add94e0548 Revert r195317 (and r195333), "Teach ISel not to optimize 'optnone' functions."
It broke, at least, the i686 target. It is reproducible with "llc -mtriple=i686-unknown".

FYI, it didn't appear to add either "-O0" or "-fast-isel".

llvm-svn: 195339
2013-11-21 10:55:15 +00:00
Paul Robinson
eba6ab82dd Teach ISel not to optimize 'optnone' functions.
Based on work by Andrea Di Biagio.

llvm-svn: 195317
2013-11-21 06:33:32 +00:00
Andrew Trick
bd486c29f4 Added a size field to the stack map record to handle subregister spills.
Implementing this on big-endian platforms could get strange. I added a
target hook, getStackSlotRange, per Jakob's recommendation to make
this as explicit as possible.

llvm-svn: 194942
2013-11-17 01:36:23 +00:00
Matt Arsenault
084675c776 Add target hook to prevent folding some bitcasted loads.
This is to avoid this transformation in some cases:
fold (conv (load x)) -> (load (conv*)x)

On architectures that don't natively support some vector
loads efficiently, casting the load to a smaller vector of
larger types and loading is more efficient.
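
A target can veto the fold via the new hook; a rough sketch in which the
hook name, My* class, and policy are assumptions:

  bool MyTargetLowering::isLoadBitCastBeneficial(EVT LoadVT, EVT BitcastVT) const {
    // Keep wide-element vector loads intact rather than loading through a
    // bitcast to many narrow elements.
    return BitcastVT.getScalarType().getSizeInBits() >=
           LoadVT.getScalarType().getSizeInBits();
  }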

Patch by Micah Villmow.

llvm-svn: 194783
2013-11-15 04:42:23 +00:00
Matt Arsenault
9921608896 Add addrspacecast instruction.
Patch by Michele Scandale!

llvm-svn: 194760
2013-11-15 01:34:59 +00:00
Chandler Carruth
f2e7a23acb Move the old pass manager infrastructure into a legacy namespace and
give the files a legacy prefix in the right directory. Use forwarding
headers in the old locations to paper over the name change for most
clients during the transitional period.

No functionality changed here! This is just clearing some space to
reduce renaming churn later on with a new system.

Even when the new stuff starts to go in, it is going to be hidden behind
a flag and off-by-default as it is still WIP and under development.

This patch is specifically designed so that very little out-of-tree code
has to change. I'm going to work as hard as I can to keep that the case.
Only direct forward declarations of the PassManager class are impacted
by this change.

llvm-svn: 194324
2013-11-09 12:26:54 +00:00
Juergen Ributzka
a748d55906 [Stackmap] Materialize the jump address within the patchpoint noop slide.
This patch moves the jump address materialization inside the noop slide. This
enables patching of the materialization itself or its complete removal. This
patch also adds the ability to define scratch registers that can be used safely
by the code called from the patchpoint intrinsic. At least one scratch register
is required, because that one is used for the materialization of the jump
address. This patch depends on D2009.

Differential Revision: http://llvm-reviews.chandlerc.com/D2074

Reviewed by Andy

llvm-svn: 194306
2013-11-09 01:51:33 +00:00
Juergen Ributzka
f27436b708 [Stackmap] Add AnyReg calling convention support for patchpoint intrinsic.
The idea of the AnyReg Calling Convention is to provide the call arguments in
registers, but not to force them to be placed in a particular order into a
specified set of registers. Instead it is up to the register allocator to assign
any register as it sees fit. The same applies to the return value (if
applicable).

Differential Revision: http://llvm-reviews.chandlerc.com/D2009

Reviewed by Andy

llvm-svn: 194293
2013-11-08 23:28:16 +00:00
Dmitri Gribenko
90966eb2ea Convert comments to documentation comments (// -> ///)
Patch by MathOnNapkins

llvm-svn: 194093
2013-11-05 21:28:42 +00:00
Bob Wilson
eb7943100b Convert calls to __sinpi and __cospi into __sincospi_stret
This adds a SimplifyLibCalls case which converts the special __sinpi and
__cospi (float & double variants) into a __sincospi_stret where appropriate to
remove duplicated work.

Patch by Tim Northover

llvm-svn: 193943
2013-11-03 06:48:38 +00:00
Andrew Trick
be4d0aecc4 Lower stackmap intrinsics directly to their target opcode in the DAG builder.
llvm-svn: 193769
2013-10-31 17:18:24 +00:00
Matt Arsenault
c0487521f8 Update comment
llvm-svn: 193651
2013-10-29 21:04:19 +00:00
Matt Arsenault
9c33ca743d Workaround MSVC 32-bit miscompile of getCondCodeAction.
Use 32-bit types for the array instead of 64. This should
generally be better anyway.

In optimized + assert builds, I saw a failure when a
cond code / type combination that is never set was loading
a non-zero value and hitting the != Promote assert.

It turns out when loading the 64-bit value to do the shift,
the assembly loads the 2 32-bit halves from non-consecutive
addresses. The address of the second half of the loaded uint64_t
doesn't include the offset of the array in the struct. Instead
of being offset + 4, it's just + 4.

I'm not entirely sure why this wasn't observed before.
setCondCodeAction isn't heavily used by the in-tree targets,
and not with the higher valued vector SimpleValueTypes. Only
PPC is using one of the > 32 valued types, and that is probably
never used by anyone on a 32-bit MSVC compiled host.

I ran into this when upgrading LLVM versions, so I guess the
value loaded from the nonsense address happened to work out
before.

No test since I'm not really sure if / how it can be reproduced
with the current in-tree targets, and it's not supposed to change
anything.

llvm-svn: 193650
2013-10-29 20:59:29 +00:00
Rafael Espindola
3e081640d5 Move getSymbol to TargetLoweringObjectFile.
This allows constructing a Mangler with just a TargetMachine.

llvm-svn: 193630
2013-10-29 17:28:26 +00:00
Tom Stellard
7d9417ac32 SelectionDAG: Pass along the original argument/element type in ISD::InputArg
For some targets, it is useful to be able to look at the original
type of an argument without having to dig through the original IR.

This also fixes a bug in SelectionDAGBuilder where InputArg.PartOffset
was not taking into account the offset of structure elements.

Patch by: Justin Holewinski

Tom Stellard:
  - Changed the type of ArgVT to EVT, so it can store non-simple types
    like v3i32.

llvm-svn: 193214
2013-10-23 00:44:24 +00:00
Matt Arsenault
69e2fc4f25 Remove unused TargetLowering field.
llvm-svn: 193113
2013-10-21 20:04:01 +00:00
Matt Arsenault
aad9d96b2b Fix CodeGen for vectors of pointers with address spaces.
llvm-svn: 193112
2013-10-21 20:03:58 +00:00
Jack Carter
a416bdc47c [projects/test-suite] White space and long line fixes.
No functionality changes.

llvm-svn: 192863
2013-10-17 01:34:33 +00:00
Andrew Trick
b65138d3af Fix the ExecutionDepsFix pass to handle AVX instructions.
This pass is needed to break false dependencies. Without it, unlucky
register assignment can result in wild (5x) swings in
performance. This pass was trying to handle AVX but not getting it
right. AVX doesn't have partial register defs; it has unused register
reads in which the high bits of a source operand are copied into the
unused bits of the dest.

Fixing this requires conservative liveness analysis. This is awkward
because the pass already has its own pseudo-liveness. However, proper
liveness is expensive, and we would like to use a generic utility to
compute it. The fix only invokes liveness on-demand. It is rare to
detect a case that needs undef-read dependence breaking, but when it
happens, it can be needed many times within a very large block.

I think the existing heuristic which uses a register window of 16 is
too conservative for loop-carried false dependencies. If the loop is a
reduction, the out-of-order engine may be able to execute several loop
iterations in parallel. However, I'll leave this tuning exercise for
next time.

llvm-svn: 192635
2013-10-14 22:19:03 +00:00
Quentin Colombet
c02e5604f4 [DAGCombiner] Reapply load slicing (192471) with a test that explicitly set sse4.2 support.
This should fix the buildbots.

Original commit message:
[DAGCombiner] Slice a big load in two loads when the elements are next to each
other in memory and the target has paired loads and performs post-isel load
combining.

E.g., this optimization will transform something like this:
a = load i64* addr
b = trunc i64 a to i32
c = lshr i64 a, 32
d = trunc i64 c to i32

into:
b = load i32* addr1
d = load i32* addr2
Where addr1 = addr2 +/- sizeof(i32), if the target supports paired loads and
performs post-isel load combining.

One should overload TargetLowering::hasPairedLoad to provide this information.
The default is false.
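
A minimal override sketch (the My* class, type, and alignment are
illustrative):

  bool MyTargetLowering::hasPairedLoad(EVT LoadedType,
                                       unsigned &RequiredAlignment) const {
    // Pretend the target can pair two i32 loads that are 4-byte aligned.
    RequiredAlignment = 4;
    return LoadedType == MVT::i32;
  }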

<rdar://problem/14477220>

llvm-svn: 192476
2013-10-11 18:29:42 +00:00
Quentin Colombet
fd0097531f [DAGCombiner] Revert load slicing (r192471), until I figure out why it fails on ubuntu.
llvm-svn: 192474
2013-10-11 18:17:17 +00:00
Quentin Colombet
b60dc81c8b [DAGCombiner] Slice a big load in two loads when the elements are next to each
other in memory and the target has paired loads and performs post-isel load
combining.

E.g., this optimization will transform something like this:
 a = load i64* addr
 b = trunc i64 a to i32
 c = lshr i64 a, 32
 d = trunc i64 c to i32

into:
 b = load i32* addr1
 d = load i32* addr2
Where addr1 = addr2 +/- sizeof(i32), if the target supports paired loads and
performs post-isel load combining.

One should overload TargetLowering::hasPairedLoad to provide this information.
The default is false.

<rdar://problem/14477220>

llvm-svn: 192471
2013-10-11 18:01:14 +00:00
Sriram Murali
9494045fff test commit
- fix comments on vector type legalization

llvm-svn: 192389
2013-10-10 20:24:53 +00:00
Matt Arsenault
f7688b53a6 Fix grammar / missing words
llvm-svn: 192380
2013-10-10 18:47:35 +00:00
Arnold Schwaighofer
47322176be IfConverter: Use TargetSchedule for instruction latencies
For targets that have instruction itineraries this means no change. Targets
that move over to the new schedule model will be able to use the new schedule
model for instruction latencies in the if-converter (the logic is such that if
there is no itinerary we will use the new sched model for the latencies).

Before, we queried "TTI->getInstructionLatency()" for the instruction latency
and the extra prediction cost. Now, we query the TargetSchedule abstraction for
the instruction latency and TargetInstrInfo for the extra prediction cost. The
TargetSchedule abstraction will internally call "TTI->getInstructionLatency" if
an itinerary exists, otherwise it will use the new schedule model.

ATTENTION: Out of tree targets!

(I will also send out an email later to LLVMDev)

This means, if your target implements

 unsigned getInstrLatency(const InstrItineraryData *ItinData,
                          const MachineInstr *MI,
                          unsigned *PredCost);

and returns a value for "PredCost", you now also need to implement

 unsigned getPredictationCost(const MachineInstr *MI);

(if your target uses the IfConversion.cpp pass)

radar://15077010

llvm-svn: 191671
2013-09-30 15:28:56 +00:00
Robert Wilhelm
198f21deb3 Even more spelling fixes for "instruction".
llvm-svn: 191611
2013-09-28 13:42:22 +00:00
Andrew Trick
39686f4c6c Added temp flag -misched-bench for staging in default changes.
llvm-svn: 191423
2013-09-26 05:53:35 +00:00
Andrew Trick
65c09c6381 Mark the x86 machine model as incomplete. PR17367.
Ideally, the machine model is added at the time the instructions are
defined. But many instructions in X86InstrSSE.td still need a model.

Without this workaround the scheduler asserts because x86 already has
itinerary classes for these instructions, indicating they should be
modeled by the scheduler. Since we use the new machine model for other
instructions, it expects a new machine model for these too.

llvm-svn: 191391
2013-09-25 18:14:12 +00:00
Joey Gouly
fccb3bcae3 Add an instruction deprecation feature to TableGen.
The 'Deprecated' class allows you to specify a SubtargetFeature that the
instruction is deprecated on.

The 'ComplexDeprecationPredicate' class allows you to define a custom
predicate that is called to check for deprecation.
For example:
  ComplexDeprecationPredicate<"MCR">

would mean you would have to define the following function:
  bool getMCRDeprecationInfo(MCInst &MI, MCSubtargetInfo &STI,
                             std::string &Info)

Which returns 'false' for not deprecated, and 'true' for deprecated,
and stores the warning message in 'Info'.

The MCTargetAsmParser constructor was changed to take an extra argument of
the MCInstrInfo class, so out-of-tree targets will need to be changed.

llvm-svn: 190598
2013-09-12 10:28:05 +00:00
Andrew Trick
bf56e1926d Added MachineSchedPolicy.
Allow subtargets to customize the generic scheduling strategy.
This is convenient for targets that don't need to add new heuristics
by specializing the strategy.

llvm-svn: 190176
2013-09-06 17:32:34 +00:00
Andrew Trick
8cb647b5c8 mi-sched: Load clustering is a bit too expensive to enable unconditionally.
llvm-svn: 189990
2013-09-04 21:00:08 +00:00
Hao Liu
b344ca7aa3 Implement AArch64 NEON instructions in AdvSIMD(shift). About 24 shift instructions:
sshr, ushr, ssra, usra, srshr, urshr, srsra, ursra, sri, shl, sli, sqshlu,
sqshl, uqshl, shrn, sqrshrun, sqshrn, uqshr, sqrshrn, uqrshrn, sshll, ushll,
and 4 convert instructions: scvtf, ucvtf, fcvtzs, fcvtzu.

llvm-svn: 189925
2013-09-04 09:28:24 +00:00
Hal Finkel
1613794ae8 Add useAA() to TargetSubtargetInfo
There are several optional (off-by-default) features in CodeGen that can make
use of alias analysis. These features are important for generating code for
some kinds of cores (for example the (in-order) PPC A2 core). This adds a
useAA() function to TargetSubtargetInfo to allow these features to be enabled
by default on a per-subtarget basis.

Here is the first use of this function: To control the default of the
-enable-aa-sched-mi feature.
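
Subtargets opt in with a simple override (sketch; MySubtarget is hypothetical):

  bool MySubtarget::useAA() const {
    // Turn on AA-driven CodeGen features (e.g. -enable-aa-sched-mi) by
    // default for this in-order core.
    return true;
  }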

llvm-svn: 189563
2013-08-29 03:25:05 +00:00
Tom Stellard
1287fd01c3 SelectionDAG: Use correct pointer size when lowering function arguments v2
This adds minimal support to the SelectionDAG for handling address spaces
with different pointer sizes.  The SelectionDAG should now correctly
lower pointer function arguments to the correct size as well as generate
the correct code when lowering getelementptr.

This patch also updates the R600 DataLayout to use 32-bit pointers for
the local address space.

v2:
  - Add more helper functions to TargetLoweringBase
  - Use CHECK-LABEL for tests

llvm-svn: 189221
2013-08-26 15:05:36 +00:00
Andrew Trick
80feb87060 PrintVRegOrUnit
llvm-svn: 189124
2013-08-23 17:48:53 +00:00
Jakob Stoklund Olesen
af78d7a3df Add an OtherPreserved field to the CalleeSaved TableGen class.
This field specifies registers that are preserved across function calls,
but that should not be included in the generated SaveList array.

This can be used to generate regmasks for architectures that save
registers through other means, like SPARC's register windows.

llvm-svn: 189084
2013-08-23 02:25:47 +00:00
Tim Northover
eb7a86ed88 ARM: use TableGen patterns to select CMOV operations.
Back in the mists of time (2008), it seems TableGen couldn't handle the
patterns necessary to match ARM's CMOV node that we convert select operations
to, so we wrote a lot of fairly hairy C++ to do it for us.

TableGen can deal with it now: there were a few minor differences to CodeGen
(see tests), but nothing obviously worse that I could see, so we should
probably address anything that *does* come up in a localised manner.

llvm-svn: 188995
2013-08-22 09:57:11 +00:00
Richard Sandiford
add1a68f21 [SystemZ] Use SRST to optimize memchr
SystemZTargetLowering::emitStringWrapper() previously loaded the character
into R0 before the loop and made R0 live on entry.  I'd forgotten that
allocatable registers weren't allowed to be live across blocks at this stage,
and it confused LiveVariables enough to cause a miscompilation of f3 in
memchr-02.ll.

This patch instead loads R0 in the loop and leaves LICM to hoist it
after RA.  This is actually what I'd tried originally, but I went for
the manual optimisation after noticing that R0 often wasn't being hoisted.
This bug forced me to go back and look at why, now fixed as r188774.

We should also try to optimize null checks so that they test the CC result
of the SRST directly.  The select between null and the SRST GPR result could
then usually be deleted as dead.

llvm-svn: 188779
2013-08-20 09:38:48 +00:00
Richard Sandiford
06a13f49c8 [SystemZ] Use SRST to implement strlen and strnlen
It would also make sense to use it for memchr; I'm working on that now.

llvm-svn: 188547
2013-08-16 11:41:43 +00:00
Richard Sandiford
93a75a2a56 [SystemZ] Use MVST to implement strcpy and stpcpy
llvm-svn: 188546
2013-08-16 11:29:37 +00:00
Richard Sandiford
353c7bc810 [SystemZ] Use CLST to implement strcmp
llvm-svn: 188544
2013-08-16 11:21:54 +00:00
Michael Gottesman
30dcd21864 Update makeLibCall to return both the call and the chain associated with the libcall instead of just the call. This allows us to specify libcalls that return void.
LowerCallTo returns a pair with the return value of the call as the first
element and the chain associated with the return value as the second element. If
we lower a call that has a void return value, LowerCallTo returns an SDValue
with a NULL SDNode and the chain for the call. Thus makeLibCall by just
returning the first value makes it impossible for you to set up the chain so
that the call is not eliminated as dead code.

I also updated all references to makeLibCall to reflect the new return type.
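
Callers now unpack the pair, e.g. (a sketch; the libcall enum and operand
setup are assumed from the surrounding lowering code):

  std::pair<SDValue, SDValue> CallInfo =
      makeLibCall(DAG, LC, RetVT, Ops, NumOps, /*isSigned=*/false, dl);
  SDValue Result = CallInfo.first;  // null node for void libcalls
  SDValue Chain = CallInfo.second;  // always valid; keeps the call alive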

llvm-svn: 188300
2013-08-13 17:54:56 +00:00
Richard Sandiford
4980a32ba3 [SystemZ] Use CLC and IPM to implement memcmp
For now this is restricted to fixed-length comparisons with a length
in the range [1, 256], as for memcpy() and MVC.

llvm-svn: 188163
2013-08-12 10:28:10 +00:00
Benjamin Kramer
169eb89854 Add a overload to CostTable which allows it to infer the size of the table.
Use it to avoid repeating ourselves too often. Also store MVT::SimpleValueType
in the TTI tables so they can be statically initialized; MVT's constructors
create bloated initialization code otherwise.

llvm-svn: 188095
2013-08-09 19:33:32 +00:00
Hal Finkel
bdc7aa32c1 Add ISD::FROUND for libm round()
All libm floating-point rounding functions, except for round(), had their own
ISD nodes. Recent PowerPC cores have an instruction for round(), and so here I'm
adding ISD::FROUND so that round() can be custom lowered as well.

For the most part, this is straightforward. I've added an intrinsic
and a matching ISD node just like those for nearbyint() and friends. The
SelectionDAG pattern I've named frnd (because ISD::FP_ROUND has already claimed
fround).

This will be used by the PowerPC backend in a follow-up commit.

llvm-svn: 187926
2013-08-07 22:49:12 +00:00
Tim Northover
29e73e0f55 Refactor isInTailCallPosition handling
This change came about primarily because of two issues in the existing code.
Neither of:

define i64 @test1(i64 %val) {
  %in = trunc i64 %val to i32
  tail call i32 @ret32(i32 returned %in)
  ret i64 %val
}

define i64 @test2(i64 %val) {
  tail call i32 @ret32(i32 returned undef)
  ret i32 42
}

should be tail calls, and the function sameNoopInput is responsible. The main
problem is that it is completely symmetric in the "tail call" and "ret" value,
but in reality different things are allowed on each side.

For these cases:
1. Any truncation should lead to a larger value being generated by "tail call"
   than needed by "ret".
2. Undef should only be allowed as a source for ret, not as a result of the
   call.

Along the way I noticed that a mismatch between what this function treats as a
valid truncation and what the backends see can lead to invalid calls as well
(see x86-32 test case).

This patch refactors the code so that instead of being based primarily on
values which it recurses into when necessary, it starts by inspecting the type
and considers each fundamental slot that the backend will see in turn. For
example, given a pathological function that returned {{}, {{}, i32, {}}, i32}
we would consider each "real" i32 in turn, and ask if it passes through
unchanged. This is much closer to what the backend sees as a result of
ComputeValueVTs.

Aside from the bug fixes, this eliminates the recursion that's going on and, I
believe, makes the bulk of the code significantly easier to understand. The
trade-off is the nasty iterators needed to find the real types inside a
returned value.

llvm-svn: 187787
2013-08-06 09:12:35 +00:00
Tom Stellard
fdf221305c TargetLowering: Add getVectorIdxTy() function v2
This virtual function can be implemented by targets to specify the type
to use for the index operand of INSERT_VECTOR_ELT, EXTRACT_VECTOR_ELT,
INSERT_SUBVECTOR, EXTRACT_SUBVECTOR.  The default implementation returns
the result from TargetLowering::getPointerTy()

The previous code was using TargetLowering::getPointerTy() for vector
indices, because this is guaranteed to be legal on all targets.  However,
using TargetLowering::getPointerTy() can be a problem for targets with
pointer sizes that differ across address spaces.  On such targets,
when vectors need to be loaded or stored to an address space other than the
default 'zero' address space (which is the address space assumed by
TargetLowering::getPointerTy()), having an index that
is a different size than the pointer can lead to inefficient
pointer calculations, (e.g. 64-bit adds for a 32-bit address space).

There is no intended functionality change with this patch.
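
A target override is a one-liner (sketch; MyTargetLowering is hypothetical):

  MVT MyTargetLowering::getVectorIdxTy() const {
    // Use a fixed 32-bit index type even where pointers may be 64-bit.
    return MVT::i32;
  }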

llvm-svn: 187748
2013-08-05 22:22:01 +00:00
Bill Wendling
e7b7059f1d Use function attributes to indicate that we don't want to realign the stack.
Function attributes are the future! So just query whether we want to realign the
stack directly from the function instead of through a random target options
structure.
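
The query then reads roughly as follows (the attribute name is an assumption
from this era of the tree):

  bool RealignStack = !MF.getFunction()->getAttributes().hasAttribute(
      AttributeSet::FunctionIndex, "no-realign-stack");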

llvm-svn: 187618
2013-08-01 21:42:05 +00:00