mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-23 21:13:02 +02:00
Commit Graph

2655 Commits

Author SHA1 Message Date
Rafael Espindola
20867572f4 Remove MCUseCFI from TargetMachine.
It was always true.

llvm-svn: 208547
2014-05-12 13:01:42 +00:00
Hal Finkel
5b038e4cbc Pass the value type to TLI::getRegisterByName
We must validate the value type in TLI::getRegisterByName, because if we
don't and the wrong type was used with the IR intrinsic, then we'll assert
(because we won't be able to find a valid register class with which to
construct the requested copy operation). For PPC64, additionally, the type
information is necessary to decide between the 64-bit register and the 32-bit
subregister.

No functionality change.
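
As a rough sketch of what a target override might look like after this change
(the target name, register enum, and exact hook signature here are assumptions,
not taken from the patch):

  unsigned MyTargetLowering::getRegisterByName(const char *RegName, EVT VT) const {
    if (StringRef(RegName) == "sp") {
      // With the value type available, a mismatched use of the IR intrinsic
      // can be rejected here instead of asserting later when no register
      // class fits.
      if (VT != MVT::i64)
        report_fatal_error("Invalid type for named register 'sp'");
      return MyTarget::SP; // on PPC64, the type would pick the 64-bit register
    }
    report_fatal_error("Invalid register name");
  }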

llvm-svn: 208508
2014-05-11 19:29:07 +00:00
Oliver Stannard
2b1166b162 ARM: HFAs must be passed in consecutive registers
When using the ARM AAPCS, HFAs (Homogeneous Floating-point Aggregates) must
be passed in a block of consecutive floating-point registers, or on the stack.
This means that unused floating-point registers cannot be back-filled with
part of an HFA; however, this can currently happen. This patch, along with the
corresponding clang patch (http://reviews.llvm.org/D3083), prevents this.
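
A hedged illustration of the rule (the signature below is made up for this
example): under the VFP calling convention, seven doubles occupy d0-d6
(s0-s13), leaving only s14/s15 free, so a four-float HFA can no longer get a
block of consecutive registers and must go entirely on the stack; the bug
being fixed would instead back-fill s14/s15 with the first two floats.

  struct HFA4 { float f[4]; };        // homogeneous aggregate of four floats

  void callee(double a, double b, double c, double d,
              double e, double g, double h,   // d0-d6, i.e. s0-s13
              HFA4 q);                        // all-registers or all-stack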

llvm-svn: 208413
2014-05-09 14:01:47 +00:00
Hal Finkel
c52e65b830 Move late partial-unrolling thresholds into the processor definitions
The old method used by X86TTI to determine partial-unrolling thresholds was
messy (because it worked by testing target features), and also would not
correctly identify the target CPU if certain target features were disabled.
After some discussions on IRC with Chandler et al., it was decided that the
processor scheduling models were the right containers for this information
(because it is often tied to special uop dispatch-buffer sizes).

This does represent a small functionality change:
 - For generic x86-64 (which uses the SB model and, thus, will get some
   unrolling).
 - For AMD cores (because they still currently use the SB scheduling model)
 - For Haswell (based on benchmarking by Louis Gerbarg, it was decided to bump
   the default threshold to 50; we're working on a test case for this).
Otherwise, nothing has changed for any other targets. The logic, however, has
been moved into BasicTTI, so other targets may now also opt-in to this
functionality simply by setting LoopMicroOpBufferSize in their processor
model definitions.
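
A rough sketch of the consumer side (the class name and how it reaches its
scheduling model are assumptions; the field name comes from this description):

  void MyTTI::getUnrollingPreferences(Loop *L, UnrollingPreferences &UP) const {
    const MCSchedModel &SM = getSchedModel(); // however the target exposes it
    if (SM.LoopMicroOpBufferSize > 0) {
      UP.Partial = UP.Runtime = true;
      // Cap partially-unrolled bodies roughly at the size of the
      // dispatch/loop buffer declared in the processor model.
      UP.PartialThreshold = SM.LoopMicroOpBufferSize;
    }
  }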

llvm-svn: 208289
2014-05-08 09:14:44 +00:00
Renato Golin
8a9a382ab2 Implementing named register intrinsics
This patch implements the infrastructure to use named register constructs in
programs that need access to specific registers (bare metal, kernels, etc).

So far, only the stack pointer is supported as a technology preview, but as it
is, the intrinsic can already support all non-allocatable registers from any
architecture.
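
A hedged example of the user-facing construct this enables (through the
matching clang change; the variable name is arbitrary and the register name
is per-target):

  // GNU-style global named register variable; reads of it become calls to
  // the @llvm.read_register intrinsic.
  register unsigned long current_sp asm("sp");

  unsigned long stack_pointer() {
    return current_sp;
  }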

llvm-svn: 208104
2014-05-06 16:51:25 +00:00
Eric Christopher
904df1ca24 Fix typo (also tab character).
llvm-svn: 208001
2014-05-05 21:40:41 +00:00
Weiming Zhao
3625856a33 [ARM64] Prevent bit extraction from being adjusted by a following shift
For a pattern like ((x >> C1) & Mask) << C2, the DAG combiner may convert it
into (x >> (C1-C2)) & (Mask << C2), which makes pattern matching of ubfx
more difficult.
For example:
Given
  %shr = lshr i64 %x, 4
  %and = and i64 %shr, 15
  %arrayidx = getelementptr inbounds [8 x [64 x i64]]* @arr, i64 0, i64 2, i64 %and
  %0 = load i64* %arrayidx
With current shift folding, it takes 3 instrs to compute base address:
  lsr x8, x0, #1
  and x8, x8, #0x78
  add x8, x9, x8

If using ubfx, it only needs 2 instrs:
  ubfx  x8, x0, #4, #4
  add x8, x9, x8, lsl #3

This fixes bug 19589

llvm-svn: 207702
2014-04-30 21:07:24 +00:00
Joerg Sonnenberger
1b6e6664bd Fix comment
llvm-svn: 207429
2014-04-28 18:11:51 +00:00
Benjamin Kramer
163df6bc62 DAGCombiner: Turn divs of vector splats into vectorized multiplications.
Otherwise the legalizer would just scalarize everything. Support for
mulhi in the targets isn't that great yet, so on most targets we get
exactly the same scalarized output. Add a test for x86 vector udiv.

I had to disable the mulhi nodes on ARM because there aren't any patterns
for them. As far as I know, ARM has instructions for getting the high part of
a multiply, so this should be fixed.
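
For reference, a hedged scalar illustration of the identity the combiner
applies lane-wise (the constant is the standard divide-by-3 magic number, not
taken from the patch):

  #include <cassert>
  #include <cstdint>
  #include <initializer_list>

  uint32_t div3(uint32_t x) {
    // 0xAAAAAAAB == ceil(2^33 / 3); the high part of the 64-bit product,
    // shifted, reproduces x / 3 exactly for every 32-bit x. This is the
    // mulhi + shift sequence a vectorized expansion relies on.
    return (uint32_t)(((uint64_t)x * 0xAAAAAAABu) >> 33);
  }

  int main() {
    for (uint32_t x : {0u, 1u, 2u, 3u, 100u, 0xFFFFFFFFu})
      assert(div3(x) == x / 3);
  }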

llvm-svn: 207315
2014-04-26 12:06:28 +00:00
Michael Zolotukhin
1c3340dd78 Revert r206749 till a final decision about the intrinsics is made.
llvm-svn: 207313
2014-04-26 09:56:41 +00:00
Evgeniy Stepanov
c242bd4b23 Create MCTargetOptions.
For now it contains a single flag, SanitizeAddress, which enables
AddressSanitizer instrumentation of inline assembly.

Patch by Yuri Gorshenin.

llvm-svn: 206971
2014-04-23 11:16:03 +00:00
Yi Jiang
b1c450606d ARM64: Combine shifts and uses from different basic blocks into a bit-extract instruction
llvm-svn: 206774
2014-04-21 19:34:27 +00:00
Michael Zolotukhin
c7f992f9a3 Reapply r206732. This time without optimization of branches.
llvm-svn: 206749
2014-04-21 12:01:33 +00:00
Chandler Carruth
164ee32140 Revert r206732 which is causing llc to crash on most of the build bots.
Original commit message:
  Implement builtins for safe division: safe.sdiv.iN, safe.udiv.iN,
  safe.srem.iN, safe.urem.iN (iN = i8, i16, i32, or i64).

llvm-svn: 206735
2014-04-21 07:11:15 +00:00
Michael Zolotukhin
f5ebd83e24 Implement builtins for safe division: safe.sdiv.iN, safe.udiv.iN, safe.srem.iN,
safe.urem.iN (iN = i8, i16, i32, or i64).

llvm-svn: 206732
2014-04-21 05:33:09 +00:00
Yaron Keren
407a465a3d Patch by Vadim Chugunov
Win64 stack unwinder gets confused when execution flow "falls through" after
a call to a 'noreturn' function. This fixes the "missing epilogue" problem by
emitting a trap instruction for IR 'unreachable' on x86_64-pc-windows.

A secondary use for it would be for anyone wanting to make double-sure that
'noreturn' functions, indeed, do not return.
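
A hedged sketch of the scenario (function names invented for the example):

  [[noreturn]] void fatal(const char *msg);

  void die(const char *msg) {
    fatal(msg);
    // IR 'unreachable' follows the call. Without the trap, 'call fatal' can
    // be the last instruction of 'die', so the return address pushed by the
    // call points past the end of the function and the Win64 unwinder cannot
    // match it to die's unwind info. Lowering 'unreachable' to a trap
    // (e.g. ud2) keeps that address inside the function.
  }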

llvm-svn: 206684
2014-04-19 13:47:43 +00:00
Tim Northover
fa11ed01b6 Atomics: promote ARM's IR-based atomics pass to CodeGen.
Still only 32-bit ARM using it at this stage, but the promotion allows
direct testing via opt and is a reasonably self-contained patch on the
way to switching ARM64.

At this point, other targets should be able to make use of it without
too much difficulty if they want. (See ARM64 commit coming soon for an
example).

llvm-svn: 206485
2014-04-17 18:22:47 +00:00
Craig Topper
c2260fc0ab [C++11] More 'nullptr' conversion. In some cases just using a boolean check instead of comparing to nullptr.
llvm-svn: 206252
2014-04-15 06:32:26 +00:00
Craig Topper
30281a67fb [C++11] More 'nullptr' conversion. In some cases just using a boolean check instead of comparing to nullptr.
llvm-svn: 206142
2014-04-14 00:51:57 +00:00
Benjamin Kramer
f6c0615b06 Retire llvm::array_endof in favor of non-member std::end.
While there make array_lengthof constexpr if we have support for it.

llvm-svn: 206112
2014-04-12 16:15:53 +00:00
Benjamin Kramer
ca4efc2423 Make doxygen comment match the declaration.
Found by -Wdocumentation.

llvm-svn: 206076
2014-04-11 21:58:11 +00:00
Tom Stellard
be787d15d0 SelectionDAG: Factor ISD::MUL lowering code out of DAGTypeLegalizer
This code has been moved to a new function in the TargetLowering
class called expandMUL().  The purpose of this is to be able
to share lowering code between the SelectionDAGLegalize and
DAGTypeLegalizer classes.

No functionality change intended.

llvm-svn: 206036
2014-04-11 16:11:58 +00:00
Reid Kleckner
f99741400f Move the segmented stack switch to a function attribute
This removes the -segmented-stacks command line flag in favor of a
per-function "split-stack" attribute.

Patch by Luqman Aden and Alex Crichton!

llvm-svn: 205997
2014-04-10 22:58:43 +00:00
Matt Arsenault
7b6a70a9cf Add DAG parameter to ComputeNumSignBitsForTargetNode
This way, you can check the number of sign bits in the
operands. The depth parameter it already has is pretty useless
without this.
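
A sketch of how a target override might use the new parameter (the target
node and exact signature are assumptions based on this description):

  unsigned MyTargetLowering::ComputeNumSignBitsForTargetNode(
      SDValue Op, const SelectionDAG &DAG, unsigned Depth) const {
    switch (Op.getOpcode()) {
    case MyISD::CMOV_LIKE:
      // Now possible: the result has at least as many sign bits as the
      // smaller count of its two value operands.
      return std::min(DAG.ComputeNumSignBits(Op.getOperand(0), Depth + 1),
                      DAG.ComputeNumSignBits(Op.getOperand(1), Depth + 1));
    default:
      return 1; // only the sign bit itself is known
    }
  }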

llvm-svn: 205649
2014-04-04 20:13:13 +00:00
Craig Topper
694437e2ef Make consistent use of MCPhysReg instead of uint16_t throughout the tree.
llvm-svn: 205610
2014-04-04 05:16:06 +00:00
Jim Grosbach
f349849199 Simplify resolveFrameIndex() signature.
Just pass a MachineInstr reference rather than an MBB iterator.
Creating a MachineInstr& is the first thing every implementation did
anyway.

llvm-svn: 205453
2014-04-02 19:28:18 +00:00
Matt Arsenault
8f25a008a2 Add helpers for checking if a value is a target boolean constant.
llvm-svn: 205335
2014-04-01 18:13:22 +00:00
Matt Arsenault
5c7af600db Change shouldSplitVectorElementType to better match the description.
Pass the entire vector type, and not just the element.

llvm-svn: 205247
2014-03-31 20:54:58 +00:00
Hal Finkel
5ecd959a9e Add a TLI hook to control when BUILD_VECTOR might be expanded using shuffles
There are two general methods for expanding a BUILD_VECTOR node:
  1. Use SCALAR_TO_VECTOR on the defined scalar values and then shuffle
     them together.
  2. Build the vector on the stack and then load it.

Currently, we use a fixed heuristic: If there are only one or two unique
defined values, then we attempt an expansion in terms of SCALAR_TO_VECTOR and
vector shuffles (provided that the required shuffle mask is legal). Otherwise,
always expand via the stack. Even when SCALAR_TO_VECTOR is not legal, this
can still be a good idea depending on what tricks the target can play when
lowering the resulting shuffle. If the target can't do anything special,
however, and if SCALAR_TO_VECTOR is expanded via the stack, this heuristic
leads to sub-optimal code (two stack loads instead of one).

Because only the target knows whether the SCALAR_TO_VECTORs and shuffles for a
build vector of a particular type are likely to be optimal, this adds a new
TLI function: shouldExpandBuildVectorWithShuffles which takes the vector type
and the count of unique defined values. If this function returns true, then
method (1) will be used, subject to the constraint that all of the necessary
shuffles are legal (as determined by isShuffleMaskLegal). If this function
returns false, then method (2) is always used.

This commit does not enhance the current code to support expanding a
build_vector with more than two unique values using shuffles, but I'll commit
an implementation of the more-general case shortly.
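
A hedged sketch of a target opting in (the hook name and parameters follow
the description above; the body and thresholds are illustrative):

  bool MyTargetLowering::shouldExpandBuildVectorWithShuffles(
      EVT VT, unsigned DefinedValues) const {
    // Prefer SCALAR_TO_VECTOR + shuffles (method 1) when there are few
    // unique values and the vector is small enough that the needed shuffles
    // stay cheap; otherwise use the stack expansion (method 2).
    return DefinedValues <= 2 && VT.getVectorNumElements() <= 8;
  }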

llvm-svn: 205230
2014-03-31 17:48:10 +00:00
Tim Northover
2f13163a84 ARM64: initial backend import
This adds a second implementation of the AArch64 architecture to LLVM,
accessible in parallel via the "arm64" triple. The plan over the
coming weeks & months is to merge the two into a single backend,
during which time thorough code review should naturally occur.

Everything will be easier with the target in-tree though, hence this
commit.

llvm-svn: 205090
2014-03-29 10:18:08 +00:00
Tim Northover
45e634dadd CodeGenPrep: wrangle IR to exploit AArch64 tbz/tbnz inst.
Given IR like:
    %bit = and %val, #imm-with-1-bit-set
    %tst = icmp %bit, 0
    br i1 %tst, label %true, label %false

some targets can emit just a single instruction (tbz/tbnz in the
AArch64 case). However, with ISel acting at the basic-block level, all
three instructions need to be together for this to be possible.

This adds another transformation to CodeGenPrep to expose these
opportunities, if targets opt in via the hook.

llvm-svn: 205086
2014-03-29 08:22:29 +00:00
Manman Ren
8d0a571c07 Provide a target override for the cost of using a callee-saved register
for the first time.

Thanks Andy for the discussion.
rdar://16162005

llvm-svn: 204979
2014-03-27 23:10:04 +00:00
David Blaikie
7bf1eb599f DebugInfo: TargetOptions/MCAsmInfo support for compressed debug info sections
llvm-svn: 204957
2014-03-27 20:45:41 +00:00
Renato Golin
748bf42a3e Change @llvm.clear_cache default to call rt-lib
After some discussion on IRC, emitting a call to the library function seems
like a better default, since it will move from a compiler internal error to
a linker error that the user can work around until LLVM is fixed.

I'm also adding a note on the responsibility of the user to confirm that
the cache was cleared on platforms where nothing is done.

llvm-svn: 204806
2014-03-26 14:01:32 +00:00
Renato Golin
2c1112ea41 Add @llvm.clear_cache builtin
Implementing the LLVM part of the call to __builtin___clear_cache
which translates into an intrinsic @llvm.clear_cache and is lowered
by each target, either to a call to __clear_cache or nothing at all
in case the caches are unified.

Updating LangRef and adding some tests for the implemented architectures.
Other targets will have to implement the hook if this builtin
has to be compiled for them, since the default behaviour is to bail
out as unimplemented.

A Clang patch is required for the builtin to be lowered into the
llvm intrinsic. This will be done next.
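
A hedged usage example of the builtin being lowered (the JIT-style buffer
handling is illustrative; __builtin___clear_cache itself is the existing
GCC/Clang builtin):

  #include <cstring>

  void publish_code(char *exec_buf, const char *code, unsigned size) {
    std::memcpy(exec_buf, code, size);                  // write generated code
    __builtin___clear_cache(exec_buf, exec_buf + size); // -> @llvm.clear_cache,
                                                        // then __clear_cache or
                                                        // nothing, per target
  }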

llvm-svn: 204802
2014-03-26 12:52:28 +00:00
Hal Finkel
68c9b6839e [TableGen] Optionally forbid overlap between named and positional operands
There are currently two schemes for mapping instruction operands to
instruction-format variables for generating the instruction encoders and
decoders for the assembler and disassembler respectively: a) to map by name and
b) to map by position.

In the long run, we'd like to remove the position-based scheme and use only
name-based mapping. Unfortunately, the name-based scheme currently cannot deal
with complex operands (those with suboperands), and so we currently must use
the position-based scheme for those. On the other hand, the position-based
scheme cannot deal with (register) variables that are split into multiple
ranges. An upcoming commit to the PowerPC backend (adding VSX support) will
require this capability. While we could teach the position-based scheme to
handle that, since we'd like to move away from the position-based mapping
generally, it seems silly to teach it new tricks now. What makes more sense is
to allow for partial transitioning: use the name-based mapping when possible,
and only use the position-based scheme when necessary.

Now the problem is that mixing the two sensibly was not possible: the
position-based mapping would map based on position, but would not skip those
variables that were mapped by name. Instead, the two sets of assignments would
overlap. However, I cannot simply change the current behavior, because there
are some backends that rely on it [I think mistakenly, but I'll send a message
to llvmdev about that]. So I've added a new TableGen bit variable:
noNamedPositionallyEncodedOperands, that can be used to cause the
position-based mapping to skip variables mapped by name.

llvm-svn: 203767
2014-03-13 07:57:54 +00:00
Patrik Hagglund
f6f25d32ac Replace '#include ValueTypes.h' with forward declarations.
In some cases the include is pushed "downstream" (or removed if
unused).

llvm-svn: 203644
2014-03-12 08:00:24 +00:00
Benjamin Kramer
c4a4a8061a Remove copy ctors that did the same thing as the default one.
The code added nothing but potentially disabled move semantics and made
types non-trivially copyable.

llvm-svn: 203563
2014-03-11 11:32:49 +00:00
Rafael Espindola
cb9ca86245 Replace PROLOG_LABEL with a new CFI_INSTRUCTION.
The old system was fairly convoluted:
* A temporary label was created.
* A single PROLOG_LABEL was created with it.
* A few MCCFIInstructions were created with the same label.

The semantics were that the cfi instructions were mapped to the PROLOG_LABEL
via the temporary label. The output position was that of the PROLOG_LABEL.
The temporary label itself was used only for doing the mapping.

The new CFI_INSTRUCTION has a 1:1 mapping to MCCFIInstructions and points to
one by holding an index into the CFI instructions of this function.

I did consider removing MMI.getFrameInstructions completely and having
CFI_INSTRUCTION own an MCCFIInstruction, but MCCFIInstructions have
non-trivial constructors and destructors and are somewhat big, so this setup
is probably better.

The net result is that we don't create temporary labels that are never used.

llvm-svn: 203204
2014-03-07 06:08:31 +00:00
Rafael Espindola
0bdff3f258 clang-format a bit of code to make the next patch easier to read.
llvm-svn: 203203
2014-03-07 05:32:03 +00:00
Rafael Espindola
fe5dfa44c9 Remove shouldEmitUsedDirectiveFor.
Clang now uses llvm.compiler.used for these cases.

llvm-svn: 203174
2014-03-06 22:47:08 +00:00
Craig Topper
a3683ec835 [C++11] Add 'override' keyword to virtual methods that override their base class.
llvm-svn: 202953
2014-03-05 09:10:37 +00:00
Chandler Carruth
cfb81122cc [Modules] Move CallSite into the IR library where it belongs. It is
abstracting between a CallInst and an InvokeInst, both of which are IR
concepts.

llvm-svn: 202816
2014-03-04 11:01:28 +00:00
Hal Finkel
593fc5537a Add an OutPatFrag TableGen class
Unfortunately, it is currently impossible to use a PatFrag as part of an output
pattern (the part of the pattern that has instructions in it) in TableGen.
Looking at the current implementation, this was clearly intended to work (there
is already code in place to expand patterns in the output DAG), but is
currently broken by the baked-in type-checking assumption and the order in which
the pattern fragments are processed (output pattern fragments need to be
processed after the instruction definitions are processed).

Fixing this is fairly simple, but requires some way of differentiating output
patterns from the existing input patterns. The simplest way to handle this
seems to be to create a subclass of PatFrag, and so that's what I've done here.

As a simple example, this allows us to write:

def crnot : OutPatFrag<(ops node:$in),
                       (CRNOR $in, $in)>;

def       : Pat<(not i1:$in),
                (crnot $in)>;

which captures the core use case: handling of repeated subexpressions inside
of complicated output patterns.

This will be used by an upcoming commit to the PowerPC backend.

llvm-svn: 202450
2014-02-28 00:26:56 +00:00
Andrew Trick
a8fdecaea7 Provide a target override for the latest regalloc heuristic.
This is a temporary workaround for native arm linux builds:
PR18996: Changing regalloc order breaks "lencod" on native arm linux builds.

llvm-svn: 202433
2014-02-27 21:37:33 +00:00
Andrew Trick
70f699d488 Drive-by comment fix. This regalloc comment was not accurate.
llvm-svn: 202432
2014-02-27 21:37:30 +00:00
Rafael Espindola
8b43142712 Make DisableIntegratedAS a TargetOption.
This replaces the old NoIntegratedAssembler with a TargetOption. This is
more flexible and will be used to forward clang's -no-integrated-as option.

llvm-svn: 201836
2014-02-21 03:13:54 +00:00
Rafael Espindola
60133f3afe move getNameWithPrefix and getSymbol to TargetMachine.
TargetLoweringBase is implemented in CodeGen, so before this patch we had
a dependency from Target to CodeGen. This would show up as a link failure of
llvm-stress when building with -DBUILD_SHARED_LIBS=ON.

This fixes pr18900.

llvm-svn: 201711
2014-02-19 20:30:41 +00:00
Rafael Espindola
aea6192f20 Add back r201608, r201622, r201624 and r201625
r201608 made llvm correctly handle private globals with MachO. r201622 fixed
a bug in it and r201624 and r201625 were changes for using private linkage,
assuming that llvm would do the right thing.

They all got reverted because r201608 introduced a crash in LTO. This patch
includes a fix for that. The issue was that TargetLoweringObjectFile now has
to be initialized before we can mangle names of private globals. This is
trivially true during the normal codegen pipeline (the asm printer does it),
but LTO has to do it manually.

llvm-svn: 201700
2014-02-19 17:23:20 +00:00
Daniel Jasper
bf4e7d8ac3 Revert r201622 and r201608.
This causes the LLVMgold plugin to segfault. More information is in the
replies to r201608.

llvm-svn: 201669
2014-02-19 12:26:01 +00:00