mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-20 19:42:54 +02:00
Commit Graph

85 Commits

Author SHA1 Message Date
David Majnemer
a0b6521d43 [WinEH] Make FuncletLayout more robust against catchret
Catchret transfers control from a catch funclet to an earlier funclet.
However, it is not completely clear which funclet the catchret target is
part of.  Make this clear by stapling the catchret target's funclet
membership onto the CATCHRET SDAG node.

llvm-svn: 249052
2015-10-01 18:44:59 +00:00
Reid Kleckner
5370cf8a53 [WinEH] Make funclet return instrs pseudo instrs
This makes catchret look more like a branch, and less like a weird use
of BlockAddress. It also lets us get away from
llvm.x86.seh.restoreframe, which relies on the old parentfpoffset label
arithmetic.

llvm-svn: 247936
2015-09-17 20:43:47 +00:00
Ahmed Bougacha
d90c25bcb0 [CodeGen] Make x86 nontemporal store patfrags generic. NFC.
To be used by other targets.

llvm-svn: 247225
2015-09-10 00:53:15 +00:00
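A minimal sketch of what a target-independent nontemporal-store PatFrag of the kind described above looks like; this is an approximation, not the verbatim in-tree definition:

def nontemporalstore : PatFrag<(ops node:$val, node:$ptr),
                               (store node:$val, node:$ptr), [{
  // Only match stores flagged as nontemporal.
  return cast<StoreSDNode>(N)->isNonTemporal();
}]>;
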
Reid Kleckner
1f54bd38ce [WinEH] Add some support for code generating catchpad
We can now run 32-bit programs with empty catch bodies.  The next step
is to change PEI so that we get funclet prologues and epilogues.

llvm-svn: 246235
2015-08-27 23:27:47 +00:00
James Molloy
cf490f8ffc Add new ISD nodes: ISD::FMINNAN and ISD::FMAXNAN
The intention is for these to be counterparts to ISD::FMINNUM/FMAXNUM,
differing only in how NaNs are treated. FMINNUM returns the non-NaN
input (when given one NaN and one non-NaN), whereas FMINNAN returns the
NaN input instead.

This patch includes support for scalarizing, widening and splitting
vectors, but not expansion or softening. The reason is that these
should never be needed - FMINNAN nodes are only going to be created
in one place (SDAGBuilder::visitSelect) and there we'll check if the
node is legal or custom. I could preemptively add expand and soften
code, but I'm fairly opposed to adding code I can't test. It's bad
enough I can't create tests with this patch, but at least this code
will be exercised by the ARM and AArch64 backends fairly shortly.

llvm-svn: 244581
2015-08-11 09:13:05 +00:00
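Assuming the new nodes mirror the existing fminnum/fmaxnum definitions in TargetSelectionDAG.td, the TableGen side is roughly the following sketch (not the verbatim patch):

def fminnan : SDNode<"ISD::FMINNAN", SDTFPBinOp>;
def fmaxnan : SDNode<"ISD::FMAXNAN", SDTFPBinOp>;
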
Igor Breger
7ea6c4cce1 AVX-512: Implemented encoding, DAG lowering and intrinsics for Integer Truncate with/without saturation
Added tests for DAG lowering, encoding and intrinsics

Differential Revision: http://reviews.llvm.org/D11218

llvm-svn: 243122
2015-07-24 17:24:15 +00:00
Chandler Carruth
3b9bc26920 Revert r242990: "AVX-512: Implemented encoding , DAG lowering and ..."
This commit broke the build. Numerous build bots broken, and it was
blocking my progress so reverting.

It should be trivial to reproduce -- enable the BPF backend and it
should fail when running llvm-tblgen.

llvm-svn: 242992
2015-07-23 08:03:44 +00:00
Igor Breger
8ded9931fe AVX-512: Implemented encoding, DAG lowering and intrinsics for Integer Truncate with/without saturation
Added tests for DAG lowering, encoding and intrinsics

Differential Revision: http://reviews.llvm.org/D11218

llvm-svn: 242990
2015-07-23 07:39:21 +00:00
James Molloy
fbd5dd7f96 [Codegen] Add intrinsics 'absdiff' and corresponding SDNodes for absolute difference operation
This adds new intrinsics "*absdiff" for absolute difference ops, to facilitate efficient code generation for the "sum of absolute differences" operation.
The patch also introduces the corresponding SDNodes and basic legalization support. Sanity of the generated code is tested on X86.

This is the first of three patches.

Patch by Shahid Asghar-ahmad!

llvm-svn: 242409
2015-07-16 15:22:46 +00:00
Rafael Espindola
49ad24ce06 Avoid a Symbol -> Name -> Symbol conversion.
Before this we were producing a TargetExternalSymbol from a MCSymbol.
That meant extracting the symbol name and fetching the symbol again
down the pipeline.

This patch adds a DAG.getMCSymbol that lets the MCSymbol pass unchanged on the
DAG.

Doing so removes the need for MO_NOPREFIX and fixes the root cause of pr23900,
allowing r240130 to be committed again.

llvm-svn: 240300
2015-06-22 17:46:53 +00:00
James Molloy
64b6b65ee6 Add SDNodes for umin, umax, smin and smax.
This adds new SDNodes for signed/unsigned min/max. These nodes are built from
select/icmp pairs matched at SDAGBuilder stage.

This patch adds the nodes, as well as legalization support and sets them to
be "expand" for all targets.

NFC for now; this will be tested when I switch AArch64 to using these new
nodes.

llvm-svn: 237423
2015-05-15 09:03:15 +00:00
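The TargetSelectionDAG.td additions for the new min/max nodes are roughly the following (a sketch; additional node properties may apply):

def smin : SDNode<"ISD::SMIN", SDTIntBinOp>;
def smax : SDNode<"ISD::SMAX", SDTIntBinOp>;
def umin : SDNode<"ISD::UMIN", SDTIntBinOp>;
def umax : SDNode<"ISD::UMAX", SDTIntBinOp>;
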
Elena Demikhovsky
201b5c4641 Masked gather and scatter - added DAGCombine visitors
and AVX-512 instruction selection patterns.
All other patches, including tests, will follow.

http://reviews.llvm.org/D7665

llvm-svn: 236211
2015-04-30 08:38:48 +00:00
Craig Topper
486d07b3d8 [TableGen] Add support for constraining a vector type in a pattern to have a specific element type, and for constraining a vector type to have the same number of elements as another vector type. This is useful for AVX-512 mask operations, so that we can relate the mask type to the type of the other arguments.
llvm-svn: 231356
2015-03-05 07:11:34 +00:00
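As an illustration of the two new constraints (the profile name and operand layout below are hypothetical), a mask operand can now be forced to have i1 elements and the same element count as the result:

def SDTX86MaskedOp : SDTypeProfile<1, 3, [
  SDTCVecEltisVT<1, i1>,        // operand 1 is a vector of i1 (the mask)
  SDTCisSameNumEltsAs<0, 1>,    // the mask has as many elements as the result
  SDTCisSameAs<0, 2>, SDTCisSameAs<0, 3>
]>;
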
Elena Demikhovsky
9eda5391f2 Reverted 230471 - gather/scatter handling in TableGen.
llvm-svn: 230892
2015-03-01 08:23:41 +00:00
Elena Demikhovsky
0e7ac15634 AVX-512: Gather and Scatter patterns
Gather and scatter instructions additionally write to one of the source operands - the mask register.
In this case gather has two destination values - the loaded value and the mask.
Until now we did not support a code generation pattern for gather - the instruction was generated
from the intrinsic only, and the machine node was hardcoded.
When we introduce the masked_gather node, we need to select the instruction automatically,
in the standard way.
I added a flag "hasTwoExplicitDefs" that allows handling two destination operands.

(Some code in the X86InstrFragmentsSIMD.td is commented out, just to split one big
patch in many small patches)

llvm-svn: 230471
2015-02-25 09:46:31 +00:00
Matt Arsenault
79c7d059c9 Add generic fmad DAG node.
This allows the FMA-forming combines to be shared with instructions
that have the same semantics as a separate multiply and add.

This is expanded by default and only formed post-legalization,
so it shouldn't have much impact on targets that do not want it.

llvm-svn: 230070
2015-02-20 22:10:33 +00:00
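The generic node definition is along these lines, mirroring the existing fma node (a sketch; the output instruction name in the pattern is hypothetical):

def fmad : SDNode<"ISD::FMAD", SDTFPTernaryOp>;

// A target that wants FMAD marks it Legal and writes a pattern, e.g.:
def : Pat<(fmad f32:$a, f32:$b, f32:$c), (MY_MADD $a, $b, $c)>;
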
Matt Arsenault
1c62631bbb Add addrspacecast node to tablegen
The node is still defined oddly, in that the address spaces
are not operands and are not accessible from TableGen,
but as-is it can now be used to write a ComplexPattern
with an addrspacecast root node.

llvm-svn: 228270
2015-02-05 03:35:34 +00:00
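A sketch of the kind of use this enables; the ComplexPattern name and selection function below are hypothetical:

// Match an address produced by an addrspacecast and let C++ selection
// code (SelectCastedAddr) inspect the source/destination address spaces.
def CastedAddr : ComplexPattern<iPTR, 1, "SelectCastedAddr", [addrspacecast]>;
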
Elena Demikhovsky
befed29343 Masked Load / Store Intrinsics - the CodeGen part.
I'm recommiting the codegen part of the patch.
The vectorizer part will be send to review again.

Masked Vector Load and Store Intrinsics.
Introduced new target-independent intrinsics in order to support masked vector loads and stores.
The loop vectorizer optimizes loops containing conditional memory accesses by generating these
intrinsics for existing targets AVX2 and AVX-512. The vectorizer asks the target about
availability of masked vector loads and stores.
Added SDNodes for masked operations and lowering patterns for X86 code generator.
Examples:
declare <16 x i32> @llvm.masked.load.v16i32(i8* %addr, <16 x i32> %passthru, i32 4 /* align */, <16 x i1> %mask)
declare void @llvm.masked.store.v8f64(i8* %addr, <8 x double> %value, i32 4, <8 x i1> %mask)

Scalarizer for other targets (not AVX2/AVX-512) will be done in a separate patch.

http://reviews.llvm.org/D6191

llvm-svn: 223348
2014-12-04 09:40:44 +00:00
Duncan P. N. Exon Smith
73ce6dbb2b Revert "Masked Vector Load and Store Intrinsics."
This reverts commit r222632 (and follow-up r222636), which caused a host
of LNT failures on an internal bot.  I'll respond to the commit on the
list with a reproduction of one of the failures.

Conflicts:
	lib/Target/X86/X86TargetTransformInfo.cpp

llvm-svn: 222936
2014-11-28 21:29:14 +00:00
Elena Demikhovsky
36a2243ab7 Masked Vector Load and Store Intrinsics.
Introduced new target-independent intrinsics in order to support masked vector loads and stores.
The loop vectorizer optimizes loops containing conditional memory accesses by generating these
intrinsics for existing targets AVX2 and AVX-512. The vectorizer asks the target about
availability of masked vector loads and stores.
Added SDNodes for masked operations and lowering patterns for X86 code generator.
Examples:
declare <16 x i32> @llvm.masked.load.v16i32(i8* %addr, <16 x i32> %passthru, i32 4 /* align */, <16 x i1> %mask)
declare void @llvm.masked.store.v8f64(i8* %addr, <8 x double> %value, i32 4, <8 x i1> %mask)

Scalarizer for other targets (not AVX2/AVX-512) will be done in a separate patch.

http://reviews.llvm.org/D6191

llvm-svn: 222632
2014-11-23 08:07:43 +00:00
Matt Arsenault
74dd906076 Add minnum / maxnum intrinsics
These are named following the IEEE-754 names for these functions,
rather than the libm fmin / fmax, to avoid possible ambiguities.
Some languages may implement something resembling fmin / fmax that
returns NaN if either operand is NaN, in order to propagate errors.
These intrinsics instead implement the IEEE-754 semantics of returning
the other operand when one operand is a NaN representing missing data.

llvm-svn: 220341
2014-10-21 23:00:20 +00:00
Hal Finkel
ed9bc0fad0 Add SDAG TableGen definitions for BR_CC
Add SelectionDAG TableGen definitions for BR_CC so that targets can instruction-select
BR_CC using TableGen pattern matching.

Patch by deadal nix.

llvm-svn: 218476
2014-09-25 23:34:18 +00:00
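The definitions are roughly as follows (a sketch of the TargetSelectionDAG.td additions):

def SDTBrCC : SDTypeProfile<0, 4, [   // cond code, lhs, rhs, dest block
  SDTCisVT<0, OtherVT>, SDTCisSameAs<1, 2>, SDTCisVT<3, OtherVT>
]>;
def brcc : SDNode<"ISD::BR_CC", SDTBrCC, [SDNPHasChain]>;
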
Matt Arsenault
0968995f30 Fix typos
llvm-svn: 213285
2014-07-17 17:50:22 +00:00
Tim Northover
eae1f1c8cc CodeGen: extend f16 conversions to permit types > float.
This makes the two intrinsics @llvm.convert.from.f16 and
@llvm.convert.to.f16 accept types other than simple "float". This is
only strictly needed for the truncate operation, since otherwise
double rounding occurs and there's no way to represent the strict IEEE
conversion. However, for symmetry we allow larger types in the extend
too.

During legalization, we can expand an "fp16_to_double" operation into
two extends for convenience, but abort when the truncate isn't legal. A new
libcall is probably needed here.

Even after this commit, various target tweaks are needed to actually use the
extended intrinsics. I've put these into separate commits for clarity, so there
are no actual tests of f64 conversion here.

llvm-svn: 213248
2014-07-17 10:51:23 +00:00
Tim Northover
2f13163a84 ARM64: initial backend import
This adds a second implementation of the AArch64 architecture to LLVM,
accessible in parallel via the "arm64" triple. The plan over the
coming weeks & months is to merge the two into a single backend,
during which time thorough code review should naturally occur.

Everything will be easier with the target in-tree though, hence this
commit.

llvm-svn: 205090
2014-03-29 10:18:08 +00:00
Hal Finkel
593fc5537a Add an OutPatFrag TableGen class
Unfortunately, it is currently impossible to use a PatFrag as part of an output
pattern (the part of the pattern that has instructions in it) in TableGen.
Looking at the current implementation, this was clearly intended to work (there
is already code in place to expand patterns in the output DAG), but is
currently broken by the baked-in type-checking assumption and the order in which
the pattern fragments are processed (output pattern fragments need to be
processed after the instruction definitions are processed).

Fixing this is fairly simple, but requires some way of differentiating output
patterns from the existing input patterns. The simplest way to handle this
seems to be to create a subclass of PatFrag, and so that's what I've done here.

As a simple example, this allows us to write:

def crnot : OutPatFrag<(ops node:$in),
                       (CRNOR $in, $in)>;

def       : Pat<(not i1:$in),
                (crnot $in)>;

which captures the core use case: handling of repeated subexpressions inside
of complicated output patterns.

This will be used by an upcoming commit to the PowerPC backend.

llvm-svn: 202450
2014-02-28 00:26:56 +00:00
Jim Grosbach
f2f14a2d43 X86: Resolve a long standing FIXME and properly isel pextr[bw].
Generalize the AArch64 .td nodes for AssertZext and AssertSext. Use
them to match the relevant pextr store instructions.

The test widen_load-2.ll requires a slight change because with the
stores gone, the remaining instructions are scheduled in a different
order.

Add test cases for SSE4 and AVX variants.

Resolves rdar://13414672.

Patch by Adam Nemet <anemet@apple.com>.

llvm-svn: 200957
2014-02-07 00:16:33 +00:00
Hao Liu
b344ca7aa3 Implement AArch64 NEON instructions in AdvSIMD (shift). About 24 shift instructions:
sshr, ushr, ssra, usra, srshr, urshr, srsra, ursra, sri, shl, sli, sqshlu, sqshl, uqshl, shrn, sqrshrun, sqshrn, uqshr, sqrshrn, uqrshrn, sshll, ushll
and 4 convert instructions:
scvtf, ucvtf, fcvtzs, fcvtzu

llvm-svn: 189925
2013-09-04 09:28:24 +00:00
Hal Finkel
bdc7aa32c1 Add ISD::FROUND for libm round()
All libm floating-point rounding functions, except for round(), had their own
ISD nodes. Recent PowerPC cores have an instruction for round(), and so here I'm
adding ISD::FROUND so that round() can be custom lowered as well.

For the most part, this is straightforward. I've added an intrinsic
and a matching ISD node, just like those for nearbyint() and friends. I've
named the SelectionDAG pattern frnd (because ISD::FP_ROUND has already
claimed fround).

This will be used by the PowerPC backend in a follow-up commit.

llvm-svn: 187926
2013-08-07 22:49:12 +00:00
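The node definition itself is a one-liner in TargetSelectionDAG.td, roughly as below; the instruction name in the example pattern is hypothetical:

def frnd : SDNode<"ISD::FROUND", SDTFPUnaryOp>;

// A target with a native round instruction can then just write a pattern:
def : Pat<(frnd f64:$src), (ROUND_F64 $src)>;
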
Owen Anderson
5a2e4b2e02 Create an FPOW SDNode opcode def in the target independent .td file rather than in a specific backend.
llvm-svn: 182450
2013-05-22 06:36:09 +00:00
Tim Northover
d12b2f24c8 Remove unused MEMBARRIER DAG node; it's been replaced by ATOMIC_FENCE.
llvm-svn: 179939
2013-04-20 12:32:17 +00:00
Jakob Stoklund Olesen
93b4cf4daf Remove extra MayLoad/MayStore flags from atomic_load/store.
These extra flags are not required to properly order the atomic
load/store instructions. SelectionDAGBuilder chains atomics as if they
were volatile, and SelectionDAG::getAtomic() sets the isVolatile bit on
the memory operands of all atomic operations.

The volatile bit is enough to order atomic loads and stores during and
after SelectionDAG.

This means we set mayLoad on atomic_load, mayStore on atomic_store, and
mayLoad+mayStore on the remaining atomic read-modify-write operations.

llvm-svn: 162733
2012-08-28 03:11:32 +00:00
Hal Finkel
aadd19de06 Add readcyclecounter lowering on PPC64.
On PPC64, this can be done with a simple TableGen pattern.
To enable this, I've added the (otherwise missing) readcyclecounter
SDNode definition to TargetSelectionDAG.td.

llvm-svn: 161302
2012-08-04 14:10:46 +00:00
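A sketch of the SDNode definition and the kind of PPC64 pattern described above; the PowerPC instruction name in the pattern is an assumption, not taken from the patch:

def readcyclecounter : SDNode<"ISD::READCYCLECOUNTER", SDTIntLeaf,
                              [SDNPHasChain, SDNPSideEffect]>;

// In the PowerPC backend, a pattern along these lines selects the node
// to a move-from-time-base instruction:
def : Pat<(readcyclecounter), (MFTB8)>;
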
Jim Grosbach
8a78dc2600 TableGen: Allow conditional instruction pattern in multiclass.
Define a 'null_frag' SDPatternOperator node which, if referenced in an
instruction Pattern, results in the pattern being collapsed as if
'[]' had been specified instead. This allows supporting a multiclass
definition where some instantiations have ISel patterns associated and
others do not.

For example,
multiclass myMulti<RegisterClass rc, SDPatternOperator OpNode = null_frag> {
  def _x : myI<(outs rc:$dst), (ins rc:$src), []>;
  def _r : myI<(outs rc:$dst), (ins rc:$src), [(set rc:$dst, (OpNode rc:$src))]>;
}

defm foo : myMulti<GRa, not>;
defm bar : myMulti<GRb>;

llvm-svn: 160333
2012-07-17 00:47:06 +00:00
Dan Gohman
cc1f60a86c Rename @llvm.debugger to @llvm.debugtrap.
llvm-svn: 156774
2012-05-14 18:58:10 +00:00
Dan Gohman
ed475ad173 Define a new intrinsic, @llvm.debugger. It will be similar to __builtin_trap(),
but it generates int3 on x86 instead of ud2.

llvm-svn: 156593
2012-05-11 00:19:32 +00:00
James Molloy
6cbdd131b3 Improve generated code for extending loads and some trunc stores on ARM.
Teach TargetSelectionDAG about lengthening loads for vector types and set v4i8 as legal. Allow FP_TO_UINT for v4i16 from v4i32.

llvm-svn: 150956
2012-02-20 09:24:05 +00:00
Chandler Carruth
e0484f6b37 Initial CodeGen support for CTTZ/CTLZ where a zero input produces an
undefined result. This adds new ISD nodes for the new semantics,
selecting them when the LLVM intrinsic indicates that the undef behavior
is desired. The new nodes expand trivially to the old nodes, so targets
don't actually need to do anything to support these new nodes besides
indicating that they should be expanded. I've done this for all the
operand types that I could figure out for all the targets. Owners of
various targets, please review and let me know if any of these are
incorrect.

Note that the expand behavior is *conservatively correct*, and exactly
matches LLVM's current behavior with these operations. Ideally this
patch will not change behavior in any way. For example the regtest suite
finds the exact same instruction sequences coming out of the code
generator. That's why there are no new tests here -- all of this is
being exercised by the existing test suite.

Thanks to Duncan Sands for reviewing the various bits of this patch and
helping me get the wrinkles ironed out with expanding for each target.
Also thanks to Chris for clarifying through all the discussions that
this is indeed the approach he was looking for. That said, there are
likely still rough spots. Further review much appreciated.

llvm-svn: 146466
2011-12-13 01:56:10 +00:00
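For reference, the new nodes are exposed to TableGen in essentially the same shape as the existing ctlz/cttz definitions (a sketch):

def ctlz_zero_undef : SDNode<"ISD::CTLZ_ZERO_UNDEF", SDTIntUnaryOp>;
def cttz_zero_undef : SDNode<"ISD::CTTZ_ZERO_UNDEF", SDTIntUnaryOp>;
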
Nadav Rotem
abb5bb41d4 CR fixes per Bruno's request.
Undo the changes from r139285, which added custom lowering for vselect.
Add TableGen lowering for vselect.

llvm-svn: 139479
2011-09-11 15:02:23 +00:00
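The TableGen-visible node that such lowering patterns are written against is roughly the following sketch:

def SDTVSelect : SDTypeProfile<1, 3, [
  SDTCisVec<0>, SDTCisInt<1>, SDTCisSameAs<0, 2>, SDTCisSameAs<2, 3>
]>;
def vselect : SDNode<"ISD::VSELECT", SDTVSelect>;
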
Eli Friedman
6f95a6ae1b Basic x86 code generation for atomic load and store instructions.
llvm-svn: 138478
2011-08-24 20:50:09 +00:00
Eli Friedman
842ea169de Code generation for 'fence' instruction.
llvm-svn: 136283
2011-07-27 22:21:52 +00:00
Cameron Zwarich
c40edcee8e Add an fma TableGen node.
llvm-svn: 134762
2011-07-08 22:49:34 +00:00
Bruno Cardoso Lopes
b6afc5168f Add one more argument to the prefetch intrinsic to indicate whether it's a data
or instruction cache access. Update the targets to match it and also teach
autoupgrade.

llvm-svn: 132976
2011-06-14 04:58:37 +00:00
Stuart Hastings
fdc9e4af68 FGETSIGN support for x86, using movmskps/pd. Will be enabled with a
patch to TargetLowering.cpp.  rdar://problem/5660695

llvm-svn: 132388
2011-06-01 04:39:42 +00:00
Eric Christopher
bea4df9acc Let the immediate leaf pattern take transforms, and switch the signed
immediate patterns in ARM to use it.

Handles rdar://9299434

llvm-svn: 130386
2011-04-28 05:49:04 +00:00
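A sketch of what this enables (the names here are hypothetical): an ImmLeaf can now carry an SDNodeXForm, so the matched integer can be rewritten before it is used in the output pattern.

// Rewrite a matched negative immediate as its (positive) negation.
def NegateImm : SDNodeXForm<imm, [{
  return CurDAG->getTargetConstant(-N->getSExtValue(), MVT::i32);
}]>;
def neg_imm : ImmLeaf<i32, [{ return Imm < 0; }], NegateImm>;
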
Chris Lattner
a473329704 Add a new bit that ImmLeafs can opt into, which allows them to duck out of
the generated FastISel.  X86 doesn't need to generate code to match ADD16ri8,
since ADD16ri will do just fine.  This is a small code-size win in the generated
instruction selector.

llvm-svn: 129692
2011-04-18 06:36:55 +00:00
Chris Lattner
7ca810dd13 since the VT is fixed for an ImmLeaf, there is no reason to expose it to the matching code.
llvm-svn: 129677
2011-04-17 22:17:27 +00:00
Chris Lattner
2d89feb795 now that predicates have a decent abstraction layer on them, introduce a new
kind of predicate: one that is specific to imm nodes.  The predicate function
specified here just checks an int64_t directly instead of messing around with
SDNodes.  The virtue of this is that it means that FastISel and other things
can reason about these predicates.

llvm-svn: 129675
2011-04-17 22:05:17 +00:00
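A minimal example of the new form versus the old PatLeaf style (the definition names are illustrative):

// New: the predicate sees the immediate value directly as an int64_t.
def immSExt8 : ImmLeaf<i32, [{ return isInt<8>(Imm); }]>;

// Old style: the predicate had to dig the value out of the SDNode.
def immSExt8_old : PatLeaf<(imm), [{
  return isInt<8>(N->getSExtValue());
}]>;
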
Chris Lattner
d776e76c44 Rework our internal representation of node predicates to expose more
structure and fix some fixmes.  We now have a TreePredicateFn class
that handles all of the decoding of these things.  This is an internal
cleanup that has no impact on the code generated by tblgen.

llvm-svn: 129670
2011-04-17 21:38:24 +00:00
David Greene
5c173a307b [AVX] Add INSERT_SUBVECTOR and support it on x86. This provides a
default implementation for x86, going through the stack in a similar
fashion to how the codegen implements BUILD_VECTOR.  Eventually this
will get matched to VINSERTF128 if AVX is available.

llvm-svn: 124307
2011-01-26 19:13:22 +00:00
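The generic node, as seen from TableGen, is roughly the following sketch (result = operand 1 with the subvector in operand 2 inserted at the constant element index in operand 3):

def SDTSubVecInsert : SDTypeProfile<1, 3, [
  SDTCisSubVecOfVec<2, 1>, SDTCisSameAs<0, 1>, SDTCisInt<3>
]>;
def insert_subvector : SDNode<"ISD::INSERT_SUBVECTOR", SDTSubVecInsert>;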