All of the cases were just appending from random access iterators to a
vector. Using insert/append grows the vector to the right size in one
step and moves the growing out of the loop. No intended functionality
change.
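As a hedged sketch of the pattern (the names here are invented for
illustration, not taken from the patch):

  #include <vector>

  // Illustrative only: appending a random-access range to a vector.
  void appendAll(std::vector<int> &Out, const int *Begin, const int *End) {
    // Before: one push_back per element, so the vector may reallocate
    // several times as it grows inside the loop.
    //   for (const int *I = Begin; I != End; ++I)
    //     Out.push_back(*I);

    // After: insert() can compute End - Begin up front for random access
    // iterators, reserving the final size once and copying in one step.
    Out.insert(Out.end(), Begin, End);
  }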
llvm-svn: 230845
The keys of the map are unique by pointer address, so there's no need
to use the llvm::less comparator. This allows us to use DenseMap
instead, which reduces tblgen time by 20% on my stress test.
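Roughly, the change has this shape (a sketch with placeholder types, not
the actual tblgen code):

  #include "llvm/ADT/DenseMap.h"
  #include <map>

  struct Record { unsigned ID; };

  // Before: an ordered map with an explicit pointer comparator.
  //   std::map<Record *, unsigned, llvm::less> ByRecord;

  // After: the keys are already unique by address and no ordering is
  // needed, so a hash map avoids the tree overhead entirely.
  llvm::DenseMap<Record *, unsigned> ByRecord;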
llvm-svn: 230769
Add `CHECK-SAME`, which requires that the pattern matches on the *same*
line as the previous `CHECK`/`CHECK-NEXT` -- in other words, no newline
is allowed in the skipped region. This is similar to `CHECK-NEXT`,
which requires exactly 1 newline in the skipped region.
My motivation is to simplify checking the long lines of LLVM assembly
for the new debug info hierarchy. This allows CHECK sequences like the
following:
CHECK: ![[REF]] = !SomeMDNode(
CHECK-SAME: file: ![[FILE:[0-9]+]]
CHECK-SAME: otherField: 93{{[,)]}}
which is equivalent to:
CHECK: ![[REF]] = !SomeMDNode({{.*}}file: ![[FILE:[0-9]+]]{{.*}}otherField: 93{{[,)]}}
While this example just has two fields, many nodes in debug info have
more than that. `CHECK-SAME` will keep the logic easy to follow.
Moreover, it enables interleaving `CHECK-NOT`s without allowing newlines.
Consider the following:
CHECK: ![[REF]] = !SomeMDNode(
CHECK-SAME: file: ![[FILE:[0-9]+]]
CHECK-NOT: unexpectedField:
CHECK-SAME: otherField: 93{{[,)]}}
CHECK-NOT: otherUnexpectedField:
CHECK-SAME: )
which doesn't seem to have an equivalent `CHECK` line.
llvm-svn: 230612
Gather and scatter instructions additionally write to one of their source
operands, the mask register. A gather thus has two destination values: the
loaded value and the mask.
Until now we did not support a codegen pattern for gather; the instruction
was generated only from the intrinsic, and the machine node was hardcoded.
Once we introduce the masked_gather node, we need to select the instruction
automatically, in the standard way.
I added a flag "hasTwoExplicitDefs" that allows handling two destination
operands.
(Some code in X86InstrFragmentsSIMD.td is commented out, just to split one
big patch into many small patches.)
llvm-svn: 230471
Everyone except R600 was manually passing the length of a static array
at each callsite, calculated in a variety of interesting ways. Far
easier to let ArrayRef handle that.
There should be no functional change, but out of tree targets may have
to tweak their calls as with these examples.
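A sketch of the shape of such a call-site tweak (the function and array
names are invented here):

  #include "llvm/ADT/ArrayRef.h"

  void emitOpcodes(llvm::ArrayRef<unsigned> Ops); // was (const unsigned *, unsigned)

  static const unsigned Opcodes[] = {10, 11, 12};

  void caller() {
    // Before: emitOpcodes(Opcodes, sizeof(Opcodes) / sizeof(Opcodes[0]));
    // After: ArrayRef deduces the length from the static array type itself.
    emitOpcodes(Opcodes);
  }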
llvm-svn: 230118
Parse (and write) symbolic constants in debug info `flags:` fields.
This prevents a readability (and CHECK-ability) regression with the new
debug info hierarchy.
Old (well, current) assembly, with pretty-printing:
!{!"...\\0016387", ...} ; ... [public] [rvalue reference]
Flags field without this change:
!MDDerivedType(flags: 16387, ...)
Flags field with this change:
!MDDerivedType(flags: DIFlagPublic | DIFlagRValueReference, ...)
As discussed in the review thread, this isn't a final state. Most of
these flags correspond to `DW_AT_` symbolic constants, and we might
eventually want to support arbitrary attributes in some form. However,
as it stands now, some of the flags correspond to other concepts (like
`FlagStaticMember`); until things are refactored this is the simplest
way to move forward without regressing assembly.
llvm-svn: 230111
Previously, subtarget features were a bitfield with the underlying type being uint64_t.
Since several targets (X86 and ARM, in particular) have hit or were very close to hitting this bound, switch the features to use a bitset.
No functional change.
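A minimal sketch of the representational change (the width and feature
index below are illustrative, not the values chosen in the patch):

  #include <bitset>

  enum { FeatureAVX512F = 70 }; // hypothetical feature index above 63

  // Before: uint64_t FeatureBits;  // caps a target at 64 features
  std::bitset<128> FeatureBits;     // room to grow past 64

  bool hasAVX512F() { return FeatureBits.test(FeatureAVX512F); }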
Differential Revision: http://reviews.llvm.org/D7065
llvm-svn: 229831
asm and port the mmx vector shuffle test to it.
Not thrilled with how it handles the stack manipulation logic, but I'm
much less bothered by that than I am by updating the test manually. =]
If anyone wants to teach the test checks management script about stack
adjustment patterns, that'd be cool too.
llvm-svn: 229268
is the default.
The lit.cfg files are not all valid Python3 and I've no idea if anyone
is really prepared to update them. The easiest way I know of to ensure
that this script uses Python 2 is to use 'python2.7' in the command. Mac
and Linux are definitely fine with this and I think other platforms will
be as well, but if anyone struggles with this set up and has better
ideas, let me know.
llvm-svn: 229244
Gather and Scatter are newly introduced intrinsics, coming after the recently implemented masked load and store.
This is the first patch for the Gather and Scatter intrinsics. It includes only the syntax, parsing, and verification.
Gather and Scatter intrinsics allow performing multiple memory accesses (read/write) in one vector instruction.
The intrinsics are not target specific and will have the following syntax:
Gather:
declare <16 x i32> @llvm.masked.gather.v16i32(<16 x i32*> <vector of ptrs>, i32 <alignment>, <16 x i1> <mask>, <16 x i32> <passthru>)
declare <8 x float> @llvm.masked.gather.v8f32(<8 x float*> <vector of ptrs>, i32 <alignment>, <8 x i1> <mask>, <8 x float> <passthru>)
Scatter:
declare void @llvm.masked.scatter.v8i32(<8 x i32> <vector value to be stored>, <8 x i32*> <vector of ptrs>, i32 <alignment>, <8 x i1> <mask>)
declare void @llvm.masked.scatter.v16i32(<16 x i32> <vector value to be stored>, <16 x i32*> <vector of ptrs>, i32 <alignment>, <16 x i1> <mask>)
Vector of ptrs - a set of source/destination addresses to load/store the value.
Mask - switches vector lanes on/off to prevent memory access for switched-off lanes.
The vector of ptrs, the value, and the mask should have the same vector width.
These are code examples where gather/scatter should be used and will allow vectorization of the functions:
;void foo1(int * restrict A, int * restrict B, int * restrict C) {
; for (int i=0; i<SIZE; i++) {
; A[i] = B[C[i]];
; }
;}
;void foo3(int * restrict A, int * restrict B) {
; for (int i=0; i<SIZE; i++) {
; A[B[i]] = i+5;
; }
;}
Tests will come in the following patches, with CodeGen and Vectorizer.
http://reviews.llvm.org/D7433
llvm-svn: 228521
llvm-mode was previously confused when variable names contained keywords.
This change ensures that keywords are only highlighted when they're standalone.
Patch by Wilfred Hughes!
llvm-svn: 228396
This is done in a slightly strange way, using a multiline RE instead of
looping over the lines. Suggestions welcome for a more pythonic way of
doing this, as long as it's reasonably fast.
llvm-svn: 228131
with 'stress' to indicate that the specific output isn't interesting and
relax them to only check the last instruction (a ret).
I've updated the one test case that really uses this to name the one
'stress_test' which was actually producing output we can directly check.
With this, the script doesn't introduce noise when run over the v16 test
file.
llvm-svn: 228033
Similar to the C++14 void specializations of these templates, useful as
a stop-gap until LLVM switches to '14.
Example use-cases in tblgen because I saw some functors that looked like
they could be simplified/refactored.
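For reference, the C++14 "void specialization" pattern being mirrored
looks roughly like this (a generic sketch, not the exact code added):

  #include <string>
  #include <utility>

  // A transparent functor: operator() is a template, so a single functor
  // type compares any pair of '<'-comparable types without naming them.
  struct less {
    template <typename A, typename B>
    bool operator()(A &&LHS, B &&RHS) const {
      return std::forward<A>(LHS) < std::forward<B>(RHS);
    }
  };

  bool demo(const std::string &S) {
    // Compares a std::string against a string literal directly, with no
    // temporary std::string constructed just to satisfy the comparator.
    return less()(S, "abc");
  }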
Reviewers: dexonsmith
Differential Revision: http://reviews.llvm.org/D7324
llvm-svn: 227828
The hot path through this region of code does lots of batch inserts into sets. By storing them as sorted arrays, we can defer the sorting to the end of the batch, which is dramatically more efficient. This reduces tblgen runtime by 25% on my worst-case target.
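The idiom, as a generic sketch (not the tblgen code itself):

  #include <algorithm>
  #include <vector>

  // A "set" kept as a sorted vector: append freely during a batch, then
  // sort and deduplicate once at the end, instead of paying per-insert
  // rebalancing and allocation costs in a node-based set.
  void batchInsert(std::vector<unsigned> &Set,
                   const std::vector<unsigned> &New) {
    Set.insert(Set.end(), New.begin(), New.end());
    std::sort(Set.begin(), Set.end());
    Set.erase(std::unique(Set.begin(), Set.end()), Set.end());
  }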
llvm-svn: 227682
This is a continuation of my prior work to move some of the inner workings of CodeGenRegister to use bit vectors when computing register units. This is highly beneficial to TableGen runtime on targets with large, dense register files. This patch represents a ~40% runtime reduction over and above my earlier improvement on a stress test of this case.
llvm-svn: 227678
For target descriptions with very large and very dense register files, TableGen
can take an extremely long time to run. This change makes a dent in that (~15%
in my measurements) by accelerating the single hottest operation with better data
structures.
I believe there's still a lot of room to make this even faster with more global
changes that require replacing some of the existing data structures in this area
with bit vectors, but that's a more involved change and I wanted to get this
simpler improvement in first.
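As a sketch of where this is heading (illustrative use of llvm::BitVector,
not this patch's code):

  #include "llvm/ADT/BitVector.h"

  // With each set of register units held as a bit vector sized to the
  // total number of units, the hot set-union operation becomes a
  // word-at-a-time OR rather than an element-wise sorted merge.
  void unionUnits(llvm::BitVector &Dest, const llvm::BitVector &Src) {
    Dest |= Src;
  }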
llvm-svn: 227562
derived classes.
Since global data alignment, layout, and mangling are often based on the
DataLayout, move it to the TargetMachine. This ensures that global
data is going to be laid out and mangled consistently if the subtarget
changes on a per-function basis. Prior to this, all targets(*) have
had subtarget-dependent code moved out and onto the TargetMachine.
*One target hasn't been migrated as part of this change: R600. The
R600 port has, as a subtarget feature, the size of pointers and
this affects global data layout. I've currently hacked in a FIXME
to enable progress, but the port needs to be updated to either pass
the 64-bitness to the TargetMachine, or fix the DataLayout to
avoid subtarget dependent features.
llvm-svn: 227113
In llvm-mode, with electric-pair-mode turned on, typing a literal '['
would print out '[[', and '(' would print a '(('. This was a very
annoying bug caused by overzealous syntax-table entries: the parens are
already part of the '(' and ')' class by default. Fix this.
While at it, notice that i32, i64, i1 etc. are not font-locked despite a
clear intent to do so. The issue is that regexp-opt doesn't accept
regular expressions. So, spell out the common literal integers with
different widths.
Differential Revision: http://reviews.llvm.org/D7036
llvm-svn: 226931
Make it clear that the "llvm.org" style is deriving from "gnu" style,
and use the c-mode-common-hook instead of c-mode-hook and c++-mode-hook.
Differential Revision: http://reviews.llvm.org/D7035
llvm-svn: 226861
Specifically, gc.result benefits from this greatly. Instead of:
gc.result.int.*
gc.result.float.*
gc.result.ptr.*
...
We now have a gc.result.* that can specialize to literally any type.
Differential Revision: http://reviews.llvm.org/D7020
llvm-svn: 226857
ELF linkers by default allow shared libraries to contain undefined references
and it is up to the dynamic linker to look for them.
On COFF and MachO, that is not the case.
This creates a situation where a .so might build on an ELF system, but the build
of the corresponding .dylib or .dll will fail.
This patch changes the cmake build to use -Wl,-z,defs when linking and updates
the dependencies so that -DBUILD_SHARED_LIBS=ON build still works.
llvm-svn: 226611
This patch was generated by a clang tidy checker that is being open sourced.
The documentation of that checker is the following:
/// The emptiness of a container should be checked using the empty method
/// instead of the size method. It is not guaranteed that size is a
/// constant-time function, and it is generally more efficient and also shows
/// clearer intent to use empty. Furthermore some containers may implement the
/// empty method but not implement the size method. Using empty whenever
/// possible makes it easier to switch to another container in the future.
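The transformation the checker performs, as a sketch:

  #include <list>

  bool isDone(const std::list<int> &Worklist) {
    // Before: return Worklist.size() == 0;
    // size() is not guaranteed constant time for every container (e.g.
    // std::list before C++11); empty() is, and it states the intent.
    return Worklist.empty();
  }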
Patch by Gábor Horváth!
llvm-svn: 226161
This adds support for creating an InstAlias with a negative immediate, i.e.:
def NOT : InstAlias<"not $dst, $src", (XORI GR32:$dst, GR32:$src, -1)>;
by resolving this problem:
RISCVGenAsmMatcher.inc:95:11: error: expected '= constant-expression' or end of enumerator definition
CVT_imm_-1,
^^^^^^^^^^
Patch by Jordy Potman, thanks!
llvm-svn: 226073
This adds assembly and bitcode support for `MDLocation`. The assembly
side is rather big, since this is the first `MDNode` subclass (that
isn't `MDTuple`). Part of PR21433.
(If you're wondering where the mountains of testcase updates are, we
don't need them until I update `DILocation` and `DebugLoc` to actually
use this class.)
llvm-svn: 225830
These intrinsics allow multiple functions to share a single stack
allocation from one function's call frame. The function with the
allocation may only perform one allocation, and it must be in the entry
block.
Functions accessing the allocation call llvm.recoverframeallocation with
the function whose frame they are accessing and a frame pointer from an
active call frame of that function.
These intrinsics are very difficult to inline correctly, so the
intention is that they be introduced rarely, or at least very late
during EH preparation.
Reviewers: echristo, andrew.w.kaylor
Differential Revision: http://reviews.llvm.org/D6493
llvm-svn: 225746
Instead, just present the command for committing it. This way,
the user can test the merge locally, resolve conflicts, etc.
before committing, which seems much safer to me.
llvm-svn: 225737
Summary: I think this is probably a bug, but I'm putting this up for review just to be sure. I think that `lit.util.capture` should decode the resulting string in the same way `lit.util.executeCommand` does.
Reviewers: ddunbar, EricWF
Reviewed By: EricWF
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D6769
llvm-svn: 225681
This adds two new fields to the RegisterOperand TableGen class:
string OperandNamespace = "MCOI";
string OperandType = "OPERAND_REGISTER";
These fields can be used to specify a target specific operand type,
which will be stored in the OperandType member of the MCOperandInfo
object.
This can be useful for targets that need to store some extra information
about operands that cannot be expressed using the target independent
types. For example, in the R600 backend, there are operands which
can take either registers or immediates and it is convenient to be able
to specify this in the TableGen definitions.
llvm-svn: 225661
This script is currently specific to x86 and limited to use with very
small regression or feature tests using 'llc' and 'FileCheck' in
a reasonably canonical way. It is in no way general purpose or robust at
this point. However, it works quite well for simple examples. Here is
the intended workflow:
- Make a change that requires updating N test files and M functions'
assertions within those files.
- Stash the change.
- Update those N test files' RUN-lines to look "canonical"[1].
- Refresh the FileCheck lines for either the entire file or select
functions by running this script.
- The script will parse the RUN lines and run the 'llc' binary you
give it according to each line, collecting the asm.
- It will then annotate each function with the appropriate FileCheck
comments to check every instruction from the start of the first
basic block to the last return.
- There will be numerous cases where the script either fails to remove
the old lines, or inserts checks which need to be manually edited,
but the manual edits tend to be deletions or replacements of
registers with FileCheck variables, which are fast manual edits.
- A common pattern is to have the script insert complete checking of
every instruction, and then edit it down to only check the relevant
ones.
- Be careful to do all of these cleanups though! The script is
designed to make transferring and formatting the asm output of llc
into a test case fast; it is *not* designed to be authoritative
about what constitutes a good test!
- Commit the nice fresh baseline of checks.
- Unstash your change and rebuild llc.
- Re-run the script to regenerate the FileCheck annotations.
- Remember to re-cleanup these annotations!!!
- Check the diff to make sure this is sane, checking the things you
expected it to, and check that the newly updated tests actually pass.
- Profit!
Also, I'm *terrible* at writing Python, and frankly I didn't spend a lot
of time making this script beautiful or well engineered. But it's useful
to me and may be useful to others so I thought I'd send it out.
http://reviews.llvm.org/D5546
llvm-svn: 225618
Propagate whether `MDNode`s are 'distinct' through the other types of IR
(assembly and bitcode). This adds the `distinct` keyword to assembly.
Currently, no one actually calls `MDNode::getDistinct()`, so these nodes
only get created for:
- self-references, which are never uniqued, and
- nodes whose operands are replaced that hit a uniquing collision.
The concept of distinct nodes is still not quite first-class, since
distinct-ness doesn't yet survive across `MapMetadata()`.
Part of PR22111.
llvm-svn: 225474
* Both files have valid package headers and footers (you can verify
with M-x checkdoc).
* Fixed style warnings generated by checkdoc.
* Fixed a byte-compiler warning in llvm-mode.el.
* Ensure that the modes are autoloaded, so users do not need to
(require 'llvm-mode) to use them.
Patch by Wilfred Hughes.
llvm-svn: 225356
Requires new AsmParserOperand types that detect 16-bit and 32/64-bit mode so that we choose the right instruction based on default sizing without predicates. This is necessary since predicates mess up the disassembler table building.
llvm-svn: 225256
This is necessary to allow the disassembler to be able to handle AdSize32 instructions in 64-bit mode when address size prefix is used.
Eventually we should probably also support 'addr32' and 'addr16' in the assembler to override the address size on some of these instructions. But for now we'll just use special operand types that will lookup the current mode size to select the right instruction.
llvm-svn: 225075
This removes a hardcoded list of instructions in the CodeEmitter. Eventually I intend to remove the predicates on the affected instructions since in any given mode two of them are valid if we supported addr32/addr16 prefixes in the assembler.
llvm-svn: 224809
Summary:
The following types can be encoded and decoded by the json library:
`dict`, `list`, `tuple`, `str`, `unicode`, `int`, `long`, `float`, `bool`, `NoneType`.
`JSONMetricValue` can be constructed with any of these types, and used as part of Test.Result.
This patch also adds a toMetricValue function that converts a value into a MetricValue.
Reviewers: ddunbar, EricWF
Reviewed By: EricWF
Subscribers: cfe-commits, llvm-commits
Differential Revision: http://reviews.llvm.org/D6576
llvm-svn: 224628
An instruction alias defined with InstAlias and an optional operand in the
middle of the AsmString field, "..${a} <operands>", would get the final
"}" printed in the instruction disassembly. This wouldn't happen if the optional
operand appeared as the last item in the AsmString which is how the current
backends avoided the problem.
There don't appear to be any tests for this part of Tablegen but it passes the
pre-commit tests. Manually tested the change by enabling the generic alias
printer in the ARM backend and checking the output.
Differential Revision: http://reviews.llvm.org/D6529
llvm-svn: 224348
On X86, the Intel asm parser tries to match all memory operand sizes when
none is explicitly specified. For LEA, which doesn't really have a memory
operand (just a pointer one), this results in multiple successful matches,
one for each memory size. There's no error because it's the same opcode, so
really, it's just one match. However, the tablegen'd matcher function
adds opcode/operands to the passed MCInst, and this results in multiple
duplicated operands.
This commit clears the MCInst in the tablegen'd matcher function.
We sometimes clear it when the match failed, so there's no expectation of
keeping the previous content anyway.
Differential Revision: http://reviews.llvm.org/D6670
llvm-svn: 224347
Clang's static analyzer found several potential cases of undefined
behavior, use of un-initialized values, and potentially null pointer
dereferences in tablegen, Support, MC, and ADT. This cleans them up
with specific assertions on the assumptions of the code.
llvm-svn: 224154
We were already requiring 2.5, which meant that people on old linux distros
had to upgrade anyway.
Requiring python 2.6 will make supporting 3.X easier as we can use the 3.X
exception syntax.
According to the discussion on llvmdev, there is not much value in requiring
just 2.6; we may as well just require 2.7.
llvm-svn: 224129
Summary:
This patch gives me just enough to leverage the existing functionality in `TestRunner` for use in `libc++` and `libc++abi`.
It does the following:
* Adds the `UNSUPPORTED` tag to `TestRunner.parseIntegratedTestScript`.
* Allows `parseIntegratedTestScript` to return an empty script if a script is not required by the caller.
Reviewers: ddunbar, EricWF
Reviewed By: EricWF
Subscribers: cfe-commits, llvm-commits
Differential Revision: http://reviews.llvm.org/D6589
llvm-svn: 223915
Let tablegen compute the combination of subregister lanemasks for all
subregisters in a register/register class. This is preparation for further
work on subregister allocation.
llvm-svn: 223873
Remove the setting of a default style; that approach is not recommended
and means that all the settings have to be duplicated. Demonstrate the
c-add-style method instead, which is a much better way of doing it.
Remove the modified date as it is better stored in SVN.
Tweak a few style parameters to make them conform to the actual LLVM
style.
llvm-svn: 223765
Summary:
I currently have to specify --build=mips-linux-gnu or --build=mipsel-linux-gnu
to configure in order to successfully recurse a 32-bit build of the compiler on
my mips64-linux-gnu and mips64el-linux-gnu targets. This is a bug and will be
fixed but in the meantime it will be useful to have a way to work around this.
Reviewers: tstellarAMD
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D6522
llvm-svn: 223369
--disable-timestamps was added to the configure command way back in r142647 but
the command that echos this command to the log was not updated at the time.
llvm-svn: 223351
I'm recommitting the codegen part of the patch.
The vectorizer part will be sent for review again.
Masked Vector Load and Store Intrinsics.
Introduced new target-independent intrinsics in order to support masked vector loads and stores. The loop vectorizer optimizes loops containing conditional memory accesses by generating these intrinsics for existing targets AVX2 and AVX-512. The vectorizer asks the target about availability of masked vector loads and stores.
Added SDNodes for masked operations and lowering patterns for X86 code generator.
Examples:
declare <16 x i32> @llvm.masked.load.v16i32(i8* %addr, <16 x i32> %passthru, i32 4 /* align */, <16 x i1> %mask)
declare void @llvm.masked.store.v8f64(i8* %addr, <8 x double> %value, i32 4, <8 x i1> %mask)
Scalarizer for other targets (not AVX2/AVX-512) will be done in a separate patch.
http://reviews.llvm.org/D6191
llvm-svn: 223348
This complicates a few algorithms due to not having random access, but
not by a huge degree I don't think (open to debate/design
discussion/etc).
llvm-svn: 223261
--xunit-xml-output saves test results to disk in JUnit's xml format. This will allow Jenkins to report the details of a lit run.
Based on a patch by David Chisnall.
llvm-svn: 223163
This is the second patch in a small series. This patch contains the MachineInstruction and x86-64 backend pieces required to lower Statepoints. It does not include the code to actually generate the STATEPOINT machine instruction and as a result, the entire patch is currently dead code. I will be submitting the SelectionDAG parts within the next 24-48 hours. Since those pieces are by far the most complicated, I wanted to minimize the size of that patch. That patch will include the tests which exercise the functionality in this patch. The entire series can be seen as one combined whole in http://reviews.llvm.org/D5683.
The STATEPOINT pseudo node is generated after all gc values are explicitly spilled to stack slots. The purpose of this node is to wrap an actual call instruction while recording the spill locations of the meta arguments used for garbage collection and other purposes. The STATEPOINT is modeled as modifying all of those locations to prevent backend optimizations from forwarding the value from before the STATEPOINT to after the STATEPOINT. (Doing so would break relocation semantics for collectors which wish to relocate roots.)
The implementation of STATEPOINT is closely modeled on PATCHPOINT. Eventually, much of the code in this patch will be removed. The long term plan is to merge the functionality provided by statepoints and patchpoints. Merging their implementations in the backend is likely to be a good starting point.
Reviewed by: atrick, ributzka
llvm-svn: 223085
Order matters for this container, it seems (using a forward_list and
replacing the original push_backs with emplace_fronts caused test
failures). I didn't look too deeply into why.
(& in retrospect, I might go back & change some of the forward_lists I
introduced to deques anyway - since most don't require removal, deque is
a more memory-friendly data structure (moderate locality while not
invalidating pointers))
llvm-svn: 222950
Seems libstdc++ on some buildbots is lacking std::map::emplace, which is
weird... reverting while I look into it.
This reverts commit r222937.
llvm-svn: 222939
Pointers and references to map elements are never invalidated (except on
removal, which isn't used here) so there's no need for the indirection
unless there's polymorphism at work.
A little const correctness had to be fixed, since the indirection
allowed some benign const violations.
llvm-svn: 222937
This reverts commit r222632 (and follow-up r222636), which caused a host
of LNT failures on an internal bot. I'll respond to the commit on the
list with a reproduction of one of the failures.
Conflicts:
lib/Target/X86/X86TargetTransformInfo.cpp
llvm-svn: 222936
Since the elements were not polymorphic, the unique_ptr was only used to
avoid pointer invalidation on container resizes - might as well skip the
indirection and use a container with suitable invalidation semantics.
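A sketch of the trade-off (the element type is invented for illustration):

  #include <deque>
  #include <memory>
  #include <vector>

  struct Node { int Data; };

  // Before: the unique_ptr existed only so that growing the vector would
  // not move the Nodes out from under outstanding Node* pointers.
  //   std::vector<std::unique_ptr<Node>> Nodes;

  // After: deque never relocates existing elements on push_back, so the
  // pointers stay valid and the extra heap indirection can be dropped.
  std::deque<Node> Nodes;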
llvm-svn: 222931
Introduced new target-independent intrinsics in order to support masked vector loads and stores. The loop vectorizer optimizes loops containing conditional memory accesses by generating these intrinsics for existing targets AVX2 and AVX-512. The vectorizer asks the target about availability of masked vector loads and stores.
Added SDNodes for masked operations and lowering patterns for X86 code generator.
Examples:
declare <16 x i32> @llvm.masked.load.v16i32(i8* %addr, <16 x i32> %passthru, i32 4 /* align */, <16 x i1> %mask)
declare void @llvm.masked.store.v8f64(i8* %addr, <8 x double> %value, i32 4, <8 x i1> %mask)
Scalarizer for other targets (not AVX2/AVX-512) will be done in a separate patch.
http://reviews.llvm.org/D6191
llvm-svn: 222632
Primarily done by using SequenceToOffsetTable to reduce the register pressure set tables and then sizing the indices into the tables appropriately. Size a few other table entries based on content as well. Reduces X86RegisterInfo.o by ~9k.
llvm-svn: 222621
This is to be consistent with StringSet and ultimately with the standard
library's associative container insert function.
This led to updating SmallSet::insert to return pair<iterator, bool>,
and then to update SmallPtrSet::insert to return pair<iterator, bool>,
and then to update all the existing users of those functions...
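After the change, the usual standard-library idiom works across these
containers, e.g. (sketch):

  #include "llvm/ADT/SmallPtrSet.h"

  bool markVisited(llvm::SmallPtrSet<void *, 8> &Visited, void *V) {
    // insert() now returns pair<iterator, bool>; .second says whether the
    // element was newly inserted, just as with std::set::insert.
    return Visited.insert(V).second;
  }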
llvm-svn: 222334
Having two ways to do this doesn't seem terribly helpful, and
consistently using the insert version (which we already have) seems like
it'll make the code easier to understand for anyone working with standard
data structures. (I also updated many references to the Entry's
key and value to use first() and second instead of getKey{Data,Length,}
and get/setValue - for similar consistency.)
Also removes the GetOrCreateValue functions so there's less surface area
to StringMap to fix/improve/change/accommodate move semantics, etc.
llvm-svn: 222319
StringSet is still a bit dodgy in that it exposes the raw iterator of
the StringMap parent, which exposes the weird detail that StringSet
actually has a 'value'... but anyway, this is useful for a handful of
clients that want to reference the newly inserted/persistent string data
in the StringSet/Map/Entry/thing.
llvm-svn: 222302
This reverts commit r222183.
Broke on the MSVC buildbots due to MSVC not producing default move
operations - I'd fix it immediately but just broke my build system a
bit, so backing out until I have a chance to get everything going again.
llvm-svn: 222187
The next step is to actually use unique_ptr in TreePatternNode's
Children vector. That will be more intrusive, and may not work,
depending on exactly how these things are handled (I have a bad
suspicion things are shared more than they should be, making this more
DAG than tree - but if it's really a tree, unique_ptr should suffice)
llvm-svn: 222183
Indices into the table are stored in each MCRegisterClass instead of a pointer. A new method, getRegClassName, is added to MCRegisterInfo and TargetRegisterInfo to lookup the string in the table.
llvm-svn: 222118
based on instruction complexity
The order that tablegen fast-isel instruction code is generated is
currently based on the text of the predicate (using string
less-than). This patch changes this to instead use the instruction
complexity. Because the complexities are not unique, a C++ multimap is
used instead of a map.
This fixes the problem where code with no predicate always comes out
first (the empty string always compares as less than all other
strings) thus making the code with predicates dead code. See the FMUL
code in PPCFastISel.cpp for an example. It also more closely matches
the normal codegen ordering. Some error checking in the tablegen
fast-isel code is fixed as well.
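Illustrative sketch of the container change (types simplified):

  #include <map>
  #include <string>

  // Keyed by instruction complexity rather than by predicate text. A
  // multimap is required because several instructions can share one
  // complexity value; a plain map would keep only one of them.
  std::multimap<int, std::string> EmittedCode;

  void addEntry(int Complexity, const std::string &Code) {
    EmittedCode.insert({Complexity, Code});
  }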
Patch by Bill Seurer.
llvm-svn: 222038
The problem is mostly that variadic output instructions
aren't handled, so they are rejected for having an inconsistent
number of operands, and then the right number of operands
isn't emitted.
llvm-svn: 221117
Summary:
CustomCallingConv is simply a CallingConv for which tablegen should not
generate the implementation. It allows regular CallingConvs to delegate to
these custom functions. This is (currently) necessary for Mips, since we
cannot use CCCustom without adapting to the different API that CCCustom uses.
This brings us a bit closer to being able to remove
MipsCC::analyzeCallOperands and MipsCC::analyzeFormalArguments in favour of
the common implementation.
No functional change to the targets.
Depends on D3341
Reviewers: vmedic
Reviewed By: vmedic
Subscribers: vmedic, llvm-commits
Differential Revision: http://reviews.llvm.org/D5965
llvm-svn: 221052
* Added LLVM libraries required for IntelJITEvents to LLVMBuild.txt.
* Removed 'jit' library from llvm-jitlistener.
* Added support for OptionalLibraries to llvm-build cmake files generator.
Patch by aleksey.a.bader@intel.com
Differential Revision: http://reviews.llvm.org/D5646
llvm-svn: 220848
execution of a shell command. This can happen for example if the
``RUN:`` line calls a python script which can work correctly under
Linux/OSX but will not work under Windows. A more useful error message
is now shown rather than an unhelpful backtrace.
llvm-svn: 220227
This code is based on the existing LLVM Go bindings project hosted at:
https://github.com/go-llvm/llvm
Note that all contributors to the gollvm project have agreed to relicense
their changes under the LLVM license and submit them to the LLVM project.
Differential Revision: http://reviews.llvm.org/D5684
llvm-svn: 219976
Either kind of commit id can now be used with git-svnrevert, which
will figure out what kind of commit it is (if you use the rNNNN format for
SVN commits) and make sure the right ids are used in the right places.
It's a little bit more robust and user-friendly.
llvm-svn: 219290
FastISel has a fixed set of virtual functions that are overridden by the
tablegen-generated code for each target. These functions are distinguished by
the kinds of operands, e.g., register + immediate = "ri". The FastISel emitter
has been blindly emitting functions with different combinations of operand
kinds, even for combinations that are completely unused by FastISel, e.g.,
"fastEmit_rrr". Change to filter out functions that will be irrelevant for
FastISel and do not bother generating the code for them. Also add explicit
"override" keywords for the virtual functions that are overridden.
llvm-svn: 218838
No functionality change intended.
This implements Elena's idea to put the new additionalOperand outside the
switch to cover all cases
(http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20140929/237763.html).
Note that the only nontrivial change is in MRMSrcMemFrm. This requires an
inclusive interval of [2, 4] because we have a prefix-dependent *optional*
immediate operand.
llvm-svn: 218790
Summary:
The N32/N64 ABI's require that structs passed in registers are laid out
such that spilling the register with 'sd' places the struct at the lowest
address. For little endian this is trivial but for big-endian it requires
that structs are shifted into the upper bits of the register.
We also require that structs passed in registers have the 'inreg'
attribute for big-endian N32/N64 to work correctly. This is because the
tablegen-erated calling convention implementation only has access to the
lowered form of struct arguments (one or more integers of up to 64-bits
each) and is unable to determine the original type.
Reviewers: vmedic
Reviewed By: vmedic
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D5286
llvm-svn: 218451
As far as I can tell UTF-8 has been supported since the beginning of Python's
codec support, and it's the de facto standard for text these days, at least
for primarily-English text. This allows us to put Unicode into lit RUN lines.
rdar://problem/18311663
llvm-svn: 217688
parsing (and latent bug in the instruction definitions).
This is effectively a revert of r136287 which tried to address
a specific and narrow case of immediate operands failing to be accepted
by x86 instructions with a pretty heavy hammer: it introduced a new kind
of operand that behaved differently. All of that is removed with this
commit, but the test cases are both preserved and enhanced.
The core problem that r136287 and this commit are trying to handle is
that gas accepts both of the following instructions:
insertps $192, %xmm0, %xmm1
insertps $-64, %xmm0, %xmm1
These will encode to the same byte sequence, with the immediate
occupying an 8-bit entry. The first form was fixed by r136287 but that
broke the prior handling of the second form! =[ Ironically, we would
still emit the second form in some cases and then be unable to
re-assemble the output.
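(Seeing why the two spellings are one encoding takes a single line:
truncated to 8 bits, 192 and -64 are the same bit pattern. Sketch:)

  #include <cassert>
  #include <cstdint>

  int main() {
    // 192 is 0xC0; -64 in two's complement is also 0xC0, so both
    // spellings denote the same 8-bit immediate.
    assert(static_cast<uint8_t>(192) == static_cast<uint8_t>(-64));
  }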
The reason why the first instruction failed to be handled is that
prior to r136287 the operands were marked 'i32i8imm', which forces them to
be sign-extendable. Clearly, that won't work for 192 in a single byte.
However, making them zero-extended or "unsigned" doesn't really address
the core issue either because it breaks negative immediates. The correct
fix is to make these operands 'i8imm' reflecting that they can be either
signed or unsigned but must be 8-bit immediates. This patch backs out
r136287 and then changes those places as well as some others to use
'i8imm' rather than one of the extended variants.
Naturally, this broke something else. The custom DAG nodes had to be
updated to have a much more accurate type constraint of an i8 node, and
a bunch of Pat immediates needed to be specified as i8 values.
The fallout didn't end there though. We also then ceased to be able to
match the instruction-specific intrinsics to the instructions so
modified. Digging, this is because they too used i32 rather than i8 in
their signature. So I've also switched those intrinsics to i8 arguments
in line with the instructions.
In order to make the intrinsic adjustments of course, I also had to add
auto upgrading for the intrinsics.
I suspect that the intrinsic argument types may have led everything down
this rabbit hole. Pretty happy with the result.
llvm-svn: 217310
This is the final round of renaming. This changes tblgen to emit lower-case
function names for FastEmitInst_* and FastEmit_*, and updates all its uses
in the source code.
Reviewed by Eric
llvm-svn: 217075