This should make it clear under which narrow circumstances implicit
physreg uses are okay when rematerializing, and prevent people from
accidentally allowing too much when overriding
isReallyTriviallyReMaterializable(), even with the weaker assert in the
RegisterCoalescer.
llvm-svn: 235679
Currently symbol names are printed in quotes if they contain something
outside of the arbitrary set of characters that isAcceptableChar tests
for. On some targets, it is never OK to print a symbol name in quotes,
so allow targets to opt out of this behavior.
llvm-svn: 235670
Add assembler/disassembler support for dcbt/dcbtst (and aliases) with the hint
field specified (non-zero). Unfortunately, the syntax for this instruction is
special in that it differs for server vs. embedded cores:
dcbt ra, rb, th [server]
dcbt th, ra, rb [embedded]
where th can be omitted when it is 0. dcbtst is the same. Thus we need to play
games in the parser and the printer to flip the operands around on the embedded
cores. We'll use the server syntax as the default (binutils currently uses the
embedded form by default, but IBM is changing that).
We also stop marking dcbtst as having unmodeled side effects (this is not
necessary, it is just a hint like dcbt -- noticed by inspection, so no separate
test case).
llvm-svn: 235657
(reverted in r235533)
Original commit message:
"Calls to llvm::Value::mutateType are becoming extra-sensitive now that
instructions have extra type information that will not be derived from
operands or result type (alloca, gep, load, call/invoke, etc... ). The
special-handling for mutateType will get more complicated as this work
continues - it might be worth making mutateType virtual & pushing the
complexity down into the classes that need special handling. But with
only two significant uses of mutateType (vectorization and linking) this
seems OK for now.
Totally open to ideas/suggestions/improvements, of course.
With this, and a bunch of exceptions, we can roundtrip an indirect call
site through bitcode and IR. (a direct call site is actually trickier...
I haven't figured out how to deal with the IR deserializer's lazy
construction of Function/GlobalVariable decl's based on the type of the
entity which means looking through the "pointer to T" type referring to
the global)"
The remapping done in ValueMapper for LTO was insufficient as the types
weren't correctly mapped (though I was using the post-mapped operands,
some of those operands might not have been mapped yet, so their types
wouldn't be post-mapped yet). Instead, use the pre-mapped type and
explicitly map all the types.
llvm-svn: 235651
Specifically, if a pointer accesses different underlying objects in each
iteration, don't look through the phi node defining the pointer.
The motivating case is the underlying-objects-2.ll testcase. Consider
the loop nest:
int **A;
for (i)
  for (j)
    A[i][j] = A[i-1][j] * B[j]
This loop is transformed by Load-PRE to stash away A[i] for the next
iteration of the outer loop:
Curr = A[0]; // Prev_0
for (i: 1..N) {
  Prev = Curr;  // Prev = PHI (Prev_0, Curr)
  Curr = A[i];
  for (j: 0..N)
    Curr[j] = Prev[j] * B[j]
}
Since A[i] and A[i-1] are likely to be independent pointers,
getUnderlyingObjects should not assume that Curr and Prev share the same
underlying object in the inner loop.
If it did, we would try to dependence-analyze Curr and Prev, and the
analysis of the corresponding SCEVs would fail with non-constant
distance.
To fix this, the getUnderlyingObjects API is extended with an optional
LoopInfo parameter. This is effectively what controls whether we want
the above behavior or the original. Currently, I only changed
LoopAccessAnalysis to use this approach.
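As a rough sketch of what a caller like LoopAccessAnalysis can now do (the exact
GetUnderlyingObjects parameter list, e.g. how DataLayout is passed and which
arguments have defaults, is an assumption here rather than the precise upstream
signature):

#include "llvm/ADT/SmallVector.h"
#include "llvm/Analysis/LoopInfo.h"
#include "llvm/Analysis/ValueTracking.h"
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/Value.h"
using namespace llvm;

// Sketch only: the parameter forms/defaults of GetUnderlyingObjects are assumed.
static void collectUnderlyingObjects(Value *Ptr, const DataLayout *DL,
                                     LoopInfo *LI,
                                     SmallVectorImpl<Value *> &Objects) {
  // With a non-null LI, the walk stops at phis whose incoming values may name
  // different underlying objects on different iterations (the Curr/Prev case
  // above); with LI == nullptr the original behavior is kept.
  GetUnderlyingObjects(Ptr, Objects, DL, LI);
}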
The other testcase is to guard the opposite case where we do want to
look through the loop PHI. If we step through an array by incrementing
a pointer, the underlying object is the incoming value of the phi as the
loop is entered.
Fixes rdar://problem/19566729
llvm-svn: 235634
This will enable us to create a PDBContext so as to expose some
amount of debug info functionality through a common interface.
Differential Revision: http://reviews.llvm.org/D9205
Reviewed by: Alexey Samsonov
llvm-svn: 235612
Move isDereferenceablePointer function to Analysis. This function recursively tracks dereferenceability over a chain of values, like other functions in ValueTracking.
This refactoring is motivated by further changes to support dereferenceable_or_null attribute (http://reviews.llvm.org/D8650). isDereferenceablePointer will be extended to perform context-sensitive analysis and IR is not a good place to have such functionality.
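For context, here is a hedged sketch of a caller after the move; the header
location and the exact parameter list of isDereferenceablePointer are
assumptions, not a quote of the tree:

#include "llvm/Analysis/ValueTracking.h"  // assumed home after the move
#include "llvm/IR/Value.h"
using namespace llvm;

static bool canSpeculateLoadFrom(const Value *Ptr, const DataLayout *DL) {
  // Recursively tracks dereferenceability through the chain of values feeding
  // Ptr, as described above; the signature details are an assumption.
  return isDereferenceablePointer(Ptr, DL);
}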
Patch by: Artur Pilipenko <apilipenko@azulsystems.com>
Differential Revision: http://reviews.llvm.org/D9075
llvm-svn: 235611
Summary:
Make sure the abbrev operands are valid and that we can read/skip them
afterwards.
Bug found with AFL fuzz.
Reviewers: rafael
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D9030
llvm-svn: 235595
This removes the -sehprepare flag and makes __C_specific_handler
functions always use WinEHPrepare.
This was tested by building all of chromium_builder_tests and running a
few tests that use SEH, but if something breaks, we can revert this.
llvm-svn: 235557
I added the poisoning back in r76891 (2009) because of some bugs in
Unladen Swallow, and then Evan Cheng added the setRangeWritable() call
in r81308. Profiling a Release+Asserts build on Windows shows that this
memory protection call is actually very expensive. 4 seconds of a 70
second Clang compilation are spent in VirtualQuery. These days we have
more reliable tools like ASan to find these kinds of bugs, so we can go
ahead and retire these checks.
llvm-svn: 235542
This reverts commit r235458.
It looks like this might be breaking something LTO-ish. Looking into it
& will recommit with a fix/test case/etc once I've got more to go on.
llvm-svn: 235533
This causes OpKind and all the bitfields after it to use 32-bit loads/stores instead of i24 accesses for the existing bitfields.
Bugs will be filed to track whether clang and llvm could have generated the 32-bit operations in the front-end or optimizer.
Reviewed by Rafael.
llvm-svn: 235528
Add support for symbolic call targets to patchpoints in the X86 backend.
The code generated for symbolic targets is identical to the code generated for
constant targets, except that a relocation is emitted to fix up the actual
target address at link-time. This allows IR and object files containing
patchpoints to be cached across JIT-invocations where the target address may
change.
llvm-svn: 235483
Without pointee types the space optimization of storing only the pointer
type and not the value type won't be viable - so add the extra type
information that would be missing.
llvm-svn: 235475
Without pointee types the space optimization of storing only the pointer
type and not the value type won't be viable - so add the extra type
information that would be missing.
Storeatomic coming soon.
llvm-svn: 235474
Add a flag to lib/Linker (and `llvm-link`) to override linkage rules.
When set, the functions in the source module *always* replace those in
the destination module.
The `llvm-link` option is `-override=abc.ll`. All the "regular" modules
are loaded and linked first, followed by the `-override` modules. This
is useful for debugging workflows where some subset of the module (e.g.,
a single function) is extracted into a separate file where it's
optimized differently, before being merged back in.
Patch by Luqman Aden!
llvm-svn: 235473
Calls to llvm::Value::mutateType are becoming extra-sensitive now that
instructions have extra type information that will not be derived from
operands or result type (alloca, gep, load, call/invoke, etc... ). The
special-handling for mutateType will get more complicated as this work
continues - it might be worth making mutateType virtual & pushing the
complexity down into the classes that need special handling. But with
only two significant uses of mutateType (vectorization and linking) this
seems OK for now.
Totally open to ideas/suggestions/improvements, of course.
With this, and a bunch of exceptions, we can roundtrip an indirect call
site through bitcode and IR. (a direct call site is actually trickier...
I haven't figured out how to deal with the IR deserializer's lazy
construction of Function/GlobalVariable decl's based on the type of the
entity which means looking through the "pointer to T" type referring to
the global)
llvm-svn: 235458
Summary:
This lets us use range based for loops.
Reviewers: chandlerc
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D9169
llvm-svn: 235416
Remove the `DIArray` and `DITypeArray` typedefs, preferring the
underlying types (`DebugNodeArray` and `MDTypeRefArray`, respectively).
llvm-svn: 235413
Summary:
MemorySSA uses this algorithm as well, and this enables us to reuse the code in both places.
There are no actual algorithm or datastructure changes in here, just code movement.
Reviewers: qcolombet, chandlerc
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D9118
llvm-svn: 235406
Keep the old SEH fan-in lowering on by default for now, since projects
rely on it. This will make it easy to test this change with a simple
flag flip.
llvm-svn: 235399
Summary:
This directive is exactly the same as .asciz, except it's only used by MIPS.
It is used to store null terminated strings in object files.
Reviewers: rafael, dsanders, echristo
Reviewed By: dsanders, echristo
Subscribers: echristo, llvm-commits
Differential Revision: http://reviews.llvm.org/D7530
llvm-svn: 235382
Delete subclasses of (the already deleted) `DIType` in favour of
directly using pointers from the `Metadata` hierarchy.
While `DICompositeType` wraps `MDCompositeTypeBase` and `DIDerivedType`
wraps `MDDerivedTypeBase`, most uses of each really meant the more
specific `MDCompositeType` and `MDDerivedType`.
llvm-svn: 235351
This is the last major parent class, so I'll probably start deleting
classes in batches now. Looks like many of the references to the DI*
hierarchy were updated organically along the way.
llvm-svn: 235331
Replace uses of `DIScope` with `MDScope*`. There was one spot where
I've left an `MDScope*` uninitialized (where `DIScope` would have been
default-initialized to `nullptr`) -- this is intentional, since the
if/else that follows should unconditionally assign it to a value.
llvm-svn: 235327
The current implementations could exhibit some behavior differences:
* raw_fd_ostream: Whatever the underlying fd does with seek+write. In a normal
  file, the write position would be back to the old offset.
* raw_svector_ostream: The write position is always the end of the stream, so
  after pwrite the write position would be the new end. This matches what OS X
  (all BSDs?) does with a pwrite on an O_APPEND fd.
Given that we don't need that feature and don't use O_APPEND a lot in LLVM,
just disallow it.
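For reference, a hedged usage sketch of the intended pwrite pattern (the exact
stream class and method signature are assumptions):

#include "llvm/Support/raw_ostream.h"
#include <cstdint>
using namespace llvm;

// Hedged sketch: pwrite patches bytes that were already written, at an
// absolute offset, without moving the logical write position.
static void backpatchSizeField(raw_pwrite_stream &OS, uint64_t FieldOffset,
                               uint32_t Size) {
  // On an O_APPEND-style stream the two implementations above would disagree
  // about the resulting write position, hence the new restriction.
  OS.pwrite(reinterpret_cast<const char *>(&Size), sizeof(Size), FieldOffset);
}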
I am open to suggestions on renaming pwrite to something else, but this fixes
the issue for now.
Thanks to Yaron Keren for reporting it.
llvm-svn: 235303
This patch refactors reduction identification code out of LoopVectorizer and
exposes it as common utilities.
No functional change.
Review: http://reviews.llvm.org/D9046
llvm-svn: 235284
Stop using `DIDescriptor` and its subclasses in the `DebugInfoFinder`
API, as well as the rest of the API hanging around in `DebugInfo.h`.
llvm-svn: 235240
Previously DebugInfoPDB could only load data for a PDB given a
path to the PDB. It could not open an EXE and find the matching
PDB and verify it matched, etc. This patch adds support for that
so that we can simply load debug information for a PDB directly.
Additionally, this patch extends DebugInfoPDB to support getting
source and line information for symbols.
llvm-svn: 235237
The implementation of this GEP::getResultElementType will be refactored
to either rely on a member variable, or recompute the value from the
indices (any preferences?).
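A hedged sketch of the recompute-from-the-indices option; the particular
getIndexedType overload and the getSourceElementType accessor used here are
assumptions about the surrounding API rather than a quote of the patch:

#include "llvm/ADT/SmallVector.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;

static Type *recomputeResultElementType(GetElementPtrInst *GEP) {
  // Walk the source element type through each index to find the type the
  // GEP result ultimately points at (assumed overload of getIndexedType).
  SmallVector<Value *, 8> Indices(GEP->idx_begin(), GEP->idx_end());
  return GetElementPtrInst::getIndexedType(GEP->getSourceElementType(),
                                           Indices);
}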
llvm-svn: 235236
Similar to r235222, but for the weak symbol case.
In an "ideal" assembler/object format an expression would always refer to the
final value and A-B would only be computed from a section in the same
comdat as A and B with A and B strong.
Unfortunately that is not the case with debug info on ELF, so we need a
heuristic. Since we need a heuristic, we may as well use the same one as
gas:
* call weak_sym: produces a relocation, even if in the same section.
* A - weak_sym and weak_sym - A: don't produce a relocation if we can
  compute it.
This fixes pr23272 and changes the fix of pr22815 to match what gas does.
llvm-svn: 235227
Now (with a few carefully placed suppressions relating to general type
serialization, etc) we can round trip a simple load through bitcode and
textual IR without calling getElementType on a PointerType.
llvm-svn: 235221
Summary:
This patch adds legalization support to operate on FP16 as a load/store type
and to perform operations on it as floats.
Tests for ARM are added to test/CodeGen/ARM/fp16-promote.ll
Reviewers: srhines, t.p.northover
Differential Revision: http://reviews.llvm.org/D8755
llvm-svn: 235215
When debugging LTO issues with ld64, we use -save-temps to save the merged
optimized bitcode file, then invoke ld64 again on the single bitcode file.
The saved bitcode file is already internalized, so we can call
lto_codegen_set_should_internalize and skip running internalization again.
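A hedged sketch of the workflow described above; the exact lto_* parameter
types are assumptions based on the usual libLTO C API conventions:

#include "llvm-c/lto.h"

// Sketch: the merged, -save-temps'd bitcode was already internalized, so tell
// the code generator not to run internalization again before codegen.
static void codegenSavedTempsModule(lto_code_gen_t CG) {
  lto_codegen_set_should_internalize(CG, false);
}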
rdar://20227235
llvm-svn: 235211
The v1i128 type is needed for the quadword add/subtract instructions introduced
in POWER8. Furthermore, the PowerPC ABI specifies that parameters of type v1i128
are to be passed in a single vector register, while parameters of type i128 are
passed in pairs of GPRs. Thus, it is necessary to be able to differentiate
between v1i128 and i128 in LLVM.
http://reviews.llvm.org/D8564
llvm-svn: 235198
The i128 type is needed as a builtin type in order to support the v1i128 vector
type. The PowerPC ABI requires that the i128 and v1i128 types are handled
differently when passed as parameters to functions (i128 is passed in pairs of
GPRs, v1i128 is passed in a single vector register).
http://reviews.llvm.org/D8564
llvm-svn: 235196
This now emits simple, unoptimized xdata tables for __C_specific_handler
based on the handlers listed in @llvm.eh.actions calls produced by
WinEHPrepare.
This adds support for running __finally blocks when exceptions are
thrown, and removes the old landingpad fan-in codepath.
I ran some manual execution tests on small basic test cases with and
without optimization, as well as on Chrome base_unittests, which uses a
small amount of SEH. I'm sure there are bugs, and we may need to
revert.
llvm-svn: 235154
See r230786 and r230794 for similar changes to gep and load
respectively.
Call is a bit different because it often doesn't have a single explicit
type - usually the type is deduced from the arguments, and just the
return type is explicit. In those cases there's no need to change the
IR.
When that's not the case, the IR usually contains the pointer type of
the first operand - but since typed pointers are going away, that
representation is insufficient so I'm just stripping the "pointerness"
of the explicit type away.
This does make the IR a bit weird - it /sort of/ reads like the type of
the first operand: "call void () %x(" but %x is actually of type "void
()*" and will eventually be just of type "ptr". But this seems not too
bad and I don't think it would benefit from repeating the type
("void (), void () * %x(" and then eventually "void (), ptr %x(") as has
been done with gep and load.
This also has a side benefit: since the explicit type is no longer a
pointer, there's no ambiguity between an explicit type and a function
that returns a function pointer. Previously this case needed an explicit
type (eg: a function returning a void() function was written as
"call void () () * @x(" rather than "call void () * @x(" because of the
ambiguity between a function returning a pointer to a void() function
and a function returning void).
No ambiguity means even function pointer return types can just be
written alone, without writing the whole function's type.
This leaves /only/ the varargs case where the explicit type is required.
Given the special type syntax in call instructions, the regex-fu used
for migration was a bit more involved in its own unique way (as every
one of these is) so here it is. Use it in conjunction with the apply.sh
script and associated find/xargs commands I've provided in r230786 to
migrate your out of tree tests. Do let me know if any of this doesn't
cover your cases & we can iterate on a more general script/regexes to
help others with out of tree tests.
About 9 test cases couldn't be automatically migrated - half of those
were functions returning function pointers, where I just had to manually
delete the function argument types now that we didn't need an explicit
function type there. The other half were typedefs of function types used
in calls - just had to manually drop the * from those.
import fileinput
import sys
import re
pat = re.compile(r'((?:=|:|^|\s)call\s(?:[^@]*?))(\s*$|\s*(?:(?:\[\[[a-zA-Z0-9_]+\]\]|[@%](?:(")?[\\\?@a-zA-Z0-9_.]*?(?(3)"|)|{{.*}}))(?:\(|$)|undef|inttoptr|bitcast|null|asm).*$)')
addrspace_end = re.compile(r"addrspace\(\d+\)\s*\*$")
func_end = re.compile("(?:void.*|\)\s*)\*$")
def conv(match, line):
  if not match or re.search(addrspace_end, match.group(1)) or not re.search(func_end, match.group(1)):
    return line
  return line[:match.start()] + match.group(1)[:match.group(1).rfind('*')].rstrip() + match.group(2) + line[match.end():]

for line in sys.stdin:
  sys.stdout.write(conv(re.search(pat, line), line))
llvm-svn: 235145
Summary:
If a pointer is marked as dereferenceable_or_null(N), LLVM assumes it
is either `null` or `dereferenceable(N)` or both. This change only
introduces the attribute and adds a token test case for the `llvm-as`
/ `llvm-dis`. It does not hook up other parts of the optimizer to
actually exploit the attribute -- those changes will come later.
For pointers in address space 0, `dereferenceable(N)` is now exactly
equivalent to `dereferenceable_or_null(N)` && `nonnull`. For other
address spaces, `dereferenceable(N)` is potentially weaker than
`dereferenceable_or_null(N)` && `nonnull` (since we could have a null
`dereferenceable(N)` pointer).
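A self-contained illustration of the equivalence stated above (plain C++, not
LLVM API); it only applies to address space 0, where null is never
dereferenceable:

// Hypothetical summary of what is known about a pointer from its attributes.
struct PointerFacts {
  bool NonNull;              // nonnull
  unsigned DerefBytes;       // dereferenceable(N), 0 if absent
  unsigned DerefOrNullBytes; // dereferenceable_or_null(N), 0 if absent
};

// In address space 0: dereferenceable(N) <=> dereferenceable_or_null(N) && nonnull.
static bool knownDereferenceable(const PointerFacts &P, unsigned N) {
  return P.DerefBytes >= N || (P.DerefOrNullBytes >= N && P.NonNull);
}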
The motivating case for this change is Java (and other managed
languages), where pointers are either `null` or dereferenceable up to
some usually known-at-compile-time constant offset.
Reviewers: rafael, hfinkel
Reviewed By: hfinkel
Subscribers: nicholas, llvm-commits
Differential Revision: http://reviews.llvm.org/D8650
llvm-svn: 235132
As a step toward killing `DIDescriptor` and its subclasses, remove it
from the `DIBuilder` API. Replace the subclasses with appropriate
pointers from the new debug info hierarchy. There are a couple of
possible surprises in type choices for out-of-tree frontends:
- Subroutine types: `MDSubroutineType`, not `MDCompositeTypeBase`.
- Composite types: `MDCompositeType`, not `MDCompositeTypeBase`.
- Scopes: `MDScope`, not `MDNode`.
- Generic debug info nodes: `DebugNode`, not `MDNode`.
This is part of PR23080.
llvm-svn: 235111
Summary: LoopInfoImpl's loop population is just a normal postorder walk, written out.
Reviewers: chandlerc
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D9032
llvm-svn: 235073
Delete `DIRef<>`, and replace the remaining uses of it with
`TypedDebugNodeRef<>`. To minimize code churn, I've added typedefs from
`MDTypeRef` to `DITypeRef` (etc.).
llvm-svn: 235071
PR23080 is almost finished. With this commit, there's no consequential
API in `DIDescriptor` and its subclasses. What's left?
- Default-constructed to `nullptr`.
- Handy `const_cast<>` (constructed from `const`, but accessors are
non-`const`).
I think the safe way to catch those is to delete the classes and fix
compile errors. That'll be my next step, after I delete the `DITypeRef`
(etc.) wrapper around `MDTypeRef`.
llvm-svn: 235069
Continuing PR23080, gut `DIType` and its various subclasses, leaving
behind thin wrappers around the pointer types in the new debug info
hierarchy.
llvm-svn: 235064
Remove the accessors of `DIDerivedType` that downcast to
`MDDerivedType`, shifting the `cast<MDDerivedType>` into the callers.
Also remove `DIType::isValid()`, which is really just a check against
`nullptr` at this point.
llvm-svn: 235059
Remove 'inlinedAt:' from MDLocalVariable. Besides saving some memory
(variables with it seem to be the single largest `Metadata` contributor to
memory usage right now in -g -flto builds), this stops optimization and
backend passes from having to change local variables.
The 'inlinedAt:' field was used by the backend in two ways:
1. To tell the backend whether and into what a variable was inlined.
2. To create a unique id for each inlined variable.
Instead, rely on the 'inlinedAt:' field of the intrinsic's `!dbg`
attachment, and change the DWARF backend to use a typedef called
`InlinedVariable` which is `std::pair<MDLocalVariable*, MDLocation*>`.
This `DebugLoc` is already passed reliably through the backend (as
verified by r234021).
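The pair described above looks roughly like this; the enclosing header and
namespace are assumptions, only the typedef itself comes from the message:

#include <utility>

namespace llvm {
class MDLocalVariable;
class MDLocation;

// The backend now identifies an inlined variable by the variable node plus
// the inlined-at location taken from the intrinsic's !dbg attachment.
typedef std::pair<MDLocalVariable *, MDLocation *> InlinedVariable;
} // namespace llvm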
This commit removes the check from r234021, but I added a new check
(that will survive) in r235048, and changed the `DIBuilder` API in
r235041 to require a `!dbg` attachment whose `scope:` is in the same
`MDSubprogram` as the variable's.
If this breaks your out-of-tree testcases, perhaps the script I used
(mdlocalvariable-drop-inlinedat.sh) will help; I'll attach it to PR22778
in a moment.
llvm-svn: 235050
Before we start to rely on valid `!dbg` attachments, add a check to the
verifier that `@llvm.dbg.*` intrinsics always have one. Also check that
the `scope:` fields point at the same `MDSubprogram`.
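A self-contained sketch of the new rule; the structs are hypothetical stand-ins
for the Verifier's internals, not LLVM's actual classes:

struct Subprogram;
struct Location { const Subprogram *Scope; };  // scope of the !dbg attachment
struct Variable { const Subprogram *Scope; };  // scope of the variable node
struct DbgIntrinsicCall { const Location *Dbg; const Variable *Var; };

static bool verifyDbgIntrinsic(const DbgIntrinsicCall &Call) {
  // Every @llvm.dbg.* intrinsic must carry a !dbg attachment...
  if (!Call.Dbg)
    return false;
  // ...and its scope must resolve to the same MDSubprogram as the variable's.
  return Call.Dbg->Scope == Call.Var->Scope;
}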
This is in the context of PR22778. The check that the `inlinedAt:`
fields agree has baked for a while (since r234021), so I'll kill [1] the
`MDLocalVariable::getInlinedAt()` field soon.
[1]: http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20150330/269387.html
Unfortunately, that means it's impossible to keep the current `Verifier`
checks, which rely on comparing `inlinedAt:` fields. We'll be able to
keep the checks I'm adding here.
If this breaks your out-of-tree testcases, the upgrade script
(add-dbg-to-intrinsics.sh) attached to PR22778 that I used for r235040
might fix them for you.
llvm-svn: 235048
Change `DIBuilder::insertDeclare()` and `insertDbgValueIntrinsic()` to
take an `MDLocation*`/`DebugLoc` parameter which it attaches to the
created intrinsic. Assert at creation time that the `scope:` field's
subprogram matches the variable's. There's a matching `clang` commit to
use the API.
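A hedged sketch of a frontend call site after this change; the parameter order
and the exact pointer types taken by insertDeclare are assumptions:

#include "llvm/IR/DIBuilder.h"
#include "llvm/IR/DebugInfoMetadata.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;

static void emitDeclare(DIBuilder &DIB, Value *Storage, MDLocalVariable *Var,
                        MDExpression *Expr, MDLocation *Loc,
                        Instruction *InsertBefore) {
  // The location is now a required argument and becomes the intrinsic's !dbg
  // attachment; its scope must be in the same subprogram as Var's.
  DIB.insertDeclare(Storage, Var, Expr, Loc, InsertBefore);
}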
The context for this is PR22778, which is removing the `inlinedAt:`
field from `MDLocalVariable`, instead deferring to the `!dbg` location
attached to the debug info intrinsic. The best way to ensure we always
have a `!dbg` attachment is to require one at creation time. I'll be
adding verifier checks next, but this API change is the best way to
shake out frontend bugs.
Note: I added an `llvm_unreachable()` in `bindings/go` and passed in
`nullptr` for the `DebugLoc`. The `llgo` folks will eventually need to
pass a valid `DebugLoc` here.
llvm-svn: 235041
Change CompileOnDemandLayer to make its signature match the other layers.
This makes it possible to compose other layers (e.g. IRTransformLayer) on top
of CompileOnDemandLayer.
llvm-svn: 235029
Remove all the global bits to do with preserving use-list order by
moving the `cl::opt`s to the individual tools that want them. There's a
minor functionality change to `libLTO`, in that you can't send in
`-preserve-bc-uselistorder=false`, but making that bit settable (if it's
worth doing) should go through an explicit LTO API.
As a drive-by fix, I removed some includes of `UseListOrder.h` that were
made unnecessary by recent commits.
llvm-svn: 234973
Pull the `-preserve-ll-uselistorder` bit up through all the callers of
`Module::print()`. I converted callers of `operator<<` to
`Module::print()` where necessary to pull the bit through.
llvm-svn: 234968
Change the callers of `WriteToBitcodeFile()` to pass `true` or
`shouldPreserveBitcodeUseListOrder()` explicitly. I left the callers
that want to send `false` alone.
I'll keep pushing the bit higher until hopefully I can delete the global
`cl::opt` entirely.
llvm-svn: 234957
Summary:
There are a number of passes that could be sped up by using dominator tree DFS numbers to order or compare things across multiple bbs
(MemorySSA, MergedLoadStoreMotion, EarlyCSE, Sinking, GVN, NewGVN, for starters :P).
For example, GVN/CSE elimination can be done with a simple stack/etc (instead of full-on scoped hash table or repeated leader set walks)
if the DFS pair is stored next to leaders.
The dominator tree keeps them, and the DOM tree nodes expose them as public, but you have no guarantee they are up to date (and in fact,
if you split blocks or whatever during your pass, they definitely won't be).
This means passes either have to compute their own versions[1], or make 32 queries, or ....
Rather than try to hide this, I just made the API public, and made it do nothing if the numbers are already valid.
[1] Which we want as a non-recursive walk, which is not pretty, sadly,
because it cannot use the depth first iterators since you don't get called on the way back up. So you either have to do one walk with po_iterator
and one with df_iterator, or write your own non-recursive walk that looks identical to the one in updateDFSNumbers.
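A self-contained illustration (plain C++, not LLVM code) of why up-to-date DFS
numbers help: once a pass has called updateDFSNumbers(), "A dominates B"
becomes an interval-containment test on the cached numbers instead of a walk up
the tree.

struct DomTreeNodeNums { unsigned DFSIn; unsigned DFSOut; };

static bool dominates(const DomTreeNodeNums &A, const DomTreeNodeNums &B) {
  // A's DFS interval must enclose B's.
  return A.DFSIn <= B.DFSIn && B.DFSOut <= A.DFSOut;
}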
Reviewers: chandlerc
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D8946
llvm-svn: 234930
As a follow-up to r234850, add an implicit conversion from
`DISubprogram` to `DIScope` to support Kaleidoscope Ch. 8. This also
reverts that band-aid from r234890.
(/me learns *again* to build Kaleidoscope before commit...)
llvm-svn: 234904