Change `DIBuilder::insertDeclare()` and `insertDbgValueIntrinsic()` to
take an `MDLocation*`/`DebugLoc` parameter, which is attached to the
created intrinsic. Assert at creation time that the `scope:` field's
subprogram matches the variable's. There's a matching `clang` commit to
use the API.
The context for this is PR22778, which is removing the `inlinedAt:`
field from `MDLocalVariable`, instead deferring to the `!dbg` location
attached to the debug info intrinsic. The best way to ensure we always
have a `!dbg` attachment is to require one at creation time. I'll be
adding verifier checks next, but this API change is the best way to
shake out frontend bugs.
Note: I added an `llvm_unreachable()` in `bindings/go` and passed in
`nullptr` for the `DebugLoc`. The `llgo` folks will eventually need to
pass a valid `DebugLoc` here.
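For frontends, the creation call now has roughly this shape (a sketch
only; the variable names are hypothetical and the exact parameter list
should be checked against DIBuilder.h):

  // The location is supplied up front, so the @llvm.dbg.declare is born
  // with its !dbg attachment instead of having one patched in afterwards.
  DIB.insertDeclare(Storage, Var, Expr, DL, InsertBefore);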
llvm-svn: 235041
Add missing `!dbg` attachments to `@llvm.dbg.*` intrinsics. I updated
these using a script (add-dbg-to-intrinsics.sh) that I'll attach to
PR22778 for posterity.
llvm-svn: 235040
This avoids emitting code for unreachable landingpad blocks that contain
calls to llvm.eh.actions and indirectbr.
It's also a first step towards unifying the SEH and WinEH lowering
codepaths. I'm keeping the old fan-in lowering of SEH around until the
preparation version works well enough that we can switch over without
breaking existing users.
llvm-svn: 235037
Refactor CompileOnDemandLayer to make its addModuleSet method's
signature match the other layers.
This makes it possible to compose other layers (e.g. IRTransformLayer) on top
of CompileOnDemandLayer.
llvm-svn: 235029
BXJ was incorrectly said to be unsupported in ARMv8-A. It is not
supported in the A64 instruction set, but it is supported in the T32
and A32 instruction sets, because it's listed as an instruction in the
ARM ARM section F7.1.28.
Using SP as an operand to BXJ changed from UNPREDICTABLE to
PREDICTABLE in v8-A. This patch reflects that update as well.
This was found by MCHammer.
llvm-svn: 235024
Summary:
With this patch, SLSR may rewrite
S1: X = B + i * S
S2: Y = B + i' * S
to
S2: Y = X + (i' - i) * S
A secondary improvement: if (i' - i) is a power of 2, emit Y as X + (S << log(i' - i)). (S << log(i' - i)) is in a canonical form and thus more likely to be GVN'ed than (i' - i) * S.
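As a concrete illustration in C++ (hypothetical values i = 2 and i' = 6,
chosen so that (i' - i) = 4 is a power of 2; this is not code from the
test):

  // Before SLSR: S2 needs its own multiply.
  long before(long B, long S) {
    long X = B + 2 * S;     // S1
    long Y = B + 6 * S;     // S2
    return X + Y;
  }

  // After SLSR: S2 is rewritten in terms of S1, and since i' - i = 4
  // is a power of 2 the multiply becomes a shift.
  long after(long B, long S) {
    long X = B + 2 * S;     // S1 unchanged
    long Y = X + (S << 2);  // S2: X + (i' - i) * S, with 4 * S emitted as S << 2
    return X + Y;
  }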
Test Plan: slsr-add.ll
Reviewers: hfinkel, sanjoy, meheff, broune, eliben
Reviewed By: eliben
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D8983
llvm-svn: 235019
Many of these tests predate llvm-readobj. With elf-dump we had to match
a relocation to a symbol number, and the symbol number to a symbol name
or section number.
llvm-svn: 235015
This is a 1-line patch (with a TODO for AVX because that will affect
even more regression tests) that lets us substitute the appropriate
64-bit store for the float/double/int domains.
It's not clear to me exactly what the difference is between the 0xD6 (MOVPQI2QImr) and
0x7E (MOVSDto64mr) opcodes, but this is apparently the right choice.
Differential Revision: http://reviews.llvm.org/D8691
llvm-svn: 235014
Set the transform bar at 2 divisions because the fastest current
x86 FP divider circuit is in SandyBridge / Haswell at 10-cycle
latency (best case) relative to a 5-cycle multiplier.
So that's the worst case for this transform (no latency win),
but multiplies are obviously pipelined while divisions are not,
so there's still a big throughput win which we would expect to
show up in typical FP code.
These are the sequences I'm comparing:
divss %xmm2, %xmm0
mulss %xmm1, %xmm0
divss %xmm2, %xmm0
Becomes:
movss LCPI0_0(%rip), %xmm3 ## xmm3 = mem[0],zero,zero,zero
divss %xmm2, %xmm3
mulss %xmm3, %xmm0
mulss %xmm1, %xmm0
mulss %xmm3, %xmm0
[Ignore for the moment that we don't optimize the chain of 3 multiplies
into 2 independent fmuls followed by 1 dependent fmul...this is the DAG
version of: https://llvm.org/bugs/show_bug.cgi?id=21768 ...if we fix that,
then the transform becomes even more profitable on all targets.]
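At the source level the rewrite is roughly the following (a hypothetical
C++ sketch, not the actual test case; it's only legal under the relaxed
FP-math conditions this combine is gated on, since multiplying by a
reciprocal is not bit-identical to dividing):

  // Before: two divides by the same divisor d.
  float before(float a, float b, float d) {
    return ((a / d) * b) / d;
  }

  // After: one divide to form the reciprocal, then multiplies only.
  float after(float a, float b, float d) {
    float r = 1.0f / d;
    return ((a * r) * b) * r;
  }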
Differential Revision: http://reviews.llvm.org/D8941
llvm-svn: 235012
Summary:
MSP430 doesn't seem to have any additional constraints. Therefore remove
the target hook.
No functional change intended.
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D8208
llvm-svn: 235003
Summary:
Refactor MipsAsmParser::getATReg to return an internal register number instead of a register index.
Also change all the ints to unsigned, seeing as the current AT register index is stored as an unsigned in MipsAssemblerOptions.
Reviewers: dsanders
Reviewed By: dsanders
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D8478
llvm-svn: 234996
This commit makes LLVM not estimate branch probabilities when doing
single-bit bitmask tests.
The code that originally made me discover this is:
if ((a & 0x1) == 0x1) {
..
}
In this case we don't actually have any branch probability information
and should not assume that we do. LLVM transforms this into:
%and = and i32 %a, 1
%tobool = icmp eq i32 %and, 0
So, in this case, the result of a bitwise AND is compared against 0,
but we still should not assume that we have any probability
information.
CodeGen/ARM/2013-10-11-select-stalls.ll started failing because the
changed probabilities altered the result of
ARMBaseInstrInfo::isProfitableToIfCvt() and led to an if-conversion of
the diamond in the test. AFAICT, the test was never meant to test this,
so tweaking the test input slightly so that the probabilities stay the
same seems like the best way to preserve the meaning of the test.
llvm-svn: 234979
Remove all the global bits to do with preserving use-list order by
moving the `cl::opt`s to the individual tools that want them. There's a
minor functionality change to `libLTO`, in that you can't send in
`-preserve-bc-uselistorder=false`, but making that bit settable (if it's
worth doing) should be done through an explicit LTO API.
As a drive-by fix, I removed some includes of `UseListOrder.h` that were
made unnecessary by recent commits.
llvm-svn: 234973
Pull the `-preserve-ll-uselistorder` bit up through all the callers of
`Module::print()`. I converted callers of `operator<<` to
`Module::print()` where necessary to pull the bit through.
llvm-svn: 234968
For consistency, start pulling out `-preserve-ll-uselistorder`. I'll
drop the global state for both eventually. This pulls it up to
`Module::print()` (but not past there).
llvm-svn: 234966
Change the callers of `WriteBitcodeToFile()` to pass `true` or
`shouldPreserveBitcodeUseListOrder()` explicitly. I left the callers
that want to send `false` alone.
I'll keep pushing the bit higher until hopefully I can delete the global
`cl::opt` entirely.
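For illustration, a converted call site now has roughly this shape (a
sketch; `M` and `OS` stand in for whatever module pointer and output
stream the tool already has):

  // Callers that always preserved the order just pass true instead.
  WriteBitcodeToFile(M, OS, shouldPreserveBitcodeUseListOrder());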
llvm-svn: 234957
Canonicalize access to whether to preserve use-list order in bitcode on
a `bool` stored in `ValueEnumerator`. Next step: expose this as a
`bool` through `WriteBitcodeToFile()`.
llvm-svn: 234956
Now we don't have to do 2 synchronized passes to compute offsets and then
write the file.
This also includes a fix for the corner case of seeking on /dev/null.
That is not an error, but on some systems (Linux) the returned offset is
always 0; a real error is signaled by returning -1. This is checked by
the existing tests now that "clang -o /dev/null ..." seeks.
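For reference, the contract being relied on is plain POSIX lseek (a
minimal sketch, not the LLVM code itself):

  #include <unistd.h>

  // True if FD supports seeking. On Linux, /dev/null succeeds here and
  // simply reports an offset of 0; only (off_t)-1 means "not seekable".
  bool isSeekable(int FD) {
    return lseek(FD, 0, SEEK_CUR) != (off_t)-1;
  }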
llvm-svn: 234952
Inlining such intrinsics is very difficult, since you need to
simultaneously transform many calls to llvm.framerecover and potentially
duplicate the functions containing them. Normally this intrinsic isn't
added until EH preparation, which is part of the backend pass pipeline
after inlining. However, if it does get fed through the inliner, this
change ensures that it doesn't break the code.
llvm-svn: 234937
Summary:
There are a number of passes that could be sped up by using dominator tree DFS numbers to order or compare things across multiple bbs
(MemorySSA, MergedLoadStoreMotion, EarlyCSE, Sinking, GVN, NewGVN, for starters :P).
For example, GVN/CSE elimination can be done with a simple stack/etc (instead of a full-on scoped hash table or repeated leader set walks)
if the DFS pair is stored next to the leaders.
The dominator tree keeps them, and the DOM tree nodes expose them as public, but you have no guarantee they are up to date (and in fact,
if you split blocks or whatever during your pass, they definitely won't be).
This means passes either have to compute their own versions[1], or make 32 queries, or ....
Rather than try to hide this, I just made the API public, and made it do nothing if the numbers are already valid.
[1] Which we want as a non-recursive walk, which is not pretty, sadly,
because it cannot use the depth-first iterators since you don't get called on the way back up. So you either have to do one walk with po_iterator
and one with df_iterator, or write your own non-recursive walk that looks identical to the one in updateDFSNumbers.
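As a sketch of the kind of check this enables once the numbers are valid
(assuming the existing public in/out accessors on the dom tree nodes;
this helper is not part of the patch):

  // A dominates B iff A's DFS interval encloses B's.
  static bool dominatesByDFS(const DomTreeNode *A, const DomTreeNode *B) {
    return A->getDFSNumIn() <= B->getDFSNumIn() &&
           A->getDFSNumOut() >= B->getDFSNumOut();
  }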
Reviewers: chandlerc
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D8946
llvm-svn: 234930
Change all the normally relevant output in `verify-uselistorder` from
using `dbgs()` to using `outs()` and `errs()`. Now you don't need
`-debug=uselistorder` to figure out what's going on (or at what stage
verification failed, or to get the paths of the left-behind temporary
files). This is a debugging tool, so I put the logging messages on
`outs()` and the error messages on `errs()`.
I also adjusted the output to be less ***loud***. Not sure why I was so
`*`-happy when I first wrote this.
llvm-svn: 234929