I used the wrong method to obtain the return type inside FinishCall. This fix
simply uses the return type from FastLowerCall, which we already determined to
be a valid type.
Reduced test case from Chad. Thanks.
llvm-svn: 213788
With optimizations disabled, the isel patterns for mul.wide are disabled, but we
were still generating MULWIDE ISD nodes. Now we only try to form MULWIDE
ISD nodes in DAGCombine when the optimization level is not zero.
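Roughly, the guard looks like this (an illustrative sketch with a made-up
function name, not the actual NVPTX code; it only shows the early bail-out):

  #include "llvm/CodeGen/SelectionDAG.h"
  #include "llvm/Support/CodeGen.h"
  using namespace llvm;

  static SDValue tryCombineToMulWide(SDNode *N, SelectionDAG &DAG,
                                     CodeGenOpt::Level OptLevel) {
    // The mul.wide isel patterns are disabled at -O0, so do not form the
    // corresponding ISD nodes either; bail out before any pattern matching.
    if (OptLevel == CodeGenOpt::None)
      return SDValue();
    // ... otherwise match (mul (sext a), (sext b)) / (mul (zext a), (zext b))
    // and build the wide-multiply node ...
    return SDValue();
  }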
llvm-svn: 213773
The target-independent DAGCombiner will generate:
asr w1, X, #31          // w1 = splat sign bit
add X, X, w1, lsr #28   // X = X + 0 or pow2-1
asr w0, X, #4           // w0 = X/pow2
However, the add + shifts sequence is expensive, so generate instead:
add w0, X, #15          // w0 = X + pow2-1
cmp X, wzr              // compare X with 0
csel X, w0, X, lt       // X = (X < 0) ? X + pow2-1 : X
asr w0, X, #4           // w0 = X/pow2
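For reference, a standalone C++ check (not LLVM code) that both sequences
compute the same truncating signed division for pow2 = 16; it assumes
arithmetic right shift of signed values, as AArch64 provides:

  #include <cassert>
  #include <cstdint>
  #include <initializer_list>

  // Shift-based lowering: bias negative values by pow2-1 using the sign bit.
  int32_t sdiv16_shifts(int32_t x) {
    int32_t sign = x >> 31;                              // asr #31: splat sign bit
    int32_t biased = x + int32_t((uint32_t)sign >> 28);  // add 0 or 15 (lsr #28)
    return biased >> 4;                                  // asr #4
  }

  // cmp/csel-based lowering: select the biased value only when x is negative.
  int32_t sdiv16_csel(int32_t x) {
    int32_t biased = x + 15;                             // add w0, X, #15
    int32_t sel = (x < 0) ? biased : x;                  // cmp X, wzr + csel ..., lt
    return sel >> 4;                                     // asr #4
  }

  int main() {
    for (int32_t x : {-100, -17, -16, -1, 0, 1, 15, 16, 100})
      assert(sdiv16_shifts(x) == x / 16 && sdiv16_csel(x) == x / 16);
  }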
llvm-svn: 213758
We were assuming all SBFX-like operations would have the shl/asr form, but
often when the field being extracted is an i8 or i16, we end up with a
SIGN_EXTEND_INREG acting on a shift instead. Simple enough to check for though.
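A standalone illustration (not the AArch64 ISel code) of why the two DAG forms
denote the same signed bitfield extract, here for bits [8,16) of a 32-bit value;
it assumes arithmetic right shift of signed values:

  #include <cassert>
  #include <cstdint>
  #include <initializer_list>

  // The shl/asr form the matcher already handled.
  int32_t sbfx_shl_asr(uint32_t x, unsigned lsb, unsigned width) {
    return int32_t(x << (32 - lsb - width)) >> (32 - width);
  }

  // SIGN_EXTEND_INREG acting on a plain shift: extract the field, then
  // sign-extend it from 'width' bits.
  int32_t sbfx_sext_inreg(uint32_t x, unsigned lsb, unsigned width) {
    uint32_t field = (x >> lsb) & ((1u << width) - 1);
    uint32_t sign = 1u << (width - 1);
    return int32_t((field ^ sign) - sign);
  }

  int main() {
    for (uint32_t x : {0u, 0x7fu, 0x80u, 0xdeadbeefu, 0xffffffffu})
      assert(sbfx_shl_asr(x, 8, 8) == sbfx_sext_inreg(x, 8, 8));
  }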
llvm-svn: 213754
Although the final shifter operand is a rotate, this actually only matters for
the half-word extends when the amount == 24. Otherwise folding a shift in is
just as good.
llvm-svn: 213753
This pass attempts to speculatively use a sqrt instruction if one exists on the target, falling back to a libcall if the target instruction returns NaN.
This was enabled for MIPS and SystemZ, but it is well guarded and is good for most targets; GCC does this for X86, ARM and AArch64 (the targets I've checked).
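Roughly the shape of the speculated code (an illustrative C++ sketch; the pass
itself rewrites IR, and __builtin_sqrt here merely stands in for the target's
sqrt instruction):

  #include <cmath>

  double speculative_sqrt(double x) {
    double r = __builtin_sqrt(x); // stand-in for the target sqrt instruction
    if (r != r)                   // NaN result (e.g. negative input):
      r = std::sqrt(x);           // fall back to the libcall
    return r;
  }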
llvm-svn: 213752
The ARM ARM prohibits STRB instructions with writeback into the source register. With this commit, this constraint is now enforced and we stop assembling STRB instructions with unpredictable behavior.
llvm-svn: 213750
There really is no arm64_be: it was a useful fiction to test big-endian support
while both backends existed in parallel, but now the only platform that uses
the name (iOS) doesn't have a big-endian variant, let alone one called
"arm64_be".
llvm-svn: 213748
The ARM ARM prohibits STR instructions with writeback into the source register. With this commit, this constraint is now enforced and we stop assembling STR instructions with unpredictable behavior.
llvm-svn: 213745
Having both Triple::arm64 and Triple::aarch64 is extremely confusing, and
invites bugs where only one is checked. In reality, the only legitimate
difference between the two (arm64 usually means iOS) is also present in the OS
part of the triple and that's what should be checked.
We still parse the "arm64" triple, just canonicalise it to Triple::aarch64, so
there aren't any LLVM-side test changes.
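A sketch of the checking style this enables (the helper name is hypothetical):

  #include "llvm/ADT/Triple.h"

  // Test the canonical arch together with the OS instead of a separate
  // Triple::arm64 value.
  static bool isAppleAArch64(const llvm::Triple &T) {
    return T.getArch() == llvm::Triple::aarch64 && T.isOSDarwin();
  }

Since "arm64-apple-ios" now canonicalises to Triple::aarch64, a check like this
covers both spellings.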
llvm-svn: 213743
This change fully reverts r211771.
That revision added a canonicalization rule which has the potential to cause a
combine-cycle in the target-independent canonicalizing DAG combine.
The plan is to move the logic that forms target-specific addsub nodes into the
lowering of shuffles.
llvm-svn: 213736
This pins down the exact instruction sequences with CHECK-NEXT for these test cases.
This notably exposes how absolutely horrible the generated code is for
several of these test cases, and will make any future updates to the
tests clearly visible as our vector instruction selection gets better.
llvm-svn: 213732
The post-indexed instructions were missing the constraint, causing unpredictable STRH instructions to be emitted.
The earlyclobber constraint on the pre-indexed STR instructions is not strictly necessary: instruction selection for pre-indexed STR instructions goes through an additional layer of pseudo instructions which have the constraint defined. However, it doesn't hurt to specify the constraint directly on the pre-indexed instructions as well, since at some point someone might create instances of them programmatically, and then the constraint is definitely needed.
llvm-svn: 213729
This keeps the DAG combine worklist from growing endlessly due to duplicate
insertions.
The old behavior could cause arbitrarily bad memory usage in the DAG
combiner if there was heavy traffic of adding nodes already on the
worklist to it. This commit switches the DAG combine worklist to work
the same way as the instcombine worklist where we null-out removed
entries and only add new entries to the worklist. My measurements of
codegen time show a slight improvement. The memory utilization is
unsurprisingly dominated by other factors (the IR and DAG itself
I suspect).
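Roughly, the new worklist discipline looks like this (an illustrative sketch
with made-up names, not the actual DAGCombiner code): duplicates are rejected
at insertion time, and removal just nulls out the slot instead of erasing or
compacting:

  #include <cstddef>
  #include <unordered_map>
  #include <vector>

  template <typename NodeT> class CombineWorklist {
    std::vector<NodeT *> Order;                    // insertion order, may hold nulls
    std::unordered_map<NodeT *, size_t> Positions; // node -> index in Order

  public:
    void add(NodeT *N) {
      // Only appended if it was not already present.
      if (Positions.insert({N, Order.size()}).second)
        Order.push_back(N);
    }
    void remove(NodeT *N) {
      auto It = Positions.find(N);
      if (It == Positions.end())
        return;
      Order[It->second] = nullptr; // null-out instead of erasing
      Positions.erase(It);
    }
    NodeT *pop() {
      while (!Order.empty()) {
        NodeT *N = Order.back();
        Order.pop_back();
        if (N) {                   // skip nulled-out entries
          Positions.erase(N);
          return N;
        }
      }
      return nullptr;
    }
  };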
This change results in subtle, frustrating churn in the particular order
in which DAG combines are applied which causes a number of minor
regressions where we fail to match a pattern previously matched by
accident. AFAICT, all of these should be using AddToWorklist directly
or should be written in a less brittle way. None of the changes seem
drastically bad, and a few of the changes seem distinctly better.
A major change required to make this work is to significantly harden the
way in which the DAG combiner handles nodes which become dead
(zero-uses). Previously, we relied on the ability to "priority-bump"
them on the combine worklist to achieve recursive deletion of these
nodes and ensure that the frontier of remaining live nodes all were
added to the worklist. Instead, I've introduced a routine to just
implement that precise logic with no indirection. It is a significantly
simpler operation than that of the combiner worklist proper. I suspect
this will also fix some other problems with the combiner.
I think the x86 changes are really minor and uninteresting, but the
avx512 change at least is hiding a "regression" (despite the test case
being just noise, not testing some performance invariant) that might be
looked into. Not sure if any of the others impact specific "important"
code paths, but they didn't look terribly interesting to me, or the
changes were really minor. The consensus in review is to fix any
regressions that show up after the fact here.
Thanks to the other reviewers for checking the output on other
architectures. There is a specific regression on ARM that Tim already
has a fix prepped to commit.
Differential Revision: http://reviews.llvm.org/D4616
llvm-svn: 213727
FIXME: "llvm-rtdyld -verify -check" is still sensitive to path separator.
Fix searching StubMap to be tolerant of both '/' and '\\' on Win32.
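A sketch of the kind of tolerant comparison meant here (hypothetical helper,
not the exact RuntimeDyldChecker code):

  #include <algorithm>
  #include <string>

  // Normalize '\\' to '/' so StubMap keys compare equal regardless of which
  // separator the platform (or the test) used.
  static std::string normalizePath(std::string P) {
    std::replace(P.begin(), P.end(), '\\', '/');
    return P;
  }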
llvm-svn: 213723
There's no reason to restrict this particular piece of RuntimeDyldChecker
functionality to +Asserts builds.
This should fix failures in MachO_x86-64_PIC_relocations.s on release bots.
llvm-svn: 213708
RuntimeDyldChecker had been testing isalpha(Expr[0]) to recognise symbol tokens,
and throwing unrecognized token errors when it hit symbols with leading
underscores. This fixes that.
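A minimal sketch of the relaxed check (illustrative, not the exact code):

  #include <cctype>

  // Accept '_' as well as letters when deciding whether a token starts a symbol.
  static bool isSymbolStartChar(char C) {
    return std::isalpha(static_cast<unsigned char>(C)) || C == '_';
  }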
llvm-svn: 213706
This commit modifies the existing call lowering functions to be used as the
FastLowerCall and FastLowerIntrinsicCall target-hooks instead.
This enables patchpoint intrinsic lowering for AArch64.
This fixes <rdar://problem/17733076>
llvm-svn: 213704
This patch introduces a 'stub_addr' builtin that can be used to find the address
of the stub for a given (<file>, <section>, <symbol>) tuple. This address can be
used both to verify the contents of stubs (by loading from the returned address)
and to verify references to stubs (by comparing against the returned address).
Example (1) - Verifying stub contents:
Load 8 bytes (assuming a 64-bit target) from the stub for 'x' in the __text
section of f.o, and compare that value against the address of 'x'.
# rtdyld-check: *{8}(stub_addr(f.o, __text, x)) = x
Example (2) - Verifying references to stubs:
Decode the immediate of the instruction at label 'l', and verify that it's
equal to the offset from the next instruction's PC to the stub for 'y' in the
__text section of f.o (i.e. it's the correct PC-rel difference).
# rtdyld-check: decode_operand(l, 4) = stub_addr(f.o, __text, y) - next_pc(l)
l:
movq y@GOTPCREL(%rip), %rax
Since stub inspection requires cooperation with RuntimeDyldImpl, this patch
pimpl-ifies RuntimeDyldChecker. Its implementation is moved into a new class,
RuntimeDyldCheckerImpl, that has access to the definition of RuntimeDyldImpl.
llvm-svn: 213698
Factor out the addend encoding into a helper function and simplify
processRelocationRef.
Also add a few simple rtdyld tests. More tests to come once GOTs can be tested too.
Related to <rdar://problem/17768539>
llvm-svn: 213689
In MachO for AArch64 it is possible to have an explicit addend defined by
the ARM64_RELOC_ADDEND relocation or an addend encoded within the
instruction. Only one of them is allowed per relocation.
llvm-svn: 213687
It handles the errors which were seen in PR19958, where wrong code was being emitted due to an earlier patch.
Added code for lshr as well as non-exact right shifts.
It implements:
(icmp eq/ne (ashr/lshr const2, A), const1)
  -> (icmp eq/ne A, Log2(const2/const1))
  -> (icmp eq/ne A, Log2(const2) - Log2(const1))
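As a quick sanity check of the arithmetic (standalone C++, not the InstCombine
code): with const2 = 64 and const1 = 4, the shift compares equal exactly when
A == Log2(64) - Log2(4) == 4:

  #include <cassert>
  #include <cstdint>

  int main() {
    const uint32_t C2 = 64, C1 = 4; // Log2(C2) = 6, Log2(C1) = 2
    for (uint32_t A = 0; A < 32; ++A)
      assert(((C2 >> A) == C1) == (A == 6 - 2)); // lshr case, exact constants
    return 0;
  }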
Differential Revision: http://reviews.llvm.org/D4068
llvm-svn: 213678
"((~A & B) | A) -> (A | B)" and "((A & B) | ~A) -> (~A | B)"
Original patch credit to Ankit Jain!
Differential Revision: http://reviews.llvm.org/D4591
llvm-svn: 213676
We previously supported the align attribute on all (pointer) parameters, but we
only used it for byval parameters. However, it is completely consistent at the
IR level to treat 'align n' on all pointer parameters as an alignment
assumption on the pointer, and now we do. Specifically, this causes
computeKnownBits to use the align attribute on all pointer parameters, not just
byval parameters. I've also added an explicit parameter attribute test for this
to test/Bitcode/attributes.ll.
And I've updated the LangRef to document the align parameter attribute (as it
turns out, it was not documented at all previously, although the byval
documentation mentioned that it could be used).
There are (at least) two benefits to doing this:
- It allows enhancing alignment based on the pointer alignment after inlining callees.
- It allows simplification of pointer arithmetic.
llvm-svn: 213670