This is useful when trying to read notes from stripped files and matches
the behavior of GNU readelf and eu-readelf.
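For example (file name hypothetical):

  llvm-readelf --notes some-stripped-binary

should now behave like its GNU counterparts on such files.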
Differential Revision: https://reviews.llvm.org/D66358
llvm-svn: 369169
If OptimizeExtractBits() encountered a shift instruction with no users at all,
it would erase the instruction, but still return false.
This previously didn’t matter because its caller would always return after
processing the instruction, but https://reviews.llvm.org/D63233 changed the
function’s caller to fall through if it returned false, which would then cause
a use-after-free detectable by ASAN.
This change makes OptimizeExtractBits return true if it removes a shift
instruction with no users, terminating processing of the instruction.
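A minimal sketch of the shape of the fix (not the verbatim code):

  // The shift is dead: erase it and report a change so the caller stops
  // processing the now-deleted instruction.
  if (ShiftI->use_empty()) {
    ShiftI->eraseFromParent();
    return true;
  }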
Patch by: @brentdax (Brent Royal-Gordon)
Differential Revision: https://reviews.llvm.org/D66330
llvm-svn: 369168
This reverts r368662 (git commit 1a8d790cf5f89c1df718844f13e934e39bef6ef5)
The compile-time regression repro is in https://bugs.llvm.org/show_bug.cgi?id=43024
llvm-svn: 369167
We currently don't use liveness information after this point, but it can
be useful to catch bugs using -verify-machineinstrs, and optimizations
could potentially use this information in the future.
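For example, a hedged invocation (input name hypothetical):

  llc -verify-machineinstrs -o /dev/null input.ll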
Differential Revision: https://reviews.llvm.org/D66319
llvm-svn: 369162
When we partially resolved returned calls, we did not record that they were
not fully resolved, which caused odd behavior down the line. We could also
end up with some, but not all, of the callee's returned values in the
caller's returned-values map, another odd behavior we want to avoid.
llvm-svn: 369160
Summary:
Before, we required the comparison against null to be "canonical", i.e.,
null had to be operand #1. This patch allows null to be in either operand,
similar to the handling of loaded globals that follows.
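A hedged sketch of the operand handling (variable names hypothetical):

  // Accept null on either side of the comparison instead of requiring
  // the canonical position.
  Value *Op0 = ICmp->getOperand(0), *Op1 = ICmp->getOperand(1);
  Value *Ptr = nullptr;
  if (isa<ConstantPointerNull>(Op0))
    Ptr = Op1;
  else if (isa<ConstantPointerNull>(Op1))
    Ptr = Op0;
  // If set, Ptr is the value compared against null.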
Reviewers: sanjoy, hfinkel, aykevl, sstefan1, uenoku
Subscribers: hiraditya, bollu, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D66321
llvm-svn: 369158
As a preparation to "on-demand" abstract attribute generation we need
implementations for all attributes (as they can be queried and then
created on-demand where we now fail to find one).
Reviewers: uenoku, sstefan1
Subscribers: hiraditya, bollu, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D66129
llvm-svn: 369155
This was a quick pass through some obvious places. I haven't tried the clang-tidy check.
I also replaced the zeroes in getX86SubSuperRegister with X86::NoRegister which is the real sentinel name.
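Illustrative before/after for the sentinel (simplified):

  return 0;                // before: bare zero as the sentinel
  return X86::NoRegister;  // after: the named sentinel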
Differential Revision: https://reviews.llvm.org/D66363
llvm-svn: 369151
Push the LR register before calling __gnu_mcount_nc, as it expects the value of
LR to be at the top of the stack on ARM32.
Differential Revision: https://reviews.llvm.org/D65019
llvm-svn: 369147
Summary:
This is the first commit aiming to structure the attribute deduction.
The base idea is that we have default propagation patterns as listed
below on top of which we can add specific, e.g., context sensitive,
logic.
Deduction patterns used in this patch:
- argument states are determined from call site argument states,
see AAAlignArgument and AAArgumentFromCallSiteArguments.
- call site argument states are determined as if they were floating
values, see AAAlignCallSiteArgument and AAAlignFloating.
- floating value states are determined by traversing the def-use chain
and combining the states determined for the leaves, see
AAAlignFloating and genericValueTraversal.
- call site return states are determined from function return states,
see AAAlignCallSiteReturned and AACallSiteReturnedFromReturned.
- function return states are determined from returned value states,
see AAAlignReturned and AAReturnedFromReturnedValues.
Through this strategy all logic for alignment is concentrated in the
AAAlignFloating::updateImpl method.
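As an illustration of the floating-value pattern (the helpers below are
hypothetical, not the actual Attributor interfaces):

  // The state meets over all leaves of the def-use traversal, so the
  // known alignment is the minimum across the leaves.
  unsigned Known = 1u << 29;                       // optimistic start
  for (Value *Leaf : collectLeaves(V))             // hypothetical traversal
    Known = std::min(Known, knownAlignment(Leaf)); // hypothetical query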
Note: This commit works on its own but is part of a larger change that
involves "on-demand" creation of abstract attributes that will
participate in the fixpoint iteration. Without this part, we sometimes
do not have an AAAlign abstract attribute to query, losing information
we determined before. All tests have appropriate FIXMEs and the
information will be recovered once all parts have been added.
Reviewers: sstefan1, uenoku
Subscribers: hiraditya, bollu, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D66126
llvm-svn: 369144
Until we have call site specific liveness and/or value information, there
is no need to do call site specific deduction. However, we need the
symbols in follow-up patches that make Attributor::getAAFor return a
reference.
llvm-svn: 369143
Summary:
This patch should not change the behavior except that the added
initialize methods might indicate an optimistic fixpoint earlier. The
code movement is done to keep the attribute definitions in a single
block where it makes sense. No functional changes intended there.
Reviewers: uenoku, sstefan1
Subscribers: hiraditya, bollu, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D66258
llvm-svn: 369142
This pattern may arise more frequently with an enhancement to SLP vectorization suggested in PR42755:
https://bugs.llvm.org/show_bug.cgi?id=42755
...but we should handle this pattern to make things easier for the backend either way.
For all in-tree targets that I looked at, codegen for typical vector sizes looks better when we change
to a vector select, so this is safe to do without a cost model (in other words, as a target-independent
canonicalization).
For example, if the condition of the select is a scalar, we end up with something like this on x86:
vpcmpgtd %xmm0, %xmm1, %xmm0
vpextrb $12, %xmm0, %eax
testb $1, %al
jne LBB0_2
## %bb.1:
vmovaps %xmm3, %xmm2
LBB0_2:
vmovaps %xmm2, %xmm0
Rather than the splat-condition variant:
vpcmpgtd %xmm0, %xmm1, %xmm0
vpshufd $255, %xmm0, %xmm0 ## xmm0 = xmm0[3,3,3,3]
vblendvps %xmm0, %xmm2, %xmm3, %xmm0
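Sketching the replacement with IRBuilder (illustrative only; the variable
names are hypothetical):

  // Splat the scalar condition and select on the vector of conditions,
  // letting the backend emit a blend instead of a branch.
  Value *SplatCond = Builder.CreateVectorSplat(NumElts, ScalarCond);
  Value *VecSel = Builder.CreateSelect(SplatCond, TrueVec, FalseVec);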
Differential Revision: https://reviews.llvm.org/D66095
llvm-svn: 369140
Summary:
We tried to support EM_ASM with setjmp/longjmp in binaryen. But with dynamic
linking thrown into the mix, the code is no longer understandable and cannot
be maintained. We also discovered more bugs in the EM_ASM handling code.
To ensure maintainability and correctness of the binaryen code, EM_ASM will
no longer be supported with setjmp/longjmp. This is probably fine since the
support was added recently and hasn't been published.
Reviewers: tlively, sbc100, jgravelle-google, kripken
Reviewed By: tlively, kripken
Subscribers: dschuff, hiraditya, aheejin, sunfish, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D66356
llvm-svn: 369137
Again, it's weird that these are allowed. Since lowering support was added in
r368709 we started crashing on compiling the neon intrinsics test in the test
suite. This fixes the lowering to fold the single-element source/mask case into copies.
llvm-svn: 369135
Eventually we need to generalize combineExtractWithShuffle to handle all faux shuffles and handle truncate (and X86ISD::VTRUNC etc.) there, but we're not ready yet (still creates nodes on the fly, incomplete DemandedElts support, bad use of recursive Depth limit).
llvm-svn: 369134
Recommit with fixes for mac builders.
Summary:
AArch64InstrInfo::getInstSizeInBytes is incorrectly treating meta
instructions (e.g. CFI_INSTRUCTION) as normal instructions and
giving them a size of 4.
This results in branch relaxation calculating block sizes wrong.
Branch relaxation also considers alignment and thus a single
mistake can result in later blocks being incorrectly sized even
when they themselves do not contain meta instructions.
The net result is we might not relax a branch whose destination is
not within range.
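The shape of the fix (hedged, not the verbatim code):

  // Meta instructions (CFI, debug info, etc.) occupy no space in the
  // encoded instruction stream.
  if (MI.isMetaInstruction())
    return 0;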
Reviewers: nickdesaulniers, peter.smith
Reviewed By: peter.smith
Subscribers: javed.absar, kristof.beyls, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D66337
> llvm-svn: 369111
llvm-svn: 369133
Summary:
The scheduler's dependence graph gets the use-def dependencies by accessing the operands of the instructions in a bundle. However, buildTree_rec() may change the order of the operands in TreeEntry, and the scheduler is currently not aware of this. This is not causing any functional issues currently, because reordering is restricted to the operands of a single instruction. Once we support operand reordering across multiple TreeEntries, as shown here: http://www.llvm.org/devmtg/2019-04/slides/Poster-Porpodas-Supernode_SLP.pdf , the scheduler will need to get the correct operands from TreeEntry and not from the individual instructions.
In short, this patch:
- Connects the scheduler's bundle with the corresponding TreeEntry. It introduces new TE and Lane fields in ScheduleData (see the sketch after this list).
- Moves the location where the operands of the TreeEntry are initialized. This used to take place in newTreeEntry() setting one operand at a time, but is now moved pre-order just before the recursion of buildTree_rec(). This is required because the scheduler needs to access both operands of the TreeEntry in tryScheduleBundle().
- Updates the scheduler to access the instruction operands through the TreeEntry operands instead of accessing the instruction operands directly.
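Illustrative shape of the new links (simplified; only the fields named
above are shown):

  struct ScheduleData {
    // ... existing scheduling state ...
    TreeEntry *TE = nullptr; // the bundle's corresponding TreeEntry
    int Lane = -1;           // this instruction's lane within that entry
  };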
Reviewers: ABataev, RKSimon, dtemirbulatov, Ayal, dorit, hfinkel
Reviewed By: ABataev
Subscribers: hiraditya, llvm-commits, lebedev.ri, rcorcs
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D62432
llvm-svn: 369131
All uses of llvm::make_unique should have been replaced with
std::make_unique. This patch represents the last part of the migration
and removes the utility from LLVM.
Differential revision: https://reviews.llvm.org/D66259
llvm-svn: 369130
At some point we and/or CMake changed our build-mode-style builds from
$LLVM_OBJ_ROOT/bin/$CMAKE_CFG_INTDIR/
to
$LLVM_OBJ_ROOT/$CMAKE_CFG_INTDIR/bin/
which is way easier to use. But no one updated llvm-config.
https://reviews.llvm.org/D66326
llvm-svn: 369129
In Analysis.cpp:isInTailCallPosition, the instructions between a call and the ret are checked to see whether they block tail call optimization. If an instruction is an intrinsic call, only llvm.lifetime.end is allowed; any other intrinsic blocks the tail call. When compiling tcmalloc, we found an llvm.assume between a hot function call and the ret, which blocked the optimization. But llvm.assume doesn't generate any instructions, so it should not block the tail call.
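A sketch of the shape of the check (not the verbatim code):

  // Intrinsics that lower to no machine code are safe to skip here.
  if (const auto *II = dyn_cast<IntrinsicInst>(&I))
    if (II->getIntrinsicID() == Intrinsic::assume ||
        II->getIntrinsicID() == Intrinsic::lifetime_end)
      continue;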
Differential Revision: https://reviews.llvm.org/D66096
llvm-svn: 369125
Changes:
A condition for `!NeedsFrameRecord` was missing from the assert. The
assert in question has changed to:
+ assert((!RPI.isPaired() || !NeedsFrameRecord || RPI.Reg2 != AArch64::FP ||
+ RPI.Reg1 == AArch64::LR) &&
+ "FrameRecord must be allocated together with LR");
This addresses PR43016.
llvm-svn: 369122
Summary:
To be able to use the TextAPI/Reader for tbd file consumption (by libObject),
it now gets passed a MemoryBufferRef, which isn't castable to a MemoryBuffer.
The tests have been updated to expect that input as well.
Reviewers: ributzka, steven_wu
Reviewed By: steven_wu
Subscribers: hiraditya, dexonsmith, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D66147
llvm-svn: 369119
MVE also has some sign-extending loads, which will be free just as the
scalar instructions are.
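For instance, the scalar analogue in C (illustrative only):

  #include <stdint.h>

  // The sign extension folds into the load (a single LDRSH on Arm), so
  // it is effectively free on top of the load itself.
  int32_t load_sext(const int16_t *p) { return *p; }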
Differential Revision: https://reviews.llvm.org/D66008
llvm-svn: 369118
Summary:
This is continuation of D63829 / https://bugs.llvm.org/show_bug.cgi?id=42399
I thought a naive pattern would solve my issue, but no: it involved truncation,
thus more folds were needed. This isn't really the fold I'm interested in
(I need trunc-of-lshr), but I've decided to start with `shl` because it's simpler.
In this case, no extra legality checks are needed:
https://rise4fun.com/Alive/CAb
We should be careful about not increasing instruction count,
since we need to produce `zext` because `and` is done in wider type.
Reviewers: spatel, nikic, xbolva00
Reviewed By: spatel
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D66057
llvm-svn: 369117
Registers with unsigned type should now remain only in public interfaces
that have not yet been converted.
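For example, internal code now carries the wrapper type (illustrative):

  // Register rather than unsigned for register values.
  Register Reg = MI.getOperand(0).getReg();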
Differential Revision: https://reviews.llvm.org/D66252
llvm-svn: 369114
Summary:
AArch64InstrInfo::getInstSizeInBytes is incorrectly treating meta
instructions (e.g. CFI_INSTRUCTION) as normal instructions and
giving them a size of 4.
This results in branch relaxation calculating block sizes wrong.
Branch relaxation also considers alignment and thus a single
mistake can result in later blocks being incorrectly sized even
when they themselves do not contain meta instructions.
The net result is we might not relax a branch whose destination is
not within range.
Reviewers: nickdesaulniers, peter.smith
Reviewed By: peter.smith
Subscribers: javed.absar, kristof.beyls, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D66337
llvm-svn: 369111
The widening and narrowing MVE instructions like VLDRH.32 are only permitted to
use low tGPR registers. This means that if they are used for a stack slot,
where the register used is only decided during frame setup, we need to be able
to correctly pick a thumb1 register over a normal GPR.
This attempts to add the required logic to eliminateFrameIndex and
rewriteT2FrameIndex, only picking the FrameReg if it is a valid register for
the operand's register class, and picking a valid scratch register for the
register class.
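The check is roughly of this shape (hypothetical names, not the verbatim
code):

  // Only substitute FrameReg directly if the operand's register class
  // can actually encode it; otherwise go through a scratch register
  // from that class.
  if (RegClass->contains(FrameReg))
    MI.getOperand(FIOperandNum).ChangeToRegister(FrameReg, /*isDef=*/false);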
Differential Revision: https://reviews.llvm.org/D66285
llvm-svn: 369108