Besides better codegen, the motivation is to be able to canonicalize this pattern
in IR (currently we don't) knowing that the backend is prepared for that.
This may also allow removing code for special constant cases in
DAGCombiner::foldSelectOfConstants() that was added in D30180.
Differential Revision: https://reviews.llvm.org/D31944
llvm-svn: 301457
Summary of changes:
- corrected vmcnt, expcnt, lgkmcnt helpers to check their arguments for truncation;
- added saturated versions of these helpers.
See bug 32711 for details: https://bugs.llvm.org//show_bug.cgi?id=32711
Reviewers: artem.tamazov, vpykhtin
Differential Revision: https://reviews.llvm.org/D32546
llvm-svn: 301439
Marking them as used causes them to be considered visible outside of LTO. This
prevents the symbols from being internalized or discarded, either by GlobalDCE
or by summary-based dead stripping in ThinLTO.
This change makes it unnecessary to add these symbols to llvm.compiler.used
in the backend, as the symbols are kept alive by virtue of being external,
so remove the backend code that handles that.
Fixes PR32798.
Differential Revision: https://reviews.llvm.org/D32544
llvm-svn: 301438
This patch introduces a new KnownBits struct that wraps the two APInt used by computeKnownBits. This allows us to treat them as more of a unit.
Initially I've just altered the signatures of computeKnownBits and InstCombine's simplifyDemandedBits to pass a KnownBits reference instead of two separate APInt references. I'll do similar to the SelectionDAG version of computeKnownBits/simplifyDemandedBits as a separate patch.
I've added a constructor that allows initializing both APInts to the same bit width with a starting value of 0. This reduces the repeated pattern of initializing both APInts. One place default-constructed the APInts, so I added a default constructor for those cases.
Going forward I would like to add more methods that will work on the pairs. For example trunc, zext, and sext occur on both APInts together in several places. We should probably add a clear method that can be used to clear both pieces. Maybe a method to check for conflicting information. A method to return (Zero|One) so we don't write it out everywhere. Maybe a method for (Zero|One).isAllOnesValue() to determine if all bits are known. I'm sure there are many other methods we can come up with.
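For reference, here is a rough sketch of the struct's shape as described above (a
simplification for illustration, not the exact definition from the patch):

#include "llvm/ADT/APInt.h"

// Wraps the two APInts that were previously passed around separately.
struct KnownBits {
  llvm::APInt Zero; // bits known to be 0
  llvm::APInt One;  // bits known to be 1

  // Default-construct both APInts (for the places that did so before).
  KnownBits() = default;

  // Initialize both APInts to BitWidth bits with all bits cleared,
  // i.e. nothing is known yet.
  KnownBits(unsigned BitWidth) : Zero(BitWidth, 0), One(BitWidth, 0) {}

  unsigned getBitWidth() const { return Zero.getBitWidth(); }
};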
Differential Revision: https://reviews.llvm.org/D32376
llvm-svn: 301432
Commits were:
"Use WeakVH instead of WeakTrackingVH in AliasSetTracker's UnkownInsts"
"Add a new WeakVH value handle; NFC"
"Rename WeakVH to WeakTrackingVH; NFC"
The changes assumed pointers are 8-byte aligned on all architectures.
llvm-svn: 301429
Summary:
In cases where an instruction (a call site, say) is RAUW'ed with some
other value (this is possible via the `returned` attribute, amongst
other things), we want the slot in UnknownInsts to point to the
original Instruction we wanted to track, not the value it got replaced
by.
Fixes PR32587.
Reviewers: davide
Subscribers: mcrosier, llvm-commits
Differential Revision: https://reviews.llvm.org/D32268
llvm-svn: 301426
Summary:
WeakVH nulls itself out if the value it was tracking gets deleted, but
it does not track RAUW.
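To illustrate the intended distinction, here is a hedged usage sketch (assuming the
handle names after the rename in the related commits; not code from this patch):

#include "llvm/IR/Instruction.h"
#include "llvm/IR/ValueHandle.h"

// WeakVH only nulls itself out on deletion; WeakTrackingVH additionally
// follows replaceAllUsesWith (RAUW).
void handleSemantics(llvm::Instruction *Old, llvm::Instruction *New) {
  llvm::WeakVH OnDeleteOnly(Old);
  llvm::WeakTrackingVH FollowsRAUW(Old);

  Old->replaceAllUsesWith(New);
  // OnDeleteOnly still refers to Old; FollowsRAUW now refers to New.

  Old->eraseFromParent();
  // OnDeleteOnly is now null; FollowsRAUW still refers to New.
}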
Reviewers: dblaikie, davide
Subscribers: mcrosier, llvm-commits
Differential Revision: https://reviews.llvm.org/D32267
llvm-svn: 301425
Summary:
I plan to use WeakVH to mean "nulls itself out on deletion, but does
not track RAUW" in a subsequent commit.
Reviewers: dblaikie, davide
Reviewed By: davide
Subscribers: arsenm, mehdi_amini, mcrosier, mzolotukhin, jfb, llvm-commits, nhaehnle
Differential Revision: https://reviews.llvm.org/D32266
llvm-svn: 301424
The SampleProfWriter emits function information in an order determined
by the string hash function. The situation is a bit brittle, because
changing the hash function can break the tests.
Instead of sorting the function samples to get a reliable ordering (that
might be too expensive), make the tests not depend on a particular
ordering of function samples.
Differential Revision: https://reviews.llvm.org/D32516
llvm-svn: 301419
Build vectors have magical truncation powers, so we have things like this:
v4i1 = BUILD_VECTOR Constant:i32<1>, Constant:i32<1>, Constant:i32<1>, Constant:i32<1>
v4i16 = BUILD_VECTOR Constant:i32<1>, Constant:i32<1>, Constant:i32<1>, Constant:i32<1>
If we don't truncate the splat node returned by getConstantSplatNode(), then we won't find
truth when ZeroOrNegativeOneBooleanContent is the rule.
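As a minimal illustration (a hypothetical helper, not the actual DAGCombiner change),
the i32-typed splat constant has to be truncated to the element width before it can
be recognized as the all-ones "true" value:

#include "llvm/ADT/APInt.h"

// The BUILD_VECTOR operands above are i32 constants even when the element
// type is i1 or i16, so truncate the splat constant to the element width
// before testing for the all-ones pattern that
// ZeroOrNegativeOneBooleanContent requires.
bool isTrueBoolSplat(const llvm::APInt &SplatVal, unsigned EltBits) {
  return SplatVal.trunc(EltBits).isAllOnesValue();
}

// i32 1 truncated to 1 bit  -> all-ones, so the v4i1 splat is "true";
// i32 1 truncated to 16 bits -> not all-ones, so the v4i16 splat is not.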
Differential Revision: https://reviews.llvm.org/D32505
llvm-svn: 301408
For targets that don't have ISD::MULHS or ISD::SMUL_LOHI for the type,
and where the double-width type is illegal, the two operands are
sign-extended to twice their size and then multiplied to check for
overflow. The extended upper halves were mismatched, causing an
incorrect result. This fixes the mismatch.
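As an illustration of what the expansion is meant to compute (a standalone sketch
of the arithmetic for i32, not the actual DAG lowering code):

#include <cstdint>

// Sign-extend both operands to twice the width, multiply, and report
// overflow when the product does not fit back into the original type,
// i.e. when the upper half is not just the sign extension of the lower half.
bool smulo32(int32_t A, int32_t B, int32_t &Result) {
  int64_t Wide = static_cast<int64_t>(A) * static_cast<int64_t>(B);
  Result = static_cast<int32_t>(Wide);
  return Wide != static_cast<int64_t>(Result); // true on overflow
}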
A test was added for ARM V6-M where the bug was detected.
Patch by James Duley.
Differential Revision: https://reviews.llvm.org/D31807
llvm-svn: 301404
Summary:
Otherwise we might end up with some empty basic blocks or
single-entry-single-exit basic blocks.
This fixes PR32085.
Reviewers: chandlerc, danielcdh
Subscribers: mehdi_amini, RKSimon, llvm-commits
Differential Revision: https://reviews.llvm.org/D30468
llvm-svn: 301395
Removed the microMIPS register classes from gp initialization, since gp initialization uses pure MIPS64 instructions; even when compiling for microMIPS, gp initialization can be done with pure MIPS64 instructions.
Reviewed by Simon Dardis
Differential: D32286
llvm-svn: 301394
r299766 contained a "conditional move or jump depends on uninitialized value"
fault, identified by valgrind. This occurred as MipsFastISel::finishCall(..)
used CCState over MipsCCState. The latter is required for the TableGen'd calling
convention logic due to reliance on pre-analyzing type information to lower call
results/returns of vectors correctly.
This change modifies the MipsCC AnalyzeCallResult to be useful with both the
SelectionDAG and FastISel lowering logic.
Reviewers: slthakur
Differential Revision: https://reviews.llvm.org/D32004
llvm-svn: 301392
Summary:
Expose the internal query structure, start using it.
Note: This is the most minimal change possible I could create. I have
trivial followups, like fixing the one use of const FastMathFlags &,
the renaming of CtxI to be consistent, etc.
This should be NFC.
Reviewers: majnemer, davide
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D32448
llvm-svn: 301379
If the Select pseudo instruction doesn't have SR as a use, then the
CMP instructions are marked as dead and can later be removed by the
MachineCSE pass. This leads to incorrect code generation.
Differential Revision: https://reviews.llvm.org/D32473
llvm-svn: 301372
The order in which GCOV file info is printed depends on the string hash
function. This makes some GCOV tests brittle, because the tests must be
updated whenever the hash function changes.
Sort the filenames before printing out the file info to solve the
problem. This should be relatively cheap.
Differential Revision: https://reviews.llvm.org/D32512
llvm-svn: 301371
Summary:
Addends are used as offsets to addresses of globals
and can be both positive and negative. This change
brings libObject in line with the spec and the MC
layer.
Subscribers: jfb, dschuff
Differential Revision: https://reviews.llvm.org/D32507
llvm-svn: 301369