If the loop body contains conditions of the form IndVar < #constant, we
can remove the checks by peeling off #constant iterations.
This improves codegen for PR34364.
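As a quick C++-level illustration (hypothetical names, not from the patch), peeling off the #constant iterations turns the in-loop check into straight-line code:

  void init(int &);
  void use(int);

  void before(int *a, int n) {
    for (int i = 0; i < n; ++i) {
      if (i < 2)     // IndVar < #constant: true only on the first 2 iterations
        init(a[i]);
      use(a[i]);
    }
  }

  void after(int *a, int n) { // 2 iterations peeled; the check folds away
    if (n > 0) { init(a[0]); use(a[0]); }
    if (n > 1) { init(a[1]); use(a[1]); }
    for (int i = 2; i < n; ++i)
      use(a[i]);
  }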
Reviewers: mkuper, mkazantsev, efriedma
Reviewed By: mkazantsev
Differential Revision: https://reviews.llvm.org/D43876
llvm-svn: 327671
We were unnecessarily copying a bunch of these FunctionInfo objects
around when rehashing the DenseMap.
Furthermore, r327620 introduced pointers referring to objects owned by
FunctionInfo, and the default copy ctor did the wrong thing in this
case, leading to a use-after-free when the DenseMap gets rehashed.
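A minimal sketch of the hazard, with hypothetical members standing in for the real FunctionInfo fields:

  #include <vector>

  struct FunctionInfo {
    std::vector<int> Storage;
    int *CachedEntry = nullptr; // points into this object's Storage

    FunctionInfo() = default;
    // The implicit copy ctor copies CachedEntry verbatim, leaving the new
    // object pointing into the old object's Storage; once a DenseMap rehash
    // destroys the old object, that pointer dangles.  Deleting the copy
    // operations and moving instead avoids both the needless copies and the
    // use-after-free (a vector move transfers its buffer, so interior
    // pointers stay valid).
    FunctionInfo(const FunctionInfo &) = delete;
    FunctionInfo(FunctionInfo &&) = default;
  };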
I will rebase r327620 on this next and recommit it.
llvm-svn: 327665
It is common to have conditional exits within a loop which are known not to be taken on some iterations, but not necessarily all. This patch extends our reasoning around "guaranteed to execute" (used when establishing whether it's safe to dereference a location from the preheader) to handle the case where an exit is known not to be taken on the first iteration while the instruction of interest *is* known to be executed on the first iteration (see the sketch after the list below).
This case comes up in two major ways:
* If we have a range check which we've been unable to eliminate, we frequently know that it doesn't fail on the first iteration.
* Pass ordering. We may have a check which will be eliminated through some sequence of other passes, but depending on the exact pass sequence we might never actually do so or we might miss other optimizations from passes run before the check is finally eliminated.
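For instance, in a hypothetical C++ analogue of the IR this targets:

  int sumInvariant(const int *p, unsigned n, unsigned bound) {
    int s = 0;
    for (unsigned i = 0; i < n; ++i) {
      if (i >= bound) // range check: known not to fail when i == 0,
        break;        // provided bound is known to be non-zero
      s += *p;        // guaranteed to execute on the first iteration,
    }                 // so the load of *p may be hoisted to the preheader
    return s;
  }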
The initial version (here) is implemented via InstSimplify. At the moment, it catches a few cases, but misses a lot too. I added test cases for missing cases in InstSimplify which I'll follow up on separately. Longer term, we should probably wire SCEV through to here to get much smarter loop aware simplification of the first iteration predicate.
Differential Revision: https://reviews.llvm.org/D44287
llvm-svn: 327664
The FADD part of the addsub/subadd pattern can have its operands commuted, but when checking for fsubadd we were using the fadd as the reference and commuting the fsub node.
llvm-svn: 327660
The code that creates fmsubadd from a shuffle vector allows commuting the operands of the fadd node. That code was originally written when we only recognized fmaddsub; it was not updated when fmsubadd support was added and now commutes the fsub operands instead.
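The asymmetry is plain scalar arithmetic (a standalone illustration, not the ISel code):

  #include <cassert>

  int main() {
    float a = 2.0f, b = 3.0f, c = 1.0f;
    // fmsubadd subtracts c in some lanes (a*b - c) and adds it in the
    // others (a*b + c).  The fadd half tolerates commuted operands:
    assert(a * b + c == c + a * b);
    // ...but the fsub half does not, so commuting its operands while
    // matching changes the computation:
    assert(a * b - c != c - a * b);
  }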
llvm-svn: 327659
If we've already established an invariant scope with an earlier generation, we don't want to hide it in the scoped hash table behind one with a later generation. I noticed this when working on the invariant-load handling, but it also applies to the invariant.start case.
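A simplified sketch of why insertion order matters, using llvm::ScopedHashTable with stand-in key/value types (not the actual EarlyCSE code):

  #include "llvm/ADT/ScopedHashTable.h"
  using namespace llvm;

  void demo(const void *Loc) {
    ScopedHashTable<const void *, unsigned> AvailableInvariants;
    ScopedHashTableScope<const void *, unsigned> S(AvailableInvariants);
    AvailableInvariants.insert(Loc, /*Generation=*/1);
    // Inserting again for the same key shadows the earlier entry:
    AvailableInvariants.insert(Loc, /*Generation=*/4);
    // lookup() now yields 4, hiding the earlier (more useful) generation 1,
    // so the pass must check for an existing entry before inserting.
    unsigned Gen = AvailableInvariants.lookup(Loc);
    (void)Gen;
  }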
Without this change, my previous patch for invariant-load regresses some cases, so I'm pushing this without waiting for review. This is why you don't make last-minute tweaks to patches to catch "obvious cases" after they've already been reviewed. Bad Philip!
llvm-svn: 327655
PR35402 triggered this case. The code byte-swaps and stores a 48-bit value, and the current STBRX optimization transforms it into an STBRX node. Unfortunately, i48 is not a simple MVT: there is no PPC instruction to support it, and it can't be automatically expanded by LLVM, so this caused a crash.
This patch detects the non-simple MVT and returns early.
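The guard is roughly of this shape (a sketch with assumed variable names, not the exact PPCISelLowering code):

  EVT VT = Store->getMemoryVT();
  if (!VT.isSimple())
    return SDValue(); // e.g. i48: no simple MVT and no PPC instruction for it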
Differential Revision: https://reviews.llvm.org/D44500
llvm-svn: 327651
Rather than enumerating all specific types, for the DAG combine we can just use TLI::isTypeLegal and an SSE3 check. For the BUILD_VECTOR version we already know the type is legal so we just need to check SSE3.
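Roughly, under assumed local names, the combined check becomes:

  if (!Subtarget.hasSSE3() || !TLI.isTypeLegal(VT))
    return SDValue(); // replaces an explicit list of supported vector types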
llvm-svn: 327649
It previously only worked when the key and value types were
both 4-byte integers. We now have a use case for a non-trivial
value type, so we need to extend it to support arbitrary value
types, which means templatizing it.
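The gist of the generalization, with hypothetical names:

  // Before: key and value were both hard-coded 4-byte integers, e.g.
  //   uint32_t lookup(uint32_t Key);
  // After: the value type becomes a template parameter.
  template <typename ValueT> class Table {
  public:
    ValueT lookup(uint32_t Key);
  };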
llvm-svn: 327647
This is a follow-up to https://reviews.llvm.org/D43716 which rewrites the invariant load handling using the new infrastructure. It's slightly more powerful, but for the moment only in fairly minor ways. It's not clear that DSE of stores to invariant locations is actually interesting, since why would your IR contain such a construct in the first place?
Note: The submitted version is slightly different from the reviewed one. I realized the scope could start at an invariant load which was proven redundant and removed. Added a test case to illustrate that as well.
Differential Revision: https://reviews.llvm.org/D44497
llvm-svn: 327646
Both DispatchUnit::checkRAT() and DispatchUnit::canDispatch() now take an
Instruction reference as input instead of an instruction descriptor.
This was requested by Simon in D44488 to simplify the diff.
llvm-svn: 327640
This patch adds new load/store instructions for integer scalar types
which can be used in X-Form when fed by an add with an @tls relocation.
Differential Revision: https://reviews.llvm.org/D43315
llvm-svn: 327635
As discussed on D44428 and PR36726, this patch splits off WriteFMove/WriteVecMove, WriteFLoad/WriteVecLoad and WriteFStore/WriteVecStore scheduler classes to permit vectors to be handled separately from gpr/scalar types.
I've minimised the diff here by only moving various basic SSE/AVX vector instructions across - we can fix the rest when called for. This does fix the MOVDQA vs MOVAPS/MOVAPD discrepancies mentioned on D44428.
Differential Revision: https://reviews.llvm.org/D44471
llvm-svn: 327630
This matcher implementation appears to be slightly more efficient than
the generic constant check it replaces: every use was for matching FP
patterns, whereas the previous code would also check for integer and
pointer nulls.
llvm-svn: 327627
From the LangRef definition for frem:
"The value produced is the floating-point remainder of the two operands.
This is the same output as a libm ‘fmod‘ function, but without any
possibility of setting errno. The remainder has the same sign as the
dividend. This instruction is assumed to execute in the default
floating-point environment."
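For concreteness, the sign rule as exhibited by the libm fmod that frem mirrors:

  #include <cmath>
  #include <cstdio>

  int main() {
    std::printf("%f\n", std::fmod(-5.0, 3.0)); // -2.0: sign follows the dividend
    std::printf("%f\n", std::fmod(5.0, -3.0)); //  2.0: the divisor's sign is ignored
  }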
llvm-svn: 327626
This is a follow-up to the AArch64 FP16 intrinsics work;
the codegen tests had not been added yet.
Differential Revision: https://reviews.llvm.org/D44510
llvm-svn: 327624
When hoisting common code from the "then" and "else" branches of a condition
to before the "if", the HoistThenElseCodeToIf routine will attempt to merge
the debug locations associated with the two original copies of the hoisted
instruction.
This is a problem in the special case where the hoisted instruction is a
debug info intrinsic, since for those the debug location is considered
part of the intrinsic and attempting to modify it may result in invalid
IR. This is the underlying cause of PR36410.
This patch fixes the problem by handling debug info intrinsics specially:
instead of hoisting one copy and merging the two locations, the code now
simply hoists both copies, each with its original location intact. Note
that this is still only done in the case where both original copies are
otherwise (i.e. apart from location metadata) identical.
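Schematically (a simplified fragment, not the exact SimplifyCFG code; I1/I2 are the two identical copies and BI the branch at the hoisting point):

  if (isa<DbgInfoIntrinsic>(I1)) {
    // Hoist both copies, each keeping its original location intact.
    I1->moveBefore(BI);
    I2->moveBefore(BI);
  } else {
    // Usual path: hoist one copy and merge the two debug locations
    // (the merge may drop mismatched line information).
    I1->moveBefore(BI);
    // ... merge I1's and I2's locations onto I1, then drop I2 ...
  }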
Reviewed By: aprantl
Differential Revision: https://reviews.llvm.org/D44312
llvm-svn: 327622
This patch sorts local variables by lexical scope and emits them inside
an appropriate S_BLOCK32 CodeView symbol.
Differential Revision: https://reviews.llvm.org/D42926
llvm-svn: 327620
There are two nontrivial details here:
* The loop structure update interface is quite different with the new pass
manager, so the code that adds new loops was factored out.
* BranchProbabilityInfo is not a loop analysis, so it cannot simply be
getResult'ed from within the loop pass. It can't even be queried through
getCachedResult, as the loop canonicalization sequence (e.g. LoopSimplify)
might invalidate the BPI results.
A complete solution for BPI will likely take some time to discuss and figure
out, so for now this is partially solved by making BPI optional in IRCE
(skipping a couple of profitability checks if it is absent).
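In effect (a hypothetical shape of the check, with an invented helper name):

  // BPI arrives as a pointer and may be null under the new pass manager:
  if (BPI && !isProfitable(*BPI, L))
    return false; // when BPI is absent, the profitability check is skipped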
Most of the IRCE tests got their corresponding new-pass-manager variant enabled.
Only two of them depend on BPI, both marked with TODO, to be turned on when BPI
starts being available for loop passes.
Reviewers: chandlerc, mkazantsev, sanjoy, asbirlea
Reviewed By: mkazantsev
Differential Revision: https://reviews.llvm.org/D43795
llvm-svn: 327619
Summary:
Before this patch, the call graph in the LoopUnrollPass looked like this:
tryToUnrollLoop
  ApproximateLoopSize
    collectEphemeralValues
    /* Use collected ephemeral values */
  computeUnrollCount
    analyzeLoopUnrollCost
      /* Bail out from the analysis if loop contains CallInst */
This patch moves the collection of ephemeral values up into tryToUnrollLoop,
passes the collected values into ApproximateLoopSize (as before), and
additionally starts using them in analyzeLoopUnrollCost:
tryToUnrollLoop
  collectEphemeralValues
  ApproximateLoopSize(EphValues)
    /* Use EphValues */
  computeUnrollCount(EphValues)
    analyzeLoopUnrollCost(EphValues)
      /* Ignore ephemeral values - they don't contribute to the final cost */
      /* Bail out from the analysis if loop contains CallInst */
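For reference, an ephemeral value is one that only feeds assumptions and so produces no real code; a C++-level illustration using clang's __builtin_assume:

  void consume(int x) {
    bool t = (x > 0);    // ephemeral: consumed only by the assume below
    __builtin_assume(t); // generates no code, so neither t nor the assume
                         // should count toward the unroll cost
  }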
Reviewers: mzolotukhin, evstupac, sanjoy
Reviewed By: evstupac
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D43931
llvm-svn: 327617
The methods `computeExitLimitFromCondCached` and `computeExitLimitFromCondImpl` take
the true and false branches as parameters but only use them for asserts and for
identifying whether the true/false branch belongs to the loop (which can be done
once, earlier). This complicates generalizing the exit limit computation logic to
guards, because guards have no explicit block to which they branch on failure.
The motivation for this patch is that this part of SCEV currently knows nothing
about guards and only works with explicit branches. As a result, it fails to prove
that the loop

  for (i = 0; i < 100; i++)
    guard(i < 10);

exits after the 10th iteration, while in the equivalent example

  for (i = 0; i < 100; i++)
    if (i >= 10) break;

SCEV easily proves this fact. We are going to change this in the near future, which
is why we need to make these methods operate at a more abstract level.
This patch refactors the code to get rid of these parameters, which are now
meaningless, and prepares the ground for teaching these methods to work with
guards as well as with explicit branch instructions.
Differential Revision: https://reviews.llvm.org/D44419
llvm-svn: 327615
There is no 512-bit addsub instruction, but we partially match it while handling fmaddsub matching. We explicitly bail out for 512-bit vectors after failing the fmaddsub match, but we had no test coverage for that bail-out.
We might want to consider splitting and using 256-bit instructions instead of the long sequence seen here.
llvm-svn: 327605
Summary:
Some PDB symbols do not have a valid VA or RVA but do have an address given as a Section and Offset pair. For example, a variable in thread-local storage has the following properties:

  get_addressOffset: 0
  get_addressSection: 5
  get_lexicalParentId: 2
  get_name: g_tls
  get_symIndexId: 12
  get_typeId: 4
  get_dataKind: 6
  get_symTag: 7
  get_locationType: 2
This change provides a new method to locate line numbers by Section and Offset from those symbols.
Reviewers: zturner, rnk, llvm-commits
Subscribers: asmith, JDevlieghere
Differential Revision: https://reviews.llvm.org/D44407
llvm-svn: 327601
Summary:
This variable is largely unused, aside from reporting the number of instructions in DEBUG builds.
The only use of NumInstructions is in debug output, to represent the LoopSize. That value can be misleading, as it also includes metadata instructions (e.g., DBG_VALUE) which have no real impact. If we do choose to keep it around, we should probably guard it with a DEBUG macro, as it's not used in production builds.
Reviewers: majnemer, congh, rengolin
Reviewed By: rengolin
Subscribers: llvm-commits, rengolin
Differential Revision: https://reviews.llvm.org/D44495
llvm-svn: 327589