Begin replacing individual getMemIntrinsicNode calls and their setup (for the X86ISD::VBROADCAST_LOAD + X86ISD::SUBV_BROADCAST_LOAD opcodes) with a common getBROADCAST_LOAD helper.
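Roughly, such a helper takes the shape below. This is a sketch only; the name matches the commit, but the exact signature and any offset handling in the in-tree helper may differ.
```
// Sketch only: build a VBROADCAST_LOAD / SUBV_BROADCAST_LOAD node from an
// existing load, reusing its chain, pointer and memory operand.
static SDValue getBROADCAST_LOAD(unsigned Opcode, const SDLoc &DL, EVT VT,
                                 EVT MemVT, LoadSDNode *Ld,
                                 SelectionDAG &DAG) {
  SDVTList Tys = DAG.getVTList(VT, MVT::Other);
  SDValue Ops[] = {Ld->getChain(), Ld->getBasePtr()};
  SDValue BcstLd = DAG.getMemIntrinsicNode(Opcode, DL, Tys, Ops, MemVT,
                                           Ld->getMemOperand());
  // Keep users of the original load's chain ordered after the new load.
  DAG.makeEquivalentMemoryOrdering(Ld, BcstLd);
  return BcstLd;
}
```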
This patch introduces a new RAII struct that will temporarily make an OpenMP
RTL function have external linkage. This is done before the attributor is
invoked to prevent it from incorrectly removing some function definitions that
we will use later. For example, if we determine that all calls to one
function are dead, it can safely be removed because it has internal
linkage. Later, when we try to get a handle to that function in order
to modify the source using `getOrCreateRuntimeFunction`, we will then
get an empty declaration for that function that is not defined
anywhere. This patch prevents this from occurring.
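A minimal sketch of what such an RAII helper could look like, assuming it operates on a plain llvm::Function (the in-tree struct may differ in name and details):
```
#include "llvm/IR/Function.h"

// Sketch: temporarily give a runtime function external linkage so the
// Attributor cannot delete its definition; restore the old linkage on
// destruction.
struct ExternalizationRAII {
  ExternalizationRAII(llvm::Function &F)
      : F(F), OldLinkage(F.getLinkage()) {
    F.setLinkage(llvm::GlobalValue::ExternalLinkage);
  }
  ~ExternalizationRAII() { F.setLinkage(OldLinkage); }

  llvm::Function &F;
  llvm::GlobalValue::LinkageTypes OldLinkage;
};
```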
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D106707
If the rotation amount is a known splat, perform the modulo on the splat source, and then perform the splat. That way the amount-extension performed later by LowerScalarVariableShift can fold the splats away without any multiple-use issues.
Fixes one of the concerns raised on D104156
Rather than adding methods for dropping these attributes in
various places, add a function that returns an AttrBuilder with
these attributes, which can then be used with existing methods
for dropping attributes. This is with an eye on D104641, which
also needs to drop them from returns, not just parameters.
Also be more explicit about the semantics of the method in the
documentation. Refer to UB rather than Undef, which is what this
is actually about.
From LangRef:
> if the parameter or return pointer is null, poison value is
> returned or passed instead. The nonnull attribute should be
> combined with the noundef attribute to ensure a pointer is not
> null or otherwise the behavior is undefined.
Dropping noundef is sufficient to prevent UB. Including nonnull
in this method just muddies the semantics.
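As a rough sketch of the shape of such a helper (the name and the exact attribute set here are illustrative, not taken from the patch):
```
#include "llvm/IR/Attributes.h"

// Sketch: gather the attributes whose violation is immediate UB, so callers
// can drop them with the existing attribute-removal methods.
llvm::AttrBuilder getUBImplyingAttributes() {
  llvm::AttrBuilder AB;
  AB.addAttribute(llvm::Attribute::NoUndef);
  // The byte counts don't matter here; only the attribute kinds are
  // consulted when removing attributes.
  AB.addDereferenceableAttr(1);
  AB.addDereferenceableOrNullAttr(1);
  return AB;
}
```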
Bug Fix for PR: https://llvm.org/PR47960
This patch makes sure that the fast math flag used in the 'select'
instruction is the same as the 'fabs' instruction after the transformation.
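As an illustration of the idea (a sketch, not the verbatim patch; here `SI` is assumed to be the select, `X` its operand, and `Builder` InstCombine's IRBuilder):
```
// Sketch: passing the select as the FMF source makes the newly created
// llvm.fabs call inherit the select's fast-math flags.
Value *Fabs = Builder.CreateUnaryIntrinsic(Intrinsic::fabs, X, &SI);
return replaceInstUsesWith(SI, Fabs);
```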
Differential Revision: https://reviews.llvm.org/D101727
This is a minimal extension of D106607 to allow folding for
2 non-zero constants that can be materialized as immediates.
In the reduced test examples, we save 1 instruction by rolling
the constants into LEA/ADD. In the motivating test from the bullet
benchmark, we absorb both of the constant moves into add ops via
LEA magic, so we reduce by 2 instructions.
Differential Revision: https://reviews.llvm.org/D106684
This patch fixes the warning:
llvm/include/llvm/Analysis/InlineCost.h:62:3: error: definition of
implicit copy assignment operator for 'CostBenefitPair' is
deprecated because it has a user-declared copy constructor
[-Werror,-Wdeprecated-copy]
by removing the explicit copy constructor.
As noticed on D105390, we were hardwiring the depth limit for combining to VPERMI2W/VPERMI2B instructions. Not only was the limit too low, we also hadn't accounted for slow/fast shuffles via the VariableCrossLaneShuffleDepth control.
For types like s96, we don't want to clamp to s64; we want to first widen to
s128 and then narrow it. Otherwise we end up with impossible-to-legalize types.
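A hedged sketch of that rule ordering in GlobalISel's LegalizeRuleSet (the opcode and clamp bounds are placeholders, not the actual rules):
```
// Sketch: widen odd-sized scalars (e.g. s96) to the next power of two first,
// so they become s128; only then clamp, which narrows s128 cleanly.
// G_SOME_OP and the clamp bounds are placeholders.
getActionDefinitionsBuilder(G_SOME_OP)
    .widenScalarToNextPow2(/*TypeIdx=*/0)
    .clampScalar(/*TypeIdx=*/0, LLT::scalar(32), LLT::scalar(64));
```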
I stumbled onto a case where our (sext_inreg (assertzexti32 (fptoui X)), i32)
isel pattern can cause an fcvt.wu and fcvt.lu to be emitted if
the assertzexti32 has an additional user. Adding a one-use check
would just cause an fcvt.lu followed by a sext.w when we only need
an fcvt.wu to satisfy both users.
To mitigate this I've added custom isel and new ISD opcodes for
fcvt.wu. This allows us to remember that the value started life as a
conversion to i32 without needing to match multiple nodes.
ComputeNumSignBits has been taught that these new nodes produce 33
sign bits. To
prevent regressions when we need to zero extend the result of an
(i32 (fptoui X)), I've added a DAG combine to convert it to an
(i64 (fptoui X)) before type legalization. In most cases this would
happen in InstCombine, but a zero_extend can be created for function
returns or arguments.
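Roughly, the combine has this shape (a simplified sketch inside the target's DAG combine hook; it omits the legality and combine-level checks a real combine performs):
```
// Sketch: (i64 (zero_extend (i32 (fptoui X)))) -> (i64 (fptoui X)), applied
// before type legalization so the i32 conversion never reaches isel.
if (N->getOpcode() == ISD::ZERO_EXTEND && N->getValueType(0) == MVT::i64) {
  SDValue Src = N->getOperand(0);
  if (Src.getOpcode() == ISD::FP_TO_UINT && Src.getValueType() == MVT::i32)
    return DAG.getNode(ISD::FP_TO_UINT, SDLoc(N), MVT::i64,
                       Src.getOperand(0));
}
```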
To keep everything consistent I've added new nodes for fptosi as well.
Reviewed By: luismarques
Differential Revision: https://reviews.llvm.org/D106346
When BasicTTIImpl::getCastInstrCost can't determine the cost of a
vector cast operation when the types need legalization, it falls
back to calculating scalarization costs. Instead of crashing on
`cast<FixedVectorType>(DstVTy)` when the type is a scalable vector,
return an Invalid cost.
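The guard amounts to something like the following simplified sketch:
```
// Sketch: scalarization is meaningless for scalable vectors, so bail out
// with an invalid cost instead of asserting in cast<FixedVectorType>.
if (isa<ScalableVectorType>(DstVTy))
  return InstructionCost::getInvalid();
```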
Reviewed By: david-arm
Differential Revision: https://reviews.llvm.org/D106655
Many of the tests used NEXT when DAG is more appropriate. In
some cases single DAG lines were used. Note that these are
manually written tests because they're too complex for update_llc_test_checks.py
and so it's worth not relying too much on the ordered output.
I've also made the CHECK lines more uniform when it comes to the
ordering of things like LO/HI.
If value tracking can confirm that the cttz/ctlz source is known non-zero then we don't need to create a branch (which DAG will struggle to recover from).
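Conceptually, the check is along these lines (a sketch; `Op` and `DL` stand for the cttz/ctlz source and the DataLayout):
```
// Sketch: if value tracking proves the source is non-zero, the zero-input
// special case of cttz/ctlz can never trigger, so the guarding branch (and
// the select on the result) can be skipped entirely.
bool NeedsZeroCheck = !isKnownNonZero(Op, DL);
```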
Differential Revision: https://reviews.llvm.org/D106685
Most other registers are allocatable and therefore cannot be used.
This issue was flagged by the machine verifier, because reading other
registers is considered reading from an undefined register.
Differential Revision: https://reviews.llvm.org/D96969
This patch fixes some issues with the RORB pseudo instruction.
- A minor issue in which the instructions were said to use the SREG,
which is not true.
- An issue with the BLD instruction, which did not have an output operand.
- A major issue in which invalid instructions were generated. The fix
also reduces RORB from 4 to 3 instructions, so it's also a small
optimization.
These issues were flagged by the machine verifier.
Differential Revision: https://reviews.llvm.org/D96957
This patch makes sure shift instructions such as this one:
%result = shl i32 %n, %amount
are expanded to a loop just before the IR to SelectionDAG conversion, so
that calls to non-existent library functions such as __ashlsi3 are
avoided. The generated code is currently pretty bad but there's a lot of
room for improvement: the shift itself can be done in just four
instructions.
Differential Revision: https://reviews.llvm.org/D96677
There were some serious issues with atomic operations. This patch should
fix the biggest issues.
For details on the issue take a look at this Compiler Explorer sample:
https://godbolt.org/z/n3ndhn
Code:
void atomicadd(_Atomic char *val) {
    *val += 5;
}
Output:
atomicadd:
    movw r26, r24
    ldi r24, 5     ; 'operand' register
    in r0, 63
    cli
    ld r24, X      ; load value
    add r24, r26   ; value += X
    st X, r24      ; store value back
    out 63, r0
    ret            ; return the wrong value (in r24)
There are various problems with this.
- The value to add (5) is stored in r24. However, the value to add to
is loaded in the same register: r24.
- The `add` instruction adds half of the pointer to the loaded value,
instead of (attempting to) add the operand with value 5.
- The output value of the cmpxchg instruction (which is not used in
this code sample) is the new value with 5 added, not the old value.
The LangRef specifies that it has to be the old value, before the
operation.
This patch fixes the first two and leaves the third problem to be fixed
at a later date. I believe atomics were mostly broken before this patch;
with this patch they should become usable as long as you ignore the
output of the atomic operation. In particular it fixes the following
things:
- It sets the earlyclobber flag for the input ('$operand' operand) so
that the register allocator puts it in a different register than the
output value.
- It fixes a number of issues with the pseudo op expansion pass, for
example now it adds the $operand field instead of the pointer. This
fixes most machine instruction verifier issues (other flagged issues
are unrelated to atomics).
Differential Revision: https://reviews.llvm.org/D97127
In most cases, using R31R30 is fine because the call (which always
precedes ADJCALLSTACKDOWN) will clobber R31R30 anyway. However, in some
rare cases the register allocator might insert an instruction between
the call and the ADJCALLSTACKDOWN instruction and expect the register
pair to be live afterwards. I think this happens as a result of
rematerialization. Therefore, to fix this, the instruction needs to have
Defs set to R31R30.
Setting the Defs field does have the effect of making the instruction
look dead, which it certainly is not. This is fixed by setting
hasSideEffects to true.
Differential Revision: https://reviews.llvm.org/D97745
Previously, AVRTargetLowering::LowerCall attempted to keep stack stores
in order with chains. Perhaps this worked in the past, but it does not
work now: it appears that the SelectionDAG legalization phase removes
these chains. Therefore, I've removed these chains entirely to match
X86 (which, similar to AVR, also prefers to use push instructions over
stack-relative stores to set up a call frame). With this change, all the
stack stores are in a somewhat reasonable order.
Differential Revision: https://reviews.llvm.org/D97853
I've set up the basic framework for the isGuaranteedNotToBeUndefOrPoison call and updated DAGCombiner::visitFREEZE to use it; further opcodes can be handled when we have test coverage.
I'm not aware of any vector freeze test coverage, so the DemandedElts (and Depth) args are not being used yet - but they are in place.
SelectionDAG::isGuaranteedNotToBePoison wrappers have also been added.
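For illustration, the visitFREEZE usage is roughly of this shape (a sketch, not the verbatim change):
```
// Sketch: if the frozen operand is already known to be neither undef nor
// poison, the FREEZE node is a no-op and can be replaced by its operand.
SDValue N0 = N->getOperand(0);
if (DAG.isGuaranteedNotToBeUndefOrPoison(N0, /*PoisonOnly=*/false))
  return N0;
```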
Differential Revision: https://reviews.llvm.org/D106668
In D106041, a freeze was added before the branch condition to solve the miscompilation problem of SimpleLoopUnswitch.
However, I found that the added freeze disturbed other optimizations in situations like the following:
```
arg.fr = freeze(arg)
use(arg.fr)
...
use(arg)
```
The problem occurs because arg and arg.fr are recognized as different values.
Therefore, using arg.fr instead of arg throughout the function eliminates the above problem.
Thus, this patch adds to InstCombine's visitFreeze a step that replaces all uses of arg with freeze(arg).
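A minimal sketch of that replacement step (assuming `FI` is the freeze instruction and `Op` its operand; not the verbatim patch):
```
// Sketch: route every other use of Op through the freeze, so the rest of
// the function sees a single value and later optimizations aren't disturbed.
Op->replaceUsesWithIf(&FI, [&](llvm::Use &U) { return U.getUser() != &FI; });
```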
Reviewed By: nikic
Differential Revision: https://reviews.llvm.org/D106233
Currently, when linking LLVM against libxml2, a simple check is performed to verify that it can be linked successfully. This check adds the include directories and the libraries for libxml2, but not the definitions found by the config.
This causes issues on Windows when trying to link against a static libxml2. Libxml2 requires LIBXML_STATIC to be defined in the preprocessor in order to link statically. This definition is put into LIBXML2_DEFINITIONS in the CMake config, but not properly forwarded to check_symbol_exists, leading to it failing because it could not find xmlReadMemory in a DLL.
This patch simply appends the content of LIBXML2_DEFINITIONS to the symbol check definitions, fixing the issue.
Differential Revision: https://reviews.llvm.org/D106740
This is just a workaround. Pass the `-mllvm,-O0` link flags only if it's
not ThinLTO. Doing that with ThinLTO currently results in an error:
```
Remaining virtual register operands
UNREACHABLE executed at .../llvm/lib/CodeGen/MachineRegisterInfo.cpp:209!
```