Get the argument register and ensure there's a copy to the virtual
register. AMDGPU and AArch64 have similarish code to get the livein
value, and I also want to use this in multiple places.
This is a bit more aggressive about setting the register class than
the original function, but that's probably OK.
I think we're missing a few verifier checks for function live ins. I
noticed AArch64's calling convention code is not actually adding
liveins to functions, only the entry block (which apparently might not
matter that much?). There should probably be a verifier check that
entry block live ins are also live into the function. We also might
need a verifier check that the copy to the livein virtual register is
in the entry block.
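As a sketch, the shared helper looks roughly like this (the name and
signature here are illustrative, not necessarily the patch's):

  // Return a virtual register holding a copy of the livein physical
  // register, creating the livein and the entry-block COPY if needed.
  Register getFunctionLiveInCopy(MachineFunction &MF,
                                 const TargetInstrInfo &TII,
                                 MCRegister PhysReg,
                                 const TargetRegisterClass &RC) {
    MachineRegisterInfo &MRI = MF.getRegInfo();
    // Reuse the existing copy if this livein was already requested.
    if (Register VReg = MRI.getLiveInVirtReg(PhysReg))
      return VReg;
    // addLiveIn registers the livein and constrains the register class.
    Register VReg = MF.addLiveIn(PhysReg, &RC);
    MachineBasicBlock &Entry = MF.front();
    BuildMI(Entry, Entry.begin(), DebugLoc(),
            TII.get(TargetOpcode::COPY), VReg)
        .addReg(PhysReg);
    return VReg;
  }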
The SVE instruction set only supports sdiv/udiv for 32-bit and 64-bit
integers. If we see an 8-bit or 16-bit divide, widen the operands to 32
bits, and narrow the result.
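In SelectionDAG terms the lowering looks roughly like the sketch below
(simplified: the real lowering must also cope with the widened vector
no longer fitting in a single register):

  // Widen an i8/i16 element divide to i32, then truncate the result.
  SDValue widenSVEDiv(SDValue Op, SelectionDAG &DAG) {
    SDLoc DL(Op);
    EVT VT = Op.getValueType(); // e.g. nxv2i16
    unsigned ExtOpc = Op.getOpcode() == ISD::SDIV ? ISD::SIGN_EXTEND
                                                  : ISD::ZERO_EXTEND;
    EVT WideVT = EVT::getVectorVT(*DAG.getContext(), MVT::i32,
                                  VT.getVectorElementCount());
    SDValue LHS = DAG.getNode(ExtOpc, DL, WideVT, Op.getOperand(0));
    SDValue RHS = DAG.getNode(ExtOpc, DL, WideVT, Op.getOperand(1));
    SDValue Div = DAG.getNode(Op.getOpcode(), DL, WideVT, LHS, RHS);
    return DAG.getNode(ISD::TRUNCATE, DL, VT, Div);
  }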
Differential Revision: https://reviews.llvm.org/D85170
If a section is supposed to hold elements of type T, then the
corresponding CreateSecStartEnd()'s Ty parameter represents T*.
Forwarding it to the GlobalVariable constructor causes the resulting
GlobalVariable's type to be T*, and its SSA value type to be T**, which
is one indirection too many. This issue is mostly masked by pointer
casts; however, the global variable still gets an incorrect alignment,
which causes SystemZ to choose the wrong instructions to access the
section.
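The shape of the fix, as a sketch (hypothetical helper; the real change
is inside CreateSecStartEnd, and the linkage here is illustrative):

  // If Ty is T*, the boundary global must be created with value type T,
  // so its SSA value has type T* and it gets the alignment of T.
  GlobalVariable *createSecBoundary(Module &M, Type *Ty, StringRef Name) {
    Type *EltTy = cast<PointerType>(Ty)->getElementType(); // T, not T*
    return new GlobalVariable(M, EltTy, /*isConstant=*/false,
                              GlobalValue::ExternalLinkage,
                              /*Initializer=*/nullptr, Name);
  }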
This corresponds with the SelectionDAGISel change in D84056.
Also, rename some poorly named tests in CodeGen/X86/fast-isel-fneg.ll (NFC).
Differential Revision: https://reviews.llvm.org/D85149
Four new CO-RE relocations are introduced:
- TYPE_EXISTENCE: whether a typedef/record/enum type exists
- TYPE_SIZE: the size of a typedef/record/enum type
- ENUM_VALUE_EXISTENCE: whether an enum value of an enum type exists
- ENUM_VALUE: the enum value of an enum type
These additional relocations will make CO-RE bpf programs
more adaptable to potential kernel internal data structure
changes.
Differential Revision: https://reviews.llvm.org/D83878
This one is pretty easy and shrinks the list of unhandled
intrinsics. I'm not sure how relevant the insert point is. Using the
insert position of EntryBuilder will place this after
constants. SelectionDAG seems to end up emitting these after argument
copies and before anything else, but I don't think it really
matters. This also ends up emitting these in the opposite order from
SelectionDAG, but I don't think that matters either.
This also needs a fix to stop later passes from dropping this as a dead
instruction. DeadMachineInstructionElim's version of isDead special
cases LOCAL_ESCAPE for some reason, and I'm not sure why it's excluded
from MachineInstr::isLabel (or why isDead doesn't check it).
I also noticed DeadMachineInstructionElim never considers inline asm
as dead, but GlobalISel will drop asm with no constraints.
This patch tries to improve readability and maintenance
of createVectorizedLoopSkeleton by reorganizing some lines,
updating some of the comments and breaking it up into
smaller logical units.
Reviewed By: pjeeva01
Differential Revision: https://reviews.llvm.org/D83824
This revision adds the following peephole optimization
and its negation:
%a = urem i64 %x, %y
%b = icmp ule i64 %a, %x
====>
%b = true
and, for the negated comparison:
%a = urem i64 %x, %y
%b = icmp ugt i64 %a, %x
====>
%b = false
Both fold because an unsigned remainder can never exceed its dividend.
With John Regehr's help, this optimization was checked with Alive2,
which suggests it should be valid.
This pattern occurs in the bounds checks of Rust code; for example, in
the program
const N: usize = 3;
type T = u8;
pub fn split_multiple(slice: &[T]) -> (&[T], &[T]) {
    let len = slice.len() / N;
    slice.split_at(len * N)
}
the method call slice.split_at checks that len * N is within the
bounds of slice. After some transformations, this bounds check is
turned into the urem seen above, which LLVM then fails to optimize
any further. Adding this optimization allows the bounds check to be
fully optimized away.
ref: https://github.com/rust-lang/rust/issues/74938
Differential Revision: https://reviews.llvm.org/D85092
D83530 removed --inlining={true,false}, which were used by the old
asan_symbolize.py script. Add compatibility aliases so that the old
asan_symbolize.py and sanitizer binaries can work with the new
llvm-symbolizer.
Reviewed By: thakis
Differential Revision: https://reviews.llvm.org/D85228
Teach SCCP to create notconstant lattice values from inequality
assumes and nonnull metadata, and update getConstant() to make
use of them. Additionally, isOverdefined() needs to be changed to
consider notconstant an overdefined value.
Handling inequality branches is deferred until branch-on-undef
handling in other passes has been improved.
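A minimal sketch of the new fact being recorded, assuming the existing
ValueLatticeElement API (llvm/Analysis/ValueLattice.h):

  // Given:
  //   %c = icmp ne i32 %x, 0
  //   call void @llvm.assume(i1 %c)
  // SCCP can mark %x as "not the constant 0":
  ValueLatticeElement notZero(LLVMContext &Ctx) {
    ValueLatticeElement LV;
    LV.markNotConstant(ConstantInt::get(Type::getInt32Ty(Ctx), 0));
    return LV;
  }
  // getConstant()-based folding can then turn a later
  //   %eq = icmp eq i32 %x, 0
  // into false.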
Differential Revision: https://reviews.llvm.org/D83643
This reverts commit aa1f905890fbbfedf396530f1e14409875ece13c.
The flag -codegenprepare may be causing failures. Reverting this
to investigate the root cause.
Introduced by fd6584a22043b254a323635c142b28ce80ae5b5b
This follows similar use of casts in AsmParser.cpp, for instance.
Ideally this type would use unsigned char, which is more representative
of raw data and doesn't get confused by the implementation-defined
signedness of char, but this is what it is. The signed/unsigned
conversions are (so far as I understand) safe and bit-preserving in
this usage, which is what's intended given the API design here.
This patch stops unconditionally transforming FSUB(-0, X) into an FNEG(X) while building the MIR.
This corresponds with the SelectionDAGISel change in D84056.
Differential Revision: https://reviews.llvm.org/D85139
There seems to be an unrelated CSEMIRBuilder bug that was causing
expensive checks failures in this case. Hack the test to avoid this
problem for now until that's fixed.
This is done for the advantage outlined in D83639 ([OptTable] Support
grouped short options).
Some behavior changes:
* -i={0,false} is removed. Use --no-inlines instead.
* --demangle={0,false} is removed. Use --no-demangle instead.
* -untag-addresses={0,false} is removed. Use --no-untag-addresses instead.
Added a higher-level API, OptTable::parseArgs, which handles optional
initial options populated from an environment variable, expands response
files recursively, and parses options.
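A hypothetical usage sketch (argument names assumed; see
llvm/Option/OptTable.h for the authoritative signature):

  llvm::BumpPtrAllocator A;
  llvm::StringSaver Saver(A);
  SymbolizerOptTable Tbl; // the tool's tablegen'd OptTable subclass
  llvm::opt::InputArgList Args =
      Tbl.parseArgs(Argc, Argv, OPT_UNKNOWN, Saver,
                    [&](llvm::StringRef Msg) {
                      // Invoked on response-file or parsing errors.
                      llvm::errs() << Msg << '\n';
                      std::exit(1);
                    });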
Reviewed By: jhenderson
Differential Revision: https://reviews.llvm.org/D83530
This patch added the following additional compile-once
run-everywhere (CO-RE) relocations:
- existence/size of typedef, struct/union or enum type
- enum value and enum value existence
These additional relocations will make CO-RE bpf programs more
adaptable to potential kernel internal data structure changes.
For existence/size relocations, the following two code patterns
are supported:
1. uint32_t __builtin_preserve_type_info(*(<type> *)0, flag);
2. <type> var;
uint32_t __builtin_preserve_field_info(var, flag);
flag = 0 for existence relocation and flag = 1 for size relocation.
For enum value existence and enum value relocations, the following code
pattern is supported:
uint64_t __builtin_preserve_enum_value(*(<enum_type> *)<enum_value>,
flag);
flag = 0 means existence relocation and flag = 1 means enum value
relocation. In the above, <enum_type> can be an enum type or
a typedef to an enum type. The <enum_value> needs to be an enumerator
value from the same enum type. The return type is uint64_t to
permit potential 64-bit enumerator values.
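For example, following the patterns above (assuming the kernel's type
definitions, e.g. from vmlinux.h, are in scope):

  /* flag = 0: does struct task_struct exist in the target kernel? */
  uint32_t has_task =
      __builtin_preserve_type_info(*(struct task_struct *)0, 0);
  /* flag = 1: size of struct task_struct in the target kernel. */
  uint32_t task_size =
      __builtin_preserve_type_info(*(struct task_struct *)0, 1);
  /* flag = 1: the target kernel's value of the enumerator PIDTYPE_MAX. */
  uint64_t pid_max =
      __builtin_preserve_enum_value(*(enum pid_type *)PIDTYPE_MAX, 1);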
Differential Revision: https://reviews.llvm.org/D83242
The swap removal pass looks to remove swaps when a loaded value is swapped, some
number of lane-insensitive operations are performed and then the value is
swapped again and stored.
However, in a situation where we load the value, swap it and then store it
without swapping again, the pass erroneously removes the single swap. The
reason is that both checks in the same equivalence class pass:
- load feeds a swap
- swap feeds a store
However, there is no check that the two swaps are actually a single swap.
This patch just fixes that.
Differential Revision: https://reviews.llvm.org/D84785
The custom lowering saves an instruction over the generic expansion, by
taking advantage of the fact that PowerPC shift instructions are well
defined in the shift-by-bitwidth case.
Differential Revision: https://reviews.llvm.org/D83948
Now that rG47cea9e82dda941e lets us aggressively decode multi-use shuffles for the OR(SHUFFLE(),SHUFFLE()) case, we don't need the computeKnownBits variant any more.
Commit https://reviews.llvm.org/rGcd53ded557c3 attempts to fix the
computation in computeHostNumPhysicalCores() to respect Affinity.
However, the GLIBC wrapper of the affinity system call fails with
the default size of cpu_set_t on systems that have more than 1024 CPUs.
This just fixes the computation on such large machines.
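The standard fix, sketched below (the idea, if not the exact patch), is
to retry with a dynamically allocated, progressively larger CPU set
instead of relying on the static 1024-CPU cpu_set_t:

  #define _GNU_SOURCE // for sched_getaffinity and the CPU_* macros
  #include <sched.h>

  static int countAffineCPUs() {
    for (unsigned MaxCPUs = CPU_SETSIZE; MaxCPUs <= 65536; MaxCPUs *= 2) {
      cpu_set_t *Set = CPU_ALLOC(MaxCPUs);
      if (!Set)
        return -1;
      size_t Bytes = CPU_ALLOC_SIZE(MaxCPUs);
      CPU_ZERO_S(Bytes, Set);
      // Fails with EINVAL if the kernel's mask is wider than Bytes.
      if (sched_getaffinity(0, Bytes, Set) == 0) {
        int N = CPU_COUNT_S(Bytes, Set);
        CPU_FREE(Set);
        return N;
      }
      CPU_FREE(Set);
    }
    return -1;
  }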
Permit lane-crossing post shuffles on AVX1 targets as long as every element comes from the same source lane, which for v8f32/v4f64 cases can be efficiently lowered with the LowerShuffleAsLanePermuteAnd* style methods.
This produces a chrome://tracing compatible trace file in the same way
as -ftime-trace.
This can be useful when optimising test time, e.g. where one
long-running test is causing long overall test time on a wide machine.
This also helped in finding tests which have side effects on others
(e.g. https://reviews.llvm.org/D84885).
Differential Revision: https://reviews.llvm.org/D84931
This is based on the existing code for the non-intrinsic idioms
in InstCombine.
The vector constant constraint is non-obvious: undefs should be
ok in the outer call, but they can't propagate safely from the
inner call in all cases. Example:
https://alive2.llvm.org/ce/z/-2bVbM
define <2 x i8> @src(<2 x i8> %x) {
%0:
%m = umin <2 x i8> %x, { 7, undef }
%m2 = umin <2 x i8> { 9, 9 }, %m
ret <2 x i8> %m2
}
=>
define <2 x i8> @tgt(<2 x i8> %x) {
%0:
%m = umin <2 x i8> %x, { 7, undef }
ret <2 x i8> %m
}
Transformation doesn't verify!
ERROR: Value mismatch
Example:
<2 x i8> %x = < undef, undef >
Source:
<2 x i8> %m = < #x00 (0) [based on undef value], #x00 (0) >
<2 x i8> %m2 = < #x00 (0), #x00 (0) >
Target:
<2 x i8> %m = < #x07 (7), #x10 (16) >
Source value: < #x00 (0), #x00 (0) >
Target value: < #x07 (7), #x10 (16) >
This patch adds a CFI entry for each SVE callee saved register
that needs unwind info at an offset from the CFA. The offset is
a DWARF expression because the offset is partly scalable.
The CFI entries only cover a subset of the SVE callee-saves and
only encode the lower 64 bits, thus implementing the lowest
common denominator ABI. Existing unwinders may support VG but
only restore the lower 64 bits.
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D84044