This function picks the X86 opcode name based on type, masking, and whether
or not a load or broadcast has been folded, using multiple switch statements.
The contents of the switches mostly vary in only a few characters of the
instruction name, so use some macros to build the instruction names and
reduce the repetitiveness.
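A minimal standalone sketch of the macro technique (opcode and helper names
here are hypothetical, not the actual X86 code): token pasting stamps out the
cases, so each switch differs only in the base name passed to the macro.

```cpp
#include <cstdio>

// Hypothetical, simplified illustration: the opcode names differ only in a
// suffix, so a token-pasting macro builds the cases for each base name.
enum Opcode { VPANDDZrr, VPANDDZrm, VPANDDZrmb, VPANDQZrr, VPANDQZrm, VPANDQZrmb };
enum FoldKind { NoFold, FoldedLoad, FoldedBroadcast };

#define OPCODE_CASES(Base)                                                     \
  case NoFold:          return Base##rr;                                       \
  case FoldedLoad:      return Base##rm;                                       \
  case FoldedBroadcast: return Base##rmb;

static Opcode pickDWordOpcode(FoldKind K) {
  switch (K) { OPCODE_CASES(VPANDDZ) }
  return VPANDDZrr;
}

static Opcode pickQWordOpcode(FoldKind K) {
  switch (K) { OPCODE_CASES(VPANDQZ) }
  return VPANDQZrr;
}

int main() {
  std::printf("%d %d\n", pickDWordOpcode(FoldedLoad),
              pickQWordOpcode(FoldedBroadcast));
}
```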
This patch adds the td definitions and asm/disasm tests for the
following instructions:
XXSPLTIW
XXSPLTIDP
XXSPLTI32DX
XXPERMX
XXBLENDVB
XXBLENDVH
XXBLENDVW
XXBLENDVD
VSLDBI
VSRDBI
Differential Revision: https://reviews.llvm.org/D82896
It's messy to pattern-match, and completely unnecessary: scalar indexes
work equally well.
See also discussion on D81620 and D82061.
Differential Revision: https://reviews.llvm.org/D82430
The indexing was messed up, so the result was completely broken.
Shuffle constant exprs are rare in practice; without vscale types,
constant folding generally eliminates them, so this is hard to trip over.
Fixes regression from D72467.
(Recommitting after fix for memory leak.)
Differential Revision: https://reviews.llvm.org/D80330
In most cases, this doesn't have much impact: the destructors just call
the base class destructor anyway. A few subclasses of ConstantExpr
actually store non-trivial data, though. Make sure we clean up
appropriately.
This is sort of ugly, but I don't see a good alternative given the
constraints.
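A standalone illustration of the underlying C++ hazard (not LLVM's actual
code): when objects are destroyed through a base pointer without virtual
destructors, destruction must be dispatched manually to the most-derived
type, or non-trivial members leak.

```cpp
#include <vector>

// Simplified sketch: Kind-based manual destructor dispatch, standing in for
// what a class hierarchy without virtual destructors must do to avoid leaks.
struct ExprBase {
  int Kind;
  explicit ExprBase(int K) : Kind(K) {}
};

struct ShuffleExpr : ExprBase {
  std::vector<int> Mask; // non-trivial member that must be destroyed
  ShuffleExpr() : ExprBase(1), Mask{0, 1, 2} {}
};

void destroyExpr(ExprBase *E) {
  switch (E->Kind) {
  case 1:
    delete static_cast<ShuffleExpr *>(E); // runs ~ShuffleExpr, freeing Mask
    break;
  default:
    delete E; // a plain `delete E;` for a ShuffleExpr would leak Mask's storage
    break;
  }
}

int main() { destroyExpr(new ShuffleExpr()); }
```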
Issue found by asan buildbots running the testcase for D80330.
Differential Revision: https://reviews.llvm.org/D82509
D79164/2596da31740f changed getCFInstrCost to return 1 by default.
AArch64 did not have its own implementation, so the throughput cost
of CFI instructions was overestimated. On most cores, most branches should
be predicted and essentially free throughput-wise.
This fixes a 9% performance regression on a SPEC2006 benchmark on
AArch64 with -O3 LTO & PGO.
The patch effectively restores the pre-2596da31740f behavior for AArch64
and undoes the AArch64 test changes of that patch.
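A simplified sketch of the shape of the fix (not the exact
TargetTransformInfo interface): a target override models control-flow
instructions as free for throughput, undoing the generic default of 1.

```cpp
#include <cstdio>

// Hedged sketch, not the real LLVM TTI signatures: the base implementation
// now defaults to 1 (per 2596da31740f); the AArch64 override restores 0.
struct BaseTTI {
  virtual ~BaseTTI() = default;
  virtual unsigned getCFInstrCost() const { return 1; } // new generic default
};

struct AArch64TTI : BaseTTI {
  // Branches are usually predicted and essentially free throughput-wise.
  unsigned getCFInstrCost() const override { return 0; }
};

int main() {
  AArch64TTI TTI;
  std::printf("CF throughput cost: %u\n", TTI.getCFInstrCost()); // 0
}
```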
Reviewers: samparker, dmgreen, anemet
Reviewed By: samparker
Differential Revision: https://reviews.llvm.org/D82755
This replaces the switch statement implementation in clang's
X86.cpp with a lookup table in X86TargetParser.cpp.
I've used constexpr and a copy of the FeatureBitset from
SubtargetFeature.h to store the features in a lookup table.
After the lookup, the bitset is translated into strings for use
by the rest of the frontend code.
I had to modify the implementation of the FeatureBitset to avoid
bugs in gcc 5.5's constexpr handling. It seems not to like the
same array entry being used on both the left-hand and right-hand
sides of an assignment, &=, or |=. I've also used uint32_t instead of
uint64_t and sized the storage based on X86::CPU_FEATURE_MAX.
I've initialized the features for different CPUs outside of the
table so that we can express inheritance in an ad hoc way; this
was one of the big limitations of the switch, where we had resorted
to labels and gotos.
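A minimal self-contained sketch of the approach (feature names and table
contents are made up; the real table lives in X86TargetParser.cpp):

```cpp
#include <cstdint>

// Hedged sketch: a constexpr bitset sized by the feature count, using
// uint32_t words and read-modify-write through a temporary to stay friendly
// to older constexpr implementations like gcc 5.5's.
constexpr unsigned MaxFeatures = 64;

struct FeatureBitset {
  uint32_t Bits[(MaxFeatures + 31) / 32] = {};

  constexpr FeatureBitset &set(unsigned I) {
    uint32_t NewBits = Bits[I / 32] | (uint32_t(1) << (I % 32));
    Bits[I / 32] = NewBits; // avoid |= on the same array entry
    return *this;
  }
  constexpr bool test(unsigned I) const {
    return (Bits[I / 32] >> (I % 32)) & 1;
  }
  constexpr FeatureBitset operator|(const FeatureBitset &RHS) const {
    FeatureBitset Result;
    for (unsigned I = 0; I != (MaxFeatures + 31) / 32; ++I)
      Result.Bits[I] = Bits[I] | RHS.Bits[I];
    return Result;
  }
};

enum Feature { FeatureSSE2, FeatureSSE42, FeatureAVX };

// Inheritance expressed outside the table: a child CPU ORs in its parent.
constexpr FeatureBitset FeaturesNehalem =
    FeatureBitset{}.set(FeatureSSE2).set(FeatureSSE42);
constexpr FeatureBitset FeaturesSandyBridge =
    FeaturesNehalem | FeatureBitset{}.set(FeatureAVX);

struct CPUInfo { const char *Name; FeatureBitset Features; };
constexpr CPUInfo CPUTable[] = {
    {"nehalem", FeaturesNehalem},
    {"sandybridge", FeaturesSandyBridge},
};

int main() {
  static_assert(CPUTable[1].Features.test(FeatureSSE42), "inherited bit");
  static_assert(CPUTable[1].Features.test(FeatureAVX), "own bit");
}
```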
Differential Revision: https://reviews.llvm.org/D82731
If the addend of the fma is zero, common sense would suggest that we can
convert fma x, y, 0.0 to fmul x, y. This comes up with some user code
that was expecting the first fma in an unrolled loop to simplify to an fmul.
Floating point often does not follow naive common sense, though. Alive
suggests that this should be guarded by nsz (as fadd -0.0, 0.0 = 0.0).
fma x, y, -0.0 is always valid.
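A small worked example of the sign-of-zero hazard (plain C++, mirroring the
IR-level fold): with x*y == -0.0, a +0.0 addend flips the sign, so the fold
to fmul needs nsz; a -0.0 addend never changes the product.

```cpp
#include <cmath>
#include <cstdio>

int main() {
  double x = -1.0, y = 0.0;                  // x * y == -0.0
  std::printf("%g\n", x * y);                // -0
  std::printf("%g\n", std::fma(x, y, +0.0)); // 0  (differs from x*y: needs nsz)
  std::printf("%g\n", std::fma(x, y, -0.0)); // -0 (always matches x*y)
}
```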
Differential Revision: https://reviews.llvm.org/D82778
Summary:
Follow-up to D81736. Move getOpenMPDirectiveKind, getOpenMPClauseKind, getOpenMPDirectiveName, and
getOpenMPClauseName to the new tablegen code generation. The code is generated in a new file named OMP.cpp.inc
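For reference, a trimmed sketch of the shape such generated code typically
takes (illustrative only; the real OMP.cpp.inc is produced by tablegen):

```cpp
#include <cstdio>

enum Directive { OMPD_parallel, OMPD_for, OMPD_unknown };

// Illustrative stand-in for one generated function; the real switch covers
// every directive defined in the tablegen source.
const char *getOpenMPDirectiveName(Directive Kind) {
  switch (Kind) {
  case OMPD_parallel: return "parallel";
  case OMPD_for:      return "for";
  case OMPD_unknown:  return "unknown";
  }
  return "unknown";
}

int main() { std::printf("%s\n", getOpenMPDirectiveName(OMPD_for)); }
```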
Reviewers: jdoerfert, jdenny, thakis
Reviewed By: jdoerfert, jdenny
Subscribers: mgorny, yaxunl, hiraditya, guansong, sstefan1, llvm-commits, thakis
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D82405
This change lets LLVM use the LC_BUILD_VERSION command when building for macOS 10.14, iOS 12, tvOS 12, and watchOS 5.
Additionally, this change ensures that new platforms like Apple Silicon macOS,
Mac Catalyst, and simulators running on Apple Silicon always use
LC_BUILD_VERSION, with the OS version set to the minimum supported OS version
if the deployment target version is older.
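A hedged sketch of the decision logic described above (hypothetical helper
and platform names; versions encoded as major*100+minor for brevity):

```cpp
#include <cstdio>

enum Platform { PlainMacOS, AppleSiliconMac, MacCatalyst, SimOnAppleSilicon };

// New platforms always take LC_BUILD_VERSION; older ones only when the
// deployment target is new enough (macOS 10.14 here; iOS 12 etc. elsewhere).
bool useBuildVersion(Platform P, unsigned DeploymentTarget) {
  if (P == AppleSiliconMac || P == MacCatalyst || P == SimOnAppleSilicon)
    return true; // OS version is raised to the minimum supported if older
  return DeploymentTarget >= 1014;
}

int main() {
  std::printf("%d\n", useBuildVersion(PlainMacOS, 1013));      // 0: LC_VERSION_MIN
  std::printf("%d\n", useBuildVersion(AppleSiliconMac, 1013)); // 1: LC_BUILD_VERSION
}
```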
Differential Revision: https://reviews.llvm.org/D82836
Implement the `-change` option for install-name-tool. The behavior exactly
matches that of cctools. Depends on D82410.
Reviewed By: jhenderson, smeenai
Differential Revision: https://reviews.llvm.org/D82613
Implement the `-id` option for install-name-tool. Differences from cctools'
behavior:
- Does **NOT** throw an error if multiple -id options are specified.
Instead, picks the last one.
- Throws an error if an empty id is specified.
Reviewed By: jhenderson, smeenai
Differential Revision: https://reviews.llvm.org/D82410
This reduces peak memory on my test case from 1960.14MB to 1700.63MB
(-260MB, -13.2%) with no measurable impact on CPU time. I'm currently
working with a publics stream that is about 277MB. Before this change,
we would allocate 277MB of heap memory, serialize publics into it,
hold onto that heap memory, open the PDB, and commit into it. After
this change, we defer the serialization until commit time.
In the last change I made to public writing, I re-sorted the list of
publics multiple times in place to avoid allocating new temporary data
structures. Deferring serialization until later requires that we don't
reorder the publics. Instead of sorting the publics, I partially
construct the hash table data structures, store a publics index in them,
and then sort the hash table data structures. Later, I replace the index
with the symbol record offset.
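A standalone sketch of the index-then-patch idea (hypothetical types, not the
GSIStreamBuilder code): hash records first carry an index into the unsorted
publics list; after sorting, the index is swapped for the serialized record
offset, so the publics themselves never move.

```cpp
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <string>
#include <vector>

struct Public { std::string Name; uint32_t Hash; };
struct HashRecord { uint32_t Off; }; // publics index now, record offset later

std::vector<HashRecord>
buildHashRecords(const std::vector<Public> &Publics,
                 const std::vector<uint32_t> &RecordOffsets) {
  std::vector<HashRecord> Records;
  for (uint32_t I = 0; I != Publics.size(); ++I)
    Records.push_back({I}); // store the index, leaving Publics untouched
  std::sort(Records.begin(), Records.end(),
            [&](const HashRecord &A, const HashRecord &B) {
              return Publics[A.Off].Hash < Publics[B.Off].Hash;
            });
  for (HashRecord &R : Records)
    R.Off = RecordOffsets[R.Off]; // patch: index -> symbol record offset
  return Records;
}

int main() {
  std::vector<Public> Publics = {{"b", 2}, {"a", 1}};
  std::vector<uint32_t> Offsets = {100, 116}; // offsets in serialized order
  for (const HashRecord &R : buildHashRecords(Publics, Offsets))
    std::printf("%u\n", R.Off); // 116 then 100
}
```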
This change also addresses a FIXME and moves the list of global and
public records from GSIHashStreamBuilder to GSIStreamBuilder. Now that
publics aren't being serialized, it makes even less sense to store them
as a list of CVSymbol records. The hash table used to deduplicate
globals is moved as well, since that is specific to globals, and not
publics.
Reviewed By: aganea, hans
Differential Revision: https://reviews.llvm.org/D81296
In the RISC-V vector extension, users can group multiple vector registers
as one pseudo register. In mixed-width operations, users can use
partial vector registers to reduce register pressure. The parameter
that controls register grouping and partial use is called LMUL. LMUL is
part of the type, so we have a bunch of vector types. In order to
support all these types, we need new MVT types in LLVM. In this patch, I
added several MVT types that are used in the RISC-V vector implementation.
This is a standalone patch for the MVT types, without the RISC-V related implementation.
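A small worked example of why LMUL multiplies the number of types (the VLEN
value is illustrative): every (element width, LMUL) pair denotes a distinct
register-group shape, and each needs its own MVT.

```cpp
#include <cstdio>
#include <initializer_list>

int main() {
  const unsigned VLEN = 128; // bits per vector register (illustrative)
  for (unsigned SEW : {8u, 16u, 32u, 64u})   // element width in bits
    for (unsigned LMUL : {1u, 2u, 4u, 8u})   // register-group multiplier
      std::printf("SEW=%-2u LMUL=%u -> %3u elements per group\n",
                  SEW, LMUL, LMUL * VLEN / SEW);
}
```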
Differential Revision: https://reviews.llvm.org/D81724
Rename `future*` encoding test files to include ISA3.1 in the file name
and combine with existing ISA3.1 instruction encoding tests that were
added into `p10*` test files.
Keeping the `p10*` files for now to ensure we don't add more to them.
They will be removed once all ISA3.1 instructions are implemented.
After the rewrite of this pass (D79175) I missed one thing: the inserted VCTP
intrinsic can be cloned into exit blocks if instructions that perform the same
operation are present there, but this wasn't triggering anymore. However, it
turns out that for handling reductions (see D75533) it's actually easier not
to have the VCTP in exit blocks, so this removes that code.
This was possible because some other code that depended on it, the
rematerialization of the trip count to enable more dead code removal later,
turned out not to be doing much anymore due to the more aggressive dead code
removal added to the low-overhead loops pass.
Differential Revision: https://reviews.llvm.org/D82773
This patch stops the trunc, rint, round, floor and ceil intrinsics from blocking tail predication.
Differential Revision: https://reviews.llvm.org/D82553
The knownbits.ll test case is somewhat fragile since:
- it relies on undef inputs; and
- it operates just at the limits of MaxRecursionDepth.
This means that optimization changes may easily cause the test
to spuriously fail. Rewrite the test so it still validates
the same thing, but in a less fragile manner.
If we're masking the result of an OR-reduction before comparing against zero, we can fold this into the PTEST() / MOVMSK(CMPEQ()) codegen by pre-masking the source value.
This works particularly well on PTEST which performs the AND as part of its operation, but the MOVMSK variant also benefits for non-V2I64 cases.
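A worked illustration at the source level (SSE4.1; compile with -msse4.1):
PTEST computes (V & Mask) == 0 directly, so pre-masking the source is free.

```cpp
#include <smmintrin.h> // SSE4.1: _mm_testz_si128
#include <cstdio>

int main() {
  // The only set bit (0x100) falls outside the 0xFF byte mask, so the
  // masked OR-reduction is zero and PTEST reports it in one instruction.
  __m128i V = _mm_set_epi32(0, 0x100, 0, 0);
  __m128i Mask = _mm_set1_epi32(0xFF);
  std::printf("%d\n", _mm_testz_si128(V, Mask)); // 1: (V & Mask) == 0
}
```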
Fixes PR44781
Summary:
The old PM runs SpeculativeExecutionPass for targets that have divergent branches.
It uses `createSpeculativeExecutionIfHasBranchDivergencePass`, which creates
the pass with `OnlyIfDivergentTarget=true`, whereas the new PM just created the
pass with the default `OnlyIfDivergentTarget=false`, so it unexpectedly ran and
caused buildbot test failures.
Reviewers: chandlerc, arsenm
Reviewed By: arsenm
Subscribers: wdng, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D82735
Summary:
This fixes ASan and MSan tests on SystemZ after
commit 6a822e20ce70 ("[ASan][MSan] Remove EmptyAsm and set the CallInst
to nomerge to avoid from merging.").
Based on commit 80e107ccd088 ("Add NoMerge MIFlag to avoid MIR branch
folding").
Reviewers: uweigand, jonpa
Reviewed By: uweigand
Subscribers: hiraditya, llvm-commits, Andreas-Krebbel
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D82794
We convert `APSInt`s to Z3 bitvectors in an inefficient way in most cases.
We should not serialize to std::string just to pass a 64-bit integer.
For the vast majority of cases, we use at most 64-bit-wide integers (at least
in the Clang Static Analyzer), so we should simply call `Z3_mk_unsigned_int64`
and `Z3_mk_int64` instead of `Z3_mk_numeral`, as the Z3 docs state:
> It (`Z3_mk_unsigned_int64`, etc.) is slightly faster than `Z3_mk_numeral` since
> it is not necessary to parse a string.
If the `APSInt` is wider than 64 bits, we will use the `Z3_mk_numeral` with a
`SmallString` instead of a heap-allocated `std::string`.
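A hedged sketch of the dispatch (it compresses the described approach; the Z3
C API and LLVM ADT calls are real, but this is not the actual patch and needs
a live `Z3_context`/`Z3_sort` to run):

```cpp
#include <z3.h>
#include "llvm/ADT/APSInt.h"
#include "llvm/ADT/SmallString.h"

Z3_ast mkBitvector(Z3_context Ctx, Z3_sort Sort, const llvm::APSInt &Int) {
  // Fast path: at most 64 bits, no string round-trip needed.
  if (Int.getBitWidth() <= 64)
    return Int.isUnsigned()
               ? Z3_mk_unsigned_int64(Ctx, Int.getZExtValue(), Sort)
               : Z3_mk_int64(Ctx, Int.getSExtValue(), Sort);
  // Wide integers: fall back to Z3_mk_numeral with a stack-friendly buffer.
  llvm::SmallString<64> Buffer;
  Int.toString(Buffer, /*Radix=*/10);
  return Z3_mk_numeral(Ctx, Buffer.c_str(), Sort);
}
```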
Differential Revision: https://reviews.llvm.org/D78453
This is a followup on D78403.
I'm unsure about the `getAtomicOpAlign` overloads that take `AtomicRMWInst` and `AtomicCmpXchgInst`; shouldn't `getAlign` provide the correct answer already?
Differential Revision: https://reviews.llvm.org/D81369
Summary:
Separate introduction of IntrNoFree property as suggested in D70365
Reviewers: arsenm, nhaehnle
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D82587