Match ML.EXE's behavior for ALIGN, EVEN, and ORG directives both at file level and in STRUCTs.
We currently reject negative offsets passed to ORG inside STRUCTs (ML.EXE and ML64.EXE wrap them around as unsigned 32-bit integers).
Also, if a STRUCT is declared using an ORG directive, no value of that type can be defined.
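For reference, the wrap-around is just the usual unsigned 32-bit reinterpretation; a minimal C++ sketch (illustrative only, not llvm-ml code):
```
#include <cstdint>

// ML.EXE and ML64.EXE treat a negative ORG offset as its unsigned 32-bit
// wrap-around value, e.g. an offset of -2 behaves like 0xFFFFFFFE.
uint32_t orgOffsetValue(int32_t Offset) {
  return static_cast<uint32_t>(Offset);
}
```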
Reviewed By: thakis
Differential Revision: https://reviews.llvm.org/D92507
This fixes a bug in LibCallSimplifier::optimizeMemChr, which does the following transformation:
```
// memchr("\r\n", C, 2) != nullptr -> (1 << C & ((1 << '\r') | (1 << '\n')))
// != 0
// after bounds check.
```
As written above, a bounds check on C (whether it is less than the integer bitwidth) must be done before computing `1 << C`; otherwise `1 << C` overflows.
If the bounds check is false, the result of `(1 << C & ...)` must not be used at all; otherwise the result of the shift (which is poison) contaminates the whole result.
A correct way to encode this is `select i1 (bounds check), (1 << C & ...), false` because select does not allow the unused operand to contaminate the result.
However, this optimization was introducing `and (bounds check), (1 << C & ...)` which cannot do that.
The bug was found from compilation of this C++ code: https://reviews.llvm.org/rG2fd3037ac615#1007197
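As an illustration (a standalone sketch, not LLVM's code), the short-circuiting `&&` below plays the role of the `select`, so the shift is never evaluated when the bounds check fails:
```
#include <cstdint>

// memchr("\r\n", C, 2) != nullptr, rewritten as a guarded bitmask test.
// 64 stands in for the bitwidth of the integer type used by the transform.
bool isCrOrLf(unsigned char C) {
  const uint64_t Mask = (1ULL << '\r') | (1ULL << '\n');
  return C < 64 && ((1ULL << C) & Mask) != 0;
}
```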
Reviewed By: nikic
Differential Revision: https://reviews.llvm.org/D104901
The metadata added in D102361 introduces a module flag that we can check
to determine whether the module was compiled with `-fopenmp`. We can
now check for the presence of this flag instead of scanning the call graph
for OpenMP runtime functions.
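As a sketch (assuming the flag name "openmp" from D102361; not the actual OpenMPOpt code), the check reduces to a module-flag lookup:
```
#include "llvm/IR/Module.h"

// Returns true if the module was compiled with -fopenmp, based on the
// module flag added in D102361 (flag name assumed here to be "openmp").
static bool moduleHasOpenMP(const llvm::Module &M) {
  return M.getModuleFlag("openmp") != nullptr;
}
```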
Depends on D102361
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D102423
When the default target arch isn't one that is supported as a
windows target, we want to set a suitable architecture (so that
Clang tests that run plain 'llvm-rc' pass checks for e.g.
"#ifdef _WIN32" even for llvm builds that default to e.g. ppc64).
But if the default target architecture is usable, don't rewrite it.
(Rewriting it, by e.g. "T.setArch(T.getArch())", normalizes the
spelling of the architecture, e.g. changing i686 to i386. Such a
change can make clang unable to find the right sysroot.)
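For example (a hypothetical snippet, not the llvm-rc driver code), re-setting the architecture normalizes its spelling:
```
#include "llvm/ADT/Triple.h"

// Re-setting the arch re-spells it in canonical form (i686 -> i386), which
// is exactly the rewrite we want to avoid for a usable default triple.
void normalizeArchSpelling() {
  llvm::Triple T("i686-w64-windows-gnu");
  T.setArch(T.getArch()); // T.getArchName() is now "i386"
}
```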
This can't, unfortunately, practically be tested very well because
it is entirely dependent on the default triple of the llvm build.
Differential Revision: https://reviews.llvm.org/D104589
Add support for the .reloc directive along the lines of
other back-ends.
This fixes a regression after https://reviews.llvm.org/D104080
was merged, since that patch presupposed support for .reloc.
Types should be defined in function scope instead of a local lexical scope. Field types should be defined inside their parent type scope.
We were seeing a type defined in a local scope cause trouble for the dwarf emitter, where a context is required to be a function scope, a namespace or a global scope.
Reviewed By: aprantl
Differential Revision: https://reviews.llvm.org/D104937
This adds a fold of sub(0, splat(sub(0, x))) -> splat(x). This can
come up in the lowering of right shifts under AArch64, where we generate
a shift left of a negated number.
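Per lane this is just the modular-arithmetic fact that double negation is the identity; a scalar illustration (not the DAG combine itself):
```
#include <cstdint>

// 0 - (0 - x) == x in two's-complement arithmetic; the fold applies this
// per lane when the inner negation is splatted across the vector.
uint32_t doubleNegate(uint32_t x) { return 0u - (0u - x); }
```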
Differential Revision: https://reviews.llvm.org/D103755
We don't need to have the compare output a value and then copy it
to FPSW for use by FNSTSW. Instead we can just have the compare
output Glue and glue the FNSTSW to it. InstrEmitter effectively
performed this optimization when emitting the Machine IR. Doing
it directly simplifies the code and reduces the work in
InstrEmitter. There's no change in the machine IR at the end of
isel before and after this change.
If we have a umul.with.overflow where the multiply result is not used and one of the operands is a constant, we can perform the overflow check cheaper with a comparison than by performing the multiply and extracting the overflow flag.
(Noticed when looking at the conditions SCEV emits for overflow checks.)
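A scalar sketch of the cheaper check (illustrative, not the actual codegen):
```
#include <cstdint>
#include <limits>

// umul.with.overflow(x, C) sets its overflow bit iff x * C wraps; for a
// non-zero constant C that is equivalent to x > UINT32_MAX / C, a single
// compare against a constant instead of a multiply plus flag extraction.
bool umulOverflows(uint32_t x, uint32_t C) {
  return C != 0 && x > std::numeric_limits<uint32_t>::max() / C;
}
```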
Differential Revision: https://reviews.llvm.org/D104665
This option is already supported by update_test_checks.py, but it can
also be useful in update_cc_test_checks.py. For example, I'd like to
use it in OpenMP offload codegen tests to check global variables like
`.offload_maptypes*`.
Reviewed By: jdoerfert, arichardson, ggeorgakoudis
Differential Revision: https://reviews.llvm.org/D104714
For each of the x.with.overflow variants, if only the overflow bit is consumed, we can generate a direct overflow comparison. This precommits tests for each of the variants and tries to cover interesting corner cases.
This change is NFC upstream. We pass in the loop's block to the kernel
rewriter explicitly, instead of assuming it's the loop's top block. This
change is made for downstream targets where this assumption doesn't hold.
Differential Revision: https://reviews.llvm.org/D104811
Now that the new PM is the default, the old-style test command is exactly the same as the new test command, i.e. the two commands are now redundant.
We should just delete the old command (unless someone wants to add enable-new-pm=0 to all old commands).
Differential Revision: https://reviews.llvm.org/D104895
Do this by making opaque pointers a valid pointer element type,
for which we implicitly create an opaque pointer (moving the logic
from getPointerTo into PointerType::get).
We'll never create something like a "pointer to opaque pointer",
but accept it in the API, because a lot of code reasonably assumes
that you can create a pointer to pointer type.
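For example (a sketch of the behavior described above, not a test from the patch):
```
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/LLVMContext.h"

// Asking for a "pointer to an opaque pointer" just yields an opaque pointer
// in the requested address space.
llvm::PointerType *pointerToOpaque(llvm::LLVMContext &Ctx) {
  llvm::PointerType *Opaque = llvm::PointerType::get(Ctx, /*AddressSpace=*/0);
  return llvm::PointerType::get(Opaque, /*AddressSpace=*/0); // still opaque
}
```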
Differential Revision: https://reviews.llvm.org/D104902
There's no reason to use the weaker name-only analysis when we
have a function prototype to check (in fact, we probably should
not even have that name-only function exposed for general use,
but removing it requires auditing all of the callers).
The version of getLibFunc that takes a Function argument also
does some prototype checking to make sure the arguments/return
type match the expected signature of a real library call.
This is NFC-intended because the code in MemoryBuiltins does its
own function signature checking. For now, that means there may
be some redundancy in the checking, but that should not be above
the noise for compile-time. Ideally, we can move the checks to
a single location.
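As a sketch of the distinction (an illustrative helper, not the MemoryBuiltins code):
```
#include "llvm/Analysis/TargetLibraryInfo.h"
#include "llvm/IR/Function.h"

// The Function-based overload of getLibFunc also validates the prototype,
// unlike the name-only getLibFunc(StringRef, LibFunc &).
static bool isRecognizedLibCall(const llvm::Function &F,
                                const llvm::TargetLibraryInfo &TLI,
                                llvm::LibFunc &TheLibFunc) {
  return TLI.getLibFunc(F, TheLibFunc) && TLI.has(TheLibFunc);
}
```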
There's still a hole in the logic that allows the example in
https://llvm.org/PR50846 to cause a compiler crash.
To reflect that the size may be scalable, a TypeSize is returned
instead of an unsigned. Places where the result is used currently
rely on an implicit cast of TypeSize -> uint64_t,
which asserts that the type is not scalable.
This patch is NFC for fixed-width vectors.
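An illustration of the conversion being relied on (not code from the patch; later LLVM versions deprecate the implicit cast in favor of getFixedValue()):
```
#include "llvm/Support/TypeSize.h"
#include <cstdint>

// A fixed TypeSize converts to uint64_t; a scalable one trips the assert
// mentioned above.
uint64_t sizeInBits(llvm::TypeSize TS) {
  uint64_t Bits = TS; // asserts at runtime if TS.isScalable()
  return Bits;
}
```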
Reviewed By: aemerson
Differential Revision: https://reviews.llvm.org/D104454
This patch extends applyLoopGuards to detect a single-cond range check
idiom that InstCombine generates.
It extends applyLoopGuards to detect conditions of the form
(-C1 + X < C2). InstCombine will create this form when combining two
checks of the form (X u< C2 + C1) and (X >=u C1).
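In scalar terms, the equivalence being recognized is (a hedged illustration; it requires that C1 + C2 does not wrap):
```
#include <cstdint>

// For unsigned X:  X - C1 < C2  <=>  X >= C1 && X < C1 + C2
// (assuming C1 + C2 does not wrap); this is the single-compare form
// InstCombine produces from the two original checks.
bool inRange(uint32_t X, uint32_t C1, uint32_t C2) {
  return X - C1 < C2;
}
```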
In practice, this enables us to correctly compute tight trip count
bounds for code like the function below. InstCombine will fold the
minimum iteration check created by LoopRotate with the user check (< 8).
```
void unsigned_check(short *pred, unsigned width) {
  if (width < 8) {
    for (int x = 0; x < width; x++)
      pred[x] = pred[x] * pred[x];
  }
}
```
As a consequence, LLVM currently creates dead vector loops for the code
above, e.g. see https://godbolt.org/z/cb8eTcqET and https://alive2.llvm.org/ce/z/SHHW4d
Reviewed By: nikic
Differential Revision: https://reviews.llvm.org/D104741
Separate out the case that uses llvm-dis without
--force-opaque-pointers. This will generally produce a different
result from the other cases, because things like global symbol
pointers will be non-opaque in this case.
Function Records are required to be aligned on 8 bytes. This is enforced
for each record except the first, which relied on the default alignment of
an std::string's buffer. There is no such guarantee, and indeed on 32-bit
platforms some implementations of std::string do not provide it.
Provide a portable implementation based on llvm's MemoryBuffer.
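For illustration (a hypothetical check, not the patch itself), the old scheme amounted to relying on this non-guarantee:
```
#include <cstdint>
#include <string>

// Whether the first record is 8-byte aligned depends on where the
// std::string implementation happens to place its buffer; the standard
// gives no such guarantee, and some 32-bit implementations break it.
bool firstRecordAligned(const std::string &Buffer) {
  return reinterpret_cast<uintptr_t>(Buffer.data()) % 8 == 0;
}
```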
Differential Revision: https://reviews.llvm.org/D104745
This custom lowers <4 x i8> vector loads using a 32-bit load, followed by 2
SSHLL instructions to extend it to e.g. a <4 x i32> vector. Before, it was
really inefficient and expensive to construct a <4 x i32> for this, as four
byte loads and four moves were used. With this improvement, SLP vectorisation might for
example become profitable, see D103629.
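For example, source like the following (an illustrative function, not taken from the patch) might now be vectorized into a single 32-bit load plus extends rather than four byte loads and moves:
```
#include <cstdint>

// Loads four i8 elements and sign-extends them to i32 -- the <4 x i8> load
// pattern that can now be lowered as a 32-bit load followed by SSHLLs.
void widen4(const int8_t *src, int32_t *dst) {
  for (int i = 0; i < 4; ++i)
    dst[i] = src[i];
}
```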
Differential Revision: https://reviews.llvm.org/D104782
On PowerPC, VSRpRC represents pairs of even and odd VSX registers,
and VRRC corresponds to the upper 32 VSX registers. In some cases, extra
copies are produced when handling incoming VRRC arguments with VSRpRC.
This patch changes the allocation order of VSRpRC to eliminate this kind
of copy.
Stack frame sizes may increase if non-volatile registers are allocated, and
some other vector copies still happen. These need to be fixed in future changes.
Reviewed By: nemanjai
Differential Revision: https://reviews.llvm.org/D104855
For a bfi chain like:
```
a = bfi input, x, y
b = bfi a, x', y'
```
The previous code was RAUW'ing a with x, mutating the second ('b') bfi, and when
SelectionDAG's CSE code ended up deleting it unexpectedly, bad things happened.
There's no need to RAUW in this case because we can just return our newly
created replacement BFI node. It also looked incorrect because it didn't account
for other users of the 'a' bfi.
Since it seems that chains of more than 2 BFI nodes are hard/impossible to
produce without this combine kicking in at some point, I've removed that
functionality since it had no test coverage.
rdar://79095399
Differential Revision: https://reviews.llvm.org/D104868
This patch teaches the compiler to generate code to handle larger RVV
stack sizes and stack offsets which resolve to an amount larger than 2047
vector registers in size.
The previous behaviour was asserting on such large values as it was only
able to materialize the constant by feeding it to the 12-bit immediate
of an `ADDI` instruction. The compiler can now materialize this amount
into a temporary register before continuing with the computation.
A test case for this scenario is included which also checks that the
temporary register used to materialize the amount doesn't require an
additional spill slot over what we're already reserving for RVV code.
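As an illustration of the constraint (a hypothetical helper, not the actual RISCVFrameLowering code):
```
#include <cstdint>

// ADDI takes a signed 12-bit immediate, so only amounts in [-2048, 2047]
// can be folded directly; larger RVV stack amounts are now materialized
// into a temporary register before the adjustment.
bool fitsInADDIImmediate(int64_t Amount) {
  return Amount >= -2048 && Amount <= 2047;
}
```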
Reviewed By: rogfer01
Differential Revision: https://reviews.llvm.org/D104727
Previously this instruction could only be used from the assembler. This change
makes it available to the compiler as well. Scheduling information was copied
from the FTST instruction; hopefully this is a satisfactory approximation.
Differential Revision: https://reviews.llvm.org/D104853
... even on targets preferring RELA. The section is only consumed by ld.lld
which can handle REL.
Follow-up to D104080 as I explained in the review. There are two advantages:
* The D104080 code only handles RELA, so arm/i386/mips32 etc may warn for -fprofile-use=/-fprofile-sample-use= usage.
* Decrease object file size for RELA targets
While here, change the relocation to relocate weights, instead of 0,1,2,3,..
I failed to catch the issue during review.
Commit 0464586ac515e8cfebe4c7615387fd625c8869f5 added a combine
for a 64-bit load feeding a bswap but the implementation is only
correct for little endian systems.
This fixes it for big endian systems.
Remove the old names for the methods. These were only left behind to
ease the transition for downstreams.
Differential Revision: https://reviews.llvm.org/D104820
This is a mechanical change. This actually also renames the
similarly named methods in the SmallString class; however, these
methods don't seem to be used outside of the llvm subproject, so
this doesn't break building of the rest of the monorepo.
Rename functions with the `xx_lower()` names to `xx_insensitive()`.
This was requested during the review of D104218.
Test names and variables in llvm/unittests/ADT/StringRefTest.cpp
that refer to "lower" are renamed to "insensitive" correspondingly.
Unused function aliases with the former method names are left
in place (without any deprecation attributes) for transition purposes.
All references within the monorepo will be changed (with essentially
mechanical changes), and then the old names will be removed in a
later commit.
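A usage sketch of the new spellings (the full list of renamed methods is in the patch):
```
#include "llvm/ADT/StringRef.h"

// New *_insensitive() names; the old *_lower() spellings remain as
// temporary aliases for the transition.
bool demo(llvm::StringRef S) {
  return S.equals_insensitive("llvm") || S.contains_insensitive("VM");
}
```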
Also remove the superfluous method names at the start of doxygen
comments, for the methods that are touched here. (There are more
occurrences of this left in other methods though.) Also remove
duplicate doxygen comments from the implementation file.
Differential Revision: https://reviews.llvm.org/D104819