Symbol tables in Mach-O files can contain symbols with no size that were failing to be combined into a single entry. This resulted in many duplicate entries for the same address and made GSYM files larger.
Differential Revision: https://reviews.llvm.org/D105068
There can be a use-after-free in Value::replaceUsesWithIf()
if two uses point to the same constant. The patch defers handling
of the constants until after the iterator scan.
Another potential issue is that handleOperandChange updates all
the uses in a given Constant, not just the one passed to
ShouldReplace. Added a FIXME comment.
Neither issue is currently exploitable, as the only use of
this call with constants avoids them.
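A simplified sketch of the deferral pattern follows (illustrative only;
details of the actual patch, such as the special case for GlobalValues,
are omitted):

#include "llvm/ADT/STLExtras.h"
#include "llvm/ADT/SmallPtrSet.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/IR/Constants.h"
#include "llvm/IR/Value.h"

// Collect constant users while scanning the use list and only mutate them
// after the scan: mutating a Constant mid-scan can free uses the iterator
// has not visited yet.
void replaceUsesWithIfSketch(llvm::Value *Old, llvm::Value *New,
                             llvm::function_ref<bool(llvm::Use &)> ShouldReplace) {
  llvm::SmallVector<llvm::Constant *, 8> Consts;
  llvm::SmallPtrSet<llvm::Constant *, 8> Visited;
  for (llvm::Use &U : llvm::make_early_inc_range(Old->uses())) {
    if (!ShouldReplace(U))
      continue;
    if (auto *C = llvm::dyn_cast<llvm::Constant>(U.getUser())) {
      if (Visited.insert(C).second) // defer each constant exactly once
        Consts.push_back(C);
      continue;
    }
    U.set(New); // non-constant users are safe to update in place
  }
  // Safe now: the scan of Old's use list has finished. Note this updates
  // all matching operands in each constant, not just the accepted Use.
  for (llvm::Constant *C : Consts)
    C->handleOperandChange(Old, New);
}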
Differential Revision: https://reviews.llvm.org/D105061
Now that the OpenMPOpt module pass includes important optimizations for
removing globalization from offloading regions, it should be run at a
lower optimization level.
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D105056
Add UNIQUED and DISTINCT properties in Metadata.def and use them to
implement restrictions on the `distinct` property of MDNodes:
* DIExpression can currently be parsed from IR or read from bitcode
as `distinct`, but this property is silently dropped when printing
to IR. This causes accepted IR to fail to round-trip. As DIExpression
appears inline at each use in the canonical form of IR, it cannot
actually be `distinct` anyway, as there is no syntax to describe it.
* Similarly, DIArgList is conceptually always uniqued. It is currently
restricted to only appearing in contexts where there is no syntax for
`distinct`, but for consistency it is treated equivalently to
DIExpression in this patch.
* DICompileUnit is already restricted to always being `distinct`, but
along with adding general support for the inverse restriction, I went
ahead and described this in Metadata.def and updated the parser to be
general. Future nodes which have this restriction can share this
support.
The new UNIQUED property applies to DIExpression and DIArgList, and
forbids them to be `distinct`. It also implies they are canonically
printed inline at each use, rather than via MDNode ID.
The new DISTINCT property applies to DICompileUnit, and requires it to
be `distinct`.
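In the parser, the effect boils down to a check along these lines (a
hypothetical illustration of the restriction, not the actual LLParser
code; the flag names are invented):

// Invented names; this shows only the shape of the restriction above.
bool isDistinctAllowed(bool IsDistinct, bool IsUniquedOnly, bool IsDistinctOnly) {
  if (IsDistinct && IsUniquedOnly)
    return false; // error: 'distinct' not allowed for this node kind
  if (!IsDistinct && IsDistinctOnly)
    return false; // error: this node kind must be 'distinct'
  return true;
}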
A potential alternative change is to forbid the non-inline syntax for
DIExpression entirely, as is done implicitly for DIArgList by requiring
it to appear in the context of a function. For example, we would forbid:
!named = !{!0}
!0 = !DIExpression()
Instead we would only accept the equivalent inlined version:
!named = !{!DIExpression()}
This essentially removes the ability to create a `distinct` DIExpression
by construction, as there is no syntax for `distinct` inline. If this
patch is accepted as-is, the result would be that the non-canonical
version is accepted, but the following would be an error and produce a diagnostic:
!named = !{!0}
; error: 'distinct' not allowed for !DIExpression()
!0 = distinct !DIExpression()
Also update some documentation to consistently use the inline syntax for
DIExpression, and to describe the restrictions on `distinct` for nodes
where applicable.
Reviewed By: StephenTozer, t-tye
Differential Revision: https://reviews.llvm.org/D104827
Currently, AsmWriter will stick uselistorder directives for global
values inside individual functions. This doesn't make a lot of sense,
and interacts badly with D104950, as use list order adjustments will
be performed while still working on a forward reference.
This patch instead always prints uselistorder directives for globals
at the module level. This isn't really compatible with the previously
used implementation approach. Rather than walking through all values
again, use the OrderMap (after stabilizing its order) to go through
all values and compute the use list shuffles for them. Classify them
per-function, or nullptr for globals.
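Conceptually, the classification looks something like the following
(illustrative sketch; `UseListShuffle`, `orderedValues`, and
`computeShuffle` are assumed helpers, not actual LLVM APIs):

#include "llvm/ADT/DenseMap.h"
#include "llvm/IR/Argument.h"
#include "llvm/IR/Instructions.h"
#include <vector>

// Walk the stabilized OrderMap, compute the use-list shuffle for each
// value, and bucket it by enclosing function; globals go under nullptr
// and are printed once at module level.
llvm::DenseMap<const llvm::Function *, std::vector<UseListShuffle>>
classifyShuffles(const OrderMap &OM) {
  llvm::DenseMap<const llvm::Function *, std::vector<UseListShuffle>> Result;
  for (const llvm::Value *V : orderedValues(OM)) {
    const llvm::Function *F = nullptr; // module level unless proven otherwise
    if (auto *I = llvm::dyn_cast<llvm::Instruction>(V))
      F = I->getFunction();
    else if (auto *A = llvm::dyn_cast<llvm::Argument>(V))
      F = A->getParent();
    if (auto Shuffle = computeShuffle(V, OM)) // optional-like result assumed
      Result[F].push_back(std::move(*Shuffle));
  }
  return Result;
}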
Even independently of D104950, this seems to fix a few
verify-uselistorder failures. Conveniently, there is even a
pre-existing failing test that this fixes.
Differential Revision: https://reviews.llvm.org/D104976
We could use a bigger hammer and bail out on any constant
expression, but there's a regression test that appears to
validly do the transform (although it may not have been
intending to check that optimization).
I added an assertion in D91816 (documenting behavior added in D93422)
that callers and callees with mismatched fn attr's related to stack
protectors should not occur unless the callee was attributed
always_inline.
This falls apart when a call, invoke, or callbr (any instruction
inheriting from CallBase) itself has an always_inline attribute. Clang
will emit such attributes on Instructions when __attribute__((flatten))
is used to recursively force inlining from a caller.
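For example (a minimal reproduction of the scenario, not the original
report):

// `flatten` on the caller makes Clang attach always_inline to each call
// site (the CallBase), even though the callee function itself carries no
// always_inline attribute.
int callee(int x); // imagine this was compiled with a different
                   // stack-protector setting than the caller

__attribute__((flatten))
int caller(int x) {
  return callee(x); // this call instruction carries always_inline
}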
Since these assertions only had the caller and callee Functions, and not
the call site (CallBase derived classes), we would have to search the
caller for such instructions to reconstruct the call site information.
But at that point, inlining has already occurred; the call site has
already been removed from the caller.
Remove the assertions, add a unit test for always_inline call sites, and
update the LangRef.
Another curiosity is that the always_inline Attribute on Instructions is
only expanded by the inline pass, not the always_inline pass.
Thanks to @pcc for this report when building Android's RunTime (ART)
interpreter.
Reviewed By: pcc, MaskRay
Differential Revision: https://reviews.llvm.org/D104944
While adding remark-based tests in D104944, I noticed that the existing
tests were passing for the wrong reason: the dynamic allocas were
preventing inlining, not the code I added in D91816.
Rewrite and simplify the test. Add remark based checks to validate we're
preventing inline substitutions for the right reasons.
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D104958
Currently, OpenMPOpt will only check if a function is a kernel before deciding not to internalize it. Any uncalled function that gets internalized will be trivially dead in the module, so this check is unnecessary.
Depends on D102423
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D104890
Relands patch reverted by 61242c0addb120294211d24a97ed89837418cb36
The original patch mistakenly included unrelated tests.
Adds a utility to combine multiple Callables into a single Callable.
This is useful to make constructing a visitor for `std::visit`-like
functions more natural; functions like this will be added in future
patches.
Intended to supersede https://reviews.llvm.org/D99560 by
perfectly-forwarding the combined Callables.
Reviewed By: dblaikie
Differential Revision: https://reviews.llvm.org/D100670
Fix the use-list-order for br instructions by setting the operands in
order of their index to match the use-list-order prediction. The case
where this matters is when there is a condition but the if-true and
if-false branches are identical.
Bug was found when reviewing failures pointed at by
https://reviews.llvm.org/D104950. Fix is similar to
3cf415c6c367ced43175ebd1dc4bd9582c7f5376.
Differential Revision: https://reviews.llvm.org/D104959
Ignore the ARC operand bundle on a call if the call's return type is void.
Instead of trying hard to prevent global optimization passes such as
deadargelim from changing the return type to void, just ignore the
bundle if the return type is void. Clang currently emits calls to
@llvm.objc.clang.arc.noop.use, which consumes the function call result,
immediately after the function call to prevent changes to the return
type, but optimization passes can delete the call to
@llvm.objc.clang.arc.noop.use if the function call doesn't return, which
enables deadargelim to change the return type.
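In terms of the check, this amounts to something like the following
(simplified sketch, not the exact patch):

#include "llvm/IR/InstrTypes.h"

// Treat the bundle as absent when the call produces no value, since there
// is no result for @llvm.objc.clang.arc.noop.use to consume.
bool bundleApplies(const llvm::CallBase &CB) {
  return !CB.getType()->isVoidTy();
}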
rdar://76671438
Differential Revision: https://reviews.llvm.org/D103062
- In the caller of the overridden `parseStatement` function (i.e. `AsmParser::Run()`), in the case of an error **and** if we're not at the start of the statement, we "eat" up until the end of the current statement, so we don't have to process it again.
- However, in the HLASMAsmParser class, if an error occurs at the very start of the statement (for example, when the HLASMAsmParser is invoked to parse a GNU directive), we error out but never advance to the next token. We simply keep processing the same error over and over (partly because we're still at the start of the statement).
- To remedy this, when the `parseAsHLASMLabel` function fails, we "eat" until the end of the statement before returning, so we don't process it anymore, as sketched below.
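Illustrative sketch of the fix (simplified; `eatToEndOfStatement()` is the
existing AsmParser helper, and the surrounding control flow is
paraphrased):

if (parseAsHLASMLabel(Info, SI)) {
  // Consume the rest of the statement so the next parseStatement call
  // makes progress instead of re-reporting the same error forever.
  eatToEndOfStatement();
  return true; // the error has already been emitted
}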
Reviewed By: uweigand
Differential Revision: https://reviews.llvm.org/D104869
This reverts commit 51e434fc2590d1d3ffa6545cd07290a238db2b88 because of a
build bot failure in test-suite::GCC-C-execute-pr60960.test that I need to
investigate.
There is a constraint that coro.suspend instructions need to be in their
own blocks. The coro split pass initially creates IR that obeys this constraint
(which is later checked). Sinking rematerializable instructions into these
blocks breaks that constraint.
Instead, rematerialize in the suspend block's single predecessor.
Differential Revision: https://reviews.llvm.org/D104051
This intrinsic blocks floating-point transformations by the optimizer.
Author: Pengfei
Reviewed By: LuoYuanke, Andy Kaylor, Craig Topper, kpn
Differential Revision: https://reviews.llvm.org/D99675
Previously, xscale was known to everything apart
from the ELF streamer, so we would crash as soon
as you tried to output an object file.
Reviewed By: nickdesaulniers
Differential Revision: https://reviews.llvm.org/D104776
On AIX, the alignment implementation aligns the storage to the
preferred alignment of a type instead of its required alignment.
Macro-guard these tests for AIX and have them pass when the "reference
alignment" is less than or equal to the observed alignment. In other
words, the alignment applied is at least as strict as the required
alignment.
Reviewed By: hubert.reinterpretcast
Differential Revision: https://reviews.llvm.org/D104786
This patch relands https://reviews.llvm.org/D104454, but fixes some failing
builds on macOS, which apparently has a different definition of size_t
that caused an 'ambiguous operator overload' error for the implicit
conversion of TypeSize to a scalar value.
This reverts commit b732e6c9a8438e5204ac96c8ca76f9b11abf98ff.
The peephole optimizer should not introduce sub-register definitions,
as they are illegal in the machine SSA phase. This patch modifies
the optimizer to not emit them.
Reviewed By: arsenm
Differential Revision: https://reviews.llvm.org/D103408
The compiler crashes on an assertion when casting operands to
PtrToIntInst in cases where ptrtoint appears as a constant-expression
operand to inttoptr: a constant expression cannot be cast to an
Instruction. This patch replaces the cast to PtrToIntInst with a cast to
Operator, which also matches the constant-expression form.
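A sketch of the pattern (simplified; PtrToIntOperator is shown here as
one way to express the Operator-based cast, and may not be the exact
class the patch uses):

#include "llvm/IR/Operator.h"

// dyn_cast to PtrToIntOperator matches both a ptrtoint instruction and a
// ptrtoint constant expression, whereas cast<PtrToIntInst> asserts on the
// constant-expression form.
llvm::Value *strippedPointer(llvm::Value *IntToPtrSrc) {
  if (auto *PTI = llvm::dyn_cast<llvm::PtrToIntOperator>(IntToPtrSrc))
    return PTI->getPointerOperand();
  return nullptr;
}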
Differential Revision: https://reviews.llvm.org/D105002
Adds legalizer, register bank select, and instruction
select support for G_SBFX and G_UBFX. These opcodes generate
scalar or vector ALU bitfield extract instructions for
AMDGPU. The instructions accept either constant or register
values for the offset and width operands.
The 32-bit scalar version is expanded to a sequence that
combines the offset and width into a single register.
There are no 64-bit VGPR bitfield extract instructions, so the
operations are expanded to a sequence of instructions that
implement the operation. If the width is a constant,
then the 32-bit bitfield extract instructions are used.
Moved the AArch64 specific code for creating G_SBFX to
CombinerHelper.cpp so that it can be used by other targets.
Only bitfield extracts with constant offset and width values
are handled currently.
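For reference, the unsigned variant's semantics can be modeled in plain
C++ (an illustration of what the opcode computes, not LLVM code):

#include <cassert>
#include <cstdint>

// G_UBFX: extract `width` bits of `x` starting at bit `offset`,
// zero-extended into the result.
uint32_t ubfx32(uint32_t x, unsigned offset, unsigned width) {
  assert(width >= 1 && offset + width <= 32 && "invalid bitfield");
  uint32_t mask = (width == 32) ? ~0u : ((1u << width) - 1u);
  return (x >> offset) & mask;
}

int main() {
  assert(ubfx32(0xABCD1234u, 8, 8) == 0x12u); // bits [15:8]
  return 0;
}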
Differential Revision: https://reviews.llvm.org/D100149
Increase the number of attributor iterations on a GPU target. I forgot to
change this in D104416.
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D104920
This adds support for Armv9-A's Realm Management Extension, including
three new system registers - MFAR_EL3, GPCCR_EL3 and GPTBR_EL3 - and
four new TLBI instructions.
The reference for the Realm Management Extension can be found at: https://developer.arm.com/documentation/ddi0615/aa.
Based on patches by Victor Campos.
Reviewed By: dmgreen
Differential Revision: https://reviews.llvm.org/D104773
Prior to the changes from D52010, clobbering Thumb's high registers in
inline asm would cause incorrect code to be generated - or an assertion
failure for debug builds. Now that the issue is no longer reproducible,
this patch adds a MIR test to cover that scenario.
Reviewed By: arsenm
Differential Revision: https://reviews.llvm.org/D96335
Currently we allow loops with a fixed-width VF of 1 to vectorize
if the -enable-strict-reductions flag is set. However, the loop vectorizer
will not use ordered reductions if `VF.isScalar()`, and the resulting
vectorized loop will be out of order.
This patch removes `VF.isVector()` when checking if ordered reductions
should be used. Also, instead of converting the FAdds to reductions when
VF = 1, the operands of the FAdds are changed such that the order is
preserved.
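The ordering matters because floating-point addition is not associative;
a strict (in-order) reduction must accumulate sequentially, as in this
plain-C++ model:

// Each add depends on the previous accumulator value, so the adds cannot
// be reassociated (e.g. into partial sums) without potentially changing
// the result.
float strictFAddSum(const float *A, int N) {
  float Acc = 0.0f;
  for (int I = 0; I < N; ++I)
    Acc += A[I];
  return Acc;
}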
Reviewed By: david-arm
Differential Revision: https://reviews.llvm.org/D104533
Sinking scalar operands into predicated triangle regions may allow
merging regions. This patch adds a VPlan-to-VPlan transform that tries
to merge predicated triangle regions after sinking.
Reviewed By: Ayal
Differential Revision: https://reviews.llvm.org/D100260
This adds another small fold for extract of a vdup, between an i32 and
an f32, converting to a BITCAST. This allows some extra folding to
happen,
simplifying the resulting code.
Differential Revision: https://reviews.llvm.org/D104857
The patch reuses the common code to print memory operand addresses as
instruction comments. This helps to align the comments and enables using
target-specific comment markers on targets that implement
`evaluateMemoryOperandAddress()`.
Differential Revision: https://reviews.llvm.org/D104861