The option --no-print-imm-hex was not included in the command guide for
llvm-objdump but appears in the help text. This commit adds it to the
command guide.
Differential Revision: https://reviews.llvm.org/D104717
llvm-objdump had some missing coverage that is fixed by this change:
- A test specifically for --print-imm-hex, and coverage of --no-print-imm-hex
- section-headers.test now checks the aliases --headers and --section-headers
- A test for the use of --private-headers for ELF that checks the output
- A test for ELF program headers
Differential Revision: https://reviews.llvm.org/D103974
If we unroll a loop in the vectorizer (without vectorizing), and the cost model requires an epilogue be generated for correctness, the code generation must actually do so.
With an unmodified opt, the included test case will access memory one element past the expected bound. As a result, this patch fixes a latent miscompile.
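As an illustration (a minimal sketch, not the included test case; names are hypothetical), consider a loop of this shape:

define i32 @sum(i32* %p, i64 %n) {
entry:
  br label %loop

loop:
  %i = phi i64 [ 0, %entry ], [ %i.next, %loop ]
  %acc = phi i32 [ 0, %entry ], [ %acc.next, %loop ]
  %gep = getelementptr inbounds i32, i32* %p, i64 %i
  %v = load i32, i32* %gep
  %acc.next = add i32 %acc, %v
  %i.next = add nuw nsw i64 %i, 1
  %cmp = icmp ult i64 %i.next, %n
  br i1 %cmp, label %loop, label %exit

exit:
  ret i32 %acc.next
}

If this loop is unrolled (interleaved) by 2 and %n is odd, the unrolled body must not be allowed to run the final half-iteration; without a scalar epilogue it would load %p[%n], one element past the bound.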
Differential Revision: https://reviews.llvm.org/D103700
While we might eventually want to disallow allocas that do not have the
alloca-AS set, it seems undesirable to crash on them. Add a cast when
required so that we can support such allocas (at least here).
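A hedged sketch of the situation (no target datalayout shown; address space 5 just stands in for a non-zero alloca address space, as on AMDGPU): an alloca in addrspace(0) is now cast to the expected pointer type instead of causing a crash:

define void @use_alloca() {
  ; alloca without the expected alloca address space (here, 5)
  %x = alloca i32
  ; cast inserted so code expecting an addrspace(5) pointer still works
  %p = addrspacecast i32* %x to i32 addrspace(5)*
  store i32 0, i32 addrspace(5)* %p
  ret void
}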
Differential Revision: https://reviews.llvm.org/D104866
This patch reads machine value numbers from DBG_PHI instructions (marking
where SSA PHIs used to be), and matches them up with DBG_INSTR_REF
instructions that refer to them. Essentially they are two separate parts of
a DBG_VALUE: the place to read the value (register and program position),
and where the variable is assigned that value.
Sometimes these DBG_PHIs can be duplicated, usually by tail duplication.
This corresponds to the SSA structure of the program being destroyed, and
the original PHI being split. When this happens, we run LLVM's standard
SSAUpdater utility to work out what values should appear in which blocks.
The majority of this patch is boilerplate to make use of SSAUpdater.
If there are any additional PHIs on the path between multiple DBG_PHIs and
their using DBG_INSTR_REF, their existence is validated, just in case a
value gets clobbered along the way (see dbg-phis-with-loops.mir for
several examples).
Differential Revision: https://reviews.llvm.org/D86814
The r1 register should be cleared in the prologue of an ISR, as it is
used as the constant zero.
Reviewed By: dylanmckay
Differential Revision: https://reviews.llvm.org/D99467
Suggested on D101074 - add an 'icmp sgt i64 %0, -2147483649' comparison that, with D101074, can fold to 'icmp sge i64 %0, -2147483648', allowing i32 immediate folding
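In IR terms (Alive2-style source/target functions, names illustrative), the added comparison and the fold it enables look like:

define i1 @src(i64 %x) {
  %c = icmp sgt i64 %x, -2147483649
  ret i1 %c
}

; equivalent form whose constant fits in a signed 32-bit immediate:
define i1 @tgt(i64 %x) {
  %c = icmp sge i64 %x, -2147483648
  ret i1 %c
}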
This patch fixes a crash when the target instruction for sinking is
dead. In that case, no recipe is created and trying to get the recipe
for it results in a crash. To ensure all sink targets are alive, find and
use the first preceding live instruction.
Note that the case where the sink source is dead is already handled.
Found by
https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=35320
Reviewed By: Ayal
Differential Revision: https://reviews.llvm.org/D104603
Previously in setCostBasedWideningDecision if we encountered an
invariant store we just assumed that we could scalarize the store
and called getUniformMemOpCost to get the associated cost.
However, for scalable vectors this is not an option because it is
not currently possible to scalarize the store. At the moment we
crash in VPReplicateRecipe::execute when trying to scalarize the
store.
Therefore, I have changed setCostBasedWideningDecision so that if
we are storing a scalable vector out to a uniform address and the
target supports scatter instructions, then we should use those
instead.
Tests have been added here:
Transforms/LoopVectorize/AArch64/sve-inv-store.ll
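A minimal sketch (hypothetical names, not the added test) of the kind of loop involved, with a store to a uniform address:

define void @inv_store(i32* %dst, i32* %src, i64 %n) {
entry:
  br label %loop

loop:
  %i = phi i64 [ 0, %entry ], [ %i.next, %loop ]
  %gep = getelementptr inbounds i32, i32* %src, i64 %i
  %val = load i32, i32* %gep
  ; the store address is loop-invariant (uniform); for scalable vectors it
  ; is now widened using a scatter instead of being scalarized
  store i32 %val, i32* %dst
  %i.next = add nuw nsw i64 %i, 1
  %cmp = icmp ult i64 %i.next, %n
  br i1 %cmp, label %loop, label %exit

exit:
  ret void
}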
Differential Revision: https://reviews.llvm.org/D104624
In all of these, the value must be an instruction for us to succeed anyway,
so change it now, which will hopefully make further changes more straightforward.
This adds a small fold for extract(ARM_BUILD_VECTOR), folding to the
original node. This can help simplify the resulting codegen in some
cases.
Differential Revision: https://reviews.llvm.org/D104860
CMOV conversion first rewrites all CMOVs with a memory load into branches,
then runs a second pass to convert other CMOVs in loops if profitable.
However, the first pass doesn't add the new basic blocks to MachineLoopInfo,
so CMOVs in these blocks are ignored in the subsequent pass.
Reviewed By: pengfei
Differential Revision: https://reviews.llvm.org/D104692
(V * Scale) % X may not produce the same result for every possible value
of V, e.g. if the multiplication overflows. This means we currently
determine NoAlias incorrectly in some cases.
This patch updates LinearExpression to track whether the expression
has NSW and uses that to adjust the scale used for alias checks.
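As a hypothetical illustration of the overflow concern (not the test from this patch): with i8 arithmetic, %v == 86 gives 86 * 3 == 258, which wraps to 2, so an index that modular (GCD-style) reasoning would treat as a multiple of 3 can in fact collide with offset 2 unless the multiplication is known NSW:

define void @may_wrap(i8* %p, i8 %v) {
  %mul = mul i8 %v, 3                           ; no nsw: may wrap
  %idx = zext i8 %mul to i64
  %gep.a = getelementptr inbounds i8, i8* %p, i64 %idx
  %gep.b = getelementptr inbounds i8, i8* %p, i64 2
  store i8 0, i8* %gep.a
  store i8 1, i8* %gep.b
  ret void
}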
Reviewed By: nikic
Differential Revision: https://reviews.llvm.org/D99424
This reverts commit c94cf97b53566a26245c54ea0c41b0dc83daf8a0
since it appears to have broken linaro-clang-armv7-quick build bot
and needs further investigation.
- Add standalone metadata parsing support so that machine metadata nodes
can be populated before, and accessed while, the MIR is parsed.
Reviewed By: arsenm
Differential Revision: https://reviews.llvm.org/D103282
CoroElide pass works only when a post-split coroutine is inlined into another post-split coroutine.
In O0, there is no inlining after CoroSplit, and hence no CoroElide can happen.
It's therefore useless to put the CoroElide pass in the O0 pipeline; it will never be triggered (unless I'm missing something).
Differential Revision: https://reviews.llvm.org/D105066
Mach-O symbol tables can have symbols with no size, and these were failing to get combined into a single entry. This resulted in many duplicate entries for the same address and made gsym files larger.
Differential Revision: https://reviews.llvm.org/D105068
There can be a use-after-free in Value::replaceUsesWithIf()
if two uses point to the same constant. The patch defers handling
of the constants until after the iterator scan.
Another potential issue is that handleOperandChange updates all
the uses in a given Constant, not just the one passed to
ShouldReplace. Added a FIXME comment.
Neither issue is currently exploitable, as the only use of
this call with constants avoids them.
Differential Revision: https://reviews.llvm.org/D105061
Now that the OpenMPOpt module pass includes important optimizations for
removing globalization from offloading regions, it should be run at a lower
optimization level.
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D105056
Add UNIQUED and DISTINCT properties in Metadata.def and use them to
implement restrictions on the `distinct` property of MDNodes:
* DIExpression can currently be parsed from IR or read from bitcode
as `distinct`, but this property is silently dropped when printing
to IR. This causes accepted IR to fail to round-trip. As DIExpression
appears inline at each use in the canonical form of IR, it cannot
actually be `distinct` anyway, as there is no syntax to describe it.
* Similarly, DIArgList is conceptually always uniqued. It is currently
restricted to only appearing in contexts where there is no syntax for
`distinct`, but for consistency it is treated equivalently to
DIExpression in this patch.
* DICompileUnit is already restricted to always being `distinct`, but
along with adding general support for the inverse restriction I went
ahead and described this in Metadata.def and updated the parser to be
general. Future nodes which have this restriction can share this
support.
The new UNIQUED property applies to DIExpression and DIArgList, and
forbids them to be `distinct`. It also implies they are canonically
printed inline at each use, rather than via MDNode ID.
The new DISTINCT property applies to DICompileUnit, and requires it to
be `distinct`.
A potential alternative change is to forbid the non-inline syntax for
DIExpression entirely, as is done with DIArgList implicitly by requiring
it appear in the context of a function. For example, we would forbid:
!named = !{!0}
!0 = !DIExpression()
Instead we would only accept the equivalent inlined version:
!named = !{!DIExpression()}
This essentially removes the ability to create a `distinct` DIExpression
by construction, as there is no syntax for `distinct` inline. If this
patch is accepted as-is, the result would be that the non-canonical
version is accepted, but the following would be an error and produce a diagnostic:
!named = !{!0}
; error: 'distinct' not allowed for !DIExpression()
!0 = distinct !DIExpression()
Also update some documentation to consistently use the inline syntax for
DIExpression, and to describe the restrictions on `distinct` for nodes
where applicable.
Reviewed By: StephenTozer, t-tye
Differential Revision: https://reviews.llvm.org/D104827
Currently, AsmWriter will stick uselistorder directives for global
values inside individual functions. This doesn't make a lot of sense,
and interacts badly with D104950, as use list order adjustments will
be performed while still working on a forward reference.
This patch instead always prints uselistorder directives for globals
at the module level. This isn't really compatible with the previously
used implementation approach. Rather than walking through all values
again, use the OrderMap (after stabilizing its order) to go through
all values and compute the use list shuffles for them. Classify them
per-function, or nullptr for globals.
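For illustration (hypothetical names; directive syntax as in the LangRef), a global's use-list order directive now ends up at module level rather than inside the function that contains its uses:

@g = global i32 0

define void @f() {
  %a = load i32, i32* @g
  %b = load i32, i32* @g
  ret void
}

; printed at module level, not inside @f:
uselistorder i32* @g, { 1, 0 }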
Even independently of D104950, this seems to fix a few
verify-uselistorder failures. Conveniently, there is even a
pre-existing failing test that this fixes.
Differential Revision: https://reviews.llvm.org/D104976
We could use a bigger hammer and bail out on any constant
expression, but there's a regression test that appears to
validly do the transform (although it may not have been
intending to check that optimization).
I added an assertion in D91816 (documenting behavior added in D93422)
that callers and callees with mismatched fn attr's related to stack
protectors should not occur unless the callee was attributed
always_inline.
This falls apart when a call, invoke, or callbr (any instruction
inheriting from CallBase) itself has an always_inline attribute. Clang
will emit such attributes on Instructions when __attribute__((flatten))
is used to recursively force inlining from a caller.
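In IR terms, a sketch (hypothetical names) of a call site that itself carries the attribute, even though the callee is not marked always_inline:

define void @callee() {
  ret void
}

define void @caller() {
  ; the alwaysinline attribute is on the call site, not on @callee
  call void @callee() #0
  ret void
}

attributes #0 = { alwaysinline }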
Since these assertions only had the caller and callee Functions, and not
the call site (CallBase derived classes), we would have to search the
caller for such instructions to reconstruct the call site information.
But at that point, inlining has already occurred; the call site has
already been removed from the caller.
Remove the assertions, add a unit test for always_inline call sites, and
update the LangRef.
Another curiosity is that the always_inline Attribute on Instructions is
only expanded by the inline pass, not the always_inline pass.
Thanks to @pcc for this report, which came up when building Android's
RunTime (ART) interpreter.
Reviewed By: pcc, MaskRay
Differential Revision: https://reviews.llvm.org/D104944
While adding remark-based tests in D104944, I noticed that the tests
that were passing were passing for the wrong reason. They were
passing because the dynamic allocas were preventing inlining, not because
of the code I added in D91816.
Rewrite and simplify the test. Add remark based checks to validate we're
preventing inline substitutions for the right reasons.
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D104958
Currently OpenMPOpt only checks if a function is a kernel before deciding not to internalize it. Any uncalled function that gets internalized will be trivially dead in the module, so this is unnecessary.
Depends on D102423
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D104890