This is for tweaking SHT_SYMTAB sections.
Their sh_info normally contains one greater than the symbol table index
of the last local symbol.
But for creating invalid inputs for test cases it would be convenient
to allow explicitly overriding this field from YAML.
Differential revision: https://reviews.llvm.org/D58779
llvm-svn: 355193
There was a time when we couldn't dump target-specific flags such as
arm-sbrel, so the tests didn't check for them. We can now be more
specific in our tests.
llvm-svn: 355189
This is a small addition to arithmetic operations that improves
the expressiveness of the language.
Differential Revision: https://reviews.llvm.org/D58775
llvm-svn: 355187
This patch allows all forms of values for options to be used at the end
of a group. With this fix, the behavior more closely matches the way GNU
binutils tools handle grouped options. For example, the -j option can be
used with objdump in any of the following ways:
$ objdump -d -j .text a.o
$ objdump -d -j.text a.o
$ objdump -dj .text a.o
$ objdump -dj.text a.o
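As an illustration, here is a hypothetical pair of option declarations
(not the actual llvm-objdump source) that this parsing mode enables;
-d takes no value while -j takes one, yet both carry cl::Grouping:
#include "llvm/Support/CommandLine.h"
using namespace llvm;

// Hypothetical declarations: a value-less option and a value-taking
// option, both with the cl::Grouping formatting modifier, so that
// "-dj.text" parses as -d followed by -j with the value ".text".
static cl::opt<bool> Disassemble("d", cl::desc("Disassemble"),
                                 cl::Grouping);
static cl::opt<std::string> SectionName("j", cl::desc("Section to dump"),
                                        cl::Grouping);

int main(int argc, char **argv) {
  cl::ParseCommandLineOptions(argc, argv);
  return 0;
}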
Differential Revision: https://reviews.llvm.org/D58711
llvm-svn: 355185
If an option that requires a value has a `cl::Grouping` formatting
modifier, it works well as long as it is used at the end of a group
or as a separate argument. However, if the option appears accidentally
in the middle of a group, the program just crashes. This patch prints
an error message instead.
Differential Revision: https://reviews.llvm.org/D58499
llvm-svn: 355184
These were not recognized as potential atomics by the memory legalizer.
The test was passing not because the legalizer did the right thing, but
because it skipped all these instructions. When I fixed the DS
description, the test started to fail because the region address space
had changed from 4 to 2 a while ago.
Differential Revision: https://reviews.llvm.org/D58802
llvm-svn: 355179
Unsigned mul high for MIPS32 is selected into two pseudo instructions,
PseudoMULTu and PseudoMFHI, which use the accumulator register class
ACC64 for some of their operands. Registers in this class have the
appropriate hi and lo registers as subregisters: $lo0 and $hi0 are
subregisters of $ac0, etc.
The mul instruction implicit-defs $lo0 and $hi0 according to
MipsInstrInfo.td. In functions where both mul and PseudoMULTu are
present, fastRegisterAllocator will "run out of registers during
register allocation" because 'calcSpillCost' for $ac0 returns
spillImpossible, since subregisters $lo0 and $hi0 of $ac0 are reserved
by the mul instruction above. The solution is to mark the implicit-defs
of $lo0 and $hi0 as dead in the mul instruction.
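A rough sketch of the idea (not necessarily the exact mechanism of the
patch; the includes assume we are building inside the Mips backend):
#include "MCTargetDesc/MipsMCTargetDesc.h" // Mips::LO0, Mips::HI0
#include "llvm/CodeGen/MachineInstr.h"
#include "llvm/CodeGen/TargetRegisterInfo.h"
using namespace llvm;

// Set the dead flag on mul's implicit defs of $lo0 and $hi0 so that
// calcSpillCost no longer treats $ac0 as impossible to spill.
static void markAccDefsDead(MachineInstr &Mul,
                            const TargetRegisterInfo *TRI) {
  Mul.addRegisterDead(Mips::LO0, TRI); // find the implicit def of $lo0
                                       // and mark it dead
  Mul.addRegisterDead(Mips::HI0, TRI); // likewise for $hi0
}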
Differential Revision: https://reviews.llvm.org/D58715
llvm-svn: 355178
Summary:
ConstIntInfoVec contains elements extracted from the previous function.
In the new PM, releaseMemory() is not called, and the dangling elements
can cause a segfault in findConstantInsertionPoint.
Rename releaseMemory() to cleanup() to convey that it is mandatory, and
call cleanup() in ConstantHoistingPass::runImpl to fix this.
Reviewers: ormris, zzheng, dmgreen, wmi
Reviewed By: ormris, wmi
Subscribers: llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D58589
llvm-svn: 355174
Subtarget features are stored in a std::bitset that has been subclassed. There is a special constructor to allow the tablegen files to provide a list of bits to initialize the std::bitset to. This constructor isn't constexpr and std::bitset doesn't support many constexpr operations either. This results in a static global constructor being used to initialize the feature bitsets in these files at startup.
To fix this I've introduced a new FeatureBitArray class that holds three 64-bit values representing the initial bit values and taught tablegen to emit hex constants for them based on the feature enum values. This makes the tablegen files less readable than they were before. I can add the list of features back as a comment if we think that's important.
I've added a method to convert from this class into the std::bitset subclass we had before. I considered making the new FeatureBitArray class just implement the std::bitset interface we need instead, but thought I'd see how others felt about that first.
I've simplified the interfaces to SetImpliedBits and ClearImpliedBits a little to minimize the number of times we need to convert to the bitset.
This removes about 27K from my local release+asserts build of llc.
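A standalone sketch of the shape of the new class (names and the
conversion helper are simplifications, not the exact llvm/MC
definitions):
#include <bitset>
#include <cstdint>

constexpr unsigned NumFeatureWords = 3;

// Constexpr-constructible, so tablegen can emit hex constants and no
// static global constructor runs at startup.
class FeatureBitArray {
  uint64_t Words[NumFeatureWords];

public:
  constexpr FeatureBitArray(uint64_t W0, uint64_t W1, uint64_t W2)
      : Words{W0, W1, W2} {}

  // Conversion to the std::bitset-based type happens at runtime, on use.
  std::bitset<NumFeatureWords * 64> toBitset() const {
    std::bitset<NumFeatureWords * 64> Result;
    for (unsigned I = 0; I != NumFeatureWords; ++I)
      for (unsigned B = 0; B != 64; ++B)
        if (Words[I] & (uint64_t(1) << B))
          Result.set(I * 64 + B);
    return Result;
  }
};

// Tablegen would emit initializers like this (hex values made up):
constexpr FeatureBitArray ExampleFeatureBits(0x0000000000000004ULL, 0x0ULL,
                                             0x0ULL);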
Differential Revision: https://reviews.llvm.org/D58520
llvm-svn: 355167
Summary:
These sorts of blocks often contain calls to noreturn functions, like
longjmp, throw, or trap. If they don't end the program, they are
"interesting" from the perspective of sanitizer coverage, so we should
instrument them. This was discussed in https://reviews.llvm.org/D57982.
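For example (an illustration, not a test from the patch), the longjmp
block below never returns locally yet does not end the program, so
covering it is still meaningful:
#include <csetjmp>
#include <cstdlib>

static std::jmp_buf Env;

void bailOut(bool Fatal) {
  if (Fatal)
    std::abort();       // genuinely ends the program
  std::longjmp(Env, 1); // "noreturn" here, but execution resumes at the
                        // matching setjmp, so this block is worth
                        // instrumenting
}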
Reviewers: kcc, vitalybuka
Subscribers: llvm-commits, craig.topper, efriedma, morehouse, hiraditya
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D58740
llvm-svn: 355152
There's no reason to limit the DWARF CFI dumper to EM_386 and EM_X86_64;
ELF files could contain DWARF CFI on almost any platform (even 32-bit ARM;
NetBSD uses DWARF CFI on that platform). So start using the DWARF CFI dumper
unconditionally so that we can dump .eh_frame sections on the remaining ELF
platforms as well as in NetBSD binaries.
Differential Revision: https://reviews.llvm.org/D58761
llvm-svn: 355151
Add support for cloning DWARF expressions that contain base type DIE
references in dsymutil.
<rdar://problem/48167812>
Differential Revision: https://reviews.llvm.org/D58534
llvm-svn: 355148
In certain cases, the first non-frame-setup instruction in a function is
a branch. For example, it could be a cbz on an argument. Make sure we
correctly allocate the UnwindHelp, and find an appropriate register to
use to initialize it.
Fixes https://bugs.llvm.org/show_bug.cgi?id=40184
Differential Revision: https://reviews.llvm.org/D58752
llvm-svn: 355136
The basic idea of the pass is to use a circular buffer to log the execution ordering of the functions. We only log the function when it is first executed. We use an 8-byte hash to log the function symbol name.
In this pass, we add three global variables:
(1) an order file buffer: a circular buffer in its own llvm section.
(2) a bitmap for each module: one byte per function to say whether the function has already been executed.
(3) a global index to the order file buffer.
At the function prologue, if the function has not been executed (by checking the bitmap), log the function hash, then atomically increase the index.
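A rough sketch of what the instrumentation conceptually does at each
prologue; symbol names and sizes here are assumptions, not the pass's
actual output:
#include <atomic>
#include <cstdint>

constexpr uint32_t kBufferSlots = 1 << 17; // assumed circular-buffer size
uint64_t OrderFileBuffer[kBufferSlots];    // (1) the order file buffer
uint8_t OrderFileBitmap[4096];             // (2) one byte per function
                                           //     (module-dependent size)
std::atomic<uint32_t> OrderFileIndex{0};   // (3) global buffer index

inline void logFunctionOnce(uint32_t FuncId, uint64_t FuncHash) {
  if (!OrderFileBitmap[FuncId]) {          // not executed before?
    OrderFileBitmap[FuncId] = 1;
    uint32_t Slot = OrderFileIndex.fetch_add(1); // atomic increment
    OrderFileBuffer[Slot % kBufferSlots] = FuncHash;
  }
}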
Differential Revision: https://reviews.llvm.org/D57463
llvm-svn: 355133
Part 2 of CSPGO changes (mostly related to ProfileSummary).
Note that I use a default parameter in setProfileSummary() and getSummary().
This is to break the dependency in clang. I will make the parameter explicit
after changing clang in a separate patch.
Differential Revision: https://reviews.llvm.org/D54175
llvm-svn: 355131
This is another step towards ensuring that we produce the optimal code for reductions,
but there are other potential benefits as seen in the tests diffs:
1. Memory loads may get scalarized resulting in more efficient code.
2. Memory stores may get scalarized resulting in more efficient code.
3. Complex ops like fdiv/sqrt get scalarized, and the scalar instructions may be faster depending on uarch.
4. Even simple ops like addss/subss/mulss/roundss may result in faster operation or less frequency throttling when scalarized, depending on uarch.
The TODO comment suggests 1 or more follow-ups for opcodes that can currently result in regressions.
Differential Revision: https://reviews.llvm.org/D58282
llvm-svn: 355130
Like the other load/store instructions, the "w" register is preferred when
disassembling BPF_STX | BPF_W | BPF_XADD.
v1 -> v2:
- Updated testcase insn-unit.s (Yonghong)
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
llvm-svn: 355127
Supporting sub-register code-gen for XADD is like supporting any other load
and store patterns.
No new instruction is introduced.
lock *(u32 *)(r1 + 0) += w2
has exactly the same underlying insn as:
lock *(u32 *)(r1 + 0) += r2
The BPF_W width modifier guarantees they behave the same at runtime. This
patch merely teaches the BPF back-end that the BPF_W width modifier can
work with the GPR32 register class, and that's all that is needed for
sub-register code-gen support for XADD.
test/CodeGen/BPF/xadd.ll updated to include sub-register code-gen tests.
A new testcase test/CodeGen/BPF/xadd_legal.ll is added to make sure the
legal case passes in all code-gen modes. It also tests the dead Def
check on GPR32. If there is no proper handling like what has been done
inside BPFMIChecking.cpp:hasLiveDefs, then this testcase will fail.
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
llvm-svn: 355126
BPF XADD semantics require that all Defs of XADD be dead, meaning the
result of an XADD insn is never used.
However, the BPF backend hasn't enabled sub-register liveness tracking,
so when the source and destination operands of XADD are GPR32, there is
no sub-register dead info. If we rely on the generic
MachineInstr::allDefsAreDead, then we will raise a false alarm on GPR32
Defs. This was fine while there was no sub-register code-gen support for
XADD; that support is added by the next patch.
To support GPR32 Defs, ideally we could just enable sub-register liveness
tracking in the BPF backend; then allDefsAreDead could work on GPR32
Defs. This requires implementing
TargetSubtargetInfo::enableSubRegLiveness on BPF.
However, the sub-register liveness tracking module inside LLVM is
actually designed for the situation where one register can be split into
more than one sub-register, in which case each sub-register can have its
own liveness, and killing one of them doesn't kill the others. So
tracking liveness for each of them makes sense.
For BPF, each 64-bit register has only one 32-bit sub-register. This is
exactly the case for which LLVM thinks sub-register tracking brings no
benefit, because the live range of the sub-register must always equal
that of its parent register; therefore liveness tracking is disabled even
when the back-end has implemented enableSubRegLiveness. The detailed
information is at r232695:
Author: Matthias Braun <matze@braunis.de>
Date: Thu Mar 19 00:21:58 2015 +0000
Do not track subregister liveness when it brings no benefits
Hence, for BPF, we enhance MachineInstr::allDefsAreDead. Given that the
solo sub-register always has the same liveness as its parent register,
LLVM already attaches an implicit 64-bit register Def whenever there is
a sub-register Def. The liveness of the implicit 64-bit Def is available.
For example, for "lock *(u32 *)(r0 + 4) += w9", the MachineOperand info
could be:
$w9 = XADDW32 killed $r0, 4, $w9(tied-def 0),
implicit killed $r9, implicit-def dead $r9
Even though w9 is not marked as Dead, the parent register r9 is marked as
Dead correctly, and it is safe to use such information for our purpose.
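A simplified sketch of that check (the real logic in BPFMIChecking.cpp
differs in details; the include assumes we are inside the BPF backend):
#include "BPFRegisterInfo.h" // assumed: provides BPF::GPR32RegClass
#include "llvm/CodeGen/MachineInstr.h"
using namespace llvm;

static bool hasLiveDefs(const MachineInstr &MI) {
  for (const MachineOperand &MO : MI.operands()) {
    if (!MO.isReg() || !MO.isDef())
      continue;
    // A GPR32 def carries no usable dead flag without sub-register
    // liveness tracking; its state shows up on the implicit def of the
    // 64-bit parent register, which this loop checks instead.
    if (BPF::GPR32RegClass.contains(MO.getReg()))
      continue;
    if (!MO.isDead())
      return true; // some 64-bit def is live: the XADD result is used
  }
  return false;
}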
v1 -> v2:
- Simplified code logic inside hasLiveDefs. (Yonghong)
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
llvm-svn: 355124
This enables lit to work with unicode file names via mkdir, rm, and redirection.
Lit still uses utf-8 internally, but converts to utf-16 on Windows, or just utf-8
bytes on everything else.
Committed on behalf of Jason Mittertreiner
Differential Revision: https://reviews.llvm.org/D56754
llvm-svn: 355122
This is part of a transform that may be done in the backend:
D13757
...but it should always be beneficial to fold this sooner in IR
for all targets.
https://rise4fun.com/Alive/vaiW
Name: sext add nsw
%add = add nsw i8 %i, C0
%ext = sext i8 %add to i32
%r = add i32 %ext, C1
=>
%s = sext i8 %i to i32
%r = add i32 %s, sext(C0)+C1
Name: zext add nuw
%add = add nuw i8 %i, C0
%ext = zext i8 %add to i16
%r = add i16 %ext, C1
=>
%s = zext i8 %i to i16
%r = add i16 %s, zext(C0)+C1
llvm-svn: 355118
We don't have any combines that can look through a bitcast to truncate a build vector of constants. So the truncate will stick around and give us something like this pattern: (binop (trunc X), (trunc (bitcast (build_vector)))), which has two truncates in it. That will be reversed by hoistLogicOpWithSameOpcodeHands in the generic DAG combiner, causing an infinite loop.
Even if we had a combine for (truncate (bitcast (build_vector))), I think it would need to be implemented in getNode otherwise DAG combiner visit ordering would probably still visit the binop first and reverse it. Or combineTruncatedArithmetic would need to do its own constant folding.
Differential Revision: https://reviews.llvm.org/D58705
llvm-svn: 355116
Dsymutil gets library member information through the ambiguous notation
/path/to/archive.a(member.o). The current logic we use would get
confused by additional parentheses. Using rfind mitigates this issue.
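A minimal sketch of the idea with a made-up path:
#include <cassert>
#include <string>

int main() {
  // Parentheses earlier in the path must not confuse the parse, so
  // search for the last '(' rather than the first one.
  std::string Path = "/path(1)/archive.a(member.o)";
  size_t Open = Path.rfind('(');
  assert(Path.substr(Open + 1, Path.size() - Open - 2) == "member.o");
  return 0;
}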
llvm-svn: 355114
Summary:
In the clang UI, replaces -mthread-model posix with -matomics as the
source of truth on threading. In the backend, replaces
-thread-model=posix with the atomics target feature, which is now
collected on the WebAssemblyTargetMachine along with all other used
features. These collected features will also be used to emit the
target features section in the future.
The default configuration for the backend is thread-model=posix and no
atomics, which was previously an invalid configuration. This change
makes the default valid because the thread model is ignored.
A side effect of this change is that objects are never emitted with
passive segments. It will instead be up to the linker to decide
whether sections should be active or passive based on whether atomics
are used in the final link.
Reviewers: aheejin, sbc100, dschuff
Subscribers: mehdi_amini, jgravelle-google, hiraditya, sunfish, steven_wu, dexonsmith, rupprecht, jfb, jdoerfert, cfe-commits, llvm-commits
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D58742
llvm-svn: 355112
This should have been part of r355110, but my brain isn't quite awake yet, despite the coffee. Per the original submit comment... Doing scalar promotion w/o being able to prove the alignment of the hoisted load or sunk store is a bug. Update tests to actually show the alignment so that impact of the patch which fixes this can be seen.
llvm-svn: 355111
Doing scalar promotion w/o being able to prove the alignment of the hoisted load or sunk store is a bug. Update tests to actually show the alignment so that impact of the patch which fixes this can be seen.
llvm-svn: 355110
Second part of D58593.
Compute precise overflow conditions based on all known bits, rather
than just the sign bits. Unsigned a - b overflows iff a < b, and we
can determine whether this always/never happens based on the minimal
and maximal values achievable for a and b subject to the known bits
constraint.
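A self-contained sketch that mirrors this reasoning (not the exact
ValueTracking code):
#include <cstdint>

struct KnownBits64 {
  uint64_t Zero; // bits known to be 0
  uint64_t One;  // bits known to be 1
  uint64_t minValue() const { return One; }   // unknown bits all 0
  uint64_t maxValue() const { return ~Zero; } // unknown bits all 1
};

enum class OverflowResult { Always, Never, Maybe };

// Unsigned a - b overflows iff a < b, so compare the extreme values
// achievable under the known-bits constraints.
OverflowResult unsignedSubOverflow(KnownBits64 A, KnownBits64 B) {
  if (A.minValue() >= B.maxValue())
    return OverflowResult::Never;  // a >= b for all consistent a, b
  if (A.maxValue() < B.minValue())
    return OverflowResult::Always; // a < b for all consistent a, b
  return OverflowResult::Maybe;
}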
llvm-svn: 355109
Summary:
This was removed in r349068, but it is needed when llvm is compiled
using the non-default c++ standard library on a platform.
Reviewers: sylvestre.ledru, infinity0, mgorny, cuviper
Reviewed By: sylvestre.ledru
Subscribers: jdoerfert, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D57859
llvm-svn: 355107
Summary:
It is mentioned in the document of DTU that "It is illegal to submit any update that has already been submitted, i.e., you are supposed not to insert an existent edge or delete a nonexistent edge." It is dangerous to violate this rule because DomTree and PostDomTree occasionally crash on this scenario.
This patch fixes `MergeBlockIntoPredecessor`, making it conformant to this precondition.
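A minimal usage sketch of that contract (BB and Succ are hypothetical
blocks):
#include "llvm/Analysis/DomTreeUpdater.h"
#include "llvm/IR/BasicBlock.h"
using namespace llvm;

// Queue the deletion of BB -> Succ exactly once, and only when that
// edge really existed; resubmitting it, or deleting a nonexistent edge,
// violates the DTU contract.
void removeEdgeOnce(DomTreeUpdater &DTU, BasicBlock *BB, BasicBlock *Succ) {
  DTU.applyUpdates({{DominatorTree::Delete, BB, Succ}});
}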
Reviewers: kuhar, brzycki, chandlerc
Reviewed By: brzycki
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D58444
llvm-svn: 355105
This extends the existing support for shufflevector to handle cases like
<2 x float>, which we can implement by concatenating the vectors and using a TBL1.
Differential Revision: https://reviews.llvm.org/D58684
llvm-svn: 355104