Some of the conditions necessary to produce ccmp sequences were only
checked in recursive calls to emitConjunctionDisjunctionTree() after
some of the earlier expressions were already built. Move all checks over
to isConjunctionDisjunctionTree() so they are all checked before we
start emitting instructions.
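A hedged sketch of the resulting call pattern, roughly as it would appear
in AArch64ISelLowering.cpp (the exact signatures here are assumptions, not
a quote of the code):
bool CanNegate;
if (!isConjunctionDisjunctionTree(Val, CanNegate))
  return SDValue();               // bail out before any node is built
AArch64CC::CondCode OutCC;
SDValue CCmp = emitConjunctionDisjunctionTree(DAG, Val, OutCC);
assert(CCmp && "all eligibility checks already passed up front");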
Also rename some variables to better reflect their usage.
llvm-svn: 258605
Previously it failed to add NumArgRegs to the offset and so clobbered an
already-used register. Now just start the numbering after the arg regs
and don't duplicate the add. Test coverage for this coming shortly with
the implementation of byval.
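A schematic sketch of the new numbering, with made-up names (the commit
itself does not show the code):
// Number locals starting after the argument registers, so no later
// "add NumArgRegs" is needed and an argument register can never be
// handed out again by mistake.
static unsigned getLocalRegNumber(unsigned NumArgRegs, unsigned LocalIdx) {
  return NumArgRegs + LocalIdx;
}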
llvm-svn: 258597
Cleanups in C++ are a little weird. They are only guaranteed to be
reliably executed if, and only if, there is a viable catch handler which
can handle the exception.
This means that reachability of a cleanup is lexically determined by its
being nested within a try-block which unwinds to a catch. It *cannot*
be reasoned about by examining the control flow edges leaving a cleanup.
Usually this is not a problem. It becomes a problem when there are *no*
edges out of a cleanup because we believed that code post-dominated by
the cleanup is dead. In LLVM's case, this code is what informs the
personality routine about the presence of a suitable catch handler.
However, the lack of edges to that catch handler makes the handler
become unreachable which causes us to remove it. By removing the
handler, the cleanup becomes unreachable.
Instead, inject a catch-all handler with every cleanup that has no
unwind edges. This will allow us to properly unwind the stack.
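For illustration, here is a hedged C++ sketch of the general shape of code
involved (hypothetical, not taken from PR25997): the cleanup never returns,
so it has no unwind edge in the IR, yet it is the catch handler below that
makes running the cleanup mandatory in the first place.
[[noreturn]] void fatal_error(const char *msg);
void may_throw();

struct Logger {
  ~Logger() { fatal_error("giving up"); }  // cleanup ends in a noreturn call
};

void f() {
  try {
    Logger log;
    may_throw();   // if this throws, ~Logger() must run, but only because
                   // the catch (...) below can handle the exception
  } catch (...) {
  }
}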
This fixes PR25997.
llvm-svn: 258580
Fix the triggering of the report_fatal_error() in
MachOObjectFile::getSymbolByIndex() when a Mach-O file has
a symbol table load command but the number of symbols is zero.
The code in MachOObjectFile::symbol_begin_impl() should not assume
there is a symbol at index 0 in cases where there is no symbol table
load command or the count of symbols is zero, so I fixed that as well.
MachOObjectFile::symbol_end_impl() needed the same fix for a missing
symbol table or one with zero entries.
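A hedged sketch of the intended begin() behavior (simplified; the helper
that locates the first nlist entry is hypothetical and stands in for the
real offset arithmetic):
basic_symbol_iterator MachOObjectFile::symbol_begin_impl() const {
  DataRefImpl DRI;
  // No LC_SYMTAB, or an LC_SYMTAB whose nsyms is zero: begin() must equal
  // end() instead of pretending there is a symbol at index 0.
  if (!SymtabLoadCmd || getSymtabLoadCommand().nsyms == 0)
    return basic_symbol_iterator(SymbolRef(DRI, this));
  DRI.p = getFirstSymbolEntryAddress();  // hypothetical helper
  return basic_symbol_iterator(SymbolRef(DRI, this));
}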
The code in MachOObjectFile::getSymbolByIndex() should trigger
the report_fatal_error() for the programmatic error of being passed
any index when there is no symbol table load command, rather than
return the end iterator; that too is fixed. Note there is no test case,
as this is a programmatic error.
The test case using the file macho-invalid-bad-symbol-index has
a symbol table load command whose number of symbols (nsyms)
is zero, which was incorrectly testing the bad triggering of the
report_fatal_error() in MachOObjectFile::getSymbolByIndex().
This test case is an invalid Mach-O file, but not for that reason.
It appears this Mach-O file used to have an nsyms value of 11;
what makes it invalid is that the counts and indexes into the
symbol table in the dynamic symbol table load command are now
invalid because the number of symbol table entries (nsyms) is
now zero. This can be seen with the existing llvm-objdump:
% llvm-objdump -private-headers macho-invalid-bad-symbol-index
…
Load command 4
cmd LC_SYMTAB
cmdsize 24
symoff 4216
nsyms 0
stroff 4392
strsize 144
Load command 5
cmd LC_DYSYMTAB
cmdsize 80
ilocalsym 0
nlocalsym 8 (past the end of the symbol table)
iextdefsym 8 (greater than the number of symbols)
nextdefsym 2 (past the end of the symbol table)
iundefsym 10 (greater than the number of symbols)
nundefsym 1 (past the end of the symbol table)
...
And the native darwin tools generate an error for this file:
% nm macho-invalid-bad-symbol-index
nm: object: macho-invalid-bad-symbol-index truncated or malformed object (ilocalsym plus nlocalsym in LC_DYSYMTAB load command extends past the end of the symbol table)
I added new checks for these indexes and sizes in the constructor
of MachOObjectFile, and added comments for what would be
proper diagnostic messages.
And changed the test case using macho-invalid-bad-symbol-index
to test for the new error now produced.
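A hedged sketch of the kind of range check added (the LC_SYMTAB/LC_DYSYMTAB
field names are real; EC is the constructor's std::error_code out-parameter
of that era, and the analogous checks for the extdefsym and undefsym ranges
are omitted):
MachO::symtab_command Symtab = getSymtabLoadCommand();
MachO::dysymtab_command Dysymtab = getDysymtabLoadCommand();
// Mirrors the darwin tools' diagnostic: ilocalsym plus nlocalsym must not
// extend past the end of the symbol table.
if (Dysymtab.ilocalsym > Symtab.nsyms ||
    Dysymtab.nlocalsym > Symtab.nsyms - Dysymtab.ilocalsym) {
  // proper diagnostic would be: "ilocalsym plus nlocalsym in LC_DYSYMTAB
  // load command extends past the end of the symbol table"
  EC = object_error::parse_failed;
  return;
}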
Also added a test with a valid Mach-O file with a symbol table
load command where the number of symbols is zero that shows
the report_fatal_error() is not called.
llvm-svn: 258576
Summary:
Fix the issue with the most recently discovered unit receiving much less attention.
Note: this is the second attempt (prev: r258473); the libc++ build is now fixed.
Reviewers: aizatsky, kcc
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D16487
llvm-svn: 258571
Summary:
The test for returnBB was flipped, which could cause the ARM load/store optimization pass to use callee-saved regs in returnBB when shrink-wrap is used.
Reviewers: t.p.northover, apazos, MatzeB
Subscribers: mcrosier, zzheng, aemerson, llvm-commits, rengolin
Differential Revision: http://reviews.llvm.org/D16434
llvm-svn: 258569
When we build LLVM with externalized debug info, all debugging and
symbolication related data is extracted into dSYM files prior to
stripping. As such, there is no need to preserve local symbols in LLVM
binaries after dSYM creation.
This shrinks libLLVM.dylib from 58MB to 55MB on my system.
llvm-svn: 258566
If the intrinsic is overloaded and works on multiple types,
it cannot resolve to a single corresponding builtin and requires
handling in clang. This just causes crashes now.
llvm-svn: 258559
The intrinsic target prefix should match the target name
as it appears in the triple.
This is not yet complete, but gets most of the important ones.
llvm.AMDGPU.* intrinsics used by mesa and libclc are still handled
for compatibility for now.
llvm-svn: 258557
Summary:
Make sure that any new and optimized objects created during GlobalOpt copy all the attributes from the base object.
A good example of improper behavior in the current implementation is the section information associated with a GlobalObject. If a section was set for it, and GlobalOpt is creating or modifying a new object based on this one (often copying the original name), then without this change the new object will be placed in the default section, giving the new variable inappropriate properties.
The argument here is that if the customer specified a section for a variable, any changes the compiler makes to it should not move it out of that section.
Moreover, any other properties covered by copyAttributesFrom() should also be propagated.
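A hedged sketch of the pattern rather than the actual GlobalOpt diff (the
helper name and the ".opt" suffix are made up):
#include "llvm/IR/GlobalVariable.h"
#include "llvm/IR/Module.h"
using namespace llvm;

// When building a transformed replacement for GV, carry over the section,
// alignment, visibility, etc. instead of leaving them at their defaults.
static GlobalVariable *createReplacementGlobal(Module &M, GlobalVariable *GV,
                                               Type *NewTy, Constant *NewInit) {
  auto *NewGV = new GlobalVariable(M, NewTy, GV->isConstant(),
                                   GV->getLinkage(), NewInit,
                                   GV->getName() + ".opt");
  NewGV->copyAttributesFrom(GV); // keeps the user-specified section, etc.
  return NewGV;
}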
Reviewers: jmolloy, joker-eph, joker.eph
Subscribers: slarin, joker.eph, rafael, tobiasvk, llvm-commits
Differential Revision: http://reviews.llvm.org/D16074
llvm-svn: 258556
Fix an MSVC build error:
\src\llvm-rw\include\llvm/Support/AlignOf.h(254) :
error C2872: 'detail' : ambiguous symbol
could be 'llvm::detail'
or 'llvm::support::detail'
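For context, a hedged standalone illustration of how such an ambiguity
arises (the struct names are illustrative; the namespaces mirror the error
text):
namespace llvm {
namespace detail { struct AlignerImpl {}; }
namespace support {
namespace detail { struct PickerImpl {}; }
} // namespace support
} // namespace llvm

using namespace llvm;
using namespace llvm::support;

// Both using-directives put a namespace named 'detail' in scope, so an
// unqualified reference is ambiguous (MSVC reports error C2872):
//   detail::AlignerImpl A;
llvm::detail::AlignerImpl B;   // fully qualifying the name resolves it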
llvm-svn: 258553
Summary:
This change adds a `-spp-no-statepoints` flag to PlaceSafepoints that
bypasses the code that wraps newly introduced polls and existing calls
in gc.statepoint. With `-spp-no-statepoints` enabled, PlaceSafepoints
effectively becomes a safepoint **poll** insertion pass.
The eventual goal is to "constant fold" this option, along with
`-rs4gc-use-deopt-bundles` to `true`, once clients using gc.statepoint
are okay doing so.
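A hedged sketch of how the flag might be wired up inside PlaceSafepoints.cpp
(only the flag name comes from this change; the declaration is the usual
cl::opt pattern and the guarded step is schematic):
static cl::opt<bool> NoStatepoints(
    "spp-no-statepoints", cl::Hidden, cl::init(false),
    cl::desc("Only insert safepoint polls; do not wrap them in gc.statepoint"));

// later, after the polls have been inserted:
if (!NoStatepoints) {
  // wrap the inserted polls and the existing call sites in gc.statepoint,
  // exactly as the pass did unconditionally before this change
}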
Reviewers: pgavlin, reames, JosephTremoulet
Subscribers: sanjoy, mcrosier, llvm-commits
Differential Revision: http://reviews.llvm.org/D16439
llvm-svn: 258551
Make the variable a member of the writer trait object, which is now
owned by the writer. Also use a different generator interface
to pass the infoObject from the writer.
llvm-svn: 258544
The promote alloca pass didn't handle these intrinsics and crashed.
These intrinsics should accept any address space, but for now just
erase them to avoid breaking.
llvm-svn: 258537
Using an array instead of ArrayRef would allow type inference, but
(short of using C99) one would still need to write
typedef uint16_t VT[];
LE.write(VT{0x1234, 0x5678});
llvm-svn: 258535
The current behavior is incorrect, as the two CCs returned by
changeFPCCToAArch64CC, intended to be OR'ed, are instead used
in an AND ccmp chain.
Consider:
define i32 @t(float %a, float %b, float %c, float %d, i32 %e, i32 %f) {
%cc1 = fcmp one float %a, %b
%cc2 = fcmp olt float %c, %d
%and = and i1 %cc1, %cc2
%r = select i1 %and, i32 %e, i32 %f
ret i32 %r
}
Assuming (%a < %b) and (%c < %d), we used to do:
fcmp s0, s1 # nzcv <- 1000
orr w8, wzr, #0x1 # w8 <- 1
csel w9, w8, wzr, mi # w9 <- 1
csel w8, w8, w9, gt # w8 <- 1
fcmp s2, s3 # nzcv <- 1000
cset w9, mi # w9 <- 1
tst w8, w9 # (w8 & w9) == 1, so: nzcv <- 0000
csel w0, w0, w1, ne # w0 <- w0
We now do:
fcmp s2, s3 # nzcv <- 1000
fccmp s0, s1, #0, mi # mi, so: nzcv <- 1000
fccmp s0, s1, #8, le # !le, so: nzcv <- 1000
csel w0, w0, w1, pl # !pl, so: w0 <- w1
In other words, we transformed:
(c < d) && ((a < b) || (a > b))
into:
(c < d) && (a u>= b) && (a u<= b)
whereas, per De Morgan's, we wanted:
(c < d) && !((a u>= b) && (a u<= b))
Note that this problem doesn't occur in the test-suite.
changeFPCCToAArch64CC produces disjunct CCs; here, one -> mi/gt.
We can't represent that in the fccmp chain; it can't express
arbitrary OR sequences, as one comment explains:
In general we can create code for arbitrary "... (and (and A B) C)"
sequences. We can also implement some "or" expressions, because
"(or A B)" is equivalent to "not (and (not A) (not B))" and we can
implement some negation operations. [...] However there is no way
to negate the result of a partial sequence.
Instead, introduce changeFPCCToANDAArch64CC (sketched below), which
produces the conjunct cond codes:
- (a one b)
== ((a olt b) || (a ogt b))
== ((a ord b) && (a une b))
- (a ueq b)
== ((a uno b) || (a oeq b))
== ((a ule b) && (a uge b))
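A hedged sketch of the new helper, as it might sit in AArch64ISelLowering.cpp
(the condition-code pairs follow the derivation above; treat the body as an
approximation, not a quote):
static void changeFPCCToANDAArch64CC(ISD::CondCode CC,
                                     AArch64CC::CondCode &CondCode,
                                     AArch64CC::CondCode &CondCode2) {
  CondCode2 = AArch64CC::AL;
  switch (CC) {
  default:
    // Everything else already expands to a single, conjunct-safe CC.
    changeFPCCToAArch64CC(CC, CondCode, CondCode2);
    assert(CondCode2 == AArch64CC::AL && "unexpected two-CC expansion");
    break;
  case ISD::SETONE:
    CondCode = AArch64CC::VC;   // (a ord b)
    CondCode2 = AArch64CC::NE;  // && (a une b)
    break;
  case ISD::SETUEQ:
    CondCode = AArch64CC::PL;   // (a uge b)
    CondCode2 = AArch64CC::LE;  // && (a ule b)
    break;
  }
}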
Note that, at first, one might think that, when PushNegate is true,
we should use the disjunct CCs, in effect doing:
(a || b)
= !(!a && !(b))
= !(!a && !(b1 || b2)) <- changeFPCCToAArch64CC(b, b1, b2)
= !(!a && !b1 && !b2)
However, we can take advantage of the fact that the CC is already
negated, which lets us avoid special-casing PushNegate and doing
the simpler to reason about:
(a || b)
= !(!a && (!b))
= !(!a && (b1 && b2)) <- changeFPCCToANDAArch64CC(!b, b1, b2)
= !(!a && b1 && b2)
This makes both emitConditionalCompare cases behave identically,
and produces correct ccmp sequences for the 2-CC fcmps.
llvm-svn: 258533
We verify that the op tree is eligible for CCMP emission in
isConjunctionDisjunctionTree, but it's also possible that
emitConjunctionDisjunctionTree fails later.
The initial check is useful, as it avoids building nodes
that will get discarded.
Still, add an assert to make sure that inconsistencies don't happen.
llvm-svn: 258532