StreamRef was designed to be a thin wrapper over an abstract
stream interface that could itself be treated the same as any
other stream interface. For this reason, it inherited publicly
from StreamInterface, and stored a StreamInterface* internally.
But StreamRef was also designed to be lightweight and easily
copyable, similar to ArrayRef. This led to two misuses of
the classes.
1) When creating a StreamRef A from another StreamRef B, it was
possible to end up with A storing a pointer to B, even when
B was a temporary object, leading to a use after free (sketched
in the example below).
2) The above situation could be repeated ad nauseam, so that
A stores a pointer to B, which itself stores a pointer to
another StreamRef C, and so on, creating unnecessary levels
of nesting.
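For illustration, a standalone sketch of misuse (1), using simplified
stand-in types rather than the real StreamInterface/StreamRef
declarations:

    #include <cstdint>
    #include <vector>

    struct StreamInterface {
      virtual ~StreamInterface() = default;
      virtual uint32_t getLength() const = 0;
    };

    struct ByteStream : StreamInterface {
      std::vector<uint8_t> Data;
      uint32_t getLength() const override {
        return static_cast<uint32_t>(Data.size());
      }
    };

    struct StreamRef : StreamInterface {   // public inheritance (pre-patch)
      StreamRef(const StreamInterface &S) : Stream(&S) {}
      uint32_t getLength() const override { return Stream->getLength(); }
      const StreamInterface *Stream;
    };

    // Because StreamRef is-a StreamInterface, a temporary StreamRef binds
    // to this parameter, and the result silently stores a pointer to it.
    StreamRef rebind(const StreamInterface &S) { return StreamRef(S); }

    int main() {
      ByteStream BS;
      StreamRef A = rebind(StreamRef(BS)); // temporary B dies at end of line
      return A.getLength();                // use after free: A points at B
    }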
This patch removes the public inheritance relationship between
StreamRef and StreamInterface, making it so that we can never
accidentally convert a StreamRef to a StreamInterface.
llvm-svn: 271570
If the processor name failed to parse for amdgcn,
the resulting output would have R600 ISA in it.
If the processor name was missing or invalid for R600,
the wavefront size would not be set and there would be
crashes from missing itinerary data.
Fixes crashes in a future commit caused by dividing by the unset/zero
wavefront size.
llvm-svn: 271561
We've been pretending that segments are i8imm since the initial
support (r68645), predating the addition of the SEGMENT_REG class
(r81895). That happens to work, but is wrong, and inconsistent
with how we print (e.g., X86ATTInstPrinter::printMemReference)
and parse them (e.g., X86Operand::addMemOperands).
This change shouldn't affect any tool users, but is visible to
library users or out-of-tree tablegen backends: this causes
MCOperandInfo for the segment op to have an RC instead of "unknown",
and TII::getRegClass to actually return something. As the registers
are reserved and no vregs of the class are ever created, that shouldn't
change anything.
No test change; no suspicious getRegClass() in X86 and CodeGen.
llvm-svn: 271559
D19271.
The previous attempt broke on NetBSD, so in this version I've made the
fallback path generic rather than Windows-specific and sent both Windows
and NetBSD down it.
I've also reformatted the code somewhat, and used an exact clone of the
code in PassSupport.h for doing a manual call-once with our atomics
rather than rolling a new one.
If this sticks, we can replace the fallback path for Windows with
a Windows-specific implementation that is more reliable.
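For context, a hedged sketch of the general shape of such a manual
call-once built on atomics (illustrative only, not the exact
PassSupport.h or llvm_call_once code):

    #include <atomic>
    #include <thread>

    enum InitStatus { Uninitialized = 0, Initializing = 1, Done = 2 };

    template <typename Fn>
    void callOnce(std::atomic<InitStatus> &Flag, Fn Initialize) {
      InitStatus Expected = Uninitialized;
      if (Flag.compare_exchange_strong(Expected, Initializing)) {
        Initialize();                              // first caller wins
        Flag.store(Done, std::memory_order_release);
        return;
      }
      while (Flag.load(std::memory_order_acquire) != Done)
        std::this_thread::yield();                 // others wait for it
    }

    // usage: static std::atomic<InitStatus> Flag{Uninitialized};
    //        callOnce(Flag, [] { /* initialize the shared state */ });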
Original commit message:
This patch adds an llvm_call_once which is a wrapper around
std::call_once on platforms where it is available and devoid
of bugs. The patch also migrates the ManagedStatic mutex to
be allocated using llvm_call_once.
These changes are philosophically equivalent to the changes
added in r219638, which were reverted due to a hang on Win32
which was the result of a bug in the Windows implementation
of std::call_once.
Differential Revision: http://reviews.llvm.org/D5922
llvm-svn: 271558
Unlike other sections that can grow to any size, the COFF section header
stream has a maximum length, because each record is fixed size and the COFF
file format limits the maximum number of sections. So I decided not to
create a specific stream class for it. Instead, I added a member function
to the DbiStream class which returns a vector of COFF headers.
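A hedged sketch of the shape of that accessor (simplified to a raw byte
buffer, and the exact return type may differ; the real code reads through
the PDB stream machinery): every record is a fixed-size coff_section, so
the substream can simply be handed out as a flat array.

    #include "llvm/ADT/ArrayRef.h"
    #include "llvm/Object/COFF.h"
    #include <cstdint>

    llvm::ArrayRef<llvm::object::coff_section>
    parseSectionHeaders(llvm::ArrayRef<uint8_t> Substream) {
      // Fixed-size records: the count falls out of the substream length.
      size_t Count = Substream.size() / sizeof(llvm::object::coff_section);
      return llvm::makeArrayRef(
          reinterpret_cast<const llvm::object::coff_section *>(
              Substream.data()),
          Count);
    }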
Differential Revision: http://reviews.llvm.org/D20717
llvm-svn: 271557
This is part of an effort to shave allocations from APInt heavy paths. I'll
be moving many of the other operators to r-value references soon and this is
a step towards doing that without too much duplication.
Saves 15k allocations when doing 'opt -O2 verify-uselistorder.bc'.
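As a hedged illustration of the pattern (not necessarily one of the
operators touched here), taking an operand by r-value reference lets the
result reuse that operand's heap allocation instead of making a fresh copy:

    #include "llvm/ADT/APInt.h"
    #include <utility>

    llvm::APInt add(llvm::APInt &&LHS, const llvm::APInt &RHS) {
      LHS += RHS;             // in place; no new allocation for wide values
      return std::move(LHS);  // the result takes over LHS's storage
    }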
llvm-svn: 271556
Testing for specific CPUs has a number of problems; it is better to use
subtarget features:
- When some tweak is added for a specific CPU, it is often desirable for
the next version of that CPU as well, yet we often forget to add it.
- It is hard to keep track of checks scattered around the target code;
declaring all target specifics together with the CPU in the tablegen
file is a clearer representation.
- Subtarget features can be tweaked from the command line.
To discourage people from using CPU checks in the future I removed the
isCortexXX(), isCyclone(), ... functions. I added a getProcFamily()
function for exceptional circumstances but made it clear in the comment
that usage is discouraged.
Reformat the feature list in AArch64.td to have one feature per line, in
alphabetical order, to simplify merging and sorting for out-of-tree
tweaks.
No functional change intended.
Differential Revision: http://reviews.llvm.org/D20762
llvm-svn: 271555
This is effectively NFC because we already do this transform after r175380:
http://reviews.llvm.org/rL175380
and also via foldBoolSextMaskToSelect().
This change should just make it a bit more efficient to match the pattern.
The original guard was added in r95058:
http://reviews.llvm.org/rL95058
A sampling of codegen for current in-tree targets shows no problems. This
makes sense given that we're already producing the vector selects via the
other transforms.
llvm-svn: 271554
Summary:
Also convert test/CodeGen/PowerPC/vsx-ldst-builtin-le.ll to use
FileCheck instead of two grep and count runs.
This change is needed to avoid spurious diffs in these tests when
EarlyCSE is improved to use MemorySSA and can do more load elimination.
Reviewers: hfinkel
Subscribers: mcrosier, llvm-commits
Differential Revision: http://reviews.llvm.org/D20238
llvm-svn: 271553
The DIType* for void is the null pointer. A null DIType can never be a
qualified type, so we can just exit the loop at this point and go to
getTypeIndex(BaseTy).
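A minimal stand-in sketch of that loop shape, with hypothetical types
rather than the real DebugInfo classes:

    struct TypeNode {
      bool IsQualifier = false;         // const/volatile/etc. wrapper
      const TypeNode *Base = nullptr;
    };

    unsigned getTypeIndex(const TypeNode *Ty) {
      return Ty ? /* real lookup */ 1 : 0;  // null, i.e. void, has an index
    }

    unsigned lowerQualifiedType(const TypeNode *Ty) {
      while (Ty && Ty->IsQualifier)     // stop as soon as Ty is null: void
        Ty = Ty->Base;                  // can never be a qualified type
      return getTypeIndex(Ty);          // i.e. getTypeIndex(BaseTy)
    }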
Fixes PR27984
llvm-svn: 271550
Summary:
In PR29973 Sanjay Patel reported an assertion failure when a certain
loop was optimized, for a target without SSE2 support. It turned out
this was because of the AVG pattern detection introduced in rL253952.
Prevent the assertion failure by bailing out early in
`detectAVGPattern()`, if the target does not support SSE2.
Also add a minimized test case.
Reviewers: congh, eli.friedman, spatel
Subscribers: emaste, llvm-commits
Differential Revision: http://reviews.llvm.org/D20905
llvm-svn: 271548
Also fix slice wrappers drop_front and drop_back.
The unittests are pretty awkward, but do the job; alternatives
welcome!
...and yes, I do have ArrayRefs with more than 4 billion elements.
llvm-svn: 271546
Do not issue lexing errors found during the parsing of macro body
definitions and in AsmParser's parseIdentifier function. This changes the
parser to not issue a lexing error when the erroneous token is reached,
but rather when it is consumed, allowing us time to examine the error and
recover from it.
As a result of this, we stop issuing both a lexing error and a parsing
error in the floating-literals test. Minor tweak to parseDirectiveRealValue
to favor the more meaningful lexing error over the less helpful parse error.
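A generic sketch of the idea, with hypothetical types rather than the real
AsmLexer/AsmParser classes: the lexer records the error on the token, and
the parser reports it only when that token is actually consumed.

    #include <cstdio>
    #include <string>

    struct Token {
      std::string Text;
      std::string Error;     // set by the lexer; empty means no error
    };

    struct Parser {
      // Returns false if the consumed token carried a deferred lexing
      // error, giving the caller a chance to recover or to emit a more
      // meaningful diagnostic instead.
      bool consume(const Token &Tok) {
        if (!Tok.Error.empty()) {
          std::fprintf(stderr, "error: %s\n", Tok.Error.c_str());
          return false;
        }
        return true;
      }
    };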
Reviewers: rnk, majnemer
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D20535
llvm-svn: 271542
except for CompareAndSwap. That is the only one still being used
anywhere now that statistics have been moved onto std::atomic.
Also, add a warning to the header that we shouldn't introduce more uses
of these old style atomics and instead should be using C++11's
std::atomic facilities.
Really hoping that we can hammer out the last couple of users here and
replace them with something more localized and/or principled, but
figured this was a pretty good start. =]
Note that this patch will need to be reverted if r271504 needs to be
reverted as that removes the last user of these. However, the biggest
risk for that patch was MSVC 2013 and at least one bot has already
passed where it would have failed there. I've tested MSVC 2015 using
their web interfaces and other platforms seem fine, so I'm optimistic.
Differential Revision: http://reviews.llvm.org/D20901
llvm-svn: 271540
This directory is used to find if there is a PDB associated with an
executable. I plan to use this functionality to teach llvm-symbolizer
whether it should use DIA or DWARF to symbolize a given DLL.
Reviewers: majnemer
Differential Revision: http://reviews.llvm.org/D20885
llvm-svn: 271539
Inline virtual functions have linkonce_odr linkage (emitted in a comdat on
supporting targets). If the vtable for the class is not emitted in the
defining module, the function won't be address-taken, so its address is not
recorded. At the mercy of the linker, if the per-function prf_data from this
module (in a comdat) is picked at link time, we will lose the mapping from
the function address to its hash value. This leads to missing icall promotion.
The second test case (currently disabled) in compiler-rt (r271528),
instrprof-icall-prom.test, demonstrates the bug. The first profile-use
subtest is fine due to a linker order difference.
With this change, no missing icall targets are found in instrumented clang's
raw profile.
llvm-svn: 271532
Summary:
When this flag is specified, the target llvm-lto is not built, but is still
used as a dependency of the test targets. cmake 2.8 silently ignored this
situation, but with cmake_minimum_required(3.4) it becomes an error. Fix this
by avoiding the inclusion of the target as a dependency.
Reviewers: beanz
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D20882
llvm-svn: 271530
Summary:
If the target requests it, use empty spaces in the fixed and
callee-save stack area to allocate local stack objects.
AArch64: Change the last callee-save register stack object's alignment
instead of its size to leave a gap, to take advantage of the above change.
Reviewers: t.p.northover, qcolombet, MatzeB
Subscribers: rengolin, mcrosier, llvm-commits, aemerson
Differential Revision: http://reviews.llvm.org/D20220
llvm-svn: 271527
Although this was intended to be NFC, the test case wiggle shows a change in
code scheduling/RA caused by a difference in the SDLoc() generation.
Depending on how you look at it, this is the (dis)advantage of exact checking
in regression tests.
llvm-svn: 271526
Handle it locally instead of having the target-independent pass deal
with it. The generic pass does not preserve implicit uses, which may
be necessary.
llvm-svn: 271520
This patch removes the llvm (V)CVTTPS2DQ and VCVTTPD2DQ truncation
(round to zero) conversion intrinsics and auto-upgrades them to FP_TO_SINT
calls instead.
Note: I looked at updating CVTTPD2DQ as well but this still requires a lot more work to correctly lower.
Differential Revision: http://reviews.llvm.org/D20860
llvm-svn: 271510
This removes usage of the hacky, incorrect, and TSan-unfriendly
home-grown atomics. It should actually be more efficient in some cases.
Based on our existing usage of <atomic>, all of this is portably
available AFAICT. One small challenge is initializing the statics, but
I've tried a comparable sample out on MSVC (the compiler most likely to
complain here) and it seems to work. We'll have to watch the build bots,
of course.
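For illustration, the rough shape of a counter built on <atomic> (a hedged
sketch, not the actual Statistic class):

    #include <atomic>

    struct Counter {
      // Initializing statics of this type is the small challenge above.
      std::atomic<unsigned> Value{0};

      Counter &operator++() {
        // Relaxed ordering is enough for a pure event count.
        Value.fetch_add(1, std::memory_order_relaxed);
        return *this;
      }
      unsigned get() const { return Value.load(std::memory_order_relaxed); }
    };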
llvm-svn: 271504
statistics.
Scaling statistics atomically doesn't make any sense anyway, and none
were using these. If you find yourself wanting to do this, you should
probably keep a local count that you scale and then apply that after
scaling to the shared statistic object.
llvm-svn: 271503
We only considered the length of the operation and the length of the
StreamRef without considering what it meant for the offset to be at a
non-zero position.
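A generic sketch of this class of bug, with hypothetical names rather than
the real StreamRef code:

    #include <cstdint>

    // A read of Size bytes at Offset within a view of ViewLength bytes.
    bool canRead(uint32_t ViewLength, uint32_t Offset, uint32_t Size) {
      // buggy: return Size <= ViewLength;   // ignores a non-zero offset
      return Offset <= ViewLength && Size <= ViewLength - Offset;
    }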
llvm-svn: 271496
Use the type index of the underlying type unless we have a typedef from
long to HRESULT; HRESULT typedefs are translated to T_HRESULT.
llvm-svn: 271494
This fixes a broken part of the build on OSX as the dataflow sanitizer is not supported
on OSX yet.
Differential Revision: http://reviews.llvm.org/D20894
llvm-svn: 271492
The motivation for this change is to fix linking issues on OSX.
However, this only partially fixes the linking issues (the uninstrumented
tests and a few others won't successfully link yet).
This change introduces a struct of function pointers
(``fuzzer::ExternalFunctions``) which, when initialised, will point to the
optional functions if they are available. Currently these are the
``LLVMFuzzerInitialize`` and ``LLVMFuzzerCustomMutator`` functions.
Two implementations of the ``fuzzer::ExternalFunctions`` constructor are
provided: one for Linux and one for OSX.
The OSX implementation uses ``dlsym()`` because the prior implementation
using weak symbols does not work unless additional flags are passed
to the linker.
The Linux implementation continues to use weak symbols because the
``dlsym()`` approach does not work unless additional flags are passed
to the linker.
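A hedged sketch of the ``dlsym()`` variant (simplified relative to the real
libFuzzer code; the callback signatures follow the documented libFuzzer
hooks, and ``RTLD_DEFAULT`` may require _GNU_SOURCE on glibc):

    #include <dlfcn.h>
    #include <cstddef>
    #include <cstdint>

    typedef int (*UserInitializeFn)(int *argc, char ***argv);
    typedef size_t (*UserMutatorFn)(uint8_t *Data, size_t Size,
                                    size_t MaxSize, unsigned int Seed);

    struct ExternalFunctions {
      UserInitializeFn LLVMFuzzerInitialize = nullptr;
      UserMutatorFn LLVMFuzzerCustomMutator = nullptr;

      ExternalFunctions() {
        // Leave the pointer null when the user did not define the symbol.
        LLVMFuzzerInitialize = reinterpret_cast<UserInitializeFn>(
            dlsym(RTLD_DEFAULT, "LLVMFuzzerInitialize"));
        LLVMFuzzerCustomMutator = reinterpret_cast<UserMutatorFn>(
            dlsym(RTLD_DEFAULT, "LLVMFuzzerCustomMutator"));
      }
    };

    // usage: build one ExternalFunctions instance at startup and call each
    // member only if it is non-null.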
Differential Revision: http://reviews.llvm.org/D20741
llvm-svn: 271491
I'm not sure why this was missing for so long.
This also exposed that we were picking the floating-point 256-bit VMOVNTPS
for some integer types in normal isel for AVX1, even though VMOVNTDQ is
available. In practice it doesn't matter due to the execution dependency
fix pass, but it required extra isel patterns. I'll fix that in a
follow-up commit.
llvm-svn: 271481