Summary:
This function is never called. isReallyTriviallyReMaterializable() is
the function that should be implemented instead.
Reviewers: arsenm
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D11620
llvm-svn: 243651
Update the debug info in the check-lines because the change in r243638
introduced a constant initialization before the prologue's end as part
of a register spill.
llvm-svn: 243640
Summary:
This hidden option would disable code generation through FastISel by
default. It has been removed from the available options and from the
Fast-ISel tests that required it in order to run.
Reviewers: dsanders
Subscribers: qcolombet, llvm-commits
Differential Revision: http://reviews.llvm.org/D11610
llvm-svn: 243638
Summary:
Previously, we would sign-extend non-boolean negative constants and
zero-extend otherwise. This was problematic for PHI instructions with
negative values that had a type with bitwidth less than that of the
register used for materialization.
More specifically, ComputePHILiveOutRegInfo() assumes the constants
present in a PHI node are zero extended in their container and
afterwards deduces the known bits.
For example, previously we would materialize an i16 -4 with the
following instruction:
addiu $r, $zero, -4
The register would end up with the 32-bit 2's complement representation
of -4. However, ComputePHILiveOutRegInfo() would generate a constant
with the upper 16 bits set to zero. The SelectionDAG builder would use
that information to generate an AssertZext node that would remove any
subsequent trunc & zero_extend nodes.
In theory, we should modify ComputePHILiveOutRegInfo() to consult
target-specific hooks about the way they prefer to materialize the
given constants. However, git-blame reports that this specific code
has not been touched since 2011 and it seems to be working well for every
target so far.
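To make the extension mismatch concrete, here is a small standalone
sketch (illustrative only, not part of the patch) contrasting the two
possible 32-bit register contents for an i16 -4:

  #include <cstdint>
  #include <cstdio>

  int main() {
    int16_t C = -4; // the i16 constant being materialized
    // What "addiu $r, $zero, -4" actually leaves in the register:
    uint32_t SignExtended = static_cast<uint32_t>(static_cast<int32_t>(C));
    // What ComputePHILiveOutRegInfo() assumed the register held:
    uint32_t ZeroExtended = static_cast<uint32_t>(static_cast<uint16_t>(C));
    std::printf("sign-extended: 0x%08X\n", SignExtended); // 0xFFFFFFFC
    std::printf("zero-extended: 0x%08X\n", ZeroExtended); // 0x0000FFFC
    return 0;
  }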
Reviewers: dsanders
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D11592
llvm-svn: 243636
Summary:
Mips doesn't implement unw_getcontext() or libunwind::Registers_*::jumpto() yet,
so we must disable libunwind for this release.
Reviewers: hans
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D11563
llvm-svn: 243633
The reason I was passing this vector by value in the constructor was so
that I wouldn't have to copy it when initializing the corresponding
member, but then I forgot the std::move.
The use case is LoopDistribution, which filters the checks and then
std::moves them to LoopVersioning's constructor. With this interface we
can avoid any copies.
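As a minimal sketch of the intended pattern (the Check type here is
hypothetical; only LoopVersioning and the pass-by-value constructor are
from the change):

  #include <utility>
  #include <vector>

  struct Check { /* a runtime pointer check */ };

  class LoopVersioning {
  public:
    // Pass by value so callers can move; then move again into the
    // member. This second std::move is what the original code forgot.
    explicit LoopVersioning(std::vector<Check> Checks)
        : Checks(std::move(Checks)) {}

  private:
    std::vector<Check> Checks;
  };

Callers that no longer need their vector can then write
LoopVersioning LV(std::move(FilteredChecks)); and no copy is made.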
llvm-svn: 243616
Before, we were passing the pointer partitions to LAA. Now, we get all
the checks from LAA and filter out the checks within partitions in
LoopDistribution.
This effectively concludes the steps to move filtering memchecks from
LAA into its clients. There is still some cleanup left to remove the
unused interfaces in LAA that still take PtrPartition.
(Moving this functionality to LoopDistribution also requires
needsChecking on pointers to be made public.)
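A hypothetical sketch of the filtering step (the types are invented for
illustration; the real code works on LAA's runtime pointer-check
structures):

  #include <algorithm>
  #include <vector>

  struct PointerCheck {
    int PartitionA; // partition of the first pointer group
    int PartitionB; // partition of the second pointer group
  };

  // Keep only checks that cross partitions; a check between two
  // pointers inside the same partition is unnecessary for
  // distribution.
  static void filterIntraPartitionChecks(std::vector<PointerCheck> &Checks) {
    Checks.erase(std::remove_if(Checks.begin(), Checks.end(),
                                [](const PointerCheck &C) {
                                  return C.PartitionA == C.PartitionB;
                                }),
                 Checks.end());
  }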
llvm-svn: 243613
This avoids stack overflows when the compiler does not perform tail call
elimination. Apparently this happens for MSVC with the /Ob2 switch, which
may be used by external code including this header.
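A generic illustration of the hazard (not the patched code): the
recursive form below overflows the stack when the tail call is not
eliminated, while the iterative rewrite uses constant stack space.

  #include <cstddef>

  // Safe only if the compiler turns the tail call into a jump.
  size_t count_down_recursive(size_t n) {
    if (n == 0)
      return 0;
    return count_down_recursive(n - 1); // tail call
  }

  // Correct at any optimization level (e.g. MSVC with /Ob2).
  size_t count_down_iterative(size_t n) {
    while (n != 0)
      n = n - 1;
    return 0;
  }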
Reported by and based on a patch from Jean-Francois Riendeau.
Related to rdar://21900756
llvm-svn: 243590
Bonus change to remove emacs major mode marker from SystemZMachineFunctionInfo.cpp because emacs already knows it's C++ from the extension. Also fix typo "appeary" in AMDGPUMCAsmInfo.h.
llvm-svn: 243585
Prevent all the unrelated LLVM options from appearing in the -help output
by introducing a tool-specific option category. As a drive-by, improve
the wording of the help message.
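A minimal sketch of the pattern, assuming LLVM's CommandLine library
(the tool name and option are hypothetical):

  #include "llvm/Support/CommandLine.h"

  using namespace llvm;

  static cl::OptionCategory ToolCategory("my-tool options");

  static cl::opt<std::string> InputFile(cl::Positional,
                                        cl::desc("<input file>"),
                                        cl::cat(ToolCategory));

  int main(int argc, char **argv) {
    // Hide options registered by other LLVM libraries from -help.
    cl::HideUnrelatedOptions(ToolCategory);
    cl::ParseCommandLineOptions(argc, argv, "my tool description\n");
    return 0;
  }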
llvm-svn: 243583
The dsymutil-classic -v option dumps the tool version rather than
putting it in verbose mode. Rename -v to -verbose and update the
tests that use it (in the process removing it from a few tests that
didn't require it anymore since the -dump-debug-map option was
introduced).
A followup commit will reintroduce the -v option that dumps the
version.
llvm-svn: 243582
This reverts commit r243567, which ultimately reapplies r243563.
The fix here was to use std::enable_if for overload resolution. Thanks to David
Blaikie for lots of help on this, and for the extra tests!
Original commit message follows:
For cases where we needed a foreach loop in reverse over a container,
we had to do something like
for (const GlobalValue *GV : make_range(TypeInfos.rbegin(),
TypeInfos.rend())) {
This provides a convenience method which shortens this to
for (const GlobalValue *GV : reverse(TypeInfos)) {
There are two versions of this, with a preference for the rbegin() version.
The first uses rbegin() and rend() to construct an iterator_range.
The second constructs an iterator_range from the begin() and end() methods
wrapped in std::reverse_iterator.
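A rough sketch of how the overload resolution can work (this assumes a
minimal local make_range/iterator_range; it is not the exact committed
code):

  #include <cstdio>
  #include <iterator>
  #include <type_traits>
  #include <utility>
  #include <vector>

  template <typename It> struct iterator_range {
    It B, E;
    It begin() const { return B; }
    It end() const { return E; }
  };

  template <typename It> iterator_range<It> make_range(It B, It E) {
    return iterator_range<It>{B, E};
  }

  // SFINAE detection of an rbegin() member.
  template <typename T, typename = void>
  struct has_rbegin : std::false_type {};
  template <typename T>
  struct has_rbegin<T, decltype(std::declval<T &>().rbegin(), void())>
      : std::true_type {};

  // Preferred version: the container provides rbegin()/rend().
  template <typename ContainerTy>
  auto reverse(ContainerTy &&C,
               typename std::enable_if<
                   has_rbegin<ContainerTy>::value>::type * = nullptr)
      -> decltype(make_range(C.rbegin(), C.rend())) {
    return make_range(C.rbegin(), C.rend());
  }

  // Fallback: wrap begin()/end() in std::reverse_iterator.
  template <typename ContainerTy>
  auto reverse(ContainerTy &&C,
               typename std::enable_if<
                   !has_rbegin<ContainerTy>::value>::type * = nullptr)
      -> iterator_range<std::reverse_iterator<decltype(std::begin(C))>> {
    typedef std::reverse_iterator<decltype(std::begin(C))> RIter;
    return make_range(RIter(std::end(C)), RIter(std::begin(C)));
  }

  int main() {
    std::vector<int> TypeInfos = {1, 2, 3};
    for (int TI : reverse(TypeInfos))
      std::printf("%d ", TI); // prints: 3 2 1
    std::printf("\n");
    return 0;
  }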
Reviewed by David Blaikie.
llvm-svn: 243581
This patch improves i64 constant matching on 32-bit targets to detect the shuffle vector splats that are introduced by i64 vector shift vectorization (D8416).
Differential Revision: http://reviews.llvm.org/D11327
llvm-svn: 243577
It's potentially more efficient on Cyclone, and judging from the
optimization guides and schedulers it looks like it has no effect on
Cortex-A53 or A57. In general you'd expect a MOV to be about the most
efficient instruction with its semantics, even though the official
"UXTW" alias is really a UBFX.
llvm-svn: 243576
This commit extracts the code that's used by the class 'MIRParserImpl' to parse
the machine basic block references into a new method named 'parseMBBReference'.
llvm-svn: 243572
This patch vectorizes the v2i64/v4i64 ASHR shift operations - the last remaining integer vector shifts that are still being transferred to/from the scalar unit to be completed.
Differential Revision: http://reviews.llvm.org/D11439
llvm-svn: 243569
This reverts commit r243563.
The GCC buildbots were extremely unhappy about this. Reverting while
we discuss a better way of doing overload resolution.
llvm-svn: 243567
For cases where we needed a foreach loop in reverse over a container,
we had to do something like
for (const GlobalValue *GV : make_range(TypeInfos.rbegin(),
TypeInfos.rend())) {
This provides a convenience method which shortens this to
for (const GlobalValue *GV : reverse(TypeInfos)) {
There are two versions of this, with a preference for the rbegin() version.
The first uses rbegin() and rend() to construct an iterator_range.
The second constructs an iterator_range from the begin() and end() methods
wrapped in std::reverse_iterator.
Reviewed by David Blaikie.
llvm-svn: 243563
Summary:
returns_twice functions (most importantly, setjmp) are
optimization-hostile: if a local variable is promoted to a register and
is changed between the setjmp() and longjmp() calls, that update will be
undone. This is the reason why "man setjmp" advises marking all such
locals as "volatile".
Even that is not enough for ASan, though: when it replaces a static
alloca with a dynamic one, optionally called if UAR mode is enabled, it
adds a whole lot of SSA values and computations of local variable
addresses that can involve virtual registers and cause unexpected
behavior when those registers are restored from the buffer saved in
setjmp.
To fix this, just disable the dynamic alloca and UAR tricks whenever we
see a returns_twice call in the function.
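The classic manifestation of the first paragraph (illustrative, not
from the patch):

  #include <csetjmp>
  #include <cstdio>

  static std::jmp_buf Env;

  int main() {
    /* volatile */ int x = 1; // without volatile, x is indeterminate
                              // after the longjmp below
    if (setjmp(Env) == 0) {
      x = 2;                  // modified between setjmp() and longjmp()
      std::longjmp(Env, 1);
    }
    std::printf("%d\n", x);   // may print 1 if x stayed in a register
    return 0;
  }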
Reviewers: rnk
Subscribers: llvm-commits, kcc
Differential Revision: http://reviews.llvm.org/D11495
llvm-svn: 243561
Making allowableAlignment() more accessible was suggested as a predecessor patch
for D10662, so I've pulled it into TargetLowering. This lets us remove 4 instances
of duplicate logic in LegalizeDAG.
There's a subtle functional change in the implementation: the existing
allowableAlignment() code was using getPrefTypeAlignment() when checking
alignment with the DataLayout and assumed that was fast. In this implementation,
we use getABITypeAlignment() and assume that is fast. See the TODO comment or the
discussion in the Phab review for future improvements in this implementation
(don't use the data layout at all).
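A heavily simplified sketch of the check (the real helper lives in
TargetLowering and takes more parameters; treat this as illustrative,
not the committed signature):

  #include "llvm/IR/DataLayout.h"
  #include "llvm/IR/Type.h"

  using namespace llvm;

  static bool allowableAlignmentSketch(const DataLayout &DL, Type *Ty,
                                       unsigned Alignment) {
    // The previous code consulted getPrefTypeAlignment(); this change
    // switches the DataLayout query to the ABI alignment.
    return Alignment >= DL.getABITypeAlignment(Ty);
  }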
There are no regression test changes from this difference, and I'm not sure how to
expose it via a test. I think we actually do want to provide the 'Fast' param when
checking this from DAGCombiner::MergeConsecutiveStores(). Ie, we shouldn't merge
stores if the new stores are not going to be fast. But that change will require
fixing allowsMisalignedMemoryAccess() overrides as noted in D10662.
Differential Revision: http://reviews.llvm.org/D10905
llvm-svn: 243549
ASan shadow on Android starts at address 0 for both historic and
performance reasons. This is possible because the platform mandates
-pie, which makes the lower memory region always available.
This is not such a good idea on 64-bit platforms because of MAP_32BIT
incompatibility.
This patch changes the Android/AArch64 mapping to be the same as that of
Linux/AArch64.
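The mapping in question, as a small sketch (the scale and the
Linux/AArch64 offset value below are assumptions based on the usual
ASan AArch64 scheme, not quoted from the patch):

  #include <cstdint>

  static const uint64_t kShadowScale = 3;      // 1 shadow byte per 8 bytes
  static const uint64_t kAndroidOldOffset = 0; // shadow started at 0
  static const uint64_t kLinuxAArch64Offset = 1ULL << 36;

  inline uint64_t shadowFor(uint64_t Addr, uint64_t Offset) {
    return (Addr >> kShadowScale) + Offset;
  }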
llvm-svn: 243548
No functional change because "lsl #12" is actually encoded as 12, but one less
bug if someone ever decides to change that for the giggles.
llvm-svn: 243536
This isn't part of the official release process, but provides a convenient way
to build binaries for those who want to experiment with it. Hopefully the
runtime can be part of the regular build and release process for 3.8.
Differential Revision: http://reviews.llvm.org/D11494
llvm-svn: 243531