vector shift by immediate count (VSHLI/VSRLI/VSRAI) into a build_vector when
the input vector to the shift is a build_vector of all constants or UNDEFs.
Target-specific nodes for packed shifts by immediate count are generally
introduced by the function 'getTargetVShiftByConstNode' (in
X86ISelLowering.cpp) when lowering shift operations, SSE/AVX immediate
shift intrinsics and (only in very few cases) SIGN_EXTEND_INREG dag
nodes.
This patch adds extra rules for simplifying vector shifts inside
function 'getTargetVShiftByConstNode'.
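As a purely illustrative sketch (the constant values here are made up and
are not taken from the new test), a sequence such as:
  i32 C = Constant<1>
  v4i32 V = BUILD_VECTOR C, C, C, C
  v4i32 Result = VSHLI V, 2
can now be folded directly into:
  i32 C = Constant<4>
  v4i32 Result = BUILD_VECTOR C, C, C, C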
Added file test/CodeGen/X86/vec_shift5.ll to verify that packed
shifts by immediate are correctly folded into a build_vector when the
input vector to the shift dag node is a vector of constants or undefs.
llvm-svn: 198113
The GNU assembler supports .rep as an alias for .rept. This simply creates the
alias for it and introduces a test for both .rept and .rep.
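A minimal illustration of the two spellings (not the committed test itself):
  .rept 2
  .byte 1
  .endr
  .rep 2
  .byte 2
  .endr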
llvm-svn: 198097
widespread glibc bugs.
The glibc implementation of exp10 has a very serious precision bug in
version 2.15 (and older versions). This is still very widely used (the
current Ubuntu LTS, for example, uses it), so it isn't reasonable to
perform transforms that introduce calls to these functions. This fixes many
miscompiles introduced when we started transforming pow(10.0, ...) into
exp10, and it may have fixed other latent miscompiles where exp10
provided sufficient precision but exp10f did not.
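For reference, the simplification being disabled has roughly this shape
(a hand-written IR sketch, not an excerpt from the test suite):
  %r = call double @pow(double 1.000000e+01, double %x)
  ; was previously turned into:
  %r = call double @exp10(double %x)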
This is all really horrible. The primary bug has been fixed for over
a year and glibc 2.18 works correctly for the test cases I have, but it
will be 2017 before the LTS using 2.15 is no longer supported by Ubuntu
(and thus reasonable for folks to be relying on). =[ We're either going
to need to live without these optimizations, or find a way to switch
behavior more dynamically than using simply the fact that the OS is
"Linux".
To make matters worse, there appears to be significant testing and
fixing of numerous other bugs in the exp10 family of functions right now
in glibc. While those haven't been causing problems I've seen in the
wild, it gives me concerns that we may need to wait until an even later
release of glibc before we can reliably transform code into exp10.
llvm-svn: 198093
This reduces the size of clang-format from 22 MB to 1.8 MB, diagtool goes from
21 MB to 2.8 MB, libclang.so goes from 29 MB to 20 MB, etc. The size of the
bin/ folder shrinks from 270 MB to 200 MB.
Targets that support plugins and don't already use EXPORTED_SYMBOL_FILE
(which libclang and libLTO already do) can set NO_DEAD_STRIP to opt out.
llvm-svn: 198087
ConstantSDNodes (or UNDEFs) into a simple BUILD_VECTOR.
For example, given the following sequence of dag nodes:
i32 C = Constant<1>
v4i32 V = BUILD_VECTOR C, C, C, C
v4i32 Result = SIGN_EXTEND_INREG V, ValueType:v4i1
The SIGN_EXTEND_INREG node can be folded into a build_vector since the
input vector is a BUILD_VECTOR of constants; sign-extending each element
from the v4i1 element type turns the constant 1 into -1.
The optimized sequence is:
i32 C = Constant<-1>
v4i32 Result = BUILD_VECTOR C, C, C, C
llvm-svn: 198084
It's no longer necessary to lazily add members to the DICompositeType
member list. Instead any lazy members (special member functions and
member template instantiations) are added to the parent late based on
their context link, the same way that nested types have always been
handled (never being in the member list - just added to the parent DIE
lazily based on context).
Clang has been updated not to use this function anymore, as this improves
type unit consistency by never emitting lazy members in type units.
llvm-svn: 198079
This is an iterator which you can build around a MemoryBuffer. It will
iterate through the non-empty, non-comment lines of the buffer as
a forward iterator. It should be small and reasonably fast (although it
could be made much faster if anyone cares, I don't really...).
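A rough usage sketch (names and the comment-marker parameter are as I
understand the new header; see llvm/Support/LineIterator.h for the real
interface):
  #include "llvm/ADT/StringRef.h"
  #include "llvm/Support/LineIterator.h"
  #include "llvm/Support/MemoryBuffer.h"
  #include "llvm/Support/raw_ostream.h"

  static void dumpLines(const llvm::MemoryBuffer &Buffer) {
    // '#' marks comment lines; a default-constructed line_iterator is "end".
    for (llvm::line_iterator I(Buffer, '#'), E; I != E; ++I) {
      llvm::StringRef Line = *I; // current non-empty, non-comment line
      llvm::outs() << I.line_number() << ": " << Line << "\n";
    }
  }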
This will be used to more simply support the text-based sample
profile file format, and is largely based on the original patch by
Diego. I've re-worked the style of it and separated it from the work of
producing a MemoryBuffer from a file which both simplifies the interface
and makes it easier to test.
The style of the API follows the C++ standard naming conventions to fit
in better with iterators in general, much like the Path and FileSystem
interfaces follow standard-based naming conventions.
llvm-svn: 198068
This should fix http://llvm.org/bugs/show_bug.cgi?id=17976
Another test checking for the global variables' locations and prefixes on Darwin will be committed separately.
llvm-svn: 198017
...namely LOAD AND ADD, LOAD AND AND, LOAD AND OR and LOAD AND EXCLUSIVE OR.
LOAD AND ADD LOGICAL isn't really separately useful for LLVM.
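These presumably correspond to IR along the lines of (a hand-written
sketch, not one of the committed tests):
  %old = atomicrmw add i32* %ptr, i32 1 seq_cst
  %v1  = atomicrmw and i32* %ptr, i32 255 seq_cst
  %v2  = atomicrmw or  i32* %ptr, i32 1 seq_cst
  %v3  = atomicrmw xor i32* %ptr, i32 1 seq_cst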
I'll look at reusing the CC results in the new year.
llvm-svn: 197985
DAG.getVectorShuffle() doesn't always return a vector_shuffle node.
If the mask is the identity sequence for one of its operands (for example,
operand_0 is v8i8 and the mask is 0, 1, 2, 3, 4, 5, 6, 7), it will directly
return that operand. So a check is added here.
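Roughly, the added guard looks like this (a sketch of the idea, not the
exact diff; the surrounding lowering code is omitted):
  SDValue Shuffle = DAG.getVectorShuffle(VT, DL, V1, V2, &Mask[0]);
  if (Shuffle.getOpcode() != ISD::VECTOR_SHUFFLE)
    return Shuffle; // an identity mask was folded straight to an operand
  ShuffleVectorSDNode *SVN = cast<ShuffleVectorSDNode>(Shuffle.getNode());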
llvm-svn: 197967
This failure was caused by an improper condition when lowering shuffle_vector
to scalar_to_vector. After this patch, NEON_VDUP with v1i64 will no longer
be generated.
llvm-svn: 197966
These still have "experimental" status, meaning we don't guarantee
backward compatibility. However, they are already actively used by the
open source WebKit project, and have started to be adopted by other
projects.
llvm-svn: 197930
Check that the fmul node has a single use in fused multiply patterns
before generating fused multiply add/sub instructions. Otherwise the
fmul operation ends up being repeated more than once, which does not
help performance on targets with only one MAC unit, such as Cortex-A53.
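For illustration (a hand-written IR sketch, not from the test suite), in
code like:
  %m  = fmul float %a, %b
  %s1 = fadd float %m, %c
  %s2 = fsub float %m, %d
fusing both the fadd and the fsub would materialize the multiply twice, so
fusion is now restricted to the single-use case.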
llvm-svn: 197929
The correct pattern matching should be:
- fnmadd is (-Ra) + (-Rn)*Rm which should be matched as:
fma (fneg node:$Rn), node:$Rm, (fneg node:$Ra) and as
(f32 (fsub (f32 (fneg FPR32:$Ra)), (f32 (fmul FPR32:$Rn, FPR32:$Rm))))
- fnmsub is (-Ra) + Rn*Rm which should be matched as
fma node:$Rn, node:$Rm, (fneg node:$Ra) and as
(f32 (fsub (f32 (fmul FPR32:$Rn, FPR32:$Rm)), FPR32:$Ra))
llvm-svn: 197928
Split sadd.with.overflow into add + sadd.with.overflow to allow
analysis and optimization. This should ideally be done after
InstCombine, which can perform code motion (eventually indvars should
run after all canonical instcombines). We want ISEL to recombine the
add and the check, at least on x86.
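Roughly the kind of rewrite involved (a hand-written sketch; the exact
form depends on the pass):
  before:
    %res = call { i32, i1 } @llvm.sadd.with.overflow.i32(i32 %iv, i32 1)
    %sum = extractvalue { i32, i1 } %res, 0
    %ovf = extractvalue { i32, i1 } %res, 1
  after:
    %sum = add i32 %iv, 1
    %res = call { i32, i1 } @llvm.sadd.with.overflow.i32(i32 %iv, i32 1)
    %ovf = extractvalue { i32, i1 } %res, 1
The plain add is then visible to ordinary analyses, while ISEL can still
recombine the add with the overflow check.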
This is currently under an option for reducing live induction
variables: -liv-reduce. The next step is reducing liveness of IVs that
are live out of the overflow check paths. Once the related
optimizations are fully developed, reviewed and tested, I do expect
this to become the default.
llvm-svn: 197926