This should hopefully fix the stage2/stage3 miscompare on the dragonegg
testers.
"LoopVectorize: Use the dependence test utility class
We no longer need alias analysis - the cases that alias analysis would
handle are now handled as accesses with a large dependence distance.
We can now vectorize loops with simple constant dependence distances.
for (i = 8; i < 256; ++i) {
a[i] = a[i+4] * a[i+8];
}
for (i = 8; i < 256; ++i) {
a[i] = a[i-4] * a[i-8];
}
We can now vectorize about 200 more loops in the test suite (though in many
cases the cost model tells us not to). Results on x86-64 are a wash.
I have seen one degradation in ammp. Interestingly, the function in which we
now vectorize a loop is never executed, so we are probably seeing some
instruction-cache effects. There is a 2% improvement in h264ref, and one or
another TSVC loop kernel speeds up.
radar://13681598"
llvm-svn: 184724
This adds instruction patterns to cover the generic forms of
the conditional branch instructions. This allows the assembler
to support the generic mnemonics.
The compiler will still generate the various specific forms
of the instruction that were already supported.
llvm-svn: 184722
There is currently only limited support for the "absolute" variants
of branch instructions. This patch adds support for the absolute
variants of all branches that are currently otherwise supported.
This requires adding new fixup types so that the correct variant
of relocation type can be selected by the object writer.
While the compiler will continue to usually choose the relative
branch variants, this will allow the asm parser to fully support
the absolute branches, with either immediate (numerical) or
symbolic target addresses.
No change in code generation intended.
llvm-svn: 184721
The method significandParts() is a helper method meant to ease access to
APFloat's significand by hiding whether the significand is stored in memory
allocated in the instance itself or in an external array.
The assert says that one can only access the significand of FiniteNonZero/NaN
floats. This makes it cumbersome and, more importantly, dangerous to zero out
the significand of a zero/infinity value, since one then has to deal with the
aforementioned question of how the memory in APFloat is allocated.
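A minimal standalone sketch of the allocation pattern the helper hides (Flt,
Sig, and partCount are illustrative names, not APFloat's actual members):

  #include <stdint.h>

  typedef uint64_t integerPart;

  struct Flt {
    unsigned partCount;          // number of 64-bit words in the significand
    union Significand {
      integerPart part;          // stored inline when one word suffices
      integerPart *parts;        // heap-allocated when more are needed
    } Sig;

    // One accessor hides both storage strategies, so callers can zero or
    // copy the significand without knowing where it actually lives.
    integerPart *significandParts() {
      return partCount > 1 ? Sig.parts : &Sig.part;
    }
  };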
llvm-svn: 184711
In the context of APFloat, seeing a macro called convolve suggests that APFloat
performs some sort of convolution somewhere in the source code. This is
misleading.
I also added a documentation comment to the macro.
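For context, the macro merely packs the two operands' float categories into a
single integer so one switch can dispatch on both at once. A hedged sketch of
the pattern (the enum values and names below are illustrative, not APFloat's
actual definitions):

  // Illustrative only: pack two category enums into one key so a single
  // switch can dispatch on both operands at once.
  enum Category { Zero = 0, Normal = 1, Infinity = 2, NaN = 3 };

  #define PackCategoriesIntoKey(lhs, rhs) ((lhs) * 4 + (rhs))

  int isNaNProducingMul(Category a, Category b) {
    switch (PackCategoriesIntoKey(a, b)) {
    case PackCategoriesIntoKey(Zero, Infinity):
    case PackCategoriesIntoKey(Infinity, Zero):
      return 1;   // 0 * inf produces a NaN
    default:
      return 0;
    }
  }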
llvm-svn: 184710
This adds a flag to defer vectorization into a later pass, after the inliner
and its CGSCC pass manager. This should insulate the inlining decisions from
the vectorization decisions; however, it may have both compile-time and
code-size problems, so it is just an experimental option right now.
Adding this based on a discussion with Arnold; it seems at least worth having
this flag so we can run some experiments to see whether this strategy is
workable. It may solve some of the regressions seen with the loop vectorizer.
llvm-svn: 184698
There is some hope of eventually supporting a unified build with it, but
until then this lets me (and others) check it out in this location
without things breaking.
llvm-svn: 184697
exponent_t is only used internally in APFloat, and no exponent_t values are
exposed via the APFloat API. Given that, it does not make sense to gum up the
llvm namespace with the type. Plus, moving it makes it clearer that exponent_t
is associated with APFloat.
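The mechanical change is just moving the typedef from namespace scope into the
class, roughly as follows (assuming the underlying type remains signed short):

  // Before: visible to every user of the llvm namespace.
  namespace llvm { typedef signed short exponent_t; }

  // After: scoped to the only class that uses it.
  namespace llvm {
  class APFloat {
  public:
    typedef signed short exponent_t;
    // ...
  };
  }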
llvm-svn: 184686
We no longer need alias analysis - the cases that alias analysis would
handle are now handled as accesses with a large dependence distance.
We can now vectorize loops with simple constant dependence distances.
for (i = 8; i < 256; ++i) {
a[i] = a[i+4] * a[i+8];
}
for (i = 8; i < 256; ++i) {
a[i] = a[i-4] * a[i-8];
}
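By contrast, a sketch of a loop the dependence test still has to reject,
assuming a vectorization factor of 4: the distance-2 dependence means a single
vector iteration would read elements written by that very same iteration.
for (i = 8; i < 256; ++i) {
  a[i] = a[i-2];
}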
We can now vectorize about 200 more loops in the test suite (though in many
cases the cost model tells us not to). Results on x86-64 are a wash.
I have seen one degradation in ammp. Interestingly, the function in which we
now vectorize a loop is never executed, so we are probably seeing some
instruction-cache effects. There is a 2% improvement in h264ref, and one or
another TSVC loop kernel speeds up.
radar://13681598
llvm-svn: 184685
This class checks dependences by subtracting two Scalar Evolution access
functions, allowing us to catch very simple linear dependences.
The checker assumes source order in determining whether vectorization is safe.
We currently don't reorder accesses.
Positive true dependence distances need to be a multiple of VF; otherwise we
impede store-to-load forwarding.
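A standalone sketch of the core check - not the actual checker code: access
functions are simplified to Base + i*Step, sign conventions are simplified,
and the distance is measured in elements.

  #include <stdint.h>

  // Toy stand-in for the Scalar Evolution expressions the checker subtracts:
  // an affine access function Base + i * Step.
  struct AccessFn { int64_t Base; int64_t Step; };

  // Subtract the load's access function from the store's. If the strides
  // differ, the distance is not constant and we give up.
  bool constantDistance(AccessFn Store, AccessFn Load, int64_t &Distance) {
    if (Store.Step != Load.Step)
      return false;
    Distance = Store.Base - Load.Base;  // positive => true (flow) dependence
    return true;
  }

  bool isSafeDependence(AccessFn Store, AccessFn Load, unsigned VF) {
    int64_t Distance;
    if (!constantDistance(Store, Load, Distance))
      return false;   // unknown distance: assume unsafe
    // A positive (true) dependence distance must be a multiple of VF, or
    // consecutive vector stores and loads partially overlap, defeating
    // store-to-load forwarding (and, for distances below VF, correctness).
    if (Distance > 0 && (Distance % VF) != 0)
      return false;
    return true;
  }

With VF = 4, this accepts the distance-4 example above (a[i] = a[i-4] * ...)
and rejects a distance of 2.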
llvm-svn: 184684
Sets of dependent accesses are built by unioning sets based on underlying
objects. This class will be used by the upcoming dependence checker.
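A minimal standalone sketch of the grouping, simplified to one underlying
object per access (the real analysis unions sets, union-find style, when an
access can have several candidate underlying objects; all names below are
illustrative):

  #include <map>
  #include <vector>

  typedef const void *Object;   // stand-in for a pointer's underlying object

  struct Access {
    Object UnderlyingObj;       // e.g. the alloca/global the pointer traces to
    bool IsWrite;
  };

  // Accesses that share an underlying object land in the same set; only
  // accesses within one set ever need a pairwise dependence check.
  std::map<Object, std::vector<Access> >
  groupByUnderlyingObject(const std::vector<Access> &Accesses) {
    std::map<Object, std::vector<Access> > Sets;
    for (const Access &A : Accesses)
      Sets[A.UnderlyingObj].push_back(A);
    return Sets;
  }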
llvm-svn: 184683
Until now we detected the vectorizable tree and evaluated the cost of the
entire tree. With this patch we can decide to trim out branches of the tree
that are not profitable to vectorize.
Also, increase the max depth from 6 to 12. In the worst possible case, where
all of the code forms a diamond-shaped graph, this can bring the cost to 2**10,
but diamonds are not very common.
llvm-svn: 184681
This makes it possible to write unit tests that are less susceptible
to minor code motion, particularly copy placement. block-placement.ll
covers this case with -pre-RA-sched=source which will soon be
default. One incorrectly named block is already fixed, but without
this fix, enabling new coalescing and scheduling would cause more
failures.
llvm-svn: 184680
This is an awful implementation of the target hook. But we don't have
abstractions yet for common machine ops, and I don't see any quick way
to make it table-driven.
llvm-svn: 184664
Rewrote the SLP vectorizer as a whole-function vectorization pass. It is now
able to vectorize chains across multiple basic blocks.
It still does not vectorize PHIs, but this should be easy to do now that we
scan the entire function.
I removed the support for extracting values from trees.
We are now able to vectorize more programs, but there are some serious
regressions in many workloads (such as flops-6 and mandel-2).
llvm-svn: 184647