vabd intrinsic and add and/or zext operations. In the case of vaba, this
also avoids the need for a DAG combine pattern to combine vabd with add.
Update tests. Auto-upgrade the old intrinsics.
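For illustration, an accumulating absolute-difference in the new form can be
written with the remaining vabd intrinsic plus ordinary IR operations. This is
a sketch with made-up value names; vabdu is the unsigned variant:

  %abd  = call <8 x i8> @llvm.arm.neon.vabdu.v8i8(<8 x i8> %a, <8 x i8> %b)
  %wide = zext <8 x i8> %abd to <8 x i16>   ; widening step for the vabal case
  %acc1 = add <8 x i16> %acc0, %wide        ; a plain add does the accumulate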
llvm-svn: 112941
add, and subtract operations with zero-extended or sign-extended vectors.
Update tests. Add auto-upgrade support for the old intrinsics.
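As a sketch of the replacement pattern (value names hypothetical), a widening
unsigned multiply becomes a plain mul of zero-extended operands, which the
backend can then match to vmull:

  %ea   = zext <8 x i8> %a to <8 x i16>
  %eb   = zext <8 x i8> %b to <8 x i16>
  %prod = mul <8 x i16> %ea, %eb            ; selected as an unsigned vmull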
llvm-svn: 112773
Add support for using the FPSCR in conjunction with the vcvtr instruction, for controlling rounding in floating-point to integer conversions.
Add support for the FLT_ROUNDS_ node now that the FPSCR is exposed.
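From IR, the node is reached through the llvm.flt.rounds intrinsic; a minimal
sketch of querying the current rounding mode:

  ; returns the current rounding mode in the FLT_ROUNDS encoding
  %mode = call i32 @llvm.flt.rounds()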
llvm-svn: 110152
the overloaded vector types allowed floating-point or integer vector elements.
Most of these operations actually depend on the element type, so bitcasting
was not an option.
If you include the vpadd intrinsics that I updated earlier, this gets rid
of 20 intrinsics.
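With the overloaded form, one intrinsic name serves both element kinds through
the usual type-suffix mangling; for example (value names hypothetical):

  %pi = call <2 x i32>   @llvm.arm.neon.vpadd.v2i32(<2 x i32> %a, <2 x i32> %b)
  %pf = call <2 x float> @llvm.arm.neon.vpadd.v2f32(<2 x float> %c, <2 x float> %d)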
llvm-svn: 78646
as vector shuffles did not work out well. Shuffles that produce double-wide
vectors accurately represent the operation but make it hard to do anything
with the results. I considered splitting them into two shuffles, one to
write each register separately, but there doesn't seem to be a good way to
reunite them for codegen.
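Concretely, the abandoned form looked roughly like this (a sketch with made-up
names): one shuffle produces the double-wide result, and pulling the halves
back out takes further shuffles that codegen cannot reliably knit back into a
single multi-register operation:

  %wide = shufflevector <8 x i8> %v0, <8 x i8> %v1,
                        <16 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7,
                                    i32 8, i32 9, i32 10, i32 11, i32 12, i32 13, i32 14, i32 15>
  %lo   = shufflevector <16 x i8> %wide, <16 x i8> undef,
                        <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7>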
llvm-svn: 78437
wide vectors. Likewise, change VSTn intrinsics to take separate arguments
for each vector in a multi-vector struct. Adjust tests accordingly.
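A sketch of the new shapes (value names hypothetical, and the exact argument
lists have varied across revisions):

  %pair = call { <8 x i8>, <8 x i8> } @llvm.arm.neon.vld2.v8i8(i8* %ptr)
  %v0   = extractvalue { <8 x i8>, <8 x i8> } %pair, 0
  %v1   = extractvalue { <8 x i8>, <8 x i8> } %pair, 1
  call void @llvm.arm.neon.vst2.v8i8(i8* %ptr, <8 x i8> %v0, <8 x i8> %v1)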
llvm-svn: 77468
"parameter" types. An intrinsic can now return a multiple return values like
this:
def add_with_overflow : Intrinsic<[llvm_i32_ty, llvm_i1_ty],
                                  [LLVMMatchType<0>, LLVMMatchType<0>]>;
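Callers see the multiple results as a first-class aggregate; for example, with
the real llvm.sadd.with.overflow intrinsic (value names made up):

  %res = call { i32, i1 } @llvm.sadd.with.overflow.i32(i32 %a, i32 %b)
  %sum = extractvalue { i32, i1 } %res, 0
  %ovf = extractvalue { i32, i1 } %res, 1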
llvm-svn: 59237