llvm-mirror/test/CodeGen/X86/pr37879.ll

Sanjay Patel 8720d89aac [DAGCombiner] re-enable truncation of binops
This is effectively re-committing the changes from:
rL347917 (D54640)
rL348195 (D55126)
...which were effectively reverted here:
rL348604
...because the code had a bug that could induce infinite looping
or eventual out-of-memory compilation.

The bug was that this code did not guard against transforming
opaque constants. More details are in the post-commit mailing
list thread for r347917. A reduced test for that is included
in the x86 bool-math.ll file. (I wasn't able to reduce a PPC
backend test for this, but it was almost the same pattern.)

Original commit message for r347917:

The motivating case for this is shown in:
https://bugs.llvm.org/show_bug.cgi?id=32023
and the corresponding rot16.ll regression tests.

Because x86 scalar shift amounts are i8 values, we can end up with trunc-binop-trunc
sequences that don't get folded in IR.
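
To make the motivating pattern concrete, here is a minimal sketch of the UB-free
variable-rotate idiom from that bug report (the function name and exact operand
choices are illustrative, not copied from rot16.ll). On x86 the shift amounts in
this code are narrowed to i8 during lowering, which is where the
trunc-binop-trunc sequences described above come from:

  define i16 @rotl16_sketch(i16 %x, i16 %amt) {
    ; Mask both shift amounts to [0,15] so neither shift is poison.
    %lowbits = and i16 %amt, 15
    %shl = shl i16 %x, %lowbits
    %neg = sub i16 0, %amt
    %negbits = and i16 %neg, 15
    %shr = lshr i16 %x, %negbits
    ; OR the two partial shifts back together to form the rotate.
    %rot = or i16 %shl, %shr
    ret i16 %rot
  }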

As the TODO comments suggest, there will be regressions if we extend this (for x86,
we mostly seem to be missing LEA opportunities, but there are likely vector folds
missing too). I think those should be considered existing bugs because this is the
same transform that we do as an IR canonicalization in instcombine. We just need
more tests to make those visible independent of this patch.

llvm-svn: 348706
2018-12-08 16:07:38 +00:00

; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
; RUN: llc -O3 < %s -mtriple=x86_64-apple-darwin -mattr=+avx512bw | FileCheck %s

define double @foo(i32** nocapture readonly) #0 {
; CHECK-LABEL: foo:
; CHECK:       ## %bb.0:
; CHECK-NEXT:    movq (%rax), %rax
; CHECK-NEXT:    vcvtsi2sdq %rax, %xmm0, %xmm1
; CHECK-NEXT:    kmovd %eax, %k1
; CHECK-NEXT:    vmovsd {{.*#+}} xmm0 = mem[0],zero
; CHECK-NEXT:    vmovsd %xmm1, %xmm0, %xmm0 {%k1}
; CHECK-NEXT:    retq
  ; Load a 64-bit value (the pointer is undef in this reduced test).
  %2 = load i64, i64* undef, align 8
  ; Test whether the low bit of the loaded value is clear.
  %3 = and i64 %2, 1
  %4 = icmp eq i64 %3, 0
  ; Convert the full signed i64 value to double.
  %5 = sitofp i64 %2 to double
  ; Select 1.0 when the low bit is clear, otherwise the converted value.
  %6 = select i1 %4, double 1.000000e+00, double %5
  ret double %6
}