llvm-mirror/test/CodeGen/X86/bitcast-i256.ll
Simon Pilgrim d654e7d40c [X86] Handle COPYs of physregs better (regalloc hints)
Enable enableMultipleCopyHints() on X86.

Original Patch by @jonpa:

While enabling the mischeduler for SystemZ, it was discovered that one test (test/CodeGen/SystemZ/call-03.ll) needed an extra, seemingly needless COPY. Handling that case resulted in this patch, which improves register coalescing by providing not just one copy hint, but a sorted list of copy hints. On SystemZ, this gives ~12,500 fewer register moves on SPEC, as well as marginally less spilling.

Instead of improving just the SystemZ backend, the improvement has been implemented in common code (calculateSpillWeightAndHint()). This causes a lot of test failures, but since this should be a general improvement, I hope that the involved targets will help review the test updates.
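
A minimal sketch of the idea, in illustrative C++ rather than LLVM's actual calculateSpillWeightAndHint() code (the CopyHint record and weighting scheme here are hypothetical): instead of remembering only the single best copy-related register, every copy-connected register is weighted and the allocator receives the whole list, hottest first.

// Hypothetical sketch of the multiple-copy-hints idea; the names and
// types are illustrative, not LLVM's real API.
#include <algorithm>
#include <vector>

struct CopyHint {
  unsigned Reg;   // register this vreg is copy-connected to
  float Weight;   // accumulated frequency of the connecting COPYs
};

// Return all copy-related registers, hottest first, so the allocator
// can try each hint in turn instead of only the single best one.
std::vector<unsigned> sortedCopyHints(std::vector<CopyHint> Hints) {
  std::stable_sort(Hints.begin(), Hints.end(),
                   [](const CopyHint &A, const CopyHint &B) {
                     return A.Weight > B.Weight;
                   });
  std::vector<unsigned> Order;
  Order.reserve(Hints.size());
  for (const CopyHint &H : Hints)
    Order.push_back(H.Reg);
  return Order;
}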

Differential Revision: https://reviews.llvm.org/D38128

llvm-svn: 342578
2018-09-19 18:59:08 +00:00

; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
; RUN: llc < %s -mtriple=x86_64-unknown-unknown -mattr=+avx,-slow-unaligned-mem-32 | FileCheck %s --check-prefix=FAST
; RUN: llc < %s -mtriple=x86_64-unknown-unknown -mattr=+avx,+slow-unaligned-mem-32 | FileCheck %s --check-prefix=SLOW
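; The two RUN lines differ only in the slow-unaligned-mem-32 attribute:
; FAST keeps the 256-bit store as a single unaligned vmovups, while SLOW
; splits it into two 128-bit halves (vextractf128 + vmovups).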
define i256 @foo(<8 x i32> %a) {
; FAST-LABEL: foo:
; FAST: # %bb.0:
; FAST-NEXT: movq %rdi, %rax
; FAST-NEXT: vmovups %ymm0, (%rdi)
; FAST-NEXT: vzeroupper
; FAST-NEXT: retq
;
; SLOW-LABEL: foo:
; SLOW: # %bb.0:
; SLOW-NEXT: movq %rdi, %rax
; SLOW-NEXT: vextractf128 $1, %ymm0, 16(%rdi)
; SLOW-NEXT: vmovups %xmm0, (%rdi)
; SLOW-NEXT: vzeroupper
; SLOW-NEXT: retq
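  ; An i256 return value does not fit in the integer return registers, so
  ; it is returned indirectly: the caller passes a result pointer in %rdi,
  ; which the callee copies to %rax (the movq %rdi, %rax above).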
%r = bitcast <8 x i32> %a to i256
ret i256 %r
}