llvm-mirror/test/CodeGen/ARM/swift-vldm.ll
Matthias Braun 5ca7af07c9 ARM: Introduce conservative load/store optimization mode
Most of the time ARM has the CCR.UNALIGN_TRP bit set to false, which
means that unaligned loads/stores do not trap, and so even extensive
testing will not catch these bugs. However, the multi/double variants
are not affected by this bit and will still trap. In effect, a more
aggressive load/store optimization will break existing (bad) code.
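
For illustration only (not part of this change), a minimal C sketch of
the bug class; the function name and the one-byte misalignment are made
up for the example:

    #include <stdint.h>

    /* Hypothetical example: reading two 32-bit values through a
     * misaligned pointer. The cast is undefined behavior in C, i.e.
     * exactly the kind of (bad) code described above. */
    uint64_t read_pair(const unsigned char *buf) {
      const uint32_t *p = (const uint32_t *)(buf + 1); /* misaligned by 1 */
      uint32_t lo = p[0]; /* plain LDR: no trap while UNALIGN_TRP is clear */
      uint32_t hi = p[1]; /* plain LDR: no trap */
      /* If the load/store optimizer merges the two LDRs into a single
       * LDRD/LDM, the same access traps regardless of UNALIGN_TRP. */
      return ((uint64_t)hi << 32) | lo;
    }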

These bugs do not necessarily manifest in the broken code where the
misaligned pointer is formed, but often only later, in perfectly legal
code where it is accessed. This means that recompiling system libraries
(which have no alignment bugs) with a newer compiler will break existing
applications (with alignment bugs) that worked before.

So (under protest) I implemented this safe mode, which limits the
formation of multi/double operations to cases that are not affected by
user code (stack operations like spills/reloads) or cases where the
normal operations trap anyway (floating-point loads/stores). It is
disabled by default.
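
As the RUN lines in the test below show, the mode is opted into with
the -arm-assume-misaligned-load-store llc flag, for example (input.ll
stands in for any input file):

    llc -arm-assume-misaligned-load-store -mcpu=swift -mtriple=armv7s-apple-ios input.ll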

Differential Revision: http://reviews.llvm.org/D17015

llvm-svn: 262504
2016-03-02 19:20:00 +00:00


; RUN: llc < %s -mcpu=swift -mtriple=armv7s-apple-ios | FileCheck %s
; RUN: llc < %s -arm-assume-misaligned-load-store -mcpu=swift -mtriple=armv7s-apple-ios | FileCheck %s
; Check that we avoid producing vldm instructions using d registers that
; begin in the most-significant half of a q register. These require more
; micro-ops on swift and so aren't worth combining.
; CHECK-LABEL: test_vldm
; CHECK: vldmia r{{[0-9]+}}, {d2, d3, d4}
; CHECK-NOT: vldmia r{{[0-9]+}}, {d1, d2, d3, d4}
declare fastcc void @force_register(double %d0, double %d1, double %d2, double %d3, double %d4)
define void @test_vldm(double* %x, double* %y) {
entry:
  %addr1 = getelementptr double, double* %x, i32 1
  %addr2 = getelementptr double, double* %x, i32 2
  %addr3 = getelementptr double, double* %x, i32 3
  %d0 = load double, double* %y
  %d1 = load double, double* %x
  %d2 = load double, double* %addr1
  %d3 = load double, double* %addr2
  %d4 = load double, double* %addr3
; We are trying to force x[0-3] into registers d1 to d4 so that we can
; test that we don't form a "vldmia rX, {d1, d2, d3, d4}".
; We are relying on the calling convention and on register allocation
; properly coalescing registers.
  call fastcc void @force_register(double %d0, double %d1, double %d2, double %d3, double %d4)
  ret void
}