; llvm-mirror/test/CodeGen/PowerPC/2013-05-15-preinc-fold.ll
; RUN: llc < %s | FileCheck %s
target datalayout = "E-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-f128:128:128-v128:128:128-n32:64"
target triple = "powerpc64-unknown-linux-gnu"
define i8* @test(i8* %base, i8 %val) {
entry:
%arrayidx = getelementptr inbounds i8* %base, i32 -1
store i8 %val, i8* %arrayidx, align 1
%arrayidx2 = getelementptr inbounds i8* %base, i32 1
store i8 %val, i8* %arrayidx2, align 1
ret i8* %arrayidx
}
; CHECK: @test
; CHECK: %entry
; CHECK-NEXT: stbu 4, -1(3)
; CHECK-NEXT: stb 4, 2(3)
; CHECK-NEXT: blr
define i64* @test64(i64* %base, i64 %val) {
entry:
%arrayidx = getelementptr inbounds i64* %base, i32 -1
store i64 %val, i64* %arrayidx, align 8
%arrayidx2 = getelementptr inbounds i64* %base, i32 1
store i64 %val, i64* %arrayidx2, align 8
ret i64* %arrayidx
}
; CHECK: @test64
; CHECK: %entry
; CHECK-NEXT: stdu 4, -8(3)
; CHECK-NEXT: std 4, 16(3)
; CHECK-NEXT: blr