llvm-mirror/test/CodeGen/AArch64/arm64-fastisel-gep-promote-before-add.ll

commit 365bbe9d10 by Jessica Paquette
[AArch64][GlobalISel] Import XRO load/store patterns instead of custom selection
Instead of using custom C++ in `earlySelect` for loads and stores, just import
the patterns.

Remove `earlySelectLoad`, since we can just import the work it's doing.

There are some minor changes to how `ComplexRendererFns` are returned for the
XRO addressing modes: if an immediate is added in two steps, it is sometimes
not imported properly and you end up with only one immediate. I'm not sure if
this is intentional.
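
For context, a minimal sketch of the shape this takes, assuming a hypothetical
helper `renderXRO` (the name, parameters, and body are illustrative, not the
actual D66679 code):

```cpp
#include "llvm/CodeGen/GlobalISel/InstructionSelector.h"
#include "llvm/CodeGen/MachineInstrBuilder.h"

using namespace llvm;

// Hypothetical helper: given an already-matched base/offset pair, return the
// renderer list for an XRO addressing mode. Each operand gets exactly one
// renderer, and each immediate is produced by a single addImm call, since
// splitting an immediate across two steps sometimes left only one immediate
// after pattern import.
static InstructionSelector::ComplexRendererFns
renderXRO(Register Base, Register Offset, bool SignExtend, bool DoShift) {
  return {{
      [=](MachineInstrBuilder &MIB) { MIB.addUse(Base); },
      [=](MachineInstrBuilder &MIB) { MIB.addUse(Offset); },
      [=](MachineInstrBuilder &MIB) { MIB.addImm(SignExtend); },
      [=](MachineInstrBuilder &MIB) { MIB.addImm(DoShift); },
  }};
}
```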

- Update load-addressing-modes.mir to include the instructions we can now
  import.

- Add a similar test, store-addressing-modes.mir, to show which store opcodes
  we currently import, and to show that we can pull in shifts etc.

- Update arm64-fastisel-gep-promote-before-add.ll to use FastISel instead of
  GISel. This test failed with GISel because GISel folds the gep into the load.
  The test checks that FastISel doesn't fold non-pointer-width adds into loads.
  GISel, on the other hand, produces a G_CONSTANT of -128 for the add and then
  a G_GEP, which must be pointer-width. (A worked example of why that fold is
  unsafe follows this list.)
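
To make the width mismatch concrete, here is a small self-contained C++ check
(my own illustration of the arithmetic, not part of the test; it assumes a
two's-complement target for the 8-bit wraparound):

```cpp
#include <cassert>
#include <cstdint>

int main() {
  int8_t a = 64, b = 64;
  // sext(a + b): the 8-bit sum 128 wraps to -128 before being widened,
  // which is exactly the G_CONSTANT GISel materializes.
  int64_t SextOfSum = (int8_t)(a + b);
  // sext(a) + sext(b): widen first, then add; no wraparound occurs.
  int64_t SumOfSext = (int64_t)a + (int64_t)b;
  assert(SextOfSum == -128 && SumOfSext == 128); // the two results differ
  return 0;
}
```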

Note that we don't get STRBRoX: it seems like the importer can't handle
`FPR8Op:{ *:[Untyped] }:$Rt` source operands, so those patterns are not
currently supported.

Differential Revision: https://reviews.llvm.org/D66679

llvm-svn: 369806
2019-08-23 20:31:34 +00:00


; fastisel should not fold add with non-pointer bitwidth
; sext(a) + sext(b) != sext(a + b)
; RUN: llc -fast-isel -mtriple=arm64-apple-darwin %s -O0 -o - | FileCheck %s

define zeroext i8 @gep_promotion(i8* %ptr) nounwind uwtable ssp {
entry:
  %ptr.addr = alloca i8*, align 8
  %add = add i8 64, 64 ; 0x40 + 0x40
  store i8* %ptr, i8** %ptr.addr, align 8
  %0 = load i8*, i8** %ptr.addr, align 8

  ; CHECK-LABEL: _gep_promotion:
  ; CHECK: ldrb {{[a-z][0-9]+}}, {{\[[a-z][0-9]+\]}}

  %arrayidx = getelementptr inbounds i8, i8* %0, i8 %add
  %1 = load i8, i8* %arrayidx, align 1
  ret i8 %1
}