Mirror of https://github.com/RPCS3/llvm-mirror.git (synced 2025-01-31 20:51:52 +01:00)
llvm-mirror/test/CodeGen/AMDGPU/smrd-fold-offset.mir
Commit 6705a324ed by Matt Arsenault: AMDGPU: Rename add/sub with carry out instructions
The hardware has created a real mess in the naming for add/sub, which
have been renamed basically every generation. Switch the carry out
pseudos to have the gfx9/gfx10 names. We were using the original SI/CI
v_add_i32/v_sub_i32 names. Later targets reintroduced these names as
carryless instructions with a saturating clamp bit, which we do not
define. Do this rename so we can unambiguously add these missing
instructions.

The carry-in versions should also be renamed, but at least those had a
consistent _u32 name to begin with. The 16-bit instructions were also
renamed, but aren't ambiguous.
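As an illustration of the rename (a sketch only; the operands mirror the test below rather than any specific in-tree diff), the carry-out add pseudo moves from the SI/CI-era name to the gfx9/gfx10-style name:

```mir
; Old SI/CI-era name: "i32" suggested a signed add, even though the
; operation with carry out is effectively unsigned
%7:vgpr_32 = V_ADD_I32_e32 %4, killed %8, implicit-def dead $vcc, implicit $exec

; New gfx9/gfx10-style name: the _CO_ infix makes the carry out explicit,
; freeing the v_add_i32 name for the later carryless, clamping instruction
%7:vgpr_32 = V_ADD_CO_U32_e32 %4, killed %8, implicit-def dead $vcc, implicit $exec
```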

This does regress assembler error message quality in some cases. In
mismatched wave32/wave64 situations, this will switch from
"unsupported instruction" to "invalid operand", with the error
pointing at the wrong position. I couldn't quite follow how the
assembler selects these, but the previous behavior seemed accidental
to me. It looked like there was a partial attempt to handle this which
was never completed (i.e. there is an AMDGPUOperand::isBoolReg but it
isn't used for anything).
2020-07-16 13:16:30 -04:00


# RUN: llc -march=amdgcn -mcpu=gfx900 -run-pass si-fix-sgpr-copies -o - %s | FileCheck -check-prefix=GCN %s
# GCN-LABEL: name: smrd_vgpr_offset_imm
# GCN: V_READFIRSTLANE_B32
# GCN: S_BUFFER_LOAD_DWORD_SGPR
---
name: smrd_vgpr_offset_imm
body: |
  bb.0:
    liveins: $sgpr0, $sgpr1, $sgpr2, $sgpr3, $vgpr0
    %4:vgpr_32 = COPY $vgpr0
    %3:sgpr_32 = COPY $sgpr3
    %2:sgpr_32 = COPY $sgpr2
    %1:sgpr_32 = COPY $sgpr1
    %0:sgpr_32 = COPY $sgpr0
    %5:sgpr_128 = REG_SEQUENCE %0, %subreg.sub0, %1, %subreg.sub1, %2, %subreg.sub2, %3, %subreg.sub3
    %6:sreg_32_xm0 = S_MOV_B32 4095
    %8:vgpr_32 = COPY %6
    %7:vgpr_32 = V_ADD_CO_U32_e32 %4, killed %8, implicit-def dead $vcc, implicit $exec
    %10:sreg_32 = COPY %7
    %9:sreg_32_xm0_xexec = S_BUFFER_LOAD_DWORD_SGPR killed %5, killed %10, 0, 0
    $vgpr0 = COPY %9
    SI_RETURN_TO_EPILOG $vgpr0
...
# GCN-LABEL: name: smrd_vgpr_offset_imm_add_u32
# GCN: V_READFIRSTLANE_B32
# GCN: S_BUFFER_LOAD_DWORD_SGPR
---
name: smrd_vgpr_offset_imm_add_u32
body: |
  bb.0:
    liveins: $sgpr0, $sgpr1, $sgpr2, $sgpr3, $vgpr0
    %4:vgpr_32 = COPY $vgpr0
    %3:sgpr_32 = COPY $sgpr3
    %2:sgpr_32 = COPY $sgpr2
    %1:sgpr_32 = COPY $sgpr1
    %0:sgpr_32 = COPY $sgpr0
    %5:sgpr_128 = REG_SEQUENCE %0, %subreg.sub0, %1, %subreg.sub1, %2, %subreg.sub2, %3, %subreg.sub3
    %6:sreg_32_xm0 = S_MOV_B32 4095
    %8:vgpr_32 = COPY %6
    %7:vgpr_32 = V_ADD_U32_e32 %4, killed %8, implicit $exec
    %10:sreg_32 = COPY %7
    %9:sreg_32_xm0_xexec = S_BUFFER_LOAD_DWORD_SGPR killed %5, killed %10, 0, 0 :: (dereferenceable invariant load 4)
    $vgpr0 = COPY %9
    SI_RETURN_TO_EPILOG $vgpr0
...