mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-21 03:53:04 +02:00
Commit Graph

14 Commits

Evan Cheng
fa630cca3f Get rid of more dead code.
llvm-svn: 77227
2009-07-27 18:38:54 +00:00
Evan Cheng
03307125f5 Clean up.
llvm-svn: 77221
2009-07-27 18:25:24 +00:00
Evan Cheng
a773ae7a39 Get rid of some more getOpcode calls.
This also fixes potential problems where ARMBaseInstrInfo routines would not recognize Thumb1 instructions when 32-bit and 16-bit instructions are mixed.

llvm-svn: 77218
2009-07-27 18:20:05 +00:00
Evan Cheng
674c4d47b9 Use t2LDRi12 and t2STRi12 to load / store to / from stack frames. Eliminate more getOpcode calls.
llvm-svn: 77181
2009-07-27 03:14:20 +00:00
Evan Cheng
d615e606c4 Change Thumb2 jumptable codegen to one that uses two-level jumps:
Before:
      adr r12, #LJTI3_0_0
      ldr pc, [r12, +r0, lsl #2]
LJTI3_0_0:
      .long    LBB3_24
      .long    LBB3_30
      .long    LBB3_31
      .long    LBB3_32

After:
      adr r12, #LJTI3_0_0
      add pc, r12, +r0, lsl #2
LJTI3_0_0:
      b.w    LBB3_24
      b.w    LBB3_30
      b.w    LBB3_31
      b.w    LBB3_32

This has several advantages.
1. This will make it easier to optimize this into a TBB / TBH instruction plus a
   (smaller) table (see the sketch after this entry).
2. This eliminates the need for the ugly asm-printer hack that forces the addresses
   into Thumb addresses (bit 0 set to one).
3. Same codegen for PIC and non-PIC.
4. This eliminates the need to align the table, so the constant-pool island pass
   won't have to over-estimate the size.

Based on my calculation, the latter is probably slightly faster as well, since
ldr pc with a register-shifted address is very slow. That is, it should be a win
as long as the hardware can do a reasonable job of predicting the second branch.

llvm-svn: 77024
2009-07-25 00:33:29 +00:00
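To illustrate point 1, the TBB form this sets up for would look roughly like the
following (a hand-written sketch, not output from this commit; it reuses the labels
from the example above and assumes every target is within the byte-offset range):

      tbb    [pc, r0]              @ branch to pc + 2 * table[r0]
LJTI3_0_0:
      .byte  (LBB3_24-LJTI3_0_0)/2
      .byte  (LBB3_30-LJTI3_0_0)/2
      .byte  (LBB3_31-LJTI3_0_0)/2
      .byte  (LBB3_32-LJTI3_0_0)/2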
Eli Friedman
1be7dfb1aa Remove unused member functions.
llvm-svn: 76960
2009-07-24 07:43:59 +00:00
Evan Cheng
d13b8f8353 FLDD, FLDS, FCPYD, FCPYS, FSTD, FSTS, VMOVD, VMOVQ map to the same instructions on all sub-targets.
llvm-svn: 76925
2009-07-24 00:53:56 +00:00
David Goodwin
85bcdffa4f Correctly handle the Thumb-2 imm8 addrmode. Specialize frame index elimination more precisely for Thumb-2 to get better codegen.
llvm-svn: 76919
2009-07-24 00:16:18 +00:00
David Goodwin
f3c839e0b9 Fix frame index elimination to correctly handle Thumb-2 addressing modes that don't allow negative offsets. During frame index elimination, convert a *i12 opcode to a *i8 opcode when necessary due to a negative offset.
llvm-svn: 76883
2009-07-23 17:06:46 +00:00
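To illustrate the two addressing modes involved (a sketch, not from the commit; the
offsets are arbitrary): t2LDRi12 encodes an unsigned 12-bit offset and so cannot
express a negative frame offset, while the i8 form can.

      ldr.w  r0, [sp, #8]          @ t2LDRi12: unsigned offset, 0 .. 4095
      ldr.w  r0, [sp, #-4]         @ t2LDRi8: covers negative offsets down to -255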
Evan Cheng
fd384e9493 Fix a regression from 76124. Thumb1 instructions default to the S bit being set.
llvm-svn: 76374
2009-07-19 19:16:46 +00:00
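For context (a sketch, not from the commit): the Thumb1 16-bit low-register add
always updates the flags, which unified syntax spells with an explicit s, whereas
the Thumb2 32-bit encoding makes the S bit optional.

      adds   r0, r0, r1            @ Thumb1 16-bit add: always flag-setting
      add.w  r0, r0, r1            @ Thumb2 32-bit add: S bit clear, flags untouched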
Anton Korobeynikov
dc39f4fff8 Emit cross-regclass register moves for Thumb2.
Minor code duplication cleanup.

llvm-svn: 76124
2009-07-16 23:26:06 +00:00
Evan Cheng
4d2678d1ff Move isPredicated from .cpp to .h
llvm-svn: 75217
2009-07-10 01:38:27 +00:00
David Goodwin
94209a4c31 Generalize opcode selection in ARMBaseRegisterInfo.
llvm-svn: 75036
2009-07-08 20:28:28 +00:00
David Goodwin
5bdef4b3f7 Checkpoint Thumb2 Instr info work. Generalized base code so that it can be shared between ARM and Thumb2. Not yet activated because register information must be generalized first.
llvm-svn: 75010
2009-07-08 16:09:28 +00:00