mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-20 03:23:01 +02:00
Commit Graph

312 Commits

Author SHA1 Message Date
Evan Cheng
239d9b439d Conditional moves are slightly more expensive than moves.
llvm-svn: 118985
2010-11-13 05:14:20 +00:00
Evan Cheng
a7d3c3d387 Add conditional move of large immediate.
llvm-svn: 118968
2010-11-13 02:25:14 +00:00
Owen Anderson
053d9fb9b5 Revert r118939 while I work out why it broke some buildbots.
llvm-svn: 118942
2010-11-12 23:36:03 +00:00
Owen Anderson
f8192cf0cc Attempt to provide correct encodings for Thumb2 binary operators.
llvm-svn: 118939
2010-11-12 23:18:11 +00:00
Evan Cheng
19f018a1be Add conditional mvn instructions.
llvm-svn: 118935
2010-11-12 22:42:47 +00:00
Owen Anderson
f1ffc8fdc9 First stab at providing correct Thumb2 encodings, starting with adc.
llvm-svn: 118924
2010-11-12 21:12:40 +00:00
Evan Cheng
165e65f53a Fix @llvm.prefetch isel. Select between pld / pldw using the first immediate operand (rw). There is currently no intrinsic that matches to pli.
llvm-svn: 118237
2010-11-04 05:19:35 +00:00
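For context, a minimal C sketch of how the rw immediate reaches this isel path, assuming a Clang/GCC-style __builtin_prefetch frontend: the builtin lowers to @llvm.prefetch, and its second argument is the read/write immediate that selects pld versus pldw (pldw additionally needs a target with the MP extension).

  /* Illustrative only. */
  void prefetch_both(const char *src, char *dst) {
    __builtin_prefetch(src, 0);  /* rw = 0: read prefetch  -> pld  */
    __builtin_prefetch(dst, 1);  /* rw = 1: write prefetch -> pldw */
  }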
Evan Cheng
eab7251695 Fix preload instruction isel. Only v7 supports pli, and only v7 with the MP extension supports pldw. Add a subtarget attribute to denote MP extension support and legalize unsupported preloads to nothing.
llvm-svn: 118160
2010-11-03 06:34:55 +00:00
Evan Cheng
b41703bc2f Add support to match @llvm.prefetch to pld / pldw / pli. rdar://8601536.
llvm-svn: 118152
2010-11-03 05:14:24 +00:00
Jim Grosbach
c10d3f3d4b Break ARM addrmode4 (load/store multiple base address) into its constituent
parts. Represent the operation mode as an optional operand instead.
rdar://8614429

llvm-svn: 118137
2010-11-03 01:01:43 +00:00
Chris Lattner
d3f7a5d3bd Completely reject instructions that have an operand in their
ins/outs list that isn't specified by their asmstring.  Previously
the asmmatcher would just force a 0 register into it, which clearly
isn't right.  Mark a bunch of ARM instructions that use this as 
isCodeGenOnly.  Some of them are clearly pseudo instructions (like
t2TBB) others use a weird hasExtraSrcRegAllocReq thing that will
either need to be removed or the asmmatcher will need to be taught
about it (someday).

llvm-svn: 118119
2010-11-02 23:40:41 +00:00
Jim Grosbach
311aa5e22f The T2 extract/pack instructions are only valid in Thumb2 mode. Mark the
patterns as such.

llvm-svn: 117923
2010-11-01 15:59:52 +00:00
Chris Lattner
5d088218e5 two changes: make the asmmatcher generator ignore ARM pseudos properly,
and make it a hard error for instructions to not have an asm string.
These instructions should be marked isCodeGenOnly.

llvm-svn: 117861
2010-10-31 19:15:18 +00:00
Chris Lattner
01acd65875 reapply r117858 with apparent editor malfunction fixed (somehow I
got a duplicated line).

llvm-svn: 117860
2010-10-31 19:10:56 +00:00
Chris Lattner
8132a182e7 revert r117858 while I check out a failure I missed.
llvm-svn: 117859
2010-10-31 19:05:32 +00:00
Chris Lattner
70b05a5b88 the asm matcher can't handle operands with modifiers (like ${foo:bar}).
Instead of silently ignoring these instructions, emit a hard error and
force the target author to either refactor the target or mark the 
instruction 'isCodeGenOnly'.

Mark a few instructions in ARM and MBlaze that are doing this as
isCodeGenOnly.

llvm-svn: 117858
2010-10-31 18:48:12 +00:00
Bob Wilson
183c466006 Overhaul memory barriers in the ARM backend. Radar 8601999.
There were a number of issues to fix up here:
* The "device" argument of the llvm.memory.barrier intrinsic should be
used to distinguish the "Full System" domain from the "Inner Shareable"
domain.  It has nothing to do with using DMB vs. DSB instructions.
* The compiler should never need to emit DSB instructions.  Remove the
ARMISD::SYNCBARRIER node and also remove the instruction patterns for DSB.
* Merge the separate DMB/DSB instructions for options only used for the
disassembler with the default DMB/DSB instructions.  Add the default
"full system" option ARM_MB::SY to the ARM_MB::MemBOpt enum.
* Add a separate ARMISD::MEMBARRIER_MCR node for subtargets that implement
a data memory barrier using the MCR instruction.
* Fix up encodings for these instructions (except MCR).
I also updated the tests and added a few new ones to check for DMB options
that were not currently being exercised.

llvm-svn: 117756
2010-10-30 00:54:37 +00:00
Jim Grosbach
9a473e23b8 Remove hard tab characters.
llvm-svn: 117742
2010-10-29 23:23:15 +00:00
Evan Cheng
bc4588c439 Re-commit 117518 and 117519 now that ARM MC test failures are out of the way.
llvm-svn: 117531
2010-10-28 06:47:08 +00:00
Evan Cheng
fdc80a0316 Revert 117518 and 117519 for now. They changed scheduling and cause MC tests to fail. Ugh.
llvm-svn: 117520
2010-10-28 02:00:25 +00:00
Evan Cheng
5c358e02ea - Assign load / store instructions with shifter-op address modes the right itinerary classes.
- For now, loads with the [r, r] addressing mode are treated the same as the
  [r, r lsl/lsr/asr #] variants. ARMBaseInstrInfo::getOperandLatency() should
  identify the former case and reduce the output latency by 1.
- Also identify [r, r << 2] case. This special form of shifter addressing mode
  is "free".

llvm-svn: 117519
2010-10-28 01:49:06 +00:00
Jim Grosbach
1a13b873e7 imm12 operands aren't Thumb2 only, so rename the printer helper function.
llvm-svn: 117291
2010-10-25 20:00:01 +00:00
Bob Wilson
6b6b53ad6f Remove unused ARMISD::AND selection DAG node.
llvm-svn: 116566
2010-10-15 04:34:40 +00:00
Jim Grosbach
29dc23398f Tweak the ARM backend to use the RRX mnemonic instead of the 'mov a, b, rrx'
pseudonym.

llvm-svn: 116512
2010-10-14 20:43:44 +00:00
Jim Grosbach
506b966b9d A few 80 column fixes.
llvm-svn: 116451
2010-10-13 23:34:31 +00:00
Jim Grosbach
c0a61c0796 Allow use of the 16-bit literal move instruction in CMOVs for Thumb2 mode.
llvm-svn: 115890
2010-10-07 00:53:56 +00:00
Jim Grosbach
de2bd8cd3f Clean up MOVi32imm and t2MOVi32imm pseudo instruction definitions.
llvm-svn: 115853
2010-10-06 22:01:26 +00:00
Evan Cheng
6fbb6dea7c - Add TargetInstrInfo::getOperandLatency() to compute operand latencies. This
allows targets to correctly compute latency for cases where static scheduling
  itineraries aren't sufficient, e.g. variable_ops instructions such as
  ARM::ldm.
  This also allows targets without scheduling itineraries to compute operand
  latencies. e.g. X86 can return (approximated) latencies for high latency
  instructions such as division.
- Compute operand latencies for those defined by load multiple instructions,
  e.g. ldm and those used by store multiple instructions, e.g. stm.

llvm-svn: 115755
2010-10-06 06:27:31 +00:00
Jim Grosbach
619f1c1cc5 Nuke the rest of the :comment references
llvm-svn: 115373
2010-10-01 23:21:38 +00:00
Jim Grosbach
52b5709c99 The asm strings are never used at all, so just nuke 'em entirely.
llvm-svn: 115160
2010-09-30 16:56:53 +00:00
Jim Grosbach
ad67153eb3 Go ahead and jump!
Now that the MC lowering handles the expansion of the pseudos, kill the horrible
blobs of text.

llvm-svn: 115130
2010-09-30 02:18:06 +00:00
Evan Cheng
fa5d40dbff ARM instruction itinerary fixes:
1. Cortex-a9 8-bit and 16-bit load / store AGU cycles are 1 cycle longer than 32-bit ones.
2. Cortex-a9 is out-of-order, so model all read cycles as cycle 1.
3. Lots of other random fixes for A8 and A9.

llvm-svn: 115121
2010-09-30 01:08:25 +00:00
Evan Cheng
b44d480808 Model Cortex-a9 load to SUB, RSB, ADD, ADC, SBC, RSC, CMN, MVN, or CMP
pipeline forwarding path.

llvm-svn: 115098
2010-09-29 22:42:35 +00:00
Evan Cheng
7eb08b1ad9 Separate itinerary classes for mvn from mov; for tst / teq from cmp / cmn.
llvm-svn: 115010
2010-09-29 00:49:25 +00:00
Evan Cheng
7fffe3cf58 Assign bitwise binary instructions different itinerary classes from ALU instructions such as add / sub.
llvm-svn: 115008
2010-09-29 00:27:46 +00:00
Evan Cheng
124ae30ef8 More pseudo instruction scheduling itinerary fixes.
llvm-svn: 114768
2010-09-24 22:41:41 +00:00
Evan Cheng
eb81dc39dc Fix scheduling itinerary for pseudo mov immediate instructions which expand into two real instructions.
llvm-svn: 114766
2010-09-24 22:03:46 +00:00
Owen Anderson
4fc55c0e02 Revert r114703 and r114702, removing the isConditionalMove flag from instructions. After further
reflection, this isn't going to achieve the purpose I intended it for.  Back to the drawing board!

llvm-svn: 114710
2010-09-23 23:45:25 +00:00
Owen Anderson
15c6948d29 Add isConditionalMove bits to X86 and ARM instructions.
llvm-svn: 114703
2010-09-23 22:57:01 +00:00
Chris Lattner
55043ef46a fix a long standing wart: all the ComplexPattern's were being
passed the root of the match, even though only a few patterns
actually needed this (one in X86, several in ARM [which should
be refactored anyway], and some in CellSPU that I don't feel 
like detangling).   Instead of requiring all ComplexPatterns to
take the dead root, have targets opt into getting the root by
putting SDNPWantRoot on the ComplexPattern.

llvm-svn: 114471
2010-09-21 20:31:19 +00:00
Evan Cheng
b87520ca74 Fix LDM_RET scheduling itinerary.
llvm-svn: 113435
2010-09-08 22:57:08 +00:00
Chris Lattner
c0e5368884 remove some dead code. t2addrmode_imm8s4 is never used in a
pattern, so there is no need to define a matching function.

llvm-svn: 113122
2010-09-05 22:51:11 +00:00
Chris Lattner
b74759a9fa temporarily revert r112664; it is causing a decoding conflict, and
the testcases should be merged.

llvm-svn: 112711
2010-09-01 16:00:50 +00:00
Bill Wendling
bb6052cfd6 We have a chance for an optimization. Consider this code:
int x(int t) {
  if (t & 256)
    return -26;
  return 0;
}

We generate this:

     tst.w   r0, #256
     mvn     r0, #25
     it      eq
     moveq   r0, #0

while gcc generates this:

     ands    r0, r0, #256
     it      ne
     mvnne   r0, #25
     bx      lr

Scandalous really!

During ISel time, we can look for this particular pattern. One where we have a
"MOVCC" that uses the flag off of a CMPZ that itself is comparing an AND
instruction to 0. Something like this (greatly simplified):

  %r0 = ISD::AND ...
  ARMISD::CMPZ %r0, 0         @ sets [CPSR]
  %r0 = ARMISD::MOVCC 0, -26  @ reads [CPSR]

All we have to do is convert the "ISD::AND" into an "ARM::ANDS" that sets [CPSR]
when it's zero. The zero value will already be in the %r0 register and we only
need to change it if the AND wasn't zero. Easy!

llvm-svn: 112664
2010-08-31 22:41:22 +00:00
Bill Wendling
7532e3418e Use the existing T2I_bin_s_irs pattern instead of creating T2I_bin_sw_irs, which
is meant to do exactly the same thing. Thanks to Jim Grosbach for pointing this
out! :-)

llvm-svn: 112538
2010-08-30 22:05:23 +00:00
Jim Grosbach
674b25ce31 Make ARM add rN, sp, #imm instructions rematerializable. That's how the address of locals is calculated, so this should
help relieve register pressure a bit. Recalculating the local address is
almost always going to be better than spilling.

llvm-svn: 112503
2010-08-30 19:49:58 +00:00
Bill Wendling
c325a15569 Create Thumb2sI_cpsr and T2sI_cpsr. These new classes indicate that CPSR is the
optional modified register (instead of reg0). Along with r112461 it will make
sure that the optional define of CPSR is marked as "def" and will thus mark the
instructions using these classes (t2ANDS*) as setting the 's' flag.

llvm-svn: 112462
2010-08-30 01:47:35 +00:00
Bill Wendling
6d105ce757 - Add a parameter to T2I_bin_irs for those patterns which set the S bit.
- Create T2I_bin_sw_irs to be like T2I_bin_w_irs, except that it sets the S bit.

llvm-svn: 112399
2010-08-29 03:55:31 +00:00
Bill Wendling
8ad57ff92e Rename ANDflag to ANDS, which is less stupid.
llvm-svn: 112395
2010-08-29 03:06:09 +00:00
Bill Wendling
385ad1516f Create an ARMISD::AND node. This node is exactly like the "ARM::AND" node, but
it sets the CPSR register.

llvm-svn: 112393
2010-08-29 03:02:11 +00:00
Jim Grosbach
5b1ce460ec Restrict the register to tGPR to make sure the str instruction will be
encodable as a 16-bit wide instruction.

llvm-svn: 112195
2010-08-26 17:02:47 +00:00
Dan Gohman
b1020bb551 Revert r112176; it broke test/CodeGen/Thumb2/thumb2-cmn.ll.
llvm-svn: 112191
2010-08-26 15:50:25 +00:00
Bill Wendling
a125fb1689 There seems to be a (potential) hardware bug with the CMN instruction and
comparison with 0. These two pieces of code should give identical results:

  rsbs r1, r1, 0
  cmp  r0, r1
  mov  r0, #0
  it   ls
  mov  r0, #1

and:

  cmn  r0, r1
  mov  r0, #0
  it   ls
  mov  r0, #1

However, the CMN gives the *opposite* result when r1 is 0. This is because the
carry flag is set in the CMP case but not in the CMN case. In short, the CMP
instruction doesn't perform a truncate of the (logical) NOT of 0 plus the value
of r0 and the carry bit (because the "carry bit" parameter to AddWithCarry is
defined as 1 in this case, the carry flag will always be set when r0 >= 0). The
CMN instruction doesn't perform a NOT of 0 so there is never a "carry" when this
AddWithCarry is performed (because the "carry bit" parameter to AddWithCarry is
defined as 0).

The AddWithCarry in the CMP case seems to be relying upon the identity:

  ~x + 1 = -x

However when x is 0 and unsigned, this doesn't hold:

   x = 0
  ~x = 0xFFFF FFFF
  ~x + 1 = 0x1 0000 0000
  (-x = 0) != (0x1 0000 0000 = ~x + 1)

Therefore, we should disable *all* versions of CMN, especially when comparing
against zero, until we can limit when the CMN instruction is used (when we know
that the RHS is not 0) or when we have a hardware fix for this.

(See the ARM docs for the "AddWithCarry" pseudo-code.)

This is related to <rdar://problem/7569620>.

llvm-svn: 112176
2010-08-26 09:07:33 +00:00
Bill Wendling
fa85185486 Add the "isCompare" attribute to the defm instead of each individual instr.
llvm-svn: 111481
2010-08-19 00:05:48 +00:00
Jakob Stoklund Olesen
20dbe1681b Don't call tablegen'ed Predicate_* functions in the ARM target.
llvm-svn: 111277
2010-08-17 20:39:04 +00:00
Jim Grosbach
1d9631950f 80 column cleanup.
llvm-svn: 111266
2010-08-17 18:39:16 +00:00
Bob Wilson
e382fce916 Change ARM PKHTB and PKHBT instructions to use a shift_imm operand to avoid
printing "lsl #0".  This fixes the remaining parts of pr7792.  Make
corresponding changes for encoding/decoding these instructions.

llvm-svn: 111251
2010-08-17 17:23:19 +00:00
Bob Wilson
d662e8cd02 Generalize a pattern for PKHTB: an SRL of 16-31 bits will guarantee
that the high halfword is zero.  The shift need not be exactly 16 bits.

llvm-svn: 111196
2010-08-16 22:26:55 +00:00
Bob Wilson
985dab611d Rename sat_shift operand to shift_imm, in preparation for using it for other
instructions besides saturate instructions.  No functional changes.

llvm-svn: 111168
2010-08-16 18:27:34 +00:00
Bob Wilson
b1eb015fc8 T2I_rbin_irs rr variant is for disassembly only, so don't provide a pattern.
llvm-svn: 111068
2010-08-14 03:18:29 +00:00
Bob Wilson
92bf5a7425 Add a Thumb2 t2RSBrr instruction for disassembly only.
This fixes another part of PR7792.

llvm-svn: 111057
2010-08-13 23:24:25 +00:00
Bob Wilson
0883c6aae3 Move the Thumb2 SSAT and USAT optional shift operator out of the
instruction opcode.  This fixes part of PR7792.

llvm-svn: 111047
2010-08-13 21:48:10 +00:00
Evan Cheng
e67c4c3723 Really control isel of barrier instructions with cpu feature.
llvm-svn: 110787
2010-08-11 06:36:31 +00:00
Evan Cheng
5fca4ca5f9 - Add subtarget feature -mattr=+db which determines whether an ARM cpu has the
memory and synchronization barrier dmb and dsb instructions.
- Change instruction names to something more sensible (matching name of actual
  instructions).
- Added tests for memory barrier codegen.

llvm-svn: 110785
2010-08-11 06:22:01 +00:00
Daniel Dunbar
a77e3fc8d8 ARM: Quote $p in an asm string.
llvm-svn: 110780
2010-08-11 04:46:10 +00:00
Evan Cheng
966ed540a6 CBZ and CBNZ are implemented.
llvm-svn: 110745
2010-08-10 23:27:11 +00:00
Evan Cheng
784a286b92 Delete some unused instructions.
llvm-svn: 110710
2010-08-10 19:36:22 +00:00
Bill Wendling
39c49e3e17 Use the "isCompare" machine instruction attribute instead of calling the
relatively expensive comparison analyzer on each instruction. Also rename the
comparison analyzer method to something more in line with what it actually does.

This pass will eventually be folded into the Machine CSE pass.

llvm-svn: 110539
2010-08-08 05:04:59 +00:00
Bob Wilson
58c8a5da9e Move newlines before inline jumptables from the asm strings in .td files to
the jtblock_operand print methods.  This avoids extra newlines in the
disassembler's output.  PR7757.

llvm-svn: 109948
2010-07-31 06:28:10 +00:00
Jim Grosbach
1718345a30 Many Thumb2 instructions can reference the full ARM register set (i.e.,
have 4 bits per register in the operand encoding), but have undefined
behavior when the operand value is 13 or 15 (SP and PC, respectively).
The trivial coalescer in linear scan sometimes will merge a copy from
SP into a subsequent instruction which uses the copy, and if that
instruction cannot legally reference SP, we get bad code such as:
  mls r0,r9,r0,sp
instead of:
  mov r2, sp
  mls r0, r9, r0, r2

This patch adds a new register class for use by Thumb2 that excludes
the problematic registers (SP and PC) and is used instead of GPR
for those operands which cannot legally reference PC or SP. The
trivial coalescer explicitly requires that the register class
of the destination for the COPY instruction contain the source
register for the COPY to be considered for coalescing. This prevents
errant instructions like that above.

PR7499

llvm-svn: 109842
2010-07-30 02:41:01 +00:00
Nate Begeman
0b0f838c32 Add builtins for ssat/usat, similar to RealView's __ssat and __usat intrinsics.
llvm-svn: 109813
2010-07-29 22:48:09 +00:00
Nate Begeman
b24fa8b8ae Add intrinsics __builtin_arm_qadd & __builtin_arm_qsub to allow access to the QADD & QSUB instructions.
They behave identically to the __qadd & __qsub RealView instruction intrinsics.

llvm-svn: 109770
2010-07-29 17:56:55 +00:00
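A hedged usage sketch of the new builtins (names as given in the commit message; assumes an ARM target where QADD / QSUB are available):

  /* Saturating signed arithmetic via the new builtins; both take and
     return int, mirroring RealView's __qadd / __qsub. */
  int sat_add(int a, int b) { return __builtin_arm_qadd(a, b); }  /* -> QADD */
  int sat_sub(int a, int b) { return __builtin_arm_qsub(a, b); }  /* -> QSUB */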
Jim Grosbach
17bec0f609 Remove incorrect substitution pattern for UXTB16. It wrongly assumed the input shift was actually a rotate. rdar://8240138
llvm-svn: 109693
2010-07-28 23:17:45 +00:00
Jim Grosbach
30f1b06af3 Using BIC for immediates needs an extra bump for its complexity to get
instruction selection to prefer it when possible. rdar://7903972

llvm-svn: 108844
2010-07-20 16:07:04 +00:00
Jim Grosbach
749f4fca0a Add basic support to code-gen the ARM/Thumb2 bit-field insert (BFI) instruction
and a combine pattern to use it for setting a bit-field to a constant
value. More to come for non-constant stores.

llvm-svn: 108570
2010-07-16 23:05:05 +00:00
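A small C example of the kind of constant bit-field store the new combine targets; illustrative only, the struct layout is made up:

  struct Flags { unsigned mode : 3; unsigned count : 5; unsigned rest : 24; };

  /* Storing a constant into a bit-field is a candidate for a single BFI
     in place of separate mask / or instructions on the loaded word. */
  void set_mode(struct Flags *f) { f->mode = 5; }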
Jim Grosbach
e2d1ecbe70 Improve 64-bit subtraction of immediates when parts of the immediate can fit
in the literal field of an instruction. E.g.,
long long foo(long long a) {
  return a - 734439407618LL;
}

rdar://7038284

llvm-svn: 108339
2010-07-14 17:45:16 +00:00
Bob Wilson
f60d34bfad Add missing address register update to t2LDM_RET instruction.
Patch by Brian Lucas. PR7636.

llvm-svn: 108332
2010-07-14 16:02:13 +00:00
Evan Cheng
6349fa5ec4 PR7503: uxtb16 is not available for ARMv7-M. Patch by Brian G. Lucas.
llvm-svn: 107122
2010-06-29 05:38:36 +00:00
Eli Friedman
0698ec53f6 Always allow Thumb-2 SXTB, SXTH, UXTB, and UXTH. Fixes PR7324.
llvm-svn: 106770
2010-06-24 18:20:04 +00:00
Jim Grosbach
bc31f7a24b LEApcrelJT shouldn't be marked as neverHasSideEffects, as we don't want it
being moved around away from the jump table it references. rdar://8104340

llvm-svn: 106483
2010-06-21 21:27:27 +00:00
Evan Cheng
b5fadc47e0 Allow ARM if-converter to be run after post allocation scheduling.
- This fixed a number of bugs in if-converter, tail merging, and post-allocation
  scheduler. If-converter now runs branch folding / tail merging first to
  maximize if-conversion opportunities.
- Also changed the t2IT instruction slightly. It now defines the ITSTATE
  register which is read by instructions in the IT block.
- Added Thumb2 specific hazard recognizer to ensure the scheduler doesn't
  change the instruction ordering in the IT block (since IT mask has been
  finalized). It also ensures no other instructions can be scheduled between
  instructions in the IT block.

This is not yet enabled.

llvm-svn: 106344
2010-06-18 23:09:54 +00:00
Jim Grosbach
f3bd81ce11 Clean up 80 column violations. No functional change.
llvm-svn: 105350
2010-06-02 21:53:11 +00:00
Jim Grosbach
f4442c2ca2 Cosmetic cleanup. No functional change.
llvm-svn: 104974
2010-05-28 17:51:20 +00:00
Jim Grosbach
2eb2c2d257 make sure accesses to set up the jmpbuf don't get moved after it by the scheduler. Add a missing \n.
llvm-svn: 104967
2010-05-28 17:37:40 +00:00
Jim Grosbach
b004e2cf0f Update the saved stack pointer in the sjlj function context following either
an alloca() or an llvm.stackrestore(). rdar://8031573

llvm-svn: 104900
2010-05-27 23:49:24 +00:00
Jim Grosbach
95c228acb0 fix off-by-1 (insn) error in eh.sjlj.setjmp thumb code sequence.
llvm-svn: 104661
2010-05-26 01:22:21 +00:00
Bob Wilson
bcd0854609 Allow t2MOVsrl_flag and t2MOVsra_flag instructions to be predicated.
I don't know of any particular reason why that would be important, but
neither can I see any reason to disallow it.

llvm-svn: 104583
2010-05-25 04:51:47 +00:00
Bob Wilson
c8bea44d68 Fix up instruction classes for Thumb2 RSB instructions to be consistent with
Thumb2 ADD and SUB instructions: allow RSB instructions to be changed to set the
condition codes, and allow RSBS instructions to be predicated.

llvm-svn: 104582
2010-05-25 04:43:08 +00:00
Bob Wilson
b5c1a4be63 Allow Thumb2 MVN instructions to set condition codes. The immediate operand
version of t2MVN already allowed that, but not the register versions.

llvm-svn: 104570
2010-05-24 22:41:19 +00:00
Bob Wilson
c71b7c8c61 Thumb2 RSBS instructions were being printed without the 'S' suffix.
Fix it by changing the T2I_rbin_s_is multiclass to handle the CPSR
output and 'S' suffix in the same way as T2I_bin_s_irs.

llvm-svn: 104531
2010-05-24 18:44:06 +00:00
Evan Cheng
6f52107b12 t2LEApcrel and tLEApcrel are re-materializable. This makes it possible to hoist more loads during machine LICM.
llvm-svn: 104115
2010-05-19 07:28:01 +00:00
Evan Cheng
0aa58d5b69 Mark pattern-less mayLoad / mayStore instructions neverHasSideEffects. These do not have other un-modeled side effects.
llvm-svn: 104111
2010-05-19 06:07:03 +00:00
Evan Cheng
23fb523b44 Mark a few more pattern-less instructions with neverHasSideEffects. This is especially important on instructions like t2LEApcrel, which are prime candidates for machine LICM.
llvm-svn: 104102
2010-05-19 01:52:25 +00:00
Anton Korobeynikov
a63555c10d Chris said that the comment char should be escaped. Fix all the occurrences of "@" in *.td
llvm-svn: 103903
2010-05-16 09:15:36 +00:00
Jim Grosbach
e04cc6cb43 Cleanup of ARMv7M support. Move hardware divide and Thumb2 extract/pack
instructions to subtarget features and update tests to reflect.
PR5717.

llvm-svn: 103136
2010-05-05 23:44:43 +00:00
Jim Grosbach
3630aff780 Add initial support for ARMv7M subtarget and cortex-m3 cpu. Patch by
Jordy <snhjordy@gmail.com>.

Followup patches will add some tests and adjust to use Subtarget features
for the instructions.

llvm-svn: 103119
2010-05-05 20:44:35 +00:00
Bob Wilson
ef934eac9f Provide versions of the ARM eh_sjlj_setjmp instructions for non-VFP subtargets
such that the non-VFP versions have no implicit defs of VFP registers.
If any callee-saved VFP registers are marked as having been defined, the
prologue/epilogue code will try to save and restore them.
Radar 7770432.

llvm-svn: 100892
2010-04-09 20:41:18 +00:00
Bob Wilson
279818d473 Remove the writeback flag from ARM's address mode 4. Now that we have separate
instructions for ld/st with writeback, the flag is completely redundant.

llvm-svn: 98643
2010-03-16 17:46:45 +00:00
Bob Wilson
0e8a3d7a13 Change ARM ld/st multiple instructions to have variant instructions for
writebacks to the address register.  This gets rid of the hack that the
first register on the list was the magic writeback register operand.  There
was an implicit constraint that if that operand was not reg0 it had to match
the base register operand.  The post-RA scheduler's antidependency breaker
did not understand that constraint and sometimes changed one without the
other.  This also fixes Radar 7495976 and should help the verifier work
better for ARM code.

There are now new ld/st instructions with explicit writeback operands and explicit
constraints that tie those registers together.

llvm-svn: 98409
2010-03-13 01:08:20 +00:00
Johnny Chen
8d150046ba Set the (Format)F field of t2Int_MemBarrierV7 & t2Int_SyncBarrierV7 to ThumbFrm,
instead of Pseudo, which helps the Thumb decoder recognize them as Thumb instructions.

llvm-svn: 98285
2010-03-11 21:02:50 +00:00