mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-26 14:33:02 +02:00
Commit Graph

413 Commits

Author SHA1 Message Date
Tim Northover
b83dc53c3a Test commit.
llvm-svn: 155626
2012-04-26 08:24:07 +00:00
Jim Grosbach
86b5cd7421 ARM 'vuzp.32 Dd, Dm' is a pseudo-instruction.
While there is an encoding for it in VUZP, the result of that is undefined,
so we should avoid it. Define the instruction as a pseudo for VTRN.32
instead, as the ARM ARM indicates.

rdar://11222366

llvm-svn: 154511
2012-04-11 17:40:18 +00:00
Jim Grosbach
e54b48cd74 ARM 'vzip.32 Dd, Dm' is a pseudo-instruction.
While there is an encoding for it in VZIP, the result of that is undefined,
so we should avoid it. Define the instruction as a pseudo for VTRN.32
instead, as the ARM ARM indicates.

rdar://11221911

llvm-svn: 154505
2012-04-11 16:53:25 +00:00
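For two-element vectors (32-bit lanes in a 64-bit D register), zip, unzip, and
transpose all perform the same interleaving, which is why the two pseudos above can
map to VTRN.32. A minimal C++ sketch of the lane math (plain arrays standing in for
D registers; illustrative only, not the backend code):

    #include <array>
    #include <cassert>
    #include <cstdint>

    using D = std::array<uint32_t, 2>; // a D register holds two 32-bit lanes

    int main() {
      D d = {1, 2}, m = {3, 4};
      // vtrn.32: transpose the 2x2 matrix formed by the lanes of Dd and Dm.
      D trn_d = {d[0], m[0]}, trn_m = {d[1], m[1]};
      // vzip.32: low lanes into the first result, high lanes into the second.
      D zip_d = {d[0], m[0]}, zip_m = {d[1], m[1]};
      // vuzp.32: even-indexed lanes into the first result, odd-indexed into the second.
      D uzp_d = {d[0], m[0]}, uzp_m = {d[1], m[1]};
      // With only two lanes per register, all three operations coincide.
      assert(trn_d == zip_d && zip_d == uzp_d);
      assert(trn_m == zip_m && zip_m == uzp_m);
      return 0;
    }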
Jim Grosbach
a3eeee8a91 ARM refactor more NEON VLD/VST instructions to use composite physregs
Register pair VLD1/VLD2 all-lanes instructions. Kill off more of the
pseudos as a result.

llvm-svn: 152150
2012-03-06 22:01:44 +00:00
Jim Grosbach
fdfaed95ae ARM refactor away a bunch of VLD/VST pseudo instructions.
With the new composite physical registers to represent arbitrary pairs
of DPR registers, we don't need the pseudo-registers anymore. Get rid of
a bunch of them that use DPR register pairs and just use the real
instructions directly instead.

llvm-svn: 152045
2012-03-05 19:33:30 +00:00
Duncan Sands
a580df6438 Remove unused variable.
llvm-svn: 151251
2012-02-23 11:01:22 +00:00
Evan Cheng
d18a688213 Optimize a couple of common patterns involving conditional moves where the false
value is zero. Instead of a cmov + op, issue a conditional op. e.g.
    cmp   r9, r4
    mov   r4, #0
    moveq r4, #1 
    orr   lr, lr, r4

should be:
    cmp   r9, r4
    orreq lr, lr, #1

That is, optimize (or x, (cmov 0, y, cond)) to (or.cond x, y). Similarly extend
this to xor as well as (and x, (cmov -1, y, cond)) => (and.cond x, y).

It's possible to extend this to ADD and SUB but I don't think they are common.

rdar://8659097

llvm-svn: 151224
2012-02-23 01:19:06 +00:00
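For reference, a small hypothetical piece of C-level source that tends to produce this
select-of-zero pattern (not taken from the commit):

    #include <cstdint>

    // With the optimization, the flag update can become a single predicated
    // orreq after the compare, instead of materializing 0/1 in a scratch
    // register and then OR-ing it in.
    uint32_t set_flag_if_equal(uint32_t flags, uint32_t a, uint32_t b) {
      // (or flags, (select (a == b), 1, 0)) -> orreq flags, flags, #1
      return flags | (a == b ? 1u : 0u);
    }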
Craig Topper
11bcb12b5e Convert assert(0) to llvm_unreachable
llvm-svn: 149961
2012-02-07 02:50:20 +00:00
David Blaikie
06ecc99a56 More dead code removal (using -Wunreachable-code)
llvm-svn: 148578
2012-01-20 21:51:11 +00:00
Jim Grosbach
59537e1ce3 ARM: update VST2 pseudo-lowering for fixed vs. register update.
rdar://10663487

llvm-svn: 147876
2012-01-10 21:11:12 +00:00
Jim Grosbach
f7236d1084 ARM NEON assembly parsing for VLD2 to all lanes instructions.
llvm-svn: 147069
2011-12-21 19:40:55 +00:00
Jim Grosbach
2dac770227 ARM NEON refactor VST2 w/ writeback instructions.
In addition to improving the representation, this adds support for assembly
parsing of these instructions.

llvm-svn: 146588
2011-12-14 21:32:11 +00:00
Jim Grosbach
489e81da30 ARM assembly parsing and encoding for VLD2 with writeback.
Refactor the instructions into fixed writeback and register-stride
writeback variants to simplify the offset operand (no more optional
register operand using reg0). This is a simpler representation and allows
the assembly parser to more easily handle these instructions.

Add tests for the instruction variants now supported.

llvm-svn: 146278
2011-12-09 21:28:25 +00:00
Jim Grosbach
538759efa7 ARM assembly parsing and encoding for four-register VST1.
llvm-svn: 145450
2011-11-29 22:58:48 +00:00
Jim Grosbach
5418e87582 ARM assembly parsing and encoding for three-register VST1.
llvm-svn: 145442
2011-11-29 22:38:04 +00:00
Jim Grosbach
76dd8a9702 ARM VST1 w/ writeback assembly parsing and encoding.
llvm-svn: 143369
2011-10-31 21:50:31 +00:00
Jakob Stoklund Olesen
de21509dcd Also set addrmode6 alignment when align==size.
Previously, we were only setting the alignment bits on over-aligned
loads and stores.

llvm-svn: 143160
2011-10-27 22:39:16 +00:00
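A hedged sketch of the predicate change being described (the helper name below is
hypothetical; the real logic lives in the ARM load/store lowering and works on LLVM's
own types):

    // Decide whether to put an explicit alignment into the addrmode6 operand.
    // Before this change only over-aligned accesses (Align > Size) got the
    // alignment bits; now naturally aligned accesses (Align == Size) do too.
    unsigned getAddrMode6AlignHint(unsigned AccessSizeInBytes,
                                   unsigned KnownAlignInBytes) {
      if (KnownAlignInBytes >= AccessSizeInBytes)   // was: '>'
        return KnownAlignInBytes;                   // encode the alignment hint
      return 0;                                     // leave alignment unspecified
    }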
Jim Grosbach
67d4fb4bc0 ARM isel for vld1, opcode selection for register stride post-index pseudos.
llvm-svn: 143158
2011-10-27 22:25:42 +00:00
Jim Grosbach
4a6508dd4e ARM refactor am6offset usage for VLD1.
Split am6offset into fixed and register offset variants so the instruction
encodings are explicit rather than relying on a magic reg0 marker.
Needed to be able to parse these.

llvm-svn: 142853
2011-10-24 21:45:13 +00:00
Eli Friedman
f43710f4a8 Fix misc warnings. Patch by Joe Abbey.
llvm-svn: 142332
2011-10-18 03:17:34 +00:00
Bill Wendling
7121342ad5 Reapply r141365 now that PR11107 is fixed.
llvm-svn: 141591
2011-10-10 22:59:55 +00:00
Bill Wendling
7cba44defc Revert r141365. It was causing MultiSource/Benchmarks/MiBench/consumer-lame to
hang, and possibly SPEC/CINT2006/464_h264ref.

llvm-svn: 141560
2011-10-10 18:27:30 +00:00
Anton Korobeynikov
b0534f0113 Disable the ABS optimization for the Thumb1 target; we don't have the necessary instructions there.
llvm-svn: 141481
2011-10-08 08:38:45 +00:00
Anton Korobeynikov
0944a4c5cc Peephole optimization for ABS on ARM.
Patch by Ana Pazos!

llvm-svn: 141365
2011-10-07 16:15:08 +00:00
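For context, the C-level idiom this peephole targets (a hedged example; the pass
itself works on the generated machine instructions, and it is disabled for Thumb1 in
the commit above because the necessary predicated instructions aren't available
there):

    // abs() of a 32-bit int. On ARM the peephole allows this to be lowered to
    // something like
    //     cmp   r0, #0
    //     rsblt r0, r0, #0
    // instead of the longer branchless shift/xor/sub expansion.
    int iabs(int x) {
      return x < 0 ? -x : x;
    }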
Cameron Zwarich
5fb7c6643e Always merge profitable shifts on A9, not just when they have a single use.
llvm-svn: 141248
2011-10-05 23:39:02 +00:00
Cameron Zwarich
cc5f846d58 Remove a check from the ARM shifted operand isel helper methods, which was blocking
the merging of an lsl #2 that has multiple uses on A9. This shift is free, so there is
no problem merging it in multiple places. Other unprofitable shifts will not be
merged.

llvm-svn: 141247
2011-10-05 23:38:50 +00:00
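A hypothetical C++ snippet where the same lsl #2 (the *4 scaling of an int index) has
more than one use; since the shift is free on A9 when folded into the addressing mode,
it is now merged at every use instead of being kept as a separate instruction:

    // Both loads can use the register-offset-with-shift addressing mode,
    //     ldr r?, [rA, rI, lsl #2]
    //     ldr r?, [rB, rI, lsl #2]
    // even though the scaled index feeds more than one memory access.
    int sum_at(const int *a, const int *b, int i) {
      return a[i] + b[i];
    }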
Cameron Zwarich
1f890576e3 Add braces around something that throws me for a loop.
llvm-svn: 141173
2011-10-05 08:59:10 +00:00
Cameron Zwarich
0cb2071d35 There is no point in setting out-parameters for a ComplexPattern function when
it returns false, at least as far as I could tell by reading the code.

llvm-svn: 141172
2011-10-05 08:59:05 +00:00
Jakob Stoklund Olesen
ca6877343b Also match negative offsets for addrmode3 and addrmode5.
Math is hard, and isScaledConstantInRange() always returned false for
negative constants.  It was doing unsigned division of negative numbers
before casting back to signed.

llvm-svn: 140425
2011-09-23 22:10:33 +00:00
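A minimal sketch of the pitfall (hypothetical helpers, not the LLVM source): dividing
a negative value after it has been converted to unsigned yields a huge quotient, so a
correct scaled-range check has to divide in a signed type.

    #include <cassert>

    // Buggy: -4 converted to unsigned is a huge number, the quotient is nowhere
    // near -1, and every negative offset is rejected.
    bool isScaledInRangeBuggy(int C, unsigned Scale, int Lo, int Hi) {
      unsigned Q = static_cast<unsigned>(C) / Scale;
      return static_cast<int>(Q) >= Lo && static_cast<int>(Q) <= Hi;
    }

    // Fixed: divide as signed values, then check the range.
    bool isScaledInRangeFixed(int C, unsigned Scale, int Lo, int Hi) {
      int Q = C / static_cast<int>(Scale);
      return Q >= Lo && Q <= Hi;
    }

    int main() {
      assert(!isScaledInRangeBuggy(-4, 4, -255, 255)); // negative offset rejected
      assert(isScaledInRangeFixed(-4, 4, -255, 255));  // now accepted
      return 0;
    }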
Jim Grosbach
74f96e7f3c Tidy up a few 80 column violations.
llvm-svn: 139636
2011-09-13 20:30:37 +00:00
Owen Anderson
de17548520 When performing instruction selection for LDR_PRE_IMM/LDRB_PRE_IMM, we still need to preserve the sign of the index. This fixes miscompilations of Quicksort in the nightly testsuite, and hopefully others as well.
<rdar://problem/10046188>

llvm-svn: 138885
2011-08-31 20:00:11 +00:00
Eli Friedman
5d3814e0c4 64-bit atomic cmpxchg for ARM.
llvm-svn: 138868
2011-08-31 17:52:22 +00:00
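For context, a C++ usage example (not from the commit) that exercises this path: a
64-bit compare-and-exchange, which on ARMv7 is lowered to a loop built from
ldrexd/strexd operating on a consecutive register pair.

    #include <atomic>
    #include <cstdint>

    // std::atomic<uint64_t>::compare_exchange_strong maps to the 64-bit
    // cmpxchg the commit adds support for.
    bool bump_if_expected(std::atomic<uint64_t> &v, uint64_t expected) {
      return v.compare_exchange_strong(expected, expected + 1);
    }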
Eli Friedman
d71c865ae0 Some 64-bit atomic operations on ARM. 64-bit cmpxchg coming next.
llvm-svn: 138845
2011-08-31 00:31:29 +00:00
Owen Anderson
44c24b13a3 addrmode_imm12 and addrmode2_offset encode their immediate values differently. Update the manual instruction selection code that was encoding them the addrmode2 way even though LDR_PRE_IMM/LDRB_PRE_IMM had switched to addrmode_imm12. Should fix a number of nightly test failures.
llvm-svn: 138758
2011-08-29 20:16:50 +00:00
Owen Anderson
e7857867d6 Fix ARM codegen breakage caused by r138653.
llvm-svn: 138657
2011-08-26 21:12:37 +00:00
Owen Anderson
af51fd9868 invalid-LDR_PRE-arm.txt was already passing, but for the wrong reasons. We were failing to specify enough fixed bits of LDR_PRE/LDRB_PRE, resulting in decoding conflicts. Separate them into immediate vs. register versions, allowing us to specify the necessary fixed bits. This in turn results in the test being decoded properly, and being rejected as UNPREDICTABLE rather than a hard failure.
llvm-svn: 138653
2011-08-26 20:43:14 +00:00
Jim Grosbach
b33129ebad Thumb1 ADD/SUB SP instructions are predicable in Thumb2 mode.
Add the predicate operand to the instructions. Update the back end
accordingly where the instructions are used. Restrict the SP operands
to actually only be SP, as otherwise these break assembly parsing for the
normal instruction variants.

llvm-svn: 138445
2011-08-24 17:46:13 +00:00
Jim Grosbach
182acb7018 ARM refactor indexed store instructions.
Refactor STR[B] pre and post indexed instructions to use addressing modes for
memory operands, which is necessary for assembly parsing and is more consistent
with the rest of the memory instruction definitions. Make some incremental
progress on refactoring away the mega-operand addrmode2 along the way, which
is nice.

llvm-svn: 136978
2011-08-05 20:35:44 +00:00
Jim Grosbach
c5cd3228c4 ARM parsing and encoding of SBFX and UBFX.
Encode the width operand as it encodes in the instruction, which simplifies
the disassembler and the encoder, by using the imm1_32 operand def. Add a
diagnostic for the context-sensitive constraint that the width must be in
the range [1,32-lsb].

llvm-svn: 136264
2011-07-27 21:09:25 +00:00
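A small hedged sketch of the context-sensitive constraint (a hypothetical checker, not
the parser's diagnostic code): for the 32-bit SBFX/UBFX, the extracted field must stay
inside the register, so the width is only valid when it lies in [1, 32-lsb].

    // SBFX/UBFX Rd, Rn, #lsb, #width
    // The instruction encodes width as width-1 (hence the imm1_32-style
    // operand), and lsb + width must not run past bit 31.
    bool isValidBitfieldExtract(unsigned lsb, unsigned width) {
      return lsb < 32 && width >= 1 && width <= 32 - lsb;
    }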
Owen Anderson
cc4c746c65 Split am2offset into register addend and immediate addend forms, necessary for allowing the fixed-length disassembler to distinguish between SBFX and STR_PRE.
llvm-svn: 136141
2011-07-26 20:54:26 +00:00
Owen Anderson
42c1cb92e5 Fix test failures caused by my so_reg refactoring.
llvm-svn: 135785
2011-07-22 18:30:30 +00:00
Owen Anderson
e34471d064 Get rid of the extraneous GPR operand on so_reg_imm operands, which in turn necessitates a lot of changes to related bits.
llvm-svn: 135722
2011-07-21 23:38:37 +00:00
Owen Anderson
2e26de13d2 Split up the ARM so_reg ComplexPattern into so_reg_reg and so_reg_imm, allowing us to distinguish the encodings that use shifted registers from those that use shifted immediates. This is necessary to allow the fixed-length decoder to distinguish things like BICS vs LDRH.
llvm-svn: 135693
2011-07-21 18:54:16 +00:00
Evan Cheng
bff5f78cb5 Sink ARMMCExpr and ARMAddressingModes into MC layer. First step to separate ARM MC code from target.
llvm-svn: 135636
2011-07-20 23:34:39 +00:00
Evan Cheng
4a169be530 - Rename TargetInstrDesc, TargetOperandInfo to MCInstrDesc and MCOperandInfo and
sink them into the MC layer.
- Added MCInstrInfo, which captures the tablegen-generated static data. Changed
TargetInstrInfo so it's based off MCInstrInfo.

llvm-svn: 134021
2011-06-28 19:10:37 +00:00
Owen Anderson
c79aec8247 Change the REG_SEQUENCE SDNode to take an explicit register class ID as its first operand. This operand is lowered away by the time we reach MachineInstrs, so the actual register-allocation handling of them doesn't need to change.
This is intended to support using REG_SEQUENCE SDNode's with type MVT::untyped, and is part of the long road to eliminating some of the hacks we currently use to support register pairs and other strange constraints, particularly on ARM NEON.

llvm-svn: 133178
2011-06-16 18:17:13 +00:00
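A hedged sketch of what building such a node looks like in an ISel DAG pass (the
helper name is hypothetical and exact signatures vary across LLVM revisions): the
first operand is a target constant carrying the register class ID, followed by
value/subregister-index pairs.

    // Form a pair of D registers as one wider value with REG_SEQUENCE.
    // Operand 0 is the register class ID; then (value, subreg index) pairs.
    // Assumes the usual ARM ISel includes (SelectionDAG, ARM register info).
    SDNode *formDRegPair(SelectionDAG *CurDAG, DebugLoc dl,
                         SDValue V0, SDValue V1) {
      SDValue RC   = CurDAG->getTargetConstant(ARM::QPRRegClassID, MVT::i32);
      SDValue Sub0 = CurDAG->getTargetConstant(ARM::dsub_0, MVT::i32);
      SDValue Sub1 = CurDAG->getTargetConstant(ARM::dsub_1, MVT::i32);
      const SDValue Ops[] = { RC, V0, Sub0, V1, Sub1 };
      return CurDAG->getMachineNode(TargetOpcode::REG_SEQUENCE, dl,
                                    MVT::v4i32, Ops, 5);
    }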
Bruno Cardoso Lopes
6d5e369a10 Add support for ARM ldrexd/strexd intrinsics. They both use i32 register pairs
to load/store i64 values. Since there's no current support to explicitly
declare such restrictions, implement it by using specific hardcoded register
pairs during isel.

llvm-svn: 132248
2011-05-28 04:07:29 +00:00
Eli Friedman
996092fea3 Zap a couple now-unused functions.
llvm-svn: 130557
2011-04-29 22:56:48 +00:00
Bob Wilson
3daeb462cb This patch combines several changes from Evan Cheng for rdar://8659675.
Making use of VFP / NEON floating point multiply-accumulate / subtraction is
difficult on current ARM implementations for a few reasons.
1. Even though a single vmla has latency that is one cycle shorter than a pair
   of vmul + vadd, a RAW hazard during the first few cycles (4? on Cortex-A8) can
   cause an additional pipeline stall. So it's frequently better to simply codegen
   vmul + vadd.
2. A vmla followed by a vmul, vmadd, or vsub causes the second fp instruction to
   stall for 4 cycles. We need to schedule them apart.
3. A vmla followed by a vmla is a special case. Obviously, issuing back-to-back RAW
   vmla + vmla is very bad. But this isn't ideal either:
     vmul
     vadd
     vmla
   Instead, we want to expand the second vmla:
     vmla
     vmul
     vadd
   Even with the 4 cycle vmul stall, the second sequence is still 2 cycles
   faster.

Up to now, isel has simply avoided codegen'ing fp vmla / vmls. This works well enough,
but it isn't the optimal solution. This patch attempts to make it possible to
use vmla / vmls in cases where it is profitable.

A. Add missing isel predicates which cause vmla to be codegen'ed.
B. Make sure the fmul in (fadd (fmul)) has a single use. We don't want to
   compute a fmul and a fmla.
C. Add additional isel checks for vmla, avoid cases where vmla is feeding into
   fp instructions (except for the #3 exceptional case).
D. Add ARM hazard recognizer to model the vmla / vmls hazards.
E. Add a special pre-regalloc case to expand vmla / vmls when it's likely the
   vmla / vmls will trigger one of the special hazards.

Enable these fp vmlx codegen changes for Cortex-A9.

llvm-svn: 129775
2011-04-19 18:11:57 +00:00
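A rough sketch of the hazard check described in item D (purely illustrative; the real
hazard recognizer works on MachineInstrs and the Cortex-A9 itinerary, and the names
below are made up):

    // Simplified stand-ins for the queries a hazard recognizer would make.
    struct FpInstr {
      bool isFpMLx;              // vmla / vmls family
      bool isFpMulAddSub;        // vmul / vadd / vsub family
      unsigned DefReg;           // register written
      unsigned UseReg0, UseReg1; // registers read
    };

    // True if issuing Curr right after Prev hits one of the hazards above.
    bool hasVMLxHazard(const FpInstr &Prev, const FpInstr &Curr) {
      if (!Prev.isFpMLx)
        return false;
      // Point 2: any fp mul/add/sub right after a vmla stalls for ~4 cycles.
      if (Curr.isFpMulAddSub)
        return true;
      // Point 3: back-to-back vmla + vmla is very bad when the second one
      // reads the result of the first (a RAW dependence).
      if (Curr.isFpMLx)
        return Curr.UseReg0 == Prev.DefReg || Curr.UseReg1 == Prev.DefReg;
      return false;
    }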
Evan Cheng
56c151cba9 Do not lose mem_operands while lowering VLD / VST intrinsics.
llvm-svn: 129738
2011-04-19 00:04:03 +00:00