mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-21 20:12:56 +02:00
Commit Graph

1450 Commits

Author SHA1 Message Date
Evan Cheng
86dd733bc8 Some enhancements for memcpy / memset inline expansion.
1. Teach it to use overlapping unaligned load / store to copy / set the trailing
   bytes. e.g. on x86, use two pairs of movups / movaps for 17-31 byte copies.
2. Use f64 for memcpy / memset on targets where i64 is not legal but f64 is. e.g.
   x86 and ARM.
3. When memcpy from a constant string, do *not* replace the load with a constant
   if it's not possible to materialize an integer immediate with a single
   instruction (this required a new target hook: TLI.isIntImmLegal()).
4. Use unaligned load / stores more aggressively if target hooks indicate they
   are "fast".
5. Update ARM target hooks to use unaligned load / stores. e.g. vld1.8 / vst1.8.
   Also increase the threshold to something reasonable (8 for memset, 4 pairs
   for memcpy).

This significantly improves Dhrystone, up to 50% on ARM iOS devices.
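
A minimal C++ sketch of point 1, assuming a 17-31 byte copy (illustrative only, not LLVM source): the trailing bytes are covered by a second unaligned 16-byte access that overlaps the first, which is what lets two load/store pairs cover the whole range.

#include <cstddef>
#include <cstring>

// Copy 17-31 bytes with two overlapping unaligned 16-byte accesses,
// mirroring the inline expansion described above.
void copy17to31(char *dst, const char *src, std::size_t len) {
  // Assumes 17 <= len <= 31.
  char head[16], tail[16];
  std::memcpy(head, src, 16);             // first 16 bytes (e.g. movups)
  std::memcpy(tail, src + len - 16, 16);  // last 16 bytes, overlapping the head
  std::memcpy(dst, head, 16);
  std::memcpy(dst + len - 16, tail, 16);
}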

rdar://12760078

llvm-svn: 169791
2012-12-10 23:21:26 +00:00
Tim Northover
2dbbd221f7 Added Mapping Symbols for ARM ELF
Before this patch, when you objdump an LLVM-compiled file, objdump tried to
decode data-in-code sections as if they were code.  This patch adds the missing
Mapping Symbols, as defined by "ELF for the ARM Architecture" (ARM IHI 0044D).

Patch based on work by Greg Fitzgerald.

llvm-svn: 169609
2012-12-07 16:50:23 +00:00
Dmitri Gribenko
d81733f67d Fix typos in CHECK lines.
Patch by Alexander Zinenko.

llvm-svn: 169547
2012-12-06 21:24:47 +00:00
Evan Cheng
f3d075e766 Properly fix the test.
llvm-svn: 169464
2012-12-06 02:29:29 +00:00
NAKAMURA Takumi
c472c7e5f7 llvm/test/CodeGen/ARM/extload-knownzero.ll: Try to unbreak by adding -O0. I guess Chad expects fast-isel here.
llvm-svn: 169463
2012-12-06 02:22:58 +00:00
Chad Rosier
c7e4217273 [arm fast-isel] Make the fast-isel implementation of memcpy respect alignment.
rdar://12821569

llvm-svn: 169460
2012-12-06 01:34:31 +00:00
Evan Cheng
c1db873871 Let targets provide hooks that compute known zeros and ones for any_extend
and extloads. If they are implemented as zero-extend, or implicitly
zero-extend, then this can enable more demanded-bits optimizations. e.g.

define void @foo(i16* %ptr, i32 %a) nounwind {
entry:
  %tmp1 = icmp ult i32 %a, 100
  br i1 %tmp1, label %bb1, label %bb2
bb1:
  %tmp2 = load i16* %ptr, align 2
  br label %bb2
bb2:
  %tmp3 = phi i16 [ 0, %entry ], [ %tmp2, %bb1 ]
  %cmp = icmp ult i16 %tmp3, 24
  br i1 %cmp, label %bb3, label %exit
bb3:
  call void @bar() nounwind
  br label %exit
exit:
  ret void
}

This compiles to the following before:
        push    {lr}
        mov     r2, #0
        cmp     r1, #99
        bhi     LBB0_2
@ BB#1:                                 @ %bb1
        ldrh    r2, [r0]
LBB0_2:                                 @ %bb2
        uxth    r0, r2
        cmp     r0, #23
        bhi     LBB0_4
@ BB#3:                                 @ %bb3
        bl      _bar
LBB0_4:                                 @ %exit
        pop     {lr}
        bx      lr

The uxth is not needed since ldrh implicitly zero-extends the high bits. With
this change it is eliminated.

rdar://12771555

llvm-svn: 169459
2012-12-06 01:28:01 +00:00
Evan Cheng
cae5a79f6b ARM custom lower ctpop for vector types. Patch by Pete Couperus.
llvm-svn: 169325
2012-12-04 22:41:50 +00:00
Bill Wendling
32e4544198 Use the 'count' attribute to calculate the upper bound of an array.
The count attribute is more accurate with regard to the size of an array. It
also obviates the upper bound attribute in the subrange. We can also better
handle an unbounded array by setting the count to -1 instead of the lower bound
to 1 and upper bound to 0.

llvm-svn: 169312
2012-12-04 21:34:03 +00:00
Bill Wendling
8ade948576 Add a 'count' field to the DWARF subrange.
The count field is necessary because there isn't a difference between the 'lo'
and 'hi' attributes for a one-element array and a zero-element array. When the
count is '0', we know that this is a zero-element array. When it's >=1, then
it's a normal constant sized array. When it's -1, then the array is unbounded.
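
For illustration, hypothetical C++ declarations and the 'count' value the subrange would carry under this scheme (the values show the intent, not verified DWARF output; the zero-length array is a GNU extension):

int fixed[4];           // count =  4  (normal constant-sized array)
int empty[0];           // count =  0  (zero-element array; GNU extension)
extern int unbounded[]; // count = -1  (unbounded array)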

llvm-svn: 169218
2012-12-04 06:20:49 +00:00
Manman Ren
40ba054405 Stack Alignment: when creating stack objects in MachineFrameInfo, make sure
the alignment is clamped to TargetFrameLowering.getStackAlignment if the target
does not support stack realignment or the option "realign-stack" is off.

Without this clamping, a miscompile can occur if the address is treated as
aligned and an 'add' is replaced with an 'or' in DAGCombine.

Added a bool StackRealignable to TargetFrameLowering to check whether stack
realignment is implemented for the target. Also added a bool RealignOption
to MachineFrameInfo to check whether the option "realign-stack" is on.
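
A hedged sketch of the clamping rule described above (the names are illustrative, not the exact MachineFrameInfo API):

#include <algorithm>

// If the target cannot realign the stack, or "realign-stack" is off, never
// record a stack object alignment larger than the target stack alignment.
unsigned clampStackObjectAlignment(unsigned RequestedAlign, unsigned StackAlign,
                                   bool StackRealignable, bool RealignOption) {
  bool CanRealign = StackRealignable && RealignOption;
  return CanRealign ? RequestedAlign : std::min(RequestedAlign, StackAlign);
}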

rdar://12713765

llvm-svn: 169197
2012-12-04 00:52:33 +00:00
Jakob Stoklund Olesen
4aa22e2c8d Simplify REG_SEQUENCE lowering.
The TwoAddressInstructionPass takes the machine code out of SSA form by
expanding REG_SEQUENCE instructions into copies. It is no longer
necessary to rewrite the registers used by a REG_SEQUENCE instruction
because the new coalescer algorithm can do it now.

REG_SEQUENCE is just converted to a sequence of sub-register copies now.

llvm-svn: 169067
2012-12-01 01:06:44 +00:00
Sebastian Pop
f372d2334f Codegen failure for vmull with small vectors
Codegen was failing with an assertion because of unexpected vector
operands when legalizing the selection DAG for a MUL instruction.

The asserting code was legalizing multiplies for vectors of size 128
bits. It uses a custom lowering to try and detect cases where it can
use a VMULL instruction instead of a VMOVL + VMUL.  The code was
looking for input operands to the MUL that had been sign or zero
extended. If it found the extended operands it would drop the
sign/zero extension and use the original vector size as input to a
VMULL instruction.

The code assumed that the original input vector was 64 bits so that
after dropping the extension it would fit directly into a D register
and could be used as an operand of a VMULL instruction. The input
code that triggered the failure used a vector of <4 x i8> that was
sign extended to <4 x i32>. It was not safe to drop the sign
extension in this case because the original vector is only 32 bits
wide. The fix is to insert a sign extension for the vector to reach
the required 64 bit size. In this particular example, the vector would
need to be sign extended to a <4 x i16>.

llvm-svn: 169024
2012-11-30 19:08:04 +00:00
Bill Wendling
18531926d1 Handle the situation where CodeGenPrepare removes a reference to a BB that has
the last invoke instruction in the function. This also removes the last landing
pad in the function. This is fine, but with SjLj EH code, we've already placed a
bunch of code in the 'entry' block, which expects the landing pad to stick
around.

When we get to the situation where CGP has removed the last landing pad, go
ahead and nuke the SjLj instructions from the 'entry' block.
<rdar://problem/12721258>

llvm-svn: 168930
2012-11-29 19:38:06 +00:00
Silviu Baranga
d93d64a5fd Added atomic 64-bit min/max/umin/umax intrinsics support in the ARM backend.
llvm-svn: 168886
2012-11-29 14:41:25 +00:00
Jakob Stoklund Olesen
9dbb7d3582 Avoid rewriting instructions twice.
This could cause miscompilations in targets where sub-register
composition is not always idempotent (ARM).

<rdar://problem/12758887>

llvm-svn: 168837
2012-11-29 00:26:11 +00:00
Benjamin Kramer
bd65c85dc1 ARM: Implement CanLowerReturn so large vectors get expanded into sret.
Fixes PR14337.

llvm-svn: 168809
2012-11-28 20:55:10 +00:00
Chad Rosier
9a90d62b0b Add -verify-machineinstrs to these fast-isel test cases.
llvm-svn: 168723
2012-11-27 20:49:56 +00:00
Manman Ren
c45c0a304b CSE: allow PerformTrivialCoalescing to check copies across basic block
boundaries.

Given the following case:
BB0
  %vreg1<def> = SUBrr %vreg0, %vreg7
  %vreg2<def> = COPY %vreg7
BB1
  %vreg10<def> = SUBrr %vreg0, %vreg2
We should be able to CSE between SUBrr in BB0 and SUBrr in BB1.

rdar://12462006

llvm-svn: 168717
2012-11-27 18:58:41 +00:00
Ulrich Weigand
d899cee68f Never use .lcomm on platforms where it does not accept an alignment
argument.  Instead, use a pair of .local and .comm directives.

This avoids spurious differences between binaries built by the
integrated assembler vs. those built by the external assembler,
since the external assembler may impose alignment requirements
on .lcomm symbols where the integrated assembler does not.
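
A rough C++ sketch of the directive choice (illustrative only, not the actual MC streamer code):

#include <cstdint>
#include <ostream>
#include <string>

// When .lcomm cannot carry an alignment argument, fall back to a
// .local/.comm pair so both assemblers produce the same layout.
void emitLocalCommon(std::ostream &OS, const std::string &Sym, uint64_t Size,
                     unsigned ByteAlign, bool LCOMMTakesAlignment) {
  if (LCOMMTakesAlignment) {
    OS << "\t.lcomm\t" << Sym << "," << Size << "," << ByteAlign << "\n";
  } else {
    OS << "\t.local\t" << Sym << "\n";
    OS << "\t.comm\t" << Sym << "," << Size << "," << ByteAlign << "\n";
  }
}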

llvm-svn: 168704
2012-11-27 16:11:16 +00:00
Chad Rosier
0001e972e0 Extend test case for r168657.
llvm-svn: 168658
2012-11-27 01:10:48 +00:00
Tim Northover
3556e52a02 Fix physical register liveness calculations:
+ Take account of clobbers
+ Give outputs priority over inputs since they happen later.

llvm-svn: 168360
2012-11-20 09:56:11 +00:00
Anton Korobeynikov
7a285e97e2 Factor out type info emission into separate routine.
It turned out that ARM wants different layout of type infos.
This is yet another patch in an attempt to fix PR7187.

llvm-svn: 168325
2012-11-19 21:06:26 +00:00
Eli Friedman
d7496f6688 Mark FP_EXTEND from v2f32 to v2f64 as "expand" for ARM NEON. Patch by Pete Couperus.
llvm-svn: 168240
2012-11-17 01:52:46 +00:00
Chad Rosier
7aa7c0d952 [fast-isel] Add the -verify-machineinstrs to these test cases. The remaining
test cases require fixes to fast-isel before the verifier can be enabled.
Part of rdar://12594152

llvm-svn: 168233
2012-11-17 00:42:06 +00:00
Weiming Zhao
85dce59506 Remove hard-coded registers in ARM ldrexd and strexd instructions
This patch replaces the hard-coded GPR pair [R0, R1] of
Intrinsic::arm_ldrexd and [R2, R3] of Intrinsic::arm_strexd with an
even/odd GPRPair register class,
similar to the lowering of the atomic_64 operations.

llvm-svn: 168207
2012-11-16 21:55:34 +00:00
Anton Korobeynikov
3cd85d754d Make sure FABS on v2f32 and v4f32 is legal on ARM NEON
This fixes PR14359

llvm-svn: 168200
2012-11-16 21:15:20 +00:00
Eli Friedman
79932a2f77 Mark FP_ROUND for converting NEON v2f64 to v2f32 as expand. Add a missing
case to vector legalization so this actually works.

Patch by Pete Couperus.  Fixes PR12540.

llvm-svn: 168107
2012-11-15 22:44:27 +00:00
Nadav Rotem
b339c55cd3 The code pattern "imm0_255_neg" is used to check whether an immediate value is a small negative number.
This patch changes the definition of negative from -0..-255 to -1..-255. I am changing this because of
a bug we had in some of the patterns that assumed that a "subs" of zero does not set the carry flag.
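
A small C++ model of the flag behaviour the buggy patterns missed (illustrative; on ARM, a subtract sets C to "no borrow", so a subs by zero always sets the carry flag):

#include <cassert>
#include <cstdint>

// After SUBS rn, #imm, ARM sets C = 1 iff no borrow occurred.
bool carryAfterSubs(uint32_t rn, uint32_t imm) { return rn >= imm; }

int main() {
  assert(carryAfterSubs(5, 0));   // subs by 0 sets the carry flag...
  assert(carryAfterSubs(0, 0));   // ...even when the register value is 0
  assert(!carryAfterSubs(3, 7));  // a borrow clears it
  return 0;
}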

rdar://12028498

llvm-svn: 167963
2012-11-14 19:39:15 +00:00
Anton Korobeynikov
c8df249529 Fix really stupid ARM EHABI info generation bug: we should not emit
eh table and handler data if there are no landing pads in the function.
Patch by Logan Chien with some cleanups from me.

llvm-svn: 167945
2012-11-14 19:13:30 +00:00
Anton Korobeynikov
3edf77ac04 Use TARGET2 relocation for TType references on ARM.
Do some cleanup of the code while here.

Inspired by patch by Logan Chien!

llvm-svn: 167904
2012-11-14 01:47:00 +00:00
Andrew Trick
d8c621a864 Cleanup the main RegisterCoalescer loop.
Block priorities still apply outside loops.

llvm-svn: 167793
2012-11-13 00:34:44 +00:00
Andrew Trick
ad4b55b3d8 misched: Infrastructure for weak DAG edges.
This adds support for weak DAG edges to the general scheduling
infrastructure in preparation for MachineScheduler support for
heuristics based on weak edges.

llvm-svn: 167738
2012-11-12 19:28:57 +00:00
Evan Cheng
ebe241fb9d Disable the Thumb no-return call optimization:
mov lr, pc
b.w _foo

The "mov" instruction doesn't set bit zero to one, it's putting incorrect
value in lr. It messes up backtraces.

rdar://12663632

llvm-svn: 167657
2012-11-10 02:09:05 +00:00
Amara Emerson
f7a46cedbc Recommit modified r167540.
Improve ARM build attribute emission for architecture types.
This also changes the default architecture emitted for a generic CPU to "v7".

llvm-svn: 167574
2012-11-08 09:51:45 +00:00
Quentin Colombet
522698f693 Vext Lowering was missing opportunities
llvm-svn: 167318
2012-11-02 21:32:17 +00:00
Quentin Colombet
dde058d386 Change ForceSizeOpt attribute into MinSize attribute
llvm-svn: 167020
2012-10-30 16:32:52 +00:00
Jakob Stoklund Olesen
05cec5db28 Completely disallow partial copies in adjustCopiesBackFrom().
Partial copies can show up even when CoalescerPair.isPartial() returns
false. For example:

   %vreg24:dsub_0<def> = COPY %vreg31:dsub_0; QPR:%vreg24,%vreg31

Such a partial-partial copy is not good enough for the transformation
adjustCopiesBackFrom() needs to do.

llvm-svn: 166944
2012-10-29 17:51:52 +00:00
Quentin Colombet
bcd2bc3437 [code size][ARM] Emit regular call instructions instead of the move, branch sequence
llvm-svn: 166854
2012-10-27 01:10:17 +00:00
Jakob Stoklund Olesen
4b4db880a3 Revert r163298 "Optimize codegen for VSETLNi{8,16,32} operating on Q registers."
Keep the integer_insertelement test case, the new coalescer can handle
this kind of lane insertion without help from pseudo-instructions.

llvm-svn: 166835
2012-10-26 23:39:46 +00:00
Evan Cheng
f97472cdf6 Fix a miscompilation caused by a typo. When turning an adde with a negative value
into an sbc with a positive number, the immediate should be complemented, not
negated. Also added a missing pattern for ARM codegen.
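
A quick C++ check of the identity behind the fix (illustrative): since -y == ~y + 1, an adde of a negative value y is equivalent to an sbc of the complement ~y, not of the negation -y.

#include <cassert>
#include <cstdint>

int main() {
  uint32_t x = 0x12345678u;
  uint32_t y = 0u - 37u;  // the negative addend, i.e. -37
  for (uint32_t carry = 0; carry <= 1; ++carry) {
    uint32_t adde = x + y + carry;         // ADDE x, y
    uint32_t sbc = x - ~y - (1u - carry);  // SBC x, ~y (complement, not -y)
    assert(adde == sbc);
  }
  return 0;
}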

rdar://12559385

llvm-svn: 166613
2012-10-24 19:53:01 +00:00
Bill Wendling
e97df2d337 When a block ends in an indirect branch, add its successors to the machine basic block.
The CFG of the machine function needs to know that the targets of the indirect
branch are successors to the indirect branch.
<rdar://problem/12529625>

llvm-svn: 166448
2012-10-22 23:30:04 +00:00
Shuxin Yang
3ad15929e7 This patch fixes radar://8426430. It is about LLVM support for __builtin_debugtrap(),
which is supposed to consistently raise SIGTRAP across all systems. In contrast,
__builtin_trap() behaves differently on different systems, e.g. it raises SIGTRAP on ARM and
SIGILL on X86. The purpose of __builtin_debugtrap() is to consistently provide "trap"
functionality while preserving compatibility with gcc's __builtin_trap().

  The X86 backend is already able to handle debugtrap(). This patch:
  1) makes the front-end recognize "__builtin_debugtrap()" (embodied in the one-line change to Clang).
  2) In the DAG legalization phase, by default, "debugtrap" will be replaced with "trap", which
     makes __builtin_debugtrap() "available" to all existing ports without the hassle of
     changing their code.
  3) If a trap function is specified (via -trap-func=xyz to llc), both __builtin_debugtrap() and
     __builtin_trap() will be expanded into a call to the specified trap function.
     This behavior may need to change in the future.

  The provided test case makes sure 2) and 3) work for the ARM port, and we
already have a test case for x86.
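
A minimal usage sketch of the builtin this wires up (assuming a Clang with the front-end change; the function name is hypothetical):

// __builtin_debugtrap() is meant to raise SIGTRAP consistently, while
// __builtin_trap() may raise SIGILL on x86.
void checkInvariant(bool ok) {
  if (!ok)
    __builtin_debugtrap();  // by default legalized to a plain "trap" on
                            // targets without a dedicated debugtrap lowering
}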

llvm-svn: 166300
2012-10-19 20:11:16 +00:00
Stepan Dyatkovskiy
ece4c2a9c1 ARM:
Removed the extra stack frame object for fixed byval arguments;
the VarArgsStyleRegisters invocation was reworked due to some improper usage in
the past. PR14099 also demonstrates this.

llvm-svn: 166273
2012-10-19 08:23:06 +00:00
Jakob Stoklund Olesen
1cfbe5c549 Revert r166046 "Switch back to the old coalescer for now to fix the 32 bit bit"
A fix for PR14098, including the test case is in the next commit.

llvm-svn: 166067
2012-10-16 22:51:55 +00:00
Rafael Espindola
2f08719190 Switch back to the old coalescer for now to fix the 32-bit
llvm+clang+compiler-rt bootstrap.

llvm-svn: 166046
2012-10-16 19:34:06 +00:00
Stepan Dyatkovskiy
09c6b0a273 Issue:
The stack is formed improperly for long structures passed as byval arguments in
EABI mode.

Referring to the AAPCS, we find the following statements:

A: "If the argument requires double-word alignment (8-byte), the NCRN (Next
Core Register Number) is rounded up to the next even register number." (5.5
Parameter Passing, Stage C, C.3).

B: "The alignment of an aggregate shall be the alignment of its most-aligned
component." (4.3 Composite Types, 4.3.1 Aggregates).

So if we have a structure with doubles (9 double fields) and 3 unused core
registers (r1, r2, r3): the caller should use only the r2 and r3 registers.
Currently the r1, r2, r3 set is used, which is invalid.
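
For illustration, a hypothetical C++ signature matching the case above (the types are an assumption, chosen only to show the register split):

// 'a' occupies r0, leaving r1-r3 free. Because the byval struct needs 8-byte
// alignment, AAPCS rounds the next core register number up to an even value,
// so the aggregate starts in r2/r3 and r1 is skipped.
struct Big { double d[9]; };  // most-aligned member is 8 bytes

void callee(int a, Big b);    // declaration only, for illustration

void caller(const Big &value) {
  callee(1, value);           // byval copy split between r2/r3 and the stack
}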

The callee VA routine should also use only the r2 and r3 registers. All is ok
here; this behaviour is achieved by rounding up the SP address with ADD+BFC operations.

Fix:
The main fix is in ARMTargetLowering::HandleByVal: if AAPCS mode and
8-byte alignment are detected, we skip (waste) the odd register.

P.S.:
I also improved the LDRB_POST_IMM regression test, since the ldrb instruction
will no longer be generated by that regression test after this patch.

llvm-svn: 166018
2012-10-16 07:16:47 +00:00
Jim Grosbach
8df1c73056 ARM: v1i64 and v2i64 VBSL intrinsic support.
rdar://12502028

llvm-svn: 165981
2012-10-15 21:23:40 +00:00
Silviu Baranga
e3e0e84559 Fixed PR13938: the ARM backend was crashing because it couldn't select a VDUPLANE node with a vector input size different from the output size. This was because the BUILD_VECTOR lowering code didn't check that the size of the input vector was correct for using VDUPLANE.
llvm-svn: 165929
2012-10-15 09:41:32 +00:00
Jakob Stoklund Olesen
4aac24404c Drop <def,dead> flags when merging into an unused lane.
The new coalescer can merge a dead def into an unused lane of an
otherwise live vector register.

Clear the <dead> flag when that happens since the flag refers to the
full virtual register which is still live after the partial dead def.

This fixes PR14079.

llvm-svn: 165877
2012-10-13 17:26:47 +00:00