mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-11-25 04:02:41 +01:00
Commit Graph

3101 Commits

Author SHA1 Message Date
Chris Lattner
8c5a5c9094 teach codegen to turn trunc(zextload) into load when possible.
This doesn't occur much at all; it only seems to be formed when the
trunc optimization kicks in due to phase ordering.  In that case it
saves a few bytes on x86-32.
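A rough C-level sketch of the pattern being folded (illustrative only; the
function name and types are invented, and as noted above the DAG form mostly
arises from other combines rather than directly from source like this):

#include <stdint.h>

/* A zero-extending byte load whose result is immediately truncated back
   to i8 only ever uses the low 8 bits, so codegen can emit a plain byte
   load instead of a zextload followed by a trunc. */
uint8_t read_low_byte(const uint8_t *p) {
    uint32_t wide = *p;      /* zextload: i8 widened to i32 */
    return (uint8_t)wide;    /* trunc: keep only the low byte */
}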

llvm-svn: 101350
2010-04-15 05:40:59 +00:00
Chris Lattner
510d19e597 add a simple dag combine to replace trivial shl+lshr with
and.  This happens with the store->load narrowing stuff.
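A minimal C sketch of the identity behind the combine (illustrative only; in
practice the shift pair is produced by the narrowing code at the DAG level
rather than written in source):

#include <stdint.h>

/* Shifting left and then logically right by the same amount just masks
   off the high bits, so the shl+lshr pair can become a single 'and'. */
uint32_t keep_low_byte(uint32_t x) {
    return (x << 24) >> 24;   /* equivalent to x & 0xFF */
}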

llvm-svn: 101348
2010-04-15 05:28:43 +00:00
Chris Lattner
3282f3d34f Implement rdar://7860110 (also in target/readme.txt) narrowing
a load/or/and/store sequence into a narrower store when it is
safe.  Daniel tells me that clang will start producing this sort
of thing with bitfields, and this does trigger a few dozen times
on 176.gcc produced by llvm-gcc even now.

This compiles code like CodeGen/X86/2009-05-28-DAGCombineCrash.ll 
into:

        movl    %eax, 36(%rdi)

instead of:

        movl    $4294967295, %eax       ## imm = 0xFFFFFFFF
        andq    32(%rdi), %rax
        shlq    $32, %rcx
        addq    %rax, %rcx
        movq    %rcx, 32(%rdi)

and each of the testcases into a single store.  Each of them used
to compile into craziness like this:

_test4:
	movl	$65535, %eax            ## imm = 0xFFFF
	andl	(%rdi), %eax
	shll	$16, %esi
	addl	%eax, %esi
	movl	%esi, (%rdi)
	ret
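For reference, a hedged C sketch of the kind of bitfield store mentioned
above; the struct layout and names are invented, but a 32-bit, 32-bit-aligned
field like this is the sort of case the old load/and/shift/or/store sequence
gets narrowed from:

#include <stdint.h>

struct pair {
    uint64_t low  : 32;
    uint64_t high : 32;
};

/* Writing one field used to be lowered as a 64-bit load/and/or/store of
   the whole word; since the field is 32 bits wide and 32-bit aligned, a
   single 32-bit store suffices. */
void set_high(struct pair *p, uint32_t v) {
    p->high = v;
}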

llvm-svn: 101343
2010-04-15 04:48:01 +00:00
Chris Lattner
553267e9cc further tweak this to do something useful.
llvm-svn: 101341
2010-04-15 04:31:42 +00:00
Chris Lattner
a4b3756baf remove undef control flow.
llvm-svn: 101340
2010-04-15 04:30:19 +00:00
Jakob Stoklund Olesen
7343a90490 Remove unneeded types from test.
llvm-svn: 101286
2010-04-14 20:56:09 +00:00
Bob Wilson
7b19d89e3a Don't custom lower bit converts to ARM VMOVRRD or VMOVDRR when the operand
does not have a legal type.  The legalizer does not know how to handle those
nodes.  Radar 7854640.

llvm-svn: 101282
2010-04-14 20:45:23 +00:00
Evan Cheng
172f2f9e2d Add test for post-ra machine licm.
llvm-svn: 101182
2010-04-13 22:10:03 +00:00
Bob Wilson
526e615ff9 Handle a v2f64 formal parameter that is split between registers and memory
such that the entire second half is in memory.  Radar 7855014.

llvm-svn: 101181
2010-04-13 22:03:22 +00:00
Evan Cheng
6ffb1ed4fb Fix test on non-x86 hosts.
llvm-svn: 101163
2010-04-13 18:54:04 +00:00
Evan Cheng
b8861dcb04 Re-apply 101075 and fix it properly. Just reuse the debug info of the branch instruction being optimized. There is no need to decrement the iterator (--I), which can dereference off the start of the BB.
llvm-svn: 101162
2010-04-13 18:50:27 +00:00
Eric Christopher
330ca0c937 Temporarily revert r101075, it's causing invalid iterator assertions
in a nightly tester.

llvm-svn: 101158
2010-04-13 18:37:58 +00:00
Chris Lattner
dabcd9738c add llvm codegen support for -ffunction-sections and -fdata-sections,
patch by Sylvere Teissier!

llvm-svn: 101106
2010-04-13 00:36:43 +00:00
Evan Cheng
ec21a36774 Use .set expression for x86 pic jump table reference to reduce assembly relocation. rdar://7738756
llvm-svn: 101085
2010-04-12 23:07:17 +00:00
Bill Wendling
5a56f7fc20 Third time's a charm...
llvm-svn: 101081
2010-04-12 22:43:21 +00:00
Bill Wendling
e1bf74de52 Genericize the label test.
llvm-svn: 101079
2010-04-12 22:40:37 +00:00
Bill Wendling
fd58812c95 Correct test to test what I mean it to test.
llvm-svn: 101077
2010-04-12 22:25:42 +00:00
Bill Wendling
1f2e71928c Micro-optimization:
If we have this situation:

    jCC  L1
    jmp  L2
L1:
  ...
L2:
  ...

We can get a small performance boost by emitting this instead:

    jnCC L2
L1:
  ...
L2:
  ...

This testcase shows an example of this:

float func(float x, float y) {
    double product = (double)x * y;
    if (product == 0.0)
        return product;
    return product - 1.0;
}

llvm-svn: 101075
2010-04-12 22:19:57 +00:00
Evan Cheng
90788354c9 Enable post regalloc machine licm by default.
llvm-svn: 101023
2010-04-12 06:25:28 +00:00
Benjamin Kramer
f040734da3 Make sure this test tests something.
llvm-svn: 100879
2010-04-09 19:03:31 +00:00
Bob Wilson
ee7665078a Add a testcase for svn r100568.
llvm-svn: 100876
2010-04-09 18:29:29 +00:00
Chris Lattner
5408e8a62b On SPU, variables in the .bss section that are allocated with the .lcomm directive are not aligned on 16-byte boundaries. This causes misaligned loads, as the generated assembly assumes this "default" alignment.
This patch disables .lcomm in favour of '.local .comm'.

Patch by Kalle Raisklia!

llvm-svn: 100875
2010-04-09 18:27:03 +00:00
Dan Gohman
a451b859f9 Merge a few fast-isel tests.
llvm-svn: 100860
2010-04-09 15:03:55 +00:00
Evan Cheng
619f1b8a94 Coalescer should not delete copy instructions whose defs are partially dead. e.g.
%RDI<def,dead> = MOV64rr %RAX<kill>, %EDI<imp-def>

llvm-svn: 100804
2010-04-08 20:02:37 +00:00
Evan Cheng
3fa0b6fb03 Avoid using f64 to lower memcpy from constant string. It's cheaper to use i32 stores of immediates.
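A hedged illustration (names and contents invented) of the kind of copy this
affects:

#include <string.h>

/* A short memcpy from a constant string is expanded inline; a couple of
   32-bit immediate stores is cheaper than bouncing the bytes through an
   f64 load/store pair. */
void init_tag(char *dst) {
    memcpy(dst, "LLVMFTW", 8);   /* 7 characters plus the NUL, 8 bytes */
}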
llvm-svn: 100751
2010-04-08 07:37:57 +00:00
Dan Gohman
0a77dd29de When expanding expressions which are using post-inc mode for multiple loops,
ensure that the expansion is dominated by the increments of those loops.

llvm-svn: 100748
2010-04-08 05:57:57 +00:00
Chris Lattner
23334439e9 add newlines at the end of files.
llvm-svn: 100705
2010-04-07 22:53:17 +00:00
Dan Gohman
b5210c934f Generalize IVUsers to track arbitrary expressions rather than expressions
explicitly split into stride-and-offset pairs. Also, add the
ability to track multiple post-increment loops on the same expression.

This refines the concept of "normalizing" SCEV expressions used for
post-increment uses, and introduces a dedicated utility routine for
normalizing and denormalizing expressions.

This fixes the expansion of expressions which are post-increment users
of more than one loop at a time. More broadly, this takes LSR another
step closer to being able to reason about more than one loop at a time.

llvm-svn: 100699
2010-04-07 22:27:08 +00:00
Dale Johannesen
4cdb545401 Split big test into multiple directories to cater to
those who don't build all targets.

llvm-svn: 100688
2010-04-07 20:43:35 +00:00
Chris Lattner
367cbc160c This has a PR!
llvm-svn: 100637
2010-04-07 18:04:56 +00:00
Chris Lattner
60e5f55565 fix a latent bug my inline asm stuff exposed:
MachineOperand::isIdenticalTo wasn't handling metadata operands.

llvm-svn: 100636
2010-04-07 18:03:19 +00:00
Sanjiv Gupta
b3557c497a Remove XFAIL for vg_leak as the leaks are fixed by 100601.
llvm-svn: 100612
2010-04-07 07:06:48 +00:00
Jakob Stoklund Olesen
370a3e553f Don't try to collapse DomainValues onto an incompatible SSE domain.
This fixes the Bullet regression on i386/nocona.

llvm-svn: 100553
2010-04-06 19:48:56 +00:00
Evan Cheng
7d177beb1d Add nounwind.
llvm-svn: 100482
2010-04-05 22:30:05 +00:00
Dan Gohman
2aff0055c1 Don't do code sinking on unreachable blocks. It's unprofitable and hazardous.
llvm-svn: 100455
2010-04-05 19:17:22 +00:00
Chris Lattner
779e051357 resolve a fixme.
llvm-svn: 100346
2010-04-04 19:28:59 +00:00
Evan Cheng
5d825988d0 Correctly lower memset / memcpy of undef. It should be a nop. PR6767.
llvm-svn: 100208
2010-04-02 19:36:14 +00:00
Dan Gohman
7051600e3f Revert the recent alignment changes. They're broken for -Os because,
in particular, they end up aligning strings at 16-byte boundaries, and
there's no way for GlobalOpt to check OptForSize.

llvm-svn: 100172
2010-04-02 03:04:37 +00:00
Evan Cheng
921fc2c77b After trivial coalescing, the MI being visited may have become a copy. Avoid adding it to CSE hash table since copies aren't being considered for CSE and they may be deleted.
rdar://7819990

llvm-svn: 100170
2010-04-02 02:21:24 +00:00
Dan Gohman
ec731a99fa Remove this initializer so that the optimizer doesn't convert
unaligned loads into aligned loads.

llvm-svn: 100166
2010-04-02 01:26:13 +00:00
Dan Gohman
d14fe2a5cd Update this test for the new preferred alignment heuristics.
llvm-svn: 100165
2010-04-02 01:24:08 +00:00
Evan Cheng
9508456bb6 In 64-bit mode, use i64 to lower memcpy / memset instead of f64.
llvm-svn: 100137
2010-04-01 20:27:45 +00:00
Evan Cheng
8728924812 - Avoid using floating point stores to implement memset unless the value is zero.
- Do not try to infer GV alignment unless its type is sized. It's not possible to infer alignment if it has an opaque type.

llvm-svn: 100118
2010-04-01 18:19:11 +00:00
Evan Cheng
202b6327c4 Add -mcpu to memcpy / memset tests to ensure they behave the same on all hosts / targets.
llvm-svn: 100101
2010-04-01 08:25:26 +00:00
Evan Cheng
562bb43207 Fix sdisel memcpy, memset, memmove lowering:
1. Make it possible to lower with floating point loads and stores.
2. Avoid unaligned loads / stores unless they're fast on the target.
3. Fix a memcpy lowering logic bug related to when to optimize a
   load from a constant string into a constant.
4. Adjust x86 memcpy lowering threshold to make it more sane.
5. Fix x86 target hook so it uses vector and floating point memory
   ops more effectively.
rdar://7774704
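A hedged usage sketch (struct and size invented) of the kind of small,
fixed-size copy this lowering path expands inline:

#include <string.h>

struct blob { char bytes[24]; };

/* Small fixed-size copies like this are expanded inline by selection DAG
   lowering; the fixes above let that expansion use FP/vector memory ops
   where profitable and avoid unaligned accesses unless the target says
   they are fast. */
void copy_blob(struct blob *dst, const struct blob *src) {
    memcpy(dst, src, sizeof *src);
}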

llvm-svn: 100090
2010-04-01 06:04:33 +00:00
Jakob Stoklund Olesen
58296f9543 Replace V_SET0 with variants for each SSE execution domain.
llvm-svn: 99975
2010-03-31 00:40:13 +00:00
Jakob Stoklund Olesen
13a7a0adff Fix typo. Thank you, valgrind.
llvm-svn: 99974
2010-03-31 00:40:08 +00:00
Jakob Stoklund Olesen
c6213a8bc0 Not all platforms start symbols with _
llvm-svn: 99959
2010-03-30 23:12:48 +00:00
Jakob Stoklund Olesen
a027a6d4f2 Enable -sse-domain-fix by default. Now with tests!
llvm-svn: 99954
2010-03-30 22:47:00 +00:00
Eric Christopher
4c3a3208e3 Remove the pmulld intrinsic and autoupgrade it as a vector multiply.
Rewrite the pmulld patterns, and make sure that they fold loads of
arguments into the instruction.

llvm-svn: 99910
2010-03-30 18:49:01 +00:00