mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-11-01 00:12:50 +01:00
Commit Graph

1907 Commits

Author SHA1 Message Date
Dan Gohman
eaa3ee65bd When converting a test to a cmp to fold a load, use the cmp that has an
8-bit immediate field rather than one with a wider immediate field.

llvm-svn: 104064
2010-05-18 21:42:03 +00:00
Daniel Dunbar
8c20c162fe MC/X86: Implement custom lowering to make sure we match things like
  X86::ADC32ri $0, %eax
to
  X86::ADC32i32 $0

llvm-svn: 104030
2010-05-18 17:22:24 +00:00
Dale Johannesen
b71d6a4150 Removing as part of previous reversion.
llvm-svn: 103915
2010-05-16 20:19:40 +00:00
Dale Johannesen
cf2d4b9f91 Revert 103911; it broke a test that expects bitconvert
<1xi64> -> i64 to work in MMX registers on hosts where -no-sse
is the default (not mine).  The right thing is
to accept this and make i64->f64 conversions go through memory,
but I don't have time right now.

llvm-svn: 103914
2010-05-16 20:19:04 +00:00
Dale Johannesen
15dce10b5a Make x86-64 64-bit bitconvert work when SSE is not available.
(This worked as of about 6 months ago and I didn't track down
exactly what broke it; I think this fix is appropriate.)

llvm-svn: 103911
2010-05-16 18:22:38 +00:00
Anton Korobeynikov
925a32ae37 Add support for thiscall calling convention.
Patch by Charles Davis and Steven Watanabe!

llvm-svn: 103902
2010-05-16 09:08:45 +00:00
Jakob Stoklund Olesen
3eac02b22f Simplify the handling of physreg defs and uses in RegAllocFast.
This adds extra security against using clobbered physregs, and it adds kill
markers to physreg uses.

llvm-svn: 103784
2010-05-14 18:03:25 +00:00
Jakob Stoklund Olesen
d99818256c Take allocation hints from copy instructions to/from physregs.
This causes way more identity copies to be generated, ripe for coalescing.

llvm-svn: 103686
2010-05-13 00:19:43 +00:00
Jakob Stoklund Olesen
7d1323d9a5 Make sure to add kill flags to the last use of a virtreg when it is redefined.
The X86 floating point stack pass and others depend on good kill flags.

llvm-svn: 103635
2010-05-12 18:46:03 +00:00
Jakob Stoklund Olesen
6976c543cd Enable a bunch more -regalloc=fast tests
llvm-svn: 103531
2010-05-12 00:11:24 +00:00
Jakob Stoklund Olesen
99d5d74fb0 One more -regalloc=fast test
llvm-svn: 103509
2010-05-11 20:51:07 +00:00
Jakob Stoklund Olesen
e27902ac68 Simplify the tracking of used physregs to a bulk bitor followed by a transitive
closure after allocating all blocks.

Add a few more test cases for -regalloc=fast.

llvm-svn: 103500
2010-05-11 20:30:28 +00:00
Jakob Stoklund Olesen
442e38c4de Mostly rewrite RegAllocFast.
Sorry for the big change. The path leading up to this patch had some TableGen
changes that I didn't want to commit before I knew they were useful. They
weren't, and this version does not need them.

The fast register allocator now does no liveness calculations. Instead it relies
on kill flags provided by isel. (Currently those kill flags are also ignored due
to isel bugs). The allocation algorithm is supposed to work with any subset of
valid kill flags. More kill flags simply means fewer spills inserted.

Registers are allocated from a working set that contains no aliases. That means
most allocations can be done directly without expensive alias checks. When the
working set runs out of registers we do the full alias check to find new free
registers.

llvm-svn: 103488
2010-05-11 18:54:45 +00:00
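
As a rough illustration of the allocation strategy described in this commit (a minimal C++ sketch with hypothetical names, not the actual RegAllocFast code), the fast path hands out registers from an alias-free working set with no alias checks, and only when that set runs dry does it do the expensive alias scan to find new free registers:

#include <cstdint>
#include <optional>
#include <vector>

// Hypothetical sketch: registers are small integers, and Aliases[r] lists the
// registers that overlap r (e.g. AX overlaps EAX). Not LLVM's real API.
using Reg = unsigned;

struct FastAllocSketch {
  std::vector<Reg> WorkingSet;      // alias-free pool of candidate registers
  std::vector<bool> UsedOrAliased;  // indexed by register number

  // Fast path: take a register straight from the working set, no alias checks.
  std::optional<Reg> allocFast() {
    while (!WorkingSet.empty()) {
      Reg R = WorkingSet.back();
      WorkingSet.pop_back();
      if (!UsedOrAliased[R])
        return R;
    }
    return std::nullopt;
  }

  // Slow path: the working set ran dry, so scan all registers and do the
  // full alias check to refill the pool with genuinely free registers.
  void refillWithAliasCheck(const std::vector<std::vector<Reg>> &Aliases) {
    for (Reg R = 0; R != UsedOrAliased.size(); ++R) {
      if (UsedOrAliased[R])
        continue;
      bool Clobbered = false;
      for (Reg A : Aliases[R])
        if (UsedOrAliased[A]) { Clobbered = true; break; }
      if (!Clobbered)
        WorkingSet.push_back(R);
    }
  }
};
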
Evan Cheng
df350445c6 Be careful with operand promotion. For a binary operation, the source operands may be the same. PR7018. rdar://7939869.
llvm-svn: 103419
2010-05-10 19:03:57 +00:00
Bill Wendling
787de8fe38 Readd testcase.
llvm-svn: 103335
2010-05-08 04:47:54 +00:00
Dan Gohman
4ca74b9c6c When pruning candidate formulae out of an LSRUse, update the
LSRUse's Regs set after all pruning is done, rather than trying
to do it on the fly, which can produce an incomplete result.

This fixes a case where heuristic pruning was stripping all
formulae from a use, which led the solver to enter an infinite
loop.

Also, add a few asserts to diagnose this kind of situation.

llvm-svn: 103328
2010-05-07 23:36:59 +00:00
Bill Wendling
4241941b52 Remove. Don't XFAIL.
llvm-svn: 103321
2010-05-07 23:09:17 +00:00
Bill Wendling
3846ec7889 Temporarily revert r101984.
llvm-svn: 103314
2010-05-07 22:45:36 +00:00
Dale Johannesen
1ee37ac5d4 Fix PR 7087, and probably other things, by extending
getConstantFP to accept the two supported long double
target types.  This was not the original intent, but
there are other places that assume this works and it's
easy enough to do.

llvm-svn: 103299
2010-05-07 21:35:53 +00:00
Duncan Sands
ed2ef5a987 Correct some bogus target triples.
llvm-svn: 103265
2010-05-07 17:03:48 +00:00
Nick Lewycky
3e5720a898 Revert r103133 and add testcase from PR7066.
llvm-svn: 103233
2010-05-07 01:45:38 +00:00
Dan Gohman
f863ad3dc5 Disable the new unknown-location code for now. It causes a major
increase in the debug line info section, and it's causing
regressions in a gdb testsuite.

llvm-svn: 103226
2010-05-07 01:08:53 +00:00
Dan Gohman
497e752655 Add a DebugLoc argument to TargetInstrInfo::copyRegToReg, so that it
doesn't have to guess.

llvm-svn: 103194
2010-05-06 20:33:48 +00:00
Dan Gohman
3190e5c292 Add a testcase for r103135, explicitly representing unknown
locations in debug line info.

llvm-svn: 103189
2010-05-06 17:49:17 +00:00
Chris Lattner
014a954e3d Fix PR7054 - Assertion `Symbol->isUndefined() && "Cannot define a symbol twice!"' failed.
Users can write broken code that emits the same label twice with asm renaming;
detect this and emit a fatal backend error instead of aborting.

llvm-svn: 103140
2010-05-06 00:05:37 +00:00
Jakob Stoklund Olesen
2e5d12acfa Fix PR6520. An earlyclobber physreg must not be allocated to anything else.
llvm-svn: 103133
2010-05-05 23:07:41 +00:00
Jakob Stoklund Olesen
51ab2653d5 Check that subregisters don't have independent values in RemoveCopyByCommutingDef().
This fixes PR6941.

llvm-svn: 102970
2010-05-03 22:40:32 +00:00
Dan Gohman
c024fd43c9 Fix tests to use fadd, fsub, and fmul, instead of add, sub, and mul,
when the type is floating-point.

llvm-svn: 102969
2010-05-03 22:36:46 +00:00
Dan Gohman
15cb983f55 Fix a bug which prevented tail merging of return instructions in
beneficial cases. See the changes in test/CodeGen/X86/tail-opts.ll and
test/CodeGen/ARM/ifcvt2.ll for details.

The fix is to change HashEndOfMBB to hash at most one instruction,
instead of trying to apply heuristics about when it will be profitable to
consider more than one instruction. The regular tail-merging heuristics
are already prepared to handle the same cases, and they're more precise.

Also, make test/CodeGen/ARM/ifcvt5.ll and
test/CodeGen/Thumb2/thumb2-branch.ll slightly more complex so that they
continue to test what they're intended to test.

And, this eliminates the problem in
test/CodeGen/Thumb2/2009-10-15-ITBlockBranch.ll, the testcase from
PR5204. Update it accordingly.

llvm-svn: 102907
2010-05-03 14:35:47 +00:00
Duncan Sands
153ad3b903 Remove the -enable-sjlj-eh option, which doesn't do anything.
Remove the -enable-eh option which is only used by the JIT,
and replace it with -jit-enable-eh.

llvm-svn: 102865
2010-05-02 15:36:26 +00:00
Bill Wendling
7b6527b94b Test failing too much on too many platforms.
llvm-svn: 102812
2010-05-01 00:12:33 +00:00
Bill Wendling
a348ff69b8 Maybe it needs sse2?
llvm-svn: 102802
2010-04-30 23:19:29 +00:00
Bill Wendling
d1c5f11f72 Force 64-bit.
llvm-svn: 102800
2010-04-30 22:45:20 +00:00
Bill Wendling
95a4929ac7 EXTRACT_VECTOR_ELT of an INSERT_VECTOR_ELT may have the same index, but the
indexes could be of a different value type, or might not even use the same SDNode
for the constant (weird, I know). Compare the actual values instead of the
pointers.

llvm-svn: 102791
2010-04-30 22:19:17 +00:00
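
As a rough sketch of the pitfall this commit describes (hypothetical C++ types, not the actual SelectionDAG API): two constant nodes can carry the same index while being distinct objects, possibly with different value types, so comparing node pointers misses the match while comparing the underlying values does not.

#include <cassert>
#include <cstdint>

// Hypothetical stand-in for a constant node carrying a vector index.
struct ConstNode {
  uint64_t Value; // the index value itself
  unsigned Bits;  // width of the value type, e.g. 32 vs 64
};

// Pointer identity misses equal indices held by distinct nodes.
bool sameIndexByPointer(const ConstNode *A, const ConstNode *B) { return A == B; }

// Comparing the actual values finds the match regardless of node identity.
bool sameIndexByValue(const ConstNode *A, const ConstNode *B) {
  return A->Value == B->Value;
}

int main() {
  ConstNode I32{2, 32}, I64{2, 64}; // same index 2, different value types and nodes
  assert(!sameIndexByPointer(&I32, &I64));
  assert(sameIndexByValue(&I32, &I64));
  return 0;
}
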
Jakob Stoklund Olesen
fac584ed9e The local register allocator has to spill dirty callee saved registers before a
call that might throw. The landing pad assumes that all registers are in stack
slots.

We used to spill those dirty CSRs after the call, and the stack slots would be
wrong when arriving at the landing pad.

llvm-svn: 102770
2010-04-30 21:19:29 +00:00
Evan Cheng
fc86d7fbdc Fix test.
llvm-svn: 102694
2010-04-30 06:00:56 +00:00
Evan Cheng
9303b11e47 Another sibcall bug. If caller and callee calling conventions differ, then it's only safe to do a tail call if the results are returned in the same way.
llvm-svn: 102683
2010-04-30 01:12:32 +00:00
Jakob Stoklund Olesen
01254ea96d Reject really weird coalescer case when trying to merge identical subregisters
of different register classes. e.g.

  %reg1048:3<def> = EXTRACT_SUBREG %RAX<kill>, 3

Where %reg1048 is a GR32 register. This is not impossible to handle, but it is
pretty hard and very rare.

This should unbreak the dragonegg builder.

llvm-svn: 102672
2010-04-29 23:47:46 +00:00
Evan Cheng
da89f22f3d Load folding tail call should not use ebp / rbp after it's popped. PEI
should use esp / rsp to reference the frame instead.

llvm-svn: 102596
2010-04-29 05:08:22 +00:00
Chris Lattner
9867c1a075 Rework global alignment computation again. Now we round up the
alignment of globals to the preferred alignment, but only when
there is no section specified on the global (by far the common
case).

llvm-svn: 102515
2010-04-28 19:58:07 +00:00
Evan Cheng
d4fe387eb8 Enable i16 to i32 promotion by default.
llvm-svn: 102493
2010-04-28 08:30:49 +00:00
Evan Cheng
08e5f737d2 Update tests.
llvm-svn: 102487
2010-04-28 01:53:13 +00:00
Devang Patel
570e9d53a7 Emit debug info for byval parameters.
llvm-svn: 102486
2010-04-28 01:39:28 +00:00
Evan Cheng
2aaefc6167 Do not count kill, implicit_def instructions as printed instructions.
llvm-svn: 102453
2010-04-27 19:38:45 +00:00
Chris Lattner
a9c1328501 round zero-byte .zerofill directives up to 1 byte. This
should fix some "g++.dg-struct-layout-1" failures, 
rdar://7886017

llvm-svn: 102421
2010-04-27 07:41:44 +00:00
Chris Lattner
9292bad5f5 on darwin empty functions need to codegen into something of non-zero length,
otherwise labels get incorrectly merged.  We handled this by emitting a 
".byte 0", but this isn't correct on thumb/arm targets where the text segment
needs to be a multiple of 2/4 bytes.  Handle this by emitting a noop.  This
is more gross than it should be because arm/ppc are not fully mc'ized yet.

This fixes rdar://7908505

llvm-svn: 102400
2010-04-26 23:37:21 +00:00
Dan Gohman
40561dd0ba When checking whether the special handling for an addrec increment which
doesn't dominate the header is needed, don't check whether the increment
expression has computable loop evolution. While the operands of an
addrec are required to be loop-invariant, they're not required to 
dominate any part of the loop. This fixes PR6914.

llvm-svn: 102389
2010-04-26 21:46:36 +00:00
Chris Lattner
4854eab087 fix PR6921 a different way. Instead of increasing the
alignment of globals with a specified alignment, we fix
common variables to obey their alignment.  Add a comment
explaining why this behavior is important.

llvm-svn: 102365
2010-04-26 18:46:46 +00:00
Chris Lattner
a8cd2ac893 Revert r102300/102301, which seriously broke objc apps.
llvm-svn: 102359
2010-04-26 18:30:45 +00:00
Chris Lattner
e4a25eb35a Fix PR6921: globals were not getting correctly rounded up to their
preferred alignment unless they were common or some other special
case.

llvm-svn: 102300
2010-04-25 05:30:43 +00:00
Dan Gohman
42337e0ee9 Generalize LSR's OptimizeMax to handle the new kinds of max expressions
that indvars may use, now that indvars is recognizing le and ge loops.

llvm-svn: 102235
2010-04-24 03:13:44 +00:00
Stuart Hastings
85b5c330f2 Per Chris, fuse four trivial tests using grep (r102199) into one that uses FileCheck.
llvm-svn: 102216
2010-04-23 22:12:57 +00:00
Dan Gohman
6a48222bd8 Change TargetData's algorithm for computing default vector type
alignment to match what's used in clang and GCC for __alignof, rather
than trying to guess what Legalize is going to be doing.

llvm-svn: 102206
2010-04-23 19:41:15 +00:00
Stuart Hastings
ad81819149 Add some missing x86 patterns for movdq2q. Fixes two (LLVM-)GCC DejaGNU testcases. Radar 6881029.
llvm-svn: 102199
2010-04-23 19:03:32 +00:00
Dan Gohman
38949c2f1f Fix LSR to tolerate cases where ScalarEvolution initially
misses an opportunity to fold add operands, but folds them
after LSR has separated them out. This fixes rdar://7886751.

llvm-svn: 102157
2010-04-23 01:55:05 +00:00
Evan Cheng
a324da99ae Do not try to optimize a copy that has already been marked for deletion.
llvm-svn: 102027
2010-04-21 20:57:54 +00:00
Evan Cheng
dbfb7dc438 Implement -disable-non-leaf-fp-elim, which disables frame pointer elimination
optimization for non-leaf functions. This will be hooked up to gcc's
-momit-leaf-frame-pointer option. rdar://7886181

llvm-svn: 101984
2010-04-21 03:18:23 +00:00
Evan Cheng
a0c4b2952f - Clean up some crappy code which deals with coalescing of copies which look at
extract_subreg / insert_subreg, etc.
- Add support for more aggressive insert_subreg coalescing.

llvm-svn: 101971
2010-04-21 00:44:22 +00:00
Dan Gohman
570b621976 Add another variant of this test which found a place where
CodeGen's ComputeMaskedBits was being over-conservative when computing
bits for an ADD.

llvm-svn: 101963
2010-04-21 00:19:28 +00:00
Chris Lattner
6db0f451a7 teach the x86 address matching stuff to handle
(shl (or x,c), 3) the same as (shl (add x, c), 3)
when x doesn't have any bits from c set.

This finishes off PR1135.  Before, we compiled the block to:

LBB0_3:                                 ## %bb
	cmpb	$4, %dl
	sete	%dl
	addb	%dl, %cl
	movb	%cl, %dl
	shlb	$2, %dl
	addb	%r8b, %dl
	shlb	$2, %dl
	movzbl	%dl, %edx
	movl	%esi, (%rdi,%rdx,4)
	leaq	2(%rdx), %r9
	movl	%esi, (%rdi,%r9,4)
	leaq	1(%rdx), %r9
	movl	%esi, (%rdi,%r9,4)
	addq	$3, %rdx
	movl	%esi, (%rdi,%rdx,4)
	incb	%r8b
	decb	%al
	movb	%r8b, %dl
	jne	LBB0_1

Now we produce:

LBB0_3:                                 ## %bb
	cmpb	$4, %dl
	sete	%dl
	addb	%dl, %cl
	movb	%cl, %dl
	shlb	$2, %dl
	addb	%r8b, %dl
	shlb	$2, %dl
	movzbl	%dl, %edx
	movl	%esi, (%rdi,%rdx,4)
	movl	%esi, 8(%rdi,%rdx,4)
	movl	%esi, 4(%rdi,%rdx,4)
	movl	%esi, 12(%rdi,%rdx,4)
	incb	%r8b
	decb	%al
	movb	%r8b, %dl
	jne	LBB0_1

llvm-svn: 101958
2010-04-20 23:18:40 +00:00
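
The identity this matcher relies on is easy to check in isolation: when x and c share no set bits, the or cannot produce a carry, so it computes the same result as the add. A standalone C++ check of that fact (not LLVM code):

#include <cassert>
#include <cstdint>

int main() {
  // With no overlapping bits there is nothing to carry, so (x | c) == (x + c).
  uint32_t x = 0xF0, c = 0x07;
  assert((x & c) == 0);
  assert((x | c) == (x + c));

  // With overlapping bits the equivalence breaks down.
  uint32_t y = 0x0F;
  assert((y & c) != 0);
  assert((y | c) != (y + c));
  return 0;
}
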
Bill Wendling
a87efb5d0f Move CodeGen/X86/2010-04-19-DAGCombineCrash.ll into CodeGen/X86/crash.ll. Also
reduce.

llvm-svn: 101925
2010-04-20 18:14:47 +00:00
Bill Wendling
887dac2aa6 The visitXOR method can return the same SDNode. If so, we don't want to delete
it as it's not dead.

llvm-svn: 101855
2010-04-20 01:25:01 +00:00
Dan Gohman
5736cd1e47 Start function numbering at 0.
llvm-svn: 101638
2010-04-17 16:29:15 +00:00
Evan Cheng
d3d5e6793a Add nounwind.
llvm-svn: 101613
2010-04-17 03:43:36 +00:00
Jakob Stoklund Olesen
7e77f60652 Add test case for machine-sink on critical edges
llvm-svn: 101416
2010-04-15 23:19:16 +00:00
Chris Lattner
1b7ecfdf60 enhance the load/store narrowing optimization to handle a
tokenfactor in between the load/store.  This allows us to 
optimize test7 into:

_test7:                                 ## @test7
## BB#0:                                ## %entry
	movl	(%rdx), %eax
                                        ## kill: SIL<def> ESI<kill>
	movb	%sil, 5(%rdi)
	ret

instead of:

_test7:                                 ## @test7
## BB#0:                                ## %entry
	movl	4(%esp), %ecx
	movl	$-65281, %eax           ## imm = 0xFFFFFFFFFFFF00FF
	andl	4(%ecx), %eax
	movzbl	8(%esp), %edx
	shll	$8, %edx
	addl	%eax, %edx
	movl	12(%esp), %eax
	movl	(%eax), %eax
	movl	%edx, 4(%ecx)
	ret

llvm-svn: 101355
2010-04-15 06:10:49 +00:00
Chris Lattner
8c5a5c9094 teach codegen to turn trunc(zextload) into load when possible.
This doesn't occur much at all; it only seems to be formed in the case
when the trunc optimization kicks in due to phase ordering.  In that
case it saves a few bytes on x86-32.

llvm-svn: 101350
2010-04-15 05:40:59 +00:00
Chris Lattner
3282f3d34f Implement rdar://7860110 (also in target/readme.txt) narrowing
a load/or/and/store sequence into a narrower store when it is
safe.  Daniel tells me that clang will start producing this sort
of thing with bitfields, and this does trigger a few dozen times
on 176.gcc produced by llvm-gcc even now.

This compiles code like CodeGen/X86/2009-05-28-DAGCombineCrash.ll 
into:

        movl    %eax, 36(%rdi)

instead of:

        movl    $4294967295, %eax       ## imm = 0xFFFFFFFF
        andq    32(%rdi), %rax
        shlq    $32, %rcx
        addq    %rax, %rcx
        movq    %rcx, 32(%rdi)

and each of the testcases into a single store.  Each of them used
to compile into craziness like this:

_test4:
	movl	$65535, %eax            ## imm = 0xFFFF
	andl	(%rdi), %eax
	shll	$16, %esi
	addl	%eax, %esi
	movl	%esi, (%rdi)
	ret

llvm-svn: 101343
2010-04-15 04:48:01 +00:00
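
In source form, the load/or/and/store pattern being narrowed typically looks like the following (an illustrative C++ example, not taken from the commit): a wide load, a mask, an or of the new bits, and a wide store which, on a little-endian x86 target, may be replaced by a single narrower store of v.

#include <cstdint>

// Update the low 16 bits of a 32-bit word via load/and/or/store.
// The whole sequence can be narrowed to one 16-bit store of v.
void setLow16(uint32_t *p, uint16_t v) {
  *p = (*p & 0xFFFF0000u) | v;
}
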
Chris Lattner
553267e9cc further tweak this to do something useful.
llvm-svn: 101341
2010-04-15 04:31:42 +00:00
Chris Lattner
a4b3756baf remove undef control flow.
llvm-svn: 101340
2010-04-15 04:30:19 +00:00
Jakob Stoklund Olesen
7343a90490 Remove unneeded types from test.
llvm-svn: 101286
2010-04-14 20:56:09 +00:00
Evan Cheng
172f2f9e2d Add test for post-ra machine licm.
llvm-svn: 101182
2010-04-13 22:10:03 +00:00
Evan Cheng
6ffb1ed4fb Fix test on non-x86 hosts.
llvm-svn: 101163
2010-04-13 18:54:04 +00:00
Evan Cheng
b8861dcb04 Re-apply 101075 and fix it properly. Just reuse the debug info of the branch instruction being optimized. There is no need for the --I, which can dereference off the start of the BB.
llvm-svn: 101162
2010-04-13 18:50:27 +00:00
Eric Christopher
330ca0c937 Temporarily revert r101075, it's causing invalid iterator assertions
in a nightly tester.

llvm-svn: 101158
2010-04-13 18:37:58 +00:00
Chris Lattner
dabcd9738c add llvm codegen support for -ffunction-sections and -fdata-sections,
patch by Sylvere Teissier!

llvm-svn: 101106
2010-04-13 00:36:43 +00:00
Evan Cheng
ec21a36774 Use .set expression for x86 pic jump table reference to reduce assembly relocation. rdar://7738756
llvm-svn: 101085
2010-04-12 23:07:17 +00:00
Bill Wendling
5a56f7fc20 Third time's a charm...
llvm-svn: 101081
2010-04-12 22:43:21 +00:00
Bill Wendling
e1bf74de52 Genericize the label test.
llvm-svn: 101079
2010-04-12 22:40:37 +00:00
Bill Wendling
fd58812c95 Correct test to test what I mean it to test.
llvm-svn: 101077
2010-04-12 22:25:42 +00:00
Bill Wendling
1f2e71928c Micro-optimization:
If we have this situation:

    jCC  L1
    jmp  L2
L1:
  ...
L2:
  ...

We can get a small performance boost by emitting this instead:

    jnCC L2
L1:
  ...
L2:
  ...

This testcase shows an example of this:

float func(float x, float y) {
    double product = (double)x * y;
    if (product == 0.0)
        return product;
    return product - 1.0;
}

llvm-svn: 101075
2010-04-12 22:19:57 +00:00
Evan Cheng
90788354c9 Enable post regalloc machine licm by default.
llvm-svn: 101023
2010-04-12 06:25:28 +00:00
Dan Gohman
a451b859f9 Merge a few fast-isel tests.
llvm-svn: 100860
2010-04-09 15:03:55 +00:00
Evan Cheng
619f1b8a94 Coalescer should not delete copy instructions whose defs are partially dead. e.g.
%RDI<def,dead> = MOV64rr %RAX<kill>, %EDI<imp-def>

llvm-svn: 100804
2010-04-08 20:02:37 +00:00
Evan Cheng
3fa0b6fb03 Avoid using f64 to lower memcpy from constant string. It's cheaper to use i32 store of immediates.
llvm-svn: 100751
2010-04-08 07:37:57 +00:00
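
The kind of code affected is a small fixed-size memcpy from a string literal (an illustrative C++ example, not from the commit); with this change such a copy can be expanded inline as a couple of 32-bit immediate stores rather than going through f64 loads and stores.

#include <cstring>

// A fixed-size copy from a constant string, small enough to be expanded
// inline as immediate stores rather than a call to memcpy.
void initHeader(char *dst) {
  std::memcpy(dst, "LLVMCODE", 8);
}
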
Dan Gohman
0a77dd29de When expanding expressions which are using post-inc mode for multiple loops,
ensure that the expansion is dominated by the increments of those loops.

llvm-svn: 100748
2010-04-08 05:57:57 +00:00
Chris Lattner
23334439e9 add newlines at the end of files.
llvm-svn: 100705
2010-04-07 22:53:17 +00:00
Dan Gohman
b5210c934f Generalize IVUsers to track arbitrary expressions rather than expressions
explicitly split into stride-and-offset pairs. Also, add the
ability to track multiple post-increment loops on the same expression.

This refines the concept of "normalizing" SCEV expressions used for
post-increment uses, and introduces a dedicated utility routine for
normalizing and denormalizing expressions.

This fixes the expansion of expressions which are post-increment users
of more than one loop at a time. More broadly, this takes LSR another
step closer to being able to reason about more than one loop at a time.

llvm-svn: 100699
2010-04-07 22:27:08 +00:00
Dale Johannesen
4cdb545401 Split big test into multiple directories to cater to
those who don't build all targets.

llvm-svn: 100688
2010-04-07 20:43:35 +00:00
Chris Lattner
367cbc160c this has a pr!
llvm-svn: 100637
2010-04-07 18:04:56 +00:00
Chris Lattner
60e5f55565 fix a latent bug my inline asm stuff exposed:
MachineOperand::isIdenticalTo wasn't handling metadata operands.

llvm-svn: 100636
2010-04-07 18:03:19 +00:00
Jakob Stoklund Olesen
370a3e553f Don't try to collapse DomainValues onto an incompatible SSE domain.
This fixes the Bullet regression on i386/nocona.

llvm-svn: 100553
2010-04-06 19:48:56 +00:00
Evan Cheng
7d177beb1d Add nounwind.
llvm-svn: 100482
2010-04-05 22:30:05 +00:00
Dan Gohman
2aff0055c1 Don't do code sinking on unreachable blocks. It's unprofitable and hazardous.
llvm-svn: 100455
2010-04-05 19:17:22 +00:00
Chris Lattner
779e051357 resolve a fixme.
llvm-svn: 100346
2010-04-04 19:28:59 +00:00
Evan Cheng
5d825988d0 Correctly lower memset / memcpy of undef. It should be a nop. PR6767.
llvm-svn: 100208
2010-04-02 19:36:14 +00:00
Dan Gohman
7051600e3f Revert the recent alignment changes. They're broken for -Os because,
in particular, they end up aligning strings at 16-byte boundaries, and
there's no way for GlobalOpt to check OptForSize.

llvm-svn: 100172
2010-04-02 03:04:37 +00:00
Dan Gohman
ec731a99fa Remove this initializer so that the optimizer doesn't convert
unaligned loads into aligned loads.

llvm-svn: 100166
2010-04-02 01:26:13 +00:00
Dan Gohman
d14fe2a5cd Update this test for the new preferred alignment heuristics.
llvm-svn: 100165
2010-04-02 01:24:08 +00:00
Evan Cheng
9508456bb6 In 64-bit mode, use i64 to lower memcpy / memset instead of f64.
llvm-svn: 100137
2010-04-01 20:27:45 +00:00