mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-11-25 04:02:41 +01:00
Commit Graph

6083 Commits

Author SHA1 Message Date
Owen Anderson
bd844014fa Add a test for my previous PRE fix.
llvm-svn: 60394
2008-12-02 04:25:42 +00:00
Evan Cheng
39d7e00ff9 Fix PR3124: overly strict assert.
llvm-svn: 60392
2008-12-02 02:15:36 +00:00
Bill Wendling
580f12ae30 Second stab at target-dependent lowering of everyone's favorite nodes: [SU]ADDO
- LowerXADDO lowers [SU]ADDO into an ADD with an implicit EFLAGS define. The
  EFLAGS are fed into a SETCC node which has the conditional COND_O or COND_C,
  depending on the type of ADDO requested.

- LowerBRCOND now recognizes if it's coming from a SETCC node with COND_O or
  COND_C set.

llvm-svn: 60388
2008-12-02 01:06:39 +00:00
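A minimal C sketch (hypothetical helper, not part of the patch) of the unsigned overflow-check idiom this lowering serves; on x86 the add already sets EFLAGS, so the carry test can become a single setcc on COND_C instead of a separate compare:

    #include <stdint.h>

    /* Hypothetical helper: add two 32-bit values and report unsigned overflow.
       The sum wraps modulo 2^32; the comparison is true exactly when the
       addition carried, which a backend can read straight from the carry flag. */
    static int uadd_overflows(uint32_t a, uint32_t b, uint32_t *sum) {
        *sum = a + b;
        return *sum < a;
    }

    int main(void) {
        uint32_t s;
        return uadd_overflows(0xFFFFFFFFu, 1u, &s); /* returns 1, s == 0 */
    }
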
Chris Lattner
baf38b4f91 Add rdar reference, make this actually fail when the patch isn't applied.
llvm-svn: 60376
2008-12-01 22:35:31 +00:00
Dale Johannesen
f4362aae8c Consider only references to an IV within the loop when
figuring out the base of the IV.  This produces better
code in the example.  (Addresses use (IV) instead of 
(BASE,IV) - a significant improvement on low-register
machines like x86).

llvm-svn: 60374
2008-12-01 22:00:01 +00:00
Scott Michel
cf677b5a67 CellSPU:
- Fix v2[if]64 vector insertion code before IBM files a bug report.
- Ensure that zero (0) offsets relative to $sp don't trip an assert
  (add $sp, 0 gets legalized to $sp alone, tripping an assert)
- Shuffle masks passed to SPUISD::SHUFB are now v16i8 or v4i32

llvm-svn: 60358
2008-12-01 17:56:02 +00:00
Bill Wendling
a6e7dd2299 Use m_Specific() instead of double matching.
llvm-svn: 60341
2008-12-01 08:09:47 +00:00
Chris Lattner
e6c7ed156f simplify these patterns using m_Specific. No need to grep for
xor in testcase (or is a substring).

llvm-svn: 60328
2008-12-01 05:16:26 +00:00
Chris Lattner
0e03e40a76 Teach inst combine to merge GEPs through PHIs. This is really
important because it is sinking the loads using the GEPs, but
not the GEPs themselves.  This triggers 647 times on 403.gcc
and makes the .s file much much nicer.  For example before:

        je      LBB1_87 ## bb78
LBB1_62:        ## bb77
        leal    84(%esi), %eax
LBB1_63:        ## bb79
        movl    (%eax), %eax
...
LBB1_87:        ## bb78
        movl    $0, 4(%esp)
        movl    %esi, (%esp)
        call    L_make_decl_rtl$stub
        jmp     LBB1_62 ## bb77


after:

        jne     LBB1_63 ## bb79
LBB1_62:        ## bb78
        movl    $0, 4(%esp)
        movl    %esi, (%esp)
        call    L_make_decl_rtl$stub
LBB1_63:        ## bb79
        movl    84(%esi), %eax

The input code was (and the GEPs are merged and
the PHI is now eliminated by instcombine):

        br i1 %tmp233, label %bb78, label %bb77
bb77:           
        %tmp234 = getelementptr %struct.tree_node* %t_addr.3, i32 0, i32 0, i32 22              
        br label %bb79
bb78:           
        call void @make_decl_rtl(%struct.tree_node* %t_addr.3, i8* null) nounwind
        %tmp235 = getelementptr %struct.tree_node* %t_addr.3, i32 0, i32 0, i32 22              
        br label %bb79
bb79:           
        %iftmp.12.0.in = phi %struct.rtx_def** [ %tmp235, %bb78 ], [ %tmp234, %bb77 ]           
        %iftmp.12.0 = load %struct.rtx_def** %iftmp.12.0.in             

llvm-svn: 60322
2008-12-01 02:34:36 +00:00
Chris Lattner
01150dce74 testcase for my previous commit.
llvm-svn: 60315
2008-12-01 01:42:03 +00:00
Bill Wendling
23684a026c Implement ((A|B)&1)|(B&-2) -> (A&1) | B transformation. This also takes care of
permutations of this pattern.

llvm-svn: 60312
2008-12-01 01:07:11 +00:00
Bill Wendling
66a7442059 Add instruction combining for ((A&~B)|(~A&B)) -> A^B and all permutations.
llvm-svn: 60291
2008-11-30 13:52:49 +00:00
Bill Wendling
3e27ac16a6 Implement (A&((~A)|B)) -> A&B transformation in the instruction combiner. This
takes care of all permutations of this pattern.

llvm-svn: 60290
2008-11-30 13:08:13 +00:00
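The three instcombine folds above are plain Boolean identities; a standalone brute-force check over 8-bit operands (a sketch, not part of the tree) confirms each one:

    #include <assert.h>
    #include <stdint.h>

    int main(void) {
        for (unsigned a = 0; a < 256; ++a) {
            for (unsigned b = 0; b < 256; ++b) {
                uint8_t A = (uint8_t)a, B = (uint8_t)b;
                /* ((A|B)&1)|(B&-2) -> (A&1)|B   (-2 is 0xFE in 8 bits) */
                assert((uint8_t)(((A | B) & 1) | (B & 0xFE)) == (uint8_t)((A & 1) | B));
                /* (A&~B)|(~A&B) -> A^B */
                assert((uint8_t)((A & ~B) | (~A & B)) == (uint8_t)(A ^ B));
                /* A&(~A|B) -> A&B */
                assert((uint8_t)(A & (~A | B)) == (uint8_t)(A & B));
            }
        }
        return 0;
    }
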
Bill Wendling
97ad688c1b getSExtValue() doesn't work for ConstantInts with bitwidth > 64 bits. Use all
APInt calls instead.

This fixes PR3144.

llvm-svn: 60288
2008-11-30 12:38:24 +00:00
Eli Friedman
2bc3921ce2 Optimize memmove and memset into the LLVM builtins. Note that these
only show up in code from front-ends besides llvm-gcc, like clang.

llvm-svn: 60287
2008-11-30 08:32:11 +00:00
Eli Friedman
ccdfdbfc99 Followup to r60283: optimize arbitrary width signed divisions as well
as unsigned divisions.  Same caveats as before.

llvm-svn: 60284
2008-11-30 06:35:39 +00:00
Eli Friedman
d7a261120f Fix for PR2164: allow transforming arbitrary-width unsigned divides into
multiplies.

Some more cleverness would be nice, though. It would be nice if we 
could do this transformation on illegal types.  Also, we would 
prefer a narrower constant when possible so that we can use a narrower
multiply, which can be cheaper.

llvm-svn: 60283
2008-11-30 06:02:26 +00:00
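As a concrete illustration of the divide-to-multiply rewrite (a sketch of the standard fixed-point-reciprocal trick, not code from this patch): an unsigned 32-bit divide by 10 becomes a widening multiply by ceil(2^35/10) = 0xCCCCCCCD followed by a right shift by 35.

    #include <assert.h>
    #include <stdint.h>

    /* x / 10 as multiply-high plus shift; exact for every 32-bit unsigned x
       because the rounding error of the fixed-point reciprocal stays below
       the gap to the next integer. */
    static uint32_t udiv10(uint32_t x) {
        return (uint32_t)(((uint64_t)x * 0xCCCCCCCDu) >> 35);
    }

    int main(void) {
        assert(udiv10(0) == 0);
        assert(udiv10(99) == 9);
        assert(udiv10(1000000007u) == 100000000u);
        assert(udiv10(UINT32_MAX) == UINT32_MAX / 10);
        return 0;
    }
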
Eli Friedman
0ef5e1dc82 APIntify a test which is potentially unsafe otherwise, and fix the
nearby FIXME.

I'm not sure what the right way to fix the Cell test was; if the 
approach I used isn't okay, please let me know.

llvm-svn: 60277
2008-11-30 04:59:26 +00:00
Bill Wendling
5020e916ef Strengthen check for div inst-combining.
llvm-svn: 60276
2008-11-30 04:33:53 +00:00
Bill Wendling
ac11f7d37e Instcombine was illegally transforming -X/C into X/-C when either X or C
overflowed on negation. This commit checks to make sure that neither C nor X
overflows. This requires that the RHS of X (a subtract instruction) be a
constant integer.

llvm-svn: 60275
2008-11-30 03:42:12 +00:00
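For reference, a standalone illustration (not from the commit) of the overflow the new check guards against: two's-complement negation of the most negative 32-bit value wraps back onto itself, so a blindly folded -C (or -X) is simply not representable when the operand is INT32_MIN.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        int32_t c = INT32_MIN;
        /* Model two's-complement negation via unsigned arithmetic; writing -c
           directly would be signed overflow. The wrap back to INT32_MIN is why
           the transform must verify that negating X or C does not overflow. */
        int32_t neg_c = (int32_t)(0u - (uint32_t)c);
        printf("C = %" PRId32 ", wrapped -C = %" PRId32 "\n", c, neg_c);
        return 0;
    }
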
Chris Lattner
203a3299e9 don't require GVN to work on dead values, just make the
test return the loaded value.

llvm-svn: 60252
2008-11-29 21:21:48 +00:00
Chris Lattner
f3e49f038c Fix a thinko that manifested as a crash on clamav last night.
llvm-svn: 60251
2008-11-29 20:29:04 +00:00
Chris Lattner
494758e720 Fix PR3141 by ensuring that MemoryDependenceAnalysis::removeInstruction
properly updates the reverse dependency map when it installs updated 
dependencies for instructions that depend on the removed instruction.

llvm-svn: 60222
2008-11-28 22:51:08 +00:00
Chris Lattner
a854ab3760 don't call MergeBasicBlockIntoOnlyPred on a block whose only
predecessor is itself.  This doesn't make sense, and this is
a dead infinite loop anyway.

llvm-svn: 60210
2008-11-28 19:54:49 +00:00
Nick Lewycky
40db216722 Chris prefers icmp/select over udiv!
llvm-svn: 60187
2008-11-27 22:41:10 +00:00
Nick Lewycky
882443585d Add a couple of missed optimizations on integer vectors. Multiply and divide
by 1, as well as multiply by -1.

llvm-svn: 60182
2008-11-27 20:21:08 +00:00
Chris Lattner
73b251b3bf Fix PR3138: if we merge the entry block into another block, make sure to
move the other block back up into the entry position!

llvm-svn: 60179
2008-11-27 19:25:19 +00:00
Bill Wendling
7742719284 XFAIL test due to a reverted patch.
llvm-svn: 60161
2008-11-27 07:34:10 +00:00
Chris Lattner
532458b89f Make jump threading substantially more powerful, in the following ways:
1. Make it fold blocks separated by an unconditional branch.  This enables
   jump threading to see a broader scope.
2. Make jump threading able to eliminate locally redundant loads when they
   feed the branch condition of a block.  This frequently occurs due to
   reg2mem running.
3. Make jump threading able to eliminate *partially redundant* loads when
   they feed the branch condition of a block.  This is common in code with
   lots of loads and stores like C++ code and 255.vortex.

This implements thread-loads.ll and rdar://6402033.

Per the fixme's, several pieces of this should be moved into Transforms/Utils.

llvm-svn: 60148
2008-11-27 05:07:53 +00:00
Evan Cheng
ee5e950c25 Avoid inserting noop's in the middle of a loop.
llvm-svn: 60141
2008-11-27 01:16:00 +00:00
Evan Cheng
f18016728c On x86, favor folding a short immediate into some arithmetic operations (e.g. add, and, xor) because materializing an immediate in a register is expensive in terms of code size.
e.g.
movl 4(%esp), %eax
addl $4, %eax

is 2 bytes shorter than

movl $4, %eax
addl 4(%esp), %eax

llvm-svn: 60139
2008-11-27 00:49:46 +00:00
Evan Cheng
4da44412cf Add -march=x86.
llvm-svn: 60135
2008-11-27 00:37:06 +00:00
Bill Wendling
3376836463 Add x86-specific test for add-with-overflow intrinsics.
llvm-svn: 60125
2008-11-26 22:42:19 +00:00
Chris Lattner
d01522d33a Turn on my codegen prepare heuristic by default. It doesn't affect
performance in most cases on the Grawp tester, but does speed some 
things up (like shootout/hash by 15%).  This also doesn't impact 
compile time in a noticeable way on the Grawp tester.

It also, of course, gets the testcase it was designed for right :)

llvm-svn: 60120
2008-11-26 22:16:44 +00:00
Duncan Sands
f64dd4b09c Check that running the DAG combiner between type
and operation legalization does something useful.

llvm-svn: 60108
2008-11-26 16:44:30 +00:00
Bill Wendling
f069b62cd7 Add test for rdar://6394879.
llvm-svn: 60079
2008-11-26 02:21:12 +00:00
Chris Lattner
61c2a0fc8a This adds in some code (currently disabled unless you pass
-enable-smarter-addr-folding to llc) that gives CGP a better
cost model for when to sink computations into addressing modes.
The basic observation is that sinking increases register 
pressure when part of the addr computation has to be available
for other reasons, such as having a use that is a non-memory
operation.  In cases where it works, it can substantially reduce
register pressure.

This code is currently an overall win on 403.gcc and 255.vortex
(the two things I've been looking at), but there are several 
things I want to do before enabling it by default:

1. This isn't doing any caching of results, so it is much slower 
   than it could be.  It currently slows down release-asserts llc 
   by 1.7% on 176.gcc: 27.12s -> 27.60s.
2. This doesn't think about inline asm memory operands yet.
3. The cost model botches the case when the needed value is live
   across the computation for other reasons.

I'll continue poking at this, and eventually turn it on as llcbeta.

llvm-svn: 60074
2008-11-26 02:00:14 +00:00
Chris Lattner
8209f83091 Teach CodeGenPrepare to look through Bitcast instructions when attempting to
optimize addressing modes.  This allows us to optimize things like isel-sink2.ll
into:

	movl	4(%esp), %eax
	cmpb	$0, 4(%eax)
	jne	LBB1_2	## F
LBB1_1:	## TB
	movl	$4, %eax
	ret
LBB1_2:	## F
	movzbl	7(%eax), %eax
	ret

instead of:

_test:
	movl	4(%esp), %eax
	cmpb	$0, 4(%eax)
	leal	4(%eax), %eax
	jne	LBB1_2	## F
LBB1_1:	## TB
	movl	$4, %eax
	ret
LBB1_2:	## F
	movzbl	3(%eax), %eax
	ret

This shrinks (e.g.) 403.gcc from 1133510 to 1128345 lines of .s.

Note that the 2008-10-16-SpillerBug.ll testcase is dubious at best; I doubt
it is really testing what it thinks it is.

llvm-svn: 60068
2008-11-26 00:26:16 +00:00
Chris Lattner
017dde7e2b fix an over-reduced test.
llvm-svn: 60067
2008-11-26 00:12:08 +00:00
Chris Lattner
72db9f8bdd this doesn't need EH
llvm-svn: 60066
2008-11-26 00:03:26 +00:00
Mikhail Glushenkov
89bfeb825b Since the old llvmc was removed, rename llvmc2 to llvmc.
llvm-svn: 60048
2008-11-25 21:38:12 +00:00
Evan Cheng
c11d7e324f convertToSignExtendedInteger should return opInvalidOp instead of asserting if the float semantics do not allow arithmetic.
llvm-svn: 60042
2008-11-25 19:00:29 +00:00
Scott Michel
59013b297c CellSPU:
(a) Remove conditionally removed code in SelectXAddr. Basically, hope for the
    best that the A-form and D-form address predicates catch everything before
    the code decides to emit an X-form address.
(b) Expand vector store test cases to include the usual suspects.

llvm-svn: 60034
2008-11-25 17:29:43 +00:00
Scott Michel
bb575152bc CellSPU: test should use shlqby, not shlqbyi
llvm-svn: 60001
2008-11-25 01:30:37 +00:00
Bill Wendling
c9f3eec3f9 XFAIL this test. A recent CellSPU check-in broke it.
llvm-svn: 60000
2008-11-25 00:56:34 +00:00
Dan Gohman
92cedc8a95 Initial support for anti-dependence breaking. Currently this code does not
introduce any new spilling; it just uses unused registers.

Refactor the SUnit topological sort code out of the RRList scheduler and
make use of it to help with the post-pass scheduler.

llvm-svn: 59999
2008-11-25 00:52:40 +00:00
Bill Wendling
cb92038dbd Testcase for constant CFStrings.
llvm-svn: 59992
2008-11-24 23:28:09 +00:00
Chris Lattner
a07ad05059 reenable test
llvm-svn: 59986
2008-11-24 21:27:20 +00:00
Bill Wendling
36ee715e71 Temporarily XFAIL this test. r59976 and r59972 broke it.
llvm-svn: 59981
2008-11-24 20:43:33 +00:00
Chris Lattner
e5bf93e61f Fix PR3113: If we have a dead cyclic PHI, replace the whole thing
with an undef.

llvm-svn: 59972
2008-11-24 19:25:36 +00:00
Scott Michel
259a64c097 CellSPU:
(a) Slight rethink on i64 zero/sign/any extend code - use a shuffle to
    directly zero-extend i32 to i64, but use rotates and shifts for
    sign extension. Also ensure unified register consistency.
(b) Add new test harness for i64 operations: i64ops.ll

llvm-svn: 59970
2008-11-24 18:20:46 +00:00
Scott Michel
c3965308a4 CellSPU:
(a) Improve the extract element code: there's no need to do gymnastics with
    rotates into the preferred slot if a shuffle will do the same thing.
(b) Rename a couple of SPUISD pseudo-instructions for readability and better
    semantic correspondence.
(c) Fix i64 sign/any/zero extension lowering.

llvm-svn: 59965
2008-11-24 17:11:17 +00:00
Bill Wendling
855ac77084 Test add-with-overflow with fast ISel.
llvm-svn: 59945
2008-11-24 05:23:38 +00:00
Nick Lewycky
47fa9bd187 Extend the 'noalias' attribute to function return values. This is intended to
indicate functions that allocate, such as operator new, or list::insert. The
actual definition is slightly less strict (for now).

No changes to the bitcode reader/writer, asm printer or verifier were needed.

llvm-svn: 59934
2008-11-24 03:41:24 +00:00
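One place a noalias return already shows up from C (an illustration that assumes clang's current mapping of the malloc attribute onto a noalias return; it is not part of this commit):

    #include <stdlib.h>

    /* The malloc attribute promises that the returned pointer does not alias
       anything else, the same property this commit lets IR express on a
       function's return value. */
    __attribute__((malloc)) static int *fresh_buffer(size_t n) {
        return (int *)calloc(n, sizeof(int));
    }

    int main(void) {
        int *p = fresh_buffer(16);
        free(p);
        return 0;
    }
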
Bill Wendling
4bb8a7a498 Add support for llvm.uadd.with.overflow.
llvm-svn: 59926
2008-11-24 01:38:29 +00:00
Scott Michel
50e49b28f0 CellSPU: Fix bug 3056. Variadic extract_element was not implemented (nor was it
ever conceived to occur).

llvm-svn: 59891
2008-11-22 23:50:42 +00:00
Nick Lewycky
2fbf26fe70 Optimize (x/y)*y into x-(x%y) in general. Div and rem are about the same, and
a subtract is cheaper than a multiply. This generalizes an existing transform.

llvm-svn: 59800
2008-11-21 07:33:58 +00:00
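The fold leans on the definition of integer division, x == (x/y)*y + x%y, so (x/y)*y is exactly x - x%y; a quick standalone check (not from the patch):

    #include <assert.h>

    int main(void) {
        for (int x = -50; x <= 50; ++x) {
            for (int y = -7; y <= 7; ++y) {
                if (y == 0)
                    continue;
                /* Follows from x == (x/y)*y + x%y; the rewrite trades a
                   multiply for a cheaper subtract. */
                assert((x / y) * y == x - x % y);
            }
        }
        return 0;
    }
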
Scott Michel
314d705baf CellSPU:
(a) Fix bugs 3052, 3057
(b) Incorporate Duncan's suggestions re: i1 promotion
(c) Indentation updates.

llvm-svn: 59790
2008-11-21 02:56:16 +00:00
Bill Wendling
1e6d74b84a Add generic test for add with overflow.
llvm-svn: 59781
2008-11-21 02:15:51 +00:00
Dan Gohman
7e92e53e25 Test -pre-RA-sched=fast too, for completeness.
llvm-svn: 59741
2008-11-20 19:26:04 +00:00
Evan Cheng
2805dcc9a0 - Register scavenger should use MachineRegisterInfo and internal map to find the first use of a register after a given machine instruction.
- When scavenging a register, in addition to the spill, insert a restore before the first use.
- Abort if client is looking to scavenge a register even when a previously scavenged register is still live.

llvm-svn: 59697
2008-11-20 02:32:35 +00:00
Devang Patel
cd2e68c069 If there are two consecutive llvm.dbg.stoppoint calls then
it is likely that the optimizer deleted code in between these
two intrinsics. Keep only the last llvm.dbg.stoppoint in this case.

llvm-svn: 59657
2008-11-19 18:56:50 +00:00
Dan Gohman
60e2650b09 Revert r59640. It broke this test for builds that aren't
configured with llvm-gcc.

llvm-svn: 59641
2008-11-19 16:24:37 +00:00
Dan Gohman
1b9557279c Use %llvmgcc -xassembler instead of invoking as directly. This avoids
problems, for example, when LLVM is built with --with-extra-options=-m64
and as defaults to x86-32 mode.

llvm-svn: 59640
2008-11-19 16:02:14 +00:00
Owen Anderson
482ea64f7b Add support for rematerialization in pre-alloc-splitting.
llvm-svn: 59587
2008-11-19 04:28:29 +00:00
Daniel Dunbar
9c71cd5448 LLVMC2: -emit-llvm stops compilation.
llvm-svn: 59586
2008-11-19 04:15:56 +00:00
Daniel Dunbar
60f1563256 LLVMC2: Teach llvm_gcc_c tool about -include and -fsyntax-only.
- Only focusing on llvm_gcc_c for now; eventually this needs to be
   refactored so it can be shared by all the gcc-like tools.

llvm-svn: 59582
2008-11-19 02:59:00 +00:00
Evan Cheng
145b3db050 Register scavenger should process early clobber defs first. A dead early clobber def should not interfere with a normal def which happens one slot later.
llvm-svn: 59559
2008-11-18 22:28:38 +00:00
Nick Lewycky
c573f70ae4 Add a utility function that detects whether a loop is guaranteed to be finite.
Use it to safely handle less-than-or-equal-to exit conditions in loops. These
also occur when the loop exit branch exits on true, because SCEV inverts the
icmp predicate.

Use it again to handle non-zero strides, but only with an unsigned comparison
in the exit condition.

llvm-svn: 59528
2008-11-18 15:10:54 +00:00
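A small C example (illustrative only) of why the finiteness check has to come first: with an unsigned induction variable, a less-than-or-equal exit condition can describe a loop that never exits, so a trip count derived from it would be meaningless.

    /* If n == UINT_MAX the condition i <= n is always true and i wraps from
       UINT_MAX back to 0, so the loop never terminates. A trip count may only
       be computed from this condition once the loop is known to be finite. */
    unsigned sum_up_to(unsigned n) {
        unsigned sum = 0;
        for (unsigned i = 0; i <= n; ++i)
            sum += i;
        return sum;
    }
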
Duncan Sands
3f0dbb4ead Reapply r59464, this time using the correct type
when softening FNEG.

llvm-svn: 59513
2008-11-18 09:15:03 +00:00
Bill Wendling
8c9e9be673 A simple test for stack protectors. This should be valid on all platforms.
llvm-svn: 59505
2008-11-18 07:34:50 +00:00
Bill Wendling
33cf8ff597 Revert r59464. It was causing this failure:
Running /Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm.src/test/CodeGen/XCore/dg.exp ...
FAIL: /Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm.src/test/CodeGen/XCore/fneg.ll
Failed with signal(SIGABRT) at line 1
while running:  llvm-as < /Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm.src/test/CodeGen/XCore/fneg.ll |  llc -march=xcore > fneg.ll.tmp1.s
Assertion failed: (VT.isFloatingPoint() && "Cannot create integer FP constant!"), function getConstantFP, file /Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm.src/lib/CodeGen/SelectionDAG/SelectionDAG.cpp, line 913.
0   llc                                 0x0092115c _ZN4llvm3sys18RemoveFileOnSignalERKNS0_4PathEPSs + 844
1   libSystem.B.dylib                   0x9217809b _sigtramp + 43
2   ???                                 0xffffffff 0x0 + 4294967295
3   libSystem.B.dylib                   0x921f0ec2 raise + 26
4   libSystem.B.dylib                   0x9220047f abort + 73
5   libSystem.B.dylib                   0x921f2063 __assert_rtn + 101
6   llc                                 0x005a5b0a _ZN4llvm12SelectionDAG13getCon
make[1]: *** [check-local] Error 1
make: *** [check] Error 2

llvm-svn: 59487
2008-11-18 01:49:24 +00:00
Devang Patel
2a0aa9fa51 Give SIToFPInst preference over UIToFPInst because it is faster on widely used platforms.
llvm-svn: 59476
2008-11-18 00:40:02 +00:00
Devang Patel
705f88d5b5 While handling floating point IVs, lift restrictions on the initial value and the increment value.
llvm-svn: 59471
2008-11-17 23:27:13 +00:00
Duncan Sands
b13af5a714 Add soft float support for a bunch more operations. Original
patch by Richard Osborne, tweaked and extended by your humble
servant.

llvm-svn: 59464
2008-11-17 20:52:38 +00:00
Dale Johannesen
652c29e68d Remove these, which test for optimizations that
are not currently done (cf PowerPC/README.txt).

llvm-svn: 59456
2008-11-17 18:57:45 +00:00
Richard Osborne
2eb278eb4d Don't produce ADDC/ADDE when expanding SHL unless they are legal
for the target. This fixes PR3080.

llvm-svn: 59450
2008-11-17 17:34:31 +00:00
Lang Hames
cdccf43c58 Removed 2008-10-17-SpillerBug.ll as it does not provide an accurate test of PR2898.
llvm-svn: 59431
2008-11-16 23:30:12 +00:00
Lang Hames
66bb641598 2008-10-17-SpillerBug.ll is currently failing, but this doesn't reflect an actual regression of PR2898. This test should probably be removed. I've XFAILed it for now to keep buildbot quiet while this is considered.
llvm-svn: 59415
2008-11-16 13:11:09 +00:00
Mon P Wang
b6661b480b Improved shuffle normalization to avoid using extract/build when we
can extract using different indexes for two vectors. Added a few tests
for vector shuffles.

llvm-svn: 59399
2008-11-16 05:06:27 +00:00
Chris Lattner
21f18c9760 Handle the case where there is no "not". It is possible it got
folded into the select.

llvm-svn: 59389
2008-11-16 04:25:26 +00:00
Chris Lattner
4f8153d48f make this actually test what it is trying to.
llvm-svn: 59386
2008-11-16 04:21:51 +00:00
Nick Lewycky
1cddd8346f Don't brute-force analyze cubic or higher polynomials.
If this patch causes a performance regression for anyone, please let me know,
and it can be fixed in a different way with much more effort.

llvm-svn: 59384
2008-11-16 04:14:25 +00:00
Nick Lewycky
75d57a3bc3 Correct this error message.
llvm-svn: 59370
2008-11-15 17:50:47 +00:00
Richard Osborne
c2b2d5e6cf [XCore] Fix expansion of 64 bit add/sub. Don't custom expand
these operations if ladd/lsub are not available on the current
subtarget.

llvm-svn: 59305
2008-11-14 15:59:19 +00:00
Richard Osborne
8f86bb4d20 Add XCore intrinsics for getid (returns thread id) and bitrev (reverses
bits in a word).

llvm-svn: 59296
2008-11-14 10:12:16 +00:00
Dan Gohman
0a3ae5c0f2 Remove the FlaggedNodes member from SUnit. Instead of requiring each SUnit
to carry a SmallVector of flagged nodes, just calculate the flagged nodes
dynamically when they are needed.

The local-liveness change is due to a trivial scheduling change where
the scheduler makes an arbitrary decision differently.

llvm-svn: 59273
2008-11-13 23:24:17 +00:00
Dale Johannesen
cc7dc0ec70 testcase for PR 1779.
llvm-svn: 59268
2008-11-13 22:17:10 +00:00
Bill Wendling
aede28fc3d Added testcase for r59214.
llvm-svn: 59218
2008-11-13 04:09:04 +00:00
Tanya Lattner
19cb5b9b91 Add test case for ptr annotation.
llvm-svn: 59142
2008-11-12 16:12:27 +00:00
Duncan Sands
117397c8dd Correct some thinkos in the expansion of ADD/SUB
when the target does not support ADDC/SUBC.  This
fixes PR3044.

llvm-svn: 59120
2008-11-12 08:23:26 +00:00
Dale Johannesen
a2cd0724ea Fix the testb optimization so x86 also bootstraps.
Reenable test.

llvm-svn: 59101
2008-11-12 02:00:35 +00:00
Andrew Lenharth
d096adcb5f fix another libgcc blocker
llvm-svn: 59026
2008-11-11 06:06:07 +00:00
Bill Wendling
97ad53032e Un-XFAIL tests now that they're fixed.
llvm-svn: 59023
2008-11-11 04:44:42 +00:00
Bill Wendling
e27327ae95 r59009 broke these tests. XFAIL for now.
llvm-svn: 59010
2008-11-11 00:36:10 +00:00
Bill Wendling
891f177dd0 Temporarily revert r58979 and related patch. It's causing a failure in X86 bootstrap:
Comparing stages 2 and 3
warning: ./cc1-checksum.o differs
warning: ./cc1obj-checksum.o differs
warning: ./cc1objplus-checksum.o differs
warning: ./cc1plus-checksum.o differs
Bootstrap comparison failure!
./alias.o differs
./alloc-pool.o differs
./attribs.o differs
./bb-reorder.o differs
./bitmap.o differs
./build/errors.o differs
./build/genattrtab.o differs
./build/genautomata.o differs
./build/genemit.o differs
./build/genextract.o differs
...

-bw

llvm-svn: 59003
2008-11-10 21:22:06 +00:00
Devang Patel
f0d6bd18d5 If the signs of the exit condition and the split condition do not match,
then do not split the loop index.

llvm-svn: 58995
2008-11-10 19:48:34 +00:00
Duncan Sands
22e8a45a01 Fix PR2667: add soft float support for sint_to_fp/uint_to_fp
where the argument is an apint, or smaller than the minimum
size for which there is a libcall (i32). 

llvm-svn: 58994
2008-11-10 17:36:26 +00:00
Duncan Sands
1d0b7dccf7 When promoting the result of fp_to_uint/fp_to_sint,
inform the optimizers that the result must be zero/
sign extended from the smaller type.  For example,
if a fp to unsigned i16 is promoted to fp to i32,
then we are allowed to assume that the extra 16 bits
are zero (because the result of fp to i16 is undefined
if the result does not fit in an i16).  This is
quite aggressive, but should help the optimizers
produce better code.  This requires correcting a
test which thought that fp_to_uint is some kind
of truncation, which it is not: in the testcase
(which does fp to i1), either the fp value converts
to 0 or 1 or the result is undefined, which is
quite different to truncation.

llvm-svn: 58991
2008-11-10 17:28:30 +00:00
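The source-level rule behind this (a C illustration, not from the patch): converting a floating-point value to an unsigned integer type is undefined when the truncated value does not fit, so once fp-to-i16 has been promoted to fp-to-i32 the upper 16 bits of the result may legitimately be assumed zero.

    #include <stdint.h>

    /* If x, truncated toward zero, does not fit in uint16_t, the conversion is
       undefined behavior rather than a modular truncation. In every well-defined
       execution the promoted 32-bit result is therefore already zero-extended
       from 16 bits. */
    uint16_t to_u16(float x) {
        return (uint16_t)x;
    }
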
Dale Johannesen
28c0044273 Reenable test.
llvm-svn: 58980
2008-11-10 07:30:32 +00:00