mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-11-25 12:12:47 +01:00
Commit Graph

2984 Commits

Author SHA1 Message Date
Evan Cheng
51f51c7572 Allow JIT with non-static relocation model.
llvm-svn: 45304
2007-12-22 01:12:14 +00:00
Evan Cheng
a111629401 New entry.
llvm-svn: 45280
2007-12-21 01:31:58 +00:00
Evan Cheng
eba18a1952 Fix JIT encoding for CMPSD as well.
llvm-svn: 45268
2007-12-20 19:57:09 +00:00
Chris Lattner
93d750bbe3 add an obvious load folding missed optzn.
llvm-svn: 45161
2007-12-18 16:48:14 +00:00
Bill Wendling
e5af8b6e5c Add "mayHaveSideEffects" and "neverHasSideEffects" flags to some instructions. I
based what flag to set on whether it was already marked as
"isRematerializable". If there was a further check to determine if it's "really"
rematerializable, then I marked it as "mayHaveSideEffects" and created a check
in the X86 back-end similar to the remat one.

llvm-svn: 45132
2007-12-17 23:07:56 +00:00
Bill Wendling
b33d6155da LD_Fp64m should have "isRematerializable" set.
llvm-svn: 45128
2007-12-17 22:17:14 +00:00
Chris Lattner
f04ce286e2 fix a questionable cast, thanks to Mike Stump for pointing this out.
llvm-svn: 45075
2007-12-16 20:26:54 +00:00
Chris Lattner
c6fd78dec1 Fix the JIT encoding of cmp*ss, which aborts with this assertion currently:
X86CodeEmitter.cpp:378: failed assertion `0 && "Immediate size not set!"'

I *think* this is right, but Evan, please verify.  It also looks like
CMPSDrr and maybe others are missing this info.  Evan, plz investigate.

llvm-svn: 45074
2007-12-16 20:12:41 +00:00
Evan Cheng
1d95b669b6 Make better use of instructions that clear high bits; fix various 2-wide shuffle bugs.
llvm-svn: 45058
2007-12-15 03:00:47 +00:00
Evan Cheng
55e450d6eb Actually, MOVPQIto64mr is a dup of MOVPQI2QImr, MOV64toPQIrm is a dup of MOVQI2PQIrm.
llvm-svn: 45041
2007-12-14 20:08:14 +00:00
Evan Cheng
0a36fa6625 Fix (mem) <-> low 64-bits of xmm bugs pointed out by David Greene. Mac OS X Leopard assembler recognizes movq.
llvm-svn: 45040
2007-12-14 19:54:07 +00:00
Dale Johannesen
1e083ec1f6 x86-32 long doubles are 4-byte aligned on the stack
for parameter passing (only for that, on Darwin).

llvm-svn: 45038
2007-12-14 19:25:34 +00:00
Evan Cheng
42f27a28a4 Fix bsf / bsr jit encoding.
llvm-svn: 45037
2007-12-14 18:49:43 +00:00
Evan Cheng
375141a82d Oops. Forgot these.
llvm-svn: 45036
2007-12-14 18:25:34 +00:00
Dan Gohman
0efc49e9b8 Fix Intel asm syntax for the bsr and bsf instructions.
llvm-svn: 45030
2007-12-14 15:10:00 +00:00
Evan Cheng
6909ff8c4b Fix ctlz and cttz. The llvm definition requires them to return the number of bits of the src type when the value is zero.
llvm-svn: 45029
2007-12-14 08:30:15 +00:00
Evan Cheng
51cf86ded0 Implement ctlz and cttz with bsr and bsf.
llvm-svn: 45024
2007-12-14 02:13:44 +00:00
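
A minimal C++ sketch (an illustration, not code from these commits) of the semantics the two ctlz/cttz entries above deal with, assuming a 32-bit operand: BSF/BSR leave their destination undefined when the input is zero, so the lowering must special-case zero and return the bit width.

    #include <cassert>
    #include <cstdint>

    // Portable stand-in for the lowered cttz: BSF finds the lowest set bit,
    // and a separate check supplies the required result (32) for a zero input.
    static uint32_t cttz32(uint32_t x) {
        if (x == 0)
            return 32;              // number of bits of the source type
        uint32_t n = 0;
        while ((x & 1u) == 0) {     // what BSF computes in one instruction
            x >>= 1;
            ++n;
        }
        return n;
    }

    int main() {
        assert(cttz32(0) == 32);    // the zero case the fix above is about
        assert(cttz32(8) == 3);
        return 0;
    }
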
Evan Cheng
343929c773 Fold some and + shift in x86 addressing mode.
llvm-svn: 44970
2007-12-13 00:43:27 +00:00
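
The one-line entry above does not show the pattern, so here is a hypothetical C++ example of the kind of masked, shifted index that can be absorbed into x86 scaled-index addressing rather than computed with separate shift and and instructions (my reading of the message, not the commit's own test case):

    #include <cstdint>

    // The index combines a shift and a mask; after the fold the address
    // computation can carry that work, e.g. as a [base + index*4] operand.
    uint32_t lookup(const uint32_t *table, uint32_t x) {
        return table[(x >> 4) & 0xFF];
    }
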
Evan Cheng
64a1febf9a Implicit def instructions, e.g. X86::IMPLICIT_DEF_GR32, are always re-materializable and they should not be spilled.
llvm-svn: 44960
2007-12-12 23:12:09 +00:00
Dan Gohman
0075ea1f5f Allow vector integer constants to be created with
SelectionDAG::getConstant, in the same way as vector floating-point
constants. This allows the legalize expansion code for @llvm.ctpop and
friends to be usable with vector types.

llvm-svn: 44954
2007-12-12 22:21:26 +00:00
Evan Cheng
ad3e7f3286 Use shuffles to implement insert_vector_elt for i32, i64, f32, and f64.
llvm-svn: 44929
2007-12-12 07:55:34 +00:00
Evan Cheng
d36d69fe92 Lower a build_vector with all constants into a constpool load unless it can be done with a move to low part.
llvm-svn: 44921
2007-12-12 06:45:40 +00:00
Scott Michel
81b4099173 Correct typo for Linux: s/esp/%rsp/
llvm-svn: 44904
2007-12-12 02:38:28 +00:00
Nate Begeman
e9067c13ec Allow the JIT to encode MMX instructions
llvm-svn: 44869
2007-12-11 18:06:14 +00:00
Evan Cheng
f6c2838f36 - Improved v8i16 shuffle lowering. It now uses pshuflw and pshufhw as much as
possible before resorting to pextrw and pinsrw.
- Better codegen for v4i32 shuffles masquerading as v8i16 or v16i8 shuffles.
- Improves (i16 extract_vector_element 0) codegen by recognizing
  (i32 extract_vector_element 0) does not require a pextrw.

llvm-svn: 44836
2007-12-11 01:46:18 +00:00
Nate Begeman
8b194d1718 x86 doesn't actually want to custom lower v3i32
llvm-svn: 44835
2007-12-11 01:41:33 +00:00
Anton Korobeynikov
005fe34c3b Hey, English is not my native language :)
llvm-svn: 44820
2007-12-10 23:10:20 +00:00
Anton Korobeynikov
b003fb0ed7 Clarify the need for the CFI() stuff
llvm-svn: 44819
2007-12-10 23:08:35 +00:00
Anton Korobeynikov
fd74645812 Provide convenient way to disable CFI stuff for old/broken assemblers.
Use it for Darwin.

llvm-svn: 44818
2007-12-10 23:04:38 +00:00
Chris Lattner
b511799808 Disable cfi directives for now, darwin doesn't support them.
These should probably be something like:

  CFI(".cfi_def_cfa_offset 16\n")

where CFI is defined to a noop on darwin and other platforms
that don't support those directives.

llvm-svn: 44803
2007-12-10 19:10:18 +00:00
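
One possible shape for the CFI() wrapper the message proposes, written as a preprocessor sketch (the macro name comes from the message; keying it off __APPLE__ is an assumption):

    // Expands to nothing where the assembler rejects .cfi_* directives
    // (Darwin at the time), and passes the directive through elsewhere.
    #ifdef __APPLE__
    #  define CFI(x)
    #else
    #  define CFI(x) x
    #endif

    // Usage inside a hand-written assembly string:
    //   CFI(".cfi_def_cfa_offset 16\n")
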
Anton Korobeynikov
cd497afc30 And finally annotate the X86-64 version of the callback.
All bad stuff from the SSE version is implicitly inherited :)

llvm-svn: 44794
2007-12-10 15:27:07 +00:00
Anton Korobeynikov
49e2962ad3 Provide annotation for the SSE version of the callback. It's even more
broken, because it doesn't mark xmm regs properly

llvm-svn: 44793
2007-12-10 15:13:55 +00:00
Anton Korobeynikov
0e4780cfe2 Annotate the JIT callback function with call frame information.
This will allow us (theoretically) to unwind through the JITer.
The code wasn't verified, so I'm pretty sure the offsets are wrong :)

llvm-svn: 44792
2007-12-10 14:54:42 +00:00
Bill Wendling
8d8d9a2f5e Reverting 44702. It wasn't correct to rename them.
llvm-svn: 44727
2007-12-08 23:58:46 +00:00
Chris Lattner
12fca81026 aesthetic changes, no functionality change. Evan, it's not clear
what 'Available' is, please add a comment near it and rename it
if appropriate.

llvm-svn: 44703
2007-12-08 07:22:58 +00:00
Bill Wendling
d10837def7 Renaming:
  isTriviallyReMaterializable -> hasNoSideEffects
  isReallyTriviallyReMaterializable -> isTriviallyReMaterializable

llvm-svn: 44702
2007-12-08 07:17:56 +00:00
Evan Cheng
c4db072c74 Add comment.
llvm-svn: 44686
2007-12-07 21:30:01 +00:00
Evan Cheng
34c7b35135 Much improved v8i16 shuffles. (Step 1).
llvm-svn: 44676
2007-12-07 08:07:39 +00:00
Evan Cheng
4dc538449d Remove a bogus optimization. It's not possible to do a move to the low element of a <8 x i16> or <16 x i8> vector.
llvm-svn: 44669
2007-12-06 22:14:22 +00:00
Chris Lattner
011d2aab51 add a note
llvm-svn: 44637
2007-12-05 22:58:19 +00:00
Evan Cheng
8464a0bf00 Add an argument to storeRegToStackSlot and storeRegToAddr to specify whether
the stored register is killed.

llvm-svn: 44600
2007-12-05 03:14:33 +00:00
Evan Cheng
58b387dfb0 Remove redundant foldMemoryOperand variants and other code clean up.
llvm-svn: 44517
2007-12-02 08:30:39 +00:00
Evan Cheng
79e8b92dc3 Allow some reloads to be folded in multi-use cases. Specifically testl r, r -> cmpl [mem], 0.
llvm-svn: 44479
2007-12-01 02:07:52 +00:00
Nate Begeman
4278967588 Support returning non-power-of-2 vectors to unblock some work
llvm-svn: 44371
2007-11-27 19:28:48 +00:00
Duncan Sands
3602011bec Fix PR1146: parameter attributes are no longer part of
the function type; instead they belong to functions
and function calls.  This is an updated and slightly
corrected version of Reid Spencer's original patch.
The only known problem is that auto-upgrading of
bitcode files doesn't seem to work properly (see
test/Bitcode/AutoUpgradeIntrinsics.ll).  Hopefully
a bitcode guru (who might that be? :) ) will fix it.

llvm-svn: 44359
2007-11-27 13:23:08 +00:00
Chris Lattner
be0c5a0500 Fix a long standing deficiency in the X86 backend: we would
sometimes emit "zero" and "all one" vectors multiple times,
for example:

_test2:
	pcmpeqd	%mm0, %mm0
	movq	%mm0, _M1
	pcmpeqd	%mm0, %mm0
	movq	%mm0, _M2
	ret

instead of:

_test2:
	pcmpeqd	%mm0, %mm0
	movq	%mm0, _M1
	movq	%mm0, _M2
	ret

This patch fixes this by always arranging for zero/one vectors
to be defined as v4i32 or v2i32 (SSE/MMX) instead of letting them be
any random type.  This ensures they get trivially CSE'd on the dag.
This fix is also important for LegalizeDAGTypes, as it gets unhappy
when the x86 backend wants BUILD_VECTOR(i64 0) to be legal even when
'i64' isn't legal.

This patch makes the following changes:

1) X86TargetLowering::LowerBUILD_VECTOR now lowers 0/1 vectors into
   their canonical types.
2) The now-dead patterns are removed from the SSE/MMX .td files.
3) All the patterns in the .td file that referred to immAllOnesV or
   immAllZerosV in the wrong form now use *_bc to match them with a
   bitcast wrapped around them.
4) X86DAGToDAGISel::SelectScalarSSELoad is generalized to handle 
   bitcast'd zero vectors, which simplifies the code actually.
5) getShuffleVectorZeroOrUndef is updated to generate a shuffle that
   is legal, instead of generating one that is illegal and expecting
   a later legalize pass to clean it up.
6) isZeroShuffle is generalized to handle bitcast of zeros.
7) several other minor tweaks.

This patch is definite goodness, but has the potential to cause random
code quality regressions.  Please be on the lookout for these and let 
me know if they happen.

llvm-svn: 44310
2007-11-25 00:24:49 +00:00
Chris Lattner
3862759b53 remove bogus assertion that broke CodeGen/Generic/cast-fp.ll on x86
among others.

llvm-svn: 44302
2007-11-24 18:37:20 +00:00
Chris Lattner
28262fbaf2 Several changes:
1) Change the interface to TargetLowering::ExpandOperationResult to 
   take and return entire NODES that need a result expanded, not just
   the value.  This allows us to handle things like READCYCLECOUNTER,
   which returns two values.
2) Implement (extremely limited) support in LegalizeDAG::ExpandOp for MERGE_VALUES.
3) Reimplement custom lowering in LegalizeDAGTypes in terms of the new
   ExpandOperationResult.  This makes the result simpler and fully 
   general.
4) Implement (fully general) expand support for MERGE_VALUES in LegalizeDAGTypes.
5) Implement ExpandOperationResult support for ARM f64->i64 bitconvert and ARM
   i64 shifts, allowing them to work with LegalizeDAGTypes.
6) Implement ExpandOperationResult support for X86 READCYCLECOUNTER and FP_TO_SINT,
   allowing them to work with LegalizeDAGTypes.

LegalizeDAGTypes now passes several more X86 codegen tests when enabled and when
type legalization in LegalizeDAG is ifdef'd out.

llvm-svn: 44300
2007-11-24 07:07:01 +00:00
Chris Lattner
9020367ea0 add a note
llvm-svn: 44299
2007-11-24 06:13:33 +00:00
Dale Johannesen
8c3541787f Fix .eh table linkage issues on Darwin. Some EH support
for Darwin PPC, but it's not fully working yet.

llvm-svn: 44258
2007-11-20 23:24:42 +00:00
Nate Begeman
2a8ef3f29a Add support for vectors to int <-> float casts.
llvm-svn: 44204
2007-11-17 03:58:34 +00:00
Anton Korobeynikov
cd9b16df61 Implement codegen for flt_rounds on x86
llvm-svn: 44183
2007-11-16 01:31:51 +00:00
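
At the source level this operation is what backs C's FLT_ROUNDS; a small hosted C++ check (a sketch, assuming the usual mapping where 1 means round-to-nearest, the x86 default):

    #include <cfloat>
    #include <cstdio>

    int main() {
        // FLT_ROUNDS reads the current FP rounding mode at run time; the
        // entry above gives x86 real codegen for the underlying operation.
        std::printf("rounding mode = %d\n", FLT_ROUNDS);
        return 0;
    }
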
Evan Cheng
c0dc7b6e61 Oops. Debugging code shouldn't have been checked in.
llvm-svn: 44128
2007-11-14 19:08:32 +00:00
Anton Korobeynikov
58298cb9cc Fix PIC jump table codegen on x86-32/linux. In fact, the same thing should be applied
to all targets that use GOT-relative offsets for PIC (Alpha?)

llvm-svn: 44108
2007-11-14 09:18:41 +00:00
Duncan Sands
e6821dd990 Eliminate the recently introduced CCAssignToStackABISizeAlign
in favour of teaching CCAssignToStack that size 0 and/or align
0 means to use the ABI values.  This seems a neater solution.
It is safe since no legal value type has size 0.

llvm-svn: 44107
2007-11-14 08:29:13 +00:00
Evan Cheng
fd33cb316f Clean up sub-register implementation by moving subReg information back to
MachineOperand auxInfo. The previous clunky implementation used an external map
to track sub-register uses. That worked because the register allocator uses
a new virtual register for each spilled use. With interval splitting (coming
soon), we may have multiple uses of the same register, some of which use
different sub-registers than others. It's too fragile to constantly
update the information.

llvm-svn: 44104
2007-11-14 07:59:08 +00:00
Dale Johannesen
70ca3c1f03 Revert previous; these files aren't ready to go in yet.
llvm-svn: 44057
2007-11-13 19:16:02 +00:00
Dale Johannesen
5fd9e7a615 Add parameter to getDwarfRegNum to permit targets
to use different mappings for EH and debug info;
no functional change yet.
Fix warning in X86CodeEmitter.

llvm-svn: 44056
2007-11-13 19:13:01 +00:00
Evan Cheng
994043f515 Fix x86-64 jit: remove reliance on Dwarf numbers.
llvm-svn: 44048
2007-11-13 17:54:34 +00:00
Bill Wendling
934fcd87e7 Unifacalize the CALLSEQ{START,END} stuff.
llvm-svn: 44045
2007-11-13 09:19:02 +00:00
Bill Wendling
cc75435ebf Unify CALLSEQ_{START,END}. They take 4 parameters: the chain, two stack
adjustment fields, and an optional flag. If there is a "dynamic_stackalloc" in
the code, make sure that it's bracketed by CALLSEQ_START and CALLSEQ_END. If
not, then there is the potential for the stack to be changed while the stack's
being used by another instruction (like a call).

This can only result in tears...

llvm-svn: 44037
2007-11-13 00:44:25 +00:00
Owen Anderson
aba398a5ce Add a flag for indirect branch instructions.
Target maintainers: please check that the instructions for your target are correctly marked.

llvm-svn: 44012
2007-11-12 07:39:39 +00:00
Anton Korobeynikov
8e8473c783 Use TableGen to emit information for dwarf register numbers.
This makes DwarfRegNum accept a list of numbers instead.
Added three different "flavours", but only slightly tested on x86-32/linux.
Please check other subtargets if possible.

llvm-svn: 43997
2007-11-11 19:50:10 +00:00
Dale Johannesen
2e9b020e89 Add CCAssignToStackABISizeAlign for convenience in
dealing with types whose size & alignment are
different on different subtargets.  Use it for x86 f80.

llvm-svn: 43988
2007-11-10 22:07:15 +00:00
Arnold Schwaighofer
64ad6fa1fa Update tailcall code to include inline attribute operand for memcpy.
llvm-svn: 43978
2007-11-10 10:48:01 +00:00
Evan Cheng
946afd2f6c Unbreak x86-64 jumptable.
llvm-svn: 43955
2007-11-09 19:11:23 +00:00
Dale Johannesen
eca19e7eca Revert previous rewrite per chris's comments.
llvm-svn: 43950
2007-11-09 18:07:11 +00:00
Evan Cheng
7d8deec92f Much improved pic jumptable codegen:
Then:
        call    "L1$pb"
"L1$pb":
        popl    %eax
		...
LBB1_1: # entry
        imull   $4, %ecx, %ecx
        leal    LJTI1_0-"L1$pb"(%eax), %edx
        addl    LJTI1_0-"L1$pb"(%ecx,%eax), %edx
        jmpl    *%edx

        .align  2
        .set L1_0_set_3,LBB1_3-LJTI1_0
        .set L1_0_set_2,LBB1_2-LJTI1_0
        .set L1_0_set_5,LBB1_5-LJTI1_0
        .set L1_0_set_4,LBB1_4-LJTI1_0
LJTI1_0:
        .long    L1_0_set_3
        .long    L1_0_set_2

Now:
        call    "L1$pb"
"L1$pb":
        popl    %eax
		...
LBB1_1: # entry
        addl    LJTI1_0-"L1$pb"(%eax,%ecx,4), %eax
        jmpl    *%eax

		.align  2
		.set L1_0_set_3,LBB1_3-"L1$pb"
		.set L1_0_set_2,LBB1_2-"L1$pb"
		.set L1_0_set_5,LBB1_5-"L1$pb"
		.set L1_0_set_4,LBB1_4-"L1$pb"
LJTI1_0:
        .long    L1_0_set_3
        .long    L1_0_set_2

llvm-svn: 43924
2007-11-09 01:32:10 +00:00
Dale Johannesen
8a9ec1582b Rewrite Dwarf number handling per review comments.
llvm-svn: 43918
2007-11-09 00:47:10 +00:00
Dale Johannesen
b11aca8a92 Complete conditionalization of Dwarf reg numbers.
Would somebody not on Darwin please make sure this
doesn't break anything.  Exception handling failures
would be the most likely symptom.

llvm-svn: 43844
2007-11-07 21:48:35 +00:00
Dale Johannesen
a863789700 Interchange Dwarf numbers of ESP and EBP on x86 Darwin.
Much improvement in exception handling.

llvm-svn: 43794
2007-11-07 00:25:05 +00:00
Rafael Espindola
ec025c3042 Move the LowerMEMCPY and LowerMEMCPYCall to a common place.
Thanks for the suggestions Bill :-)

llvm-svn: 43742
2007-11-05 23:12:20 +00:00
Evan Cheng
c49995c027 Use movups to spill / restore SSE registers on targets where stack alignment is
less than 16. This is a temporary solution until dynamic stack alignment is
implemented.

llvm-svn: 43703
2007-11-05 07:30:01 +00:00
Duncan Sands
d1bdbd010b Eliminate the remaining uses of getTypeSize. This
should only affect x86 when using long double.  Now
12/16 bytes are output for long double globals (the
exact amount depends on the alignment).  This brings
globals in line with the rest of LLVM: the space
reserved for an object is now always the ABI size.
One tricky point is that only 10 bytes should be
output for long double if it is a field in a packed
struct, which is the reason for the additional
argument to EmitGlobalConstant.

llvm-svn: 43688
2007-11-05 00:04:43 +00:00
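
A quick way to see the sizes discussed above on an x86-32 host (a sketch; the exact numbers assume a typical i386 configuration): the ABI reserves 12 or 16 bytes per long double even though only 10 bytes carry x87 data.

    #include <cfloat>
    #include <cstdio>

    int main() {
        // Typically 12 on i386 (16 where 16-byte alignment is used); the
        // stored x87 format itself is only 10 bytes, which is what a field
        // in a packed struct gets.
        std::printf("sizeof(long double) = %zu\n", sizeof(long double));
        std::printf("LDBL_MANT_DIG       = %d\n", LDBL_MANT_DIG);
        return 0;
    }
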
Chris Lattner
8fac63c8b5 Fix PR1761 by not printing (rip) suffix when in -static mode.
Evan, please review this.

llvm-svn: 43680
2007-11-04 19:23:28 +00:00
Chris Lattner
67cd357fb8 Fix PR1763 by allowing the 'q' constraint to work with 64-bit
regs on x86-64.

llvm-svn: 43669
2007-11-04 06:51:12 +00:00
Evan Cheng
bf8e7c6644 Unbreak tailcall opt.
llvm-svn: 43646
2007-11-02 17:45:40 +00:00
Chris Lattner
679e22d547 add a note
llvm-svn: 43642
2007-11-02 17:04:20 +00:00
Evan Cheng
b50cc64eb0 Missing a getNumOperands check.
llvm-svn: 43630
2007-11-02 01:26:22 +00:00
Bill Wendling
df2eaa8a55 Silence, accursed warning
llvm-svn: 43609
2007-11-01 08:51:44 +00:00
Rafael Espindola
27a8907a7c Make ARM and X86 LowerMEMCPY identical by moving the isThumb check into getMaxInlineSizeThreshold
and by restructuring the X86 version.

Now I just have to move this to a common place :-)

llvm-svn: 43554
2007-10-31 14:39:58 +00:00
Rafael Espindola
fae98471a9 Make the ARM and X86 memcpy expansions more similar to each other.
Now both subtargets define getMaxInlineSizeThreshold and the expansion uses it.

This should not change generated code.

llvm-svn: 43552
2007-10-31 11:52:06 +00:00
Dale Johannesen
9bc04ae496 Make i64=expand_vector_elt(v2i64) work in 32-bit mode.
llvm-svn: 43535
2007-10-31 00:32:36 +00:00
Dale Johannesen
7167117945 Add missing SSE builtins: CVTPD2PI, CVTPS2PI,
CVTTPD2PI, CVTTPS2PI, CVTPI2PD, CVTPI2PS.

llvm-svn: 43523
2007-10-30 22:15:38 +00:00
Duncan Sands
f6837e8634 Fix for visibility warnings generated by gcc-4.2.
llvm-svn: 43500
2007-10-30 13:14:37 +00:00
Dale Johannesen
461a0c47f8 Add missing MMX PSUBQ.
llvm-svn: 43488
2007-10-30 01:18:38 +00:00
Evan Cheng
5fe81cf64e Enable more of the fold (sext (load x)) -> (sext (truncate (sextload x)))
transformation. Previously, it was restricted by ensuring the number of load uses
is one. Now the restriction is loosened by allowing setcc uses to be
"extended" (e.g. setcc x, c, eq -> setcc sext(x), sext(c), eq).

llvm-svn: 43465
2007-10-29 19:58:20 +00:00
Evan Cheng
1113931fd8 Avoid doing something dumb like rewriting using a 64-bit iv in 32-bit mode.
llvm-svn: 43446
2007-10-29 07:57:50 +00:00
Chris Lattner
be8379fac5 add a note.
llvm-svn: 43444
2007-10-29 06:19:48 +00:00
Chris Lattner
1503362624 Add support for the x86-64 'q' register modifier, and add support for the
b/h/w/k/q inline asm memory modifiers, which are just ignored.  This fixes
PR1748 and CodeGen/X86/2007-10-28-inlineasm-q-modifier.ll

llvm-svn: 43430
2007-10-29 03:09:07 +00:00
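
A hedged GNU inline-asm example of the 'q' modifier this entry adds, assuming an x86-64 target (the function is made up for illustration): %q0 asks for the 64-bit register name, while per the message the b/h/w/k/q modifiers are simply ignored on memory operands.

    #include <cstdint>

    // %q0 prints the 64-bit form of operand 0 (e.g. %rdi rather than %edi).
    static inline uint64_t bswap64(uint64_t x) {
        __asm__("bswapq %q0" : "+r"(x));
        return x;
    }

    int main() {
        return bswap64(0x0102030405060708ULL) == 0x0807060504030201ULL ? 0 : 1;
    }
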
Evan Cheng
053178440a New entry.
llvm-svn: 43420
2007-10-28 04:01:09 +00:00
Anton Korobeynikov
0d3f43480e Fix off-by-one stack offset computations (dwarf information) for callee-saved
registers in the case when the frame pointer was eliminated. This should fix misc. random
EH-related crashes when stuff is compiled with -fomit-frame-pointer.
Thanks Duncan for nailing this bug!

llvm-svn: 43381
2007-10-26 09:13:24 +00:00
Evan Cheng
53696b7e9f Loosen up iv reuse to allow reuse of the same stride but a larger type when truncating from the larger type to the smaller type is free.
e.g.
Turns this loop:
LBB1_1: # entry.bb_crit_edge
        xorl    %ecx, %ecx
        xorw    %dx, %dx
        movw    %dx, %si
LBB1_2: # bb
        movl    L_X$non_lazy_ptr, %edi
        movw    %si, (%edi)
        movl    L_Y$non_lazy_ptr, %edi
        movw    %dx, (%edi)
		addw    $4, %dx
		incw    %si
		incl    %ecx
		cmpl    %eax, %ecx
		jne     LBB1_2  # bb
	
into

LBB1_1: # entry.bb_crit_edge
        xorl    %ecx, %ecx
        xorw    %dx, %dx
LBB1_2: # bb
        movl    L_X$non_lazy_ptr, %esi
        movw    %cx, (%esi)
        movl    L_Y$non_lazy_ptr, %esi
        movw    %dx, (%esi)
        addw    $4, %dx
		incl    %ecx
        cmpl    %eax, %ecx
        jne     LBB1_2  # bb

llvm-svn: 43375
2007-10-26 01:56:11 +00:00
Dan Gohman
76e104c8ad Fix the folding of multiplication into addresses on x86, which was broken
by the recent {U,S}MUL_LOHI changes.

llvm-svn: 43230
2007-10-22 20:22:24 +00:00
Evan Cheng
ddeab10144 Fix an unfolding bug.
llvm-svn: 43212
2007-10-22 03:03:20 +00:00
Dale Johannesen
2edd0fb69d Allow for copysign having f80 second argument.
Fixes 5550319.

llvm-svn: 43205
2007-10-21 01:07:44 +00:00
Evan Cheng
b56784f9ea Resolve unfold tables ambiguity.
llvm-svn: 43194
2007-10-19 23:50:58 +00:00
Evan Cheng
ded6550885 Local spiller optimization:
Turn a store folding instruction into a load folding instruction. e.g.
     xorl  %edi, %eax
     movl  %eax, -32(%ebp)
     movl  -36(%ebp), %eax
     orl   %eax, -32(%ebp)
=>
     xorl  %edi, %eax
     orl   -36(%ebp), %eax
     mov   %eax, -32(%ebp)
This enables the unfolding optimization for a subsequent instruction which will
also eliminate the newly introduced store instruction.

llvm-svn: 43192
2007-10-19 21:23:22 +00:00
Rafael Espindola
d8d4372845 Add support for byval functions whose arguments are not 32-bit aligned.
To do this it is necessary to add an "always inline" argument to the
memcpy node. For completeness I have also added this node to memmove
and memset.  I have also added getMem* functions, because the extra
argument makes it cumbersome to use getNode and because I get confused
by it :-)

llvm-svn: 43172
2007-10-19 10:41:11 +00:00
Evan Cheng
0449186690 - Added getOpcodeAfterMemoryUnfold(). It doesn't unfold an instruction, but only returns the opcode of the instruction post unfolding.
- Fix some copy+paste bugs.

llvm-svn: 43153
2007-10-18 22:40:57 +00:00
Evan Cheng
c852780685 Use SmallVectorImpl instead of SmallVector with hardcoded size in MRegister public interface.
llvm-svn: 43150
2007-10-18 21:29:24 +00:00
Christopher Lamb
a26b82ea94 Fix a typo
llvm-svn: 43144
2007-10-18 19:28:55 +00:00
Chris Lattner
3a19e981f5 Change fp to sint legalization on x86-32 to do 2 x i32
loads instead of 1 x i64 loads.  This doesn't change any functionality yet.

llvm-svn: 43068
2007-10-17 06:17:29 +00:00
Chris Lattner
ba2d55a564 fix some funny indentation, add comments.
llvm-svn: 43066
2007-10-17 06:02:13 +00:00
Dale Johannesen
63411d36bf Check for invalid cc's in f80 select.
llvm-svn: 43033
2007-10-16 18:09:08 +00:00
Arnold Schwaighofer
f0d4d73bf6 Correction to tail call optimization code. The new return address
was stored to the actual stack slot before the parameters were
lowered to their stack slots. This could cause arguments to be
overwritten by the return address if the called function had fewer
parameters than the caller function. The update should remove the
last failing test case of llc-beta: SPASS.

llvm-svn: 43027
2007-10-16 09:05:00 +00:00
Evan Cheng
f5bcd3d737 LowerFP_TO_SINT must not create a stack object if it's not needed.
llvm-svn: 43004
2007-10-15 20:11:21 +00:00
Evan Cheng
90645f30db Unbreak x86-64.
llvm-svn: 42962
2007-10-14 10:09:39 +00:00
Evan Cheng
33df6a6bed Revert 42908 for now.
llvm-svn: 42960
2007-10-14 05:57:21 +00:00
Duncan Sands
bf31a19c62 Clarify that fastcc has a problem with nested function
trampolines, rather than with nested functions themselves.

llvm-svn: 42955
2007-10-13 07:38:37 +00:00
Evan Cheng
2e2d6358bc Change unfoldMemoryOperand(). User is now responsible for passing in the
register used by the unfolded instructions. User can also specify whether to
unfold the load, the store, or both.

llvm-svn: 42946
2007-10-13 02:35:06 +00:00
Arnold Schwaighofer
50d2c33530 Correcting the corrections. Bad bad baaad emacs!
llvm-svn: 42935
2007-10-12 21:53:12 +00:00
Arnold Schwaighofer
6bcd9e7ec2 Corrected many typing errors and removed 'nest' parameter handling
for fastcc from X86CallingConv.td.  This means that nested functions
are not supported for calling convention 'fastcc'.

llvm-svn: 42934
2007-10-12 21:30:57 +00:00
Duncan Sands
d781ed9d21 Due to the new tail call optimization, trampolines can no
longer be created for fastcc functions.

llvm-svn: 42925
2007-10-12 19:37:31 +00:00
Evan Cheng
c36fdf163a Update.
llvm-svn: 42922
2007-10-12 18:22:55 +00:00
Dan Gohman
a75e4a62e6 Change the names used for internal labels to use the current
function symbol name instead of a codegen-assigned function
number.

Thanks Evan! :-)

llvm-svn: 42908
2007-10-12 14:53:36 +00:00
Dan Gohman
ad3e823efa Mark vector ctpop, cttz, and ctlz as Expand on x86.
llvm-svn: 42905
2007-10-12 14:09:42 +00:00
Evan Cheng
c7b7a3cb74 Fold load / store into MOV32to32_ and MOV16to16_.
llvm-svn: 42895
2007-10-12 08:38:01 +00:00
Evan Cheng
f1ead16fd5 Flag MOV32to32_ with EXTRACT_SUBREG. They should not be scheduled apart.
llvm-svn: 42894
2007-10-12 07:55:53 +00:00
Dan Gohman
edc841fb53 Set ISD::FPOW to Expand.
llvm-svn: 42881
2007-10-11 23:21:31 +00:00
Dale Johannesen
9486be1cf2 Add missing argument to PALIGNR
llvm-svn: 42874
2007-10-11 20:58:37 +00:00
Arnold Schwaighofer
d47210011e Added tail call optimization to the x86 back end. It can be
enabled by passing -tailcallopt to llc.  The optimization is
performed if the following conditions are satisfied:
* caller/callee are fastcc
* elf/pic is disabled OR
  elf/pic enabled + callee is in module + callee has
  visibility protected or hidden

llvm-svn: 42870
2007-10-11 19:40:01 +00:00
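
A C++ sketch of the kind of call the optimization targets (an illustration only; per the message the fastcc requirement applies at the LLVM IR level and -tailcallopt is an llc flag). The recursive call sits in tail position, so the callee can reuse the caller's frame and be reached with a jump instead of call/ret.

    // helper() is always called in tail position; under the conditions listed
    // above, the x86 backend may turn these calls into jumps.
    static long helper(long acc, long n) {
        if (n == 0)
            return acc;
        return helper(acc + n, n - 1);   // self tail call
    }

    long sum_to(long n) {
        return helper(0, n);             // tail call into helper
    }
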
Dan Gohman
6c3e0cdd36 LowerIntegerDivOrRem no longer exists.
llvm-svn: 42787
2007-10-09 15:45:13 +00:00
Dan Gohman
cc317de0f5 Fix grammar in a comment.
llvm-svn: 42786
2007-10-09 15:44:37 +00:00
Dan Gohman
9546d48e97 This is done.
llvm-svn: 42785
2007-10-09 15:42:21 +00:00
Evan Cheng
c00dbfc5bc Under 64-bit mode use LEA64_32r instead of LEA64r to save a byte.
llvm-svn: 42783
2007-10-09 07:14:53 +00:00
Evan Cheng
90aa032f98 Bug fix. X86 was emitting redundant setcc and test instructions before a conditional move.
llvm-svn: 42774
2007-10-08 22:16:29 +00:00
Dan Gohman
6df332f0cb Migrate X86 and ARM from using X86ISD::{,I}DIV and ARMISD::MULHILO{U,S} to
use ISD::{S,U}DIVREM and ISD::{S,U}MUL_LOHI. Move the lowering code
associated with these operators into target-independent code in LegalizeDAG.cpp
and TargetLowering.cpp.

llvm-svn: 42762
2007-10-08 18:33:35 +00:00
Evan Cheng
090bfbebd1 Allow x86 compare to be commutable by default.
llvm-svn: 42761
2007-10-08 18:27:46 +00:00
Chris Lattner
fcccf4b6c4 disable this entirely: it is causing use of invalidated iterators and infinite looping.
llvm-svn: 42739
2007-10-07 22:00:31 +00:00
Chris Lattner
39dbb82db2 Fix many regressions on x86 by avoiding dereferencing the end iterator.
llvm-svn: 42738
2007-10-07 21:53:12 +00:00
Anton Korobeynikov
54ecd77023 Oops, I really wanted to commit this part also :)
llvm-svn: 42700
2007-10-06 16:39:43 +00:00
Anton Korobeynikov
34fefcf678 Move merge code into new helper function.
llvm-svn: 42699
2007-10-06 16:17:49 +00:00
Evan Cheng
dc95020e30 Added DAG xforms. e.g.
(vextract (v4f32 s2v (f32 load $addr)), 0) -> (f32 load $addr) 
(vextract (v4i32 bc (v4f32 s2v (f32 load $addr))), 0) -> (i32 load $addr)
Remove x86 specific patterns.

llvm-svn: 42677
2007-10-06 02:46:29 +00:00
Evan Cheng
9af50ee6ef Commute x86 cmove instructions by swapping the operands and changing the condition
to its inverse.
Testing this as llcbeta

llvm-svn: 42661
2007-10-05 23:13:21 +00:00
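
What commuting a conditional move amounts to, shown as a C++ identity (a paraphrase of the entry, not its test case): selecting the other operand under the inverted condition produces the same value, which is why the operands may be swapped.

    // Both selects compute the same result; a cmov on the inverted condition
    // with swapped operands is interchangeable with the original.
    int select_min(int a, int b) {
        int x = (a < b) ? a : b;     // cmovl-style select
        int y = (a >= b) ? b : a;    // operands swapped, condition inverted
        return (x == y) ? x : -1;    // always takes the x branch
    }
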
Evan Cheng
e0e36e4a0e This is done.
llvm-svn: 42656
2007-10-05 22:34:59 +00:00
Evan Cheng
dc467c6323 Enable convertToThreeAddress for X86 by default.
llvm-svn: 42655
2007-10-05 22:31:10 +00:00
Evan Cheng
2b3122e56e INC64_32r -> LEA64_32r is better than INC64_32r -> LEA32r, but it still can
cause performance degradation.

llvm-svn: 42653
2007-10-05 21:55:32 +00:00
Evan Cheng
688f34a273 In 64-bit mode, avoid using leal with 32-bit address size, e.g.
leal 1(%ecx), %edi, which requires a 67H prefix.

llvm-svn: 42647
2007-10-05 20:34:26 +00:00
Evan Cheng
b069dd6a25 Add support to convert more 64-bit instructions to 3-address instructions.
llvm-svn: 42642
2007-10-05 18:20:36 +00:00
Evan Cheng
f658191412 ADC and SBB use EFLAGS.
llvm-svn: 42640
2007-10-05 17:59:57 +00:00
Dan Gohman
821635b63f Change a few more spaces to tabs in assembly output.
llvm-svn: 42638
2007-10-05 15:58:41 +00:00
Dan Gohman
950f96e456 Change a space to a tab in the assembly output of a .globl directive
for consistency.

llvm-svn: 42637
2007-10-05 15:54:58 +00:00
Evan Cheng
4e46ad06fe Testing convertToThreeAddress as X86 llcbeta.
llvm-svn: 42630
2007-10-05 08:04:01 +00:00
Evan Cheng
6e5205d379 Added storeRegToAddr, loadRegFromAddr, and unfoldMemoryOperand's.
llvm-svn: 42624
2007-10-05 01:34:55 +00:00
Evan Cheng
32766d3518 Not needed any more.
llvm-svn: 42623
2007-10-05 01:34:14 +00:00
Chris Lattner
4224151a44 add a note.
llvm-svn: 42607
2007-10-04 15:47:27 +00:00
Dan Gohman
30ba45b569 Use empty() member functions when that's what's being tested for instead
of comparing begin() and end().

llvm-svn: 42585
2007-10-03 19:26:29 +00:00
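
The style point above as a generic C++ illustration (not the actual code the commit touched):

    #include <vector>

    bool hasWork(const std::vector<int> &Worklist) {
        // before: return Worklist.begin() != Worklist.end();
        return !Worklist.empty();    // says what is meant, and is never slower
    }
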
Chris Lattner
a31fa80185 add a note
llvm-svn: 42579
2007-10-03 17:10:03 +00:00
Chris Lattner
dfcb750656 Bill's example is still not enough to repro this, but it has other issues that
seem significant as well.

llvm-svn: 42564
2007-10-03 03:40:24 +00:00