mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-11-25 04:02:41 +01:00
Commit Graph

7767 Commits

Author SHA1 Message Date
Evan Cheng
42f27a28a4 Fix bsf / bsr jit encoding.
llvm-svn: 45037
2007-12-14 18:49:43 +00:00
Evan Cheng
375141a82d Oops. Forgot these.
llvm-svn: 45036
2007-12-14 18:25:34 +00:00
Dan Gohman
0efc49e9b8 Fix Intel asm syntax for the bsr and bsf instructions.
llvm-svn: 45030
2007-12-14 15:10:00 +00:00
Evan Cheng
6909ff8c4b Fix ctlz and cttz. The LLVM definition requires them to return the number of bits of the src type when the value is zero.
llvm-svn: 45029
2007-12-14 08:30:15 +00:00
Evan Cheng
51cf86ded0 Implement ctlz and cttz with bsr and bsf.
llvm-svn: 45024
2007-12-14 02:13:44 +00:00
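
A minimal illustration of the mapping the two ctlz/cttz commits above describe, written as standalone C++ with GCC-style inline asm. This is a sketch of the semantics, not the backend code itself:

    unsigned ctlz32(unsigned x) {
      if (x == 0) return 32;                      // LLVM semantics: bit width for zero input
      unsigned idx;
      asm ("bsrl %1, %0" : "=r"(idx) : "rm"(x));  // index of the highest set bit
      return 31 - idx;                            // convert to a leading-zero count
    }
    unsigned cttz32(unsigned x) {
      if (x == 0) return 32;
      unsigned idx;
      asm ("bsfl %1, %0" : "=r"(idx) : "rm"(x));  // index of the lowest set bit
      return idx;                                 // already a trailing-zero count
    }
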
Bill Wendling
c8c611e88f Add flags to indicate that there are "never" side effects or that there "may be"
side effects for machine instructions.

llvm-svn: 45022
2007-12-14 01:48:59 +00:00
Evan Cheng
343929c773 Fold some and+shift combinations into the x86 addressing mode.
llvm-svn: 44970
2007-12-13 00:43:27 +00:00
Evan Cheng
64a1febf9a Implicit def instructions, e.g. X86::IMPLICIT_DEF_GR32, are always re-materializable and they should not be spilled.
llvm-svn: 44960
2007-12-12 23:12:09 +00:00
Duncan Sands
47526c4a42 Remove host endianness info from TargetData and
put it in a new header System/Host.h instead.
Instead of getting the endianness from configure,
calculate it directly.

llvm-svn: 44959
2007-12-12 23:03:45 +00:00
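
A sketch of how host endianness can be computed directly rather than taken from configure, as this commit describes. This is a reconstruction, not the actual System/Host.h:

    static inline bool isLittleEndianHost() {
      union { int Word; char Byte; } U;
      U.Word = 1;
      return U.Byte == 1;   // low-order byte stored first => little-endian
    }
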
Dan Gohman
0075ea1f5f Allow vector integer constants to be created with
SelectionDAG::getConstant, in the same way as vector floating-point
constants. This allows the legalize expansion code for @llvm.ctpop and
friends to be usable with vector types.

llvm-svn: 44954
2007-12-12 22:21:26 +00:00
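
What this change enables, sketched against the SelectionDAG API of this era (names such as SDOperand and the exact MVT spellings are assumptions from contemporaneous code):

    // Splatted vector integer constants can now be requested directly,
    // just like vector floating-point constants:
    SDOperand Zero = DAG.getConstant(0, MVT::v4i32);   // BUILD_VECTOR of 0,0,0,0
    SDOperand Ones = DAG.getConstant(~0U, MVT::v4i32); // all-ones vector
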
Evan Cheng
ad3e7f3286 Use shuffles to implement insert_vector_elt for i32, i64, f32, and f64.
llvm-svn: 44929
2007-12-12 07:55:34 +00:00
Evan Cheng
d36d69fe92 Lower a build_vector with all constants into a constpool load unless it can be done with a move to low part.
llvm-svn: 44921
2007-12-12 06:45:40 +00:00
Scott Michel
81b4099173 Correct typo for Linux: s/esp/%rsp/
llvm-svn: 44904
2007-12-12 02:38:28 +00:00
Nate Begeman
e9067c13ec Allow the JIT to encode MMX instructions
llvm-svn: 44869
2007-12-11 18:06:14 +00:00
Evan Cheng
f6c2838f36 - Improved v8i16 shuffle lowering. It now uses pshuflw and pshufhw as much as
possible before resorting to pextrw and pinsrw.
- Better codegen for v4i32 shuffles masquerading as v8i16 or v16i8 shuffles.
- Improves (i16 extract_vector_element 0) codegen by recognizing
  (i32 extract_vector_element 0) does not require a pextrw.

llvm-svn: 44836
2007-12-11 01:46:18 +00:00
Nate Begeman
8b194d1718 x86 doesn't actually want to custom lower v3i32
llvm-svn: 44835
2007-12-11 01:41:33 +00:00
Chris Lattner
f7c53191c0 Move TargetData::hostIsLittleEndian out of line, which means we
don't have to #include config.h in it.  #including config.h breaks
other projects that have their own autoconf stuff and try to #include
the llvm headers.  One obscure example is llvm-gcc.

llvm-svn: 44825
2007-12-11 00:28:59 +00:00
Anton Korobeynikov
005fe34c3b Hey, English is not my native language :)
llvm-svn: 44820
2007-12-10 23:10:20 +00:00
Anton Korobeynikov
b003fb0ed7 Clarify the need for the CFI() stuff
llvm-svn: 44819
2007-12-10 23:08:35 +00:00
Anton Korobeynikov
fd74645812 Provide convenient way to disable CFI stuff for old/broken assemblers.
Use it for Darwin.

llvm-svn: 44818
2007-12-10 23:04:38 +00:00
Chris Lattner
b511799808 Disable cfi directives for now, darwin doesn't support them.
These should probably be something like:

  CFI(".cfi_def_cfa_offset 16\n")

where CFI is defined to a noop on darwin and other platforms
that don't support those directives.

llvm-svn: 44803
2007-12-10 19:10:18 +00:00
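
One way to realize the macro suggested in the commit above (a sketch following the commit message; the actual definition that later landed in the tree may differ):

    #if defined(__APPLE__)
    #define CFI(x)      /* Darwin's assembler rejects .cfi_* directives */
    #else
    #define CFI(x) x
    #endif

    /* Used inside the JIT callback's inline assembly string: */
    CFI(".cfi_def_cfa_offset 16\n")
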
Anton Korobeynikov
cd497afc30 And finally annotate the X86-64 version of the callback.
All the bad stuff from the SSE version is implicitly inherited :)

llvm-svn: 44794
2007-12-10 15:27:07 +00:00
Anton Korobeynikov
49e2962ad3 Provide annotation for the SSE version of the callback. It's even more
broken, because it doesn't mark xmm regs properly

llvm-svn: 44793
2007-12-10 15:13:55 +00:00
Anton Korobeynikov
0e4780cfe2 Annotate the JIT callback function with call frame information.
This will allow us (theoretically) to unwind through the JIT.
The code wasn't verified, so I'm pretty sure the offsets are wrong :)

llvm-svn: 44792
2007-12-10 14:54:42 +00:00
Bill Wendling
8d8d9a2f5e Reverting 44702. It wasn't correct to rename them.
llvm-svn: 44727
2007-12-08 23:58:46 +00:00
Chris Lattner
12fca81026 aesthetic changes, no functionality change. Evan, it's not clear
what 'Available' is, please add a comment near it and rename it
if appropriate.

llvm-svn: 44703
2007-12-08 07:22:58 +00:00
Bill Wendling
d10837def7 Renaming:
isTriviallyReMaterializable -> hasNoSideEffects
  isReallyTriviallyReMaterializable -> isTriviallyReMaterializable

llvm-svn: 44702
2007-12-08 07:17:56 +00:00
Chris Lattner
e93a775a4d Fix a significant code quality regression I introduced on PPC64 quite
a while ago.  We now produce:

_foo:
	mflr r0
	std r0, 16(r1)
	ld r2, 16(r1)
	std r2, 0(r3)
	ld r0, 16(r1)
	mtlr r0
	blr 

instead of:

_foo:
	mflr r0
	std r0, 16(r1)
	lis r0, 0
	ori r0, r0, 16
	ldx r2, r1, r0
	std r2, 0(r3)
	ld r0, 16(r1)
	mtlr r0
	blr 

for:

void foo(void **X) {
  *X = __builtin_return_address(0);
}

on ppc64.

llvm-svn: 44701
2007-12-08 07:04:58 +00:00
Chris Lattner
e16166b78d implement __builtin_return_addr(0) on ppc.
llvm-svn: 44700
2007-12-08 06:59:59 +00:00
Chris Lattner
1024cda0bd refactor some code to avoid overloading the name 'usesLR' in
different places to mean different things.  Document what the
one in PPCFunctionInfo means and when it is valid.

llvm-svn: 44699
2007-12-08 06:39:11 +00:00
Evan Cheng
fdd03d0589 Doh
llvm-svn: 44694
2007-12-08 01:01:07 +00:00
Evan Cheng
28c2b7e647 Fix a compilation warning.
llvm-svn: 44692
2007-12-08 01:00:31 +00:00
Evan Cheng
6bfc0cadf3 Fix a compilation warning.
llvm-svn: 44691
2007-12-08 01:00:21 +00:00
Bill Wendling
c08dedb060 Initial commit of the machine code LICM pass. It successfully hoists this:
_foo:
        li r2, 0
LBB1_1: ; bb
        li r5, 0
        stw r5, 0(r3)
        addi r2, r2, 1
        addi r3, r3, 4
        cmplw cr0, r2, r4
        bne cr0, LBB1_1 ; bb
LBB1_2: ; return
        blr 

to:

_foo:
        li r2, 0
        li r5, 0
LBB1_1: ; bb
        stw r5, 0(r3)
        addi r2, r2, 1
        addi r3, r3, 4
        cmplw cr0, r2, r4
        bne cr0, LBB1_1 ; bb
LBB1_2: ; return
        blr

ZOMG!! :-)

Moar to come...

llvm-svn: 44687
2007-12-07 21:42:31 +00:00
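
The kind of source loop behind the example above, reconstructed from the PPC assembly (the original test case isn't shown in the log):

    void foo(int *X, int N) {
      for (int i = 0; i != N; ++i)
        X[i] = 0;   // the "li r5, 0" materializing the 0 is loop-invariant
    }
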
Evan Cheng
c4db072c74 Add comment.
llvm-svn: 44686
2007-12-07 21:30:01 +00:00
Evan Cheng
34c7b35135 Much improved v8i16 shuffles. (Step 1).
llvm-svn: 44676
2007-12-07 08:07:39 +00:00
Evan Cheng
4dc538449d Remove a bogus optimization. It's not possible to do a move to the low element of a <8 x i16> or <16 x i8> vector.
llvm-svn: 44669
2007-12-06 22:14:22 +00:00
Chris Lattner
c467b49c96 implement a readme entry, compiling the code into:
_foo:
	movl	$12, %eax
	andl	4(%esp), %eax
	movl	_array(%eax), %eax
	ret

instead of:

_foo:
	movl	4(%esp), %eax
	shrl	$2, %eax
	andl	$3, %eax
	movl	_array(,%eax,4), %eax
	ret

As it turns out, this triggers all the time, in a wide variety of
situations, for example, I see diffs like this in various programs:

-       movl    8(%eax), %eax
-       shll    $2, %eax
-       andl    $1020, %eax
-       movl    (%esi,%eax), %eax
+       movzbl  8(%eax), %eax
+       movl    (%esi,%eax,4), %eax


-       shll    $2, %edx
-       andl    $1020, %edx
-       movl    (%edi,%edx), %edx
+       andl    $255, %edx
+       movl    (%edi,%edx,4), %edx

Unfortunately, I also see stuff like this, which can be fixed in the
X86 backend:

-       andl    $85, %ebx
-       addl    _bit_count(,%ebx,4), %ebp
+       shll    $2, %ebx
+       andl    $340, %ebx
+       addl    _bit_count(%ebx), %ebp

llvm-svn: 44656
2007-12-06 07:33:36 +00:00
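
The scalar identity being exploited, spelled out as a hedged reconstruction of the readme entry:

    extern int array[4];
    int foo(unsigned x) {
      // array[(x >> 2) & 3] addresses array + 4*((x >> 2) & 3),
      // which equals array + (x & 12), saving the shift.
      return array[(x >> 2) & 3];
    }
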
Chris Lattner
e3f1487574 add a note
llvm-svn: 44638
2007-12-05 23:05:06 +00:00
Chris Lattner
011d2aab51 add a note
llvm-svn: 44637
2007-12-05 22:58:19 +00:00
Scott Michel
a9a40d4347 Minor updates:
- Fix typo in SPUCallingConv.td
- Credit myself for CellSPU work
- Add CellSPU to 'all' host target list

llvm-svn: 44627
2007-12-05 21:23:16 +00:00
Evan Cheng
27986f1ac7 Added canFoldMemoryOperand for PPC.
llvm-svn: 44623
2007-12-05 18:41:29 +00:00
Evan Cheng
aecb76bcc2 Update foldMemoryOperand.
llvm-svn: 44621
2007-12-05 18:36:37 +00:00
Chris Lattner
0914ad3008 fix warnings
llvm-svn: 44620
2007-12-05 18:32:18 +00:00
Chris Lattner
df5cd03710 allow this to build
llvm-svn: 44619
2007-12-05 18:30:11 +00:00
Evan Cheng
8464a0bf00 Add a argument to storeRegToStackSlot and storeRegToAddr to specify whether
the stored register is killed.

llvm-svn: 44600
2007-12-05 03:14:33 +00:00
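
A sketch of the extended hook (the exact 2007 signature is an assumption based on the commit text; only the isKill flag is the new part):

    virtual void storeRegToStackSlot(MachineBasicBlock &MBB,
                                     MachineBasicBlock::iterator MI,
                                     unsigned SrcReg, bool isKill, // new flag
                                     int FrameIndex,
                                     const TargetRegisterClass *RC) const;
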
Scott Michel
871b3a4fd4 More stuff for CellSPU -- this should be enough to get an error-free
compilation (no files missing). Test cases remain to be checked in.

llvm-svn: 44598
2007-12-05 02:01:41 +00:00
Scott Michel
8a2cb11b05 Updated source file headers to llvm coding standard.
llvm-svn: 44597
2007-12-05 01:40:25 +00:00
Scott Michel
026ace10b2 Two missing files.
llvm-svn: 44596
2007-12-05 01:31:18 +00:00
Scott Michel
191775d31f Main CellSPU backend files checked in. Intrinsics and autoconf files
remain.

llvm-svn: 44595
2007-12-05 01:24:05 +00:00
Scott Michel
512cb025cc More files in the CellSPU drop...
llvm-svn: 44584
2007-12-04 22:35:58 +00:00
Scott Michel
774da2e74c More of the Cell SPU code drop from "Team Aerospace".
llvm-svn: 44582
2007-12-04 22:23:35 +00:00
Scott Michel
3996f647d2 More CellSPU files... more to follow.
llvm-svn: 44559
2007-12-03 23:14:43 +00:00
Scott Michel
c312b999e6 Makefile fragment for CellSPU.
llvm-svn: 44558
2007-12-03 23:12:49 +00:00
Scott Michel
34987128e0 First commit to CellSPU. More to follow
llvm-svn: 44557
2007-12-03 23:09:49 +00:00
Duncan Sands
1e2e4972ff Rather than having special rules like "intrinsics cannot
throw exceptions", just mark intrinsics with the nounwind
attribute.  Likewise, mark intrinsics as readnone/readonly
and get rid of special aliasing logic (which didn't use
anything more than this anyway).

llvm-svn: 44544
2007-12-03 20:06:50 +00:00
Evan Cheng
58b387dfb0 Remove redundant foldMemoryOperand variants and other code clean up.
llvm-svn: 44517
2007-12-02 08:30:39 +00:00
Evan Cheng
79e8b92dc3 Allow some reloads to be folded in multi-use cases. Specifically testl r, r -> cmpl [mem], 0.
llvm-svn: 44479
2007-12-01 02:07:52 +00:00
Chris Lattner
906683b821 Work around a GCC bug, producing this code:
unsigned char *llvm_cbe_X;
...
  llvm_cbe_X = 0; *((void**)&llvm_cbe_X) = __builtin_stack_save();

instead of:

  llvm_cbe_X = __builtin_stack_save();

See PR1809 for details.

llvm-svn: 44415
2007-11-28 21:26:17 +00:00
Chris Lattner
e59a7ee26a Implement ExpandOperationResult for ppc i64 fp->int, which fixes
CodeGen/Generic/fp_to_int.ll among others.  It's unclear why this 
just started failing...

llvm-svn: 44407
2007-11-28 18:44:47 +00:00
Duncan Sands
1b0feb42e2 Add some convenience methods for querying attributes, and
use them.

llvm-svn: 44403
2007-11-28 17:07:01 +00:00
Chris Lattner
706eb604ae several entries got significantly better, though they still aren't done.
llvm-svn: 44382
2007-11-27 22:41:52 +00:00
Chris Lattner
d2ee2dad04 implement a trivial readme entry.
llvm-svn: 44380
2007-11-27 22:36:16 +00:00
Chris Lattner
5e0cabc90e Fix a crash on invalid code due to memcpy lowering.
llvm-svn: 44378
2007-11-27 22:14:42 +00:00
Nate Begeman
4278967588 Support returning non-power-of-2 vectors to unblock some work
llvm-svn: 44371
2007-11-27 19:28:48 +00:00
Andrew Lenharth
6e449dc482 something wrong with this opt
llvm-svn: 44370
2007-11-27 18:31:30 +00:00
Duncan Sands
3602011bec Fix PR1146: parameter attributes are no longer part of
the function type, instead they belong to functions
and function calls.  This is an updated and slightly
corrected version of Reid Spencer's original patch.
The only known problem is that auto-upgrading of
bitcode files doesn't seem to work properly (see
test/Bitcode/AutoUpgradeIntrinsics.ll).  Hopefully
a bitcode guru (who might that be? :) ) will fix it.

llvm-svn: 44359
2007-11-27 13:23:08 +00:00
Chris Lattner
be0c5a0500 Fix a long standing deficiency in the X86 backend: we would
sometimes emit "zero" and "all one" vectors multiple times,
for example:

_test2:
	pcmpeqd	%mm0, %mm0
	movq	%mm0, _M1
	pcmpeqd	%mm0, %mm0
	movq	%mm0, _M2
	ret

instead of:

_test2:
	pcmpeqd	%mm0, %mm0
	movq	%mm0, _M1
	movq	%mm0, _M2
	ret

This patch fixes this by always arranging for zero/one vectors
to be defined as v4i32 or v2i32 (SSE/MMX) instead of letting them be
any random type.  This ensures they get trivially CSE'd on the dag.
This fix is also important for LegalizeDAGTypes, as it gets unhappy
when the x86 backend wants BUILD_VECTOR(i64 0) to be legal even when
'i64' isn't legal.

This patch makes the following changes:

1) X86TargetLowering::LowerBUILD_VECTOR now lowers 0/1 vectors into
   their canonical types.
2) The now-dead patterns are removed from the SSE/MMX .td files.
3) All the patterns in the .td file that referred to immAllOnesV or
   immAllZerosV in the wrong form now use *_bc to match them with a
   bitcast wrapped around them.
4) X86DAGToDAGISel::SelectScalarSSELoad is generalized to handle 
   bitcast'd zero vectors, which simplifies the code actually.
5) getShuffleVectorZeroOrUndef is updated to generate a shuffle that
   is legal, instead of generating one that is illegal and expecting
   a later legalize pass to clean it up.
6) isZeroShuffle is generalized to handle bitcast of zeros.
7) several other minor tweaks.

This patch is definite goodness, but has the potential to cause random
code quality regressions.  Please be on the lookout for these and let 
me know if they happen.

llvm-svn: 44310
2007-11-25 00:24:49 +00:00
Chris Lattner
8a1dfeecab add a immAllZerosV_bc pattern fragment for consistency with others.
llvm-svn: 44303
2007-11-24 19:02:07 +00:00
Chris Lattner
3862759b53 remove bogus assertion that broke CodeGen/Generic/cast-fp.ll on x86
among others.

llvm-svn: 44302
2007-11-24 18:37:20 +00:00
Chris Lattner
28262fbaf2 Several changes:
1) Change the interface to TargetLowering::ExpandOperationResult to 
   take and return entire NODES that need a result expanded, not just
   the value.  This allows us to handle things like READCYCLECOUNTER,
   which returns two values.
2) Implement (extremely limited) support in LegalizeDAG::ExpandOp for MERGE_VALUES.
3) Reimplement custom lowering in LegalizeDAGTypes in terms of the new
   ExpandOperationResult.  This makes the result simpler and fully 
   general.
4) Implement (fully general) expand support for MERGE_VALUES in LegalizeDAGTypes.
5) Implement ExpandOperationResult support for ARM f64->i64 bitconvert and ARM
   i64 shifts, allowing them to work with LegalizeDAGTypes.
6) Implement ExpandOperationResult support for X86 READCYCLECOUNTER and FP_TO_SINT,
   allowing them to work with LegalizeDAGTypes.

LegalizeDAGTypes now passes several more X86 codegen tests when enabled and when
type legalization in LegalizeDAG is ifdef'd out.

llvm-svn: 44300
2007-11-24 07:07:01 +00:00
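
A sketch of the revised hook implied by change (1) above. The exact signature is an assumption from the commit text:

    // Taking and returning whole nodes lets multi-result operations such as
    // READCYCLECOUNTER expand all of their values in one call:
    virtual SDNode *ExpandOperationResult(SDNode *N, SelectionDAG &DAG);
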
Chris Lattner
9020367ea0 add a note
llvm-svn: 44299
2007-11-24 06:13:33 +00:00
Dale Johannesen
6293438d50 Fix compiler warning.
llvm-svn: 44261
2007-11-21 00:45:00 +00:00
Dale Johannesen
8c3541787f Fix .eh table linkage issues on Darwin. Some EH support
for Darwin PPC, but it's not fully working yet.

llvm-svn: 44258
2007-11-20 23:24:42 +00:00
Dan Gohman
27ac53cc23 Remove meaningless qualifiers from return types, avoiding compiler warnings.
llvm-svn: 44240
2007-11-19 20:46:23 +00:00
Nate Begeman
2a8ef3f29a Add support for vectors to int <-> float casts.
llvm-svn: 44204
2007-11-17 03:58:34 +00:00
Anton Korobeynikov
cd9b16df61 Implement codegen for flt_rounds on x86
llvm-svn: 44183
2007-11-16 01:31:51 +00:00
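
At the source level, flt_rounds corresponds to the standard FLT_ROUNDS macro (standard C/C++, not taken from the commit itself):

    #include <cfloat>
    #include <cstdio>
    int main() {
      // 0 = toward zero, 1 = to nearest, 2 = toward +inf, 3 = toward -inf
      std::printf("rounding mode: %d\n", FLT_ROUNDS);
      return 0;
    }
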
Evan Cheng
c0dc7b6e61 Oops. Debugging code shouldn't have been checked in.
llvm-svn: 44128
2007-11-14 19:08:32 +00:00
Anton Korobeynikov
58298cb9cc Fix PIC jump table codegen on x86-32/linux. In fact, the same fix should be applied
to all targets that use GOT-relative offsets for PIC (Alpha?)

llvm-svn: 44108
2007-11-14 09:18:41 +00:00
Duncan Sands
e6821dd990 Eliminate the recently introduced CCAssignToStackABISizeAlign
in favour of teaching CCAssignToStack that size 0 and/or align
0 means to use the ABI values.  This seems a neater solution.
It is safe since no legal value type has size 0.

llvm-svn: 44107
2007-11-14 08:29:13 +00:00
Evan Cheng
fd33cb316f Clean up sub-register implementation by moving subReg information back to
MachineOperand auxInfo. Previous clunky implementation uses an external map
to track sub-register uses. That works because register allocator uses
a new virtual register for each spilled use. With interval splitting (coming
soon), we may have multiple uses of the same register, some of which use
different sub-registers than others. It's too fragile to constantly
update the information.

llvm-svn: 44104
2007-11-14 07:59:08 +00:00
Dale Johannesen
70ca3c1f03 Revert previous; these files aren't ready to go in yet.
llvm-svn: 44057
2007-11-13 19:16:02 +00:00
Dale Johannesen
5fd9e7a615 Add parameter to getDwarfRegNum to permit targets
to use different mappings for EH and debug info;
no functional change yet.
Fix warning in X86CodeEmitter.

llvm-svn: 44056
2007-11-13 19:13:01 +00:00
Evan Cheng
994043f515 Fix x86-64 jit: remove reliance on Dwarf numbers.
llvm-svn: 44048
2007-11-13 17:54:34 +00:00
Bill Wendling
934fcd87e7 Unify the CALLSEQ_{START,END} stuff.
llvm-svn: 44045
2007-11-13 09:19:02 +00:00
Bill Wendling
cc75435ebf Unify CALLSEQ_{START,END}. They take 4 parameters: the chain, two stack
adjustment fields, and an optional flag. If there is a "dynamic_stackalloc" in
the code, make sure that it's bracketed by CALLSEQ_START and CALLSEQ_END. If
not, then there is the potential for the stack to be changed while it is
being used by another instruction (like a call).

This can only result in tears...

llvm-svn: 44037
2007-11-13 00:44:25 +00:00
Anton Korobeynikov
c58fa8584b Completely forgot that we have some debug information emission on PPC. This should fix
some regressions on ppc nightly tests.

llvm-svn: 44029
2007-11-12 23:36:13 +00:00
Bruno Cardoso Lopes
0661f1be90 Added JumpTable support.
Fixed some AsmPrinter issues.
Added GLOBAL_OFFSET_TABLE node handling.

llvm-svn: 44024
2007-11-12 19:49:57 +00:00
Owen Anderson
aba398a5ce Add a flag for indirect branch instructions.
Target maintainers: please check that the instructions for your target are correctly marked.

llvm-svn: 44012
2007-11-12 07:39:39 +00:00
Anton Korobeynikov
a4eb4336d2 Clarify the meaning of '-2' register number
llvm-svn: 43998
2007-11-11 19:53:50 +00:00
Anton Korobeynikov
8e8473c783 Use TableGen to emit information for dwarf register numbers.
This makes DwarfRegNum accept a list of numbers instead.
Added three different "flavours", but only slightly tested on x86-32/linux.
Please check other subtargets if possible.

llvm-svn: 43997
2007-11-11 19:50:10 +00:00
Dale Johannesen
2e9b020e89 Add CCAssignToStackABISizeAlign for convenience in
dealing with types whose size & alignment are
different on different subtargets.  Use it for x86 f80.

llvm-svn: 43988
2007-11-10 22:07:15 +00:00
Arnold Schwaighofer
64ad6fa1fa Update tailcall code to include inline attribute operand for memcpy.
llvm-svn: 43978
2007-11-10 10:48:01 +00:00
Evan Cheng
946afd2f6c Unbreak x86-64 jumptable.
llvm-svn: 43955
2007-11-09 19:11:23 +00:00
Anton Korobeynikov
dcc6077439 Silence a warning
llvm-svn: 43954
2007-11-09 19:06:14 +00:00
Dale Johannesen
eca19e7eca Revert previous rewrite per Chris's comments.
llvm-svn: 43950
2007-11-09 18:07:11 +00:00
Evan Cheng
7d8deec92f Much improved pic jumptable codegen:
Before:
        call    "L1$pb"
"L1$pb":
        popl    %eax
		...
LBB1_1: # entry
        imull   $4, %ecx, %ecx
        leal    LJTI1_0-"L1$pb"(%eax), %edx
        addl    LJTI1_0-"L1$pb"(%ecx,%eax), %edx
        jmpl    *%edx

        .align  2
        .set L1_0_set_3,LBB1_3-LJTI1_0
        .set L1_0_set_2,LBB1_2-LJTI1_0
        .set L1_0_set_5,LBB1_5-LJTI1_0
        .set L1_0_set_4,LBB1_4-LJTI1_0
LJTI1_0:
        .long    L1_0_set_3
        .long    L1_0_set_2

Now:
        call    "L1$pb"
"L1$pb":
        popl    %eax
		...
LBB1_1: # entry
        addl    LJTI1_0-"L1$pb"(%eax,%ecx,4), %eax
        jmpl    *%eax

		.align  2
		.set L1_0_set_3,LBB1_3-"L1$pb"
		.set L1_0_set_2,LBB1_2-"L1$pb"
		.set L1_0_set_5,LBB1_5-"L1$pb"
		.set L1_0_set_4,LBB1_4-"L1$pb"
LJTI1_0:
        .long    L1_0_set_3
        .long    L1_0_set_2

llvm-svn: 43924
2007-11-09 01:32:10 +00:00
Dale Johannesen
8a9ec1582b Rewrite Dwarf number handling per review comments.
llvm-svn: 43918
2007-11-09 00:47:10 +00:00
Lauro Ramos Venancio
d8f2190c19 [ARM] Implement __builtin_thread_pointer.
llvm-svn: 43892
2007-11-08 17:20:05 +00:00
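
Source-level use of the builtin being implemented, assuming a GCC-compatible compiler targeting ARM:

    void *current_thread_data() {
      return __builtin_thread_pointer();  // reads the TLS thread pointer
    }
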
Dale Johannesen
b11aca8a92 Complete conditionalization of Dwarf reg numbers.
Would somebody not on Darwin please make sure this
doesn't break anything.  Exception handling failures
would be the most likely symptom.

llvm-svn: 43844
2007-11-07 21:48:35 +00:00
Dale Johannesen
a863789700 Interchange Dwarf numbers of ESP and EBP on x86 Darwin.
Much improvement in exception handling.

llvm-svn: 43794
2007-11-07 00:25:05 +00:00
Bruno Cardoso Lopes
77e5c419ec Better processor definition
llvm-svn: 43749
2007-11-06 03:15:20 +00:00
Rafael Espindola
ec025c3042 Move the LowerMEMCPY and LowerMEMCPYCall to a common place.
Thanks for the suggestions Bill :-)

llvm-svn: 43742
2007-11-05 23:12:20 +00:00
Lauro Ramos Venancio
f5081ba980 [ARM] Fix code generation for:
static __thread struct {
    int a;
    int b;
} teste = {0, 0};

llvm-svn: 43722
2007-11-05 18:33:37 +00:00
Evan Cheng
c49995c027 Use movups to spill / restore SSE registers on targets where stack alignment is
less than 16. This is a temporary solution until dynamic stack alignment is
implemented.

llvm-svn: 43703
2007-11-05 07:30:01 +00:00
Bruno Cardoso Lopes
569b5512b0 Added support for PIC code with "explicit relocations" *only*.
Removed all macro code for PIC (goodbye "la").
Support tested with shootout bench.

llvm-svn: 43697
2007-11-05 03:02:32 +00:00
Duncan Sands
d1bdbd010b Eliminate the remaining uses of getTypeSize. This
should only affect x86 when using long double.  Now
12/16 bytes are output for long double globals (the
exact amount depends on the alignment).  This brings
globals in line with the rest of LLVM: the space
reserved for an object is now always the ABI size.
One tricky point is that only 10 bytes should be
output for long double if it is a field in a packed
struct, which is the reason for the additional
argument to EmitGlobalConstant.

llvm-svn: 43688
2007-11-05 00:04:43 +00:00
Chris Lattner
8fac63c8b5 Fix PR1761 by not printing (rip) suffix when in -static mode.
Evan, please review this.

llvm-svn: 43680
2007-11-04 19:23:28 +00:00
Nick Lewycky
36047b0b5b Fix crash before main on ppc/linux with static constructors. PR1771
llvm-svn: 43676
2007-11-04 17:32:10 +00:00
Chris Lattner
67cd357fb8 Fix PR1763 by allowing the 'q' constraint to work with 64-bit
regs on x86-64.

llvm-svn: 43669
2007-11-04 06:51:12 +00:00
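
The sort of inline asm PR1763 covers (a hedged example; with -m64 the 'q' constraint may now select any 64-bit integer register, not just a/b/c/d):

    long negate(long x) {
      asm ("negq %0" : "+q"(x));
      return x;
    }
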
Evan Cheng
bf8e7c6644 Unbreak tailcall opt.
llvm-svn: 43646
2007-11-02 17:45:40 +00:00
Chris Lattner
679e22d547 add a note
llvm-svn: 43642
2007-11-02 17:04:20 +00:00
Evan Cheng
b50cc64eb0 Missing a getNumOperands check.
llvm-svn: 43630
2007-11-02 01:26:22 +00:00
Duncan Sands
eb464e976f Executive summary: getTypeSize -> getTypeStoreSize / getABITypeSize.
The meaning of getTypeSize was not clear - clarifying it is important
now that we have x86 long double and arbitrary precision integers.
The issue with long double is that it requires 80 bits, and this is
not a multiple of its alignment.  This gives a primitive type for
which getTypeSize differed from getABITypeSize.  For arbitrary precision
integers it is even worse: there is the minimum number of bits needed to
hold the type (eg: 36 for an i36), the maximum number of bits that will
be overwritten when storing the type (40 bits for i36) and the ABI size
(i.e. the storage size rounded up to a multiple of the alignment; 64 bits
for i36).

This patch removes getTypeSize (not really - it is still there but
deprecated to allow for a gradual transition).  Instead there is:

(1) getTypeSizeInBits - a number of bits that suffices to hold all
values of the type.  For a primitive type, this is the minimum number
of bits.  For an i36 this is 36 bits.  For x86 long double it is 80.
This corresponds to gcc's TYPE_PRECISION.

(2) getTypeStoreSizeInBits - the maximum number of bits that is
written when storing the type (or read when reading it).  For an
i36 this is 40 bits, for an x86 long double it is 80 bits.  This
is the size alias analysis is interested in (getTypeStoreSize
returns the number of bytes).  There doesn't seem to be anything
corresponding to this in gcc.

(3) getABITypeSizeInBits - this is getTypeStoreSizeInBits rounded
up to a multiple of the alignment.  For an i36 this is 64, for an
x86 long double this is 96 or 128 depending on the OS.  This is the
spacing between consecutive elements when you form an array out of
this type (getABITypeSize returns the number of bytes).  This is
TYPE_SIZE in gcc.

Since successive elements in a SequentialType (arrays, pointers
and vectors) need to be aligned, the spacing between them will be
given by getABITypeSize.  This means that the size of an array
is the length times the getABITypeSize.  It also means that GEP
computations need to use getABITypeSize when computing offsets.
Furthermore, if an alloca allocates several elements at once then
these too need to be aligned, so the size of the alloca has to be
the number of elements multiplied by getABITypeSize.  Logically
speaking this doesn't have to be the case when allocating just
one element, but it is simpler to also use getABITypeSize in this
case.  So alloca's and mallocs should use getABITypeSize.  Finally,
since gcc's only notion of size is that given by getABITypeSize, if
you want to output assembler etc the same as gcc then getABITypeSize
is the size you want.

Since a store will overwrite no more than getTypeStoreSize bytes,
and a read will read no more than that many bytes, this is the
notion of size appropriate for alias analysis calculations.

In this patch I have corrected all type size uses except some of
those in ScalarReplAggregates, lib/Codegen, lib/Target (the hard
cases).  I will get around to auditing these too at some point,
but I could do with some help.

Finally, I made one change which I think wise but others might
consider pointless and suboptimal: in an unpacked struct the
amount of space allocated for a field is now given by the ABI
size rather than getTypeStoreSize.  I did this because every
other place that reserves memory for a type (eg: alloca) now
uses getABITypeSize, and I didn't want to make an exception
for unpacked structs, i.e. I did it to make things more uniform.
This only affects structs containing long doubles and arbitrary
precision integers.  If someone wants to pack these types more
tightly they can always use a packed struct.

llvm-svn: 43620
2007-11-01 20:53:16 +00:00
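
The three sizes, illustrated for i36 with the numbers given above. This is a sketch against the TargetData interface; the method names come from the commit text, but the exact spellings in the tree may differ:

    const Type *I36 = IntegerType::get(36);
    TD.getTypeSizeInBits(I36);       // 36: bits needed to hold every value
    TD.getTypeStoreSizeInBits(I36);  // 40: bits a store may overwrite
    TD.getABITypeSizeInBits(I36);    // 64: store size rounded up to alignment
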
Bill Wendling
df2eaa8a55 Silence, accursed warning
llvm-svn: 43609
2007-11-01 08:51:44 +00:00
Rafael Espindola
27a8907a7c Make ARM and X86 LowerMEMCPY identical by moving the isThumb check into getMaxInlineSizeThreshold
and by restructuring the X86 version.

Now I just have to move this to a common place :-)

llvm-svn: 43554
2007-10-31 14:39:58 +00:00
Rafael Espindola
fae98471a9 Make the ARM and X86 memcpy expansions more similar to each other.
Now both subtargets define getMaxInlineSizeThreshold and the expansion uses it.

This should not change generated code.

llvm-svn: 43552
2007-10-31 11:52:06 +00:00
Dale Johannesen
9bc04ae496 Make i64=expand_vector_elt(v2i64) work in 32-bit mode.
llvm-svn: 43535
2007-10-31 00:32:36 +00:00
Dale Johannesen
7167117945 Add missing SSE builtins: CVTPD2PI, CVTPS2PI,
CVTTPD2PI, CVTTPS2PI, CVTPI2PD, CVTPI2PS.

llvm-svn: 43523
2007-10-30 22:15:38 +00:00
Duncan Sands
f6837e8634 Fix for visibility warnings generated by gcc-4.2.
llvm-svn: 43500
2007-10-30 13:14:37 +00:00
Dale Johannesen
461a0c47f8 Add missing MMX PSUBQ.
llvm-svn: 43488
2007-10-30 01:18:38 +00:00
Evan Cheng
5fe81cf64e Enable more fold (sext (load x)) -> (sext (truncate (sextload x)))
transformation. Previously, it was restricted by requiring that the load have
only one use. Now the restriction is loosened up by allowing setcc uses to be
"extended" (e.g. setcc x, c, eq -> setcc sext(x), sext(c), eq).

llvm-svn: 43465
2007-10-29 19:58:20 +00:00
Evan Cheng
1113931fd8 Avoid doing something dumb like rewriting using a 64-bit iv in 32-bit mode.
llvm-svn: 43446
2007-10-29 07:57:50 +00:00
Chris Lattner
be8379fac5 add a note.
llvm-svn: 43444
2007-10-29 06:19:48 +00:00
Chris Lattner
1503362624 Add support for the x86-64 'q' register modifier, and add support for the
b/h/w/k/q inline asm memory modifiers, which are just ignored.  This fixes
PR1748 and CodeGen/X86/2007-10-28-inlineasm-q-modifier.ll

llvm-svn: 43430
2007-10-29 03:09:07 +00:00
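
An example of the modifiers in question (hedged; GCC-compatible syntax on x86-64, where %b/%w/%k/%q select the 8/16/32/64-bit register names for an operand):

    unsigned long v = 1;
    asm ("incq %q0" : "+r"(v));  // %q0 prints the 64-bit name, e.g. %rax
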
Chris Lattner
7e3a8a7604 Fix PR1749 and InstCombine/2007-10-28-EmptyField.ll by handling
zero-length fields better.

llvm-svn: 43427
2007-10-29 02:40:02 +00:00
Evan Cheng
053178440a New entry.
llvm-svn: 43420
2007-10-28 04:01:09 +00:00
Anton Korobeynikov
0d3f43480e Fix off-by-one stack offset computations (dwarf information) for callee-saved
registers in the case when the frame pointer was eliminated. This should fix misc. random
EH-related crashes when code is compiled with -fomit-frame-pointer.
Thanks Duncan for nailing this bug!

llvm-svn: 43381
2007-10-26 09:13:24 +00:00
Eric Christopher
82c77dd85b clo/clz aren't supported on mips I. Keep them around for when we'll
want them later (mips32/64).

llvm-svn: 43380
2007-10-26 04:00:13 +00:00
Evan Cheng
53696b7e9f Loosen up iv reuse to allow reuse of the same stride but a larger type when truncating from the larger type to the smaller type is free.
e.g.
Turns this loop:
LBB1_1: # entry.bb_crit_edge
        xorl    %ecx, %ecx
        xorw    %dx, %dx
        movw    %dx, %si
LBB1_2: # bb
        movl    L_X$non_lazy_ptr, %edi
        movw    %si, (%edi)
        movl    L_Y$non_lazy_ptr, %edi
        movw    %dx, (%edi)
		addw    $4, %dx
		incw    %si
		incl    %ecx
		cmpl    %eax, %ecx
		jne     LBB1_2  # bb
	
into

LBB1_1: # entry.bb_crit_edge
        xorl    %ecx, %ecx
        xorw    %dx, %dx
LBB1_2: # bb
        movl    L_X$non_lazy_ptr, %esi
        movw    %cx, (%esi)
        movl    L_Y$non_lazy_ptr, %esi
        movw    %dx, (%esi)
        addw    $4, %dx
		incl    %ecx
        cmpl    %eax, %ecx
        jne     LBB1_2  # bb

llvm-svn: 43375
2007-10-26 01:56:11 +00:00
Dale Johannesen
0774a9c549 Support non-POSIX hosts by removing use of strncasecmp.
llvm-svn: 43364
2007-10-25 21:54:43 +00:00
Dale Johannesen
94241a8d3a Disable a couple more things for ppcf128.
llvm-svn: 43267
2007-10-23 23:20:14 +00:00
Evan Cheng
0590c75f18 Temporary solution: added a different set of BCTRL_Macho / BCTRL_ELF with the right callee-saved defs set for ppc64.
llvm-svn: 43248
2007-10-23 06:42:42 +00:00
Evan Cheng
252d9ddb4d Fix memcpy lowering when addresses are 4-byte aligned but size is not a multiple of 4.
llvm-svn: 43234
2007-10-22 22:11:27 +00:00
Dan Gohman
76e104c8ad Fix the folding of multiplication into addresses on x86, which was broken
by the recent {U,S}MUL_LOHI changes.

llvm-svn: 43230
2007-10-22 20:22:24 +00:00
Evan Cheng
85eb733eff Use ptr type in the immediate field of a BxA instruction so we don't end up selecting 32-bit call instruction for ppc64.
llvm-svn: 43228
2007-10-22 19:46:19 +00:00
Evan Cheng
ddeab10144 Fix an unfolding bug.
llvm-svn: 43212
2007-10-22 03:03:20 +00:00
Dale Johannesen
2edd0fb69d Allow for copysign having f80 second argument.
Fixes 5550319.

llvm-svn: 43205
2007-10-21 01:07:44 +00:00
Evan Cheng
b56784f9ea Resolve unfold tables ambiguity.
llvm-svn: 43194
2007-10-19 23:50:58 +00:00
Evan Cheng
ded6550885 Local spiller optimization:
Turn a store folding instruction into a load folding instruction. e.g.
     xorl  %edi, %eax
     movl  %eax, -32(%ebp)
     movl  -36(%ebp), %eax
     orl   %eax, -32(%ebp)
=>
     xorl  %edi, %eax
     orl   -36(%ebp), %eax
     mov   %eax, -32(%ebp)
This enables the unfolding optimization for a subsequent instruction which will
also eliminate the newly introduced store instruction.

llvm-svn: 43192
2007-10-19 21:23:22 +00:00
Rafael Espindola
c751cbdb02 split LowerMEMCPY into LowerMEMCPYCall and LowerMEMCPYInline in the ARM backend.
llvm-svn: 43176
2007-10-19 14:35:17 +00:00
Rafael Espindola
d8d4372845 Add support for byval functions whose argument is not 32-bit aligned.
To do this it is necessary to add an "always inline" argument to the
memcpy node. For completeness I have also added this node to memmove
and memset.  I have also added getMem* functions, because the extra
argument makes it cumbersome to use getNode and because I get confused
by it :-)

llvm-svn: 43172
2007-10-19 10:41:11 +00:00
Chris Lattner
4354f2db6a comment fixes
llvm-svn: 43168
2007-10-19 04:08:28 +00:00
Chris Lattner
57e2fa4ba0 Add an easy microoptimization I noticed.
llvm-svn: 43164
2007-10-19 03:29:26 +00:00
Dale Johannesen
b23b0bfa8f More ppcf128 issues (maybe the last?)
llvm-svn: 43160
2007-10-19 00:59:18 +00:00
Evan Cheng
0449186690 - Added getOpcodeAfterMemoryUnfold(). It doesn't unfold an instruction, but only returns the opcode of the instruction post unfolding.
- Fix some copy+paste bugs.

llvm-svn: 43153
2007-10-18 22:40:57 +00:00
Evan Cheng
c852780685 Use SmallVectorImpl instead of SmallVector with hardcoded size in the MRegisterInfo public interface.
llvm-svn: 43150
2007-10-18 21:29:24 +00:00
Christopher Lamb
7f21e45b06 Fix a misnamed parameter.
llvm-svn: 43145
2007-10-18 19:29:45 +00:00
Christopher Lamb
a26b82ea94 Fix a typo
llvm-svn: 43144
2007-10-18 19:28:55 +00:00
Gordon Henriksen
3b309c68d1 Work around downrev gccs which do not inherit visibility of the
Registry<>::iterator member class.

llvm-svn: 43122
2007-10-18 11:53:05 +00:00