mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-23 04:52:54 +02:00
Commit Graph

885 Commits

Author SHA1 Message Date
Chris Lattner
c5e53c07fd Implement varargs and returnaddress/frameaddress intrinsics. With this
patch, all of SingleSource/UnitTests passes.

llvm-svn: 19408
2005-01-09 00:01:27 +00:00
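An illustrative C sketch (my addition, not part of the commit): the GCC-style builtins below are what lower to the llvm.returnaddress and llvm.frameaddress intrinsics named above, so a unit test exercising them can be as small as:

    #include <stdio.h>

    /* __builtin_return_address / __builtin_frame_address lower to the
       llvm.returnaddress / llvm.frameaddress intrinsics. */
    void show_frame(void) {
        printf("return address: %p\n", __builtin_return_address(0));
        printf("frame address:  %p\n", __builtin_frame_address(0));
    }
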
Chris Lattner
ca81756527 Okay, the 15th time is the charm. Looking at the vector size is useless, as it
gets clobbered by a previous statement.  This finally fixes all calls.

llvm-svn: 19399
2005-01-08 20:51:36 +00:00
Chris Lattner
85816cff9a Okay, my off-by-one was actually off by two. This fixes Generic/2003-07-07-BadLongConst.ll
llvm-svn: 19398
2005-01-08 20:39:31 +00:00
Chris Lattner
2d68cb6cf4 Fix off-by-one error
llvm-svn: 19396
2005-01-08 20:31:34 +00:00
Chris Lattner
c4d075cfa3 Adjust to changes in LowerCallTo interface
Minor bugfixes

llvm-svn: 19376
2005-01-08 19:28:19 +00:00
Chris Lattner
6c7d3bd8ea Wrap long line.
llvm-svn: 19367
2005-01-08 06:59:50 +00:00
Chris Lattner
473ec492f7 The X86 instruction selector already handles codegen of:
store float 123.45, float* %P

as an integer store.  This adds handling of float immediate stores as integers
for arguments passed to function calls.

This is now tested by CodeGen/X86/store-fp-constant.ll

llvm-svn: 19364
2005-01-08 05:45:24 +00:00
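A C-level sketch of the idea (my illustration, assuming IEEE-754 single precision): storing a float constant is bit-for-bit the same as storing its 32-bit pattern with an integer move, so no FP load is needed:

    #include <stdint.h>
    #include <string.h>

    /* store float 123.45, float* %P, done as one 32-bit integer store
       of the constant's bit pattern instead of an FP load + FP store. */
    void store_fp(float *p) {
        const float f = 123.45f;
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);  /* reinterpret, no conversion */
        memcpy(p, &bits, sizeof bits);   /* integer-sized store */
    }
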
Chris Lattner
2c398fc8f6 Allow the selection-dag based selector to be disabled with -disable-pattern-isel.
For now, this is the default, as the current selector is missing some big pieces.
To enable the new selector, pass -disable-pattern-isel=false to llc or lli.

llvm-svn: 19335
2005-01-07 07:50:50 +00:00
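Illustrative invocations using the flag spelling given above (file names hypothetical):

    llc -disable-pattern-isel=false input.bc -o input.s
    lli -disable-pattern-isel=false input.bc
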
Chris Lattner
216198574d Reimplementation of the X86 pattern isel. This is still missing many large
pieces, but can already do amazing things in some cases.

llvm-svn: 19334
2005-01-07 07:49:41 +00:00
Chris Lattner
74019f517a This file is now dead.
llvm-svn: 19333
2005-01-07 07:49:05 +00:00
Chris Lattner
079b497982 Add a new prototype
llvm-svn: 19332
2005-01-07 07:48:33 +00:00
Chris Lattner
608dd77d6b Codegen -1 and -0.0 more efficiently. This implements CodeGen/X86/negatize_zero.ll
llvm-svn: 19313
2005-01-06 21:19:16 +00:00
Chris Lattner
6d651234d6 1. If a double FP constant must be put into a constant pool, but it can be
precisely represented as a float, put it into the constant pool as a
   float.
2. Use the cbw/cwd/cdq instructions instead of an explicit SAR for signed
   division.

llvm-svn: 19291
2005-01-05 16:30:14 +00:00
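Item 1's representability test can be sketched in C (my sketch, not the commit's actual code): a double fits in a float constant-pool slot exactly when a round-trip through float preserves its value:

    /* True iff d can be stored in the constant pool as a float
       with no loss of precision (a round-trip through float is exact). */
    static int fits_in_float(double d) {
        return (double)(float)d == d;
    }
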
Chris Lattner
b438f5251f Minor optimization to allocate R8 registers in a better order.
llvm-svn: 19289
2005-01-05 16:09:16 +00:00
Jeff Cohen
36968ed8c1 Revert elimination of global variable hack... still needed.
llvm-svn: 19273
2005-01-03 16:34:19 +00:00
Chris Lattner
1aaf8cccb2 ADC and IMUL are also commutable.
llvm-svn: 19264
2005-01-03 01:27:59 +00:00
Jeff Cohen
1087b72875 Eliminate the use of the global variable hack in the X86 target that was used
to get Visual Studio to link in X86.lib to the executables that need it.  There
is another way of doing it.

llvm-svn: 19252
2005-01-02 04:23:12 +00:00
Chris Lattner
a78fd4726e Disable 2->3 address promotion of add and inc instructions to LEAs. In
addition to being three-address, LEAs don't set the flags.

This fixes 186.crafty.

llvm-svn: 19251
2005-01-02 04:18:17 +00:00
Chris Lattner
3ef32da6c3 Add a new method.
llvm-svn: 19249
2005-01-02 02:38:18 +00:00
Chris Lattner
95f1e628ed Add support for SETNPr to lower to memory form.
llvm-svn: 19248
2005-01-02 02:37:46 +00:00
Chris Lattner
d6bc921fa8 Implement the convertToThreeAddress method, add support for inverting JP/JNP
branches.

llvm-svn: 19247
2005-01-02 02:37:07 +00:00
Chris Lattner
0d6f03e52b Two changes here:
1. Add new instructions for checking parity flags: JP, JNP, SETP, SETNP.
2. Set the isCommutable and isPromotableTo3Address bits on several
   instructions.

llvm-svn: 19246
2005-01-02 02:35:46 +00:00
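For context (my assumption about the motivation, not stated in the commit): on x86, a floating-point compare reports "unordered" (a NaN operand) through the parity flag, so JP/JNP and SETP/SETNP typically show up when lowering NaN-aware comparisons such as:

    /* On x86, a == b sets PF when either operand is NaN (unordered);
       codegen consumes that flag with JP/JNP or SETP/SETNP. */
    int fp_equal(double a, double b) { return a == b; }
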
Chris Lattner
3b78513843 Remove unused enum value
llvm-svn: 19024
2004-12-17 22:41:46 +00:00
Chris Lattner
d11ba51208 Change the sentinel
llvm-svn: 19007
2004-12-17 00:46:51 +00:00
Chris Lattner
59d0c02d2b Create a stack slot for the return address lazily instead of eagerly. This
saves a small amount of time for functions that don't call llvm.returnaddress
or llvm.frameaddress (which is almost all functions).

llvm-svn: 19006
2004-12-17 00:07:46 +00:00
Chris Lattner
4b1d58bf4b Adjust to changes in asmwriter filenames
llvm-svn: 18987
2004-12-16 17:33:24 +00:00
Chris Lattner
4136428410 Set the rounding mode for the X86 FPU to 64 bits instead of 80 bits. We
don't support long double anyway, and this gives us FP results closer to
other targets.

This also speeds up 179.art from 41.4s to 18.32s, by eliminating a problem
with extra precision that causes an FP == comparison to fail (leading to
extra loop iterations).

llvm-svn: 18895
2004-12-13 17:23:11 +00:00
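What the commit does can be sketched in C with inline assembly (a minimal sketch assuming x86 and GCC-style asm; the backend emits the equivalent fldcw itself): the x87 precision-control field (bits 8-9 of the control word) is switched from 11b (80-bit extended) to 10b (64-bit double):

    #include <stdint.h>

    /* Set the x87 precision-control field to 64-bit (double) precision. */
    static void set_x87_double_precision(void) {
        uint16_t cw;
        __asm__ volatile ("fnstcw %0" : "=m"(cw));   /* read control word */
        cw = (uint16_t)((cw & ~0x0300u) | 0x0200u);  /* PC = 10b: double */
        __asm__ volatile ("fldcw %0" : : "m"(cw));   /* write it back */
    }
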
Chris Lattner
6131b06f73 Use the target triple to pick this target.
llvm-svn: 18830
2004-12-12 17:40:28 +00:00
Chris Lattner
99c8cf8ef8 Fix a regression caused by the previous patch
llvm-svn: 18449
2004-12-03 05:13:15 +00:00
Chris Lattner
316f923a9c Spill/restore X86 floating-point stack registers with 64 bits of precision
instead of 80 bits of precision.  This fixes PR467.

This change speeds up fldry on X86 with LLC from 7.32s on apoc to 4.68s.

llvm-svn: 18433
2004-12-02 18:17:31 +00:00
Chris Lattner
dfdd49b7af Consider 64-bit registers to be FP as well.
llvm-svn: 18432
2004-12-02 17:57:21 +00:00
Tanya Lattner
893f987574 Reverting this patch:
http://mail.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20041122/021428.html

It broke MultiSource/Applications/obsequi

llvm-svn: 18407
2004-12-01 18:27:03 +00:00
Chris Lattner
9c400f3b28 Revamp long/ulong comparisons to use a much more efficient sequence (thanks
to Brian and the Sun compiler for pointing out that the obvious works :)

This also enables folding all long comparisons into setcc and branch
instructions: before, we could only do == and !=

For example, for:
void foo();
void test(unsigned long long A, unsigned long long B) {
  if (A < B) foo();
}

We now generate:

test:
        subl $4, %esp
        movl %esi, (%esp)
        movl 8(%esp), %eax
        movl 12(%esp), %ecx
        movl 16(%esp), %edx
        movl 20(%esp), %esi
        subl %edx, %eax
        sbbl %esi, %ecx
        jae .LBBtest_2  # UnifiedReturnBlock
.LBBtest_1:     # then
        call foo
        movl (%esp), %esi
        addl $4, %esp
        ret
.LBBtest_2:     # UnifiedReturnBlock
        movl (%esp), %esi
        addl $4, %esp
        ret

Instead of:

test:
        subl $12, %esp
        movl %esi, 8(%esp)
        movl %ebx, 4(%esp)
        movl 16(%esp), %eax
        movl 20(%esp), %ecx
        movl 24(%esp), %edx
        movl 28(%esp), %esi
        cmpl %edx, %eax
        setb %al
        cmpl %esi, %ecx
        setb %bl
        cmove %ax, %bx
        testb %bl, %bl
        je .LBBtest_2   # UnifiedReturnBlock
.LBBtest_1:     # then
        call foo
        movl 4(%esp), %ebx
        movl 8(%esp), %esi
        addl $12, %esp
        ret
.LBBtest_2:     # UnifiedReturnBlock
        movl 4(%esp), %ebx
        movl 8(%esp), %esi
        addl $12, %esp
        ret

llvm-svn: 18330
2004-11-29 05:55:24 +00:00
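The subl/sbbl trick above can be restated in C (my restatement, with hypothetical helper names): subtract the low halves, feed the borrow into the high-half subtraction, and the final borrow is the unsigned less-than result that jae tests:

    #include <stdint.h>

    /* 64-bit unsigned A < B on a 32-bit machine, the sub/sbb way. */
    static int ull_less(uint32_t a_lo, uint32_t a_hi,
                        uint32_t b_lo, uint32_t b_hi) {
        uint32_t borrow = a_lo < b_lo;           /* borrow out of subl */
        /* sbbl computes a_hi - b_hi - borrow; only its borrow matters: */
        return a_hi < b_hi || (a_hi == b_hi && borrow);
    }
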
Chris Lattner
7a34cbf266 Do not push two return addresses on the stack when we call external functions whose addresses are taken. This fixes test-call.ll
llvm-svn: 18134
2004-11-22 22:25:30 +00:00
Chris Lattner
f4c8575535 There is no reason to emit function stubs for direct calls.
llvm-svn: 18082
2004-11-21 03:46:06 +00:00
Chris Lattner
6d1fb33657 ignore generated files
llvm-svn: 18073
2004-11-21 00:01:54 +00:00
Chris Lattner
4a340e281e Remove all JIT specific code and switch the code generator over to emitting
relocations for global references.

llvm-svn: 18068
2004-11-20 23:55:15 +00:00
Chris Lattner
b9a44893e9 Implement the X86 JIT interfaces
llvm-svn: 18067
2004-11-20 23:54:33 +00:00
Chris Lattner
8e33311566 Describe the X86 target-specific relocations.
llvm-svn: 18066
2004-11-20 23:54:19 +00:00
Chris Lattner
3c20464ad7 We implement these interfaces
llvm-svn: 18065
2004-11-20 23:53:56 +00:00
Chris Lattner
0c79788bc4 Don't forget to switch back to decimal output
llvm-svn: 18010
2004-11-19 20:57:07 +00:00
Chris Lattner
a7eec14b04 Fix a major bug in the signed shr code, which apparently only breaks 134.perl!
llvm-svn: 17902
2004-11-16 18:40:52 +00:00
Chris Lattner
41d31d7461 Remove a dead function, which died when we got GAS emission working (phew,
hold your nose!)

llvm-svn: 17869
2004-11-16 04:34:29 +00:00
Chris Lattner
b378786c97 Implement a simple FIXME: if we are emitting a basic block address that has
already been emitted, we don't have to remember it and deal with it later,
just emit it directly.

llvm-svn: 17868
2004-11-16 04:30:51 +00:00
Chris Lattner
3f73c77ace * Merge some win32 ifdefs together
* Get rid of "emitMaybePCRelativeValue": either we want to emit a PC-relative
  value or not, so drop the "maybe".  As it turns out, in the only places where
  the bool came in as a variable, it was a dynamic constant.

llvm-svn: 17867
2004-11-16 04:21:18 +00:00
Chris Lattner
3ed3e8669f Add debug-only=jit printout, so we see when lazily resolved symbols are
set up.

llvm-svn: 17862
2004-11-15 23:16:55 +00:00
Chris Lattner
9ef34d44e1 Simplify and rearrange long shift code
llvm-svn: 17861
2004-11-15 23:16:34 +00:00
Misha Brukman
8c1b4a5b9d GhostLinkage should not reach asm printing stage
llvm-svn: 17750
2004-11-14 21:03:49 +00:00
Chris Lattner
09b7f968e0 Don't print unneeded labels
llvm-svn: 17714
2004-11-13 23:27:11 +00:00
Chris Lattner
1cde11aa95 shld is a very high-latency operation. Instead of emitting it for shifts by
two or three, open-code the equivalent operation, which is faster on the Athlon
and P4 (by a substantial margin).

For example, instead of compiling this:

long long X2(long long Y) { return Y << 2; }

to:

X2:
        movl 4(%esp), %eax
        movl 8(%esp), %edx
        shldl $2, %eax, %edx
        shll $2, %eax
        ret

Compile it to:

X2:
        movl 4(%esp), %eax
        movl 8(%esp), %ecx
        movl %eax, %edx
        shrl $30, %edx
        leal (%edx,%ecx,4), %edx
        shll $2, %eax
        ret

Likewise, for << 3, compile to:

X3:
        movl 4(%esp), %eax
        movl 8(%esp), %ecx
        movl %eax, %edx
        shrl $29, %edx
        leal (%edx,%ecx,8), %edx
        shll $3, %eax
        ret

This matches icc, except that icc open-codes the shifts as adds on the P4.

llvm-svn: 17707
2004-11-13 20:48:57 +00:00
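The open-coded sequence maps back to C like this (my mapping, hypothetical function name): the bits that the shift pushes across the 32-bit boundary are extracted with shrl, and leal folds the scaled add that rebuilds the high half:

    #include <stdint.h>

    /* (hi:lo) << 2 without shld, as in the X2 sequence above. */
    static void shl2_64(uint32_t lo, uint32_t hi,
                        uint32_t *out_lo, uint32_t *out_hi) {
        uint32_t spill = lo >> 30;    /* shrl $30: bits crossing into hi */
        *out_hi = hi * 4 + spill;     /* leal (%edx,%ecx,4): (hi<<2)|spill */
        *out_lo = lo << 2;            /* shll $2 */
    }
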