Commit Graph
Mirror of https://github.com/RPCS3/llvm-mirror.git (synced 2024-10-30 07:22:55 +01:00)

1919 Commits

Author SHA1 Message Date
Andrew Lenharth
b44263313a The first patch of X86 support for read cycle counter
llvm-svn: 24429
2005-11-20 21:32:07 +00:00
Chris Lattner
c830542c70 more progress towards finishing bug 291. Patch by Owen Anderson,
HAVE_GV case fixed up by me.

llvm-svn: 24428
2005-11-20 03:45:52 +00:00
Chris Lattner
517942843d Unbreak codegen of bools. This should fix the llc/jit/llc-beta failures
from last night.

llvm-svn: 24427
2005-11-19 18:40:42 +00:00
Chris Lattner
fc1975aa3b Improve Selection DAG printer portability. Patch by Owen Anderson!
llvm-svn: 24425
2005-11-19 07:44:09 +00:00
Chris Lattner
72dc36da76 Teach the graph viewer to handle register operands that are zero.
llvm-svn: 24421
2005-11-19 06:58:46 +00:00
Chris Lattner
3a1a1557e1 Silence a bogus warning
llvm-svn: 24420
2005-11-19 05:51:46 +00:00
Chris Lattner
89056c7145 Add some method variants, patch by Evan Cheng
llvm-svn: 24418
2005-11-19 01:44:53 +00:00
Nate Begeman
7d513f65ae Teach LLVM how to scalarize packed types. Currently, this only works on
packed types with an element count of 1, although more generic support is
coming.  This allows LLVM to turn the following code:

void %foo(<1 x float> * %a) {
entry:
  %tmp1 = load <1 x float> * %a;
  %tmp2 = add <1 x float> %tmp1, %tmp1
  store <1 x float> %tmp2, <1 x float> *%a
  ret void
}

Into:

_foo:
        lfs f0, 0(r3)
        fadds f0, f0, f0
        stfs f0, 0(r3)
        blr

llvm-svn: 24416
2005-11-19 00:36:38 +00:00
Nate Begeman
78ac456d32 Split out the shift code from visitBinary.
llvm-svn: 24412
2005-11-18 07:42:56 +00:00
Chris Lattner
0b177075c2 Allow targets to custom legalize leaf nodes like GlobalAddress.
llvm-svn: 24387
2005-11-17 06:41:44 +00:00
Chris Lattner
48668daec3 Teach legalize about targetglobaladdress
llvm-svn: 24385
2005-11-17 05:52:24 +00:00
Chris Lattner
2095b19912 when debugging, lower dbg intrinsics to calls
llvm-svn: 24377
2005-11-16 07:22:30 +00:00
Chris Lattner
5d9032c0e9 Remove extraneous parens around constants when using a constant expr cast.
llvm-svn: 24357
2005-11-15 00:03:16 +00:00
Chris Lattner
389e3bfb0c Teach emitAlignment to handle explicit alignment requests by globals.
llvm-svn: 24354
2005-11-14 19:00:06 +00:00
Jeff Cohen
566c6d987a Fix operator precedence bug caught by VC++.
llvm-svn: 24318
2005-11-12 00:59:01 +00:00
Andrew Lenharth
9b036b1bdb added a chain output
llvm-svn: 24306
2005-11-11 22:48:54 +00:00
Andrew Lenharth
dca2f13e76 continued readcyclecounter support
llvm-svn: 24300
2005-11-11 16:47:30 +00:00
Chris Lattner
b6d5dcd181 nuke blank line
llvm-svn: 24278
2005-11-10 18:49:46 +00:00
Chris Lattner
4868465cb6 Get rid of casts by #including the right header
llvm-svn: 24275
2005-11-10 18:36:17 +00:00
Chris Lattner
aa86c10fe6 Compile C strings to:
l1__2E_str_1:                           ; '.str_1'
        .asciz  "foo"

not:

        .align  0
l1__2E_str_1:                           ; '.str_1'
        .asciz  "foo"

llvm-svn: 24273
2005-11-10 18:09:27 +00:00
Chris Lattner
88c7013f18 add support for .asciz, and enable it by default. If your target assembler
doesn't support .asciz, just set AscizDirective to null in your AsmPrinter.
This compiles C strings to:

l1__2E_str_1:                           ; '.str_1'
        .asciz  "foo"

instead of:

l1__2E_str_1:                           ; '.str_1'
        .ascii  "foo\000"

llvm-svn: 24272
2005-11-10 18:06:33 +00:00
Chris Lattner
29585fd8c8 Switch the allnodes list from a vector of pointers to an ilist of nodes. This
eliminates the vector, allows constant time removal of a node from a graph, and makes
iteration over the all nodes list stable when adding nodes to the graph.

llvm-svn: 24263
2005-11-09 23:47:37 +00:00
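
For readers unfamiliar with the trade-off, here is a minimal C++ sketch of intrusive
list removal (hypothetical node type, not LLVM's actual ilist<> template): because each
node carries its own links, unlinking is constant time given just the node pointer, and
iterators over other nodes stay valid as nodes are added.

    // Hypothetical intrusive-list node; the real ilist<> also has a sentinel
    // and traits, omitted here.
    struct Node {
      Node *Prev = nullptr;
      Node *Next = nullptr;
    };

    // Unlink N in O(1): no search through a vector, no element shuffling.
    void removeFromList(Node *N) {
      if (N->Prev) N->Prev->Next = N->Next;
      if (N->Next) N->Next->Prev = N->Prev;
      N->Prev = N->Next = nullptr;
    }
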
Chris Lattner
11d12a572e Refactor intrinsic lowering stuff out of visitCall
llvm-svn: 24261
2005-11-09 19:44:01 +00:00
Chris Lattner
8052f32866 Handle the trivial (but common) two-op case more efficiently
llvm-svn: 24259
2005-11-09 18:48:57 +00:00
Chris Lattner
82596272da Nuke noop copies.
llvm-svn: 24258
2005-11-09 18:22:42 +00:00
Chris Lattner
306c386a79 Fix CodeGen/X86/shift-folding.ll:test3 on X86
llvm-svn: 24256
2005-11-09 16:50:40 +00:00
Chris Lattner
90e4c8a2a7 Disable some overly-aggressive checking code. This speeds up the local
allocator from 23s to 11s on kc++ in debug mode.

llvm-svn: 24255
2005-11-09 05:28:45 +00:00
Chris Lattner
798441d725 Avoid creating a token factor node in trivially redundant cases. This
eliminates almost one node per block in common cases.

llvm-svn: 24254
2005-11-09 05:03:03 +00:00
Chris Lattner
948932a624 Handle GEP's a bit more intelligently. Fold constant indices early and
turn power-of-two multiplies into shifts early to improve compile time.

llvm-svn: 24253
2005-11-09 04:45:33 +00:00
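
A hedged illustration of the power-of-two trick mentioned above (hypothetical helper,
not the actual SelectionDAG lowering code): a scaled index Idx * ElemSize becomes
Idx << log2(ElemSize) whenever the element size is a power of two.

    #include <cstdint>

    // Scale a GEP index by the element size, preferring a shift when possible.
    uint64_t scaleIndex(uint64_t Idx, uint64_t ElemSize) {
      if (ElemSize != 0 && (ElemSize & (ElemSize - 1)) == 0) { // power of two?
        unsigned Log2 = 0;
        while ((ElemSize >> Log2) != 1) ++Log2; // compute log2(ElemSize)
        return Idx << Log2;                     // mul -> shl
      }
      return Idx * ElemSize;                    // general case: keep the multiply
    }
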
Chris Lattner
90eff65d1c Allocate the right amount of memory for this vector up front.
llvm-svn: 24252
2005-11-08 23:32:44 +00:00
Chris Lattner
89f1b405f4 Change the ValueList array for each node to be shared instead of individually
allocated. Further, in the common case where a node has a single value, just
reference an element from a small array. This is a small compile-time win.
llvm-svn: 24251
2005-11-08 23:30:28 +00:00
Chris Lattner
cffd7d5bdc Switch the operandlist/valuelist from being vectors to being just an array. This
saves 12 bytes from SDNode, but doesn't speed things up substantially
(our graphs apparently already fit within the cache on my g5).  In any case
this reduces memory usage.

llvm-svn: 24249
2005-11-08 22:07:03 +00:00
Chris Lattner
80717f007c Explicitly initialize some instance vars
llvm-svn: 24247
2005-11-08 21:54:57 +00:00
Chris Lattner
e394cb13bd Clean up RemoveDeadNodes significantly, by eliminating the need for a temporary
set and eliminating the need to iterate whenever something is removed (which
can be really slow in some cases).  Thx to Jim for pointing out something silly
I was getting stuck on. :)

llvm-svn: 24241
2005-11-08 18:52:27 +00:00
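
The shape of that improvement, sketched in generic C++ (hypothetical node type, not the
actual SelectionDAG code): dead nodes go on a worklist, and any operand whose use count
drops to zero is pushed transitively, so the graph is visited once instead of being
re-scanned after every removal.

    #include <vector>

    struct Node {
      unsigned NumUses = 0;
      std::vector<Node *> Operands;
    };

    void removeDeadNodes(std::vector<Node *> &Dead) {
      while (!Dead.empty()) {
        Node *N = Dead.back();
        Dead.pop_back();
        for (Node *Op : N->Operands)
          if (--Op->NumUses == 0) // operand just lost its last user
            Dead.push_back(Op);   // it is now transitively dead
        // The real code would also unlink and delete N here.
      }
    }
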
Jim Laskey
0c65e09865 Let's try ignoring resource utilization on the backward pass.
llvm-svn: 24231
2005-11-07 19:08:53 +00:00
Chris Lattner
fc76f9f0c1 Always compute max align.
llvm-svn: 24227
2005-11-06 17:43:20 +00:00
Nate Begeman
aecebc076b Add the necessary support to the ISel to allow targets to codegen the new
alignment information appropriately.  Includes code for PowerPC to support
fixed-size allocas with alignment larger than the stack alignment.  Support for
arbitrarily aligned dynamic allocas coming soon.

llvm-svn: 24224
2005-11-06 09:00:38 +00:00
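
Supporting allocas whose alignment exceeds the stack's default typically means rounding
frame offsets up to the requested alignment. A minimal sketch (hypothetical helper, not
the actual PowerPC frame-lowering code):

    #include <cstdint>

    // Round Offset up to the next multiple of Align; Align must be a power of two.
    uint64_t alignTo(uint64_t Offset, uint64_t Align) {
      return (Offset + Align - 1) & ~(Align - 1);
    }

For example, alignTo(20, 16) yields 32, so a 16-byte-aligned object placed after 20
bytes of frame would land at offset 32.
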
Jim Laskey
5a3005b7d0 Fix logic bug in finding retry slot in tally.
llvm-svn: 24188
2005-11-05 00:01:25 +00:00
Jim Laskey
305647f84e Fix a warning
llvm-svn: 24187
2005-11-04 18:26:02 +00:00
Jim Laskey
670144ec9e Scheduling now uses itinerary data.
llvm-svn: 24180
2005-11-04 04:05:35 +00:00
Nate Begeman
d6ddce1ced Fix a crash that Andrew noticed, and add a pair of braces to unconfuse
Xcode's indenting.

llvm-svn: 24159
2005-11-02 18:42:59 +00:00
Chris Lattner
7b5cc7c0e4 Fix a source of undefined behavior when dealing with 64-bit types. This
may fix PR652.  Thanks to Andrew for tracking down the problem.

llvm-svn: 24145
2005-11-02 01:47:04 +00:00
Jim Laskey
8a0072ec92 1. Embed and not inherit vector for NodeGroup.
2. Iterate operands and not uses (performance).
3. Some long-pending comment changes.

llvm-svn: 24119
2005-10-31 12:49:09 +00:00
Chris Lattner
d7ef6d6774 Significantly simplify this code and make it more aggressive. Instead of having
a special case hack for X86, make the hack more general: if an incoming argument
register is not used in any block other than the entry block, don't copy it to
a vreg.  This helps us compile code like this:

%struct.foo = type { int, int, [0 x ubyte] }
int %test(%struct.foo* %X) {
        %tmp1 = getelementptr %struct.foo* %X, int 0, uint 2, int 100
        %tmp = load ubyte* %tmp1                ; <ubyte> [#uses=1]
        %tmp2 = cast ubyte %tmp to int          ; <int> [#uses=1]
        ret int %tmp2
}

to:

_test:
        lbz r3, 108(r3)
        blr

instead of:

_test:
        lbz r2, 108(r3)
        or r3, r2, r2
        blr

The (dead) copy emitted to copy r3 into a vreg for extra-block uses was
increasing the live range of r3 past the load, preventing the coalescing.

This implements CodeGen/PowerPC/reg-coallesce-simple.ll

llvm-svn: 24115
2005-10-30 19:42:35 +00:00
Chris Lattner
b0c50d1b7d Reduce the number of copies emitted as machine instructions by
generating results in vregs that will need them.  In the case of something
like this:  CopyToReg((add X, Y), reg1024), we no longer emit code like
this:

   reg1025 = add X, Y
   reg1024 = reg1025

Instead, we emit:

   reg1024 = add X, Y

Whoa! :)

llvm-svn: 24111
2005-10-30 18:54:27 +00:00
Chris Lattner
26841f9e6b Codegen mul by negative power of two with a shift and negate.
This implements test/Regression/CodeGen/PowerPC/mul-neg-power-2.ll,
producing:

_foo:
        slwi r2, r3, 1
        subfic r3, r2, 63
        blr

instead of:

_foo:
        mulli r2, r3, -2
        addi r3, r2, 63
        blr

llvm-svn: 24106
2005-10-30 06:41:49 +00:00
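
The identity being exploited is x * -(2^k) == -(x << k); in the emitted code the negate
is folded into the following subtract-from-immediate (subfic). A small self-checking
C++ sketch of the identity (assuming two's-complement wraparound, as the hardware
provides):

    #include <cassert>
    #include <cstdint>

    int main() {
      for (int32_t x = -1000; x <= 1000; ++x) {
        int32_t viaMul   = x * -2;                       // the old mulli form
        int32_t viaShift = -(int32_t)((uint32_t)x << 1); // shift-then-negate, k = 1
        assert(viaMul == viaShift); // shift done in unsigned to sidestep UB
      }
      return 0;
    }
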
Chris Lattner
24c5aebb55 Fix DSE to not nuke dead stores unless the redundant store is the same
VT as the killing one.  This fixes PR491.

llvm-svn: 24034
2005-10-27 07:10:34 +00:00
Chris Lattner
83a994e57c Add a simple xform that is useful for bitfield operations.
llvm-svn: 24029
2005-10-27 05:06:38 +00:00
Chris Lattner
daf6a48dae Fix some spellos pointed out by Gabor Greif
llvm-svn: 24019
2005-10-26 18:41:41 +00:00
Nate Begeman
98c5495992 Allow custom-lowered FP_TO_SINT ops in the check for whether a larger
FP_TO_SINT is preferred to a larger FP_TO_UINT.  This seems to be begging
for a TLI.isOperationCustom() helper function.

llvm-svn: 23992
2005-10-25 23:47:25 +00:00
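
The helper that commit wishes for might look like this self-contained sketch (names and
table layout hypothetical, loosely modeled on TargetLowering's per-opcode action table,
which in reality is also indexed by value type):

    enum LegalizeAction { Legal, Promote, Expand, Custom };

    struct TargetLoweringSketch {
      LegalizeAction OpActions[256] = {}; // one action per opcode, simplified

      LegalizeAction getOperationAction(unsigned Op) const {
        return OpActions[Op];
      }

      // The convenience predicate the commit message asks for.
      bool isOperationCustom(unsigned Op) const {
        return getOperationAction(Op) == Custom;
      }
    };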