mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-11-26 04:32:44 +01:00
Commit Graph

1589 Commits

Author SHA1 Message Date
Chris Lattner
2db97248f8 In 64-bit mode, 64-bit GPRs are callee saved, not 32-bit ones.
llvm-svn: 29096
2006-07-11 00:48:23 +00:00
Chris Lattner
abaaddc214 Implement Regression/CodeGen/PowerPC/bswap-load-store.ll by folding bswaps
into i16/i32 load/stores.

llvm-svn: 29089
2006-07-10 20:56:58 +00:00
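
As a rough illustration of what this folding catches (my sketch in C, not taken
from the .ll testcase itself): a bswap whose input is a load, or whose only use
is a store, can now become a single byte-reversed memory access (lhbrx/lwbrx,
sthbrx/stwbrx) instead of a plain load/store plus shift-and-mask shuffling.

#include <stdint.h>

/* load feeding a bswap -> lwbrx */
uint32_t load_swapped(const uint32_t *p) { return __builtin_bswap32(*p); }

/* bswap feeding an i16 store -> sthbrx */
void store_swapped(uint16_t *p, uint16_t v) { *p = __builtin_bswap16(v); }
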
Chris Lattner
da960e218f Undisable ppc64 jit
llvm-svn: 29011
2006-07-06 17:10:42 +00:00
Chris Lattner
496bd3fbf6 Use hidden visibility to make symbols in an anonymous namespace get
dropped.  This shrinks libllvmgcc.dylib another 67K

llvm-svn: 28975
2006-06-28 23:17:24 +00:00
Chris Lattner
26f1985fdc shrink libllvmgcc.dylib another 25K
llvm-svn: 28971
2006-06-28 22:00:36 +00:00
Chris Lattner
852423b469 Don't match 64-bit bitfield inserts into rlwimi's. TODO: add rldimi. :)
llvm-svn: 28944
2006-06-27 21:08:52 +00:00
Chris Lattner
d7b1f61e72 Fix ppc64 jump tables
llvm-svn: 28941
2006-06-27 20:46:17 +00:00
Chris Lattner
01965c2fd8 Print stubs for external globals right.
llvm-svn: 28936
2006-06-27 20:20:53 +00:00
Chris Lattner
2c3f67f6a7 Implement 64-bit select, bswap, etc.
llvm-svn: 28935
2006-06-27 20:14:52 +00:00
Chris Lattner
86c7ca4fd4 Add a pattern for i64 sra. Print 8-byte units with a space between the .quad
and the data

llvm-svn: 28934
2006-06-27 20:07:26 +00:00
Chris Lattner
3422f47382 Fix rewriting frame offsets with ixaddr instructions, which implicitly shift
the offset two bits to the left.

llvm-svn: 28933
2006-06-27 18:55:49 +00:00
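
If I read "ixaddr" correctly, these are the DS-form memory ops (ld/std/lwa),
whose 14-bit displacement field the hardware scales by four.  A minimal sketch
of the constraint the frame-offset rewrite has to respect (illustrative C, not
the LLVM code):

#include <assert.h>

/* The encoded displacement is offset/4; the offset itself must be 4-byte
   aligned or it cannot be represented in a DS-form instruction at all. */
unsigned encode_ds_displacement(int offset) {
  assert((offset & 3) == 0 && "DS-form offset must be a multiple of 4");
  return ((unsigned)(offset / 4)) & 0x3FFF;   /* 14-bit field */
}
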
Chris Lattner
8569f4042d PPC doesn't have bit converts to/from i64
llvm-svn: 28932
2006-06-27 18:40:08 +00:00
Chris Lattner
da08df5d8a Add 64-bit MTCTR so that indirect calls work.
llvm-svn: 28931
2006-06-27 18:36:44 +00:00
Chris Lattner
20959f59cd Fix an incorrect store pattern. This fixes em3d.
llvm-svn: 28930
2006-06-27 18:22:50 +00:00
Chris Lattner
26f2bd4d4b Implement 64-bit undef, sub, shl/shr, srem/urem
llvm-svn: 28929
2006-06-27 18:18:41 +00:00
Chris Lattner
b4a636f966 Use i32 for shift amounts instead of i64. This gets bisort working.
llvm-svn: 28927
2006-06-27 17:34:57 +00:00
Chris Lattner
01182783c4 Add zextload from i32 -> i64, with this, perimeter works.
llvm-svn: 28926
2006-06-27 17:30:08 +00:00
Chris Lattner
10e71f60df Print darwin stub stuff correctly in 64-bit mode. With this, treeadd works in
ppc64 mode!

llvm-svn: 28923
2006-06-27 01:02:25 +00:00
Chris Lattner
a572f110b4 Fix variable shadowing issue
llvm-svn: 28922
2006-06-27 00:10:13 +00:00
Chris Lattner
494f476ca7 Implement a bunch of 64-bit cleanliness work. With this, treeadd builds (but
doesn't work right).

llvm-svn: 28921
2006-06-27 00:04:13 +00:00
Chris Lattner
c8a47e0bb0 Rearrange compares, add ADDI8, add sext from 32-to-64 bit register
llvm-svn: 28920
2006-06-26 23:53:10 +00:00
Chris Lattner
cbd4d14b24 Improve PPC64 calling convention support
llvm-svn: 28919
2006-06-26 22:48:35 +00:00
Chris Lattner
5d0654b832 Remove two more definitions
llvm-svn: 28918
2006-06-26 22:47:37 +00:00
Chris Lattner
209c2db6b9 remove two unused instructions.
llvm-svn: 28917
2006-06-26 22:44:13 +00:00
Jim Laskey
a8284f65e1 Add and sort "sections" in debug lines. This allows stepping through
code in sections other than ".text", including weak sections like ctors and
dtors.

llvm-svn: 28909
2006-06-23 12:51:53 +00:00
Chris Lattner
5fa6e47534 Correct returns of 64-bit values, though they seemed to work before...
llvm-svn: 28892
2006-06-21 00:34:03 +00:00
Chris Lattner
10d22c274e Make these predicates correct in 64-bit mode too.
llvm-svn: 28890
2006-06-20 23:21:20 +00:00
Chris Lattner
75e6449a0f Rename OR4 -> OR. Move some PPC64-specific stuff to the 64-bit file
llvm-svn: 28889
2006-06-20 23:18:58 +00:00
Chris Lattner
2e1d3158f1 remove unused flag
llvm-svn: 28888
2006-06-20 23:15:07 +00:00
Chris Lattner
c74ef80a95 add some logical ops
llvm-svn: 28887
2006-06-20 23:11:59 +00:00
Chris Lattner
19df1fcd72 remove some unused patterns
llvm-svn: 28886
2006-06-20 23:11:36 +00:00
Chris Lattner
40a0a6c400 Add some more immediate patterns. This allows us to compile:
void test6() {
  Y = 0xABCD0123BCDE4567;
}

into:

_test6:
        lis r2, -21555          ; r2 = 0xFFFFFFFFABCD0000  (-21555 == 0xABCD)
        lis r3, ha16(_Y)        ; r3 = high part of _Y's address
        ori r2, r2, 291         ; r2 = 0xFFFFFFFFABCD0123  (291 == 0x0123)
        rldicr r2, r2, 32, 31   ; r2 = 0xABCD012300000000  (rotate left 32, clear low word)
        oris r2, r2, 48350      ; r2 = 0xABCD0123BCDE0000  (48350 == 0xBCDE)
        ori r2, r2, 17767       ; r2 = 0xABCD0123BCDE4567  (17767 == 0x4567)
        std r2, lo16(_Y)(r3)
        blr

llvm-svn: 28885
2006-06-20 23:03:01 +00:00
Chris Lattner
690b03fb44 Instead of li/xoris use li/oris. Note that this doesn't work if bit 15 is
set, so disable the pattern in that case.

llvm-svn: 28884
2006-06-20 22:38:59 +00:00
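
A worked example (mine, not from the commit) of the bit-15 failure mode: li
sign-extends its 16-bit immediate, and oris can only OR bits in, so once bit 15
is set the upper bits are already all ones and can never be cleared.

#include <stdint.h>
#include <stdio.h>

int main(void) {
  /* Trying to build 0x12348000 with the li/oris pairing: */
  int64_t r = -0x8000;               /* li   r2, -32768 -> 0xFFFFFFFFFFFF8000 */
  r |= (int64_t)0x1234 << 16;        /* oris r2, r2, 4660 (0x1234)            */
  printf("0x%016llx\n", (unsigned long long)r);  /* still ffffffffffff8000    */
  return 0;
}
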
Chris Lattner
eede1e2c00 Add some 64-bit logical ops.
Split imm16Shifted into a sext/zext form for 64-bit support.
Add some patterns for immediate formation.  For example, we now compile this:

static unsigned long long Y;
void test3() {
  Y = 0xF0F00F00;
}

into:

_test3:
        li r2, 3840
        lis r3, ha16(_Y)
        xoris r2, r2, 61680
        std r2, lo16(_Y)(r3)
        blr

GCC produces:

_test3:
        li r0,0
        lis r2,ha16(_Y)
        ori r0,r0,61680
        sldi r0,r0,16
        ori r0,r0,3840
        std r0,lo16(_Y)(r2)
        blr

llvm-svn: 28883
2006-06-20 22:34:10 +00:00
Chris Lattner
4ff5f3d852 64-bit bugfix: 0xFFFF0000 cannot be formed with a single lis.
llvm-svn: 28880
2006-06-20 21:39:30 +00:00
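
The arithmetic behind this (my note): lis materializes a sign-extended
(imm16 << 16), and the only imm16 that gives 0xFFFF0000 in 32-bit mode is -1,
which in 64-bit mode sign-extends to 0xFFFFFFFFFFFF0000 instead.

#include <stdint.h>

/* What a single lis leaves in a 64-bit register. */
int64_t lis(int16_t imm) { return (int64_t)imm << 16; }

/* lis(-1) == 0xFFFFFFFFFFFF0000, not 0x00000000FFFF0000, so forming
   0xFFFF0000 needs a second instruction to clear the high bits. */
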
Chris Lattner
c97820b17c Add some patterns for globals, so we can now compile this:
static unsigned long long X, Y;
void test1() {
  X = Y;
}

into:

_test1:
        lis r2, ha16(_Y)
        lis r3, ha16(_X)
        ld r2, lo16(_Y)(r2)
        std r2, lo16(_X)(r3)
        blr

llvm-svn: 28879
2006-06-20 21:23:06 +00:00
Chris Lattner
3ae4156dd7 Remove some now-unneeded casts from instruction patterns. With the casts
removed, tblgen produces output identical to what it produced with them in.

llvm-svn: 28867
2006-06-20 00:39:56 +00:00
Chris Lattner
19339e7a96 Add some patterns for ppc64
llvm-svn: 28866
2006-06-20 00:38:36 +00:00
Chris Lattner
d817b32a8e Implement the getPointerRegClass method, which is required for the ptr_rc
magic to work.

llvm-svn: 28847
2006-06-17 00:01:04 +00:00
Chris Lattner
89a0d10812 Upgrade some load/store instructions to use the proper addressing mode stuff.
llvm-svn: 28841
2006-06-16 21:29:41 +00:00
Chris Lattner
163da7cdcb In 64-bit mode, addr mode operands use G8RC instead of GPRC.
llvm-svn: 28840
2006-06-16 21:29:03 +00:00
Chris Lattner
81845946ff fix some assumptions that pointers can only be 32 bits. With this, we can
now compile:

static unsigned long X;
void test1() {
  X = 0;
}

into:

_test1:
        lis r2, ha16(_X)
        li r3, 0
        stw r3, lo16(_X)(r2)
        blr

Totally amazing :)

llvm-svn: 28839
2006-06-16 21:01:35 +00:00
Chris Lattner
cb294464e7 Split 64-bit instructions out into a separate .td file
llvm-svn: 28838
2006-06-16 20:22:01 +00:00
Chris Lattner
59947dda25 Force 64-bit register availability in 64-bit mode. For real.
llvm-svn: 28837
2006-06-16 20:05:06 +00:00
Chris Lattner
126464b577 Remove the -darwin and -aix llc options, inferring darwinism and aixism from
the target triple & subtarget info.  woo.

llvm-svn: 28835
2006-06-16 18:50:48 +00:00
Chris Lattner
6a9ec7e80e Don't pass target name into TargetData anymore; it is never used or needed.
Remove explicit casts to std::string now that there are no overload resolution
issues in the TargetData ctors.

llvm-svn: 28830
2006-06-16 18:22:52 +00:00
Chris Lattner
19680a4928 Document the subtarget features better, make sure that 64-bit mode, 64-bit
support, and 64-bit register use are all consistent with each other.

Add a new "IsPPC" feature, to distinguish ppc32 vs ppc64 targets, use this
to configure TargetData differently.  This not makes ppc64 blow up on lots
of stuff :)

llvm-svn: 28825
2006-06-16 17:50:12 +00:00
Chris Lattner
fa884ac11b Rename some subtarget features. A CPU can now *have* 64-bit instructions,
and in 32-bit mode we can choose to optionally *use* 64-bit registers.

llvm-svn: 28824
2006-06-16 17:34:12 +00:00
Chris Lattner
aeb5a015cd First baby step towards ppc64 support. This adds a new -march=ppc64 backend
that is currently just like ppc32 :)

llvm-svn: 28813
2006-06-16 01:37:27 +00:00
Jim Laskey
849c76e55c 1. Support standard dwarf format (was bootstrapping in Apple format).
2. Add vector support.

llvm-svn: 28807
2006-06-15 20:51:43 +00:00
Evan Cheng
32feafd76c Type of extract_element index operand should be iPTR.
llvm-svn: 28797
2006-06-15 08:18:06 +00:00
Jim Laskey
69d5018a05 Place dwarf headers at earliest possible point. Well behaved when skipping
functions.

llvm-svn: 28781
2006-06-14 11:35:03 +00:00
Chris Lattner
6f45959365 Gaar! Don't use r11 for CR save/restore, use R0. R11 can be register
allocated, thus live across the save/reload.  This fixes

llc-beta /MultiSource/Applications/spiff/spiff
llc-beta /MultiSource/Benchmarks/sim/sim
llc-beta /MultiSource/Benchmarks/Ptrdist/bc/bc
llc-beta /MultiSource/Benchmarks/McCat/12-IOtest/iotest
llc-beta /MultiSource/Benchmarks/FreeBench/fourinarow/fourinarow
llc-beta /MultiSource/Benchmarks/Fhourstones-3.1/fhourstones3.1
llc-beta /MultiSource/Benchmarks/mediabench/adpcm/rawdaudio/rawdaudio
llc-beta /MultiSource/Benchmarks/mediabench/adpcm/rawcaudio/rawcaudio
llc-beta /MultiSource/Benchmarks/mediabench/g721/g721encode/encode
llc-beta /MultiSource/Benchmarks/mediabench/jpeg/jpeg-6a/cjpeg

and probably others, with -regalloc=local.

llvm-svn: 28761
2006-06-12 23:59:16 +00:00
Chris Lattner
93ed4373c4 Fix spilling and reloading of CR regs to reload the right values. This fixes
Olden/power (and probably others) with -regalloc=local.

llvm-svn: 28760
2006-06-12 21:50:57 +00:00
Chris Lattner
7bc8eae1f0 Work around a nasty tblgen bug where it doesn't add operands for varargs
nodes correctly.

llvm-svn: 28745
2006-06-10 01:15:02 +00:00
Chris Lattner
b231c3d11c Fix a problem exposed by the local allocator. CALL instructions are not marked
as using incoming argument registers, so the local allocator would clobber them
between their set and use.  To fix this, we give the call instructions a variable
number of uses in the CALL MachineInstr itself, so the live variables analysis
understands the live ranges of these register arguments.

llvm-svn: 28744
2006-06-10 01:14:28 +00:00
Chris Lattner
bfbee64ecf Add PowerPC intrinsics to support dcbz[l]
llvm-svn: 28696
2006-06-06 21:29:23 +00:00
Chris Lattner
1d2618c6c7 Silence -pedantic warning
llvm-svn: 28633
2006-06-01 17:17:06 +00:00
Chris Lattner
31b150e334 Always reserve space for 8 spilled GPRs. GCC apparently assumes that this
space will be available, even if the callee isn't varargs.

llvm-svn: 28571
2006-05-30 21:21:04 +00:00
Evan Cheng
de0f25081a Change RET node to include signedness information of the return values, i.e.
RET chain, value1, sign1, value2, sign2, ...

llvm-svn: 28510
2006-05-26 23:10:12 +00:00
Chris Lattner
cbcad040b3 Fix build failure of povray
llvm-svn: 28473
2006-05-25 18:06:16 +00:00
Chris Lattner
e3059fb8bd Fix Benchmarks/MallocBench/cfrac
llvm-svn: 28471
2006-05-25 16:54:16 +00:00
Evan Cheng
4a74dd0c51 CALL node change (arg / sign pairs instead of just arguments).
llvm-svn: 28462
2006-05-25 00:57:32 +00:00
Evan Cheng
09942d3f8b Assert if InflightSet is not cleared after instruction selecting a BB.
llvm-svn: 28459
2006-05-25 00:24:28 +00:00
Evan Cheng
b040dd86af Clear HandleMap and ReplaceMap after instruction selection. Otherwise it may
cause non-deterministic behavior.

llvm-svn: 28454
2006-05-24 20:46:25 +00:00
Chris Lattner
f604017e47 Patches to make the LLVM sources more -pedantic clean. Patch provided
by Anton Korobeynikov!  This is a step towards closing PR786.

llvm-svn: 28447
2006-05-24 17:04:05 +00:00
Chris Lattner
bc3be2ff8a Fix CodeGen/Generic/vector.ll:test_div with altivec.
llvm-svn: 28445
2006-05-24 00:15:25 +00:00
Chris Lattner
56862bbd53 Handle SETO* like we handle SET*, restoring behavior after Evan's setcc
change.  This fixes PowerPC/fnegsel.ll.

llvm-svn: 28443
2006-05-24 00:06:44 +00:00
Owen Anderson
4a78af08aa Make TargetData strings less redundant.
llvm-svn: 28423
2006-05-20 23:28:54 +00:00
Owen Anderson
c6947bf2ce Make all of the TargetMachine subclasses use the new string TargetData methods.
This is part of the ongoing work on PR 761.

llvm-svn: 28414
2006-05-20 00:24:56 +00:00
Evan Cheng
667b133ab9 getCalleeSaveRegs and getCalleeSaveRegClasses are no longer TableGen'd.
llvm-svn: 28378
2006-05-18 00:12:58 +00:00
Evan Cheng
ea24815aa3 Remove PointerType from class Target
llvm-svn: 28368
2006-05-17 21:20:27 +00:00
Chris Lattner
477732bab9 Add a note about a note
llvm-svn: 28355
2006-05-17 19:02:25 +00:00
Chris Lattner
2208c3214c Make PPC call lowering more aggressive, making the isel matching code simple
enough to be autogenerated.

llvm-svn: 28354
2006-05-17 19:00:46 +00:00
Chris Lattner
03c70b7f27 Switch PPC over to a call-selection model where the lowering code creates
the copyto/fromregs instead of making the PPCISD::CALL selection code create
them.  This vastly simplifies the selection code, and moves the ABI handling
parts into one place.

llvm-svn: 28346
2006-05-17 06:01:33 +00:00
Chris Lattner
348883611c 3 changes, 2 of which are cleanup, one of which changes codegen:
1. Rearrange code a bit so that the special case doesn't require indenting lots
   of code.
2. Add comments describing PPC calling convention.
3. Only round up to 56 bytes of stack space for an outgoing call if the callee
   is varargs.  This saves a bit of stack space.

llvm-svn: 28342
2006-05-17 00:15:40 +00:00
Chris Lattner
a36579803f implement passing/returning vector regs to calls, at least non-varargs calls.
llvm-svn: 28341
2006-05-16 23:54:25 +00:00
Chris Lattner
b5271a0f4c Instead of implementing LowerCallTo directly, let the default impl produce an
ISD::CALL node, then custom lower that.  This means that we only have to handle
LEGAL call operands/results, not every possible type.  This allows us to
simplify the call code, shrinking it by about 1/3.

llvm-svn: 28339
2006-05-16 22:56:08 +00:00
Chris Lattner
40d1eaad0a Simplify the argument counting logic by only incrementing the index.
llvm-svn: 28335
2006-05-16 18:58:15 +00:00
Chris Lattner
0ae068ed8f Simplify the dead argument handling code.
llvm-svn: 28334
2006-05-16 18:54:32 +00:00
Chris Lattner
fbbe542235 Vector args passed in registers don't reserve stack space.
llvm-svn: 28333
2006-05-16 18:51:52 +00:00
Chris Lattner
0a12e343e2 Switch the PPC backend over to using FORMAL_ARGUMENTS for formal argument
handling.  This makes the lower argument code significantly simpler (we
only need to handle legal argument types).

Incidentally, this also implements support for vector argument registers,
so long as they are not on the stack.

llvm-svn: 28331
2006-05-16 18:18:50 +00:00
Chris Lattner
199f3f6af8 Fit in 80 cols
llvm-svn: 28311
2006-05-16 04:20:24 +00:00
Chris Lattner
901e7ad557 Remove some dead code, identified by coverity.
llvm-svn: 28303
2006-05-15 05:48:32 +00:00
Chris Lattner
adcb0582d8 Remove dead var, fix bad override.
llvm-svn: 28264
2006-05-12 21:09:57 +00:00
Chris Lattner
9789688d36 remove dead variable.
llvm-svn: 28248
2006-05-12 17:33:59 +00:00
Chris Lattner
bcd2c4f32d Fix PowerPC/2006-05-12-rlwimi-crash.ll
Nate, please verify that if InsertMask is 0, rlwimi shouldn't be used.
This fixes the crash and causes no PPC testsuite regressions.

llvm-svn: 28243
2006-05-12 16:29:37 +00:00
Owen Anderson
29e4d70aed Refactor a bunch of includes so that TargetMachine.h doesn't have to include
TargetData.h.  This should make recompiles a bit faster with my current
TargetData tinkering.

llvm-svn: 28238
2006-05-12 06:33:49 +00:00
Chris Lattner
085cfba0ca Fix the PowerPC JIT-only failure on UnitTests/Vector/sumarray-dbl, which is
really a bad codegen bug that LLC happens to get lucky with. I must chat with
Nate for the proper fix.

llvm-svn: 28213
2006-05-10 06:38:32 +00:00
Chris Lattner
56680711dc Indent .data/.text in the .s file
llvm-svn: 28204
2006-05-09 16:15:00 +00:00
Chris Lattner
f45b6d5c08 Split SwitchSection into SwitchTo{Text|Data}Section methods.
llvm-svn: 28184
2006-05-09 04:59:56 +00:00
Nate Begeman
db854c6772 Yet more readme updating
llvm-svn: 28172
2006-05-08 20:54:02 +00:00
Nate Begeman
1ff4d8f2fe New note about something bad happening in target independent optimizers
llvm-svn: 28170
2006-05-08 20:08:28 +00:00
Nate Begeman
b8fa6337df Proving once again that I am not as smart as the compiler
llvm-svn: 28169
2006-05-08 19:09:24 +00:00
Nate Begeman
a706539a72 Fold more shifts into inserts, and update the README
llvm-svn: 28168
2006-05-08 17:38:32 +00:00
Nate Begeman
591488077e Update some stuff now that the new rlwimi code has gone in
llvm-svn: 28162
2006-05-08 02:52:38 +00:00
Nate Begeman
dc94b738d0 New rlwimi implementation, which is superior to the old one. There are
still a couple missed optimizations, but we now generate all the possible
rlwimis for multiple inserts into the same bitfield.  More regression tests
to come.

llvm-svn: 28156
2006-05-07 00:23:38 +00:00
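
For context, the basic shape that maps onto rlwimi is a masked merge of one
value into a bitfield of another; a minimal C instance (mine, not one of the
regression tests; the new code also handles several such inserts into the same
word):

/* (a & ~mask) | (b & mask) with a contiguous mask is a single
   rlwimi (here: rlwimi a, b, 0, 16, 31). */
unsigned insert_low_halfword(unsigned a, unsigned b) {
  return (a & 0xFFFF0000u) | (b & 0x0000FFFFu);
}
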
Chris Lattner
daae9ee503 Print a grouping around inline asm blocks so that we can tell when we are
using them.

llvm-svn: 28134
2006-05-05 21:50:04 +00:00
Chris Lattner
4978a4f2f4 New note, Nate, please check to see if I'm full of it :)
llvm-svn: 28118
2006-05-05 05:36:15 +00:00
Chris Lattner
eb41c99161 Rename MO_VirtualRegister -> MO_Register. Clean up immediate handling.
llvm-svn: 28104
2006-05-04 18:05:43 +00:00
Chris Lattner
685568510a Move some methods out of MachineInstr into MachineOperand
llvm-svn: 28102
2006-05-04 17:52:23 +00:00
Chris Lattner
97f1af2f14 There shalt be only one "immediate" operand type!
llvm-svn: 28099
2006-05-04 17:21:20 +00:00
Chris Lattner
20affbd29a Revert Nate's CR patch from last night, which caused many regressions (e.g. fhourstones).
Loading and storing off R0 isn't what we wanted.  Also, taking some CR's out of
CRRC seems to cause failures as well.  Further investigation is required.

llvm-svn: 28097
2006-05-04 16:56:45 +00:00
Chris Lattner
c779fca289 Remove a bunch more SparcV9 specific stuff
llvm-svn: 28093
2006-05-04 01:15:02 +00:00
Chris Lattner
0f89e6b11d Remove some more unused stuff from MachineInstr that was leftover from V9.
llvm-svn: 28091
2006-05-04 00:44:25 +00:00
Chris Lattner
f89e1162ad Change from using MachineRelocation ctors to using static methods
in MachineRelocation to create Relocations.

llvm-svn: 28088
2006-05-03 20:30:20 +00:00
Chris Lattner
d36b66d6dc Suck block address tracking out of targets into the JIT Emitter. This
simplifies the MachineCodeEmitter interface just a little bit and makes
BasicBlocks work like constant pools and jump tables.

llvm-svn: 28082
2006-05-03 17:10:41 +00:00
Owen Anderson
71bc529dfa Refactor TargetMachine, pushing handling of TargetData into the target-specific subclasses. This has one caller-visible change: getTargetData() now returns a pointer instead of a reference.
This fixes PR 759.

llvm-svn: 28074
2006-05-03 01:29:57 +00:00
Chris Lattner
06ccac43d7 Change the BasicBlockAddrs map to be a vector, indexed by MBB number.
llvm-svn: 28069
2006-05-03 00:32:55 +00:00
Chris Lattner
2bf37af52d Several related changes:
1. Change several methods in the MachineCodeEmitter class to be pure virtual.
2. Suck emitConstantPool/initJumpTableInfo into startFunction, removing them
   from the MachineCodeEmitter interface, and reducing the amount of target-
   specific code.
3. Change the JITEmitter so that it allocates constantpools and jump tables
   *right* next to the functions that they belong to, instead of in a separate
   pool of memory.  This makes all memory for a function be contiguous, and
   means the JITEmitter only tracks one block of memory now.

llvm-svn: 28065
2006-05-02 23:22:24 +00:00
Chris Lattner
d100478886 Fix a purely hypothetical problem (for now): emitWord emits in the host
byte format.  This doesn't work when using the code emitter in a cross target
environment.  Since the code emitter is only really used by the JIT, this
isn't a current problem, but if we ever start emitting .o files, it would be.

llvm-svn: 28060
2006-05-02 19:14:47 +00:00
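
A sketch of the hazard being described (illustrative C, not the emitter's
actual code): a raw word store inherits the host's byte order, so a
cross-capable emitter has to spell out the target's byte order explicitly.

#include <stdint.h>
#include <string.h>

static uint8_t Buf[64];
static uint8_t *CurByte = Buf;

void emitWordHostOrder(uint32_t w) {      /* byte order follows the JIT host */
  memcpy(CurByte, &w, 4);
  CurByte += 4;
}

void emitWordBigEndian(uint32_t w) {      /* explicit PowerPC (target) order */
  CurByte[0] = (uint8_t)(w >> 24); CurByte[1] = (uint8_t)(w >> 16);
  CurByte[2] = (uint8_t)(w >> 8);  CurByte[3] = (uint8_t)(w >> 0);
  CurByte += 4;
}
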
Chris Lattner
055baf5c7b Refactor the machine code emitter interface to pull the pointers for the current
code emission location into the base class, instead of being in the derived classes.

This change means that low-level methods like emitByte/emitWord are no longer
virtual (yaay for speed), and we now have a framework to support growable code
segments.  This implements feature request #1 of PR469.

llvm-svn: 28059
2006-05-02 18:27:26 +00:00
Nate Begeman
d7b4d2a743 Since we don't handle callee-save CRs right yet, don't allocate them. Also
don't step on R11 in the middle of a function when saving and restoring CRs

llvm-svn: 28058
2006-05-02 17:37:31 +00:00
Nate Begeman
fa83cee567 Hooray, everyone now uses the same printBasicBlockLabel implementation
llvm-svn: 28056
2006-05-02 17:34:51 +00:00
Nate Begeman
05174045df Extend printBasicBlockLabel a bit so that it can be used to print all
basic block labels, consolidating the code to do so in one place for each
target.

llvm-svn: 28050
2006-05-02 05:37:32 +00:00
Nate Begeman
82a6c0c66c Update the PPC compilation callback code to not need weird abi-violating
prologs and epilogs, keep all the asm in one place, and remove use of
compiler builtin functions.

llvm-svn: 28049
2006-05-02 04:50:05 +00:00
Chris Lattner
e3de67fae2 Fix CodeGen/Generic/2006-04-28-Sign-extend-bool.ll
llvm-svn: 28017
2006-04-28 21:56:10 +00:00
Chris Lattner
65291785c8 Add a note
llvm-svn: 27999
2006-04-28 00:04:05 +00:00
Nate Begeman
deeb953086 No functionality changes, but cleaner code with correct comments.
llvm-svn: 27966
2006-04-25 04:45:59 +00:00
Nate Begeman
7ed816f900 JumpTable support! What this represents is working asm and jit support for
x86 and ppc for 100% dense switch statements when relocations are non-PIC.
This support will be extended and enhanced in the coming days to support
PIC, and less dense forms of jump tables.

llvm-svn: 27947
2006-04-22 18:53:45 +00:00
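
To make "100% dense" concrete, here is the sort of switch (my example, not one
of the tests) that now lowers to a table of basic-block addresses indexed by
the switch value plus one indirect branch, rather than a compare-and-branch
chain:

int classify(int x) {
  switch (x) {              /* cases 0..4 with no holes: fully dense */
  case 0: return 10;
  case 1: return 11;
  case 2: return 12;
  case 3: return 13;
  case 4: return 14;
  default: return -1;       /* only the range check stays a compare */
  }
}
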
Chris Lattner
de560fcaf7 Teach the JIT how to relocate LI; this fixes the JIT on Prolangs-C/TimberWolfMC
llvm-svn: 27943
2006-04-22 06:17:56 +00:00
Nate Begeman
dc60393018 Fix the comment
llvm-svn: 27938
2006-04-21 22:11:27 +00:00
Nate Begeman
67b3094f27 Change the PPC JIT to use a Static relocation model
llvm-svn: 27937
2006-04-21 22:04:15 +00:00
Chris Lattner
f1a59f3dc1 Fix the CodeGen/PowerPC/buildvec_canonicalize.ll regression from last night.
llvm-svn: 27908
2006-04-20 19:01:30 +00:00
Chris Lattner
d11e0056ae Make sure that the new instructions selected have the right type. This fixes
CodeGen/PowerPC/2006-04-19-vmaddfp-crash.ll

llvm-svn: 27868
2006-04-20 05:58:10 +00:00
Chris Lattner
e307f43f35 add a note
llvm-svn: 27832
2006-04-19 16:22:38 +00:00
Chris Lattner
62537a04fb add a note
llvm-svn: 27828
2006-04-19 05:55:06 +00:00
Chris Lattner
f58f727be6 These are correctly encoded by the JIT. I checked :)
llvm-svn: 27810
2006-04-18 19:03:38 +00:00
Chris Lattner
5f153584d9 add a note
llvm-svn: 27809
2006-04-18 18:30:19 +00:00
Chris Lattner
47a41ae889 Fix a crash on:
void foo2(vector float *A, vector float *B) {
  vector float C = (vector float)vec_cmpeq(*A, *B);
  if (!vec_any_eq(*A, *B))
    *B = (vector float){0,0,0,0};
  *A = C;
}

llvm-svn: 27808
2006-04-18 18:28:22 +00:00
Chris Lattner
2bd91746e1 pretty print node name
llvm-svn: 27806
2006-04-18 18:05:58 +00:00
Chris Lattner
44ea12c5f8 Implement an important entry from README_ALTIVEC:
If an altivec predicate compare is used immediately by a branch, don't
use a (serializing) MFCR instruction to read the CR6 register, which requires
a compare to get it back to CR's.  Instead, just branch on CR6 directly. :)

For example, for:
void foo2(vector float *A, vector float *B) {
  if (!vec_any_eq(*A, *B))
    *B = (vector float){0,0,0,0};
}

We now generate:

_foo2:
        mfspr r2, 256
        oris r5, r2, 12288
        mtspr 256, r5
        lvx v2, 0, r4
        lvx v3, 0, r3
        vcmpeqfp. v2, v3, v2
        bne cr6, LBB1_2 ; UnifiedReturnBlock
LBB1_1: ; cond_true
        vxor v2, v2, v2
        stvx v2, 0, r4
        mtspr 256, r2
        blr
LBB1_2: ; UnifiedReturnBlock
        mtspr 256, r2
        blr

instead of:

_foo2:
        mfspr r2, 256
        oris r5, r2, 12288
        mtspr 256, r5
        lvx v2, 0, r4
        lvx v3, 0, r3
        vcmpeqfp. v2, v3, v2
        mfcr r3, 2
        rlwinm r3, r3, 27, 31, 31
        cmpwi cr0, r3, 0
        beq cr0, LBB1_2 ; UnifiedReturnBlock
LBB1_1: ; cond_true
        vxor v2, v2, v2
        stvx v2, 0, r4
        mtspr 256, r2
        blr
LBB1_2: ; UnifiedReturnBlock
        mtspr 256, r2
        blr

This implements CodeGen/PowerPC/vec_br_cmp.ll.

llvm-svn: 27804
2006-04-18 17:59:36 +00:00
Chris Lattner
519001b0ee move some stuff around, clean things up
llvm-svn: 27802
2006-04-18 17:52:36 +00:00
Chris Lattner
e90fdf3b98 Use vmladduhm to do v8i16 multiplies, which is faster and simpler than doing
even/odd halves.  Thanks to Nate for telling me what's what.

llvm-svn: 27793
2006-04-18 04:28:57 +00:00
Chris Lattner
5951b60cb4 Implement v16i8 multiply with this code:
        vmuloub v5, v3, v2
        vmuleub v2, v3, v2
        vperm v2, v2, v5, v4

This implements CodeGen/PowerPC/vec_mul.ll.  With this, v16i8 multiplies are
6.79x faster than before.

Overall, UnitTests/Vector/multiplies.c is now 2.45x faster with LLVM than with
GCC.

Remove the 'integer multiplies' todo from the README file.

llvm-svn: 27792
2006-04-18 03:57:35 +00:00
Chris Lattner
4d84b56e64 Lower v8i16 multiply into this code:
        li r5, lo16(LCPI1_0)
        lis r6, ha16(LCPI1_0)
        lvx v4, r6, r5
        vmulouh v5, v3, v2
        vmuleuh v2, v3, v2
        vperm v2, v2, v5, v4

where v4 is:
LCPI1_0:                                        ;  <16 x ubyte>
        .byte   2
        .byte   3
        .byte   18
        .byte   19
        .byte   6
        .byte   7
        .byte   22
        .byte   23
        .byte   10
        .byte   11
        .byte   26
        .byte   27
        .byte   14
        .byte   15
        .byte   30
        .byte   31

This is 5.07x faster on the G5 (measured) than lowering to scalar code +
loads/stores.

llvm-svn: 27789
2006-04-18 03:43:48 +00:00
Chris Lattner
613d7fda64 Custom lower v4i32 multiplies into a cute sequence, instead of having legalize
scalarize the sequence into 4 mullw's and a bunch of load/store traffic.

This speeds up v4i32 multiplies 4.1x (measured) on a G5.  This implements
PowerPC/vec_mul.ll

llvm-svn: 27788
2006-04-18 03:24:30 +00:00
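
For reference, the operation in question is just an element-wise 32-bit
multiply; written with GCC/clang vector extensions (my sketch; the actual
testcase is LLVM IR):

typedef int v4si __attribute__((vector_size(16)));

/* Without the custom lowering this was scalarized into four mullw's plus
   load/store traffic; with it, the multiply stays in the vector unit. */
v4si mul_v4i32(v4si a, v4si b) { return a * b; }
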
Chris Lattner
81938fa3db remove done item
llvm-svn: 27778
2006-04-17 21:52:03 +00:00
Chris Lattner
fdecddb741 Don't diddle VRSAVE if no registers need to be added/removed from it. This
allows us to codegen functions as:

_test_rol:
        vspltisw v2, -12
        vrlw v2, v2, v2
        blr

instead of:

_test_rol:
        mfvrsave r2, 256
        mr r3, r2
        mtvrsave r3
        vspltisw v2, -12
        vrlw v2, v2, v2
        mtvrsave r2
        blr

Testcase here: CodeGen/PowerPC/vec_vrsave.ll

llvm-svn: 27777
2006-04-17 21:48:13 +00:00
Chris Lattner
021f521a41 Vectors that are known live-in and live-out are clearly already marked in
the vrsave register for the caller.  This allows us to codegen a function as:

_test_rol:
        mfspr r2, 256
        mr r3, r2
        mtspr 256, r3
        vspltisw v2, -12
        vrlw v2, v2, v2
        mtspr 256, r2
        blr

instead of:

_test_rol:
        mfspr r2, 256
        oris r3, r2, 40960
        mtspr 256, r3
        vspltisw v0, -12
        vrlw v2, v0, v0
        mtspr 256, r2
        blr

llvm-svn: 27772
2006-04-17 21:22:06 +00:00
Chris Lattner
a717d4f53b Prefer to allocate V2-V5 before V0,V1. This lets us generate code like this:
        vspltisw v2, -12
        vrlw v2, v2, v2

instead of:

        vspltisw v0, -12
        vrlw v2, v0, v0

when a function is returning a value.

llvm-svn: 27771
2006-04-17 21:19:12 +00:00
Chris Lattner
6b76deffb5 Move some knowledge about registers out of the code emitter into the register info.
llvm-svn: 27770
2006-04-17 21:07:20 +00:00
Chris Lattner
face261a94 Use a small table instead of macros to do this conversion.
llvm-svn: 27769
2006-04-17 20:59:25 +00:00
Chris Lattner
f2347c31b4 Make sure to check splats of every constant we can, handle splat(31) by
being a bit more clever, add support for odd splats from -31 to -17.

llvm-svn: 27764
2006-04-17 18:09:22 +00:00
Chris Lattner
cc4222d95b Teach the ppc backend to use rol and vsldoi to generate splatted constants.
This implements vec_constants.ll:test_vsldoi and test_rol

llvm-svn: 27760
2006-04-17 17:55:10 +00:00
Chris Lattner
7d66e5a118 add a note
llvm-svn: 27758
2006-04-17 17:29:41 +00:00
Chris Lattner
2d8d6c9feb Make some code more general, adding support for constant formation of several
new patterns.

llvm-svn: 27754
2006-04-17 06:58:41 +00:00
Chris Lattner
9dd4ebffca Learn how to make odd splatted constants in range [17,29]. This implements
PowerPC/vec_constants.ll:test_29.

llvm-svn: 27752
2006-04-17 06:07:44 +00:00
Chris Lattner
72a67a5b1f Pull some code out into a helper function.
Efficiently codegen even splats in the range [-32,30].

This allows us to codegen <30,30,30,30> as:

        vspltisw v0, 15
        vadduwm v2, v0, v0

instead of as a cp load.

llvm-svn: 27750
2006-04-17 06:00:21 +00:00
Chris Lattner
5367a73dec Implement a TODO: for any shuffle that can be viewed as a v4[if]32 shuffle,
if it can be implemented in 3 or fewer discrete altivec instructions, codegen
it as such.  This implements Regression/CodeGen/PowerPC/vec_perf_shuffle.ll

llvm-svn: 27748
2006-04-17 05:28:54 +00:00