mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-20 19:42:54 +02:00
Commit Graph

201 Commits

Author SHA1 Message Date
Chris Lattner
06d8ca1f56 convert ctors/dtors section to be in TLOF instead of
TAI.

llvm-svn: 77842
2009-08-02 00:34:36 +00:00
Evan Cheng
5ef6928dff Fix Thumb2 function call isel. Thumb1 and Thumb2 should share the same
instructions for calls since BL and BLX are always 32-bit long and BX is always
16-bit long.

Also, we should be using BLX to call external function stubs.

llvm-svn: 77756
2009-08-01 00:16:10 +00:00
Chris Lattner
c156a00641 refactor section construction in TLOF to be through an explicit
initialize method, which can be called when an MCContext is available.

llvm-svn: 77687
2009-07-31 17:42:42 +00:00
Bob Wilson
8624e45518 Lower a 128-bit BUILD_VECTOR with 2 elements to a pair of INSERT_VECTOR_ELTs.
llvm-svn: 77557
2009-07-30 00:31:25 +00:00
Evan Cheng
fc846dd401 Optimize Thumb2 jumptable to use tbb / tbh when all the offsets fit in byte / halfword.
llvm-svn: 77422
2009-07-29 02:18:14 +00:00
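
A hedged C++ sketch of the size check this optimization implies (the names and structure are illustrative, not the actual pass): TBB/TBH index a table of byte or halfword entries that hold half the forward branch offset, so the compact forms only apply when every jump-table offset is small enough.

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // Illustrative selection logic only, not the actual optimization pass:
    // TBB/TBH table entries encode half the forward branch offset (targets
    // are halfword aligned), so the compact byte/halfword tables are only
    // usable when every offset to a jump-table target is small enough.
    enum class JumpTableKind { TBB, TBH, Wide };

    JumpTableKind chooseJumpTable(const std::vector<uint32_t> &ByteOffsets) {
      uint32_t MaxOffset =
          *std::max_element(ByteOffsets.begin(), ByteOffsets.end());
      uint32_t HalfOffset = MaxOffset / 2;      // what a table entry would hold
      if (HalfOffset <= 0xFF)
        return JumpTableKind::TBB;              // byte table
      if (HalfOffset <= 0xFFFF)
        return JumpTableKind::TBH;              // halfword table
      return JumpTableKind::Wide;               // fall back to b.w branches
    }

    int main() {
      return chooseJumpTable({12, 40, 200}) == JumpTableKind::TBB ? 0 : 1;
    }
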
Evan Cheng
cf483eb0c0 In Thumb2 mode, add pc is unpredictable. Use add + mov pc instead (that is, until more optimization goes in).
llvm-svn: 77364
2009-07-28 20:53:24 +00:00
Chris Lattner
c74586940a the Apple "ld_classic" linker doesn't support .literal16 in 32-bit
mode, and "ld64" (the default linker) falls back to it in -static
mode.

llvm-svn: 77334
2009-07-28 17:50:28 +00:00
Chris Lattner
55461787cc Rip all of the global variable lowering logic out of TargetAsmInfo. Since
it is highly specific to the object file that will be generated in the end,
this introduces a new TargetLoweringObjectFile interface that is implemented
for each of ELF/MachO/COFF/Alpha/PIC16 and XCore.

Though this is still a brutal and ugly refactoring, it is a major step
towards goodness.

This patch also:
1. fixes a bunch of dangling pointer problems in the PIC16 backend.
2. disables the TargetLowering copy ctor which PIC16 was accidentally using.
3. gets us closer to xcore having its own crazy target section flags and
   pic16 not having to shadow sections with its own objects.
4. fixes weirdness where ELF targets would set CStringSection but not
   CStringSection_.  Factor the code better.
5. fixes some bugs in string lowering on ELF targets.

llvm-svn: 77294
2009-07-28 03:13:23 +00:00
Bob Wilson
ec256c8938 Add support for ARM Neon VREV instructions.
Patch by Anton Korzh, with some modifications from me.

llvm-svn: 77101
2009-07-26 00:39:34 +00:00
Evan Cheng
d615e606c4 Change Thumb2 jumptable codegen to one that uses two level jumps:
Before:
      adr r12, #LJTI3_0_0
      ldr pc, [r12, +r0, lsl #2]
LJTI3_0_0:
      .long    LBB3_24
      .long    LBB3_30
      .long    LBB3_31
      .long    LBB3_32

After:
      adr r12, #LJTI3_0_0
      add pc, r12, +r0, lsl #2
LJTI3_0_0:
      b.w    LBB3_24
      b.w    LBB3_30
      b.w    LBB3_31
      b.w    LBB3_32

This has several advantages.
1. This will make it easier to optimize this to a TBB / TBH instruction +
   (smaller) table.
2. This eliminates the need for an ugly asm printer hack to force the
   addresses into Thumb addresses (bit 0 set to one).
3. Same codegen for pic and non-pic.
4. This eliminates the need to align the table, so the constant pool island
   pass won't have to over-estimate the size.

Based on my calculation, the latter is probably slightly faster as well, since
ldr pc with a shifted register address is very slow. That is, it should be a
win as long as the HW implementation can do a reasonable job of predicting the
second branch.

llvm-svn: 77024
2009-07-25 00:33:29 +00:00
Owen Anderson
cc287b28c9 Get rid of the Pass+Context magic.
llvm-svn: 76702
2009-07-22 00:24:57 +00:00
Chris Lattner
499fe29f12 fix an arm codegen bug (the same as PR4482 on ppc) where available_externally
symbols were not getting stubs.  While I'm at it, add a big testcase for
stub generation to make sure I don't break anything.

llvm-svn: 75737
2009-07-15 04:12:33 +00:00
Bob Wilson
8682c6607e Remove an extra space.
llvm-svn: 75658
2009-07-14 18:44:34 +00:00
Torok Edwin
f955a6ef49 llvm_unreachable->llvm_unreachable(0), LLVM_UNREACHABLE->llvm_unreachable.
This adds location info for all llvm_unreachable calls (which is a macro now) in
!NDEBUG builds.
In NDEBUG builds, location info and the message are omitted (it only prints
"UNREACHABLE executed").

llvm-svn: 75640
2009-07-14 16:55:14 +00:00
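
The mechanism described here (and in the related LLVM_UNREACHABLE commit further down) can be sketched as a small macro. The following is an assumed, simplified illustration rather than LLVM's actual code: in !NDEBUG builds the macro forwards file/line plus an optional message, and in NDEBUG builds both are dropped in favor of a generic message.

    #include <cstdio>
    #include <cstdlib>

    // Simplified sketch of the behaviour described above; not LLVM's actual
    // implementation, just an illustration of the !NDEBUG / NDEBUG split.
    [[noreturn]] static void unreachable_impl(const char *Msg, const char *File,
                                              unsigned Line) {
      if (File)
        std::fprintf(stderr, "%s:%u: ", File, Line);
      std::fprintf(stderr, "%s\n", Msg ? Msg : "UNREACHABLE executed");
      std::abort();
    }

    #ifndef NDEBUG
    // Debug builds: forward the message plus the call site's location.
    #define MY_UNREACHABLE(msg) unreachable_impl(msg, __FILE__, __LINE__)
    #else
    // Release builds: drop both and print only the generic message.
    #define MY_UNREACHABLE(msg) unreachable_impl(nullptr, nullptr, 0)
    #endif

    int main() {
      MY_UNREACHABLE("this path should never be taken");
    }
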
Bob Wilson
e0478ff8e5 Fix comment typos.
llvm-svn: 75479
2009-07-13 18:11:36 +00:00
Torok Edwin
ae8a3ff177 assert(0) -> LLVM_UNREACHABLE.
Make llvm_unreachable take an optional string, thus moving the cerr<< out of
line.
LLVM_UNREACHABLE is now a simple wrapper that makes the message go away for
NDEBUG builds.

llvm-svn: 75379
2009-07-11 20:10:48 +00:00
Owen Anderson
8970999512 Thread LLVMContext through MVT and related parts of SDISel.
llvm-svn: 75153
2009-07-09 17:57:24 +00:00
David Goodwin
49fbd8d6b7 Use common code for both ARM and Thumb-2 instruction and register info.
llvm-svn: 75067
2009-07-08 23:10:31 +00:00
Torok Edwin
ad3be984b7 Start converting to new error handling API.
cerr+abort -> llvm_report_error
assert(0)+abort -> LLVM_UNREACHABLE (assert(0)+llvm_unreachable-> abort() included)

llvm-svn: 75018
2009-07-08 18:01:40 +00:00
Nick Lewycky
d46a7b2d22 Remove the vicmp and vfcmp instructions. Because we never had a release with
these instructions, no autoupgrade or backwards compatibility support is
provided.

llvm-svn: 74991
2009-07-08 03:04:38 +00:00
Evan Cheng
46b98516f6 Add some more Thumb2 multiplication instructions.
llvm-svn: 74889
2009-07-07 01:17:28 +00:00
Tilmann Scheller
cea3c16aa5 Add NumFixedArgs attribute to CallSDNode which indicates the number of fixed arguments in a vararg call.
With the SVR4 ABI on PowerPC, vector arguments for vararg calls are passed differently depending on whether they are a fixed or a variable argument. Variable vector arguments always go into memory, while fixed vector arguments are put
into vector registers. If there are no free vector registers available, fixed vector arguments are put on the stack.

The NumFixedArgs attribute makes it possible to decide, for each argument of a vararg call, whether it belongs to the fixed or the variable portion of the parameter list.

llvm-svn: 74764
2009-07-03 06:44:53 +00:00
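
As a rough illustration of how a lowering might consult that attribute, here is a self-contained sketch with made-up names (not the actual LLVM code):

    #include <cstdio>

    // Hypothetical helper mirroring the rule described above for the PPC SVR4
    // ABI; the names are invented and this is not LLVM's real lowering code.
    // A vector argument of a vararg call is placed in a vector register only
    // if it is one of the first NumFixedArgs arguments and a register is
    // still free; otherwise it goes to memory.
    enum class ArgLocation { VectorRegister, Memory };

    ArgLocation placeVectorArg(unsigned ArgIndex, unsigned NumFixedArgs,
                               unsigned &FreeVectorRegs) {
      bool IsFixed = ArgIndex < NumFixedArgs;
      if (IsFixed && FreeVectorRegs > 0) {
        --FreeVectorRegs;
        return ArgLocation::VectorRegister;
      }
      return ArgLocation::Memory;   // variable args and register overflow
    }

    int main() {
      unsigned FreeVectorRegs = 1;
      // A call with two fixed arguments followed by varargs.
      for (unsigned i = 0; i < 3; ++i)
        std::printf("arg %u -> %s\n", i,
                    placeVectorArg(i, 2, FreeVectorRegs) ==
                            ArgLocation::VectorRegister
                        ? "vector register"
                        : "memory");
      return 0;
    }
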
Evan Cheng
f20e4fba49 Add thumb2 sign / zero extend with rotate instructions.
llvm-svn: 74755
2009-07-03 01:43:10 +00:00
Evan Cheng
dad6a41d14 Thumb2 pre/post indexed loads.
llvm-svn: 74696
2009-07-02 07:28:31 +00:00
Evan Cheng
7249bab8a5 80 col violation.
llvm-svn: 74693
2009-07-02 06:44:30 +00:00
Bill Wendling
fdd5badace Update comments to make it clear that the function alignment is the Log2 of the
alignment in bytes, not the byte count itself.

llvm-svn: 74624
2009-07-01 18:50:55 +00:00
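
Concretely, an alignment value of N in this scheme means 2^N bytes; a minimal self-contained example (not code from the patch):

    #include <cassert>
    #include <cstdint>

    // The stored value is log2 of the alignment in bytes, so the byte
    // alignment is recovered with a shift.
    uint64_t alignmentInBytes(unsigned LogAlign) {
      return uint64_t(1) << LogAlign;
    }

    int main() {
      assert(alignmentInBytes(0) == 1);   // log2 = 0  ->  1-byte alignment
      assert(alignmentInBytes(4) == 16);  // log2 = 4  -> 16-byte alignment
      return 0;
    }
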
Bill Wendling
c0fb316bd3 Add an "alignment" field to the MachineFunction object. It makes more sense to
have the alignment be calculated up front, and have the back-ends obey whatever
alignment is decided upon.

This enables future work such as precise no-op placement and the like.

llvm-svn: 74564
2009-06-30 22:38:32 +00:00
David Goodwin
9e1280adf3 Rename ARMcmpNZ to ARMcmpZ and use it to represent comparisons that set only the Z flag (i.e. eq and ne). Make ARMcmpZ commutative.
llvm-svn: 74423
2009-06-29 15:33:01 +00:00
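
A small standalone example (unrelated to the ARMcmpZ code itself) of why only the equality-style comparisons can be treated as commutative:

    #include <cassert>

    int main() {
      int a = 3, b = 7;
      // eq/ne depend only on whether the operands are equal, so swapping
      // the operands never changes the result:
      assert((a == b) == (b == a));
      // An ordered predicate such as "less than" becomes a different
      // comparison once the operands are swapped (3 < 7, but not 7 < 3):
      assert((a < b) && !(b < a));
      return 0;
    }
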
David Goodwin
921faa64cd Thumb-2 has CLZ.
llvm-svn: 74322
2009-06-26 20:47:43 +00:00
Bob Wilson
6db76aaf10 Add support for ARM's Advanced SIMD (NEON) instruction set.
This is still a work in progress but most of the NEON instruction set
is supported.

llvm-svn: 73919
2009-06-22 23:27:02 +00:00
Evan Cheng
706927c96a Add comments.
llvm-svn: 73761
2009-06-19 07:06:07 +00:00
Evan Cheng
8f613095de Should be using Bcc (average) latency to determine if-conversion threshold, not BL.
llvm-svn: 73759
2009-06-19 06:56:26 +00:00
Evan Cheng
f671ce4eba Latency information for ARM v6. It's rough and not yet hooked up. Right now we are only using branch latency to determine if-conversion limits.
llvm-svn: 73747
2009-06-19 01:51:50 +00:00
Evan Cheng
7426d278ae Remove UseThumbBacktraces. Just check whether the subtarget is Darwin.
llvm-svn: 73734
2009-06-18 23:14:30 +00:00
Anton Korobeynikov
cc8d0058e2 Address review comments: add 3 ARM calling conventions.
Dispatch the C calling convention to one of these based on the
target triple and subtarget features.

llvm-svn: 73530
2009-06-16 18:50:49 +00:00
Anton Korobeynikov
d08df21f36 The attached patches implement most of the ARM AAPCS-VFP hard float
ABI. The missing piece is support for putting "homogeneous aggregates"
into registers.

Patch by Sandeep Patel!

llvm-svn: 73095
2009-06-08 22:53:56 +00:00
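
For reference, a hedged sketch of what a "homogeneous aggregate" looks like at the source level (the type names are examples, not from the patch):

    // Illustrative type definitions, not taken from the patch: under the
    // AAPCS VFP hard-float ABI, a "homogeneous aggregate" has one to four
    // members that all share the same floating-point type, which makes it
    // eligible to be passed in VFP registers once that support is added.
    struct HomogeneousPair { double x, y; };       // all doubles: eligible
    struct HomogeneousQuad { float a, b, c, d; };  // all floats: eligible
    struct Mixed           { float a; int b; };    // mixed types: not eligible

    int main() {
      return sizeof(HomogeneousQuad) == 4 * sizeof(float) ? 0 : 1;
    }
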
Bob Wilson
372984e7a2 Only 64-bit targets support TImode libcalls. Disable the TImode shift libcalls
for ARM.  This fixes rdar://6908807.

llvm-svn: 72269
2009-05-22 17:38:41 +00:00
Bob Wilson
51e9d81e72 Fix pr4202: Disable CodePlacementOpt for ARM. The ARMConstantIslandPass has
to run last because it needs to know the exact size and position of every
basic block.  Currently CodePlacementOpt is set up to run last.  It might be
worthwhile to investigate reordering these passes, but for now, let's just
make it work.

llvm-svn: 72037
2009-05-18 20:55:32 +00:00
Jim Grosbach
bed3aeff20 Update the names of the exception handling SjLJ intrinsics to
llvm.eh.sjlj.* for better clarity as to their purpose and scope. Add
a description of llvm.eh.sjlj.setjmp to ExceptionHandling.html.
(llvm.eh.sjlj.longjmp documentation coming when that implementation is
added).

llvm-svn: 71758
2009-05-14 00:46:35 +00:00
Evan Cheng
9bd08f0cde Run code placement optimization for targets that want it (arm and x86 for now).
llvm-svn: 71726
2009-05-13 21:42:09 +00:00
Jim Grosbach
4bb5e9d1df Add support for GCC compatible builtin setjmp and longjmp intrinsics. This is
a supporting preliminary patch for GCC-compatible SjLJ exception handling. Note that these intrinsics are not designed to be invoked directly by the user, but
rather used by the front-end as target hooks for exception handling.

llvm-svn: 71610
2009-05-12 23:59:14 +00:00
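
The builtins in question are the GCC-style __builtin_setjmp / __builtin_longjmp pair. A minimal usage sketch follows, assuming Clang's void** parameter form and a five-word jump buffer; as the commit notes, these are really intended as front-end hooks rather than user-level calls.

    #include <cstdio>

    // Jump buffer for the GCC-compatible builtins: five pointer-sized words.
    static void *JumpBuf[5];

    static void bailOut() {
      // The value passed to __builtin_longjmp must be 1.
      __builtin_longjmp(JumpBuf, 1);
    }

    int main() {
      if (__builtin_setjmp(JumpBuf) == 0) {
        std::puts("direct return from __builtin_setjmp");
        bailOut();                      // jumps back to the setjmp point
      } else {
        std::puts("returned via __builtin_longjmp");
      }
      return 0;
    }
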
Bob Wilson
162870a4b3 Change LowerCallResult method so that CCValAssign::BCvt can be used with
f64 types.  This is not used for anything yet.

llvm-svn: 70006
2009-04-25 00:33:20 +00:00
Bob Wilson
0ac8a0cf95 Adjust a comment to reflect what the code does. Splitting a 64-bit argument
between registers and the stack may be required with the APCS ABI, but it
isn't tied to using a particular version of the ARM architecture.

llvm-svn: 69978
2009-04-24 17:05:01 +00:00
Bob Wilson
2a47f01759 Fix up some problems with getCopyToReg and getCopyFromReg nodes being
chained and "flagged" together.  I also made a few changes to handle the
chain and flag values more consistently.  I found these problems by
inspection so I'm not aware of anything that breaks because of them
(thus no testcase).

llvm-svn: 69977
2009-04-24 17:00:36 +00:00
Bob Wilson
f7e9ff1d28 Move duplicated AddLiveIn function from X86 and ARM backends to be a method
in the MachineFunction class, renaming it to addLiveIn for consistency with
the same method in MachineBasicBlock.  Thanks to Anton for suggesting this.

llvm-svn: 69615
2009-04-20 18:36:57 +00:00
Bob Wilson
312344e025 Move the AddLiveIn function definition closer to its uses.
llvm-svn: 69382
2009-04-17 20:42:34 +00:00
Bob Wilson
b2ccd16655 Rearrange code to reduce indentation.
llvm-svn: 69381
2009-04-17 20:40:45 +00:00
Bob Wilson
911e92c7a3 Clean up formatting, remove trailing whitespace, fix comment typos and
punctuation.  No functional changes.

llvm-svn: 69378
2009-04-17 20:35:10 +00:00
Bob Wilson
b8756b00cd Use CallConvLower.h and TableGen descriptions of the calling conventions
for ARM.  Patch by Sandeep Patel.

llvm-svn: 69371
2009-04-17 19:07:39 +00:00
Bob Wilson
a4abfb962e Fix PR3795: Apply Dan's suggested fix for
ARMTargetLowering::isLegalAddressingMode.

llvm-svn: 68619
2009-04-08 17:55:28 +00:00