mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-26 22:42:46 +02:00
Commit Graph

105 Commits

Author SHA1 Message Date
Dan Gohman
b38d9b6a57 Fix a grammaro and clarify a comment.
llvm-svn: 72668
2009-05-31 17:52:18 +00:00
Evan Cheng
2d198e1bc2 (i64 (zext (srl GR32 8))) -> movzbl AH is not safe since srl 8 only clears the top 8 bits.
llvm-svn: 72618
2009-05-30 08:43:27 +00:00
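
A minimal C++ sketch of why the fold above is wrong (an illustration only, not backend code): srl 8 leaves 24 significant bits, while zero-extending AH keeps only 8.

    #include <cassert>
    #include <cstdint>

    int main() {
      uint32_t x = 0x00ABCDEF;
      // What (i64 (zext (srl x, 8))) computes: all 24 surviving bits.
      uint64_t srlThenZext = static_cast<uint64_t>(x >> 8);   // 0xABCD
      // What movzbl %ah computes: only bits 15:8 of x.
      uint64_t movzblOfAH  = (x >> 8) & 0xFF;                 // 0xCD
      assert(srlThenZext != movzblOfAH); // differs whenever x >= 0x10000
    }
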
Evan Cheng
550fc9ba9f More h-registers tricks: folding zext nodes.
llvm-svn: 72558
2009-05-29 01:44:43 +00:00
Chris Lattner
5cc9a36d1c Add basic support for code generation of
addrspace(257) -> FS-relative addressing on x86.  Patch by Zoltan Varga!

llvm-svn: 70992
2009-05-05 18:52:19 +00:00
Dan Gohman
180fa04e35 Rename GR8_, GR16_, GR32_, and GR64_ to GR8_ABCD, GR16_ABCD,
GR32_ABCD, and GR64_ABCD, respectively, to help describe them.

llvm-svn: 70210
2009-04-27 16:33:14 +00:00
Dan Gohman
885b9c3688 Break up long multi-mnemonic strings into separate lines for readability.
llvm-svn: 70209
2009-04-27 15:13:28 +00:00
Rafael Espindola
4e7a0bf1f1 Fix PR4004 by including the call to __tls_get_addr in X86tlsaddr. This is not
very elegant, but neither is the TLS specification :-(

llvm-svn: 69968
2009-04-24 12:59:40 +00:00
Rafael Espindola
5adc7ad39e TLS_addr64 and TLS_addr32 define RDI and EAX. They don't use them.
This fixes PR4002.

llvm-svn: 69672
2009-04-21 08:22:09 +00:00
Rafael Espindola
d74132e2c5 For general dynamic TLS access we must use
leaq	foo@TLSGD(%rip), %rdi

as part of the instruction sequence. Using a register other than %rdi and then
copying it to %rdi is not valid.

llvm-svn: 69350
2009-04-17 14:35:58 +00:00
Dan Gohman
8393d29bc8 Rename COPY_TO_SUBCLASS to COPY_TO_REGCLASS, and generalize
it accordingly. Thanks to Jakob Stoklund Olesen for pointing
out how this might be useful.

llvm-svn: 68986
2009-04-13 21:06:25 +00:00
Dan Gohman
be7227005f Implement x86 h-register extract support.
- Add patterns for h-register extract, which avoids a shift and mask,
  and in some cases a temporary register.
- Add address-mode matching for turning (X>>(8-n))&(255<<n), where
  n is a valid address-mode scale value, into an h-register extract
  and a scaled-offset address.
- Replace X86's MOV32to32_ and related instructions with the new
  target-independent COPY_TO_SUBREG instruction.

On x86-64 there are complicated constraints on h registers, and
CodeGen doesn't currently provide a high-level way to express all of them,
so they are handled with a bunch of special code. This code currently only
supports extracts where the result is used by a zero-extend or a store,
though these are fairly common.

These transformations are not always beneficial; since there are only
4 h registers, they sometimes require extra move instructions, and
this sometimes increases register pressure because it can force out
values that would otherwise be in one of those registers. However,
this appears to be relatively uncommon.

llvm-svn: 68962
2009-04-13 16:09:41 +00:00
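
The identity behind the address-mode match above, checked in a small C++ sketch (an illustration of the arithmetic, not the actual isel code):

    #include <cassert>
    #include <cstdint>

    int main() {
      // For scale values n in {0,1,2,3}, masking after the short shift equals
      // extracting the AH byte and scaling it: an h-register extract plus a
      // scaled-offset address.
      const uint32_t xs[] = {0u, 0x1234u, 0xDEADBEEFu, 0xFFFFFFFFu};
      for (uint32_t n = 0; n <= 3; ++n)
        for (uint32_t x : xs) {
          uint32_t matched  = (x >> (8 - n)) & (255u << n);
          uint32_t hExtract = ((x >> 8) & 255u) << n;
          assert(matched == hExtract);
        }
    }
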
Dan Gohman
a68d99c707 Add a comment about MOVSX64rr8.
llvm-svn: 68950
2009-04-13 15:13:28 +00:00
Rafael Espindola
7eb72dc5f2 Re-apply 68552.
Tested by bootstrapping llvm-gcc and using that to build llvm.

llvm-svn: 68645
2009-04-08 21:14:34 +00:00
Dan Gohman
c9ce27d6b7 Implement support for modeling implicit zero-extension on x86-64
with SUBREG_TO_REG, teach SimpleRegisterCoalescing to coalesce
SUBREG_TO_REG instructions (which are similar to INSERT_SUBREG
instructions), and teach the DAGCombiner to take advantage of this on
targets which support it. This eliminates many redundant
zero-extension operations on x86-64.

This adds a new TargetLowering hook, isZExtFree. It's similar to
isTruncateFree, except it only applies to actual definitions, and not
no-op truncates which may not zero the high bits.

Also, this adds a new optimization to SimplifyDemandedBits: transform
operations like x+y into (zext (add (trunc x), (trunc y))) on targets
where all the casts are no-ops. In contexts where the high part of the
add is explicitly masked off, this allows the mask operation to be
eliminated. Fix the DAGCombiner to avoid undoing these transformations
to eliminate casts on targets where the casts are no-ops.

Also, this adds a new two-address lowering heuristic. Since
two-address lowering runs before coalescing, it helps to be able to
look through copies when deciding whether commuting and/or
three-address conversion are profitable.

Also, fix a bug in LiveInterval::MergeInClobberRanges. It didn't handle
the case that a clobber range extended both before and beyond an
existing live range. In that case, multiple live ranges need to be
added. This was exposed by the new subreg coalescing code.

Remove 2008-05-06-SpillerBug.ll. It was bugpoint-reduced, and the
spiller behavior it was looking for no longer occurs with the new
instruction selection.

llvm-svn: 68576
2009-04-08 00:15:30 +00:00
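
A C++ model of the SimplifyDemandedBits transform described above (a sketch; the real transform operates on SelectionDAG nodes): when only the low 32 bits of a 64-bit add are demanded, the add can be narrowed, and on x86-64 the 32-bit operation zero-extends for free.

    #include <cassert>
    #include <cstdint>

    int main() {
      uint64_t x = 0x123456789ABCDEF0ULL, y = 0x0FEDCBA987654321ULL;
      // Original form: a 64-bit add whose high half is explicitly masked off.
      uint64_t wide = (x + y) & 0xFFFFFFFFULL;
      // Transformed form: (zext (add (trunc x), (trunc y))) -- on x86-64 the
      // trunc and zext are no-ops, so the mask disappears entirely.
      uint64_t narrow = static_cast<uint64_t>(
          static_cast<uint32_t>(x) + static_cast<uint32_t>(y));
      assert(wide == narrow);
    }
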
Bill Wendling
6e702cf68c Temporarily revert r68552. This was causing a failure in the self-hosting LLVM
builds.

--- Reverse-merging (from foreign repository) r68552 into '.':
U    test/CodeGen/X86/tls8.ll
U    test/CodeGen/X86/tls10.ll
U    test/CodeGen/X86/tls2.ll
U    test/CodeGen/X86/tls6.ll
U    lib/Target/X86/X86Instr64bit.td
U    lib/Target/X86/X86InstrSSE.td
U    lib/Target/X86/X86InstrInfo.td
U    lib/Target/X86/X86RegisterInfo.cpp
U    lib/Target/X86/X86ISelLowering.cpp
U    lib/Target/X86/X86CodeEmitter.cpp
U    lib/Target/X86/X86FastISel.cpp
U    lib/Target/X86/X86InstrInfo.h
U    lib/Target/X86/X86ISelDAGToDAG.cpp
U    lib/Target/X86/AsmPrinter/X86ATTAsmPrinter.cpp
U    lib/Target/X86/AsmPrinter/X86IntelAsmPrinter.cpp
U    lib/Target/X86/AsmPrinter/X86ATTAsmPrinter.h
U    lib/Target/X86/AsmPrinter/X86IntelAsmPrinter.h
U    lib/Target/X86/X86ISelLowering.h
U    lib/Target/X86/X86InstrInfo.cpp
U    lib/Target/X86/X86InstrBuilder.h
U    lib/Target/X86/X86RegisterInfo.td

llvm-svn: 68560
2009-04-07 22:35:25 +00:00
Rafael Espindola
0324937229 Reduce code duplication in the TLS implementation.
This introduces a small regression in generated code
quality in the case where we are just computing addresses, not
loading values.

Will work on it and on X86-64 support.

llvm-svn: 68552
2009-04-07 21:37:46 +00:00
Evan Cheng
3e30bcbd69 When optimizing a mul by immediate into two, the resulting muls should get an x86-specific node to keep the DAG combiner from hacking on them further.
llvm-svn: 68066
2009-03-30 21:36:47 +00:00
Chris Lattner
205380a4e4 Disable the "call to immediate" optimization on x86-64. It is
not safe in general because the immediate could be an arbitrary
value that does not fit in a 32-bit pcrel displacement.  
Conservatively fall back to loading the value into a register
and calling through it.

We still do the optimization on X86-32.

llvm-svn: 67142
2009-03-18 00:43:52 +00:00
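
A rough model of the range constraint behind this revert (an assumption for illustration; fitsInPCRel32 is a hypothetical helper, not the check LLVM performs): the rel32 field of a direct call is measured from the end of the 5-byte instruction and must fit in a signed 32 bits.

    #include <cstdint>

    // Hypothetical helper, for illustration only.
    bool fitsInPCRel32(uint64_t callSite, uint64_t target) {
      // The displacement is relative to the next instruction
      // (a direct call is 5 bytes long).
      int64_t disp = static_cast<int64_t>(target - (callSite + 5));
      return disp >= INT32_MIN && disp <= INT32_MAX;
    }
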
Evan Cheng
d112c41d95 Re-apply 66024 with fixes: 1. Fixed the assembly output for indirect calls to immediate addresses. 2. Fixed the JIT encoding by making the address pc-relative.
llvm-svn: 66803
2009-03-12 18:15:39 +00:00
Dan Gohman
d30e108f0e Revert r66024. The JIT encoding for CALLpcrel32 is wrong -- see PR3773, and the
assembly text output uses an indirect call ("call *") instead of a direct call.

llvm-svn: 66735
2009-03-11 23:01:47 +00:00
Dan Gohman
f9599e6c5f Don't use plain INC32 and DEC32 on x86-64; use INC64_32r and
INC64_16r instead, because these instructions are encoded
differently on x86-64. This fixes JIT regressions on x86-64 in
kimwitu++ and others.

llvm-svn: 66207
2009-03-05 21:32:23 +00:00
Dan Gohman
31fb085c2e Re-apply 66008, now that the unfoldMemoryOperand bug is fixed.
llvm-svn: 66058
2009-03-04 19:44:21 +00:00
Evan Cheng
7d9019d0f3 Fix PR3666: isel calls to constant addresses.
llvm-svn: 66024
2009-03-04 06:48:53 +00:00
Dan Gohman
6831e2c2a6 Revert r66004 for now; it's causing a variety of test failures.
llvm-svn: 66008
2009-03-04 03:54:19 +00:00
Dan Gohman
c6c669cc1e Teach the x86 backend to eliminate "test" instructions by using the EFLAGS
result from add, sub, inc, and dec instructions in simple cases.

llvm-svn: 66004
2009-03-04 02:33:24 +00:00
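
A tiny C++ example of the kind of code this helps (the codegen noted in the comments is the expected result under the SysV calling convention, not a guarantee):

    // The subtraction already sets ZF, so the zero compare can reuse the
    // EFLAGS result instead of emitting a separate "testl" instruction.
    bool diffIsZero(int a, int b) {
      int d = a - b;  // subl %esi, %edi   (sets ZF when the result is zero)
      return d == 0;  // sete %al          (no testl %edi, %edi needed)
    }
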
Dan Gohman
3c6c7754b2 Add '(implicit EFLAGS)' for AND, OR, XOR, NEG, INC, and DEC
instructions. These aren't used yet.

llvm-svn: 65965
2009-03-03 19:53:46 +00:00
Evan Cheng
a1b9cf3143 Fix 80-column violations.
llvm-svn: 64237
2009-02-10 21:39:44 +00:00
Evan Cheng
87def37f67 A few more isAsCheapAsAMove.
llvm-svn: 63852
2009-02-05 08:42:55 +00:00
Nate Begeman
92efc4f0ce Map address space 256 to gs; similar mappings could be supported for the
other x86 segments.  Address space 0 is stack/default; 1-255 are reserved for
client use.

llvm-svn: 62980
2009-01-26 01:24:32 +00:00
Evan Cheng
43d680b0d8 Also favors NOT64r.
llvm-svn: 62710
2009-01-21 19:45:31 +00:00
Dan Gohman
8c835f6285 Disable the register+memory forms of the bt instructions for now. Thanks
to Eli for pointing out that these forms don't ignore the high bits of
their index operands, and as such are not immediately suitable for use
by isel.

llvm-svn: 62194
2009-01-13 23:23:30 +00:00
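
What makes the folded forms tricky, sketched in C++ (an illustration of the instruction's semantics, not the backend change): with a memory operand, bt treats memory as a long bit string, so a large register index reads bytes beyond the single i32 the pattern thought it was testing.

    #include <cstdint>

    // bt mem, reg addresses bit n of a bit string starting at 'bits'; it
    // does not reduce n modulo 32, so isel can't model it as one i32 load.
    bool testBit(const uint32_t *bits, uint32_t n) {
      return (bits[n / 32] >> (n % 32)) & 1;  // may touch bits[1], bits[2], ...
    }
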
Dan Gohman
15e69a394a Add bt instructions that take immediate operands.
llvm-svn: 62180
2009-01-13 20:33:23 +00:00
Dan Gohman
ca4475dd7b Add patterns to match conditional moves with loads folded
into their left operand, rather than their right. Do this
by commuting the operands and inverting the condition.

llvm-svn: 61842
2009-01-07 01:00:24 +00:00
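
The rewrite in miniature (a C++ sketch of the equivalence, not SelectionDAG code): commuting the operands and inverting the condition moves the load to the side where CMOV can fold it.

    int selectA(bool c, const int *p, int x) {
      return c ? *p : x;   // load in the left operand of the select
    }

    int selectB(bool c, const int *p, int x) {
      return !c ? x : *p;  // same result: condition inverted, operands swapped
    }
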
Dan Gohman
e78fdaec67 Define instructions for cmovo and cmovno.
llvm-svn: 61836
2009-01-07 00:35:10 +00:00
Chris Lattner
062ed6e3dd Fix some JIT encodings.
llvm-svn: 61425
2008-12-25 01:32:49 +00:00
Chris Lattner
f34b843728 BT memory operands load from their address operand.
llvm-svn: 61424
2008-12-25 01:27:10 +00:00
Dan Gohman
1ba93ac6be Add instruction patterns and encodings for the x86 bt instructions.
llvm-svn: 61400
2008-12-23 22:45:23 +00:00
Dan Gohman
22b7b328a4 Move the patterns which have i8 immediates before the patterns
that have i32 immediates so that they get selected first. This
currently only matters in the JIT, as assemblers will
automatically use the smallest encoding.

llvm-svn: 61250
2008-12-19 18:25:21 +00:00
Bill Wendling
13e4a3d0b0 - Use the existing patterns instead of creating completely new
instruction-matching patterns that are identical to the originals.

- Change the multiply with overflow so that we distinguish between signed and
  unsigned multiplication. Currently, unsigned multiplication with overflow
  isn't working!

llvm-svn: 60963
2008-12-12 21:15:41 +00:00
Bill Wendling
5d026e47c1 Redo the arithmetic-with-overflow architecture. I was changing the semantics of
ISD::ADD to emit an implicit EFLAGS. This was horribly broken. Instead, replace
the intrinsic with an ISD::SADDO node. Then custom lower that into an
X86ISD::ADD node with an associated SETCC that checks the correct condition code
(overflow or carry). Then that gets lowered into the correct X86::ADDOvf
instruction.

Similarly for the SUB and MUL instructions.

llvm-svn: 60915
2008-12-12 00:56:36 +00:00
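
How this surfaces from source code, as a sketch (assumes modern Clang, where __builtin_add_overflow lowers to llvm.sadd.with.overflow; the node names above are the backend side of that path):

    #include <cstdio>

    int main() {
      int sum;
      // The builtin becomes llvm.sadd.with.overflow, which the backend
      // lowers to an add plus a seto on the overflow flag.
      if (__builtin_add_overflow(0x7FFFFFFF, 1, &sum))
        std::puts("signed overflow detected");
      return 0;
    }
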
Bill Wendling
4c8fb3a0cc Add sub/mul overflow intrinsics. LLVM currently doesn't have a
target-independent way of determining overflow on multiplication. It's very
tricky. Patch by Zoltan Varga!

llvm-svn: 60800
2008-12-09 22:08:41 +00:00
Nick Lewycky
e277f75880 Fix typo, psuedo -> pseudo.
llvm-svn: 60651
2008-12-07 03:49:52 +00:00
Dan Gohman
5dad0993a9 Rename isSimpleLoad to canFoldAsLoad, to better reflect its meaning.
llvm-svn: 60487
2008-12-03 18:15:48 +00:00
Bill Wendling
039240b301 Reapply r60382. This time, don't mark "ADC" nodes with "implicit EFLAGS".
llvm-svn: 60385
2008-12-02 00:07:05 +00:00
Bill Wendling
16840cba04 Temporarily revert r60382. It caused CodeGen/X86/i2k.ll and others to fail.
llvm-svn: 60383
2008-12-01 23:44:08 +00:00
Bill Wendling
628848b540 - Have "ADD" instructions return an implicit EFLAGS.
- Add support for seto, setno, setc, and setnc instructions.

llvm-svn: 60382
2008-12-01 23:30:42 +00:00
Dan Gohman
e5420a0ae9 Don't set neverHasSideEffects on x86's divide instructions, since
they trap on divide-by-zero, and this side effect is otherwise
unmodeled.

llvm-svn: 59551
2008-11-18 21:29:14 +00:00
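
A small example of why the unmodeled side effect matters (an illustration; the guard pattern is assumed, not taken from the commit):

    // If idiv were marked side-effect free, nothing would stop the scheduler
    // from hoisting it above the guard -- and a divide-by-zero trap would
    // escape the check.
    int safeDiv(int a, int b) {
      if (b == 0)
        return 0;   // the division below must stay behind this test
      return a / b; // idiv traps on b == 0 (and on INT_MIN / -1)
    }
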
Nate Begeman
e621f0539e Fix PEXTRQ encoding
llvm-svn: 58403
2008-10-29 23:07:17 +00:00
Dan Gohman
268cfea6bc Fun x86 encoding tricks: when adding an immediate value of 128,
use a SUB instruction instead of an ADD, because -128 can be
encoded in an 8-bit signed immediate field, while +128 can't be.
This avoids the need for a 32-bit immediate field in this case.

A similar optimization applies to 64-bit adds with 0x80000000,
with the 32-bit signed immediate field.

To support this, teach tablegen how to handle 64-bit constants.

llvm-svn: 57663
2008-10-17 01:33:43 +00:00
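
The two's-complement identity behind the trick, as a checkable C++ sketch:

    #include <cassert>
    #include <cstdint>

    int main() {
      int32_t x = 1000;
      // The imm8 field holds a sign-extended byte in [-128, 127]: -128 fits,
      // +128 does not, so "addl $128" becomes "subl $-128" and saves 3 bytes.
      assert(x + 128 == x - (-128));
      // The same idea applies to 64-bit adds of 0x80000000 with the
      // sign-extended imm32 field.
      int64_t y = 12345;
      assert(y + 0x80000000LL == y - (-0x80000000LL));
    }
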
Dan Gohman
5d83bd89a5 Define patterns for shld and shrd that match immediate
shift counts, and patterns that match dynamic shift counts
when the subtract is obscured by a truncate node.

Add DAGCombiner support for recognizing rotate patterns
when the shift counts are defined by truncate nodes.

Fix and simplify the code for commuting shld and shrd
instructions to work even when the given instruction doesn't
have a parent, and when the caller needs a new instruction.

These changes allow LLVM to use the shld, shrd, rol, and ror
instructions on x86 to replace equivalent code using two
shifts and an or in many more cases.

llvm-svn: 57662
2008-10-17 01:23:35 +00:00
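
The rotate identity the combiner now recognizes, as a C++ sketch (shift counts restricted to 1..31 to avoid undefined behavior):

    #include <cassert>
    #include <cstdint>

    // (x << n) | (x >> (32 - n)) is a 32-bit left rotate; on x86 the whole
    // expression can be selected as a single 'rol' instead of two shifts
    // and an or.
    uint32_t rotl32(uint32_t x, uint32_t n) {
      return (x << n) | (x >> (32 - n)); // caller guarantees 1 <= n <= 31
    }

    int main() {
      assert(rotl32(0x80000001u, 1) == 0x00000003u);
    }
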